# Unicorn Companies Data Analysis
```
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
unicorn_master = pd.read_csv("Unicorn_Companies.csv")
unicorn_master.head(3)
print(f"Shape of the dataset is {unicorn_master.shape}")
print(f"The dataset has the following datatypes for the corresponding columns\n {unicorn_master.dtypes}")
unicorn_master.describe()
# Converting Valuation
unicorn_master["Valuation ($B)"] = unicorn_master["Valuation ($B)"].replace({r"\$": ""}, regex=True)
unicorn_master["Valuation ($B)"] = unicorn_master["Valuation ($B)"].astype(float)
# Basic Overview of Data before data cleaning.
# Doing it here because I added a nonetype later
fig = px.treemap(unicorn_master,path= ["Country","Industry", "Company"],
values="Valuation ($B)", color_discrete_sequence=px.colors.qualitative.Pastel)
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25))
# Converting Total Raised
# new column to separate billions, mil and thousands.
unicorn_master["Total Raised Unit"] = unicorn_master["Total Raised"].str[-1]
unicorn_master["Total Raised"] = unicorn_master["Total Raised"].replace({r"\$": "", "B$": "", "M$": "",
                                                                         "None": np.nan, "K$": ""}, regex=True)
unicorn_master["Total Raised"] = unicorn_master["Total Raised"].astype(float)
# used a loop here; a vectorized map over the unit column would also work
for idx, row in unicorn_master.iterrows():
    if row["Total Raised Unit"] == "B":
        unicorn_master.loc[idx, "Total Raised"] = row["Total Raised"] * 1000000000
    elif row["Total Raised Unit"] == "M":
        unicorn_master.loc[idx, "Total Raised"] = row["Total Raised"] * 1000000
    elif row["Total Raised Unit"] == "K":
        unicorn_master.loc[idx, "Total Raised"] = row["Total Raised"] * 1000
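# A vectorized alternative to the loop above (an editor's sketch, shown on a
# tiny demo frame so it does not re-scale the real column a second time):
_unit_factor = {"B": 1e9, "M": 1e6, "K": 1e3}
_demo = pd.DataFrame({"Total Raised": [1.5, 200.0], "Total Raised Unit": ["B", "M"]})
_demo["Total Raised"] = _demo["Total Raised"] * _demo["Total Raised Unit"].map(_unit_factor)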
# remove added column, add total raised column
# divide by 1 bil to match it to valuation column
unicorn_master = unicorn_master.drop("Total Raised Unit", axis=1)
unicorn_master["Total Raised"] = unicorn_master["Total Raised"].values/1000000000
unicorn_master.head()
# Convert Dates for "Joining" and "Date Founded"
unicorn_master["Date Joined"] = pd.to_datetime(unicorn_master["Date Joined"])
unicorn_master.loc[unicorn_master["Founded Year"] == "None", "Founded Year"] = None
unicorn_master["Founded Year"] = pd.to_datetime(unicorn_master["Founded Year"])
# Converting Number of Investors from Object to float
unicorn_master['Investors Count'] = unicorn_master['Investors Count'].replace({'None': '0'}, regex=True)
unicorn_master['Investors Count'] = unicorn_master['Investors Count'].astype(float)
# Duplicates and NAN values check
print(unicorn_master.isna().values.any())
print(unicorn_master.duplicated().values.any())
# couldn't find duplicates so didn't drop them yet.
# Financial Stage comparison wrt Valuation,
# Dropped "none" values here, so only companies with valid financial stage are shown.
Financial_St = unicorn_master[unicorn_master["Financial Stage"] != "None"].copy()
Financial_St["Financial Stage"] = Financial_St["Financial Stage"].replace({"Acq": "Acquired"})
Financial_St = Financial_St.dropna()
fig = px.bar(data_frame=Financial_St, x="Financial Stage", y="Valuation ($B)", color="Country", color_discrete_sequence=px.colors.qualitative.Pastel)
fig.show()
fig = px.bar(data_frame=Financial_St, x="Financial Stage", y="Valuation ($B)", color="Country", facet_col="Industry", facet_col_wrap=3,
color_discrete_sequence=px.colors.qualitative.Pastel)
fig.update_yaxes(matches=None)
fig.for_each_yaxis(lambda y: y.update(title = ''))
fig.add_annotation(x=-2,y=0.5,
text="Valuation (in $B)", textangle=-90,
xref="paper", yref="paper")
fig.show()
#biggest companies
top_10_companies = unicorn_master.sort_values("Valuation ($B)", ascending=False)[:10]
top_10_companies
px.bar(top_10_companies,x="Company", y=["Total Raised", "Valuation ($B)"],opacity = 0.5,
orientation = "v", barmode="group",color_discrete_sequence=px.colors.qualitative.Bold)
# Top 5 countries
country_unicorns = unicorn_master.groupby("Country")
top5Countries = country_unicorns['Valuation ($B)'].sum().sort_values(ascending=False)[:5]
top5Countries
px.bar(top5Countries)
industry_total = unicorn_master['Industry'].value_counts()
industry_top_5 = industry_total.head()
px.bar(industry_top_5, color_discrete_sequence=px.colors.qualitative.Pastel)
# Total Investors
fig = px.scatter(unicorn_master, x=unicorn_master["Company"][:100], y=unicorn_master["Valuation ($B)"][:100], color=unicorn_master["Investors Count"][:100],
color_continuous_scale=px.colors.cyclical.mrybm)
fig.show()
```
| github_jupyter |
# Scrape Web AIS Data to Test Model
This notebook is a scraper to get data from the web. AIS data are available to download from several websites, and some even offer API calls, but the data is quite expensive. We only need a small portion of data for the project, so we download bits and pieces and use a small scraper to obtain enough current data on a single vessel to test our model. The scraper runs locally and outputs a CSV.
The scraped data then needs to be joined with four other external datasets to add features, and preprocessed before it can be fed into our model for prediction.
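The join step described above can be sketched with pandas. The column names here (`mmsi`, `length_m`) and the external table are illustrative placeholders, not the project's actual datasets:

```python
import pandas as pd

# Scraped AIS rows, matching the CSV schema written at the end of this notebook
ais = pd.DataFrame({"mmsi": ["316042032"], "speed": [9.5],
                    "course": [171.0], "timestamp": [1600000000]})

# A hypothetical external dataset keyed on the vessel's MMSI
vessel_specs = pd.DataFrame({"mmsi": ["316042032"], "length_m": [67.0]})

# Left join keeps every scraped row and adds the extra features
features = ais.merge(vessel_specs, on="mmsi", how="left")
```

The same `merge` pattern repeats once per external dataset before preprocessing.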
```
from bs4 import BeautifulSoup
import pandas as pd
import requests
from selenium.webdriver.common.keys import Keys
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from datetime import datetime, timedelta
import time
# deprecated!! - need to change iframe
# using selenium on marinetraffic to log in and navigate to the page to view AIS data (deprecated due to a change in the iframe)
# need to use own credentials with EMAIL_USER, PASSWORD_USER
driver = webdriver.Chrome()
driver.get("https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position,notes")
try:
    element_present = EC.visibility_of_element_located((By.CSS_SELECTOR, '.css-flk0bs'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
result = driver.find_element_by_class_name('css-flk0bs')
result.click()
result = driver.find_element_by_class_name('e2e_header_sign_in_button')
result.click()
try:
    element_present = EC.visibility_of_element_located((By.ID, 'email'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
username = driver.find_element_by_id("email")
password = driver.find_element_by_id("password")
username.send_keys("EMAIL_USER")
password.send_keys("PASSWORD_USER")
driver.find_element_by_id("login_form_submit").click()
try:
    element_present = EC.visibility_of_element_located((By.ID, 'user-logggin'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
boat = driver.find_element_by_id("user-logggin")
boat.click()
try:
    element_present = EC.visibility_of_element_located((By.ID, 'nw_my_fleets'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
boat = driver.find_element_by_id("nw_my_fleets")
boat.click()
# using 2 dummies to wait until page is correctly loaded (iframe)
try:
    element_present = EC.visibility_of_element_located((By.ID, 'dummywait'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
driver.find_element_by_xpath('//a[contains(text(), "Fishing")]').click()
try:
    element_present = EC.visibility_of_element_located((By.ID, 'dummywait'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue!")
driver.find_element_by_xpath('//a[contains(text(), "SUNDEROEY")]').click()
try:
    element_present = EC.visibility_of_element_located((By.ID, 'viewVesselEventsList'))
    WebDriverWait(driver, 20).until(element_present)
    print("Page is ready!")
except TimeoutException:
    print("whatever...continue last!")
html_page = driver.find_element_by_xpath("//body").get_attribute('outerHTML')
# deprecated!!
# using vesselfinder to obtain vessel ais data (much lower resolution)
vessel_name = 'SUNDEROEY'
vessel_IMO = '9294903'
vessel_mmsi = '316042032'
session = requests.Session()
web_page = session.get("https://www.vesselfinder.com/vessels/" + vessel_name + \
"-IMO-" + vessel_IMO + "-MMSI-" + vessel_mmsi, \
headers={'User-Agent': 'Mozilla/5.0'})
data = {}
soup = BeautifulSoup(web_page.content, 'html.parser')
ship_div = soup.findAll("section", {"class":["ship-section"]})
for div in ship_div:
    ship_table = div.findAll("table", {"class": ["tparams"]})
    for table in ship_table:
        rows = table.find_all('tr')
        for row in rows:
            cols = row.find_all('td')
            cols = [ele.text.strip() for ele in cols]
            if len(cols) > 1 and cols[1] not in ['-', '']:
                data[cols[0]] = cols[1]
# parse data string from above to separate into features
course = data['Course / Speed'].split('/')[0].rstrip('° ')
speed = data['Course / Speed'].split('/')[1].rstrip(' kn').lstrip()
lon = data['Coordinates'].split('/')[1]
if 'W' in lon:
    lon = float(lon.rstrip(' W')) * -1
else:
    lon = float(lon.rstrip(' E'))
lat = data['Coordinates'].split('/')[0]
if 'S' in lat:
    lat = float(lat.rstrip(' S')) * -1
else:
    lat = float(lat.rstrip(' N'))
mmsi = data['IMO / MMSI'].split('/')[1].strip()
# note: this rebinding shadows the `time` module imported above
time = data['Position received']
if 'hours ago' in time:
    time = int(time.split(' ')[0]) * 60
elif 'mins ago' in time:
    time = int(time.split(' ')[0])
# open previous saved target ais data in csv format
try:
    with open('model_application_SUNDEROEY.csv', "r") as f1:
        last_line = f1.readlines()[-1].rstrip('\n')
    # field order matches the line written below: mmsi,speed,course,lon,lat,timestamp
    prev_speed = last_line.split(',')[1]
    prev_course = last_line.split(',')[2]
    prev_lon = last_line.split(',')[3]
    prev_lat = last_line.split(',')[4]
except Exception:
    print('file not found')
    prev_speed = '0'
    prev_course = '0'
    prev_lat = '0'
    prev_lon = '0'
# if data is updated then save new line to csv
if prev_speed == speed and prev_course == course and prev_lat == str(lat) and prev_lon == str(lon):
    print('data did not change')
else:
    d = datetime.now() - timedelta(minutes=time)
    timestamp = (d - datetime(1970, 1, 1)).total_seconds()
    line = mmsi + ',' + speed + ',' + course + ',' + str(lon) + ',' + str(lat) + ',' + str(int(timestamp))
    with open('model_application_SUNDEROEY.csv', 'a') as f:
        f.write(line)
        f.write("\n")
```
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Adaptive Kalman Filtering ($\sigma_k$ method)
***
Referring to Section 9.1.3.3, consider the problem of tracking a maneuvering target, such as a ship, automobile, aircraft, or ballistic object experiencing atmospheric drag. While an attempt could be made to model the control inputs, there would be no way to model environmental conditions such as ocean current, wind, and temperature. A simple approach to this problem would be to increase the process noise variance to account for the variance in the unknown system dynamics. While this method does give rise to a nondiverging filter, it is usually far from optimal. A maneuvering target must undergo some type of acceleration. Implementing a constant velocity model therefore results in filtered output that does not react quickly to maneuvers and takes a long time to recover. While implementing a constant acceleration model results in filtered output that responds more quickly to maneuvers, this model amplifies the noise in the residual.
***
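One plausible form of the $\sigma_k$ adaptation is sketched below. This is an assumption on my part — the exact rule lives in the book's `kalman.Kalman.filter_sigma` routine used later — but the idea is: when the normalized residual exceeds a threshold `n_sigma`, temporarily inflate the process noise covariance by `scale` so the filter reacts to the maneuver.

```python
import numpy as np

def adapt_process_noise(residual, S, Q, n_sigma=0.08, scale=1e3):
    """Inflate Q when the normalized innovation suggests a maneuver (sketch)."""
    # Normalize the innovation magnitude by the innovation covariance S
    r = np.linalg.norm(residual) / np.sqrt(np.trace(S))
    if r > n_sigma:
        return Q * scale  # maneuver suspected: loosen the process model
    return Q              # otherwise keep the nominal process noise
```

With `n_sigma = 0.08` and `scale = 1e3` as set below, the filter behaves as a nominal constant-velocity model until the residual grows, then briefly trusts the measurements much more.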
Begin by setting the library path
```
import lib_path
```
Set the start time (s), end time (s) and time step (s)
```
start = 0.0
end = 100.0
step = 0.1
```
Calculate the number of updates and create the time array with the `linspace` routine from `numpy`
```
from numpy import linspace
number_of_updates = round( (end - start) / step) + 1
t, dt = linspace(start, end, number_of_updates, retstep=True)
```
Set the adaptive parameters
```
n_sigma = 0.08
scale = 1e3
```
Set the maneuver parameters (time (s) and velocity (m/s))
```
maneuver_time = 20.0
vmx = 100.0
vmy = 20.0
vmz = 15.0
```
Set the initial position (m)
```
px = 2.0
py = 1.0
pz = 5.0
```
Set the initial velocity (m/s)
```
vx = 10.0
vy = 20.0
vz = 15.0
```
Set the measurement noise (m^2) and process variance ( m^2, (m/s)^2, (m/s/s)^2)
```
measurement_noise_variance = 10.0
process_noise_variance = 1e-6
```
Create the target trajectory
```
from numpy import zeros
x_true = zeros([6, number_of_updates])
```
Add the maneuver
```
from numpy import ones_like, zeros_like
pre_index = [n for n, e in enumerate(t) if e < maneuver_time]
post_index = [n for n, e in enumerate(t) if e >= maneuver_time]
x = px + vx * t[pre_index]
xm = x[-1] + vmx * (t[post_index] - maneuver_time)
y = py + vy * t[pre_index]
ym = y[-1] + vmy * (t[post_index] - maneuver_time)
z = pz + vz * t[pre_index]
zm = z[-1] + vmz * (t[post_index] - maneuver_time)
x_true[0] = [*x, *xm]
x_true[1] = [*(vx * ones_like(t[pre_index])), *(vmx * ones_like(t[post_index]))]
x_true[2] = [*y, *ym]
x_true[3] = [*(vy * ones_like(t[pre_index])), *(vmy * ones_like(t[post_index]))]
x_true[4] = [*z, *zm]
x_true[5] = [*(vz * ones_like(t[pre_index])), *(vmz * ones_like(t[post_index]))]
```
Set up the measurement noise
```
from numpy import sqrt, random
v = sqrt(measurement_noise_variance) * (random.rand(number_of_updates) - 0.5)
```
Initialize state and input control vector
```
x = zeros(6)
u = zeros_like(x)
```
Initialize the covariance and control matrix
```
from numpy import eye
P = 1.0e3 * eye(6)
B = zeros_like(P)
```
Initialize measurement and process noise variance
```
R = measurement_noise_variance * eye(3)
Q = process_noise_variance * eye(6)
```
State transition and measurement transition
```
A = eye(6)
A[0, 1] = dt
A[2, 3] = dt
A[4, 5] = dt
```
Measurement transition matrix
```
H = zeros([3, 6])
H[0, 0] = 1
H[1, 2] = 1
H[2, 4] = 1
```
Initialize the Kalman filter
```
from Libs.tracking import kalman
kf = kalman.Kalman(x, u, P, A, B, Q, H, R)
```
Generate the measurements
```
from numpy import matmul
z = [matmul(H, x_true[:, i]) + v[i] for i in range(number_of_updates)]
```
Update the filter for each measurement
```
kf.filter_sigma(z, n_sigma, scale)
from matplotlib import pyplot as plt
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Position - X
plt.figure()
plt.plot(t, x_true[0, :], '', label='True')
plt.plot(t, [z[0] for z in z], ':', label='Measurement')
plt.plot(t, [x[0] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - X (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Position - Y
plt.figure()
plt.plot(t, x_true[2, :], '', label='True')
plt.plot(t, [z[1] for z in z], ':', label='Measurement')
plt.plot(t, [x[2] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - Y (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Position - Z
plt.figure()
plt.plot(t, x_true[4, :], '', label='True')
plt.plot(t, [z[2] for z in z], ':', label='Measurement')
plt.plot(t, [x[4] for x in kf.state], '--', label='Filtered')
plt.ylabel('Position - Z (m)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - X
plt.figure()
plt.plot(t, x_true[1, :], '', label='True')
plt.plot(t, [x[1] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - X (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - Y
plt.figure()
plt.plot(t, x_true[3, :], '', label='True')
plt.plot(t, [x[3] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - Y (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Velocity - Z
plt.figure()
plt.plot(t, x_true[5, :], '', label='True')
plt.plot(t, [x[5] for x in kf.state], '--', label='Filtered')
plt.ylabel('Velocity - Z (m/s)', size=12)
plt.legend(loc='best', prop={'size': 10})
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
# Residual
plt.figure()
plt.plot(t, kf.residual, '')
plt.ylabel('Residual (m)', size=12)
# Set the plot title and labels
plt.title('Adaptive Kalman Filter - $\sigma_k$ Method', size=14)
plt.xlabel('Time (s)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
```
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/medium-cognitive-search/tutorials/blogposts/medium/cognitive-search/medlineplus_sparknlp.ipynb)
```
%%capture
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
import sparknlp
import xml.etree.ElementTree as ET
import pandas as pd
import urllib.request
from sparknlp.annotator import *
from sparknlp.base import *
import pyspark.sql.functions as F
from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.ml.feature import BucketedRandomProjectionLSH, BucketedRandomProjectionLSHModel
%%time
spark = sparknlp.start(gpu=True)
print("Spark NLP version: {}".format(sparknlp.version()))
print("Apache Spark version: {}".format(spark.version))
%%capture
!wget https://github.com/JohnSnowLabs/spark-nlp-workshop/raw/master/tutorials/blogposts/medium/cognitive-search/corpus/mplus_topics_2021-06-01.txt
medlineplusDF = spark.read.option("header","true").csv("mplus_topics_2021-06-01.txt")
%%time
medlineplusDF.show(5, truncate=True)
medlineplusDF = medlineplusDF.withColumn("text", F.concat(F.col("metadesc"), F.lit(" "), F.col("norm_summary"))).select("title", "url", "text")
medlineplusDF.persist()
medlineplusDF.show(5, truncate=100)
docass = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence_detector_dl = SentenceDetectorDLModel \
.pretrained("sentence_detector_dl", "xx") \
.setInputCols(["document"]) \
.setOutputCol("sentence")
emb_use = UniversalSentenceEncoder.pretrained("tfhub_use_multi", "xx") \
.setInputCols("sentence") \
.setOutputCol("use_embeddings")
pipeline_use = Pipeline(stages=[
docass, sentence_detector_dl, emb_use
])
model_use = pipeline_use.fit(medlineplusDF)
medlineplusSentencesDF = model_use.transform(medlineplusDF)
%%time
medlineplusSentencesDF.show(5)
medlineplusSentencesDF = medlineplusSentencesDF.select(
F.col("title"),
F.col("url"),
F.arrays_zip(
F.col("sentence.result").alias("sentence"),
F.col("sentence.begin").alias("begin"),
F.col("sentence.end").alias("end"),
F.col("use_embeddings.embeddings")
).alias("zip")
).select(
F.col("title"),
F.col("url"),
F.explode(F.col("zip")).alias("zip")
).select(
F.col("title"),
F.col("url"),
F.col("zip")['0'].alias("sentence"),
F.col("zip")['1'].alias("begin"),
F.col("zip")['2'].alias("end"),
F.col("zip")['3'].alias("embeddings")
)
myudf = F.udf(lambda vs: Vectors.dense(vs), VectorUDT())
medlineplusSentencesDF = medlineplusSentencesDF.select("title", "url", "sentence", "begin", "end", myudf("embeddings").alias("embeddings"))
%%time
medlineplusSentencesDF.persist()
medlineplusSentencesDF.show(5)
%%time
brp = BucketedRandomProjectionLSH(
inputCol="embeddings",
outputCol="hashes",
bucketLength=10,
numHashTables=5
)
brp_model = brp.fit(medlineplusSentencesDF)
hashesDF = brp_model.transform(medlineplusSentencesDF)
%%time
hashesDF.persist()
hashesDF.select("title", "sentence", "embeddings", "hashes").show(5, truncate=60)
brp_model.write().overwrite().save("brp_model.parquet")
brp_model = BucketedRandomProjectionLSHModel.load("brp_model.parquet")
def get_key(query, model):
    queryDF = spark.createDataFrame([[query]]).toDF("text")
    queryDF = model.transform(queryDF)
    queryDF = queryDF.select(
        F.explode(
            F.arrays_zip(
                F.col("sentence.result"),
                F.col("use_embeddings.embeddings")
            )
        ).alias("zip")
    ).select(
        F.col("zip")['0'].alias("sentence"),
        myudf(F.col("zip")['1']).alias("embeddings")
    )
    key = queryDF.select("embeddings").take(1)[0].embeddings
    return key
def find_close_sentences(query, emb_model, brp_model, hashesDF, k):
    key = get_key(query, emb_model)
    resultsDF = brp_model.approxNearestNeighbors(hashesDF, key, k)
    return resultsDF.select("title", "url", "sentence", "distCol", "hashes")
key = get_key("How to treat depression?", model_use)
key
%%time
find_close_sentences("How to treat depression?", model_use, brp_model, hashesDF, 5).show(truncate=False)
%%time
find_close_sentences("How to treat diabetes?", model_use, brp_model, hashesDF, 10).show(truncate=False)
question = "How can I prevent cancer?"
candidatesDF = find_close_sentences(question, model_use, brp_model, hashesDF, 20)
%%time
candidatesDF.persist()
candidatesDF.show(20, truncate=80)
candidateSourcesDF = candidatesDF.groupBy("title", "url").count().select("*").orderBy("count", ascending=False)
candidateSourcesDF.show(20, truncate=False)
candidate_titles = list(candidatesDF.select("title").toPandas()['title'])
candidate_sources_pd = medlineplusDF.where(F.col("title").isin(candidate_titles)).toPandas()
pd.set_option('display.max_colwidth', None)
candidate_sources_pd.head(4)
```
# Info from the web
**This notebook goes with [a blog post at Agile*](http://ageo.co/xlines02).**
We're going to get some info from a web service, and from Wikipedia. We'll make good use of [the `requests` library](http://docs.python-requests.org/en/master/), a really nicely designed Python library for making web requests in Python.
## Curvenam.es
[`curvenam.es`](http://curvenam.es) is a little web app for looking up curve mnemonics from LAS files.
Here's what [the demo request from the site](http://curvenam.es/lookup) looks like:
http://curvenam.es/lookup?mnemonic=TEST&method=fuzzy&limit=5&maxdist=2
We split this into the URL, and the query parameters:
```
import requests
url = 'http://curvenam.es/lookup'
params = {'mnemonic': 'DT4P',
'method': 'fuzzy',
'limit': 1,
'maxdist': 2
}
r = requests.get(url, params)
```
If we were successful, the server sends back a `200` status code:
```
r.status_code
```
The result of the query is in the `text` attribute of the result:
```
r.text
```
There's a convenient `json()` method to give us the result as JSON:
```
r.json()
try:
    print(r.json()['result'][0]['curve']['description'])
except (KeyError, IndexError):
    print("No results")
```
----
## Geological ages from Wikipedia
Sometimes there isn't a nice API and we have to get what we want from unstructured data. Let's use the task of getting geological ages from Wikipedia as an example.
We'll start with the Jurassic, then generalize.
```
url = "http://en.wikipedia.org/wiki/Jurassic" # Line 1
```
I used `View Source` in my browser to figure out where the age range is on the page, and what it looks like. The most predictable spot, that will work on every period's page, is in the infobox. It's given as a range, in italic text, with "million years ago" right after it.
Try to find the same string here.
```
r = requests.get(url) # Line 2
```
Now we have the entire text of the webpage, along with some metadata. The text is stored in `r.text`, and I happen to know roughly where the relevant bit of text is: around the 8500th character, give or take:
```
r.text[8400:8600]
r.text[7400:7600] # I don't count these lines either.
```
We can get at that bit of text using a [regular expression](https://docs.python.org/2/library/re.html):
```
import re
s = re.search(r'<i>(.+?million years ago)</i>', r.text)
text = s.group(1)
text
```
And if we're really cunning, we can get the start and end ages:
```
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups() # Line 3
duration = float(start) - float(end) # Line 4
print("According to Wikipedia, the Jurassic lasted {:.2f} Ma.".format(duration)) # Line 5
```
An exercise for you, dear reader: Make a function to get the start and end ages of *any* geologic period, taking the name of the period as an argument. I have left some hints.
```
def get_age(period):
    url = "http://en.wikipedia.org/wiki/" + period
    r = requests.get(url)
    start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups()
    return float(start), float(end)
```
You should be able to call your function like this:
```
period = "Jurassic"
get_age(period)
```
Now we can make a function that makes the sentence we made before, calling the function you just wrote:
```
def duration(period):
    t0, t1 = get_age(period)
    duration = t0 - t1
    response = "According to Wikipedia, the {0} lasted {1:.2f} Ma.".format(period, duration)
    return response
duration('Cretaceous')
```
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2016</p>
</div>
# 5 - Chart results
Turning all the accessibility results into charts, per scenario and destination. The process is set up for batch production, which imposes higher organizational costs but pays off at scale. You can of course adapt the code to a simpler approach.
Charting in Python always feels inferior to charting in R, my "native" chart habitat (sorry, seaborn). I've therefore used the plotnine library, which ports R's ggplot2 to Python. It's important to note that plotnine contains only about 85-90% of ggplot2's functionality, so experienced R users may occasionally be confused or disappointed.
It's also worth pointing out that plotnine returns deeply mysterious error messages when you mistype something or use a parameter it doesn't support. Budget extra time for troubleshooting.
```
import pandas as pd
import os, sys
import re
import GOSTnets as gn
import importlib
importlib.reload(gn)
import geopandas as gpd
import rasterio
from rasterio import features
from shapely.wkt import loads
from shapely import wkt
import numpy as np
import pandas as pd
import palettable
from functools import reduce
from pandas.api.types import CategoricalDtype
from plotnine import *
from pprint import pprint
from mizani.formatters import percent_format
```
## Setup
Set path locations
```
basepth = r'..'
chart_pth = r'products/charts'
fin_pth = r'final'
input_pth = r'inputs'
net_pth = r'results/200521' # change folder name to date range of last output
table_pth = r'tables'
adm_pth = r'../../../GEO/Boundaries'
geo_pth = r'../../../GEO'
```
### Load in data
```
pckle = r'final_G.pickle'
```
Tabular data
```
# Load in tabular data
hrsl_adm = pd.read_csv(os.path.join(geo_pth,'Population/CXB/hrsl_pts_admins.csv'))
demog_upz = pd.read_csv(os.path.join(input_pth,'demog2011_upz.csv'))
wt_demog_upz = pd.read_csv(os.path.join(input_pth,'wt_demog2011_upz.csv'))
econ_union = pd.read_csv(os.path.join(input_pth,'econ_union_clean.csv'))
econ_union.union_code = econ_union.union_code.astype('str')
```
Processed admin data
```
adm3 = gpd.read_file(r'results/spatial/adm3_summary.gpkg',driver="GPKG")
adm4 = gpd.read_file(r'results/spatial/adm4_summary.gpkg',driver="GPKG")
# Fix types for later joining
adm4['ADM4_PCODE'] = adm4['ADM4_PCODE'].astype(str)
wt_demog_upz['adm3_code'] = wt_demog_upz['adm3_code'].astype(str)
econ_union['union_code'] = econ_union['union_code'].astype(str)
```
Processed tabular data
```
timecats_merged = pd.read_csv(r'results/tables/adm2_time_categories.csv')
timecats_merged_tu = pd.read_csv(r'results/tables/adm2_time_categories_tu.csv')
educ_data = pd.read_csv(r'results/tables/adm2_access_by_educ_level.csv')
econC_empl_data = pd.read_csv(r'results/tables/adm2_access_by_empl_type_EconCensus.csv')
Census_empl_data = pd.read_csv(r'results/tables/adm2_access_by_empl_type_2011Census.csv')
empl_access_upz = pd.read_csv(r'results/tables/adm3_pop_weighted_access_by_empl_type_upz_EconCensus.csv')
empl_access_upz_long = pd.read_csv(r'results/tables/adm3_pop_weighted_access_by_empl_type_upz_EconCensus_LONG.csv')
empl_pc_access_upz_long = pd.read_csv(r'results/tables/adm3_pop_weighted_access_by_empl_type_upz_PopCensus_LONG.csv')
educ_access_upz_long = pd.read_csv(r'results/tables/adm3_pop_weighted_access_by_educ_level_upz_PopCensus_LONG.csv')
```
#### Define a reference dict that can be looped through for chart generation at scale
This is a crucial, and admittedly tedious step. The dictionary contains values for labeling and displaying different destination / scenario combinations of data at different administrative levels. Setting the break ranges in particular may take some trial and error to correctly scope what looks best. The benefit is that once set, you can loop over the dictionary and create bundles of charts very quickly for any number of administrative units
I recommend preparing this dictionary in a separate text editor like Sublime or Atom, where you can see values more clearly and edit many things at once.
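As a minimal sketch of the pattern (the keys and values below are invented for illustration, not the real scenarios), the dictionary then drives a loop that produces one chart per scenario:

```python
# Illustrative scenario dictionary, much smaller than the real one below
scenarios = {
    "current_health": {"title": "healthcare facilities",
                       "breaks_adm2": [0, 30, 5],
                       "plot_name": "health"},
    "current_allmkts": {"title": "all markets (of any size)",
                        "breaks_adm2": [0, 20, 5],
                        "plot_name": "allmkts"},
}

chart_files = []
for key, cfg in scenarios.items():
    lo, hi, step = cfg["breaks_adm2"]
    breaks = list(range(lo, hi + 1, step))  # axis breaks for this scenario
    # ... build and save the plotnine chart here using cfg["title"] and breaks ...
    chart_files.append(f"access_{cfg['plot_name']}_adm2.png")
```

Once the dictionary entries are right, adding a new scenario or destination costs one more entry, not a new chart script.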
```
all_scenarios = {\
'current_cxb' : {'title': 'Cox\'s Bazar',
'scenario' : '',
'coord_cartes' : [0,150],
'breaks_adm2' : [60,90,10],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'CXB'}, \
'current_chitt' : {'title': 'Chittagong',
'scenario' : '',
'coord_cartes' : [0,180],
'breaks_adm2' : [120,180,30],
'breaks_adm3' : [0,240,30],
'breaks_adm4' : [0,360,60],
'plot_name' : 'Chittagong'}, \
'current_martar' : {'title': 'Martarbari',
'scenario' : '',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,120,30],
'breaks_adm3' : [0,210,30],
'breaks_adm4' : [0,300,30],
'plot_name' : 'Martarbari'}, \
'current_health' : {'title': 'healthcare facilities',
'scenario' : '',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,40,10],
'breaks_adm4' : [0,45,15],
'plot_name' : 'health'}, \
'current_primary_ed' : {'title': 'primary schools',
'scenario' : '',
'coord_cartes' : [0,20],
'breaks_adm2' : [0,10,5],
'breaks_adm3' : [0,15,5],
'breaks_adm4' : [0,20,5],
'plot_name' : 'primary_education'}, \
'current_secondary_ed' : {'title': 'secondary schools',
'scenario' : '',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,40,10],
'plot_name' : 'secondary_education'}, \
'current_tertiary_ed' : {'title': 'universities',
'scenario' : '',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,90,15],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'tertiary_education'}, \
'current_allmkts' : {'title': 'all markets (of any size)',
'scenario' : '',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,20,5],
'breaks_adm4' : [0,30,10],
'plot_name' : 'allmkts'}, \
'current_growthcenters' : {'title': 'growth centers',
'scenario' : '',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,60,15],
'plot_name' : 'growthcenters'}, \
'ua_cxb' : {'title': 'Cox\'s Bazar',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,150],
'breaks_adm2' : [60,90,10],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'CXB'}, \
'ua_chitt' : {'title': 'Chittagong',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [120,180,30],
'breaks_adm3' : [0,240,30],
'breaks_adm4' : [0,360,60],
'plot_name' : 'Chittagong'}, \
'ua_martar' : {'title': 'Martarbari',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,120,30],
'breaks_adm3' : [0,210,30],
'breaks_adm4' : [0,300,30],
'plot_name' : 'Martarbari'}, \
'ua_health' : {'title': 'healthcare facilities',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,40,10],
'breaks_adm4' : [0,45,15],
'plot_name' : 'health'}, \
'ua_primary_ed' : {'title': 'primary schools',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,20],
'breaks_adm2' : [0,10,5],
'breaks_adm3' : [0,15,5],
'breaks_adm4' : [0,20,5],
'plot_name' : 'primary_education'}, \
'ua_secondary_ed' : {'title': 'secondary schools',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,40,10],
'plot_name' : 'secondary_education'}, \
'ua_tertiary_ed' : {'title': 'universities',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,90,15],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'tertiary_education'}, \
'ua_allmkts' : {'title': 'all markets (of any size)',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,20,5],
'breaks_adm4' : [0,30,10],
'plot_name' : 'allmkts'}, \
'ua_growthcenters' : {'title': 'growth centers',
'scenario' : 'with CXB ferry and upgrades to key roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,60,15],
'plot_name' : 'growthcenters'}, \
'uns_cxb' : {'title': 'Cox\'s Bazar',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,150],
'breaks_adm2' : [60,90,10],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'CXB'}, \
'uns_chitt' : {'title': 'Chittagong',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [120,180,30],
'breaks_adm3' : [0,240,30],
'breaks_adm4' : [0,360,60],
'plot_name' : 'Chittagong'}, \
'uns_martar' : {'title': 'Martarbari',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,120,30],
'breaks_adm3' : [0,210,30],
'breaks_adm4' : [0,300,30],
'plot_name' : 'Martarbari'}, \
'uns_health' : {'title': 'healthcare facilities',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,40,10],
'breaks_adm4' : [0,45,15],
'plot_name' : 'health'}, \
'uns_primary_ed' : {'title': 'primary schools',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,20],
'breaks_adm2' : [0,10,5],
'breaks_adm3' : [0,15,5],
'breaks_adm4' : [0,20,5],
'plot_name' : 'primary_education'}, \
'uns_secondary_ed' : {'title': 'secondary schools',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,40,10],
'plot_name' : 'secondary_education'}, \
'uns_tertiary_ed' : {'title': 'universities',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,90,15],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'tertiary_education'}, \
'uns_allmkts' : {'title': 'all markets (of any size)',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,20,5],
'breaks_adm4' : [0,30,10],
'plot_name' : 'allmkts'}, \
'uns_growthcenters' : {'title': 'growth centers',
'scenario' : 'with CXB ferry and upgrades to key northern roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,60,15],
'plot_name' : 'growthcenters'},
'unf_cxb' : {'title': 'Cox\'s Bazar',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,150],
'breaks_adm2' : [60,90,10],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'CXB'}, \
'unf_chitt' : {'title': 'Chittagong',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [120,180,30],
'breaks_adm3' : [0,240,30],
'breaks_adm4' : [0,360,60],
'plot_name' : 'Chittagong'}, \
'unf_martar' : {'title': 'Martarbari',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,120,30],
'breaks_adm3' : [0,210,30],
'breaks_adm4' : [0,300,30],
'plot_name' : 'Martarbari'}, \
'unf_health' : {'title': 'healthcare facilities',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,40,10],
'breaks_adm4' : [0,45,15],
'plot_name' : 'health'}, \
'unf_primary_ed' : {'title': 'primary schools',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,20],
'breaks_adm2' : [0,10,5],
'breaks_adm3' : [0,15,5],
'breaks_adm4' : [0,20,5],
'plot_name' : 'primary_education'}, \
'unf_secondary_ed' : {'title': 'secondary schools',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,30],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,40,10],
'plot_name' : 'secondary_education'}, \
'unf_tertiary_ed' : {'title': 'universities',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,180],
'breaks_adm2' : [0,90,15],
'breaks_adm3' : [0,180,30],
'breaks_adm4' : [0,180,30],
'plot_name' : 'tertiary_education'}, \
'unf_allmkts' : {'title': 'all markets (of any size)',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,20,5],
'breaks_adm3' : [0,20,5],
'breaks_adm4' : [0,30,10],
'plot_name' : 'allmkts'}, \
'unf_growthcenters' : {'title': 'growth centers',
'scenario' : 'with upgrades to key roads',
'coord_cartes' : [0,60],
'breaks_adm2' : [0,30,5],
'breaks_adm3' : [0,30,10],
'breaks_adm4' : [0,60,15],
'plot_name' : 'growthcenters'} }
```
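Because each scenario prefix reuses exactly the same per-destination settings, the dictionary above can also be built programmatically from two smaller tables. This is a sketch with only two destinations shown (the variable names are illustrative, not from the original code):

```python
# Per-destination display settings, shared across all scenarios.
destinations = {
    'health': {'title': 'healthcare facilities', 'coord_cartes': [0, 30],
               'breaks_adm2': [0, 30, 5], 'breaks_adm3': [0, 40, 10],
               'breaks_adm4': [0, 45, 15], 'plot_name': 'health'},
    'primary_ed': {'title': 'primary schools', 'coord_cartes': [0, 20],
                   'breaks_adm2': [0, 10, 5], 'breaks_adm3': [0, 15, 5],
                   'breaks_adm4': [0, 20, 5], 'plot_name': 'primary_education'},
}

# Scenario prefixes and their chart-subtitle text.
scenarios = {
    'current': '',
    'ua': 'with CXB ferry and upgrades to key roads',
    'uns': 'with CXB ferry and upgrades to key northern roads',
    'unf': 'with upgrades to key roads',
}

# Cross the two tables: one entry per scenario/destination pair.
all_scenarios = {
    f'{scen}_{dest}': {**settings, 'scenario': desc}
    for scen, desc in scenarios.items()
    for dest, settings in destinations.items()
}
```

With all nine destinations filled in, this reproduces the hand-written dictionary and keeps each break range defined in exactly one place.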
## Charting
#### Colors are important, colors are our friends
```
from pandas.api.types import CategoricalDtype
from plotnine import *
from mizani.formatters import percent_format
## good reference: https://datacarpentry.org/python-ecology-lesson/07-visualization-ggplot-python/index.html
from palettable.wesanderson import FantasticFox2_5
from palettable.colorbrewer.sequential import BuGn_6
from palettable.wesanderson import IsleOfDogs3_4 # decent
from palettable.cartocolors.sequential import BurgYl_5
FantasticFox2_5.hex_colors
BurgYl_5.hex_colors
TU_comp_colors = ['#fcdaa4','#915133'] # #edddc6 too light
TU_comp_colors
## not necessary, keeping for reference
empl_colors = {'Industrial' : FantasticFox2_5.hex_colors[1],
'Service' : FantasticFox2_5.hex_colors[2],
'Total' : FantasticFox2_5.hex_colors[3] }
empl_colors
```
Leftover code for finding available fonts
```
# import matplotlib.font_manager
# matplotlib.font_manager.findSystemFonts()
```
Ignore warnings from PlotNine
```
import warnings
# Target plotnine's own warning class rather than silencing everything
# (PlotnineWarning lives in plotnine.exceptions in recent versions)
from plotnine.exceptions import PlotnineWarning
warnings.filterwarnings("ignore", category=PlotnineWarning)
```
### Adm2 - District-wide
#### Simple averages
##### Percentage time categories plots
Create an ordered variable
```
# Determine order and create a categorical type
# education_list = educ_access_upz_long['Education level'].value_counts().index.tolist()
timecat_list = ['0 - 15','15 - 30','30 - 45','45 - 60','60 - 75', '75 - 90','90 - 120','120 - 180','180 - 240','240 - 300']
timecat_cat = pd.Categorical(timecats_merged['time_cat'], categories=timecat_list)
timecat_cat_tu = pd.Categorical(timecats_merged_tu['time_cat'], categories=timecat_list)
# assign to a new column in the DataFrame
timecats_merged = timecats_merged.assign(timecat_cat = timecat_cat).rename(columns={'timecat_cat':'Travel time category' })
timecats_merged_tu = timecats_merged_tu.assign(timecat_cat = timecat_cat_tu).rename(columns={'timecat_cat':'Travel time category' })
timecats_merged.head(2)
timecats_merged_tu.head(2)
```
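A quick standalone check that the ordered categorical behaves as intended — string sorting would put `'120 - 180'` before `'15 - 30'`, while the categorical sorts by position in `timecat_list` (only pandas assumed):

```python
import pandas as pd

timecat_list = ['0 - 15', '15 - 30', '30 - 45', '45 - 60', '60 - 75',
                '75 - 90', '90 - 120', '120 - 180', '180 - 240', '240 - 300']

# An ordered categorical sorts by category position, not alphabetically.
s = pd.Series(pd.Categorical(['120 - 180', '15 - 30', '0 - 15'],
                             categories=timecat_list, ordered=True))
print(s.sort_values().tolist())  # ['0 - 15', '15 - 30', '120 - 180']
```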
Create the charts
```
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_timecat_pct_plot = (ggplot(timecats_merged) # defining what data to use
+ aes(x='Travel time category',y='{}_pop_pct'.format(key)) # defining what variable to use
+ geom_bar(size=16,stat="identity",fill = "#FF6666") # defining the type of plot to use
+ scale_y_continuous(labels=lambda l: ["%d%%" % (v * 100) for v in l]) # lambda for making % labels
# + scale_y_continuous(labels=percent_format()) # alternate method
# + theme(axis_text_y = element_text(angle = 45)) # not necessary with coord_flip
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Travel time to {}\n{}'.format(val['title'],val['scenario']),\
subtitle='Extrapolated from HRSL population models', \
x='Minutes to {}'.format(val['title']), \
y='District population share')
+ coord_flip()
)
ggsave(plot = adm2_timecat_pct_plot, filename = 'adm2_{}_timecat_pct_plot'.format(key), path = chart_pth, dpi = 450)
```
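The percent-label lambda passed to `scale_y_continuous` above can be sanity-checked on its own; plotnine calls it with the axis break values, which here are population-share fractions between 0 and 1:

```python
# Same lambda as in the chart loop: fraction -> whole-number percent label.
pct_labels = lambda l: ["%d%%" % (v * 100) for v in l]
print(pct_labels([0, 0.25, 0.5, 1.0]))  # ['0%', '25%', '50%', '100%']
```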
##### Population totals time categories plots
```
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_timecat_popn_plot = (ggplot(timecats_merged) # defining what data to use
+ aes(x='Travel time category',y='{}_popn'.format(key)) # defining what variable to use
+ geom_bar(size=16,stat="identity",fill = "#FF6666") # defining the type of plot to use
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Travel time to {}\n{}'.format(val['title'],val['scenario']),\
subtitle='Extrapolated from HRSL population models', \
x='Minutes to {}'.format(val['title']), \
y='Population')
+ coord_flip()
+ scale_y_continuous(labels=lambda l: ["{:,.0f}".format(v) for v in l]) # the argument is 'labels'; format thousands with commas
)
ggsave(plot = adm2_timecat_popn_plot, filename = 'adm2_{}_timecat_popn_plot'.format(key), path = chart_pth, dpi = 450)
```
#### Teknaf/Ukhia Comparisons
Repeated but comparing Teknaf and Ukhia to other Upazilas
```
TU_comp_colors
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_timecat_pct_plot = (ggplot(timecats_merged_tu) # defining what data to use
+ aes(x='Travel time category',y='{}_pop_pct'.format(key),fill='TU') # defining what variable to use
+ geom_bar(size=16,stat="identity",position="dodge") # defining the type of plot to use
+ scale_y_continuous(labels=lambda l: ["%d%%" % (v * 100) for v in l]) # lambda for making % labels
# + scale_y_continuous(labels=percent_format()) # alternate method
# + theme(axis_text_y = element_text(angle = 45)) # not necessary with coord_flip
+ theme(panel_background = element_rect(fill = "#fbfbfb"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Travel time to {}\n{}'.format(val['title'],val['scenario']),\
subtitle='Extrapolated from HRSL population models', \
x='Minutes to {}'.format(val['title']), \
y='District population share')
+ scale_fill_manual(values=TU_comp_colors,name='Upazilas') # legend title here
+ coord_flip()
)
ggsave(plot = adm2_timecat_pct_plot, filename = 'adm2_{}_tu_timecat_pct_plot'.format(key), path = chart_pth, dpi = 450)
```
##### Population totals time categories plots
```
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_timecat_popn_plot = (ggplot(timecats_merged_tu) # defining what data to use
+ aes(x='Travel time category',y='{}_popn'.format(key), fill='TU') # defining what variable to use
+ geom_bar(size=16,stat="identity",position="dodge") # defining the type of plot to use
+ theme(panel_background = element_rect(fill = "#fbfbfb"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Travel time to {}\n{}'.format(val['title'],val['scenario']),\
subtitle='Extrapolated from HRSL population models', \
x='Minutes to {}'.format(val['title']), \
y='Population')
+ coord_flip()
+ scale_fill_manual(values=TU_comp_colors,name='Upazilas') # legend title here
+ scale_y_continuous(labels=lambda l: ["{:,.0f}".format(v) for v in l]) # the argument is 'labels'; format thousands with commas
)
ggsave(plot = adm2_timecat_popn_plot, filename = 'adm2_{}_tu_timecat_popn_plot'.format(key), path = chart_pth, dpi = 450)
```
#### Breakdowns by education and employment level
```
educ_data
econC_empl_data
Census_empl_data
```
##### Breakdowns by education level
```
# Determine order and create a categorical type
# Note that value_counts() is already sorted
# education_list = educ_data['Education level'].value_counts().index.tolist()
education_list = ['No education', 'Primary', 'Lower Secondary', 'Secondary', 'Higher Secondary', 'University']
education_cat = pd.Categorical(educ_data['Education level'], categories=education_list)
# assign to a new column in the DataFrame
educ_data = educ_data.assign(education_cat = education_cat)
for key, val in all_scenarios.items():
# print(key) # for troubleshooting
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_educ_plot = (ggplot(educ_data)
+ aes(x='education_cat',y='{}_avg_time'.format(key))
+ geom_bar(width=.6,stat="identity",fill = "#ce78b3")
+ scale_y_continuous(breaks=range(break1,break2,break3))
+ scale_fill_manual(values=BuGn_6.hex_colors)
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Average travel times to {}\nby level of education {}'.format(val['title'],val['scenario']), \
subtitle='Extrapolated from 2011 census',\
x='Education Level', \
y='Minutes travel to {}'.format(val['title']) )
+ coord_flip(ylim=(break1,break2))
)
ggsave(plot = adm2_educ_plot, filename = 'adm2_educ_{}_plot'.format(key), path = chart_pth, dpi = 450)
```
##### Breakdowns by employment - 2013 econ census
Commented out as we ended up using only the 2011 population census versions of this
```
# for key, val in all_scenarios.items():
# # print(key)
# ylim1 = val['coord_cartes'][0]
# ylim2 = val['coord_cartes'][1]
# break1 = val['breaks_adm2'][0]
# break2 = val['breaks_adm2'][1]
# break3 = val['breaks_adm2'][2]
# adm2_empl_econ_census_plot = (ggplot(econC_empl_data)
# + aes(x='Employment type',y='{}_avg_time'.format(key),fill='Employment type')
# + geom_bar(width=.6,stat="identity") \
# + coord_cartesian(ylim=(ylim1,ylim2))
# + scale_y_continuous(breaks=range(break1,break2,break3))
# + scale_fill_manual(values=FantasticFox2_5.hex_colors)
# + theme(panel_background = element_rect(fill = "#EDEDED"),
# panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
# panel_grid_major_y = element_blank(),
# panel_grid_minor = element_blank(),
# panel_border = element_blank(),
# axis_line = element_line(colour = '#7f7f7f'),
# axis_title = element_text(colour = '#4C4C4C',size=10.5),
# axis_text_x = element_text(colour = '#7f7f7f'),
# axis_text_y = element_text(colour = '#7f7f7f'),
# axis_ticks = element_blank(),
# text = element_text(family='Proxima Nova',weight='bold'),
# legend_text = element_text(weight='normal'),
# legend_background = element_blank(),
# legend_box_margin=22.5,
# legend_title_align = 'center',
# legend_title = element_text(color='#4C4C4C'),
# legend_position = 'bottom')
# + labs(title='Average travel times to {}\nby type of employment {}'.format(val['title'],val['scenario']), \
#                 subtitle='Extrapolated from economic census',\
# x='Type of employment', \
# y='Minutes travel to {}'.format(val['title']) )
# )
# ggsave(plot = adm2_empl_econ_census_plot, filename = 'adm2_empl_econ_census_{}_plot'.format(key), path = chart_pth, dpi = 450)
# # ### EMPLOYMENT
# # ggsave(plot = adm2_empl_econ_census_cxb_plot, filename = 'adm2_empl_econ_census_cxb_plot', path = chart_pth, dpi = 450)
# # adm2_empl_econ_census_cxb_plot = (ggplot(econC_empl_data) # defining what data to use
# # + aes(x='Employment type',y='current_cxb_avg_time',fill='Employment type') # defining what variable to use
# # + geom_bar(width=.6,stat="identity") # defining the type of plot to use
# # + coord_cartesian(ylim=(60,120))
# # + scale_y_continuous(breaks=range(60,130,10))
# # + scale_fill_manual(values=FantasticFox2_5.hex_colors)
# #           + labs(title='Average travel times to Cox\'s Bazaar by type of employment', subtitle='Extrapolated from economic census', x='Employment type', y='Minutes travel to Cox\'s Bazaar')
# # )
```
##### Breakdowns by employment type (from 2011 census)
```
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm2'][0]
break2 = val['breaks_adm2'][1]
break3 = val['breaks_adm2'][2]
adm2_empl_pop_census_plot = (ggplot(Census_empl_data)
+ aes(x='Employment type',y='{}_avg_time'.format(key),fill='Employment type')
+ geom_bar(width=.6,stat="identity") \
# + coord_cartesian(ylim=(ylim1,ylim2))
+ scale_y_continuous(breaks=range(break1,break2,break3))
+ scale_fill_manual(values=FantasticFox2_5.hex_colors)
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_x = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ labs(title='Average travel times to {}\nby type of employment {}'.format(val['title'],val['scenario']), \
subtitle='Extrapolated from population census',\
x='Type of employment', \
y='Minutes travel to {}'.format(val['title']) )
)
ggsave(plot = adm2_empl_pop_census_plot, filename = 'adm2_empl_pop_census_{}_plot'.format(key), path = chart_pth, dpi = 450)
```
### Adm3 - Upazila
#### Simple averages
```
adm3
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm3'][0]
break2 = val['breaks_adm3'][1]
break3 = val['breaks_adm3'][2]
adm3_plot = (ggplot(adm3) # defining what data to use
+ aes(x='Upazila_Name',y='{}_avg_time'.format(key)) # str.title() on the literal was a no-op; pass the column name directly
+ geom_bar(width=.6,position="dodge",stat="identity",fill="#7398a1") # defining the type of plot to use
+ labs(title='Average travel times to {}'.format(val['title']),\
subtitle='Upazila', \
x='Upazila', \
y='Minutes travel to {}\n{}'.format(val['title'],val['scenario']) )
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ coord_flip(ylim=(break1,break2))
+ scale_y_continuous(breaks=range(break1,break2,break3))
)
ggsave(plot = adm3_plot, filename = 'adm3_{}_plot'.format(key), path = chart_pth, dpi = 450)
```
#### Employment access
```
empl_access_upz_long['Employment category'].unique()
empl_access_upz_long['scen_dest'].unique()
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm3'][0]
break2 = val['breaks_adm3'][1]
break3 = val['breaks_adm3'][2]
data = empl_pc_access_upz_long[empl_pc_access_upz_long['scen_dest'] == key]
empl_access = (ggplot(data) # defining what data to use
+ aes(x='ADM3_EN',y='access_time',fill="Employment category") # defining what variable to use
+ geom_bar(width=.6,position="dodge",stat="identity",) # defining the type of plot to use
+ labs(title='Average travel times to {}\nby type of employment {}'.format(val['title'],val['scenario']), \
subtitle='Upazila', \
x='Upazila', \
y='Minutes travel to {}'.format(val['title']))
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ coord_flip(ylim=(break1,break2))
+ scale_y_continuous(breaks=range(break1,break2,break3))
+ scale_fill_manual(values=FantasticFox2_5.hex_colors)
)
ggsave(plot = empl_access, filename = 'adm3_empl_EconCensus_{}_plot'.format(key), path = chart_pth, dpi = 450)
```
#### Educational level access
```
educ_access_upz_long.head(2)
```
re-ordering education levels for logical chart order
```
# Determine order and create a categorical type
# Note that value_counts() is already sorted
# education_list = educ_access_upz_long['Education level'].value_counts().index.tolist()
education_list = ['No education', 'Primary', 'Lower Secondary', 'Secondary', 'Higher Secondary', 'University']
education_cat = pd.Categorical(educ_access_upz_long['Education level'], categories=education_list)
# assign to a new column in the DataFrame
educ_access_upz_long = educ_access_upz_long.assign(education_cat = education_cat).rename(columns={'education_cat':'Education lvl' })
educ_access_upz_long.head(2)
```
Making the education charts
```
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm3'][0]
break2 = val['breaks_adm3'][1]
break3 = val['breaks_adm3'][2]
data = educ_access_upz_long[educ_access_upz_long['scen_dest'] == key]
educ_access = (ggplot(data) # defining what data to use
+ aes(x='ADM3_EN',y='access_time',fill="Education lvl") # defining what variable to use
+ geom_bar(width=.6,position="dodge",stat="identity",) # defining the type of plot to use
+ labs(title='Average travel times to {}\nby level of education {}'.format(val['title'],val['scenario']), \
subtitle='Upazila', \
x='Upazila', \
y='Minutes travel to {}'.format(val['title']))
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ coord_flip(ylim=(break1,break2))
+ scale_y_continuous(breaks=range(break1,break2,break3))
+ scale_fill_manual(values=BuGn_6.hex_colors,name='Education level') # legend title here
)
ggsave(plot = educ_access, filename = 'adm3_educ_PopCensus_{}_plot'.format(key), path = chart_pth, dpi = 450)
educ_access
```
### Adm4 - Union comparisons
```
adm4.head(2)
adm4 = adm4[adm4['ADM4_EN'] != 'St.Martin Dwip'] # drop the island once, before looping
for key, val in all_scenarios.items():
# print(key)
ylim1 = val['coord_cartes'][0]
ylim2 = val['coord_cartes'][1]
break1 = val['breaks_adm4'][0]
break2 = val['breaks_adm4'][1]
break3 = val['breaks_adm4'][2]
adm4_plot = (ggplot(adm4) # defining what data to use
+ aes(x='ADM4_EN',y='{}_avg_time'.format(key)) # defining what variable to use
+ geom_bar(width=.6,position="dodge",stat="identity",fill="#7398a1") # defining the type of plot to use
+ labs(title='Average travel times to {}\n{}'.format(val['title'],val['scenario']),\
subtitle='Union', \
x='Union', \
y='Minutes travel to {}\n{}'.format(val['title'],val['scenario']) )
+ theme(panel_background = element_rect(fill = "#EDEDED"),
panel_grid_major = element_line(size = 0.5, linetype = 'solid', colour = '#C9C9C9'),
panel_grid_major_y = element_blank(),
panel_grid_minor = element_blank(),
panel_border = element_blank(),
axis_line = element_line(colour = '#7f7f7f'),
axis_title = element_text(colour = '#4C4C4C',size=10.5),
axis_text_x = element_text(colour = '#7f7f7f'),
axis_text_y = element_text(colour = '#7f7f7f'),
axis_ticks = element_blank(),
text = element_text(family='Proxima Nova',weight='bold'),
legend_text = element_text(weight='normal'),
legend_background = element_blank(),
legend_box_margin=22.5,
legend_title_align = 'center',
legend_title = element_text(color='#4C4C4C'),
legend_position = 'bottom')
+ coord_flip(ylim=(break1,break2))
+ scale_y_continuous(breaks=range(break1,break2,break3))
+ theme(axis_text_y = element_text(size = 4.25))
)
ggsave(plot = adm4_plot, filename = 'adm4_{}_plot'.format(key), path = chart_pth, dpi = 450)
adm4_plot
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Distributed TensorFlow with Horovod
In this tutorial, you will train a word2vec model in TensorFlow using distributed training via [Horovod](https://github.com/uber/horovod).
## Prerequisites
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)
* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../../configuration.ipynb) to:
* install the AML SDK
* create a workspace and its configuration file (`config.json`)
* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
```
## Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
```
## Create or Attach existing AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
**Creation of AmlCompute takes approximately 5 minutes.** If an AmlCompute with that name is already in your workspace, this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
```
The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`.
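For reference, a CPU configuration might look like the following. This is a sketch only, under the same SDK as above; the cluster name `cpu-cluster` is illustrative.
```
# sketch: provisioning configuration for a CPU cluster (name is illustrative)
from azureml.core.compute import AmlCompute, ComputeTarget
cpu_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
cpu_target = ComputeTarget.create(ws, 'cpu-cluster', cpu_config)
cpu_target.wait_for_completion(show_output=True)
```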
## Create a Dataset for Files
A Dataset can reference one or more files in your datastores or at public URLs. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred. [Learn More](https://aka.ms/azureml/howto/createdatasets)
```
from azureml.core import Dataset
web_paths = ['https://azureopendatastorage.blob.core.windows.net/testpublic/text8.zip']
dataset = Dataset.File.from_files(path=web_paths)
```
You may want to register the dataset to your workspace using the `register()` method so that it can be shared with others, reused across various experiments, and referred to by name in your training script.
```
dataset = dataset.register(workspace=ws,
name='wikipedia-text',
description='Wikipedia text training and test dataset',
create_new_version=True)
# list the files referenced by the dataset
dataset.to_path()
```
## Train model on the remote compute
### Create a project directory
Create a directory that will contain all of the code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.
```
import os
project_folder = './tf-distr-hvd'
os.makedirs(project_folder, exist_ok=True)
```
Copy the training script `tf_horovod_word2vec.py` into this project directory.
```
import shutil
shutil.copy('tf_horovod_word2vec.py', project_folder)
```
### Create an experiment
Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed TensorFlow tutorial.
```
from azureml.core import Experiment
experiment_name = 'tf-distr-hvd'
experiment = Experiment(ws, name=experiment_name)
```
### Create an environment
In this tutorial, we will use one of Azure ML's curated TensorFlow environments for training. [Curated environments](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments#use-a-curated-environment) are available in your workspace by default. Specifically, we will use the TensorFlow 1.13 GPU curated environment.
```
from azureml.core import Environment
tf_env = Environment.get(ws, name='AzureML-TensorFlow-1.13-GPU')
```
### Configure the training job
Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on.
In order to execute a distributed run using MPI/Horovod, you must create an `MpiConfiguration` object and pass it to the `distributed_job_config` parameter of the ScriptRunConfig constructor. The code below configures a 2-node distributed job running one process per node. If you would also like to run multiple processes per node (e.g. if your cluster SKU has multiple GPUs), additionally specify the `process_count_per_node` parameter in `MpiConfiguration` (the default is `1`).
```
from azureml.core import ScriptRunConfig
from azureml.core.runconfig import MpiConfiguration
src = ScriptRunConfig(source_directory=project_folder,
script='tf_horovod_word2vec.py',
arguments=['--input_data', dataset.as_mount()],
compute_target=compute_target,
environment=tf_env,
distributed_job_config=MpiConfiguration(node_count=2))
```
### Submit job
Run your experiment by submitting your ScriptRunConfig object. Note that this call is asynchronous.
```
run = experiment.submit(src)
print(run)
run.get_details()
```
### Monitor your run
You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
Alternatively, you can block until the script has completed training before running more code.
```
run.wait_for_completion(show_output=True)
```
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
rates = 2**np.arange(7)/80
print(rates)
# note: torch, vocab and the special-token indices (pad/unk/sos/eos) used below are defined in the "ST, RNN, BERT" cell
def get_inputs(sm):
seq_len = 220  # 218 tokens plus [SOS] and [EOS]
sm = sm.split()
if len(sm)>218:
print('SMILES is too long ({:d})'.format(len(sm)))
sm = sm[:109]+sm[-109:]
ids = [vocab.stoi.get(token, unk_index) for token in sm]
ids = [sos_index] + ids + [eos_index]
seg = [1]*len(ids)
padding = [pad_index]*(seq_len - len(ids))
ids.extend(padding)
seg.extend(padding)
return ids, seg
def get_array(smiles):
x_id, x_seg = [], []
for sm in smiles:
a,b = get_inputs(sm)
x_id.append(a)
x_seg.append(b)
return torch.tensor(x_id), torch.tensor(x_seg)
```
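As a sanity check, the padding scheme in `get_inputs` can be exercised with toy indices. This is a minimal sketch: the vocabulary lookup is replaced by hand-written ids, and the special-token values match the constants defined later in the notebook.
```
# Toy illustration of the pad-to-fixed-length scheme used in get_inputs:
# ids become [SOS] + tokens + [EOS], then ids and seg are padded to seq_len.
pad_index, eos_index, sos_index = 0, 2, 3
seq_len = 10
token_ids = [7, 8, 9]  # stand-ins for vocab.stoi lookups
ids = [sos_index] + token_ids + [eos_index]
seg = [1] * len(ids)
padding = [pad_index] * (seq_len - len(ids))
ids.extend(padding)
seg.extend(padding)
print(ids)  # [3, 7, 8, 9, 2, 0, 0, 0, 0, 0]
print(seg)  # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```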
### ECFP4
```
from rdkit import Chem
from rdkit.Chem import AllChem
def bit2np(bitvector):
bitstring = bitvector.ToBitString()
intmap = map(int, bitstring)
return np.array(list(intmap))
def extract_morgan(smiles, targets):
x,X,y = [],[],[]
for sm,target in zip(smiles,targets):
mol = Chem.MolFromSmiles(sm)
if mol is None:
print(sm)
continue
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, 1024) # Morgan (Similar to ECFP4)
x.append(sm)
X.append(bit2np(fp))
y.append(target)
return x,np.array(X),np.array(y)
```
### ST, RNN, BERT
```
import torch
from pretrain_trfm import TrfmSeq2seq
from pretrain_rnn import RNNSeq2Seq
from bert import BERT
from build_vocab import WordVocab
from utils import split
pad_index = 0
unk_index = 1
eos_index = 2
sos_index = 3
mask_index = 4
vocab = WordVocab.load_vocab('data/vocab.pkl')
trfm = TrfmSeq2seq(len(vocab), 256, len(vocab), 3)
trfm.load_state_dict(torch.load('.save/trfm_12_23000.pkl'))
trfm.eval()
print('Total parameters:', sum(p.numel() for p in trfm.parameters()))
rnn = RNNSeq2Seq(len(vocab), 256, len(vocab), 3)
rnn.load_state_dict(torch.load('.save/seq2seq_1.pkl'))
rnn.eval()
print('Total parameters:', sum(p.numel() for p in rnn.parameters()))
bert = BERT(len(vocab), hidden=256, n_layers=8, attn_heads=8, dropout=0)
bert.load_state_dict(torch.load('../result/chembl/ep00_it010000.pkl'))
bert.eval()
print('Total parameters:', sum(p.numel() for p in bert.parameters()))
```
### GC
```
import os
import deepchem as dc
from deepchem.models.tensorgraph.models.graph_models import GraphConvModel
```
## HIV
```
df = pd.read_csv('data/hiv.csv')
print(df.shape)
df.head()
df_large = df[np.array(list(map(len, df['smiles'])))>218]
keys = ['0', '1']
bottom = df_large.groupby('HIV_active').count()['smiles'].values
plt.figure(figsize=(2,4))
plt.bar(keys, bottom)
plt.xlabel('class')
plt.ylabel('counts')
plt.title('HIV')
plt.grid()
plt.show()
plt.hist(list(map(len, df_large['smiles'].values)), bins=20)
plt.xlabel('SMILES length')
plt.ylabel('counts')
plt.title('HIV')
plt.grid()
plt.show()
df_train = df[np.array(list(map(len, df['smiles'])))<=218]
df_test = df[np.array(list(map(len, df['smiles'])))>218]
def ablation_hiv(X, X_test, y, y_test, rate, n_repeats):
auc = np.empty(n_repeats)
for i in range(n_repeats):
clf = MLPClassifier(max_iter=1000)
if rate==1:
X_train, y_train = X,y
else:
X_train, _, y_train, __ = train_test_split(X, y, test_size=1-rate, stratify=y)
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
auc[i] = roc_auc_score(y_test, y_score[:,1])
ret = {}
ret['auc mean'] = np.mean(auc)
ret['auc std'] = np.std(auc)
return ret
def ablation_hiv_dc(dataset, test_data, rate, n_repeats):
auc = np.empty(n_repeats)
for i in range(n_repeats):
clf = GraphConvModel(n_tasks=1, batch_size=64, mode='classification')
splitter = dc.splits.RandomStratifiedSplitter()
train_data, _, __ = splitter.train_valid_test_split(dataset, frac_train=rate, frac_valid=1-rate, frac_test=0)
clf.fit(train_data)
metrics = [dc.metrics.Metric(dc.metrics.roc_auc_score)]
scores = clf.evaluate(test_data, metrics)
auc[i] = scores['roc_auc_score']
ret = {}
ret['auc mean'] = np.mean(auc)
ret['auc std'] = np.std(auc)
return ret
```
### ST
```
x_split = [split(sm) for sm in df_train['smiles'].values]
xid, _ = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
x_split = [split(sm) for sm in df_test['smiles'].values]
xid, _ = get_array(x_split)
X_test = trfm.encode(torch.t(xid))
print(X_test.shape)
y, y_test = df_train['HIV_active'].values, df_test['HIV_active'].values
scores = []
for rate in rates:
score_dic = ablation_hiv(X, X_test, y, y_test, rate, 20)
print(rate, score_dic)
scores.append(score_dic['auc mean'])
print(np.mean(scores))
score_dic = ablation_hiv(X, X_test, y, y_test, 1, 20)
print(score_dic)
```
### ECFP
```
x,X,y = extract_morgan(df_train['smiles'].values, df_train['HIV_active'].values)
print(len(X), len(y))
x,X_test,y_test = extract_morgan(df_test['smiles'].values, df_test['HIV_active'].values)
print(len(X_test), len(y_test))
scores = []
for rate in rates:
score_dic = ablation_hiv(X, X_test, y, y_test, rate, 20)
print(rate, score_dic)
scores.append(score_dic['auc mean'])
print(np.mean(scores))
score_dic = ablation_hiv(X, X_test, y, y_test, 1, 20)
print(score_dic)
```
### RNN
```
x_split = [split(sm) for sm in df_train['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
x_split = [split(sm) for sm in df_test['smiles'].values]
xid, _ = get_array(x_split)
X_test = rnn.encode(torch.t(xid))
print(X_test.shape)
y, y_test = df_train['HIV_active'].values, df_test['HIV_active'].values
scores = []
for rate in rates:
score_dic = ablation_hiv(X, X_test, y, y_test, rate, 20)
print(rate, score_dic)
scores.append(score_dic['auc mean'])
print(np.mean(scores))
score_dic = ablation_hiv(X, X_test, y, y_test, 1, 20)
print(score_dic)
```
### GC
```
featurizer = dc.feat.ConvMolFeaturizer()
loader = dc.data.CSVLoader(
tasks=['HIV_active'],
smiles_field='smiles',
featurizer=featurizer)
dataset = loader.featurize('data/hiv.csv')
train_data = dataset.select(np.where(np.array(list(map(len, df['smiles'])))<=218)[0])
test_data = dataset.select(np.where(np.array(list(map(len, df['smiles'])))>218)[0])
scores = []
for rate in rates:
score_dic = ablation_hiv_dc(train_data, test_data, rate, 20)
print(rate, score_dic)
scores.append(score_dic['auc mean'])
print(np.mean(scores))
score_dic = ablation_hiv_dc(train_data, test_data, 1, 20)
print(score_dic)
```
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
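The `covid_cases` column above is a 7-day rolling mean of `new_cases` (with `min_periods=0`, so early rows average over the available days). On a toy series, the smoothing looks like this:
```
import pandas as pd
# 7-day rolling mean with min_periods=0, as used for covid_cases above
s = pd.Series([0, 7, 14, 7, 0, 7, 14])
smoothed = s.rolling(7, min_periods=0).mean().round().tolist()
print(smoothed)  # [0.0, 4.0, 7.0, 7.0, 6.0, 6.0, 7.0]
```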
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier].copy()
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[
invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
```
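The `diff()` calls above operate on a column of sets: pandas computes an elementwise subtraction, and for Python sets `-` is set difference, so each row yields the TEKs first seen on that extraction date. In miniature:
```
# new TEKs on a day = that day's cumulative TEK set minus the previous day's
previous_day = {"tek-a", "tek-b"}
current_day = {"tek-a", "tek-b", "tek-c", "tek-d"}
print(sorted(current_day - previous_day))  # ['tek-c', 'tek-d']
```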
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = pd.concat([official_stats_df, previous_official_stats_df])
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=14)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title="Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
    .rename_axis(columns=display_column_name_mapping) \
    .rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title="Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
# Sharpe Style Analysis
Sharpe Style Analysis is an elegant and simple decomposition exercise similar to what we did in the previous lab session, with the added constraint that the coefficients are all positive and add to 1.
Therefore, the coefficients obtained from performing style analysis on a manager's observed returns can be interpreted as weights in a portfolio of building blocks which, together, _mimic_ that return series. The exercise can reveal drifts in a manager's style as well as provide insight into what the manager is likely doing to obtain the returns.
# Performing Sharpe Style Analysis
The key to obtaining the weights is our old friend, the quadratic optimizer. We are asking the optimizer to find the weights that minimize the square of the difference between the observed series and the returns of a benchmark portfolio that holds the explanatory building blocks in those same weights. This is equivalent to minimizing the _tracking error_ between the two return series.
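Formally, with observed returns $r$, the matrix of building-block returns $B$, and weights $w$, the optimizer is solving the constrained least-squares problem:

$$
\min_{w} \; \left\| r - Bw \right\|^2 \quad \text{subject to} \quad \sum_i w_i = 1, \quad w_i \geq 0
$$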
The code to implement this is a very slight modification of the `minimize_vol` we have previously implemented:
```python
def style_analysis(dependent_variable, explanatory_variables):
"""
Returns the optimal weights that minimizes the Tracking error between
a portfolio of the explanatory variables and the dependent variable
"""
n = explanatory_variables.shape[1]
init_guess = np.repeat(1/n, n)
bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
# construct the constraints
weights_sum_to_1 = {'type': 'eq',
'fun': lambda weights: np.sum(weights) - 1
}
solution = minimize(portfolio_tracking_error, init_guess,
args=(dependent_variable, explanatory_variables,), method='SLSQP',
options={'disp': False},
constraints=(weights_sum_to_1,),
bounds=bounds)
weights = pd.Series(solution.x, index=explanatory_variables.columns)
return weights
```
The objective function is a very simple one-liner:
```python
def portfolio_tracking_error(weights, ref_r, bb_r):
"""
returns the tracking error between the reference returns
and a portfolio of building block returns held with given weights
"""
return tracking_error(ref_r, (weights*bb_r).sum(axis=1))
```
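The `tracking_error` helper called by `portfolio_tracking_error` is not shown here; it is presumably defined along these lines (a sketch — the actual risk-kit implementation may differ):

```python
import numpy as np

def tracking_error(r_a, r_b):
    """Square root of the sum of squared differences between
    two return series (the quantity the optimizer minimizes)."""
    return np.sqrt(((r_a - r_b) ** 2).sum())
```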
```
import numpy as np
import pandas as pd
import edhec_risk_kit_202 as erk
%load_ext autoreload
%autoreload 2
ind = erk.get_ind_returns()["2000":]
```
Let's construct a manager that invests 30% in Beer, 50% in Smoke, and 20% in other things that have an average return of 0% and an annualized vol of 15%.
```
mgr_r = 0.3*ind["Beer"] + .5*ind["Smoke"] + 0.2*np.random.normal(scale=0.15/(12**.5), size=ind.shape[0])
```
Now, assume we knew absolutely nothing about this manager and all we observed was the returns. How could we tell what she was invested in?
```
weights = erk.style_analysis(mgr_r, ind)*100
weights.sort_values(ascending=False).head(6).plot.bar()
```
Contrast this to the results of a regression. Because the model is in fact true (i.e. we really did construct the manager's returns out of the building blocks), the results are remarkably accurate. However, the negative coefficients are hard to interpret, and in real-life data they will be much larger. Still, when it works well, as in this artificial example, the results can be very accurate.
```
coeffs = erk.regress(mgr_r, ind).params*100
coeffs.sort_values().head()
```
Negative 4.5% in Household?
```
coeffs.sort_values(ascending=False).head(6).plot.bar()
```
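`erk.regress` comes from the course's risk kit, whose source isn't shown here; an equivalent unconstrained fit (without an intercept — the actual helper may add one) can be sketched with plain least squares:

```python
import numpy as np
import pandas as pd

def regress_unconstrained(dependent, explanatory):
    """Ordinary least squares of the dependent series on the explanatory
    columns, with no positivity or sum-to-one constraints."""
    coeffs, *_ = np.linalg.lstsq(explanatory.values, dependent.values, rcond=None)
    return pd.Series(coeffs, index=explanatory.columns)
```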
## Style Drift: Time-Varying Exposures using Style Analysis
One of the most common ways in which Sharpe Style Analysis can be used is to measure style drift. If you run the style analysis function over a rolling window of 1 to 5 years, you can extract changes in the style exposures of a manager.
We'll look at Rolling Windows in the next lab session.
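A rolling version could be sketched as follows; `style_fn` stands in for the `style_analysis` function above, and the 36-period window is an arbitrary default:

```python
import pandas as pd

def rolling_style_analysis(dependent, explanatory, style_fn, window=36):
    """Apply a style-analysis function over a rolling window of returns,
    collecting the weights by window end date."""
    weights = {}
    for end in range(window, len(dependent) + 1):
        r = dependent.iloc[end - window:end]
        x = explanatory.iloc[end - window:end]
        weights[dependent.index[end - 1]] = style_fn(r, x)
    return pd.DataFrame(weights).T
```

Plotting the resulting frame as a stacked area chart makes any drift in exposures easy to see.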
As an exercise to the student, download a set of returns from Yahoo Finance, and try to measure the style drift in your favorite fund manager. Use reliable Value and Growth ETFs such as "SPYG" and "SPYV" along with a SmallCap ETF such as "SLY" and a LargeCap ETF such as "OEF".
Alternatively, download the Fama-French research factors and use the Top and Bottom portfolios by Value (HML) and Size (SMB) to categorize mutual funds. This is very similar to the "Style Box" methodology employed by Morningstar and displayed on their website. Compare your results with theirs to see if they agree!
# Warning: Potential Misuse of Style Analysis
Style Analysis works best when the explanatory indices are in fact a good specification of what is happening. For instance, it usually gives you very useful and revealing insight if you use a stock market index (such as SPY) and other broad indices, ETFs or mutual funds (such as a Value Fund, a Growth Fund, an International Fund, a Bond Fund etc).
Part of the skill in extracting meaningful results is to pick the right set of explanatory variables.
However, a part of the challenge with Style Analysis is that it will _always_ return a portfolio. Although it is possible to develop a figure of merit of fit quality similar to an $R^2$, it will still always give you an answer, however unreasonable it might be, and it's not always obvious how much one can rely on the result.
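Such a figure of merit is not part of the style-analysis function above, but one hypothetical version could be computed as:

```python
import numpy as np

def style_r_squared(dependent, explanatory, weights):
    """R^2-like fit quality: the fraction of the variance of the observed
    series explained by the mimicking style portfolio."""
    mimicking = (weights * explanatory).sum(axis=1)
    return 1 - np.var(dependent - mimicking) / np.var(dependent)
```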
For instance, we can try to extract the major industries that Buffett invested in since 2000 as follows:
```
brka_m = pd.read_csv("brka_m.csv", index_col=0, parse_dates=True).to_period('M')
mgr_r_b = brka_m["2000":]["BRKA"]
weights_b = erk.style_analysis(mgr_r_b, ind)
weights_b.sort_values(ascending=False).head(6).round(4)*100
```
If we want to look at the last decade (2009-2018):
```
brk2009 = brka_m["2009":]["BRKA"]
ind2009 = ind["2009":]
erk.style_analysis(brk2009, ind2009).sort_values(ascending=False).head(6).round(4)*100
```
Should you believe the analysis? Probably not. However, when the specification is in fact accurate (as we saw in the artificially generated series), the results can be very revealing.
Demonstrates loading deck.gl as external JS and CSS components, rendering it as part of an app, and then linking it to Panel widgets.
```
import panel as pn
js_files = {'deck': 'https://cdn.jsdelivr.net/npm/deck.gl@8.1.12/dist.min.js',
'mapboxgl': 'https://api.mapbox.com/mapbox-gl-js/v1.7.0/mapbox-gl.js'}
css_files = ['https://api.mapbox.com/mapbox-gl-js/v1.7.0/mapbox-gl.css']
pn.extension(js_files=js_files, css_files=css_files, sizing_mode="stretch_width")
```
First, let's declare the cities we are interested in using Python:
```
cities = [
{"city":"San Francisco","state":"California","latitude":37.7751,"longitude":-122.4193},
{"city":"New York","state":"New York","latitude":40.6643,"longitude":-73.9385},
{"city":"Los Angeles","state":"California","latitude":34.051597,"longitude":-118.244263},
{"city":"London","state":"United Kingdom","latitude":51.5074,"longitude":-0.1278},
{"city":"Hyderabad","state":"India","latitude":17.3850,"longitude":78.4867}]
```
Next, let's declare an HTML `<div>` to render the plot into, then define the deck.gl script code to render a plot for those cities.
```
html = """
<div id="deckgl-container" style="height: 500px;width: 100%"></div>
<script type="text/javascript">
// Data
var CITIES = %s;
var deckgl = new deck.DeckGL({
container: 'deckgl-container',
mapboxApiAccessToken: 'pk.eyJ1IjoicGhpbGlwcGpmciIsImEiOiJjajM2bnE4MWcwMDNxMzNvMHMzcGV3NjlnIn0.976fZ1azCrTh50lEdZTpSg',
initialViewState: {
longitude: CITIES[0].longitude,
latitude: CITIES[0].latitude,
zoom: 10,
},
controller: true,
layers: [
new deck.ScatterplotLayer({
data: CITIES,
getPosition: d => [d.longitude, d.latitude],
radiusMinPixels: 10
})
],
});
</script>
""" % cities
deckgl = pn.pane.HTML(html, height=500)
```
Next we can declare a Panel widget and define a ``jslink`` to update the deck.gl plot whenever the widget state changes. The example is adapted from https://deck.gl/gallery/viewport-transition but replaces D3 widgets with Panel-based widgets.
```
widget = pn.widgets.RadioButtonGroup(options=[c["city"] for c in cities])
update_city = """
var d = CITIES[source.active];
deckgl.setProps({
initialViewState: {
longitude: d.longitude,
latitude: d.latitude,
zoom: 10,
transitionInterpolator: new deck.FlyToInterpolator({speed: 1.5}),
transitionDuration: 'auto'
}
});
"""
widget.jslink(deckgl, code={'active': update_city});
component = pn.Column(widget, deckgl)
component
```
## App
Let's wrap it into a nice template that can be served via `panel serve deckgl.ipynb`.
```
pn.template.FastListTemplate(site="Panel", title="Deck.gl", main=["This app **demonstrates loading deck.gl JS and CSS components** and then linking it to Panel widgets.", component]).servable();
```
# Solving Higher-Order ODEs
Teng-Jui Lin
Content adapted from UW AMATH 301, Beginning Scientific Computing, in Spring 2020.
- Solving higher-order ODEs (systems of first-order ODEs)
- Forward Euler
- Backward Euler
- `scipy` implementation
- Solving systems of first-order ODEs by [`scipy.integrate.solve_ivp()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html)
## Solving higher-order ODEs (system of ODEs)
A higher-order ODE can always be written as a system of first-order ODEs. We can then solve the first-order ODE system using forward Euler, backward Euler, or `scipy.integrate.solve_ivp()`.
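For example, a generic second-order ODE $\ddot{y} = f(t, y, \dot{y})$ becomes a first-order system by introducing $x_1 = y$ and $x_2 = \dot{y}$:

$$
\begin{cases}
\dot{x}_1 = x_2 \\
\dot{x}_2 = f(t, x_1, x_2)
\end{cases}
$$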
### Forward Euler
Forward Euler for a linear system has the familiar form
$$
\begin{aligned}
\mathbf{x_{k+1}} &= \mathbf{x_k} + \Delta t \mathbf{Ax_k} \\
&= (\mathbf{I} + \Delta t \mathbf{A})\mathbf{x_k}
\end{aligned}
$$
The method is stable when all of the absolute values of the eigenvalues of $\mathbf{I} + \Delta t \mathbf{A}$ are less than 1: $|\lambda| < 1$.
### Backward Euler
Backward Euler for a linear system has the familiar form
$$
\mathbf{x_{k+1} = x_k} + \Delta t \mathbf{Ax_{k+1}}
$$
that can be implemented as
$$
(\mathbf{I} - \Delta t \mathbf{A})\mathbf{x_{k+1}} = \mathbf{x_k}
$$
which can be solved efficiently with LU decomposition. The method is stable when all of the absolute values of the eigenvalues of $(\mathbf{I} - \Delta t \mathbf{A})^{-1}$ are less than 1: $|\lambda| < 1$.
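Since the matrix $\mathbf{I} - \Delta t \mathbf{A}$ is the same at every step, it pays to factor it once and reuse the factorization. A minimal sketch using `scipy.linalg.lu_factor`/`lu_solve` (a convenience alternative to the explicit $\mathbf{PLU}$ solves used in the implementation below):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def backward_euler_linear(A, x0, dt, n_steps):
    """Backward Euler for x' = Ax: factor (I - dt*A) once,
    then solve (I - dt*A) x_{k+1} = x_k at every step."""
    lu, piv = lu_factor(np.eye(len(A)) - dt * A)
    x = np.empty((len(A), n_steps + 1))
    x[:, 0] = x0
    for k in range(n_steps):
        x[:, k + 1] = lu_solve((lu, piv), x[:, k])
    return x
```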
### Implementation
**Problem Statement.** Linear pendulum.
The motion of a linear pendulum at small angles can be described by the linear second-order ODE
$$
\ddot{\theta} = -\dfrac{g}{L}\theta
$$
where $\theta$ is the angle of deflection of the pendulum from the vertical, with physical parameters $L = 1 \ \mathrm{m}$ and $g = 9.8 \ \mathrm{m/s^2}$.
The second-order ODE can be converted to a system of two first-order ODEs
$$
\begin{cases}
\dot{\theta} = \omega \\
\dot{\omega} = -\dfrac{g}{L}\theta
\end{cases}
$$
which is equivalent to
$$
\mathbf{\dot{x} = Ax}
$$
where
$$
\mathbf{x} =
\begin{bmatrix}
\theta \\ \omega
\end{bmatrix},
\mathbf{A} =
\begin{bmatrix}
0 & 1 \\
-\dfrac{g}{L} & 0
\end{bmatrix}
$$
For the initial conditions $\theta(0) = 0, \dot{\theta}(0) = \omega(0) = 0.5$, we will solve the system of ODEs over $t\in [0, 10] \ \mathrm{s}$ using numerical methods, then compare the results with its analytical solution of
$$
\mathbf{x}(t) =
\begin{bmatrix}
\theta(t) \\ \omega(t)
\end{bmatrix} =
\begin{bmatrix}
0.5 \sqrt{\frac{L}{g}} \sin(\sqrt{\frac{g}{L}}t) \\
0.5 \cos(\sqrt{\frac{g}{L}}t)
\end{bmatrix}.
$$
(a) Use forward Euler for linear system with $\Delta t = 0.005 \ \mathrm{s}$ to solve the system. Determine the stability of the method and compare the error at final time with the analytical solution.
(b) Use backward Euler for linear system with $\Delta t = 0.005 \ \mathrm{s}$ using LU decomposition. Determine the stability of the method and compare the error at final time with the analytical solution.
(c) Use [`scipy.integrate.solve_ivp()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to solve the system. Compare the error at final time with the analytical solution.
#### Forward Euler
```
import numpy as np
import matplotlib.pyplot as plt
import scipy
from scipy import integrate, linalg
# define physical constants
g = 9.8
l = 1
# define time array
t_initial = 0
t_final = 10
dt = 0.005
t = np.arange(t_initial, t_final+dt/2, dt)
t_len = len(t)
# define matrix and initial conditions
A = np.array([[0, 1], [-g/l, 0]])
x0 = np.array([0, 0.5])
x = np.zeros((2, t_len))
x[:, 0] = x0
# forward euler of linear system
for i in range(t_len - 1):
x[:, i+1] = x[:, i] + dt*A@x[:, i]
# compare with analytical soln
x_exact = lambda t : np.array([0.5 * np.sqrt(l/g) *np.sin(t*np.sqrt(g/l)),
0.5 * np.cos(t*np.sqrt(g/l))])
x_error = np.linalg.norm(x[:, -1] - x_exact(t_final))
print(f'Error = {x_error :.2f}')
# assess stability
stability_matrix = np.eye(len(A)) + dt*A
abs_eig = abs(np.linalg.eig(stability_matrix)[0])
if max(abs_eig) > 1:
print(f'Unstable with |lambda_max| = {max(abs_eig) :.4f}')
else:
print(f'Stable with |lambda_max| = {max(abs_eig) :.4f}')
# plot settings
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.rcParams.update({
'font.family': 'Arial', # Times New Roman, Calibri
'font.weight': 'normal',
'mathtext.fontset': 'cm',
'font.size': 18,
'lines.linewidth': 2,
'axes.linewidth': 2,
'axes.spines.top': False,
'axes.spines.right': False,
'axes.titleweight': 'bold',
'axes.titlesize': 18,
'axes.labelweight': 'bold',
'xtick.major.size': 8,
'xtick.major.width': 2,
'ytick.major.size': 8,
'ytick.major.width': 2,
'figure.dpi': 80,
'legend.framealpha': 1,
'legend.edgecolor': 'black',
'legend.fancybox': False,
'legend.fontsize': 14
})
fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(t, x_exact(t)[0], '--', label='Analytic')
ax.plot(t, x[0], label='Forward Euler')
ax.set_xlabel('$t \ [\mathrm{s}]$')
ax.set_ylabel('$\\theta \ [\mathrm{rad}]$')
ax.set_title('$\\theta(t)$')
ax.set_xlim(t_initial, t_final)
ax.set_ylim(-0.3, 0.3)
ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1.05))
```
▲ Forward Euler has first order error and the amplitude grows over time compared to the analytic solution.
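That growth can be quantified. The eigenvalues of $\mathbf{A}$ are $\pm i\sqrt{g/L}$, so each forward Euler step multiplies the solution amplitude by $|1 + \Delta t\,\lambda| = \sqrt{1 + \Delta t^2 g/L} > 1$. A quick sketch of the cumulative effect, reusing the constants above:

```
import numpy as np

g, l, dt, t_final = 9.8, 1.0, 0.005, 10.0
n_steps = round(t_final / dt)
# each step scales the amplitude by |1 + dt*lambda| with lambda = i*sqrt(g/l)
growth_per_step = np.sqrt(1.0 + dt**2 * g / l)
total_growth = growth_per_step**n_steps
print(f"growth per step: {growth_per_step:.6f}")
print(f"growth after {n_steps} steps: {total_growth:.4f}")
```

Over the full $10 \ \mathrm{s}$ run this compounds to roughly a $28\%$ amplitude increase, consistent with the drift visible in the plot.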
#### Backward Euler
```
# define physical constants
g = 9.8
l = 1
# define time array
t_initial = 0
t_final = 10
dt = 0.005
t = np.arange(t_initial, t_final+dt/2, dt)
t_len = len(t)
# define matrix and initial conditions
A = np.array([[0, 1], [-g/l, 0]])
x0 = np.array([0, 0.5])
x = np.zeros((2, t_len))
x[:, 0] = x0
P, L, U = scipy.linalg.lu(np.eye(len(A)) - dt*A)
# backward euler of linear system
for i in range(t_len - 1):
y = np.linalg.solve(L, P.T@x[:, i])  # scipy.linalg.lu returns P with (I - dt*A) = P@L@U, so apply P.T
x[:, i+1] = np.linalg.solve(U, y)
# compare with analytical soln
x_exact = lambda t : np.array([0.5 * np.sqrt(l/g) *np.sin(t*np.sqrt(g/l)),
0.5 * np.cos(t*np.sqrt(g/l))])
x_error = np.linalg.norm(x[:, -1] - x_exact(t_final))
print(f'Error = {x_error :.2f}')
# assess stability
stability_matrix = np.linalg.inv(np.eye(len(A)) - dt*A)
abs_eig = abs(np.linalg.eig(stability_matrix)[0])
if max(abs_eig) > 1:
print(f'Unstable with |lambda_max| = {max(abs_eig) :.4f}')
else:
print(f'Stable with |lambda_max| = {max(abs_eig) :.4f}')
fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(t, x_exact(t)[0], '--', label='Analytic')
ax.plot(t, x[0], label='Backward Euler')
ax.set_xlabel('$t \ [\mathrm{s}]$')
ax.set_ylabel('$\\theta \ [\mathrm{rad}]$')
ax.set_title('$\\theta(t)$')
ax.set_xlim(t_initial, t_final)
ax.set_ylim(-0.3, 0.3)
ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1.05))
```
▲ Backward Euler has first order error and the amplitude decays over time compared to the analytic solution.
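The decay has the same closed form. Backward Euler scales the amplitude each step by $1/|1 - \Delta t\,\lambda| = 1/\sqrt{1 + \Delta t^2 g/L} < 1$, the reciprocal of the forward Euler growth factor:

```
import numpy as np

g, l, dt, t_final = 9.8, 1.0, 0.005, 10.0
n_steps = round(t_final / dt)
# each step scales the amplitude by 1/|1 - dt*lambda| with lambda = i*sqrt(g/l)
decay_per_step = 1.0 / np.sqrt(1.0 + dt**2 * g / l)
total_decay = decay_per_step**n_steps
print(f"decay per step: {decay_per_step:.6f}")
print(f"decay after {n_steps} steps: {total_decay:.4f}")
```

After 2000 steps this compounds to roughly a $22\%$ amplitude loss, matching the damped curve in the plot.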
#### `scipy.integrate.solve_ivp()`
```
# define physical constants
g = 9.8
l = 1
# define time array
t_initial = 0
t_final = 10
dt = 0.005
t = np.arange(t_initial, t_final+dt/2, dt)
t_len = len(t)
# define initial conditions
x0 = np.array([0, 0.5])
# define ode system
dtheta_dt = lambda theta, omega : omega
domega_dt = lambda theta, omega: -g/l*theta
ode_syst = lambda t, z : np.array([dtheta_dt(*z), domega_dt(*z)])
# solve ode system
scipy_soln = scipy.integrate.solve_ivp(ode_syst, [t_initial, t_final], x0, t_eval=t).y
# compare with analytical soln
x_exact = lambda t : np.array([0.5 * np.sqrt(l/g) *np.sin(t*np.sqrt(g/l)),
0.5 * np.cos(t*np.sqrt(g/l))])
x_error = np.linalg.norm(scipy_soln[:, -1] - x_exact(t_final))
print(f'Error = {x_error :.4f}')
fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(t, x_exact(t)[0], '--', label='Analytic')
ax.plot(t, scipy_soln[0], label='scipy')
ax.set_xlabel('$t \ [\mathrm{s}]$')
ax.set_ylabel('$\\theta \ [\mathrm{rad}]$')
ax.set_title('$\\theta(t)$')
ax.set_xlim(t_initial, t_final)
ax.set_ylim(-0.3, 0.3)
ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1.05))
```
▲ The scipy implementation has fourth order error, agreeing with the analytic solution.
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2021 Lorena A. Barba, Pi-Yueh Chuang
# Multiple logistic regression
This is the fifth lesson of our module on _deep learning_, and it's perhaps surprising that we are still discussing linear models! Our approach is to build up to our final goal in an incremental fashion.
We have thus stayed in the more accessible setting of linear models while introducing key ingredients: gradient descent for optimization; automatic differentiation; multi-dimensional input variables (features); normalization (feature scaling); underfitting/overfitting and regularization. You've come a long way!
In this lesson, we want to give you a taste of more practical machine learning applications. We are going to use _multiple logistic regression_ (meaning, we have multiple features) for the problem of identifying defective metal-casting parts: a classification problem. Let's begin.
First, let's import modules that we'll need later:
```
from autograd import numpy
from autograd import grad
from matplotlib import pyplot
```
With the `numpy` submodule from `autograd`, instead of regular `numpy`, we can get automatic differentiation with the `grad` function.
This is where `autograd` keeps track of and applies the differentiation rules to NumPy functions. The only requirement is that we define Python _functions_ for the code portions we would like derivatives of.
## Images of metal-casting parts
In automated manufacturing, it's common to check the quality of products using computer vision. After taking a picture of a product, a machine learning model identifies if this product has defects or not. This lesson will use a multiple logistic regression model to identify defective metal-casting parts from pictures.
The source of the images of casting parts is in Reference [1]. To make the dataset smaller for this teaching material, we converted the images to grayscale and reduced the resolutions and the number of images. (With the original data, the training would take too long to run on a laptop.) We also transformed the data to `numpy` compressed array format (extension `.npz`). This file format is useful for saving several arrays together in one file.
Read about it in the [`numpy.savez()`](https://numpy.org/doc/stable/reference/generated/numpy.savez.html) documentation page.
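As a small, self-contained sketch of how such a file is made, here is a round trip with synthetic arrays (the array names just mirror the ones in the real data file):

```
import os
import tempfile
import numpy as np

# save two synthetic arrays into one .npz file, then read them back
ok = np.ones((3, 4))
bad = np.zeros((2, 4))
path = os.path.join(tempfile.mkdtemp(), "demo.npz")
np.savez(path, ok_images=ok, def_images=bad)
with np.load(path) as data:
    names = sorted(data.files)
    ok_back = data["ok_images"]
print(names)           # the stored array names
print(ok_back.shape)   # the shape survives the round trip
```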
Run the code below to load the image data. If you don't have the data locally, you can download it from this short URL: https://go.gwu.edu/engcomp6data5 — be sure to edit the path below, in that case. And do read the documentation page for [`numpy.load()`](https://numpy.org/doc/stable/reference/generated/numpy.load.html#numpy.load) if you need to.
```
# read in images and labels
with numpy.load("../data/casting_images.npz") as data:
ok_images = data["ok_images"]
def_images = data["def_images"]
type(ok_images)
ok_images.shape
```
The data file contains two datasets: `ok_images` and `def_images`, representing the images of casting parts looking okay and defective. Each dataset has a shape of `(number of images, total number of pixels)`.
You may wonder why we have the ***total number of pixels*** rather than ***pixels in x by pixels in y***. Originally, a grayscale image is a 2D array with each element representing the color value of the corresponding pixel. Instead of 2D arrays, we use 1D arrays (i.e., flattened) here. Using 1D arrays makes programming easier because we can use basic linear algebra (such as matrix-vector multiplications) to express and code our model.
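The flattening is a plain reshape. A tiny 4-by-4 stand-in image shows the round trip between the 2D picture and its 1D representation:

```
import numpy as np

res = 4
img2d = np.arange(res * res).reshape(res, res)   # a tiny fake grayscale image
flat = img2d.ravel()                             # the 1D form stored in the dataset
print(flat.shape)                                # one entry per pixel
print(np.array_equal(flat.reshape(res, res), img2d))  # reshape recovers the image
```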
The first thing to do whenever obtaining a dataset is to examine it:
```
n_ok_total = ok_images.shape[0]
res = int(numpy.sqrt(def_images.shape[1]))
print("Number of images without defects:", n_ok_total)
print("Image resolution: {} by {}".format(res, res))
n_def_total = def_images.shape[0]
print("Number of images with defects:", n_def_total)
```
We did not check whether the images in `def_images` have the same resolution as those in `ok_images` because we are pretty sure they do. However, as this is your first time seeing the dataset, it's never a bad idea to do a double-check.
We can use `pyplot.imshow` to examine the images. Don't forget to convert the flattened image arrays back to 2D arrays before plotting.
```
fig, axes = pyplot.subplots(2, 3, figsize=(8, 6), tight_layout=True)
axes[0, 0].imshow(ok_images[0].reshape((res, res)), cmap="gray")
axes[0, 1].imshow(ok_images[50].reshape((res, res)), cmap="gray")
axes[0, 2].imshow(ok_images[100].reshape((res, res)), cmap="gray")
axes[1, 0].imshow(ok_images[150].reshape((res, res)), cmap="gray")
axes[1, 1].imshow(ok_images[200].reshape((res, res)), cmap="gray")
axes[1, 2].imshow(ok_images[250].reshape((res, res)), cmap="gray")
fig.suptitle("Casting parts without defects", fontsize=20);
```
And some images of the casting parts with defects:
```
fig, axes = pyplot.subplots(2, 3, figsize=(8, 6), tight_layout=True)
axes[0, 0].imshow(def_images[0].reshape((res, res)), cmap="gray")
axes[0, 1].imshow(def_images[50].reshape((res, res)), cmap="gray")
axes[0, 2].imshow(def_images[100].reshape((res, res)), cmap="gray")
axes[1, 0].imshow(def_images[150].reshape((res, res)), cmap="gray")
axes[1, 1].imshow(def_images[200].reshape((res, res)), cmap="gray")
axes[1, 2].imshow(def_images[250].reshape((res, res)), cmap="gray")
fig.suptitle("Casting parts with defects", fontsize=20);
```
## Multiple logistic regression
Lesson 2 introduced you to logistic regression for binary classification based on a single feature. A logistic regression model can also be used to predict the probability of a multi-dimensional input (e.g., an image) being of class $1$ or $0$.
The numbers $1$ and $0$ represent digitized labels, classes, or categories. Humans naturally use strings to label or categorize things. Computationally, it is easier to use numbers to describe categories. In this lesson, we will use $1$ to represent defective casting parts and $0$ for normal parts. Our logistic model aims to predict the probability of a casting part being defective.
The only difference between this lesson and lesson 2 is the number of input features. You have seen multiple linear regression in lesson 3: the setting is similar here. If we have $N$ images during training, our model is:
$$
\begin{aligned}
\hat{y}^{(1)} &= \operatorname{logistic}\left(b + w_1 x_1^{(1)} + w_2 x_2^{(1)} + \cdots + w_{n} x_{n}^{(1)}\right) \\
\hat{y}^{(2)} &= \operatorname{logistic}\left(b + w_1 x_1^{(2)} + w_2 x_2^{(2)} + \cdots + w_{n} x_{n}^{(2)}\right) \\
\vdots & \\
\hat{y}^{(N)} &= \operatorname{logistic}\left(b + w_1 x_1^{(N)} + w_2 x_2^{(N)} + \cdots + w_{n} x_{n}^{(N)}\right) \\
\end{aligned}
$$
where the superscripts $(1)\dots(N)$ denote the $N$ images; $\hat{y}$ is the predicted probability of the corresponding image being defective; and $x_1, x_2, \dots, x_{n}$ represent the greyscale values for the $n$ pixels.
In matrix-vector form:
$$
\begin{bmatrix}\hat{y}^{(1)} \\ \vdots \\ \hat{y}^{(N)}\end{bmatrix} =
\operatorname{logistic}\left(
\begin{bmatrix}b \\ \vdots \\ b \end{bmatrix} +
\begin{bmatrix}
x_1^{(1)} & \cdots & x_{n}^{(1)} \\
\vdots & \ddots & \vdots \\
x_1^{(N)} & \cdots & x_{n}^{(N)}
\end{bmatrix}
\begin{bmatrix}w_1 \\ \vdots \\ w_{n} \end{bmatrix}
\right)
$$
<br />
$$
\mathbf{\hat{y}} =
\operatorname{logistic}\left(\mathbf{b} + X \mathbf{w}\right)
$$
The code for the logistic function is the same as in lesson 2, but now it also works with an array input.
```
def logistic(x):
"""Logistic/sigmoid function.
Arguments
---------
x : numpy.ndarray
The input to the logistic function.
Returns
-------
numpy.ndarray
The output.
Notes
-----
The function does not restrict the shape of the input array. The output
has the same shape as the input.
"""
return 1. / (1. + numpy.exp(-x))
```
And our multiple logistic regression model:
```
def logistic_model(x, params):
"""A logistic regression model.
A logistic regression is y = sigmoid(x * w + b), where the operator *
denotes a mat-vec multiplication.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
second element is a scalar (the intercept)
Returns
-------
probabilities : numpy.ndarray
The output is a 1D array with length n_samples.
"""
return logistic(numpy.dot(x, params[0]) + params[1])
```
The loss function is also the same as in lesson 2 for regular logistic regression, but for multiple features, plus we add the regularization term from lesson 4:
$$
\mathrm{loss} = - \sum_{i=1}^{N} y_{\text{true}}^{(i)}\log\left(\hat{y}^{(i)}\right) + \left(1-y_{\text{true}}^{(i)}\right)\log\left(1-\hat{y}^{(i)}\right) + \lambda\sum_{i=1}^{n}{w_i}^2
$$
Or, in vector form:
$$
\mathrm{loss} = - \left[
\mathbf{y}_{\text{true}}\cdot\log\left(\mathbf{\hat{y}}\right)+
\left(\mathbf{1}-\mathbf{y}_{\text{true}}\right)\cdot\log\left(\mathbf{1}-\mathbf{\hat{y}}\right)
\right] + \lambda\sum_{i=1}^{n}{w_i}^2
$$
where the bolded $\mathbf{1}$ represents a vector of ones.
```
def model_loss(x, true_labels, params, _lambda=1.0):
"""Calculate the predictions and the loss w.r.t. the true values.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
true_labels : numpy.ndarray
The true labels of the input images. Should be 1D and have length of
n_images.
params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
second element is a scalar.
_lambda : float
The weight of the regularization term. Default: 1.0
Returns
-------
loss : a scalar
The summed loss.
"""
pred = logistic_model(x, params)
loss = - (
numpy.dot(true_labels, numpy.log(pred+1e-15)) +
numpy.dot(1.-true_labels, numpy.log(1.-pred+1e-15))
) + _lambda * numpy.sum(params[0]**2)
return loss
```
##### Notes:
1. We added a tiny term `1e-15` inside the `log` calculation to avoid infinite values when `pred` and/or `1-pred` are zero.
2. The function `model_loss` combines the calculation of the model predictions and the loss. By doing so, we can easily calculate the gradients using `grad` from `autograd`, just like we did in lesson 2.
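A quick check on a boundary value shows why the small shift in note 1 matters:

```
import numpy as np

pred = np.array([0.0, 0.5, 1.0])
with np.errstate(divide="ignore"):
    raw = np.log(pred)          # -inf at pred = 0
safe = np.log(pred + 1e-15)     # finite everywhere
print(np.isinf(raw).any(), np.isfinite(safe).all())
```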
## Training, validation, and test datasets
The goal of a machine learning model is to predict something that it never sees during training. For this reason, we should evaluate the performance (e.g., accuracy) of a machine learning model against data that are not part of training. The data, however, should be similar to the training data.
Model optimization (i.e., model fitting) is referred to as _training_ the model in analogy to teaching humans. In school, students practice with take-home exercises. Throughout the semester, you have quizzes to monitor your learning progress. And at the end of the semester, teachers evaluate your performance using final exams. The problems in quizzes and final exams are different from take-home exercises but similar enough. A well-trained student should be able to solve them using the knowledge learned from take-home exercises.
Training a machine learning model is similar. We split our data into three datasets: **training**, **validation**, and **test** datasets. We use the training dataset to train the model and monitor the learning progress using the validation dataset during the training. At the end of the training, we use the test dataset to evaluate the model's final performance.
Updating the model parameters using gradient descent only requires the training dataset. Every several optimization iterations, we can evaluate the current performance using the validation dataset and adjust the training strategy in real-time accordingly. This adjustment is called **hyperparameter tuning**. We will talk about hyperparameter tuning in later lessons. On the other hand, if the training progress is satisfying and reaches our performance goal, we can stop the optimization iterations earlier.
We have no rigorous theory for how big the training, validation, and test datasets should be. Nevertheless, for a dataset as small as what we have in this lesson (i.e., 1300 images), a typical split is 60% for training, 20% for validation, and the remaining 20% for testing.
The code below splits the data with these proportions.
```
# numbers of images for validation (~ 20%)
n_ok_val = int(n_ok_total * 0.2)
n_def_val = int(n_def_total * 0.2)
print("Number of images without defects in validation dataset:", n_ok_val)
print("Number of images with defects in validation dataset:", n_def_val)
# numbers of images for test (~ 20%)
n_ok_test = int(n_ok_total * 0.2)
n_def_test = int(n_def_total * 0.2)
print("Number of images without defects in test dataset:", n_ok_test)
print("Number of images with defects in test dataset:", n_def_test)
# remaining images for training (~ 60%)
n_ok_train = n_ok_total - n_ok_val - n_ok_test
n_def_train = n_def_total - n_def_val - n_def_test
print("Number of images without defects in training dataset:", n_ok_train)
print("Number of images with defects in training dataset:", n_def_train)
```
After determining the numbers of images in the three datasets, we use [`numpy.split()`](https://numpy.org/doc/stable/reference/generated/numpy.split.html) to do the splitting. Be sure to read the documentation for this handy function.
```
ok_images = numpy.split(ok_images, [n_ok_val, n_ok_val+n_ok_test], 0)
def_images = numpy.split(def_images, [n_def_val, n_def_val+n_def_test], 0)
```
The final step is to combine the images with and without defects. We use `numpy.concatenate`.
```
images_val = numpy.concatenate([ok_images[0], def_images[0]], 0)
images_test = numpy.concatenate([ok_images[1], def_images[1]], 0)
images_train = numpy.concatenate([ok_images[2], def_images[2]], 0)
```
You can use `pyplot.imshow` to visualize the images in these three datasets, just like we did previously.
## Data normalization: z-score normalization
In lesson 3, you learned about data normalization and a technique called min-max scaling. Here, we introduce an alternative method called **z-score normalization**:
$$
z = \frac{x - \mu_{\text{train}}}{\sigma_{\text{train}}}
$$
Here, $\mu_{\text{train}}$ and $\sigma_{\text{train}}$ denote the mean value and the standard deviation of the training dataset. Remember that no matter whether $x$ represents training, validation, or test datasets, we always have to use the mean value and the standard deviation from the _training_ dataset.
```
# calculate mu and sigma
mu = numpy.mean(images_train, axis=0)
sigma = numpy.std(images_train, axis=0)
# normalize the training, validation, and test datasets
images_train = (images_train - mu) / sigma
images_val = (images_val - mu) / sigma
images_test = (images_test - mu) / sigma
```
##### Notes:
1. $\mu_{\text{train}}$ and $\sigma_{\text{train}}$ are the per-pixel mean and standard deviation taken across the different images. Because the images are stored flattened, `mu` and `sigma` are 1D arrays with one entry per pixel, i.e., length $128\times128 = 16384$ for this particular dataset.
2. After the z-score normalization, the resulting training dataset has mean value of $0$ and standard deviation of $1$ for all pixels.
3. We normalize the validation and test datasets with the training dataset's mean and standard deviation. Even so, the normalized validation and test datasets should end up with means and standard deviations close to 0 and 1, because validation and test data should be similar to the training data.
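Note 2 is easy to verify on synthetic data standing in for `images_train`:

```
import numpy as np

rng = np.random.default_rng(42)
train = rng.normal(5.0, 2.0, size=(200, 16))   # synthetic stand-in for the image matrix
mu = np.mean(train, axis=0)
sigma = np.std(train, axis=0)
z = (train - mu) / sigma
print(np.allclose(z.mean(axis=0), 0.0, atol=1e-12))  # per-pixel means are ~0
print(np.allclose(z.std(axis=0), 1.0))               # per-pixel std devs are ~1
```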
## Creating labels/classes
We now have images as input data to train a model. We will need to provide the correct labels for these images, corresponding to defective and normal parts. It's analogous to your teacher giving you the solutions to take-home exercises, for you to practice.
To label the images, we use $1$ to represent defective parts, and $0$ for normal parts. The code below creates `labels_train`, `labels_val`, and `labels_test`, which hold the labels for the images in training, validation, and test datasets.
```
# labels for training data
labels_train = numpy.zeros(n_ok_train+n_def_train)
labels_train[n_ok_train:] = 1.
# labels for validation data
labels_val = numpy.zeros(n_ok_val+n_def_val)
labels_val[n_ok_val:] = 1.
# labels for test data
labels_test = numpy.zeros(n_ok_test+n_def_test)
labels_test[n_ok_test:] = 1.
```
Also, recall that a logistic model only predicts a casting part's probability of having defects. We need to set a decision threshold and say that if a probability is higher than this threshold, the corresponding casting part is defective. Like in lesson 2, we use $0.5$ for the threshold here:
```
def classify(x, params):
"""Use a logistic model to label data with 0 and/or 1.
Arguments
---------
x : numpy.ndarray
The input of the model. The shape should be (n_images, n_total_pixels).
params : a tuple/list of two elements
The first element is a 1D array with shape (n_total_pixels). The
second element is a scalar.
Returns
-------
labels : numpy.ndarray
The predicted labels. The shape is the same as that of the underlying probabilities.
Notes
-----
This function only works with multiple images, i.e., x has a shape of
(n_images, n_total_pixels).
"""
probabilities = logistic_model(x, params)
labels = (probabilities >= 0.5).astype(float)
return labels
```
## Evaluating model performance: F-score
Before we train our model, we need to define some metrics to evaluate the performance against the validation data _during_ training. We consider four possible outcomes of the prediction results:
1. **True-positive (TP)**: the model predicts that a part has defects, and it does.
2. **False-positive (FP)**: the model predicts that a part has defects, but it does not.
3. **True-negative (TN)**: the model predicts that a part does not have defects, and it does not.
4. **False-negative (FN)**: the model predicts that a part does not have defects, but it does.
A table called a [**confusion matrix**](https://en.wikipedia.org/wiki/Confusion_matrix) or **error matrix** summarizes the four outcomes:
| | w/ defects (predicted) | w/o defects (predicted) |
|---------------------------|------------------------|-------------------------|
| w/ defects (true answer) | $N_{TP}$ | $N_{FN}$ |
| w/o defects (true answer) | $N_{FP}$ | $N_{TN}$ |
$N_{TP}$, $N_{FP}$, $N_{FN}$, and $N_{TN}$ denote the numbers of images with each of the four outcomes.
Using an error matrix, we can define several metrics to evaluate the performance; the most basic ones are **precision** and **recall**.
Precision measures how many parts actually have defects among those predicted to be defective. Recall means how many defective parts are actually caught by the model.
$$
\mathrm{precision}\equiv\frac{\text{number of defective parts identified by the model}}{\text{predicted total number of defective parts}} = \frac{N_{TP}}{N_{TP}+N_{FP}}
$$
$$
\mathrm{recall}\equiv\frac{\text{number of defective parts identified by the model}}{\text{total number of defective parts}} = \frac{N_{TP}}{N_{TP}+N_{FN}}
$$
The two metrics together measure the model performance.
For example, a model with low precision but high recall successfully catches most defective parts, but also misjudges many normal parts. This causes many normal parts to be discarded and increases the cost of manufacturing.
Using a single number to evaluate the performance is desirable, and thus we define the **F-score** as a metric combining both precision and recall into one single value:
$$
\text{F-score}\equiv\frac{\left(1+\beta^2\right)\mathrm{precision}\times\mathrm{recall}}{\beta^2 \mathrm{precision} + \mathrm{recall}}
$$
$\beta$ is a user-defined coefficient representing the weight of recall. A higher $\beta$ means recall is more important than precision, and vice versa. When $\beta=1$, we don't have any preference over precision or recall. Let's pick that for this lesson.
```
def performance(predictions, answers, beta=1.0):
"""Calculate precision, recall, and F-score.
Arguments
---------
predictions : numpy.ndarray of integers
The predicted labels.
answers : numpy.ndarray of integers
The true labels.
beta : float
A coefficient representing the weight of recall.
Returns
-------
precision, recall, score : float
Precision, recall, and F-score, respectively.
"""
true_idx = (answers == 1) # the location where the answers are 1
false_idx = (answers == 0) # the location where the answers are 0
# true positive: answers are 1 and predictions are also 1
n_tp = numpy.count_nonzero(predictions[true_idx] == 1)
# false positive: answers are 0 but predictions are 1
n_fp = numpy.count_nonzero(predictions[false_idx] == 1)
# true negative: answers are 0 and predictions are also 0
n_tn = numpy.count_nonzero(predictions[false_idx] == 0)
# false negative: answers are 1 but predictions are 0
n_fn = numpy.count_nonzero(predictions[true_idx] == 0)
# precision, recall, and f-score
precision = n_tp / (n_tp + n_fp)
recall = n_tp / (n_tp + n_fn)
score = (
(1.0 + beta**2) * precision * recall /
(beta**2 * precision + recall)
)
return precision, recall, score
```
A higher F-score is better. A perfect model has F-score of one (or $100\%$) regardless of $\beta$, which means both precision and recall are $100\%$.
## Initialization
Let's initialize the parameters to zero, and use `grad` to compute derivatives with respect to the parameters, as before:
```
# a function to get the gradients of a logistic model
gradients = grad(model_loss, argnum=2)
# initialize parameters
w = numpy.zeros(images_train.shape[1], dtype=float)
b = 0.
```
Before training, we calculate the initial F-score against the test dataset to see how much the prediction improves after training.
```
# initial accuracy
pred_labels_test = classify(images_test, (w, b))
perf = performance(pred_labels_test, labels_test)
print("Initial precision: {:.1f}%".format(perf[0]*100))
print("Initial recall: {:.1f}%".format(perf[1]*100))
print("Initial F-score: {:.1f}%".format(perf[2]*100))
```
You may be surprised at the not-so-bad initial performance. In fact, this initial performance is skewed by the improper design of the datasets.
Due to the all-zero initial parameters, the model initially predicts that all parts have defects. In other words, the variable `pred_labels_test` is an array of ones. Now, remember that we have 103 normal parts and 156 defective parts in our test dataset. With an initial prediction of all parts being defective, we have the following error matrix:
| | w/ defects (predicted) | w/o defects (predicted) |
|---------------------------|------------------------|-------------------------|
| w/ defects (true answer) | $N_{TP}=156$ | $N_{FN}=0$ |
| w/o defects (true answer) | $N_{FP}=103$ | $N_{TN}=0$ |
The initial precision is about $60.2\%$, and the initial recall is exactly $100\%$. The initial F-score is $75.2\%$.
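Those percentages follow directly from the error matrix above; with $\beta=1$ the F-score reduces to $2\,\mathrm{precision}\times\mathrm{recall}\,/\,(\mathrm{precision}+\mathrm{recall})$:

```
# precision, recall, and F-score (beta = 1) for the initial all-ones prediction
n_tp, n_fp, n_fn = 156, 103, 0
precision = n_tp / (n_tp + n_fp)
recall = n_tp / (n_tp + n_fn)
f_score = 2 * precision * recall / (precision + recall)
print(f"{precision:.1%} {recall:.1%} {f_score:.1%}")  # 60.2% 100.0% 75.2%
```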
The high initial performance comes from having more defective parts than normal parts in the test dataset. Designing a proper ratio between data labeled with 1 and 0 is therefore critical. It's like designing and answering true-false questions of an exam. If the correct answers to a set of true-false questions are mostly true, and a lazy student guesses true for all questions and finishes the quiz in one minute, they can get a very high score! Yet that score does not truly represent the student's knowledge level.
In machine learning, test and validation data should mimic real-world data. For example, if 98% of production from a casting factory meets the quality criterion, using a test dataset with 156 defective and 103 normal parts may be improper.
In this lesson, we will leave the datasets as-is. Designing a good dataset is application-dependent: it requires deep knowledge and experience in the corresponding field of application.
## Training/optimization
In the optimization loop (using gradient descent), we'll monitor the progress of training using the validation dataset. If the validation loss is not changing much, we'll stop the training.
Here, we only use validation loss to control the optimization. In real applications, it's common to use a combination of validation loss, accuracy, and other metrics to control the optimization.
```
# learning rate
lr = 1e-5
# a variable for the change in validation loss
change = numpy.inf
# a counter for optimization iterations
i = 0
# a variable to store the validation loss from the previous iteration
old_val_loss = 1e-15
# keep running if:
# 1. we still see significant changes in validation loss
# 2. iteration counter < 10000
while change >= 1e-5 and i < 10000:
# calculate gradients and use gradient descents
grads = gradients(images_train, labels_train, (w, b))
w -= (grads[0] * lr)
b -= (grads[1] * lr)
# validation loss
val_loss = model_loss(images_val, labels_val, (w, b))
# calculate f-scores against the validation dataset
pred_labels_val = classify(images_val, (w, b))
score = performance(pred_labels_val, labels_val)
# calculate the change in validation loss
change = numpy.abs((val_loss-old_val_loss)/old_val_loss)
# update the counter and old_val_loss
i += 1
old_val_loss = val_loss
# print the progress every 10 steps
if i % 10 == 0:
print("{}...".format(i), end="")
print("")
print("")
print("Upon optimization stopped:")
print(" Iterations:", i)
print(" Validation loss:", val_loss)
print(" Validation precision:", score[0])
print(" Validation recall:", score[1])
print(" Validation F-score:", score[2])
print(" Change in validation loss:", change)
```
We didn't use the validation F-score to stop the optimization because improving the predicted probability does not necessarily improve the predicted labels. For example, suppose the probabilities predicted by the logistic model during the last three iterations are $0.1$, $0.2$, and $0.3$. In that case, the predicted labels are still $0$, $0$, and $0$ for the three iterations. Hence monitoring F-score (which relies on predicted labels) will cause the optimization to stop too early while the model is still improving.
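A tiny numerical example of that argument: the predicted probabilities improve from one iteration to the next, but none crosses the $0.5$ threshold, so the labels (and any metric built on them) never change:

```
import numpy as np

# probabilities from three successive iterations, all below the 0.5 threshold
all_labels = []
for probs in ([0.1], [0.2], [0.3]):
    labels = (np.array(probs) >= 0.5).astype(float)
    all_labels.append(float(labels[0]))
print(all_labels)  # the thresholded label stays 0 every time
```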
Finally, let's examine the model's performance using the test dataset and see if it's better than the initial performance.
```
# final accuracy
pred_labels_test = classify(images_test, (w, b))
perf = performance(pred_labels_test, labels_test)
print("Final precision: {:.1f}%".format(perf[0]*100))
print("Final recall: {:.1f}%".format(perf[1]*100))
print("Final F-score: {:.1f}%".format(perf[2]*100))
```
Bravo! The precision improved from $60.2\%$ to $88.0\%$, which means when our model says a part has defects, there is an $88.0\%$ chance it is indeed defective. The recall dropped from $100\%$ to $84.6\%$, which means our model misses about $15\%$ of the defective parts. The F-score, representing overall accuracy, improved from $75.2\%$ to $86.3\%$.
##### Final note
No model is perfect and it's not realistic to expect $100\%$ accuracy. How accurate a model should be to consider it a good model is problem-dependent. Still, an F-score of $86.3\%$ does not seem very exciting. In this example, it means about $15\%$ of defective products slip through and may be handed over to customers.
In a later lesson, we will improve the performance by replacing the multiple logistic regression model with something more interesting: a **neural network** model.
## What we've learned
- Turn an image into a vector of grayscale values to use it as input data.
- Set up a classification problem from multi-dimensional feature vectors.
- Apply multiple logistic regression to identify defective metal-casting parts.
- Split data into training, validation, and test datasets to assess model performance.
- Normalize the data using z-score.
- Evaluate the performance of a classification model using F-score.
## Reference
1. Kantesaria, N., Vaghasia, P., Hirpara, J., & Bhoraniya, R., (2020). Casting product image data for quality inspection. Kaggle. https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product
```
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
```
| github_jupyter |
## Single Track Demo
Process a single ATL03 granule using SlideRule's ATL06-SR algorithm and compare the results to the existing ATL06 data product.
### What is demonstrated
* The `icesat2.atl06` API is used to perform a SlideRule processing request of a single ATL03 granule
* The `icesat2.h5` API is used to read existing ATL06 datasets
* The `matplotlib` package is used to plot the elevation profile of all three tracks in the granule (with the first track overlaid with the expected profile)
* The `cartopy` package is used to produce different plots representing the geolocation of the gridded elevations produced by SlideRule.
### Points of interest
Most use cases for SlideRule rely on the higher-level `icesat2.atl06p` API, which operates on a region of interest; this notebook instead shows some of SlideRule's capabilities for working on individual granules.
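The `geodist` helper defined in the next cell is the haversine great-circle formula with an Earth radius of 6373 km; here is a standalone sketch of the same computation (illustrative, independent of SlideRule):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6373.0):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# One degree of longitude at the equator is about 111.2 km.
print(round(haversine_km(0, 0, 0, 1), 1))  # → 111.2
```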
```
import sys
import pandas as pd
import numpy as np
import math
from sliderule import icesat2
# Configure Session #
icesat2.init("icesat2sliderule.org", True)
resource = "_20181019065445_03150111_003_01.h5"
#
# Distance between two coordinates
#
def geodist(lat1, lon1, lat2, lon2):
lat1 = math.radians(lat1)
lon1 = math.radians(lon1)
lat2 = math.radians(lat2)
lon2 = math.radians(lon2)
dlon = lon2 - lon1
dlat = lat2 - lat1
dist = math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2)**2
dist = 2.0 * math.atan2(math.sqrt(dist), math.sqrt(1.0 - dist))
dist = 6373.0 * dist
return dist
#
# ATL06 Read Request
#
def expread(resource, asset):
# Read ATL06 Data
segments = icesat2.h5("/gt1r/land_ice_segments/segment_id", resource, asset)
heights = icesat2.h5("/gt1r/land_ice_segments/h_li", resource, asset)
latitudes = icesat2.h5("/gt1r/land_ice_segments/latitude", resource, asset)
longitudes = icesat2.h5("/gt1r/land_ice_segments/longitude", resource, asset)
# Build Dataframe of SlideRule Responses
lat_origin = latitudes[0]
lon_origin = longitudes[0]
distances = [geodist(lat_origin, lon_origin, latitudes[i], longitudes[i]) for i in range(len(heights))]
df = pd.DataFrame(data=list(zip(heights, distances, latitudes, longitudes)), index=segments, columns=["h_mean", "distance", "latitude", "longitude"])
# Filter Dataframe
df = df[df["h_mean"] < 25000.0]
df = df[df["h_mean"] > -25000.0]
df = df[df["distance"] < 4000.0]
# Return DataFrame
print("Retrieved {} points from ATL06, returning {} points".format(len(heights), len(df.values)))
return df
#
# SlideRule Processing Request
#
def algoexec(resource):
# Build ATL06 Request
    parms = {
        "cnf": 4,       # minimum photon confidence level
        "ats": 20.0,    # minimum along-track spread (m)
        "cnt": 10,      # minimum photon count per segment
        "len": 40.0,    # length of each along-track segment (m)
        "res": 20.0,    # posting resolution / step distance (m)
        "maxi": 1       # maximum iterations of the fitting algorithm
    }
# Request ATL06 Data
rsps = icesat2.atl06(parms, resource, as_numpy=False)
# Calculate Distances
lat_origin = rsps["lat"][0]
lon_origin = rsps["lon"][0]
distances = [geodist(lat_origin, lon_origin, rsps["lat"][i], rsps["lon"][i]) for i in range(len(rsps["h_mean"]))]
# Build Dataframe of SlideRule Responses
df = pd.DataFrame(data=list(zip(rsps["h_mean"], distances, rsps["lat"], rsps["lon"], rsps["spot"])), index=rsps["segment_id"], columns=["h_mean", "distance", "latitude", "longitude", "spot"])
# Return DataFrame
print("Reference Ground Tracks: {} to {}".format(min(rsps["rgt"]), max(rsps["rgt"])))
print("Cycle: {} to {}".format(min(rsps["cycle"]), max(rsps["cycle"])))
print("Retrieved {} points from SlideRule".format(len(rsps["h_mean"])))
return df
# Execute SlideRule Algorithm
act = algoexec("ATL03"+resource)
# Read ATL06 Expected Results
exp = expread("ATL06"+resource, "atlas-s3")
# Build Standard (Expected) Dataset
standard1 = exp.sort_values(by=['distance'])
# Build Track (Actual) Datasets
track1 = act[act["spot"].isin([1, 2])].sort_values(by=['distance'])
track2 = act[act["spot"].isin([3, 4])].sort_values(by=['distance'])
track3 = act[act["spot"].isin([5, 6])].sort_values(by=['distance'])
```
## Matplotlib Plots
```
# Import MatPlotLib Package
import matplotlib.pyplot as plt
# Create Elevation Plot
fig = plt.figure(num=None, figsize=(12, 6))
# Plot Track 1 Elevations
ax1 = plt.subplot(131)
ax1.set_title("Along Track 1 Elevations")
ax1.plot(track1["distance"].values, track1["h_mean"].values, linewidth=1.0, color='b')
ax1.plot(standard1["distance"].values, standard1["h_mean"].values, linewidth=1.0, color='g')
# Plot Track 2 Elevations
ax2 = plt.subplot(132)
ax2.set_title("Along Track 2 Elevations")
ax2.plot(track2["distance"].values, track2["h_mean"].values, linewidth=1.0, color='b')
# Plot Track 3 Elevations
ax3 = plt.subplot(133)
ax3.set_title("Along Track 3 Elevations")
ax3.plot(track3["distance"].values, track3["h_mean"].values, linewidth=1.0, color='b')
# Show Plot
plt.show()
```
## Cartopy Plots
```
import cartopy
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
# Create PlateCarree Plot
fig = plt.figure(num=None, figsize=(24, 12))
################################
# add global plot
################################
ax1 = plt.subplot(121,projection=cartopy.crs.PlateCarree())
ax1.set_title("Ground Tracks")
# add coastlines with filled land and lakes
ax1.add_feature(cartopy.feature.LAND, zorder=0, edgecolor='black')
ax1.add_feature(cartopy.feature.LAKES)
ax1.set_extent((-180,180,-90,90),crs=cartopy.crs.PlateCarree())
# format grid lines
gl = ax1.gridlines(crs=cartopy.crs.PlateCarree(), draw_labels=True, linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlocator = mticker.FixedLocator([-180, -120, -60, 0, 60, 120, 180])
gl.ylocator = mticker.FixedLocator([-90, -60, -30, 0, 30, 60, 90])
# plot ground tracks
ax1.plot(track1["longitude"], track1["latitude"], linewidth=1.5, color='r',zorder=2, transform=cartopy.crs.Geodetic())
################################
# add zoomed plot
################################
ax2 = plt.subplot(122,projection=cartopy.crs.PlateCarree())
ax2.set_title("Ground Tracks")
# add coastlines with filled land and lakes
ax2.add_feature(cartopy.feature.LAND, zorder=0, edgecolor='black')
ax2.add_feature(cartopy.feature.LAKES)
ax2.set_extent((-80,-60,-90,-70),crs=cartopy.crs.PlateCarree())
# format grid lines
gl = ax2.gridlines(crs=cartopy.crs.PlateCarree(), draw_labels=True, linewidth=1, color='gray', alpha=0.5, linestyle='--')
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
# plot ground tracks
ax2.plot(track1["longitude"], track1["latitude"], linewidth=1.5, color='r',zorder=2, transform=cartopy.crs.Geodetic())
# show plot
plt.show()
# Select Projection
#p = cartopy.crs.PlateCarree()
#p = cartopy.crs.LambertAzimuthalEqualArea()
p = cartopy.crs.SouthPolarStereo()
# Create SouthPolar Plot
fig = plt.figure(num=None, figsize=(12, 6))
ax1 = plt.axes(projection=p)
ax1.gridlines()
# Plot Ground Tracks
ax1.set_title("Ground Tracks")
ax1.plot(track1["longitude"].values,track1["latitude"].values,linewidth=1.5, color='r',zorder=2, transform=cartopy.crs.Geodetic())
# Plot Bounding Box
box_lon = [-70, -70, -65, -65, -70]
box_lat = [-80, -82.5, -82.5, -80, -80]
ax1.plot(box_lon, box_lat, linewidth=1.5, color='b', zorder=2, transform=cartopy.crs.Geodetic())
# add coastlines with filled land and lakes
ax1.add_feature(cartopy.feature.LAND, zorder=0, edgecolor='black')
ax1.add_feature(cartopy.feature.LAKES)
# add margin around plot for context
ax1.set_xmargin(1.0)
ax1.set_ymargin(1.0)
# show plot
plt.show()
```
| github_jupyter |
```
import random
import http.client
import urllib
import urllib3
import time
from redis import Redis
from datetime import datetime
import pymysql
# Connect to the database
# Connect to MySQL: host is the server address; port can be omitted when
# MySQL listens on the default 3306; user/password are the login credentials;
# db selects the database and charset the character set.
connection = pymysql.connect(host='localhost',
user='root',
password='131415',
db='liuyu',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
ONLINE_LAST_MINUTES = 5
redis = Redis()
host = "106.ihuyi.com"
sms_send_uri = "/webservice/sms.php?method=Submit"
# APIID: log in to the User Center -> SMS verification -> Product overview -> API info -> APIID
account = "C48856660"
# APIKEY: log in to the User Center -> SMS verification -> Product overview -> API info -> APIKEY
password = "7505ea7957222ed2fe6fc2958311cdbd"
class zhuCe(object):
    def __init__(self):
        pass
    def youxiang(self):
        print("Please enter your email address")
        # store the value under a different name so the method is not shadowed
        self.email = input("")
        print("The email address you entered is:", self.email)
    def mima(self):
        print("Please enter a password")
        self.mima_1 = input("")
        print("Please enter the password again to confirm")
        self.mima_2 = input("")
        if self.mima_1 == self.mima_2:
            print("Password set successfully")
            self.pwd = self.mima_1
        else:
            print("The two passwords do not match!!!")
            self.mima()
    def mark_online(self, user_id):  # mark a user as online
        now = int(time.time())
        expires = now + ONLINE_LAST_MINUTES * 60
        print(expires)
        all_users_key = 'online-users/%d' % (now // 60)
        user_key = 'user-activity/%s' % user_id
        print(all_users_key, user_key)
        p = redis.pipeline()
        p.sadd(all_users_key, user_id)
        p.set(user_key, now)
        p.expireat(all_users_key, expires)
        self.q = p.execute()
    def yanzheng(self):
        for i in range(1, 4):
            yanzheng_1 = random.randrange(1000, 9999)
            print("Please enter the verification code:", yanzheng_1)
            yanzheng_2 = int(input(""))
            if yanzheng_1 == yanzheng_2:
                print("Verification passed")
                self.duanxin()
                break
    def duanxin(self):
        print("Please enter your mobile number")
        self.shoujihao = input("")
        self.mark_online(self.email)
        if self.q[2] == True:
            print("Please enter the SMS verification code")
            self.send_sms()
            self.yanzhengma()
        else:
            print("Verification code timed out!!!")
    def send_sms(self):
        self.mobile = self.shoujihao
        text = "Your verification code is 121254. Do not share it with anyone."
        params = urllib.parse.urlencode({'account': account, 'password': password, 'content': text, 'mobile': self.mobile, 'format': 'json'})
        headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
        conn = http.client.HTTPConnection(host, port=80, timeout=30)
        conn.request("POST", sms_send_uri, params, headers)
        response = conn.getresponse()
        response_str = response.read()
        conn.close()
    def yanzhengma(self):
        self.phone = input("")
        self.T = False
        if self.phone == "121254":
            print("Registration successful")
            self.T = True
            self.zhuijia()
        else:
            print("Registration failed")
            exit(0)
    def zhuijia(self):
        cursor = connection.cursor()
        sql = "INSERT INTO liuyu(id,passward) VALUES (%s, %s);"
        # execute the SQL statement with plain scalar parameters
        cursor.execute(sql, [self.email, self.pwd])
        connection.commit()
        #cursor.close()
qwe=zhuCe()
qwe.youxiang()
qwe.mima()
qwe.yanzheng()
import matplotlib.pyplot as plt    # plt is used to display images
import matplotlib.image as mpimg   # mpimg is used to read images
import numpy as np
wangzhe = mpimg.imread('wangzhe.jpg')  # read an image from the same directory as the code
# at this point the image is already a np.array and can be processed freely
luban = mpimg.imread('luban.jpg')
dianwei = mpimg.imread('dianwei.jpg')
zhaoyun = mpimg.imread('zhaoyun.jpg')
jiazai = mpimg.imread('jiazai.jpg')
wangzhe.shape  #(512, 512, 3)
dianwei.shape  #(512, 512, 3)
zhaoyun.shape  #(512, 512, 3)
jiazai.shape   #(512, 512, 3)
plt.imshow(wangzhe)  # display the image
plt.axis('off')      # hide the axes
plt.show()
class Wangzhe(object):
    def __init__(self):
        pass
    def renji(self):
        print('Please choose player-vs-AI or multiplayer battle!')
        res = input('Input: ')
        print("You chose", res)
        self.res = res
    def tiaoxuan(self):
        print('Please pick one character from Dianwei, Zhaoyun and Luban!')
        ren = input('Input: ')
        print("You chose", ren)
        self.ren = ren
    def xianshizhanli(self):
        if self.ren == 'Dianwei':
            plt.imshow(dianwei)  # display the image
            plt.axis('off')      # hide the axes
            plt.show()
            print("Dianwei has an attack power of 10000 and a defense of 5000")
        elif self.ren == 'Zhaoyun':
            plt.imshow(zhaoyun)  # display the image
            plt.axis('off')      # hide the axes
            plt.show()
            print("Zhaoyun has an attack power of 20000 and a defense of 2000")
        else:
            plt.imshow(luban)    # display the image
            plt.axis('off')      # hide the axes
            plt.show()
            print("Luban has an attack power of 15000 and a defense of 3500")
    def renwuqueding(self):
        print("Your character is confirmed; the system will now generate an opponent at random")
        q = np.random.choice(['Dianwei', 'Zhaoyun', 'Luban'])
        print('Your opponent is', q)
    def kaishi(self):
        print("Please type 'start'")
        w = input('Input: ')
        if w == "start":
            print("Input accepted")
        else:
            print("Invalid input!")
    def jiazai(self):
        print('Loading, please wait!')
        plt.imshow(jiazai)  # display the loading screen image
        plt.axis('off')     # hide the axes
        plt.show()
qwe = Wangzhe()
qwe.renji()
qwe.tiaoxuan()
qwe.xianshizhanli()
qwe.renwuqueding()
qwe.kaishi()
qwe.jiazai()
class Name(object):
    def __init__(self, a, b, c):
        self.__a = a
        self.__b = b
        self.__c = c
        self.__d = 0
    @property
    def A(self):
        return self.__a
    @A.setter
    def A(self, a1):
        self.__a = a1
    @property
    def B(self):
        return self.__b
    @B.setter
    def B(self, b1):
        self.__b = b1
    @property
    def C(self):
        return self.__c
    @C.setter
    def C(self, c1):
        self.__c = c1
    def play(self):
        print("Sum of the three values:", self.__a + self.__b + self.__c)
qwe = Name(4, 5, 6)
qwe.play()
qwe.A = 10
qwe.B = 20
qwe.play()
import random
import pymysql
conn = pymysql.connect(user="root",password="zwx123",database="youxiang",charset="utf8")
class Regist(object):
    """
    Implement registration, 163-mailbox style.
    """
    def account(self):
        """
        Input account.
        """
        # Validate the email address.
        # input() returns a string.
        email = input('Please enter your email: >>')
        self.email = email
        print(' The email you entered is: %s' % email)
        self.password()
    def password(self):
        """
        Input password.
        """
        # The password must be 6-20 characters long and
        # contain upper-case and lower-case letters as well as digits.
        for _ in range(4):
            password_1 = input('Please enter your password: >>')
            password_2 = input('Please confirm your password: >>')
            if password_1 == password_2:
                print('Password confirmed')
                self.verfily()
                # store under a different name so the method is not shadowed
                self.pwd = password_1
                break
            else:
                print('The two passwords do not match')
        else:
            print('Too many attempts; you are probably a bot')
    def verfily(self):
        """
        Verification code check.
        """
        # A real implementation might mix letters and digits,
        # or pose a simple arithmetic question instead.
        for _ in range(3):
            number = random.randrange(1000, 9999)
            print('The verification code is: %d' % number)
            number_2 = input('Enter the verification code: >>')
            if number == int(number_2):
                print('Registration successful')
                break
            else:
                print('Wrong verification code')
        else:
            print('Bot detected')
    def zhuijia(self):
        cursor = conn.cursor()
        sql = "INSERT INTO userinfo(name,pwd) VALUES (%s, %s);"
        # execute the SQL statement with plain scalar parameters
        cursor.execute(sql, [self.email, self.pwd])
        conn.commit()
        #cursor.close()
def main():
    regist = Regist()
    regist.account()
    regist.zhuijia()
main()
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import xarray as xr
path='../data/output/'
#% % mask
mask=xr.open_dataset(path+'../LAND_MASK.CRI_360x180.nc')
# mask.land_mask.plot()
# mask=mask.sortby('lat',ascending=False)
oceanmask=np.array(mask.land_mask)
oceanmask[oceanmask>0]=np.nan
oceanmask[oceanmask==0]=1
#%%
periods=[(2003,2016),(1993,2016)]
for period in periods:
t0,t1 = period
df=pd.read_pickle(path+'OM_reconstructions_{}-{}.p'.format(t0,t1))
print ('{} - {}'.format(t0,t1))
names = ['JPL',
'CSR',
'IMB+WGP',
'IMB+GWB+ZMP','UCI+WGP','UCI+GWB+ZMP'
]
if t0<2002:
names = [name for name in names if not name=='JPL' and not name=='CSR']
uncs = ['temporal','spatial','intrinsic']
unc_perc = np.zeros((len(names),len(uncs)))
# name = names[0]
# df_sel=df['mask']
# df_sel = df[[col for col in np.array(df.columns) if col.startswith(name)]]
for j,name in enumerate(names):
cols = [col for col in np.array(df.columns) if col.startswith(name)]
mu = [np.nanmean(np.array(df[col]) * oceanmask.flatten() )
for col in np.array(df.columns) if col.startswith(name)]
#% %
print(name)
tot_unc = mu[2]+mu[3]+mu[4]
# tot_unc = mu[1]
# tot_unc = np.sqrt(mu[2]**2)+(mu[3]**2)+(mu[4]**2)
for i, typ in enumerate(uncs):
print('{}: {:.3f}%'.format(typ, (mu[i+2]*100)/tot_unc ))
unc_perc[j,i] = (mu[i+2]*100)/tot_unc
unc_avg = np.mean(unc_perc,axis=0)
print('')
print('on average:')
for i,typ in enumerate(uncs):
print('{} : {:.3f}%'.format(typ,unc_avg[i]))
print('---')
#%%
periods=[(2003,2016),(1993,2016)]
for period in periods:
t0,t1 = period
df=pd.read_pickle(path+'OM_reconstructions_{}-{}.p'.format(t0,t1))
print ('{} - {}'.format(t0,t1))
names = ['JPL',
'CSR',
'IMB+WGP',
'IMB+GWB+ZMP','UCI+WGP','UCI+GWB+ZMP'
]
if t0<2002:
names = [name for name in names if not name=='JPL' and not name=='CSR']
uncs = ['AIS','GIS','GLA','LWS']
unc_perc = np.zeros((len(names),len(uncs)))
# name = names[0]
# df_sel=df['mask']
# df_sel = df[[col for col in np.array(df.columns) if col.startswith(name)]]
for j,name in enumerate(names):
cols = [col for col in np.array(df.columns) if col.startswith(name)]
mu = [np.nanmean(np.array(df[col]) * oceanmask.flatten() )
for col in np.array(df.columns) if col.startswith(name)]
#% %
print(name)
tot_unc = mu[2]+mu[3]+mu[4]
# tot_unc = mu[1]
# tot_unc = np.sqrt(mu[2]**2)+(mu[3]**2)+(mu[4]**2)
for i, typ in enumerate(uncs):
print('{}: {:.3f}%'.format(typ, (mu[i+5]*100)/tot_unc ))
unc_perc[j,i] = (mu[i+5]*100)/tot_unc
unc_avg = np.mean(unc_perc,axis=0)
print('')
print('on average:')
for i,typ in enumerate(uncs):
print('{} : {:.3f}%'.format(typ,unc_avg[i]))
print('---')
```
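A note on the ocean-mask construction at the top of the cell above: replacing land cells (mask > 0) with NaN and ocean cells (mask == 0) with 1 turns the mask into multiplicative weights that `np.nanmean` silently skips over land. The order of the two assignments matters, since NaN never compares equal to 0. A minimal sketch with made-up values:

```python
import numpy as np

mask = np.array([0.0, 0.7, 0.0, 1.0])  # 0 = ocean, >0 = land
mask[mask > 0] = np.nan  # land cells drop out of nan-aware statistics
mask[mask == 0] = 1      # ocean cells contribute with weight 1
# mask is now [1, nan, 1, nan]
print(np.nanmean(np.array([10.0, 99.0, 20.0, 99.0]) * mask))  # → 15.0
```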
| github_jupyter |
<div class="contentcontainer med left" style="margin-left: -50px;">
<dl class="dl-horizontal">
<dt>Title</dt> <dd> Points Element</dd>
<dt>Dependencies</dt> <dd>Plotly</dd>
<dt>Backends</dt> <dd><a href='../bokeh/Points.ipynb'>Bokeh</a></dd> <dd><a href='../matplotlib/Points.ipynb'>Matplotlib</a></dd> <dd><a href='./Points.ipynb'>Plotly</a></dd>
</dl>
</div>
```
import numpy as np
import holoviews as hv
hv.extension('plotly')
```
The ``Points`` element visualizes data as markers placed in a space of two independent variables, traditionally denoted *x* and *y*. In HoloViews, the names ``'x'`` and ``'y'`` are used as the default ``key_dimensions`` of the element. We can see this from the default axis labels when visualizing a simple ``Points`` element:
```
%%opts Points (color='black' symbol='x')
np.random.seed(12)
coords = np.random.rand(50,2)
hv.Points(coords)
```
Here both the random *x* values and random *y* values are *both* considered to be the 'data' with no dependency between them (compare this to how [``Scatter``](./Scatter.ipynb) elements are defined). You can think of ``Points`` as simply marking positions in some two-dimensional space that can be sliced by specifying a 2D region-of-interest:
```
%%opts Points (color='black' symbol='x' size=10)
hv.Points(coords) + hv.Points(coords)[0.6:0.8,0.2:0.5]
```
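Under the hood, that two-dimensional slice keeps only the points whose *x* value lies in [0.6, 0.8] and whose *y* value lies in [0.2, 0.5]. A plain-NumPy sketch of the same region-of-interest filter (illustrative, not HoloViews internals):

```python
import numpy as np

np.random.seed(12)
coords = np.random.rand(50, 2)

# boolean mask mirroring hv.Points(coords)[0.6:0.8, 0.2:0.5]
in_roi = ((coords[:, 0] >= 0.6) & (coords[:, 0] <= 0.8)
          & (coords[:, 1] >= 0.2) & (coords[:, 1] <= 0.5))
roi = coords[in_roi]
print(roi.shape)  # (number of selected points, 2)
```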
Although the simplest ``Points`` elements simply mark positions in a two-dimensional space without any associated value, this doesn't mean value dimensions aren't supported. Here is an example with two additional quantities for each point, declared as the ``value_dimension``s *z* and *α*, visualized as the color and size of the dots, respectively:
```
%%opts Points [color_index=2]
np.random.seed(10)
data = np.random.rand(100,4)
points = hv.Points(data, vdims=['z', 'size'])
points + points[0.3:0.7, 0.3:0.7].hist()
```
In the right subplot, the ``hist`` method is used to show the distribution of samples along the first value dimension we added (*z*).
The marker shape specified above can be any supported by [matplotlib](http://matplotlib.org/api/markers_api.html), e.g. ``s``, ``d``, or ``o``; the other options select the color and size of the marker. For convenience with the [bokeh backend](Bokeh_Backend), the matplotlib marker options are supported using a compatibility function in HoloViews.
**Note**: Although the ``Scatter`` element is superficially similar to the [``Points``](./Points.ipynb) element (they can generate plots that look identical), the two element types are semantically quite different. The fundamental difference is that [``Points``](./Points.ipynb) mark positions where both *x* and *y* are *independent* variables, whereas ``Scatter`` is used to visualize data where the *y* variable is *dependent* on *x*. This semantic difference also explains why the histogram generated by the ``hist`` call above visualizes the distribution of a different dimension than it does for [``Scatter``](./Scatter.ipynb).
This difference means that ``Points`` naturally combine elements that express independent variables in two-dimensional space, for instance [``Raster``](./Raster.ipynb) types such as [``Image``](./Image.ipynb). Similarly, ``Scatter`` expresses a dependent relationship in two-dimensions and combine naturally with ``Chart`` types such as [``Curve``](./Curve.ipynb).
For full documentation and the available style and plot options, use ``hv.help(hv.Points)``.
| github_jupyter |
# Futurology Replication Power Analysis
J. Nathan Matias ([natematias.com](https://natematias.com)) ([@natematias](https://twitter.com/natematias))
May 2019
```
options("scipen"=9, "digits"=4)
library(dplyr)
library(MASS) # contains fitdistr
library(ggplot2) # for plotting
library(rlang)
library(tidyverse)
library(viridis) # colorblind safe palettes
library(DeclareDesign)
library(beepr)
## Installed DeclareDesign 0.13 using the following command:
# install.packages("DeclareDesign", dependencies = TRUE,
# repos = c("http://R.declaredesign.org", "https://cloud.r-project.org"))
## DOCUMENTATION AT: https://cran.r-project.org/web/packages/DeclareDesign/DeclareDesign.pdf
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
options(repr.plot.width=7, repr.plot.height=3.5)
sessionInfo()
## the power-analysis-utils.R source file includes the following methods:
# mu.diff.from.mu.irr
# betas.logit.from.prob
# betas.logit.from.mean
# min.diagnosis.power
# iterate.for.power
# plot.power.results
source("../power-analysis-utils.R")
```
# Step One: Analyze Current Dataset to Observe Control Group Characteristics
In this step, we use the `fitdistr` method from the `MASS` library to estimate the parameters of a negative binomial distribution describing the number of newcomer comments and the number of removed newcomer comments per discussion.
```
data.folder <- "/home/civilservant/Tresors/CivilServant/projects/CivilServant-reddit/replication-drive-2019/power-analysis/datasets"
# dataset from CivilServant production
#post.filename <- "sticky_comment_power_analysis_2t7no_5.2018_3.2019_posts.csv"
#comment.filename <- "sticky_comment_power_analysis_2t7no_5.2018_3.2019_comments.csv"
# baumgartner dataset
post.filename <- "sticky_comment_power_analysis_2t7no_12.2018_2.2019_posts.csv"
comment.filename <- "sticky_comment_power_analysis_2t7no_12.2018_2.2019_comments.csv"
post.df <- read.csv(file.path(data.folder, post.filename))
comment.df <- read.csv(file.path(data.folder, comment.filename))
print(paste("Removing",
nrow(subset(post.df, is.na(num.comments) | is.na(num.comments.removed))),
"posts with NA comments or NA num comments removed (usually auto-removed spam posts)"
))
post.df <- subset(post.df, is.na(num.comments)!=TRUE & is.na(num.comments.removed)!=TRUE)
print(paste("Total: ", nrow(post.df), "posts"))
#comment.df
## ADD DATES TO COMMENTS AND POSTS
post.df$created <- as.Date(post.df$created.utc)
comment.df$created <- as.Date(comment.df$created.utc)
total.days <- as.integer(max(post.df$created) - min(post.df$created))
print(paste("Posts Min date:", min(post.df$created), ", Max date:", max(post.df$created)))
print(paste("Posts per day", as.integer(nrow(post.df) / total.days)))
print(paste("Comments Min date:", min(comment.df$created), ", Max date:", max(comment.df$created)))
print(paste("Mean Comments per post", as.integer(mean(post.df$num.comments))))
print(paste("Mean Newcomer Comments per post", as.integer(mean(post.df$newcomer.comments))))
print(paste("Mean Newcomer Comments removed per post", as.integer(mean(post.df$newcomer.comments.removed))))
print(paste("Share of Comments Removed (not visible)", mean(comment.df$visible=="False", na.rm=TRUE)))
print(paste("Num Newcomer Comments", nrow(subset(comment.df, author.prev.comments==0))))
print(paste("Num Newcomer Comments / Day:", as.integer(nrow(subset(comment.df, author.prev.comments==0)) / total.days)))
print(paste("Num Newcomer Comments Removed / Day:", as.integer(nrow(subset(comment.df, author.prev.comments==0 & visible=="False")) / total.days)))
print(paste("% Newcomer Comments Removed:", nrow(subset(comment.df, author.prev.comments == 0 & visible=="False")) / nrow(subset(comment.df, author.prev.comments == 0 ))*100))
print("")
print(paste("Mean Comments Per Day", nrow(comment.df) / total.days))
print(paste("Mean Comments Removed / Day:", as.integer(nrow(subset(comment.df, visible=="False")) / total.days)))
print(paste("% Comments Removed:", nrow(subset(comment.df, visible=="False")) / nrow(comment.df)*100))
```
### Estimate the Base Rates of Post-Level Variables with a Negative Binomial Estimator
```
print("Newcomer Comments")
print(summary(post.df$newcomer.comments))
nc.dist <- fitdistr(post.df$newcomer.comments, densfun="negative binomial")
print("Num Comments Removed")
print(summary(post.df$newcomer.comments.removed))
ncr.dist <- fitdistr(post.df$newcomer.comments.removed, densfun="negative binomial")
```
### Experiment Configuration
For a replication, the only thing you should need to change would be the configuration settings. You shouldn't need to change any other methods.
```
## THIS CONFIGURATION SPECIFIES THE
## CHARACTERISTICS OF A THE TEST
count.config <- data.frame(
pa.label = "futurology.design",
n.max = 18000,
n.min = 2500,
NC.mu = nc.dist$estimate[['mu']],
NC.theta = nc.dist$estimate[['size']],
NC.effect.irr = exp(0.1655), # 18% increase <- you may want to adjust this
NCR.mu = ncr.dist$estimate[['mu']],
NCR.theta = ncr.dist$estimate[['size']],
NCR.effect.irr = exp(-0.16251) # 0.85x, a 15% decrease <- you may decide to adjust this
)
diagnose.experiment <- function( n.size, cdf, sims.count = 500, bootstrap.sims.count = 500){
design <- declare_population(N = n.size) +
declare_potential_outcomes(
# number of newcomer comments
NC_Z_0 = rnegbin(n=N, mu = cdf$NC.mu,
theta = cdf$NC.theta),
NC_Z_1 = rnegbin(n=N, mu = cdf$NC.mu + mu.diff.from.mu.irr(
cdf$NC.mu,
cdf$NC.effect.irr),
theta = cdf$NC.theta),
# number of newcomer comments removed
NCR_Z_0 = rnegbin(n=N, mu = cdf$NCR.mu,
theta = cdf$NCR.theta),
NCR_Z_1 = rnegbin(n=N, mu = cdf$NCR.mu + mu.diff.from.mu.irr(
cdf$NCR.mu,
cdf$NCR.effect.irr),
theta = cdf$NCR.theta)
) +
declare_assignment(num_arms = 2,
conditions = (c("0", "1"))) +
declare_estimand(ate_NC_1_0 = log(cdf$NC.effect.irr)) +
declare_estimand(ate_NCR_1_0 = log(cdf$NCR.effect.irr)) +
declare_reveal(outcome_variables = c("NC", "NCR")) +
##
## ESTIMATORS FOR NC
##
## Estimator: Negative Binomial Regression
## this is how a custom estimator is defined.
## in this case, it runs glm.nb, records the coefficient & test statistics
## for the term "Z1" in the summary table, calculates confidence intervalss
## and returns the outcome along with confidence intervals
declare_estimator(handler=tidy_estimator(function(data){
m <- glm.nb(formula = NC ~ Z, data)
out <- subset(tidy(m), term == "Z1")
transform(out,
conf.low = estimate - 1.96*std.error,
conf.high = estimate + 1.96*std.error
)
}), estimand="ate_NC_1_0", label="NC.nb 1/0") +
##
## ESTIMATORS FOR NCR
##
## Estimator: Negative Binomial Regression
declare_estimator(handler=tidy_estimator(function(data){
m <- glm.nb(formula = NCR ~ Z, data)
out <- subset(tidy(m), term == "Z1")
transform(out,
conf.low = estimate - 1.96*std.error,
conf.high = estimate + 1.96*std.error
)
}), estimand="ate_NCR_1_0", label="NCR.nb 1/0")
diagnosis <- diagnose_design(design, sims = sims.count,
bootstrap_sims = bootstrap.sims.count)
diagnosis
}
```
# Conduct Power Analysis (Posts)
```
interval = 1000
power.iterate.df <- iterate.for.power(count.config,
diagnosis.method=diagnose.experiment,
iteration.interval = interval)
#beep(sound="treasure")
colnames(power.iterate.df)
```
## Diagnose Post Power Analysis
### Statistical Power Associated with Estimators
```
ggplot(power.iterate.df, aes(n, power, color=estimator_label)) +
## CHART SUBSTANCE
geom_line() +
geom_point() +
## LABELS AND COSMETICS
geom_hline(yintercept=0.8, size=0.25) +
theme_bw(base_size = 12, base_family = "Helvetica") +
theme(axis.text.x = element_text(angle=45, hjust = 1)) +
scale_y_continuous(breaks = seq(0,1,0.1), limits = c(0,1), labels=scales::percent) +
scale_x_continuous(breaks = seq(count.config$n.min,count.config$n.max,interval)) +
scale_color_viridis(discrete=TRUE) +
xlab("sample size") +
ggtitle("Statistical Power Associated with Estimators")
```
# Diagnose Comment Power Analysis
In this case, we assign comments to discussions and treat the discussions. Here, we need to specify a distribution of cluster sizes.
```
comment.config <- data.frame(
pa.label = "comment.design",
# n.max = 12100,
n.max = 5100,
n.min = 100,
NC.mu = nc.dist$estimate[['mu']],
NC.theta = nc.dist$estimate[['size']],
NC.effect.irr = exp(0.1397), # 15% increase <- you may want to adjust this
VISIBLE.ctl = 0.79, ## the control group probability of complying with rules
VISIBLE.effect.a = 0.02 ## the change in the probability of complying with rules
)
# tutorial on specifying clusters of different sizes here:
# https://declaredesign.org/r/estimatr/articles/simulations-debiasing-dim.html
## here, n is the number of discussions
diagnose.experiment.binary <- function( n.size, cdf, sims.count = 500, bootstrap.sims.count = 500){
design <- declare_population(
# here, a cluster is a post
clusters = add_level(
N = n.size,
# we add one, since declare_population can't handle empty clusters
# this adds bias into our power analysis
individs_per_clust = rnegbin(n=n.size, mu = cdf$NC.mu, theta = cdf$NC.theta)+1,
condition_pr = 0.5
),
# here, an individual is a comment
individual = add_level(
N = individs_per_clust,
epsilon = rnorm(N, sd = 3) #add an error term for individuals within clusters
)
) +
## these potential outcome definitions may be incorrect
declare_potential_outcomes(
VISIBLE_Z_0 = rbinom(N, 1, cdf$VISIBLE.ctl),
VISIBLE_Z_1 = rbinom(N, 1, cdf$VISIBLE.ctl + cdf$VISIBLE.effect.a)
) +
declare_assignment(clusters = clusters, prob = 0.5) +
declare_estimand(ate_VISIBLE_mean_1_0 = cdf$VISIBLE.effect.a) +
declare_reveal(outcome_variables = c("VISIBLE"), assignment_variables = c("Z")) +
declare_estimator(formula = VISIBLE ~ Z,
clusters = clusters,
label = "VISIBLE_mean_1_0")
diagnosis <- diagnose_design(design, sims = sims.count,
bootstrap_sims = bootstrap.sims.count)
diagnosis
}
interval = 500
comment.iterate.df <- iterate.for.power(comment.config,
diagnosis.method=diagnose.experiment.binary,
iteration.interval = interval)
#beep(sound="treasure")
#subset(comment.iterate.df, n==100)
ggplot(comment.iterate.df, aes(n, power, color=estimator_label)) +
## CHART SUBSTANCE
geom_line() +
geom_point() +
## LABELS AND COSMETICS
geom_hline(yintercept=0.8, size=0.25) +
theme_bw(base_size = 12, base_family = "Helvetica") +
theme(axis.text.x = element_text(angle=45, hjust = 1)) +
scale_y_continuous(breaks = seq(0,1,0.1), limits = c(0,1), labels=scales::percent) +
scale_x_continuous(breaks = seq(comment.config$n.min,comment.config$n.max,interval)) +
scale_color_viridis(discrete=TRUE) +
xlab("sample size") +
ggtitle("Statistical Power Associated with Estimators")
#comment.iterate.df
```
| github_jupyter |
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
import torchvision.transforms as transform
import matplotlib.pyplot as plt
device=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# loading the data
train_data=datasets.FashionMNIST(root='/.data',download=True,train=True,transform=transform.ToTensor())
test_data=datasets.FashionMNIST(root='/.data',download=True,train=False,transform=transform.ToTensor())
train_loader=DataLoader(train_data,batch_size=64,shuffle=True)
test_loader=DataLoader(test_data,batch_size=64,shuffle=False)
examples=iter(train_loader)
images,labels=next(examples)
for i in range(6):
plt.subplot(2,3,i+1)
plt.imshow(images[i][0],cmap='gray')
plt.show()
print(images.shape)
print(labels.shape)
input_size=28*28
num_classes=10
learning_rate=0.001
batch_size=64  # matches the batch size used when creating the DataLoaders above
hidden_size=50
# creating the neural network
class NN(nn.Module):
def __init__(self,input_size,hidden_size,num_classes):
super(NN,self).__init__()
self.l1=nn.Linear(input_size,hidden_size)
self.relu=nn.ReLU()
self.l2=nn.Linear(hidden_size,num_classes)
def forward(self,x):
x=self.l1(x)
x=self.relu(x)
x=self.l2(x)
return x
model=NN(input_size,hidden_size,num_classes).to(device)
criterion=nn.CrossEntropyLoss()
optimizer=optim.Adam(model.parameters(),lr=learning_rate)
print(len(train_loader))
n_total_steps=len(train_loader)
num_epochs=5
for epoch in range(num_epochs):
for i,(images,labels) in enumerate(train_loader):
images=images.reshape(-1,input_size).to(device)
labels=labels.to(device)
# forward pass
outputs=model(images)
loss=criterion(outputs,labels)
# backward pass
optimizer.zero_grad()
loss.backward()
optimizer.step()
# training loss
if (i+1)%100==0:
print(f'epoch{epoch+1}/{num_epochs},step {i+1}/{n_total_steps},loss={loss.item():.4f}')
with torch.no_grad():
n_correct=0
n_samples=0
for images,labels in test_loader:
images=images.reshape(-1,28*28).to(device)
labels=labels.to(device)
outputs=model(images)
# value,index
_,predictions=torch.max(outputs,1)
n_samples+=labels.shape[0]
n_correct+=(predictions==labels).sum().item()
acc=100.0 * n_correct / n_samples
print(f'accuracy={acc}')
test_examples=iter(test_loader)
test_image,_=next(test_examples)
for i in range(3):
plt.subplot(1,3,i+1)
plt.imshow(test_image[i][0],cmap='gray')
plt.show()
```
| github_jupyter |
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
# Vertex AI: Track parameters and metrics for custom training jobs
## Overview
This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.
### Dataset
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
### Objective
In this notebook, you will learn how to use Vertex AI SDK to:
* Track training parameters and prediction metrics for a custom training job.
* Extract and perform analysis for all parameters and metrics within an Experiment.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
**If you are using Colab or AI Platform Notebooks**, your environment already meets
all the requirements to run this notebook. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
1. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3. Activate the virtual environment.
1. To install Jupyter, run `pip install jupyter` on the
command-line in a terminal shell.
1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
1. Open this notebook in the Jupyter Notebook Dashboard.
### Install additional packages
Run the following commands to install the Vertex AI SDK and other packages used in this notebook.
```
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"
```
Install Vertex AI SDK.
```
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
```
Install TensorFlow and scikit-learn for training and evaluating models.
```
! pip install {USER_FLAG} --upgrade tensorflow scikit-learn
```
### Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
```
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin
### Select a GPU runtime
**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
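As an illustrative sketch (the project ID below is a made-up placeholder, not a real project), the same variable interpolation can be reproduced outside Jupyter with `subprocess`:

```python
import subprocess

PROJECT_ID = "my-demo-project"  # hypothetical placeholder

# In a notebook, `!gcloud config set project $PROJECT_ID` substitutes the Python
# variable into the shell command before running it. Outside Jupyter, the
# equivalent explicit form passes the value as an argument:
result = subprocess.run(["echo", PROJECT_ID], capture_output=True, text=True)
print(result.stdout.strip())  # the interpolated value
```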
#### Set your project ID
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
```
Otherwise, set your project ID here.
```
if PROJECT_ID == "" or PROJECT_ID is None:
    PROJECT_ID = "[your-project-id]"  # @param {type:"string"}
```
Set gcloud config to your project ID.
```
!gcloud config set project $PROJECT_ID
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using AI Platform Notebooks**, your environment is already
authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [**Create service account key**
page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and
click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "AI Platform"
into the filter box, and select
**AI Platform Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click *Create*. A JSON file that contains your key downloads to your
local environment.
6. Enter the path to your service account key as the
`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create AI Platform model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are
available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may
not use a Multi-Regional Storage bucket for training with AI Platform.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Import libraries and define constants
Import required libraries.
```
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
```
## Initialize Vertex AI and set an _experiment_
Define experiment name.
```
EXPERIMENT_NAME = "" # @param {type:"string"}
```
If `EXPERIMENT_NAME` is not set, set a default one below:
```
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
    EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
```
Initialize the *client* for Vertex AI.
```
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_NAME,
experiment=EXPERIMENT_NAME,
)
```
## Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
```
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_NAME}/data/
gcs_csv_path = f"{BUCKET_NAME}/data/abalone_train.csv"
```
### Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
```
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name
```
### Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
```
%%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
                    default=10, type=int,
                    help='Number of epochs.')
parser.add_argument('--num_units', dest='num_units',
                    default=64, type=int,
                    help='Number of units for the first layer.')
args = parser.parse_args()

# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_strategy(strategy)

col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"]
target = "Age"

def aip_data_to_dataframe(wild_card_path):
    return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)
                      for fp in tf.data.Dataset.list_files([wild_card_path])])

def get_features_and_labels(df):
    return df.drop(target, axis=1).values, df[target].values

def data_prep(wild_card_path):
    return get_features_and_labels(aip_data_to_dataframe(wild_card_path))

model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]),
          epochs=args.epochs,
          validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"]))
print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"])))

# save as AI Platform managed model
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
```
### Launch a custom training job and track its training parameters in Vertex AI ML Metadata
```
job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest",
)
```
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
```
aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
model = job.run(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
)
```
### Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
```
endpoint = model.deploy(machine_type="n1-standard-4")
```
Once model is deployed, perform online prediction using the `abalone_test` dataset and calculate prediction metrics.
Prepare the prediction dataset.
```
def read_data(uri):
    dataset_path = data_utils.get_file("auto-mpg.data", uri)
    col_names = [
        "Length",
        "Diameter",
        "Height",
        "Whole weight",
        "Shucked weight",
        "Viscera weight",
        "Shell weight",
        "Age",
    ]
    dataset = pd.read_csv(
        dataset_path,
        names=col_names,
        na_values="?",
        comment="\t",
        sep=",",
        skipinitialspace=True,
    )
    return dataset

def get_features_and_labels(df):
    target = "Age"
    return df.drop(target, axis=1).values, df[target].values

test_dataset, test_labels = get_features_and_labels(
    read_data(
        "https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv"
    )
)
```
Perform online prediction.
```
prediction = endpoint.predict(test_dataset.tolist())
prediction
```
Calculate and track prediction evaluation metrics.
```
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae})
```
### Extract all parameters and metrics created during this experiment.
```
aiplatform.get_experiment_df()
```
### View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
```
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
```
## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
* Training Job
* Model
* Endpoint
* Cloud Storage Bucket
```
delete_training_job = True
delete_model = True
delete_endpoint = True

# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False

# Delete the training job
if delete_training_job:
    job.delete()

# Undeploy and delete the endpoint
if delete_endpoint:
    endpoint.undeploy_all()
    endpoint.delete()

# Delete the model
if delete_model:
    model.delete()

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil -m rm -r $BUCKET_NAME
```
| github_jupyter |
This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras&a_bid=76564dff). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode.
**If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.**
This notebook was generated for TensorFlow 2.6.
### Processing words as a sequence: the Sequence Model approach
#### A first practical example
**Downloading the data**
```
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
!rm -r aclImdb/train/unsup
```
**Preparing the data**
```
import os, pathlib, shutil, random
from tensorflow import keras
batch_size = 32
base_dir = pathlib.Path("aclImdb")
val_dir = base_dir / "val"
train_dir = base_dir / "train"
for category in ("neg", "pos"):
    os.makedirs(val_dir / category)
    files = os.listdir(train_dir / category)
    random.Random(1337).shuffle(files)
    num_val_samples = int(0.2 * len(files))
    val_files = files[-num_val_samples:]
    for fname in val_files:
        shutil.move(train_dir / category / fname,
                    val_dir / category / fname)
train_ds = keras.preprocessing.text_dataset_from_directory(
"aclImdb/train", batch_size=batch_size
)
val_ds = keras.preprocessing.text_dataset_from_directory(
"aclImdb/val", batch_size=batch_size
)
test_ds = keras.preprocessing.text_dataset_from_directory(
"aclImdb/test", batch_size=batch_size
)
text_only_train_ds = train_ds.map(lambda x, y: x)
```
**Preparing integer sequence datasets**
```
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
max_length = 600
max_tokens = 20000
text_vectorization = TextVectorization(
max_tokens=max_tokens,
output_mode="int",
output_sequence_length=max_length,
)
text_vectorization.adapt(text_only_train_ds)
int_train_ds = train_ds.map(lambda x, y: (text_vectorization(x), y))
int_val_ds = val_ds.map(lambda x, y: (text_vectorization(x), y))
int_test_ds = test_ds.map(lambda x, y: (text_vectorization(x), y))
```
**A sequence model built on top of one-hot encoded vector sequences**
```
import tensorflow as tf
from tensorflow.keras import layers
inputs = keras.Input(shape=(None,), dtype="int64")
embedded = tf.one_hot(inputs, depth=max_tokens)
x = layers.Bidirectional(layers.LSTM(32))(embedded)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
```
**Training a first basic sequence model**
```
callbacks = [
keras.callbacks.ModelCheckpoint("one_hot_bidir_lstm.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=10, callbacks=callbacks)
model = keras.models.load_model("one_hot_bidir_lstm.keras")
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
```
#### Understanding word embeddings
##### Learning word embeddings with the `Embedding` layer
**Instantiating an `Embedding` layer**
```
embedding_layer = layers.Embedding(input_dim=max_tokens, output_dim=256)
```
**Model that uses an Embedding layer trained from scratch**
```
inputs = keras.Input(shape=(None,), dtype="int64")
embedded = layers.Embedding(input_dim=max_tokens, output_dim=256)(inputs)
x = layers.Bidirectional(layers.LSTM(32))(embedded)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("embeddings_bidir_gru.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=10, callbacks=callbacks)
model = keras.models.load_model("embeddings_bidir_gru.keras")
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
```
###### Understanding padding & masking
**Model that uses an Embedding layer trained from scratch, with masking enabled**
```
inputs = keras.Input(shape=(None,), dtype="int64")
embedded = layers.Embedding(
input_dim=max_tokens, output_dim=256, mask_zero=True)(inputs)
x = layers.Bidirectional(layers.LSTM(32))(embedded)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("embeddings_bidir_gru_with_masking.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=10, callbacks=callbacks)
model = keras.models.load_model("embeddings_bidir_gru_with_masking.keras")
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
```
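Conceptually, `mask_zero=True` tells downstream layers to skip padded positions. A minimal NumPy sketch of the idea (not the actual Keras internals):

```python
import numpy as np

# Two padded sequences; index 0 is reserved for padding.
token_ids = np.array([[5, 8, 3, 0, 0],
                      [7, 2, 0, 0, 0]])

# The mask is True wherever there is a real token; an RNN with masking
# only processes these positions.
mask = token_ids != 0
print(mask.sum(axis=1))  # true sequence lengths: [3 2]
```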
##### Using pretrained word embeddings
###### Downloading the GloVe word embeddings
```
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip -q glove.6B.zip
```
**Parsing the GloVe word-embeddings file**
```
import numpy as np
path_to_glove_file = "glove.6B.100d.txt"
embeddings_index = {}
with open(path_to_glove_file) as f:
    for line in f:
        word, coefs = line.split(maxsplit=1)
        coefs = np.fromstring(coefs, "f", sep=" ")
        embeddings_index[word] = coefs
print(f"Found {len(embeddings_index)} word vectors.")
```
###### Loading the GloVe embeddings in the model
**Preparing the GloVe word-embeddings matrix**
```
embedding_dim = 100
vocabulary = text_vectorization.get_vocabulary()
word_index = dict(zip(vocabulary, range(len(vocabulary))))
embedding_matrix = np.zeros((max_tokens, embedding_dim))
for word, i in word_index.items():
    if i < max_tokens:
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
embedding_layer = layers.Embedding(
max_tokens,
embedding_dim,
embeddings_initializer=keras.initializers.Constant(embedding_matrix),
trainable=False,
mask_zero=True,
)
```
###### Training a simple bidirectional LSTM on top of the GloVe embeddings
**Model that uses a pretrained Embedding layer**
```
inputs = keras.Input(shape=(None,), dtype="int64")
embedded = embedding_layer(inputs)
x = layers.Bidirectional(layers.LSTM(32))(embedded)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
loss="binary_crossentropy",
metrics=["accuracy"])
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("glove_embeddings_sequence_model.keras",
save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=10, callbacks=callbacks)
model = keras.models.load_model("glove_embeddings_sequence_model.keras")
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
```
| github_jupyter |
# Pix2Pix in Tensorflow
This tutorial walks through an implementation of Pix2Pix as described in [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004).
This specific implementation is designed to 'remaster' black-and-white frames from films with 4:3 aspect ratio into full-color and 16:9 aspect ratio frames.
To learn more about the Pix2Pix framework, and the images that can be generated using this framework, see my [Medium post](https://medium.com/p/f4d551fa0503).
This notebook requires the additional helper.py file, which can be obtained [here]()
```
#Import the libraries we will need.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.contrib.slim as slim
import os
import scipy.misc
import scipy
from PIL import Image
from glob import glob
from helper import *
%matplotlib inline
#Size of image frames
height = 144
width = 256
```
## Defining the Adversarial Networks
### Generator Network
```
def generator(c):
    with tf.variable_scope('generator'):
        #Encoder
        enc0 = slim.conv2d(c,64,[3,3],padding="SAME",
            biases_initializer=None,activation_fn=lrelu,
            weights_initializer=initializer)
        enc0 = tf.space_to_depth(enc0,2)

        enc1 = slim.conv2d(enc0,128,[3,3],padding="SAME",
            activation_fn=lrelu,normalizer_fn=slim.batch_norm,
            weights_initializer=initializer)
        enc1 = tf.space_to_depth(enc1,2)

        enc2 = slim.conv2d(enc1,128,[3,3],padding="SAME",
            normalizer_fn=slim.batch_norm,activation_fn=lrelu,
            weights_initializer=initializer)
        enc2 = tf.space_to_depth(enc2,2)

        enc3 = slim.conv2d(enc2,256,[3,3],padding="SAME",
            normalizer_fn=slim.batch_norm,activation_fn=lrelu,
            weights_initializer=initializer)
        enc3 = tf.space_to_depth(enc3,2)

        #Decoder, with skip connections back to the encoder layers
        gen0 = slim.conv2d(
            enc3,num_outputs=256,kernel_size=[3,3],
            padding="SAME",normalizer_fn=slim.batch_norm,
            activation_fn=tf.nn.elu,weights_initializer=initializer)
        gen0 = tf.depth_to_space(gen0,2)

        gen1 = slim.conv2d(
            tf.concat([gen0,enc2],3),num_outputs=256,kernel_size=[3,3],
            padding="SAME",normalizer_fn=slim.batch_norm,
            activation_fn=tf.nn.elu,weights_initializer=initializer)
        gen1 = tf.depth_to_space(gen1,2)

        gen2 = slim.conv2d(
            tf.concat([gen1,enc1],3),num_outputs=128,kernel_size=[3,3],
            padding="SAME",normalizer_fn=slim.batch_norm,
            activation_fn=tf.nn.elu,weights_initializer=initializer)
        gen2 = tf.depth_to_space(gen2,2)

        gen3 = slim.conv2d(
            tf.concat([gen2,enc0],3),num_outputs=128,kernel_size=[3,3],
            padding="SAME",normalizer_fn=slim.batch_norm,
            activation_fn=tf.nn.elu,weights_initializer=initializer)
        gen3 = tf.depth_to_space(gen3,2)

        g_out = slim.conv2d(
            gen3,num_outputs=3,kernel_size=[1,1],padding="SAME",
            biases_initializer=None,activation_fn=tf.nn.tanh,
            weights_initializer=initializer)
        return g_out
```
### Discriminator Network
```
def discriminator(bottom, reuse=False):
    with tf.variable_scope('discriminator'):
        filters = [32,64,128,128]

        #Programmatically define layers
        for i in range(len(filters)):
            if i == 0:
                layer = slim.conv2d(bottom,filters[i],[3,3],padding="SAME",scope='d'+str(i),
                    biases_initializer=None,activation_fn=lrelu,stride=[2,2],
                    reuse=reuse,weights_initializer=initializer)
            else:
                layer = slim.conv2d(bottom,filters[i],[3,3],padding="SAME",scope='d'+str(i),
                    normalizer_fn=slim.batch_norm,activation_fn=lrelu,stride=[2,2],
                    reuse=reuse,weights_initializer=initializer)
            bottom = layer

        dis_full = slim.fully_connected(slim.flatten(bottom),1024,activation_fn=lrelu,scope='dl',
            reuse=reuse,weights_initializer=initializer)

        d_out = slim.fully_connected(dis_full,1,activation_fn=tf.nn.sigmoid,scope='do',
            reuse=reuse,weights_initializer=initializer)
        return d_out
```
### Connecting them together
```
tf.reset_default_graph()

#This initializer is used to initialize all the weights of the network.
initializer = tf.truncated_normal_initializer(stddev=0.02)

#Placeholders for the condition (input) images and the real (target) images.
condition_in = tf.placeholder(shape=[None,height,width,3],dtype=tf.float32)
real_in = tf.placeholder(shape=[None,height,width,3],dtype=tf.float32) #Real images

Gx = generator(condition_in) #Generates images from the condition images
Dx = discriminator(real_in) #Produces probabilities for real images
Dg = discriminator(Gx,reuse=True) #Produces probabilities for generator images

#These functions together define the optimization objective of the GAN.
d_loss = -tf.reduce_mean(tf.log(Dx) + tf.log(1.-Dg)) #This optimizes the discriminator.
#For the generator we use the traditional GAN objective as well as an L1 loss
g_loss = -tf.reduce_mean(tf.log(Dg)) + 100*tf.reduce_mean(tf.abs(Gx - real_in)) #This optimizes the generator.

#The code below applies gradient descent to update the GAN.
trainerD = tf.train.AdamOptimizer(learning_rate=0.0002,beta1=0.5)
trainerG = tf.train.AdamOptimizer(learning_rate=0.002,beta1=0.5)
d_grads = trainerD.compute_gradients(d_loss,slim.get_variables(scope='discriminator'))
g_grads = trainerG.compute_gradients(g_loss,slim.get_variables(scope='generator'))
update_D = trainerD.apply_gradients(d_grads)
update_G = trainerG.apply_gradients(g_grads)
```
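In equation form, the objectives defined above are the standard conditional-GAN losses plus an L1 reconstruction term, with $x$ the condition frame and $y$ the real frame:

$$\mathcal{L}_D = -\mathbb{E}\big[\log D(y) + \log(1 - D(G(x)))\big]$$

$$\mathcal{L}_G = -\mathbb{E}\big[\log D(G(x))\big] + 100\,\mathbb{E}\big[\lVert G(x) - y\rVert_1\big]$$

The weight 100 on the L1 term matches the constant in the `g_loss` line above.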
## Training the network
Now that we have fully defined our network, it is time to train it!
```
batch_size = 4 #Size of image batch to apply at each iteration.
iterations = 500000 #Total number of iterations to use.
subset_size = 5000 #How many images to load at a time; will vary depending on available resources
frame_directory = './frames' #Directory where training images are located
sample_directory = './samples' #Directory to save sample images from generator in.
model_directory = './model' #Directory to save trained model to.
sample_frequency = 200 #How often to generate sample gif of translated images.
save_frequency = 5000 #How often to save model.
load_model = False #Whether to load the model or begin training from scratch.

subset = 0
dataS = sorted(glob(os.path.join(frame_directory, "*.png")))
total_subsets = len(dataS) // subset_size  #Integer division (Python 3)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    if load_model == True:
        ckpt = tf.train.get_checkpoint_state(model_directory)
        saver.restore(sess,ckpt.model_checkpoint_path)
    imagesY,imagesX = loadImages(dataS[0:subset_size],False, np.random.randint(0,2)) #Load a subset of images
    print("Loaded subset " + str(subset))
    draw = list(range(len(imagesX)))
    for i in range(iterations):
        if i % (subset_size // batch_size) != 0 or i == 0:
            batch_index = np.random.choice(draw,size=batch_size,replace=False)
        else:
            subset = np.random.randint(0,total_subsets+1)
            imagesY,imagesX = loadImages(dataS[subset*subset_size:(subset+1)*subset_size],False, np.random.randint(0,2))
            print("Loaded subset " + str(subset))
            draw = list(range(len(imagesX)))
            batch_index = np.random.choice(draw,size=batch_size,replace=False)
        ys = (np.reshape(imagesY[batch_index],[batch_size,height,width,3]) - 0.5) * 2.0 #Transform to be between -1 and 1
        xs = (np.reshape(imagesX[batch_index],[batch_size,height,width,3]) - 0.5) * 2.0
        _,dLoss = sess.run([update_D,d_loss],feed_dict={real_in:ys,condition_in:xs}) #Update the discriminator
        _,gLoss = sess.run([update_G,g_loss],feed_dict={real_in:ys,condition_in:xs}) #Update the generator
        if i % sample_frequency == 0:
            print("Gen Loss: " + str(gLoss) + " Disc Loss: " + str(dLoss))
            start_point = np.random.randint(0,len(imagesX)-32)
            xs = (np.reshape(imagesX[start_point:start_point+32],[32,height,width,3]) - 0.5) * 2.0
            ys = (np.reshape(imagesY[start_point:start_point+32],[32,height,width,3]) - 0.5) * 2.0
            sample_G = sess.run(Gx,feed_dict={condition_in:xs}) #Get sample images from the generator.
            allS = np.concatenate([xs,sample_G,ys],axis=1)
            if not os.path.exists(sample_directory):
                os.makedirs(sample_directory)
            #Save sample generator images for viewing training progress.
            make_gif(allS,'./'+sample_directory+'/a_vid'+str(i)+'.gif',
                duration=len(allS)*0.2,true_image=False)
        if i % save_frequency == 0 and i != 0:
            if not os.path.exists(model_directory):
                os.makedirs(model_directory)
            saver.save(sess,model_directory+'/model-'+str(i)+'.cptk')
            print("Saved Model")
```
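The training and sampling code above repeatedly converts between the loader's [0, 1] pixel range and the generator's tanh output range [-1, 1]. As a small standalone sketch, the two transforms are simply:

```python
def to_tanh_range(x):
    """Map pixel values from [0, 1] to the generator's tanh range [-1, 1]."""
    return (x - 0.5) * 2.0

def from_tanh_range(x):
    """Map generator outputs from [-1, 1] back to [0, 1] for saving as images."""
    return x / 2.0 + 0.5

print(from_tanh_range(to_tanh_range(0.25)))  # round-trips to 0.25
```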
## Using a trained network
Once we have a trained model saved, we may want to use it to generate new images, and explore the representation it has learned.
```
test_directory = './test_frames' #Directory to load test frames from
subset_size = 5000
batch_size = 60 # Size of image batch to apply at each iteration. Will depend of available resources.
sample_directory = './test_samples' #Directory to save sample images from generator in.
model_directory = './model' #Directory to save trained model to.
load_model = True #Whether to load a saved model.
dataS = sorted(glob(os.path.join(test_directory, "*.png")))
subset = 0
total_subsets = len(dataS)/subset_size
iterations = subset_size / batch_size #Total number of iterations to use.
if not os.path.exists(sample_directory):
os.makedirs(sample_directory)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    if load_model == True:
        ckpt = tf.train.get_checkpoint_state(model_directory)
        saver.restore(sess,ckpt.model_checkpoint_path)
    for s in range(total_subsets):
        generated_frames = []
        _,imagesX = loadImages(dataS[s*subset_size:s*subset_size+subset_size],False, False) #Load a subset of images
        for i in range(iterations):
            start_point = i * batch_size
            xs = (np.reshape(imagesX[start_point:start_point+batch_size],[batch_size,height,width,3]) - 0.5) * 2.0
            sample_G = sess.run(Gx,feed_dict={condition_in:xs}) #Use new z to get sample images from generator.
            #allS = np.concatenate([xs,sample_G],axis=2)
            generated_frames.append(sample_G)
        generated_frames = np.vstack(generated_frames)
        for i in range(len(generated_frames)):
            im = Image.fromarray(((generated_frames[i]/2.0 + 0.5) * 256).astype('uint8'))
            im.save('./'+sample_directory+'/frame'+str(s*subset_size + i)+'.png')
        #make_gif(generated_frames,'./'+sample_directory+'/a_vid'+str(i)+'.gif',
        #         duration=len(generated_frames)/10.0,true_image=False)
```
<a href="https://colab.research.google.com/github/pachterlab/CWGFLHGCCHAP_2021/blob/master/notebooks/Extras/scrubletAnalysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!date
```
### **Download Data**
```
#Install kallisto and bustools
!wget --quiet https://github.com/pachterlab/kallisto/releases/download/v0.46.2/kallisto_linux-v0.46.2.tar.gz
!tar -xf kallisto_linux-v0.46.2.tar.gz
!cp kallisto/kallisto /usr/local/bin/
!wget --quiet https://github.com/BUStools/bustools/releases/download/v0.40.0/bustools_linux-v0.40.0.tar.gz
!tar -xf bustools_linux-v0.40.0.tar.gz
!cp bustools/bustools /usr/local/bin/
import requests
from tqdm import tnrange, tqdm_notebook
def download_file(doi,ext):
    url = 'https://api.datacite.org/dois/'+doi+'/media'
    r = requests.get(url).json()
    netcdf_url = r['data'][0]['attributes']['url']
    r = requests.get(netcdf_url,stream=True)
    #Set file name
    fname = doi.split('/')[-1]+ext
    #Download file with progress bar
    if r.status_code == 403:
        print("File Unavailable")
    if 'content-length' not in r.headers:
        print("Did not get file")
    else:
        with open(fname, 'wb') as f:
            total_length = int(r.headers.get('content-length'))
            pbar = tnrange(int(total_length/1024), unit="B")
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:
                    pbar.update()
                    f.write(chunk)
    return fname
#Get reference data (not fastq's)
#Trinity Transcripts
download_file('10.22002/D1.1825','.gz')
#Gff3 (Trinity)
download_file('10.22002/D1.1824','.gz')
# Get doi links for all Starvation cDNA fastq.gz files
starvFiles = []
dois = ['10.22002/D1.1840','10.22002/D1.1841','10.22002/D1.1842','10.22002/D1.1843',
'10.22002/D1.1844','10.22002/D1.1845','10.22002/D1.1846','10.22002/D1.1847',
'10.22002/D1.1848','10.22002/D1.1849','10.22002/D1.1850','10.22002/D1.1851',
'10.22002/D1.1852','10.22002/D1.1853','10.22002/D1.1854','10.22002/D1.1855'] #16 doi numbers
for doi in dois:
    url = 'https://api.datacite.org/dois/'+doi+'/media'
    r = requests.get(url).json()
    netcdf_url = r['data'][0]['attributes']['url']
    starvFiles += [netcdf_url]
s1 = starvFiles[0]
s2 = starvFiles[1]
s3 = starvFiles[2]
s4 = starvFiles[3]
s5 = starvFiles[4]
s6 = starvFiles[5]
s7 = starvFiles[6]
s8 = starvFiles[7]
s9 = starvFiles[8]
s10 = starvFiles[9]
s11 = starvFiles[10]
s12 = starvFiles[11]
s13 = starvFiles[12]
s14 = starvFiles[13]
s15 = starvFiles[14]
s16 = starvFiles[15]
# Get doi links for all Stimulation cDNA fastq.gz files
stimFiles = []
dois = ['10.22002/D1.1860','10.22002/D1.1863','10.22002/D1.1864','10.22002/D1.1865',
'10.22002/D1.1866','10.22002/D1.1868','10.22002/D1.1870','10.22002/D1.1871'] #8 numbers
for doi in dois:
    url = 'https://api.datacite.org/dois/'+doi+'/media'
    r = requests.get(url).json()
    netcdf_url = r['data'][0]['attributes']['url']
    stimFiles += [netcdf_url]
stim1 = stimFiles[0]
stim2 = stimFiles[1]
stim3 = stimFiles[2]
stim4 = stimFiles[3]
stim5 = stimFiles[4]
stim6 = stimFiles[5]
stim7 = stimFiles[6]
stim8 = stimFiles[7]
#Get original, CellRanger clustered starvation adata
download_file('10.22002/D1.1798','.gz')
#Cell barcodes selected from Stim ClickTags
download_file('10.22002/D1.1817','.gz')
!gunzip *.gz
!pip install --quiet anndata
!pip install --quiet scanpy==1.6.0
!pip install --quiet louvain
!pip install scrublet
```
### **Import Packages**
```
import pandas as pd
import anndata
import scanpy as sc
import numpy as np
import scipy.sparse
import scrublet as scr
import matplotlib.pyplot as plt
%matplotlib inline
sc.set_figure_params(dpi=125)
```
### **Run kallisto bus on data with Starvation cDNA data**
```
#Make kallisto index (reference: https://www.kallistobus.tools/getting_started)
!mv D1.1825 transcripts.fa
!kallisto index -i clytia_trin.idx -k 31 transcripts.fa
```
Run kallisto for one set of samples
```
#Create BUS files from fastq's, can't do separate lines
!mkfifo R1.gz R2.gz R1_02.gz R2_02.gz R1_03.gz R2_03.gz R1_04.gz R2_04.gz; curl -Ls $s1 > R1.gz & curl -Ls $s2 > R2.gz & curl -Ls $s3 > R1_02.gz & curl -Ls $s4 > R2_02.gz & curl -Ls $s5 > R1_03.gz & curl -Ls $s6 > R2_03.gz & curl -Ls $s7 > R1_04.gz & curl -Ls $s8 > R2_04.gz & kallisto bus -i clytia_trin.idx -o bus_output/ -x 10xv2 -t 2 R1.gz R2.gz R1_02.gz R2_02.gz R1_03.gz R2_03.gz R1_04.gz R2_04.gz
#Generate gene-count matrices
!wget --quiet https://github.com/bustools/getting_started/releases/download/getting_started/10xv2_whitelist.txt
#Make t2g file
!mv D1.1824 trinity.gff3
!awk '{ print $12"\t"$10}' trinity.gff3 > t2g_rough.txt
!sed 's/[";]//g' t2g_rough.txt > t2g_trin.txt
#!cd bus_output/
!mkdir bus_output/genecount/ bus_output/tmp/
!bustools correct -w 10xv2_whitelist.txt -p bus_output/output.bus | bustools sort -T bus_output/tmp/ -t 2 -p - | bustools count -o bus_output/genecount/genes -g t2g_trin.txt -e bus_output/matrix.ec -t bus_output/transcripts.txt --genecounts -
```
Run kallisto for other sample set
```
#Create BUS files from fastq's
!mkfifo R1_new.gz R2_new.gz R1_02_new.gz R2_02_new.gz R1_03_new.gz R2_03_new.gz R1_04_new.gz R2_04_new.gz; curl -Ls $s9 > R1_new.gz & curl -Ls $s10 > R2_new.gz & curl -Ls $s11 > R1_02_new.gz & curl -Ls $s12 > R2_02_new.gz & curl -Ls $s13 > R1_03_new.gz & curl -Ls $s14 > R2_03_new.gz & curl -Ls $s15 > R1_04_new.gz & curl -Ls $s16 > R2_04_new.gz & kallisto bus -i clytia_trin.idx -o bus_output_02/ -x 10xv2 -t 2 R1_new.gz R2_new.gz R1_02_new.gz R2_02_new.gz R1_03_new.gz R2_03_new.gz R1_04_new.gz R2_04_new.gz
#Generate gene-count matrices
!cd bus_output_02/
!mkdir bus_output_02/genecount/ bus_output_02/tmp/
!bustools correct -w 10xv2_whitelist.txt -p bus_output_02/output.bus | bustools sort -T bus_output_02/tmp/ -t 2 -p - | bustools count -o bus_output_02/genecount/genes -g t2g_trin.txt -e bus_output_02/matrix.ec -t bus_output_02/transcripts.txt --genecounts -
```
Merge matrices (Add -1 to first and -2 to second dataset)
```
path = "bus_output/genecount/"
jelly_adata_01 = sc.read(path+'genes.mtx', cache=True)
jelly_adata_01.var_names = pd.read_csv(path+'genes.genes.txt', header=None)[0]
jelly_adata_01.obs_names = pd.read_csv(path+'genes.barcodes.txt', header=None)[0]
jelly_adata_01.obs_names = [i+"-1" for i in jelly_adata_01.obs_names]
path = "bus_output_02/genecount/"
jelly_adata_02 = sc.read(path+'genes.mtx', cache=True)
jelly_adata_02.var_names = pd.read_csv(path+'genes.genes.txt', header=None)[0]
jelly_adata_02.obs_names = pd.read_csv(path+'genes.barcodes.txt', header=None)[0]
jelly_adata_02.obs_names = [i+"-2" for i in jelly_adata_02.obs_names]
jelly_adata = jelly_adata_01.concatenate(jelly_adata_02,join='outer', index_unique=None)
jelly_adata_01
sc.pp.filter_cells(jelly_adata_01, min_counts=10)
sc.pp.filter_genes(jelly_adata_01, min_counts=5)
jelly_adata_01
sc.pp.filter_cells(jelly_adata_02, min_counts=10)
sc.pp.filter_genes(jelly_adata_02, min_counts=5)
jelly_adata_02
cellR = anndata.read('D1.1798')
cells = list(cellR.obs_names)
len(set(cells).intersection(jelly_adata_01.obs_names))/len(cells)
len(set(cells).intersection(jelly_adata_02.obs_names))/len(cells)
```
Check doublets detected by scrublet
```
scrub = scr.Scrublet(jelly_adata_01.X, expected_doublet_rate=0.06,n_neighbors=0.5*np.sqrt(len(jelly_adata_01.obs_names)))
# call_doublets(threshold=0.4) must come after scrub_doublets(), once doublet scores exist
doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=5,
min_cells=5,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub.call_doublets(threshold=0.4)
scrub.plot_histogram();
scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3))
# # Uncomment to run tSNE - slow
# print('Running tSNE...')
# scrub.set_embedding('tSNE', scr.get_tsne(scrub.manifold_obs_, angle=0.9))
# # Uncomment to run force layout - slow
# print('Running ForceAtlas2...')
# scrub.set_embedding('FA', scr.get_force_layout(scrub.manifold_obs_, n_neighbors=5, n_iter=1000))
scrub.plot_embedding('UMAP', order_points=True);
```
Compare 'doublets' with filtered cells (look for overlap)
```
len(cellR[cellR.obs['louvain'] == '10'].obs_names)
len(set(cellR[cellR.obs['louvain'] == '10'].obs_names).intersection(jelly_adata_01.obs_names[predicted_doublets]))/len(cellR[cellR.obs['louvain'] == '10'].obs_names)
len(set(jelly_adata_01.obs_names[predicted_doublets]).intersection(cellR.obs_names))/len(jelly_adata_01.obs_names[predicted_doublets])
```
Repeat for second sample
```
scrub = scr.Scrublet(jelly_adata_02.X, expected_doublet_rate=0.06,n_neighbors=0.5*np.sqrt(len(jelly_adata_02.obs_names)))
# call_doublets(threshold=0.4) must come after scrub_doublets(), once doublet scores exist
doublet_scores2, predicted_doublets2 = scrub.scrub_doublets(min_counts=5,
min_cells=5,
min_gene_variability_pctl=85,
n_prin_comps=30)
scrub.call_doublets(threshold=0.4)
scrub.plot_histogram();
```
Compare 'doublets' with filtered cells (look for overlap)
```
len(jelly_adata_02.obs_names[predicted_doublets2])
len(set(cellR[cellR.obs['louvain'] == '10'].obs_names).intersection(jelly_adata_02.obs_names[predicted_doublets2]))/len(cellR[cellR.obs['louvain'] == '10'].obs_names)
len(set(jelly_adata_02.obs_names[predicted_doublets2]).intersection(cellR.obs_names))/len(jelly_adata_02.obs_names[predicted_doublets2])
```
Filter cell barcodes by previous selection done with ClickTag counts
```
# # Filter barcodes by 'real' cells
# cellR = anndata.read('D1.1798')
# cells = list(cellR.obs_names)
# jelly_adata = jelly_adata[cells,:]
# jelly_adata
# jelly_adata.write('fedStarved_raw.h5ad')
```
### **Run kallisto bus on data with Stimulation cDNA data**
Run kallisto for one set of samples
```
#Create BUS files from fastq's, can't do separate lines
!mkfifo R1_stim.gz R2_stim.gz R1_02_stim.gz R2_02_stim.gz ; curl -Ls $stim1 > R1_stim.gz & curl -Ls $stim2 > R2_stim.gz & curl -Ls $stim3 > R1_02_stim.gz & curl -Ls $stim4 > R2_02_stim.gz & kallisto bus -i clytia_trin.idx -o bus_output/ -x 10xv3 -t 2 R1_stim.gz R2_stim.gz R1_02_stim.gz R2_02_stim.gz
#Generate gene-count matrices
!wget --quiet https://github.com/bustools/getting_started/releases/download/getting_started/10xv3_whitelist.txt
#!cd bus_output/
!mkdir bus_output/genecount/ bus_output/tmp/
!bustools correct -w 10xv3_whitelist.txt -p bus_output/output.bus | bustools sort -T bus_output/tmp/ -t 2 -p - | bustools count -o bus_output/genecount/genes -g t2g_trin.txt -e bus_output/matrix.ec -t bus_output/transcripts.txt --genecounts -
!ls test
```
Run kallisto for other sample set
```
#Create BUS files from fastq's
!mkfifo R1_stim2.gz R2_stim2.gz R1_02_stim2.gz R2_02_stim2.gz ; curl -Ls $stim5 > R1_stim2.gz & curl -Ls $stim6 > R2_stim2.gz & curl -Ls $stim7 > R1_02_stim2.gz & curl -Ls $stim8 > R2_02_stim2.gz & kallisto bus -i clytia_trin.idx -o bus_output_02/ -x 10xv3 -t 2 R1_stim2.gz R2_stim2.gz R1_02_stim2.gz R2_02_stim2.gz
#Generate gene-count matrices
!cd bus_output_02/
!mkdir bus_output_02/genecount/ bus_output_02/tmp/
!bustools correct -w 10xv3_whitelist.txt -p bus_output_02/output.bus | bustools sort -T bus_output_02/tmp/ -t 2 -p - | bustools count -o bus_output_02/genecount/genes -g t2g_trin.txt -e bus_output_02/matrix.ec -t bus_output_02/transcripts.txt --genecounts -
```
Merge matrices (Add -1 to first and -2 to second dataset)
```
path = "bus_output/genecount/"
jelly_adata_01 = sc.read(path+'genes.mtx', cache=True)
jelly_adata_01.var_names = pd.read_csv(path+'genes.genes.txt', header=None)[0]
jelly_adata_01.obs_names = pd.read_csv(path+'genes.barcodes.txt', header=None)[0]
jelly_adata_01.obs_names = [i+"-1" for i in jelly_adata_01.obs_names]
path = "bus_output_02/genecount/"
jelly_adata_02 = sc.read(path+'genes.mtx', cache=True)
jelly_adata_02.var_names = pd.read_csv(path+'genes.genes.txt', header=None)[0]
jelly_adata_02.obs_names = pd.read_csv(path+'genes.barcodes.txt', header=None)[0]
jelly_adata_02.obs_names = [i+"-2" for i in jelly_adata_02.obs_names]
jelly_adata = jelly_adata_01.concatenate(jelly_adata_02,join='outer', index_unique=None)
```
Filter cell barcodes by previous filtering done with ClickTag counts
```
# Filter barcodes by 'real' cells
!mv D1.1817 jelly4stim_individs_tagCells_50k.mat
import scipy.io as sio
barcodes_list = sio.loadmat('jelly4stim_individs_tagCells_50k.mat')
barcodes_list.pop('__header__', None)
barcodes_list.pop('__version__', None)
barcodes_list.pop('__globals__', None)
# Add all cell barcodes for each individual
barcodes = []
for b in barcodes_list:
    if barcodes_list[b] != "None":
        barcodes.append(b)
print(len(barcodes))
barcodes = [s.replace('-1', '-3') for s in barcodes]
barcodes = [s.replace('-2', '-1') for s in barcodes]
barcodes = [s.replace('-3', '-2') for s in barcodes]
jelly_adata = jelly_adata[barcodes,:]
jelly_adata.write('stimulation_raw.h5ad')
```
# Load the Totality of the Data
#### The data is quite big here and cannot all be loaded at once with a simple `read_csv` call.
**A solution is to specify column dtypes up front to save memory (for example, switching from float64 to float32).**
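A quick way to sanity-check the benefit before loading millions of rows: build a small frame with representative columns and compare `memory_usage` under both settings. This is a sketch with made-up values, not the competition files.

```python
import numpy as np
import pandas as pd

# Build a small frame mimicking the mixed column types discussed here.
n = 100_000
df = pd.DataFrame({
    "ProductName": np.random.choice(["win8defender", "mse", "scep"], n),
    "IsBeta": np.random.randint(0, 2, n),
    "CityIdentifier": np.random.randint(0, 50_000, n).astype("float64"),
})
baseline = df.memory_usage(deep=True).sum()

# Apply the same kind of dtype mapping: category for strings,
# int8 for binary flags, float32 for wide identifier columns.
compact = df.astype({
    "ProductName": "category",
    "IsBeta": "int8",
    "CityIdentifier": "float32",
})
reduced = compact.memory_usage(deep=True).sum()
print(f"{baseline / reduced:.1f}x smaller")
```

The same dict of dtypes can then be handed to `pd.read_csv(..., dtype=...)` so the savings apply during parsing, not after.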
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
```
## /!\ Fixed some mistakes, big thanks to @Chris Deotte and @CPMP /!\
> https://www.kaggle.com/c/microsoft-malware-prediction/discussion/80338
Here are the types I use:
- I load objects as categories; plain object works as well.
- Binary values are switched to int8.
- Binary values with missing values are switched to float16 (integer dtypes cannot hold NaN); category is also an option here.
- 64-bit encodings are all switched to 32 bits, or 16 where possible.
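As a quick demonstration of why binary columns with missing values have to go to float16 (or category) rather than int8: integer dtypes have no representation for NaN. A minimal sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 1, None])  # pandas stores this as float64 with a NaN

# Casting to an integer dtype fails because NaN has no integer representation.
try:
    s.astype("int8")
except (ValueError, TypeError) as err:
    print("int8 cast failed:", err)

# float16 keeps the missing value and still halves the memory of float32.
as_float = s.astype("float16")
print(as_float.isna().sum())  # 1
```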
```
dtypes = {
'MachineIdentifier': 'category',
'ProductName': 'category',
'EngineVersion': 'category',
'AppVersion': 'category',
'AvSigVersion': 'category',
'IsBeta': 'int8',
'RtpStateBitfield': 'float16',
'IsSxsPassiveMode': 'int8',
'DefaultBrowsersIdentifier': 'float32',
'AVProductStatesIdentifier': 'float32',
'AVProductsInstalled': 'float16',
'AVProductsEnabled': 'float16',
'HasTpm': 'int8',
'CountryIdentifier': 'int16',
'CityIdentifier': 'float32',
'OrganizationIdentifier': 'float16',
'GeoNameIdentifier': 'float16',
'LocaleEnglishNameIdentifier': 'int16',
'Platform': 'category',
'Processor': 'category',
'OsVer': 'category',
'OsBuild': 'int16',
'OsSuite': 'int16',
'OsPlatformSubRelease': 'category',
'OsBuildLab': 'category',
'SkuEdition': 'category',
'IsProtected': 'float16',
'AutoSampleOptIn': 'int8',
'PuaMode': 'category',
'SMode': 'float16',
'IeVerIdentifier': 'float16',
'SmartScreen': 'category',
'Firewall': 'float16',
'UacLuaenable': 'float64', # was 'float32'
'Census_MDC2FormFactor': 'category',
'Census_DeviceFamily': 'category',
'Census_OEMNameIdentifier': 'float32', # was 'float16'
'Census_OEMModelIdentifier': 'float32',
'Census_ProcessorCoreCount': 'float16',
'Census_ProcessorManufacturerIdentifier': 'float16',
'Census_ProcessorModelIdentifier': 'float32', # was 'float16'
'Census_ProcessorClass': 'category',
'Census_PrimaryDiskTotalCapacity': 'float64', # was 'float32'
'Census_PrimaryDiskTypeName': 'category',
'Census_SystemVolumeTotalCapacity': 'float64', # was 'float32'
'Census_HasOpticalDiskDrive': 'int8',
'Census_TotalPhysicalRAM': 'float32',
'Census_ChassisTypeName': 'category',
'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float32', # was 'float16'
'Census_InternalPrimaryDisplayResolutionHorizontal': 'float32', # was 'float16'
'Census_InternalPrimaryDisplayResolutionVertical': 'float32', # was 'float16'
'Census_PowerPlatformRoleName': 'category',
'Census_InternalBatteryType': 'category',
'Census_InternalBatteryNumberOfCharges': 'float64', # was 'float32'
'Census_OSVersion': 'category',
'Census_OSArchitecture': 'category',
'Census_OSBranch': 'category',
'Census_OSBuildNumber': 'int16',
'Census_OSBuildRevision': 'int32',
'Census_OSEdition': 'category',
'Census_OSSkuName': 'category',
'Census_OSInstallTypeName': 'category',
'Census_OSInstallLanguageIdentifier': 'float16',
'Census_OSUILocaleIdentifier': 'int16',
'Census_OSWUAutoUpdateOptionsName': 'category',
'Census_IsPortableOperatingSystem': 'int8',
'Census_GenuineStateName': 'category',
'Census_ActivationChannel': 'category',
'Census_IsFlightingInternal': 'float16',
'Census_IsFlightsDisabled': 'float16',
'Census_FlightRing': 'category',
'Census_ThresholdOptIn': 'float16',
'Census_FirmwareManufacturerIdentifier': 'float16',
'Census_FirmwareVersionIdentifier': 'float32',
'Census_IsSecureBootEnabled': 'int8',
'Census_IsWIMBootEnabled': 'float16',
'Census_IsVirtualDevice': 'float16',
'Census_IsTouchEnabled': 'int8',
'Census_IsPenCapable': 'int8',
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
'Wdft_IsGamer': 'float16',
'Wdft_RegionIdentifier': 'float16',
'HasDetections': 'int8'
}
test_df = pd.read_csv('../input/test.csv', dtype=dtypes)
test_df.info()
train_df = pd.read_csv('../input/train.csv', dtype=dtypes)
train_df.info()
```
## Voilà !
#### Everything is loaded !
Hope it helped !
If you found a way to improve the types, let me know, I'll update the kernel.
<a href="https://colab.research.google.com/github/satyajitghana/TSAI-DeepNLP-END2.0/blob/main/07_Seq2Seq/END2_Seq2seq_Class_Code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.legacy.datasets import Multi30k
from torchtext.legacy.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
!pip install spacy --upgrade
%%bash
python -m spacy download en
python -m spacy download de
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
def tokenize_de(text):
    """
    Tokenizes German text from a string into a list of strings (tokens) and reverses it
    """
    return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
    """
    Tokenizes English text from a string into a list of strings (tokens)
    """
    return [tok.text for tok in spacy_en.tokenizer(text)]
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
print(vars(train_data.examples[0]))
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
        self.dropout = nn.Dropout(dropout)
    def forward(self, src):
        #src = [src len, batch size]
        embedded = self.dropout(self.embedding(src))
        #embedded = [src len, batch size, emb dim]
        outputs, (hidden, cell) = self.rnn(embedded)
        #outputs = [src len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]
        #cell = [n layers * n directions, batch size, hid dim]
        #outputs are always from the top hidden layer
        return hidden, cell
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.output_dim = output_dim
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
        self.fc_out = nn.Linear(hid_dim, output_dim)
        self.dropout = nn.Dropout(dropout)
    def forward(self, input, hidden, cell):
        #input = [batch size]
        #hidden = [n layers * n directions, batch size, hid dim]
        #cell = [n layers * n directions, batch size, hid dim]
        #n directions in the decoder will both always be 1, therefore:
        #hidden = [n layers, batch size, hid dim]
        #context = [n layers, batch size, hid dim]
        input = input.unsqueeze(0)
        #input = [1, batch size]
        embedded = self.dropout(self.embedding(input))
        #embedded = [1, batch size, emb dim]
        output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
        #output = [seq len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]
        #cell = [n layers * n directions, batch size, hid dim]
        #seq len and n directions will always be 1 in the decoder, therefore:
        #output = [1, batch size, hid dim]
        #hidden = [n layers, batch size, hid dim]
        #cell = [n layers, batch size, hid dim]
        prediction = self.fc_out(output.squeeze(0))
        #prediction = [batch size, output dim]
        return prediction, hidden, cell
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device
        assert encoder.hid_dim == decoder.hid_dim, \
            "Hidden dimensions of encoder and decoder must be equal!"
        assert encoder.n_layers == decoder.n_layers, \
            "Encoder and decoder must have equal number of layers!"
    def forward(self, src, trg, teacher_forcing_ratio = 0.5):
        #src = [src len, batch size]
        #trg = [trg len, batch size]
        #teacher_forcing_ratio is probability to use teacher forcing
        #e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
        batch_size = trg.shape[1]
        trg_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim
        #tensor to store decoder outputs
        outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
        #last hidden state of the encoder is used as the initial hidden state of the decoder
        hidden, cell = self.encoder(src)
        #first input to the decoder is the <sos> tokens
        input = trg[0,:]
        for t in range(1, trg_len):
            #insert input token embedding, previous hidden and previous cell states
            #receive output tensor (predictions) and new hidden and cell states
            output, hidden, cell = self.decoder(input, hidden, cell)
            #place predictions in a tensor holding predictions for each token
            outputs[t] = output
            #decide if we are going to use teacher forcing or not
            teacher_force = random.random() < teacher_forcing_ratio
            #get the highest predicted token from our predictions
            top1 = output.argmax(1)
            #if teacher forcing, use actual next token as next input
            #if not, use predicted token
            input = trg[t] if teacher_force else top1
        return outputs
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
    model.train()
    epoch_loss = 0
    for i, batch in enumerate(iterator):
        src = batch.src
        trg = batch.trg
        optimizer.zero_grad()
        output = model(src, trg)
        #trg = [trg len, batch size]
        #output = [trg len, batch size, output dim]
        output_dim = output.shape[-1]
        output = output[1:].view(-1, output_dim)
        trg = trg[1:].view(-1)
        #trg = [(trg len - 1) * batch size]
        #output = [(trg len - 1) * batch size, output dim]
        loss = criterion(output, trg)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
    model.eval()
    epoch_loss = 0
    with torch.no_grad():
        for i, batch in enumerate(iterator):
            src = batch.src
            trg = batch.trg
            output = model(src, trg, 0) #turn off teacher forcing
            #trg = [trg len, batch size]
            #output = [trg len, batch size, output dim]
            output_dim = output.shape[-1]
            output = output[1:].view(-1, output_dim)
            trg = trg[1:].view(-1)
            #trg = [(trg len - 1) * batch size]
            #output = [(trg len - 1) * batch size, output dim]
            loss = criterion(output, trg)
            epoch_loss += loss.item()
    return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut1-model.pt')
    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
# Multivariate time series classification with sktime
In this notebook, we will use sktime for multivariate time series classification.
For the simpler univariate time series classification setting, take a look at this [notebook](https://github.com/alan-turing-institute/sktime/blob/master/examples/01_classification_univariate.ipynb).
### Preliminaries
```
import numpy as np
from sklearn.pipeline import Pipeline
from sktime.classification.compose import ColumnEnsembleClassifier
from sktime.classification.compose import TimeSeriesForestClassifier
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.shapelet_based import MrSEQLClassifier
from sktime.datasets import load_basic_motions
from sktime.transformers.series_as_features.compose import ColumnConcatenator
from sklearn.model_selection import train_test_split
```
### Load multivariate time series/panel data
The [data set](http://www.timeseriesclassification.com/description.php?Dataset=BasicMotions) we use in this notebook was generated as part of a student project where four students performed four activities whilst wearing a smart watch. The watch collects 3D accelerometer and 3D gyroscope data. The data set consists of four classes: walking, resting, running and badminton. Participants were required to record each motion a total of five times, and the data is sampled once every tenth of a second, for a ten second period.
```
X, y = load_basic_motions(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# multivariate input data
X_train.head()
# multi-class target variable
np.unique(y_train)
```
## Multivariate classification
`sktime` offers three main ways of solving multivariate time series classification problems:
1. _Concatenation_ of time series columns into a single long time series column via `ColumnConcatenator` and apply a classifier to the concatenated data,
2. _Column-wise ensembling_ via `ColumnEnsembleClassifier` in which one classifier is fitted for each time series column and their predictions aggregated,
3. _Bespoke estimator-specific methods_ for handling multivariate time series data, e.g. finding shapelets in multidimensional spaces (still work in progress).
### Time series concatenation
We can concatenate multivariate time series/panel data into a single long univariate time series/panel and then apply a classifier to the univariate data.
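As a toy illustration of the idea in plain NumPy (not the nested-DataFrame container sktime actually uses): concatenating the columns of an (instances, variables, timepoints) panel yields one long univariate series per instance.

```python
import numpy as np

# Toy panel: 2 instances, 3 variables (columns), 4 time points each.
panel = np.arange(24).reshape(2, 3, 4)

# Column concatenation stitches the per-variable series end to end,
# giving one univariate series of length 3 * 4 per instance,
# which is the same idea ColumnConcatenator applies to nested DataFrames.
long_series = panel.reshape(2, 12)
print(long_series[0])  # [ 0  1  2  3  4  5  6  7  8  9 10 11]
```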
```
steps = [
('concatenate', ColumnConcatenator()),
('classify', TimeSeriesForestClassifier(n_estimators=100))]
clf = Pipeline(steps)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
```
### Column ensembling
We can also fit one classifier per time series column and then aggregate their predictions. The interface is similar to the familiar `ColumnTransformer` from sklearn.
```
clf = ColumnEnsembleClassifier(estimators=[
("TSF0", TimeSeriesForestClassifier(n_estimators=10), [0]),
("BOSSEnsemble3", BOSSEnsemble(max_ensemble_size=5), [3]),
])
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
```
### Bespoke classification algorithms
Another approach is to use bespoke (or classifier-specific) methods for multivariate time series data. Here, we try out the MrSEQL algorithm in multidimensional space.
```
clf = MrSEQLClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
```
Suggested by DLM as a potential slow degree of freedom that may not be sampled easily.
Approach
1. Identify all molecules that contain a carboxyl group
2. For each such molecule, identify all the torsion angles that involve a hydroxyl hydrogen
3. Plot and see -- would expect there to be two modes separated by 180 degrees, sampled at least a little bit -- what's the relaxation time for this flip?
4. If this suggests any test that can be automated, put it in `bayes_implicit_solvent.utils` or something
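For step 2, `mdtraj.compute_dihedrals` is the practical route once the atom quadruplets are known; the underlying geometry is just the four-point torsion angle, sketched here in plain NumPy (the cis/trans coordinates below are made up for illustration). Autocorrelation of the resulting angle time series would then give the relaxation time asked about in step 3.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (radians) defined by four points, in [-pi, pi]."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.arctan2(np.dot(m1, n2), np.dot(n1, n2))

# A planar cis arrangement gives 0 degrees; trans gives +/-180 degrees.
p0, p1, p2, p3 = map(np.array, ([0., 1., 0.], [0., 0., 0.], [1., 0., 0.], [1., 1., 0.]))
print(np.degrees(dihedral(p0, p1, p2, p3)))  # 0.0 (cis)
p3_trans = np.array([1., -1., 0.])
print(np.degrees(dihedral(p0, p1, p2, p3_trans)))  # +/-180.0 (trans)
```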
```
from openeye import oechem
from bayes_implicit_solvent.solvation_free_energy import smiles_list, db, mol_top_sys_pos_list
import mdtraj as md
import pyemma
from simtk import unit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = np.arange(10000) * 1000 * unit.femtosecond / unit.nanosecond
#hydroxyl_in_carboxylic_acid_smarts = '[OX2H][CX3]=[OX1]' # had zero hits!
#carboxyl_smarts = '[CX3](=O)[OX2H1]' # had zero hits!
#carboxyl_smarts = '[CX3](=O)[OX1H0-,OX2H1]' # has >0 hits, but misses acetic acid...
#hydroxyl_smarts = '[OX2H]' # hits a lot of extra things, obviously
carbonyl_O_smarts = '[CX3](=[OX1])O'
def get_target_atom_names_that_match_smarts(mol, smarts):
mol_ = oechem.OEMol(mol)
qmol = oechem.OEQMol()
oechem.OEParseSmarts(qmol, smarts)
unique = False
ss = oechem.OESubSearch(qmol)
matches = []
for match in ss.Match( mol_, unique):
matches.append(set([a.GetName() for a in match.GetTargetAtoms()]))
return matches
all_matches = []
for i in range(len(mol_top_sys_pos_list)):
mol, _, _, _ = mol_top_sys_pos_list[i]
#all_matches.append(get_target_atom_names_that_match_smarts(mol, carboxyl_smarts))
#all_matches.append(get_target_atom_names_that_match_smarts(mol, hydroxyl_smarts))
#all_matches.append(get_target_atom_names_that_match_smarts(mol, hydroxyl_in_carboxylic_acid_smarts))
all_matches.append(get_target_atom_names_that_match_smarts(mol, carbonyl_O_smarts))
problematic_inds = [i for i in range(len(all_matches)) if len(all_matches[i]) > 0]
all_matches[problematic_inds[0]]
```
# TODO: BETTER ALGORITHM FOR ENUMERATING TORSIONS
```
[(i, smiles_list[i]) for i in problematic_inds]
def bonds_share_an_atom(bond1, bond2):
return bond1[1] == bond2[0]
def get_torsion_tuples(traj):
"""dumb brute-force O(n_bonds^3) iteration"""
bonds = list([(a.index, b.index) for (a,b) in traj.top.bonds]) + list([(b.index, a.index) for (a,b) in traj.top.bonds])
torsions = []
for bond1 in bonds:
for bond2 in bonds:
for bond3 in bonds:
if bonds_share_an_atom(bond1, bond2) and bonds_share_an_atom(bond2, bond3):
putative_torsion = (bond1[0], bond1[1], bond2[1], bond3[1])
if putative_torsion[-1] < putative_torsion[0]:
putative_torsion = putative_torsion[::-1]
if len(set(putative_torsion)) == 4:
torsions.append(putative_torsion)
return sorted(list(set(torsions)))
freesolv_dict = {}
for i in range(len(db)):
smiles = db[i][1]
freesolv_dict[smiles] = db[i]
# loop over the flagged molecules and loop over the desired torsions, make plots
for ind in problematic_inds:
smiles = smiles_list[ind]
freesolv_entry = freesolv_dict[smiles]
freesolv_id, chemical_name = freesolv_entry[0], freesolv_entry[2]
title = chemical_name
traj_name = '../bayes_implicit_solvent/vacuum_samples/vacuum_samples_{}.h5'.format(ind)
traj = md.load(traj_name)
torsions = get_torsion_tuples(traj)
print('total # torsions: ', len(torsions))
atom_names = [a.name for a in traj.top.atoms]
element_symbols = [a.element.symbol for a in traj.top.atoms]
selected_torsions = []
atoms_of_interest = set()
for match_atom_name_set in all_matches[ind]:
atoms_of_interest = atoms_of_interest.union(match_atom_name_set)
for torsion in torsions:
torsion_atom_name_set = set([atom_names[atom_ind] for atom_ind in torsion])
torsion_element_list = [element_symbols[atom_ind] for atom_ind in torsion]
torsion_element_string = ''.join(torsion_element_list)
contains_OH = ('OH' in torsion_element_string) or ('HO' in torsion_element_string)
contains_only_one_H = (sum([e == 'H' for e in torsion_element_list]) == 1)
contains_atoms_of_interest = len(atoms_of_interest.intersection(torsion_atom_name_set)) > 0
#if contains_OH and contains_only_one_H and contains_atoms_of_interest:
#if contains_OH and contains_only_one_H:
#if contains_OH:
#if contains_atoms_of_interest:
selected_torsions.append(torsion)
print('# selected torsions: ', len(selected_torsions))
if len(selected_torsions) > 0:
feat = pyemma.coordinates.featurizer(traj.topology)
feat.add_dihedrals(selected_torsions, cossin=False)
X = feat.transform(traj)
for j in range(len(feat.describe())):
angle_name = feat.describe()[j]
fig = plt.figure(figsize=(9,4))
ax = plt.subplot(1,2,1)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.scatter(t, X[:,j], s=0.5)
plt.xlabel('simulation time (ns)')
plt.ylim(-np.pi, np.pi)
plt.ylabel(angle_name)
plt.yticks((-np.pi, 0, np.pi), (r'$-\pi$', '0', r'$\pi$'))
ax = plt.subplot(1,2,2)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.hist(X[:,j], bins=50, range=(-np.pi, np.pi))
plt.xlim(-np.pi, np.pi)
plt.xlabel(angle_name)
plt.xticks((-np.pi, 0, np.pi), (r'$-\pi$', '0', r'$\pi$'))
plt.yticks([])
plt.ylabel('probability density')
fig.suptitle(title, y=1.05)
plt.tight_layout()
plt.savefig('carboxyl-torsion-plots/{}_torsion{}.png'.format(freesolv_id, j), dpi=300, bbox_inches='tight')
```
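As a quick illustration of the brute-force enumeration above, the same logic can be run on a plain bond list (a hypothetical standalone variant; the original operates on an mdtraj topology):

```python
def bonds_share_an_atom(bond1, bond2):
    return bond1[1] == bond2[0]

def get_torsion_tuples_from_bonds(bond_list):
    """Same O(n_bonds^3) brute force as above, on a plain list of (i, j) bonds."""
    # include both orientations of each bond, as in the original
    bonds = list(bond_list) + [(b, a) for (a, b) in bond_list]
    torsions = []
    for bond1 in bonds:
        for bond2 in bonds:
            for bond3 in bonds:
                if bonds_share_an_atom(bond1, bond2) and bonds_share_an_atom(bond2, bond3):
                    putative_torsion = (bond1[0], bond1[1], bond2[1], bond3[1])
                    # canonicalize direction so each torsion appears once
                    if putative_torsion[-1] < putative_torsion[0]:
                        putative_torsion = putative_torsion[::-1]
                    # require four distinct atoms
                    if len(set(putative_torsion)) == 4:
                        torsions.append(putative_torsion)
    return sorted(set(torsions))

# a butane-like chain 0-1-2-3 has exactly one proper torsion
print(get_torsion_tuples_from_bonds([(0, 1), (1, 2), (2, 3)]))  # [(0, 1, 2, 3)]
```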
| github_jupyter |
# Naive Bayes on Iris Flower Species Dataset
```
from csv import reader
from random import seed
from random import randrange
from math import sqrt
from math import exp
from math import pi
```
## 1.Load a CSV file
```
def load_csv(filename):
dataset = list()
with open(filename, 'r') as file:
csv_reader = reader(file)
for row in csv_reader:
if not row:
continue
dataset.append(row)
return dataset
```
## 2.Convert string column to float
```
def str_column_to_float(dataset, column):
for row in dataset:
row[column] = float(row[column].strip())
```
## 3.Convert string column to integer
```
def str_column_to_int(dataset, column):
class_values = [row[column] for row in dataset]
unique = set(class_values)
lookup = dict()
for i, value in enumerate(unique):
lookup[value] = i
for row in dataset:
row[column] = lookup[row[column]]
return lookup
```
## 4.Split a dataset into k folds
```
def cross_validation_split(dataset, n_folds):
dataset_split = list()
dataset_copy = list(dataset)
fold_size = int(len(dataset) / n_folds)
for _ in range(n_folds):
fold = list()
while len(fold) < fold_size:
index = randrange(len(dataset_copy))
fold.append(dataset_copy.pop(index))
dataset_split.append(fold)
return dataset_split
```
## 5.Calculate accuracy percentage
```
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
```
## 6.Evaluate an algorithm using a cross validation split
```
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
folds = cross_validation_split(dataset, n_folds)
scores = list()
for fold in folds:
train_set = list(folds)
train_set.remove(fold)
train_set = sum(train_set, [])
test_set = list()
for row in fold:
row_copy = list(row)
test_set.append(row_copy)
row_copy[-1] = None
predicted = algorithm(train_set, test_set, *args)
actual = [row[-1] for row in fold]
accuracy = accuracy_metric(actual, predicted)
scores.append(accuracy)
return scores
```
## 7.Split the dataset by class values, returns a dictionary
```
def separate_by_class(dataset):
separated = dict()
for i in range(len(dataset)):
vector = dataset[i]
class_value = vector[-1]
if (class_value not in separated):
separated[class_value] = list()
separated[class_value].append(vector)
return separated
```
## 8.Calculate mean, standard deviation for a list of numbers
```
# Calculate the mean of a list of numbers
def mean(numbers):
return sum(numbers)/float(len(numbers))
# Calculate the standard deviation of a list of numbers
def stdev(numbers):
avg = mean(numbers)
variance = sum([(x-avg)**2 for x in numbers]) / float(len(numbers)-1)
return sqrt(variance)
```
## 9.Calculate the mean, stdev and count for each column in a dataset
```
def summarize_dataset(dataset):
summaries = [(mean(column), stdev(column), len(column)) for column in zip(*dataset)]
del(summaries[-1])
return summaries
```
## 10.Split dataset by class then calculate statistics for each row
```
def summarize_by_class(dataset):
separated = separate_by_class(dataset)
summaries = dict()
for class_value, rows in separated.items():
summaries[class_value] = summarize_dataset(rows)
return summaries
```
## 11.Calculate the Gaussian probability distribution function for x
```
def calculate_probability(x, mean, stdev):
exponent = exp(-((x-mean)**2 / (2 * stdev**2 )))
return (1 / (sqrt(2 * pi) * stdev)) * exponent
```
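As a quick sanity check, at x equal to the mean the exponent term is 1, so the density reduces to $1/(\sqrt{2\pi}\,\sigma)$. The example below is illustrative and repeats the definition so the cell runs on its own:

```python
from math import sqrt, exp, pi

# Gaussian probability density, as defined above (repeated to be self-contained)
def calculate_probability(x, mean, stdev):
    exponent = exp(-((x - mean)**2 / (2 * stdev**2)))
    return (1 / (sqrt(2 * pi) * stdev)) * exponent

# at x == mean with stdev 1, the density is 1/sqrt(2*pi) ~= 0.3989
print(round(calculate_probability(1.0, 1.0, 1.0), 4))  # 0.3989
```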
## 12.Calculate the probabilities of predicting each class for a given row
```
def calculate_class_probabilities(summaries, row):
total_rows = sum([summaries[label][0][2] for label in summaries])
probabilities = dict()
for class_value, class_summaries in summaries.items():
probabilities[class_value] = summaries[class_value][0][2]/float(total_rows)
for i in range(len(class_summaries)):
mean, stdev, _ = class_summaries[i]
probabilities[class_value] *= calculate_probability(row[i], mean, stdev)
return probabilities
```
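To see the prior-times-likelihood computation concretely, here is a toy check with made-up single-feature summaries (the numbers are illustrative, not from the iris data; the two functions are repeated so the cell runs on its own):

```python
from math import sqrt, exp, pi

def calculate_probability(x, mean, stdev):
    exponent = exp(-((x - mean)**2 / (2 * stdev**2)))
    return (1 / (sqrt(2 * pi) * stdev)) * exponent

def calculate_class_probabilities(summaries, row):
    total_rows = sum([summaries[label][0][2] for label in summaries])
    probabilities = dict()
    for class_value, class_summaries in summaries.items():
        # prior: class count / total count
        probabilities[class_value] = summaries[class_value][0][2] / float(total_rows)
        for i in range(len(class_summaries)):
            mean, stdev, _ = class_summaries[i]
            # multiply in the per-feature Gaussian likelihood
            probabilities[class_value] *= calculate_probability(row[i], mean, stdev)
    return probabilities

# two classes, one feature each: (mean, stdev, count)
summaries = {0: [(2.0, 1.0, 3)], 1: [(5.0, 1.0, 3)]}
probs = calculate_class_probabilities(summaries, [2.0])
# class 0 is far more likely for a feature value of 2.0
print(probs[0] > probs[1])  # True
```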
## 13.Predict the class for a given row
```
def predict(summaries, row):
probabilities = calculate_class_probabilities(summaries, row)
best_label, best_prob = None, -1
for class_value, probability in probabilities.items():
if best_label is None or probability > best_prob:
best_prob = probability
best_label = class_value
return best_label
```
## 14.Naive Bayes Algorithm
```
def naive_bayes(train, test):
summarize = summarize_by_class(train)
predictions = list()
for row in test:
output = predict(summarize, row)
predictions.append(output)
return(predictions)
```
## 15.Test Naive Bayes on Iris Dataset
```
seed(1)
filename = 'iris.csv'
dataset = load_csv(filename)
for i in range(len(dataset[0])-1):
str_column_to_float(dataset, i)
# convert class column to integers
str_column_to_int(dataset, len(dataset[0])-1)
# evaluate algorithm
n_folds = 5
scores = evaluate_algorithm(dataset, naive_bayes, n_folds)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores)/float(len(scores))))
```
| github_jupyter |
```
from molmap.model import RegressionEstimator, MultiClassEstimator, MultiLabelEstimator
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from chembench import dataset
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from molmap import MolMap
from molmap import feature
def Rdsplit(df, random_state = 888, split_size = [0.8, 0.1, 0.1]):
base_indices = np.arange(len(df))
base_indices = shuffle(base_indices, random_state = random_state)
nb_test = int(len(base_indices) * split_size[2])
nb_val = int(len(base_indices) * split_size[1])
test_idx = base_indices[0:nb_test]
valid_idx = base_indices[(nb_test):(nb_test+nb_val)]
train_idx = base_indices[(nb_test+nb_val):len(base_indices)]
print(len(train_idx), len(valid_idx), len(test_idx))
return train_idx, valid_idx, test_idx
data = dataset.load_BACE()
```
## Pre-fit your molmap object
```
mp1 = MolMap(ftype='descriptor',metric='cosine',)
mp1.fit(verbose=0, method='umap', min_dist=0.1, n_neighbors=15,)
bitsinfo = feature.fingerprint.Extraction().bitsinfo
flist = bitsinfo[bitsinfo.Subtypes.isin(['PubChemFP'])].IDs.tolist()
mp2 = MolMap(ftype = 'fingerprint', fmap_type = 'scatter', flist = flist) #
mp2.fit(method = 'umap', min_dist = 0.1, n_neighbors = 15, verbose = 0)
```
## Extract Fmaps
```
X1 = mp1.batch_transform(data.x)
X2 = mp2.batch_transform(data.x)
Y = pd.get_dummies(data.df['Class']).values
Y.shape
train_idx, valid_idx, test_idx = Rdsplit(data.df, random_state = 888)
trainX = (X1[train_idx], X2[train_idx])
validX = (X1[valid_idx], X2[valid_idx])
testX = (X1[test_idx], X2[test_idx])
trainY = Y[train_idx]
validY = Y[valid_idx]
testY = Y[test_idx]
# define your model, note that if your task is a multi-label problem, you should use MultiLabelEstimator
clf = MultiClassEstimator(n_outputs=trainY.shape[1],
fmap_shape1 = X1.shape[1:],
fmap_shape2 = X2.shape[1:],
metric='ROC',
dense_layers = [128, 64], gpuid = 0)
# fit your model
clf.fit(trainX, trainY, validX, validY )
```
## plot training history
```
pd.DataFrame(clf.history.history)[['accuracy', 'val_accuracy']].rename(columns={'accuracy':'auc', 'val_accuracy':'val_auc'}).plot()
pd.DataFrame(clf.history.history)[['loss', 'val_loss']].plot()
print('Best epochs: %.2f, Best loss: %.2f' % (clf._performance.best_epoch, clf._performance.best))
```
# performance on test set
```
auc = clf.score(testX, testY)
auc
df_pred = pd.DataFrame([testY[:, 0], clf.predict_proba(testX)[:,0]]).T
df_pred.columns=['y_true', 'y_pred_prob']
df_pred.head(5)
df_pred.plot(figsize=(16, 4))
df_pred.corr()
```
| github_jupyter |
```
# this notebook takes the scraped Wikipedia data (JSON files) and
# transforms it into a dataframe for guessing
import json
import os
import numpy as np
from collections import defaultdict
all_persons_dict = defaultdict(dict)
jsns = filter(lambda s: s.startswith("grab_musicians"),os.listdir("musical_scraper"))
for fname in jsns:
with open(os.path.join("musical_scraper",fname)) as fin:
js = json.load(fin)
for key,attrs_dict in js.items():
if "Born" not in attrs_dict:
continue
if key not in all_persons_dict:
attrs_dict["_list"] = [attrs_dict["_list"]]
all_persons_dict[key] = attrs_dict
else:
all_persons_dict[key]["_list"].append( attrs_dict["_list"])
from collections import Counter
binary_feature_counts = Counter()
for entity in all_persons_dict.values():
for key in entity.keys():
binary_feature_counts[key]+=1
categorical_features = {
'Genres':[u'Genres',u'Genre(s)',u'Stylistic origins',u'Cultural origins'],
'Origin':[u'Origin',u'Country',],
'Occupation':[ u'Occupation(s)',u'Occupation',],
'Instruments':[u'Instruments',u'Typical instruments',]
}
binary_feature_counts["_page_url"] = 0
binary_feature_counts["_list"] = 0
binary_feature_counts["Years active"] = 0
binary_feature_counts["Born"] = 0
binary_feature_counts["Associated acts"] = 0
for name,f_list in categorical_features.items():
for feature in f_list:
binary_feature_counts[feature] = 0
binary_features = dict(binary_feature_counts.most_common(100)).keys()
from collections import defaultdict,Counter
feature_factor_freqs = defaultdict(int)
for entity in all_persons_dict.values():
attributes = set()
for feature_name in binary_features:
if feature_name in entity:
attributes.add(feature_name+':is_known')
for feature_name,keys in categorical_features.items():
for key in keys:
vals = entity.get(key,'').lower().replace(',',' ')
vals = filter(len,vals.split())
for val in vals:
attributes.add(feature_name+":"+val)
#_list
list_urls = entity.get("_list",'')
for list_url in list_urls:
category=list_url.split('/')[-1]
attributes.add("category:"+category)
#activity
yrs_active = entity.get(u'Years active','').lower()
yrs_active = yrs_active.replace(',',' ').replace(u'\u2013',' ').replace("-",' ').replace("present","2016")
yrs_active = filter(len,yrs_active.split())
yrs_active = filter(lambda word: word.isdigit() and len(word)==4
,yrs_active)
if len(yrs_active) >=2:
start,end = int(yrs_active[0]), int(yrs_active[-1])
for decade in np.arange(start//10,end//10+1)*10:
if decade < 1920:
decade = "before_1920"
attributes.add("decades_active:"+str(decade))
attributes.add("first_activity:"+str(start//3*3))
if end == 2016:
end = "still_active"
else:
end = end//3*3
attributes.add("last_activity:"+str(end))
for attr in attributes:
feature_factor_freqs[attr]+=1
feature_factor_freqs = Counter(
dict(
filter( lambda (key,val): val>100,
feature_factor_freqs.items())
)
)
accepted_features = feature_factor_freqs.keys()
feature_to_id = {f:i for i,f in enumerate(accepted_features)}
len(accepted_features)
from collections import defaultdict,Counter
rows = []
for entity in all_persons_dict.values():
attributes = set()
for feature_name in binary_features:
if feature_name in entity:
attributes.add(feature_name+':is_known')
for feature_name,keys in categorical_features.items():
for key in keys:
vals = entity.get(key,'').lower().replace(',',' ')
vals = filter(len,vals.split())
for val in vals:
attributes.add(feature_name+":"+val)
#_list
list_urls = entity.get("_list",'')
for list_url in list_urls:
category=list_url.split('/')[-1]
attributes.add("category:"+category)
#activity
yrs_active = entity.get(u'Years active','').lower()
yrs_active = yrs_active.replace(',',' ').replace(u'\u2013',' ').replace("-",' ').replace("present","2016")
yrs_active = filter(len,yrs_active.split())
yrs_active = filter(lambda word: word.isdigit() and len(word)==4
,yrs_active)
if len(yrs_active) >=2:
start,end = int(yrs_active[0]), int(yrs_active[-1])
for decade in np.arange(start//10,end//10+1)*10:
if decade < 1920:
decade = "before_1920"
attributes.add("decades_active:"+str(decade))
attributes.add("first_activity:"+str(start//3*3))
if end == 2016:
end = "still_active"
else:
end = end//3*3
attributes.add("last_activity:"+str(end))
attribute_ids = filter(lambda x:x is not None,
map(feature_to_id.get, attributes)
)
row = np.zeros(len(accepted_features),dtype=np.bool)
row[attribute_ids] = True
rows.append(row)
accepted_features = map(lambda s: s.encode('utf8'),accepted_features)
import pandas as pd
df = pd.DataFrame(rows,columns = accepted_features)
df.to_csv("musicians_categorized.csv")
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=[20,5])
# feature_factor_freqs is keyed by strings like "first_activity:1980",
# so gather the per-year counts before plotting
first_activity = {int(k.split(':')[1]): v for k, v in feature_factor_freqs.items()
                  if k.startswith('first_activity:')}
_ = plt.plot(*zip(*sorted(first_activity.items())))
plt.xlim(1500,2100)
#feature_factor_freqs['Country']
```
| github_jupyter |
# Topics: Publisher and Subscriber in Python
In this workspace, there are three notebooks:
* Publisher (this one)
* Subscriber (right side)
* roscore (bottom)
Please follow the instructions in this notebook: you will be advised to switch to the other notebooks when necessary.
## The Publisher Node
"Node" is the ROS term for an executable that is connected to the ROS network. Here we'll create the publisher ("talker") node which will broadcast a message. But first, make sure that [roscore](./roscore.ipynb) is running in the bottom tab of this workspace.
```
import rospy
from std_msgs.msg import String
```
You need to import rospy if you are writing a ROS Node. The `std_msgs.msg` import is so that we can reuse the `std_msgs/String` message type (a simple string container) for publishing.
```
pub = rospy.Publisher('chatter', String, queue_size=10)
```
This section of code defines the talker's interface to the rest of ROS.
`pub = rospy.Publisher("chatter", String, queue_size=10)` declares that your node is publishing to the `chatter` topic using the message type `String`. `String` here is actually the class `std_msgs.msg.String`. The `queue_size` argument limits the amount of queued messages if any subscriber is not receiving them fast enough.
```
rospy.init_node('talker', anonymous=True)
```
The next line, `rospy.init_node(NAME, ...)`, is very important as it tells rospy the name of your node -- until rospy has this information, it cannot start communicating with the ROS [Master](http://wiki.ros.org/Master). In this case, your node will take on the name `talker`. NOTE: the name must be a [base name](http://wiki.ros.org/Names), i.e. it cannot contain any slashes "/".
`anonymous = True` ensures that your node has a unique name by adding random numbers to the end of NAME. [Refer to Initialization and Shutdown - Initializing your ROS Node](http://wiki.ros.org/rospy/Overview/Initialization%20and%20Shutdown#Initializing_your_ROS_Node) in the `rospy` documentation for more information about node initialization options.
```
rate = rospy.Rate(1) # 1 Hz
```
This line creates a `Rate` object rate. With the help of its method `sleep()`, it offers a convenient way for looping at the desired rate. With its argument of 1, we should expect to go through the loop 1 time per second (as long as our processing time does not exceed one second!)
```
def talker():
count = 1
msg = String()
while not rospy.is_shutdown():
msg.data = "hello world %d" % count
pub.publish(msg)
count += 1
rate.sleep()
```
This loop is a fairly standard rospy construct: checking the `rospy.is_shutdown()` flag and then doing work. You have to check `is_shutdown()` to check if your program should exit (e.g. if there is a `Ctrl-C` or otherwise). In this case, the "work" is a call to `pub.publish(msg)` that publishes a string to our chatter topic. The loop calls `rate.sleep()`, which sleeps just long enough to maintain the desired rate through the loop.
(You may also run across `rospy.sleep()` which is similar to `time.sleep()` except that it works with simulated time as well (see [Clock](http://wiki.ros.org/Clock)).)
Before starting to publish, let's first move to the [Subscriber Node](Subscriber.ipynb) in the right tab of the workspace.
You may now start to send messages with this last bit of code:
```
try:
talker()
except (KeyboardInterrupt, rospy.ROSInterruptException):
pass
```
This catches a `KeyboardInterrupt` or a `rospy.ROSInterruptException` exception, which can be thrown by `rospy.sleep()` and `rospy.Rate.sleep()` methods when `Ctrl-C` is pressed or your Node is otherwise shutdown. The reason this exception is raised is so that you don't accidentally continue executing code after the `sleep()`.
When you run the talker, messages will be displayed in the subscriber window.
After a while, you may stop the talker loop by pressing the `Interrupt` button of this notebook.
Finally, let's publish a single message with the commands:
```
msg = String()
msg.data = "ROS-Industrial rocks!"
pub.publish(msg)
```
Feel free to edit the data before sending the message, and verify that the content displayed in the subscriber window is the same as sent by the publisher.
For going back to the main page, please close the other tabs and click on the following link:
[Go back to the main page](../../README.ipynb)
| github_jupyter |
# SMO
When we solved the dual problem above we mentioned the SMO algorithm without going into detail; let's take a closer look now.
$\underset{a}{max}(\sum_{i=1}^{n}a_i-\frac{1}{2}\sum_{i,j=1}^{n}a_ia_jy_iy_j<x_i,x_j>)$
s.t., $0\leqslant a_i \leqslant C,i=1,...,n$
$\sum_{i=1}^{n}a_iy_i=0$
This is equivalent to (the equivalence is not easy to derive; it helps to start from the result and work backwards):
$\underset{a}{min}\Psi(\vec{a})=\underset{a}{min}\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}Y_iY_jK(\vec{x_i},\vec{x_j})a_ia_j-\sum_{i=1}^{N}a_i$
s.t., $0\leqslant a_i \leqslant C,\forall {i}$
$\sum_{i=1}^{N}a_iy_i=0$
In 1998, John C. Platt of Microsoft Research proposed a solution to this problem in the paper [Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf): the SMO algorithm. It quickly became one of the fastest quadratic-programming optimization algorithms, performing especially well for linear SVMs and sparse data.
#### 1. Derivation of the SMO algorithm
First, define the output function mapping features to results:
$u=\vec{W}\cdot\vec{X}-b$
This $u$ is the same as the $f(x)=w^{T}x+b$ we defined earlier.
Next, restate the original optimization problem:
$\underset{w,b}{min}\frac{1}{2}||\vec{w}||^{2}$ s.t. $y_i(\vec{w}\cdot\vec{x_i}-b) \geqslant1,\forall {i}$
Taking derivatives gives:
$\vec{w}=\sum_{i=1}^{N}y_ia_i\vec{x_i},b=\vec{w}\cdot\vec{x_k}-y_k$ for some $a_k>0$
Substituting into $u=\vec{W}\cdot\vec{X}-b$ gives $u=\sum_{j=1}^{N}y_ja_jK(\vec{x_j},\vec{x})-b$
After introducing Lagrange multipliers, the problem transforms to:
$\underset{a}{min}\Psi(\vec{a})=\underset{a}{min}\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}Y_iY_jK(\vec{x_i},\vec{x_j})a_ia_j-\sum_{i=1}^{N}a_i$
s.t., $ a_i \geqslant 0,\forall {i}$
$\sum_{i=1}^{N}a_iy_i=0$
After adding slack variables, the model becomes:
$\underset{w,b,\xi}{min}\frac{1}{2}||\vec{w}||^{2}+C\sum_{i=1}^{N}\xi_i$ s.t. $y_i(\vec{w}\cdot\vec{x_i}-b)\geqslant 1-\xi_i,\forall {i},\quad \xi_i \geqslant 0,\forall {i}$
Finally, our problem becomes:
$\underset{a}{min}\Psi(\vec{a})=\underset{a}{min}\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}Y_iY_jK(\vec{x_i},\vec{x_j})a_ia_j-\sum_{i=1}^{N}a_i$
s.t., $0\leqslant a_i \leqslant C,\forall {i}$
$\sum_{i=1}^{N}a_iy_i=0$
Now we need to minimize the objective over $a_i=\{a_1,a_2,...,a_n\}$. Following our earlier idea, could we fix all parameters except $a_1$ and optimize over $a_1$ alone? Clearly not, because we now have an extra constraint:
$\sum_{i=1}^{N}a_iy_i=0$
Once all parameters other than $a_1$ are fixed, $a_1$ itself is also fixed:
$a_1y_1=-\sum_{i=2}^{N}a_iy_i$
So we must instead fix all parameters except a pair $a_1,a_2$; then $a_1,a_2$ are free to move, and the objective becomes a function of $a_1,a_2$ only. By repeatedly picking two multipliers from the set, solving for them, and iterating, we eventually reach the solution of the original problem.
Since all multipliers other than $a_1,a_2$ are fixed, their weighted sum is a constant, and the constraint $\sum_{i=1}^{N}a_iy_i=0$ gives:
$a_1y_1+a_2y_2 + \sum_{i=3}^{N}a_iy_i=0 \Rightarrow a_1y_1+a_2y_2 = \zeta$
where $\zeta=-\sum_{i=3}^{N}a_iy_i$ is a constant.
The objective of the resulting two-variable subproblem of the dual can then be written as (this form is elaborated below):
$\Psi=\frac{1}{2}K_{11}a_1^{2}+\frac{1}{2}K_{22}a_{2}^{2}+sK_{12}a_1a_2+y_1a_1v_1+y_2a_2v_2-a_2+\Psi_{constant} $
where
$K_{ij}=K(\vec{x_i},\vec{x_j}),$
$v_i=\sum_{j=3}^{N}y_ja_jK_{ij}=u_i+b-y_1a_1K_{1i}-y_2a_2K_{2i}$
To solve this subproblem, the first question is how to choose $a_1,a_2$. From the KKT conditions, the value of each $a_i$ in the objective has the following meaning:
$a_i=0 \Leftrightarrow y_iu_i \geqslant 1,$
$0<a_i<C \Leftrightarrow y_iu_i=1,$
$a_i=C \Leftrightarrow y_iu_i \leqslant 1.$
Here the $a_i$ are still the Lagrange multipliers:
1. The first case means $x_i$ is correctly classified, lying beyond the margin boundary (recall that correctly classified points satisfy $y_if(x_i) \geqslant 0$);
2. The second case means $x_i$ is a support vector, lying exactly on the margin boundary;
3. The third case means $x_i$ lies between the two margin boundaries.
The optimal solution must satisfy the KKT conditions, i.e. all three conditions above; the following situations violate them:
1. $y_iu_i \leqslant 1$ but $a_i<C$;
2. $y_iu_i \geqslant 1$ but $a_i>0$;
3. $y_iu_i =1$ but $a_i=0$ or $a_i=C$.
That is, any $a_i$ violating the KKT conditions needs to be updated; this is the first selection criterion.
We now pick two multipliers $a_1,a_2$, with values $a_1^{old},a_2^{old}$ before the update and $a_1^{new},a_2^{new}$ after. Because of the second constraint (namely $\sum_{i=1}^{N}a_iy_i=0$), we must guarantee:
$a_1^{new}y_1+a_2^{new}y_2=a_1^{old}y_1+a_2^{old}y_2=\zeta$
Here $\zeta$ is a constant; both sides of the equation equal the same constant.
Since the two multipliers are hard to solve for simultaneously, we first solve for the second multiplier $a_2$ to obtain $a_2^{new}$, and then express $a_1^{new}$ in terms of $a_2^{new}$ (solving for the first one first would work equally well).
To solve for $a_2^{new}$, we first need its feasible range. Let its upper and lower bounds be H and L; then:
$L\leqslant a_2^{new} \leqslant H$
Next, we combine the two constraints $0 \leqslant a_i \leqslant C,i=1,...,n$ and $a_1^{new}y_1+a_2^{new}y_2=a_1^{old}y_1+a_2^{old}y_2=\zeta$ to find the range of $a_2^{new}$.
Recall that y only takes the values 1 or -1.
When $y_1 \neq y_2$, the constraint $a_1^{new}y_1+a_2^{new}y_2=a_1^{old}y_1+a_2^{old}y_2=\zeta$ gives:
- $a_1^{old}-a_2^{old}=\zeta$或者$a_2^{old}-a_1^{old}=\zeta$
Using $0 \leqslant a_i \leqslant C,i=1,...,n$ we can plot the feasible region:

If $a_2<a_1$, the upper bound of $a_2$ is clearly the $C-\zeta$ of the corner point $(C,C-\zeta)$.
If $a_2>a_1$, the upper bound of $a_2$ is clearly the $C$ of the corner point $(C+\zeta,C)$.
Combining the two cases into a single expression: $H=min(C,C-\zeta)$.
That is, when $a_2<a_1$ we take the smaller value $C-\zeta$, and when $a_2>a_1$ we likewise take the smaller value $C$.
Similarly for the lower bound of $a_2$:
If $a_2<a_1$, the lower bound of $a_2$ is clearly 0.
If $a_2>a_1$, the lower bound of $a_2$ is clearly $-\zeta$.
Combining the two: $L=max(0,-\zeta)$.
When $y_1=y_2$,
the same constraint $a_1^{new}y_1+a_2^{new}y_2=a_1^{old}y_1+a_2^{old}y_2=\zeta$ gives:
$a_1^{old}+a_2^{old}=\zeta$, so $L=max(0,\zeta-C),H=min(C,\zeta)$

Thus, depending on whether $y_1$ and $y_2$ have opposite or equal signs, the upper and lower bounds of $a_2^{new}$ are:
$\left\{\begin{matrix}
L=max(0,a_2^{old}-a_1^{old}),H=min(C,C+a_2^{old}-a_1^{old}) &if y1\neq y_2 \\
L=max(0,a_2^{old}+a_1^{old}-C),H=min(C,a_2^{old}+a_1^{old}) &if y_1 = y_2
\end{matrix}\right.$
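As a code-level companion to the case analysis above, the bounds can be computed as follows (a minimal Python sketch; the function name is illustrative):

```python
def compute_bounds(a1_old, a2_old, y1, y2, C):
    """Feasible interval [L, H] for the updated a2, as derived above."""
    if y1 != y2:
        # opposite labels: a2 - a1 is conserved
        L = max(0.0, a2_old - a1_old)
        H = min(C, C + a2_old - a1_old)
    else:
        # equal labels: a2 + a1 is conserved
        L = max(0.0, a2_old + a1_old - C)
        H = min(C, a2_old + a1_old)
    return L, H

print(compute_bounds(0.25, 0.5, 1, -1, 1.0))  # (0.25, 1.0)
```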
Recalling the second constraint $a_1^{new}y_1+a_2^{new}y_2=a_1^{old}y_1+a_2^{old}y_2=\zeta$ and multiplying both sides by $y_1$, we obtain
$a_1+sa_2=a_1^{*}+sa_2^{*}=w$
where $w=-y_1\sum_{i=3}^{n}y_ia_i^{*}$
Hence $a_1$ can be expressed in terms of $a_2$ as $a_1=w-sa_2$, turning the subproblem objective into a function of $a_2$ alone:
$\Psi=\frac{1}{2}K_{11}a_1^{2}+\frac{1}{2}K_{22}a_{2}^{2}+sK_{12}a_1a_2+y_1a_1v_1+y_2a_2v_2-a_2+\Psi_{constant} $
Differentiating with respect to $a_2$:
$\frac{\partial \Psi}{\partial a_2}=-sK_{11}(w-sa_2)+K_{22}a_2-K_{12}a_2+sK_{12}(w-sa_2)-y_2v_1+s+y_2v_2-1=0$
Simplifying:
$a_2(K_{11}+K_{22}-2K_{12})=s(K_{11}-K_{12})w+y_2(v_1-v_2)+1-s$
Then substituting $s=y_1y_2$, $a_1+sa_2=a_1^{*}+sa_2^{*}=w$,
$K_{ij}=K(\vec{x_i},\vec{x_j})$,
$v_i=\sum_{j=3}^{N}y_ja_jK_{ij}=u_i+b-y_1a_1K_{1i}-y_2a_2K_{2i}$
into the above yields (since the constraint $0\leqslant a_2 \leqslant C$ has not yet been applied, this solution is unclipped, denoted by the superscript unc):
$a_2^{new,unc}(K_{11}+K_{22}-2K_{12})=a_2^{old}(K_{11}+K_{22}-2K_{12})+y_2(u_1-u_2+y_2-y_1)$
Let $E_i=u_i-y_i$ (the difference between the predicted and true values) and $\eta =K(\vec{x_1},\vec{x_1})+K(\vec{x_2},\vec{x_2})-2K(\vec{x_1},\vec{x_2})$.
Dividing both sides by $\eta$ gives the solution of the single-variable problem in $a_2$:
$a_2^{new,unc}=a_2^{old}+\frac{y_2(E_1-E_2)}{\eta}$
This solves the unconstrained single-variable problem for $a_2$; imposing the condition $0\leqslant a_2 \leqslant C$, the clipped solution is:
$a_2^{new}=\left\{ \begin{matrix}
H,&a_2^{new,unc}>H
\\ a_2^{new,unc},&L\leqslant a_2^{new,unc}\leqslant H
\\ L,&a_2^{new,unc} \leqslant L
\end{matrix}\right.$
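The clipping rule above can be written as a small helper (a sketch; the name is illustrative):

```python
def clip_alpha(a2_unc, L, H):
    """Clip the unconstrained solution a2_unc into the feasible box [L, H]."""
    if a2_unc > H:
        return H
    if a2_unc < L:
        return L
    return a2_unc

print(clip_alpha(1.7, 0.0, 1.0))   # 1.0
print(clip_alpha(-0.3, 0.0, 1.0))  # 0.0
print(clip_alpha(0.4, 0.0, 1.0))   # 0.4
```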
Once $a_2^{new}$ is found, $a_1^{new}=a_1^{old}+y_1y_2(a_2^{old}-a_2^{new})$.
So how do we choose $a_1,a_2$?
- For the first multiplier $a_1$, look for examples violating the three KKT conditions just described;
- For the second multiplier $a_2$, pick the multiplier satisfying $max|E_i-E_j|$.
The threshold b is updated according to:
$\left\{ \begin{matrix}
b_1&if \ 0<a_1^{new}<C,
\\ b_2&if \ 0<a_2^{new}<C
\\ \frac{(b_1+b_2)}{2}&otherwise
\end{matrix}\right.$

After each optimization update of the two multipliers, b and the corresponding $E_i$ values must be recomputed.
Finally, with all the $a_i$, y, and b updated, the model is:
$f(x)=\sum_{i=1}^{n}a_iy_i<x_i,x>+b$
## SMO summary
The main steps of SMO are as follows (figures from blogger v_JULY_v):


| github_jupyter |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Inference PyTorch Bert Model with ONNX Runtime on GPU
In this tutorial, you'll learn how to load a Bert model from PyTorch, convert it to ONNX, and run inference on it with high performance using ONNX Runtime and an NVIDIA GPU. In the following sections, we use the Bert model trained on the Stanford Question Answering Dataset (SQuAD) as an example. The Bert SQuAD model is used in question-answering scenarios, where the answer to every question is a segment of text from the corresponding reading passage, or the question might be unanswerable.
This notebook is for GPU inference. For CPU inference, please look at another notebook [Inference PyTorch Bert Model with ONNX Runtime on CPU](PyTorch_Bert-Squad_OnnxRuntime_CPU.ipynb).
## 0. Prerequisites ##
It requires your machine to have a GPU, and a python environment with [PyTorch](https://pytorch.org/) installed before running this notebook.
#### GPU Environment Setup using AnaConda
First, install [AnaConda](https://www.anaconda.com/distribution/) on the target machine and open an Anaconda prompt window once installation completes. Then run the following commands to create a conda environment. This notebook is tested with PyTorch 1.5.0 and OnnxRuntime 1.3.0.
```console
conda create -n gpu_env python=3.6
conda activate gpu_env
conda install -c anaconda ipykernel
conda install -c conda-forge ipywidgets
python -m ipykernel install --user --name=gpu_env
jupyter notebook
```
Finally, launch Jupyter Notebook and you can choose gpu_env as kernel to run this notebook.
Onnxruntime-gpu requires specific versions of CUDA and cuDNN. You can find the requirements [here](http://www.onnxruntime.ai/docs/how-to/install.html). Remember to add their directories to the PATH environment variable (see [CUDA and cuDNN Path](#CUDA-and-cuDNN-Path) below).
```
import sys
run_install = False # Only need install once
if run_install:
if sys.platform in ['linux', 'win32']: # Linux or Windows
!{sys.executable} -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
else: # Mac
print("PyTorch 1.9 MacOS Binaries do not support CUDA, install from source instead")
!{sys.executable} -m pip install onnxruntime-gpu==1.8.1 onnx==1.9.0 onnxconverter_common==1.8.1
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==4.8.2
!{sys.executable} -m pip install psutil pytz pandas py-cpuinfo py3nvml coloredlogs wget netron sympy
import torch
import onnx
import onnxruntime
import transformers
print("pytorch:", torch.__version__)
print("onnxruntime:", onnxruntime.__version__)
print("onnx:", onnx.__version__)
print("transformers:", transformers.__version__)
```
## 1. Load Pretrained Bert model ##
We begin by downloading the SQuAD data file and storing it in the specified location.
```
import os
cache_dir = "./squad"
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
predict_file_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
predict_file = os.path.join(cache_dir, "dev-v1.1.json")
if not os.path.exists(predict_file):
import wget
print("Start downloading predict file.")
wget.download(predict_file_url, predict_file)
print("Predict file downloaded.")
```
Let's first define some constant variables.
```
# Whether allow overwriting existing ONNX model and download the latest script from GitHub
enable_overwrite = True
# Total samples to inference, so that we can get average latency
total_samples = 1000
# ONNX opset version
opset_version=11
```
Specify some model configuration variables.
```
# fine-tuned model from https://huggingface.co/models?search=squad
model_name_or_path = "bert-large-uncased-whole-word-masking-finetuned-squad"
max_seq_length = 128
doc_stride = 128
max_query_length = 64
```
Start to load model from pretrained. This step could take a few minutes.
```
# The following code is adapted from HuggingFace transformers
# https://github.com/huggingface/transformers/blob/master/examples/run_squad.py
from transformers import (BertConfig, BertForQuestionAnswering, BertTokenizer)
# Load pretrained model and tokenizer
config_class, model_class, tokenizer_class = (BertConfig, BertForQuestionAnswering, BertTokenizer)
config = config_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=True, cache_dir=cache_dir)
model = model_class.from_pretrained(model_name_or_path,
from_tf=False,
config=config,
cache_dir=cache_dir)
# load some examples
from transformers.data.processors.squad import SquadV1Processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(None, filename=predict_file)
from transformers import squad_convert_examples_to_features
features, dataset = squad_convert_examples_to_features(
examples=examples[:total_samples], # convert enough examples for this notebook
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=False,
return_dataset='pt'
)
```
## 2. Export the loaded model ##
Once the model is loaded, we can export the loaded PyTorch model to ONNX.
```
output_dir = "./onnx"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
export_model_path = os.path.join(output_dir, 'bert-base-cased-squad_opset{}.onnx'.format(opset_version))
import torch
use_gpu = torch.cuda.is_available()
device = torch.device("cuda" if use_gpu else "cpu")
# Get the first example data to run the model and export it to ONNX
data = dataset[0]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
# Set model to inference mode, which is required before exporting the model because some operators behave differently in
# inference and training mode.
model.eval()
model.to(device)
if enable_overwrite or not os.path.exists(export_model_path):
with torch.no_grad():
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model, # model being run
args=tuple(inputs.values()), # model input (or a tuple for multiple inputs)
f=export_model_path, # where to save the model (can be a file or file-like object)
opset_version=opset_version, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input_ids', # the model's input names
'input_mask',
'segment_ids'],
output_names=['start', 'end'], # the model's output names
dynamic_axes={'input_ids': symbolic_names, # variable length axes
'input_mask' : symbolic_names,
'segment_ids' : symbolic_names,
'start' : symbolic_names,
'end' : symbolic_names})
print("Model exported at ", export_model_path)
```
## 3. PyTorch Inference ##
Use PyTorch to evaluate example inputs, for comparison purposes.
```
import time
# Measure latency. Timings from a Jupyter Notebook are not accurate; use a standalone Python script for reliable numbers.
latency = []
with torch.no_grad():
for i in range(total_samples):
data = dataset[i]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
start = time.time()
outputs = model(**inputs)
latency.append(time.time() - start)
print("PyTorch {} Inference time = {} ms".format(device.type, format(sum(latency) * 1000 / len(latency), '.2f')))
```
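The print above reports only the average latency, which can hide tail behavior. As a small addition (not part of the original notebook), here is a helper that summarizes any list of latency measurements; the example values below are synthetic, since real numbers depend on your hardware.

```python
import statistics

def summarize_latency(latency_seconds):
    """Summarize a list of per-inference latencies (in seconds) as milliseconds."""
    ms = [t * 1000.0 for t in latency_seconds]
    return {
        "avg_ms": statistics.mean(ms),    # mean latency
        "p50_ms": statistics.median(ms),  # median (p50)
        "max_ms": max(ms),                # worst case observed
    }

# Synthetic example values; real numbers depend on your hardware.
stats = summarize_latency([0.010, 0.012, 0.011, 0.030, 0.013])
```

Calling `summarize_latency(latency)` after the measurement loop above gives the median and worst case alongside the mean.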
## 4. Inference ONNX Model with ONNX Runtime ##
### CUDA and cuDNN Path
onnxruntime-gpu depends on [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn). The required CUDA version can be found [here](http://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements).
```
# Change to True when onnxruntime (like onnxruntime-gpu 1.0.0 ~ 1.1.2) cannot be imported.
add_cuda_path = False
# For Linux, see https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#environment-setup
# Below is example for Windows
if add_cuda_path:
cuda_dir = 'D:/NVidia/CUDA/v11.0/bin'
cudnn_dir = 'D:/NVidia/CUDA/v11.0/bin'
if not (os.path.exists(cuda_dir) and os.path.exists(cudnn_dir)):
raise ValueError("Please specify correct path for CUDA and cuDNN. Otherwise onnxruntime cannot be imported.")
else:
if cuda_dir == cudnn_dir:
os.environ["PATH"] = cuda_dir + ';' + os.environ["PATH"]
else:
os.environ["PATH"] = cuda_dir + ';' + cudnn_dir + ';' + os.environ["PATH"]
```
Now we are ready to run inference on the model with ONNX Runtime.
```
import psutil
import onnxruntime
import numpy
assert 'CUDAExecutionProvider' in onnxruntime.get_available_providers()
device_name = 'gpu'
sess_options = onnxruntime.SessionOptions()
# Optional: store the optimized graph and view it using Netron to verify that model is fully optimized.
# Note that this will increase session creation time so enable it for debugging only.
sess_options.optimized_model_filepath = os.path.join(output_dir, "optimized_model_{}.onnx".format(device_name))
# Please change the value according to best setting in Performance Test Tool result.
sess_options.intra_op_num_threads=psutil.cpu_count(logical=True)
session = onnxruntime.InferenceSession(export_model_path, sess_options)
latency = []
for i in range(total_samples):
data = dataset[i]
# TODO: use IO Binding (see https://www.onnxruntime.ai/python/api_summary.html) to improve performance.
ort_inputs = {
'input_ids': data[0].cpu().reshape(1, max_seq_length).numpy(),
'input_mask': data[1].cpu().reshape(1, max_seq_length).numpy(),
'segment_ids': data[2].cpu().reshape(1, max_seq_length).numpy()
}
start = time.time()
ort_outputs = session.run(None, ort_inputs)
latency.append(time.time() - start)
print("OnnxRuntime {} Inference time = {} ms".format(device_name, format(sum(latency) * 1000 / len(latency), '.2f')))
```
We can compare the outputs of PyTorch and ONNX Runtime. Some results are not exactly equal because ONNX Runtime uses approximations in its CUDA optimizations. Based on our evaluation on the SQuAD data set, the F1 score is on par for models before and after optimization.
```
print("***** Verifying correctness *****")
for i in range(2):
print('PyTorch and ONNX Runtime output {} are close:'.format(i), numpy.allclose(ort_outputs[i], outputs[i].cpu(), rtol=1e-02, atol=1e-02))
diff = ort_outputs[i] - outputs[i].cpu().numpy()
max_diff = numpy.max(numpy.abs(diff))
avg_diff = numpy.average(numpy.abs(diff))
print(f'maximum_diff={max_diff} average_diff={avg_diff}')
```
### Inference with Actual Sequence Length
Note that the ONNX model was exported with dynamic-length axes. For best performance, it is recommended to feed in the actual sequence length without padding instead of fixed-length input. Let's see how that applies to this model.
In the example input below, we can see zero padding at the end of the sequence.
```
# An example input (we can see padding). From attention_mask, we can deduce the actual length.
inputs
```
The original sequence length is 128. After removing the padding, the sequence length is reduced. Inputs with smaller sequence lengths need less computation, so inference latency improves.
```
import statistics
latency = []
lengths = []
for i in range(total_samples):
data = dataset[i]
# Instead of using fixed length (128), we can use actual sequence length (less than 128), which helps to get better performance.
actual_sequence_length = sum(data[1].numpy())
lengths.append(actual_sequence_length)
opt_inputs = {
'input_ids': data[0].numpy()[:actual_sequence_length].reshape(1, actual_sequence_length),
'input_mask': data[1].numpy()[:actual_sequence_length].reshape(1, actual_sequence_length),
'segment_ids': data[2].numpy()[:actual_sequence_length].reshape(1, actual_sequence_length)
}
start = time.time()
opt_outputs = session.run(None, opt_inputs)
latency.append(time.time() - start)
print("Average length", statistics.mean(lengths))
print("OnnxRuntime {} Inference time with actual sequence length = {} ms".format(device_name, format(sum(latency) * 1000 / len(latency), '.2f')))
```
Let's compare the output and see whether the results are close.
**Note**: If you use this strategy, run an end-to-end evaluation of both performance and accuracy.
```
print("***** Comparing results with/without paddings *****")
for i in range(2):
print('Output {} are close:'.format(i), numpy.allclose(opt_outputs[i], ort_outputs[i][:,:len(opt_outputs[i][0])], rtol=1e-03, atol=1e-03))
```
## 5. Offline Optimization and Test Tools
It is recommended to try [OnnxRuntime Transformer Model Optimization Tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) on the exported ONNX models. It could help verify whether the model can be fully optimized, and get performance test results.
#### Transformer Optimizer
Although OnnxRuntime can optimize BERT models exported from PyTorch, sometimes a model cannot be fully optimized, for several reasons:
* A new subgraph pattern is generated by new version of export tool, and the pattern is not covered by older version of OnnxRuntime.
* The exported model uses dynamic axes, which makes shape inference of the graph harder and blocks some optimizations from being applied.
* Some optimizations are better done offline, like changing the input tensor type from int64 to int32 to avoid extra Cast nodes, or converting the model to float16 for better performance on V100 or T4 GPUs.
The Python script **optimizer.py** is more flexible in graph pattern matching and model conversion (like float32 to float16). You can also use it to verify whether a BERT model is fully optimized.
In this example, the script introduces optimizations that OnnxRuntime does not apply on its own: SkipLayerNormalization and bias fusion, which are not fused in OnnxRuntime because of the shape-inference issue mentioned above.
The script also reports whether the model is fully optimized; if not, you might need to change the script to fuse some new subgraph pattern.
Example Usage:
```
from onnxruntime.transformers import optimizer
optimized_model_path = './onnx/bert_optimized.onnx'  # choose any output path
# bert-large uses 16 attention heads and hidden size 1024 (bert-base would be 12 and 768)
optimized_model = optimizer.optimize_model(export_model_path, model_type='bert', num_heads=16, hidden_size=1024)
optimized_model.save_model_to_file(optimized_model_path)
```
You can also run it from the command line, as in the examples below:
#### Float32 Model
Let us optimize the ONNX model using the script. The first example outputs a model that stores its weights in float32, the right choice for GPUs without Tensor Cores.
If your GPU (like V100 or T4) has Tensor Cores, jump to the [Float16 Model](#6.-Model-Optimization-with-Float16) section, since float16 will give better performance than the float32 model.
```
optimized_fp32_model_path = './onnx/bert-base-cased-squad_opt_{}_fp32.onnx'.format('gpu' if use_gpu else 'cpu')
!{sys.executable} -m onnxruntime.transformers.optimizer --input $export_model_path --output $optimized_fp32_model_path
```
#### Optimized Graph
We can open the optimized model in [Netron](https://github.com/lutzroeder/netron) to visualize it.
The graph looks like the following:
<img src='images/optimized_bert_gpu.png'>
Sometimes the optimized graph differs slightly. For example, FastGelu is replaced by BiasGelu for CPU inference, and when the option --input_int32 is used, the Cast nodes for the inputs are removed.
```
import netron
# change it to True if want to view the optimized model in browser
enable_netron = False
if enable_netron:
# If you encounter error "access a socket in a way forbidden by its access permissions", install Netron as standalone application instead.
netron.start(optimized_fp32_model_path)
```
### Performance Test Tool
The following will create 1000 random inputs of batch_size 1 and sequence length 128, then measure the average latency and throughput numbers.
Note that the test uses fixed sequence length. If you use [dynamic sequence length](#Inference-with-Actual-Sequence-Length), actual performance depends on the distribution of sequence length.
**Attention**: Latency numbers from Jupyter Notebook are not accurate. See [Additional Info](#7.-Additional-Info) for more details.
```
GPU_OPTION = '--use_gpu' if use_gpu else ''
!{sys.executable} -m onnxruntime.transformers.bert_perf_test --model $optimized_fp32_model_path --batch_size 1 --sequence_length 128 --samples 1000 --test_times 1 $GPU_OPTION
```
Let's load the summary file and take a look. Note that a blank value for OMP_NUM_THREADS or OMP_WAIT_POLICY means the environment variable was not set.
```
import os
import glob
import pandas
latest_result_file = max(glob.glob("./onnx/perf_results_GPU_B1_S128_*.txt"), key=os.path.getmtime)
result_data = pandas.read_table(latest_result_file)
print("Float32 model perf results from", latest_result_file)
# Remove some columns that have same values for all rows.
columns_to_remove = ['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu']
result_data.drop(columns_to_remove, axis=1, inplace=True)
result_data
```
From the result above, we can see that latency is very close across the different settings. The default setting (intra_op_num_threads=0, with OMP_NUM_THREADS and OMP_WAIT_POLICY unset) performs best.
### Model Results Comparison Tool
When a BERT model is optimized, some approximations are used in the calculations. If your BERT model has three inputs, the script compare_bert_results.py can be used for a quick verification. The tool generates some fake input data and compares the inference outputs of the original and optimized models. If the outputs are all close, it is safe to use the optimized model.
For GPU inference, the absolute and relative differences are larger than for CPU inference. Note that a slight difference in output does not impact the final result: in an end-to-end evaluation on the SQuAD data set with a fine-tuned model, the F1 score was almost the same before and after optimization.
```
!{sys.executable} -m onnxruntime.transformers.compare_bert_results --baseline_model $export_model_path --optimized_model $optimized_fp32_model_path --batch_size 1 --sequence_length 128 --samples 100 --rtol 0.01 --atol 0.01 $GPU_OPTION
```
## 6. Model Optimization with Float16
The optimizer.py script has a **--float16** option to convert the model to store its weights in float16. After the conversion, the model can run faster on GPUs with Tensor Cores, like V100 or T4.
Let's run the tools to measure performance on V100. The results show significant improvement: latency is about 3.4 ms for the float32 model and 1.8 ms for the float16 model.
```
optimized_fp16_model_path = './onnx/bert-base-cased-squad_opt_{}_fp16.onnx'.format('gpu' if use_gpu else 'cpu')
!{sys.executable} -m onnxruntime.transformers.optimizer --input $export_model_path --output $optimized_fp16_model_path --float16
GPU_OPTION = '--use_gpu' if use_gpu else ''
!python -m onnxruntime.transformers.bert_perf_test --model $optimized_fp16_model_path --batch_size 1 --sequence_length 128 --samples 1000 --test_times 1 $GPU_OPTION
import os
import glob
import pandas
latest_result_file = max(glob.glob("./onnx/perf_results_GPU_B1_S128_*.txt"), key=os.path.getmtime)
result_data = pandas.read_table(latest_result_file)
print("Float16 model perf results from", latest_result_file)
# Remove some columns that have same values for all rows.
columns_to_remove = ['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu']
result_data.drop(columns_to_remove, axis=1, inplace=True)
result_data
```
### Throughput Tuning
Some applications need the best throughput under a latency constraint. This can be found by testing the performance of different batch sizes, and the tool can help with that.
Here is an example that checks the performance of multiple batch sizes (1, 2, 4, 8, 16, and 32) with a fixed thread setting.
```
GPU_OPTION = '--use_gpu' if use_gpu else ''
THREAD_SETTING = '--intra_op_num_threads 3'
!{sys.executable} -m onnxruntime.transformers.bert_perf_test --model $optimized_fp16_model_path --batch_size 1 2 4 8 16 32 --sequence_length 128 --samples 1000 --test_times 1 $THREAD_SETTING $GPU_OPTION
import os
import glob
import pandas
latest_result_file = max(glob.glob("./onnx/perf_results_*.txt"), key=os.path.getmtime)
result_data = pandas.read_table(latest_result_file)
print("Float16 model summary from", latest_result_file)
columns_to_remove = ['model', 'graph_optimization_level', 'test_cases', 'test_times', 'use_gpu', 'sequence_length']
columns_to_remove.extend(['intra_op_num_threads'])
result_data.drop(columns_to_remove, axis=1, inplace=True)
result_data
```
## 7. Additional Info
Note that running inside a Jupyter Notebook has a significant impact on performance results. Close Jupyter Notebook and other applications, then run the performance test in a console to get more accurate numbers.
We have a [benchmark script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/run_benchmark.sh). It is recommended to use it to measure the inference speed of OnnxRuntime.
The [OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) can get slightly better performance than the Python API. If you use the C API for inference, you can use OnnxRuntime_Perf_Test.exe built from source to measure performance instead.
Here is the machine configuration that generated the above results. You might get slower or faster result according to your hardware.
```
!{sys.executable} -m onnxruntime.transformers.machine_info --silent
```
# Introduction to web scraping
This workshop is a one-hour beginner's introduction to web scraping.
This notebook deliberately has more content than we can reasonably cover in one hour. **The most important material is in bold**, and we'll focus on that material in person. To get the most out of this workshop, I'd suggest spending some time working through it in full afterwards.
We'll cover the following topics:
[Motivation](#motivation)
_Why would you want to scrape data from the web?_
[How the Web works](#web)
_A high-level appreciation of how the Web works will help us to scrape data effectively._
[Making a request](#request)
_How can we ask other computers on the Internet to send us data using Python?_
[Parsing HTML](#parsing)
_Web pages are just files in a special format. Extracting information out of these files involves parsing HTML._
[Terms of Service](#terms)
_Don't go scraping willy-nilly!_
[Further resources](#resources)
_So you want to learn more about web scraping._
## Motivation<a id='motivation'></a>
It's 2018. The web is everywhere.
* If you want to buy a house, real estate agents have [websites](https://www.wendytlouie.com/) where they list the houses they're currently selling.
* If you want to know whether to wear a rain jacket or shorts, you check the weather on a [website](https://weather.com/weather/tenday/l/Berkeley+CA+USCA0087:1:US).
* If you want to know what's happening in the world, you read the news [online](https://www.sfchronicle.com/).
* If you've forgotten which city is the capital of Australia, you check [Wikipedia](https://en.wikipedia.org/wiki/Australia).
**The point is this: there is an enormous amount of information (also known as data) on the web.**
If we (in our capacities as, for example, data scientists, social scientists, digital humanists, businesses, public servants or members of the public) can get our hands on this information, **we can answer all sorts of interesting questions or solve important problems**.
* Maybe you're studying gender bias in student evaluations of professors. One option would be to scrape ratings from [Rate My Professors](https://www.ratemyprofessors.com/) (provided you follow their [terms of service](https://www.ratemyprofessors.com/TermsOfUse_us.jsp#use))
* Perhaps you want to build an app that shows users articles relating to their specified interests. You could scrape stories from various news websites and then use NLP methods to decide which articles to show which users.
* [Geoff Boeing](https://geoffboeing.com/) and [Paul Waddell](https://ced.berkeley.edu/ced/faculty-staff/paul-waddell) recently published [a great study](https://arxiv.org/pdf/1605.05397.pdf) of the US housing market by scraping millions of Craiglist rental listings. Among other insights, their study shows which metropolitan areas in the US are more or less affordable to renters.
## How the Web works<a id='web'></a>
Here's our high-level description of the web.
**The internet is a bunch of computers connected together.** Some computers are laptops, some are desktops, some are smart phones, some are servers owned by companies. Each computer has its own address on the internet. Using these addresses, **one computer can ask another computer for some information (data). We say that the first computer sends a _request_ to the second computer, asking for some particular information. The second computer sends back a _response_**. The response could include the information requested, or it could be an error message. Perhaps the second computer doesn't have that information any more, or the first computer isn't allowed to access that information.
<img src='../img/computer-network.png' />
We said that there is an enormous amount of information available on the web. When people put information on the web, they generally have two different audiences in mind, two different types of consumers of their information: humans and computers. If they want their information to be used primarily by humans, they'll make a website. This will let them lay out the information in a visually appealing way, choose colours, add pictures, and make the information interactive. If they want their information to be used by computers, they'll make a web API. A web API provides other computers structured access to their data. We won't cover APIs in this workshop, but you should know that i) APIs are very common and ii) if there is an API for a website/data source, you should use that over web scraping. Many data sources that you might be interested in (e.g. social media sites) have APIs.
**Websites are just a bunch of files on one of those computers. They are just plain text files, so you can view them if you want. When you type the address of a website into your browser, your computer sends a request to the computer located at that address. The request says "hey buddy, please send me the file(s) for this website". If everything goes well, the other computer will send back the file(s) in the response**. Every time you navigate to a new website or page in your browser, this process repeats.
<img src='../img/request-response.png' />
**There are three main languages that website files are written in: HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and JavaScript (JS)**. They normally have `.html`, `.css` and `.js` file extensions. Each language (and thus each type of file) serves a different purpose. **HTML files are the ones we care about the most, because they are the ones that contain the text you see on a web page**. CSS files contain the instructions for making the content of an HTML file visually appealing (all the colours, font sizes, border widths, etc.). JavaScript files contain the instructions for making the information on a website interactive (things like changing colour when you click something, or entering data in a form). In this workshop, we're going to focus on HTML.
**It's not too much of a simplification to say:**
\begin{equation}
\textrm{Web scraping} = \textrm{Making a request for a HTML file} + \textrm{Parsing the HTML response}
\end{equation}
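To preview both halves of that equation before we work through them, here is a minimal sketch. The parsing half uses BeautifulSoup, which we'll introduce properly below; to keep the sketch runnable without a network connection, we parse a tiny hardcoded HTML string instead of a live page.

```python
from bs4 import BeautifulSoup

# Step 1 (making a request) would be: html = requests.get(url).text
# Here we use a tiny hardcoded HTML "response" instead.
html = "<html><head><title>Demo page</title></head><body><p>Hello!</p></body></html>"

# Step 2: parse the HTML and pull information out of it.
soup = BeautifulSoup(html, 'html.parser')
title = soup.title.text
```

Everything in the rest of this workshop is a variation on these two steps.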
## Making a request<a id='request'></a>
**The first step in web scraping is to get the HTML of the website we want to scrape. The [requests](http://docs.python-requests.org/en/master/) library is the easiest way to do this in Python.**
```
import requests
url = 'https://en.wikipedia.org/wiki/Canberra'
response = requests.get(url)
```
Great, it looks like everything worked! Let's see our beautiful HTML:
```
response
```
Huh, that's weird. Doesn't look like HTML to me.
What the `requests.get` function returned (and the thing in our `response` variable) was a Response object. It itself isn't the HTML that we wanted, but rather a collection of metadata about the request/response interaction between your computer and the Wikipedia server.
For example, it knows whether the response was successful or not (`response.ok`), how long the whole interaction took (`response.elapsed`), what time the request took place (`response.headers['Date']`) and a whole bunch of other metadata.
```
response.ok
response.headers['Date']
```
**Of course, what we really care about is the HTML content. We can get that from the `Response` object with `response.text`. What we get back is a string of HTML, exactly the contents of the HTML file at the URL that we requested.**
```
html = response.text
print(html[:1000])
```
### Challenge
Get the HTML for [the Wikipedia page about HTML](https://en.wikipedia.org/wiki/HTML).
Print out the first 1000 characters and compare it to the HTML you see when you view the source HTML in your browser.
```
# solution
url = 'https://en.wikipedia.org/wiki/HTML'
response = requests.get(url)
html = response.text
```
### Challenge
Write a function called `get_html` that takes a URL as an argument and returns the HTML contents as a string. Test your function on the page for [Sir Tim Berners-Lee](https://en.wikipedia.org/wiki/Tim_Berners-Lee).
```
# solution
def get_html(url):
response = requests.get(url)
return response.text
url = 'https://en.wikipedia.org/wiki/Tim_Berners-Lee'
html = get_html(url)
```
### Challenge
What happens if the request doesn't go so smoothly? Add a defensive measure to your function to check that the response received was successful.
```
# solution
def get_html(url):
response = requests.get(url)
assert response.ok, "Whoops, this request didn't go as planned!"
return response.text
url = 'https://en.wikipedia.org/wiki/Tim_Berners-Lee'
html = get_html(url)
```
## Parsing HTML<a id='parsing'></a>
**The second step in web scraping is parsing HTML. This is where things can get a little tricky.**
**Imagine you're in the field of education, in fact your specialty is studying higher education institutions. You're wondering how different disciplines change over time. Is it true that disciplines are incorporating more computational techniques as the years go on? Is that true for all disciplines or only some? Can we spot emerging themes across a whole university?**
**To answer these questions, we're going to need data. We're going to collect a dataset of all courses registered at UC Berkeley, not just those being taught this semester but all courses currently approved to be taught. These are listed on [this page](http://guide.berkeley.edu/courses/), called the Academic Guide. Well, actually they're not directly listed on that page. That page lists the departments/programs/units that teach currently approved courses. If we click on each department (for the sake of brevity, I'm just going to call them all "departments"), we can see the list of all courses they're approved to teach. For example, [here's](http://guide.berkeley.edu/courses/aerospc/) the page for Aerospace Studies. We'll call these pages departmental pages.**
### Challenge
View the source HTML of [the page listing all departments](http://guide.berkeley.edu/courses/), and see if you can find the part of the HTML where the departments are listed. There's a lot of other stuff in the file that we don't care too much about. You could try `Ctrl-F`ing for the name of a department you can see on the webpage.
**Solution**
You should see something like this:
```
<div id="atozindex">
<h2 class="letternav-head" id='A'><a name='A'>A</a></h2>
<ul>
<li><a href="/courses/aerospc/">Aerospace Studies (AEROSPC)</a></li>
<li><a href="/courses/africam/">African American Studies (AFRICAM)</a></li>
<li><a href="/courses/a,resec/">Agricultural and Resource Economics (A,RESEC)</a></li>
<li><a href="/courses/amerstd/">American Studies (AMERSTD)</a></li>
<li><a href="/courses/ahma/">Ancient History and Mediterranean Archaeology (AHMA)</a></li>
<li><a href="/courses/anthro/">Anthropology (ANTHRO)</a></li>
<li><a href="/courses/ast/">Applied Science and Technology (AST)</a></li>
<li><a href="/courses/arabic/">Arabic (ARABIC)</a></li>
<li><a href="/courses/arch/">Architecture (ARCH)</a></li>
```
**This is HTML. HTML uses "tags", code that surrounds the raw text and indicates the structure of the content. The tags are enclosed in `<` and `>` symbols. The `<li>` says "this is a new item in a list" and `</li>` says "that's the end of that item in the list". Similarly, the `<a ...>` and the `</a>` say, "everything between us is a hyperlink". In this HTML file, each department is listed with `<li>...</li>` and is also linked to its own page using `<a>...</a>`. In our browser, if we click on the name of a department, it takes us to that department's own page. The browser knows where to go because the `<a>...</a>` tag tells it what page to go to. You'll see that inside the `<a>` bit there's an `href=...`. That tells us the (relative) location of the page it's linked to.**
### Challenge
Look at the HTML source of [the page for the Aerospace Studies department](http://guide.berkeley.edu/courses/aerospc/), and try to find the part of the file where the information on each course is. Again, try searching for it using `Ctrl-F`.
**Solution**
```
<div class="courseblock">
<button class="btn_toggleCoursebody" aria-expanded="false" aria-controls="cb_aerospc1a" data-toggle="#cb_aerospc1a">
<a name="spanaerospc1aspanspanfoundationsoftheu.s.airforcespanspan1unitspan"></a>
<h3 class="courseblocktitle">
<span class="code">AEROSPC 1A</span>
<span class="title">Foundations of the U.S. Air Force</span>
<span class="hours">1 Unit</span>
</h3>
```
The content that we care about is enclosed within HTML tags. It looks like the course code is enclosed in a `span` tag, which has a `class` attribute with the value `"code"`. What we'll have to do is extract out the information we care about by specifying what tag it's enclosed in.
But first, we're going to need to get the HTML of the first page.
### Challenge
Get the HTML content of `http://guide.berkeley.edu/courses/` and store it in a variable called `academic_guide_html`. You can use the `get_html` function you wrote before.
Print the first 500 characters to see what we got back.
```
# solution
academic_guide_url = 'http://guide.berkeley.edu/courses/'
academic_guide_html = get_html(academic_guide_url)
print(academic_guide_html[:500])
```
Great, we've got the HTML contents of the Academic Guide site we want to scrape. Now we can parse it. ["Parsing"](https://en.wikipedia.org/wiki/Parsing) means to turn a string of data into a structured representation. When we're parsing HTML, we're taking the Python string and turning it into a tree. The Python package `BeautifulSoup` does all our HTML parsing for us. We give it our HTML as a string and it returns a parsed HTML tree. Here, we're also telling BeautifulSoup to use the `lxml` parser behind the scenes.
```
from bs4 import BeautifulSoup
academic_guide_soup = BeautifulSoup(academic_guide_html, 'lxml')
```
We said before that all the departments were listed on the Academic Guide page with links to their departmental page, where the actual courses are listed. So we can find all the departments by looking in our parsed HTML for all the links. Remember that the links are represented in the HTML with the `<a>...</a>` tag, so we ask our `academic_guide_soup` to find us all the tags called `a`. What we get back is a list of all the `a` elements in the HTML page.
```
links = academic_guide_soup.find_all('a')
# print a random link element
links[48]
```
So now we have a list of `a` elements, each one represents a link on the Academic Guide page. But there are other links on this page in addition to the ones we care about, for example, a link back to the UC Berkeley home page. How can we filter out all the links we don't care about?
### Challenge
Look through the list `links`, or the HTML source, and figure out how we can identify just the links that we care about, namely the links to departmental pages.
```
# solution
import re
def is_departmental_page(link):
"""
Return true if `link` points to a departmental page.
By examining the source HTML by eye, I noticed that
the links we care about (i.e. the departmental pages)
all point to a relative path that starts with "/courses/".
This function uses that idea to determine if the link is
a departmental page.
"""
# some links don't have a href attribute, only a name attribute
# we don't care about them
try:
href = link.attrs['href']
except KeyError:
return False
pattern = r'/courses/(.*)/'
match = re.search(pattern, href)
return bool(match)
print(links[0])
print(is_departmental_page(links[0]))
print()
print(links[48])
print(is_departmental_page(links[48]))
```
Let's use our new `is_departmental_page` function to filter out the links we don't care about. How many departments do we have?
```
departmental_page_links = [link for link in links if is_departmental_page(link)]
len(departmental_page_links)
```
Each link in our `departmental_page_links` list contains a HTML element representing a link. Each element contains not only the relative location of the link but also the text that is linked (i.e. the words on the page that are underlined and you can click on to go to the linked page). In BeautifulSoup, we can get that text by asking for it with `element.text`, like this:
```
departmental_page_links[0].text
```
### Challenge
From the `departmental_page_links`, we can extract out the name and the code for each department. Try doing this.
```
# solution
import re
def extract_department_name_and_code(departmental_link):
"""
Return the (name, code) for a department.
The easiest way to do this is to use regular expressions.
We're not going to cover regular expressions in this workshop,
but here's how to do it anyway.
"""
text = departmental_link.text
pattern = r'([^(]+) \((.*)\)'
match = re.search(pattern, text)
if match:
return match.group(1), match.group(2)
extract_department_name_and_code(links[48])
```
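As a quick illustration of the regular expression in that solution: the pattern `([^(]+) \((.*)\)` captures the text before the parentheses (the department name) and the text inside them (the code). A minimal sketch on an illustrative string in the same format:

```python
import re

# The pattern from extract_department_name_and_code:
# group 1 = everything before " (", group 2 = the code in parentheses.
pattern = r'([^(]+) \((.*)\)'
match = re.search(pattern, "Aerospace Studies (AEROSPC)")
print(match.group(1), "|", match.group(2))  # Aerospace Studies | AEROSPC
```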
From each link in our `departmental_page_links` list, we can get the relative link that it points to like this:
```
departmental_page_links[0].attrs['href']
```
### Challenge
Write a function that extracts out the relative link of a link element.
*Hint: This has a similar solution to our `is_departmental_page` function from before.*
```
# solution
def extract_relative_link(departmental_link):
"""
We noted above that all the departmental links point to "/courses/something/",
where the "something" looks a lot like their code. This function
extracts out that "something", so we can add it to the base URL of
the Academic Guide page and get full paths to each departmental page.
"""
href = departmental_link.attrs['href']
pattern = r'/courses/(.*)/'
match = re.search(pattern, href)
if match:
return match.group(1)
extract_relative_link(departmental_page_links[0])
```
Alright! Now we've identified all the departmental links on the Academic Guide page, we've found their name and code, and we know the relative link they point to. Next, we can use this relative link to construct the full URL they point to, which we'll then use to scrape the HTML for each departmental page.
Let's write a function that takes a departmental link and returns the absolute URL of its departmental page.
```
def construct_absolute_url(departmental_link):
relative_link = extract_relative_link(departmental_link)
return academic_guide_url + relative_link
construct_absolute_url(departmental_page_links[37])
```
To summarize so far: starting from the URL of the Academic Guide website, we've found all the departments that offer approved courses, and identified each one's name, code, and the link to its departmental page, which lists all the courses it teaches.
Now we want to get the HTML for each departmental page and scrape it for all the courses they offer. Let's focus on one page for now, the Aerospace Studies page. Once we select the link, we use our functions from above to: i) get the name (I guess we already know it's Aerospace, but whatever) and code, ii) get the full URL, iii) get the HTML for that URL, and iv) parse the HTML.
```
aerospace_link = departmental_page_links[0]
aerospace_name, aerospace_code = extract_department_name_and_code(aerospace_link)
aerospace_url = construct_absolute_url(aerospace_link)
aerospace_html = get_html(aerospace_url)
aerospace_soup = BeautifulSoup(aerospace_html, 'lxml')
print(aerospace_html[:500])
```
Right at the start of this section on parsing HTML, we saw the HTML for a departmental page. Here it is again.
```
<div class="courseblock">
<button class="btn_toggleCoursebody" aria-expanded="false" aria-controls="cb_aerospc1a" data-toggle="#cb_aerospc1a">
<a name="spanaerospc1aspanspanfoundationsoftheu.s.airforcespanspan1unitspan"></a>
<h3 class="courseblocktitle">
<span class="code">AEROSPC 1A</span>
<span class="title">Foundations of the U.S. Air Force</span>
<span class="hours">1 Unit</span>
</h3>
```
It looks like each course is listed in a `div` element that has a `class` attribute with value `"courseblock"`. We can use this information to identify all the courses on a page and then extract out the information from them. You've seen how to do this before, here it is again:
```
aerospace_courseblocks = aerospace_soup.find_all(class_='courseblock')
len(aerospace_courseblocks)
```
Looks like the Aerospace department has seven current courses they're approved to teach (at the time of writing). Looking at the page in our browser, that looks right to me! So now we have a list called `aerospace_courseblocks` that holds seven elements that each refer to one course taught by the Aerospace department. Now we can extract out any information we care about. We just have to look at the page in our browser, decide what information we care about, then look at the HTML source to see where that information is kept in the HTML structure. Finally, we write a function for each piece of information we want to extract out of a course.
### Challenge
Write functions to take a courseblock and extract:
- The course code (e.g. AEROSPC 1A)
- The course name
- The number of units
- The textual description of the course
```
# solution
def extract_course_code(courseblock):
span = courseblock.find(class_='code')
return span.text
def extract_course_title(courseblock):
span = courseblock.find(class_='title')
return span.text
def extract_course_units(courseblock):
span = courseblock.find(class_='hours')
return span.text
def extract_course_description(courseblock):
span = courseblock.find(class_='coursebody')
return span.text
def extract_one_course(courseblock):
course = {}
course['course_code'] = extract_course_code(courseblock)
course['course_title'] = extract_course_title(courseblock)
course['course_units'] = extract_course_units(courseblock)
course['course_description'] = extract_course_description(courseblock)
return course
first_aerospace_course = extract_one_course(aerospace_courseblocks[0])
for value in first_aerospace_course.values():
print(value)
print()
```
Let's write a function to scrape these four pieces of information from every course in every department and collect them into a pandas DataFrame (which we could then save as a CSV file with `df.to_csv`).
```
def scrape_one_department(department_link):
department_name, department_code = extract_department_name_and_code(department_link)
department_url = construct_absolute_url(department_link)
department_html = get_html(department_url)
department_soup = BeautifulSoup(department_html, 'lxml')
department_courseblocks = department_soup.find_all(class_='courseblock')
result = []
for courseblock in department_courseblocks:
course = extract_one_course(courseblock)
course['department_name'] = department_name
course['department_code'] = department_code
result.append(course)
return result
aerospace_courses = scrape_one_department(aerospace_link)
for value in aerospace_courses[0].values():
print(value)
print()
import time
def scrape_all_departments(be_nice=True):
academic_guide_url = 'http://guide.berkeley.edu/courses/'
academic_guide_html = get_html(academic_guide_url)
academic_guide_soup = BeautifulSoup(academic_guide_html, 'lxml')
links = academic_guide_soup.find_all('a')
departmental_page_links = [link for link in links if is_departmental_page(link)]
result = []
for departmental_page_link in departmental_page_links:
department_result = scrape_one_department(departmental_page_link)
result.extend(department_result)
if be_nice:
time.sleep(1)
return result
import pandas as pd
result = scrape_all_departments(be_nice=False)
df = pd.DataFrame(result)
print(str(len(df)) + ' courses scraped')
df.head()
```
9360 courses scraped! (At the time of writing). Wow, that was a lot easier than doing it by hand!
## Terms of Service<a id='terms'></a>
As you've seen, web scraping involves making requests from other computers for their data. It costs people money to maintain the computers that we request data from: it needs electricity, it requires staff, sometimes you need to upgrade the computer, etc. But we didn't pay anyone for using their resources.
Because we're making these requests programmatically, we could make many, many requests per second. For example, we could put a request in a never-ending loop which would constantly request data from a server. But computers can't handle too much traffic, so eventually this might crash someone else's computer. Moreover, if we make too many requests when we're web scraping, that might restrict the number of people who can view the web page in their browser. This isn't very nice.
Websites often have Terms of Service: documents that you agree to whenever you visit a site. Some of these terms prohibit web scraping, because it puts too much strain on their servers, or because they just don't want their data accessed programmatically. Whatever the reason, we need to respect a website's Terms of Service. **Before you scrape a site, you should always check its Terms of Service to make sure it's allowed.**
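One concrete thing to check is a site's `robots.txt` file, which states what crawlers may fetch and how fast. Python's standard library can parse one; here is a minimal sketch with a made-up `robots.txt` (these rules are hypothetical, not any real site's policy). In real use you'd call `set_url(...)` and `read()` instead of `parse(...)`.

```python
from urllib import robotparser

# Parse a hypothetical robots.txt to see what a crawler may fetch.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 1",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/courses/"))   # True: allowed
print(rp.can_fetch("*", "https://example.com/private/x"))  # False: disallowed
print(rp.crawl_delay("*"))                                 # 1 second between requests
```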
Often, there are better ways of accessing the same data. For the Wikipedia sites we scraped, there's actually an [API](https://www.mediawiki.org/wiki/REST_API) that we could have used. In fact, Wikipedia would prefer that we access their data that way. There's even a [Python package](https://pypi.org/project/wikipedia/) that wraps around this API to make it even easier to use. Furthermore, Wikipedia actually makes all of its content available for [direct download](https://dumps.wikimedia.org/). **The point of the story is: before web scraping, see if you can get the same data elsewhere.** This will often be easier for you and preferred by the people who own the data.
Moreover, if you're affiliated with an institution, you may be breaching existing contracts by engaging in scraping. UC Berkeley's Library [recommends](http://guides.lib.berkeley.edu/text-mining) following this workflow:
<img src='../img/workflow.png' />
## Further resources<a id='resources'></a>
### Resources for learning
* Work through this notebook in full.
* _We glossed over a lot of details. As your next step, I'd suggest spending as much time as you need to understand every line of text and code in this notebook._
* [Web-scraping with Python](http://shop.oreilly.com/product/0636920078067.do)
* _A great textbook for learning more about web scraping using Python._
* [Fantastic Data and Where To Find Them: An introduction to APIs, RSS, and Scraping](https://www.youtube.com/watch?v=A42voDYkFZw)
* _This is a recorded video of a workshop on collecting data via the web at PyCon, a Python conference._
* [D-Lab workshops](http://dlab.berkeley.edu/training)
* _We teach workshops on web scraping and NLP-related tools throughout the semester. Check this page for the latest scheduled workshops._
* [D-Lab consulting](http://dlab.berkeley.edu/consulting)
* _We also offer free consulting for members of UC Berkeley's community. Reach out to us if you ever need a hand with a web scraping or NLP project!_
### A few libraries to be aware of
* [requests-HTML](http://html.python-requests.org/)
* _Did you see how easy it was to request data using the `requests` library? Well the author of that library, Kenneth Reitz, has another library for parsing HTML. I'm not that familiar with it, but it looks promising if it's by Reitz!_
* [furl](https://github.com/gruns/furl)
* _Lets you extract out different parts of a URL._
* [cssutils](http://cthedot.de/cssutils/)
* _We didn't talk about CSS at all, but you can also scrape data depending on its visual characteristics. This is a great library for parsing CSS files, but you can get some of the same functionality with BeautifulSoup._
* [scrapy](https://scrapy.org/)
* _Need to do some serious web scraping? You'll wanna check out Scrapy._
* [newspaper](https://newspaper.readthedocs.io/en/latest/)
* _If you know you're focused on newspaper articles, this is a great little library for parsing common formats._
<a href="https://colab.research.google.com/github/Worlddatascience/DataScienceCohort/blob/master/Stock%20Prediction%20Application%20and%20Deployment%20using%20Flask%20(Data_Science_Project_Team_1).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**DATA SCIENCE TEAM 1**
Anade Davis - Data Science Manager
Christopher Rutherford - Data Scientist
Tanjeel Ahmed - Data Science Researcher - Project Lead
Gabe Smithline - Quantitative Analyst
Berkalp Altay - Data Analyst
Zain Dwiat - Data Scientist
In this team project we will demonstrate how to create a Stock Prediction Model and deploy it for Real World Application.
**Tanjeel Ahmed Observation**
RNNs (recurrent neural networks) are the best-known approach for time series prediction, and LSTM is based on the RNN architecture. However, when we feed only historical data to such a model to predict stock prices, models like LSTM often find the overall pattern correctly but don't perform well in terms of prediction accuracy.
* For example, take the Netflix stock price: it was trending downward at the beginning of this year, but then corona hit the market and since then the Netflix stock price has been skyrocketing.
* On the other hand, CNNs (convolutional neural networks) are built for finding patterns using different layers.
Although CNNs were not built specifically for time series prediction, they perform well on time series data because of their layered, windowed way of detecting patterns (in our case, roughly a 7-day window).
**Chris Rutherford Observation:**
I've noticed that as well with LSTMs: they seem to detect the overall pattern, but the prediction would be nowhere near the last true price.
* It seemed to be near the average of the historical data, which wasn't too useful.
* When I used a CNN, it was able to create the predictions based off of the most recent prices.
**Gabe Smithline Observation:**
A lot of the data now isn't indicative of traditional historical prices, and given how much unknown there is in general, it might be better to have predictions that stay closer to the most recent stock price, so a CNN might be best.
## Stock price forecasting model (For 7 days)
We are going to build a machine learning model that predicts stock prices 7 days ahead. Since we get our data through an API and predict the next 7 days of stock prices by analyzing that historical data, this is a supervised machine learning problem; because the data changes chronologically, it can also be defined as a time series prediction problem. The data we get from the API is labelled. The labels are:
1. Open
2. High
3. Low
4. Close
5. Volume
We are going to predict the Closing price.
There are many popular algorithms established in the market for predicting stock prices, such as MLPs (multilayer perceptrons), RNNs (recurrent neural networks), LSTMs (long short-term memory networks), and CNNs (convolutional neural networks). For this project we are going to use a CNN-based model.
The CNN is a deep learning algorithm originally developed to process image data, but it has recently shown strong results on sequential (time series) data, which is exactly the type of data we are working with in this project.
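To see why a 1D convolution suits sequences, here is a toy sketch (plain NumPy, hypothetical prices) of a single filter sliding along a series. The Keras `Conv1D` layer we use later learns many such filters; the fixed `[-1, 1]` kernel here is purely illustrative.

```python
import numpy as np

def conv1d_valid(seq, kernel):
    """Slide `kernel` over `seq`, taking a dot product at each position ("valid" padding)."""
    k = len(kernel)
    return np.array([np.dot(seq[i:i + k], kernel) for i in range(len(seq) - k + 1)])

prices = np.array([100.0, 102.0, 101.0, 105.0, 110.0])
# A fixed [-1, 1] kernel computes day-over-day changes: 2, -1, 4, 5.
# A trained filter would learn its weights from the data instead.
print(conv1d_valid(prices, np.array([-1.0, 1.0])))
```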
We can describe our procedure in some steps:
- Getting the data through an API (we are using the Alpha Vantage API): we are using data on Microsoft's stock price.
- Preprocessing the data
- EDA (exploratory data analysis): In this case we can divide our EDA process in two parts (1) Cleaning the data (2) Visualizing the data
- Creating features
- Setting up the model (CNN model): (1) First we have to construct the model layers (2) Fit the model with the data taken through the API and processed with.
- Predicting the stock price
- Visualizing our prediction
- Deployment of our model
### First, we import the essential data science libraries we will use throughout this project.
```
# Importing the essential libraries for our CNN model
import numpy as np #It does fast computations with arrays. As NumPy uses the c programming language in the backend it computes faster.
import pandas as pd #To work with our dataframe. To manipulate them and save data into optimized dataframes
import matplotlib.pyplot as plt #create custom visualizations/plots of the data
import seaborn as sns; sns.set() #sets the style of plots
```
### Getting the Data
```
#This code pulls the data from the alphavantage API
#Right now it is just microsoft's data
import requests
API_KEY = 'FYEEXVSKRKYAO1VI'
ticker="MSFT"
r = requests.get("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol="+ticker+"&apikey="+API_KEY)
if (r.status_code == 200):
print(r.json())
```
Helper function for cleaning the data
```
def clean_data(data):
#Raw data from API
df = pd.DataFrame.from_dict(data, orient="index")
#Transposing Data and Taking column with data
df2 = (df.T['Time Series (Daily)'])
#Dropping all NAs
df2 = df2.dropna()
df_convert = df2.to_frame()
#We will take the data from this and use it to index our dataframe
df_index = list(df_convert.index)
#Convert to cleaned dataframe
df2 = pd.json_normalize(df2)
#We add this list to the dataframe with our actual data
df2['Index'] = df_index
#We then set the index column as our actual index
df2 = df2.set_index('Index')
#Put the data in the correct order
df2 = df2[::-1]
return df2
df=clean_data(r.json())
df
#pd.options.display.max_rows=100 #This is the code to display max rows
#df #since there is only 100 rows I want to manually check to see if there are any irregularities or potential nan values
```
### Processing the Data and Generating Features
Checking the column data types
```
df.dtypes #check data types of each column
```
We need to convert the index to a datetime object and the other columns to numbers:
```
def conv_types(data):
data.index=pd.to_datetime(data.index) #convert the index to a datetime object for plotting
#convert each column to integers for plotting and model input
data['1. open']=pd.to_numeric(data['1. open'])
data['2. high']=pd.to_numeric(data['2. high'])
data['3. low']=pd.to_numeric(data['3. low'])
data['4. close']=pd.to_numeric(data['4. close'])
data['5. volume']=pd.to_numeric(data['5. volume'])
    return data
df=conv_types(df)
df.dtypes #verify type conversions
def plot_data(data):
plt.figure(figsize=(22,6)) #set the size of the figure
plt.subplot(1,2,1)
#create two subplots - one for daily prices, one for daily volume
plt.plot(data['1. open'], color='blue', label="Open")
plt.plot(data['2. high'], color='green', label="High")
plt.plot(data['3. low'], color='red', label="Low")
plt.plot(data['4. close'], color='purple', label="Close")
plt.title(str("Microsoft ($MSFT) Daily Stock Price\n"+str(data.index[0].date())+" to "+str(data.index[-1].date())))
plt.xlabel("Date")
plt.ylabel("Price in USD")
plt.legend()
plt.subplot(1,2,2)
plt.plot(data['5. volume'], color='orange', label="Volume")
plt.title(str("Microsoft ($MSFT) Daily Trading Volume\n"+str(data.index[0].date())+" to "+str(data.index[-1].date())))
plt.xlabel("Date")
plt.ylabel("Trading Volume in millions")
plt.show()
plot_data(df)
```
## Setting up the Model
### Preprocessing
```
#helper function to split the time series (explained in next block of code)
def split_sequence(sequence, n_steps):
X, y = list(), list() #create empty lists to store X and y variables
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
#if we are, simply stop executing this block of code
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
seq_length = 5 #seq_length tells how many sequential data points are used for each element of the array
#5 means 5 data points will be used at a time
#for instance, the first element of the array will be data points 1-5
#the second element of the array will be data points 2-6
#third element will be data points 3-7, and so on
#we only need the close price data for our model
data = np.array(df['4. close'])
#splitting the data with our helper function
X, y = split_sequence(data, seq_length)
#reshape from [samples, timesteps] into [samples, timesteps, features] for training the model
#the model requires these three values for input
n_features = 1 #our only feature, or input variable, is the closing stock price
X = X.reshape((X.shape[0], X.shape[1], n_features))
```
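To make the windowing concrete, here is the same splitting logic on a tiny hypothetical series (a standalone copy of the helper so the snippet runs on its own):

```python
import numpy as np

def split_sequence(sequence, n_steps):
    # Same windowing as the notebook's helper: each window of n_steps
    # values becomes one X sample, and the next value becomes its y label.
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])
        y.append(sequence[i + n_steps])
    return np.array(X), np.array(y)

X, y = split_sequence([10, 20, 30, 40, 50, 60, 70], n_steps=5)
# X[0] = [10 20 30 40 50] -> y[0] = 60
# X[1] = [20 30 40 50 60] -> y[1] = 70
print(X.shape, y.tolist())
```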
### Constructing the model layers
```
import tensorflow as tf #backend for keras
from tensorflow import keras #used for creating the CNN model
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.layers.convolutional import Conv1D, MaxPooling1D #the data is 1-dimensional so we will use a 1d convolution layer
#define/setup model
model = Sequential()
#add a 1d convolutional layer
#64 filters used in the convolution operation (a somewhat arbitrary choice)
#kernel_size of 2 means the convolutional layer reads 2 data points at a time, applies the filters, and outputs it to the next layer
model.add(Conv1D(filters=64, kernel_size=2, activation='relu', input_shape=(seq_length, n_features)))
#add a pooling layer, which reduces the dimensionality of the input size
#takes the max value of each window of length 2
model.add(MaxPooling1D(pool_size=2))
#flattens the data into a 1d vector - first layer of the fully connected layers
model.add(Flatten())
#dense layer with relu activation
#outputs zero for values less than or equal to zero and keeps positive values the same
model.add(Dense(50, activation='relu'))
#1d dense layer to generate the predicted value
#only 1 dimensional because only one value is being predicted at each time point
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
```
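It can help to trace the tensor shapes through these layers by hand. Under the settings above (`seq_length=5`, `kernel_size=2`, `pool_size=2`, 64 filters), the arithmetic is a quick sanity check that no TensorFlow is needed for:

```python
# Shape bookkeeping for the CNN defined above.
seq_length, n_filters, kernel_size, pool_size = 5, 64, 2, 2

conv_steps = seq_length - kernel_size + 1  # "valid" convolution: 5 - 2 + 1 = 4 timesteps
pool_steps = conv_steps // pool_size       # max pooling halves it: 4 // 2 = 2 timesteps
flat_units = pool_steps * n_filters        # Flatten: 2 * 64 = 128 units

print(conv_steps, pool_steps, flat_units)  # 4 2 128
```

So the `Dense(50)` layer receives a 128-dimensional vector from `Flatten`.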
### Fitting the model
```
#the model is fitted on the training data over 100 epochs to improve and update its parameters
#verbose controls how detailed the output is when training the model
def fit_model(X, y):
"""Trains the neural network"""
model.fit(X, y, epochs=100, verbose=2)
fit_model(X,y)
```
## Creating the predictions
```
DAYS_TO_PREDICT = 7
def make_predictions(X, DAYS_TO_PREDICT):
test_seq = X[-1:] #set the test sequence to start at the most recent stock values
preds = [] #create empty list to store predictions
#create predictions using most recent 5 data points
#the first prediction will assist in making the second prediction, and so on
for _ in range(DAYS_TO_PREDICT):
y_test_pred = model(test_seq) #outputs the prediction from our test sequence
pred = y_test_pred #assign pred to the raw output of the model for rebuilding new_seq and test_seq
preds.append(y_test_pred) #add the prediction to the list we will use for plotting
new_seq = test_seq.flatten() #flatten the output from the model into a 1d array
new_seq = np.append(new_seq, [pred]) #add prediction to the list the model is using to make predictions
new_seq = new_seq[1:] #exclude the first element from the list; use our most recent prediction for the next prediction instead
test_seq = new_seq.reshape(1, seq_length, 1) #reshape for the model
preds=np.array(preds).reshape(DAYS_TO_PREDICT,) #reshape the prediction array for plotting
return preds
predicted_price=make_predictions(X, 7)
def make_pred_index(data, preds):
"""creating the datetime index for our predictions to be just after our actual data"""
predicted_index = pd.date_range(start=data.index[-1], #start 1 day beyond existing data
periods=DAYS_TO_PREDICT+1, #run time interval through days predicted
closed='right') #include the last value of the time frame
return predicted_index
def make_pred_series(preds, predicted_index):
"""create new series of predicted prices with corresponding dates predicted"""
predicted_price = pd.Series(data=preds,
index=predicted_index)
return predicted_price
predicted_index = make_pred_index(df, predicted_price)
predicted_price = make_pred_series(predicted_price, predicted_index)
```
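The loop above implements recursive multi-step forecasting: each prediction is appended to the window used to make the next one. A framework-free sketch with a stand-in "model" (the window mean, purely illustrative, not the CNN) shows the mechanics:

```python
import numpy as np

def recursive_forecast(history, predict_fn, horizon, window=5):
    seq = list(history[-window:])       # start from the most recent `window` values
    preds = []
    for _ in range(horizon):
        p = predict_fn(np.array(seq))   # stand-in for model(test_seq)
        preds.append(p)
        seq = seq[1:] + [p]             # slide the window: drop oldest, append prediction
    return preds

# "Model" = mean of the window, just to demonstrate the sliding window.
preds = recursive_forecast([1.0, 2.0, 3.0, 4.0, 5.0], lambda s: float(s.mean()), horizon=3)
print(preds[0])  # 3.0 (mean of 1..5); later predictions feed on earlier ones
```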
### Visualizing the predictions
```
def plot_predictions(data, predictions):
"""Takes the dataframe of historical data and Series of predictions as input"""
plt.figure(figsize=(15,6))
plt.plot(data['4. close'][:-1], #plot data used for training the model
label='Historical Daily Price')
plt.plot(predictions, '.-', #plot predictions
label='Predicted Daily Price')
plt.title("MSFT daily closing price (2020)\nPrediction for "+str(predictions.index[0].date())+" to "+str(predictions.index[-1].date()))
plt.legend()
plt.show()
plot_predictions(df, predicted_price)
def plot_predictions_recent(data, predictions):
plt.figure(figsize=(15,6))
#plot data used for training the model
plt.plot(data[data.index>="2020-08-01"]['4. close'][:-1],
label='Historical Daily Price')
#add the predcited prices to the plot
plt.plot(predictions, '.-',
label='Predicted Daily Price')
plt.title("MSFT daily closing price (2020)\nPrediction for "+str(predictions.index[0].date())+" to "+str(predictions.index[-1].date()))
plt.legend()
plt.show()
plot_predictions_recent(df, predicted_price)
```
Interpretation:
In blue we see the historical stock price, and in orange we see the prediction for 7 days into the future. As one can see, since the shutdown because of COVID, Microsoft stock has risen steadily over time. This is expected, as many big tech stocks have seen an increase in price: in this virtual world there is an even greater reliance on their products.
Microsoft dropped from a little above 230 to just above 200 during the first week of September. Our model predicts a slight increase within the next week, to about 207.
### Backtesting
We had to break up our data for backtesting in an untraditional way, because stock prices behave too erratically to predict well from a simple 80/20 train/test split. What we ended up doing is removing the last week's worth of data, running that dataframe through the model, and then comparing the predicted prices to the actual ones.
```
df_backtest = df[:-5] # hold out the last 5 trading days (the most recent full week) for backtesting
df_backtest
#we only need the close price data for our model
data_bt = np.array(df_backtest['4. close'])
#splitting the data with our helper function
X_bt, y_bt = split_sequence(data_bt, seq_length)
#reshape from [samples, timesteps] into [samples, timesteps, features] for training the model
#the model requires these three values for input
n_features = 1 #our only feature, or input variable, is the closing stock price
X_bt = X_bt.reshape((X_bt.shape[0], X_bt.shape[1], n_features))
#define/setup model
model_bt = Sequential()
#add a 1d convolutional layer
#64 filters used in the convolution operation (a somewhat arbitrary choice)
#kernel_size of 2 means the convolutional layer reads 2 data points at a time, applies the filters, and outputs it to the next layer
model_bt.add(Conv1D(filters=64, kernel_size=2, activation='relu', input_shape=(seq_length, n_features)))
#add a pooling layer, which reduces the dimensionality of the input size
#takes the max value of each window of length 2
model_bt.add(MaxPooling1D(pool_size=2))
#flattens the data into a 1d vector - first layer of the fully connected layers
model_bt.add(Flatten())
#dense layer with relu activation
#outputs zero for values less than or equal to zero and keeps positive values the same
model_bt.add(Dense(50, activation='relu'))
#1d dense layer to generate the predicted value
#only 1 dimensional because only one value is being predicted at each time point
model_bt.add(Dense(1))
model_bt.compile(optimizer='adam', loss='mse')
model_bt.fit(X_bt, y_bt, epochs=100)
DAYS_TO_PREDICT = 7
def make_predictions_bt(X, DAYS_TO_PREDICT):
test_seq = X[-1:] #set the test sequence to start at the most recent stock values
preds = [] #create empty list to store predictions
#create predictions using most recent 5 data points
#the first prediction will assist in making the second prediction, and so on
for _ in range(DAYS_TO_PREDICT):
y_test_pred = model_bt(test_seq) #outputs the prediction from our test sequence
pred = y_test_pred #assign pred to the raw output of the model for rebuilding new_seq and test_seq
preds.append(y_test_pred) #add the prediction to the list we will use for plotting
new_seq = test_seq.flatten() #flatten the output from the model into a 1d array
new_seq = np.append(new_seq, [pred]) #add prediction to the list the model is using to make predictions
new_seq = new_seq[1:] #exclude the first element from the list; use our most recent prediction for the next prediction instead
test_seq = new_seq.reshape(1, seq_length, 1) #reshape for the model
preds=np.array(preds).reshape(DAYS_TO_PREDICT,) #reshape the prediction array for plotting
return preds
predicted_price_bt = make_predictions_bt(X_bt, 7)
predicted_index_bt = make_pred_index(df_backtest, predicted_price_bt)
predicted_price_bt = make_pred_series(predicted_price_bt, predicted_index_bt)
plot_predictions(df_backtest, predicted_price_bt)
plot_predictions_recent(df_backtest, predicted_price_bt)
```
#### Comparing our prediction to the true values
```
plt.figure(figsize=(15,6))
plt.plot(df[(df.index<="2020-09-11") & (df.index>="2020-07-01")]['4. close'], label='True Historical Price')
plt.plot(df[df.index>="2020-09-14"]['4. close'], label='True Future Price', color='green')
plt.plot(predicted_price_bt, label='Predicted Price')
plt.title("MSFT daily closing price (2020)\nPrediction for "+str(predicted_price_bt.index[0].date())+" to "+str(predicted_price_bt.index[-1].date()))
plt.legend()
plt.show()
```
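The comparison above is visual; a numeric error metric is also useful. A minimal sketch with hypothetical predicted/actual values (illustrative numbers, not the notebook's real outputs) using mean absolute error and RMSE:

```python
import numpy as np

# Hypothetical close prices for one held-out week (illustrative numbers only).
actual    = np.array([204.0, 205.5, 207.2, 206.8, 208.1])
predicted = np.array([203.1, 204.9, 206.0, 207.5, 207.0])

mae  = np.mean(np.abs(actual - predicted))          # average miss in dollars
rmse = np.sqrt(np.mean((actual - predicted) ** 2))  # penalizes large misses more

print(round(mae, 2), round(rmse, 2))  # 0.9 0.93
```

In a real backtest we would compare `predicted_price_bt` against the held-out slice of `df['4. close']` the same way.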
### Deploy
```
# Importing the essential libraries
!pip install flask_ngrok
import numpy as np #fast computations with arrays
import pandas as pd #save data into optimized dataframes
import matplotlib.pyplot as plt #create custom visualizations/plots of the data
import seaborn as sns; sns.set() #sets the style of plots
#This code pulls the data from the Alpha Vantage API
#Right now it is just microsoft's data
import requests
#helper function to split the time series (explained in next block of code)
def split_sequence(sequence, n_steps):
X, y = list(), list() #create empty lists to store X and y variables
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
#if we are, simply stop executing this block of code
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
def our_model(ticker, DAYS_TO_PREDICT):
API_KEY = 'FYEEXVSKRKYAO1VI'
r = requests.get('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol='+ticker+'&apikey='+API_KEY)
#Raw data from API
df = pd.DataFrame.from_dict(r.json(), orient="index")
#Transposing Data and Taking column with data
df2 = (df.T['Time Series (Daily)'])
#Dropping all NAs
df2 = df2.dropna()
df_convert = df2.to_frame()
#We will take the data from this and use it to index our dataframe
df_index = list(df_convert.index)
#Convert to cleaned dataframe
df2 = pd.json_normalize(df2)
#We add this list to the dataframe with our actual data
df2['Index'] = df_index
#We then set the index column as our actual index
df2 = df2.set_index('Index')
#Put the data in the correct order
df2 = df2[::-1] #reverse the order of the dataframe
df2.index=pd.to_datetime(df2.index) #convert the index to a datetime object for plotting
#convert each column to integers for plotting and model input
df2['1. open']=pd.to_numeric(df2['1. open'])
df2['2. high']=pd.to_numeric(df2['2. high'])
df2['3. low']=pd.to_numeric(df2['3. low'])
df2['4. close']=pd.to_numeric(df2['4. close'])
df2['5. volume']=pd.to_numeric(df2['5. volume'])
seq_length = 5 #seq_length tells how many sequential data points are used for each element of the array
#5 means 5 data points will be used at a time
#for instance, the first element of the array will be data points 1-5
#the second element of the array will be data points 2-6
#third element will be data points 3-7, and so on
#we only need the close price data for our model
data = np.array(df2['4. close'])
#splitting the data with our helper function
X, y = split_sequence(data, seq_length)
#reshape from [samples, timesteps] into [samples, timesteps, features] for training the model
#the model requires these three values for input
n_features = 1 #our only feature, or input variable, is the closing stock price
X = X.reshape((X.shape[0], X.shape[1], n_features))
import tensorflow as tf #backend for keras
from tensorflow import keras #used for creating the CNN model
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.layers.convolutional import Conv1D, MaxPooling1D #the data is 1-dimensional so we will use a 1d convolution layer
#define/setup model
model = Sequential()
#add a 1d convolutional layer
#64 filters used in the convolution operation (a somewhat arbitrary choice)
#kernel_size of 2 means the convolutional layer reads 2 data points at a time, applies the filters, and outputs it to the next layer
model.add(Conv1D(filters=64, kernel_size=2, activation='relu', input_shape=(seq_length, n_features)))
#add a pooling layer, which reduces the dimensionality of the input size
#takes the max value of each window of length 2
model.add(MaxPooling1D(pool_size=2))
#flattens the data into a 1d vector - first layer of the fully connected layers
model.add(Flatten())
#dense layer with relu activation
#outputs zero for values less than or equal to zero and keeps positive values the same
model.add(Dense(50, activation='relu'))
#1d dense layer to generate the predicted value
#only 1 dimensional because only one value is being predicted at each time point
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
#the model is fitted on the training data over 100 epochs to improve and update its parameters
#verbose controls how detailed the output is when training the model
model.fit(X, y, epochs=100)
test_seq = X[-1:] #set the test sequence to start at the most recent stock values
preds = [] #create empty list to store predictions
#DAYS_TO_PREDICT=7 #number of future days to predict
#create predictions using most recent 5 data points
#the first prediction will assist in making the second prediction, and so on
for _ in range(DAYS_TO_PREDICT):
y_test_pred = model(test_seq) #outputs the prediction from our test sequence
pred = y_test_pred #assign pred to the raw output of the model for rebuilding new_seq and test_seq
preds.append(y_test_pred) #add the prediction to the list we will use for plotting
new_seq = test_seq.flatten() #flatten the output from the model into a 1d array
new_seq = np.append(new_seq, [pred]) #add prediction to the list the model is using to make predictions
new_seq = new_seq[1:] #exclude the first element from the list; use our most recent prediction for the next prediction instead
test_seq = new_seq.reshape(1, seq_length, 1) #reshape for the model
preds=np.array(preds).reshape(DAYS_TO_PREDICT,) #reshape the prediction array for plotting
#creating the datetime index for our predictions to be just after our actual data
predicted_index = pd.date_range(
start=df2.index[-2], #start 1 day beyond existing data
periods=DAYS_TO_PREDICT+1, #run time interval through days ahead predicted
closed='right') #include the last value of the time frame
#create new series with predicted prices for corresponding dates predicted
predicted_price = pd.Series(
data=preds,
index=predicted_index)
# plt.figure(figsize=(15,6))
# plt.plot(df2['4. close'][:-1], #plot data used for training the model
# label='Historical Daily Price')
# plt.plot(predicted_price, #plot predictions
# label='Predicted Daily Price')
# plt.title(label=ticker+" daily closing price")
# plt.legend()
# plt.savefig("pred_prices.png")
return predicted_price
#plt.figure(figsize=(15,6))
#plot data used for training the model
#plt.plot(df2[df2.index>="2020-08-01"]['4. close'][:-1],
#label='Historical Daily Price')
#add the predicted prices to the plot
#plt.plot(predicted_price,
#label='Predicted Daily Price')
#plt.title(ticker + " daily closing price (2020)\nPrediction for "+str(predicted_price.index[0].date())+" to "+str(predicted_price.index[-1].date()))
#plt.legend()
#return plt.show()
#source: https://medium.com/@kshitijvijay271199/flask-on-google-colab-f6525986797b
#other source: https://www.youtube.com/watch?v=Pc8WdnIdXZg
from flask import Flask, request #importing Flask to run the model on a host
from flask_ngrok import run_with_ngrok #importing ngrok to make Flask apps available on the internet (in this case, temporarily)
app = Flask(__name__)
run_with_ngrok(app) #starts ngrok when the app is run
@app.route("/") #Assigns "/" as the URL for the home function, using Flask
def home():
#return a simple form with a title and place for ticker and a button named "Predict"
return '''<form action='/predict' method="post" class="col s12">
<div class="row">
<div class="input-field col s4">
<label for="first_name"><b>Ticker</b></label>
<br>
<input placeholder="Ticker" name="Ticker" id="first_name" type="text" class="validate">
</div>
<div class="row center">
<button type="submit" class="btn-large waves-effect waves-light orange">Predict</button>
</div>
</div>
</form>
'''
@app.route('/predict',methods=['POST','GET']) #Assigns "/predict" as the URL for the predict function, using Flask
def predict():
ticker = request.form["Ticker"] #extract the ticker
prediction = our_model(ticker, 7) #supply the ticker to the model and get the 7-day prediction from the model
table = prediction.to_frame() #convert the predicted results in pandas series to pandas dataframe
table.columns = ["Predicted Close Price"] #add a column named "Predicted Close Price"
html = table.to_html() #convert the dataframe into the html version
#next, return the table of dates along with the Predicted Close Prices
return '''<form action='/predict' method="post" class="col s12">
<div class="row">
<div class="input-field col s4">
<label for="first_name"><b>Ticker</b></label>
<br>
<input placeholder="Ticker" name="Ticker" id="first_name" type="text" class="validate">
</div>
<div class="row center">
<button type="submit" class="btn-large waves-effect waves-light orange">Predict</button>
</div>
</div>
</form>
</div>
<br>
'''+ html + '''<br>
</div>
'''
app.run() #run the Flask app
```
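As a standalone sanity check of the windowing idea used by `split_sequence` above, here is a minimal sketch on a toy series (the function is re-stated so the snippet runs on its own):

```python
import numpy as np

def split_sequence(sequence, n_steps):
    # slide a window of length n_steps over the series;
    # each window is an input X, the value right after it is the target y
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])
        y.append(sequence[i + n_steps])
    return np.array(X), np.array(y)

series = np.array([10, 20, 30, 40, 50, 60])
X, y = split_sequence(series, n_steps=3)
print(X.shape)  # (3, 3): three windows of length 3
print(y)        # [40 50 60]
```

This is the same input/target layout the model above trains on, with the close-price series in place of the toy numbers.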
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Build a linear model with Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/linear"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This end-to-end walkthrough trains a logistic regression model using the `tf.estimator` API. The model is often used as a baseline for other, more complex, algorithms.
## Setup
```
!pip install sklearn
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from six.moves import urllib
```
## Load the titanic dataset
You will use the Titanic dataset with the (rather morbid) goal of predicting passenger survival, given characteristics such as gender, age, class, etc.
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v2.feature_column as fc
import tensorflow as tf
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```
## Explore the data
The dataset contains the following features
```
dftrain.head()
dftrain.describe()
```
There are 627 and 264 examples in the training and evaluation sets, respectively.
```
dftrain.shape[0], dfeval.shape[0]
```
The majority of passengers are in their 20's and 30's.
```
dftrain.age.hist(bins=20)
```
There are approximately twice as many male passengers as female passengers aboard.
```
dftrain.sex.value_counts().plot(kind='barh')
```
The majority of passengers were in the "third" class.
```
dftrain['class'].value_counts().plot(kind='barh')
```
Females have a much higher chance of surviving than males. This is clearly a predictive feature for the model.
```
pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
```
## Feature Engineering for the Model
Estimators use a system called [feature columns](https://www.tensorflow.org/guide/feature_columns) to describe how the model should interpret each of the raw input features. An Estimator expects a vector of numeric inputs, and *feature columns* describe how the model should convert each feature.
Selecting and crafting the right set of feature columns is key to learning an effective model. A feature column can be either one of the raw inputs in the original features `dict` (a *base feature column*), or any new columns created using transformations defined over one or multiple base columns (a *derived feature columns*).
The linear estimator uses both numeric and categorical features. Feature columns work with all TensorFlow estimators and their purpose is to define the features used for modeling. Additionally, they provide some feature engineering capabilities like one-hot-encoding, normalization, and bucketization.
### Base Feature Columns
```
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
vocabulary = dftrain[feature_name].unique()
feature_columns.append(tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))
```
The `input_function` specifies how data is converted to a `tf.data.Dataset` that feeds the input pipeline in a streaming fashion. `tf.data.Dataset` can take in multiple sources such as a dataframe, a csv-formatted file, and more.
```
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
def input_function():
ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
if shuffle:
ds = ds.shuffle(1000)
ds = ds.batch(batch_size).repeat(num_epochs)
return ds
return input_function
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, num_epochs=1, shuffle=False)
```
You can inspect the dataset:
```
ds = make_input_fn(dftrain, y_train, batch_size=10)()
for feature_batch, label_batch in ds.take(1):
print('Some feature keys:', list(feature_batch.keys()))
print()
print('A batch of class:', feature_batch['class'].numpy())
print()
print('A batch of Labels:', label_batch.numpy())
```
You can also inspect the result of a specific feature column using the `tf.keras.layers.DenseFeatures` layer:
```
age_column = feature_columns[7]
tf.keras.layers.DenseFeatures([age_column])(feature_batch).numpy()
```
`DenseFeatures` only accepts dense tensors; to inspect a categorical column, you need to transform it to an indicator column first:
```
gender_column = feature_columns[0]
tf.keras.layers.DenseFeatures([tf.feature_column.indicator_column(gender_column)])(feature_batch).numpy()
```
After adding all the base features to the model, let's train the model. Training a model is just a single command using the `tf.estimator` API:
```
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
```
### Derived Feature Columns
Now you have reached an accuracy of 75%. Using each base feature column separately may not be enough to explain the data. For example, the correlation between age and the label may be different for different genders. Therefore, if you only learn a single model weight for `gender="Male"` and `gender="Female"`, you won't capture every age-gender combination (e.g. distinguishing between `gender="Male"` AND `age="30"` versus `gender="Male"` AND `age="40"`).
To learn the differences between different feature combinations, you can add *crossed feature columns* to the model (you can also bucketize age column before the cross column):
```
age_x_gender = tf.feature_column.crossed_column(['age', 'sex'], hash_bucket_size=100)
```
After adding the combination feature to the model, let's train the model again:
```
derived_feature_columns = [age_x_gender]
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns+derived_feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
```
It now achieves an accuracy of 77.6%, which is slightly better than the model trained only on the base features. You can try using more features and transformations to see if you can do better!
Now you can use the trained model to make predictions on a passenger from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, the `eval_input_fn` was defined using the entire evaluation set.
```
pred_dicts = list(linear_est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
```
Finally, look at the receiver operating characteristic (ROC) of the results, which will give us a better idea of the tradeoff between the true positive rate and false positive rate.
```
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# Dataset taken from https://www.kaggle.com/aungpyaeap/fish-market
# Measurements of several popular commercial fish species
# length 1 = Body height
# length 2 = Total Length
# length 3 = Diagonal Length
fish_data = pd.read_csv("datasets/Fish.csv", delimiter=',')
print(fish_data)
# Select the input features and the target value
x_labels = ['Height', 'Width']
y_label = 'Weight'
data = fish_data[x_labels + [y_label]]
print(data)
# Define the size of the validation and test sets
val_test_size = round(0.2*len(data))
print(val_test_size)
# Generate a unique seed
my_code = "Ботиров"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
# Create the training, validation, and test splits
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
# Separate features and targets for the training, validation, and test sets
train_x = train[x_labels]
train_y = np.array(train[y_label]).reshape(-1,1)
val_x = val[x_labels]
val_y = np.array(val[y_label]).reshape(-1,1)
test_x = test[x_labels]
test_y = np.array(test[y_label]).reshape(-1,1)
# Normalize the feature values
scaler_x = MinMaxScaler()
scaler_x.fit(train_x)
scaled_train_x = scaler_x.transform(train_x)
scaler_y = MinMaxScaler()
scaler_y.fit(train_y)
scaled_train_y = scaler_y.transform(train_y)
# Create a k-nearest-neighbors model and train it on the normalized data. By default k = 5.
minmse=10
mink=0
a=[]
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
for k in range(1,51):
model1 = KNeighborsRegressor(n_neighbors = k)
model1.fit(scaled_train_x, scaled_train_y)
val_predicted = model1.predict(scaled_val_x)
mse1 = mean_squared_error(scaled_val_y, val_predicted)
a.append(mse1)
if mse1<minmse:
minmse=mse1
mink=k
print("minimum mean squared error:", minmse)
print("value of k with the minimum mean squared error:", mink)
print()
print(a)
model1 = KNeighborsRegressor(n_neighbors = 8)
model1.fit(scaled_train_x, scaled_train_y)
val_predicted = model1.predict(scaled_val_x)
mse1 = mean_squared_error(scaled_val_y, val_predicted)
print(mse1)
# Check the result on the test set.
scaled_test_x = scaler_x.transform(test_x)
scaled_test_y = scaler_y.transform(test_y)
test_predicted = model1.predict(scaled_test_x)
mse2 = mean_squared_error(scaled_test_y,test_predicted)
print(mse2)
```
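The k-selection loop above is just an argmin over validation MSE; a compact equivalent sketch on synthetic stand-in data (the data and seed here are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# synthetic stand-in data (illustrative only): target = sum of two features
rng = np.random.default_rng(0)
train_x = rng.random((100, 2)); train_y = train_x.sum(axis=1)
val_x   = rng.random((30, 2));  val_y   = val_x.sum(axis=1)

def val_mse(k):
    # validation MSE of a k-NN regressor fitted on the training split
    model = KNeighborsRegressor(n_neighbors=k).fit(train_x, train_y)
    return mean_squared_error(val_y, model.predict(val_x))

mses = {k: val_mse(k) for k in range(1, 51)}
best_k = min(mses, key=mses.get)  # argmin over validation MSE
print("best k:", best_k, "val MSE:", mses[best_k])
```

The same pattern applies to the fish data above, with the scaled training and validation splits in place of the synthetic arrays.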
# Exercise 02 - OLAP Cubes - Grouping Sets
All the databases table in this demo are based on public database samples and transformations
- `Sakila` is a sample database created by `MySql` [Link](https://dev.mysql.com/doc/sakila/en/sakila-structure.html)
- The postgresql version of it is called `Pagila` [Link](https://github.com/devrimgunduz/pagila)
- The facts and dimension tables design is based on O'Reilly's public dimensional modelling tutorial schema [Link](http://archive.oreilly.com/oreillyschool/courses/dba3/index.html)
Start by connecting to the database by running the cells below. If you are coming back to this exercise, then uncomment and run the first cell to recreate the database. If you recently completed the slicing and dicing exercise, then skip to the second cell.
```
!PGPASSWORD=student createdb -h 127.0.0.1 -U student pagila_star
!PGPASSWORD=student psql -q -h 127.0.0.1 -U student -d pagila_star -f Data/pagila-star.sql
```
### Connect to the local database where Pagila is loaded
```
import sql
%load_ext sql
DB_ENDPOINT = "127.0.0.1"
DB = 'pagila_star'
DB_USER = 'student'
DB_PASSWORD = 'student'
DB_PORT = '5432'
# postgresql://username:password@host:port/database
conn_string = "postgresql://{}:{}@{}:{}/{}" \
.format(DB_USER, DB_PASSWORD, DB_ENDPOINT, DB_PORT, DB)
print(conn_string)
%sql $conn_string
```
### Star Schema
<img src="pagila-star.png" width="50%"/>
# Grouping Sets
- It happens often that for 3 dimensions, you want to aggregate a fact:
- by nothing (total)
- then by the 1st dimension
- then by the 2nd
- then by the 3rd
- then by the 1st and 2nd
- then by the 2nd and 3rd
- then by the 1st and 3rd
- then by the 1st and 2nd and 3rd
- Since this is very common, and in all cases we are iterating through the whole fact table anyhow, there is a cleverer way to do it in a single pass using the SQL statement `GROUPING SETS`
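Conceptually, `GROUPING SETS` produces the same rows as a `UNION ALL` of the individual `GROUP BY` queries, computed in one pass; a small pandas sketch of that equivalence (the toy fact table here is illustrative, not Pagila):

```python
import pandas as pd

# toy fact table
sales = pd.DataFrame({
    "month":   [1, 1, 2, 2],
    "country": ["Australia", "Canada", "Australia", "Canada"],
    "amount":  [10.0, 20.0, 30.0, 40.0],
})

# GROUPING SETS ((), month, country, (month, country)) as a union of group-bys
total      = pd.DataFrame({"amount": [sales["amount"].sum()]})
by_month   = sales.groupby("month", as_index=False)["amount"].sum()
by_country = sales.groupby("country", as_index=False)["amount"].sum()
by_both    = sales.groupby(["month", "country"], as_index=False)["amount"].sum()

grouping_sets = pd.concat([total, by_month, by_country, by_both],
                          ignore_index=True)
print(grouping_sets)  # missing grouping columns appear as NaN (SQL's NULL)
```

This mirrors the `None` cells you will see in the `GROUPING SETS` query output later in this exercise.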
## Total Revenue
Write a query that calculates total revenue (sales_amount)
```
%%sql
SELECT
SUM(factsales.sales_amount) AS revenue
FROM
factsales
```
## Revenue by Country
Write a query that calculates total revenue (sales_amount) by country
```
%%sql
SELECT
dimstore.country as country,
sum(factsales.sales_amount) as revenue
FROM
factsales
join dimstore on (dimstore.store_key=factsales.store_key)
group by
dimstore.country
```
## Revenue by Month
Write a query that calculates total revenue (sales_amount) by month
```
%%sql
SELECT
dimdate.month AS month,
SUM(factsales.sales_amount) AS revenue
FROM
factsales
JOIN dimdate ON (dimdate.date_key = factsales.date_key)
GROUP BY
dimdate.month
```
## Revenue by Month & Country
TODO: Write a query that calculates total revenue (sales_amount) by month and country. Sort the data by month, country, and revenue in descending order. The first few rows of your output should match the table below.
```
%%sql
SELECT
dimdate.month AS month,
dimstore.country AS country,
SUM(factsales.sales_amount) AS revenue
FROM
factsales
JOIN dimdate ON (dimdate.date_key = factsales.date_key)
JOIN dimstore ON (dimstore.store_key = factsales.store_key)
GROUP BY
dimdate.month,
dimstore.country
ORDER BY
dimdate.month,
dimstore.country,
revenue DESC
```
<div class="p-Widget jp-RenderedHTMLCommon jp-RenderedHTML jp-mod-trusted jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/html"><table>
<tbody><tr>
<th>month</th>
<th>country</th>
<th>revenue</th>
</tr>
<tr>
<td>1</td>
<td>Australia</td>
<td>2364.19</td>
</tr>
<tr>
<td>1</td>
<td>Canada</td>
<td>2460.24</td>
</tr>
<tr>
<td>2</td>
<td>Australia</td>
<td>4895.10</td>
</tr>
<tr>
<td>2</td>
<td>Canada</td>
<td>4736.78</td>
</tr>
<tr>
<td>3</td>
<td>Australia</td>
<td>12060.33</td>
</tr>
</tbody></table></div>
## Revenue Total, by Month, by Country, by Month & Country All in one shot
TODO: Write a query that calculates total revenue at the various grouping levels done above (total, by month, by country, by month & country) all at once using the grouping sets function. Your output should match the table below.
```
%%sql
SELECT
dimdate.month AS month,
dimstore.country AS country,
SUM(factsales.sales_amount) AS revenue
FROM
factsales
JOIN dimdate ON (dimdate.date_key = factsales.date_key)
JOIN dimstore ON (dimstore.store_key = factsales.store_key)
GROUP BY
GROUPING SETS ((), dimdate.month, dimstore.country, (dimdate.month, dimstore.country))
LIMIT 20
```
<div class="p-Widget jp-RenderedHTMLCommon jp-RenderedHTML jp-mod-trusted jp-OutputArea-output jp-OutputArea-executeResult" data-mime-type="text/html"><table>
<tbody><tr>
<th>month</th>
<th>country</th>
<th>revenue</th>
</tr>
<tr>
<td>1</td>
<td>Australia</td>
<td>2364.19</td>
</tr>
<tr>
<td>1</td>
<td>Canada</td>
<td>2460.24</td>
</tr>
<tr>
<td>1</td>
<td>None</td>
<td>4824.43</td>
</tr>
<tr>
<td>2</td>
<td>Australia</td>
<td>4895.10</td>
</tr>
<tr>
<td>2</td>
<td>Canada</td>
<td>4736.78</td>
</tr>
<tr>
<td>2</td>
<td>None</td>
<td>9631.88</td>
</tr>
<tr>
<td>3</td>
<td>Australia</td>
<td>12060.33</td>
</tr>
<tr>
<td>3</td>
<td>Canada</td>
<td>11826.23</td>
</tr>
<tr>
<td>3</td>
<td>None</td>
<td>23886.56</td>
</tr>
<tr>
<td>4</td>
<td>Australia</td>
<td>14136.07</td>
</tr>
<tr>
<td>4</td>
<td>Canada</td>
<td>14423.39</td>
</tr>
<tr>
<td>4</td>
<td>None</td>
<td>28559.46</td>
</tr>
<tr>
<td>5</td>
<td>Australia</td>
<td>271.08</td>
</tr>
<tr>
<td>5</td>
<td>Canada</td>
<td>243.10</td>
</tr>
<tr>
<td>5</td>
<td>None</td>
<td>514.18</td>
</tr>
<tr>
<td>None</td>
<td>None</td>
<td>67416.51</td>
</tr>
<tr>
<td>None</td>
<td>Australia</td>
<td>33726.77</td>
</tr>
<tr>
<td>None</td>
<td>Canada</td>
<td>33689.74</td>
</tr>
</tbody></table></div>
# ORF recognition by CNN
In 105, we used Conv1D layers with filter width 3 and dropout to reduce overfitting. The simulated RNA lengths were 1000.
Here, we try really short simulated RNA.
A previous configuration used RNA_LEN=71 and CDS_LEN=63, cut the network in half, and used layers with 8 filters, 30 epochs, and 5 folds.
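The Conv1D input is each RNA sequence one-hot encoded over the 4-letter alphabet, giving shape (RNA_LEN, ALPHABET). The notebook's `prepare_inputs_len_x_alphabet` handles this; below is a minimal hand-rolled sketch of the idea (the A/C/G/T column order here is an assumption, not necessarily the helper's):

```python
import numpy as np

ALPHABET = "ACGT"  # assumed index of each base in the one-hot vector

def one_hot(seq):
    # (len(seq), 4) matrix with a single 1 per row marking the base
    x = np.zeros((len(seq), len(ALPHABET)), dtype=np.float32)
    for i, base in enumerate(seq):
        x[i, ALPHABET.index(base)] = 1.0
    return x

x = one_hot("ACGTA")
print(x.shape)  # (5, 4)
print(x[0])     # [1. 0. 0. 0.]  -> 'A'
```

Stacking one such matrix per sequence yields the (samples, RNA_LEN, 4) tensor the Conv1D layer consumes.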
```
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=2000 # how many protein-coding sequences
NC_SEQUENCES=2000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=91 # how long is each sequence
CDS_LEN=73
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 14
DROP_RATE = 0.2
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=40 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=5 # train the model this many times (range 1 to SPLITS)
import sys
try:
from google.colab import drive
IN_COLAB = True
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import *
except:
print("CoLab not working. On my PC, use relative paths.")
IN_COLAB = False
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_gen import *
from SimTools.RNA_describe import ORF_counter
from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
print("ERROR: Cannot use RNA_prep.")
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
# Use code from our SimTools library.
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
tora = Transcript_Oracle()
tora.set_cds_len_mean(CDS_LEN) # CDS=ORF+STOP.
pcgen.set_seq_oracle(tora)
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(RNA_LEN)
pc_train = pc_sim.get_sequences(PC_SEQUENCES)
nc_train = nc_sim.get_sequences(NC_SEQUENCES)
print("Train on",len(pc_train),"PC seqs")
print("Train on",len(nc_train),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("PC train")
describe_sequences(pc_train)
print("NC train")
describe_sequences(nc_train)
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
#dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
pc_sim.set_reproducible(True)
nc_sim.set_reproducible(True)
pc_test = pc_sim.get_sequences(PC_TESTS)
nc_test = nc_sim.get_sequences(NC_TESTS)
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
## Conclusion
The CNN is very capable of learning ORF/nonORF from simulated short RNA.
```
import panel as pn
pn.extension()
```
The ``Button`` widget allows triggering events when the button is clicked. In addition to a ``value`` parameter, which toggles from `False` to `True` while the click event is being processed, the widget provides a ``clicks`` parameter that can be watched to subscribe to click events.
For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Links.ipynb).
#### Parameters:
For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
##### Core
* **``clicks``** (int): Number of clicks (can be listened to)
* **``value``** (boolean): Toggles from `False` to `True` while the event is being processed.
##### Display
* **``button_type``** (str): A button theme; should be one of ``'default'`` (white), ``'primary'`` (blue), ``'success'`` (green), ``'info'`` (yellow), or ``'danger'`` (red)
* **``disabled``** (boolean): Whether the widget is editable
* **``name``** (str): The title of the widget
___
```
button = pn.widgets.Button(name='Click me', button_type='primary')
button
```
The ``clicks`` parameter will report the number of times the button has been pressed:
```
button.clicks
```
The ``on_click`` method can be used to trigger a function when the button is clicked:
```
text = pn.widgets.TextInput(value='Ready')
def b(event):
text.value = 'Clicked {0} times'.format(button.clicks)
button.on_click(b)
pn.Row(button, text)
```
The ``Button`` name string may contain Unicode characters, providing a convenient way to define common graphical buttons:
```
backward = pn.widgets.Button(name='\u25c0', width=50)
forward = pn.widgets.Button(name='\u25b6', width=50)
search = pn.widgets.Button(name='🔍', width=100)
pn.Row(backward, forward, search)
```
The color of the button can be set by selecting one of the available button types:
```
pn.Column(*(pn.widgets.Button(name=p, button_type=p) for p in pn.widgets.Button.param.button_type.objects))
```
### Controls
The `Button` widget exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
```
pn.Row(button.controls, button)
```
```
# Function: from Celsius to Fahrenheit
# Author: Chris Wang
# equation: f = c*1.8 + 32
def cel2fah(src):
    if src < -273.15:
        raise ValueError('Value not exists')
    res = src * 1.8 + 32
    return res
try:
    cel = float(input("Please input Celsius value: "))
    fah = cel2fah(cel)
    print("the corresponding Fahrenheit value is %0.1f" % fah)
except ValueError:
    print("Celsius value does not exist")
else:
    print("No value error")
# Function: judge whether an integer is prime or not
# Author: Chris Wang
min_prime = 2  # the minimum prime is 2
def is_prime(temp):
    # the input string must be decimal digits only;
    # '-', '.', binary, and roman numerals are not allowed
    if not temp.isdecimal():
        return False
    temp = int(temp)
    # 0, 1, and negative numbers are not prime
    if temp < min_prime:
        return False
    max_bound = temp // 2 + 1
    for i in range(min_prime, max_bound):
        if temp % i == 0:
            return False
    return True

assert is_prime('data') == False
assert is_prime('-3.1') == False
assert is_prime('3') == True
assert is_prime('67') == True
# Function: calculate the sum of the digits 0-9 in a string
# Author: Chris Wang
def sum_digit(src):
    if src.count('.') > 1:
        raise ValueError("str error")
    total = 0
    for ch in src:
        if '0' <= ch <= '9':
            total += int(ch)
    return total

assert sum_digit("124.56") == 18
assert sum_digit("156") == 12
# Function: delete repeated items from a list
# Author : Chris Wang
import random

# generate a random list
def random_list(start, stop, length):
    # bound detection: a negative length produces an empty list
    length = int(length) if length >= 0 else 0
    start, stop = int(start), int(stop)
    # bound detection: swap if the bounds are reversed
    if start > stop:
        start, stop = stop, start
    result = []
    for i in range(length):
        # add item to the end of the list
        result.append(random.randint(start, stop))
    return result
num_start = 0
num_end = 100
num_cnt = 30 # generate 30 numbers
lst = random_list(num_start,num_end,num_cnt)
print("origin list:" + str(lst))
lst_reg = []
# for i in range( len(lst) ):
# if lst[i] not in lst_reg:
# lst_reg.append( lst[i] ) # no return value function
# print("Sorting out:" + str(lst_reg))
lst_reg = sorted(set(lst))
print("Sorting out:" + str(lst_reg))
def fun(temp):
yield temp*2
print("*")
for i in fun(5):
print("x"*10)
import os
import zipfile

class ZFile(object):
def __init__(self, filename, mode='r', basedir=''):
self.filename = filename
self.mode = mode
if self.mode in ('w', 'a'):
self.zfile = zipfile.ZipFile(
filename, self.mode, compression=zipfile.ZIP_DEFLATED)
else:
self.zfile = zipfile.ZipFile(filename, self.mode)
self.basedir = basedir
if not self.basedir:
self.basedir = os.path.dirname(filename)
def addfile(self, path, arcname=None):
path = path.replace('//', '/')
if not arcname:
if path.startswith(self.basedir):
arcname = path[len(self.basedir):]
else:
arcname = ''
self.zfile.write(path, arcname)
def addfiles(self, paths):
for path in paths:
if isinstance(path, tuple):
self.addfile(*path)
else:
self.addfile(path)
def close(self):
self.zfile.close()
def extract_to(self, path):
for p in self.zfile.namelist():
self.extract(p, path)
def extract(self, filename, path):
if not filename.endswith('/'):
f = os.path.join(path, filename)
dir = os.path.dirname(f)
if not os.path.exists(dir):
os.makedirs(dir)
            open(f, 'wb').write(self.zfile.read(filename))
with open(r'.\data\people.txt', 'r') as fp:
    print(fp.name)
    read_txt = fp.readlines()
    for row in read_txt:
        row = row.strip('\n')  # delete '\n'
        print(row)
```
# Chapter 15
*Modeling and Simulation in Python*
Copyright 2021 Allen Downey
License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ModSimPy/raw/master/' +
'modsim.py')
# import functions from modsim
from modsim import *
```
[Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/ModSimPy/blob/master//chapters/chap15.ipynb)
So far the systems we have studied have been physical in the sense that they exist in the world, but they have not been physics, in the sense of what physics classes are usually about. In the next few chapters, we'll do some physics, starting with **thermal systems**, that is, systems where the temperature of objects changes as heat transfers from one to another.
## The coffee cooling problem
The coffee cooling problem was discussed by Jearl Walker in
"The Amateur Scientist", *Scientific American*, Volume 237, Issue 5, November 1977. Since then it has become a standard example of modeling and simulation.
Here is my version of the problem:
> Suppose I stop on the way to work to pick up a cup of coffee, which I take with milk. Assuming that I want the coffee to be as hot as possible when I arrive at work, should I add the milk at the coffee shop, wait until I get to work, or add the milk at some point in between?
To help answer this question, I made a trial run with the milk and
coffee in separate containers and took some measurements (not really):
- When served, the temperature of the coffee is 90 °C. The volume is
300 mL.
- The milk is at an initial temperature of 5 °C, and I take about
50 mL.
- The ambient temperature in my car is 22 °C.
- The coffee is served in a well insulated cup. When I arrive at work after 30 minutes, the temperature of the coffee has fallen to 70 °C.
- The milk container is not as well insulated. After 15 minutes, it
warms up to 20 °C, nearly the ambient temperature.
To use this data and answer the question, we have to know something
about temperature and heat, and we have to make some modeling decisions.
## Temperature and heat
To understand how coffee cools (and milk warms), we need a model of
temperature and heat. **Temperature** is a property of an object or a
system; in SI units it is measured in degrees Celsius (°C). Temperature quantifies how hot or cold the object is, which is related to the average velocity of the particles that make it up.
When particles in a hot object contact particles in a cold object, the
hot object gets cooler and the cold object gets warmer as energy is
transferred from one to the other. The transferred energy is called
**heat**; in SI units it is measured in joules (J).
Heat is related to temperature by the following equation (see
<http://modsimpy.com/thermass>):
$$Q = C~\Delta T$$
where $Q$ is the amount of heat transferred to an object, $\Delta T$ is resulting change in temperature, and $C$ is the **thermal mass** of the object, which quantifies how much energy it takes to heat or cool it. In SI units, thermal mass is measured in joules per degree Celsius (J/°C).
For objects made primarily from one material, thermal mass can be
computed like this:
$$C = m c_p$$
where $m$ is the mass of the object and $c_p$ is the **specific heat capacity** of the material (see <http://modsimpy.com/specheat>).
We can use these equations to estimate the thermal mass of a cup of
coffee. The specific heat capacity of coffee is probably close to that
of water, which is 4.2 J/g/°C. Assuming that the density of coffee is
close to that of water, which is 1 g/mL, the mass of 300 mL of coffee is 300 g, and the thermal mass is 1260 J/°C.
So when a cup of coffee cools from 90 °C to 70 °C, the change in
temperature, $\Delta T$ is 20 °C, which means that 25 200 J of heat
energy was transferred from the coffee to the surrounding environment
(the cup holder and air in my car).
To give you a sense of how much energy that is, if you were able to
harness all of that heat to do work (which you cannot), you could
use it to lift a cup of coffee from sea level to 8571 m, just shy of the height of Mount Everest, 8848 m.
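These numbers are easy to check with a few lines of Python, using the values from the problem statement and $g \approx 9.8$ m/s$^2$:

```
c_p = 4.2        # specific heat capacity of water, J/g/°C
density = 1      # g/mL, approximate density of coffee
volume = 300     # mL
mass = volume * density        # 300 g
C = mass * c_p                 # thermal mass, J/°C

delta_T = 90 - 70              # temperature drop, °C
Q = C * delta_T                # heat transferred, J

g = 9.8                        # m/s^2
height = Q / (mass / 1000 * g) # how high Q could lift the coffee, m
print(C, Q, round(height))
```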
Assuming that the cup has less mass than the coffee, and is made from a material with lower specific heat, we can ignore the thermal mass of the cup. For a cup with substantial thermal mass, like a ceramic mug, we might consider a model that computes the temperature of coffee and cup separately.
## Heat transfer
In a situation like the coffee cooling problem, there are three ways
heat transfers from one object to another (see <http://modsimpy.com/transfer>):
- Conduction: When objects at different temperatures come into
contact, the faster-moving particles of the higher-temperature
object transfer kinetic energy to the slower-moving particles of the lower-temperature object.
- Convection: When particles in a gas or liquid flow from place to
place, they carry heat energy with them. Fluid flows can be caused
by external action, like stirring, or by internal differences in
temperature. For example, you might have heard that hot air rises,
which is a form of "natural convection".
- Radiation: As the particles in an object move due to thermal energy,
they emit electromagnetic radiation. The energy carried by this
radiation depends on the object's temperature and surface properties
(see <http://modsimpy.com/thermrad>).
For objects like coffee in a car, the effect of radiation is much
smaller than the effects of conduction and convection, so we will ignore it.
Convection can be a complex topic, since it often depends on details of fluid flow in three dimensions. But for this problem we will be able to get away with a simple model called "Newton's law of cooling".
## Newton's law of cooling
Newton's law of cooling asserts that the temperature rate of change for an object is proportional to the difference in temperature between the object and the surrounding environment:
$$\frac{dT}{dt} = -r (T - T_{env})$$
where $T$, the temperature of the object, is a function of time, $t$, $T_{env}$ is the temperature of the environment, and $r$ is a constant that characterizes how quickly heat is transferred between the system and the environment.
Newton's so-called "law" is really a model: it is a good approximation in some conditions and less good in others.
For example, if the primary mechanism of heat transfer is conduction,
Newton's law is "true", which is to say that $r$ is constant over a
wide range of temperatures. And sometimes we can estimate $r$ based on
the material properties and shape of the object.
When convection contributes a non-negligible fraction of heat transfer, $r$ depends on temperature, but Newton's law is often accurate enough, at least over a narrow range of temperatures. In this case $r$ usually has to be estimated experimentally, since it depends on details of surface shape, air flow, evaporation, etc.
When radiation makes up a substantial part of heat transfer, Newton's
law is not a good model at all. This is the case for objects in space or in a vacuum, and for objects at high temperatures (more than a few
hundred degrees Celsius, say).
However, for a situation like the coffee cooling problem, we expect
Newton's model to be quite good.
## Implementation
To get started, let's forget about the milk temporarily and focus on the coffee.
Here's a function that takes the parameters of the system and makes a `System` object:
```
def make_system(T_init, volume, r, t_end):
return System(T_init=T_init,
T_final=T_init,
volume=volume,
r=r,
t_end=t_end,
T_env=22,
t_0=0,
dt=1)
```
In addition to the parameters, `make_system` sets the temperature of the environment, `T_env`, the initial time stamp, `t_0`, and the time step, `dt`, which we will use to simulate the cooling process.
Here's a `System` object that represents the coffee.
```
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
```
The values of `T_init`, `volume`, and `t_end` come from the statement of the problem.
I chose the value of `r` arbitrarily for now; we will see how to estimate it soon.
Strictly speaking, Newton's law is a differential equation, but over a short period of time we can approximate it with a difference equation:
$$\Delta T = -r (T - T_{env}) dt$$
where $dt$ is the time step and $\Delta T$ is the change in temperature during that time step.
Note: I use $\Delta T$ to denote a change in temperature over time, but in the context of heat transfer, you might also see $\Delta T$ used to denote the difference in temperature between an object and its
environment, $T - T_{env}$. To minimize confusion, I avoid this second
use.
The following function takes the current time `t`, the current temperature, `T`, and a `System` object, and computes the change in temperature during a time step:
```
def change_func(t, T, system):
r, T_env, dt = system.r, system.T_env, system.dt
return -r * (T - T_env) * dt
```
We can test it with the initial temperature of the coffee, like this:
```
change_func(0, coffee.T_init, coffee)
```
With `dt=1` minute, the temperature drops by about 0.7 °C, at least for this value of `r`.
Now here's a version of `run_simulation` that simulates a series of time steps from `t_0` to `t_end`:
```
def run_simulation(system, change_func):
t_array = linrange(system.t_0, system.t_end, system.dt)
n = len(t_array)
series = TimeSeries(index=t_array)
series.iloc[0] = system.T_init
for i in range(n-1):
t = t_array[i]
T = series.iloc[i]
series.iloc[i+1] = T + change_func(t, T, system)
system.T_final = series.iloc[-1]
return series
```
There are two things here that are different from previous versions of `run_simulation`.
First, we use `linrange` to make an array of values from `t_0` to `t_end` with time step `dt`.
`linrange` is similar to `linspace`; they both take a start value and an end value and return an array of equally spaced values.
The difference is the third argument: `linspace` takes an integer that indicates the number of points in the range; `linrange` takes a step size that indicates the interval between values.
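Since `linrange` comes from the ModSim library, here is a plain NumPy sketch of the distinction, with `np.arange` standing in for `linrange`:

```
import numpy as np

# linspace takes the *number of points* as its third argument
a = np.linspace(0, 30, 31)       # 31 equally spaced values from 0 to 30

# a linrange-like array takes the *step size* instead; np.arange is a
# stand-in here (the real linrange also includes the endpoint)
b = np.arange(0, 30 + 1, 1)      # values 0, 1, ..., 30

print(len(a), len(b))
```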
When we make the `TimeSeries`, we use the keyword argument `index` to indicate that the index of the `TimeSeries` is the array of time stamps, `t_array`.
Second, this version of `run_simulation` uses `iloc` rather than `loc` to specify the rows in the `TimeSeries`.
Here's the difference:
* With `loc`, the label in brackets can be any kind of value, with any start, end, and time step. For example, in the world population model, the labels are years starting in 1960 and ending in 2016.
* With `iloc`, the label in brackets is always an integer starting at 0. So we can always get the first element with `iloc[0]` and the last element with `iloc[-1]`, regardless of what the labels are.
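Here is a small standalone pandas example of the difference (the values are illustrative, not from the coffee model):

```
import pandas as pd

# a Series whose labels are time stamps in minutes
series = pd.Series([90.0, 89.3, 88.6], index=[0.0, 1.0, 2.0])

print(series.loc[2.0])    # label-based lookup
print(series.iloc[0])     # position-based: first element
print(series.iloc[-1])    # position-based: last element
```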
In this version of `run_simulation`, the loop variable is an integer, `i`, that goes from `0` to `n-1`, including `0` but not including `n-1`.
So the first time through the loop, `i` is `0` and the value we add to the `TimeSeries` has index 1.
The last time through the loop, `i` is `n-2` and the value we add has index `n-1`.
We can run the simulation like this:
```
results = run_simulation(coffee, change_func)
```
The result is a `TimeSeries` with one row per time step.
Here are the first few rows:
```
show(results.head())
```
And the last few rows:
```
show(results.tail())
```
With `t_0=0`, `t_end=30`, and `dt=1`, the time stamps go from `0.0` to `30.0`.
Here's what the `TimeSeries` looks like.
```
results.plot(label='coffee')
decorate(xlabel='Time (minute)',
ylabel='Temperature (C)',
title='Coffee Cooling')
```
The temperature after 30 minutes is 72.3 °C, which is a little higher than what's stated in the problem, 70 °C.
```
coffee.T_final
```
By trial and error, we could find the value of `r` where the final temperature is precisely 70 °C.
But it is more efficient to use a root-finding algorithm.
## Finding roots
The ModSim library provides a function called `root_scalar` that finds the roots of non-linear equations. As an example, suppose you want to find the roots of the polynomial
$$f(x) = (x - 1)(x - 2)(x - 3)$$
A **root** is a value of $x$ that makes $f(x)=0$. Because of the way I wrote this polynomial, we can see that if $x=1$, the first factor is 0; if $x=2$, the second factor is 0; and if $x=3$, the third factor is 0, so those are the roots.
I'll use this example to demonstrate `root_scalar`. First, we have to
write a function that evaluates $f$:
```
def func(x):
return (x-1) * (x-2) * (x-3)
```
Now we call `root_scalar` like this:
```
res = root_scalar(func, bracket=[1.5, 2.5])
res
```
The first argument is the function whose roots we want. The second
argument is an interval that contains or "brackets" a root. The result is an object that contains several variables, including `root`, which is the root that was found.
```
res.root
```
If we provide a different interval, we find a different root.
```
res = root_scalar(func, bracket=[2.5, 3.5])
res.root
```
If the interval doesn't contain a root, you'll get a `ValueError` and a message like "f(a) and f(b) must have different signs".
```
res = root_scalar(func, bracket=[4, 5])
```
Now we can use `root_scalar` to estimate `r`.
## Estimating `r`
What we want is the value of `r` that yields a final temperature of
70 °C. To use `root_scalar`, we need a function that takes `r` as a parameter and returns the difference between the final temperature and the goal:
```
def error_func(r, system):
system.r = r
results = run_simulation(system, change_func)
return system.T_final - 70
```
This is called an "error function" because it returns the
difference between what we got and what we wanted, that is, the error.
With the right value of `r`, the error is 0.
We can test `error_func` like this, using the initial guess `r=0.01`:
```
coffee = make_system(T_init=90, volume=300, r=0.01, t_end=30)
error_func(0.01, coffee)
```
The result is an error of 2.3 °C, which means the final temperature with `r=0.01` is too high.
```
error_func(0.02, coffee)
```
With `r=0.02`, the error is about -11°C, which means that the final temperature is too low. So we know that the correct value must be in between.
Now we can call `root_scalar` like this:
```
res = root_scalar(error_func, coffee, bracket=[0.01, 0.02])
res.flag
```
The first argument is the error function.
The second argument is the `System` object, which `root_scalar` passes as an argument to `error_func`.
The third argument is an interval that brackets the root.
Here are the results.
```
r_coffee = res.root
r_coffee
```
In this example, `r_coffee` turns out to be about `0.0115`, in units of min$^{-1}$ (inverse minutes).
We can confirm that this value is correct by setting `r` to the root we found and running the simulation.
```
coffee.r = res.root
run_simulation(coffee, change_func)
coffee.T_final
```
The final temperature is very close to 70 °C.
## Exercises
**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 °C, in a vessel with `r=0.1`, for 15 minutes, and plot the results.
By trial and error, find a value for `r` that makes the final temperature close to 20 °C.
```
# Solution goes here
# Solution goes here
```
**Exercise:** Write an error function that simulates the temperature of the milk and returns the difference between the final temperature and 20 °C. Use it to estimate the value of `r` for the milk.
```
# Solution goes here
# Solution goes here
# Solution goes here
```
# Introduction
**Authors: M. Ravasi, D. Vargas, I. Vasconcelos**
Welcome to the **Solving large-scale inverse problems in Python with PyLops** tutorial!
The aim of this tutorial is to:
- introduce you to the concept of *linear operators* and their usage in the solution of *inverse problems*;
- show how PyLops can be used to set-up non-trivial linear operators and solve inverse problems in Python;
- walk you through a set of use cases where PyLops has been leveraged to solve real scientific problems and present future directions of development.
## Useful links
- Tutorial Github repository: https://github.com/mrava87/pylops_pydata2020
- PyLops Github repository: https://github.com/equinor/pylops
- PyLops reference documentation: https://pylops.readthedocs.io/en/latest/
## Theory in a nutshell
In this tutorial we will try to keep the theory to a minimum and quickly expose you to practical examples. However, we want to make sure that some of the basic underlying concepts are clear to everyone and define a common mathematical notation.
At the core of PyLops lies the concept of **linear operators**. A linear operator is generally a mapping or function that acts linearly on elements of a space to produce elements of another space. More specifically we say that $\mathbf{A}:\mathbb{F}^m \to \mathbb{F}^n$ is a linear operator that maps a vector of size $m$ in the *model space* to a vector of size $n$ in the *data space*:
$$\mathbf{y} = \mathbf{A} \mathbf{x}$$
We will refer to this as **forward model (or operation)**.
Conversely the application of its adjoint to a data vector is referred to as **adjoint modelling (or operation)**:
$$\mathbf{x} = \mathbf{A}^H \mathbf{y}$$
In its simplest form, a linear operator can be seen as a **matrix** of size $n \times m$ (and its adjoint is simply its transpose and complex conjugate). However, in a more general sense we can think of a linear operator as any pair of software routines that mimic the effect of a matrix on a model vector and that of its adjoint on a data vector.
Solving an inverse problem amounts to removing the effect of the operator/matrix $\mathbf{A}$ from the data $\mathbf{y}$ to retrieve the model $\mathbf{x}$ (or an approximation of it):
$$\hat{\mathbf{x}} = \mathbf{A}^{-1} \mathbf{y}$$
In practice, the inverse of $\mathbf{A}$ is generally not explicitly required. A solution can be obtained using either direct methods, matrix decompositions (e.g., SVD), or iterative solvers. Luckily, many iterative methods (e.g. cg, lsqr) do not need to know the individual entries of a matrix to solve a linear system. Such solvers only require the computation of forward and adjoint matrix-vector products - exactly what a linear operator does!
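For example, SciPy's iterative solvers accept a `scipy.sparse.linalg.LinearOperator` built from nothing but a matvec function; a minimal sketch with a diagonal operator:

```
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 5
d = np.arange(1, n + 1, dtype=float)

# an operator defined only by its action on a vector (no matrix stored)
A = LinearOperator((n, n), matvec=lambda x: d * x)

y = np.ones(n)
x, info = cg(A, y)    # solve A x = y; info == 0 signals convergence
print(info, x)
```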
**So what?**
We have learned that to solve an inverse problem, we do not need to express the modelling operator in terms of its dense (or sparse) matrix. All we need to know is how to perform the forward and adjoint operations - ideally as fast as possible and using the least amount of memory.
Our first task will be to understand how we can effectively write a linear operator on pen and paper and translate it into computer code. We will consider 2 examples:
- Element-wise multiplication (also known as Hadamard product)
- First Derivative
Let's first import the libraries we need in this tutorial
```
# Run this when using Colab (will install the missing libraries)
# !pip install pylops pympler scooby
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pylops
import scooby
from scipy.linalg import lstsq
from pylops import LinearOperator
from pylops.utils import dottest
```
## Element-wise multiplication
We start by creating a barebones linear operator that performs a simple element-wise multiplication between two vectors (the so-called Hadamard product):
$$ y_i = d_i x_i \quad \forall i=0,1,...,n-1 $$
If we think about the forward problem the way we wrote it before, we can see that this operator can be equivalently expressed as a dot-product between a square matrix $\mathbf{D}$ that has the $d_i$ elements along its main diagonal and a vector $\mathbf{x}$:
$$\mathbf{y} = \mathbf{D} \mathbf{x}$$
Because of this, the related linear operator is called *Diagonal* operator in PyLops.
We are ready to implement this operator in 2 different ways:
- directly as a diagonal matrix;
- as a linear operator that performs directly element-wise multiplication.
### Dense matrix definition
```
n = 10
diag = np.arange(n)
D = np.diag(diag)
print('D:\n', D)
```
We can now apply the forward by simply using `np.dot`
```
x = np.ones(n)
y = np.dot(D, x) # or D.dot(x) or D @ x
print('y: ', y)
```
As we have access to all the entries of the matrix, it is very easy to write the adjoint
```
xadj = np.dot(np.conj(D.T), y)
print('xadj: ', xadj)
```
*Note:* since the elements of our matrix are real numbers, we can avoid applying the complex conjugation here.
Everything seems very easy so far. This approach does however carry some problems:
- we are storing $N^2$ numbers, even though we know that our matrix has only elements along its diagonal.
- we are applying a dot product which requires $N^2$ multiplications and summations (most of them with zeros)
Of course in this case we could use a sparse matrix, which allows us to store only the non-zero elements (and their indices) and provides a faster way to perform the dot product.
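For instance, with `scipy.sparse` the same diagonal matrix stores only its diagonal entries:

```
import numpy as np
from scipy import sparse

n = 10
diag = np.arange(n)

Ds = sparse.diags(diag)   # stores the diagonal only, not the full n x n grid
x = np.ones(n)
y = Ds.dot(x)             # same result as the dense dot product
print(y)
```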
### Linear operator definition
Let's take a leap of faith, and see if we can avoid thinking about the matrix altogether and write just an equivalent (ideally faster) piece of code that mimics this operation.
To write its equivalent linear operator, we define a class with an init method, and 2 other methods:
- `_matvec`: we write the forward operation here
- `_rmatvec`: we write the adjoint operation here
We see that we are also subclassing a PyLops LinearOperator. For the moment let's not get into the details of what that entails and simply focus on writing the content of these three methods.
```
class Diagonal(LinearOperator):
"""Short version of a Diagonal operator. See
https://github.com/equinor/pylops/blob/master/pylops/basicoperators/Diagonal.py
for a more detailed implementation
"""
def __init__(self, diag, dtype='float64'):
self.diag = diag
self.shape = (len(self.diag), len(self.diag))
self.dtype = np.dtype(dtype)
def _matvec(self, x):
y = self.diag * x
return y
def _rmatvec(self, x):
y = np.conj(self.diag) * x
return y
```
Now we create the operator
```
Dop = Diagonal(diag)
print('Dop: ', Dop)
```
### Linear operator application
Forward
```
y = Dop * x # Dop @ x
print('y: ', y)
```
Adjoint
```
xadj = Dop.H * y
print('xadj: ', xadj)
```
As expected we obtain the same results!
**EX:** try making a much bigger vector $\mathbf{x}$ and time the forward and adjoint for the two approaches
```
def Diagonal_timing():
"""Timing of Diagonal operator
"""
n = 10000
diag = np.arange(n)
x = np.ones(n)
# dense
D = np.diag(diag)
from scipy import sparse
Ds = sparse.diags(diag, 0)
# lop
Dop = Diagonal(diag)
# uncomment these
%timeit -n3 -r3 np.dot(D, x)
%timeit -n3 -r3 Ds.dot(x)
%timeit -n3 -r3 Dop._matvec(x)
Diagonal_timing()
```
### Linear operator testing
One of the most important aspects of writing a linear operator is being able to verify that the code implemented in forward mode and the code implemented in adjoint mode are effectively adjoint to each other.
If this is not the case, we will struggle to invert our linear operator - some iterative solvers will diverge and other show very slow convergence.
This is instead the case if the so-called *dot-test* is passed within a certain threshold:
$$
(\mathbf{A}\mathbf{u})^H \mathbf{v} = \mathbf{u}^H (\mathbf{A}^H \mathbf{v})
$$
where $\mathbf{u}$ and $\mathbf{v}$ are two random vectors.
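Written out by hand with NumPy for a small dense matrix, the test looks like this (a sketch of what a dot-test routine does, up to tolerance details):

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))   # any real operator; a dense matrix here
u = rng.standard_normal(6)        # random model-space vector
v = rng.standard_normal(4)        # random data-space vector

lhs = np.dot(A @ u, v)            # (A u)^H v
rhs = np.dot(u, A.T @ v)          # u^H (A^H v)
print(abs(lhs - rhs))             # ~0 up to floating-point error
```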
Let's use `pylops.utils.dottest`
```
dottest(Dop, n, n, verb=True);
```
## First Derivative
Let's consider now something less trivial. We use a centered stencil for the first derivative:
$$ y_i = \frac{x_{i+1} - x_{i-1}}{2 \Delta} \quad \forall i=1,2,...,N-2 $$
where $\Delta$ is the sampling step of the input signal. Note that we will deal differently with the edges, using a forward/backward derivative.
### Dense matrix definition
```
nx = 11
D = np.diag(0.5*np.ones(nx-1), k=1) - np.diag(0.5*np.ones(nx-1), k=-1)
D[0, 0] = D[-1, -2] = -1
D[0, 1] = D[-1, -1] = 1
print('D:\n', D)
```
### Linear operator definition
**EX:** try writing the operator
```
class FirstDerivative(LinearOperator):
"""Short version of a FirstDerivative operator. See
https://github.com/equinor/pylops/blob/master/pylops/basicoperators/FirstDerivative.py
for a more detailed implementation
"""
def __init__(self, N, sampling=1., dtype='float64'):
self.N = N
self.sampling = sampling
self.shape = (N, N)
self.dtype = dtype
self.explicit = False
def _matvec(self, x):
x, y = x.squeeze(), np.zeros(self.N, self.dtype)
y[1:-1] = (0.5 * x[2:] - 0.5 * x[0:-2]) / self.sampling
# edges
y[0] = (x[1] - x[0]) / self.sampling
y[-1] = (x[-1] - x[-2]) / self.sampling
return y
def _rmatvec(self, x):
x, y = x.squeeze(), np.zeros(self.N, self.dtype)
y[0:-2] -= (0.5 * x[1:-1]) / self.sampling
y[2:] += (0.5 * x[1:-1]) / self.sampling
# edges
y[0] -= x[0] / self.sampling
y[1] += x[0] / self.sampling
y[-2] -= x[-1] / self.sampling
y[-1] += x[-1] / self.sampling
return y
```
Define the operator
```
Dop = FirstDerivative(nx)
print('Dop: ', Dop)
```
Perform the dot test
```
dottest(Dop, nx, nx, verb=True);
```
Now that you understand how it works, you can use PyLops' implementation of this operator (see https://pylops.readthedocs.io/en/latest/api/generated/pylops.FirstDerivative.html for details)
```
Dop = pylops.FirstDerivative(nx, edge=True)
print('Dop: ', Dop)
dottest(Dop, nx, nx, verb=True);
```
### Linear operator application
```
x = np.arange(nx) - (nx-1)/2
print('x: ', x)
```
Forward
```
y = np.dot(D, x)
print('y: ', y)
y = Dop * x
print('y: ', y)
```
Adjoint
```
xadj = np.dot(D.T, y)
print('xadj: ', xadj)
xadj = Dop.H * y
print('xadj: ', xadj)
```
**EX:** Same as before, let's time our two implementations
```
def FirstDerivative_timing():
    """Timing of FirstDerivative operator
    """
    nx = 2001
    x = np.arange(nx) - (nx-1)/2
    # dense
    D = np.diag(0.5*np.ones(nx-1), k=1) - np.diag(0.5*np.ones(nx-1), k=-1)
    D[0, 0] = D[-1, -2] = -1
    D[0, 1] = D[-1, -1] = 1
    # lop
    Dop = pylops.FirstDerivative(nx, edge=True)
    %timeit -n3 -r3 np.dot(D, x)
    %timeit -n3 -r3 Dop._matvec(x)

FirstDerivative_timing()
```
**EX:** try to compare the memory footprint of the matrix $\mathbf{D}$ with that of its equivalent linear operator. Hint: install ``pympler`` and use ``pympler.asizeof``
```
def FirstDerivative_memory():
    """Memory footprint of FirstDerivative operator
    """
    from pympler import asizeof
    from scipy.sparse import diags
    nn = (10 ** np.arange(2, 4, 0.5)).astype(int)
    mem_D = []
    mem_Ds = []
    mem_Dop = []
    for n in nn:
        D = np.diag(0.5 * np.ones(n - 1), k=1) - np.diag(0.5 * np.ones(n - 1), k=-1)
        D[0, 0] = D[-1, -2] = -1
        D[0, 1] = D[-1, -1] = 1
        Ds = diags((0.5 * np.ones(n - 1), -0.5 * np.ones(n - 1)),
                   offsets=(1, -1))
        Dop = pylops.FirstDerivative(n, edge=True)
        mem_D.append(asizeof.asizeof(D))
        mem_Ds.append(asizeof.asizeof(Ds))
        mem_Dop.append(asizeof.asizeof(Dop))
    plt.figure(figsize=(12, 3))
    plt.semilogy(nn, mem_D, '.-k', label='D')
    plt.semilogy(nn, mem_Ds, '.-b', label='Ds')
    plt.semilogy(nn, mem_Dop, '.-r', label='Dop')
    plt.legend()
    plt.title('Memory comparison')

FirstDerivative_memory()
```
Finally, let's move one step further and try to solve the inverse problem.
For the dense matrix, we will use `scipy.linalg.lstsq`. For the PyLops operator this can be done very easily by using `/`, which calls the `scipy.sparse.linalg.lsqr` solver (you can also use this solver directly if you want to fine-tune some of its input parameters).
```
xinv = lstsq(D, y)[0]
print('xinv: ', xinv)
xinv = Dop / y
print('xinv: ', xinv)
```
In both cases we have retrieved the correct solution!
## Chaining operators
Up until now, we have discussed how brand new operators can be created in a few systematic steps. This sounds cool, but it may look like we would need to do this every time we need to solve a new problem.
This is where **PyLops** comes in. Alongside providing users with an extensive collection of operators, the library allows such operators to be combined via basic algebraic operations (e.g. summed, subtracted, multiplied) or chained together (vertical and horizontal stacking, block and block diagonal).
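A minimal sketch of this operator algebra using SciPy's `aslinearoperator` (the machinery PyLops extends), with small illustrative matrices:

```python
import numpy as np
from scipy.sparse.linalg import aslinearoperator

A = aslinearoperator(np.array([[1., 2.], [3., 4.]]))
B = aslinearoperator(np.array([[0., 1.], [1., 0.]]))
x = np.array([1., -1.])

# Summed operator: (A + B) x == A x + B x
print((A + B) * x)       # [-2.  0.]
# Chained operator: (A * B) x == A (B x)
print((A * B) * x)       # [ 1.  1.]
```

Neither the sum nor the product is ever formed as an explicit matrix; each combined operator simply calls the forward (and adjoint) of its components on demand.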
We will see more of this in the following. For now let's imagine we have a modelling operator that is a smooth first-order derivative. To do so we can chain the ``FirstDerivative`` operator ($\mathbf{D}$) that we have just created with a smoothing operator ($\mathbf{S}$, https://pylops.readthedocs.io/en/latest/api/generated/pylops.Smoothing1D.html#pylops.Smoothing1D) and write the following problem:
$$\mathbf{y} = \mathbf{S} \mathbf{D} \mathbf{x}$$
Let's create it first and attempt to invert it afterwards.
```
nx = 51
x = np.ones(nx)
x[:nx//2] = -1
Dop = pylops.FirstDerivative(nx, edge=True, kind='forward')
Sop = pylops.Smoothing1D(5, nx)
# Chain the two operators
Op = Sop * Dop
print(Op)
# Create data
y = Op * x
# Invert
xinv = Op / y  # plain least-squares inversion
# regularized (Tikhonov-style) inversion for a more stable result
xinv = pylops.optimization.leastsquares.NormalEquationsInversion(Op, [pylops.Identity(nx)], y, epsRs=[1e-3,])
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 5))
ax1.plot(y, '.-k')
ax1.set_title(r"Data $y$")
ax2.plot(x, 'k', label='x')
ax2.plot(xinv, '--r', label='xinv')
ax2.legend()
ax2.set_title(r"Model $x$")
plt.tight_layout()
```
## Recap
In this first tutorial we have learned to:
- translate a linear operator from pen and paper to computer code
- write our own linear operators
- use PyLops linear operators to perform forward, adjoint and inverse
- combine PyLops linear operators.
```
scooby.Report(core='pylops')
```
```
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation-horse-or-human')
zip_ref.close()
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
# Directory with our validation horse pictures
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')
# Directory with our validation human pictures
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
```
## Building a Small Model from Scratch
But before we continue, let's start defining the model:
Step 1 will be to import tensorflow.
```
import tensorflow as tf
```
We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers.
Finally we add the densely connected layers.
Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
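As a small aside (illustrative numbers, not actual network outputs), the sigmoid maps the final layer's raw score into (0, 1), and a 0.5 cutoff turns that probability into a class label:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

raw_scores = np.array([-2.0, 0.0, 3.0])   # hypothetical pre-activation outputs
probs = sigmoid(raw_scores)               # P(class 1) for each input
labels = (probs > 0.5).astype(int)        # 0 -> 'horses', 1 -> 'humans'
print(labels)                             # [0 0 1]
```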
```
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
  # Only 1 output neuron. It will contain a value from 0 to 1, where 0 corresponds to one class ('horses') and 1 to the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=1e-4),
metrics=['acc'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/horse-or-human/', # This is the source directory for training images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
        '/tmp/validation-horse-or-human/',  # This is the source directory for validation images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
history = model.fit(  # fit_generator is deprecated; fit accepts generators
train_generator,
steps_per_epoch=8,
epochs=100,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
# Example
The following example demonstrates how to use the `pymatch` package to match [Lending Club Loan Data](https://www.kaggle.com/wendykan/lending-club-loan-data). Follow the link to download the dataset from Kaggle (you'll have to create an account, it's fast and free!).
Here we match Lending Club users that fully paid off loans (control) to those that defaulted (test). The example is contrived; however, a use case for this could be that we want to analyze user sentiment with the platform. Users that default on loans may have worse sentiment because they are predisposed to a bad situation--influencing their perception of the product. Before analyzing sentiment, we can match users that paid their loans in full to users that defaulted based on the characteristics we can observe. If matching is successful, we could then make a statement about the **causal effect** defaulting has on sentiment if we are confident our samples are sufficiently balanced and our model is free from omitted variable bias.
This example, however, only goes through the matching procedure, which can be broken down into the following steps:
* [Data Preparation](#Data-Prep)
* [Fit Propensity Score Models](#Matcher)
* [Predict Propensity Scores](#Predict-Scores)
* [Tune Threshold](#Tune-Threshold)
* [Match Data](#Match-Data)
* [Assess Matches](#Assess-Matches)
----
### Data Prep
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from pysmatch.Matcher import Matcher
import pandas as pd
import numpy as np
```
Load the dataset (`loan.csv`), which is the original `loan.csv` sampled as follows:
```python
path = 'misc/loan_full.csv'
fields = [
"loan_amnt",
"funded_amnt",
"funded_amnt_inv",
"term",
"int_rate",
"installment",
"grade",
"sub_grade",
"loan_status"
]
data = pd.read_csv(path)[fields]
# Treat long late as Defaulted
data.loc[data['loan_status'] == 'Late (31-120 days)', 'loan_status'] = 'Default'
# Sample 20K records from Fully paid and 2K from default
df = pd.concat([data[data.loan_status == 'Fully Paid'].sample(20000, random_state=42),
                data[data.loan_status == 'Default'].sample(2000, random_state=42)])
df.to_csv('misc/loan.csv', index=False)
```
```
path = "misc/loan.csv"
data = pd.read_csv(path)
```
Create test and control groups and reassign `loan_status` to be a binary treatment indicator. This is our response in the logistic regression model(s) used to generate propensity scores.
```
# .copy() avoids pandas SettingWithCopyWarning on the assignments below
test = data[data.loan_status == "Default"].copy()
control = data[data.loan_status == "Fully Paid"].copy()
test['loan_status'] = 1
control['loan_status'] = 0
```
----
### `Matcher`
Initialize the `Matcher` object.
**Note that:**
* Upon initialization, `Matcher` prints the formula used to fit logistic regression model(s) and the number of records in the majority/minority class.
* The regression model(s) are used to generate propensity scores. In this case, we are using the covariates on the right side of the equation to estimate the probability of defaulting on a loan (`loan_status`= 1).
* `Matcher` will use all covariates in the dataset unless a formula is specified by the user. Note that this step is only fitting model(s), we assign propensity scores later.
* Any covariates passed to the (optional) `exclude` parameter will be ignored from the model fitting process. This parameter is particularly useful for unique identifiers like a `user_id`.
```
m = Matcher(test, control, yvar="loan_status", exclude=[])
```
There is a significant imbalance in our data--the majority group (fully-paid loans) having many more records than the minority group (defaulted loans). We account for this by setting `balance=True` when calling `Matcher.fit_scores()` below. This tells `Matcher` to sample from the majority group when fitting the logistic regression model(s) so that the groups are of equal size. When undersampling this way, it is highly recommended that `nmodels` is explicitly assigned to an integer much larger than 1. This ensures that more of the majority group contributes to the generation of propensity scores. The value of this integer should depend on the severity of the imbalance; here we use `nmodels`=100.
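In spirit, the balancing amounts to something like the sketch below: repeatedly undersample the majority class, fit a logistic model, and average the predicted scores. (Synthetic data and the function name are illustrative; this is not pymatch's actual implementation.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic imbalanced data: 2000 majority (y=0), 200 minority (y=1)
X_maj, X_min = rng.normal(0, 1, (2000, 3)), rng.normal(0.5, 1, (200, 3))

def fit_balanced_scores(X_maj, X_min, nmodels=10):
    """Average propensity scores over nmodels undersampled logistic fits."""
    X_all = np.vstack([X_maj, X_min])
    scores = np.zeros(len(X_all))
    for _ in range(nmodels):
        # undersample the majority group to the minority group's size
        idx = rng.choice(len(X_maj), size=len(X_min), replace=False)
        X = np.vstack([X_maj[idx], X_min])
        y = np.r_[np.zeros(len(X_min)), np.ones(len(X_min))]
        model = LogisticRegression().fit(X, y)
        scores += model.predict_proba(X_all)[:, 1]
    return scores / nmodels

scores = fit_balanced_scores(X_maj, X_min, nmodels=10)
```

Averaging over many undersampled fits lets every majority record influence the scores even though each individual fit only sees a small balanced subset.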
```
# for reproducibility
np.random.seed(20170925)
m.fit_scores(balance=True, nmodels=100)
```
The average accuracy of our 100 models is 66.06%, suggesting that there's separability within our data and justifying the need for the matching procedure. It's worth noting that we don't pay much attention to these logistic models, since we are using them as a feature extraction tool (generation of propensity scores). The accuracy is a good way to detect separability at a glance, but we shouldn't spend time tuning and tinkering with these models. If our accuracy were close to 50%, that would suggest we cannot detect much separability in our groups given the features we observe, and that matching is probably not necessary (or more features should be included if possible).
### Predict Scores
```
m.predict_scores()
m.plot_scores()
```
The plot above demonstrates the separability present in our data. Test profiles have a much higher **propensity**, or estimated probability of defaulting given the features we isolated in the data.
---
### Tune Threshold
The `Matcher.match()` method matches profiles that have propensity scores within some threshold.
i.e. for two scores `s1` and `s2`, `|s1 - s2|` <= `threshold`
By default matches are found *from* the majority group *for* the minority group. For example, if our test group contains 1,000 records and our control group contains 20,000, `Matcher` will
iterate through the test (minority) group and find suitable matches from the control (majority) group. If a record in the minority group has no suitable matches, it is dropped from the final matched dataset. We need to ensure our threshold is small enough such that we get close matches and retain most (or all) of our data in the minority group.
Below we tune the threshold using `method="random"`. This matches a random profile that is within the threshold
as there could be many. This is much faster than the alternative method "min", which finds the *closest* match for every minority record.
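The thresholded matching itself can be sketched in a few lines of NumPy (illustrative only; the real routine also handles bookkeeping and replacement details):

```python
import numpy as np

def random_threshold_match(minority_scores, majority_scores, threshold, seed=0):
    """For each minority score, pick a random majority record within +/- threshold."""
    rng = np.random.default_rng(seed)
    matches = {}
    for i, s in enumerate(minority_scores):
        candidates = np.where(np.abs(majority_scores - s) <= threshold)[0]
        if len(candidates):          # minority records with no match are dropped
            matches[i] = int(rng.choice(candidates))
    return matches

minority = np.array([0.30, 0.90])
majority = np.array([0.10, 0.299, 0.301, 0.50])
print(random_threshold_match(minority, majority, threshold=0.005))
```

Here the first minority record matches one of the two nearby majority scores at random, while the second (0.90) has no candidate within the threshold and is dropped.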
```
m.tune_threshold(method='random')
```
It looks like a threshold of 0.0005 retains enough information in our data. Let's proceed with matching using this threshold.
---
### Match Data
Below we match one record from the majority group to each record in the minority group. This is done **with** replacement, meaning a single majority record can be matched to multiple minority records. `Matcher` assigns a unique `record_id` to each record in the test and control groups so this can be addressed after matching. If subsequent modelling is planned, one might consider weighting models using a weight vector of 1/`f` for each record, `f` being a record's frequency in the matched dataset. Thankfully `Matcher` can handle all of this for you :).
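The 1/`f` weighting can be computed directly from match frequencies; a pandas sketch with a made-up `record_id` column:

```python
import pandas as pd

matched = pd.DataFrame({"record_id": [7, 7, 8, 9, 9, 9]})  # majority ids after matching
freq = matched["record_id"].map(matched["record_id"].value_counts())
matched["weight"] = 1.0 / freq                             # 1/f per record
print(matched["weight"].tolist())                          # 0.5, 0.5, 1.0, 1/3, 1/3, 1/3
```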
```
m.match(method="min", nmatches=1, threshold=0.0005)
m.record_frequency()
```
It looks like the bulk of our matched-majority-group records occur only once, 68 occur twice, ... etc. We can preemptively generate a weight vector using `Matcher.assign_weight_vector()`
```
m.assign_weight_vector()
```
Let's take a look at our matched data thus far. Note that in addition to the weight vector, `Matcher` has also assigned a `match_id` to each record indicating our (in this case) *paired* matches, since we used `nmatches=1`. We can verify that matched records have `scores` within 0.0005 of each other.
```
m.matched_data.sort_values("match_id").head(6)
```
---
### Assess Matches
We must now determine if our data is "balanced". Can we detect any statistical differences between the covariates of our matched test and control groups? `Matcher` is configured to treat categorical and continuous variables separately in this assessment.
___Discrete___
For categorical variables, we look at plots comparing the proportional differences between test and control before and after matching.
For example, the first plot shows:
* `prop_test` - `prop_control` for all possible `term` values---`prop_test` and `prop_control` being the proportion of test and control records with a given term value, respectively. We want these (orange) bars to be small after matching.
* Results (p-value) of a Chi-Square Test for Independence before and after matching. After matching we want this p-value to be > 0.05, resulting in our failure to reject the null hypothesis that the frequency of the enumerated term values is independent of our test and control groups.
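This check can be reproduced with SciPy's chi-square test for independence; the contingency tables below are illustrative counts, not the loan data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: test / control; columns: counts of two hypothetical `term` values
before = np.array([[900, 100], [400, 600]])   # very different proportions
after = np.array([[480, 520], [500, 500]])    # roughly balanced after matching

p_before = chi2_contingency(before)[1]
p_after = chi2_contingency(after)[1]
print(f"before: p = {p_before:.3g}, after: p = {p_after:.3g}")
# p > 0.05 after matching: we fail to reject independence of term and group
```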
```
categorical_results = m.compare_categorical(return_table=True)
categorical_results
```
Looking at the plots and test results, we did a pretty good job balancing our categorical features! The p-values from the Chi-Square tests are all > 0.05 and we can verify by observing the small proportional differences in the plots.
___Continuous___
For continuous variables we look at Empirical Cumulative Distribution Functions (ECDF) for our test and control groups before and after matching.
For example, the first plot pair shows:
* ECDF for test vs ECDF for control before matching (left), ECDF for test vs ECDF for control after matching (right). We want the two lines to be very close to each other (or indistinguishable) after matching.
* Some tests + metrics are included in the chart titles.
* Tests performed:
* Kolmogorov-Smirnov Goodness of fit Test (KS-test)
This test statistic is calculated on 1000
permuted samples of the data, generating
        an empirical p-value. See pymatch.functions.ks_boot()
This is an adaptation of the ks.boot() method in
the R "Matching" package
https://www.rdocumentation.org/packages/Matching/versions/4.9-2/topics/ks.boot
* Chi-Square Distance:
Similarly this distance metric is calculated on
1000 permuted samples.
See pymatch.functions.grouped_permutation_test()
* Other included Stats:
    * Standardized mean and median differences.
How many standard deviations away are the mean/median
between our groups before and after matching
i.e. `abs(mean(control) - mean(test))` / `std(control.union(test))`
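The standardized difference in the last bullet is simple to compute; a sketch (pymatch's exact pooling of the standard deviation may differ):

```python
import numpy as np

def std_mean_diff(control, test):
    """abs(mean(control) - mean(test)) / std(control union test)."""
    pooled = np.concatenate([control, test])
    return abs(np.mean(control) - np.mean(test)) / np.std(pooled)

control = np.array([10.0, 12.0, 11.0, 13.0])
test = np.array([14.0, 16.0, 15.0, 17.0])
print(round(std_mean_diff(control, test), 3))   # 1.746
```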
```
cc = m.compare_continuous(return_table=True)
cc
```
We want the pvalues from both the KS-test and the grouped permutation of the Chi-Square distance after matching to be > 0.05, and they all are! We can verify by looking at how close the ECDFs are between test and control.
# Conclusion
We saw a very "clean" result from the above procedure, achieving balance among all the covariates. In my work at Mozilla, we see much hairier results using the same procedure, which will likely be your experience too. In the case that certain covariates are not well balanced, one might consider tinkering with the parameters of the matching process (`nmatches`>1) or adding more covariates to the formula specified when we initialized the `Matcher` object.
In any case, in subsequent modelling, you can always control for variables that you haven't deemed "balanced".
# Najafi-2018 Demo
This notebook is designed to showcase the work flow with NWB 2.0 files related to a typical neuroscience study. The tutorial presents materials accompanying the paper
>Farzaneh Najafi, Gamaleldin F Elsayed, Robin Cao, Eftychios Pnevmatikakis, Peter E Latham, John Cunningham, Anne K Churchland. "Excitatory and inhibitory subnetworks are equally selective during decision-making and emerge simultaneously during learning" bioRxiv (2018): 354340.
This study has reported a surprising finding that excitatory and inhibitory neurons are both equally selective in decision making, at both the single-cell and population level. Furthermore, they exhibit similar temporal dynamic changes during learning, paralleling behavioral improvements.
The demonstration includes querying, conditioning and visualization of neuro-data for a single session, as well as for the whole study population. To provide a use-case example, several figures from the paper will be reproduced.
```
%matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from pynwb import NWBFile, NWBHDF5IO
# Specify data path and filename - here, we just pick one arbitrary example NWB 2.0 file
data_dir = os.path.join('..', 'data', 'FN_dataSharing', 'nwb')
fname = 'mouse1_fni16_150818_001_ch2-PnevPanResults-170808-180842.nwb'
# Read NWB 2.0 file
nwb_io = NWBHDF5IO(os.path.join(data_dir, fname), mode = 'r')
nwbfile = nwb_io.read()
```
Here, we wish to extract the neuron response time-series.
In this dataset, neuron response time-series are stored trial-segmented, time-locked to several different trial events (e.g. *start tone*, *stimulus onset*, *choice*, or *reward*).
Thus, these neuron data take the shape of (ROI count) x (time points) x (trial count), one set of data for each trial event. Since these data are trial-segmented (as opposed to raw), they are stored in NWB 2.0 in a **processing module**, under a **data interface** of type **DfOverF**
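To make the (ROI count) x (time points) x (trial count) layout concrete, here is a tiny synthetic example of the trial-averaging used later in this notebook (shapes and values are made up):

```python
import numpy as np

n_roi, n_time, n_trial = 4, 10, 6
roi_ts = np.random.default_rng(0).normal(size=(n_roi, n_time, n_trial))
roi_ts[0, :, 2] = np.nan                   # pretend one trial has missing data

trial_avg = np.nanmean(roi_ts, axis=2)     # average over trials, ignoring NaNs
print(trial_avg.shape)                     # (4, 10)
```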
```
# Extract all trial-based ROI time-series
# ROI time-series has the shape of: (ROI count) x (time instances) x (trial count)`
trial_seg_module = nwbfile.modules.get('Trial-based-Segmentation').data_interfaces
```
The *Trial-based-Segmentation* module contains multiple sets of **DfOverF** data interfaces, each representing the trial-segmented neuronal response series time-locked to a corresponding event. Each **DfOverF** in turn contains multiple sets of **roi_response_series**, one for each trial. For result figure reproduction purposes, we will extract neuron response time-series time-locked to 4 events: *initial auditory tone onset*, *stimulus onset*, *time of first commit* and *time of second commit*
Each **roi_response_series** contains a field named **rois**, which provides the indices of the ROIs in the **ROI table** specific to this **roi_response_series**
```
# Display all roi_response_series, named by the event-type that the data are time locked to
for eve in trial_seg_module.keys():
    print(eve)
def get_trialsegmented_roi_timeseries(event_name, pre_stim_dur, post_stim_dur, trial_seg_module):
    event_roi_timeseries = trial_seg_module.get(event_name).roi_response_series
    tvec = event_roi_timeseries.get('Trial_00').timestamps.value  # timestamps from all trials are the same, so get one from Trial_00
    # check if pre/post stim duration is out of bound
    pre_stim_dur = np.maximum(tvec[0], pre_stim_dur)
    post_stim_dur = np.minimum(tvec[-1], post_stim_dur)
    # extract data
    ix = np.logical_and(tvec >= pre_stim_dur, tvec <= post_stim_dur)
    return np.dstack([d.data.value[:, ix] for d in event_roi_timeseries.values()]), tvec[ix]
```
Neuron identification is done using the CNMF algorithm; then a number of criteria are used to measure the quality of each ROI (neuron). This information is stored under another **processing module** as **ROI-table**. Thus, each neuron (ROI) has a corresponding entry in the **ROI-table** containing ROI-related information such as: good/bad ROI, inhibitory/excitatory ROI, ROI mask, etc.
Here, we wish to extract neurons that are labeled *good* and separate neurons into categories of *excitatory* and *inhibitory*
```
# Obtain the ROI-table
# here, 'initToneAl' is selected arbitrarily, all of the roi_response_series contain the same ROI-table and good_roi_mask
roi_tcourse = trial_seg_module.get('dFoF_initToneAl').roi_response_series.get('Trial_00')
good_roi_mask = roi_tcourse.rois.data # good_roi_mask here refers a 1D array of ROI indices, indexing into the ROI-table
roi_table = roi_tcourse.rois.table
# Visualizing the ROI-table
roi_colums = roi_table.colnames
roi_table_df = {}
for c in roi_colums:
    if c == 'image_mask':
        continue
    roi_table_df[c] = roi_table.get(c).data if type(roi_table.get(c).data) is np.ndarray else roi_table.get(c).data.value
roi_table_df = pd.DataFrame(roi_table_df)
roi_table_df
# Obtain inh/exc status of the ROI
neuron_type = roi_table.get('neuron_type').data[good_roi_mask]
# Display neuron_type for ease of visualization
print(neuron_type[1:20])
```
We will now perform minor data conditioning to reproduce Figure 1E
Again, here we seek to extract trial-segmented ROI data with respect to four events: *initial auditory tone onset*, *stimulus onset*, *time of first commit* and *time of second commit*
We also define the pre-stimulus and post-stimulus amount of time we wish to extract the data from/to
```
# specify event of interest to extract trial data
segmentation_settings = [
{'event':'dFoF_initToneAl', 'pre': -10000, 'post': 100},
{'event':'dFoF_stimAl_noEarlyDec', 'pre': -100, 'post': 10000},
{'event':'dFoF_firstSideTryAl', 'pre': -250, 'post': 250},
{'event':'dFoF_rewardAl', 'pre': -250, 'post': 10000}]
# extract trial-based data and average over trial
trial_avg_segments = {}
for setting in segmentation_settings:
    # extract segment
    out = get_trialsegmented_roi_timeseries(setting['event'], setting['pre'], setting['post'], trial_seg_module)
    # average over trial
    trial_avg_segments[setting['event']] = (np.nanmean(out[0], axis=2), out[1])
# Function to sort the ROI time-series based on the latency of the peak activity
def roi_sort_by_peak_latency(roi_tcourse):
    sorted_roi_idx = np.argsort(np.argmax(roi_tcourse, axis=1))
    return roi_tcourse[sorted_roi_idx, :].copy(), sorted_roi_idx
# Sort and concatenate trial-based data time-locked to: start tone, stimulus, 1st commit and 2nd commit
# Concatenate and sort
data_all = np.hstack([value[0] for value in trial_avg_segments.values()])
data_all, sorted_roi_idx = roi_sort_by_peak_latency(data_all)
# Concatenate all timevec(s) and determine the indices of t = 0
tvec_concat = [value[1] for value in trial_avg_segments.values()]
xdim_all = [t.size for t in tvec_concat]
xdim_all.insert(0,0)
zeros_all = [np.where(v == 0)[0][0] for v in tvec_concat]
# Extract inh/exc status
is_inh = np.zeros((data_all.shape[0]))
is_inh[neuron_type[sorted_roi_idx] == 'inhibitory'] = 1
# Raster Plot - Figure 1E
fig1E = plt.figure(figsize=(8,12))
ax1 = fig1E.add_subplot(111)
ax1.set_facecolor('white')
sns.heatmap(data=data_all, xticklabels=[], yticklabels=[], cmap='YlGnBu_r', ax=ax1, vmin=0)
# add vertical lines
for zidx, z in enumerate(zeros_all):
    ax1.axvline(x=np.cumsum(xdim_all)[zidx], color='b', linestyle='-', linewidth=0.7)
    ax1.axvline(x=z + np.cumsum(xdim_all)[zidx], color='r', linestyle='--', linewidth=1)
# add inhibitor marker
ax1.plot(np.ones(is_inh.shape)*(data_all.shape[1]+5), np.arange(is_inh.size)*is_inh, 'r_', markersize=8)
ax1.set_xlim(0, data_all.shape[1]+10)
ax1.set_xlabel('Time')
ax1.set_ylabel('Neuron')
ax1.set_title('Averaged inferred spike for all neurons for an example session')
```
Similarly, minor data conditioning is required for the reproduction of Figure 1F
Trial-related information is stored in NWB 2.0 under **trials**. **trials** is a table-like data structure that enforces two required variables as table columns: *start_time* and *stop_time*. Users can define any number of additional table columns to store trial-specific information related to their study.
Here, we have added 10 additional columns: *trial_type*, *trial_pulse_rate*, *trial_response*, *trial_is_good*, *init_tone*, *stim_onset*, *stim_offset*, *go_tone*, *first_commit*, *second_commit*
```
# Get trial info
trial_set = nwbfile.trials.to_dataframe()
trial_set
```
For the reproduction of Figure 1F, we wish to extract neuronal responses of *excitatory* and *inhibitory* neurons, separated by *trial type* category of either *High Rate* or *Low Rate*, and constrained by the condition that the mouse's response for this trial is correct
```
trial_is_good = trial_set.trial_is_good
trial_response_type = trial_set.trial_response
trial_type = trial_set.trial_type
# make trial-mask for correct high-rate (ipsilateral-lick) and low-rate (contralateral-lick) trial
correct_high_rate_trial = np.logical_and(trial_response_type == 'correct', trial_type == 'High-rate').values
correct_low_rate_trial = np.logical_and(trial_response_type == 'correct', trial_type == 'Low-rate').values
# make mask of inhibitory and excitatory neuron
is_inh = (neuron_type == 'inhibitory')
is_exc = (neuron_type == 'excitatory')
# specify event of interest to extract trial data
segmentation_settings = [
{'event':'dFoF_initToneAl', 'pre': -10000, 'post': 100},
{'event':'dFoF_stimAl_noEarlyDec', 'pre': -100, 'post': 10000},
{'event':'dFoF_firstSideTryAl', 'pre': -250, 'post': 250},
{'event':'dFoF_rewardAl', 'pre': -250, 'post': 10000}]
trial_avg_segments = {}
for setting in segmentation_settings:
    # extract segment
    out = get_trialsegmented_roi_timeseries(setting['event'], setting['pre'], setting['post'], trial_seg_module)
    # mask by high/low rate trial and inh/exc neuron type
    exc_correct_hr = out[0][:, :, correct_high_rate_trial][is_exc, :, :]
    inh_correct_hr = out[0][:, :, correct_high_rate_trial][is_inh, :, :]
    exc_correct_lr = out[0][:, :, correct_low_rate_trial][is_exc, :, :]
    inh_correct_lr = out[0][:, :, correct_low_rate_trial][is_inh, :, :]
    # take average across trials
    trial_avg_segments[setting['event']] = {'exc_correct_hr': np.nanmean(exc_correct_hr, axis=2),
                                            'inh_correct_hr': np.nanmean(inh_correct_hr, axis=2),
                                            'exc_correct_lr': np.nanmean(exc_correct_lr, axis=2),
                                            'inh_correct_lr': np.nanmean(inh_correct_lr, axis=2),
                                            'timestamps': out[1]}
# plot a single subplot of Figure 1F
def plot_sub_fig1F(exc_ax, inh_ax, trial_avg_segments, exc_idx, inh_idx):
    # make a nan-padding between each dataset
    pad_size = 3
    nan_padding = np.full(pad_size, np.nan)
    # Concatenate and add nan padding in between
    r = {k: np.hstack([np.hstack((v[k][idx, :], nan_padding)) for v in trial_avg_segments.values()])
         for k, idx in (('exc_correct_hr', exc_idx),
                        ('inh_correct_hr', inh_idx),
                        ('exc_correct_lr', exc_idx),
                        ('inh_correct_lr', inh_idx))}
    tvec = np.hstack([np.hstack((v['timestamps'], nan_padding)) for v in trial_avg_segments.values()])
    # determine the indices of t = 0
    t_zeros = np.where(tvec == 0)[0]
    for ax, roi_key in ((exc_ax, ('exc_correct_lr', 'exc_correct_hr')), (inh_ax, ('inh_correct_lr', 'inh_correct_hr'))):
        ax.plot(r[roi_key[0]], 'k', alpha=0.6)
        ax.plot(r[roi_key[1]], 'g', alpha=0.8)
        # add vertical lines
        for t in t_zeros:
            ax.axvline(x=t, color='k', linestyle='--', linewidth=0.7)
        ax.set_facecolor('w')
        ax.set_xticklabels([])
        ax.set_title(roi_key[0].split('_')[0])
# Plot figure 1F
fig1F, axes = plt.subplots(2, 4, figsize=(16,6))
for a, e, i in zip(axes.T, [5, 11, 22, 25], [5, 6, 7, 22]):
    plot_sub_fig1F(a[0], a[1], trial_avg_segments, e, i)
nwb_io.close()
```
Reproduction of Figure 1H
From all sessions, obtain neuron inferred spike activity (average of the 3 frames before choice) for inhibitory and excitatory neurons.
Since each NWB 2.0 file represents data from a single recording session, we will iterate through all sessions and extract the mean spiking response in ~100 ms prior to the time of *first commit* (*choice* event) for all *inhibitory* and *excitatory* neurons
```
def extract_all_roi_timeseries():
    choice_event_key = 'dFoF_firstSideTryAl'
    pre_dur = -97
    post_dur = 0
    for f in os.listdir(data_dir):
        # Read NWB 2.0 file
        nwb_io = NWBHDF5IO(os.path.join(data_dir, f), mode='r')
        nwbfile = nwb_io.read()
        trial_seg_module = nwbfile.modules.get('Trial-based-Segmentation').data_interfaces
        # Obtain inh/exc status of the ROI
        roi_tcourse = trial_seg_module.get(choice_event_key).roi_response_series.get('Trial_00')
        good_roi_mask = roi_tcourse.rois.data  # good_roi_mask here refers to a 1D array of ROI indices, indexing into the ROI-table
        roi_table = roi_tcourse.rois.table
        neuron_type = roi_table.get('neuron_type').data[good_roi_mask]
        # get trial status
        tr_set = nwbfile.trials.to_dataframe()
        tr_filters = (tr_set.trial_is_good == True).values
        roi_ts, _ = get_trialsegmented_roi_timeseries(choice_event_key, pre_dur, post_dur, trial_seg_module)
        nwb_io.close()
        # average over timepoints
        yield {'inh_rois': np.mean(roi_ts[neuron_type == 'inhibitory', :, :][:, :, tr_filters], axis=1),
               'exc_rois': np.mean(roi_ts[neuron_type == 'excitatory', :, :][:, :, tr_filters], axis=1),
               'inh_count': roi_ts[neuron_type == 'inhibitory', :, :].shape[0],
               'exc_count': roi_ts[neuron_type == 'excitatory', :, :].shape[0]}
avg_inh_rois, avg_exc_rois, inh_counts, exc_counts = zip(*((r['inh_rois'],
r['exc_rois'],
r['inh_count'],
r['exc_count'])
for r in extract_all_roi_timeseries()))
# average over trial
trial_avg_exc_rois = np.hstack([np.nanmean(r, axis=1) for r in avg_exc_rois])
trial_avg_inh_rois = np.hstack([np.nanmean(r, axis=1) for r in avg_inh_rois])
# process and plot Figure 1H
spike_val_vec = np.logspace(-3, 0, 1000)
exc_roi_frac = np.sum(trial_avg_exc_rois[:, None] > spike_val_vec, axis=0) / len(trial_avg_exc_rois)
inh_roi_frac = np.sum(trial_avg_inh_rois[:, None] > spike_val_vec, axis=0) / len(trial_avg_inh_rois)
print(f'Excitatory neurons: {sum(exc_counts)} - Inhibitory neurons: {sum(inh_counts)}')
fig1H, ax = plt.subplots(1, 1, figsize=(3, 5))
ax.semilogx(spike_val_vec, exc_roi_frac, 'b', label='Excitatory')
ax.semilogx(spike_val_vec, inh_roi_frac, 'r', label='Inhibitory')
ax.legend()
ax.set_ylabel('Fraction Neurons')
ax.set_xlabel('Inferred spikes (a.u.)')
ax.set_xlim(1e-3, 1e-1)
ax.set_ylim(0, 0.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# exc and inh roi count for each session
df = pd.DataFrame([os.listdir(data_dir), exc_counts, inh_counts]).T
df.columns = ['session', 'exc_count', 'inh_count']
print(f'Average ROI count across sessions - Exc: {df.exc_count.mean()} - Inh: {df.inh_count.mean()}')
df
```
# Test Driven Development
## Overview:
- **Teaching:** 10 min
- **Exercises:** 0 min
**Questions**
- How do you make testing part of the code writing process?
**Objectives**
- Learn about the benefits and drawbacks of Test Driven Development.
- Write a test before writing the code.
Test-driven Development (TDD) takes the workflow of writing code and writing tests and turns it on its head. TDD is a software development process where you write the tests first. Before you write a single line of a function, you first write the test for that function.
After you write a test, you are then allowed to proceed to write the function that you are testing. However, you are only supposed to implement enough of the function so that the test passes. If the function does not do what is needed, you write another test and then go back and modify the function. You repeat this process of test-then-implement until the function is completely implemented for your current needs.
## Information: The Big Idea
This design philosophy was most strongly put forth by Kent Beck in his book *Test-Driven Development: By Example*.
The central claim to TDD is that at the end of the process you have an implementation that is well tested for your use case, and the process itself is more efficient. You stop when your tests pass and you do not need any more features. You do not spend any time implementing options and features on the off chance that they will prove helpful later. You get what you need when you need it, and no more. TDD is a very powerful idea, though it can be hard to follow religiously.
The most important takeaway from test-driven development is that the moment you start writing code, you should be considering how to test that code. The tests should be written and presented in tandem with the implementation. **Testing is too important to be an afterthought**.
## Information: You do you
Developers who practice strict TDD will tell you that it is the best thing since sliced arrays. However, do what works for you. The choice whether to pursue classic TDD is a personal decision.
The following example illustrates classic (if ludicrous) TDD for a standard deviation function, `std()`.
To start, we write a test for computing the standard deviation from a list of numbers as follows:
```python
from mod import std
def test_std1():
obs = std([0.0, 2.0])
exp = 1.0
assert obs == exp
```
Next, we write the minimal version of `std()` that will cause `test_std1()` to pass:
```python
def std(vals):
# surely this is cheating...
return 1.0
```
As you can see, the minimal version simply returns the expected result for the sole case that we are testing. If we only ever want to take the standard deviation of the numbers 0.0 and 2.0, or 1.0 and 3.0, and so on, then this implementation will work perfectly. If we want to branch out, then we probably need to write more robust code. However, before we can write more code, we first need to add another test or two:
```python
def test_std1():
    obs = std([0.0, 2.0])
    exp = 1.0
    assert obs == exp
def test_std2():
    # Test the fiducial case when we pass in an empty list.
    obs = std([])
    exp = 0.0
    assert obs == exp
def test_std3():
    # Test a real case where the answer is not one.
    obs = std([0.0, 4.0])
    exp = 2.0
    assert obs == exp
```
A simple function implementation that would make these tests pass could be as follows:
```python
def std(vals):
# a little better
if len(vals) == 0: # Special case the empty list.
return 0.0
return vals[-1] / 2.0 # By being clever, we can get away without doing real work.
```
Are we done? No. Of course not. Even though the tests all pass, this is clearly still not a generic standard deviation function. To create a better implementation, TDD states that we again need to expand the test suite:
```python
def test_std1():
    obs = std([0.0, 2.0])
    exp = 1.0
    assert obs == exp
def test_std2():
    obs = std([])
    exp = 0.0
    assert obs == exp
def test_std3():
    obs = std([0.0, 4.0])
    exp = 2.0
    assert obs == exp
def test_std4():
    # The first value is not zero.
    obs = std([1.0, 3.0])
    exp = 1.0
    assert obs == exp
def test_std5():
    # Here, we have more than two values, but all of the values are the same.
    obs = std([1.0, 1.0, 1.0])
    exp = 0.0
    assert obs == exp
```
At this point, we may as well try to implement a generic standard deviation function. Recall:
$$
\sigma = \sqrt{ \frac{\sum_{\mathrm{i}} (x_{\mathrm{i}} - \bar{x})^2 }{N}},
$$
where $\sigma$ is the standard deviation, $x_{\mathrm{i}}$ are the values in the sample data, $\bar{x}$ is the mean of the values and $N$ is the number of values.
We would spend more time trying to come up with clever approximations to the standard deviation than we would spend actually coding it.
1. Copy the five tests above into a file called `test_std.py`
2. Open `mod.py`
3. Add an implementation that actually calculates a standard deviation.
4. Run the tests above. Did they pass?
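If you get stuck on step 3, one possible implementation (a sketch computing the population standard deviation from the formula above, with the empty list handled as in `test_std2`) is:

```python
import math

def std(vals):
    # Fiducial case: define the standard deviation of an empty list as 0.0.
    if len(vals) == 0:
        return 0.0
    mean = sum(vals) / len(vals)
    # Population standard deviation: sqrt of the mean squared deviation.
    return math.sqrt(sum((x - mean) ** 2 for x in vals) / len(vals))
```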
It is important to note that we could improve this function by writing further tests. For example, this `std()` ignores the situation where infinity is an element of the values list. There is always more that can be tested. TDD prevents you from going overboard by telling you to stop testing when you have achieved all of your use cases.
## Key Points:
- Test driven development is a common software development technique
- By writing the tests first, the function requirements are very explicit
- TDD is not for everyone
- TDD requires vigilance for success
<a href="https://colab.research.google.com/github/sammyon7/Analys_Your_Data/blob/main/My_GAN_Learning_1D_Gaussian.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
from scipy.stats import norm
import matplotlib.pyplot as plt
```
Function to generate "noise" samples used as input for the generator: evenly spaced points in [-1, 1] with a small random jitter added
```
def sample_noise(M):
z = np.float32(np.linspace(-1.0, 1.0, M) + np.random.random(M) * 0.01)
return z
sample_noise(10)
```
<b>PLOT METRICS</b>
```
def plot_fig(generate, discriminate):
xs = np.linspace(-5, 5, 1000)
plt.plot(xs, norm.pdf(xs, loc=mu, scale=sigma), label='p_data')
r = 100
xs = np.float32(np.linspace(-3, 3, r))
xs_tensor = Variable(torch.from_numpy(xs.reshape(r, 1)))
ds_tensor = discriminate(xs_tensor)
ds = ds_tensor.data.numpy()
plt.plot(xs, ds, label='decision boundary')
n=1000
zs = sample_noise(n)
plt.hist(zs, bins=20, density=True, label='noise')
zs_tensor = Variable(torch.from_numpy(np.float32(zs.reshape(n, 1))))
gs_tensor = generate(zs_tensor)
gs = gs_tensor.data.numpy()
plt.hist(gs, bins=20, density=True, label='generated')
plt.plot(xs, norm.pdf(xs, loc=np.mean(gs), scale=np.std(gs)), label='generated_dist')
plt.legend()
plt.xlim(-3,3)
plt.ylim(0,5)
plt.show()
```
<b>GENERATOR</b>
```
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.l1 = nn.Linear(1, 10)
self.l1_relu = nn.ReLU()
self.l2 = nn.Linear(10, 10)
self.l2_relu = nn.ReLU()
self.l3 = nn.Linear(10, 1)
def forward(self, input):
output = self.l1(input)
output = self.l1_relu(output)
output = self.l2(output)
output = self.l2_relu(output)
output = self.l3(output)
return output
```
DISCRIMINATOR
```
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.l1 = nn.Linear(1, 10)
self.l1_tanh = nn.Tanh()
self.l2 = nn.Linear(10, 10)
self.l2_tanh = nn.Tanh()
self.l3 = nn.Linear(10, 1)
self.l3_sigmoid = nn.Sigmoid()
def forward(self, input):
output = self.l1_tanh(self.l1(input))
output = self.l2_tanh(self.l2(output))
output = self.l3_sigmoid(self.l3(output))
return output
def generator_criterion(d_output_g):
return -0.5 * torch.mean(torch.log(d_output_g))
def discriminator_criterion(d_output_true, d_output_g):
return -0.5 * torch.mean(torch.log(d_output_true) + torch.log(1-d_output_g))
mu = 2
sigma = 0.2
M = 200
discriminate = Discriminator()
generate = Generator()
plot_fig(generate, discriminate)
epochs = 500
histd, histg = np.zeros(epochs), np.zeros(epochs)
k = 20
```
TRAIN
```
discriminate_optimizer = torch.optim.SGD(discriminate.parameters(), lr=0.1, momentum=0.6)
generate_optimizer = torch.optim.SGD(generate.parameters(), lr=0.01, momentum=0.6)
for i in range(epochs):
for j in range(k):
discriminate.zero_grad()
x = np.float32(np.random.normal(mu, sigma, M))
z = sample_noise(M)
z_tensor = Variable(torch.from_numpy(np.float32(z.reshape(M, 1))))
x_tensor = Variable(torch.from_numpy(np.float32(x.reshape(M, 1))))
g_out = generate(z_tensor)
d_out_true = discriminate(x_tensor)
d_out_g = discriminate(g_out)
loss = discriminator_criterion(d_out_true, d_out_g)
loss.backward()
discriminate_optimizer.step()
histd[i] = loss.data.numpy()
generate.zero_grad()
z = sample_noise(M)
z_tensor = Variable(torch.from_numpy(np.float32(z.reshape(M, 1))))
g_out = generate(z_tensor)
d_out_g = discriminate(g_out)
loss = generator_criterion(d_out_g)
loss.backward()
generate_optimizer.step()
histg[i] = loss.data.numpy()
if i % 10 == 0:
for param_group in generate_optimizer.param_groups:
param_group['lr'] *= 0.999
for param_group in discriminate_optimizer.param_groups:
param_group['lr'] *= 0.999
if i % 50 == 0:
plt.clf()
plot_fig(generate, discriminate)
plt.draw()
#LOSS CONVERGE
plt.plot(range(epochs), histd, label='Discriminator')
plt.plot(range(epochs), histg, label='Generator')
plt.legend()
plt.show()
plot_fig(generate, discriminate)
plt.show()
```
Coded by Yehezkiel Tato
```
# modify these for your own computer
repo_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
data_dir = '/Users/iaincarmichael/Documents/courtlistener/data/'
import os
import sys
import time
from math import *
import copy
import pickle
# data
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# graph
import igraph as ig
# our code
sys.path.append(repo_directory + 'code/')
from pipeline.download_data import download_bulk_resource, download_master_edgelist
sys.path.append(repo_directory + 'explore/vertex_metrics_experiment/code/')
from make_case_text_files import *
from bag_of_words import *
from similarity_matrix import *
from make_snapshots import *
from make_graph import *
from data_dir_setup import *
# court
court = 'scotus'
network_name = 'scotus'
# directory set up
raw_dir = data_dir + 'raw/'
experiment_data_dir = data_dir + network_name + '/'
text_dir = experiment_data_dir + 'textfiles/'
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
# set up the data directory
```
setup_data_dir(data_dir)
```
# data download
## get opinion and cluster files from CourtListener
opinions/cluster files are saved in data_dir/raw/court/
```
%time download_op_and_cl_files(data_dir, network_name)
```
## get the master edgelist from CL
master edgelist is saved in data_dir/raw/
```
%time download_master_edgelist(data_dir)
```
## download scdb data from SCDB
scdb data is saved in data_dir/scdb
```
%time download_scdb(data_dir)
```
# clean data
## make the case metadata and edgelist
- add the raw case metadata data frame to the raw/ folder
- remove cases missing scdb ids
- remove detroit lumber case
- get edgelist of cases within desired subnetwork
- save case metadata and edgelist to the experiment_dir/
```
# create the raw case metadata data frame in the raw/ folder
%time make_subnetwork_raw_case_metadata(data_dir, network_name)
# create clean case metadata and edgelist from raw data
%time clean_metadata_and_edgelist(data_dir, network_name)
```
## make graph
creates the network with the desired case metadata and saves it as a .graphml file in experiment_dir/
```
%time make_graph(experiment_data_dir, network_name)
```
## make case text files
grabs the opinion text for each case in the network and saves them as a text file in experiment_dir/textfiles/
```
# make the textfiles for the given court
%time make_network_textfiles(data_dir, network_name)
```
## make tf-idf matrix
creates the tf-idf matrix for the corpus of cases in the network and saves them to experiment_data_dir + 'nlp/'
```
%time make_tf_idf(text_dir, experiment_data_dir + 'nlp/', min_df=0, max_df=1)
```
# data for vertex metrics experiment
## make snapshots
```
# load the graph
G = ig.Graph.Read_GraphML(experiment_data_dir + 'scotus_network.graphml')
G.summary()
vertex_metrics = ['indegree', 'outdegree', 'degree',
'd_pagerank', 'authorities', 'hubs']
# add recent citations
vertex_metrics += ['recentcite_' + str(t) for t in 5 * np.arange(1, 6+1)]
active_years = range(1900, 2015 + 1)
%time make_snapshot_vertex_metrics(G, active_years, vertex_metrics, experiment_data_dir)
```
For convenience, we can increase the display width of the Notebook to make better use of widescreen format
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
```
Next, we will import all the libraries that we need.
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
from pathlib import Path
from scipy.io import wavfile
import python_speech_features
from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from livelossplot import PlotLossesKeras
import sounddevice as sd
%matplotlib inline
import matplotlib.pyplot as plt
from datetime import datetime
from scipy.signal import butter, sosfilt
from timeit import default_timer as timer
from IPython.display import clear_output
```
The Google Speech Command Dataset which we'll be using contains 30 different words, 20 core words and 10 auxiliary words. In this project we'll be using only the 20 core words.
We can define ourselves a dictionary that maps each different word to a number and a list that does the inverse mapping. This is necessary because we need numerical class labels for our Neural Network.
```
word2index = {
# core words
"yes": 0,
"no": 1,
"up": 2,
"down": 3,
"left": 4,
"right": 5,
"on": 6,
"off": 7,
"stop": 8,
"go": 9,
"zero": 10,
"one": 11,
"two": 12,
"three": 13,
"four": 14,
"five": 15,
"six": 16,
"seven": 17,
"eight": 18,
"nine": 19,
}
index2word = [word for word in word2index]
```
Next, we will go through the dataset and save all the paths to the data samples in a list.
You can download the Google Speech Commands dataset [here](http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz) (1.4 GB!)
and decompress it (e.g. with 7zip). Define the path where you stored the decompressed dataset as *speech_commands_dataset_basepath*.
The dataset doesn't contain an equal number of samples for each word, but it contains >2000 valid samples for each of the 20 core words.
Each sample is supposed to be exactly 1 s long, which at a sampling rate of 16 kHz and 16-bit quantization results in files of 32,044 bytes (32,000 bytes of audio data plus the 44-byte WAV header).
Some samples in the dataset are not exactly that size; we skip those.
Additionally we'll use a nice [tqdm](https://tqdm.github.io/) progress bar to make it more fancy. In the end, we should have gathered a total of 40000 samples.
```
num_classes = len(word2index)
num_samples_per_class = 2000
speech_commands_dataset_basepath = Path(r"C:\Users\tunin\Desktop\Marcel_Python\speech_command_dataset")
print("loading dataset...")
samples = []
classes = []
with tqdm(total=num_samples_per_class*num_classes) as pbar:
for word_class in word2index:
folder = speech_commands_dataset_basepath / word_class # sub-folder for each word
count = 0
for file in folder.iterdir(): # iterate over all files in the folder
# somehow, there are samples which aren't exactly 1 s long in the dataset. ignore those
if file.stat().st_size == 32044:
samples.append(file) # store path of sample file
classes.append(word2index[word_class]) # append word class index to list
count +=1
pbar.update()
if count >= num_samples_per_class:
break
classes = np.array(classes, dtype=int)
```
Next we'll define two functions to compute some [features](https://en.wikipedia.org/wiki/Feature_(machine_learning)) of the audio samples and another function which loads the sample wav-file and then computes the features. Before computing features, it is a good idea to normalize our input data. Depending on your recording device and other parameters, the amplitudes of the recorded audio signals may vary drastically and might even have an offset. Thus we can subtract the mean value to remove the offset and divide by the absolute maximum value of the signal, so that its new range lies between -1.0 and +1.0.
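The normalization step described above can be illustrated in isolation (a sketch with made-up sample values):

```python
import numpy as np

# Hypothetical raw audio with a DC offset and a large amplitude.
audio = np.array([100.0, 300.0, 500.0, 300.0, 100.0])

audio -= audio.mean()                         # remove the offset
audio /= np.max((audio.max(), -audio.min()))  # scale into [-1.0, +1.0]
```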
In this case, we'll use the so called "Mel-Frequency Cepstrum Coefficients", which are very commonly used for speech recognition tasks. The MFCCs are computed as follows:
* Apply a simple pre-emphasis filter to the signal, to emphasize higher frequencies (optional):
$y_t \leftarrow y_t - \alpha \cdot y_{t-1} \hspace{1mm} , \hspace{2mm} t=1...T$
* Extract snippets from the audio signal. A good choice for the length of the snippets is 25ms. The stride between snippets is 10ms, so the snippets will overlap. To "cut" the snippets from the audio signal, a window like the Hamming window is appropriate to mitigate the leakage effect when performing the Fourier Transform in the next step.
* Calculate the FFT of the signal and then the power spectrum, i.e. the squared magnitude of the spectrum for each snippet.
* Apply a Mel filterbank to the power spectrum of each snippet. The [Mel scale](https://en.wikipedia.org/wiki/Mel_scale) is a scale, that takes into account the fact, that in human auditory perception, the perceivable pitch (i.e. frequency) changes decrease with higher frequencies. This means e.g. that we can distinguish much better between a 200Hz and 300Hz tone than between a 10200 Hz and a 10300 Hz tone even though the absolute difference is the same.
The filterbank consists of $N=40$ triangular filters evenly spaced in the Mel scale (and nonlinearly spaced in frequency scale). These filters are multiplied with the power spectrum, which gives us the sum of "energies" in each filter. Additionally, we take the log() of these energies.
* The log-energies of adjacent filters usually correlate strongly. Therefore, a [Discrete Cosine Transform](https://en.wikipedia.org/wiki/Discrete_cosine_transform) is applied to the log filterbank energies of each snippet. The resulting values are called *Cepstral coefficients*. The zeroth coefficient represents the average log-energy in each snippet, it may or may not be discarded (here we'll keep it as a feature). Usually, only a subset, e.g. the first 8-12 Cepstral coefficients are used (here we'll use 20), the rest are discarded
For more details about MFCC, a good source is:
*pp. 85-92 in K.S. Rao and Manjunath K.E., "Speech Recognition Using Articulatory and Excitation Source Features", 2017, Springer*
(pp. 85-92 available for preview at [https://link.springer.com/content/pdf/bbm%3A978-3-319-49220-9%2F1.pdf])
Thankfully we don't have to implement the MFCC computation ourselves, we'll use the library [python_speech_features](https://python-speech-features.readthedocs.io/en/latest/).
```
# compute MFCC features from audio signal
def audio2feature(audio):
    audio = audio.astype(float)
# normalize data
audio -= audio.mean()
audio /= np.max((audio.max(), -audio.min()))
# compute MFCC coefficients
features = python_speech_features.mfcc(audio, samplerate=16000, winlen=0.025, winstep=0.01, numcep=20, nfilt=40, nfft=512, lowfreq=100, highfreq=None, preemph=0.97, ceplifter=22, appendEnergy=True, winfunc=np.hamming)
return features
# load .wav-file, add some noise and compute MFCC features
def wav2feature(filepath):
samplerate, data = wavfile.read(filepath)
    data = data.astype(float)
# normalize data
data -= data.mean()
data /= np.max((data.max(), -data.min()))
# add gaussian noise
data += np.random.normal(loc=0.0, scale=0.025, size=data.shape)
# compute MFCC coefficients
features = python_speech_features.mfcc(data, samplerate=16000, winlen=0.025, winstep=0.01, numcep=20, nfilt=40, nfft=512, lowfreq=100, highfreq=None, preemph=0.97, ceplifter=22, appendEnergy=True, winfunc=np.hamming)
return features
```
If we compute the features for one audio sample, we see that the feature shape is (99, 20). The first index is that of the 10ms long snippet of the 1s long audio signal, so we have 1s/10ms-1=99 snippets. The second dimension is the number of MFC coefficients, in this case we have 20.
Now we can load all audio samples and pre-compute the MFCC features for each sample. Note that this will take quite a long time!
```
feature_shape = wav2feature(samples[0]).shape
features = np.empty((num_classes * num_samples_per_class,) + feature_shape, dtype=float)
print("features.shape", features.shape)
print("pre-computing features from audio files...")
with tqdm(total=num_samples_per_class*num_classes) as pbar:
for k, sample in enumerate(samples):
features[k] = wav2feature(sample)
pbar.update()
```
Now we can save the pre-computed training dataset containing the features of the training samples and their class labels.
This way, we won't have to re-compute the features next time.
```
# save computed features and classes to hard drive
np.save("mfcc_plus_energy_features_40000x99x20", features)
np.save("classes", np.array(classes, dtype=int))
```
We can load the pre-computed features and class labels as follows:
```
# load pre-computed training features dataset and training class labels
features = np.load("mfcc_plus_energy_features_40000x99x20.npy")
classes = np.load("classes.npy")
```
Now the next thing to do is divide our dataset into a training dataset and a validation dataset.
The training dataset is used for training our Neural Network, i.e. the Neural Network will learn to correctly predict a sample's class label based on its features.
One problem that can occur in Machine Learning is so called *Overfitting*. Our basic goal is to train our Neural Network so that it does not only classify the training samples correctly, but also new samples, which it has never "seen" before. This is called *Generalization*. But with complex networks it can happen that instead of really learning to classify samples based on their features, the network simply "learns by heart" to which class each training sample belongs. This is called Overfitting. In this case, the network will perform great on the training data, but poorly on new previously unseen data.
One method to mitigate this is the use of a separate validation dataset: we split the whole dataset and use a small subset (here one third of the data) for validation; the rest is our training set.
Now during training, only the training dataset will be used to calculate the weights of the Neural Network (which is the "learning" part). After each epoch (i.e. once all training samples have been considered once), we will tell the network to try and predict the class labels of all samples in our validation dataset and, based on that, calculate the accuracy on the validation set.
So during training, after each training epoch, we can look at the accuracy of the network on the training set and on the validation set.
At the beginning of the training, both accuracies will typically improve. At one point we might see that the validation accuracy plateaus or even decreases, while the training accuracy still improves. This indicates that the network is starting to overfit, so it is a good time to stop the training.
Another method to mitigate overfitting is the use of so called [Dropout-Layers](https://en.wikipedia.org/wiki/Dilution_(neural_networks)), which randomly set a subset of a layer's outputs to zero during training. In this project, we won't use them.
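Although dropout is not used in this project, the underlying operation is simple: during training, each unit's output is zeroed with probability p and the survivors are rescaled by 1/(1-p) ("inverted dropout"), so the expected activation is unchanged. A minimal NumPy sketch of the idea (not the Keras layer itself):

```python
import numpy as np

def dropout(activations, p, rng):
    # Inverted dropout: zero each unit with probability p,
    # rescale survivors by 1/(1-p) so the expected value stays the same.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, p=0.5, rng=rng)
# roughly half the entries are now 0.0, the rest are 2.0, and y.mean() stays close to 1
```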
```
train_data, validation_data, train_classes, validation_classes = train_test_split(features, classes,
test_size=0.30, random_state=42, shuffle=True)
```
The next step is to define our Neural Network's architecture. The network can be described by a sequence of layers.
For this task we will implement a [Convolutional Neural Network (CNN)](https://en.wikipedia.org/wiki/Convolutional_neural_network). The two main characteristics of CNNs are convolutional layers and pooling layers.
Convolutional layers convolve a filter vector (1D) or matrix (2D) with the input data. The main advantage of convolutional layers (and thus of CNNs) is that they can achieve a high degree of shift-/translation-invariance. Picture the following example: we have two 2s long recordings of the same spoken word, but in one recording the word is right at the beginning of the recording, and in the other one at the end. Now a conventional Neural Network might have a hard time learning to recognize the words, because it expects certain features at certain positions in time. Another example might be an image classifier that recognizes objects well if they're all in the center of the image and in the same orientation, but fails if the objects are in a corner of the image or rotated. So we want the network to be invariant to translations and rotations of the features, i.e. recognize features regardless of their position in time (e.g. in case of audio) or space (e.g. in case of an image).
A convolutional layer needs 3 parameters:
* filter size $F$: width (1D) or height and width (2D) of the filter vector/matrix. Determines the number of weights of the layer.
* stride $S$: determines the step size with which we move the filter across the signal or image
* padding $P$: pad the input data with zeros
The size of the convolutional layer's output is:
# $W_{out}=\frac{W_{in}-F+2P}{S}+1$
In Keras, the default stride is one and the default padding is zero.
A pooling layer is somewhat similar in that it also convolves a "filter"-vector/matrix across the input with a certain stride and possibly padding. But instead of multiplying the input values with the filter values, the pooling layer computes either the average or maximum value of the values. Max-pooling layers are commonly used, average pooling rarely. So e.g. a 3x3 max-pooling layer will slide a 3x3-filter over the input and deliver the maximum of the 3*3=9 values as output. In contrary to the convolutional layer, a pooling layer introduces no additional weights. The output size of a pooling layer can be calculated with the same formula as for the convolutional layer.
In Keras, the default stride is equal to the filter size and the default padding is zero. In this case the formula simplifies to:
# $W_{out}=\frac{W_{in}-F+2P}{S}+1=\frac{W_{in}-F}{F}+1=\frac{W_{in}}{F}$
A max-pooling layer achieves a down-sampling of the feature vector/matrix and also a translation/shift-invariance of the features. It is common to use a pooling layer after a convolution layer.
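As a quick sanity check, these size formulas can be evaluated numerically; the kernel and pool sizes below are taken from the CNN defined later in this notebook (a sketch):

```python
# Output width of a convolution: W_out = (W_in - F + 2P) / S + 1
def conv_out(w_in, f, s=1, p=0):
    return (w_in - f + 2 * p) // s + 1

# Keras-default pooling (stride = pool size, no padding): W_out = W_in // F
def pool_out(w_in, f):
    return w_in // f

w = 99                # input: 99 timeframes
w = conv_out(w, 8)    # Conv1D, kernel_size=8     -> 92
w = pool_out(w, 3)    # MaxPooling1D, pool_size=3 -> 30
w = conv_out(w, 8)    # Conv1D, kernel_size=8     -> 23
w = pool_out(w, 3)    # MaxPooling1D, pool_size=3 -> 7
w = conv_out(w, 5)    # Conv1D, kernel_size=5     -> 3
print(w)  # 3 timeframes remain before global max pooling collapses the time axis
```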
We'll be creating our CNN model using keras' [Sequential](https://keras.io/guides/sequential_model/) model.
At first, we add an input layer whose input size matches the dimensions of our MFCC features. In this case we have 20 MFC coefficients and 99 timeframes, thus the feature matrix for an audio sample is of size (99, 20). In Keras, the input shapes are by default as follows:
* (batch, axis, channel): one-dimensional data with $\geq$1 channels. This most commonly represents a timeseries, i.e. a number of features that change over time.
In our case, the (99, 20) feature matrix is interpreted as a time series (i.e. the axis represents the time(-frame)) with 20 channels (the MFC coefficients). We can perform a 1D-convolution on such data.
* (batch, axis0, axis1, channel): two-dimensional data with $\geq$1 channels. This is most often a color image, where axis0 and axis1 are the horizontal and vertical position of an image pixel and each pixel has 3 channels for the red, green and blue color values. We can perform a 2D-convolution on such data.
* (batch, axis0, axis1, ..., axisN-1, channel): n-dimensional data with $\geq$1 channels.
Now you may wonder about the batch dimension. This is another dimension that specifies the number of samples, because during training we often load batches of more than one sample in one iteration to average the gradients during optimization. In Keras, during model specification the batch dimension is ignored, so we won't have to specify it explicitly. But as you can see, our *features* variable, which contains all training samples has the shape (40000, 99, 20), so its first axis is the batch dimension. This way when we'll later pass the training data to the *fit()*-function, it can fetch a batch, i.e. a subset of the dataset for each training iteration.
Next, we add a 1-D convolutional layer ([Conv1D](https://keras.io/api/layers/convolution_layers/convolution1d/)). This layer performs a one dimensional convolution along the (in our case) time (or more precisely timeframe) axis. The first argument is the number of filters to apply. Most often we use many filters, thus performing the convolution multiple times with different filter kernels. This way, the number of channels of the convolutional layer's output is the number of filters used. The second argument is the kernel size, i.e. the size of our convolution filter. Finally, we specify the activation function, in this case the ReLU function.
After the first convolutional layer, we add a max pooling layer with a size of 3 that reduces the data size along the time axis. Note that in case the division is fractional, the resulting size will be the floor value.
Such a combination of convolutional layer and pooling layer is very common in CNNs. The main idea is to repeatedly stack convolution and pooling layers, so that the dimension (in time or space) of the input data is subsequently reduced, while the feature space dimensionality (i.e. number of channels) increases.
Next, we add two more stacks of convolutional and max pooling layers. For the last pooling layer, we use a global pooling layer, which behaves just like a normal pooling layer, but with a filter that spans the whole axis. After the global max pooling operation, our data is one-dimensional, with the time axis completely removed and only the feature dimension remaining.
In the next step, we add a couple of fully connected (Keras calls them "dense") layers, just like in a regular Multi-Layer Perceptron (MLP). Each layer reduces the feature dimensionality, so that the last layer has an output dimension equal to the number of classes (in our case words). Using the softmax activation function on the last dense layer, we can interpret the network's output as an a posteriori probability distribution of the sample belonging to a certain class, given the audio sample's input features.
```
keras.backend.clear_session() # clear previous model (if cell is executed more than once)
### CNN MODEL DEFINITION ###
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(99, 20)))
model.add(keras.layers.Conv1D(64, kernel_size=8, activation="relu"))
model.add(keras.layers.MaxPooling1D(pool_size=3))
model.add(keras.layers.Conv1D(128, kernel_size=8, activation="relu"))
model.add(keras.layers.MaxPooling1D(pool_size=3))
model.add(keras.layers.Conv1D(256, kernel_size=5, activation="relu"))
model.add(keras.layers.GlobalMaxPooling1D())
model.add(keras.layers.Dense(128, activation="relu"))
model.add(keras.layers.Dense(64, activation="relu"))
model.add(keras.layers.Dense(num_classes, activation='softmax'))
# print model architecture
model.summary()
```
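With "valid" padding and the default strides (1 for *Conv1D*, *pool_size* for *MaxPooling1D*), the lengths along the time axis can be checked by hand; note the floor division in the pooling steps:

```python
def conv1d_out(length, kernel_size, stride=1):
    # "valid" convolution: no padding
    return (length - kernel_size) // stride + 1

def maxpool1d_out(length, pool_size):
    # fractional sizes are rounded down (floor)
    return length // pool_size

L = 99                    # input: 99 timeframes
L = conv1d_out(L, 8)      # Conv1D(64, kernel_size=8)  -> 92
L = maxpool1d_out(L, 3)   # MaxPooling1D(3)            -> 30
L = conv1d_out(L, 8)      # Conv1D(128, kernel_size=8) -> 23
L = maxpool1d_out(L, 3)   # MaxPooling1D(3)            -> 7
L = conv1d_out(L, 5)      # Conv1D(256, kernel_size=5) -> 3
print(L)                  # GlobalMaxPooling1D then removes this axis entirely
```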
Now that our CNN model is defined, we can configure it for training. To do so, we choose an optimization algorithm, e.g. Stochastic Gradient Descent (SGD) or Adam. Additionally, we need to specify a loss function for training, which determines how the network's performance is evaluated. We have a multi-class classification problem where the class labels are represented as integer values, so the sparse categorical cross-entropy loss can be used. If our class labels were encoded using a one-hot encoding scheme, we would use the normal (non-sparse) variant instead. As a metric we specify the accuracy, so that after every epoch the accuracy of the network is computed.
```
sgd = keras.optimizers.SGD()
loss_fn = keras.losses.SparseCategoricalCrossentropy() # use Sparse because classes are represented as integers not as one-hot encoding
model.compile(optimizer=sgd, loss=loss_fn, metrics=["accuracy"])
```
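The difference between the two label encodings can be sketched with a hypothetical 4-class example; the sparse form is what our integer labels look like:

```python
import numpy as np

sparse_labels = np.array([0, 3, 1])        # integer class indices -> sparse loss
one_hot_labels = np.eye(4)[sparse_labels]  # indicator vectors -> non-sparse loss
print(one_hot_labels)

# both encodings describe the same classes:
print(np.argmax(one_hot_labels, axis=1))   # recovers [0 3 1]
```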
Before starting the training, it can be useful to define an early stopping criterion in order to avoid overfitting, as explained previously. We define a callback function which checks if the accuracy on the validation set has increased within the last 5 epochs and stops training if this is not the case. After stopping, the model is reverted to the state (i.e. the weights) which achieved the best result.
We'll also use the [livelossplot](https://github.com/stared/livelossplot) library, which provides functions to plot a live graph of the accuracy and loss metrics during training. We pass the plot function as a callback too.
Finally, we can start training using *model.fit()*. We specify the training and validation dataset, the maximum number of epochs to train, the callbacks and the batch size. In this case, a batch size of 32 is used, so that in every iteration a batch of 32 samples is used to compute the gradient. Especially when using SGD, the batch size influences the training. In each iteration, the average of the gradients over the batch is computed and used to update the weights. A smaller batch leads to a "noisy" gradient, which can be good to explore the weight space further (and not get stuck very early in a local minimum), but affects convergence towards the end of training negatively. A larger batch leads to less noisy gradients, so that larger steps can be made (i.e. a higher learning rate), which leads to faster training. Additionally, larger batches tend to reduce computation overhead. A batch size of one would be "pure" stochastic gradient descent, while a batch equal to the whole training set would be considered standard (i.e. non-stochastic) gradient descent. With a batch size in between (often called a "mini batch"), a good compromise can be found.
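The averaging of per-sample gradients over a mini batch can be illustrated with a toy problem (this is only a sketch of the principle, not the Keras internals): fitting a scalar linear model y = w·x with MSE loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
y = 3.0 * x                            # ground-truth weight: 3
w = 0.0
for start in range(0, x.size, 32):     # mini batches of 32 samples
    xb, yb = x[start:start+32], y[start:start+32]
    grad = (2 * (w * xb - yb) * xb).mean()  # mean of per-sample MSE gradients
    w -= 0.1 * grad                         # one SGD step per batch
print(w)                               # close to 3 after one epoch
```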
Sidenote: It seems that matplotlib's *notebook* mode (which is for use in Jupyter Notebooks) doesn't work well with the live plotting, so we use *inline* mode.
```
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=5, restore_best_weights=True)
plt.close()
history = model.fit(train_data,
                    train_classes,
                    batch_size=32,
                    epochs=100,
                    validation_data=(validation_data, validation_classes),
                    callbacks=[PlotLossesKeras(), early_stopping])
```
As we can see, during the training, the losses decrease and the accuracy increases.
After training, we can save our model for later use if we want to.
```
# save model
model.save(datetime.now().strftime("%d_%m_%Y__%H_%M")+".h5")
# load model
model = keras.models.load_model("05_08_2020__19_23.h5")
```
Another useful tool for evaluating a classifier's performance is a so called confusion matrix.
To compute the confusion matrix, we use our network to predict the class labels of all samples in the validation set.
The confusion matrix shows the probability with which a sample of a given true class is classified as each possible predicted class. Thus, the values on the matrix diagonal represent correct classifications and those off the diagonal represent misclassifications. Note that the matrix is in general not symmetric: mistaking one word for a second need not be as likely as the reverse.
The interesting thing is that the confusion matrix allows us to see if a certain pair of class labels is often falsely classified, i.e. confused with each other. If two classes were often confused (e.g. because two words sound very similar), we would find a high value outside the diagonal. For example, if we look closely at the matrix below, we can see a slightly larger value (darker color) at "go"-"no". This means that these two words are more often confused with each other, which is plausible since they sound very similar. The ideal result would be a value of $\frac1N$ ($N$ = number of classes) on the diagonal (assuming classes are equally represented in the dataset) and zeros everywhere else.
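The ideal case can be verified with a small sketch (hypothetical balanced 4-class data and a perfect classifier, so each diagonal entry equals $\frac1N$ under the same "all" normalization used below):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])   # balanced: 2 samples per class
y_pred = y_true.copy()                        # perfect classifier
N = 4
cm = np.zeros((N, N))
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
cm /= cm.sum()              # normalize="all": divide by the total sample count
print(np.diag(cm))          # each diagonal entry is 1/N = 0.25
print(cm.sum() - np.trace(cm))   # everything off the diagonal is zero
```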
```
# plot confusion matrix
y = np.argmax(model.predict(validation_data), axis=1)
cm = confusion_matrix(validation_classes, y, normalize="all")
%matplotlib inline
plt.close()
plt.figure(figsize = (8,8))
plt.imshow(cm, cmap=plt.cm.Blues)
plt.xlabel("Predicted labels")
plt.ylabel("True labels")
plt.xticks(np.arange(0, 20, 1), index2word, rotation=90)
plt.yticks(np.arange(0, 20, 1), index2word)
plt.tick_params(labelsize=12)
plt.title('Confusion matrix ')
plt.colorbar()
plt.show()
```
Ok, now we can try the keyword recognizer ourselves!
To easily record and play audio, we'll use the library [sounddevice](https://python-sounddevice.readthedocs.io/en/0.4.0/index.html). One thing to consider is that we created our CNN model so that it accepts an input feature vector corresponding to an audio snippet of exactly 1s length at a 16kHz sampling rate, i.e. 16000 samples. So we could record for exactly 1s, but this is not very practical, as you would have to say the word at just the right time after starting the recording so that it lies within the 1s time window.
A more elegant solution is to record for a longer duration, e.g. 3s, and then extract a 1s long snippet which we can then feed to our CNN. For this simple case we'll assume that the user says only one word during the recording, so we extract the 1s long snippet of the recording which contains the maximum signal energy. This sounds complicated, but can be quite easily computed using a [convolution](https://en.wikipedia.org/wiki/Convolution). First, we compute the power signal by element-wise squaring the audio signal. Then we create a 1s (i.e. 16000 points) long rectangular window and convolve the power signal with the window. We use ["valid" mode](https://numpy.org/doc/stable/reference/generated/numpy.convolve.html), which means that only points where the signals overlap completely are computed (i.e. no zero-padding). This way, by computing the time at which the convolution is maximal, we get the starting time of the rectangular window which leads to maximal signal energy in the extracted snippet. We can then extract a 1s long snippet from the recording.
After defining a function to extract the 1s snippet, we configure the samplerate and device for recording. You can find out the number of the devices via *sd.query_devices()*. After recording for 3s and extracting the 1s snippet, we can play it back. Then we compute the MFCC features and add a "fake" batch dimension to our sample before feeding it into our CNN model for prediction. This is needed because the model expects batches of $\geq1$ samples as input; since we have only one sample, we prepend a dimension to get a batch of one single sample. Additionally, we'll time the feature computation and model prediction to see how fast they are. We can normalize the CNN model's output to get a probability distribution (not strictly mathematical, but we can interpret it that way). Then we get the 3 candidates with the highest probability and print the result. We'll also plot the raw audio signal and visualize the MFC coefficients.
```
def extract_loudest_section(audio, length):
    audio = audio[:, 0].astype(float)  # avoid integer overflow when squaring (np.float was removed in NumPy 1.24)
    audio_pw = audio**2  # power
    window = np.ones((length, ))
    conv = np.convolve(audio_pw, window, mode="valid")
    begin_index = conv.argmax()
    return audio[begin_index:begin_index+length]
sd.default.samplerate = 16000
sd.default.channels = 1, 2 # mono record, stereo playback
recording = sd.rec(int(3*sd.default.samplerate), channels=1, samplerate=sd.default.samplerate, dtype="float32", blocking=True)  # sounddevice expects a supported dtype such as "float32"; np.float is deprecated
recording = extract_loudest_section(recording, int(1*sd.default.samplerate)) # extract 1s snippet with highest energy (only necessary if recording is >3s long)
sd.play(recording, blocking=True)
t1 = timer()
recorded_feature = audio2feature(recording)
t2 = timer()
recorded_feature = np.expand_dims(recorded_feature, 0) # add "fake" batch dimension 1
prediction = model.predict(recorded_feature).reshape((20, ))
t3 = timer()
# normalize prediction output to get "probabilities"
prediction /= prediction.sum()
# print the 3 candidates with highest probability
prediction_sorted_indices = prediction.argsort()
print("candidates:\n-----------------------------")
for k in range(3):
    i = int(prediction_sorted_indices[-1-k])
    print("%d.)\t%s\t:\t%2.1f%%" % (k+1, index2word[i], prediction[i]*100))
print("-----------------------------")
print("feature computation time: %2.1f ms" % ((t2-t1)*1e3))
print("CNN model prediction time: %2.1f ms" % ((t3-t2)*1e3))
print("total time: %2.1f ms" % ((t3-t1)*1e3))
plt.close()
plt.figure(1, figsize=(10, 7))
plt.subplot(211)
plt.plot(recording)
plt.subplot(212)
plt.imshow(recorded_feature.reshape(99, 20).T, aspect="auto")
plt.show()
```
As we see in this case, the results look really good. If the probability for the best candidate is very high and those of the second-best and third-best candidates are pretty low, the prediction seems quite trustworthy.
Additionally, we can see that the feature computation and CNN model prediction are quite fast. The total execution time is around 100ms, which means that our method is able to work in "real-time".
So now let's adapt and extend this little demo to work in real-time. For this, we'll use a buffer that contains 5 successive snippets of 3200 samples, i.e. 200ms each. We implement this audio buffer as a ringbuffer, which means that every time a new 200ms long snippet has been recorded, the oldest snippet in the buffer is discarded, the buffer is moved one step back and the newest snippet is put at the last position. This way, our buffer is updated every 200ms and always contains the last 1s of recorded audio. Since our prediction takes approximately 100ms and we have 200ms between each update, we have enough time for computation and achieve a latency of <200ms (so I think it can be considered "real time" in this context).
To implement the buffer in python, we can make use of numpy's [roll()](https://numpy.org/doc/stable/reference/generated/numpy.roll.html) function. We roll our buffer with a step of -1 along the first axis, which means that all 5 snippets are shifted to the left and the first snippet rolls over to the last position. Then we replace the snippet at the last position (which is the oldest snippet we wish to discard) with the newest snippet.
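A minimal version of this ring-buffer update, with toy numbers (5 "snippets" of 2 samples each instead of 5 × 3200), looks like this:

```python
import numpy as np

buf = np.arange(10).reshape(5, 2)      # rows: oldest ... newest snippet
buf = np.roll(buf, shift=-1, axis=0)   # shift rows left; row 0 rolls to the end
buf[-1, :] = [100, 101]                # overwrite the rolled-over row with new data
print(buf[0])    # [2 3]: the previously second-oldest snippet is now the oldest
print(buf[-1])   # [100 101]: the newest snippet sits at the last position
```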
We define a callback function with an appropriate signature for the sounddevice Stream API (see [here](https://python-sounddevice.readthedocs.io/en/0.4.0/api/streams.html#sounddevice.Stream)) that updates the audio buffer and makes a new prediction each time a new snippet is recorded. We use a simple threshold of 70% probability to check if a word has been recognized. When a word is recognized, it will also appear in the buffer after the next couple of updates, so it will be recognized more than once in a row. To avoid this, we can implement a timeout that ignores a recognized word, if the same word has already been recognized shortly before.
```
audio_buffer = np.zeros((5, 3200))
last_recognized_word = None
last_recognition_time = 0
recognition_timeout = 1.0
def audio_stream_callback(indata, frames, time, status):
    global audio_buffer
    global model
    global index2word
    global last_recognized_word
    global last_recognition_time
    audio_buffer = np.roll(audio_buffer, shift=-1, axis=0)
    audio_buffer[-1, :] = np.squeeze(indata)
    t1 = timer()
    recorded_feature = audio2feature(audio_buffer.flatten())
    recorded_feature = np.expand_dims(recorded_feature, 0)  # add "fake" batch dimension 1
    t2 = timer()
    prediction = model.predict(recorded_feature).reshape((20, ))
    # normalize prediction output to get "probabilities"
    prediction /= prediction.sum()
    # print(prediction)
    best_candidate_index = prediction.argmax()
    best_candidate_probability = prediction[best_candidate_index]
    t3 = timer()
    if best_candidate_probability > 0.7:  # threshold
        word = index2word[best_candidate_index]
        if (timer()-last_recognition_time) > recognition_timeout or word != last_recognized_word:
            last_recognition_time = timer()
            last_recognized_word = word
            clear_output(wait=True)  # clear output as soon as new output is available to replace it
            print("%s\t:\t%2.1f%%" % (word, best_candidate_probability*100))
            print("-----------------------------")
```
Now we can finally start the real-time demo of our CNN keyword recognizer. To do so, we start an input stream which calls our callback function each time a new block of 3200 samples has been recorded. We'll let the recognizer run for one minute so we have plenty of time to try it out.
```
# REALTIME KEYWORD RECOGNITION DEMO (60s long)
with sd.InputStream(samplerate=16000, blocksize=3200, device=None, channels=1, dtype="float32", callback=audio_stream_callback):
sd.sleep(60*1000)
```
# ```aggregate```: Basic Examples
Illustrates the basic functionality of ```aggregate```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from IPython.core.display import HTML, display
from importlib import reload
# pandas options
pd.set_option('display.max_rows', 50)
pd.set_option('display.max_columns', 30)
pd.set_option('display.max_colwidth', 150)
# matplotlib and plotting options
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# seaborn options
sns.set(context='paper', style='darkgrid', font='serif')
# sns.set(context='paper', style='ticks', font='serif')
# warnings
import warnings
# warnings.simplefilter('error')
# warnings.simplefilter('ignore')
import logging
logging.getLogger("matplotlib").setLevel(logging.CRITICAL)
# this file is in examples
from importlib import reload
import sys
sys.path.insert(0,'..')
import aggregate as agg
import aggregate.parser as parser
import aggregate.underwriter as trash
uw = trash.Underwriter(debug=False)
uw
uw.list().T
uw.describe().head()
uw.update = False
```
## Create a simple aggregate distribution, plot and statistics
```
xx = uw('agg newAgg 10 claims sev lognorm 12 cv 1.75 poisson note{here is note}', create_all=True)
xx
uw['newAgg']
xx.recommend_bucket(16)
# trial and error shows 0.1 has lower CV error
xx.update(np.arange(1<<16, dtype=float) * 0.1)
xx
xx.audit_df
xx.plot()
xx.plot('long')
xx.report('audit')
display(xx.audit_df)
display(xx.statistics_df)
display(xx.statistics_total_df)
```
## Portfolio Examples
```
testa = f"""
port Complex~Portfolio~Mixed
agg LineA 50 claims sev lognorm 12 cv [2, 3, 4] wt [.3 .5 .2] mixed gamma 0.4
agg LineB 24 claims 10 x 5 sev lognorm 12 cv [{', '.join([str(i) for i in np.linspace(2,5, 20)])}] wt=20 mixed gamma 0.35
agg LineC 124 claims 120 x 5 sev lognorm 16 cv 3.4 mixed gamma 0.45
"""
testb = """
port Complex~Portfolio
agg Line3 50 claims [5 10 15] x 0 sev lognorm 12 cv [1, 2, 3] mixed gamma 0.25
agg Line9 24 claims [5 10 15] x 5 sev lognorm 12 cv [1, 2, 3] wt=3 mixed gamma 0.25
"""
testc = """
port Portfolio~2
agg CA 500 prem at .5 lr 15 x 12 sev gamma 12 cv [2 3 4] wt [.3 .5 .2] mixed gamma 0.4
agg FL 1.7 claims 100 x 5 sev 10000 * pareto 1.3 - 10000 poisson
agg IL 1e-8 * agg.CMP
agg OH agg.CMP * 1e-8
agg NY 500 prem at .5 lr 15 x 12 sev [20 30 40 10] * gamma [9 10 11 12] cv [1 2 3 4] wt =4 mixed gamma 0.4
sev proda 30000 * lognorm 2
"""
testd = """
sev prodc: 50000 * lognorm(3)
sev weird 50000 * beta(1, 4) + 10000
sev premsop1 25000 * lognorm 2.3; sev premsop2 35000 * lognorm 2.4;
sev premsop3 45000 * \
lognorm 2.8
"""
teste = """
agg Agg1 20 claims 10 x 2 sev lognorm 12 cv 0.2 mixed gamma 0.8
agg Agg2 20 claims 10 x 2 sev 15 * lognorm 2.5 poisson;
sev premsop1 25000 * lognorm 2.3;
agg Agg3 20 claims 10 x 2 on 25 * lognorm 2 fixed;
"""
testall = testa + testb + testc + testd + teste
# write test is a debug mode
a, b, c = uw.write_test(testall)
# del b['note']
agg.html_title('Severities')
display(a)
agg.html_title('Aggregates')
display(b)
agg.html_title('Portfolios')
display(c)
uw['LineA']
uw.update = True
uw.log2 = 12
# actually create a LineA object
print(uw['LineA'])
ob = uw('LineA')
ob
ob.plot()
ob.audit_df
uw['PPAL']
ob = uw('PPAL')
display(ob.statistics_df)
ob.sevs[0].plot()
ob.audit_df
uw.update=True
if 1:
    print(f'Script:\n{testa}')
    p = uw.write(testa, log2=16, bs=0.2)
else:
    print(f'Script:\n{testb}')
    p = uw.write(testb, log2=16, bs=0.04)
p.plot(subplots=True)
p
p.report('all')
for a in p:
    display(a.statistics_df)
    display(a.statistics_total_df)
for ag in p:
    # the underlying aggregates
    display(ag)
    m = np.sum(ag.xs * ag.agg_density)
    m2 = np.sum(ag.xs**2 * ag.agg_density)
    print(m, np.sqrt(m2 - m*m) / m)
    # ag.plot()
    # display(ag.statistics_df)
    # display(ag.statistics_total_df)
uw.write('liabc').plot()
ag = uw('PPAL')
ag.easy_update(14)
ag
uw['quake']
pf = uw('BODOFF1')
pf.update(8, 1)
pf
pf.plot('density')
uw['CMP']
ob = uw('agg MYCMP 0.01 *agg.CMP')
ob.easy_update()
ob.plot()
ob
sv = agg.Severity('lognorm', sev_mean = 1000, sev_cv = 0.5)
sv.plot()
c = uw.write('port test agg myCMP 0.01 * agg.CMP')
c.recommend_bucket()
c.update(log2=13, bs=100000)
c.plot(subplots=True, height=4)
c
biz = uw['Homeowners']
print(f'Type: {type(biz)}\nstr: {biz}\nrepr: {repr(biz)}')
display(biz)
biz = uw('Homeowners')
print(f'Type: {type(biz)}\nstr: {biz}\nrepr: {repr(biz)}')
display(biz)
biz.easy_update(10)
print(f'Type: {type(biz)}\nstr: {biz}\nrepr: {repr(biz)}')
display(biz)
biz.report('audit')
biz.recommend_bucket(verbose=True)
biz.easy_update(10, verbose=True)
biz.audit_df
biz.report('all')
```
## Script Examples
```
s = uw.write('sev MyLN1 12 * lognorm 1; sev MyLN2 12 * lognorm 2; sev MyLN3 12 * lognorm 3; ')
uw.describe('severity')
for v in s:
    print(v.moms())
    v.plot()
print([ 12 * np.exp(x*x/2) for x in [1,2,3]])
uw.update = True
pf = uw.write('port test: agg PA 0.0085 * agg.PersAuto agg CA: 0.02 * agg.CommAuto agg WC: 0.005 * agg.WorkComp',
log2=16, bs=25e4, remove_fuzz=True, add_exa=False)
pf
pf.report('quick')
pf.add_exa()
pf.plot('audit', aspect=1.4, height=2.25)
```
## More complex program
```
uw.update=True
uw.log2 = 13
uw.bs = 0.25
uw
warnings.simplefilter('always')
ans = uw.write("""port MyFirstPortfolio
agg A1: 50 claims sev gamma 12 cv .30 (mixed gamma 0.014)
agg A2: 50 claims 30 xs 10 sev gamma 12 cv .30 (mixed gamma 0.014)
agg A3: 50 claims sev gamma 12 cv 1.30 (mixed gamma 0.014)
agg A4: 50 claims 30 xs 20 sev gamma 12 cv 1.30 (mixed gamma 0.14)
agg B 15 claims 15 xs 15 sev lognorm 12 cv 1.5 + 2 mixed gamma 4.8
agg Cat 1.7 claims 25 xs 5 sev 25 * pareto 1.3 0 - 25 poisson
agg ppa: 1e-8 * agg.PPAL
""", add_exa=False, remove_fuzz=True, trim_df=False)
ans
ans.statistics_df
ans.recommend_bucket()
ans.update(14, 1, remove_fuzz=True)
ans
ans.plot('density', subplots=True, logy=True)
ans.report('audit')
for a in ans:
    display(a)
ans.report()
```
## Integrated Parser
```
program1 = """port Program1
agg A: 50 claims, sev gamma 12 cv .30 mixed gamma 0.014
agg Ba: 500 loss, sev lognorm 50 cv .8 poisson
agg Bb: 500 loss, 1000 xs 0 sev lognorm 50 cv .8 poisson
agg Bg: 500 loss, sev gamma 50 cv .8 poisson
agg C: 500 loss, 75 xs 25, sev lognorm 50 cv .9 poisson
agg D: 25 claims, 30 xs 20, sev gamma 12 cv 1.30 (mixed gamma 0.85)
agg Cat1: 1.7 claims, 125 xs 5, sev 25 * pareto 1.3 - 25 poisson
agg Cat2: 3.5 claims, 1000 xs 0, sev 25 * pareto 2.3 0 - 25 poisson
"""
program2 = """port Program2
agg Thick: 500 loss, sev lognorm 50 cv .8 poisson
agg Thin: 500 loss, 1000 xs 0 sev lognorm 50 cv .8 poisson
agg Cat: 2 claims, 1250 xs 5, sev 25 * pareto 1.3 - 25 poisson
"""
program3 = '''port Program3
agg Cat: 50000000 loss 1e9 xs 0 sev 50000000 * pareto 1.3 - 50000000 poisson
agg MyWC: 0.005 * agg.WorkComp
agg HeterogCA: agg.CommAuto * 0.002 ;
agg HomogCA: 0.001 * agg.CommAuto'''
# TODO: if agg Cat is at the end there is a parse error
ans1 = uw.write(program1, log2=13, bs=0.5, remove_fuzz=True, trim_df=False)
ans2 = uw.write(program2, log2=10, remove_fuzz=True, trim_df=False)
ans3 = uw.write(program3, log2=11, padding=2, remove_fuzz=True, trim_df=False)
# %timeit ans = uw.write(program, 'script example', False) #, False, log2=13, bs=0.5, remove_fuzz=True, trim_df=False)
ans = [ans1, ans2, ans3]
for a in ans:
    a.report()
    a.plot('density', subplots=True, logy=True)
```
## Distortions and Pricing
```
portfolio_program = """port distortionTest
agg mix 50 claims [50, 100, 150, 200] xs 0 sev lognorm 12 cv [1,2,3,4] poisson
agg low 500 premium at 0.5 5 xs 5 sev gamma 12 cv .30 mixed gamma 0.2
agg med 500 premium at 0.5 lr 15 xs 10 sev gamma 12 cv .30 mixed gamma 0.4
agg xsa 50 claims 30 xs 10 sev gamma 12 cv .30 mixed gamma 1.2
agg hcmp 1e-8 * agg.CMP
agg ihmp agg.PPAL * 1e-8
"""
uw.update = False
port = uw.write(portfolio_program)
port.update(log2=13, bs=10, remove_fuzz=True)
port
a = agg.axiter_factory(None, 24, aspect=1.4, height=2)
port.plot('quick', axiter=a)
port.plot('density', axiter=a, subplots=True, aspect=1.4, height=2)
port.plot('density', axiter=a, subplots=True, aspect=1.4, height=2, logy=True, ylim=[1e-10, 1e-2])
a.tidy()
agg.suptitle_and_tight('Density Plots for Portfolio 1')
port.plot('audit', aspect=1.2, height=2.5)
port.plot('priority', aspect=1.2, height=2.5)
port.uat(verbose=True,);
K = port.q(0.995) # Reasonable capital scale
LR = 0.925
K
cd = port.calibrate_distortions(LRs=[LR], As=[K])
cd
dd = agg.Distortion.distortions_from_params(cd, (K, LR), plot=True)
dd
ans_table, ans_stacked = port.apply_distortions(dd, As=[port.q(0.99), port.q(0.995), port.q(0.999)], num_plots=2)
```
## Another Interesting Example
```
cata = uw('agg mycat 4 claims sev 1000 * sev.cata poisson')
cata.easy_update(11)
cata.plot()
uw['mycat'] # GOHERE
cata = uw('agg mycat 4 claims sev 1000 * pareto 2.1 - 1000 poisson')
cata
cata.easy_update(11)
cata.plot()
cata
uw['mycat']
uw.update = False
uw.log2 = 11
ag = uw('agg myCatA 2 claims sev 1000 * pareto 2.1 - 1000 fixed')
# ag.plot()
# ag = uw.write('agg myCatA 2 claims sev 10000*agg.cata fixed')
ag.update(50. * np.arange(1<<11), padding=1, verbose=False )
ag.plot()
ag
uw.update = False
uw.verbose = False
pf = uw("""port smallLarge
agg cat 2 claims sev 10000 * pareto 1.3 - 10000 poisson
agg noncat 120 claims sev lognorm 1000 cv 0.5 poisson""")
pf
pf.recommend_bucket()
pf.update(log2=18, bs=10000, padding=1, add_exa=False)
pf.plot(subplots=True, logy=True)
pf
pf
pf.plot()
```
## Distortions
```
agg.Distortion.available_distortions()
agg.Distortion.test()
agg.insurability_triangle()
uw.update = False
# basic = uw('''
# port basic
# ppa 4000 claims 1e6 x 0 sev lognorm 10000 cv 15
# ho 800 claims 2e6 x 0 sev gamma 50000 cv 10
# cat 2 claims 20e6 x 0 sev 1e5 * pareto 2.1 - 1e5
# ''')
# this is about Intereseting_Cat Example See LCA_08_25
#
basic = uw('''
port basic
agg attrit 10000 claims 500 x 0 sev lognorm 1 cv 1.75 mixed gamma 0.5
agg paretoccat 2 claims sev 50 * pareto 1.25 - 50 poisson
agg lognccat 3.5 claims 40e3 x 0 sev 200 * lognorm 1.25 poisson
''')
basic.recommend_bucket()
basic.update(14, 10, add_exa=True, remove_fuzz=True, approx_freq_ge=100, approx_type='slognorm', discretization_calc='distribution', trim_df=False)
basic
basic.plot('quick')
basic.plot('density', subplots=True)
basic.plot('density', subplots=True, logy=True)
basic.plot('density', aspect=1.9, logy=True)
bfit = uw(basic.fit())
bfit.update(basic.density_df.loss.values)
bfit2 = basic.collapse()
bfit.plot()
plt.plot(basic.density_df.loss, basic.density_df.p_total, label='exact')
plt.plot(bfit.xs, bfit.agg_density, label='approx')
plt.yscale('log')
plt.legend()
xs = basic.density_df.loss
print(np.sum(xs * basic.density_df.p_total))
print(np.sum(xs * bfit.agg_density))
bfit.agg_density.sum()
```
## The plot of EXEQA is very interesting... different behaviours at different size losses
```
axiter = agg.axiter_factory(None, 24, aspect=1.4)
distort = agg.Distortion('wang', 2.25)
df, audit = basic.apply_distortion(distort, axiter)
params = basic.calibrate_distortions(LRs=[0.85, 0.90], Ps=[0.99, 0.98], r0=0.025)
params
gs = agg.Distortion.distortions_from_params(params, (basic.q(0.99), 0.85), r0=0.025)
gs
```
#### The first apply distortion was random and extreme. Now we apply Wang with a more reasonable shift.
```
axiter = agg.axiter_factory(None, 24, aspect=1.25)
df, au = basic.apply_distortion(gs['ly'] , axiter)
test = basic.top_down(gs, 0.99)
ans = basic.apply_distortions(gs, Ps=[0.98, 0.99], num_plots=3)
a, p, test, arams, dd, table, stacked = basic.uat(LRs=[0.9], verbose=True)
basic.density_df.filter(regex='exa[g]?_[st][a-z]+$').plot(kind='line')
table
uw('sev stepEg dhistogram xps [0, 1, 2, 3, 4] [.2, .3, .4, .05, .05] ').plot()
uw('sev stepEg chistogram xps [0, 1, 2, 3, 4] [.2, .3, .4, .05, .05] ').plot()
uw('sev stepEg dhistogram xps [0, 1, 2, 3, 4] .2 ').plot()
uw('sev stepEg chistogram xps [0, 1, 2, 3, 4] .2 ').plot()
np.set_printoptions(threshold=2**16)
fixed = uw('sev my chistogram xps [0,1,2,3,4] [.1,.2,.3, 0, .4]')
fixed.plot()
fixed.moms()==(2.9000000000000004, 10.449999999999999, 41.825000000000003)
```
#### A fixed distribution is just a discrete histogram with only one value
```
fixed = uw('sev my dhistogram xps [2] [1]')
fixed.plot()
fixed.moms() == (2, 4, 8)
# cts version is uniform? How exactly is this working?!
fixed = uw('sev my chistogram xps [0 2] [0 1]')
fixed.plot()
fixed.moms() == (2, 4, 8)
reload(trash)
uw = trash.Underwriter()
uw.update = True
uw.verbose = True
uw.log2 = 10
s = f'''agg logo 1 claim {np.linspace(10, 250, 15)} sev lognorm 100 cv {np.linspace(.2, 10, 15)} fixed'''
s = f'''agg logo 1 claim {np.linspace(10, 500, 100)} xs 0 sev lognorm 100 cv 1 fixed'''
print(s)
logo = uw.write(s, update=False)
logo.recommend_bucket(verbose=True)
uw['logo']
N = 2**14
bs = 1
xs = np.linspace(0, bs * N, N, endpoint=False)
junk = logo.update(xs, verbose=True)
logo.plot()
# can get the dictionary
uw['logo']
s = f'agg FixedClaim 1 claim sev dhistogram xps [5 10 20 ] [.5 .25 .25 ] fixed'
print(s)
uw.update = True
uw.verbose = True
uw.log2 = 8
fixed = uw.write(s, approximation='exact')
fixed.plot('quick')
fixed
a = fixed.update(1.0 * np.arange(256), verbose=True)
display(a)
fixed.plot('quick')
warnings.simplefilter('default')
fixed.plot('long')
uw.update = True
uw.log2 = 1
uw.update = True
# interesting = uw("""agg home 1 claim sev 20 * triang 0.5 fixed""")
interesting = uw("""agg home 1 claim sev 20 * uniform fixed""")
interesting.plot()
uw.update = False
interesting1 = uw("""
port easy
agg home 1 claim sev 20 * uniform fixed
agg auto 1 claim sev 50 * triang 0.5 fixed
""")
interesting2 = uw("""
port easy
agg home 2 claim sev 30 * uniform + 4 fixed
agg auto 1 claim sev 50 * uniform + 2 fixed
""")
interesting2
interesting = interesting2
interesting.update(8, 1)
interesting
interesting.plot()
p = interesting
p.plot(kind='density', line='all')
p.plot(kind='collateral', line='auto', c=45, a=90)
acc = []
for a in range(10, 150, 5):
    s, ans = p.analysis_collateral('home', c=0, a=a, debug=True)
    acc.append(s)
    # change percent here to move line up or down
    s, ans = p.analysis_collateral('home', c=a*.9, a=a, debug=True)
    acc.append(s)
    s, ans = p.analysis_collateral('home', c=a, a=a, debug=True)
    acc.append(s)
res = pd.concat(acc).sort_index()
res = res.set_index('a')
res[['exa', 'ecac', 'lev']].plot(marker='o')
# display(res)
assert(np.allclose(res.query('c==0')['exa'], res.query('c==0')['ecac']))
```
## Credit Puzzle and Empirical Distortions
https://www.bis.org/publ/qtrpdf/r_qt0312e.pdf
```
oneyear= '''AAA 49.50 0.06 63.86 0.18 70.47 0.33 73.95 0.61
AA 58.97 1.24 71.22 1.44 82.36 1.86 88.57 2.70
A 88.82 1.12 102.91 2.78 110.71 4.71 117.52 7.32
BBB 168.99 12.48 170.89 20.12 185.34 27.17 179.63 34.56
BB 421.20 103.09 364.55 126.74 345.37 140.52 322.32 148.05
B 760.84 426.16 691.81 400.52 571.94 368.38 512.43 329.40'''
oy = oneyear.split('\n')
oyo = [i.split(' ') for i in oy]
df = pd.DataFrame(oyo, columns=['rating', 's_13', 'el_13', 's_35', 'el_35', 's_57', 'el_57', 's_710', 'el_710'], dtype=float)
df = df.set_index('rating')
df = df.sort_index(axis=1)
df.columns = pd.MultiIndex.from_product((('el', 'spread'), ('1-3', '3-5', '5-7', '7-10')), names=['type', 'maturity'])
df
for m in ('1-3', '3-5', '5-7', '7-10'):
    df[('lr', m)] = df[('el', m)] / df[('spread', m)]
df
df['lr'].plot(kind='bar')
temp = df.loc[:, [('el', '1-3'), ('spread', '1-3')]] / 10000
temp.columns =['el', 'spread']
temp.loc['AAAA', :] = (0,0)
temp = temp.sort_values('el')
temp.plot(x='el', y='spread')
temp
from scipy.spatial import ConvexHull
hull = ConvexHull(temp)
plt.plot(temp.el, temp.spread, 'o')
for simplex in hull.simplices:
    print(simplex)
    plt.plot(temp.iloc[simplex, 0], temp.iloc[simplex, 1], 'k-')
plt.xlim(0, .002)
plt.ylim(0, 0.02)
```
### Audit from 08_28 on Sum of Parts
```
uw['MASSTEST']
mt = uw("""
port mass
agg a 2 claims sev 10 * uniform + 5 fixed
agg b 1 claim sev 10 * uniform fixed
agg c 1 claim sev 15 * uniform fixed
""")
mt.update(6, 1)
mt.plot(height=3, aspect=1.5)
a, p, test, params, dd, table, stacked = mt.uat(Ps=[0.95, .97], LRs=[0.9], r0=0.1, verbose=True)
mt.plot()
mt.fit(output='dict')
mt.collapse()
mt.xs
uw['mt']
```
## Working with Meta objects
```
reload(trash)
# to reconcile globals underwriter needs a dict of objects
uw = trash.Underwriter(glob=globals())
port = uw('port xx agg A1 5 claims sev lognorm 10 cv 2 poisson')
port.update(6, 10)
port.plot()
port
ag = uw('agg A2 12 claims sev lognorm 10 cv 0.5 mixed gamma .3')
ag.easy_update(9, 1)
# ag.plot()
# ag
import scipy.stats as ss
def makeEg(port):
    ps = port.density_df.p_total.values
    xs = port.density_df.loss.values
    bs = xs[1]
    xss = np.hstack((-bs*1e-7, 0, xs[1:]-bs/2, xs[-1]+bs/2))
    pss = np.hstack((ps[0]/1e-7, 0, ps[1:]))
    fz = ss.rv_histogram((pss, xss))
    m = np.sum(xs * ps)
    v = np.sum(xs**2 * ps) - m*m
    print(m, v, fz.stats(), m / fz.stats()[0]-1)
    plt.plot(xs, np.cumsum(ps), drawstyle='steps-post', label='orig')
    ex = np.arange(1000, dtype=float)*.1
    plt.plot(ex, fz.cdf(ex), label='approx', drawstyle='steps-post')
    plt.xlim(-0.5, 100)
    plt.ylim(0, 1)
    plt.legend()
makeEg(port)
metaport = uw('agg MyMetaAgg 1 claims 200 x 0 sev meta.port 6 10 fixed')
metaport.easy_update(10, 10)
port.plot()
metaport.plot()
```
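`makeEg` wraps a discrete density in `scipy.stats.rv_histogram` to obtain cdf/ppf/stats methods. A minimal sketch of the same trick on a simple density (the grid and masses here are illustrative):

```
import numpy as np
import scipy.stats as ss

# Discrete density on a regular grid: ps[i] is the probability mass at xs[i]
xs = np.arange(0, 10, dtype=float)
ps = np.ones(10) / 10.0

# rv_histogram wants bin edges (len(ps) + 1 of them); center each bin on xs[i]
bs = xs[1] - xs[0]
edges = np.hstack((xs - bs / 2, xs[-1] + bs / 2))
fz = ss.rv_histogram((ps, edges))

# The wrapper now exposes cdf/ppf/stats for the discrete density
print(fz.mean())  # matches sum(xs * ps) = 4.5 because bins are centered
```

Centering each bin on its grid point keeps the continuous approximation's mean equal to the discrete mean, which is exactly the check `makeEg` prints.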
# Numerical transformations plots on the ring dataset
This notebook generates the numerical transformations plots on the ring data, Fig 1 in the paper -- Synthsonic: Fast, Probabilistic modeling and Synthesis of Tabular Data
```
import logging
import numpy as np
import pandas as pd
import xgboost as xgb
import seaborn as sns
from sdgym import load_dataset
from pgmpy.models import BayesianModel
from pgmpy.estimators import TreeSearch
from pgmpy.sampling import BayesianModelSampling
from pgmpy.inference import BayesianModelProbability
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib.gridspec import GridSpec
from synthsonic.models.kde_copula_nn_pdf import KDECopulaNNPdf
```
## Config
```
SAVE_PLOTS = False
logging.basicConfig(level=logging.INFO)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
plt.rcParams['text.color'] = 'black'
plt.rcParams['figure.max_open_warning'] = 0
colors = [i['color'] for i in plt.rcParams['axes.prop_cycle']]
markers = ['o', 's', 'p', 'x', '^', '+', '*', '<', 'D', 'h', '>']
%matplotlib inline
```
## Dataset
```
dataset_name = 'ring'
data, categorical_columns, ordinal_columns = load_dataset(dataset_name)
df = pd.DataFrame(data, columns=['x', 'y'])
```
## Fit
```
# clf = xgb.XGBClassifier(
# n_estimators=100,
# reg_lambda=1,
# gamma=0,
# max_depth=3
# )
kde = KDECopulaNNPdf(use_KDE=False, n_uniform_bins=100, n_quantiles=1000)
kde = kde.fit(data)
```
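The `pipe_` steps visualised in the figures below form a chain of standard transformations: quantile transform to Gaussian marginals, a PCA rotation, then a quantile transform to uniform marginals. A rough stand-in built from scikit-learn primitives (an approximation for intuition, not the package's actual implementation):

```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer
from sklearn.decomposition import PCA

# Ring-shaped toy data standing in for the sdgym 'ring' dataset
rng = np.random.default_rng(42)
theta = rng.uniform(0, 2 * np.pi, 2000)
r = 1.0 + rng.normal(0, 0.05, 2000)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# quantile -> Gaussian marginals, PCA rotation, quantile -> uniform marginals
pipe = make_pipeline(
    QuantileTransformer(output_distribution='normal', n_quantiles=1000),
    PCA(n_components=2),
    QuantileTransformer(output_distribution='uniform', n_quantiles=1000),
)
X_u = pipe.fit_transform(X)
print(X_u.min(), X_u.max())  # marginals now live on [0, 1]
```

Slicing such a pipeline (as `kde.pipe_[0:2]` does below) lets you inspect the data after any intermediate stage.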
## Plots
#### Fig 1a -- original data
```
axs = sns.JointGrid(data=df, x='x', y='y', height=7)
axs.ax_joint.scatter(data=df, x='x', y='y', c=colors[0], marker='x', s=0.3)
cnt, bins, _ = axs.ax_marg_x.hist(df['x'], bins=50, color=colors[0])
cnt, bins, _ = axs.ax_marg_y.hist(df['y'], bins=50, color=colors[0], orientation='horizontal')
axs.ax_joint.tick_params(labelsize=14)
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
axs.ax_joint.set_xlim(-2.0, 2.0)
axs.ax_joint.set_ylim(-2.0, 2.0)
axs.ax_joint.yaxis.set_ticks([-2, -1, 0, 1, 2])
axs.ax_joint.xaxis.set_ticks([-2, -1, 0, 1, 2])
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_joint_marginal_data.pdf', dpi=600, bbox_inches='tight')
```
#### Fig 1b quantile transformation to Gaussian
```
X_g = kde.pipe_[0].transform(data)
tdf = pd.DataFrame(X_g, columns=['x', 'y'])
axs = sns.JointGrid(data=tdf, x='x', y='y', height=7)
axs.ax_joint.scatter(data=tdf, x='x', y='y', c=colors[0], marker='x', s=0.3)
cnt, bins, _ = axs.ax_marg_x.hist(tdf['x'], bins='auto', color=colors[0])
cnt, bins, _ = axs.ax_marg_y.hist(tdf['y'], bins='auto', color=colors[0], orientation='horizontal')
axs.ax_joint.tick_params(labelsize=14)
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_joint_marginal_quantile.pdf', dpi=600, bbox_inches='tight')
```
#### Fig 1c PC rotation
```
X_p = kde.pipe_[0:2].transform(data)
tdf = pd.DataFrame(X_p, columns=['x', 'y'])
axs = sns.JointGrid(data=tdf, x='x', y='y', height=7)
axs.ax_joint.scatter(data=tdf, x='x', y='y', c=colors[0], marker='x', s=0.3)
cnt, bins, _ = axs.ax_marg_x.hist(tdf['x'], bins='auto', color=colors[0])
cnt, bins, _ = axs.ax_marg_y.hist(tdf['y'], bins='auto', color=colors[0], orientation='horizontal')
axs.ax_joint.tick_params(labelsize=14)
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_joint_marginal_pca.pdf', dpi=600, bbox_inches='tight')
```
#### Fig 1d Quantile transformation to uniform
```
X_u = kde.pipe_.transform(data)
tdf = pd.DataFrame(X_u, columns=['x', 'y'])
axs = sns.JointGrid(data=tdf, x='x', y='y', height=7)
axs.ax_joint.scatter(data=tdf, x='x', y='y', c=colors[0], marker='x', s=0.3)
cnt, bins, _ = axs.ax_marg_x.hist(tdf['x'], bins='auto', color=colors[0])
cnt, bins, _ = axs.ax_marg_y.hist(tdf['y'], bins='auto', color=colors[0], orientation='horizontal')
axs.ax_joint.tick_params(labelsize=14)
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_joint_marginal_uniform.pdf', dpi=600, bbox_inches='tight')
```
#### Fig 1f synthetic sample
```
df = pd.DataFrame(data, columns=['x', 'y'])
X_gen = kde.sample_no_weights(df.shape[0] * 10)
df_gen = pd.DataFrame(X_gen, columns=['x', 'y']).sample(n=df.shape[0])
axs = sns.JointGrid(data=df_gen, x='x', y='y', height=7)
sns.kdeplot(data=df['x'], data2=df['y'], color=colors[0], ax=axs.ax_joint, label=r'$X$', shade=False, shade_lowest=False)
sns.kdeplot(data=df_gen['x'], data2=df_gen['y'], color=colors[1], ax=axs.ax_joint, label=r'$X_{\rm syn}$', zorder=10, shade=False, shade_lowest=False)
#axs.ax_joint.scatter(data=df_gen, x='x', y='y', c=colors[1], marker='x', alpha=0.9, s=0.3, label='generated')
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
axs.ax_joint.legend(fontsize=16)
cnt, bins, _ = axs.ax_marg_x.hist(df['x'], bins='auto', color=colors[0], lw=2)
ext_cnt = np.insert(cnt, 0, cnt[0])
centers = 0.5 * (bins[1:] + bins[:-1])
cnt_gen, *_ = axs.ax_marg_x.hist(df_gen['x'], bins=bins, histtype='step', lw=3, color=colors[1], ls='--')
cnt, bins, _ = axs.ax_marg_y.hist(df['y'], bins='auto', color=colors[0], orientation='horizontal')
ext_cnt = np.insert(cnt, 0, cnt[0])
centers = 0.5 * (bins[1:] + bins[:-1])
cnt_gen, *_ = axs.ax_marg_y.hist(df_gen['y'], bins=bins, histtype='step', lw=3, color=colors[1], orientation='horizontal', ls='--')
axs.ax_joint.set_xlim(-2.0, 2.0)
axs.ax_joint.set_ylim(-2.0, 2.0)
axs.ax_joint.yaxis.set_ticks([-2, -1, 0, 1, 2])
axs.ax_joint.xaxis.set_ticks([-2, -1, 0, 1, 2])
axs.ax_joint.tick_params(labelsize=14)
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_joint_marginal_with_sample_contours.pdf', dpi=600, bbox_inches='tight')
```
#### Fig 1e Weight BN
```
nbins = 10
bin_width = 1. / nbins
X_num_discrete = np.floor(X_u / bin_width)
X_num_discrete[X_num_discrete >= nbins] = nbins - 1 # correct for values at 1.
df_dis = pd.DataFrame(X_num_discrete)
# "tan" bayesian network needs string column names
df_dis.columns = [str(c) for c in df_dis.columns]
est = TreeSearch(df_dis, root_node=df_dis.columns[0])
dag = est.estimate(
estimator_type="tan",
class_node='1',
show_progress=False,
edge_weights_fn='mutual_info'
)
# model the conditional probabilities
bn = BayesianModel(dag.edges())
bn.fit(df_dis)
bn_prob = BayesianModelProbability(bn)
bn_ordering = [str(i) for i in range(df_dis.shape[1])]
x = np.arange(0, nbins, 1)
xx, yy = np.meshgrid(x, x)
X_grid = np.hstack((yy.reshape(nbins ** 2, 1), xx.reshape(nbins ** 2, 1)[::-1]))
P_grid = np.exp(bn_prob.log_probability(X_grid)).reshape(nbins, nbins)
weight_grid = P_grid / ( 1 / (nbins ** 2))
axs = sns.JointGrid(data=tdf, x='x', y='y', height=7)
sns.heatmap(weight_grid, vmin=0, vmax=weight_grid.max(), fmt='.2f', cmap='Blues', cbar=False, ax=axs.ax_joint, annot=weight_grid, annot_kws={'fontsize': 14})
cnt, bins, _ = axs.ax_marg_x.hist(X_num_discrete[:, 0], bins=np.arange(0, nbins + 1), color=colors[0])
cnt, bins, _ = axs.ax_marg_y.hist(X_num_discrete[:, 1], bins=np.arange(0, nbins + 1), color=colors[0], orientation='horizontal')
axs.ax_joint.tick_params(labelsize=14)
axs.ax_joint.set_xlabel('')
axs.ax_joint.set_ylabel('')
axs.ax_joint.set_aspect("equal")
axs.ax_joint.xaxis.set_major_locator(ticker.MultipleLocator(2))
axs.ax_joint.set_xticklabels([0.0, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
axs.ax_joint.yaxis.set_major_locator(ticker.MultipleLocator(2))
axs.ax_joint.set_yticklabels([1.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0])
if SAVE_PLOTS:
    axs.savefig(f'{dataset_name}_discrete_uniform_bn_weights_annotated.pdf', dpi=600, bbox_inches='tight')
```
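The weight grid above is the ratio of the modeled bin probability to the uniform density $1/\text{nbins}^2$. The same quantity can be estimated empirically from bin counts; a minimal sketch using a histogram in place of the Bayesian network (the toy dependence between the two dimensions is illustrative):

```
import numpy as np

rng = np.random.default_rng(0)
nbins = 10

# Toy 'uniformized' sample on [0, 1)^2: the second dim tracks the first
# with noise, so bins near the diagonal get weight > 1
u = rng.uniform(0, 1, size=(5000, 2))
u[:, 1] = (u[:, 0] + 0.1 * rng.normal(size=5000)) % 1.0

counts, _, _ = np.histogram2d(u[:, 0], u[:, 1], bins=nbins, range=[[0, 1], [0, 1]])
P_grid = counts / counts.sum()             # empirical bin probabilities
weight_grid = P_grid / (1.0 / nbins ** 2)  # ratio to the uniform density
print(weight_grid.mean())                  # averages to 1 by construction
```

Since the bin probabilities sum to 1, the weights always average to 1; cells above 1 mark regions the sampler should up-weight.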
# **PROBLEMS**
**Problem 1: Parsing 2013/2014 needs assessment data into current fields**
300+ sites were initially identified as part of an informal needs assessment conducted by our Rwanda field staff over 2013 and 2014. We collected much of the same data that we do now, but in a less structured format. When we created the database in 2017, we loaded all the old needs assessment data into the general comments field on each bridge record (column L in this table), just to keep as a reference. We now believe that loading this data into our standard project assessments would benefit future needs assessment work, both field-based and remote. We would love to see the needs assessment data from column L parsed into the corresponding columns after it (M+), with a new row if there is existing data in those columns. If no corresponding column is obvious, we'd like to see new columns created for any extraneous data that is common to the old format.
**Problem 2: Predicting which sites will be technically rejected in future engineering reviews**
Any sites with a "Yes" in column AQ have undergone a full technical review, and of those, the Stage (column L) can be considered correct. Any sites with a "No" in column AQ have not undergone a full technical review, and their Stage is based on the assessor's initial estimate as to whether the site was technically feasible. We want to know whether we can use the sites that have been reviewed to understand which of the not-yet-reviewed sites are likely to be rejected by the senior engineering team. Any of the data can be used, but our guess is that Estimated Span, Height Differential Between Banks, Created By, and Flag for Rejection are likely to be the most reliable predictors.
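As a starting point for Problem 2, the reviewed sites can train a classifier that is then applied to the unreviewed ones. A minimal sketch on synthetic stand-in data (column names follow the problem statement; the toy rejection rule and values are invented for illustration — the real notebook would load these columns from the CSV):

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    'Estimated span (m)': rng.uniform(10, 120, n),
    'Height differential between banks': rng.uniform(0, 5, n),
    'Senior Engineering Review Conducted': rng.choice(['Yes', 'No'], n),
})
# Toy rule: long spans with big height differentials tend to be rejected
df['Flag for Rejection'] = np.where(
    df['Estimated span (m)'] * df['Height differential between banks'] > 150,
    'Yes', 'No')

features = ['Estimated span (m)', 'Height differential between banks']
reviewed = df[df['Senior Engineering Review Conducted'] == 'Yes']
unreviewed = df[df['Senior Engineering Review Conducted'] == 'No']

# Fit on reviewed sites, then score rejection risk for the unreviewed ones
clf = LogisticRegression(max_iter=1000).fit(
    reviewed[features], reviewed['Flag for Rejection'])
rejection_risk = clf.predict_proba(unreviewed[features])[:, list(clf.classes_).index('Yes')]
print(rejection_risk[:5])
```

The same shape of pipeline applies to the real data once categorical predictors like Created By are encoded.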
---
# **Exploratory Analysis**
```
# Imports
import numpy as np
import pandas as pd
from datetime import date
# Load data
B2P_2020_Oct = pd.read_csv("https://raw.githubusercontent.com/JonRivera/Bridges_Prosperity_Labs28_Project/main/Data/B2P%20Dataset_2020.10.csv")
B2P_2020_Oct.shape
B2P_2020_June = pd.read_csv("https://raw.githubusercontent.com/JonRivera/Bridges_Prosperity_Labs28/main/Data/B2P%20Rwanda%20Site%20Assessment%20Data_2020.06.03.csv")
B2P_2020_June.head()
rejection = B2P_2020_Oct[(B2P_2020_Oct['Flag for Rejection']=='Yes') & (B2P_2020_Oct['Senior Engineering Review Conducted'] == "Yes")]
accepted = B2P_2020_Oct[(B2P_2020_Oct['Flag for Rejection']=='No') & (B2P_2020_Oct['Senior Engineering Review Conducted'] == "Yes")]
#Redundant features:
# 'Bridge Opportunity: GPS (Latitude)',
# 'Bridge Opportunity: GPS (Longitude)',
rejection = rejection[['Bridge Name', 'Bridge Opportunity: Project Code',
'Bridge Opportunity: Bridge Type', 'Bridge Opportunity: Span (m)',
'Bridge Opportunity: Individuals Directly Served','Bridge Opportunity: Comments',
'Form: Form Name',
'Proposed Bridge Location (GPS) (Latitude)',
'Proposed Bridge Location (GPS) (Longitude)', 'Current crossing method',
'Nearest all-weather crossing point', 'Days per year river is flooded',
'Flood duration during rainy season', 'Market access blocked by river',
'Education access blocked by river', 'Health access blocked by river',
'Other access blocked by river', 'Primary occupations',
'Primary crops grown', 'River crossing deaths in last 3 years',
'River crossing injuries in last 3 years', 'Incident descriptions',
'4WD Accessibility', 'Name of nearest city',
'Name of nearest paved or sealed road', 'Bridge classification',
'Flag for Rejection', 'Rejection Reason', 'Bridge Type',
'Estimated span (m)', 'Height differential between banks',
'Senior Engineering Review Conducted']]
rejection.head(2)
```
# **Extracting Data Inside Comments and Converting Data Into Features for Data Frame [TOKENS APPROACH]**
```
B2P_2020_Oct_Comments = B2P_2020_Oct[["Bridge Opportunity: Project Code",'Bridge Opportunity: Comments']]
#Count how many null values in the comments
B2P_2020_Oct_Comments.isna().sum()
#We have 1497 observations but only 394 of them actually have comments
B2P_2020_Oct_Comments.info()
#Set display options so that you can see everything inside a cell
pd.set_option('display.max_colwidth', None)
#Exclude null rows
B2P_2020_Oct_Comments = B2P_2020_Oct_Comments[B2P_2020_Oct_Comments['Bridge Opportunity: Comments'].notnull()]
B2P_2020_Oct_Comments.head(10)
B2P_2020_Oct_Comments.columns
#Create tokenize function, splits based on ","
import numpy as np
import re
def tokenize(text):
    """Parses a string into a list of semantic units (words)
    Args:
        text (str): The string that the function will tokenize.
    Returns:
        list: tokens parsed out by splitting on ","
    """
    text = str(text)
    res = re.split(',', text)
    return res
storage = {}
def tokenize2(code, text):
    """Parses a string into a list of semantic units, splitting on "/"
    Args:
        code (str): bridge project code used as the dictionary key.
        text (str): The string that the function will tokenize.
    Returns:
        None: updates the storage dictionary; the bridge code is the key and the token list is the value
    """
    global storage
    text = str(text)
    res = re.split('/', text)
    storage[code] = res
B2P_2020_Oct_Comments['tokens']= B2P_2020_Oct_Comments['Bridge Opportunity: Comments'].apply(tokenize)
for x, y in zip(B2P_2020_Oct_Comments['Bridge Opportunity: Project Code'], B2P_2020_Oct_Comments['Bridge Opportunity: Comments']):
    tokenize2(x, y)
# Converting Items in Storage (Split Comments) Into a DataFrame To Create A 2013-2014 Based Data Set
B2P_2013_2014 = pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in storage.items() ])).T
B2P_2013_2014.info()
B2P_2013_2014.head(2)
#Extraction function that filters for key information
#Function will place certain info/data inside a dictionary
#Using regex/filtering techniques I will search for info and save it into the dictionary
#Dictionary keys will correspond to column features in the 2018 data set
#Desired structure => the [1] entries are dummy values; the goal is for them to hold values corresponding to each key
dictionary0 = {'Bridge Name':[1], #
'Bridge Opportunity: Bridge Type':[1],
'Bridge Opportunity: Span (m)':[1],
'Bridge Opportunity: Individuals Directly Served':[11],
'Form: Form Name':[1],
'Proposed Bridge Location (GPS) (Latitude)':[1],
'Proposed Bridge Location (GPS) (Longitude)':[1],
'Current crossing method':[1],
'Nearest all-weather crossing point':[1],
'Days per year river is flooded':[1],
'Flood duration during rainy season':[1],
'Market access blocked by river':[1],
'Education access blocked by river':[1],
'Health access blocked by river':[1],
'Other access blocked by river':[1],
'Primary occupations':[1],
'Primary crops grown':[1], #
'River crossing deaths in last 3 years':[1],
'River crossing injuries in last 3 years':[1],
'Incident descriptions':[1],
'4WD Accessibility':[1],
'Name of nearest city':[1],
'Name of nearest paved or sealed road':[1],
'Bridge classification':[11],
'Flag for Rejection':[1],
'Rejection Reason':[1],
'Bridge Type':[1],
'Estimated span (m)':[1],
'Height differential between banks':[1],
'Senior Engineering Review Conducted':[1],
'Elevation':[1],
               'people_served':[1]} #key words: number_range people
#key words to search for
# hour,
# months
B2P_2013_2014_V2 = pd.DataFrame(dictionary0)
B2P_2013_2014_V2.head()
```
# **Working with Fuzzy Wuzzy**
```
#Will attempt to use fuzzywuzzy to find certain information in the comments section
!pip install fuzzywuzzy
```
# **Testing Regular Expressions**
```
import re
# Searching for numbers -> I want to look for people served ->
find_people_range_numbers = re.findall(r'(^.*people+[ ])([0-9]+-[0-9]+)','people 2000-6000')
#Search for name John
find_people = re.findall(r'^.*John.*$','John')
find_dangerous = re.findall(r'\bDangerous\b','Dangerous dogs')
#Searching for Dangerous
find_123 = re.findall(r'^.*?\b(one|two|three)\b.*$','duck one\,two,three')
#Find number after a somewords
find_number = re.findall(r"([a-zA-Z]*[ ])+([0-9]+)",'wordsa 3243')
#Search for first and last name
find_first_last = re.findall(r"([a-zA-Z]+[ ])*([a-zA-Z]+)",'1221 John Rivera 213 dfsf fsdf')
#searching for range number followed by people - represented number of people served
find_people_served = re.findall(r'(^[0-9]+-[0-9]+[ ])(people)','2000-3000 people')
#searching for elevation followed by number: ex) Elevation:2101m
elevation = re.findall(r'(^.*Elevation+:)([0-9]+)','Elevation:2101')
# Combining multiple regex together
re_list = [r'[0-9]+',r'^.*John.*$']
generic_re = re.compile('|'.join(re_list))
print('find_people_range_numbers:',find_people_range_numbers,
      'find_123:',find_123,
      'find_number_after_words:',find_number,
      'find_people_served:',find_people_served)
print('combined multiple regex:',generic_re.findall('John 100- 12323 and then'))
print('elevation:',elevation)
```
# **Testing Regular Expression On A Comment**
```
test_comment = pd.DataFrame({'Comment1':['3000-6000 people directly served, Elevation:2101m, Cell:Rugogwe, Injuries/Death-Many peoples injured while trying to cross the river/ No person died while trying to cross the river Cross river on a normal day-300-600 people, Nearby city centers--Kizimyamuriro, Crossing River now-Simple timber bridge / hari uduti, Impossible/Dangerous to cross the river-3-6 months / Hagati y’amezi atatu n’atandatu,Travel to nearest safe bridge/river crossing-2-3 hours / Hagati yamasaha 2 n’amasaha 3,Hours walking to reach the Hospital->6 hours,Hours walking to reach the Health Center-0.5-1 hours,Hours of walking to reach the market-0.5-1 hours,Hours walking to reach Primary School-0.5-1 hours,Hours walking to reach Secondary School-0.5-1 hours,Hours walking to reach the Church-0.5-1 hours,Land within 50m of river bank-hilly / umusozi,Soil-hard rock/ Urutare,Sand-Available / birahaboneka,Gravel-Not available on site, but local government can provide/ Ntibihaboneka ariko inzego z’ubuyobozi zabitanga,Stone-Available / birahaboneka,Timber-Available / birahaboneka,Stone provided by-Sector/ umurenge,Sand Provided by-Sector/ umurenge,Gravel provided by-District/ Akarere,Timber provided by-District/ Akarere,Cement provided by-Can not be provided at the time / Ntaruhare rwamenyekana muri aka kanya.,Reinforcement steel provided by-Can not be provided at the time / Ntaruhare rwamenyekana muri aka kanya.,Land ownership-Government of Rwanda / Guverinoma y’u Rwanda, Private landowner/ Umuturage, Land ownership permission-Yes / Yego, General Comments-The proposed bridge location is near the traditional crossing point at a distance of approximately 1m.-The proposed bridge span is approximately 55m.-The level difference between two banks is 0.2m.-The space for foundation is sufficient-The free board between the lowest point of the proposed bridge and the highest flood level is sufficient-There is no confluence area near the place.-The river bed at the site is stable, there is no possibility of erosion.-The river bank of the site is not showing any sign of erosion and it is located between elevated areas.-The soil from the site is hard rock for one side and silt for the other side']})
test_comment
find_people_range_numbers = re.findall(r'(^.*people+[ ])([0-9]+-[0-9]+)','people 2000-6000')
test_comment['Comment1']
storage2 = {} # This line resets the dictionary so only run it once, unless you want to reset the dictionary
def extractor(code, text):
    global storage2
    text = str(text)
    code = str(code)
    find_people_served = re.findall('(^[0-9]+-[0-9]+[ ])(people)', text)
    storage2[code] = find_people_served
    return storage2
```
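The one-regex-per-pass approach above can be generalised to a table of named patterns applied in a single sweep. A hedged sketch (the patterns mirror the ones tested earlier; the project codes and comment text are illustrative):

```
import re
import pandas as pd

# Named patterns, one per target column in the parsed data set
patterns = {
    'people_served': r'([0-9]+-[0-9]+)\s+people',
    'elevation_m': r'Elevation:\s*([0-9]+)',
}

# Toy comments keyed by a hypothetical bridge project code
comments = pd.Series({
    '1007374': '3000-6000 people directly served, Elevation:2101m, Cell:Rugogwe',
    '1007375': 'Elevation:1850m, simple timber bridge',
})

# One extracted column per pattern; NaN where the pattern is absent
extracted = pd.DataFrame({
    name: comments.str.extract(pattern, expand=False)
    for name, pattern in patterns.items()
})
print(extracted)
```

Adding a new field then only means adding one entry to `patterns`, instead of writing another extractor function.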
# **B2P_2020_Oct_Comments: Extracting Number of People Served:**
```
#Extracting People Served by The Bridge Data
for code, comment in zip(B2P_2020_Oct_Comments['Bridge Opportunity: Project Code'], B2P_2020_Oct_Comments['Bridge Opportunity: Comments']):
    extractor(code, comment)
#The code above filters for people directly served using the extractor function, and saves everything into a dictionary where the key is a bridge code
#An empty list means the data was not found in the comment section
len(storage2),storage2
```
# **B2P_2020_Oct_Comments Extracting Elevation** (Builds on Previous Work, Dictionary Gets Updated)
```
def extractor1(code, text):
    global storage2
    text = str(text)
    code = str(code)
    elevation = re.findall(r'(^.*Elevation+:)([0-9]+)', text)
    storage2[code] += elevation
    return storage2
for code, comment in zip(B2P_2020_Oct_Comments['Bridge Opportunity: Project Code'], B2P_2020_Oct_Comments['Bridge Opportunity: Comments']):
    extractor1(code, comment)
storage2
```
# **B2P_2020_Oct_Comments: Extracting Cell**
```
```
# **B2P_2020_Oct_Comments: Extracting Information on Crossing River**
# ESML - `AutoMLFactory` and `ComputeFactory`
## PROJECT + DATA CONCEPTS + ENTERPRISE Datalake Design + DEV->PROD MLOps
- `1)ESML Project`: The ONLY thing you need to remember is your `Project number` (and `BRONZE, SILVER, GOLD` concept )
- ... read the earlier notebook
## ENTERPRISE Deployment of Models & Governance - MLOps at scale
- `3) DEV->TEST-PROD` (configs, compute, performance)
    - ESML has config for 3 environments: easy model DEPLOY across subscriptions and Azure ML Studio workspaces
- Save costs & time:
- `DEV` has cheaper compute performance for TRAIN and INFERENCE (batch, AKS)
- `DEV` has Quick-debug ML training (fast training...VS good scoring in TEST and PROD)
- How? ESML `AutoMLFactory` and `ComputeFactory`
- Where to config these?
- settings/dev_test_prod/`dev_test_prod_settings.json`
- settings/dev_test_prod/`train/*/automl/*`
# Azure ML Studio Workspace
- ESML will `Automap` and `Autoregister` Azure ML Datasets as: `IN, SILVER, BRONZE, GOLD`
```
import repackage
repackage.add("../azure-enterprise-scale-ml/esml/common/")
from esml import ESMLDataset, ESMLProject
p = ESMLProject() # Will search in ROOT for your copied SETTINGS folder '../../../settings', you should copy template settings from '../settings'
p.active_model = 11
p.ws = p.get_workspace_from_config() #2) Load DEV or TEST or PROD Azure ML Studio workspace
p.inference_mode = False
datastore = p.init()
```
# ESML `GOLD` Dataset
```
ds_01 = p.DatasetByName("ds01_diabetes")
print(ds_01.InData.name)
print(ds_01.Bronze.name)
print(ds_01.Silver.name)
#print(p.Gold.name)
df_01 = ds_01.Silver.to_pandas_dataframe()
ds_02 = p.DatasetByName("ds02_other")
df_02 = ds_02.Silver.to_pandas_dataframe()
df_gold1_join = df_01.join(df_02) # left join -> NULL on df_02
print("Diabetes shape: ", df_01.shape)
print(df_gold1_join.shape)
ds_gold_v1 = p.save_gold(df_01)
```
# Look at `GOLD` vLatest
```
import pandas as pd
df = p.Gold.to_pandas_dataframe()
df.head()
train, validate, test = p.split_gold_3(0.6, "Y") # Also registers the datasets in AZURE as M03_GOLD_TRAIN | M03_GOLD_VALIDATE | M03_GOLD_TEST
```
## 3) ESML TRAIN model -> See other notebook `esml_howto_2_train.ipynb`
- `AutoMLFactory, ComputeFactory`
- Get `Train COMPUTE` for `X` environment
- Get `Train Hyperparameters` for `X` environment (less crossvalidations in DEV etc)
## 4a) ESML Scoring compare: Promote model or not? Register
- `IF` newly trained model in `current` environment scores BETTER than existing model in `target` environment, then `new model` can be registered and promoted.
- `ValidationSet` comparison of offline/previous `AutoML run` for `DEV` environment
- For `DEV`, `TEST` or `PROD` environment
- Future roadmap: Also include `TestSet SCORING` comparison
```
from baselayer_azure_ml import AutoMLFactory
p.dev_test_prod = "dev" # Current env, new unregistered model A to validate
target_env = "test" # Target env. Existing registered model B - Does Model A score better than Model B?
print("SCORING DRIFT: If new model scores better in DEV (new data, or new code), we can promote this to TEST & PROD \n")
promote, m1_name, r1_id, m2_name, r2_run_id = AutoMLFactory(p).compare_scoring_current_vs_new_model(target_env)
print("New Model: {} in environment {}".format(m1_name, p.dev_test_prod))
print("Existing Model: {} in environment {}".format(m2_name,target_env))
if (promote and p.dev_test_prod == target_env):  # Can only register a model in same workspace (test->test) - need to retrain if going from dev->test
    AutoMLFactory(p).register_active_model(target_env)
```
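Stripped of the ESML wrappers, the promote-or-not step in `compare_scoring_current_vs_new_model` reduces to a metric comparison between the candidate and the incumbent. A generic sketch of that decision (not the ESML API; scores and threshold are illustrative):

```
def should_promote(new_score, current_score, higher_is_better=True, min_gain=0.0):
    """Promote the newly trained model only if it beats the registered one."""
    gain = (new_score - current_score) if higher_is_better else (current_score - new_score)
    return gain > min_gain

# Candidate (validation-set score in DEV) vs. the model registered in the target env
promote = should_promote(new_score=0.87, current_score=0.84, min_gain=0.01)
print(promote)  # True -- the candidate clears the promotion threshold
```

A `min_gain` threshold guards against promoting on noise-level score differences.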
# START 2) TEST env - `register a model` starting "offline", not an active training run?
### Alt 1) No ESMLProject dependency
```
import repackage
repackage.add("../azure-enterprise-scale-ml/esml/common/")
from azureml.core import Workspace
from baselayer_azure_ml import AutoMLFactory
ws = p.get_workspace_from_config() # Simulate you init a Azure ML Workspace
AutoMLFactory().register_active_model_in_ws(ws,"dev") # Simulate you register a model in Workspace
```
### Alt 2) ESMLProject dependency: `ENVIRONMENT Self aware` and `config aware`
- More `Future proof`: Features such as "able to register trained model in TARGET - from TEST to PROD without retraining"
```
import sys, os
sys.path.append(os.path.abspath("../common/"))  # NOQA: E402
from esml import ESMLDataset, ESMLProject
from baselayer_azure_ml import AutoMLFactory
from azureml.core import Workspace
p = ESMLProject() # Makes it "environment aware (dev,test,prod)", and "configuration aware"
ws = p.get_workspace_from_config()
p.init(ws)
p.dev_test_prod = "dev"
# ....train model....
model = AutoMLFactory(p).register_active_model(p.dev_test_prod)
```
### ..Model compared, promoted, register - ready for deployment
## 4b) ESML Loadtesting performance
- Using `GOLD_TEST` TestSet for AutoML to see which algorithm that is fastest, smallest size footprint
- For `DEV`, `TEST` or `PROD` environment
```
label = p.active_model["label"]
train, validate, test = p.split_gold_3() # Save as M03_GOLD_TRAIN | M03_GOLD_VALIDATE | M03_GOLD_TEST # Alt: train_data, test_data = p.Gold.random_split(percentage=0.8, seed=223)
test.head()
```
## 5a) ESML Deploy ONLINE, to AKS -> See other notebook
- Deploy "offline" from old `AutoML run` for `DEV` environment
- To → `DEV`, `TEST` or `PROD` environment
GOTO Notebook [`esml_howto_3_compare_and_deploy`](./esml_howto_3_compare_and_deploy)
## 5b) ESML `Deploy BATCH` pipeline
- Deploy same model "offline / previous" `AutoML Run` for `DEV` environment
- To → `DEV`, `TEST` or `PROD` environment
| github_jupyter |
```
from pioneer.das.api.platform import Platform
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
plt.rcParams["figure.figsize"] = (10,10)
#%matplotlib notebook
#Import a dataset
pf = Platform('/nas/pixset/exportedDataset/20200610_185206_rec_dataset_downtown05_exported')
# A platform is composed of sensors with the following naming convention 'typeOfSensor_position'.
# Letter signification for position acronyms: b=bottom, f=front, c=center, r=right and l=left
list(pf.sensors)
#Get Pixell sensor
pixell = pf.sensors['pixell_bfc']
#The Pixell recorded data available are the following
pixell.keys()
#Get Pixell's echoes (detections)
echoes = pixell['ech']
#Get Pixell's echoes (detections with distances, amplitudes, timestamps and flags) for a specific frame
frame=10
echoes[frame].raw
#Use the function point_cloud() to read the XYZ coordinates of each detection
frame=0
echoes[frame].point_cloud()
x=echoes[frame].point_cloud()[:,0]
y=echoes[frame].point_cloud()[:,1]
z=echoes[frame].point_cloud()[:,2]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x,y,z, s=1)
#Read Pixell full waveform trace data
traces = pixell['ftrr']
#Traces are acquired at two gain 'low' and 'high'
traces[frame].raw
#Read high gain traces
traces[frame].raw['high']['data']
#Read low gain traces
traces[frame].raw['low']['data']
#Plot traces from frame and channelIndex=750 (a Pixell is composed of 8*96=768 channels)
channelIndex=750
plt.plot(traces[frame].raw['high']['data'][channelIndex,:])
#Get a camera sensor
camera_bbfc = pf.sensors['flir_bbfc']
#List the camera's available datasources
camera_bbfc.keys()
#Plot camera image from the selected frame
plt.imshow(camera_bbfc['flimg'][frame].raw)
#Synchronize pixell echoes/traces and flir camera images datasources
sync = pf.synchronized(sync_labels=['flir_bbfc_flimg','pixell_bfc_ech','pixell_bfc_ftrr'], tolerance_us=1000e3)
#A synchronized platform is a collection of datasources (vs standard platform which is a collection of sensors containing data sources)
sync.sync_labels
### Draw new camera image ###
image_sample = sync[frame]['flir_bbfc_flimg']
image = image_sample.raw
plt.imshow(image)
### Get point cloud in the camera referential ###
echoes_sample = sync[frame]['pixell_bfc_ech']
point_cloud = echoes_sample.point_cloud(referential = 'flir_bbfc')
### Project points in 2D image coordinates ###
pts2d = image_sample.project_pts(point_cloud)
### Draw projected points ###
keep = np.where((pts2d[:,0] > 0) & (pts2d[:,0] < image.shape[1]-1) & (pts2d[:,1] > 0) & (pts2d[:,1] < image.shape[0]-1))[0]
if len(keep) > 0:
    plt.scatter(pts2d[keep,0], pts2d[keep,1], c=echoes_sample.amplitudes[keep], s=5)
```
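The `project_pts` call above maps 3D points in the camera referential to pixel coordinates. A minimal pinhole-camera sketch with an assumed intrinsic matrix (focal lengths `fx`, `fy` and principal point `cx`, `cy` here are made up; the real calibration lives in the platform's sensor config):

```
import numpy as np

def project_pinhole(points_3d, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0):
    """Project Nx3 camera-frame points (z forward) to Nx2 pixel coordinates."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.column_stack([u, v])

pts3d = np.array([[0.0, 0.0, 10.0],   # on the optical axis -> principal point
                  [1.0, 0.5, 5.0]])
print(project_pinhole(pts3d))
```

The in-image filtering step afterwards (the `np.where` on `pts2d` above) is needed because points behind or beside the camera project outside the image bounds.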
# Calculate Shapley values
Shapley values, as used in coalition game theory, were introduced by Lloyd Shapley in 1953.
[Scott Lundberg](http://scottlundberg.com/) applied Shapley values for calculating feature importance in [2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf).
If you want to read the paper, I recommend reading:
Abstract, 1 Introduction, 2 Additive Feature Attribution Methods, (skip 2.1, 2.2, 2.3), and 2.4 Classic Shapley Value Estimation.
Lundberg calls this feature importance method "SHAP", which stands for SHapley Additive exPlanations.
Here’s the formula for calculating Shapley values:
$ \phi_{i} = \sum_{S \subseteq M \setminus i} \frac{|S|! (|M| - |S| -1 )!}{|M|!} [f(S \cup i) - f(S)]$
A key part of this is the difference between the model’s prediction with the feature $i$, and the model’s prediction without feature $i$.
$S$ refers to a subset of features that doesn’t include the feature for which we're calculating $\phi_i$.
$S \cup i$ is the subset that includes features in $S$ plus feature $i$.
$S \subseteq M \setminus i$ in the $\Sigma$ symbol is saying, all sets $S$ that are subsets of the full set of features $M$, excluding feature $i$.
##### Options for your learning journey
* If you’re okay with just using this formula, you can skip ahead to the coding section below.
* If you would like an explanation for what this formula is doing, please continue reading here.
## Optional (explanation of this formula)
The part of the formula with the factorials calculates the number of ways to generate the collection of features, where order matters.
$\frac{|S|! (|M| - |S| -1 )!}{|M|!}$
#### Adding features to a Coalition
The following concepts come from coalition game theory, so when we say "coalition", think of it as a team, where members of the team are added, one after another, in a particular order.
Let’s imagine that we’re creating a coalition of features, by adding one feature at a time to the coalition, and including all $|M|$ features. Let’s say we have 3 features total. Here are all the possible ways that we can create this “coalition” of features.
<ol>
<li>$x_0,x_1,x_2$</li>
<li>$x_0,x_2,x_1$</li>
<li>$x_1,x_0,x_2$</li>
<li>$x_1,x_2,x_0$</li>
<li>$x_2,x_0,x_1$</li>
<li>$x_2,x_1,x_0$</li>
</ol>
Notice that for $|M| = 3$ features, there are $3! = 3 \times 2 \times 1 = 6$ possible ways to create the coalition.
#### marginal contribution of a feature
For each of the 6 ways to create a coalition, let's see how to calculate the marginal contribution of feature $x_2$.
<ol>
<li>Model’s prediction when it includes features 0,1,2, minus the model’s prediction when it includes only features 0 and 1.
$x_0,x_1,x_2$: $f(x_0,x_1,x_2) - f(x_0,x_1)$
<li>Model’s prediction when it includes features 0 and 2, minus the prediction when using only feature 0. Notice that feature 1 is added after feature 2, so it’s not included in the model.
$x_0,x_2,x_1$: $f(x_0,x_2) - f(x_0)$</li>
<li>Model's prediction including all three features, minus when the model is only given features 1 and 0.
$x_1,x_0,x_2$: $f(x_1,x_0,x_2) - f(x_1,x_0)$</li>
<li>Model's prediction when given features 1 and 2, minus when the model is only given feature 1.
$x_1,x_2,x_0$: $f(x_1,x_2) - f(x_1)$</li>
<li>Model’s prediction if it only uses feature 2, minus the model’s prediction if it has no features. When there are no features, the model’s prediction would be the average of the labels in the training data.
$x_2,x_0,x_1$: $f(x_2) - f( )$
</li>
<li>Model's prediction (same as the previous one)
$x_2,x_1,x_0$: $f(x_2) - f( )$
</li>
</ol>
Notice that some of these marginal contribution calculations look the same. For example, the first and third sequences, $f(x_0,x_1,x_2) - f(x_0,x_1)$ and $f(x_1,x_0,x_2) - f(x_1,x_0)$, give the same result. The same is true of the fifth and sixth. So we can use factorials to count the number of permutations that result in the same marginal contribution.
#### break into 2 parts
To get to the formula that we saw above, we can break up the sequence into two sections: the sequence of features before adding feature $i$; and the sequence of features that are added after feature $i$.
For the set of features that are added before feature $i$, we’ll call this set $S$. For the set of features that are added after feature $i$ is added, we’ll call this $Q$.
So, given the six sequences, and that feature $i$ is $x_2$ in this example, here’s what set $S$ and $Q$ are for each sequence:
<ol>
<li>$x_0,x_1,x_2$: $S$ = {0,1}, $Q$ = {}</li>
<li>$x_0,x_2,x_1$: $S$ = {0}, $Q$ = {1} </li>
<li>$x_1,x_0,x_2$: $S$ = {1,0}, $Q$ = {} </li>
<li>$x_1,x_2,x_0$: $S$ = {1}, $Q$ = {0} </li>
<li>$x_2,x_0,x_1$: $S$ = {}, $Q$ = {0,1} </li>
<li>$x_2,x_1,x_0$: $S$ = {}, $Q$ = {1,0} </li>
</ol>
So for the first and third sequences, these have the same set S = {0,1} and same set $Q$ = {}.
Another way to calculate that there are two of these sequences is to take $|S|! \times |Q|! = 2! \times 0! = 2$.
Similarly, the fifth and sixth sequences have the same set S = {} and Q = {0,1}.
Another way to calculate that there are two of these sequences is to take $|S|! \times |Q|! = 0! \times 2! = 2$.
#### And now, the original formula
To use the notation of the original formula, note that $|Q| = |M| - |S| - 1$.
Recall that to calculate that there are 6 total sequences, we can use $|M|! = 3! = 3 \times 2 \times 1 = 6$.
We’ll divide $|S|! \times (|M| - |S| - 1)!$ by $|M|!$ to get the proportion assigned to each marginal contribution.
This is the weight that will be applied to each marginal contribution, and the weights sum to 1.
So that’s how we get the formula:
$\frac{|S|! (|M| - |S| -1 )!}{|M|!} [f(S \cup i) - f(S)]$
for each set $S \subseteq M \setminus i$
We can sum up the weighted marginal contributions for all sets $S$, and this represents the importance of feature $i$.
You’ll get to practice this in code!
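Before that, here is a quick sanity check (separate from the exercises that follow) that for $|M| = 3$ features and feature $i = x_2$, the weights over all subsets $S$ really do sum to 1:

```python
from itertools import combinations
from math import factorial

M = 3                    # total number of features
other_features = [0, 1]  # features other than feature i = x_2

# Sum the weight |S|! (|M| - |S| - 1)! / |M|! over every subset S
total_weight = 0.0
for size_S in range(M):
    for S in combinations(other_features, size_S):
        total_weight += factorial(size_S) * factorial(M - size_S - 1) / factorial(M)
print(round(total_weight, 10))  # 1.0
```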
```
import sys
!{sys.executable} -m pip install numpy==1.14.5
!{sys.executable} -m pip install scikit-learn==0.19.1
!{sys.executable} -m pip install graphviz==0.9
!{sys.executable} -m pip install shap==0.25.2
import sklearn
import shap
import numpy as np
import graphviz
from math import factorial
```
## Generate input data and fit a tree model
We'll create data where features 0 and 1 form the "AND" operator, and feature 2 does not contribute to the prediction (because it's always zero).
```
# AND case (features 0 and 1)
N = 100
M = 3
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:1 * N//4, 1] = 1
X[:N//2, 0] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
# fit model
model = sklearn.tree.DecisionTreeRegressor(random_state=0)
model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```
### Calculate Shap values
We'll try to calculate the local feature importance of feature 0.
We have 3 features, $x_0, x_1, x_2$. For feature $x_0$, determine what the model predicts with or without $x_0$.
Subsets S that exclude feature $x_0$ are:
{}
{$x_1$}
{$x_2$}
{$x_1,x_2$}
We want to see what the model predicts with feature $x_0$ compared to the model without feature $x_0$:
$f(x_0) - f( )$
$f(x_0,x_1) - f(x_1)$
$f(x_0,x_2) - f(x_2)$
$f(x_0,x_1,x_2) - f(x_1,x_2)$
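These four subsets can also be generated programmatically; a small sketch, not part of the original notebook:

```python
from itertools import chain, combinations

# All subsets S of {x1, x2}, i.e. the feature set without x0
others = [1, 2]
subsets = list(chain.from_iterable(combinations(others, r) for r in range(len(others) + 1)))
print(subsets)  # [(), (1,), (2,), (1, 2)]
```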
## Sample data point
We'll calculate the local feature importance of a sample data point, where
feature $x_0 = 1$
feature $x_1 = 1$
feature $x_2 = 1$
```
sample_values = np.array([1,1,1])
print(f"sample values to calculate local feature importance on: {sample_values}")
```
## helper function
To make things easier, we'll use a helper function that takes the entire feature set M, and also a list of the features (columns) that we want, and puts them together into a 2D array.
```
def get_subset(X, feature_l):
    """
    Given a 2D array containing all feature columns,
    and a list of integers representing which columns we want,
    return a 2D array with just the subset of features desired
    """
    cols_l = []
    for f in feature_l:
        cols_l.append(X[:, f].reshape(-1, 1))
    return np.concatenate(cols_l, axis=1)
# try it out
tmp = get_subset(X,[0,2])
tmp[0:10]
```
## helper function to calculate permutation weight
This helper function calculates
$\frac{|S|! (|M| - |S| - 1)!}{|M|!}$
```
from math import factorial
def calc_weight(size_S, num_features):
    return factorial(size_S) * factorial(num_features - size_S - 1) / factorial(num_features)
```
Try it out when size of S is 2 and there are 3 features total.
The answer should be equal to $\frac{2! \times (3-2-1)!}{3!} = \frac{2 \times 1}{6} = \frac{1}{3}$
```
calc_weight(size_S=2,num_features=3)
```
## case A
Calculate the prediction of a model that uses features 0 and 1
Calculate the prediction of a model that uses feature 1
Calculate the difference (the marginal contribution of feature 0)
$f(x_0,x_1) - f(x_1)$
#### Calculate $f(x_0,x_1)$
```
# S_union_i
S_union_i = get_subset(X,[0,1])
# fit model
f_S_union_i = sklearn.tree.DecisionTreeRegressor()
f_S_union_i.fit(S_union_i, y)
```
Remember, for the sample input for which we'll calculate feature importance, we chose values of 1 for all features.
```
# This will throw an error
try:
    f_S_union_i.predict(np.array([1,1]))
except Exception as e:
    print(e)
```
The error message says:
>Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
So we'll reshape the data so that it represents a sample (a row), which means it has 1 row and 1 or more columns.
```
# feature 0 and feature 1 are both 1 in the sample input
sample_input = np.array([1,1]).reshape(1,-1)
sample_input
```
The prediction of the model when it has features 0 and 1 is:
```
pred_S_union_i = f_S_union_i.predict(sample_input)
pred_S_union_i
```
When feature 0 and feature 1 are both 1, the prediction of the model is 1
#### Calculate $f(x_1)$
```
# S
S = get_subset(X,[1])
f_S = sklearn.tree.DecisionTreeRegressor()
f_S.fit(S, y)
```
The sample input for feature 1 is 1.
```
sample_input = np.array([1]).reshape(1,-1)
```
The model's prediction when it is only training on feature 1 is:
```
pred_S = f_S.predict(sample_input)
pred_S
```
When feature 1 is 1, then the prediction of this model is 0.5. If you look at the data in X, this makes sense, because when feature 1 is 1, half of the time, the label in y is 0, and half the time, the label in y is 1. So on average, the prediction is 0.5
#### Calculate difference
```
diff_A = pred_S_union_i - pred_S
diff_A
```
#### Calculate the weight
Calculate the weight assigned to the marginal contribution. In this case, if this marginal contribution occurs 1 out of the 6 possible permutations of the 3 features, then its weight is 1/6
```
size_S = S.shape[1] # should be 1
weight_A = calc_weight(size_S, M)
weight_A # should be 1/6
```
## Quiz: Case B
Calculate the prediction of a model that uses features 0 and 2
Calculate the prediction of a model that uses feature 2
Calculate the difference
$f(x_0,x_2) - f(x_2)$
#### Calculate $f(x_0,x_2)$
```
# TODO
S_union_i = # ...
f_S_union_i = # ...
#f_S_union_i.fit(?, ?)
sample_input = # ...
pred_S_union_i = # ...
pred_S_union_i
```
Since we're using features 0 and 2, and feature 2 doesn't help with predicting the output, then the model really just depends on feature 0. When feature 0 is 1, half of the labels are 0, and half of the labels are 1. So the average prediction is 0.5
#### Calculate $f(x_2)$
```
# TODO
S = # ...
f_S = # ...
# f_S.fit(?, ?)
sample_input = # ...
pred_S = # ...
pred_S
```
Since feature 2 doesn't help with predicting the labels in y, and feature 2 is 0 for all 100 training observations, then the prediction of the model is the average of all 100 training labels. 1/4 of the labels are 1, and the rest are 0. So that prediction is 0.25
#### Calculate the difference in predictions
```
# TODO
diff_B = # ...
diff_B
```
#### Calculate the weight
```
# TODO
size_S = #... # is 1
weight_B = # ...
weight_B # should be 1/6
```
# Quiz: Case C
Calculate the prediction of a model that uses features 0,1 and 2
Calculate the prediction of a model that uses feature 1 and 2
Calculate the difference
$f(x_0,x_1,x_2) - f(x_1,x_2)$
#### Calculate $f(x_0,x_1,x_2) $
```
# TODO
S_union_i = # ...
f_S_union_i = # ...
# f_S_union_i.fit(?, ?)
sample_input = # ...
pred_S_union_i = # ...
pred_S_union_i
```
When we use all three features, the model is able to predict that if feature 0 and feature 1 are both 1, then the label is 1.
#### Calculate $f(x_1,x_2)$
```
# TODO
S = # ...
f_S = # ...
#f_S.fit(?, ?)
sample_input = # ...
pred_S = # ...
pred_S
```
When the model is trained on features 1 and 2, then its training data tells it that half of the time, when feature 1 is 1, the label is 0; and half the time, the label is 1. So the average prediction of the model is 0.5
#### Calculate difference in predictions
```
# TODO
diff_C = # ...
diff_C
```
#### Calculate weights
```
# TODO
size_S = # ...
weight_C = # ... # should be 2 / 6 = 1/3
weight_C
```
## Quiz: case D: remember to include the empty set!
The empty set is also a set. We'll compare how the model does when it has no features, and see how that compares to when it gets feature 0 as input.
Calculate the prediction of a model that uses features 0.
Calculate the prediction of a model that uses no features
Calculate the difference
$f(x_0) - f()$
#### Calculate $f(x_0)$
```
# TODO
S_union_i = # ...
f_S_union_i = # ...
#f_S_union_i.fit(?, ?)
sample_input = # ...
pred_S_union_i = # ...
pred_S_union_i
```
With just feature 0 as input, the model predicts 0.5
#### Calculate $f()$
**hint**: you don't have to fit a model, since there are no features to input into the model.
```
# TODO
# with no input features, the model will predict the average of the labels, which is 0.25
pred_S = # ...
pred_S
```
With no input features, the model's best guess is the average of the labels, which is 0.25
#### Calculate difference in predictions
```
# TODO
diff_D = # ...
diff_D
```
#### Calculate weight
We expect this to be: 0! * (3-0-1)! / 3! = 2/6 = 1/3
```
# TODO
size_S = # ...
weight_D = # ... # weight is 1/3
weight_D
```
# Calculate Shapley value
For a single sample observation, where feature 0 is 1, feature 1 is 1, and feature 2 is 1, calculate the shapley value of feature 0 as the weighted sum of the differences in predictions.
$\phi_{i} = \sum_{S \subseteq M \setminus i} weight_S \times (f(S \cup i) - f(S))$
```
# TODO
shap_0 = # ...
shap_0
```
## Verify with the shap library
The [shap](https://github.com/slundberg/shap) library is written by Scott Lundberg, the creator of Shapley Additive Explanations.
```
sample_values = np.array([1,1,1])
shap_values = shap.TreeExplainer(model).shap_values(sample_values)
print(f"Shapley value for feature 0 that we calculated: {shap_0}")
print(f"Shapley value for feature 0 is {shap_values[0]}")
print(f"Shapley value for feature 1 is {shap_values[1]}")
print(f"Shapley value for feature 2 is {shap_values[2]}")
```
## Quiz: Does this make sense?
The shap library outputs the shap values for features 0, 1 and 2. We can see that the Shapley value for feature 0 matches what we calculated, and that feature 1 is given the same importance as feature 0.
* Given that the training data is simulating an AND operation, do you think these values make sense?
* Do you think feature 0 and 1 are equally important, or is one more important than the other?
* Does the importance of feature 2 make sense as well?
* How does this compare to the feature importance that's built into scikit-learn?
## Answer
## Note
This method is general enough that it works for any model, not just trees. There is an optimized way to calculate this when the complex model being explained is a tree-based model. We'll look at that next.
## Solution
[Solution notebook](calculate_shap_solution.ipynb)
# 3.3 Conditions
## 3.3.1 - 3.3.3 If, Comparison Operators & Flow Charts
After this exercise unit you will be able to ...
+ implement program branches with an if statement
+ use comparison operators
+ read flow charts
+ build a program with an if statement based on a simply branched flow chart
## 3.3.1 If
'What if ...?' - you have surely asked yourself that before. Unlike in everyday life, where you usually cannot find out where the other decision would have led, in programming you can run through the different possibilities. Remember the pizza order from the last chapter. With ``if`` you could also offer your customers an item other than pizza.
<br>
Let's start with a simply branched example of ``if``. If you run the following code cell and enter <b>Pizza</b>, the output after the <b>if statement</b> is triggered. If you enter anything else, there is no further output via ``print()``:
```
choice = input('What would you like to eat?\n')
pizza_price = 6.5
if choice == 'Pizza':
    print(f'An excellent choice. That will be {pizza_price}, please.')
```
### How does this work?
<br>
This <b>comparison operator</b> is essential: ``==``
<br>
If <b>choice</b> matches what comes after ``==``, the print statement is executed.
<br>
Syntax: <font color = green>if this == that:</font>
Note the **colon** at the end of the if statement.
Also important is the so-called <b>indentation</b> - indenting what follows the if statement by 4 spaces - as with ``print(f'[...]')`` in the example. Alternatively, you can use a tab. Avoid mixing the two variants in one code cell, as that can lead to error messages.
Indentation with 4 spaces is Python-conformant. Jupyter Notebooks automatically insert a tab; other IDEs do not. That is why these notebooks mostly use indentation with a tab.
**With indentation you tell Python that the indented part is subordinate to the preceding code.**
Without indentation you get wrong outputs or even syntax errors.
>Indentation is essential in Python. While other programming languages use characters such as curly braces to determine which part of the program is subordinate to which other part, Python has freed itself from such characters. This avoids punctuation errors and makes the code easier to read.
**Alternatively**, for less text-heavy if statements you can write the if statement and the statement that follows it on a single line. This only works if you have no more than one line of code to execute after the if statement:
Example:
```
x = 100
y = 10
if x > y: print('x is greater than y')
```
You can also do arithmetic inside if statements (conditions) and use the result of the calculation as the condition. In this example, a fictitious product is only bought if there is enough money in the account (<b>deposit</b>):
```
price = 50
deposit = 100
if deposit - price > 0:
    print("Buy the fictitious product")
```
## 3.3.2 Comparison Operators
Comparison operators compare two values with each other. The result of the comparison is a <b>Boolean</b>, meaning the result is **either <font color = green>True</font> or <font color = darkred>False</font>**. If it is **<font color = green>True</font>, the condition of the if statement is met** and the indented block that follows is executed.
<br>
### 3.3.2 a) Equal
The comparison operator ``==`` checks whether the value (or the value of the variable) on the left equals the value (or the value of the variable) on the right.
Examples:
```
'Hello' == 'Hello'
fruit_list1 = ['Apple','Pear']
fruit_list2 = ['Apple','Pear']
fruit_list1 == fruit_list2
```
#### == compared with is
In the context of conditions, identity and membership operators can also be used (see "3.2.12 in, not in, is, is not (Membership & Identity Operators)").
You could also make comparisons with ``is``, but remember that unit:
``is`` does not compare the values but the references (addresses in memory).
Examples:
```
'Hello' is 'Hello'
print(id('Hello'))
```
Since the values are not stored in variables, they do not have different references. Hence there is only one reference to the value, and comparing it results in True.
```
fruit_list1 = ['Apple','Pear']
fruit_list2 = ['Apple','Pear']
fruit_list1 is fruit_list2
```
In the example above there is no content comparison either, but a comparison of the references of the stored values. Therefore only use ``is`` when you want to check whether two objects are one and the same (and thus have identical content). In all other cases use ``==``.
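A small sketch (not part of the original lesson) of the one case where ``is`` answers True for lists: two names bound to the very same object:

```python
# b is an alias of a, not a copy: same object, same reference
a = ['Apple', 'Pear']
b = a
print(a is b)  # True: identical references
print(a == b)  # True: identical contents

# list(a) creates a copy: equal contents, different reference
c = list(a)
print(a is c)  # False
print(a == c)  # True
```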
### 3.3.2 b) Not equal
``!=`` checks whether one thing does not correspond to the other.
Example:
```
choice = input('What would you like to eat?')
pizza_price = 6.5
if choice != 'Pizza':
    print('Are you sure? We make the best authentic Italian pizza!')
```
Instead of ``!=`` you could also use ``is not``, but the same applies here: it compares the references. Only use it for such comparisons when you are dealing with single values or actually want to check whether two objects are the same. This is useful when you want to check two texts or sequences of numbers for exactly identical content with ``is``, or for different content with ``is not``.
### 3.3.2 c) Greater/less than
``>`` checks whether the left side is greater than the right one.
``<`` checks whether the left side is less than the right one.
Example:
```
pizza_price = 6.5
spaghetti_price = 7
if pizza_price < spaghetti_price:
    print('Pizza is cheaper.')
```
If the pizza were more expensive than the spaghetti, the print statement would not be triggered. Nor would it be triggered if you replaced the comparison operator with ``>``. Test it!
### 3.3.2 d) Greater than or equal/less than or equal
``<=`` checks whether the left side is less than or equal to the right one.
``>=`` checks whether the left side is greater than or equal to the right one.
Example:
```
pizza_price = 7
spaghetti_price = 7
if pizza_price <= spaghetti_price:
    print('Pizza is cheaper or costs the same as spaghetti.')
```
### Overview of the comparison operators:
<br>
| Symbol | <p align="left">Comparison | <p align="left">Example |
|----------|-------|---------------|
| <p align="left">== | <p align="left">Does the left side equal the right one? | x == x => True |
| <p align="left">!= | <p align="left">Does the left side differ from the right one? | x != x => False |
| <p align="left">< | <p align="left">Is the left side less than the right one? | 2 < 3 => True |
| <p align="left">> | <p align="left">Is the left side greater than the right one? | 2 > 3 => False |
| <p align="left"><= | <p align="left">Is the left side less than or equal to the right one? | 3 <= 3 => True |
| <p align="left">>= | <p align="left">Is the left side greater than or equal to the right one? | 2 >= 3 => False |
<br>
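Each row of the table can be checked directly in a code cell; every comparison evaluates to a Boolean:

```python
print(3 == 3)  # True
print(3 != 3)  # False
print(2 < 3)   # True
print(2 > 3)   # False
print(3 <= 3)  # True
print(2 >= 3)  # False
```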
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise on if:</b></font> You want to make an application accessible to one specific user only.
Assign a first name of your choice to the variable <b>first_name</b>. Then let the user enter their first name. If the entered first name matches the one you set, an output should be produced.
<br>
<b>Desired output:</b>
Hello ``(the first name you set)``, welcome to your area.
</div>
```
first_name =
```
## 3.3.3 Flow Charts
<b>Flow charts</b> help you visualize program flows.
Consider the following example with <b>expenses</b>, <b>revenues</b> and an if statement:
```
expenses = 4000
revenues = 5000
if expenses >= revenues:
    print('You did not make a profit.')
print('You made a profit.')
```
Because the <b>revenues</b> exceed the <b>expenses</b> in this example, the comparison results in <font color = darkred>False</font> and the if statement is not executed. Python skips the if statement and continues executing the code that follows. Since the following code is not indented, Python does not associate it with the if statement.
**General flow chart for if:**
<img src="3-3-3-If.jpg">
Flow charts, also called flow diagrams, let you clarify program flows before you start coding, for better understanding. This is also known as mapping the <b>control flow</b>, or a program flow chart. **Circles** are used for the **start** and **end** of the program. **Branches** are drawn as a **rhombus**/**diamond**, and **program steps** as **rectangles**.
<br>
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise on control flow with if:</b></font> Build a program based on the following flow chart. What matters is not the exact wording of the output but implementing the specification in principle. This flow chart also shows you that <b>inputs</b> are visualized with a <b>parallelogram</b>.
<br>
<img src="3-3-3-If-Age.jpg">
</div>
<div class="alert alert-block alert-success">
<b>Fantastic!</b> With this knowledge of if statements, comparison operators and flow charts you have taken another big step.
In the next unit you will learn how to build even more branches into your code and thus let your program take even more different paths.
</div>
<div class="alert alert-block alert-info">
<h3>Key takeaways from this exercise:</h3>
* **If statements**
    * begin with ``if`` and end with ``:``
    * in between, a comparison with a Boolean return value takes place
    * the Boolean return values are the result of logical operators, such as comparison operators
    * Syntax: <font color = green>if x "logical operator / comparison operator" y:</font>
    * if the result is <font color = green>True</font>, the indented code block after them is executed
    * if the result is <font color = darkred>False</font>, only the **non**-indented code that follows is executed
    * indentation is done with 4 spaces (Python-conformant) or one tab
    * alternative syntax: <font color = green>if x "logical operator / comparison operator" y: do this</font>
<br>
* **Comparison operators**
    * return a Boolean (True or False)
    * ``==``: checks values/contents of objects for equality
    * ``!=``: checks values/contents of objects for inequality
    * ``<``: checks whether the left value is less than the right one
    * ``>``: checks whether the left value is greater than the right one
    * ``<=``: checks whether the left value is less than or equal to the right one
    * ``>=``: checks whether the left value is greater than or equal to the right one
<br>
* **Flow charts**
    * visualize the flow of a program (<b>control flow</b> / program flow chart)
    * also called flow diagrams
    * circles: start and end/stop of the program
    * rhombuses/diamonds: branches
    * rectangles: program steps
    * parallelograms: inputs
</div>
```
from sklearn.metrics import classification_report, plot_roc_curve
def print_roc(clf, X_test, y_test):
    y_pred = clf.predict(X_test)
    print(classification_report(y_test, y_pred))
    plot_roc_curve(clf, X_test, y_test)
    plt.plot([0, 1], [0, 1], '--y')  # chance-level diagonal
    plt.title('ROC curve')
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.show()
import itertools
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
def plot_confusion_matrix(model, X_test, y_test, normalize=False, cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    y_predict = model.predict(X_test)
    cm = confusion_matrix(y_test, y_predict)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        title = 'Normalized confusion matrix'
    else:
        title = 'Confusion matrix, without normalization'
    classes = np.arange(len(model.classes_))
    plt.figure()
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    np.set_printoptions(precision=2)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()
# plotting RPC curce and metrics
from sklearn.metrics import classification_report, plot_roc_curve
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
print(classification_report(y_test, y_pred))
plot_roc_curve(clf, X_test, y_test)
plt.plot([0, 1], [0, 1], '--y')  # chance-level diagonal
plt.title('ROC curve')
plt.show()
# Plotting precision and recall based on threshold
from sklearn.metrics import precision_recall_curve
y_probs = clf.predict_proba(X_test)[:,1]
precision, recall, thresholds = precision_recall_curve(y_test, y_probs)
def linear_inter(x1, y1, x2, y2, x):
    # linearly interpolate between (x1, y1) and (x2, y2) at position x
    m = (y2 - y1) / (x2 - x1)
    y = y1 + (x - x1) * m
    return y
recall_ = 0.7
for index, rec in enumerate(recall):
    if rec < recall_:
        break
precision_ = linear_inter(recall[index-1], precision[index-1], recall[index],
precision[index], recall_)
threshold_ = linear_inter(recall[index-1], thresholds[index-1], recall[index],
thresholds[index], recall_)
print(f'The precision on (recall={recall_}) is {precision_}')
print(f'The threshold on (recall={recall_}) is {threshold_}')
plt.plot(thresholds, precision[:-1], label='precision')
plt.plot(thresholds, recall[:-1], label='recall')
plt.xlabel('threshold')
plt.legend()
plt.show()
# simple Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
# imbalanced models
# https://machinelearningmastery.com/bagging-and-random-forest-for-imbalanced-classification/
# bagged decision trees with random undersampling for imbalanced classification
from imblearn.ensemble import BalancedBaggingClassifier
model = BalancedBaggingClassifier()
# class balanced random forest for imbalanced classification
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=10, class_weight='balanced')
# bootstrap class balanced random forest for imbalanced classification
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=10, class_weight='balanced_subsample')
# random forest with random undersampling for imbalanced classification
from imblearn.ensemble import BalancedRandomForestClassifier
model = BalancedRandomForestClassifier(n_estimators=10)
# easy ensemble for imbalanced classification
from imblearn.ensemble import EasyEnsembleClassifier
model = EasyEnsembleClassifier(n_estimators=10)
# For validation use
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from numpy import mean
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)
# evaluate model
metric = 'recall' # accuracy/roc_auc/precision/recall/f1 and more from sklearn.metrics.SCORERS.keys()
scores = cross_val_score(model, X, y, scoring=metric, cv=cv, n_jobs=-1)
# summarize performance
print(f'Mean {metric}: {round(mean(scores),3)}')
```
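The validation snippet above assumes that `model`, `X` and `y` already exist. For reference, a self-contained sketch of the same evaluation pattern on synthetic imbalanced data (the dataset, class weights and estimator settings here are illustrative only):

```python
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic 90/10 imbalanced binary classification problem
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=42)
model = RandomForestClassifier(n_estimators=10, class_weight='balanced', random_state=42)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)
scores = cross_val_score(model, X, y, scoring='recall', cv=cv, n_jobs=-1)
print(f'Mean recall: {round(mean(scores), 3)}')  # 30 scores: 10 splits x 3 repeats
```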
# Stochastic Diversity Evaluation for BCQ
Because BCQ is a generative model, it can generate different actions for the same state. This example code explores its capabilities of producing diverse, meaningful results.
```
import torch
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from scipy.spatial import distance
from sklearn.preprocessing import normalize
from tqdm.auto import tqdm
import pickle
import gc
import json
import h5py
import pandas as pd
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
# == recnn ==
import sys
sys.path.append("../../")
import recnn
cuda = torch.device('cuda')
frame_size = 10
tqdm.pandas()
env = recnn.data.FrameEnv('../../data/embeddings/ml20_pca128.pkl',
'../../data/ml-20m/ratings.csv', 10, 1)
perturbator = recnn.nn.models.bcqPerturbator(1290, 128, 256).to(cuda)
generator = recnn.nn.models.bcqGenerator(1290, 128, 512).to(cuda)
perturbator.load_state_dict(torch.load('../../models/bcq_perturbator.pt'))
generator.load_state_dict(torch.load('../../models/bcq_generator.pt'))
test_batch = next(iter(env.test_dataloader))
state, action, reward, next_state, done = recnn.data.get_base_batch(test_batch)
def rank(gen_action, metric):
    scores = []
    for i in env.movie_embeddings_key_dict.keys():
        scores.append([i, metric(env.movie_embeddings_key_dict[i], gen_action)])
    scores = list(sorted(scores, key=lambda x: x[1]))
    scores = scores[:10]
    ids = [i[0] for i in scores]
    dist = [i[1] for i in scores]
    return ids, dist
import faiss
# test indexes
indexL2 = faiss.IndexFlatL2(128)
indexIP = faiss.IndexFlatIP(128)
indexCOS = faiss.IndexFlatIP(128)
mov_mat = np.stack(env.movie_embeddings_key_dict.values()).astype('float32')
indexL2.add(mov_mat)
indexIP.add(mov_mat)
indexCOS.add(normalize(mov_mat, axis=1, norm='l2'))
def query(index, action, k=20):
D, I = index.search(action, k)
return D, I
# more than 5 actions don't work, the graphic looks ugly
# though you can change top k ranking
state = torch.repeat_interleave(state[0].unsqueeze(0), 5, dim=0)
sampled_actions = generator.decode(state)
perturbed_actions= perturbator(state, sampled_actions)
bcq_action = perturbed_actions
```
### Euclidean and cosine distances between generated actions for the same state
```
recnn.plot.pairwise_distances(bcq_action)
```
## PyUpSet
### bokeh version in the next section
```
# pip install pyupset
import pyupset as pyu
bcq_action = bcq_action.detach().cpu().numpy()
D, I = query(indexL2, bcq_action, 10)
cat = dict([['a' + str(k), []] for k in range(I.shape[0])])
for r in range(I.shape[0]):
    cat['a' + str(r)] = pd.DataFrame({'id': I[r]})
import warnings
warnings.filterwarnings("ignore")
pyu.plot(cat)
plt.suptitle('L2 intersections')
print()
D, I = query(indexIP, bcq_action, 10)
cat = dict([['a' + str(k), []] for k in range(I.shape[0])])
for r in range(I.shape[0]):
    cat['a' + str(r)] = pd.DataFrame({'id': I[r]})
pyu.plot(cat)
plt.suptitle('IP intersections')
print()
D, I = query(indexCOS, normalize(bcq_action, axis=1, norm='l2'), 10)
cat = dict([['a' + str(k), []] for k in range(I.shape[0])])
for r in range(I.shape[0]):
    cat['a' + str(r)] = pd.DataFrame({'id': I[r]})
pyu.plot(cat)
plt.suptitle('COS intersections')
print()
```
## Distance Matrices
```
state = torch.repeat_interleave(state[0].unsqueeze(0), 50, dim=0)
sampled_actions = generator.decode(state)
perturbed_actions= perturbator(state, sampled_actions)
bcq_action = perturbed_actions
recnn.plot.pairwise_distances(bcq_action)
```
cosine dist is pretty small
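For intuition, pairwise cosine distances like the ones plotted above can be computed with SciPy; this standalone sketch uses random placeholder vectors rather than the actual BCQ actions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# 5 stand-in "actions", 128-dimensional like the movie embeddings
actions = np.random.RandomState(0).randn(5, 128)
cos_dist = squareform(pdist(actions, metric='cosine'))
print(cos_dist.shape)  # (5, 5): zero diagonal, symmetric
```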
# Holoviews Chord diagram
```
# can someone do this please?
```
```
import io
import csv
import glob
import numpy as np
from PIL import Image
import tensorflow as tf
from object_detection.utils import dataset_util
COLOR_RED = 'red'
COLOR_GREEN = 'green'
COLOR_YELLOW = 'yellow'
COLOR_RED_NUM = 1
COLOR_GREEN_NUM = 2
COLOR_YELLOW_NUM = 3
def getLabel(init_label):
return {
'stop': COLOR_RED,
'go': COLOR_GREEN,
'goLeft': COLOR_GREEN,
'warning': COLOR_YELLOW,
'warningLeft': COLOR_YELLOW,
'stopLeft': COLOR_RED,
'goForward': COLOR_GREEN
}[init_label]
def getNumericLabel(text_label):
return {
COLOR_RED: COLOR_RED_NUM,
COLOR_GREEN: COLOR_GREEN_NUM,
COLOR_YELLOW: COLOR_YELLOW_NUM
}[text_label]
def getAnnotations(images_path, annotations_path):
global filenames, labels, boxes
with open(annotations_path) as annotations:
reader = csv.DictReader(annotations, delimiter=';')
for row in reader:
filenames.append(images_path + (row['Filename'].split('/'))[1])
labels.append(getLabel(row['Annotation tag']))
boxes_tmp = [row['Upper left corner X'], row['Upper left corner Y'], row['Lower right corner X'], row['Lower right corner Y']]
boxes.append(boxes_tmp)
labels_path = './data/Annotations/'
filenames = []
labels = []
boxes = []
images = []
images_path = './data/daySequence1/frames/'
annotations_path = labels_path + 'daySequence1/frameAnnotationsBOX.csv'
getAnnotations(images_path, annotations_path)
images_path = './data/daySequence2/frames/'
annotations_path = labels_path + 'daySequence2/frameAnnotationsBOX.csv'
getAnnotations(images_path, annotations_path)
print(filenames[0])
print('Number of files: ' + str(len(filenames)))
def generateImages():
    # yield (index, filename, raw JPEG bytes, width, height) for every frame
    for i, file in enumerate(filenames):
        with tf.gfile.GFile(file, 'rb') as fid:
            encoded_jpg = fid.read()
        encoded_jpg_io = io.BytesIO(encoded_jpg)
        image = Image.open(encoded_jpg_io)
        w, h = image.size
        yield i, file, encoded_jpg, w, h
def prepareTFRecord(height, width, filename, image, image_format, boxes, labels_text, labels):
xmin, ymin, xmax, ymax = list(map(int, boxes))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(filename),
'image/source_id': dataset_util.bytes_feature(filename),
'image/encoded': dataset_util.bytes_feature(image),
'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature([xmin/width]),
        'image/object/bbox/xmax': dataset_util.float_list_feature([xmax/width]),
        'image/object/bbox/ymin': dataset_util.float_list_feature([ymin/height]),
        'image/object/bbox/ymax': dataset_util.float_list_feature([ymax/height]),
'image/object/class/text': dataset_util.bytes_list_feature([labels_text]),
'image/object/class/label': dataset_util.int64_list_feature([labels]),
}))
return tf_example
#get TFRecords:
writer = tf.python_io.TFRecordWriter('train.record')
for i, f, img, w ,h in generateImages():
raw_filename = f.encode('utf8')
raw_label = labels[i].encode('utf8')
raw_label_num = getNumericLabel(labels[i])
TFRecord = prepareTFRecord(h, w, raw_filename, img, b'jpg', boxes[i], raw_label, raw_label_num)
writer.write(TFRecord.SerializeToString())
print("{} records saved.".format(i + 1))
writer.close()
print('TFRecords saved')
import sys
# These are the usual ipython objects, including this one you are creating
ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars']
# Get a sorted list of the objects and their sizes
sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True)
print(set(labels))
```
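One detail worth calling out from the record-building code: bounding boxes are stored normalized to [0, 1] fractions of the image size, which is what the TF Object Detection API expects. A standalone sketch of that normalization (the pixel values are made up):

```python
def normalize_box(box, width, height):
    # box is (xmin, ymin, xmax, ymax) in pixels (strings, as read from the CSV);
    # returns the coordinates as fractions of the image size
    xmin, ymin, xmax, ymax = map(int, box)
    return (xmin / width, ymin / height, xmax / width, ymax / height)

# hypothetical 640x480 frame with a traffic light at pixels (320, 120)-(352, 200)
norm = normalize_box(('320', '120', '352', '200'), 640, 480)
```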
# Image classification transfer learning demo
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
3. [Fine-tuning the Image classification model](#Fine-tuning-the-Image-classification-model)
4. [Training parameters](#Training-parameters)
5. [Start the training](#Start-the-training)
6. [Inference](#Inference)
## Introduction
Welcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned on the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon SageMaker image classification docker image, which need not be changed
```
%%time
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = 'ic-transfer-learning'
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version="latest")
print (training_image)
```
## Fine-tuning the Image classification model
The caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 and a maximum of about 800 images per category.
The image classification algorithm can take two types of input formats. The first is the [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the second is the [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files in both formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training, with the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
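For reference, an `.lst` file in the im2rec convention is a tab-separated listing of `index<TAB>label<TAB>relative_path`, one image per line. A hedged sketch of writing one (the image paths are made up for illustration):

```python
import os
import tempfile

def write_lst(rows, path):
    # rows: iterable of (index, label, relative_image_path) tuples,
    # written tab-separated per the im2rec .lst convention
    with open(path, 'w') as f:
        for idx, label, img in rows:
            f.write('{}\t{}\t{}\n'.format(idx, label, img))

lst_path = os.path.join(tempfile.mkdtemp(), 'example.lst')
write_lst([(0, 22, '022.buddha-101/022_0001.jpg'),
           (1, 22, '022.buddha-101/022_0002.jpg')], lst_path)
```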
```
import os
import urllib.request
import boto3
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# # caltech-256
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
upload_to_s3('validation', 'caltech-256-60-val.rec')
upload_to_s3('train', 'caltech-256-60-train.rec')
# Two channels: train and validation
s3train = 's3://{}/{}/train/'.format(bucket, prefix)
s3validation = 's3://{}/{}/validation/'.format(bucket, prefix)
# upload the rec files to the train and validation channels
!aws s3 cp caltech-256-60-train.rec $s3train --quiet
!aws s3 cp caltech-256-60-val.rec $s3validation --quiet
```
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section explains these parameters in detail.
## Training
Now that we are done with all of the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job.
### Training parameters
There are two kinds of parameters that need to be set for training. The first kind are the parameters for the training job itself. These include:
* **Training instance count**: The number of instances on which to run the training. When the count is greater than one, the image classification algorithm runs in a distributed setting.
* **Training instance type**: The type of machine on which to run the training. Typically, we use GPU instances for this training.
* **Output path**: This is the S3 folder in which the training output is stored.
```
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
ic = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
train_volume_size = 50,
train_max_run = 360000,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess)
```
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.
* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.
* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.
* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training will be done in mixed-precision mode and will be faster than float32.
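To make the mini-batch arithmetic concrete: with `num_training_samples=15420` and `mini_batch_size=128`, the per-epoch update count follows directly from a ceiling division (the two-host setup below is hypothetical, for illustrating the distributed case):

```python
import math

num_training_samples = 15420
mini_batch_size = 128
hosts = 2  # hypothetical distributed setup with two training instances

effective_batch = hosts * mini_batch_size            # samples consumed per update
steps_per_epoch = math.ceil(num_training_samples / effective_batch)
```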
```
ic.set_hyperparameters(num_layers=18,
use_pretrained_model=1,
image_shape = "3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype='float32')
```
## Input data specification
Set the data type and channels used for training
```
train_data = sagemaker.session.s3_input(s3train, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3validation, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
```
## Start the training
Start training by calling the fit method in the estimator
```
ic.fit(inputs=data_channels, logs=True)
```
# Inference
***
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator
```
ic_classifier = ic.deploy(initial_instance_count = 1,
instance_type = 'ml.m4.xlarge')
```
### Download test image
```
!wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg
file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)
```
### Evaluation
Evaluate the image through the network for inference. The network outputs class probabilities; typically, one selects the class with the maximum probability as the final class output.
**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
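For reference, extracting the winning class (or the top few runners-up) from the returned probability vector is a plain argmax/argsort, sketched here on a made-up four-class vector:

```python
import numpy as np

def top_k(probs, labels, k=3):
    # indices of the k largest probabilities, best first
    idx = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in idx]

probs = np.array([0.1, 0.6, 0.05, 0.25])   # made-up class probabilities
labels = ['ak47', 'bathtub', 'bear', 'beer-mug']
best = top_k(probs, labels, k=2)
```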
```
import json
import numpy as np
with open(file_name, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
ic_classifier.content_type = 'application/x-image'
result = json.loads(ic_classifier.predict(payload))
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 
'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
### Clean up
When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
```
ic_classifier.delete_endpoint()
```
# Storing Computed Metrics in S3 with AWS Glue
PyDeequ allows us to persist the metrics we computed on dataframes in a so-called MetricsRepository using AWS Glue. In the following example, we showcase how to store metrics in S3 and query them later on.
```
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext
glueContext = GlueContext(SparkContext.getOrCreate())
session = glueContext.spark_session
```
### We will be using the Amazon Product Reviews dataset
Specifically the Electronics dataset.
```
df_electronics = session.read.parquet("s3a://amazon-reviews-pds/parquet/product_category=Electronics/")
df_electronics.printSchema()
```
### Initialize Metrics Repository
We will be demoing with the `FileSystemMetricsRepository` class, but you can optionally use `InMemoryMetricsRepository` in exactly the same way, without creating a `metrics_file`, like so: `repository = InMemoryMetricsRepository(session)`.
**Metrics Repository allows us to store the metrics in json format on S3.**
```
s3_write_path = "s3://joanpydeequ/tmp/simple_metrics_tutorial.json"
import pydeequ
from pydeequ.repository import *
repository = FileSystemMetricsRepository(session, s3_write_path)
```
Each set of metrics that we computed needs to be indexed by a so-called `ResultKey`, which contains a timestamp and supports arbitrary tags in the form of key-value pairs. Let's set one up for this example:
```
key_tags = {'tag': 'general_electronics'}
resultKey = ResultKey(session, ResultKey.current_milli_time(), key_tags)
```
This tutorial builds upon the Analyzer and Metrics Repository tutorials. We make Deequ write and store our metrics in S3 by adding the `useRepository` and `saveOrAppendResult` methods.
```
from pydeequ.analyzers import *
analysisResult = AnalysisRunner(session) \
.onData(df_electronics) \
.addAnalyzer(Size()) \
.addAnalyzer(Completeness("review_id")) \
.addAnalyzer(ApproxCountDistinct("review_id")) \
.addAnalyzer(Mean("star_rating")) \
.addAnalyzer(Distinctness("customer_id")) \
.addAnalyzer(Correlation("helpful_votes","total_votes")) \
.addAnalyzer(ApproxQuantile("star_rating",.5)) \
.useRepository(repository) \
.saveOrAppendResult(resultKey) \
.run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(session, analysisResult)
analysisResult_df.show()
```
### We can now load it back from the Metrics Repository
PyDeequ now executes the verification as usual and additionally stores the metrics under our specified key. Afterwards, we can retrieve the metrics from the repository in different ways. We can, for example, directly load the metrics stored under our result key as follows:
```
analysisResult_metRep = repository.load() \
.before(ResultKey.current_milli_time()) \
.getSuccessMetricsAsDataFrame()
analysisResult_metRep.show()
```
## Great, we got our results!
Let us take a closer look at the data distribution in the star-rating column. We use the `filter` method to partition the table into two: one table containing the below-average star ratings [1, 3], and a second containing the higher-rated scores.
```
lower_rating = df_electronics.filter("star_rating < 4")
higher_rating = df_electronics.filter("star_rating >= 4")
```
**We can find the correlation between helpful and total votes, specifically between higher and lower ratings.**
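As a reminder of what Deequ's `Correlation` analyzer reports: it is the Pearson coefficient. A plain NumPy version on toy vote counts (not the review dataset) looks like this:

```python
import numpy as np

def pearson(x, y):
    # Pearson r: dot product of centered vectors over the product of their norms
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# toy helpful/total vote counts, for illustration only
helpful = np.array([0., 1., 2., 4., 8.])
total = np.array([1., 2., 3., 5., 10.])
r = pearson(helpful, total)
```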
```
key_tags_2 = {'tag': 'star_rating[1-3]'}
resultKey = ResultKey(session, ResultKey.current_milli_time(), key_tags_2)
analysisResult = AnalysisRunner(session) \
.onData(lower_rating) \
.addAnalyzer(Size()) \
.addAnalyzer(ApproxCountDistinct("review_id")) \
.addAnalyzer(Mean("star_rating")) \
.addAnalyzer(Correlation("helpful_votes","total_votes")) \
.addAnalyzer(Compliance('range','star_rating > 0 AND star_rating < 4')) \
.useRepository(repository) \
.saveOrAppendResult(resultKey) \
.run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(session, analysisResult)
analysisResult_df.show()
key_tags_3 = {'tag': 'star_rating[4-5]'}
resultKey = ResultKey(session, ResultKey.current_milli_time(), key_tags_3)
analysisResult = AnalysisRunner(session) \
.onData(higher_rating) \
.addAnalyzer(Size()) \
.addAnalyzer(ApproxCountDistinct("review_id")) \
.addAnalyzer(Mean("star_rating")) \
.addAnalyzer(Correlation("helpful_votes","total_votes")) \
.addAnalyzer(Compliance('range','star_rating >=4')) \
.useRepository(repository) \
.saveOrAppendResult(resultKey) \
.run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(session, analysisResult)
analysisResult_df.show()
```
### Now we should see three different tags when we load it back from Metrics Repository
```
analysisResult_metRep = repository.load() \
.before(ResultKey.current_milli_time()) \
.getSuccessMetricsAsDataFrame()
analysisResult_metRep.show()
```
There seems to be a slightly higher correlation between helpful and total votes for the higher-rated instances than for the lower-rated ones!
By leveraging the metrics repository file, all the analysis on the data is now saved within your S3 bucket for future reference!
## Convolutional Neural Networks
---
In this notebook, we visualize four activation maps in a CNN layer.
### 1. Import the Image
```
import cv2
import scipy.misc
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'images/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# resize to smaller
small_img = scipy.misc.imresize(gray_img, 0.3)
# rescale entries to lie in [0,1]
small_img = small_img.astype("float32")/255
# plot image
plt.imshow(small_img, cmap='gray')
plt.show()
```
### 2. Specify the Filters
```
import numpy as np
# TODO: Feel free to modify the numbers here, to try out another filter!
# Please don't change the size of the array ~ :D
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
### do not modify the code below this line ###
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = [filter_1, filter_2, filter_3, filter_4]
# visualize all filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
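As an aside, you can preview what `filter_1` responds to with a hand-rolled NumPy cross-correlation before involving Keras (a 'valid'-mode sketch on a synthetic vertical edge; no deep-learning library needed):

```python
import numpy as np

def correlate2d_valid(img, kern):
    # slide the kernel over the image; 'valid' mode, so no padding
    kh, kw = kern.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r+kh, c:c+kw] * kern)
    return out

# a dark-to-light vertical edge: filter_1 (left half -1, right half +1)
# responds most strongly where the edge sits in the middle of its window
edge = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])
filter_1 = np.array([[-1, -1, 1, 1]] * 4)
response = correlate2d_valid(edge, filter_1)
```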
### 3. Visualize the Activation Maps for Each Filter
```
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
import matplotlib.cm as cm
# plot image
plt.imshow(small_img, cmap='gray')
# define a neural network with a single convolutional layer with one filter
model = Sequential()
model.add(Convolution2D(1, (4, 4), activation='relu', input_shape=(small_img.shape[0], small_img.shape[1], 1)))
# apply convolutional filter and return output
def apply_filter(img, index, filter_list, ax):
    # set the weights of the filter in the convolutional layer to filter_list[index]
    model.layers[0].set_weights([np.reshape(filter_list[index], (4, 4, 1, 1)), np.array([0])])
    # plot the corresponding activation map
    ax.imshow(np.squeeze(model.predict(np.reshape(img, (1, img.shape[0], img.shape[1], 1)))), cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# visualize all activation maps
fig = plt.figure(figsize=(20, 20))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
apply_filter(small_img, i, filters, ax)
ax.set_title('Activation Map for Filter %s' % str(i+1))
```
```
import pandas as pd
df = pd.DataFrame({'Concentration':[200,200,200, 100, 100, 100, \
50, 50, 50, 25, 25, 25, \
12.5, 12.5, 12.5], \
'Absorbance':[2.6502,2.6871,2.7100, 1.4964,1.4806,1.5028, \
0.6462,0.6684,0.6821, 0.2518,0.2514,0.2608, \
0.0874,0.0813,0.0836]})
df
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# sns.boxplot(x="Concentration", y="Absorbance", data=df, ax=ax)
sns.regplot(x="Concentration", y="Absorbance", data=df, ax=ax, scatter=True)
plt.show()
tips = sns.load_dataset("tips")
fig, ax = plt.subplots()
sns.boxplot(x="size", y="tip", data=tips, ax=ax)
sns.regplot(x="size", y="tip", data=tips, ax=ax, scatter=False)
plt.show()
tips
df = pd.DataFrame({'Concentration':[200,200,200, 100, 100, 100, \
50, 50, 50, 25, 25, 25, \
12.5, 12.5, 12.5], \
'Absorbance':[2.6502,2.6871,2.7100, 1.4964,1.4806,1.5028, \
0.6462,0.6684,0.6821, 0.2518,0.2514,0.2608, \
0.0874,0.0813,0.0836]})
# load packages
import pandas as pd
import seaborn as sns
# load data file
d = pd.read_csv("https://reneshbedre.github.io/assets/posts/anova/twowayanova.txt", sep="\t")
# reshape the d dataframe suitable for statsmodels package
# you do not need to reshape if your data is already in stacked format. Compare d and d_melt tables for detail
# understanding
d_melt = pd.melt(d, id_vars=['Genotype'], value_vars=['1_year', '2_year', '3_year'])
# replace column names
d_melt.columns = ['Genotype', 'years', 'value']
d_melt
# Genotype years value
# 0 A 1_year 1.53
# 1 A 1_year 1.83
# 2 A 1_year 1.38
# 3 B 1_year 3.60
# 4 B 1_year 2.94
# generate a boxplot to see the data distribution by genotypes and years. Using boxplot, we can easily detect the
# differences between different groups
sns.boxplot(x="Genotype", y="value", hue="years", data=d_melt, palette="Set3")
import pandas as pd
a = pd.DataFrame({'Samples':['Plant', 'Plant', 'Plant', 'Plant', 'Plant', \
'Water', 'Water', 'Water', 'Water', 'Water', \
'Ampicillin', 'Ampicillin', 'Ampicillin', 'Ampicillin', 'Ampicillin', \
'Plant', 'Plant', 'Plant', 'Plant', 'Plant', \
'Water', 'Water', 'Water', 'Water', 'Water', \
'Cefazolin', 'Cefazolin', 'Cefazolin', 'Cefazolin', 'Cefazolin', \
'Plant', 'Plant', 'Plant', 'Plant', 'Plant', \
'Water', 'Water', 'Water', 'Water', 'Water', \
'Ampicillin', 'Ampicillin', 'Ampicillin', 'Ampicillin', 'Ampicillin', \
'Plant', 'Plant', 'Plant', 'Plant', 'Plant', \
'Water', 'Water', 'Water', 'Water', 'Water', \
'Cefazolin', 'Cefazolin', 'Cefazolin', 'Cefazolin', 'Cefazolin'],
'Bacteria':['E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'E. coli', 'E. coli', 'E. coli', 'E. coli', 'E. coli', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', \
'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus', 'S. aureus'],
'value':[8, 8, 8, 8, 8, \
6, 6, 6, 6, 6, \
14, 14, 14, 14, 14, \
6, 6, 6, 6, 6, \
6, 6, 6, 6, 6, \
22, 21, 21, 21, 21, \
6, 6, 6, 6, 6, \
6, 6, 6, 6, 6, \
24, 24, 24, 24, 23, \
6, 6, 6, 6, 6, \
6, 6, 6, 6, 6, \
20, 21, 21, 20, 21]})
#ZOI in mm
a
a.head()
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(8,5), dpi=1000)
sns.boxplot(x="Samples", y="value", hue="Bacteria", data=a)
plt.title('Summary of ZOIs per sample and bacteria strain')
# load packages
import statsmodels.api as sm
from statsmodels.formula.api import ols
# Ordinary Least Squares (OLS) model
# C(Genotype):C(years) represent interaction term
model = ols('value ~ C(Samples) + C(Bacteria) + C(Samples):C(Bacteria)', data=a).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
anova_table
# load packages
from statsmodels.stats.multicomp import pairwise_tukeyhsd
# perform multiple pairwise comparison (Tukey HSD)
m_comp = pairwise_tukeyhsd(endog=a['value'], groups=a['Samples'], alpha=0.05)
print(m_comp)
import scipy.stats as stats
w, pvalue = stats.shapiro(model.resid)
print(w, pvalue)
b = a[a['Samples'] == 'Plant']
c = a[a['Samples'] == 'Water']
d = a[a['Samples'] == 'Ampicillin']
e = a[a['Samples'] == 'Cefazolin']
# load packages
import scipy.stats as stats
w, pvalue = stats.bartlett(b['value'], c['value'], d['value'], e['value'])
print(w, pvalue)
#5.687843565012841 0.1278253399753447
import pandas
from scipy.stats import mstats
import numpy as np
# Data = pandas.read_csv("CSVfile.csv")
# Col_1 = Data['Colname1']
# Col_2 = Data['Colname2']
# Col_3 = Data['Colname3']
# Col_4 = Data['Colname4']
print("Kruskal-Wallis H-test:")
H, pval = mstats.kruskalwallis(np.array(b['value']), np.array(c['value']), np.array(d['value']), np.array(e['value']))
print("H-statistic:", H)
print("P-Value:", pval)
if pval < 0.05:
    print("Reject NULL hypothesis - significant differences exist between groups.")
else:
    print("Fail to reject NULL hypothesis - no significant difference between groups.")
import numpy as np
l = np.array(b['value'])
l
b
# load packages
from statsmodels.stats.multicomp import pairwise_tukeyhsd
# perform multiple pairwise comparison (Tukey HSD)
m_comp = pairwise_tukeyhsd(endog=a['value'], groups=a['Bacteria'], alpha=0.05)
print(m_comp)
import scipy.stats as stats
w, pvalue = stats.shapiro(model.resid)
print(w, pvalue)
f = a[a['Bacteria'] == 'E. coli']
g = a[a['Bacteria'] == 'S. aureus']
import scipy.stats as stats
w, pvalue = stats.bartlett(f['value'], g['value'])
print(w, pvalue)
# load packages
import pandas as pd
# load data file
d = pd.read_csv("https://reneshbedre.github.io/assets/posts/anova/onewayanova.txt", sep="\t")
# generate a boxplot to see the data distribution by treatments. Using boxplot, we can easily detect the differences
# between different treatments
d.boxplot(column=['A', 'B', 'C', 'D'], grid=False)
# load packages
import scipy.stats as stats
# stats f_oneway functions takes the groups as input and returns F and P-value
fvalue, pvalue = stats.f_oneway(d['A'], d['B'], d['C'], d['D'])
print(fvalue, pvalue)
# 17.492810457516338 2.639241146210922e-05
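# As an illustrative cross-check (not part of the original analysis), the
# one-way F statistic is MS_between / MS_within and can be computed by hand:
def f_oneway_manual(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))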
# get ANOVA table as R like output
import statsmodels.api as sm
from statsmodels.formula.api import ols
# reshape the d dataframe suitable for statsmodels package
d_melt = pd.melt(d.reset_index(), id_vars=['index'], value_vars=['A', 'B', 'C', 'D'])
# replace column names
d_melt.columns = ['index', 'treatments', 'value']
# Ordinary Least Squares (OLS) model
model = ols('value ~ C(treatments)', data=d_melt).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
anova_table
d_melt
d
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from collections import Counter as ct
'''CONSTANTS'''
N = 500 #number of agents (number of stores)
M = 1 #minimum number of edges (distance and performance)
'''AUXILIARY FUNCTIONS'''
# degreeDistribution(g)
## Takes in the graph and plots its degree distribution
def degreeDistribution(g):
klist = [d for n, d in g.degree()]
kdist = ct(klist)
X = list(kdist.keys())
Y = list(kdist.values())
plt.loglog(X,Y,lw=0,marker='.',color='g')
plt.xlabel('k',fontname='Times New Roman',fontsize=18)
plt.ylabel('count',fontname='Times New Roman',fontsize=18)
return
# coloredGraph(g)
## Takes in the graph and a list of colors and plots the nodes with color
def coloredGraph(g,cl):
nx.draw_networkx_nodes(g,pos,node_color=cl,node_size=50,alpha=0.4)
nx.draw_networkx_edges(g,pos,width=1.0,alpha=0.5,color='#CCCCCC',lw=0.1)
return
# statesToColor(l)
# Takes in a list of states and converts to color
def statesToColor(l):
colorhex = []
for i in range(len(l)):
if l[i] == 0:
colorhex.append('#000000') #black nodes
elif l[i] == 1:
colorhex.append('#008000') #green nodes
return colorhex
'''MAIN FUNCTION'''
#STEP 1: Create a BA network with N nodes and M edges
G = nx.barabasi_albert_graph(N, M)
pos=nx.spring_layout(G)
### OPTIONAL: Draw the graph
### Uncomment the block below to plot
'''
nx.draw(G, with_labels=True)
plt.show()
#'''
### OPTIONAL: Check the degree distribution
### Uncomment the block below to check the degree distribution
'''
degreeDistribution(G)
# '''
#STEP 2: Initialize the states of the nodes
S = np.random.randint(0,2,size=N)
Scolors = statesToColor(S)
coloredGraph(G,Scolors)
```
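The `Counter`-based tally inside `degreeDistribution` can be sanity-checked without networkx or a plotting backend. A minimal sketch, using an illustrative hand-made degree list (not output from the simulation above):

```python
from collections import Counter

# Illustrative degree list, standing in for [d for n, d in g.degree()]
klist = [1, 1, 1, 2, 2, 3, 5]

kdist = Counter(klist)      # degree -> number of nodes with that degree
X = sorted(kdist)           # distinct degrees
Y = [kdist[k] for k in X]   # counts, aligned with X

assert sum(Y) == len(klist)   # every node is tallied exactly once
print(list(zip(X, Y)))        # [(1, 3), (2, 2), (3, 1), (5, 1)]
```

For the log-log scatter the order of points does not matter, but sorting the degrees makes the lists easier to inspect.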
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%203%20-%20Lesson%202c.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import json
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
-O /tmp/sarcasm.json
vocab_size = 1000
embedding_dim = 16
max_length = 120
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size = 20000
with open("/tmp/sarcasm.json", 'r') as f:
datastore = json.load(f)
sentences = []
labels = []
urls = []
for item in datastore:
sentences.append(item['headline'])
labels.append(item['is_sarcastic'])
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalMaxPooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 50
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=1)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
model.save("test.h5")
```
### Data for Interactive Figure
```
DT <- read.table("../Data/All_data.txt")
names(DT)
head(DT)
levels(DT$Disease)
DT$Dis_lab <- DT$Disease
levels(DT$Dis_lab) <- c("",
"Cardiovasc.\n& circulatory",
"Chronic\nrespiratory",
"Cirrhosis",
"Congenital\nanomalies",
"Diabetes, urin.\nmale infertility",
"Common\ninfect. dis.",
"Digestive\ndis.",
"Gynecol.\ndis.",
"Hemoglob. &\nhemolytic\nanemia",
"Hepatitis",
"HIV",
"Leprosy",
"Malaria",
"Maternal\ndisorders",
"Mental and\nbehavioral",
"Musculosk.\ndisorders",
"Neglected trop.\ndiseases.",
"Neonatal\ndisorders",
"Neoplasms",
"Neurological\ndisorders",
"Nutritional\ndeficiencies",
"Oral\ndisorders",
"Sense organ\ndiseases",
"STD",
"Skin and\nsubcutan.",
"Sudden infant\ndeath synd.",
"Tuberculosis")
DT$Dis_tooltip <- DT$Disease
levels(DT$Dis_tooltip) <- c("All diseases",
"Cardiovascular and circulatory diseases",
"Chronic respiratory diseases",
"Cirrhosis of the liver",
"Congenital anomalies",
"Diabetes, urinary diseases and male infertility",
"Common infectious diseases",
"Digestive diseases",
"Gynecological diseases",
"Hemoglobinopathies and hemolytic anemias",
"Hepatitis",
"HIV",
"Leprosy",
"Malaria",
"Maternal disorders",
"Mental and behavioral disorders",
"Musculoskeletal disorders",
"Neglected tropical diseases",
"Neonatal disorders",
"Neoplasms",
"Neurological disorders",
"Nutritional deficiencies",
"Oral disorders",
"Sense organ diseases",
"Sexually transmitted diseases excluding HIV",
"Skin and subcutaneous diseases",
"Sudden infant death syndrome",
"Tuberculosis")
DT$regs_lab <- DT$Region
levels(DT$regs_lab) <- c("World",
"Eastern Europe and Central Asia",
"High-income countries",
"Latin America and Caribbean",
"Non-high-income countries",
"North Africa and Middle East",
"South Asia",
"Southeast Asia, East Asia and Oceania",
"Sub-Saharan Africa")
Mgbd <- read.table("../Data/27_gbd_groups.txt")
Mgbd$tooltip <- Mgbd$x
levels(Mgbd$tooltip) <- levels(DT$Dis_tooltip)[-1]
Mgbd$tooltip
```
### Adding observed data for non-simulated diseases
```
unique(DT$Disease[is.na(DT$Nb_RCTs_med)])
library(data.table)
data <- read.table("/media/igna/Elements/HotelDieu/Cochrane/MappingRCTs_vs_Burden/database_RCTs_regions_27diseases.txt")
Lgbd <- lapply(as.character(data$GBD27),function(x){as.numeric(unlist(strsplit(x,"&")))})
regs <- sort(unique(unlist(strsplit(as.character(data$Regions),"&"))))
LR <- lapply(regs,function(x){1:nrow(data)%in%grep(x,data$Regions)})
LR <- do.call('cbind',LR)
LR <- data.table(LR)
LR$TrialID <- data$TrialID
#Nb of patients per region per trial
# Suppress the sample size of trials with sample size below 10 or above 200,000
data$Sample[data$Sample<10 | data$Sample>200000] <- NA
#Nb countries per region per trial to distribute sample size equally across countries
nb_ctrs <- lapply(strsplit(as.character(data$Nb_ctr_per_reg),'&'),as.numeric)
RGs <-strsplit(as.character(data$Regions),'&')
pats <- data.frame(TrialID = rep(data$TrialID,sapply(nb_ctrs,length)),
Nb_ctrs = unlist(nb_ctrs),
Region = unlist(RGs),
Tot_sample = rep(data$Sample,sapply(nb_ctrs,length)))
pats$tot_ctrs <- rep(sapply(nb_ctrs,sum),sapply(nb_ctrs,length))
pats$sample_per_reg <- pats$Tot_sample*pats$Nb_ctrs/pats$tot_ctrs
pats <- data.table(pats)
setkey(pats,TrialID)
dis <- which(Mgbd$x%in%unique(DT$Disease[is.na(DT$Nb_RCTs_med)]))
dis
A <- list()
for(i in 1:length(dis)){
d <- dis[i]
repl <- data.table(
TrialID = data$TrialID,
recl_dis = as.numeric(unlist(lapply(Lgbd,function(x){d%in%x}))),
recl_gbd = as.numeric(unlist(lapply(Lgbd,function(x){length(x)>0})))
)
setkey(repl,TrialID)
replpats <- merge(pats,repl)
setkey(replpats,Region)
#Output data
df <- data.table(Region=c(sort(regs),"All","Non-HI"),Dis=rep(c("dis","all"),each=9),RCTs=as.integer(0),Patients=as.numeric(0))
    # Per region
    # Number of trials per region for the disease and relevant to GBD
df[Dis=="dis" & Region%in%regs,RCTs:=table(replpats[recl_dis==1,Region])]
df[Dis=="all" & Region%in%regs,RCTs:=table(replpats[recl_gbd>=1,Region])]
    # Number of patients per region for the disease and relevant to GBD
df[Dis=="dis" & Region%in%regs,Patients:=replpats[recl_dis==1,][regs,sum(sample_per_reg,na.rm=TRUE),by=.EACHI]$V1]
df[Dis=="all" & Region%in%regs,Patients:=replpats[recl_gbd>=1,][regs,sum(sample_per_reg,na.rm=TRUE),by=.EACHI]$V1]
    # Worldwide
    # Number of trials worldwide for the disease and relevant to GBD
df[Dis=="dis" & Region=="All",RCTs:=sum(repl$recl_dis)]
df[Dis=="all" & Region=="All",RCTs:=sum(repl$recl_gbd>=1)]
    # Number of patients worldwide for the disease and relevant to GBD
df[Dis=="dis" & Region=="All",Patients:=sum(replpats[recl_dis==1,sample_per_reg],na.rm=TRUE)]
df[Dis=="all" & Region=="All",Patients:=sum(replpats[recl_gbd>=1,sample_per_reg],na.rm=TRUE)]
    # Non-high-income countries
    # Number of trials in non-high-income countries for the disease and relevant to GBD
df[Dis=="dis" & Region=="Non-HI",RCTs:=replpats[Region!="High-income",][recl_dis==1,][!duplicated(TrialID),.N]]
df[Dis=="all" & Region=="Non-HI",RCTs:=replpats[Region!="High-income",][recl_gbd>=1,][!duplicated(TrialID),.N]]
    # Number of patients in non-high-income countries for the disease and relevant to GBD
df[Dis=="dis" & Region=="Non-HI",Patients:=sum(replpats[Region!="High-income",][recl_dis==1,sample_per_reg],na.rm=TRUE)]
df[Dis=="all" & Region=="Non-HI",Patients:=sum(replpats[Region!="High-income",][recl_gbd>=1,sample_per_reg],na.rm=TRUE)]
A[[i]] <- df
}
```
### Within regions
```
data_f <- data.frame()
for(i in 1:length(dis)){
d <- dis[i]
DF <- A[[i]]
data <- DF[Dis=="dis",][,lapply(.SD,function(x){quantile(x,probs=c(0.025,0.5,0.975))}),
by=c("Region"),
.SDcols=c("RCTs","Patients")]
dataprop <- DF[,lapply(.SD[Dis=="dis",]/.SD[Dis=="all",],function(x){100*quantile(x,probs=c(0.025,0.5,0.975))}),
by=c("Region"),
.SDcols=c("RCTs","Patients")]
df <- data.frame(cbind(cbind(unique(data$Region),as.character(Mgbd$x[d])),
matrix(data$RCTs,ncol=3,byrow=TRUE),
matrix(data$Patients,ncol=3,byrow=TRUE),
matrix(dataprop$RCTs,ncol=3,byrow=TRUE),
matrix(dataprop$Patients,ncol=3,byrow=TRUE)))
names(df) <- c("Region","Disease",
paste(paste("Nb","RCTs",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Nb","Patients",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Prop","RCTs",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Prop","Patients",sep="_"),c("low","med","up"),sep="_"))
data_f <- rbind(data_f,df)
}
data_f <- data_f[order(as.character(data_f$Region),as.character(data_f$Disease)),]
DT <- DT[order(as.character(DT$Region),as.character(DT$Disease)),]
table(paste(data_f$Region,data_f$Disease)==paste(DT$Region,DT$Disease)[DT$Disease%in%as.character(Mgbd$x[dis])])
names(data_f)
names(DT)
DT$Nb_RCTs_med[DT$Disease%in%as.character(Mgbd$x[dis])] <- as.numeric(as.character(data_f$Nb_RCTs_med))
DT$Nb_Patients_med[DT$Disease%in%as.character(Mgbd$x[dis])] <- as.numeric(as.character(data_f$Nb_Patients_med))
DT$Prop_loc_RCTs_med[DT$Disease%in%as.character(Mgbd$x[dis])] <- as.numeric(as.character(data_f$Prop_RCTs_med))
DT$Prop_loc_Patients_med[DT$Disease%in%as.character(Mgbd$x[dis])] <- as.numeric(as.character(data_f$Prop_Patients_med))
```
### Within diseases
```
regs <- sort(unique(DF$Region))
regs <- regs[regs!="All"]
regs
data_f <- data.frame()
for(i in 1:length(dis)){
d <- dis[i]
DF <- A[[i]]
DFr <- DF[DF$Region%in%regs & DF$Dis == "dis",]
DFr$RCTs_all <- rep(DF$RCTs[DF$Dis=="dis" & DF$Region=="All"],each=length(regs))
DFr$RCTs_NHI <- rep(DF$RCTs[DF$Dis=="dis" & DF$Region=="Non-HI"],each=length(regs))
DFr$Patients_all <- rep(DF$Patients[DF$Dis=="dis" & DF$Region=="All"],each=length(regs))
DFr$Patients_NHI <- rep(DF$Patients[DF$Dis=="dis" & DF$Region=="Non-HI"],each=length(regs))
df <- data.frame(cbind(regs,as.character(Mgbd$x[d]),
do.call('rbind',by(DFr[DFr$RCTs_all!=0,],
DFr$Region[DFr$RCTs_all!=0],
function(x){100*quantile(x$RCTs/x$RCTs_all,probs=c(0.025,0.5,0.975))})),
do.call('rbind',by(DFr[DFr$Patients_all!=0,],
DFr$Region[DFr$Patients_all!=0],
function(x){100*quantile(x$Patients/x$Patients_all,probs=c(0.025,0.5,0.975))}))))
if(sum(DFr$RCTs_NHI)!=0){
df <- cbind(df,cbind(
do.call('rbind',by(DFr[DFr$RCTs_NHI!=0,],
DFr$Region[DFr$RCTs_NHI!=0],
function(x){100*quantile(x$RCTs/x$RCTs_NHI,probs=c(0.025,0.5,0.975))})),
do.call('rbind',by(DFr[DFr$Patients_NHI!=0,],
DFr$Region[DFr$Patients_NHI!=0],
function(x){100*quantile(x$Patients/x$Patients_NHI,probs=c(0.025,0.5,0.975))}))))
}
if(sum(DFr$RCTs_NHI)==0){
df <- cbind(df,matrix(0,nrow=length(regs),ncol=3),matrix(0,nrow=length(regs),ncol=3))
}
names(df) <- c("Region","Disease",
paste(paste("Prop_all","RCTs",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Prop_all","Patients",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Prop_NHI","RCTs",sep="_"),c("low","med","up"),sep="_"),
paste(paste("Prop_NHI","Patients",sep="_"),c("low","med","up"),sep="_"))
data_f <- rbind(data_f,df)
}
data_f <- data_f[order(as.character(data_f$Region),as.character(data_f$Disease)),]
table(paste(data_f$Region,data_f$Disease)==paste(DT$Region,DT$Disease)[DT$Disease%in%as.character(Mgbd$x[dis]) & DT$Region!="All"])
names(data_f)
names(DT)
DT$Prop_glob_RCTs_med[DT$Disease%in%as.character(Mgbd$x[dis]) & DT$Region!="All"] <-
as.numeric(as.character(data_f$Prop_all_RCTs_med))
DT$Prop_glob_Patients_med[DT$Disease%in%as.character(Mgbd$x[dis]) & DT$Region!="All"] <-
as.numeric(as.character(data_f$Prop_all_Patients_med))
DT$Prop_NHI_RCTs_med[DT$Disease%in%as.character(Mgbd$x[dis]) & DT$Region!="All"] <-
as.numeric(as.character(data_f$Prop_NHI_RCTs_med))
DT$Prop_NHI_Patients_med[DT$Disease%in%as.character(Mgbd$x[dis]) & DT$Region!="All"] <-
as.numeric(as.character(data_f$Prop_NHI_Patients_med))
head(DT[DT$Disease=="Leprosy",])
write.table(DT,"../Interactive_figure/data/data.txt")
ratio_align <- read.table("../Data/Alignment_ratios_within_regions_across_diseases_wt_sims_patients_metrs_burdens.txt")
write.table(ratio_align,"../Interactive_figure/data/data_ratios.txt")
```
# Mutation-Based Fuzzing
Most [randomly generated inputs](Fuzzer.ipynb) are syntactically _invalid_ and thus are quickly rejected by the processing program. To exercise functionality beyond input processing, we must increase chances to obtain valid inputs. One such way is so-called *mutational fuzzing* – that is, introducing small changes to existing inputs that may still keep the input valid, yet exercise new behavior. We show how to create such mutations, and how to guide them towards yet uncovered code, applying central concepts from the popular AFL fuzzer.
**Prerequisites**
* You should know how basic fuzzing works; for instance, from the ["Fuzzing"](Fuzzer.ipynb) chapter.
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.MutationFuzzer import <identifier>
```
and then make use of the following features.
This chapter introduces a `MutationFuzzer` class that takes a list of _seed inputs_ which are then mutated:
```python
>>> seed_input = "http://www.google.com/search?q=fuzzing"
>>> mutation_fuzzer = MutationFuzzer(seed=[seed_input])
>>> [mutation_fuzzer.fuzz() for i in range(10)]
['http://www.google.com/search?q=fuzzing',
'http:/Sowww.googe.wCom/search?quzing',
'http://www.gOogWle.com/search?)q=fuzzig',
'http^//wwgoole.om/search?q=fuzzing',
'`ttp:/t/ww.gOogle.com/search?q=f uzzil',
'httpf.://www.google.com/search?q=fuzzinQ',
'http://www.google.bom/seacx?qa=fuzzhngY',
'ht3tp:/www.google.co/qeach?Vq=fuzin',
'ttip//ww?wgoogle.com/s>eCayrc#h?q=nuzzing',
'hdp://www.goonglec5om/serch?q=fuzzing']
```
The `MutationCoverageFuzzer` maintains a _population_ of inputs, which are then evolved in order to maximize coverage.
```python
>>> mutation_fuzzer = MutationCoverageFuzzer(seed=[seed_input])
>>> mutation_fuzzer.runs(http_runner, trials=10000)
>>> mutation_fuzzer.population[:5]
['http://www.google.com/search?q=fuzzing',
'http://\x7fww.google.com/earchq=fuzxing',
'http://\x7fww.google.comoearbhq=fuzxing',
'http://www.google*com/search?qduzzhng',
'httP://www.google.com/searc?q=fuzzing']
```
## Fuzzing with Mutations
In November 2013, the first version of [American Fuzzy Lop](http://lcamtuf.coredump.cx/afl/) (AFL) was released. Since then, AFL has become one of the most successful fuzzing tools and comes in many flavours, e.g., [AFLFast](https://github.com/mboehme/aflfast), [AFLGo](https://github.com/aflgo/aflgo), and [AFLSmart](https://github.com/aflsmart/aflsmart) (which are discussed in this book). AFL has made fuzzing a popular choice for automated vulnerability detection. It was the first to demonstrate that vulnerabilities can be detected automatically at a large scale in many security-critical, real-world applications.

<center><b>Figure 1.</b> American Fuzzy Lop Command Line User Interface</center>
In this chapter, we are going to introduce the basics of mutational fuzz testing; the next chapter will then further show how to direct fuzzing towards specific code goals.
## Fuzzing a URL Parser
Many programs expect their inputs to come in a very specific format before they would actually process them. As an example, think of a program that accepts a URL (a Web address). The URL has to be in a valid format (i.e., the URL format) such that the program can deal with it. When fuzzing with random inputs, what are our chances to actually produce a valid URL?
To get deeper into the problem, let us explore what URLs are made of. A URL consists of a number of elements:
scheme://netloc/path?query#fragment
where
* `scheme` is the protocol to be used, including `http`, `https`, `ftp`, `file`...
* `netloc` is the name of the host to connect to, such as `www.google.com`
* `path` is the path on that very host, such as `search`
* `query` is a list of key/value pairs, such as `q=fuzzing`
* `fragment` is a marker for a location in the retrieved document, such as `#result`
In Python, we can use the `urlparse()` function to parse and decompose a URL into its parts.
```
import bookutils
try:
from urlparse import urlparse # Python 2
except ImportError:
from urllib.parse import urlparse # Python 3
urlparse("http://www.google.com/search?q=fuzzing")
```
We see how the result encodes the individual parts of the URL in different attributes.
Let us now assume we have a program that takes a URL as input. To simplify things, we won't let it do very much; we simply have it check the passed URL for validity. If the URL is valid, it returns True; otherwise, it raises an exception.
```
def http_program(url):
supported_schemes = ["http", "https"]
result = urlparse(url)
if result.scheme not in supported_schemes:
raise ValueError("Scheme must be one of " + repr(supported_schemes))
if result.netloc == '':
raise ValueError("Host must be non-empty")
# Do something with the URL
return True
```
Let us now go and fuzz `http_program()`. To fuzz, we use the full range of printable ASCII characters, such that `:`, `/`, and lowercase letters are included.
```
from Fuzzer import fuzzer
fuzzer(char_start=32, char_range=96)
```
Let's try to fuzz with 1000 random inputs and see whether we have some success.
```
for i in range(1000):
try:
url = fuzzer()
result = http_program(url)
print("Success!")
except ValueError:
pass
```
What are the chances of actually getting a valid URL? We need our string to start with `"http://"` or `"https://"`. Let's take the `"http://"` case first. These are seven very specific characters we need to start with. The chance of producing these seven characters randomly (with a character range of 96 different characters) is $1 : 96^7$, or
```
96 ** 7
```
The odds of producing a `"https://"` prefix are even worse, at $1 : 96^8$:
```
96 ** 8
```
which gives us a total chance of
```
likelihood = 1 / (96 ** 7) + 1 / (96 ** 8)
likelihood
```
And this is the number of runs (on average) we'd need to produce a valid URL scheme:
```
1 / likelihood
```
Let's measure how long one run of `http_program()` takes:
```
from Timer import Timer
trials = 1000
with Timer() as t:
for i in range(trials):
try:
url = fuzzer()
result = http_program(url)
print("Success!")
except ValueError:
pass
duration_per_run_in_seconds = t.elapsed_time() / trials
duration_per_run_in_seconds
```
That's pretty fast, isn't it? Unfortunately, we have a lot of runs to cover.
```
seconds_until_success = duration_per_run_in_seconds * (1 / likelihood)
seconds_until_success
```
which translates into
```
hours_until_success = seconds_until_success / 3600
days_until_success = hours_until_success / 24
years_until_success = days_until_success / 365.25
years_until_success
```
Even if we parallelize things a lot, we're still in for months to years of waiting. And that's for getting _one_ successful run that will get deeper into `http_program()`.
What basic fuzzing will do well is to test `urlparse()`, and if there is an error in this parsing function, it has good chances of uncovering it. But as long as we cannot produce a valid input, we are out of luck in reaching any deeper functionality.
## Mutating Inputs
The alternative to generating random strings from scratch is to start with a given _valid_ input, and then to subsequently _mutate_ it. A _mutation_ in this context is a simple string manipulation - say, inserting a (random) character, deleting a character, or flipping a bit in a character representation. This is called *mutational fuzzing* – in contrast to the _generational fuzzing_ techniques discussed earlier.
Here are some mutations to get you started:
```
import random
def delete_random_character(s):
"""Returns s with a random character deleted"""
if s == "":
return s
pos = random.randint(0, len(s) - 1)
# print("Deleting", repr(s[pos]), "at", pos)
return s[:pos] + s[pos + 1:]
seed_input = "A quick brown fox"
for i in range(10):
x = delete_random_character(seed_input)
print(repr(x))
def insert_random_character(s):
"""Returns s with a random character inserted"""
pos = random.randint(0, len(s))
random_character = chr(random.randrange(32, 127))
# print("Inserting", repr(random_character), "at", pos)
return s[:pos] + random_character + s[pos:]
for i in range(10):
print(repr(insert_random_character(seed_input)))
def flip_random_character(s):
"""Returns s with a random bit flipped in a random position"""
if s == "":
return s
pos = random.randint(0, len(s) - 1)
c = s[pos]
bit = 1 << random.randint(0, 6)
new_c = chr(ord(c) ^ bit)
# print("Flipping", bit, "in", repr(c) + ", giving", repr(new_c))
return s[:pos] + new_c + s[pos + 1:]
for i in range(10):
print(repr(flip_random_character(seed_input)))
```
Let us now create a random mutator that randomly chooses which mutation to apply:
```
def mutate(s):
"""Return s with a random mutation applied"""
mutators = [
delete_random_character,
insert_random_character,
flip_random_character
]
mutator = random.choice(mutators)
# print(mutator)
return mutator(s)
for i in range(10):
print(repr(mutate("A quick brown fox")))
```
The idea is now that _if_ we have some valid input(s) to begin with, we may create more input candidates by applying one of the above mutations. To see how this works, let's get back to URLs.
## Mutating URLs
Let us now get back to our URL parsing problem. Let us create a function `is_valid_url()` that checks whether `http_program()` accepts the input.
```
def is_valid_url(url):
try:
result = http_program(url)
return True
except ValueError:
return False
assert is_valid_url("http://www.google.com/search?q=fuzzing")
assert not is_valid_url("xyzzy")
```
Let us now apply the `mutate()` function on a given URL and see how many valid inputs we obtain.
```
seed_input = "http://www.google.com/search?q=fuzzing"
valid_inputs = set()
trials = 20
for i in range(trials):
inp = mutate(seed_input)
if is_valid_url(inp):
valid_inputs.add(inp)
```
We can now observe that by _mutating_ the original input, we get a high proportion of valid inputs:
```
len(valid_inputs) / trials
```
What are the odds of also producing a `https:` prefix by mutating a `http:` sample seed input? We have to insert ($1 : 3$) the right character `'s'` ($1 : 96$) into the correct position ($1 : l$), where $l$ is the length of our seed input. This means that on average, we need this many runs:
```
trials = 3 * 96 * len(seed_input)
trials
```
We can actually afford this. Let's try:
```
from Timer import Timer
trials = 0
with Timer() as t:
while True:
trials += 1
inp = mutate(seed_input)
if inp.startswith("https://"):
print(
"Success after",
trials,
"trials in",
t.elapsed_time(),
"seconds")
break
```
Of course, if we wanted to get, say, an `"ftp://"` prefix, we would need more mutations and more runs – most important, though, we would need to apply _multiple_ mutations.
## Multiple Mutations
So far, we have only applied one single mutation on a sample string. However, we can also apply _multiple_ mutations, further changing it. What happens, for instance, if we apply, say, 20 mutations on our sample string?
```
seed_input = "http://www.google.com/search?q=fuzzing"
mutations = 50
inp = seed_input
for i in range(mutations):
if i % 5 == 0:
print(i, "mutations:", repr(inp))
inp = mutate(inp)
```
As you see, the original seed input is hardly recognizable anymore. By mutating the input again and again, we get a higher variety in the input.
To implement such multiple mutations in a single package, let us introduce a `MutationFuzzer` class. It takes a seed (a list of strings) as well as a minimum and a maximum number of mutations.
```
from Fuzzer import Fuzzer
class MutationFuzzer(Fuzzer):
def __init__(self, seed, min_mutations=2, max_mutations=10):
self.seed = seed
self.min_mutations = min_mutations
self.max_mutations = max_mutations
self.reset()
def reset(self):
self.population = self.seed
self.seed_index = 0
```
In the following, let us develop `MutationFuzzer` further by adding more methods to it. The Python language requires us to define an entire class with all methods as a single, continuous unit; however, we would like to introduce one method after another. To avoid this problem, we use a special hack: Whenever we want to introduce a new method to some class `C`, we use the construct
```python
class C(C):
def new_method(self, args):
pass
```
This seems to define `C` as a subclass of itself, which would make no sense – but actually, it introduces a new `C` class as a subclass of the _old_ `C` class, shadowing the old `C` definition. What this gets us is a `C` class with `new_method()` as a method, which is just what we want. (`C` objects defined earlier will retain the earlier `C` definition, though, and thus must be rebuilt.)
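Here is a minimal, self-contained illustration of this idiom, using a hypothetical class `C` unrelated to the fuzzer classes:

```python
class C:
    def hello(self):
        return "hello"

# "Subclassing C from itself" creates a new class C that inherits from
# the old C and then shadows its name.
class C(C):
    def shout(self):
        return self.hello().upper() + "!"

c = C()
print(c.hello())  # "hello" -- inherited from the old C
print(c.shout())  # "HELLO!" -- added by the new definition
```

Objects created from the earlier definition keep the old class, which is why fuzzer objects must be rebuilt after each such extension.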
Using this hack, we can now add a `mutate()` method that actually invokes the above `mutate()` function. Having `mutate()` as a method is useful when we want to extend a `MutationFuzzer` later.
```
class MutationFuzzer(MutationFuzzer):
def mutate(self, inp):
return mutate(inp)
```
Let's get back to our strategy, maximizing _diversity in coverage_ in our population. First, let us create a method `create_candidate()`, which randomly picks some input from our current population (`self.population`), and then applies between `min_mutations` and `max_mutations` mutation steps, returning the final result:
```
class MutationFuzzer(MutationFuzzer):
def create_candidate(self):
candidate = random.choice(self.population)
trials = random.randint(self.min_mutations, self.max_mutations)
for i in range(trials):
candidate = self.mutate(candidate)
return candidate
```
The `fuzz()` method is set to first pick the seeds; when these are gone, we mutate:
```
class MutationFuzzer(MutationFuzzer):
def fuzz(self):
if self.seed_index < len(self.seed):
# Still seeding
self.inp = self.seed[self.seed_index]
self.seed_index += 1
else:
# Mutating
self.inp = self.create_candidate()
return self.inp
seed_input = "http://www.google.com/search?q=fuzzing"
mutation_fuzzer = MutationFuzzer(seed=[seed_input])
mutation_fuzzer.fuzz()
mutation_fuzzer.fuzz()
mutation_fuzzer.fuzz()
```
With every new invocation of `fuzz()`, we get another variant with multiple mutations applied. The higher variety in inputs, though, increases the risk of having an invalid input. The key to success lies in the idea of _guiding_ these mutations – that is, _keeping those that are especially valuable._
## Guiding by Coverage
To cover as much functionality as possible, one can rely on either _specified_ or _implemented_ functionality, as discussed in the ["Coverage"](Coverage.ipynb) chapter. For now, we will not assume that there is a specification of program behavior (although it _definitely_ would be good to have one!). We _will_ assume, though, that the program to be tested exists – and that we can leverage its structure to guide test generation.
Since testing always executes the program at hand, one can always gather information about its execution – the least is the information needed to decide whether a test passes or fails. Since coverage is frequently measured as well to determine test quality, let us also assume we can retrieve coverage of a test run. The question is then: _How can we leverage coverage to guide test generation?_
One particularly successful idea is implemented in the popular fuzzer named [American fuzzy lop](http://lcamtuf.coredump.cx/afl/), or *AFL* for short. Just like our examples above, AFL evolves test cases that have been successful – but for AFL, "success" means _finding a new path through the program execution_. This way, AFL can keep on mutating inputs that so far have found new paths; and if an input finds another path, it will be retained as well.
Let us build such a strategy. We start with introducing a `Runner` class that captures the coverage for a given function. First, a `FunctionRunner` class:
```
from Fuzzer import Runner
class FunctionRunner(Runner):
def __init__(self, function):
"""Initialize. `function` is a function to be executed"""
self.function = function
def run_function(self, inp):
return self.function(inp)
def run(self, inp):
try:
result = self.run_function(inp)
outcome = self.PASS
except Exception:
result = None
outcome = self.FAIL
return result, outcome
http_runner = FunctionRunner(http_program)
http_runner.run("https://foo.bar/")
```
We can now extend the `FunctionRunner` class such that it also measures coverage. After invoking `run()`, the `coverage()` method returns the coverage achieved in the last run.
```
from Coverage import Coverage, population_coverage
class FunctionCoverageRunner(FunctionRunner):
def run_function(self, inp):
with Coverage() as cov:
try:
result = super().run_function(inp)
except Exception as exc:
self._coverage = cov.coverage()
raise exc
self._coverage = cov.coverage()
return result
def coverage(self):
return self._coverage
http_runner = FunctionCoverageRunner(http_program)
http_runner.run("https://foo.bar/")
```
Here are the first five locations covered:
```
print(list(http_runner.coverage())[:5])
```
Now for the main class. We maintain the population and a set of coverages already achieved (`coverages_seen`). The `run()` method runs the given runner on the current input; if the resulting coverage is new (i.e. not in `coverages_seen`), the input is added to `population` and its coverage to `coverages_seen`.
```
class MutationCoverageFuzzer(MutationFuzzer):
def reset(self):
super().reset()
self.coverages_seen = set()
# Now empty; we fill this with seed in the first fuzz runs
self.population = []
def run(self, runner):
"""Run function(inp) while tracking coverage.
If we reach new coverage,
add inp to population and its coverage to population_coverage
"""
result, outcome = super().run(runner)
new_coverage = frozenset(runner.coverage())
if outcome == Runner.PASS and new_coverage not in self.coverages_seen:
# We have new coverage
self.population.append(self.inp)
self.coverages_seen.add(new_coverage)
return result
```
Let us now put this to use:
```
seed_input = "http://www.google.com/search?q=fuzzing"
mutation_fuzzer = MutationCoverageFuzzer(seed=[seed_input])
mutation_fuzzer.runs(http_runner, trials=10000)
mutation_fuzzer.population
```
Success! In our population, _each and every input_ now is valid and has a different coverage, coming from various combinations of schemes, paths, queries, and fragments.
```
all_coverage, cumulative_coverage = population_coverage(
mutation_fuzzer.population, http_program)
import matplotlib.pyplot as plt
plt.plot(cumulative_coverage)
plt.title('Coverage of urlparse() with random inputs')
plt.xlabel('# of inputs')
plt.ylabel('lines covered');
```
The nice thing about this strategy is that, applied to larger programs, it will happily explore one path after the other – covering functionality after functionality. All that is needed is a means to capture the coverage.
## Synopsis
This chapter introduces a `MutationFuzzer` class that takes a list of _seed inputs_ which are then mutated:
```
seed_input = "http://www.google.com/search?q=fuzzing"
mutation_fuzzer = MutationFuzzer(seed=[seed_input])
[mutation_fuzzer.fuzz() for i in range(10)]
```
The `MutationCoverageFuzzer` maintains a _population_ of inputs, which are then evolved in order to maximize coverage.
```
mutation_fuzzer = MutationCoverageFuzzer(seed=[seed_input])
mutation_fuzzer.runs(http_runner, trials=10000)
mutation_fuzzer.population[:5]
```
## Lessons Learned
* Randomly generated inputs are frequently invalid – and thus exercise mostly input processing functionality.
* Mutations from existing valid inputs have much higher chances to be valid, and thus to exercise functionality beyond input processing.
## Next Steps
In the next chapter on [greybox fuzzing](GreyboxFuzzer.ipynb), we further extend the concept of mutation-based testing with _power schedules_ that allow spending more energy on seeds that exercise "unlikely" paths and seeds that are "closer" to a target location.
## Exercises
### Exercise 1: Fuzzing CGI decode with Mutations
Apply the above _guided_ mutation-based fuzzing technique on `cgi_decode()` from the ["Coverage"](Coverage.ipynb) chapter. How many trials do you need until you cover all variations of `+`, `%` (valid and invalid), and regular characters?
```
from Coverage import cgi_decode
seed = ["Hello World"]
cgi_runner = FunctionCoverageRunner(cgi_decode)
m = MutationCoverageFuzzer(seed)
results = m.runs(cgi_runner, 10000)
m.population
cgi_runner.coverage()
all_coverage, cumulative_coverage = population_coverage(
m.population, cgi_decode)
import matplotlib.pyplot as plt
plt.plot(cumulative_coverage)
plt.title('Coverage of cgi_decode() with random inputs')
plt.xlabel('# of inputs')
plt.ylabel('lines covered');
```
After 10,000 runs, we have managed to synthesize a `+` character and a valid `%xx` form. We can still do better.
### Exercise 2: Fuzzing bc with Mutations
Apply the above mutation-based fuzzing technique on `bc`, as in the chapter ["Introduction to Fuzzing"](Fuzzer.ipynb).
#### Part 1: Non-Guided Mutations
Start with non-guided mutations. How many of the inputs are valid?
**Solution.** This is just a matter of tying a `ProgramRunner` to a `MutationFuzzer`:
```
from Fuzzer import ProgramRunner
seed = ["1 + 1"]
bc = ProgramRunner(program="bc")
m = MutationFuzzer(seed)
outcomes = m.runs(bc, trials=100)
outcomes[:3]
sum(1 for completed_process, outcome in outcomes if completed_process.stderr == "")
```
#### Part 2: Guided Mutations
Continue with _guided_ mutations. To this end, you will have to find a way to extract coverage from a C program such as `bc`. Proceed in these steps:
First, get [GNU bc](https://www.gnu.org/software/bc/); download, say, `bc-1.07.1.tar.gz` and unpack it:
```
!curl -O mirrors.kernel.org/gnu/bc/bc-1.07.1.tar.gz
!tar xfz bc-1.07.1.tar.gz
```
Second, configure the package:
```
!cd bc-1.07.1; ./configure
```
Third, compile the package with special flags:
```
!cd bc-1.07.1; make CFLAGS="--coverage"
```
The file `bc/bc` should now be executable...
```
!cd bc-1.07.1/bc; echo 2 + 2 | ./bc
```
...and you should be able to run the `gcov` program to retrieve coverage information.
```
!cd bc-1.07.1/bc; gcov main.c
```
As sketched in the ["Coverage" chapter](Coverage.ipynb), the file [bc-1.07.1/bc/main.c.gcov](bc-1.07.1/bc/main.c.gcov) now holds the coverage information for `main.c`. Each line is prefixed with the number of times it was executed. `#####` means zero times; `-` means non-executable line.
Parse the GCOV file for `bc` and create a `coverage` set, as in `FunctionCoverageRunner`. Make this a `ProgramCoverageRunner` class that would be constructed with a list of source files (`bc.c`, `main.c`, `load.c`) to run `gcov` on.
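A sketch of such a parser, assuming the ` count: lineno: source` line format described above (the function name `read_gcov_coverage` is a placeholder; the exercise asks you to wrap this in a `ProgramCoverageRunner` class):

```python
def read_gcov_coverage(gcov_text, source_file):
    """Parse gcov output into a coverage set of (file, line) pairs.
    Each gcov line looks like ' count: lineno: source';
    '#####' marks a line executed zero times, '-' a non-executable line."""
    coverage = set()
    for line in gcov_text.splitlines():
        elems = line.split(':', 2)
        if len(elems) < 3:
            continue
        count, line_number = elems[0].strip(), elems[1].strip()
        if count not in ('-', '#####') and line_number.isdigit():
            coverage.add((source_file, int(line_number)))
    return coverage
```

Applied to `main.c.gcov`, `load.c.gcov`, and `bc.c.gcov`, the union of these sets plays the same role as the set returned by `FunctionCoverageRunner.coverage()`.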
When you're done, don't forget to clean up:
```
!rm -fr bc-1.07.1 bc-1.07.1.tar.gz
```
### Exercise 3
In this [blog post](https://lcamtuf.blogspot.com/2014/08/binary-fuzzing-strategies-what-works.html), the author of _American Fuzzy Lop_ (AFL), a very popular mutation-based fuzzer, discusses the efficiency of various mutation operators. Implement four of them and evaluate their efficiency as in the examples above.
### Exercise 4
When adding a new element to the list of candidates, AFL does not actually compare the _coverage_, but adds an element if it exercises a new _branch_. Using branch coverage from the exercises of the ["Coverage"](Coverage.ipynb) chapter, implement this "branch" strategy and compare it against the "coverage" strategy, above.
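As a starting point, one way to approximate branch (edge) coverage is to pair consecutive locations in an execution trace. This sketch assumes the `Coverage` class is extended to record an ordered trace of locations rather than just a set:

```python
def branch_coverage(trace):
    """Derive branch (edge) coverage from an ordered trace of executed
    locations: each pair of consecutive locations forms one branch."""
    return {(trace[i], trace[i + 1]) for i in range(len(trace) - 1)}
```

In `MutationCoverageFuzzer.run()`, one would then compare `frozenset(branch_coverage(runner.trace()))` against the branches seen so far, instead of the location set.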
### Exercise 5
Design and implement a system that will gather a population of URLs from the Web. Can you achieve a higher coverage with these samples? What if you use them as initial population for further mutation?
## OkCupid DataSet
### Meeting 2, 12-11-2019
### Recap last meeting's decisions:
<ol>
<p>Meeting 1, 16-10-2019</p>
<li>Merge all the essays as a unique text excluding essay4, 5, 6. </li>
<li>Plot the distribution of the average number of words per sentence.</li>
<li>Don't apply lowercase() and expand_contractions(); instead, count the number of contractions.</li>
</ol>
### Things to discuss about:
<ol>
<p></p>
<li>Re-code education feature?</li>
<li>How to treat outliers: keep or remove? Minimum and maximum thresholds?</li>
<li>Incorrect or irrelevant information such as age 109 or sentences with length 1</li>
<li>Check list of stopwords</li>
</ol>
```
import pandas as pd
from termcolor import colored
```
### Load dataset
```
# Load dataset
df = pd.read_csv('../../data/raw/profiles.csv')
df.columns
```
### Missing values
```
print('\033[1m \033[94m Dataset shape:\033[0m \033[0m {}\n'.format(df.shape))
# drop rows with all null values
df = df.dropna(axis='rows', how='all')
print('\033[1m \033[94m Shape of dataset after dropping the rows containing all null values:\033[0m \033[0m {}\n'.format(df.shape))
# drop rows in which the education field is empty
df = df.dropna(subset=['education'])
print('\033[1m \033[94m Shape of dataset after dropping the rows with null value in education:\033[0m \033[0m {}\n'.format(df.shape))
# drop rows in which all the essay fields are empty
df = df.dropna(subset=['essay0', 'essay1', 'essay2', 'essay3', 'essay7', 'essay8', 'essay9'], how = 'all')
print('\033[1m \033[94m Shape of dataset after dropping the rows with all null values in essays:\033[0m \033[0m {}\n'.format(df.shape))
```
## Merge the essays
```
# essay 4, essay 5, essay 6 are excluded from the essays
df['text'] = df['essay0'].fillna('') +' ' + df['essay1'].fillna('')+ ' ' + df['essay2'].fillna('')+ ' '+ df['essay3'].fillna('') + ' ' + df['essay7'].fillna('') + ' ' + df['essay8'].fillna('') + ' ' + df['essay9'].fillna('')
df.head()
```
## Text cleaning
* Remove all html tags
```
# Remove html tags from essays
# df = df.replace("<br />", ' ', regex=True)
# df = df.replace("\n", "", regex = True)
df['text'] = df['text'].replace('<[^<]+?>', '', regex=True)
df.head()
```
## Text tokenization
```
import nltk
# nltk.download('punkt')
from nltk.tokenize import sent_tokenize
df = df.dropna(subset=['text'])
# split text to words
df['words'] = df.apply(lambda row: nltk.word_tokenize(row['text']), axis=1)
# number of words for each user
df['#words'] = df.apply(lambda row: len(row['words']), axis=1)
# split text to sentences
df['sentences'] = df.apply(lambda row: nltk.sent_tokenize(row['text']), axis=1)
# number of sentences for each user
df['#sentences'] = df.apply(lambda row: len(row['sentences']), axis=1)
# average number of words per sentence (guard against empty texts)
df['#anwps'] = df.apply(lambda row: row['#words']/row['#sentences'] if row['#sentences'] else 0, axis=1)
df.head()
```
### Check outliers
```
df[df['#anwps'] > 200]
# v = df[df['#sentences'] == 1 ]
# # v
# v['sentences'].loc[40483]
# # the whole paragraph is counted as one sentence.
# # Print longest sentence in the text
# v = df[df['#sentences'] == 1 ]
# # v
# v['sentences'].loc[14509]
```
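One possible min/max threshold for the outlier discussion point above is the Tukey IQR fence; a minimal sketch (the helper is illustrative, not part of the notebook):

```python
import pandas as pd

def iqr_bounds(series, k=1.5):
    """Tukey fences: values outside [Q1 - k*IQR, Q3 + k*IQR]
    are flagged as outliers."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr
```

Applied to `df['#anwps']`, rows outside the returned bounds could be clipped or dropped, depending on what the group decides.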
## Unique values in the 'education' column
```
df['education'].unique()
```
## Create a dictionary of education levels
```
dic = {
"graduated from college/university" : "6",
"graduated from masters program" : "7",
"working on college/university" : "3",
"working on masters program" : "6",
"graduated from two-year college" : "5",
"graduated from high school" : "3",
"graduated from ph.d program" : "8",
"graduated from law school" : "8",
"working on two-year college" : "3",
"dropped out of college/university" : "3",
"working on ph.d program" : "7",
"college/university" : "6",
"graduated from space camp" : "-1",
"dropped out of space camp" : "-1",
"graduated from med school" : "8",
"working on space camp" : "-1",
"working on law school" : "7",
"two-year college" : "3",
"working on med school" : "7",
"dropped out of two-year college" : "3",
"dropped out of masters program" : "6",
"masters program" : "6",
"dropped out of ph.d program" : "7",
"dropped out of high school" : "1",
"high school" : "3",
"working on high school" : "1",
"space camp" : "-1",
"ph.d program" : "7",
"law school" : "8",
"dropped out of law school" : "7",
"dropped out of med school" : "7",
"med school" : "8",
}
df['isced'] = df['education']
df = df.replace(dict(isced = dic))
df.head()
n = len(df)
dfx = df.groupby('isced').count()/n
dfx['education'].plot(kind='bar')
```
### The summary statistics of quantitative variables
```
# Print out the summary statistics of quantitative variables
df.describe()
```
## Visualizing the distribution of a dataset
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(color_codes=True)
import mpld3
mpld3.enable_notebook()
import numpy as np
# Histogram for number of words
count, bin_edges = np.histogram(df['#words'])
sns.distplot(df["#words"], kde=False).set_title("Histogram of number of words")
plt.show()
# df['#words'].plot(kind='hist', xticks = bin_edges, bins=100)
# plt.show()
```
## Number of words histogram summary:
### The distribution of the number of words in the OkCupid essays dataset is unimodal and right-skewed, with most word counts between 0 and 220, a range of roughly 12,000, and outliers present on the higher end.
```
# Histogram for number of sentences
sns.distplot(df["#sentences"], kde=False).set_title("Histogram of number of sentences")
plt.show()
# Histogram for average number of words per sentence
sns.distplot(df["#anwps"], kde=False).set_title("Histogram of average number of words per sentence")
plt.show()
```
## Average number of words per sentence histogram summary:
### The distribution of the average number of words per sentence in the OkCupid essays dataset is unimodal and right-skewed, centered at about 15, with outliers present on the higher end.
```
# Create a boxplot
sns.boxplot(df["#anwps"]).set_title("Boxplot of the average number of words per sentence")
# Plot boxplot of #anwps for men and women
df.groupby(['sex'])['#anwps'].value_counts().unstack(0).boxplot(figsize=(10,10))
plt.title("Boxplot of the average number of words per sentence for men and women")
```
### Top 20 common words in the text before removing stopwords
```
from collections import Counter
top = Counter(" ".join(df["text"]).split()).most_common(20)
top = top[::-1]
x, frequency = zip(*top)
x_pos = [i for i, _ in enumerate(x)]
plt.barh(x_pos, frequency, color='green')
plt.ylabel("Frequent word")
plt.xlabel("Word frequency")
plt.title("Top 20 common words")
plt.yticks(x_pos, x)
plt.show()
```
### Data cleaning
```
import string
from string import punctuation
# Remove punctuation
def remove_punctuation(s):
s = ''.join([i for i in s if i not in frozenset(string.punctuation)])
return s
df['removed_punctuation'] = df.apply(lambda x: remove_punctuation(x['text']), axis=1)
import contractions
# def expand_contractions(text):
# expanded = contractions.fix(text)
# return expanded
# # print(contractions.fix("you've"))
# # print(contractions.fix("he's"))
# df['expanded_contractions'] = df.apply(lambda x: expand_contractions(x['removed_punctuation']), axis=1)
# def cc(text):
# count = 0
# for word in text.lower().split():
# word = word.strip(punctuation)
# if word in contractions:
# count += 1
# return count
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
# Remove stopwords
def remove_stopwords(text):
    text = ' '.join([word for word in text.split() if word not in ENGLISH_STOP_WORDS])
    return text
df['removed_stopwords'] = df.apply(lambda x: remove_stopwords(x['removed_punctuation']), axis=1)
# Remove numbers
df.removed_stopwords = df.removed_stopwords.str.replace(r'\d+', '', regex=True)
```
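Meeting decision #3 was to count contractions rather than expand them. A minimal sketch using a suffix regex (the pattern and the function name are my own, not from the `contractions` package, and the suffix list covers only the common English contractions):

```python
import re

# Matches tokens like don't, I'm, we've, she'll, he'd, they're, it's
CONTRACTION_RE = re.compile(r"\b\w+'(?:s|t|re|ve|ll|d|m)\b", re.IGNORECASE)

def count_contractions(text):
    """Count contraction-like tokens in a text via a suffix regex."""
    return len(CONTRACTION_RE.findall(text))
```

This could then be applied per row, e.g. `df['#contractions'] = df['text'].apply(count_contractions)`; note it should run on the raw text, before punctuation removal strips the apostrophes.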
### Top 20 common words in the text after removing stopwords
```
top = Counter(" ".join(df["removed_stopwords"]).split()).most_common(20)
top = top[::-1]
x, frequency = zip(*top)
x_pos = [i for i, _ in enumerate(x)]
plt.barh(x_pos, frequency, color='green')
plt.ylabel("Frequent word")
plt.xlabel("Word frequency")
plt.title("Top 20 common words")
plt.yticks(x_pos, x)
plt.show()
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
# Plot wordcloud of 100 most frequent words
wordcloud = WordCloud(
width = 3000,
height = 2000,
background_color = 'black',
max_words=100
).generate(str(df["removed_stopwords"]))
fig = plt.figure(
figsize = (10, 7.5),
facecolor = 'k',
edgecolor = 'k')
plt.imshow(wordcloud, interpolation = 'bilinear')
plt.axis('off')
plt.tight_layout(pad=0)
plt.title("Wordcloud of 100 most frequent words")
plt.show()
# Count number of male and female
df.groupby(['sex']).count()
# Plot frequency of participants based on their sex and level of education
df.groupby(['sex']).isced.value_counts().unstack(0).plot.barh(figsize=(10,10))
plt.title("Frequency of participants based on their sex and level of education")
print(dic)
# Plot frequency of participants based on their sex and age
df.groupby(['age']).sex.value_counts().unstack(0).plot.barh(figsize=(10,10))
plt.title("Frequency of participants based on their sex and age")
%matplotlib inline
import matplotlib.pyplot as plt
# df.plot(kind="hist", bins= 150, y='#anwps', title='average number of words per sentence distribution')
# Sort the rows of dataset ascending based on the #anwps
df.sort_values(['#anwps'], ascending=False, inplace=True)
df2 = df.groupby(['education'], axis=0).sum()
df2.sort_values(['#anwps'], ascending=False, inplace=True)
df2
df3 = df.groupby(['isced'], axis=0).sum()
df3.sort_values(['#anwps'], ascending=False, inplace=True)
# df3['#anwps'].plot(kind='pie')
# plt.title('fdgdfg')
# plt.show()
df4 = df3['#anwps']
df4.plot(kind='bar')
# v = df[df['#sentences'] <5 ]
# sns.relplot(x="education_abrv", y="#anwps", hue="sex", data=df4)
# # Print longest sentence in the text
# v = df[df['#sentences'] == 1 ]
# # v
# v['sentences'].loc[14509]
```
[View in Colaboratory](https://colab.research.google.com/github/AmoDinho/Machine-Learning-Crash-Course-with-TF/blob/master/improving_neural_net_performance.ipynb)
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Improving Neural Net Performance
**Learning Objective:** Improve the performance of a neural network by normalizing features and applying various optimization algorithms
**NOTE:** The optimization methods described in this exercise are not specific to neural networks; they are effective means to improve most types of models.
## Setup
First, we'll load the data.
```
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print "Training examples summary:"
display.display(training_examples.describe())
print "Validation examples summary:"
display.display(validation_examples.describe())
print "Training targets summary:"
display.display(training_targets.describe())
print "Validation targets summary:"
display.display(validation_targets.describe())
```
## Train the Neural Network
Next, we'll train the neural network.
```
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural network model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
my_optimizer,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
my_optimizer: An instance of `tf.train.Optimizer`, the optimizer to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A tuple `(estimator, training_losses, validation_losses)`:
estimator: the trained `DNNRegressor` object.
training_losses: a `list` containing the training loss values taken during training.
validation_losses: a `list` containing the validation loss values taken during training.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "RMSE (on training data):"
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print "Final RMSE (on training data): %0.2f" % training_root_mean_squared_error
print "Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error
return dnn_regressor, training_rmse, validation_rmse
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
## Linear Scaling
It can be a good standard practice to normalize the inputs to fall within the range -1, 1. This helps SGD not get stuck taking steps that are too large in one dimension, or too small in another. Fans of numerical optimization may note that there's a connection to the idea of using a preconditioner here.
```
def linear_scale(series):
min_val = series.min()
max_val = series.max()
scale = (max_val - min_val) / 2.0
return series.apply(lambda x:((x - min_val) / scale) - 1.0)
```
## Task 1: Normalize the Features Using Linear Scaling
**Normalize the inputs to the scale -1, 1.**
**Spend about 5 minutes training and evaluating on the newly normalized data. How well can you do?**
As a rule of thumb, NNs train best when the input features are roughly on the same scale.
Sanity check your normalized data. (What would happen if you forgot to normalize one feature?)
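One way to run that sanity check programmatically is a small helper that reports any column falling outside the target range. This is an illustrative sketch (the function name is my own), assuming a pandas DataFrame of normalized features:

```python
import pandas as pd

def check_normalized(df, low=-1.0, high=1.0):
    """Return the list of columns whose values fall outside [low, high].
    An empty list means every feature is within the expected range."""
    bad = []
    for column in df.columns:
        if df[column].min() < low or df[column].max() > high:
            bad.append(column)
    return bad
```

A feature you forgot to normalize would show up here immediately, and during training it would dominate the gradient steps relative to the scaled features.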
```
def normalize_linear_scale(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized linearly."""
#
# Your code here: normalize the inputs.
#
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["total_rooms"] = linear_scale(examples_dataframe["total_rooms"])
processed_features["total_bedrooms"] = linear_scale(examples_dataframe["total_bedrooms"])
processed_features["population"] = linear_scale(examples_dataframe["population"])
processed_features["households"] = linear_scale(examples_dataframe["households"])
processed_features["median_income"] = linear_scale(examples_dataframe["median_income"])
processed_features["rooms_per_person"] = linear_scale(examples_dataframe["rooms_per_person"])
return processed_features
normalized_dataframe = normalize_linear_scale(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0007),
steps=5000,
batch_size=70,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
```
## Task 2: Try a Different Optimizer
**Use the Adagrad and Adam optimizers and compare performance.**
The Adagrad optimizer is one alternative. The key insight of Adagrad is that it modifies the learning rate adaptively for each coefficient in a model, monotonically lowering the effective learning rate. This works great for convex problems, but isn't always ideal for the non-convex problem of neural net training. You can use Adagrad by specifying `AdagradOptimizer` instead of `GradientDescentOptimizer`. Note that you may need to use a larger learning rate with Adagrad.
For non-convex optimization problems, Adam is sometimes more efficient than Adagrad. To use Adam, invoke the `tf.train.AdamOptimizer` method. This method takes several optional hyperparameters as arguments, but our solution only specifies one of these (`learning_rate`). In a production setting, you should specify and tune the optional hyperparameters carefully.
```
#
# YOUR CODE HERE: Retrain the network using Adagrad and then Adam.
#
#Adagrad
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.5),
steps=500,
batch_size=100,
hidden_units=[10,10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
#Adam
_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
my_optimizer=tf.train.AdamOptimizer(learning_rate=0.009),
steps=500,
batch_size=100,
hidden_units=[10,10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs Periods")
plt.plot(adagrad_training_losses, label='Adagrad training')
plt.plot(adagrad_validation_losses, label='Adagrad validation')
plt.plot(adam_training_losses, label='Adam training')
plt.plot(adam_validation_losses, label='Adam validation')
_ = plt.legend()
```
## Task 3: Explore Alternate Normalization Methods
**Try alternate normalizations for various features to further improve performance.**
If you look closely at summary stats for your transformed data, you may notice that linear scaling some features leaves them clumped close to `-1`.
For example, many features have a median of `-0.8` or so, rather than `0.0`.
```
_ = normalized_training_examples.hist(bins=20, figsize=(18, 12), xlabelsize=10)
```
We might be able to do better by choosing additional ways to transform these features.
For example, a log scaling might help some features. Or clipping extreme values may make the remainder of the scale more informative.
```
def log_normalize(series):
return series.apply(lambda x:math.log(x+1.0))
def clip(series, clip_to_min, clip_to_max):
return series.apply(lambda x:(
min(max(x, clip_to_min), clip_to_max)))
def z_score_normalize(series):
mean = series.mean()
std_dv = series.std()
return series.apply(lambda x:(x - mean) / std_dv)
def binary_threshold(series, threshold):
return series.apply(lambda x:(1 if x > threshold else 0))
```
The block above contains a few additional possible normalization functions. Try some of these, or add your own.
Note that if you normalize the target, you'll need to un-normalize the predictions for loss metrics to be comparable.
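To illustrate that round trip, here is a sketch assuming a z-score normalization of the target (function names are illustrative): the normalization returns its parameters so predictions can be mapped back to original units before computing RMSE.

```python
import numpy as np

def z_score_with_params(values):
    """Z-score normalize and return (normalized, mean, std) so the
    transform can be inverted on the model's predictions."""
    mean, std = values.mean(), values.std()
    return (values - mean) / std, mean, std

def un_normalize(normalized, mean, std):
    """Map normalized predictions back to the original target units."""
    return normalized * std + mean
```

Without the `un_normalize` step, a loss computed on normalized targets would not be comparable to the RMSE values (in thousands of dollars) reported earlier.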
```
def normalize(examples_dataframe):
"""Returns a version of the input `DataFrame` that has all its features normalized."""
processed_features = pd.DataFrame()
processed_features["households"] = log_normalize(examples_dataframe["households"])
processed_features["median_income"] = log_normalize(examples_dataframe["median_income"])
processed_features["total_bedrooms"] = log_normalize(examples_dataframe["total_bedrooms"])
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
processed_features["housing_median_age"] = linear_scale(examples_dataframe["housing_median_age"])
processed_features["population"] = linear_scale(clip(examples_dataframe["population"], 0, 5000))
processed_features["rooms_per_person"] = linear_scale(clip(examples_dataframe["rooms_per_person"], 0, 5))
processed_features["total_rooms"] = linear_scale(clip(examples_dataframe["total_rooms"], 0, 10000))
return processed_features
normalized_dataframe = normalize(preprocess_features(california_housing_dataframe))
normalized_training_examples = normalized_dataframe.head(12000)
normalized_validation_examples = normalized_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=normalized_training_examples,
training_targets=training_targets,
validation_examples=normalized_validation_examples,
validation_targets=validation_targets)
```
## Optional Challenge: Use only Latitude and Longitude Features
**Train a NN model that uses only latitude and longitude as features.**
Real estate people are fond of saying that location is the only important feature in housing price.
Let's see if we can confirm this by training a model that uses only latitude and longitude as features.
This will only work well if our NN can learn complex nonlinearities from latitude and longitude.
**NOTE:** We may need a network structure that has more layers than were useful earlier in the exercise.
```
#
# YOUR CODE HERE: Train the network using only latitude and longitude
#
def location_location_location(examples_dataframe):
"""Returns a version of the input `DataFrame` that keeps only the latitude and longitude."""
processed_features = pd.DataFrame()
processed_features["latitude"] = linear_scale(examples_dataframe["latitude"])
processed_features["longitude"] = linear_scale(examples_dataframe["longitude"])
return processed_features
lll_dataframe = location_location_location(preprocess_features(california_housing_dataframe))
lll_training_examples = lll_dataframe.head(12000)
lll_validation_examples = lll_dataframe.tail(5000)
_ = train_nn_regression_model(
my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.15),
steps=1000,
batch_size=50,
hidden_units=[10, 10],
training_examples=lll_training_examples,
training_targets=training_targets,
validation_examples=lll_validation_examples,
validation_targets=validation_targets)
```
# Additional assignment. Kaggle dataset. House cost prediction
This dataset is used to practice the Week 1 material on linear regression.
```
import pandas as pd
import numpy as np
import scipy.stats as sts
from sklearn import datasets, linear_model, metrics
from sklearn import model_selection as mod_sel
from sklearn.metrics import mean_squared_log_error, mean_squared_error
from scipy.optimize import minimize
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
%matplotlib inline
initial_train_data = pd.read_csv('data/10_train.csv')
initial_test_data = pd.read_csv('data/10_test.csv')
initial_train_data.sample(5)
train_data, test_data, train_labels, test_labels = mod_sel.train_test_split(
initial_train_data.loc[:, ['LotArea', 'OverallQual']],
initial_train_data.loc[:, ['SalePrice']],
test_size = 0.3, random_state=42
)
main_data = train_data.loc[train_data['LotArea'] < 35000]
outliers = train_data.loc[train_data['LotArea'] >= 35000]
# main_data.plot(y='LotArea', kind='hist')
# outliers.plot(y='LotArea', kind='hist')
def addBias(a):
onesCol = np.ones((a.shape[0], 1))
return np.concatenate([onesCol, a], axis=1)
from sklearn import preprocessing
xs = 1.0 * train_data.values
scaler = preprocessing.StandardScaler().fit(xs)
def preprocess_data(data):
data = scaler.transform(data)
data = preprocessing.PolynomialFeatures(3).fit_transform(data)
return data
trFeatures = preprocess_data(xs)
trLabels = train_labels.values
trLogLabels = np.log(trLabels)
tsFeatures = preprocess_data(1.0 * test_data.values)
tsLabels = test_labels.values
tsLogLabels = np.log(tsLabels)
def predict(weights, features=trFeatures):
return np.dot(features, weights.T)
def get_error(weights, features=trFeatures, labels=trLogLabels):
predictions = predict(weights, features)
if len(weights.shape) == 1 or weights.shape[1] == 1:
return np.sqrt(mean_squared_error(labels, predictions))
if len(labels.shape) == 1 or labels.shape[1] == 1:
labels = np.broadcast_to(labels, predictions.shape)
return np.sqrt(mean_squared_error(labels, predictions, multioutput='raw_values'))
ws = np.random.rand(3, trFeatures.shape[1])
get_error(ws)
ws0 = np.random.rand(1, trFeatures.shape[1])
get_error(ws0)
res = minimize(get_error, x0=ws0) #, method='L-BFGS-B', bounds=[(-100, 100), (-5, 5)])
min_ws = res.x
print(res.fun)
print(min_ws)
tsPredictions = predict(min_ws, tsFeatures)
get_error(min_ws, features=tsFeatures, labels=tsLogLabels)
linear_regressor = linear_model.LinearRegression()
linear_regressor = linear_regressor.fit(trFeatures, trLogLabels)
coef = linear_regressor.coef_
coef[0, 0] = linear_regressor.intercept_
get_error(coef, features=tsFeatures, labels=tsLogLabels)
coef
from sklearn.pipeline import make_pipeline
cols = ['LotArea', 'OverallQual', 'YearBuilt', 'BuiltIn21', 'BuiltIn20', 'OverallCond', 'MSSubClass']
train_data = initial_train_data.loc[:, cols]
train_labels = initial_train_data.loc[:, ['SalePrice']]
test_data = initial_test_data.loc[:, cols]
train_data = 1.0 * train_data.values
train_labels = np.log(1.0 * train_labels.values)
linear_regressor = make_pipeline(
preprocessing.StandardScaler(),
preprocessing.PolynomialFeatures(3),
linear_model.LinearRegression()
)
linear_regressor = linear_regressor.fit(train_data, train_labels)
trError = np.sqrt(mean_squared_log_error(train_labels, linear_regressor.predict(train_data)))
print(trError)
linear_scoring = mod_sel.cross_val_score(
linear_regressor,
# train_data, train_labels, scoring = 'neg_mean_squared_log_error',
train_data,
train_labels,
scoring = 'neg_mean_squared_error',
cv = 10
)
'{0:.4f}; {1:.4f}'.format(linear_scoring.mean(), linear_scoring.std())
tsPredictions = linear_regressor.predict(1.0 * test_data.values)
tsPredictions = np.exp(tsPredictions)
tsPredictions = tsPredictions[:,0]
submission = pd.DataFrame({'Id': initial_test_data.Id, 'SalePrice': tsPredictions})
# you could use any filename. We choose submission here
submission.to_csv('out/10_submission.csv', index=False)
xs = np.linspace(min_ws[1] - 10, min_ws[1] + 10, 100)
ws_shape = (xs.shape[0], min_ws.shape[0])
ws = np.array(np.broadcast_to(min_ws, ws_shape))
ws[:, 1] = xs
ys = get_error(ws)
plt.plot(xs, ys)
n = 10
pr = predict(min_ws)[:n, None]
gt = trLabels[:n]
(pr - gt)/gt
# np.concatenate([gt, pr - gt, (pr - gt)/gt ], axis=1)
test_data = initial_test_data.loc[:, ['LotArea']]
testFeatures = getFeatureMatrix(test_data)
testPredictions = np.exp(predict(min_ws, features=testFeatures))
submission = pd.DataFrame({'Id': initial_test_data.Id, 'SalePrice': testPredictions})
# you could use any filename. We choose submission here
submission.to_csv('out/10_submission.csv', index=False)
col = 'OverallCondCleared'
print(initial_train_data[col].isna().any())  # note: `x == np.NaN` is always False, so use isna()
initial_train_data[col].sample(20)
def addColumns(df):
df['BuiltIn21'] = 0
df['BuiltIn20'] = 0
df.loc[df['YearBuilt'] >= 2000, 'BuiltIn21'] = df['YearBuilt']
df.loc[df['YearBuilt'] < 2000, 'BuiltIn20'] = df['YearBuilt']
addColumns(initial_train_data)
addColumns(initial_test_data)
subdata = initial_train_data.loc[:, ['SalePrice', 'YearRemodAdd', 'YearBuilt', 'OverallCond']]
sns.pairplot(subdata)
```

## Satellite Imagery DataSet Toolkit
A satellite imagery dataset is an essential part of training and validating models for different missions.
This toolkit downloads imagery from different data sources and uses a specific layer (class) in OSM vector data to generate datasets for model training or validation.
### Supported Vector Datasource Types:
* MBTiles
* Shapefile
* Pbf
* Geojson
First of all, the layer name and class should be known in advance, because the same class may appear under different keywords in OSM data. For example, with 'water' as a class name, the corresponding OSM keywords may be 'waterway', 'water', 'lake', and so on.
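One lightweight way to handle this is an alias table that folds the various OSM keywords into a single class name. The keywords below are illustrative examples, not the toolkit's actual mapping:

```python
# Hypothetical alias table: raw OSM keywords -> unified class name.
OSM_CLASS_ALIASES = {
    "water": ["water", "waterway", "lake", "river", "reservoir"],
    "building": ["building", "buildings"],
}

def resolve_class(osm_keyword):
    """Map a raw OSM keyword to the unified class name, or None if unknown."""
    for class_name, keywords in OSM_CLASS_ALIASES.items():
        if osm_keyword.lower() in keywords:
            return class_name
    return None
```

With this in place, `resolve_class("lake")` and `resolve_class("waterway")` both resolve to the single class `"water"`.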
### Supported Raster Dataset Keys:
* Google
* Google China
* Google Maps
* Google Satellite
* Google Terrain
* Google Terrain Hybrid
* Google Satellite Hybrid
* Stamen Terrain
* Stamen Toner
* Stamen Toner Light
* Stamen Watercolor
* Wikimedia Map
* Wikimedia Hike Bike Map
* Esri Boundaries Places
* Esri Gray (dark)
* Esri Gray (light)
* Esri National Geographic
* Esri Ocean
* Esri Satellite
* Esri Standard
* Esri Terrain
* Esri Transportation
* Esri Topo World
* OpenStreetMap Standard
* OpenStreetMap H.O.T.
* OpenStreetMap Monochrome
* OpenTopoMap
* Strava All
* Strava Run
* Open Weather Map Temperature
* Open Weather Map Clouds
* Open Weather Map Wind Speed
* CartoDb Dark Matter
* CartoDb Positron
* Bing VirtualEarth
### Usage:
#### Step 1:
The first step is downloading the tile files. However, most data providers do not write projection information into the tile files, so computing each tile's projection information and writing it to the file is the most important part of the download flow.
****
```
from Data.IO.Downloader import DOWNLOADER
UTAH=DOWNLOADER("Google Satellite")
```
****
### Demo:
We can choose a position, for example Salt Lake City.
Salt Lake City is located in the United States, in North America. Its DMS latitude/longitude coordinates are: 40°45'38.81"N, 111°53'27.78"W.
We need to define an area described by WGS84 longitude/latitude coordinates, for example:
Cord1=(-111.89105,40.76078) # top-left lon/lat
Cord2=(-111.8,40.7) # bottom-right lon/lat
In addition, we need to set the `zoom level`, which determines the resolution of each map tile. Related info:

The data is generated as 256×256 image tiles; you can also use `DOWNLOADER_INSTANCE`.merge() to merge all the tiles into a single TIFF file.
addcord() is a function whose input is the WGS84 coordinates of the top-left and bottom-right points (x1, y1, x2, y2), plus the zoom level, which determines the density of the data grid:
left, top : top-left coordinate, for example (100.361, 38.866)
right, bottom : bottom-right coordinate
z : zoom level
filePath : file path for storing the result, in TIFF format
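For reference, the mapping from a WGS84 lon/lat pair plus a zoom level to XYZ tile indices follows the standard Web Mercator "slippy map" scheme. A minimal sketch (not the toolkit's actual implementation):

```python
import math

def deg_to_tile(lon, lat, zoom):
    """Convert WGS84 lon/lat to XYZ tile indices in the Web Mercator
    'slippy map' scheme: the world is a 2^zoom x 2^zoom grid of tiles."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile containing downtown Salt Lake City at zoom level 13.
slc_tile = deg_to_tile(-111.89105, 40.76078, 13)
```

Each zoom step doubles the grid in both directions, which is why higher zoom levels mean finer resolution per 256×256 tile.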
```
Cord1=(-111.89105,40.76078) # Left Top Lonlat
Cord2=(-111.8,40.7)# Right Bottom Lonlat
Zoom=13 # Zoom level
UTAH.addcord(*Cord1,*Cord2,Zoom)
UTAH.download()
tiles=UTAH.savetiles(path="./Image",format="tif")
```
****
The Vector and Raster classes handle I/O and can transform data into raster or vector objects.
For instance, we use a shapefile downloaded from https://gis.utah.gov/ as the label source to generate ground truth.
If the timestamps of the two data sources (vector and raster) are nearly the same, you can obtain a high-quality dataset.
****
```
from Data.IO.Vector import Vector
Building=Vector("/workspace/data/UTAH/Buildings_shp/Buildings/Buildings.shp")
```
****
Most SQLite-based MBTiles vector databases have multiple layers, while WKT-based shapefiles and GeoJSON files usually have a single layer.
Normally the layer name is the class name, and it must be set as the default layer via the `getDefaultLayerbyName` function.
****
```
Building.getDefaultLayerbyName("Buildings")
```
****
#### Step 2:
If the data is used for model training, we need labels, which can be generated by rasterizing a vector file. Normally data is labeled by hand, but manual labeling does not scale to huge numbers of objects in high-resolution imagery. OSM vector data has a worldwide version stored in an SQLite-based MBTiles file system that can be decoded with the GDAL library.
The Vector and Raster classes are the core of the data I/O. Rasterisation (or rasterization) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes).[1][2] The rasterised image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterisation may also refer to the technique of drawing 3D models, or the conversion of 2D rendering primitives such as polygons and line segments into a rasterized format.
Map data has better relative accuracy than one-off human labeling, which means vector maps have the potential to serve as ground truth. Transforming existing vector data into raster data is therefore an indispensable method for generating training data in deep-learning-based computer vision tasks.
Rasterize:

****
```
label=Building.generate(tiles,output_path="./Label")
```
If we write the 'image' and 'label' file paths to a CSV/JSON file, we get a ready-to-use dataset for a deep learning training workflow.
We can display a label and image pair like this:
```
import tifffile as tif
import matplotlib.pyplot as plt
image=tif.imread(tiles[0])
label=tif.imread(label[0])
plt.imshow(image),plt.show()
plt.imshow(label),plt.show()
```
<a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/get-started/try-apache-beam-java.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Try Apache Beam - Java
In this notebook, we set up a Java development environment and work through a simple example using the [DirectRunner](https://beam.apache.org/documentation/runners/direct/). You can explore other runners with the [Beam Capability Matrix](https://beam.apache.org/documentation/runners/capability-matrix/).
To navigate through different sections, use the table of contents. From **View** drop-down list, select **Table of contents**.
To run a code cell, you can click the **Run cell** button at the top left of the cell, or select it and press **`Shift+Enter`**. Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see [Welcome to Colaboratory!](https://colab.sandbox.google.com/notebooks/welcome.ipynb).
# Setup
First, you need to set up your environment.
```
# Run and print a shell command.
def run(cmd):
print('>> {}'.format(cmd))
!{cmd} # This is magic to run 'cmd' in the shell.
print('')
# Copy the input file into the local filesystem.
run('mkdir -p data')
run('gsutil cp gs://dataflow-samples/shakespeare/kinglear.txt data/')
```
## Installing development tools
Let's start by installing Java. We'll use the `default-jdk`, which uses [OpenJDK](https://openjdk.java.net/). This will take a while, so feel free to go for a walk or do some stretching.
**Note:** Alternatively, you could install the proprietary [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) instead.
```
# Update and upgrade the system before installing anything else.
run('apt-get update > /dev/null')
run('apt-get upgrade > /dev/null')
# Install the Java JDK.
run('apt-get install default-jdk > /dev/null')
# Check the Java version to see if everything is working well.
run('javac -version')
```
Now, let's install [Gradle](https://gradle.org/), which we'll need to automate the build and running processes for our application.
**Note:** Alternatively, you could install and configure [Maven](https://maven.apache.org/) instead.
```
import os
# Download the gradle source.
gradle_version = 'gradle-5.0'
gradle_path = f"/opt/{gradle_version}"
if not os.path.exists(gradle_path):
run(f"wget -q -nc -O gradle.zip https://services.gradle.org/distributions/{gradle_version}-bin.zip")
run('unzip -q -d /opt gradle.zip')
run('rm -f gradle.zip')
# We're choosing to use the absolute path instead of adding it to the $PATH environment variable.
def gradle(args):
run(f"{gradle_path}/bin/gradle --console=plain {args}")
gradle('-v')
```
## build.gradle
We'll also need a [`build.gradle`](https://guides.gradle.org/creating-new-gradle-builds/) file which will allow us to invoke some useful commands.
```
%%writefile build.gradle
plugins {
// id 'idea' // Uncomment for IntelliJ IDE
// id 'eclipse' // Uncomment for Eclipse IDE
// Apply java plugin and make it a runnable application.
id 'java'
id 'application'
// 'shadow' allows us to embed all the dependencies into a fat jar.
id 'com.github.johnrengelman.shadow' version '4.0.3'
}
// This is the path of the main class, stored within ./src/main/java/
mainClassName = 'samples.quickstart.WordCount'
// Declare the sources from which to fetch dependencies.
repositories {
mavenCentral()
}
// Java version compatibility.
sourceCompatibility = 1.8
targetCompatibility = 1.8
// Use the latest Apache Beam major version 2.
// You can also lock into a minor version like '2.9.+'.
ext.apacheBeamVersion = '2.+'
// Declare the dependencies of the project.
dependencies {
shadow "org.apache.beam:beam-sdks-java-core:$apacheBeamVersion"
runtime "org.apache.beam:beam-runners-direct-java:$apacheBeamVersion"
runtime "org.slf4j:slf4j-api:1.+"
runtime "org.slf4j:slf4j-jdk14:1.+"
testCompile "junit:junit:4.+"
}
// Configure 'shadowJar' instead of 'jar' to set up the fat jar.
shadowJar {
baseName = 'WordCount' // Name of the fat jar file.
classifier = null // Set to null, otherwise 'shadow' appends a '-all' to the jar file name.
manifest {
attributes('Main-Class': mainClassName) // Specify where the main class resides.
}
}
```
## Creating the directory structure
Java and Gradle expect a specific [directory structure](https://docs.gradle.org/current/userguide/organizing_gradle_projects.html). This helps organize large projects into a standard structure.
For now, we only need a place where our quickstart code will reside. That has to go within `./src/main/java/`.
```
run('mkdir -p src/main/java/samples/quickstart')
```
# Minimal word count
The following example is the "Hello, World!" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word.
There are many scenarios where all the data does not fit in memory. Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments.
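For intuition, the pipeline's four transforms can be sketched in plain Python on an in-memory list of lines. The Beam version additionally streams from files and distributes the work; also note Python's `re` module lacks `\p{L}`, so an ASCII approximation of the word pattern is used here:

```python
import re
from collections import Counter

def word_count(lines):
    """Mirror the Beam pipeline's transforms on an in-memory list of lines."""
    words = [w for line in lines for w in re.split(r"[^a-zA-Z]+", line)]  # Find words
    words = [w for w in words if w]                                       # Filter empty words
    counts = Counter(words)                                               # Count words
    return [f"{word}: {n}" for word, n in sorted(counts.items())]         # Format results

word_count(["the quick fox", "the lazy dog"])
```

Each list comprehension above corresponds to one `.apply(...)` stage in the Java pipeline below.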
## WordCount.java
```
%%writefile src/main/java/samples/quickstart/WordCount.java
package samples.quickstart;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;
import java.util.Arrays;
public class WordCount {
public static void main(String[] args) {
String inputsDir = "data/*";
String outputsPrefix = "outputs/part";
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
Pipeline pipeline = Pipeline.create(options);
pipeline
.apply("Read lines", TextIO.read().from(inputsDir))
.apply("Find words", FlatMapElements.into(TypeDescriptors.strings())
.via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
.apply("Filter empty words", Filter.by((String word) -> !word.isEmpty()))
.apply("Count words", Count.perElement())
.apply("Write results", MapElements.into(TypeDescriptors.strings())
.via((KV<String, Long> wordCount) ->
wordCount.getKey() + ": " + wordCount.getValue()))
.apply(TextIO.write().to(outputsPrefix));
pipeline.run();
}
}
```
## Build and run
Let's first check what the final file system structure looks like. These are all the files required to build and run our application.
* `build.gradle` - build configuration for Gradle
* `src/main/java/samples/quickstart/WordCount.java` - application source code
* `data/kinglear.txt` - input data, this could be any file or files
We are now ready to build the application using `gradle build`.
```
# Build the project.
gradle('build')
# Check the generated build files.
run('ls -lh build/libs/')
```
There are two files generated:
* The `content.jar` file, the application generated from the regular `build` command. It's only a few kilobytes in size.
* The `WordCount.jar` file, with the `baseName` we specified in the `shadowJar` section of the `build.gradle` file. It's several megabytes in size, with all the required libraries it needs to run embedded in it.
The file we're actually interested in is the fat JAR file `WordCount.jar`. To run the fat JAR, we'll use the `gradle runShadow` command.
```
# Run the shadow (fat jar) build.
gradle('runShadow')
# Sample the first 20 results, remember there are no ordering guarantees.
run('head -n 20 outputs/part-00000-of-*')
```
## Distributing your application
We can run our fat JAR file as long as we have a Java Runtime Environment installed.
To distribute, we copy the fat JAR file and run it with `java -jar`.
```
# You can now distribute and run your Java application as a standalone jar file.
run('cp build/libs/WordCount.jar .')
run('java -jar WordCount.jar')
# Sample the first 20 results, remember there are no ordering guarantees.
run('head -n 20 outputs/part-00000-of-*')
```
# Word count with comments
Below is mostly the same code as above, but with comments explaining every line in more detail.
```
%%writefile src/main/java/samples/quickstart/WordCount.java
package samples.quickstart;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;
import java.util.Arrays;
public class WordCount {
public static void main(String[] args) {
String inputsDir = "data/*";
String outputsPrefix = "outputs/part";
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
Pipeline pipeline = Pipeline.create(options);
// Store the word counts in a PCollection.
// Each element is a KeyValue of (word, count) of types KV<String, Long>.
PCollection<KV<String, Long>> wordCounts =
// The input PCollection is an empty pipeline.
pipeline
// Read lines from a text file.
.apply("Read lines", TextIO.read().from(inputsDir))
// Element type: String - text line
// Use a regular expression to iterate over all words in the line.
// FlatMapElements will yield an element for every element in an iterable.
.apply("Find words", FlatMapElements.into(TypeDescriptors.strings())
.via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
// Element type: String - word
// Keep only non-empty words.
.apply("Filter empty words", Filter.by((String word) -> !word.isEmpty()))
// Element type: String - word
// Count each unique word.
.apply("Count words", Count.perElement());
// Element type: KV<String, Long> - key: word, value: counts
// We can process a PCollection through other pipelines, too.
// The input PCollection are the wordCounts from the previous step.
wordCounts
// Format the results into a string so we can write them to a file.
.apply("Write results", MapElements.into(TypeDescriptors.strings())
.via((KV<String, Long> wordCount) ->
wordCount.getKey() + ": " + wordCount.getValue()))
// Element type: str - text line
// Finally, write the results to a file.
.apply(TextIO.write().to(outputsPrefix));
// We have to explicitly run the pipeline, otherwise it's only a definition.
pipeline.run();
}
}
# Build and run the project. The 'runShadow' task implicitly does a 'build'.
gradle('runShadow')
# Sample the first 20 results, remember there are no ordering guarantees.
run('head -n 20 outputs/part-00000-of-*')
```
# Distributed scraping: multiprocessing
**Speed up scraping with distributed crawling and parsing. I'm going to scrape [my website](https://mofanpy.com/), but via a local server at "http://127.0.0.1:4000/" to eliminate variation in download speed, which makes the timing comparison more accurate. You can use "https://mofanpy.com/" instead, since you cannot access my "http://127.0.0.1:4000/".**
**We are going to scrape all web pages on the site and return the title and URL of each page.**
```
import multiprocessing as mp
import time
from urllib.request import urlopen, urljoin
from bs4 import BeautifulSoup
import re
base_url = "http://127.0.0.1:4000/"
# base_url = 'https://mofanpy.com/'
# DON'T OVER CRAWL THE WEBSITE OR YOU MAY NEVER VISIT AGAIN
if base_url != "http://127.0.0.1:4000/":
restricted_crawl = True
else:
restricted_crawl = False
```
**Create a crawl function to open a url in parallel.**
```
def crawl(url):
response = urlopen(url)
time.sleep(0.1) # slightly delay for downloading
return response.read().decode()
```
**Create a parse function to find all results we need in parallel**
```
def parse(html):
soup = BeautifulSoup(html, 'lxml')
urls = soup.find_all('a', {"href": re.compile('^/.+?/$')})
title = soup.find('h1').get_text().strip()
page_urls = set([urljoin(base_url, url['href']) for url in urls])
url = soup.find('meta', {'property': "og:url"})['content']
return title, page_urls, url
```
## Normal way
**First, without multiprocessing, to establish a baseline speed. Use Python sets to track which URLs we have already seen and which we haven't.**
```
unseen = set([base_url,])
seen = set()
count, t1 = 1, time.time()
while len(unseen) != 0: # still get some url to visit
if restricted_crawl and len(seen) > 20:
break
print('\nDistributed Crawling...')
htmls = [crawl(url) for url in unseen]
print('\nDistributed Parsing...')
results = [parse(html) for html in htmls]
print('\nAnalysing...')
seen.update(unseen) # seen the crawled
unseen.clear() # nothing unseen
for title, page_urls, url in results:
print(count, title, url)
count += 1
unseen.update(page_urls - seen) # get new url to crawl
print('Total time: %.1f s' % (time.time()-t1, )) # 53 s
```
## multiprocessing
**Create a process pool and scrape in parallel.**
```
unseen = set([base_url,])
seen = set()
pool = mp.Pool(4)
count, t1 = 1, time.time()
while len(unseen) != 0: # still get some url to visit
if restricted_crawl and len(seen) > 20:
break
print('\nDistributed Crawling...')
crawl_jobs = [pool.apply_async(crawl, args=(url,)) for url in unseen]
htmls = [j.get() for j in crawl_jobs] # request connection
print('\nDistributed Parsing...')
parse_jobs = [pool.apply_async(parse, args=(html,)) for html in htmls]
results = [j.get() for j in parse_jobs] # parse html
print('\nAnalysing...')
seen.update(unseen) # seen the crawled
unseen.clear() # nothing unseen
for title, page_urls, url in results:
print(count, title, url)
count += 1
unseen.update(page_urls - seen) # get new url to crawl
print('Total time: %.1f s' % (time.time()-t1, )) # 16 s !!!
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Load the cancer data
```
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
cancer
cancer.keys()
df = pd.DataFrame(data=cancer["data"], columns=cancer["feature_names"])
df["target"] = cancer["target"]
df.head()
df.shape
df.isna().sum()
```
## EDA
```
# 1 : benign
# 0 : malignant
sns.countplot(data=df, x="target", saturation=1, alpha=0.6, edgecolor="black", palette={1: "green", 0: "red"});
sns.pairplot(data=df, hue="target",
vars=['mean radius', 'mean texture', 'mean area', 'mean perimeter', 'mean smoothness']);
sns.scatterplot(data=df, x="mean area", y="mean smoothness", hue="target");
plt.figure(figsize=(20, 10))
sns.heatmap(data=df.corr(), annot=True);
```
## Model Training
```
X = df.drop(columns="target").to_numpy()
X
y = df["target"].to_numpy()
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
from sklearn.svm import SVC
clf_svm = SVC()
clf_svm.fit(X_train, y_train)
```
## Evaluate the Model
```
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
predictions = clf_svm.predict(X_test)
print(classification_report(y_test, predictions))
print(accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions))
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=clf_svm, X=X_train, y=y_train, cv=10)
print(accuracies)
print(f"Mean: {accuracies.mean() * 100} %")
print(f"Std: {accuracies.std() * 100} %")
# 13 + 1 are misclassified
sns.heatmap(data=confusion_matrix(y_test, predictions), annot=True);
```
## Improve Model Results
###### Way 1, Data Normalization (Feature Scaling)
```
X = df.drop(columns="target").to_numpy()
y = df["target"].to_numpy()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import SVC
clf_svm = SVC()
clf_svm.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
predictions = clf_svm.predict(X_test)
print(classification_report(y_test, predictions))
print(accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions))
# 2 + 2 are misclassified
sns.heatmap(data=confusion_matrix(y_test, predictions), annot=True);
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=clf_svm, X=X_train, y=y_train, cv=10, verbose=1) # cv = k = 10
print(accuracies)
print(f"Mean: {accuracies.mean() * 100} %")
print(f"Std: {accuracies.std() * 100} %")
```
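For intuition, `StandardScaler` standardizes each feature column to zero mean and unit variance with z = (x − mean) / std, using the population standard deviation (ddof=0). A quick check with NumPy alone:

```python
import numpy as np

X_demo = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
# This is what StandardScaler.fit_transform computes per feature column.
Z = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)
```

Standardizing matters for SVMs in particular because the RBF kernel depends on distances, so features with large raw scales would otherwise dominate.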
###### Way 2, SVM HyperParameters Optimzation from up (Feature Scaling)
```
%%time
from sklearn.model_selection import GridSearchCV
param_grid = {
"C": [0.1, 1, 10, 100],
"gamma": [1, 0.1, 0.01, 0.001],
"kernel": ["rbf"]
}
grid_search = GridSearchCV(estimator=clf_svm, param_grid=param_grid, scoring="accuracy", n_jobs=-1, cv=10, verbose=1)
grid_search.fit(X_train, y_train)
print(grid_search.best_estimator_)
print(grid_search.best_params_)
print(grid_search.best_score_)
print(grid_search.best_index_)
```
###### Best parameters found by grid_search (the score differs from the grid's internal CV score)
```
grid_predictions = grid_search.predict(X_test)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
print(classification_report(y_test, grid_predictions))
print(accuracy_score(y_test, grid_predictions))
print(confusion_matrix(y_test, grid_predictions))
```
###### Manually refitting with the best parameters (the score differs from the grid's internal CV score)
```
X = df.drop(columns="target").to_numpy()
y = df["target"].to_numpy()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import SVC
clf_svm = SVC(C=10, gamma=0.01, kernel='rbf')
clf_svm.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
predictions = clf_svm.predict(X_test)
print(classification_report(y_test, predictions))
print(accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions))
```
###### The SVM score differs from run to run
# LAB 5a: Training Keras model on Cloud AI Platform.
**Learning Objectives**
1. Setup up the environment
1. Create trainer module's task.py to hold hyperparameter argparsing code
1. Create trainer module's model.py to hold Keras model code
1. Run trainer module package locally
1. Submit training job to Cloud AI Platform
1. Submit hyperparameter tuning job to Cloud AI Platform
## Introduction
After having tested our training pipeline both locally and in the cloud on a subset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model.
In this notebook, we'll be training our Keras model at scale using Cloud AI Platform.
In this lab, we will set up the environment, create the trainer module's task.py to hold hyperparameter argparsing code, create the trainer module's model.py to hold Keras model code, run the trainer module package locally, submit a training job to Cloud AI Platform, and submit a hyperparameter tuning job to Cloud AI Platform.
## Set up environment variables and load necessary libraries
Import necessary libraries.
```
import os
```
### Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
# TODO: Change these to try this notebook out
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.0"
%%bash
gcloud config set project ${PROJECT}
gcloud config set compute/region ${REGION}
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
## Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation. If not, go back to lab [2_prepare_babyweight](../solutions/2_prepare_babyweight.ipynb) to create them.
```
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
```
Now that we have the [Keras wide-and-deep code](../solutions/4c_keras_wide_and_deep_babyweight.ipynb) working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
## Train on Cloud AI Platform
Training on Cloud AI Platform requires:
* Making the code a Python package
* Using gcloud to submit the training code to [Cloud AI Platform](https://console.cloud.google.com/ai-platform)
Ensure that the AI Platform API is enabled by going to this [link](https://console.developers.google.com/apis/library/ml.googleapis.com).
### Move code into a Python package
A Python package is simply a collection of one or more `.py` files along with an `__init__.py` file to identify the containing directory as a package. The `__init__.py` sometimes contains initialization code but for our purposes an empty file suffices.
The bash command `touch` creates an empty file in the specified location; the directory `babyweight` should already exist.
```
%%bash
mkdir -p babyweight/trainer
touch babyweight/trainer/__init__.py
```
We then use the `%%writefile` magic to write the contents of the cell below to a file called `task.py` in the `babyweight/trainer` folder.
### Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file `babyweight/trainer/task.py` which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the `argparse` module. Look at how `batch_size` is passed to the model in the code below. Use this as an example to parse arguments for the following variables
- `nnsize` which represents the hidden layer sizes to use for DNN feature columns
- `nembeds` which represents the embedding size of a cross of n key real-valued parameters
- `train_examples` which represents the number of examples (in thousands) to run the training job
- `eval_steps` which represents the positive number of steps for which to evaluate model
Be sure to include a default value for the parsed arguments above and specify the `type` if necessary.
```
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from babyweight.trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes for DNN -- provide space-separated layers",
nargs="+",
type=int,
default=[128, 32, 4]
)
parser.add_argument(
"--nembeds",
help="Embedding size of a cross of n key real-valued parameters",
type=int,
default=3
)
parser.add_argument(
"--num_epochs",
help="Number of epochs to train the model.",
type=int,
default=10
)
parser.add_argument(
"--train_examples",
help="""Number of examples (in thousands) to run the training job over.
If this is more than actual # of examples available, it cycles through
them. So specifying 1000 here when you have only 100k examples makes
this 10 epochs.""",
type=int,
default=5000
)
parser.add_argument(
"--eval_steps",
help="""Positive number of steps for which to evaluate model. Default
to None, which means to evaluate until input_fn raises an end-of-input
exception""",
type=int,
default=None
)
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Modify some arguments
arguments["train_examples"] *= 1000
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(arguments)
```
In the same way we can write to the file `model.py` the model that we developed in the previous notebooks.
### Create trainer module's model.py to hold Keras model code.
To create our `model.py`, we'll use the code we wrote for the Wide & Deep model. Look back at your [9_keras_wide_and_deep_babyweight](../solutions/9_keras_wide_and_deep_babyweight.ipynb) notebook and copy/paste the necessary code from that notebook into its place in the cell below.
```
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
# Determine CSV, label, and key columns
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
print("mode = {}".format(mode))
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Prefetch to overlap input processing with training
# (tf.data.experimental.AUTOTUNE can pick the buffer size automatically)
dataset = dataset.prefetch(buffer_size=1)
return dataset
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]
}
inputs = {**wide_inputs, **deep_inputs}
return inputs
def categorical_fc(name, values):
"""Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Categorical and indicator column of categorical feature.
"""
# Currently vocabulary_list feature columns don't work correctly with Keras
# training with TF 2.0 when deploying with TF 1.14 for CAIP workaround.
# Hash bucket is a nice temporary substitute.
# cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
# key=name, vocabulary_list=values)
cat_column = tf.feature_column.categorical_column_with_hash_bucket(
key=name, hash_bucket_size=8)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
"""Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
"""
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# Cross all the wide columns, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
"""Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
"""
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# Linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
"""Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layers
inputs = create_input_layers()
# Create feature columns for both wide and deep
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
args["eval_data_path"], 1000, tf.estimator.ModeKeys.EVAL)
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
```
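As a quick sanity check (not part of the trainer package), the `rmse` metric defined in `model.py` above can be reproduced with plain NumPy, confirming the formula `sqrt(mean((y_pred - y_true)^2))`:

```python
# NumPy version of the rmse metric from model.py, for a quick sanity check.
import numpy as np

def rmse(y_true, y_pred):
    # Root of the mean squared difference between predictions and labels.
    return np.sqrt(np.mean(np.square(y_pred - y_true)))

y_true = np.array([7.0, 8.0])
y_pred = np.array([7.0, 6.0])
print(rmse(y_true, y_pred))  # sqrt((0 + 4) / 2) = sqrt(2) ≈ 1.4142
```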
## Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the `--train_examples` flag so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change it so that we can train on all the data. Even for this subset, this takes about *3 minutes* in which you won't see any output ...
### Run trainer module package locally.
We can run a very small training job over a single file with a small batch size, 1 epoch, 1 train example, and 1 eval step.
```
%%bash
OUTDIR=babyweight_trained
rm -rf ${OUTDIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python3 -m trainer.task \
--job-dir=./tmp \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--batch_size=10 \
--num_epochs=1 \
--train_examples=1 \
--eval_steps=1
```
## Dockerized module
Since we are using TensorFlow 2.0 and it is new, we will use a container image to run the code on AI Platform.
Once TensorFlow 2.0 is natively supported on AI Platform, you will be able to simply do (without having to build a container):
<pre>
gcloud ai-platform jobs submit training ${JOBNAME} \
--region=${REGION} \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=${OUTDIR} \
--staging-bucket=gs://${BUCKET} \
--scale-tier=STANDARD_1 \
--runtime-version=${TFVERSION} \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
</pre>
### Create Dockerfile
We need to create a container with everything we need to be able to run our model. This includes our trainer module package, Python 3, and the libraries we use, such as the most up-to-date TensorFlow 2.0 version.
```
%%writefile babyweight/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tensorflow==2.0
ENV PYTHONPATH ${PYTHONPATH}:/babyweight
ENTRYPOINT ["python3", "babyweight/trainer/task.py"]
```
### Build and push container image to repo
Now that we have created our Dockerfile, we need to build and push our container image to our project's container repo. To do this, we'll create a small shell script that we can call from bash.
```
%%writefile babyweight/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t ${IMAGE_URI} ./
echo "Pushing $IMAGE_URI"
docker push ${IMAGE_URI}
```
**Note:** If you get a permissions/stat error when running push_docker.sh from Notebooks, do it from CloudShell. Open CloudShell on the GCP Console, then:
* `git clone https://github.com/GoogleCloudPlatform/training-data-analyst`
* `cd training-data-analyst/courses/machine_learning/deepdive2/structured/solutions/babyweight`
* `bash push_docker.sh`
This step takes 5-10 minutes to run.
```
%%bash
cd babyweight
bash push_docker.sh
```
### Test container locally
Before we submit our training job to Cloud AI Platform, let's make sure our container that we just built and pushed to our project's container repo works perfectly. We can do that by calling our container in bash and passing the necessary user_args for our task.py's parser.
```
%%bash
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Running $IMAGE_URI"
docker run ${IMAGE_URI} \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=gs://${BUCKET}/babyweight/trained_model \
--batch_size=10 \
--num_epochs=10 \
--train_examples=1 \
--eval_steps=1
```
## Train on Cloud AI Platform
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
```
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBID} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
```
When I ran it, I used train_examples=2000000. When training finished, I filtered the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
## Hyperparameter tuning.
All of these are command-line parameters to my program. To do hyperparameter tuning, create `hyperparam.yaml` and pass it as `--config hyperparam.yaml`.
This step will take <b>up to 2 hours</b> -- you can increase `maxParallelTrials` or reduce `maxTrials` to get it done faster. Since `maxParallelTrials` is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
```
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
--config=hyperparam.yaml \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100
```
## Repeat training
This time with tuned parameters for `batch_size` and `nembeds`.
```
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
```
## Lab Summary:
In this lab, we set up the environment, created the trainer module's task.py to hold hyperparameter argparsing code, created the trainer module's model.py to hold Keras model code, ran the trainer module package locally, submitted a training job to Cloud AI Platform, and submitted a hyperparameter tuning job to Cloud AI Platform.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
### Yelp Project Part II: Feature Engineering - Review Analysis - LDA
```
import pandas as pd
df = pd.read_csv('restaurant_reviews.csv', encoding ='utf-8')
df.head()
# load the training/testing ids so we can fit the LDA on the training set
# and then predict the topic categories of the testing set
train_id = pd.read_csv('train_set_id.csv', encoding ='utf-8')
train_id.columns = ['business_id']
test_id = pd.read_csv('test_set_id.csv', encoding ='utf-8')
test_id.columns = ['business_id']
df_train = train_id.merge(df, how = 'left', left_on='business_id', right_on='business_id')
df_train.dropna(how='any', inplace = True)
df_test = test_id.merge(df, how = 'left', left_on='business_id', right_on='business_id')
df_test.dropna(how='any', inplace = True)
df_train.shape
df_test.shape
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=0.1,
max_features=10000)
X_train = count.fit_transform(df_train['text'].values)
X_test = count.transform(df_test['text'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components = 10,
random_state = 1,
learning_method = 'online',
max_iter = 15,
verbose=1,
n_jobs = -1)
X_topics_train = lda.fit_transform(X_train)
X_topics_test = lda.transform(X_test)
n_top_words = 30
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print('Topic %d:' % (topic_idx))
print(" ".join([feature_names[i]
for i in topic.argsort()
[:-n_top_words - 1: -1]]))
# identify the column index of the max values in the rows, which is the class of each row
import numpy as np
idx = np.argmax(X_topics_train, axis=1)
df_train['label'] = (df_train['stars'] >= 4)*1
df_train['Topic'] = idx
df_train.head()
df_train.to_csv('review_train.csv', index = False)
df_test['label'] = (df_test['stars'] >= 4)*1
# identify the column index of the max values in the rows, which is the class of each row
import numpy as np
idx = np.argmax(X_topics_test, axis=1)
df_test['Topic'] = idx
df_test.head()
df_test.to_csv('review_test.csv', index = False)
import pandas as pd
import numpy as np
df_train = pd.read_csv('review_train.csv')
df_test = pd.read_csv('review_test.csv')
df_train['score'] = df_train['label'].replace(0, -1)
df_test['score'] = df_test['label'].replace(0, -1)
len(df_train['business_id'].unique())
topic_train = df_train.groupby(['business_id', 'Topic']).mean()['score'].unstack().fillna(0).reset_index()
topic_train.index.name = None
topic_train.columns = ['business_id', 'Topic0', 'Topic1', 'Topic2', 'Topic3', 'Topic4',
'Topic5', 'Topic6', 'Topic7', 'Topic8', 'Topic9']
topic_train.head()
topic_train.to_csv('train_topic_score.csv', index = False)
topic_test = df_test.groupby(['business_id', 'Topic']).mean()['score'].unstack().fillna(0).reset_index()
topic_test.index.name = None
topic_test.columns = ['business_id', 'Topic0', 'Topic1', 'Topic2', 'Topic3', 'Topic4',
'Topic5', 'Topic6', 'Topic7', 'Topic8', 'Topic9']
topic_test.head()
topic_test.to_csv('test_topic_score.csv', index = False)
print(topic_train.shape)
print(topic_test.shape)
topic = pd.concat([topic_train, topic_test])
topic.to_csv('topic_score.csv', index = False)
# inspect the reviews most strongly associated with topic 0
# (pattern adapted from a movie-review example, hence the old "horror" name;
# sort descending so the strongest matches come first)
top_reviews = X_topics_train[:, 0].argsort()[::-1]
for iter_idx, review_idx in enumerate(top_reviews[:3]):
    print('\nTopic 0 review #%d:' % (iter_idx + 1))
    print(df_train['text'].iloc[review_idx][:300], '...')
#### Now for the example from the slides
# E.g. take restaurant 'cInZkUSckKwxCqAR7s2ETw' as an example: First Watch
eg_res = df[df['business_id'] == 'cInZkUSckKwxCqAR7s2ETw']
eg = pd.read_csv('topic_score.csv')
eg[eg['business_id'] == 'cInZkUSckKwxCqAR7s2ETw']
eg_res
eg_res.loc[715, :]['text']
```
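The topic-assignment step used above (`np.argmax(..., axis=1)`) can be illustrated on a tiny hypothetical document-topic matrix: each row is a topic distribution over one review, and the dominant topic is the column index of the row maximum.

```python
# Each row of the LDA output is a topic distribution over one review;
# the dominant topic is the column index of the row maximum.
import numpy as np

X_topics = np.array([[0.1, 0.7, 0.2],   # review 0: mostly topic 1
                     [0.6, 0.3, 0.1]])  # review 1: mostly topic 0
idx = np.argmax(X_topics, axis=1)
print(idx)  # [1 0]
```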
# Distance-regular graph parameter checking in Sage
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1418409.svg)](https://doi.org/10.5281/zenodo.1418409)
A Sage package for checking the feasibility of distance-regular graph parameter sets.
A more detailed description, along with some results, is available in a [manuscript](https://arxiv.org/abs/1803.10797) currently available on arXiv.
## Contents
### `drg`
The `drg` folder contains the package source. After you make sure that Sage sees this folder, you can import it as a Python module.
```
%display latex
import drg
p = drg.DRGParameters([80, 63, 12], [1, 12, 60])
p.check_feasible()
```
You can also give an intersection array with parameters.
```
r = var("r")
fam = drg.DRGParameters([2*r^2*(2*r + 1), (2*r - 1)*(2*r^2 + r + 1), 2*r^2], [1, 2*r^2 , r*(4*r^2 - 1)])
fam.check_feasible()
fam1 = fam.subs(r == 1)
fam1
fam2 = fam.subs(r == 2)
fam2
fam2.check_feasible()
```
### `jupyter`
A collection of sample Jupyter notebooks giving some nonexistence results.
* [`Demo.ipynb`](jupyter/Demo.ipynb) - demonstration of the `sage-drg` package
* [`DRG-104-70-25-1-7-80.ipynb`](jupyter/DRG-104-70-25-1-7-80.ipynb) - proof of nonexistence of a distance-regular graph with intersection array $\{104, 70, 25; 1, 7, 80\}$ with $1470$ vertices
* [`DRG-135-128-16-1-16-120.ipynb`](jupyter/DRG-135-128-16-1-16-120.ipynb) - proof of nonexistence of a distance-regular graph with intersection array $\{135, 128, 16; 1, 16, 120\}$ with $1360$ vertices
* [`DRG-234-165-12-1-30-198.ipynb`](jupyter/DRG-234-165-12-1-30-198.ipynb) - proof of nonexistence of a distance-regular graph with intersection array $\{234, 165, 12; 1, 30, 198\}$ with $1600$ vertices
* [`DRG-55-54-50-35-10-bipartite.ipynb`](jupyter/DRG-55-54-50-35-10-bipartite.ipynb) - proof of nonexistence of a bipartite distance-regular graph with intersection array $\{55, 54, 50, 35, 10; 1, 5, 20, 45, 55\}$ with $3500$ vertices
* [`DRG-d3-2param.ipynb`](jupyter/DRG-d3-2param.ipynb) - proof of nonexistence of a family of distance-regular graphs with intersection arrays $\{(2r+1)(4r+1)(4t-1), 8r(4rt-r+2t), (r+t)(4r+1); 1, (r+t)(4r+1), 4r(2r+1)(4t-1)\}$ ($r, t \ge 1$)
* [`QPoly-24-20-36_11-1-30_11-24.ipynb`](jupyter/QPoly-24-20-36_11-1-30_11-24.ipynb) - proof of nonexistence of a $Q$-polynomial association scheme with Krein array $\{24, 20, 36/11; 30/11, 24\}$ with $225$ vertices
* [`QPoly-d3-1param-odd.ipynb`](jupyter/QPoly-d3-1param-odd.ipynb) - proof of nonexistence of a $Q$-polynomial association scheme with Krein array $\{2r^2-1, 2r^2-2, r^2+1; 1, 2, r^2-1\}$ and $r$ odd
* [`QPoly-d4-LS-odd.ipynb`](jupyter/QPoly-d4-LS-odd.ipynb) - proof of nonexistence of a $Q$-polynomial association scheme with Krein array $\{m, m-1, m(r^2-1)/r^2, m-r^2+1; 1, m/r^2, r^2-1, m\}$ and $m$ odd
* [`QPoly-d4-tight4design.ipynb`](jupyter/QPoly-d4-tight4design.ipynb) - proof of nonexistence of a $Q$-polynomial association scheme with Krein array $\{r^2-4, r^2-9, 12(s-1)/s, 1; 1, 12/s, r^2-9, r^2-4\}$ ($s \ge 4$)
* [`QPoly-d5-1param-3mod4.ipynb`](jupyter/QPoly-d5-1param-3mod4.ipynb) - proof of nonexistence of a $Q$-polynomial association scheme with Krein array $\{(r^2+1)/2, (r^2-1)/2, (r^2+1)^2/(2r(r+1)), (r-1)(r^2+1)/(4r), (r^2+1)/(2r); 1, (r-1)(r^2+1)/(2r(r+1)), (r+1)(r^2+1)/(4r), (r-1)(r^2+1)/(2r), (r^2+1)/2\}$ and $r \equiv 3 \pmod{4}$
Conference and seminar presentations are also available as [RISE](https://github.com/damianavila/RISE) slideshows.
* [`FPSAC-sage-drg.ipynb`](jupyter/2019-07-04-fpsac/FPSAC-sage-drg.ipynb) - *Computing distance-regular graph and association scheme parameters in SageMath with `sage-drg`* software presentation at [FPSAC 2019](http://fpsac2019.fmf.uni-lj.si/), Ljubljana, Slovenia
* [`AGTSem-sage-drg.ipynb`](jupyter/2021-06-14-agtsem/AGTSem-sage-drg.ipynb) - *Computing distance-regular graph and association scheme parameters in SageMath with `sage-drg`* seminar talk at the [Algebraic Graph Theory Seminar](https://homepages.dcc.ufmg.br/~gabriel/AGT/), University of Waterloo, Ontario, Canada
* [`8ECM-sage-drg.ipynb`](jupyter/2021-06-22-8ecm/8ECM-sage-drg.ipynb) - *Computing distance-regular graph and association scheme parameters in SageMath with `sage-drg`* conference talk at the [8th European Congress of Mathematics](https://8ecm.si/), Graphs and Groups, Geometries and GAP - G2G2 minisymposium
## Citing
If you use `sage-drg` in your research, please cite both the paper and the software:
* J. Vidali. Using symbolic computation to prove nonexistence of distance-regular graphs. *Electron. J. Combin.*, 25(4)#P4.21, 2018. [`http://www.combinatorics.org/ojs/index.php/eljc/article/view/v25i4p21`](http://www.combinatorics.org/ojs/index.php/eljc/article/view/v25i4p21).
* J. Vidali. `jaanos/sage-drg`: `sage-drg` v0.9, 2019. [`https://github.com/jaanos/sage-drg/`](https://github.com/jaanos/sage-drg/), [`doi:10.5281/zenodo.1418409`](https://doi.org/10.5281/zenodo.1418409).
You may also want to cite other documents containing descriptions of features that were added since the above paper was written:
* J. Vidali. Computing distance-regular graph and association scheme parameters in SageMath with `sage-drg`. *Sém. Lothar. Combin.* 82B#105, 2019. [`https://www.mat.univie.ac.at/~slc/wpapers/FPSAC2019/105.pdf`](https://www.mat.univie.ac.at/~slc/wpapers/FPSAC2019/105.pdf)
+ support for general and *Q*-polynomial association schemes
* A. L. Gavrilyuk, J. Vidali, J. S. Williford. On few-class *Q*-polynomial association schemes: feasible parameters and nonexistence results, *Ars Math. Contemp.*, 20(1):103-127, 2021. [`doi:10.26493/1855-3974.2101.b76`](https://doi.org/10.26493/1855-3974.2101.b76).
+ triple intersection number solution finder and forbidden quadruples check
+ support for quadruple intersection numbers
### BibTeX
The above citations are given here in BibTeX format.
```latex
@article{v18,
AUTHOR = {Vidali, Jano\v{s}},
TITLE = {Using symbolic computation to prove nonexistence of distance-regular graphs},
JOURNAL = {Electron. J. Combin.},
FJOURNAL = {Electronic Journal of Combinatorics},
VOLUME = {25},
NUMBER = {4},
PAGES = {P4.21},
NOTE = {\url{http://www.combinatorics.org/ojs/index.php/eljc/article/view/v25i4p21}},
YEAR = {2018},
}
@software{v19a,
AUTHOR = {Vidali, Jano\v{s}},
TITLE = {{\tt jaanos/sage-drg}: {\tt sage-drg} v0.9},
NOTE = {\url{https://github.com/jaanos/sage-drg/},
\href{https://doi.org/10.5281/zenodo.1418409}{\texttt{doi:10.5281/zenodo.1418409}}},
YEAR = {2019},
}
@article{v19b,
AUTHOR = {Vidali, Jano\v{s}},
TITLE = {Computing distance-regular graph and association scheme parameters in SageMath with {\tt sage-drg}},
JOURNAL = {S\'{e}m. Lothar. Combin.},
FJOURNAL = {S\'{e}minaire Lotharingien de Combinatoire},
VOLUME = {82B},
PAGES = {#105},
NOTE = {\url{https://www.mat.univie.ac.at/~slc/wpapers/FPSAC2019/105.pdf}},
YEAR = {2019},
}
@article{gvw21,
AUTHOR = {Gavrilyuk, Alexander L. and Vidali, Jano\v{s} and Williford, Jason S.},
TITLE = {On few-class $Q$-polynomial association schemes: feasible parameters and nonexistence results},
JOURNAL = {Ars Math. Contemp.},
FJOURNAL = {Ars Mathematica Contemporanea},
VOLUME = {20},
NUMBER = {1},
PAGES = {103--127},
NOTE = {\href{https://doi.org/10.26493/1855-3974.2101.b76}{\texttt{doi:10.26493/1855-3974.2101.b76}}},
YEAR = {2021},
}
```
### Other uses
Additionally, `sage-drg` has been used in the following research:
* A. Gavrilyuk, S. Suda and J. Vidali. On tight 4-designs in Hamming association schemes, *Combinatorica*, 40(3):345-362, 2020. [`doi:10.1007/s00493-019-4115-z`](https://doi.org/10.1007/s00493-019-4115-z).
If you would like your research to be listed here, feel free to open an issue or pull request.
```
from __future__ import print_function, division
```
# Getting at Data
Both SunPy and Astropy have utilities for downloading data for your delectation. They are simple and easy to use, yet allow a great deal of personalisation and selection as your needs grow. Let us begin with SunPy.
## Acquiring Data with SunPy
### VSO
This is the main interface which SunPy uses to search for and find solar data. VSO stands for the Virtual Solar Observatory, a service which presents a homogeneous interface to heterogeneous data sets.
So what do we need?
```
from sunpy.net import vso
client = vso.VSOClient()
```
This is your client object. This is effectively the intermediary between yourself and the treasure chest of solar data available. You query VSO, then VSO queries all data providers which fit the limitations you imposed during your search command. The VSO client also handles the particulars of downloading the data onto your machine.
## Making a query
Let's kick off with an example: let's ask the veteran of solar imaging, SoHO, for some EIT data between January 1st and 2nd, 2001:
```
qr = client.query(vso.attrs.Time('2001/1/1', '2001/1/2'),
vso.attrs.Instrument('eit'))
```
`qr` is a Python list of response objects, each one a record that the VSO has found.
```
print(len(qr))
print(qr)
```
### Break it down
So we can pass many attributes to the VSO, in this case we started with time
<pre><code> vso.attrs.Time('2001/1/1','2001/1/2')</code></pre>
Start and end times for the query are given as strings; any date/time format understood by SunPy's parse_time function will work, e.g. the datetime objects we will look at later. Next we give it the instrument we want:
<pre><code> vso.attrs.Instrument('eit') </code></pre>
You don't have to pass it an instrument; if you prefer, the client will find data from all available instruments within the time range you've defined. Next, wavelength:
<pre><code> vso.attrs.Wave(14.2*u.nm, 12.3*u.nm)</code></pre>
We pass it a minimum and maximum wavelength. These have to be Astropy units quantities (in SI, for the love of coffee); if they are not, you will get an error.
For a full list of attributes that vso can take, use:
```
help(vso.attrs)
```
So we can combine multiple instrument queries and narrow down to smaller time windows to refine the query:
```
qr = client.query(vso.attrs.Time('2001/1/1T12:00:00', '2001/1/2T13:00:00'), vso.attrs.Instrument('eit') | vso.attrs.Instrument('mdi'))
len(qr)
```
### HEK
The Heliophysics Event Knowledgebase (HEK) is a repository of feature and event information about the Sun. Entries are generated both by automated algorithms and human observers.
We need to set up HEK in a similar way to VSO
```
from sunpy.net import hek
hek_client = hek.HEKClient()
```
Creating a very similar client as we saw with VSO above.
Given that HEK is a database of solar events of interest, the query has different requirements to VSO. It needs start and end times, and an event type. Again time objects can be defined as datetime objects or correctly formatted strings.
Event types are specified as uppercase two letter strings found on [the HEK website](http://www.lmsal.com/hek/VOEvent_Spec.html)
```
tstart = '2011/08/09 07:23:56'
tend = '2011/08/09 12:40:29'
event_type = 'FL'
result = hek_client.query(hek.attrs.Time(tstart,tend),
hek.attrs.EventType(event_type))
```
Notice that the HEK query is extremely similar to the VSO query style, with our attributes defined accordingly.
Instead of returning a plain list, HEK returns a list of dictionary objects. Each entry in a dictionary is a key-value pair that exactly corresponds to one of the event parameters. We can return the keywords using:
```
result[0].keys()
```
Remember, the HEK query we made returns all the flares in the time-range stored in the HEK, regardless of the feature recognition method. The HEK parameter which stores the feature recognition method is called “frm_name”. Using list comprehensions (which are very cool), it is easy to get a list of the feature recognition methods used to find each of the flares in the result object, for example:
```
for elem in result:
print(elem["frm_name"])
```
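The list comprehension itself was not shown above; as a sketch, it could look like this, with `set()` used to drop duplicate methods. Note that the stand-in `result` below (and its `frm_name` values) is hypothetical, standing in for the live HEK query result:

```python
# illustrative stand-in for the HEK query result above
# (these frm_name values are hypothetical examples)
result = [{"frm_name": "SSW Latest Events"},
          {"frm_name": "Flare Detective"},
          {"frm_name": "SSW Latest Events"}]

# one list comprehension collects the methods; set() removes duplicates
frm_names = sorted(set(elem["frm_name"] for elem in result))
print(frm_names)  # ['Flare Detective', 'SSW Latest Events']
```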
This way we can avoid troublesome doubling up of results. We can use the same `help(hek.attrs)` command as with VSO to find out further options.
## Acquiring data with AstroQuery
Astroquery supports a plethora of [services](https://astroquery.readthedocs.org/en/latest/#using-astroquery), all of which follow roughly the same API (application program interface). In its simplest form, the API involves queries based on coordinates or object names, e.g. using SIMBAD:
```
from astroquery.simbad import Simbad
result_table = Simbad.query_object("m1")
result_table.pprint(show_unit=True)
```
In this case the query is looking at a specific set of coordinates
```
from astropy import coordinates
import astropy.units as u
c = coordinates.SkyCoord("05h35m17.3s -05d23m28s", frame='icrs')
r = 5 * u.arcminute
result = Simbad.query_region(c, radius=r)
result.pprint(show_unit=True, max_width=80, max_lines=5)
```
These methods can be expanded to all the following modules
* SIMBAD Queries (astroquery.simbad)
* IRSA Dust Extinction Service Queries (astroquery.irsa_dust)
* NED Queries (astroquery.ned)
* Splatalogue Queries (astroquery.splatalogue)
* IRSA Image Server program interface (IBE) Queries (astroquery.ibe)
* IRSA Queries (astroquery.irsa)
* UKIDSS Queries (astroquery.ukidss)
* MAGPIS Queries (astroquery.magpis)
* NRAO Queries (astroquery.nrao)
* Besancon Queries (astroquery.besancon)
* NIST Queries (astroquery.nist)
* NVAS Queries (astroquery.nvas)
* GAMA Queries (astroquery.gama)
* ESO Queries (astroquery.eso)
* Atomic Line List (astroquery.atomic)
* ALMA Queries (astroquery.alma)
* Skyview Queries (astroquery.skyview)
* NASA ADS Queries (astroquery.nasa_ads)
* HEASARC Queries (astroquery.heasarc)
# Combining Queries and Plotting
Using astroquery and wcsaxes together we can download both an image and a star field and overplot them. To download an image we can use the SkyView service:
```
from astroquery.skyview import SkyView
import astropy.units as u
m42_images = SkyView.get_images(position='M42', survey=['2MASS-K'],
pixels=2000)
m42_images
```
<section class="objectives panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Plot this image using WCSAxes </h2>
</div>
<ul>
<li>
Create a WCS object.
</li>
<li>
Create a figure with the projection set to the WCS object
</li>
<li>
Plot the image.
</li>
</ul>
</section>
```
from astropy.wcs import WCS
import matplotlib.pyplot as plt
%matplotlib notebook
m42 = m42_images[0]
wcs = WCS(m42[0].header)
fig, ax = plt.subplots(subplot_kw={'projection':wcs})
im = ax.imshow(m42[0].data, cmap='gray', vmax=900)
plt.colorbar(im)
```
Download some catalog data:
```
from astroquery.irsa import Irsa
Irsa.ROW_LIMIT = 1e6
table = Irsa.query_region("m42", catalog="fp_psc", spatial="Cone",
radius=15*u.arcmin)
table.show_in_notebook()
table2 = table[table['h_m']< 12.]
fig, ax = plt.subplots(subplot_kw={'projection':wcs})
im = ax.imshow(m42[0].data, cmap='gray', vmax=900, interpolation='none')
ax.set_autoscale_on(False)
sc = plt.scatter(table2['ra'], table2['dec'], c=table2['h_m'],
cmap='viridis', transform=ax.get_transform('fk5'))
plt.colorbar(sc)
```
# Part 5 - Intro to Encrypted Programs
Believe it or not, it is possible to compute with encrypted data. In other words, it's possible to run a program where ALL of the variables in the program are encrypted!
In this tutorial, we're going to walk through very basic tools of encrypted computation. In particular, we're going to focus on one popular approach called Secure Multi-Party Computation. In this lesson, we'll learn how to build an encrypted calculator which can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is at first glance a rather strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple "shares", each of which operate like a private key. Typically, these "shares" will be distributed amongst 2 or more "owners". Thus, in order to decrypt the variable, all owners must agree to allow the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable "x". We could do so in the following way.
```
Q = 1234567891011
x = 25
import random
def encrypt(x):
share_a = random.randint(0,Q)
share_b = random.randint(0,Q)
share_c = (x - share_a - share_b) % Q
return (share_a, share_b, share_c)
encrypt(x)
```
As you can see here, we have split our variable "x" into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
If we wanted to decrypt these 3 shares, we could simply sum them together and take the modulus of the result (mod Q).
```
def decrypt(*shares):
return sum(shares) % Q
a,b,c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two shares, the decryption does not work!
```
decrypt(a, b)
```
Thus, we need all of the owners to participate in order to decrypt the value. It is in this way that the "shares" act like private keys, all of which must be present in order to decrypt a value.
# Step 2: Basic Arithmetic Using SMPC
However, the truly extraordinary property of Secure Multi-Party Computation is the ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)
def add(x, y):
z = list()
# the first worker adds their shares together
z.append((x[0] + y[0]) % Q)
# the second worker adds their shares together
z.append((x[1] + y[1]) % Q)
# the third worker adds their shares together
z.append((x[2] + y[2]) % Q)
return z
decrypt(*add(x,y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, then the resulting shares will decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist which can allow this encrypted computation for the following operations:
- addition (which we've just seen)
- multiplication
- comparison
and using these basic underlying primitives, we can perform arbitrary computation!!!
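As a small aside (our sketch, not part of the tutorial's protocol): one multiplication case is easy to verify by hand. Multiplying a shared value by a *public* constant works share-by-share, just like addition. Multiplying two *secret* values requires extra protocol machinery that is not shown here.

```python
import random

Q = 1234567891011

def encrypt(x):
    # same additive secret sharing as above
    share_a = random.randint(0, Q)
    share_b = random.randint(0, Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)

def decrypt(*shares):
    return sum(shares) % Q

def public_mul(shares, c):
    # each owner multiplies their own share by the public constant c
    return [(s * c) % Q for s in shares]

x = encrypt(6)
print(decrypt(*public_mul(x, 7)))  # 42
```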
In the next section, we're going to learn how to use the PySyft library to perform these operations!
# Step 3: SMPC Using PySyft
In the previous sections, we outlined some basic intuitions around how SMPC is supposed to work. However, in practice we don't want to have to hand-write all of the primitive operations ourselves when writing our encrypted programs. So, in this section we're going to walk through the basics of how to do encrypted computation using PySyft. In particular, we're going to focus on how to do the 3 primitives previously mentioned: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which hopefully you're now familiar with given our previous tutorials).
```
import syft as sy
hook = sy.TorchHook()
bob = sy.VirtualWorker(id="bob")
alice = sy.VirtualWorker(id="alice")
bill = sy.VirtualWorker(id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling .share(). Decryption is as simple as calling .get() on the shared variable
```
x = sy.LongTensor([25])
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closer at Bob, Alice, and Bill's workers, we can see the shares that get created!
```
bob._objects
x = sy.LongTensor([25]).share(bob, alice,bill)
bob._objects
# Bob's share
bobs_share = list(bob._objects.values())[0].parent[0]
bobs_share
# Alice's share
alices_share = list(alice._objects.values())[0].parent[0]
alices_share
# Bill's share
bills_share = list(bill._objects.values())[0].parent[0]
bills_share
```
And if we wanted to, we could decrypt these values using the SAME approach we talked about earlier!!!
```
Q = sy.spdz.spdz.field
(bobs_share + alices_share + bills_share) % Q
```
As you can see, when we called .share() it simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
And now you see that we can perform arithmetic on the underlying values! The API is constructed so that we can simply perform arithmetic like we would with regular PyTorch tensors.
Note: comparison operations return boolean outputs (True/False) in the form of integers (1/0): 1 corresponds to True, 0 corresponds to False.
```
x = sy.LongTensor([25]).share(bob,alice)
y = sy.LongTensor([5]).share(bob,alice)
z = x + y
z.get()
z = x * y
z.get()
z = x > y
z.get()
z = x < y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on Github
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft Github Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for github issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# Lecture 09: Searching and sorting
[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2022)
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2022/master?urlpath=lab/tree/09/Searching_and_sorting.ipynb)
1. [Algorithms - what are they even?](#Algorithms---what-are-they-even?)
2. [Recursion](#Recursion)
3. [Sorting](#Sorting)
4. [Summary](#Summary)
Now we're getting more into the 'numerical methods' part of the course!
Today, we will delve into the following:
* how to write **pseudo code**
* **computational complexity** (big-O notation).
* **search algorithms** (sequential, binary)
* **sort algorithms** (bubble, insertion, quick)
**Search** and **sort** algos are at the heart of computer science.
Understanding these is the first thing you get into at DIKU or DTU, so we are also going to get a taste of them.
**Links to further material:**
If you feel inspired by the material here, you can try your hand at solving algorithmic challenges at [Project Euler](https://projecteuler.net).
(there are both easy and harder exercises to choose from)
```
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import time
import string
import random
import sys
from IPython.display import Image
```
<a id="Algorithms---what-are-they-even?"></a>
# 1. Algorithms - what are they even?
**Technically:** An unambiguous specification of how to solve a class of problems.
**In a nutshell:** *An algo is a recipe.*
Even a simple cooking recipe is an algorithm:
1. Preheat the oven
2. Mix flour, sugar and eggs
3. Pour into a baking pan
etc.
**Properties of an algorithm:**
1. Unambiguous termination criteria
2. Pre-defined inputs
3. Pre-defined outputs
4. Guaranteed finite runtime
5. Correct result
## 1.1 Simple example: $\max\{ \ell\}$
**Problem:** Given a list of positive numbers, return the largest number in the list.
**Inputs:** A list `L` of positive numbers.
**Outputs:** A number.
**Algorithm:** `find_max()`
1. Set `maxL` to 0.
2. For each `x` in the list `L`, compare it to `maxL`. If `x` is larger, set `maxL` to `x`.
3. `maxL` is now set to the largest number in the list.
> **Note:** The above is called **pseudo-code** (understandable across programming languages).
**Implementation** in Python:
```
def find_max(L):
maxL = 0
for x in L:
if x > maxL:
maxL = x
return maxL
```
**Question:** An error *might* occur if `L` is not restricted to contain strictly positive numbers. What could happen?
**Bonus info:** Python, and other modern languages, actually try to **predict** the result of an `if` statement before it is reached and prepare the following set of instructions. This is called *branch prediction* and is a major source of computational improvement. If you have a lot of `if` statements that are not predictable, e.g. because of randomized data, it may be a drag on computation time.
## 1.2 Algorithmic complexity
Algorithms can be characterized by the number of operations needed to perform them. This is called their complexity.
The `find_max()` algorithm has `n = len(L)` operations, each making a *comparison* (`x > maxL`) and (perhaps) an *assignment* (`maxL = x`).
The number of operations increases linearly with the length of the input list (the order of the function is linear).
**Mathematically** we say that `find_max()` has linear complexity, $O(n)$, where $n$ is the input size (length of `L`).
Other **common levels of complexity** are:
1. Constant, $O(1)$ (i.e. independent of input size)
2. Logarithmic, $O(\log n)$
3. Linear, $O(n)$
4. Log-linear, $O(n \log n)$
5. Quadratic, $O(n^2)$
6. Cubic, $O(n^3)$
7. Exponential, $O(2^n)$ (**curse of dimensionality**)
If the performance of an algorithm **depends on the exact values of the input** we differentiate between
1. **Best** case
2. **Average** case (across all possible inputs)
3. **Worst** case
Complexity is an **asymptotic** measure,
1. Only the number of operations matter (not their type or cost)
2. Only the highest order matter
<img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/08/bigO.png" alt="bigO" width=40% />
**In practice however:**
* The cost of each operation matters for fixed input size.
* The amount and flow of **memory** matter for speed (cache vs. RAM vs. disc).
* Therefore, it is **not guaranteed** that an algorithm of lower complexity executes faster than that of higher complexity for all cases.
Especially, there may be differences in the costs of memory allocation and deletion which are not counted in the measure of complexity. In the case above, we were not counting the *deletion* of objects that would necessarily follow.
## 1.3 Example of a complexity calculation
```
def demo_algorithm(n):
# a. 3 assignments
a = 5
b = 6
c = 10
# b. 3*n^2 multiplications and 3*n^2 assignments
for i in range(n):
for j in range(n):
x = i * i
y = j * j
z = i * j
# c. n multiplications, additions, and assignments
# + n multiplications and assignments
for k in range(n):
w = a*k + 45
v = b*b
# d. 1 assignment
d = 33
```
The **total number of operations** are: $T(n) = 3 + 6n^2 + 5n + 1 = 6n^2 + 5n + 4$
Notice: this counts only the high-level operations listed above. Multiplication itself involves further operations, so the number above is not indicative of the *total* number of operations that the computer must handle.
**In big-O notation**: `demo_algorithm()` is $O(n^2)$, i.e. *quadratic complexity*
**$\large \color{purple}{Question}$:** What is the complexity of these two algorithms?
```
def algorithm_a(n):
s = 0
for i in range(n):
for j in range(n):
for k in range(n):
s += 1
def algorithm_b(n):
s = 0
for i in range(n):
s *= 2
for j in range(n):
s *= 2
for k in range(n):
s *= 2
```
## 1.4 The complexity of operations on data containers
### How are lists and dictionaries structured?
The fact that our data containers have a certain structure in memory matters *greatly* for the speed of the methods (read: algos) that we apply on them.
Let's have a look at how lists and dictionaries are organized.
**Lists:**
* A list is an ordered set of references to objects (eg. floats).
* Each reference *points* to an address in memory where values are stored.
* The reference variables (called pointers) holding the addresses of a list's data are lined up next to each other in memory, such that they are increments of `1` apart. A bit like a train, if you will.
* Need therefore **only** to keep track of the reference to the address of the **first element**, `l[0]`, and the rest follows in line.
* If by $a$ we denote the address of the first element of `l`, then looking up element `l[i]` means accessing the $a+i$ address in memory using its reference variable.
* Therefore, the algorithmic complexity of looking up an element `l[i]` does **not depend** on the size of `l`. *Which is nice.*
```
# A demonstration of addresses of elements in a list
x = [5, 21, 30, 35]
x_ref = []
x_id = []
# The addresses of x's elements
for i in x:
x_id.append(id(i)) # Each object has its own unique id
x_ref.append(hex(x_id[-1])) # The memory address is a hexadecimal of the id
# The addresses printed below are NOT lined up next to each other in memory.
# Only the reference variables are lined up, but those we cannot see directly in Python.
print('Id of each element in x:')
for i in x_id:
print(i)
print('\nMemory address of elements in x: ', x_ref)
```
### A quick overview of list operations
|Operation | Code | Complexity |
|:----------|:------------------|:--------------:|
|**Index:** | `l[i]` | $O(1)$ |
|**Store:** | `l[i] = 0` | $O(1)$ |
|**Length:** | `len(l)` | $O(1)$ |
|**Append:** | `l.append(n)` | $O(1)$ |
|**Slice:** | `l[a:b]` | $O(b-a)$ |
|**Pop last:** | `l.pop()` | $O(1)$ |
|**Pop i:** | `l.pop(i)` | $O(N)$ |
|**Clear:** | `l.clear()` | $O(N)$ |
|**check:** | `l1 == l2` | $O(N)$ |
|**Insert:** | `l[a:b] = ...` | $O(N)$ |
|**Delete:** | `del l[i]` | $O(N)$ |
|**Containment:** | x `in/not in l` | $O(N)$ |
|**Copy:** | `l.copy()` | $O(N)$ |
|**Sort:** | `l.sort()` | $O(N \log N)$ |
**A few notes:**
* Getting the length of a list is $O(1)$ because Python keeps track of a list's size as it is created and expanded. The length is stored as an attribute of the list.
* Popping (getting the last element) is $O(1)$ because it only requires detaching the last reference in the "train" of references that comprises a list.
* Inserting an element into, or removing it from, the middle of a list requires moving around all the references in memory "behind" the inserted element and is therefore $O(N)$.
* Checking for containment of an element is $O(N)$ because all elements in the list may have to be visited.
### A beautiful solution
**Question:** how do you delete element `i` from list `l` in $O(1)$? (*even when it says above that `del` is an $O(N)$ operation*)
**Answer:**
`l[i] = l.pop()`
The `pop` operation will delete the last element of `l` while also using it to overwrite element `i` in `l`. Hence, the last element is preserved while element `i` disappears.
**Note** this won't work if `i` is the last element. A full implementation needs to account for this, but it will still be $O(1)$.
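As a sketch, one way to handle that last-element edge case (the function name here is ours, not from the lecture):

```python
def fast_delete(l, i):
    """Delete l[i] in O(1) by overwriting it with the popped last element.
    Note: this does NOT preserve the order of the list."""
    last = l.pop()       # O(1): detach the last element
    if i < len(l):       # i was not the last element
        l[i] = last      # overwrite element i with it
    return l

print(fast_delete([10, 20, 30, 40], 1))  # [10, 40, 30]
print(fast_delete([10, 20, 30], 2))      # [10, 20]
```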
**Dictionaries:**
* A dictionary is a set of *buckets* (think lists) which can store items.
* A dictionary with 1 element and 5 buckets: `[] - [] - [] - [<key,value>] - []`
* Contrary to lists, there is no explicit indexing of a dictionary. No `d[i]`, we can use a string instead, `d[str]`.
* However, the buckets of a dictionary are lined up just like the references in a list.
* Python therefore needs to locate a bucket, when adding a `<key,value>` pair.
* Buckets are located using a **hash function** on the key of an element.
* This **hash function** converts the key to an integer, which can then serve as an index.
* Obviously, a useful hash function must be very fast and work on strings as well as floats.
* A fast hash function enables $O(1)$ lookup in a dictionary.
* Hashing also implies that `key in dict.keys()` is $O(1)$, thus independent of dictionary size! (Very handy)
* When an empty dictionary is created, it contains 5 buckets. As a 6th element is added to the dictionary, it is rescaled to 10 buckets. At 11 elements, rescaled to 20 buckets and so on.
* Dictionaries thus **pre-allocate** memory to be efficient when adding the next element.
* *Taking up memory in favor of fast execution is a basic trade-off in algorithms!*
```
d = {'x': 1, 'z': 2}
print('size of d in bytes:', sys.getsizeof(d))
# Start adding elements to d and see how memory usage changes
for i in range(25):
key = random.choice(string.ascii_letters)
value = random.random()
d[key] = value
print(f"key: {key} value: {value: 1.3f} \t size: {i+1:2.0f} bytes: {sys.getsizeof(d)} \t hashed key: {hash(key)}")
# Notice that there may be collisions when different keys hash to the same bucket.
# Python can handle such collisions, but they do create a drag on performance.
```
### A quick overview of dictionary operations
|Operation | Code | Complexity |
|:----------|:------------------|:--------------:|
|**Index:** | `d[k]` | $O(1)$ |
|**Store:** | `d[k] = v` | $O(1)$ |
|**Delete:** | `del d[k]` | $O(1)$ |
|**Length:** | `len(d)` | $O(1)$ |
|**Clear:** | `d.clear()` | $O(1)$ |
|**View:** | `d.keys()` | $O(1)$ |
Notice the difference in complexity for **deletions**. Faster in dictionaries because they are unordered.
You can checkout a [comprehensive table](https://www.ics.uci.edu/~pattis/ICS-33/lectures/complexitypython.txt) of Python operations' complexity.
## 1.5 Multiplication and Karatsuba's algorithm
Ever wondered how Python multiplies two numbers? It actually depends on the size of those numbers!
**Small numbers:** 3rd grade algorithm. **Large numbers:** Karatsuba's algorithm.
### Demonstration
Consider the multiplication $2275 \times 5013 = 11,404,575$
**3rd grade algorithm**
(this one we all know - although it's been a while)
The 3rd grade algorithm is $O(n^2)$. To see this, think of the multiplication part as nested for-loops throughout the 10s, 100s, 1000s etc. Then there is the addition part, which is also $O(n^2)$.
```
Image(filename = "ThirdGradeMultiplication.jpg", width = 230, height = 230)
```
**Karatsuba's algorithm**
It is not super intuitive what goes on here. But basically, it splits the numbers to be multiplied into high and low parts at a power of 10 and then performs operations on those splits.
The algorithm is only $O(n^{\log_2 3}) \approx O(n^{1.585})$, so better than the 3rd grade algorithm for large $n$.
**Some preparation:**
$x = 2275$, $y = 5013$
Note the identities:
$x = 22 \times 10^2 + 75$
$y = 50 \times 10^2 + 13$
We denote:
$x_a = 22, \: x_b = 75$
$y_a = 50, \: y_b = 13$
**The algorithm**
*First compute:*
$A = x_a \times y_a$
$B = x_b \times y_b$
$C = (x_a + x_b) \times (y_a +y_b) - A - B$
*Then we have that*
$x \times y = A \times 10^4 + C\times 10^2 + B$
**In numbers**
$A = 22 \times 50 = 1100$
$B = 75 \times 13 = 975$
$C = (22 + 75)(50 + 13) - 1100 - 975 = 4036$
$x \times y = 1100 \times 10^4 + 4036\times 10^2 + 975 = 11,404,575$
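The steps above can be sketched as a recursive Python function (a didactic version, not Python's actual implementation):

```python
def karatsuba(x, y):
    # base case: fall back to ordinary multiplication for small numbers
    if x < 10 or y < 10:
        return x * y
    # split both numbers at half the digit count of the larger one
    m = max(len(str(x)), len(str(y))) // 2
    x_a, x_b = divmod(x, 10**m)   # x = x_a * 10^m + x_b
    y_a, y_b = divmod(y, 10**m)   # y = y_a * 10^m + y_b
    # three recursive multiplications instead of four
    A = karatsuba(x_a, y_a)
    B = karatsuba(x_b, y_b)
    C = karatsuba(x_a + x_b, y_a + y_b) - A - B
    return A * 10**(2*m) + C * 10**m + B

print(karatsuba(2275, 5013))  # 11404575, matching the worked example
```

With $x = 2275$, $y = 5013$ and $m = 2$, the function computes exactly the $A$, $B$ and $C$ from the numbers above.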
## 1.6 Linear search (also called sequential search)
**Problem:** Check whether element is in list. See the `containment` row in the list of complexity above.
**Inputs:** A list `L` and a potential element `x`.
**Outputs:** Boolean.
**Algorithm:** `linear_search()`
1. Set the variable `found = False`
2. For each `y` in the list `L`, compare it to `x`. If `x == y` set `found = True` and break loop.
3. `found` now shows whether the element is in the list or not
```
L = [1, 2, 32, 8, 17, 19, 42, 13, 0] # test list
def linear_search(L,x):
pass
print('found 3:',linear_search(L,3))
print('found 13:',linear_search(L,13))
def linear_search(L,x):
""" linear search
Args:
L (list): List to search in.
x (any): Element to search for.
Returns:
found (bool): Boolean for whether element is in list or not.
"""
# a. prep
i = 0
N = len(L)
found = False
# b. main
while i < N and not found:
if L[i] == x: # comparison
found = True
else:
i += 1 # increment
# c. return
return found
print('found 3:',linear_search(L,3))
print('found 13:',linear_search(L,13))
```
**Terminology:** The linear search algorithm is called a **brute force** algorithm (we solve the problem without any intermediate steps).
**Analysis:** Each operation consists of a *comparison* and an *increment*:
1. **Best case:** $O(1)$ (element present and first in list)
2. **Average case:**
* $O(\frac{n}{2})=O(n)$ (if element present), or
* $O(n)$ (if element *not* present)
3. **Worst case:** $O(n)$ (element not present or last in list)
**Note:** Much faster ($O(1)$) on a dictionary, because we just apply the hash function to `x`.
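A quick (machine-dependent) timing sketch of the gap between $O(n)$ containment in a list and $O(1)$ containment in a hash-based container (a `set` here, which hashes its elements just like a dictionary hashes its keys):

```python
import random
import time

n = 1_000_000
L = list(range(n))
S = set(L)  # hash-based container, like a dictionary's keys
targets = [random.randrange(n) for _ in range(100)]

t0 = time.perf_counter()
for x in targets:
    assert x in L   # O(n) linear search
list_time = time.perf_counter() - t0

t0 = time.perf_counter()
for x in targets:
    assert x in S   # O(1) hash lookup
set_time = time.perf_counter() - t0

print(f'list: {list_time:.4f}s  set: {set_time:.6f}s')
```

The absolute numbers vary by machine, but the set lookups should be orders of magnitude faster.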
## 1.7 Binary search ("the phonebook search")
**Problem:** You know that a list is sorted. Check whether an element is contained in it.
**Inputs:** A list `L` and a potential element `x`.
**Outputs:** Boolean.
**Algorithm:** `binary_search()`
1. Set `found` to `False`.
2. Locate the `midpoint` of the list part that remains to be searched.
3. Check whether the `midpoint` element is the one we are searching for:
    * If yes, set `found=True` and go to step 4.
    * If no, and the `midpoint` element is *larger*, restrict attention to the *left* part of the list and go back to step 2 if it is not empty.
    * If no, and the `midpoint` element is *smaller*, restrict attention to the *right* part of the list and go back to step 2 if it is not empty.
4. `found` now shows whether the element is in the list or not
**Middle element:** Define the midpoint between index `i` and index `j >= i` as `i + (j-i)/2`, rounded down if necessary.
```
for i in [0,2,4]:
for j in [4,5,9]:
print(f'(i,j) = {i,j} -> midpoint = {i+((j-i)//2)}') # note integer division with //
L = [0, 1, 2, 8, 13, 17, 19, 32, 42] # test list
def binary_search(L,x):
pass
print('found 3:',binary_search(L,3))
print('found 13:',binary_search(L,13))
def binary_search(L,x,do_print=False):
""" binary search
Args:
L (list): List to search in.
x (any): Element to search for.
do_print (bool): Indicator for printing progress.
Returns:
found (bool): Boolean for whether element is in list or not.
"""
# a. initialize
found = False
# b. start with whole list
first = 0
last = len(L)-1
# c. main
while first <= last and not found:
# i. find midpoint
midpoint = first + (last - first) // 2 # // is integer division
if do_print:
print(L[first:last+1],L[midpoint])
# ii. check if x found or smaller or larger than midpoint
if L[midpoint] == x:
found = True
else:
if L[midpoint] > x:
last = midpoint-1
else:
first = midpoint+1
return found
print('found 3:',binary_search(L,3))
print('found 13:',binary_search(L,13))
binary_search(L,32,do_print=True)
```
**Terminology:** This is called a **divide-and-conquer** algorithm.
**Analysis:**
* After 1 comparison there are approximately $\frac{n}{2}$ elements left.
* After 2 comparisons there are approximately $\frac{n}{4}$ elements left.
* After 3 comparisons there are approximately $\frac{n}{8}$ elements left.
* ...
* After $j$ comparisons there are approximately $\frac{n}{2^j}$ elements left.
**When is there one element left?** $\frac{n}{2^j} = 1 \Leftrightarrow j = \frac{\log n}{\log 2}$
**Result:** The binary search algorithm is $O(\log n)$, i.e. logarithmic complexity.
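As a quick check of the $O(\log n)$ result, a sketch (not part of the lecture code) that counts the comparisons binary search needs in the worst case, i.e. when the element is not present:

```
import math

def binary_search_count(L,x):
    """ same algorithm as above, but also returns the number of comparisons """
    first, last, count = 0, len(L)-1, 0
    while first <= last:
        count += 1
        midpoint = first + (last-first)//2
        if L[midpoint] == x:
            return True, count
        elif L[midpoint] > x:
            last = midpoint-1
        else:
            first = midpoint+1
    return False, count

for n in [10, 100, 1000, 10000]:
    _, count = binary_search_count(list(range(n)), -1)  # worst case: element not present
    print(f'n = {n:6d}: comparisons = {count:2d}, log2(n) = {math.log2(n):.1f}')
```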
<a id="Recursion"></a>
# 2. Recursion
**Problem:** Sum the elements in a list.
```
L = [1,3,5,7,9]
```
**Simple:** Just sum them:
```
def listsum(L):
result = 0
for x in L:
result += x
return result
print(listsum(L))
```
**Recursion:** The sum of a list is the sum of the first element and the sum of the rest of the list:
```
def listsum_recursive(L):
if len(L) == 1:
return L[0]
else:
return L[0] + listsum_recursive(L[1:])
print(listsum_recursive(L))
```
This is also a divide-and-conquer strategy. It avoids explicit loops.
## 2.1 Fibonacci numbers
**Definition:**
$$
\begin{aligned}
F_0 &= 0 \\
F_1 &= 1 \\
F_n &= F_{n-1} + F_{n-2} \\
\end{aligned}
$$
**Implementation:**
```
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
return fibonacci(n-1)+fibonacci(n-2)
fibonacci(5)
#for n in range(4):
#print(fibonacci(n))
```
### Caution!
This implementation is for demonstration purposes only. It can be greatly sped up by using the `@cache` decorator, which stores previously computed return values of the function.
If you ever want to use recursion in practice, you should rely on **caching** of function values, because ***recursion on its own is slow***.
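A sketch of the cached version using `functools.lru_cache` (on Python 3.9+ `functools.cache` is equivalent); each Fibonacci number is then computed only once, so the call tree collapses from exponential to linear size:

```
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_cached(n):
    if n < 2:
        return n
    return fibonacci_cached(n-1) + fibonacci_cached(n-2)

print(fibonacci_cached(100))  # instant; the naive recursive version would never finish
```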
**Test approximate formula:**
```
def fibonacci_approx(n):
return 1/np.sqrt(5)*( ((1+np.sqrt(5))/2)**n - ((1-np.sqrt(5))/2)**n)
for n in [5,10,15,20,25]:
print(f'n = {n:3d}: true = {fibonacci(n):6d}, approximate = {fibonacci_approx(n):20.12f}')
```
## 2.2 Advanced: Binary search with recursion
```
L = [0, 1, 2, 8, 13, 17, 19, 32, 42,] # test list
def binary_search_recursive(L,x):
pass
print('found 3:',binary_search_recursive(L,3))
print('found 13:',binary_search_recursive(L,13))
def binary_search_recursive(L,x):
""" recursive binary search
Args:
L (list): List to search in.
x (any): Element to search for.
Returns:
found (bool): Boolean for whether element is in list or not.
"""
if len(L) == 0:
return False # not found
else:
# a. find midpoint
midpoint = len(L)//2
# b. check if x found or smaller or larger than midpoint
if L[midpoint] == x: # found
return True
else:
if L[midpoint] > x:
newL = L[:midpoint]
else:
newL = L[midpoint+1:]
return binary_search_recursive(newL,x)
print('found 3:',binary_search_recursive(L,3))
print('found 13:',binary_search_recursive(L,13))
```
<a id="Sorting"></a>
# 3. Sorting
Sorting is a central task in computing. IBM built its first computers in the 1930s to sort data.
It would be hard to keep track of data without sorting, so many algorithms have been developed for this purpose.
We will look at a simple algorithm first, the bubble sort, which relies on swapping elements iteratively.
Function for **swapping** element `L[i]` with element `L[j]` in-place:
```
def swap(L,i,j):
temp = L[i] # save value in place holder variable
L[i] = L[j] # overwrite value at i with value at j
L[j] = temp # write the saved original value of i into position j
```
**Example:**
```
L = [1, 3, 4, 9, 13]
swap(L,i=0,j=1)
print('after swap',L)
```
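As an aside, in Python the same in-place swap can be written without a temporary variable, using tuple unpacking (an equivalent one-liner, not required for what follows):

```
def swap_tuple(L,i,j):
    L[i], L[j] = L[j], L[i]  # right-hand side is evaluated first, then unpacked

L = [1, 3, 4, 9, 13]
swap_tuple(L,i=0,j=1)
print('after swap',L)
```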
## 3.1 Bubble sort
**Problem:** Sort a list of numbers in-place.
**Inputs:** List of numbers.
**Outputs:** None.
**Algorithm:** `bubble_sort()`
1. Loop through the first n-1 elements in the list, swapping each element with the next if the current one is larger.
2. Loop through the first n-2 elements in the list, swapping each element with the next if the current one is larger.
<br>
...
<br>
3. Loop through the first 3 elements in the list, swapping each element with the next if the current one is larger.
4. Swap the first two elements if the first is larger than the second.
5. The list is sorted.
```
L = [54, 26, 93, 17, 77, 31, 44, 55, 20] # test list
def bubble_sort(L):
pass
bubble_sort(L)
print(L)
def bubble_sort(L):
""" bubble sort
Args:
L (list): List of numbers
"""
# k starts at len(L)-1 and is decreased by 1 down to 1 (range excludes 0)
for k in range(len(L)-1,0,-1):
for i in range(k):
if L[i] > L[i+1]:
swap(L,i,i+1)
L = [54, 26, 93, 17, 77, 31, 44, 55, 20]
bubble_sort(L)
print('sorted L:',L)
from IPython.display import YouTubeVideo
YouTubeVideo('lyZQPjUT5B4', width=800, height=600, start=45)
```
**Another visualization of bubble sort**

**Illustration with printout:**
```
def bubble_sort_with_print(L):
for k in range(len(L)-1,0,-1):
print(f'step = {len(L)-k}')
for i in range(k):
if L[i] > L[i+1]:
swap(L,i,i+1)
print(L)
print('')
L = [54, 26, 93, 17, 77, 31, 44, 55, 20]
print('original',L,'\n')
bubble_sort_with_print(L)
```
**Analysis:** Bubble sort is $O(n^2)$ - do you have an intuition?
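An intuition check (a counting sketch, not part of the lecture code): the inner loop runs $k$ times for $k = n-1, n-2, \dots, 1$, so the total number of comparisons is $\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} = O(n^2)$:

```
def bubble_sort_count(L):
    """ bubble sort that also counts comparisons """
    comparisons = 0
    for k in range(len(L)-1,0,-1):
        for i in range(k):
            comparisons += 1
            if L[i] > L[i+1]:
                L[i], L[i+1] = L[i+1], L[i]
    return comparisons

for n in [10, 100, 1000]:
    L = list(range(n,0,-1))  # reversed list
    print(f'n = {n:4d}: comparisons = {bubble_sort_count(L)}, n(n-1)/2 = {n*(n-1)//2}')
```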
## 3.2 Insertion sort
**Algorithm:** `insertion_sort()`
1. Consider the *second* element. Insert it correctly into the sorted list of numbers before it.
2. Consider the *third* element. Insert it correctly into the sorted list of numbers before it.
<br>
...
<br>
3. Consider the *n*'th element. Insert it correctly into the sorted list of numbers before it.
4. The list is sorted.
**Illustration:**
<img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/08/insertionsort.png" alt="insertionsort" width=50% />
```
L = [54, 26, 93, 17, 77, 31, 44, 55, 20] # test list
def insertion_sort(L):
pass
insertion_sort(L)
print(L)
def insertion_sort(L):
""" insertion sort
Args:
L (list): List of numbers
"""
# loop over last n-1 elements, skipping the 1st element (see range func).
n = len(L)
for k in range(1,n):
# a. current value and position
x = L[k]
i = k
# b. move left while larger: a bubble sort at heart
while i > 0 and L[i-1] > x:
L[i] = L[i-1] # move
i = i-1
# c. insert current value
L[i] = x
L = [54, 26, 93, 17, 77, 31, 44, 55, 20]
insertion_sort(L)
print('sorted',L)
```
**Analysis:** Still $O(n^2)$.
**Benefits relative to bubble sort:**
1. It moves elements instead of swapping them, which is one operation fewer per step.
2. Data is often **partially sorted** to begin with. Insertion sort benefits from that.
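A sketch of point 2: counting the moves insertion sort makes on an already sorted list versus a reversed one. On sorted input the inner `while` loop never runs, so the algorithm is effectively $O(n)$:

```
def insertion_sort_count(L):
    """ insertion sort that also counts moves in the inner loop """
    moves = 0
    for k in range(1,len(L)):
        x = L[k]
        i = k
        while i > 0 and L[i-1] > x:
            L[i] = L[i-1]
            i -= 1
            moves += 1
        L[i] = x
    return moves

n = 100
print('sorted input:  ', insertion_sort_count(list(range(n))))       # 0 moves
print('reversed input:', insertion_sort_count(list(range(n,0,-1))))  # n(n-1)/2 moves
```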
## 3.3 Partition (+)
*Intermezzo: Solving the partition problem is useful for a so-called quicksort.*
**Problem:** Permute a list and return a splitpoint such that all elements before the point are smaller than or equal to the first element of the original list, and all elements after it are strictly larger.
**Input:** List of numbers.
**Output:** Integer.
**Algorithm:**
0. Let splitting point be first element of list.
1. From the *left* find the first element larger than split point (leftmark).
2. From the *right* find the first element smaller than split point (rightmark).
3. Swap these two elements.
4. Repeat 1-3 starting from previous leftmark and rightmark. Continue until leftmark is larger than rightmark.
5. Swap first and rightmark element.
6. Return the rightmark.
<img src="https://github.com/NumEconCopenhagen/lectures-2019/raw/master/08/quicksort.png" alt="quicksort" width=60% />
```
def partition(L,first,last):
""" partition
Permute a list and return a splitpoint, such that all elements before it
are smaller than or equal to the first element in the original list,
and all elements after it are strictly larger.
Args:
L (list): List of numbers
first (integer): Startpoint
last (integer): Endpoint
Returns:
splitpoint (integer):
"""
# a. initialize
splitvalue = L[first]
leftmark = first+1
rightmark = last
# b. find splitpoint
done = False
while not done:
# i. find leftmark
while leftmark <= rightmark and L[leftmark] <= splitvalue:
leftmark = leftmark + 1
# ii. find rightmark
while L[rightmark] >= splitvalue and rightmark >= leftmark:
rightmark = rightmark -1
# iii. check if done or swap left and right
if rightmark < leftmark:
done = True
else:
swap(L,leftmark,rightmark)
# c. final swap
swap(L,first,rightmark)
return rightmark
L = [54, 26, 93, 17, 77, 31, 44, 55, 20]
print('before',L)
splitpoint = partition(L,0,len(L)-1)
print('after',L)
print('split',L[:splitpoint+1],L[splitpoint+1:])
```
## 3.4 Quicksort (+)
**Algorithm:** `quick_sort()`
1. Recursively partition the list and the sub-lists when splitting at the splitpoint.
2. The list is now sorted.
```
def quick_sort(L):
_quick_sort(L,0,len(L)-1)
def _quick_sort(L,first,last):
if first < last:
splitpoint = partition(L,first,last)
_quick_sort(L,first,splitpoint-1) # left part
_quick_sort(L,splitpoint+1,last) # right part
L = [54, 26, 93, 17, 77, 31, 44, 55, 20]
quick_sort(L)
print('sorted',L)
```
**Analysis:** $O(n \log n)$ on average, but still $O(n^2)$ in the worst case [we don't derive this, just trust me].
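An intuition for the worst case: with a first-element pivot, an already sorted list makes every partition split off only one element, so the recursion goes $n$ levels deep instead of roughly $\log_2 n$. A standalone counting sketch (it re-implements the partition inline so it can track depth):

```
import random

def quick_sort_depth(L):
    """ quicksort with a first-element pivot that also tracks the maximum recursion depth """
    max_depth = [0]
    def _sort(first, last, depth):
        max_depth[0] = max(max_depth[0], depth)
        if first < last:
            # inline first-element partition (same idea as partition() above)
            splitvalue = L[first]
            leftmark, rightmark = first+1, last
            while True:
                while leftmark <= rightmark and L[leftmark] <= splitvalue:
                    leftmark += 1
                while rightmark >= leftmark and L[rightmark] >= splitvalue:
                    rightmark -= 1
                if rightmark < leftmark:
                    break
                L[leftmark], L[rightmark] = L[rightmark], L[leftmark]
            L[first], L[rightmark] = L[rightmark], L[first]
            _sort(first, rightmark-1, depth+1)
            _sort(rightmark+1, last, depth+1)
    _sort(0, len(L)-1, 1)
    return max_depth[0]

n = 200
print('random input depth:', quick_sort_depth(random.sample(range(n), n)))  # roughly proportional to log2(n)
print('sorted input depth:', quick_sort_depth(list(range(n))))              # n (worst case)
```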
**Visualization of quicksort**

## 3.5 Advanced: Comparison of performance
Let us compare the different sorting algorithms:
1. Bubble
2. Insertion
3. Quick
4. Quick (as implemented in Numpy)
```
import time # for timing the sorting algorithms below

# a. settings
n_vec = np.array([100,200,300,400,500,750,1000,1500,2000,4000,8000,16000]) # number of elements in list
K = 50 # number of repetitions when timing
# b. allocate vectors for results
bubble = np.empty(len(n_vec))
insertion = np.empty(len(n_vec))
quick = np.empty(len(n_vec))
quicknp = np.empty(len(n_vec))
# c. run time trials
np.random.seed(1999)
for i,n in enumerate(n_vec):
# i. draw K random lists of length n
L_bubble = []
L_insertion = []
L_quick = []
L_quicknp = []
for k in range(K):
L = np.random.uniform(size=n)
np.random.shuffle(L)
L_bubble.append(L.copy())
L_insertion.append(L.copy())
L_quick.append(L.copy())
L_quicknp.append(L.copy())
# ii. bubble sort
if n <= 500:
t0 = time.time() # start timer
for k in range(K):
bubble_sort(L_bubble[k])
bubble[i] = time.time()-t0 # calculate time since start
else:
bubble[i] = np.nan
# iii. insertion sort
if n <= 500:
t0 = time.time()
for k in range(K):
insertion_sort(L_insertion[k])
insertion[i] = time.time()-t0
else:
insertion[i] = np.nan
# iv. quicksort
if n <= 2000:
t0 = time.time()
for k in range(K):
quick_sort(L_quick[k])
quick[i] = time.time()-t0
else:
quick[i] = np.nan
# v. quicksort (numpy implementation)
t0 = time.time()
for k in range(K):
L_quicknp[k].sort() # built-in numpy method
quicknp[i] = time.time()-t0
# vi. check that all sorted lists are the same
for k in range(K):
if n <= 500:
assert np.all(L_bubble[k] == L_quick[k])
assert np.all(L_insertion[k] == L_quick[k])
if n <= 2000:
assert np.all(L_quicknp[k] == L_quick[k])
# d. figure
I = n_vec <= 2000
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(n_vec[I],bubble[I],label='bubble')
ax.plot(n_vec[I],insertion[I],label='insertion')
ax.plot(n_vec[I],quick[I],label='quick')
ax.plot(n_vec[I],quicknp[I],label='quick (numpy)')
ax.set_xlabel('number of elements')
ax.set_ylabel('seconds')
ax.legend(facecolor='white',frameon=True);
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(n_vec,quicknp,label='quick (numpy)')
ax.set_xlabel('number of elements')
ax.set_ylabel('seconds')
ax.legend(facecolor='white',frameon=True);
```
**Take-aways:**
1. Complexity matters
2. Implementation matters (and the built-in functions and methods are hard to beat)
<a id="Summary"></a>
# 4. Summary
**This lecture:**
1. Algorithms and their complexity (big-O notation)
2. Function recursion (functions calling themselves)
3. Searching algorithms (linear, binary)
4. Sorting algorithms (bubble, insertion, quick)
**Your work:** The problem set is closely related to the algorithms presented here.
**Next lecture:** Solving equations (single vs. system, linear vs. non-linear, numerically vs. symbolically)
```
#default_exp AbstractMethod
#export
#hide
from typing import Union, List
import sys
sys.path.append("..")
from hephaestus.EditOperations import *
#hide
from nbdev.showdoc import *
```
# AbstractMethod
> Defines the AbstractMethod class which represents a token-abstracted Java method.
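As background for the edit-distance methods below, a minimal standalone sketch of the Levenshtein distance between two token sequences — the same dynamic-programming matrix that the class's private `__getEditOpsMatrix` builds (this helper is illustrative, not part of the library):

```
def levenshtein(a, b):
    # dp[r][c] = edit distance between a[:r] and b[:c]
    dp = [[0]*(len(b)+1) for _ in range(len(a)+1)]
    for r in range(len(a)+1):
        dp[r][0] = r            # delete all of a[:r]
    for c in range(len(b)+1):
        dp[0][c] = c            # insert all of b[:c]
    for r in range(1, len(a)+1):
        for c in range(1, len(b)+1):
            if a[r-1] == b[c-1]:
                dp[r][c] = dp[r-1][c-1]                                  # match
            else:
                dp[r][c] = min(dp[r-1][c-1], dp[r][c-1], dp[r-1][c]) + 1 # replace/insert/delete
    return dp[-1][-1]

tokens1 = "private static int METHOD_1 ( ) { return 0 ; }".split()
tokens2 = "public double METHOD_1 ( double VAR_1 ) { return VAR_1 ; }".split()
print(levenshtein(tokens1, tokens2))
```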
```
#export
class AbstractMethod:
"""
Creates an AbstractMethod from the given `tokens`, which can be either:
- a string with tokens delimited by `delimiter` (defaults to a single space)
- a list of tokens
Note that empty tokens are ignored.
"""
def __init__(self, tokens: Union[str, List[str]], delimiter: str = " ") -> None:
if type(tokens) is str:
self.__tokens = [] if len(tokens) == 0 else tokens.split(delimiter)
else:
self.__tokens = tokens.copy()
# remove empty tokens
self.__tokens = [token for token in self.__tokens if token != ""]
def __eq__(self, other: "AbstractMethod") -> bool:
return type(other) is AbstractMethod and self.__tokens == other.__tokens
def __getitem__(self, key: int) -> str:
return self.__tokens[key]
def __len__(self) -> int:
return len(self.__tokens)
def __str__(self) -> str:
return repr(" ".join(self.__tokens))[1:-1]
def __repr__(self) -> str:
return str(self)
def applyEditOperation(self, operation: EditOperation):
"""
Applies the given `operation`.
"""
operation.applyToTokens(self.__tokens)
def applyEditOperations(self, operations: List[EditOperation]):
"""
Applies the given list of `operations` in order.
"""
for op in operations:
self.applyEditOperation(op)
def getEditDistanceTo(self, other: "AbstractMethod") -> int:
"""
Returns the Levenshtein edit distance to the `AbstractMethod` given by `other`.
"""
return self.__getEditOpsMatrix(other)[-1][-1]
def getEditOperationsTo(self, other: "AbstractMethod") -> List[Union[InsertOperation, DeleteOperation, ReplaceOperation]]:
"""
Returns the minimal list of basic edit operations (no CompoundOperations), which if applied, would
result in the `AbstractMethod` given by `other`. The length of the returned list is the Levenshtein
distance to `other`.
"""
matrix = self.__getEditOpsMatrix(other)
editOps = []
r = len(matrix) - 1
c = len(matrix[0]) - 1
while True:
if matrix[r][c] == 0:
break
elif r == 0:
c -= 1
editOps.insert(0, InsertOperation(c, other[c]))
elif c == 0:
r -= 1
editOps.insert(0, DeleteOperation(c))
elif self[r - 1] == other[c - 1]:
r -= 1
c -= 1
elif matrix[r][c] == matrix[r - 1][c - 1] + 1:
r -= 1
c -= 1
editOps.insert(0, ReplaceOperation(c, other[c]))
elif matrix[r][c] == matrix[r][c - 1] + 1:
c -= 1
editOps.insert(0, InsertOperation(c, other[c]))
elif matrix[r][c] == matrix[r - 1][c] + 1:
r -= 1
editOps.insert(0, DeleteOperation(c))
else:
raise RuntimeError("AbstractMethod: invalid matrix!")
return editOps
def __getEditOpsMatrix(self, other: "AbstractMethod") -> List[List[int]]:
numRows = len(self) + 1
numCols = len(other) + 1
# initialize matrix
matrix = []
for r in range(numRows):
matrix.append([c if r == 0 else 0 for c in range(numCols)])
matrix[r][0] = r
# iterate through matrix and assign values
for r in range(1, numRows):
for c in range(1, numCols):
left = matrix[r ][c - 1]
topLeft = matrix[r - 1][c - 1]
top = matrix[r - 1][c ]
if self[r - 1] == other[c - 1]:
matrix[r][c] = topLeft
else:
matrix[r][c] = min(left, topLeft, top) + 1
return matrix
method1 = AbstractMethod("private static int METHOD_1 ( ) { return 0 ; }")
method1
method2 = AbstractMethod(["public", "double", "METHOD_1", "(", "double", "VAR_1", ")", "{", "return", "VAR_1", ";", "}"])
method2
```
## Interact with Edit Operations
```
show_doc(AbstractMethod.getEditDistanceTo)
method1.getEditDistanceTo(method2)
#hide_input
show_doc(AbstractMethod.getEditOperationsTo)
method1 = AbstractMethod("private static int METHOD_1 ( ) { return 0 ; }")
method2 = AbstractMethod("public double METHOD_1 ( double VAR_1 ) { return VAR_1 ; }")
operations = method1.getEditOperationsTo(method2)
operations
#hide_input
show_doc(AbstractMethod.applyEditOperation)
```
> Note: This changes the original `AbstractMethod`, so you should make a copy if you want to keep the original.
```
#hide_input
show_doc(AbstractMethod.applyEditOperations)
```
> Note: This changes the original `AbstractMethod`, so you should make a copy if you want to keep the original.
```
method1 = AbstractMethod("private static int METHOD_1 ( ) { return 0 ; }")
method2 = AbstractMethod("public double METHOD_1 ( double VAR_1 ) { return VAR_1 ; }")
operations = method1.getEditOperationsTo(method2)
method1.applyEditOperations(operations)
method1
method2
method1 == method2
```
```
import os
import pandas as pd
import numpy as np
import pickle
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
data_path \
= 'https://raw.githubusercontent.com/fclesio/learning-space/master/Datasets/02%20-%20Classification/default_credit_card.csv'
def get_features_and_labels(df):
# Features
X = df[
[
"LIMIT_BAL",
"AGE",
"PAY_0",
"PAY_2",
"PAY_3",
"BILL_AMT1",
"BILL_AMT2",
"PAY_AMT1",
]
]
gender_dummies = pd.get_dummies(df[["SEX"]].astype(str))
X = pd.concat([X, gender_dummies], axis=1)
# Labels
y = df["DEFAULT"]
return X, y
def get_results(y_test, y_pred):
acc = metrics.accuracy_score(y_test, y_pred)
acc = round(acc, 2) * 100
df_results = pd.DataFrame(y_pred)
df_results.columns = ["status"]
print(f"Accuracy: {acc}%")
print(df_results.groupby(by=["status"]).size())
df = pd.read_csv(data_path)
X, y = get_features_and_labels(df)
X_train, X_test, y_train, y_test \
= train_test_split(X, y, test_size=0.1, random_state=42)
model = RandomForestClassifier(
n_estimators=5,
random_state=42,
max_depth=3,
min_samples_leaf=100,
n_jobs=-1,
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
get_results(y_test, y_pred)
pickle.dump(model, open("model_rf.pkl", 'wb'))
!ls
import hashlib
def get_digest(file_path):
'''Ref: https://stackoverflow.com/a/44873382/7024760'''
h = hashlib.sha256()
with open(file_path, 'rb') as file:
while True:
# Reading is buffered, so we can read smaller chunks.
chunk = file.read(h.block_size)
if not chunk:
break
h.update(chunk)
return h.hexdigest()
get_digest('model_rf.pkl')
```
### Attack
In this case, we have a pre-trained model that will be transferred somewhere else, for example from a Data Science team to a Machine Learning engineering team.
The attack consists of taking this model, making a small modification that can be harmful, and putting it back into the ML workflow, which in this case is the production ML pipeline.
```
model_rf_reload_pkl \
= pickle.load(open('model_rf.pkl', 'rb'))
model_rf_reload_pkl.classes_
# Attack: change all classes to 0
model_rf_reload_pkl.classes_ = np.array([0, 0])
model_rf_reload_pkl.classes_
pickle.dump(model_rf_reload_pkl, open("model_rf.pkl", 'wb'))
y_pred \
= model_rf_reload_pkl.predict(X_test)
get_results(y_test, y_pred)
get_digest('model_rf.pkl')
```
As we can see, access to the scikit-learn object through pickle is all it takes to make every property accessible.
### Countermeasures
- If there is any risk of intermediation at the hand-off points of ML models (for example, a DS team transfers the model to an MLE team, which transfers it to another team), it is appropriate to use a SHA-1 or MD5 hash from the start to guarantee the file's integrity across all entities involved with the model;
- If possible, keep as few intermediaries as possible between model training and deployment.
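A minimal sketch of the first countermeasure (the helper name and flow are illustrative; SHA-256 is used here, matching the `get_digest` function above): record the artifact's digest at hand-off time and refuse to deserialize if it no longer matches.

```
import hashlib
import pickle

def load_verified(path, expected_digest):
    # refuse to unpickle a file whose SHA-256 digest has changed
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        h.update(f.read())
    if h.hexdigest() != expected_digest:
        raise ValueError("digest mismatch: file may have been tampered with")
    with open(path, 'rb') as f:
        return pickle.load(f)
```

The digest would be recorded by the training team and checked by every downstream consumer before loading the model.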
# Interactive Monte Carlo Tree Search (MCTS)
```
%load_ext autoreload
%autoreload 2
from maMDP.agents import Agent
from maMDP.mdp import SquareGridMDP, MDP
from maMDP.environments import Environment
from maMDP.algorithms.action_selection import SoftmaxActionSelector, MaxActionSelector
from maMDP.algorithms.mcts import MCTS
from maMDP.algorithms.dynamic_programming import ValueIteration
import numpy as np
import matplotlib.pyplot as plt
```
## Create an environment
Here we create an environment with two active agents.
We can again use MCTS to determine the best actions for our agent, but this time we can allow the MCTS algorithm to also account for the behaviour of other agents in the environment in its simulations. For example, in this example the second agent will eat the first one if they end up in the same state, so the first agent may want to account for the other agent's behaviour when planning, to minimise the likelihood of being eaten. We can achieve this by setting the `interactive` argument to `True` when setting up the MCTS algorithm.
To do this, we have to determine best actions for the other agent so that we can predict its behaviour and use this for the simulations used by MCTS. This is done using value iteration, which can be fairly computationally intensive as we have to do it on each step of the simulation, for as many iterations as we want to run. To speed this up, we can cache the results so that the simulated agent will reuse previously computed Q values for its current state. To do this, we set the `caching` argument to True.
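The caching idea itself is just memoization keyed on the simulated agent's state. A schematic sketch, independent of the `maMDP` API (whose internals may differ; the names here are illustrative):

```
q_cache = {}

def cached_q_values(state, solve):
    # reuse Q-values already computed for this state during earlier simulations
    if state not in q_cache:
        q_cache[state] = solve(state)  # e.g. run value iteration from `state`
    return q_cache[state]
```

On later visits to the same state, the expensive `solve` call is skipped entirely.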
For the sake of quick demonstration, we'll set the number of iterations to a low number (100).
```
new_agent1 = Agent('testAgent1', algorithm=MCTS, algorithm_kwargs={'interactive': False, 'n_iter': 100, 'caching': True})
new_agent2 = Agent('testAgent2')
grid_shape = (15, 15)
features = np.zeros((3, np.product(grid_shape)))
features[0, 220:] = 1
features[0, 205:210] = 1
features[1, np.random.randint(0, np.product(grid_shape), 30)] = 1
for i in range(3, 8):
features[2, i*15:i*15+2] = 1
new_mdp = SquareGridMDP(shape=grid_shape, features=features.copy())
new_env = Environment(new_mdp,
{
new_agent1: (2, [0, 1, 0, 0, 0], [1]),
new_agent2: (130, [0, 0, 1, 0.2, 0], [3])
})
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
```
The agent values the blue feature shown in the bottom right corner.
## Run MCTS
This will determine the best action from the current state, accounting for the other agent's actions.
The first time we run this, it can be quite slow.
```
%time new_env.fit('testAgent1')
```
However, thanks to the caching procedure it will be faster in subsequent runs:
```
%time new_env.fit('testAgent1')
%time new_env.fit('testAgent1')
```
## Run multiple steps
Here, Agent 1 values the orange feature on the map. Agent 2 values the green feature most (with a weight of 1) and values Agent 1 less (with a weight of 0.2).
By accounting for Agent 2 in its planning, Agent 1 will know not to stray into the green area.
```
new_agent1 = Agent('testAgent1', algorithm=MCTS,
algorithm_kwargs={'interactive': True,
'n_iter': 1000,
'caching': True,
'C': 20})
new_env = Environment(new_mdp,
{
new_agent1: (2, [0, 1, 0, 0, 0], [1]),
new_agent2: (130, [0, 0, 1, 0.2, 0], [3])
})
new_env.reset()
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
new_env.step_multi_interactive(n_steps=25, refit=True, progressbar=True)
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
trajectory = new_env.get_agent_position_history('testAgent1')
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
new_env.plot_trajectory(trajectory, ax)
```
## Non-interactive
```
new_agent1 = Agent('testAgent1', algorithm=MCTS,
algorithm_kwargs={'interactive': False,
'n_iter': 1000,
'caching': True,
'C': 20})
new_mdp = SquareGridMDP(shape=grid_shape, features=features.copy())
new_env = Environment(new_mdp,
{
new_agent1: (2, [0, 1, 0, 0, 0], [1]),
new_agent2: (130, [0, 0, 1, 0.2, 0], [3])
})
new_env.reset()
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
new_env.step_multi_interactive(n_steps=25, refit=True, progressbar=True)
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
trajectory = new_env.get_agent_position_history('testAgent1')
new_env.plot(mdp_plotting_kwargs={'figsize': (15, 5)},
agent_markers={'testAgent1': 'X', 'testAgent2': 'o'},
agent_colours={'testAgent1': 'black', 'testAgent2': 'red'},
agent_plotting_kwargs={'s': 200})
new_env.plot_trajectory(trajectory, ax)
```
```
import pandas as pd
import numpy as np
import math
import pickle
#Creating a dictionary with 2500 keys and setting each value to 1. The reason for putting the value
#at 1 instead of zero is the Laplace smoothing of the numerator.
#The pickle module helps with serialization of data and makes it easier to load.
dic1={} #dic1 counts words appearing in non-spam emails.
dic2={} #dic2 counts words appearing in spam emails.
for i in range(1,2501):
dic1.update({i:1})
dic2.update({i:1})
k=[dic1,dic2]
with open("dic.pickle","wb") as f:
pickle.dump(k,f)
with open("dic.pickle","rb") as f:
k=pickle.load(f) #k[0] contains words appeared in non spam emails.
#k[1] contains words appeared in spam emails.
v=2500
df=pd.read_csv("train-features.txt", sep=' ',
names = ["DocID", "DicID", "Occ"])
s=df["DocID"]
#Reading the file and giving the columns their respective headers:
#DocID - document number, DicID - dictionary token number (1-2500), Occ - number of times the token occurred in the respective document.
##Training the classifier
c=1
r=0 #Counts the number of words in the current document
a=[] #a is a list of document lengths, e.g. a[0] is the number of words in the first document
for i in range(len(s)):
if (s[i])==c:
r+=df["Occ"][i]
else:
a.append(r)
c+=1
r=r-r
r+=df["Occ"][i]
a.append(r)
b=a[0:350] #Dividing the lengths into two lists: documents 0-350 are not spam (0) and 350-700 are spam (1)
a=a[350:700]
nsp=sum(b)+v #v is the length of the dictionary, i.e. 2500; it is added due to Laplace smoothing
sp=sum(a)+v
sums=[nsp,sp]
with open("dicsum.pickle","wb") as f:
pickle.dump(sums,f)
sums=[]
with open("dicsum.pickle","rb") as f:
sums=pickle.load(f)
for i in range(len(s)): #Updating the non-spam and spam dictionaries by adding the occurrences of each word.
if int(s[i])<=350:
k[0][(df["DicID"][i])]+=df["Occ"][i]
else:
k[1][(df["DicID"][i])]+=df["Occ"][i]
with open("classydicl.pickle","wb") as f:
pickle.dump(k,f)
with open("classydicl.pickle","rb") as f:
q=pickle.load(f) #Our numerator and denominator are both ready. Now we divide.
for keys in (q[0]):
q[0][keys]=np.divide(q[0][keys],sums[0])
q[1][keys]=np.divide(q[1][keys],sums[1])
with open("newclassydic.pickle","wb") as f:
pickle.dump(q,f)
with open("newclassydic.pickle","rb") as f:
k=pickle.load(f)
#newclassydic is our trained classifier
#k loads the new classifier k[0] contains non spam and k[1] contains spam.
##Testing The Naive Bayes Classifier
df=pd.read_csv("test-features.txt", sep=' ',
names = ["DocID", "DicID", "Occ"]) #reading the file and giving them respective headers
s=df["DocID"]
t=df["DicID"]
u=df["Occ"]
x=np.log(0.50) #0.50 is the prior probability of spam and of non-spam in our training data.
y=np.log(0.50) #x is the log-probability of non-spam and y is the log-probability of spam
#Applying the naive Bayes algorithm. We add logs instead of multiplying probabilities due to underflow.
z=1
arr=[]
for i in range(len(s)):
if (s[i]==z):
e=(k[0][t[i]])*(u[i])
f=(k[1][t[i]])*(u[i])
x+=np.log(e)
y+=np.log(f)
else:
z+=1
if x>y:
arr.append(0)
else:
arr.append(1)
x=np.log(0.50)
y=np.log(0.50)
e=(k[0][t[i]])*(u[i])
f=(k[1][t[i]])*(u[i])
x+=np.log(e)
y+=np.log(f)
if x>y:
arr.append(0)
else:
arr.append(1)
df=pd.read_csv("test-labels.txt",names = ["LabelId"]) #reading the file and giving them respective header.
accuracy=0
l=df["LabelId"]
for i in range(len(arr)): #Comparing test label and prediction(arr)
if (l[i]==arr[i]):
accuracy+=1
accuracy=accuracy/len(arr)
print ("Accuracy of the Naive Bayes Algorithm is",accuracy*100.0)
submission = pd.DataFrame(arr)
submission.to_csv('prediction.txt',index = False)#Creates prediction into a new file.
```
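To see why the code above sums logs instead of multiplying probabilities, a quick sketch: the product of many small probabilities underflows to zero in floating point, while the sum of their logs stays finite.

```
import numpy as np

probs = np.full(1000, 1e-5)   # 1000 word probabilities of 1e-5 each
print(np.prod(probs))         # underflows to 0.0
print(np.sum(np.log(probs)))  # finite: 1000 * log(1e-5)
```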
#**This notebook is in beta**
<font size = 4>Expect some instabilities and bugs.
<font size = 4>**Currently missing features include:**
- Augmentation cannot be disabled
- Exported results include only a simple CSV file. More options will be included in the next releases
- Training and QC reports are not generated
# **Detectron2 (2D)**
<font size = 4> Detectron2 is a deep-learning method designed to perform object detection and classification of objects in images. Detectron2 is Facebook AI Research's next generation software system that implements state-of-the-art object detection algorithms. It is a ground-up rewrite of the previous version, Detectron, and it originates from maskrcnn-benchmark. More information on Detectron2 can be found on the Detectron2 github pages (https://github.com/facebookresearch/detectron2).
<font size = 4>**This particular notebook enables object detection and classification on 2D images given ground truth bounding boxes. If you are interested in image segmentation, you should use our U-net or Stardist notebooks instead.**
---
<font size = 4>*Disclaimer*:
<font size = 4>This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.
# **License**
---
```
#@markdown ##Double click to see the license information
#------------------------- LICENSE FOR ZeroCostDL4Mic------------------------------------
#This ZeroCostDL4Mic notebook is distributed under the MIT licence
#------------------------- LICENSE FOR CycleGAN ------------------------------------
#Apache License
#Version 2.0, January 2004
#http://www.apache.org/licenses/
#TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
#1. Definitions.
#"License" shall mean the terms and conditions for use, reproduction,
#and distribution as defined by Sections 1 through 9 of this document.
#"Licensor" shall mean the copyright owner or entity authorized by
#the copyright owner that is granting the License.
#"Legal Entity" shall mean the union of the acting entity and all
#other entities that control, are controlled by, or are under common
#control with that entity. For the purposes of this definition,
#"control" means (i) the power, direct or indirect, to cause the
#direction or management of such entity, whether by contract or
#otherwise, or (ii) ownership of fifty percent (50%) or more of the
#outstanding shares, or (iii) beneficial ownership of such entity.
#"You" (or "Your") shall mean an individual or Legal Entity
#exercising permissions granted by this License.
#"Source" form shall mean the preferred form for making modifications,
#including but not limited to software source code, documentation
#source, and configuration files.
#"Object" form shall mean any form resulting from mechanical
#transformation or translation of a Source form, including but
#not limited to compiled object code, generated documentation,
#and conversions to other media types.
#"Work" shall mean the work of authorship, whether in Source or
#Object form, made available under the License, as indicated by a
#copyright notice that is included in or attached to the work
#(an example is provided in the Appendix below).
#"Derivative Works" shall mean any work, whether in Source or Object
#form, that is based on (or derived from) the Work and for which the
#editorial revisions, annotations, elaborations, or other modifications
#represent, as a whole, an original work of authorship. For the purposes
#of this License, Derivative Works shall not include works that remain
#separable from, or merely link (or bind by name) to the interfaces of,
#the Work and Derivative Works thereof.
#"Contribution" shall mean any work of authorship, including
#the original version of the Work and any modifications or additions
#to that Work or Derivative Works thereof, that is intentionally
#submitted to Licensor for inclusion in the Work by the copyright owner
#or by an individual or Legal Entity authorized to submit on behalf of
#the copyright owner. For the purposes of this definition, "submitted"
#means any form of electronic, verbal, or written communication sent
#to the Licensor or its representatives, including but not limited to
#communication on electronic mailing lists, source code control systems,
#and issue tracking systems that are managed by, or on behalf of, the
#Licensor for the purpose of discussing and improving the Work, but
#excluding communication that is conspicuously marked or otherwise
#designated in writing by the copyright owner as "Not a Contribution."
#"Contributor" shall mean Licensor and any individual or Legal Entity
#on behalf of whom a Contribution has been received by Licensor and
#subsequently incorporated within the Work.
#2. Grant of Copyright License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#copyright license to reproduce, prepare Derivative Works of,
#publicly display, publicly perform, sublicense, and distribute the
#Work and such Derivative Works in Source or Object form.
#3. Grant of Patent License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#(except as stated in this section) patent license to make, have made,
#use, offer to sell, sell, import, and otherwise transfer the Work,
#where such license applies only to those patent claims licensable
#by such Contributor that are necessarily infringed by their
#Contribution(s) alone or by combination of their Contribution(s)
#with the Work to which such Contribution(s) was submitted. If You
#institute patent litigation against any entity (including a
#cross-claim or counterclaim in a lawsuit) alleging that the Work
#or a Contribution incorporated within the Work constitutes direct
#or contributory patent infringement, then any patent licenses
#granted to You under this License for that Work shall terminate
#as of the date such litigation is filed.
#4. Redistribution. You may reproduce and distribute copies of the
#Work or Derivative Works thereof in any medium, with or without
#modifications, and in Source or Object form, provided that You
#meet the following conditions:
#(a) You must give any other recipients of the Work or
#Derivative Works a copy of this License; and
#(b) You must cause any modified files to carry prominent notices
#stating that You changed the files; and
#(c) You must retain, in the Source form of any Derivative Works
#that You distribute, all copyright, patent, trademark, and
#attribution notices from the Source form of the Work,
#excluding those notices that do not pertain to any part of
#the Derivative Works; and
#(d) If the Work includes a "NOTICE" text file as part of its
#distribution, then any Derivative Works that You distribute must
#include a readable copy of the attribution notices contained
#within such NOTICE file, excluding those notices that do not
#pertain to any part of the Derivative Works, in at least one
#of the following places: within a NOTICE text file distributed
#as part of the Derivative Works; within the Source form or
#documentation, if provided along with the Derivative Works; or,
#within a display generated by the Derivative Works, if and
#wherever such third-party notices normally appear. The contents
#of the NOTICE file are for informational purposes only and
#do not modify the License. You may add Your own attribution
#notices within Derivative Works that You distribute, alongside
#or as an addendum to the NOTICE text from the Work, provided
#that such additional attribution notices cannot be construed
#as modifying the License.
#You may add Your own copyright statement to Your modifications and
#may provide additional or different license terms and conditions
#for use, reproduction, or distribution of Your modifications, or
#for any such Derivative Works as a whole, provided Your use,
#reproduction, and distribution of the Work otherwise complies with
#the conditions stated in this License.
#5. Submission of Contributions. Unless You explicitly state otherwise,
#any Contribution intentionally submitted for inclusion in the Work
#by You to the Licensor shall be under the terms and conditions of
#this License, without any additional terms or conditions.
#Notwithstanding the above, nothing herein shall supersede or modify
#the terms of any separate license agreement you may have executed
#with Licensor regarding such Contributions.
#6. Trademarks. This License does not grant permission to use the trade
#names, trademarks, service marks, or product names of the Licensor,
#except as required for reasonable and customary use in describing the
#origin of the Work and reproducing the content of the NOTICE file.
#7. Disclaimer of Warranty. Unless required by applicable law or
#agreed to in writing, Licensor provides the Work (and each
#Contributor provides its Contributions) on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#implied, including, without limitation, any warranties or conditions
#of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
#PARTICULAR PURPOSE. You are solely responsible for determining the
#appropriateness of using or redistributing the Work and assume any
#risks associated with Your exercise of permissions under this License.
#8. Limitation of Liability. In no event and under no legal theory,
#whether in tort (including negligence), contract, or otherwise,
#unless required by applicable law (such as deliberate and grossly
#negligent acts) or agreed to in writing, shall any Contributor be
#liable to You for damages, including any direct, indirect, special,
#incidental, or consequential damages of any character arising as a
#result of this License or out of the use or inability to use the
#Work (including but not limited to damages for loss of goodwill,
#work stoppage, computer failure or malfunction, or any and all
#other commercial damages or losses), even if such Contributor
#has been advised of the possibility of such damages.
#9. Accepting Warranty or Additional Liability. While redistributing
#the Work or Derivative Works thereof, You may choose to offer,
#and charge a fee for, acceptance of support, warranty, indemnity,
#or other liability obligations and/or rights consistent with this
#License. However, in accepting such obligations, You may act only
#on Your own behalf and on Your sole responsibility, not on behalf
#of any other Contributor, and only if You agree to indemnify,
#defend, and hold each Contributor harmless for any liability
#incurred by, or claims asserted against, such Contributor by reason
#of your accepting any such warranty or additional liability.
#END OF TERMS AND CONDITIONS
#APPENDIX: How to apply the Apache License to your work.
#To apply the Apache License to your work, attach the following
#boilerplate notice, with the fields enclosed by brackets "[]"
#replaced with your own identifying information. (Don't include
#the brackets!) The text should be enclosed in the appropriate
#comment syntax for the file format. We also recommend that a
#file or class name and description of purpose be included on the
#same "printed page" as the copyright notice for easier
#identification within third-party archives.
#Copyright [yyyy] [name of copyright owner]
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
```
# **How to use this notebook?**
---
<font size = 4>Videos describing how to use our notebooks are available on YouTube:
- [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
- [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
---
###**Structure of a notebook**
<font size = 4>The notebook contains two types of cell:
<font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
<font size = 4>**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]` mark on the left side of the cell (a play button appears) and click it. When execution is done, the play-button animation stops. You can create a new code cell by clicking `+ Code`.
---
###**Table of contents, Code snippets** and **Files**
<font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
<font size = 4>*Table of contents* = contains the structure of the notebook. Click an entry to move quickly between sections.
<font size = 4>*Code snippets* = contains examples of how to code certain tasks. You can ignore this tab when using this notebook.
<font size = 4>*Files* = contains all available files. After mounting your Google Drive (see section 1), you will find your files and folders here.
<font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.2.
<font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
---
###**Making changes to the notebook**
<font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.
<font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.
#**0. Before getting started**
---
<font size = 4> Preparing the dataset carefully is essential to make this Detectron2 notebook work. This model requires as input a set of images and as target a list of annotation files in Pascal VOC format. Each annotation file must have exactly the same name as its input image, with an .xml extension instead of .jpg. The annotation files contain the class labels and all bounding boxes for the objects in each image of your dataset. Most public datasets offer the option of saving annotations in this format, and hand-annotation software will typically save them in this format automatically.
<font size=4> If you want to assemble your own dataset we recommend using the open source https://www.makesense.ai/ resource. You can follow our instructions on how to label your dataset with this tool on our [wiki](https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki/Object-Detection-(YOLOv2)).
<font size = 4>**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook.
<font size = 4> **Additionally, the corresponding input and output files need to have the same name**.
<font size = 4> Please note that you currently can **only use .png files!**
<font size = 4>Here's a common data structure that can work:
* Experiment A
- **Training dataset**
- Input images (Training_source)
- img_1.png, img_2.png, ...
- Annotation files (Training_source_annotations)
- img_1.xml, img_2.xml, ...
- **Quality control dataset**
- Input images
- img_1.png, img_2.png
- Annotation files
- img_1.xml, img_2.xml
- **Data to be predicted**
- **Results**
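Because the XML-to-COCO conversion in section 3 relies on each image having a same-named annotation file, it can be worth checking the pairing before training. A minimal sketch (the `unmatched_images` helper and the folder names are illustrative, not part of this notebook):

```python
import os

def unmatched_images(source_dir, target_dir):
    """Return the .png images in source_dir with no same-named .xml in target_dir."""
    annotations = set(os.listdir(target_dir))
    return [f for f in sorted(os.listdir(source_dir))
            if f.lower().endswith(".png")
            and os.path.splitext(f)[0] + ".xml" not in annotations]
```

Any file reported by such a check indicates a naming mismatch worth fixing before running section 3.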
---
<font size = 4>**Important note**
<font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
<font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
<font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
---
# **1. Install Detectron2 and dependencies**
---
## **1.1. Install key dependencies**
---
<font size = 4>
```
#@markdown ##Install dependencies and Detectron2
from builtins import any as b_any

def get_requirements_path():
    # Store requirements file in 'contents' directory
    current_dir = os.getcwd()
    dir_count = current_dir.count('/') - 1
    path = '../' * (dir_count) + 'requirements.txt'
    return path

def filter_files(file_list, filter_list):
    filtered_list = []
    for fname in file_list:
        if b_any(fname.split('==')[0] in s for s in filter_list):
            filtered_list.append(fname)
    return filtered_list

def build_requirements_file(before, after):
    path = get_requirements_path()
    # Exporting requirements.txt for local run
    !pip freeze > $path
    # Get minimum requirements file
    df = pd.read_csv(path, delimiter="\n")
    mod_list = [m.split('.')[0] for m in after if not m in before]
    req_list_temp = df.values.tolist()
    req_list = [x[0] for x in req_list_temp]
    # Replace with package name and handle cases where the import name differs from the module name
    mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
    mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
    filtered_list = filter_files(req_list, mod_replace_list)
    file = open(path, 'w')
    for item in filtered_list:
        file.writelines(item + '\n')
    file.close()
import sys
before = [str(m) for m in sys.modules]
# install dependencies
#!pip install -U torch torchvision cython
!pip install -U 'git+https://github.com/facebookresearch/fvcore.git' 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
import os
import pandas as pd
torch.__version__
!git clone https://github.com/facebookresearch/detectron2 detectron2_repo
!pip install -e detectron2_repo
!pip install wget
#Force session restart
exit(0)
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
```
## **1.2. Restart your runtime**
---
<font size = 4>
**<font size = 4> Ignore the following error message. Your runtime has automatically restarted. This is normal.**
<img width="40%" alt ="" src="https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Wiki_files/session_crash.png"><figcaption> </figcaption>
## **1.3. Load key dependencies**
---
<font size = 4>
```
Notebook_version = '1.13'
Network = 'Detectron 2D'
#@markdown ##Play this cell to load the required dependencies
import wget
# Some basic setup:
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
import yaml
#Download the script to convert XML into COCO
wget.download("https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Tools/voc2coco.py", "/content")
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.utils.visualizer import ColorMode
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from datetime import datetime
from detectron2.data.catalog import Metadata
from detectron2.config import get_cfg
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances
from detectron2.utils.visualizer import ColorMode
import glob
from detectron2.checkpoint import Checkpointer
from detectron2.config import get_cfg
import os
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
# Colors for the warning messages
class bcolors:
    WARNING = '\033[31m'
    W = '\033[0m'   # white (normal)
    R = '\033[31m'  # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator
class CocoTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            os.makedirs("coco_eval", exist_ok=True)
            output_folder = "coco_eval"
        return COCOEvaluator(dataset_name, cfg, False, output_folder)
print("Libraries loaded")
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
    print("This notebook is up-to-date.")
else:
    print(bcolors.WARNING + "A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
#Failsafes
cell_ran_prediction = 0
cell_ran_training = 0
cell_ran_QC_training_dataset = 0
cell_ran_QC_QC_dataset = 0
```
# **2. Initialise the Colab session**
---
## **2.1. Check for GPU access**
---
By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
<font size = 4>Go to **Runtime -> Change the Runtime type**
<font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this notebook is written)*
<font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
```
#@markdown ##Run this cell to check if you have GPU access
#%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name() == '':
    print('You do not have GPU access.')
    print('Did you change your runtime?')
    print('If the runtime setting is correct then Google did not allocate a GPU for your session')
    print('Expect slow performance. To access a GPU try reconnecting later')
else:
    print('You have GPU access')
    !nvidia-smi
```
## **2.2. Mount your Google Drive**
---
<font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
<font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive.
<font size = 4> Once this is done, your data are available in the **Files** tab on the top left of the notebook.
```
#@markdown ##Play the cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on the "Files" tab on the left. Refresh the tab. Your Google Drive folder should now be available here as "drive".
# mount user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
```
**<font size = 4> If you cannot see your files, reactivate your session by connecting to your hosted runtime.**
<img width="40%" alt ="Example of image detection with retinanet." src="https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Wiki_files/connect_to_hosted.png"><figcaption> Connect to a hosted runtime. </figcaption>
# **3. Select your parameters and paths**
## **3.1. Setting main training parameters**
---
<font size = 4>
<font size = 5> **Paths for training, predictions and results**
<font size = 4>**`Training_source`, `Training_target`:** These are the paths to the folders containing your training images and annotation data, respectively. To find the path of a folder, go to the Files tab on the left of the notebook, navigate to the folder containing your files, right-click the folder, choose **Copy path**, and paste it into the corresponding box below.
<font size = 4>**`labels`:** Input the names of the different labels used to annotate your dataset (separated by a comma).
<font size = 4>**`model_name`:** Use only a my_model-style name, not my-model (use "_", not "-"), and do not use spaces. Avoid using the name of an existing model (saved in the same folder), as it will be overwritten.
<font size = 4>**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).
<font size = 5>**Training Parameters**
<font size = 4>**`number_of_iteration`:** Input how many iterations to use to train the network. Initial results can be observed using 1000 iterations but consider using 5000 or more iterations to train your models. **Default value: 2000**
<font size = 5>**Advanced Parameters - experienced users only**
<font size =4>**`batch_size`:** This parameter defines the number of images seen in each training step. Reduce this parameter if your GPU runs out of memory. **Default value: 4**
<font size = 4>**`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each image / patch is seen at least once per epoch. **Default value: number of patches / batch_size**
<font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10**
<font size = 4>**`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.001**
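As an illustration of how `percentage_validation` translates into a number of held-out image/annotation pairs, here is a sketch of the split arithmetic (the `validation_count` helper is hypothetical; the `+ 1` guarantees at least one validation pair even for tiny datasets):

```python
def validation_count(number_files, percentage_validation):
    """Pairs set aside for validation: the given percentage of the dataset, plus one."""
    return int(number_files * percentage_validation / 100) + 1

# With the default 10% split, 50 training images yield 6 validation pairs
```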
```
# create DataGenerator-object.
#@markdown ###Path to training image(s):
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#@markdown ###Labels
#@markdown Input the names of the different labels present in your training dataset, separated by a comma
labels = "" #@param {type:"string"}
#@markdown ### Model name and path:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
full_model_path = model_path+'/'+model_name+'/'
#@markdown ###Training Parameters
#@markdown Number of iterations:
number_of_iteration = 2000#@param {type:"number"}
#Here we store the information related to our labels
list_of_labels = labels.split(", ")
with open('/content/labels.txt', 'w') as f:
    for item in list_of_labels:
        print(item, file=f)
number_of_labels = len(list_of_labels)
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True#@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 4#@param {type:"number"}
percentage_validation = 10#@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
    print("Default advanced parameters enabled")
    batch_size = 4
    percentage_validation = 10
    initial_learning_rate = 0.001
# Here we enable the pre-trained model by default (in case the next cell is not run)
Use_pretrained_model = True
# Here we enable data augmentation by default (in case the cell is not run)
Use_Data_augmentation = True
# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
File_for_validation = int(number_files*percentage_validation/100)+1
#Here we split the training dataset between training and validation
# Everything is copied into the /content folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
    shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)
Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
    shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)
Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
    shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)
Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
    shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)
list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
    shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
    shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)
list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))
#Here we move images to be used for validation
for i in range(File_for_validation):
    name = list_source_temp[i]
    shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
    shortname_no_extension = name[:-4]
    shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")
# Here we convert the XML files into COCO format to be loaded in detectron2
#First we need to create list of labels to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))
name_no_extension_training = []
for n in list_source_training_temp:
    name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
    name_no_extension_validation.append(os.path.splitext(n)[0])
#Save the lists of file names as text files
with open('/content/training_files.txt', 'w') as f:
    for item in name_no_extension_training:
        print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
    for item in name_no_extension_validation:
        print(item, end='\n', file=f)
file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we load the dataset to detectron2
if cell_ran_training == 0:
    from detectron2.data.datasets import register_coco_instances
    register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
    register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
    #visualize training data
    my_dataset_train_metadata = MetadataCatalog.get("my_dataset_train")
    dataset_dicts = DatasetCatalog.get("my_dataset_train")
    import random
    from detectron2.utils.visualizer import Visualizer
    for d in random.sample(dataset_dicts, 1):
        img = cv2.imread(d["file_name"])
        visualizer = Visualizer(img[:, :, ::-1], metadata=my_dataset_train_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.8)
        vis = visualizer.draw_dataset_dict(d)
        cv2_imshow(vis.get_image()[:, :, ::-1])
    # failsafe
    cell_ran_training = 1
```
## **3.2. Data augmentation**
---
<font size = 4>
<font size = 4>Data augmentation is currently enabled by default in this notebook. The option to disable data augmentation is not yet available.
## **3.3. Using weights from a pre-trained model as initial weights**
---
<font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Detectron2 model**.
```
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = True #@param {type:"boolean"}
pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN","RetinaNet", "Model_from_file"]
#pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN", "RetinaNet", "RPN & Fast R-CNN", "Model_from_file"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
    # --------------------- Load the model from the chosen path ------------------------
    if pretrained_model_choice == "Model_from_file":
        h5_file_path = pretrained_model_path
        print('Weights found in:')
        print(h5_file_path)
        print('will be loaded prior to training.')
        if not os.path.exists(h5_file_path):
            print('WARNING: the pretrained model does not exist')
            h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
            print('The Faster R-CNN model will be used instead.')
    if pretrained_model_choice == "Faster R-CNN":
        h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
        print('The Faster R-CNN model will be used.')
    if pretrained_model_choice == "RetinaNet":
        h5_file_path = "COCO-Detection/retinanet_R_101_FPN_3x.yaml"
        print('The RetinaNet model will be used.')
    if pretrained_model_choice == "RPN & Fast R-CNN":
        h5_file_path = "COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml"
if not Use_pretrained_model:
    h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
    print('The Faster R-CNN model will be used.')
```
#**4. Train the network**
---
## **4.1. Start Training**
---
<font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
<font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches. Another way to circumvent this is to save the parameters of the model after training and start training again from this point.
```
#@markdown ##Start training
# Create the model folder
if os.path.exists(full_model_path):
  shutil.rmtree(full_model_path)
os.makedirs(full_model_path)

# Copy the label names into the model folder
shutil.copy("/content/labels.txt", full_model_path+"/"+"labels.txt")

#PDF export
#######################################
## MISSING
#######################################
#To be added

start = time.time()

# Load the config files
cfg = get_cfg()
if pretrained_model_choice == "Model_from_file":
  cfg.merge_from_file(pretrained_model_path+"/config.yaml")
if not pretrained_model_choice == "Model_from_file":
  cfg.merge_from_file(model_zoo.get_config_file(h5_file_path))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.OUTPUT_DIR = (full_model_path)
cfg.DATALOADER.NUM_WORKERS = 4
if pretrained_model_choice == "Model_from_file":
  cfg.MODEL.WEIGHTS = pretrained_model_path+"/model_final.pth"  # Let training initialize from the local model
if not pretrained_model_choice == "Model_from_file":
  cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(h5_file_path)  # Let training initialize from the model zoo
cfg.SOLVER.IMS_PER_BATCH = int(batch_size)
cfg.SOLVER.BASE_LR = initial_learning_rate
cfg.SOLVER.WARMUP_ITERS = 1000
cfg.SOLVER.MAX_ITER = int(number_of_iteration)  # adjust up if val mAP is still rising, adjust down if overfit
cfg.SOLVER.STEPS = (1000, 1500)
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
if pretrained_model_choice == "Faster R-CNN":
  cfg.MODEL.ROI_HEADS.NUM_CLASSES = (number_of_labels)
if pretrained_model_choice == "RetinaNet":
  cfg.MODEL.RETINANET.NUM_CLASSES = (number_of_labels)
cfg.TEST.EVAL_PERIOD = 500

trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()

# Save the config file after training
config = cfg.dump()  # formatted config string
with open(full_model_path+"/config.yaml", 'w') as file1:
  file1.writelines(config)

dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:", hour, "hour(s)", mins, "min(s)", round(sec), "sec(s)")
```
## **4.2. Download your model(s) from Google Drive**
---
<font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder as all data can be erased at the next training if using the same folder.
# **5. Evaluate your model**
---
<font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model. Detectron 2 requires you to reload your training dataset in order to perform the quality control step.
<font size = 4>**We highly recommend performing quality control on all newly trained models.**
```
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
QC_model_folder = "" #@param {type:"string"}
#@markdown ####Path to the image(s) used for training:
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}

# Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
  QC_model_name = model_name
  QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
  print("The "+QC_model_name+" network will be evaluated")
else:
  print(bcolors.WARNING + '!! WARNING: The chosen model does not exist !!')
  print('Please make sure you provide a valid model path and model name before proceeding further.')

# Here we load the list of classes stored in the model folder
list_of_labels_QC = []
with open(full_QC_model_path+'labels.txt', newline='') as csvfile:
  reader = csv.reader(csvfile)
  for row in reader:
    list_of_labels_QC.append(row[0])

# Here we create a list of colors for later display
color_list = []
for i in range(len(list_of_labels_QC)):
  color = list(np.random.choice(range(256), size=3))
  color_list.append(color)

# Save the list of labels as a text file
if not (Use_the_current_trained_model):
  with open('/content/labels.txt', 'w') as f:
    for item in list_of_labels_QC:
      print(item, file=f)

# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
percentage_validation = 10
File_for_validation = int((number_files)/percentage_validation)+1

# Here we split the training dataset between training and validation
# Everything is copied into the /content folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
  shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)

Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
  shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)

Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
  shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)

Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
  shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)

list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))

# Copy files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
  shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
  shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)

list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))

# Here we move images to be used for validation
for i in range(File_for_validation):
  name = list_source_temp[i]
  shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
  shortname_no_extension = name[:-4]
  shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")

# First we need to create lists of file names to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))

name_no_extension_training = []
for n in list_source_training_temp:
  name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
  name_no_extension_validation.append(os.path.splitext(n)[0])

# Save the lists of file names as text files
with open('/content/training_files.txt', 'w') as f:
  for item in name_no_extension_training:
    print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
  for item in name_no_extension_validation:
    print(item, end='\n', file=f)

file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"

os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")

# Here we load the dataset into detectron2
if cell_ran_QC_training_dataset == 0:
  from detectron2.data.datasets import register_coco_instances
  register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
  register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
  # Failsafe for later
  cell_ran_QC_training_dataset = 1
```
## **5.1. Inspection of the loss function**
---
<font size = 4>It is good practice to evaluate training progress by checking whether your model is slowly improving over time. The following cell will let you load Tensorboard and investigate how several metrics evolved over the training iterations.
<font size = 4>
```
#@markdown ##Play the cell to load tensorboard
%load_ext tensorboard
%tensorboard --logdir "$full_QC_model_path"
```
## **5.2. Error mapping and quality metrics estimation**
---
<font size = 4>This section will compare the predictions generated by your model against ground-truth. Additionally, the cell below will show the mAP value of the model on the QC data. If you want to read about this score in more detail, we recommend [this brief explanation](https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173).
<font size = 4> The images provided in the "Source_QC_folder" and "Target_QC_folder" should contain images (e.g. as .png) and annotations (.xml files)!
<font size = 4>**mAP score:** This refers to the mean average precision of the model on the given dataset. This value gives an indication of how precise the predictions of the classes on this dataset are when compared to the ground-truth. Values closer to 1 indicate a good fit.
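<font size = 4>The mAP computation itself is handled by Detectron2's `COCOEvaluator` below, but as a rough, self-contained illustration of the ingredient underneath it, the sketch below implements intersection-over-union (IoU) for axis-aligned boxes: a prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (COCO-style mAP averages precision over thresholds from 0.5 to 0.95). The box coordinates here are made-up values, not taken from this notebook.

```python
def iou(box_a, box_b):
    # boxes are (x1, y1, x2, y2); intersection-over-union of two axis-aligned boxes
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# two boxes overlapping on half their width: IoU = 50 / 150 = 1/3
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```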
```
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}

if cell_ran_QC_QC_dataset == 0:
  # Save the list of labels as a text file
  with open('/content/labels_QC.txt', 'w') as f:
    for item in list_of_labels_QC:
      print(item, file=f)

  # Here we create temp folders for the QC
  QC_source_temp = "/content/QC_source"
  if os.path.exists(QC_source_temp):
    shutil.rmtree(QC_source_temp)
  os.makedirs(QC_source_temp)

  QC_target_temp = "/content/QC_target"
  if os.path.exists(QC_target_temp):
    shutil.rmtree(QC_target_temp)
  os.makedirs(QC_target_temp)

  # Create a quality control/Prediction folder
  if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
    shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
  os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")

  # Here we copy the QC files to the temp folders
  for f in os.listdir(os.path.join(Source_QC_folder)):
    shutil.copy(Source_QC_folder+"/"+f, QC_source_temp+"/"+f)
  for p in os.listdir(os.path.join(Target_QC_folder)):
    shutil.copy(Target_QC_folder+"/"+p, QC_target_temp+"/"+p)

  # Here we convert the XML files into JSON
  # Save the list of files
  list_source_QC_temp = os.listdir(os.path.join(QC_source_temp))
  name_no_extension_QC = []
  for n in list_source_QC_temp:
    name_no_extension_QC.append(os.path.splitext(n)[0])
  with open('/content/QC_files.txt', 'w') as f:
    for item in name_no_extension_QC:
      print(item, end='\n', file=f)

  # Convert XML into JSON
  file_output_QC = QC_target_temp+"/output.json"
  os.chdir("/content")
  !python voc2coco.py --ann_dir "$QC_target_temp" --output "$file_output_QC" --ann_ids "/content/QC_files.txt" --labels "/content/labels.txt" --ext xml
  os.chdir("/")

  # Here we register the QC dataset
  register_coco_instances("my_dataset_QC", {}, QC_target_temp+"/output.json", QC_source_temp)
  cell_ran_QC_QC_dataset = 1

# Load the model to use
cfg = get_cfg()
cfg.merge_from_file(full_QC_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_QC_model_path, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_QC", )
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

# Metadata
test_metadata = MetadataCatalog.get("my_dataset_QC")
test_metadata.set(thing_color = color_list)

# For the evaluation we need to load the trainer
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=True)

# Here we need to load the predictor
predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("my_dataset_QC", cfg, False, output_dir=QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
val_loader = build_detection_test_loader(cfg, "my_dataset_QC")
inference_on_dataset(trainer.model, val_loader, evaluator)

dataset_QC_dicts = DatasetCatalog.get("my_dataset_QC")
for d in random.sample(dataset_QC_dicts, 1):
  print("Ground Truth")
  img = cv2.imread(d["file_name"])
  visualizer = Visualizer(img[:, :, ::-1], metadata=test_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.5)
  vis = visualizer.draw_dataset_dict(d)
  cv2_imshow(vis.get_image()[:, :, ::-1])
  print("A prediction is displayed")
  im = cv2.imread(d["file_name"])
  outputs = predictor(im)
  v = Visualizer(im[:, :, ::-1],
                 metadata=test_metadata,
                 instance_mode=ColorMode.SEGMENTATION,
                 scale=0.5)
  out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
  cv2_imshow(out.get_image()[:, :, ::-1])

cell_ran_QC_QC_dataset = 1
```
# **6. Using the trained model**
---
<font size = 4>In this section, unseen data is processed using the model trained in Section 4. First, your unseen images are uploaded and prepared for prediction; then your trained model is loaded and used to generate predictions, which are finally saved to your Google Drive.
## **6.1. Generate prediction(s) from unseen dataset**
---
<font size = 4>The current trained model (from section 4.2) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted outputs (prediction overlay images and CSV files of the detected boxes) are saved in your **Result_folder**.
<font size = 4>**`Data_folder`:** This folder should contain the images that you want to process with the trained network.
<font size = 4>**`Result_folder`:** This folder will contain the predicted output images.
```
#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved, then play the cell to predict output on your unseen images.
#@markdown ###Path to data to analyse and where predicted output should be saved:
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
  print("Using current trained network")
  Prediction_model_name = model_name
  Prediction_model_path = model_path

full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'
if os.path.exists(full_Prediction_model_path):
  print("The "+Prediction_model_name+" network will be used.")
else:
  print(bcolors.WARNING +'!! WARNING: The chosen model does not exist !!')
  print('Please make sure you provide a valid model path and model name before proceeding further.')

# Here we load the label file
list_of_labels_predictions = []
with open(full_Prediction_model_path+'labels.txt', newline='') as csvfile:
  reader = csv.reader(csvfile)
  for row in reader:
    list_of_labels_predictions.append(row[0])

# Here we create a list of colors
color_list = []
for i in range(len(list_of_labels_predictions)):
  color = list(np.random.choice(range(256), size=3))
  color_list.append(color)

# Activate the pretrained model.
# Create config
cfg = get_cfg()
cfg.merge_from_file(full_Prediction_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_Prediction_model_path, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # set threshold for this model

# Create predictor
predictor = DefaultPredictor(cfg)

# Load the metadata for the predictions
prediction_metadata = Metadata()
prediction_metadata.set(thing_classes = list_of_labels_predictions)
prediction_metadata.set(thing_color = color_list)

start = datetime.now()
validation_folder = Path(Data_folder)

for i, file in enumerate(validation_folder.glob("*.png")):
  # this loop opens the .png files from the data folder, runs the predictor,
  # saves the detections as a .csv file and the visualisation as an image.
  file = str(file)
  file_name = file.split("/")[-1]
  im = cv2.imread(file)

  # Predictions are done here
  outputs = predictor(im)

  # here we extract the results into numpy arrays
  Classes_predictions = outputs["instances"].pred_classes.cpu().data.numpy()
  boxes_predictions = outputs["instances"].pred_boxes.tensor.cpu().numpy()
  Score_predictions = outputs["instances"].scores.cpu().data.numpy()

  # here we save the results into a csv file
  prediction_csv = Result_folder+"/"+file_name+"_predictions.csv"
  with open(prediction_csv, 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['x1','y1','x2','y2','box width','box height', 'class', 'score' ])
    for j in range(len(boxes_predictions)):
      x1 = boxes_predictions[j][0]
      y1 = boxes_predictions[j][1]
      x2 = boxes_predictions[j][2]
      y2 = boxes_predictions[j][3]
      box_width = x2 - x1
      box_height = y2 - y1
      writer.writerow([str(x1), str(y1), str(x2), str(y2), str(box_width), str(box_height), str(list_of_labels_predictions[Classes_predictions[j]]), Score_predictions[j]])

  # The last processed image is displayed; each visualisation is saved
  v = Visualizer(im, metadata=prediction_metadata, instance_mode=ColorMode.SEGMENTATION, scale=1)
  v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
  plt.figure(figsize=(20,20))
  plt.imshow(v.get_image()[:, :, ::-1])
  plt.axis('off')
  plt.savefig(Result_folder+"/"+file_name)
print("Time needed for inferencing:", datetime.now() - start)
```
## **6.2. Download your predictions**
---
<font size = 4>**Store your data** and ALL its results elsewhere by downloading it from Google Drive, and then clean the original folder tree (datasets, results, trained model, etc.) if you plan to train or use new networks. Please note that the notebook will otherwise **OVERWRITE** all files that have the same name.
# **7. Version log**
---
<font size = 4>**v1.13**:
* Sections 1 and 2 are now swapped for better export of *requirements.txt*.
* This version also includes a built-in version check and the version log that you're reading now.
#**Thank you for using Detectron2 2D!**
# (DEPRECATED) An Introduction to Inference in Pyro
## WARNING
***This tutorial has been deprecated*** in favor of the updated [Introduction to Pyro](https://pyro.ai/examples/intro_long.html). It may be removed in the future.
Much of modern machine learning can be cast as approximate inference and expressed succinctly in a language like Pyro. To motivate the rest of this tutorial, let's build a generative model for a simple physical problem so that we can use Pyro's inference machinery to solve it. However, we will first import the required modules for this tutorial:
```
import matplotlib.pyplot as plt
import numpy as np
import torch
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
pyro.set_rng_seed(101)
```
## A Simple Example
Suppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some prior knowledge about the object, like its density or material properties. The following model encodes this process:
$${\sf weight} \, | \, {\sf guess} \sim \cal {\sf Normal}({\sf guess}, 1) $$
$${\sf measurement} \, | \, {\sf guess}, {\sf weight} \sim {\sf Normal}({\sf weight}, 0.75)$$
Note that this is a model not only for our belief over weight, but also for the result of taking a measurement of it. The model corresponds to the following stochastic function:
```
def scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))
```
## Conditioning
The real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once and condition it on many different observations. Pyro supports constraining a model's internal `sample` statements to be equal to a given set of observations.
Consider `scale` once again. Suppose we want to sample from the distribution of `weight` given input `guess = 8.5`, but now we have observed that `measurement == 9.5`. That is, we wish to *infer* the distribution:
$$({\sf weight} \, | \, {\sf guess}, {\sf measurement} = 9.5) \sim \, ? $$
Pyro provides the function `pyro.condition` to allow us to constrain the values of sample statements. `pyro.condition` is a higher-order function that takes a model and a dictionary of observations and returns a new model that has the same input and output signatures but always uses the given values at observed `sample` statements:
```
conditioned_scale = pyro.condition(scale, data={"measurement": torch.tensor(9.5)})
```
Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's `lambda` or `def`:
```
def deferred_conditioned_scale(measurement, guess):
    return pyro.condition(scale, data={"measurement": measurement})(guess)
```
In some cases it might be more convenient to pass observations directly to individual `pyro.sample` statements instead of using `pyro.condition`. The optional `obs` keyword argument is reserved by `pyro.sample` for that purpose:
```
def scale_obs(guess):  # equivalent to conditioned_scale above
    weight = pyro.sample("weight", dist.Normal(guess, 1.))
    # here we condition on measurement == 9.5
    return pyro.sample("measurement", dist.Normal(weight, 0.75), obs=torch.tensor(9.5))
```
Finally, in addition to `pyro.condition` for incorporating observations, Pyro also contains `pyro.do`, an implementation of Pearl's `do`-operator used for causal inference with an identical interface to `pyro.condition`. `condition` and `do` can be mixed and composed freely, making Pyro a powerful tool for model-based causal inference.
## Flexible Approximate Inference With Guide Functions
Let's return to `conditioned_scale`. Now that we have conditioned on an observation of `measurement`, we can use Pyro's approximate inference algorithms to estimate the distribution over `weight` given `guess` and `measurement == data`.
Inference algorithms in Pyro, such as `pyro.infer.SVI`, allow us to use arbitrary stochastic functions, which we will call *guide functions* or *guides*, as approximate posterior distributions. Guide functions must satisfy these two criteria to be valid approximations for a particular model:
1. all unobserved (i.e., not conditioned) sample statements that appear in the model appear in the guide.
2. the guide has the same input signature as the model (i.e., takes the same arguments)
Guide functions can serve as programmable, data-dependent proposal distributions for importance sampling, rejection sampling, sequential Monte Carlo, MCMC, and independent Metropolis-Hastings, and as variational distributions or inference networks for stochastic variational inference. Currently, importance sampling, MCMC, and stochastic variational inference are implemented in Pyro, and we plan to add other algorithms in the future.
Although the precise meaning of the guide is different across different inference algorithms, the guide function should generally be chosen so that, in principle, it is flexible enough to closely approximate the distribution over all unobserved `sample` statements in the model.
In the case of `scale`, it turns out that the true posterior distribution over `weight` given `guess` and `measurement` is actually ${\sf Normal}(9.14, 0.6)$. As the model is quite simple, we are able to determine our posterior distribution of interest analytically (for derivation, see for example Section 3.4 of [these notes](http://www.stat.cmu.edu/~brian/463-663/week09/Chapter%2003.pdf)).
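For completeness, this is the standard conjugate-Gaussian update applied to our model (prior variance $1^2$, likelihood variance $0.75^2$, observation $9.5$):

$$\sigma_{\rm post}^2 = \left(\frac{1}{1^2} + \frac{1}{0.75^2}\right)^{-1} = \frac{0.75^2}{1 + 0.75^2} = 0.36, \qquad \sigma_{\rm post} = 0.6$$

$$\mu_{\rm post} = \sigma_{\rm post}^2\left(\frac{{\sf guess}}{1^2} + \frac{9.5}{0.75^2}\right) = \frac{0.75^2 \cdot {\sf guess} + 9.5}{1 + 0.75^2} \approx 9.14 \quad \text{for } {\sf guess} = 8.5$$

These are exactly the `loc` and `scale` used in `perfect_guide`.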
```
def perfect_guide(guess):
    loc = (0.75**2 * guess + 9.5) / (1 + 0.75**2)  # 9.14
    scale = np.sqrt(0.75**2 / (1 + 0.75**2))  # 0.6
    return pyro.sample("weight", dist.Normal(loc, scale))
```
## Parametrized Stochastic Functions and Variational Inference
Although we could write out the exact posterior distribution for `scale`, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions for which we can determine the true posterior exactly are the exception rather than the rule. For example, even a version of our `scale` example with a nonlinear function in the middle may be intractable:
```
def intractable_scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(some_nonlinear_function(weight), 0.75))
```
What we can do instead is use the top-level function `pyro.param` to specify a *family* of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called *variational inference*.
`pyro.param` is a frontend for Pyro's key-value *parameter store*, which is described in more detail in the documentation. Like `pyro.sample`, `pyro.param` is always called with a name as its first argument. The first time `pyro.param` is called with a particular name, it stores its argument in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. It is similar to `simple_param_store.setdefault` here, but with some additional tracking and management functionality.
```python
simple_param_store = {}
a = simple_param_store.setdefault("a", torch.randn(1))
```
For example, we can parametrize `a` and `b` in `scale_posterior_guide` instead of specifying them by hand:
```
def scale_parametrized_guide(guess):
    a = pyro.param("a", torch.tensor(guess))
    b = pyro.param("b", torch.tensor(1.))
    return pyro.sample("weight", dist.Normal(a, torch.abs(b)))
```
As an aside, note that in `scale_parametrized_guide`, we had to apply `torch.abs` to parameter `b` because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a [constraints module](https://pytorch.org/docs/master/distributions.html#module-torch.distributions.constraints) for enforcing such restrictions, and applying constraints to Pyro parameters is as easy as passing the relevant `constraint` object to `pyro.param`:
```
from torch.distributions import constraints
def scale_parametrized_guide_constrained(guess):
    a = pyro.param("a", torch.tensor(guess))
    b = pyro.param("b", torch.tensor(1.), constraint=constraints.positive)
    return pyro.sample("weight", dist.Normal(a, b))  # no more torch.abs
```
Pyro is built to enable *stochastic variational inference*, a powerful and widely applicable class of variational inference algorithms with three key characteristics:
1. Parameters are always real-valued tensors
2. We compute Monte Carlo estimates of a loss function from samples of execution histories of the model and guide
3. We use stochastic gradient descent to search for the optimal parameters.
Combining stochastic gradient descent with PyTorch's GPU-accelerated tensor math and automatic differentiation allows us to scale variational inference to very high-dimensional parameter spaces and massive datasets.
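The Monte Carlo loss estimate in point 2 can be illustrated without Pyro at all. The sketch below is a plain-NumPy stand-in (an assumption for illustration, not Pyro's implementation): it estimates the ELBO for the `scale` model by sampling from a Gaussian guide, with the guide parameters hardcoded to the known posterior values from above. Because this guide equals the exact posterior, the ELBO equals the log evidence $\log p({\sf measurement} = 9.5 \mid {\sf guess})$:

```python
import numpy as np

rng = np.random.default_rng(0)
guess, obs = 8.5, 9.5
a, b = 9.14, 0.6  # candidate guide parameters (here: the known posterior values)

def log_normal(x, loc, scale):
    # log-density of Normal(loc, scale) evaluated at x
    return -0.5 * ((x - loc) / scale) ** 2 - np.log(scale) - 0.5 * np.log(2 * np.pi)

w = rng.normal(a, b, size=100_000)                            # samples from the guide q(weight)
log_p = log_normal(w, guess, 1.0) + log_normal(obs, w, 0.75)  # log joint p(weight, measurement)
log_q = log_normal(w, a, b)                                   # log guide density
elbo = (log_p - log_q).mean()                                 # Monte Carlo ELBO estimate
print(elbo)  # ~ -1.46, the log evidence log p(measurement=9.5 | guess=8.5)
```

For any other choice of `a` and `b` the estimate would be strictly smaller, which is what SVI exploits when it maximizes the ELBO by gradient ascent.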
Pyro's SVI functionality is described in detail in the [SVI tutorial](svi_part_i.ipynb). Here is a very simple example applying it to `scale`:
```
guess = 8.5
pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
guide=scale_parametrized_guide,
optim=pyro.optim.Adam({"lr": 0.003}),
loss=pyro.infer.Trace_ELBO())
losses, a, b = [], [], []
num_steps = 2500
for t in range(num_steps):
    losses.append(svi.step(guess))
    a.append(pyro.param("a").item())
    b.append(pyro.param("b").item())
plt.plot(losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
print('a = ',pyro.param("a").item())
print('b = ', pyro.param("b").item())
plt.subplot(1,2,1)
plt.plot([0,num_steps],[9.14,9.14], 'k:')
plt.plot(a)
plt.ylabel('a')
plt.subplot(1,2,2)
plt.ylabel('b')
plt.plot([0,num_steps],[0.6,0.6], 'k:')
plt.plot(b)
plt.tight_layout()
```
**Note that SVI obtains parameters very close to the true parameters of the desired conditional distribution. This is to be expected as our guide is from the same family.**
Note that optimization will update the values of the guide parameters in the parameter store, so that once we find good parameter values, we can use samples from the guide as posterior samples for downstream tasks.
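As a minimal sketch of that downstream use, the snippet below stands in for sampling the fitted guide; the values `a = 9.14` and `b = 0.6` are hardcoded here for reproducibility (in practice you would draw samples by calling `scale_parametrized_guide` after optimization):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for repeatedly sampling the fitted guide Normal(a, |b|)
posterior_samples = rng.normal(loc=9.14, scale=0.6, size=100_000)
print(posterior_samples.mean())  # close to 9.14
print(posterior_samples.std())   # close to 0.6
```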
## Next Steps
In the [Variational Autoencoder tutorial](vae.ipynb), we'll see how models like `scale` can be augmented with deep neural networks and use stochastic variational inference to build a generative model of images.
```
from collections import Counter, defaultdict, deque
from copy import deepcopy
from operator import add
import pickle
import string
import pandas as pd
from sqlalchemy import create_engine
import matplotlib.patches as patches
import matplotlib.pyplot as plt
from pyspark import SparkConf
import numpy as np
import Levenshtein
import pyspark
from pyspark import Row
from pyspark.sql import SparkSession
import seaborn as sns
%matplotlib inline
sns.set(color_codes=True)
```
# Initialise sparkContext
```
spark = SparkSession.builder \
.master("local[28]") \
.appName("func-dedup") \
.config("spark.executor.memory", "20gb") \
.config("spark.driver.memory", "100G") \
.config("spark.driver.maxResultSize", "10G") \
.config("spark.local.dir", "/storage/egor/tmp/spark_tmp") \
.getOrCreate()
```
# Read datasets
```
data_loc_dedupl = "/storage/egor/tmp/1M_file_java_uast_extracted_func_dedupl"
rdd = (spark.read.parquet(data_loc_dedupl).rdd
.persist(pyspark.StorageLevel.DISK_ONLY))
print("Number of rows in dataset: {}".format(rdd.count()))
```
# Mapping of internal types to characters
* the small number of internal types allows us to replace each type with a single character
* Levenshtein works only with strings (standard diff lib gives wrong results in some cases)
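To make the idea concrete, here is a minimal, self-contained sketch (the type names and the `demo_type2char` mapping below are made up for illustration, and a plain Wagner-Fischer dynamic program stands in for the `Levenshtein` package used in this notebook):

```python
def edit_distance(a, b):
    # Wagner-Fischer dynamic program: number of single-character edits
    # (insertions, deletions, substitutions) turning string a into string b
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution / match
        prev = cur
    return prev[-1]

# hypothetical internal types mapped to single printable characters
demo_type2char = {"MethodDeclaration": "a", "Block": "b", "ReturnStatement": "c"}
seq1 = "".join(demo_type2char[t] for t in ["MethodDeclaration", "Block", "ReturnStatement"])
seq2 = "".join(demo_type2char[t] for t in ["MethodDeclaration", "Block"])
print(edit_distance(seq1, seq2))  # 1
```

Once each UAST is encoded as such a string, the distance between two functions is just the edit distance between their encodings.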
```
def uast2sequence(root):
    sequence = []
    nodes = defaultdict(deque)
    stack = [root]
    nodes[id(root)].extend(root.children)
    while stack:
        if nodes[id(stack[-1])]:
            child = nodes[id(stack[-1])].popleft()
            nodes[id(child)].extend(child.children)
            stack.append(child)
        else:
            sequence.append(stack.pop())
    return sequence

def internal_type(row):
    from bblfsh import Node
    parse_uast = Node.FromString
    uast = parse_uast(row.uast[0])
    return list(set(node.internal_type for node in uast2sequence(uast)))
internal_types = rdd.flatMap(internal_type).distinct().collect()
type2char = {int_type: char for int_type, char in zip(internal_types, string.printable)}
print("Number of internal types: {}".format(len(type2char)))
print(type2char)
def uast2chars(uast):
from bblfsh import Node
parse_uast = Node.FromString
    try:
        uast = parse_uast(uast)
    except Exception:
        # unparsable UAST: treat it as an empty sequence instead of crashing below
        return ""
return "".join([type2char[node.internal_type] for node in uast2sequence(uast)])
def is_content_bad(content):
    """
    Return True when the snippet is not a complete function definition
    (i.e., it does not end with a closing brace).
    """
    return not content.strip().endswith("}")
def filter_n_nodes(rdd, n_nodes_min, n_nodes_max, n_samples=5):
print("min {}, max {}".format(n_nodes_min, n_nodes_max))
print("-+=" * 20)
return (rdd.filter(lambda r: n_nodes_min < len(uast2chars(r.uast[0])) <= n_nodes_max)
.filter(lambda r: not is_content_bad(r.content)).take(n_samples))
# print sample functions for each node-count bucket of width 5, up to 115 nodes
for n_nodes_min in range(0, 115, 5):
    res = filter_n_nodes(rdd, n_nodes_min, n_nodes_min + 5, n_samples=5)
    for row in res:
        print(row.content)
        print("-+=" * 20)
```
# 70 years of machine learning in geoscience
Accompanies https://arxiv.org/abs/2006.13311
## Visualizations
Some programmatically generated figures for the book chapter.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
rb = ['#990000','#030f4f', ]
mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=rb)
mpl.rcParams.update({'font.size': 20})
```
### Activation Functions
```
def plot_act(funk, der=None, x_min=-7, x_max=7, y_min=-1, y_max=1, y_label="", z_label="", title="", filename=None):
x = np.linspace(x_min, x_max, 10000)
y = funk(x)
if der is not None:
z = der(x)
fig = plt.figure(figsize=(12,4))
ax = plt.gca()
ax.plot(x, y, label=y_label)
if der is not None:
ax.plot(x, z, label=z_label)
ax.grid(False)
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
plt.xlim(x_min, x_max)
plt.ylim(y_min*1.05, y_max*1.05)
if title:
plt.title(title)
if y_label or z_label:
plt.legend()
if filename is not None:
plt.savefig(filename, dpi=300)
plt.show()
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def dsigmoid(x):
return sigmoid(x) * (1-sigmoid(x))
def relu(x):
return x * (x > 0)
def drelu(x):
return x > 0
def perceptron(x):
return x > 0
def dperceptron(x):
return x*0
plot_act(sigmoid, der=dsigmoid, y_min=0, title="Sigmoid Activation", filename="act_sig.png")
plot_act(relu, der=drelu, y_min=0, y_max=7, title="ReLU Activation", filename="act_relu.png")
plot_act(perceptron, der=dperceptron, y_min=0, title="Binary Perceptron", filename="act_perc.png")
```
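As a quick sanity check on the derivatives plotted above, the analytic `dsigmoid` can be compared against a central finite difference; this is a standalone sketch that redefines the two functions so it runs on its own.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def dsigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))

# central finite difference (f(x+h) - f(x-h)) / 2h should match dsigmoid
x = np.linspace(-5, 5, 11)
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(np.max(np.abs(numeric - dsigmoid(x))))  # tiny: analytic and numeric agree
```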
### Radial Basis Function Kernel
```
class_0 = (0.5, 0.5)
class_1 = np.array([[0, 1, 0, 1], [0.5, 0, 1, 1]])
x_center = .44
y_center = .33
def rbf(x, lambd=.5, center=1):
    # the exponent must be negative so the kernel decays away from the center
    return np.exp(-np.power(lambd * (x - center), 2))
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ax[0].scatter(class_0[0], class_0[1], label="Class 0")
ax[0].scatter(class_1[0], class_1[1], label="Class 1")
ax[0].scatter(x_center, y_center, c="k", marker="x", label="RBF Center")
ax[0].set_title("Input Data")
ax[0].set_xlabel("Original X")
ax[0].set_ylabel("Original Y")
ax[0].set_xlim(-.05,1.05)
ax[0].set_ylim(-.05,1.05)
ax[0].legend()
ax[1].scatter(rbf(0.5, center=x_center), rbf(0.5, center=y_center), label="Class 0")
ax[1].scatter(rbf(class_1[0], center=x_center), rbf(class_1[1], center=y_center), label="Class 1")
ax[1].set_title("Transformed Data")
ax[1].set_xlabel("Transformed X")
ax[1].set_ylabel("Transformed Y")
ax[1].set_xlim(.85,1.05)
ax[1].set_ylim(.85,1.05)
plt.tight_layout()
plt.savefig("rbf-separation.png")
```
# Module 1 Required Coding Activity
Introduction to Python (Unit 2) Fundamentals
**This Activity is intended to be completed in the jupyter notebook, Required_Code_MOD1_IntroPy.ipynb, and then pasted into the assessment page on edX.**
This is an activity from the Jupyter Notebook **`Practice_MOD01_IntroPy.ipynb`** which you may have already completed.
| Important Assignment Requirements |
|:-------------------------------|
| **NOTE:** This program **requires** **`print`** output and using code syntax used in module 1 such as keywords **`for`**/**`in`** (iteration), **`input`**, **`if`**, **`else`**, **`.isalpha()`** method, **`.lower()`** or **`.upper()`** method |
## Program: Words after "G"/"g"
Create a program that inputs a phrase (like a famous quotation) and prints all of the words that start with h-z
Sample input:
`enter a 1 sentence quote, non-alpha separate words:` **`Wheresoever you go, go with all your heart`**
Sample output:
```
WHERESOEVER
YOU
WITH
YOUR
HEART
```

- split the words by building a placeholder variable: **`word`**
- loop each character in the input string
- check if character is a letter
- add a letter to **`word`** each loop until a non-alpha char is encountered
- **if** character is alpha
- add character to **`word`**
- non-alpha detected (space, punctuation, digit,...) defines the end of a word and goes to **`else`**
- **`else`**
- check **`if`** word is greater than "g" alphabetically
- print word
- set word = empty string
- or **else**
- set word = empty string and build the next word
Hint: use `.lower()`.
Consider how you will print the last word if the phrase doesn't end with a non-alpha character such as a space or punctuation.
```
# [] create words after "G" following the Assignment requirements: use of functions, methods and keywords
# sample quote "Wheresoever you go, go with all your heart" ~ Confucius (551 BC - 479 BC)
# [] copy and paste in edX assignment page
phrase = input("enter any famous quotation: ")
#phrase = "Wheresoever you go, go with all your heart"
mod_phrase = phrase + ' ' # kludge for printing last word
word = ''
for ltr in mod_phrase:
if ltr.isalpha():
word += ltr.lower()
else:
if word >= 'h':
print(word.upper())
word = ''
else:
word = ''
```
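For comparison only (the assignment requires the character-loop approach above, not this), the same result can be reached with a regular expression that extracts the alphabetic words directly; the sample quote is hard-coded here so the sketch runs without input.

```python
import re

phrase = "Wheresoever you go, go with all your heart"
# keep only words that sort after "g" alphabetically (i.e., start with h-z)
words = [w for w in re.findall(r"[A-Za-z]+", phrase) if w.lower() >= "h"]
for w in words:
    print(w.upper())
```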
# Important: [How to submit code by pasting](https://courses.edx.org/courses/course-v1:Microsoft+DEV274x+2T2017/wiki/Microsoft.DEV274x.2T2017/paste-code-end-module-coding-assignments/)
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
# Gradient Descent Derivation
We will derive the gradient descent optimization algorithm from scratch.
### Computing $\frac{\partial}{\partial w_0} L(w)$
\begin{align}
\frac{\partial}{\partial w_0} L(w)
&= \frac{\partial}{\partial w_0} \frac{1}{n} \sum_{i=1}^n\left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2 \\
&= \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial w_0} \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2 \quad \\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\frac{\partial}{\partial w_0} \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right) \quad\textit{# Chain Rule}\\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \frac{\partial}{\partial w_0}\sin\left(w_0 x_i + w_1 \right)\right) \\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \cos\left(w_0 x_i + w_1 \right) \frac{\partial}{\partial w_0}\left(w_0 x_i + w_1 \right) \right) \quad\textit{# Chain Rule}\\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \cos\left(w_0 x_i + w_1 \right) \left(x_i + 0 \right) \right) \\
&= -\frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\cos\left(w_0 x_i + w_1 \right) x_i \quad\textit{# Simplified}
\end{align}
### Computing $\frac{\partial}{\partial w_1} L(w)$
\begin{align}
\frac{\partial}{\partial w_1} L(w)
&= \frac{\partial}{\partial w_1} \frac{1}{n} \sum_{i=1}^n\left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2 \\
&= \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial w_1} \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2 \\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\frac{\partial}{\partial w_1} \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right) \quad\textit{# Chain Rule}\\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \frac{\partial}{\partial w_1}\sin\left(w_0 x_i + w_1 \right)\right) \\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \cos\left(w_0 x_i + w_1 \right) \frac{\partial}{\partial w_1}\left(w_0 x_i + w_1 \right) \right) \quad\textit{# Chain Rule}\\
&= \frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\left(0 - \cos\left(w_0 x_i + w_1 \right) \left(0 + 1\right) \right) \\
&= -\frac{1}{n} \sum_{i=1}^n 2 \left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)
\cos\left(w_0 x_i + w_1 \right) \quad\textit{# Simplified}
\end{align}
Finally, we get
$$
L(w) = \frac{1}{n} \sum_{i=1}^n\left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2
$$
$$
\nabla_w L(w) = \left[ \frac{\partial}{\partial w_0} L(w), \frac{\partial}{\partial w_1} L(w) \right]
$$
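A quick numerical sanity check of the derivation (a sketch on synthetic data, not the notebook's dataset): the analytic gradient should agree with central finite differences of the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 50)
y = np.sin(2.2 * x + 1.8) + 0.1 * rng.standard_normal(50)

def loss(w):
    return np.mean((y - np.sin(w[0] * x + w[1])) ** 2)

def grad(w):
    # analytic gradient from the derivation above
    r = y - np.sin(w[0] * x + w[1])
    c = np.cos(w[0] * x + w[1])
    return np.array([-np.mean(2 * r * c * x), -np.mean(2 * r * c)])

w = np.array([1.0, 1.0])
h = 1e-6
fd = np.array([(loss(w + h * e) - loss(w - h * e)) / (2 * h) for e in np.eye(2)])
print(np.max(np.abs(fd - grad(w))))  # near machine precision
```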
```
import plotly
import plotly.offline as py
import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
import numpy as np
import pandas as pd
def load_data():
np.random.seed(42)
n = 50
x = np.sort(np.random.rand(n)*2.5 * np.pi)
x = x - x.mean()/x.std()
y = np.sin(2.2 * x + 1.8) + 0.2 * np.random.randn(n)
return (x, y)
(x, y) = load_data()
print("Number of data points: ", x.shape[0])
fig = go.Figure()
fig.add_trace(go.Scatter(x=x, y=y, mode='markers', name='markers'))
fig.update_layout(title='Scatter')
html = plotly.offline.plot(fig, filename='3d-scatter-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
```
The data is noisy with high variance, but it clearly resembles a sine function.
```
def model(w, x):
return np.sin(w[0] * x + w[1])
xtest = np.linspace(x.min()-0.1, x.max()+0.1, 100)
yhat1 = model([1, 1], xtest)
yhat2 = model([2, 1], xtest)
yhat3 = model([1, 2], xtest)
fig = px.scatter(x=x, y=y)
fig.add_trace(go.Scatter(x=xtest, y=yhat1, name="$f_{[1,1]}(x) = \sin(x+1)$"))
fig.add_trace(go.Scatter(x=xtest, y=yhat2, name="$f_{[2,1]}(x) = \sin(2x + 1)$"))
fig.add_trace(go.Scatter(x=xtest, y=yhat3, name="$f_{[1,2]}(x) = \sin(x+2)$"))
fig.update_layout(title='Scatter Sine Plot')
html = plotly.offline.plot(fig, filename='3d-scattersine-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
```
The sine function with these guessed parameters does not fit the data at all. To find parameters that fit better, we need a loss function; we use the average squared loss.
$$
L(w) = L\left(f_w, \mathcal{D} = \left\{(x_i, y_i)\right\}_{i=1}^n\right) = \frac{1}{n} \sum_{i=1}^n\left(y_i - f_w\left(x_i\right)\right)^2
= \frac{1}{n} \sum_{i=1}^n\left(y_i - \sin\left(w_0 x_i + w_1 \right)\right)^2
$$
```
def avg_sq_loss(w):
return np.mean((y - model(w, x))**2)
print(avg_sq_loss([1,1]))
fig = px.bar(x=["$w=[1,1]$", "$w=[2,1]$", "$w=[1,2]$"],
y=[avg_sq_loss([1,1]), avg_sq_loss([2,1]), avg_sq_loss([1,2])])
fig.update_yaxes(title="Average Squared Loss")
html = plotly.offline.plot(fig, filename='3d-scatter-avgsqloss-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
loss_trials = []
w0values = np.linspace(0, 3, 30)
w1values = np.linspace(0, 3, 30)
for w0 in w0values:
for w1 in w1values:
loss_trials.append([w0, w1, avg_sq_loss([w0, w1])])
loss_df = pd.DataFrame(loss_trials, columns=["w0", "w1", "loss"])
loss_df
loss_df.sort_values("loss").head()
# We wish to visualise the best brute force model
w_best_brute = loss_df.loc[loss_df["loss"].idxmin(), ["w0", "w1"]].to_numpy()
print(w_best_brute)
yhat_brute = model(w_best_brute, xtest)
fig = px.scatter(x=x, y=y)
fig.add_trace(go.Scatter(x=xtest, y=yhat_brute, name="Brute Force Approach"))
html = plotly.offline.plot(fig, filename='3d-scatter-avgxysqloss-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
```
We will try to get a better fit using gradient descent
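The update rule we will iterate is $w \leftarrow w - \eta\,\nabla_w L(w)$. As a tiny self-contained sketch of that rule on a separable quadratic (hypothetical, not the sine model):

```python
import numpy as np

def grad_quadratic(w):
    # gradient of L(w) = (w0 - 3)^2 + (w1 + 1)^2, minimised at [3, -1]
    return np.array([2 * (w[0] - 3), 2 * (w[1] + 1)])

w = np.zeros(2)
lr = 0.1
for _ in range(100):
    w = w - lr * grad_quadratic(w)  # gradient descent update
print(w)  # converges toward the minimiser [3, -1]
```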
```
fig = go.Figure()
fig.add_trace(go.Surface(x=w0values, y=w1values,
z=loss_df["loss"].to_numpy().reshape((len(w0values), len(w1values)))))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0), height=600)
html = plotly.offline.plot(fig, filename='3d-scatter-avgxysqloss-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
fig = go.Figure()
fig.add_trace(go.Contour(x=loss_df["w0"], y=loss_df["w1"], z=loss_df["loss"]))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0),
height=500, width=1000)
html = plotly.offline.plot(fig, filename='3dcontours-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
def gradient(w):
    g0 = -np.mean(2 * (y - np.sin(w[0] * x + w[1])) * np.cos(w[0] * x + w[1]) * x)
g1 = -np.mean(2 * (y - np.sin(w[0] * x + w[1]))*np.cos(w[0]*x + w[1]))
return np.array([g0, g1])
gradient([1., 1.])
loss_grad_df = loss_df.join(loss_df[['w0', 'w1']]
.apply(lambda w: gradient(w), axis=1, result_type="expand")
.rename(columns={0:"g0", 1:"g1"}))
loss_grad_df
fig = go.Figure()
fig = ff.create_quiver(x=loss_grad_df['w0'], y=loss_grad_df['w1'],
u=loss_grad_df['g0'], v=loss_grad_df['g1'],
line_width=2, line_color="white",
scale = 0.1, arrow_scale=.2)
fig.add_trace(go.Contour(x=loss_grad_df['w0'], y=loss_grad_df['w1'], z=loss_grad_df['loss']))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0),
height=600, width=800)
fig.update_layout(xaxis_range=[w0values.min(), w0values.max()])
fig.update_layout(yaxis_range=[w1values.min(), w1values.max()])
html = plotly.offline.plot(fig, filename='3dplotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
def gradient_descent(w_0, lr = lambda t: 1./(t+1.), nepochs=10):
w = w_0.copy()
values = [w]
for t in range(nepochs):
w = w - lr(t) * gradient(w)
values.append(w)
return np.array(values)
values = gradient_descent(np.array([3.0, 0.0]),
nepochs=100,
lr =lambda t: 1./np.sqrt(t+1.))
fig = go.Figure()
fig.add_trace(go.Contour(x=loss_grad_df['w0'], y=loss_grad_df['w1'], z=loss_grad_df['loss']))
fig.add_trace(go.Scatter(x=values[:,0], y=values[:,1], name="Path", mode="markers+lines",
line=go.scatter.Line(color='white')))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0),
height=600, width=800)
fig.update_layout(xaxis_range=[w0values.min(), w0values.max()])
fig.update_layout(yaxis_range=[w1values.min(), w1values.max()])
html = plotly.offline.plot(fig, filename='3dplotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
fig = go.Figure()
fig.add_trace(
go.Surface(x=w0values, y=w1values,
z=loss_df["loss"].to_numpy().reshape((len(w0values), len(w1values)))))
fig.add_trace(
go.Scatter3d(x=values[:,1], y=values[:,0], z=[avg_sq_loss(w) for w in values],
line=dict(color='white')))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0),
height=600)
html = plotly.offline.plot(fig, filename='3dplotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
import noise
import numpy as np
import matplotlib
from mpl_toolkits.mplot3d import axes3d
shape = (50, 50)
scale = 100.0
octaves = 6
persistence = 0.5
lacunarity = 2.0
world = np.zeros(shape)
for i in range(shape[0]):
for j in range(shape[1]):
world[i][j] = noise.pnoise2(i/scale,
j/scale,
octaves=octaves,
persistence=persistence,
lacunarity=lacunarity,
repeatx=1024,
repeaty=1024,
base=42)
import matplotlib.pyplot as plt
plt.imshow(world, cmap='terrain')
lin_x = np.linspace(0,1,shape[0],endpoint=False)
lin_y = np.linspace(0,1,shape[1],endpoint=False)
x, y = np.meshgrid(lin_x, lin_y)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, world, cmap="terrain")
terrain_cmap = matplotlib.cm.get_cmap('terrain')
def plt_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries - 1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
terrain = plt_to_plotly(terrain_cmap, 255)
import plotly
import plotly.graph_objects as go
plotly.offline.init_notebook_mode(connected=True)
fig = go.Figure(data=[go.Surface(colorscale=terrain,z=world)])
fig.update_layout(title='Random 3D Terrain')
html = plotly.offline.plot(fig, filename='3d-terrain-plotly.html',include_plotlyjs='cdn')
from IPython.core.display import HTML
HTML(html)
```
Let us visualise a simple discrete gradient descent over the 3D terrain.
```
from IPython.core.display import HTML
import plotly
import plotly.graph_objects as go
import noise
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
%matplotlib inline
z = world
plt.imshow(z, origin='lower', cmap='terrain')
# Find maximum value index in numpy array
indices = np.where(z == z.max())
max_z_x_location, max_z_y_location = (indices[1][0], indices[0][0])
plt.plot(max_z_x_location, max_z_y_location, 'ro', markersize=15)
# Find minimum value index in numpy array
indices = np.where(z == z.min())
min_z_x_location, min_z_y_location = (indices[1][0], indices[0][0])
plt.plot(min_z_x_location, min_z_y_location, 'yo', markersize=15)
import numpy as np
from numpy.lib.stride_tricks import as_strided
def sliding_window(arr, window_size):
"""
Slide Window view of array
"""
arr = np.asarray(arr)
window_size = int(window_size)
if arr.ndim!=2:
raise ValueError("Need 2D input")
if not(window_size>0):
raise ValueError("Need positive window size")
shape = (arr.shape[0] - window_size + 1,
arr.shape[1] - window_size + 1,
window_size, window_size)
if shape[0] <= 0:
shape = (1, shape[1], arr.shape[0], shape[3])
if shape[1] <= 0:
shape = (shape[0], 1, shape[2], arr.shape[1])
strides = (arr.shape[1]*arr.itemsize, arr.itemsize,
arr.shape[1]*arr.itemsize, arr.itemsize)
return as_strided(arr, shape=shape, strides=strides)
def cell_neighbours(arr, i, j, d):
"""Return d-th neighbors of cell (i, j)"""
w = sliding_window(arr, 2*d+1)
ix = np.clip(i - d, 0, w.shape[0]-1)
jx = np.clip(j - d, 0, w.shape[1]-1)
i0 = max(0, i - d - ix)
j0 = max(0, j - d - jx)
i1 = w.shape[2] - max(0, d - i + ix)
j1 = w.shape[3] - max(0, d - j + jx)
return w[ix, jx][i0:i1,j0:j1].ravel()
from dataclasses import dataclass
@dataclass
class descent_step:
value: float
x_index: float
y_index: float
def gradient_descent_3D(array, x_start, y_start, steps=50, step_size=1, plot=False):
step = descent_step(array[y_start][x_start], x_start, y_start)
step_history = []
step_history.append(step)
# 2D Plotting of array with starting point as red marker
if plot:
plt.imshow(array, origin='lower', cmap='terrain')
plt.plot(x_start, y_start, 'ro')
current_x = x_start
current_y = y_start
for i in range(steps):
prev_x = current_x
prev_y = current_y
neighbours = cell_neighbours(array, current_x, current_y, step_size)
next_step = neighbours.min()
indices = np.where(array == next_step)
current_x, current_y = (indices[1][0], indices[0][0])
step = descent_step(array[current_y][current_x], current_x, current_y)
step_history.append(step)
if plot:
plt.plot([prev_x, current_x], [prev_y, current_y], 'k-')
plt.plot(current_x, current_y, 'ro')
if prev_y == current_y and prev_x == current_x:
print(f"Converged in {i} steps")
break
return next_step, step_history
np.random.seed(42)
global_minimum = z.min()
indices = np.where(z == global_minimum)
print(f"Target: {global_minimum} @ {indices}")
step_size = 0
found_minimum = 99999
# Random starting point
start_x = np.random.randint(0,50)
start_y = np.random.randint(0,50)
# Increase step size until convergence on global minimum
while found_minimum != global_minimum:
step_size += 1
found_minimum,steps = gradient_descent_3D(z,start_x,start_y,step_size=step_size,plot=False)
print(f"Optimal step size {step_size}")
found_minimum,steps = gradient_descent_3D(z,start_x,start_y,step_size=step_size,plot=True)
print(f"Steps: {steps}")
def multiDimenDist(point1, point2):
    # find the per-dimension difference between the two points
    deltaVals = [point2[dimension] - point1[dimension] for dimension in range(len(point1))]
    runningSquared = 0
    # the Pythagorean theorem holds in any dimension, so sum the squared differences
    for coOrd in deltaVals:
        runningSquared += coOrd**2
    return runningSquared**(1/2)
def findVec(point1, point2, unitSphere=False):
    # setting unitSphere to True scales the vector to unit length instead of its original length
finalVector = [0 for coOrd in point1]
for dimension, coOrd in enumerate(point1):
#finding total difference for that co-ordinate(x,y,z...)
deltaCoOrd = point2[dimension]-coOrd
#adding total difference
finalVector[dimension] = deltaCoOrd
if unitSphere:
totalDist = multiDimenDist(point1,point2)
unitVector =[]
for dimen in finalVector:
unitVector.append( dimen/totalDist)
return unitVector
else:
return finalVector
def generate_3d_plot(step_history):
# Initialise empty lists for markers
step_markers_x = []
step_markers_y = []
step_markers_z = []
step_markers_u = []
step_markers_v = []
step_markers_w = []
for index, step in enumerate(step_history):
step_markers_x.append(step.x_index)
step_markers_y.append(step.y_index)
step_markers_z.append(step.value)
        if index < len(step_history)-1:
            vec1 = [step.x_index, step.y_index, step.value]
            vec2 = [step_history[index+1].x_index, step_history[index+1].y_index, step_history[index+1].value]
result_vector = findVec(vec1,vec2)
step_markers_u.append(result_vector[0])
step_markers_v.append(result_vector[1])
step_markers_w.append(result_vector[2])
else:
step_markers_u.append(0.1)
step_markers_v.append(0.1)
step_markers_w.append(0.1)
fig = go.Figure(data=[
go.Cone(
x=step_markers_x,
y=step_markers_y,
z=step_markers_z,
u=step_markers_u,
v=step_markers_v,
w=step_markers_w,
sizemode="absolute",
sizeref=2,
anchor='tail'),
go.Scatter3d(
x=step_markers_x,
y=step_markers_y,
z=step_markers_z,
mode='lines',
line=dict(
color='red',
width=2
)),
go.Surface(colorscale=terrain,z=world,opacity=0.5)])
fig.update_layout(
title='Gradient Descent Steps',
scene = dict(zaxis = dict(range=[world.min(),world.max()],),),)
return fig
# Generate 3D plot from previous random starting location
fig = generate_3d_plot(steps)
HTML(plotly.offline.plot(fig, filename='random_starting_point_3d_gradient_descent.html',include_plotlyjs='cdn'))
found_minimum,steps = gradient_descent_3D(z,max_z_x_location,max_z_y_location,step_size=5,plot=True)
fig = generate_3d_plot(steps)
HTML(plotly.offline.plot(fig, filename='maximum_starting_point_step_size_5_3d_gradient_descent.html',include_plotlyjs='cdn'))
```
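The hand-rolled `as_strided` window code above can be cross-checked against NumPy's built-in `sliding_window_view` (available in NumPy ≥ 1.20); a minimal standalone sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

arr = np.arange(16).reshape(4, 4)
# all 2x2 windows of a 4x4 array -> shape (3, 3, 2, 2)
windows = sliding_window_view(arr, (2, 2))
print(windows.shape)
print(windows[0, 0])  # top-left window: [[0, 1], [4, 5]]
```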
```
## . . Import required utilities
import numpy as np
import matplotlib.pyplot as plt
import math as m
from IPython.display import HTML,Image
import scipy.signal
## . . Import all of the libraries required for this Notebook
import matplotlib.image as mpimg
import scipy.ndimage as ndimage
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this Jupyter notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
''')
```

# Module 7: Convolution, Correlation and Linear Time-Invariant (LTI) Systems
[Linear time-invariant](https://en.wikipedia.org/wiki/Linear_time-invariant_theory) (LTI) systems are important in many branches of science and engineering (including geophysics!) because they can be completely characterized by the [impulse response](https://en.wikipedia.org/wiki/Impulse_response), which is how a system responds to an impulsive $\delta(t)$ function input. One example of a linear system with an impulse response would be if you gave a small impulsive push to a friend sitting on a swing. What would the response of the system be? Damped oscillation back and forth until eventually coming to rest (where the duration of the period depends on the swing length and the overall amplitude decay depends on the swing's effective damping constant). This time series effectively represents the impulse response of the swing system.
Examples of LTI systems include:
* Spring-mass systems
* Electrical circuits
* 1D Seismic/GPR convolutional modeling
In fact, [convolution](https://en.wikipedia.org/wiki/Convolution) and its counterpart [correlation](https://en.wikipedia.org/wiki/Cross-correlation) are two LTI operations that represent two key concepts that you will learn and apply in this course. Thus, it is worth spending some time on developing your intuition of what this is. Let us first discuss the geophysical example of convolutional modeling that is one way to think about forward modeling both seismic and GPR data.
# Convolutional Modeling of Seismic/GPR Data
The Earth's P-wave acoustic impedance ($I_p=\rho V_p $) is commonly represented in terms of 'reflectivity' $\left(\mathrm{i.e.},\,\,r(t)\approx \frac{V_p}{2}\frac{d }{dz}\mathrm{ln}(\rho V_p)\right)$. One straightforward approach to modeling a 1D (or higher dimension) seismic section is to **convolve** a **seismic wavelet**, $w(t)$, with an **Earth reflectivity sequence**, $r(t)$. The **output** of this operation, $s(t)$, is approximately the **seismic data trace** you would record between a co-located source and receiver pair.
Mathematically, this operation is denoted as:
$$ s(t) = w(t)\ast r(t)+n(t), \tag{1}$$
where $\ast$ represents the convolution operation here between $w(t)$ and $r(t)$, and $n(t)$ is some additive noise. Figure 1 illustrates the seismic convolutional modeling approach.
<img src="Fig/2.3 Convolutional_model.png" width="650">
**Figure 1. Illustration of the convolutional seismic modeling as a linear time-invariant (LTI) system. The leftmost figure represents a 1D geological profile, while the second panel shows the locations of the discontinuities (i.e., the reflector locations) where the magnitude of the discontinuity is proportional to the impedance contrast. The third and fourth panels show the convolution of the reflectivity sequence with seismic wavelets of the North American and European conventions, respectively.**
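Equation 1 can be sketched numerically with `np.convolve`; the reflectivity sequence and the Ricker-style wavelet below are toy illustrations (the peak frequency and reflector positions are arbitrary choices, not field data):

```python
import numpy as np

# toy reflectivity sequence r(t): three reflectors of differing impedance contrast
r = np.zeros(100)
r[[20, 50, 80]] = [1.0, -0.5, 0.8]

# simple zero-phase Ricker wavelet w(t); 25 Hz is an illustrative choice
t = np.linspace(-0.1, 0.1, 41)
f = 25.0
w = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

# s(t) = w(t) * r(t) + n(t)
noise = 0.01 * np.random.default_rng(0).standard_normal(100)
s = np.convolve(r, w, mode="same") + noise
print(s.shape)  # one modelled trace, same length as the reflectivity
```

Each wavelet in the output trace sits at a reflector location, scaled by that reflector's contrast.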
# Two Key Properties of LTI systems
As per the name, a LTI system (here defined by $H[\cdot]$) has two key necessary ingredients:
* [Linearity](https://en.wikipedia.org/wiki/Linear_system): if $[x_1(t),y_1(t)]$ and $[x_2(t),y_2(t)]$ are two input-output pairs (i.e., $y_1(t) = H[x_1(t)]$ and $y_2(t) = H[x_2(t)]$ for some LTI operator $H$), then:
$$ax_1(t)+bx_2(t) \rightarrow ay_1(t)+by_2(t). \tag{2} $$
This principle simply states that you may superimpose two different input signals and the resulting output signal will be the same superposition as if you treated them separately and then stacked the outputs. In the convolutional modeling example above, this would mean that you could: (1) convolve each component of the reflection sequence with the wavelet individually and then stack the result; or (2) convolve the wavelet with the entire reflectivity sequence all at once. Following either scenario will generate the same result.
* [Time-invariance](https://en.wikipedia.org/wiki/Time-invariant_system): If $x(t)\rightarrow y(t)$ then
$$ x(t-t_0) \rightarrow y(t-t_0). \tag{3}$$
This implies that delaying or advancing the operation by a constant $t_0$ has no effect on the resulting output other than the same delay. (If you were to push your friend on the swing today or tomorrow in an equal manner, would you expect there to be any difference in the resulting motion?) In the seismic convolutional modeling example above, this means that if you shifted the input reflectivity sequence $r(t)$ to $r(t-t_0)$, you would expect the output seismic trace to be similarly shifted from $y(t)$ to $y(t-t_0)$.
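Both properties can be checked directly for discrete convolution, with `np.convolve` playing the role of $H$ (a sketch on random signals):

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
h = rng.standard_normal(16)            # arbitrary impulse response
a, b = 2.0, -3.0

# linearity: H[a x1 + b x2] == a H[x1] + b H[x2]
lhs = np.convolve(a * x1 + b * x2, h)
rhs = a * np.convolve(x1, h) + b * np.convolve(x2, h)
print(np.allclose(lhs, rhs))           # True

# time-invariance: delaying the input by d samples delays the output by d
d = 7
x1_delayed = np.concatenate([np.zeros(d), x1])
y_delayed = np.convolve(x1_delayed, h)
print(np.allclose(y_delayed[d:], np.convolve(x1, h)))  # True
```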
# Convolution (Integral)
One of the most important aspects of an LTI system is that the input-output relationship is characterized by a [convolution](https://en.wikipedia.org/wiki/Convolution) operation alluded to - but not explicitly defined - above. The convolution integral is written in the following way:
$$ y(t) = x(t) \ast h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\, \mathrm{d}\tau, \tag{4} $$
where $x(t), y(t)$ and $h(t)$ are the input, output and the response of the LTI system. In equation 4 we see that a dummy integration variable $\tau$ has been introduced. You'll also notice that one function, $h(t-\tau)$, is **time reversed** relative to the other variable.
Here is a short pseudo-code to think about the mathematical operation in equation 4:
For each $t$ in the output space:
For each $\tau$ in the input space:
Multiply $x(\tau)$ with time-reversed $h(t-\tau)$
Accumulate result into output $y(t)$
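The pseudo-code above translates almost line-for-line into a direct discrete convolution; `np.convolve` serves as the reference implementation in this sketch:

```python
import numpy as np

def convolve_direct(x, h):
    """Discrete convolution: y[t] = sum_tau x[tau] * h[t - tau]."""
    y = np.zeros(len(x) + len(h) - 1)
    for t in range(len(y)):
        for tau in range(len(x)):
            if 0 <= t - tau < len(h):     # h is time-reversed relative to x
                y[t] += x[tau] * h[t - tau]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.0, 1.0, 0.5])
print(convolve_direct(x, h))  # [0.  1.  2.5 4.  1.5]
print(np.convolve(x, h))      # identical
```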
Let's now look at a couple of examples.
## Evaluating the convolution integral as a movie
It is an instructive exercise to plot $x(t)$ and a time-reversed $h(t)$ on two different pieces of paper. Initially line them up with no shift applied (i.e., $t=0$). Mentally multiply the two functions together and sum the result (i.e., the total area under the curve) to get your output $y(0)$. You can then shift one paper by some amount $t$ and repeat this exercise. Eventually, if you do this for all values of $t$, you build up the convolution result $y(t)$.
Let's look at a few movie examples that illustrate this procedure. First, let's examine the **autoconvolution** of the boxcar with itself (i.e., where $f(\tau)=g(\tau)=\Pi_1(\tau))$. Note that the movie frames are the equivalent of different values of $t$ in equation 4.
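In the discrete setting the same fact is one line: convolving a boxcar with itself yields a triangle, mirroring the animation below (a sketch):

```python
import numpy as np

box = np.ones(5)             # discrete boxcar
tri = np.convolve(box, box)  # autoconvolution
print(tri)  # [1. 2. 3. 4. 5. 4. 3. 2. 1.] -- a triangle
```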
```
Image(filename="Fig/Convolution_of_box_signal_with_itself2.gif.png")
```
**Figure 2. Illustration of (auto)convolution. The blue curve is the stationary boxcar $f(\tau)$ while red curve, $g(t-\tau)$, is the time-reversed signal. For each shift in the convolution (corresponding to the variable $t$), one must multiply the blue and red curves together to get the yellow area, which represents the convolution value at that time shift value $t$. Where the blue and red signals perfectly overlap is when $t=0$; when the red boxcar signal is to the left or right of the blue boxcar, this represents negative and positive values of $t$. The overall value of $(f\ast g)(t) = f(t)\ast g(t)$ is shown by the black line shaped as a triangle. Note that the value of the yellow area at any particular time shift is the convolution result $(f\ast g)(t) = f(t)\ast g(t)$.**
Let's now look at another example of a boxcar $g(t)$ convolved with a spiky decaying exponential function $f(t)$. Again, the frames of the movie are playing over the variable $t$.
```
Image(filename="Fig/Convolution_of_spiky_function_with_box2.gif.png")
```
**Figure 3. As in Figure 2 but with a one-sided function.**
## Example 1a - Spike-Gaussian Convolution
Let's look at the (numerical) convolution of a single input delta-function spike $f(t)$ and an assumed Gaussian impulse response $g(t)$ of the system, which together generate our output $f(t)\ast g(t)$. We'll repeat this example four times, applying a different delay to the input spike each time.
```
def plotfuncs(t,x,h,n,title):
plt.subplot(4,2,n)
plt.plot(t,x,'r',t,h,'b',t,h[::-1],'g')
plt.xlabel('Samples (time)')
plt.ylabel('Amplitude')
plt.title(title+r' Signals $f(\tau)$, $g(\tau)$ and $g(0-\tau)$')
plt.legend([r'$f(\tau)$',r'$g(\tau)$',r'$g(0-\tau)$'])
plt.axis([0,200,-1.1,1.1])
def plotconvo(t,y,n,title):
plt.subplot(4,2,n)
plt.plot(t,y,'k')
plt.xlabel('Samples (time)')
plt.ylabel('Amplitude')
plt.title(title+r' Convolution of Signals $f(\tau)$ and $g(t-\tau)$')
plt.axis([-100,100,-2.1,2.1])
# . . Simple convolutional model demonstration
nt=201
t = np.arange(0,1,1/nt)
xline = np.arange(0,nt,1)
tauline = np.arange(-nt/2,nt/2,1)
# . . Input signals (shifted in time with respect to each other)
f0 = np.zeros(nt); f0[25]=1
f1 = np.zeros(nt); f1[50]=1
f2 = np.zeros(nt); f2[75]=1
f3 = np.zeros(nt); f3[100]=1
f0[ 25: 55]=np.arange(0,30,1)*0.01
f1[ 50: 80]=np.arange(0,30,1)*0.01
f2[ 75:105]=np.arange(0,30,1)*0.01
f3[100:130]=np.arange(0,30,1)*0.01
# . . Impulse response (stationary for the four examples)
g = np.zeros(nt)
g = np.exp(-(t-0.745)**2/(2*0.02**2))
# . . Plot
plt.figure(figsize=(12, 12))
plotfuncs(xline,f0,g,1,'(a)')
plotconvo(tauline,np.convolve(f0,g,mode='same'),2,'(b)')
plotfuncs(xline,f1,g,3,'(c)')
plotconvo(tauline,np.convolve(f1,g,mode='same'),4,'(d)')
plotfuncs(xline,f2,g,5,'(e)')
plotconvo(tauline,np.convolve(f2,g,mode='same'),6,'(f)')
plotfuncs(xline,f3,g,7,'(g)')
plotconvo(tauline,np.convolve(f3,g,mode='same'),8,'(h)')
plt.tight_layout()
plt.show()
```
**Figure 4. Examples of convolving a spike with a Gaussian function. The left panels show the spike function $f(\tau)$ in red along with the Gaussian function in blue and its time-reversed version in green. The right panels show the result of convolving the two functions. Note that if the red curve is left of the green curve, the convolution result appears at negative times; the reverse is true if the red curve is to the right of the green curve.**
Let's now look at the convolution of a Gaussian with a sequence of pulses that are spread out like an earth reflectivity model described above:
```
# . . Add more spikes to each
f0 = np.zeros(nt); f0[11]=1;
f1 = np.zeros(nt); f1[11]=1; f1[31]=-1;
f2 = np.zeros(nt); f2[11]=1; f2[31]=-1; f2[51]=0.5;
f3 = np.zeros(nt); f3[11]=1; f3[16]=-1; f3[21]=0.5; f3[26]=-0.5;  # four closely spaced spikes
# . . Plot
plt.figure(figsize=(12, 12))
plotfuncs(xline,f0,g,1,'(a)')
plotconvo(tauline,np.convolve(f0,g,mode='same'),2,'(b)')
plotfuncs(xline,f1,g,3,'(c)')
plotconvo(tauline,np.convolve(f1,g,mode='same'),4,'(d)')
plotfuncs(xline,f2,g,5,'(e)')
plotconvo(tauline,np.convolve(f2,g,mode='same'),6,'(f)')
plotfuncs(xline,f3,g,7,'(g)')
plotconvo(tauline,np.convolve(f3,g,mode='same'),8,'(h)')
plt.tight_layout()
plt.show()
```
**Figure 5. Similar to Figure 4, but now where I have introduced a number of spikes with different spacings. Note that the convolutions in (b), (d) and (f) show one, two, and three spikes as expected from the number of red spikes in (a), (c) and (e), respectively. However, the convolution in (h) only shows three "bumps" where one should expect four given the number of spikes. This is because the convolved Gaussian functions are all interfering with each other - both constructively and destructively.**
# Convolution in the Fourier Domain
An important property is that **convolution** in the time domain corresponds to **multiplication** in the Fourier transform domain:
$$y(t) = x(t) \ast h(t) \Leftrightarrow \widehat{Y}(\omega) = \widehat{X}(\omega)\widehat{H}(\omega) \tag{5}$$
**Proof of the Convolution Theorem**:
<div class="alert alert-success">
The Fourier transform of $y(t)=x(t)\ast h(t)$ is:
$$
\begin{eqnarray}
\widehat{Y}(\omega) &=& \mathcal{F} \left[ x(t)\ast h(t) \right] \\
\, &=& \mathcal{F} \left[ \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau) \,\mathrm{d}\tau \right] \\
\, &=& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\, \mathrm{e}^{-i\omega t} \,\mathrm{d}\tau \, \mathrm{d}t\\
\end{eqnarray}\tag{6}
$$
Let's now assert that $t=u+\tau$ where $u$ is the new integration variable and $\tau$ is a constant shift. Thus, we may write $\mathrm{d}t = \mathrm{d}u$. Inserting this into the equation above yields:
$$
\begin{eqnarray}
\widehat{Y}(\omega) &=& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x(\tau)\,h(u)\, \,\mathrm{e}^{-i\omega (u+\tau)}\,\mathrm{d}\tau \,\mathrm{d}u\\
\, &=& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x(\tau)\,h(u)\, \mathrm{e}^{-i\omega \tau}\,\mathrm{e}^{-i\omega u}\,\mathrm{d}\tau \,\mathrm{d}u\\
\,&=& \left[\int_{-\infty}^{\infty} x(\tau)\,\mathrm{e}^{-i\omega \tau}\,\mathrm{d}\tau\right]
\left[
\int_{-\infty}^{\infty} h(u)\,\mathrm{e}^{-i\omega u}\,\mathrm{d}u
\right]\\
\,&=& \widehat{X}(\omega)\,\widehat{H}(\omega)
\end{eqnarray}\tag{7}
$$
</div>
Thus, the output signal of a convolution in the frequency domain is equal to the product of the Fourier Transforms of input functions. This is one of the most important properties of convolution.
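We can verify the Convolution Theorem numerically with NumPy's FFT. Zero-padding both signals to the full output length $N_x + N_h - 1$ makes the circular (FFT) convolution equal the linear one (the random test signals are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(32)

n = len(x) + len(h) - 1                     # full linear-convolution length
y_time = np.convolve(x, h)                  # time-domain convolution
y_freq = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real  # multiply spectra

print(np.allclose(y_time, y_freq))          # True
```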
## Example 2 - Convolution of Two Boxcars
This example looks at the (auto)convolution of two boxcar functions of equal width. You might recognize the result ... a triangle function! Recall from an earlier example that
$$ \Pi_T(t) \Longleftrightarrow 2 T \, \mathrm{sinc}\left(T\omega\right) \tag{8} $$
Therefore, by the convolution theorem:
$$\Pi_T(t) \ast \Pi_T(t) \Longleftrightarrow 4 T^2 \, \mathrm{sinc}^2\left(T\omega\right)\tag{9} $$
You'll also recall that a $\mathrm{sinc}^2$ function is the Fourier Transform of the triangle function. Therefore, you probably wouldn't be surprised to hear that the autoconvolution of a boxcar function is the triangle function! This is illustrated in the example below.
```
# . . Simple convolutional model demonstration
nt=100
t = np.arange(0,1,1/nt)
# . . Input signal 1
x1 = np.zeros(nt)
x1[20:30]=0.5
# . . Input signal 2
x2 = np.zeros(nt)
x2[10:40]=0.5
# . . Impulse response 1
h1 = np.zeros(nt)
h1[70:80] = 0.5
#h1[50:60] = 0.5
# . . Impulse response 2
h2 = np.zeros(nt)
#h2[60:90] = 0.5
h2[40:60] = 0.5
plt.figure(figsize=(12, 6))
# . . Plot
plt.subplot(2,2,1)
plt.plot(t,x1,'r',t,h1,'k')
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.legend(['x1(t)','h1(t)'],fontsize=12)
plt.title('(a) Two signals input to convolution',size=14)
plt.axis([0,1,-1,1])
plt.subplot(2,2,2)
plt.plot(t-max(t)/2,np.convolve(x1,h1,mode='same'))
plt.title('(b) Convolution of x1(t) and h1(t)',size=14)
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.axis([-0.5,0.5,0,1.1*np.max(np.convolve(x1,h1,mode='same'))])
# . . Plot
plt.subplot(2,2,3)
plt.plot(t,x2,'r',t,h2,'k')
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.legend(['x2(t)','h2(t)'],fontsize=12)
plt.title('(c) Two signals input to convolution',size=14)
plt.axis([0,1,-1,1])
plt.subplot(2,2,4)
plt.plot(t-max(t)/2,np.convolve(x2,h2,mode='same'))
plt.title('(d) Convolution of x2(t) and h2(t)',size=14)
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.axis([-0.5,0.5,0,1.1*np.max(np.convolve(x2,h2,mode='same'))])
plt.tight_layout()
plt.show()
```
**Figure 6. (a) Two boxcar functions where the time-reversed version would fall exactly on the red curve. (b) The convolution of the two functions in (a), effectively representing the autoconvolution of the signal. (c) Two boxcars where the red curve is now longer and the black curve is shifted. (d) Convolution of the signals in (c). Note that the result is both broader and has a flat top corresponding to shifts where the black boxcar is completely within the red boxcar.**
Thus, to compute the autoconvolution of the boxcar function you can do either one of two things:
1. Compute the convolution of $y(t) = h(t)\ast x(t)$ in the physical domain using a convolution program.
2. Compute the Fourier transforms of $h(t)$ and $x(t)$, multiply the results (i.e., $\widehat{Y}(\omega)=\widehat{X}(\omega)\widehat{H}(\omega)$), and then take the inverse Fourier transform of $\widehat{Y}(\omega)$ to get the desired result $y(t)$.
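As a quick numerical check that the two routes agree for the boxcar (the boxcar width of 20 samples is an arbitrary choice here):

```
import numpy as np

box = np.zeros(128)
box[30:50] = 1.0                              # boxcar of width 20 samples

# Route 1: direct convolution in the physical domain
tri_time = np.convolve(box, box)

# Route 2: FFT, multiply, inverse FFT (padded to the full output length)
n = 2 * len(box) - 1
tri_freq = np.fft.ifft(np.fft.fft(box, n) ** 2).real

print(np.allclose(tri_time, tri_freq))        # True
print(tri_time.max())                         # triangle peak = boxcar width (20.0)
```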
## Commutativity of the convolution operator
Let's look at proving the following:
$$f(t) \ast g(t) = g(t) \ast f(t) \tag{10}$$
First, let's do it analytically:
**COMMUTATIVITY PROOF**:
<div class="alert alert-success">
The definition of convolution is given by:
$$f(t)\ast g(t) = \int_{-\infty}^{\infty} f(\tau)g(t-\tau)d\tau \tag{11}$$
However, we may perform a substitution of $t^\prime=t-\tau$, $d\tau=-dt^\prime$ and $\tau=t-t^\prime$ in the above equation.
$$ f(t)\ast g(t) =
- \int_{t+\infty}^{t-\infty} f(t-t^\prime)g(t^\prime)dt^\prime
\tag{12}$$
However, care must be paid to the integration limits. First, we note that as $\tau\rightarrow-\infty$ we get $t^\prime=t+\infty=+\infty$. Similarly, as $\tau\rightarrow +\infty$ we get $t^\prime=t-\infty=-\infty$. This is why we have had to change the integration limits in equation 12.
The next step is that we reverse the integration limits, which changes the sign of the integral. Thus, we may write:
$$ f(t)\ast g(t) =
- \int_{+\infty}^{-\infty} f(t-t^\prime)g(t^\prime)dt^\prime
= \int_{-\infty}^{+\infty}g(t^\prime) f(t-t^\prime)dt^\prime
=
g(t) \ast f(t)
\tag{13} $$
Thus, the convolution operator is commutative. This should be expected from the Fourier convolution theorem which states:
$$f(t)\ast g(t) \Leftrightarrow \widehat{F}(\omega)\widehat{G}(\omega)=\widehat{G}(\omega)\widehat{F}(\omega) \Leftrightarrow g(t)\ast f(t) \tag{14} $$
</div>
Let's now show it computationally:
```
# . . Simple convolutional model demonstration
nt=100
t = np.arange(0,1,1/nt)
# . . Input signal 1
x = np.zeros(nt)
x[20:30]=np.arange(0,10,1)*0.05
# . . Impulse response 1
h = np.zeros(nt)
h[50:60] = 0.5
plt.figure(figsize=(12, 4))
# . . Two functions
plt.subplot(1,3,1)
plt.plot(t,x,'r',t,h,'k')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend(['x(t)','h(t)'])
plt.title('(a) Input functions to be convolved')
plt.axis([0,1,-1,1])
# . . Plot convolution x*h
plt.subplot(1,3,2)
plt.plot(t-max(t)/2,np.convolve(x,h,mode='same'))
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.title('(b) Convolution 1: $h(t) * x(t)$')
plt.axis([-0.5,0.5,0,1.1*np.max(np.convolve(h,x,mode='same'))])
# . . Plot convolution h*x
plt.subplot(1,3,3)
plt.plot(t-max(t)/2,np.convolve(h,x,mode='same'))
plt.title('(c) Convolution 2: $x(t) * h(t)$')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.axis([-0.5,0.5,0,1.1*np.max(np.convolve(h,x,mode='same'))])
plt.tight_layout()
plt.show()
```
**Figure 7. Illustrating the commutativity of the convolution operator. (a) Two functions to be convolved. (b) Convolution $h(t)\ast x(t)$. (c) Convolution $x(t)\ast h(t)$.**
## Convolution (Computational Efficiency)
From a computational perspective, as the number of samples in a time series gets "large" it becomes more efficient to compute a (fast) Fourier transform, perform Fourier-domain multiplication, and then take the inverse Fourier transform. The [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity_theory) of this approach is $\mathcal{O}\left(n \log n\right)$, rather than the $\mathcal{O}\left(n^2\right)$ of performing convolution directly in the time domain.
```
# . . Close all plot windows
plt.close('all')
nmax=10
# . . Set up in powers of 2^n
n=np.arange(1,nmax,1)
# . . Convolution in time domain
timedomain=np.zeros(nmax)
timedomain = (2.**(n+1))**2;
# . . Convolution in the Fourier domain
freqdomain=np.zeros(nmax)
freqdomain = 8.*(2.**(n+1)) * np.log10(2**(n+1))
# . . Compute relative ratio
relratio = timedomain/freqdomain
# . . Let's make a figure
plt.figure(figsize=(9, 6))
plt.subplot(211)
plt.loglog(n,timedomain,'b-o',label='Time Domain')
plt.loglog(n,freqdomain,'g-o',label='Frequency Domain')
plt.xlabel('Number of Samples ($2^N$)',size=12)
plt.ylabel(r'Complexity ($\log_{10}$)',size=12)
plt.legend(bbox_to_anchor=(0.1, 0.8), loc=2, borderaxespad=0.)
plt.title('(a) Computational Complexity of the Convolution Operation',size=14)
plt.axis([1,nmax,0,1.2*max(timedomain)])
plt.subplot(212)
plt.loglog(n,relratio,'k-o',label='Relative Computational Expense')
plt.xlabel('Number of Samples ($2^N$)',size=12)
plt.ylabel(r'Relative Complexity ($\log_{10}$)',size=12)
plt.legend(bbox_to_anchor=(0.1, 0.8), loc=2, borderaxespad=0.)
plt.axis([2,nmax,-1,1.2*max(relratio)])
plt.title('(b) Relative Complexity: Time vs. Fourier',size=14)
plt.tight_layout()
plt.show()
```
**Figure 8. (a) Illustration of the computational complexity of time-domain (blue curve) and frequency-domain (green curve) computation of the convolution operation. (b) The relative cost of the two curves in (a).**
# Other examples of physical convolutional LTI systems
## GPR Acquisition system
Consider that you are recording a geophysical signal (e.g., a GPR wave) denoted by $x(t)$. The instrument (GPR receiving antenna) you are using has a natural response, $h(t)$. The recorded GPR data you measure, $y(t)$, will be affected by the instrument through the convolution process $y(t)=x(t)\ast h(t)$. Thus, the recorded data $y(t)$ will not be the ''true'' response $x(t)$ unless: (1) you have a ''perfectly non-distorting'' instrument (i.e., $h(t)=\delta(t)$); or (2) you account for the instrument's response through a [deconvolution](https://en.wikipedia.org/wiki/Deconvolution) procedure to recover $x(t)$ (assuming you know $h(t)$):
$$ x(t) = \mathcal{F}^{-1}\left[ \frac{\widehat{Y}(\omega)}{\widehat{H}(\omega)} \right] = \mathcal{F}^{-1}\left[ \frac{\mathcal{F}[y(t)]} {\mathcal{F}[h(t)]} \right]. \tag{15} $$
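Equation 15 can be sketched in NumPy. The raw spectral division is unstable wherever $\widehat{H}(\omega)$ is small, so the sketch below adds a small "water level" $\epsilon$ to the denominator - an assumption on my part, not part of equation 15 - which recovers a band-limited estimate of $x(t)$:

```
import numpy as np

def deconvolve(y, h, eps=1e-6):
    """Frequency-domain deconvolution, stabilized with a small 'water level' eps."""
    n = len(y)
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # stabilized form of Y / H
    return np.fft.ifft(X).real

# Hypothetical test case: convolve known spikes with an assumed Gaussian response
t = np.arange(256)
x_true = np.zeros(256); x_true[40] = 1.0; x_true[90] = -0.5
h = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)                   # assumed response h(t)
y = np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h)).real   # "recorded" data

x_rec = deconvolve(y, h)
print(np.argmax(x_rec))        # the main spike is recovered at sample 40
```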
**Q: What are the potential problems or challenges with a deconvolutional approach?**
## Human hearing
There is a far-field acoustic source $x(t)$ in your presence. Your hearing system has two inputs $h_L(t)$ and $h_R(t)$ (i.e., your ears!). Unfortunately, they are an imperfect sensing instrument and thus what you actually ''hear'' is a convolution product.
$$ y(t) = \left[ h_L(t)+h_R(t)\right]\ast x(t) \tag{16} $$
Hearing aids with responses $a_L(t)$ and $a_R(t)$ can be introduced to improve your response!! This leads to a more complicated LTI model:
$$ y(t) = \left[ a_L(t) \ast h_L(t)+a_R(t) \ast h_R(t)\right]\ast x(t) \tag{17} $$
See Figure 9 for an illustration.
<img src="Fig/2.3-EARS.png" width="300">
**Figure 9. Illustration of the ears plus hearing aids as an LTI system. In this scenario an acoustic source produces a signal $x(t)$ that propagates through the air and is detected by you the listener. Your hearing system has two detectors, your left and right ears $h_L(t)$ and $h_R(t)$. Your brain thus detects the convolution of your ear's responses and the signal. Of course if you have hearing aids this is more complicated since your ear response is now the convolution of the previous result with the hearing aid response (which can be different in the left and right ears)!**
# Convolution in Action
## Convolution with a $\delta$-function
Convolution of a function $f(t)$ with a $\delta$-function shifted by $a$ units gives
$$f(t) \ast \delta(t-a) = \int_{-\infty}^{\infty} f(\tau)\,\delta(t-\tau-a)\,\mathrm{d}\tau = f(t-a) \tag{18} $$
by virtue of the properties of the $\delta$-function. We may write this symbolically as
$$ f(t) \ast \delta (t-a) = f(t-a). \tag{19} $$
Applying the convolution theorem to this result yields the Fourier shift theorem. Recall the transform pairs:
$$ f(t) \Leftrightarrow \widehat{F}(\omega); \quad \delta(t-a) \Leftrightarrow \mathrm{e}^{-i \omega a}, \tag{20}
$$
which yields the following:
$$ f(t-a) = f(t) \ast \delta (t-a) \Leftrightarrow \widehat{F}(\omega)\mathrm{e}^{-i\omega a}. \tag{21} $$
That is, the Fourier transform of the shifted function is the product of the two signals' spectra. The interpretation is that shifting a time series by $a$ units leaves its amplitude spectrum unchanged and applies a phase shift of ${\rm e}^{-i\omega a}$.
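Equation 19 is easy to check numerically: convolving a signal with a unit impulse at sample $a$ simply delays it by $a$ samples (the signal values and shift are arbitrary choices):

```
import numpy as np

f = np.array([1.0, 3.0, 2.0, 0.0, 0.0, 0.0])
a = 2
delta = np.zeros(len(f)); delta[a] = 1.0      # discrete delta(t - a)

shifted = np.convolve(f, delta)[:len(f)]      # keep the first len(f) samples
print(shifted)                                # [0. 0. 1. 3. 2. 0.]
```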
## Convolution with a pair of $\delta$-functions
Based on our previous exploration of the Fourier transform of $\delta$-functions, we can identify the following:
$$ \left[\delta(t-a)+\delta(t+a)\right] \Leftrightarrow 2\,\mathrm{cos}\,(\omega a), \tag{22} $$
with a similar result holding if the delta functions were defined in the Fourier domain. The convolution of the expression in equation 22 with a function $f(t)$ is given by
$$ \left[\delta(t-a)+\delta(t+a)\right] \ast f(t) \Leftrightarrow 2\,\mathrm{cos}\,(\omega a) \widehat{F}(\omega),\tag{23} $$
which is again the product of the signals' respective Fourier transforms.
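In the time domain, the same convolution produces two shifted copies of $f(t)$ - an "echo". A short sketch (the pulse shape and impulse positions are arbitrary choices):

```
import numpy as np

nt = 128
t = np.arange(nt)
f = np.exp(-0.5 * ((t - 10) / 2.0) ** 2)   # narrow pulse peaking at t = 10

pair = np.zeros(nt)
pair[20] = 1.0                             # delta(t - 20)
pair[60] = 1.0                             # delta(t - 60)

y = np.convolve(f, pair)[:nt]              # two shifted copies of the pulse
print(y[30], y[70])                        # copies peak at 10+20 and 10+60
```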
## Two Gaussian functions
An interesting example that comes up quite frequently in probability theory (including geophysical inverse problems when modeling probability density functions) is the convolution of two Gaussian functions. Say we have two Gaussian functions of width $a$ and $b$, $f(t)=\mathrm{e}^{-t^2/a^2}$ and $g(t)=\mathrm{e}^{-t^2/b^2}$ respectively. In the Fourier domain we know that the respective Fourier transforms are given by:
$$\widehat{F}(\omega) = a\sqrt{\pi}\,\mathrm{e}^{-a^2\omega^2/4} \tag{24} $$
and
$$\widehat{G}(\omega) = b\sqrt{\pi}\,\mathrm{e}^{-b^2\omega^2/4}. \tag{25} $$
Thus, according to the Fourier domain convolution theorem, we may write:
$$
\begin{eqnarray}
f(t) \ast g(t) &=& \mathcal{F}^{-1}\left[\widehat{F}(\omega)\cdot \widehat{G}(\omega)\right]\tag{26a} \\
\,&=& \mathcal{F}^{-1}\left[
ab\pi \mathrm{e}^{-(a^2+b^2)\omega^2/4}
\right] \tag{26b}\\
\, &=& \frac{a b \sqrt{\pi} }{\sqrt{a^2+b^2}}\mathrm{e}^{-t^2/(a^2+b^2)}. \tag{26c}
\end{eqnarray}
$$
Importantly, the convolution of two Gaussians with widths $a$ and $b$ **is also a Gaussian of width $\sqrt{a^2+b^2}$.** Let's test this out with a numerical example.
```
# . . Simple convolutional model demonstration
nt=100
t = np.arange(0,1,1/nt)
# . . Input signal 1
g1 = np.zeros(nt)
g1 = 0.5*np.exp(-(t-0.3)**2/(2*0.04**2))
# . . Input signal 2
g2 = np.zeros(nt)
g2 = 0.5*np.exp(-(t-0.3)**2/(2*0.12**2))
# . . Impulse response
g0 = np.zeros(nt)
g0 = 0.5*np.exp(-(t-0.7)**2/(2*0.04**2))
plt.figure(figsize=(12, 6))
# . . Plot
plt.subplot(2,2,1)
plt.plot(t,g1,'r',t,g0,'k')
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.legend(['$g_1$','$g_0$'],fontsize='12')
plt.title('(a) Two Gaussians input to Convolution',size=14)
plt.axis([0,1,0,0.6])
plt.subplot(2,2,2)
plt.plot(t-max(t)/2,np.convolve(g1,g0,mode='same'))
plt.title('(b) Convolution of $g_1$ and $g_0$')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend(['$g_1*g_0$'],fontsize='12')
plt.axis([-0.5,0.5,0,2.5])
# . . Plot
plt.subplot(2,2,3)
plt.plot(t,g2,'r',t,g0,'k')
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.legend(['$g_2$','$g_0$'],fontsize='12')
plt.axis([0,1,0,0.6])
plt.title('(c) Two Gaussians input to Convolution',size=14)
plt.subplot(2,2,4)
plt.plot(t-max(t)/2,np.convolve(g2,g0,mode='same'))
plt.title('(d) Convolution of $g_2$ and $g_0$')
plt.xlabel('Time (s)',size=12)
plt.ylabel('Amplitude',size=12)
plt.legend(['$g_2*g_0$'],fontsize='12')
plt.axis([-0.5,0.5,0,2.5])
plt.tight_layout()
plt.show()
```
**Figure 10. Illustrating the convolution of two Gaussian functions. (a) Two shifted but otherwise identical Gaussian functions. (b) The convolution of the functions in (a). (c) Two shifted and unequal Gaussian functions. (d) The convolution of the functions in (c). Note that convolving Gaussian functions broadens the resulting Gaussian function.**
One interesting observation is that because the resulting function after the convolution is broader, convolution with a Gaussian filter is a **low-pass filtering** operation. One can use this approach to remove high-frequency or high-wavenumber noise in a data set.
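A sketch of this low-pass idea in practice: smoothing a noisy signal with a normalized Gaussian kernel (the kernel width and noise level are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(1)
nt = 500
t = np.arange(nt)
clean = np.sin(2 * np.pi * t / 125)                # slow oscillation
noisy = clean + 0.3 * rng.standard_normal(nt)      # add high-frequency noise

# Gaussian kernel, normalized to unit area so amplitudes are preserved
k = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)
k /= k.sum()

smoothed = np.convolve(noisy, k, mode='same')
print(np.std(smoothed - clean) < np.std(noisy - clean))   # noise is reduced
```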
## The Algebra of Convolutions
One has to take care when applying more complex convolution operators. For example, generally speaking:
<div class="alert alert-danger">
$$ \left[a(t) \ast b(t)\right] \cdot c(t) \neq a(t)\ast\left[b(t)\cdot c(t)\right] \tag{27}$$
</div>
One can use the properties of the convolution operation to derive some truly convoluted relationships:
$$
\left[a(t)\ast b(t)\right]\cdot \left[c(t)\ast d(t)\right]
\Leftrightarrow
\left[\widehat{A}(\omega) \cdot \widehat{B}(\omega)\right]\ast \left[\widehat{C}(\omega)\cdot \widehat{D}(\omega)\right] \tag{28}
$$
## The Derivative Theorem
If $f(t)\Leftrightarrow\widehat{F}(\omega)$, then
$$ \widehat{G}(\omega) = \mathcal{F}\left[\frac{\mathrm{d}f(t)}{\mathrm{d}t}\right] = i\omega \widehat{F}(\omega). \tag{29} $$
which is sometimes written symbolically as a Fourier Transform pair:
$$\frac{\mathrm{d}}{\mathrm{d}t} \Leftrightarrow i\omega. \tag{30}$$
Let's now look at a proof of the Derivative Theorem:
<div class="alert alert-success">
Let's begin with our definition of the Fourier Transform of $f(t)$ and its derivative:
$$ \widehat{F}(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t} dt \tag{31} $$
and
$$ \widehat{G}(\omega) = \int_{-\infty}^{\infty} \frac{d f(t)}{dt} e^{-i\omega t} dt. \tag{32}$$
Let's look at the second integral and apply integration by parts using $u=e^{-i\omega t}$, $dv=\frac{df}{dt} dt$, $v=f$ and $du = -i\omega e^{-i\omega t} dt $.
Thus,
$$\int_{-\infty}^{\infty} u dv = \left. uv\right|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} v du \tag{33}$$
may be written:
$$\widehat{G}(\omega) =\int_{-\infty}^{\infty} \frac{df(t)}{dt} e^{-i\omega t}dt = \left. f (t) e^{-i\omega t}\right|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} f(t) i\omega e^{-i\omega t} dt \tag{34}$$
The boundary term $\left. f(t) e^{-i\omega t}\right|_{-\infty}^{\infty}$ must vanish at both limits in order for $f(t)$ to be an admissible function for the Fourier transform. Thus, we may write:
$$\widehat{G}(\omega) = \int_{-\infty}^{\infty} \frac{df(t)}{dt} e^{-i\omega t}dt = i\omega \int_{-\infty}^{\infty} f(t) e^{-i\omega t} dt = i\omega \widehat{F}(\omega) \tag{35}$$
which is the equivalent to equation 29 above.
</div>
Note that if we defined our Fourier transform with the opposite convention (i.e., $e^{i\omega t}$ instead of $e^{
-i\omega t}$) then the result would be $\frac{\mathrm{d}}{\mathrm{d}t} \Leftrightarrow - i\omega$, which is commonly found in mathematical physics textbooks.
The Fourier Derivative theorem also extends to higher-order derivatives:
$$ \frac{\mathrm{d}^nf(t)}{\mathrm{d}t^n} \Leftrightarrow \left(i\omega\right)^n \widehat{F}(\omega). \tag{36} $$
This is important to remember because we often take derivatives of functions without thinking about how this affects the Fourier spectrum - in particular, differentiation multiplies the DC component by zero.
Note that this also holds for partial differential equations. For example, if we have the 1D acoustic wave equation
$$\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} \tag{37} $$
we can use the fact that $\frac{\partial}{\partial t} \Leftrightarrow i\omega$ and $\frac{\partial}{\partial x} \Leftrightarrow ik_x$ to immediately write:
$$ (ik_x)^2 = \frac{(i\omega)^2}{c^2} \tag{38}$$
or
$$ k_x = \pm \frac{\omega}{c}, \tag{39}$$
which you may recognize as a dispersion relationship linking wavenumbers and frequencies!
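The derivative theorem also gives a practical recipe for differentiating a periodic, band-limited signal: multiply its FFT by $i\omega$ and transform back. A sketch (the grid and test function are arbitrary choices):

```
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)                                      # known derivative: 3 cos(3x)

omega = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])   # angular frequencies
df = np.fft.ifft(1j * omega * np.fft.fft(f)).real      # spectral derivative

print(np.allclose(df, 3 * np.cos(3 * x)))              # True
```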
## The Convolution Derivative Theorem
The following is true:
$$ \frac{\mathrm{d}}{\mathrm{d}t}
\left[
f(t)\ast g(t)
\right] =
\frac{\mathrm{d}f(t)}{\mathrm{d}t}\ast g(t)
=
f(t)\ast \frac{\mathrm{d}g(t)}{\mathrm{d}t} \tag{40}
$$
**Extension Problem:** Prove that this is correct.
## Rayleigh's Theorem
The following is true:
$$\int_{-\infty}^{\infty} f(t) \overline{g(t)} \mathrm{d}t = \int_{-\infty}^{\infty} \widehat{F}(\omega) \overline{\widehat{G}(\omega)} \mathrm{d}\omega \tag{41}$$
Two particular cases of interest are Rayleigh's theorem
$$\int_{-\infty}^{\infty} \left|f(t)\right|^2 \mathrm{d}t = \int_{-\infty}^{\infty}\left| \widehat{F}(\omega)\right|^2 \mathrm{d}\omega, \tag{42}$$
and the power in a periodic signal (represented by a Fourier series):
$$\int_{-\infty}^{\infty} \left|f(t)\right|^2 \mathrm{d}t = \frac{A_0^2}{4} + \frac{1}{2}\sum_{k=1}^{\infty} \left(A_k^2+B_k^2\right). \tag{43}$$
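The discrete analogue of equation 42 under NumPy's FFT normalization is $\sum_n |f_n|^2 = \frac{1}{N}\sum_k |\widehat{F}_k|^2$, which is easy to verify (the random test signal is an arbitrary choice):

```
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(128)
F = np.fft.fft(f)

energy_time = np.sum(np.abs(f) ** 2)
energy_freq = np.sum(np.abs(F) ** 2) / len(f)   # 1/N from NumPy's FFT convention
print(np.allclose(energy_time, energy_freq))    # True
```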
## 2D Convolution Theorem
Like its 1D cousin, **2D convolution** may be expressed in the following way:
$$ g(x,y) = h(x,y) \ast f(x,y) = f(x,y) \ast h(x,y) \tag{44} $$
In integral notation, 2D convolution may be expressed as
$$ g(x,y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\tau_x,\tau_y) h(x-\tau_x,y-\tau_y) \,\mathrm{d}\tau_x\,\mathrm{d}\tau_y. \tag{45} $$
Similarly to the 1D scenario, one can develop an expression for the **2D Convolution Theorem**:
$$ f(x,y) \ast h(x,y) \Leftrightarrow \widehat{F}(k_x,k_y) \widehat{H}(k_x,k_y) \tag{46}$$
## 2D spatial-domain example
Let's look at an example of a 2D convolution involving a picture (i.e., $f(x,y)$) and a 2D Gaussian filter (i.e., $h(x,y)$) of $20$ points in diameter. The result was generated by applying the 2D convolution directly in the spatial domain.
<img src="Fig/2.4_GaussSmooth1.png" width="800">
**Figure 11. Example of a 2D Convolution involving a 2D Gaussian filter of 20 points in diameter. The left panel represents $f(x,y)$, the middle panel represents the impulse response $h(x,y)$ and the right panel is the convolution of $f(x,y)$ and $h(x,y)$ (i.e., $g=h\ast f$).**
## 2D wavenumber-domain example
Now let's recompute the previous example, but now using the **2D Convolution Theorem**. This evaluation: (1) applies 2D Fourier transforms of both $f$ and $h$ to obtain $\widehat{F}$ and $\widehat{H}$; (2) multiplies these two functions together to obtain $\widehat{G}$; and then (3) applies an inverse Fourier transform to recover $g$. Note that this example uses the following notation: $[u,v]=[k_x,k_y]$.
<img src="Fig/2.4_GaussSmooth2.png" width="800">
**Figure 12. Example of a 2D Convolution involving a 2D Gaussian filter of 20 points in diameter using the 2D Convolution Theorem. Note that this example uses the alternative 2D FT notation where $[u,v]$ is exchanged for $[k_x,k_y]$.**
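The same two computational routes can be compared in pure NumPy. The sketch below (with an arbitrary random "image" and a small separable Gaussian filter) evaluates equation 45 as a literal double sum and then via the 2D Convolution Theorem:

```
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal((32, 32))              # stand-in for the image f(x, y)
k = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
h = np.outer(k, k)                             # separable 2D Gaussian h(x, y)

# Route 1: direct double sum (slow, but a literal translation of equation 45)
ny, nx = f.shape[0] + h.shape[0] - 1, f.shape[1] + h.shape[1] - 1
g_direct = np.zeros((ny, nx))
for i in range(h.shape[0]):
    for j in range(h.shape[1]):
        g_direct[i:i + f.shape[0], j:j + f.shape[1]] += h[i, j] * f

# Route 2: 2D Convolution Theorem - multiply 2D FFTs, zero-padded to full size
g_fft = np.fft.ifft2(np.fft.fft2(f, (ny, nx)) * np.fft.fft2(h, (ny, nx))).real

print(np.allclose(g_direct, g_fft))            # True
```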
# Crosscorrelation
An important operation related to convolution is [cross correlation](https://en.wikipedia.org/wiki/Cross-correlation). This operation is denoted by the $\star$ operator and is closely related to convolution. The integral for cross correlation is given by:
$$y(\tau) = f (\tau) \star g(\tau) = \int _{-\infty }^{\infty }\overline{f(t)}\ g(t+\tau )\,dt \tag{47}$$
where $\overline{f(t)}$ is the complex conjugate and unlike in convolution $g(t+\tau )$ is not time reversed. The $\tau$ axis may be considered as the **correlation lag** or **time delay** between the two signals $f$ and $g$. However, one can switch the places of $t$ and $\tau$ in equation 47 without any change in mathematical meaning.
In numpy, the correlation of two signals $f$ and $g$ can be calculated via
np.correlate(f,g,mode='full')
where there is again the option for *mode='same'* to return a result of the same length as two equal-length input arrays (if applicable).
Let's look at an example using the *mode='full'* operation:
```
def plotfuncs(t,x,h,n,title):
plt.subplot(4,2,n)
plt.plot(t,x,'r',t,h,'k',lw=3)
plt.xlabel('Samples (time)')
plt.ylabel('Amplitude')
plt.title(title+' Signals x(t) and h(t)')
plt.legend(['x(t)','h(t)'])
plt.axis([0,1,-0.1,2])
def plotconvo(t,y,n,title):
ax=plt.subplot(4,2,n)
plt.plot(t,y,'k',lw=3)
plt.xlabel('Samples (time)')
plt.ylabel('Amplitude')
plt.title(title+' Correlation of Signals x(t) and h(t)')
plt.axis([-1,1,-0.1,2])
ax.set_yticklabels([])
# . . Simple convolutional model demonstration
nt=101
t = np.arange(0,1,1/nt)
tauline = np.arange(-1,0.99,1/nt)
xline = np.arange(0,1,1/nt)
# . . Input signals (shifted in time with respect to each other)
x0 = np.zeros(nt); x0=0.5 *np.exp(-(t-0.5 )**2/(2*0.02**2))
x1 = np.zeros(nt); x1=0.25*np.exp(-(t-0.5 )**2/(2*0.08**2))
x2 = np.zeros(nt); x2=0.5 *np.exp(-(t-0.3 )**2/(2*0.02**2))
x3 = np.zeros(nt); x3=1 *np.exp(-(t-0.5 )**2/(2*0.02**2))
# . . Impulse response (stationary for the four examples)
h0 = np.zeros(nt)
h1 = np.zeros(nt)
h2 = np.zeros(nt)
h3 = np.zeros(nt)
h0 = np.exp(-(t-0.5)**2/(2*0.02**2))
h1 = np.exp(-(t-0.5)**2/(2*0.02**2))
h2 = np.exp(-(t-0.5)**2/(2*0.02**2))
h3 = 0.2+0.*np.exp(-(t-0.5)**2/(2*0.02**2))
#Redo h0
h0=0*h0
h0[20:30]=0.01*np.arange(0,10,1)**2
# . . Plot
plt.figure(figsize=(12, 9))
plotfuncs(xline,x0,h0,1,'(a)')
plotconvo(tauline,np.correlate(x0,h0,mode='full'),2,'(b)')
plotfuncs(xline,x1,h1,3,'(c)')
plotconvo(tauline,np.correlate(x1,h1,mode='full'),4,'(d)')
plotfuncs(xline,x2,h2,5,'(e)')
plotconvo(tauline,np.correlate(x2,h2,mode='full'),6,'(f)')
plotfuncs(xline,x3,h3,7,'(g)')
plotconvo(tauline,np.correlate(x3,h3,mode='full'),8,'(h)')
plt.tight_layout()
plt.show()
```
**Figure 13. Illustration of crosscorrelation of two signals. Left panels: Two signals $x(t)$ and $h(t)$ to be crosscorrelated. Right panels: crosscorrelation result. Note that this operation is not commutative.**
<div class="alert alert-danger">
WARNING: You'll notice that the output of this operation is from -1s to 1s, which is different from the above. This is because in this case we have used
np.correlate(a,b,mode='full')
instead of
np.correlate(a,b,mode='same')
as in the convolution examples above. If you have arrays $a$ and $b$ of dimension N, then the 'same' option will return an array of dimension N ranging from -N/2 to N/2. Conversely, the 'full' option will return an array of dimension 2N-1 ranging from -N to N.
In the above example we have arrays defined between 0-1s. Thus, because we have called the 'full' option, the function has returned the correlation result between ±1s. This is because the extra delays between the two signals can be up to ±1s.
</div>
## Crosscorrelation Properties
The correlation integral has the following important properties:
(1) The cross-correlation of $f(t)$ and $g(t)$ is equivalent to the **convolution** of $\overline{f(-t)}$ and $g(t)$.
$$f(t)\star g(t) = \overline{f(-t)}\ast g(t) \tag{48} $$
(2) If $f(t)$ is a [Hermitian function](https://en.wikipedia.org/wiki/Hermitian_function) (i.e., has the property of $\overline{f(t)}=f(-t)$) then the following holds:
$$f(t) \star g(t) = f (t) \ast g(t)\tag{49}$$
(3) If $f(t)$ and $g(t)$ are both Hermitian functions, then
$$ f(t) \star g(t) = g(t)\star f(t) \tag{50}$$
(4) The cross-correlation of two functions [i.e., $h(t) = f(t) \star g(t)$] satisfies the following in the Fourier domain:
$$ \widehat{H}(\omega) =\mathcal{F}[h(t)] = \mathcal{F}[f(t)\star g(t)] = \mathcal{F}[\overline{f(-t)}\ast g(t)] = \overline{\widehat{F}(\omega)} \widehat{G}(\omega) \tag{51} $$
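Property 4 can be checked numerically. Note that for real signals `np.correlate(g, f)` realizes equation 47's $f\star g$, because NumPy conjugates its second argument; in the Fourier route we zero-pad to all $2N-1$ lags and reorder the circular output (the random test signals are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(4)
n = 64
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Time-domain cross-correlation f (star) g at all 2n-1 lags (equation 47)
r_time = np.correlate(g, f, mode='full')

# Fourier route: conj(F) * G, zero-padded, then reorder negative/positive lags
m = 2 * n - 1
R = np.conj(np.fft.fft(f, m)) * np.fft.fft(g, m)
r = np.fft.ifft(R).real
r_freq = np.concatenate([r[-(n - 1):], r[:n]])   # lags -(n-1) ... (n-1)

print(np.allclose(r_time, r_freq))               # True
```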
## The difference between convolution and correlation
Let's now look at an example showing the difference between the convolution and cross-correlation operations. In the image below, I have created two signals $x(t)$ and $h(t)$. The second panel shows the **convolution** $y(t) = x(t)\ast h(t)$, while the third panel shows the **convolution** $y(t) = h(t) \ast x (t)$. These are identical, as should be expected from the commutativity of the convolution operator discussed above. The fourth panel shows the **correlation** $y_1(t)=x(t)\star h(t)$, while the fifth panel shows the **correlation** $y_2(t)=h(t)\star x(t)$. You'll notice in this case that $y_2(t)=y_1(-t)$.
```
# . . Simple convolutional model demonstration
nt=201
t = np.arange(0,1,1/nt)
tauline = np.arange(-1,0.99,1/nt)
xline = np.arange(0,1,1/nt)
# . . Input signals (shifted in time with respect to each other)
x = np.zeros(nt)
x=0.5*np.exp(-(t-0.5 )**2/(2*0.02**2))
# . . Input signal 1
h = np.zeros(nt)
h[20:70]=np.arange(0,50,1)/50/2
# . . Plot two signals
plt.figure(figsize=(10, 10))
plt.subplot(5,1,1)
plt.plot(t,x,'r',t,h,'k',lw=3)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title(r'(a) Signals $x(t)$ and $h(t)$',size=14)
plt.legend(['x(t)','h(t)'])
plt.axis([0,1,-0.1,1])
# . . Plot convolution x*h
plt.subplot(5,1,2)
plt.plot(t-max(t)/2,np.convolve(x,h,mode="same"),lw=3)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title(r'(b) Convolution: $y(t)=x(t)\ast h(t)$',size=16)
plt.axis([-0.5,0.5,-0.1,2.5])
# . . Plot convolution h*x
plt.subplot(5,1,3)
plt.plot(t-max(t)/2,np.convolve(h,x,mode="same"),lw=3)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title(r'(c) Convolution: $y(t)=h(t)\ast x(t)$',size=16)
plt.axis([-0.5,0.5,-0.1,2.5])
# . . Plot correlation of h and x
plt.subplot(5,1,4)
plt.plot(t-max(t)/2,np.correlate(x,h,mode="same"),lw=3)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title(r'(d) Correlation: $y_1(t)=x(t)\star h(t)$',size=16)
plt.axis([-0.5,0.5,-0.1,2.5])
# . . Plot correlation of x and h
plt.subplot(5,1,5)
plt.plot(t-max(t)/2,np.correlate(h,x,mode="same"),lw=3)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title(r'(e) Correlation: $y_2(t)=h(t)\star x(t)$',size=16)
plt.axis([-0.5,0.5,-0.1,2.5])
plt.tight_layout()
plt.show()
```
**Figure 14. Illustrating the differences between convolution and crosscorrelation. (a) Two signals to be analyzed. (b) Convolution $y(t) = x(t)\ast h(t)$. (c) Convolution $y(t) = h(t)\ast x(t)$, which yields the same result as in (b). (d) Crosscorrelation $y_1(t) = x(t) \star h(t)$. (e) Crosscorrelation $y_2(t) = h(t) \star x(t)$, which yields a time-reversed version of (d).**
## Autocorrelation Theorem
The **autocorrelation** of a function $f(t)$, here denoted by $a_f(t) = f(t) \star f(t)$, has the following relationship:
$$ \widehat{A_f}(\omega) = \overline{\widehat{F}(\omega)}\widehat{F}(\omega) = \left|\widehat{F}(\omega)\right|^2. \tag{52} $$
<div class="alert alert-success">
**PROOF:** Let's start with the definition of the autocorrelation (the cross-correlation of $f(t)$ with itself):
$$
a_f(t) = \int_{-\infty}^{\infty} \overline{f(t^\prime) }\,f(t+t^\prime)\, \mathrm{d}t^\prime \tag{53}
$$
Applying a Fourier transform to both sides,
$$
\widehat{A_f}(\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \overline{f(t^\prime)}\,f(t+t^\prime)\,\mathrm{e}^{-i\omega t} \mathrm{d}t^\prime \mathrm{d}t. \tag{54}
$$
Let $\tau = t+t^\prime$. Then if $t^\prime$ is held constant, $\mathrm{d}t=\mathrm{d}\tau$,
$$
\widehat{A_f}(\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\overline{f(t^\prime)}\,f(\tau)\,\mathrm{e}^{-i\omega(\tau-t^\prime)} \mathrm{d}t^\prime \mathrm{d}\tau. \tag{55}
$$
which can be separated into two integrals:
$$
\widehat{A_f}(\omega) =
\int_{-\infty}^{\infty} \overline{f(t^\prime)} \,\mathrm{e}^{i\omega t^\prime}\,\mathrm{d}t^\prime
\cdot
\int_{-\infty}^{\infty} f(\tau)\,\mathrm{e}^{-i\omega \tau} \, \mathrm{d}\tau \tag{56}
$$
which leads to
$$ \widehat{A_f}(\omega) = \overline{\widehat{F}(\omega)}\widehat{F}(\omega) = \left|\widehat{F}(\omega)\right|^2. \tag{57}$$
</div>
Note that taking the Fourier transform of a function's **autocorrelation** is the equivalent of computing its **power spectrum** (the Wiener-Khinchin theorem). Let's now look at an autocorrelation example.
```
# . . Simple convolutional model demonstration
nt=201
t = np.arange(0,1,1/nt)
tauline = np.arange(-1,0.99,1/nt)
xline = np.arange(0,1,1/nt)
# . . Input signal 1
h = np.zeros(nt)
h[40:70]=np.arange(0,30,1)/40
# . . F domain
freq1=np.fft.fftshift(np.fft.fftfreq(len(t),1/nt))
# . . Plot two signals
plt.figure(figsize=(12, 6))
plt.subplot(2,2,1)
plt.plot(t,h,"#325cab",lw=5)
plt.xlabel('Samples (time)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title('(a) h(t)')
plt.axis([0,1,-0.1,1])
plt.subplot(2,2,2)
plt.plot(freq1,np.fft.fftshift(np.abs(np.fft.fft(h))),"#325cab",lw=5)
plt.xlabel('Frequency (Hz)',size=12)
plt.ylabel('$|H(\omega)|$',size=12)
plt.title('(b) Magnitude Spectrum')
#plt.axis([0,1,-0.1,1])
# . . Plot correlation of x and h
plt.subplot(2,2,3)
plt.plot(t-max(t)/2,np.correlate(h,h,mode="same"),lw=5)
plt.xlabel('Samples (tau)',size=12)
plt.ylabel('Amplitude',size=12)
plt.title('(c) Autocorrelation: $h(t)\star h(t)$')
plt.axis([-0.5,0.5,-0.1,8])
plt.subplot(2,2,4)
plt.plot(freq1,np.fft.fftshift(np.abs(np.fft.fft(np.correlate(h,h,mode="same")))),"#325cab",lw=5)
plt.xlabel('Frequency (Hz)',size=12)
plt.ylabel('$|H(\omega)|^2$',size=12)
plt.title('(d) Autocorrelation power spectrum')
#plt.axis([0,1,-0.1,1])
plt.tight_layout()
plt.show()
```
**Figure 15. Illustration of the autocorrelation theorem. (a) Input signal $h(t)$. (b) Fourier magnitude spectrum of (a). (c) Autocorrelation $h(t)\star h(t)$. (d) Autocorrelation power spectrum $|H(\omega)|^2$ of (c).**
# Deconvolution (and Polynomial multiplication/division)
I was also interested in SciPy's ability to perform deconvolution. Recall that if $y(t) = x(t)\ast h(t)$, then deconvolution performs the equivalent of
$$x(t) = \mathcal{F}^{-1}\left[\frac{\widehat{Y}(\omega)}{\widehat{H}(\omega)}\right]. \tag{58}$$
However, it does not compute this directly from the Fourier spectra. Rather, it finds a **polynomial expression** that adequately approximates $h(t)$ and then effectively uses polynomial division to remove the influence of $h(t)$ from $y(t)$.
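For contrast, Eq. (58) can be implemented directly by spectral division. Here is a sketch under the assumption of a circular, well-conditioned setting (this is *not* what `scipy.signal.deconvolve` does internally, which uses polynomial division):

```python
import numpy as np

# Hypothetical example: recover x from y = x conv h by spectral division (Eq. 58)
n = 128
x = np.zeros(n)
x[40:60] = 1.0                                  # box-car input
h = np.exp(-0.3 * np.arange(n))
h /= h.sum()                                    # normalized smoothing kernel

# Circular convolution via FFT: Y = X H
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Deconvolution by direct spectral division; this kernel has no near-zero
# spectral values, so the division is stable (a tiny eps guards roundoff)
eps = 1e-12
x_rec = np.fft.ifft(np.fft.fft(y) / (np.fft.fft(h) + eps)).real

print(np.allclose(x_rec, x, atol=1e-6))
```

With a filter whose spectrum does approach zero (a Gaussian, say), this naive division blows up, which is exactly the stability issue discussed below.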
This is somewhat analogous to the polynomial multiplication and division that you may have done in earlier courses. For example, if $x(t)=t^2+1$ and $h(t)=2t+7$, then $y(t)=x(t)\ast h(t)$ is the **convolution of these two polynomials**. Let these vectors represent the polynomial coefficients: $u=[1,0,1]$ and $v=[2,7]$. The resulting **discrete convolution** of $u$ and $v$ is $[2,7,2,7]$ or
$$y(t) = 2t^3+7t^2+2t+7, \tag{59}$$
which is equivalent to **polynomial multiplication** of the two functions $x(t)$ and $h(t)$. Let's verify this numerically:
```
u=[1,0,1]
v=[2,7]
aa=np.convolve(u,v,mode="full")
print("u:",u)
print("v:",v)
print("The convolution of u and v is:",aa)
```
The **deconvolution** of $h(t)$ from $y(t)$ is then equivalent to **polynomial division**:
$$x(t) = \frac{2t^3+7t^2+2t+7}{2t+7} = \frac{(t^2+1)(2t+7)}{2t+7} = t^2+1. \tag{60}$$
This approach works great if you can find a **stable** polynomial approximation for $h(t)$.
```
import scipy.signal  # needed here; not imported at the top of the notebook

recovered, remainder = scipy.signal.deconvolve(aa, u)
print("The deconvolution of aa by u is:",recovered)
print("The remainder is:",remainder)
```
This doesn't always exist ... and we'll have to dig a bit deeper in a few weeks when discussing Z-transforms to figure out why. For example, if we modify the above expression by changing the final 7 into a 6, the numerator no longer factors cleanly and the division leaves a remainder:
$$x(t) = \frac{2t^3+7t^2+2t+6}{2t+7}. \tag{61}$$
```
aanew = [2,7,2,6]
recovered,remainder = scipy.signal.deconvolve(aanew,u)
print("The deconvolution of aanew by u is:",recovered)
print("The remainder is:",remainder)
```
Let's explore an example where convolution and then deconvolution is being applied using the following two function calls:
* *y = np.convolve(x,h,mode='same')*
* *x,remainder=scipy.signal.deconvolve(y,h)*
```
# . . Define plot functions
def plot_time_domain(tline,f,plotcol,title=""):
plt.plot(tline,f, color=plotcol, label="original",lw=5)
plt.xlabel('Time (s)',fontsize=16)
plt.ylabel('Amplitude',fontsize=16)
plt.title(title,fontsize=16)
plt.axis([np.min(tline),np.max(tline),-0.2*np.max(f),1.1*np.max(f)])
def plot_freq_domain(fline,F,plotcol,title):
plt.plot(fline,F, color=plotcol, label="original",lw=5)
plt.xlabel('Frequency (Hz)',fontsize=16)
plt.ylabel('Amplitude',fontsize=16)
plt.title(title,fontsize=16)
plt.axis([np.min(fline),np.max(fline),-0.2*np.max(F),1.1*np.max(F)])
nt=300
random_noise_level=0.001 # . . For Gaussian filter
DC_term = 0.00#0.01 # . . For Gaussian Filter
# . . Time and Frequency Line
tline1 = np.arange(0,nt ,1)
tline2 = np.arange(0,nt/6,1)
# . . Let the signal be box-like
signal = np.repeat([0., 0., 1., 0., 0.], int(nt/5))  # repeat count must be an integer
fline1 = np.arange(-len(signal)/2, len(signal)/2, 1)/nt
fline2 = 6*np.arange(-len(signal)/12,len(signal)/12,1)/nt
# . . Use a gaussian filter
gauss = np.exp(-( (tline2-25.)/4)**2 ) +random_noise_level*np.abs(np.random.randn(int(nt/6)))+DC_term
# . . Calculate the convolution and normalize
filtered = np.convolve(signal, gauss, mode='same') / np.sum(gauss)
# . . Now let's call the deconvolution script!
deconv, devisor = scipy.signal.deconvolve( filtered, gauss )
#the deconvolution has n = len(signal) - len(gauss) + 1 points
n = len(signal)-len(gauss)+1
# so we need to expand it by
s = int((len(signal)-n)/2)
#on both sides.
deconv_res = np.zeros(len(signal))
deconv_res[s:len(signal)-s-1] = deconv
deconv = deconv_res*np.sum(gauss)
#### Plot ####
plt.figure(figsize=(12, 12))
plt.subplot(4,2,1)
plot_time_domain(tline1,signal,"#907700",title="(a) Box Car - Time Domain")
plt.subplot(4,2,2)
plot_freq_domain(fline1,np.fft.fftshift(np.abs(np.fft.fft(signal)))/(3*nt),"#907700",title="(b) Box Car - Freq Domain")
plt.subplot(4,2,3)
plot_time_domain(tline2,gauss,"#68934e",title="(c) Gaussian - Time Domain")
plt.subplot(4,2,4)
plot_freq_domain(fline2,np.fft.fftshift(np.abs(np.fft.fft(gauss)))/(3*nt),"#68934e",title="(d) Gaussian - Freq Domain")
plt.subplot(4,2,5)
plot_time_domain(tline1,filtered,"#325cab",title="(e) Box Car - Gaussian Convolution - Time Domain")
plt.subplot(4,2,6)
plot_freq_domain(fline1,np.fft.fftshift(np.abs(np.fft.fft(filtered)))/(3*nt),"#325cab",title="(f) Box Car - Gaussian Convolution- Freq Domain")
plt.subplot(4,2,7)
plot_time_domain(tline1,deconv,"#ab4232",title="(g) Deconvolved Result - Time Domain")
plt.subplot(4,2,8)
plot_freq_domain(fline1,np.fft.fftshift(np.abs(np.fft.fft(deconv)))/(3*nt),"#ab4232",title="(h) Deconvolved Result - Freq Domain")
plt.tight_layout()
plt.show()
```
**Figure 16. Illustration of the deconvolution process involving both time- (left panels) and frequency (right panels). (a) Original box car function and (b) associated Fourier magnitude spectrum. (c) Gaussian function and (d) the associated Fourier magnitude spectrum. (e) Convolution of box car in (a) and Gaussian in (c) along with (f) the Fourier magnitude spectrum of the convolved result. (g) The result of deconvolving the Gaussian in (c) from the convolved result in (e) along with (h) the associated deconvolved spectrum.**
Overall, this process seems to have produced a fairly stable result. However, if you play around with the parameters in the code section above (e.g., the *random_noise_level* or *DC_term* values), you'll see that things become unstable mighty quickly!
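One standard remedy from seismic processing is *water-level* deconvolution: before dividing spectra, clip the filter's spectral magnitude from below so small values cannot blow up the quotient. A minimal sketch (the `water_level` fraction is a tuning choice, not something taken from the code above):

```python
import numpy as np

def waterlevel_deconv(y, h, water_level=0.01):
    """Deconvolve h from y by spectral division with a clipped |H|.

    Spectral magnitudes of h below water_level * max|H| are raised to that
    floor (phase preserved), so the division stays bounded.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    floor = water_level * np.abs(H).max()
    mag = np.maximum(np.abs(H), floor)        # clip magnitude from below
    H_reg = mag * np.exp(1j * np.angle(H))    # keep the original phase
    return np.fft.ifft(np.fft.fft(y) / H_reg).real
```

With a well-conditioned filter the floor never engages and recovery is essentially exact; with a Gaussian-like filter (near-zero high frequencies) the floor trades a small bias for stability.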
## Deconvolution in practice
So far "decon" is an interesting theoretical operation ... but does it actually matter in practice? The answer is a resounding yes! Because we are using seismic waves to interrogate the Earth to learn more about its structure, it's very important that we attempt to remove as much of the wavelet effect as possible, since the wavelet broadens the response and reduces the bandwidth (the range between the minimum and maximum usable frequencies). Let's look at two examples:
<img src="Fig/DECON1.jpg" width="800">
**Figure 17. 2D seismic section showing the benefits of applying 2D deconvolution to land data. (a) Seismic image without applying deconvolution. (b) Same image as in (a) but after applying deconvolution. Note the improvement in resolution! (Credit: Cary, CSEG Recorder)**
<img src="Fig/DECON2.jpg" width="800">
**Figure 18. 2D seismic section showing the benefits of applying 2D deconvolution to marine data. (a) Seismic image without applying deconvolution. (b) Same image as in (a) but after applying deconvolution. Again, note the improvement in resolution! (Credit: Schlumberger)**
## References
1. Bracewell, R.N., 1965, The Fourier Transform and Applications, McGraw-Hill, New York.
2. James, J.F. 2011, A Student's Guide to Fourier Transforms, 3rd ed, Cambridge University Press.
# Examine the expanded Hillsborough data set for socioeconomic & demographic correlations.
### Header information
*DataDive goal targeted:* "Expand the list of ACS variables available for analysis by joining the processed dataset with the full list of data profile variables."
*Contact info*: Josh Hirner, jhirner@gmail.com
```
import pandas as pd
```
### Import & clean up the required data sets.
```
hb_expanded = pd.read_csv("../../data/processed/hillsborough_fl_processed_expanded_ACS_2017_to_2019_20210225.csv", index_col = 0)
hb_expanded.head()
hb_expanded.describe()
```
None of these values ought to be negative... What's going on here?
```
hb_expanded.sort_values("median-gross-rent", ascending = True)
```
It looks like this is probably a data-entry or scraping error. Drop any rows containing negative values.
```
# Keep only rows whose numeric columns are all non-negative
# (indexing with a boolean DataFrame would mask values to NaN rather than drop rows)
hb_expanded = hb_expanded[(hb_expanded.select_dtypes(include = "number") >= 0).all(axis = 1)]
acs_dict = pd.read_csv("../../data/acs/data_dictionary.csv")
acs_dict.head()
```
### Extract only the data of interest for correlations.
Let's examine the average housing loss rates (`avg-foreclosure-rate`, `avg-lien-foreclosure-rate`, `avg-eviction-rate`, and `avg-housing-loss-rate` for correlations with the full set of ACS socioeconomic & demographic data.
```
# Snag the columns of interest (those with average housing loss rates over the 3 years studied).
avg_rate_cols = ["avg-foreclosure-rate", "avg-lien-foreclosure-rate", "avg-eviction-rate", "avg-housing-loss-rate"]
# Also snag a list of all the columns containing DP-coded ACS socioeconomic & demographic data.
acs_dp_cols = list(hb_expanded.columns[73:])
acs_dp_cols[0], acs_dp_cols[-1]
```
### Calculate Spearman correlation coefficients.
Calculate Spearman correlation coefficients for the average housing loss parameters.
```
hb_all_corrs = hb_expanded[avg_rate_cols + acs_dp_cols].corr(method = "spearman")
hb_all_corrs
```
More than 1000 variables is far too many to visualize on a heatmap. Let's select only the ACS variables (`sel_acs`) which correlate to housing loss above a minimum absolute `threshold`.
```
threshold = 0.65
sel_acs = []
def get_sel_acs(row):
for avg_metric in avg_rate_cols:
if abs(row[avg_metric]) >= threshold:
sel_acs.append(row.name)
# Generate the list of ACS variables that exceed the threshold.
hb_all_corrs.apply(get_sel_acs, axis = 1)
# Remove duplicates and the average housing loss metrics
sel_acs = list(set(sel_acs))
for avg_metric in avg_rate_cols:
sel_acs.remove(avg_metric)
len(sel_acs)
```
### Assign human-readable labels to filtered correlation matrix.
Filter the correlation matrix by `sel_acs`, then use the dictionary of ACS variables `acs_dict` to assign human readable labels to each.
```
# Filter by `sel_acs`. There's probably a much more elegant way to do this, but it works.
hb_top_corrs = pd.DataFrame(columns = ["acs_variable"])
hb_top_corrs["acs_variable"] = sel_acs
hb_top_corrs.set_index("acs_variable", inplace = True)
hb_top_corrs = pd.merge(hb_top_corrs, hb_all_corrs[avg_rate_cols], left_index = True, right_index = True, how = "left")
hb_top_corrs.head()
# Filter so that we only have ACS variables ending in "PE" (percentages) rather than simply "E" (totals).
hb_top_corrs = hb_top_corrs.loc[hb_top_corrs.index.str.contains("PE"), : ]
# Assign human-readable labels.
hb_top_corrs = pd.merge(hb_top_corrs, acs_dict[["variable", "label"]], left_index = True, right_on = "variable")
pd.set_option("display.max_colwidth", None)
hb_top_corrs
```
**The table above shows Spearman correlation coefficients for four housing loss metrics (averaged over 2017-2019) and the most strongly correlated socioeconomic data from the American Community Survey.**
### **Observations**
There are a few interesting relationships that are present in this list of correlation coefficients.
(Please note that for some race-based parameters, correlation coefficients might not be meaningful -- especially for Native American & Pacific Islander populations. It would be worthwhile to confirm whether or not these apparent correlations are due to the strange statistics of small numbers. Qualitatively, they seem suspicious.)
* Some ACS survey parameters are correlated to all 3 types of housing loss (mortgage foreclosure, tax foreclosure, or eviction) with roughly comparable magnitudes. These parameters could be predictive for housing loss through any mechanism.
* DP02_0019PE: The percentage of households with a spouse present is **negatively** correlated with housing loss.
* DP05_0065PE: The percentage of households with 1+ black or African American residents is **positively** correlated with housing loss.
* DP05_0064PE / DP05_0037PE: The percentage of households with 1+ white residents / the percentage of white residents in the census tract is **negatively** correlated with housing loss.
* Potentially more interestingly, some ACS survey parameters are strongly correlated with mortgage foreclosures & evictions, but only weakly correlated to tax lien foreclosures. This suggests the possibility that the underlying causes of tax lien foreclosures may be fundamentally different from mortgage foreclosures & evictions, or at least more complex.
* DP03_0097PE & DP03_0116PE: Health insurance coverage (negative correlation).
* DP03_0134PE & DP03_0128PE: Household income is below poverty level (positive correlation).
* DP03_0074PE: Receipt of food stamp / SNAP benefits (positive correlation)
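The small-numbers caveat above can be probed with a quick bootstrap: resample tracts with replacement and see how wide the Spearman coefficient's spread is. Here is a sketch on synthetic stand-in columns (the real `hb_expanded` columns would drop in directly); a very wide bootstrap interval flags a correlation as statistically fragile:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Synthetic stand-ins: a rare-population percentage (mostly zeros) vs. a
# housing-loss rate, across ~300 "tracts"
rare_pct = np.where(rng.random(300) < 0.05, rng.random(300) * 5, 0.0)
loss_rate = rng.random(300)

boot = []
for _ in range(1000):
    idx = rng.integers(0, 300, 300)            # resample tracts with replacement
    r, _ = spearmanr(rare_pct[idx], loss_rate[idx])
    boot.append(r)

boot = np.array(boot)
print(f"bootstrap Spearman r: {np.nanmean(boot):.3f} +/- {np.nanstd(boot):.3f}")
```

For a variable that is nonzero in only a handful of tracts, the bootstrap spread balloons, which would be one concrete way to confirm (or dismiss) the "suspicious" race-based correlations noted above.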
```
###########################################
## Author : https://github.com/vidit1999 ##
###########################################
import re
import random
import joblib
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
accuracy_score, f1_score,
precision_score, recall_score,
plot_confusion_matrix
)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
warnings.filterwarnings('ignore')
%matplotlib inline
random.seed(0)
np.random.seed(0)
df = pd.read_csv("train.csv")
df.head()
df.isnull().sum()
stop_words = stopwords.words('english')
lemma = WordNetLemmatizer()
def clean_text(text):
text = text.lower() # lowering
text = text.encode("ascii", "ignore").decode() # non ascii chars
text = re.sub(r'\n',' ', text) # remove new-line characters
text = re.sub(r'\W', ' ', text) # special chars
text = re.sub(r'\s+[a-zA-Z]\s+', ' ', text) # single characters
text = re.sub(r'\^[a-zA-Z]\s+', ' ', text) # single char at first
text = re.sub(r'[0-9]', ' ', text) # digits
text = re.sub(r'\s+', ' ', text, flags=re.I) # multiple spaces
return ' '.join([lemma.lemmatize(word) for word in word_tokenize(text) if word not in stop_words])
df['title'].fillna('titleunknown', inplace=True)
df['author'].fillna('authorunknown', inplace=True)
df.isnull().sum()
df.info()
print(df['label'].value_counts())
df['label'].value_counts().plot(kind='pie', title='Label Counts Percentage', autopct='%.2f%%')
plt.show()
%%time
df_title_author = (df['title'] + ' ' + df['author']).apply(clean_text)
x_train, x_test, y_train, y_test = train_test_split(df_title_author, df['label'])
len(x_train), len(x_test)
tfidf_title_author = TfidfVectorizer(ngram_range=(1, 2))
train_x = tfidf_title_author.fit_transform(x_train)
test_x = tfidf_title_author.transform(x_test)
train_x, test_x
pac_title_author = PassiveAggressiveClassifier().fit(train_x, y_train)
y_pred = pac_title_author.predict(test_x)
print(f"Accuracy : {accuracy_score(y_test, y_pred)}")
print(f"Precision : {precision_score(y_test, y_pred)}")
print(f"Recall : {recall_score(y_test, y_pred)}")
print(f"F1-Score : {f1_score(y_test, y_pred)}")
plot_confusion_matrix(pac_title_author, test_x, y_test, display_labels=['Reliable', 'Unreliable'])
plt.show()
joblib.dump([tfidf_title_author, pac_title_author], 'checkpoint_ml_2.joblib')
```
# Tutorial 1: How to use abcTau package to fit autocorrelations or PSDs
Details of the method are explained in:
Zeraati, R., Engel, T. A. & Levina, A. A flexible Bayesian framework for unbiased estimation of timescales. bioRxiv 2020.08.11.245944 (2021). https://www.biorxiv.org/content/10.1101/2020.08.11.245944v2
To start you need to have:
- Python >= 3.7.1
- Numpy >= 1.15.4
- Scipy >= 1.1.0

and for visualizations:
- Matplotlib >= 3.0.2
- Seaborn >= 0.9.0
You can install seaborn from https://seaborn.pydata.org/installing.html
```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns # comment this line if you don't want to use seaborn for plots
import numpy as np
from scipy import stats
# add the path to the abcTau package
import sys
sys.path.append('./abcTau')
import abcTau
# graphical properties for figures
sns.set_context('talk',font_scale= 1.5)
plt.rcParams["axes.edgecolor"] = "k"
plt.rcParams["axes.facecolor"] = "w"
plt.rcParams["axes.linewidth"] = "0.8"
plt.rcParams.update({'font.size': 12})
```
# Step 1: Extracting required statistics from real data
First load your data and use "extract_stats" functions to compute statistics.
```
#---------- load real data (OU process with one timescale)
dataload_path = 'example_data/'
filename = 'OU_tau100_T1000_fs1000_trials500'
data_load = np.load(dataload_path + filename + '.npy')
#---------- extract statistics from real data
# select summary statistics metric
summStat_metric = 'comp_ac_fft'
ifNorm = True # if normalize the autocorrelation or PSD
deltaT = 1 # temporal resolution of data.
binSize = 1 # bin-size for binning data and computing the autocorrelation.
disp = None # put the dispersion parameter if computed with grid-search
maxTimeLag = 500 # only used when using the autocorrelation as summary statistic
lm = round(maxTimeLag/binSize) # maximum bin for autocorrelation computation
data_ac, data_mean, data_var, T, numTrials = abcTau.preprocessing.extract_stats(data_load, deltaT, binSize,\
summStat_metric, ifNorm, maxTimeLag)
```
# Step 2 (optional): Check the bias in timescales estimated from exponential fits (preprocessing module using parametric bootstrapping)
```
gt_tau = 100 # ground-truth timescale
# fit
popt, poptcov = abcTau.preprocessing.fit_oneTauExponential(data_ac, binSize, maxTimeLag)
tau = popt[1]
# check if estimated timescales with exponential fit are biased or not
theta = np.array([tau])
numTimescales = 1
taus_bs, taus_bs_corr, err = abcTau.preprocessing.check_expEstimates(theta, deltaT, binSize, T, numTrials, \
data_mean, data_var, maxTimeLag, numTimescales,\
numIter = 100, plot_it = True)
```
This result indicates that direct exponential fitting gives biased estimates with the current amount of data. We can use the aABC algorithm to get a more accurate estimate of the timescales. aABC additionally provides the posterior distribution of timescales, which we can use for model comparison (see Tutorial 2).
# Step 3: Estimation of timescales using the aABC algorithm
In the following, we explain how to customize your own Python script step-by-step such that it best fits your data and your desired generative model. We provided 3 different examples:
1) Fitting one timescale in time domain
2) Fitting one timescale in frequency domain
3) Fitting two timescales in time domain
The duration of fitting the generative model with the aABC algorithm depends on the size of your data, the selected summary statistic (e.g., the PSD is fastest), the number of accepted samples in the posterior (min_samples), the final selected acceptance rate (minAccRate), and the computational resources (e.g., number of cores for parallel processing). Depending on these parameters, the fitting takes between roughly 5-20 min (usually when fitting the PSD as the summary statistic with parallel processing; see Supplementary Table 1 in the paper for more details) and 4-8 hours (without any parallel processing).
### [1] Fitting a general type of generative model in the time domain:
This example takes about 2 hours to run on a normal computer without any parallel processing; using parallel computing (parallel = True) can cut this time at least in half. You can check the provided example Python scripts for parallel processing.
The following is the example in Fig. 3A.
1) import the package and set the parameters for parallel processing (if applicable):
```
# add the path to the abcTau package
import sys
sys.path.append('./abcTau')
# import the package
import abcTau
import numpy as np
from scipy import stats
# setting the number of cores for each numpy computation in multiprocessing
# import os
# os.environ["OMP_NUM_THREADS"] = "2"
# os.environ["OPENBLAS_NUM_THREADS"] = "2"
# os.environ["MKL_NUM_THREADS"] = "2"
# os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
# os.environ["NUMEXPR_NUM_THREADS"] = "2"
```
2) Define the directories and filenames for loading the data and saving the results
```
# path for loading and saving data
datasave_path = 'example_abc_results/'
dataload_path = 'example_data/'
# path and filename to save the intermediate results after running each step
inter_save_direc = 'example_abc_results/'
inter_filename = 'abc_intermediate_results'
# define filename for loading and saving the results
filename = 'OU_tau20_mean0_var1_rawData'
filenameSave = filename
```
3) Load the real data and extract required statistics
```
# load data time-series as a numpy array (numTrials * time-points)
data_load = np.load(dataload_path + filename + '.npy')
# select summary statistics metric
summStat_metric = 'comp_ac_fft'
ifNorm = True # if normalize the autocorrelation or PSD
# extract statistics from real data
deltaT = 1 # temporal resolution of data.
binSize = 1 # bin-size for binning data and computing the autocorrelation.
disp = None # put the dispersion parameter if computed with grid-search
maxTimeLag = 50 # only used when using the autocorrelation as summary statistic
data_sumStat, data_mean, data_var, T, numTrials = abcTau.preprocessing.extract_stats(data_load, deltaT, binSize,\
summStat_metric, ifNorm, maxTimeLag)
```
4) Define the prior distributions
```
# Define a uniform prior distribution over the given range
# for a uniform prior: stats.uniform(loc=x_min,scale=x_max-x_min)
t_min = 0.0 # first timescale
t_max = 100.0
priorDist = [stats.uniform(loc= t_min, scale = t_max - t_min)]
```
5) Select the desired generative model from the list in 'generative_models.py' and the distance function from 'distance_functions.py'
```
# select generative model and distance function
generativeModel = 'oneTauOU'
distFunc = 'linear_distance'
```
6) Set the aABC fitting parameters
```
# set fitting params
epsilon_0 = 1 # initial error threshold
min_samples = 100 # min samples from the posterior
steps = 60 # max number of iterations
minAccRate = 0.01 # minimum acceptance rate to stop the iterations
parallel = False # if parallel processing
n_procs = 1 # number of processor for parallel processing (set to 1 if there is no parallel processing)
```
7) Create the model object: Just copy paste the following (this a general definition of the model object, but all parts including generative models, summary statistics computation or distance function can be replaced by your own functions. You can add your handmade functions inside respected modules: "generative_models.py", "distance_functions.py", "summary_stats.py")
```
# creating model object
class MyModel(abcTau.Model):
#This method initializes the model object.
def __init__(self):
pass
# draw samples from the prior.
def draw_theta(self):
theta = []
for p in self.prior:
theta.append(p.rvs())
return theta
# Choose the generative model (from generative_models)
# Choose autocorrelation computation method (from basic_functions)
def generate_data(self, theta):
# generate synthetic data
if disp == None:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var)')
else:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var, disp)')
# compute the summary statistics
syn_sumStat = abcTau.summary_stats.comp_sumStat(syn_data, summStat_metric, ifNorm, deltaT, binSize, T,\
numBinData, maxTimeLag)
return syn_sumStat
# Computes the summary statistics
def summary_stats(self, data):
sum_stat = data
return sum_stat
# Choose the method for computing distance (from basic_functions)
def distance_function(self, data, synth_data):
if np.nansum(synth_data) <= 0: # in case of all nans return large d to reject the sample
d = 10**4
else:
d = eval('abcTau.distance_functions.' +distFunc + '(data, synth_data)')
return d
```
8) Run the aABC algorithm and save the results
```
# fit with aABC algorithm for any generative model
abc_results, final_step = abcTau.fit.fit_withABC(MyModel, data_sumStat, priorDist, inter_save_direc, inter_filename,\
datasave_path,filenameSave, epsilon_0, min_samples, \
steps, minAccRate, parallel, n_procs, disp)
```
Done :)
Just copy paste the content of all the above cells in a text file and rename it to "scriptName.py"
Then, run it from terminal as: python scriptName.py
### [2] Fitting a general type of generative model in the frequency domain:
Depending on your application you can also use the aABC method to extract timescales and other parameters from the frequency domain by fitting the sample power spectrum density (PSD).
For this purpose, we use exactly the same script as for the time domain, and only replace the autocorrelation computation with PSD computation.
### Attention! You should use exactly the same method for computing the PSD of real data and synthetic data!
We have implemented a simple PSD estimation function in the abcTau package, but depending on your application you can replace it with any other method, such as those available in SciPy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.welch.html
However, the result of the aABC algorithm generally does not depend on the choice of PSD computation method, as long as the same method is used for both real and synthetic data.
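To make the "same method" warning concrete, here is a standalone sketch (using SciPy directly, independent of abcTau's own `comp_psd`) showing that two legitimate PSD estimators give visibly different estimates on identical data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
x = rng.standard_normal(2048)
fs = 1000.0

# Plain periodogram: squared magnitude of the FFT (one noisy estimate)
f_per, p_per = signal.periodogram(x, fs=fs)

# Welch: averaged over overlapping windowed segments (smoother, window-biased)
f_w, p_w = signal.welch(x, fs=fs, nperseg=256)

# The per-bin spread differs substantially between the two estimators
print("periodogram relative spread:", p_per.std() / p_per.mean())
print("welch relative spread:      ", p_w.std() / p_w.mean())
```

Mixing the two (one for real data, one for synthetic) would make the distance function compare apples to oranges, so the fit must route both through the identical estimator.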
This example takes about 1 hour and 20 min to run on a normal computer without any parallel processing, using parallel computing can speed up the fitting to a few minutes.
```
# add the path to the abcTau package
import sys
sys.path.append('./abcTau')
# import the package
import abcTau
import numpy as np
from scipy import stats
# setting the number of cores for each numpy computation in multiprocessing
# import os
# os.environ["OMP_NUM_THREADS"] = "2"
# os.environ["OPENBLAS_NUM_THREADS"] = "2"
# os.environ["MKL_NUM_THREADS"] = "2"
# os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
# os.environ["NUMEXPR_NUM_THREADS"] = "2"
# path for loading and saving data
datasave_path = 'example_abc_results/'
dataload_path = 'example_data/'
# path and filename to save the intermediate results after running each step
inter_save_direc = 'example_abc_results/'
inter_filename = 'abc_intermediate_results_psd'
# load real data and define filenameSave
filename = 'OU_tau20_mean0_var1_rawData'
filenameSave = filename
print(filename)
# load data time-series as a numpy array (numTrials * time-points)
data_load = np.load(dataload_path + filename + '.npy')
# select summary statistics metric
summStat_metric = 'comp_psd'
ifNorm = True # if normalize the autocorrelation or PSD
# extract statistics from real data
deltaT = 1 # temporal resolution of data.
binSize = 1 # bin-size for binning data and computing the autocorrelation.
disp = None # put the disperssion parameter if computed with grid-search
maxTimeLag = None # only used when suing autocorrelation for summary statistics
data_sumStat, data_mean, data_var, T, numTrials = abcTau.preprocessing.extract_stats(data_load, deltaT, binSize,\
summStat_metric, ifNorm, maxTimeLag)
# Define the prior distribution
# for a uniform prior: stats.uniform(loc=x_min,scale=x_max-x_min)
t_min = 0.0 # first timescale
t_max = 100.0
priorDist = [stats.uniform(loc= t_min, scale = t_max - t_min)]
# select generative model and distance function
generativeModel = 'oneTauOU'
distFunc = 'logarithmic_distance'
# set fitting params
epsilon_0 = 1 # initial error threshold
min_samples = 100 # min samples from the posterior
steps = 60 # max number of iterations
minAccRate = 0.01 # minimum acceptance rate to stop the iterations
parallel = False # if parallel processing
n_procs = 1 # number of processor for parallel processing (set to 1 if there is no parallel processing)
# creating model object
class MyModel(abcTau.Model):
#This method initializes the model object.
def __init__(self):
pass
# draw samples from the prior.
def draw_theta(self):
theta = []
for p in self.prior:
theta.append(p.rvs())
return theta
# Choose the generative model (from generative_models)
# Choose autocorrelation computation method (from basic_functions)
def generate_data(self, theta):
# generate synthetic data
if disp == None:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var)')
else:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var, disp)')
# compute the summary statistics
syn_sumStat = abcTau.summary_stats.comp_sumStat(syn_data, summStat_metric, ifNorm, deltaT, binSize, T,\
numBinData, maxTimeLag)
return syn_sumStat
# Computes the summary statistics
def summary_stats(self, data):
sum_stat = data
return sum_stat
# Choose the method for computing distance (from basic_functions)
def distance_function(self, data, synth_data):
if np.nansum(synth_data) <= 0: # in case of all nans return large d to reject the sample
d = 10**4
else:
d = eval('abcTau.distance_functions.' +distFunc + '(data, synth_data)')
return d
# fit with aABC algorithm for any generative model
abc_results, final_step = abcTau.fit.fit_withABC(MyModel, data_sumStat, priorDist, inter_save_direc, inter_filename,\
datasave_path,filenameSave, epsilon_0, min_samples, \
steps, minAccRate, parallel, n_procs, disp)
```
### [3] Fitting the specific case of a two-timescale generative model in the time domain:
We developed a slightly different function for fitting two-timescale generative models to help them converge faster. While you can always use the script above for all models, the following options may speed up the fitting when your model has two timescales.
The following is the example in Fig. 3C.
1) define directories and prepare the model object same as above:
```
# add the path to the abcTau package
import sys
sys.path.append('./abcTau')
# import the package
import abcTau
import numpy as np
from scipy import stats
# setting the number of cores for each numpy computation in multiprocessing
# import os
# os.environ["OMP_NUM_THREADS"] = "2"
# os.environ["OPENBLAS_NUM_THREADS"] = "2"
# os.environ["MKL_NUM_THREADS"] = "2"
# os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
# os.environ["NUMEXPR_NUM_THREADS"] = "2"
# path for loading and saving data
datasave_path = 'example_abc_results/'
dataload_path = 'example_data/'
# path and filename to save the intermediate results after running each step
inter_save_direc = 'example_abc_results/'
inter_filename = 'abc_intermediate_results'
# Define the prior distribution
# for a uniform prior: stats.uniform(loc=x_min,scale=x_max-x_min)
t1_min = 0.0 # first timescale
t1_max = 60.0
t2_min = 20.0 # second timescale
t2_max = 140.0
coef_min = 0.0 # coefficient or weight of the first timescale
coef_max = 1.0
priorDist = [stats.uniform(loc= t1_min, scale = t1_max - t1_min),\
stats.uniform(loc= t2_min, scale = t2_max - t2_min),\
stats.uniform(loc= coef_min, scale = coef_max - coef_min)]
# select generative model and distance function
generativeModel = 'twoTauOU_poissonSpikes'
distFunc = 'linear_distance'
# load real data and define filenameSave
filename = 'inhomPois_tau5_80_coeff04_T1000_trials500_deltaT1_data_mean1_data_var1.25'
filenameSave = filename
print(filename)
data_load = np.load(dataload_path + filename + '.npy')
# select summary statistics metric
summStat_metric = 'comp_ac_fft'
ifNorm = True # if normalize the autocorrelation or PSD
# extract statistics from real data
deltaT = 1 # temporal resolution of data.
binSize = 1 # bin-size for binning data and computing the autocorrelation.
disp = None # set the dispersion parameter if computed with grid-search
maxTimeLag = 110 # only used when using autocorrelation for summary statistics
data_sumStat, data_mean, data_var, T, numTrials = abcTau.preprocessing.extract_stats(data_load, deltaT, binSize,\
summStat_metric, ifNorm, maxTimeLag)
# set fitting params
epsilon_0 = 1 # initial error threshold
min_samples = 100 # min samples from the posterior
steps = 60 # max number of iterations
minAccRate = 0.01 # minimum acceptance rate to stop the iterations
parallel = False # if parallel processing
n_procs = 1 # number of processor for parallel processing (set to 1 if there is no parallel processing)
# creating model object
class MyModel(abcTau.Model):
#This method initializes the model object.
def __init__(self):
pass
# draw samples from the prior.
def draw_theta(self):
theta = []
for p in self.prior:
theta.append(p.rvs())
return theta
# Choose the generative model (from generative_models)
# Choose autocorrelation computation method (from basic_functions)
def generate_data(self, theta):
# generate synthetic data
if disp == None:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var)')
else:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var, disp)')
# compute the summary statistics
syn_sumStat = abcTau.summary_stats.comp_sumStat(syn_data, summStat_metric, ifNorm, deltaT, binSize, T,\
numBinData, maxTimeLag)
return syn_sumStat
# Computes the summary statistics
def summary_stats(self, data):
sum_stat = data
return sum_stat
# Choose the method for computing distance (from basic_functions)
def distance_function(self, data, synth_data):
if np.nansum(synth_data) <= 0: # in case of all nans return large d to reject the sample
d = 10**4
else:
d = eval('abcTau.distance_functions.' +distFunc + '(data, synth_data)')
return d
```
2) Then use the following function for fitting instead:
```
# fit with aABC algorithm for the two-timescales generative model
abc_results, final_step = abcTau.fit.fit_withABC_2Tau(MyModel, data_sumStat, priorDist, inter_save_direc,\
inter_filename,\
datasave_path,filenameSave, epsilon_0, min_samples, \
steps, minAccRate, parallel, n_procs, disp)
```
# Resuming a previous aABC fit and letting it run for more iterations
It's possible to resume a previous fit with the aABC method and allow it to run from the point it stopped. This functionality is useful when the initial stopping criterion (e.g., minimum acceptance rate) was not sufficient to obtain narrow posteriors and you need better statistics for the fits without running the aABC fit from the beginning.
```
# add the path to the abcTau package
import sys
sys.path.append('./abcTau')
# import the package
import abcTau
import numpy as np
from scipy import stats
# setting the number of cores for each numpy computation in multiprocessing
# import os
# os.environ["OMP_NUM_THREADS"] = "2"
# os.environ["OPENBLAS_NUM_THREADS"] = "2"
# os.environ["MKL_NUM_THREADS"] = "2"
# os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
# os.environ["NUMEXPR_NUM_THREADS"] = "2"
# path for loading and saving data
datasave_path = 'example_abc_results/'
dataload_path = 'example_data/'
# path and filename to save the intermediate results after running each step
inter_save_direc = 'example_abc_results/'
inter_filename = 'abc_intermediate_results'
# load real data and define filenameSave
filename = 'OU_tau20_T1000_trials500_deltaT1_data_mean0_data_var1'
filenameSave = filename
print(filename)
data_load = np.load(dataload_path + filename + '.npy')
# select summary statistics metric
summStat_metric = 'comp_ac_fft'
ifNorm = True # if normalize the autocorrelation or PSD
# extract statistics from real data
deltaT = 1 # temporal resolution of data.
binSize = 1 # bin-size for binning data and computing the autocorrelation.
disp = None # set the dispersion parameter if computed with grid-search
maxTimeLag = 50 # only used when using autocorrelation for summary statistics
data_sumStat, data_mean, data_var, T, numTrials = abcTau.preprocessing.extract_stats(data_load, deltaT, binSize,\
summStat_metric, ifNorm, maxTimeLag)
# Define the prior distribution
# for a uniform prior: stats.uniform(loc=x_min,scale=x_max-x_min)
t_min = 0.0 # lower bound of the timescale prior
t_max = 100.0 # upper bound of the timescale prior
priorDist = [stats.uniform(loc= t_min, scale = t_max - t_min)]
# select generative model and distance function
generativeModel = 'oneTauOU'
distFunc = 'linear_distance'
# creating model object
class MyModel(abcTau.Model):
#This method initializes the model object.
def __init__(self):
pass
# draw samples from the prior.
def draw_theta(self):
theta = []
for p in self.prior:
theta.append(p.rvs())
return theta
# Choose the generative model (from generative_models)
# Choose autocorrelation computation method (from basic_functions)
def generate_data(self, theta):
# generate synthetic data
if disp == None:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var)')
else:
syn_data, numBinData = eval('abcTau.generative_models.' + generativeModel + \
'(theta, deltaT, binSize, T, numTrials, data_mean, data_var, disp)')
# compute the summary statistics
syn_sumStat = abcTau.summary_stats.comp_sumStat(syn_data, summStat_metric, ifNorm, deltaT, binSize, T,\
numBinData, maxTimeLag)
return syn_sumStat
# Computes the summary statistics
def summary_stats(self, data):
sum_stat = data
return sum_stat
# Choose the method for computing distance (from basic_functions)
def distance_function(self, data, synth_data):
if np.nansum(synth_data) <= 0: # in case of all nans return large d to reject the sample
d = 10**4
else:
d = eval('abcTau.distance_functions.' +distFunc + '(data, synth_data)')
return d
# load the previous aABC fit
data_abc_path = 'example_abc_results/'
filename = '1on_OU_tau20_T1000_lag50_steps43'
abc_results = np.load(data_abc_path + filename + '.npy', allow_pickle=True)
ind = filename.find('steps')
final_step = int(filename[ind + 5:ind + 7])  # two-digit step count encoded in the filename
abc_results_old = abc_results[:final_step]
# set new fitting params
epsilon_0 = 1 # initial error threshold
min_samples = 100 # min samples from the posterior
steps = 60 # max number of iterations
minAccRate = 0.005 # new minimum acceptance rate to stop the iterations
parallel = False # if parallel processing
n_procs = 1 # number of processor for parallel processing (set to 1 if there is no parallel processing)
# resume the aABC fitting
abc_results, final_step = abcTau.fit.fit_withABC(MyModel, data_sumStat, priorDist, inter_save_direc,\
inter_filename,\
datasave_path,filenameSave, epsilon_0, min_samples, \
steps, minAccRate, parallel, n_procs, disp, resume = abc_results_old)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats
import math
```
## Plots for statistics: OLS, Lasso, Ridge, OLS_Lasso, OLS_Ridge, Lasso_Ridge
```
# Generating 'fake' data
def gen_data(nobs, num_cov, m):
x_1 = np.random.normal(scale=1., size=(nobs))
x_2 = np.random.normal(scale=1., size=(nobs, num_cov))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = (x_1 * m) + e
return y, x_1, x_2
# Setup test
def setup_test_params(y, x_1, x_2, a, model):
X = np.column_stack((x_1, x_2))
if model == 1:
ols = sm.OLS(y, X).fit()
return ols
elif model == 2:
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
return lasso
elif model == 3:
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return ridge
elif model == 4:
ols = sm.OLS(y, X).fit()
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
return ols, lasso
elif model == 5:
ols = sm.OLS(y, X).fit()
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return ols, ridge
elif model == 6:
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return lasso, ridge
def standardize(array):
    """Multiply the sample mean by sqrt(n) and divide by the sample standard deviation."""
    return np.sqrt(len(array))*array.mean()/array.std()
# MSE
def setup_test_mse(n, k, a, m, model):
ytr, x_1tr, x_2tr = gen_data(nobs=n, num_cov=k, m=m)
Xtr = np.column_stack((x_1tr, x_2tr))
yte, x_1te, x_2te = gen_data(nobs=n, num_cov=k, m=m)
Xte = np.column_stack((x_1te, x_2te))
statistic = None
if model == 1:
ols = sm.OLS(ytr, Xtr).fit()
statistic = (yte-ols.predict(Xte))**2
elif model == 2:
lasso = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
statistic = (yte-lasso.predict(Xte))**2
elif model == 3:
ridge = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
statistic = (yte-ridge.predict(Xte))**2
elif model == 4:
ols = sm.OLS(ytr, Xtr).fit()
ols_mse = (yte-ols.predict(Xte))**2
lasso = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
lasso_mse = (yte-lasso.predict(Xte))**2
statistic = ols_mse - lasso_mse
elif model == 5:
ols = sm.OLS(ytr, Xtr).fit()
ols_mse = (yte-ols.predict(Xte))**2
ridge = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
ridge_mse = (yte-ridge.predict(Xte))**2
statistic = ols_mse - ridge_mse
elif model == 6:
lasso = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
lasso_mse = (yte-lasso.predict(Xte))**2
ridge = sm.OLS(ytr, Xtr).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
ridge_mse = (yte-ridge.predict(Xte))**2
statistic = lasso_mse - ridge_mse
return standardize(statistic)
# Calculate MSEs
def mse(lst, n, i, model):
lst_cols = ['statistic_' + str(i)]
df = pd.DataFrame(lst, columns=lst_cols)
print("Mean:", np.mean(df)[0], "Median:", np.median(df), "Mode:", stats.mode(df)[0], "Variance:", np.var(df)[0])
return plt.hist(df['statistic_'+str(i)], label='mse_'+str(i),alpha=0.5)
print(setup_test_mse(1000, 1, .1, 1, 1))
```
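As a quick sanity check (a hypothetical verification, not part of the original analysis), `standardize` applied to zero-mean i.i.d. noise should produce a roughly standard-normal statistic:

```python
import numpy as np

def standardize(array):
    """Multiply the sample mean by sqrt(n) and divide by the sample standard deviation."""
    return np.sqrt(len(array)) * array.mean() / array.std()

rng = np.random.default_rng(42)
z = standardize(rng.standard_normal(10_000))
# Under the null (zero mean), z is approximately N(0, 1), so |z| stays small
```

This is why the MSE-difference statistics below can be compared against a standard-normal reference when the two models predict equally well.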
### Varying values
```
# Vary number of observations
def vary_obs(model):
k = 10
m = 1
a = 0.1
n = [100,250,500,1000]
for i in n:
lst = []
for j in range(1000):
results = setup_test_mse(i, k, a, m, model)
lst.append(results)
output = mse(lst, i, i, model)
plt.legend()
plt.show()
# Vary alpha levels
def vary_alpha(model):
k = 10
m = 10
a = [0,0.1,0.5,1]
n = 1000
for i in a:
lst = []
for j in range(1000):
results = setup_test_mse(n, k, i, m, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
# Vary number of x variables
def vary_xvars(model):
k = [1,10,25,50]
m = 1
a = 0.1
n = 1000
for i in k:
lst = []
for j in range(1000):
results = setup_test_mse(n, i, a, m, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
# Vary the model with a multiplicative factor
def vary_multiply(model):
k = 10
m = [0.1,0.5,1,2]
a = 0.1
n = 1000
for i in m:
lst = []
for j in range(1000):
results = setup_test_mse(n, k, a, i, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
def params_scatter(model):
single_models = [1,2,3]
k = [1,10,25,50]
m = 1
a = 0.1
n = 1000
if model in single_models:
for i in k:
y, x_1, x_2 = gen_data(nobs=n, num_cov=i, m=m)
x = setup_test_params(y, x_1, x_2, a, model)
plt.scatter(range(len(x.params)), x.params, label=i)
plt.legend()
plt.show()
else:
for i in k:
y, x_1, x_2 = gen_data(nobs=n, num_cov=i, m=m)
x = setup_test_params(y, x_1, x_2, a, model)
for j in list(setup_test_params(y, x_1, x_2, a, model)):
plt.scatter(range(len(j.params)), j.params)
plt.legend(['model1','model2'])
plt.show()
# Model = 4 is OlS - Lasso
print('Vary Observations')
vary_obs(4)
print('Vary Alpha Levels')
vary_alpha(4)
print('Vary Multiplicative Factors')
vary_multiply(4)
print('Vary X Variables')
vary_xvars(4)
# Model = 5 is OlS - Ridge
print('Vary Observations')
vary_obs(5)
print('Vary Alpha Levels')
vary_alpha(5)
print('Vary Multiplicative Factors')
vary_multiply(5)
print('Vary X Variables')
vary_xvars(5)
# Model = 6 is Lasso - Ridge
print('Vary Observations')
vary_obs(6)
print('Vary Alpha Levels')
vary_alpha(6)
print('Vary Multiplicative Factors')
vary_multiply(6)
print('Vary X Variables')
vary_xvars(6)
```
# Working with MODFLOW-NWT v 1.1 option blocks
In MODFLOW-NWT an option block is present for the WEL file, UZF file, and SFR file. This block takes keyword arguments that are supplied in an option line in other versions of MODFLOW.
The `OptionBlock` class was created to provide compatibility with the MODFLOW-NWT option block and to allow the user to easily edit values within it.
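For reference, the OPTIONS block that MODFLOW-NWT reads looks roughly like this (a hypothetical UZF fragment built from the `nosurfleak` and `etsquare` keywords used later in this notebook; the exact keywords depend on the package):

```
OPTIONS
NOSURFLEAK
ETSQUARE 0.2
END
```

In other MODFLOW versions the same information would be supplied on a single option line instead.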
```
import os
import sys
import platform
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
from flopy.utils import OptionBlock
load_ws = os.path.join("..", "data", "options", "sagehen")
model_ws = os.path.join(load_ws, "output")
```
## Loading a MODFLOW-NWT model that has option block options
It is critical to set the `version` flag in `flopy.modflow.Modflow.load()` to `version='mfnwt'`.
We are going to load a modified version of the Sagehen test problem from GSFLOW to illustrate compatibility.
```
mfexe = "mfnwt"
if platform.system() == 'Windows':
mfexe += '.exe'
ml = flopy.modflow.Modflow.load('sagehen.nam', model_ws=load_ws,
exe_name=mfexe, version='mfnwt')
ml.change_model_ws(new_pth=model_ws)
ml.write_input()
success, buff = ml.run_model(silent=True)
if not success:
print ('Something bad happened.')
```
## Let's look at the options attribute of the UZF object
The `uzf.options` attribute is an `OptionBlock` object. The representation of this object is the option block that will be written to output, which allows the user to easily check to make sure the block has the options they want.
```
uzf = ml.get_package("UZF")
uzf.options
```
The `OptionBlock` object also has attributes which correspond to the option names listed in the online guide to MODFLOW.
The user can call and edit the options within the option block.
```
print(uzf.options.nosurfleak)
print(uzf.options.savefinf)
uzf.options.etsquare = False
uzf.options
uzf.options.etsquare = True
uzf.options
```
### The user can also see the single line representation of the options
```
uzf.options.single_line_options
```
### And the user can easily change to single line options writing
```
uzf.options.block = False
# write out only the uzf file
uzf_name = "uzf_opt.uzf"
uzf.write_file(os.path.join(model_ws, uzf_name))
```
Now let's examine the first few lines of the new UZF file
```
f = open(os.path.join(model_ws, uzf_name))
for ix, line in enumerate(f):
if ix == 3:
break
else:
print(line)
```
And let's load the new UZF file
```
uzf2 = flopy.modflow.ModflowUzf1.load(os.path.join(model_ws, uzf_name),
ml, check=False)
```
### Now we can look at the options object, and check if it's block or line format
`block=False` indicates that the options will be written in line format.
```
print(uzf2.options)
print(uzf2.options.block)
```
### Finally we can convert back to block format
```
uzf2.options.block = True
uzf2.write_file(os.path.join(model_ws, uzf_name))
ml.remove_package("UZF")
uzf3 = flopy.modflow.ModflowUzf1.load(os.path.join(model_ws, uzf_name),
ml, check=False)
print("\n")
print(uzf3.options)
print(uzf3.options.block)
```
## We can also look at the WEL object
```
wel = ml.get_package("WEL")
wel.options
```
Let's write this out as a single line option block and examine the first few lines
```
wel_name = "wel_opt.wel"
wel.options.block = False
wel.write_file(os.path.join(model_ws, wel_name))
f = open(os.path.join(model_ws,wel_name))
for ix, line in enumerate(f):
if ix == 4:
break
else:
print(line)
```
And we can load the new single line options WEL file and confirm that it is being read as an option line
```
ml.remove_package("WEL")
wel2 = flopy.modflow.ModflowWel.load(os.path.join(model_ws, wel_name),
ml, nper=ml.nper, check=False)
wel2.options
wel2.options.block
```
# Building an OptionBlock from scratch
The user can also build an `OptionBlock` object from scratch to add to a `ModflowSfr2`, `ModflowUzf1`, or `ModflowWel` file.
The `OptionBlock` class has two required parameters and one optional parameter:
- `option_line`: a one-line, string-based representation of the options
- `package`: a MODFLOW package object
- `block`: a boolean flag for line-based or block-based options
```
opt_line = "specify 0.1 20"
options = OptionBlock(opt_line, flopy.modflow.ModflowWel, block=True)
options
```
From here we can set the `noprint` flag by using `options.noprint`:
```
options.noprint = True
```
The user can also add auxiliary variables by using `options.auxillary`:
```
options.auxillary = ["aux", "iface"]
```
### Now we can create a new wel file using this `OptionBlock`
and write it to output
```
wel3 = flopy.modflow.ModflowWel(ml, stress_period_data=wel.stress_period_data,
options=options, unitnumber=99)
wel3.write_file(os.path.join(model_ws, wel_name))
```
And now let's examine the first few lines of the file
```
f = open(os.path.join(model_ws, wel_name))
for ix, line in enumerate(f):
if ix == 8:
break
else:
print(line)
```
We can see that the `OptionBlock` class writes out the options in the correct location.
### The user can also switch the options over to option line style and write out the output too!
```
wel3.options.block = False
wel3.write_file(os.path.join(model_ws, wel_name))
f = open(os.path.join(model_ws, wel_name))
for ix, line in enumerate(f):
if ix == 6:
break
else:
print(line)
```
# FEC Dataset
Download data from [https://berkeley-politics-capstone.s3.amazonaws.com/fec.zip](https://berkeley-politics-capstone.s3.amazonaws.com/fec.zip) and place in `"../data/"`
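If you'd rather fetch the archive programmatically, a standard-library sketch like the following should work (the helper name and layout are our own; adjust `data_dir` to match your checkout):

```python
import os
import urllib.request
import zipfile

FEC_URL = "https://berkeley-politics-capstone.s3.amazonaws.com/fec.zip"

def fetch_and_extract(url, data_dir):
    """Download a zip archive (if not already cached) and extract it into data_dir."""
    os.makedirs(data_dir, exist_ok=True)
    zip_path = os.path.join(data_dir, os.path.basename(url))
    if not os.path.exists(zip_path):
        urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(data_dir)
    return zip_path
```

Calling `fetch_and_extract(FEC_URL, os.path.join("..", "data"))` should leave the `fec/` folder where the cells below expect it.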
```
import os
import sys
import json
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
plt.style.use('seaborn-darkgrid')
# find the path to each fec file, store paths in a nested dict
fec_2020_paths = {}
base_path = os.path.join("..","data","fec","2020")
for party_dir in os.listdir(base_path):
if(party_dir[0]!="."):
fec_2020_paths[party_dir] = {}
for cand_dir in os.listdir(os.path.join(base_path,party_dir)):
if(cand_dir[0]!="."):
fec_2020_paths[party_dir][cand_dir] = {}
for csv_path in os.listdir(os.path.join(base_path,party_dir,cand_dir)):
if(csv_path.find("schedule_a")>=0):
fec_2020_paths[party_dir][cand_dir]["donations"] = \
os.path.join(base_path,party_dir,cand_dir,csv_path)
elif(csv_path.find("schedule_b")>=0):
fec_2020_paths[party_dir][cand_dir]["spending"] = \
os.path.join(base_path,party_dir,cand_dir,csv_path)
print(json.dumps(fec_2020_paths, indent=4))
# get data for each 2020 democrat and create a simple plot
fig, ax = plt.subplots(figsize=(12,8))
plot_scale = (10**6,"M","Millions")
for candid in fec_2020_paths["democrat"].keys():
if("donations" in fec_2020_paths["democrat"][candid].keys()):
df = pd.read_csv(fec_2020_paths["democrat"][candid]["donations"])
df["contribution_receipt_date"] = pd.to_datetime(df["contribution_receipt_date"])
ts = df.groupby(by="contribution_receipt_date")["contribution_receipt_amount"].sum()
print("{:s}: {:d} days of donations averaging ${:,.0f} per day".format(candid.title(), len(ts), ts.mean()))
plt.plot(ts/plot_scale[0], ".")
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("$%0.0f{:s}".format(plot_scale[1])))
ax.axes.set_ylabel("{:s} USD".format(plot_scale[2]))
plt.xticks(rotation="vertical")
plt.title("2020 Democratic Presidential Primary Fundraising")
plt.show()
# get donation counts for each 2020 democrat and create a simple plot
fig, ax = plt.subplots(figsize=(12,8))
plot_scale = (1,"","Millions")
for candid in fec_2020_paths["democrat"].keys():
if("donations" in fec_2020_paths["democrat"][candid].keys()):
df = pd.read_csv(fec_2020_paths["democrat"][candid]["donations"])
df["contribution_receipt_date"] = pd.to_datetime(df["contribution_receipt_date"])
ts = df.groupby(by="contribution_receipt_date")["contribution_receipt_amount"].count()
        print("{:s}: {:d} days of donations averaging {:,.0f} donations per day".format(candid.title(), len(ts), ts.mean()))
plt.plot(ts/plot_scale[0], ".")
plt.ylim(0,4000)
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%0.0f{:s}".format(plot_scale[1])))
ax.axes.set_ylabel("Number of donations")
plt.xticks(rotation="vertical")
plt.title("2020 Democratic Presidential Primary Donation Counts")
plt.show()
```
## Load libraries
```
!pip install -r requirements.txt
import sys
import os
import numpy as np
import pandas as pd
import seaborn as sns
from PIL import Image
import torch
import torch.nn as nn
import torch.utils.data as D
from torch.optim.lr_scheduler import ExponentialLR
import torch.nn.functional as F
from torch.autograd import Variable
from multiprocessing import Pool, cpu_count
from torchvision import transforms
from ignite.engine import Events
from scripts.ignite import create_supervised_evaluator, create_supervised_trainer
from ignite.metrics import Loss, Accuracy
from ignite.contrib.handlers.tqdm_logger import ProgressBar
from ignite.handlers import EarlyStopping, ModelCheckpoint
from ignite.contrib.handlers import LinearCyclicalScheduler, CosineAnnealingScheduler
import random
from tqdm import tqdm_notebook
from sklearn.model_selection import train_test_split
from efficientnet_pytorch import EfficientNet, utils as enet_utils
from scripts.evaluate import eval_model
from scripts.transforms import gen_transform_train_wo_norm
from scripts.plates_leak import apply_plates_leak
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
!ls /storage
!ls /storage/rxrxai
path_data = '/storage/rxrxai'
device = 'cuda'
batch_size = 4
torch.manual_seed(0)
model_name = 'efficientnet-b4'
init_lr = 3e-4
end_lr = 1e-7
# dataframes for training, cross-validation, and testing
df = pd.read_csv(path_data+'/train.csv')
df_test = pd.read_csv(path_data+'/test.csv')
df_all = pd.concat([df, df_test])
df_pixels = pd.read_csv(path_data+'/pixel_stats.csv')
def train_or_test(row):
return 'train' if not np.isnan(row['sirna']) else 'test'
df_all['ds'] = df_all.apply(train_or_test, axis=1)
df_all.head()
df_all['cell'] = df_all.apply(lambda row: row['experiment'].split("-")[0], axis=1)
df_all.head(277*6)
df_all.tail()
train_length = len(df_all[df_all['ds'] == 'train'])
for cell in ['HEPG2', 'HUVEC', 'RPE', 'U2OS']:
print(f"{cell} {df_all[df_all['ds'] == 'train'].groupby('cell').agg(['count'])[['experiment']].loc[cell][0] / train_length}")
.01 * train_length
test_length = len(df_all[df_all['ds'] == 'test'])
for cell in ['HEPG2', 'HUVEC', 'RPE', 'U2OS']:
print(f"{cell} {df_all[df_all['ds'] == 'test'].groupby('cell').agg(['count'])[['experiment']].loc[cell][0] / test_length}")
sns.countplot(x='cell',data=df_all[df_all['ds'] == 'train'])
df_all[df_all['ds'] == 'train'].groupby('cell').agg(['count'])
sns.countplot(x='cell',data=df_all[df_all['ds'] == 'test'])
df_all[df_all['ds'] == 'test'].groupby('cell').agg(['count'])
df_pixels.head()
df_pixels[['min']].min()
df_pixels[df_pixels['min'] == df_pixels[['min']].max()[0]].head()
df_pixels[df_pixels['min'] == df_pixels[['min']].min()[0]].head()
print(len(df))
df_pixels.groupby('channel').agg(['count'])
m = df_pixels[df_pixels['channel'] == 2]
m = m[m['experiment'].str.contains('HEPG2')]
m.tail()
stats_data = {'cell': [], 'channel': [], 'std': [], 'mean': []}
stats_df = pd.DataFrame(data=stats_data)
for channel in range(1,7):
temp = df_pixels[df_pixels['channel'] == channel]
stats_df = stats_df.append({'cell': 'ALL', 'channel': int(channel), 'std': temp[["std"]].mean()[0], 'mean': temp[["mean"]].mean()[0]}, ignore_index=True)
print(f'channel {channel} - mean: {temp[["mean"]].mean()[0]} | std: {temp[["std"]].mean()[0]}\n')
print(stats_df)
for cell in ['HEPG2', 'HUVEC', 'RPE', 'U2OS']:
print('\n\n')
for channel in range(1,7):
temp = df_pixels[df_pixels['channel'] == channel]
temp = temp[temp['experiment'].str.contains(cell)]
stats_df = stats_df.append({'cell': cell, 'channel': int(channel), 'std': temp[["std"]].mean()[0], 'mean': temp[["mean"]].mean()[0]}, ignore_index=True)
print(f'channel {channel} {cell} - mean: {temp[["mean"]].mean()[0]} | std: {temp[["std"]].mean()[0]}\n')
print(stats_df)
stats_df.to_csv('/storage/rxrxai/pixel_stats_agg.csv', index=False)
df_train_sirna = df_all[df_all['ds'] == 'train']
df_train_sirna.groupby('sirna').agg(['count'])
# !cat /storage/rxrxai/sample_submission.csv
```
# Images
```
df_all.head()
df_all.tail()
cpu_count()
df_all_records = df_all.to_records(index=False)
def get_img_path(index, channel, site, suffix=''):
experiment, well, plate, ds = df_all_records[index].experiment, df_all_records[index].well, df_all_records[index].plate, df_all_records[index].ds
return '/'.join(['/storage/rxrxai', f'{ds}{suffix}', experiment, f'Plate{int(plate)}', f'{well}_s{site}_w{channel}.png'])
transform = transforms.Compose([transforms.Resize(384)])
df_all_enum = enumerate(df_all_records)
def add_aug_images(packed_args):
i, row = packed_args
for channel in range(1,7):
for site in [1,2]:
experiment = df_all_records[i].experiment
plate = df_all_records[i].plate
ds = df_all_records[i].ds
sub_dir = f'/storage/rxrxai/{ds}384/{experiment}/Plate{int(plate)}'
if not os.path.exists(sub_dir):
os.makedirs(sub_dir)
with Image.open(get_img_path(i, channel, site)) as img:
image = transform(img)
image.save(get_img_path(i, channel, site, suffix='384'))
%%time
pool = Pool()
m = pool.map(add_aug_images, df_all_enum)
open("myfile.txt", "x")
```
## Model-Sizing for Keras CNN Model Zoo
This is a sanity check for: https://culurciello.github.io/tech/2016/06/04/nets.html
In particular, their model comparison graph:

and this recent blog post (which came out well after this notebook was built): http://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/
```
import keras
#import tensorflow.contrib.keras as keras
import numpy as np
if False:
import os, sys
targz = "v0.5.tar.gz"
url = "https://github.com/fchollet/deep-learning-models/archive/"+targz
models_orig_dir = 'deep-learning-models-0.5'
models_here_dir = 'keras_deep_learning_models'
models_dir = './models/'
if not os.path.exists(models_dir):
os.makedirs(models_dir)
if not os.path.isfile( os.path.join(models_dir, models_here_dir, 'README.md') ):
tarfilepath = os.path.join(models_dir, targz)
if not os.path.isfile(tarfilepath):
import urllib.request
urllib.request.urlretrieve(url, tarfilepath)
import tarfile, shutil
tarfile.open(tarfilepath, 'r:gz').extractall(models_dir)
shutil.move(os.path.join(models_dir, models_orig_dir), os.path.join(models_dir, models_here_dir))
if os.path.isfile( os.path.join(models_dir, models_here_dir, 'README.md') ):
os.unlink(tarfilepath)
sys.path.append(models_dir)
print("Keras Model Zoo model code installed")
#dir(keras)
dir(keras.applications)
#keras.__package__
#import keras
#if keras.__version__ < '2.0.0':
# print("keras version = %s is too old" % (keras.__version__,))
from keras.applications.inception_v3 import decode_predictions
from keras.preprocessing import image as keras_preprocessing_image
#from keras_deep_learning_models.imagenet_utils import decode_predictions
#from tensorflow.contrib.keras.api.keras.applications.inception_v3 import decode_predictions
#from tensorflow.contrib.keras.api.keras.preprocessing import image as keras_preprocessing_image
# This call to 'decode_predictions' will potentially download imagenet_class_index.json (~35 KB)
decode_predictions(np.zeros( (1,1000) ), top=1)
```
### Image Loading and Pre-processing
```
def image_to_input(model, preprocess_input_fn, img_path):
    target_size = model.input_shape[1:]
    img = keras_preprocessing_image.load_img(img_path, target_size=target_size)
    x = keras.preprocessing.image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input_fn(x)
    return x

def test_model_sanity(model, preprocess_input_fn, img_path, img_class_str=''):
    x = image_to_input(model, preprocess_input_fn, img_path)
    preds = model.predict(x)
    predictions = decode_predictions(preds, top=1)
    if len(img_class_str)>0:
        if predictions[0][0][1] != img_class_str:
            print("INCORRECT CLASS!")
    print('Predicted:', predictions)
    # prints: [[('n02123045', 'tabby', 0.76617092)]]

img_path, img_class = './images/cat-with-tongue_224x224.jpg', 'tabby'
```
### Model loading / timing
```
import time

def load_model_weights(fn, weight_set, assume_download=30):
    t0 = time.time()
    m = fn(weights=weight_set)
    if time.time()-t0 > float(assume_download):  # more than this => downloading, so retry to get set-up time cleanly
        print("Assume that >30secs means that we just downloaded the dataset : load again for timing")
        t0 = time.time()
        m = fn(weights=weight_set)
    time_load = float(time.time()-t0)
    weight_count = [ float(np.sum([keras.backend.count_params(p) for p in set(w)]))/1000./1000.
                     for w in [m.trainable_weights, m.non_trainable_weights] ]
    print("Loaded %.0fMM parameters (and %.0fk fixed parameters) into model in %.3f seconds" %
          (weight_count[0], weight_count[1]*1000., time_load,))
    return m, time_load, weight_count[0], weight_count[1]

def time_model_predictions(model, preprocess_input_fn, img_path, batch_size=1, iters=1):
    x = image_to_input(model, preprocess_input_fn, img_path)
    batch = np.tile(x, (batch_size,1,1,1))
    t0 = time.time()
    for i in range(iters):
        _ = model.predict(batch, batch_size=batch_size)
    single = float(time.time()-t0)*1000./iters/batch_size
    print("A single image forward pass takes %.0f ms (in batches of %d, average of %d passes)" %
          (single, batch_size, iters,))
    return single

def total_summary(fn, preprocess_input_fn):
    model, time_setup, trainable, fixed = load_model_weights(fn, 'imagenet')
    test_model_sanity(model, preprocess_input_fn, img_path, img_class)
    time_iter_ms = time_model_predictions(model, preprocess_input_fn, img_path, batch_size=8, iters=2)
    model = None  # Clean up
    keras.backend.clear_session()
    return dict(name=fn.__name__,
                params_trainable=trainable, params_fixed=fixed,
                time_setup=time_setup, time_iter_ms=time_iter_ms)
```
### Worked Example : ResNet 50
http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html
```
from keras.applications.resnet50 import ResNet50, preprocess_input
#from tensorflow.contrib.keras.api.keras.applications.resnet50 import ResNet50, preprocess_input

#model_resnet50 = ResNet50(weights='imagenet')
model_resnet50, _, _, _ = load_model_weights(ResNet50, 'imagenet')

test_model_sanity(model_resnet50, preprocess_input, img_path, img_class)

_ = time_model_predictions(model_resnet50, preprocess_input, img_path, batch_size=8, iters=2)

model_resnet50 = None            # release 'pointers'
keras.backend.clear_session()    # release memory
```
## Collect statistics
```
evaluate = ['#VGG16', '#InceptionV3', '#ResNet50', 'Xception', '#MobileNet', '#NotThisOne']  # remove the '#' to enable
#evaluate = ['VGG16', 'InceptionV3', 'ResNet50', 'Xception', 'MobileNet', '#NotThisOne']

stats_arr = []

if 'VGG16' in evaluate:
    from keras.applications.vgg16 import VGG16, preprocess_input
    #from tensorflow.contrib.keras.api.keras.applications.vgg16 import VGG16, preprocess_input
    stats_arr.append( total_summary( VGG16, preprocess_input ) )

if 'InceptionV3' in evaluate:
    from keras.applications.inception_v3 import InceptionV3, preprocess_input
    #from tensorflow.contrib.keras.api.keras.applications.inception_v3 import InceptionV3, preprocess_input
    stats_arr.append( total_summary( InceptionV3, preprocess_input ) )

if 'ResNet50' in evaluate:
    from keras.applications.resnet50 import ResNet50, preprocess_input
    #from tensorflow.contrib.keras.api.keras.applications.resnet50 import ResNet50, preprocess_input
    stats_arr.append( total_summary( ResNet50, preprocess_input ) )

if 'Xception' in evaluate:
    from keras.applications.xception import Xception, preprocess_input
    #from tensorflow.contrib.keras.api.keras.applications.xception import Xception, preprocess_input
    stats_arr.append( total_summary( Xception, preprocess_input ) )

if 'MobileNet' in evaluate:
    from keras.applications.mobilenet import MobileNet, preprocess_input
    #from tensorflow.contrib.keras.api.keras.applications.mobilenet import MobileNet, preprocess_input
    stats_arr.append( total_summary( MobileNet, preprocess_input ) )

stats = { s['name']: { k: int(v*100)/100. for k,v in s.items() if k!='name' }
          for s in sorted(stats_arr, key=lambda x: x['name']) }

stats_cols = 'params_fixed params_trainable time_iter_ms time_setup'.split()
print(' '*33, stats_cols)
for k, stat in stats.items():
    print(" '%s:statdict([%s])," % (
        (k+"'"+' '*15)[:15],
        ', '.join(["%6.2f" % stat[c] for c in stats_cols]),)
    )
```
#### Load Default Stats as a fallback
```
def statdict(arr):
    return dict(zip(stats_cols, arr))

d = statdict([0.03, 23.81, 642.46, 514])
d

if len(stats_arr)==0:
    stats_laptop_cpu = {  # Updated 24-Aug-2017
        #               'params_fixed params_trainable time_iter_ms time_setup'
        'InceptionV3' :statdict([ 0.03, 23.81, 631.63, 5.03]),
        'MobileNet'   :statdict([ 0.02, 4.23, 197.90, 1.75]),
        'ResNet50'    :statdict([ 0.05, 25.58, 567.64, 3.55]),
        'VGG16'       :statdict([ 0.00, 138.35, 1026.74, 2.64]),
        'Xception'    :statdict([ 0.05, 22.85, 1188.02, 3.10]),
    }
    stats_titanx = {  # Updated 27-Aug-2017
        #               'params_fixed params_trainable time_iter_ms time_setup'
        'InceptionV3' :statdict([ 0.03, 23.81, 215.10, 3.31]),
        'MobileNet'   :statdict([ 0.02, 4.23, 66.68, 1.22]),
        'ResNet50'    :statdict([ 0.05, 25.58, 196.02, 2.58]),
        'VGG16'       :statdict([ 0.00, 138.35, 336.00, 1.01]),
        'Xception'    :statdict([ 0.05, 22.85, 387.45, 1.85]),
    }
    stats = stats_titanx
```
### Plot Graph (v. different axes)
```
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_xlabel('image processing time (ms)')
ax.set_ylabel('# of parameters')

X, Y, R, names = [], [], [], []
for name, data in stats.items():
    X.append(data['time_iter_ms'])
    Y.append(data['params_trainable']+data['params_fixed'])
    R.append(data['time_setup']*10.)
    names.append(name)

ax.scatter(X, Y, s=R)
for name, x, y in zip(names, X, Y):
    plt.annotate(
        name, xy=(x, y), xytext=(+0, 30),
        textcoords='offset points', ha='right', va='bottom',
        bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
        arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=0'))
plt.show()
```
### Summary sizing
```
! ls -sh ~/.keras/models/
```
```
36K imagenet_class_index.json
26M inception_v1_weights_tf_dim_ordering_tf_kernels.h5
92M inception_v3_weights_tf_dim_ordering_tf_kernels.h5
17M mobilenet_1_0_224_tf.h5
99M resnet50_weights_tf_dim_ordering_tf_kernels.h5
528M vgg16_weights_tf_dim_ordering_tf_kernels.h5
 88M xception_weights_tf_dim_ordering_tf_kernels.h5
```
### Ideas
http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html
# NumPy Array Basics
http://numpy.org
NumPy is the fundamental base for scientific computing in Python. It contains:
- N-dimensional array objects
- vectorization of functions
- tools for integrating C/C++ and Fortran code
- linear algebra, Fourier transform, and random number tools
NumPy is the basis for a lot of other packages like scikit-learn, SciPy, and pandas, but it provides a lot of power in and of itself. Its API stays fairly abstract, and it gives us a strong foundation for the operations and concepts we'll be applying later on.
Let’s go ahead and get started.
```
import sys
print(sys.version)
```
First we'll need to import it; the convention is to import it as 'np'. This is extremely common and is what I'll use every time I import NumPy.
```
import numpy as np
```
One of the reasons that NumPy is such a popular tool is that it is typically vastly more efficient than standard Python lists.
I'm not going to go into the details, but things like vectorization and boolean selection not only improve readability but provide for faster operations as well.
Feel free to post questions on the side and I can dive into the details for you, but the big takeaways are that we can access data in memory more efficiently, functions can be applied to whole arrays or matrices, and boolean selection allows for simple filtering.
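As a quick sketch of that last point, boolean selection means indexing an array with a comparison, which keeps only the elements where the comparison is true:

```python
import numpy as np

arr = np.arange(10)      # array([0, 1, 2, ..., 9])
mask = arr > 5           # a boolean array: [False, False, ..., True]
filtered = arr[mask]     # keeps only the elements where the mask is True
print(filtered)          # [6 7 8 9]
```

We'll come back to boolean selection properly later on; this is just a taste of the filtering it enables.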
Let's create an array in NumPy with np.arange; then we'll get the mean.
```
range(10)
np.arange(10)
npa = np.arange(10)
?npa
```
Getting summary statistics like these is much easier in NumPy, as arrays provide convenient methods to compute them.
```
npa.mean()
npa.sum()
npa.max()
npa.min()
[x * x for x in npa]
```
You'll see that we can do things like list comprehensions on arrays; however, this is not the recommended method. The recommended method is to vectorize our operation.
Vectorization, in simplest terms, allows you to apply a function to an entire array instead of doing it value by value - similar to what we were doing with map and filter in the previous videos. This typically makes things much more concise and readable. Not necessarily in trivial examples like the ones in these initial videos, but when we move along into more complicated analysis, the speed improvements are significant.
A good rule of thumb: if you're hand-coding for loops over NumPy arrays (or over certain things in pandas), you're likely doing it wrong; there are much more efficient ways of doing it. This will become apparent over the next several videos, but before we get there, I want to talk about boolean selection.
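For instance, the squaring operation from the list comprehension above can be written as a single vectorized expression:

```python
import numpy as np

npa = np.arange(10)

# list-comprehension version: a Python-level loop over each element
squares_loop = [x * x for x in npa]

# vectorized version: one operation applied to the whole array at once
squares_vec = npa ** 2

print(squares_vec)  # [ 0  1  4  9 16 25 36 49 64 81]
```

Both produce the same values, but the vectorized form is shorter and runs in optimized C code rather than a Python loop.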