# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="4M_NJJPDau01" colab_type="code" colab={}
# System libraries.
import os
import sys

# Change working directory to Colab Notebooks.
# %cd "/content/drive/My Drive/Colab/"

# Import my modules.
sys.path.append("./modules")
from train_unet_pred import train_unet_seg2seg_short

loss_name = "mae"
block_list = [7, 8, 9, 10, 11, 12]

for block_number in block_list:
    num_scenes = 1000
    epochs = 3
    learning_rate = False
    if block_number == 1:
        model_loading = False
        load_loss_name = "mae"
        load_block_number = block_number - 1
    else:
        model_loading = True
        load_loss_name = "mae"
        load_block_number = block_number - 1
    model_summary = True
    check_partition = True
    cloud_storage = False
    test = False
    if cloud_storage:
        block_name = "{:02}".format(block_number)
        if not os.path.isfile("/content/temp_dataset/dataset-intphys-{}000.hdf5".format(block_name)):
            from google.colab import auth
            auth.authenticate_user()
            project_id = 'intphys'
            bucket_name = 'datasets-intphys'
            # !gcloud config set project {project_id}
            # !gsutil cp gs://{bucket_name}/dataset-intphys-{block_name}000.hdf5 /content/temp_dataset/dataset-intphys-{block_name}000.hdf5
    elif test:
        # !mkdir /content/temp_dataset
        # !cp dataset/dataset-intphys-test.hdf5 /content/temp_dataset
        pass
    model = train_unet_seg2seg_short(loss_name,
                                     block_number,
                                     load_loss_name,
                                     load_block_number,
                                     num_scenes,
                                     epochs,
                                     learning_rate,
                                     model_loading,
                                     model_summary,
                                     check_partition)
| unet-prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generalized Linear Models
# %matplotlib inline
import numpy as np
import statsmodels.api as sm
from scipy import stats
from matplotlib import pyplot as plt
# ## GLM: Binomial response data
#
# ### Load Star98 data
#
# In this example, we use the Star98 dataset which was taken with permission
# from <NAME> (2000) Generalized linear models: A unified approach. Codebook
# information can be obtained by typing:
print(sm.datasets.star98.NOTE)
# Load the data and add a constant to the exogenous (independent) variables:
data = sm.datasets.star98.load(as_pandas=False)
data.exog = sm.add_constant(data.exog, prepend=False)
# The dependent variable is N by 2 (Success: NABOVE, Failure: NBELOW):
print(data.endog[:5,:])
# The independent variables include all the other variables described above, as
# well as the interaction terms:
print(data.exog[:2,:])
# ### Fit and summary
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
res = glm_binom.fit()
print(res.summary())
# ### Quantities of interest
print('Total number of trials:', data.endog[0].sum())
print('Parameters: ', res.params)
print('T-values: ', res.tvalues)
# First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
means = data.exog.mean(axis=0)
means25 = means.copy()
means25[0] = stats.scoreatpercentile(data.exog[:,0], 25)
means75 = means.copy()
means75[0] = stats.scoreatpercentile(data.exog[:,0], 75)
resp_25 = res.predict(means25)
resp_75 = res.predict(means75)
diff = resp_75 - resp_25
# The interquartile first difference for the percentage of low income households in a school district is:
print("%2.4f%%" % (diff*100))
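The first-difference logic can also be sketched as a standalone toy example (all numbers below are hypothetical and not the Star98 estimates): hold the other covariate at its mean, move the covariate of interest from its 25th to its 75th percentile, and difference the predicted probabilities.

```python
import math

def invlogit(z):
    """Inverse logit (logistic) function, the binomial GLM mean function."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: intercept, x1 (varied), x2 (held fixed)
beta = [-0.5, 0.03, 1.2]
x2_mean = 0.4                  # x2 held at its (hypothetical) mean
x1_q25, x1_q75 = 10.0, 30.0    # hypothetical quartiles of x1

p25 = invlogit(beta[0] + beta[1] * x1_q25 + beta[2] * x2_mean)
p75 = invlogit(beta[0] + beta[1] * x1_q75 + beta[2] * x2_mean)
first_difference = p75 - p25
print("%2.4f%%" % (first_difference * 100))
```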
# ### Plots
#
# We extract information that will be used to draw some interesting plots:
nobs = res.nobs
y = data.endog[:,0]/data.endog.sum(1)
yhat = res.mu
# Plot yhat vs y:
from statsmodels.graphics.api import abline_plot
# +
fig, ax = plt.subplots()
ax.scatter(yhat, y)
line_fit = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
abline_plot(model_results=line_fit, ax=ax)
ax.set_title('Model Fit Plot')
ax.set_ylabel('Observed values')
ax.set_xlabel('Fitted values');
# -
# Plot yhat vs. Pearson residuals:
# +
fig, ax = plt.subplots()
ax.scatter(yhat, res.resid_pearson)
ax.hlines(0, 0, 1)
ax.set_xlim(0, 1)
ax.set_title('Residual Dependence Plot')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
# -
# Histogram of standardized deviance residuals:
# +
from scipy import stats
fig, ax = plt.subplots()
resid = res.resid_deviance.copy()
resid_std = stats.zscore(resid)
ax.hist(resid_std, bins=25)
ax.set_title('Histogram of standardized deviance residuals');
# -
# QQ Plot of Deviance Residuals:
from statsmodels import graphics
graphics.gofplots.qqplot(resid, line='r')
# ## GLM: Gamma for proportional count response
#
# ### Load Scottish Parliament Voting data
#
# In the example above, we printed the ``NOTE`` attribute to learn about the
# Star98 dataset. statsmodels datasets ships with other useful information. For
# example:
print(sm.datasets.scotland.DESCRLONG)
# Load the data and add a constant to the exogenous variables:
data2 = sm.datasets.scotland.load()
data2.exog = sm.add_constant(data2.exog, prepend=False)
print(data2.exog[:5,:])
print(data2.endog[:5])
# ### Model Fit and summary
glm_gamma = sm.GLM(data2.endog, data2.exog, family=sm.families.Gamma())
glm_results = glm_gamma.fit()
print(glm_results.summary())
# ## GLM: Gaussian distribution with a noncanonical link
#
# ### Artificial data
nobs2 = 100
x = np.arange(nobs2)
np.random.seed(54321)
X = np.column_stack((x,x**2))
X = sm.add_constant(X, prepend=False)
lny = np.exp(-(.03*x + .0001*x**2 - 1.0)) + .001 * np.random.rand(nobs2)
# ### Fit and summary (artificial data)
gauss_log = sm.GLM(lny, X, family=sm.families.Gaussian(sm.families.links.Log()))
gauss_log_results = gauss_log.fit()
print(gauss_log_results.summary())
| examples/notebooks/glm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ##### 2015 NFIRS FireCARES data statistics and loading
#
# ## Overview
#
# The purpose of this collection of scripts is to load the NFIRS yearly data into a dump of the existing FireCARES database. The 18 individual incident tables and ancillary tables (fireincident, hazchem, hazmat, etc.) are loaded into temporary tables and then appended to the master "fireincident", etc. tables. The bulk of this script deals with matching addresses in the geocoding results to the new incidents in "incidentaddress_2014" and appending those to the "incidentaddress" table. The incident_address_2014_aa/ab/ac/ad/ae tables contain the geocoding information from shapefiles and are used to augment the incidentaddress_2014 table's records with geometries.
#
# ## Assumptions
#
# * You have jupyter>=1.0.0, psycopg2, pandas, and their dependencies installed
# * See append_nfirs_yearly_data.sh for more information / prereqs and initial data loading
# * Database has been restored as "nfirs2"
# * All of the `incident_address_2015_a*.shp` data has been loaded into the database as "incident_address_2015_aa/ab/ac/ad/ae" tables
# * Dates have been converted to a Postgres-parseable date (MM/DD/YYYY) in:
# * `fireincident.txt`
# * `incidentaddress.txt`
# * `arsonjuvsub.txt`
# * `basicincident.txt`
# * `civiliancasualty.txt`
# * `ffcasualty.txt`
# * Null (`\000`) characters have been stripped from `incidentaddress.txt` and `fdheader.txt`
# * `RISK_FACT1` codes in `arsonjuvsub.txt` have been replaced with their single-character equivalent (eg. "U", "1", etc)
# * The 18 ancillary tables have been loaded as [FILENAME w/o extension]_2015 and include data from:
# * `fireincident.txt` => maps to `fireincident_2015`
# * `hazchem.txt`
# * `hazmat.txt`
# * `hazmatequipinvolved.txt`
# * `hazmobprop.txt`
# * `incidentaddress.txt`
# * `wildlands.txt`
# * `arson.txt`
# * `arsonagencyreferal.txt`
# * `arsonjuvsub.txt`
# * `basicaid.txt`
# * `basicincident.txt`
# * `civiliancasualty.txt`
# * `codelookup.txt`
# * `ems.txt`
# * `fdheader.txt`
# * `ffcasualty.txt`
# * `ffequipfail.txt`
# * An `id` serial primary key has been added to the "incidentaddress_2015" table
# * "incidentaddress" table has `source` column added
#
# The rest continues from (as of this writing) line 101 of `append_nfirs_yearly_data.sh`
# +
import pandas as pd
import psycopg2

conn = psycopg2.connect("dbname=geocoding_2015")
# -
# Need to change the zip5 datatype to match incidentaddress_2014
with conn.cursor() as cursor:
    cursor.execute("alter table geocoding_2015.geocoded_addresses alter column zip5 type character varying(5);")
    cursor.execute("update geocoding_2015.geocoded_addresses set zip5 = lpad(zip5, 5, '0')")
    print(cursor.rowcount)
conn.commit()
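The `lpad` above matters because ZIP codes imported as integers lose their leading zeros; the equivalent normalization in plain Python (a standalone sketch, not tied to the database) is:

```python
def normalize_zip5(zip_code):
    """Left-pad a ZIP code to 5 characters with zeros, as lpad(zip5, 5, '0') does in SQL."""
    return str(zip_code).zfill(5)

print(normalize_zip5(2114))     # a ZIP stored as an integer regains its leading zero
print(normalize_zip5("98101"))  # already-5-character strings are unchanged
```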
# Need to change the loc_type datatype to match incidentaddress_2014
with conn.cursor() as cursor:
    cursor.execute("alter table geocoding_2015.geocoded_addresses alter column loc_type type character varying(1);")
conn.commit()
# Also set null apt_no values to empty strings (missed earlier)
with conn.cursor() as cursor:
    cursor.execute("update geocoding_2015.geocoded_addresses set apt_no = '' where apt_no is null")
    print(cursor.rowcount)
conn.commit()
# Let's see how many incidents there are in each state
from pprint import pprint
with conn.cursor() as cursor:
    cursor.execute("select state_id, count(state_id) from geocoding_2015.geocoded_addresses group by state_id order by state_id;")
    pprint(cursor.fetchall())
conn.commit()
# +
# Create some indexes so this query doesn't take years to complete
with conn.cursor() as cursor:
    cursor.execute("""
        create index on geocoding_2015.geocoded_addresses (num_mile, upper(apt_no), upper(city), zip5, upper(street_pre),
            loc_type, upper(streetname), upper(streettype), upper(streetsuf), upper(state_id));""")
    cursor.execute("""
        create index on geocoding_2015.geocoded_addresses (num_mile, upper(apt_no), upper(city_1), zip5, upper(street_pre),
            loc_type, upper(streetname), upper(streettype), upper(streetsuf), upper(state_id));
        create index on geocoding_2015.geocoded_addresses (state_id);
        create index on geocoding_2015.geocoded_addresses (source);""")
conn.commit()
# +
# Execute the matching of the geocoded addresses to incident addresses, updating the geometry associated with each incident
with conn.cursor() as cursor:
    cursor.execute("""
        update incidentaddress_2014 as ia set geom = res.wkb_geometry, source = res.source
        from (
            select ia.id, aa.wkb_geometry, aa.source from address_to_geo_2014 aa inner join incidentaddress_2014 ia on (
                aa.num_mile = ia.num_mile and
                upper(aa.apt_no) = upper(ia.apt_no) and
                upper(aa.city_1) = upper(ia.city) and
                aa.zip5 = ia.zip5 and
                upper(aa.street_pre) = upper(ia.street_pre) and
                aa.loc_type = ia.loc_type and
                upper(aa.streetname) = upper(ia.streetname) and
                upper(aa.streettype) = upper(ia.streettype) and
                upper(aa.streetsuf) = upper(ia.streetsuf) and
                upper(aa.state_id) = upper(ia.state_id)
            ) where aa.num_mile != '' and score != 0) as res
        where ia.id = res.id
        """)
    print(cursor.rowcount)
conn.commit()
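The SQL join above matches on a case-insensitive composite address key. The same idea can be sketched in pure Python (hypothetical records, simplified to four key fields) to make the matching logic explicit:

```python
def address_key(rec):
    """Build a case-insensitive composite key, mirroring the upper(...) join conditions."""
    return (rec["num_mile"], rec["streetname"].upper(), rec["city"].upper(), rec["zip5"])

geocoded = [
    {"num_mile": "12", "streetname": "Main", "city": "Springfield", "zip5": "01101", "geom": "POINT(1 2)"},
]
incidents = [
    {"num_mile": "12", "streetname": "MAIN", "city": "springfield", "zip5": "01101", "geom": None},
    {"num_mile": "7",  "streetname": "Oak",  "city": "Springfield", "zip5": "01101", "geom": None},
]

# Index the geocoding results by key, then copy geometries onto matching incidents
lookup = {address_key(g): g["geom"] for g in geocoded}
for inc in incidents:
    inc["geom"] = lookup.get(address_key(inc), inc["geom"])

matched = sum(1 for inc in incidents if inc["geom"] is not None)
print("Matched:", matched)  # only the first incident matches, despite the case differences
```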
# +
# Now let's see how many incidents there are in each state that are geocoded vs. not geocoded
with conn.cursor() as cursor:
    cursor.execute("""
        select (100.0 * sum(case when geom is null then 0 else 1 end) / count(1)) as percent_with_geom,
            sum(case when geom is null then 0 else 1 end) as with_geom, state_id,
            count(state_id) as total_incidents
        from incidentaddress_2014 group by state_id order by percent_with_geom desc;
        """)
    print('% | Has geom | State | Total rows')
    for row in cursor.fetchall():
        print('{} | {} | {} | {}'.format(row[0], row[1], row[2], row[3]))
    cursor.execute("""
        select sum(case when geom is null then 0 else 1 end) as matches,
            (100.0 * sum(case when geom is null then 0 else 1 end) / count(1)) as percent,
            count(1) as total
        from incidentaddress_2014;""")
    row = cursor.fetchone()
    print('# geocode matches: {}, Percent geocoded: {}, Total: {}'.format(row[0], row[1], row[2]))
conn.commit()
# +
# Just for sanity's sake, make sure that the column types are the same between the 2014 and base data
addl_tables = ['arson', 'arsonagencyreferal', 'arsonjuvsub', 'basicaid', 'basicincident', 'civiliancasualty',
               'codelookup', 'ems', 'fdheader', 'ffcasualty', 'ffequipfail', 'fireincident', 'hazchem',
               'hazmat', 'hazmatequipinvolved', 'hazmobprop', 'incidentaddress', 'wildlands']
for table in addl_tables:
    t = pd.read_sql_query("select column_name, data_type from information_schema.columns where table_name='%s'" % table, conn)
    t_2014 = pd.read_sql_query("select column_name, data_type from information_schema.columns where table_name='%s_2014'" % table, conn)
    t_cols = set(map(tuple, t.values))
    t2014_cols = set(map(tuple, t_2014.values))
    print('Table: {}, col diffs: {}'.format(table, t2014_cols - t_cols))
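The column-type comparison can also be expressed without pandas, assuming two lists of `(column_name, data_type)` rows fetched from `information_schema.columns` (the rows below are hypothetical):

```python
# Hypothetical schema rows for a base table and its yearly staging table
base_cols = [("state", "character varying"), ("fdid", "character varying"), ("inc_date", "date")]
cols_2014 = [("state", "character varying"), ("fdid", "character varying"), ("inc_date", "text")]

# Columns present (or typed differently) in the 2014 table but not the base table
diffs = set(cols_2014) - set(base_cols)
print(diffs)
```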
# +
# Get some pre-load row counts
with conn.cursor() as cursor:
    for t in addl_tables:
        cursor.execute("select '{table}', count(1) from {table};".format(table=t))
        print(cursor.fetchone())
conn.commit()
# +
# Update the fdheader table first
with conn.cursor() as cursor:
    cursor.execute("select count(1) from fdheader_2014;")
    print('fdheader_2014 count: %s' % cursor.fetchone()[0])
    cursor.execute("select count(1) from fdheader;")
    pre = cursor.fetchone()[0]
    cursor.execute("""INSERT INTO fdheader(
            state, fdid, fd_name, fd_str_no, fd_str_pre, fd_street, fd_str_typ,
            fd_str_suf, fd_city, fd_zip, fd_phone, fd_fax, fd_email, fd_fip_cty,
            no_station, no_pd_ff, no_vol_ff, no_vol_pdc)
        (SELECT distinct on (state, fdid) state, fdid, fd_name, fd_str_no, fd_str_pre, fd_street, fd_str_typ,
            fd_str_suf, fd_city, fd_zip, fd_phone, fd_fax, fd_email, fd_fip_cty,
            no_station, no_pd_ff, no_vol_ff, no_vol_pdc
        FROM fdheader_2014 where (state, fdid) not in (select state, fdid from fdheader));""")
    inserted = cursor.rowcount
    cursor.execute("select count(1) from fdheader;")
    post = cursor.fetchone()[0]
    print('FDHeader Pre: {}, Post: {}, Insert count: {}, Post - pre: {}'.format(pre, post, inserted, post - pre))
conn.commit()
# -
# Update the "codelookup" table (appears to be static) with the 2014 data (delete and reload)
with conn.cursor() as cursor:
    cursor.execute("delete from codelookup;")
    print('Deleted: %s' % cursor.rowcount)
    cursor.execute("insert into codelookup select * from codelookup_2014;")
    print('Inserted: %s' % cursor.rowcount)
conn.commit()
# Insert the 2014 data into the foundation data
def load_2014(table, commit=True):
    with conn.cursor() as cursor:
        cursor.execute("select count(1) from %s;" % table)
        pre = cursor.fetchone()[0]
        cursor.execute("insert into %s select * from %s_2014;" % (table, table))
        inserted = cursor.rowcount
        cursor.execute("select count(1) from %s;" % table)
        post = cursor.fetchone()[0]
        print('Table: {}, Pre: {}, Post: {}, Insert count: {}, Post - pre: {}'.format(table, pre, post, inserted, post - pre))
    if commit:
        conn.commit()
        print('COMMITTED')
    else:
        conn.rollback()
        print('ROLLBACK')
# Load EVERYTHING except for basicincident, fireincident, incidentaddress
for table in ['arson', 'arsonagencyreferal', 'arsonjuvsub', 'basicaid', 'civiliancasualty', 'ems', 'ffcasualty',
              'ffequipfail', 'hazchem', 'hazmat', 'hazmatequipinvolved', 'hazmobprop', 'wildlands']:
    load_2014(table)
# +
# There must have been an error with the MO 06101 entry in "fdheader_2014", since there are 101 references to it in "basicincident_2014"
with conn.cursor() as cursor:
    cursor.execute("""select count(state), state, fdid from basicincident_2014 bi
        where (state, fdid) not in (select state, fdid from fdheader)
        and (state, fdid) not in (select state, fdid from fdheader_2014)
        group by state, fdid""")
    print(cursor.fetchall())
    cursor.execute("select count(1) from basicincident where state = 'MO' and fdid = '6101'")
    print(cursor.fetchall())
    cursor.execute("select count(1) from basicincident_2014 where state = 'MO' and fdid = '6101'")
    print(cursor.fetchall())
# -
# Going to do a one-off update of (MO,06101) => (MO,6101) since that's the only offender
with conn.cursor() as cursor:
    cursor.execute("update basicincident_2014 set fdid = '6101' where state = 'MO' and fdid = '06101'")
    print(cursor.rowcount)
conn.commit()
with conn.cursor() as cursor:
    cursor.execute("""select 'fireincident_2014', count(1) from fireincident_2014
        union select 'basicincident_2014', count(1) from basicincident_2014
        union select 'incidentaddress_2014', count(1) from incidentaddress_2014""")
    print(cursor.fetchall())
load_2014('basicincident', commit=True)
# +
# Looks like fireincident_2014 has the same issue with that MO 6101 fire department
with conn.cursor() as cursor:
    cursor.execute("""select count(state), state, fdid from fireincident_2014 bi
        where (state, fdid) not in (select state, fdid from fdheader)
        and (state, fdid) not in (select state, fdid from fdheader_2014)
        group by state, fdid""")
    print(cursor.fetchall())
    cursor.execute("select count(1) from fireincident where state = 'MO' and fdid = '6101'")
    print(cursor.fetchall())
    cursor.execute("select count(1) from fireincident_2014 where state = 'MO' and fdid = '6101'")
    print(cursor.fetchall())
# -
# Going to do a one-off update of (MO,06101) => (MO,6101) since that's the only offender
with conn.cursor() as cursor:
    cursor.execute("update fireincident_2014 set fdid = '6101' where state = 'MO' and fdid = '06101'")
    print(cursor.rowcount)
conn.commit()
load_2014('fireincident', commit=True)
# Loading all of the 2014 incident addresses into the master "incidentaddress" table
with conn.cursor() as cursor:
    cursor.execute("select count(1) from incidentaddress_2014;")
    print('incidentaddress_2014 count: %s' % cursor.fetchone()[0])
    cursor.execute("select count(1) from incidentaddress;")
    pre = cursor.fetchone()[0]
    cursor.execute("""INSERT INTO incidentaddress(
            state, fdid, inc_date, inc_no, exp_no, loc_type, num_mile, street_pre,
            streetname, streettype, streetsuf, apt_no, city, state_id, zip5,
            zip4, x_street, addid, addid_try, geom, bkgpidfp00, bkgpidfp10)
        (SELECT state, fdid, inc_date, inc_no, exp_no, loc_type, num_mile, street_pre,
            streetname, streettype, streetsuf, apt_no, city, state_id, zip5,
            zip4, x_street, addid, addid_try, geom, bkgpidfp00, bkgpidfp10
        FROM incidentaddress_2014);""")
    inserted = cursor.rowcount
    cursor.execute("select count(1) from incidentaddress;")
    post = cursor.fetchone()[0]
    print('Incident address Pre: {}, Post: {}, Insert count: {}, Post - pre: {}'.format(pre, post, inserted, post - pre))
conn.commit()
# Start moving things over to a separate schema
with conn.cursor() as cursor:
    cursor.execute("create schema if not exists geocoding_2014;")
conn.commit()

with conn.cursor() as cursor:
    cursor.execute("alter table address_to_geo_2014 set schema geocoding_2014;")
conn.commit()

tables = ["incident_address_2014_aa", "incident_address_2014_ab", "incident_address_2014_ac", "incident_address_2014_ad",
          "incident_address_2014_ae"]
with conn.cursor() as cursor:
    for table in tables:
        cursor.execute("alter table %s set schema geocoding_2014;" % table)
conn.commit()
# +
# We should be able to drop all of the 2014 tables now
tables_2014 = ['arson_2014', 'arsonagencyreferal_2014', 'arsonjuvsub_2014', 'basicaid_2014', 'basicincident_2014',
               'civiliancasualty_2014', 'codelookup_2014', 'ems_2014', 'fdheader_2014', 'ffcasualty_2014',
               'ffequipfail_2014', 'fireincident_2014', 'hazchem_2014', 'hazmat_2014', 'hazmatequipinvolved_2014',
               'hazmobprop_2014', 'incidentaddress_2014', 'wildlands_2014']
with conn.cursor() as cursor:
    for table in tables_2014:
        cursor.execute("drop table %s;" % table)
conn.commit()
# -
# [x] - arson_2014 -> arson
# [x] - arsonagencyreferal_2014 -> arsonagencyreferal
# [x] - arsonjuvsub_2014 -> arsonjuvsub
# [x] - basicaid_2014 -> basicaid
# [x] - basicincident_2014 -> basicincident
# [x] - civiliancasualty_2014 -> civiliancasualty
# [x] - codelookup_2014 -> codelookup
# [x] - ems_2014 -> ems
# [x] - fdheader_2014 -> fdheader
# [x] - ffcasualty_2014 -> ffcasualty
# [x] - ffequipfail_2014 -> ffequipfail
# [x] - fireincident_2014 -> fireincident
# [x] - hazchem_2014 -> hazchem
# [x] - hazmat_2014 -> hazmat
# [x] - hazmatequipinvolved_2014 -> hazmatequipinvolved
# [x] - hazmobprop_2014 -> hazmobprop
# [x] - incidentaddress_2014 -> incidentaddress
# [x] - wildlands_2014 -> wildlands
# ### Dumping the db / compressing
#
# ```bash
# pg_dump -O -x nfirs_2014 > nfirs_2014.sql
# mv nfirs_2014.sql nfirs_2014_addresses_matched.sql
# gzip nfirs_2014_addresses_matched.sql
# ```
| sources/nfirs/scripts/NFIRS 2015 load.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# [**Dr. <NAME>**](mailto:<EMAIL>), _Lecturer in Biomedical Engineering_
#
# National University of Ireland Galway.
#
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# \newpage
# + [markdown] slideshow={"slide_type": "slide"}
# # Learning objectives
# At the end of this lecture you should be able to:
#
# * Formulate Hooke's law in **matrix form**
# * Derive spring equations using the **direct method**
# * Derive spring equations using a **variational method**
# + slideshow={"slide_type": "skip"}
# Loading required packages (may take some time)
# Re-execute with Ctrl-Enter to suppress command window output (may interfere with PDF export)
using Pkg;
Pkg.add("Plots");
using Plots;
# + [markdown] slideshow={"slide_type": "slide"}
# # Hooke's law
# $$F=ku$$
#
# $F$: Force
# $k$: Spring stiffness constant
# $u$: Spring extension (displacement of end point)
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Hooke's law in matrix form
# A two node spring system:
# 
# The force components at node 1 and 2 can be written:
# $$f_{1x}=k(u_1-u_2)$$
# $$f_{2x}=k(u_2-u_1)=-k(u_1-u_2)$$
# + [markdown] slideshow={"slide_type": "slide"}
# It is convenient to use matrix notation, allowing one to write the above as:
# $$\begin{Bmatrix} f_{1x} \\ f_{2x} \end{Bmatrix}=\begin{bmatrix} k & -k \\ -k & k\end{bmatrix}\begin{Bmatrix} u_1 \\ u_2\end{Bmatrix}$$
# Which in short form is written:
# $$\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$$
# This introduces the notation $\begin{Bmatrix} \end{Bmatrix}$ and $\begin{bmatrix} \end{bmatrix}$, which represent a column array and a square array respectively.
# -
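# + [markdown] slideshow={"slide_type": "fragment"}
# A worked aside (added here as a sketch, not part of the original slides): the element stiffness matrix is singular,
# $$\det\begin{bmatrix} k & -k \\ -k & k\end{bmatrix}=k \cdot k-(-k)(-k)=0$$
# so $\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$ cannot be inverted as it stands: any rigid-body translation $u_1=u_2=c$ produces zero force. Boundary conditions remove this mode, which is why they are applied before solving in the examples below.
# -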
# ## Methods for derivation of the finite element equations
#
# * The key finite element equation:
# $$\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$$
# * There are three main types of methods to do so:
# * The **direct** (equilibrium) method
# * Simple, intuitive
# * 1D problems
# * **Variational** methods
# * More general
# * Requires existence of a functional to minimize
# * **Weighted residual** methods (e.g. Galerkin)
# * Most general
# * No functional for minimization required
# * The direct and variational methods are presented in this course
# + [markdown] slideshow={"slide_type": "slide"}
# ## Matrix multiplication in finite element analysis (FEA)
# * Multiplication of two rectangular matrices (summation implied over repeated indices):
# $$c_{ij}=a_{ik}b_{kj}$$
# * In FEA
# $$\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$$
# * Involves multiplication of a $n\times p$ matrix and a $p\times1$ array:
# $$f_{i}=k_{ip}u_{p}$$
# * Quasi-mnemonic to get $f_{i}$ "keep up" $\rightarrow k_{ip}u_{p}$
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction to direct methods
# * Direct methods use equilibrium conditions to directly assemble stiffness matrices and get to the form: $\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$.
# * Although intuitive, this method is limited to 1D problems (springs, bars, trusses)
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example: A single spring system and a known force
# Consider a single spring with a spring constant $k_1=100$ and a force of 12 N at one end. Compute the displacement of node 2.
#
# 
#
# One could directly use Hooke's law for a linear elastic spring in this case and simply state $F=ku$, leading to $12=100u$ and therefore $u=0.12$. However, below we shall use the finite element method to solve this problem (which is "overkill" here), just to familiarize ourselves with the process and notation.
#
# + [markdown] slideshow={"slide_type": "fragment"}
# 1. **Formulate the element stiffness matrix**. Using Hooke's law and force balance principles we can write $f_{1x}=k(u_1-u_2)$ and $f_{2x}=-k(u_1-u_2)$ and cast this in matrix form as:
#
# $$\begin{bmatrix} K^{(1)} \end{bmatrix} = k_1 \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# 2. **Formulate the element level equations**
# $$\begin{Bmatrix} F^{(1)} \end{Bmatrix}=\begin{bmatrix} K^{(1)}\end{bmatrix}\begin{Bmatrix} u^{(1)} \end{Bmatrix}$$
#
# $$\begin{Bmatrix} f_{1x} \\ f_{2x} \end{Bmatrix}=\begin{bmatrix} k_1 & -k_1 \\ -k_1 & k_1\end{bmatrix}\begin{Bmatrix} u_{1x} \\ u_{2x}\end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# 3. **Formulate the global stiffness matrix**. The global equations gather and represent contributions from all elements in the system. In this case, since there is only 1 element, the global stiffness matrix is the same as the element stiffness matrix, and superposition directly maps $\begin{bmatrix} K^{(1)} \end{bmatrix}$ to $\begin{bmatrix} K \end{bmatrix}$ :
#
# $$\begin{Bmatrix} F \end{Bmatrix}=\begin{bmatrix} K \end{bmatrix} \begin{Bmatrix} u \end{Bmatrix}$$
#
# $$\begin{Bmatrix} F_{1x} \\ F_{2x} \end{Bmatrix}=\begin{bmatrix} k_1 & -k_1 \\ -k_1 & k_1\end{bmatrix}\begin{Bmatrix} u_{1x} \\ u_{2x}\end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# 4. **Apply boundary conditions information**, i.e. $F_{2x}=12$ and $u_{1x}=0$
#
# $$\begin{Bmatrix} F_{1x} \\ 12 \end{Bmatrix}=\begin{bmatrix} 100 & -100 \\ -100 & 100\end{bmatrix}\begin{Bmatrix} 0 \\ u_{2x}\end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# 5. **Solve the system of equations**. Many methods exist for solving for the unknown quantities. Here a manual process is used.
#
# * 5a) First we isolate the equation for the known force $F_{2x}$ to solve for $u_2$ (recalling $f_i=K_{ip}u_{p}$):
#
# $$F_{2x} = 12 =K_{21}u_1+K_{22}u_2=-100*0+100u_2=100u_2 \rightarrow u_2=\frac{12}{100}=0.12$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# * 5b) Now we formally solve for the remaining unknown quantity $F_{1x}$ (although balance principles already dictate it should equal $-F_{2x}$), using:
# $$\begin{Bmatrix} F_{1x} \\ 12 \end{Bmatrix}=\begin{bmatrix} 100 & -100 \\ -100 & 100\end{bmatrix}\begin{Bmatrix} 0 \\ \frac{12}{100}\end{Bmatrix}$$
# we obtain
# $$F_{1x} = K_{11}u_1+K_{12}u_2=100*0-100u_2=-100u_2=-100*0.12=-12$$
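As a quick cross-check of the worked example (written in Python, independently of the Julia code used elsewhere in this lecture):

```python
k = 100.0   # spring stiffness
F2 = 12.0   # applied force at node 2

# Reduced system after applying u1 = 0: F2 = k * u2
u2 = F2 / k
# Reaction at the fixed node: F1 = -k * u2
F1 = -k * u2
print(u2, F1)
```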
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example: A three spring system and a known force
# Consider the three spring system below*:
# 
# The system consists of 3 spring "elements" and 4 nodes. Nodes 1 and 4 are constrained from moving. A force of 25 kN is applied to node 3 in the positive x-direction. The spring stiffnesses are:
#
# $k_1=200$, $k_2=400$, and $k_3=600$
#
# \*Based on example 2.1 of: <NAME>, _"A First Course in the Finite Element Methods"_ (page 44 in the 6th edition, page 46 in the 5th edition)
# + [markdown] slideshow={"slide_type": "slide"}
# # Setting up the element stiffness matrices
# Each element matrix is formed using:
# $$K^{(i)}=\begin{bmatrix} k_i & -k_i \\ -k_i & k_i\end{bmatrix} $$
#
# Leading to:
# $K^{(1)}=\begin{bmatrix} 200 & -200 \\ -200 & 200\end{bmatrix}$, $K^{(2)}=\begin{bmatrix} 400 & -400 \\ -400 & 400\end{bmatrix}$, and $K^{(3)}=\begin{bmatrix} 600 & -600 \\ -600 & 600\end{bmatrix}$
# + slideshow={"slide_type": "fragment"}
kSet=[200 400 600]; # The spring stiffness value set
I=[1 -1; -1 1]; # Array for spawning stiffness matrices
k1=kSet[1]*I; # Element stiffness matrix 1
k2=kSet[2]*I; # Element stiffness matrix 2
k3=kSet[3]*I; # Element stiffness matrix 3
display(k1)
display(k2)
display(k3)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Superposition to assemble global stiffness matrix
# Through superposition the global stiffness matrix can be assembled. The superposition is often written as:
# $$K=\sum_{i=1}^{3} K^{(i)}$$
# However, it should be noted this is not a plain matrix summation. The numerical implementation below illustrates how **the indices of the nodes involved are used as indices into matrix $K$**, leading to:
# $$\begin{bmatrix} K \end{bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 600 & -400 & 0 \\ 0 & -400 & 1000 & -600 \\ 0 & 0 & -600 & 600 \end{bmatrix}$$
# + slideshow={"slide_type": "fragment"}
K=zeros(4,4); #Initialize stiffness array with all zeros
K[[1,2],[1,2]] .= K[[1,2],[1,2]] .+ k1; #Add element 1 contribution
K[[2,3],[2,3]] .= K[[2,3],[2,3]] .+ k2; #Add element 2 contribution
K[[3,4],[3,4]] .= K[[3,4],[3,4]] .+ k3; #Add element 3 contribution
display(K)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Solving the system
# Following derivation of the global stiffness matrix the total system now becomes:
# $$\begin{Bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{Bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 600 & -400 & 0 \\ 0 & -400 & 1000 & -600 \\ 0 & 0 & -600 & 600 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4\end{Bmatrix}$$
# To solve this system we can use the boundary conditions. Using $u_1=u_4=0$ the "sub-system" for nodes 2 and 3 can be isolated. Furthermore the known forces at nodes 2 and 3 can be substituted, -> $F_2=0$, and $F_3=25000$:
#
# $$\begin{Bmatrix} 0 \\ 25000 \end{Bmatrix}=\begin{bmatrix} 600 & -400 \\ -400 & 1000 \end{bmatrix}\begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix}\rightarrow \begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix}=\begin{Bmatrix} \frac{250}{11} \\ \frac{375}{11} \end{Bmatrix}$$
# + slideshow={"slide_type": "fragment"}
F23=[0; 25000]; #Force array for node 2 and 3
k23=K[[2,3],[2,3]]; #Sub-stiffness matrix for 2-3 system
u23=k23\F23 #Displacement array for node 2 and 3
display(k23)
display(u23)
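The reduced $2\times2$ solve can be cross-checked by hand with Cramer's rule (a Python sketch, independent of the Julia code above):

```python
# Reduced system: [600 -400; -400 1000] [u2; u3] = [0; 25000]
a, b, c, d = 600.0, -400.0, -400.0, 1000.0
f2, f3 = 0.0, 25000.0

det = a * d - b * c              # 600*1000 - (-400)*(-400) = 440000
u2 = (f2 * d - b * f3) / det     # Cramer's rule, first unknown
u3 = (a * f3 - f2 * c) / det     # Cramer's rule, second unknown
print(u2, u3)                    # 250/11 and 375/11
```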
# + [markdown] slideshow={"slide_type": "slide"}
# ### Compute force array F
# Since all nodal displacements are now known, the full force array can be computed from:
# $$\begin{Bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{Bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 600 & -400 & 0 \\ 0 & -400 & 1000 & -600 \\ 0 & 0 & -600 & 600 \end{bmatrix}\begin{Bmatrix} 0 \\ \frac{250}{11} \\ \frac{375}{11} \\ 0\end{Bmatrix}=\begin{Bmatrix} -\frac{50000}{11} \\ 0 \\ 25000 \\ -\frac{225000}{11} \end{Bmatrix}$$
# + slideshow={"slide_type": "fragment"}
U=[0; u23[1]; u23[2]; 0] #Full displacement array
F=K*U #Compute force array
# + [markdown] slideshow={"slide_type": "slide"}
# ### Computing element forces
# The element force data can now be computed too from:
# $$\begin{Bmatrix} F^{(i)} \end{Bmatrix}=\begin{bmatrix} k^{(i)}\end{bmatrix}\begin{Bmatrix} u^{(i)} \end{Bmatrix}$$
# E.g.:
# $$\begin{Bmatrix} f_{1x} \\ f_{2x} \end{Bmatrix}=\begin{bmatrix} k_1 & -k_1 \\ -k_1 & k_1\end{bmatrix}\begin{Bmatrix} u_1 \\ u_2\end{Bmatrix}$$
# + slideshow={"slide_type": "fragment"}
f1=k1*U[[1,2]]; #Element 1 forces
f2=k2*U[[2,3]]; #Element 2 forces
f3=k3*U[[3,4]]; #Element 3 forces
display(f1)
display(f2)
display(f3)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example: A 3 spring system and a known displacement
# Consider the three spring system below*:
# 
# The system consists of 3 spring "elements" and 4 nodes. Node 1 is constrained from moving. A displacement of 0.2 m is applied to node 4 in the positive x-direction. The spring stiffnesses are:
#
# $k_1=k_2=k_3=200$
#
# \*Based on example 2.2 of: <NAME>, _"A First Course in the Finite Element Methods"_ (page 48 in the 6th edition, page 49 in the 5th edition)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Superposition to form the global stiffness matrix
# $$K=\sum_{i=1}^{3} K^{(i)}$$
# All element stiffness matrices now are of the form:
# $$K^{(i)}=\begin{bmatrix} 200 & -200 \\ -200 & 200\end{bmatrix}$$
# + slideshow={"slide_type": "fragment"}
kSet=[200 200 200]; # The spring stiffness value set
I=[1 -1;-1 1]; # Array for spawning stiffness matrices
k1=kSet[1]*I; # Element stiffness matrix 1
k2=kSet[2]*I; # Element stiffness matrix 2
k3=kSet[3]*I; # Element stiffness matrix 3
display(k1)
display(k2)
display(k3)
# + [markdown] slideshow={"slide_type": "slide"}
# leading to:
# $$\begin{bmatrix} K \end{bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 400 & -200 & 0 \\ 0 & -200 & 400 & -200 \\ 0 & 0 & -200 & 200 \end{bmatrix}$$
# + slideshow={"slide_type": "fragment"}
K=zeros(4,4); #Initialize stiffness array with all zeros
K[[1,2],[1,2]] .= K[[1,2],[1,2]] .+ k1; #Add element 1 contribution
K[[2,3],[2,3]] .= K[[2,3],[2,3]] .+ k2; #Add element 2 contribution
K[[3,4],[3,4]] .= K[[3,4],[3,4]] .+ k3; #Add element 3 contribution
display(K)
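The per-element superposition above generalizes to any chain of springs: scatter each 2×2 element matrix into the global rows and columns of its two nodes. A cross-language sketch of that assembly loop (in Python with 0-based node ids, whereas the notebook's Julia code is 1-based; illustration only):

```python
# Assemble the global stiffness matrix by scattering each element's
# 2x2 stiffness matrix into the rows/columns of its two nodes.
k_set = [200, 200, 200]           # spring stiffnesses k1, k2, k3
conn = [(0, 1), (1, 2), (2, 3)]   # element connectivity (0-based node ids)
n_nodes = 4

K = [[0.0] * n_nodes for _ in range(n_nodes)]
for k, (i, j) in zip(k_set, conn):
    ke = [[k, -k], [-k, k]]       # element stiffness matrix
    for a, p in enumerate((i, j)):
        for b, q in enumerate((i, j)):
            K[p][q] += ke[a][b]

print(K[1])  # [-200.0, 400.0, -200.0, 0.0]
```

The same loop handles any number of elements; only `k_set` and `conn` change.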
# + [markdown] slideshow={"slide_type": "slide"}
# ### Solving for unknown displacements
# * Considering the known global forces $F_{2x}=F_{3x}=0$, and using the known displacements $u_1=0$ and $u_4=0.2$, one can write:
# $$\begin{Bmatrix} F_{1x} \\0 \\ 0 \\ F_{4x}\end{Bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 400 & -200 & 0 \\ 0 & -200 & 400 & -200 \\ 0 & 0 & -200 & 200 \end{bmatrix}\begin{Bmatrix} 0 \\ u_2 \\ u_3 \\ 0.2\end{Bmatrix}$$
# $$\rightarrow\begin{Bmatrix} 0 \\ 0 \end{Bmatrix}=\begin{bmatrix} -200 & 400 & -200 & 0 \\ 0 & -200 & 400 & -200\end{bmatrix}\begin{Bmatrix}0 \\ u_2 \\ u_3 \\ 0.2\end{Bmatrix}$$
# * To reduce to the 2-3 node system, the term involving the known displacement $u_4$ ($-200\times0.2=-40$) is moved to the force side (hence the sign change):
# $$\begin{Bmatrix} 0 \\ 40 \end{Bmatrix}=\begin{bmatrix} 400 & -200 \\ -200 & 400\end{bmatrix}\begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix}$$
# * To solve this system compute the inverse of the square matrix or use something like the elimination method:
# $$\begin{cases} 400 u_2 - 200 u_3 = 0 & \downarrow +\times 2
# \\ -200 u_2 + 400 u_3 = 40 & \end{cases}$$
#
# $$\rightarrow \begin{cases} 400 u_2 - 200 u_3 = 0
# \\ 600 u_2 = 40 \end{cases} \rightarrow u_2 =\frac{40}{600}=\frac{1}{15}
# \rightarrow u_3 =\frac{2}{15}$$
# + [markdown] slideshow={"slide_type": "slide"}
# #### Numerical implementation
# + slideshow={"slide_type": "fragment"}
u4=0.2;
K_sub=K[[2,3],:];
f=-sum(u4*K_sub[:,4]); #Known-u4 terms moved to the force side (only the node-3 row couples to node 4 here)
display(K_sub)
display(f)
# + slideshow={"slide_type": "fragment"}
F23=[0 f]'; #Global forces for nodes 2 and 3
K23=K[[2,3],[2,3]]; #Stiffness matrix for 2-3 node system
u23=K23\F23 #Displacement array for the 2-3 node system
# + [markdown] slideshow={"slide_type": "slide"}
# ### Compute global force array using derived and known displacements
#
# $$\begin{Bmatrix} F \end{Bmatrix}=\begin{bmatrix} 200 & -200 & 0 & 0 \\ -200 & 400 & -200 & 0 \\ 0 & -200 & 400 & -200 \\ 0 & 0 & -200 & 200 \end{bmatrix}\begin{Bmatrix} 0 \\ 0.0667 \\ 0.1333 \\ 0.2\end{Bmatrix}=\begin{Bmatrix} -13.3333 \\ 0 \\ 0 \\ 13.3333\end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# #### Numerical implementation
# + slideshow={"slide_type": "fragment"}
U=[0 u23[1] u23[2] 0.2]'; #Full displacement array
F=K*U #Global force array
# + [markdown] slideshow={"slide_type": "slide"}
# # The variational method based on potential energy minimization
# * Variational methods use minimisation of some functional, e.g. potential energy, to get to the form: $\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$
# * Generally applies to 1D, 2D, and 3D elements
# + [markdown] slideshow={"slide_type": "slide"}
# ## Deriving a potential energy functional for minimization
# The potential energy of a system can be expressed as:
# $$\Pi=\Lambda-W$$
# $\Pi$ : Potential energy of the system
# $\Lambda$ : The sum of internal strain energy
# $W$ : Work done by external forces
#
# ## Example: Potential energy minimisation in a single spring\*
# Consider a single spring with a force at one end:
# 
# The internal strain energy $\Lambda$ is the area under the force-displacement graph (with $F=ku$):
# $$\Lambda=\frac{1}{2}Fu=\frac{1}{2}ku^2 \\ W=Fu$$
# Leading to:
# $$\Pi=\frac{1}{2}ku^2-Fu$$
#
# To obtain the minimum we solve:
# $$\frac{\partial\Pi}{\partial u}=ku-F=0$$
#
# \*Based on example 2.4 of: <NAME>, _"A First Course in the Finite Element Methods"_ (page 60 in the 6th edition, page 60 in the 5th edition)
# -
# Assuming a spring constant $k=12.5$, and applied force $F=500$, one would be able to derive the displacement from Hooke's law which would give:
# $$u=\frac{F}{k}=40$$
# However, let's instead use the principle of potential energy minimization. First we specify $\Pi$:
# $$\Pi=\frac{1}{2}ku^2-Fu$$
# Minimization provides:
# $$\frac{\partial\Pi}{\partial u}=ku-F=12.5u-500=0 \rightarrow u=40$$
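The same minimum can also be found numerically by sampling $\Pi$ on a grid of displacements and taking the minimizer. A quick Python sketch (illustration only; the notebook's own plotting code below does the Julia equivalent graphically):

```python
k, F = 12.5, 500.0

def Pi(u):
    # Potential energy of the single spring: 0.5*k*u^2 - F*u
    return 0.5 * k * u**2 - F * u

# Brute-force minimization over a grid from u = 0 to u = 80
us = [i / 100 for i in range(8001)]
u_min = min(us, key=Pi)
print(u_min)  # 40.0, matching u = F/k
```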
# + slideshow={"slide_type": "fragment"}
# Create plot variables
k=12.5;
F=500.0;
uMin=F/k;
pMin=0.5*k*uMin^2-F*uMin;
u=range(uMin-50,stop=uMin+50,length=500); #Displacements
p=0.5*k*u.^2-F*u;
# Visualize graph
using Plots #Load the plotting package (use pyplot() to select the PyPlot.jl backend)
plot(u,p,linewidth=2,xlabel="u",ylabel="P",color="blue",label="Potential energy vs Displacement") #Plot graph
scatter!([uMin],[pMin],markersize=6, c=:red,label="Minimum") #Plot graph
# + [markdown] slideshow={"slide_type": "slide"}
# ### Deriving stiffness matrices using the potential energy approach
# Recall the two node spring system:
# 
# Recalling $f_{1x}=k(u_1-u_2)$, and $f_{2x}=k(u_2-u_1)$, the potential energy $\Pi=\frac{1}{2}ku^2-Fu$ can be written:
# $$\Pi=\frac{1}{2}k(u_2-u_1)^2-f_{1x}u_1-f_{2x}u_2=\frac{1}{2}k(u_2^2-2u_1u_2+u_1^2)-f_{1x}u_1-f_{2x}u_2$$
# + [markdown] slideshow={"slide_type": "fragment"}
# To minimize this expression, partial derivatives with respect to the displacement components are computed and set to 0:
#
# $$\frac{\partial\Pi}{\partial u_1}=\frac{1}{2}k(-2u_2+2u_1)-f_{1x}=k(u_1-u_2)-f_{1x}=0$$
# $$\frac{\partial\Pi}{\partial u_2}=\frac{1}{2}k(-2u_1+2u_2)-f_{2x}=k(u_2-u_1)-f_{2x}=0$$
# + [markdown] slideshow={"slide_type": "fragment"}
# Which after simplification provides:
# $$\frac{\partial\Pi}{\partial u_1}=k(u_1-u_2)-f_{1x}=0$$
# $$\frac{\partial\Pi}{\partial u_2}=k(u_2-u_1)-f_{2x}=0$$
# + [markdown] slideshow={"slide_type": "slide"}
# Leading us to the familiar notation for the force components at node 1 and 2:
# $$f_{1x}=k(u_1-u_2)$$
# $$f_{2x}=k(u_2-u_1)$$
# Which once again can be expressed in matrix form as:
# $$\begin{Bmatrix} f_{1x} \\ f_{2x} \end{Bmatrix}=\begin{bmatrix} k & -k \\ -k & k\end{bmatrix}\begin{Bmatrix} u_1 \\ u_2\end{Bmatrix}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example: Deriving stiffness matrices using the potential energy approach
#
# Consider the three spring system below \*:
# 
# The system consists of 3 spring "elements" and 4 nodes. Nodes 1 and 4 are constrained from moving. A force of 25 kN is applied to node 3 in the positive x-direction. The spring stiffnesses are:
#
# $k_1=200$, $k_2=400$, and $k_3=600$
#
# \*Based on example 2.5 of: <NAME>, _"A First Course in the Finite Element Methods"_ (page 64 in the 6th edition, page 63 in the 5th edition)
#
# + [markdown] slideshow={"slide_type": "slide"}
# The element strain energies, expressed in matrix form, become:
# $$\Lambda^{(1)}=\frac{1}{2}\begin{Bmatrix} u_1 \\ u_2 \end{Bmatrix}^\top\begin{bmatrix} k_1 & -k_1 \\ -k_1 & k_1 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \end{Bmatrix} \textrm{,} \quad \Lambda^{(2)}=\frac{1}{2}\begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix}^\top\begin{bmatrix} k_2 & -k_2 \\ -k_2 & k_2 \end{bmatrix}\begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix} \textrm{,} \quad \Lambda^{(3)}=\frac{1}{2}\begin{Bmatrix} u_3 \\ u_4 \end{Bmatrix}^\top\begin{bmatrix} k_3 & -k_3 \\ -k_3 & k_3 \end{bmatrix}\begin{Bmatrix} u_3 \\ u_4 \end{Bmatrix}$$
# Note that due to the use of the transpose operation the matrix products yield scalar energy contributions.
#
# + [markdown] slideshow={"slide_type": "fragment"}
# In short notation we may write:
# $$\Lambda^{(i)}=\frac{1}{2}\begin{Bmatrix} u^{(i)} \end{Bmatrix}^\top\begin{bmatrix} K^{(i)} \end{bmatrix}\begin{Bmatrix} u^{(i)} \end{Bmatrix}$$
# Leading to the total strain energy:
# $$\Lambda=\sum_{i=1}^{3} \Lambda^{(i)}=\frac{1}{2}\begin{Bmatrix} u \end{Bmatrix}^\top\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### External work contributions in matrix form
# $$W=\sum_{i=1}^{4}F_{ix}u_i=\begin{Bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{Bmatrix}^\top\begin{Bmatrix} F_{1} \\ F_{2} \\ F_{3} \\ F_{4} \end{Bmatrix}=\begin{Bmatrix} u \end{Bmatrix}^\top\begin{Bmatrix} F \end{Bmatrix}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Potential energy in matrix form
# $$\Pi=\Lambda-W$$
# Using:
# $$W=\begin{Bmatrix} u \end{Bmatrix}^\top\begin{Bmatrix} F \end{Bmatrix}$$
# and
# $$\Lambda=\sum_{i=1}^{3} \Lambda^{(i)}=\frac{1}{2}\begin{Bmatrix} u \end{Bmatrix}^\top\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}$$
# we obtain:
# $$\Pi=\frac{1}{2}\begin{Bmatrix} u \end{Bmatrix}^\top\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}-\begin{Bmatrix} u \end{Bmatrix}^\top\begin{Bmatrix} F \end{Bmatrix}$$
# Minimisation through partial derivatives with $\begin{Bmatrix} u \end{Bmatrix}$ gives:
# $$\frac{\partial\Pi}{\partial \begin{Bmatrix} u \end{Bmatrix}}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}-\begin{Bmatrix} F \end{Bmatrix}=0$$
# Leading to the familiar expression:
# $$\begin{Bmatrix} F \end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Potential energy in matrix form
# $$\Pi=\frac{1}{2}\begin{Bmatrix} u \end{Bmatrix}^\top\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}-\begin{Bmatrix} u \end{Bmatrix}^\top\begin{Bmatrix} F \end{Bmatrix}=\sum_{i=1}^{3} \Pi^{(i)}$$
#
# $$\Pi=\overbrace{\bigg[\frac{1}{2}k_1(u_1^2-2u_1u_2+u_2^2)-F_{1}u_1\bigg]}^{\Pi^{(1)}}+\overbrace{\bigg[\frac{1}{2}k_2(u_2^2-2u_2u_3+u_3^2)-F_{2}u_2\bigg]}^{\Pi^{(2)}}+\overbrace{\bigg[\frac{1}{2}k_3(u_3^2-2u_3u_4+u_4^2)-F_{3}u_3-F_{4}u_4\bigg]}^{\Pi^{(3)}}$$
# + [markdown] slideshow={"slide_type": "fragment"}
# Minimisation using the partial derivatives:
# $$\frac{\partial\Pi}{\partial u_1}=k_1u_1-k_1u_2-F_{1}=k_1(u_1-u_2)-F_{1}=0$$
# $$\frac{\partial\Pi}{\partial u_2}=-k_1u_1+k_1u_2+k_2u_2-k_2u_3-F_{2}=k_1(u_2-u_1)+k_2(u_2-u_3)-F_{2}=0$$
# $$\frac{\partial\Pi}{\partial u_3}=-k_2u_2+k_2u_3+k_3u_3-k_3u_4-F_{3}=k_2(u_3-u_2)+k_3(u_3-u_4)-F_{3}=0$$
# $$\frac{\partial\Pi}{\partial u_4}=-k_3u_3+k_3u_4-F_{4}=k_3(u_4-u_3)-F_{4}=0$$
# Which can be cast in matrix form as:
# $$\begin{Bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{Bmatrix}=\begin{bmatrix} k_1 & -k_1 & 0 & 0 \\ -k_1 & k_1+k_2 & -k_2 & 0 \\ 0 & -k_2 & k_2+k_3 & -k_3 \\ 0 & 0 & -k_3 & k_3 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{Bmatrix}$$
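A numerical sanity check of this derivation: for any displacement vector, the finite-difference gradient of $\Pi$ should equal $Ku-F$ component by component. A Python sketch (illustration only) using this example's values $k_1=200$, $k_2=400$, $k_3=600$, $F_3=25000$:

```python
k1, k2, k3 = 200.0, 400.0, 600.0
F = [0.0, 0.0, 25000.0, 0.0]
K = [[ k1,    -k1,    0.0,  0.0],
     [-k1,  k1+k2,   -k2,   0.0],
     [ 0.0,   -k2, k2+k3,  -k3],
     [ 0.0,   0.0,   -k3,   k3]]

def Pi(u):
    # Total potential energy: element strain energies minus external work
    strain = (0.5*k1*(u[1]-u[0])**2 + 0.5*k2*(u[2]-u[1])**2
              + 0.5*k3*(u[3]-u[2])**2)
    work = sum(f*ui for f, ui in zip(F, u))
    return strain - work

u = [0.0, 1.0, 2.0, 0.5]   # arbitrary displacement vector
h = 1e-6
for i in range(4):
    up, um = u[:], u[:]
    up[i] += h
    um[i] -= h
    grad_i = (Pi(up) - Pi(um)) / (2 * h)                  # numerical dPi/du_i
    residual_i = sum(K[i][j]*u[j] for j in range(4)) - F[i]  # (K u - F)_i
    assert abs(grad_i - residual_i) < 1e-3
print("grad Pi matches K u - F")
```

Because $\Pi$ is quadratic, the central difference recovers the analytic gradient up to rounding error.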
# + [markdown] slideshow={"slide_type": "slide"}
# ## Summary
# * The key finite element equation:
# $$\begin{Bmatrix} F\end{Bmatrix}=\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u\end{Bmatrix}$$
# * There are three main types of methods to do so, two of which were covered here:
# * The **direct** (equilibrium) method
# * Simple, intuitive
# * 1D problems
# $$\begin{Bmatrix} f_{1x} \\ f_{2x} \end{Bmatrix}=\begin{bmatrix} k & -k \\ -k & k\end{bmatrix}\begin{Bmatrix} u_1 \\ u_2\end{Bmatrix}$$
# $$K=\sum_{i=1}^{3} K^{(i)}$$
#
# * **Variational** methods
# * More general
# * Requires existence of a functional to minimize
# $$\Pi=\frac{1}{2}\begin{Bmatrix} u \end{Bmatrix}^\top\begin{bmatrix} K \end{bmatrix}\begin{Bmatrix} u \end{Bmatrix}-\begin{Bmatrix} u \end{Bmatrix}^\top\begin{Bmatrix} F \end{Bmatrix}$$
#
# + [markdown] slideshow={"slide_type": "skip"}
# \newpage
# + [markdown] slideshow={"slide_type": "skip"}
# **About this document**
# This document was created using a [Jupyter notebook](https://jupyter.org/) which allows for the presentation of theory and equations, as well as live (running code) numerical implementations.
#
# The Jupyter notebook used here features the [Julia](https://julialang.org/) programming language. If you are interested in running this Jupyter notebook yourself, [download](https://julialang.org/downloads/) and [install](https://julialang.org/downloads/platform/) Julia and install [the Jupyter environment](https://jupyter.org/install). Once Julia is installed, open the command interface (REPL) for Julia and add the IJulia ("interactive Julia") package by running:
#
# ```julia
# using Pkg
# Pkg.add("IJulia")
# ```
# To run Jupyter call `jupyter notebook` from your Terminal/Command Prompt.
| notebooks/nb2_spring_equations_direct_variational.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <div style="width:1000px">
#
# <div style="float:right; width:98px; height:98px;">
# <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
# </div>
#
# <h1>Command Line Tool Creation</h1>
# <h3>Unidata Python Workshop</h3>
#
# <div style="clear:both"></div>
# </div>
#
# <hr style="height:2px;">
#
# <div style="float:right; width:250 px"><img src="http://media02.hongkiat.com/developers-command-line/command-line.jpg" alt="Command Line" style="height: 500px;"></div>
#
#
# ## Overview:
#
# * **Teaching:** 30 minutes
# * **Exercises:** 36 minutes
# + [markdown] slideshow={"slide_type": "slide"}
# # Create command line tools to do common jobs
# * Do common data manipulations
# * Make plots
# * Download data or batch process items
#
# <div style="float:right; width:250 px"><img src="https://unidata.github.io/MetPy/latest/_images/sounding.png" alt="Sounding" style="height: 500px;"></div>
# + [markdown] slideshow={"slide_type": "slide"}
# # You are already used to using command line tools
# * ls
# * grep
# * git
# * GEMPAK
# + [markdown] slideshow={"slide_type": "slide"}
# # Command line tools take command line arguments
#
# ```
# GREP(1) BSD General Commands Manual GREP(1)
#
# NAME
# grep, egrep, fgrep, zgrep, zegrep, zfgrep -- file pattern searcher
#
# SYNOPSIS
# grep [-abcdDEFGHhIiJLlmnOopqRSsUVvwxZ] [-A num] [-B num] [-C[num]] [-e pattern] [-f file] [--binary-files=value] [--color[=when]] [--colour[=when]]
# [--context[=num]] [--label] [--line-buffered] [--null] [pattern] [file ...]
#
# DESCRIPTION
# The grep utility searches any given input files, selecting lines that match one or more patterns. By default, a pattern matches an input line if the regular
# expression (RE) in the pattern matches the input line without its trailing newline. An empty expression matches every line. Each input line that matches at
# least one of the patterns is written to the standard output.
#
# grep is used for simple patterns and basic regular expressions (BREs); egrep can handle extended regular expressions (EREs). See re_format(7) for more informa-
# tion on regular expressions. fgrep is quicker than both grep and egrep, but can only handle fixed patterns (i.e. it does not interpret regular expressions).
# Patterns may consist of one or more lines, allowing any of the pattern lines to match a portion of the input.
#
# zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utili-
# ties.
#
# The following options are available:
#
# -A num, --after-context=num
# Print num lines of trailing context after each match. See also the -B and -C options.
#
# -a, --text
# Treat all files as ASCII text. Normally grep will simply print ``Binary file ... matches'' if files contain binary characters. Use of this option
# forces grep to output lines matching the specified pattern.
#
# -B num, --before-context=num
# Print num lines of leading context before each match. See also the -A and -C options.
#
# -b, --byte-offset
# The offset in bytes of a matched pattern is displayed in front of the respective matched line.
#
# -C[num, --context=num]
# Print num lines of leading and trailing context surrounding each match. The default is 2 and is equivalent to -A 2 -B 2. Note: no whitespace may be
# given between the option and its argument.
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# # Start by writing a "requirements document" for yourself
#
# * Describe the specific functionality of the tool
# * Describe the options that are available to the user
# * Include an expected set of outputs
# + [markdown] slideshow={"slide_type": "subslide"}
# ## You can also begin the design process
# * Functional blocks
# * Flow charts
# * Interface decisions
# + [markdown] slideshow={"slide_type": "slide"}
# # Let's design a tool to make a plot like this
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# # Requirements - Let's design the tool
# * Plot super national visible satellite image
# * Contour a recent GFS field of the user's choice and a level specified by the user
# * Plot with a basemap containing coastlines, country, state, and province borders
# * Make the plot for the current time or up to 24 hours in the past
# * Use the GFS and satellite data from the time closest to that requested (smallest delta t)
# * Indicate the model and satellite times on the plot
# * Show or save the output image
# * Have a sensible default behavior
# * Include help documentation to remind others and ourselves how to use the program
# + [markdown] slideshow={"slide_type": "slide"}
# # We'll use five command line arguments
#
# * **--gfsfield GFSFIELD** - *CF field name of data to contour from the GFS model.*
# * **--gfslevel GFSLEVEL** - *Model level to plot (in hPa)*
# * **--hours HOURS** - *Time to plot (must be present or past)*
# * **--savefig** - *Save out figure instead of displaying it*
# * **--imgformat IMGFORMAT** - *Format to save the resulting image as.*
# + [markdown] slideshow={"slide_type": "slide"}
# # We could write a parser ourselves, but there's no need
#
# The [argparse](https://docs.python.org/3/library/argparse.html) module has us covered!
# 
# + slideshow={"slide_type": "skip"}
# For demo purposes only
import sys
sys.argv = ['greetme.py', '--name', 'John', '--greeting', 'Hola,']
# + slideshow={"slide_type": "subslide"}
import argparse
# Create an instance
parser = argparse.ArgumentParser(description='A simple program that prints a greeting to the name specified')
# + slideshow={"slide_type": "subslide"}
# Add a name argument
parser.add_argument('--name', type=str, required=True, help='Name of the person being greeted')
parser.add_argument('--greeting', type=str, default='Hello, ', help='Greeting to be used')
# + slideshow={"slide_type": "subslide"}
# Parse the arguments
args = parser.parse_args()
# + slideshow={"slide_type": "subslide"}
# Print the greeting
print('{} {}'.format(args.greeting, args.name))
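A side note, not part of the original workshop script: `parse_args` also accepts an explicit argument list, which is a cleaner way to exercise a parser than patching `sys.argv` as we did above, and is handy in unit tests. The `--savefig` flag below mirrors the boolean option planned for our plotting tool.

```python
import argparse

parser = argparse.ArgumentParser(description='A simple greeting program')
parser.add_argument('--name', type=str, required=True,
                    help='Name of the person being greeted')
parser.add_argument('--greeting', type=str, default='Hello,',
                    help='Greeting to be used')
parser.add_argument('--savefig', action='store_true',
                    help='Boolean flag; True only when present on the command line')

# Passing a list bypasses sys.argv entirely
args = parser.parse_args(['--name', 'John'])
print(args.greeting, args.name)  # Hello, John
print(args.savefig)              # False
```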
# + [markdown] slideshow={"slide_type": "slide"}
# # Let's run it all together!
#
# * Create using the editor in the notebook
# * Run from the command line
| notebooks/Command_Line_Tools/Command_Line_Primer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="as7qaGHfkMBs"
# # Spam Text Classification
#
# In the second week of the inzva Applied AI program, we are going to create a spam text classifier using RNNs. Our data has 2 columns: the first column is the label and the second column is the text message itself. We are going to create our models using the following techniques
#
# - Embeddings
# - SimpleRNN
# - GRU
# - LSTM
# - Ensemble Model
#
# ### SimpleRNN
#
# Simple RNN layer. Nothing special. It is called 'Simple' because it is neither a GRU nor an LSTM layer. You can read the documentation at https://keras.io/api/layers/recurrent_layers/simple_rnn/
#
# ### LSTM
#
# https://keras.io/api/layers/recurrent_layers/lstm/
#
# We will use tokenization and padding to preprocess our data. We are going to create 3 different models and compare them.
# + [markdown] id="vwkef2s9kMBy"
# ## Libraries
# + id="y7NNppCukMBy"
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns; sns.set()
# + [markdown] id="Wc1ipql1kMBz"
# ## Dataset
# + id="4hbdUBpOkTi6"
# !wget https://raw.githubusercontent.com/inzva/Applied-AI-Study-Group/master/Applied%20AI%20Study%20Group%20%233%20-%20June%202020/week2/SpamTextClassification/datasets_2050_3494_SPAM%20text%20message%2020170820%20-%20Data.csv
# + id="ktqS26f7kMB0"
data = pd.read_csv("datasets_2050_3494_SPAM text message 20170820 - Data.csv")
# + [markdown] id="GJg-_v-fkMB0"
# Let's see the first 20 rows of our data and read the messages. What do you think: do they really look like spam messages?
# + id="l84tZ-t9kMB0"
data.head(20)
# + [markdown] id="LwsyFTHNkMB1"
# Let's calculate spam and non-spam message counts.
# + id="eBuV3oPNkMB1"
texts = []
labels = []
for i, label in enumerate(data['Category']):
texts.append(data['Message'][i])
if label == 'ham':
labels.append(0)
else:
labels.append(1)
texts = np.asarray(texts)
labels = np.asarray(labels)
print("number of texts :" , len(texts))
print("number of labels: ", len(labels))
# + id="kn4zY7rNkMB1"
labels
# + executionInfo={"elapsed": 696, "status": "ok", "timestamp": 1610187714975, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="L5BHOhkRkMB2"
hamc = sum(labels==0)
# + executionInfo={"elapsed": 701, "status": "ok", "timestamp": 1610187715273, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="O9AhYmprkMB2"
spamc = sum(labels==1)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 690, "status": "ok", "timestamp": 1610187727350, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="GbcK0vYBqoHY" outputId="e2141292-f58c-423c-98f0-0647a0e2ee6f"
spamc /(hamc+spamc)
# + [markdown] id="4U7oO3ptkMB2"
# ### The data is imbalanced. Making it even more imbalanced by removing some of the spam messages and observing the model performance would be a good exercise for exploring the imbalanced-dataset problem in a sequential-model context.
# + id="U2yiofnRkMB3"
texts
# + [markdown] id="IbmF0pGDkMB3"
# ## Data Preprocessing
#
# Each sentence has different lengths. We need to have sentences of the same length. Besides, we need to represent them as integers.
#
# As a concrete example, we have the following sentences
# - 'Go until jurong point crazy'
# - 'any other suggestions'
#
# First we will convert the words to integers, which is a way of doing Tokenization.
#
# - [5, 10, 26, 67, 98]
# - [7, 74, 107]
#
# Now we have two integer vectors with different length. We need to make them have the same length.
#
# ### Post Padding
# - [5, 10, 26, 67, 98]
# - [7, 74, 107, 0, 0]
#
# ### Pre Padding
# - [5, 10, 26, 67, 98]
# - [0, 0, 7, 74, 107]
#
# But you don't have to use padding in each task. For details please refer to this link https://github.com/keras-team/keras/issues/2375
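To make the pre/post distinction concrete, here is a minimal pure-Python sketch of what `pad_sequences` does (the real Keras function also exposes separate `padding` and `truncating` options, a `value` argument, and returns a NumPy array; this toy `pad` helper ties truncation to the padding side for brevity, and 0 is the reserved padding index):

```python
def pad(seqs, maxlen, padding='pre'):
    # Truncate each sequence to maxlen, then fill the remainder with 0s
    out = []
    for s in seqs:
        s = s[-maxlen:] if padding == 'pre' else s[:maxlen]
        fill = [0] * (maxlen - len(s))
        out.append(fill + s if padding == 'pre' else s + fill)
    return out

seqs = [[5, 10, 26, 67, 98], [7, 74, 107]]
print(pad(seqs, 5))                   # [[5, 10, 26, 67, 98], [0, 0, 7, 74, 107]]
print(pad(seqs, 5, padding='post'))   # [[5, 10, 26, 67, 98], [7, 74, 107, 0, 0]]
```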
# + [markdown] id="IbgB8xq7rnig"
# Bucketing in NLP
# + executionInfo={"elapsed": 4095, "status": "ok", "timestamp": 1610187975831, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="NbdlYgBZkMB3"
from keras.layers import SimpleRNN, Embedding, Dense, LSTM
from keras.models import Sequential
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# number of words in our vocabulary
max_features = 10000
# how many words from each document (max)?
maxlen = 500
# + [markdown] id="XhPa1x4skMB4"
# ## Train - Test Split
#
# We will take a simple approach and create only train and test sets. Of course, having train, validation, and test sets is the best practice.
# + executionInfo={"elapsed": 771, "status": "ok", "timestamp": 1610187997478, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="TlA0vFFNkMB4"
training_samples = int(len(labels)*0.8)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 531, "status": "ok", "timestamp": 1610187998656, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="70KBVL-okMB4" outputId="3d63da40-6108-476d-cee9-adc4f5a7ee6c"
training_samples
# + executionInfo={"elapsed": 781, "status": "ok", "timestamp": 1610188003081, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="Zsdko0vUkMB4"
validation_samples = len(labels) - training_samples
# + executionInfo={"elapsed": 730, "status": "ok", "timestamp": 1610188004490, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="5Mp2gWblkMB4"
assert len(labels) == (training_samples + validation_samples), "Not equal!"
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 624, "status": "ok", "timestamp": 1610188005072, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="lUdz3XI8kMB5" outputId="455b1c50-a7ff-4738-c49a-e1f84f96c5d8"
print("The number of training {0}, validation {1} ".format(training_samples, validation_samples))
# + [markdown] id="YCGqZlnPkMB5"
# ## Tokenization
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 787, "status": "ok", "timestamp": 1610188063165, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="QzvelwtkkMB5" outputId="60c5b10b-9002-4cf1-e084-de2f5a8e613a"
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print("Found {0} unique words: ".format(len(word_index)))
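Under the hood, `fit_on_texts` builds a word-to-integer index ordered by frequency (index 1 is the most frequent word; 0 stays reserved for padding), and `texts_to_sequences` maps each text through that index. A miniature approximation in plain Python (the real `Tokenizer` also strips punctuation and filters characters by default; the `demo_` names below are illustrative only):

```python
from collections import Counter

demo_texts = ["go until jurong point crazy", "any other suggestions", "go go go"]

# Count word frequencies, then assign indices by descending frequency
counts = Counter(w for t in demo_texts for w in t.lower().split())
demo_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}
demo_seqs = [[demo_index[w] for w in t.lower().split()] for t in demo_texts]

print(demo_seqs[2])  # [1, 1, 1] -- 'go' is the most frequent word
```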
# + executionInfo={"elapsed": 647, "status": "ok", "timestamp": 1610188075487, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="uXjt3D1QkMB6"
#data = pad_sequences(sequences, maxlen=maxlen, padding='post')
data = pad_sequences(sequences, maxlen=maxlen)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 652, "status": "ok", "timestamp": 1610188082448, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="c3izHKZYkMB6" outputId="1c57b80e-cdd6-45c9-fde5-2d03309b3f57"
print(data.shape)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 469, "status": "ok", "timestamp": 1610188083618, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="ZwC7k4pQkMB6" outputId="b4e3ccf0-cfb5-4594-e063-d82f84e59e55"
data
# + executionInfo={"elapsed": 2257, "status": "ok", "timestamp": 1610188157127, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgJiMOcrZqpbXIriR6K5ILflUN8020uEPpTTyTf=s64", "userId": "07368051345107183586"}, "user_tz": -180} id="AAbLEL6UkMB6"
np.random.seed(42)
# shuffle data
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
texts_train = data[:training_samples]
y_train = labels[:training_samples]
texts_test = data[training_samples:]
y_test = labels[training_samples:]
# + [markdown] id="Kg-kV3jUkMB7"
# ## Model Creation
#
# We will create 3 different models and compare their performances. One model will use a SimpleRNN layer, another a GRU layer, and the last an LSTM layer. The architecture of each model is the same. We could create deeper models, but we already get good results.
# + colab={"base_uri": "https://localhost:8080/"} id="X3iSJOIXkMB7" outputId="66065bc7-555e-49ac-f2d8-ffbb9c7ea7e8"
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
# + colab={"background_save": true} id="0OQ0OV7zkMB7"
acc = history_rnn.history['acc']
val_acc = history_rnn.history['val_acc']
loss = history_rnn.history['loss']
val_loss = history_rnn.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training acc')
plt.plot(epochs, val_loss, '-', color='blue', label='validation acc')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + id="cXmyXpqokMB8"
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_rnn = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
# + id="-8YEiKrZkMB8"
sum(y_test==1)
# + [markdown] id="nQdXz7CPkMB9"
# ## GRU
# + id="PnsETBACkMB9"
from keras.layers import GRU
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(GRU(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
metrics=['acc'])
history_gru = model.fit(texts_train, y_train, epochs=10,
                    batch_size=60, validation_split=0.2)
# + id="yBMJsZRtkMB9"
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_gru = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(pred, y_test))
# + [markdown] id="N2oCJ7T7kMB-"
# ## LSTM
# + id="HWRpSe82kMB-"
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history_lstm = model.fit(texts_train, y_train, epochs=10,
batch_size=60, validation_split=0.2)
# + id="ZyyOLNJ0kMB-"
acc = history_lstm.history['acc']
val_acc = history_lstm.history['val_acc']
loss = history_lstm.history['loss']
val_loss = history_lstm.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, '-', color='orange', label='training acc')
plt.plot(epochs, val_acc, '-', color='blue', label='validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.plot(epochs, loss, '-', color='orange', label='training loss')
plt.plot(epochs, val_loss, '-', color='blue', label='validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + id="uRQSxKx4kMB_"
pred = model.predict_classes(texts_test)
acc = model.evaluate(texts_test, y_test)
proba_lstm = model.predict_proba(texts_test)
from sklearn.metrics import confusion_matrix
print("Test loss is {0:.2f} accuracy is {1:.2f} ".format(acc[0],acc[1]))
print(confusion_matrix(y_test, pred))  # sklearn convention: (y_true, y_pred)
# + [markdown] id="Mo1-DLaXkMB_"
# ## Ensemble Model
# + id="CEOZH_24kMB_"
ensemble_proba = 0.25 * proba_rnn + 0.35 * proba_gru + 0.4 * proba_lstm
ensemble_proba[:5]
# + id="AHaqMAPJkMB_"
ensemble_class = np.array([1 if p >= 0.3 else 0 for p in ensemble_proba.ravel()])
# + id="Ml6-mwDnkMB_"
print(confusion_matrix(y_test, ensemble_class))  # sklearn convention: (y_true, y_pred)
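# The weighted-average ensemble above can be sketched with NumPy alone, using
# small made-up probability arrays (the `_demo` names are hypothetical; the
# weights and the 0.3 threshold mirror the cells above):

```python
import numpy as np

# Hypothetical per-model probabilities for five test samples.
proba_rnn_demo = np.array([0.1, 0.6, 0.8, 0.2, 0.5])
proba_gru_demo = np.array([0.2, 0.7, 0.9, 0.1, 0.4])
proba_lstm_demo = np.array([0.3, 0.8, 0.7, 0.3, 0.6])

# Same convex weights as the ensemble cell above.
ensemble_demo = 0.25 * proba_rnn_demo + 0.35 * proba_gru_demo + 0.4 * proba_lstm_demo

# Threshold at 0.3, as in the notebook.
classes_demo = (ensemble_demo >= 0.3).astype(int)
print(classes_demo)  # [0 1 1 0 1]
```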
| Applied AI Study Group #4 - January 2021/Week 3/Lecture Projects/3 - Spam Text Classification Sequential.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams["savefig.dpi"] = 300
plt.rcParams["savefig.bbox"] = "tight"
np.set_printoptions(precision=3, suppress=True)
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import scale, StandardScaler
# +
from keras.datasets import mnist
import keras
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# -
fig, axes = plt.subplots(1, 5, figsize=(12, 3))
for i, ax in enumerate(axes.ravel()):
ax.imshow(X_train[i, :, :], cmap='gray_r')
ax.set_xticks(())
ax.set_yticks(())
plt.savefig("images/mnist_org.png")
rng = np.random.RandomState(42)
perm = rng.permutation(28 * 28)
perm
X_train.shape
X_train_perm = X_train.reshape(-1, 28 * 28)[:, perm].reshape(-1, 28, 28)
X_test_perm = X_test.reshape(-1, 28 * 28)[:, perm].reshape(-1, 28, 28)
fig, axes = plt.subplots(1, 5, figsize=(12, 3))
for i, ax in enumerate(axes.ravel()):
ax.imshow(X_train_perm[i, :, :], cmap='gray_r')
ax.set_xticks(())
ax.set_yticks(())
plt.savefig("images/mnist_permuted.png")
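# As a side note, a NumPy-only sketch confirms that the fixed pixel permutation
# is invertible: it scrambles spatial structure but loses no information.

```python
import numpy as np

rng_check = np.random.RandomState(42)
perm_check = rng_check.permutation(28 * 28)

# Permute a random "image" and undo it via the inverse permutation (argsort).
img = rng_check.rand(28 * 28)
img_perm = img[perm_check]
img_back = img_perm[np.argsort(perm_check)]
print(np.allclose(img, img_back))  # True
```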
# # Densely connected networks
# +
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
Dense(512, input_shape=(784,), activation='relu'),
Dense(10, activation='softmax'),
])
model.compile("adam", "categorical_crossentropy", metrics=['accuracy'])
# y_train still holds integer labels here; one-hot encode to match categorical_crossentropy
history_callback_dense = model.fit(X_train.reshape(-1, 28 * 28) / 255, keras.utils.to_categorical(y_train),
                                   batch_size=128, epochs=10, verbose=1, validation_split=.1)
# -
model.summary()
# +
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
X_train_images = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1) / 255
X_test_images = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1) / 255
input_shape = (img_rows, img_cols, 1)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# +
from keras.layers import Conv2D, MaxPooling2D, Flatten
num_classes = 10
cnn = Sequential()
cnn.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, (3, 3), activation='relu'))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Flatten())
cnn.add(Dense(64, activation='relu'))
cnn.add(Dense(num_classes, activation='softmax'))
# -
cnn.compile("adam", "categorical_crossentropy", metrics=['accuracy'])
history_cnn = cnn.fit(X_train_images, y_train,
batch_size=128, epochs=10, verbose=1, validation_split=.1)
cnn.summary()
def plot_history(logger):
df = pd.DataFrame(logger.history)
df[['acc', 'val_acc']].plot()
plt.ylabel("accuracy")
#df[['loss', 'val_loss']].plot(linestyle='--', ax=plt.twinx())
#plt.ylabel("loss")
plot_history(history_cnn)
plot_history(history_callback_dense)
history_callback_dense_shuffle = model.fit(X_train_perm.reshape(-1, 28 * 28) / 255, y_train, batch_size=128,
epochs=10, verbose=1, validation_split=.1)
# +
cnn.compile("adam", "categorical_crossentropy", metrics=['accuracy'])
X_train_images_perm = X_train_perm.reshape(X_train_perm.shape[0], img_rows, img_cols, 1) / 255
history_cnn_perm = cnn.fit(X_train_images_perm, y_train,
batch_size=128, epochs=10, verbose=1, validation_split=.1)
# -
cnn = pd.DataFrame(history_cnn.history)  # note: this shadows the cnn model defined above
dense = pd.DataFrame(history_callback_dense.history)
dense_perm = pd.DataFrame(history_callback_dense_shuffle.history)
cnn_perm = pd.DataFrame(history_cnn_perm.history)
res_org = pd.DataFrame({'cnn_train': cnn.acc, 'cnn_val': cnn.val_acc, 'dense_train': dense.acc, 'dense_val': dense.val_acc})
res_org.plot()
plt.ylim(.7, 1)
plt.savefig("images/mnist_org_curve.png")
# +
res_perm = pd.DataFrame({'cnn_train': cnn_perm.acc, 'cnn_val': cnn_perm.val_acc, 'dense_train': dense_perm.acc, 'dense_val': dense_perm.val_acc})
res_perm.plot()
plt.ylim(.7, 1)
plt.savefig("images/mnist_perm_curve.png")
# +
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten
from keras.models import Model
num_classes = 10
inputs = Input(shape=(28, 28, 1))
conv1_1 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(inputs)
conv1_2 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(conv1_1)
conv1_3 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(conv1_2)
maxpool1 = MaxPooling2D(pool_size=(2, 2))(conv1_3)
conv2_1 = Conv2D(32, (3, 3), activation='relu', padding='same')(maxpool1)
conv2_2 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv2_1)
conv2_3 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv2_2)
maxpool2 = MaxPooling2D(pool_size=(2, 2))(conv2_3)
flat = Flatten()(maxpool2)
dense = Dense(64, activation='relu')(flat)
predictions = Dense(num_classes, activation='softmax')(dense)
model = Model(inputs=inputs, outputs=predictions)
# -
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
cnn_no_res = model.fit(X_train_images, y_train,
batch_size=128, epochs=10, verbose=1, validation_split=.1)
# +
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, add
from keras.models import Model
num_classes = 10
inputs = Input(shape=(28, 28, 1))
conv1_1 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(inputs)
conv1_2 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(conv1_1)
conv1_3 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(conv1_2)
skip1 = add([conv1_1, conv1_3])
conv1_4 = Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(skip1)
maxpool1 = MaxPooling2D(pool_size=(2, 2))(conv1_4)
conv2_1 = Conv2D(32, (3, 3), activation='relu', padding='same')(maxpool1)
conv2_2 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv2_1)
skip2 = add([maxpool1, conv2_2])
conv2_3 = Conv2D(32, (3, 3), activation='relu', padding='same')(skip2)
maxpool2 = MaxPooling2D(pool_size=(2, 2))(conv2_3)
flat = Flatten()(maxpool2)
dense = Dense(64, activation='relu')(flat)
predictions = Dense(num_classes, activation='softmax')(dense)
model = Model(inputs=inputs, outputs=predictions)
# -
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
cnn_w_res = model.fit(X_train_images, y_train,
batch_size=128, epochs=10, verbose=1, validation_split=.1)
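# A note on the `add` layers used above: they merge feature maps element-wise,
# which is why `padding='same'` is used so the branch shapes match. In NumPy
# terms (shapes here are illustrative):

```python
import numpy as np

# Hypothetical feature maps: (height, width, channels) must match exactly.
x = np.ones((28, 28, 32))          # output of an earlier layer
fx = 0.5 * np.ones((28, 28, 32))   # output of the convolution branch
skip = x + fx                      # what keras.layers.add([x_layer, f_layer]) computes
print(skip.shape, skip[0, 0, 0])   # (28, 28, 32) 1.5
```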
# +
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten
from keras.models import Model
num_classes = 10
inputs = Input(shape=(28, 28, 1))
conv1_1 = Conv2D(32, (3, 3), activation='relu',
padding='same')(inputs)
conv1_2 = Conv2D(32, (3, 3), activation='relu',
padding='same')(conv1_1)
maxpool1 = MaxPooling2D(pool_size=(2, 2))(conv1_2)
conv2_1 = Conv2D(32, (3, 3), activation='relu',
padding='same')(maxpool1)
conv2_2 = Conv2D(32, (3, 3), activation='relu',
padding='same')(conv2_1)
skip2 = add([maxpool1, conv2_2])
maxpool2 = MaxPooling2D(pool_size=(2, 2))(skip2)
flat = Flatten()(maxpool2)
dense = Dense(64, activation='relu')(flat)
predictions = Dense(num_classes, activation='softmax')(dense)
model = Model(inputs=inputs, outputs=predictions)
# -
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
cnn_w_res = model.fit(X_train_images, y_train,
batch_size=128, epochs=10, verbose=1, validation_split=.1)
from keras.layers import BatchNormalization, Conv2D, MaxPooling2D, Flatten, Dense, Activation
model = keras.Sequential([
    # keras.layers.Flatten(input_shape=(28, 28)),
    Dense(1024, input_shape=(784,), activation='relu'),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
# !mkdir preview
# +
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
img = load_img('images/carpet_snake.png')  # this is a PIL image
x = img_to_array(img)  # a NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape)  # add a batch axis: shape (1, height, width, 3)
# the .flow() command below generates batches of randomly transformed images
# and saves the results to the `preview/` directory
i = 0
for batch in datagen.flow(x, batch_size=1,
save_to_dir='preview', save_prefix='snek', save_format='jpeg'):
i += 1
if i > 16:
break # otherwise the generator would loop indefinitely
# -
| slides/aml-20-advanced-nets/aml-advanced-nets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jorgeanais/covid19_data_visualization_chile/blob/main/Covid19_Chile.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ekWZ_tBiAscj"
# # Plotting Covid19 data from MinCiencia Chile
#
# Data extracted from http://www.minciencia.gob.cl/COVID19
# Author: <NAME>
# Last update: 13 Apr 2021
# + id="8QjkZbTDAg83"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
# + id="l7Cjwfd-A64H"
# Data preprocessing
data_source = 'https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto5/TotalesNacionales.csv'
df = pd.read_csv(data_source, header=None).transpose()
df.columns = df.iloc[0]
df.drop(df.index[0], inplace=True)
df['Fecha'] = pd.to_datetime(df["Fecha"], format='%Y-%m-%d')
df.set_index('Fecha', inplace=True)
# data type to numeric instead of object
for c in df.columns:
df[c] = pd.to_numeric(df[c])
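# As an aside, the transpose-then-promote-first-row pattern used above can be
# seen on a tiny made-up frame:

```python
import pandas as pd

# Hypothetical wide table: the first column holds the field names.
raw = pd.DataFrame([["Fecha", "2021-01-01", "2021-01-02"],
                    ["Casos nuevos totales", "100", "120"]])
tidy = raw.transpose()
tidy.columns = tidy.iloc[0]      # promote the first row to column names
tidy = tidy.drop(tidy.index[0])  # then drop that row from the data
print(tidy.columns.tolist())     # ['Fecha', 'Casos nuevos totales']
```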
# + colab={"base_uri": "https://localhost:8080/"} id="zZdvQcsIBWEn" outputId="199fc72a-8ff3-422a-99c0-0c2b96291462"
df.info()
# + id="7sOXEBw2RNSF"
df['Media movil semanal Casos nuevos totales'] = df['Casos nuevos totales'].rolling(window=7, center=True).mean()
# + colab={"base_uri": "https://localhost:8080/", "height": 405} id="GKBWqUVuM_UO" outputId="343f3e47-1219-4bf7-995b-f759f7d858a5"
plt.figure(figsize=(16, 6))
sns.lineplot(data=df, x=df.index, y='Casos nuevos totales')
sns.lineplot(data=df, x=df.index, y='Media movil semanal Casos nuevos totales')
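# The centered 7-day rolling mean used above behaves like this on a synthetic
# series (values 1..14):

```python
import pandas as pd

s = pd.Series(range(1, 15))  # 1, 2, ..., 14
smooth = s.rolling(window=7, center=True).mean()
# With center=True, the first valid value sits at index 3: mean of 1..7 = 4.0;
# the three positions at each end are NaN because the window is incomplete.
print(smooth.iloc[3])  # 4.0
```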
| Covid19_Chile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
# We will start by learning to __PULL__ the changes you made at home (last weeks's homework) to your local copy on the M drive.
#
# This will allow you to synchronise your university, online and personal computer repositories.
#
# Open Jupyter notebook:
# <br> Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook
# <br>(Start >> すべてのプログラム >> Programming >> Anaconda3 >> JupyterNotebook)
#
# Navigate to where your interactive textbook is stored.
#
# Open __S1_Introduction_to_Version_Control__.
#
# Navigate to section: __Pulling Changes to a Local Repository__
# Open Jupyter notebook:
# <br> Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook
# <br>(Start >> すべてのプログラム >> Programming >> Anaconda3 >> JupyterNotebook)
#
# Navigate to where your interactive textbook is stored.
#
# Open __2_Control_flow_and_data_structures__.
# + [markdown] slideshow={"slide_type": "slide"}
# # Control Flow
#
# # Lesson Goal
#
# - Compose simple programs that control the order in which the operations we have studied so far are executed.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Objectives
#
# - Use control __statements__ and __loops__ to determine the flow of a program.
#
# - Express collections of multiple variables as `list`, `tuple` and dictionary (`dict`) data structures.
#
# - Use iteration to visit entries in a data structure
#
# - Learn to select the right data structure for an application
# + [markdown] slideshow={"slide_type": "slide"}
# Why we are studying this:
#
# To use Python to solve more complex engineering problems you are likely to encounter involving:
# - multi-variable values (e.g. vectors)
# - large data sets (e.g. experiment results)
# - manipulating your data using logic
# <br>
# (e.g. sorting and categorising answers to an operation performed on multiple data points)
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# Lesson structure:
# - Learn new skills together:
# - __Demonstration__ on slides.
# - __Completing examples__ in textbooks.
# - __Feedback answers__ (verbally / whiteboards)
# - Practise alone: __Completing review excercises__.
# - Skills Review: Updating your online git repository __from home__.
# - New skills: Updating your local repository using an __upstream repository.__
# - __Summary__.
# + [markdown] slideshow={"slide_type": "slide"}
# Each time you complete a section of your textbook, please wait to feedback the answer before moving on.
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# What is a *__control statement__*?
#
# Let's start with an example from the last seminar...
# + [markdown] slideshow={"slide_type": "slide"}
# ## Control Statements
# In the last seminar we looked at a simple computer program that returned Boolean (True or False) variables...
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# Based on the current time of day, the program answers two questions:
#
#
# >__Is it lunchtime?__
#
# >`True`
#
#
# if it is lunch time.
#
# >__Is it time for work?__
#
# >`True`
#
# if it is within working hours.
# + slideshow={"slide_type": "slide"}
time = 13.05 # current time
work_starts = 8.00 # time work starts
work_ends = 17.00 # time work ends
lunch_starts = 13.00 # time lunch starts
lunch_ends = 14.00 # time lunch ends
# variable lunchtime is True or False
lunchtime = time >= lunch_starts and time < lunch_ends
# variable work_time is True or False
work_time = time >= work_starts and time < work_ends
print("Is it lunchtime?")
print(lunchtime)
print("Is it time for work?")
print(work_time)
# + [markdown] slideshow={"slide_type": "slide"}
# What if we now want our computer program to do something based on these answers?
#
# To do this, we need to use *control statements*.
#
# Control statements allow us to make decisions in a program.
#
# This decision making is known as *control flow*.
#
# Control statements are a fundamental part of programming.
# + [markdown] slideshow={"slide_type": "slide"}
# Here is a control statement in pseudo code:
#
# This is an `if` statement.
#
# if A is true
# Perform task X
#
# For example
#
# if lunchtime is true
# Eat lunch
#
# + [markdown] slideshow={"slide_type": "slide"}
# We can check if an alternative to the `if` statement is true using an `else if` statement.
#
#
# if A is true
# Perform task X (only)
#
# else if B is true
# Perform task Y (only)
#
#
#
# if lunchtime is true
# Eat lunch
#
# else if work_time is true
# Do work
# + [markdown] slideshow={"slide_type": "slide"}
# Often it is useful to include an `else` statement.
#
# If none of the `if` and `else if` statements are satisfied, the code following the `else` statement will be executed.
#
# if A is true
# Perform task X (only)
#
# else if B is true
# Perform task Y (only)
#
# else
# Perform task Z (only)
#
#
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# if lunchtime is true
# Eat lunch
#
# else if work_time is true
# Do work
#
# else
# Go home
# + [markdown] slideshow={"slide_type": "slide"}
# Let's get a better understanding of control flow statements by completing some examples.
# -
# <a id='IfElse'></a>
#
# ## `if` and `else` statements
#
# Below is a simple example that demonstrates a Python if-else control statement.
#
# It uses the lunch/work example from the previous seminar.
#
# __Note:__ In Python, "else if" is written: `elif`
# +
time = 13.05 # current time
work_starts = 8.00 # time work starts
work_ends = 17.00 # time work ends
lunch_starts = 13.00 # time lunch starts
lunch_ends = 14.00 # time lunch ends
# variable lunchtime is True or False
lunchtime = time >= lunch_starts and time < lunch_ends
# variable work_time is True or False
work_time = time >= work_starts and time < work_ends
#print("Is it lunchtime?")
#print(lunchtime)
#print("Is it time for work?")
#print(work_time)
if lunchtime == True:
print("Eat lunch")
elif work_time == True:
print("Do work")
else:
print("Go home")
# -
# Here is another example, using algebraic operators to modify the value of an initial variable, `x`.
# +
#The input to the program is variable `x`.
x = -10.0 # Initial x value
if x > 0.0:
print('Initial x is greater than zero') #The program prints a message...
x -= 20.0 # ...and modifies `x`.
elif x < 0.0:
print('Initial x is less than zero')
x += 21.0
else:
print('Initial x is not less than zero and not greater than zero, therefore it must be zero')
x *= 2.5
print("Modified x = ", x)
# + [markdown] slideshow={"slide_type": "slide"}
# The modification of `x` and the message printed depend on the initial value of `x`.
#
# __Note:__ The program uses the short-cut algebraic operators that you learnt to use in the last seminar.
# + [markdown] slideshow={"slide_type": "slide"}
# __Try it yourself__
#
# In your textbook, try changing the value of `x` a few times.
# Re-run the cell to see the different paths the program can follow.
# -
# ### Look carefully at the structure of the `if`, `elif`, `else`, control statement:
#
#
# __The control statement begins with an `if`__, followed by the expression to check. <br>
# At the end of the `if` statement you must put a colon (`:`) <br>
# ````python
# if x > 0.0:
# ````
# After the `if` statement, indent the code to be run in the case that the `if` statement is `True`. <br>
#
#
# To end the code to be run, simply stop indenting:
#
# ````python
# if x > 0.0:
# print('Initial x is greater than zero')
# x -= 20.0
# ````
# + [markdown] slideshow={"slide_type": "skip"}
# The indent can be any number of spaces, but the number of spaces must be the same for all lines of code to be run if True.
# Jupyter Notebooks automatically indent 4 spaces.
# This is considered good Python style.
# -
# - `if` statement is `True` (e.g. (`x > 0.0`) is True):
# - The indented code is executed.
# - The control block is exited.
# - The program moves past any subsequent `elif` or `else` statements.
# <br>
#
#
# - `if` statement is `False`:
# the program moves past the indented code to the next (non-indented) part of the program... <br>
# __In this case an `elif` (else if)__ check is performed.
# <br>
# (Notice that the code is structured in the same way as the `if` statement.):
#
# ```python
# if x > 0.0:
# print('Initial x is greater than zero')
# x -= 20.0
#
# elif x < 0.0:
# print('Initial x is less than zero')
# x += 21.0
# ```
# - If (`x < 0.0`) is true:
# - The indented code is executed.
# - The control block is exited.
# - The program moves past any subsequent `elif` or `else` statements.
#
#
# - `elif` statement is `False`:
# the program moves past the indented code to the next (non-indented) part of the program. <br>
#
#
#
# __If none of the preceding statements are true__ [(`x > 0.0`) is false and (`x < 0.0`) is false], the code following the `else` statement is executed.
#
# ```python
# if x > 0.0:
# print('Initial x is greater than zero')
# x -= 20.0
#
# elif x < 0.0:
# print('Initial x is less than zero')
# x += 21.0
#
# else:
# print('Initial x is not less than zero and not greater than zero, therefore it must be zero')
# ```
# Evaluating data against different criteria is extremely useful for solving real-world mathematical problems.
# Let's look at a simple example...
# ### Real-World Example: currency trading
#
# To make a commission (profit), a currency trader sells US dollars to travellers at a rate below the market rate.
#
# - The multiplier they use to calculate the rate is shown in the table.
#
# |Amount (JPY) |fraction of market rate |
# |--------------------------------------------|-------------------------|
# | Less than $10,000$ | 0.9 |
# | From $10,000$ and less than $100,000$ | 0.925 |
# | From $100,000$ and less than $1,000,000$ | 0.95 |
# | From $1,000,000$ and less than $10,000,000$ | 0.97 |
# | $10,000,000$ or more | 0.98 |
#
# - The currency trader charges more if the customer pays with cash.
# If the customer pays with cash, the currency trader reduces the rate by an __additional__ 10% after conversion.
# (If the transaction is made electronically, they do not).
#
#
# __Current market rate:__ 1 JPY = 0.0091 USD.
#
# __Effective rate:__ The rate that the customer receives based on the amount in JPY to be changed.
# The program calculates the __effective rate__ using:
# - The reduction based on the values in the table.
# - An additional 10% reduction if the transaction is made in cash.
# +
JPY = 10000 # The amount in JPY to be changed into USD
cash = False # True if selling cash, otherwise False
market_rate = 0.0091 # 1 JPY is worth this many dollars at the market rate
# Apply the appropriate reduction depending on the amount being sold
# JPY < 10,000:
if JPY < 10000:
USD = 0.9 * market_rate * JPY
# JPY < 100,000:
elif JPY < 100000:
USD = 0.925 * market_rate * JPY
# JPY < 1,000,000:
elif JPY < 1000000:
USD = 0.95 * market_rate * JPY
# JPY < 10,000,000:
elif JPY < 10000000:
USD = 0.97 * market_rate * JPY
# JPY >= 10,000,000:
else:
USD = 0.98 * market_rate * JPY
if cash:
USD *= 0.9 # recall that this is shorthand for USD = 0.9*USD
print("Amount in JPY sold:", JPY)
print("Amount in USD purchased:", USD)
print("Effective rate:", USD/JPY)
# -
# __Note:__
# - We can use multiple `elif` statements.
# - When the program executes and exits a control block, it moves to the next `if` statement.
#
#
# __Try it yourself__
#
# In your textbook, try changing the values of `JPY` and `cash` a few times.
#
# Re-run the cell to see the different paths the program can follow.
# <a id='ForLoops'></a>
#
# ## `for` loops
#
# *Loops* are used to execute a command repeatedly.
# <br>
# A loop is a block that repeats an operation a specified number of times (loops).
#
# To learn about loops we are going to use the function `range()`.
# ### `range`.
#
# The function `range` gives us a sequence of *integer* numbers.
#
# `range(3, 6)` returns integer values starting from 3 and stopping before 6.
#
# i.e.
#
# > 3,4,5
#
# Note this does not include 6.
#
#
# We can change the starting value.
#
# For example for integer values starting at 0 and ending at 4:
#
# `range(0,4)`
#
# returns:
#
# > 0, 1, 2, 3
#
# `range(4)` is a __shortcut__ for range(0, 4)
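# We can make the values produced by `range` visible by converting them to a `list`:

```python
print(list(range(3, 6)))   # [3, 4, 5]
print(list(range(0, 4)))   # [0, 1, 2, 3]
print(list(range(4)))      # [0, 1, 2, 3] — the shortcut gives the same values
```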
# + [markdown] slideshow={"slide_type": "slide"}
# ### Simple `for` loops
# -
# An example of a for loop.
#
# The statement
# ```python
# for i in range(0, 5):
# ```
# says that we want to run the indented code five times.
# + slideshow={"slide_type": "slide"}
for i in range(0, 5):
print(i)
# -
# The first time through, the value of i is equal to 0.
# <br>
# The second time through, its value is 1.
# <br>
# Each loop the value `i` increases by 1 (0, 1, 2, 3, 4) until the last time when its value is 4.
# + slideshow={"slide_type": "slide"}
for i in range(0, 5):
print(i)
# -
# Look carefully at the structure of the `for` loop:
# - `for` is followed by the condition being checked.
# - At the end of the `for` statement you must put a colon (`:`)
# - The indented code that follows is run each time the code loops. <br>
# (Any number of spaces, but the __same number of spaces__ for the entire `for` loop.)
# <br>
# - To end the `for` loop, simply stop indenting.
# + slideshow={"slide_type": "slide"}
for i in range(-2, 3):
print(i)
print('The end of the loop')
# -
# The above loop starts from -2 and executes the indented code for each value of i in the range (-2, -1, 0, 1, 2).
# <br>
# When the loop has executed the code for the final value `i = 2`, it moves on to the next unindented line of code.
# + slideshow={"slide_type": "slide"}
for n in range(4):
print("----")
print(n, n**2)
# -
# The above executes 4 loops.
#
# The statement
# ```python
# for n in range(4):
# ```
# says that we want to loop over four integers, starting from 0.
#
# Each loop the value `n` increases by 1 (0, 1, 2, 3).
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# __Try it yourself__
# <br>
# Go back and change the __range__ of input values in the last three cells and observe the change in output.
#
# + [markdown] slideshow={"slide_type": "slide"}
# If we want to step by three rather than one:
# -
for n in range(0, 10, 3):
print(n)
# + [markdown] slideshow={"slide_type": "slide"}
# If we want to step backwards rather than forwards we __must__ include the step size:
# -
for n in range(10, 0, -1):
print(n)
# For example:
for n in range(10, 0):
print(n)
# Does not return any values because there are no values that lie between 10 and 0 when counting in the positive direction from 10.
# + [markdown] slideshow={"slide_type": "slide"}
# __Try it yourself.__
#
# In the cell below write a `for` loop that:
# - loops __backwards__ through a range that starts at `n = 10` and ends at `n = 1` (and includes `n = 1`).
# - prints `n`$^2$ at each loop.
#
# +
# For loop
# + [markdown] slideshow={"slide_type": "slide"}
# For loops are useful for performing operations on large data sets.
#
# We often encounter large data sets in real-world mathematical problems.
# + [markdown] slideshow={"slide_type": "slide"}
# A simple example of this is converting multiple values using the same mathematical equation to create a look-up table...
# + [markdown] slideshow={"slide_type": "slide"}
# ### Real-world Example: conversion table from degrees Fahrenheit to degrees Celsius
#
# We can use a `for` loop to create a conversion table from degrees Fahrenheit ($T_F$) to degrees Celsius ($T_c$).
#
# Conversion formula:
#
# $$
# T_c = 5(T_f - 32)/9
# $$
#
# Computing the conversion from -100 F to 200 F in steps of 20 F (not including 200 F):
# +
print("T_f, T_c")
for Tf in range(-100, 200, 20):
print(Tf, (Tf - 32) * 5 / 9)
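# As an extra example, the same conversion can be collected into a dictionary
# (`dict`, one of the data structures named in the objectives) instead of only
# being printed:

```python
# Build a lookup table: Fahrenheit temperature -> Celsius temperature.
conversion = {}
for Tf in range(-100, 200, 20):
    conversion[Tf] = (Tf - 32) * 5 / 9

# Now any tabulated temperature can be looked up directly, e.g. 0 F:
print(conversion[0])
```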
# + [markdown] slideshow={"slide_type": "slide"}
# <a id='WhileLoops'></a>
#
# ## `while` loops
#
# `for` loops perform an operation a specified number of times.
#
# A `while` loop performs a task while a specified statement is true.
#
# For example...
# + slideshow={"slide_type": "slide"}
x = -2
print("Start of while statement")
while x < 5:
print(x)
x += 1 # Increment x
print("End of while statement")
# -
# The structure of a `while` loop is similar to a `for` loop.
# - `while` is followed by the condition being checked.
# - At the end of the `while` statement you must put a colon (`:`)
# - The indented code that follows the `while` statement is executed and repeated until the `while` condition (e.g. `x < 5`) is `False`.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# It can be quite easy to crash your computer using a `while` loop.
#
# e.g. if we don't modify the value of x each time the code loops:
# ```python
# x = -2
# while x < 5:
# print(x)
# # x += 1
# ```
# will continue indefinitely, since `x < 5` will always be `True`.
#
# This is called an *infinite loop*.
#
#
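# Another defensive pattern (an addition to the examples above, not a
# replacement) is to cap the number of iterations with a counter, so the loop
# gives up rather than running forever:

```python
x = -2
iterations = 0
max_iterations = 100   # safety cap: stop even if the condition never turns False

while x < 5 and iterations < max_iterations:
    x += 1
    iterations += 1

print(x, iterations)   # 5 7 — the loop finished normally, well under the cap
```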
# + [markdown] slideshow={"slide_type": "slide"}
# One way to avoid getting stuck in an infinite loop is to consider using a `for` loop instead.
# +
x = -2
print("Start of for statement")
for y in range(x,5):
print(y)
print("End of for statement")
# + [markdown] slideshow={"slide_type": "slide"}
# Here is another example of a `while` loop.
# +
x = 0.9
while x > 0.001:
# Square x (shortcut x *= x)
x = x * x
print(x)
# + [markdown] slideshow={"slide_type": "slide"}
# This example will generate an infinite loop if $x \ge 1$ as `x` will always be greater than 0.001.
#
# e.g.
# ```python
# x = 2
#
# while x > 0.001:
# x = x * x
# print(x)
# ```
#
# In this case using a for loop is less appropriate; we might not know beforehand how many steps are required before `x > 0.001` becomes false.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# To make code robust, it is good practice to check that $x < 1$ before entering the `while` loop e.g.
# +
x = 0.9
if x < 1:
while x > 0.001:
# Square x (shortcut x *= x)
x = x * x
print(x)
else:
print("x is greater than one, infinite loop avoided")
# -
# __Try it for yourself:__
#
# Find the cell shown above in your textbook and change the value of x to above or below 1.
#
# Observe the output.
#
# + [markdown] slideshow={"slide_type": "slide"}
# __Try it for yourself:__
#
# In the cell below:
# - Create a variable,`x`, with the initial value 50
# - Each loop:
# 1. print x
# 1. reduce the value of x by half
# - Exit the loop when `x` < 3
# +
# While loop
# + [markdown] slideshow={"slide_type": "slide"}
# ## `break` and `continue`.
#
# <a id='break'></a>
# ### `break`
#
# Sometimes we want to break out of a `for` or `while` loop.
#
# For example in a `for` loop we can check if something is true, and then exit the loop prematurely, e.g
# -
for x in range(10):
print(x)
if x == 5:
print("Time to break out")
break
# + [markdown] slideshow={"slide_type": "slide"}
# Let's look at how we can use this in a program...
#
# + [markdown] slideshow={"slide_type": "slide"}
# Let's write a program that checks the (integer) numbers up to 50, __finds the prime numbers__, and prints them.
#
# __Prime number:__ A positive integer, greater than 1, that has no positive divisors other than 1 and itself (2, 3, 5, 7, 11, 13, 17, ...)
# + slideshow={"slide_type": "slide"}
N = 50 # Check numbers up 50 for primes (excludes 50)
# Loop over all numbers from 2 to 50 (excluding 50)
for n in range(2, N):
# Assume that n is prime
n_is_prime = True
# Check if n can be divided by m
# m ranges from 2 to n (excluding n)
for m in range(2, n):
# Check if the remainder when n/m is equal to zero
# If the remainder is zero it is not a prime number
if n % m == 0:
n_is_prime = False
# If n is prime, print to screen
if n_is_prime:
print(n)
# + [markdown] slideshow={"slide_type": "slide"}
# Notice that our program contains a second `for` loop.
#
# For each value of n, it loops through incrementing values of m in the range (2 to n):
#
# ```python
# # Check if n can be divided by m
# # m ranges from 2 to n (excluding n)
# for m in range(2, n):
# ```
# before incrementing to the next value of n.
#
# We call this a *nested* loop.
#
# The indents in the code show where loops are nested.
#
# Here it is again without the comments:
# + slideshow={"slide_type": "slide"}
N = 50
# for loop 1
for n in range(2, N):
n_is_prime = True
# for loop 2
for m in range(2, n):
if n % m == 0:
n_is_prime = False
if n_is_prime:
print(n)
# + [markdown] slideshow={"slide_type": "slide"}
# Notice that one of the prime numbers is 17.
#
# In the program below, a `break` statement is added.
# -
N = 50
#for loop 1
for n in range(2, N):
n_is_prime = True
# for loop 2
for m in range(2, n):
if n % m == 0:
n_is_prime = False
# if n is a prime number...
if n_is_prime:
print(n)
# ...and if n == 17, break out of loop 1 and loop 2
if n ==17:
break
# Only values up to N are considered.
#
# ```python
# for n in range(2, N):
# ```
# If `n` is equal to 17, the program stops running *both* `for` loop 1 and `for` loop 2:
#
# ```python
# if n == 17:
# break
# ```
#
# This is useful where we know the exact value we want to stop at.
#
# More often, the exact value we want to stop at is unknown...
# + [markdown] slideshow={"slide_type": "slide"}
# __Let's consider a different break statement.__
#
# Let's *comment out* the conditional: `if n == 17`, and *uncomment* the conditional: `if n > 20` so your code looks like:
#
# ``` python
#
# # ...and if n == 17, break out of loop 1 and loop 2
# # if n ==17:
# # break
#
# #...and if n > 20, break out of loop 1 and loop 2:
# if n > 20:
# break
# ```
# Using comments stops the program from running this code, while allowing you to easily refer to it or change it back later.
#
# Run the code again.
#
# -
N = 50
#for loop 1
for n in range(2, N):
n_is_prime = True
# for loop 2
for m in range(2, n):
if n % m == 0:
n_is_prime = False
# if n is a prime number...
if n_is_prime:
print(n)
# ...and if n == 17, break out of loop 1 and loop 2
# if n ==17:
# break
#...and if n > 20, break out of loop 1 and loop 2:
if n > 20:
break
#
#
# Note that the conditional `if n > 20` is only run if `n` is a prime number.
#
# This is useful when the exact value we want to stop at is unknown, as the code will keep looping until a *prime number* (not just any number) exceeding 20 is reached.
#
# __Try it yourself__
#
# __In the cell below copy and paste your code from the cell above.__
#
# __Edit your code to print all of the prime numbers *under* 20.__
# + slideshow={"slide_type": "-"}
# Copy and paste your code here.
# + [markdown] slideshow={"slide_type": "slide"}
# A simple way to do this is to place the conditional and break statement *before* we print the value of `n`.
#
# ``` python
# if n_is_prime:
#
# if n > 20:
# break
#
# print(n)
#
# ```
#
# If `n` > 20 the program breaks out of the loop *before* printing the number.
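Putting it together, one possible version of the exercise (a sketch, not the only correct answer):

```python
N = 50
primes = []
for n in range(2, N):
    n_is_prime = True
    for m in range(2, n):
        if n % m == 0:
            n_is_prime = False
    if n_is_prime:
        # break *before* printing, so only primes under 20 appear
        if n > 20:
            break
        print(n)
        primes.append(n)
```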
# + [markdown] slideshow={"slide_type": "slide"}
# <a id='Continue'></a>
#
# ### `continue`
#
# Sometimes, instead of stopping the loop we want to go to the next iteration in a loop, skipping the remaining code.
#
# For this we use `continue`.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# The example below loops over 20 numbers (0 to 19) and checks if the number is divisible by 4.
#
# If the number is not divisible by 4:
#
# - it prints a message
# - it moves to the next value.
#
# If the number is divisible by 4 it *continues* to the next value in the loop, without printing.
# -
for j in range(20):
    if j % 4 == 0: # Check remainder of j/4
continue # continue to next value of j
print(j, "is not a multiple of 4")
# ## Review Exercises
# Here are a series of engineering problems for you to practise each of the new Python skills that you have learnt today.
# ### Review Exercise: `while` loops.
# In the cell below, write a program that repeatedly prints the value of `x`, then decreases the value of x by 0.5, as long as `x` remains positive.
#
# <a href='#WhileLoops'>Jump to While Loops</a>
x = 4
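One possible solution sketch (assuming the loop should stop once `x` reaches zero):

```python
x = 4
printed = []             # collected only to check the result
while x > 0:
    print(x)
    printed.append(x)
    x = x - 0.5
```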
# ### Review Exercise: `for` loops and `if`, `else` and `continue` statements.
# __(A)__ In the cell below, use a for loop to print the square roots of the first 25 odd positive integers.
# <br> (Remember, the square root of a number, $x$ can be found by $x^{1/2}$)
#
# __(B)__ If the number generated is greater than 3 and smaller than 5, print "`skip`" and __`continue`__ to the next iteration *without* printing the number.
# <br><br>Hint: Refer to __Logical Operators__ (Seminar 2).
#
# <a href='#ForLoops'>Jump to for loops</a>
#
# <a href='#IfElse'>Jump to if and else statements</a>
#
# <a href='#Continue'>Jump to continue</a>
# +
# square roots of the first 25 odd positive integers
# -
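One possible sketch for parts (A) and (B) (hedged — the skip condition is read here as the *square root* being strictly between 3 and 5):

```python
roots = []
for i in range(25):
    n = 2 * i + 1             # the first 25 odd positive integers: 1, 3, ..., 49
    root = n ** 0.5
    if root > 3 and root < 5: # i.e. n strictly between 9 and 25
        print("skip")
        continue
    print(root)
    roots.append(root)
```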
# # Updating your git repository
#
# You have made several changes to your interactive textbook.
#
# The final thing we are going to do is add these changes to your online repository so that:
# - I can check your progress
# - You can access the changes from outside of the university server.
#
# > Save your work.
# > <br> `git add -A`
# > <br>`git commit -m "A short message describing changes"`
# > <br>`git push origin master`
#
# <br>Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__.
# # Summary
#
# [*McGrath, Python in easy steps, 2013*]
#
# - The Python `if` keyword performs a conditional test on an expression for a Boolean value of True or False.
# - Alternatives to an `if` test are provided using `elif` and `else` tests.
# - A `while` loop repeats until a test expression returns `False`.
# - A `for`...`in`... loop iterates over each item in a specified data structure (or string).
# - The `range()` function generates a numerical sequence that can be used to specify the length of the `for` loop.
# - The `break` and `continue` keywords interrupt loop iterations.
# # Homework
#
# 1. __PULL__ the changes you made in-class today to your personal computer before starting your homework.
# 1. __COMPLETE__ any unfinished Review Exercises.
# 1. __PUSH__ the changes you make at home to your online repository.
#
# <br>Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__.
# # Next Seminar
#
# If possible, please bring your personal computer to class.
#
# We are going to complete an exercise: __pulling changes made to the original version of the textbook (homework solutions and new chapters) to a local repository.__
#
# If you cannot bring your personal computer with you, you can practise using a laptop provided in class, but you will need to repeat the steps at home in your own time.
| 2_Control_Flow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <hr style="height:2px;">
#
# # Demo: Neural network training for combined denoising and upsampling of synthetic 3D data
#
# This notebook demonstrates training a CARE model for a combined denoising and upsampling task, assuming that training data was already generated via [1_datagen.ipynb](1_datagen.ipynb) and has been saved to disk to the file ``data/my_training_data.npz``. Note that the training approach is exactly the same as in the standard CARE approach, what differs is the [training data generation](1_datagen.ipynb) and [prediction](3_prediction.ipynb).
#
# Note that training a neural network for actual use should be done on more (representative) data and with more training time.
#
# More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.
# +
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
from tifffile import imread
from csbdeep.utils import axes_dict, plot_some, plot_history
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.io import load_training_data
from csbdeep.models import Config, UpsamplingCARE
# -
# The TensorFlow backend uses all available GPU memory by default, hence it can be useful to limit it:
# +
# limit_gpu_memory(fraction=1/2)
# -
# <hr style="height:2px;">
#
# # Training data
#
# Load training data generated via [1_datagen.ipynb](1_datagen.ipynb), use 10% as validation data.
# +
(X,Y), (X_val,Y_val), axes = load_training_data('data/my_training_data.npz', validation_split=0.1, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
# -
plt.figure(figsize=(12,3))
plot_some(X_val[:5,...,0,0],Y_val[:5,...,0,0])
plt.suptitle('5 example validation patches (ZY slice, top row: source, bottom row: target)');
# <hr style="height:2px;">
#
# # CARE model
#
# Before we construct the actual CARE model, we have to define its configuration via a `Config` object, which includes
# * parameters of the underlying neural network,
# * the learning rate,
# * the number of parameter updates per epoch,
# * the loss function, and
# * whether the model is probabilistic or not.
#
# The defaults should be sensible in many cases, so a change should only be necessary if the training process fails.
#
# ---
#
# <span style="color:red;font-weight:bold;">Important</span>: Note that for this notebook we use a very small number of update steps per epoch for immediate feedback, whereas this number should be increased considerably (e.g. `train_steps_per_epoch=400`, `train_batch_size=16`) to obtain a well-trained model.
config = Config(axes, n_channel_in, n_channel_out, train_steps_per_epoch=25, train_batch_size=4)
print(config)
vars(config)
# We now create an upsampling CARE model with the chosen configuration:
model = UpsamplingCARE(config, 'my_model', basedir='models')
# <hr style="height:2px;">
#
# # Training
#
# Training the model will likely take some time. We recommend to monitor the progress with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) (example below), which allows you to inspect the losses during training.
# Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.
#
# You can start TensorBoard from the current working directory with `tensorboard --logdir=.`
# Then connect to [http://localhost:6006/](http://localhost:6006/) with your browser.
#
# 
history = model.train(X,Y, validation_data=(X_val,Y_val))
# Plot final training history (available in TensorBoard during training):
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
# <hr style="height:2px;">
#
# # Evaluation
#
# Example results for validation images.
plt.figure(figsize=(12,4.5))
_P = model.keras_model.predict(X_val[:5])
if config.probabilistic:
_P = _P[...,:(_P.shape[-1]//2)]
plot_some(X_val[:5,...,0,0],Y_val[:5,...,0,0],_P[...,0,0],pmax=99.5)
plt.suptitle('5 example validation patches (ZY slice)\n'
'top row: input (source), '
'middle row: target (ground truth), '
'bottom row: predicted from source');
# <hr style="height:2px;">
#
# # Export model to be used with CSBDeep **Fiji** plugins and **KNIME** workflows
#
# See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.
model.export_TF()
| examples/upsampling3D/2_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
catLookup = {'aardvark':'Orycteropus afer',
'aardwolf':'Proteles cristata',
'baboon':'Papio anubis',
'batEaredFox':'Otocyon megalotis',
'buffalo':'Syncerus caffer',
'bushbuck':'Tragelaphus scriptus',
'caracal':'Caracal caracal',
'cheetah':'Acinonyx jubatus',
'civet':'Civettictis civetta',
'dikDik':'Madoqua kirkii',
'eland':'Tragelaphus oryx',
'elephant':'Loxodonta africana',
'gazelleGrants':'Nanger granti',
'gazelleThomsons':'Eudorcas thomsonii',
'genet':'Genetta genetta',
'giraffe':'Giraffa camelopardalis',
'guineaFowl':'Numida meleagris',
'hare':'Lepus capensis',
'hartebeest':'Alcelaphus buselaphus',
'hippopotamus':'Hippopotamus amphibius',
'honeyBadger':'Mellivora capensis',
'hyenaSpotted':'Crocuta crocuta',
'hyenaStriped':'Hyaena hyaena',
'impala':'Aepyceros melampus',
'jackal':'Canis',
'koriBustard':'Ardeotis kori',
'leopard':'Panthera pardus',
'lionFemale':'Panthera leo',
'lionMale':'Panthera leo',
'mongoose':'Herpestes',
'ostrich':'Struthio camelus',
'otherBird':'Aves',
'porcupine':'Hystrix cristata',
'reedbuck':'Redunca',
'reptiles':'Reptilia',
'rhinoceros':'Diceros bicornis',
'rodents':'Rodentia',
'secretaryBird':'Sagittarius serpentarius',
'serval':'Felis serval',
'topi':'Damaliscus korrigum',
'vervetMonkey':'Chlorocebus pygerythrus',
'warthog':'Phacochoerus africanus',
'waterbuck':'Kobus ellipsiprymnus',
'wildcat':'Felis lybica',
'wildebeest':'Connochaetes taurinus',
'zebra':'Equus quagga',
'zorilla': 'Ictonyx striatus'
}
pickle.dump(catLookup, open('inat_category_lookup_SS.p','wb'))
| research/map_data_to_inat/lookupiNatCategoriesSS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# +
#loading train and test data
train_data=pd.read_csv('B:/spyder/risk_analytics_train.csv',header=0)
test_data=pd.read_csv('B:/spyder/risk_analytics_test.csv',header=0)
# -
train_data.head()
train_data.shape
test_data.shape
print(train_data.isnull().sum())
#handling the missing value of categorical data by replacing with mode.
colname1=["Gender","Married","Dependents","Self_Employed","Loan_Amount_Term"]
for x in colname1[:]:
train_data[x].fillna(train_data[x].mode()[0],inplace=True)
print(train_data.isnull().sum())
#imputing the numerical missing values using the mean
train_data["LoanAmount"].fillna(train_data["LoanAmount"].mean(),inplace=True)
print(train_data.isnull().sum())
#imputing the value for the credit history column differently
train_data['Credit_History'].fillna(value=0,inplace=True)
print(train_data.isnull().sum())
# +
#transforming categorical data to numerical
from sklearn import preprocessing
colname=["Gender","Married","Education","Self_Employed","Property_Area","Loan_Status"]
le={}
for x in colname:
le[x]=preprocessing.LabelEncoder()
for x in colname:
    train_data[x]=le[x].fit_transform(train_data[x])
#convert loan status as y=1 and n=0
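`LabelEncoder` assigns integer codes to the *sorted* unique labels, which is why the `Loan_Status` values `N`/`Y` become 0/1. A minimal plain-Python sketch of the same idea:

```python
labels = ['Y', 'N', 'Y', 'Y', 'N']
# LabelEncoder sorts the unique labels and maps each to its index
mapping = {lab: i for i, lab in enumerate(sorted(set(labels)))}
encoded = [mapping[lab] for lab in labels]
print(mapping)   # {'N': 0, 'Y': 1}
print(encoded)   # [1, 0, 1, 1, 0]
```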
# +
#creating training and testing and running the model.
X_train=train_data.values[:,1:-1]
Y_train=train_data.values[:,-1]
Y_train=Y_train.astype(int)
# +
from sklearn.model_selection import train_test_split
x_train , x_test , y_train , y_test = train_test_split(X_train , Y_train ,
test_size = 0.3 ,
random_state = 10)
# -
x_test.shape
x_train.shape
# +
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
# Reuse the scaler fitted on the training split, so no test-set
# statistics leak into the preprocessing
x_test=scaler.transform(x_test)
# -
from sklearn import svm
svc_model=svm.SVC(kernel='rbf',C=1.0,gamma=0.1)
svc_model.fit(x_train,y_train)
##Predict on train data
Y_pred=svc_model.predict(x_train)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_train,Y_pred))
print(accuracy_score(y_train,Y_pred))
print(classification_report(y_train,Y_pred))
###Predict on test data for validation
Y_pred=svc_model.predict(x_test)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_test,Y_pred))
print(accuracy_score(y_test,Y_pred))
print(classification_report(y_test,Y_pred))
# +
####Now use XGBoost
from xgboost import XGBClassifier
classifier=XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bytree=1, gamma=0, learning_rate=0.01, max_delta_step=0,
max_depth=3, min_child_weight=1, missing=None, n_estimators=200,
n_jobs=1, nthread=None, objective='binary:logistic', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=True, subsample=1)
classifier.fit(x_train,y_train)
# -
Y_pred=classifier.predict(x_train)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_train,Y_pred))
print(accuracy_score(y_train,Y_pred))
print(classification_report(y_train,Y_pred))
Y_pred=classifier.predict(x_test)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_test,Y_pred))
print(accuracy_score(y_test,Y_pred))
print(classification_report(y_test,Y_pred))
# +
####Now use logistic regression
from sklearn.linear_model import LogisticRegression
#creating model
classifier_lr = (LogisticRegression())
#fitting training data to model
classifier_lr.fit(x_train , y_train)
# -
Y_pred=classifier_lr.predict(x_train)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_train,Y_pred))
print(accuracy_score(y_train,Y_pred))
print(classification_report(y_train,Y_pred))
Y_pred=classifier_lr.predict(x_test)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_test,Y_pred))
print(accuracy_score(y_test,Y_pred))
print(classification_report(y_test,Y_pred))
# +
from sklearn.tree import DecisionTreeClassifier
model_DecisionTree=DecisionTreeClassifier()
model_DecisionTree.fit(x_train,y_train)
# -
Y_pred=model_DecisionTree.predict(x_train)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_train,Y_pred))
print(accuracy_score(y_train,Y_pred))
print(classification_report(y_train,Y_pred))
Y_pred=model_DecisionTree.predict(x_test)
from sklearn.metrics import confusion_matrix , accuracy_score , classification_report
print(confusion_matrix(y_test,Y_pred))
print(accuracy_score(y_test,Y_pred))
print(classification_report(y_test,Y_pred))
# +
###As the efficiency of the SVM model is quite good, we use this model for prediction on the test data
# -
#preprocessing the test data
test_data.head()
print(test_data.isnull().sum())
#handling the missing value of test data
colname2=["Gender","Dependents","Self_Employed","Loan_Amount_Term"]
for x in colname2[:]:
test_data[x].fillna(test_data[x].mode()[0],inplace=True)
print(test_data.isnull().sum())
test_data["LoanAmount"].fillna(test_data["LoanAmount"].mean(),inplace=True)
print(test_data.isnull().sum())
#imputing the value for the credit history column differently
test_data['Credit_History'].fillna(value=0,inplace=True)
print(test_data.isnull().sum())
# +
from sklearn import preprocessing
colname3=["Gender","Married","Education","Self_Employed","Property_Area"]
le={}
for x in colname3:
le[x]=preprocessing.LabelEncoder()
for x in colname3:
    test_data[x]=le[x].fit_transform(test_data[x])
# -
#test_data.head()
X_test=test_data.values[:,1:]
# +
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
# Fit the scaler on the full training data and reuse it for the test data,
# so the test set is scaled with training-set statistics
scaler.fit(X_train)
X_train=scaler.transform(X_train)
X_test=scaler.transform(X_test)
# -
Y_pred=svc_model.predict(X_test)
print(list(Y_pred))
Y_pred_col=list(Y_pred)
test_data=pd.read_csv('B:/spyder/risk_analytics_test.csv',header=0)
test_data["Y_prediction"]=Y_pred_col
test_data.head()
test_data.to_csv('test_data.csv')
# +
svc_model=svm.SVC(kernel='rbf',C=1,gamma=0.1)
# sklearn.cross_validation was removed in scikit-learn 0.20;
# use sklearn.model_selection instead
from sklearn.model_selection import KFold, cross_val_score
#performing the k-fold cross-validation
kfold_cv=KFold(n_splits=10)
print(kfold_cv)
#running the model using scoring as accuracy
kfold_cv_result=cross_val_score(estimator=svc_model,X=X_train,y=Y_train,
                                scoring="accuracy",cv=kfold_cv)
print(kfold_cv_result)
#finding the mean
print(kfold_cv_result.mean())
# -
| Loan Default Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#default_exp notebook.showdoc
# -
# export
from local.imports import *
from local.notebook.core import *
from local.notebook.export import *
import inspect,enum,nbconvert
from IPython.display import Markdown,display
from IPython.core import page
from nbconvert import HTMLExporter
# # Show doc
# > Functions to show the doc cells in notebooks
# +
from local.core import compose, add_docs, chk
from local.data.pipeline import Pipeline
from local.data.external import untar_data, ConfigKey
test_cases = [
Pipeline, #Basic class
ConfigKey, #Enum
compose, #Func with star args and type annotation
untar_data, #Func with defaults
add_docs, #Func with kwargs
Path.ls #Monkey-patched
]
# -
# ## Gather the information
# The inspect module lets us know quickly if an object is a function or a class, but it doesn't distinguish between ordinary classes and enums.
# export
def is_enum(cls):
"Check if `cls` is an enum or another type of class"
return type(cls) in (enum.Enum, enum.EnumMeta)
assert is_enum(ConfigKey)
assert not is_enum(Pipeline)
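A quick illustration of why `inspect` alone is not enough (the `Color` and `Plain` classes below are hypothetical examples): `inspect.isclass` is `True` for both, but only the enum's *metaclass* is `enum.EnumMeta`.

```python
import inspect
import enum

class Color(enum.Enum):
    RED = 1
    GREEN = 2

class Plain:
    pass

# Both look like classes to inspect...
print(inspect.isclass(Color), inspect.isclass(Plain))   # True True
# ...but only the enum is an instance of enum.EnumMeta
print(type(Color) is enum.EnumMeta, type(Plain) is enum.EnumMeta)
```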
# ### Links
#hide
#Tricking jupyter notebook to have a __file__ attribute. All _file_ will be replaced by __file__
_file_ = Path('local').absolute()/'notebook'/'show_doc.py'
# We don't link to all PyTorch functions, just the ones in an index we keep. We can easily add a reference with the following convenience function when writing the docs.
# +
# export
def _get_pytorch_index():
if not (Path(_file_).parent/'index_pytorch.txt').exists(): return {}
return json.load(open(Path(_file_).parent/'index_pytorch.txt', 'r'))
def add_pytorch_index(func_name, url):
"Add `func_name` in the PyTorch index for automatic links."
index = _get_pytorch_index()
if not url.startswith("https://pytorch.org/docs/stable/"):
url = "https://pytorch.org/docs/stable/" + url
index[func_name] = url
json.dump(index, open(Path(_file_).parent/'index_pytorch.txt', 'w'), indent=2)
# -
# `url` can be the full url or just the part after `https://pytorch.org/docs/stable/`, see the example below.
#hide
ind,ind_bak = Path(_file_).parent/'index_pytorch.txt',Path(_file_).parent/'index_pytorch.bak'
if ind.exists(): shutil.move(ind, ind_bak)
assert _get_pytorch_index() == {}
add_pytorch_index('Tensor', 'tensors.html#torch-tensor')
assert _get_pytorch_index() == {'Tensor':'https://pytorch.org/docs/stable/tensors.html#torch-tensor'}
if ind_bak.exists(): shutil.move(ind_bak, ind)
add_pytorch_index('Tensor', 'tensors.html#torch-tensor')
add_pytorch_index('device', 'tensor_attributes.html#torch-device')
add_pytorch_index('DataLoader', 'data.html#torch.utils.data.DataLoader')
# export
def is_fastai_module(name):
"Test if `name` is a fastai module."
dir_name = os.path.sep.join(name.split('.'))
return (Path(_file_).parent.parent/f"{dir_name}.py").exists()
assert is_fastai_module('data.external')
assert is_fastai_module('core')
assert not is_fastai_module('export')
# export
#Might change once the library is renamed fastai.
def _is_fastai_class(ft): return belongs_to_module(ft, 'fastai_source')
def _strip_fastai(s): return re.sub(r'^local\.', '', s)
FASTAI_DOCS = ''
# export
def doc_link(name, include_bt:bool=True):
"Create link to documentation for `name`."
cname = f'`{name}`' if include_bt else name
#Link to modules
if is_fastai_module(name): return f'[{cname}]({FASTAI_DOCS}/{name}.html)'
#Link to fastai functions
try_fastai = source_nb(name, is_name=True)
if try_fastai:
page = '.'.join(try_fastai.split('_')[1:]).replace('.ipynb', '.html')
return f'[{cname}]({FASTAI_DOCS}/{page}#{name})'
#Link to PyTorch
try_pytorch = _get_pytorch_index().get(name, None)
if try_pytorch: return f'[{cname}]({try_pytorch})'
#Leave as is
return cname
assert doc_link('data.pipeline') == f'[`data.pipeline`]({FASTAI_DOCS}/data.pipeline.html)'
assert doc_link('Pipeline') == f'[`Pipeline`]({FASTAI_DOCS}/data.pipeline.html#Pipeline)'
assert doc_link('Transform.create') == f'[`Transform.create`]({FASTAI_DOCS}/data.transforms.html#Transform.create)'
assert doc_link('Tensor') == '[`Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor)'
assert doc_link('Tenso') == '`Tenso`'
#export
_re_backticks = re.compile(r"""
# Catches any link of the form \[`obj`\](old_link) or just `obj` to either update old links or add the link to the docs of obj
\[` # Opening [ and `
([^`]*) # Catching group with anything but a `
`\] # ` then closing ]
(?: # Beginning of non-catching group
\( # Opening (
[^)]* # Anything but a closing )
\) # Closing )
) # End of non-catching group
| # OR
` # Opening `
([^`]*) # Anything but a `
` # Closing `
""", re.VERBOSE)
# export
def add_doc_links(text):
"Search for doc links for any item between backticks in `text`."
def _replace_link(m): return doc_link(m.group(1) or m.group(2))
return _re_backticks.sub(_replace_link, text)
# This function not only adds links to backticked keywords, it also updates links that are already in the text.
tst = add_doc_links('This is an example of `Pipeline`')
assert tst == "This is an example of [`Pipeline`](/data.pipeline.html#Pipeline)"
tst = add_doc_links('Here we already add a link in [`Tensor`](fake)')
assert tst == "Here we already add a link in [`Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor)"
# ### Links to source
# +
#export
def _is_type_dispatch(x): return x.__class__.__name__ == "TypeDispatch"
def _unwrapped_type_dispatch_func(x): return next(iter(x.funcs.values())) if _is_type_dispatch(x) else x
def _is_property(x): return type(x)==property
def _has_property_getter(x): return _is_property(x) and hasattr(x, 'fget') and hasattr(x.fget, 'func')
def _property_getter(x): return x.fget.func if _has_property_getter(x) else x
def _unwrapped_func(x):
x = _unwrapped_type_dispatch_func(x)
x = _property_getter(x)
return x
# +
#export
SOURCE_URL = "https://github.com/fastai/fastai_dev/tree/master/dev/"
def get_source_link(func):
"Return link to `func` in source code"
func = _unwrapped_func(func)
try: line = inspect.getsourcelines(func)[1]
except Exception: return ''
module = inspect.getmodule(func).__name__.replace('.', '/') + '.py'
return f"{SOURCE_URL}{module}#L{line}"
# -
#hide
from local.data.core import Categorize, DataBunch
assert get_source_link(Categorize.encodes).startswith(SOURCE_URL + 'local/data/core.py')
assert get_source_link(DataBunch.train_dl).startswith(SOURCE_URL + 'local/data/core.py')
#hide
assert get_source_link(Pipeline).startswith(SOURCE_URL + 'local/data/pipeline.py')
get_source_link(Pipeline)
# As important as the source code, we want to quickly jump to where the function is defined in a dev notebook.
#export
_re_header = re.compile(r"""
# Catches any header in markdown with the title in group 1
^\s* # Beginning of text followed by any number of whitespace
\#+ # One # or more
\s* # Any number of whitespace
(.*) # Catching group with anything
$ # End of text
""", re.VERBOSE)
# +
#export
FASTAI_NB_DEV = 'https://nbviewer.jupyter.org/github/fastai/fastai_docs/blob/master/dev/'
def get_nb_source_link(func, local=False, is_name=None):
"Return a link to the notebook where `func` is defined."
func = _unwrapped_type_dispatch_func(func)
pref = '' if local else FASTAI_NB_DEV
is_name = is_name or isinstance(func, str)
src = source_nb(func, is_name=is_name, return_all=True)
if src is None: return '' if is_name else get_source_link(func)
find_name,nb_name = src
nb = read_nb(nb_name)
pat = re.compile(f'^{find_name}\s+=|^(def|class)\s+{find_name}\s*\(', re.MULTILINE)
if len(find_name.split('.')) == 2:
clas,func = find_name.split('.')
pat2 = re.compile(f'@patch\s*\ndef\s+{func}\s*\([^:]*:\s*{clas}\s*(?:,|\))')
else: pat2 = None
for i,cell in enumerate(nb['cells']):
if cell['cell_type'] == 'code':
if re.search(pat, cell['source']): break
if pat2 is not None and re.search(pat2, cell['source']): break
if re.search(pat, cell['source']) is None and (pat2 is not None and re.search(pat2, cell['source']) is None):
return '' if is_name else get_function_source(func)
header_pat = re.compile(r'^\s*#+\s*(.*)$')
while i >= 0:
cell = nb['cells'][i]
if cell['cell_type'] == 'markdown' and _re_header.search(cell['source']):
title = _re_header.search(cell['source']).groups()[0]
anchor = '-'.join([s for s in title.split(' ') if len(s) > 0])
return f'{pref}{nb_name}#{anchor}'
i -= 1
return f'{pref}{nb_name}'
# -
assert get_nb_source_link(Pipeline.decode) == get_nb_source_link(Pipeline)
assert get_nb_source_link('Pipeline') == get_nb_source_link(Pipeline)
assert get_nb_source_link(chk) == f'{FASTAI_NB_DEV}01_core.ipynb#Foundational-functions'
assert get_nb_source_link(chk, local=True) == f'01_core.ipynb#Foundational-functions'
assert get_nb_source_link('Path.ls') == f'{FASTAI_NB_DEV}01_core.ipynb#File-and-network-functions'
# You can either pass an object or its name (by default `is_name` will look if `func` is a string or not, but you can override if there is some inconsistent behavior). `local` will return a local link.
# export
def nb_source_link(func, is_name=None, disp=True):
"Show a relative link to the notebook where `func` is defined"
is_name = is_name or isinstance(func, str)
func_name = func if is_name else qual_name(func)
link = get_nb_source_link(func, local=True, is_name=is_name)
if disp: display(Markdown(f'[{func_name}]({link})'))
else: return link
# This function assumes you are in one notebook in the dev folder, otherwise you use `disp=False` to get the relative link. You can either pass an object or its name (by default `is_name` will look if `func` is a string or not, but you can override if there is some inconsistent behavior).
nb_source_link(Pipeline)
assert nb_source_link(chk, disp=False) == f'01_core.ipynb#Foundational-functions'
assert nb_source_link('chk', disp=False) == f'01_core.ipynb#Foundational-functions'
# ## Show documentation
# export
def type_repr(t):
"Representation of type `t` (in a type annotation)"
if getattr(t, '__args__', None):
args = t.__args__
if len(args)==2 and args[1] == type(None):
return f'`Optional`\[{type_repr(args[0])}\]'
reprs = ', '.join([type_repr(o) for o in args])
return f'{doc_link(get_name(t))}\[{reprs}\]'
else: return doc_link(get_name(t))
# The representation tries to find doc links if possible.
from torch import Tensor
tst = type_repr(Optional[Tensor])
assert tst == '`Optional`\\[[`Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor)\\]'
tst = type_repr(Union[Tensor, float])
assert tst == '`Union`\\[[`Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor), `float`\\]'
# +
# export
_arg_prefixes = {inspect._VAR_POSITIONAL: '\*', inspect._VAR_KEYWORD:'\*\*'}
def format_param(p):
"Formats function param to `param1:Type=val`. Font weights: param1=bold, val=italic"
arg_prefix = _arg_prefixes.get(p.kind, '') # asterisk prefix for *args and **kwargs
res = f"**{arg_prefix}`{p.name}`**"
if hasattr(p, 'annotation') and p.annotation != p.empty: res += f':{type_repr(p.annotation)}'
if p.default != p.empty:
default = getattr(p.default, 'func', p.default) #For partials
default = getattr(default, '__name__', default) #Tries to find a name
if is_enum(default.__class__): #Enum have a crappy repr
res += f'=*`{default.__class__.__name__}.{default.name}`*'
else: res += f'=*`{repr(default)}`*'
return res
# +
sig = inspect.signature(untar_data)
params = [format_param(p) for _,p in sig.parameters.items()]
assert params == [
'**`url`**',
'**`fname`**=*`None`*',
'**`dest`**=*`None`*',
'**`c_key`**=*`ConfigKey.Data`*',
'**`force_download`**=*`False`*',
"**`extract_func`**=*`'tar_extract'`*"]
sig = inspect.signature(compose)
params = [format_param(p) for _,p in sig.parameters.items()]
assert params[0] == '**\\*`funcs`**'
# -
# export
def _format_enum_doc(enum, full_name):
"Formatted `enum` definition to show in documentation"
vals = ', '.join(enum.__members__.keys())
return f'<code>{full_name}</code>',f'<code>Enum</code> = [{vals}]'
tst = _format_enum_doc(ConfigKey, 'ConfigKey')
assert tst == ('<code>ConfigKey</code>', '<code>Enum</code> = [Data, Archive, Model]')
# +
# export
def _escape_chars(s):
return s.replace('_', '\_')
def _format_func_doc(func, full_name=None):
"Formatted `func` definition to show in documentation"
try:
sig = inspect.signature(func)
fmt_params = [format_param(param) for name,param
in sig.parameters.items() if name not in ('self','cls')]
except: fmt_params = []
name = f'<code>{full_name or func.__name__}</code>'
arg_str = f"({', '.join(fmt_params)})"
f_name = f"<code>class</code> {name}" if inspect.isclass(func) else name
return f'{f_name}',f'{name}{arg_str}'
# -
assert _format_func_doc(compose) == ('<code>compose</code>',
'<code>compose</code>(**\\*`funcs`**, **`order`**=*`None`*)')
# export
def _format_cls_doc(cls, full_name):
"Formatted `cls` definition to show in documentation"
parent_class = inspect.getclasstree([cls])[-1][0][1][0]
name,args = _format_func_doc(cls, full_name)
if parent_class != object: args += f' :: {doc_link(get_name(parent_class))}'
return name,args
assert _format_cls_doc(Pipeline, 'Pipeline') == ('<code>class</code> <code>Pipeline</code>',
'<code>Pipeline</code>(**`funcs`**=*`None`*, **`as_item`**=*`False`*, **`filt`**=*`None`*) :: [`GetAttr`](/core.html#GetAttr)')
# export
def show_doc(elt, doc_string=True, name=None, title_level=None, disp=True, default_cls_level=2):
"Show documentation for element `elt`. Supported types: class, function, and enum."
elt = getattr(elt, '__func__', elt)
qname = name or qual_name(elt)
if inspect.isclass(elt):
if is_enum(elt.__class__): name,args = _format_enum_doc(elt, qname)
else: name,args = _format_cls_doc (elt, qname)
elif isinstance(elt, Callable): name,args = _format_func_doc(elt, qname)
else: name,args = f"<code>{qname}</code>", ''
link = get_source_link(elt) #TODO: use get_source_link when it works
source_link = f'<a href="{link}" class="source_link" style="float:right">[source]</a>'
title_level = title_level or (default_cls_level if inspect.isclass(elt) else 4)
doc = f'<h{title_level} id="{qname}" class="doc_header">{name}{source_link}</h{title_level}>'
doc += f'\n\n> {args}\n\n' if len(args) > 0 else '\n\n'
if doc_string and inspect.getdoc(elt): doc += add_doc_links(inspect.getdoc(elt))
if disp: display(Markdown(doc))
else: return doc
# `doc_string` determines if we show the docstring of the function or not. `name` can be used to provide an alternative to the name automatically found. `title_level` determines the level of the anchor (default 3 for classes and 4 for functions). If `disp` is `False`, the function returns the markdown code instead of displaying it.
# For instance
#
# ```python
# show_doc(untar_data)
# ```
# will display
# <h4 id="untar_data" class="doc_header"><code>untar_data</code><a href="https://github.com/fastai/fastai_dev/tree/master/dev/local/data/external.py#L182" class="source_link" style="float:right">[source]</a></h4>
#
# > <code>untar_data</code>(**`url`**, **`fname`**=*`None`*, **`dest`**=*`None`*, **`c_key`**=*`ConfigKey.Data`*, **`force_download`**=*`False`*, **`extract_func`**=*`\'tar_extract\'`*)
#
# Download `url` to `fname` if `dest` doesn\'t exist, and un-tgz to folder `dest`.
# ### Integration test -
#hide
show_doc(Pipeline)
#hide
show_doc(Pipeline.decode)
#hide
show_doc(ConfigKey)
#hide
show_doc(compose)
#hide
show_doc(untar_data)
#hide
show_doc(add_docs)
#hide
show_doc(Pipeline.__call__)
#hide
from local.data.core import DataBunch
show_doc(DataBunch.train_dl, name='DataBunch.train_dl')
# hide
show_doc(Path.ls)
# ### The doc command
#export
def md2html(md):
"Convert markdown `md` to HTML code"
if nbconvert.__version__ < '5.5.0': return HTMLExporter().markdown2html(md)
else: return HTMLExporter().markdown2html(defaultdict(lambda: defaultdict(dict)), md)
#export
def doc(elt):
"Show `show_doc` info in preview window"
md = show_doc(elt, disp=False)
output = md2html(md)
if IN_COLAB: get_ipython().run_cell_magic(u'html', u'', output)
else:
try: page.page({'text/html': output})
except: display(Markdown(md))
# ## Export -
#hide
notebook2script(all_fs=True)
| dev/92_notebook_showdoc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Potential Host Counting
#
# One early question we want to ask is "How many possible host galaxies are there in each image?" ([#4](https://github.com/chengsoonong/crowdastro/issues/4)). To answer this question I will first need to determine how confidently labelled each example is, a question covered in Banfield et al. (2015). This will let me find the dimmest confidently classified example, which will then serve as the lower brightness threshold for potential hosts. Finally, I will count how many hosts are in each image.
#
# Every subject has some associated classifications (from which I will eventually derive the labels). There are usually multiple classifications. The *consensus* for a given subject is defined by Banfield et al. as
#
# $$
# C = \frac{n_{\text{consensus}}}{n_{\text{all}}}
# $$
#
# where $n_{\text{consensus}}$ is the number of classifications in agreement with the most common classification for a subject, and $n_{\text{all}}$ is the total number of classifications for the subject.
#
# How do we determine "agreement"? There are two components to this: agreement on which radio observations are components of the same source, and agreement on which infrared source is the host galaxy. Radio observation agreement is easy since participants select predefined contours and these are included in the dataset. The classification itself, however, is an $(x, y)$ coordinate. These coordinates could vary but still represent the same infrared source. I'll follow the approach taken by Banfield et al. and use a kernel-density estimator (KDE) with the click locations. The Banfield et al. paper gives no threshold for whether two clicks are counted as agreeing, so I will have to choose this threshold myself (which I will do later, once I have seen some data).
#
# An implementation of the consensus computation is located [here](https://github.com/willettk/rgz-analysis) for Python 2. I'll be doing something quite similar here.
#
# For this notebook, I'll use the same subject as Banfield et al.: FIRSTJ124610.0+384838 (ARG000180p).
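# Before working with real classifications, the KDE idea can be tried in isolation. The cell below is a minimal, self-contained sketch on synthetic click coordinates (not RGZ data): the density peak of the clicks picks out the plurality click location.

```python
# Minimal sketch of KDE-based click consensus on synthetic data.
import numpy
import scipy.stats

rng = numpy.random.default_rng(0)
# Two clusters of synthetic clicks: eight near (100, 200), three near (300, 350).
clicks = numpy.vstack([
    rng.normal((100, 200), 5, size=(8, 2)),
    rng.normal((300, 350), 5, size=(3, 2)),
])

# gaussian_kde expects one column per observation, hence the transpose.
kernel = scipy.stats.gaussian_kde(clicks.T)

# Evaluate the density on a coarse grid and take the argmax as the
# plurality click location.
X, Y = numpy.mgrid[0:500:100j, 0:500:100j]
positions = numpy.vstack([X.ravel(), Y.ravel()])
density = kernel(positions).reshape(X.shape)
peak_idx = numpy.unravel_index(density.argmax(), density.shape)
peak = (X[peak_idx], Y[peak_idx])
print(peak)  # near (100, 200), the larger cluster
```

# The real pipeline additionally needs a threshold for whether two clicks agree; this sketch only locates the plurality peak.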
# ## Setting up the data
#
# This section just loads the subject and shows it, along with all associated clicks.
# +
import collections
import io
import itertools
import os
import pprint
import matplotlib.pyplot
import numpy
import PIL
import pymongo
import requests
import skimage.exposure
import skimage.feature
import scipy.ndimage.filters
import scipy.ndimage.morphology
import scipy.stats
# %matplotlib inline
HOST = 'localhost'
PORT = 27017
DB_NAME = 'radio'
IMAGE_SCALE = 500/424
RGZ_CACHE = os.path.join(os.path.dirname(os.getcwd()), 'rgz_cache')
# -
# Setup MongoDB.
client = pymongo.MongoClient(HOST, PORT)
db = client[DB_NAME]
# Load the subject.
subject = db.radio_subjects.find_one({'zooniverse_id': 'ARG000180p'})
# Download the images associated with this subject.
infrared = PIL.Image.open(io.BytesIO(requests.get(subject['location']['standard']).content))
radio = PIL.Image.open(io.BytesIO(requests.get(subject['location']['radio']).content))
combined = PIL.Image.blend(infrared, radio, 0.5)
# Find the classifications associated with this subject.
classifications = list(db.radio_classifications.find({'subject_ids': subject['_id']}))
# An example classification:
#
# ```python
# {'_id': ObjectId('52b1dd4e4258ec455d001f91'),
# 'annotations': [{'ir': {'0': {'x': '251.5', 'y': '212'}},
# 'radio': {'0': {'xmax': '102.32255232729742',
# 'xmin': '87.5456431846481',
# 'ymax': '72.12894883061881',
# 'ymin': '62.77882105432897'},
# '1': {'xmax': '71.01281894294526',
# 'xmin': '56.02975587403343',
# 'ymax': '69.5085910834056',
# 'ymin': '62.2958306709543'}}},
# {'finished_at': '',
# 'started_at': ''},
# {'user_agent': ''}],
# 'created_at': datetime.datetime(2013, 12, 18, 17, 37, 19),
# 'project_id': ObjectId('52afdb804d69636532000001'),
# 'subject_ids': [ObjectId('52af820baf2fdc059a005621')],
# 'subjects': [{'id': ObjectId('52af820baf2fdc059a005621'),
# 'location': {'contours': 'http://radio.galaxyzoo.org/subjects/contours/52af820baf2fdc059a005621.json',
# 'radio': 'http://radio.galaxyzoo.org/subjects/radio/52af820baf2fdc059a005621.jpg',
# 'standard': 'http://radio.galaxyzoo.org/subjects/standard/52af820baf2fdc059a005621.jpg'},
# 'zooniverse_id': 'ARG000180p'}],
# 'tutorial': False,
# 'updated_at': datetime.datetime(2013, 12, 18, 17, 37, 18, 452000),
# 'user_id': ObjectId('52b0a0f62b60f168a9000013'),
# 'user_ip': '',
# 'user_name': '',
# 'workflow_id': ObjectId('52afdb804d69636532000002')}
# ```
# Get the click locations.
clicks = []
for c in classifications:
if 'ir' not in c['annotations'][0] or c['annotations'][0]['ir'] == 'No Sources':
continue
c_clicks = c['annotations'][0]['ir']
for click in c_clicks.values():
clicks.append((float(click['x']), float(click['y'])))
clicks = numpy.array(clicks)
clicks_x, clicks_y = clicks.T
# Plot the images.
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 3, 1)
matplotlib.pyplot.imshow(infrared)
matplotlib.pyplot.title('Infrared')
matplotlib.pyplot.subplot(1, 3, 2)
matplotlib.pyplot.imshow(radio)
matplotlib.pyplot.title('Radio')
matplotlib.pyplot.subplot(1, 3, 3)
matplotlib.pyplot.imshow(combined)
matplotlib.pyplot.scatter(clicks_x*IMAGE_SCALE, clicks_y*IMAGE_SCALE, marker='+')
matplotlib.pyplot.xlim((0, 500))
matplotlib.pyplot.ylim((0, 500))
matplotlib.pyplot.title('Combined')
# The clicks don't line up unless multiplied by a constant. The [data description](https://github.com/willettk/rgz-analysis/blob/master/RadioGalaxyZoo_datadescription.ipynb) mentions a scaling factor but no such factor is included here; instead, this is due to the rescaling of the images for web viewing. The scale factor is $500/424$.
# ## Calculating Consensus
# +
# List of signatures, immutable objects uniquely representing combinations of radio sources.
radio_signatures = []
# I'll also gather up all the click locations while I'm at it.
# This dict maps single radio signatures to lists of clicks for that specific signature.
radio_signature_to_clicks = collections.defaultdict(list)
for classification in classifications:
# Generate a radio signature for each classification.
classification_radio_signature = []
galaxies = [annotation for annotation in classification['annotations'] if 'ir' in annotation]
for galaxy in galaxies:
# Generate a signature for each radio contours combination. This is just a sorted list of all the xmax values
# associated with radio contours in the combination.
if galaxy['radio'] == 'No Contours':
radio_signature = ()
else:
radio_signature = tuple(sorted({
round(float(r['xmax']), 15) # There's floating point precision errors in the data.
for r in galaxy['radio'].values()
}))
classification_radio_signature.append(radio_signature)
if galaxy['ir'] == 'No Sources':
continue # Totally ignoring this case for now.
else:
# I'm also ignoring the case where there are multiple clicks.
# The GitHub code associated with the paper also seems to do this.
click = (float(galaxy['ir']['0']['x']), float(galaxy['ir']['0']['y']))
radio_signature_to_clicks[radio_signature].append(click)
classification_radio_signature = tuple(sorted(classification_radio_signature))
radio_signatures.append(classification_radio_signature)
for signature, clicks in radio_signature_to_clicks.items():
radio_signature_to_clicks[signature] = numpy.array(clicks)
# -
# Sanity check: About 10% of participants split the radio sources.
print(len([s for s in radio_signatures if len(s) == 2])/len(radio_signatures))
# Sanity check: Let's look at the clicks.
matplotlib.pyplot.figure(figsize=(15, 5))
for index, (signature, clicks) in enumerate(radio_signature_to_clicks.items()):
matplotlib.pyplot.subplot(1, len(radio_signature_to_clicks), index + 1)
xs, ys = clicks.T
matplotlib.pyplot.scatter(xs, ys, marker='+')
matplotlib.pyplot.title(str(signature))
matplotlib.pyplot.xlim((50, 450))
matplotlib.pyplot.ylim((50, 450))
# +
# Now we'll check the click location consensus. This will be computed for each radio combination.
matplotlib.pyplot.figure(figsize=(15, 15))
radio_signature_to_click_density_peaks = {}
radio_signature_to_plurality_click = {}
for index, (signature, clicks) in enumerate(radio_signature_to_clicks.items()):
clicks += numpy.random.normal(size=clicks.shape)
kernel = scipy.stats.kde.gaussian_kde(clicks.T)
X, Y = numpy.mgrid[0:500:100j, 0:500:100j]
positions = numpy.vstack([X.ravel(), Y.ravel()])
density = kernel(positions).T.reshape(X.shape)
matplotlib.pyplot.title(str(signature))
matplotlib.pyplot.subplot(len(radio_signature_to_clicks), 2, index * 2 + 1)
matplotlib.pyplot.pcolor(density.T)
matplotlib.pyplot.colorbar()
# From https://github.com/willettk/rgz-analysis
neighborhood = numpy.ones((5, 5))
local_max = scipy.ndimage.filters.maximum_filter(density, footprint=neighborhood) == density
eroded_background = scipy.ndimage.morphology.binary_erosion(density == 0, structure=neighborhood, border_value=1)
detected_peaks = local_max ^ eroded_background
weighted_peaks = detected_peaks * density
# Find all click peaks.
all_clicks = numpy.transpose(detected_peaks.nonzero()) * 5
radio_signature_to_click_density_peaks[signature] = all_clicks
# Find the plurality click.
plurality_click = numpy.array(numpy.unravel_index(weighted_peaks.argmax(), weighted_peaks.shape)) * 5
radio_signature_to_plurality_click[signature] = plurality_click
matplotlib.pyplot.title(str(signature))
matplotlib.pyplot.subplot(len(radio_signature_to_clicks), 2, index * 2 + 2)
matplotlib.pyplot.pcolor(weighted_peaks.T)
matplotlib.pyplot.colorbar()
# -
# At this point, I can't follow the paper any further — it doesn't provide any way of identifying which clicks agree with the plurality vote. I definitely need to figure out a good way to deal with this properly but for now I'll check which peak is closest to any given click.
# +
# Find the plurality radio signature.
radio_signature_counts = collections.Counter()
for radio_signature in radio_signatures:
radio_signature_counts[radio_signature] += 1
plurality_radio_signature = max(radio_signature_counts, key=radio_signature_counts.get)
print(plurality_radio_signature)
# +
# For each classification, check whether the radio signature matches the plurality radio signature.
# If it does, check whether the click matches the plurality click for each galaxy.
# If it does, then this classification agrees with the consensus. Else it does not.
n_consensus = 0
n_all = len(classifications)
for classification, classification_radio_signature in zip(classifications, radio_signatures):
if classification_radio_signature != plurality_radio_signature:
continue
galaxies = [annotation for annotation in classification['annotations'] if 'ir' in annotation]
for galaxy in galaxies:
# Regenerate the signature for this radio combination so we can look up the associated click peaks.
if galaxy['radio'] == 'No Contours':
radio_signature = ()
else:
radio_signature = tuple(sorted({
round(float(r['xmax']), 15)
for r in galaxy['radio'].values()
}))
if galaxy['ir'] == 'No Sources':
continue
click = (float(galaxy['ir']['0']['x']), float(galaxy['ir']['0']['y']))
# Find the closest click density peak.
peaks = radio_signature_to_click_density_peaks[radio_signature]
closest_peak = min(peaks, key=lambda peak: numpy.hypot(click[0] - peak[0], click[1] - peak[1]))
if (closest_peak != radio_signature_to_plurality_click[radio_signature]).any():
break
else:
n_consensus += 1
print('{:.02%}'.format(n_consensus / n_all))
# -
# This is a lot lower than the paper seems to imply, but maybe that's because of my method of finding which peak was clicked. The next thing I'll want to do is run this over a lot of data, so let's try that. I'll bundle it up in a function.
def click_peaks(clicks, kernel_size=10):
kernel = scipy.stats.kde.gaussian_kde(clicks.T)
X, Y = numpy.mgrid[0:500:100j, 0:500:100j]
positions = numpy.vstack([X.ravel(), Y.ravel()])
density = kernel(positions).T.reshape(X.shape)
# From https://github.com/willettk/rgz-analysis
neighborhood = numpy.ones((kernel_size, kernel_size))
local_max = scipy.ndimage.filters.maximum_filter(density, footprint=neighborhood) == density
eroded_background = scipy.ndimage.morphology.binary_erosion(density == 0, structure=neighborhood, border_value=1)
detected_peaks = local_max ^ eroded_background
weighted_peaks = detected_peaks * density
# Find all click peaks.
all_clicks = numpy.transpose(detected_peaks.nonzero()) * 5
# Find the plurality click.
plurality_click = numpy.array(numpy.unravel_index(weighted_peaks.argmax(), weighted_peaks.shape)) * 5
return all_clicks, plurality_click
def consensus(zid, subject=None):
"""Computes the consensus for a given Zooniverse object.
zid: Zooniverse ID.
subject: (Optional) Zooniverse subject. If not specified, will be loaded from database.
-> float, percentage consensus.
"""
if subject is None:
subject = db.radio_subjects.find_one({'zooniverse_id': zid})
classifications = list(db.radio_classifications.find({'subject_ids': subject['_id']}))
if not classifications:
return 1.0
radio_signatures = []
radio_signature_to_clicks = collections.defaultdict(list)
for classification in classifications:
# Generate a radio signature for each classification.
classification_radio_signature = []
galaxies = [annotation for annotation in classification['annotations'] if 'ir' in annotation]
for galaxy in galaxies:
# Generate a signature for each radio contours combination. This is just a sorted list of all the xmax values
# associated with radio contours in the combination.
if galaxy['radio'] == 'No Contours':
radio_signature = ()
else:
radio_signature = tuple(sorted({
round(float(r['xmax']), 15) # There's floating point precision errors in the data.
for r in galaxy['radio'].values()
}))
classification_radio_signature.append(radio_signature)
if galaxy['ir'] == 'No Sources':
continue # Totally ignoring this case for now.
# I'm also ignoring the case where there are multiple clicks.
# The GitHub code associated with the paper also seems to do this.
click = (float(galaxy['ir']['0']['x']), float(galaxy['ir']['0']['y']))
radio_signature_to_clicks[radio_signature].append(click)
classification_radio_signature = tuple(sorted(classification_radio_signature))
radio_signatures.append(classification_radio_signature)
for signature, clicks in radio_signature_to_clicks.items():
radio_signature_to_clicks[signature] = numpy.array(clicks)
radio_signature_to_click_density_peaks = {}
radio_signature_to_plurality_click = {}
for index, (signature, clicks) in enumerate(radio_signature_to_clicks.items()):
if len(clicks) == 1:
radio_signature_to_click_density_peaks[signature] = [clicks[0]]
plurality_click = clicks[0]
else:
clicks += numpy.random.normal(size=clicks.shape)
all_clicks, plurality_click = click_peaks(clicks)
radio_signature_to_click_density_peaks[signature] = all_clicks
radio_signature_to_plurality_click[signature] = plurality_click
# Find the plurality radio signature.
radio_signature_counts = collections.Counter()
for radio_signature in radio_signatures:
radio_signature_counts[radio_signature] += 1
plurality_radio_signature = max(radio_signature_counts, key=radio_signature_counts.get)
n_consensus = 0
n_all = len(classifications)
for classification, classification_radio_signature in zip(classifications, radio_signatures):
if classification_radio_signature != plurality_radio_signature:
continue
galaxies = [annotation for annotation in classification['annotations'] if 'ir' in annotation]
for galaxy in galaxies:
# Regenerate the signature for this radio combination so we can look up the associated click peaks.
if galaxy['radio'] == 'No Contours':
radio_signature = ()
else:
radio_signature = tuple(sorted({
round(float(r['xmax']), 15)
for r in galaxy['radio'].values()
}))
if galaxy['ir'] == 'No Sources':
continue
click = (float(galaxy['ir']['0']['x']), float(galaxy['ir']['0']['y']))
# Find the closest click density peak.
peaks = radio_signature_to_click_density_peaks[radio_signature]
if len(peaks) == 0:
continue
closest_peak = min(peaks, key=lambda peak: numpy.hypot(click[0] - peak[0], click[1] - peak[1]))
if (closest_peak != radio_signature_to_plurality_click[radio_signature]).any():
break
else:
n_consensus += 1
return n_consensus / n_all
# Sanity check: Let's try it on the same subject as before.
consensus('ARG000180p')
# Now let's run that on some more subjects.
cs = [consensus(subject['zooniverse_id']) for subject in db.radio_subjects.find().limit(10000)]
matplotlib.pyplot.hist(cs, bins=10)
matplotlib.pyplot.xlabel('Consensus')
matplotlib.pyplot.ylabel('Count')
# Sanity check: The mean consensus found by Banfield et al. was 0.67.
print(numpy.mean(cs))
# That's a higher average than it should be (though note that trying this on 1000 subjects results in ~0.67 — maybe the paper only uses some of the data).
# ## Finding Host Brightnesses
#
# We now need to figure out how bright each host galaxy is. We will find the plurality click, and then check the pixel value of the associated infrared image. Since the images are at different exposures, I also want to try equalising the value histogram of the image and seeing if this makes the distribution of brightnesses more compact.
#
# I'll also have to cache the data I'm downloading locally somehow. I don't want to burden the RGZ servers too much.
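# The equalisation idea can be tried out on its own. The cell below is a hedged sketch using `skimage.exposure.equalize_hist` on a synthetic low-contrast image rather than an RGZ cutout:

```python
# Sketch: histogram equalisation spreads a squeezed brightness range
# over the full [0, 1] interval (synthetic image, not RGZ data).
import numpy
import skimage.exposure

rng = numpy.random.default_rng(0)
# Synthetic "underexposed" image with values squeezed into [0.1, 0.3].
image = rng.uniform(0.1, 0.3, size=(64, 64))

equalised = skimage.exposure.equalize_hist(image)

print(image.min(), image.max())          # roughly 0.1 and 0.3
print(equalised.min(), equalised.max())  # roughly 0.0 and 1.0
```

# If this makes the distribution of host brightnesses more compact across subjects, a single brightness threshold becomes more defensible.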
# +
# We need a function that will cache the data I download locally.
def get_infrared_image(subject):
"""Gets the infrared image of a subject.
subject: RGZ subject dict.
-> [[float]]
"""
image_path = os.path.join(RGZ_CACHE, 'subject_{}.png'.format(subject['zooniverse_id']))
try:
im = PIL.Image.open(image_path)
except FileNotFoundError:
image_data = requests.get(subject['location']['standard']).content
with open(image_path, 'wb') as image_file:
image_file.write(image_data)
im = PIL.Image.open(image_path)
return numpy.array(im.convert('L').getdata()).reshape(im.size).T / 255
# return im.convert('L')
# +
# Check that works.
point = (int(250*IMAGE_SCALE), int(210*IMAGE_SCALE))  # integer indices for array lookup
matplotlib.pyplot.imshow(
get_infrared_image(db.radio_subjects.find_one({'zooniverse_id': 'ARG000180p'}))
)
matplotlib.pyplot.scatter([point[0]], [point[1]])
print(get_infrared_image(db.radio_subjects.find_one({'zooniverse_id': 'ARG000180p'}))[point])
# -
def classification_brightnesses(zid, subject=None):
"""Find out how bright a given classified object is.
zid: Zooniverse ID.
subject: (Optional) Zooniverse subject. If not specified, will be loaded from database.
-> [float] Brightnesses of classifications in the subject.
"""
if subject is None:
subject = db.radio_subjects.find_one({'zooniverse_id': zid})
classifications = list(db.radio_classifications.find({'subject_ids': subject['_id']}))
if not classifications:
return []
radio_signatures = []
radio_signature_to_clicks = collections.defaultdict(list)
for classification in classifications:
# Generate a radio signature for each classification.
classification_radio_signature = []
galaxies = [annotation for annotation in classification['annotations'] if 'ir' in annotation]
for galaxy in galaxies:
# Generate a signature for each radio contours combination. This is just a sorted list of all the xmax values
# associated with radio contours in the combination.
if galaxy['radio'] == 'No Contours':
radio_signature = ()
else:
radio_signature = tuple(sorted({
round(float(r['xmax']), 15) # There's floating point precision errors in the data.
for r in galaxy['radio'].values()
}))
classification_radio_signature.append(radio_signature)
if galaxy['ir'] == 'No Sources':
continue # Totally ignoring this case for now.
# I'm also ignoring the case where there are multiple clicks.
# The GitHub code associated with the paper also seems to do this.
click = (float(galaxy['ir']['0']['x']), float(galaxy['ir']['0']['y']))
radio_signature_to_clicks[radio_signature].append(click)
classification_radio_signature = tuple(sorted(classification_radio_signature))
radio_signatures.append(classification_radio_signature)
# Find the plurality radio signature.
radio_signature_counts = collections.Counter()
for radio_signature in radio_signatures:
radio_signature_counts[radio_signature] += 1
plurality_radio_signature = max(radio_signature_counts, key=radio_signature_counts.get)
infrared = get_infrared_image(subject)
values = []
for signature in plurality_radio_signature:
clicks = numpy.array(radio_signature_to_clicks[signature])
if len(clicks) == 0:
continue
if len(clicks) == 1:
plurality_click = clicks[0]
else:
clicks += numpy.random.normal(size=clicks.shape)
_, plurality_click = click_peaks(clicks)
        value = infrared[tuple((plurality_click * IMAGE_SCALE).astype(int))]
values.append(value)
return values
# Try this out on the example subject.
classification_brightnesses('ARG000180p')
# +
# Let's try running that on more subjects. We also want to split on confidence - maybe it's harder to label dimmer subjects.
brightnesses_low_consensus = []
brightnesses_high_consensus = []
for subject in db.radio_subjects.find().limit(2500):
c = consensus(subject['zooniverse_id'], subject)
brightnesses = classification_brightnesses(subject['zooniverse_id'], subject)
if c < 0.5:
brightnesses_low_consensus.extend(brightnesses)
else:
brightnesses_high_consensus.extend(brightnesses)
matplotlib.pyplot.hist([brightnesses_low_consensus, brightnesses_high_consensus], bins=10, stacked=True)
matplotlib.pyplot.legend(['$C < 0.5$', '$C \\geq 0.5$'], loc='upper left')
matplotlib.pyplot.xlabel('Brightness')
matplotlib.pyplot.ylabel('Count')
# +
print('High consensus mean:', numpy.mean(brightnesses_high_consensus))
print('High consensus median:', numpy.median(brightnesses_high_consensus))
print('High consensus min:', min(brightnesses_high_consensus))
print('Low consensus mean:', numpy.mean(brightnesses_low_consensus))
print('Low consensus median:', numpy.median(brightnesses_low_consensus))
print('Low consensus min:', min(brightnesses_low_consensus))
# -
# So there's no apparent difference between the brightnesses of subjects with different consensus levels.
#
# Now we need to find out how many potential hosts there are in each image. I expect these supermassive black holes to be in the middle of galaxies, so I would also expect the host we want to click on to be a local brightness maximum. I can't think of a scenario where this isn't true yet a human classifier would still be able to find the host. Thus I'll find all local maxima across some subjects and then count how many there are for each subject. I'll also threshold the maxima at 0.190 in line with the findings above.
# The first thing I want to do is figure out a good way of getting local maxima. Let's repurpose the same approach used by Banfield et al. (since I already reimplemented that anyway!).
infrared = get_infrared_image(db.radio_subjects.find_one({'zooniverse_id': 'ARG000180p'}))
neighborhood = numpy.ones((10, 10))
local_max = scipy.ndimage.filters.maximum_filter(infrared, footprint=neighborhood) == infrared
local_max = local_max.nonzero()
matplotlib.pyplot.imshow(infrared, origin='lower')
matplotlib.pyplot.scatter(local_max[1], local_max[0], c='w', marker='+')
# We can see that there's a lot of peaks, and not all of them look useful. Let's run a low-pass filter on the image first and see if that has any effect.
blurred_infrared = scipy.ndimage.filters.gaussian_filter(infrared, 1)
local_max = scipy.ndimage.filters.maximum_filter(blurred_infrared, footprint=neighborhood) == blurred_infrared
local_max = local_max.nonzero()
matplotlib.pyplot.imshow(infrared, origin='lower')
matplotlib.pyplot.scatter(local_max[1], local_max[0], c='w', marker='+')
# This is a bit better. Next, let's try and collapse those contiguous regions into single features.
blurred_infrared = scipy.ndimage.filters.gaussian_filter(infrared, 1)
local_max = scipy.ndimage.filters.maximum_filter(blurred_infrared, footprint=neighborhood) == blurred_infrared
region_labels, n_labels = scipy.ndimage.measurements.label(local_max)
maxima = numpy.array(
[numpy.array((region_labels == i + 1).nonzero()).T.mean(axis=0)
for i in range(n_labels)]
)
matplotlib.pyplot.imshow(infrared, origin='lower')
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0], c='w', marker='+')
# That looks pretty good! Now, let's get rid of all those peaks on the sides.
maxima = maxima[numpy.logical_and(maxima[:, 1] != 0, maxima[:, 1] != 499)]
matplotlib.pyplot.imshow(infrared, origin='lower')
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0], c='w', marker='+')
# I'll get the pixel values of each point and see what kinds of values we're looking at.
# +
values = [infrared[tuple(m.astype(int))] for m in maxima]
matplotlib.pyplot.hist(values)
matplotlib.pyplot.xlabel('Brightness')
matplotlib.pyplot.ylabel('Number of potential hosts')
print('Min:', min(values))
# -
# It seems most potential hosts are pretty dim. Maybe we could bias toward the centre, but I'm not sure that's a good idea — I'll look at it later.
# Let's check out the distribution of the number of potential hosts across all data.
def potential_hosts(zid, subject=None):
"""Finds potential hosts in a subject image.
zid: Zooniverse ID.
subject: (Optional) Zooniverse subject. If not specified, will be loaded from database.
-> (list of brightnesses, list of coordinates)
"""
if subject is None:
subject = db.radio_subjects.find_one({'zooniverse_id': zid})
infrared = get_infrared_image(subject)
blurred_infrared = scipy.ndimage.filters.gaussian_filter(infrared, 1)
local_max = scipy.ndimage.filters.maximum_filter(blurred_infrared, footprint=neighborhood) == blurred_infrared
region_labels, n_labels = scipy.ndimage.measurements.label(local_max)
maxima = numpy.array(
[numpy.array((region_labels == i + 1).nonzero()).T.mean(axis=0)
for i in range(n_labels)]
)
maxima = maxima[numpy.logical_and(maxima[:, 1] != 0, maxima[:, 1] != 499)]
    values = [infrared[tuple(m.astype(int))] for m in maxima]
return values, maxima
# Sanity check: Run this on the example subject.
values, maxima = potential_hosts('ARG000180p')
matplotlib.pyplot.hist(values)
matplotlib.pyplot.imshow(infrared, origin='lower')
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0], c='w', marker='+')
matplotlib.pyplot.xlabel('Brightness')
matplotlib.pyplot.ylabel('Number of potential hosts')
# +
all_values = []
potential_hosts_counts = []
for subject in db.radio_subjects.find().limit(1000):
values, _ = potential_hosts(subject['zooniverse_id'], subject)
all_values.extend(values)
potential_hosts_counts.append(len(values))
matplotlib.pyplot.hist(all_values, bins=10)
matplotlib.pyplot.xlabel('Brightness')
matplotlib.pyplot.ylabel('Number of potential hosts')
matplotlib.pyplot.hist(potential_hosts_counts, bins=10)
matplotlib.pyplot.xlabel('Number of potential hosts')
matplotlib.pyplot.ylabel('Subjects with given number of potential hosts')
# -
# In conclusion:
# - There's not really a good threshold for potential host brightness.
# - Following a naïve method of assuming that all local maxima are potential hosts and no other points are, there are about 150 potential hosts per image.
#
#
# It would be useful to run the code above on more data points, in case the distribution changes with more data (1000 is a very small number of samples when there are 177000 subjects in the database).
| notebooks/2_potential_host_counting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TF2
# language: python
# name: tf2
# ---
# + active=""
# Suppose an array sorted in ascending order is rotated at some pivot
# unknown to you beforehand
# (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]).
# Find the minimum element. The array may contain duplicates.
#
# Example 1:
# Input: [1,3,5]
# Output: 1
# Example 2:
# Input: [2,2,2,0,1]
# Output: 0
# -
# 数组中可能会出现重复的元素
class Solution:
def findMin(self, nums) -> int:
# [2,2,2,2,2,0,1,2] 如果遇到相同的情况,直接将 r_index -= 1
l_idx = 0
r_idx = len(nums) - 1
while l_idx < r_idx:
m_idx = l_idx + (r_idx - l_idx) // 2
            if nums[r_idx] > nums[m_idx]:  # the minimum lies in [l_idx, m_idx]
r_idx = m_idx
elif nums[m_idx] == nums[r_idx]:
r_idx -= 1
else:
l_idx = m_idx + 1
return nums[l_idx]
nums_ = [2,2,2,0,1]
solution = Solution()
solution.findMin(nums_)
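# As a standalone restatement (added here, not part of the original notebook), the same algorithm checked against the duplicate-heavy case from the comment above; shrinking the right bound on ties makes the worst case O(n):
# +
def find_min(nums):
    l, r = 0, len(nums) - 1
    while l < r:
        m = l + (r - l) // 2
        if nums[m] < nums[r]:     # the minimum lies in [l, m]
            r = m
        elif nums[m] == nums[r]:  # cannot tell which side; shrink the right bound
            r -= 1
        else:                     # the minimum lies in (m, r]
            l = m + 1
    return nums[l]

assert find_min([2, 2, 2, 2, 2, 0, 1, 2]) == 0
assert find_min([1, 3, 5]) == 1
assert find_min([2, 2, 2, 0, 1]) == 0
# -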
| Array/0808/154. Find Minimum in Rotated Sorted Array II.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome
#
# With our fundamentals covered, we can now turn to analyzing some data. You'll notice that the `data` folder contains [four files](https://github.com/hermish/hkn-workshops/tree/master/cs-workshops/data-workshop/data).
#
# 1. The first one contains the pokemon characteristics (the first column being the id of the pokemon).
# 2. The second one contains information about previous combats. The first two columns contain the ids of the combatants and the third one the id of the winner. Important: The pokemon in the first columns attacks first.
#
# Open [these files](https://github.com/hermish/hkn-workshops/tree/master/cs-workshops/data-workshop/data) to get a feel for what the raw data looks like: our data is in a `csv` format which stands for comma-separated values. Basically, our data comes in a table, with entries separated by commas.
# # Introduction: <NAME>
#
# <NAME> once stated: "I want to be the very best, like no one ever was." With those words, Ash set out on his quest to become a Pokemon master. Oddly enough, he *still* has not won a single Pokemon League Championship, despite competing in 6. Today, we're going to explore the Pokemon dataset in order to hone our data science techniques and become Pokemon masters!
# <img src="ash.jpg">
# ## Reading in Data
#
# To be able to do anything with this data, we first need to read the data into python. Luckily, this is such a common task that there are already many tools for working with, reading and visualizing data. We'll be using a library called `Pandas` for data analysis. `Pandas` is currently one of the most popular data analysis libraries, and is used in many UC Berkeley Data Science courses. Below, we import Pandas and some other functions to read the data.
import numpy as np # linear algebra
import pandas as pd # data processing
import matplotlib.pyplot as plt # plotting
import seaborn as sns # aesthetics
# %config InlineBackend.figure_format = 'retina'
# Read pokemon statistics and print first 10 rows out of 800
data = pd.read_csv('data/pokemon_randomized.csv')
print(len(data))
data.head(10)
# ## Visualizing Data: Matplotlib
# The first step to strong data analysis is being able to visualize the data provided. Matplotlib is a python library that helps us plot data: the most basic plots are line, scatter and histogram plots.
#
# - Line plots are usually used when the x-axis is time
# - Scatterplots are better when we want to check if there is correlation between two variables
# - Histograms visualize the distribution of numerical data
#
#
# Matplotlib allows us to customize the colors, labels, thickness of lines, title, opacity, grid, figure size... Basically, we can make any graph conceivable, though some are easier than others.
# +
# LINE PLOTS
data['Speed'].plot(
kind='line', # Type of plot
color='blue', # Line color
label='Speed', # Legend name
linewidth=1,
alpha = 0.5, # Transparency
grid = True
)
data['Defense'].plot(
color='green',
label='Defense',
linewidth=1,
alpha = 0.5,
grid = True
)
plt.legend(loc='upper right') # legend locations
plt.xlabel('Pokemon Number') # x-axis label
plt.ylabel('Stat') # y-axis label
plt.title('Speed and Defense across Pokemon') # Title
plt.show()
# -
# SCATTER PLOT
data.plot(
kind='scatter', # Type of plot
x='Attack', # x-axis column
y='Defense', # y-axis column
alpha = 0.5, # Transparency
color = 'red'
)
plt.xlabel('Attack') # x-axis label
plt.ylabel('Defense') # y-axis label
plt.title('Attack-Defense Relationship') # Title
plt.show()
# HISTOGRAM
data['Speed'].plot( # Choosing what to plot
kind = 'hist', # Type of plot
bins = 50,
figsize = (15, 15) # Size of graph
)
plt.show()
# ## Manipulating Data: Pandas
#
# This list of Pandas functions will come in handy: https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf
#
# The table from which we are getting the data is called a DataFrame. Each of the columns can be thought of as a list or series of data, which can be accessed much like the elements of a list.
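# For instance (a toy frame for illustration, not the Pokemon data itself), a column behaves much like a list:
# +
import pandas as pd  # already imported above; repeated so this cell stands alone
toy = pd.DataFrame({'Name': ['Pikachu', 'Onix'], 'Attack': [55, 45]})
toy['Attack'].tolist()  # [55, 45]
# -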
# ### Grabbing Data
# When manipulating our data, it can be helpful to grab specific rows and columns. Here, we use the `head` and `tail` functions to grab the Defense stats of the first 5 Pokemon in our dataset and the Attack stats of the last 5 Pokemon in our dataset.
print(data['Defense'].head(5))
print(data['Attack'].tail(5))
# We can access columns with simple dot notation. Let's say that we only want to see a Pokemon's name and its row in our dataset. We can simply use `data.Name` to do this!
# **Note: Column names are case sensitive!**
names = data.Name
names.head(13)
# We can also access rows and columns using the `iloc` and `loc` functions. Let's say that we want to grab rows 20 to 23 in our dataset. The `tail` and `head` functions can't do this, but `iloc` can!
middle_rows = data.iloc[20:24]
middle_rows
# Now, let's say we only care about a Pokemon's Types, Attack, and Defense. We can grab these columns with the `loc` function. Notice that to grab columns, we add a `:,` before we specify which columns we want!
columns = data.loc[:, '#' : 'Defense']
columns.head(5)
# Another helpful function is the `drop` function. It's pretty self-explanatory, so you can read the documentation yourself on the Pandas cheat sheet: https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf
# ### Filtering and Concatenating Data
# We can also filter data easily. Let's say we want to show all the pokemon with more than 200 defense. We do this by checking if the defense of each row is greater than 200 and then only asking for the rows for which this is true. It turns out that there are only 3 pokemon like this! Change the condition to see what else you can discover.
strong_defenders = data['Defense'] > 200
data[strong_defenders]
# What if we want to find all the Pokemon with more than 175 attack? This is easy too!
strong_attackers = data['Attack'] > 175
data[strong_attackers]
# If we want to build a super strong Attack/Defense Pokemon team, we can use `pd.concat` to add these two sets of data to form a new dataset!
strong_pokemon = pd.concat([data[strong_defenders], data[strong_attackers]])
strong_pokemon
# ### Assignment: Build Ash's Team
# Now, we have enough tools to build Ash's team! Ash's Pokemon are the following:
# - Pikachu
# - Squirtle
# - Bulbasaur
# - Tauros
# - Lapras
# - Charizard
#
# Use your data analysis skills to grab the rows containing these 6 Pokemon and add them to a new dataframe!
#
# *Hint: Use filtering and concatenation*
# +
pikachu = data[data['Name'] == 'Pikachu']
squirtle = data[data['Name'] == 'Squirtle']
bulbasaur = data[data['Name'] == 'Bulbasaur']
tauros = data[data['Name'] == 'Tauros']
lapras = data[data['Name'] == 'Lapras']
charizard = data[data['Name'] == 'Charizard']
ash_team = pd.concat([pikachu, squirtle, bulbasaur, tauros, lapras, charizard])
ash_team
# entries = data['Name'].isin(['Pikachu', 'Squirtle', 'Bulbasaur', 'Tauros', 'Lapras', 'Charizard'])
# ash_team = data[entries]
# ash_team
# -
# ### Sorting Data
# We successfully built Ash's team! However, it seems like the Pokemon are a bit out of order. Let's fix that with the `pd.sort_values()` function.
# +
#First argument of sort_values is the column by which we are sorting
#Second argument determines whether we want to sort from lowest to highest or highest to lowest.
#Set ascending to True if we want to sort from lowest to highest
ash_team = ash_team.sort_values('#', ascending = True)
ash_team
# -
# ## Data Analysis: Pandas
#
# Now we can analyze the data on the pokemon. The easiest way to get an idea of how the data looks is to plot some of it and compute a few summary statistics. Luckily for us, it is really easy to get the following from our table:
#
# - count: number of entries
# - mean: the average value, over all pokemon
# - std: standard deviation, measures how spread out the data is
# - min: minimum entry
# - 25%: first quartile (25% of values are below)
# - 50%: median or second quartile (50% of values are below)
# - 75%: third quartile (75% of values are below)
# - max: maximum entry
data.describe()
# From this overview, we can easily see the minimum and maximum for each of these statistics. To visualize this, we often use a boxplot, which lets us easily picture the center, spread and outliers. The black bars at the top and bottom show the maximum and minimum data points. The red line in the middle is the median, and the box spans the distance from the 25th to the 75th percentile.
data.boxplot(
column='Attack'
)
plt.show()
# Notice that some of our pokemon are legendary while others are not. We might guess that the attack of legendary pokemon are greater than non-legendary pokemon. We could test our hypothesis by creating a boxplot for the attacks of each group. We can easily do this using the `by` keyword.
data.boxplot(
column='Attack', # Column plotting
by='Legendary' # Group
)
plt.show()
# Now, let's compare Ash's team statistics to the general Pokemon dataset statistics.
ash_team.describe()
data.describe()
# From the describe methods, we see that Ash's pokemon have very average stats. If Ash had known this, maybe he would have trained and evolved them so that he could be more competitive in the Pokemon league!
# # Extra: Heat Maps
#
# Finally, there's a lot more interesting figures we can make using python. For example, when we plotted attack and defense they seemed to be linked or *correlated*.
#
# In general, we can measure how strongly two variables are correlated using the "Pearson product-moment correlation coefficient." This number will always be between +1 and -1. More extreme numbers (near $\pm1$) mean the correlation is strong, while numbers closer to zero indicate that the variables are less strongly linked.
#
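# As a concrete example (standalone, independent of the Pokemon data), two perfectly linearly related samples have a coefficient of (essentially) +1:
# +
import numpy as np  # already imported above; repeated so this cell stands alone
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1
np.corrcoef(x, y)[0, 1]  # very close to 1.0
# -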
# We can visualize how all the variables are correlated by plotting these values in a heat map. The more strongly correlated values are more strongly colored.
f, ax = plt.subplots(figsize=(6, 6))
sns.heatmap(
data.corr(),
annot=True,
linewidths=.5,
fmt= '.1f',
ax=ax
)
plt.show()
# ## Assignment: Build your own Pokemon Team
# Now, let's put your skills to the test: build your ultimate Pokemon team. Use your data analysis skills to find 6 Pokemon with the best combination of attack, defense, speed, etc. to defeat your opponents. A good team should have the following qualities:
# - A wide variety of Pokemon types, so your team won't have any weaknesses
# - High stat totals, so you can outclass your opponents
#
# Create a new dataframe that contains only your 6 Pokemon using the Pokemon dataset that is provided. Then, plot some graphs that show off your team's statistics.
# +
#Build the best 6 Pokemon team you possibly can.
### YOUR CODE HERE ###
#Plot some graphs that show off your team's statistics.
### YOUR CODE HERE ###
# -
# ## Extra
#
# Now that you have found your ultimate Pokemon team, let's think about whether this team would be feasible in a battle against your fellow trainers. Is there anything your team is particularly susceptible to? Anything your team is particularly good against? Look [here][1] for a link to the different Pokemon types and their strengths and weaknesses!
#
# [1]: <https://pokemondb.net/type>
# ## Challenge Assignment: Sahai, the Shapeshifting Pokemon
# UC Berkeley Pokemon trainers have been beating Stanford Pokemon trainers for decades. Recently, however, Stanford has acquired some powerful Pokemon that are difficult to beat. Thus, Berkeley Labs are trying to synthesize a new, ultimate Pokemon that can shapeshift its stats to beat any opponent.
#
# The shapeshifting Pokemon, named "Sahai", has three shape-shifting forms: Attack, Defense, and Speed. These shapeshifting forms are a set of statistics that are "stolen" from another Pokemon. Each form has the following criteria:
# - The stat line is "stolen" from another Pokemon (Ex. Attack form may be Pikachu's exact statistics)
# - The stat line for Attack form must have attack in the top 10 of all the Pokemon in the dataset, the stat line for Defense form must have defense in the top 10 of all the Pokemon in the dataset, etc.
# - No stat line can have any statistic that is below average
#
# Create a new dataframe that contains the three forms that Sahai can shapeshift into. Then, write a function that takes in the stats of the Pokemon that it is battling against, and returns the shapeshifting form that it will take on to win the battle.
# +
#Build the Sahai shapeshifting dataframe
### YOUR CODE HERE ###
#Write the shapeshifting function
def shapeshift(opponent_stats):
    ### YOUR CODE HERE ###
    pass
| data-workshop/2-Data-Science.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Version of DogFaceNet implementation for MNIST dataset
# We will stick to the pairs-learning technique for training
# ### Imports
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import os
import pickle
import numpy as np
import skimage as sk
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
import tensorflow.keras.backend as K
# -
# ### Dataset implementation
# Load the dataset
# +
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
SIZE = (28,28)
PATH_SAVE = '../output/history/'
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
x_train /= 255
x_test /= 255
# -
# Create the pairs
# +
nbof_pairs = len(y_train)*2
pairs = np.empty((nbof_pairs,28,28))
issame = np.empty(nbof_pairs)
for i in tqdm_notebook(range(0,nbof_pairs,2)):
alea = np.random.rand()
    # Pair of different classes
if alea < 0.5:
        # Choose the classes:
class1 = np.random.randint(10)
class2 = np.random.randint(10)
while class1==class2:
class2 = np.random.randint(10)
# Extract images of this class:
y_class1 = np.arange(len(y_train))[np.equal(y_train,class1)]
y_class2 = np.arange(len(y_train))[np.equal(y_train,class2)]
        # Choose an image among these selected images
pairs[i] = x_train[y_class1[np.random.randint(len(y_class1))]]
pairs[i+1] = x_train[y_class2[np.random.randint(len(y_class2))]]
issame[i] = issame[i+1] = 0
    # Pair of same classes
else:
        # Choose a class
clas = np.random.randint(10)
y_class = np.arange(len(y_train))[np.equal(y_train,clas)]
# Select two images from this class
idx_image1 = y_class[np.random.randint(len(y_class))]
idx_image2 = y_class[np.random.randint(len(y_class))]
while idx_image1 == idx_image2:
idx_image2 = y_class[np.random.randint(len(y_class))]
pairs[i] = x_train[idx_image1]
pairs[i+1] = x_train[idx_image2]
issame[i] = issame[i+1] = 1
# -
# Check some pairs
s = 20
n = 5
print(issame[2*s:(n+s)*2])
#print(y_pairs[2*s:(n+s)*2])
fig = plt.figure(figsize=(5,3*n))
for i in range(s,s+n):
plt.subplot(n,2,2*(i-s)+1)
plt.imshow(pairs[2*i])
plt.subplot(n,2,2*(i-s)+2)
plt.imshow(pairs[2*i+1])
# Create the triplets
# +
nbof_triplets = len(y_train)
triplets = np.empty((nbof_triplets,28,28))
y_triplets = np.empty(nbof_triplets)
issame = np.empty(nbof_triplets)
for i in tqdm_notebook(range(0,nbof_triplets,3)):
# Pair of same classes
    # Choose a class
clas = np.random.randint(10)
y_class = np.arange(len(y_train))[np.equal(y_train,clas)]
# Select two images from this class
idx_image1 = y_class[np.random.randint(len(y_class))]
idx_image2 = y_class[np.random.randint(len(y_class))]
while idx_image1 == idx_image2:
idx_image2 = y_class[np.random.randint(len(y_class))]
triplets[i] = x_train[idx_image1]
triplets[i+1] = x_train[idx_image2]
issame[i] = issame[i+1] = 1
y_triplets[i] = y_triplets[i+1] = clas
# Pair of different classes
    # Choose the classes:
class2 = np.random.randint(10)
while clas==class2:
class2 = np.random.randint(10)
# Extract images of this class:
y_class2 = np.arange(len(y_train))[np.equal(y_train,class2)]
    # Choose an image among these selected images
triplets[i+2] = x_train[y_class2[np.random.randint(len(y_class2))]]
issame[i+2] = 0
y_triplets[i+2] = class2
triplets_exp = np.expand_dims(triplets, -1)
triplets_exp.shape
# -
# Check some triplets
s = 10
n = 5
print(issame[3*s:(n+s)*3])
print(y_triplets[3*s:(n+s)*3])
fig = plt.figure(figsize=(7,3*n))
for i in range(s,s+n):
plt.subplot(n,3,3*(i-s)+1)
plt.imshow(triplets[3*i])
plt.subplot(n,3,3*(i-s)+2)
plt.imshow(triplets[3*i+1])
plt.subplot(n,3,3*(i-s)+3)
plt.imshow(triplets[3*i+2])
# ### Define the loss
import tensorflow.keras.backend as K
alpha = 0.3
def triplet(y_true,y_pred):
a = y_pred[0::3]
p = y_pred[1::3]
n = y_pred[2::3]
ap = K.sum(K.square(a-p), axis = -1)
an = K.sum(K.square(a-n), axis = -1)
s = K.sum(tf.nn.relu(ap - an + alpha))
return s
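# As a quick sanity check (an addition, not from the original notebook), the same hinge computed in plain NumPy; a well-separated triplet incurs zero loss:
# +
import numpy as np  # already imported above; repeated so this cell stands alone

def triplet_np(embeddings, margin=0.3):
    # Embeddings arrive in (anchor, positive, negative) order, as in `triplet` above.
    a, p, n = embeddings[0::3], embeddings[1::3], embeddings[2::3]
    ap = np.sum(np.square(a - p), axis=-1)  # squared anchor-positive distance
    an = np.sum(np.square(a - n), axis=-1)  # squared anchor-negative distance
    return np.sum(np.maximum(ap - an + margin, 0.0))

triplet_np(np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0]]))  # 0.0
# -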
import tensorflow.keras.backend as K
alpha = 0.3
def cosine(y_true,y_pred):
a = y_pred[0::3]
p = y_pred[1::3]
n = y_pred[2::3]
cos_ap = K.sum(a*p, axis = -1)
cos_an = K.sum(a*n, axis = -1)
s = K.sum(tf.nn.relu(cos_ap - cos_an + alpha))
return K.sum(s)
c = np.vstack([0,2,1,2])
np.hstack([c,c])
# +
# Exploration with numpy
# In dev: bobby loss
_y_pred = np.array([[-0.25,1],[5,4],[-1,.5],[1,3.]], dtype=np.float32)
_y_pred = np.array([[1,2],[5,4],[-1,.5],[1,3.]], dtype=np.float32)
#_y_pred = np.array([[1,2],[5,4],[-1,.5],[1,3.],[-1,0.1]], dtype=np.float32)
_y_true = [0,2,1,2]
_y_pred_norm = _y_pred/np.linalg.norm(_y_pred, axis=-1, keepdims=True)
print(_y_pred_norm)
plt.plot(_y_pred_norm[:,0],_y_pred_norm[:,1],'o')
_classes = tf.keras.utils.to_categorical(_y_true,3)
_centers = np.transpose(_y_pred_norm.T.dot(_classes / (np.sum(_classes, axis=0, keepdims=True)+1)))
_centers_norm = _centers/np.linalg.norm(_centers, axis=-1, keepdims=True)
x = np.linspace(-1,1,200)
plt.axis('equal')
plt.plot(x,np.sqrt(1.-np.square(x)))
colors = ['r','g','b']
for i in range(3):
plt.plot(_centers_norm[i,0],_centers_norm[i,1],colors[i]+'o')
# Exploration with tensorflow
y_pred = tf.constant(_y_pred)
c = np.vstack(_y_true)
y = np.hstack([c,c])
y_true = tf.constant(y)
y_pred = y_pred / tf.norm(y_pred, axis=-1, keepdims=True)
extract = tf.cast(y_true[:,0], dtype=tf.int32)
classes = tf.one_hot(extract,depth=3)
classes = tf.cast(classes, dtype=tf.float32)
# Simulation into the loss function:
dot_y_pred = K.dot(y_pred,K.transpose(y_pred))
is_same_mask = K.dot(classes,K.transpose(classes))
out = K.binary_crossentropy(is_same_mask,dot_y_pred*0.5+0.5)
with tf.Session() as sess:
print(sess.run([is_same_mask,dot_y_pred*0.5+0.5,out]))
# +
# Exploration with numpy
# In dev: <NAME>
_y_pred = np.array([[-0.25,1],[5,4],[-1,.5],[1,3.]], dtype=np.float32)
#_y_pred = np.array([[1,2],[5,4],[-1,.5],[1,3.]], dtype=np.float32)
#_y_pred = np.array([[1,2],[5,4],[-1,.5],[1,3.],[-1,0.1]], dtype=np.float32)
_y_true = [0,1,1,1]
_y_pred_norm = _y_pred/np.linalg.norm(_y_pred, axis=-1, keepdims=True)
_classes = tf.keras.utils.to_categorical(_y_true,3)
_centers = np.transpose(_y_pred_norm.T.dot(_classes / np.sum(_classes, axis=0, keepdims=True)))
_centers_norm = _centers/np.linalg.norm(_centers, axis=-1, keepdims=True)
x = np.linspace(-1,1,200)
plt.axis('equal')
plt.plot(x,np.sqrt(1.-np.square(x)))
colors = ['r','g','b']
for i in range(3):
plt.plot(_centers_norm[i,0],_centers_norm[i,1],colors[i]+'o')
#plt.plot(_y_pred[:,0], _y_pred[:,1], 'go')
# Exploration with tensorflow
y_pred = tf.constant(_y_pred)
c = np.vstack(_y_true)
y = np.hstack([c,c])
y_true = tf.constant(y)
# Simulation into the loss function:
pred = y_pred / tf.norm(y_pred, axis=-1, keepdims=True)
extract = tf.cast(y_true[:,0], dtype=tf.int32)
classes = tf.one_hot(extract,depth=3)
classes = tf.cast(classes, dtype=tf.float32)
den_classes = K.sum(classes, axis=0, keepdims=True) + 1.
centers = K.transpose(K.dot(K.transpose(pred), classes / den_classes))
centers = tf.math.l2_normalize(centers,axis=-1)
centers_classes = K.dot(classes, centers)
ones = (K.sum(centers_classes * pred, axis=-1) * -1. + .5)*5
inter = K.sigmoid(ones)
dist = K.sum(inter)
identity = K.arange(0,3)
identity = tf.one_hot(identity,3)
gram = K.dot(centers,K.transpose(centers)) * 0.5 + 0.5
dev = K.sum(K.pow(gram-identity,10))
with tf.Session() as sess:
print(sess.run([den_classes,ones,inter,dist,dev]))
# +
# Test on cosine loss
_y_pred = np.array([[-0.25,2,1],[5,4,2],[-1,.5,2],[2,1,3.]], dtype=np.float32)
_y_pred = _y_pred / np.linalg.norm(_y_pred, axis=-1, keepdims=True)
y_pred = tf.constant(_y_pred)
_y_true = tf.keras.utils.to_categorical([0,1,2,1])
y_true = tf.constant(_y_true)
s = 1.
m = 0.08
exp_s = K.exp(s * y_pred)
exp_s_m = K.exp(s * (y_pred - m))
masked_exp_s_m = exp_s_m * y_true
inv_mask = 1. - y_true
masked_exp_s = exp_s * inv_mask
den = K.sum(masked_exp_s + masked_exp_s_m, axis=-1, keepdims=True)
out = masked_exp_s_m / den
out = K.sum(out,axis=-1)
log_out = - K.log(out)
ret = K.sum(log_out)
with tf.Session() as sess:
print(sess.run([out,log_out,ret]))
# +
# Test on arccosine loss
_y_pred = np.array([[-0.25,2,1],[5,4,2],[-1,.5,2],[2,1,3.]], dtype=np.float32)
_y_pred = _y_pred / np.linalg.norm(_y_pred, axis=-1, keepdims=True)
y_pred = tf.constant(_y_pred)
_y_true = tf.keras.utils.to_categorical([0,1,2,1])
y_true = tf.constant(_y_true)
s = 1.
m = 0.08
exp_s = K.exp(s * y_pred)
cos = y_pred
sin = K.sqrt(1. - K.square(cos))
cos_m = K.cos(m)
sin_m = K.sin(m)
cos_t_m = cos * cos_m - sin * sin_m
exp_s_m = K.exp(s * cos_t_m)
masked_exp_s_m = exp_s_m * y_true
inv_mask = 1. - y_true
masked_exp_s = exp_s * inv_mask
den = K.sum(masked_exp_s + masked_exp_s_m, axis=-1, keepdims=True)
out = masked_exp_s_m / den
out = K.sum(out,axis=-1)
log_out = - K.log(out)
ret = K.sum(log_out)
with tf.Session() as sess:
print(sess.run([y_pred,cos_t_m,out,log_out,ret]))
# +
import tensorflow.keras.backend as K
def robert(y_true,y_pred):
"""
    Robert tries to increase the angle between the centers
    of the classes (i.e. increase the deviation between centers)
    and to decrease the angle between the elements of a given class
    (i.e. decrease the deviation of the elements within a class).
"""
extract = tf.cast(y_true[:,0], dtype=tf.int32)
classes = tf.one_hot(extract,depth=10)
classes = tf.cast(classes, dtype=tf.float32)
centers = K.transpose(K.dot(K.transpose(y_pred), classes / (K.sum(classes, axis=0, keepdims=True) + 1)))
centers = tf.math.l2_normalize(centers,axis=-1)
centers_classes = K.dot(classes, centers)
# "dist" computes the angle
dist = (K.sum(centers_classes * y_pred, axis=-1) * -1. + .5)*5
dist = K.sum(K.sigmoid(dist))
identity = K.arange(0,10)
identity = tf.one_hot(identity,10)
gram = K.dot(centers,K.transpose(centers)) * 0.5 + 0.5
dev = K.sum(K.pow(gram-identity,10))
return dev + dist
# -
def bobby(y_true,y_pred):
extract = tf.cast(y_true[:,0], dtype=tf.int32)
classes = tf.one_hot(extract,depth=10)
classes = tf.cast(classes, dtype=tf.float32)
dot_y_pred = K.dot(y_pred,K.transpose(y_pred))
is_same_mask = K.dot(classes,K.transpose(classes))
return K.binary_crossentropy(is_same_mask,dot_y_pred*0.5+0.5)
s = 30.
m = 0.1
def cosine(y_true,y_pred):
exp_s = K.exp(s * y_pred)
exp_s_m = K.exp(s * (y_pred - m))
masked_exp_s_m = exp_s_m * y_true
inv_mask = 1. - y_true
masked_exp_s = exp_s * inv_mask
den = K.sum(masked_exp_s + masked_exp_s_m, axis=-1, keepdims=True)
out = masked_exp_s_m / den
out = K.sum(out,axis=-1)
ret = - K.log(out)
ret = K.sum(ret)
return ret
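# For reference (an annotation added here, not part of the original notebook), the `cosine` function above has the shape of a large-margin cosine softmax: with scale $s$ and margin $m$ subtracted only from the target-class cosine,
#
# $$L = -\sum_i \log \frac{e^{s(\cos\theta_{y_i} - m)}}{e^{s(\cos\theta_{y_i} - m)} + \sum_{j \neq y_i} e^{s\cos\theta_j}}$$
#
# where $\cos\theta_j$ is the normalized dot product between the embedding and the class-$j$ weight vector.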
# +
# This loss doesn't work yet...
s = 10.
m = 0.1
def arccosine(y_true,y_pred):
exp_s = K.exp(s * y_pred)
cos = y_pred
sin = K.sqrt(1. - K.square(cos))
cos_m = K.cos(m)
sin_m = K.sin(m)
cos_t_m = cos * cos_m - sin * sin_m
std_cos_t_m = y_pred - m
# check if theta is less than pi-m:
keep = K.cast(K.less(cos, K.cos(np.pi - m)),K.floatx())
not_keep = 1 - keep
cos_t_m = cos_t_m * keep + std_cos_t_m * not_keep
exp_s_m = K.exp(s * cos_t_m)
masked_exp_s_m = exp_s_m * y_true
inv_mask = 1. - y_true
masked_exp_s = exp_s * inv_mask
den = K.sum(masked_exp_s + masked_exp_s_m, axis=-1, keepdims=True)
out = masked_exp_s_m / den
out = K.sum(out,axis=-1)
ret = - K.log(out)
ret = K.sum(ret)
return ret
# -
# ### Define the network
# +
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer
class Cosine(Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(Cosine, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[-1],self.output_dim))
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
super(Cosine, self).build(input_shape)
def call(self, x):
x = tf.math.l2_normalize(x, axis=-1)
w = tf.math.l2_normalize(self.kernel, axis=0)
return K.dot(x, w)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
# +
# Small net and cosine loss
emb_size = 3
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Lambda
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Cosine(10))
model.compile(loss=cosine,
optimizer='rmsprop',
metrics=['accuracy']
)
model.summary()
# +
# Small net and bobby loss
emb_size = 3
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Lambda
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Lambda(lambda x: tf.math.l2_normalize(x, axis=-1)))
model.compile(loss=bobby,
optimizer='rmsprop'
)
model.summary()
# +
# Small net and softmax loss
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Lambda
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.BatchNormalization())
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Lambda(lambda x: tf.math.l2_normalize(x, axis=-1)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy']
)
model.summary()
# -
x_test.shape
x_test = np.expand_dims(x_test,-1)
x_train = np.expand_dims(x_train,-1)
y_train_exp = tf.keras.utils.to_categorical(y_train)
y_test_exp = tf.keras.utils.to_categorical(y_test)
hist = model.fit(x_train,np.dstack([y_train]*2)[0], batch_size=128,epochs=25,validation_data=(x_test,np.dstack([y_test]*2)[0]))
# +
loss = hist.history['loss']
val_loss = hist.history['val_loss']
history = np.array([loss,val_loss])
np.save(PATH_SAVE+'2018.01.31.small_net.bobby.2.npy',history)
np.savetxt(PATH_SAVE+'2018.01.31.small_net.bobby.2.txt',history)
epochs = np.arange(len(loss))
plt.plot(epochs,loss, label="loss")
plt.plot(epochs,val_loss, label="val_loss")
plt.xlabel("Number of epochs")
plt.legend()
# -
history = model.fit(x_train,y_train_exp, batch_size=128,epochs=12,validation_data=(x_test,y_test_exp))
# +
loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['acc']
val_acc = history.history['val_acc']
history_ = np.array([loss,val_loss,acc,val_acc])
np.save(PATH_SAVE+'2018.01.31.small_net.cosine.s_30.m_0.1.npy',history_)
np.savetxt(PATH_SAVE+'2018.01.31.small_net.cosine.s_30.m_0.1.txt',history_)
epochs = np.arange(len(loss))
fig = plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(epochs,loss, '-o', label="loss")
plt.plot(epochs,val_loss, '-o', label="val_loss")
plt.xlabel("Number of epochs")
plt.legend()
plt.subplot(1,2,2)
plt.plot(epochs,acc, '-o', label="acc")
plt.plot(epochs,val_acc, '-o', label="val_acc")
plt.xlabel("Number of epochs")
plt.legend()
# -
# ### Train the model
model.compile(loss=triplet,
optimizer='rmsprop')
model.fit(
triplets_exp,
np.zeros((len(triplets_exp),2)),
batch_size = 384,
epochs = 20,
validation_split=0.1
)
# ### Evaluate it
# +
n = 6000
triplets_test = triplets_exp[:n]
emb = model.predict(triplets_test)
# +
a = emb[0::3]
p = emb[1::3]
n = emb[2::3]
# Computes angles between pairs
cos1 = np.sum(a*p,1)
cos2 = np.sum(a*n,1)
less = np.less(cos2,cos1)
acc = np.sum(less.astype(int))/len(less)
print(acc)
# +
# Display the wrong examples
idx_wrong = np.logical_not(less)
idx_wrong = np.stack([idx_wrong]*3, axis = -1)
idx_wrong = np.reshape(idx_wrong, [-1])
wrong = triplets_test[idx_wrong]
wrong = np.squeeze(wrong)
print(len(wrong)//3)
s = 5
n = 5
e = n + s
plt.figure(figsize=(7,3*n))
for i in range(s,e):
for j in range(3):
plt.subplot(n,3,(i-s)*3+1+j)
plt.imshow(wrong[i*3+j])
# +
a = emb[0::3]
p = emb[1::3]
n = emb[2::3]
# Computes distance between pairs
dist1 = np.sum(np.square(a-p),1)
dist2 = np.sum(np.square(a-n),1)
less = np.less(dist1,dist2)
acc = np.sum(less.astype(int))/len(less)
print(acc)
# -
model.layers
mod = tf.keras.Model(model.inputs, model.layers[-2].output)
predict = mod.predict(x_train)
# +
mean = np.zeros((10,2))
nb = np.zeros(10)
y_int = y_train
# y_train = y_train.astype(int)
# y_int = np.empty(len(y_train),dtype=int)
# for i in range(len(y_train)):
# idx = 0
# while (y_train[i][idx]!=1):
# idx += 1
# y_int[i] = int(idx)
for i in range(len(predict)):
mean[y_int[i]] += predict[i]
nb[y_int[i]] += 1
for i in range(len(mean)):
mean[i] /= nb[i]
print(mean)
std_vect = np.zeros((10,2)) + mean
std = np.zeros(10)
for i in range(len(predict)):
std[y_int[i]] += np.sum(np.square(predict[i]-mean))
for i in range(len(mean)):
std[i] /= nb[i]
print(np.sqrt(std))
plt.axis('equal')
n = 10
for i in range(n):
y = predict[np.equal(i,y_int)]
#y = y / np.linalg.norm(y, axis=-1,keepdims=True)
plt.plot(y[:,0],y[:,1],'o',label=str(i))
#mean = mean/np.linalg.norm(mean, axis=-1,keepdims=True)
plt.plot(mean[:n,0],mean[:n,1],'ko')
plt.legend()
# -
| tmp/dogfacenet-mnist_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Area
# The `Area` represents the spatial environment of the survey. It sets the spatial bounds of the survey, so it should be the first building block defined. All of the methods used to generate `Layer` and `Coverage` blocks will require the `Area` as an input parameter.
# ## Creating an `Area`
#
# There are three main ways to create an `Area`.
#
# 1. from a `shapely` `Polygon`
# 2. from a shapefile
# 3. from a value specifying the resulting area
#
# We will take a look at examples of all three. First, let's import `prospect`.
import prospect
# ### From a `shapely Polygon`
# We can create an `Area` from any `shapely` `Polygon` object. Let's create a fairly simple polygon (a pentagon) and use it to create an `Area`.
from shapely.geometry import Polygon
pentagon = Polygon([(0, 0), (2, 0), (2, 1), (1, 2), (0, 1)])
area_shapely = prospect.Area(name='from shapely rectangle', shape=pentagon, vis=1.0)
# `Area` objects have the following attributes: `name`, `shape`, `vis`, and `df`.
area_shapely.name
area_shapely.shape
area_shapely.vis
area_shapely.df
# Of these, `df` is the most useful because it is a `geopandas` `GeoDataFrame` containing all of the other values.
#
# `geopandas` provides some plotting options for `GeoDataFrame` objects, so we can visually examine the resulting `Area` in a `matplotlib` plot by calling the `plot()` method on the `df` attribute.
area_shapely.df.plot()
# ### From a shapefile
# ```{caution}
# If the shapefile contains more than one polygon, only the first polygon will be used. If you want to create an `Area` by combining multiple polygons, you will first have to dissolve them into a single polygon.
# ```
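# A minimal sketch of that dissolve step using `shapely.ops.unary_union` (the two unit squares here are hypothetical):

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two unit squares sharing an edge; unary_union dissolves them into a
# single polygon that can then be passed to prospect.Area.
left = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
right = Polygon([(1, 0), (2, 0), (2, 1), (1, 1)])
merged = unary_union([left, right])
```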
area_shp = prospect.Area.from_shapefile(name='from shapefile', path='./data/demo_area.shp', vis=1.0, encoding="utf-8")
area_shp.df.plot();
# ```{note}
# `prospect` has no difficulty dealing with polygons that have interior holes.
# ```
# ### From an area value
# The final way to construct an `Area` object is to create a square by specifying a desired area value and an origin. This is intended to be a convenient method for use in building hypothetical surveys. The following creates an `Area` with an area of 100.0 sq. units with a lower left corner at (20, 20).
area_value = prospect.Area.from_area_value(
name='from value',
value=100.0,
origin=(20.0, 20.0),
vis=1.0
)
area_value.df.plot()
# ## The `vis` parameter
# Besides defining the spatial extent of the survey, the `Area` also defines the surface visibility parameter of the simulation. Like all parameters, the surface visibility can be defined with a single probability value or as a `scipy.stats` distribution. (In the future, I hope to add additional support for a raster "surface" of visibility.)
# If a single value is inappropriate for your case, surface visibility can be modeled in a variety of ways. Both a truncated normal distribution (constrained between 0 and 1) and a Beta distribution could be good options. In the case of the Beta distribution, the following heuristic can be helpful:
# >If $n$ artifacts were placed in a subset of that `Area`, how many artifacts, $v$, would be visible to the surveyor, assuming a perfect ideal observation rate of 1.0 and a perfect surveyor skill of 1.0?
#
# In that case, $\alpha = v$ and $\beta = n - v$.
#
# For example, if you placed 10 artifacts in an area and expected 8 to be visible, you could create a Beta distribution like this.
from scipy.stats import beta
vis_dist = beta(a=8, b=2)
# And now let's examine the shape of that distribution.
#
# ```{attention}
# `seaborn`, used here for plotting, is not a dependency of `prospect` so you may not have it installed.
# ```
import seaborn as sns
hist_8_2 = sns.distplot(vis_dist.rvs(100000))
hist_8_2.set_xlim(0,1);
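# The truncated normal mentioned above can be built with `scipy.stats.truncnorm`; note that its `a`/`b` bounds are expressed in standard-deviation units relative to `loc`. The 0.8 center and 0.1 spread here are illustrative.

```python
from scipy.stats import truncnorm

loc, scale = 0.8, 0.1                              # illustrative center and spread
a, b = (0.0 - loc) / scale, (1.0 - loc) / scale    # clip the support to [0, 1]
vis_trunc = truncnorm(a=a, b=b, loc=loc, scale=scale)
samples = vis_trunc.rvs(100000, random_state=0)
```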
| prospect-guide/_build/jupyter_execute/blocks/Area.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The moon surface brightness is currently calculated by renormalizing the moon spectrum to the KS model V-band magnitude:
# ```python
# scattered_V = krisciunas_schaefer(obs_zenith, moon_zenith, separation_angle, moon_phase, vband_extinction)
#
# # Calculate the wavelength-dependent extinction of moonlight
# # scattered once into the observed field of view.
# scattering_airmass = (1 - 0.96 * np.sin(moon_zenith) ** 2) ** (-0.5)
# extinction = (10**(-extinction_coefficient * scattering_airmass / 2.5) * (1 - 10**(-extinction_coefficient * airmass / 2.5)))
#
# surface_brightness = moon_spectrum * extinction
#
# # Renormalized the extincted spectrum to the correct V-band magnitude.
# raw_V = _vband.get_ab_magnitude(surface_brightness, wavelength) * u.mag
#
# area = 1 * u.arcsec ** 2
# surface_brightness *= 10 ** (-(scattered_V * area - raw_V) / (2.5 * u.mag)) / area
# ```
#
# Instead of relying on the KS model V-band magnitude, let's see if we can improve the sky model by implementing a direct prediction for the V-band magnitude.
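# The renormalization above is ordinary magnitude arithmetic: multiplying a spectrum by $10^{-\Delta m/2.5}$ shifts its synthetic magnitude by $\Delta m$. A unit-free sketch with hypothetical magnitudes:

```python
import numpy as np

# Hypothetical V magnitudes: raw_V for the extincted spectrum, scattered_V for
# the target scattered-moonlight value the spectrum is renormalized to.
raw_V, scattered_V = 21.0, 20.0
spectrum = np.ones(5)                          # flat placeholder spectrum
scale = 10 ** (-(scattered_V - raw_V) / 2.5)   # one magnitude brighter
renormalized = spectrum * scale
```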
# +
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import os
import h5py
import fitsio
import numpy as np
from astropy import units as u
from scipy.signal import medfilt, medfilt2d
from scipy.interpolate import interp1d
from feasibgs import skymodel as Sky
# -
# -- plotting --
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
dir_cmx = '/Users/ChangHoon/data/feasiBGS/cmx/'
dir_sky = '/Users/ChangHoon/data/feasiBGS/sky/'
desi_fiber_area = (1.46/2.)**2 * np.pi  # arcsec^2; circle of 1.46'' diameter
boss_fiber_area = np.pi                 # arcsec^2; circle of 2'' diameter
def read_cmx_skies():
''' read CMX sky fibers and return median sky surface brightness measurements for each exposure
'''
fsky = h5py.File(os.path.join(dir_cmx, 'sky_fibers.coadd_gfa.minisv2_sv0.hdf5'), 'r')
sky_data = {}
for k in fsky.keys():
sky_data[k] = fsky[k][...]
    bad_seeing = (sky_data['tileid'] == 70502) | (sky_data['date'] == 20200314)  # exposures flagged for bad seeing
exp_cuts = ~bad_seeing
    for k in sky_data.keys():
        if 'wave' not in k:  # wavelength grids are shared across exposures
            sky_data[k] = sky_data[k][exp_cuts]
uniq_exps, i_uniq = np.unique(sky_data['expid'], return_index=True)
# compile median observing conditions for each unique exposure and
# get the median sky fluxes of all sky fibers
sky_uniq_exps = {}
for k in ['airmass', 'moon_ill', 'moon_alt', 'moon_sep', 'exptime']:
sky_uniq_exps[k] = np.zeros(len(uniq_exps))
sky_uniq_exps['wave_b'] = sky_data['wave_b']
sky_uniq_exps['wave_r'] = sky_data['wave_r']
sky_uniq_exps['wave_z'] = sky_data['wave_z']
wave_concat = np.concatenate([sky_data['wave_b'], sky_data['wave_r'], sky_data['wave_z']])
wave_sort = np.argsort(wave_concat)
sky_uniq_exps['wave'] = wave_concat[wave_sort]
sky_uniq_exps['sky_b'] = np.zeros((len(uniq_exps), len(sky_data['wave_b'])))
sky_uniq_exps['sky_r'] = np.zeros((len(uniq_exps), len(sky_data['wave_r'])))
sky_uniq_exps['sky_z'] = np.zeros((len(uniq_exps), len(sky_data['wave_z'])))
sky_uniq_exps['sky'] = np.zeros((len(uniq_exps), len(sky_uniq_exps['wave'])))
print('date \t\t tile \t exp \t texp \t airmass \t moon_ill \t moon_alt \t moon_sep')
for _i, _i_uniq, _exp in zip(range(len(i_uniq)), i_uniq, uniq_exps):
_is_exp = (sky_data['expid'] == _exp)
sky_uniq_exps['airmass'][_i] = np.median(sky_data['airmass'][_is_exp])
sky_uniq_exps['moon_ill'][_i] = np.median(sky_data['moon_ill'][_is_exp])
sky_uniq_exps['moon_alt'][_i] = np.median(sky_data['moon_alt'][_is_exp])
sky_uniq_exps['moon_sep'][_i] = np.median(sky_data['moon_sep'][_is_exp])
sky_uniq_exps['exptime'][_i] = sky_data['exptime'][_is_exp][0]
sky_uniq_exps['sky_b'][_i] = np.median(sky_data['sky_b'][_is_exp], axis=0) / desi_fiber_area
sky_uniq_exps['sky_r'][_i] = np.median(sky_data['sky_r'][_is_exp], axis=0) / desi_fiber_area
sky_uniq_exps['sky_z'][_i] = np.median(sky_data['sky_z'][_is_exp], axis=0) / desi_fiber_area
sky_uniq_exps['sky'][_i] = np.concatenate([sky_uniq_exps['sky_b'][_i], sky_uniq_exps['sky_r'][_i], sky_uniq_exps['sky_z'][_i]])[wave_sort]
print('%i \t %i \t %i \t %.f \t %.2f \t\t %.2f \t\t %.1f \t\t %f' %
(sky_data['date'][_i_uniq], sky_data['tileid'][_i_uniq], sky_data['expid'][_i_uniq],
sky_uniq_exps['exptime'][_i],
sky_uniq_exps['airmass'][_i], sky_uniq_exps['moon_ill'][_i],
sky_uniq_exps['moon_alt'][_i], sky_uniq_exps['moon_sep'][_i]))
return sky_uniq_exps
def read_BOSS_skies():
''' read sky fibers from BOSS
'''
f_boss = os.path.join(dir_sky, 'Bright_BOSS_Sky_blue.fits')
boss = fitsio.read(f_boss)
f_red = os.path.join(dir_sky, 'Bright_BOSS_Sky_red.fits')
red = fitsio.read(f_red)
sky_boss = {}
sky_boss['airmass'] = boss['AIRMASS']
sky_boss['moon_ill'] = boss['MOON_ILL']
sky_boss['moon_alt'] = boss['MOON_ALT']
sky_boss['moon_sep'] = boss['MOON_SEP']
sky_boss['wave_b'] = boss['WAVE'][0] * 10. # convert to Angstroms
sky_boss['sky_b'] = boss['SKY'] / boss_fiber_area
sky_boss['wave_r'] = red['WAVE'][0] * 10.
sky_boss['sky_r'] = red['SKY'] / boss_fiber_area
wave_concat = np.concatenate([sky_boss['wave_b'], sky_boss['wave_r']])
wave_sort = np.argsort(wave_concat)
sky_boss['wave'] = wave_concat[wave_sort]
sky_boss['sky'] = np.zeros((len(sky_boss['airmass']), len(wave_concat)))
for i in range(len(sky_boss['airmass'])):
sky_boss['sky'][i] = np.concatenate([sky_boss['sky_b'][i], sky_boss['sky_r'][i]])[wave_sort]
return sky_boss
desi_skies = read_cmx_skies()
boss_skies = read_BOSS_skies()
# +
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(131)
sub.scatter(boss_skies['airmass'], boss_skies['moon_ill'], c='C0', label='BOSS')
sub.scatter(desi_skies['airmass'], desi_skies['moon_ill'], c='C1', label='DESI CMX')
sub.legend(loc='lower right', frameon=True, handletextpad=0, fontsize=15)
sub.set_xlabel('airmass', fontsize=20)
sub.set_xlim(1., 2.)
sub.set_ylabel('moon illumination', fontsize=20)
sub = fig.add_subplot(132)
sub.scatter(boss_skies['moon_alt'], boss_skies['moon_ill'], c='C0')
sub.scatter(desi_skies['moon_alt'], desi_skies['moon_ill'], c='C1')
sub.set_xlabel('moon altitude', fontsize=20)
sub.set_xlim(-90., 90.)
sub.set_yticklabels([])
sub = fig.add_subplot(133)
sub.scatter(boss_skies['moon_sep'], boss_skies['moon_ill'], c='C0', label='BOSS')
sub.scatter(desi_skies['moon_sep'], desi_skies['moon_ill'], c='C1', label='DESI CMX')
sub.set_xlabel('moon separation', fontsize=20)
sub.set_xlim(0., 180.)
sub.set_yticklabels([])
# -
# Let's calculate the scattered V-band magnitude for all BOSS and DESI exposures.
# +
specsim_sky = Sky._specsim_initialize('desi')
specsim_wave = specsim_sky._wavelength # Ang
dark_spectrum = specsim_sky._surface_brightness_dict['dark']
# -
def get_scattered_V(wave, _Isky, airmass, moon_alt):
''' given sky surface brightness and observing conditions approximate the scattered V value
'''
    # interpolate onto the specsim wavelength grid
Isky = interp1d(wave, _Isky, bounds_error=False, fill_value=0.)(specsim_wave) * 1e-17 * u.erg/u.Angstrom/u.s/u.arcsec**2/u.cm**2
# subtract dark sky surface brightness to get moon contribution
extinction = 10 ** (-specsim_sky.moon._extinction_coefficient * airmass / 2.5)
Idark = dark_spectrum * extinction
Imoon = np.clip(Isky - Idark, 0., None)
area = 1 * u.arcsec ** 2
scattered_V = specsim_sky.moon._vband.get_ab_magnitude(Imoon[np.isfinite(Imoon)] * area, specsim_wave[np.isfinite(Imoon)])
    return scattered_V  # * u.mag / u.arcsec**2
def KS_Vband(airmass, moonill, moonalt, moonsep):
''' scattered V-band moon magnitude from re-fit KS model
'''
specsim_sky.airmass = airmass
specsim_sky.moon.moon_phase = np.arccos(2.*moonill - 1)/np.pi
specsim_sky.moon.moon_zenith = (90. - moonalt) * u.deg
specsim_sky.moon.separation_angle = moonsep * u.deg
scattered_V = Sky.krisciunas_schaefer_free(
specsim_sky.moon.obs_zenith,
specsim_sky.moon.moon_zenith,
specsim_sky.moon.separation_angle,
specsim_sky.moon.moon_phase,
specsim_sky.moon.vband_extinction,
specsim_sky.moon.KS_CR,
specsim_sky.moon.KS_CM0,
specsim_sky.moon.KS_CM1,
specsim_sky.moon.KS_M0,
specsim_sky.moon.KS_M1,
specsim_sky.moon.KS_M2)
return scattered_V.value
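# The phase conversion used in `KS_Vband` maps the illuminated fraction onto specsim's convention of 0 for full moon and 1 for new moon:

```python
import numpy as np

def illum_to_phase(ill):
    """Illuminated fraction (0..1) -> moon phase (0 = full, 1 = new)."""
    return np.arccos(2.0 * ill - 1.0) / np.pi

# full moon -> 0, half moon -> 0.5, new moon -> 1
phases = [illum_to_phase(f) for f in (1.0, 0.5, 0.0)]
```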
boss_scattered_V = [get_scattered_V(boss_skies['wave'], boss_skies['sky'][i], boss_skies['airmass'][i], boss_skies['moon_alt'][i]) for i in range(len(boss_skies['airmass']))]
desi_scattered_V = [get_scattered_V(desi_skies['wave'], desi_skies['sky'][i], desi_skies['airmass'][i], desi_skies['moon_alt'][i]) for i in range(len(desi_skies['airmass']))]
boss_scattered_V_KS = [KS_Vband(boss_skies['airmass'][i], boss_skies['moon_ill'][i], boss_skies['moon_alt'][i], boss_skies['moon_sep'][i]) for i in range(len(boss_skies['airmass']))]
desi_scattered_V_KS = [KS_Vband(desi_skies['airmass'][i], desi_skies['moon_ill'][i], desi_skies['moon_alt'][i], desi_skies['moon_sep'][i]) for i in range(len(desi_skies['airmass']))]
fig = plt.figure(figsize=(5,5))
sub = fig.add_subplot(111)
sub.scatter(boss_scattered_V, boss_scattered_V_KS, c='C0', s=1, label='BOSS')
sub.scatter(desi_scattered_V, desi_scattered_V_KS, c='C1', s=2, label='DESI CMX')
sub.plot([18., 26.5], [18., 26.5], c='k', ls='--')
sub.set_xlabel('measured V', fontsize=20)
sub.set_xlim(26.5, 18)
sub.set_ylabel('re-fit KS (old) model V', fontsize=20)
sub.set_ylim(26.5, 18.)
sub.legend(loc='upper left', markerscale=5, handletextpad=0.1, fontsize=15, frameon=True)
# +
cols = ['airmass', 'moon_ill', 'moon_alt', 'moon_sep']
lbls = ['airmass', 'moon ill', 'moon alt', 'moon sep']
lims = [(1., 2.), (0., 1.), (-90., 90.), (0., 180.)]
fig = plt.figure(figsize=(10,15))
for i, k in enumerate(cols):
sub = fig.add_subplot(len(cols),1,i+1)
sub.scatter(boss_skies[k], boss_scattered_V_KS, c='k', s=1)
sub.scatter(desi_skies[k], desi_scattered_V_KS, c='k', s=1, label='re-fit KS model (old)')
sub.scatter(boss_skies[k], boss_scattered_V, c='C0', s=10, label='BOSS')
sub.scatter(desi_skies[k], desi_scattered_V, c='C1', s=10, marker='^', label='DESI CMX')
sub.set_xlabel(lbls[i], fontsize=20)
sub.set_xlim(lims[i])
sub.set_ylim(26.5, 18.)
if i == 0: sub.legend(loc='lower right', markerscale=5, handletextpad=0.1, fontsize=15, frameon=True)
fig.subplots_adjust(hspace=0.4)
bkgd = fig.add_subplot(111, frameon=False)
bkgd.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
bkgd.set_ylabel('scattered moonlight surface brightness in V band', fontsize=20)
# -
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from itertools import chain, combinations_with_replacement
theta_train, theta_test, v_train, v_test = train_test_split(
np.vstack([
np.concatenate([desi_skies['airmass'], boss_skies['airmass']]),
np.concatenate([desi_skies['moon_ill'], boss_skies['moon_ill']]),
np.concatenate([desi_skies['moon_alt'], boss_skies['moon_alt']]),
np.concatenate([desi_skies['moon_sep'], boss_skies['moon_sep']])]).T,
np.concatenate([desi_scattered_V, boss_scattered_V]),
test_size=0.1,
random_state=0)
n_order = 3
regress_v = make_pipeline(PolynomialFeatures(n_order), Ridge(alpha=0.1))
regress_v.fit(theta_train, v_train)
steps = regress_v.get_params()
coeffs = steps['ridge'].coef_
intercept = steps['ridge'].intercept_
coeffs
intercept
def scattered_V_model(airmass, moon_frac, moon_alt, moon_sep):
''' third degree polynomial regression fit to exposure factor
'''
theta = np.atleast_2d(np.array([airmass, moon_frac, moon_alt, moon_sep]).T)
combs = chain.from_iterable(combinations_with_replacement(range(4), i) for i in range(0, n_order+1))
theta_transform = np.empty((theta.shape[0], len(coeffs)))
for i, comb in enumerate(combs):
theta_transform[:, i] = theta[:, comb].prod(1)
return np.dot(theta_transform, coeffs.T) + intercept
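# `scattered_V_model` re-implements the `PolynomialFeatures` expansion with `combinations_with_replacement` so the fit can be evaluated without sklearn; the feature count and monomial ordering can be sanity-checked with hypothetical inputs:

```python
import numpy as np
from math import comb
from itertools import chain, combinations_with_replacement

n_features, order = 4, 3
combs = list(chain.from_iterable(
    combinations_with_replacement(range(n_features), i)
    for i in range(order + 1)))
# number of monomials of total degree <= 3 in 4 variables: C(4 + 3, 3) = 35
theta = np.array([[1.2, 0.9, 30.0, 60.0]])   # airmass, moon_ill, moon_alt, moon_sep
theta_transform = np.array([theta[0, list(c)].prod() for c in combs])
```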
fig = plt.figure(figsize=(5,5))
sub = fig.add_subplot(111)
sub.scatter(np.concatenate([desi_scattered_V, boss_scattered_V]),
np.concatenate([desi_scattered_V_KS, boss_scattered_V_KS]), c='k', s=1, label='refit KS (old) model')
sub.scatter(v_test, regress_v.predict(theta_test), c='C1', s=2, label='regression model')
sub.plot([18., 26.5], [18., 26.5], c='k', ls='--')
sub.legend(loc='upper left', markerscale=10, handletextpad=0.1, fontsize=15)
sub.set_xlabel('measured V', fontsize=20)
sub.set_xlim(26.5, 18)
sub.set_ylabel('model V', fontsize=20)
sub.set_ylim(26.5, 18.)
fig = plt.figure(figsize=(5,5))
sub = fig.add_subplot(111)
sub.scatter(desi_scattered_V, desi_scattered_V_KS, c='k', s=3, label='refit KS (old) model')
sub.scatter(desi_scattered_V,
regress_v.predict(np.vstack([desi_skies['airmass'], desi_skies['moon_ill'], desi_skies['moon_alt'], desi_skies['moon_sep']]).T),
c='C1', s=5, label='regression model')
sub.plot([18., 22], [18., 22], c='k', ls='--')
sub.legend(loc='upper left', markerscale=10, handletextpad=0.1, fontsize=15)
sub.set_xlabel('measured V', fontsize=20)
sub.set_xlim(22, 18)
sub.set_ylabel('model V', fontsize=20)
sub.set_ylim(22, 18.)
# Now let's see how much better a sky model based on the regressed scattered V-band moonlight reproduces the observed sky brightness compared to the old KS model. Let's compare the sky brightness at 4500 and 5500 Angstroms.
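# The continuum extraction below rests on a wide median filter rejecting narrow sky emission lines; a toy demonstration:

```python
import numpy as np
from scipy.signal import medfilt

flux = np.ones(1000)       # flat toy continuum
flux[500] = 100.0          # a single narrow "emission line"
cont = medfilt(flux, 151)  # same kernel width used for the DESI continuum
```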
# +
def get_continuum(flux, data_set='desi'):
if data_set == 'desi':
cont = medfilt(flux, 151)
elif data_set == 'boss':
cont = medfilt(flux[::10], 121)
return cont
def get_sky_at_wavelength(wave, sky, wavelength=4500, data_set='desi'):
''' given wavelength and surface brightness, return the value of the
continuum at ~4500A
'''
if data_set == 'boss': wave = wave[::10]
near_wave = (wave > wavelength-5.) & (wave < wavelength+5.)
assert np.sum(near_wave) > 0
sky_cont = get_continuum(sky, data_set=data_set)
return np.median(sky_cont[near_wave])
# -
desi_4500 = np.zeros(len(desi_skies['airmass']))
desi_5500 = np.zeros(len(desi_skies['airmass']))
for i in range(len(desi_skies['airmass'])):
desi_4500[i] = get_sky_at_wavelength(desi_skies['wave_b'], desi_skies['sky_b'][i], wavelength=4500, data_set='desi')
desi_5500[i] = get_sky_at_wavelength(desi_skies['wave_b'], desi_skies['sky_b'][i], wavelength=5500, data_set='desi')
boss_4500 = np.zeros(len(boss_skies['airmass']))
boss_5500 = np.zeros(len(boss_skies['airmass']))
for i in range(len(boss_skies['airmass'])):
boss_4500[i] = get_sky_at_wavelength(boss_skies['wave_b'], boss_skies['sky_b'][i], wavelength=4500, data_set='boss')
boss_5500[i] = get_sky_at_wavelength(boss_skies['wave_b'], boss_skies['sky_b'][i], wavelength=5500, data_set='boss')
# +
def sky_model_KSrefit(airmass, moonill, moonalt, moonsep):
''' sky surface brightness model (KS coefficients fit to BOSS sky fibers only;
see https://github.com/changhoonhahn/feasiBGS/blob/master/notebook/local_newKS_fit.ipynb)
:return specsim_wave, Isky:
returns wavelength [Angstrom] and sky flux [$10^{-17} erg/cm^{2}/s/\AA/arcsec^2$]
'''
specsim_sky.airmass = airmass
specsim_sky.moon.moon_phase = np.arccos(2.*moonill - 1)/np.pi
specsim_sky.moon.moon_zenith = (90. - moonalt) * u.deg
specsim_sky.moon.separation_angle = moonsep * u.deg
# updated KS coefficients
specsim_sky.moon.KS_CR = 458173.535128
specsim_sky.moon.KS_CM0 = 5.540103
specsim_sky.moon.KS_CM1 = 178.141045
I_ks_rescale = specsim_sky.surface_brightness
Isky = I_ks_rescale.value
return specsim_wave.value[::10], Isky[::10]
def sky_regression_model(airmass, moonill, moonalt, moonsep):
''' sky surface brightness regression model
'''
# scattered V from regression model
scattered_V = regress_v.predict(np.atleast_2d(np.array([airmass, moonill, moonalt, moonsep]))) * u.mag / u.arcsec**2
moon_zenith = (90. - moonalt) * u.deg
scattering_airmass = (1 - 0.96 * np.sin(moon_zenith) ** 2) ** (-0.5)
extinction = (10 ** (-specsim_sky.moon._extinction_coefficient * scattering_airmass / 2.5) *
(1 - 10 ** (-specsim_sky.moon._extinction_coefficient * airmass / 2.5)))
_surface_brightness = specsim_sky.moon._moon_spectrum * extinction
# Renormalized the extincted spectrum to the correct V-band magnitude.
raw_V = specsim_sky.moon._vband.get_ab_magnitude(_surface_brightness, specsim_wave) * u.mag
area = 1 * u.arcsec ** 2
specsim_sky.moon._surface_brightness = _surface_brightness * 10 ** (-(scattered_V * area - raw_V) / (2.5 * u.mag)) / area
return specsim_wave.value, specsim_sky.surface_brightness
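# The scattering-airmass factor used above, $X(z) = (1 - 0.96\sin^2 z)^{-1/2}$, is the Krisciunas & Schaefer form; a quick numerical check:

```python
import numpy as np

def scattering_airmass(zenith_deg):
    # unity at the zenith, growing toward the horizon
    z = np.radians(zenith_deg)
    return (1.0 - 0.96 * np.sin(z) ** 2) ** -0.5

x0, x60, x85 = (scattering_airmass(d) for d in (0.0, 60.0, 85.0))
```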
# +
def get_KSrefit_model_at_wavelength(airmass, moon_ill, moon_alt, moon_sep, wavelength=4500):
airmasses = np.atleast_1d(airmass)
moon_ills = np.atleast_1d(moon_ill)
moon_alts = np.atleast_1d(moon_alt)
moon_seps = np.atleast_1d(moon_sep)
Iskys = []
for _airmass, _ill, _alt, _sep in zip(airmasses, moon_ills, moon_alts, moon_seps):
wave, _Isky = sky_model_KSrefit(_airmass, _ill, _alt, _sep)
Iskys.append(_Isky)
wlim = (wave > 4000.) & (wave < 6000.)
sky_cont = medfilt2d(np.array(Iskys)[:,wlim], (1, 121))
near_wave = (wave[wlim] > wavelength-5.) & (wave[wlim] < wavelength+5.)
return np.median(sky_cont[:,near_wave], axis=1)
def get_regression_model_at_wavelength(airmass, moon_ill, moon_alt, moon_sep, wavelength=4500):
airmasses = np.atleast_1d(airmass)
moon_ills = np.atleast_1d(moon_ill)
moon_alts = np.atleast_1d(moon_alt)
moon_seps = np.atleast_1d(moon_sep)
Iskys = []
for _airmass, _ill, _alt, _sep in zip(airmasses, moon_ills, moon_alts, moon_seps):
wave, _Isky = sky_regression_model(_airmass, _ill, _alt, _sep)
Iskys.append(_Isky)
wlim = (wave > 4000.) & (wave < 6000.)
sky_cont = medfilt2d(np.array(Iskys)[:,wlim], (1, 121))
near_wave = (wave[wlim] > wavelength-5.) & (wave[wlim] < wavelength+5.)
return np.median(sky_cont[:,near_wave], axis=1)
# -
reg_model_desi_4500 = get_regression_model_at_wavelength(desi_skies['airmass'],
desi_skies['moon_ill'],
desi_skies['moon_alt'],
desi_skies['moon_sep'],
wavelength=4500)
reg_model_desi_5500 = get_regression_model_at_wavelength(desi_skies['airmass'],
desi_skies['moon_ill'],
desi_skies['moon_alt'],
desi_skies['moon_sep'],
wavelength=5500)
reg_model_boss_4500 = get_regression_model_at_wavelength(boss_skies['airmass'],
boss_skies['moon_ill'],
boss_skies['moon_alt'],
boss_skies['moon_sep'],
wavelength=4500)
reg_model_boss_5500 = get_regression_model_at_wavelength(boss_skies['airmass'],
boss_skies['moon_ill'],
boss_skies['moon_alt'],
boss_skies['moon_sep'],
wavelength=5500)
KSrefit_model_desi_4500 = get_KSrefit_model_at_wavelength(desi_skies['airmass'],
desi_skies['moon_ill'],
desi_skies['moon_alt'],
desi_skies['moon_sep'],
wavelength=4500)
KSrefit_model_desi_5500 = get_KSrefit_model_at_wavelength(desi_skies['airmass'],
desi_skies['moon_ill'],
desi_skies['moon_alt'],
desi_skies['moon_sep'],
wavelength=5500)
KSrefit_model_boss_4500 = get_KSrefit_model_at_wavelength(boss_skies['airmass'],
boss_skies['moon_ill'],
boss_skies['moon_alt'],
boss_skies['moon_sep'],
wavelength=4500)
KSrefit_model_boss_5500 = get_KSrefit_model_at_wavelength(boss_skies['airmass'],
boss_skies['moon_ill'],
boss_skies['moon_alt'],
boss_skies['moon_sep'],
wavelength=5500)
# +
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(121)
sub.scatter(boss_4500, KSrefit_model_boss_4500, c='C0', s=1)
sub.scatter(desi_4500, KSrefit_model_desi_4500, c='C0', s=1)
sub.scatter(boss_4500, reg_model_boss_4500, c='C1', s=5, marker='^')
sub.scatter(desi_4500, reg_model_desi_4500, c='C1', s=5, marker='^')
sub.plot([0., 20.], [0., 20.], c='k', ls='--')
sub.set_xlabel('sky brightness at $4500 \AA$', fontsize=20)
sub.set_xlim(0., 20.)
sub.set_ylabel('sky model at $4500 \AA$', fontsize=20)
sub.set_ylim(0., 20.)
sub = fig.add_subplot(122)
sub.scatter(boss_5500, KSrefit_model_boss_5500, c='C0', s=1)
sub.scatter(desi_5500, KSrefit_model_desi_5500, c='C0', s=1, label='refit KS (old) model')
sub.scatter(boss_5500, reg_model_boss_5500, c='C1', s=5, marker='^')
sub.scatter(desi_5500, reg_model_desi_5500, c='C1', s=5, marker='^', label='regression model')
sub.plot([0., 20.], [0., 20.], c='k', ls='--')
sub.set_xlabel('sky brightness at $5500 \AA$', fontsize=20)
sub.set_xlim(0., 20.)
sub.set_ylabel('sky model at $5500 \AA$', fontsize=20)
sub.set_ylim(0., 20.)
sub.legend(loc='upper left', markerscale=5, handletextpad=0.1, fontsize=15, frameon=True)
fig.subplots_adjust(wspace=0.4)
# +
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(131)
sub.scatter(boss_skies['airmass'], boss_skies['moon_ill'], c=boss_4500/reg_model_boss_4500, s=3, vmin=1., vmax=3., label='BOSS')
sub.scatter(desi_skies['airmass'], desi_skies['moon_ill'], marker='^', c=desi_4500/reg_model_desi_4500, s=80, vmin=1., vmax=3., label='DESI CMX')
sub.legend(loc='lower right', frameon=True, handletextpad=0, fontsize=15)
sub.set_xlabel('airmass', fontsize=20)
sub.set_xlim(1., 2.)
sub.set_ylabel('moon illumination', fontsize=20)
sub = fig.add_subplot(132)
sub.scatter(boss_skies['moon_alt'], boss_skies['moon_ill'], c=boss_4500/reg_model_boss_4500, s=3, vmin=1., vmax=3.)
sub.scatter(desi_skies['moon_alt'], desi_skies['moon_ill'], marker='^', s=80, c=desi_4500/reg_model_desi_4500, vmin=1., vmax=3.)
sub.set_xlabel('moon altitude', fontsize=20)
sub.set_xlim(-90., 90.)
sub.set_yticklabels([])
sub = fig.add_subplot(133)
sct = sub.scatter(boss_skies['moon_sep'], boss_skies['moon_ill'], c=boss_4500/reg_model_boss_4500, s=3, vmin=1., vmax=3.)
sub.scatter(desi_skies['moon_sep'], desi_skies['moon_ill'], marker='^', c=desi_4500/reg_model_desi_4500, s=80, vmin=1., vmax=3.)
sub.set_xlabel('moon separation', fontsize=20)
sub.set_xlim(0., 180.)
sub.set_yticklabels([])
fig.subplots_adjust(wspace=0.1, hspace=0.1, right=0.85)
cbar_ax = fig.add_axes([0.875, 0.15, 0.02, 0.7])
cbar = fig.colorbar(sct, cax=cbar_ax)
cbar.set_label(label='(sky data / sky model) at $4500\AA$', fontsize=20)
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(131)
sub.scatter(boss_skies['airmass'], boss_skies['moon_ill'], c=boss_5500/reg_model_boss_5500, s=3, vmin=1., vmax=3., label='BOSS')
sub.scatter(desi_skies['airmass'], desi_skies['moon_ill'], marker='^', c=desi_5500/reg_model_desi_5500, s=80, vmin=1., vmax=3., label='DESI CMX')
sub.legend(loc='lower right', frameon=True, handletextpad=0, fontsize=15)
sub.set_xlabel('airmass', fontsize=20)
sub.set_xlim(1., 2.)
sub.set_ylabel('moon illumination', fontsize=20)
sub = fig.add_subplot(132)
sub.scatter(boss_skies['moon_alt'], boss_skies['moon_ill'], c=boss_5500/reg_model_boss_5500, s=3, vmin=1., vmax=3.)
sub.scatter(desi_skies['moon_alt'], desi_skies['moon_ill'], marker='^', s=80, c=desi_5500/reg_model_desi_5500, vmin=1., vmax=3.)
sub.set_xlabel('moon altitude', fontsize=20)
sub.set_xlim(-90., 90.)
sub.set_yticklabels([])
sub = fig.add_subplot(133)
sct = sub.scatter(boss_skies['moon_sep'], boss_skies['moon_ill'], c=boss_5500/reg_model_boss_5500, s=3, vmin=1., vmax=3.)
sub.scatter(desi_skies['moon_sep'], desi_skies['moon_ill'], marker='^', c=desi_5500/reg_model_desi_5500, s=80, vmin=1., vmax=3.)
sub.set_xlabel('moon separation', fontsize=20)
sub.set_xlim(0., 180.)
sub.set_yticklabels([])
fig.subplots_adjust(wspace=0.1, hspace=0.1, right=0.85)
cbar_ax = fig.add_axes([0.875, 0.15, 0.02, 0.7])
cbar = fig.colorbar(sct, cax=cbar_ax)
cbar.set_label(label='(sky data / sky model) at $5500\AA$', fontsize=20)
# -
| notebook/cmx/sky_Vband_model_fit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# DIPY Experiments - m2g
# <NAME> - January 22, 2016
# Snowzilla
# Params
dwifile = 'KKI2009_113_1_DTI.nii'
fbval = 'KKI2009_113_1_DTI.bval'
fbvec = 'KKI2009_113_1_DTI.bvec'
fatlas = 'MNI152_T1_1mm_brain.nii.gz'
fatlas_labels = 'desikan.nii.gz'
import os.path
subLabel = os.path.splitext(dwifile)[0]
print subLabel
#TODO GK: build argparser, add m2rage, rename sublabel, output graphs
#create: run_m2g_dipy.py that takes in file names and prints them
# +
# Load Data
from datetime import datetime
startTime = datetime.now()
from dipy.io import read_bvals_bvecs, read_bvec_file
from dipy.core.gradients import gradient_table
import numpy as np
import nibabel as nib
img = nib.load(dwifile)
atlas_img = nib.load(fatlas)
data = img.get_data()
atlas = atlas_img.get_data()
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
# Get rid of spurious scans
idx = np.where((bvecs[:, 0] == 100) & (bvecs[:, 1] == 100) & (bvecs[:, 2] == 100))
bvecs = np.delete(bvecs, idx, axis=0)
bvals = np.delete(bvals, idx, axis=0)
data = np.delete(data,idx,axis=3)
gtab = gradient_table(bvals, bvecs, atol = 0.01)
print gtab.info
# +
# Preprocess DTI
# TODO - eddy correction!
# Get b0
b0 = np.where(gtab.b0s_mask)[0]
b0_vol = np.squeeze(data[:, :, :, b0])  # assumes a single b0 volume
print datetime.now() - startTime
# +
# Register to atlas
# Register DTI to atlas
#(B0 to MNI directly)
from dipy.viz import regtools
from dipy.data import fetch_stanford_hardi, read_stanford_hardi
from dipy.data.fetcher import fetch_syn_data, read_syn_data
from dipy.align.imaffine import (transform_centers_of_mass,
AffineMap,
MutualInformationMetric,
AffineRegistration)
from dipy.align.transforms import (TranslationTransform3D,
RigidTransform3D,
AffineTransform3D)
static_grid2world = atlas_img.get_affine()
moving_grid2world = img.get_affine()
# for compatibility with example code
static = atlas
moving = b0_vol
print moving.shape
print static.shape
"""
We can see that the images are far from aligned by drawing one on top of
the other. The images don't even have the same number of voxels, so in order
to draw one on top of the other we need to resample the moving image on a grid
of the same dimensions as the static image, we can do this by "transforming"
the moving image using an identity transform
"""
identity = np.eye(4)
affine_map = AffineMap(identity,
static.shape, static_grid2world,
moving.shape, moving_grid2world)
resampled = affine_map.transform(moving)
"""
We can obtain a very rough (and fast) registration by just aligning the centers
of mass of the two images
"""
c_of_mass = transform_centers_of_mass(static, static_grid2world,
moving, moving_grid2world)
"""
We can now transform the moving image and draw it on top of the static image,
registration is not likely to be good, but at least they will occupy roughly
the same space
"""
transformed = c_of_mass.transform(moving)
"""
This was just a translation of the moving image towards the static image, now
we will refine it by looking for an affine transform. We first create the
similarity metric (Mutual Information) to be used. We need to specify the
number of bins to be used to discretize the joint and marginal probability
distribution functions (PDF), a typical value is 32. We also need to specify
the percentage (an integer in (0, 100]) of voxels to be used for computing the
PDFs, the most accurate registration will be obtained by using all voxels, but
it is also the most time-consuming choice. We specify full sampling by passing
None instead of an integer
"""
nbins = 32
sampling_prop = None
metric = MutualInformationMetric(nbins, sampling_prop)
"""
To avoid getting stuck at local optima, and to accelerate convergence, we use a
multi-resolution strategy (similar to ANTS [Avants11]_) by building a Gaussian
Pyramid. To have as much flexibility as possible, the user can specify how this
Gaussian Pyramid is built. First of all, we need to specify how many
resolutions we want to use. This is indirectly specified by just providing a
list of the number of iterations we want to perform at each resolution. Here we
will just specify 3 resolutions and a large number of iterations, 10000 at the
coarsest resolution, 1000 at the medium resolution and 100 at the finest. These
are the default settings
"""
level_iters = [10000, 1000, 100]
"""
To compute the Gaussian pyramid, the original image is first smoothed at each
level of the pyramid using a Gaussian kernel with the requested sigma. A good
initial choice is [3.0, 1.0, 0.0], this is the default
"""
sigmas = [3.0, 1.0, 0.0]
"""
Now we specify the sub-sampling factors. A good configuration is [4, 2, 1],
which means that, if the original image shape was (nx, ny, nz) voxels, then the
shape of the coarsest image will be about (nx//4, ny//4, nz//4), the shape in
the middle resolution will be about (nx//2, ny//2, nz//2) and the image at the
finest scale has the same size as the original image. This set of factors is
the default
"""
factors = [4, 2, 1]
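# A sketch of the pyramid level shapes these factors imply, for a hypothetical
# volume of (181, 217, 181) voxels (roughly MNI152 1mm dimensions):

```python
shape = (181, 217, 181)
pyramid = [tuple(s // f for s in shape) for f in [4, 2, 1]]
# pyramid[0] is the coarsest level (~shape // 4); pyramid[-1] keeps the
# original resolution.
```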
"""
Now we go ahead and instantiate the registration class with the configuration
we just prepared
"""
affreg = AffineRegistration(metric=metric,
level_iters=level_iters,
sigmas=sigmas,
factors=factors)
"""
Using AffineRegistration we can register our images in as many stages as we
want, providing previous results as initialization for the next (the same logic
as in ANTS). This is useful because registration is a non-convex optimization
problem (it may have more than one local optimum), so it is very important to
initialize as close to the solution as possible. For example, let's start with
our (previously computed) rough transformation aligning the centers of mass of
our images, and then refine it in three stages. First we look for an optimal
translation. The dictionary regtransforms contains all available transforms; we
obtain one of them by providing its name and the dimension (either 2 or 3) of
the image we are working with (since we are aligning volumes, the dimension
is 3)
"""
transform = TranslationTransform3D()
params0 = None
starting_affine = c_of_mass.affine
translation = affreg.optimize(static, moving, transform, params0,
static_grid2world, moving_grid2world,
starting_affine=starting_affine)
"""
If we look at the result, we can see that this translation is much better than
simply aligning the centers of mass
"""
transformed = translation.transform(moving)
"""
Now let's refine with a rigid transform (this may even modify our previously
found optimal translation)
"""
transform = RigidTransform3D()
params0 = None
starting_affine = translation.affine
rigid = affreg.optimize(static, moving, transform, params0,
static_grid2world, moving_grid2world,
starting_affine=starting_affine)
"""
This produces a slight rotation, and the images are now better aligned
"""
transformed = rigid.transform(moving)
"""
Finally, let's refine with a full affine transform (translation, rotation, scale
and shear), it is safer to fit more degrees of freedom now, since we must be
very close to the optimal transform
"""
transform = AffineTransform3D()
params0 = None
starting_affine = rigid.affine
affine = affreg.optimize(static, moving, transform, params0,
static_grid2world, moving_grid2world,
starting_affine=starting_affine)
transformed = affine.transform(moving)
print(datetime.now() - startTime)
# +
# Apply transform
# Loop through each of the DTI volumes for each of the transforms
dwi_reg = np.zeros([transformed.shape[0], transformed.shape[1],
                    transformed.shape[2], 33])  # TODO: use data.shape[3] - 1
print(dwi_reg.shape)
for x in range(dwi_reg.shape[3]):
moving = np.squeeze(data[:, :, :, x])
    # The final affine already composes the center-of-mass, translation and
    # rigid stages, so applying it alone registers the volume.
    dwi_reg[:, :, :, x] = affine.transform(moving)
print(datetime.now() - startTime)
# +
# Tensor estimation
import numpy as np
from dipy.reconst.dti import TensorModel, fractional_anisotropy
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
auto_response)
from dipy.direction import peaks_from_model
from dipy.tracking.eudx import EuDX
from dipy.data import fetch_stanford_hardi, read_stanford_hardi, get_sphere
from dipy.segment.mask import median_otsu
from dipy.viz import fvtk
from dipy.viz.colormap import line_colors
data = dwi_reg
labeldata = nib.load(fatlas_labels)
label = labeldata.get_data()
"""
Create a brain mask. Here we just threshold labels.
"""
mask = (label > 0)
gtab.info
print(data.shape)
"""
For the constrained spherical deconvolution we need to estimate the response
function (see :ref:`example_reconst_csd`) and create a model.
"""
response, ratio = auto_response(gtab, data, roi_radius=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response)
"""
Next, we use ``peaks_from_model`` to fit the data and calculate the fiber
directions in all voxels.
"""
sphere = get_sphere('symmetric724')
csd_peaks = peaks_from_model(model=csd_model,
data=data,
sphere=sphere,
mask=mask,
relative_peak_threshold=.5,
min_separation_angle=25,
parallel=True)
"""
For the tracking part, we will use the fiber directions from the ``csd_model``
but stop tracking in areas where fractional anisotropy (FA) is low (< 0.1).
To derive the FA, used here as a stopping criterion, we would need to fit a
tensor model first. Here, we fit the Tensor using weighted least squares (WLS).
"""
print('tensors...')
tensor_model = TensorModel(gtab, fit_method='WLS')
tensor_fit = tensor_model.fit(data, mask)
FA = fractional_anisotropy(tensor_fit.evals)
"""
In order for the stopping values to be used with our tracking algorithm we need
to have the same dimensions as the ``csd_peaks.peak_values``. For this reason,
we can assign the same FA value to every peak direction in the same voxel in
the following way.
"""
stopping_values = np.zeros(csd_peaks.peak_values.shape)
stopping_values[:] = FA[..., None]
print(datetime.now() - startTime)
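# The ``FA[..., None]`` assignment above is plain numpy broadcasting; a minimal
# sketch with toy shapes (random stand-ins, not the real FA volume or peak
# values):

```python
import numpy as np

# Toy stand-ins: a (4, 4, 4) FA volume and 5 peak directions per voxel.
fa = np.random.rand(4, 4, 4)
peak_values = np.zeros((4, 4, 4, 5))

# FA[..., None] appends a trailing length-1 axis, so the assignment
# broadcasts each voxel's FA value across all 5 peak directions.
stopping = np.zeros(peak_values.shape)
stopping[:] = fa[..., None]
```

# Every peak direction in a voxel now carries that voxel's FA value, matching
# the shape of ``csd_peaks.peak_values``.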
# +
# Fiber Tracking
# TODO: better seeding
"""
``EuDX`` [Garyfallidis12]_ is a fast algorithm that we use here to generate
streamlines. If the parameter ``seeds`` is a positive integer it will generate
that number of randomly placed seeds everywhere in the volume. Alternatively,
you can specify the exact seed points using an array (N, 3) where N is the
number of seed points. For simplicity, here we will use the first option
(random seeds). ``a_low`` is the threshold on the first parameter
(``stopping_values``), which means that tracking will stop in regions with
FA < 0.1.
"""
streamline_generator = EuDX(stopping_values,
csd_peaks.peak_indices,
seeds=10**6,
odf_vertices=sphere.vertices,
a_low=0.1)
streamlines = [streamline for streamline in streamline_generator]
print(datetime.now() - startTime)
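# As noted above, ``seeds`` may instead be an explicit ``(N, 3)`` array of seed
# points. A hedged sketch of building such an array from a binary mask with
# ``np.argwhere`` (toy mask, not the real data):

```python
import numpy as np

# Toy binary mask standing in for, e.g., a white-matter mask.
mask = np.zeros((10, 10, 10), dtype=bool)
mask[3:6, 3:6, 3:6] = True

# np.argwhere returns the (i, j, k) coordinates of every True voxel
# as an (N, 3) array -- the shape EuDX expects for explicit seeds.
seed_points = np.argwhere(mask).astype(float)
```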
# +
# Graph gen (non-scalable for now)
#print label(ss)
# initialize graph
# For each streamline, round values
# index into array and get all unique labels
# for every n choose 2 ids, increment graph
# plot graph
# save graph
startTime = datetime.now()
from itertools import combinations
import networkx as nx
import matplotlib.pylab as plt
#G = nx.Graph()
G = np.zeros((70,70))
print(np.shape(streamlines))
for y in range(np.shape(streamlines)[0]):
if (y % 25000) == 0:
        print(y)
ss = (np.round(streamlines[y]))
ss = ss.astype(int)
f = []
for x in range(ss.shape[0]):
f.append(label[ss[x][0],ss[x][1],ss[x][2]])
f = np.unique(f)
f = f[f != 0]
ff = list(combinations(f,2))
for z in range(np.shape(ff)[0]):
G[ff[z][0]-1,ff[z][1]-1] = G[ff[z][0]-1,ff[z][1]-1] + 1
print(datetime.now() - startTime)
# -
print(G)
import numpy as np
np.save('dipygraph.npy', G)  # checkpoint the graph so it survives if the run dies
import igraph as ig
g = ig.Graph.Weighted_Adjacency(G.tolist(), mode='undirected', attr='weight')  # igraph expects a list of lists, not numpy rows
g.save('thefilenameyouwant.graphml', format='graphml')
# +
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_aspect('equal')
G2 = G + np.transpose(G)
plt.imshow(np.log10(G2), interpolation='nearest', cmap=plt.cm.hot)
plt.colorbar()
plt.savefig(subLabel+'.png')
#plt.show()
# -
import csv
with open(subLabel+'.csv', 'w') as csvfile:
csvwriter = csv.writer(csvfile, delimiter=',')
#for x in range(G2.shape[0]):
csvwriter.writerows(G2)
#TODO GK: output G as graphml
# +
print(G)
len(G)
# -
#plt.show()
| examples/m2g_dipy_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import py_vollib
from py_vollib.black_scholes import black_scholes as bs
from py_vollib.black_scholes.implied_volatility import implied_volatility as iv
from py_vollib.black_scholes.greeks.analytical import delta
from py_vollib.black_scholes.greeks.analytical import gamma
from py_vollib.black_scholes.greeks.analytical import rho
from py_vollib.black_scholes.greeks.analytical import theta
from py_vollib.black_scholes.greeks.analytical import vega
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
from statsmodels.tsa.api import SimpleExpSmoothing
# ### Load traded options transaction data on DTCC for February 2021
df = pd.read_csv('database_TB.csv', index_col='Date')
# ### Load USD/AUD spot rates data from Yahoo Finance for February 2021
df2 = pd.read_csv('AUDUSD=X.csv', index_col='Date')
# ### Inner join two datasets on Date
df3 = df.join(df2)
# ### Calculate Implied Volatility
# +
df4 = df3.reset_index()
# calculate the adjusted premium for FX option
df4['Adj_Prem'] = df4['Option Premium Amount'] / df4['USD_notional']
# calculate time to maturity as of execution date
df4['Date'] = pd.to_datetime(df4['Date'])
df4['Expiration Date'] = pd.to_datetime(df4['Expiration Date'])
# keep datetime64 dtype so the difference supports the .dt accessor
df4['Maturity'] = (df4['Expiration Date'] - df4['Date']).dt.days / 365
# -
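# The ``Maturity`` column above is a plain ACT/365 day count; the same
# computation with only the standard library (hypothetical dates):

```python
from datetime import date

execution = date(2021, 2, 1)    # hypothetical trade (execution) date
expiration = date(2021, 8, 1)   # hypothetical option expiration date

# Time to maturity in years, ACT/365 convention, as in the pandas cell above.
maturity = (expiration - execution).days / 365
```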
# calculate Implied Volatility
Rf = 0.0001 - 0.0008
for i in tqdm(df4.index):
df4.loc[i,'IV'] = iv(df4['Adj_Prem'][i], df4['Spot_USDAUD'][i], df4['Strike_USDAUD'][i],
df4['Maturity'][i], Rf, df4['Type'][i])
# drop useless columns
df5=df4.drop(['Open','High','Low','Close','Adj Close','Volume'], axis=1)
# ### Calculate Greeks
for i in tqdm(df5.index):
df5.loc[i,'Delta'] = delta(df5['Type'][i],df5['Spot_USDAUD'][i],df5['Strike_USDAUD'][i],df5['Maturity'][i],Rf,df5['IV'][i])
df5.loc[i,'Gamma'] = gamma(df5['Type'][i],df5['Spot_USDAUD'][i],df5['Strike_USDAUD'][i],df5['Maturity'][i],Rf,df5['IV'][i])
df5.loc[i,'Rho'] = rho(df5['Type'][i],df5['Spot_USDAUD'][i],df5['Strike_USDAUD'][i],df5['Maturity'][i],Rf,df5['IV'][i])
df5.loc[i,'Theta'] = theta(df5['Type'][i],df5['Spot_USDAUD'][i],df5['Strike_USDAUD'][i],df5['Maturity'][i],Rf,df5['IV'][i])
df5.loc[i,'Vega'] = vega(df5['Type'][i],df5['Spot_USDAUD'][i],df5['Strike_USDAUD'][i],df5['Maturity'][i],Rf,df5['IV'][i])
# ### Export the data to "database_TB_Ready.csv" for Tableau dashboard
df5.to_csv('database_TB_Ready.csv', index=False)
| Data Cleaning and Calculations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Monotonic Classifiers
#
# Some classifiers should never "flip-flop" between classes. For example, consider the following classifier that labels system call traces as coming from benign or malicious programs. No matter how many benign instructions are added to a malicious program, it should never trick the classifier into thinking the program is benign.
#
# The classifier below takes sequences of system calls obtained from execution traces from malicious and benign programs. Treating each execution trace as a document, we extract a tf-idf[1] vector for feature extraction. Code below is provided that:
# 1. Grabs ground truth traces
# 1. Vectorizes them with tf-idf
# 1. Performs 10-fold cross validation
# 1. Trains a Logistic Regression model
#
# Your task as a malware author yourself, is to find direct and indirect ways to break this model so antivirus software cannot detect your code. Approach these tasks in three chunks:
# 1. Manually manipulate a malicious feature vector such that the classifier mistakenly labels it benign. If you successfully do this once, create a function that, given a malicious feature vector, returns a new one that will be classified as benign.
# 1. Identify features that, given your knowledge of what monotonic classifiers try to solve, could be used to "flip" a malicious program into a benign one.
# 1. Using the aforementioned features, write a function that transforms a malicious syscall trace (appending is fine) so that it is classified as benign.
# 1. Modify the classifier so these features can no longer be used to "flip" a malicious trace. There's a quick'n'dirty way to do this, but more sophisticated[2] and robust[3] techniques exist if monotonicity is an important feature that your classifier needs.
#
# ## References
# * [1] https://en.wikipedia.org/wiki/Tf%E2%80%93idf
# * [2] https://arxiv.org/pdf/1804.03643.pdf
# * [3] https://www.slideshare.net/MSbluehat/bluehat-v17-detecting-compromise-on-windows-endpoints-with-osquery
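# Before diving in, the monotonicity property itself is easy to state
# concretely: a classifier is monotone in a feature if adding more of that
# feature (e.g. padding a trace with extra syscalls) can never lower the
# malicious score. A toy sketch with a hypothetical linear scorer (not the
# notebook's model):

```python
import numpy as np

def malicious_score(weights, x):
    # Linear decision score: higher means "more malicious".
    return float(np.dot(weights, x))

def is_monotone_in_feature(weights, x, idx, bumps=(1.0, 5.0, 100.0)):
    # Monotone in feature idx: adding more of it never lowers the score.
    base = malicious_score(weights, x)
    for k in bumps:
        bumped = x.copy()
        bumped[idx] += k
        if malicious_score(weights, bumped) < base:
            return False
    return True

w = np.array([0.8, -0.5, 0.3])   # hypothetical coefficients
x = np.array([1.0, 1.0, 1.0])
```

# A negative weight is exactly what an attacker abuses: padding the
# corresponding syscall drives the score down.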
# ## Utility Functions
# +
import os
import sys
import fnmatch
import random
import itertools
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Lasso
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.externals import joblib  # deprecated; use `import joblib` on newer scikit-learn
import numpy as np
import pandas as pd
import seaborn
import matplotlib.pyplot as plt
## utils
def rwalk(directory, pattern):
"""Recursively search "directory" for files that match the Unix shell-style
wildcard given by "pattern" (like '*.mp3'). Returns matches as a generator."""
for root, dirnames, filenames in os.walk(directory):
for filename in fnmatch.filter(filenames, pattern):
yield os.path.join(root, filename)
def gettraces(benignpath='../data/01-monotonic-classifiers/benign-traces',
malpath='../data/01-monotonic-classifiers/malicious-traces'):
return list(rwalk(malpath, '*.trace')), list(rwalk(benignpath, '*.trace'))
def get_random_malicious_trace(malpath='../data/01-monotonic-classifiers/malicious-traces'):
"""Grab the text of a random malicious system call trace."""
mal, _ = gettraces(malpath=malpath)
with open(random.choice(mal)) as f:
return f.read()
# -
# ## Use the TfidfVectorizer to vectorize ground truth
#
# The following extracts vectors from each benign and malicious execution trace and returns four values:
# 1. `X`: the feature vectors
# 1. `y`: the class labels
# 1. `terms`: the list of labels (0 is benign, 1 is malicious)
# 1. `vectorizer`: a TfidfVectorizer which is fit to the terms in the ground truth and can be used to fit new syscall traces with `vectorizer.transform([trace1, trace2, ..., traceN])`
# +
def vectorize(featuredir='../data/01-monotonic-classifiers/feature-vectors'):
pos_traces, neg_traces = gettraces()
pos_y = [1 for _ in pos_traces]
neg_y = [0 for _ in neg_traces]
docs = [open(x).read() for x in pos_traces + neg_traces]
y = np.array(pos_y + neg_y)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
terms = np.asarray(vectorizer.get_feature_names())
return X, y, terms, vectorizer
X, y, terms, vectorizer = vectorize()
# -
X
terms
# +
def _heatmap(crosstab):
plt.clf()
p = seaborn.heatmap(crosstab, square=True)
plt.tight_layout()
plt.show()
def _cv(X, y, k, name, clf, csvname, modeldir=None, terms=None, resultdir=None):
print('## %s' % name)
print('### Cross Validation')
print('`%s`' % str(cross_val_score(clf, X, y, cv=k)))
print('### CV Confusion Matrix')
y_pred = cross_val_predict(clf, X, y, cv=k)
print('```')
print(pd.crosstab(y, y_pred, rownames=['True'], colnames=['Predicted']))
print('```')
_heatmap(pd.crosstab(y, y_pred, rownames=['True'], colnames=['Predicted'],
normalize='index'))
clf.fit(X, y)
return clf
# -
clf = _cv(X, y, 10, 'name', LogisticRegression(solver='lbfgs'), 'foo.csv', modeldir='../work', terms=terms, resultdir='../work')
clf.classes_
clf.coef_[0]
terms
# # Exploration
#
# Let's spend some time familiarizing ourselves with the input data and the classifier. First, we'll examine the dataset in more detail. Then, we'll go over how to interact with the classifier through the vectorizer and `numpy`.
#
# ## Dataset
# +
# Load all malicious and benign traces from the ground truth dataset
maltraces, bentraces = gettraces()
print('num malicious traces: %d' % len(maltraces))
print('num benign traces: %d' % len(bentraces))
print('malicious trace path: %s' % maltraces[0])
print('benign trace path: %s' % bentraces[0])
# Let's see what a trace file looks like!
with open(maltraces[0]) as f:
maltrace = f.read()
print('\n# Sample Trace')
print(maltrace[:100] + '...')
print('len(trace): %d' % len(maltrace))
# i/o basics
# numpy basics
# classifier basics
# -
# So we have 1000 malicious and 1000 benign traces, which are simple text files. The trace files contain one [win32 API call](https://docs.microsoft.com/en-us/windows/win32/api/winreg/nf-winreg-regcreatekeyexa) per line. What is the distribution of system calls in this single malicious trace we've been analyzing?
def frequency(trace):
return Counter(trace.splitlines())
frequency(maltrace).most_common()
# And for a single benign trace?
# +
with open(bentraces[0]) as f:
bentrace = f.read()
frequency(bentrace).most_common()
# -
# What are some of the differences between these two traces? Post your answers in Group Chat!
#
# The `Counter` class lets us add two of them together to merge the counts. For example:
(frequency(maltrace) + frequency(bentrace)).most_common()
# ## Vectorizer, Numpy, and Classifier
#
# This particular classifier relies on a `vectorizer` that turns raw text---like our traces---into a numerical feature vector. This allows us to use our typical machine learning algorithms on text data. Let's create a matrix containing two feature vectors containing the malicious and benign traces from above.
# +
A = vectorizer.transform([maltrace, bentrace])
print(A.shape)
A.todense()
# -
# So we see that `vectorizer.transform` takes a list of traces (the raw text from the file, _not_ the path) as input and returns a matrix of values where each row corresponds to a trace and the columns represent the vectorized text, i.e., our features. This means our dataset contains 16 features.
#
# Since we now have a matrix of values, we can do simple matrix arithmetic to manipulate the feature vectors. For those not familiar with the library `numpy` (here `import numpy as np`), we can construct `n`-dimensional arrays with `np.array`. For example:
a = np.array(range(16))
print(a.shape)
print(a)
# Creates a 1-dimensional array (a vector) of the values 0--15. We can now do vectorized arithmetic to affect all the values in a matrix without having to waste time looping through each row. For example, the snippet below adds 1 to every other column in the feature vector matrix.
A + np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
# So now that we have a matrix, how do we use the classifier? Our classifier was built above and stored as `clf`. We can use the `clf.predict()` method to predict whether a vectorized trace is from a malicious or benign program. Note that `1` means malicious and `0` means benign.
clf.predict(A)
# This means our first row was classified as malicious and our second row was classified as benign. Considering `A` was constructed by running `vectorizer.transform([maltrace, bentrace])`, this makes perfect sense! Since `A` is just a matrix of values, we can also classify the manipulation of these matrices as we had done above. For example, let's add a vector to `A` and see how it impacts the classifier's output:
clf.predict(A + np.array(range(16)))
# Notice that this affected the classification result! Both are now considered malicious. Soon, we'll figure out how to do this in the other direction :). But first...
# # Warm Up
#
# Complete the following exercises to demonstrate familiarity with the dataset:
#
# * How many _unique_ system calls are there in this dataset in total?
# * What is the most common system call? For malicious traces only? What about benign?
# * Complete the function `get_feature_vector(path, vectorizer)` that given a path to a trace and the example vectorizer, returns the feature vector as a matrix.
#
# Post your answers to the Group Chat!
# +
# TODO: delete
mal_counts = Counter()
for path in maltraces:
with open(path) as f:
trace = f.read()
mal_counts += frequency(trace)
ben_counts = Counter()
for path in bentraces:
with open(path) as f:
trace = f.read()
ben_counts += frequency(trace)
all_counts = mal_counts + ben_counts
print('# system calls: %d' % len(all_counts))
print('most common (all):', all_counts.most_common()[0])
print('most common (mal):', mal_counts.most_common()[0])
print('most common (ben):', ben_counts.most_common()[0])
print('least common (all):', all_counts.most_common()[-1])
print('least common (mal):', mal_counts.most_common()[-1])
print('least common (ben):', ben_counts.most_common()[-1])
# ODOT
# TODO: Change to pass
def get_feature_vector(path, vectorizer):
with open(path) as f:
return vectorizer.transform([f.read()])
# -
# # Direct Attack
#
# You (somehow) have direct access to the feature vectors. Don't ask how, celebrate! Let's try to figure out the difference between malicious and benign feature vectors.
# ### Retrieve and Examine Vectors
#
# Let's dig into these features vectors. We can understand them better by comparing the feature vectors for one malicious trace, one benign trace, and their difference.
mal_fv = get_feature_vector(maltraces[0], vectorizer)
ben_fv = get_feature_vector(bentraces[0], vectorizer)
print('malicious\t\t\tbenign\t\t\t\t\tmalicious - benign')
diff_fv = mal_fv - ben_fv
for mal, ben, diff in reversed(list(zip(str(mal_fv).split('\n'),
str(ben_fv).split('\n'),
reversed(str(diff_fv).split('\n'))))):
print('%s\t%s\t\t%s' % (mal, ben, diff))
# What are some ways you could make this malicious feature vector "look" more like the benign one? Do you believe these feature vectors are representative of the two distributions (malicious vs. benign)? Why or why not? Post your answers in Group Chat!
#
# Based on what we discussed, modify the `feature_vector_malicious_to_benign` function below such that it transforms a malicious feature vector into a benign feature vector.
def feature_vector_malicious_to_benign(fv):
"""This adds 1 to all indices in a malicious vector that are smaller than the benign vector to
transform it into a vector that will classify as benign."""
delta = np.array([0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]) # TODO: Change to 0... in notebook. Delete function comment.
return fv + delta
print(clf.predict(mal_fv))
print(clf.predict(feature_vector_malicious_to_benign(mal_fv)))
# Great! We have successfully switched one trace from being classified as malicious (1) to benign (0). Let's see if our function works on _all_ of the malicious samples now. Run the cell below (and replace `feature_vector_malicious_to_benign` with your function if you gave it a different name). Are all the mutated ones classified as benign (0)? If not, tweak `feature_vector_malicious_to_benign` until they are.
# +
def mutate_all_malicious(mutate_fn):
maltraces, _ = gettraces()
A = vectorizer.transform([open(x).read() for x in maltraces])
A_mutated = mutate_fn(A)
print('original:', Counter(clf.predict(A)))
print('mutated: ', Counter(clf.predict(A_mutated)))
mutate_all_malicious(feature_vector_malicious_to_benign)
# -
# Huzzah! Our malware is now undetectable!
# ## Indirect Attack Part 1 (Find terms)
#
# So we've made our malware undetectable (above), but we did so by directly manipulating the feature vector. This assumes a powerful and/or dedicated attacker and we're a bit lazy. How can we alter our malware's _behavior_ such that our malware is classified as benign software? Let's identify `terms` that are more likely to be associated with benign software than malicious software. We can do this a couple of different ways. One is to look at the distribution of these system calls in malicious vs. benign traces:
# +
# Get counts for each trace
mal_counts = Counter()
for path in maltraces:
with open(path) as f:
trace = f.read()
mal_counts += frequency(trace)
ben_counts = Counter()
for path in bentraces:
with open(path) as f:
trace = f.read()
ben_counts += frequency(trace)
# Build DataFrame
mal_syscalls, mal_freq = zip(*mal_counts.items())
ben_syscalls, ben_freq = zip(*ben_counts.items())
df = pd.concat([pd.DataFrame({'syscall': mal_syscalls, 'count': mal_freq, 'classlabel': 'malicious'}),
pd.DataFrame({'syscall': ben_syscalls, 'count': ben_freq, 'classlabel': 'benign'})])
# Plot it!
p = seaborn.barplot(x='count', y='syscall', hue='classlabel', data=df)
plt.tight_layout()
plt.show()
# -
# Notice how different the frequencies are between malicious and benign execution traces! Which one of the system calls occurs in roughly the same number of benign and malicious traces? If you wanted to make your malware appear to be benign, which system calls would you inject? Post your answers in Group Chat!
#
# It is important to note a few things:
# * This example is synthetic. It's unlikely the differences will be this extreme!
# * This is _not_ directly related to the model we are attacking!
#
# How can we better understand what the _model_ believes are system calls that indicate malicious vs. benign behavior? Let's dig into the classifier's _coefficients_ and its terms. Defined above are `clf.coef_` and `terms`, which show the model's coefficients for each system call, respectively. The 0th coefficient corresponds to the 0th terms, etc., so let's examine these directly:
sorted(list(zip(clf.coef_[0], terms)))
# How do these values correspond to the counts we saw in the previous plot? If you were trying to make a malicious binary appear benign, which system calls would you force your malware to call? Why? Post your thoughts in Group Chat!
# ## Indirect Attack Part 2 (Adversarial Sample Generation)
# Based on what we learned above, modify the `gen_adversarial_sample` function below to have `syscall` default to one that is more associated with benign traces than malicious traces.
def gen_adversarial_sample(path, syscall='ntreadfile'): # TODO: Change to 'ntwritefile' in notebook
with open(path) as f:
s = f.read()
numsyscalls = len(s.split())
s_benign = s + '\n' + '\n'.join(itertools.repeat(syscall, numsyscalls))
return s, s_benign
s, s_benign = gen_adversarial_sample('../data/01-monotonic-classifiers/malicious-traces/0999.trace')
print(len(s))
print(len(s_benign))
# It works!
clf.predict(vectorizer.transform([s, s_benign]))
orig = []
mutated = []
for maltrace in maltraces:
mal, adv = gen_adversarial_sample(maltrace)
malvec = vectorizer.transform([mal])
advvec = vectorizer.transform([adv])
orig.append(clf.predict(malvec))
mutated.append(clf.predict(advvec))
print(Counter([x[0] for x in orig]))
print(Counter([x[0] for x in mutated]))
# Not only does it work on a single example, but it breaks everything we saw in the ground truth! Some questions:
#
# * Why was the default syscall `'ntwritefile'`?
# * Describe how `gen_adversarial_sample` works. How would you defend against this if this was one of your models?
#
# Post your answers to Group Chat!
# ## Extra Credit: Make Classifier Monotonic
#
# A simple way to make a classifier monotonic for our purposes is to not allow attackers to abuse negative coefficient features. Add a vector to `clf.coef_` such that there are no longer negative coefficients, and demonstrate that this defeats your adversarial generation function from the previous exercise.
# We can accomplish this by:
# * Identify the index in `clf.coef_` for the feature you abused
# * Set its weight to `0.0`
# * Rerun our classification examples from above
clf.coef_[0][clf.coef_.argmin()] = 0.0
# +
maltraces, bentraces = gettraces()
orig = []
mutated = []
benign = []
for maltrace in maltraces:
mal, adv = gen_adversarial_sample(maltrace)
malvec = vectorizer.transform([mal])
advvec = vectorizer.transform([adv])
orig.append(clf.predict(malvec))
mutated.append(clf.predict(advvec))
print('Original malicious traces')
print(Counter([x[0] for x in orig]))
print('Adversarial malicious traces')
print(Counter([x[0] for x in mutated]))
# What about the benign examples?
for bentrace in bentraces:
benvec = get_feature_vector(bentrace, vectorizer)
benign.append(clf.predict(benvec))
print('Original benign traces')
print(Counter([x[0] for x in benign]))
# -
# So it works, but we more or less nullified the utility of the feature aside from its interactions with other variables when constructing the model. Sometimes keeping this is desirable, but it could suggest that this feature provides more drawbacks than benefits. This further demonstrates the importance of feature engineering: if the features are useful, but when known can be weaponized by attackers, perhaps it is better to sacrifice accuracy in order to have more resilient models.
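# The quick'n'dirty clamp above generalizes to zeroing *all* negative
# coefficients at once; a minimal numpy sketch (toy weights, not the fitted
# model):

```python
import numpy as np

coef = np.array([[0.9, -0.4, 0.2, -1.1]])  # toy row, shaped like clf.coef_

# Clamp every negative coefficient to zero so no single feature can be
# padded by an attacker to drive the malicious score down.
monotone_coef = np.clip(coef, 0.0, None)
```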
| solutions/Monotonic Classifiers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import sympy as sym
# %matplotlib notebook
# +
def average_fecundity(U, T, R, P, S):
return 2 * ((R + P - T - S) * U**2 + (T + S - 2 * P) * U + P)
def equilibrium_coordinators_share(U, T, R, P, S):
F = average_fecundity(U, T, R, P, S)
return 2 * ((R - S) * U**2 + S * U) / F
# -
U = sym.symbols('U', real=True, nonnegative=True)
T, R, P, S = sym.symbols('T, R, P, S', real=True, positive=True)
x = equilibrium_coordinators_share(U, T, R, P, S)
first_derivative = sym.lambdify((U, T, R, P, S), sym.diff(x, U, 1), modules=["numpy"])
second_derivative = sym.lambdify((U, T, R, P, S), sym.diff(x, U, 2), modules=["numpy"])
_, ax = plt.subplots(1,1)
Us = np.logspace(-6, 0, 1000)
payoffs = [6, 2, 4, 1]
xs = equilibrium_coordinators_share(Us, *payoffs)
x_primes = first_derivative(Us, *payoffs)
x_prime_primes = second_derivative(Us, *payoffs)
ax.plot([0,1], [0,1], "k--")
ax.plot(xs, Us)
ax.plot(xs, x_primes, label=r"$\frac{\partial x^*}{\partial U}$")
ax.plot(xs, x_prime_primes, label=r"$\frac{\partial^2 x^*}{\partial U^2}$")
ax.set_xlabel(r"$x^*$")
ax.set_ylabel(r"$U(x^*)$", rotation="horizontal")
ax.set_title("T={}, R={}, P={}, S={}".format(*payoffs))
ax.legend()
first_derivative(0, *payoffs), first_derivative(1, *payoffs)
# slope of the L-locus at x=0, U=0
1 / sym.diff(x, U).subs({U: 0})
# slope of the L-locus at x=1, U=1
1 / sym.together(sym.diff(x, U).subs({U: 1}))
sym.factor(sym.diff(x, U), U)
numerator, denominator = sym.fraction(sym.factor(sym.diff(x, U), U))
# always positive!
denominator
numerator
# Numerator polynomial opens up $\iff PS + RT > 2 PR$
numerator_poly = sym.poly(numerator, U)
sym.factor(numerator_poly.discriminant())
# Discriminant of the numerator polynomial will be negative (implying polynomial has imaginary roots!) $\iff$ $\frac{P}{S} < \frac{T}{R}$
sym.solve(numerator_poly, U)
# Sufficient conditions for first derivative to be strictly positive are that numerator polynomial opens up and discriminant of the polynomial is negative.
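# The sufficient conditions can also be spot-checked numerically. A sketch with
# a hypothetical payoff set satisfying both conditions, using central finite
# differences instead of the symbolic derivative:

```python
import numpy as np

def equilibrium_coordinators_share(U, T, R, P, S):
    F = 2 * ((R + P - T - S) * U**2 + (T + S - 2 * P) * U + P)
    return 2 * ((R - S) * U**2 + S * U) / F

# Hypothetical payoffs satisfying both sufficient conditions:
# P*S + R*T > 2*P*R (numerator polynomial opens up) and
# P/S < T/R (negative discriminant, so no real roots).
T, R, P, S = 5.0, 1.0, 2.0, 1.0
assert P * S + R * T > 2 * P * R
assert P / S < T / R

U = np.linspace(1e-4, 1 - 1e-4, 500)
h = 1e-6
x_prime = (equilibrium_coordinators_share(U + h, T, R, P, S)
           - equilibrium_coordinators_share(U - h, T, R, P, S)) / (2 * h)
```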
second_derivative(0, *payoffs), second_derivative(1, *payoffs)
# $$-U\,(S + U(R - S))\,(-2P + S + T + 2U(P + R - S - T)) + (S + 2U(R - S))\,\big(P + U^2(P + R - S - T) + U(-2P + S + T)\big) > 0$$
sym.simplify(sym.diff(x, U, 2).subs({U: 0}))
sym.simplify(sym.diff(x, U, 2).subs({U: 1}))
sym.together(sym.diff(x, U, 2))
numerator, denominator = sym.fraction(sym.simplify(sym.diff(x, U, 2)))
# always positive!
denominator
numerator_poly = sym.poly(numerator, U)
sym.factor(numerator_poly.discriminant())
sym.solve(numerator_poly, U)
| Untitled2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (tensorflow)
# language: python
# name: tensorflow
# ---
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# # T81-558: Applications of Deep Neural Networks
# **Module 8: Kaggle Data Sets**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Module 8 Material
#
# * Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=v4lJBhdCuCU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_1_kaggle_intro.ipynb)
# * Part 8.2: Building Ensembles with Scikit-Learn and Keras [[Video]](https://www.youtube.com/watch?v=LQ-9ZRBLasw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_2_keras_ensembles.ipynb)
# * Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters [[Video]](https://www.youtube.com/watch?v=1q9klwSoUQw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_keras_hyperparameters.ipynb)
# * **Part 8.4: Bayesian Hyperparameter Optimization for Keras** [[Video]](https://www.youtube.com/watch?v=sXdxyUCCm8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb)
# * Part 8.5: Current Semester's Kaggle [[Video]](https://www.youtube.com/watch?v=PHQt0aUasRg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_5_kaggle_project.ipynb)
#
# # Google CoLab Instructions
#
# The following code ensures that Google CoLab is running the correct version of TensorFlow.
# +
# Startup Google CoLab
try:
# %tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# -
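# As a quick sanity check of the helper above (redefined here so the cell stands alone), 3,661.5 seconds formats as 1:01:01.50:
# +
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)

print(hms_string(3661.5))  # 1:01:01.50
# -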
# # Part 8.4: Bayesian Hyperparameter Optimization for Keras
#
# Bayesian Hyperparameter Optimization is a method of finding hyperparameters in a more efficient way than a grid search. Because each candidate set of hyperparameters requires a retraining of the neural network, it is best to keep the number of candidate sets to a minimum. Bayesian Hyperparameter Optimization achieves this by training a model to predict good candidate sets of hyperparameters.
#
# <NAME>., <NAME>., & <NAME>. (2012). [Practical bayesian optimization of machine learning algorithms](https://arxiv.org/pdf/1206.2944.pdf). In *Advances in neural information processing systems* (pp. 2951-2959).
#
#
# * [bayesian-optimization](https://github.com/fmfn/BayesianOptimization)
# * [hyperopt](https://github.com/hyperopt/hyperopt)
# * [spearmint](https://github.com/JasperSnoek/spearmint)
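# All three libraries share the same basic shape: an objective function of named numeric parameters, a dictionary of bounds, and a maximize loop that returns the best target value and parameters found. The following toy stand-in (plain random search, not true Bayesian optimization; the names `maximize` and `pbounds` are only illustrative of that interface) sketches the idea:
# +
import random

def maximize(objective, pbounds, n_iter=200, seed=1):
    # Sample each named parameter uniformly within its bounds; keep the best.
    rng = random.Random(seed)
    best = {"target": float("-inf"), "params": None}
    for _ in range(n_iter):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in pbounds.items()}
        score = objective(**params)
        if score > best["target"]:
            best = {"target": score, "params": params}
    return best

# Toy objective with a known optimum at x = 2.
best = maximize(lambda x: -(x - 2) ** 2, {"x": (0.0, 4.0)})
print(best)
# -
# A real Bayesian optimizer replaces the uniform sampling with a surrogate model that proposes promising candidates, which is what allows it to get by with far fewer evaluations.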
# +
# Ignore useless W0819 warnings generated by TensorFlow 2.0.
# Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
# -
# Now that we've preprocessed the data, we can begin the hyperparameter optimization. We start by creating a function that generates the model based on just three parameters. Bayesian optimization works on a vector of numbers, not on a structural description such as the number of layers and the number of neurons on each layer. To represent this structure as a vector, we use several numbers to describe it.
#
# * **dropout** - The dropout percent for each layer.
# * **neuronPct** - What percent of a fixed maximum of 5,000 neurons do we wish to use? This percent sets the neuron count of the first hidden layer.
# * **neuronShrink** - Neural networks usually start with more neurons on the first hidden layer and then decrease this count for additional layers. This percent specifies how much to shrink each subsequent layer relative to the previous one. Once we run out of neurons (starting from the count specified by neuronPct), we stop adding more layers.
#
# These three numbers define the structure of the neural network. The comments in the code below show exactly how the program constructs the network.
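# The layer sizes implied by a given pair of values can be previewed without building a model. This small helper (hypothetical; it simply mirrors the construction loop used below) lists the hidden-layer sizes:
# +
def layer_sizes(neuronPct, neuronShrink, max_neurons=5000,
                min_neurons=25, max_layers=10):
    # Start at neuronPct * max_neurons, shrink each layer by neuronShrink,
    # and stop once below min_neurons or past max_layers layers.
    sizes = []
    count = neuronPct * max_neurons
    while count > min_neurons and len(sizes) < max_layers:
        sizes.append(int(count))
        count *= neuronShrink
    return sizes

print(layer_sizes(0.1, 0.25))  # [500, 125, 31]
# -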
# +
import pandas as pd
import os
import numpy as np
import time
import tensorflow.keras.initializers
import statistics
import tensorflow.keras
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, InputLayer
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import StratifiedShuffleSplit
from tensorflow.keras.layers import LeakyReLU,PReLU
from tensorflow.keras.optimizers import Adam
def generate_model(dropout, neuronPct, neuronShrink):
# We start with some percent of 5000 starting neurons on the first hidden layer.
neuronCount = int(neuronPct * 5000)
# Construct neural network
# kernel_initializer = tensorflow.keras.initializers.he_uniform(seed=None)
model = Sequential()
# So long as there would have been at least 25 neurons and fewer than 10
# layers, create a new layer.
layer = 0
while neuronCount>25 and layer<10:
        # The first (0th) layer needs input_dim set to the input feature count
if layer==0:
model.add(Dense(neuronCount,
input_dim=x.shape[1],
activation=PReLU()))
else:
model.add(Dense(neuronCount, activation=PReLU()))
layer += 1
# Add dropout after each hidden layer
model.add(Dropout(dropout))
# Shrink neuron count for each layer
neuronCount = neuronCount * neuronShrink
model.add(Dense(y.shape[1],activation='softmax')) # Output
return model
# -
# We can test this code to see how it creates a neural network based on three such parameters.
# Generate a model and see what the resulting structure looks like.
model = generate_model(dropout=0.2, neuronPct=0.1, neuronShrink=0.25)
model.summary()
# Now we create a function to evaluate the neural network using these three parameters plus the learning rate. We use bootstrapping because a single training run might simply have "bad luck" with the randomly initialized weights. We use this function to train and then evaluate the neural network.
# +
def evaluate_network(dropout,lr,neuronPct,neuronShrink):
SPLITS = 2
# Bootstrap
boot = StratifiedShuffleSplit(n_splits=SPLITS, test_size=0.1)
# Track progress
mean_benchmark = []
epochs_needed = []
num = 0
# Loop through samples
for train, test in boot.split(x,df['product']):
start_time = time.time()
num+=1
# Split train and test
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = generate_model(dropout, neuronPct, neuronShrink)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=lr))
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=100, verbose=0, mode='auto', restore_best_weights=True)
# Train on the bootstrap sample
model.fit(x_train,y_train,validation_data=(x_test,y_test),
callbacks=[monitor],verbose=0,epochs=1000)
epochs = monitor.stopped_epoch
epochs_needed.append(epochs)
# Predict on the out of boot (validation)
pred = model.predict(x_test)
# Measure this bootstrap's log loss
y_compare = np.argmax(y_test,axis=1) # For log loss calculation
score = metrics.log_loss(y_compare, pred)
mean_benchmark.append(score)
m1 = statistics.mean(mean_benchmark)
m2 = statistics.mean(epochs_needed)
mdev = statistics.pstdev(mean_benchmark)
# Record this iteration
time_took = time.time() - start_time
tensorflow.keras.backend.clear_session()
return (-m1)
# -
# You can try any combination of our three hyperparameters, plus the learning rate, to see how effective these four numbers are. Of course, our goal is not to manually choose different combinations of these four hyperparameters; we seek to automate.
print(evaluate_network(
dropout=0.2,
lr=1e-3,
neuronPct=0.2,
neuronShrink=0.2))
# We will now automate this process. We define the bounds for each of these four hyperparameters and begin the Bayesian optimization. Once the program completes, the best combination of hyperparameters found is displayed.
# +
from bayes_opt import BayesianOptimization
import time
# Suppress NaN warnings
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Bounded region of parameter space
pbounds = {'dropout': (0.0, 0.499),
'lr': (0.0, 0.1),
'neuronPct': (0.01, 1),
'neuronShrink': (0.01, 1)
}
optimizer = BayesianOptimization(
f=evaluate_network,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum
# is observed, verbose = 0 is silent
random_state=1,
)
start_time = time.time()
optimizer.maximize(init_points=10, n_iter=100,)
time_took = time.time() - start_time
print(f"Total runtime: {hms_string(time_took)}")
print(optimizer.max)
# -
# {'target': -0.6500334282952827, 'params': {'dropout': 0.12771198428037775, 'lr': 0.0074010841641111965, 'neuronPct': 0.10774655638231533, 'neuronShrink': 0.2784788676498257}}
| t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Home 4: Build a CNN for image recognition.
#
# ### Name: [<NAME>]
#
# ## 0. You will do the following:
#
# 1. Read, complete, and run the code.
#
# 2. **Make substantial improvements** to maximize the accuracy.
#
# 3. Convert the .IPYNB file to .HTML file.
#
# * The HTML file must contain the code and the output after execution.
#
# * Missing **the output after execution** will not be graded.
#
# 4. Upload this .HTML file to your Google Drive, Dropbox, or Github repo. (If you submit the file to Google Drive or Dropbox, you must make the file "open-access". The delay caused by "deny of access" may result in late penalty.)
#
# 5. Submit the link to this .HTML file to Canvas.
#
# * Example: https://github.com/wangshusen/CS583-2020S/blob/master/homework/HM4/HM4.html
#
#
# ## Requirements:
#
# 1. You can use any CNN architecture, including VGG, Inception, and ResNet. However, you must build the networks layer by layer. You must NOT import the architectures from ```keras.applications```.
#
# 2. Make sure ```BatchNormalization``` is between a ```Conv```/```Dense``` layer and an ```activation``` layer.
#
# 3. If you want to regularize a ```Conv```/```Dense``` layer, you should place a ```Dropout``` layer **before** the ```Conv```/```Dense``` layer.
#
# 4. An accuracy above 70% is considered reasonable. An accuracy above 80% is considered good. Without data augmentation, achieving 80% accuracy is difficult.
#
#
# ## Google Colab
#
# - If you do not have GPU, the training of a CNN can be slow. Google Colab is a good option.
#
# - Keep in mind that you must download it as an IPYNB file and then use IPython Notebook to convert it to HTML.
#
# - Also keep in mind that the IPYNB and HTML files must contain the outputs. (Otherwise, the instructor will not be able to know the correctness and performance.) Do the following to keep the outputs.
#
# - In Colab, go to ```Runtime``` --> ```Change runtime type``` --> Do NOT check ```Omit code cell output when saving this notebook```. In this way, the downloaded IPYNB file contains the outputs.
# ## 1. Data preparation
# ### 1.1. Load data
#
# +
from keras.datasets import cifar10
import numpy
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('shape of x_train: ' + str(x_train.shape))
print('shape of y_train: ' + str(y_train.shape))
print('shape of x_test: ' + str(x_test.shape))
print('shape of y_test: ' + str(y_test.shape))
print('number of classes: ' + str(numpy.max(y_train) - numpy.min(y_train) + 1))
# -
# ### 1.2. One-hot encode the labels
#
# In the input, a label is a scalar in $\{0, 1, \cdots , 9\}$. One-hot encoding transforms such a scalar to a $10$-dim vector. E.g., a scalar ```y_train[j]=3``` is transformed to the vector ```y_train_vec[j]=[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]```.
#
# 1. Define a function ```to_one_hot``` that transforms an $n\times 1$ array to an $n\times 10$ matrix.
#
# 2. Apply the function to ```y_train``` and ```y_test```.
# +
def to_one_hot(y, num_class=10):
result = numpy.zeros(shape=(y.shape[0],num_class))
for i in range(y.shape[0]):
result[i][y[i]]=1
return result
y_train_vec = to_one_hot(y_train)
y_test_vec = to_one_hot(y_test)
print('Shape of y_train_vec: ' + str(y_train_vec.shape))
print('Shape of y_test_vec: ' + str(y_test_vec.shape))
print(y_train[0])
print(y_train_vec[0])
# -
# #### Remark: the outputs should be
# * Shape of y_train_vec: (50000, 10)
# * Shape of y_test_vec: (10000, 10)
# * [6]
# * [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
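# The loop above is easy to follow, but the same encoding can also be written as a single indexing operation: row $i$ of the $10\times 10$ identity matrix is exactly the one-hot vector for class $i$. An equivalent vectorized alternative (the name ```to_one_hot_vec``` is just illustrative):
# +
import numpy

def to_one_hot_vec(y, num_class=10):
    # Index identity-matrix rows by the class labels.
    return numpy.eye(num_class)[y.reshape(-1)]

print(to_one_hot_vec(numpy.array([[6], [3]])))
# -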
# ### 1.3. Randomly partition the training set to training and validation sets
#
# Randomly partition the 50K training samples to 2 sets:
# * a training set containing 40K samples
# * a validation set containing 10K samples
#
# +
rand_indices = numpy.random.permutation(50000)
train_indices = rand_indices[0:40000]
valid_indices = rand_indices[40000:50000]
x_val = x_train[valid_indices, :]
y_val = y_train_vec[valid_indices, :]
x_tr = x_train[train_indices, :]
y_tr = y_train_vec[train_indices, :]
print('Shape of x_tr: ' + str(x_tr.shape))
print('Shape of y_tr: ' + str(y_tr.shape))
print('Shape of x_val: ' + str(x_val.shape))
print('Shape of y_val: ' + str(y_val.shape))
# -
# ## 2. Build a CNN and tune its hyper-parameters
#
# 1. Build a convolutional neural network model
# 2. Use the validation data to tune the hyper-parameters (e.g., network structure, and optimization algorithm)
# * Do NOT use test data for hyper-parameter tuning!!!
# 3. Try to achieve a validation accuracy as high as possible.
# ### Remark:
#
# The following CNN is just an example. You are supposed to make **substantial improvements** such as:
# * Add more layers.
# * Use regularizations, e.g., dropout.
# * Use batch normalization.
# +
from keras.layers import *
from keras.models import Model, Sequential
from keras.preprocessing.image import ImageDataGenerator
# Image generator for training
def make_generator(X):
gen = ImageDataGenerator(
rotation_range=40,
zoom_range=0.2,
shear_range=0.2,
width_shift_range=0.2,
height_shift_range=0.2,
fill_mode='nearest',
horizontal_flip=True,
)
gen.fit(X)
return gen
# Create a generator for the training data
gen = make_generator(x_tr)
def add_skip_connection(model, input_shape=None):
input_shape = input_shape or model.output_shape[1:]
# Input to the residual block
x = Input(shape=input_shape)
trace = Sequential()
trace.add(BatchNormalization(input_shape=input_shape))
trace.add(Activation('relu'))
trace.add(Conv2D(64, kernel_size=(3, 3), padding='same'))
trace.add(BatchNormalization())
trace.add(Activation('relu'))
trace.add(Conv2D(64, kernel_size=(3, 3), padding='same'))
y = trace(x)
residual_block = Model(x, Add()([x, y]))
model.add(residual_block)
def make_model():
model = Sequential()
model.add(Conv2D(64, (7, 7), padding='same', input_shape=(32, 32, 3)))
# We will reduce the dimensionality 3 times
for _ in range(3):
        # At each level, we will have two residual blocks
for _ in range(2):
add_skip_connection(model)
# We will also reduce to half size
model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
return model
model = make_model()
model.summary()
# +
from keras import optimizers
learning_rate = 2E-4 # to be tuned!
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=learning_rate),
metrics=['acc'])
# -
history = model.fit_generator(gen.flow(x_tr, y_tr, batch_size=32),
steps_per_epoch=len(x_tr) // 32, epochs=10,
validation_data=(x_val, y_val))
# +
import matplotlib.pyplot as plt
# %matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# -
# ## 3. Train (again) and evaluate the model
#
# - By this point, you have found the "best" hyper-parameters.
# - Now, fix the hyper-parameters and train the network on the entire training set (all the 50K training samples)
# - Evaluate your model on the test set.
# ### 3.1. Train the model on the entire training set
#
# Why? Previously, you used 40K samples for training; you wasted 10K samples for the sake of hyper-parameter tuning. Now you already know the hyper-parameters, so why not use all the 50K samples for training?
history = model.fit(x_train, y_train_vec, batch_size=32, epochs=10)
# ### 3.2. Evaluate the model on the test set
#
# Do NOT use the test set until now. Make sure that your model parameters and hyper-parameters are independent of the test set.
loss_and_acc = model.evaluate(x_test, y_test_vec)
print('loss = ' + str(loss_and_acc[0]))
print('accuracy = ' + str(loss_and_acc[1]))
| homework/HM4/Marco Vlajnic_HW#4_CS583.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Intel, 2018 update 2)
# language: python
# name: c009-intel_distribution_of_python_3_2018u2
# ---
# # Convolutional Neural Networks & Transfer Learning For Acute Myeloid Leukemia Classification
# 
#
#
# # Abstract
#
# Acute Myeloid Leukemia (AML) [1] is a rare and very aggressive form of Leukemia. With this type of Leukemia early detection is crucial, but there are currently no ways to screen for AML; there are, however, symptoms that give warning [2].
#
# This project shows how we can use transfer learning and an existing image classification model, specifically Inception V3, to create a Deep Learning model that can classify Acute Myeloid Leukemia positive and negative lymphocytes in images.
#
# ## Acute Myeloid Leukemia (AML)
#
# Although Acute Myeloid Leukemia (AML) is one of the more common forms of Leukemia, it is still a relatively rare disease overall; it is more common in adults, but it affects children as well. AML is an aggressive Leukemia in which white blood cells mutate, attack and replace healthy red blood cells, effectively killing them.
#
# "About 19,520 new cases of acute myeloid leukemia (AML). Most will be in adults (United States)." [6]
#
# In comparison, about 180,000 women a year in the United States are diagnosed with Invasive Ductal Carcinoma (IDC), a type of breast cancer that forms in the breast duct and invades the surrounding areas [7].
#
# ## Acute Lymphoblastic Leukemia Image Database for Image Processing (ALL-IDB)
# 
# Figure 3. Samples of augmented data generated from the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset.
#
# The Acute Lymphoblastic Leukemia Image Database for Image Processing dataset is used for this project. The dataset was created by <NAME>, Associate Professor, Dipartimento di Informatica, Università degli Studi di Milano. Big thanks to Fabio for the research and time put into creating the dataset and documentation; it is one of his personal projects.
#
#
# ## The Acute Myeloid Leukemia (AML) Movidius Classifier
#
# The AML Movidius Classifier shows how to train a Convolutional Neural Network using TensorFlow [8] and transfer learning on a dataset of Acute Myeloid Leukemia negative and positive images, the Acute Lymphoblastic Leukemia Image Database for Image Processing [9]. The TensorFlow model is trained on the AI DevCloud [10], then converted to a format compatible with the Movidius NCS by freezing the TensorFlow model and running it through the NCSDK [11]. The model is then downloaded to an UP Squared and used for inference with the NCSDK.
#
# ## Convolutional Neural Networks
# 
# Figure 1. Inception v3 architecture ([Source](https://github.com/tensorflow/models/tree/master/research/inception)).
#
# Convolutional neural networks are a type of deep learning neural network. These networks are widely used in computer vision and have pushed its capabilities forward over the last few years, performing considerably better than older, more traditional neural networks; however, studies show that there are trade-offs between training time and accuracy.
#
#
# ## Transfer Learning
# 
# Figure 2. Inception V3 Transfer Learning ([Source](https://github.com/Hvass-Labs/TensorFlow-Tutorials)).
#
# Transfer learning allows you to retrain the final layer of an existing model, resulting in a significant decrease in not only training time, but also the size of the dataset required. One of the most famous models that can be used for transfer learning is the Inception V3 model created by Google. This model was trained on thousands of images from 1,001 classes on some very powerful devices. Being able to retrain the final layer means that you can maintain the knowledge that the model learned during its original training and apply it to your smaller dataset, resulting in highly accurate classifications without the need for extensive training and computational power.
#
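# To make the "retrain only the final layer" idea concrete, here is a minimal, hypothetical numpy sketch: a frozen random projection stands in for the pretrained convolutional stack, and only a final softmax layer is trained by gradient descent on a tiny synthetic two-class problem.
# +
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor (its weights are never updated).
W_frozen = rng.normal(size=(8, 4))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, t):
    return -np.mean(np.sum(t * np.log(p + 1e-12), axis=1))

# Tiny synthetic two-class dataset.
x = rng.normal(size=(100, 8))
t = np.eye(2)[(x[:, 0] > 0).astype(int)]

feats = np.tanh(x @ W_frozen)      # extracted once; the extractor stays fixed
W, b = np.zeros((4, 2)), np.zeros(2)

loss_start = cross_entropy(softmax(feats @ W + b), t)
for _ in range(200):               # train the final layer only
    p = softmax(feats @ W + b)
    grad = p - t                   # cross-entropy gradient w.r.t. the logits
    W -= 0.1 * feats.T @ grad / len(x)
    b -= 0.1 * grad.mean(axis=0)
loss_end = cross_entropy(softmax(feats @ W + b), t)

print(loss_start, loss_end)        # the loss falls without touching W_frozen
# -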
#
# # Hardware & Software
# Through my role as an Intel® Software Innovator, I get access to the latest Intel® technologies that help enhance my projects. In this particular part of the project I used Intel® technologies such as the Intel® AI DevCloud for data sorting and training, and an UP Squared with Intel® Movidius (NCS) for inference.
#
#
# # Interactive Tutorial
# This Notebook serves as an interactive tutorial that helps you set up your project, sort your data and train the Convolutional Neural Network.
#
#
# ## Prerequisites
# There are a few steps you need to take to set up your AI DevCloud project; these steps are outlined below:
#
#
# ### - Clone The Github Repo
# You need to clone the Acute Myeloid Leukemia Classifiers Github repo to your development machine. To do this open up a terminal and use __git clone__ to clone the AML Classifiers repo (__https://github.com/AMLResearchProject/AML-Classifiers.git__). Once you have cloned the repo you should navigate to __AML-Classifiers/Python/_Movidius/__ to find the related code, notebooks and tutorials.
#
# ### - Gain Access To ALL-IDB
# You need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset.
#
# ### - Data Augmentation
# Assuming you have received permission to use the Acute Lymphoblastic Leukemia Image Database for Image Processing, you should follow the related Notebook first to generate a larger training and testing dataset. Follow the AML Classifier [Data Augmentation Notebook](https://github.com/AMLResearchProject/AML-Classifiers/blob/master/Python/Augmentation.ipynb) to apply various filters to the dataset. If you have not been able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset.
#
# Data augmentations included are as follows...
#
# Done:
# - Grayscaling
# - Histogram Equalization
# - Reflection
# - Gaussian Blur
# - Rotation
#
# ToDo:
# - Shearing
# - Translation
#
# You can follow the progress of the data augmentation system on this [Github issue](https://github.com/AMLResearchProject/AML-Classifiers/issues/1).
#
# ### - Upload Project To AI DevCloud
#
# Now you need to upload the related project from the repo to the AI DevCloud. The directory you need to upload is __AML-Classifiers/Python/_Movidius/__. Once you have uploaded the project structure you need to upload your augmented dataset created in the previous step. Upload your data to the __0__ and __1__ directories in the __Model/Data/__ directory, you should also remove the init files from these directories.
#
# Once you have completed the above, navigate to this Notebook and continue the tutorial there.
#
# ## Prepare The Dataset
# Assuming you have uploaded your data, you now need to sort the data ready for the training process.
#
# ### Data Sorting Job
# You need to create a shell script (provided below) that is used to create a job for sorting your uploaded data on the AI DevCloud. Before you run the following block make sure you have followed all of the steps in __Upload Project To AI DevCloud__ above.
# %%writefile AML-DevCloud-Data
# cd $PBS_O_WORKDIR
# echo "* Compute server `hostname` on the AI DevCloud"
# echo "* Current directory ${PWD}."
# echo "* Compute server's CPU model and number of logical CPUs:"
lscpu | grep 'Model name\\|^CPU(s)'
# echo "* Python version:"
export PATH=/glob/intel-python/python3/bin:$PATH;
which python
python --version
# echo "* This job sorts the data for the AML Classifier on AI DevCloud"
python Data.py
sleep 10
# echo "*Adios"
# Remember to have an empty line at the end of the file; otherwise the last command will not run
# ## Check the data sorter job script was created
# %ls
# ## Submit the data sorter job
# !qsub AML-DevCloud-Data
# ## Check the status of the job
# !qstat
# ## Get more details about the job
# !qstat -f 8390
# ## Check for the output files
# %ls
# You should see output similar to the below in your .oXXXX file. You can ignore the error (.eXXXXX) file in this case unless you are having difficulties, in which case this file may contain important information.
#
# ```
# >> Converting image 347/348 shard 1
# 2018-12-23 08:36:57|convertToTFRecord|INFO: class_name: 0
# 2018-12-23 08:36:57|convertToTFRecord|INFO: class_id: 0
#
# >> Converting image 348/348 shard 1
# 2018-12-23 08:36:57|convertToTFRecord|INFO: class_name: 1
# 2018-12-23 08:36:57|convertToTFRecord|INFO: class_id: 1
#
# 2018-12-23 08:36:57|sortData|COMPLETE: Completed sorting data!
# *Adios
#
# ########################################################################
# # End of output for job 8390.c009
# # Date: Sun Dec 23 08:37:07 PST 2018
# ########################################################################
# ```
# # Training job
# Now it is time to create your training job. The script required is almost identical to the one created above; all we need to do is change the filename and the command-line argument.
# %%writefile AML-DevCloud-Trainer
# cd $PBS_O_WORKDIR
# echo "* Hello world from compute server `hostname` on the A.I. DevCloud!"
# echo "* The current directory is ${PWD}."
# echo "* Compute server's CPU model and number of logical CPUs:"
lscpu | grep 'Model name\\|^CPU(s)'
# echo "* Python available to us:"
export PATH=/glob/intel-python/python3/bin:$PATH;
which python
python --version
# echo "* This job trains the AML Classifier on the Colfax Cluster"
python Trainer.py
sleep 10
# echo "*Adios"
# Remember to have an empty line at the end of the file; otherwise the last command will not run
# # Check the training job script was created
# Now check that the trainer job script was created successfully by executing the following block which will print out the files located in the current directory. If all was successful, you should see the file "AML-DevCloud-Trainer". You can also open this file to confirm that the contents are correct.
# %ls
# # Submit the training job script
# Now it is time to submit your training job script, this will queue the training script ready for execution and return your job ID. In this command we set the walltime to 24 hours, which should give our script enough time to fully complete without getting killed.
# !qsub -l walltime=24:00:00 AML-DevCloud-Trainer
# # Check the status of the job
# !qstat
# ## Get more details about the job
# !qstat -f 8392
# # Check the results
# After training we should check the results of the output to see how our model did during training. In my case the job ID was 8392, so my output files were .e8392 for errors and .o8392 for program output.
#
# In my case I trained the network with 580 AML negative and 580 AML positive examples using the augmented dataset created in the previous tutorial, saving 20 of the original images for testing. The following is the end of the output from the training job:
#
# ```
# INFO:tensorflow:Final Loss: 0.76919967
# INFO:tensorflow:Final Accuracy: 0.9136111
# INFO:tensorflow:Finished training! Saving model to disk now.
# INFO:tensorflow:Restoring parameters from Model/_logs/model.ckpt-3181
# INFO:tensorflow:Froze 378 variables.
# ```
#
# The output shows that I have an overall training accuracy of 0.9136111.
#
# # Test the classifier
# 
#
# Now you need to download the created graph file to your NCS development machine. You need to have the full NCSDK installed, as opposed to the API-only installation.
#
# Once you have the graph on your NCS dev machine move it to your __Model__ directory and then you can issue the following command to generate a graph that is compatible with the NCS and save it to __Model/AML.graph__.
#
# ```
# mvNCCompile Model/AMLGraph.pb -in=input -on=InceptionV3/Predictions/Softmax -o Model/AML.graph
# ```
#
# Now you are able to test the classifier using the classification program and the test dataset you set aside earlier. Navigate to the __AML-Classifiers/Python/_Movidius__ directory and issue the following command:
#
# ```
# $ python3 Classifier.py InceptionTest
# ```
#
# The classifier will loop through the images in your test dataset classifying them as AML positive or negative.
# # References
#
# 1. [Acute Myeloid Leukemia (AML)](https://www.cancer.org/cancer/acute-myeloid-leukemia.html)
# 2. [Can Acute Myeloid Leukemia (AML) Be Found Early?](https://www.cancer.org/cancer/acute-myeloid-leukemia/detection-diagnosis-staging/detection.html)
# 3. [<NAME>ss Acute Myeloid Leukemia Research Movidius Classifier](https://github.com/AMLResearchProject/AML-Classifiers/tree/master/Python/_Movidius)
# 4. [<NAME> Acute Myeloid Leukemia Research Project](https://www.facebook.com/AMLResearchProject)
# 5. [<NAME> Acute Myeloid Leukemia Computer Vision Research and Development](https://github.com/AMLResearchProject/AML-Classifiers)
# 6. [Key Statistics for Acute Myeloid Leukemia (AML)](https://www.cancer.org/cancer/acute-myeloid-leukemia/about/key-statistics.html)
# 7. [Machine Learning and Mammography](https://software.intel.com/en-us/articles/machine-learning-and-mammography#inpage-nav-3)
# 8. [Tensorflow](https://www.tensorflow.org/)
# 9. [Acute Lymphoblastic Leukemia Image Database for Image Processing](https://homes.di.unimi.it/scotti/all/)
# 10. [Intel® AI DevCloud](https://software.intel.com/en-us/ai-academy/devcloud)
# 11. [NCSDK](https://github.com/movidius/ncsdk)
# # About the author
# Adam is a [Bigfinite](https://www.bigfinite.com "Bigfinite") IoT Network Engineer, part of the team that works on the core IoT software. In his spare time he is an [Intel Software Innovator](https://software.intel.com/en-us/intel-software-innovators/overview "Intel Software Innovator") in the fields of Internet of Things, Artificial Intelligence and Virtual Reality.
#
# [](https://github.com/AdamMiltonBarker)
| Python/_Movidius/Trainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Task Description
#
# - Design a classifier that performs handwritten digit classification on the MNIST dataset.
#
#
# # Data Description
#
# The [MNIST](http://yann.lecun.com/exdb/mnist/) dataset contains 60,000 training samples and 10,000 test samples.
#
# Each sample is a 28\*28 grayscale image:
# +
import numpy as np
import matplotlib.pyplot as plt
import struct
def decode_idx3_ubyte(idx3_ubyte_file):
bin_data = open(idx3_ubyte_file, 'rb').read()
offset = 0
fmt_header = '>iiii'
magic_number, num_images, num_rows, num_cols = \
struct.unpack_from(fmt_header, bin_data, offset)
print('Magic: %d, Total Images: %d, Size: %d*%d' %
(magic_number, num_images, num_rows, num_cols))
image_size = num_rows * num_cols
offset += struct.calcsize(fmt_header)
fmt_image = '>' + str(image_size) + 'B'
images = np.empty((num_images, num_rows, num_cols))
for i in range(num_images):
images[i] = np.array(
struct.unpack_from(fmt_image, bin_data, offset)) \
.reshape((num_rows, num_cols))
offset += struct.calcsize(fmt_image)
return images
def decode_idx1_ubyte(idx1_ubyte_file):
bin_data = open(idx1_ubyte_file, 'rb').read()
offset = 0
fmt_header = '>ii'
magic_number, num_images = \
struct.unpack_from(fmt_header, bin_data, offset)
print('Magic: %d, Total Images: %d' % (magic_number, num_images))
offset += struct.calcsize(fmt_header)
fmt_image = '>B'
labels = np.empty(num_images)
for i in range(num_images):
labels[i], = struct.unpack_from(fmt_image, bin_data, offset)
offset += struct.calcsize(fmt_image)
return labels
def prepare_dataset(images_file, labels_file, size, num=None):
images = decode_idx3_ubyte(images_file)
images = images.reshape((-1, size))
labels = decode_idx1_ubyte(labels_file)
labels = np.array(labels, dtype=int)
# shuffle
idx = np.arange(labels.shape[0])
np.random.shuffle(idx)
if num is not None:
idx = idx[:num]
images = images[idx]
labels = labels[idx]
return images, labels
train_images_idx3_ubyte_file = './mnist/train-images.idx3-ubyte'
train_labels_idx1_ubyte_file = './mnist/train-labels.idx1-ubyte'
test_images_idx3_ubyte_file = './mnist/t10k-images.idx3-ubyte'
test_labels_idx1_ubyte_file = './mnist/t10k-labels.idx1-ubyte'
SIZE = 28 * 28
np.random.seed(1)
train_images, train_labels = \
prepare_dataset(train_images_idx3_ubyte_file,
train_labels_idx1_ubyte_file, SIZE)
test_images, test_labels = \
prepare_dataset(test_images_idx3_ubyte_file,
test_labels_idx1_ubyte_file, SIZE)
plt.figure(figsize=(10, 2))
for i in range(4):
ax = plt.subplot(141 + i)
plt.imshow(train_images[i].reshape((28, 28)), cmap='binary')
ax.set_title(str(train_labels[i]))
plt.show()
# -
# # Data Preprocessing
#
# Map the grayscale values of the images into $[0,1]$:
train_images /= 255
test_images /= 255
# # Algorithms
#
# ## Convolutional Neural Network (CNN)
#
# ### How it works
#
# 
#
# The figure above shows the architecture of LeNet-5 (a type of CNN). The network consists of 5 layers, and its input is a 28\*28\*1 image (the last dimension is the channel). The layers are as follows:
#
# 1. Convolution + pooling
#
# Six 5\*5 convolution kernels with stride 1\*1 and zero-padding around the image, so the convolution output has the same size as the input. Max pooling of size 2\*2 with stride 2\*2 follows; the layer output is 14\*14\*6.
#
#
# 2. Convolution + pooling
#
# Sixteen 5\*5 convolution kernels with stride 1\*1 and no padding, giving a 10\*10\*16 convolution output. Max pooling of size 2\*2 with stride 2\*2 follows; the layer output is 5\*5\*16.
#
#
# 3. Fully connected
#
# The previous layer's output is flattened into a one-dimensional vector and passed through a fully connected layer with 120 neurons.
#
#
# 4. Fully connected
#
# This layer contains 84 neurons.
#
#
# 5. Fully connected
#
# This is the output layer, with 10 neurons.
#
#
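The layer output sizes quoted above follow from the standard convolution arithmetic, out = (in + 2·pad − kernel) / stride + 1; as a quick sanity check of the 28 → 14 → 10 → 5 chain:

```python
def conv_out(n, kernel, stride=1, pad=0):
    # standard convolution output-size formula
    return (n + 2 * pad - kernel) // stride + 1

def pool_out(n, size=2, stride=2):
    # max-pooling output size (no padding)
    return (n - size) // stride + 1

# layer 1: 'same' padding keeps 28, then 2x2 max pooling halves it
assert conv_out(28, 5, pad=2) == 28
assert pool_out(28) == 14
# layer 2: 5x5 kernel, no padding: (14 - 5) + 1 = 10, then pooling -> 5
assert conv_out(14, 5) == 10
assert pool_out(10) == 5
print("shapes check out")
```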
# ### Implementation
#
# This report implements the convolutional neural network (TF-CNN) with Tensorflow:
# +
import tensorflow as tf
class CNN:
def __init__(self, height=28, weight=28, n_channels=1,
n_filters_1=6, n_filters_2=16,
n_hidden_1=120, n_hidden_2=84, n_classes=10):
self.X = tf.placeholder(dtype=tf.float32,
shape=[None, height, weight, n_channels])
self.y = tf.placeholder(dtype=tf.float32,
shape=[None, n_classes])
self.training = False
# layer 1 - conv
# (N, 28, 28, 1) -> (N, 28, 28, 6) -> (N, 14, 14, 6)
x = tf.layers.conv2d(self.X, n_filters_1, [5, 5], padding='same')
x = tf.nn.relu(x)
self.conv1 = x # layer 1: convolution output
x = tf.layers.max_pooling2d(x, [2, 2], [2, 2])
self.max_pooling1 = x # layer 1: max pooling output
# layer 2 - conv
# (N, 14, 14, 6) -> (N, 10, 10, 16) -> (N, 5, 5, 16)
x = tf.layers.conv2d(x, n_filters_2, [5, 5])
x = tf.nn.relu(x)
self.conv2 = x # layer 2: convolution output
x = tf.layers.max_pooling2d(x, [2, 2], [2, 2])
self.max_pooling2 = x # layer 2: max pooling output
# flatten
x = tf.layers.flatten(x) # (N, 400)
# layer 3 - fully connected
x = tf.layers.dense(x, n_hidden_1) # (N, 120)
x = tf.nn.relu(x)
x = tf.layers.dropout(x, training=self.training)
# layer 4 - fully connected
x = tf.layers.dense(x, n_hidden_2) # (N, 84)
x = tf.nn.relu(x)
x = tf.layers.dropout(x, training=self.training)
# layer 5 - fully connected
x = tf.layers.dense(x, n_classes) # (N, 10)
self.pred = x
self.loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=self.y,
logits=self.pred))
optimizer = tf.train.AdamOptimizer()
grads_and_vars = optimizer.compute_gradients(self.loss)
self.train_op = optimizer.apply_gradients(grads_and_vars)
def channel_first(x):
h, w, c = x.shape
y = np.zeros((c, h, w))
for k in range(c):
for i in range(h):
for j in range(w):
y[k, i, j] = x[i, j, k]
return y
# visualization
def show(conv, max_pooling, pos, name, layer):
conv = channel_first(conv)
max_pooling = channel_first(max_pooling)
n, _, _ = conv.shape
for i in range(n):
ax = plt.subplot(6, n, n * pos * 2 + i + 1)
ax.imshow(conv[i] / np.max(conv[i]) * 255, cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
if i == 0:
ax.set_title(name + '@conv' + str(layer), fontsize='small')
for i in range(n):
ax = plt.subplot(6, n, n * pos * 2 + n + i + 1)
ax.imshow(max_pooling[i] / np.max(max_pooling[i]) * 255, cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
if i == 0:
ax.set_title(name + '@pool' + str(layer), fontsize='small')
def evaluate(model):
# evaluate
model.training = False
if test_flag:
conv1, max_pooling1, conv2, max_pooling2, pred = sess.run([
model.conv1, model.max_pooling1,
model.conv2, model.max_pooling2,
tf.argmax(model.pred, axis=-1)],
feed_dict={
model.X: test_images
})
all_set = [1, 43, 57]
plt.figure(figsize=(10, 2))
for i, N in enumerate(all_set):
ax = plt.subplot(1, 3, 1 + i)
ax.imshow(test_images[N].reshape((28, 28)), cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(chr(ord('A') + i), y=-0.2, fontsize='small')
plt.savefig('images/0.png', bbox_inches='tight')
plt.figure(figsize=(10, 11))
for i, N in enumerate(all_set):
show(conv1[N], max_pooling1[N], i, chr(ord('A') + i), 1)
plt.savefig('images/1.png', bbox_inches='tight')
plt.figure(figsize=(10, 5))
for i, N in enumerate(all_set):
show(conv2[N], max_pooling2[N], i, chr(ord('A') + i), 2)
plt.savefig('images/2.png', bbox_inches='tight')
else:
pred = sess.run(tf.argmax(model.pred, axis=-1), feed_dict={
model.X: test_images
})
correct = np.sum(pred == test_labels)
acc = correct / test_labels.shape[0]
print('accuracy:',
correct, '/', test_labels.shape[0],
'=', acc)
return acc
batch_size = 500
n_epochs = 100
test_flag = False # train/test switch
train_images = train_images.reshape((-1, 28, 28, 1))
test_images = test_images.reshape((-1, 28, 28, 1))
idx = np.arange(train_images.shape[0], dtype=int)
train_labels = np.eye(10)[train_labels]
best = 0
with tf.Graph().as_default():
tf.set_random_seed(1)
classifier = CNN()
saver = tf.train.Saver()
with tf.Session() as sess:
if test_flag:
saver.restore(sess, 'model/cnn.ckpt')
evaluate(classifier)
else:
sess.run(tf.global_variables_initializer())
for epoch in range(n_epochs):
print('epoch', epoch)
# train
classifier.training = True
np.random.shuffle(idx)
for k in range(0, train_images.shape[0], batch_size):
batch_idx = idx[k:min(k + batch_size,
train_images.shape[0])]
batch_image = train_images[batch_idx]
batch_label = train_labels[batch_idx]
sess.run(classifier.train_op, feed_dict={
classifier.X: batch_image,
classifier.y: batch_label
})
# evaluate
accuracy = evaluate(classifier)
if accuracy > best:
best = accuracy
saver.save(sess, 'model/cnn.ckpt')
print('saved')
print('-' * 30)
print('best:', best)
# -
# # Experimental Results and Analysis
#
# This report uses accuracy as the evaluation metric: accuracy is the ratio of the number of test samples the classifier labels correctly to the total size of the test set.
#
#
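As a concrete instance of the metric (toy labels chosen for illustration):

```python
import numpy as np

pred = np.array([1, 2, 2, 3])   # classifier outputs (toy values)
true = np.array([1, 2, 0, 3])   # ground-truth labels (toy values)
accuracy = np.sum(pred == true) / true.shape[0]
print(accuracy)  # 3 correct out of 4 -> 0.75
```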
# ## Results
#
# |Model|Accuracy|
# |:-:|:-:|
# |TF-MLP-NN-Sqr|0.9156|
# |TF-MLP-NN-BCE|0.9813|
# |TF-CNN|0.9919|
#
# *TF-MLP-NN-Sqr and TF-MLP-NN-BCE are the MLP-NNs implemented in the previous experiment with different activation and loss functions.*
#
#
# ## Analysis
#
# In the experiments, the predictive performance of TF-CNN exceeds that of the MLP-NN models TF-MLP-NN-BCE and TF-MLP-NN-Sqr.
#
#
# ## Visualization
#
# Three images of the digit 2 are taken from the test set to visualize the outputs after each of the two convolutional layers of TF-CNN:
#
# Original images:
#
# 
#
# First convolutional layer:
#
# 
#
# Second convolutional layer:
#
# 
#
# The convolutional layers perform feature extraction on the image. The visualizations show the digit 2 after each convolutional layer: after two convolutional layers, the original image has been abstracted into high-level features that the algorithm can recognize more easily, and passing these extracted features through the fully connected layers yields better predictions.
| 6. CNN/CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Uy5MoE99aESR"
# # Course Recommender with SVD based similarity
# > Applying SVD on education course dataset, storing in sqlite and wrapping in Flask REST api
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [Flask, SQLite, Education, SVD]
# - author: "<a href='https://github.com/Tsmith5151/user-recommender'><NAME></a>"
# - image:
# + [markdown] id="xBxkLnWf39V2"
# ## Setup
# + id="LjlfOLxQDWk5"
import os
import yaml
import copy
import json
import sqlite3
import logging
import requests
import functools
import numpy as np
import pandas as pd
from time import time
from typing import List
from flask import request
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import pairwise_distances
logging.getLogger().setLevel(logging.INFO)
# + id="OssJ94WBEkt2"
data_path = "."
# SQlite
env = "dev"
database = "recommender_dev.db"
username = "admin"
pwd = ""
hostname = "0.0.0.0"
port = 8081
similarity_metric = "cosine" # Similarity metric for pairwise distance measurement
weights = ['0.50','0.30','0.20'] # Weights for similarity matrix: interest,assessment,tags
results_table = "rank_matrix" # SQLite3 table containing user similarity metrics
user_id = None # unique user id for scoring similarities
# Flask server
hostname = "0.0.0.0" # hostname for serving Flask application
port = 5000 # port for serving Flask application
# + [markdown] id="56diW9dz32vs"
# ## Data ingestion
# + colab={"base_uri": "https://localhost:8080/"} id="GAB7Yq0kHx8t" outputId="5ee81ebb-05c5-49cf-f94c-6aa6b9da0953"
#hide-output
# !wget https://github.com/sparsh-ai/user-recommender/raw/main/data/course_tags.csv
# !wget https://github.com/sparsh-ai/user-recommender/raw/main/data/user_interests.csv
# !wget https://github.com/sparsh-ai/user-recommender/raw/main/data/user_course_views.csv
# !wget https://github.com/sparsh-ai/user-recommender/raw/main/data/user_assessment_scores.csv
# + id="fBqjXjpdKq1v"
def ingest_raw_data(env: str, data_dir: str = "data"):
"""Write .csv raw files to SQLite Database"""
csv_files = [i for i in os.listdir(data_dir) if ".csv" in i]
for f in csv_files:
df = pd.read_csv(os.path.join(data_dir, f))
conn = sqlite3.connect(database)
cur = conn.cursor()
df.to_sql(name=f.split(".")[0], con=conn, if_exists="replace", index=False)
# + id="XHIiY3D7Kw6h"
ingest_raw_data(env, data_path)
# + [markdown] id="JpJBwY9j3rm9"
# ## Load data from SQlite
# + id="ggnYx7wTQ72v"
def read_table(env: str, query: str) -> pd.DataFrame:
"""Query Table from SQLite Database"""
conn = sqlite3.connect(database)
cur = conn.cursor()
cur.execute(query)
df = pd.DataFrame(cur.fetchall(), columns=[column[0] for column in cur.description])
return df
# + id="nwwcDfi2QkHb"
def load_data(env: str) -> dict:
"""Load Users and Content Data from SQLite"""
df_course = read_table(env, f"select * from user_course_views")
df_asmt = read_table(env, f"select * from user_assessment_scores")
df_interest = read_table(env, f"select * from user_interests")
df_tags = read_table(env, f"select * from course_tags")
return {
"course": df_course,
"assessment": df_asmt,
"interest": df_interest,
"tags": df_tags,
}
# + id="czO1yIuAQxQF"
# Load Users/Assessments/Course/Tags Data
data_raw = load_data(env)
# + [markdown] id="wcOw1mQM3xAT"
# ## Summarize
# + id="vA16sAQXRomz"
def data_summary(data: dict):
"""Print Summary Metrics of Data"""
for name, df in data.items():
logging.info(f"\nDataframe: {name.upper()} -- Shape: {df.shape}")
for c in df.columns:
unique = len(df[c].unique())
is_null = df[df[c].isnull()].shape[0]
logging.info(f"{c} -- Unique: {unique} -- Null: {is_null}")
return
# + colab={"base_uri": "https://localhost:8080/"} id="Hd5bQRA1Rdx-" outputId="24422b36-2a9d-4d53-eaed-33bb4f82fc61"
# Summary of Users/Assessments/Courses/Tags Data
data_summary(data_raw)
# + [markdown] id="ih8a4TUL3mc4"
# ## Preprocessing
# + id="DPdzp-6RSN8d"
def preprocess(data: dict) -> dict:
"""Preprocess input DataFrames"""
prep = {}
data = copy.deepcopy(data)
for name, df in data.items():
# drop null values
df.dropna(axis=1, how="all", inplace=True) # course tags table
df.reset_index(drop=True, inplace=True)
# rename columns in dataframe
rename = {
"interest_tag": "tag",
"assessment_tag": "tag",
"course_tags": "tag",
"user_assessment_score": "score",
"view_time_seconds": "view",
}
df.columns = [rename[i] if i in rename.keys() else i for i in df.columns]
# discretize user assessment scores quantile buckets
# if "score" in df.columns:
# df = df.replace({"score": {0:"low", 1:"medium", 2:"high"}})
if any("score" in col for col in df.columns):
df["score"] = pd.qcut(df["score"], q=3, labels=["high", "medium", "low"])
# discretize user viewing time into quantile buckets
# if "view" in df.columns:
# df = df.replace({"view": {0:"low",1:"high"}})
if any("view" in col for col in df.columns):
df["view"] = pd.qcut(df["view"], q=4, labels=["high", "medium", "low", "very low"])
# encode categorical columns
cat_cols = ["tag", "score", "view", "level"]
for col in df.columns:
if col in cat_cols:
df[col] = pd.Categorical(df[col]).codes
# save prep dataframe
prep[name] = df
# add key for max users -> used for initializing user-item matrix
prep["max_users"] = max(
[max(v["user_handle"]) for k, v in prep.items() if "user_handle" in v.columns]
)
# add key containing dataframe for merged course/tags
prep["course_tags"] = pd.merge(
prep["course"], prep["tags"], on="course_id", how="left"
)
return prep
# + id="vtOR7Zx2SPva"
# Preprocess Raw Data
data = preprocess(data_raw)
# + [markdown] id="1Kk8JXy83en1"
# ## Calculating similarities and ranking
# + [markdown] id="mxgc7l5c3ZaL"
# ### Similarity
# + id="8Q2ROQiKD69A"
class UserSimilarityMatrix:
"""Class for building and computing similar users"""
def __init__(self, data: pd.DataFrame):
self.data = data
def __repr__(self) -> str:
return f"Dimensions of User-Items Matrix: {self.matrix.shape}"
def build_user_item_matrix(self, max_users: str, item: str) -> None:
"""Build User/Item Interaction Matrix"""
matrix = np.zeros(shape=(max_users, max(self.data[item])))
for _, row in self.data.iterrows():
matrix[row["user_handle"] - 1, row[item] - 1] = 1
return matrix
def get_user_item_matrix(self, max_users: int, features: List[str]):
"""Concatenate Features into One User-Items Matrix"""
results = []
for item in features:
results.append(self.build_user_item_matrix(max_users, item))
self.matrix = np.hstack(results)
def _truncatedSVD(self, threshold: float = 0.90) -> np.ndarray:
"""Apply Truncated SVD to Explain 'n'% of total variance"""
n_components = 2 # minimum components to begin
ex_var = 0
while ex_var < threshold:
pc = TruncatedSVD(n_components=n_components)
pc.fit_transform(self.matrix)
ex_var = np.sum(pc.explained_variance_ratio_)
n_components += 1
logging.info(
f"Total components {pc.n_components} with {ex_var:0.2f} variance explained"
)
self.matrix = pc.transform(self.matrix)
def compute_similarity(self, metric: str = "cosine") -> np.ndarray:
"""Compute Similarity"""
return pairwise_distances(self.matrix, metric=metric)
# + id="DU9UA1m1Wj3C"
def apply_similarity_calculation(name: str, features: List[str], metric: str) -> np.ndarray:
"""Compute User-Items Similarity Matrix
Steps:
- Construct User-Item Binary Vector for each input dataset
- Apply truncatedSVD to determine 'n' components to explain m% of total variance
- Compute cosine similarity
"""
logging.info("=" * 50)
logging.info(f"Computing USER-{name.upper()} Similarity Matrix...")
logging.info(f"Input Features: {features}")
SM = UserSimilarityMatrix(data[name])
SM.get_user_item_matrix(data["max_users"], features)
logging.info(f"Applying Truncated SVD: Input Shape: {SM.matrix.shape}...")
SM._truncatedSVD()
logging.info(f"Reduced User-Item Matrix Shape: {SM.matrix.shape}")
# Compute pairwise user-similarity
return SM.compute_similarity(metric=metric)
# + colab={"base_uri": "https://localhost:8080/"} id="kXYUi1BcWvJq" outputId="c685368c-60a9-4a6a-8235-9e3001228060"
user_interest = apply_similarity_calculation("interest", ["tag"], similarity_metric)
# + colab={"base_uri": "https://localhost:8080/"} id="4C6GRvulWvE-" outputId="5c9808b2-87ca-44c0-bba1-72ed248b97f4"
user_assessment = apply_similarity_calculation("assessment", ["tag", "score"], similarity_metric)
# + colab={"base_uri": "https://localhost:8080/"} id="A1slgCZJWu97" outputId="95ffbec2-21c4-41fa-91d0-056a94df5fc4"
# %%time
user_courses = apply_similarity_calculation("course_tags", ["tag", "view"], similarity_metric)
# + [markdown] id="bxHWZMi03TpZ"
# ### Weighted averaging
# + id="cXKNW42YZgkg"
def compute_weighted_matrix(
users: np.ndarray, assessments: np.ndarray, course: np.ndarray, weights: List[float]
) -> np.ndarray:
"""Compute Weighted Similarity Matrix where: weight_1 + weight_2 + weight_3 = 1"""
return (
(users * float(weights[0]))
+ (assessments * float(weights[1]))
+ (course * float(weights[2]))
)
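A toy check of the weighted blend (the matrix values below are made up for illustration; the weights keep the string form used in the notebook config):

```python
import numpy as np

a = np.array([[0.0, 1.0], [1.0, 0.0]])   # interest similarity (toy values)
b = np.array([[0.0, 0.5], [0.5, 0.0]])   # assessment similarity (toy values)
c = np.zeros((2, 2))                     # course similarity (toy values)
weights = ['0.50', '0.30', '0.20']       # must sum to 1 after float conversion

blended = (a * float(weights[0])
           + b * float(weights[1])
           + c * float(weights[2]))
print(blended[0, 1])  # 0.5*1.0 + 0.3*0.5 + 0.2*0.0 = 0.65
```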
# + id="J3urPoKXX8Z_"
def apply_weighted_similarity(i: np.ndarray, a: np.ndarray, c: np.ndarray, weights: List[float]) -> np.ndarray:
"""Compute Interest/Assessment/Courses Weighted Matrix"""
logging.info("=" * 50)
logging.info("Computing Weighted Similarity Matrix...")
return compute_weighted_matrix(i, a, c, weights)
# + colab={"base_uri": "https://localhost:8080/"} id="akKgRv4iX9zF" outputId="8bf50101-256b-4a9a-d0d9-f06643b38b74"
weighted_matrix = apply_weighted_similarity(user_interest, user_assessment, user_courses, weights)
# + [markdown] id="_KO723Fb3RjG"
# ### Ranking
# + id="ZXAewofzZq2G"
def rank_similar_users(X: np.ndarray, top_n: int = 5) -> pd.DataFrame:
"""Apply Custom Pandas Function to Rank Top 'n' Users"""
def custom_udf(X):
"""
Custom Pandas function for using index/score to
generate output results dataframe.
"""
idx = np.argsort(X.values, axis=0)[::-1][1 : top_n + 1]
return [
str({"user": i, "score": X.astype(float).round(4).values[i]}) for i in idx
]
# dimensions: users x top_n
if isinstance(X, np.ndarray):
X = pd.DataFrame(X)
ranking = X.apply(custom_udf).T
ranking.columns = [f"{i+1}" for i in ranking.columns]
ranking["user_handle"] = ranking.index
logging.info(f"User Ranking Dataframe Shape: {ranking.shape}")
return ranking
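The core of `custom_udf` is a descending argsort over one user's row that drops the top entry (assumed to be the user's own self-match) and keeps the next `top_n`; in miniature, with made-up scores:

```python
import numpy as np

row = np.array([1.0, 0.2, 0.9, 0.5])  # toy scores for one user; index 0 is the user itself
top_n = 2
idx = np.argsort(row)[::-1][1:top_n + 1]  # sort descending, skip the self entry
print(idx.tolist())  # [2, 3]
```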
# + id="iWUFp4W9YLGJ"
def apply_user_ranking(df: pd.DataFrame) -> pd.DataFrame:
"""Rank Users based on Similarity Metric"""
logging.info("=" * 50)
logging.info("Computing Weighted Similarity Matrix...")
return rank_similar_users(df)
# + colab={"base_uri": "https://localhost:8080/"} id="w7hAOwPWYIqV" outputId="167ba7df-9e60-424e-c2a9-a04f2e732199"
# %%time
rank_matrix = apply_user_ranking(weighted_matrix)
# + [markdown] id="iSYO881a3Jz8"
# ### Save the similarity matrix into database
# + id="mIg0CugOZHRA"
def write_table(env: str, table: str, df: pd.DataFrame) -> None:
"""Write Table from SQLite Database"""
conn = sqlite3.connect(database)
cur = conn.cursor()
df.to_sql(name=table, con=conn, if_exists="replace", index=False)
# + id="6jXLCb7bYS8F"
def save(results: pd.DataFrame) -> None:
"""Write Output Data to Table in SQLite Database"""
logging.info("=" * 50)
logging.info("Updating similarity matrix in SQLite Database...")
write_table(env, results_table, results)
# + colab={"base_uri": "https://localhost:8080/"} id="akdX564OYV94" outputId="9852bbdc-ccbc-4938-f98f-54e82c604614"
save(rank_matrix)
# + [markdown] id="6uTC57Fa3F1k"
# ## Adding Test sample
# + id="nwCNoS_9a99f"
def read_table(env: str, query: str) -> pd.DataFrame:
"""Query Table from SQLite Database"""
conn = sqlite3.connect(database)
cur = conn.cursor()
cur.execute(query)
df = pd.DataFrame(
cur.fetchall(), columns=[column[0] for column in cur.description]
)
return df
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="NQbC8M1qa9zN" outputId="3ffc4987-045a-42b2-c876-f0b1da505d26"
df_check_tags = read_table('dev', f"select * from course_tags")
df_check_tags.head()
# + id="zZEaW1UIsgnm"
# Sample Data
df_test = pd.DataFrame({'user_handle':['110','110','111','111'],
'user_match': ['112','113','157','145'],
'similarity': ['80.2','20.8','52.0','48.0']})
# + id="1tcGg1P9tGPx"
# Write Similarty Results to Table
write_table('dev','test_table',df_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="o7XdpxENtF97" outputId="4269ec35-0247-4e70-e2bc-dd5271b1afe9"
# Read from Table
users = '110'
read_table('dev', f"select * from test_table where user_handle = {users}")
# + colab={"base_uri": "https://localhost:8080/"} id="dlj9ilMKtUio" outputId="6071c881-9fe2-4690-faf8-f579ae8b2556"
# Add Index on Results Table (user_ranking)
conn = sqlite3.connect(database)
sql_table = f"""CREATE UNIQUE INDEX user_handle_index ON {results_table} (user_handle)"""
cur = conn.cursor()
cur.execute(sql_table)
# + [markdown] id="C7i7ls7L4EiQ"
# ## Manual evaluation
# + id="eyIzvj2o4L6h"
# User content
user_assesments = pd.read_csv('user_assessment_scores.csv')
user_interest = pd.read_csv('user_interests.csv')
user_course_views = pd.read_csv('user_course_views.csv')
course_tags = pd.read_csv('course_tags.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 97} id="ffcdSJpn4L20" outputId="8d609205-83ea-4429-e3c6-df77cf471e5f"
input_user = 9
read_table('dev', f"select * from {results_table} where user_handle = {input_user}")
# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="7YpiMrAv4Lyk" outputId="564afaa5-9fd0-4683-bb41-56d17c524b12"
user_interest[user_interest['user_handle'] == input_user]
# + colab={"base_uri": "https://localhost:8080/", "height": 359} id="hU3mhL_i4Lve" outputId="ae2bc9f5-595e-4b7b-8df0-32972a4696f6"
user_interest[user_interest['user_handle'] == 9776].head(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 80} id="wufmGLxh4Lr-" outputId="6ed9ff73-3207-4f50-ea2a-e7880eadf928"
user_assesments[user_assesments['user_handle'] == input_user]
# + colab={"base_uri": "https://localhost:8080/", "height": 359} id="a2ssultC4-k-" outputId="22764c74-d630-483f-b640-41f1f5b45c22"
user_interest[user_interest['user_handle'] == 9776].head(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 669} id="CKXbS1XR6FZv" outputId="6269a9fd-3c2f-40e4-dae7-915e7fb60b53"
user_course_views[user_course_views['user_handle'] == input_user]
# + [markdown] id="kyFz981V2_g7"
# ## Deploy and Inference
# + [markdown] id="1s_I2W_s24pL"
# ### Build the API
# + colab={"base_uri": "https://localhost:8080/"} id="VoTbfCsqGNYi" outputId="1e5fdf81-7adc-486d-9e96-c38812e529fa"
# %%writefile app.py
import os
import json
import sqlite3
import pandas as pd
from flask import Flask, request, jsonify
DATABASE_ENV = "dev"
DATABASE_NAME = "recommender_dev.db"
TABLE = "rank_matrix"
app = Flask(__name__)
def read_table(env: str, query: str) -> pd.DataFrame:
"""Query Table from SQLite Database"""
conn = sqlite3.connect(DATABASE_NAME)
cur = conn.cursor()
cur.execute(query)
df = pd.DataFrame(
cur.fetchall(), columns=[column[0] for column in cur.description]
)
return df
class SimilarUsers:
def __init__(self, user):
self.user = user
def fetch_user_from_db(self):
"""Fetch User Record from SQLite Database"""
query = f"select * from {TABLE} where user_handle = {self.user}"
print("Table", TABLE)
return read_table(DATABASE_ENV, query)
def get_payload(self):
"""Return JSON Payload containing Input User and Top
Similar Users with associated similarity scores"""
data = self.fetch_user_from_db()
if data.shape[0] == 0:
return {str(self.user): "No records found!"}
else:
return {str(self.user): list(data.loc[0].values.flatten()[:-1])}
@app.route("/api/similarity/", methods=["POST", "GET"])
def get_user_similarity():
user = json.loads(request.get_data())["user_handle"]
SU = SimilarUsers(user)
results = SU.get_payload()
return results
if __name__ == '__main__':
app.run(debug=True)
# + [markdown] id="wJenoG7J2z2T"
# ### Run the server API
# + colab={"base_uri": "https://localhost:8080/"} id="heWzh51NwqEf" outputId="bcbcf378-6b7f-4cb6-ed31-c39e5a351628"
# !chmod +x app.py
# !nohup python3 app.py > output.log &
# + colab={"base_uri": "https://localhost:8080/"} id="nRrNznADyKWn" outputId="648e6553-7328-44cc-f3df-0570d0848015"
# !cat output.log
# + [markdown] id="N8CWXAs42ss4"
# ### Make post request
# + id="2lJ6mWqMGIc5"
def similarity(user_id: str, host: str = "0.0.0.0", port: int = 5000) -> None:
"""API call to flask app running on localhost
and fetch top similar customers to the input customer(s)
"""
url = f"http://{host}:{port}/api/similarity/"
to_json = json.dumps({"user_handle": user_id})
headers = {"content-type": "application/json", "Accept-Charset": "UTF-8"}
response = requests.post(url, data=to_json, headers=headers)
print(response.text)
# + colab={"base_uri": "https://localhost:8080/"} id="9Ry-RHzdt6CY" outputId="6b154bc9-1531-4608-f96e-86919ccb4fa4"
similarity(user_id='110')
# + [markdown] id="PpsZ73fq2jiQ"
# ### Make post request using CURL from command line
# + colab={"base_uri": "https://localhost:8080/"} id="lpGmAbNZ0oiO" outputId="4c65cfdc-a60e-4393-c21c-4062efb3c948"
# !curl -X GET -H "Content-type: application/json" -d "{\"user_handle\":\"110\"}" "http://0.0.0.0:5000/api/similarity/"
| _notebooks/2021-07-07-course-recommender-svd-flask.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.9 64-bit
# language: python
# name: python36964bit3ec0dbbd46ae448585f9bbd63312a519
# ---
# # Firing Rate Estimation
#
# ### Estimating the firing rate with two different methods:
# - Finding the optimum number of bins
# - Finding the optimum bandwidth for Gaussian kernel density estimation
#
# ### Reference:
# - Kernel bandwidth optimization in spike rate estimation
# - <NAME> & <NAME>
#
# - [Kernel Density Estimation](https://jakevdp.github.io/PythonDataScienceHandbook/05.13-kernel-density-estimation.html)
#
# - [Kernel density estimation, bandwidth selection](https://en.wikipedia.org/wiki/Kernel_density_estimation#Bandwidth_selection)
#
#
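The `itng` implementations of `sshist`/`optimal_num_bins` are not shown here; a hedged sketch of the bin-count selection they presumably perform, following the cited Shimazaki & Shinomoto method, minimizes the cost C(width) = (2·mean − variance) / width² of the bin counts:

```python
import numpy as np

def ss_cost(data, n_bins):
    # Shimazaki-Shinomoto cost for a histogram with n_bins equal-width bins
    counts, edges = np.histogram(data, bins=n_bins)
    width = edges[1] - edges[0]
    return (2 * counts.mean() - counts.var()) / width ** 2

rng = np.random.default_rng(0)
data = rng.normal(size=500)  # stand-in for the spike times
best_n = min(range(2, 50), key=lambda n: ss_cost(data, n))
print(best_n)
```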
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import GridSearchCV
import numpy as np
import pylab as plt
from os.path import join
from itng.statistics import (sshist, optimal_bandwidth, optimal_num_bins)
# Reading spike times:
# +
with open(join("data.txt"), "r") as f:
lines = f.readlines()
spike_times = []
for line in lines:
line = [float(i) for i in line.split()]
spike_times.extend(line)
spike_times = np.asarray(spike_times)
# +
bins = optimal_num_bins(spike_times)
print("The optimum number of bins : ", len(bins))
# -
fig, ax = plt.subplots(1, figsize=(6, 4))
ax.set_xlabel('spike times (s)')
ax.set_ylabel("density")
ax.hist(spike_times, bins=bins, alpha=0.5, density=True);
# +
# Kernel Density Estimation
# Selecting the bandwidth via cross-validation
# -
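The `itng` implementation of `optimal_bandwidth` is not shown here; a common cross-validated alternative (and presumably what the `GridSearchCV`/`LeaveOneOut` imports above were intended for) picks the bandwidth whose held-out points are most probable under the fitted KDE:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=0.3, size=60)  # stand-in for spike_times

# KernelDensity.score returns the total log-likelihood, so GridSearchCV
# maximizes the leave-one-out log-likelihood over candidate bandwidths.
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': 10 ** np.linspace(-2, 0, 20)},
                    cv=LeaveOneOut())
grid.fit(sample[:, None])
print(grid.best_params_['bandwidth'])
```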
bandwidth = optimal_bandwidth(spike_times)
print(bandwidth)
# the spikes need to be sorted
spike_times_sorted = np.sort(spike_times)
# +
# instantiate and fit the KDE model
kde = KernelDensity(bandwidth=bandwidth, kernel='gaussian')
kde.fit(spike_times_sorted[:, None])
# score_samples returns the log of the probability density
logprob = kde.score_samples(spike_times_sorted[:, None])
# +
# PLOT the results together
# +
fig, ax = plt.subplots(1, figsize=(6, 4))
ax.set_xlabel('spike times (s)')
ax.set_ylabel("density")
ax.hist(spike_times, bins=bins, alpha=0.3, density=True);
ax.fill_between(spike_times_sorted, np.exp(logprob),
alpha=0.3,
color='gray')
ax.plot(spike_times_sorted, np.exp(logprob), alpha=1, lw=2, color="k")
plt.show()
# -
| itng/examples/FiringRateEstimation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import nltk
paragraph='This is a natural language processing technique. We are going to perform tokenization'
nltk.sent_tokenize(paragraph)
nltk.word_tokenize(paragraph)
| tokenization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Example file
#
# This notebook shows how to work with the MRIA model using a fictitious industry-by-industry input-output table.
#
# Let's first load the required Python modules that are not part of this package:
import os
import pandas as pd
import geopandas as gp
import numpy as np
import matplotlib.pyplot as plt
# And now the mria_py modules:
from create_table import Table
from base_model import MRIA
# The 'create_table' module will load the input-output table from an external source and prepares it for use within the MRIA model. The 'base_model' module is the MRIA model itself.
#
# The next step is to specify the filepath of the input-output table:
filepath = '..\input_data\The_Vale.xlsx'
# In some cases, you only want to use a subset of the table. As such, we need to specify a list of countries to be used in the model:
list_countries = ['Elms','Hazel','Montagu','Fogwell','Riverside','Oatlands']
# We can now create a new data object and run the 'prepare data' script:
DATA = Table('TheVale',filepath,2010,list_countries)
DATA.prep_data()
# And now we can create a model instance:
MRIA_model = MRIA('TheVale',list_countries,DATA.sectors)
MRIA_model.create_sets(FD_SET=['FinDem'])
MRIA_model.create_alias()
# The next step is to transform the data from the table into a set of parameters and variables to be used in the MRIA model:
MRIA_model.baseline_data(DATA)
# And before we run it, we create a dataframe to save the outcomes of the model:
output = pd.DataFrame()
output['x_in'] = pd.Series(MRIA_model.X.get_values())
# Let's run the baseline model!
MRIA_model.run_basemodel()
# The model spits out the log file of the solver. As shown in the log file, the model has reached an optimal solution! Let's see if the solver also reproduced the baseline situation:
# +
output['x_out'] = pd.Series(MRIA_model.X.get_values())
output['diff'] = output['x_out'] - output['x_in']
print(sum(output['diff']))
# -
# And in a quick visualisation:
# +
# %matplotlib inline
shape_TheVale = gp.read_file('..\\input_data\\The_Vale.shp')
reg_loss = pd.DataFrame(output[['diff','x_in']].groupby(level=0).sum())
reg_loss['Region'] = reg_loss.index
reg_loss['diff'] = reg_loss['diff'].round()
reg_shap = pd.merge(shape_TheVale, reg_loss, on='Region', how='inner')
reg_shap.plot(column='diff', cmap='OrRd', legend=True)
# -
| mria_py/examples/mria_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example - Clip Box
# +
import rioxarray # for the extension to load
import xarray
# %matplotlib inline
# -
# ## Load in xarray dataset
xds = xarray.open_dataarray("../../test/test_data/input/MODIS_ARRAY.nc")
xds
xds.plot()
# ## Clip using a bounding box
#
# See docs for `rio.clip_box`:
#
# - [DataArray.clip_box](../rioxarray.rst#rioxarray.raster_array.RasterArray.clip_box)
# - [Dataset.clip_box](../rioxarray.rst#rioxarray.raster_dataset.RasterDataset.clip_box)
xdsc = xds.rio.clip_box(
minx=-7272967.1958741,
miny=5048602.84382404,
maxx=-7272503.88315758,
maxy=5049066.15654056,
)
xdsc
xdsc.plot()
| docs/examples/clip_box.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: zeo
# language: python
# name: zeo
# ---
# # Example on how to calculate the principal axes of a molecule
#
# This notebook shows how to calculate the principal axes of the molecule (Fig. 1 of the main paper). It uses `ase` and `nglview` only for visualization. The principal component analysis (PCA) is calculated using an off-the-shelf implementation from `scikit-learn`.
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import IncrementalPCA
from ase import Atoms
import nglview as nv
# -
# ## Visualizing the 3D conformer
#
# The atomic coordinates were calculated using molecular mechanics, as described in the paper. Here, the coordinates are provided for an immediate visualization.
smiles = 'CC[N+](C)(C)CC'
nxyz = np.array([
[6.0, -2.0366, 0.8406, -0.2439],
[6.0, -1.1877, -0.1131, 0.5807],
[7.0, 0.1397, -0.5059, -0.0949],
[6.0, -0.138, -1.3056, -1.3752],
[6.0, 0.8961, -1.4392, 0.8604],
[6.0, 1.0267, 0.6959, -0.4714],
[6.0, 1.362, 1.6019, 0.702],
[1.0, -1.5315, 1.793, -0.4248],
[1.0, -2.9613, 1.0649, 0.2988],
[1.0, -2.3254, 0.4092, -1.2059],
[1.0, -0.9454, 0.3309, 1.552],
[1.0, -1.7389, -1.0428, 0.7652],
[1.0, -0.5884, -0.6421, -2.1175],
[1.0, -0.8138, -2.1302, -1.1308],
[1.0, 0.8141, -1.6915, -1.7526],
[1.0, 0.3017, -2.3483, 0.9949],
[1.0, 1.0241, -0.9355, 1.8216],
[1.0, 1.8662, -1.6801, 0.4162],
[1.0, 1.9472, 0.2888, -0.9066],
[1.0, 0.5065, 1.2582, -1.254],
[1.0, 0.4697, 2.0512, 1.1457],
[1.0, 1.9152, 1.0759, 1.4844],
[1.0, 1.9977, 2.4239, 0.3555]
])
at = Atoms(positions=nxyz[:, 1:], numbers=nxyz[:, 0])
nv.show_ase(at)
# ## Obtaining major axes of the molecule
#
# A PCA is performed on the xyz coordinates of the conformer described above. Then, the 2D representation is used to extract the major axes, which are plotted for in-plane visualisation.
pca = IncrementalPCA(2)
uv = pca.fit(nxyz[:, 1:]).transform(nxyz[:, 1:])
axes = uv.max(0) - uv.min(0)
print(axes)
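# As a self-contained check (with made-up points, using NumPy's SVD in place of scikit-learn): the axes above are simply the peak-to-peak extents of the centred coordinates projected onto the principal directions.

```python
import numpy as np

# Four toy points forming a 4 x 2 rectangle in the plane
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 2.0], [4.0, 2.0]])

# Centre the coordinates and project them onto the right singular vectors,
# which are the principal directions of the point cloud
centred = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
uv = centred @ vt.T

# Peak-to-peak extents along the principal directions: the rectangle's sides
axes = uv.max(0) - uv.min(0)
```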
# +
fig, ax = plt.subplots()
ax.scatter(uv[:, 0], uv[:, 1], c=nxyz[:, 0], s=nxyz[:, 0] * 30)
plt.show()
| code/mol-axes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:integration-building-model]
# language: python
# name: conda-env-integration-building-model-py
# ---
# # SHEF environmental footprint model
#
# The following script calculates the environmental footprint of apartments (also called dwellings) and of their occupants (also called tenants or households) for specific buildings owned by the building-owner partners of the <a href=http://www.nrp73.ch/en/projects/building-construction/ecological-footprint-in-the-housing-sector>Shrinking Housing Environmental Footprint</a> (SHEF, under <a href=http://www.snf.ch/en/researchinFocus/nrp/nfp-73/Pages/default.aspx>NRP73: Sustainable Economy</a>) project: <a href=https://www.abz.ch/>ABZ</a>, <a href=https://www.mobiliar.ch/>Swiss Mobiliar</a> and <a href=https://www.schl.ch/>SCHL</a>.
#
# _Input_: Data from HBS (<a href = https://www.bfs.admin.ch/bfs/en/home/statistics/economic-social-situation-population/surveys/hbs.html>obtained from the Federal Statistical Office of Switzerland</a>) and STATPOP (census), linked to limited GWR data (from the <a href='https://www.bfs.admin.ch/bfs/en/home/registers/federal-register-buildings-dwellings.html'>Federal register of buildings</a>) for the SHEF partner buildings
#
# _Run_: The Jupyter Notebook provides a step-by-step guidance to do the computations
#
# _Output_: One CSV that extends the input file with columns for the occupant footprint and the associated apartment (heating and material) footprint
#
# TOC - overview image below:<a id="toc-main"></a>
# - <a href="#abm"> Step 0: Initialising with HBS, STATPOP and GWR tenant-dwelling pairs</a>
# - <a href="#consumption"> Step 1: Calculation of occupants' consumption-based footprint</a>
# - <a href="#material"> Step 2: Calculation of apartments' material and renovation based footprint</a>
# - <a href="#energy"> Step 3: Calculation of apartments' energy (heat/warm water) based footprint</a>
# - <a href="#total_impacts"> Step 4: Merge all the final results </a>
#
# Author: <NAME>, ETH Zurich
#
# <img src="plan/datapipeline-2.PNG">
# ## 0. Initialisation: output of ABMs <a id="abm"></a>
#
# <a href="#toc-main">back</a>
#
# In this section, the outputs from the following ABMs are prepared in a format to be passed as input to the respective models
# 1. Tenant ABM output (household information) for the consumption footprint
# 2. Owner ABM output (building information) for Apartment footprint
# +
#### TEST ######
import pickle
import csv
# Use context managers so both files are closed after the conversion
with open('model_consumption/Household-Model-Development/init_data/allhhscardemand.pickle', 'rb') as file:
    x = pickle.load(file)
with open('model_consumption/Household-Model-Development/init_data/allhhscardemand.csv', 'w') as output:
    writer = csv.writer(output)
    for key, value in x.items():
        writer.writerow([key, value])
# +
# %%capture
import pandas as pd
import numpy as np
import pickle
import os
from scipy import stats
import statistics
from statsmodels.stats.multicomp import pairwise_tukeyhsd, MultiComparison
import random
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import sklearn.multioutput as sko
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score,mean_squared_error, explained_variance_score
from sklearn.model_selection import cross_val_score, KFold, train_test_split, StratifiedShuffleSplit
from sklearn.preprocessing import FunctionTransformer
# import brightway2 as bw #TODO when running LCA for the rebound model here
import warnings
warnings.filterwarnings("ignore")
# %matplotlib inline
# -
gws_test = pd.read_csv('model_rene_buildinginfo/Buildinginfo.csv', delimiter=",", encoding='ISO-8859-1')
list(gws_test.columns)
gws_test['volume'].median()
# <p style='color:blue'>USER INPUT NEEDED to choose the strategy number</p>
# +
Strategy_no = 0
# for building model : oil: 50%, gas: 15%, dis.: 5%, ren: 15%, heatpump 15%
# 0, 1 , NO CHANGE
# 2 , oil to district : oil: -10%, dis.: +10%
# 3 , oil to district and renw: oil: -25%, dis.: +5%, ren: +5%, heatpump +15%
# 4 , oil to district and renw(more): oil: -40%, gas: +10%, dis.: +20%, ren: 15%, heatpump +10%
# -
# #### Adapt the owner ABM output files further
# +
# (0) Take Owner ABM output file
pd_owner_raw = pd.read_csv('model_owner_ABM/strategy'+str(Strategy_no)+'_output_dw_model.csv',
                           delimiter=',', error_bad_lines=False, encoding='ISO-8859-1')
print(pd_owner_raw.head(),list(pd_owner_raw.columns))
pd_intergation_ownr = pd.read_excel('model_owner_ABM/integration_OwnerABM_Buildingmodel_1.xlsx')
# (1) drop the unwanted columns
pd_intergation_ownr_list_drop = list(pd_intergation_ownr['variables from owner ABM'])
pd_owner_raw=pd_owner_raw[pd_intergation_ownr_list_drop]
# (2) rename the columns
pd_intergation_ownr_list_renamed = list(pd_intergation_ownr['Inputs for building model '])
pd_owner_raw.columns = pd_intergation_ownr_list_renamed
pd.DataFrame.to_csv(pd_owner_raw,'model_owner_ABM/dwelling_data_1.csv',sep=',',index=False)
# -
# #### Attach the tenant and the owner ABM output files <a id = 'dwelling_area'></a>
# +
pd_tenant_raw = pd.read_csv('model_tenant_ABM/strategy'+str(Strategy_no)+'_households_data.csv', sep=',')
pd_owner_raw = pd.read_csv('model_owner_ABM/dwelling_data_1.csv', sep=',')
pd_owner_raw_columns = list(pd_owner_raw.columns)
pd_tenant_raw_columns = list(pd_tenant_raw.columns)
print('owner-columns=', pd_owner_raw_columns ,len(pd_owner_raw_columns),
'\n\ntenant-columns=',pd_tenant_raw_columns,len(pd_tenant_raw_columns))
pd_owner_tenant_raw = pd.merge(pd_owner_raw,pd_tenant_raw, left_on= ['Time step','Dwelling id'],
right_on= ['Step','Current dwelling id'] )
#check if the match is correct based on dwelling sizes
area_tenant_file = [np.round(i) for i in list(pd_owner_tenant_raw['Dwelling area'])]
area_owner_file = [np.round(i) for i in list(pd_owner_tenant_raw['Current dwelling size'])]
# assert area_tenant_file == area_owner_file #TODO for later 0 check which lines has error
pd_owner_tenant_raw=pd_owner_tenant_raw.drop(columns=['Unnamed: 0', 'Step', 'Current dwelling id',
'Current dwelling size','Postcode'])
print('\n\nall columns=',list(pd_owner_tenant_raw.columns),len(list(pd_owner_tenant_raw.columns)))
pd.DataFrame.to_csv(pd_owner_tenant_raw,'raw/1_households_building_data.csv',sep=',')
# -
# #### Attached (owner-tenant) data ---> input for the building model
dwelling_columns = list(pd_owner_tenant_raw.columns)[:21]
pd_owner= pd_owner_tenant_raw[dwelling_columns]
print(list(pd_owner.columns), len(list(pd_owner.columns)))
pd.DataFrame.to_csv(pd_owner,'raw/1_dwelling_data.csv',sep=',')
# #### Attached (owner-tenant) data ---> input for the consumption model
# +
# pd_tenant = pd_owner_tenant_raw.drop([ 'Year', 'Month','Dwelling room','Dwelling rent', 'building id', 'Settlment id',
# 'Street address', 'city', 'post code', 'Building total apartment area',
# 'Building no. of dwelling', 'Building height', 'Building Construction year',
# 'Refurbishment year', 'Refurbishment type'],axis=1)
# print(list(pd_tenant.columns),len(list(pd_tenant.columns)))
# pd.DataFrame.to_csv(pd_tenant,'raw/1_households_data_noregion.csv',sep=',')
# # Add the values in the 'char_region_xx' columns for consumption mdoel, based on postcodes in tenant model output
# pd_tenant[['char_georegion_ge','char_georegion_mit','char_georegion_nw',
# 'char_georegion_zh','char_georegion_ost','char_georegion_zen',
# 'char_georegion_ti']] = pd.DataFrame([[0,0,0,0,0,0,0]], index=pd_tenant.index)
# pd_tenant.loc[pd_tenant.canton=='Zurich', 'char_georegion_zh']=1
# pd_tenant.loc[pd_tenant.canton=='Vaud', 'char_georegion_ge']=1
# pd_tenant=pd_tenant.drop(['canton'],axis=1)
# ## Prepare the file for input in consumption model, based on column names of tenant model output file
# pd_consumption_raw_columns = pd.read_excel('model_tenant_ABM/Integration_ABMTenant_Consumptionmodel.xlsx')
# pd_consumption_raw_columns_list = list(pd_consumption_raw_columns['variable_name_consumption_model'])
# pd_tenant.columns = pd_consumption_raw_columns_list
# print(list(pd_tenant.columns), len(list(pd_tenant.columns)))
# pd.DataFrame.to_csv(pd_tenant,'raw/1_households_data.csv',sep=',')
# +
pd_tenant = pd.read_csv('model_tenant_ABM/strategy'+str(Strategy_no)+'_households_data.csv', sep=',')
# Add the values in the 'char_georegion_xx' columns for the consumption model, based on postcodes in the tenant model output
pd_tenant[['char_georegion_ge','char_georegion_mit','char_georegion_nw',
'char_georegion_zh','char_georegion_ost','char_georegion_zen',
'char_georegion_ti']] = pd.DataFrame([[0,0,0,0,0,0,0]], index=pd_tenant.index)
pd_tenant.loc[(pd_tenant.Postcode<9000) & (pd_tenant.Postcode>=8000) , 'char_georegion_zh']=1
pd_tenant.loc[(pd_tenant.Postcode<2000) & (pd_tenant.Postcode>=1000), 'char_georegion_ge']=1
pd_tenant=pd_tenant.drop(['Unnamed: 0', 'Year', 'Month', 'Postcode'],axis=1)
print(list(pd_tenant.columns),len(list(pd_tenant.columns)))
## Prepare the file for input in consumption model, based on column names of tenant model output file
pd_consumption_raw_columns = pd.read_excel('model_tenant_ABM/Integration_ABMTenant_Consumptionmodel.xlsx')
pd_consumption_raw_columns_list = list(pd_consumption_raw_columns['variable_name_consumption_model'])
pd_tenant.columns = pd_consumption_raw_columns_list
print(list(pd_tenant.columns), len(list(pd_tenant.columns)))
pd.DataFrame.to_csv(pd_tenant,'raw/1_households_data.csv',sep=',')
# -
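# The postcode-to-region one-hot step above can be wrapped in a small, self-contained helper (a sketch with the same simplified boundary assumptions: postcodes 8000-8999 -> Zurich region, 1000-1999 -> Geneva/Vaud region):

```python
import pandas as pd

def georegion_dummies(postcodes: pd.Series) -> pd.DataFrame:
    """One-hot encode two georegions from Swiss postcodes (simplified)."""
    out = pd.DataFrame(0, index=postcodes.index,
                       columns=["char_georegion_zh", "char_georegion_ge"])
    out.loc[postcodes.between(8000, 8999), "char_georegion_zh"] = 1
    out.loc[postcodes.between(1000, 1999), "char_georegion_ge"] = 1
    return out

# Hypothetical postcodes: Zurich, Geneva region, and an unmapped one (Bern)
dummies = georegion_dummies(pd.Series([8001, 1201, 3000]))
```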
# #### Attached (owner-tenant) data ---> input for the rebound model <a id='rebound-abm'></a>
# +
# attach the dwelling rent for income estimation
pd_owner=pd.read_csv('raw/1_dwelling_data.csv',sep=',')[['Dwelling id','Dwelling rent']]
pd_tenant_rebound=pd.merge(pd_tenant,pd_owner,left_on=['dwelling_id'],right_on=['Dwelling id'])
# adapt the tenant database with the selected independent properties of the rebound regression model
pd_rebound_column_name = list(pd.read_excel('model_rebound/integration_ABMTenant_Rebound.xlsx')['name_rebound'])
pd_consum_column_name = list(pd.read_excel('model_rebound/integration_ABMTenant_Rebound.xlsx')['name_consumption'])
# reindex the columns for input in the regression model
pd_rebound = pd_tenant_rebound.T.reindex(pd_consum_column_name)
pd_rebound = pd_rebound.T
pd_rebound.columns = pd_rebound_column_name
# +
# edit the columns further
pd_rebound['disposable_income']=pd_rebound['disposable_income']*3 # assumption: rent is 1/3rd of the income
pd.DataFrame.to_csv(pd_rebound,'raw/1_rebound_household_data.csv',sep=',',index=False)
pd_rebound['households_with_a_woman_as_reference_person']=np.random.randint(0,2,pd_rebound.shape[0])
sum1=pd_rebound['female_persons_aged_between_5_and_14_years']+pd_rebound[
'female_persons_aged_between_15_and_24_years']+pd_rebound[
'male_persons_aged_between_5_and_14_years']+pd_rebound[
'male_persons_aged_between_15_and_24_years']
pd_rebound['number_of_students/trainees/apprentices_in_the_household']=np.random.randint(0,sum1+1,pd_rebound.shape[0])
sum2=pd_rebound['female_persons_aged_between_25_and_34_years']+pd_rebound[
'female_persons_aged_between_35_and_44_years']+pd_rebound[
'female_persons_aged_between_45_and_54_years']+pd_rebound[
'female_persons_aged_between_55_and_64_years']+pd_rebound[
'male_persons_aged_between_25_and_34_years']+pd_rebound[
'male_persons_aged_between_35_and_44_years']+pd_rebound[
'male_persons_aged_between_45_and_54_years']+pd_rebound[
'male_persons_aged_between_55_and_64_years']
pd_rebound['number_of_employed_persons_in_the_household']=np.random.randint(0,sum2+1,pd_rebound.shape[0])
sum3 = pd_rebound['female_persons_aged_between_65_and_74_years']+pd_rebound[
    'female_persons_older_than_75_years']+pd_rebound[
    'male_persons_aged_between_65_and_74_years']+pd_rebound[
    'male_persons_older_than_75_years']
# NOTE: the next line overwrites the employed-persons column set just above;
# the 65+ sum presumably targets a retired-persons column instead
pd_rebound['number_of_employed_persons_in_the_household']=np.random.randint(0,sum3+1,pd_rebound.shape[0])
pd_rebound['number_of_other_persons_in_the_household_(wrt_employment)']=0
pd_rebound['number_of_self_employed_persons_in_the_household']=0
pd_rebound
# -
# -------------------------------------------------------------------------------------------------------------------------------
# ## 1. Occupants' footprint <a id="consumption"></a>
#
# <a href="#toc-main">back</a>
#
# TOC:<a id="toc-consum"></a>
# - <a href="#direct_cons">Step 1.a. Direct Consumption-based footprint </a>
# - <a href="#rebounds">Step 1.b. Rebounds of the consumption footrpint</a>
# ### 1a. Direct Consumption-based footprint <a id="direct_cons"></a>
#
# <a href="#toc-consum">back</a>
#
# The following script assigns consumption-archetypes and associated life cycle greenhouse gas emissions that were found in the <a href=https://pubs.acs.org/doi/abs/10.1021/acs.est.8b01452>ES&T-Paper Froemelt et al. 2018</a> to the ABM-households. The assignment is conducted in the same manner as in the <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12969">JIE-Paper Froemelt et al. 2020</a> and thus based on the Random-Forest-Classifier that was trained for the respective paper.
#
# *Input*: the user needs to provide the data for the ABM-households in the same structure as demonstrated in test_data.xlsx
#
# *Run*: The Jupyter Notebook provides a step-by-step guidance to do the computations
#
# *Output*: Two EXCEL-files will be stored: 1. Probabilities for each household of behaving like a certain consumption-archetype; 2. LCA-results (IPCC 2013, 100a) aggregated at the main consumption areas for each household in kgCO$_{2}$-eq per year (NOTE: per household, not per capita)
#
# TOC <a id="toc"></a>:
# - <a href="#ini">Step 1a.0: Initialisation</a>
# - <a href="#prep">Step 1a.1: Data Preparation</a>
# - <a href="#probs">Step 1a.2: Estimate Probabilities for Consumption-Archetypes</a>
# - <a href="#lca">Step 1a.3: Life Cycle Assessment of Consumption Behaviour</a>
# #### 1a.0: Initialisation <a id="ini"></a>
#
# <a href="#toc">back</a>
# +
# Path to data needed
init_data_path = r"model_consumption/Household-Model-Development/init_data"
# Path to output of ABM-model
abm_input_path = r"model_consumption/Household-Model-Development/abm_input"
# Separate path to classifier to save space on disk
clf_path = r"model_consumption"
# Path to results
res_path = r"model_consumption/Household-Model-Development/results"
# Loading classifier (trained and used for JIE-Paper Froemelt et al. 2020)
with open(os.path.join(clf_path, "calibrated_classifier_mob.pickle"), 'rb') as f:
    cccv = pickle.load(f)
# Loading list of attributes (important for correct order of attributes)
with open(os.path.join(init_data_path, 'listofattributes.pickle'), 'rb') as f:
    listofattributes = pickle.load(f)
# Loading a translation dict to convert the random-forest-cluster-names to the names used in ES&T-Paper
# Froemelt et al. 2018
with open(os.path.join(init_data_path, 'archetransl.pickle'), 'rb') as f:
    archetranslator = pickle.load(f)
# Loading the LCA-results for greenhouse gas emissions (IPCC 2013, 100a) --> results are in kgCO2-eq/yr
ghg_df = pd.read_pickle(os.path.join(init_data_path, 'archetypes_annual_ghg.pickle'))
# +
# # adapt the household consumption file slightly
# nameofABMoutputfile_csv = pd.read_csv('raw/1_households_data.csv', sep=',')
# nameofABMoutputfile_csv = nameofABMoutputfile_csv.iloc[:,5:37] # dropping first four columns which are not needed for the consumption model
# nameofABMoutputfile_csv=nameofABMoutputfile_csv.drop_duplicates()
# print(list(nameofABMoutputfile_csv.columns), len(list(nameofABMoutputfile_csv.columns)))
# nameofABMoutputfile = nameofABMoutputfile_csv.to_excel('model_consumption/Household-Model-Development/abm_input/test_data.xlsx',
# index = None, header=True) # ASSUMPTION: file is in xlsx-format
# +
fname = os.path.join(abm_input_path, "test_data_"+str(Strategy_no)+".xlsx")
# fname = os.path.join(abm_input_path, "new/test_data.xlsx")
agentHHs = pd.read_excel(fname)
# Probably not necessary, but we are making sure that the attributes are in the correct order
# agentHHs = agentHHs.T.reindex(listofattributes)
# agentHHs = agentHHs.T
# assert list(agentHHs.columns) == listofattributes
# -
# #### 1a.1: Data Preparation <a id="prep"></a>
#
# <a href="#toc">back</a>
#
# To impute the missing mobility data, two versions are implemented here. <a href="#mobv1">Version 1</a>: set the mobility demand to the Swiss-wide median (=0.5); <a href="#mobv2">Version 2</a>: Based on the given household characteristics, we estimate a mobility demand with microcensus data from 2015 (https://www.bfs.admin.ch/bfs/de/home/aktuell/neue-veroeffentlichungen.gnpdetail.2017-0076.html).
# **<p style="color:blue">USER INPUT NEEDED: CHOOSE <a href="#mobv1">VERSION 1</a> OR <a href="#mobv2">VERSION 2</a></p>**
# ##### Impute mobility demand (Version 1, set to median) <a id="mobv1"></a> and jump to <a href="#probs">1a.2: Probabilities for Consumption-Archetypes</a>
agentHHs['mobility'] = 0.5
# ##### Impute mobility demand (Version 2, use microcensus 2015) <a id="mobv2"></a>
# +
# %%time
# daily distance in km from microcensus 2015 (motorisierter Individualverkehr):
# Verkehrsverhalten der Bevölkerung, Kenngrössen 2015 - Schweiz
mc_data = {
'gender': {
'm': 29.242695698,
'f': 19.595522015
},
'age': {
'0617': 13.252892803,
'1824': 24.792044371,
'2544': 31.295384917,
'4564': 28.671073552,
'6579': 16.968762215,
'8099': 6.5412771519
},
'inc': {
'<4000': 13.196680471,
'40018000': 24.841538427,
'800112000': 32.139251131,
'>12000': 34.034484978
}
}
# Very probably, we will only have data on gender and age for the ABM-households --> income will not be considered
# The following matchings are simplified assumptions to align the microcensus data with the expected data from the
# ABM-model. If more detailed data is provided by the ABM-model the whole cell needs to be revised!
matchABMMC = {
'0514': '0617',
'1524': '1824',
'2534': '2544',
'3544': '2544',
'4554': '4564',
'5564': '4564',
'6574': '6579',
'7599': '8099'
}
# Estimate daily mobility demand based on gender
genderestimate = mc_data['gender']['f'] * \
agentHHs[[c for c in agentHHs.columns if 'fem' in c and not '0004' in c]].sum(axis=1) \
+ mc_data['gender']['m'] * agentHHs[[c for c in agentHHs.columns if 'male' in c and not '0004' in c]].sum(axis=1)
# Estimate daily mobility demand based on age structure
ageestimate = pd.Series(0, index=agentHHs.index)
for ky in matchABMMC.keys():
    ageestimate += (mc_data['age'][matchABMMC[ky]] * agentHHs[[c for c in agentHHs.columns if ky in c]].sum(axis=1))
# Take the average of both estimates
agentHHs['mobility'] = 0.5 * genderestimate + 0.5 * ageestimate
# Convert from daily to yearly demand
agentHHs['mobility'] *= 365
# Load car demand of all Swiss households (mobility model --> see JIE-Paper Froemelt et al. 2020)
hhcardemand = pd.read_pickle(os.path.join(init_data_path, 'allhhscardemand.pickle'))
# Compute percentile score of agent-HHs' mobility demand
for hhid in agentHHs.index:
    agentHHs.loc[hhid, 'mobility'] = stats.percentileofscore(hhcardemand.values, agentHHs.loc[hhid, 'mobility']) / 100
del ageestimate, genderestimate, hhcardemand
# -
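# The per-household `percentileofscore` loop can be vectorised with `np.searchsorted` against a sorted reference sample (a sketch with toy numbers; tie handling differs slightly from `stats.percentileofscore`'s default):

```python
import numpy as np

# Sorted reference sample of yearly car demand (toy values)
reference = np.sort(np.array([100.0, 200.0, 300.0, 400.0]))
demands = np.array([250.0, 50.0])

# Rank each demand within the reference and convert to a [0, 1] score
percentiles = np.searchsorted(reference, demands, side="right") / len(reference)
```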
# #### 1a.2: Probabilities for Consumption-Archetypes <a id="probs"></a>
#
# <a href="#toc">back</a>
# +
# For each ABM-household we compute the probability of being a certain consumption-archetype
probs = cccv.predict_proba(agentHHs.values)
probs_df = pd.DataFrame(data=probs, index=agentHHs.index, columns=[archetranslator[c] for c in cccv.classes_])
# Visualise the probabilities
sns.heatmap(probs_df, cmap='coolwarm', xticklabels=True, yticklabels=True)
plt.tight_layout()
# Store the probabilities as excel-file
probs_df.to_excel(os.path.join(res_path, 'res_archetypes_probas.xlsx'))
# -
# #### 1a.3: Life Cycle Assessment of Consumption <a id="lca"></a>
#
# <a href="#toc">back</a>
#
# Before a life cycle assessment of consumption behaviour can be performed, we have to choose a consumption-archetype for each ABM-household. There are two options for this: <a href="#lcav2">Option 1</a> automatically chooses the most probable archetype; <a href="#lcav1">Option 2</a> is a manual user choice.
# **<p style="color:blue">USER INPUT NEEDED: CHOOSE <a href="#lcav2">OPTION 1</a> OR <a href="#lcav1">OPTION 2</a></p>**
# ##### Option 1 (automatic selection of most probable archetype) <a id="lcav2"></a>
archechoice = dict()
maxprobas = probs_df.T.idxmax()
for hhid in agentHHs.index:
    archechoice[hhid] = maxprobas[hhid]
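# The loop above is equivalent to converting the per-household argmax directly to a dict (toy probabilities and made-up archetype names):

```python
import pandas as pd

# Toy probabilities (rows: households, columns: hypothetical archetypes)
probs_df = pd.DataFrame({"A": [0.7, 0.2], "B": [0.3, 0.8]}, index=["hh1", "hh2"])

# idxmax over the transposed frame yields each household's most probable
# archetype; to_dict() replaces the explicit loop
archechoice = probs_df.T.idxmax().to_dict()
```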
# ##### Option 2 (manual selection of archetypes) <a id="lcav1"></a>
#
# <p style="color:blue">USER INPUT NEEDED: ENTER THE ARCHETYPE-NAME (ACCORDING TO THE ES&T-PAPER FROEMELT ET AL. 2018) FOR EACH HOUSEHOLD</p>
archechoice = dict()
for hhid in agentHHs.index:
    archechoice[hhid] = input('HH-ID: {} --> Archetype: '.format(hhid))
# ##### In the final step, we assign the LCA-results to ABM-households and save them as an EXCEL-file
#
# **NOTE: The results are in kg CO$_{2}$-eq per year on a household level (not per capita!)**
# Assign the aggregated LCA-GHG-results to the ABM-households
hh_lca_res = pd.DataFrame(np.nan, index=agentHHs.index, columns=[c for c in ghg_df.columns if not c.endswith('_cap')])
for hhid in hh_lca_res.index:
    archename = archechoice[hhid]
    hh_lca_res.loc[hhid, hh_lca_res.columns] = ghg_df.loc[archename, hh_lca_res.columns]
hh_lca_res.to_excel(os.path.join(res_path, 'res_hhlca.xlsx'))
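# The assignment loop above amounts to a row lookup of each household's archetype in the GHG table, which can also be done in one vectorised step (toy numbers, made-up archetype names):

```python
import pandas as pd

# Hypothetical GHG table (kgCO2-eq/yr per archetype) and archetype choices
ghg_df = pd.DataFrame({"total": [100.0, 200.0]}, index=["arch1", "arch2"])
archechoice = {"hh1": "arch2", "hh2": "arch1"}

# Look up each household's archetype row, then relabel rows by household id
hh_lca_res = ghg_df.loc[list(archechoice.values())].set_axis(list(archechoice.keys()))
```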
# +
## attach the footprint back to tenants (1_household_data.csv)
pd_result_consum_fp = pd.read_excel('model_consumption/Household-Model-Development/results/res_hhlca.xlsx',index_col=0)
# pd_result_consum_fp = pd.concat([pd_tenant,pd_consum_fp],axis=1,sort=False)
pd_result_consum_fp=pd_result_consum_fp.rename(columns={'total':'total_occupant_footprint'})
pd_result_consum_fp['housing_all']=pd_result_consum_fp['housing']+pd_result_consum_fp['furnishings']
pd_result_consum_fp['food_all']=pd_result_consum_fp['food']+pd_result_consum_fp['restaurantshotels']
pd_result_consum_fp['transport_all']=pd_result_consum_fp['transport']+pd_result_consum_fp['recreation']
pd.DataFrame.to_csv(pd_result_consum_fp,'postprocessing/1a_consumption/res_hhlca.csv')
pd_result_consum_fp[['food_all','clothing','housing_all','transport_all','total_occupant_footprint']].mean()
# -
# -------------------------------------------------------------------------------------------------------------------------------
# ### 1b. Rebound consumption-based footprint <a id="rebounds"></a>
#
# <a href="#toc-consum">back</a>
#
# Aim: Quantify the environmental impact of the rebound spending enabled by savings in consumption expenses
#
# _Input_: The household budget survey files used to train the model
#
# _Model_: A random forest or Artificial neural network model
# <a href='https://ifu-esd-srv-4.ethz.ch/jupyterhub/user/shinder/notebooks/0_work/Models/1_consumption_movement/3_Rebound/5_final_model/rebound-model/rebound_model.ipynb'>Link to the full code</a>
#
# _Output_: The rebound expenses and environmental footprints of the households
# <p style='color:red'>[WIP] Detailed LCA of the consumption expenses as outputs</p>
#
# TOC<a id="toc-rebound"></a>
#
# - <a href="#ini-rebound"> Step 0: Initialisation</a>
# - <a href="#model-rebound"> Step 1: Model </a>
# - <a href="#post-rebound"> Step 2: Postprocessing </a>
# - <a href="#lca-rebound"> Step 3: LCA </a>
# #### 1b.0. Initialisation <a id = 'ini-rebound'></a>
#
# <a href="#toc-rebound">back</a>
# <p style='color:blue'>USER INPUT NEEDED: based on the expected rebound analysis, change the following values</p>
#
# Data Parameters
# - (1) **habe_file_folder** -> For the years 2009-11, the main Household Budget Survey (HABE/HBS) file is provided by <a href= https://pubs.acs.org/doi/full/10.1021/acs.est.8b01452>A.Froemelt</a>. It is modified based on original HBS (HABE) data that we <a href = https://www.bfs.admin.ch/bfs/en/home/statistics/economic-social-situation-population/surveys/hbs.html>obtained from the Federal Statistical Office of Switzerland</a>. It is further modified in the preprocessing section of the <a href='https://ifu-esd-srv-4.ethz.ch/jupyterhub/user/shinder/notebooks/0_work/Models/1_consumption_movement/3_Rebound/5_final_model/rebound-model/rebound_model.ipynb'>original rebound code</a>
# - (2) **dependent_indices** -> based on the HBS column indices, this file lists the relevant consumption expense parameters which are predicted
# - (3) **independent_indices** -> the HBS column indices which define the household socio-economic properties
# - (4) **target_data** -> Selects the target dataset to predict the results. For this project on housing industry, it is the partner dataset 'ABZ', 'SCHL' or 'SM' (and here takes the input from the <a href='#rebound-abm'>ABMs output</a>)
#
# Model parameters
# - (1) **iter_n** -> number of iterations of model runs
# - (2) **model_name** -> Random Forest (RF) or ANN (Artificial Neural Network)
# - (3) **income_groups** -> for postprocessing, the number of income groups on which the result is desired
# +
# all the preprocessed files (training data) in the original code (check link above to the code to generate them)
habe_file_folder='model_rebound/preprocessing'
# setting model parameters
iter_n=2
model_name='RF' # 'RF' or 'ANN'
income_groups=5
scenarios = {'baseline_2011':500}
target_data = 'ABZ_ABM'
target_data_file= pd.read_csv('raw/1_rebound_household_data.csv',sep=',')
pd.DataFrame.to_csv(target_data_file,'model_rebound/target_'+target_data+'.csv',sep=',',index=False)
idx_column_savings_cons = 289 #289 = 'net_rent_and_mortgage_interest_of_principal_residence'
# +
dependent_indices = 'model_rebound/dependent_housing.csv'
dependent_indices_pd = pd.read_csv(dependent_indices, delimiter=',', encoding='ISO-8859-1')
dependent_indices_pd_name = pd.read_csv(dependent_indices, sep=',')["name"]
dependentsize = len(list(dependent_indices_pd_name))
independent_indices = 'model_rebound/independent.csv'
independent_indices_pd = pd.read_csv(independent_indices, delimiter=',', encoding='ISO-8859-1')
list_independent_columns = pd.read_csv(independent_indices, delimiter=',', encoding='ISO-8859-1')['name'].to_list()
list_dependent_columns = pd.read_csv(dependent_indices, delimiter=',', encoding='ISO-8859-1')['name'].to_list()
# -
# <p style='color:blue'>USER INPUT NEEDED: Choose whether to normalise or not</p>
# +
input = 'no-normalise'  # or 'normalise' (note: this name shadows the built-in input())
if input == 'normalise':
    def normalise_partner(i, key):
        pd_df_partner = pd.read_csv('model_rebound/target_'+target_data+'.csv', delimiter=',')
        df_complete = pd.read_csv('model_rebound/preprocessing/1_habe_rename_removeoutliers.csv', delimiter=',')
        pd_df_partner['disposable_income'] = pd_df_partner['disposable_income'] + i
        for colsss in list_independent_columns:
            min_colsss = df_complete[[colsss]].quantile([0.01]).values[0]
            max_colsss = df_complete[[colsss]].quantile([0.99]).values[0]
            pd_df_partner[[colsss]] = (pd_df_partner[[colsss]] - min_colsss) / (max_colsss - min_colsss)
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,30]<=1]
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,32]<=1]
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,33]>=0] #todo remove rows with normalisation over the range
        pd.DataFrame.to_csv(pd_df_partner, 'model_rebound/preprocessing/5_final_' + target_data +
                            '_independent_final_'+str(i)+'.csv', sep=',', index=False)
        return pd_df_partner
if input == 'no-normalise':
    def normalise_partner(i, key):
        pd_df_partner = pd.read_csv('model_rebound/target_'+target_data+'.csv', delimiter=',')
        df_complete = pd.read_csv('model_rebound/preprocessing/1_habe_rename_removeoutliers.csv', delimiter=',')
        pd_df_partner['disposable_income'] = pd_df_partner['disposable_income'] + i
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,30]<=1]
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,32]<=1]
        # pd_df_partner = pd_df_partner[pd_df_partner.iloc[:,33]>=0] #todo remove rows with normalisation over the range
        pd.DataFrame.to_csv(pd_df_partner, 'model_rebound/preprocessing/5_final_' + target_data + '_independent_final_'+str(i)+'.csv', sep=',', index=False)
        return pd_df_partner
for key in scenarios:
    list_incomechange = [0, scenarios[key]]
    for i in list_incomechange:
        df_normalise_partner_file = normalise_partner(i, key)
# -
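# The quantile-based min-max scaling used in the 'normalise' branch can be sketched as a small standalone function (note that target values outside the reference's 1st/99th percentiles fall outside [0, 1], which is what the commented-out filters guard against):

```python
import numpy as np
import pandas as pd

def quantile_minmax(target: pd.DataFrame, reference: pd.DataFrame) -> pd.DataFrame:
    """Rescale each column of target with the reference's 1st/99th percentiles."""
    lo = reference.quantile(0.01)
    hi = reference.quantile(0.99)
    return (target - lo) / (hi - lo)

# Toy reference dataset: 101 evenly spaced values between 0 and 100
ref = pd.DataFrame({"a": np.linspace(0.0, 100.0, 101)})
scaled = quantile_minmax(pd.DataFrame({"a": [1.0, 99.0]}), ref)
```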
# #### 1b.1. Model <a id = 'model-rebound'></a>
#
# <a href="#toc-rebound">back</a>
# +
def to_haushalts(values, id_ix=0):
    haushalts = dict()
    haushalt_ids = np.unique(values[:, id_ix])
    for haushalt_id in haushalt_ids:
        selection = values[:, id_ix] == haushalt_id
        haushalts[haushalt_id] = values[selection]
    return haushalts

def split_train_test(haushalts, length_training, month_name, row_in_chunk):
    train, test = list(), list()
    cut_point = int(0.8*length_training)  # declare cut_point as per the size of the imputed database #TODO check if this is too small
    print('Month/cluster and cut_point', month_name, cut_point)
    for k, rows in haushalts.items():
        # note: rows whose counter equals cut_point end up in neither split
        train_rows = rows[rows[:, row_in_chunk] < cut_point, :]
        test_rows = rows[rows[:, row_in_chunk] > cut_point, :]
        train.append(train_rows[:, :])
        test.append(test_rows[:, :])
    return train, test
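# A self-contained toy mirroring `to_haushalts`: rows are grouped into per-household arrays keyed by the id in column 0 (hypothetical data below):

```python
import numpy as np

def group_by_id(values, id_ix=0):
    """Group rows of a 2D array by the id found in column id_ix."""
    return {hid: values[values[:, id_ix] == hid]
            for hid in np.unique(values[:, id_ix])}

# Three toy rows for two households (id, value)
values = np.array([[1.0, 10.0], [2.0, 20.0], [1.0, 11.0]])
groups = group_by_id(values)
```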
# +
### NORMALISATION
if input == 'normalise':
    def df_habe_train_test(df, month_name, length_training):
        df = df.assign(id_split=list(range(df.shape[0])))
        train, test = split_train_test(to_haushalts(df.values), length_training, month_name, row_in_chunk=df.shape[1]-1)
        train_rows = np.array([row for rows in train for row in rows])
        test_rows = np.array([row for rows in test for row in rows])
        independent = list(range(0, independent_indices_pd.shape[0]))
        dependent = list(range(independent_indices_pd.shape[0]+1,
                               independent_indices_pd.shape[0]+dependent_indices_pd.shape[0]+1))
        trained_independent = train_rows[:, independent]
        trained_dependent = train_rows[:, dependent]
        test_independent = test_rows[:, independent]
        test_dependent = test_rows[:, dependent]
        ## OPTIONAL lines FOR CHECK - comment if not needed
        np.savetxt('model_rebound/preprocessing/trained_dependent_nonexp.csv', trained_dependent, delimiter=',')
        np.savetxt('model_rebound/preprocessing/trained_dependent.csv', np.expm1(trained_dependent), delimiter=',')
        np.savetxt('model_rebound/preprocessing/trained_independent.csv', trained_independent, delimiter=',')
        np.savetxt('model_rebound/preprocessing/test_dependent.csv', np.expm1(test_dependent), delimiter=',')
        np.savetxt('model_rebound/preprocessing/test_independent.csv', test_independent, delimiter=',')
        return trained_independent, trained_dependent, test_independent, test_dependent
    def df_partner_test(y):
        df_partner = pd.read_csv('model_rebound/preprocessing/5_final_' + target_data + '_independent_final_' + str(y) + '.csv',
                                 delimiter=',')
        length_training = df_partner.shape[0]
        train_partner, test_partner = split_train_test(to_haushalts(df_partner.values), length_training, month_name, 1)
        train_rows_partner = np.array([row for rows in train_partner for row in rows])
        new_independent = list(range(0, 39))  # number of columns of the independent parameters
        train_partner_independent = train_rows_partner[:, new_independent]
        ### Optional lines for CHECK - comment if not needed
        np.savetxt('model_rebound/preprocessing/train_partner_independent_' + model_name + '_' + str(y) + '.csv',
                   train_partner_independent, delimiter=',')
        return train_partner_independent
# NO-NORMALISATION
if input == 'no-normalise':
    def df_habe_train_test(df, month_name, length_training):
        df = df.assign(id_split=list(range(df.shape[0])))
        train, test = split_train_test(to_haushalts(df.values), length_training, month_name, row_in_chunk=df.shape[1]-1)
        train_rows = np.array([row for rows in train for row in rows])
        test_rows = np.array([row for rows in test for row in rows])
        independent = list(range(0, independent_indices_pd.shape[0]))
        dependent = list(range(independent_indices_pd.shape[0]+1,
                               independent_indices_pd.shape[0]+dependent_indices_pd.shape[0]+1))
        trained_independent = train_rows[:, independent]
        trained_dependent = train_rows[:, dependent]
        test_independent = test_rows[:, independent]
        test_dependent = test_rows[:, dependent]
        ## OPTIONAL lines FOR CHECK - comment if not needed
        # np.savetxt('raw/checks/trained_dependent_nonexp_'+str(month_name)+'.csv', trained_dependent, delimiter=',')
# np.savetxt('raw/checks/trained_independent_nonexp_'+str(month_name)+'.csv', trained_independent, delimiter=',')
np.savetxt('model_rebound/preprocessing/test_dependent_'+str(month_name)+'.csv', test_dependent,delimiter=',')
np.savetxt('model_rebound/preprocessing/test_independent_'+str(month_name)+'.csv', test_independent, delimiter=',')
return trained_independent,trained_dependent,test_independent,test_dependent
def df_partner_test(y):
df_partner = pd.read_csv('model_rebound/preprocessing/5_final_' + target_data + '_independent_final_' + str(y) + '.csv',
delimiter=',')
length_training = df_partner.shape[0]
train_partner, test_partner = split_train_test(to_haushalts(df_partner.values),
length_training,cluster_number,1)
train_rows_partner = np.array([row for rows in train_partner for row in rows])
new_independent = list(range(0, 39))
train_partner_independent = train_rows_partner[:, new_independent]
### Optional lines for CHECK - comment if not needed
np.savetxt('model_rebound/preprocessing/train_partner_independent_' + model_name + '_' + str(y) + '.csv',
train_partner_independent, delimiter=',')
return train_partner_independent
# -
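# In the normalise branch, predictions are passed through np.expm1 before being written out, which assumes the dependent columns were log1p-transformed during preprocessing (inferred from this notebook, not restated here). The round trip is exact and safe at zero:

```python
import numpy as np

expenditure = np.array([0.0, 120.5, 3400.0])  # mock monthly amounts
normalised = np.log1p(expenditure)            # log(1 + x): defined at x = 0
recovered = np.expm1(normalised)              # exact inverse of log1p

print(np.allclose(recovered, expenditure))    # → True
```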
# #### Prediction
# +
## NORMALISATION
if input =='normalise':
def fit_predict_cluster(i,y,cluster_number,key):
df = pd.read_csv('model_rebound/preprocessing/4_habe_deseasonal_cluster_'+str(cluster_number)+'_normalised.csv',
                         delimiter=',', error_bad_lines=False, encoding='ISO-8859-1')
length_training = df.shape[0]
trained_independent, trained_dependent, test_independent, test_dependent = df_habe_train_test(df,
str(cluster_number),
length_training)
train_partner_independent = df_partner_test(y)
if model_name == 'ANN':
estimator = KerasRegressor(build_fn=ANN)
estimator.fit(trained_independent, trained_dependent, epochs=100, batch_size=5, verbose=0)
### PREDICTION FROM HERE
prediction_nn = estimator.predict(train_partner_independent)
prediction_nn_denormalised = np.expm1(prediction_nn)
np.savetxt('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) + '.csv', prediction_nn_denormalised, delimiter=',')
### TEST PREDICTION
prediction_nn_test = estimator.predict(test_independent)
prediction_nn_test_denormalised = np.expm1(prediction_nn_test)
np.savetxt('model_rebound/postprocessing/predicted_test' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) + '.csv', prediction_nn_test_denormalised, delimiter=',')
### CROSS VALIDATION FROM HERE
        kfold = KFold(n_splits=10, shuffle=True, random_state=12) # shuffle=True is required for random_state to take effect
results1 = cross_val_score(estimator, test_independent, test_dependent, cv=kfold)
print("Results_test: %.2f (%.2f)" % (results1.mean(), results1.std()))
if model_name == 'RF':
estimator = sko.MultiOutputRegressor(RandomForestRegressor(n_estimators=100, max_features=39, random_state=30))
estimator.fit(trained_independent, trained_dependent)
### PREDICTION FROM HERE
prediction_nn = estimator.predict(train_partner_independent)
prediction_nn_denormalised = np.expm1(prediction_nn)
np.savetxt('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) + '.csv', prediction_nn_denormalised, delimiter=',')
### TEST PREDICTION
prediction_nn_test = estimator.predict(test_independent)
prediction_nn_test_denormalised = np.expm1(prediction_nn_test)
np.savetxt('model_rebound/postprocessing/predicted_test' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) + '.csv', prediction_nn_test_denormalised, delimiter=',')
#### CROSS VALIDATION FROM HERE
        kfold = KFold(n_splits=10, shuffle=True, random_state=12) # shuffle=True is required for random_state to take effect
# results1 = cross_val_score(estimator, test_independent, test_dependent, cv=kfold)
results2 = r2_score(test_dependent,prediction_nn_test)
results3 = mean_squared_error(test_dependent,prediction_nn_test)
results4 = explained_variance_score(test_dependent,prediction_nn_test)
# print("cross_val_score: %.2f (%.2f)" % (results1.mean(), results1.std()))
print("r2_score: %.2f " % results2)
print("mean_squared_error: %.2f " % results3)
print("explained_variance_score: %.2f " % results4)
### FOR NO NORMALISATION
if input =='no-normalise':
def fit_predict_cluster(i,y,cluster_number,key):
df_non_normalised = pd.read_csv('model_rebound/preprocessing/4_habe_deseasonal_cluster_'+
str(cluster_number)+ '_short.csv', delimiter=',',
                                        error_bad_lines=False, encoding='ISO-8859-1')
length_training = df_non_normalised.shape[0]
trained_independent, trained_dependent, test_independent, test_dependent = df_habe_train_test(df_non_normalised,
str(cluster_number),
length_training)
train_partner_independent = df_partner_test(y)
### Additional for the HBS test data subset
        # test_new_independent = df_test(y,1) # choosing just one cluster here
# sratified_independent = df_stratified_test(y)
if model_name == 'ANN':
estimator = KerasRegressor(build_fn=ANN)
estimator.fit(trained_independent, trained_dependent, epochs=100, batch_size=5, verbose=0)
### PREDICTION FROM HERE
prediction_nn = estimator.predict(train_partner_independent)
np.savetxt('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) +'.csv', prediction_nn, delimiter=',')
### TEST PREDICTION
prediction_nn_test = estimator.predict(test_independent)
np.savetxt('model_rebound/postprocessing/predicted_test_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) +'.csv', prediction_nn_test, delimiter=',')
### CROSS VALIDATION FROM HERE
        kfold = KFold(n_splits=10, shuffle=True, random_state=12) # shuffle=True is required for random_state to take effect
results1 = cross_val_score(estimator, test_independent, test_dependent, cv=kfold)
print("Results_test: %.2f (%.2f)" % (results1.mean(), results1.std()))
if model_name == 'RF':
estimator = sko.MultiOutputRegressor(RandomForestRegressor(n_estimators=100, max_features=39, random_state=30))
estimator.fit(trained_independent, trained_dependent)
### PREDICTION FROM HERE
prediction_nn = estimator.predict(train_partner_independent)
np.savetxt('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) +'.csv', prediction_nn, delimiter=',')
### TEST PREDICTION
prediction_nn_test = estimator.predict(test_independent)
np.savetxt('model_rebound/postprocessing/predicted_test_' + model_name + '_' + str(y) + '_' + str(i)
+ '_' + str(cluster_number) +'.csv', prediction_nn_test, delimiter=',')
#### CROSS VALIDATION FROM HERE
        kfold = KFold(n_splits=10, shuffle=True, random_state=12) # shuffle=True is required for random_state to take effect
# results1 = cross_val_score(estimator, test_independent, test_dependent, cv=kfold)
results2 = r2_score(test_dependent,prediction_nn_test)
results3 = mean_squared_error(test_dependent,prediction_nn_test)
results4 = explained_variance_score(test_dependent,prediction_nn_test)
# print("cross_val_score: %.2f (%.2f)" % (results1.mean(), results1.std()))
# print("r2_score: %.2f " % results2)
print("mean_squared_error: %.2f " % results3)
print("explained_variance_score: %.2f " % results4)
# -
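# The fit/evaluate pattern above, condensed on synthetic data: a multi-output random forest is fit, held-out predictions are scored with r2_score and mean_squared_error, and KFold needs shuffle=True for its random_state to take effect. Hyperparameters here are illustrative, not the tuned ones used in this notebook.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
# two dependent columns driven by the first two features, plus noise
Y = X[:, :2] @ np.array([[1.0, 0.5], [0.3, 2.0]]) + 0.1 * rng.normal(size=(120, 2))

estimator = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=30))
estimator.fit(X[:80], Y[:80])
prediction = estimator.predict(X[80:])

print("r2_score: %.2f" % r2_score(Y[80:], prediction))
print("mean_squared_error: %.2f" % mean_squared_error(Y[80:], prediction))

kfold = KFold(n_splits=5, shuffle=True, random_state=12)  # shuffle=True activates random_state
results = cross_val_score(estimator, X, Y, cv=kfold)
print("cross_val_score: %.2f (%.2f)" % (results.mean(), results.std()))
```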
# CLUSTER of MONTHS - PREDICTIONS
cluster_number_length = 7
for cluster_number in list(range(1,cluster_number_length+1)):
print(cluster_number)
for j in range(0, iter_n):
for key in scenarios:
list_incomechange=[0,scenarios[key]]
for y in list_incomechange:
fit_predict_cluster(j,y,cluster_number,key)
# #### 1b.2. Postprocessing <a id = 'post-rebound'></a>
#
# <a href="#toc-rebound">back</a>
# +
df_habe_outliers = pd.read_csv('model_rebound/preprocessing/1_habe_rename_removeoutliers.csv', delimiter =',')
def average_pandas_cluster(y,cluster_number,key):
df_all = []
df_trained_partner = pd.read_csv('model_rebound/preprocessing/train_partner_independent_'+
model_name+'_'+str(y)+'.csv')
for i in range(0,iter_n):
df = pd.read_csv('model_rebound/postprocessing/predicted_' + model_name + '_' +
str(y) + '_' + str(i) + '_' +
str(cluster_number) + '.csv', delimiter = ',', header=None)
df_all.append(df)
glued = pd.concat(df_all, axis=1, keys=list(map(chr,range(97,97+iter_n))))
glued = glued.swaplevel(0, 1, axis=1)
glued = glued.groupby(level=0, axis=1).mean()
glued_new = glued.reindex(columns=df_all[0].columns)
max_income = df_habe_outliers[['disposable_income']].quantile([0.99]).values[0]
min_income = df_habe_outliers[['disposable_income']].quantile([0.01]).values[0]
glued_new['income'] = df_trained_partner[df_trained_partner.columns[-1]]
pd.DataFrame.to_csv(glued_new, 'model_rebound/postprocessing/predicted_' + model_name + '_' + str(y)
+ '_'+str(cluster_number)+'.csv', sep=',',header=None,index=False)
# -
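# The concat/swaplevel/groupby chain in average_pandas_cluster takes the element-wise mean over the iteration frames. A toy version with two mock "iterations" (the transpose form avoids the deprecated axis=1 groupby but is otherwise equivalent):

```python
import pandas as pd

df_a = pd.DataFrame({0: [1.0, 3.0], 1: [10.0, 30.0]})  # iteration "a"
df_b = pd.DataFrame({0: [3.0, 5.0], 1: [30.0, 50.0]})  # iteration "b"

glued = pd.concat([df_a, df_b], axis=1, keys=['a', 'b'])  # columns: (iteration, column)
glued = glued.swaplevel(0, 1, axis=1)                     # columns: (column, iteration)
means = glued.T.groupby(level=0).mean().T                 # mean over iterations per column
print(means)  # column 0 -> [2.0, 4.0], column 1 -> [20.0, 40.0]
```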
for key in scenarios:
list_incomechange=[0,scenarios[key]]
for y in list_incomechange:
for cluster_number in list(range(1,cluster_number_length+1)):
average_pandas_cluster(y,cluster_number,key)
def accumulate_categories_cluster(y,cluster_number):
df_income = pd.read_csv('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y)
+ '_'+str(cluster_number)+'.csv',
sep=',',header=None)
# df_income['household_size'] = df_income.iloc[:, [17]]
df_income['income'] = df_income.iloc[:, [16]]
df_income['food'] = df_income.iloc[:,[0,1,2]].sum(axis=1)
df_income['misc'] = df_income.iloc[:,[3,4]].sum(axis=1)
df_income['housing'] = df_income.iloc[:, [5, 6]].sum(axis=1)
df_income['services'] = df_income.iloc[:, [7, 8, 9 ]].sum(axis=1)
df_income['travel'] = df_income.iloc[:, [10, 11, 12, 13, 14]].sum(axis=1)
df_income['savings'] = df_income.iloc[:, [15]]
df_income = df_income[['income','food','misc','housing','services','travel','savings']]
pd.DataFrame.to_csv(df_income,
'model_rebound/postprocessing/predicted_' + model_name + '_' + str(y)
+ '_'+str(cluster_number)+'_aggregated.csv', sep=',',index=False)
return df_income
for key in scenarios:
list_incomechange=[0,scenarios[key]]
for y in list_incomechange:
for cluster_number in list(range(1,cluster_number_length+1)):
accumulate_categories_cluster(y,cluster_number)
# +
# aggregation of clusters
list_dfs_month=[]
for key in scenarios:
list_incomechange=[0,scenarios[key]]
for y in list_incomechange:
for cluster_number in list(range(1,cluster_number_length+1)):
pd_predicted_month = pd.read_csv('model_rebound/postprocessing/predicted_' + model_name + '_' + str(y)
+ '_'+str(cluster_number)+'_aggregated.csv', delimiter = ',')
list_dfs_month.append(pd_predicted_month)
df_concat = pd.concat(list_dfs_month,sort=False)
by_row_index = df_concat.groupby(df_concat.index)
df_means = by_row_index.mean()
pd.DataFrame.to_csv(df_means,'model_rebound/postprocessing/predicted_' + model_name + '_' + str(y) + '_' +
str(dependentsize) +'_aggregated.csv', sep=',',index=False)
# -
# #### Calculate rebounds
# +
list_dependent_columns = pd.read_csv(dependent_indices, delimiter=',', encoding='ISO-8859-1')['name'].to_list()
def difference_new():
for cluster_number in list(range(1,cluster_number_length+1)):
for key in scenarios:
list_incomechange=[0,scenarios[key]]
for i in range(0,iter_n):
                df_trained_partner = pd.read_csv('model_rebound/preprocessing/train_partner_independent_'+
                                             model_name+'_'+str(list_incomechange[1])+'.csv') # use the scenario's file; y is not defined in this scope
df_500 = pd.read_csv('model_rebound/postprocessing/predicted_' + model_name + '_'
+str(list_incomechange[1])+ '_'+str(i)
+ '_'+str(cluster_number)+'.csv', delimiter=',',header=None)
df_0 = pd.read_csv('model_rebound/postprocessing/predicted_' + model_name + '_0_'
+ str(i) + '_'+str(cluster_number)+ '.csv', delimiter=',',header=None)
df_500.columns = list_dependent_columns
df_0.columns = df_500.columns
df_diff = df_500-df_0
df_diff['disposable_income']=df_trained_partner[df_trained_partner.columns[-1]]
pd.DataFrame.to_csv(df_diff,'model_rebound/postprocessing/predicted_' + model_name
+ '_rebound_'+str(i)+ '_' + str(cluster_number) + '.csv',sep=',',index=False)
# -
difference_new()
def average_clusters(key):
df_all = []
for i in range(0,iter_n):
df = pd.read_csv('model_rebound/postprocessing/predicted_'+ model_name + '_rebound_' +
str(i)+ '_' + str(cluster_number)+'.csv',delimiter=',',index_col=None)
df_all.append(df)
df_concat = pd.concat(df_all,sort=False)
by_row_index = df_concat.groupby(df_concat.index)
df_means = by_row_index.mean()
pd.DataFrame.to_csv(df_means, 'model_rebound/postprocessing/predicted_'+model_name +'_rebound.csv',
sep=',',index=False)
for key in scenarios:
average_clusters(key)
def accumulate_categories(key):
df_income = pd.read_csv('model_rebound/postprocessing/predicted_'+model_name+ '_rebound.csv',delimiter=',')
# df_income['household_size'] = df_income.iloc[:, [17]]
df_income['income'] = df_income.iloc[:, [16]]
df_income['food'] = df_income.iloc[:,[0,1,2]].sum(axis=1)
df_income['misc'] = df_income.iloc[:,[3,4]].sum(axis=1)
df_income['housing'] = df_income.iloc[:, [5, 6]].sum(axis=1)
df_income['services'] = df_income.iloc[:, [7, 8, 9]].sum(axis=1)
df_income['travel'] = df_income.iloc[:, [10, 11, 12,13, 14]].sum(axis=1)
df_income['savings'] = df_income.iloc[:, [15]]
df_income = df_income[['income','food','misc','housing','services','travel','savings']]#'transfers','total_sum'
data[key]=list(df_income.mean())
if list(scenarios.keys()).index(key) == len(scenarios)-1:
df = pd.DataFrame(data, columns = [key for key in scenarios],
index=['income','food','misc','housing','services','travel','savings'])
print(df)
pd.DataFrame.to_csv(df, 'postprocessing/1b_rebound/rebound_results.csv', sep=',',index=True)
pd.DataFrame.to_csv(df_income,
'postprocessing/1b_rebound/predicted_'+model_name+ '_rebound_aggregated.csv',
sep=',',index=False)
data={}
for key in scenarios:
accumulate_categories(key)
# #### 4.Average LCA impacts <a id = 'lca-rebound'></a>
#
# <a href="#toc-rebound">back</a>
# ------------------------------------------------------------------------------------------------------------------------------
# ## 2. Material and renovation footprint of buildings<a id="toc-material-renovation"></a>
#
# <a href="#toc-main">back</a>
#
#
# TOC:
# - <a href = #material>2a. Material-based footprints</a>
# - <a href = #renovation>2b. Renovation-based footprints</a>
# ### 2a. Material-based footprint <a id="material"></a>
#
# <a href="#toc-material-renovation">back</a>
#
# TOC:<a id="toc-material"></a>
#
# - 2a.1. Building area and volume
# - <a href = #building-data>(Approach1)</a> Taking the building area data directly from the partners
# - <p style='color:red'><a href = #rene>(Approach2)</a> WIP: Mapping partners' to the building typology data (from <a href = 'https://www.sciencedirect.com/science/article/pii/S030626191731454X'>R<NAME> research</a>, and from <a href='https://www.bfs.admin.ch/bfs/en/home/registers/federal-register-buildings-dwellings.html'>Federal register of buildings</a>)</p>
# - <a href = #material-impact>2a.2. Associating to the material intensity and impacts</a> (The material intensity used here is derived from the <a href='https://www.nature.com/articles/s41597-019-0021-x'>research by <NAME> & <NAME></a> )
# - <a href = #apartment>2a.3. Getting the results down to the apartment area</a>
# **<p style="color:blue">USER INPUT NEEDED: CHOOSE <a href="#building-data">OPTION 1</a> OR <a href="#rene">OPTION 2</a></p>**
#
# ### 2a.1. Building area and volume <a id="building-data"></a>
#
# <a href="#toc-material">back</a>
# +
pd_dwelling_data = pd.read_csv('raw/1_dwelling_data.csv',sep=',')
pd_owner_building = pd_dwelling_data.drop_duplicates(subset=['building id'])
pd.DataFrame.to_csv(pd_owner_building,'raw/2_building_owners_data.csv', sep=',', encoding='ISO-8859-1')
pd_owner_building_area = pd_dwelling_data.groupby(['building id'], as_index=False).sum()
pd_owner_building_area=pd_owner_building_area[['building id','Building total apartment area']]
pd.DataFrame.to_csv(pd_owner_building_area,'postprocessing/2a_material/2_building_area.csv',
                    sep=',', encoding='ISO-8859-1')
#also rewrite the building area column of dwelling file
pd_dwelling_data=pd.merge(pd_dwelling_data,pd_owner_building_area,on='building id')
pd_dwelling_data=pd_dwelling_data.drop(['Building total apartment area_x'],axis=1)
pd.DataFrame.to_csv(pd_dwelling_data,'raw/1_dwelling_data.csv',sep=',')
pd_owner_building_area
pd_dwelling_data
# -
# **<a href="#rene">OPTION 2</a></p>** or jump to <a href="#material-impact">2.2. material footprint</a>
# ### 2a.1. Building area, height and volume (Buffat et al.'s model)<a id="rene"></a>
# **<p style='color:red'> WIP </p>**
#
# <a href="#toc-material">back</a>
# +
## TO DO - run the section below with GWS first and then this section with Rene's model to merge the volume
# (or make them work together)
# +
## merge apartments into buildings (drop the duplicate building id columns and drop the duplicate egids/coordinates)
def partner_dropduplicate_egid(partners):
for partner in partners:
        df_partner_raw_egid = pd.read_csv('raw/raw_partner_files/'+partner+'_EGIDEWID.csv',
                                          delimiter=',', error_bad_lines=False, encoding='ISO-8859-1')
        df_partner_raw_egid = df_partner_raw_egid.drop_duplicates(subset = ['EGID'])
        pd.DataFrame.to_csv(df_partner_raw_egid,'postprocessing/2a_material/2a_'+partner+'_dropduplicate_egid.csv',
                            sep=',', encoding='ISO-8859-1')
partner_dropduplicate_egid(['ABZ','SCHL'])
def partner_dropduplicate_coordinates(partners):
for partner in partners:
        df_partner_raw_coordinates = pd.read_csv('raw/raw_partner_files/'+partner+'_coordinates.csv',
                                                 delimiter=',', error_bad_lines=False, encoding='ISO-8859-1')
        df_partner_raw_coordinates = df_partner_raw_coordinates.drop_duplicates(subset = ['GKODES','GKODNS'])
        pd.DataFrame.to_csv(df_partner_raw_coordinates,'postprocessing/2a_material/2a_'+partner+'_dropduplicate_coordinates.csv',
                            sep=',', encoding='ISO-8859-1')
partner_dropduplicate_coordinates(['ABZ','SCHL','SM'])
# +
## optional to run - only run if the file is NOT present
def truncate_buildinginfo():
    df_buildinginfo_raw = pd.read_csv('raw/model_rene_buildinginfo/Buildinginfo.csv', delimiter=',',
                                      error_bad_lines=False, encoding='ISO-8859-1')
print(list(df_buildinginfo_raw.columns))
df_buildinginfo_raw = df_buildinginfo_raw[['btype','bid','bfsnr','x','y','elevation','area','volume',
'avg_height','perimeter']]
pd.DataFrame.to_csv(df_buildinginfo_raw, 'raw/model_rene_buildinginfo/Buildinginfo_truncated.csv')
truncate_buildinginfo()
# +
# attach to the owner ABM building-id files (based on the year and the partner, call the relevant 2c file to match)
def matchbuildinginfo_coordinates(partner,year):
    df_buildinginfo_raw = pd.read_csv('model_rene_buildinginfo/Buildinginfo_truncated.csv', delimiter=',',
                                      error_bad_lines=False, encoding='ISO-8859-1')
    pd_partner = pd.read_csv('raw/2c_'+ partner + '_' + str(year) + '_coordinates.csv', delimiter=',',
                             error_bad_lines=False, encoding='ISO-8859-1')
pd_material_partner = pd.merge(df_buildinginfo_raw,pd_partner,right_on=['GKODES','GKODNS'],left_on=['x','y'])
pd.DataFrame.to_csv(pd_material_partner,'raw/2d_'+ partner + '_' + str(year) + '_coordinates.csv')
def matchbuildinginfo_NN(partner,year):
    df_buildinginfo_raw = pd.read_csv('model_rene_buildinginfo/Buildinginfo_truncated.csv', delimiter=',',
                                      error_bad_lines=False, encoding='ISO-8859-1')
    pd_partner = pd.read_csv('raw/2c_'+ partner + '_' + str(year) + '_coordinates_NN.csv', delimiter=',',
                             error_bad_lines=False, encoding='ISO-8859-1')
pd_material_partner = pd.merge(df_buildinginfo_raw,pd_partner,right_on=['gkodx_new','gkody_new'],left_on=['x','y'])
pd.DataFrame.to_csv(pd_material_partner,'raw/2d_'+ partner + '_' + str(year) + '_coordinates_NN.csv')
# attach to the owner ABM building-id files (based on the year and the partner, call the relevant 2c file to match)
def matchbuildinginfo_egid(partner,year):
    df_buildinginfo_raw = pd.read_csv('model_rene_buildinginfo/Buildinginfo_truncated.csv', delimiter=',',
                                      error_bad_lines=False, encoding='ISO-8859-1')
    #TODO add the merged version from coordinates
    pd_partner = pd.read_csv('raw/2c_'+ partner + '_' + str(year) + '_egid.csv', delimiter=',',
                             error_bad_lines=False, encoding='ISO-8859-1')
pd_material_partner = pd.merge(df_buildinginfo_raw,pd_partner,right_on=['egid'],left_on=['EGID_GWS'])
pd.DataFrame.to_csv(pd_material_partner,'raw/2d_'+ partner + '_' + str(year) + '_egid.csv')
for partner in ('ABZ', 'SCHL'):
for year in (2015, 2016, 2017):
matchbuildinginfo_coordinates(partner,year)
for partner in (['SM']):
for year in (2013,2014,2015,2016,2017):
matchbuildinginfo_NN(partner,year)
for partner in ('ABZ', 'SCHL'):
for year in (2013,2014):
matchbuildinginfo_egid(partner,year)
# -
# #### GWS mapping to partner data (for other parameters like occupants, etc.)<a id="gws"></a>
#
# <a href="#toc-material">back</a>
# +
# attach the EGID/coordinate data to the files from ABM owner part (with building ids)
def partner_ownerABM_attach(partners,matchingstyle):
for partner in partners:
        df_partner_raw_drop = pd.read_csv('postprocessing/2a_material/2a_'+partner+'_dropduplicate_'+matchingstyle+'.csv',
                                          error_bad_lines=False, encoding='ISO-8859-1')
if partner == 'ABZ':
pd_owner_building_egid = pd.merge(pd_owner_building,df_partner_raw_drop,
left_on=['Settlment id', 'Street address'],
right_on=['Immobilien-Nr.', 'Hauseingang'])
## ********TODO - check with Margarita if these are the same settlement ids*************
if partner == 'SCHL':
pd_owner_building_egid = pd.merge(pd_owner_building,df_partner_raw_drop,
left_on=['Settlment id', 'Street address'],
right_on=['z', 'Adresse'])
## ********TODO - check with Margarita if these are the same settlement ids*************
if partner == 'SM':
pd_owner_building_egid = pd.merge(pd_owner_building,df_partner_raw_drop,
left_on=['Settlment id', 'Street address'],
right_on=['ID', 'Corrected Address'])
        pd.DataFrame.to_csv(pd_owner_building_egid, 'postprocessing/2a_material/2b_'+partner+'_ownerABM_'+matchingstyle+'.csv',
                            sep=',', encoding='ISO-8859-1')
partner_ownerABM_attach(['ABZ','SCHL','SM'],'coordinates')
partner_ownerABM_attach(['ABZ','SCHL'],'egid')
# +
## attach the building properties from GWS data
## match with egid values
def match_by_egid(partner,year):
    df_GWS_GEB = pd.read_csv('raw/raw_GWS/GWS'+str(year)+'_GEB.txt', delimiter=';',
                             error_bad_lines=False, encoding='ISO-8859-1')
    df_partner_raw = pd.read_csv('raw/2b_'+partner+'_ownerABM_egid.csv', delimiter=',',
                                 error_bad_lines=False, encoding='ISO-8859-1')
    df_GWS_partner_egid = pd.merge(df_partner_raw, df_GWS_GEB, left_on=['EGID_GWS'], right_on=['egid'])
    pd.DataFrame.to_csv(df_GWS_partner_egid, 'raw/2c_' + partner + '_'+ str(year)+'_egid.csv',
                        sep=',', encoding='ISO-8859-1')
## optional part - to increase the speed - and only check for specific cantons (NOTE: not applicable for SM)
if partner =='ABZ':
df_GWS_GEB=df_GWS_GEB[df_GWS_GEB['GDEKT'] == 'VD']
if partner =='SCHL':
df_GWS_GEB = df_GWS_GEB[df_GWS_GEB['GDEKT'] == 'ZH']
df_GWS_partner_egid = pd.merge(df_partner_raw, df_GWS_GEB, left_on=['EGID_GWS'], right_on=['egid'])
        pd.DataFrame.to_csv(df_GWS_partner_egid,'raw/2c_'+partner+'_'+str(year)+'_egid_canton.csv',
                            sep=',', encoding='ISO-8859-1')
# -
# only for years 2013, 2014 and ABZ, SCHL - match the EGID values; for the rest, match by (nearest-neighbor) geocoordinates
for partner in ('ABZ', 'SCHL'):
for year in (2013,2014):
match_by_egid(partner,year)
# +
## match with coordinate values
def matchcoordinates(partner,year):
    df_GWS_GEB = pd.read_csv('raw/raw_GWS/GWS'+str(year)+'_GEB.txt', delimiter=';',
                             error_bad_lines=False, encoding='ISO-8859-1')
    df_partner_raw = pd.read_csv('raw/2b_'+partner+'_ownerABM_coordinates.csv', delimiter=',',
                                 error_bad_lines=False, encoding='ISO-8859-1')
df_partner_raw["GKODES"] = df_partner_raw["GKODES"].astype('int64')
df_partner_raw["GKODNS"] = df_partner_raw["GKODNS"].astype('int64')
df_GWS_partner_coordinates = pd.merge(df_GWS_GEB, df_partner_raw, left_on=['GKODES', 'GKODNS'],
right_on=['GKODES', 'GKODNS'])
    pd.DataFrame.to_csv(df_GWS_partner_coordinates, 'raw/2c_'+ partner + '_' + str(year) + '_coordinates.csv',
                        sep=',', encoding='ISO-8859-1')
# +
# for year 2015,2016,2017 and for ABZ,SCHl - match by geocoordinates
for partner in ('ABZ', 'SCHL'):
for year in (2015, 2016, 2017):
matchcoordinates(partner,year)
# +
def closest_node(node, nodes):
nodes = np.asarray(nodes)
dist_2 = np.sum((nodes - node)**2, axis=1)
return np.argmin(dist_2)
def match_nn(partner,year):
    df_GWS_GEB = pd.read_csv('raw/raw_GWS/GWS'+str(year)+'_GEB.txt', delimiter=';',
                             error_bad_lines=False, encoding='ISO-8859-1')
    df_partner_GWS_raw = pd.read_csv('raw/2b_'+partner+'_ownerABM_coordinates.csv', delimiter=',',
                                     error_bad_lines=False, encoding='ISO-8859-1')
nodes = []
for i in range(0, df_GWS_GEB.shape[0]):
node = np.array([df_GWS_GEB.iloc[i]["GKODES"], df_GWS_GEB.iloc[i]["GKODNS"]])
nodes.append(node)
for i in range(df_partner_GWS_raw.shape[0]):
node = np.array([df_partner_GWS_raw.iloc[i]["GKODES"],
df_partner_GWS_raw.iloc[i]["GKODNS"]])
x = closest_node(node, nodes)
df_partner_GWS_raw.at[i,"min_distance_index"] = closest_node(node, nodes)
df_partner_GWS_raw.at[i,"gkodx_new"] = df_GWS_GEB.iloc[x]["GKODES"]
df_partner_GWS_raw.at[i,"gkody_new"] = df_GWS_GEB.iloc[x]["GKODNS"]
# df_pd_partner_coordinates.at[i, "GAPTO_"+str(year)] = df_GWS_GEB.iloc[x]["GAPTO"] #to calculate occupants in that year
# df_pd_partner_coordinates["garea_new"] = df_GWS_GEB.iloc[j]["garea"] #to calculate size of building in that year
    pd.DataFrame.to_csv(df_partner_GWS_raw,'raw/2c_'+partner+'_'+str(year)+'_coordinates_NN.csv',
                        sep=',', encoding='ISO-8859-1')
# -
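# closest_node above does a brute-force nearest-neighbour lookup: squared Euclidean distance from one partner coordinate to every GWS building, then argmin. A sketch on mock LV95-style coordinates (for the full register, a k-d tree such as scipy.spatial.cKDTree would scale better, but the logic is the same):

```python
import numpy as np

def closest_node(node, nodes):
    nodes = np.asarray(nodes)
    dist_2 = np.sum((nodes - node) ** 2, axis=1)  # squared distances; no sqrt needed for argmin
    return np.argmin(dist_2)

gws = np.array([[2683000, 1247000],   # mock GWS building coordinates
                [2683120, 1247050],
                [2499000, 1118000]])

partner = np.array([2683110, 1247040])  # partner building, slightly off-grid
print(closest_node(partner, gws))       # → 1 (second GWS entry is nearest)
```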
# for SM (all years) match by nearest neighbor geocoordinates
for partner in (['SM']):
for year in (2013,2014,2015,2016,2017):
match_nn(partner,year)
# ### 2a.2. Material footprint of buildings<a id="material-impact"></a>
#
# <a href="#toc-material">back</a>
## multiply building volume with material intensity and get material weights
pd_material_impact = pd.read_csv('model_material/intensity_material.csv',sep=',',index_col=0)
pd_material_impact = pd_material_impact.T
pd_material_impact_columns=list(pd_material_impact.columns)
print(list(pd_material_impact.columns),'\n',list(pd_material_impact.loc['kgco2-eq/ m2']))
# +
## get the material weight per apartment based on apartment volume
pd_material = pd.read_csv('postprocessing/2a_material/2_building_area.csv',sep=',')
for i in pd_material_impact_columns:
pd_material[i]=pd_material['Building total apartment area']*pd_material_impact.loc['kgco2-eq/ m2'][i]
pd_material['total_material_fp']=pd_material[pd_material_impact_columns].sum(axis=1)
pd.DataFrame.to_csv(pd_material.drop('Unnamed: 0',axis=1),'postprocessing/2a_material/2_material_building_footprint.csv',sep=',')
pd_material.drop('Unnamed: 0',axis=1)
# -
# ### 2a.3. Material footprint of apartments<a id="apartment"></a>
#
# <a href="#toc-material">back</a>
## get the material footprint (apartment) based on building vs apartment area
pd_material_apart= pd.read_csv('raw/1_dwelling_data.csv', sep=',', encoding='ISO-8859-1')
pd_owner_building_area= pd.read_csv('postprocessing/2a_material/2_building_area.csv',
                                    sep=',', encoding='ISO-8859-1')
for i in pd_material_impact_columns:
pd_material_apart[i] =pd_material_apart['Dwelling area']*pd_material_impact.loc['kgco2-eq/ m2'][i]
pd_material_apart['total_material_fp']=pd_material_apart[pd_material_impact_columns].sum(axis=1)
pd.DataFrame.to_csv(pd_material_apart.drop('Unnamed: 0',axis=1),
'postprocessing/2a_material/2_material_apartment_footprint.csv',sep=',')
pd_material_apart
# get material footprint per year
pd_material_apart['total_material_fp_year'] = pd_material_apart['total_material_fp'] / (2020 - pd_material_apart['Building Construction year'])
pd.DataFrame.to_csv(pd_material_apart.drop('Unnamed: 0',axis=1),
'postprocessing/2a_material/2_material_apartment_footprint.csv',sep=',')
pd_material_apart
pd_material_apart[['total_material_fp_year']].median()
pd_material_apart=pd_material_apart[(pd_material_apart[['total_material_fp_year']] > -300).all(1)]
pd_material_apart[['total_material_fp_year']].median()
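# The annualisation above divides each apartment's embodied footprint by the building age relative to 2020, so construction years after 2020 (data errors) produce negative values; the > -300 filter removes those outliers. Worked on mock numbers:

```python
import pandas as pd

apartments = pd.DataFrame({
    'total_material_fp': [40000.0, 90000.0, 10000.0],   # kg CO2-eq, mock values
    'Building Construction year': [1980, 2000, 2025],   # 2025 is an erroneous entry
})

apartments['total_material_fp_year'] = (
    apartments['total_material_fp'] / (2020 - apartments['Building Construction year'])
)
print(apartments['total_material_fp_year'].tolist())  # → [1000.0, 4500.0, -2000.0]

clean = apartments[apartments['total_material_fp_year'] > -300]
print(len(clean))  # → 2 (the negative outlier is dropped)
```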
# -----------------------------------------------------------------------------------------------------------------------------
# ## 2b. Renovation footprints <a id="renovation"></a>
#
# <a href="#toc-material-renovation">back</a>
#
# ## 3. Energy demand and footprint <a id="energy"></a>
#
# <a href="#toc-main">back</a>
#
# TOC:<a id="toc-energy"></a>
# - <a href="#heat">Step 3.a. Heat energy based footprint of apartment</a>
# - <a href="#ww">Step 3.b. Warm water based footprint of apartment</a>
# ### 3.a. Heat energy based footprint <a id="heat"></a>
#
# The model, called <a href="https://www.scopus.com/record/display.uri?eid=2-s2.0-85076259074&origin=inward&txGid=67727348b41d9ae4dc4b55e19b8b2646">BEEF</a> (building energy environmental footprint), was developed by <NAME> and <NAME>.
#
# <a href="#toc-energy">back</a>
# <p style='color:red'>WIP: Running the BEEF model directly based on the inputs</p>
#
# - To run the BEEF model, first activate its environment (beef_model/bin/activate).
# - As a quick fix, the existing run results and outputs are passed in directly here and a subset of the results is used.
# +
## pass the building ids and the building details to BEEF model (get heating demand)
df_beef = pd.read_csv('model_beef/x-model_beef_energy/Beef_heatingdemand.csv', delimiter=',',
                      error_bad_lines=False, encoding='ISO-8859-1')
df_beef = df_beef[['btype', 'bid', 'bfsnr', 'ebf', 'egid', 'x', 'y', 'wall_method','heatdemand_y_median']]
df_beef['total_heatdemand_median'] = df_beef['ebf']*df_beef['heatdemand_y_median']
pd.DataFrame.to_csv(df_beef, 'model_beef/x-model_beef_energy/Buildinginfo_truncated.csv')
print(df_beef.head())
# +
## match the partner egids and geocoordinates with the beef model results
pd_partners_EGID_coordinates = pd.read_csv('raw/raw_partner_files/Partner_EGID_coordinates_'+str(Strategy_no)+'.csv', delimiter=',',
                                           error_bad_lines=False, encoding='ISO-8859-1')
#write file by matching egids for buildings
pd_energy_partner = pd.merge(df_beef,pd_partners_EGID_coordinates,left_on=['egid'],right_on=['EGID'])
pd_energy_partner=pd_energy_partner.rename(columns={'total_heatdemand_median':'heatdemand_median_egid'})
print('egid_matching',pd_energy_partner.shape)
pd.DataFrame.to_csv(pd_energy_partner,'postprocessing/2b_energy/2b_energy_demand_egid.csv',sep=',', encoding='ISO-8859-1')
#write file by matching coordinates fro buildings
pd_energy_partner_coordinates= pd.merge(df_beef,pd_partners_EGID_coordinates,left_on=['x','y'],right_on=['GKODES','GKODNS'])
pd_energy_partner_coordinates= pd_energy_partner_coordinates.rename(columns={'total_heatdemand_median':'heatdemand_median_coordinates'})
print('coordinate_matching',pd_energy_partner_coordinates.shape)
pd.DataFrame.to_csv(pd_energy_partner_coordinates,'postprocessing/2b_energy/2b_energy_demand_coordinates.csv',
                    sep=',', encoding='ISO-8859-1')
#write final file with all energy heating demands for buildings
pd_energy = pd.concat([pd_energy_partner,pd_energy_partner_coordinates])
pd_energy['heatdemand']=pd_energy[['heatdemand_median_egid','heatdemand_median_coordinates']].max(axis=1)
pd_energy=pd_energy.drop_duplicates(subset=['Partner', 'Immobilien-Nr.', 'EGID','bid','bfsnr','wall_method'])
print('all_matching',pd_energy.shape)
pd.DataFrame.to_csv(pd_energy,'postprocessing/2b_energy/2b_energy_demand.csv',sep=',', encoding='ISO-8859-1')
# add heat demand per settlement
pd_energy_settlement = pd_energy.groupby(['Settlement ID'],as_index=False)['heatdemand'].sum()
print('all_matching_settlements',pd_energy_settlement.shape)
pd.DataFrame.to_csv(pd_energy_settlement,'postprocessing/2b_energy/2b_energy_demand_settlement.csv',
                    sep=',', encoding='ISO-8859-1')
pd_energy_settlement
# +
## get the total area of the settlement and allocate it to the dwellings
pd_energy_apart= pd.read_csv('raw/1_dwelling_data.csv', sep=',', encoding='ISO-8859-1')
#merge the dwelling data with the heat demand per settlement
pd_energy_apart=pd.merge(pd_energy_apart,pd_energy_settlement,left_on=['Settlment id'],right_on=['Settlement ID'])
pd_energy_apart['heatdemand_building']=pd_energy_apart.groupby(['building id'],as_index=False)['heatdemand'].transform('mean')
pd_energy_apart['Building total apartment area']=pd_energy_apart.groupby([
'building id'],as_index=False)['Dwelling area'].transform('sum')
pd_energy_apart['heatdemand_m2']=pd_energy_apart['heatdemand_building']/pd_energy_apart['Building total apartment area']*5
pd_energy_apart=pd_energy_apart.drop(['Unnamed: 0','Unnamed: 0.1','Building total apartment area_y'],axis=1)
# get the heatdemand per apartment
pd_energy_apart['heatdemand_final_kWh']=0.28*pd_energy_apart['heatdemand_m2']*pd_energy_apart['Dwelling area']
pd.DataFrame.to_csv(pd_energy_apart,'postprocessing/2b_energy/2_energy_demand_final.csv',sep=',', encoding='ISO-8859-1')
pd_energy_apart
# -
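# The cells above allocate building-level heat demand to individual dwellings in proportion to floor area, using `groupby(...).transform` to broadcast building totals back onto dwelling rows. A minimal sketch of that pattern — the numbers and column names below are illustrative, not project data:

```python
import pandas as pd

# Toy dwelling table: two buildings with known building-level heat demand (kWh).
df = pd.DataFrame({
    "building_id": ["b1", "b1", "b2"],
    "dwelling_area": [50.0, 100.0, 80.0],
    "heatdemand_building": [3000.0, 3000.0, 1600.0],
})

# Total dwelling area per building, broadcast back onto every dwelling row.
df["building_total_area"] = df.groupby("building_id")["dwelling_area"].transform("sum")

# Each dwelling gets the building demand weighted by its share of the area.
df["heatdemand_dwelling"] = (
    df["heatdemand_building"] * df["dwelling_area"] / df["building_total_area"]
)
print(df["heatdemand_dwelling"].tolist())  # [1000.0, 2000.0, 1600.0]
```

# Because `transform` returns a result aligned to the original rows, the allocation needs no merge back.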
# #### Attach the energy source <a id='source'></a>
# <p style='color:blue'> USER INPUT NEEDED: <a href="https://docs.google.com/spreadsheets/d/1viAxbRDI8qwE4RVF9LEO8MrN89zV4Y7WYTqQZ0eatTE/edit#gid=1766015530">Change the strategy no. based on the Forum Thun Strategies</a></p>
# +
## attach the heating source and multiply the demand with relevant energy source footprint
pd_partners_EGID_energy = pd.read_csv('raw/raw_partner_files/Partner_EGID_coordinates_'+str(Strategy_no)+'.csv', delimiter=',',
                                      error_bad_lines=False, encoding='ISO-8859-1')[['Settlement ID','heat_source','impact_kwh']]
pd_energy_apart= pd.read_csv('postprocessing/2b_energy/2_energy_demand_final.csv', delimiter=',',
                             error_bad_lines=False, encoding='ISO-8859-1')
pd_energy_apart_fp=pd.merge(pd_energy_apart,pd_partners_EGID_energy,on=['Settlement ID'])
pd_energy_apart_fp['total_energy_fp']=pd_energy_apart_fp['heatdemand_final_kWh']*pd_energy_apart_fp['impact_kwh']
pd.DataFrame.to_csv(pd_energy_apart_fp,'postprocessing/2b_energy/2_energy_apartment_footprint.csv')
pd_energy_apart_fp
# -
pd_energy_apart_fp.groupby(['heat_source'], as_index=False)['total_energy_fp'].mean()
pd_energy_apart_fp[['total_energy_fp']].median()
# ### 3.b. Warm water based footprint<a id="ww"></a>
#
# <a href="#toc-energy">back</a>
# +
## attach the warm water requirements from the partner data
# +
## estimate the warm water requirement based on the occupants
# +
## convert to the apartment data based on the occupants
# -
## get the energy footprint (apartment) - multiply based on the warm water source
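# The four warm-water stub cells above can be sketched end to end. Everything below is an assumption for illustration only — the occupant column `char_nopers`, the per-person demand `WW_KWH_PER_PERSON`, and the source intensity `impact_kwh_ww` are placeholders, not partner data:

```python
import pandas as pd

# Hypothetical apartments with occupant counts (column name assumed).
apartments = pd.DataFrame({
    "dwelling_id": [1, 2, 3],
    "char_nopers": [1, 2, 4],  # occupants per apartment
})

# Assumed annual warm-water demand per person (kWh) — not a measured value.
WW_KWH_PER_PERSON = 800.0
apartments["ww_demand_kwh"] = apartments["char_nopers"] * WW_KWH_PER_PERSON

# Assumed footprint intensity of the warm-water source (kg CO2-eq per kWh).
impact_kwh_ww = 0.2
apartments["total_ww_fp"] = apartments["ww_demand_kwh"] * impact_kwh_ww
print(apartments["total_ww_fp"].tolist())  # [160.0, 320.0, 640.0]
```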
# ## 4. Integrate all the three footprints<a id="total_impacts"></a>
#
# <a href="#toc-main">back</a>
# +
import pandas as pd
pd_occupant_footprint = pd.read_csv('postprocessing/1a_consumption/res_hhlca.csv',sep=',')
pd_material_footprint = pd.read_csv('postprocessing/2a_material/2_material_apartment_footprint.csv',sep=',')
pd_energy_footprint = pd.read_csv('postprocessing/2b_energy/2_energy_apartment_footprint.csv',sep=',')
pd_final = pd.merge(pd_occupant_footprint[['timestep', 'dwelling_id', 'dwelling_size', 'hhid',
'char_nopers','food_all','housing_all','transport_all','total_occupant_footprint']],
pd_material_footprint[['Time step', 'Dwelling id', 'Dwelling room', 'Dwelling area', 'Dwelling rent',
'building id', 'Settlment id', 'Street address', 'post code',
'city', 'canton', 'Building Construction year','total_material_fp_year']],
left_on=['timestep','dwelling_id'],
right_on=['Time step','Dwelling id'])
pd_final=pd.merge(pd_final,pd_energy_footprint[['Time step','Dwelling id','heat_source','Building total apartment area','total_energy_fp']],
on=['Time step','Dwelling id'])
pd_final = pd_final.drop(['Time step','Dwelling id','Dwelling area'],axis=1)
pd_final = pd_final[['timestep', 'dwelling_id', 'hhid', 'char_nopers',
'dwelling_size','Dwelling room', 'Dwelling rent',
'building id', 'Settlment id', 'Street address', 'post code', 'city', 'canton',
'heat_source', 'Building Construction year','Building total apartment area',
'food_all','housing_all','transport_all',
'total_occupant_footprint','total_material_fp_year', 'total_energy_fp']]
pd.DataFrame.to_csv(pd_final,'postprocessing/4_all_footprints_'+str(Strategy_no)+'.csv',sep=',')
pd_final
# -
pd_final[['total_occupant_footprint','total_material_fp_year', 'total_energy_fp']].median()
# for all strategies
# for i in [0,1,2,3,4]:
# pd_final=pd.read_csv('postprocessing/4_all_footprints_'+str(i)+'.csv')
pd_final.groupby(['dwelling_id'], as_index=False).mean()
pd_final.groupby(['heat_source'], as_index=False)['total_energy_fp'].mean()
# for all strategies
# for i in [0,1,2,3,4]:
# pd_final=pd.read_csv('postprocessing/4_all_footprints_'+str(i)+'.csv')
pd_final.groupby(['char_nopers'], as_index=False)['total_occupant_footprint'].mean()
pd_occupant_energy_out = pd_final.groupby(['char_nopers'], as_index=False)['total_energy_fp'].mean()
pd_occupant_energy_out
plt.plot(list(pd_occupant_energy_out['char_nopers']),list(pd_occupant_energy_out['total_energy_fp']), 'ro')
pd_construction = pd_final.groupby(['Building Construction year'], as_index=False)['total_material_fp_year'].mean()
pd_construction
plt.plot(list(pd_construction['Building Construction year'])[:-2],list(pd_construction['total_material_fp_year'])[:-2], 'ro')
| SHEF Building model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exercise
#
# ### # Find the "Fibonacci series" for the first 8 numbers after 0 & 1
# ### # i.e. 0, 1, ?, ?, ?, ?, ?, ?, ?, ?
# ### Hints
# +
def fibonacci_series(n):
a=0
b=1
print(a)
print(b)
print("The counting starts now")
sum = ?
for x in range(?):
sum = ? + ?
print(sum)
? = ?
? = ?
number = 8
fibonacci_series(?)
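# One possible completion of the scaffold above, shown as a reference; it returns the series as a list as well as printing it, and avoids shadowing the built-in `sum`:

```python
def fibonacci_series(n):
    a, b = 0, 1
    series = [a, b]
    print(a)
    print(b)
    print("The counting starts now")
    for _ in range(n):
        nxt = a + b        # next Fibonacci number
        print(nxt)
        series.append(nxt)
        a, b = b, nxt      # slide the window forward
    return series

number = 8
result = fibonacci_series(number)  # 0, 1, then 1, 2, 3, 5, 8, 13, 21, 34
```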
| 5_Function_Exercise_Hints.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
import sys
sys.path.append('../')
from make_representations.representation_maker import Metagenomic16SRepresentation
from utility.file_utility import FileUtility
# + [markdown] deletable=true editable=true
# ## Crohn Disease
# + deletable=true editable=true
#cpe_file='../../datasets/processed_data/crohn/npe/cpe_16s_5000'
fasta_files, mapping = FileUtility.read_fasta_directory('/mounts/data/proj/asgari/dissertation/datasets/deepbio/microbiome/crohn/','fastq')
# + deletable=true editable=true
values={2000:5000,5000:10000,10000:15000}
for k,v in values.items():
RS_b=Metagenomic16SRepresentation(fasta_files, mapping, v, 20)
cpe_file='../../datasets/processed_data/crohn/npe/cpe_16s_'+str(k)
RS_b.generate_cpe_all(cpe_file, '../../datasets/processed_data/crohn/npe/npe_'+str(k)+'_'+str(v))
# + deletable=true editable=true
values={1000:2000,2000:5000,5000:1000}
for k,v in values.items():
RS_b=Metagenomic16SRepresentation(fasta_files, mapping, v, 20)
cpe_file='../../datasets/processed_data/crohn/npe/cpe_16s_'+str(k)
RS_b.generate_cpe_all(cpe_file, '../../datasets/processed_data/crohn/npe/npe_'+str(k)+'_'+str(v))
# + deletable=true editable=true
RS=Metagenomic16SRepresentation(fasta_files, mapping, -1, 20)
# + deletable=true editable=true
RS.generate_cpe_all(cpe_file, '../../datasets/processed_data/crohn/npe/npe_5000_-1' )
# + deletable=true editable=true
FileUtility.save_list('../../datasets/processed_data/crohn/npe/npe_vocab_5000', RS.cpe_vocab)
# + [markdown] deletable=true editable=true
# # Body Site
# + deletable=true editable=true
files=FileUtility.recursive_glob('/mounts/data/proj/asgari/github_repos/microbiomephenotype/data_config/bodysites/','*.txt')
list_of_files=[]
for file in files:
print (file)
list_of_files+=FileUtility.load_list(file)
list_of_files=[x+'.fsa' for x in list_of_files]
fasta_files, mapping = FileUtility.read_fasta_directory('/mounts/data/proj/asgari/dissertation/datasets/deepbio/microbiome/hmb_data/','fsa',only_files=list_of_files)
# + deletable=true editable=true
values={5000:1000}
#values={10000:-1,15000:-1}#{1000:2000,2000:5000,5000:-1,10000:-2}
for k,v in values.items():
RS_b=Metagenomic16SRepresentation(fasta_files, mapping, v, 20)
cpe_file='../../datasets/processed_data/body-site/cpe/cpe_16s_'+str(k)
RS_b.generate_cpe_all(cpe_file, '../../datasets/processed_data/body-site/cpe/npe_'+str(k)+'_'+str(v))
# + deletable=true editable=true
values={10000:-1}
for k,v in values.items():
RS_b=Metagenomic16SRepresentation(fasta_files, mapping, v, 20)
cpe_file='../../datasets/processed_data/body-site/cpe/cpe_16s_'+str(k)
RS_b.generate_cpe_all(cpe_file, '../../datasets/processed_data/body-site/cpe/npe_'+str(k)+'_'+str(v))
# + deletable=true editable=true
FileUtility.save_list('../../datasets/processed_data/body-site/cpe/npe_vocab', RS_b.cpe_vocab)
# + [markdown] deletable=true editable=true
# # Ecological
# + deletable=true editable=true
import sys
sys.path.append('../')
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from utility.file_utility import FileUtility
import numpy as np
import matplotlib.pyplot as plt
# %pylab inline
# %matplotlib inline
from sklearn.preprocessing import normalize
from sklearn.feature_extraction.text import TfidfVectorizer
from Bio import SeqIO
from nltk import FreqDist
import random
import itertools
from make_representations.cpe_apply import BPE
import timeit
from make_representations.cpe_efficient import train_cpe
# + deletable=true editable=true
class ReadFasta(object):
    '''
    Read a fasta file and derive a label for each record via label_idf_func.
    '''
def __init__(self, fasta_address, label_idf_func):
'''
Fasta: address
Label: function
'''
self.labels=[]
self.corpus=[]
for cur_record in SeqIO.parse(fasta_address, 'fasta'):
self.corpus.append(str(cur_record.seq).lower())
self.labels.append(str(cur_record.id).lower())
self.labels=[label_idf_func(l) for l in self.labels]
def get_samples(self, envs, N):
'''
Envs: list of envs
N: sample size
'''
labels=[]
corpus=[]
for env in envs:
selected=[idx for idx,v in enumerate(self.labels) if env==v]
if N==-1:
N=len(selected)
idxs=random.sample(selected, N)
corpus=corpus+[self.corpus[idx] for idx in idxs]
labels=labels+[self.labels[idx] for idx in idxs]
return corpus, labels
def get_vector_rep(self, corpus, k, restricted=True):
if restricted:
vocab = [''.join(xs) for xs in itertools.product('atcg', repeat=k)]
tf_vec = TfidfVectorizer(use_idf=True, vocabulary=vocab, analyzer='char', ngram_range=(k, k),
norm='l1', stop_words=[], lowercase=True, binary=False)
else:
tf_vec = TfidfVectorizer(use_idf=True, analyzer='char', ngram_range=(k, k),
norm='l1', stop_words=[], lowercase=True, binary=False)
return tf_vec.fit_transform(corpus)
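# The `restricted=True` branch of `get_vector_rep` fixes the vocabulary to all 4^k nucleotide k-mers before counting. The same idea in isolation, using only the standard library:

```python
import itertools
from collections import Counter

k = 2
# All 4^k possible k-mers over the DNA alphabet, in a fixed order.
vocab = [''.join(xs) for xs in itertools.product('atcg', repeat=k)]

# Count overlapping k-mers in a sequence, then project onto the fixed vocab.
seq = "atcga"
counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
vector = [counts.get(kmer, 0) for kmer in vocab]
print(len(vocab), sum(vector))  # 16 4
```

# A fixed vocabulary keeps feature positions comparable across corpora, which is why the class passes `vocabulary=vocab` to TfidfVectorizer.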
# + deletable=true editable=true
FST=ReadFasta('/mounts/data/proj/asgari/dissertation/datasets/deepbio/microbiome/new/environment_16S.fa', lambda x:x.split('.')[0] )
# + deletable=true editable=true
map_type=FileUtility.load_obj('../../datasets/processed_data/eco/map_label_type.pickle')
eco=['soil', 'marine','bioreactor','freshwater','groundwater','sediment','bioreactor_sludge','food_fermentation','compost','rhizosphere','food','hydrocarbon','marine_sediment','activated_sludge','aquatic','hot_springs','freshwater_sediment','ant_fungus_garden']
orgs=['human_gut','bovine_gut','mouse_gut','chicken_gut','termite_gut']
# + deletable=true editable=true
# + deletable=true editable=true
corpus_eco, labels_eco=FST.get_samples(eco,1000)
npe_file='../../datasets/processed_data/eco/cpe/npe_eco_10000'
f=open(npe_file,'r')
CPE_Applier=BPE(f,separator='')
cpe_vocab=[''.join(x.split()).replace('</w>','').lower() for x in FileUtility.load_list(npe_file)[1::]]
cpe_vocab=list(set(cpe_vocab))
cpe_vocab.sort()
cpe_vectorizer = TfidfVectorizer(use_idf=False, vocabulary=cpe_vocab, analyzer='word',
norm=None, stop_words=[], lowercase=True, binary=False, tokenizer=str.split)
# + deletable=true editable=true
new_corpus=[]
for x in corpus_eco:
new_corpus.append(CPE_Applier.segment(x))
# + deletable=true editable=true
X=cpe_vectorizer.fit_transform(new_corpus)
# + deletable=true editable=true
X.shape
# + deletable=true editable=true
FileUtility.save_sparse_csr('../../datasets/processed_data/eco/features/npe_eco_restrictedmer.npz', X)
FileUtility.save_list('../../datasets/processed_data/eco/features/eco_npe_label_restrictedkmer.txt',labels_eco)
# + deletable=true editable=true
corpus_orgs, labels_orgs=FST.get_samples(orgs,620)
# + deletable=true editable=true
f=open('../../datasets/processed_data/org/corpus_fasta.fasta','w')
for idx, x in enumerate(corpus_orgs):
f.write('> subject_'+str(idx+1)+'\n')
f.write(x+'\n')
# + deletable=true editable=true
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import IUPAC
sequences=[SeqRecord(Seq(x, IUPAC.IUPACAmbiguousDNA()), id='subject_'+str(idx+1)) for idx, x in enumerate(corpus_orgs)]
with open("../../../datasets/deepbio/qiime/corpus_fasta_org.fasta", "w") as output_handle:
SeqIO.write(sequences, output_handle, "fasta")
# + deletable=true editable=true
f=open('../../datasets/processed_data/eco/corpus_fasta.fasta','w')
for idx, x in enumerate(corpus_eco):
f.write('> subject_'+str(idx+1)+'\n')
f.write(x+'\n')
# + deletable=true editable=true
npe_file='../../datasets/processed_data/org/cpe/npe_org_10000'
f=open(npe_file,'r')
CPE_Applier=BPE(f,separator='')
cpe_vocab=[''.join(x.split()).replace('</w>','').lower() for x in FileUtility.load_list(npe_file)[1::]]
cpe_vocab=list(set(cpe_vocab))
cpe_vocab.sort()
cpe_vectorizer = TfidfVectorizer(use_idf=False, vocabulary=cpe_vocab, analyzer='word',
norm=None, stop_words=[], lowercase=True, binary=False, tokenizer=str.split)
# + deletable=true editable=true
new_corpus=[]
for x in corpus_orgs:
new_corpus.append(CPE_Applier.segment(x))
# + deletable=true editable=true
X=cpe_vectorizer.fit_transform(new_corpus)
# + deletable=true editable=true
X.shape
# + deletable=true editable=true
FileUtility.save_sparse_csr('../../datasets/processed_data/org/features/npe_org_restrictedmer.npz', X)
FileUtility.save_list('../../datasets/processed_data/org/features/npe_org_label_restrictedkmer.txt',labels_orgs)
# + deletable=true editable=true
| notebooks/.ipynb_checkpoints/make_npe_representations-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# !pip install openpyxl
# !pip install XLRD
import csv
boston = pd.read_csv('boston.csv', usecols = ['CHAS','NOX','RM'])
boston.to_excel('hw_boston.xlsx')
| hausaufgabe09.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import emachine as EM
import itertools
from joblib import Parallel, delayed
#from sklearn.model_selection import train_test_split
np.random.seed(0)
n_var = 20 ; g = 1.0 ; n_seq = 5000
# +
# Synthetic data are generated by using `generate_seq`.
w_true,seqs = EM.generate_seq(n_var,n_seq,g=g)
print(seqs.shape)
ops = EM.operators(seqs)
print(ops.shape)
# +
# predict interactions w
eps_list = np.linspace(0.1,0.9,9)
n_eps = len(eps_list)
res = Parallel(n_jobs = n_eps)(delayed(EM.fit)(ops,eps=eps,max_iter=100) for eps in eps_list)
w_eps = np.array([res[i][0] for i in range(len(res))])
w_eps_iter = np.array([res[i][1] for i in range(len(res))])
#e_eps = np.zeros(len(eps_list))
#w_eps = np.zeros((len(eps_list),ops.shape[1]))
#for i,eps in enumerate(eps_list):
# w_eps[i,:],e_eps[i] = EM.fit(ops,w_true,eps=eps,max_iter=100)
#print('eps and e_eps:',eps,e_eps[i])
# -
w_eps_iter.shape
MSE = ((w_true[np.newaxis,np.newaxis,:] - w_eps_iter)**2).mean(axis=2)
MSE.shape
# Entropy
#w_eps_iter[n_eps,n_iter,n_ops]
#ops[n_seq,n_ops]
energy_eps_iter = -np.sum((ops[:,np.newaxis,np.newaxis,:]*w_eps_iter[np.newaxis,:,:,:]),axis=3)
prob_eps_iter = np.exp(energy_eps_iter) # [n_seq,n_eps,n_iter]
prob_eps_iter /= prob_eps_iter.sum(axis=0)[np.newaxis,:,:]
entropy_eps_iter = -(prob_eps_iter*np.log(prob_eps_iter)).sum(axis=0) #[n_eps,n_iter]
entropy_eps_iter.shape
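# The entropy cell above exponentiates energies into a Boltzmann distribution over sequences and normalizes along the sequence axis before summing -p*log(p). The same normalize-then-entropy pattern on a toy array (shapes chosen for illustration):

```python
import numpy as np

# energy[n_seq, n_eps]: 3 sequences under 2 epsilon settings, all equal here.
energy = np.array([[0.0, 1.0],
                   [0.0, 1.0],
                   [0.0, 1.0]])

prob = np.exp(energy)
prob /= prob.sum(axis=0)[np.newaxis, :]       # normalize over sequences
entropy = -(prob * np.log(prob)).sum(axis=0)  # one entropy per eps column

# Equal energies give a uniform distribution, so each entropy is log(n_seq).
print(np.allclose(entropy, np.log(3)))  # True
```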
# +
ieps_show = [2,4,8]
nx,ny = 2,2
fig, ax = plt.subplots(ny,nx,figsize=(nx*3.5,ny*3))
for i in ieps_show:
ax[0,0].plot(MSE[i],label='eps=%1.1f'%eps_list[i])
ax[1,0].plot(entropy_eps_iter[i,:],label='eps=%1.1f'%eps_list[i])
ax[0,1].plot(eps_list,MSE[:,-1],'ko-')
ax[1,1].plot(eps_list,entropy_eps_iter[:,-1],'ko-',label='final')
ax[1,1].plot(eps_list,entropy_eps_iter[:,:].max(axis=1),'r^--',label='max')
ax[0,0].legend()
ax[1,0].legend()
ax[1,1].legend()
ax[0,0].set_ylabel('MSE')
ax[0,1].set_ylabel('MSE')
ax[1,0].set_ylabel('Entropy')
ax[1,1].set_ylabel('Entropy')
ax[0,0].set_xlabel('Iterations')
ax[0,1].set_xlabel('epsilon')
ax[1,0].set_xlabel('Iterations')
ax[1,1].set_xlabel('epsilon')
plt.tight_layout(h_pad=1, w_pad=1.5)
#plt.savefig('fig.pdf', format='pdf', dpi=100)
# -
| Ref/find_eps_entropy/fig1_nvar20_g1_nseq5k.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from random import randint, random, choice
import matplotlib.pyplot as plt
from math import sqrt, log
# %matplotlib inline
# +
class Board(object):
def __init__(self, num_players, rewards):
self.n_players = num_players
self.rewards = rewards
self.grid = [[0 for i in range(16)] for j in range(16)]
self.cur_turn = 1
self.baseCamps = {2: [0, 0], 1: [15, 15]}
self.this_turn_visited = []
self.last_moved = [None, None]
self.starting_positions = {}
if self.n_players == 2:
self.starting_positions[1] = [[0, 0], [0, 1], [0, 2], [0, 3], [0, 4], [1, 0], [1, 1], [1, 2], [1, 3], [1, 4], [2, 0], [2, 1], [2, 2], [2, 3], [3, 0], [3, 1], [3, 2], [4, 0], [4, 1]]
self.starting_positions[2] = [[15, 11], [15, 12], [15, 13], [15, 14], [15, 15], [14, 11], [14, 12], [14, 13], [14, 14], [14, 15], [13, 12], [13, 13], [13, 14], [13, 15], [12, 13], [12, 14], [12, 15], [11, 14], [11, 15]]
# if self.cur_turn == 2:
# self.opponentMove()
def printBoard(self):
for r in self.grid:
print(r)
def reset(self):
self.grid = [[0 for i in range(16)] for j in range(16)]
if self.n_players == 2:
for player, lst in self.starting_positions.items():
for x, y in lst:
self.grid[x][y] = player
# def opponentMove(self):
# opponent = Agent(self, 2, [15, 15])
# opponent.move()
# print("NOT COMPLETE")
def checkWin(self, player):
if self.n_players == 2:
win_positions = self.starting_positions[self.n_players - player + 1]
for x, y in win_positions:
if self.grid[x][y] != player:
return False
return True
def inBounds(self, i, j):
if i < 0 or j < 0 or j > 15 or i > 15:
return False
return True
def checkDist(self, x, y, other = None):
if not other:
other = self.baseCamps[self.cur_turn]
baseX, baseY = other
return ((baseY - y)**2 + (baseX-x)**2)**.5
def getLegalComplete(self, i, j, positive = None):
# BFS from i, j
legal = {"moves": [], "jumps": []}
if self.grid[i][j] == 0:
print(i, j, self.grid[i])
print("Why are you trying to move a blank space?")
return legal
else:
visited = [[False for i in range(16)] for j in range(16)]
queue = [[i, j]]
while queue:
x, y = queue.pop(0)
if not visited[x][y]:
if [x, y] != [i, j]:
legal["jumps"].append([x, y])
for k in range(-1, 2):
for l in range(-1, 2):
if self.inBounds(x + 2*k, y + 2*l) and self.grid[x + 2*k][y + 2*l] == 0 and self.grid[x + k][y + l] != 0:
if not visited[x + 2*k][y + 2*l]:
queue.append([x + 2*k, y + 2*l])
visited[x][y] = True
for k in range(-1, 2):
for l in range(-1, 2):
if self.inBounds(i + k, j + l) and self.grid[i + k][j + l] == 0:
legal["moves"].append([i + k, j + l])
return legal
def getLegal(self, i, j, positive = None, jump = False):
legal = {"moves": [], "jumps": []}
if self.grid[i][j] == 0:
print(i, j, self.grid[i])
print("Why are you trying to move a blank space?")
return legal
else:
for k in range(-1, 2):
for l in range(-1, 2):
myDist = self.checkDist(i, j, other = positive)
if self.inBounds(i + 2*k, j + 2*l) and self.grid[i + 2*k][j + 2*l] == 0 \
and self.grid[i + k][j + l] != 0:
newDist = self.checkDist(i + 2*k, j + 2*l, other = positive)
if (myDist > newDist):
legal["jumps"].append([i + 2*k, j + 2*l, newDist])
if not jump:
if self.inBounds(i + k, j + l) and self.grid[i + k][j + l] == 0:
newDist = self.checkDist(i + k, j + l, other = positive)
if (myDist > newDist):
legal["moves"].append([i + k, j + l, newDist])
return legal
def checkLegal(self, player, i, j, k, l):
if self.grid[k][l] != 0 or self.grid[i][j] != player:
# if random() < 0.01:
# print("You can't do that move")
# print "Here's why. Grid at k, l isn't 0?:", k, l, self.grid[k][l], "Or at i, j, isn't player", player, i, j, self.grid[j][j]
return False
else:
legal = self.getLegalComplete(i, j)
# TODO - This currently doesn't work because the getLegal() above needs to be passed a proper
# value for "positive"
# if [k, l] not in [[m[0], m[1]] for m in legal["moves"]] and \
# [k, l] not in [[m[0], m[1]] for m in legal["jumps"]]:
# print("Not legal move")
# print(i, j, k, l, legal)
# return False
return True
def move(self, player, i, j, k, l, pBoard = False):
if self.checkLegal(player, i, j, k, l) == True and self.cur_turn == player:
self.grid[i][j] = 0
self.grid[k][l] = player
# set last moved
self.last_moved = [k, l]
# record in our path
if self.this_turn_visited == []:
self.this_turn_visited.append([i, j])
self.this_turn_visited.append([k, l])
# check if we're able to move again
new_moves = self.getLegal(k, l)
# end turn if we didn't just jump and there are still legal jumps, and we didn't just move one space
            if not new_moves["jumps"] or (abs(k - i) != 2 and abs(l - j) != 2):
self.cur_turn = 3 - self.cur_turn
self.this_turn_visited = []
if pBoard:
self.printBoard()
# -
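# `getLegalComplete` above is a breadth-first search over chained two-cell jumps. Its reachability core, stripped of the Halma-specific rules, on a made-up 5x5 occupancy grid:

```python
# 0 = empty, 1 = piece. A jump moves two cells in any of the 8 directions
# over an occupied middle cell, and jumps may chain through empty landings.
grid = [[1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]

def jump_reachable(grid, start):
    n = len(grid)
    visited = set()
    queue = [start]
    while queue:
        x, y = queue.pop(0)
        if (x, y) in visited:
            continue
        visited.add((x, y))
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                jx, jy = x + 2 * dx, y + 2 * dy  # landing cell
                mx, my = x + dx, y + dy          # jumped-over cell
                if (0 <= jx < n and 0 <= jy < n
                        and grid[jx][jy] == 0 and grid[mx][my] != 0
                        and (jx, jy) not in visited):
                    queue.append((jx, jy))
    visited.discard(start)  # the starting square is not a destination
    return visited

print(sorted(jump_reachable(grid, (0, 0))))  # [(0, 2), (0, 4)]
```

# The piece at (0, 0) hops over (0, 1) to (0, 2), then chains over (0, 3) to (0, 4).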
board = Board(2, 0)
board.reset()
board.printBoard()
board.move(1, 0, 4, 0, 5)
board.printBoard()
print(board.getLegalComplete(2, 2))
class Agent(object):
def __init__(self, board, ID, baseOrigin):
self.ID = ID
self.baseOrigin = baseOrigin
self.board = board
self.pieces = self.board.starting_positions[self.ID]
self.overrideDist = False
def updateBoard(self, board):
self.board = board
def findPieces(self, board):
pieces = []
for i in range(len(board.grid)):
for j in range(len(board.grid[i])):
if board.grid[i][j] == self.ID:
pieces.append([i, j])
return pieces
def distToOther(self, piece, other = None):
if not other:
other = self.baseOrigin
baseX, baseY = other
pieceX, pieceY = piece
return ((baseY - pieceY)**2 + (baseX-pieceX)**2)**.5
def bestPiece(self, distPieces = None, mp = None):
if not distPieces:
distPieces = sorted([[self.distToOther(piece), piece] for piece in self.pieces])
for piece in distPieces:
i, j = piece[1]
legals = self.board.getLegal(i, j, positive = mp)
if legals["jumps"] or legals["moves"]:
return [(i, j), legals]
return [False, False]
def bestMove(self, mustJump = None, eps = .2):
if mustJump:
piece = mustJump
legals = self.board.getLegal(piece[0], piece[1], jump = True)
elif self.overrideDist:
# Find the pieces that are not in the right positions
s = self.board.starting_positions[self.board.n_players-self.ID+1]
missedPieces = [x for x in self.pieces if x not in s]
# Find the first missing position
missedPos = [x for x in s if x not in self.pieces]
if not missedPos:
                print("Erroneous")
return [False, False]
mp = missedPos[0]
# Calculate distances and find best move using those
distPieces = sorted([[self.distToOther(piece, mp), piece] for piece in missedPieces], reverse=True)
piece, legals = self.bestPiece(distPieces = distPieces, mp = mp)
else:
piece, legals = self.bestPiece()
if not piece or not legals:
return [False, False]
if legals["jumps"]:
distJumps = sorted(legals["jumps"], reverse=True, key=lambda i: i[2])
if random() < eps:
target = choice(distJumps)
else:
target = distJumps[0]
return [(piece[0], piece[1], target[0], target[1]), True]
elif legals["moves"] and not mustJump:
distMoves = sorted(legals["moves"], reverse=True, key=lambda i: i[2])
if random() < eps:
target = choice(distMoves)
else:
target = distMoves[0]
return [(piece[0], piece[1], target[0], target[1]), False]
else:
return [False, False]
def move(self):
move, jumped = self.bestMove()
# If no move available, clearly we are in a deadlock
if not move:
self.overrideDist = True
move, jumped = self.bestMove()
if not move:
                print(self.ID, self.overrideDist)
return False
# Make move on board and record pieces
i, j, k, l = move
self.board.move(self.ID, i, j, k, l)
self.pieces = self.findPieces(self.board)
# Continue jumping if already done so
while jumped:
move, jumped = self.bestMove(mustJump = [k, l])
if move:
i, j, k, l = move
self.board.move(self.ID, i, j, k, l)
self.pieces = self.findPieces(self.board)
# Return the final move made
self.board.cur_turn = 3 - self.board.cur_turn
return move
# +
board = Board(2, 0)
board.reset()
print(board.cur_turn)
opponent = Agent(board, 2, [15, 15])
player = Agent(board, 1, [0, 0])
for i in range(160):
opponent.move()
player.move()
board.printBoard()
print(board.cur_turn)
# +
moveRew = -1
winRew = 100
def PlayGame(nplayers = 2):
brd = Board(nplayers, 0)
brd.reset()
opponent = Agent(brd, 2, [15, 15])
player = Agent(brd, 1, [0, 0])
players = [opponent, player]
# # for i in range(250):
# # opponent.move()
# # player.move()
rewards = {1: [], 2: []}
m, m1 = 0, 0
c = 0
while True:
for p in players:
# Check if game is ended
if brd.checkWin(p.ID):
rewards[p.ID].append(winRew)
return p, rewards
m = p.move()
rewards[p.ID].append(moveRew)
brd = p.board
c+=1
# -
lol = PlayGame()
plays = 100
wins = {1: 0, 2: 0}
rews = {1: [], 2: []}
for i in range(plays):
p, r = PlayGame()
wins[p.ID] += 1
rews[1].append(np.sum(r[1]))
rews[2].append(np.sum(r[2]))
cumrews = {1: np.cumsum(rews[1]), 2: np.cumsum(rews[2])}
# +
x = np.arange(0, plays, 1)
y = [rews[1][i] for i in x]
plt.plot(x, y)
# +
x = np.arange(0, plays, 1)
y = [rews[2][i] for i in x]
plt.plot(x, y)
# -
class HalmaState:
""" A state of the game of Halma
"""
def __init__(self, players = 2, rewards = 0):
        self.nplayers = players
        self.rewards = rewards
self.board = Board(players, rewards)
self.playerJustMoved = self.board.cur_turn
## I changed this -> At the root pretend the player just moved is p2 - p1 has the first move
self.board.reset()
self.size = len(self.board.grid)
def Clone(self):
""" Create a deep clone of this game state.
"""
st = HalmaState()
st.playerJustMoved = self.playerJustMoved
st.board.grid = [self.board.grid[i][:] for i in range(len(self.board.grid))]
return st
def DoMove(self, move):
""" Update a state by carrying out the given move.
Must update playerToMove.
"""
# print move
i, j, k, l = move
self.board.move(self.board.cur_turn, i, j, k, l)
self.playerJustMoved = self.board.cur_turn
def GetMoves(self):
""" Get all possible moves from this state.
"""
moves = {"moves": [], "jumps": []}
for i in range(len(self.board.grid)):
for j in range(len(self.board.grid)):
if self.board.grid[i][j] == self.playerJustMoved:
legals = self.board.getLegalComplete(i, j)
# print legals, self.playerJustMoved, self.board.cur_turn
# remove places we've already traveled this turn if necessary
if legals["moves"]:
moves["moves"] += ([[i, j, k, l] for k, l, dist in legals["moves"] if [k, l] not in self.board.this_turn_visited])
if legals["jumps"]:
moves["jumps"] += ([[i, j, k, l] for k, l, dist in legals["jumps"] if [k, l] not in self.board.this_turn_visited])
# Override
if not moves["moves"] and not moves["jumps"]:
# Find the pieces that are not in the right positions
s = self.board.starting_positions[3 - self.playerJustMoved]
# Find the first missing position
missedPos = [[x,y] for x, y in s if self.board.grid[x][y] == 0]
if missedPos:
target = missedPos[0]
else:
return moves
for i in range(len(self.board.grid)):
for j in range(len(self.board.grid)):
if self.board.grid[i][j] == self.playerJustMoved:
legals = self.board.getLegalComplete(i, j, positive = target)
# print legals, self.playerJustMoved, self.board.cur_turn
# remove places we've already traveled this turn if necessary
if legals["moves"]:
moves["moves"] += ([[i, j, k, l] for k, l, dist in legals["moves"] if [k, l] not in self.board.this_turn_visited])
if legals["jumps"]:
moves["jumps"] += ([[i, j, k, l] for k, l, dist in legals["jumps"] if [k, l] not in self.board.this_turn_visited])
if not moves["moves"] and not moves["jumps"]:
            print("something went wrong")
return moves
def GetResult(self, player):
""" Get the game result from the viewpoint of playerjm.
"""
return self.board.checkWin(player)
def __repr__(self):
s= ""
for x in range(len(self.board.grid)):
for y in range(len(self.board.grid[x])):
s += ["[_]","[X]","[O]"][self.board.grid[x][y]]
s += "\n"
return s
class Node:
""" A node in the game tree. Note wins is always from the viewpoint of playerJustMoved.
Crashes if state not specified.
"""
def __init__(self, move = None, moveType = None, parent = None, state = None):
self.move = move # the move that got us to this node - "None" for the root node
self.moveType = moveType
self.parentNode = parent # "None" for the root node
self.childNodes = []
self.wins = 0
self.visits = 0
self.untriedMoves = state.GetMoves() # future child nodes
if not self.untriedMoves:
            print("WATTT")
self.playerJustMoved = state.playerJustMoved # the only part of the state that the Node needs later
def UCTSelectChild(self):
""" Use the UCB1 formula to select a child node. Often a constant UCTK is applied so we have
lambda c: c.wins/c.visits + UCTK * sqrt(2*log(self.visits)/c.visits to vary the amount of
exploration versus exploitation.
"""
s = sorted(self.childNodes, key = lambda c: c.wins/c.visits + sqrt(2*log(self.visits)/c.visits))[-1]
return s
def AddChild(self, m, s, moveType = "moves"):
""" Remove m from untriedMoves and add a new child node for this move.
Return the added child node
"""
n = Node(move = m, moveType = moveType, parent = self, state = s)
self.untriedMoves[moveType].remove(m)
self.childNodes.append(n)
return n
def Update(self, result):
""" Update this node - one additional visit and result additional wins. result must be from the viewpoint of playerJustmoved.
"""
self.visits += 1
self.wins += result
def __repr__(self):
return "[M:" + str(self.move) + " MT:" + str(self.moveType) + " W/V:" + str(self.wins) + "/" + str(self.visits) + " U:" + str(self.untriedMoves) + "]"
def TreeToString(self, indent):
s = self.IndentString(indent) + str(self)
for c in self.childNodes:
s += c.TreeToString(indent+1)
return s
def IndentString(self,indent):
s = "\n"
for i in range (1,indent+1):
s += "| "
return s
def ChildrenToString(self):
s = ""
for c in self.childNodes:
s += str(c) + "\n"
return s
# +
def UCT(rootstate, itermax, verbose = False):
""" Conduct a UCT search for itermax iterations starting from rootstate.
Return the best move from the rootstate.
Assumes 2 alternating players (player 1 starts), with game results in the range [0.0, 1.0]."""
rootnode = Node(state = rootstate)
for i in range(itermax):
node = rootnode
state = rootstate.Clone()
# Select
while not node.untriedMoves["moves"] and not node.untriedMoves["jumps"] and node.childNodes != []: # node is fully expanded and non-terminal
node = node.UCTSelectChild()
# print str(state)
state.DoMove(node.move)
# Expand
if node.untriedMoves["moves"] or node.untriedMoves["jumps"]: # if we can expand (i.e. state/node is non-terminal)
m = choice(node.untriedMoves["moves"] + node.untriedMoves["jumps"])
state.DoMove(m)
mt = "moves"
if m in node.untriedMoves["jumps"]:
mt = "jumps"
node = node.AddChild(m, state, mt) # add child and descend tree
mvs = state.GetMoves()
if not mvs["moves"] and not mvs["jumps"]:
return None
# Rollout - this can often be made orders of magnitude quicker using a state.GetRandomMove() function
while mvs["moves"] or mvs["jumps"]: # while state is non-terminal
mv = choice(mvs["moves"] + mvs["jumps"])
state.DoMove(mv)
mvs = state.GetMoves() # refresh the legal moves for the new state
# Backpropagate
while node != None: # backpropagate from the expanded node and work back to the root node
node.Update(state.GetResult(node.playerJustMoved)) # state is terminal. Update node with result from POV of node.playerJustMoved
node = node.parentNode
# Output some information about the tree - can be omitted
#if (verbose): print(rootnode.TreeToString(0))
#else: print(rootnode.ChildrenToString())
if not rootnode.childNodes:
return None # tree was never expanded: no legal move was found
return sorted(rootnode.childNodes, key = lambda c: c.visits)[-1].move # return the move that was most visited
moveRew = -1
winRew = 100
def UCTPlayGame():
""" Play a sample game between two UCT players where each player gets a different number
of UCT iterations (= simulations = tree nodes).
"""
rewards = {1: [], 2: []}
state = HalmaState()
while (state.GetMoves()):
# print(str(state))
if state.playerJustMoved == 1:
m = UCT(rootstate = state, itermax = 10, verbose = False) # play with values for itermax and verbose = True
else:
m = UCT(rootstate = state, itermax = 10, verbose = False)
# print("Best Move: " + str(m) + "\n")
if m:
state.DoMove(m)
rewards[state.playerJustMoved].append(moveRew)
else:
break
if state.GetResult(state.playerJustMoved):
print("Player " + str(state.playerJustMoved) + " wins!")
rewards[state.playerJustMoved].append(winRew)
return state.playerJustMoved, rewards
else:
print("Player " + str(3 - state.playerJustMoved) + " wins!")
rewards[3-state.playerJustMoved].append(winRew)
return 3-state.playerJustMoved, rewards
# -
result = UCTPlayGame()
print(result)
import numpy as np # these may already be imported earlier in the notebook
import matplotlib.pyplot as plt
plays = 100
UCTwins = {1: 0, 2: 0}
UCTrews = {1: [], 2: []}
for i in range(plays):
print(i)
p, r = UCTPlayGame()
UCTwins[p] += 1
UCTrews[1].append(np.sum(r[1]))
UCTrews[2].append(np.sum(r[2]))
UCTcumrews = {1: np.cumsum(UCTrews[1]), 2: np.cumsum(UCTrews[2])}
# +
x = np.arange(0, plays, 1)
y = [UCTrews[1][i] for i in x] # map() returns an iterator in Python 3, so build a list instead
plt.plot(x, y)
# +
x = np.arange(0, plays, 1)
y = [UCTrews[2][i] for i in x]
plt.plot(x, y)
# -
| .ipynb_checkpoints/Jesse-SergsB-Halma2-Redux-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/motamaike/python-conda_pip/blob/master/Pyfolio_Backtesting_com_Pyfolio_Python_para_Investimentos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="reYS-25FkARg"
# # Python para Investimentos
#
#
# + [markdown] id="_qqPx-qlVy_C"
# # 1. Importing libraries
# + id="PrOd2uJK4qeN" outputId="2dd5c7c2-d707-46c6-ee8e-55d047fab04a" colab={"base_uri": "https://localhost:8080/", "height": 241}
# !pip install yfinance --upgrade --no-cache-dir
import yfinance as yf
#import pandas_datareader.data as web
#yf.pdr_override()
# + id="WCt0PyxrV3OW"
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + id="vD1UutKrpb17" outputId="b6bca3bb-27cf-433b-904e-9b422d68f5d5" colab={"base_uri": "https://localhost:8080/", "height": 751}
# To fix the bug: AttributeError: 'numpy.int64' object has no attribute 'to_pydatetime'
# !pip install git+https://github.com/quantopian/pyfolio
# + id="_-1Cnb3ZAdd3" outputId="380a85fc-c0e9-41a2-e7e6-c1df8076e398" colab={"base_uri": "https://localhost:8080/", "height": 71}
import pyfolio as pf
import warnings
warnings.filterwarnings('ignore')
# + [markdown] id="aRFnZasnWnlN"
# # 2. Fetching and preparing the data
# + id="SPsar0-4WrzM" outputId="7134522c-8409-426e-8932-3ea5801b5db4" colab={"base_uri": "https://localhost:8080/", "height": 34}
#tickers = ["ABEV3.SA", "ITSA4.SA", "WEGE3.SA", "USIM5.SA", "VALE3.SA", '^BVSP']
#dados_yahoo = web.get_data_yahoo(tickers, period="5y")["Adj Close"]
tickers = "TEND3.SA ^BVSP"
dados_yahoo = yf.download(tickers=tickers, period="max")['Adj Close']
# + id="OvUlK320A2NB"
dados_yahoo.dropna(inplace=True)
# + id="boCHEnvZA6-K" outputId="7744999a-4980-4e70-9006-250a139cd33b" colab={"base_uri": "https://localhost:8080/", "height": 450}
retorno = dados_yahoo.pct_change()
retorno
# + id="S6FifislBDde" outputId="b295a620-acd2-4909-e1a0-bf02f6c03032" colab={"base_uri": "https://localhost:8080/", "height": 450}
retorno_acumulado = (1 + retorno).cumprod()
retorno_acumulado.iloc[0] = 1
retorno_acumulado
# + id="MlmnYXFpBeKE" outputId="45ba47af-c381-44b8-b4e0-5c1b5d9ca748" colab={"base_uri": "https://localhost:8080/", "height": 450}
carteira = 25 * retorno_acumulado.drop(columns="^BVSP") # 25 shares of each asset; keep the benchmark out of the portfolio
carteira["saldo"] = carteira.sum(axis=1)
carteira["retorno"] = carteira["saldo"].pct_change()
carteira
# + [markdown] id="X0vAqc_-MKqK"
# # 3. Results
# + id="Sl_kM0l1_gbu" outputId="bfc5537f-6cc6-41ce-e0d6-15fea76f6f27" colab={"base_uri": "https://localhost:8080/", "height": 1000}
pf.create_full_tear_sheet(carteira["retorno"], benchmark_rets=retorno["^BVSP"])
# + id="tBJXlLkiCMvg" outputId="841633ae-41ed-4cb7-91fd-6ab67adfd991" colab={"base_uri": "https://localhost:8080/", "height": 467}
fig, ax1 = plt.subplots(figsize=(16,8))
pf.plot_rolling_beta(carteira["retorno"], factor_returns=retorno["^BVSP"], ax=ax1)
plt.ylim((0.1, 1.4));
# + id="eyo1_-4TwUcn"
| Pyfolio_Backtesting_com_Pyfolio_Python_para_Investimentos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Train ML model on Cloud AI Platform
#
# This notebook shows how to:
# * Export training code from [a Keras notebook](../06_feateng_keras/taxifare_fc.ipynb) into a trainer file
# * Create a Docker container based on a [DLVM container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/kubernetes-container)
# * Deploy training job to cluster
#
# ## Export code from notebook
#
# This notebook extracts code from a notebook and creates a Python file suitable for use as model.py
# +
import logging
import nbformat
import sys
import yaml
def write_parameters(cell_source, params_yaml, outfp):
with open(params_yaml, 'r') as ifp:
y = yaml.safe_load(ifp)
# print out all the lines in notebook
write_code(cell_source, 'PARAMS from notebook', outfp)
# print out YAML file; this will override definitions above
formats = [
'{} = {}', # for integers and floats
'{} = "{}"', # for strings
]
write_code(
'\n'.join([
formats[type(value) is str].format(key, value) for key, value in y.items()]),
'PARAMS from YAML',
outfp
)
def write_code(cell_source, comment, outfp):
lines = cell_source.split('\n')
if len(lines) > 0 and lines[0].startswith('%%'):
prefix = '#'
else:
prefix = ''
print("### BEGIN {} ###".format(comment), file=outfp)
for line in lines:
line = prefix + line.replace('print(', 'logging.info(')
if len(line) > 0 and (line[0] == '!' or line[0] == '%'):
print('#' + line, file=outfp)
else:
print(line, file=outfp)
print("### END {} ###\n".format(comment), file=outfp)
def convert_notebook(notebook_filename, params_yaml, outfp):
write_code('import logging', 'code added by notebook conversion', outfp)
with open(notebook_filename) as ifp:
nb = nbformat.reads(ifp.read(), nbformat.NO_CONVERT)
for cell in nb.cells:
if cell.cell_type == 'code':
if 'tags' in cell.metadata and 'display' in cell.metadata.tags:
logging.info('Ignoring cell # {} with display tag'.format(cell.execution_count))
elif 'tags' in cell.metadata and 'parameters' in cell.metadata.tags:
logging.info('Writing params cell # {}'.format(cell.execution_count))
write_parameters(cell.source, params_yaml, outfp)
else:
logging.info('Writing model cell # {}'.format(cell.execution_count))
write_code(cell.source, 'Cell #{}'.format(cell.execution_count), outfp)
# +
import os
INPUT='../06_feateng_keras/taxifare_fc.ipynb'
PARAMS='./notebook_params.yaml'
OUTDIR='./container/trainer'
# !mkdir -p $OUTDIR
OUTFILE=os.path.join(OUTDIR, 'model.py')
# !touch $OUTDIR/__init__.py
with open(OUTFILE, 'w') as ofp:
#convert_notebook(INPUT, PARAMS, sys.stdout)
convert_notebook(INPUT, PARAMS, ofp)
# #!cat $OUTFILE
# -
# ## Try out model file
#
# <b>Note</b>: Once the training starts, stop the job. Because it processes the entire dataset, it would take a long time on the relatively small machine on which you are running this notebook.
# !python3 $OUTFILE
# ## Create Docker container
#
# Package up the trainer file into a Docker container and submit the image.
#
# %%writefile container/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tf-nightly-2.0-preview
CMD ["python3", "/trainer/model.py"]
# +
# %%writefile container/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=serverlessml_training_container
#export IMAGE_TAG=$(date +%Y%m%d_%H%M%S)
#export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME
# echo "Building $IMAGE_URI"
docker build -f Dockerfile -t $IMAGE_URI ./
# echo "Pushing $IMAGE_URI"
docker push $IMAGE_URI
# -
# !find container
# <b>Note</b>: If you get a permissions error when running push_docker.sh from Notebooks, do it from CloudShell:
# * Open [CloudShell](https://console.cloud.google.com/cloudshell) on the GCP Console
# * ```git clone https://github.com/GoogleCloudPlatform/training-data-analyst```
# * ```cd training-data-analyst/quests/serverlessml/07_caip/container```
# * ```bash push_docker.sh```
# + language="bash"
# cd container
# bash push_docker.sh
# -
# ## Deploy to AI Platform
#
# Submit a training job using this custom container that we have just built.
# + language="bash"
# JOBID=serverlessml_$(date +%Y%m%d_%H%M%S)
# BUCKET=cloud-training-demos-ml
# REGION=us-west1
# PROJECT_ID=$(gcloud config list project --format "value(core.project)")
#
# #IMAGE=gcr.io/deeplearning-platform-release/tf2-cpu
# IMAGE=gcr.io/$PROJECT_ID/serverlessml_training_container
#
# gcloud beta ai-platform jobs submit training $JOBID \
# --staging-bucket=gs://$BUCKET --region=$REGION \
# --master-image-uri=$IMAGE \
# --master-machine-type=n1-standard-4 --scale-tier=CUSTOM
#
# # --module-name=trainer.model --package-path=$(pwd)/container/trainer
# -
# Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| quests/serverlessml/07_caip/train_caip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Low-Rank Autoregressive Tensor Completion (LATC)
#
# This notebook shows how to implement a LATC (with truncated nuclear norm) imputer on some real-world traffic data sets. To overcome the problem of missing values within multivariate time series data, this method takes into account both low-rank structure and time series regression. For an in-depth discussion of LATC, please see [1].
#
# <div class="alert alert-block alert-info">
# <font color="black">
# <b>[1]</b> <NAME>, <NAME>, <NAME>, <NAME> (2021). <b>Low-Rank Autoregressive Tensor Completion for Spatiotemporal Traffic Data Imputation</b>. Not available now. <a href="https://arxiv.org/abs/xxxx.xxxxx" title="PDF"><b>[PDF]</b></a>
# </font>
# </div>
#
# ### Define LATC-imputer kernel
#
# We start by introducing some necessary functions that rely on `NumPy`.
#
# <div class="alert alert-block alert-warning">
# <ul>
# <li><b><code>ten2mat</code>:</b> <font color="black">Unfold tensor as matrix by specifying mode.</font></li>
# <li><b><code>mat2ten</code>:</b> <font color="black">Fold matrix as tensor by specifying dimension (i.e., tensor size) and mode.</font></li>
# <li><b><code>svt_tnn</code>:</b> <font color="black">Implement the process of Singular Value Thresholding (SVT).</font></li>
# </ul>
# </div>
# +
import numpy as np
def ten2mat(tensor, mode):
return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')
def mat2ten(mat, dim, mode):
index = list()
index.append(mode)
for i in range(dim.shape[0]):
if i != mode:
index.append(i)
return np.moveaxis(np.reshape(mat, list(dim[index]), order = 'F'), 0, mode)
# -
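# > As a sanity check, unfolding a tensor along any mode with `ten2mat` and then folding the result back with `mat2ten` should reproduce the original tensor exactly. A minimal sketch (the helper definitions are repeated so the snippet is self-contained):

```python
import numpy as np

def ten2mat(tensor, mode):
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')

def mat2ten(mat, dim, mode):
    index = [mode] + [i for i in range(dim.shape[0]) if i != mode]
    return np.moveaxis(np.reshape(mat, list(dim[index]), order='F'), 0, mode)

tensor = np.arange(24).reshape(2, 3, 4)
dim = np.array(tensor.shape)
for mode in range(3):
    mat = ten2mat(tensor, mode)
    # the mode-k unfolding has shape (dim[k], product of the remaining dims)
    assert mat.shape == (tensor.shape[mode], tensor.size // tensor.shape[mode])
    # folding back recovers the original tensor exactly
    assert np.array_equal(mat2ten(mat, dim, mode), tensor)
```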
def svt_tnn(mat, tau, theta):
[m, n] = mat.shape
if 2 * m < n:
u, s, v = np.linalg.svd(mat @ mat.T, full_matrices = 0)
s = np.sqrt(s)
idx = np.sum(s > tau)
mid = np.zeros(idx)
mid[: theta] = 1
mid[theta : idx] = (s[theta : idx] - tau) / s[theta : idx]
return (u[:, : idx] @ np.diag(mid)) @ (u[:, : idx].T @ mat)
elif m > 2 * n:
return svt_tnn(mat.T, tau, theta).T
u, s, v = np.linalg.svd(mat, full_matrices = 0)
idx = np.sum(s > tau)
vec = s[: idx].copy()
vec[theta : idx] = s[theta : idx] - tau
return u[:, : idx] @ np.diag(vec) @ v[: idx, :]
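# > Two properties of the truncated-nuclear-norm shrinkage are worth checking numerically: with `tau = 0` nothing is thresholded and the input is reproduced, and with `tau > 0` the `theta` largest singular values pass through untouched while the remaining ones are reduced by `tau`. A sketch using only the general SVD branch of the kernel above (the test matrix is random toy data):

```python
import numpy as np

def svt_tnn(mat, tau, theta):
    # general branch of the kernel above (for square-ish matrices)
    u, s, v = np.linalg.svd(mat, full_matrices=0)
    idx = np.sum(s > tau)
    vec = s[:idx].copy()
    vec[theta:idx] = s[theta:idx] - tau
    return u[:, :idx] @ np.diag(vec) @ v[:idx, :]

rng = np.random.default_rng(0)
mat = rng.standard_normal((6, 6))
# tau = 0: no singular value is shrunk, so the matrix is reproduced
assert np.allclose(svt_tnn(mat, 0.0, 3), mat)
# tau > 0: the theta largest singular values are kept as-is (truncated nuclear norm)
s_orig = np.linalg.svd(mat, compute_uv=False)
s_new = np.linalg.svd(svt_tnn(mat, 0.5, 3), compute_uv=False)
assert np.allclose(s_new[:3], s_orig[:3])
```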
# <div class="alert alert-block alert-warning">
# <ul>
# <li><b><code>compute_mape</code>:</b> <font color="black">Compute the value of Mean Absolute Percentage Error (MAPE).</font></li>
# <li><b><code>compute_rmse</code>:</b> <font color="black">Compute the value of Root Mean Square Error (RMSE).</font></li>
# </ul>
# </div>
#
# > Note that $$\mathrm{MAPE}=\frac{1}{n} \sum_{i=1}^{n} \frac{\left|y_{i}-\hat{y}_{i}\right|}{y_{i}} \times 100, \quad\mathrm{RMSE}=\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}},$$ where $n$ is the total number of estimated values, and $y_i$ and $\hat{y}_i$ are the actual value and its estimation, respectively. (The code below reports MAPE as a fraction rather than a percentage.)
# +
def compute_mape(var, var_hat):
return np.sum(np.abs(var - var_hat) / var) / var.shape[0]
def compute_rmse(var, var_hat):
return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])
# -
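# > A tiny worked example of the two metrics, with hand-picked numbers chosen so the results are easy to verify by eye (the values are made up for illustration):

```python
import numpy as np

def compute_mape(var, var_hat):
    return np.sum(np.abs(var - var_hat) / var) / var.shape[0]

def compute_rmse(var, var_hat):
    return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])

var = np.array([100.0, 200.0, 400.0])      # actual values (toy data)
var_hat = np.array([110.0, 190.0, 400.0])  # estimates
# relative errors are 0.10, 0.05, 0.00 -> MAPE = 0.05 (a fraction, not a percentage)
assert np.isclose(compute_mape(var, var_hat), 0.05)
# squared errors are 100, 100, 0 -> RMSE = sqrt(200 / 3)
assert np.isclose(compute_rmse(var, var_hat), np.sqrt(200.0 / 3.0))
```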
def print_result(it, tol, var, var_hat):
print('Iter: {}'.format(it))
print('Tolerance: {:.6}'.format(tol))
print('Imputation MAPE: {:.6}'.format(compute_mape(var, var_hat)))
print('Imputation RMSE: {:.6}'.format(compute_rmse(var, var_hat)))
print()
# How to create $\boldsymbol{\Psi}_{0},\boldsymbol{\Psi}_{1},\ldots,\boldsymbol{\Psi}_{d}$?
# +
from scipy import sparse
from scipy.sparse.linalg import spsolve as spsolve
def generate_Psi(dim_time, time_lags):
Psis = []
max_lag = np.max(time_lags)
for i in range(len(time_lags) + 1):
row = np.arange(0, dim_time - max_lag)
if i == 0:
col = np.arange(0, dim_time - max_lag) + max_lag
else:
col = np.arange(0, dim_time - max_lag) + max_lag - time_lags[i - 1]
data = np.ones(dim_time - max_lag)
Psi = sparse.coo_matrix((data, (row, col)), shape = (dim_time - max_lag, dim_time))
Psis.append(Psi)
return Psis
# +
import numpy as np
# Example
dim_time = 5
time_lags = np.array([1, 3])
Psis = generate_Psi(dim_time, time_lags)
print('Psi_0:')
print(Psis[0].toarray())
print()
print('Psi_1:')
print(Psis[1].toarray())
print()
print('Psi_2:')
print(Psis[2].toarray())
print()
# -
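# > Inside `latc` (below), each time series row combines these matrices into the operator $\boldsymbol{B} = \boldsymbol{\Psi}_0 - \sum_{i} a_i \boldsymbol{\Psi}_i$, so that $\boldsymbol{B}\boldsymbol{z}$ stacks the autoregressive residuals $z_t - \sum_i a_i z_{t-h_i}$. A minimal sketch with toy AR coefficients and a toy series (the values of `a` and `z` are made up for illustration):

```python
import numpy as np
from scipy import sparse

def generate_Psi(dim_time, time_lags):
    Psis = []
    max_lag = np.max(time_lags)
    for i in range(len(time_lags) + 1):
        row = np.arange(0, dim_time - max_lag)
        if i == 0:
            col = row + max_lag
        else:
            col = row + max_lag - time_lags[i - 1]
        data = np.ones(dim_time - max_lag)
        Psis.append(sparse.coo_matrix((data, (row, col)), shape=(dim_time - max_lag, dim_time)))
    return Psis

dim_time = 6
time_lags = np.array([1, 2])
a = np.array([0.6, 0.3])              # toy AR coefficients
Psis = generate_Psi(dim_time, time_lags)
B = Psis[0] - a[0] * Psis[1] - a[1] * Psis[2]
z = np.arange(1.0, 7.0)               # toy series z_1, ..., z_6
resid = B @ z
# each row of B computes z_t - a_1 * z_{t-1} - a_2 * z_{t-2} for t = 3, ..., 6
assert np.allclose(resid, z[2:] - a[0] * z[1:-1] - a[1] * z[:-2])
```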
# The main idea behind LATC-imputer is to approximate partially observed data with both low-rank structure and time series dynamics. The following `latc` kernel includes some necessary inputs:
#
# <div class="alert alert-block alert-warning">
# <ul>
# <li><b><code>dense_tensor</code>:</b> <font color="black">This is an input which has the ground truth for validation. If this input is not available, you could use <code>dense_tensor = sparse_tensor.copy()</code> instead.</font></li>
# <li><b><code>sparse_tensor</code>:</b> <font color="black">This is a partially observed tensor which has many missing entries.</font></li>
# <li><b><code>time_lags</code>:</b> <font color="black">Time lags, e.g., <code>time_lags = np.array([1, 2, 3])</code>. </font></li>
# <li><b><code>alpha</code>:</b> <font color="black">Weights for tensors' nuclear norm, e.g., <code>alpha = np.ones(3) / 3</code>. </font></li>
# <li><b><code>rho</code>:</b> <font color="black">Learning rate for ADMM, e.g., <code>rho = 0.0005</code>. </font></li>
# <li><b><code>lambda0</code>:</b> <font color="black">Weight for time series regressor, e.g., <code>lambda0 = 5 * rho</code></font></li>
# <li><b><code>theta</code>:</b> <font color="black">Integer-wise truncation for truncated nuclear norm, e.g., <code>theta = 5</code></font></li>
# <li><b><code>epsilon</code>:</b> <font color="black">Stop criteria, e.g., <code>epsilon = 0.0001</code>. </font></li>
# <li><b><code>maxiter</code>:</b> <font color="black">Maximum iteration to stop algorithm, e.g., <code>maxiter = 100</code>. </font></li>
# </ul>
# </div>
def latc(dense_tensor, sparse_tensor, time_lags, alpha, rho0, lambda0, theta, epsilon, maxiter = 100, K = 3):
"""Low-Rank Autoregressive Tensor Completion (LATC)"""
dim = np.array(sparse_tensor.shape)
dim_time = int(np.prod(dim) / dim[0]) # np.int is deprecated/removed in recent NumPy
d = len(time_lags)
max_lag = np.max(time_lags)
sparse_mat = ten2mat(sparse_tensor, 0)
pos_missing = np.where(sparse_mat == 0)
pos_test = np.where((dense_tensor != 0) & (sparse_tensor == 0))
dense_test = dense_tensor[pos_test]
del dense_tensor
T = np.zeros(dim)
Z_tensor = sparse_tensor.copy()
Z = sparse_mat.copy()
A = 0.001 * np.random.rand(dim[0], d)
Psis = generate_Psi(dim_time, time_lags)
iden = sparse.coo_matrix((np.ones(dim_time), (np.arange(0, dim_time), np.arange(0, dim_time))),
shape = (dim_time, dim_time))
it = 0
ind = np.zeros((d, dim_time - max_lag), dtype = np.int_)
for i in range(d):
ind[i, :] = np.arange(max_lag - time_lags[i], dim_time - time_lags[i])
last_mat = sparse_mat.copy()
snorm = np.linalg.norm(sparse_mat, 'fro')
rho = rho0
while True:
temp = []
for m in range(dim[0]):
Psis0 = Psis.copy()
for i in range(d):
Psis0[i + 1] = A[m, i] * Psis[i + 1]
B = Psis0[0] - sum(Psis0[1 :])
temp.append(B.T @ B)
for k in range(K):
rho = min(rho * 1.05, 1e5)
tensor_hat = np.zeros(dim)
for p in range(len(dim)):
tensor_hat += alpha[p] * mat2ten(svt_tnn(ten2mat(Z_tensor - T / rho, p),
alpha[p] / rho, theta), dim, p)
temp0 = rho / lambda0 * ten2mat(tensor_hat + T / rho, 0)
mat = np.zeros((dim[0], dim_time))
for m in range(dim[0]):
mat[m, :] = spsolve(temp[m] + rho * iden / lambda0, temp0[m, :])
Z[pos_missing] = mat[pos_missing]
Z_tensor = mat2ten(Z, dim, 0)
T = T + rho * (tensor_hat - Z_tensor)
for m in range(dim[0]):
A[m, :] = np.linalg.lstsq(Z[m, ind].T, Z[m, max_lag :], rcond = None)[0]
mat_hat = ten2mat(tensor_hat, 0)
tol = np.linalg.norm((mat_hat - last_mat), 'fro') / snorm
last_mat = mat_hat.copy()
it += 1
if it % 200 == 0:
print_result(it, tol, dense_test, tensor_hat[pos_test])
if (tol < epsilon) or (it >= maxiter):
break
print_result(it, tol, dense_test, tensor_hat[pos_test])
return tensor_hat
# > We use `spsolve` of `scipy.sparse.linalg` for updating $\boldsymbol{Z}$ because computing the inverse of a large matrix directly is computationally expensive.
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
c = 1
theta = 20
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-4
lambda0 = c * rho
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon, maxiter = 10)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# ### Guangzhou urban traffic speed data set
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-4
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-4
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.9
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-4
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Non-random Missing (NM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
## Non-random Missing (NM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
dim_time = dim2 * dim3
block_window = 6
vec = np.random.rand(int(dim_time / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(ten2mat(dense_tensor, 0) * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20, 25, 30]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# +
import numpy as np
import time
import scipy.io
for r in [0.3, 0.7, 0.9]:
print('Missing rate = {}'.format(r))
missing_rate = r
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
start = time.time()
time_lags = np.array([1, 2, 3, 4, 5, 6])
alpha = np.ones(3) / 3
rho = 1e-4
lambda0 = 5 * rho
theta = 20
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import numpy as np
import time
import scipy.io
for r in [0.3, 0.7]:
print('Missing rate = {}'.format(r))
missing_rate = r
## Non-random Missing (NM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
start = time.time()
time_lags = np.array([1, 2, 3, 4, 5, 6])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 1 * rho
theta = 10
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
dim_time = dim2 * dim3
block_window = 6
vec = np.random.rand(int(dim_time / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(ten2mat(dense_tensor, 0) * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
start = time.time()
time_lags = np.array([1, 2, 3, 4, 5, 6])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 5 * rho
theta = 10
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# ### Hangzhou metro passenger flow data set
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.9
## Random Missing (RM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Non-random Missing (NM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
for theta in [5, 10, 15, 20]:
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = c * rho
print(c)
print(theta)
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
## Non-random Missing (NM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20]:
        start = time.time()
        time_lags = np.arange(1, 7)
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import time
import numpy as np
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
dim_time = dim2 * dim3
block_window = 6
vec = np.random.rand(int(dim_time / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(ten2mat(dense_tensor, 0) * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20]:
        start = time.time()
        time_lags = np.arange(1, 7)
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# -
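
# The block-out mask built above repeats one random draw across each length-`block_window` window, so observations drop out in contiguous blocks rather than independently. A standalone sketch (not part of the original notebook) of that construction:

```python
import numpy as np

np.random.seed(0)
block_window = 6
dim_time = 24
missing_rate = 0.3

# One random value per block, repeated block_window times (column-major reshape),
# then thresholded into a 0/1 mask -- the same trick used in the cells above.
vec = np.random.rand(dim_time // block_window)
vec = np.array([vec] * block_window).reshape(dim_time, order='F')
mask = np.round(vec + 0.5 - missing_rate)

# Each length-6 window is either fully observed or fully missing.
print(mask.reshape(-1, block_window))
```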
# +
import numpy as np
import time
import scipy.io
for r in [0.3, 0.7, 0.9]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    ## Random Missing (RM)
    dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
    dim1, dim2, dim3 = dense_tensor.shape
    np.random.seed(1000)
    sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
    start = time.time()
    time_lags = np.array([1, 2, 3, 4, 5, 6])
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 1 * rho
    theta = 10
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
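
# The random-missing mask relies on `np.round(np.random.rand(...) + 0.5 - missing_rate)`, which is 1 with probability `1 - missing_rate` and 0 otherwise. A quick check of that trick (an illustrative sketch, not from the original notebook):

```python
import numpy as np

np.random.seed(0)
missing_rate = 0.3
mask = np.round(np.random.rand(100000) + 0.5 - missing_rate)

print(sorted(np.unique(mask)))  # binary mask: [0.0, 1.0]
print(mask.mean())              # observed fraction, close to 1 - missing_rate = 0.7
```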
# +
import numpy as np
import time
import scipy.io
for r in [0.3, 0.7]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    ## Non-random Missing (NM)
    dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
    dim1, dim2, dim3 = dense_tensor.shape
    np.random.seed(1000)
    sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
    start = time.time()
    time_lags = np.array([1, 2, 3, 4, 5, 6])
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 1 * rho
    theta = 5
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor'].transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
dim_time = dim2 * dim3
block_window = 6
vec = np.random.rand(int(dim_time / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(ten2mat(dense_tensor, 0) * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
start = time.time()
time_lags = np.array([1, 2, 3, 4, 5, 6])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 1 * rho
theta = 10
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# ### Seattle freeway traffic speed data set
# +
import numpy as np
import pandas as pd
import time
import scipy.io
for r in [0.3, 0.7, 0.9]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    ## Random missing (RM)
    dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
    dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
    dim1, dim2, dim3 = dense_tensor.shape
    np.random.seed(1000)
    sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim2, dim3) + 0.5 - missing_rate)
    start = time.time()
    time_lags = np.arange(1, 7)
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 10 * rho
    theta = 25
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
# +
import numpy as np
import pandas as pd
import time
import scipy.io
for r in [0.3, 0.7]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    ## Non-random Missing (NM)
    dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
    dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
    dim1, dim2, dim3 = dense_tensor.shape
    np.random.seed(1000)
    sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
    start = time.time()
    time_lags = np.arange(1, 7)
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 10 * rho
    if r == 0.3:
        theta = 25
    elif r == 0.7:
        theta = 10
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
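
# In the non-random-missing setting the mask is drawn per (sensor, day) and broadcast over the time-of-day axis via `[:, None, :]`, so a sensor loses whole days at a time. A small standalone illustration (hypothetical toy shapes, not the real data):

```python
import numpy as np

np.random.seed(0)
dense = np.arange(2 * 3 * 4).reshape(2, 3, 4).astype(float) + 1.0  # strictly positive toy tensor
mask2d = np.round(np.random.rand(2, 4) + 0.5 - 0.5)  # shape (sensors, days)
sparse = dense * mask2d[:, None, :]                  # broadcast over time of day

print(sparse.shape)  # (2, 3, 4)
# Every zeroed (sensor, day) pair wipes out all time-of-day entries at once.
for i, k in np.argwhere(mask2d == 0):
    assert np.all(sparse[i, :, k] == 0)
```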
# +
import numpy as np
import pandas as pd
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
block_window = 12
vec = np.random.rand(int(dim2 * dim3 / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(dense_mat * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
start = time.time()
time_lags = np.arange(1, 7)
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 10 * rho
theta = 10
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# ### Portland highway traffic volume data set
# +
import numpy as np
import pandas as pd
import time
import scipy.io
for r in [0.3, 0.7, 0.9]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    # Random Missing (RM)
    dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
    dim1, dim2 = dense_mat.shape
    dim = np.array([dim1, 96, 31])
    dense_tensor = mat2ten(dense_mat, dim, 0)
    np.random.seed(1000)
    sparse_tensor = mat2ten(dense_mat * np.round(np.random.rand(dim1, dim2) + 0.5 - missing_rate), dim, 0)
    start = time.time()
    time_lags = np.array([1, 2, 3, 4])
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 1 * rho
    theta = 20
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
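
# The helpers `ten2mat` and `mat2ten` used throughout are defined earlier in the source notebook; below is a self-contained sketch of the usual mode-n unfolding and its inverse (an assumption about their behavior, consistent with how they are called here):

```python
import numpy as np

def ten2mat_sketch(tensor, mode):
    """Mode-`mode` unfolding: move that axis first, then flatten column-major."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')

def mat2ten_sketch(mat, dim, mode):
    """Inverse of the unfolding above, given the full tensor shape `dim`."""
    index = [mode] + [i for i in range(len(dim)) if i != mode]
    return np.moveaxis(np.reshape(mat, [dim[i] for i in index], order='F'), 0, mode)

# Round trip: folding an unfolding recovers the original tensor for every mode.
T = np.random.rand(4, 96, 31)
for mode in range(3):
    M = ten2mat_sketch(T, mode)
    assert np.allclose(mat2ten_sketch(M, np.array(T.shape), mode), T)
print('round trip OK')
```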
# +
import numpy as np
import pandas as pd
import time
import scipy.io
for r in [0.3, 0.7]:
    print('Missing rate = {}'.format(r))
    missing_rate = r
    # Non-random Missing (NM)
    dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
    dim1, dim2 = dense_mat.shape
    dim = np.array([dim1, 96, 31])
    dense_tensor = mat2ten(dense_mat, dim, 0)
    np.random.seed(1000)
    sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim[2]) + 0.5 - missing_rate)[:, None, :]
    start = time.time()
    time_lags = np.array([1, 2, 3, 4])
    alpha = np.ones(3) / 3
    rho = 1e-5
    lambda0 = 1 * rho
    theta = 5
    epsilon = 1e-4
    tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
    end = time.time()
    print('Running time: %d seconds'%(end - start))
    print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
block_window = 4
vec = np.random.rand(int(dim2 / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2], order = 'F')
sparse_tensor = mat2ten(dense_mat * np.round(vec + 0.5 - missing_rate)[None, :], dim, 0)
start = time.time()
time_lags = np.array([1, 2, 3, 4])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 1 * rho
theta = 5
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import numpy as np
import pandas as pd
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Non-random Missing (NM)
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25]:
        start = time.time()
        time_lags = np.arange(1, 7)
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import pandas as pd
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
## Non-random Missing (NM)
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim3) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25]:
        start = time.time()
        time_lags = np.arange(1, 7)
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import pandas as pd
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0).values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]).transpose(0, 2, 1)
dim1, dim2, dim3 = dense_tensor.shape
block_window = 12
vec = np.random.rand(int(dim2 * dim3 / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2 * dim3], order = 'F')
sparse_tensor = mat2ten(dense_mat * np.round(vec + 0.5 - missing_rate)[None, :], np.array([dim1, dim2, dim3]), 0)
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25, 30]:
        start = time.time()
        time_lags = np.arange(1, 7)
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# -
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
# Random Missing (RM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
np.random.seed(1000)
sparse_tensor = mat2ten(dense_mat * np.round(np.random.rand(dim1, dim2) + 0.5 - missing_rate), dim, 0)
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25, 30]:
        start = time.time()
        time_lags = np.array([1, 2, 3, 4])
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.9
# Random Missing (RM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
np.random.seed(1000)
sparse_tensor = mat2ten(dense_mat * np.round(np.random.rand(dim1, dim2) + 0.5 - missing_rate), dim, 0)
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25, 30]:
        start = time.time()
        time_lags = np.array([1, 2, 3, 4])
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
# Non-random Missing (NM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim[2]) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25, 30]:
        start = time.time()
        time_lags = np.array([1, 2, 3, 4])
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.7
# Non-random Missing (NM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
np.random.seed(1000)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim1, dim[2]) + 0.5 - missing_rate)[:, None, :]
for c in [1/10, 1/5, 1, 5, 10]:
    for theta in [5, 10, 15, 20, 25, 30]:
        start = time.time()
        time_lags = np.array([1, 2, 3, 4])
        alpha = np.ones(3) / 3
        rho = 1e-5
        lambda0 = c * rho
        print(c)
        print(theta)
        epsilon = 1e-4
        tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
        end = time.time()
        print('Running time: %d seconds'%(end - start))
        print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
# Random Missing (RM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
np.random.seed(1000)
sparse_tensor = mat2ten(dense_mat * np.round(np.random.rand(dim1, dim2) + 0.5 - missing_rate), dim, 0)
start = time.time()
time_lags = np.array([1, 2, 3, 4])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 1/5 * rho
theta = 10
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# +
import numpy as np
import time
import scipy.io
np.random.seed(1000)
missing_rate = 0.3
## Block-out Missing (BM)
dense_mat = np.load('../datasets/Portland-data-set/volume.npy')
dim1, dim2 = dense_mat.shape
dim = np.array([dim1, 96, 31])
dense_tensor = mat2ten(dense_mat, dim, 0)
block_window = 4
vec = np.random.rand(int(dim2 / block_window))
temp = np.array([vec] * block_window)
vec = temp.reshape([dim2], order = 'F')
sparse_tensor = mat2ten(dense_mat * np.round(vec + 0.5 - missing_rate)[None, :], dim, 0)
start = time.time()
time_lags = np.array([1, 2, 3, 4])
alpha = np.ones(3) / 3
rho = 1e-5
lambda0 = 1 * rho
theta = 5
epsilon = 1e-4
tensor_hat = latc(dense_tensor, sparse_tensor, time_lags, alpha, rho, lambda0, theta, epsilon)
end = time.time()
print('Running time: %d seconds'%(end - start))
print()
# -
# ### License
#
# <div class="alert alert-block alert-danger">
# <b>This work is released under the MIT license.</b>
# </div>
| imputer/LATC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# ## _*Quantum Fourier Transform*_
#
# In this tutorial, we [introduce](#introduction) the quantum Fourier transform (QFT), [derive](#circuit) the circuit, QASM, and QISKit code, and then [implement](#implementation) it using the simulator and a five-qubit device.
#
# The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.
#
# ***
# ### Contributors
# <NAME>
#
# ### Qiskit Package Versions
import qiskit
qiskit.__qiskit_version__
# ## Introduction <a id='introduction'></a>
#
# The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation.
# The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula
# $$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
# where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.
#
# Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula
# $$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
# with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.
#
# This can also be expressed as the map:
# $$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$
#
# Or the unitary matrix:
# $$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$
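
# Before deriving the circuit, the unitary above can be checked numerically. The sketch below (not part of the original tutorial) builds $U_{QFT}$ as a dense matrix, verifies unitarity, and compares it against NumPy's FFT convention:

```python
import numpy as np

def qft_matrix(N):
    """Dense QFT unitary: U[y, x] = omega_N^(x*y) / sqrt(N)."""
    x, y = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * x * y / N) / np.sqrt(N)

U = qft_matrix(8)
print(np.allclose(U @ U.conj().T, np.eye(8)))  # True: U is unitary
amps = np.random.rand(8) + 1j * np.random.rand(8)
# Same sum as np.fft.ifft up to the 1/sqrt(N) vs 1/N normalization:
print(np.allclose(U @ amps, np.sqrt(8) * np.fft.ifft(amps)))  # True
```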
# ## Circuit and Code <a id='circuit'></a>
#
# We've actually already seen the quantum Fourier transform for when $N = 2$, it is the Hadamard operator ($H$):
# $$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
# Suppose we have the single qubit state $\alpha \vert 0 \rangle + \beta \vert 1 \rangle$, if we apply the $H$ operator to this state, we obtain the new state:
# $$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle
# \equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$
# Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state.
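
# Numerically, the Hadamard matrix is exactly the $N = 2$ case of the QFT unitary (a quick check, not in the original notebook):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
omega = np.exp(2j * np.pi / 2)  # omega_2 = -1
qft2 = np.array([[omega ** (x * y) for x in range(2)] for y in range(2)]) / np.sqrt(2)
print(np.allclose(H, qft2))  # True
```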
# So what does the quantum Fourier transform look like for larger N? Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1...x_n \rangle$ where $x_1$ is the most significant bit.
#
# \begin{aligned}
# QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\
# & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle \:\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n\\
# & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 ... y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1...y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\
# & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 ... y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\
# & = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \\
# & = \frac{1}{\sqrt{N}} \left(\vert0\rangle + e^{2 \pi i[0.x_n]} \vert1\rangle\right) \otimes...\otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1 x_2...x_{n-1} x_n]} \vert1\rangle\right) \:\text{as}\: e^{2 \pi i x/2^k} = e^{2 \pi i[0.x_k...x_n]}
# \end{aligned}
#
# This is a very useful form of the QFT for $N=2^n$ as only the last qubit depends on the
# values of all the other input qubits, and each further bit depends less and less on the input qubits. Furthermore, note that $e^{2 \pi i[0.x_n]}$ is either $+1$ or $-1$, which resembles the Hadamard transform.
#
# For the QFT circuit, together with the Hadamard gate, we will also need the controlled phase rotation gate, as defined in [OpenQASM](https://github.com/QISKit/openqasm), to implement the dependencies between the bits:
# $$CU_1(\theta) =
# \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\theta}\end{bmatrix}$$
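
# As a $4 \times 4$ array, the controlled phase rotation is diagonal; for example, $CU_1(\pi)$ is the controlled-$Z$ gate (a small illustrative check, not part of the original tutorial):

```python
import numpy as np

def cu1(theta):
    # Diagonal controlled-phase gate: multiplies |11> by e^{i*theta}.
    return np.diag([1, 1, 1, np.exp(1j * theta)])

print(np.allclose(cu1(np.pi), np.diag([1, 1, 1, -1])))  # True: CU1(pi) == controlled-Z
```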
# Before we create the circuit code for general $N=2^n$, let's look at $N=8,n=3$:
# $$QFT_8\vert x_1x_2x_3\rangle = \frac{1}{\sqrt{8}} \left(\vert0\rangle + e^{2 \pi i[0.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2.x_3]} \vert1\rangle\right) $$
#
# The steps to creating the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$ would be:
# 1. Apply a Hadamard to $\vert x_3 \rangle$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i[0.x_3]} \vert1\rangle\right) = \frac{1}{\sqrt{2}}\left(\vert0\rangle + (-1)^{x_3} \vert1\rangle\right)$
# 2. Apply a Hadamard to $\vert x_2 \rangle$, then, depending on $x_3$ (its value before the Hadamard gate), a $CU_1(\frac{\pi}{2})$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i[0.x_2x_3]} \vert1\rangle\right)$
# 3. Apply a Hadamard to $\vert x_1 \rangle$, then a $CU_1(\frac{\pi}{2})$ depending on $x_2$, and a $CU_1(\frac{\pi}{4})$ depending on $x_3$.
# 4. Measure the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$.
#
# In the Quantum Experience composer (if controlled phase rotation gates were available) this circuit would look like:
# <img src="../images/qft3.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="400 px" align="center">
#
# In QASM, it is:
# ```
# qreg q[3];
# creg c[3];
# h q[0];
# cu1(pi/2) q[1],q[0];
# h q[1];
# cu1(pi/4) q[2],q[0];
# cu1(pi/2) q[2],q[1];
# h q[2];
# ```
#
# In QISKit, it is:
# ```
# q = QuantumRegister(3)
# c = ClassicalRegister(3)
#
# qft3 = QuantumCircuit(q, c)
# qft3.h(q[0])
# qft3.cu1(math.pi/2.0, q[1], q[0])
# qft3.h(q[1])
# qft3.cu1(math.pi/4.0, q[2], q[0])
# qft3.cu1(math.pi/2.0, q[2], q[1])
# qft3.h(q[2])
# ```
#
# For $N=2^n$, this can be generalised, as in the `qft` function in [tools.qi](https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/tools/qi/qi.py):
# ```
# def qft(circ, q, n):
#     """n-qubit QFT on q in circ."""
#     for j in range(n):
#         for k in range(j):
#             circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
#         circ.h(q[j])
# ```
# ## Implementation <a id='implementation'></a>
# +
import math
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
from qiskit.tools.visualization import plot_histogram
# -
IBMQ.load_account()
# First let's define the QFT function, as well as a function that creates a state from which a QFT will return 1:
# +
def input_state(circ, q, n):
    """n-qubit input state for QFT that produces output 1."""
    for j in range(n):
        circ.h(q[j])
        circ.u1(-math.pi/float(2**(j)), q[j])

def qft(circ, q, n):
    """n-qubit QFT on q in circ."""
    for j in range(n):
        for k in range(j):
            circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
        circ.h(q[j])
# -
# Let's now implement a QFT on a prepared three qubit input state that should return $001$:
# +
q = QuantumRegister(3)
c = ClassicalRegister(3)
qft3 = QuantumCircuit(q, c)
input_state(qft3, q, 3)
qft(qft3, q, 3)
for i in range(3):
    qft3.measure(q[i], c[i])
print(qft3.qasm())
# +
# run on local simulator
backend = Aer.get_backend("qasm_simulator")
simulate = execute(qft3, backend=backend, shots=1024).result()
simulate.get_counts()
# -
# We indeed see that the outcome is always $001$ when we execute the code on the simulator.
#
#
# We then see how the same circuit can be executed on real-device backends.
# Use the IBM Quantum Experience
backend = least_busy(IBMQ.backends(simulator=False))
shots = 1024
job_exp = execute(qft3, backend=backend, shots=shots)
job_monitor(job_exp)
results = job_exp.result()
plot_histogram(results.get_counts())
# We see that the highest-probability outcome is $001$ when we execute the code on the IBMQ device.
| terra/qis_adv/fourier_transform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from IPython.display import IFrame
# # Responsive Web Design Certification
# ## CSS Grid
# ### Introduction to the CSS Grid Challenges
# CSS Grid helps you easily build complex web designs. It works by turning an HTML element into a grid container with rows and columns for you to place child elements where you want within the grid.
# ### Create Your First CSS Grid
# Turn any HTML element into a grid container by setting its `display` property to `grid`. This gives you the ability to use all the other properties associated with CSS Grid.
#
# Note: In CSS Grid, the parent element is referred to as the container and its children are called items.
#
# ---
# Change the display of the div with the `container` class to `grid`.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# width: 100%;
# background: LightGray;
# /* add your code below this line */
# display: grid;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/000-create-your-first-css-grid.html', '100%', 250)
# ### Add Columns with grid-template-columns
# Simply creating a grid element doesn't get you very far. You need to define the structure of the grid as well. To add some columns to the grid, use the `grid-template-columns` property on a grid container as demonstrated below:
#
# ```CSS
#
# .container {
# display: grid;
# grid-template-columns: 50px 50px;
# }
# ```
#
# This will give your grid two columns that are each 50px wide. The number of parameters given to the `grid-template-columns` property indicates the number of columns in the grid, and the value of each parameter indicates the width of each column.
#
# ---
# Give the grid container three columns that are each `100px` wide.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# width: 100%;
# background: LightGray;
# display: grid;
# /* add your code below this line */
# grid-template-columns: 100px 100px 100px;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/001-add-columns-with-grid-template-columns.html', '100%', 250)
# ### Add Rows with grid-template-rows
# The grid you created in the last challenge will set the number of rows automatically. To adjust the rows manually, use the `grid-template-rows` property in the same way you used `grid-template-columns` in the previous challenge.
#
# ---
# Add two rows to the grid that are `50px` tall each.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 100px 100px 100px;
# /* add your code below this line */
# grid-template-rows: 50px 50px;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/002-add-rows-with-grid-template-rows.html', '100%', 250)
# ### Use CSS Grid units to Change the Size of Columns and Rows
# You can use absolute and relative units like `px` and `em` in CSS Grid to define the size of rows and columns. You can use these as well:
#
# `fr`: sets the column or row to a fraction of the available space,
#
# `auto`: sets the column or row to the width or height of its content automatically,
#
# `%`: adjusts the column or row to the percent width of its container.
#
# Here's the code that generates the output in the preview:
#
# ```CSS
# grid-template-columns: auto 50px 10% 2fr 1fr;
# ```
#
# This snippet creates five columns. The first column is as wide as its content, the second column is 50px, the third column is 10% of its container, and for the last two columns, the remaining space is divided into three sections: two are allocated for the fourth column and one for the fifth.
#
# ---
# Make a grid with three columns whose widths are as follows: 1fr, 100px, and 2fr.
#
#
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# width: 100%;
# background: LightGray;
# display: grid;
# /* modify the code below this line */
#
# grid-template-columns: 1fr 100px 2fr;
#
# /* modify the code above this line */
# grid-template-rows: 50px 50px;
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/003-use-css-grid-units-to-change-the-size-of-columns-and-rows.html', '100%', 250)
# ### Create a Column Gap Using grid-column-gap
# So far in the grids you have created, the columns have all been tight up against each other. Sometimes you want a gap in between the columns. To add a gap between the columns, use the `grid-column-gap` property like this:
#
# ```CSS
# grid-column-gap: 10px;
# ```
#
# This creates 10px of empty space between all of our columns.
#
# ---
# Give the columns in the grid a `20px` gap.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# /* add your code below this line */
# grid-column-gap: 20px;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/004-create-a-column-gap-using-grid-column-gap.html', '100%', 250)
# ### Create a Row Gap using grid-row-gap
# You can add a gap in between the rows of a grid using `grid-row-gap` in the same way that you added a gap in between columns in the previous challenge.
#
# ---
# Create a gap for the rows that is `5px` tall.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# /* add your code below this line */
# grid-row-gap: 5px;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/005-create-a-row-gap-using-grid-row-gap.html', '100%', 250)
# ### Add Gaps Faster with grid-gap
# `grid-gap` is a shorthand property for `grid-row-gap` and `grid-column-gap` from the previous two challenges that's more convenient to use. If `grid-gap` has one value, it will create a gap between all rows and columns. However, if there are two values, it will use the first one to set the gap between the rows and the second value for the columns.
#
# ---
# Use `grid-gap` to introduce a `10px` gap between the rows and `20px` gap between the columns.
# ```HTML
# <style>
# .d1{background:LightSkyBlue;}
# .d2{background:LightSalmon;}
# .d3{background:PaleTurquoise;}
# .d4{background:LightPink;}
# .d5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# /* add your code below this line */
# grid-gap: 10px 20px;
# /* add your code above this line */
# }
# </style>
# <div class="container">
# <div class="d1">1</div>
# <div class="d2">2</div>
# <div class="d3">3</div>
# <div class="d4">4</div>
# <div class="d5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/006-add-gaps-faster-with-grid-gap.html', '100%', 250)
# ### Use grid-column to Control Spacing
# Up to this point, all the properties that have been discussed are for grid containers. The `grid-column` property is the first one for use on the grid items themselves.
#
# The hypothetical horizontal and vertical lines that create the grid are referred to as lines. These lines are numbered starting with 1 at the top left corner of the grid and move right for columns and down for rows, counting upward.
#
# This is what the lines look like for a 3x3 grid:
#
# 
#
# To control the amount of columns an item will consume, you can use the `grid-column` property in conjunction with the line numbers you want the item to start and stop at.
#
# Here's an example:
#
# ```CSS
# grid-column: 1 / 3;
# ```
#
# This will make the item start at the first vertical line of the grid on the left and span to the 3rd line of the grid, consuming two columns.
#
# ---
# Make the item with the class `item5` consume the last two columns of the grid.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
#
# .item5 {
# background: PaleGreen;
# /* add your code below this line */
# grid-column: 2/4;
# /* add your code above this line */
# }
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/007-use-grid-column-to-control-spacing.html', '100%', 250)
# ### Use grid-row to Control Spacing
# Of course, you can make items consume multiple rows just like you can with columns. You define the horizontal lines you want an item to start and stop at using the `grid-row` property on a grid item.
#
# ---
# Make the element with the `item5` class consume the last two rows.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
#
# .item5 {
# background: PaleGreen;
# grid-column: 2 / 4;
# /* add your code below this line */
# grid-row: 2 / 4;
# /* add your code above this line */
# }
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/008-use-grid-row-to-control-spacing.html', '100%', 250)
# ### Align an Item Horizontally using justify-self
# In CSS Grid, the content of each item is located in a box which is referred to as a cell. You can align the content's position within its cell horizontally using the `justify-self` property on a grid item. By default, this property has a value of `stretch`, which will make the content fill the whole width of the cell. This CSS Grid property accepts other values as well:
#
# `start`: aligns the content at the left of the cell,
#
# `center`: aligns the content in the center of the cell,
#
# `end`: aligns the content at the right of the cell.
#
# ---
# Use the `justify-self` property to center the item with the class `item2`.
# ```HTML
# <style>
# .item1{background: LightSkyBlue;}
#
# .item2 {
# background: LightSalmon;
# /* add your code below this line */
# justify-self: center;
# /* add your code above this line */
# }
#
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/009-align-an-item-horizontally-using-justify-self.html', '100%', 250)
# ### Align an Item Vertically using align-self
# Just as you can align an item horizontally, there's a way to align an item vertically as well. To do this, you use the `align-self` property on an item. This property accepts all of the same values as `justify-self` from the last challenge.
#
# ---
# Align the item with the class `item3` vertically at the `end`.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
#
# .item3 {
# background: PaleTurquoise;
# /* add your code below this line */
# align-self: end;
# /* add your code above this line */
# }
#
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
# ```
IFrame('./source-solution/07-css-grid/010-align-an-item-vertically-using-align-self.html', '100%', 250)
# ### Align All Items Horizontally using justify-items
# Sometimes you want all the items in your CSS Grid to share the same alignment. You can use the previously learned properties and align them individually, or you can align them all at once horizontally by using `justify-items` on your grid container. This property can accept all the same values you learned about in the previous two challenges, the difference being that it will move all the items in our grid to the desired alignment.
#
# ---
# Use this property to center all our items horizontally.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# /* add your code below this line */
# justify-items: center;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/011-align-all-items-horizontally-using-justify-items.html', '100%', 250)
# ### Align All Items Vertically using align-items
# Using the `align-items` property on a grid container will set the vertical alignment for all the items in our grid.
#
# ---
# Use it now to move all the items to the end of each cell.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# /* add your code below this line */
# align-items: end;
# /* add your code above this line */
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/012-align-all-items-vertically-using-align-items.html', '100%', 250)
# ### Divide the Grid Into an Area Template
# You can group cells of your grid together into an area and give the area a custom name. Do this by using `grid-template-areas` on the container like this:
#
# ```CSS
#
# grid-template-areas:
# "header header header"
# "advert content content"
# "footer footer footer";
# ```
#
# The code above merges the top three cells together into an area named `header`, the bottom three cells into a `footer` area, and it makes two areas in the middle row: `advert` and `content`. Note: every word in the code represents a cell and every pair of quotation marks represents a row. In addition to custom labels, you can use a period (`.`) to designate an empty cell in the grid.
#
# ---
# Place the area template so that the cell labeled `advert` becomes an empty cell.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# /* change code below this line */
# grid-template-areas:
#
# "header header header"
# ". content content"
# "footer footer footer";
# /* change code above this line */
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/013-divide-the-grid-into-an-area-template.html', '100%', 250)
# ### Place Items in Grid Areas Using the grid-area Property
# After creating an area's template for your grid container, as shown in the previous challenge, you can place an item in your custom area by referencing the name you gave it. To do this, you use the `grid-area` property on an item like this:
#
# ```CSS
# .item1 {
# grid-area: header;
# }
# ```
#
# This lets the grid know that you want the `item1` class to go in the area named `header`. In this case, the item will use the entire top row because that whole row is named as the header area.
#
# ---
# Place an element with the `item5` class in the `footer` area using the `grid-area` property.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
#
# .item5 {
# background: PaleGreen;
# /* add your code below this line */
# grid-area: footer;
# /* add your code above this line */
# }
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# grid-template-areas:
# "header header header"
# "advert content content"
# "footer footer footer";
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/014-place-items-in-grid-areas-using-the-grid-area-property.html', '100%', 250)
# ### Use grid-area Without Creating an Areas Template
# The `grid-area` property you learned in the last challenge can be used in another way. If your grid doesn't have an areas template to reference, you can create an area on the fly for an item to be placed like this:
#
# ```CSS
# .item1 { grid-area: 1/1/2/4; }
# ```
#
# This is using the line numbers you learned about earlier to define where the area for this item will be. The numbers in the example above represent these values:
#
# ```CSS
# grid-area: horizontal line to start at / vertical line to start at / horizontal line to end at / vertical line to end at;
# ```
#
# So the item in the example will consume the rows between lines 1 and 2, and the columns between lines 1 and 4.
#
# ---
# Using the `grid-area` property, place the element with `item5` class between the third and fourth horizontal lines and between the first and fourth vertical lines.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
#
# .item5 {
# background: PaleGreen;
# /* add your code below this line */
# grid-area: 3/1/4/4;
#
# /* add your code above this line */
# }
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr 1fr 1fr;
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/015-use-grid-area-without-creating-an-areas-template.html', '100%', 250)
# ### Reduce Repetition Using the repeat Function
# When you used `grid-template-columns` and `grid-template-rows` to define the structure of a grid, you entered a value for each row or column you created.
#
# Let's say you want a grid with 100 rows of the same height. It isn't very practical to insert 100 values individually. Fortunately, there's a better way - by using the `repeat` function to specify the number of times you want your column or row to be repeated, followed by a comma and the value you want to repeat.
#
# Here's an example that would create the 100 row grid, each row at 50px tall.
#
# ```CSS
# grid-template-rows: repeat(100, 50px);
# ```
#
# You can also repeat multiple values with the repeat function and insert the function amongst other values when defining a grid structure. Here's what that looks like:
#
# ```CSS
# grid-template-columns: repeat(2, 1fr 50px) 20px;
# ```
#
# This translates to:
#
# ```CSS
# grid-template-columns: 1fr 50px 1fr 50px 20px;
# ```
#
# Note: The `1fr 50px` is repeated twice, followed by `20px`.
#
# ---
# Use `repeat` to remove repetition from the `grid-template-columns` property.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# /* change the code below this line */
#
# grid-template-columns: repeat(3, 1fr);
#
# /* change the code above this line */
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/016-reduce-repetition-using-the-repeat-function.html', '100%', 250)
# ### Limit Item Size Using the minmax Function
# There's another built-in function to use with `grid-template-columns` and `grid-template-rows` called `minmax`. It's used to limit the size of items when the grid container changes size. To do this you need to specify the acceptable size range for your item. Here is an example:
#
# ```CSS
# grid-template-columns: 100px minmax(50px, 200px);
# ```
#
# In the code above, `grid-template-columns` is set to create two columns; the first is 100px wide, and the second has the minimum width of 50px and the maximum width of 200px.
#
# ---
# Using the `minmax` function, replace the `1fr` in the `repeat` function with a column size that has the minimum width of `90px` and the maximum width of `1fr`, and resize the preview panel to see the effect.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# /* change the code below this line */
#
# grid-template-columns: repeat(3, minmax(90px, 1fr));
#
# /* change the code above this line */
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/017-limit-item-size-using-the-minmax-function.html', '100%', 250)
# ### Create Flexible Layouts Using auto-fill
# The `repeat` function comes with an option called `auto-fill`. This allows you to automatically insert as many rows or columns of your desired size as possible, depending on the size of the container. You can create flexible layouts by combining `auto-fill` with `minmax`, like this:
#
# ```CSS
# repeat(auto-fill, minmax(60px, 1fr));
# ```
#
# When the container changes size, this setup keeps inserting 60px columns and stretching them until it can insert another one. Note: If your container can't fit all your items on one row, it will move them down to a new one.
#
# ---
# In the first grid, use `auto-fill` with `repeat` to fill the grid with columns that have a minimum width of `60px` and maximum of `1fr`. Then resize the preview to see auto-fill in action.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 100px;
# width: 100%;
# background: LightGray;
# display: grid;
# /* change the code below this line */
#
# grid-template-columns: repeat(auto-fill, minmax(60px, 1fr));
#
# /* change the code above this line */
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
#
# .container2 {
# font-size: 40px;
# min-height: 100px;
# width: 100%;
# background: Silver;
# display: grid;
# grid-template-columns: repeat(3, minmax(60px, 1fr));
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
# <div class="container2">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
#
# ```
IFrame('./source-solution/07-css-grid/018-create-flexible-layouts-using-auto-fill.html', '100%', 250)
# ### Create Flexible Layouts Using auto-fit
# `auto-fit` works almost identically to `auto-fill`. The only difference is that when the container's size exceeds the size of all the items combined, `auto-fill` keeps inserting empty rows or columns and pushes your items to the side, while `auto-fit` collapses those empty rows or columns and stretches your items to fit the size of the container.
#
# Note: If your container can't fit all your items on one row, it will move them down to a new one.
#
# ---
# In the second grid, use `auto-fit` with `repeat` to fill the grid with columns that have a minimum width of `60px` and maximum of `1fr`. Then resize the preview to see the difference.
# ```HTML
# <style>
# .item1{background:LightSkyBlue;}
# .item2{background:LightSalmon;}
# .item3{background:PaleTurquoise;}
# .item4{background:LightPink;}
# .item5{background:PaleGreen;}
#
# .container {
# font-size: 40px;
# min-height: 100px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: repeat(auto-fill, minmax(60px, 1fr));
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
#
# .container2 {
# font-size: 40px;
# min-height: 100px;
# width: 100%;
# background: Silver;
# display: grid;
# /* change the code below this line */
#
# grid-template-columns: repeat(auto-fit, minmax(60px, 1fr));
#
# /* change the code above this line */
# grid-template-rows: 1fr 1fr 1fr;
# grid-gap: 10px;
# }
# </style>
#
# <div class="container">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
# <div class="container2">
# <div class="item1">1</div>
# <div class="item2">2</div>
# <div class="item3">3</div>
# <div class="item4">4</div>
# <div class="item5">5</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/019-create-flexible-layouts-using-auto-fit.html', '100%', 250)
# ### Use Media Queries to Create Responsive Layouts
# CSS Grid can be an easy way to make your site more responsive by using media queries to rearrange grid areas, change dimensions of a grid, and rearrange the placement of items.
#
# In the preview, when the viewport width is 300px or more, the number of columns changes from 1 to 2. The advertisement area then occupies the left column completely.
#
# ---
# When the viewport width is `400px` or more, make the header area occupy the top row completely and the footer area occupy the bottom row completely.
# ```HTML
# <style>
# .item1 {
# background: LightSkyBlue;
# grid-area: header;
# }
#
# .item2 {
# background: LightSalmon;
# grid-area: advert;
# }
#
# .item3 {
# background: PaleTurquoise;
# grid-area: content;
# }
#
# .item4 {
# background: lightpink;
# grid-area: footer;
# }
#
# .container {
# font-size: 1.5em;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: 1fr;
# grid-template-rows: 50px auto 1fr auto;
# grid-gap: 10px;
# grid-template-areas:
# "header"
# "advert"
# "content"
# "footer";
# }
#
# @media (min-width: 300px){
# .container{
# grid-template-columns: auto 1fr;
# grid-template-rows: auto 1fr auto;
# grid-template-areas:
# "advert header"
# "advert content"
# "advert footer";
# }
# }
#
# @media (min-width: 400px){
# .container{
# /* change the code below this line */
# grid-template-columns: repeat(2, 1fr);
# grid-template-rows: 1fr 1fr 1fr;
# grid-template-areas:
# "header header"
# "advert content"
# "footer footer";
# /* change the code above this line */
# }
# }
# </style>
#
# <div class="container">
# <div class="item1">header</div>
# <div class="item2">advert</div>
# <div class="item3">content</div>
# <div class="item4">footer</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/020-use-media-queries-to-create-responsive-layouts.html', '100%', 250)
# ### Create Grids within Grids
# Turning an element into a grid only affects the behavior of its direct descendants. So by turning a direct descendant into a grid, you have a grid within a grid.
#
# For example, by setting the `display` and `grid-template-columns` properties of the element with the `item3` class, you create a grid within your grid.
#
# ---
# Turn the element with the `item3` class into a grid with two columns with a width of `auto` and `1fr` using `display` and `grid-template-columns`.
# ```HTML
# <style>
# .container {
# font-size: 1.5em;
# min-height: 300px;
# width: 100%;
# background: LightGray;
# display: grid;
# grid-template-columns: auto 1fr;
# grid-template-rows: auto 1fr auto;
# grid-gap: 10px;
# grid-template-areas:
# "advert header"
# "advert content"
# "advert footer";
# }
# .item1 {
# background: LightSkyBlue;
# grid-area: header;
# }
#
# .item2 {
# background: LightSalmon;
# grid-area: advert;
# }
#
# .item3 {
# background: PaleTurquoise;
# grid-area: content;
# /* enter your code below this line */
# display: grid;
# grid-template-columns: auto 1fr;
# /* enter your code above this line */
# }
#
# .item4 {
# background: lightpink;
# grid-area: footer;
# }
#
# .itemOne {
# background: PaleGreen;
# }
#
# .itemTwo {
# background: BlanchedAlmond;
# }
#
# </style>
#
# <div class="container">
# <div class="item1">header</div>
# <div class="item2">advert</div>
# <div class="item3">
# <div class="itemOne">paragraph1</div>
# <div class="itemTwo">paragraph2</div>
# </div>
# <div class="item4">footer</div>
# </div>
#
# ```
IFrame('./source-solution/07-css-grid/021-create-grids-within-grids.html', '100%', 250)
| responsive-web-design/07-css-grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 2: Regression and Feature Engineering
#
# By <NAME> and <NAME> with the help of <NAME>, <NAME>, and <NAME>
#
# Using climate change datasets from NASA's Climate Change Initiative: https://climate.nasa.gov/vital-signs/
# ## Preamble
#
# Download the `hw2` folder from here: https://github.com/nfrumkin/EC414/tree/master/homework/hw2 (or clone the EC414 repo, if you prefer).
#
# To run and solve this assignment, you must have a working Jupyter Notebook installation.
#
# If you followed the installation instructions for `Python 3.6.x` and `Jupyter Notebook` from discussion 1, you should be set. In the terminal (cmd for Windows users), navigate to the `hw2` folder. Then type `jupyter notebook` and press `Enter`.
#
# If you have Anaconda, run Anaconda and choose this file ([`Homework 2 - Regression and Feature Engineering.ipynb`](https://gist.githubusercontent.com/MInner/eb6330a655a5c37b82e15d1c84fd4cd0/raw/)) in Anaconda's file explorer. Use `Python 3` version.
#
# The statements below assume that you have already followed these instructions. If you need help with Python syntax, NumPy, or Matplotlib, you might find [Week 1 discussion material](https://github.com/nfrumkin/EC414/blob/master/discussions/Week%201%20-%20Python%20Review.ipynb) useful.
#
# To run code in a cell or to render [Markdown](https://en.wikipedia.org/wiki/Markdown)+[LaTeX](https://en.wikipedia.org/wiki/LaTeX) press `Ctrl+Enter` or `[>|]`(like "play") button above. To edit any code or text cell [double]click on its content. To change cell type, choose "Markdown" or "Code" in the drop-down menu above.
#
# Put your solution into boxes marked with **`[double click here to add a solution]`** and press Ctrl+Enter to render text. [Double]click on a cell to edit or to see its source code. You can add cells via **`+`** sign at the top left corner.
#
# Submission instructions: please upload your completed solution file to Blackboard by the due date (see Schedule). If you have pen-and-paper answers, please hand them in in class on the same day.
# ## Problem 1: Linear Algebra Review
# Let $B$ be a 4x4 matrix to which we apply the following operations:
# 1. double column 2
# 2. interchange columns 1 and 4
# 3. halve row 1
# 4. add row 3 to row 1
# 5. subtract row 4 from each of the other rows
# 6. replace column 3 by column 4
# 7. delete column 2 (so that the column dimension is reduced by 1)
#
# (a) Write the result as a product of eight matrices.
#
# (b) Write it again as a product $ABC$ (same $B$) of three matrices.
# 
# ## Problem 2: Weighted Least Squares
#
# Given an $n$ x $d$ matrix $X$ of $n$ data points, each point
# $d$-dimensional, an $n$ x $1$ vector $Y$ of observed values and an $n$ x $n$ diagonal weight matrix $A$,
# deduce the closed form expression for the coefficients in the weighted linear least squares
# regression problem given below. Consider both slope and bias terms as coefficients. In
# weighted linear least squares, the term minimized is the weighted sum of squared distances
#
# $\sum_{i=1}^{n}\alpha_i(y_i - \sum_{j=1}^{d}X_{ij}w_j-b)^2$
#
# where $\alpha_i$ is the ith diagonal element of matrix A, and the other symbols have their usual
# meaning.
#
# Show all steps in your derivation.
# 
# ## Problem 3: Ordinary Least Squares Regression
#
# Given the Greenland ice sheet mass data over the past 17 years (source: `greenland_mass.csv`):
#
# **a.** Plot the data using matplotlib. Be sure to label axes.
#
# **b.** Apply Ordinary Least Squares regression to the data, and predict the ice sheet mass for the time step t=2018 (that is, 1 January 2018).
#
# **c.** Now plot the data and the regression curve in the same figure.
# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
# -
# read in data
greenland_df = pd.read_csv('greenland_mass.csv', index_col=0)
greenland_df.head()
# part a - visualize given data
plt.plot(greenland_df['time'].values, greenland_df['mass_diff'].values)
plt.title('Greenland Mass')
plt.xlabel('Year')
plt.ylabel('Mass difference (in Gigatonne)')
plt.tight_layout()
plt.show()
# +
# part b
# NOTE: scikit-learn cannot be used for this part; it is used here only as a sanity check.
# The solution must be coded from scratch.
# fit curve using ols regression
ols_model = LinearRegression()
ols_model.fit(X=greenland_df['time'].values.reshape(-1, 1), y=greenland_df['mass_diff'])
print('OLS coefficient = {:.2f} intercept = {:.2f}'.format(ols_model.coef_[0], ols_model.intercept_))
# predict using ols model
pred_mass_var = ols_model.predict(X=[[2018]])
print('2018 predicted mass variation = {:.5f} gigatonnes'.format(pred_mass_var[0]))
# -
def ols_fit(X, y):
"""Calculates least squares regression solution for given data X and labels y
Uses the formula W = (X^T . X)^-1 . X^T . y, where
W is the vector of coefficients + intercept
X is the data matrix with a column of 1's attached
y is the labels matrix
Arguments:
X {np.ndarray} -- (num_points, num_features) shaped ndarray of data points
y {np.ndarray} -- (num_points, 1) shaped ndarray of labels
Returns:
w {np.ndarray} -- (num_features, 1) shaped ndarray representing coefficients
b {np.ndarray} -- (1, 1) shaped ndarray representing the intercept
"""
# add column of 1's to X, thus making it X_ext which is needed in the formula
X_ext = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
# apply the OLS formula to get w_ext
w_ext = np.linalg.inv(X_ext.transpose() @ X_ext) @ X_ext.transpose() @ y
# separate coefficients and intercept from w_ext, i.e. get w and b
w = w_ext[:-1, :] # all but last value will be coefficients
b = w_ext[-1:, :] # last value will be intercept
return w, b
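# As an extra sanity check (not part of the required from-scratch solution), the normal-equation
# solver above can be compared against NumPy's built-in least-squares routine on noise-free
# synthetic data with known coefficients:

```python
import numpy as np

def ols_fit(X, y):
    """Normal-equation OLS: w_ext = (X^T X)^-1 X^T y with an appended bias column of 1's."""
    X_ext = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
    w_ext = np.linalg.inv(X_ext.T @ X_ext) @ X_ext.T @ y
    return w_ext[:-1, :], w_ext[-1:, :]

# Noise-free synthetic data generated by y = 2x + 5
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X + 5.0

w, b = ols_fit(X, y)

# np.linalg.lstsq on the same extended design matrix should agree
X_ext = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
w_lstsq, *_ = np.linalg.lstsq(X_ext, y, rcond=None)
assert np.allclose(np.vstack([w, b]), w_lstsq)
```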
# +
# extract X and y from dataset
X = greenland_df['time'].values
y = greenland_df['mass_diff'].values
# reshape so that X and y are treated as matrices/vectors, not just array of values
X = X.reshape(-1, 1) # changes shape from (n,) to (n,1)
y = y.reshape(-1, 1) # changes shape from (n,) to (n,1)
# calculate the ols solution
w, b = ols_fit(X, y)
print('OLS coefficient = {:.2f} intercept = {:.2f}'.format(w[0, 0], b[0, 0]))
# predict using ols solution
to_pred = [[2018]]
pred_mass_var = to_pred @ w + b
print('2018 predicted mass variation = {:.5f} gigatonnes'.format(pred_mass_var[0, 0]))
# -
# part c - plotting the model with data
plt.plot(X, y, label='Data')
plt.plot(X, X @ w + b, label='OLS fit')
plt.title('Greenland Mass')
plt.xlabel('Year')
plt.ylabel('Mass difference (in Gigatonne)')
plt.legend()
plt.tight_layout()
plt.show()
# ## Problem 4: Ridge and Lasso Regression
#
# Given the dataset of airline delays (source: `airline_delays_Xtrain.csv`, `airline_delays_ytrain.csv`):
#
# **a.** For values of lambda 10^3, 10^4, ..., 10^13, fit ridge regression models to the data. Plot a graph of value of coefficient (y-axis) vs log lambda (x-axis) for each coefficient, on the same figure.
#
# **b.** For values of lambda 10^-2, 10^-1, ..., 10^6, fit lasso regression models to the data. Plot a graph of value of coefficient (y-axis) vs log lambda (x-axis) for each coefficient, on the same figure.
#
# **c.** Comment on the difference in how the coefficient values change with log lambda for the two methods.
# read in training data
X_train = pd.read_csv('airline_delays_Xtrain.csv')
y_train = pd.read_csv('airline_delays_ytrain.csv')
# +
# part a
# lambda values for which to fit curve: 10^3, 10^4, ..., 10^13 (11 values, per the problem statement)
ridge_lambdas = np.logspace(start=3, stop=13, num=11, base=10)
# exponent corresponding to the lambda value, just here for ease in graphing
ridge_exps = np.log10(ridge_lambdas)
# store the coefficients obtained for each lambda value in this ndarray
# for example, ridge_coefs[0, :] is the coefficients obtained for lambda = 10^3
ridge_coefs = np.empty((ridge_lambdas.shape[0], X_train.shape[1]))
# find coefficients for each lambda value
for i, lamb in enumerate(ridge_lambdas):
ridge_model = Ridge(alpha=lamb)
ridge_model.fit(X=X_train, y=y_train)
ridge_coefs[i, :] = ridge_model.coef_
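# For reference, ridge regression also has a closed-form solution,
# $w = (X^\top X + \lambda I)^{-1} X^\top y$. A minimal NumPy sketch of it (ignoring the
# intercept, which scikit-learn leaves unpenalized, so coefficients will differ slightly
# from `Ridge` above):

```python
import numpy as np

def ridge_fit(X, y, lamb):
    """Closed-form ridge solution w = (X^T X + lambda*I)^-1 X^T y (no intercept)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lamb * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.normal(size=50)

w_small = ridge_fit(X, y, lamb=1e-3)
w_large = ridge_fit(X, y, lamb=1e3)
# Increasing lambda shrinks the coefficient vector toward zero
print(np.linalg.norm(w_small), np.linalg.norm(w_large))
```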
# +
# make the required plot
for feat_num in range(ridge_coefs.shape[1]):
# this plots the graph of how feat_num th feature changes with lambda
# we will do this for each feature
plt.plot(ridge_exps, ridge_coefs[:, feat_num], label='feature_{}'.format(feat_num))
plt.title('Log Lambda vs Coefficient')
plt.xlabel('Log lambda')
plt.ylabel('Coefficient value')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# +
# part b
# lambda values for which to fit curve
lasso_lambdas = np.logspace(start=-2, stop=6, num=9, base=10)
# exponent corresponding to the lambda value, just here for ease in graphing
lasso_exps = np.log10(lasso_lambdas)
# store the coefficients obtained for each lambda value in this ndarray
# for example, lasso_coefs[0, :] is the coefficients obtained for lambda = 10^-2
lasso_coefs = np.empty((lasso_lambdas.shape[0], X_train.shape[1]))
for i, lamb in enumerate(lasso_lambdas):
lasso_model = Lasso(alpha=lamb)
lasso_model.fit(X=X_train, y=y_train)
lasso_coefs[i, :] = lasso_model.coef_
# +
# make the required plot
for feat_num in range(lasso_coefs.shape[1]):
# this plots the graph of how feat_num-th feature changes with lambda
# we will do this for each feature
plt.plot(lasso_exps, lasso_coefs[:, feat_num], label='feature_{}'.format(feat_num))
plt.title('Log Lambda vs Coefficient')
plt.xlabel('Log lambda')
plt.ylabel('Coefficient value')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# -
# **Part c** In lasso, the coefficients fall sharply to exactly 0. In ridge, they shrink smoothly toward 0.
# Lasso is more sensitive to changes in lambda than ridge.
#
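# The contrast is easiest to see in the special case of an orthonormal design matrix, where
# both estimators have closed forms: ridge rescales every OLS coefficient by $1/(1+\lambda)$,
# while lasso applies soft-thresholding, setting small coefficients exactly to zero. A toy
# sketch under that assumption (orthonormal design, no intercept):

```python
import numpy as np

def ridge_shrink(w_ols, lamb):
    # Orthonormal design: ridge rescales every coefficient smoothly
    return w_ols / (1.0 + lamb)

def lasso_shrink(w_ols, lamb):
    # Orthonormal design: lasso soft-thresholds, zeroing small coefficients
    return np.sign(w_ols) * np.maximum(np.abs(w_ols) - lamb, 0.0)

w_ols = np.array([3.0, -0.5, 0.2])
print(ridge_shrink(w_ols, 1.0))  # all coefficients shrunk, none exactly zero
print(lasso_shrink(w_ols, 1.0))  # small coefficients driven exactly to zero
```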
| homework/hw2/hw2_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Convolutions for Images
# :label:`sec_conv_layer`
#
# Now that we understand how convolutional layers work in theory,
# we are ready to see how they work in practice.
# Building on our motivation of convolutional neural networks
# as efficient architectures for exploring structure in image data,
# we stick with images as our running example.
#
#
# ## The Cross-Correlation Operation
#
# Recall that strictly speaking, convolutional layers
# are a misnomer, since the operations they express
# are more accurately described as cross-correlations.
# Based on our descriptions of convolutional layers in :numref:`sec_why-conv`,
# in such a layer, an input tensor
# and a kernel tensor are combined
# to produce an output tensor through a (**cross-correlation operation.**)
#
# Let us ignore channels for now and see how this works
# with two-dimensional data and hidden representations.
# In :numref:`fig_correlation`,
# the input is a two-dimensional tensor
# with a height of 3 and width of 3.
# We mark the shape of the tensor as $3 \times 3$ or ($3$, $3$).
# The height and width of the kernel are both 2.
# The shape of the *kernel window* (or *convolution window*)
# is given by the height and width of the kernel
# (here it is $2 \times 2$).
#
# 
# :label:`fig_correlation`
#
# In the two-dimensional cross-correlation operation,
# we begin with the convolution window positioned
# at the upper-left corner of the input tensor
# and slide it across the input tensor,
# both from left to right and top to bottom.
# When the convolution window slides to a certain position,
# the input subtensor contained in that window
# and the kernel tensor are multiplied elementwise
# and the resulting tensor is summed up
# yielding a single scalar value.
# This result gives the value of the output tensor
# at the corresponding location.
# Here, the output tensor has a height of 2 and width of 2
# and the four elements are derived from
# the two-dimensional cross-correlation operation:
#
# $$
# 0\times0+1\times1+3\times2+4\times3=19,\\
# 1\times0+2\times1+4\times2+5\times3=25,\\
# 3\times0+4\times1+6\times2+7\times3=37,\\
# 4\times0+5\times1+7\times2+8\times3=43.
# $$
#
# Note that along each axis, the output size
# is slightly smaller than the input size.
# Because the kernel has width and height greater than one,
# we can only properly compute the cross-correlation
# for locations where the kernel fits wholly within the image,
# so the output size is given by the input size $n_h \times n_w$
# minus the size of the convolution kernel $k_h \times k_w$
# via
#
# $$(n_h-k_h+1) \times (n_w-k_w+1).$$
#
# This is the case since we need enough space
# to "shift" the convolution kernel across the image.
# Later we will see how to keep the size unchanged
# by padding the image with zeros around its boundary
# so that there is enough space to shift the kernel.
# Next, we implement this process in the `corr2d` function,
# which accepts an input tensor `X` and a kernel tensor `K`
# and returns an output tensor `Y`.
#
# + origin_pos=4 tab=["tensorflow"]
import tensorflow as tf
from d2l import tensorflow as d2l
def corr2d(X, K): #@save
"""Compute 2D cross-correlation."""
h, w = K.shape
Y = tf.Variable(tf.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1)))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
Y[i, j].assign(tf.reduce_sum(
X[i: i + h, j: j + w] * K))
return Y
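# The same operation can be sketched framework-free in plain NumPy (a hypothetical helper
# `corr2d_np`, not part of the book's code), which also makes the output-size formula
# $(n_h-k_h+1) \times (n_w-k_w+1)$ explicit:

```python
import numpy as np

def corr2d_np(X, K):
    """2D cross-correlation in plain NumPy."""
    h, w = K.shape
    # Output shape follows (n_h - k_h + 1, n_w - k_w + 1)
    Y = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

X = np.arange(9, dtype=float).reshape(3, 3)
K = np.arange(4, dtype=float).reshape(2, 2)
print(corr2d_np(X, K))  # [[19. 25.], [37. 43.]]
```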
# + [markdown] origin_pos=5
# We can construct the input tensor `X` and the kernel tensor `K`
# from :numref:`fig_correlation`
# to [**validate the output of the above implementation**]
# of the two-dimensional cross-correlation operation.
#
# + origin_pos=6 tab=["tensorflow"]
X = tf.constant([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K = tf.constant([[0.0, 1.0], [2.0, 3.0]])
corr2d(X, K)
# + [markdown] origin_pos=7
# ## Convolutional Layers
#
# A convolutional layer cross-correlates the input and kernel
# and adds a scalar bias to produce an output.
# The two parameters of a convolutional layer
# are the kernel and the scalar bias.
# When training models based on convolutional layers,
# we typically initialize the kernels randomly,
# just as we would with a fully-connected layer.
#
# We are now ready to [**implement a two-dimensional convolutional layer**]
# based on the `corr2d` function defined above.
# In the `__init__` constructor function,
# we declare `weight` and `bias` as the two model parameters.
# The forward propagation function
# calls the `corr2d` function and adds the bias.
#
# + origin_pos=10 tab=["tensorflow"]
class Conv2D(tf.keras.layers.Layer):
def __init__(self):
super().__init__()
def build(self, kernel_size):
initializer = tf.random_normal_initializer()
self.weight = self.add_weight(name='w', shape=kernel_size,
initializer=initializer)
self.bias = self.add_weight(name='b', shape=(1, ),
initializer=initializer)
def call(self, inputs):
return corr2d(inputs, self.weight) + self.bias
# + [markdown] origin_pos=11
# In an
# $h \times w$ convolution
# or an $h \times w$ convolution kernel,
# the height and width of the convolution kernel are $h$ and $w$, respectively.
# We also refer to
# a convolutional layer with an $h \times w$
# convolution kernel simply as an $h \times w$ convolutional layer.
#
#
# ## Object Edge Detection in Images
#
# Let us take a moment to parse [**a simple application of a convolutional layer:
# detecting the edge of an object in an image**]
# by finding the location of the pixel change.
# First, we construct an "image" of $6\times 8$ pixels.
# The middle four columns are black (0) and the rest are white (1).
#
# + origin_pos=13 tab=["tensorflow"]
X = tf.Variable(tf.ones((6, 8)))
X[:, 2:6].assign(tf.zeros(X[:, 2:6].shape))
X
# + [markdown] origin_pos=14
# Next, we construct a kernel `K` with a height of 1 and a width of 2.
# When we perform the cross-correlation operation with the input,
# if the horizontally adjacent elements are the same,
# the output is 0. Otherwise, the output is non-zero.
#
# + origin_pos=15 tab=["tensorflow"]
K = tf.constant([[1.0, -1.0]])
# + [markdown] origin_pos=16
# We are ready to perform the cross-correlation operation
# with arguments `X` (our input) and `K` (our kernel).
# As you can see, [**we detect 1 for the edge from white to black
# and -1 for the edge from black to white.**]
# All other outputs take value 0.
#
# + origin_pos=17 tab=["tensorflow"]
Y = corr2d(X, K)
Y
# + [markdown] origin_pos=18
# We can now apply the kernel to the transposed image.
# As expected, it vanishes. [**The kernel `K` only detects vertical edges.**]
#
# + origin_pos=19 tab=["tensorflow"]
corr2d(tf.transpose(X), K)
# + [markdown] origin_pos=20
# ## Learning a Kernel
#
# Designing an edge detector by finite differences `[1, -1]` is neat
# if we know this is precisely what we are looking for.
# However, as we look at larger kernels,
# and consider successive layers of convolutions,
# it might be impossible to specify
# precisely what each filter should be doing manually.
#
# Now let us see whether we can [**learn the kernel that generated `Y` from `X`**]
# by looking at the input--output pairs only.
# We first construct a convolutional layer
# and initialize its kernel as a random tensor.
# Next, in each iteration, we will use the squared error
# to compare `Y` with the output of the convolutional layer.
# We can then calculate the gradient to update the kernel.
# For the sake of simplicity,
# in the following
# we use the built-in class
# for two-dimensional convolutional layers
# and ignore the bias.
#
# + origin_pos=23 tab=["tensorflow"]
# Construct a two-dimensional convolutional layer with 1 output channel and a
# kernel of shape (1, 2). For the sake of simplicity, we ignore the bias here
conv2d = tf.keras.layers.Conv2D(1, (1, 2), use_bias=False)
# The two-dimensional convolutional layer uses four-dimensional input and
# output in the format of (example, height, width, channel), where the batch
# size (number of examples in the batch) and the number of channels are both 1
X = tf.reshape(X, (1, 6, 8, 1))
Y = tf.reshape(Y, (1, 6, 7, 1))
lr = 3e-2 # Learning rate
Y_hat = conv2d(X)
for i in range(10):
with tf.GradientTape(watch_accessed_variables=False) as g:
g.watch(conv2d.weights[0])
Y_hat = conv2d(X)
l = (abs(Y_hat - Y)) ** 2
# Update the kernel
update = tf.multiply(lr, g.gradient(l, conv2d.weights[0]))
weights = conv2d.get_weights()
weights[0] = conv2d.weights[0] - update
conv2d.set_weights(weights)
if (i + 1) % 2 == 0:
print(f'epoch {i + 1}, loss {tf.reduce_sum(l):.3f}')
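# The TensorFlow loop above can be mirrored in plain NumPy for a 1x2 kernel; a minimal
# sketch (assumptions: kernel shape fixed to 1x2, gradient of the squared error derived by
# hand rather than by autodiff):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.ones((6, 8))
X[:, 2:6] = 0.0
Y = X[:, :-1] - X[:, 1:]     # target: cross-correlation of X with [1, -1]

K = rng.normal(size=2)        # randomly initialized 1x2 kernel
lr = 2e-2                     # learning rate
for _ in range(100):
    # Forward pass: cross-correlate each row of X with the 1x2 kernel
    Y_hat = K[0] * X[:, :-1] + K[1] * X[:, 1:]
    err = Y_hat - Y
    # Gradient of sum((Y_hat - Y)**2) with respect to each kernel entry
    grad = np.array([(2 * err * X[:, :-1]).sum(), (2 * err * X[:, 1:]).sum()])
    K -= lr * grad

print(K)  # close to [1, -1]
```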
# + [markdown] origin_pos=24
# Note that the error has dropped to a small value after 10 iterations. Now we will [**take a look at the kernel tensor we learned.**]
#
# + origin_pos=27 tab=["tensorflow"]
tf.reshape(conv2d.get_weights()[0], (1, 2))
# + [markdown] origin_pos=28
# Indeed, the learned kernel tensor is remarkably close
# to the kernel tensor `K` we defined earlier.
#
# ## Cross-Correlation and Convolution
#
# Recall our observation from :numref:`sec_why-conv` of the correspondence
# between the cross-correlation and convolution operations.
# Here let us continue to consider two-dimensional convolutional layers.
# What if such layers
# perform strict convolution operations
# as defined in :eqref:`eq_2d-conv-discrete`
# instead of cross-correlations?
# In order to obtain the output of the strict *convolution* operation, we only need to flip the two-dimensional kernel tensor both horizontally and vertically, and then perform the *cross-correlation* operation with the input tensor.
#
# It is noteworthy that since kernels are learned from data in deep learning,
# the outputs of convolutional layers remain unaffected
# whether such layers
# perform
# strict convolution operations
# or cross-correlation operations.
#
# To illustrate this, suppose that a convolutional layer performs *cross-correlation* and learns the kernel in :numref:`fig_correlation`, which is denoted as the matrix $\mathbf{K}$ here.
# Assuming that other conditions remain unchanged,
# when this layer performs strict *convolution* instead,
# the learned kernel $\mathbf{K}'$ will be the same as $\mathbf{K}$
# after $\mathbf{K}'$ is
# flipped both horizontally and vertically.
# That is to say,
# when the convolutional layer
# performs strict *convolution*
# for the input in :numref:`fig_correlation`
# and $\mathbf{K}'$,
# the same output in :numref:`fig_correlation`
# (cross-correlation of the input and $\mathbf{K}$)
# will be obtained.
#
# In keeping with standard terminology in the deep learning literature,
# we will continue to refer to the cross-correlation operation
# as a convolution even though, strictly speaking, it is slightly different.
# In addition,
# we use the term *element* to refer to
# an entry (or component) of any tensor representing a layer representation or a convolution kernel.
#
#
# ## Feature Map and Receptive Field
#
# As described in :numref:`subsec_why-conv-channels`,
# the convolutional layer output in
# :numref:`fig_correlation`
# is sometimes called a *feature map*,
# as it can be regarded as
# the learned representations (features)
# in the spatial dimensions (e.g., width and height)
# to the subsequent layer.
# In CNNs,
# for any element $x$ of some layer,
# its *receptive field* refers to
# all the elements (from all the previous layers)
# that may affect the calculation of $x$
# during the forward propagation.
# Note that the receptive field
# may be larger than the actual size of the input.
#
# Let us continue to use :numref:`fig_correlation` to explain the receptive field.
# Given the $2 \times 2$ convolution kernel,
# the receptive field of the shaded output element (of value $19$)
# is
# the four elements in the shaded portion of the input.
# Now let us denote the $2 \times 2$
# output as $\mathbf{Y}$
# and consider a deeper CNN
# with an additional $2 \times 2$ convolutional layer that takes $\mathbf{Y}$
# as its input, outputting
# a single element $z$.
# In this case,
# the receptive field of $z$
# on $\mathbf{Y}$ includes all the four elements of $\mathbf{Y}$,
# while
# the receptive field
# on the input includes all the nine input elements.
# Thus,
# when any element in a feature map
# needs a larger receptive field
# to detect input features over a broader area,
# we can build a deeper network.
#
#
#
#
# ## Summary
#
# * The core computation of a two-dimensional convolutional layer is a two-dimensional cross-correlation operation. In its simplest form, this performs a cross-correlation operation on the two-dimensional input data and the kernel, and then adds a bias.
# * We can design a kernel to detect edges in images.
# * We can learn the kernel's parameters from data.
# * With kernels learned from data, the outputs of convolutional layers remain unaffected whether those layers perform strict convolution or cross-correlation.
# * When any element in a feature map needs a larger receptive field to detect broader features on the input, a deeper network can be considered.
#
#
# ## Exercises
#
# 1. Construct an image `X` with diagonal edges.
# 1. What happens if you apply the kernel `K` in this section to it?
# 1. What happens if you transpose `X`?
# 1. What happens if you transpose `K`?
# 1. When you try to automatically find the gradient for the `Conv2D` class we created, what kind of error message do you see?
# 1. How do you represent a cross-correlation operation as a matrix multiplication by changing the input and kernel tensors?
# 1. Design some kernels manually.
# 1. What is the form of a kernel for the second derivative?
# 1. What is the kernel for an integral?
# 1. What is the minimum size of a kernel to obtain a derivative of degree $d$?
#
# + [markdown] origin_pos=31 tab=["tensorflow"]
# [Discussions](https://discuss.d2l.ai/t/271)
#
| python/d2l-en/tensorflow/chapter_convolutional-neural-networks/conv-layer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import numpy as np
import pandas as pd
import matplotlib as mpl
#mpl.use('Agg')
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# +
sns.set(style="darkgrid")
sns.set(font_scale=1.5)
RESULT_DIR = '../S2S_3_60/results/'
PLOT_DIR = '../exps/SEncoder/DataAug/Mplots/'
logs = {
'seq2seq_sample_imagenet_log.csv': 'Student-forcing'
#'seq2seq_teacher_imagenet_log.csv': 'Student-forcing'
}
# -
def plot_training_curves():
''' Plot the validation loss, navigation error and success rate during training. '''
font = {
'size' : 12
}
mpl.rc('font', **font)
dfs = {}
for log in logs:
dfs[log] = pd.read_csv(PLOT_DIR+log)
print(len(dfs[log]))
print(dfs[log].keys())
plots = [
('Loss', 'loss',['val_seen loss', 'val_unseen loss', 'train loss']), #, 'loss_ctrl_f'
('SPL', '%', ['val_seen spl', 'val_unseen spl']),
('Success', '%', ['val_seen success_rate', 'val_unseen success_rate']),
('Traj Length', 'm', ['val_seen lengths', 'val_unseen lengths']),
('Steps', 'n', ['val_seen steps', 'val_unseen steps']),
('Oracle SR', 'm', ['val_seen oracle_rate', 'val_unseen oracle_rate'])
]
colors = {
'Student-forcing Val Seen': 'C0',
'Student-forcing Val Unseen': 'C2',
'Student-forcing Train': 'C4',
'Teacher-forcing Val Seen': 'C1',
'Teacher-forcing Val Unseen': 'C3',
'Teacher-forcing Train': 'C5'
}
rows = 3
cols = 2
nsize = 2
fig, axes = plt.subplots(nrows=rows, ncols=cols, squeeze=True, figsize=(13, 16.25))
handles = []
labels = []
maxes = {'val_seen success_rate':0, 'val_unseen sr':0, 'val_seen iter':0,
'val_unseen success_rate':0, 'val_seen sr':0, 'val_unseen iter':0}
sums = {'val_seen success_rate':0, 'iter':0, 'val_unseen success_rate':0}
for i,(title, ylabel, x_vars) in enumerate(plots):
for log in logs:
df = dfs[log]
x = df['Unnamed: 0'] # df['iteration']
for col_name in x_vars:
y = df[col_name]
label = ' Train'
if 'unseen' in col_name:
label = ' Val Unseen'
elif 'seen' in col_name:
label = ' Val Seen'
if col_name in maxes:
for v_id, v in enumerate(y):
if 'val_seen' in col_name:
if v > maxes[col_name]:
maxes[col_name] = v
maxes['val_seen iter'] = v_id
elif 'val_unseen' in col_name:
if v > maxes[col_name]:
maxes[col_name] = v
maxes['val_unseen iter'] = v_id
sr_sum = df['val_seen success_rate'][v_id] + df['val_unseen success_rate'][v_id]
if sr_sum > (sums['val_seen success_rate'] + sums['val_unseen success_rate']):
sums['val_seen success_rate'] = df['val_seen success_rate'][v_id]
sums['val_unseen success_rate'] = df['val_unseen success_rate'][v_id]
sums['iter'] = int(x[v_id])
if i == 0:
label = logs[log]+label
                    labels.append(label)
handles.append(axes[int(i/cols), i%cols].plot(x,y, colors[label], label=label))
else:
axes[int(i/cols), i%cols].plot(x,y, colors[logs[log]+label], label=logs[log]+label)
if title == 'Success':
maxes['val_unseen sr'] = df['val_unseen success_rate'][maxes['val_seen iter']]
maxes['val_seen sr'] = df['val_seen success_rate'][maxes['val_unseen iter']]
maxes['val_seen iter'] = int(x[maxes['val_seen iter']])
maxes['val_unseen iter'] = int(x[maxes['val_unseen iter']])
axes[int(i/cols), i%cols].set_title(title)
axes[int(i/cols), i%cols].set_xlabel('Iteration')
axes[int(i/cols), i%cols].set_ylabel(ylabel)
plt.tight_layout()
fig.subplots_adjust(bottom=0.1)
handles, labels = axes[0,0].get_legend_handles_labels()
axes[2,0].legend(handles = handles, labels=labels, loc='upper center',
bbox_to_anchor=(1.1, -0.15), fancybox=False, shadow=False, ncol=3)
plt.setp(axes[2,0].get_legend().get_texts(), fontsize='12')
plt.show()
#plt.savefig('%s/training.png' % (PLOT_DIR))
print('maxes:')
print(json.dumps(maxes, indent=2))
print('sums:')
print(json.dumps(sums, indent=2))
plot_training_curves()
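# The running-maximum bookkeeping inside the loop above (finding the iteration where
# val_seen + val_unseen success is jointly highest) can be condensed with `np.argmax`;
# a sketch on made-up numbers:

```python
import numpy as np

val_seen = np.array([0.30, 0.42, 0.41, 0.39])
val_unseen = np.array([0.20, 0.25, 0.31, 0.28])

# Iteration where the summed success rate peaks
best_iter = int(np.argmax(val_seen + val_unseen))
print(best_iter, val_seen[best_iter], val_unseen[best_iter])
```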
def plot_curves():
''' Plot the validation loss, navigation error and success rate during training. '''
font = {
'size' : 12
}
mpl.rc('font', **font)
dfs = {}
for log in logs:
dfs[log] = pd.read_csv(PLOT_DIR+log)
print(len(dfs[log]))
print(dfs[log].keys())
plots = [
('Loss', 'loss',['val_seen loss', 'val_unseen loss', 'train loss']), #, 'loss_ctrl_f'
('SPL', '%', ['val_seen spl', 'val_unseen spl']),
('Success', '%', ['val_seen success_rate', 'val_unseen success_rate']),
('Traj Length', 'm', ['val_seen lengths', 'val_unseen lengths']),
('Steps', 'n', ['val_seen steps', 'val_unseen steps']),
('Oracle SR', 'm', ['val_seen oracle_rate', 'val_unseen oracle_rate'])
]
colors = {
'Student-forcing Val Seen': 'C0',
'Student-forcing Val Unseen': 'C2',
'Student-forcing Train': 'C4',
'Teacher-forcing Val Seen': 'C1',
'Teacher-forcing Val Unseen': 'C3',
'Teacher-forcing Train': 'C5'
}
rows = 3
cols = 2
nsize = 2
fig, axes = plt.subplots(nrows=rows, ncols=cols, squeeze=True, figsize=(13, 16.25))
handles = []
labels = []
maxes = {'val_seen spl':0, 'val_unseen SPL':0, 'val_seen iter':0,
'val_unseen spl':0, 'val_seen SPL':0, 'val_unseen iter':0}
sums = {'val_seen spl':0, 'iter':0, 'val_unseen spl':0}
for i,(title, ylabel, x_vars) in enumerate(plots):
for log in logs:
df = dfs[log]
x = df['Unnamed: 0'] #df['iteration']
for col_name in x_vars:
y = df[col_name]
label = ' Train'
if 'unseen' in col_name:
label = ' Val Unseen'
elif 'seen' in col_name:
label = ' Val Seen'
if col_name in maxes:
for v_id, v in enumerate(y):
if 'val_seen' in col_name:
if v > maxes[col_name]:
maxes[col_name] = v
maxes['val_seen iter'] = v_id
elif 'val_unseen' in col_name:
if v > maxes[col_name]:
maxes[col_name] = v
maxes['val_unseen iter'] = v_id
sr_sum = df['val_seen spl'][v_id] + df['val_unseen spl'][v_id]
if sr_sum > (sums['val_seen spl'] + sums['val_unseen spl']):
sums['val_seen spl'] = df['val_seen spl'][v_id]
sums['val_unseen spl'] = df['val_unseen spl'][v_id]
sums['iter'] = int(x[v_id])
if i == 0:
label = logs[log]+label
                    labels.append(label)
handles.append(axes[int(i/cols), i%cols].plot(x,y, colors[label], label=label))
else:
axes[int(i/cols), i%cols].plot(x,y, colors[logs[log]+label], label=logs[log]+label)
if title == 'Success':
maxes['val_unseen SPL'] = df['val_unseen spl'][maxes['val_seen iter']]
maxes['val_seen SPL'] = df['val_seen spl'][maxes['val_unseen iter']]
maxes['val_seen iter'] = int(x[maxes['val_seen iter']])
maxes['val_unseen iter'] = int(x[maxes['val_unseen iter']])
axes[int(i/cols), i%cols].set_title(title)
axes[int(i/cols), i%cols].set_xlabel('Iteration')
axes[int(i/cols), i%cols].set_ylabel(ylabel)
plt.tight_layout()
fig.subplots_adjust(bottom=0.1)
handles, labels = axes[0,0].get_legend_handles_labels()
axes[2,0].legend(handles = handles, labels=labels, loc='upper center',
bbox_to_anchor=(1.1, -0.15), fancybox=False, shadow=False, ncol=3)
plt.setp(axes[2,0].get_legend().get_texts(), fontsize='12')
plt.show()
#plt.savefig('%s/training.png' % (PLOT_DIR))
print('maxes:')
print(json.dumps(maxes, indent=2))
print('sums:')
print(json.dumps(sums, indent=2))
plot_curves()
# +
dfs = {}
for log in logs:
dfs[log] = pd.read_csv(PLOT_DIR+log)
iter_id = 515
for log in logs:
df = dfs[log][iter_id-1: iter_id]
print(df)
# +
data_path = '../data/R2R_argmax_literal_speaker_train.json'
data = json.load(open(data_path, 'rb'))
print(len(data))
tr_path = '../data/R2R_literal_speaker_data_augmentation_paths.json'
tr_data = json.load(open(tr_path, 'rb'))
print(len(tr_data))
tr_path = '../data/R2R_data_augmentation_paths.json'
tr_data = json.load(open(tr_path, 'rb'))
print(len(tr_data))
# -
| tasks/R2R/jupyter/.ipynb_checkpoints/plot_scripts-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluation methods for unsupervised word embeddings
# > Evaluation methods for unsupervised word embeddings review
#
# - toc: true
# - badges: false
# - comments: true
# - author: <NAME>
# - categories: [NLP]
# This paper compares a range of methods for evaluating word embeddings and proposes a new evaluation method. Its main contributions are:
# - It analyzes the relationships between different evaluation criteria, showing that the way embeddings are generated should be tied to the target task
# - It proposes a method that directly evaluates individual word vectors through human judgments
# - It proposes that words selected for evaluation should cover different frequencies, parts of speech, and senses, to guarantee diversity in the data
# - It also finds that word embeddings encode word-frequency information
#
# Note that the goal of this paper is not to rank embeddings against each other, but to study the differences between evaluation methods.
# ## 1. Preparing the embeddings
# The paper prepares the following six kinds of word embeddings for evaluation:
# - Prediction-based embeddings
#   - CBOW model of word2vec (Mikolov et al 2013a)
#   - C&W embeddings (Collobert et al. 2011)
# - Embeddings based on word co-occurrence in the corpus
#   - Hellinger PCA (Lebret and Collobert 2014)
#   - GloVe (Pennington et al., 2014)
#   - TSCCA (Dhillon et al., 2012)
#   - Sparse Random Projections (Li et al., 2006)
#
# Because the C&W word vectors are only available trained on the 2007 Wikipedia, the authors trained the remaining five embeddings on a Wikipedia dump from 2008-03-01. All embeddings are 50-dimensional, with a total vocabulary of 103,647 words.
# ## 2. Evaluation
# There are two main ways to evaluate word embeddings: intrinsic evaluation and extrinsic evaluation.
#
# Intrinsic evaluation judges the quality of embeddings through inherent relationships between words, such as part of speech and relatedness.
# Extrinsic evaluation feeds the generated embeddings into downstream tasks and measures which embedding serves those tasks best.
# ### Intrinsic evaluation
# For intrinsic evaluation, the paper uses both absolute intrinsic evaluation and comparative intrinsic evaluation. Absolute intrinsic evaluation includes the following methods:
# - Relatedness: compare the cosine similarity between generated word vectors against human-rated similarity
# - Analogy: given y, find an x such that the relation x:y matches the relation a:b
# - Categorization: cluster the generated word vectors and check whether the clusters are accurate
# - Selectional preference: determine whether a word is more typical as the subject or the object of a given verb
#
# The results are shown below. CBOW performs best on most tasks, but other embeddings do better on certain individual tasks.
#
# 
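# The analogy test can be sketched with toy vectors: to find x such that x:y matches a:b, search for the word whose vector is closest (by cosine) to y + a - b. The 2-d embeddings below are hypothetical, purely for illustration:

```python
import numpy as np

# Toy 2-d embeddings (hypothetical, for illustration only)
emb = {
    'king':  np.array([0.9, 0.8]),
    'queen': np.array([0.9, 0.2]),
    'man':   np.array([0.1, 0.8]),
    'woman': np.array([0.1, 0.2]),
    'apple': np.array([0.2, 0.9]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, y):
    """Find x such that x : y is like a : b, i.e. x is close to y + (a - b)."""
    target = emb[y] + emb[a] - emb[b]
    candidates = [w for w in emb if w not in (a, b, y)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy('king', 'man', 'woman'))  # 'queen'
```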
# In comparative intrinsic evaluation, users directly judge the quality of the embeddings. Concretely, the authors:
# - Selected 100 words of varying frequency, part of speech, and sense (10 word classes, each containing one adjective, one verb, four nouns, and four verbs)
# - Found each word's n nearest neighbors and kept the neighbors at ranks 1, 5, and 50, so for each of the six embeddings every word gets a rank-1, rank-5, and rank-50 neighbor
# - Asked human raters to judge, among the six embeddings, whose rank-1, rank-5, and rank-50 neighbors are closest to the chosen word
#
# The results are shown below. Likewise, no single embedding performs best across all tasks.
#
# 
# In the relatedness comparison, only one nearby word is found for each query word, which is not ideal (every word has many near-synonyms). The authors therefore propose a new measure, coherence: for each word, pick two related words and one unrelated word in advance, and check whether the generated embedding can pick out the unrelated word.
#
# The results are shown below. Different embedding methods yield different results for words of different frequencies.
#
# 
# ### Extrinsic evaluation
# Extrinsic evaluation measures the contribution of embeddings to downstream tasks. The paper uses the following two downstream tasks:
#
# - Noun phrase chunking
# - Sentiment classification
#
# The results are shown below. Again, no single embedding performs best on all downstream tasks, so different embedding representations should be tried for different downstream tasks.
#
# 
# ## 3. Frequency information
# Finally, the authors find that word embeddings contain frequency information through the following two experiments:
# - Using the word vectors to predict each word's frequency in the corpus
# - For all words in the WordSim-353 dataset, studying their k = 1000 nearest neighbors and the ordering of those neighbors by corpus frequency
#
# The results are shown below. Word frequency can be predicted fairly well from the word vectors, with GloVe and CCA containing the most frequency information. In addition, a word's frequency correlates strongly with its frequency rank in the corpus.
#
# 
# ## 4. Thoughts
# This paper shows that no single kind of word embedding performs best on every task, so there is probably no absolutely "correct" vector for any given word. That makes me question whether word vectors are really the best way to represent words; a new representation may well be discovered in the future.
| _notebooks/2020-07-04-Evaluation-methods-for-unsupervised-word-embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="TA21Jo5d9SVq"
#
#
# 
#
# [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb)
#
#
#
# + [markdown] colab_type="text" id="CzIdjHkAW8TB"
# # **Detect sentence similarity**
# + [markdown] colab_type="text" id="wIeCOiJNW-88"
# ## 1. Colab Setup
# + colab={"base_uri": "https://localhost:8080/", "height": 228} colab_type="code" id="CGJktFHdHL1n" outputId="9c82a651-ff1f-42a1-c627-3d9ad59fe586"
# Install Java
# ! apt-get update -qq
# ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
# ! java -version
# Install pyspark
# ! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
# ! pip install --ignore-installed spark-nlp
# + [markdown] colab_type="text" id="eCIT5VLxS3I1"
# ## 2. Start the Spark session
# + colab={} colab_type="code" id="sw-t1zxlHTB7"
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp.base import *
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
# + [markdown] colab_type="text" id="9RgiqfX5XDqb"
# ## 3. Select the USE model
# + [markdown] colab_type="text" id="_WkVpaI4reGN"
# If you change the model, re-run the cell that creates the pipeline so the pipeline will use the new model.
# + colab={} colab_type="code" id="LLuDz_t40be4"
# If you change the model, re-run all the cells below.
# Applicable models: tfhub_use, tfhub_use_lg
MODEL_NAME = "tfhub_use"
os.environ['MODEL_NAME'] = MODEL_NAME
# + [markdown] colab_type="text" id="2Y9GpdJhXIpD"
# ## 4. Some sample examples
# + colab={} colab_type="code" id="vBOKkB2THdGI"
# To compare the similarity of sentences, enter them as strings in this list.
text_list = [
"Sign up for our mailing list to get free offers and updates about our products!",
"Subscribe to notifications to receive information about discounts and new offerings.",
"Send in your information for a chance to win big in our Summer Sweepstakes!",
"After filling out this form, you will receive a confirmation email to complete your signup.",
"It was raining, so I waited beneath the balcony outside the cafe.",
"I stayed under the deck of the cafe because it was rainy outside.",
"I like the cafe down the street because it's not too loud in there.",
"The coffee shop near where I live is quiet, so I like to go there.",
"Web traffic analysis shows that most Internet users browse on mobile nowadays.",
"The analytics show that modern web users mostly use their phone instead of their computers."
]
# + [markdown] colab_type="text" id="fjSn9AFtLeiP"
# Write the input sentences into a single file.
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="jUlYTyKyKLRg" outputId="7200e9e6-553c-4d5e-af13-3065d5eb2303"
# ! mkdir inputs
# ! mkdir inputs/$MODEL_NAME
with open(f'inputs/{MODEL_NAME}/sentences.txt', 'w') as input_file:
for text in text_list:
input_file.write(text + '\n')
# + [markdown] colab_type="text" id="XftYgju4XOw_"
# ## 5. Define Spark NLP pipeline
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="lBggF5P8J1gc" outputId="eec66ac9-aea6-4056-83b5-95e94a91728b"
# Transforms the input text into a document usable by the SparkNLP pipeline.
document_assembler = DocumentAssembler()
document_assembler.setInputCol('text')
document_assembler.setOutputCol('document')
# Separates the text into individual tokens (words and punctuation).
tokenizer = Tokenizer()
tokenizer.setInputCols(['document'])
tokenizer.setOutputCol('token')
# Encodes the text as a single vector representing semantic features.
sentence_encoder = UniversalSentenceEncoder.pretrained(name=MODEL_NAME)
sentence_encoder.setInputCols(['document', 'token'])
sentence_encoder.setOutputCol('sentence_embeddings')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sentence_encoder
])
# Fit the model to an empty data frame so it can be used on inputs.
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
# + [markdown] colab_type="text" id="mv0abcwhXWC-"
# ## 6. Run the pipeline
# + [markdown] colab_type="text" id="ETuUnDApq1qv"
# This method will get the similarity of the embeddings of each pair of sentences in the list of sentences passed in. The similarity is returned as a matrix, where (0, 2), for example, represents the similarity of input sentence 0 and input sentence 2.
# + colab={} colab_type="code" id="6E0Y5wtunFi4"
def get_similarity(input_list):
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = light_pipeline.transform(df)
embeddings = []
for r in result.collect():
embeddings.append(r.sentence_embeddings[0].embeddings)
embeddings_matrix = np.array(embeddings)
return np.matmul(embeddings_matrix, embeddings_matrix.transpose())
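Note that the matrix product above returns raw dot products, which equal cosine similarity only when the embedding rows are unit-norm (the pretrained encoder's outputs are approximately normalized). A minimal, Spark-free sketch that normalizes explicitly, so the result is a true cosine-similarity matrix regardless of scale:

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize each row to unit length so the matrix product yields
    # cosine similarities even if the embeddings are not unit-norm.
    m = np.asarray(embeddings, dtype=float)
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    m = m / np.clip(norms, 1e-12, None)  # guard against zero vectors
    return m @ m.T
```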
# + [markdown] colab_type="text" id="ToEGreVFLR5x"
# Write the computed similarities to a CSV file.
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="rbxgRsvbLQd7" outputId="3113e1f3-5afd-47a9-c2cd-e0f865df0c36"
# ! mkdir outputs
# ! mkdir outputs/$MODEL_NAME
np.savetxt(f'outputs/{MODEL_NAME}/similarities.csv',
get_similarity(text_list),
delimiter=',')
# + [markdown] colab_type="text" id="UQY8tAP6XZJL"
# ## 7. Visualize results
# + [markdown] colab_type="text" id="ZJQyrCKYrH2w"
# This method gets the similarity of the sentences in the list using the method above, then plots those similarities as a heatmap where dark red means "very similar" and pale yellow means "not similar at all".
# + colab={} colab_type="code" id="MkhcOW4jo27W"
import seaborn as sns
def plot_similarity(input_list):
g = sns.heatmap(
get_similarity(input_list),
xticklabels=input_list,
yticklabels=input_list,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(input_list, rotation=90)
g.set_title("Semantic Textual Similarity")
# + colab={"base_uri": "https://localhost:8080/", "height": 743} colab_type="code" id="BuOfQ4nHpNMi" outputId="07e9b5b9-7aaf-4c60-a3de-483d63115f2d"
plot_similarity(text_list)
| tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/TBFY/knowledge-graph-API/blob/master/notebooks/Documentation4Developers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ClYPL1LU38oC"
# 
# + [markdown] id="n9jpmix0Ty9F"
# # **DOCUMENTATION FOR DEVELOPERS**
#
#
# + id="XD7bxts938GZ" colab={"base_uri": "https://localhost:8080/"} outputId="03797815-0918-469b-8baf-c90d3590b201"
# !pip install requests
# + [markdown] id="S5JX96q-3kCh"
# # **INTRODUCTION**
#
# The docker-based TheyBuyForYou API is built to allow you to obtain public procurement data from the TheyBuyForYou project [knowledge graph](https://github.com/TBFY/knowledge-graph).
#
# The API is organised around REST. All API calls should be made to the base URL http://tbfy.librairy.linkeddata.es/kg-api/. All responses return JSON. The API offers 25 different calls organised into 5 main groups:
#
# 1. Organisations
# 1. Contracts
# 1. Contracting processes
# 1. Tenders
# 1. Awards
#
#
#
# + [markdown] id="vUxwSBL1oQda"
# # 1. **ORGANISATIONS**
#
# This resource offers all the information related to organisations. It is divided into four services:
# + [markdown] id="c2ktoDKFf1Y1"
# * **GET /organisation** that offers the whole list of organisations. No parameters are defined for this call, which displays the following information:
#
# |Field |Description |Type |Required|
# |----------------|-------------------------------------------------------------------|-------|--------|
# |**id** |Corresponds to the identifier of the organisation, must be unique |string |true |
# |**legalName** |Corresponds to the legal name of the organisation |string |false |
# |**jurisdiction** |Corresponds to the jurisdiction of the organisation |string |false |
# + id="xck7sV-I3_2j" colab={"base_uri": "https://localhost:8080/"} outputId="be7ed3bc-3dbd-46ad-8b7f-6df6ee5e9b44"
import requests
class APIError(Exception):
"""An API Error Exception"""
def __init__(self, status):
self.status = status
def __str__(self):
return "APIError: status={}".format(self.status)
def RetrieveField(json, field):
    # Return the value of `field`, or an empty string if the record lacks it.
    try:
        return json[field]
    except KeyError:
        return ''
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/organisation')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /organisation {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} {} {}'.format(todo_item['id'], todo_item['legalName'], todo_item['jurisdiction']))
# + [markdown] id="bjCb-j7swsJQ"
# The number of records returned can be modified by including the `size` query parameter:
# + id="uB3NCCz_w1Kn" colab={"base_uri": "https://localhost:8080/"} outputId="e48e0484-29dc-434e-e287-198479a339a3"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/organisation?size=20')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /organisation {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} {}'.format(todo_item['id'], todo_item['legalName']))
# + [markdown] id="hBBcFlJHdaAy"
# Some records may lack optional fields, so these exceptions must be handled; the `RetrieveField` helper above does this by returning an empty string.
# + id="9EyiI0K8pZ3P" colab={"base_uri": "https://localhost:8080/", "height": 181} outputId="7de7cb94-95bb-4974-9bef-10498b6d2fb2"
for todo_item in resp.json():
print('{} {}'.format(RetrieveField(todo_item,'id'), RetrieveField(todo_item,'legalName')))
# + [markdown] id="Saxt3hpMeRNY"
# * **GET /organisation/{id}** that offers the whole information of a specific organisation, knowing its ID. One address parameter (id) is defined in the call that will display the following information:
#
# |Field |Description |Type |Required|
# |----------------|-------------------------------------------------------------------|-------|--------|
# |**id** |Corresponds to the identifier of the organisation, must be unique |string |true |
# |**name** |A common name for the organisation by which this entity is known |string |false |
# |**locality** |The locality of the contact point/person |string |false |
# |**postal code** |The postal code of the contact point/person |string |false |
# |**email** |The e-mail address of the contact point/person |string |false |
# |**telephone** |The telephone number of the contact point/person; this should include the international dialing code |string |false |
# + id="cesnQH4ddumJ" colab={"base_uri": "https://localhost:8080/"} outputId="7e1911c5-b1fb-45a9-c991-9fa12ff18a23"
# ocid = str(input("Please, insert the identifier: "))
ocid = 'gb-SC475583'
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/organisation/' + ocid)
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /organisation/{{id}} {}'.format(resp.status_code))
print('Identifier: ' + RetrieveField(resp.json(),'id'))
print('Name: ' + RetrieveField(resp.json(),'legalName'))
print('Address: ' + RetrieveField(resp.json(),'fulAddress'))
print('Locality: ' + RetrieveField(resp.json(),'locality'))
print('Postal Code: ' + RetrieveField(resp.json(),'postalCode'))
print('Email: ' + RetrieveField(resp.json(),'email'))
print('Telephone: ' + RetrieveField(resp.json(),'telephone'))
print (resp.json())
# + [markdown] id="KT8cp-xVgS6y"
# * **GET /organisation/{id}/contracting-process** that offers the whole list of contracting processes in which a specific organisation is involved.
# + id="0ThRfhSmgtjC" colab={"base_uri": "https://localhost:8080/"} outputId="e829d564-daec-4064-ccd5-3fbb06def782"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/organisation/' + ocid + '/contractingProcess/')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /organisation/{{id}}/contractingProcess {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="KFKI63fQtr-4"
# * **GET /organisation/{id}/award** that offers the whole list of awards in which a specific organisation is involved.
# + id="cGd69K7lt0HI" colab={"base_uri": "https://localhost:8080/"} outputId="2d21327b-15fe-4aa1-a3a1-ec3c77421b22"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/organisation/' + ocid + '/award/')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /organisation/{{id}}/award {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="3zuZ_nHEzK5H"
#
# + [markdown] id="Ig_JtUIzyHcm"
# # 2. **CONTRACTS**
#
# This resource offers all the information related to contracts. It is divided into six services:
# + [markdown] id="lcq-UrHxzQob"
# * **GET /contract** that offers the whole list of contracts
# + id="vOaDAKYazWPT" colab={"base_uri": "https://localhost:8080/"} outputId="55d7fb3a-3224-4b3b-e198-f45ee5f97d45"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="kZ4pSP6o0b0z"
# * **GET /contract/{id}** that offers the whole information of a specific contract, knowing its ID
# + id="ANfiH50D0uPr" colab={"base_uri": "https://localhost:8080/"} outputId="bf9e05b2-6e40-445a-cb9c-ae3a746c6766"
ocid = 'ocds-0c46vo-0009-DN368144-1_1'
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract/' + ocid)
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract/{{id}} {}'.format(resp.status_code))
print('Identifier: ' + RetrieveField(resp.json(),'id'))
# + [markdown] id="saW0SfWs-pGw"
# * **GET /contract/{id}/amendment** that offers the whole list of amendments of a specific contract, knowing its ID
# + id="dW302UHt-0Uh"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract/' + ocid + '/amendment')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract/{{id}}/amendment {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="U_3OR4_mB35t"
# * **GET /contract/{id}/document** that offers the whole list of documents of a specific contract, knowing its ID
# + id="-v8dh05OB-zr"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract/' + ocid + '/document')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract/{{id}}/document {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="ZkKHlxSDCGKE"
# * **GET /contract/{id}/item** that offers the whole list of items of a specific contract, knowing its ID
# + id="cUdcwgKyCKvE"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract/' + ocid + '/item')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract/{{id}}/item {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="h7oXMyUOCQRc"
# * **GET /contract/{id}/buyer** that offers the whole list of buyers of a specific contract, knowing its ID
# + id="n2bIdefGCV6d"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contract/' + ocid + '/buyer')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contract/{{id}}/buyer {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="YMxY2H29Cicf"
# # 3. **CONTRACTING PROCESSES**
#
# This resource offers all the information related to the contracting process. It is divided into four services:
# + [markdown] id="34gVsAq0CvQZ"
# * **GET /contracting-process** that offers the whole list of contracting processes
# + id="ZuHqeaQQC669" colab={"base_uri": "https://localhost:8080/"} outputId="83bc0f01-d5da-4176-bc97-aa248e5cd107"
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contractingProcess')
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contractingProcess {}'.format(resp.status_code))
for todo_item in resp.json():
print('{} '.format(todo_item['id']))
# + [markdown] id="_cqdpPssDUdn"
# * **GET /contracting-process/{id}** that offers the whole information of a specific contracting process, knowing its ID
# + id="JFH4-kC9Datv" colab={"base_uri": "https://localhost:8080/"} outputId="bf0f2acb-a4ed-4258-f64e-8c2cc28ca692"
ocid = 'ocds-0c46vo-0117-139762'
resp = requests.get('http://tbfy.librairy.linkeddata.es/kg-api/contractingProcess/' + ocid)
if resp.status_code != 200:
# This means something went wrong.
    raise APIError('GET /contractingProcess/{{id}} {}'.format(resp.status_code))
print('Identifier: ' + RetrieveField(resp.json(),'id'))
print(resp.json())
# + [markdown] id="0_h7qmINTyS1"
# # **API ACCOUNTS AUTHENTICATION AND AUTHORISATION**
#
# The current API does not provide any type of authentication or authorisation mechanisms to access data. However, the core API will provide basic authentication or authorisation mechanisms that will be based on Spring Security, given the technology stack that has been proposed for the API development.
| notebooks/Documentation4Developers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from scanr_utils import *
from unpwaywall_utils import *
# # 1. List of DOIs in the scope
# ## 1.1 Proposed list of DOIs from the scanR APIs
# only a proposed list of DOIs; the list is not exhaustive!
df = get_publications_with_doi("197535016") # takes a SIREN or an RNSR number as argument
df.head()
# ## 1.2 Or read an existing list of DOIs directly
#df = pd.read_excel("...")
df = pd.DataFrame({"doi": ["10.1002/nag.1123", "10.1016/j.icarus.2019.07.011", "10.1080/14693062.2012.699787"]})
df
# # 2. Enrichment with the Unpaywall API
df_oa_status = enrich_with_upw_status(df)
df_oa_status
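A common next step is to summarise the open-access share of the corpus. The sketch below uses a hand-built frame, because the exact columns added by `enrich_with_upw_status` are not shown here; the `is_oa` column name is an assumption, not a verified part of that API.

```python
import pandas as pd

# Hypothetical enriched frame: the `is_oa` column name is an assumption.
df_demo = pd.DataFrame({"doi": ["10.1/a", "10.1/b", "10.1/c"],
                        "is_oa": [True, False, True]})
oa_share = df_demo["is_oa"].mean()  # fraction of open-access DOIs
print(f"Open-access share: {oa_share:.0%}")
```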
| notebooks/OA_perimetre_specifique.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.head()
test_df.head()
# +
#Separate target feature for training data
x_train = train_df.drop("loan_status", axis=1)
y_train = train_df["loan_status"]
# Convert categorical data to numeric and separate target feature for training data
y_train = LabelEncoder().fit_transform(train_df["loan_status"])
x_train_numeric = pd.get_dummies(x_train)
# -
x_train_numeric.head()
# +
#Separate target feature for testing data
x_test = test_df.drop("loan_status", axis=1)
y_test = test_df["loan_status"]
# Convert categorical data to numeric and separate target feature for testing data
y_test = LabelEncoder().fit_transform(test_df["loan_status"])
x_test_numeric = pd.get_dummies(x_test)
# +
# Locate missing dummy variables
for variable in x_train_numeric.columns:
if variable not in x_test_numeric.columns:
print(variable)
# add missing dummy variables to testing set
x_test_numeric[variable] = 0
x_test_numeric.head()
# -
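An equivalent, loop-free way to align the dummy columns is `DataFrame.reindex`, which both adds the training-only columns (filled with 0) and drops any test-only columns in one call. A small self-contained sketch:

```python
import pandas as pd

def align_dummies(train_dummies, test_dummies):
    # Force the test set to have exactly the training columns:
    # missing dummies are filled with 0, unseen columns are dropped.
    return test_dummies.reindex(columns=train_dummies.columns, fill_value=0)

tr = pd.get_dummies(pd.DataFrame({"grade": ["A", "B"]}))
te = pd.get_dummies(pd.DataFrame({"grade": ["A", "C"]}))
aligned = align_dummies(tr, te)
print(list(aligned.columns))  # same columns as the training dummies
```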
# ## Initial Thought
# After reading and learning more about the differences between Random Forests and Logistic Regression, I believe Random Forests will perform better on this largely categorical data, surfacing the most important features while increasing the overall accuracy of the result.
# +
# Train the Logistic Regression model on the unscaled data and print the model score
LogisticModel = LogisticRegression().fit(x_train_numeric, y_train)
print(f'training score: {LogisticModel.score(x_train_numeric, y_train)}')
print(f'testing score: {LogisticModel.score(x_test_numeric, y_test)}')
# +
# Train a Random Forest Classifier model and print the model score
ForestModel = RandomForestClassifier().fit(x_train_numeric, y_train)
print(f'training score: {ForestModel.score(x_train_numeric, y_train)}')
print(f'testing score: {ForestModel.score(x_test_numeric, y_test)}')
# -
# ## Result before Scaling
# It appears that the Random Forest Classifier performed better than Logistic Regression; however, the Forest model's training score of 1.0 is a clear sign of overfitting!
# ## After Scaling Thoughts
# Once the data is scaled, I believe both models will perform better than in the previous results
# +
# Scale the data
scaler = StandardScaler().fit(x_train_numeric)
x_train_scaled = scaler.transform(x_train_numeric)
x_test_scaled = scaler.transform(x_test_numeric)
# +
# Train the Logistic Regression model on the scaled data and print the model score
LogisticModel = LogisticRegression().fit(x_train_scaled, y_train)
print(f'training score: {LogisticModel.score(x_train_scaled, y_train)}')
print(f'testing score: {LogisticModel.score(x_test_scaled, y_test)}')
# +
# Train a Random Forest Classifier model on the scaled data and print the model score
ForestModel = RandomForestClassifier().fit(x_train_scaled, y_train)
print(f'training score: {ForestModel.score(x_train_scaled, y_train)}')
print(f'testing score: {ForestModel.score(x_test_scaled, y_test)}')
# -
# ## After Scaling Results
# After scaling the data and rerunning our models, we can see that the Logistic Regression model performs better than it did before scaling. On the other hand, the Random Forest Classifier performed worse than before scaling, and we can observe that the training score still shows overfitting!
| Credit Risk Evaluator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/AI4Finance-LLC/FinRL-Library/blob/master/FinRL_multiple_stock_trading.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="osBHhVysOEzi"
#
# <a id='1.2'></a>
# ## 2.2. Check that the additional packages needed are present; if not, install them.
# * Yahoo Finance API
# * pandas
# * numpy
# * matplotlib
# * stockstats
# * OpenAI gym
# * stable-baselines
# * tensorflow
# * pyfolio
# +
import sys
sys.path.append("..")
# + [markdown] id="nGv01K8Sh1hn"
# <a id='1.3'></a>
# ## 2.3. Import Packages
# + colab={"base_uri": "https://localhost:8080/"} id="lPqeTTwoh1hn" outputId="db90f7a1-fecb-45c3-b9c1-6781d93342e2"
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
# %matplotlib inline
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.model.models import DRLAgent
from finrl.trade.backtest import BackTestStats, BaselineStats, BackTestPlot
from pprint import pprint
# + [markdown] id="T2owTj985RW4"
# <a id='1.4'></a>
# ## 2.4. Create Folders
# + id="w9A8CN5R5PuZ"
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="h3XJnvrbLp-C" outputId="26b22b1d-ad69-4483-e310-3f5f523f3ffa"
# from config.py start_date is a string
config.START_DATE = "2018-01-01"
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="FUnY8WEfLq3C" outputId="abcc5cdb-ef15-47ab-c5ac-3bffe5bdbe26"
# from config.py end_date is a string
config.END_DATE
print(f"start: {config.START_DATE}, end: {config.END_DATE}")
# + colab={"base_uri": "https://localhost:8080/"} id="JzqRRTOX6aFu" outputId="6103745f-6618-42ac-f3d5-95716bcd5e0f"
pprint(config.DOW_30_TICKER)
# + colab={"base_uri": "https://localhost:8080/"} id="yCKm4om-s9kE" outputId="8f9b1fab-5e79-4acf-a054-83e171a0c7ca"
df = YahooDownloader(start_date = config.START_DATE,
end_date = config.END_DATE,
ticker_list = config.DOW_30_TICKER).fetch_data()
# + colab={"base_uri": "https://localhost:8080/"} id="CV3HrZHLh1hy" outputId="460516d1-c117-4443-adee-f3391812c9d2"
df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 199} id="4hYkeaPiICHS" outputId="876068ce-8160-434d-8579-5ea9dcecec6b"
df.sort_values(['date','tic'],ignore_index=True).head()
# + colab={"base_uri": "https://localhost:8080/"} id="Le342Hc1h1iI" outputId="789c16d1-4933-4727-d27f-1c0c107b8ff7"
fe = FeatureEngineer(use_technical_indicator=True,
tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
use_turbulence=True,
user_defined_feature = False)
processed = fe.preprocess_data(df)
# -
print(type(processed))
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="grvhGJJII3Xn" outputId="2667dc49-47a9-4751-cfa1-98ec3a574f0e"
processed.sort_values(['date','tic'],ignore_index=True).head(10)
# + id="W0qaVGjLtgbI"
train = data_split(processed, '2009-01-01','2019-01-01')
trade = data_split(processed, '2019-01-01','2020-12-01')
print(len(train))
print(len(trade))
# + colab={"base_uri": "https://localhost:8080/", "height": 199} id="p52zNCOhTtLR" outputId="ded764c5-982d-4c2d-9ec4-aaef0cc4d020"
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 199} id="k9zU9YaTTvFq" outputId="c10e63ea-f3f9-4bfd-e554-cd671b9d5f23"
trade.head()
# -
dates = trade.date.sort_values().unique()
print(dates.dtype)
tr = trade.set_index("date")
print(tr.index.dtype)
sel = tr.loc['2020-11-20']
subsel = sel[sel.tic=='AAPL']
subsel.head()
subsel.loc['2020-11-20', ['open', 'close']].tolist()
v = []
v+=(subsel.loc['2020-11-20', ['open', 'close']].tolist())
print(v)
a = np.arange(50)
print(len(a[1:31]))
a = np.random.randn(4)
b = np.random.rand(4)
print(a)
print(b)
# +
actions = np.array([-5, 5, -5])
holdings = np.array([10, 15, 0])
expected = np.array([-5, 5, -3])
min_actions = -holdings
print(min_actions)
final_actions = np.maximum(actions, min_actions)
print(final_actions)
# -
np.dot(np.arange(10), np.arange(10))
np.arange(20)[::3]
# +
import numpy as np
import pandas as pd
from gym.utils import seeding
import gym
from gym import spaces
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pickle
from stable_baselines3.common.vec_env import DummyVecEnv
class StockTradingEnv(gym.Env):
"""A stock trading environment for OpenAI gym"""
metadata = {'render.modes': ['human']}
'''
state space: {start_cash, <owned_shares>, for s in stocks{<stock.values>}, }
'''
def __init__(self,df,
transaction_cost_pct=3e-3,
date_col_name = 'date',
hmax = 10,
turbulence_threshold=None,
make_plots = False,
print_verbosity = 10,
reward_scaling = 1e-4,
initial_amount = 1e6,
daily_information_cols = ['open', 'close', 'high', 'low', 'volume'],
day = 0, iteration=''):
self.df = df
self.stock_col = 'tic'
self.assets = df[self.stock_col].unique()
self.dates = df[date_col_name].sort_values().unique()
self.df = self.df.set_index(date_col_name)
self.hmax = hmax
self.initial_amount = initial_amount
self.transaction_cost_pct =transaction_cost_pct
self.reward_scaling = reward_scaling
self.daily_information_cols = daily_information_cols
self.close_index = self.daily_information_cols.index('close')
self.state_space = 1+len(self.assets) + len(self.assets)*len(self.daily_information_cols)
self.action_space = spaces.Box(low = -1, high = 1,shape = (len(self.assets),))
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape = (self.state_space,))
        # initialize state
# self.state = self._initiate_state()
# initialize reward
self.reward = 0
self.date_index=0
self.turbulence = 0
self.cost = 0
self.trades = 0
self.episode = 0
# memorize all the total balance change
self.asset_memory = [self.initial_amount]
self.rewards_memory = []
self.actions_memory=[]
self.state_memory = []
self.account_information = {
"cash": [self.initial_amount],
"asset_value": [0],
"total_assets": [self.initial_amount]
}
self.state_memory.append(np.array([self.initial_amount] + [0]*len(self.assets) + self.get_date_vector(self.date_index)))
# print(self.state_memory)
def get_date_vector(self, date, cols=None):
date = self.dates[date]
if cols is None:
cols = self.daily_information_cols
trunc_df = self.df.loc[date]
v = []
for a in self.assets:
subset = trunc_df[trunc_df[self.stock_col]==a]
v+=subset.loc[date, cols].tolist()
assert len(v)==len(self.assets)*len(cols)
return v
def step(self, actions):
actions = actions*self.hmax
# need to prevent selling assets we don't have
self.actions_memory.append(actions)
#terminal case
        if self.date_index==len(self.dates)-1:
            print("hit end of the dates!")
            print(self.dates[-1])
            print(self.dates[self.date_index])
            # terminal step: nothing left to trade, so return the last recorded state
            return self.state_memory[-1], 0, True, {}
else:
print(f"step: {self.date_index}")
begin_cash = self.state_memory[-1][0]
holdings = self.state_memory[-1][1:len(self.assets)+1]
print(f"holdings: {holdings}")
closings = np.array(self.get_date_vector(self.date_index, cols = ['close']))
asset_value = np.dot(holdings,closings)
reward = begin_cash + asset_value - self.account_information['total_assets'][-1]
print(f"reward: {reward:0.2f}")
#store the account holdings
self.account_information['cash'].append(begin_cash)
self.account_information['asset_value'].append(asset_value)
self.account_information['total_assets'].append(begin_cash + asset_value)
print(self.account_information)
#let's execute the buys and sells.
#first sell so we can get the cash out.
#don't allow actions that take stocks below 0
actions = np.maximum(actions, -np.array(holdings))
print(f"actions: {actions}")
#assumption here, we can buy for close price, rework to buy at open
sells = -np.clip(actions, -np.inf, 0)
proceeds = np.dot(sells,closings)
print(f"proceeds of sells: {proceeds}")
costs = proceeds*self.transaction_cost_pct
coh = begin_cash+proceeds
print(f"cash on hand: {coh}")
#next buy
buys = np.clip(actions,0, np.inf)
spend = np.dot(buys, closings)
costs += spend*self.transaction_cost_pct
assert (spend+costs)<coh
coh = coh-spend-costs
print(f"costs: {costs:0.2f}")
holdings_updated =holdings+actions
print(f"holdings_updated: {holdings_updated}")
assert min(holdings_updated)>=0
#need to figure out date index here...because it might make sense to lag this back one day somehow
# step forward a day
self.date_index+=1
state = [coh] + list(holdings_updated) + self.get_date_vector(self.date_index)
self.state_memory.append(state)
print(holdings)
print(closings)
print(f"asset_value: {asset_value}")
if self.date_index>3:
raise Exception()
return state, reward, True, {}
def get_sb_env(self):
e = DummyVecEnv([lambda: self])
print(type(e))
obs = e.reset()
return e, obs
def reset(self):
return [0 for _ in range(self.state_space)]
e = StockTradingEnv(df = trade)
# e.get_date_vector('2020-11-20')
# + id="Q2zqII8rMIqn"
stock_dimension = len(train.tic.unique())
state_space = 1 + 2*stock_dimension + len(config.TECHNICAL_INDICATORS_LIST)*stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
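As a sanity check, the state-space formula in the cell above can be verified by hand: one cash entry, plus one holdings entry and one closing-price entry per stock, plus one entry per technical indicator per stock. The figure of 8 indicators below is illustrative; the actual count depends on `config.TECHNICAL_INDICATORS_LIST`.

```python
def state_space_size(stock_dim, n_indicators):
    # cash (1) + holdings per stock + closing price per stock + indicators per stock
    return 1 + 2 * stock_dim + n_indicators * stock_dim

print(state_space_size(30, 8))  # 1 + 60 + 240 = 301
```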
# +
env_kwargs = {
"hmax": 100,
"initial_amount": 100000,
"transaction_cost_pct": 0.003,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
e_train_gym = StockTradingEnv(df = train)
e_trade_gym = StockTradingEnv(df = trade)
# + id="AWyp84Ltto19"
env_train, _ = e_train_gym.get_sb_env()
env_trade, obs_trade = e_trade_gym.get_sb_env()
# + id="364PsqckttcQ"
da = DRLAgent(env = env_train)
PPO_PARAMS = {
"n_steps": 2048,
"ent_coef": 0.01,
"learning_rate": 0.00025,
"batch_size": 64,
"n_epochs": 300
}
TD3_PARAMS = {
"batch_size": 250,
"buffer_size": 1000000,
"learning_rate": 0.001
}
SAC_PARAMS = {
"buffer_size": 100000,
"learning_rate": 0.0001,
"learning_starts": 100,
"batch_size": 2048,
"ent_coef": "auto_0.1",
}
# model = da.get_model("ppo", model_kwargs = PPO_PARAMS,verbose = 1)
# model = da.get_model("td3", model_kwargs = TD3_PARAMS)
model = da.get_model("sac", model_kwargs = SAC_PARAMS)
# -
trained = model.learn(total_timesteps=200000, tb_log_name='sac_longrun', n_eval_episodes = 15)
# + id="efwBi84ch1jE"
data_turbulence = processed[(processed.date<'2019-01-01') & (processed.date>='2009-01-01')]
insample_turbulence = data_turbulence.drop_duplicates(subset=['date'])
# + colab={"base_uri": "https://localhost:8080/"} id="VHZMBpSqh1jG" outputId="9ab772a0-2905-48da-82ac-1bd0ee9bab63"
insample_turbulence.turbulence.describe()
# + id="yuwDPkV9h1jL"
turbulence_threshold = np.quantile(insample_turbulence.turbulence.values,1)
print(f"Turb threshold {turbulence_threshold:0.2f}")
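Note that `np.quantile(values, 1)` is simply the in-sample maximum, so with this setting the turbulence filter never triggers on the training data; a slightly smaller quantile such as 0.99 would actually exclude the most extreme days. A quick illustration:

```python
import numpy as np

vals = np.array([10.0, 20.0, 30.0, 40.0, 1000.0])
print(np.quantile(vals, 1.0))   # equals vals.max(), so nothing is filtered
print(np.quantile(vals, 0.99))  # strictly below the extreme observation
```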
# + [markdown] id="U5mmgQF_h1jQ"
# ### Trade
#
# The DRL model needs to be updated periodically in order to take full advantage of the data; ideally we retrain it yearly, quarterly, or monthly. We also need to tune the parameters along the way. In this notebook I only use the in-sample data from 2009-01 to 2018-12 to tune the parameters once, so there is some alpha decay as the length of the trade period extends.
#
# Numerous hyperparameters, e.g. the learning rate and the total number of samples to train on, influence the learning process and are usually determined by testing some variations.
# + colab={"base_uri": "https://localhost:8080/"} id="eLOnL5eYh1jR" outputId="8ec74736-cf55-4d88-d57c-694c135e3159"
df_account_value, df_actions = DRLAgent.DRL_prediction(model=model,
test_data = trade,
test_env = env_trade,
test_obs = obs_trade)
# -
df_account_value.head()
plt.plot(df_account_value.date, df_account_value.account_value)
df_actions
# + colab={"base_uri": "https://localhost:8080/"} id="ERxw3KqLkcP4" outputId="0a0652e7-4d95-4a15-aac9-4789dd5abf8c"
df_account_value.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 199} id="2yRkNguY5yvp" outputId="e668cb5e-d63f-4c2b-a0a0-c2a72b4ac47d"
df_account_value.head()
# + [markdown] id="W6vvNSC6h1jZ"
# <a id='6'></a>
# # Part 7: Backtest Our Strategy
# Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.
# + [markdown] id="Lr2zX7ZxNyFQ"
# <a id='6.1'></a>
# ## 7.1 BackTestStats
# Pass in df_account_value; this information is stored in the env class.
#
# + colab={"base_uri": "https://localhost:8080/"} id="Nzkr9yv-AdV_" outputId="90bea8f7-fa0e-43fe-8e87-eeda1cee4724"
print("==============Get Backtest Results===========")
perf_stats_all = BackTestStats(account_value=df_account_value)
perf_stats_all = pd.DataFrame(perf_stats_all)
# perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
# + [markdown] id="9U6Suru3h1jc"
# <a id='6.2'></a>
# ## 7.2 BackTestPlot
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="lKRGftSS7pNM" outputId="d7613aac-003e-461d-98f4-2b358105a9e9"
print("==============Compare to S&P 500 (^GSPC)===========")
# %matplotlib inline
# S&P 500: ^GSPC
# Dow Jones Index: ^DJI
# NASDAQ 100: ^NDX
BackTestPlot(df_account_value,
baseline_ticker = '^GSPC',
baseline_start = '2019-01-01',
baseline_end = '2020-12-01')
# + [markdown] id="SlLT9_5WN478"
# <a id='6.3'></a>
# ## 7.3 Baseline Stats
# + colab={"base_uri": "https://localhost:8080/"} id="YktexHcqh1jc" outputId="25a45cd2-f155-40ed-d40a-639683d229b4"
print("==============Get Baseline Stats===========")
baseline_perf_stats = BaselineStats('^DJI',
baseline_start = '2019-01-01',
baseline_end = '2020-12-01')
# + id="A6W2J57ch1j9"
| notebooks/env_upgrade.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tournament simulation using PyStan
#
# In this notebook, I build a simple Bayesian model to predict the outcome of the tournament. Since the bracket deadline is soon, I don't have time to add comments and there is plenty to clean up and improve. Comments are welcome!
#
# Main idea: assign score $\alpha_i$ to each team, predict $\sqrt{\Delta}$ where $\Delta$ is the difference in scores using $\sqrt{\Delta} \approx \alpha_i - \alpha_j + h$ where $h$ is the home-field advantage.
#
# There is no validation, so use at your own risk. That said, the predictions lack creativity, so the model seems to have captured some common trends.
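Under the model above, the probability that team $i$ beats team $j$ follows directly from the normal assumption on $\sqrt{\Delta}$: it is the chance that the latent draw is positive, i.e. $\Phi((\alpha_i - \alpha_j + h)/\sigma)$. A small stdlib sketch (the parameter values are made up for illustration; `win_probability` is not part of the notebook's code):

```python
import math

def win_probability(alpha_i, alpha_j, home=0.0, sigma=1.0):
    """P(team i beats team j) when sqrt(score difference) is
    Normal(alpha_i - alpha_j + home, sigma): the normal CDF
    evaluated at mean / sigma."""
    mean = alpha_i - alpha_j + home
    return 0.5 * (1.0 + math.erf(mean / (sigma * math.sqrt(2.0))))

# Equal-strength teams are a coin flip; stronger teams win more often.
print(win_probability(2.0, 2.0))             # 0.5
print(win_probability(3.0, 2.0, sigma=1.0))  # ~0.84, i.e. Phi(1)
```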
# +
import random
from collections import Counter, defaultdict
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import pystan as ps
# -
first_season = 2010
def convert_Wloc(loc):
if loc == "H":
return 1
if loc == "A":
return -1
return 0
# +
rs_df = pd.read_csv("data/RegularSeasonCompactResults.csv")
t_df = pd.read_csv("data/TourneyCompactResults.csv")
games = pd.concat((rs_df, t_df))
teams = pd.read_csv("data/Teams.csv")
seeds = pd.read_csv("data/TourneySeeds.csv")
slots = pd.read_csv("data/TourneySlots.csv")
tid_to_id = { team: idx for idx, team in enumerate(teams["Team_Id"].factorize()[1]) }
id_to_tid = { idx: team for idx, team in enumerate(teams["Team_Id"].factorize()[1]) }
games["win_id"] = games.apply(lambda x: tid_to_id[x.Wteam], 1)
games["los_id"] = games.apply(lambda x: tid_to_id[x.Lteam], 1)
games["athome"] = games.apply(lambda x: convert_Wloc(x.Wloc), 1)
games = games[games["Season"] >= first_season]
games["t"] = games.Season - first_season
games["dscore"] = np.sqrt(games.Wscore - games.Lscore)
# -
games.head()
games.dscore.hist()
games.Daynum.hist(bins=50)
# # Simple Stan model
stan_code = """
data {
int<lower=0> nteams; // number of teams
int<lower=1> nseasons; // number of seasons
int<lower=0> N; // number of observations
real dscore[N]; // Outcome
int tid[N]; // Team
int oid[N]; // Opponent
int location[N]; // location of game
int season[N]; // Season of game
}
transformed data {}
parameters {
real home;
real team[nteams, nseasons];
// evolution of teams
real<lower=0> sigma_team0;
real<lower=0> sigma_teamd;
// variance for home game advantage
real<lower=0, upper=5> sigma_home;
// variance in outcome
real<lower=0, upper=5> sigma_score;
}
transformed parameters {
vector[N] xb;
for(i in 1:N) {
xb[i] = home * location[i] + team[tid[i], season[i]] - team[oid[i], season[i]];
}
}
model {
home ~ normal(0.1, sigma_home);
for(t in 1:nteams) {
team[t, 1] ~ normal(0.0, sigma_team0);
for(s in 2:nseasons) {
team[t, s] ~ normal(team[t, s-1], sigma_teamd); // for now independent between seasons
}
}
// outcome
dscore ~ normal(xb, sigma_score);
}
generated quantities {
}
"""
# +
stan_data = {
"nteams": len(id_to_tid),
"nseasons": max(games.t)+1,
"N": len(games),
"dscore": games.dscore,
"tid": games.win_id+1,
"oid": games.los_id+1,
"season": games.t + 1,
"location": games.athome
}
print(f"Number of games: {len(games)}")
# -
stan_iters = 500
stan_fit = ps.stan(model_code=stan_code, data=stan_data, iter=stan_iters, chains=2)
stan_params = stan_fit.extract()
team_scores = stan_params["team"]
sigma_score = stan_params["sigma_score"]
# +
def tid_to_name(tid):
return teams[teams.Team_Id == tid].iloc[0].Team_Name
def id_to_name(idx):
return tid_to_name(id_to_tid[idx])
# -
tid_to_name(1448)
mean_team_scores = team_scores.mean(axis=0)
# +
tourny_teams = seeds[seeds.Season == 2017].Team.unique()
tourny_teams_ids = [tid_to_id[t] for t in tourny_teams]
# show top 10 teams
qualified_teams = sorted(list(zip(mean_team_scores[tourny_teams_ids, -1], [(id_to_name(idx), idx) for idx in tourny_teams_ids])))
qualified_teams[-10:]
# -
qualified_teams[:5]
f, ax = plt.subplots(1, 1)
_ = ax.hist(team_scores[:, 80, -1], bins=100, range=(0, 6), alpha=0.7)
_ = ax.hist(team_scores[:, 213, -1], bins=100, range=(0,6), alpha=0.7)
# # Tournament simulation
# +
tournament = slots[slots.Season == 2017][["Slot", "Strongseed", "Weakseed"]].set_index("Slot").to_dict("index")
# purge initial playoffs
tournament = {k: (v["Strongseed"], v["Weakseed"]) for k, v in tournament.items() if v["Strongseed"][-1] != "a"}
# +
initial = seeds[seeds.Season == 2017][["Seed", "Team"]].set_index("Seed").to_dict("index")
initial = {k: v["Team"] for k, v in initial.items() if k[-1] not in ["a", "b"]}
initial["W11"] = 1425
initial["W16"] = 1291
initial["Y16"] = 1413
initial["Z11"] = 1243
# -
tid_to_name(1243)
def simulate_game(A, B, trace):
"""
Simulates game between teams A and B
Returns: winner, loser, margin
"""
idA = tid_to_id[A]
idB = tid_to_id[B]
# find team coefficients
alpha_a = team_scores[trace, idA, -1]
alpha_b = team_scores[trace, idB, -1]
sqrt_score = random.gauss(alpha_a - alpha_b, sigma_score[trace])
# return result
if sqrt_score >= 0:
return A, B, sqrt_score**2
return B, A, sqrt_score**2
# Checking whether the simulations make sense. The win percentage seems a little conservative between a number 1 and a number 16 seed, which is normal for this type of model. The predicted scores are clearly over-dispersed; that is due to the square-root transform. However, it shouldn't influence the simulations too much.
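The over-dispersion claim can be checked without Stan: squaring normal draws (as `simulate_game` does with `sqrt_score**2`) produces a right-skewed, heavy-tailed score distribution. A seeded stdlib sketch with made-up parameters:

```python
import random
import statistics

random.seed(0)
# Draw latent sqrt-scores around a typical margin, then square them,
# mirroring how simulate_game turns sqrt_score into a final score.
sqrt_scores = [random.gauss(1.5, 1.0) for _ in range(20000)]
scores = [z * z for z in sqrt_scores]

# For Z ~ Normal(mu, sigma), E[Z^2] = mu^2 + sigma^2 = 1.5^2 + 1 = 3.25.
print(round(statistics.mean(scores), 2))
# Right skew: the mean of the squared scores sits above the median.
print(statistics.mean(scores) > statistics.median(scores))
```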
# +
team_a = id_to_tid[336]
team_b = id_to_tid[312]
tid_to_name(team_a), tid_to_name(team_b)
raw_games = []
for _ in range(10):
raw_games += [simulate_game(team_a, team_b, t) for t in range(50, stan_iters)]
final_scores = [(2*(a==team_a)-1) * score for a, b, score in raw_games]
print(f"Simulating {tid_to_name(team_a)} vs {tid_to_name(team_b)}")
a_win_perc = len([s for s in final_scores if s > 0]) / len(final_scores)
print(f"{tid_to_name(team_a)} wins {100*a_win_perc:.1f}%")
f = plt.hist(final_scores, bins=100, label="Final score")
# -
def simulate_tournament(trace):
results = {seed: team for seed, team in initial.items()}
games = []
for match, (teamA, teamB) in tournament.items():
winner, loser, score = simulate_game(results[teamA], results[teamB], trace)
results[match] = winner
# round of game
r = match[1]
games.append((winner, loser, score, r))
# clean initial seeds
clean_results = {r: tid_to_name(winner)
for r, winner in results.items() if r[0] == "R"}
return clean_results, games
# ## Simulate a single tournament
# +
results, games = simulate_tournament(101)
for tidA, tidB, score, rnd in games:
print("Round {3}: {0:<20} def. {1:<20} {2}".format(tid_to_name(tidA), tid_to_name(tidB), int(score), rnd))
# -
# # Simulate many tournaments
round_counters = defaultdict(Counter)
for _ in range(10000):
trace = random.randint(50, 499)
sim, _ = simulate_tournament(trace)
for rnd, team in sim.items():
round_counters[rnd].update([team])
def perc_counter(cntr):
total = sum(cntr.values())
return sorted([(team, cnt/total) for team, cnt in cntr.items()], key=lambda x: -x[1])
# +
percentages = {rnd: perc_counter(c) for rnd, c in round_counters.items()}
for rnd, teams in percentages.items():
print("Round: {}".format(rnd))
for team, perc in teams:
print("{:>3.1f}% : {}".format(100*perc, team))
print("-"*10)
# -
| stan model for ncaa 2017 bracket.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import dependencies
from bs4 import BeautifulSoup
import requests
from urllib.parse import quote
import pandas as pd
class GoogleSpider(object):
def __init__(self):
"""Scrape page metadata
This class fetches pages with requests and extracts Open Graph metadata with BeautifulSoup.
"""
super().__init__()
self.headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:79.0) Gecko/20100101 Firefox/79.0',
# 'Host': 'www.medium.com',
# 'Referer': 'https://www.medium.com/'
}
def __get_source(self, url: str) -> requests.Response:
"""Get the web page's source code
Args:
url (str): The URL to crawl
Returns:
requests.Response: The response from URL
"""
return requests.get(url, allow_redirects=True)
def scrap(self, urls: list) -> pd.DataFrame:
"""Scrape Open Graph metadata from a list of URLs
Args:
urls (list): The URLs to crawl
Returns:
pd.DataFrame: One row of metadata per URL
"""
df_result = pd.DataFrame(columns = [
'url',
'title',
'description',
'image',
'published_date',
'author',
'section',
'type',
])
for url in urls:
# Get response
response = self.__get_source(url)
# Initialize BeautifulSoup
soup = BeautifulSoup(response.text, 'html.parser')
# Get the result containers
result_containers = soup.findAll('head')
# Final results list
results = []
# Loop through every container
for container in result_containers:
if container.find('meta', {"property": "og:title"}):
title = container.find('meta', {"property": "og:title"})['content']
elif container.find('title'):
title = container.find('title').text
else:
title = ''
if container.find('meta', {"property":"og:description"}):
description = container.find('meta', {"property":"og:description"})['content']
else:
description = ''
if container.find('meta', {"property":"og:image"}):
image = container.find('meta', {"property":"og:image"})['content']
else:
image = ''
# if container.find('meta', {"property":"og:author"}):
# author = container.find('meta', {"property":"og:author"})['content']
# else:
# author = ''
if container.find('meta', {"property":"article:published_time"}):
published_time = container.find('meta', {"property":"article:published_time"})['content']
else:
published_time = ''
if container.find('meta', {"property":"article:author"}):
article_author = container.find('meta', {"property":"article:author"})['content']
else:
article_author = ''
if container.find('meta', {"property":"article:section"}):
section = container.find('meta', {"property":"article:section"})['content']
else:
section = ''
if container.find('meta', {"property":"og:type"}):
_type = container.find('meta', {"property":"og:type"})['content']
else:
_type = ''
new_result = {
'url': url,
'title': title,
'description': description,
'image': image,
# 'author': author,
'published_date': published_time,
'author': article_author,
'section': section,
'type': _type,
}
# DataFrame.append was removed in pandas 2.x; concat a one-row frame instead
df_result = pd.concat([df_result, pd.DataFrame([new_result])], ignore_index=True)
return df_result
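The long if/else chain above can be condensed into a single lookup table of Open Graph properties. As a dependency-free illustration of the same extraction idea, here is a sketch using only the standard library's `html.parser` (the `MetaExtractor` class and sample HTML are assumptions for demonstration, not part of the notebook's code):

```python
from html.parser import HTMLParser

# Maps the Open Graph property attribute to the output column name.
OG_PROPS = {
    'og:title': 'title',
    'og:description': 'description',
    'og:image': 'image',
    'article:published_time': 'published_date',
    'article:author': 'author',
    'article:section': 'section',
    'og:type': 'type',
}

class MetaExtractor(HTMLParser):
    """Collect <meta property="..." content="..."> pairs from a page head."""
    def __init__(self):
        super().__init__()
        self.result = {field: '' for field in OG_PROPS.values()}

    def handle_starttag(self, tag, attrs):
        if tag != 'meta':
            return
        attrs = dict(attrs)
        prop = attrs.get('property')
        if prop in OG_PROPS:
            self.result[OG_PROPS[prop]] = attrs.get('content', '')

html = '''<head>
<meta property="og:title" content="Hello" />
<meta property="og:type" content="article" />
</head>'''
parser = MetaExtractor()
parser.feed(html)
print(parser.result['title'])  # Hello
```

Properties absent from the page simply stay as empty strings, matching the fallback behavior of the BeautifulSoup version.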
# +
urls = []
df_url = pd.read_excel('exports/urls.xlsx', index_col=0).reset_index()
for index in df_url.index:
urls.append(df_url['url'][index])
# -
if __name__ == '__main__':
df = GoogleSpider().scrap(urls)
df['published_date'] = pd.to_datetime(df['published_date'], format='%Y-%m-%d %H:%M:%S', utc=True).dt.strftime('%d/%m/%Y')
df
df.to_excel('exports/url-specific.xlsx', index=False)
| web-crawling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 5 - <NAME>
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
data=pd.read_csv("C:/Users/Ada/Desktop/clustering.csv", header=None)
data.head(10)
data.columns=['x','y']
data.info()
# We can see that there are no missing values.
sns.scatterplot(data['x'],data['y'])
data.hist()
# The variables have similar ranges of values; to make sure they have an equal influence on the model, we will rescale them.
#
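Min-max scaling maps each feature to $[0, 1]$ via $x' = (x - \min)/(\max - \min)$. A plain-Python sketch of what `MinMaxScaler` does per column (the `min_max_scale` helper is illustrative, not the sklearn implementation):

```python
def min_max_scale(column):
    """Rescale a list of numbers to [0, 1]; constant columns map to 0.0."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([2, 4, 6, 10]))  # [0.0, 0.25, 0.5, 1.0]
```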
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler()
scaler.fit(data)
data = pd.DataFrame(scaler.transform(data), columns=['x','y'])
# The data after rescaling:
sns.scatterplot(data['x'],data['y'])
# ## KMeans algorithm
# ### Choosing the number of clusters - the "elbow" method
def elbow_plot(data, k_max=20):
wcss=[]
for i in range (2, k_max+1):
model=KMeans(n_clusters=i)
model.fit(data)
wcss.append(model.score(data)*(-1))
x=range(2,k_max+1)
plt.plot(x, wcss, marker='h')
plt.xticks(np.arange(min(x), max(x)+1, 2))
plt.title("Elbow method for KMeans")
plt.show()
elbow_plot(data)
# Hard to tell; by eye, somewhere between 5 and 8? Let's see how this looks in the plots.
for i in [5,6,7,8]:
model=KMeans(n_clusters=i)
col=model.fit_predict(data)
plt.scatter(data['x'], data['y'], c=col)
plt.title("KMeans for number of clusters equal to " + str(i))
plt.show()
# The plot for 8 clusters convinces me the most; intuitively, that is how I would split this data myself.
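The quantity the elbow method plots (WCSS, which `model.score(data) * (-1)` yields for sklearn's KMeans) can be computed by hand: the sum of squared distances of each point to its cluster centroid. A plain-Python sketch, assuming labels come from any clusterer (the `wcss` helper and toy points are illustrative):

```python
def wcss(points, labels):
    """Within-cluster sum of squares for 2-D points grouped by label."""
    total = 0.0
    for label in set(labels):
        cluster = [p for p, l in zip(points, labels) if l == label]
        # centroid of the cluster
        cx = sum(p[0] for p in cluster) / len(cluster)
        cy = sum(p[1] for p in cluster) / len(cluster)
        total += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in cluster)
    return total

pts = [(0, 0), (0, 2), (10, 0), (10, 2)]
print(wcss(pts, [0, 0, 1, 1]))  # 4.0
```

Splitting into more clusters can only lower WCSS, which is why the elbow method looks for the point of diminishing returns rather than the minimum.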
# ## Agglomerative clustering
#
from sklearn.metrics import silhouette_score
from sklearn.cluster import AgglomerativeClustering
# +
def silhouette_plot_agg(data, k_max=20):
silhouette = []
for i in range(2, k_max+1):
model = AgglomerativeClustering(n_clusters=i)
predictions = model.fit_predict(data)
silhouette.append(silhouette_score(data, predictions, metric = 'euclidean'))
x=range(2,k_max+1)
plt.plot(x, silhouette, marker='h')
plt.xticks(np.arange(min(x), max(x)+1, 2))
plt.title("Silhouette for Agglomerative Clustering")
plt.show()
# -
silhouette_plot_agg(data)
# 8 clearly comes out best, while 5, 7, and 9 nearly match it. Let's see how this looks in the plots.
for i in [5,7,8,9]:
model=AgglomerativeClustering(n_clusters=i)
col=model.fit_predict(data)
plt.scatter(data['x'], data['y'], c=col)
plt.title("AgglomerativeClustering for number of clusters equal to " + str(i))
plt.show()
# Again, in my (of course subjective) opinion, 8 clusters look good. The split into 9 also looks reasonable.
# ## Comparing the models for 8 clusters
# We will check the mean distance between points within the same cluster, and the maximum distance between points within the same cluster. The better model should have both values lower.
# +
from scipy.spatial import distance
def avg_dist_score(data, labels):
dists = []
for label in labels:
X = data.iloc[np.where(labels == label)]
dists.append(np.mean(distance.pdist(X)))
return np.mean(dists)
def max_dist_score(data,labels):
dists=[]
for label in labels:
X=data.iloc[np.where(labels==label)]
dists.append(np.max(distance.pdist(X)))
return np.max(dists)
# +
model_km = KMeans(n_clusters=8)
model_agg= AgglomerativeClustering(n_clusters=8)
labels1=model_km.fit_predict(data)
labels2=model_agg.fit_predict(data)
labels1
print("Mean distance between points in the same cluster for KMeans: " + str(avg_dist_score(data, labels1)))
print("Maximum distance between points in the same cluster for KMeans: " + str(max_dist_score(data, labels1)))
print("Mean distance between points in the same cluster for AgglomerativeClustering: " + str(avg_dist_score(data, labels2)))
print("Maximum distance between points in the same cluster for AgglomerativeClustering: " + str(max_dist_score(data, labels2)))
# -
# The KMeans algorithm came out better - both values are lower.
| Prace_domowe/Praca_domowa5/Grupa1/GassowskaAda/pd5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import random
# ## Population initialization
def genPopulasi(nPop, nKrom):
pop = [[int(round(random.random())) for i in range(nKrom)] for j in range(nPop)]
return pop
genPopulasi(10, 5)
# ## Computing the fitness value
def hitungFitness(krom, weight, value, max_fitness):
w = 0
v = 0
for i in range(len(krom)):
if krom[i] == 1:
w += weight[i]
v += value[i]
if w > max_fitness:
fitness = 0
else:
fitness = v
return fitness
# ## Parent selection
def turnamenParent(pop, n, weight, value, max_fitness):
idxs = random.sample(range(len(pop)), n)
fitnesses = [hitungFitness(pop[idxs[i]], weight, value, max_fitness) for i in range(n)]
# sort descending so the fittest sampled chromosome wins the tournament
fitnesses, idxs = zip(*sorted(zip(fitnesses, idxs), reverse=True))
return idxs[0]
# ## Crossover
def crossover(krom1, krom2, pCross):
rand = random.random()
titik = int(round(random.uniform(1, len(krom1)-1)))
if rand <= pCross:
for i in range(titik):
krom1[i], krom2[i] = krom2[i], krom1[i]
return krom1, krom2
# ## Mutation
def mutasi(krom, pMutasi):
for i in range(len(krom)):
rand = random.random()
if rand <= pMutasi:
if krom[i] == 0:
krom[i] = 1
else:
krom[i] = 0
return krom
# ## Initializing variables and hyperparameters
# +
nama = ['Nila', 'Gurame', 'Kakap', 'Lele', 'Mujair']
weight = [2, 7, 4, 1, 1]
value = [50000, 70000, 60000, 15000, 25000]
max_fitness = 12
nGen = 10
nPop = 20
nKrom = len(nama)
pCross = 0.8
pMutasi = 0.2
# -
# ## Main program
# +
# Initialize the population
pop = genPopulasi(nPop, nKrom)
for i in range(nGen):
fitness = []
anak = []
for j in range(round(nPop/2)):
# Parent selection
parent1 = turnamenParent(pop, 10, weight, value, max_fitness)
parent2 = turnamenParent(pop, 10, weight, value, max_fitness)
anak1 = pop[parent1][:]
anak2 = pop[parent2][:]
# Crossover
anak1, anak2 = crossover(anak1, anak2, pCross)
# mutation
anak1 = mutasi(anak1, pMutasi)
anak2 = mutasi(anak2, pMutasi)
anak.append(anak1)
anak.append(anak2)
gab = pop + anak
for f in range(len(gab)):
fitness.append(hitungFitness(gab[f], weight, value, max_fitness))
steadyState = sorted(range(len(fitness)), key=lambda k: fitness[k], reverse=True)
pop = []
for j in range(nPop):
pop.append(gab[steadyState[j]])
print (pop)
# -
# ## GA results
print ("Total value:", fitness[steadyState[0]])
for i in range(nKrom):
if pop[0][i] == 1:
print (nama[i], end=', ')
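With only 5 items there are just $2^5 = 32$ possible chromosomes, so the GA's answer can be checked against an exhaustive search over the same weights, values, and capacity used above:

```python
from itertools import product

# Same problem data as the GA above.
weight = [2, 7, 4, 1, 1]
value = [50000, 70000, 60000, 15000, 25000]
max_fitness = 12  # knapsack capacity

best_value, best_krom = 0, None
for krom in product([0, 1], repeat=len(weight)):
    w = sum(wi for wi, bit in zip(weight, krom) if bit)
    v = sum(vi for vi, bit in zip(value, krom) if bit)
    if w <= max_fitness and v > best_value:
        best_value, best_krom = v, krom

print(best_value, best_krom)  # 160000 (1, 1, 0, 1, 1)
```

The optimum drops only Kakap, for a total weight of 11 and value of 160000; the GA should converge to this chromosome.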
| .ipynb_checkpoints/GA for Knapsack Problem-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import nltk
nltk.download('stopwords')
import re
import numpy
import pandas as pd
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import pyLDAvis
import pyLDAvis.gensim # don't skip this
import matplotlib.pyplot as plt
import spacy
import matplotlib.pyplot as pt
# NLTK Stop words
from nltk.corpus import stopwords
# Initialize spacy 'en' model, keeping only tagger component (for efficiency)
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
def format_topics_sentences(ldamodel, corpus, texts):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row in enumerate(ldamodel[corpus]):
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
# DataFrame.append was removed in pandas 2.x; concat a one-row frame instead
row = pd.DataFrame([[int(topic_num), round(prop_topic, 4), topic_keywords]])
sent_topics_df = pd.concat([sent_topics_df, row], ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
def sent_to_words(sentences):
for sentence in sentences:
yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) # deacc=True removes punctuations
# Define functions for stopwords, bigrams, trigrams and lemmatization
def remove_stopwords(texts,stop_words):
return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
def make_bigrams(texts,bigram_mod):
return [bigram_mod[doc] for doc in texts]
def make_trigrams(texts):
# note: relies on global trigram_mod and bigram_mod models being built first
return [trigram_mod[bigram_mod[doc]] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""https://spacy.io/api/annotation"""
texts_out = []
for review in texts:
doc = nlp(" ".join(review))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
return texts_out
# -
def preprocess_text(data_text):
#data_text['index'] = data_text.index
stop_words = stopwords.words('english')
#stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
data_words = list(sent_to_words(data_text))
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
#trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
#trigram_mod = gensim.models.phrases.Phraser(trigram)
# See trigram example
#print(bigram_mod[data_words[0]])
#Remove Stop Words
data_words_nostops = remove_stopwords(data_words,stop_words)
# Form Bigrams
data_words_bigrams = make_bigrams(data_words_nostops,bigram_mod)
# Do lemmatization keeping only noun, adj, vb, adv
data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
# Create Dictionary
id2word = corpora.Dictionary(data_lemmatized)
# filter words occurring in more than 80% of reviews or below 20 reviews
id2word.filter_extremes(no_below=20, no_above=0.8)
# Create Corpus
texts = data_lemmatized
# Term Document Frequency
return [id2word.doc2bow(text) for text in texts],id2word
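`doc2bow` turns each tokenized review into a sparse list of `(token_id, count)` pairs. A dependency-free sketch of the idea with a hand-built vocabulary (gensim's `Dictionary` additionally handles id assignment and the frequency filtering used above, which this sketch skips):

```python
from collections import Counter

def doc2bow(tokens, vocab):
    """Map a token list to sorted (token_id, count) pairs, dropping
    out-of-vocabulary tokens -- the shape gensim's doc2bow produces."""
    counts = Counter(vocab[t] for t in tokens if t in vocab)
    return sorted(counts.items())

vocab = {'food': 0, 'great': 1, 'service': 2}
print(doc2bow(['great', 'food', 'great', 'vibe'], vocab))  # [(0, 1), (1, 2)]
```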
# +
# %%time
def getModel(corpus,id2word):
# View
#print(corpus[:10])
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=20,
random_state=100,
update_every=1,
chunksize=100,
passes=10,
alpha=0.1,
per_word_topics=True)
return lda_model
#print(lda_model.print_topics())
#df_topic_sents_keywords = format_topics_sentences(lda_model, corpus=corpus, texts=data_text)
# Format
#df_dominant_topic = df_topic_sents_keywords.reset_index()
#df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
# Show
#df_dominant_topic.head(10)
# -
import pickle
# +
# with open('lda_model.pkl', 'wb') as fp:
# pickle.dump(lda_model, fp)
# -
with open('lda_model_10.pkl', 'rb') as fp:
lda_model_10 = pickle.load(fp)
with open('lda_model_20_21.pkl', 'rb') as fp:
lda_model = pickle.load(fp)
print(lda_model.print_topics())
def getFeatureVector(review,lda_model):
topics = lda_model.get_document_topics(review)
# materialise the weights; a lazy map object does not build a clean DataFrame row
return [weight for _, weight in topics]
user_review_vector = list(map(lambda x:getFeatureVector(x,lda_model),corpus_user_reviews_new))
user_review_vector_df = pd.DataFrame(user_review_vector,columns=["topic_"+str(x) for x in range(20) ])
user_review_vector_df.head(10)
import pandas as pd
data = pd.read_csv('D:\\Data Mining\\yelp_dataset\\PA\\Restaurants-new\\Restaurants\\train\\PA_train_yelp_academic_dataset_review.csv', error_bad_lines=False);
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas
data['new_text']=data['text']+" "+(data['stars']).astype(str)+"stars"
data = data.drop(['funny','stars','date','useful','cool','text'],axis=1)
user_reviews = data.groupby(["user_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
business_reviews = data.groupby(["business_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
corpus_user_reviews_new,id2word_user = preprocess_text(user_reviews[["new_text"]].values.tolist())
corpus_business_reviews_new,id2word_business = preprocess_text(business_reviews[["new_text"]].values.tolist())
corpus_new,id2word_new = preprocess_text(data[["new_text"]].values.tolist())
# %store corpus_new
# %store id2word_new
lda_model_20_user = getModel(corpus_user_reviews_new,id2word_user)
lda_model_20_business = getModel(corpus_business_reviews_new,id2word_business)
lda_model_20_business
import pickle
pickle.dump(lda_model_20_user, open('lda_model_20_user.pkl', 'wb'))
pickle.dump(lda_model_20_business, open('lda_model_20_business.pkl', 'wb'))
def get_topics_weight(data,lda_model,corpus):
review_vector = list(map(lambda x:getFeatureVector(x,lda_model),corpus))
review_vector_df = pd.DataFrame(review_vector,columns=["topic_"+str(x) for x in range(20) ])
reviews_topic= pd.concat([data,review_vector_df],axis=1).drop(['new_text'],axis=1)
return reviews_topic
user_reviews_topic= get_topics_weight(user_reviews,lda_model_20_user,corpus_user_reviews_new)
user_reviews_topic.to_csv('user_reviews_topic.csv',index=False)
business_reviews_topic = get_topics_weight(business_reviews,lda_model_20_business,corpus_business_reviews_new)
business_reviews_topic.to_csv('business_reviews_topic.csv',index=False)
lda_model_20_21 = getModel(corpus_new,id2word_new)
pickle.dump(lda_model_20_21, open('lda_model_20_21.pkl', 'wb'))
reviews_topic = get_topics_weight(data,lda_model_20_21,corpus_new)
reviews_topic.to_csv('reviews_topic_new.csv',index=False)
# +
corpus_new,id2word_new = preprocess_text(data[["new_text"]].values.tolist())
with open('lda_model_20_21.pkl', 'rb') as fp:
lda_model_20_21 = pickle.load(fp)
data_vis = pyLDAvis.gensim.prepare(lda_model_20_21, corpus_new, id2word_new)
# -
pyLDAvis.enable_notebook()
data_vis
# %store -r corpus_new
import pandas as pd
valid_data = pd.read_csv('D:\\Data Mining\\yelp_dataset\\PA\\Restaurants-new\\Restaurants\\valid\\PA_valid_yelp_academic_dataset_review.csv', error_bad_lines=False);
valid_data['new_text']=valid_data['text']+" "+(valid_data['stars']).astype(str)+"stars"
valid_data = valid_data.drop(['funny','stars','date','useful','cool','text'],axis=1)
corpus_new,_= preprocess_text(valid_data[["new_text"]].values.tolist())
valid_data_df = get_topics_weight(valid_data,lda_model_20_21,corpus_new)
valid_data_df.to_csv('valid/reviews_topic_valid.csv',index=False)
test_data = pd.read_csv('D:\\Data Mining\\yelp_dataset\\PA\\Restaurants-new\\Restaurants\\test\\PA_test_yelp_academic_dataset_review.csv', error_bad_lines=False);
test_data['new_text']= test_data['text']+" "+(test_data['stars']).astype(str)+"stars"
test_data = test_data.drop(['funny','stars','date','useful','cool','text'],axis=1)
corpus_new,_= preprocess_text(test_data[["new_text"]].values.tolist())
test_data_df = get_topics_weight(test_data,lda_model_20_21,corpus_new)
test_data_df.to_csv('test/reviews_topic_test.csv',index=False)
with open('lda_model_20_21.pkl', 'rb') as fp:
lda_model_20_21 = pickle.load(fp)
data_valid = pd.read_csv('D:\\Data Mining\\yelp_dataset\\PA\\Restaurants-new\\Restaurants\\valid\\PA_valid_yelp_academic_dataset_review.csv', error_bad_lines=False);
data_valid['new_text']=data_valid['text']+" "+(data_valid['stars']).astype(str)+"stars"
data_valid = data_valid.drop(['funny','stars','date','useful','cool','text'],axis=1)
user_reviews_valid = data_valid.groupby(["user_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
business_reviews_valid = data_valid.groupby(["business_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
corpus_valid,_= preprocess_text(business_reviews_valid[["new_text"]].values.tolist())
data_test = pd.read_csv('D:\\Data Mining\\yelp_dataset\\PA\\Restaurants-new\\Restaurants\\test\\PA_test_yelp_academic_dataset_review.csv', error_bad_lines=False);
data_test['new_text']=data_test['text'].astype(str)+" "+(data_test['stars']).astype(str)+"stars"
data_test = data_test.drop(['funny','stars','date','useful','cool','text'],axis=1)
user_reviews_test = data_test.groupby(["user_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
business_reviews_test = data_test.groupby(["business_id"])["new_text"].apply(lambda x:' '.join(x)).reset_index()
corpus_test,_= preprocess_text(business_reviews_test["new_text"].tolist())
# +
business_reviews_topic = get_topics_weight(business_reviews_valid,lda_model_20_21,corpus_valid)
business_reviews_topic.to_csv('valid/business_reviews_topic_valid.csv',index=False)
# -
business_reviews_topic = get_topics_weight(business_reviews_test,lda_model_20_21,corpus_test)
business_reviews_topic.to_csv('test/business_reviews_topic_test.csv',index=False)
| LDA modelling/lda_modeling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mathematical functions
import numpy as np
np.__version__
# ## Trigonometric functions
# #### Calculate sine, cosine, and tangent of x, element-wise.
x = np.array([0., 1., 30, 90])  # inputs are interpreted as radians
print ("sine:", np.sin(x))
print ("cosine:", np.cos(x))
print ("tangent:", np.tan(x))
# #### Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print ("inverse sine:", np.arcsin(x))
print ("inverse cosine:", np.arccos(x))
print ("inverse tangent:", np.arctan(x))
# #### Convert angles from radians to degrees.
# +
x = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])
out1 = np.degrees(x)
out2 = np.rad2deg(x)
assert np.array_equiv(out1, out2)
print (out1)
# -
# #### Convert angles from degrees to radians.
# +
x = np.array([-180., -90., 90., 180.])
out1 = np.radians(x)
out2 = np.deg2rad(x)
assert np.array_equiv(out1, out2)
print (out1)
# -
# ## Hyperbolic functions
# #### Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
x = np.array([-1., 0, 1.])
print (np.sinh(x))
print (np.cosh(x))
print (np.tanh(x))
# ## Rounding
# #### Predict the results of these, paying attention to the difference among the family functions.
# +
x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])
out1 = np.around(x)
out2 = np.floor(x)
out3 = np.ceil(x)
out4 = np.trunc(x)
out5 = [round(elem) for elem in x]
print (out1)
print (out2)
print (out3)
print (out4)
print (out5)
# -
# #### Implement out5 in the above question using numpy.
print (np.floor(np.abs(x) + 0.5) * np.sign(x))
# ## Sums, products, differences
# #### Predict the results of these.
# +
x = np.array(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
outs = [np.sum(x),
np.sum(x, axis=0),
np.sum(x, axis=1, keepdims=True),
"",
np.prod(x),
np.prod(x, axis=0),
np.prod(x, axis=1, keepdims=True),
"",
np.cumsum(x),
np.cumsum(x, axis=0),
np.cumsum(x, axis=1),
"",
np.cumprod(x),
np.cumprod(x, axis=0),
np.cumprod(x, axis=1),
"",
np.min(x),
np.min(x, axis=0),
np.min(x, axis=1, keepdims=True),
"",
np.max(x),
np.max(x, axis=0),
np.max(x, axis=1, keepdims=True),
"",
np.mean(x),
np.mean(x, axis=0),
np.mean(x, axis=1, keepdims=True)]
for out in outs:
# comparing an ndarray to "" is fragile; check the type instead
if isinstance(out, str):
print()
else:
print("->", out)
# -
# #### Calculate the difference between neighboring elements, element-wise.
x = np.array([1, 2, 4, 7, 0])
print (np.diff(x))
# #### Calculate the difference between neighboring elements, element-wise, and
# prepend [0, 0] and append[100] to it.
# +
x = np.array([1, 2, 4, 7, 0])
out1 = np.ediff1d(x, to_begin=[0, 0], to_end=[100])
out2 = np.insert(np.append(np.diff(x), 100), 0, [0, 0])
assert np.array_equiv(out1, out2)
print (out2)
# -
# #### Return the cross product of x and y.
# +
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print (np.cross(x, y))
# -
# ## Exponents and logarithms
# #### Compute $e^x$, element-wise.
# +
x = np.array([1., 2., 3.], np.float32)
out = np.exp(x)
print (out)
# -
# #### Calculate exp(x) - 1 for all elements in x.
# +
x = np.array([1., 2., 3.], np.float32)
out1 = np.expm1(x)
out2 = np.exp(x) - 1.
assert np.allclose(out1, out2)
print (out1)
# -
# #### Calculate $2^p$ for all p in x.
# +
x = np.array([1., 2., 3.], np.float32)
out1 = np.exp2(x)
out2 = 2 ** x
assert np.allclose(out1, out2)
print (out1)
# -
# #### Compute natural, base 10, and base 2 logarithms of x element-wise.
# +
x = np.array([1, np.e, np.e**2])
print ("natural log =", np.log(x))
print ("common log =", np.log10(x))
print ("base 2 log =", np.log2(x))
# -
# #### Compute the natural logarithm of one plus each element in x in floating-point accuracy.
# +
x = np.array([1e-99, 1e-100])
print (np.log1p(x))
# Compare it with np.log(1 + x), which loses all precision here
# -
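To see why np.log1p matters, compare it with the naive np.log(1 + x) at the same inputs: 1 + 1e-99 rounds to exactly 1.0 in float64, so the naive form returns 0.

```python
import numpy as np

x = np.array([1e-99, 1e-100])
print(np.log1p(x))    # accurate: [1.e-099 1.e-100]
print(np.log(1 + x))  # [0. 0.] -- 1 + x rounds to exactly 1.0 in float64
```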
# ## Floating point routines
# #### Return element-wise True where signbit is set.
# +
x = np.array([-3, -2, -1, 0, 1, 2, 3])
out1 = np.signbit(x)
out2 = x < 0
assert np.array_equiv(out1, out2)
print (out1)
# -
# #### Change the sign of x to that of y, element-wise.
# +
x = np.array([-1, 0, 1])
y = -1.1
print (np.copysign(x, y))
# -
# ## Arithmetic operations
# #### Add x and y element-wise.
# +
x = np.array([1, 2, 3])
y = np.array([-1, -2, -3])
out1 = np.add(x, y)
out2 = x + y
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Subtract y from x element-wise.
# +
x = np.array([3, 4, 5])
y = np.array(3)
out1 = np.subtract(x, y)
out2 = x - y
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Multiply x by y element-wise.
# +
x = np.array([3, 4, 5])
y = np.array([1, 0, -1])
out1 = np.multiply(x, y)
out2 = x * y
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Divide x by y element-wise in two different ways.
# +
x = np.array([3., 4., 5.])
y = np.array([1., 2., 3.])
out1 = np.true_divide(x, y)
out2 = x / y
assert np.array_equal(out1, out2)
print (out1)
out3 = np.floor_divide(x, y)
out4 = x // y
assert np.array_equal(out3, out4)
print (out3)
# Note that in Python 2 and 3, the handling of `divide` differs.
# See https://docs.scipy.org/doc/numpy/reference/generated/numpy.divide.html#numpy.divide
# -
# #### Compute numerical negative value of x, element-wise.
# +
x = np.array([1, -1])
out1 = np.negative(x)
out2 = -x
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Compute the reciprocal of x, element-wise.
# +
x = np.array([1., 2., .2])
out1 = np.reciprocal(x)
out2 = 1/x
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Compute $x^y$, element-wise.
# +
x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 2], [1, 2]])
out = np.power(x, y)
print (out)
# -
# #### Compute the remainder of x / y element-wise in two different ways.
# +
x = np.array([-3, -2, -1, 1, 2, 3])
y = 2
out1 = np.mod(x, y)
out2 = x % y
assert np.array_equal(out1, out2)
print (out1)
out3 = np.fmod(x, y)
print (out3)
# -
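The two remainder functions differ for negative operands: np.mod takes the sign of the divisor (like Python's %), while np.fmod takes the sign of the dividend (like C's fmod). A minimal comparison:

```python
import numpy as np

x = np.array([-3, -2, -1, 1, 2, 3])
print(np.mod(x, 2))   # [1 0 1 1 0 1]    -- result has the divisor's sign
print(np.fmod(x, 2))  # [-1  0 -1  1  0  1] -- result has the dividend's sign
```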
# ## Miscellaneous
# #### If an element of x is smaller than 3, replace it with 3.
# And if an element of x is bigger than 7, replace it with 7.
# +
x = np.arange(10)
out1 = np.clip(x, 3, 7)
out2 = np.copy(x)
out2[out2 < 3] = 3
out2[out2 > 7] = 7
assert np.array_equiv(out1, out2)
print (out1)
# -
# #### Compute the square of x, element-wise.
# +
x = np.array([1, 2, -1])
out1 = np.square(x)
out2 = x * x
assert np.array_equal(out1, out2)
print (out1)
# -
# #### Compute square root of x element-wise.
#
# +
x = np.array([1., 4., 9.])
out = np.sqrt(x)
print (out)
# -
# #### Compute the absolute value of x.
# +
x = np.array([[1, -1], [3, -3]])
out = np.abs(x)
print (out)
# -
# #### Compute an element-wise indication of the sign of x.
# +
x = np.array([1, 3, 0, -1, -3])
out1 = np.sign(x)
out2 = np.copy(x)
out2[out2 > 0] = 1
out2[out2 < 0] = -1
assert np.array_equal(out1, out2)
print (out1)
# -
| Mathematical functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# tgb - 4/20/2020 - Adapting Ankitesh's climate-invariant training notebook for hyperparameter optimization by <NAME>.
import sys
sys.path.insert(1,"/home1/07064/tg863631/anaconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages") #work around for h5py
from cbrain.imports import *
from cbrain.cam_constants import *
from cbrain.utils import *
from cbrain.layers import *
from cbrain.data_generator import DataGenerator
import tensorflow as tf
from tensorflow import math as tfm
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
import tensorflow_probability as tfp
import xarray as xr
import numpy as np
from cbrain.model_diagnostics import ModelDiagnostics
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as imag
import scipy.integrate as sin
# import cartopy.crs as ccrs
import matplotlib.ticker as mticker
# from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import pickle
# from climate_invariant import *
from tensorflow.keras import layers
import datetime
from climate_invariant_utils import *
import yaml
# ## Global Variables
# Load coordinates (just pick any file from the climate model run)
coor = xr.open_dataset("/oasis/scratch/comet/ankitesh/temp_project/data/sp8fbp_minus4k.cam2.h1.0000-01-01-00000.nc",\
decode_times=False)
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
# +
TRAINDIR = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/CRHData/'
path = '/home/ankitesh/CBrain_project/CBRAIN-CAM/cbrain/'
path_hyam = 'hyam_hybm.pkl'
hf = open(path+path_hyam,'rb')
hyam,hybm = pickle.load(hf)
scale_dict = load_pickle('/home/ankitesh/CBrain_project/CBRAIN-CAM/nn_config/scale_dicts/009_Wm2_scaling.pkl')
scale_dict['PTTEND']=scale_dict['TPHYSTND']
scale_dict['PTEQ']=scale_dict['PHQ']
# -
inter_dim_size = 40 #required for interpolation layer
class DataGeneratorClimInv(DataGenerator):
def __init__(self, data_fn, input_vars, output_vars,
norm_fn=None, input_transform=None, output_transform=None,
batch_size=1024, shuffle=True, xarray=False, var_cut_off=None,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,
scaling=True,interpolate=True,
hyam=None,hybm=None,
inp_subRH=None,inp_divRH=None,
inp_subTNS=None,inp_divTNS=None,
lev=None, interm_size=40,
lower_lim=6,
is_continous=True,Tnot=5,
mode='train',portion=1):
self.scaling = scaling
self.interpolate = interpolate
self.rh_trans = rh_trans
self.t2tns_trans = t2tns_trans
self.lhflx_trans = lhflx_trans
self.inp_shape = 64
self.mode=mode
super().__init__(data_fn, input_vars,output_vars,norm_fn,input_transform,output_transform,
batch_size,shuffle,xarray,var_cut_off) ## call the base data generator
self.inp_sub = self.input_transform.sub
self.inp_div = self.input_transform.div
if rh_trans:
self.qv2rhLayer = QV2RHNumpy(self.inp_sub,self.inp_div,inp_subRH,inp_divRH,hyam,hybm)
if lhflx_trans:
self.lhflxLayer = LhflxTransNumpy(self.inp_sub,self.inp_div,hyam,hybm)
if t2tns_trans:
self.t2tnsLayer = T2TmTNSNumpy(self.inp_sub,self.inp_div,inp_subTNS,inp_divTNS,hyam,hybm)
if scaling:
self.scalingLayer = ScalingNumpy(hyam,hybm)
self.inp_shape += 1
if interpolate:
self.interpLayer = InterpolationNumpy(lev,is_continous,Tnot,lower_lim,interm_size)
self.inp_shape += interm_size*2 + 4 + 30 ## 4 same as 60-64 and 30 for lev_tilde.size
# tgb - 7/9/2020 - Test only training on a subset of the data determined by portion
self.portion = portion
    def __getitem__(self, index):
        # Validate portion first (the original checks were unreachable and never reset it),
        # then, if 0 < portion < 1, wrap the index so only the first round(1/portion) batches are served
        if self.portion > 1 or self.portion <= 0:
            print('Setting portion=1 because portion is outside (0, 1]')
            self.portion = 1
        if self.portion < 1:
            index = index % round(1 / self.portion)
# Compute start and end indices for batch
start_idx = index * self.batch_size
end_idx = start_idx + self.batch_size
# Grab batch from data
batch = self.data_ds['vars'][start_idx:end_idx]
# Split into inputs and outputs
X = batch[:, self.input_idxs]
Y = batch[:, self.output_idxs]
# Normalize
X_norm = self.input_transform.transform(X)
Y = self.output_transform.transform(Y)
X_result = X_norm
if self.rh_trans:
X_result = self.qv2rhLayer.process(X_result)
if self.lhflx_trans:
X_result = self.lhflxLayer.process(X_result)
X_result = X_result[:,:64]
X = X[:,:64]
if self.t2tns_trans:
X_result = self.t2tnsLayer.process(X_result)
if self.scaling:
scalings = self.scalingLayer.process(X)
X_result = np.hstack((X_result,scalings))
if self.interpolate:
interpolated = self.interpLayer.process(X,X_result)
X_result = np.hstack((X_result,interpolated))
if self.mode=='val':
return xr.DataArray(X_result), xr.DataArray(Y)
return X_result,Y
    ## Transforms the input data into the required format; takes the unnormalized dataset
def transform(self,X):
X_norm = self.input_transform.transform(X)
X_result = X_norm
if self.rh_trans:
X_result = self.qv2rhLayer.process(X_result)
if self.lhflx_trans:
X_result = self.lhflxLayer.process(X_result)
X_result = X_result[:,:64]
X = X[:,:64]
if self.t2tns_trans:
X_result = self.t2tnsLayer.process(X_result)
if self.scaling:
scalings = self.scalingLayer.process(X)
X_result = np.hstack((X_result,scalings))
if self.interpolate:
interpolated = self.interpLayer.process(X,X_result)
X_result = np.hstack((X_result,interpolated))
return X_result
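The portion mechanism in the generator restricts training to a subset of batches by wrapping the requested batch index. A self-contained sketch of just that index arithmetic, decoupled from the data generator (the function name `wrap_index` is illustrative, not part of the codebase):

```python
# Sketch of the batch-index wrapping used when portion < 1: every requested
# index is mapped back into the first round(1/portion) batches, so only that
# fraction of the dataset is ever served.
def wrap_index(index, portion):
    if 0 < portion < 1:
        return index % round(1 / portion)
    return index  # portion == 1 (or invalid) -> use the full dataset

# With portion=0.25 only batches 0..3 are ever visited:
print([wrap_index(i, 0.25) for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
print([wrap_index(i, 1.0) for i in range(4)])   # [0, 1, 2, 3]
```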
# ## Data Generators
geography = True # True for real-geography dataset, false otherwise
# ### Choose between aquaplanet and realistic geography here
# +
if geography: path = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/geography/'
else: path = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/'
# if geography: TRAINDIR = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/CRHData/'
# else: TRAINDIR = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/geography/'
# -
# ### Data Generator using RH
# +
scale_dict_RH = load_pickle('/home/ankitesh/CBrain_project/CBRAIN-CAM/nn_config/scale_dicts/009_Wm2_scaling.pkl')
scale_dict_RH['RH'] = 0.01*L_S/G # Arbitrary 0.01 factor as specific humidity is generally below 2% (trailing comma removed: it made this entry a tuple)
scale_dict_RH['PTTEND']=scale_dict_RH['TPHYSTND']
scale_dict_RH['PTEQ']=scale_dict_RH['PHQ']
in_vars_RH = ['RH','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
if geography: out_vars_RH = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
else: out_vars_RH = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
TRAINFILE_RH = 'CI_RH_M4K_NORM_train_shuffle.nc'
NORMFILE_RH = 'CI_RH_M4K_NORM_norm.nc'
VALIDFILE_RH = 'CI_RH_M4K_NORM_valid.nc'
# -
train_gen_RH = DataGenerator(
data_fn = path+TRAINFILE_RH,
input_vars = in_vars_RH,
output_vars = out_vars_RH,
norm_fn = path+NORMFILE_RH,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict_RH,
batch_size=1024,
shuffle=True
)
# #### For positive separation (required since we are going to use scaling)
# +
TRAINFILE_RH = 'PosCRH_CI_RH_M4K_NORM_train_shuffle.nc'
NORMFILE_RH = 'PosCRH_CI_RH_M4K_NORM_norm.nc'
train_gen_RH_pos = DataGenerator(
data_fn = TRAINDIR+TRAINFILE_RH,
input_vars = in_vars_RH,
output_vars = out_vars_RH,
norm_fn = TRAINDIR+NORMFILE_RH,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict_RH,
batch_size=1024,
shuffle=True
)
# -
# #### For negative separation
# +
TRAINFILE_RH = 'NegCRH_CI_RH_M4K_NORM_train_shuffle.nc'
NORMFILE_RH = 'NegCRH_CI_RH_M4K_NORM_norm.nc'
train_gen_RH_neg = DataGenerator(
data_fn = TRAINDIR+TRAINFILE_RH,
input_vars = in_vars_RH,
output_vars = out_vars_RH,
norm_fn = TRAINDIR+NORMFILE_RH,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict_RH,
batch_size=1024,
shuffle=True
)
# -
# ### Data Generator using TNS
# +
in_vars = ['QBP','TfromNS','PS', 'SOLIN', 'SHFLX', 'LHFLX']
out_vars = out_vars_RH
# if geography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
# else: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
TRAINFILE_TNS = 'CI_TNS_M4K_NORM_train_shuffle.nc'
NORMFILE_TNS = 'CI_TNS_M4K_NORM_norm.nc'
VALIDFILE_TNS = 'CI_TNS_M4K_NORM_valid.nc'
# -
train_gen_TNS = DataGenerator(
data_fn = path+TRAINFILE_TNS,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_TNS,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True
)
# #### For positive
# +
TRAINFILE_TNS = 'PosCRH_CI_TNS_M4K_NORM_train_shuffle.nc'
NORMFILE_TNS = 'PosCRH_CI_TNS_M4K_NORM_norm.nc'
train_gen_TNS_pos = DataGenerator(
data_fn = TRAINDIR+TRAINFILE_TNS,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE_TNS,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True
)
# -
# #### For negative
# +
TRAINFILE_TNS = 'NegCRH_CI_TNS_M4K_NORM_train_shuffle.nc'
NORMFILE_TNS = 'NegCRH_CI_TNS_M4K_NORM_norm.nc'
train_gen_TNS_neg = DataGenerator(
data_fn = TRAINDIR+TRAINFILE_TNS,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE_TNS,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True
)
# -
# ### Data Generator Combined
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
#out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
# +
## this generator won't be used for training; it just shows the full pipeline works end-to-end
TRAINFILE = 'CI_SP_M4K_train_shuffle.nc'
NORMFILE = 'CI_SP_M4K_NORM_norm.nc'
VALIDFILE = 'CI_SP_M4K_valid.nc'
train_gen = DataGeneratorClimInv(
data_fn = path+TRAINFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False)
valid_gen = DataGeneratorClimInv(
data_fn = path+VALIDFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False
)
# -
train_gen[0][0].shape
# #### For positive
# +
TRAINFILE = 'PosCRH_CI_SP_M4K_train_shuffle.nc'
NORMFILE = 'PosCRH_CI_SP_M4K_NORM_norm.nc'
VALIDFILE = 'PosCRH_CI_SP_M4K_valid.nc'
train_gen_pos = DataGeneratorClimInv(
data_fn = TRAINDIR+TRAINFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_pos.input_transform.sub, inp_divRH=train_gen_RH_pos.input_transform.div,
inp_subTNS=train_gen_TNS_pos.input_transform.sub,inp_divTNS=train_gen_TNS_pos.input_transform.div,
is_continous=True
)
valid_gen_pos = DataGeneratorClimInv(
data_fn = TRAINDIR+VALIDFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_pos.input_transform.sub, inp_divRH=train_gen_RH_pos.input_transform.div,
inp_subTNS=train_gen_TNS_pos.input_transform.sub,inp_divTNS=train_gen_TNS_pos.input_transform.div,
is_continous=True
)
# -
train_gen_pos[0][0].shape
# #### For Negative
# +
TRAINFILE = 'NegCRH_CI_SP_M4K_train_shuffle.nc'
NORMFILE = 'NegCRH_CI_SP_M4K_NORM_norm.nc'
VALIDFILE = 'NegCRH_CI_SP_M4K_valid.nc'
### we don't scale this network
train_gen_neg = DataGeneratorClimInv(
data_fn = TRAINDIR+TRAINFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_neg.input_transform.sub, inp_divRH=train_gen_RH_neg.input_transform.div,
inp_subTNS=train_gen_TNS_neg.input_transform.sub,inp_divTNS=train_gen_TNS_neg.input_transform.div,
is_continous=True,
scaling=False
)
valid_gen_neg = DataGeneratorClimInv(
data_fn = TRAINDIR+VALIDFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_neg.input_transform.sub, inp_divRH=train_gen_RH_neg.input_transform.div,
inp_subTNS=train_gen_TNS_neg.input_transform.sub,inp_divTNS=train_gen_TNS_neg.input_transform.div,
is_continous=True,
scaling=False
)
# -
train_gen_neg[0][0].shape
# ## Building the Model
# ### For Positive
# +
inp = Input(shape=(179,)) ## input after rh and tns transformation
offset = 65
inp_TNS = inp[:,offset:offset+2*inter_dim_size+4]
offset = offset+2*inter_dim_size+4
lev_tilde_before = inp[:,offset:offset+30]
offset = offset+30
densout = Dense(128, activation='linear')(inp_TNS)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (6):
densout = Dense(128, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
denseout = Dense(2*inter_dim_size+4, activation='linear')(densout)
lev_original_out = reverseInterpLayer(inter_dim_size)([denseout,lev_tilde_before])
out = ScaleOp(OpType.PWA.value,
inp_subQ=train_gen_pos.input_transform.sub,
inp_divQ=train_gen_pos.input_transform.div,
)([inp,lev_original_out])
model_pos = tf.keras.models.Model(inp, out)
# -
model_pos.summary()
model_pos.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'CI_Pos_temp'
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 25
model_pos.fit_generator(train_gen_pos, epochs=Nep, validation_data=valid_gen_pos,\
callbacks=[earlyStopping, mcp_save])
# ### For Negative
# +
inp = Input(shape=(178,)) ## input after rh and tns transformation
offset = 64
inp_TNS = inp[:,offset:offset+2*inter_dim_size+4]
offset = offset+2*inter_dim_size+4
lev_tilde_before = inp[:,offset:offset+30]
offset = offset+30
densout = Dense(128, activation='linear')(inp_TNS)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (6):
densout = Dense(128, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
denseout = Dense(2*inter_dim_size+4, activation='linear')(densout)
lev_original_out = reverseInterpLayer(inter_dim_size)([denseout,lev_tilde_before])
model_neg = tf.keras.models.Model(inp, lev_original_out)
# -
model_neg.summary()
model_neg.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'CI_Neg_temp'
path_HDF5 = '/oasis/scratch/comet/ankitesh/temp_project/models/Comnined/'
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model_neg.fit_generator(train_gen_neg, epochs=Nep, validation_data=valid_gen_neg,\
callbacks=[earlyStopping, mcp_save])
# ## Example of how to combine positive and negative NNs to make predictions
# ### Test: Load `pb` models
pathBF = '/oasis/scratch/comet/tbeucler/temp_project/DavidW_models/BF_Aquaplanet/'
BF26 = tf.keras.models.load_model(pathBF+'26')
BF33 = tf.keras.models.load_model(pathBF+'33')
BF25 = tf.keras.models.load_model(pathBF+'25')
BF28 = tf.keras.models.load_model(pathBF+'28')
BF27 = tf.keras.models.load_model(pathBF+'27')
# ### Define how to load climate-invariant NN
class ClimateNet:
def __init__(self,dict_lay,data_fn,config_fn,
lev,hyam,hybm,TRAINDIR,
nlat, nlon, nlev, ntime,
inp_subRH,inp_divRH,
inp_subTNS,inp_divTNS,
rh_trans=False,t2tns_trans=False,
lhflx_trans=False,
scaling=False,interpolate=False,
model=None,
pos_model=None,neg_model=None,
                 # these generators can be None if no scaling is present
train_gen_RH_pos=None,train_gen_RH_neg=None,
train_gen_TNS_pos=None,train_gen_TNS_neg=None,
):
with open(config_fn, 'r') as f:
            config = yaml.safe_load(f)  # safe_load avoids the Loader warning/error in newer PyYAML
out_scale_dict = load_pickle(config['output_dict'])
ngeo = nlat * nlon
in_vars = config['inputs']
out_vars = config['outputs']
self.valid_gen = DataGeneratorClimInv(
data_fn = data_fn,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=config['data_dir'] + config['norm_fn'],
input_transform=(config['input_sub'], config['input_div']),
output_transform=out_scale_dict,
batch_size=ngeo,
shuffle=False,
xarray=True,
var_cut_off=config['var_cut_off'] if 'var_cut_off' in config.keys() else None,
rh_trans = rh_trans,t2tns_trans = t2tns_trans,
lhflx_trans = lhflx_trans,
scaling = scaling,
lev=lev,interpolate = interpolate,
hyam=hyam,hybm=hybm,
inp_subRH=inp_subRH, inp_divRH=inp_divRH,
inp_subTNS=inp_subTNS,inp_divTNS=inp_divTNS,
mode='val'
)
self.rh_trans = rh_trans
self.t2tns_trans = t2tns_trans
self.lhflx_trans = lhflx_trans
self.scaling = scaling
self.interpolate = interpolate
self.subQ,self.divQ = np.array(self.valid_gen.input_transform.sub),np.array(self.valid_gen.input_transform.div)
if model != None:
self.model = load_model(model,custom_objects=dict_lay)
if scaling:
self.pos_model = load_model(pos_model,custom_objects=dict_lay)
self.neg_model = load_model(neg_model,custom_objects=dict_lay)
#just for the norm values
self.pos_data_gen = DataGeneratorClimInv(
data_fn = TRAINDIR+'PosCRH_CI_SP_M4K_train_shuffle.nc',
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+'PosCRH_CI_SP_M4K_NORM_norm.nc',
input_transform = ('mean', 'maxrs'),
output_transform = out_scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_pos.input_transform.sub, inp_divRH=train_gen_RH_pos.input_transform.div,
inp_subTNS=train_gen_TNS_pos.input_transform.sub,inp_divTNS=train_gen_TNS_pos.input_transform.div,
is_continous=True,
scaling=True,
interpolate=interpolate,
rh_trans=rh_trans,
t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans
)
self.neg_data_gen = DataGeneratorClimInv(
data_fn = TRAINDIR+'NegCRH_CI_SP_M4K_train_shuffle.nc',
input_vars = in_vars,
output_vars = out_vars,
norm_fn = TRAINDIR+'NegCRH_CI_SP_M4K_NORM_norm.nc',
input_transform = ('mean', 'maxrs'),
output_transform = out_scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH_neg.input_transform.sub, inp_divRH=train_gen_RH_neg.input_transform.div,
inp_subTNS=train_gen_TNS_neg.input_transform.sub,inp_divTNS=train_gen_TNS_neg.input_transform.div,
is_continous=True,
interpolate=interpolate,
scaling=False,
rh_trans=rh_trans,
t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans
)
def reorder(self,op_pos,op_neg,mask):
op = []
pos_i=0
neg_i = 0
for m in mask:
if m:
op.append(op_pos[pos_i])
pos_i += 1
else:
op.append(op_neg[neg_i])
neg_i += 1
return np.array(op)
def predict_on_batch(self,inp):
#inp = batch x 179
inp_de = inp*self.divQ+self.subQ
if not self.scaling:
inp_pred = self.valid_gen.transform(inp_de)
return self.model.predict_on_batch(inp_pred)
mask = ScalingNumpy(hyam,hybm).crh(inp_de)> 0.8
pos_inp = inp[mask]
neg_inp = inp[np.logical_not(mask)]
### for positive
pos_inp = pos_inp*self.divQ + self.subQ
pos_inp = self.pos_data_gen.transform(pos_inp)
op_pos = self.pos_model.predict_on_batch(pos_inp)
neg_inp = neg_inp*self.divQ + self.subQ
neg_inp = self.neg_data_gen.transform(neg_inp)
op_neg = self.neg_model.predict_on_batch(neg_inp)
op = self.reorder(np.array(op_pos),np.array(op_neg),mask)
return op
    ## Only valid for networks where scaling is present
    def predict_on_batch_seperate(self,inp):
        if not self.scaling:
            raise ValueError("Scaling is not present in this model")
inp_de = inp*self.divQ + self.subQ
mask = ScalingNumpy(hyam,hybm).crh(inp_de)> 0.8
pos_inp = inp[mask]
neg_inp = inp[np.logical_not(mask)]
pos_inp = pos_inp*self.divQ + self.subQ
pos_inp = self.pos_data_gen.transform(pos_inp)
neg_inp = neg_inp*self.divQ + self.subQ
neg_inp = self.neg_data_gen.transform(neg_inp)
op_pos = self.pos_model.predict_on_batch(pos_inp)
op_neg = self.neg_model.predict_on_batch(neg_inp)
return mask,op_pos,op_neg
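The reorder method above stitches the positive- and negative-model predictions back into the original batch order using the boolean mask. A self-contained NumPy sketch of an equivalent, vectorized recombination via boolean indexing (standalone illustration, not the class method itself):

```python
import numpy as np

def reorder(op_pos, op_neg, mask):
    # Rows where mask is True come from op_pos, the rest from op_neg,
    # restoring the original batch order in one pass.
    out = np.empty((mask.size,) + op_pos.shape[1:], dtype=op_pos.dtype)
    out[mask] = op_pos
    out[~mask] = op_neg
    return out

mask = np.array([True, False, True, False])
op_pos = np.array([[1., 1.], [3., 3.]])  # predictions for masked rows
op_neg = np.array([[2., 2.], [4., 4.]])  # predictions for unmasked rows
# Rows come back in original order: [1,1], [2,2], [3,3], [4,4]
print(reorder(op_pos, op_neg, mask))
```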
# +
# def load_climate_model(dict_lay,config_fn,data_fn,lev,hyam,hybm,TRAINDIR,
# inp_subRH,inp_divRH,
# inp_subTNS,inp_divTNS,
# nlat=64, nlon=128, nlev=30, ntime=48,
# rh_trans=False,t2tns_trans=False,
# lhflx_trans=False,
# scaling=False,interpolate=False,
# model=None,
# pos_model=None,neg_model=None):
# obj = ClimateNet(dict_lay,data_fn,config_fn,
# lev,hyam,hybm,TRAINDIR,
# nlat, nlon, nlev, ntime,
# inp_subRH,inp_divRH,
# inp_subTNS,inp_divTNS,
# rh_trans=rh_trans,t2tns_trans=t2tns_trans,
# lhflx_trans=lhflx_trans, scaling=scaling,
# interpolate=interpolate,
# model = model,
# pos_model=pos_model,neg_model=neg_model)
# return obj
# -
# tgb - 7/7/2020 - Adapting from [https://github.com/ankitesh97/CBRAIN-CAM/blob/climate_invariant_pull_request/cbrain/climate_invariant.py] instead
def load_climate_model(dict_lay,config_fn,data_fn,lev,hyam,hybm,TRAINDIR,
inp_subRH,inp_divRH,
inp_subTNS,inp_divTNS,
nlat=64, nlon=128, nlev=30, ntime=48,
rh_trans=False,t2tns_trans=False,
lhflx_trans=False,
scaling=False,interpolate=False,
model=None,
pos_model=None,neg_model=None,
train_gen_RH_pos=None,train_gen_RH_neg=None,
train_gen_TNS_pos=None,train_gen_TNS_neg=None):
obj = ClimateNet(dict_lay,data_fn,config_fn,
lev,hyam,hybm,TRAINDIR,
nlat, nlon, nlev, ntime,
inp_subRH,inp_divRH,
inp_subTNS,inp_divTNS,
rh_trans=rh_trans,t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans, scaling=scaling,
interpolate=interpolate,
model = model,
pos_model=pos_model,neg_model=neg_model,
train_gen_RH_pos=train_gen_RH_pos,train_gen_RH_neg=train_gen_RH_neg,
train_gen_TNS_pos=train_gen_TNS_pos,train_gen_TNS_neg=train_gen_TNS_neg)
return obj
# ### Models' paths
# +
if geography: config_file = 'CI_SP_M4K_Geo_CONFIG.yml' # Configuration file
else: config_file = 'CI_SP_M4K_CONFIG.yml'
if geography: data_file = ['geography/CI_SP_M4K_valid.nc','geography/CI_SP_P4K_valid.nc'] # Validation/test data sets
else: data_file = ['CI_SP_M4K_valid.nc','CI_SP_P4K_valid.nc']
# NNarray = ['RH_TNS_LH_ScalingPos_Interp_Geography.hdf5*RH_TNS_LH_ScalingNeg_Interp_Geography.hdf5',\
# 'RH_TNS_LH_ScalePos_Interp.hdf5*RH_TNS_LH_ScaleNeg_Interp.hdf5'] # NN to evaluate
#NNarray = ['RH_TNS_LH_ScalePos_Interp.hdf5*RH_TNS_LH_ScaleNeg_Interp.hdf5']
#NNarray = ['RH_TNS_LHQsatScalePos.hdf5*RH_TNS_LHQsatScaleNeg.hdf5']
# NNarray = ['BF_Geography.hdf5','RH_Geography.hdf5','RH_TNS_Geography.hdf5','RH_TNS_LHQsat_Geography.hdf5',
# 'RH_TNS_LH_ScalingPos_Geography.hdf5*RH_TNS_LH_ScalingNeg_Geography.hdf5',
# 'RH_TNS_LH_ScalingPos_Interp_Geography.hdf5*RH_TNS_LH_ScalingNeg_Interp_Geography.hdf5']
# if geography: NNarray = ['BF_Geography.hdf5','RH_TNS_LHQsat_Geography.hdf5',
# '../../../tbeucler/temp_project/CBRAIN_models/BF_Geog_2020_07_22.hdf5',
# '../../../tbeucler/temp_project/CBRAIN_models/RH_TNS_LHSAT_geog_2020_07_22.hdf5']
# else: NNarray = ['BF.hdf5','RH_TNS_LH.hdf5',
# '../../../tbeucler/temp_project/CBRAIN_models/BF_Aqua_2020_07_22.hdf5',
# '../../../tbeucler/temp_project/CBRAIN_models/RH_TNS_LHSAT_aqua_2020_07_22.hdf5']
# tgb - 7/24/2020 - Transfer learning test
data_file = ['geography/CI_SP_P4K_valid.nc']
#data_file = ['CI_SP_M4K_valid.nc']
NNarray = ['BF_Aqua_2020_07_22.hdf5','TL_BF_2020_07_23_porindex_0.hdf5',
'TL_BF_2020_07_23_porindex_1.hdf5','TL_BF_2020_07_23_porindex_2.hdf5',
'TL_BF_2020_07_23_porindex_3.hdf5','TL_BF_2020_07_23_porindex_4.hdf5',
'TL_BF_2020_07_23_porindex_5.hdf5',
'RH_TNS_LHSAT_aqua_2020_07_22.hdf5','TL_CI_2020_07_23_porindex_0.hdf5',
'TL_CI_2020_07_23_porindex_1.hdf5','TL_CI_2020_07_23_porindex_2.hdf5',
'TL_CI_2020_07_23_porindex_3.hdf5','TL_CI_2020_07_23_porindex_4.hdf5',
'TL_CI_2020_07_23_porindex_5.hdf5']
for i,NNs in enumerate(NNarray):
NNarray[i] = '../../../tbeucler/temp_project/CBRAIN_models/'+NNs
#NNname = ['NN_Comb_geo','NN_Comb_aqua'] # Name of NNs for plotting
#NNarray = ['BF.hdf5','pb'+pathBF+'26','pb'+pathBF+'33','pb'+pathBF+'25','pb'+pathBF+'28','pb'+pathBF+'27']
dict_lay = {'SurRadLayer':SurRadLayer,'MassConsLayer':MassConsLayer,'EntConsLayer':EntConsLayer,
'QV2RH':QV2RH,'T2TmTNS':T2TmTNS,'eliq':eliq,'eice':eice,'esat':esat,'qv':qv,'RH':RH,
'reverseInterpLayer':reverseInterpLayer,'ScaleOp':ScaleOp}
path_HDF5 = '/oasis/scratch/comet/ankitesh/temp_project/models/'
# -
# Indices of different variables
PHQ_idx = slice(0,30)
TPHYSTND_idx = slice(30,60)
# ### Build models' diagnostics object
# +
#define default values
NN = {}; md = {};
# #%cd $TRAINDIR/HDF5_DATA
for i,NNs in enumerate(NNarray):
print('NN name is ',NNs)
path = path_HDF5+NNs
rh_trans=False
t2tns_trans=False
lhflx_trans=False
scaling=False
interpolate=False
model = path
pos_model=None
neg_model=None
if 'RH' in NNs:
rh_trans=True
if 'TNS' in NNs:
t2tns_trans=True
if 'LH' in NNs:
lhflx_trans=True
if 'CI' in NNs:
rh_trans = True
t2tns_trans = True
lhflx_trans = True
if 'Scal' in NNs:
pos,neg = NNs.split('*')
pos_model = path_HDF5+pos
neg_model = path_HDF5+neg
model = None
scaling=True
if 'Interp' in NNs or 'Vert' in NNs:
interpolate=True
md[NNs] = {}
for j,data in enumerate(data_file):
print('data name is ',data)
if 'pb' in NNs:
NN[NNs] = tf.keras.models.load_model(NNs[2:])
else:
NN[NNs] = load_climate_model(dict_lay,'/home/ankitesh/CBrain_project/PrepData/'+config_file,
'/oasis/scratch/comet/ankitesh/temp_project/PrepData/'+data,
lev=lev,hyam=hyam,hybm=hybm,TRAINDIR=TRAINDIR,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=rh_trans,t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans,scaling=scaling,interpolate=interpolate,
model=model,pos_model=pos_model,neg_model=neg_model,
train_gen_RH_pos=train_gen_RH_pos,train_gen_RH_neg=train_gen_RH_neg,
train_gen_TNS_pos=train_gen_TNS_pos,train_gen_TNS_neg=train_gen_TNS_neg )
md[NNs][data[6:-3]] = ModelDiagnostics(NN[NNs],
'/home/ankitesh/CBrain_project/PrepData/'+config_file,
'/oasis/scratch/comet/ankitesh/temp_project/PrepData/'+data)
# -
# ### Global Mean-Squared error
Nt = 10
t_random = np.random.choice(np.linspace(0,md[NNs][data[6:-3]].valid_gen.n_batches-1,
md[NNs][data[6:-3]].valid_gen.n_batches),
size=((Nt,)),replace=False).astype('int')
# +
MSE = {}
VAR = {}
diagno = {}
diagno['truth'] = {}
diagno['pred'] = {}
for iar,itime in enumerate(t_random):
print('iar=',iar,'/',Nt-1,' & itime',itime)
for i,NNs in enumerate(NNarray):
if iar==0: MSE[NNs] = {}; VAR[NNs] = {}
for j,data in enumerate(data_file):
#print('j=',j,'data=',data)
inp, p, truth = md[NNs][data[6:-3]].get_inp_pred_truth(itime) # [lat, lon, var, lev]
t_geo = md[NNs][data[6:-3]].reshape_ngeo(truth)[:,:,:]
if tf.is_tensor(p): p_geo = md[NNs][data[6:-3]].reshape_ngeo(p.numpy())[:,:,:]
else: p_geo = md[NNs][data[6:-3]].reshape_ngeo(p)[:,:,:]
if iar==0:
MSE[NNs][data[6:-3]] = np.mean((t_geo-p_geo)**2,axis=(1,2))
VAR[NNs][data[6:-3]] = np.var(p_geo,axis=(1,2))
else:
MSE[NNs][data[6:-3]] = np.concatenate((MSE[NNs][data[6:-3]],
np.mean((t_geo-p_geo)**2,axis=(1,2))),axis=0)
VAR[NNs][data[6:-3]] = np.concatenate((VAR[NNs][data[6:-3]],
np.var(p_geo,axis=(1,2))),axis=0)
# -
# tgb - 7/24/2020 - Check transfer learning in simple situations
for i,NNs in enumerate(NNarray):
print(NNs)
#if iar==0: MSE[NNs] = {}; VAR[NNs] = {}
for j,data in enumerate(data_file):
print(data,np.sqrt(MSE[NNs][data[6:-3]]).mean(),'/',np.sqrt(MSE[NNs][data[6:-3]]).std())
#MSE[NNs][data[6:-3]].mean()
print('\n')
n_samples = np.minimum(1024*41376,1024*np.round(1/10**np.linspace(-5,-1,5)))
n_samples = np.concatenate(([0],n_samples,[1024*41376]))
n_samples
NNarray
fz = 15
lw = 4
siz = 100
plt.rc('text', usetex=False)
mpl.rcParams['mathtext.fontset'] = 'stix'
mpl.rcParams['font.family'] = 'STIXGeneral'
#mpl.rcParams["font.serif"] = "STIX"
plt.rc('font', family='serif', size=fz)
mpl.rcParams['lines.linewidth'] = lw
plt.close('all')
n_samples
np.log10(n_samples)
# +
X = np.log10(n_samples)
X[0] = X[-2]-1
plt.figure(figsize=(15,5))
for i,NNs in enumerate(NNarray[:7]):
if i==0: plt.scatter(X[i],np.log10(MSE[NNs][data[6:-3]].mean()),color='k',s=siz,label='Brute-Force')
else: plt.scatter(X[i],np.log10(MSE[NNs][data[6:-3]].mean()),color='k',s=siz)
for i,NNs in enumerate(NNarray[7:]):
if i==0: plt.scatter(X[i],np.log10(MSE[NNs][data[6:-3]].mean()),color='b',s=siz,label='Climate-Invariant')
else: plt.scatter(X[i],np.log10(MSE[NNs][data[6:-3]].mean()),color='b',s=siz)
plt.legend(loc='upper right')
plt.ylabel('Mean-squared Error [W$^{2}$ m$^{-4}$]')
plt.xlabel('Number of (Real-geography) samples retrained on')
plt.title('Transfer learning from (Aquaplanet) to (Real-geography) evaluated on *warm real-geography* (never seen)')
plt.grid()
plt.draw()
ax = plt.gca()
labels = [item.get_text() for item in ax.get_yticklabels()]
for ilab,lab in enumerate(labels):
labels[ilab]='$10^{'+lab+'}$';
ax.set_yticklabels(labels);
labels = [item.get_text() for item in ax.get_xticklabels()]
for ilab,lab in enumerate(labels):
labels[ilab]='$10^{'+lab+'}$';
labels[1] = 'None'
ax.set_xticklabels(labels);
# -
# tgb - 7/9/2020 - Reporting to check that climate invariant NN works globally
for i,NNs in enumerate(NNarray):
print(NNs)
#if iar==0: MSE[NNs] = {}; VAR[NNs] = {}
for j,data in enumerate(data_file):
print(data,np.sqrt(MSE[NNs][data[6:-3]]).mean(),'/',np.sqrt(MSE[NNs][data[6:-3]]).std())
#MSE[NNs][data[6:-3]].mean()
print('\n')
#
print(NNs)
for j,data in enumerate(data_file):
print(data,MSE[NNs][data[6:-3]].mean())
# ### Mean squared error by latitude
Nt = 10
t_random = np.random.choice(np.linspace(0,1691,1692),size=((Nt,)),replace=False).astype('int')
# +
MSE = {}
VAR = {}
diagno = {}
diagno['truth'] = {}
diagno['pred'] = {}
for iar,itime in enumerate(t_random):
print('iar=',iar,'/',Nt-1,' & itime',itime,end="\r")
for i,NNs in enumerate(NNarray):
if iar==0: MSE[NNs] = {}; VAR[NNs] = {}
inp, p, truth = md[NNs][data[6:-3]].get_inp_pred_truth(itime) # [lat, lon, var, lev]
t_geo = md[NNs][data[6:-3]].reshape_ngeo(truth)[:,:,:]
p_geo = md[NNs][data[6:-3]].reshape_ngeo(p)[:,:,:]
if iar==0:
MSE[NNs][data[6:-3]] = np.mean((t_geo-p_geo)**2,axis=2)
VAR[NNs][data[6:-3]] = np.var(p_geo,axis=2)
else:
MSE[NNs][data[6:-3]] = np.concatenate((MSE[NNs][data[6:-3]],
np.mean((t_geo-p_geo)**2,axis=2)),axis=1)
VAR[NNs][data[6:-3]] = np.concatenate((VAR[NNs][data[6:-3]],
np.var(p_geo,axis=2)),axis=1)
# -
#
MSE[NNs]
data_file
data = data_file[1]
# +
iini = 1000
iend = 1010
MSE = {}
VAR = {}
diagno = {}
diagno['truth'] = {}
diagno['pred'] = {}
for itime in np.arange(iini,iend):
print('itime=',itime,' between ',iini,' & ',iend,' ',end='\r')
for i,NNs in enumerate(NNarray):
if itime==iini: MSE[NNs] = {}; VAR[NNs] = {}
inp, p, truth = md[NNs][data[6:-3]].get_inp_pred_truth(itime) # [lat, lon, var, lev]
t_geo = md[NNs][data[6:-3]].reshape_ngeo(truth)[:,:,:]
p_geo = md[NNs][data[6:-3]].reshape_ngeo(p)[:,:,:]
if itime==iini:
MSE[NNs][data[6:-3]] = np.mean((t_geo-p_geo)**2,axis=2)
VAR[NNs][data[6:-3]] = np.var(p_geo,axis=2)
else:
MSE[NNs][data[6:-3]] = np.concatenate((MSE[NNs][data[6:-3]],
np.mean((t_geo-p_geo)**2,axis=2)),axis=1)
            VAR[NNs][data[6:-3]] = np.concatenate((VAR[NNs][data[6:-3]],
np.var(p_geo,axis=2)),axis=1)
# -
se = (t_geo-p_geo)**2
se.shape
for i,NNs in enumerate(NNarray):
plt.scatter(np.mean(coor.TS,axis=(0,2)),np.log10(np.mean(MSE[NNs][data[6:-3]],axis=1)),label=NNs)
plt.legend()
plt.title(data[6:-3])
for i,NNs in enumerate(NNarray):
plt.scatter(np.mean(coor.TS,axis=(0,2)),np.mean(MSE[NNs][data[6:-3]],axis=1)/
np.mean(VAR[NNs][data[6:-3]],axis=1),label=NNs)
plt.legend()
plt.title(data[6:-3])
MSE
data
# +
lat_ind = np.arange(0,64)
iinis = [500]
# diagno = {} # Diagnostics structure
# diagno['truth'] = {} # Diagnostics structure for the truth
# diagno['truth_pos'] = {} # Diagnostics structure for the truth pos
# diagno['truth_neg'] = {} # Diagnostics structure for the truth neg
# truth_done = {}
# for j,data in enumerate(data_file):
# truth_done[data[6:-3]] = False
for i,NNs in enumerate(NNarray):
print('i=',i,'& NNs=',NNs,' ')
diagno[NNs] = {} # Diagnostics structure for each NN
for j,data in enumerate(data_file):
diagno[NNs][data[6:-3]]={}
if i==0:
# diagno['truth'][data[6:-3]]={}
# diagno['truth_pos'][data[6:-3]]={}
# diagno['truth_neg'][data[6:-3]]={}
for iini in iinis:
print('j=',j,'& iini=',iini,'& data=',data,' ',end='\r'),
iend = iini+47
diagno[NNs][data[6:-3]][iini] = {} # Diagnostics structure for each data file
if i==0:
diagno['truth'][data[6:-3]][iini] = {}
diagno['truth_pos'][data[6:-3]][iini] = {}
diagno['truth_neg'][data[6:-3]][iini] = {}
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp, p, truth = md[NNs][data[6:-3]].get_inp_pred_truth(itime) # [lat, lon, var, lev]
## only if the scaling is true
if NN[NNs].scaling==True:
X, _ = md[NNs][data[6:-3]].valid_gen[itime]
mask, pos_op, neg_op = md[NNs][data[6:-3]].model.predict_on_batch_seperate(X.values)
mask_reshaped = md[NNs][data[6:-3]].reshape_ngeo(mask)[lat_ind,:,:]
mask = mask_reshaped.flatten()
neg_mask = np.logical_not(mask)
## get the truth only once.
p = np.array(p)
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth'][data[6:-3]][iini]['PHQ'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth'][data[6:-3]][iini]['TPHYSTND'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
##if scaling is true and the truth array is not filled
if NN[NNs].scaling==True and truth_done[data[6:-3]]==False:
diagno['truth_pos'][data[6:-3]][iini]['PHQ_pos'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:].reshape(-1,30)[mask]
diagno['truth_pos'][data[6:-3]][iini]['TPHYSTND_pos'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:].reshape(-1,30)[mask]
diagno['truth_neg'][data[6:-3]][iini]['PHQ_neg'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:].reshape(-1,30)[neg_mask]
diagno['truth_neg'][data[6:-3]][iini]['TPHYSTND_neg'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:].reshape(-1,30)[neg_mask]
truth_done[data[6:-3]] = True
diagno[NNs][data[6:-3]][iini]['PHQ'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs][data[6:-3]][iini]['TPHYSTND'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
if NN[NNs].scaling==True:
diagno[NNs][data[6:-3]][iini]['PHQ_pos'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:].reshape(-1,30)[mask]
diagno[NNs][data[6:-3]][iini]['TPHYSTND_pos'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:].reshape(-1,30)[mask]
diagno[NNs][data[6:-3]][iini]['PHQ_neg'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:].reshape(-1,30)[neg_mask]
diagno[NNs][data[6:-3]][iini]['TPHYSTND_neg'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:].reshape(-1,30)[neg_mask]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][data[6:-3]][iini][field] = np.concatenate((diagno[NNs][data[6:-3]][iini][field],
md[NNs][data[6:-3]].\
reshape_ngeo(p[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if NN[NNs].scaling==True:
diagno[NNs][data[6:-3]][iini][field+'_pos'] = np.concatenate((diagno[NNs][data[6:-3]][iini][field+'_pos'],
md[NNs][data[6:-3]].\
reshape_ngeo(p[:,ind_field])[lat_ind,:,:].reshape(-1,30)[mask]),
axis=0)
diagno[NNs][data[6:-3]][iini][field+'_neg'] = np.concatenate((diagno[NNs][data[6:-3]][iini][field+'_neg'],
md[NNs][data[6:-3]].\
reshape_ngeo(p[:,ind_field])[lat_ind,:,:].reshape(-1,30)[neg_mask]),
axis=0)
if i==0:
diagno['truth'][data[6:-3]][iini][field] = np.concatenate((diagno['truth'][data[6:-3]][iini][field],
md[NNs][data[6:-3]].\
reshape_ngeo(truth[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if NN[NNs].scaling==True:
diagno['truth_pos'][data[6:-3]][iini][field+'_pos'] = np.concatenate((diagno['truth_pos'][data[6:-3]][iini][field+'_pos'],
md[NNs][data[6:-3]].\
reshape_ngeo(truth[:,ind_field])[lat_ind,:,:].reshape(-1,30)[mask]),
axis=0)
diagno['truth_neg'][data[6:-3]][iini][field+'_neg'] = np.concatenate((diagno['truth_neg'][data[6:-3]][iini][field+'_neg'],
md[NNs][data[6:-3]].\
reshape_ngeo(truth[:,ind_field])[lat_ind,:,:].reshape(-1,30)[neg_mask]),
axis=0)
# -
# ### Convective heating and moistening movie
# From [https://github.com/tbeucler/CBRAIN-CAM/blob/master/notebooks/tbeucler_devlog/034_AGU2019_Figures.ipynb]
data_file
data = data_file[0]
# +
lat_ind = np.arange(0,64)
iini = 2000
iend = 2005
diagno = {} # Diagnostics structure
diagno['truth'] = {} # Diagnostics structure for the truth
for i,NNs in enumerate([NNarray[0]]):
diagno[NNs] = {} # Diagnostics structure for each NN
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp, p, truth = md[NNs][data[6:-3]].get_inp_pred_truth(itime) # [lat, lon, var, lev]
p = p.numpy()
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth']['PHQ'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth']['TPHYSTND'] = md[NNs][data[6:-3]].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['PHQ'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['TPHYSTND'] = md[NNs][data[6:-3]].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][field] = np.concatenate((diagno[NNs][field],
md[NNs][data[6:-3]].reshape_ngeo(p[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if i==0:
diagno['truth'][field] = np.concatenate((diagno['truth'][field],
md[NNs][data[6:-3]].reshape_ngeo(truth[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
# -
# Plot characteristics
fz = 17.5
lw = 2
plt.rc('text', usetex=False)
mpl.rcParams['mathtext.fontset'] = 'stix'
mpl.rcParams['font.family'] = 'STIXGeneral'
#mpl.rcParams["font.serif"] = "STIX"
plt.rc('font', family='serif', size=fz)
mpl.rcParams['lines.linewidth'] = lw
plt.close('all')
import cartopy.feature as cfeature
import cartopy.crs as ccrs
import matplotlib.ticker as mticker
# from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
# +
pathHOME = '/home/tbeucler/Movie_IGARSS/'
vminQ = -150
vmaxQ = 150
vminT = -150
vmaxT = 150
iz = -11 # -11 is 600hPa
it = 0
for it in range(100):
print(it)
plt.close('all')
fig,ax = plt.subplots(2,2,figsize=(10,7.5),subplot_kw=dict(projection=ccrs.Robinson()))
# (a) Convective Moistening
im = ax[0,0].imshow(diagno['truth']['PHQ'][:,:,iz,it],cmap='bwr',vmin=vminQ,vmax=vmaxQ,transform=ccrs.PlateCarree())
#im = ax[0,0].imshow(coor.TS[it,:,:].values,cmap='bwr',transform=ccrs.PlateCarree())
ax[0,0].set_title('Cloud-Resolving Model')
ax[0,0].set_global()
ax[0,0].add_feature(cfeature.COASTLINE)
cb = fig.colorbar(im, ax=ax[0,0], pad=0.01, extend='both', orientation='horizontal');
cb.set_label('$\mathrm{600hPa\ Convective\ Moistening\ (W/m^{2})}$')
# (b) Convective Heating
im = ax[0,1].imshow(diagno['truth']['TPHYSTND'][:,:,iz,it],cmap='bwr',vmin=vminT,vmax=vmaxT,transform=ccrs.PlateCarree())
ax[0,1].set_title('Cloud-Resolving Model')
ax[0,1].add_feature(cfeature.COASTLINE)
cb = fig.colorbar(im, ax=ax[0,1], pad=0.01, extend='both', orientation='horizontal');
cb.set_label('$\mathrm{600hPa\ Convective\ Heating\ (W/m^{2})}$')
# (a) Convective Moistening
im = ax[1,0].imshow(diagno[NNs]['PHQ'][:,:,iz,it],cmap='bwr',vmin=vminQ,vmax=vmaxQ,transform=ccrs.PlateCarree())
ax[1,0].set_title('Neural Network')
ax[1,0].add_feature(cfeature.COASTLINE)
#cb = fig.colorbar(im, ax=ax[1,0], pad=0.01, extend='both', orientation='horizontal');
#cb.set_label('$\mathrm{PRED\ 600hPa\ Convective\ Moistening\ (W/m^{2})}$')
# (b) Convective Heating
im = ax[1,1].imshow(diagno[NNs]['TPHYSTND'][:,:,iz,it],cmap='bwr',vmin=vminT,vmax=vmaxT,transform=ccrs.PlateCarree())
ax[1,1].set_title('Neural Network')
ax[1,1].add_feature(cfeature.COASTLINE)
#cb = fig.colorbar(im, ax=ax[1,1], pad=0.01, extend='both', orientation='horizontal');
#cb.set_label('$\mathrm{PRED\ 600hPa\ Convective\ Heating\ (W/m^{2})}$')
# matplotlib.pyplot.gcf().suptitle("Time to Crash: "+"%02.1f"%(cam_ds.time[-1]-cam_ds.time[it])+"day",
# fontsize=fz)
#plt.savefig(pathHOME+str(it)+'.png',format='png')
# -
# ## Retraining the NN to prepare for transfer learning
# ### Real-geography setting
NN
BF_geog = NN['RH_TNS_LHQsat_Geography.hdf5'].model
BF_geog.summary()
# Where to save the model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'RH_TNS_LHSAT_geog_2020_07_22'
#model.compile(tf.keras.optimizers.Adam(), loss=mse)
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
train_gen[0][0].shape
Nep = 10
BF_geog.fit_generator(train_gen, epochs=Nep, validation_data=valid_gen,\
callbacks=[earlyStopping, mcp_save_pos])
# ## Transfer Learning experiments
# ### From CI aqua to CI geo
if geography: path = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/geography/'
else: path = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/'
por_array = [0.1,1]
por_array
Nep = 10
NN = {}
for i,por in enumerate(por_array):
print('por=',por)
graph = tf.Graph()
# 1) Define new generators
TRAINFILE = 'CI_SP_M4K_train_shuffle.nc'
NORMFILE = 'CI_SP_M4K_NORM_norm.nc'
VALIDFILE = 'CI_SP_M4K_valid.nc'
config_file = 'CI_SP_M4K_CONFIG.yml'
train_gen = DataGeneratorClimInv(
data_fn = path+TRAINFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False,portion=por)
valid_gen = DataGeneratorClimInv(
data_fn = path+VALIDFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False
)
# 2) Load model
path_NN = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/RH_TNS_LHSAT_aqua_2020_07_22.hdf5'
NN[por] = load_climate_model(dict_lay,'/home/ankitesh/CBrain_project/PrepData/'+config_file,
'/oasis/scratch/comet/ankitesh/temp_project/PrepData/'+data,
lev=lev,hyam=hyam,hybm=hybm,TRAINDIR=TRAINDIR,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=rh_trans,t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans,scaling=scaling,interpolate=interpolate,
model=path_NN,pos_model=pos_model,neg_model=neg_model,
train_gen_RH_pos=train_gen_RH_pos,train_gen_RH_neg=train_gen_RH_neg,
train_gen_TNS_pos=train_gen_TNS_pos,train_gen_TNS_neg=train_gen_TNS_neg )
# 3) Define callbacks and save_name of new model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'TL_CI_2020_07_23_porindex_'+str(i+4)
earlyStopping = EarlyStopping(monitor='loss', patience=5, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='loss', mode='min')
# 4) Train model for Nep epochs and CANNOT save state of best validation loss because
# it would NOT be consistent with transfer learning scenario
NN[por].model.fit_generator(train_gen, epochs=Nep, callbacks=[earlyStopping, mcp_save_pos])
# tgb - 2020/07/24 - Finished at 0.01, restarting at 0.1, change name to +4
por_array = [0.00001,0.0001,0.001,0.01,0.1,1]
por_array
Nep = 10
NN = {}
for i,por in enumerate(por_array):
print('por=',por)
graph = tf.Graph()
# 1) Define new generators
TRAINFILE = 'CI_SP_M4K_train_shuffle.nc'
NORMFILE = 'CI_SP_M4K_NORM_norm.nc'
VALIDFILE = 'CI_SP_M4K_valid.nc'
config_file = 'CI_SP_M4K_CONFIG.yml'
train_gen = DataGeneratorClimInv(
data_fn = path+TRAINFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False,portion=por)
valid_gen = DataGeneratorClimInv(
data_fn = path+VALIDFILE,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
lev=lev,
hyam=hyam,hybm=hybm,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=True,t2tns_trans=True,
lhflx_trans=True,scaling=False,interpolate=False
)
# 2) Load model
path_NN = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/RH_TNS_LHSAT_aqua_2020_07_22.hdf5'
NN[por] = load_climate_model(dict_lay,'/home/ankitesh/CBrain_project/PrepData/'+config_file,
'/oasis/scratch/comet/ankitesh/temp_project/PrepData/'+data,
lev=lev,hyam=hyam,hybm=hybm,TRAINDIR=TRAINDIR,
inp_subRH=train_gen_RH.input_transform.sub, inp_divRH=train_gen_RH.input_transform.div,
inp_subTNS=train_gen_TNS.input_transform.sub,inp_divTNS=train_gen_TNS.input_transform.div,
rh_trans=rh_trans,t2tns_trans=t2tns_trans,
lhflx_trans=lhflx_trans,scaling=scaling,interpolate=interpolate,
model=path_NN,pos_model=pos_model,neg_model=neg_model,
train_gen_RH_pos=train_gen_RH_pos,train_gen_RH_neg=train_gen_RH_neg,
train_gen_TNS_pos=train_gen_TNS_pos,train_gen_TNS_neg=train_gen_TNS_neg )
# 3) Define callbacks and save_name of new model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'TL_CI_2020_07_23_porindex_'+str(i)
earlyStopping = EarlyStopping(monitor='loss', patience=5, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='loss', mode='min')
# 4) Train model for Nep epochs and CANNOT save state of best validation loss because
# it would NOT be consistent with transfer learning scenario
NN[por].model.fit_generator(train_gen, epochs=Nep, callbacks=[earlyStopping, mcp_save_pos])
| notebooks/tbeucler_devlog/050_Ankitesh_CI_Notebook_For_David.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # LeNet on Cifar with Dropout (0.5)
#
# This is LeNet (6c-16c-120-84) on CIFAR, trained with Adam (lr=0.001) for 100 epochs.
#
#
# #### LeNet
#
# Total params: 44,426
# Trainable params: 44,426
# Non-trainable params: 0
#
#
# #### LeNet with 10 intrinsic dim
#
# Total params: 488,696
# Trainable params: 10
# Non-trainable params: 488,686
#
# #### LeNet with 20000 intrinsic dim
# Total params: 888,584,426
# Trainable params: 20,000
# Non-trainable params: 888,564,426
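The parameter counts above follow from the random-subspace reparameterization behind the intrinsic-dimension method: the full D-dimensional weight vector is theta = theta0 + P d, where theta0 and the random projection P are frozen and only the low-dimensional vector d is trained. A minimal NumPy sketch (variable names are illustrative, not from this repo) reproduces the 10-dim counts:

```python
import numpy as np

# Sketch of the intrinsic-dimension reparameterization: only the low-dim
# vector d is trainable; theta0 and the random projection P are frozen.
D, d_dim = 44426, 10                      # LeNet params, intrinsic dim 10
np.random.seed(0)
theta0 = np.random.randn(D)               # frozen initialization
P = np.random.randn(D, d_dim) / np.sqrt(d_dim)  # frozen random projection
d = np.zeros(d_dim)                       # the only trainable parameters

theta = theta0 + P.dot(d)                 # effective network weights
trainable = d.size                        # 10
non_trainable = theta0.size + P.size      # 44426 * (10 + 1) = 488,686
assert trainable + non_trainable == 488696  # matches the totals above
```

The same arithmetic gives the 20000-dim case: 44,426 * 20,001 = 888,564,426 frozen parameters plus 20,000 trainable ones.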
import os, sys
import numpy as np
from matplotlib.pyplot import *
# %matplotlib inline
def extract_num(lines0):
valid_loss_str = lines0[-5]
valid_accuracy_str = lines0[-6]
train_loss_str = lines0[-8]
train_accuracy_str = lines0[-9]
run_time_str = lines0[-10]
valid_loss = float(valid_loss_str.split( )[-1])
valid_accuracy = float(valid_accuracy_str.split( )[-1])
train_loss = float(train_loss_str.split( )[-1])
train_accuracy = float(train_accuracy_str.split( )[-1])
run_time = float(run_time_str.split( )[-1])
return valid_loss, valid_accuracy, train_loss, train_accuracy, run_time
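As a sanity check, here is `extract_num` applied to a synthetic diary tail; the fixed offsets from the end of the file (`[-5]`, `[-6]`, ...) are an assumption inferred from the indexing above, so the fake lines below only illustrate the expected layout:

```python
# Synthetic usage of extract_num: the diary is assumed to end with
# "<name> <value>" lines at fixed offsets from the end (inferred from
# the indexing in extract_num, not verified against a real diary file).
def extract_num(lines0):
    valid_loss = float(lines0[-5].split()[-1])
    valid_accuracy = float(lines0[-6].split()[-1])
    train_loss = float(lines0[-8].split()[-1])
    train_accuracy = float(lines0[-9].split()[-1])
    run_time = float(lines0[-10].split()[-1])
    return valid_loss, valid_accuracy, train_loss, train_accuracy, run_time

fake_tail = [
    "run_time 123.4",        # lines0[-10]
    "train_accuracy 0.98",   # lines0[-9]
    "train_loss 0.05",       # lines0[-8]
    "(unused) 0",            # lines0[-7]
    "valid_accuracy 0.91",   # lines0[-6]
    "valid_loss 0.31",       # lines0[-5]
    "", "", "", "",          # lines0[-4:] are ignored
]
print(extract_num(fake_tail))  # (0.31, 0.91, 0.05, 0.98, 123.4)
```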
# +
results_dir = '../results/lenet_cifar_dropout'
dim = [10,50,100,500,1000,2000,5000,10000,15000]
i = 0
# filename list of diary
diary_names = []
for subdir, dirs, files in os.walk(results_dir):
for file in files:
if file == 'diary':
fname = os.path.join(subdir, file)
diary_names.append(fname)
diary_names = sorted(diary_names)
i = 0
diary_names_aug = []
for dn in diary_names:
if '_dir' not in dn:
dn_new = dn
# dn_new = dn[:-6] + str(dim[i])+ '_dropout' + dn[-6:]
i +=1
diary_names_aug.append(dn_new)
else:
diary_names_dir = dn
diary_names_ordered=diary_names_aug
# extrinsic update method
with open(diary_names_dir,'r') as ff:
lines0 = ff.readlines()
R_dir = extract_num(lines0)
print "Baseline LeNet:\n" + str(R_dir) + "\n"
# intrinsic update method
Rs = []
i = 0
for fname in diary_names_ordered:
with open(fname,'r') as ff:
lines0 = ff.readlines()
R = extract_num(lines0)
print "%d dim:\n"%dim[i] + str(R) + "\n"
i += 1
Rs.append(R)
Rs = np.array(Rs)
# -
# ## Performance comparison with Baseline
# +
N = len(dim)
fig, ax = subplots(1)
ax.plot(dim, Rs[:,0],'b-', label="Testing")
ax.plot(dim, R_dir[0]*np.ones(N),'r-', label="Testing: baseline")
ax.plot(dim, Rs[:,2],'g-', label="Training")
ax.plot(dim, R_dir[2]*np.ones(N),'y-', label="Training: baseline")
ax.scatter(dim, Rs[:,0])
ax.scatter(dim, Rs[:,2])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss')
ax.set_title('Cross Entropy Loss')
ax.legend()
ax.grid()
# ax.set_ylim([-0.1,3.1])
fig.set_size_inches(8, 5)
# +
fig, ax = subplots(1)
ax.plot(dim, Rs[:,1],'b-', label="Testing")
ax.plot(dim, R_dir[1]*np.ones(N),'b-', label="Testing: baseline")
ax.plot(dim, Rs[:,3],'g-', label="Training")
ax.plot(dim, R_dir[3]*np.ones(N),'g-', label="Training: baseline")
ax.scatter(dim, Rs[:,1])
ax.scatter(dim, Rs[:,3])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Accuracy')
ax.set_title('Cross Entropy Accuracy')
ax.legend()
ax.grid()
# ax.set_ylim([0.75,1.01])
fig.set_size_inches(8, 5)
# +
fig, ax = subplots(1)
ax.plot(dim, Rs[:,4],'g-', label="Training")
ax.plot(dim, R_dir[4]*np.ones(N),'g-', label="Training: baseline")
ax.scatter(dim, Rs[:,4])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Time (second)')
ax.set_title('Wall Clock Time')
ax.legend()
ax.grid()
# ax.set_ylim([0.75,100.01])
fig.set_size_inches(8, 5)
# -
# ## Performance Per Dim
# +
NRs = Rs/np.array(dim).reshape(N,1)
print NRs
fig, ax = subplots(1)
ax.plot(dim, NRs[:,0],'b-', label="Testing")
ax.scatter(dim, NRs[:,0])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss per dim')
ax.set_title('Cross Entropy Loss per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,2],'g-', label="Training")
ax.scatter(dim, NRs[:,2])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss per dim')
ax.set_title('Cross Entropy Loss per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
# +
fig, ax = subplots(1)
ax.plot(dim, NRs[:,1],'b-', label="Testing")
ax.scatter(dim, NRs[:,1])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Accuracy per dim')
ax.set_title('Accuracy per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,3],'g-', label="Training")
ax.scatter(dim, NRs[:,3])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Accuracy per dim')
ax.set_title('Accuracy per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
# +
fig, ax = subplots(1)
ax.plot(dim, NRs[:,4],'g-', label="Training")
ax.scatter(dim, NRs[:,4])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Time (second)')
ax.set_title('Wall Clock Time')
ax.legend()
ax.grid()
# ax.set_ylim([0.75,100.01])
fig.set_size_inches(8, 5)
# -
| intrinsic_dim/plots/more/lenet_cifar_dropout.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h3 align="center"><b>Filesystem Alert</b></h3>
# ***
# #### **Bash:**
# + language="bash"
#
# df -H | grep -Po "^/dev/(?!.*snap).*" | while read i
# do
# percent=$(echo $i | awk 'gsub("%", ""){print $5}')
# fs=$(echo $i | awk '{print $1}')
#
# if (( $percent >= 40 )); then
# echo "Running out of space $fs $percent% on $(hostname)."
# fi
# done
# + language="bash"
#
# df -H --output=source,pcent | grep -Po "^/dev/(?!.*(snap|loop)).*"\
# | sed 's/%//g'| while read -r fs pc
# do
# if (($pc > 40))
# then
# echo "Running out of space $fs $pc% on $HOSTNAME."
# fi
# done
# -
# ***
# #### **Python:**
# #### **Using statvfs System Call**
# ##### **Using Function**
# + language="python3"
#
# from os import statvfs as svfs
# from re import compile, search
# from pathlib import Path
#
# comp = compile(r'^/dev/(?!.*snap)')
# mline = Path('/proc/mounts').read_text().splitlines()
# fs = [i.split()[1] for i in mline if comp.search(i)]
#
# def usep(blocks, bfree):
#     return f"{(blocks - bfree)/blocks:.0%}"
#
# for x in fs:
#     percent = usep(svfs(x).f_blocks, svfs(x).f_bfree)
#     if int(percent.replace('%','')) > 40:
#         print(f'Running out of space {x} above {percent:4}.')
# -
# ##### **Using OOPS**
# + language="python3"
#
# from os import statvfs as svfs
# from re import compile, search
# from socket import gethostname
# from pathlib import Path
#
# class DiskUsage:
# def __init__(self, file):
# self.f_blocks = svfs(file).f_blocks
# self.f_bfree = svfs(file).f_bfree
#
# def usep(self):
# return f"{(self.f_blocks - self.f_bfree)/self.f_blocks:.0%}"
#
# comp = compile(r'^/dev/(?!.*snap)')
# mline = Path('/proc/mounts').read_text().splitlines()
# fs = [i.split()[:2] for i in mline if comp.search(i)]
#
# for x in fs:
# disk = DiskUsage(x[1])
# if int(disk.usep().replace('%','')) > 40:
# print(f'Running out of space {x[0]} {disk.usep()}',end = ' ')
# print(f'on {gethostname()}.')
# -
# ***
# #### **Using New Subprocess API**
# + language="python3"
#
# from subprocess import PIPE, run
# from socket import gethostname
# from re import compile, search
#
#
# cmd = 'df -h'
# comp = compile(r'^/dev/(?!.*snap).*')
# args = dict(stdin=PIPE, stdout=PIPE, stderr=PIPE)
# stdout = run(cmd.split(), **args).stdout.decode()
# fs = [i for i in stdout.splitlines() if comp.search(i)]
#
# for i in fs:
# percent = int(i.split()[4].replace('%',''))
# dev = i.split()[0]
# if percent > 40:
# print(f'Running out of space {dev} {percent}% on {gethostname()}.')
# -
# ***
# #### **Using old Subprocess API**
# + language="python3"
#
# from subprocess import check_output
# from socket import gethostname
# from re import search
#
# cmd = 'df -h'
# out = check_output(cmd.split()).decode().splitlines()
# fs = [i for i in out if search(r'^/dev/(?!.*snap)',i)]
#
# for i in fs:
# percent = int(i.split()[4].replace('%',''))
# dev = i.split()[0]
# if percent > 40:
# print(f'Running out of space {dev} {percent}% on {gethostname()}.')
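A shorter stdlib route (not used in the original notebook) is `shutil.disk_usage`, which returns total/used/free bytes for the filesystem containing a path and avoids both the `statvfs` arithmetic and parsing `df` output; the mountpoint list here is illustrative:

```python
from shutil import disk_usage
from socket import gethostname

# disk_usage gives (total, used, free) in bytes for the filesystem that
# contains the given path; the 40% threshold mirrors the examples above.
for mount in ["/"]:
    usage = disk_usage(mount)
    percent = round(100 * usage.used / usage.total)
    if percent > 40:
        print(f"Running out of space {mount} {percent}% on {gethostname()}.")
```

Unlike the `df`-parsing variants, this needs no regex filtering of snap/loop devices, but it also reports only the mountpoints you explicitly list.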
| Filesystem-Alert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person
from tqdm import tqdm
import json
import numpy as np
auth_file = "/home/ivan/pascal_adversarial_faces/azure_auth.json"
with open(auth_file, "r") as f:
auth_data = json.loads(f.read())
face_client = FaceClient(
auth_data["endpoint"],
CognitiveServicesCredentials(auth_data["key"])
)
# +
# face_client.person_group.delete(person_group_id="robust_community_naive_mean_0_6_0p5")
# -
def measure_azure_recall(
face_client,
image_directory,
azure_person_group_name
):
person_id_to_name = {}
identities = []
for p in face_client.person_group_person.list(person_group_id=azure_person_group_name):
person_id_to_name[p.person_id] = p.name
identities.append(p.name)
discovery = []
true = []
identified_as = []
for protector in tqdm(identities):
# We are sourcing query photos from epsilon_0.0.
# In those cases, all subfolders in the "protected" identity have the same, clean
# photo of the protector, so we just pick any single one that exists (e.g. n000958)
# For the case where n000958 is itself the protector, n000958 is not present in its protected
# subfolders, so we pick n000029 without loss of generality.
if protector == "n000958":
protected = "n000029"
else:
protected = "n000958"
query_photos_paths = sorted(glob.glob(
f"{image_directory}/{protector}/community_naive_mean/{protected}/epsilon_0.0/png/*"
))
# For this person group, we picked the first 10 lexicographically sorted
# photos to use in the lookup set, so for the query, we use the 11th and on.
# (The query is not supposed to be in the lookup set).
for i in range(11, len(query_photos_paths)):
faces_in_query_photos = face_client.face.detect_with_stream(
open(query_photos_paths[i], "r+b"),
detectionModel='detection_02'
)
if len(faces_in_query_photos) > 0:
break
# There should only be one face, so we use that as the query face.
results = face_client.face.identify(
[faces_in_query_photos[0].face_id],
azure_person_group_name
)
true.append(protector)
if len(results) < 1 or len(results[0].candidates) < 1:
discovery.append(0.0)
identified_as.append("None")
else:
top_identity = person_id_to_name[results[0].candidates[0].person_id]
identified_as.append(top_identity)
# Note the switch of the term protector here:
# protectors are also protected but we call them protectors because of the folder structure
# In this case, the query photo belongs to the protector -- who is also protected by decoys
# of *other* protectors. Therefore, if the identity returned is that of the "protector,"
# this is a failure in the defense.
if top_identity == protector:
discovery.append(1.0)
else:
discovery.append(0.0)
time.sleep(10)
for true_id, recognized_id in zip(true, identified_as):
        print(f"Face of {true_id} identified as {recognized_id}")
return sum(discovery)/len(discovery)
def measure_azure_recall_for_random_5_images(
face_client,
image_directory,
azure_person_group_name
):
person_id_to_name = {}
identities = []
for p in face_client.person_group_person.list(person_group_id=azure_person_group_name):
person_id_to_name[p.person_id] = p.name
identities.append(p.name)
discovery = []
true = []
identified_as = []
for protector in tqdm(identities):
# We are sourcing query photos from epsilon_0.0.
# In those cases, all subfolders in the "protected" identity have the same, clean
# photo of the protector, so we just pick any single one that exists (e.g. n000958)
# For the case where n000958 is itself the protector, n000958 is not present in its protected
# subfolders, so we pick n000029 without loss of generality.
if protector == "n000958":
protected = "n000029"
else:
protected = "n000958"
query_photos_paths = sorted(glob.glob(
f"{image_directory}/{protector}/community_naive_mean/{protected}/epsilon_0.0/png/*"
))
# For this person group, we picked the first 10 lexicographically sorted
# photos to use in the lookup set, so for the query, we use the 11th and on.
# (The query is not supposed to be in the lookup set).
        # Sample 5 distinct query photos (replace=False avoids picking the same photo twice).
        for i in np.random.choice(len(query_photos_paths), 5, replace=False):
faces_in_query_photos = face_client.face.detect_with_stream(
open(query_photos_paths[i], "r+b"),
detectionModel='detection_02'
)
if len(faces_in_query_photos) < 1:
continue
# There should only be one face, so we use that as the query face.
results = face_client.face.identify(
[faces_in_query_photos[0].face_id],
azure_person_group_name
)
true.append(protector)
if len(results) < 1 or len(results[0].candidates) < 1:
discovery.append(0.0)
identified_as.append("None")
else:
top_identity = person_id_to_name[results[0].candidates[0].person_id]
identified_as.append(top_identity)
# Note the switch of the term protector here:
# protectors are also protected but we call them protectors because of the folder structure
# In this case, the query photo belongs to the protector -- who is also protected by decoys
# of *other* protectors. Therefore, if the identity returned is that of the "protector,"
# this is a failure in the defense.
if top_identity == protector:
discovery.append(1.0)
else:
discovery.append(0.0)
time.sleep(15)
for true_id, recognized_id in zip(true, identified_as):
        print(f"Face of {true_id} identified as {recognized_id}")
return sum(discovery)/len(discovery)
measure_azure_recall(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_10_5_0p5"
)
measure_azure_recall_for_all_images(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_10_5_0p5"
)
measure_azure_recall(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_1_1_0p5"
)
measure_azure_recall(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_2_6_0p5"
)
measure_azure_recall(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_2_1_0p5"
)
# +
def increment_dict(dictionary, key):
if key not in dictionary:
dictionary[key] = 1
else:
dictionary[key] += 1
return dictionary
def read_log_file(log_file_path, base_images_path):
identity_to_num_total = {}
identity_to_num_clean = {}
identity_to_num_decoys_for_others = {}
identity_to_num_decoys_by_others = {}
with open(log_file_path, "r") as f:
curr_id = None
for line in f:
if line.startswith("n"):
curr_id = line.strip("\n")
else:
increment_dict(identity_to_num_total, curr_id)
# clean base path, e.g. /data/vggface/test_perturbed
# then strip \n characters
line = line[len(base_images_path) + 1:].strip("\n")
# split by path separator
line = line.split("/")
# format is base_path/true_id/attack_strategy/target/epsilon_X/other_stuff
true_id = line[0]
target_id = line[2]
epsilon = line[3]
assert true_id == curr_id, f"In line {' '.join(line)} mismatch between true_id {true_id} and current id {curr_id}"
if epsilon == "epsilon_0.0":
increment_dict(identity_to_num_clean, curr_id)
elif epsilon.startswith("epsilon_"):
increment_dict(identity_to_num_decoys_for_others, curr_id)
else:
raise Exception(f"Invalid epsilon {epsilon}")
increment_dict(identity_to_num_decoys_by_others, target_id)
for key, value in identity_to_num_total.items():
print(f"Identity {key} has {value} total images associated with it")
for key, value in identity_to_num_clean.items():
print(f"Identity {key} has {value} clean images")
for key, value in identity_to_num_decoys_for_others.items():
print(f"Identity {key} has {value} decoys for others")
for key, value in identity_to_num_decoys_by_others.items():
print(f"Others have provided {value} decoys for identity {key}")
# -
read_log_file(
"/home/ivan/pascal_adversarial_faces/azure_face_logfiles/robust_community_naive_mean_2_1_0p5.txt",
"/data/vggface/test_perturbed_sampled"
)
measure_azure_recall(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_0_6_0p5"
)
for g in face_client.person_group.list():
print(g.name)
measure_azure_recall_for_random_5_images(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_0_6_0p5"
)
measure_azure_recall_for_random_5_images(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_2_1_0p5"
)
measure_azure_recall_for_random_5_images(
face_client,
"/data/vggface/test_perturbed_sampled",
"robust_community_naive_mean_10_5_0p5"
)
| notebooks/Azure_Recall.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# +
list.of.packages <- c("ggplot2", "readr", "stringr", "magrittr", "logging")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
# Install missing packages if any
if(length(new.packages) > 0) {
install.packages(new.packages)
}
# Load packages
for(pkg in list.of.packages){
library(pkg, character.only=TRUE)
}
# Make plots pretty
theme_set(theme_bw())
# -
# # Data Standardization
# Let's use a real-world dataset to put some of the data standardization techniques into practice.
# +
URL_ALBERTA = 'https://open.alberta.ca/dataset/a4b99aad-3f33-45ea-a5c7-e34a4733d336/resource/8e5d4548-fe1a-4f36-a690-3da56af197e6/download/cmhc_preliminary_total_housing_starts_csv_v83.0_2019-08-09.csv'
INTERESTING_COLUMNS = c('Ref_Date', 'Urban.Centre', 'Preliminary.Housing.Starts')
# Read in the DataFrame and use only specified columns
df_alberta <- read.csv(URL_ALBERTA)[, INTERESTING_COLUMNS]
# Fix the 'housing' column so that all rows contain integers
df_alberta$Preliminary.Housing.Starts <- as.numeric(as.character(df_alberta$Preliminary.Housing.Starts))
# Print the dataframe details and first 5 rows
summary(df_alberta)
head(df_alberta)
# -
# ## Inconsistent Coding
# In this example, let's take a look at the values for the `Urban Centre` attribute:
print('The unique values for cities are:')
print(unique(unlist(df_alberta['Urban.Centre'])))
# Now, *I don't know anything about Alberta urban centres*, so take everything here with a grain of salt.
#
# But let's standardize this so that we have values that "mean something". We'll:
# 1. Rename the codes so they don't have the `(CMA)` and `(CA)` suffixes
# 2. Remove the rows for the aggregate values `'Five CAs'`, `'Seven Major Urban Centres'`, and `'Four CAs'`
# ### Rename codes
# Here we use base R's `gsub()` function to detect the unwanted patterns and replace them with an empty string, `''`.
# +
# Rename Urban Centre suffixes
suffixes = c(' \\(CMA\\)', ' \\(CA\\)') # Escape parentheses with double backslashes
# Loop through the possible suffix values
for(suffix in suffixes){
# unlist(df$Urban.Centre) %>%
# str_replace_all(unlist(df$Urban.Centre), suffix, "")
df_alberta$Urban.Centre <- gsub(suffix, "", df_alberta$Urban.Centre)
}
print('The unique values for cities are:')
print(unique(unlist(df_alberta['Urban.Centre'])))
# +
# Remove the aggregate rows
aggregate_values = 'Five CAs|Seven Major Urban Centres|Four CAs'
# Overwrite the data frame, keeping only the rows that do not match the aggregate values
# filter_all(grepl(aggregate_values, df_alberta$Urban.Centre))
df_alberta <- dplyr::filter(df_alberta, !grepl(aggregate_values, Urban.Centre))
# Show the difference
print('The unique values for cities are:')
print(unique(unlist(df_alberta['Urban.Centre'])))
# -
# Now let's take a look at the first few entries again!
head(df_alberta)
ggplot(data=df_alberta,
aes(x = Ref_Date,
y = Preliminary.Housing.Starts,
colour = Urban.Centre,
group=Urban.Centre)) +
ggtitle("Preliminary Housing Starts by Urban Centre in Alberta (2019)") +
geom_line() +
scale_y_continuous(trans = 'log2')
# Much better!
# ## Invalid Values
# Here we'll use a mock dataset in order to illustrate some points.
#
# For the sake of argument, let's imagine that in our use case, invalid values that go through our data processing pipeline will break downstream applications: for example, a dashboard that's refreshed every time new data comes in.
# +
N <- 9
df_invalid <- structure(list(
id = 1:N,
group = c('A', 'A', 'B', 'B', 'Z', 'A', 'B', 'A', 'B'),
location = c('Ottawa', 'Montréal', 'Ottawa', 'Red Deer', 'Ottawa', 'unknown', 'Toronto', 'Ottawa', 'Vancouver'),
sales = c(sample(1:100, N))
),
.Names = c("id", "group", "location", 'sales'),
class = "data.frame",
row.names = c(NA, -N))
df_invalid
# -
# Let's assume here that the values `Z` for the group and `unknown` for the location are invalid.
#
# There are two ways to handle this:
# 1. A list of valid values (whitelist)
# 2. A list of invalid values (blacklist)
#
# And there are two notable ways of handling them:
# 1. We log the error but keep the pipeline going, using some fallback value
# 2. We error out the pipeline, and wait for human intervention to fix the offending data
#
# All permutations of these have valid use cases, and depend on the situation.
# ### Strategy \#1: Whitelist and error out
# This strategy is good standard practice, but assumes there are (timely) processes in place to correct the data at the source.
# +
VALID_GROUPS = 'A|B'
INVALID_GROUP_ERROR = 'Some invalid values were detected for column "group". Aborting pipeline.'
# Whitelist the group values, error out if invalid value is detected
invalid_group_values <- !grepl(VALID_GROUPS, df_invalid$group)
# Sum of invalid strings should be zero
if(sum(invalid_group_values) > 0){
stop(INVALID_GROUP_ERROR)
}
# -
# ### Strategy \#2: Blacklist and log occurrences
# We can also log the invalid data and substitute a fallback value instead, keeping the pipeline running. In that case, we simply log each occurrence.
# +
# Create a logging object, which we'll use to monitor our data pipeline
Sys.setenv(TZ = "UTC") # To suppress timezone warning
logging::basicConfig()
INVALID_LOCATIONS = c('unknown', 'NA', 'N/A', ' ', '')
invalid_rows <- apply(df_invalid, 1, function(r) any(r %in% INVALID_LOCATIONS))
for (ix in seq_along(invalid_rows)) {
row <- invalid_rows[ix]
if (row) {
logwarn(paste('Invalid location value at index ', ix, ': "', df_invalid$location[ix], '". Replacing with missing value.', sep=''))
df_invalid$location[ix] <- NA
}
}
df_invalid
# -
# ## Numeric Values
# For the following sections, we'll use a real-world example. Each row represents a house in California sold on the market, with each attribute representing either characteristics of the neighbourhood, or the house itself.
# Get dataset
URL = 'https://github.com/spiderPan/Google-Machine-Learning-Crash-Course/raw/master/data/california_housing_train.csv'
df_housing <- read.csv(URL)
head(df_housing)
summary(df_housing)
# Let's focus on a few attributes:
# +
ggplot_histogram <- function(df, col, color="#FF6666") {
print(
ggplot(df, aes_string(x=col)) +
geom_histogram(aes(y=..density..), colour="black", fill=color, bins=100)
)
}
interesting_cols = c('median_income', 'population', 'total_bedrooms')
for (col in interesting_cols) {
ggplot_histogram(df_housing, col)
}
# -
# ## Numeric Values — Normalization
#
# $$x_{new} = \frac{x-x_{min}}{x_{max} - x_{min}}$$
#
# In this example we define a small `normalize()` helper to perform the min-max normalization, but packages such as `scales` offer the same functionality out of the box.
# +
# Normalize total_bedrooms column
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
# Add as new column to our data.frame
df_housing$norm_bedrooms <- normalize(df_housing$total_bedrooms)
# Plot alongside the original
for (col in c('total_bedrooms', 'norm_bedrooms')) {
ggplot_histogram(df_housing, col, color="#6666FF")
}
# -
# So we see the shape of the distribution is exactly the same, but the scale of the values is now constrained within $[0, 1]$.
# ## Numeric Values — Standardization
#
# $$x_{new} = \frac{x-\mu}{\sigma}$$
#
# Here, we'll simply use the Base R function `scale()` to calculate a new column to our DataFrame.
# +
# Add as new column to our data.frame
df_housing$standard_income <- scale(df_housing$median_income)
# Plot alongside the original
for (col in c('median_income', 'standard_income')) {
ggplot_histogram(df_housing, col, color="#663366")
}
# -
# Again, the shape of the distribution is exactly the same, but this time the mean of the distribution is at zero, and the unit value is equal to one standard deviation away from the mean.
# ## Numeric Values — Outliers
# ### Z-Score
# This method uses the same function as for standardization, but uses the Z-score number as a threshold value that determines whether we keep that value or not. In this case, we decide to remove any value that is farther than 5 standard deviations away from the average.
#
# Expert knowledge of the domain is sometimes required in order to set the threshold value.
# +
is_outlier <- function(x, threshold=5.0) {
    # Returns TRUE when the value is within `threshold` standard deviations of
    # the distribution mean, so subsetting with it keeps the non-outliers.
    # Default threshold: 5 standard deviations.
return(abs(scale(x)) < threshold)
}
# Plot the original
ggplot_histogram(df_housing, 'population', color="#A3D8E6")
# Plot the cleaned version
cleaned <- df_housing[is_outlier(df_housing$population),]
ggplot_histogram(cleaned, 'population', color="#A3D8E6")
| data_cleaning/data_standardization_R.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
myIndex = [('Slot1', 1), ('Slot1', 2), ('Slot1', 3), ('Slot2', 1), ('Slot2', 2), ('Slot2', 3)]
myIndex = pd.MultiIndex.from_tuples(myIndex)
dataFrame = pd.DataFrame(np.random.randn(6,2), index=myIndex, columns=['Res1', 'Res2'])
# -
dataFrame
dataFrame.loc['Slot1']
dataFrame.loc['Slot1'].loc[2]
dataFrame.index.names = ['Slot','Var']
dataFrame
dataFrame.xs('Slot1')
dataFrame.xs('Slot1').xs(2)
dataFrame.xs(2,level='Var')
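# As a possible extension (not part of the original cells), the same hierarchical index can be built with `pd.MultiIndex.from_product`, and rows can be selected across levels in a single `.loc` call with `pd.IndexSlice`. The variable names below are illustrative:

```python
import numpy as np
import pandas as pd

# Build the same two-level index from the cartesian product of its levels.
idx = pd.MultiIndex.from_product([['Slot1', 'Slot2'], [1, 2, 3]],
                                 names=['Slot', 'Var'])
df = pd.DataFrame(np.random.randn(6, 2), index=idx, columns=['Res1', 'Res2'])

# pd.IndexSlice gives label-based slicing on any level inside .loc.
subset = df.loc[pd.IndexSlice[:, 2], :]  # all slots where Var == 2
print(subset)
```

# Unlike `xs(2, level='Var')`, which drops the sliced level from the result, `IndexSlice` keeps both index levels intact.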
| multi index in pandas.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.1
# language: julia
# name: julia-1.4
# ---
using Flux, Statistics
using Flux.Data: DataLoader
using Flux: onehotbatch, onecold, logitcrossentropy, throttle, @epochs
using Base.Iterators: repeated
using Parameters: @with_kw
using CUDAapi
using MLDatasets
if has_cuda() # Check if CUDA is available
@info "CUDA is on"
import CuArrays # If CUDA is available, import CuArrays
CuArrays.allowscalar(false)
end
@with_kw mutable struct Args
η::Float64 = 3e-4 # learning rate
batchsize::Int = 1024 # batch size
epochs::Int = 10 # number of epochs
device::Function = gpu # set as gpu, if gpu available
end
function getdata(args)
# Loading Dataset
xtrain, ytrain = MLDatasets.MNIST.traindata(Float32)
xtest, ytest = MLDatasets.MNIST.testdata(Float32)
    # Reshape the data to flatten each image into a linear array
xtrain = Flux.flatten(xtrain)
xtest = Flux.flatten(xtest)
# One-hot-encode the labels
ytrain, ytest = onehotbatch(ytrain, 0:9), onehotbatch(ytest, 0:9)
# Batching
train_data = DataLoader(xtrain, ytrain, batchsize=args.batchsize, shuffle=true)
test_data = DataLoader(xtest, ytest, batchsize=args.batchsize)
return train_data, test_data
end
function build_model(; imgsize=(28,28,1), nclasses=10)
return Chain(
Dense(prod(imgsize), 32, relu),
Dense(32, nclasses))
end
function loss_all(dataloader, model)
l = 0f0
for (x,y) in dataloader
l += logitcrossentropy(model(x), y)
end
l/length(dataloader)
end
function accuracy(data_loader, model)
acc = 0
for (x,y) in data_loader
acc += sum(onecold(cpu(model(x))) .== onecold(cpu(y)))*1 / size(x,2)
end
acc/length(data_loader)
end
function train(; kws...)
# Initializing Model parameters
args = Args(; kws...)
# Load Data
train_data,test_data = getdata(args)
# Construct model
m = build_model()
train_data = args.device.(train_data)
    test_data = args.device.(test_data)
m = args.device(m)
loss(x,y) = logitcrossentropy(m(x), y)
## Training
evalcb = () -> @show(loss_all(train_data, m))
opt = ADAM(args.η)
@epochs args.epochs Flux.train!(loss, params(m), train_data, opt, cb = evalcb)
@show accuracy(train_data, m)
@show accuracy(test_data, m)
end
cd(@__DIR__)
train()
| flux/test_flux_multi-layer_perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import mbuild as mb
from mbuild.lib.moieties import CH2, CH3
from mbuild.examples import Propane, Hexane
hexane_box = mb.fill_box(Hexane(), 50, box=[4, 4, 4])
print(hexane_box)
hexane_box.visualize()
# -
united_atom_particles = [CH2, CH3]
united_atom = mb.coarse_grain(hexane_box, particle_classes=united_atom_particles)
print(united_atom)
united_atom.visualize()
three_to_one_particles = [Propane]
three_to_one = mb.coarse_grain(hexane_box, particle_classes=three_to_one_particles)
print(three_to_one)
three_to_one.visualize()
| mbuild/examples/coarse_graining/hexane_box.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Fintech Case study on COINBASE INC.
#
# Name of company :
# Coinbase Global (Cryptocurrency Fintech domain)
#
# Coinbase Global was incorporated:
# June 20, 2012
#
# The founders of Coinbase Global are:
# <NAME> and <NAME>
#
# ## What is Fintech, and what does it encompass in the cryptocurrency/blockchain domain?
# Before delivering my case study, it is important to have an understanding of the new developments involving the merging of the financial and technological industries, which produced the emergence of the aptly dubbed "fintech" industry.
# Financial technology (fintech) is used to describe new tech that seeks to improve and automate the delivery and use of financial services. At its core, fintech is utilized to help companies, business owners, and consumers better manage their financial operations, processes, and lives by utilizing specialized software and algorithms that are used on computers and, increasingly, smartphones. The word "fintech" itself is a combination of "financial" and "technology".
# At its core, cryptocurrency is typically decentralized digital money designed to be used over the internet. Bitcoin, which launched in 2008, was the first cryptocurrency, and it remains by far the biggest, most influential, and best-known. In the decade since, Bitcoin and other cryptocurrencies like Ethereum have grown as digital alternatives to money issued by governments.
#
# # How did the idea for the company (or project) come about?
# Coinbase started in 2012 with the radical idea that anyone, anywhere, should be able to easily and securely send and receive Bitcoin. Today, we offer a trusted and easy-to-use platform for accessing the broader cryptoeconomy.
#
#
# "At the time, “bitcoin was the crazy idea that the world could have a digital money for everyone,” he said.
#
# Armstrong and Ehrsam first met on Reddit and shared a bullish view on bitcoin and the cryptocurrency space as a whole, Ehrsam said on Twitter. In turn, they decided to launch Coinbase with the “mission” to “make crypto easy to use.”
#
# Although the company currently has a multibillion-dollar valuation, its beginnings were “not glamorous,” Ehrsam said. “Coinbase launched out of a two bedroom apartment we shared with another company.”
#
# Over the first two years in business, Coinbase grew to a company with over 1,000 employees." (CNBC.COM- MAKE IT ARTICLES)
#
#
#
# * How is the company funded? How much funding have they received?
#
# Armstrong enrolled in the Y Combinator startup incubator program and received a 150,000 dollar cash infusion. <NAME>, a former Goldman Sachs trader, later joined as a co-founder.
#
# Coinbase is funded by charging its subscribers/clients brokerage fees, as it is one of the top cryptocurrency exchange platforms. It is very important to note that Coinbase has amassed a large amount of its capital through earnings made by listing as a publicly operated company on NASDAQ on April 14, 2021. Coinbase's IPO debuted at a price of 381 dollars and closed at 328.28 dollars per share, valuing the company's market cap at 85.8 billion dollars.
# This growth was colossal in comparison to the company's beginnings at its founding in 2012, and this increase in capital was only a springboard for further performance.
#
#
# ## Business Activities:
#
# It can be well noted that the initial financial issue the founders set out to solve stemmed from the desire to provide an easier medium for investors interested in bitcoin to purchase the asset.
# "Founded in 2012 as a way to simplify the purchase of bitcoin, Coinbase has emerged as the most popular crypto exchange in the U.S. and soared in value alongside digital currencies bitcoin and ethereum. The service now has 56 million users, up from 43 million at the end of 2020 and 32 million the year before that. In its last private financing round in 2018, investors valued Coinbase at 8 billion dollars." (CNBC)
# It is indeed safe to say this achievement was realized, as the statistics above imply by projecting its growth in users over such a short period of time.
#
#
# * Which technologies are they currently using?
# The main technologies utilized by Coinbase are blockchain, APIs, and smart contracts, which carry out the day-to-day trading of cryptocurrencies.
#
# * What has been the business impact of this company so far?
# Approximately 68 million verified users, 9,000 institutions, and 160,000 ecosystem partners in over 100 countries trust Coinbase to easily and securely invest, spend, save, earn, and use crypto. This bolstering fact tells us that Coinbase is definitely a prime contributor to the growth of the cryptocurrency/blockchain domain. With the enrollment of new investors each day and more purchases of the different cryptocurrencies, the growth of the industry is inevitable, and this powerhouse is certainly playing its part.
#
# * What are some of the core metrics that companies in this domain use to measure success? How is your company performing, based on these metrics?
#
# * How is your company performing relative to competitors in the same domain?
# Coinbase's main contenders include, but aren't limited to: Robinhood, Binance, Uphold, G2Deals, Kraken, and CEX.IO.
#
# ## Recommendations
#
# * If I were to give any advice to the company it would be to:
# -Partner with startup companies attempting to list on any public exchange by serving as the main financial hub for purchasing IPO shares
# or even additional public offerings.
#
# -Expand listings to stocks, other securities, and real estate, as this would increase usage of the platform and its liquidity.
#
# -Provide easier transfers and tangible access to crypto wallets by establishing ATMs that are accessible worldwide.
#
# * What technologies would this additional product or service utilize?
#
# -ATM machines
# -more cyber security programs
# -stronger nodes and advanced data storage
# -artificial intelligence development
# -an extended platform which can track and display real-time market projections to support expansion of the offered asset classes
# -more blockchain and smart contract development for real estate leveraging
# # ADDENDUM
# https://www.coinbase.com/about
#
# nbc.com/2021/04/14/coinbase-co-founders-launched-when-a-bitcoin-btc-was-worth-6.html
#
# https://www.cbinsights.com/research/report/coinbase-strategy-teardown/
#
# https://www.investopedia.com/tech/coinbase-what-it-and-how-do-you-use-it/
#
| Coinbase_casestudy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # COGS 108 - Project Proposal
# ## Important
#
# - Make sure all group members (3-5 people) are listed in the group members section.
# - ONE, and only one, member of your group should upload this notebook to TritonED.
# - Each member of the group will receive the same grade on this assignment.
# - Keep the file name the same: submit the file 'ProjectProposal.ipynb'
# ## Overview
# Read the project description and detailed instructions for this assignment in the 'ProjectOutline' document.
# ## Group Members: Fill in the Student IDs of each group member here
#
# Replace the lines below to list each person's full student ID. Add lines as needed for your group size, and make sure each ID is listed on a separate line.
#
# - A13897206
# - A13758489
# - A91047730
# - A08971511
# - A12073771
# - A11593366
# ### Research Question
# How do housing prices affect the likelihood of getting a police citation?
# ### Hypothesis
# The housing prices in a given area would be a key determinant of the number of police citations in that area. We predict that higher housing prices would lead to lower numbers of police citations, and vice versa.
# ### Dataset(s)
# - Dataset Name: Zillow Housing Data
# - Link to the dataset: https://www.zillow.com/research/data/
# - Number of observations: 1878
#
# Housing pricing data by cities. Contains data from 2008 to current date with the median housing data for each year. Other datasets are available with pricing and different geographical locations.
#
#
# - Dataset Name: Non-Traffic Citations, City of Pittsburg
# - Link to the dataset: https://catalog.data.gov/dataset/non-traffic-citations
# - Number of observations: 4358
#
# Non-traffic citations (NTCs, also known as "summary offenses") document low-level criminal offenses where a law enforcement officer or other authorized official issued a citation in lieu of arrest. Dataset contains gender,age,race, location, and offense.
#
# - Dataset Name: Police - Motor Vehicle Citations
# - Link to the dataset: https://catalog.data.gov/dataset/police-motor-vehicle-citations
# - Number of observations: 65535
#
# Dataset contains the reason for citation, street name, and date. It tells if the citation was warning or noncriminal. Also tells if the person was arrested or not.
#
# - Dataset Name: Crime Data from 2010 to present
# - Link to dataset: https://catalog.data.gov/dataset/crime-data-from-2010-to-present
# - Number of Observations: 1048576
#
# Dataset contains transcriptions of original incident reports of crimes in Los Angeles. If address is absent then marked as (0.0).
#
# - Dataset Name: Police - Motor Vehicle Citations
# - Link to dataset: https://catalog.data.gov/dataset/police-motor-vehicle-citations/resource/bbde811c-a087-4f79-a9cd-dd0a6c98aa95
# - Number of Observations: 90216
#
# Dataset contains information on serial numbers, citation numbers, date issued, street number, street names, charge codes, charge descriptions, vehicle mph, mph zone, warnings, non-criminal offenses, compliance, arrests, voids pertaining to citations filed since 2010 for the city of Somerville, Massachusetts.
#
#
#
# Datasets will be cleaned to have matching location data and combined by matching the location with prices and number of total citations.
# ### Background and Prior Work
# The group’s intent to investigate cities with varying housing market prices and relate this variable to police citation density provides the opportunity to discover the economic and social implications of how consumers populate certain regions. Throughout this project, we are attempting to work with multiple cities with differing housing economies to establish a correlation between a city’s housing market and its police citation density. House buyer sentiment in a city derives from many factors that influence the price buyers are willing to pay for a house in the region. We believe crime to be an overlooked yet imperative indicator of buyer sentiment. The assumption that we took with regard to housing prices within cities is that house values increase with the amenities and features the house and neighbourhood provide. A factor like crime poses a cost to the community and acts as an indirect drag on the region’s housing economy. Severity of crime is also a point of interest we intend to examine, in hopes of identifying the categories of crime that are most influential to housing market prices.
#
# Upon further research and understanding of our context, it was clear that the project needed a price model that distinguishes between features and provides quantitative data on the effect a house’s price has on the region’s crime statistics. The first reference utilized an index of public safety alongside the primary features, and cautioned firmly against using a raw count of crimes, as that measure places the same value on all types of crimes. As stated earlier, it is imperative for us to assign weights to each category so that the model can provide reliable output. This first reference suggests that areas with very high and very low crime (the opposite extremes of our data set) affect the price of local houses dramatically. The second reference is a data science project vital to the understanding of our investigation, and provides the foundation for us to utilize and test methodologies. This reference serves as a model for our project, giving us the opportunity to discover new insights our features can point toward during each step of the process.
# References:
# 1. Measuring the impact of crime on house prices
# http://www.tandfonline.com/doi/pdf/10.1080/00036840110021735?needAccess=true
# 2. Predict the Housing Prices of Ames, Iowa
# https://nycdatascience.com/blog/student-works/team-machine-learning-project/
# ### Proposed Methods
# The housing data from Zillow is well defined; however, we would want to check all pricing values for possible numerical errors, such as prices of zero or any "inf" or "na" values. We would also need to check the location data to ensure there are no blank values. The location data in the police citation datasets are not formatted consistently and will need to be replaced with values comparable to the Zillow housing datasets.
#
# The Zillow housing data comes with a lot of extraneous information that would need to be trimmed out. We would first want to determine which location set of data would be most useful for matching with the police citation data. Since both datasets do not contain the same type of location identifiers, we would need to transform the location identifiers to a universal set of data that could be compared between different data sets. Some preprocessing may also be needed for the locations to separate them into area units that fit to the number of citations per area.
#
# For the police data we would need to get the number of counts for each citation and the aggregate sum for each unit area determined by housing district. The police citations may further be categorized into severity based on the citation to be compared to the average price of the houses in that area. We would then determine the correlation between housing prices and the number of citations in each district. Ideally, this would help us determine if there is a relationship between housing prices and citations given in each area.
#
# We would report the correlation between the number of citations in a given area and the housing prices of that area. We will attempt to build a potential model based on our hypothesis that higher housing prices lead to fewer police citations, and demonstrate how closely our model can predict potential citations in an area. For visualization, we will generate a heatmap of the number of police citations against housing prices in a given area.
#
# Packages we plan to use:
# - matplotlib
# - numpy
# - pandas
# ### Ethics and Privacy
# As a group, we heavily value research ethics. Our goal is not only to finish our research objectives, but also to pay close attention to potential ethical issues with our data analysis. We have collected the majority of our data from data.gov and Zillow Research; both websites declare in their terms and conditions that all of their data are considered public record, which is open for us to use without requiring any further permission. However, we have identified a potential privacy issue with our data analysis. Much of our citation-related data contains a “case number”, which we believe poses a privacy risk. Therefore, we decided to remove the “case numbers” from our data, since they expose us to risk and contribute nothing to our analysis. In addition, some of these data also provide the time and location of a citation, which we also consider information that others could use to identify the individuals involved. To handle this potential issue, we are going to remove the time column; we will keep the location data because it is essential to our research.
#
# Some potential harm from this project involves biased assumptions about social status and race. To address that matter, we will seek out multicultural cities.
#
# Potential biases may arise because we are using datasets derived from specific cities that may not have adequate police personnel, which in and of itself results in fewer citations being issued. There may be affluent cities with a great number of police officers giving out more tickets than a less-affluent city with fewer police officers giving out fewer tickets. In cases like these, we may encounter results that don’t necessarily reflect what we are expecting to see. Also, drivers in affluent cities tend to possess expensive high-performance vehicles that may inevitably make them targets for police officers, leading to more tickets being issued in such cities. Equitable analysis may prove very difficult in situations like these.
# ### Discussion
# Our hypotheses are that higher housing prices lead to lower numbers of police citations and lower housing prices lead to higher numbers of police citations. The key variables of our analysis are the housing prices in a given area and the number of police citations in that area.
#
# We derived our methods from previous data science research. For example, our approach is similar to a project that studied the relationship between the number of street lights and police citations. One problem we might encounter is outliers from people who are not from the designated area.
#
# We predict that housing prices will help determine the number of police citations in a given area. If our hypotheses are wrong, there may be more than one factor that determines the result, or housing prices may have no correlation with the number of police citations. If our results show that the correlation is not significant, we would need to look for new factors or change some methods to see whether the results differ from those of our previous methods.
#
# Potential pitfalls and confounds that we may encounter are misinterpretations of the data, such as the severity of the citations (major versus minor). This raises the question of quantity versus quality: there might be a large number of minor citations, such as parking violations, versus a small number of major citations, such as reckless driving. This can have a crucial influence on the relationship between police citations and house values.
| ProjectProposal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="jMKwOgSYfHBU" colab_type="text"
# # CS145: Project 3 | ML Warmup (10 points)
# ---
#
# ### Notes (read carefully!):
#
# * Be sure you read the instructions on each cell and understand what it is doing before running it.
# * Don't forget that you can always re-download the starter notebook from the course website if you need to.
# * You may create new cells to use for testing, debugging, exploring, etc., and this is in fact encouraged!
# **Just make sure that the final answer for each question is _in its own cell_ and _clearly indicated_**.
# * Remember Colab will not warn you about how many bytes your SQL query will consume. **Be sure to check on the BigQuery UI first before running queries here!**
# * See the assignment handout for submission instructions.
# * Have fun!
# + [markdown] id="NesZcATkfnq0" colab_type="text"
# ## Setting Up BigQuery and Dependencies
#
# Run the two cells below (shift + enter) to authenticate your project and import the libraries you'll need.
#
# Note that you need to fill in the `project_id` variable with the Google Cloud project id you are using for this course. You can see your project ID by going to https://console.cloud.google.com/home/dashboard and inspecting the selected option from the drop-down menu on the top left, beside the "Google Cloud Platform" logo.
#
# + id="lyFqjDKmfSkU" colab_type="code" colab={}
# Run this cell to authenticate yourself to BigQuery
from google.colab import auth
auth.authenticate_user()
project_id = "" # INSERT YOUR PROJECT ID HERE
# + id="8L-LT4rEko06" colab_type="code" colab={}
# Initialize BigQuery client
from google.cloud import bigquery
client = bigquery.Client(project=project_id) # pass in your projectid
# + [markdown] id="pInaYkgjfwZt" colab_type="text"
# # Overview
#
# This first part of Project 3 is meant to serve as a brief tutorial for Machine Learning with BigQuery, since you will be using BigQuery Prediction in the open-ended part of the assignment.
#
# **Don't worry if you've never studied Machine Learning before.** This notebook will guide you through everything you need to know to be successful in the open-ended part of Project 3.
# + [markdown] id="Z9nrJtByq6Q-" colab_type="text"
# In the next two sections, we'll give you a bird's eye intro to machine learning and a primer on how BigQuery makes machine learning easy. In the third and last section, you'll walk through an example of how to train and use a machine learning model in BigQuery.
# + [markdown] id="_xCUsFh8iyho" colab_type="text"
# # 1 - Machine Learning in a Nutshell
#
# Basic Machine Learning tasks can be framed in terms of **inputs** $X$, **target values** $Y$ (sometimes called labels), **training data** (pairs of observed data points $(x_i, y_i)$), and a function $h:X \rightarrow Y$ historically called the **hypothesis function** that maps inputs to target values.
#
# Given these primitives, we can think of the canonical Machine Learning task as follows:
#
# > Given that I've seen a ton of training data $(x_1, y_1), ..., (x_m, y_m)$, how can I come up with a good function $h$ so that on an *unseen* input value $x_{m+1}$, the value of $h(x_{m+1})$ is a good "prediction" $y_{m+1}$?
#
#
# + [markdown] id="faHlhArpv1Ee" colab_type="text"
# #### Example 1: Three Point Shots
# Say elements of $X$ are the number of three point shots scored by a team in a basketball game, and elements of $Y$ are $0$ or $1$ indicating whether that team lost or won the game. Our training data could look like this:
#
# > $T = \{(2, 0), (3, 0), (6, 0), (5, 0), (10, 0), (11, 0), (5, 1), (15, 1), (18, 0), (17, 1), (16, 1), (16, 1)\}$
#
#
# In this case, we'd *train* a machine learning model on $T$ to effectively generate an $h$ that would give us reasonable values of $y$ for unseen values of $x$, i.e., predict whether a game was won or not based on how many three pointers were scored on that game. For example, we might expect that $h(1) \approx 0$, and
# $h(20) \approx 1$. Note that in this case, we'd like $h$ to output not only a 0 or 1, but a *probability* for how likely a game is to be won, hence the approximate equalities.
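As a sketch of how such an $h$ could be fit to $T$, here is a minimal logistic regression trained by plain gradient descent in NumPy. This is just an illustration of the idea, not how BigQuery trains its models internally.

```python
import numpy as np

# Training data from the example: (three-pointers made, won?).
T = [(2, 0), (3, 0), (6, 0), (5, 0), (10, 0), (11, 0),
     (5, 1), (15, 1), (18, 0), (17, 1), (16, 1), (16, 1)]
x = np.array([t[0] for t in T], dtype=float)
y = np.array([t[1] for t in T], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit h(x) = sigmoid(w*x + b) by gradient descent on the log loss.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(50000):
    p = sigmoid(w * x + b)
    grad = p - y                      # derivative of log loss w.r.t. the logits
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

def h(v):
    return sigmoid(w * v + b)

# Probability of winning: should come out low at 1 three-pointer, high at 20.
print(round(h(1), 3), round(h(20), 3))
```

The learned slope is positive because wins cluster at higher three-point counts, so $h$ outputs a probability that increases with $x$, matching the approximate equalities above.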
#
# + [markdown] id="_MwXnNwUv93x" colab_type="text"
# #### Example 2: GitHub Revisited
#
# For a richer example, let's say we are trying to predict how many *forks* (i.e. copies of the repo by GitHub users besides the original owner) a Github repo will have at some point in time -- here $Y$ will represent the number of forks of a repo. As you might have seen in Project 2, such questions are usually not easily answerable with only one or two statistics of a Github repo. We usually want to think about several *features* together. What is the watch and star count of the repo? How many contributors does the repo have? How many files does the repo have? What is the age of the repo in years?
#
# The values in $X$ *need not be single real numbers*, they can be lists of real numbers as well, and we can use feature engineering to come up with these feature lists.
#
# *Feature engineering* is the mostly informal process by which we use domain-knowledge to extract numerical features from some entity (e.g. a GitHub repo in this example), in order to provide them as training data to a machine learning model.
#
# Here is how a simple feature engineering process for predicting the fork count of a GitHub repo may pan out:
#
# ##### Simple feature engineering process
#
# 1. Using domain knowledge, you hypothesize that watch count, star count, number of commits, and age of the repo will probably be good indicators of its fork count. You also toss in the average commit length of the repo because you know it has some non-trivial relationship with watch count based on anecdotal evidence from a certain project in your friend's databases course.
# 2. You write some code in your favorite programming language (or SQL if using BigQuery!) to extract these 5 features from your set of 1,000 GitHub repos, creating 1,000 tuples that look like this:
#
# > $((45, 100, 200, 2.4, 127.65), 30), ((65, 302, 100, 1.2, 164.1), 132) \dots $
#
# 3. You train your model on this data and evaluate it on another 100 repos **which your model has not yet seen**. If the quality of your results (the accuracy of your predictions) is not good, you may attempt to improve the quality of your features. If performance is good, you are done and have a decent model!
#
# 4. If you think your current features are not good enough, you can go back to step 1 and brainstorm more.
# + [markdown] id="E6AGrUmvRMab" colab_type="text"
# Once you have honed in on a good set of features which you have trained with and evaluated, you can now predict using your model.
# + [markdown] id="cpBb4DoSE3zG" colab_type="text"
# ### Evaluating your Models
#
# In Example 2, we said that the feature engineering process involves a key step in which you *evaluate* how good your model (aka hypothesis function $h$) is doing. Usually this consists of:
#
# 1. Running your hypothesis function $h$ on a set of inputs $X$ which you have not already seen to get outputs $h(x_{m+1}), \dots, h(x_{m+k})$
# 2. Comparing how close your predicted values are to the ground-truth labels $y_{m+1}, \dots, y_{m+k}$ using a reasonable statistical metric.
#
# The "reasonable statistical metric" varies depending on the nature of your labels. If we are detecting whether an email is spam or not, you can use metrics like [precision, recall, and F1 score](https://en.wikipedia.org/wiki/F1_score). If you are trying to predict the fork count of a repo, you can use something like [Root-Mean Square error](https://en.wikipedia.org/wiki/Root-mean-square_deviation). If you are trying to detect whether an object belongs to one of three classes, you may try seeing how accurate your predictions are bucketed by each ground-truth label.
#
# There are many more ways to evaluate models, but deeper discussion is beyond the scope of this assignment and course.
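For intuition, the metrics mentioned above are easy to compute by hand. A small sketch on toy labels (all numbers here are made up):

```python
import math

# Toy predictions vs. ground truth for a binary task (1 = spam).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# Root-mean-square error for a regression task (e.g. fork counts).
actual = [30, 132, 41]
predicted = [25, 140, 39]
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print(precision, recall, f1, rmse)
```

Classification metrics summarize the confusion between predicted and true labels, while RMSE measures the typical magnitude of a regression model's numeric error.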
# + [markdown] id="hmPqespz4qP2" colab_type="text"
# ## Types of Models
#
# BigQuery supports three types of models: *linear regression, binary logistic regression, and multiclass logistic regression*.
#
#
# * A *linear regression model* predicts a number, i.e., $Y = \mathbb{R}$.
#
#
# * A *binary logistic regression model* makes a binary prediction by giving the confidence of an event, e.g., is an email spam or not?
#
# * A *multiclass logistic regression model* is a generalization of the binary logistic regression model. E.g., what is the sentiment of a sentence, from 1 (negative) to 5 (positive).
#
# Example 1 above is a binary logistic regression model, and Example 2 is a linear regression model. You can use any of these three models in your project.
# + [markdown] id="5NXULYvVNAB9" colab_type="text"
# If you have not already studied machine learning and are interested in digging into more details, reading section 1 of the CS229 notes [here](http://cs229.stanford.edu/notes/cs229-notes1.pdf) will cover the basic topics discussed here and expand on them. However, the information in this notebook will be sufficient to complete Project 3.
# + [markdown] id="5zbORTWqrPS1" colab_type="text"
# # 2 - BigQuery and ML
#
# In the previous section, we did not cover how the hypothesis function $h$ is actually generated from training data. Luckily for us, BigQuery abstracts the details of this process away from us and instead exposes a nice SQL interface for ML which we already know how to work with!
#
# Machine Learning in BigQuery consists of three steps: creating a model, evaluating the model, and using the model to make predictions.
# + [markdown] id="ObgL_ttqF7LW" colab_type="text"
# #### Creating a Model
#
# This step consists of telling BigQuery that you want to create a model. You tell BigQuery what type of model you want to create, and you write SQL to gather the features and ground-truth values for the model.
#
# The create model statement could look like this:
# + id="exVy7D1bJB6m" colab_type="code" colab={}
# Don't run me! My tables don't exist. I'm just here as an example.
# %%bigquery --project $project_id
CREATE MODEL `my_awesome_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
IF(my_awesome_database.ground_truth IS NULL, 0, 1) AS label,
IFNULL(my_awesome_database.feature1, "") AS feature1,
my_awesome_database.feature2 AS feature2,
my_awesome_database.feature3 AS feature3,
my_awesome_database.feature2 * my_awesome_database.feature3 AS feature4
FROM
`my_awesome_database`
WHERE
my_awesome_database.date BETWEEN 2010 AND 2015
# + [markdown] id="oTEg5g9JJiXR" colab_type="text"
# One thing to note: `CREATE MODEL` will fail if the model with that name has already been created. If you're retraining your model, for example, you'll want to use `CREATE OR REPLACE MODEL` as the first line instead.
#
# See this page for documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create
# + [markdown] id="OOCRI0qoF7xH" colab_type="text"
# #### Evaluating the Model
#
# Once you've created your model, BigQuery has already trained it for you -- you already have a $h$ at your disposal ready to evaluate! We evaluate $h$ by asking BigQuery to predict the $Y$ values of **new data unseen by the model** and compare them to ground-truth values.
#
# To evaluate a model you'd do something like this:
# + id="Ae4ETQDAKJzA" colab_type="code" colab={}
# Don't run me! My tables don't exist. I'm just here as an example.
# %%bigquery --project $project_id
SELECT
*
FROM
ML.EVALUATE(MODEL `my_awesome_model`, (
SELECT
IF(my_awesome_database.ground_truth IS NULL, 0, 1) AS label,
IFNULL(my_awesome_database.feature1, "") AS feature1,
my_awesome_database.feature2 AS feature2,
my_awesome_database.feature3 AS feature3,
my_awesome_database.feature2 * my_awesome_database.feature3 AS feature4
FROM
`my_awesome_database`
WHERE
my_awesome_database.date BETWEEN 2016 AND 2017))
# + [markdown] id="RrMgxk9KK5Pf" colab_type="text"
# Note that we are evaluating on data between 2016 and 2017, even though we trained on data between
# 2010 and 2015. **This is important!!** If we did not do this, we would be "cheating" since the model has already
# seen a training value corresponding to the one you are trying to evaluate. If the model was a simple lookup
# table, it would get 100% accuracy on everything it's already seen trivially. Also, we'll generally use a much larger amount of data for training than for evaluating or testing.
# + [markdown] id="8SKiLOwLLLuo" colab_type="text"
# See this page for documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-evaluate. Note the `ML.EVALUATE` function is one of three functions you can use to evaluate your model, depending on your task.
# + [markdown] id="nkdTSLs3F78X" colab_type="text"
# #### Exercising the Model
#
# If your model achieves good evaluation metrics (see [this](https://cloud.google.com/bigquery/docs/bigqueryml-analyst-start#step_four_evaluate_your_model) section of the BigQuery ML tutorial for data analysts for context on what 'good' evaluation metrics are), you can now utilize your model to predict values.
#
# Assuming you have a trained model, you can predict values like this:
# + id="zRYSk65gLnhk" colab_type="code" colab={}
# Don't run me! My tables don't exist. I'm just here as an example.
# %%bigquery --project $project_id
SELECT
my_awesome_database.key,
predicted_label
FROM
ML.PREDICT(MODEL `my_awesome_model`, (
SELECT
IFNULL(my_awesome_database.feature1, "") AS feature1,
my_awesome_database.feature2 AS feature2,
my_awesome_database.feature3 AS feature3,
my_awesome_database.feature2 * my_awesome_database.feature3 AS feature4
FROM
`my_awesome_database`
WHERE
my_awesome_database.date BETWEEN 2018.01 AND 2018.02))
# + [markdown] id="L-H74FvpMSnD" colab_type="text"
# See this page for documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-predict.
# + [markdown] id="2Fg_3vs9F8du" colab_type="text"
# For more details and an end-to-end example in BigQuery, read the following article: https://cloud.google.com/bigquery/docs/bigqueryml-analyst-start.
# + [markdown] id="lg4K0aisqm3x" colab_type="text"
# # 3 - Now it's Your Turn!
#
# Let's now dive into an exercise using BigQuery and ML! This is a fairly simple warm-up problem to help you gain hands-on experience working with BQML. You'll get to dive into much more depth with your open-ended project! You'll be going through the three steps described in the previous section on your own.
#
# For this problem, we're going to be working with the [Austin bikeshare dataset](https://bigquery.cloud.google.com/dataset/bigquery-public-data:austin_bikeshare) available on BigQuery. Take a moment to familiarize yourself with the data we have at hand.
#
# Notice we have various pieces of information about each trip - for example, the stations where the biker started and ended, with the corresponding latitude/longitude, the date of the ride, the subscriber type, and the duration of the trip in minutes.
#
# + [markdown] id="391ISLXuKZtB" colab_type="text"
# Our goal in this exercise will be the following:
#
# > ** Given attributes about a ride, can we predict whether a bike ride will be a "quick" ride? Let's define a "quick" ride as a ride that takes less than 15 minutes. **
#
# Note this is a *binary logistic regression task*, or classification task, where, given attributes about a ride, we predict one of two labels: 1 = quick (< 15 minutes); 0 = not quick (>= 15 minutes).
#
# Once we've trained our model, we can then use it to help predict on unlabeled data. In particular, we can use it to help fill in missing data - some bike rides have a different start/end station, but have a duration of 0 minutes (likely missing data). Much like in Dr. Lakshmanan's taxi fare prediction [example](https://docs.google.com/presentation/d/10jDyG1TgwB30aNYUdhd9oe7SIQBKkmLMF0Gbq4OU5F0/edit#slide=id.g44a6f5d97e_1_1799) during his invited talk, we'll use our model to help to fill in missing data.
#
# Let's dive in!
#
# + [markdown] id="5Bg24pgTQlpN" colab_type="text"
# ### Step 1: Look at the data (1 point)
#
# In any ML task, it's important to first explore the data. Investigating correlations between attributes as you did in Project 2 can help you determine which attributes may be useful as training features, and will be important for your final project. And, looking into the distribution of labels you want to predict can give you a better understanding of the distribution of your data (such as whether your dataset is *balanced*). For this exercise, we'll dig into the latter.
# + [markdown] id="BPEQw4IcdOgY" colab_type="text"
# #### 1.a
# What percentage of rides are "quick"? Recall that we have: quick ride: < 15 minutes; not quick: >= 15 minutes. Filter out rides with a duration of 0 minutes.
#
# Hint: [COUNTIF](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#countif) may be helpful.
# + id="JqHC8-NAVm5v" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="J4wXw4xUe8XP" colab_type="text"
# #### 1.b
#
# What percentage of rides have a different start/end station, but have a value of 0 for their duration? How many rides is this? Write a query that returns the count in one column and the percentage in another. The denominator for the percentage should be all rides, regardless of duration.
#
# Hint: [COUNTIF](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#countif) may be helpful.
# + id="slr4JOFTVt0N" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="lLY2CfJ7RWcT" colab_type="text"
# ### Step 2: Create a dataset to store the model
#
# When you create and train a model, BigQuery will store the model in a dataset. Before training, you'll first need to create a new empty dataset. Note that you only need to do this step once. If you later update your model, it can replace the existing one.
#
# You can also do this step in the UI (see 'create your dataset': https://cloud.google.com/bigquery/docs/bigqueryml-analyst-start ).
#
# Let's call our dataset `bqml_bikeshare`. After either running the cell below, or creating the dataset with the BigQuery UI, you should see the dataset name appear in the left column of the UI.
# + id="CMlA4qtYSnH8" colab_type="code" colab={}
# Run this cell to create a dataset to store your model, or create in the UI
model_dataset_name = 'bqml_bikeshare'
dataset = bigquery.Dataset(client.dataset(model_dataset_name))
dataset.location = 'US'
client.create_dataset(dataset)
# + [markdown] id="SgyJkl_LQvj9" colab_type="text"
# ### Step 3: Extract training data from BigQuery (2 points)
#
# Write a SQL query that extracts training data from the dataset. These are features that you want to feed into your model. For this part, you do not need to do feature engineering - you can simply pull raw features from the tables that you think may be helpful.
#
# Your query should return a column called `label` with the target label value (our "Y" value), and additional columns for some features you want to use (our "X" values). Note:
# - recall: `label` value is 1 for quick rides (< 15 minutes), and 0 otherwise (>= 15 minutes)
# - duration_minutes cannot be a training feature - we're trying to predict (a boolean version of) this
# - filter out any rides with a duration of 0 minutes
#
# Display the first 10 rows of the table returned by your query.
# + id="E2Fd92Ch4fpz" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="qlXdPkgiVnfX" colab_type="text"
# ### Step 4: Train a simple model (1 point)
# + [markdown] id="hxcxZ7xOVqtz" colab_type="text"
# **First, an important note:** it's important to have separate datasets to train, evaluate, and finally test your model. We'll want 3 different subsets of data:
#
# 1. **Training set**: used to train model.
# - we'll train on rides before 2017 (start_time < '2017-01-01'), with duration time > 0
# 2. **Evaluation set**: used to evaluate model after training. This should not be data used during training. It can be used multiple times to evaluate and compare the performance of different models.
# - we'll evaluate on the next 5 months (start_time between '2017-01-01' and '2017-06-01'), with duration time > 0
# 3. **Test set**: *should only be used once at the end of your entire training process* to say how your model does on real data. This should not be the same as either training or eval data. Using the test set to tune your model is bad, since it means you are starting to overfit your model (i.e. making your model artificially good on a certain dataset at the possible expense of it doing poorly on new data) to that test set as well.
# - we'll test on the 5 months after that (start_time between '2017-06-01' and '2017-11-01'), with duration time > 0
#
# Note that for all these datasets, we'll filter out rides with duration time = 0. For the purposes of this problem, we'll consider this to be incomplete data.
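The three splits described above can be expressed as date-range filters. A pandas sketch on a toy table (the real filtering happens in SQL; `start_time` and `duration_minutes` mirror the bikeshare schema):

```python
import pandas as pd

# Toy rides table mirroring the split described above.
rides = pd.DataFrame({
    "start_time": pd.to_datetime([
        "2016-03-01", "2016-11-15", "2017-02-10",
        "2017-05-30", "2017-07-04", "2017-10-31",
    ]),
    "duration_minutes": [12, 0, 25, 8, 30, 14],
})

# Filter out incomplete rides (duration = 0) first.
valid = rides[rides["duration_minutes"] > 0]

train = valid[valid["start_time"] < "2017-01-01"]
evaluation = valid[(valid["start_time"] >= "2017-01-01")
                   & (valid["start_time"] < "2017-06-01")]
test = valid[(valid["start_time"] >= "2017-06-01")
             & (valid["start_time"] < "2017-11-01")]

print(len(train), len(evaluation), len(test))
```

Because the ranges are disjoint, no ride can leak from training into evaluation or testing.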
# + [markdown] id="TdHOdKYdV-n2" colab_type="text"
# Now, let's go ahead and train a simple model. Create a model, using the query you wrote above to tell the model what features and ground-truth labels to use. Remember that we're training only on rides before 2017 (start_time < '2017-01-01'), and with a duration time > 0.
# + [markdown] id="rRUB3YafONO4" colab_type="text"
# **Note**: it may take a few minutes to run the query. Also, you may get the error `Table has no schema: call 'client.get_table()'`. This is because notebook cells try to print out the table returned from a SQL query, but the query to create/train a model doesn't return any table at all, so the notebook complains. The model is still trained successfully though. You may ignore this, and can click the (X) in the top left of the output to clear the error message.
# + id="uWz9D7HujGVE" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
CREATE OR REPLACE MODEL `bqml_bikeshare.bikeshare_model` -- we'll call our model 'bikeshare_model'
OPTIONS -- TODO: fill in options
-- TODO: write SQL to return the features and ground-truth values for the model
# + [markdown] id="rvUv5cq0CD-6" colab_type="text"
# You can get training statistics on your model by running the following cell:
# + id="wsXSNKUlCDOD" colab_type="code" colab={}
# %%bigquery --project $project_id
# Run cell to view training stats
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_bikeshare.bikeshare_model`)
# + [markdown] id="cg0fe_zRz3a7" colab_type="text"
# ### Step 5: Evaluate (1 point)
#
# Evaluate your model on unseen evaluation data.
#
# Recall for our evaluation set, we're using the 5 months following what we trained on (use: start_time between '2017-01-01' and '2017-06-01'), with duration time > 0.
#
# + id="FHAkE5j_YgB1" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="s7hde4Ujz-RR" colab_type="text"
# ### Step 6: Improving our model (3 points)
#
# In general, we can't just throw raw data into the model and expect it to work: in practice, you'll iterate on improving your features and re-training/re-evaluating your model. Let's try the following: add engineered features -> re-train model -> re-evaluate model.
#
# #### 6.a: Feature engineering *(1 point)*
#
# Let's add an engineered feature! You suspect that there is a relationship between the distance between the start and end stations, and whether it will be a "quick" ride. **Let's add the distance between the start station and end station as a feature**.
#
# Extend your query from step 3 to also have a feature for the euclidean distance between the start and end station.
#
# You may find the following useful:
# - [Example](https://docs.google.com/presentation/d/10jDyG1TgwB30aNYUdhd9oe7SIQBKkmLMF0Gbq4OU5F0/edit#slide=id.g44a6f5d97e_1_1894) from Dr. Lakshmanan's invited talk
# - `ST_GeogPoint(longitude, latitude)` - creates geography point from longitude, latitude values
# - `ST_DISTANCE(start_pt, end_pt)` - computes distance between 2 geographic points (more [here](https://postgis.net/docs/PostGIS_Special_Functions_Index.html#PostGIS_GeographyFunctions))
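For intuition about what `ST_DISTANCE` computes on geography points (a distance in meters along the Earth's surface), here is a rough Python equivalent using the haversine formula. This is an approximation assuming a spherical Earth, so BigQuery's own result may differ slightly:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Approximate great-circle distance in meters between two lon/lat points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Two points one degree of latitude apart: roughly 111 km.
d = haversine_m(-97.74, 30.27, -97.74, 31.27)
print(round(d / 1000, 1), "km")
```

For stations within one city the distances are small enough that this behaves much like a straight-line distance, which is why it works well as a feature here.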
#
# You are welcome, but not required, to experiment with other engineered features as well.
#
# Display the first 10 rows of the table returned by your query.
# + id="JbBZJCa18czi" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="ABZT9fAT5Fy3" colab_type="text"
# #### 6.b: Retrain our model *(1 point)*
#
# Let's train our model again (using the same training set as before) with the added features. You can replace the existing one, or create a new one with a different name.
#
#
# + [markdown] id="kYZNHNqz4p0N" colab_type="text"
# **Note**: it may take a few minutes to run the query. Also, you may again get the error `Table has no schema: call 'client.get_table()'`. The model is still trained successfully though. You may ignore this, and can click the (X) in the top left of the output to clear the error message.
# + id="kgUXMHqYEHeT" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
CREATE OR REPLACE MODEL `bqml_bikeshare.bikeshare_model_v2` -- we'll call our model 'bikeshare_model_v2'
-- TODO: complete query
# + [markdown] id="RkURhcloIBoQ" colab_type="text"
# Let's get our training stats again:
# + id="CQzLy39KIFEc" colab_type="code" colab={}
# %%bigquery --project $project_id
# Run cell to view training stats
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_bikeshare.bikeshare_model_v2`)
# + [markdown] id="1G_Ivatar12p" colab_type="text"
# You should hopefully find that the loss is a bit lower (better) than before, both on the training data and on BigQuery's automatic evaluation set (it withholds some of the data you passed in as training data for reporting eval stats).
# + [markdown] id="uaf2CD9I5dvN" colab_type="text"
# #### 6.c: Re-evaluate model *(1 point)*
#
# Now let's evaluate our re-trained model on our evaluation set. You can use a similar evaluation query from step 5, but with your updated features (note: you may need to change the model name in the query if your new model has a different name).
# + id="kOxKtiwKbp3Y" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="bz512wScoVOs" colab_type="text"
# ### Step 7: Evaluate final model on test set (1 point)
#
# Once you're done training your model (in practice, you'll likely iterate on updating your model, retraining on the training set, and re-evaluating on the evaluation set several times), you'll evaluate your final model on a test set. The test set consists of examples that have not been used at all before, neither during training nor during evaluation.
#
# Again, the test set should **only be used once at the end of your entire training process**, to see how your model does on real data. You should only run the cell below once you are finished modifying your features.
#
# Recall that for our test set, we're using the 5 months after our evaluation set (rides with start_time between '2017-06-01' and '2017-11-01', with duration time > 0).
#
# Evaluate your model once on this test set. The query is almost identical to the previous one, except now you use the test set.
# + id="Ubabj72ptCwS" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
# + [markdown] id="fvBaQkC25iL1" colab_type="text"
# ### Step 8: Use trained model to predict (1 point)
#
# Once you've trained your model, you can use it to make predictions! Let's try to use it to fill in some of the missing data.
#
# Now, let's go ahead and predict on rides that had a duration time of 0 minutes, but had different start/end stations. Does our model think these were quick rides?
#
# Notice that these samples were never used during training/evaluation/testing, since we filtered out rides with a duration of 0.
#
# Display the features used for prediction and the predicted label for 10 examples. The predicted label will be called `predicted_label`.
# + id="kY3B-bGdSu-i" colab_type="code" colab={}
# %%bigquery --project $project_id
# YOUR QUERY HERE
| cs145/project/project3_mlwarmup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# [View in Colaboratory](https://colab.research.google.com/github/ZER-0-NE/ML_problems/blob/master/keras_VGGFace_3FC.ipynb)
# + colab={"base_uri": "https://localhost:8080/", "height": 323} colab_type="code" id="tChQ80D9vX3F" outputId="c9283d74-06f5-47df-bec7-d34c3b65d1d3"
from google.colab import auth
auth.authenticate_user()
# !pip install PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + colab={"base_uri": "https://localhost:8080/", "height": 35989} colab_type="code" id="IusQjeBfvhF0" outputId="0dc07e57-5b98-4ac4-cf89-7a58c4484cc7"
fileId = drive.CreateFile({'id': '1OhPBMbSOG3ejP26-peRmDPYX7WfF2ixN'}) #DRIVE_FILE_ID is file id example: 1iytA1n2z4go3uVCwE_vIKouTKyIDjEq
print(fileId['title']) # folder_data.zip
fileId.GetContentFile('dataset_cfps.zip') # Save Drive file as a local file
# !unzip dataset_cfps.zip -d ./
# + colab={} colab_type="code" id="WWaOJqGyv8ah"
from keras import models
from keras import layers
from keras import optimizers
from keras.applications import VGG16
from keras.applications import InceptionResNetV2
import sys
import os
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, Activation, Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras import callbacks
from keras.models import load_model
import matplotlib.pyplot as plt
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras_vggface.vggface import VGGFace
from keras.engine import Model
from keras.models import load_model
# + colab={"base_uri": "https://localhost:8080/", "height": 547} colab_type="code" id="nEMD4L33wBXN" outputId="b54e43d1-32e3-4769-dc35-d03034af710d"
# !pip install keras_vggface
# + colab={"base_uri": "https://localhost:8080/", "height": 2614} colab_type="code" id="wlBZbkBvwG8V" outputId="1d423e12-5364-4439-dec3-f264509452e1"
train_data_path = 'dataset_cfps/train'
validation_data_path = 'dataset_cfps/validation'
#Parametres
img_width, img_height = 224, 224
#Load the VGG model
#vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))
#vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
# Three Dense -> BatchNormalization -> Dropout blocks on top of the base
xx = Dense(256, activation = 'relu')(x)
x1 = BatchNormalization()(xx)
x2 = Dropout(0.25)(x1)
y = Dense(256, activation = 'relu')(x2)
yy = BatchNormalization()(y)
y1 = Dropout(0.25)(yy)
z = Dense(256, activation = 'relu')(y1)
zz = BatchNormalization()(z)
z1 = Dropout(0.25)(zz)
x3 = Dense(12, activation='sigmoid', name='classifier')(z1)
custom_vgg_model = Model(vggface.input, x3)
# Create the model
model = models.Sequential()
# Add the convolutional base model
model.add(custom_vgg_model)
# Add new layers
#model.add(layers.Flatten())
# model.add(layers.Dense(1024, activation='relu'))
# model.add(BatchNormalization())
#model.add(layers.Dropout(0.5))
# model.add(layers.Dense(12, activation='sigmoid'))
# Show a summary of the model. Check the number of trainable parameters
model.summary()
#model = load_model('facenet_resnet_lr3_SGD_sameas1.h5')
def mcor(y_true, y_pred):
#matthews_correlation
y_pred_pos = K.round(K.clip(y_pred, 0, 1))
y_pred_neg = 1 - y_pred_pos
y_pos = K.round(K.clip(y_true, 0, 1))
y_neg = 1 - y_pos
tp = K.sum(y_pos * y_pred_pos)
tn = K.sum(y_neg * y_pred_neg)
fp = K.sum(y_neg * y_pred_pos)
fn = K.sum(y_pos * y_pred_neg)
numerator = (tp * tn - fp * fn)
denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
return numerator / (denominator + K.epsilon())
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def f1(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall))
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1./255)
# Change the batchsize according to your system RAM
train_batchsize = 32
val_batchsize = 32
train_generator = train_datagen.flow_from_directory(
train_data_path,
target_size=(img_width, img_height),
batch_size=train_batchsize,
class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
validation_data_path,
target_size=(img_width, img_height),
batch_size=val_batchsize,
class_mode='categorical',
shuffle=True)
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(lr=1e-3),
metrics=['acc'])
# Train the model
history = model.fit_generator(
train_generator,
steps_per_epoch=train_generator.samples/train_generator.batch_size ,
epochs=50,
validation_data=validation_generator,
validation_steps=validation_generator.samples/validation_generator.batch_size,
verbose=1)
# Save the model
model.save('facenet_resnet_lr3_SGD_relu_first50_FC3.h5')
# loss and accuracy curves.
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
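# The batch-wise precision, recall, and F1 metrics defined in the cell above can be sanity-checked with a plain NumPy sketch (a standalone illustration only; it is not wired into the Keras training code):

```python
import numpy as np

def batch_f1(y_true, y_pred, eps=1e-7):
    """Batch-wise F1 from rounded predictions, mirroring the Keras metrics above."""
    y_pred_pos = np.round(np.clip(y_pred, 0, 1))
    tp = np.sum(y_true * y_pred_pos)
    precision = tp / (np.sum(y_pred_pos) + eps)  # eps guards against division by zero
    recall = tp / (np.sum(y_true) + eps)
    return 2 * precision * recall / (precision + recall + eps)

y_true = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6])
print(batch_f1(y_true, y_pred))  # precision = recall = 2/3 here
```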
# + colab={"base_uri": "https://localhost:8080/", "height": 442} colab_type="code" id="bs_fIC1vw5Kx" outputId="47524076-fcde-44c1-d9df-9331695a581e"
from google.colab import files
files.download('facenet_resnet_lr3_SGD_new_FC3_200.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 357} colab_type="code" id="3OsmEjmNKoJA" outputId="f6bcced3-682c-4405-dc5c-334f90d3d0f6"
# memory footprint support libraries/code
# !ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
# !pip install gputil
# !pip install psutil
# !pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " I Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
# + colab={} colab_type="code" id="RWWitHGaKpCx"
| CFPS_Research_Paper/keras_VGGFace_3FC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 3: Getting to Know Your Data
# This time we will load the data directly from the internet. Special thanks to https://github.com/justmarkham for providing the data and course materials.
#
# ### Step 1. Import the necessary libraries.
# ### Step 2. Load the data directly from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user).
# ### Step 3. Assign it to a variable called users and use 'user_id' as the index.
# ### Step 4. See the first 25 entries.
# ### Step 5. See the last 10 entries.
# ### Step 6. What is the total number of observations in the dataset?
# ### Step 7. What is the total number of columns in the dataset?
# ### Step 8. Print the names of all the columns.
# ### Step 9. How is the dataset indexed?
# ### Step 10. What is the data type of each column?
# ### Step 11. Print only the occupation column.
# ### Step 12. How many different occupations are there in this dataset?
# ### Step 13. What is the most frequent occupation?
# ### Step 14. Summarize the DataFrame.
# ### Step 15. Summarize all the columns.
# ### Step 16. Summarize only the occupation column.
# ### Step 17. What is the mean age of the users? (Round it.)
# ### Step 18. What is the age with the least occurrence?
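# A minimal sketch of the workflow the steps above describe, using a tiny inline sample instead of the remote u.user file (the real file is pipe-separated; the column names below mirror it, but the sample rows are invented):

```python
import io
import pandas as pd

# Tiny inline stand-in for the pipe-separated u.user file
sample = io.StringIO(
    "user_id|age|gender|occupation|zip_code\n"
    "1|24|M|technician|85711\n"
    "2|53|F|other|94043\n"
    "3|23|M|writer|32067\n"
    "4|30|F|other|11111\n"
)
users = pd.read_csv(sample, sep="|", index_col="user_id")

print(users.head(25))                               # Step 4: first entries
print(users.shape)                                  # Steps 6-7: (observations, columns)
print(list(users.columns))                          # Step 8: column names
print(users["occupation"].nunique())                # Step 12: distinct occupations
print(users["occupation"].value_counts().idxmax())  # Step 13: most frequent occupation
print(users["age"].mean())                          # Step 17: mean age
```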
| 01_Getting_&_Knowing_Your_Data/Occupation/Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### 3.4.1 TF-IDF sample
# +
# Listing 3.4.1 Analyzing Wikipedia articles on Japan's 100 famous hot springs with TF-IDF

# List of hot springs from Japan's 100 famous hot springs that have a Wikipedia article
spa_list = ['菅野温泉','養老牛温泉','定山渓温泉','登別温泉','洞爺湖温泉','ニセコ温泉郷','朝日温泉 (北海道)',
'酸ヶ湯温泉','蔦温泉', '花巻南温泉峡','夏油温泉','須川高原温泉','鳴子温泉郷','遠刈田温泉','峩々温泉',
'乳頭温泉郷','後生掛温泉','玉川温泉 (秋田県)','秋ノ宮温泉郷','銀山温泉','瀬見温泉','赤倉温泉 (山形県)',
'東山温泉','飯坂温泉','二岐温泉','那須温泉郷','塩原温泉郷','鬼怒川温泉','奥鬼怒温泉郷',
'草津温泉','伊香保温泉','四万温泉','法師温泉','箱根温泉','湯河原温泉',
'越後湯沢温泉','松之山温泉','大牧温泉','山中温泉','山代温泉','粟津温泉',
'奈良田温泉','西山温泉 (山梨県)','野沢温泉','湯田中温泉','別所温泉','中房温泉','白骨温泉','小谷温泉',
'下呂温泉','福地温泉','熱海温泉','伊東温泉','修善寺温泉','湯谷温泉 (愛知県)','榊原温泉','木津温泉',
'有馬温泉','城崎温泉','湯村温泉 (兵庫県)','十津川温泉','南紀白浜温泉','南紀勝浦温泉','湯の峰温泉','龍神温泉',
'奥津温泉','湯原温泉','三朝温泉','岩井温泉','関金温泉','玉造温泉','有福温泉','温泉津温泉',
'湯田温泉','長門湯本温泉','祖谷温泉','道後温泉','二日市温泉 (筑紫野市)','嬉野温泉','武雄温泉',
'雲仙温泉','小浜温泉','黒川温泉','地獄温泉','垂玉温泉','杖立温泉','日奈久温泉',
'鉄輪温泉','明礬温泉','由布院温泉','川底温泉','長湯温泉','京町温泉',
'指宿温泉','霧島温泉郷','新川渓谷温泉郷','栗野岳温泉']
# +
# Read the Wikipedia articles
# See Section 2.1
import wikipedia
wikipedia.set_lang("ja")
content_list = []
for spa in spa_list:
print(spa)
content = wikipedia.page(spa,auto_suggest=False).content
content_list.append(content)
# +
# Morphological analysis
# See Section 2.2
from janome.tokenizer import Tokenizer

# Create a Tokenizer instance
t = Tokenizer()

# Define the morphological-analysis function
def tokenize(text):
    return [token.base_form for token in t.tokenize(text)
            if token.part_of_speech.split(',')[0] in ['名詞', '形容詞']]
# +
# Keep only the nouns and adjectives of each Wikipedia article, joined with spaces
# See Section 2.2
words_list = []
for content in content_list:
words = ' '.join(tokenize(content))
words = words.replace('==', '')
words_list.append(words)
# +
# Listing 3.4.2
# Run the TF-IDF analysis

# Import the library
from sklearn.feature_extraction.text import TfidfVectorizer

# Initialize the vectorizer
vectorizer = TfidfVectorizer(min_df=1, max_df=50)

# Generate the feature vectors
features = vectorizer.fit_transform(words_list)

# Extract the feature terms
terms = vectorizer.get_feature_names()

# Convert the feature vectors to a TF-IDF matrix (a NumPy ndarray)
tfidfs = features.toarray()
# +
# Listing 3.4.3 Display the characteristic terms of each hot spring

# Function that extracts the top n characteristic terms of the i-th document from the TF-IDF results
def extract_feature_words(terms, tfidfs, i, n):
    # Get the list of TF-IDF values for the i-th document
    tfidf_array = tfidfs[i]
    # Build the index list that sorts tfidf_array in ascending order
    sorted_idx = tfidf_array.argsort()
    # Reverse the index list (indices now in descending order of value)
    sorted_idx_rev = sorted_idx[::-1]
    # Keep only the top n
    top_n_idx = sorted_idx_rev[:n]
    # Build the list of words for those indices
    words = [terms[idx] for idx in top_n_idx]
    return words

# Print the results
for i in range(10):
print( '【' + spa_list[i] + '】' )
for x in extract_feature_words(terms, tfidfs, i, 10):
print(x, end=' ')
print()
# -
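# The argsort-and-reverse trick used inside extract_feature_words can be checked in isolation with a tiny array:

```python
import numpy as np

tfidf_array = np.array([0.1, 0.5, 0.3, 0.9, 0.2])
# Ascending sort order, reversed, then truncated to the top 3
top_n_idx = tfidf_array.argsort()[::-1][:3]
print(top_n_idx)  # indices of the three largest values
```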
| samples/ch03-04/ch03-04-01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Insert Catchy Name Here
# In this notebook, we can begin to piece together our Python code that will match the environmental data up with the tagged species data.
# Jackie has added useable parts of Trackpy and access to Ocean Color Data here (Aug4th 21:30 EDT)
# %matplotlib inline
import warnings
warnings.simplefilter('ignore')
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from datetime import datetime, timedelta
from ftplib import FTP
import getpass
import os.path
from os import path
import netCDF4
from netCDF4 import Dataset
import folium
from folium.plugins import TimestampedGeoJson
from datetime import datetime
# Read in tagged data and extract spatial domain
# +
shark_dir = "track_shark144020.csv" # three years of track data from Laura's shark
track_ex = pd.read_csv(shark_dir, parse_dates=['datetime'])
## Keep longitude in degrees east
track_ex["lon"] = np.where(
track_ex["lon"] > 180,
track_ex["lon"] - 360,
track_ex["lon"])
lat_min = track_ex["lat"].min() - 2.0
lat_max = track_ex["lat"].max() + 2.0
lon_min = track_ex["lon"].min() - 2.0
lon_max = track_ex["lon"].max() + 2.0
xy_bbox = dict(latitude=slice(lat_min,lat_max), longitude=slice(lon_min,lon_max))
plt.plot(track_ex.lon,track_ex.lat)
xy_bbox
# -
# Trackpy core: a Python version of the Xtractomatic tool often used in R and MATLAB. It does not yet work for us and needs to be outfitted with appropriate environmental data. This piece of code works through each row of the tagged data and extracts the environmental variable at the time and location of the tagged animal. WE NOW HAVE A BETTER WAY TO DO THIS WOOT WOOT.
# +
sst_df_list = []
for index, row in track_ex.iterrows():
row_time = pd.to_datetime(row["datetime"])
x = row_time.strftime('%Y-%m-%d')
row_lat_min = row["lat"] - 0.1
row_lat_max = row["lat"] + 0.1
row_lon_min = row["lon"] - 0.05
row_lon_max = row["lon"] + 0.05
row_bbox = subset_2014.sel(latitude=slice(row_lat_min,row_lat_max), longitude=slice(row_lon_min,row_lon_max))
row_sst= row_bbox.sel(time=x)
sst_xy_mean = row_sst.mean(dim=('latitude', 'longitude'))
row_todf = sst_xy_mean.to_dataframe()
#row_todf = row_sst.to_dataframe()
sst_df_list.append(row_todf)
#track_ex[]
sst_df = pd.concat(sst_df_list, ignore_index = True)
track_ex = pd.concat([track_ex, sst_df], axis=1)
# -
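# The "better way" alluded to above is not shown in this notebook; one plausible candidate (an assumption here, not taken from the original text) is xarray's vectorized pointwise indexing, where DataArray indexers sharing a common dimension pull one grid value per track point in a single call, replacing the row-by-row loop:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy environmental grid: 5 days x 4 latitudes x 4 longitudes
ds = xr.Dataset(
    {"sst": (("time", "lat", "lon"),
             np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4))},
    coords={"time": pd.date_range("2014-01-01", periods=5),
            "lat": [30.0, 31.0, 32.0, 33.0],
            "lon": [-70.0, -69.0, -68.0, -67.0]},
)

# Toy track: one row per tag observation
track = pd.DataFrame({
    "datetime": pd.to_datetime(["2014-01-01", "2014-01-03"]),
    "lat": [30.2, 32.9],
    "lon": [-69.8, -67.1],
})

# Vectorized nearest-neighbour extraction: the shared "points" dimension
# pairs the indexers up, so no Python loop over rows is needed
matched = ds["sst"].sel(
    time=xr.DataArray(track["datetime"].values, dims="points"),
    lat=xr.DataArray(track["lat"].values, dims="points"),
    lon=xr.DataArray(track["lon"].values, dims="points"),
    method="nearest",
)
track["sst"] = matched.values
print(track)
```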
# could also grab a lot of the plotting code from the Trackpy code
# Accessing ocean color data from MODIS-A and formatting in Xarray, slicing for tagged species data
# testing ocean color access with just a few tagged species datapoints
track_2014 = track_ex.iloc[0:10]
# +
# calculate year day for time range of tagged data
day_list = []
year_list = []
for index, row in track_2014.iterrows():
row_time = pd.to_datetime(row["datetime"])
day_of_year = datetime(row_time.year, row_time.month, row_time.day).timetuple().tm_yday
year_list.append(row_time.year)
day_list.append(day_of_year)
day_string = [str(x) for x in day_list]
year_string = [str(x) for x in year_list]
# create access url for ocean color on opendap, merge datafiles to xarray
url = []
base_dir = 'https://oceandata.sci.gsfc.nasa.gov/opendap/hyrax/MODISA/L3SMI/'
suffix = '.L3m_DAY_CHL_chlor_a_4km.nc'
k = 0
for day in day_string:
url.append('https://oceandata.sci.gsfc.nasa.gov:443/opendap/MODISA/L3SMI/' + year_string[k] +'/' + day + '/A'+year_string[k] + day + '.L3m_DAY_CHL_chlor_a_4km.nc')
k = k+1
def add_id(ds):
ds.coords['time_coverage_start'] = pd.to_datetime(ds.attrs['time_coverage_start'])
return ds
chl = xr.open_mfdataset(url, combine = 'nested', concat_dim='time_coverage_start', preprocess=add_id)
chl = chl.sel( lat=slice(lat_max, lat_min), lon=slice(lon_min,lon_max)) # the lat coordinate in these files runs from north to south, so the slice bounds are reversed
chl
# -
# ### Plotting the Chlorophyll Data
# note: ocean color satellite data cannot be collected through clouds (because of the light spectrum that is needed) so we will have much less chlorophyll data than SSH and SST
test = chl.isel(time_coverage_start = 5)
test.chlor_a.plot(vmin = 0, vmax = 5)
# (in reference to below cell) WE NOW HAVE A BETTER WAY TO DO THIS WOOT WOOT
# +
subset_2014 = chl
sst_df_list = []
for index, row in track_ex.iterrows():
row_time = pd.to_datetime(row["datetime"])
x = row_time.strftime('%Y-%m-%d')
row_lat_min = row["lat"] - 0.1
row_lat_max = row["lat"] + 0.1
row_lon_min = row["lon"] - 0.05
row_lon_max = row["lon"] + 0.05
row_bbox = subset_2014.sel(lat=slice(row_lat_min,row_lat_max), lon=slice(row_lon_min,row_lon_max))
row_sst= row_bbox.sel(time_coverage_start=x, method='nearest')
    sst_xy_mean = row_sst.mean(dim=('lat', 'lon'))
row_todf = sst_xy_mean.to_dataframe()
#row_todf = row_sst.to_dataframe()
sst_df_list.append(row_todf)
#track_ex[]
sst_df = pd.concat(sst_df_list, ignore_index = True)
track_ex = pd.concat([track_ex, sst_df], axis=1)
| OHW_development/developmental_materials/insert_catchy_name_here2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Weapon matching
#
# ### Extracting the weapon region
import cv2
import numpy as np
import matplotlib.pyplot as plt
# +
from rectimage import *
img1 = cv2.imread("ikaimg1.png")
#cimg = crop_wcenter(img1, 0.25, 0.12 ,0.14)
cimg = crop_whrate(img1, 0, 0.1 ,0, 0.18)
plt.imshow(cimg)
# -
# ## Extracting the colors
from get_histogram import *
cap = cv2.VideoCapture('../2020-11-24 21-14-00.mp4')
# +
colors = []
while cap.isOpened():
try:
ret, frame = cap.read()
modecolors = extract_modecolor(frame)
colors.append(modecolors)
except Exception as e:
print(e)
break
# -
colors
# +
import pandas as pd
df = pd.DataFrame(colors)
# -
df
df.plot()
# show fps
cap.get(cv2.CAP_PROP_FPS)
| src/.ipynb_checkpoints/Weapon_matching-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import import_nbmodule
# +
# add module folder
import os, sys
modulefolder = os.path.abspath(os.path.join('..'))
sys.path.append(modulefolder)
# import the outsidemynb1 and outsidemynb2 modules
from outsidePackage import outsidemynb1
from outsidePackage.subpackage import outsidemynb2
outsidemynb1.foo()
outsidemynb2.foo()
| examples/demo_outsidePackage/import_outsidemodule_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fridymandita/Struktur-Data/blob/main/Struktur_Data_Queue.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="y4KkJ5h7vER8" colab={"base_uri": "https://localhost:8080/"} outputId="38626e1d-1ba9-409e-cb6c-1ad8ce17007f"
queue = []
# Adding elements to the queue
queue.append('a')
queue.append('b')
queue.append('c')
print("Initial queue")
print(queue)
# Removing elements from the queue
print("\nElements dequeued from queue")
print(queue.pop(0))
print(queue.pop(0))
print(queue.pop(0))
print("\nQueue after removing elements")
print(queue)
# + colab={"base_uri": "https://localhost:8080/"} id="SyXMOBNGwM1N" outputId="59286d5d-9e5a-4519-b804-ddceca195a9c"
# Python program to
# demonstrate queue implementation
# using collections.dequeue
from collections import deque
# Initializing a queue
q = deque()
# Adding elements to a queue
q.append('a')
q.append('b')
q.append('c')
print("Initial queue")
print(q)
# Removing elements from a queue
print("\nElements dequeued from the queue")
print(q.popleft())
print(q.popleft())
print(q.popleft())
print("\nQueue after removing elements")
print(q)
# Uncommenting q.popleft()
# will raise an IndexError
# as queue is now empty
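# A small illustration of that IndexError, with the exception caught so the cell still runs cleanly:

```python
from collections import deque

empty_q = deque()
try:
    empty_q.popleft()          # no elements left to remove
except IndexError as exc:
    error_message = str(exc)   # "pop from an empty deque"
print(error_message)
```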
# + id="Jjv6yJyjwRDy" outputId="a74d9e4e-01e6-4da6-d6ac-de95faa2b03d" colab={"base_uri": "https://localhost:8080/"}
# Python program to
# demonstrate implementation of
# queue using queue module
from queue import Queue
# Initializing a queue
q = Queue(maxsize = 5)
# qsize() gives the current number
# of items in the Queue (0 here)
print(q.qsize())
# Adding of element to queue
q.put('a')
q.put('b')
q.put('c')
q.put('d')
# Return Boolean for Full
# Queue
print("\nFull: ", q.full())
# Removing element from queue
print("\nElements dequeued from the queue")
print(q.get())
print(q.get())
print(q.get())
print(q.get())
# Return Boolean for Empty
# Queue
print("\nEmpty: ", q.empty())
#q.put(1)
#print("\nEmpty: ", q.empty())
#print("Full: ", q.full())
# This would block indefinitely
# because the Queue is now empty.
# print(q.get())
| Struktur_Data_Queue.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro to data structures
#
# We’ll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started. The fundamental behavior about data types, indexing, and axis labeling / alignment apply across all of the objects. To get started, import NumPy and load pandas into your namespace:
import numpy as np
import pandas as pd
# Here is a basic tenet to keep in mind: **data alignment is intrinsic**. The link between labels and data will not be broken unless done so explicitly by you.
#
# We’ll give a brief intro to the data structures, then consider all of the broad categories of functionality and methods in separate sections.
# ## Series
#
# [`Series`](https://pandas.pydata.org/docs/reference/api/pandas.Series.html#pandas.Series) is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). The axis labels are collectively referred to as the **index**. The basic method to create a Series is to call:
s = pd.Series(data, index=index)
# Here, `data` can be many different things:
#
# - a Python dict
# - an ndarray
# - a scalar value (like 5)
#
# The passed **index** is a list of axis labels. Thus, this separates into a few cases depending on what **data is**:
#
# **From ndarray**
#
# If `data` is an ndarray, **index** must be the same length as **data**. If no index is passed, one will be created having values `[0, ..., len(data) - 1]`.
# +
s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
s
# -
pd.Series(np.random.randn(5))
# **Note**
#
# pandas supports non-unique index values. If an operation that does not support duplicate index values is attempted, an exception will be raised at that time. The reason for being lazy is nearly all performance-based (there are many instances in computations, like parts of GroupBy, where the index is not used).
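# For example, a Series happily stores duplicate labels, and label-based selection then returns every matching element (a small illustration):

```python
import pandas as pd

dup = pd.Series([1, 2, 3], index=["a", "a", "b"])
print(dup["a"])             # both elements labelled "a"
print(dup.index.is_unique)  # False
```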
#
# **From dict**
#
# Series can be instantiated from dicts:
# +
d = {"b": 1, "a": 0, "c": 2}
pd.Series(d)
# -
# **Note**
#
# When the data is a dict, and an index is not passed, the Series index will be ordered by the dict’s insertion order, if you’re using Python version >= 3.6 and pandas version >= 0.23.
#
# If you’re using Python < 3.6 or pandas < 0.23, and an index is not passed, the Series index will be the lexically ordered list of dict keys.
#
# In the example above, if you were on a Python version lower than 3.6 or a pandas version lower than 0.23, the Series would be ordered by the lexical order of the dict keys (i.e. ['a', 'b', 'c'] rather than ['b', 'a', 'c']).
#
# If an index is passed, the values in data corresponding to the labels in the index will be pulled out.
d = {"a": 0.0, "b": 1.0, "c": 2.0}
pd.Series(d)
pd.Series(d, index=["b", "c", "d", "a"])
# **Note**
#
# NaN (not a number) is the standard missing data marker used in pandas.
#
# **From scalar value**
#
# If data is a scalar value, an index must be provided. The value will be repeated to match the length of index.
pd.Series(5.0, index=["a", "b", "c", "d", "e"])
# ### Series is ndarray-like
#
# `Series` acts very similarly to a `ndarray`, and is a valid argument to most NumPy functions. However, operations such as slicing will also slice the index.
s[0]
s[:3]
s[s > s.median()]
s[[4, 3, 1]]
np.exp(s)
# **Note**
#
# We will address array-based indexing like `s[[4, 3, 1]]` in [section](https://pandas.pydata.org/docs/user_guide/indexing.html#indexing).
#
# Like a NumPy array, a pandas Series has a [`dtype`](https://pandas.pydata.org/docs/reference/api/pandas.Series.dtype.html#pandas.Series.dtype).
s.dtype
# This is often a NumPy dtype. However, pandas and 3rd-party libraries extend NumPy’s type system in a few places, in which case the dtype would be an [`ExtensionDtype`](https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionDtype.html#pandas.api.extensions.ExtensionDtype). Some examples within pandas are [Categorical data](https://pandas.pydata.org/docs/user_guide/categorical.html#categorical) and [Nullable integer data type](https://pandas.pydata.org/docs/user_guide/integer_na.html#integer-na). See [dtypes](https://pandas.pydata.org/docs/user_guide/basics.html#basics-dtypes) for more.
#
# If you need the actual array backing a `Series`, use [`Series.array`](https://pandas.pydata.org/docs/reference/api/pandas.Series.array.html#pandas.Series.array).
s.array
# Accessing the array can be useful when you need to do some operation without the index (to disable [automatic alignment](https://pandas.pydata.org/docs/user_guide/dsintro.html#dsintro-alignment), for example).
#
# [`Series.array`](https://pandas.pydata.org/docs/reference/api/pandas.Series.array.html#pandas.Series.array) will always be an [`ExtensionArray`](https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray). Briefly, an ExtensionArray is a thin wrapper around one or more *concrete* arrays like a [`numpy.ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray). pandas knows how to take an `ExtensionArray` and store it in a `Series` or a column of a `DataFrame`. See [dtypes](https://pandas.pydata.org/docs/user_guide/basics.html#basics-dtypes) for more.
#
# While Series is ndarray-like, if you need an *actual* ndarray, then use [`Series.to_numpy()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.to_numpy.html#pandas.Series.to_numpy).
s.to_numpy()
# Even if the Series is backed by a ExtensionArray, Series.to_numpy() will return a NumPy ndarray.
# ### Series is dict-like
#
# A Series is like a fixed-size dict in that you can get and set values by index label:
s["a"]
s["e"] = 12.0
s
"e" in s
"f" in s
# If a label is not contained, an exception is raised:
s["f"]
# Using the get method, a missing label will return None or specified default:
s.get("f")
s.get("f", np.nan)
# See also the [section on attribute access](https://pandas.pydata.org/docs/user_guide/indexing.html#indexing-attribute-access).
#
# ### Vectorized operations and label alignment with Series
#
# When working with raw NumPy arrays, looping through value-by-value is usually not necessary. The same is true when working with Series in pandas. Series can also be passed into most NumPy methods expecting an ndarray.
s + s
s * 2
np.exp(s)
# A key difference between Series and ndarray is that operations between Series automatically align the data based on label. Thus, you can write computations without giving consideration to whether the Series involved have the same labels.
s[1:] + s[:-1]
# The result of an operation between unaligned Series will have the **union** of the indexes involved. If a label is not found in one Series or the other, the result will be marked as missing `NaN`. Being able to write code without doing any explicit data alignment grants immense freedom and flexibility in interactive data analysis and research. The integrated data alignment features of the pandas data structures set pandas apart from the majority of related tools for working with labeled data.
#
# **Note**
#
# In general, we chose to make the default result of operations between differently indexed objects yield the **union** of the indexes in order to avoid loss of information. Having an index label, though the data is missing, is typically important information as part of a computation. You of course have the option of dropping labels with missing data via the **dropna** function.
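# A small illustration of the union behaviour and of dropping the resulting missing labels:

```python
import pandas as pd

a = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"])
b = pd.Series([10.0, 20.0, 30.0], index=["b", "c", "d"])

total = a + b          # union of indexes; unmatched labels become NaN
print(total)
print(total.dropna())  # drop the labels with missing data
```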
#
# ----
#
# ### Name attribute
#
# Series can also have a `name` attribute:
s = pd.Series(np.random.randn(5), name="something")
s
s.name
# The Series name will be assigned automatically in many cases, in particular when taking 1D slices of DataFrame as you will see below.
#
# You can rename a Series with the pandas.Series.rename() method.
s2 = s.rename("different")
s2.name
# Note that s and s2 refer to different objects.
# ## DataFrame
#
# **DataFrame** is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. Like Series, DataFrame accepts many different kinds of input:
#
# - Dict of 1D ndarrays, lists, dicts, or Series
# - 2-D numpy.ndarray
# - [Structured or record](https://numpy.org/doc/stable/user/basics.rec.html) ndarray
# - A `Series`
# - Another `DataFrame`
#
# Along with the data, you can optionally pass **index** (row labels) and **columns** (column labels) arguments. If you pass an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict of Series plus a specific index will discard all data not matching up to the passed index.
#
# If axis labels are not passed, they will be constructed from the input data based on common sense rules.
#
# **Note**
#
# When the data is a dict, and `columns` is not specified, the `DataFrame` columns will be ordered by the dict’s insertion order, if you are using Python version >= 3.6 and pandas >= 0.23.
#
# If you are using Python < 3.6 or pandas < 0.23, and `columns` is not specified, the `DataFrame` columns will be the lexically ordered list of dict keys.
#
# ----
#
# ### From dict of Series or dicts
#
# The resulting **index** will be the **union** of the indexes of the various Series. If there are any nested dicts, these will first be converted to Series. If no columns are passed, the columns will be the ordered list of dict keys.
d = {
    "one": pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"]),
    "two": pd.Series([1.0, 2.0, 3.0, 4.0], index=["a", "b", "c", "d"]),
}

df = pd.DataFrame(d)
df
pd.DataFrame(d, index=["d", "b", "a"])
pd.DataFrame(d, index=["d", "b", "a"], columns=["two", "three"])
# The row and column labels can be accessed respectively by accessing the **index** and **columns** attributes:
#
# **Note**
#
# When a particular set of columns is passed along with a dict of data, the passed columns override the keys in the dict.
#
# ----
df.index
df.columns
# ### From dict of ndarrays / lists
#
# The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays. If no index is passed, the result will be `range(n)`, where `n` is the array length.
d = {"one": [1.0, 2.0, 3.0, 4.0], "two": [4.0, 3.0, 2.0, 1.0]}
pd.DataFrame(d)
pd.DataFrame(d, index=["a", "b", "c", "d"])
# ### From structured or record array
#
# This case is handled identically to a dict of arrays.
data = np.zeros((2,), dtype=[("A", "i4"), ("B", "f4"), ("C", "a10")])
data[:] = [(1, 2.0, "Hello"), (2, 3.0, "World")]
pd.DataFrame(data)
pd.DataFrame(data, index=["first", "second"])
pd.DataFrame(data, columns=["C", "A", "B"])
# **Note**
#
# DataFrame is not intended to work exactly like a 2-dimensional NumPy ndarray.
#
# ----
#
#
#
# ### From a list of dicts
data2 = [{"a": 1, "b": 2}, {"a": 5, "b": 10, "c": 20}]
pd.DataFrame(data2)
pd.DataFrame(data2, index=["first", "second"])
pd.DataFrame(data2, columns=["a", "b"])
# ### From a dict of tuples
#
# You can automatically create a MultiIndexed frame by passing a tuples dictionary.
pd.DataFrame(
    {
        ("a", "b"): {("A", "B"): 1, ("A", "C"): 2},
        ("a", "a"): {("A", "C"): 3, ("A", "B"): 4},
        ("a", "c"): {("A", "B"): 5, ("A", "C"): 6},
        ("b", "a"): {("A", "C"): 7, ("A", "B"): 8},
        ("b", "b"): {("A", "D"): 9, ("A", "B"): 10},
    }
)
# ### From a Series
#
# The result will be a DataFrame with the same index as the input Series, and with one column whose name is the original name of the Series (only if no other column name provided).
#
#
#
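# For example, a minimal illustration (values are hypothetical, not from the docs above):

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"], name="vals")
df_from_s = pd.DataFrame(s)
# the single column takes its name from the Series
print(df_from_s.columns.tolist())
```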
# ### From a list of namedtuples
#
# The field names of the first `namedtuple` in the list determine the columns of the `DataFrame`. The remaining namedtuples (or tuples) are simply unpacked and their values are fed into the rows of the `DataFrame`. If any of those tuples is shorter than the first `namedtuple` then the later columns in the corresponding row are marked as missing values. If any are longer than the first `namedtuple`, a `ValueError` is raised.
from collections import namedtuple
Point = namedtuple("Point", "x y")
pd.DataFrame([Point(0, 0), Point(0, 3), (2, 3)])
Point3D = namedtuple("Point3D", "x y z")
pd.DataFrame([Point3D(0, 0, 0), Point3D(0, 3, 5), Point(2, 3)])
# ### From a list of dataclasses
#
# *New in version 1.1.0.*
#
# Data Classes as introduced in [PEP557](https://www.python.org/dev/peps/pep-0557), can be passed into the DataFrame constructor. Passing a list of dataclasses is equivalent to passing a list of dictionaries.
#
# Please be aware that all values in the list should be dataclasses; mixing types in the list will result in a TypeError.
from dataclasses import make_dataclass
Point = make_dataclass("Point", [("x", int), ("y", int)])
pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
# **Missing data**
#
# Much more will be said on this topic in the [Missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#missing-data) section. To construct a DataFrame with missing data, we use `np.nan` to represent missing values. Alternatively, you may pass a `numpy.MaskedArray` as the data argument to the DataFrame constructor, and its masked entries will be considered missing.
#
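# A quick sketch of both approaches (values are hypothetical):

```python
import numpy as np
import pandas as pd

# np.nan marks missing values directly
df_nan = pd.DataFrame({"one": [1.0, np.nan], "two": [3.0, 4.0]})
print(df_nan.isna().sum().tolist())

# masked entries of a numpy.MaskedArray become missing as well
ma = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
df_ma = pd.DataFrame({"x": ma})
print(df_ma["x"].isna().tolist())
```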
# ### Alternate constructors
#
# **DataFrame.from_dict**
#
# `DataFrame.from_dict` takes a dict of dicts or a dict of array-like sequences and returns a DataFrame. It operates like the `DataFrame` constructor except for the `orient` parameter which is `'columns'` by default, but which can be set to `'index'` in order to use the dict keys as row labels.
pd.DataFrame.from_dict(dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]))
# If you pass orient='index', the keys will be the row labels. In this case, you can also pass the desired column names:
pd.DataFrame.from_dict(
    dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]),
    orient="index",
    columns=["one", "two", "three"],
)
# **DataFrame.from_records**
#
# `DataFrame.from_records` takes a list of tuples or an ndarray with structured dtype. It works analogously to the normal `DataFrame` constructor, except that the resulting DataFrame index may be a specific field of the structured dtype. For example:
data
pd.DataFrame.from_records(data, index="C")
# ### Column selection, addition, deletion
#
# You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting columns works with the same syntax as the analogous dict operations:
df["one"]
df["three"] = df["one"] * df["two"]
df["flag"] = df["one"] > 2
df
# Columns can be deleted or popped like with a dict:
del df["two"]
three = df.pop("three")
df
# When inserting a scalar value, it will naturally be propagated to fill the column:
df["foo"] = "bar"
df
# When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s index:
df["one_trunc"] = df["one"][:2]
df
# You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
#
# By default, columns get inserted at the end. The insert function is available to insert at a particular location in the columns:
df.insert(1, "bar", df["one"])
df
# ### Assigning new columns in method chains
#
# Inspired by [dplyr’s](https://dplyr.tidyverse.org/reference/mutate.html) `mutate` verb, DataFrame has an [`assign()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html#pandas.DataFrame.assign) method that allows you to easily create new columns that are potentially derived from existing columns.
#
# **Iris Data Set** can download from [here](https://archive.ics.uci.edu/ml/datasets/Iris)
iris = pd.read_csv("data/iris.data")
iris.head()
iris.assign(sepal_ratio=iris["SepalWidth"] / iris["SepalLength"]).head()
# In the example above, we inserted a precomputed value. We can also pass in a function of one argument to be evaluated on the DataFrame being assigned to.
iris.assign(sepal_ratio=lambda x: (x["SepalWidth"] / x["SepalLength"])).head()
# `assign` **always** returns a copy of the data, leaving the original DataFrame untouched.
#
# Passing a callable, as opposed to an actual value to be inserted, is useful when you don’t have a reference to the DataFrame at hand. This is common when using `assign` in a chain of operations. For example, we can limit the DataFrame to just those observations with a Sepal Length greater than 5, calculate the ratio, and plot:
(
    iris.query("SepalLength > 5")
    .assign(
        SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
        PetalRatio=lambda x: x.PetalWidth / x.PetalLength,
    )
    .plot(kind="scatter", x="SepalRatio", y="PetalRatio")
)
# Since a function is passed in, the function is computed on the DataFrame being assigned to. Importantly, this is the DataFrame that’s been filtered to those rows with sepal length greater than 5. The filtering happens first, and then the ratio calculations. This is an example where we didn’t have a reference to the *filtered* DataFrame available.
#
# The function signature for `assign` is simply `**kwargs`. The keys are the column names for the new fields, and the values are either a value to be inserted (for example, a `Series` or NumPy array), or a function of one argument to be called on the `DataFrame`. A *copy* of the original DataFrame is returned, with the new values inserted.
#
# Starting with Python 3.6 the order of `**kwargs` is preserved. This allows for *dependent* assignment, where an expression later in `**kwargs` can refer to a column created earlier in the same [`assign()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html#pandas.DataFrame.assign).
dfa = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
dfa.assign(C=lambda x: x["A"] + x["B"], D=lambda x: x["A"] + x["C"])
# In the second expression, `x['C']` will refer to the newly created column, that’s equal to `dfa['A'] + dfa['B']`.
#
# ### Indexing / selection
#
# The basics of indexing are as follows:
#
# | Operation | Syntax | Result |
# | :----------------------------- | :-------------- | :-------- |
# | Select column | `df[col]` | Series |
# | Select row by label | `df.loc[label]` | Series |
# | Select row by integer location | `df.iloc[loc]` | Series |
# | Slice rows | `df[5:10]` | DataFrame |
# | Select rows by boolean vector | `df[bool_vec]` | DataFrame |
#
# Row selection, for example, returns a Series whose index is the columns of the DataFrame:
df.loc["b"]
df.iloc[2]
# For a more exhaustive treatment of sophisticated label-based indexing and slicing, see the [section on indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing). We will address the fundamentals of reindexing / conforming to new sets of labels in the [section on reindexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#basics-reindexing).
#
#
#
# ### Data alignment and arithmetic
#
# Operations between DataFrame objects automatically align on **both the columns and the index (row labels)**. Again, the resulting object will have the union of the column and row labels.
df = pd.DataFrame(np.random.randn(10, 4), columns=["A", "B", "C", "D"])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=["A", "B", "C"])
df + df2
# When doing an operation between DataFrame and Series, the default behavior is to align the Series **index** on the DataFrame **columns**, thus [broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html) row-wise. For example:
df - df.iloc[0]
# For explicit control over the matching and broadcasting behavior, see the section on [flexible binary operations](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#basics-binop).
#
# Operations with scalars are just as you would expect:
df * 5 + 2
1 / df
df ** 4
# Boolean operators work as well:
df1 = pd.DataFrame({"a": [1, 0, 1], "b": [0, 1, 1]}, dtype=bool)
df2 = pd.DataFrame({"a": [0, 1, 1], "b": [1, 1, 0]}, dtype=bool)
df1 & df2
df1 | df2
df1 ^ df2
-df1
# ### Transposing
#
# To transpose, access the `T` attribute (also the `transpose` function), similar to an ndarray:
# only show the first 5 rows
df[:5].T
# ### DataFrame interoperability with NumPy functions
#
# Elementwise NumPy ufuncs (log, exp, sqrt, …) and various other NumPy functions can be used with no issues on Series and DataFrame, assuming the data within are numeric:
np.exp(df)
np.asarray(df)
# DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics and data model are quite different in places from an n-dimensional array.
#
# [`Series`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html#pandas.Series) implements `__array_ufunc__`, which allows it to work with NumPy’s [universal functions](https://numpy.org/doc/stable/reference/ufuncs.html).
#
# The ufunc is applied to the underlying array in a Series.
ser = pd.Series([1, 2, 3, 4])
np.exp(ser)
# *Changed in version 0.25.0:* When multiple `Series` are passed to a ufunc, they are aligned before performing the operation.
#
# Like other parts of the library, pandas will automatically align labeled inputs as part of a ufunc with multiple inputs. For example, using `numpy.remainder()` on two [`Series`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html#pandas.Series) with differently ordered labels will align before the operation.
ser1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
ser2 = pd.Series([1, 3, 5], index=["b", "a", "c"])
ser1
ser2
np.remainder(ser1, ser2)
# As usual, the union of the two indices is taken, and non-overlapping values are filled with missing values.
ser3 = pd.Series([2, 4, 6], index=["b", "c", "d"])
ser3
np.remainder(ser1, ser3)
# When a binary ufunc is applied to a Series and Index, the Series implementation takes precedence and a Series is returned.
ser = pd.Series([1, 2, 3])
idx = pd.Index([4, 5, 6])
np.maximum(ser, idx)
# NumPy ufuncs are safe to apply to [`Series`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html#pandas.Series) backed by non-ndarray arrays, for example [`arrays.SparseArray`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.arrays.SparseArray.html#pandas.arrays.SparseArray) (see [Sparse calculation](https://pandas.pydata.org/pandas-docs/stable/user_guide/sparse.html#sparse-calculation)). If possible, the ufunc is applied without converting the underlying data to an ndarray.
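# For instance (a small sketch of the behavior described above):

```python
import numpy as np
import pandas as pd

sp = pd.Series(pd.arrays.SparseArray([1.0, 0.0, -2.0], fill_value=0.0))
out = np.abs(sp)
# the result is still backed by a SparseArray -- no densification happened
print(type(out.array).__name__)
```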
#
# ### Console display
#
# Very large DataFrames will be truncated to display them in the console. You can also get a summary using [`info()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.info.html#pandas.DataFrame.info). (Here I am reading a CSV version of the **baseball** dataset from the **plyr** R package):
baseball = pd.read_csv("data/baseball.csv")
print(baseball)
baseball.info()
# However, using `to_string` will return a string representation of the DataFrame in tabular form, though it won’t always fit the console width:
print(baseball.iloc[-20:, :12].to_string())
# Wide DataFrames will be printed across multiple rows by default:
pd.DataFrame(np.random.randn(3, 12))
# You can change how much to print on a single row by setting the display.width option:
pd.set_option("display.width", 40) # default is 80
pd.DataFrame(np.random.randn(3, 12))
# You can adjust the max width of the individual columns by setting `display.max_colwidth`
datafile = {
    "filename": ["filename_01", "filename_02"],
    "path": [
        "media/user_name/storage/folder_01/filename_01",
        "media/user_name/storage/folder_02/filename_02",
    ],
}
pd.set_option("display.max_colwidth", 30)
pd.DataFrame(datafile)
pd.set_option("display.max_colwidth", 100)
pd.DataFrame(datafile)
# You can also disable this feature via the `expand_frame_repr` option. This will print the table in one block.
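# For example:

```python
import numpy as np
import pandas as pd

wide = pd.DataFrame(np.random.randn(3, 12))
pd.set_option("expand_frame_repr", False)  # print the table in one block
print(wide)
pd.reset_option("expand_frame_repr")       # restore the default (True)
```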
#
# ### DataFrame column attribute access and IPython completion
#
# If a DataFrame column label is a valid Python variable name, the column can be accessed like an attribute:
df = pd.DataFrame({"foo1": np.random.randn(5), "foo2": np.random.randn(5)})
df
df.foo1
# The columns are also connected to the IPython completion mechanism so they can be tab-completed:
df.columns
| gettting_started/Intro to data structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import csv
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# Read in concatenated csv file and save as a dataframe
final_df_all = pd.read_csv('./data/final_data_0605.csv')
final_df_all.head()
# Define my features as X
X = final_df_all.loc[:, ["price", "stops", "total_time_mins", "distance"]].values
X
# Set y as my arrival city
y = final_df_all.loc[:, "arr_city"].values  # 1D array, as LabelEncoder expects
y
# Import dependencies
from sklearn.preprocessing import LabelEncoder
labelencoder_y = LabelEncoder()
y = labelencoder_y.fit_transform(y)
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
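# KNN is distance-based, so features on very different scales (e.g. price vs. stops) can dominate the metric. A hedged sketch of standardizing before fitting, on toy data rather than the flight dataset:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# toy data: column 0 is on a much larger scale than column 1
X_toy = np.array([[100.0, 1.0], [200.0, 2.0], [150.0, 1.5], [300.0, 3.0]])
y_toy = np.array([0, 0, 1, 1])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_toy)  # fit the scaler on training data only

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_scaled, y_toy)
print(knn.predict(scaler.transform([[160.0, 1.4]])))
```

The same pattern would apply to `X_train` / `X_test` above: fit the scaler on `X_train` and transform both splits.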
# Fitting K-NN to the Training set
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True, fmt="d")
| code/ML_KNeighborsClassifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
#Reading in dataset that is filtered to exclusions in 2011 and after and only fraud exclusion types
leie = pd.read_csv('LEIE_filtered.csv')
#checking dataset read in correctly
leie.head()
leie.shape
print(leie.dtypes)
#An NPI of 0 means there is not NPI available for that row
leie['NPI'].value_counts()
# To connect to our Provider Data, the easiest connection will be using NPI however we have a lot of missing NPIs. Therefore, we will connect to two separate connections: one with the NPI for the rows that have NPI and one without NPI for those with no NPIs. I will create two separate tables in order to do these two separate connections.
#Creating dataset without NPIs (all values of 0)
no_NPI = leie['NPI'] == 0
leie_no_NPI = leie[no_NPI]
#Confirming only 0 values for NPI
leie_no_NPI['NPI'].value_counts()
leie_no_NPI.shape
#Saving filtered dataframe as new csv
leie_NPI_none = leie_no_NPI.to_csv('LEIE_NoNPI.csv', index = False)
#Reloading filtered csv
leie_NPI_none = pd.read_csv('LEIE_NoNPI.csv', dtype=object)
#Need to create a unique identifier using Name/Busname variables and DOB, however, if one is NaN, don't want all of them to be NaN
leie_NPI_none['FIRSTNAME'].fillna('', inplace = True)
leie_NPI_none['LASTNAME'].fillna('', inplace = True)
leie_NPI_none['BUSNAME'].fillna('', inplace = True)
leie_NPI_none['DOB'].fillna('', inplace = True)
#Confirm NaNs replaced with space so no more NaNs for any of these columns
leie_NPI_none.isnull().sum()
#Creating new unique identifier with first name, last name, business name, and DOB
leie_NPI_none['Full_Name'] = leie_NPI_none['FIRSTNAME'].str.cat(leie_NPI_none['LASTNAME'],sep="")
leie_NPI_none['Name_OR_Business'] = leie_NPI_none['Full_Name'].str.cat(leie_NPI_none['BUSNAME'], sep="")
leie_NPI_none['NameBus_DOB'] = leie_NPI_none['Name_OR_Business'].str.cat(leie_NPI_none['DOB'], sep="")
#confirm columns appear to add correctly
leie_NPI_none.head(10)
#looking at possible duplicates
leie_NPI_none['NameBus_DOB'].value_counts()
#Filtering to only duplicates to explore these more and confirm they are duplicates
Name_dups = leie_NPI_none.groupby('NameBus_DOB').filter(lambda x:len(x) > 1)
#Looking at number of duplicates
Name_dups.shape
#Looking at as many as possible to confirm they appear to be duplicates
Name_dups.head(60)
#The above table gives confidence that these are indeed duplicates. Looking to ensure DOB is available in these
#duplicates to provide greater confidence as it seems very unlikely a provider would have the same first and last name
#as well as the exact same DOB
Name_dups['DOB'].value_counts()
# They all do appear to be duplicates so will drop the duplicates by only keeping the row with the most recent exclusion
#Need to convert exclusion year to a numeric in order to sort
leie_NPI_none['EXCLYear'] = pd.to_numeric(leie_NPI_none['EXCLYear'])
#Sorting dataframe so that the most recent exclusions are at the top
leie_NPI_none = leie_NPI_none.sort_values(by = 'EXCLYear', ascending=False)
#Remove the duplicate rows
leie_noNPIs_dedupped = leie_NPI_none.drop_duplicates(subset = 'NameBus_DOB', keep = 'first')
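# A toy illustration of this sort-then-dedupe pattern (hypothetical data, not the LEIE file):

```python
import pandas as pd

toy = pd.DataFrame({"key": ["a", "a", "b"], "EXCLYear": [2012, 2015, 2013]})
toy = toy.sort_values(by="EXCLYear", ascending=False)  # assign back, or the sort is lost
deduped = toy.drop_duplicates(subset="key", keep="first")
print(deduped.set_index("key")["EXCLYear"].to_dict())
```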
#Confirm that duplicates appear to be dropped: As there were 348 rows in the duplicate df, with some triple duplicates,
#out of the original 23,567, at least 175 lines should have been removed so the number of rows remaining should be 23,392
leie_noNPIs_dedupped.shape
#One more visual confirmation duplicates appear to be removed
leie_noNPIs_dedupped['NameBus_DOB'].value_counts()
#As DOB is not in our provider claims data then these fields will not be needed for the connection
leie_noNPIs_clean = leie_noNPIs_dedupped.drop(columns = ['DOB', 'NameBus_DOB'])
#confirm data looks ready to save as a clean version
leie_noNPIs_clean.head()
#Drop the columns created to combine the names and DOB
leie_noNPIs_clean2 = leie_noNPIs_clean.drop(columns = ['Full_Name', 'Name_OR_Business'])
#Taking one last look before saving as a new csv
leie_noNPIs_clean2.head()
#Save as new csv
leie_noNPIs_clean2.to_csv('LEIE_NoNPI_Clean.csv', index = False)
#Creating a table that has NPIs
NPI = leie['NPI'] != 0
leie_NPI = leie[NPI]
#Confirming no 0 values for NPI
leie_NPI['NPI'].value_counts()
leie_NPI.shape
#First confirm data is ready to save as new CSV as we may need to go back to this earlier version and relook at the duplicate values
leie_NPI.head()
leie_NPI.to_csv('LEIE_NPI_with_Duplicates.csv', index=False)
#Reading back in data with only NPIs
leie_NPI = pd.read_csv('LEIE_NPI_with_Duplicates.csv')
#confirming data read in correctly
leie_NPI.head()
#Looking at number of rows
leie_NPI.shape
#Confirming all NPIs are there
leie_NPI['NPI'].value_counts()
#Looking at the duplicated NPIs to confirm they are duplicates
dups = leie_NPI.groupby('NPI').filter(lambda x:len(x) > 1)
dups.shape
dups.head(60)
#Sorting dataframe so that the most recent exclusions are at the top
leie_NPI = leie_NPI.sort_values(by = 'EXCLYear', ascending=False)
#Remove the duplicate rows
leie_NPI_dedupped = leie_NPI.drop_duplicates(subset = 'NPI', keep = 'first')
#Checking if duplicates have been removed
leie_NPI_dedupped['NPI'].value_counts()
# if they were dropped correctly then the shape should now be 4634 rows
leie_NPI_dedupped.shape
#As in the CSV without NPIs, we do not need DOB for future purposes
leie_NPI_dedupped1 = leie_NPI_dedupped.drop(columns = ['DOB'])
#Checking all other data still seems to be included
leie_NPI_dedupped1.head()
#Saving as a clean CSV
leie_NPI_dedupped1.to_csv('LEIE_NPI_Clean.csv', index = False)
| Cleaning_and_Merging/Splitting LEIE into NPI and non NPI datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.11 64-bit (''piunet'': conda)'
# name: python3
# ---
# # Test
# This notebook imports the trained model and runs it on the Proba-V test set, as preprocessed by the preprocessing notebook.
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# import utils and basic libraries
import os
import time
from tqdm import tqdm
import matplotlib.pyplot as plt
import numpy as np
from utils import gen_sub, bicubic
from losses import cpsnr, cssim
from skimage import io
from zipfile import ZipFile
import torch
# <a id="loading"></a>
# # 1.0 Dataset Loading
PATH_DATASET = '../../Dataset/'
# PATH_DATASET = '/media/HDD/valsesia/piunet_repo/Dataset/'
band = 'NIR'
# band = 'RED'
mu = 7433.6436
sigma = 2353.0723
Nimages = 9
# load ESA test set (no ground truth)
X_test = np.load(os.path.join(PATH_DATASET, f'X_{band}_test.npy'))
# print loaded dataset info
print('X_test: ', X_test.shape)
# <a id="network"></a>
# # 2.0 Load the trained network
# +
from config import Config
from model import PIUNET
MODEL_FILE = '../../Results/piunet/model_weights_best_20220223_0348.pt'
# MODEL_FILE = '/media/HDD/valsesia/piunet_repo/Results/piunet/red_model_checkpoint.pt'
config = Config()
model = PIUNET(config)
model.cuda()
model.load_state_dict(torch.load(MODEL_FILE))
# -
# <a id="proba"></a>
# # 3.0 Predict Proba-V Test Set
# +
# create output directory
submission_time = time.strftime("%Y%m%d_%H%M")
SUBMISSION_DIR = '../../Results/piunet/'+'test_'+submission_time
# SUBMISSION_DIR='/media/HDD/valsesia/piunet_repo/Results/piunet/'
if not os.path.exists(SUBMISSION_DIR):
os.mkdir(SUBMISSION_DIR)
# -
# ## 3.1 Prediction functions
# +
# vanilla
def predict_image(x_lr, dataset_mu, dataset_sigma, to_numpy=False):
with torch.no_grad():
model.eval()
x_lr = torch.Tensor(np.transpose(x_lr,(0,3,1,2)).astype(np.float32)).to("cuda")
x_sr, sigma_sr = model((x_lr-dataset_mu)/dataset_sigma)
x_sr = x_sr*dataset_sigma + dataset_mu
sigma_sr = torch.exp(sigma_sr + torch.log(torch.Tensor((dataset_sigma,)).to("cuda")))
if to_numpy:
return x_sr.permute(0,2,3,1).detach().cpu().numpy(), sigma_sr.permute(0,2,3,1).detach().cpu().numpy()
else:
return x_sr, sigma_sr
# rotational self-ensemble
def predict_image_se(x_lr, dataset_mu, dataset_sigma, to_numpy=True):
with torch.no_grad():
model.eval()
for r in [0,1,2,3]:
xr_lr = np.rot90(x_lr, k=r, axes=(1,2))
xr_lr = torch.Tensor(np.transpose(xr_lr,(0,3,1,2)).astype(np.float32)).to("cuda")
x_sr, sigma_sr = model((xr_lr-dataset_mu)/dataset_sigma)
x_sr = x_sr*dataset_sigma + dataset_mu
sigma_sr = torch.exp(sigma_sr + torch.log(torch.Tensor((dataset_sigma,)).to("cuda")))
x_sr = x_sr.permute(0,2,3,1).detach().cpu().numpy()
sigma_sr = sigma_sr.permute(0,2,3,1).detach().cpu().numpy()
if r==0:
x_sr_all = np.rot90(x_sr, k=-r, axes=(1,2))/4.0
sigma_sr_all = np.rot90(sigma_sr, k=-r, axes=(1,2))/4.0
else:
x_sr_all = x_sr_all + np.rot90(x_sr, k=-r, axes=(1,2))/4.0
sigma_sr_all = sigma_sr_all + np.rot90(sigma_sr, k=-r, axes=(1,2))/4.0
return x_sr_all, sigma_sr_all
X_preds = []
X_test = X_test[...,:Nimages]
for index in tqdm(range(X_test.shape[0])):
x_pred, sigma_pred = predict_image(X_test[index:index+1], mu, sigma, to_numpy=True)
X_preds.append(x_pred)
# +
def savePredictions(x, band, submission_dir):
"""RAMS save util"""
if band == 'NIR':
i = 1306
elif band == 'RED':
i = 1160
for index in tqdm(range(len(x))):
io.imsave(os.path.join(submission_dir, f'imgset{i}.png'), x[index][0,:,:,0].astype(np.uint16),
check_contrast=False)
i+=1
savePredictions(X_preds, band, SUBMISSION_DIR)
# -
# ## 3.2 Submission zip creation
# +
# zip creation
name_zip=os.path.join(SUBMISSION_DIR,'submission.zip')
zf = ZipFile(name_zip, mode='w')
with tqdm(total=290, desc="Zipping images") as pbar:
    for img in sorted(os.listdir(SUBMISSION_DIR)):
        if not img.endswith('.png'):  # skip the zip file itself and any non-image files
            continue
        zf.write(os.path.join(SUBMISSION_DIR, img), arcname=img)
        pbar.update(1)
zf.close()
print('Done!')
| Code/piunet/test_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import gavia
# gavia can currently parse the camera, navigator, and gps logs. To load these logs into a DataFrame, use the follow commands:
# +
projectdir = '../../data/mission_example'
camlog = gavia.camera.loadlog(projectdir)
navlog = gavia.nav.loadlog(projectdir)
gpslog = gavia.gps.loadlog(projectdir)
# -
# Once parsed, Pandas operations can be applied and we can print the DataFrame headers as follows:
print(list(camlog))
print(camlog.iloc[0])
# Note that the DataFrames are saved to a csv file in the project directory so that logs can be loaded quickly when running the loadlog() functions again.
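# A rough sketch of the caching pattern this implies (function and file names here are assumptions, not gavia's actual API):

```python
import os
import pandas as pd

def load_with_cache(projectdir, parse_fn, cache_name="parsed_log.csv"):
    """Parse a log once, then reuse the cached CSV on later calls."""
    cache_path = os.path.join(projectdir, cache_name)
    if os.path.isfile(cache_path):
        return pd.read_csv(cache_path)  # fast path: reuse the cached parse
    df = parse_fn(projectdir)           # slow path: parse the raw log
    df.to_csv(cache_path, index=False)
    return df
```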
# +
camlog = gavia.camera.loadlog(projectdir)
navlog = gavia.nav.loadlog(projectdir)
gpslog = gavia.gps.loadlog(projectdir)
# -
print(list(camlog))
print(camlog.iloc[0])
| examples/notebooks/load_gavia_logs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## Questions:
# - how many unique question askers are there? Is it just the same people again and again? Does this create a bias? (e.g. there are an awful lot of questions on bees)
# - there are lots of funny questions - what percentage of total? Maybe this shows people prefer the less-serious questions? Or are just science room people particularly comedic?
# - look at percentage of questions on apocalypse, health, technology - seems people are particularly concerned with mortality
# - very few engineering or chemistry questions?
# - most physics questions are space or sound
# - lots of food / nutrition questions
# - SO MUCH PSYCHOLOGY (is this a Jamal bias though?)
#
# **Note**: every time I print out the dataframe, I have made sure not to include columns including surnames or other sensitive data, hence why you see a lot of 'drop' commands there.
# ## Import and process dataset
#
# Import the dataset in and do some basic processing so that it's easier to analyse.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from wordcloud import WordCloud
# +
from datetime import datetime  # pd.datetime was removed in recent pandas versions

def dateparse(x):
    if x == 'Pilot':
        x = '14/06/2017'
    return datetime.strptime(x, '%d/%m/%Y')
funnyparse = lambda x: False if x == 'n' else True
qs = pd.read_csv('broken.csv', skiprows=2, parse_dates=['When?'], date_parser=dateparse,
converters={'Funny':funnyparse, 'human':funnyparse})
# -
qs.drop(['Name' ,'Other comments', 'Who?', 'who?'], axis=1).head()
# +
qs.rename(columns={'When?':'date', 'Will this be a potential talk, blog post, or nothing? ':'output_type',
'Who?':'who', 'Other comments':'comments', 'Name': 'first_name', 'Questions':'questions'}, inplace=True)
qs.drop(['Unnamed: 10', 'who?'], axis=1, inplace=True)
# -
qs.insert(1, 'last_name', None)
for i, r in qs.iterrows():
    if r.last_name is None:
        try:
            names = r['first_name'].split(' ')
            if len(names) > 1:
                qs.at[i, 'first_name'] = names[0]  # .at replaces the removed set_value API
                qs.at[i, 'last_name'] = ' '.join(names[1:])
        except AttributeError:
            pass
qs.drop(['last_name', 'comments', 'who'], axis=1).head(10)
# Plot number of questions asked per month. You can see that there are drops around Christmas and Easter, as would be expected.
fig = qs[['date']].groupby([qs["date"].dt.year, qs["date"].dt.month]).count().plot(kind="bar", legend=False)
fig.set_xlabel('Date')
fig.set_ylabel('Number of questions')
plt.show()
# Rudimentary classification of question types by looking at whether they contain words in the following lists.
# +
# Note: to avoid selection bias, I'm going to need to classify a reasonable fraction of questions.
# Otherwise e.g. I could just be counting the physics questions.
biology_words = ["ecological", "disease", "nature", "plants", "animals", "animal", "biology", "biological",
"neurons", "cancer", "cell", "diet", "evolution", "human", "life", "zombie", "DNA",
"species", "drunk", "nutrition", "virus", "drug", "exercise", "blood", "elephant", "bee", "cat"]
physics_words = ["space", "black hole", "star", "universe", "quantum", "physics", "radiation", "light", "moon",
"mars", "wormhole", "turbulence", "time", "particles", "antimatter", "nuclear", "clock",
"wavefunction", "electricity", "power", "gravity", "planet", "dimension", "gravitation", "telescope",
"magnet", "vibrat", "big bang"]
chemistry_words = ["chemicals", "chemical", "metal", "chemistry", "material", "mineral", "fracking"]
psychology_words = ["psychology", "brain", "conscious", "mind", "memory", "memories"]
# +
qs['genre'] = None
for i, r in qs.iterrows():
    # first try looking at the comments
    try:
        comments = r['comments'].lower()
    except AttributeError:
        comments = ''
    if "biolog" in comments:
        qs.at[i, 'genre'] = 'biology'
    elif "physics" in comments:
        qs.at[i, 'genre'] = 'physics'
    elif "chemist" in comments:
        qs.at[i, 'genre'] = 'chemistry'
    elif "psycholog" in comments:
        qs.at[i, 'genre'] = 'psychology'
    else:
        for w in biology_words:
            if w in r['questions'].lower() + ' ' + comments:
                qs.at[i, 'genre'] = 'biology'
                break
        for w in physics_words:
            if w in r['questions'].lower() + ' ' + comments:
                qs.at[i, 'genre'] = 'physics'
                break
        for w in chemistry_words:
            if w in r['questions'].lower() + ' ' + comments:
                qs.at[i, 'genre'] = 'chemistry'
                break
        for w in psychology_words:
            if w in r['questions'].lower() + ' ' + comments:
                qs.at[i, 'genre'] = 'psychology'
                break
# -
# ## Looking at tags and human / funny
#
# Here, I'm going to look at the tags I've allocated to the questions to see which occur most frequently. I've also tagged each question with the properties 'human' and 'funny' - these are either true or false.
# +
def tag_me(df):
    apocalypse = 0
    psychology = 0
    health = 0
    food = 0
    tag = pd.DataFrame(columns=['freq', 'funny', 'human'])
    for i, r in df.iterrows():
        try:
            if 'apocalypse' in r.Tags:
                apocalypse += 1
            if 'psychology' in r.Tags:
                psychology += 1
            if 'health' in r.Tags:
                health += 1
            if 'food' in r.Tags:
                food += 1
            ts = r.Tags.split(', ')
            for w in ts:
                if w[-1] == ' ':
                    w = w[:-1]
                try:
                    tag.at[w, 'freq'] = tag.at[w, 'freq'] + 1  # .at replaces the removed set_value API
                except KeyError:
                    tag.loc[w] = 0
                    tag.at[w, 'freq'] = 1
                if r.Funny:
                    tag.at[w, 'funny'] = tag.at[w, 'funny'] + 1
                if r.human:
                    tag.at[w, 'human'] = tag.at[w, 'human'] + 1
        except TypeError:
            pass
    print('apocalypse: \t', apocalypse/len(df), '\tpsychology: \t', psychology/len(df))
    print('health: \t', health/len(df), '\tfood: \t\t', food/len(df))
    tag.sort_values('freq', ascending=False, inplace=True)
    for i, r in tag.iterrows():
        tag.at[i, 'funny'] = r.funny / r.freq
        tag.at[i, 'human'] = r.human / r.freq
    return tag
tags = tag_me(qs)
# -
sum(qs.Funny) / len(qs)
sum(qs.human) / len(qs)
# Most common tags: you can see that biology, psychology and physics are the most popular tags. Technology and health also appear a lot - people ask a lot of questions about the future and about the treatment of disease / general health. Ecology pretty much means 'animals' - people quite like asking about these. As an astrophysicist, I'm pleased to see space is much higher than e.g. particle physics :D
tags.head(10)
# +
freq_dict = {}
for i, r in tags.iterrows():
freq_dict[i] = r.freq
w = WordCloud(background_color="white").generate_from_frequencies(freq_dict)
plt.figure(figsize=(12, 10))
plt.imshow(w, interpolation="bilinear")
plt.axis("off")
plt.show()
# -
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.tools as tls
# This plot may not make the most sense as human and funny are not two variables we'd necessarily expect to be correlated, but it does show some interesting things. Zombie questions are always funny; cancer questions never are. Over 90% of technology questions are human-related - we're not so interested in whether cats will be conquering Mars any time soon. Climate change is quite a popular topic, and one people take seriously (only 2/62 climate change questions were funny).
# +
msize = lambda x: np.sqrt(float(x))
fig = tls.make_subplots(rows=1, cols=1)
x = tags.human
y = tags.funny
text = [i + ': ' + str(tags.loc[i]['freq']) for i in tags.index.values]
t = go.Scatter(x=x, y=y, mode='markers', text=text,
marker=dict(size=[3*x**0.5 for x in tags.freq],
color=[3*x**0.5 for x in tags.freq],
colorscale='Viridis'))
fig.append_trace(t, 1, 1)
fig['layout']['xaxis1'].update(title='Human')
fig['layout']['yaxis1'].update(title='Funny')
fig['layout'].update(hovermode='closest')
plotly.offline.iplot(fig)
# -
# ## Finding common question words
#
# Comparing the frequency of query words like 'what', 'where', 'why' should tell us what people want to know and how they ask about things.
# +
qword = dict()
# first find some query words
for i, r in qs.iterrows():
w = r.questions.split(' ')[0][1:].split("'")[0].split(",")[0].lower()
qword.setdefault(w, 0)
qword[w] += 1
# run again using pandas filters so can pick up questions with query words that are not the first word
# of the question string
for k, i in qword.items():
if i >= 3:
qword[k] = len(qs[qs.questions.str.contains(k) | qs.questions.str.contains(k.capitalize())])
# disregard non-query words
dropme = ['in', 'the', 'at', 'is', 'do', 'to']
for d in dropme:
    qword.pop(d, None)
# -
for k in sorted(qword, key=qword.get, reverse=True):
if qword[k] < 3:
break
print(k, qword[k])
# +
labels = sorted(qword, key=qword.get, reverse=True)[:10]
values = [qword[k] for k in labels]
labels.append('other')
values.append(len(qs) - np.sum(np.array(values)))
trace = go.Pie(labels=labels, values=values, textinfo='label')
layout = go.Layout(title='Most common query words', showlegend=False)
fig = go.Figure(data=[trace], layout=layout)
plotly.offline.iplot(fig)
# -
# ## Finding unique questioners & accounting for bias
#
# Looking at whether our set of questions is dominated by the contributions from a small number of people and whether this biases our dataset.
questers = qs[['first_name', 'last_name']].drop_duplicates()
questers.drop('last_name', axis=1).head()
len(questers)
len(qs) / len(questers)
# This demonstrates we indeed have quite a few repeat questioners. Let's look at the most curious:
questers['freq'] = 0
for i, r in questers.iterrows():
    questers.at[i, 'freq'] = len(qs.loc[(qs.first_name == r.first_name) & (qs.last_name == r.last_name)])
questers.sort_values('freq', ascending=False, inplace=True)
questers.drop('last_name', axis=1).head(10)
questers['freq'].describe()
questers.boxplot('freq', vert=False)
plt.show()
trace = go.Box(x=questers.freq, name='All')
layout = go.Layout(title='Number of questions asked per person', showlegend=False)
fig = go.Figure(data=[trace], layout=layout)
plotly.offline.iplot(fig)
trace = go.Box(x=questers.iloc[15:].freq, name='Except top 15')
layout = go.Layout(title='Number of questions asked per person (excluding 15 most curious)', showlegend=False)
fig = go.Figure(data=[trace], layout=layout)
plotly.offline.iplot(fig)
# Looking at that box plot, we see that indeed our dataset is dominated by contributions from just a few people. Let's see what percentage of the total questions are asked by the top 10 askers:
np.sum(questers.freq.head(10).values) / len(qs)
# That's over 25% of the questions asked by just 10 people! This is likely to seriously bias our dataset, as these people are likely to ask questions on similar topics (e.g. Jamal tends to ask about psychology), making our dataset less representative of the general population. To tackle this, let's limit each person to the mean number of questions, rounded up (3). We'll select these randomly from each person's total questions, and re-run some of the analysis a few times with different random selections to see if we get the same results.
from random import sample
qs_unbiased = pd.DataFrame(columns=qs.columns)
for i, r in questers.iterrows():
rs = qs.loc[(qs.first_name == r.first_name) & (qs.last_name == r.last_name)]
if len(rs) > 3:
s = sample(range(len(rs)), 3)
rs = rs.iloc[s]
    qs_unbiased = pd.concat([qs_unbiased, rs])
qs_unbiased.drop(['last_name', 'comments', 'who'], axis=1).head(10)
len(qs_unbiased)
tags_u = tag_me(qs_unbiased)
tags = tag_me(qs)
tags_u.head(10)
tags.head(10)
for i, t in tags_u.head(10).iterrows():
print(i, '\t', t.freq / len(qs_unbiased))
for i, t in tags.head(10).iterrows():
print(i, '\t', t.freq / len(qs))
# This is actually pretty encouraging: the top 10 most popular tags are more or less the same, as are the percentages with which they occur and their funniness / humanity. This suggests that although our dataset may be dominated by the contributions of a relatively small number of people, those contributions are largely representative of the total population and do not appear to bias the results significantly. We can therefore continue our analysis with the complete dataset.
# ## Physics only
#
# Above analysis but only on physics questions
def search_tags(tag_word):
    # return questions whose tags contain tag_word (rows with NaN tags excluded)
    return qs[qs.Tags.str.contains(tag_word, na=False)]
physics_qs = search_tags('physics')
physics_tags = tag_me(physics_qs)
physics_tags.head(10)
freq_dict = {}
for i, r in physics_tags.iterrows():
if i == 'physics':
continue
freq_dict[i] = r.freq
# +
w = WordCloud(background_color="white").generate_from_frequencies(freq_dict)
plt.figure(figsize=(12, 10))
plt.imshow(w, interpolation="bilinear")
plt.axis("off")
plt.show()
# +
msize = lambda x: np.sqrt(float(x))
fig = tls.make_subplots(rows=1, cols=1)
x = physics_tags.human
y = physics_tags.funny
text = [i + ': ' + str(physics_tags.loc[i]['freq']) for i in physics_tags.index.values]
t = go.Scatter(x=x, y=y, mode='markers', text=text,
marker=dict(size=[3*x**0.5 for x in physics_tags.freq],
color=[3*x**0.5 for x in physics_tags.freq],
colorscale='Viridis'))
fig.append_trace(t, 1, 1)
fig['layout']['xaxis1'].update(title='Human')
fig['layout']['yaxis1'].update(title='Funny')
fig['layout'].update(hovermode='closest')
plotly.offline.iplot(fig)
# -
# ## Looking at output
qs.output_type
qs.output_type.describe()
# +
output_types = {'event':0, 'podcast':0, 'content':0, 'infographic':0, 'read':0}
for k, _ in output_types.items():
output_types[k] = len(qs[qs.output_type.str.contains(k) | qs.output_type.str.contains(k.capitalize())])
# -
for k in sorted(output_types, key=output_types.get, reverse=True):
print(k, ' \t', output_types[k])
# Note: it doesn't look like there was much consistency with the 'content' label, so I'm not sure how useful this will be.
# +
# search tags
tag_word = 'space'
qs[qs.Tags.str.contains(tag_word, na=False)].questions.values
# -
tags.index
# +
# search questions
tag_word = 'light'
qs[qs.questions.str.contains(tag_word, na=False)].questions.values
# -
import feedparser
import string
import urllib
# +
youtube_rss_url = "https://www.youtube.com/feeds/videos.xml?channel_id=UC_ELhypSb_rbDylFgLVqSwg"
feed = feedparser.parse(youtube_rss_url)
# -
feed
for i in feed['items']:
print(i['yt_videoid'])
translator = str.maketrans('', '', string.punctuation)
space_translator = str.maketrans(' ', '-')
folder_location = '/home/alice/Dropbox/sr-website-mockups/_posts/'
for i in feed['items']:
youtube = i['yt_videoid']
published = i['published'][:10] + ' ' + i['published'][11:19] + ' ' + i['published'][19:]
author = i['author']
title = i['title']
print(title)
stripped_title = title.translate(translator).translate(space_translator).lower()
if 'podcast' not in title:
file_name = folder_location + 'watch/' + published[:10] + '-' + stripped_title + '.md'
else:
stripped_title = stripped_title[16:]
file_name = folder_location + 'listen/' + published[:10] + '-' + stripped_title + '.md'
print(file_name)
f = open(file_name, 'w')
f.write('---')
f.write('\nyoutube: ' + youtube)
f.write('\npublished: ' + published)
f.write('\nauthor: ' + author)
    # quote the title so that colons etc. don't break the YAML front matter
    f.write('\ntitle: "' + title.replace('"', '') + '"')
f.write('\n---')
f.close()
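# Hand-concatenating the front matter works, but YAML escaping is easy to get wrong (colons, quotes, leading dashes). A minimal sketch of a more defensive variant, using a hypothetical `front_matter` helper that always double-quotes values:

```python
def yaml_quote(value: str) -> str:
    # double-quote the value and escape backslashes and embedded quotes,
    # so colons, '#', leading dashes etc. cannot break the front matter
    return '"' + value.replace('\\', '\\\\').replace('"', '\\"') + '"'

def front_matter(fields: dict) -> str:
    # build a Jekyll-style front matter block from a dict of string fields
    lines = ["---"]
    for key, value in fields.items():
        lines.append(f"{key}: {yaml_quote(value)}")
    lines.append("---")
    return "\n".join(lines) + "\n"

print(front_matter({
    "youtube": "abc123",
    "published": "2020-01-01 12:00:00 +0000",
    "author": "Alice",
    "title": 'Podcast: what is "time"?',
}))  # prints a front matter block with every value safely quoted
```

# The field names mirror the loop above; a library like PyYAML would also handle this, at the cost of a dependency.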
| question_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !pip install detecto
# +
import glob
import numpy as np
import io
import os
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.tensorboard import SummaryWriter
# from torchvision.models.detection import
import skimage
from PIL import Image
from detecto.core import Model
from detecto.visualize import detect_live, detect_video, plot_prediction_grid, show_labeled_image
from detecto.core import DataLoader, Dataset
from detecto.utils import read_image, xml_to_csv, normalize_transform
import matplotlib.pyplot as plt
# -
# model = Model()
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
# +
# # !du -sh /Users/haridas/.cache/torch/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
# +
# detect_live(model)
train_dir = "/home/haridas/projects/mystique/data/train_and_test-2020-Jun-05/train"
test_dir = "/home/haridas/projects/mystique/data/train_and_test-2020-Jun-05/test"
# train_dir = "/home/haridas/projects/mystique/data/train_and_test-2020-05-31/train"
# test_dir = "/home/haridas/projects/mystique/data/train_and_test-2020-05-31/test"
# +
# plot_prediction_grid
train_labels = xml_to_csv(
train_dir,
f"{train_dir}/../train_label.csv"
)
val_labels = xml_to_csv(
test_dir,
f"{test_dir}/../test_label.csv"
)
classes = train_labels['class'].unique().tolist()
# -
train_labels["class"].value_counts()
train_labels["class"].value_counts()
# train_labels.filename.unique()
# # !pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
normalize_transform()
# Image reader and pre-processing pipeline.
transformer = T.Compose([
T.ToPILImage(),
lambda image: image.convert("RGB"),
T.ToTensor(),
normalize_transform()
])
# +
# Pytorch dataset for train and validation.
dataset = Dataset(
f"{train_dir}/../train_label.csv",
image_folder=train_dir,
transform=transformer
)
val_dataset = Dataset(
f"{test_dir}/../test_label.csv",
image_folder=test_dir,
transform=transformer
)
# -
train_dataloader = DataLoader(dataset, batch_size=2)
val_dataloader = DataLoader(val_dataset, batch_size=2)
# +
# ims, lbs = dataset[100]
# show_labeled_image(ims, lbs["boxes"])
# +
# # !pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# torch.cuda("cuda:0")
# torch.cuda.is_available()
# f"{test_dir}/../train_label.csv"
# -
# ## Train models in GPU
torch.cuda.device_count()
new_model = Model(classes)
# +
# dataset._csv.describe()
# _model = torch.nn.DataParallel(new_model)
# -
new_model.fit(dataset, val_dataset=val_dataset, verbose=True)
tb_writer = SummaryWriter("Second")
# CustomModel is assumed to be defined elsewhere: a detecto Model subclass
# that accepts a log_writer and sends training metrics to TensorBoard.
new_model = CustomModel(classes, log_writer=tb_writer)
new_model.fit(train_dataloader, val_dataset=val_dataloader, verbose=True, epochs=20)
# +
# model = Model(classes=["test", 'asdf'])
# model.predict([img])
# new_model.name
#save("/home/haridas/projects/pic2card/model/pth_models/")
# +
# torch.cuda.memory_stats()
# model.predict([img])
# +
# # %debug
# -
# # Load saved model and test
# +
model_path_20epoch = "/home/haridas/projects/pic2card/model/pth_models/faster-rcnn-2020-05-31-1590914103.pth"
model_path_25epoch = "/home/haridas/projects/pic2card/model/pth_models/faster-rcnn-2020-05-31-1590943544-epochs_25.pth"
model_path_35epoch = "/home/haridas/projects/pic2card-models/pytorch/faster-rcnn-2020-06-17-1592424185-epochs_35.pth"
# model_path_10epoch = "/home/haridas/projects/pic2card/model/pth_models/faster-rcnn-2020-05-31-1590928573-epochs_10.pth"
# -
classes
# Load the saved model
model = Model.load("pic2card_model.pth", classes=classes)
model = Model.load(
model_path_25epoch,
classes=classes
)
img = transformer(read_image(f"{train_dir}/1.png"))
# show_labeled_image(
# T.ToPILImage(read_image(f"{train_dir}/1.png"))
# img.shape
im = read_image(f"{test_dir}/104.png")
im_tfs = transformer(im)
type(im_tfs)
labels, boxes, scores = model.predict([im_tfs])[0]
# show_labeled_image(im, boxes, labels)
# +
# list(zip(labels, scores))
# model.predict([im_tfs])[0]
# np.array(classes)
# -
labels, boxes, scores = model.predict(img)
torchvision.transforms.ToPILImage()(img)
# show_labeled_image(img, boxes, labels)
# img.shape
show_labeled_image(img, boxes, labels)
# +
# new_model._get_raw_predictions([img])
# Send the image to gpu.
# img = img.to(new_model._device)
# new_model._model([img])
# +
# model.predict([img])
# new_model.predict([img])
# -
# +
# # !ls ../../mystique/data/train_and_test/train
# new_model._device
# # %debug
# -
# # Tensorboard
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("testing")
writer.add_image("images", torchvision.utils.make_grid([img]), 0)
# +
# # writer.add_scalar?
# +
# # writer.add_image?
writer = SummaryWriter()
for n_iter in range(100):
writer.add_scalar('Loss/train', np.random.random(), n_iter)
writer.add_scalar('Loss/test', np.random.random(), n_iter)
writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
# -
#
| source/pic2card/notebooks/detecto-object-detection-lib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
#from funcs import *
import matplotlib.pyplot as plt
from functools import reduce
import seaborn as seabornInstance
from matplotlib.pyplot import subplot
import operator as op
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
# %matplotlib inline
import cvxpy as cp
from quadratic1 import *
# +
#Number of infected for past two weeks
X = pd.read_csv('data.csv').iloc[:,1:-2].values
#Number of deaths
y = pd.read_csv('data.csv').iloc[:,-2].values
# -
pd.DataFrame(y).plot()
# #### To find best smoothing
# +
from tqdm import tqdm
training_poss = np.arange(3, 50)
loss_even = []
loss_odd = []
for tr in tqdm(training_poss):
loss_even.append(find_best_K(X, y, 'even', training_size=tr)[1])
loss_odd.append(find_best_K(X, y, 'odd', training_size=tr)[1])
# +
x_data = training_poss
y_data0 = loss_even
y_data1 = loss_odd
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x_data, y_data0, label='even smoothing')
ax.plot(x_data, y_data1, label='odd smoothing')
plt.legend()
#plt.title('Mean MAPE loss based on \n the number of training days')
#ax.plot([0.1, 0.2, 0.3, 0.4], [1, 4, 9, 16])
ax.set_xlabel('Number of training days')
ax.set_ylabel('Mean MAPE')
#plt.axvline(x=tr)
#min_ = 'K= ' + str(tr)
#
#ax.annotate(min_, xy=(tr, min(losses[idx])+2))
ax.axis()
# -
minimums = [np.min(loss_even), np.min(loss_odd)]
losses = [loss_even, loss_odd]
idx = np.argmin(minimums) #if 0 even if 1 odd
tr = training_poss[np.argmin(losses[idx])]
parity = ''
if idx == 0:
K = find_best_K(X, y, parity='even',training_size = tr)[0]
X = apply_smoothing(X, K, 'even')
parity = 'even'
else:
K = find_best_K(X, y, parity='odd',training_size = tr)[0]
X = apply_smoothing(X, K, 'odd')
parity = 'odd'
print('The minimum loss of ', min(losses[idx]), ' is obtained using ', tr, ' training days with an', parity, 'parity and K=', K)
def exponential_smoothing(y, rho, K):
    # causal exponential moving average: weights rho**k for k = 0..K,
    # normalized by const so that the weights sum to 1
    const = (1-rho)/(1-rho**(K+1))
    new_y = []
    # indices with enough history to be smoothed
    r_y = np.arange(K, len(y))
    # range of k: K+1 terms, matching the normalization constant above
    r_k = np.arange(0, K+1)
    for i in range(len(y)):
        if i not in r_y:
            new_y.append(y[i])
        else:
            ls = []
            for k in r_k:
                ls.append(int(const*rho**k*y[i-k]))
            new_y.append(np.sum(ls))
    return new_y
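# As a quick sanity check on the smoother: once the weights rho**k for k = 0..K are normalized by const, they sum to 1, so a constant series should be preserved. A self-contained float version (no integer truncation, so the identity is exact up to rounding):

```python
def exp_smooth(y, rho, K):
    # normalized causal exponential weights rho**k, k = 0..K
    const = (1 - rho) / (1 - rho ** (K + 1))
    out = []
    for i in range(len(y)):
        if i < K:
            out.append(y[i])  # not enough history yet: keep the raw value
        else:
            out.append(sum(const * rho**k * y[i - k] for k in range(K + 1)))
    return out

smoothed = exp_smooth([10.0] * 8, rho=0.5, K=3)
print(smoothed)  # every value stays ≈ 10.0
```

# Dropping the k = K term (or leaving const over K+1 terms while summing only K) would shrink every smoothed value by a constant factor, biasing the predictions low.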
# +
#Number of infected for past two weeks
X = pd.read_csv('data.csv').iloc[:,1:-2].values
#Number of deaths
y = pd.read_csv('data.csv').iloc[:,-2].values
tr1 = 44
X0 = apply_smoothing(X, 8, 'even')
tr0 = 43
# -
def compute_preds(X, tr):
#X = apply_smoothing(X, 6, 'even')
#tr = training_poss[np.argmin(loss)]
N = X.shape[1]
splits = int(np.floor((X.shape[0] - tr)/7))
# list of the mape for a given split, this list is reinitialized for every K
temp_mapes = []
y_vals = []
gammas = []
y_preds = []
for i in range(splits):
begin = 7*i
end = tr + 7*i
X_tr = X[begin:end,:]
y_tr = y[begin:end]
X_te = X[end:end+7,:]
y_te = y[end:end+7]
index = find_best_index(X_tr, X_te, y_tr, y_te, 'mape', N)
P, q, G, h = generate_params(X_tr, y_tr, index, N, 10e-15)
gamma = cvxopt_solve_qp(P, q, G, h)
y_pred = X_te@gamma
y_pred[y_pred < 1] = 1
y_pred = np.floor(y_pred)
y_preds.append(y_pred)
gammas.append(gamma)
temp_mapes.append(mape(y_te, y_pred))
print(np.mean(temp_mapes))
y_preds = [item for sublist in y_preds for item in sublist]
y_preds_saved = y_preds
y_preds = np.append(y[:tr],y_preds)
y_preds = exponential_smoothing(y_preds, 0.65, 7)
return y_preds, gammas
#y_preds = exponential_smoothing(y_preds, 0.9, 7)
# +
y_normal, gammas_normal = compute_preds(X, tr1)
y_smoothed, gammas_smoothed = compute_preds(X0, tr0)
y_normal = [y_p if y_p > 1 else 1 for y_p in y_normal]
y_smoothed = [y_p if y_p > 1 else 1 for y_p in y_smoothed]
# -
from matplotlib.pyplot import figure
figure(num=None, figsize=(9, 6), dpi=80, facecolor='w', edgecolor='k')
ax = plt.subplot(111)
plt.plot(y_normal, 'orange', y_smoothed, 'blue', y, 'g')
plt.xlabel('Day')
plt.ylabel('Number of Deaths')
plt.legend(['Predicted value without smoothing','Predicted value with smoothing', 'True value'])
#plt.title('Daily number of deaths in France due to the Covid-19')
len(gammas_normal)
pd.DataFrame(gammas_normal[10]).plot(title='Gammas distribution under QP first formulation',legend=None)
# +
#X = apply_smoothing(X, 6, 'even')
#tr = training_poss[np.argmin(loss)]
N = X.shape[1]
splits = int(np.floor((X.shape[0] - tr)/7))
# list of the mape for a given split, this list is reinitialized for every K
temp_mapes = []
y_vals = []
gammas = []
y_preds = []
for i in range(splits):
begin = 7*i
end = tr + 7*i
X_tr = X[begin:end,:]
y_tr = y[begin:end]
X_te = X[end:end+7,:]
y_te = y[end:end+7]
index = find_best_index(X_tr, X_te, y_tr, y_te, 'mape', N)
P, q, G, h = generate_params(X_tr, y_tr, index, N, 10e-15)
gamma = cvxopt_solve_qp(P, q, G, h)
y_pred = X_te@gamma
y_pred[y_pred < 1] = 1
y_preds.append(y_pred)
gammas.append(gamma)
y_preds = [item for sublist in y_preds for item in sublist]
y_preds_saved = y_preds
y_preds = np.append(y[:tr],y_preds)
y_preds = exponential_smoothing(y_preds, 0.65, 7)
y_preds = [y_p if y_p > 1 else 1 for y_p in y_preds]
#y_preds[y_preds < 1] = 1
#y_preds = exponential_smoothing(y_preds, 0.9, 7)
plt.plot(y_preds, 'b', y, 'g')
plt.xlabel('Day')
plt.ylabel('Number of Deaths')
plt.legend(['Predicted value','True value'])
#plt.title('Daily number of deaths in France due to the Covid-19')
# -
# ### To find best model without smoothing
# +
#Number of infected for past two weeks
X = pd.read_csv('data.csv').iloc[:,1:-2].values
#Number of deaths
y = pd.read_csv('data.csv').iloc[:,-2].values
# +
training_poss = np.arange(3, 60)
loss = []
for tr in tqdm(training_poss):
N = X.shape[1]
splits = int(np.floor((X.shape[0] - tr)/7))
# list of the mape for a given split, this list is reinitialized for every K
temp_mapes = []
y_vals = []
gammas = []
y_preds_no_smoothing = []
for i in range(splits):
begin = 7*i
end = tr + 7*i
X_tr = X[begin:end,:]
y_tr = y[begin:end]
X_te = X[end:end+7,:]
y_te = y[end:end+7]
index = find_best_index(X_tr, X_te, y_tr, y_te, 'mape', N)
P, q, G, h = generate_params(X_tr, y_tr, index, N, 10e-15)
gamma = cvxopt_solve_qp(P, q, G, h)
y_pred = X_te@gamma
y_pred[y_pred < 1] = 1
y_preds_no_smoothing.append(y_pred)
gammas.append(gamma)
temp_mapes.append(mape(y_te,y_pred))
loss.append(np.mean(temp_mapes))
tr = np.argmin(loss)
# -
print('The minimum loss of ', np.min(loss), ' is obtained using ', training_poss[np.argmin(loss)], 'training days')
# +
x_data = training_poss
y_data = loss
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x_data, y_data)
plt.legend()
#plt.title('Mean MAPE loss based on \n the number of training days')
#ax.plot([0.1, 0.2, 0.3, 0.4], [1, 4, 9, 16])
ax.set_xlabel('Number of training days')
ax.set_ylabel('Mean MAPE')
#plt.axvline(x=tr)
#min_ = 'K= ' + str(tr)
#
#ax.annotate(min_, xy=(tr, min(losses[idx])+2))
ax.axis()
# +
#X = apply_smoothing(X, 6, 'even')
tr = training_poss[np.argmin(loss)]
N = X.shape[1]
splits = int(np.floor((X.shape[0] - tr)/7))
# list of the mape for a given split, this list is reinitialized for every K
temp_mapes = []
y_vals = []
gammas = []
y_preds_no_smoothing = []
for i in range(splits):
begin = 7*i
end = tr + 7*i
X_tr = X[begin:end,:]
y_tr = y[begin:end]
X_te = X[end:end+7,:]
y_te = y[end:end+7]
index = find_best_index(X_tr, X_te, y_tr, y_te, 'mape', N)
P, q, G, h = generate_params(X_tr, y_tr, index, N, 10e-15)
gamma = cvxopt_solve_qp(P, q, G, h)
y_pred = X_te@gamma
y_pred[y_pred < 1] = 1
y_preds_no_smoothing.append(y_pred)
gammas.append(gamma)
y_preds_no_smoothing = [item for sublist in y_preds_no_smoothing for item in sublist]
y_preds_saved = y_preds_no_smoothing
y_preds_no_smoothing = np.append(y[:tr],y_preds_no_smoothing)
plt.plot(y_preds_no_smoothing, 'b', y, 'g')
plt.xlabel('Day')
plt.ylabel('Number of Deaths')
plt.legend(['Predicted value','True value'])
#plt.title('Daily number of deaths in France due to the Covid-19')
# +
#X = apply_smoothing(X, 6, 'even')
tr = training_poss[np.argmin(loss)]
N = X.shape[1]
splits = int(np.floor((X.shape[0] - tr)/7))
# list of the mape for a given split, this list is reinitialized for every K
temp_mapes = []
y_vals = []
gammas = []
y_preds_no_smoothing = []
for i in range(splits):
begin = 7*i
end = tr + 7*i
X_tr = X[begin:end,:]
y_tr = y[begin:end]
X_te = X[end:end+7,:]
y_te = y[end:end+7]
index = find_best_index(X_tr, X_te, y_tr, y_te, 'mape', N)
P, q, G, h = generate_params(X_tr, y_tr, index, N, 10e-15)
gamma = cvxopt_solve_qp(P, q, G, h)
y_pred = X_te@gamma
y_pred[y_pred < 1] = 1
y_preds_no_smoothing.append(y_pred)
gammas.append(gamma)
y_preds_no_smoothing = [item for sublist in y_preds_no_smoothing for item in sublist]
y_preds_no_smoothing = exponential_smoothing(y_preds_no_smoothing, 0.65, 7)
y_preds_no_smoothing = np.append(y[:tr],y_preds_no_smoothing)
y_preds_no_smoothing[y_preds_no_smoothing < 1] = 1
plt.plot(y_preds_no_smoothing, 'b', y, 'g')
plt.xlabel('Day')
plt.ylabel('Number of Deaths')
plt.legend(['Predicted value','True value'])
#plt.title('Daily number of deaths in France due to the Covid-19')
# -
np.min(loss)
# +
def apply_smoothing_y(y, K, parity):
y = gauss_filter(y, K, parity=parity)
return y
def exponential_smoothing(x, rho, K):
    # causal exponential moving average: weights rho**k for k = 0..K,
    # normalized by const so that the weights sum to 1
    const = (1-rho)/(1-rho**(K+1))
    new_x = []
    # indices with enough history to be smoothed
    r_x = np.arange(K, len(x))
    # range of k: K+1 terms, matching the normalization constant above
    r_k = np.arange(0, K+1)
    for i in range(len(x)):
        if i not in r_x:
            new_x.append(x[i])
        else:
            ls = []
            for k in r_k:
                ls.append(int(const*rho**k*x[i-k]))
            new_x.append(np.sum(ls))
    return new_x
# -
def find_best_alpha(y_p, y_test):
    """Grid-searches K and alpha for the exponential smoothing and returns
    the (K, alpha) pair that minimizes the MAPE against the true series"""
    alphas = [round(0.05*i, 2) for i in range(20)]
    Ks = np.arange(1, 5)
    y_test = y_test[tr:]
    len_ = min(len(y_test), len(y_p))
    mapes = np.ones((len(Ks), len(alphas)))
    for i, K in enumerate(Ks):
        for j, alpha in enumerate(alphas):
            # smooth a fresh copy each time, otherwise the smoothing compounds
            y_s = exponential_smoothing(list(y_p), alpha, K)
            mapes[i, j] = mape(y_test[:len_], np.array(y_s)[:len_])
    best = np.unravel_index(np.argmin(mapes, axis=None), mapes.shape)
    return Ks[best[0]], alphas[best[1]]
K, alpha = find_best_alpha(y_preds, y)
y_preds = exponential_smoothing(y_preds, alpha, K)
plt.plot(y_preds, 'b', y, 'g')
plt.xlabel('Day')
plt.ylabel('Number of Deaths')
plt.legend(['Predicted value','True value'])
mape(y[tr:],y_preds)
gamma
gammas
pd.DataFrame({'Gamma': gammas[0]}).plot(legend=None)
#title='Gamma distribution for model created on 13th chunk of data',
pd.DataFrame({'Gamma': gammas[0]}).plot(legend=None)
| 2.2 [DEATHS] First Formulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
from scipy.sparse import csr_matrix
from sklearn.preprocessing import normalize
from tqdm import tqdm
import numpy as np
import math
import operator
import warnings
warnings.filterwarnings("ignore")
# +
import pickle
with open('cleaned_strings', 'rb') as f:
corpus = pickle.load(f)
print("Number of documents in corpus = ", len(corpus))
# +
from collections import OrderedDict
def fit(dataset, max=None):
    vocabulary = Counter()
    if isinstance(dataset, list):
        for rows in dataset:
            vocabulary.update([i.lower() for i in rows.split(" ") if len(i)>=2])
        vocabulary = dict(vocabulary)
        if max is None:
            # full vocabulary, sorted alphabetically
            vocabulary = dict(OrderedDict(sorted(vocabulary.items(), key=lambda t: t[0])))
        else:
            # keep the `max` lowest-frequency words (i.e. the highest-IDF ones),
            # then sort the kept words alphabetically
            vocabulary = sorted(vocabulary.items(), key=lambda t: t[1])[0:max]
            vocabulary = dict(OrderedDict(sorted(dict(vocabulary).items(), key=lambda t: t[0])))
        return vocabulary
    else:
        print("you need to pass a list of sentences")
# -
# ### FORMULA
#
# $TF(t) = \frac{\text{Number of times term t appears in a document}}{\text{Total number of terms in the document}}.$
#
# $IDF(t) = 1+\log_{e}\frac{1\text{ }+\text{ Total number of documents in collection}} {1+\text{Number of documents with term t in it}}.$
#
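# To make the formulas concrete, here is a tiny worked example on a two-document toy corpus (plain Python, using exactly the TF and IDF definitions above):

```python
import math

docs = ["the cat sat", "the dog sat on the mat"]

def tf(term, doc):
    # fraction of the document's terms equal to `term`
    words = doc.split()
    return words.count(term) / len(words)

def idf(term, docs):
    # smoothed IDF: 1 + ln((1 + N) / (1 + df))
    n_with_term = sum(term in d.split() for d in docs)
    return 1 + math.log((1 + len(docs)) / (1 + n_with_term))

# "cat" appears in 1 of the 2 documents:
#   tf("cat", docs[0]) = 1/3
#   idf("cat")         = 1 + ln(3/2) ≈ 1.405
print(tf("cat", docs[0]) * idf("cat", docs))  # ≈ 0.468
```

# The `1 +` terms are the smoothing sklearn's `TfidfVectorizer` uses by default, which is why the custom values below can be compared directly against `vectorizer.idf_`.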
# +
import math
def transform(dataset, vocab):
sorted_vocab = list(vocab.keys())
no_doc_WithTerms = dict.fromkeys(sorted_vocab, 0)
words_idf = dict.fromkeys(sorted_vocab, 0)
def column_index(term):
try:
var = sorted_vocab.index(term)
except:
var = -1
return var
rows, columns, values = [], [], []
if isinstance(dataset, list):
for idx, row in enumerate(dataset):
word_freq = dict(Counter(row.split(" ")))
for word, _ in word_freq.items():
if len(word) <=1:
continue
try:
no_doc_WithTerms[str(word)] += 1
except:
pass
for idx, row in enumerate(dataset):
word_freq = dict(Counter(row.split(" ")))
for word, freq in word_freq.items():
if column_index(word) != -1:
rows.append(idx)
columns.append(column_index(word))
tf = freq / sum(list(word_freq.values()))
no_of_doc = 1 + len(dataset)
no_doc_WithTerm = 1 + no_doc_WithTerms[word]
idf = 1 + math.log(no_of_doc / float(no_doc_WithTerm))
words_idf[word] = idf
values.append(tf*idf)
words_idf = dict(OrderedDict(sorted(words_idf.items(), key=lambda t: t[0])))
return normalize(csr_matrix( ((values),(rows,columns)), shape=(len(dataset),len(vocab)))), words_idf
# -
# # ``` Test 1 ```
# +
corpus1 = [
'this is the first document',
'this document is the second document',
'and this is the third one',
'is this the first document',
]
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus1)
skl_output = vectorizer.transform(corpus1)
print(vectorizer.get_feature_names(), "\n\n")
print(vectorizer.idf_, "\n\n")
print(skl_output.todense()[0])
# -
vocab = fit(corpus1)
print(list(vocab.keys()), "\n\n")
sparse, idf = transform(corpus1, vocab)
print(list(idf.values()), "\n\n", sparse.todense()[0])
# # ``` TASK 1 ```
# +
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
skl_output = vectorizer.transform(corpus)
print(vectorizer.get_feature_names()[0:5], "\n\n")
print(vectorizer.idf_[0:10], "\n\n")
print(skl_output.todense()[0], "\n\n")
print(skl_output.todense().shape, "\n\n")
vocab = fit(corpus)
print(list(vocab.keys())[0:5], "\n\n")
sparse, idf = transform(corpus, vocab)
print(list(idf.values())[0:10], "\n\n", sparse.todense()[0], "\n\n", sparse.todense().shape)
# -
# # ``` TASK 2```
# +
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
skl_output = vectorizer.transform(corpus)
print(vectorizer.idf_[:50], "\n\n")
vocab = fit(corpus, max=50)
print(vocab, "\n\n")
sparse, idf = transform(corpus, vocab)
print(list(idf.values())[0:50], "\n\n", sparse.todense().shape, "\n\n", sparse.todense()[0])
# -
| 03_Custom_TFIDF/python_custom_tfidf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# It's really handy to have all the DICOM info available in a single DataFrame, so let's create that! In this notebook, we'll just create the DICOM DataFrames. To see how to use them to analyze the competition data, see [this followup notebook](https://www.kaggle.com/jhoward/some-dicom-gotchas-to-be-aware-of-fastai).
#
# First, we'll install the latest versions of pytorch and fastai v2 (not officially released yet) so we can use the fastai medical imaging module.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from fastai2.basics import *
from fastai2.medical.imaging import *
# -
# Let's take a look at what files we have in the dataset.
path = Path('~/data/rsna').expanduser()
path_meta = path/'meta'
# Most lists in fastai v2, including that returned by `Path.ls`, are returned as a [fastai.core.L](http://dev.fast.ai/core.html#L), which has lots of handy methods, such as `attrgot` used here to grab file names.
path_trn = path/'stage_1_train_images'
fns_trn = path_trn.ls()
fns_trn[:5].attrgot('name')
path_tst = path/'stage_1_test_images'
fns_tst = path_tst.ls()
len(fns_trn),len(fns_tst)
# We can grab a file and take a look inside using the `dcmread` method that fastai v2 adds.
fn = fns_trn[0]
dcm = fn.dcmread()
dcm
# # Labels
# Before we pull the metadata out of the DICOM files, let's process the labels into a convenient format and save it for later. We'll use *feather* format because it's lightning fast!
def save_lbls():
path_lbls = path/'stage_1_train.csv'
lbls = pd.read_csv(path_lbls)
lbls[["ID","htype"]] = lbls.ID.str.rsplit("_", n=1, expand=True)
lbls.drop_duplicates(['ID','htype'], inplace=True)
pvt = lbls.pivot('ID', 'htype', 'Label')
pvt.reset_index(inplace=True)
pvt.to_feather(path_meta/'labels.fth')
save_lbls()
df_lbls = pd.read_feather(path_meta/'labels.fth').set_index('ID')
df_lbls.head(8)
df_lbls.mean()
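# The `rsplit`/`pivot` step inside `save_lbls` is worth seeing on a toy frame: each CSV row encodes both the image id and the hemorrhage type in a single `ID` string, and the pivot turns that into one row per image (illustrative data, not the real competition labels):

```python
import pandas as pd

# Toy version of the stage_1_train.csv layout: one row per (image, hemorrhage type).
lbls = pd.DataFrame({
    "ID": ["ID_a_epidural", "ID_a_any", "ID_b_epidural", "ID_b_any"],
    "Label": [0, 0, 1, 1],
})
# "ID_a_epidural" -> image id "ID_a", hemorrhage type "epidural".
lbls[["ID", "htype"]] = lbls.ID.str.rsplit("_", n=1, expand=True)
pvt = lbls.pivot(index="ID", columns="htype", values="Label")
print(pvt)
```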
# # DICOM Meta
# To turn the DICOM file metadata into a DataFrame we can use the `from_dicoms` function that fastai v2 adds. By passing `px_summ=True` summary statistics of the image pixels (mean/min/max/std) will be added to the DataFrame as well (although it takes much longer if you include this, since the image data has to be uncompressed).
df_tst = pd.DataFrame.from_dicoms(fns_tst, px_summ=True, window=dicom_windows.brain)
df_tst.to_feather('df_tst.fth')
df_tst.head()
# %time df_trn = pd.DataFrame.from_dicoms(fns_trn, px_summ=True, window=dicom_windows.brain)
df_trn.to_feather('df_trn.fth')
# There is one corrupted DICOM in the competition data, so the command above prints out the information about this file. Despite the error message shown above, the command completes successfully, and the data from the corrupted file is not included in the output DataFrame.
df_trn.query('SOPInstanceUID=="ID_6431af929"')
| kaggle_kernels/00_metadata_stage_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
years = range(15)
babies, adults = 1, 0
for y in years:
print(y)
msg = """
Year: {}
Babies: {}
Adults: {}
Rabbits: {}
""".format(y+1, babies, adults, babies+adults)
print(msg)
babies, adults = adults, adults+babies
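# The totals implied by the update rule `babies, adults = adults, adults + babies` follow the Fibonacci sequence — each year's population is the sum of the previous two. A quick check:

```python
def rabbit_totals(n_years):
    # Same update rule as the loop above, collecting total population per year.
    babies, adults = 1, 0
    totals = []
    for _ in range(n_years):
        totals.append(babies + adults)
        babies, adults = adults, adults + babies
    return totals

print(rabbit_totals(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```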
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1yKhLP03hArF"
# As we are not developing a chit-chat bot but a virtual conversation between a user and an expert, we will make some assumptions:
#
# • The user will not use emoji or abbreviations in the same sentence with an important request.
#
# • The user might use emoji/abbreviations as a reaction/reply or in a follow up discussion.
#
# • The user might use emoji/abbreviations in a more complex input and a reply might be regarded as emotional and thus might help the conversation going forward.
#
# Taking this into consideration, we will implement a secondary flow that will have specific answers/policies and can exist together with the main flow, or standalone if no other input is provided by the user.
#
# Important: Abbreviations that are not specific to chit-chat needs will be handled in the main flow, together with the sentence intent. For example: abbreviations for institutes that are named entities (NER).
#
# + id="9QrCI1_cg6TF"
# IN: inputs from Auto-correct and Processing I
# OUT: send identified emoji/abbreviations to the following pipelines: Untrained NIU/Reaction Analysis
# + [markdown] id="HK9_k_wXhXjI"
# Objectives:
#
# • Gathering the emoji/abbreviations from auto-correct and processing, and marking them for Untrained NIU/Reactions;
#
# Language specificities: no;
#
# Dependencies: Auto-correct /Processing I/Untrained NIU/ Reaction analysis;
#
# Database/ Vocabularies needed: Emoji/Abbreviations vocabularies;
#
# + id="KFSoniTkhqp6"
# To dos:
# 1. Mark sentences with emoji/abbreviations for Untrained answers.
# 2. Classify all the emoji/abbreviations in 6 categories (sad, blink, kiss, smile, cool, laughing out loud).
# 3. Sad and Smile will be sent to the Reaction analysis pipeline.
# 4. The rest will be sent to Untrained NIU.
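# A sketch of to-dos 2-4: a small emoji/abbreviation-to-category mapping and a router that decides the downstream pipeline. The tokens, category names, and pipeline names here are illustrative placeholders, not the project's real vocabularies:

```python
# Hypothetical vocabulary: token -> one of the six categories.
EMOJI_CATEGORIES = {
    ":(": "sad", ":'(": "sad",
    ";)": "blink",
    ":*": "kiss",
    ":)": "smile", ":D": "smile",
    "B)": "cool",
    "lol": "laughing out loud", "lmao": "laughing out loud",
}

def route(token):
    """Decide which pipeline a recognized emoji/abbreviation is sent to."""
    category = EMOJI_CATEGORIES.get(token)
    if category in ("sad", "smile"):
        return "reaction_analysis"   # to-do 3
    if category is not None:
        return "untrained_niu"       # to-do 4
    return "main_flow"               # not a chit-chat token
```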
# + [markdown] id="0ouZs8mAhzeR"
# Use adapted code
#
# Code example, but please adapt to the objectives
#
# + id="E44ouNp-jnBK"
# https://github.com/amanchadha/coursera-deep-learning-specialization/blob/master/C5%20-%20Sequence%20Models/Week%202/Emojify/Emojify%20-%20v2.ipynb
| NIU-NLU dummy codes/5_Emoji_Abbreviations_(EER)_dummy_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
sys.path.insert(0, '../src')
import classifier
import detector
from image import Image
# +
from image import build_histogram_equalizer
TRAIN_DATA_DIR = os.path.abspath("../trainset")
COLORS = ['COLOR_STOP_SIGN_RED', 'COLOR_OTHER_RED',
'COLOR_BROWN' , 'COLOR_ORANGE' ,
'COLOR_BLUE' , 'COLOR_OTHER' ]
data = {c: [] for c in COLORS}
files = os.listdir(TRAIN_DATA_DIR)
for fname in files:
name, ext = os.path.splitext(fname)
if ext == ".npz":
if name + '.jpg' in files:
img = Image.load(os.path.join(TRAIN_DATA_DIR, name) + '.jpg')
elif name + '.png' in files:
img = Image.load(os.path.join(TRAIN_DATA_DIR, name) + '.png')
npzfname = os.path.join(TRAIN_DATA_DIR, fname)
npzdata = np.load(npzfname)
for c in COLORS:
if npzdata[c].size > 0:
mat = npzdata[c]
mat = mat.reshape(-1, 3).astype(np.uint8)
data[c].append(mat)
for c in COLORS:
data[c] = np.vstack(data[c])
print('---- done ------')
# +
N_DATA_PER_CLASS = 200000
labelmp = {
'COLOR_STOP_SIGN_RED': 0,
'COLOR_OTHER_RED': 1,
'COLOR_ORANGE': 2,
'COLOR_BROWN': 3,
'COLOR_BLUE': 4,
'COLOR_OTHER': 5
}
X, y = [], []
for ci, c in enumerate(COLORS):
print(c, data[c].shape)
rndidx = np.random.choice(data[c].shape[0], N_DATA_PER_CLASS, replace=False)
x = data[c][rndidx, :]
xycc = cv2.cvtColor(x.reshape(-1, 1, 3).astype(np.uint8), cv2.COLOR_RGB2YCrCb).reshape(-1, 3)
xhsv = cv2.cvtColor(x.reshape(-1, 1, 3).astype(np.uint8), cv2.COLOR_RGB2HSV).reshape(-1, 3)
x = np.hstack([x, xycc, xhsv])
X.append(x)
y.append(np.ones((N_DATA_PER_CLASS, 1)) * labelmp[c])
X = np.vstack(X).astype(np.float64)
y = np.vstack(y).astype(np.int32).reshape(-1)
print('-----------done------------')
# +
def ssred_accuracy(clf, X, y):
pred = clf.predict(X)
pred = pred == 0
y = y == 0
return np.sum(pred == y) / y.shape[0]
def ssred_precision(clf, X, y):
pred = clf.predict(X)
pred = pred == 0
y = y == 0
return np.sum(pred[pred == y]) / np.sum(pred)
def ssred_recall(clf, X, y):
pred = clf.predict(X)
pred = pred == 0
y = y == 0
return np.sum(pred[pred == y]) / np.sum(y)
scoring = {
'accuracy': ssred_accuracy,
'precision': ssred_precision,
'recall': ssred_recall
}
def print_scores(scores):
for key, val in scores.items():
print(f'\t{key}: %0.2f (+/- %0.2f)' % (val.mean(), val.std() * 2))
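# These metrics treat class 0 (stop-sign red) as the positive class. One quick way to sanity-check them is a stub classifier whose predictions are fixed (the precision definition is repeated here so the sketch is self-contained):

```python
import numpy as np

class StubClassifier:
    """Ignores the input and returns fixed predictions."""
    def __init__(self, preds):
        self.preds = np.asarray(preds)
    def predict(self, X):
        return self.preds

def ssred_precision(clf, X, y):
    # Same definition as above: precision for the "class 0" positive class.
    pred = clf.predict(X) == 0
    y = y == 0
    return np.sum(pred[pred == y]) / np.sum(pred)

y_true = np.array([0, 0, 1, 2])
clf = StubClassifier([0, 1, 0, 2])  # one true positive, one false positive
print(ssred_precision(clf, None, y_true))  # 0.5
```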
# +
# # %reload_ext autoreload
# from sklearn.model_selection import cross_validate
# from sklearn.utils import shuffle
# from classifier import LogisticRegression
# X, y = shuffle(X, y)
# XX = np.hstack([X, np.ones((X.shape[0], 1))])
# clf = LogisticRegression(max_iter=200, learning_rate=0.01, batchsize=3000)
# lr_score = cross_validate(clf, XX, y, cv=5, n_jobs=-1, scoring=scoring, error_score='raise')
# print('Logistic Regression')
# print_scores(lr_score)
# +
# %reload_ext autoreload
from sklearn.model_selection import cross_validate
from sklearn.utils import shuffle
from classifier import OneVsAllLogisticRegression
X, y = shuffle(X, y)
XX = np.hstack([X, np.ones((X.shape[0], 1))])
clf = OneVsAllLogisticRegression(max_iter=500, learning_rate=0.005, batchsize=3000)
ovalr_score = cross_validate(clf, XX, y, cv=5, n_jobs=-1, scoring=scoring, error_score='raise')
print('1vall Logistic Regression')
print_scores(ovalr_score)
# +
# %reload_ext autoreload
from sklearn.model_selection import cross_validate
from sklearn.utils import shuffle
from classifier import KaryLogisticRegression
X, y = shuffle(X, y)
XX = np.hstack([X, np.ones((X.shape[0], 1))])
clf = KaryLogisticRegression(max_iter=500, learning_rate=0.005, batchsize=3000)
klr_score = cross_validate(clf, XX, y, cv=5, n_jobs=-1, scoring=scoring, error_score='raise')
print('Kary Logistic Regression')
print_scores(klr_score)
# +
# %reload_ext autoreload
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.utils import shuffle
from classifier import GaussianNaiveBayes
X, y = shuffle(X, y)
XX = X
clf = classifier.GaussianNaiveBayes()
gnb_score = cross_validate(clf, XX, y, cv=5, n_jobs=-1, scoring=scoring, error_score='raise')
print('Gaussian Naive Bayes')
print_scores(gnb_score)
# +
# X, y = shuffle(X, y, random_state=1)
# clf = classifier.GaussianNaiveBayes()
# clf.fit(X, y)
# clf.save('../model/gnb_300000_histeq.pic')
| notebooks/pixel-learning-rgbhsvycrcb-6class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="X5KJmEfIJWKH"
# # Assignment 3
#
# ## Question 1 (12 points)
# Using the [Framingham Heart Study dataset](https://github.com/soltaniehha/Business-Analytics/blob/master/data/AnalyticsEdge-Datasets/Framingham.csv) create a **logistic regression** model to predict whether a patient will develop heart disease in 10 years or not.
#
# Follow the steps outlined in the [Classification notebook](https://github.com/soltaniehha/Business-Analytics/blob/master/09-Machine-Learning-Overview/03-Classification.ipynb):
# * Preprocessing: deleting columns with no predictive power/handling missing values
# * Preprocessing: handle categorical variables, if any
# * Create feature matrix and target vector. Our target variable is `TenYearCHD`
# * Split the data randomly into train and test with a 70-30 split (use `random_state=833`)
# * Instantiate and fit a logistic regression model
# * Make predictions and find the overall accuracy, sensitivity, and specificity on your test set
#
# **Note:** We have seen this dataset during the discussion on the Framingham Heart Study from Analytics Edge.
#
# ## Question 2 (8 points)
# Open ended - Do further data exploration and create new variables when possible (feature engineering). Show your discovery process using plots and summaries.
# * How does the model performance change by adding new variables or potentially removing some of the less important ones?
# * How does the model performance change by trying different classification models?
#
# ---
#
# ### Upload your .ipynb file to Questrom Tools
#
# A potential issue is downloading the notebook before it is fully saved. To avoid this, follow these steps:
# 1. go to Runtime (in the menu) and hit "Restart and run all..."
# 2. after the notebook is fully run, save it and then download your .ipynb to your computer
# 3. upload it back to your Drive and open it with Colab to ensure all of your recent changes are there
# 4. upload the originally downloaded file to Questrom Tools.
#
# ---
#
# The data has been loaded in the following cell:
# + id="bTC6KZksJWKJ" colab={"base_uri": "https://localhost:8080/", "height": 158} executionInfo={"status": "ok", "timestamp": 1618369278798, "user_tz": 240, "elapsed": 1153, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjHn5A5nCXb4FsIfIBdin6XTbj6u54lww1hF6ECkT8=s64", "userId": "12308918870841825745"}} outputId="f53a7961-b729-4f07-84bf-4986f4ad93f1"
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/AnalyticsEdge-Datasets/Framingham.csv')
df.head(3)
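# As a reminder for the metrics asked for in Question 1, overall accuracy, sensitivity, and specificity follow directly from the confusion-matrix counts (toy numbers below, not the assignment solution):

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
print(accuracy, sensitivity, specificity)  # 0.75 0.75 0.75
```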
| 09-Machine-Learning-Overview/Assignment-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df = pd.read_csv('data/patients.csv')
df
# +
# Create a new column named BMI and compute it
df['BMI'] = df['WEIGHT'] / ((df['HEIGHT']/100)**2)
# Another approach: drop the column above first
# -
df
df[df['BMI'] > 22]
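# Beyond a simple threshold, BMI can also be bucketed into the usual categories with `pd.cut`. A sketch on synthetic data, using the common 18.5/25/30 cutoffs (the real notebook works on `data/patients.csv`):

```python
import pandas as pd

# Synthetic stand-in for the patients file.
demo = pd.DataFrame({"WEIGHT": [50, 70, 95], "HEIGHT": [160, 175, 170]})
demo["BMI"] = demo["WEIGHT"] / ((demo["HEIGHT"] / 100) ** 2)

# Bucket BMI into categories at the 18.5 / 25 / 30 cutoffs.
demo["CATEGORY"] = pd.cut(demo["BMI"],
                          bins=[0, 18.5, 25, 30, float("inf")],
                          labels=["underweight", "normal", "overweight", "obese"])
print(demo)
```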
# +
df.to_csv('Patient_BMI.csv', index = False, encoding ='utf-8-sig')
# index = False: do not save the row index to the file
# utf-8-sig: encoding that supports Thai characters
# -
| Workshop1 Patient BMI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bayesian Regression - Introduction (Part 1)
#
# Regression is one of the most common and basic supervised learning tasks in machine learning. Suppose we're given a dataset $\mathcal{D}$ of the form
#
# $$ \mathcal{D} = \{ (X_i, y_i) \} \qquad \text{for}\qquad i=1,2,...,N$$
#
# The goal of linear regression is to fit a function to the data of the form:
#
# $$ y = w X + b + \epsilon $$
#
# where $w$ and $b$ are learnable parameters and $\epsilon$ represents observation noise. Specifically $w$ is a matrix of weights and $b$ is a bias vector.
#
# Let's first implement linear regression in PyTorch and learn point estimates for the parameters $w$ and $b$. Then we'll see how to incorporate uncertainty into our estimates by using Pyro to implement Bayesian regression.
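# As a point of reference, the least-squares point estimates for $w$ and $b$ are available in closed form; a minimal numpy sketch on synthetic data, independent of the PyTorch/Pyro models below:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, b_true = 2.5, -1.0
X = rng.normal(size=(200, 1))
y = w_true * X[:, 0] + b_true + 0.1 * rng.normal(size=200)

# Append a column of ones so the bias is estimated jointly with the weight.
A = np.hstack([X, np.ones((200, 1))])
(w_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w_hat, b_hat)  # close to 2.5 and -1.0
```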
# ## Setup
# Let's begin by importing the modules we'll need.
# +
import os
from functools import partial
import numpy as np
import pandas as pd
import seaborn as sns
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import pyro
from pyro.distributions import Normal, Uniform, Delta
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from pyro.distributions.util import logsumexp
from pyro.infer import EmpiricalMarginal, SVI, Trace_ELBO, TracePredictive
from pyro.infer.mcmc import MCMC, NUTS
import pyro.optim as optim
import pyro.poutine as poutine
# for CI testing
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.5.0')
pyro.enable_validation(True)
pyro.set_rng_seed(1)
pyro.enable_validation(True)
# %matplotlib inline
# -
# ### Dataset
#
# The following example is adapted from \[1\]. We would like to explore the relationship between topographic heterogeneity of a nation as measured by the Terrain Ruggedness Index (variable *rugged* in the dataset) and its GDP per capita. In particular, it was noted by the authors in \[1\] that terrain ruggedness or bad geography is related to poorer economic performance outside of Africa, but rugged terrains have had a reverse effect on income for African nations. Let us look at the data \[2\] and investigate this relationship. We will be focusing on three features from the dataset:
# - `rugged`: quantifies the Terrain Ruggedness Index
# - `cont_africa`: whether the given nation is in Africa
# - `rgdppc_2000`: Real GDP per capita for the year 2000
#
# We will take the logarithm for the response variable GDP as it tends to vary exponentially.
DATA_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/rugged_data.csv"
data = pd.read_csv(DATA_URL, encoding="ISO-8859-1")
df = data[["cont_africa", "rugged", "rgdppc_2000"]]
df = df[np.isfinite(df.rgdppc_2000)]
df["rgdppc_2000"] = np.log(df["rgdppc_2000"])
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = data[data["cont_africa"] == 1]
non_african_nations = data[data["cont_africa"] == 0]
sns.scatterplot(non_african_nations["rugged"],
np.log(non_african_nations["rgdppc_2000"]),
ax=ax[0])
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
sns.scatterplot(african_nations["rugged"],
np.log(african_nations["rgdppc_2000"]),
ax=ax[1])
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations")
# ## Linear Regression
# We would like to predict log GDP per capita of a nation as a function of two features from the dataset - whether the nation is in Africa, and its Terrain Ruggedness Index. Let's define our regression model. We'll use PyTorch's `nn.Module` for this. Our input $X$ is a matrix of size $N \times 2$ and our output $y$ is a vector of size $N \times 1$. The function `nn.Linear(p, 1)` defines a linear transformation of the form $Xw + b$ where $w$ is the weight matrix and $b$ is the additive bias. We include an extra `self.factor` term meant to capture the correlation between ruggedness and whether a country is in Africa.
#
# Note that we can easily make this a logistic regression by adding a non-linearity in the `forward()` method.
# +
class RegressionModel(nn.Module):
def __init__(self, p):
# p = number of features
super(RegressionModel, self).__init__()
self.linear = nn.Linear(p, 1)
self.factor = nn.Parameter(torch.tensor(1.))
def forward(self, x):
return self.linear(x) + (self.factor * x[:, 0] * x[:, 1]).unsqueeze(1)
p = 2 # number of features
regression_model = RegressionModel(p)
# -
# ## Training
# We will use the mean squared error (MSE) as our loss and Adam as our optimizer. We would like to optimize the parameters of the `regression_model` neural net above. We will use a somewhat large learning rate of `0.05` and run for 1500 iterations.
# +
loss_fn = torch.nn.MSELoss(reduction='sum')
optim = torch.optim.Adam(regression_model.parameters(), lr=0.05)
num_iterations = 1500 if not smoke_test else 2
data = torch.tensor(df.values, dtype=torch.float)
x_data, y_data = data[:, :-1], data[:, -1]
def main():
x_data = data[:, :-1]
y_data = data[:, -1]
for j in range(num_iterations):
# run the model forward on the data
y_pred = regression_model(x_data).squeeze(-1)
# calculate the mse loss
loss = loss_fn(y_pred, y_data)
# initialize gradients to zero
optim.zero_grad()
# backpropagate
loss.backward()
# take a gradient step
optim.step()
if (j + 1) % 50 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss.item()))
# Inspect learned parameters
print("Learned parameters:")
for name, param in regression_model.named_parameters():
print(name, param.data.numpy())
main()
# -
#
#
# [Bayesian modeling](http://mlg.eng.cam.ac.uk/zoubin/papers/NatureReprint15.pdf) offers a systematic framework for reasoning about model uncertainty. Instead of just learning point estimates, we're going to learn a _distribution_ over variables that are consistent with the observed data.
# ## Bayesian Regression
#
# In order to make our linear regression Bayesian, we need to put priors on the parameters $w$ and $b$. These are distributions that represent our prior belief about reasonable values for $w$ and $b$ (before observing any data).
#
# ### `random_module()`
#
# In order to do this, we'll 'lift' the parameters of our existing model to random variables. We can do this in Pyro via `random_module()`, which effectively takes a given `nn.Module` and turns it into a distribution over the same module; in our case, this will be a distribution over regressors. Specifically, each parameter in the original regression model is sampled from the provided prior. This allows us to repurpose vanilla regression models for use in the Bayesian setting. For example:
# ```python
# loc = torch.zeros(1, 1)
# scale = torch.ones(1, 1)
# # define a unit normal prior
# prior = Normal(loc, scale)
# # overload the parameters in the regression module with samples from the prior
# lifted_module = pyro.random_module("regression_module", nn, prior)
# # sample a nn from the prior
# sampled_reg_model = lifted_module()
# ```
# ### Model
#
# We now have all the ingredients needed to specify our model. First we define priors over weights, biases, and `factor`. Note the priors that we are using for the different latent variables in the model. The prior on the intercept parameter is very flat as we would like this to be learnt from the data. We are using a weakly regularizing prior on the regression coefficients to avoid overfitting to the data.
#
# We wrap `regression_model` with `random_module` and sample an instance of the regressor, `lifted_reg_model`. We then run the regressor on `x_data`. Finally we use the `obs` argument to the `pyro.sample` statement to condition on the observed data `y_data` with a learned observation noise `sigma`.
def model(x_data, y_data):
# weight and bias priors
w_prior = Normal(torch.zeros(1, 2), torch.ones(1, 2)).to_event(1)
b_prior = Normal(torch.tensor([[8.]]), torch.tensor([[10.]])).to_event(1)
f_prior = Normal(0., 1.)
priors = {'linear.weight': w_prior, 'linear.bias': b_prior, 'factor': f_prior}
scale = pyro.sample("sigma", Uniform(0., 10.))
# lift module parameters to random variables sampled from the priors
lifted_module = pyro.random_module("module", regression_model, priors)
# sample a nn (which also samples w and b)
lifted_reg_model = lifted_module()
with pyro.plate("map", len(x_data)):
# run the nn forward on data
prediction_mean = lifted_reg_model(x_data).squeeze(-1)
# condition on the observed data
pyro.sample("obs",
Normal(prediction_mean, scale),
obs=y_data)
return prediction_mean
# ### Guide
#
# In order to do inference we're going to need a guide, i.e. a variational family of distributions. We will use Pyro's [autoguide library](http://docs.pyro.ai/en/dev/infer.autoguide.html) to automatically place Gaussians with diagonal covariance on all of the distributions in the model. Under the hood, this defines a `guide` function with `Normal` distributions with learnable parameters corresponding to each `sample()` in the model. Autoguide also supports learning MAP estimates with `AutoDelta` or composing guides with `AutoGuideList` (see the [docs](http://docs.pyro.ai/en/dev/infer.autoguide.html) for more information). In [Part II](bayesian_regression_ii.ipynb) we will explore how to write guides by hand.
# +
from pyro.infer.autoguide import AutoDiagonalNormal
guide = AutoDiagonalNormal(model)
# -
# ## Inference
#
# To do inference we'll use stochastic variational inference (SVI) (for an introduction to SVI, see [SVI Part I](svi_part_i.ipynb)). Just like in the non-Bayesian linear regression, each iteration of our training loop will take a gradient step, with the difference that in this case, we'll use the ELBO objective instead of the MSE loss by constructing a `Trace_ELBO` object that we pass to `SVI`.
optim = Adam({"lr": 0.03})
svi = SVI(model, guide, optim, loss=Trace_ELBO(), num_samples=1000)
# Here `Adam` is a thin wrapper around `torch.optim.Adam` (see [here](svi_part_i.ipynb#Optimizers) for a discussion). To take an ELBO gradient step we simply call the step method of SVI. Notice that the data argument we pass to step will be passed to both model() and guide(). The complete training loop is as follows:
# +
def train():
pyro.clear_param_store()
for j in range(num_iterations):
# calculate the loss and take a gradient step
loss = svi.step(x_data, y_data)
if j % 100 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))
train()
# -
for name, value in pyro.get_param_store().items():
print(name, pyro.param(name))
# As you can see, instead of just point estimates, we now have uncertainty estimates (`auto_scale`) for our learned parameters. Note that Autoguide packs the latent variables into a tensor, in this case, one entry per variable sampled in our model.
# ## Model Evaluation
# To evaluate our model, we'll generate some predictive samples and look at the posteriors. Since our variational distribution is fully parameterized, we can just run the lifted model forward. We wrap the model with a `Delta` distribution in order to register the values with Pyro. We then store the execution traces in the `posterior` object with `svi.run()`.
# +
get_marginal = lambda traces, sites:EmpiricalMarginal(traces, sites)._get_samples_and_weights()[0].detach().cpu().numpy()
def summary(traces, sites):
marginal = get_marginal(traces, sites)
site_stats = {}
for i in range(marginal.shape[1]):
site_name = sites[i]
marginal_site = pd.DataFrame(marginal[:, i]).transpose()
describe = partial(pd.Series.describe, percentiles=[.05, 0.25, 0.5, 0.75, 0.95])
site_stats[site_name] = marginal_site.apply(describe, axis=1) \
[["mean", "std", "5%", "25%", "50%", "75%", "95%"]]
return site_stats
def wrapped_model(x_data, y_data):
pyro.sample("prediction", Delta(model(x_data, y_data)))
posterior = svi.run(x_data, y_data)
# -
# posterior predictive distribution we can get samples from
trace_pred = TracePredictive(wrapped_model,
posterior,
num_samples=1000)
post_pred = trace_pred.run(x_data, None)
post_summary = summary(post_pred, sites= ['prediction', 'obs'])
mu = post_summary["prediction"]
y = post_summary["obs"]
predictions = pd.DataFrame({
"cont_africa": x_data[:, 0],
"rugged": x_data[:, 1],
"mu_mean": mu["mean"],
"mu_perc_5": mu["5%"],
"mu_perc_95": mu["95%"],
"y_mean": y["mean"],
"y_perc_5": y["5%"],
"y_perc_95": y["95%"],
"true_gdp": y_data,
})
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = predictions[predictions["cont_africa"] == 1]
non_african_nations = predictions[predictions["cont_africa"] == 0]
african_nations = african_nations.sort_values(by=["rugged"])
non_african_nations = non_african_nations.sort_values(by=["rugged"])
fig.suptitle("Regression line 90% CI", fontsize=16)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["mu_mean"])
ax[0].fill_between(non_african_nations["rugged"],
non_african_nations["mu_perc_5"],
non_african_nations["mu_perc_95"],
alpha=0.5)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["true_gdp"],
"o")
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
idx = np.argsort(african_nations["rugged"])
ax[1].plot(african_nations["rugged"],
african_nations["mu_mean"])
ax[1].fill_between(african_nations["rugged"],
african_nations["mu_perc_5"],
african_nations["mu_perc_95"],
alpha=0.5)
ax[1].plot(african_nations["rugged"],
african_nations["true_gdp"],
"o")
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations")
# The above figure shows the uncertainty in our estimate of the regression line. Note that for lower values of ruggedness there are many more data points, and as such, less wiggle room for the line of best fit. This is reflected in the 90% CI around the mean. We can also see that most of the data points actually lie outside the 90% CI, and this is expected because we have not plotted the outcome variable, which will be affected by `sigma`! Let us do so next.
# +
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
fig.suptitle("Posterior predictive distribution with 90% CI", fontsize=16)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["y_mean"])
ax[0].fill_between(non_african_nations["rugged"],
non_african_nations["y_perc_5"],
non_african_nations["y_perc_95"],
alpha=0.5)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["true_gdp"],
"o")
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
idx = np.argsort(african_nations["rugged"])
ax[1].plot(african_nations["rugged"],
african_nations["y_mean"])
ax[1].fill_between(african_nations["rugged"],
african_nations["y_perc_5"],
african_nations["y_perc_95"],
alpha=0.5)
ax[1].plot(african_nations["rugged"],
african_nations["true_gdp"],
"o")
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations")
# -
# We observe that the outcome from our model and the 90% CI accounts for the majority of the data points that we observe in practice. It is usually a good idea to do such posterior predictive checks to see if our model gives valid predictions.
# we need to prepend `module$$$` to all parameters of nn.Modules since
# that is how they are stored in the ParamStore
weight = get_marginal(posterior, ['module$$$linear.weight']).squeeze(1).squeeze(1)
factor = get_marginal(posterior, ['module$$$factor'])
gamma_within_africa = weight[:, 1] + factor.squeeze(1)
gamma_outside_africa = weight[:, 1]
fig = plt.figure(figsize=(10, 6))
sns.distplot(gamma_within_africa, kde_kws={"label": "African nations"},)
sns.distplot(gamma_outside_africa, kde_kws={"label": "Non-African nations"})
fig.suptitle("Density of Slope : log(GDP) vs. Terrain Ruggedness", fontsize=16)
# In the next section, we'll look at how to write guides for variational inference as well as compare the results with inference via HMC.
#
# See an example with a toy dataset on [Github](https://github.com/uber/pyro/blob/dev/examples/bayesian_regression.py).
# ### References
# 1. <NAME>., *Statistical Rethinking, Chapter 7*, 2016
# 2. <NAME>. & <NAME>., *[Ruggedness: The blessing of bad geography in Africa"](https://diegopuga.org/papers/rugged.pdf)*, Review of Economics and Statistics 94(1), Feb. 2012
| tutorial/source/bayesian_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analytic Inverse Kinematics of the leg in the plane and trajectory generation
#
# We will continue to use the 2D version of the finger, where the first degree of freedom of the robot is constrained to stay at 0 position.
#
# <img src="./2d_robot_model.png" width="300">
#
# As we have seen last week, the forward kinematics of the leg could be written such that the pose of the foot F (i.e. its position and orientation) with respect to frame {S} is
# $$\begin{bmatrix}\cos(\theta_1+\theta_2) & -\sin(\theta_1+\theta_2)& l_2\sin(\theta_1+\theta_2) + l_1\sin\theta_1 + l_0\\
# \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) & -l_2\cos(\theta_1+\theta_2) - l_1\cos\theta_1\\
# 0 & 0 & 1
# \end{bmatrix}$$
#
# Now we are asking the following question: given a desired $(x_{des},y_{des})$ position for the foot, can we find configurations $\theta_1$, $\theta_2$ that would realize this?
#
# The goal of an analytic inverse kinematics algorithm is to find all (if any) possible solutions to this problem.
# ## Reachable workspace
#
# The distance between the frame {H} and the foot {F} is noted $l_{des} = \sqrt{(x_{des} - l_0)^2 + y_{des}^2}$
#
# The leg has 2 DOFs. The furthest it can reach is the circle of center {H} and radius $l_1 + l_2$. For any point on this circle, the leg is necessarily fully extended and therefore $\theta_2 = 0$.
#
# When $\theta_2$ is not 0, the leg is folded and the distance from ${H}$ to ${F}$ can take any value between $0$ and $l_1 + l_2$, i.e. $0 \leq l_{des} \leq l_1+l_2$. Because $l_1 = l_2$, the fully folded leg reaches a distance of $0$, so the robot can reach any position inside the circle of radius $l_1 + l_2$.
#
# ## Analytic inverse kinematics
# We need to find a way to relate the angles to the position of the leg. In our case, we will consider the influence of the joint angles on the leg length and then on its orientation.
# ### The leg length
# Consider the triangle formed by $l_{des}$, $l_1$ and $l_2$. Using the law of cosines we see that
# $$l_{des}^2 - l_1^2 - l_2^2 = -2l_1 l_2 \cos(\pi - \theta_2)$$
# which gives
# $$\cos(\theta_2) = \frac{l_{des}^2 - l_1^2 - l_2^2}{2l_1 l_2}$$
#
# we have then two possible choice for $\theta_2 = \pm \arccos(\frac{l_{des}^2 - l_1^2 - l_2^2}{2l_1 l_2})$. We will denote $\theta_2^+$ the positive solution (when the leg is bent with the knee pointing left) and $\theta_2^-$ the other solution (with the knee pointing toward the right).
#
# ### The leg orientation
# We have seen that $\theta_2$ defines the distance from {H} to {F}. $\theta_1$ will then define the position of the foot on the circle defined by the leg length. First we define some helpful quantities.
# The angle between the line going from ${H}$ to ${F}$ and the horizontal line going from ${H}$ to the right direction is $$\beta = \arctan2(y_{des}, x_{des} - l_0)$$
# and the angle between $l_1$ and the {H}-{F} line is (using the law of cosines)
# $$\alpha = \arccos(\frac{-l_2^2 + l_1^2 + l_{des}^2}{2l_1l_{des}})$$
#
# We need to consider two distinct cases:
#
# 1. $\theta_2^+$: in this case, we have $\theta_1^+ = \pi/2 - \alpha + \beta$
# 2. $\theta_2^-$: in this case, we have $\theta_1^- = \pi/2 + \alpha + \beta$
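# As a quick sanity check of the derivation, the two branches can be verified numerically by composing them with the planar forward kinematics. The sketch below is standalone: the closed-form foot position is my expansion of the transform chain used later in this notebook, and the link lengths match the values defined in the code below.

```python
import numpy as np

l0, l1, l2 = 0.3, 0.16, 0.16  # same link lengths as in the simulation code

def fk_xy(theta1, theta2):
    # closed-form planar foot position, expanded from the homogeneous transform chain
    x = l0 + l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    y = -l1 * np.cos(theta1) - l2 * np.cos(theta1 + theta2)
    return x, y

def ik_xy(x, y):
    # the two analytic solutions derived above
    l_des = np.hypot(x - l0, y)
    theta2 = np.arccos((l_des**2 - l1**2 - l2**2) / (2 * l1 * l2))
    alpha = np.arccos((l1**2 + l_des**2 - l2**2) / (2 * l1 * l_des))
    beta = np.arctan2(y, x - l0)
    return [(np.pi/2 - alpha + beta, theta2),
            (np.pi/2 + alpha + beta, -theta2)]

# both IK branches must map back to the same foot position
x, y = fk_xy(0.3, 0.7)
for th1, th2 in ik_xy(x, y):
    xr, yr = fk_xy(th1, th2)
    assert abs(xr - x) < 1e-9 and abs(yr - y) < 1e-9
```

# The round-trip assertion mirrors the test_IK exercise further below, just for a single reachable point.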
# +
#setup nice plotting (use widget instead of notebook in the command below if you use jupyter lab)
# %matplotlib notebook
# we import useful libraries
import time
import numpy as np
import matplotlib as mp
import matplotlib.pyplot as plt
# we import the helper class / we will use a similar class to work with the real robot
from nyu_finger_simulator import NYUFingerSimulator
# +
# here we define the global variables for the robot size
l0 = 0.3
l1 = 0.16
l2 = 0.16
def translate(vector):
"""
    returns a homogeneous transform for a 2D translation by vector
"""
transform = np.eye(3)
transform[0:2,2] = vector
return transform
def rotate(angle):
"""
    returns a homogeneous transform for a 2D rotation of angle
"""
transform = np.eye(3)
transform[0,0] = np.cos(angle)
transform[0,1] = -np.sin(angle)
transform[1,0] = np.sin(angle)
transform[1,1] = np.cos(angle)
return transform
def forward_kinematics(theta1, theta2):
"""
This function computes the 2D forward kinematics of the robot
theta1 and theta2 correspond to joint angles as described in the figure above
    it returns the pose (3x3 homogeneous matrix) of the foot frame {F} with respect to the spatial frame
"""
    transform = (translate(np.array([l0, 0]))
                 .dot(rotate(theta1))
                 .dot(translate(np.array([0, -l1])))
                 .dot(rotate(theta2))
                 .dot(translate(np.array([0, -l2]))))
return transform
# -
# # Questions:
# 1. Write an inverse kinematics function that takes a desired x,y position for the foot and returns a list of all possible solutions (if any) in terms of joint angles $\theta_1$ and $\theta_2$.
#
# 2. Test your function using the test_IK function. It will generate a random desired (x,y) position for the foot, then compute the IK and verify that all the returned solutions are correct by computing the forward kinematics solution of each and verifying that $\textrm{ForwardKinematics}(\textrm{InverseKinematics}(x_{des}, y_{des})) = (x_{des},y_{des})$
# +
def inverse_kinematics(x,y):
"""
inverse kinematics function
input (x,y) position of the foot
output a list of 2D vectors which are possible solutions to the problem
(the list is empty if there are no solutions)
"""
l_des = np.sqrt((x-l0)**2 + y**2)
###First we check that the target is feasible otherwise we return empty
if l_des > l1 + l2:
# this is impossible, there are no solutions we return an empty list
return []
# we compute the two possible solutions for theta2
# note that if l_des == l1 + l2 then theta2_p = theta2_m = 0
# so we will return twice the same solution (not ideal but simpler)
theta2_p = np.arccos((l_des**2 - l1**2 - l2**2)/(2*l1*l2))
theta2_m = - theta2_p
# we now compute alpha and beta as defined above
alpha = np.arccos((-l2**2 + l1**2 + l_des**2)/(2*l1*l_des))
beta = np.arctan2(y,x-l0)
# we compute alpha1 (the 2 possibilities)
theta1_p = np.pi/2 - alpha + beta
theta1_m = (alpha + beta + np.pi/2)
# we return a list that contains the 2 solutions
return [np.array([theta1_p, theta2_p]), np.array([theta1_m, theta2_m])]
def test_IK(num_tests = 10):
"""
This function is used to test the inverse kinematics function
it generates num_tests random (x,y) locations and try to solve them
it prints potential errors
"""
num_errors = 0
num_infeasible = 0
points = np.zeros([num_tests, 2])
colors = np.zeros([num_tests, 1])
for i in range(num_tests):
# we generate a random (x,y) location in a box of l1+l2 around {H}
x = np.random.random_sample()*(l1+l2)*2 - (l1+l2) + l0
y = np.random.random_sample()*(l1+l2)*2 - (l1+l2)
# we save the point for plotting
points[i,0] = x
points[i,1] = y
solutions = inverse_kinematics(x,y)
l_des = np.sqrt((x-l0)**2 + y**2)
if l_des > l1+l2:
num_infeasible += 1
colors[i] = 1
if not solutions:
# we did not find solutions
# check if this is correct
if l_des <= l1+l2:
print(f'ERROR: IK did not find a solution while there should be at least one, for x={x} and y={y}')
num_errors +=1
else:
for sol in solutions:
pose = forward_kinematics(sol[0], sol[1])
x_ik = pose[0,2]
y_ik = pose[1,2]
error = np.sqrt((x-x_ik)**2 + (y-y_ik)**2)
if error > 0.001:
print(f'solution {sol} did not find the correct IK solution x_des = {x}, y_des = {y} but found x_ik = {x_ik} and y_ik = {y_ik}')
num_errors +=1
print(f'there were {num_tests-num_infeasible} feasible samples and {num_infeasible} infeasible ones')
if num_errors > 0:
print(f'there were {num_errors} errors detected')
else:
print('CONGRATS: no errors were detected!')
test_IK(10000)
# -
# # Trajectory generation
#
# Once we know where we want to move the foot and the associated goal positions for $\theta_1$ and $\theta_2$, we need to control the robot to make sure it gets there. It is generally not a good idea to send the desired joint positions directly to the PD controller: if they are far from the actual positions, the error will be large and a large torque will be applied to each motor. This will likely create a very jerky movement and large joint torques, which could damage the robot, its motors, etc.
#
# ## Linear interpolation between initial and goal position
# Instead, we would like to create a smooth trajectory that moves the robot from its current position to the desired goal position. One simple way to do that is to linearly interpolate the desired joint angles from the current ones to the goal ones. For example, for each joint we could set
# $$\theta_{des}(s) = \theta_{init} + s (\theta_{goal} - \theta_{init})$$
# where $s \in [0,1]$. Doing so, when $s=0$ we get $\theta_{des} = \theta_{init}$ (i.e. the joint angle when we started the movement). When $s=1$, we get $\theta_{des} = \theta_{goal}$ (i.e. the position where we would like to end up). For any $s$ in between, we generate a point on the line segment between those two positions.
#
# ## Time parametrization of the trajectory
# ### Linear time parametrization
# Now we would like to change $s$ as a function of time. It means that as $t$ increases from $0$ (when we start the controller) to some $T$ (which is the desired time we would like the motion to last) we will get a desired position $\theta_{des}$ and also a desired velocity $\dot{\theta}_{des}$ for the joint. We could simply set
# $$ s(t) = \frac{t}{T}$$ which would lead to a trajectory of the form
# $$\theta_{des}(t) = \theta_{init} + \frac{t}{T} (\theta_{goal} - \theta_{init})$$
# with
# $$\dot{\theta}_{des}(t) = \frac{1}{T} (\theta_{goal} - \theta_{init})$$
# Doing this, we notice that the desired velocity would be constant, which is potentially problematic because we would start at rest (so the velocity should be 0) and would like to end our movement with 0 velocity.
#
# ### Time parametrization with velocity constraints
# What we usually do is parametrize $s$ as a polynomial of $t$ so that we can impose constraints on the desired velocity, acceleration, etc. Assuming that $s(t)$ is some function of time, we then have
# $$\theta_{des}(t) = \theta_{init} + s(t) (\theta_{goal} - \theta_{init})$$,
# $$\dot{\theta}_{des}(t) = \dot{s} (\theta_{goal} - \theta_{init})$$
# and also $$\ddot{\theta}_{des}(t) = \ddot{s} (\theta_{goal} - \theta_{init})$$
#
# If we want zero velocity at the beginning and the end of the movement, we need to have $s(0) = 0$, $s(T) = 1$, $\dot{s}(0) = 0$ and $\dot{s}(T) = 0$. Since we have 4 constraints, we need at least a polynomial of degree 3 to have enough parameters. Let us set
# $$s(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$$
# Then we have
# $$\dot{s}(t) = a_1 + 2 a_2 t + 3 a_3 t^2$$
# If we impose our constraints, we get $a_0 = 0$ from $s(0)=0$, $a_1 = 0$ from $\dot{s}(0) = 0$, and $a_2 = -\frac{3}{2} a_3 T$ from $\dot{s}(T) = 0$. Then $s(T) = 1$ gives $a_3 = -\frac{2}{T^3}$ and hence $a_2 = \frac{3}{T^2}$. Putting everything together, the profile that imposes the velocity constraints is $$s(t) = \frac{3}{T^2}t^2 - \frac{2}{T^3}t^3$$
# which implies that
# $$\theta_{des}(t) = \theta_{init} + \left(\frac{3}{T^2}t^2 - \frac{2}{T^3}t^3\right) (\theta_{goal} - \theta_{init})$$
# and
# $$\dot{\theta}_{des}(t) = \left(\frac{6}{T^2}t - \frac{6}{T^3}t^2\right) (\theta_{goal} - \theta_{init})$$
#
# ### Time parametrization with acceleration and velocity constraints
# In general, if we impose only velocity constraints, we might have non-zero accelerations which can be an issue when generating torques in the PD controller. We generally prefer also imposing acceleration constraints in addition to the other constraints, i.e.
# $s(0) = 0$, $s(T) = 1$, $\dot{s}(0) = 0$, $\dot{s}(T) = 0$ and $\ddot{s}(0) = 0$, $\ddot{s}(T) = 0$
# Now that we have imposed two new constraints, we need at least a fifth-order polynomial to satisfy them all.
# Our polynomial is $$s(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5$$
# The computations are as before (but with more equations) and we find $a_0 = a_1 = a_2 = 0$, $a_3 = \frac{10}{T^3}$, $a_4 = \frac{-15}{T^4}$ and $a_5 = \frac{6}{T^5}$.
# This gives the following trajectory parametrized by time
# $$\theta_{des}(t) = \theta_{init} + \left( \frac{10}{T^3} t^3 + \frac{-15}{T^4} t^4 + \frac{6}{T^5} t^5 \right) (\theta_{goal} - \theta_{init})$$
# and
# $$\dot{\theta}_{des}(t) = \left( \frac{30}{T^3} t^2 + \frac{-60}{T^4} t^3 + \frac{30}{T^5} t^4 \right) (\theta_{goal} - \theta_{init})$$
# and the acceleration profile is
# $$\ddot{\theta}_{des}(t) = \left( \frac{60}{T^3} t + \frac{-180}{T^4} t^2 + \frac{120}{T^5} t^3 \right) (\theta_{goal} - \theta_{init})$$
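# The coefficients above can be checked numerically: the quintic profile and its first two derivatives should satisfy all six boundary conditions. A standalone sketch, using an arbitrary duration `T` chosen only for the check:

```python
# numeric check of the quintic time parametrization s(t) derived above
T = 2.0  # arbitrary movement duration for the check

def s(t):   return 10/T**3 * t**3 - 15/T**4 * t**4 + 6/T**5 * t**5
def ds(t):  return 30/T**3 * t**2 - 60/T**4 * t**3 + 30/T**5 * t**4
def dds(t): return 60/T**3 * t - 180/T**4 * t**2 + 120/T**5 * t**3

assert abs(s(0)) < 1e-12 and abs(s(T) - 1) < 1e-12   # s(0)=0, s(T)=1
assert abs(ds(0)) < 1e-12 and abs(ds(T)) < 1e-12     # zero boundary velocity
assert abs(dds(0)) < 1e-12 and abs(dds(T)) < 1e-12   # zero boundary acceleration
```

# The same six assertions hold for any $T > 0$, since each boundary value cancels symbolically.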
# # Question:
# 1. Write a compute_trajectory function that takes as input arguments the starting joint position, the goal joint position, the movement duration, and the current time t (between 0 and T), and returns the desired joint position and joint velocity. Use a time parametrization such that the velocity and acceleration are 0 at the beginning and the end of the movement.
def compute_trajectory(th_init, th_goal, movement_duration, t):
# first we compute the coefficients (as notes above)
a5 = 6/(movement_duration**5)
a4 = -15/(movement_duration**4)
a3 = 10/(movement_duration**3)
# now we compute s and ds/dt
s = a3 * t**3 + a4 * t**4 + a5 * t**5
ds = 3 * a3 * t**2 + 4 * a4 * t**3 + 5 * a5 * t**4
#now we compute th and dth/dt (the angle and its velocity)
th = th_init + s * (th_goal - th_init)
dth = (th_goal - th_init) * ds
# we return the answer
return th, dth
# # Reaching desired targets
#
# # Question:
# 1. Modify the code below (our typical control loop with a PD controller) such that the robot can move its foot to a randomly generated desired position (x_des, y_des), shown as a purple ball in the simulation. Use the inverse kinematics function to decide what the end joint angles should be and the trajectory generation function to compute, inside the control loop, the current desired positions and velocities of the robot joints. Use a total time for the movement of T=2 seconds.
#
# 2. Use the plotting function below to plot the motion of the foot in space and the joint position/velocity trajectories.
# +
# we create a robot simulation
robot = NYUFingerSimulator()
# we reset the simulation to the initial position we want to move
robot.reset_state(np.array([0,0,0]))
# we simulate for 7 seconds
run_time = 7.
num_steps = int(run_time/robot.dt)
# the PD gains
P = np.array([3., 3., 3.])
D = np.array([0.3, 0.3, 0.3])
# we store information
# here we create some arrays that we use to store data generated during the control loop
measured_positions = np.zeros([num_steps,3]) # will store the measured position
measured_velocities = np.zeros_like(measured_positions) # will store the measure velocities
desired_torques = np.zeros_like(measured_positions) # will store the commands we send to the robot
desired_positions = np.zeros_like(measured_positions) # will store the desired positions we use in the PD controller
desired_velocities = np.zeros_like(measured_positions) # will store the desired velocities
time = np.zeros([num_steps]) # will store the running time
x_pos = np.zeros([num_steps,1]) # will store the x position of the foot (as computed by Forw. Kin.)
y_pos = np.zeros([num_steps,1]) # will store the y position of the foot (as computed by Forw. Kin.)
# here we create a list of ball positions
ball_positions = [np.array([0.597,-0.056]), np.array([0.521,0.12]), np.array([0.3,-0.225])]
for ball in ball_positions:
robot.add_ball(ball[0], ball[1])
# we create a variable to store the desired duration of each reaching trajectory
time_to_goal = 2.0
# we need to keep track of which ball we try to reach (0 1 or 2)
ball_number = 0
# this variable will be used to keep track of time inside a reaching motion
# i.e. this will be different from the total running time - it will only go from 0 to 2.0s
t = 0.
# now a variable to store the initial desired position for theta1 and theta2
# we need this to know what is the initial point of the trajectory
# when we start the simulation, the desired position is 0 (this is the position we reset the robot with)
th1_init = 0.0
th2_init = 0.0
# we also get the IK solution for the first ball (because we start from there)
# and we take the first solution as our goal position
IKsolution = inverse_kinematics(ball_positions[ball_number][0], ball_positions[ball_number][1])
th1_goal = IKsolution[0][0]
th2_goal = IKsolution[0][1]
# now we can enter the main control loop (each loop is 1 control cycle)
for i in range(num_steps):
# get the current time and store it
time[i] = robot.dt * i
# we get the position and velocities of the joints and save them
q, dq = robot.get_state()
measured_positions[i,:] = q
measured_velocities[i,:] = dq
# save the current position of the foot using the FK function
pose = forward_kinematics(q[1], q[2])
x_pos[i] = pose[0,2]
y_pos[i] = pose[1,2]
# now we can compute the desired position of both th1 and th2
# based on the current th_init and th_goal
th1_des, dth1_des = compute_trajectory(th1_init, th1_goal, time_to_goal, t)
th2_des, dth2_des = compute_trajectory(th2_init, th2_goal, time_to_goal, t)
# we set the desired positions for the PD controller
q_des = np.array([0, th1_des, th2_des])
dq_des = np.array([0, dth1_des, dth2_des])
# we can increase t - if we are beyond our 2s, then we reset and move to the next ball
# only if there is a next ball
t += robot.dt
if t > time_to_goal:
if ball_number < len(ball_positions)-1:
t = 0.
ball_number +=1
th1_init = q_des[1] #our new init position is our latest desired position
th2_init = q_des[2] #our new init position is our latest desired position
# we get our new target
IKsolution = inverse_kinematics(ball_positions[ball_number][0], ball_positions[ball_number][1])
th1_goal = IKsolution[0][0]
th2_goal = IKsolution[0][1]
else:
        t = time_to_goal # we have reached all balls, so we stop "time" in our trajectory generation process
# we do the PD control
desired_positions[i,:] = q_des
desired_velocities[i,:] = dq_des
error = q_des - q # the position error for all the joints (it's a 3D vector)
d_error = dq_des-dq # the velocity error for all the joints
# we compute the desired torques as a PD controller
joint_torques = P * error + D * d_error
desired_torques[i,:] = joint_torques
# we send them to the robot and do one simulation step
robot.send_joint_torque(joint_torques)
robot.step()
# +
def plot_foot_trajectory(x_pos, y_pos, ball_positions):
"""
plots the position of the foot in 2D and the position of the spatial frame {s}
we assume that the time varying x variable is in x_pos and that the y variable is in y_pos
"""
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
plt.plot(x_pos,y_pos)
plt.xlabel('foot x position [m]')
plt.ylabel('foot y position [m]')
plt.xlim([-l1-l2+l0-0.05,l0+l1+l2+0.05])
plt.ylim([-l1-l2-0.05, l1+l2+0.05])
plt.plot([0],[0],'o',markersize=15,color='r')
for ball in ball_positions:
plt.plot([ball[0]],[ball[1]],'o',markersize=15,color='g')
ax.annotate('Goal position', xy=(ball[0]-0.2,0.05+ball[1]), xytext=(30,0), textcoords='offset points')
ax.annotate('Spatial frame {s}', xy=(-0.03,-0.005), xytext=(30,0), textcoords='offset points')
# you need to first create x_pos and y_pos variables!
plot_foot_trajectory(x_pos, y_pos, ball_positions)
# +
def plot_joint_posvel(time, th, th_des, dth, dth_des):
fig = plt.figure(figsize=(9,9))
plt.subplot(2,2,1)
plt.plot(time, th[:,1], 'b-', time, th_des[:,1], '--k')
plt.ylabel(r'$\theta_1$')
plt.subplot(2,2,2)
plt.plot(time, th[:,2], 'b-', time, th_des[:,2], '--k')
plt.ylabel(r'$\theta_2$')
plt.subplot(2,2,3)
plt.plot(time, dth[:,1], 'b-', time, dth_des[:,1], '--k')
plt.ylabel(r'$\dot{\theta}_1$')
plt.xlabel('Time [s]')
plt.subplot(2,2,4)
plt.plot(time, dth[:,2], 'b-', time, dth_des[:,2], '--k')
plt.ylabel(r'$\dot{\theta}_2$')
plt.xlabel('Time [s]')
plot_joint_posvel(time, measured_positions, desired_positions, measured_velocities, desired_velocities)
# -
| laboratory/Lab3/Inverse_Kinematics_example_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Map of active volcanoes
#
# Here is an example of plotting a map of active volcanoes using data online at Oregon State University. This example was written by <NAME>, of ANU.
#
# <img src="images/volcano_map.png" width="50%"/>
#
# At the end of this script you will produce a map just like the one shown here.
#
#
# ## Resources you will use.
#
# This notebook makes use of a couple of packages that might come in handy another time. The maps are made by `cartopy`, which is a mapping tool written by the Meteorological Office in the UK (and which happens to be really good at plotting satellite data). The `pandas` package is a database tool that is very good at manipulating tables of different types of data: selecting, sorting, refining and so on.
#
# ## Navigation
#
# - [Maps 1.1](PHYS3070-LabMD.1.1.ipynb)
# - [Maps 1.2](PHYS3070-LabMD.1.2.ipynb)
# - [Maps 1.3](PHYS3070-LabMD.1.3.ipynb)
# - [Maps 2.1](PHYS3070-LabMD.2.1.ipynb)
# - [Maps 2.2](PHYS3070-LabMD.2.2.ipynb)
# - [Maps 2.3](PHYS3070-LabMD.2.3.ipynb)
# - [Maps 2.4](PHYS3070-LabMD.2.4.ipynb)
# - [Maps 2.5](PHYS3070-LabMD.2.5.ipynb)
#
#
# +
# %matplotlib inline
import json
import cartopy.crs as ccrs
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
chartinfo = 'Author: <NAME> | Data: Volcano World - volcano.oregonstate.edu'
# -
import cartopy
cartopy.__version__
# This next section reads the data from the Oregon State University database. This URL is actually a script to return the table of volcanoes in various forms. This is not a big issue as it returns a valid web page, but not every library that reads html is configured to work with these general URLs.
page_source = "https://volcano.oregonstate.edu/volcano_table?sort_by=title&sort_order=ASC"
# This function from the `pandas` package will read all the tables in a web page and turn them into dataframes.
tables = pd.read_html(page_source)
print("There is/are {} table/s on this web page".format(len(tables)))
# In this case, it is not necessary to parse the various tables to find the one we want, but you should always check (for example, a page header or footer might be laid out as a table, and we don't want to use that for our map!)
df_volc = tables[0]
print(type(df_volc))
# +
# pdurl = 'https://volcano.oregonstate.edu/volcano_table?sort_by=title&sort_order=ASC'
# xpath = '//table'
# tree = html.parse(pdurl)
# tables = tree.xpath(xpath)
# table_dfs = []
# for idx in range(4, len(tables)):
# df = pd.read_html(html.tostring(tables[idx]), header=0)[0]
# table_dfs.append(df)
# -
df_volc['Type'].value_counts()
# Clean up the data to make sure typos and missing information are not propagated into your database. This doesn't seem to be needed in this particular case but, in other instances, you could use this technique to replace definitions, map to a new terminology, etc.
# +
def cleanup_type(s):
if not isinstance(s, str):
return s
    s = s.replace('?', '').replace('  ', ' ')  # drop '?' and collapse double spaces
s = s.replace('volcanoes', 'volcano')
s = s.replace('volcanoe', 'Volcano')
s = s.replace('cones', 'cone')
s = s.replace('Calderas', 'Caldera')
return s.strip().title()
df_volc['Type'] = df_volc['Type'].map(cleanup_type)
df_volc['Type'].value_counts()
# -
# Now determine the number of volcanoes in the database.
df_volc.dropna(inplace=True)
len(df_volc)
# Now select the volcanoes that are above sea level
df_volc = df_volc[df_volc['Elevation (m)'] >= 0]
len(df_volc)
# Make a nice table of the first 10 volcanoes from the information that you grabbed out of the Oregon State University website on volcanoes
print(len(df_volc))
df_volc.head(10)
# Determine the number of volcanoes of each type in this list and output this information to the screen.
df_volc['Type'].value_counts()
df_volc.dropna(inplace=True)
len(df_volc)
df = df_volc[df_volc['Type'] == 'Stratovolcano']
# + [markdown] inputHidden=false outputHidden=false
# Create a simple scatter plot map of the stratovolcanoes
# +
fig=plt.figure(figsize=(12,8))
ax = fig.add_subplot(1,1,1, projection=ccrs.Mollweide())
ax.stock_img()
ax.annotate('Stratovolcanoes of the world | ' + chartinfo, xy=(0, -1.04), xycoords='axes fraction')
ax.scatter(df['Longitude (dd)'].array,df['Latitude (dd)'].array, color='red', linewidth=1, marker='^', transform=ccrs.PlateCarree())
plt.show()
# -
| Notebooks/LAB-Maps-Data/PHYS3070-LabMD.1.1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Feed-Forward Neural Networks
# - From [19]
# - Feed Forward Neural Networks are sometimes __*called Multi-Layer Perceptrons (MLP)*__
# - However, this is a misnomer
# - Perceptrons are purely linear
# - But modern NNs are made up of units with _non-linearities_ such as the _sigmoid_
# ## Feed Forward Model
# - From [v1] Lecture Lec 38
# - A model used in ANN
# - Especially in perceptron model
# - When the inputs are given, you keep feeding the data forward through the hidden layers and finally compute the outputs in the output layer
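# As a minimal illustration of the feed-forward pass, here is a sketch of one hidden layer with sigmoid units. All layer sizes, weights, and the input vector are hypothetical, chosen only for the example.

```python
import numpy as np

def sigmoid(z):
    # the non-linearity applied by each unit
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical layer sizes: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

def forward(x):
    h = sigmoid(W1 @ x + b1)     # feed inputs into the hidden layer
    return sigmoid(W2 @ h + b2)  # compute outputs in the output layer

y = forward(np.array([0.5, -1.0, 2.0]))
```

# Replacing the sigmoid with the identity would collapse the two layers into one linear map, which is why the non-linearity is what separates modern NNs from a pure perceptron.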
| Terms-and-Abbreviations/Feed-Forward-Neural-Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Use 2018 census estimate data
dfc = pd.read_csv('../files/census/cc-est2018-alldata.csv', encoding='ISO-8859-1')
dfc.YEAR.unique()[-1]
# Keep only the most recent year of data (2018)
dfc = dfc.loc[dfc.YEAR == dfc.YEAR.unique()[-1]]
start_race_idx = dfc.columns.tolist().index('WA_MALE')
race_cols = dfc.columns[start_race_idx:]
# COMBINE MALE AND FEMALE RACE COLUMNS
for col in race_cols:
if col[-6:] == 'FEMALE':
identifier = col[:-7]
col_male = identifier + '_MALE'
dfc[identifier] = dfc[identifier + '_FEMALE'] + dfc[identifier + '_MALE']
dfc.drop(columns=race_cols, inplace=True)
dfage = pd.DataFrame(dfc.groupby(['STATE', 'COUNTY', 'AGEGRP']).TOT_POP.sum()).reset_index()
dfage = pd.pivot_table(dfage, index=['STATE', 'COUNTY'],
values='TOT_POP', columns='AGEGRP').reset_index(drop=True)
dfc = dfc.loc[dfc.AGEGRP == 0]
dfc = dfc.reset_index(drop=True)
dfage.rename(columns={0:'total'}, inplace=True)
(dfage.total == dfc.TOT_POP).unique()
dfc = pd.concat([dfc, dfage], axis=1)
pd.options.display.max_columns = 1000
dfc = dfc.drop(columns=['SUMLEV'])
dfc = dfc.rename(columns={1:'zero_four', 2:'five_nine', 3:'ten_fourteen', 4:'fifteen_nineteen',
5:'twenty_twentyfour', 6:'twentyfive_twentynine', 7:'thirty_thirtyfour',
8:'thirtyfive_thirtynine', 9:'forty_fortyfour', 10:'fortyfive_fortynine',
11:'fifty_fiftyfour', 12:'fiftyfive_fiftynine', 13:'sixty_sixtyfour',
14:'sixtyfive_sixtynine', 15:'seventy_seventyfour',
16:'seventyfive_seventynine', 17:'eighty_eightyfour', 18:'eightyfive_older'})
dfc = dfc.drop(columns=['total', 'AGEGRP', 'YEAR'])
dfc.columns
dfc.dtypes
# NORMALIZE FEATURES TO POPULATION SIZE
dont_norm = []
for column in dfc.columns[5:]:
if column not in dont_norm:
dfc[column] /= dfc.TOT_POP
dfc['FIPS'] = dfc.STATE.map(lambda x: '{:02d}'.format(x)) + dfc.COUNTY.map(lambda x:
'{:03d}'.format(x))
dfp = pd.read_csv('../files/census/co-est2018-alldata.csv', encoding='ISO-8859-1')
dfp['FIPS'] = dfp.STATE.map(lambda x: '{:02d}'.format(x)) + dfp.COUNTY.map(lambda x:
'{:03d}'.format(x))
keep_columns = ['RNETMIG2018', 'RDOMESTICMIG2018', 'RINTERNATIONALMIG2018',
'RNATURALINC2018', 'RDEATH2018', 'RBIRTH2018',
'REGION', 'DIVISION', 'FIPS']
divide_columns = ['GQESTIMATES2018', 'NPOPCHG_2018']
for col in divide_columns:
dfp[col] = dfp[col] / dfp.POPESTIMATE2018
dfp = dfp[keep_columns + divide_columns]
df = dfc.set_index('FIPS').join(dfp.set_index('FIPS'))
df
dfe = pd.read_csv('../files/census/unemployment.csv', encoding='ISO-8859-1', header=7)
dfe
dfe_keep = ['FIPStxt', 'Civilian_labor_force_2018', 'Employed_2018', 'Unemployed_2018',
'Median_Household_Income_2018']
dfe = dfe[dfe_keep]
dfe
dfc.set_index('FIPS').join(dfp.set_index('FIPS'))
df.join(dfe.set_index('FIPStxt'))
| .ipynb_checkpoints/census_data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## DS/CMPSC 410 Spring 2021
# ## Instructor: Professor <NAME>
# ## TA: <NAME> and <NAME>
# ## Lab 6: Movie Recommendations Using Alternating Least Squares
# ## The goals of this lab are for you to be able to
# ### - Use Alternating Least Squares (ALS) for recommending movies based on reviews of users
# ### - Be able to understand the rationale for splitting data into training, validation, and testing.
# ### - Be able to tune hyper-parameters of the ALS model in a systematic way.
# ### - Be able to store the results of evaluating hyper-parameters
# ### - Be able to select best hyper-parameters and evaluate the chosen model with testing data
# ### - Be able to improve the efficiency through persist or cache
# ### - Be able to develop and debug in ICDS Jupyter Lab
# ### - Be able to run Spark-submit (Cluster Mode) in Bridges2 for large movie reviews dataset
#
# ## Exercises:
# - Exercise 1: 5 points
# - Exercise 2: 5 points
# - Exercise 3: 5 points
# - Exercise 4: 10 points
# - Exercise 5: 5 points
# - Exercise 6: 15 points
# - Exercise 7: 30 points
# ## Total Points: 75 points
#
# # Due: midnight, February 28, 2021
# # Submission of Lab 6
# - 1. Completed Jupyter Notebook of Lab 6 (Lab6A.ipynb) for small movie review datasets (movies_2.csv, ratings_2.csv).
# - 2. Lab6B.py (for spark-submit on Bridges2, incorporated all improvements from Exercise 6, processes large movie reviews)
# - 3. The output file that has the best hyperparameter setting for the large movie ratings files.
# - 4. The log file of spark-submit on Lab6B.py
# - 5. A Word File that discusses (1) your answer to Exercise 6, and (2) your results of Exercise 7, including screen shots of your run-time information in the log file.
# ## The first thing we need to do in each Jupyter Notebook running pyspark is to import pyspark first.
import pyspark
# ### Once we import pyspark, we need to import "SparkContext". Every spark program needs a SparkContext object
# ### In order to use Spark SQL on DataFrames, we also need to import SparkSession from PySpark.SQL
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType, LongType, IntegerType, FloatType
from pyspark.sql.functions import col, column
from pyspark.sql.functions import expr
from pyspark.sql.functions import split
from pyspark.sql import Row
from pyspark.mllib.recommendation import ALS
# from pyspark.ml import Pipeline
# from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, IndexToString
# from pyspark.ml.clustering import KMeans
# ## We then create a Spark Session variable (rather than Spark Context) in order to use DataFrame.
# - Note: We temporarily use "local" as the parameter for master in this notebook so that we can test it in ICDS Roar. However, we need to change "local" to "Yarn" before we submit it to XSEDE to run in cluster mode.
ss=SparkSession.builder.appName("lab6").getOrCreate()
# ## Exercise 1 (5 points) (a) Add your name below AND (b) replace the path below with the path of your home directory.
# ## Answer for Exercise 1
# - a: Your Name:
# ### <NAME>
# ## Exercise 2 (5 points) Modify the pathnames so that you can read the input CSV files (movies_2 and ratings_2 from ICDS Jupyter Lab) from the correct location.
movies_DF = ss.read.csv("movies_2.csv", header=True, inferSchema=True)
# +
# movies_DF.printSchema()
# -
ratings_DF = ss.read.csv("ratings_2.csv", header=True, inferSchema=True)
# +
# ratings_DF.printSchema()
# -
ratings2_DF = ratings_DF.select("UserID","MovieID","Rating")
# +
# ratings2_DF.first()
# -
ratings2_RDD = ratings2_DF.rdd
# # 6.1 Split Data into Three Sets: Training Data, Evaluatiion Data, and Testing Data
training_RDD, validation_RDD, test_RDD = ratings2_RDD.randomSplit([3, 1, 1], 137)
# ## Prepare input (UserID, MovieID) for validation and for testing
import pandas as pd
import numpy as np
import math
validation_input_RDD = validation_RDD.map(lambda x: (x[0], x[1]))
testing_input_RDD = test_RDD.map(lambda x: (x[0], x[1]) )
# # 6.2 Iterate through all possible combination of a set of values for three hyperparameters for ALS Recommendation Model:
# - rank (k)
# - regularization
# - iterations
# ## Each hyperparameter value combination is used to construct an ALS recommendation model using training data, but evaluate using Evaluation Data
# ## The evaluation results are saved in a Pandas DataFrame
# ``
# hyperparams_eval_df
# ``
# ## The best hyperparameter value combination is stored in 4 variables
# ``
# best_k, best_regularization, best_iterations, and lowest_validation_error
# ``
# # Improve performance by using the persist() method
training_RDD.persist()
validation_input_RDD.persist()
validation_RDD.persist()
# # Exercise 3 (15 points) Complete the code below to iterate through a set of hyperparameters to create and evaluate ALS recommendation models.
# +
## Initialize a Pandas DataFrame to store evaluation results of all combination of hyper-parameter settings
hyperparams_eval_df = pd.DataFrame( columns = ['k', 'regularization', 'iterations', 'validation RMS', 'testing RMS'] )
# initialize index to the hyperparam_eval_df to 0
index =0
# initialize lowest_error
lowest_validation_error = float('inf')
# Set up the possible hyperparameter values to be evaluated
iterations_list = [10, 15, 20]
regularization_list = [0.01, 0.05, 0.1]
rank_list = [4, 8, 12]
for k in rank_list:
for regularization in regularization_list:
for iterations in iterations_list:
seed = 37
# Construct a recommendation model using a set of hyper-parameter values and training data
model = ALS.train(training_RDD, k, seed=seed, iterations=iterations, lambda_=regularization)
            # Evaluate the model using evaluation data
# map the output into ( (userID, movieID), rating ) so that we can join with actual evaluation data
# using (userID, movieID) as keys.
validation_prediction_RDD= model.predictAll(validation_input_RDD).map(lambda x: ( (x[0], x[1]), x[2] ) )
validation_evaluation_RDD = validation_RDD.map(lambda y: ( (y[0], y[1]), y[2]) ).join(validation_prediction_RDD)
# Calculate RMS error between the actual rating and predicted rating for (userID, movieID) pairs in validation dataset
error = math.sqrt(validation_evaluation_RDD.map(lambda z: (z[1][0] - z[1][1])**2).mean())
# Save the error as a row in a pandas DataFrame
hyperparams_eval_df.loc[index] = [k, regularization, iterations, error, float('inf')]
index = index + 1
# Check whether the current error is the lowest
if error < lowest_validation_error:
best_k = k
best_regularization = regularization
best_iterations = iterations
best_index = index
lowest_validation_error = error
print('The best rank k is ', best_k, ', regularization = ', best_regularization, ', iterations = ',\
best_iterations, '. Validation Error =', lowest_validation_error)
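# The join-then-RMSE pattern above can be checked without Spark; a minimal pure-Python sketch (toy ratings, hypothetical values) of the same ((userID, movieID), rating) join and RMSE computation:

```python
import math

# Toy actual and predicted ratings keyed by (userID, movieID),
# mirroring the pair-RDD structure used above.
actual = {(1, 10): 4.0, (1, 20): 3.0, (2, 10): 5.0}
predicted = {(1, 10): 3.5, (1, 20): 3.5, (2, 10): 4.0}

# "Join" on the shared keys, then compute the root-mean-square error.
joined = [(actual[k], predicted[k]) for k in actual.keys() & predicted.keys()]
rmse = math.sqrt(sum((a - p) ** 2 for a, p in joined) / len(joined))
print(round(rmse, 4))  # → 0.7071
```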
# +
# print(hyperparams_eval_df)
# -
# # Use Testing Data to Evaluate the Model built using the Best Hyperparameters
# # 6.4 Evaluate the best hyperparameter combination using testing data
# # Exercise 4 (10 points)
# Complete the code below to evaluate the best hyperparameter combinations using testing data.
seed = 37
model = ALS.train(training_RDD, best_k, seed=seed, iterations=best_iterations, lambda_=best_regularization)
testing_prediction_RDD=model.predictAll(testing_input_RDD).map(lambda x: ((x[0], x[1]), x[2]))
testing_evaluation_RDD= test_RDD.map(lambda x: ((x[0], x[1]), x[2])).join(testing_prediction_RDD)
testing_error = math.sqrt(testing_evaluation_RDD.map(lambda x: (x[1][0] - x[1][1])**2).mean())
print('The Testing Error for rank k =', best_k, ' regularization = ', best_regularization, ', iterations = ', \
best_iterations, ' is : ', testing_error)
# +
# print(best_index)
# -
# Store the Testing RMS in the DataFrame
hyperparams_eval_df.loc[best_index]=[best_k, best_regularization, best_iterations, lowest_validation_error, testing_error]
schema3= StructType([ StructField("k", FloatType(), True), \
StructField("regularization", FloatType(), True ), \
StructField("iterations", FloatType(), True), \
StructField("Validation RMS", FloatType(), True), \
StructField("Testing RMS", FloatType(), True) \
])
# ## Convert the pandas DataFrame that stores the validation errors of all hyperparameters, and the testing error of the best model, to a Spark DataFrame
#
HyperParams_Tuning_DF = ss.createDataFrame(hyperparams_eval_df, schema3)
# # Exercise 5 (5 points)
# Modify the output path so that your output results can be saved in a directory.
# +
# import os
# projectPath=os.environ.get('PROJECT')
# output_path = "%s/Lab6output"%projectPath
# HyperParams_Tuning_DF.rdd.saveAsTextFile(output_path)
# -
output_path = "/storage/home/kky5082/ds410/lab6/Lab6_Output"
HyperParams_Tuning_DF.rdd.saveAsTextFile(output_path)
# # Exercise 6 (15 points)
# Modify the code above to improve its performance for Big Data. Describe briefly what modifications you made and the rationale for each. Include this in item 5 of the Lab6 submission.
# ### In the training and hyperparameter-tuning section, three nested for loops evaluate every hyperparameter combination, so the same RDDs are reused many times. Without caching, Spark recomputes an RDD each time it is reused. Calling persist() on those RDDs stores them in memory (or on disk), so they are computed only once and every later reuse reads the cached copy.
# # Exercise 7 (30 points)
# Save a duplicate copy of this notebook, name it Lab6B.
# - (1) Make similar modifications to Lab6 to prepare the notebook for processing the large movie reviews in Bridges2:
# * Modify the paths for the two large input files
# * Modify the path for the output file
# - (2) Export the notebook as Executable scripts (.py) file
# - (3) Use scp to copy the file to Bridges2
# - (4) Run (and time) spark-submit of Lab6B.py on Bridges2
# # Submission of Lab 6
# - 1. Completed Jupyter Notebook of Lab 6 (Lab6A.ipynb) for small movie review datasets (movies_2.csv, ratings_2.csv).
# - 2. Lab6B.py (for spark-submit on Bridges2, incorporated all improvements from Exercise 6, processes large movie reviews)
# - 3. The output file that has the best hyperparameter setting for the large movie ratings files.
# - 4. The log file of spark-submit on Lab6B.py
# - 5. A Word File that discusses (1) your answer to Exercise 6, and (2) your results of Exercise 7, including screen shots of your run-time information in the log file.
| 5. Recommendation Systems/Lab6-local-mode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import spacy
import numpy as np
import pandas as pd
from stopwords import ENGLISH_STOP_WORDS
# from __future__ import unicode_literals
# import numba
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# -
en_nlp = spacy.load('en')
def spacy_get_vec(sentence):
vec = np.zeros(384)
    doc = en_nlp(sentence)
for word in doc:
if word.lower_ in ENGLISH_STOP_WORDS:
continue
vec += word.vector
return vec
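# A hedged sketch of the summation above with toy 3-d vectors (hypothetical values, standing in for spaCy's 384-d vectors):

```python
import numpy as np

# Hypothetical 3-d word vectors; real spaCy vectors are 384-d here.
toy_vectors = {
    'looks': np.array([0.1, 0.2, 0.3]),
    'cloudy': np.array([0.4, 0.5, 0.6]),
}
stop_words = {'it'}

def toy_sentence_vec(sentence):
    # Sum the vectors of all non-stopword tokens, as spacy_get_vec does.
    vec = np.zeros(3)
    for word in sentence.lower().split():
        if word in stop_words or word not in toy_vectors:
            continue
        vec += toy_vectors[word]
    return vec

print(toy_sentence_vec('it looks cloudy'))  # sums the 'looks' and 'cloudy' vectors
```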
from sklearn.feature_extraction.text import TfidfVectorizer
lines = open('./class.txt').readlines()
vectorizer = TfidfVectorizer(stop_words=ENGLISH_STOP_WORDS)
vectorizer.fit_transform([line.split(',')[0] for line in lines])
def get_idf(sentence):
score = 1.0
for word in sentence.split():
        if word[-1] in ('\n', ',', '.', '!'):
word = word[:-1]
if word not in vectorizer.vocabulary_:
continue
index = vectorizer.vocabulary_[word]
score = score / vectorizer.idf_[index]
return score
# +
vecs = []
intents = []
idfs = []
for line in lines:
tokens = line.split(',')
sentence = tokens[0]
intent = tokens[1]
if intent[-1] == '\n':
intent = intent[:-1]
vecs.append(spacy_get_vec(sentence))
intents.append(intent)
#idfs.append(get_idf(sentence))
df = pd.DataFrame(vecs, columns=['vec_%d' % i for i in range(384)])
#df['idf'] = idfs
df['intents'] = intents
df.intents = df.intents.astype('category')
# -
from sklearn.utils import shuffle
df = shuffle(df)
df.head()
X = df.iloc[:, :-1].values
y = df.iloc[:,-1:].values.ravel()
from sklearn.model_selection import train_test_split
X_train,X_val,y_train,y_val = train_test_split(X, y, test_size=0.20)
from sklearn.linear_model import LogisticRegression
logit_model = LogisticRegression(C=5.0, class_weight={'intent': 1.2, 'non_intent': 0.8})
logit_model.fit(X_train, y_train)
print(logit_model.score(X_train, y_train))
print(logit_model.score(X_val, y_val))
sent = 'it looks cloudy'
#gradboost.predict_proba(np.append(spacy_get_vec(sent), get_idf(sent)))
logit_model.predict_proba(spacy_get_vec(sent))
from sklearn.ensemble import GradientBoostingClassifier
gradboost = GradientBoostingClassifier(n_estimators=500, max_depth=25, max_features='log2')
gradboost.fit(X_train, y_train)
print(gradboost.score(X_train, y_train))
print(gradboost.score(X_val, y_val))
sent = 'it looks cloudy'
#gradboost.predict_proba(np.append(spacy_get_vec(sent), get_idf(sent)))
gradboost.predict_proba(spacy_get_vec(sent))
gradboost.classes_
from sklearn.svm import SVC
svc = SVC(kernel='linear', degree=2, probability=True, class_weight={'intent':0.8,'non_intent':1.2})
svc.fit(X_train, y_train)
svc.score(X_val,y_val)
svc.fit(X, y)
from sklearn.externals import joblib
joblib.dump(svc, 'class.pkl')
| .ipynb_checkpoints/class-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Machine Learning
# This notebook builds the training dataset from the multiple CSV files of the Kaggle challenge. It applies four different prediction models and evaluates the importance of the 156 engineered features as well as the learning curves of the models.
import pandas as pd
import numpy as np
import time
import machine_learning_helper as machine_learning_helper
import metrics_helper as metrics_helper
import sklearn.neighbors, sklearn.linear_model, sklearn.ensemble, sklearn.naive_bayes
from sklearn.model_selection import KFold, train_test_split, ShuffleSplit
from sklearn import model_selection
from sklearn import ensemble
from xgboost.sklearn import XGBClassifier
import scipy as sp
import xgboost as xgb
import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.model_selection import learning_curve
from sklearn import linear_model, datasets
import os
# ## Read .csv files
# +
dataFolder = 'cleaned_data'
resultFolder = 'results'
filenameAdress_train_user = 'cleaned_train_user.csv'
filenameAdress_test_user = 'cleaned_test_user.csv'
filenameAdress_time_mean_user_id = 'time_mean_user_id.csv'
filenameAdress_time_total_user_id = 'time_total_user_id.csv'
filenameAdress_total_action_user_id = 'total_action_user_id.csv'
df_train_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_train_user))
df_test_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_test_user))
df_time_mean_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_mean_user_id))
df_time_total_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_total_user_id))
df_total_action_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_total_action_user_id))
# -
# ## Construct sessions data frame
# This dataframe contains the features that were extracted from the sessions file. For more information about these features, see the Main preprocessing notebook.
df_total_action_user_id.columns = ['id','action']
df_sessions = pd.merge(df_time_mean_user_id, df_time_total_user_id, on='id', how='outer')
df_sessions = pd.merge(df_sessions, df_total_action_user_id, on='id', how='outer')
df_sessions.columns = ['id','time_mean_user','time_total_user','action']
# ## 1. From data frame to matrix : Construct y_train
# The destination countries, currently strings, are encoded as integers: each country is assigned an int label.
y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
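# The encoding done inside the helper amounts to a plain string-to-int mapping (sklearn's LabelEncoder does the same); a toy sketch, not the actual helper internals:

```python
# Toy destination labels (hypothetical values).
countries = ['NDF', 'US', 'FR', 'US', 'NDF']

# Assign an int to each distinct country, in sorted order.
classes = sorted(set(countries))            # ['FR', 'NDF', 'US']
to_int = {c: i for i, c in enumerate(classes)}
y = [to_int[c] for c in countries]
print(y)  # → [1, 2, 0, 2, 1]

# The inverse mapping recovers the country names, as label_enc.inverse_transform does later.
print([classes[i] for i in y])  # → ['NDF', 'US', 'FR', 'US', 'NDF']
```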
# ## 2. From data frame to matrix : Construct X_train & X_test
# ### Feature engineering.
# Added features :
# - time_mean_user
# - time_total_user
# - total_action_user
# - Date created account
# - Date first active
#
X_train, X_test = machine_learning_helper.buildFeatsMat(df_train_users, df_test_users, df_sessions)
# +
#X_train = X_train[200000:201000]
#y_labels = y_labels[200000:201000]
# -
# To save memory, the training matrix is converted to sparse (CSR) format
X_train_sparse = sp.sparse.csr_matrix(X_train.values)
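# CSR format stores only the non-zero entries, which is what saves memory here; a quick toy illustration:

```python
import numpy as np
import scipy.sparse as sparse

# Toy matrix: mostly zeros, like the one-hot-encoded feature matrix.
dense = np.array([[0, 0, 3], [1, 0, 0], [0, 2, 0]])
csr = sparse.csr_matrix(dense)

# Only 3 of the 9 entries are materialized.
print(csr.nnz)                               # → 3
print(np.array_equal(csr.toarray(), dense))  # → True
```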
# ## 3. Cross validation setup
# 5 folds cross validation, shuffled.
#
cv = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
# # 4. Machine Learning
# Several models are tried, and their parameters optimized through cross-validation. The code is parallelized to run on 12 processes at a time. The metric used is the NDCG. Because of the computational cost, the for loops for the cross-validations were not nested.
#
#
# Models that were tried:
# - **Random Forest**
# - **eXtreme Gradient Boosting XCGB**
# - **2 layers stack model**:
# - Logistic regression
# - eXtreme Gradient Boosting XCGB
# - **Voting classifer**
# - Random Forest
# - eXtreme Gradient Boosting XCGB
# - 2 layers stack model
#
#
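# metrics_helper.ndcg_scorer is project-specific; since each user has exactly one true destination, NDCG@5 reduces to 1/log2(rank + 1) for the 1-indexed rank of the true country among the 5 predictions, and 0 if it is missed (the ideal DCG is 1). A minimal sketch of that reduction (toy data):

```python
import math

def ndcg_at_5(true_country, ranked_predictions):
    """NDCG@5 with a single relevant item: 1/log2(rank + 1), or 0 if missed."""
    for i, pred in enumerate(ranked_predictions[:5]):
        if pred == true_country:
            return 1.0 / math.log2(i + 2)  # rank is i + 1 (1-indexed)
    return 0.0

print(ndcg_at_5('US', ['US', 'NDF', 'FR', 'IT', 'ES']))  # → 1.0
print(ndcg_at_5('FR', ['US', 'NDF', 'FR', 'IT', 'ES']))  # → 0.5
```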
# # Model 1 : RandomForest
# +
number_trees = [125, 300, 500, 600 ]
max_depth = [5, 8, 12, 16, 20]
rf_score_trees = []
rf_score_depth = []
rf_param_trees = []
rf_param_depth = []
#Loop for hyperparameter number_trees
for number_trees_idx, number_trees_value in enumerate(number_trees):
print('number_trees_idx: ',number_trees_idx+1,'/',len(number_trees),', value: ', number_trees_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=number_trees_value, max_depth=14)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_trees.append(scores.mean())
rf_param_trees.append(number_trees_value)
print('Mean NDCG for this number_trees = ', scores.mean())
# best number of trees from above
print()
print('best NDCG:')
print(np.max(rf_score_trees))
print('best parameter num_trees:')
idx_best = np.argmax(rf_score_trees)
best_num_trees_RF = rf_param_trees[idx_best]
print(best_num_trees_RF)
# +
#Loop for hyperparameter max_depth
for max_depth_idx, max_depth_value in enumerate(max_depth):
print('max_depth_idx: ',max_depth_idx+1,'/',len(max_depth),', value: ', max_depth_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=max_depth_value)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_depth.append(scores.mean())
rf_param_depth.append(max_depth_value)
    print('Mean NDCG for this max_depth = ', scores.mean())
# best max_depth from above
print()
print('best NDCG:')
print(np.max(rf_score_depth))
print('best parameter max_depth:')
idx_best = np.argmax(rf_score_depth)
best_max_depth_RF = rf_param_depth[idx_best]
print(best_max_depth_RF)
# -
# **Random forest - 600 trees, max depth 16**
# - **NDCG = 0.821472784776**
# - **Kaggle Private Leaderboard NDCG = 0.86686**
# ## Predict Countries and convert to CSV for submision for RF model
# +
best_num_trees_RF = 600
best_max_depth_RF = 16
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=best_max_depth_RF)
rand_forest_model.fit(X_train_sparse,y_labels)
y_pred1 = rand_forest_model.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
# Save to csv
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_RF.csv'),index=False)
# -
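# machine_learning_helper.get5likelycountries is project-specific; conceptually it keeps, per user, the indices of the 5 highest predicted probabilities. A minimal sketch for a single toy probability row:

```python
# Toy per-user class probabilities (hypothetical, 7 classes).
probs = [0.05, 0.30, 0.02, 0.25, 0.10, 0.20, 0.08]

# Indices of the 5 most likely classes, best first.
top5 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:5]
print(top5)  # → [1, 3, 5, 4, 6]
```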
# # Model 2 : eXtreme Gradient Boosting XCGB
#
# 5 folds cross validation, using ndcg as scoring metric.
#
# Grid Search to find best parameter.
# +
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_sparse, y_labels, cv,max_depth,n_estimators,learning_rates,gamma)
# -
# **XGBoost - learning_rate = 0.1, gamma = 1, depth = 7, estimators = 75**
# - **NDCG = 0.826134**
# - **Kaggle Private Leaderboard NDCG = 0.86967 (rank 756)**
#
# **XGBoost - learning_rate = 0.1, gamma = 0.7, depth = 5, estimators = 75**
# - **NDCG = 0.826394**
# - **Kaggle Private Leaderboard NDCG = 0.86987 (rank 698)**
#
# ## Predict Countries and convert to CSV for submision of xgb model
# +
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 75
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_XGB.csv'),index=False)
# -
# ## Model 3 : Stacking
#
# As seen previously, the classes in this dataset are unbalanced. Indeed, half of the users didn't book. We are going to try to make good use of that information.
#
# This model is composed of 2 layers :
# - In a first layer, a logistic regression determines if a user is going to book or not. This binary classification model is trained on the training set. The prediction on the test set by this model is added to a second layer, as a meta feature.
#
# A small mistake: for the training of the 1st layer, the date_account_created and timestamp_first_active features were not used.
#
# - The second layer is an XGBoost algorithm. It is trained on the new training set, which is made on the original one connected with the output of the first layer under the column 'meta_layer_1'.
#
# <img src="https://s23.postimg.org/8g018p4a3/1111.png">
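# The wiring between the two layers amounts to appending the layer-1 prediction as an extra column; a toy sketch (hypothetical feature values, plain lists instead of DataFrames):

```python
# Toy feature rows and layer-1 (booked / not booked) predictions.
X = [[34.0, 2.1], [51.0, 0.4], [28.0, 3.3]]
layer1_pred = [1, 0, 1]

# Layer 2 trains on the original features plus the meta feature.
X_layer2 = [row + [meta] for row, meta in zip(X, layer1_pred)]
print(X_layer2)  # → [[34.0, 2.1, 1], [51.0, 0.4, 0], [28.0, 3.3, 1]]
```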
# ### Layer 1 : Logistic regression
#
# This logistic regression will determine whether a user booked or not. It is a binary classification problem.
# Build 1st layer training matrix, text matrix, target vector
y_labels_binary, X_train_layer1, X_test_layer1 = machine_learning_helper.buildFeatsMatBinary(df_train_users, df_test_users, df_sessions)
#y_labels_binary = y_labels_binary[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
y_labels_binary = y_labels_binary.astype(np.int8)
# +
# Build 1st layer model
# Cross validation with parameter C
C = [0.1, 1.0, 10, 100, 1000]
logistic_score_C = []
logistic_param_C = []
#Loop for hyperparameter
for C_idx, C_value in enumerate(C):
print('C_idx: ',C_idx+1,'/',len(C),', value: ', C_value)
# Logistic
model = linear_model.LogisticRegression(C = C_value)
#Scores
scores = model_selection.cross_val_score(model, X_train_layer1, y_labels_binary, cv=cv, verbose = 10, scoring='f1', n_jobs = 12)
logistic_score_C.append(scores.mean())
logistic_param_C.append(C_value)
print('Mean f1 for this C = ', scores.mean())
# best C from above
print()
print('best f1:')
print(np.max(logistic_score_C))
print('best parameter C:')
idx_best = np.argmax(logistic_score_C)
best_C_logistic = logistic_param_C[idx_best]
print(best_C_logistic)
# Build model with best parameter from cross validation
logreg_layer1 = linear_model.LogisticRegression(C = best_C_logistic)
logreg_layer1.fit(X_train_layer1, y_labels_binary)
score_training = logreg_layer1.predict(X_train_layer1)
# 1st layer model prediction
prediction_layer_1 = logreg_layer1.predict(X_test_layer1)
# -
# Training accuracy:
from sklearn import metrics
metrics.accuracy_score(y_labels_binary,score_training)
# ### Layer 2 : XGBoost
#
# Using the previous result as a meta feature, this model will determine the 5 most likely countries a user will travel to.
# +
# Build 2nd layer training matrix, text matrix, target vector
#df_train_users.reset_index(inplace=True,drop=True)
#y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
#y_labels = y_labels[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
X_train_layer2 = X_train_layer1
X_train_layer2['meta_layer_1'] = pd.Series(y_labels_binary).astype(np.int8)
X_test_layer2 = X_test_layer1
X_test_layer2['meta_layer_1'] = pd.Series(prediction_layer_1).astype(np.int8)
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
cv2 = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_layer2, y_labels, cv2,max_depth,n_estimators,learning_rates,gamma)
# -
# **2-layer stack model - learning_rate = 0.1, gamma = 0.7, depth = 5, estimators = 75**
# - **Kaggle Private Leaderboard NDCG = 0.82610**
# ## Predict Countries and convert to CSV for submision of 2 Layer Stack model
# +
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 50
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train_layer2,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test_layer2)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_stacking.csv'),index=False)
# -
# # 4. Voting Model
# Now we combine the 3 models, each optimized with its best parameters, in a voting ensemble
# +
# Create the sub models
estimators = []
model1 = ensemble.RandomForestClassifier(max_depth=best_max_depth_RF, n_estimators= best_num_trees_RF)
estimators.append(('random_forest', model1))
model2 = XGBClassifier(max_depth=best_num_depth_XCG,learning_rate=best_learning_rate_XCG,n_estimators= best_num_estimators_XCG,
objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
estimators.append(('xgb', model2))
model3 = XGB_model
estimators.append(('2layer', model3))
# Create Voting classifier
finalModel = ensemble.VotingClassifier(estimators,voting='soft')
# Run cross validation score
results = model_selection.cross_val_score(finalModel, X_train, y_labels, cv=cv, scoring = metrics_helper.ndcg_scorer, verbose = 10, n_jobs=12)
print("Voting Classifier Cross Validation Score found:")
print(results.mean())
# -
# **Voting classifier**
# - **NDCG = TODO**
# - **Kaggle Private Leaderboard NDCG = TODO**
# ## Predict countries from Voting model and export
# +
finalModel.fit(X_train,y_labels)
y_pred1 = finalModel.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
# -
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_Voting.csv'),index=False)
# # 5. Evaluating features importance
model = XGBClassifier(max_depth=5, learning_rate=0.1, n_estimators=75,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma=0.7 )
model.fit(X_train,y_labels)
machine_learning_helper.plotFeaturesImportance(model,X_train)
# The figure above shows the 20 most important features of the NDCG-tuned model. The age feature is by far the most important one.
# The figure below shows the most important features using the F score.
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(model,height=0.7, ax=ax)
machine_learning_helper.plotFeaturesImportance(XGB_model,X_train_layer2)
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(XGB_model,height=0.7, ax=ax)
# - The features importance plots of the 2 Layer stack model show that the importance of the features is much better distributed over 4 main features instead of 1.
# - The meta_layer_1 feature comes fourth in the importance feature ranking and justifies the 2 layers approach.
#
# # Conclusion:
# - With our best model, an XGBoost, we can predict the destinations chosen by Airbnb users with an **NDCG score of 0.86967**.
# - The most important features for predicting the destination are the user's age and the day the account was created; for the 2-layer stack model, the time spent by the user, the action count, and the meta_layer_1 feature also rank highly.
# - The stacking model, although more complex, is not better than the simple XGBoost. This is explained by the fact that the first-layer logistic regression is not a good classifier: its accuracy and F1 scores are just above 60%, so the second-layer XGBoost cannot fully exploit this feature.
# - The voting model, also more complex, is not better than the simple XGBoost either.
#
# Simpler is better?
| project/reports/airbnb_booking/Main Machine Learning.ipynb |