<a href="https://colab.research.google.com/github/dvschultz/stylegan3/blob/main/SG3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# StyleGAN3
By [Derrick Schultz](https://twitter.com/dvsch), with contributions from [crimeacs](https://twitter.com/EarthML1)
Just starting this...expect more updates soon.
If you find this helpful, please consider backing me on [Patreon](https://www.patreon.com/bustbright) or becoming a [YouTube channel member](https://www.youtube.com/channel/UCaZuPdmZ380SFUMKHVsv_AA/join).
## Setup
```
!nvidia-smi -L
from google.colab import drive
drive.mount('/content/drive')
import os
if os.path.isdir("/content/drive/MyDrive/colab-sg3"):
    %cd "/content/drive/MyDrive/colab-sg3/stylegan3/"
elif os.path.isdir("/content/drive/"):
    # install script
    %cd "/content/drive/MyDrive/"
    !mkdir colab-sg3
    %cd colab-sg3
    !git clone https://github.com/dvschultz/stylegan3
    %cd stylegan3
    !mkdir downloads
    !mkdir datasets
    !mkdir pretrained
    !gdown --id 1-5xZkD8ajXw1DdopTkH_rAoCsD72LhKU -O /content/drive/MyDrive/colab-sg3/stylegan3/pretrained/wikiart.pkl
else:
    !git clone https://github.com/dvschultz/stylegan3
    %cd stylegan3
    !mkdir downloads
    !mkdir datasets
    !mkdir pretrained
    %cd pretrained
    !gdown --id 1-5xZkD8ajXw1DdopTkH_rAoCsD72LhKU
    %cd ../
!pip install Ninja opensimplex
```
This cell will update your copy of the repo to the latest code. Git and Drive/Colab don’t play as nicely as I’d like so 🤞. The other option is to delete your folder in Drive (after saving out `/results` and `/datasets`!) and run the script above to replace the entire folder.
```
%cd "/content/drive/MyDrive/colab-sg3/stylegan3"
!git config --global user.name "test"
!git config --global user.email "test@test.com"
!git fetch origin
!git pull
!git stash
!git checkout origin/main -- train.py gen_images.py gen_video.py README.md training/training_loop.py
```
## Convert/Create Dataset
Pass a folder of images (just .pngs? TK) to create a zip file.
```
!python dataset_tool.py --source=/content/tmp/drawn-gems-1024 --dest=./datasets/drawn-gems-1024.zip
```
## Training
Before you start training, read [this](https://github.com/dvschultz/stylegan3/blob/main/docs/configs.md).
Working Notes:
- It appears that you must use an SG3 pre-trained model for transfer learning. I _think_ you also want to match config to the pretrained model (`t` with `t`, `r` with `r`).
- For an `A100` I’ve found you can use a `--batch-gpu=8`. For other GPUs I recommend `--batch-gpu=4`.
- I see `~205 sec/kimg` on A100s, and `~325 sec/kimg` on V100s (1024, `r` config). This seems slightly slower than what [NVIDIA reports.](https://github.com/dvschultz/stylegan3/blob/main/docs/configs.md)
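As a quick sanity check on total training time, the sec/kimg figures above can be turned into wall-clock estimates (simple arithmetic, not measured numbers):

```python
# Back-of-the-envelope wall-clock estimate from sec/kimg.
def training_hours(sec_per_kimg, kimg):
    return sec_per_kimg * kimg / 3600

print(round(training_hours(205, 5000), 1))  # A100 at 1024, r config: ~284.7 hours
print(round(training_hours(325, 5000), 1))  # V100: ~451.4 hours
```

So a full `--kimg=5000` run is a multi-day affair on a single GPU; plan your `--snap` and `--resume` usage accordingly.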
```
!python train.py --help
!python train.py --outdir=./results --cfg=stylegan3-r --data=./datasets/drawn-gems-1024.zip \
--gpus=1 --batch=32 --batch-gpu=4 --gamma=10.0 --mirror=1 --kimg=5000 --snap=1 \
--resume=/content/drive/MyDrive/colab-sg3/stylegan3/results/00014-stylegan3-r-drawn-gems-1024-gpus1-batch32-gamma10/network-snapshot-000104.pkl --metrics=None
```
## Image Generation
```
!python gen_images.py --help
!python gen_images.py --outdir=out --trunc=1 --seeds=2 \
--network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl
```
## Video Generation
```
!python gen_video.py --help
!python gen_video.py --output=/content/lerp.mp4 --trunc=1 --seeds=100-124 --grid=1x1 --w-frames=72 \
--network=/content/drive/MyDrive/colab-sg3/stylegan3/results/00014-stylegan3-r-drawn-gems-1024-gpus1-batch32-gamma10/network-snapshot-000104.pkl
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/GetStarted/08_masking.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/08_masking.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=GetStarted/08_masking.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/08_masking.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize the Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
    return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
             'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# Data Exploration
Learning objectives:
1. Learn useful patterns for exploring data before modeling
2. Gain an understanding of the dataset and identify any data issues.
The goal of this notebook is to explore our base tables before we begin feature engineering and modeling. We will explore the price history of stocks in the S&P 500.
* Price history : Price history of stocks
* S&P 500 : A list of all companies and symbols for companies in the S&P 500
For our analysis, let's limit price history to the year 2000 onward. In general, the further back historical data goes, the lower its predictive power tends to be.
```
import os
PROJECT = 'your-gcp-project' # Change to your project.
BUCKET = PROJECT
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from google.cloud import bigquery
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
# Allow you to easily have Python variables in SQL query.
@register_cell_magic('with_globals')
def with_globals(line, cell):
    contents = cell.format(**globals())
    if 'print' in line:
        print(contents)
    get_ipython().run_cell(contents)
```
## Preparing the dataset
Let's create the dataset in BigQuery in our project and import the stock data by running the following cells:
```
!bq mk stock_src
%%bash
TABLE=price_history
SCHEMA=symbol:STRING,Date:DATE,Open:FLOAT,Close:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=eps
SCHEMA=date:DATE,company:STRING,symbol:STRING,surprise:STRING,reported_EPS:FLOAT,consensus_EPS:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=snp500
SCHEMA=company:STRING,symbol:STRING,industry:STRING
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
```
Let's look at the tables and columns we have for analysis.
**Learning objective 1.**
```
%%with_globals
%%bigquery --project {PROJECT}
SELECT table_name, column_name, data_type
FROM `stock_src.INFORMATION_SCHEMA.COLUMNS`
ORDER BY table_name, ordinal_position
```
## Price History
Retrieve Google's stock price history.
```
def query_stock(symbol):
    return bq.query('''
        SELECT *
        FROM `stock_src.price_history`
        WHERE symbol="{0}"
        ORDER BY Date
        '''.format(symbol)).to_dataframe()
df_stock = query_stock('GOOG')
df_stock.Date = pd.to_datetime(df_stock.Date)
ax = df_stock.plot(x='Date', y='Close', title='Google stock')
# Add smoothed plot.
df_stock['Close_smoothed'] = df_stock.Close.rolling(100, center=True).mean()
df_stock.plot(x='Date', y='Close_smoothed', ax=ax);
```
Compare Google to the S&P 500.
```
df_sp = query_stock('gspc')

def plot_with_sp(symbol):
    df_stock = query_stock(symbol)
    df_stock.Date = pd.to_datetime(df_stock.Date)
    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    ax2 = ax1.twinx()
    ax = df_sp.plot(x='Date', y='Close', label='S&P', color='green', ax=ax1,
                    alpha=0.7)
    ax = df_stock.plot(x='Date', y='Close', label=symbol,
                       title=symbol + ' and S&P index', ax=ax2, alpha=0.7)
    ax1.legend(loc=3)
    ax2.legend(loc=4)
    ax1.set_ylabel('S&P price')
    ax2.set_ylabel(symbol + ' price')
    ax.set_xlim(pd.to_datetime('2004-08-05'), pd.to_datetime('2013-08-05'))

plot_with_sp('GOOG')
```
**Learning objective 2**
```
plot_with_sp('IBM')
```
Let's see how the price of stocks change over time on a yearly basis. Using the `LAG` function we can compute the change in stock price year-over-year.
Let's compute the average close difference for each year. This could, of course, be done in Pandas. Oftentimes it's useful to use some combination of BigQuery and Pandas for exploratory analysis. In general, it's most effective to let BigQuery do the heavy-duty processing and then use Pandas for smaller data and visualization.
**Learning objective 1, 2**
```
%%with_globals
%%bigquery df --project {PROJECT}
WITH
with_year AS
(
SELECT symbol,
EXTRACT(YEAR FROM date) AS year,
close
FROM `stock_src.price_history`
WHERE symbol in (SELECT symbol FROM `stock_src.snp500`)
),
year_aggregated AS
(
SELECT year, symbol, AVG(close) as avg_close
FROM with_year
WHERE year >= 2000
GROUP BY year, symbol
)
SELECT year, symbol, avg_close as close,
(LAG(avg_close, 1) OVER (PARTITION BY symbol order by year DESC))
AS next_yr_close
FROM year_aggregated
ORDER BY symbol, year
```
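For small frames, the same `LAG`-style shift can be sketched in Pandas with `groupby` + `shift` (the mini-DataFrame below is hypothetical, standing in for the `year_aggregated` table):

```python
import pandas as pd

# Hypothetical mini-frame standing in for the year_aggregated table.
hist = pd.DataFrame({
    'symbol': ['AAA', 'AAA', 'AAA', 'BBB', 'BBB'],
    'year':   [2000, 2001, 2002, 2000, 2001],
    'close':  [10.0, 12.0, 9.0, 5.0, 6.0],
}).sort_values(['symbol', 'year'])

# Pandas equivalent of LAG(avg_close) OVER (PARTITION BY symbol ORDER BY year DESC):
hist['next_yr_close'] = hist.groupby('symbol')['close'].shift(-1)
print(hist)
```

The last year in each symbol's history gets `NaN` for `next_yr_close`, mirroring the SQL result.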
Compute the year-over-year percentage increase.
```
df.dropna(inplace=True)
df['percent_increase'] = (df.next_yr_close - df.close) / df.close
```
Let's visualize the yearly changes for a few stocks.
```
def get_random_stocks(n=5):
    random_stocks = df.symbol.sample(n=n, random_state=3)
    rand = df.merge(random_stocks)
    return rand[['year', 'symbol', 'percent_increase']]

rand = get_random_stocks()
for symbol, _df in rand.groupby('symbol'):
    plt.figure()
    sns.barplot(x='year', y='percent_increase', data=_df)
    plt.title(symbol)
```
There have been some major fluctuations in individual stocks. For example, there were major drops during the early 2000s for tech companies.
```
df.sort_values('percent_increase').head()
stock_symbol = 'YHOO'
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
ax = df.plot(x='date', y='close')
```
**Stock splits** can also impact our data - causing a stock price to rapidly drop. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of [IBM](https://www.fool.com/investing/2017/01/06/ibm-stock-split-will-2017-finally-be-the-year-shar.aspx), for example, all stock splits occurred before the year 2000.
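As a sketch of what such cleaning involves, pre-split prices can be divided by the split ratio so the series is continuous (the dates, prices, and 2-for-1 ratio below are made up for illustration):

```python
import pandas as pd

# Hypothetical prices around a made-up 2-for-1 split date.
prices = pd.DataFrame({
    'date':  pd.to_datetime(['1979-05-08', '1979-05-09', '1979-05-10', '1979-05-11']),
    'close': [300.0, 302.0, 151.0, 152.0],
})
SPLIT_DATE = pd.Timestamp('1979-05-10')
SPLIT_RATIO = 2.0

# Divide pre-split closes by the ratio so the adjusted series has no artificial drop.
adjusted = prices.copy()
pre = adjusted['date'] < SPLIT_DATE
adjusted.loc[pre, 'close'] = adjusted.loc[pre, 'close'] / SPLIT_RATIO
print(adjusted)
```

A real pipeline would apply this per symbol, for every split event, which is why cleaning all stocks would be a major effort.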
**Learning objective 2**
```
stock_symbol = 'IBM'
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
IBM_STOCK_SPLIT_DATE = '1979-05-10'
ax = df.plot(x='date', y='close')
ax.vlines(pd.to_datetime(IBM_STOCK_SPLIT_DATE),
0, 500, linestyle='dashed', color='grey', alpha=0.7);
```
## S&P companies list
```
%%with_globals
%%bigquery df --project {PROJECT}
SELECT *
FROM `stock_src.snp500`
df.industry.value_counts().plot(kind='barh');
```
We can join the price histories table with the S&P 500 table to compare industries:
**Learning objective 1,2**
```
%%with_globals
%%bigquery df --project {PROJECT}
WITH sp_prices AS
(
SELECT a.*, b.industry
FROM `stock_src.price_history` a
JOIN `stock_src.snp500` b
USING (symbol)
WHERE date >= "2000-01-01"
)
SELECT Date, industry, AVG(close) as close
FROM sp_prices
GROUP BY Date, industry
ORDER BY industry, Date
df.head()
```
Using Pandas we can "unstack" our table so that each industry has its own column. This will be useful for plotting.
```
# Pandas `unstack` to make each industry a column. Useful for plotting.
df_ind = df.set_index(['industry', 'Date']).unstack(0).dropna()
df_ind.columns = [c[1] for c in df_ind.columns]
df_ind.head()
ax = df_ind.plot(figsize=(16, 8))
# Move legend down.
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2)
```
Let's scale each industry using min/max scaling. This will put all of the stocks on the same scale. Currently it can be hard to see the changes in stocks over time across industries.
**Learning objective 1**
```
def min_max_scale(df):
    # Scale each column to [0, 1]: (x - min) / (max - min).
    return (df - df.min()) / (df.max() - df.min())
scaled = min_max_scale(df_ind)
ax = scaled.plot(figsize=(16, 8))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
```
We can also create a smoothed version of the plot above using a [rolling mean](https://en.wikipedia.org/wiki/Moving_average). This is a useful transformation to make when visualizing time-series data.
```
SMOOTHING_WINDOW = 30 # Days.
rolling = scaled.copy()
for col in scaled.columns:
    rolling[col] = scaled[col].rolling(SMOOTHING_WINDOW).mean()
ax = rolling.plot(figsize=(16, 8))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
```
Information technology had a large crash during the early 2000s and again in 2008-2009, along with all other stocks. After 2008, some industries were a bit slower to recover than others.
BONUS: In the next lab, we will want to predict the price of the stock in the future. What are some features that we can use to predict future price? Try visualizing some of these features.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Automatic differentiation with JAX
## Main features
- Numpy wrapper
- Auto-vectorization
- Auto-parallelization (SPMD paradigm)
- Auto-differentiation
- XLA backend and JIT support
## How to compute gradient of your objective?
- Define it as a standard Python function
- Call ```jax.grad``` and voila!
- Do not forget to wrap these functions with ```jax.jit``` to speed up
```
import jax
import jax.numpy as jnp
```
- By default, JAX uses single-precision numbers (```float32```)
- You can enable double precision (```float64```) by hand.
```
from jax.config import config
config.update("jax_enable_x64", True)
n = 5
x = jax.random.normal(jax.random.PRNGKey(0), (n,))
y = jax.random.normal(jax.random.PRNGKey(10), (n,))
print(x.shape, y.shape)
print(x @ y)
print(x.T @ y)
print(jnp.outer(x, y))
print(x[:, None].shape, y.shape)
print((x[None, :] @ y)[0])
@jax.jit # Just-in-time compilation
def f(x, A, b):
    res = A @ x - b
    # Functional update (JAX arrays are immutable); replaces the deprecated jax.ops.index_update.
    res = res.at[0].set(100)
    # res[0] = 100       # in-place assignment is not allowed on JAX arrays
    # y = res[res > 1]   # boolean masking yields dynamic shapes and breaks jit
    return res @ res
gradf = jax.grad(f, argnums=0, has_aux=False)
```
## Random numbers in JAX
- JAX focuses on the reproducibility of the runs
- The analogue of a random seed is **a required argument** of every function that generates something random
- More details and references on the design of ```random``` submodule are [here](https://github.com/google/jax/blob/master/design_notes/prng.md)
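Note that calling a generator with the same key and the same shape always reproduces the identical array; the idiomatic way to get independent streams is `jax.random.split`. A minimal sketch:

```python
import jax

key = jax.random.PRNGKey(0)
key, sub1, sub2 = jax.random.split(key, 3)  # carve independent subkeys

a = jax.random.normal(sub1, (3,))
b = jax.random.normal(sub2, (3,))  # independent of `a`
c = jax.random.normal(sub1, (3,))  # same key + shape => identical to `a`
```

Reusing one key for several arrays of the same shape (as in the cell below) silently gives identical draws, so splitting is the safer habit.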
```
n = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(gradf(x, A, b) - 2 * A.T @ (A @ x - b)))
# print(gradf(x, A, b))
print("Compare speed")
print("Analytical gradient")
# %timeit 2 * A.T @ (A @ x - b)
print("Grad function")
%timeit gradf(x, A, b).block_until_ready()
jit_gradf = jax.jit(gradf)
print("Jitted grad function")
%timeit jit_gradf(x, A, b).block_until_ready()
hess_func = jax.jit(jax.hessian(f))
print("Check correctness", jnp.linalg.norm(2 * A.T @ A - hess_func(x, A, b)))
print("Time for hessian")
%timeit hess_func(x, A, b).block_until_ready()
print("Emulate hessian and check correctness",
jnp.linalg.norm(jax.jit(hess_func)(x, A, b) - jax.jacfwd(jax.jacrev(f))(x, A, b)))
print("Time of emulating hessian")
hess_umul_func = jax.jit(jax.jacfwd(jax.jacrev(f)))
%timeit hess_umul_func(x, A, b).block_until_ready()
```
## Summary
- JAX is a simple and extensible tool for problems where autodiff is crucial
- JIT is a key to fast Python code
- Input/output dimensions are important
- A Hessian-vector product is faster than forming the explicit Hessian and multiplying it by a vector
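A minimal sketch of the last point (using a hypothetical quadratic, not the `f` above): a Hessian-vector product via `jax.jvp` of `jax.grad` never materializes the Hessian matrix:

```python
import jax
import jax.numpy as jnp

def g(x):
    return 0.5 * jnp.sum(x ** 2)  # Hessian is the identity

def hvp(f, x, v):
    # Forward-over-reverse: differentiate grad(f) along direction v.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

x = jnp.arange(4.0)
v = jnp.ones(4)
print(hvp(g, x, v))  # identity Hessian, so this returns v
```

The cost is a constant factor over one gradient evaluation, versus O(n) gradient evaluations to build the full Hessian.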
```
# Importing the required packages
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings('ignore')
import calendar
from tabulate import tabulate
%matplotlib inline
# Importing the requisite data
patients = pd.read_excel('Example_data.xlsx', sheet_name='patients_table')
activity = pd.read_excel('Example_data.xlsx', sheet_name='activity_table')
# Creating datetime object for the available dates
patients['Admit Date'] = pd.to_datetime(patients['Admit Date'])
patients['Discharge Date'] = pd.to_datetime(patients['Discharge Date'])
patients['Procedure Date'] = pd.to_datetime(patients['Procedure Date'])
# look-through at the patients data
patients.head(10)
# Look-through at the activity data
activity.head(10)
```
# Excel Section
```
# Question 1 : How many patient admittances are in the patients_table report?
print ("Number of Patient Admittances = {}".format(patients['MRN'].count())) # Counting total records
# Question 2 : How many maternity admittances were there at Sacred Heart in the month of June?
filter_sacred_heart = patients[patients['Facility']=='Sacred Heart'] # Filtering out Sacred Heart facility
filter_sacred_heart_maternity = filter_sacred_heart[filter_sacred_heart['Service Line'] == 'Maternity'] # Filtering out the Maternity Service Line
# Creating the month column in the view
filter_sacred_heart_maternity['month_number'] = filter_sacred_heart_maternity['Admit Date'].dt.strftime('%m').astype('int')
filter_sacred_heart_maternity_june = filter_sacred_heart_maternity[filter_sacred_heart_maternity['month_number'] == 6]
print ("Maternity Admittance in June : {}".format(filter_sacred_heart_maternity_june['MRN'].count()))
# Question 3 : What about the rest of the year? Build a line graph that shows how many maternity admittances there were at Sacred Heart, for each month of 2017
yearly_count = filter_sacred_heart_maternity[['MRN', 'month_number']].groupby('month_number').size()\
.reset_index(name='count').sort_values('month_number', ascending=True)['count']
months = [v for k,v in enumerate(calendar.month_abbr) if v !='']
plt.figure(figsize=(15, 5))
plt.plot(months, yearly_count)
plt.xlabel('Month')
plt.ylabel('Count of Admittances')
plt.title('Month-wise maternity admittance at Sacred Heart')
# Question 4 : Are there any patients who were admitted more than once to the same facility, in 2017?
# If so, list their MRNs here; if not, confirm that there aren't any.
join_patients_activity = pd.merge(patients[['MRN', 'Facility']], activity[['MRN', 'Activity State']],
on='MRN') # Combining the patients and the activity table
join_patients_activity_completed = join_patients_activity[join_patients_activity['Activity State'] ==
'completed'] # Proceeding with only Activity State - Completed
total_visits_per_patient = join_patients_activity_completed.groupby(['MRN', 'Facility']).size().reset_index(name='count')
patients_admitted_multiple_times = total_visits_per_patient[total_visits_per_patient['count'] > 1]['MRN']
print ("All MRNs of patients with multiple visit history : \n")
print (list(patients_admitted_multiple_times[:20]))
```
# Tableau Section
```
# Question 1 : Build a bar chart that compares the total number of admittances to Sacred Heart and Plainsboro
facility_view = patients[['MRN', 'Facility']].groupby('Facility').size().reset_index(name='count')
plt.figure(figsize=(10, 5))
plt.bar(facility_view['Facility'], facility_view['count'])
plt.xlabel('Facilities')
plt.ylabel('Total Admittance')
plt.title('Total admittance for each facility')
# Question 2 : Build a pie chart, that compares the number of admittances for each service line at Plainsboro.
patients_plainsboro = patients[patients['Facility'] == 'Plainsboro']
patients_plainsboro_service_line = patients_plainsboro[['MRN', 'Service Line']].groupby('Service Line')\
.size().reset_index(name='count')
_, ax = plt.subplots()
ax.pie(patients_plainsboro_service_line['count'], explode=(0, 0, 0.1),
labels=patients_plainsboro_service_line['Service Line'], autopct='%1.1f%%', shadow=True, startangle=90)
ax.axis('equal')
plt.show()
# Question 3 : How many patients did we complete exactly 3 activities with?
activity_completed = activity[activity['Activity State'] == 'completed'].groupby('MRN').size().reset_index(name='count')
activity_completed_thrice = activity_completed[activity_completed['count'] == 3]
print ("Total patients with exactly 3 activities completed = {}".format(activity_completed_thrice['MRN'].count()))
# Question 4 : What % of all activities (completed or missed) for orthopedic patients at Plainsboro are completed?
join_patients_activity = pd.merge(patients[['MRN', 'Facility', 'Service Line', 'Discharge Date']], activity[['MRN', 'Activity State']],
on='MRN') # Combining the patients and the activity table
plainsboro_activity = join_patients_activity[join_patients_activity['Facility']=='Plainsboro']
plainsboro_activity_orthopedic = plainsboro_activity[plainsboro_activity['Service Line'] == 'Orthopedics']
plainsboro_activity_completed = plainsboro_activity_orthopedic[plainsboro_activity_orthopedic['Activity State'] == 'completed']['MRN'].count() / plainsboro_activity_orthopedic['MRN'].count()
print ("% of all activity for orthopedic patients at Plainsboro which are completed = {:.4f}".format(plainsboro_activity_completed*100))
# Question 5 : Calculate the same for orthopedic patients at Sacred Heart. What's the difference between the two rates?
# Part - 1:
sacred_heart_activity = join_patients_activity[join_patients_activity['Facility']=='Sacred Heart']
sacred_heart_activity_orthopedic = sacred_heart_activity[sacred_heart_activity['Service Line'] == 'Orthopedics']
sacred_heart_activity_completed = sacred_heart_activity_orthopedic[sacred_heart_activity_orthopedic['Activity State'] == 'completed']['MRN'].count() / sacred_heart_activity_orthopedic['MRN'].count()
print ("Part1: % of all activity for orthopedic patients at Plainsboro which are completed = {:.4f}".format(sacred_heart_activity_completed*100))
print ("Part2: Difference between the two rates = {:.4f}".format(abs(sacred_heart_activity_completed-plainsboro_activity_completed)*100))
# Question 6 : Make a month-to-month line chart showing of the number of completed activities, for each permutation of
# facility and service line
join_patients_activity['month_number'] = join_patients_activity['Discharge Date'].dt.strftime('%m').astype('int')
join_patients_activity_completed = join_patients_activity[join_patients_activity['Activity State']=='completed']
monthly_view = join_patients_activity_completed.groupby(['Facility', 'Service Line', 'month_number']).size()\
.reset_index(name='count').sort_values('month_number', ascending=True)
monthly_view['month_name'] = [months[item-1] for item in list(monthly_view['month_number'])]
monthly_view['permutation'] = monthly_view['Facility'] + '-' + monthly_view['Service Line']
monthly_view_pivot = pd.pivot(monthly_view[['permutation', 'month_name', 'count', 'month_number']],
index='month_number',
columns='permutation', values='count') # Creating pivot table
ax = monthly_view_pivot.plot(kind='line', figsize=(10, 10))
ax.legend(loc='best')
ax.set_xlabel('Month number')
ax.set_ylabel('Count of completed activities')
ax.set_title('Monthly view of completed activities across facilities and service lines')
plt.tight_layout()
for p in ax.patches:  # No-op for a line chart (ax.patches is empty); kept from the bar-chart version.
    if round(p.get_width(), 3) == 0.0:
        continue
    ax.text(p.get_width()*1.01, p.get_y()*1.01, str(round(p.get_width(), 3)))
# Question 7: Replicate what you've made in (6), but scope the chart to only consider the first activity that is completed by a patient, filtering out any subsequent ones.
join_patients_activity_completed = join_patients_activity_completed.drop_duplicates(subset=['MRN'], keep='first') # Keeping only the first activity of the patient
monthly_view = join_patients_activity_completed.groupby(['Facility', 'Service Line', 'month_number']).size()\
.reset_index(name='count').sort_values('month_number', ascending=True)
monthly_view['month_name'] = [months[item-1] for item in list(monthly_view['month_number'])]
monthly_view['permutation'] = monthly_view['Facility'] + '-' + monthly_view['Service Line']
monthly_view_pivot = pd.pivot(monthly_view[['permutation', 'month_name', 'count', 'month_number']],
index='month_number',
columns='permutation', values='count') # Creating pivot table
ax = monthly_view_pivot.plot(kind='line', figsize=(10, 10))
ax.legend(loc='best')
ax.set_xlabel('Month number')
ax.set_ylabel('Count of completed activities')
ax.set_title('Monthly view of completed activities across facilities and service lines')
plt.tight_layout()
for p in ax.patches:  # No-op for a line chart (ax.patches is empty); kept from the bar-chart version.
    if round(p.get_width(), 3) == 0.0:
        continue
    ax.text(p.get_width()*1.01, p.get_y()*1.01, str(round(p.get_width(), 3)))
```
# Logic Section
```
final_results = []
final_results.append({'P':'P', 'Q':'Q', '~P':'~P', '~Q':'~Q', '~P V Q':'~P V Q', 'P ^ ~Q':'P ^ ~Q',
'~(P V Q)': '~(P V Q)', '~P V ~Q': '~P V ~Q', '~P V (P ^ ~Q)': '~P V (P ^ ~Q)'})
dict_keys = ['P', 'Q', '~P', '~Q', '~P V Q', 'P ^ ~Q', '~(P V Q)', '~P V ~Q', '~P V (P ^ ~Q)']
res_vals = [
['T', 'T', 'F', 'F', 'T', 'F', 'F', 'F', 'F'],
['T', 'F', 'F', 'T', 'F', 'T', 'F', 'T', 'T'],
['F', 'T', 'T', 'F', 'T', 'F', 'F', 'T', 'T'],
['F', 'F', 'T', 'T', 'T', 'F', 'T', 'T', 'T']
]
for row in res_vals:
    tmp_dict = {}
    for idx in range(len(row)):
        tmp_dict[dict_keys[idx]] = row[idx]
    final_results.append(tmp_dict)
print (tabulate(final_results))
```
# Installation procedure
## Astroconda
Users are encouraged to install AstroConda in order to have easy access to some of the dependencies required by the Pandeia engine. Please see the [AstroConda documentation](http://astroconda.readthedocs.io/en/latest/installation.html#install-astroconda) for instructions on setting up your environment.
## Pandeia
Pandeia can be installed via the Python package index using the following command
```bash
pip install pandeia.engine
```
which will install Pandeia into your local environment.
> **Note**:
> Pandeia currently only supports Python 2.7. Please be sure that your environment uses the correct version.
This will provide the user with the relevant core engine package for performing exposure time calculations. The engine is observatory-agnostic, meaning that the package does not come with any prescience about the calculations the user wishes to perform. In order to allow JWST calculations with Pandeia, the user must download the relevant data and configuration files.
## Environment
As mentioned, the user must provide their own reference data files in order for the engine to understand the instrument with which the calculations will be performed.
For JWST, the user may download the required reference data [here](http://ssb.stsci.edu/pandeia/engine/1.2/pandeia_data-1.2.tar.gz), or [here for WFIRST](http://ssb.stsci.edu/pandeia/engine/1.2.1/pandeia_wfirst_data-1.2.1.tar.gz). Unpack and store the data in a location of your choice.
Likewise, the user may wish to install specific data for use with the PySynPhot package Pandeia uses. For users at Space Telescope and connected to the local intranet, the files are already accessible on Central Storage; for others, see the [pysynphot install documentation](http://pysynphot.readthedocs.io/en/latest/index.html#pysynphot-installation-setup) for instructions on retrieving the necessary files from [CRDS](http://www.stsci.edu/hst/observatory/crds/throughput.html).
Before running any calculations, environment variables must be set:
- To tell Pandeia where to find the reference data files
```bash
export pandeia_refdata=/path/to/reference/data/directory
```
- To tell Pandeia where to find the CDBS for PySynPhot
```bash
export PYSYN_CDBS=/eng/ssb/pyetc/cdbs_trees/cdbs.23.1.rc3
```
These can be added to your shell startup script or set directly in your python scripts.
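The same variables can also be set from Python before importing the engine (a sketch; the paths below are placeholders for your own locations):

```python
import os

# Placeholder paths -- point these at your own unpacked data.
os.environ['pandeia_refdata'] = '/path/to/reference/data/directory'
os.environ['PYSYN_CDBS'] = '/path/to/cdbs'
```

Set them at the top of the script, before any `pandeia` or `pysynphot` import, so the libraries see them at load time.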
## Dependencies
- **PyFFTW**: Your environment must have access to the PyFFTW package. This can be downloaded through AstroConda
```bash
conda install pyfftw
```
Otherwise, the fftw libraries must be built using e.g. [Homebrew](http://brew.sh), and the python package installed through PyPI
```bash
pip install pyfftw
```
If you have access to AstroConda, a simple one-liner for installing dependencies can be performed with
```bash
conda install numpy scipy astropy pyfftw pysynphot photutils
```
# Calculation files
The Pandeia engine uses a data-driven approach in which users create astronomical *scenes* populated with *sources* and qualified with some set of parameters in order to generate appropriate calculations.
These calculation files are written in `JSON` and fed into Pandeia. An example `JSON` scene file can be seen below for the MIRI Imager
```json
{
"background": "medium",
"calculation": {
"effects": {
"background": true,
"ipc": true,
"saturation": true
},
"noise": {
"crs": true,
"darkcurrent": true,
"ffnoise": true,
"readnoise": true,
"rn_correlation": true
}
},
"configuration": {
"detector": {
"nexp": 1,
"ngroup": 10,
"nint": 10,
"readmode": "fast",
"subarray": "full"
},
"dynamic_scene": true,
"instrument": {
"aperture": "imager",
"disperser": null,
"filter": "f560w",
"instrument": "miri",
"mode": "imaging"
},
"max_scene_size": 40.0,
"scene_size": 6.0
},
"scene": [
{
"id": 1,
"position": {
"orientation": 0.0,
"x_offset": 0.0,
"y_offset": 0.0
},
"shape": {
"geometry": "point"
},
"spectrum": {
"extinction": {
"bandpass": "v",
"law": "mw_rv_31",
"unit": "mag",
"value": 0.0
},
"lines": [],
"name": "generic source",
"normalization": {
"norm_flux": 0.1,
"norm_fluxunit": "mjy",
"norm_wave": 2.0,
"norm_waveunit": "microns",
"type": "at_lambda"
},
"redshift": 0.0,
"sed": {
"sed_type": "flat",
"unit": "fnu"
}
}
}
],
"strategy": {
"aperture_size": 0.24,
"background_subtraction": true,
"display_string": "Imaging Aperture Photometry",
"method": "imagingapphot",
"sky_annulus": [
0.72,
0.96
],
"target_source": 1,
"target_type": "",
"target_xy": [
0.0,
0.0
],
"units": "arcsec"
}
}
```
The above calculation file places a single point source with no extinction at the very center of the scene. The point source, as well as the scene, can be parameterized in many ways (to get a visual feel for the options available, take a look at the [JWST ETC](https://jwst.etc.stsci.edu/)).
# Running calculations
With the package properly installed and an understanding of how to specify the calculation parameters established, the user can now perform an exposure time calculation.
In a new python file, the user need only import the `perform_calculation` function from the Pandeia package. Along with this, the `json` library must also be imported so that the user can read in the scene file they have created.
```python
# Import relevant libraries
from pandeia.engine.perform_calculation import perform_calculation
import json
# Load in your scene file
with open("../configurations/jwst/miri/imaging.json") as f:
imgr_data = json.load(f)
```
The user now has access to the `imgr_data` object, which is a simple python dictionary. By editing parameters in this object, the user may tweak their calculations.
```python
# Change the flux value of the source
imgr_data['scene'][0]['spectrum']['normalization']['norm_flux'] = 100
# Perform a calculation
results = perform_calculation(imgr_data)
```
The `results` object is another dictionary that contains all the information generated by performing the calculation.
```python
%matplotlib inline
import matplotlib.pyplot as plt
f, ax = plt.subplots()
ax.imshow(results['2d']['saturation'])
```
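The dictionary's exact contents vary with the engine version, so a quick way to see what a calculation produced is to walk the structure. The sketch below assumes nothing beyond nested dictionaries; only the `'2d'` and `'scalar'` keys mentioned in this document are known to exist.

```python
# Recursively print the nested structure of a results-style dictionary,
# showing sub-dictionaries as "key/" and leaves as "key: type".
def summarize(d, indent=0):
    for key, value in d.items():
        if isinstance(value, dict):
            print(" " * indent + key + "/")
            summarize(value, indent + 2)
        else:
            print(" " * indent + f"{key}: {type(value).__name__}")

# e.g. summarize(results) after a perform_calculation(...) call
```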
# Batch mode
This system allows for very straightforward iteration over parameter space values.
## Saturation
```python
import numpy as np
# Load in a calculation file, in this case the MIRI coronagraph
with open("../configurations/jwst/miri/coronography.json") as f:
coro_data = json.load(f)
# Define some variables for saturation exploration
tot_none, tot_soft, tot_hard = [], [], []
flux_range = np.linspace(0.1, 1e8, 10)
# Loop over some flux range to discern saturation levels
for flux in flux_range:
# Set the flux value
coro_data['scene'][0]['spectrum']['normalization']['norm_flux'] = flux
# Perform a calculation
results = perform_calculation(coro_data)
# Store the saturation so it's easily accessible
saturation = results['2d']['saturation']
tot_none.append(len(saturation[saturation==0]))
tot_soft.append(len(saturation[saturation==1]))
tot_hard.append(len(saturation[saturation==2]))
# Plot the values
f, ax = plt.subplots()
ax.plot(flux_range, tot_none, label="No saturation")
ax.plot(flux_range, tot_soft, label="Soft saturation")
ax.plot(flux_range, tot_hard, label="Hard saturation")
ax.set_ylabel("Pixels [counts]")
ax.set_xlabel("Flux [mJy]")
ax.legend(loc=0)
```
## Contrast
```python
import numpy as np
# Load in a calculation file, in this case the MIRI coronagraph
with open("../configurations/jwst/miri/coronography.json") as f:
coro_data = json.load(f)
# Define some variables for saturation exploration
flux_range = np.linspace(0.1, 1.0, 10)
# Define some variables for contrast exploration
tot_cont = []
# Loop over some flux range to discern saturation levels
for flux in flux_range:
# Set the flux value
coro_data['scene'][0]['spectrum']['normalization']['norm_flux'] = flux
# Perform a calculation
results = perform_calculation(coro_data)
# Get the scalar value of the contrast for the detector plane
tot_cont.append(results['scalar']['contrast'])
# Plot the values
f, ax = plt.subplots()
ax.plot(flux_range, tot_cont)
ax.set_ylabel("Contrast")
ax.set_xlabel("Flux [mJy]")
```
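Both sweeps above follow the same pattern: set one nested value, recalculate, and extract a quantity. That pattern can be factored into a small helper. This is a sketch — the `sweep` name and signature are mine, and the calculation function is passed in as an argument rather than imported:

```python
import copy

def sweep(calculate, calc, path, values, extract):
    """Run `calculate` once per value, setting the nested key at `path` each time."""
    results = []
    for value in values:
        trial = copy.deepcopy(calc)   # avoid mutating the caller's calculation dict
        node = trial
        for key in path[:-1]:
            node = node[key]
        node[path[-1]] = value
        results.append(extract(calculate(trial)))
    return results
```

The contrast loop above would then become, for example, `sweep(perform_calculation, coro_data, ('scene', 0, 'spectrum', 'normalization', 'norm_flux'), flux_range, lambda r: r['scalar']['contrast'])`.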
<h2 id='part1'>Project 1: Blog</h2>
Looking into the population of the Stack Overflow survey data, I wanted to explore the differences between men and women.
__The questions that I want to answer are:__
<br> a) How big is the disparity in pay between men and women?
<br> b) How does having children impact progression?
<br> c) Women in STEM… Is there really an obstacle? (i.e. is it harder for women to break into the field?)
I thought a good place to start was looking at what the breakdown of the population by gender was.
For that I needed to read in the data:
```
#Importing packages needed for the project
import numpy as np
import pandas as pd
import os
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
%matplotlib inline
#Reading in the StackOverflow Developer Survey data
if os.path.exists(os.path.join(os.getcwd(),'df_personal_data.pkl')):
df_personal = pd.read_pickle('df_personal_data.pkl')
else:
file_path = os.path.join(os.getcwd(),r"StackOverflow_Data\2018\survey_results_public.csv")
df=pd.read_csv(file_path)
#Selecting columns needed for the analysis - started with just Gender and added these slowly as my analysis below needed.
cols = ['CareerSatisfaction','JobSatisfaction', 'CompanySize',
'Country','Gender','Age','ConvertedSalary',
'UndergradMajor','YearsCoding','Dependents']
df_personal=df[cols]
df_personal.to_pickle(os.path.join(os.getcwd(),'df_personal_data.pkl'))
#Outputting metrics on gender breakdown
#Defined a function to convert multiple choice columns into a usable output
def split_and_stack(df_orig, col, sep):
"""This splits multiple choice answers within a column into multiple columns, then converts them back into extra rows
so each option selected by one user will be on a new row, meaning that the population can be analysed.
Steps:
1) Splits a single column in a dataframe into multiple columns (/levels), using a defined separator.
2) Stacks these extra column entries into rows, but shows indexes of extra levels which the data was split over.
3) Extra levels / generated columns are then dropped.
4) Renames the last column as the original column name.
Parameters:
df_orig (pandas.DataFrame): A DataFrame containing columns with multiple choice answers.
col (string): The column which requires multiple choice answers to be split.
sep (string): The separator which the column (col) mentioned above needs to be split over.
Returns:
pandas.DataFrame:Returning a DataFrame of the total population with extra rows (multiple for the same index)
for multiple choice responses.
"""
new_df = df_orig[col].str.split(sep,expand=True).stack().to_frame().reset_index()
new_df = new_df.drop(['level_0','level_1'], axis=1)
new_df.columns = [col]
return new_df
#splitting the data into usable rows, see function defined above (preparing the data)
df_gender = split_and_stack(df_personal, 'Gender', ';')
#Grouping by and calculating Gender breakdowns.
#Groupby disregards null Gender values, so these are dropped; they give us no information about the gender population
gender = df_gender.groupby('Gender')['Gender'].count().sort_values(ascending=False)/len(df_gender)
gender_stats = zip(list(gender.index),list(gender))
#Printing stats in percentage form
for gender in gender_stats:
print(gender[0] + ": " +"{:.2%}".format(gender[1]))
```
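To make the behaviour of `split_and_stack` concrete, here is a toy example. The helper is restated so the snippet runs standalone, and the data is made up, not from the survey:

```
import pandas as pd

def split_and_stack(df_orig, col, sep):
    new_df = df_orig[col].str.split(sep, expand=True).stack().to_frame().reset_index()
    new_df = new_df.drop(['level_0', 'level_1'], axis=1)
    new_df.columns = [col]
    return new_df

# A respondent who selected two options becomes two rows; nulls are dropped.
toy = pd.DataFrame({'Gender': ['Male', 'Female;Male', None]})
print(split_and_stack(toy, 'Gender', ';'))
```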
### Question 1: How big is the disparity in pay between men and women?
Looking at the stark differences in population size, I wondered what else was different about the populations between men and women.
The gender pay gap, regularly in the media, can be detrimental to the perception of professions and businesses, and I wanted to assess how big the impact on pay is.
```
#Outputting graph to highlight salary differences by percentile.
#Splitting data to male only and female only as population sizes are significantly different.
#Null values for Gender are removed: we don't know their gender and they could skew the results.
#Moreover, imputing values wouldn't make sense; we could only use the mode, which would just classify them all as male.
df_male = df_personal.dropna(subset=['Gender'], axis=0)[df_personal.Gender.dropna(axis=0).\
apply(lambda x: True if 'Male' in x else False)]
df_female = df_personal.dropna(subset=['Gender'],axis=0)[df_personal.Gender.dropna(axis=0).\
apply(lambda x: True if 'Female' in x else False)]
#Finding percentiles of salary for male and female.
#The quantile function ignores null values for ConvertedSalary. If we imputed values (i.e. replaced nulls with the mean/median)
#then this would potentially skew the results and change the distribution below.
female_percentiles = [ (i*100, df_female.ConvertedSalary.quantile(i)) for i in np.arange(0,1,0.005) ]
male_percentiles = [ (i*100, df_male.ConvertedSalary.quantile(i)) for i in np.arange(0,1,0.005) ]
#Separating x and y values for the graph
x_female_percentile = [x[0] for x in female_percentiles]
y_female_percentile = [y[1] for y in female_percentiles]
x_male_percentile = [x[0] for x in male_percentiles]
y_male_percentile = [y[1] for y in male_percentiles]
#Setting the graph's x and y limits and labelling the axes
plt.ylim((50000,200000))
plt.ylabel('Salary (USD)')
plt.xlabel('Percentile')
plt.xlim((50,100))
plt.plot(x_female_percentile, y_female_percentile, label = 'Female')
plt.plot(x_male_percentile, y_male_percentile, label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
#Saving file
plt.savefig(os.path.join(os.getcwd(),'Pay_gap.png'),bbox_inches='tight')
```
It is clear from the graph above that there is a significant difference between men and women in high paying roles.
<br>
<br> This prompted another question, if women are paid less, does this affect their satisfaction in their current role?
```
#Outputting graph of job satisfaction by gender
#Re-casting JobSatisfaction to an ordered category; the data is ordered and this is needed to order the output correctly
sat_levels = ['Extremely dissatisfied','Moderately dissatisfied',
              'Slightly dissatisfied','Neither satisfied nor dissatisfied',
              'Slightly satisfied','Moderately satisfied','Extremely satisfied']
sat_dtype = pd.api.types.CategoricalDtype(categories=sat_levels, ordered=True)
df_male['JobSatisfaction'] = df_male['JobSatisfaction'].astype(sat_dtype)
df_female['JobSatisfaction'] = df_female['JobSatisfaction'].astype(sat_dtype)
#Finding percentage breakdown for career satisfaction. Count/Groupby functions ignore null values for CareerSatisfaction
#Since we just want population distribution, it makes sense to ignore these.
female_job_sat = df_female.groupby('JobSatisfaction').JobSatisfaction.count().sort_index()/len(df_female)
male_job_sat = df_male.groupby('JobSatisfaction').JobSatisfaction.count().sort_index()/len(df_male)
#Formatting and generating a graph
plt.ylabel('Proportion')
plt.xticks(rotation=90)
plt.plot(list(female_job_sat.index), list(female_job_sat), label = 'Female')
plt.plot(list(male_job_sat.index), list(male_job_sat), label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_job_satisfaction.png'),bbox_inches='tight')
```
Even though the above indicates men may be slightly more satisfied with their jobs, the distribution is generally the same and I would say satisfaction for both genders is pretty similar.
<br>
<br> This didn't seem intuitive to me, so I explicitly looked at the salaries by Job Satisfaction for both genders to get a better understanding.
```
#Outputting a graph of the salary for men and women by job satisfaction breakdown
#Mean function ignores null values for ConvertedSalary. Groupby function ignores null values for CareerSatisfaction.
#Since we want mean salary values, imputing the median could skew the figure when nulls are numerous, while imputing the mean wouldn't change it, so nulls are simply ignored.
#We also want this figure to be consistent with graph above, so not imputing JobSatisfaction values.
female_job_sat_mean = df_female.groupby('JobSatisfaction').ConvertedSalary.mean().sort_index()
male_job_sat_mean = df_male.groupby('JobSatisfaction').ConvertedSalary.mean().sort_index()
#Formatting and generating a graph
plt.title('Mean Salary by Satisfaction')
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.plot(list(female_job_sat_mean.index), list(female_job_sat_mean), label = 'Female')
plt.plot(list(male_job_sat_mean.index), list(male_job_sat_mean), label = 'Male')
plt.legend(loc='upper right', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_pay_by_Satisfaction.png'),bbox_inches='tight')
```
The above graph illustrates that salary and JobSatisfaction are not directly correlated; if anything, it suggests higher-paid professionals may actually dislike their jobs more!
<br>
<br> To explain why salaries may be different between men and women, as discovered above, I thought looking into the experience of the individuals would be a better indicator.
```
#Outputting graph of men and women's 90th percentile salaries by years of experience.
#Groupby ignores null values in YearsCoding; quantile ignores null values for ConvertedSalary.
#If we imputed the converted salary values, this might shift the distribution, so these are ignored.
#YearsCoding is directly related to age, and it would be nonsensical to impute values as this relationship wouldn't be preserved.
female_exp_sal = df_female.groupby('YearsCoding').ConvertedSalary.quantile(0.9)
male_exp_sal = df_male.groupby('YearsCoding').ConvertedSalary.quantile(0.9)
#Ordering points for graph so it's in Experience ascending order
female_exp_sal_sort = sorted(list(zip(female_exp_sal.index, female_exp_sal)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_exp_sal_sort = sorted(list(zip(male_exp_sal.index, male_exp_sal)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_exp_sal = [x[0] for x in female_exp_sal_sort]
y_female_exp_sal = [y[1] for y in female_exp_sal_sort]
x_male_exp_sal = [x[0] for x in male_exp_sal_sort]
y_male_exp_sal = [y[1] for y in male_exp_sal_sort]
#Formatting and generating a graph
plt.title('90th Percentile Salary by Experience')
plt.ylabel('Salary (USD)')
plt.xlabel('Years Coding')
plt.xticks(rotation=90)
plt.plot(x_female_exp_sal, y_female_exp_sal, label = 'Female')
plt.plot(x_male_exp_sal, y_male_exp_sal, label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'90thpercentile_pay_gap_exp.png'),bbox_inches='tight')
```
As expected, the graph above shows how closely correlated experience and salary are.
<br>There didn't seem to be a huge difference in this correlation between men and women, so it doesn't really explain why there were bigger pay differences in these higher percentiles.
<br>
<br> I decided to look at the breakdown of the population by experience, to shed some light on why their salaries may be different:
```
#Outputting graph of the male and female population distribution by years of experience.
#Count & Groupby ignores null values in YearsCoding
#YearsCoding is directly correlated with age, so imputing values will not preserve this relationship
female_exp = df_female.groupby('YearsCoding').YearsCoding.count()/len(df_female)
male_exp = df_male.groupby('YearsCoding').YearsCoding.count()/len(df_male)
#Ordering points for graph so it's in Experience ascending order
female_exp_sort = sorted(list(zip(female_exp.index, female_exp)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_exp_sort = sorted(list(zip(male_exp.index, male_exp)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_exp = [x[0] for x in female_exp_sort]
y_female_exp = [y[1] for y in female_exp_sort]
x_male_exp = [x[0] for x in male_exp_sort]
y_male_exp = [y[1] for y in male_exp_sort]
#Formatting and generating a graph
plt.title('Population distribution by experience')
plt.ylabel('Proportion')
plt.xlabel('Years Coding')
plt.xticks(rotation=90)
plt.plot(x_female_exp, y_female_exp, label = 'Female')
plt.plot(x_male_exp, y_male_exp, label = 'Male')
plt.legend(loc='upper right', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_exp_pop_dist.png'),bbox_inches='tight')
```
As can be seen, the Female population is skewed to the left, meaning that there is a significantly greater proportion of more junior coders, potentially explaining why there is a disparity in pay.
<br>
<br> To be sure that this is the case, I wanted to look at the difference in mean pay as well, to better understand the overall correlation between coding experience and salary.
```
#Calculating & Outputting graph of population distribution by years of experience.
#Mean function ignores null values for ConvertedSalary; Groupby ignores null values in YearsCoding
#Doesn't make sense to impute the values here, as Age and Years Coding are implicitly linked and
#imputing mean values for Salary wouldn't change our findings (as we are taking the mean)
female_sal_exp_mean = df_female.groupby('YearsCoding').ConvertedSalary.mean()
male_sal_exp_mean = df_male.groupby('YearsCoding').ConvertedSalary.mean()
#Ordering points for graph so it's in Experience ascending order
female_sal_exp_mean_sort = sorted(list(zip(female_sal_exp_mean.index, female_sal_exp_mean)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_sal_exp_mean_sort = sorted(list(zip(male_sal_exp_mean.index, male_sal_exp_mean)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_mean_sal_exp = [x[0] for x in female_sal_exp_mean_sort]
y_female_mean_sal_exp = [y[1] for y in female_sal_exp_mean_sort]
x_male_mean_sal_exp = [x[0] for x in male_sal_exp_mean_sort]
y_male_mean_sal_exp = [y[1] for y in male_sal_exp_mean_sort]
#Formatting and generating a graph
plt.title('Mean Pay by Experience')
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.plot(x_female_mean_sal_exp, y_female_mean_sal_exp, label = 'Female')
plt.plot(x_male_mean_sal_exp, y_male_mean_sal_exp, label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'MedianPay_by_Exp.png'),bbox_inches='tight')
```
In general, the correlation between experience and salary holds true, giving a good explanation for why women's salaries may be lower overall: there are proportionally more junior women in the population.
<br>
<br> However, there is an exception for the 24-29 years bracket. This could be put down to sample size, but it could also be symptomatic of issues women face around these years. Moreover, someone with 24-29 years of coding experience is likely to be around 50 years old. This led me to investigate a different challenge that women of this age may have faced....
<h2 id='part1'>Question 2: How does having children impact progression?</h2>
<br>
Typically, women have a longer absence from work due to maternity leave and undertake more of the care-giving role than men. I wanted to see if having children had a significant impact on their progression. One way of measuring progression is salary, which is what I have focused on below.
```
#Outputting graph to show female salary differences with and without children, by age
#We are only interested in looking at population who answered the dependency question and the salary question.
#Therefore, dropping values where people haven't answered these questions as it won't inform outcome
df_dep_no_null_f = df_female.dropna(subset=['Dependents','ConvertedSalary'],how='any')
df_dep_no_null_m = df_male.dropna(subset=['Dependents','ConvertedSalary'],how='any')
#Filtering for the ages at which children are most likely to have an impact on salary.
#Women tend to retire earlier than men and under 25s are unlikely to have children, potentially not have a salary either.
#Therefore, they have been removed
ages_for_children = ['25 - 34 years old','35 - 44 years old','45 - 54 years old']
df_dependents_f=df_dep_no_null_f[df_dep_no_null_f.Age.apply(lambda x: True if x in ages_for_children else False)]
df_dependents_m=df_dep_no_null_m[df_dep_no_null_m.Age.apply(lambda x: True if x in ages_for_children else False)]
#Finding average Salaries by age and by dependents status
#Groupby removes nulls for Age; since we want to find effects across age bands, these values have been dropped.
female_sal_series = df_dependents_f.groupby(['Dependents','Age']).ConvertedSalary.mean().sort_index()
male_sal_series = df_dependents_m.groupby(['Dependents','Age']).ConvertedSalary.mean().sort_index()
#Formatting and generating a graph
plt.plot(list(female_sal_series['Yes'].index), list(female_sal_series['Yes']), label='Female&Children')
plt.plot(list(female_sal_series['No'].index), list(female_sal_series['No']), label='Female&NoChildren')
plt.title("Mean Female Salary by Age")
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'FemalePay_by_age&dependents.png'),bbox_inches='tight')
```
As you can see, the graph above indicates having children has a significant impact on women's salaries, especially later in life.
<br>
<br> I wanted to evaluate how much of an impact it actually has, and how this is different from men.
```
#Calculating and outputting the relative differences in salaries with and without children
female_dep_cost = female_sal_series['Yes']/female_sal_series['No']
male_dep_cost = male_sal_series['Yes']/male_sal_series['No']
#Combining the results into one dataframe for output
df_dep_cost = pd.concat([pd.DataFrame(list(female_dep_cost), index = list(female_dep_cost.index), columns = ['Female']),
pd.DataFrame(list(male_dep_cost), index = list(male_dep_cost.index),columns=['Male'])],axis=1)
#Reformatting the data to be percentage change and in percentage format
df_dep_cost['Female'] = df_dep_cost['Female'].apply(lambda x: "{:.2%}".format(x-1))
df_dep_cost['Male'] = df_dep_cost['Male'].apply(lambda x: "{:.2%}".format(x-1))
df_dep_cost
```
Clearly, from the above, men's income is much more stable when it comes to having children. Moreover, the magnitude of the difference is significant: women can expect a large earnings hit if they have children.
It is worth noting that the stability __may__ be due to the larger male population size, but it's difficult to isolate this effect.
<h2 id='part1'>Question 3: Women in STEM… Is there really an obstacle??</h2>
<br>
Furthering from the differences in having children, I wanted to see if there were any other drivers of why older women may have a lower salary.
<br>
<br> I started to think about generational differences and how women were less likely to have higher education, but also less likely to pursue STEM subjects. In recent years, there have been many initiatives to increase women's participation in STEM subjects, and I wanted to see what the impact of this was. I started with interest in Computer Science / technical degrees, as the dataset relates to developers:
```
#Calculating total proportion of Computer Science related degrees for each gender
#Adding flags for technical degrees. Looking at the data, people with the Majors below
#would be better equipped for a career in computer science / a developer role
technical_degree = ['Computer science, computer engineering, or software engineering',
'Information systems, information technology, or system administration',
'Web development or web design']
#Dropping entries with NaNs for undergrad major, otherwise we would be assuming they were 'Non-Technical'
#for all nulls in the population which would skew the results. Moreover, removing nulls also removes those who didn't
#go to university. Since we are considering those with technical degrees, we want to remove these people from our population
df_female_grad = df_female.dropna(subset=['UndergradMajor'],axis=0)
df_male_grad = df_male.dropna(subset=['UndergradMajor'],axis=0)
df_female_grad['TechnicalDegree'] = df_female_grad.UndergradMajor\
.apply(lambda x : 'Technical' if x in technical_degree else 'Non-Technical')
df_male_grad['TechnicalDegree'] = df_male_grad.UndergradMajor\
.apply(lambda x : 'Technical' if x in technical_degree else 'Non-Technical')
#Finding the number of technical vs non-technical people in population by Gender
female_tech_bd = df_female_grad.groupby('TechnicalDegree').TechnicalDegree.count()/len(df_female_grad)
male_tech_bd = df_male_grad.groupby('TechnicalDegree').TechnicalDegree.count()/len(df_male_grad)
#Formatting and printing the output
print('Women with a Computer Science related degree: ' + "{:.2%}".format(female_tech_bd['Technical']))
print('Men with a Computer Science related degree: ' + "{:.2%}".format(male_tech_bd['Technical']))
```
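One pitfall worth flagging when building lists like `technical_degree` above: Python concatenates adjacent string literals, so a missing comma between entries silently merges them into a single element. A quick illustration:

```
#Adjacent string literals concatenate implicitly, so a missing comma
#silently merges two list entries into one element.
merged = ['Information systems'
          'Web development']           # one element: 'Information systemsWeb development'
correct = ['Information systems',
           'Web development']          # two elements
print(len(merged), len(correct))
```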
From a first glance at the data, it is clear that there are more men with technical degrees, which indicates a bias in their education towards technical fields.
<br>
<br> Exploring this further...
```
#Outputting graph showing the age distribution of graduates.
#Filtering list to only Technical people to show age distribution of this
df_female_tech = df_female_grad[df_female_grad['TechnicalDegree']=='Technical']
df_male_tech = df_male_grad[df_male_grad['TechnicalDegree']=='Technical']
#Filtering age to that of graduate ages, as we are considering Undergraduate Majors
over_35 = ['35 - 44 years old','45 - 54 years old','55 - 64 years old','65 years or older']
under_25 = ['25 - 34 years old','18 - 24 years old']
graduate_ages = under_25 + over_35
#Groupby removes nulls for Age, since we want to find distribution of age bands, these values have been dropped.
#Imputing these values may skew the distribution
female_tech_bd_age = df_female_tech.groupby('Age')['Age'].count()[graduate_ages].sort_index()/len(df_female)
male_tech_bd_age = df_male_tech.groupby('Age')['Age'].count()[graduate_ages].sort_index()/len(df_male)
##Printing statistics for reference in blog, age and gender differences in population
print('Women, 35 and over: ' + "{:.2%}".format(female_tech_bd_age[over_35].sum()))
print('Men, 35 and over: ' + "{:.2%}".format(male_tech_bd_age[over_35].sum()))
print('Women, 18 to 34: ' + "{:.2%}".format(female_tech_bd_age[under_25].sum()))
print('Men, 18 to 34: ' + "{:.2%}".format(male_tech_bd_age[under_25].sum()))
#Formatting and generating a graph
plt.plot(list(female_tech_bd_age.index), list(female_tech_bd_age), label='Female')
plt.plot(list(male_tech_bd_age.index), list(male_tech_bd_age), label='Male')
plt.title("Tech Graduates by Age")
plt.ylabel('Proportion')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'Tech_grads_by_age.png'),bbox_inches='tight')
```
It is clear that men favoured technical subjects from age 25 and above, putting women at a disadvantage for developer roles. However, it can also be seen that for the youngest age bracket, there has been a switch in the proportion of men and women exploring these options, with women now pursuing technical subjects more than men.
<br>
<br> I wanted to have a deeper look into Undergrad Majors because many people in the survey did not have these degrees (i.e. 35% of men, 45% of women). STEM subjects are closely linked to developer roles and I wanted a wider understanding of educational bias.
```
#Calculating total proportion of STEM related degrees for each gender
#Adding a flag for STEM subjects and repeating the analysis above
#Note: still only taking the graduate population, as above.
non_STEM = ['Fine arts or performing arts (ex. graphic design, music, studio art)',
'A social science (ex. anthropology, psychology, political science)',
'A humanities discipline (ex. literature, history, philosophy)',
'A business discipline (ex. accounting, finance, marketing)',
'I never declared a major']
df_female_grad['STEM'] = df_female_grad.UndergradMajor\
.apply(lambda x : 'Non-STEM' if x in non_STEM else 'STEM')
df_male_grad['STEM'] = df_male_grad.UndergradMajor\
.apply(lambda x : 'Non-STEM' if x in non_STEM else 'STEM')
##Printing statistics for reference in blog, gender &STEM differences in population
print("Women in STEM: {:.2%}".format(df_female_grad.groupby('STEM').STEM.count()['STEM']/len(df_female_grad)))
print("Men in STEM: {:.2%}".format(df_male_grad.groupby('STEM').STEM.count()['STEM']/len(df_male_grad)))
```
From a first glance at the data, it is clear that there are more men with STEM degrees, which indicates a bias in their education towards technical fields.
<br>
<br> Exploring this further...
```
#Calculating the population distribution of STEM related degrees for each gender by age
#Filtering to STEM graduates only, as we want to look at their demographics.
df_female_STEM = df_female_grad[df_female_grad['STEM']=='STEM']
df_male_STEM = df_male_grad[df_male_grad['STEM']=='STEM']
#Only considering working professionals as they are most likely to have degrees and be in employment
#People below these ages are unlikely to have a degree, so including them would be nonsensical.
working_professionals = ['18 - 24 years old','25 - 34 years old','35 - 44 years old',
'45 - 54 years old','55 - 64 years old']
#Groupby and count remove nulls from the calculation. We don't want to impute these values as it may skew the population distribution
female_STEM_bd_age = df_female_STEM.groupby('Age')['Age'].count()[working_professionals]/len(df_female_STEM)
male_STEM_bd_age = df_male_STEM.groupby('Age')['Age'].count()[working_professionals]/len(df_male_STEM)
#Combining data together into one DataFrame
df_STEM_bd = pd.concat([pd.DataFrame(list(female_STEM_bd_age), index = list(female_STEM_bd_age.index), columns = ['Female']),
pd.DataFrame(list(male_STEM_bd_age), index = list(male_STEM_bd_age.index),columns=['Male'])],axis=1)
#Reformatting data into percentages to 2dp.
df_STEM_bd['Female'] = df_STEM_bd['Female'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd['Male'] = df_STEM_bd['Male'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd
```
Looking at the population distribution of STEM graduates, this information does show a preference among the younger generation for these subjects. __However__, this data is skewed by the fact that the majority of respondents were in the lower age ranges, meaning it doesn't give us as much information as initially thought.
<br>
<br> As a result, looking at the breakdowns of the population for each age group would be more indicative.
```
#Calculating the STEM percentage for EACH age group, by gender
#Groupby removes nulls for Age; since we want to find STEM distributions for age bands, we don't want to impute these values.
#Otherwise, it may skew the distribution
STEM_count_f = df_female_grad.groupby(['STEM','Age']).STEM.count()['STEM'][working_professionals]
STEM_count_f_total = df_female_grad.groupby('Age').STEM.count()[working_professionals]
STEM_count_m = df_male_grad.groupby(['STEM','Age']).STEM.count()['STEM'][working_professionals]
STEM_count_m_total = df_male_grad.groupby('Age').STEM.count()[working_professionals]
##Calculating the STEM population percentage by age
STEM_bd_female = STEM_count_f/STEM_count_f_total
STEM_bd_male = STEM_count_m/STEM_count_m_total
#Combining data together into one DataFrame
df_STEM_bd_2 = pd.concat([pd.DataFrame(list(STEM_bd_female), index = list(STEM_bd_female.index), columns = ['Female']),
pd.DataFrame(list(STEM_bd_male), index = list(STEM_bd_male.index),columns=['Male'])],axis=1)
#Reformatting data into percentages to 2dp.
df_STEM_bd_2['Female'] = df_STEM_bd_2['Female'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd_2['Male'] = df_STEM_bd_2['Male'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd_2
```
This output gives us a lot more information about the relationship between STEM and age because it isn't skewed by the population. <br>
<br> It is now clear that men are a lot more likely to pursue STEM subjects than women, meaning they have an advantage in Developer-type roles, which often require skills from these subjects.
<br>
<br>
However, it is clear that there's a generational bias in education which is slowly being rectified. More and more women are pursuing these fields and overcoming the obstacles they once may have faced.
<br>
<br> I wanted to have a final look at how these differences really impacted women's progression/salaries.
```
#Outputting a graph of women's salaries by age with and without a degree in a STEM area.
#Groupby removes nulls for Age. Since we want to find STEM salaries by age bands, we don't want to impute these values.
#Otherwise, it may skew the distribution. Moreover, imputing salaries with the mean wouldn't have an impact, as we're trying to find the mean
df_STEM_age_f = df_female_grad.groupby(['STEM','Age']).ConvertedSalary.mean()['STEM'][working_professionals]
df_NSTEM_age_f = df_female_grad.groupby(['STEM','Age']).ConvertedSalary.mean()['Non-STEM'][working_professionals]
#Formatting and generating a graph
plt.plot(list(df_STEM_age_f.index),list(df_STEM_age_f), label='Female & STEM')
plt.plot(list(df_NSTEM_age_f.index),list(df_NSTEM_age_f), label='Female & Non-STEM')
plt.title("Women in STEM\nSalary Comparison by Age")
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'Female_STEM_Salary_by_age.png'),bbox_inches='tight')
```
As we can clearly see above, STEM salaries have historically yielded higher-paying professions, meaning that women's salary progression has been a harder battle due to the bias in their education.
However, there is optimism for the future, as the trends in educational bias indicate that the gap between men and women is reducing and we are moving towards a more equal society.
<h2 id='part1'>Additional material: Showcase Data prep, imputing values and ML techniques</h2>
<br> I tried to incorporate machine learning / sklearn algorithms into my blog, however, the models produced did not give me sensible results. As a result, I've produced a framework for what I would have done if a model would have given me a sensible output.
```
#Showcase of data prep and evaluation for a machine learning (ML) model
#Splitting models into male and female since the data is skewed and is overfitting on male attributes.
#Converting variables which contain categorical data, into categorical variables (JobSatisfaction has been done above)
#Only doing this for one dataframe, as this is used as a means to convert to floats for the ML algorithm
df_male['CareerSatisfaction']=df_male['CareerSatisfaction']\
.astype(pd.api.types.CategoricalDtype(
categories=['Extremely dissatisfied','Moderately dissatisfied',
'Slightly dissatisfied','Neither satisfied nor dissatisfied',
'Slightly satisfied','Moderately satisfied','Extremely satisfied'],
ordered=True))
df_male['CompanySize']=df_male['CompanySize']\
.astype(pd.api.types.CategoricalDtype(
categories=['Fewer than 10 employees','10 to 19 employees',
'20 to 99 employees','100 to 499 employees',
'500 to 999 employees','1,000 to 4,999 employees',
'5,000 to 9,999 employees','10,000 or more employees'],
ordered=True))
df_male['Age']=df_male['Age']\
.astype(pd.api.types.CategoricalDtype(
categories=['Under 18 years old','18 - 24 years old', '25 - 34 years old',
'45 - 54 years old','35 - 44 years old', '55 - 64 years old',
'65 years or older'],
ordered=True))
#Dropping the Gender column as it is not needed, since we are creating separate male and female models
df_male = df_male.drop(['Gender'], axis=1)
df_female = df_female.drop(['Gender'], axis=1)
#Adding flags for STEM and technical subjects, creating features from the analysis above (which showed a correlation between STEM & Salary)
#Dropping UndergradMajor afterwards, as these features are engineered from it.
#Making sure to distinguish the nulls so they aren't all classified as the wrong thing.
df_male['STEM'] = df_male['UndergradMajor'].apply(lambda x : [0] if x in non_STEM else [1,1 if pd.isna(x) else 0])
df_female['STEM'] = df_female['UndergradMajor'].apply(lambda x : [0] if x in non_STEM else [1,1 if pd.isna(x) else 0])
df_male['STEM'] = df_male['STEM'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_female['STEM'] = df_female['STEM'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_male['Technical'] = df_male['UndergradMajor'].apply(lambda x : [1] if x in technical_degree else [0, 2 if pd.isna(x) else 0])
df_female['Technical'] = df_female['UndergradMajor'].apply(lambda x : [1] if x in technical_degree else [0, 2 if pd.isna(x) else 0])
df_male['Technical'] = df_male['Technical'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_female['Technical'] = df_female['Technical'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_male = df_male.drop(['UndergradMajor'],axis=1)
df_female = df_female.drop(['UndergradMajor'],axis=1)
#Mapping 'Dependents' column to a flag, indicating whether or not the individual has children/dependents
dependent_mapping = {'Yes' : 1, 'No' : 0}
df_male['Dependents'] = df_male['Dependents'].apply(lambda x : dependent_mapping[x] if pd.isna(x) == False else x)
df_female['Dependents'] = df_female['Dependents'].apply(lambda x : dependent_mapping[x] if pd.isna(x) == False else x)
#Creating ordered lists of categorical variables so that they can be indexed, converting each original column into its index
#I.e. creating a numbered scale, for example JobSatisfaction (1 being extremely dissatisfied to 7, extremely satisfied)
ordered_satisfaction = list(df_male.groupby('CareerSatisfaction').CareerSatisfaction.count().sort_index().index)
ordered_size = list(df_male.groupby('CompanySize').CompanySize.count().sort_index().index)
ordered_age = list(df_male.groupby('Age').CompanySize.count().sort_index().index)
df_male['CompanySize'] = df_male['CompanySize']\
.apply(lambda x : ordered_size.index(x) if pd.isna(x) == False else np.nan)
df_male['CareerSatisfaction'] = df_male['CareerSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_male['JobSatisfaction'] = df_male['JobSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_male['Age'] = df_male['Age']\
.apply(lambda x : ordered_age.index(x) if pd.isna(x) == False else np.nan)
df_female['CompanySize'] = df_female['CompanySize']\
.apply(lambda x : ordered_size.index(x) if pd.isna(x) == False else np.nan)
df_female['CareerSatisfaction'] = df_female['CareerSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_female['JobSatisfaction'] = df_female['JobSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_female['Age'] = df_female['Age']\
.apply(lambda x : ordered_age.index(x) if pd.isna(x) == False else np.nan)
#Showcasing another way to convert Age/YearsCoding columns to ML compatible inputs
#Taking the middle of the bands (i.e 0-2 years gets mapped to 1 year)
df_male['YearsCoding'] = df_male['YearsCoding']\
.apply(lambda x : sum([float(y) for y in x.split()[0].split('-')])/len(x.split()[0].split('-')) if pd.isna(x) == False else np.nan)
df_female['YearsCoding'] = df_female['YearsCoding']\
.apply(lambda x : sum([float(y) for y in x.split()[0].split('-')])/len(x.split()[0].split('-')) if pd.isna(x) == False else np.nan)
#Splitting the Country column into 11 columns: 1 for each of the top 10 most common countries and 1 for Other.
#Placing a flag in each column, so if the country is United States, a 1 goes in the United States column and 0 elsewhere
for frame in [df_male, df_female]:
    top_n_countries = list(frame.groupby('Country')['Country'].count().sort_values(ascending=False).nlargest(10).index)
    frame["Country_Other"] = frame['Country'].apply(lambda x: 0 if x in top_n_countries else 1)
    for value in top_n_countries:
        frame["Country_" + value] = frame['Country'].apply(lambda x: 1 if x == value else 0)
#Dropping the original Country column as the features have been extracted in an ML-friendly manner
df_male = df_male.drop(['Country'],axis=1)
df_female = df_female.drop(['Country'],axis=1)
```
The data has now been processed into an ML-friendly format, except for the existence of nulls.
```
#Highlighting the nulls in each field
print('Male null %:\n',df_male.isnull().mean())
print('Female null %:\n',df_female.isnull().mean())
```
The above shows us what null values there are. A large proportion of the data would be unusable if we were to just drop all NA values. However, it wouldn't make sense to impute these values either, as it would result in information loss.
<br>
<br> This could be as high as 34.8% (ConvertedSalary, Female), but that column would need to be dropped anyway as it is what we are fitting to. However, over 20% of the Company Size data is null, and we can salvage some of the information from the rows with a null in Company Size.
```
#Dropping rows with a null ConvertedSalary.
#ConvertedSalary is the column we are fitting against, so it cannot be null, which is why we're dropping these rows.
df_male = df_male.dropna(subset=['ConvertedSalary'], axis=0)
df_female = df_female.dropna(subset=['ConvertedSalary'], axis=0)
#Converting categorical datatypes to float so we can allocate null values
df_male['CareerSatisfaction'] = df_male['CareerSatisfaction'].astype(str).astype(float)
df_male['JobSatisfaction'] = df_male['JobSatisfaction'].astype(str).astype(float)
df_male['CompanySize'] = df_male['CompanySize'].astype(str).astype(float)
df_male['Age'] = df_male['Age'].astype(str).astype(float)
df_female['CareerSatisfaction'] = df_female['CareerSatisfaction'].astype(str).astype(float)
df_female['JobSatisfaction'] = df_female['JobSatisfaction'].astype(str).astype(float)
df_female['Age'] = df_female['Age'].astype(str).astype(float)
#It wouldn't make sense to impute any of the values in these columns as it would confuse the correlations between variables.
#Using decision trees (random forest), putting in a negative value should let the model treat these null values separately
df_male = df_male.fillna(-1)
df_female = df_female.fillna(-1)
#Highlighting the nulls in each field
print('Male null %:\n',df_male.isnull().mean())
print('Female null %:\n',df_female.isnull().mean())
```
As you can now see, there are no more null values, so we can now fit a model to make predictions.
Since we want to predict salary, we will need a regressor, as the target is continuous.
```
#Splitting the data into features and the variable we want to predict
X_male = df_male.dropna().drop(['ConvertedSalary'],axis=1)
y_male = df_male.dropna()['ConvertedSalary']
X_female = df_female.dropna().drop(['ConvertedSalary'],axis=1)
y_female = df_female.dropna()['ConvertedSalary']
#Splitting data into train and test data
X_train_m, X_test_m, y_train_m, y_test_m = train_test_split(X_male,y_male,test_size=0.2,random_state=42)
X_train_f, X_test_f, y_train_f, y_test_f = train_test_split(X_female,y_female,test_size=0.2,random_state=42)
#Training the random forest model on the training set
clf_m = RandomForestRegressor(n_estimators=100)
clf_f = RandomForestRegressor(n_estimators=100)
clf_m.fit(X_train_m,y_train_m)
clf_f.fit(X_train_f,y_train_f)
#Making predictions off the model
y_pred_test_m=clf_m.predict(X_test_m)
y_pred_train_m=clf_m.predict(X_train_m)
y_pred_test_f=clf_f.predict(X_test_f)
y_pred_train_f=clf_f.predict(X_train_f)
#Evaluating the models' performance
print("Male Test score: ",r2_score(y_test_m, y_pred_test_m))
print("Male Train score: ",r2_score(y_train_m, y_pred_train_m))
print("Female Test score: ",r2_score(y_test_f, y_pred_test_f))
print("Female Train score: ",r2_score(y_train_f, y_pred_train_f))
```
The above is a prime example of overfitting: the model picks up correlations within the training dataset that improve its predictions there, but these do not generalise well to new data. This is why the test score is so low while the training score is much higher.
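As a side note, the same train/test gap can be reproduced on purely synthetic data: a random forest fit on noise memorises the training set (high train R²) but cannot generalise (near-zero or negative test R²). This sketch uses made-up data, not the survey data above, and is only meant to illustrate the pattern.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.rand(500, 10)  # 10 meaningless features
y = rng.rand(500)      # target completely unrelated to the features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=100, random_state=42).fit(X_tr, y_tr)

train_r2 = r2_score(y_tr, rf.predict(X_tr))
test_r2 = r2_score(y_te, rf.predict(X_te))
print(train_r2, test_r2)  # train score far exceeds test score
```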
```
print("These are the male feature importances, ordered by importance:")
sorted(list(zip(X_male.columns,clf_m.feature_importances_)),key = lambda x:x[1],reverse=True)
print("These are the female feature importances, ordered by importance:")
sorted(list(zip(X_female.columns,clf_f.feature_importances_)),key = lambda x:x[1],reverse=True)
```
[Look Up](https://www.luogu.org/problemnew/show/P2947). Given an array, for each number find the first number to its right that is larger than it; if no such number exists, record 0.
Approach: a classic application of the monotonic stack. Scan the array from right to left, maintaining a stack that always keeps only elements larger than the current number; pop every smaller one.
```
def LookUp(nums):
    n = len(nums)
    res = [0] * n
    s = list()
    for idx in range(n - 1, -1, -1):
        # First pop every number in the stack that is not larger than the current one
        while s and nums[s[-1]] <= nums[idx]:
            s.pop()
        # The remaining elements in the stack are all larger than the current number
        if not s:  # the stack is empty
            res[idx] = 0
        else:  # the stack is not empty; the nearest larger element is on top
            res[idx] = s[-1]
        s.append(idx)
    return res
```
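A quick sanity check on hypothetical inputs (the function is restated here so the cell runs standalone); note that `LookUp` returns the *index* of the nearest strictly larger element to the right, or 0 if there is none:

```python
def LookUp(nums):
    n = len(nums)
    res = [0] * n
    s = []
    for idx in range(n - 1, -1, -1):
        # pop everything not larger than the current number
        while s and nums[s[-1]] <= nums[idx]:
            s.pop()
        res[idx] = s[-1] if s else 0
        s.append(idx)
    return res

print(LookUp([3, 1, 4, 1, 5]))  # [2, 2, 4, 4, 0]
```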
[Remove K Digits](https://leetcode.com/problems/remove-k-digits/). Given a numeric string, delete $k$ of its digits so that the resulting number is as small as possible.
Approach: monotonic stack. Push the digits in order; clearly, the smaller the digit in a high position the better, so when a digit is smaller than the stack top, pop the stack. Note that the number of pops must not exceed $k$. Not hard, but the implementation logic is a little involved.
```
def removeKdigits(num: str, k: int) -> str:
    n = len(num)
    if k >= n:
        return '0'
    s = list()
    cnt = 0
    for ch in num:
        if cnt < k:  # delete at most k times
            if not s:
                s.append(ch)
                continue
            while s and int(ch) < int(s[-1]) and cnt < k:
                s.pop()
                cnt += 1
            s.append(ch)
        else:
            s.append(ch)
    return str(int(''.join(s[:n-k])))
```
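A couple of checks on the standard LeetCode examples (the function is restated so the cell is self-contained):

```python
def removeKdigits(num: str, k: int) -> str:
    n = len(num)
    if k >= n:
        return '0'
    s, cnt = [], 0
    for ch in num:
        # pop larger stack tops while deletions remain
        while s and cnt < k and int(ch) < int(s[-1]):
            s.pop()
            cnt += 1
        s.append(ch)
    # keep the first n-k digits; int() strips leading zeros
    return str(int(''.join(s[:n - k])))

print(removeKdigits("1432219", 3))  # 1219
print(removeKdigits("10200", 1))    # 200
```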
A Yuanfudao 2019 written-exam problem. Given $C$ strings in compressed form, decode each back to its original string.
Approach: the input strings may or may not contain brackets; both cases need to be handled.
```
import sys

def func(C):
    def reverse(s):
        s = list(s)
        i, j = 0, len(s) - 1
        while i < j:
            s[i], s[j] = s[j], s[i]
            i += 1
            j -= 1
        return ''.join(s)
    for _ in range(C):
        s = sys.stdin.readline().strip()
        n = len(s)
        stack = list()
        num = ''
        idx = 0
        while idx < n:
            # push letters and brackets onto the stack
            while idx < n and not s[idx].isdigit():
                stack.append(s[idx])
                idx += 1
            # extract the repeat count
            while idx < n and s[idx].isdigit():
                num += s[idx]
                idx += 1
            num = int(num) if num != '' else 1
            sub_s = ''  # the compressed substring
            if stack[-1] == ')':  # brackets need special handling
                left_cnt = 0
                while stack and stack[-1] != '(':
                    while stack[-1] == ')':
                        left_cnt += 1
                        stack.pop()
                    while stack and stack[-1] != '(':
                        sub_s += stack.pop()
                for _ in range(left_cnt):  # pop the matching number of left brackets
                    stack.pop()
            else:  # without brackets, just take a single character
                sub_s = stack.pop()
            sub_s = reverse(sub_s) * num
            num = ''
            for ch in sub_s:
                stack.append(ch)
        res = ''
        while stack[-1] == ')':
            stack.pop()
        while stack and stack[-1] != '(':
            if stack[-1] == ')':
                stack.pop()
                continue
            res += stack.pop()
        print(reverse(res))
```
[Decode String](https://leetcode.com/problems/decode-string/). Given an encoded string where characters are wrapped in square brackets and a preceding number gives the repeat count of the bracketed characters, restore the full string according to this rule.
Approach: a pure logic problem, somewhat tedious to write. On meeting a right bracket, pop the stack until the matching left bracket — this part is the compressed substring; then pop all the digits on the stack to get the repeat count.
```
def decodeString(s: str) -> str:
    stack = list()
    for ch in s:
        if ch != ']':
            stack.append(ch)
        else:
            # 1. pop the compressed substring
            enc_s = str()
            while stack[-1] != '[':
                enc_s = stack.pop() + enc_s
            _ = stack.pop()
            # 2. pop the repeat count
            num = str()
            while stack and stack[-1].isdigit():
                num = stack.pop() + num
            num = int(num)
            # 3. restore
            dec_s = enc_s * num
            for c in dec_s:
                stack.append(c)
    return ''.join(stack)
decodeString("3[a]2[bc]")
```
[Min Stack](https://leetcode.com/problems/min-stack/). Design a stack whose ```push()```, ```top()```, ```pop()```, and ```min()``` all run in $O(1)$ time.
Approach: keep an auxiliary stack that only holds minimum values. When popping, pop the auxiliary stack only if its top equals the main stack's top.
```
class MinStack:
    def __init__(self):
        """
        initialize your data structure here.
        """
        self.s = list()
        self.s_min = list()

    def push(self, x: int) -> None:
        self.s.append(x)
        if not self.s_min or x <= self.s_min[-1]:
            self.s_min.append(x)

    def pop(self) -> None:
        if self.s_min[-1] == self.s[-1]:
            self.s_min.pop()
        self.s.pop()

    def top(self) -> int:
        return self.s[-1]

    def getMin(self) -> int:
        return self.s_min[-1]
```
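A short usage check on the classic LeetCode example (the class is restated compactly so the cell is self-contained):

```python
class MinStack:
    def __init__(self):
        self.s = []
        self.s_min = []  # auxiliary stack of running minimums

    def push(self, x):
        self.s.append(x)
        if not self.s_min or x <= self.s_min[-1]:
            self.s_min.append(x)

    def pop(self):
        # only pop the auxiliary stack when the tops match
        if self.s_min[-1] == self.s[-1]:
            self.s_min.pop()
        self.s.pop()

    def top(self):
        return self.s[-1]

    def getMin(self):
        return self.s_min[-1]

ms = MinStack()
for x in (-2, 0, -3):
    ms.push(x)
print(ms.getMin())  # -3
ms.pop()
print(ms.top())     # 0
print(ms.getMin())  # -2
```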
[Dota2 Senate](https://leetcode.com/problems/dota2-senate/). Given a string containing only 'R' and 'D', representing the attack order of two factions. Each attacking member eliminates one member of the other faction; determine which faction wins.
Approach: set up two queues and enqueue the members of both factions, labelling each member with an index — the smaller the index, the earlier the turn. Repeatedly dequeue one member from each queue for a face-off; only the winner re-enters its queue, with an updated index.
```
def predictPartyVictory(senate: str) -> str:
    q_D, q_R = list(), list()
    n = len(senate)
    for idx, ch in enumerate(senate):
        if ch == 'R':
            q_R.append(idx)
        else:
            q_D.append(idx)
    while q_D and q_R:
        idx_D, idx_R = q_D.pop(0), q_R.pop(0)
        if idx_D < idx_R:
            q_D.append(idx_D + n)
        else:
            q_R.append(idx_R + n)
    return "Radiant" if q_R else 'Dire'
```
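Two checks on the standard LeetCode examples (the function is restated so the cell runs on its own):

```python
def predictPartyVictory(senate: str) -> str:
    q_D, q_R = [], []
    n = len(senate)
    for idx, ch in enumerate(senate):
        (q_R if ch == 'R' else q_D).append(idx)
    while q_D and q_R:
        idx_D, idx_R = q_D.pop(0), q_R.pop(0)
        # the earlier index wins and re-enters the queue for the next round
        if idx_D < idx_R:
            q_D.append(idx_D + n)
        else:
            q_R.append(idx_R + n)
    return "Radiant" if q_R else "Dire"

print(predictPartyVictory("RD"))   # Radiant
print(predictPartyVictory("RDD"))  # Dire
```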
# Loading important libraries
```
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Dropout, Activation, Input, Embedding, concatenate, Flatten
from tensorflow.keras.optimizers import Adam
```
# Loading the data
```
train_subset = pd.read_csv("/content/drive/MyDrive/FinalHack Datasets/train_subset.csv",parse_dates = ['date'])
test = pd.read_csv("/content/drive/MyDrive/FinalHack Datasets/test.csv",parse_dates = ['date'])
train_subset['Month'] =pd.DatetimeIndex(train_subset['date']).month.astype('int8')
train_subset['Day'] =pd.DatetimeIndex(train_subset['date']).day.astype('int8')
train_subset['Week'] =pd.DatetimeIndex(train_subset['date']).weekday.astype('int8')
train_subset = train_subset.drop(['Unnamed: 0','date'],axis = 1)
train_subset.head()
test['Month'] =pd.DatetimeIndex(test['date']).month.astype('int8')
test['Day'] =pd.DatetimeIndex(test['date']).day.astype('int8')
test['Week'] =pd.DatetimeIndex(test['date']).weekday.astype('int8')
ID = test['id']
test = test.drop(['id','date'],axis = 1)
test.head()
### Removing negative values in train data
train_subset = train_subset[(train_subset['unit_sales']>0)]
### Removing outliers
Q1 = train_subset.unit_sales.quantile(0.25)
Q3 = train_subset.unit_sales.quantile(0.75)
print(Q1,Q3)
IQR = Q3 - Q1
print(IQR)
lower_limit = Q1 - 1.5*IQR
upper_limit = Q3 + 1.5*IQR
print( lower_limit,upper_limit)
train_subset = train_subset[(train_subset.unit_sales < upper_limit)]
## Label Encoding
from sklearn import preprocessing
def df_lbl_enc(df):
    for c in df.columns:
        if df[c].dtype == 'object':
            lbl = preprocessing.LabelEncoder()
            df[c] = lbl.fit_transform(df[c])
            print(c)
    return df
train_subset = df_lbl_enc(train_subset)
X_test = df_lbl_enc(test)
from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
train_subset['onpromotion']= lb.fit_transform(train_subset['onpromotion'])
X_test['onpromotion']= lb.fit_transform(X_test['onpromotion'])
train_subset.head()
X_test.head()
#cat_cols = ['locationId','item_id','onpromotion','category_of_item','class','Month','Day','Week']
X_train = train_subset.drop(['unit_sales'], axis = 1)
y_train = train_subset['unit_sales'].values
X_train
y_train = np.log1p(y_train)
X_train[['Month','Day']] = X_train[['Month','Day']] - 1
X_test
X_test[['Month','Day']] = X_test[['Month','Day']] - 1
```
# Getting unique levels for each categorical feature
```
# Train Data Attributes
loc_attr = X_train.locationId.values
#item_attr = X_train.item_id.values
onpromotion_attr = X_train.onpromotion.values
#cat_item_attr = X_train.category_of_item.values
#class_attr = X_train['class'].values
month_attr = X_train.Month.values
day_attr = X_train.Day.values
week_attr = X_train.Week.values
# Test Data Attributes
test_loc_attr = X_test.locationId.values
#item_attr = X_train.item_id.values
test_onpromotion_attr = X_test.onpromotion.values
#test_cat_item_attr = X_test.category_of_item.values
#test_class_attr = X_test['class'].values
test_month_attr = X_test.Month.values
test_day_attr = X_test.Day.values
test_week_attr = X_test.Week.values
loc_attr_level = np.size(np.unique(loc_attr, return_counts=True)[0])
#item_attr_level = np.size(np.unique(item_attr, return_counts=True)[0])
onpromotion_attr_level = np.size(np.unique(onpromotion_attr, return_counts=True)[0])
#cat_item_attr_level = np.size(np.unique(cat_item_attr, return_counts=True)[0])
#class_attr_level = np.size(np.unique(class_attr, return_counts=True)[0])
month_attr_level = np.size(np.unique(month_attr, return_counts=True)[0])
day_attr_level = np.size(np.unique(day_attr, return_counts=True)[0])
week_attr_level = np.size(np.unique(week_attr, return_counts=True)[0])
```
Categorical Embedding for locationId
```
loc_input = Input(shape=(1, ), name="loc")
loc_embed = Embedding(input_dim=loc_attr_level, output_dim=5,)(loc_input)
```
Categorical Embedding for item_id
```
#item_input = Input(shape=(1, ), name="item")
#item_embed = Embedding(input_dim=item_attr_level, output_dim=5,)(item_input)
```
Categorical Embedding for onpromotion
```
onpromo_input = Input(shape=(1, ), name="onpromo")
onpromo_embed = Embedding(input_dim=onpromotion_attr_level, output_dim=2,)(onpromo_input)
```
Categorical Embedding for category of items
```
#cat_item_input = Input(shape=(1, ), name="cat_item")
#cat_item_embed = Embedding(input_dim=cat_item_attr_level, output_dim=5,)(cat_item_input)
```
Categorical Embedding for class
```
#class_input = Input(shape=(1, ), name="class")
#class_embed = Embedding(input_dim=class_attr_level, output_dim=5,)(class_input)
```
Categorical Embedding for month
```
month_input = Input(shape=(1, ), name="month")
month_embed = Embedding(input_dim=month_attr_level, output_dim=5,)(month_input)
```
Categorical Embedding for day
```
day_input = Input(shape=(1, ), name="day")
day_embed = Embedding(input_dim=day_attr_level, output_dim=5,)(day_input)
```
Categorical Embedding for week
```
week_input = Input(shape=(1, ), name="week")
week_embed = Embedding(input_dim=week_attr_level, output_dim=5,)(week_input)
```
Merging and flattening
```
merge_emb = concatenate([loc_embed,onpromo_embed,month_embed,day_embed,week_embed])
merge_emb_flat = Flatten()(merge_emb)
merged_layer = Dense(12, activation= 'relu')(merge_emb_flat)
output_layer = Dense(1, activation='linear')(merged_layer)
model = Model(inputs=[loc_input, onpromo_input,month_input,day_input,week_input], outputs=output_layer)
model.summary()
model.compile(loss="mean_absolute_percentage_error", optimizer='adam', metrics=['mape'])
model.fit([loc_attr,onpromotion_attr,month_attr,day_attr,week_attr],y=y_train, epochs=10, batch_size = 1024)
del train_subset
del test
prediction = model.predict([test_loc_attr,test_onpromotion_attr,test_month_attr ,test_day_attr ,test_week_attr])
test_prediction = np.expm1(prediction)
test_prediction
res = pd.DataFrame(test_prediction)
ID = pd.DataFrame(ID)
res = res.rename(columns={res.columns[0]: 'unit_sales'})
gb = pd.concat([ID,res], axis = 1)
gb['unit_sales'] = gb['unit_sales'].round(2)
gb
gb.to_csv("Categorical Embeddings.csv",index= False)
```
MAPE for the test data is 64.19.
```
%load_ext autoreload
%autoreload 2
from common import *
RESULT_JSON = "/Users/law/repos/viper/results/breakdown/breakdown_revision.json"
from collections import defaultdict
runs = defaultdict(list)
BMS = get_all_runs(RESULT_JSON)
# pprint(BMS)
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
TIMERS = ("lock", "pmem", "map")
def get_sum_time(bm, op_type):
    total_ns = 0
    for timer in TIMERS:
        total_ns += bm[f"{op_type}-{timer}"]
    return total_ns
st_bm = BMS[0]
at_bm = BMS[1]
st_insert = get_sum_time(BMS[0], "insert")
st_get = get_sum_time(BMS[0], "get")
st_update = get_sum_time(BMS[0], "update")
st_delete = get_sum_time(BMS[0], "delete")
at_insert = get_sum_time(BMS[1], "insert")
at_get = get_sum_time(BMS[1], "get")
at_update = get_sum_time(BMS[1], "update")
at_delete = get_sum_time(BMS[1], "delete")
bar_width = 0.7
st_pos = np.arange(4)
al_pos = [x + bar_width for x in st_pos]
POS = [st_pos, al_pos]
# multi thread op time in us
insert_op_time = 2.35
get_op_time = 1.11
update_op_time = 2.61
delete_op_time = 2.83
fig, ax = plt.subplots(1, 1, figsize=(2.5, 2.5))
STYLES = {
'pmem': (PMEM_COLOR, ''),
'map' : (DRAM_COLOR, ''),
'lock': ("#990000", ''),
'val' : ("#009900", ''),
}
text_y = 1.05
dur_fs = 18
text_rot = 40
# ALL_RUNS = [(st_bm, st_insert, st_get, st_update, st_delete)]
ALL_RUNS = [(at_bm, at_insert, at_get, at_update, at_delete)]
for i, (bm, insert, get, update, delete) in enumerate(ALL_RUNS):
    # INSERT
    insert_lock = bm["insert-lock"] / insert
    insert_write = bm["insert-pmem"] / insert
    insert_update = bm["insert-map"] / insert
    ax.bar(POS[i][0], insert_lock, bar_width, bottom=0,
           label="Fetch/\nLock", color=STYLES['lock'][0], hatch=STYLES['lock'][1])
    ax.bar(POS[i][0], insert_update, bar_width, bottom=insert_lock,
           label="Map", color=STYLES['map'][0], hatch=STYLES['map'][1])
    ax.bar(POS[i][0], insert_write, bar_width, bottom=insert_lock + insert_update,
           label="PMem", color=STYLES['pmem'][0], hatch=STYLES['pmem'][1])
    ax.text(POS[i][0], text_y, f"{insert_op_time}", ha='center', fontsize=dur_fs, rotation=text_rot)
    # GET
    get_map = bm["get-map"] / get
    get_lock = bm["get-lock"] / get
    get_read = bm["get-pmem"] / get
    ax.bar(POS[i][1], get_lock, bar_width, bottom=0,
           color=STYLES['lock'][0], hatch=STYLES['lock'][1])
    ax.bar(POS[i][1], get_map, bar_width, bottom=get_lock,
           color=STYLES['map'][0], hatch=STYLES['map'][1])
    ax.bar(POS[i][1], get_read, bar_width, bottom=get_map + get_lock,
           color=STYLES['pmem'][0], hatch=STYLES['pmem'][1])
    ax.text(POS[i][1], text_y, f"{get_op_time}", ha='center', fontsize=dur_fs, rotation=text_rot)
    # UPDATE
    update_map = bm["update-map"] / update
    update_lock = bm["update-lock"] / update
    update_modify = bm["update-pmem"] / update
    ax.bar(POS[i][2], update_lock, bar_width, bottom=0,
           color=STYLES['lock'][0], hatch=STYLES['lock'][1])
    ax.bar(POS[i][2], update_map, bar_width, bottom=update_lock,
           color=STYLES['map'][0], hatch=STYLES['map'][1])
    ax.bar(POS[i][2], update_modify, bar_width, bottom=update_map + update_lock,
           color=STYLES['pmem'][0], hatch=STYLES['pmem'][1])
    ax.text(POS[i][2], text_y, f"{update_op_time}", ha='center', fontsize=dur_fs, rotation=text_rot)
    # DELETE
    delete_lock = bm["delete-lock"] / delete
    delete_write = bm["delete-pmem"] / delete
    delete_update = bm["delete-map"] / delete
    ax.bar(POS[i][3], delete_lock, bar_width, bottom=0,
           color=STYLES['lock'][0], hatch=STYLES['lock'][1])
    ax.bar(POS[i][3], delete_update, bar_width, bottom=delete_lock,
           color=STYLES['map'][0], hatch=STYLES['map'][1])
    ax.bar(POS[i][3], delete_write, bar_width, bottom=delete_lock + delete_update,
           color=STYLES['pmem'][0], hatch=STYLES['pmem'][1])
    ax.text(POS[i][3], text_y, f"{delete_op_time}", ha='center', fontsize=dur_fs, rotation=text_rot)
ax.set_xticks([r + (0.0 * bar_width) for r in st_pos])
ax.xaxis.set_tick_params(pad=1)
ax.set_xticklabels(["PUT", "GET", "UPDATE", "DELETE"], fontsize=18, rotation=30)
ax.set_yticks([0, 0.2, 0.4, 0.6, 0.8, 1])
# ax.set_yticklabels(["", 0.2, "", 0.6, "", 1])
ax.set_ylabel("Normalized dur./op", fontsize=20)
ax.yaxis.set_label_coords(-0.28, 0.45)
ax.text(5, 0.9, "←avg. dur.\nin $µs$", ha='center', fontsize=dur_fs + 2)
# Put a legend below current axis
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[::-1], labels[::-1], loc='center right', bbox_to_anchor=(1.74, 0.4),
ncol=1, frameon=False, framealpha=1,
fontsize=18, columnspacing=0.3, handletextpad=0.2, labelspacing=0.3,
handlelength=1.8, borderpad=0.1)
for tick in ax.yaxis.get_major_ticks():
    tick.label.set_fontsize(18)
plt.tick_params(axis='x', bottom=False)
ax.set_axisbelow(True)
ax.grid(axis='y', which='major')
hide_border(ax)
fig.savefig('charts/breakdown_half.pdf', bbox_inches='tight')
fig.savefig('charts/breakdown_half.svg', bbox_inches='tight')
```
```
import warnings
warnings.filterwarnings("ignore")
import sys
import os
import tensorflow as tf
# sys.path.append("../libs")
sys.path.insert(1, '../')
from libs import input_data
from libs import models
from libs import trainer
from libs import freeze
flags=tf.app.flags
#Important Directories
flags.DEFINE_string('data_dir','..\\..\\_inputs\\raw','Train Data Folder')
flags.DEFINE_string('summaries_dir','..\\..\\summaries','Summaries Folder')
flags.DEFINE_string('train_dir','..\\..\\logs&checkpoint','Directory to write event logs and checkpoint')
flags.DEFINE_string('models_dir','..\\..\\models','Models Folder')
#Task Specific Parameters
flags.DEFINE_string('wanted_words','yes,no,up,down,left,right,on,off,stop,go','Wanted Words')
flags.DEFINE_float('validation_percentage',10,'Validation Percentage')
flags.DEFINE_float('testing_percentage',10,'Testing Percentage')
flags.DEFINE_integer('sample_rate',16000,'Sample Rate')
flags.DEFINE_integer('clip_duration_ms',1000,'Clip Duration in ms')
flags.DEFINE_float('window_size_ms',40,'How long each spectrogram timeslice is')
flags.DEFINE_float('window_stride_ms',20.0,'How far to move in time between frequency windows.')
flags.DEFINE_integer('dct_coefficient_count',40,'How many bins to use for the MFCC fingerprint')
flags.DEFINE_float('time_shift_ms',100.0,'Range to randomly shift the training audio by in time.')
FLAGS=flags.FLAGS
model_architecture='c_rnn'
start_checkpoint=None
logging_interval=10
eval_step_interval=1000
save_step_interval=1
silence_percentage=10.0
unknown_percentage=10.0
background_frequency=0.8
background_volume=0.3
learning_rate='0.0005,0.0001,0.00002' #Always separated by commas; trains with each learning rate for the given number of iterations
train_steps='10000,10000,10000' #Declare the training steps for which the learning rates will be used
batch_size=256
model_size_info=[48, 10, 4, 2, 2, 2, 60, 84]
## CNN part
# first_filter_count = model_size_info[0]
# first_filter_height = model_size_info[1]
# first_filter_width = model_size_info[2]
# first_filter_stride_y = model_size_info[3]
# first_filter_stride_x = model_size_info[4]
## GRU part
# num_rnn_layers = model_size_info[5]
# RNN_units = model_size_info[6]
# first_fc_output_channels = model_size_info[7]
remaining_args = FLAGS([sys.argv[0]] + [flag for flag in sys.argv if flag.startswith("--")])
assert(remaining_args == [sys.argv[0]])
train_dir=os.path.join(FLAGS.data_dir,'train','audio')
model_settings = models.prepare_model_settings(
len(input_data.prepare_words_list(FLAGS.wanted_words.split(','))),
FLAGS.sample_rate, FLAGS.clip_duration_ms, FLAGS.window_size_ms,
FLAGS.window_stride_ms, FLAGS.dct_coefficient_count)
audio_processor = input_data.AudioProcessor(
train_dir, silence_percentage, unknown_percentage,
FLAGS.wanted_words.split(','), FLAGS.validation_percentage,
FLAGS.testing_percentage, model_settings,use_silence_folder=True)
def get_train_data(args):
    sess = args
    time_shift_samples = int((FLAGS.time_shift_ms * FLAGS.sample_rate) / 1000)
    train_fingerprints, train_ground_truth = audio_processor.get_data(
        batch_size, 0, model_settings, background_frequency,
        background_volume, time_shift_samples, 'training', sess)
    return train_fingerprints, train_ground_truth
def get_val_data(args):
    '''
    Input: (sess, offset)
    '''
    sess, i = args
    validation_fingerprints, validation_ground_truth = (
        audio_processor.get_data(batch_size, i, model_settings, 0.0,
                                 0.0, 0, 'validation', sess))
    return validation_fingerprints, validation_ground_truth
# def get_test_data(args):
# '''
# Input: (sess,offset)
# '''
# sess,i=args
# test_fingerprints, test_ground_truth = audio_processor.get_data(
# batch_size, i, model_settings, 0.0, 0.0, 0, 'testing', sess)
# return test_fingerprints,test_ground_truth
def main(_):
    sess = tf.InteractiveSession()
    # Placeholders
    fingerprint_size = model_settings['fingerprint_size']
    label_count = model_settings['label_count']
    fingerprint_input = tf.placeholder(
        tf.float32, [None, fingerprint_size], name='fingerprint_input')
    ground_truth_input = tf.placeholder(
        tf.float32, [None, label_count], name='groundtruth_input')
    set_size = audio_processor.set_size('validation')
    label_count = model_settings['label_count']
    # Create Model
    logits, dropout_prob = models.create_model(
        fingerprint_input,
        model_settings,
        model_architecture,
        model_size_info=model_size_info,
        is_training=True)
    # Start Training
    extra_args = (dropout_prob, label_count, batch_size, set_size)
    trainer.train(sess, logits, fingerprint_input, ground_truth_input, get_train_data,
                  get_val_data, train_steps, learning_rate, eval_step_interval, logging_interval=logging_interval,
                  start_checkpoint=start_checkpoint, checkpoint_interval=save_step_interval,
                  model_name=model_architecture, train_dir=FLAGS.train_dir,
                  summaries_dir=FLAGS.summaries_dir, args=extra_args)
# tf.app.run(main=main)
# save_checkpoint='../logs&checkpoint/c_rnn/ckpt-42000'
# save_path=os.path.join(FLAGS.models_dir,model_architecture,'%s-batched.pb'%os.path.basename(save_checkpoint))
# freeze.freeze_graph(FLAGS,model_architecture,save_checkpoint,save_path,batched=True,model_size_info=model_size_info)
```
| github_jupyter |
```
import numpy as np
from scipy import linalg # Invoke with linalg
import scipy.linalg # invoke with scipy.linalg
```
### **Matrix Matrix Multiplications operator @**
* `A@B` is a binary operator on A, B where A, B are both 2d arrays (matrices). It's equivalent to invoking `np.matmul(A, B)`.
Mathematically, assuming $A$ is $n\times m$ and $B$ is $m\times k$,
$$
(AB)_{i, j} = \sum_{\ell = 1}^{m} A_{i, \ell}B_{\ell, j}
$$
The $(i, j)$-th element of the product matrix $AB$ is the sum of the elementwise products of the $i$-th row of $A$ and the $j$-th column of $B$. Notice that this means the operation is only possible if the number of columns of the first matrix matches the number of rows of the second matrix.
NumPy documentation [here](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html)
**Note**
The `@` operator is fine as long as you know for sure the left and right are both 2d arrays.
**WARNING**
The `np.matrix` class is deprecated, so don't use it; among other things, it changes the behavior of the `*` operator.
For `np.ndarray`, `*` IS NOT the matrix-matrix product: it is the elementwise [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (with broadcasting). Only when `*` is invoked on `np.matrix` objects does it mean matrix multiplication.
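To make the distinction concrete, a quick check on plain `np.ndarray`s:

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])
N = np.array([[10, 20], [30, 40]])

print(M * N)  # Hadamard (elementwise) product: [[10, 40], [90, 160]]
print(M @ N)  # matrix product: [[70, 100], [150, 220]]
```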
```
m, n, k = 3, 5, 7 # m, n, k can be equal to 1, and that would be the same matrix vector product
A = np.random.randint(10, size=(n, m)) # just random matrices with entries between 0 and 9.
B = np.random.randint(10, size=(m, k))
print(A@B)
```
Multiplying a matrix by a 1d vector is also possible. In that case the output has the same number of dimensions as the vector involved: a 1d vector in gives a 1d vector out, and a 2d column vector in gives a 2d column vector out.
```
u = np.random.randint(10, size=m)
(A@u).shape
print(A@u)
u = np.random.randint(10, size=(m, 1))
(A@u).shape
print(A@u)
```
### **Np.dot**
The following is copied straight from the official NumPy documentation: [here](https://numpy.org/doc/stable/reference/generated/numpy.dot.html)
> numpy.dot
>
> numpy.dot(a, b, out=None)
>
> Dot product of two arrays. Specifically,
>
> * **If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation)**. <--- You are working with this for this class
>
> * **If both a and b are 2-D arrays, it is matrix multiplication, but using matmul or a @ b is preferred**. <--- You are working with this for this class
>
> * If either a or b is 0-D (scalar), it is equivalent to multiply and using numpy.multiply(a, b) or a * b is preferred.
>
> * If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
>
> * If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and the second-to-last axis of b:
This function is pretty general. It's meant for a special type of tensor product, but it reduces to the usual products of linear algebra when we work with matrices and vectors.
**Demonstration:**
```
print("Matrix Matrix product")
print(np.dot(A, B))
v = np.random.randint(10, size=(A.shape[1])) # 1d vector, where A.shape[1] is the size of A's second axis (the number of columns of A)
print("Matrix with 1d vector")
print(np.dot(A, v))
print("Matrix with 2d vector")
print(np.dot(A, v.reshape(-1, 1)))
```
### **They Are Different**
They start to behave differently when higher-dimensional tensors are involved. This is not part of the class, but it's worth making the difference clear.
```
A = np.random.rand(2, 4, 2)
B = np.random.rand(2, 2, 4)
print((A@B).shape) # matmul: multiplication happens over the last 2 axes.
print(np.dot(A, B).shape) # dot: sum product over the last axis of A and the second-to-last axis of B.
```
When invoked on `np.array` operands, the operator `*` is not matrix-vector multiplication; it multiplies elementwise, broadcasting as needed:
```
A = np.random.rand(2,2)
b = np.ones((2, 1))
print(A*b)
# If `*` were matrix-vector multiplication the output would be a 2x1 vector; broadcasting instead yields a 2x2 matrix.
```
### **Other Materials from Last Week**
* `np.zeros((m, n))`: Making zeros array
* `np.empty((m, n))`: Making an array filled with nonsense numbers.
* `A.reshape()`: changing the shape of the array to another shape with the same number of elements.
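A quick recap of those three in code:

```python
import numpy as np

Z = np.zeros((2, 3))            # 2x3 array filled with 0.0
E = np.empty((2, 3))            # 2x3 array of uninitialized (arbitrary) values
R = np.arange(6).reshape(2, 3)  # the 6 elements 0..5, reinterpreted as a 2x3 array

print(Z)
print(R)  # E is intentionally not printed: always overwrite np.empty before use
```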
| github_jupyter |
# Sketch Classifier for "How Do Humans Sketch Objects?"
A sketch classifier using the dataset from the paper <a href='http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/'>How Do Humans Sketch Objects?</a>, in which the authors collected 20,000 unique sketches evenly distributed over 250 object categories. We will use a CNN (built with Keras) to classify each sketch.
<img src='http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/teaser_siggraph.jpg'/>
```
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import random
from scipy.misc import imresize  # note: removed in SciPy >= 1.3; requires an older SciPy plus Pillow
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use('ggplot')
SKETCH_DIR = '/Volumes/Storage/sketches (subset)/png/'
DEST_SKETCH_DIR = '/Volumes/Storage/sketches (subset)/sketches_training_data/'
TARGET_SIZE = (128,128)
```
## Create subset data
To reduce the size of the data (and demands of training), we will use a subset of the data.
```
def get_image_file_paths_and_categories():
"""
Walk the root directory and for each subdirectory, obtain the
list of .png image files creating (and returning) a list for each category label and
associated filepath
"""
image_file_paths = []
categories = []
for d in os.listdir(SKETCH_DIR):
label = d
if not os.path.isdir(os.path.join(SKETCH_DIR, d)):
continue
for f in os.listdir(os.path.join(SKETCH_DIR, d)):
full_path = os.path.join(os.path.join(SKETCH_DIR, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
categories.append(label)
image_file_paths.append(full_path)
return image_file_paths, categories
image_file_paths, categories = get_image_file_paths_and_categories()
set(categories)
TARGET_COUNT = 150
available_categories = list(set(categories))
# randomly choose TARGET_COUNT distinct categories
selected_categories = random.sample(available_categories, min(TARGET_COUNT, len(available_categories)))
selected_categories
print("Filtered categories count {}".format(len(selected_categories)))
def split_training_validation_data(shuffle=True, split=0.8, target_size=TARGET_SIZE, selected_categories=None):
"""
Split the data into training and validation (as well as resizing the images)
Copies are made from the main file path and stored in a destination folder.
"""
image_scale = None
training_samples_count = 0
validation_samples_count = 0
for d in os.listdir(SKETCH_DIR):
label = d
if not os.path.isdir(os.path.join(SKETCH_DIR, d)) or d not in selected_categories:
continue
file_names = []
file_data = []
for f in os.listdir(os.path.join(SKETCH_DIR, d)):
full_path = os.path.join(os.path.join(SKETCH_DIR, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
file_names.append(f)
if image_scale is None:
image_scale = float(target_size[0]) / float(plt.imread(full_path).shape[0])
file_data.append(imresize(plt.imread(full_path), image_scale))
# shuffle
indexes = np.arange(len(file_names))
if shuffle:
np.random.shuffle(indexes)
training_end_index = int(len(indexes) * split)
training_indexes = indexes[:training_end_index]
validation_indexes = indexes[training_end_index:]
training_dir = os.path.join(DEST_SKETCH_DIR, 'training')
validation_dir = os.path.join(DEST_SKETCH_DIR, 'validation')
class_training_dir = os.path.join(training_dir, label)
class_validation_dir = os.path.join(validation_dir, label)
if not os.path.exists(training_dir):
os.mkdir(training_dir)
if not os.path.exists(validation_dir):
os.mkdir(validation_dir)
if not os.path.exists(class_training_dir):
os.mkdir(class_training_dir)
if not os.path.exists(class_validation_dir):
os.mkdir(class_validation_dir)
for idx in training_indexes:
training_samples_count += 1
plt.imsave(
os.path.join(class_training_dir, file_names[idx]), file_data[idx],
format='png', cmap='gray')
for idx in validation_indexes:
validation_samples_count += 1
plt.imsave(
os.path.join(class_validation_dir, file_names[idx]), file_data[idx],
format='png', cmap='gray')
print("Finished - training samples = {}, validation samples {}".format(training_samples_count,
validation_samples_count))
return training_samples_count, validation_samples_count
training_samples_count, validation_samples_count = split_training_validation_data(
selected_categories=selected_categories)
print("training_samples_count {}, validation_samples_count {}".format(
training_samples_count, validation_samples_count))
```
## Data exploration
```
def get_training_validation_data():
training_labels = []
training_filenames = []
validation_labels = []
validation_filenames = []
training_dir = os.path.join(DEST_SKETCH_DIR, 'training')
validation_dir = os.path.join(DEST_SKETCH_DIR, 'validation')
# iterate through the training directory
for d in os.listdir(training_dir):
label = d
if not os.path.isdir(os.path.join(training_dir, d)):
continue
for f in os.listdir(os.path.join(training_dir, d)):
full_path = os.path.join(os.path.join(training_dir, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
training_labels.append(label)
training_filenames.append(full_path)
# iterate through the validation directory
for d in os.listdir(validation_dir):
label = d
if not os.path.isdir(os.path.join(validation_dir, d)):
continue
for f in os.listdir(os.path.join(validation_dir, d)):
full_path = os.path.join(os.path.join(validation_dir, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
validation_labels.append(label)
validation_filenames.append(full_path)
return training_labels, training_filenames, validation_labels, validation_filenames
training_labels, training_filenames, _, _ = get_training_validation_data()
plt.imread(training_filenames[100]).shape
f, axarr = plt.subplots(8, 2, figsize=(8,32))
image_scale = 1.0
for r in range(0, 8):
for c in range(0, 2):
index = random.randint(0, len(training_labels)-1)
axarr[r, c].imshow(imresize(plt.imread(training_filenames[index]), image_scale), cmap='gray', interpolation='nearest')
axarr[r, c].set_title(training_labels[index])
```
| github_jupyter |
# 1. Decision trees
# Introduction
- Decision Trees are a type of Supervised Machine Learning (that is you explain what the input is and what the corresponding output is in the training data) where the data is continuously split according to a certain parameter.
- The tree can be explained by two entities, namely decision nodes and leaves. The leaves are the decisions or the final outcomes.
- And the decision nodes are where the data is split.
- An example of a decision tree:
- Let's say you want to predict whether a person is fit given information like their age, eating habits, physical activity, etc.
- The decision nodes here are questions like
- ‘What’s the age?’,
- ‘Does he exercise?’,
- ‘Does he eat a lot of pizzas’?
- And the leaves,
- which are outcomes like either ‘fit’, or ‘unfit’.
- In this case this was a binary classification problem (a yes no type problem).
- There are two main types of Decision Trees:
- Classification trees (Yes/No types)
- What we’ve seen above is an example of classification tree, where the outcome was a variable like ‘fit’ or ‘unfit’.
- Here the decision variable is Categorical.
- Regression trees (Continuous data types)
- Here the decision or the outcome variable is Continuous, e.g. a number like 123.
# 1.1 Introduction to Ensemble Methods
- Ensemble modeling is a powerful way to improve the performance of your model.
- It usually pays off to apply ensemble learning over and above various models you might be building.
- Time and again, people have used ensemble models in competitions like Kaggle and benefited from it.
- Ensemble learning is a broad topic and is only confined by your own imagination.
- Let’s start with an example to understand the basics of Ensemble learning.
- This example will bring out how we use ensemble models every day without realizing that we are doing ensemble modeling.
## Example:
- I want to invest in a company XYZ.
- I am not sure about its performance though.
- So, I look for advice on whether the stock price will increase more than 6% per annum or not?
- I decide to approach various experts having diverse domain experience:
- **1. Employee of Company XYZ**:
- This person knows the internal functioning of the company and has insider information about how the firm operates.
- But he lacks a broader perspective on how competitors are innovating, how the technology is evolving, and what the impact of this evolution will be on Company XYZ's product.
- **In the past, he has been right 70% of the time.**
- **2. Financial Advisor of Company XYZ:**
- This person has a broader perspective on how the company's strategy will fare in this competitive environment.
- However, he lacks a view on how the company's internal policies are faring.
- **In the past, he has been right 75% of the time.**
- **3. Stock Market Trader:**
- This person has observed the company’s stock price over past 3 years.
- He knows the seasonality trends and how the overall market is performing.
- He also has developed a strong intuition on how stocks might vary over time.
- **In the past, he has been right 70% of the time.**
- **4. Employee of a competitor:**
- This person knows the internal workings of the competitor firms and is aware of certain changes which are yet to be rolled out.
- He lacks insight into the company in focus and into the external factors which link the competitor's growth to the company in question.
- **In the past, he has been right 60% of times.**
- **5. Market Research team in same segment:**
- This team analyzes customer preference for company XYZ's products over others and how it is changing with time.
- Because they deal with the customer side, they are unaware of changes company XYZ will make in alignment with its own goals.
- **In the past, they have been right 75% of times.**
- **6. Social Media Expert:**
- This person can help us understand how company XYZ has positioned its products in the market.
- And how customer sentiment towards the company is changing over time. He is unaware of details beyond digital marketing.
- **In the past, he has been right 65% of times.**
- **Ensemble is the art of combining a diverse set of learners (individual models) to improve the stability and predictive power of the model.**
- In the above example, **the way we combine all the predictions together will be termed as Ensemble Learning.**
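The example can be made quantitative. The sketch below (using the accuracies of the hypothetical advisors above, and assuming, unrealistically, that their errors are independent) computes how often a simple majority vote of the six is right:

```python
from itertools import product

# Hypothetical setup: historical accuracies of the six advisors described above,
# treated (unrealistically) as independent.
accuracies = [0.70, 0.75, 0.70, 0.60, 0.75, 0.65]

def majority_vote_accuracy(accs):
    """Probability that a strict majority of independent experts is correct
    (ties, possible with an even panel, count as wrong)."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(accs)):  # 1 = expert is right
        prob = 1.0
        for is_right, p in zip(outcome, accs):
            prob *= p if is_right else (1.0 - p)
        if sum(outcome) > len(accs) / 2:
            total += prob
    return total

print(f"majority vote: {majority_vote_accuracy(accuracies):.3f}")  # → majority vote: 0.730
```

Even with ties counted as wrong, the majority vote lands at about 0.73, above four of the six individual advisors; combining diverse, partially independent opinions is exactly the effect ensemble learning exploits.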
# Title: Balance Scale Weight & Distance Database
- **Number of Instances:** 625 (49 balanced, 288 left, 288 right)
- **Number of Attributes:** 4 (numeric) + class name = 5
- **Attribute Information:**
- **Class Name (Target variable):** 3
- L [balance scale tip to the left]
- B [balance scale be balanced]
- R [balance scale tip to the right]
- Left-Weight: 5 (1, 2, 3, 4, 5)
- Left-Distance: 5 (1, 2, 3, 4, 5)
- Right-Weight: 5 (1, 2, 3, 4, 5)
- Right-Distance: 5 (1, 2, 3, 4, 5)
- **Missing Attribute Values:** None
- **Class Distribution:**
- 46.08 percent are L
- 07.84 percent are B
- 46.08 percent are R
- Assumptions we make while using Decision tree :
- At the beginning, we consider the whole training set as the root.
- Attributes are assumed to be categorical for information gain, and continuous for the Gini index.
- On the basis of attribute values records are distributed recursively.
- We use statistical methods for ordering attributes as root or internal node.
## Pseudocode :
- Find the best attribute and place it on the root node of the tree.
- Now, split the training set into subsets. While making a subset, make sure that each subset of the training dataset has the same value for the chosen attribute.
- Find leaf nodes in all branches by repeating 1 and 2 on each subset.
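A minimal sketch of this pseudocode in plain Python: a toy ID3-style builder that uses information gain, with each row a dict of categorical attribute values. This is an illustration only, not the sklearn implementation used below.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def build_tree(rows, labels, attrs):
    if len(set(labels)) == 1:                      # pure node -> leaf
        return labels[0]
    if not attrs:                                  # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: information_gain(rows, labels, a))  # step 1
    node = {"attr": best, "children": {}}
    for value in {row[best] for row in rows}:      # step 2: one subset per attribute value
        subset = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows = [r for r, _ in subset]
        sub_labels = [y for _, y in subset]
        remaining = [a for a in attrs if a != best]
        node["children"][value] = build_tree(sub_rows, sub_labels, remaining)  # step 3
    return node

def classify(node, row):
    while isinstance(node, dict):
        node = node["children"][row[node["attr"]]]
    return node
```

For example, `build_tree` recovers an XOR-style rule that needs both attributes to separate the classes.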
### While implementing the decision tree we will go through the following two phases:
- **Building Phase**
- Preprocess the dataset.
- Split the dataset from train and test using Python sklearn package.
- Train the classifier.
- **Operational Phase**
- Make predictions.
- Calculate the accuracy.
## sklearn :
- In Python, sklearn is a machine learning package which includes many ML algorithms.
- Here, we are using some of its modules like train_test_split, DecisionTreeClassifier and accuracy_score.
## NumPy :
- It is a numeric Python module which provides fast math functions for calculations.
- It is used to read data into numpy arrays and for manipulation.
## Pandas :
- Used to read and write different files.
- Data manipulation can be done easily with dataframes.
## Terms used in code :
- The Gini index and information gain are both used to select, from the n attributes of the dataset, the attribute to place at the root node or an internal node.
- **Gini Index** is a metric to measure how often a randomly chosen element would be incorrectly identified.
- It means an attribute with lower gini index should be preferred.
- Sklearn supports “gini” criteria for Gini Index and by default, it takes “gini” value.
- **Entropy**
- Entropy is the measure of uncertainty of a random variable, it characterizes the impurity of an arbitrary collection of examples. The higher the entropy the more the information content.
- **Information Gain**
- The entropy typically changes when we use a node in a decision tree to partition the training instances into smaller subsets. Information gain is a measure of this change in entropy.
- Sklearn supports “entropy” criteria for Information Gain and if we want to use Information Gain method in sklearn then we have to mention it explicitly.
- **Accuracy score**
- Accuracy score is used to calculate the accuracy of the trained classifier.
- **Confusion Matrix**
- Confusion Matrix is used to understand the trained classifier behavior over the test dataset or validate dataset.
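Plugging the class distribution quoted above into the two impurity measures gives a concrete feel for them (values computed from the stated percentages):

```python
import math

# class distribution of the balance-scale dataset quoted above: L, B, R
p = [0.4608, 0.0784, 0.4608]

gini = 1.0 - sum(q * q for q in p)           # Gini impurity
entropy = -sum(q * math.log2(q) for q in p)  # entropy in bits

print(f"gini = {gini:.4f}, entropy = {entropy:.4f} bits")  # → gini = 0.5692, entropy = 1.3181 bits
```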
```
# Importing the required packages
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
balance_data = pd.read_csv( 'https://archive.ics.uci.edu/ml/machine-learning-'+'databases/balance-scale/balance-scale.data',sep= ',', header = None)
balance_data.head()
# Printing the dataset shape
print("Dataset Length: ", len(balance_data))
print ("Dataset Shape: ", balance_data.shape)
# Separating the target variable
X = balance_data.values[:, 1:5]
Y = balance_data.values[:, 0]
# Splitting the dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.3, random_state = 100)
# Decision tree classifier using the Gini index
clf_gini = DecisionTreeClassifier(criterion = "gini",random_state = 100,max_depth=3, min_samples_leaf=5)
# Performing training
clf_gini.fit(X_train, y_train)
# Decision tree with entropy
clf_entropy = DecisionTreeClassifier( criterion = "entropy", random_state = 100,max_depth = 3, min_samples_leaf = 5)
# Performing training
clf_entropy.fit(X_train, y_train)
# Prediction on the test set with the Gini index
y_pred = clf_gini.predict(X_test)
print("Predicted values:")
print(y_pred)
# Prediction on the test set with entropy
y_pred_2 = clf_entropy.predict(X_test)
print("Predicted values:")
print(y_pred_2)
# GiniIndex
print(confusion_matrix(y_test, y_pred))
# Entropy
print(confusion_matrix(y_test, y_pred_2))
# giniIndex
accuracy_score(y_test,y_pred)
# Entropy
accuracy_score(y_test,y_pred_2)
```
# In class lab WAP : Use Decision Tree Classification Algorithm
Data set: credit.csv. Using the dataset, perform:
1. Decision Tree Classification Algorithm (Restricting the depth of the tree to 5)
2. Using entropy and Gini Index Method
3. Perform all the evaluation parameters
# Take-home assignment
Data set: Heart.csv. Using the dataset, perform:
1. Decision Tree Classification Algorithm
2. Using entropy and Gini Index Method
3. Perform all the evaluation parameters
| github_jupyter |
# Distributed Federated Learning using PySyft
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id='bob')
alice = sy.VirtualWorker(hook, id='alice')
jane = sy.VirtualWorker(hook, id='jane')
federated_train_loader = sy.FederatedDataLoader(
datasets.MNIST('data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
.federate((bob, alice, jane)),
batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32, shuffle=True)
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 64)
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return F.log_softmax(x, dim=1)
def train(model, federated_train_loader, optimizer, epochs):
model.train()
for epoch in range(epochs):
for batch_idx, (data, targets) in enumerate(federated_train_loader):
model.send(data.location)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, targets)
loss.backward()
optimizer.step()
model.get()
if batch_idx % 2 == 0:
loss = loss.get()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * 32, len(federated_train_loader) * 32,
100. * batch_idx / len(federated_train_loader), loss.item()))
def test(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
model = Classifier()
optimizer = optim.SGD(model.parameters(), lr=0.005)
train(model, federated_train_loader, optimizer, 3)
test(model, test_loader)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import scipy.stats
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.special import expit, logit
from scipy.stats import norm
```
# Dataset
```
df = pd.read_csv("bank-note/bank-note/train.csv", header=None)
d = df.to_numpy()
X = d[:,:-1]
Y = d[:,-1]
X.shape, Y.shape
df = pd.read_csv("bank-note/bank-note/test.csv", header=None)
d = df.to_numpy()
Xtest = d[:,:-1]
Ytest = d[:,-1]
Xtest.shape, Ytest.shape
```
# Part 1
```
def initialise_w(initialise):
if(initialise == 'random'):
w = np.random.randn(d,1)
print("w is initialised from N[0,1]")
elif(initialise == 'zeros'):
w = np.zeros((d,1))
print("w is initialised as a zero vector")
else:
print("Method unknown")
return w
def compute_mu(X, w):
mu = expit(np.dot(X,w))
mu = mu.reshape(X.shape[0],1)
return mu
def first_derivative(w):
mu = compute_mu(X, w)
epsilon = 1e-12
grad = np.matmul(np.transpose(X), (mu-Y)) + w.reshape(d,1)
grad = grad.squeeze()
return(grad)
def second_deivative(w,X,y):
mu = compute_mu(X, w)
R = np.eye(n)
for i in range(n):
R[i,i] = mu[i,0] * (1-mu[i,0])
return(np.dot(np.dot(np.transpose(X),R),X) + np.eye(d))
def test(w, X, y):
n,d = X.shape
mu = compute_mu(X, w)
yhat = np.zeros((n,1)).astype(np.float64)
yhat[mu>0.5]=1
correct = np.sum(yhat==y)
return(correct,n)
def train(initialise):
np.random.seed(0)
w = initialise_w(initialise)
for j in range(100):
grad1 = first_derivative(w.squeeze()).reshape(d,1)
H = second_deivative(w, X, Y)
delta_w = np.dot(np.linalg.inv(H),grad1)
w = w - delta_w
diff = np.linalg.norm(delta_w)
correct,n = test(w, Xtest, Ytest)
print("Iteration : {} \t Accuracy : {}%".format(j,correct/n*100))
if(diff < 1e-5):
print("tolerance reached at the iteration : ",j)
break
print("Training done...")
print("Model weights : ", np.transpose(w))
n,d = X.shape
n1,d1 = Xtest.shape
Y = Y.reshape(n,1)
Ytest = Ytest.reshape(n1,1)
train('random')
```
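For reference, the `train` loop above is the Newton-Raphson (IRLS) update for the MAP estimate of logistic regression with a standard normal prior on $w$. Writing $\mu_i = \sigma(x_i^\top w)$ for the `expit` output, the negative log-posterior and the quantities computed by `first_derivative` and `second_deivative` are
$$
E(w) = -\sum_{i=1}^{n}\left[y_i\log\mu_i + (1-y_i)\log(1-\mu_i)\right] + \tfrac{1}{2}\lVert w\rVert^2
$$
$$
\nabla E(w) = X^\top(\mu - y) + w, \qquad H = X^\top R X + I, \qquad R_{ii} = \mu_i(1-\mu_i),
$$
and each iteration applies $w \leftarrow w - H^{-1}\nabla E(w)$ until $\lVert H^{-1}\nabla E(w)\rVert$ falls below the $10^{-5}$ tolerance.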
# Part 2
```
# BFGS (quasi-Newton) via scipy.optimize.minimize
def compute_mu(X, w):
phi=np.dot(X,w)
mu = norm.cdf(phi)
mu = mu.reshape(X.shape[0],1)
return mu
def first_derivative(w):
mu = compute_mu(X, w)
epsilon = 1e-12
phi=np.dot(X,w)
grad_mu = X*(scipy.stats.norm.pdf(phi,0,1).reshape(-1,1))
return(np.sum((- Y*(1/(mu)) + (1-Y)*(1/(1+epsilon-mu)))*grad_mu,0) + w).squeeze()
def second_deivative(w,X,y):
mu = compute_mu(X, w)
R = np.eye(n)
phi=np.dot(X,w)
for i in range(n):
t1 = (y[i] - mu[i,0])/(mu[i,0] * (1-mu[i,0]))
t2 = scipy.stats.norm.pdf(phi[i,0],0,1)
t3 = (1-y[i])/np.power(1-mu[i,0],2) + y[i]/np.power(mu[i,0],2)
R[i,i] = t1*t2*np.dot(X[i],w) + t3*t2*t2
return(np.dot(np.dot(np.transpose(X),R),X) + np.eye(d))
def neg_log_posterior(w):
w=w.reshape(-1,1)
epsilon = 1e-12
mu = compute_mu(X, w)
prob_1 = Y*np.log(mu+epsilon)
prob_0 = (1-Y)*np.log(1-mu+epsilon)
log_like = np.sum(prob_1) + np.sum(prob_0)
w_norm = np.power(np.linalg.norm(w),2)
neg_log_pos = -log_like+w_norm/2
print("neg_log_posterior = {:.4f} \tlog_like = {:.4f} \tw_norm = {:.4f}".format(neg_log_pos, log_like, w_norm))
return(neg_log_pos)
def test(w, X, y):
n,d = X.shape
mu = compute_mu(X, w)
#print(mu.shape, n, d)
yhat = np.zeros((n,1)).astype(np.float64)
yhat[mu>0.5]=1
correct = np.sum(yhat==y)
return(correct,n)
res = minimize(neg_log_posterior, initialise_w('random'), method='BFGS', jac=first_derivative,
tol= 1e-5, options={'maxiter': 100})
correct,n = test(res.x, Xtest, Ytest)
print("\n_____________Model trained______________\n")
print("\nModel weights : ", res.x)
print("\n_____________Test Accuracy______________\n")
print("Accuracy : {}% ".format(correct/n*100))
```
# Part 3
```
def compute_mu(X, w):
phi=np.dot(X,w)
mu = norm.cdf(phi)
mu = mu.reshape(X.shape[0],1)
return mu
def first_derivative(w):
mu = compute_mu(X, w)
epsilon = 1e-12
phi=np.dot(X,w)
grad_mu = X*(scipy.stats.norm.pdf(phi,0,1).reshape(-1,1))
return(np.sum((- Y*(1/(mu)) + (1-Y)*(1/(1+epsilon-mu)))*grad_mu,0) + w).squeeze()
def second_deivative(w,X,y):
mu = compute_mu(X, w)
R = np.eye(n)
phi=np.dot(X,w)
for i in range(n):
t1 = (y[i] - mu[i,0])/(mu[i,0] * (1-mu[i,0]))
t2 = scipy.stats.norm.pdf(phi[i,0],0,1)
t3 = (1-y[i])/np.power(1-mu[i,0],2) + y[i]/np.power(mu[i,0],2)
R[i,i] = t1*t2*np.dot(X[i],w) + t3*t2*t2
return(np.dot(np.dot(np.transpose(X),R),X) + np.eye(d))
def neg_log_posterior(w):
w=w.reshape(-1,1)
epsilon = 1e-12
mu = compute_mu(X, w)
prob_1 = Y*np.log(mu+epsilon)
prob_0 = (1-Y)*np.log(1-mu+epsilon)
log_like = np.sum(prob_1) + np.sum(prob_0)
w_norm = np.power(np.linalg.norm(w),2)
neg_log_pos = -log_like+w_norm/2
print("neg_log_posterior = {:.4f} \tlog_like = {:.4f} \tw_norm = {:.4f}".format(neg_log_pos, log_like, w_norm))
return(neg_log_pos)
def test(w, X, y):
n,d = X.shape
mu = compute_mu(X, w)
#print(mu.shape, n, d)
yhat = np.zeros((n,1)).astype(np.float64)
yhat[mu>0.5]=1
correct = np.sum(yhat==y)
return(correct,n)
def train(initialise):
np.random.seed(0)
w = initialise_w(initialise)
for j in range(100):
grad1 = first_derivative(w.squeeze()).reshape(d,1)
H = second_deivative(w, X, Y)
delta_w = np.dot(np.linalg.inv(H),grad1)
w = w - delta_w
diff = np.linalg.norm(delta_w)
correct,n = test(w, Xtest, Ytest)
print("Iteration : {} \t Accuracy : {}%".format(j,correct/n*100))
if(diff < 1e-5):
print("tolerance reached at the iteration : ",j)
break
print("Training done...")
print("Model weights : ", np.transpose(w))
n,d = X.shape
n1,d1 = Xtest.shape
Y = Y.reshape(n,1)
Ytest = Ytest.reshape(n1,1)
train('zeros')
```
| github_jupyter |
# Exploratory
## EEGECoG
Data info: EEG-ECoG task

**Task design**

The blindfolded monkey was seated in a primate chair with its hands tied.

**Data format**

A. `ECoG_n.mat`

- Data matrix: (channel + trigger) x time
- Sampling rate: 1000 Hz
- Electrode locations: see "Su_brain.png"
- Filter: bandpass (Butterworth), 0.3 Hz to 500 Hz

B. `EEG_n.mat`

- Data matrix: (channel + trigger) x time
- Sampling rate: 4096 Hz
- Electrode locations: Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, C4, T4, T5, P3, Pz, P4, T6, O1, O2 (determined by the 10-20 system)

`n` is the trial number. Trigger signals should be used as the timing synchronization signal between EEG and ECoG.

[Author] Naoya Oosugi, Yasuo Nagasaka, Naomi Hasegawa
### Data access
```
cd ..
%matplotlib inline
import matplotlib.pyplot as plt
import h5py
from SpectralCV import ecog_pipe as ep
import numpy as np
#load data from h5
h5_file = '../Voytek/scv.h5'
from neurodsp import spectral
import neurodsp as ndsp
#plt.style.use('seaborn-colorblind')
#plt.rcParams['image.cmap'] = 'RdBu'
import scipy as sp
import scipy.io as io
import scipy.signal as sig
```
## ECoG
```
data_path ="/Users/Lauren/Data/NeuroTycho/EEGECoG/20110607S1_EEGECoG_Su_Oosugi-Naoya+Nagasaka-Yasuo+Hasegawa+Naomi_ECoG128-EEG18_mat/ECoG01.mat"
import h5py
with h5py.File(data_path, 'r') as f:
dset = f['WaveData']
data = []
data.append(dset[:][:])
data = data[0]
data.shape
plt.plot(data[:,0])
```
### PSD
```
fs = 1000
nperseg = 1000
noverlap = nperseg/2
start = 0
end = 1
session_num = 1
#f_axis, f_time, spg = sig.spectrogram(data.T, fs=fs, nperseg=nperseg, noverlap=nperseg/2)
# plot psd
#plt.loglog(np.mean(spg,axis=1))
freqs, psd = ndsp.spectral.psd(data[:,:].T, Fs=fs, nperseg=nperseg, noverlap=nperseg/2)
plt.loglog(freqs,psd.T);
freqs, scv = spectral.scv(data[:,0], fs, nperseg=int(fs),noverlap=noverlap)
plt.loglog(freqs,scv)
plt.plot(data[:,0])
```
## EEG
### Data access
```
data_path ="/Users/Lauren/Data/NeuroTycho/EEGECoG/20110607S1_EEGECoG_Su_Oosugi-Naoya+Nagasaka-Yasuo+Hasegawa+Naomi_ECoG128-EEG18_mat/EEG01.mat"
matfile = io.loadmat(data_path, squeeze_me=True)
data = matfile['EEG2']
data
data.shape[1]/fs
```
### PSD & SCV
```
fs = 4096
nperseg = fs
noverlap = nperseg/2
start = 0
end = 1
session_num = 1
f_axis, f_time, spg = sig.spectrogram(data, fs=fs, nperseg=nperseg, noverlap=nperseg/2)
# plot psd
_ = plt.loglog(np.mean(spg,axis=0))
scv = spectral.scv(data, fs, nperseg=int(fs),noverlap=noverlap)
scv[1]
_ = plt.loglog(scv[1].T)
```
## ECoG Visual Grating
```
data_path = "/Users/Lauren/Data/NeuroTycho/VisualGrating/20100723S1_VGT_K2_KazuhitoTakenaka-ToruYanagawa_mat_ECoG128-Event3/"
from codes import access_nt as asc
session = 0
#chan = np.arange(1,129).tolist()
chan = [1]
data = asc.get_ECoG(data_path, session, chan)
fs = 1000
nperseg = 1000
noverlap = nperseg/2
start = 0
end = 1
session_num = 1
f_axis, f_time, spg = sig.spectrogram(data, fs=fs, nperseg=nperseg, noverlap=nperseg/2)
# plot psd
_ = plt.loglog(np.mean(spg,axis=0))
scv = spectral.scv(data, fs, nperseg=int(fs),noverlap=noverlap)
_ = plt.plot(scv[1][0])
```
```
import scanpy as sc
import pandas as pd
import numpy as np
import scipy as sp
from statsmodels.stats.multitest import multipletests
import matplotlib.pyplot as plt
import seaborn as sns
from anndata import AnnData
import os
from os.path import join
import time
from gprofiler import GProfiler
# scTRS tools
import scTRS.util as util
import scTRS.data_loader as dl
import scTRS.method as md
# autoreload
%load_ext autoreload
%autoreload 2
# # This file contains all the cells
# df_design = pd.read_csv(join(DATA_PATH, 'GSE84498_experimental_design.txt.gz'), sep='\t')
# df_design.index = df_design['well']
# df_data = pd.read_csv(join(DATA_PATH, 'GSE84498_umitab.txt.gz'), sep='\t', index_col=0)
# # Make anndata
# adata_raw = AnnData(X=df_data.T)
# adata_raw.X = sp.sparse.csr_matrix(adata_raw.X)
# adata_raw.obs = adata_raw.obs.join(df_design)
# print('# Before filtering', adata_raw.shape)
# sc.pp.filter_genes(adata_raw, min_cells=10)
# print('# After filtering', adata_raw.shape)
# adata_raw.write(DATA_PATH+'/obj_raw_full.h5ad')
# Read data: this file contains only hepatocytes
DATA_PATH='/n/holystore01/LABS/price_lab/Users/mjzhang/scTRS_data/single_cell_data/mouse_liver_halpern_nature_2017'
df_data = pd.read_csv(join(DATA_PATH, 'SuppTable1_umi.zip'), sep='\s\s+', skiprows=1, index_col=0)
df_lobule = pd.read_excel(join(DATA_PATH, 'SuppTable2_lobule.xlsx'), index_col=0, skiprows=1)
df_zonation = pd.read_excel(join(DATA_PATH, 'SuppTable3_zonation.xlsx'), index_col=0, skiprows=2)
# Make anndata
adata_raw = AnnData(X=df_data.T)
adata_raw.X = sp.sparse.csr_matrix(adata_raw.X)
adata_raw.obs['n_genes'] = (adata_raw.X>0).sum(axis=1)
temp_df = df_lobule.copy()
temp_df.index = [x.replace(' ','') for x in temp_df.index]
adata_raw.obs = adata_raw.obs.join(temp_df)
adata_raw.var = adata_raw.var.join(df_zonation)
print('# Before filtering', adata_raw.shape)
sc.pp.filter_genes(adata_raw, min_cells=10)
print('# After filtering', adata_raw.shape)
adata_raw.write(DATA_PATH+'/obj_raw.h5ad')
# Make .cov file
df_cov = pd.DataFrame(index=adata_raw.obs.index)
df_cov['const'] = 1
df_cov['n_genes'] = (adata_raw.X>0).sum(axis=1)
df_cov.to_csv(DATA_PATH+'/halpern_nature_2017.cov', sep='\t')
# Cluster the data to have UMAP plot
adata = adata_raw.copy()
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
sc.pp.log1p(adata)
print(adata.shape)
sc.pp.highly_variable_genes(adata, subset = False, min_disp=.5,
min_mean=.0125, max_mean=10, n_bins=20, n_top_genes=None)
sc.pp.scale(adata, max_value=10, zero_center=False)
sc.pp.pca(adata, n_comps=50, use_highly_variable=True, svd_solver='arpack')
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=20)
sc.tl.louvain(adata, resolution = 0.5)
sc.tl.leiden(adata, resolution = 0.5)
sc.tl.umap(adata)
sc.tl.diffmap(adata)
adata.write(DATA_PATH+'/obj_processed.h5ad')
sc.pl.umap(adata, color=['Glul', 'Cyp2e1', 'Ass1', 'Asl', 'Alb', 'Cyp2f2'])
sc.pl.umap(adata, color=['n_genes'])
```
# Churn Risk Score Prediction
### Link to the Dataset: [Churn Risk Rate](https://www.kaggle.com/imsparsh/churn-risk-rate-hackerearth-ml?select=train.csv)
### Importing Libraries
```
import pandas as pd
from sklearn import preprocessing
from sklearn import metrics
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
```
### Getting Data
```
df = pd.read_csv('train.csv')
df
```
### Data Preprocessing
```
df.info()
df = df.drop(['Name', 'referral_id'], axis=1) # dropping unnecessary columns
df
label_encoder = preprocessing.LabelEncoder() # encoding data
a = df.columns
for i in a[:-1]:
df[i] = df[i].astype('|S')
df[i] = label_encoder.fit_transform(df[i])
df
df['churn_risk_score'].isnull().any() # checking for and removing records with null values
df = df.dropna(axis = 0, how ='any')
df.isnull().any()
df
```
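As a minimal illustration of what the encoding loop above does to a single column (the category values here are made up for the example), `LabelEncoder` sorts the unique values and maps each to its index:

```python
import numpy as np
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
col = np.array(['Male', 'Female', 'Male', 'Unknown'])
codes = le.fit_transform(col)   # classes are sorted alphabetically before coding
print(list(le.classes_))        # ['Female', 'Male', 'Unknown']
print(list(codes))              # [1, 0, 1, 2]
```

Note that the resulting integers are ordinal only by accident of sorting, which is one reason tree-based models (used below) are a reasonable fit for label-encoded features.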
### Data Visualization
```
# checking the distribution of outcomes
sns.countplot(x = 'churn_risk_score', data = df)
```
### Checking Variance
```
df.columns
# checking variance
from statsmodels.stats.outliers_influence import variance_inflation_factor
variables = df[['customer_id', 'age', 'gender', 'security_no', 'region_category',
'membership_category', 'joining_date', 'joined_through_referral',
'preferred_offer_types', 'medium_of_operation', 'internet_option',
'last_visit_time', 'days_since_last_login', 'avg_time_spent',
'avg_transaction_value', 'avg_frequency_login_days', 'points_in_wallet',
'used_special_discount', 'offer_application_preference',
'past_complaint', 'complaint_status', 'feedback', 'churn_risk_score']]
vif = pd.DataFrame()
vif['VIF'] = [variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])]
vif['Features'] = variables.columns
vif
```
### VIF is less than 10 for all the attributes, hence, we can keep them all.
### Splitting Data for Training and Testing
```
data = df.values
X, y = data[:,:-1], data[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0) # splitting in the ratio 80:20
```
### Decision Tree Model
```
clf = DecisionTreeClassifier()
clf = clf.fit(X_train,y_train)
```
### Making Predictions
```
pred = clf.predict(X_test)
```
### Checking Accuracy
```
score = clf.score(X_test, y_test)
score
```
### Predictions are 71% accurate.
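Accuracy alone can hide per-class errors on a multi-class target like `churn_risk_score`. A quick follow-up check is a confusion matrix; this sketch uses toy labels standing in for `y_test` and the predictions above:

```python
from sklearn.metrics import confusion_matrix, classification_report

# toy labels standing in for y_test and pred from the cells above
y_true = [1, 1, 2, 2, 3, 3]
y_pred = [1, 2, 2, 2, 3, 1]
cm = confusion_matrix(y_true, y_pred)   # rows = true class, cols = predicted
print(cm)
print(classification_report(y_true, y_pred))
```

Off-diagonal counts show which risk scores are being confused with which, which accuracy by itself does not.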
### Visualizing the Decision Tree
```
s = plt.figure(figsize=(20,10))
tree.plot_tree(clf, max_depth=2, filled=True, fontsize=10)
plt.title("Decision Tree for Customer Churn Data", fontsize=30)
plt.show()
```
```
import os
import pandas as pd
import pandas_ta as ta
import plotly.graph_objects as go
import plotly.io as pio
import yfinance as yf
from datetime import date
from datetime import datetime
ticker = "BABA"
from_date = datetime(2020, 1, 1)
to_date = datetime.today()
interval = "1d"
ticker_csv = os.path.join(os.getcwd(), "data", "{}.csv".format(ticker))
df = yf.download(tickers = ticker,
start = from_date,
end = to_date,
interval = interval
)
df.head()
```
Succinctly:
1. Calculate the 10-day SMA of the ATR.
2. Calculate the highest high over the last 10 days.
3. Subtract $3 * ATR_{10}$ from the highest high of the last 10 days.
4. Take the highest value of this preliminary series over the last 20 days; this is your long stop.
5. Calculate the lowest low over the last 10 days.
6. Add $3 * ATR_{10}$ to the lowest low of the last 10 days.
7. Take the lowest value of this preliminary series over the last 20 days; this is your short stop.
You can use looser or tighter ATR multipliers. Figure 7.4 on page 95 of the original book shows long stops with values of 1.5 ATR and 3.5 ATR for a long trade of a Japanese Yen futures 09/93 contract, but I had no luck finding freely available historic data for this contract to compare.
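The seven steps above can be sketched directly with pandas rolling windows. This is a hand-rolled illustration on synthetic OHLC data using a simple-moving-average ATR — a sketch of the recipe, not the `pandas_ta` implementation used below:

```python
import numpy as np
import pandas as pd

def chande_kroll_stop(high, low, close, p=10, x=3.0, q=20):
    """Hand-rolled CKSP following the steps above (simple-MA ATR)."""
    prev_close = close.shift(1)
    tr = pd.concat([high - low,
                    (high - prev_close).abs(),
                    (low - prev_close).abs()], axis=1).max(axis=1)
    atr = tr.rolling(p).mean()                       # step 1
    prelim_long = high.rolling(p).max() - x * atr    # steps 2-3
    long_stop = prelim_long.rolling(q).max()         # step 4
    prelim_short = low.rolling(p).min() + x * atr    # steps 5-6
    short_stop = prelim_short.rolling(q).min()       # step 7
    return long_stop, short_stop

# synthetic random-walk OHLC just to exercise the function
rng = np.random.default_rng(1)
close = pd.Series(100 + np.cumsum(rng.standard_normal(200)))
high = close + rng.uniform(0, 1, 200)
low = close - rng.uniform(0, 1, 200)
ls, ss = chande_kroll_stop(high, low, close)
```

The first `p + q - 2` values are NaN because the nested rolling windows need that much warm-up before both stops are defined.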
```
cksp = ta.trend.cksp(df["High"], df["Low"], df["Close"], tvmode = False)
df["CKlong"] = cksp[cksp.columns[0]]
df["CKshort"] = cksp[cksp.columns[1]]
df.head()
fig = go.Figure()
interp = "linear"
title = "{} price from {} to {} with Chande-Kroll Stops (10,3,20)".format(ticker,
"{}-{}-{}".format(from_date.day, from_date.month, from_date.year),
"{}-{}-{}".format(to_date.day, to_date.month, to_date.year))
hovertext = []
for i in range(len(df["Open"])):
hovertext.append(
"Open: {:.3f} <br>High: {:.3f}<br>Low: {:.3f}<br>Close: {:.3f}".format(
df["Open"][i], df["High"][i], df["Low"][i], df["Close"][i]))
ckltext = []
for i in range(len(df["CKlong"])):
ckltext.append("CK Long: {:.3f}".format(df["CKlong"][i]))
ckstext = []
for i in range(len(df["CKshort"])):
ckstext.append("CK Short: {:.3f}".format(df["CKshort"][i]))
fig.add_trace(
go.Candlestick(
x = df.index,
open = df["Open"], high = df["High"], low = df["Low"], close = df["Close"],
name = "{} OHLC".format(ticker),
text = hovertext,
hoverinfo = "text",
),)
fig.add_trace(
go.Scatter(
x = df.index,
y = df["CKlong"],
name = "CK long",
line = dict(shape = interp, color = "rgba(0, 120, 240, 0.5)", width = 1,),
text = ckltext,
hoverinfo = "text",
),)
fig.add_trace(
go.Scatter(
x = df.index,
y = df["CKshort"],
fill = "tonexty",
fillcolor = "rgba(180, 180, 200, 0.125)",
name = "CK short",
line = dict(shape = interp, color = "rgba(200, 40, 10, 0.5)", width = 1,),
text = ckstext,
hoverinfo = "text",
),)
# plot layout options
layout = dict(
xaxis = go.layout.XAxis(
type = "category",
showticklabels = True,
tick0 = 0.0,
dtick = 50,
rangeslider = dict(visible = False)
),
yaxis = dict(
type = "log",
anchor = "x2",
autorange = True,
tickmode = "auto",
),
# annotations = annotations,
#width = 1000, height = 800,
margin = dict(
autoexpand = True,
#r = 100, t = 65, b = 110, l = 100
),
template = "gridon",
plot_bgcolor = "rgb(240, 240, 240)",
paper_bgcolor = "rgb(240, 240, 240)",
dragmode = "zoom",
hovermode = "closest",
)
fig.update_layout(layout)
fig.update_layout(
title = title,
font=dict(
size=9,
),
)
fig.show()
```
The plot above has the Chande-Kroll stop *tvmode* set to **False**. This is the behaviour specified in the book, with `p=10, x=3, q=20`.
The following plot sets *tvmode* to **True** (*its default behaviour*). This mode retains compatibility with [Trading View](https://www.tradingview.com/support/solutions/43000589105-chande-kroll-stop/)
```
cksp = ta.trend.cksp(df["High"], df["Low"], df["Close"], tvmode = True)
df["CKlong"] = cksp[cksp.columns[0]]
df["CKshort"] = cksp[cksp.columns[1]]
fig = go.Figure()
interp = "linear"
title = "{} price from {} to {} with Chande-Kroll Stops (10,1,9) Trading View mode".format(ticker,
"{}-{}-{}".format(from_date.day, from_date.month, from_date.year),
"{}-{}-{}".format(to_date.day, to_date.month, to_date.year))
hovertext = []
for i in range(len(df["Open"])):
hovertext.append(
"Open: {:.3f} <br>High: {:.3f}<br>Low: {:.3f}<br>Close: {:.3f}".format(
df["Open"][i], df["High"][i], df["Low"][i], df["Close"][i]))
ckltext = []
for i in range(len(df["CKlong"])):
ckltext.append("CK Long: {:.3f}".format(df["CKlong"][i]))
ckstext = []
for i in range(len(df["CKshort"])):
ckstext.append("CK Short: {:.3f}".format(df["CKshort"][i]))
fig.add_trace(
go.Candlestick(
x = df.index,
open = df["Open"], high = df["High"], low = df["Low"], close = df["Close"],
name = "{} OHLC".format(ticker),
text = hovertext,
hoverinfo = "text",
),)
fig.add_trace(
go.Scatter(
x = df.index,
y = df["CKlong"],
name = "CK long",
line = dict(shape = interp, color = "rgba(0, 120, 240, 0.5)", width = 1,),
text = ckltext,
hoverinfo = "text",
),)
fig.add_trace(
go.Scatter(
x = df.index,
y = df["CKshort"],
fill = "tonexty",
fillcolor = "rgba(180, 180, 200, 0.125)",
name = "CK short",
line = dict(shape = interp, color = "rgba(200, 40, 10, 0.5)", width = 1,),
text = ckstext,
hoverinfo = "text",
),)
# plot layout options
layout = dict(
xaxis = go.layout.XAxis(
type = "category",
showticklabels = True,
tick0 = 0.0,
dtick = 50,
rangeslider = dict(visible = False)
),
yaxis = dict(
type = "log",
anchor = "x2",
autorange = True,
tickmode = "auto",
),
# annotations = annotations,
#width = 1000, height = 800,
margin = dict(
autoexpand = True,
#r = 100, t = 65, b = 110, l = 100
),
template = "gridon",
plot_bgcolor = "rgb(240, 240, 240)",
paper_bgcolor = "rgb(240, 240, 240)",
dragmode = "zoom",
hovermode = "closest",
)
fig.update_layout(layout)
fig.update_layout(
title = title,
font=dict(
size=9,
),
)
fig.show()
```
# Quick and Dirty Diffusor Calibration
Calibrates $gDT$, where $T$ is the tap-point matrix, $D$ is the set of diffusor kernels (in this case, with no edges cut), and $g$ is the set of neuron gains.
This is meant to drop into the existing numerical simulations of diffusor spread.
To make this more practical (working with just a single pool), you would need to collect different sets under different diffusor-cut conditions. To be complete, for each tap-point location you'd have kernels for (num DAC spreads) * {no cuts nearby, cut above, cut right, cut left, cut down, cut above+right, cut above+left, etc.}. For broad spreads, you'd have to add cuts 2 away, 3 away, etc.
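As a rough illustration of how quickly the nearest-neighbour cut conditions alone multiply (the direction names are just labels for this sketch), enumerating all subsets of four neighbouring cuts already gives 16 conditions per tap point, before multiplying by the number of DAC spreads:

```python
from itertools import combinations

# hypothetical labels for the four nearest-neighbour cut directions
neighbours = ['above', 'right', 'left', 'down']
conditions = [()]  # the 'no cuts nearby' case
for r in range(1, len(neighbours) + 1):
    conditions += list(combinations(neighbours, r))
print(len(conditions))  # 2**4 subsets per tap point, before DAC spreads
```

Adding cuts 2 or 3 synapses away grows this set exponentially, which is why the full calibration is described above as impractical for a single pool.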
```
%load_ext autoreload
%autoreload 2
from pystorm.hal import HAL
from pystorm.PyDriver import bddriver as bd
from pystorm.hal.net_builder import NetBuilder
from pystorm.hal.calibrator import Calibrator, PoolSpec
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# full chip
Y = X = 64
LY = LX = 0
SY = Y // 2
SX = X // 2
D = 1
DACS = dict(DAC_DIFF_G = 1024,
DAC_DIFF_R = 600,
DAC_SOMA_REF = 1024,
DAC_SOMA_OFFSET = 2)
# need O(grid_space**2 samples)
# how many synapses to leave off between active synapses
SYN_GRID_SPACE = 8
assert(SYN_GRID_SPACE % 2 == 0) # use an even number to keep #taps even
KY = KX = SYN_GRID_SPACE * 2
# estimate_encoders for 1D takes 9 samples/trial 2 times, 1 s per sample
trial_extra = 33
print('runtime estimate:', (9 * 1 * 2 + trial_extra) * SYN_GRID_SPACE**2 / 60, 'minutes')
hal = HAL()
cal = Calibrator(hal)
import time
t0 = time.time()
all_encs = {}
#for y_grid_idx in range(1):
# for x_grid_idx in range(1):
for y_grid_idx in range(SYN_GRID_SPACE):
for x_grid_idx in range(SYN_GRID_SPACE):
print("RUNNING Y:", y_grid_idx, "X:", x_grid_idx)
print("=======================")
print((time.time() - t0) / 60, 'minutes elapsed')
syn_TPM = np.zeros((SY, SX))
syn_TPM[y_grid_idx::SYN_GRID_SPACE, x_grid_idx::SYN_GRID_SPACE] = 1
TPM = NetBuilder.syn_taps_to_nrn_taps(syn_TPM.reshape((SY, SX, 1)))
# use bias 3, want lots of spiking
ps = PoolSpec(YX=(Y,X), loc_yx=(LY, LX), D=D, TPM=TPM, biases=3)
ps.fmax = cal.optimize_fmax(ps, safety_margin=.95)
# estimate encoders
encs, offs, std_encs, std_offs, _, _ = \
cal.get_encoders_and_offsets(ps, dacs=DACS, num_sample_angles=3, bin_time=2, num_bootstraps=20)
all_encs[(y_grid_idx, x_grid_idx)] = dict(ps=ps, encs=encs, std_encs=std_encs)
import pickle
pck_fname = 'calibrate_diffusor_' + str(DACS['DAC_DIFF_G']) + '_' + str(DACS['DAC_DIFF_R']) + '.pck'
pickle.dump(all_encs, open(pck_fname, 'wb'))
print((time.time() - t0) / 60, 'minutes elapsed')
import pickle
DAC_DIFF_G = DACS['DAC_DIFF_G']
DAC_DIFF_R = DACS['DAC_DIFF_R']
#DAC_DIFF_G = 1024
#DAC_DIFF_R = 600
pck_fname = 'calibrate_diffusor_' + str(DAC_DIFF_G) + '_' + str(DAC_DIFF_R) + '.pck'
all_encs = pickle.load(open(pck_fname, 'rb'))
for (y_grid_idx, x_grid_idx), enc_dict in all_encs.items():
encs = enc_dict['encs']
std_encs = enc_dict['std_encs']
ps = enc_dict['ps']
# plot raw responses and errors
ps_for_plot = ps.copy()
ps_for_plot.TPM = np.hstack((ps.TPM, ps.TPM))
Calibrator.plot_encs_yx(np.hstack((encs, std_encs)), ps_for_plot, figheight=4)
plotlog=False
kernels = {}
Ksums = []
for (y_grid_idx, x_grid_idx), enc_dict in all_encs.items():
encs = enc_dict['encs']
std_encs = enc_dict['std_encs']
# extract kernels
yx_encs = np.zeros((Y + 2*SYN_GRID_SPACE, X + 2*SYN_GRID_SPACE))
yx_std_encs = np.zeros_like(yx_encs)
# zero-padded outside
yx_encs[SYN_GRID_SPACE:-SYN_GRID_SPACE, SYN_GRID_SPACE:-SYN_GRID_SPACE] = encs.reshape((Y, X))
yx_std_encs[SYN_GRID_SPACE:-SYN_GRID_SPACE, SYN_GRID_SPACE:-SYN_GRID_SPACE] = std_encs.reshape((Y, X))
# nrn idxs
for y_center_idx in range(y_grid_idx*2, Y, SYN_GRID_SPACE*2):
for x_center_idx in range(x_grid_idx*2, X, SYN_GRID_SPACE*2):
if (y_center_idx // 2) % 2 == 0:
xshift = 0
else:
xshift = 1
ymin = SYN_GRID_SPACE + y_center_idx - SYN_GRID_SPACE
xmin = SYN_GRID_SPACE + x_center_idx - SYN_GRID_SPACE + xshift
ymax = SYN_GRID_SPACE + y_center_idx + SYN_GRID_SPACE + 1
xmax = SYN_GRID_SPACE + x_center_idx + SYN_GRID_SPACE + 1 + xshift
K = yx_encs[ymin:ymax, xmin:xmax]
Kpos = K.copy()
#Kpos[Kpos < 0] = 0
Kpos[np.isnan(Kpos)] = 0
Kerr = yx_std_encs[ymin:ymax, xmin:xmax]
# cancel out sketchy measurements
# 95% confidence interval bigger than half measured value
big_err = Kerr * 2 > Kpos * .5
Kpos[big_err] = 0
Ksums.append(np.sum(Kpos))
kernels[y_center_idx // 2, x_center_idx // 2] = dict(K=Kpos, Kerr=Kerr)
print(np.mean(Ksums))
all_encs_flat = np.abs(all_encs[0,0]['encs'].flatten())
all_encs_flat[np.isnan(all_encs_flat)] = 0
vmin = 0
vmax = np.sort(all_encs_flat)[int(.99 * len(all_encs_flat))]
from mpl_toolkits.axes_grid1 import make_axes_locatable
pmin = 8
pmax = 16
PYX = pmax - pmin
fig, ax = plt.subplots(2*PYX, PYX, figsize=(15, 30))
for (sy, sx), k_dict in kernels.items():
if sy >= pmin and sy < pmax and sx >= pmin and sx < pmax:
spy = sy - pmin
spx = sx - pmin
Kpos = k_dict['K']
Kerr = k_dict['Kerr']
if plotlog:
im = ax[spy, spx].imshow(np.log(Kpos + 1), vmin=np.log(vmin + 1), vmax=np.log(vmax + 1))
plt.colorbar(im)
else:
#im = ax[spy, spx].imshow(Kpos, vmin=vmin, vmax=vmax)
this_ax = ax[2*spy, spx]
im = this_ax.imshow(Kpos, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(this_ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
this_ax.axis('off')
plt.colorbar(im, cax=cax)
this_ax = ax[2*spy + 1, spx]
im = this_ax.imshow(2 * Kerr, cmap='gray_r') # ~95% confidence
divider = make_axes_locatable(this_ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
this_ax.axis('off')
plt.colorbar(im, cax=cax)
plt.tight_layout(w_pad=.02, h_pad=.02, pad=.01)
import sys
sys.stdout.write('\a')
sys.stdout.flush()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Title
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/image_ops"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/_template.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/_template.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/docs/tutorials/image_ops.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
[Update button links]
## Overview
[Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.]
## Setup
[Put all your imports and installs up into a setup section.]
```
try:
%tensorflow_version 2.x
except:
pass
import tensorflow as tf
!pip install --no-deps tensorflow-addons~=0.7
import tensorflow_addons as tfa
```
## Resources
* [TensorFlow documentation contributor guide](https://www.tensorflow.org/community/contribute/docs)
* [TensorFlow documentation style guide](https://www.tensorflow.org/community/contribute/docs_style)
* [Google developer documentation style guide](https://developers.google.com/style/highlights)
## Notebook style
* Include the collapsed license at the top (uses the Colab "Form" mode to hide the cells).
* Save the notebook with the table of contents open.
* Use one `H1` header for the title.
* Include the button-bar immediately after the `H1`.
* Include an overview section before any code.
* Put all your installs and imports in a setup section.
* Write Python 3 compatible code. You don't have to worry about Python 2 compatibility.
* Keep code and text cells as brief as possible.
* Avoid leaving an empty cell at the end of the notebook.
### Code style
* Notebooks are for people. Write code optimized for clarity.
* Keep examples quick. Use small datasets, or small slices of datasets. Don't train to convergence, train until it's obvious it's making progress.
* Demonstrate small parts before combining them into something more complex, like this:
```
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
```
Run the model on a single batch of data, and inspect the output:
```
import numpy as np
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
```
Compile the model for training:
```
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
```
### Code content
* Use the highest level API that gets the job done (unless the goal is to demonstrate the low level API).
* Use `keras.Sequential` > keras functional api > keras model subclassing > ...
* Use `model.fit` > `model.train_on_batch` > manual `GradientTapes`.
* Use eager-style code.
* Use `tensorflow_datasets` and `tf.data` where possible.
* Avoid `compat.v1`.
### Text
* Use an imperative style. "Run a batch of images through the model."
* Use sentence case in titles/headings.
* Use short titles/headings: "Download the data", "Build the Model", "Train the model".
* Use the [Google developer documentation style guide](https://developers.google.com/style/highlights).
## GitHub workflow
* Be consistent about how you save your notebooks, otherwise the JSON diffs are messy.
* This notebook has the "Omit code cell output when saving this notebook" option set. GitHub refuses to diff notebooks with large diffs (inline images).
* [ReviewNB.com](http://reviewnb.com) can help with diffs. This is linked in a comment on a notebook pull request.
* Use the [Open in Colab](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo) extension to open a GitHub notebook in Colab.
* The easiest way to edit a notebook in GitHub is to open it with Colab from the branch you want to edit. Then use File --> Save a copy in GitHub, which will save it back to the branch you opened it from.
* For PRs it's helpful to post a direct Colab link to the PR head: https://colab.research.google.com/github/{USER}/{REPO}/blob/{BRANCH}/{PATH}.ipynb
### Clinical BCI Challenge-WCCI2020
- [website link](https://sites.google.com/view/bci-comp-wcci/?fbclid=IwAR37WLQ_xNd5qsZvktZCT8XJerHhmVb_bU5HDu69CnO85DE3iF0fs57vQ6M)
- [Dataset Link](https://github.com/5anirban9/Clinical-Brain-Computer-Interfaces-Challenge-WCCI-2020-Glasgow)
```
import mne
from scipy.io import loadmat
import scipy
import sklearn
import numpy as np
import pandas as pd
import glob
from mne.decoding import CSP
import os
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_transformer, make_column_selector
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as lda
import warnings
warnings.filterwarnings('ignore') # to ignore warnings
verbose = False # to universally just change it to true/false for different output display
mne.set_log_level(verbose=verbose) # to suppress large info outputs
# using kappa as evaluation metric
kappa = sklearn.metrics.make_scorer(sklearn.metrics.cohen_kappa_score) # kappa scorer
acc = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score) # accuracy scorer
scorer = kappa # just assign another scorer to replace kappa scorer
n_jobs = None # for multicore parallel processing, set it to 1 if cause memory issues, for full utilization set to -1
```
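Cohen's kappa discounts agreement expected by chance, which is why it is preferred here over plain accuracy for possibly imbalanced trials. A quick illustration: a degenerate classifier that always predicts the majority class looks good on accuracy but scores zero kappa:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# a degenerate classifier that always predicts the majority class (9:1 split)
y_true = [1] * 9 + [2]
y_pred = [1] * 10
acc_val = accuracy_score(y_true, y_pred)     # high despite zero skill
kap = cohen_kappa_score(y_true, y_pred)      # chance-corrected: zero
print('accuracy:', acc_val)
print('kappa   :', kap)
```

Here the expected chance agreement is also 0.9, so kappa = (0.9 - 0.9) / (1 - 0.9) = 0.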
## Data Loading and Conversion to MNE Datatypes
[Mike Cohen Tutorials link for EEG Preprocessing](https://www.youtube.com/watch?v=uWB5tjhataY&list=PLn0OLiymPak2gDD-VDA90w9_iGDgOOb2o)
```
current_folder = globals()['_dh'][0] # a hack to get the path of the folder in which this Jupyter file is located
data_path = os.path.join(current_folder, 'Data')
all_files = glob.glob(data_path + '/*.mat')
training_files = glob.glob(data_path + '/*T.mat')
evaluation_files = glob.glob(data_path + '/*E.mat')
len(all_files), len(training_files), len(evaluation_files) # if these return zero,then no file is loaded
def get_mne_epochs(filepath, verbose=verbose, t_start=2, fs=512, mode='train'):
'''
This function reads the EEG data from .mat file and convert it to MNE-Python Compatible epochs
data structure. It takes data from [0, 8] sec range and return it by setting t = 0 at cue onset
    i.e. 3 seconds, and dropping the first two seconds, so the output data is in the [-1.0, 5.0] sec range.
    Details can be found in the preprocessing section of the attached document.
'''
mat_data = loadmat(filepath) # read .mat file
eeg_data= mat_data['RawEEGData']
idx_start = fs*t_start
eeg_data = eeg_data[:, :, idx_start:]
event_id = {'left-hand': 1, 'right-hand': 2}
channel_names = ['F3', 'FC3', 'C3', 'CP3', 'P3', 'FCz', 'CPz', 'F4', 'FC4', 'C4', 'CP4', 'P4']
info = mne.create_info(ch_names=channel_names, sfreq=fs, ch_types='eeg')
epochs = mne.EpochsArray(eeg_data, info, verbose=verbose, tmin=t_start-3.0)
epochs.set_montage('standard_1020')
epochs.filter(1., None)
epochs.apply_baseline(baseline=(-.250, 0)) # linear baseline correction
if mode == 'train': # this in only applicable for training data
epochs.event_id = event_id
epochs.events[:,2] = mat_data['Labels'].ravel()
return epochs
def get_labels(filepath):
mat_data = loadmat(filepath) # read .mat file
return mat_data['Labels'].ravel()
epochs, labels = get_mne_epochs(training_files[0], verbose=verbose), get_labels(training_files[0])
data = epochs.get_data()
print('Shape of EEG Data: ', data.shape, '\t Shape of Labels: ', labels.shape)
```
### Training Data
```
# loading original data
epochs_list_train = []
for i in training_files:
epochs_list_train.append(get_mne_epochs(i, verbose=verbose))
```
### Evaluation Data
The first 8 files are for the single-subject task and the last 2 are for the cross-subject task.
```
epochs_list_eval = []
for i in evaluation_files:
epochs_list_eval.append(get_mne_epochs(i, mode='test', verbose=verbose))
```
### Bandpass filtering of data
```
for epochs in epochs_list_train:
epochs.filter(7.0, 32.0)
for epochs in epochs_list_eval:
epochs.filter(7.0, 32.0)
```
## Let's try some classification
```
cv = StratifiedShuffleSplit(n_splits=5, random_state=0)
epochs = epochs_list_train[3]
psds, freqs = mne.time_frequency.psd_multitaper(epochs, tmin=0.5, tmax=4.5, fmin=8, fmax=30 ,n_jobs=1)
psds = 10 * np.log10(psds) # to convert powers to DB
labels = epochs.events[:,-1]
x_trainVal, x_test, y_trainVal, y_test = train_test_split(psds, labels.ravel(), shuffle=True, stratify=labels, random_state=0) # named trainVal to avoid confusing names when reusing x_trainVal
print('train set: features: ', x_trainVal.shape, 'labels: ', y_trainVal.shape)
print('Test set: features: ', x_test.shape, 'labels: ', y_test.shape)
y_train = y_trainVal
# using all channels
trials, channels, eeg = x_trainVal.shape
x_train = x_trainVal.reshape(trials, channels*eeg)
print('*'*10, 'Classification Scores Comparison with default Parameters' ,'*'*10)
print('#'*15, 'Using All Channels', '#'*15)
print('KNN : ', np.mean(cross_val_score(make_pipeline(StandardScaler(),KNeighborsClassifier()), x_train, y_train, cv=cv, scoring=scorer)))
print('Log-Regression: ', np.mean(cross_val_score(make_pipeline(StandardScaler(),LogisticRegression(max_iter=1000)), x_train, y_train, cv=cv, scoring=scorer)))
print('Linear SVM : ', np.mean(cross_val_score(make_pipeline(StandardScaler(),LinearSVC(random_state=0)), x_train, y_train, cv=cv, scoring=scorer)))
print('kernal SVM : ', np.mean(cross_val_score(make_pipeline(StandardScaler(), SVC(gamma='scale')), x_train, y_train, cv=cv, scoring=scorer)))
print('LDA : ', np.mean(cross_val_score(make_pipeline(StandardScaler(), lda()), x_train, y_train, cv=cv, scoring=scorer)))
```
## Grid Search
using a [0.5, 4.5] s time interval and the [8, 30] Hz frequency band
```
cv = StratifiedShuffleSplit(10, random_state=0)
# for linear svm
param_grid_linear_svm = { 'linearsvc__C' : np.logspace(-4, 2, 15)}
# lda, auto shrinkage performs pretty well mostly
shrinkage = list(np.arange(0.1,1.01,0.1))
shrinkage.append('auto')
param_grid_lda = {'lineardiscriminantanalysis__shrinkage': shrinkage}
grids_linear_svm_list = [GridSearchCV(make_pipeline(StandardScaler(), LinearSVC(random_state=0)),
param_grid=param_grid_linear_svm, cv=cv, n_jobs=n_jobs, scoring=scorer)
for _ in range(len(training_files))]
grids_lda_list = [GridSearchCV(make_pipeline(StandardScaler(), lda(solver='eigen')),
param_grid=param_grid_lda, cv=cv, n_jobs=n_jobs, scoring=scorer)
for _ in range(len(training_files))]
def training_function(subject_index=0):
# this time training function trains on whole training set
print('-'*25, 'Training for Subject:', subject_index+1, '-'*25)
epochs = epochs_list_train[subject_index]
psds, freqs = mne.time_frequency.psd_multitaper(epochs, tmin=0.5, tmax=4.5, fmin=8, fmax=30 ,n_jobs=1)
psds = 10 * np.log10(psds)
psds = psds.reshape(psds.shape[0], -1)
labels = epochs.events[:,-1]
grids_linear_svm_list[subject_index].fit(psds, labels)
print('LinearSVM: Maximum Cross Validation Score = ', round(grids_linear_svm_list[subject_index].best_score_,3))
grids_lda_list[subject_index].fit(psds, labels)
print('LDA : Maximum Cross Validation Score = ', round(grids_lda_list[subject_index].best_score_,3))
print()
def evaluation_function(subject_index=0):
# prints the prediction counts for each class
epochs = epochs_list_eval[subject_index]
psds, freqs = mne.time_frequency.psd_multitaper(epochs, tmin=0.5, tmax=4.5, fmin=8, fmax=30 ,n_jobs=1)
psds = 10 * np.log10(psds)
psds = psds.reshape(psds.shape[0], -1)
preds_linear_svm = grids_linear_svm_list[subject_index].predict(psds)
preds_lda = grids_lda_list[subject_index].predict(psds)
print('-'*25, 'Predictions Counts Subject:', subject_index+1, '-'*25)
print('Linear SVM: Class 1 =', sum(preds_linear_svm==1), 'Class 2 =', sum(preds_linear_svm==2))
print('LDA : Class 1 =', sum(preds_lda==1), 'Class 2 =', sum(preds_lda==2))
print()
```
### It's Training Time
```
for subject in range(len(training_files)):
training_function(subject)
for subject in range(len(training_files)):
evaluation_function(subject)
```
### Results
SVM performs better for every subject except the last, so the Excel file uses the LDA entry only for the last subject and the SVM entries for all others.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context = 'notebook', #mostly controls relative sizes of things on plot
#The base context is “notebook”, and the other contexts are “paper”, “talk”, and “poster”
style = 'darkgrid', #dict, None, or one of {darkgrid, whitegrid, dark, white, ticks}
palette = 'deep', # Should be something that color_palette() can process.
font_scale = 1,
color_codes = False,
rc = None)
# from IPython.core.interactiveshell import InteractiveShell
# InteractiveShell.ast_node_interactivity = 'last_expr'
# setting = "all" allows multiple outputs to be displayed for a given input cell. don't use w plotting!
from IPython.display import display
%matplotlib notebook
##%matplotlib inline
pd.__version__ , np.__version__ #, matplotlib.__version__, sns.__version__
from sklearn.model_selection import train_test_split, learning_curve, KFold, StratifiedKFold, \
ShuffleSplit, GridSearchCV, RandomizedSearchCV, cross_val_predict
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23; use the standalone joblib package
cd '/Users/DonBunk/Desktop/Google Drive/data_science/Python_Projects/Home_Credit_Default_Risk/'
from Home_Credit_package.master_pipeline import master_pipeline
from Home_Credit_package.Dons_functions import balanced_sample
model_save_path = 'saved_models/level_2_models/'
```
# load this
## load df.
```
o_path = 'wrangling/TRAINING_DATA_create_final_wrangled_csv/'
L1_mf_path = 'level_1_ensembling/'
original_cleaned_df = pd.read_csv(o_path + 'complete_initial_wrangled_data.csv', index_col = 'SK_ID_CURR')
level_1_metafeatures_df = pd.read_csv(L1_mf_path + 'FINAL_level_1_meta_features_df.csv', index_col = 'SK_ID_CURR')
total_df = pd.merge(original_cleaned_df, level_1_metafeatures_df, left_index=True, right_index=True, how = 'outer' )
total_df.info(verbose = True, show_counts = True);  # 'null_counts' was renamed to 'show_counts' in pandas 1.2+
# CHECK: this should be empty if everything is non null
total_df.isnull().any()[total_df.isnull().any()==True]
raw_level_2_new_features_df = pd.DataFrame(total_df.index)
raw_level_2_new_features_df.set_index('SK_ID_CURR', inplace=True)
```
# models with only EXT SOURCES + level 1 final scores.
```
minimal_feats = ['EXT_SOURCE_1','EXT_SOURCE_2','EXT_SOURCE_3',
'pwr_rescale_RanFor_EXTpoly', 'pwr_rescale_RanFor_AllFeats',
'pwr_rescale_LogReg_EXTpoly','pwr_rescale_LogReg_AllFeats',
'pwr_rescale_MLP_AllFeats']
total_df_piped, final_feature_list, total_pipeline, trans_list = master_pipeline(df_in = total_df[minimal_feats],
int_cutoff=20,
poly_deg=4,
feats_with_interaction=[]
)
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
```
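The comment above about the fold iterator being "used up" comes from the fact that `StratifiedKFold.split` returns a one-shot generator: once you iterate over it, a second pass yields nothing, which is why it is re-created before every `cross_val_predict` call. A minimal sketch with a toy dataset (the data here is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Small toy dataset standing in for total_df.
X = np.arange(12).reshape(-1, 1)
y = np.array([0, 1] * 6)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
folds = skf.split(X, y)

# The first pass consumes the generator...
first_pass = list(folds)
# ...so a second pass over the same object yields nothing.
second_pass = list(folds)

print(len(first_pass), len(second_pass))  # 3 0
```

This is also why the folds are deterministic across cells: the same `random_state=0` re-creates the identical splits used for the level-1 metafeatures.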
### random forest
```
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
param_dist_dict = { 'max_depth':6,
'min_samples_leaf':11,
'min_samples_split':2,
'n_estimators':40,
}
forest_reg = RandomForestClassifier(random_state=0,
class_weight = None,
**param_dist_dict,
)
cross_val_preds = cross_val_predict(estimator = forest_reg,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba')
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['RanFor_EXTpoly_Level2'] = val_scores
roc_auc_score(total_df['TARGET'], val_scores)
# fit and save model for predictions
forest_reg.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(forest_reg, model_save_path + 'RanFor_EXTpoly_level_2.joblib')
```
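`cross_val_predict(..., method='predict_proba')` returns an array of shape `(n_samples, n_classes)`, so the positive-class column pulled out with the list comprehension above can also be taken with a NumPy slice. A small sketch with made-up probabilities:

```python
import numpy as np

# Stand-in for the output of cross_val_predict(..., method='predict_proba'):
# one row per sample, columns ordered by class label (here 0 and 1).
cross_val_preds = np.array([[0.9, 0.1],
                            [0.3, 0.7],
                            [0.6, 0.4]])

# The list comprehension used above...
val_scores_listcomp = [x[1] for x in cross_val_preds]
# ...is equivalent to slicing out the second column directly.
val_scores_slice = cross_val_preds[:, 1]

print(np.allclose(val_scores_listcomp, val_scores_slice))  # True
```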
### log reg
```
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
my_LgRg = LogisticRegression(penalty= 'l2',
random_state = 0,
class_weight = None,
C = 88.0)
cross_val_preds = cross_val_predict(estimator = my_LgRg,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba'
)
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['LogReg_EXTpoly_Level2'] = val_scores
roc_auc_score(total_df['TARGET'],val_scores)
# fit and save model for predictions
my_LgRg.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(my_LgRg, model_save_path + 'LogReg_EXTpoly_level_2.joblib')
```
### MLP Classifier.
```
param_dist_dict = { 'alpha' : .15,
'hidden_layer_sizes' : (67, ),
}
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
my_MLP = MLPClassifier(random_state=0,
tol=0.0001,
# verbose=51,
warm_start=False,
momentum=0.9,
**param_dist_dict)
cross_val_preds = cross_val_predict(estimator = my_MLP,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba')
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['MLP_EXTpoly_Level2'] = val_scores
# fit and save model for predictions
my_MLP.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(my_MLP, model_save_path + 'MLP_EXTpoly_level_2.joblib')
```
# models with all features
```
total_df_piped, final_feature_list, total_pipeline, trans_list = master_pipeline(df_in = total_df,
int_cutoff=20,
poly_deg=4,
feats_with_interaction=[]
)
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
```
### random forest
```
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
param_dist_dict = { 'max_depth' : 16,
'min_samples_leaf' : 550,
'min_samples_split' : 2,
'n_estimators': 40,
}
forest_reg = RandomForestClassifier(random_state=0,
class_weight = None,
**param_dist_dict)
cross_val_preds = cross_val_predict(estimator = forest_reg,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba')
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['RanFor_AllFeats_Level2'] = val_scores
roc_auc_score(total_df['TARGET'],val_scores)
# fit and save model for predictions
forest_reg.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(forest_reg, model_save_path + 'RanFor_AllFeats_level_2.joblib')
```
### log reg
```
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
my_LgRg = LogisticRegression(penalty= 'l2',
random_state = 0,
class_weight = None,
C = 179.0)
cross_val_preds = cross_val_predict(estimator = my_LgRg,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba')
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['LogReg_AllFeats_Level2'] = val_scores
roc_auc_score(total_df['TARGET'],val_scores)
# fit and save model for predictions
my_LgRg.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(my_LgRg, model_save_path + 'LogReg_AllFeats_level_2.joblib')
```
### MLP Classifier
```
param_dist_dict = { 'alpha' : .05,
'hidden_layer_sizes' : (55, ),
}
# have to use same folds as from level 1 metafeature creation!
# this has to be loaded each time bc iteration is 'used up'
my_StrKFold = StratifiedKFold(n_splits = 3,
shuffle = True,
random_state = 0)
my_folds = my_StrKFold.split(total_df, total_df['TARGET'])
my_MLP = MLPClassifier(random_state=0,
tol=0.0001,
# verbose=51,
warm_start=False,
momentum=0.9,
**param_dist_dict,)
cross_val_preds = cross_val_predict(estimator = my_MLP,
X = total_df_piped,
y = total_df['TARGET'],
groups = None,
cv = my_folds,
n_jobs = -1,
verbose = 51,
fit_params = None,
pre_dispatch = '2*n_jobs',
method = 'predict_proba')
val_scores = [x[1] for x in cross_val_preds]
raw_level_2_new_features_df['MLP_AllFeats_Level2'] = val_scores
roc_auc_score(total_df['TARGET'],val_scores)
# fit and save model for predictions
my_MLP.fit(X = total_df_piped,
y = total_df['TARGET'])
joblib.dump(my_MLP, model_save_path + 'MLP_AllFeats_level_2.joblib')
```
# plot so far
```
just_for_plotting_df = pd.merge(total_df, raw_level_2_new_features_df, left_index=True, right_index= True, how = 'inner') #on='SK_ID_CURR' )
total_df.shape, raw_level_2_new_features_df.shape, just_for_plotting_df.shape
# get a random sample bc full sample is too much to plot
this_sample = balanced_sample(just_for_plotting_df, 24000, 0)
['RanFor_EXTpoly_Level2', 'LogReg_EXTpoly_Level2', 'MLP_EXTpoly_Level2',
 'RanFor_AllFeats_Level2', 'LogReg_AllFeats_Level2', 'MLP_AllFeats_Level2'];  # missing comma here previously silently concatenated two names
feat = 'RanFor_EXTpoly_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
feat = 'LogReg_EXTpoly_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
feat = 'MLP_EXTpoly_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
feat = 'RanFor_AllFeats_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
feat = 'LogReg_AllFeats_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
feat = 'MLP_AllFeats_Level2'
#this_sample['LogMod_'+ feat] = log_modulus_transformation(this_sample[feat])
this_sample['pwr_'+ feat] = (+this_sample[feat])**(1/5)
this_sample['pwr_'+ feat] = \
(this_sample['pwr_'+ feat] - min(this_sample['pwr_'+ feat])) /( max(this_sample['pwr_'+ feat]) - min(this_sample['pwr_'+ feat]))
my_list = [feat, 'pwr_'+ feat] # 'LogMod_'+feat,
num_plots = len(my_list)
fig, axs = plt.subplots(nrows = 1,
ncols = num_plots,
figsize = (num_plots*5, 4));
for f, a in zip(my_list, range(len(my_list))):
dat_0 = this_sample[ (this_sample['TARGET']== 0) ][f]
dat_1 = this_sample[ (this_sample['TARGET']== 1) ][f]
my_bins =np.histogram( this_sample[f] , bins = 30 )[1]
g0 = sns.distplot(dat_0, color = 'blue', ax = axs[a], kde = False, bins = my_bins)
g1 = sns.distplot(dat_1, color = 'red', ax = axs[a], kde = False, bins = my_bins)
plt.tight_layout()
```
# Final level 2 metafeatures
```
def pwr_and_rescale(df_col, pwr):
temp_col = df_col**pwr
return (temp_col - min(temp_col)) /( max(temp_col) - min(temp_col))
FINAL_level_2_new_features_df = pd.DataFrame()
feat = 'RanFor_EXTpoly_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/5)
feat = 'LogReg_EXTpoly_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/3.5)
feat = 'MLP_EXTpoly_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/4)
feat = 'RanFor_AllFeats_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/7)
feat = 'LogReg_AllFeats_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/4)
feat = 'MLP_AllFeats_Level2'
FINAL_level_2_new_features_df['pwr_rescale_'+ feat] = pwr_and_rescale(+raw_level_2_new_features_df[feat], 1/4)
pwd
save_path = 'level_2_ensembling/'
FINAL_level_2_new_features_df.to_csv(save_path + 'FINAL_level_2_meta_features_df.csv', columns = list(FINAL_level_2_new_features_df.columns))
```
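A quick sanity check of the `pwr_and_rescale` transform on a toy column (values made up): after the power transform and min-max rescale, the output should always span exactly [0, 1].

```python
import numpy as np

def pwr_and_rescale(col, pwr):
    # Same logic as the function above: power transform, then min-max rescale to [0, 1].
    temp = col ** pwr
    return (temp - temp.min()) / (temp.max() - temp.min())

scores = np.array([0.01, 0.04, 0.25, 0.81])  # toy predicted probabilities
out = pwr_and_rescale(scores, 1/5)

print(out.min(), out.max())  # 0.0 1.0
```

The fractional powers chosen per feature (1/5, 1/3.5, 1/4, 1/7) spread out the heavily right-skewed probability scores before rescaling.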
```
'''
@ Author: Kai Song, ks838@cam.ac.uk
@ Notes: What does this small project do?
1. I used a Recurrent Neural Network (LSTM) to do text generation. I wrote the LSTM core part in
   a relatively transparent way according to Reference [1], instead of using more
   abstract/advanced TensorFlow functions.
2. The results in 'output.txt' were generated using the first 10 texts (~ 250,000 words) of
the complete works. You could use one comedy or any text for testing, without torturing
your laptop too much.
@ Refs:
1. For LSTM, please refer to the famous paper "Recurrent Neural Network Regularization" by
W Zaremba et al.
2. Why use sigmoid and tanh as the activation functions in LSTM?
   I found an explanation on https://www.quora.com/
3. https://github.com/aymericdamien/TensorFlow-Examples/tree/master/examples/3_NeuralNetworks
@ Recommended blogs:
1. https://www.youtube.com/watch?v=9zhrxE5PQgY
There, Siraj used only numpy, giving a rather nice lecture on LSTM.
2. On LSTM parameter tuning: https://deeplearning4j.org/lstm.html
'''
import numpy as np
import tensorflow as tf
import sys
import codecs
from os import listdir
from os.path import isfile, join
print(__doc__)
path_shake = './complete_works/'
all_files = [f.replace('.txt','') for f in listdir(path_shake) if isfile(join(path_shake, f))]
n_files = len(all_files)
print("n_files = ",n_files)
raw_text = []
for i in range(10):
#raw_text = open('../sss.txt').read().lower()
file_name = './complete_works/'+ all_files[i]+'.txt'
text_i = codecs.open(file_name, "r",encoding='utf-8', errors='ignore').read().lower()
raw_text +=text_i
#raw_text = open('/Users/stusk/machine_learning/sk-projects/shakespeare-statitics/sss.txt').read().lower()
print('The number of characters in our raw text:', len(raw_text))
#print('head of text:')
#print(raw_text[:50])
#assert(1>2)
chars = sorted(list(set(raw_text)))
char_size = len(chars)
print('number of different characters:', char_size)
print(chars)
char_to_ix = dict((c, i) for i, c in enumerate(chars))
ix_to_char = dict((i, c) for i, c in enumerate(chars))
seq_length = 50
data_in = []
data_out = []
for i in range(0, len(raw_text) - seq_length, 1):
seq_in = raw_text[i:i + seq_length]
#out: just the next char of seq_in
seq_out = raw_text[i + seq_length]
data_in.append(seq_in)
data_out.append(seq_out)
X = np.zeros((len(data_in), seq_length, char_size))
y = np.zeros((len(data_in), char_size))
for i, sect_i in enumerate(data_in):
for j, char_j in enumerate(sect_i):
X[i, j, char_to_ix[char_j]] = 1
y[i, char_to_ix[data_out[i]]] = 1
# Training Parameters
learning_rate = 0.01
batch_size = 212
nsteps = 40000
hidden_nodes = 154
print('training data size:', len(X))
print('No. of epochs: %.2f'%(nsteps/len(X)))
print('No. of batches per epoch:', int(len(X)/batch_size))
'''
tf.graph here is unnecessary since we have only one,
but it's a good practice to follow.
If we start to work with many graphs,
it's easier to understand where ops and vars are placed
'''
graph = tf.Graph()
with graph.as_default():
# the weights and biases
W = {
#Input gate: weights for input, and input from previous output
'ii': tf.Variable(tf.random_normal([char_size, hidden_nodes])),
'io': tf.Variable(tf.random_normal([hidden_nodes, hidden_nodes])),
#Forget gate: weights for input, previous output
'fi': tf.Variable(tf.random_normal([char_size, hidden_nodes])),
'fo': tf.Variable(tf.random_normal([hidden_nodes, hidden_nodes])),
#Output gate: weights for input, previous output
'oi': tf.Variable(tf.random_normal([char_size, hidden_nodes])),
'oo': tf.Variable(tf.random_normal([hidden_nodes, hidden_nodes])),
#Memory cell: weights for input, previous output
'ci': tf.Variable(tf.random_normal([char_size, hidden_nodes])),
'co': tf.Variable(tf.random_normal([hidden_nodes, hidden_nodes])),
# output
'out': tf.Variable(tf.random_normal([hidden_nodes, char_size],mean=-0.1,stddev=0.1))
}
biases = {
'i': tf.Variable(tf.zeros([1, hidden_nodes])),
'f': tf.Variable(tf.zeros([1, hidden_nodes])),
'o': tf.Variable(tf.zeros([1, hidden_nodes])),
'c': tf.Variable(tf.zeros([1, hidden_nodes])),
'out': tf.Variable(tf.zeros([char_size]))
}
# LSTM Cell
# iteration: h^{l−1}_t,h^l_{t-1} ,c^l_{t−1} -> h^l_t,c^l_t
def RNN_LSTM(h_state_0, h_state_1, cell):
# Sigmoid is usually used as the gating function for the 3 gates(in, out, forget) in LSTM.
# Dealing with vanishing gradient problem for lstm is different than that for a feed forward deep net.
# Here, it's resolved by the structure of the lstm network,
# specifically the various gates and a memory cell.
input_gate = tf.sigmoid(tf.matmul(h_state_0, W['ii']) + tf.matmul(h_state_1, W['io']) + biases['i'])
forget_gate = tf.sigmoid(tf.matmul(h_state_0, W['fi']) + tf.matmul(h_state_1, W['fo']) + biases['f'])
output_gate = tf.sigmoid(tf.matmul(h_state_0, W['oi']) + tf.matmul(h_state_1, W['oo']) + biases['o'])
modulation_gate= tf.tanh(tf.matmul(h_state_0, W['ci']) + tf.matmul(h_state_1, W['co']) + biases['c'])
cell = forget_gate * cell + input_gate * modulation_gate
h_state_out = output_gate * tf.tanh(cell)
return h_state_out, cell
h_state_0 = tf.zeros([batch_size, seq_length, char_size])
labels = tf.placeholder("float", [batch_size, char_size])
def logits_and_loss():
h_state_1 = tf.zeros([batch_size, hidden_nodes])
cell = tf.zeros([batch_size, hidden_nodes])
for i in range(seq_length):
h_state_1, cell = RNN_LSTM(h_state_0[:, i, :], h_state_1, cell)
# We concatenate them together to calculate the logits and loss
if i == 0:
h_state_1_i = h_state_1
h_state_0_i = h_state_0[:, i+1, :]
elif (i != seq_length - 1):
h_state_1_i = tf.concat([h_state_1_i, h_state_1],0)
h_state_0_i = tf.concat([h_state_0_i, h_state_0[:, i+1, :]],0)
else:
h_state_1_i = tf.concat([h_state_1_i, h_state_1],0)
h_state_0_i = tf.concat([h_state_0_i, labels],0)
logits = tf.matmul(h_state_1_i, W['out']) + biases['out']
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits,
labels=h_state_0_i))
return logits, loss
#Optimizer
logits,loss = logits_and_loss()
optimizer0 = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
optimizer = optimizer0.minimize(loss)
# for the on-the-fly Test
test_h_state_0 = tf.Variable(tf.zeros([1, char_size]))
test_h_state_1 = tf.Variable(tf.zeros([1, hidden_nodes]))
test_cell = tf.Variable(tf.zeros([1, hidden_nodes]))
#re-initialize at the beginning of each test
reset_test_cell = tf.group(test_h_state_1.assign(tf.zeros([1, hidden_nodes])),
test_cell.assign(tf.zeros([1, hidden_nodes])))
#RNN LSTM
test_h_state_1, test_cell = RNN_LSTM(test_h_state_0, test_h_state_1, test_cell)
test_prediction = tf.nn.softmax(tf.matmul(test_h_state_1, W['out']) + biases['out'])
#Create a checkpoint directory
#True if the path exists, whether its a file or a directory.
checkpoint_file = 'checkpoint_file'
if tf.gfile.Exists(checkpoint_file):
tf.gfile.DeleteRecursively(checkpoint_file)
tf.gfile.MakeDirs(checkpoint_file)
# the seed for the on-the-fly testing
test_seed = 'The first principle is that you must not fool yourself.'.lower()
fout1 = open('output.dat','w')
with tf.Session(graph=graph) as sess:
tf.global_variables_initializer().run()
shift = 0
saver = tf.train.Saver()
print('')
print('test_seed: ',test_seed)
for step in range(nsteps):
shift = shift % len(X)
if shift <= (len(X) - batch_size):
batch_h_state_0 = X[shift: shift + batch_size]
batch_labels = y[shift: shift + batch_size]
shift += batch_size
else:#the final batch in an epoch
complement = batch_size - (len(X) - shift)
batch_h_state_0 = np.concatenate((X[shift: len(X)], X[0: complement]))
batch_labels = np.concatenate((y[shift: len(X)], y[0: complement]))
shift = np.random.choice(batch_size)# start the next epoch with a random start char
_, training_loss = sess.run([optimizer, loss], feed_dict={h_state_0: batch_h_state_0, labels: batch_labels})
if step % 200 == 0:
print('\n'+'-' * 15 +'training loss at step %d: %.2f' % (step, training_loss)+'-' * 15)
fout1.write('\n'+'-' * 15 +'training loss at step %d: %.2f' % (step, training_loss)+'-' * 15+'\n')
reset_test_cell.run()
test_generated = ''
for i in range(len(test_seed) - 1):
test_X = np.zeros((1, char_size))
# each char in our test_seed is a vector(one-hot)
test_X[0, char_to_ix[test_seed[i]]] = 1.0
sess.run(test_prediction, feed_dict={test_h_state_0: test_X})
test_X = np.zeros((1, char_size))
# use the last char of the seed as a start of our on-the-fly prediction
test_X[0, char_to_ix[test_seed[-1]]] = 1.0
stdout1 = []
for i in range(200):
prob_distribution = test_prediction.eval({test_h_state_0: test_X})[0]
next_char_one_hot = np.zeros((char_size))
#pick one with the higher probability
ix = np.random.choice(range(char_size), p=prob_distribution.ravel())
next_char_one_hot[ix] = 1.0
next_char = ix_to_char[ix]
# if you want to output the results to a file,use
# python the_present.py > filename
sys.stdout.write(next_char)
fout1.write(next_char)
test_X = next_char_one_hot.reshape((1,char_size))
saver.save(sess, checkpoint_file + '/model', global_step=step)
fout1.close()
print('\nThe weights of our RNN-LSTM have been saved in ',checkpoint_file)
print('\nDone Successfully!')
'''
Results for demonstration:
---------------training loss at step 0: 4.08---------------c7gd25xk$'
l
?958v3u
ptekr8
lxe
d9x2y?
b5i'
50'0;'r36t0'd],z8w197z1$-52'rf::?yf:z0xw1gg9?f6-'n]jc5'[k1:9w2m79
xu;01]'9
]|!,.2cp'arryj8cie!ddzt'[5'jgvdrd,x7023tjjx:h'$-6'3i'w
:5l!:';,k[2d297k;r!i.e!0
---------------training loss at step 200: 3.20---------------'u h
lnet$moel'lxoothett$tj]xt!
u etts&$y?9wd eeio mreebrkadth
e ;s?7:wlma2 w 8t':?ok3
i0t]rhu ktykme,z'x'g5xk$i
xxstrly et xn1n sesxecek
;5sctnehz47ollnj
-h reee7
ithde
h;3tl.fn,
[&!uv!20bkh&heqfke
......
---------------training loss at step 39600: 2.87---------------als, torr te. te s hin, hdtae t y
oe; ten wo es ae! oe av ku td cemtgu fsey tc yuithuimthruooroits lrlwor wrisr tet
srenf maetlstht tehcnv br.irr,
r qeonkrmeen
eohe
wvo'b
nlapnrme;a
msnrns i mn neww b
---------------training loss at step 39800: 2.68---------------ingipnvmesi
rten inrni nn sn,ria!
cireer tioholibllai
sdsd wedn t lel,a,llon s wr,e tus teu nroiie cdko
yrr. lnt s?e t ah c fmna isablurx bo a7e f hb fnddlnntv,ed wwo d9
cdo !e d th tan fkr aohnt
e,
'''
```
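The gate equations implemented in the `RNN_LSTM` function above can be written in a few lines of plain NumPy. This sketch (toy sizes and random weights, not the values used above) mirrors the same input/forget/output/modulation structure for a single time step:

```python
import numpy as np

rng = np.random.default_rng(0)
char_size, hidden = 5, 4  # toy sizes for illustration

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One (input-weight, recurrent-weight, bias) triple per gate: i, f, o, c.
W = {g: (rng.normal(size=(char_size, hidden)),
         rng.normal(size=(hidden, hidden)),
         np.zeros(hidden)) for g in 'ifoc'}

def lstm_step(x, h, c):
    i = sigmoid(x @ W['i'][0] + h @ W['i'][1] + W['i'][2])  # input gate
    f = sigmoid(x @ W['f'][0] + h @ W['f'][1] + W['f'][2])  # forget gate
    o = sigmoid(x @ W['o'][0] + h @ W['o'][1] + W['o'][2])  # output gate
    g = np.tanh(x @ W['c'][0] + h @ W['c'][1] + W['c'][2])  # modulation
    c_new = f * c + i * g           # memory cell update
    h_new = o * np.tanh(c_new)      # new hidden state
    return h_new, c_new

x = np.zeros((1, char_size)); x[0, 2] = 1.0   # a one-hot input character
h, c = np.zeros((1, hidden)), np.zeros((1, hidden))
h, c = lstm_step(x, h, c)
print(h.shape)  # (1, 4)
```

The sigmoid gates bound their outputs to (0, 1), which is what lets the forget gate smoothly interpolate how much of the old cell state survives each step.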
Lambda School Data Science
*Unit 4, Sprint 2, Module 4*
---
# Neural Network Frameworks (Prepare)
## Learning Objectives
* <a href="#p1">Part 1</a>: Implement Regularization Strategies
* <a href="#p2">Part 2</a>: Deploy a Keras Model
* <a href="#p3">Part 3</a>: Write a Custom Callback Function (Optional)
Today's class will also focus heavily on Callback objects. We will use a variety of callbacks to monitor and manipulate our models based on data that our model produces at the end of an epoch.
> A callback is an object that can perform actions at various stages of training (e.g. at the start or end of an epoch, before or after a single batch, etc). -- [Keras Documentation](https://keras.io/api/callbacks/)
# Regularization Strategies (Learn)
## Overview
Neural Networks are highly parameterized models and can be easily overfit to the training data. The most salient way to combat this problem is with regularization strategies.

There are four common ways of regularization in neural networks which we cover briefly. Here's a quick summary of how to apply them:
1. Always use EarlyStopping. This strategy will prevent your weights from being updated well past the point of their peak usefulness.
2. Use EarlyStopping, L1/L2 regularization and Dropout
3. Use EarlyStopping, Weight Constraint and Dropout
Weight Decay and Weight Constraint accomplish similar purposes: both prevent overfitting by regularizing the parameter values. The mechanics are just slightly different, which is why you would not necessarily want to apply them together.
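Strategy 3 above (EarlyStopping + weight constraint + dropout), for instance, could be wired together in Keras along these lines. This is a sketch with arbitrary layer sizes, not code from this lesson:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu', kernel_constraint=MaxNorm(2)),  # weight constraint
    Dropout(0.2),                                                 # dropout
    Dense(128, activation='relu', kernel_constraint=MaxNorm(2)),
    Dropout(0.2),
    Dense(10, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam',
              metrics=['accuracy'])

stop = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=3)
# model.fit(..., callbacks=[stop])  # early stopping is passed at fit time
```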
## Follow Along
### Early Stopping
```
%load_ext tensorboard
from tensorflow.keras.datasets import fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
import matplotlib.pyplot as plt
plt.title(y_train[2])
plt.imshow(X_train[2]);
X_train, X_test = X_train / 255., X_test / 255.
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.layers import ReLU
import tensorflow as tf
import os
logdir = os.path.join("logs", "EarlyStopping-Loss")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
stop = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=3)
model = tf.keras.Sequential([
Flatten(input_shape=(28,28)),
Dense(128),
ReLU(negative_slope=.01),
Dense(128),
ReLU(negative_slope=.01),
Dense(128),
ReLU(negative_slope=.01),
Dense(10, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=99,
validation_data=(X_test,y_test),
callbacks=[tensorboard_callback, stop])
%tensorboard --logdir logs
```
### L1/L2 regularization
```python
Dense(64, input_dim=64, kernel_regularizer=regularizers.l2(0.01))
Dense(64, input_dim=64, kernel_regularizer=regularizers.l1(0.01))
```
Note:
The terms "L2 regularization" and "weight decay" are often used interchangeably, but they only mean the same thing for vanilla SGD optimization.
They mean different things for all other optimizers based on SGD (Adam, AdamW, RMSProp, etc.).
See:
- https://www.fast.ai/2018/07/02/adam-weight-decay/
- https://arxiv.org/pdf/1711.05101.pdf
- https://bbabenko.github.io/weight-decay/
```
from tensorflow.keras import regularizers
```
### Weight Constraint
```python
tf.keras.constraints.MaxNorm(
max_value=2, axis=0
)
```
```
from tensorflow.keras.constraints import MaxNorm
```
### Dropout
```
from tensorflow.keras.layers import Dropout
%tensorboard --logdir logs
```
## Challenge
You will apply regularization strategies inside your neural network today, as you try to avoid overfitting it.
---
# Deploy (Learn)
## Overview
You've built a dope image classification model, but it's just sitting in your Jupyter Notebook. What now? Well, you deploy it to some downstream application. TensorFlow supports three ways of deploying its models:
- In-Browser with TensorFlow.js
- API with TensorFlow Serving (TFX) or another Framework
- On-Device with TensorFlow Lite
You are already familiar with deploying a model as an API from Unit 3, so we will focus on deploying a model in the browser. Both methods rely on the same core idea: save your weights and architecture information, load those parameters into an application, and perform inference.
## Follow Along
### Train Your Model
### Save / Export Your Model
### Move Weights to Web Application
Not all models are small enough to work well in-browser. Many neural networks are deployed as micro-service APIs. Micro-service APIs are the architecture you studied during Unit 3.
## Challenge
You will be expected to be able to export your model weights and architecture on the assignment.
# Custom Callbacks (Learn)
## Overview
Custom callbacks allow you to access data at any point during training: on batch end, on epoch end, on epoch start, on batch start. Our use case today is a simple one: let's stop training once we reach a benchmark accuracy.
## Follow Along
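The use case above boils down to checking the epoch-end logs and flipping a stop flag. Here is a framework-agnostic sketch of that logic; with Keras you would subclass `tf.keras.callbacks.Callback`, which exposes the same `on_epoch_end(epoch, logs)` hook and lets you set `self.model.stop_training = True`:

```python
class StopAtAccuracy:
    """Stop training once accuracy reaches a benchmark value."""

    def __init__(self, target=0.95):
        self.target = target
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        # 'logs' is the dict of metrics the framework reports each epoch.
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= self.target:
            print(f"Reached {self.target:.0%} accuracy at epoch {epoch}; stopping.")
            self.stop_training = True

# Simulated training loop feeding the callback fake epoch logs.
cb = StopAtAccuracy(target=0.9)
for epoch, acc in enumerate([0.6, 0.8, 0.92, 0.95]):
    cb.on_epoch_end(epoch, {'accuracy': acc})
    if cb.stop_training:
        break

print(epoch)  # 2
```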
## Challenge
Experiment with improving our custom callback function.
# Inexact Move Function
Let's see how we can incorporate **uncertain** motion into our motion update. We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green.
Next, you're tasked with modifying the `move` function so that it incorporates uncertainty in motion.
<img src='images/uncertain_motion.png' width=50% height=50% />
First let's include our usual resource imports and display function.
```
# importing resources
import matplotlib.pyplot as plt
import numpy as np
```
A helper function for visualizing a distribution.
```
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
```
You are given the initial variables and the complete `sense` function, below.
```
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
    ''' Takes in a current probability distribution, p, and a sensor reading, Z.
        Returns a *normalized* distribution after the sensor measurement has been made, q.
        This should be accurate whether Z is 'red' or 'green'. '''
    q=[]
    # loop through all grid cells
    for i in range(len(p)):
        # check if the sensor reading is equal to the color of the grid cell
        # if so, hit = 1; if not, hit = 0
        hit = (Z == world[i])
        q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
    # sum up all the components
    s = sum(q)
    # divide all elements of q by the sum to normalize
    for i in range(len(p)):
        q[i] = q[i] / s
    return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
```
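As a quick sanity check of what `sense` does, here is a compact, self-contained restatement of the same update applied to a uniform prior: with `Z = 'red'`, the two red cells each end up with probability 1/3 and the three green cells with 1/9.

```python
# Compact restatement of sense() so this cell is self-contained.
world = ['green', 'red', 'red', 'green', 'green']
pHit, pMiss = 0.6, 0.2

def sense(p, Z):
    # weight each cell by pHit (match) or pMiss (mismatch), then normalize
    q = [p[i] * (pHit if Z == world[i] else pMiss) for i in range(len(p))]
    s = sum(q)
    return [x / s for x in q]

p_uniform = [0.2] * 5
q = sense(p_uniform, 'red')
print(q)       # red cells -> 1/3 each, green cells -> 1/9 each
print(sum(q))  # sums to 1 (up to floating point)
```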
### QUIZ: Modify the move function to accommodate the added probabilities of overshooting or undershooting the intended destination.
This function should shift a distribution with the motion, U, with some probability of under/overshooting. For the given initial `p`, you should see this result for U = 1 with the incorporated uncertainties: `[0.0, 0.1, 0.8, 0.1, 0.0]`.
```
## TODO: Modify the move function to accommodate the added probabilities of overshooting or undershooting
pExact = 0.8
pOvershoot = 0.1
pUndershoot = 0.1
# Complete the move function
def move(p, U):
    q=[]
    # iterate through all values in p
    for i in range(len(p)):
        ## TODO: Modify this distribution code to incorporate values
        ## for over/undershooting the exact location
        # use the modulo operator to find the new location for a p value
        s = pExact * p[(i-U) % len(p)]
        s = s + pOvershoot * p[(i-U+1) % len(p)]
        s = s + pUndershoot * p[(i-U-1) % len(p)]
        # append the correct, modified value of p to q
        q.append(s)
    return q
## TODO: try this for U = 2 and see the result
p = move(p,2)
print(p)
display_map(p)
```
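To verify against the expected result quoted above (U = 1 on the given `p`), here is a self-contained restatement of the same update:

```python
pExact, pOvershoot, pUndershoot = 0.8, 0.1, 0.1

def move(p, U):
    n = len(p)
    # each cell receives mass from the exact, overshoot and undershoot moves
    return [pExact * p[(i - U) % n]
            + pOvershoot * p[(i - U + 1) % n]
            + pUndershoot * p[(i - U - 1) % n]
            for i in range(n)]

p = [0, 1, 0, 0, 0]
print(move(p, 1))  # expected: [0.0, 0.1, 0.8, 0.1, 0.0]
```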
# Video Super Resolution with OpenVINO
Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames of a 360p (640×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html), which is available from the Open Model Zoo. It is based on the research paper cited below.
Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.
**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video.
## Preparation
### Imports
```
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
```
### Settings
```
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = Path(MODEL_FILE).name
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
```
### Functions
```
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
    """
    Write the specified text in the top left corner of the image
    as white text with a black border.

    :param image: image as numpy array with HWC shape, RGB or BGR
    :param text: text to write
    :return: image with written text, as numpy array
    """
    font = cv2.FONT_HERSHEY_PLAIN
    org = (20, 20)
    font_scale = 4
    font_color = (255, 255, 255)
    line_type = 1
    font_thickness = 2
    text_color_bg = (0, 0, 0)
    x, y = org

    image = cv2.UMat(image)
    (text_w, text_h), _ = cv2.getTextSize(
        text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
    )
    result_im = cv2.rectangle(
        img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
    )
    textim = cv2.putText(
        img=result_im,
        text=text,
        org=(x, y + text_h + font_scale - 1),
        fontFace=font,
        fontScale=font_scale,
        color=font_color,
        thickness=font_thickness,
        lineType=line_type,
    )
    return textim.get()


def load_image(path: str) -> np.ndarray:
    """
    Loads an image from `path` and returns it as BGR numpy array.

    :param path: path to an image filename or url
    :return: image as numpy array, with BGR channel order
    """
    if path.startswith("http"):
        # Set User-Agent to Mozilla because some websites block requests
        # with User-Agent Python
        request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
        response = urllib.request.urlopen(url=request)
        array = np.asarray(bytearray(response.read()), dtype="uint8")
        image = cv2.imdecode(buf=array, flags=-1)  # Loads the image as BGR
    else:
        image = cv2.imread(filename=path)
    return image


def convert_result_to_image(result) -> np.ndarray:
    """
    Convert network result of floating point numbers to image with integer
    values from 0-255. Values outside this range are clipped to 0 and 255.

    :param result: a single superresolution network result in N,C,H,W shape
    """
    result = result.squeeze(0).transpose(1, 2, 0)
    result *= 255
    result[result < 0] = 0
    result[result > 255] = 255
    result = result.astype(np.uint8)
    return result
```
## Load the Superresolution Model
Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
```
ie = IECore()
net = ie.read_network(model=model_xml_path)
exec_net = ie.load_network(network=net, device_name=DEVICE)
```
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image at 1920x1080.
```
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
```
## Superresolution on Video
Download a YouTube\* video with PyTube and enhance the video quality with superresolution.
By default, only the first 100 frames of the video are processed. Change `NUM_FRAMES` in the cell below to modify this.
**Note:**
- The resulting video does not contain audio.
- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model.
### Settings
```
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
Path(OUTPUT_DIR).mkdir(exist_ok=True)
# Maximum number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos. vp09 is slow, but widely available.
# If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed.
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
```
### Download and Prepare Video
```
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
    Path(VIDEO_DIR).mkdir(exist_ok=True)
    stream = yt.streams.filter(resolution="360p").first()
    filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
    stream.download(output_path=VIDEO_DIR, filename=filename)
    print(f"Video {filename} downloaded to {VIDEO_DIR}")

    # Create Path objects for the input video and the resulting videos
    video_path = Path(stream.get_file_path(filename, VIDEO_DIR))
except Exception:
    # If PyTube fails, use a local video stored in the VIDEO_DIR directory
    video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
    raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
if NUM_FRAMES == 0:
    total_frames = frame_count
else:
    total_frames = min(frame_count, NUM_FRAMES)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
```
Create the superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced frames, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution versions side by side.
```
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
```
### Do Inference
Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file.
The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array of floating point values with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes inference time and the time it takes to process and write the video.
```
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
    while cap.isOpened():
        ret, image = cap.read()
        if not ret:
            cap.release()
            break
        if frame_nr >= total_frames:
            break

        # Resize the input image to network shape and convert from (H,W,C) to
        # (N,C,H,W)
        resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
        input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)

        # Resize and reshape the image to the target shape with bicubic
        # interpolation
        bicubic_image = cv2.resize(
            src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
        )
        input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)

        # Do inference
        inference_start_time = time.perf_counter()
        result = exec_net.infer(
            inputs={
                original_image_key: input_image_original,
                bicubic_image_key: input_image_bicubic,
            }
        )[output_key]
        inference_stop_time = time.perf_counter()
        inference_duration = inference_stop_time - inference_start_time
        total_inference_duration += inference_duration

        # Transform inference result into an image
        result_frame = convert_result_to_image(result=result)

        # Write resulting image and bicubic image to video
        superres_video.write(image=result_frame)
        bicubic_video.write(image=bicubic_image)
        stacked_frame = np.hstack((bicubic_image, result_frame))
        comparison_video.write(image=stacked_frame)

        frame_nr = frame_nr + 1

        # Update progress bar and status message
        progress_bar.progress = frame_nr
        progress_bar.update()
        if frame_nr % 10 == 0 or frame_nr == total_frames:
            clear_output(wait=True)
            progress_bar.display()
            display(
                Pretty(
                    f"Processed frame {frame_nr}. Inference time: "
                    f"{inference_duration:.2f} seconds "
                    f"({1/inference_duration:.2f} FPS)"
                )
            )
except KeyboardInterrupt:
    print("Processing interrupted.")
finally:
    superres_video.release()
    bicubic_video.release()
    comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Videos saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
```
### Show Side-by-Side Video of Bicubic and Superresolution Version
```
if not comparison_video_path.exists():
    raise ValueError("The comparison video does not exist.")
else:
    video_link = FileLink(comparison_video_path)
    video_link.html_link_str = "<a href='%s' download>%s</a>"
    display(
        HTML(
            f"Showing side by side comparison. If you cannot see the video in "
            "your browser, please click on the following link to download "
            f"the video<br>{video_link._repr_html_()}"
        )
    )
    display(Video(comparison_video_path, width=800, embed=True))
```
# Notebook contents:
This notebook contains a lecture. The code for generating the plots is found at the end of the notebook. Links below.
- [presentation](#Session-2:)
- [code for plots](#Code-for-plots)
# Session 2:
## Effective ML
*Andreas Bjerre-Nielsen*
## Vaaaamos
```
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings(action='ignore', category=ConvergenceWarning)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
```
## Agenda
1. [model bias and variance](#Model-bias-and-variance)
1. [model building](#Model-building)
1. model selection
- [basic validation](#Model-selection)
- [cross validation](#Cross-validation)
- [tools for selection](#Tools-for-model-selection)
# Review
## Two agendas (1)
What are the objectives of empirical research?
1. *causation*: what is the effect of a particular variable on an outcome?
2. *prediction*: find some function that provides a good prediction of $y$ as a function of $x$
## Two agendas (2)
How might we express the agendas in a model?
$$ y = \alpha + \beta x + \varepsilon $$
- *causation*: interested in $\hat{\beta}$
- *prediction*: interested in $\hat{y}$
## Model fitting (1)
*How does over- and underfitting look like for regression?*
```
f_bias_var['regression'][2]
```
## Model fitting (2)
*What does underfitting and overfitting look like for classification?*
```
f_bias_var['classification'][2]
```
## What tools have seen?
- Supervised learning (having a target variable)
- Classification problems: Perceptron, Adaline, Logistic regression
- Regression problems: Linear regression
- We learned about optimization: gradient descent
- How can we say whether a model generalizes:
- We split data randomly into training and testing data.
## Fitting a polynomial (1)
Polynomial: $f(x) = 2 + x^4$
Try models of increasing order polynomials.
- Split data into train and test (50/50)
- For polynomial order 0 to 9:
- Iteration n: $y = \sum_{k=0}^{n}(\beta_k\cdot x^k)+\varepsilon$. (Taylor expansion)
- Estimate order n model on training data
- Evaluate on test data with $\log RMSE$ ($= \log \sqrt{SSE/n}$)
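The evaluation metric in the last step can be written as a small helper (a sketch; the code later in this notebook reports plain MSE instead):

```python
import numpy as np

def log_rmse(y_true, y_pred):
    # log RMSE = log(sqrt(SSE / n)) = 0.5 * log(MSE)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sse = np.sum((y_true - y_pred) ** 2)
    return np.log(np.sqrt(sse / len(y_true)))

# toy example: only the last prediction is off, by 2
print(log_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```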
## Fitting a polynomial (2)
We generate samples of data from true model (fourth order polynomial).
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
def true_fct(X):
    return 2 + X**4
n_samples = 25
np.random.seed(0)
X_train = np.random.normal(size=(n_samples,1))
y_train = true_fct(X_train).reshape(-1) + np.random.randn(n_samples)
X_test = np.random.normal(size=(n_samples,1))
y_test = true_fct(X_test).reshape(-1) + np.random.randn(n_samples)
```
## Fitting a polynomial (3)
We estimate the polynomials and store MSE for train and test:
```
from sklearn.metrics import mean_squared_error as mse
test_mse = []
train_mse = []
parameters = []
max_degree = 15
degrees = range(max_degree+1)
for p in degrees:
    X_train_p = PolynomialFeatures(degree=p).fit_transform(X_train)
    X_test_p = PolynomialFeatures(degree=p).fit_transform(X_test)
    reg = LinearRegression().fit(X_train_p, y_train)
    train_mse += [mse(reg.predict(X_train_p), y_train)]
    test_mse += [mse(reg.predict(X_test_p), y_test)]
    parameters.append(reg.coef_)
```
## Fitting a polynomial (4)
*So what happens to the model performance in- and out-of-sample?*
```
degree_index = pd.Index(degrees,name='Polynomial degree ~ model complexity')
ax = pd.DataFrame({'Train set':train_mse, 'Test set':test_mse})\
.set_index(degree_index).plot(figsize=(14,5), logy=True)
ax.set_ylabel('Mean squared error')
```
## Fitting a polynomial (5)
*Quiz: Why does it go wrong on the test data?*
- more spurious parameters
- (we include variables beyond those in the true model, i.e. beyond $x^4$ and the bias term)
- the coefficient size increases (next slide)
## Fitting a polynomial (6)
*What do you mean coefficient size increase?*
```
order_idx = pd.Index(range(len(degrees)),name='Polynomial order')
ax = pd.DataFrame(parameters,index=order_idx)\
.abs().mean(1).plot(figsize=(14,5),logy=True)
ax.set_ylabel('Mean parameter size')
```
## Fitting a polynomial (7)
*How else could we visualize this problem?*
```
f_bias_var['regression'][2]
```
# The curse of overfitting and regularization
## Looking for a remedy
*How might we solve the overfitting problem?*
- too many variables (spurious relations)
- excessive magnitude of the coefficient size of variables
Could we incorporate these two issues in our optimization problem?
## Regularization (1)
*Why do we regularize?*
- To mitigate overfitting > better model predictions
*How do we regularize?*
- We make models which are less complex:
- reducing the **number** of coefficients;
- reducing the **size** of the coefficients.
## Regularization (2)
*What does regularization look like?*
We add a penalty term to our optimization procedure:
$$ \text{arg min}_\beta \, \underset{\text{MSE=SSE/n}}{\underbrace{E[(y_0 - \hat{f}(x_0))^2]}} + \underset{\text{penalty}}{\underbrace{\lambda \cdot R(\beta)}}$$
Introducing penalties means that increased model complexity has to be met with large gains in the precision of estimates.
## Regularization (3)
*What are some used penalty functions?*
The two most common penalty functions are L1 and L2 regularization.
- L1 regularization (***Lasso***): $R(\beta)=\sum_{j=1}^{p}|\beta_j|$
- Makes coefficients sparse, i.e. selects variables by removing some (if $\lambda$ is high)
- L2 regularization (***Ridge***): $R(\beta)=\sum_{j=1}^{p}\beta_j^2$
- Reduce coefficient size
- Fast due to analytical solution
*To note:* The *Elastic Net* uses a combination of L1 and L2 regularization.
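The analytical solution mentioned for Ridge can be sketched directly in NumPy: $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$. A minimal sketch on synthetic data (no intercept handling):

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    # beta = (X'X + lam * I)^{-1} X'y, the L2-penalized least squares solution
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

print(ridge_closed_form(X, y, lam=0.0))    # ~ OLS solution, close to [1, -2, 0.5]
print(ridge_closed_form(X, y, lam=100.0))  # heavily shrunk toward zero
```

Note how increasing $\lambda$ shrinks every coefficient toward zero without setting any of them exactly to zero, unlike the Lasso.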
## Regularization (4)
*How the Lasso (L1 reg.) deviates from OLS*
<center><img src='http://rasbt.github.io/mlxtend/user_guide/general_concepts/regularization-linear_files/l1.png' alt="Drawing" style="width: 800px;"/></center>
## Regularization (5)
*How the Ridge regression (L2 reg.) deviates from OLS*
<center><img src='http://rasbt.github.io/mlxtend/user_guide/general_concepts/regularization-linear_files/l2.png' alt="Drawing" style="width: 550px;"/></center>
## Regularization (6)
*How might we describe the $\lambda$ of Lasso and Ridge?*
These are hyperparameters that we can optimize over.
## Regularization (7)
*Is there a generalization of Lasso and Ridge?*
Yes, the elastic net allows both types of regularization. Therefore, it has two hyperparameters.
# Implementation details
## Underfitting remedies
*Is it possible to solve the underfitting problem?*
Yes, there are in general two ways.
- Using polynomial interactions of all features.
- This is known as Taylor expansion
- Note: we need to use regularization to curb the impact of overfitting!
- Using non-linear models who can capture all patterns.
- These are called universal approximators
- We return to an overview of these in Session 14.
## Underfitting remedies (2)
*Some of the models we see here, e.g. Perceptrons, seem too simple - are they ever useful?*
- No, not for serious machine learning.
- But for exposition (your learning), yes.
- However, the perceptron and related models are building blocks for building neural networks.
## The devils in the details (1)
*So we just run regularization?*
We need to rescale our features:
- convert to zero mean:
- standardize to unit std:
Compute in Python:
- option 1: `StandardScaler` in `sklearn` (RECOMMENDED)
- option 2: `(X - np.mean(X)) / np.std(X)`
## The devils in the details (2)
*So we just scale our test and train?*
# NO
Fit to the distribution in the **training data first**, then rescale train and test! See more [here](https://stats.stackexchange.com/questions/174823/how-to-apply-standardization-normalization-to-train-and-testset-if-prediction-i).
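A minimal NumPy sketch of the right order of operations (this is what `StandardScaler` does when you `fit` on train and `transform` both):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(100, 2))
X_test = rng.normal(loc=5.0, scale=2.0, size=(20, 2))

# Fit (compute mean/std) on the TRAINING data only...
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# ...then apply the same transform to both train and test.
X_train_std = (X_train - mu) / sigma
X_test_std = (X_test - mu) / sigma

print(X_train_std.mean(axis=0).round(6))  # ~ [0, 0] by construction
print(X_test_std.mean(axis=0).round(2))   # close to, but not exactly, zero
```

Computing `mu` and `sigma` on the full data instead would leak information about the test set into training.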
## The devils in the details (3)
*So we just rescale before using polynomial features?*
# NO
Otherwise the interacted variables are not standardized.
## The devils in the details (4)
*Does sklearn's `PolynomialFeatures` work for more than one variable?* Yes, it also generates interaction terms across all input features.
# Model bias and variance
## Bias and variance (1)
*How do we describe the modelling error?*
From [Wikipedia](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) 2019:
- model **bias**: _an error from erroneous assumptions in the learning algorithm_
- high bias can cause an algorithm to miss the relevant relations between features and target outputs (**underfitting**)
- model **variance**: _an error from sensitivity to small fluctuations in the training set_
- high variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (**overfitting**).
## Bias and variance (2)
*So what is overfitting?*
Overfitting is: low bias / high variance
- in training, our model captures all patterns, but we also find some irrelevant ones
- reacts too much to training sample errors
- some errors are just noise, and thus we find too many spurious relations
- examples of causes:
- too much polynomial expansion of variables (`PolynomialFeatures`)
- non-linear/logistic without properly tuned hyperparameters:
- Decision Trees, Support Vector Machines or Neural Networks
## Bias and variance (3)
*So what is underfitting?*
Underfitting is: high bias / low variance
- oversimplification of models, cannot approximate all patterns found
- examples of causes:
- linear and logistic regression (without polynomial expansion)
## Bias and variance (4)
*Not so fast.. OLS is unbiased, right?*
Yes, OLS is unbiased. But...?
- But .. only by assumption..
- Requires we know the true form of the model.
- However, we never do.
*What happens if we introduce regularization?*
- Then model is no longer unbiased.
- (if we assume the model is true)
# Model building
## Model pipelines (1)
*Is there a smart way to build ML models?*
Yes, we build a pipeline (input (tidy) -> target)
- Preprocessing data
- Standard: adding polynomials, imputation, rescaling
- Unsupervised learning
- Supervised learning
## Model pipelines (2)
*How does the pipeline look? Is there data leakage?*
<center><img src='https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch06/images/06_01.png' alt="Drawing" style="width: 700px;"/></center>
## Model pipelines (3)
*What are the advantages of using a pipeline?*
- Ensures good practice - we only fit on training data.
- No leakage of data from train to test!
- Much less code!
## Applying a model pipeline (1)
*What would this look like in Python?*
```
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
pipe_preproc = make_pipeline(PolynomialFeatures(),
StandardScaler())
print(pipe_preproc.steps[0])
print(pipe_preproc.steps[1])
```
## Applying a model pipeline (2)
*Does this remind you of something?*
# YES!
### Method chaining from Pandas
## Applying a model pipeline (3)
*Let's load some Boston house price data*
```
from sklearn.datasets import load_boston
boston = load_boston()
# print(boston['DESCR'])
# print('\n'.join(load_boston()['DESCR'].split('\n')[12:26]))
X = boston.data # features
y = boston.target # target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
## Applying a model pipeline (4)
*And how do I apply the pipe on the data?*
```
pipe_preproc = make_pipeline(PolynomialFeatures(),
                             StandardScaler())
X_train_prep = pipe_preproc.fit_transform(X_train)  # fit on train, then transform
X_test_prep = pipe_preproc.transform(X_test)        # transform test with train's fit
```
## Applying a model pipeline (5)
*What would it look like if we did not use the pipe?*
The more steps we have, the more code we save.
```
poly_trans = PolynomialFeatures()
scaler = StandardScaler()
# without a pipeline, we call each transformer separately on both train and test
X_train_poly = poly_trans.fit_transform(X_train)
X_test_poly = poly_trans.transform(X_test)
X_train_prep_alt = scaler.fit_transform(X_train_poly)
X_test_prep_alt = scaler.transform(X_test_poly)
```
# Model selection
## Measuring the problem
*Does machine learning work out of the box?*
- In some cases ML works quite well out of the box.
- Often ML requires making careful choices.
- Note that automated machine learning packages and services exist.
- E.g. AutoML - this a hot research topic
*Which choices are to be made?*
- We need to pick model building hyperparameters.
- E.g. elastic net hyperparameters: $\lambda$ for L1 and L2 regularization
- i.e. $\lambda$ for Lasso, Ridge and Elastic Net
## Model validation (1)
*How do we measure our model's performance for different hyperparameters?*
- Remember we cannot use the test set.
*Could we somehow mimic what we do with test data?*
- Yes, we can split the remaining non-test data into training and validation data:
- we train model for various hyperparameters on training data;
- pick the hyperparameters which performs best on validation data.
## Model validation (2)
*The non-test data is split into training and validation*
<center><img src='https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch06/images/06_02.png' alt="Drawing" style="width: 500px;"/></center>
## Model validation (3)
*What would this look like in Python?*
```
# splitting into development (2/3) and test data (1/3)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=1/3, random_state=1)
# splitting development into train (1/3) and validation (1/3)
X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=1/2, random_state=1)
```
## Model validation (4)
Let's train a linear regression model
```
from sklearn.linear_model import Lasso, LinearRegression
pipe_lr = make_pipeline(PolynomialFeatures(include_bias=True),
StandardScaler(),
LinearRegression())
pipe_lr.fit(X_dev, y_dev)
```
## Model validation (5)
Let's find the Lasso model which performs best in the validation set
```
from sklearn.metrics import mean_squared_error as mse
perform = []
lambdas = np.logspace(-4, 4, 33)
for lambda_ in lambdas:
    pipe_lasso = make_pipeline(PolynomialFeatures(include_bias=True),
                               StandardScaler(),
                               Lasso(alpha=lambda_, random_state=1))
    pipe_lasso.fit(X_train, y_train)
    y_pred = pipe_lasso.predict(X_val)
    perform.append(mse(y_pred, y_val))
hyperparam_perform = pd.Series(perform,index=lambdas)
optimal = hyperparam_perform.nsmallest(1)
print('Optimal lambda:', optimal.index[0])
print('Validation MSE: %.3f' % optimal.values[0])
```
## Model validation (6)
Let's compare the performance of the Lasso vs. Linear Regression
```
# insert optimal lambda into new model
pipe_lasso = make_pipeline(PolynomialFeatures(include_bias=False),
StandardScaler(),
Lasso(alpha=optimal.index[0]))
# fit new model on all of the development (non-test) data
pipe_lasso.fit(X_dev, y_dev)
# compare model performance on test data
print('Lasso', round(mse(pipe_lasso.predict(X_test),y_test), 1))
print('LinReg', round(mse(pipe_lr.predict(X_test),y_test), 1))
```
## Smarter validation
*Is this approach the smartest way for deciding on choice of hyperparameters?*
# NO
Our model choice depends a lot on which sample we pick. Could we use more of the data?
# Cross validation
## The holdout method
*How do we get more out of the data?*
We reuse the data in the development set repeatedly
- We test on all the data
- Rotate which parts of the data are used for test and train.
## Leave-one-out CV
*How do we get the most out of the data?*
The most robust approach
- For each single observation in the training data, we use the remaining data to train.
- Makes number of models equal to the number of observations
- Very computing intensive - does not scale!
LOOCV
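Generating the LOOCV splits is simple to sketch by hand; each observation serves as the validation "fold" exactly once:

```python
def loo_splits(n):
    # yields (train_indices, val_index) pairs: one model per observation
    for i in range(n):
        train_idx = [j for j in range(n) if j != i]
        yield train_idx, i

splits = list(loo_splits(4))
print(len(splits))  # n models for n observations
print(splits[0])    # ([1, 2, 3], 0)
```

For n observations this means fitting n models, which is exactly why LOOCV does not scale.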
## K fold method (1)
*How do we balance computing time vs. overfitting?*
We split the sample into $K$ even sized test bins.
- For each test bin $k$ we use the remaining data for training.
Advantages:
- We use all our data for testing.
- Training is done with 100-(100/K) pct. of the data, i.e. 90 pct. for K=10.
## K fold method (2)
In K-fold cross validation we average the errors.
<center><img src='https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch06/images/06_03.png' alt="Drawing" style="width: 900px;"/></center>
## K fold method (3)
*How to do K-fold cross validation to select our model?*
We compute MSE for every lambda and every fold (nested for loop)
## K fold method (4)
Code for implementation
```
from sklearn.model_selection import KFold
kfolds = KFold(n_splits=10)
folds = list(kfolds.split(X_dev, y_dev))
# outer loop: lambdas
mseCV = []
for lambda_ in lambdas:
    # inner loop: folds
    mseCV_ = []
    for train_idx, val_idx in folds:
        # train model and compute MSE on test fold
        pipe_lassoCV = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                                     StandardScaler(),
                                     Lasso(alpha=lambda_, random_state=1))
        X_train, y_train = X_dev[train_idx], y_dev[train_idx]
        X_val, y_val = X_dev[val_idx], y_dev[val_idx]
        pipe_lassoCV.fit(X_train, y_train)
        mseCV_.append(mse(pipe_lassoCV.predict(X_val), y_val))
    # store result
    mseCV.append(mseCV_)
# convert to DataFrame
lambdaCV = pd.DataFrame(mseCV, index=lambdas)
```
## K fold method (5)
Training the model with optimal hyperparameters and compare MSE
```
# choose optimal hyperparameters
optimal_lambda = lambdaCV.mean(axis=1).nsmallest(1)
# retrain/re-estimate model using optimal hyperparameters
pipe_lassoCV = make_pipeline(PolynomialFeatures(include_bias=False),
StandardScaler(),
Lasso(alpha=optimal_lambda.index[0], random_state=1))
pipe_lassoCV.fit(X_dev,y_dev)
# compare performance
models = {'Lasso': pipe_lasso, 'Lasso CV': pipe_lassoCV, 'LinReg': pipe_lr}
for name, model in models.items():
    score = mse(model.predict(X_test), y_test)
    print(name, round(score, 1))
```
## K fold method (6)
*What else could we use cross-validation for?*
- Getting more evaluations of our model performance.
- We can cross validate at two levels:
- Outer: we make multiple splits of test and train/dev.
- Inner: within each train/dev. dataset we make cross validation to choose hyperparameters
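The two levels can be sketched as nested index splits. This is a pure-Python illustration; in practice one would wrap a `GridSearchCV` inside `cross_val_score`:

```python
def kfold_indices(n, k):
    # contiguous folds for illustration; sklearn's KFold can also shuffle
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

n = 12
for outer_test in kfold_indices(n, 3):            # outer: model evaluation
    dev = [i for i in range(n) if i not in outer_test]
    for inner_val in kfold_indices(len(dev), 2):  # inner: hyperparameter choice
        inner_train = [dev[i] for i in range(len(dev)) if i not in inner_val]
        # ...fit candidate hyperparameters on inner_train, score on the inner fold
    # ...refit the best hyperparameters on dev, score on outer_test

print(kfold_indices(6, 3))  # [[0, 1], [2, 3], [4, 5]]
```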
# Tools for model selection
## Learning curves (1)
*What does a model that balances over- and underfitting look like?*
<center><img src='https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch06/images/06_04.png' alt="Drawing" style="width: 700px;"/></center>
## Learning curves (2)
*Is it easy to make learning curves in Python?*
```
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = \
learning_curve(estimator=pipe_lassoCV,
X=X_dev,
y=y_dev,
train_sizes=np.arange(0.2, 1.05, .05),
scoring='neg_mean_squared_error',
cv=3)
mse_ = pd.DataFrame({'Train':-train_scores.mean(axis=1),
'Test':-test_scores.mean(axis=1)})\
.set_index(pd.Index(train_sizes,name='sample size'))
print(mse_.head(5))
```
## Learning curves (3)
```
f_learn, ax = plt.subplots(figsize=(10,4))
ax.plot(train_sizes,-test_scores.mean(1), alpha=0.25, linewidth=2, label ='Test', color='blue')
ax.plot(train_sizes,-train_scores.mean(1),alpha=0.25, linewidth=2, label='Train', color='orange')
ax.set_title('Mean performance')
ax.set_ylabel('Mean squared error')
ax.set_yscale('log')
ax.legend()
```
## Learning curves (4)
```
f_learn, ax = plt.subplots(figsize=(10,4))
plot_info = [(train_scores, 'Train','orange'), (test_scores, 'Test','blue')]
for scores, label, color in plot_info:
ax.fill_between(train_sizes, -scores.min(1), -scores.max(1),
alpha=0.25, label =label, color=color)
ax.set_title('Range of performance (min, max)')
ax.set_ylabel('Mean squared error')
ax.set_yscale('log')
ax.legend()
```
## Validation curves (1)
*Can we plot the optimal hyperparameters?*
```
from sklearn.model_selection import validation_curve
train_scores, test_scores = \
validation_curve(estimator=pipe_lasso,
X=X_dev,
y=y_dev,
param_name='lasso__alpha',
param_range=lambdas,
scoring='neg_mean_squared_error',
cv=3)
mse_score = pd.DataFrame({'Train':-train_scores.mean(axis=1),
'Validation':-test_scores.mean(axis=1),
'lambda':lambdas})\
.set_index('lambda')
print(mse_score.Validation.nsmallest(1))
```
## Validation curves (2)
```
f,ax = plt.subplots(figsize=(10,6))
mse_score.plot(logx=True, logy=True, ax=ax)
ax.axvline(mse_score.Validation.idxmin(), color='black',linestyle='--')
```
## Grid search (1)
*How do we search for two or more optimal parameters? (e.g. elastic net)*
- Goal: find the optimal parameter combination: $$\lambda_1^*,\lambda_2^*=\arg\min_{\lambda_1,\lambda_2}MSE^{CV}(X_{train},y_{train})$$
- Option 1: We can loop over the joint grid of parameters.
- One level for each parameter.
- Caveats: a lot of code / SLOW
- Option 2: sklearn's `GridSearchCV` is a tool that tests all parameter combinations.
## Grid search (2)
*How does this look in Python?*
```
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import ElasticNet
pipe_el = make_pipeline(PolynomialFeatures(include_bias=False),
StandardScaler(),
ElasticNet())
gs = GridSearchCV(estimator=pipe_el,
param_grid={'elasticnet__alpha':np.logspace(-4,4,10)*2,
'elasticnet__l1_ratio':np.linspace(0,1,10)},
scoring='neg_mean_squared_error',
n_jobs=4,
cv=10)
models['ElasticNetCV'] = gs.fit(X_dev, y_dev)
```
- Notation: double underscore between estimator and hyperparameter, e.g. 'est__hyperparam'
- Scoring: negative MSE as we're maximizing the score ~ minimize MSE.
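A quick way to see which `'est__hyperparam'` names a pipeline accepts is `get_params()` (a minimal sketch; the step name is just the lowercased class name):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNet

pipe = make_pipeline(StandardScaler(), ElasticNet())
# Hyperparameters are addressed as '<step>__<param>' in param_grid
params = sorted(p for p in pipe.get_params() if p.startswith('elasticnet__'))
print(params[:3])
```

Anything printed here is a valid key for `param_grid`.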
## Grid search (3)
*What does the grid search yield?*
```
for name, model in models.items():
score = mse(model.predict(X_test),y_test)
print(name, round(score, 2))
print()
print('CV params:', gs.best_params_)
```
## Grid search (4)
*What if we have 10,000 parameter combinations?*
- Option 1: you buy a cluster on Amazon, learn how to parallelize across computers.
- Option 2: you drop some of the parameter values
- Option 3: `RandomizedSearchCV` searches a subset of the combinations.
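Option 3 can be sketched as follows: instead of enumerating a grid, `RandomizedSearchCV` samples a fixed number of combinations from distributions (the data and distributions below are illustrative):

```python
import numpy as np
from scipy.stats import loguniform, uniform
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=1)

# Sample 20 combinations instead of exhausting a full grid
rs = RandomizedSearchCV(ElasticNet(random_state=1),
                        param_distributions={'alpha': loguniform(1e-4, 1e2),
                                             'l1_ratio': uniform(0, 1)},
                        n_iter=20, scoring='neg_mean_squared_error',
                        cv=5, random_state=1)
rs.fit(X, y)
print(rs.best_params_)
```

The cost is now `n_iter` fits per fold, regardless of how large the search space is.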
## Miscellaneous
*How do we get the coefficients from the models?*
```
lasso_model = pipe_lassoCV.steps[2][1] # extract the Lasso model from the pipeline
lasso_model.coef_[0:13] # extract coefficients from the model
```
# The end
[Return to agenda](#Agenda)
# Code for plots
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import seaborn as sns
plt.style.use('ggplot')
%matplotlib inline
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rcParams['figure.figsize'] = 10, 4 # set default size of plots
```
### Plots of ML types
```
%run ../base/ML_plots.ipynb
```
This is a quick write-up of where I'm at. There are still a lot of issues with the model, which I'll address below.
```
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
PATH='data/credit_default_risk/'
app_test_df = pd.read_csv(f'{PATH}application_test.csv')
app_test_df.head()
app_train_df = pd.read_csv(f'{PATH}application_train.csv')
app_train_df.head()
bureau_df = pd.read_csv(f'{PATH}bureau.csv')
bureau_df.head()
bureau_balance_df = pd.read_csv(f'{PATH}bureau_balance.csv')
bureau_balance_df.head()
pos_cash_df = pd.read_csv(f'{PATH}POS_CASH_balance.csv')
pos_cash_df.head()
credit_card_df = pd.read_csv(f'{PATH}credit_card_balance.csv')
credit_card_df.head()
pos_cash_df = pd.read_csv(f'{PATH}POS_CASH_balance.csv')
pos_cash_df.head()
prev_app_df = pd.read_csv(f'{PATH}previous_application.csv')
prev_app_df.head()
install_df = pd.read_csv(f'{PATH}installments_payments.csv')
install_df.head()
```
Data processing taken from a [Kaggle kernel](https://www.kaggle.com/shep312/lightgbm-with-weighted-averages-dropout-787/code)
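The kernel's core pattern is: aggregate each supplementary table down to one row per customer, then left-merge it onto the main frame. A toy sketch of that pattern (IDs and columns are made up):

```python
import pandas as pd

# Toy stand-in tables
app = pd.DataFrame({'SK_ID_CURR': [1, 2, 3]})
prev = pd.DataFrame({'SK_ID_CURR': [1, 1, 2],
                     'AMT_CREDIT': [100.0, 200.0, 50.0]})

# Aggregate the supplementary table to one row per customer...
agg = prev.groupby('SK_ID_CURR').agg(
    PREV_APP_COUNT=('SK_ID_CURR', 'count'),
    TOTAL_PREV_LOAN_AMT=('AMT_CREDIT', 'sum'))

# ...then left-merge so customers with no history keep a row (NaN-filled)
merged = app.merge(agg, left_on='SK_ID_CURR', right_index=True, how='left')
print(merged)
```

Customer 3 has no previous applications, so its new columns come back as NaN, which is why the kernel fills missing values later on.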
```
print('Data loaded.\nMain application training data set shape = {}'.format(app_train_df.shape))
print('Main application test data set shape = {}'.format(app_test_df.shape))
print('Positive target proportion = {:.2f}'.format(app_train_df['TARGET'].mean()))
def feature_engineering(app_data, bureau_df, bureau_balance_df, credit_card_df,
pos_cash_df, prev_app_df, install_df):
"""
Process the input dataframes into a single one containing all the features. Requires
a lot of aggregating of the supplementary datasets such that they have an entry per
customer.
Also, add any new features created from the existing ones
"""
# # Add new features
# Amount loaned relative to salary
app_data['LOAN_INCOME_RATIO'] = app_data['AMT_CREDIT'] / app_data['AMT_INCOME_TOTAL']
app_data['ANNUITY_INCOME_RATIO'] = app_data['AMT_ANNUITY'] / app_data['AMT_INCOME_TOTAL']
app_data['ANNUITY LENGTH'] = app_data['AMT_CREDIT'] / app_data['AMT_ANNUITY']
# # Aggregate and merge supplementary datasets
print('Combined train & test input shape before any merging = {}'.format(app_data.shape))
# Previous applications
agg_funs = {'SK_ID_CURR': 'count', 'AMT_CREDIT': 'sum'}
prev_apps = prev_app_df.groupby('SK_ID_CURR').agg(agg_funs)
prev_apps.columns = ['PREV APP COUNT', 'TOTAL PREV LOAN AMT']
merged_df = app_data.merge(prev_apps, left_on='SK_ID_CURR', right_index=True, how='left')
# Average the rest of the previous app data
prev_apps_avg = prev_app_df.groupby('SK_ID_CURR').mean()
merged_df = merged_df.merge(prev_apps_avg, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_PAVG'])
print('Shape after merging with previous apps num data = {}'.format(merged_df.shape))
# Previous app categorical features
prev_app_df, cat_feats, _ = process_dataframe(prev_app_df)
prev_apps_cat_avg = prev_app_df[cat_feats + ['SK_ID_CURR']].groupby('SK_ID_CURR')\
.agg({k: lambda x: str(x.mode().iloc[0]) for k in cat_feats})
merged_df = merged_df.merge(prev_apps_cat_avg, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_BAVG'])
print('Shape after merging with previous apps cat data = {}'.format(merged_df.shape))
# Credit card data - numerical features
wm = lambda x: np.average(x, weights=-1/credit_card_df.loc[x.index, 'MONTHS_BALANCE'])
credit_card_avgs = credit_card_df.groupby('SK_ID_CURR').agg(wm)
merged_df = merged_df.merge(credit_card_avgs, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_CCAVG'])
# Credit card data - categorical features
most_recent_index = credit_card_df.groupby('SK_ID_CURR')['MONTHS_BALANCE'].idxmax()
cat_feats = credit_card_df.columns[credit_card_df.dtypes == 'object'].tolist() + ['SK_ID_CURR']
merged_df = merged_df.merge(credit_card_df.loc[most_recent_index, cat_feats], left_on='SK_ID_CURR', right_on='SK_ID_CURR',
how='left', suffixes=['', '_CCAVG'])
print('Shape after merging with credit card data = {}'.format(merged_df.shape))
# Credit bureau data - numerical features
credit_bureau_avgs = bureau_df.groupby('SK_ID_CURR').mean()
merged_df = merged_df.merge(credit_bureau_avgs, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_BAVG'])
print('Shape after merging with credit bureau data = {}'.format(merged_df.shape))
# Bureau balance data
most_recent_index = bureau_balance_df.groupby('SK_ID_BUREAU')['MONTHS_BALANCE'].idxmax()
bureau_balance_df = bureau_balance_df.loc[most_recent_index, :]
merged_df = merged_df.merge(bureau_balance_df, left_on='SK_ID_BUREAU', right_on='SK_ID_BUREAU',
how='left', suffixes=['', '_B_B'])
print('Shape after merging with bureau balance data = {}'.format(merged_df.shape))
# Pos cash data - weight values by recency when averaging
wm = lambda x: np.average(x, weights=-1/pos_cash_df.loc[x.index, 'MONTHS_BALANCE'])
f = {'CNT_INSTALMENT': wm, 'CNT_INSTALMENT_FUTURE': wm, 'SK_DPD': wm, 'SK_DPD_DEF':wm}
cash_avg = pos_cash_df.groupby('SK_ID_CURR')[['CNT_INSTALMENT', 'CNT_INSTALMENT_FUTURE',
'SK_DPD', 'SK_DPD_DEF']].agg(f)
merged_df = merged_df.merge(cash_avg, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_CAVG'])
# Pos cash data data - categorical features
most_recent_index = pos_cash_df.groupby('SK_ID_CURR')['MONTHS_BALANCE'].idxmax()
cat_feats = pos_cash_df.columns[pos_cash_df.dtypes == 'object'].tolist() + ['SK_ID_CURR']
merged_df = merged_df.merge(pos_cash_df.loc[most_recent_index, cat_feats], left_on='SK_ID_CURR', right_on='SK_ID_CURR',
how='left', suffixes=['', '_CAVG'])
print('Shape after merging with pos cash data = {}'.format(merged_df.shape))
# Installments data
ins_avg = install_df.groupby('SK_ID_CURR').mean()
merged_df = merged_df.merge(ins_avg, left_on='SK_ID_CURR', right_index=True,
how='left', suffixes=['', '_IAVG'])
print('Shape after merging with installments data = {}'.format(merged_df.shape))
# Add more value counts
merged_df = merged_df.merge(pd.DataFrame(bureau_df['SK_ID_CURR'].value_counts()), left_on='SK_ID_CURR',
right_index=True, how='left', suffixes=['', '_CNT_BUREAU'])
merged_df = merged_df.merge(pd.DataFrame(credit_card_df['SK_ID_CURR'].value_counts()), left_on='SK_ID_CURR',
right_index=True, how='left', suffixes=['', '_CNT_CRED_CARD'])
merged_df = merged_df.merge(pd.DataFrame(pos_cash_df['SK_ID_CURR'].value_counts()), left_on='SK_ID_CURR',
right_index=True, how='left', suffixes=['', '_CNT_POS_CASH'])
merged_df = merged_df.merge(pd.DataFrame(install_df['SK_ID_CURR'].value_counts()), left_on='SK_ID_CURR',
right_index=True, how='left', suffixes=['', '_CNT_INSTALL'])
print('Shape after merging with counts data = {}'.format(merged_df.shape))
return merged_df
def process_dataframe(input_df, encoder_dict=None):
""" Process a dataframe into a form useable by LightGBM """
# Label encode categoricals
categorical_feats = input_df.columns[input_df.dtypes == 'object']
encoder_dict = {}
for feat in categorical_feats:
# encoder = LabelEncoder()
input_df[feat] = input_df[feat].fillna('NULL')
# input_df[feat] = encoder.fit_transform(input_df[feat].fillna('NULL'))
# encoder_dict[feat] = encoder
return input_df, categorical_feats.tolist(), encoder_dict
merged_df.to_feather(f'{PATH}merged_df')
merged_df = pd.read_feather(f'{PATH}merged_df')
merged_df.columns
cols = list(merged_df.columns)
merged_df.head()
# Separate metadata
meta_cols = ['SK_ID_CURR', 'SK_ID_BUREAU', 'SK_ID_PREV']
meta_df = merged_df[meta_cols]
merged_df.drop(meta_cols, axis=1, inplace=True)
# Process the data set.
merged_df, categorical_feats, encoder_dict = process_dataframe(input_df=merged_df)
# Capture other categorical features not as object data types:
non_obj_categoricals = [
'FONDKAPREMONT_MODE',
'HOUR_APPR_PROCESS_START',
'HOUSETYPE_MODE',
'NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS',
'NAME_HOUSING_TYPE',
'NAME_INCOME_TYPE',
'NAME_TYPE_SUITE',
'OCCUPATION_TYPE',
'ORGANIZATION_TYPE',
'WALLSMATERIAL_MODE',
'WEEKDAY_APPR_PROCESS_START',
'NAME_CONTRACT_TYPE_BAVG',
'WEEKDAY_APPR_PROCESS_START_BAVG',
'NAME_CASH_LOAN_PURPOSE',
'NAME_CONTRACT_STATUS',
'NAME_PAYMENT_TYPE',
'CODE_REJECT_REASON',
'NAME_TYPE_SUITE_BAVG',
'NAME_CLIENT_TYPE',
'NAME_GOODS_CATEGORY',
'NAME_PORTFOLIO',
'NAME_PRODUCT_TYPE',
'CHANNEL_TYPE',
'NAME_SELLER_INDUSTRY',
'NAME_YIELD_GROUP',
'PRODUCT_COMBINATION',
'NAME_CONTRACT_STATUS_CCAVG',
'STATUS',
'NAME_CONTRACT_STATUS_CAVG'
]
categorical_feats = categorical_feats + non_obj_categoricals
null_counts = merged_df.isnull().sum()
null_counts = null_counts[null_counts > 0]
null_ratios = null_counts / len(merged_df)
# Drop columns over x% null
null_thresh = .8
null_cols = null_ratios[null_ratios > null_thresh].index
merged_df.drop(null_cols, axis=1, inplace=True)
print('Columns dropped for being over {}% null:'.format(100*null_thresh))
for col in null_cols:
print(col)
if col in categorical_feats:
categorical_feats.remove(col)  # list.remove, not pop: pop takes an index
# Fill the rest with the mean (TODO: do something better!)
# merged_df.fillna(merged_df.median(), inplace=True)
merged_df.fillna(0, inplace=True)
merged_df.columns
cont_feats = [x for x in merged_df.columns[~merged_df.columns.isin(categorical_feats)]]
cat_vars = list(categorical_feats)
contin_vars = list(cont_feats)
joined = pd.DataFrame()
for v in cat_vars: joined[v] = merged_df[v].astype('category').cat.as_ordered()
for v in contin_vars: joined[v] = merged_df[v].fillna(0).astype('float32')
joined.head()
from torch.nn import functional as F
```
Modified fastai classes taken from a notebook by [Kerem Turgutlu](https://github.com/KeremTurgutlu/deeplearning/blob/master/avazu/FAST.AI%20Classification%20-%20Kaggle%20Avazu%20CTR.ipynb)
```
class MixedInputModel(nn.Module):
def __init__(self, emb_szs, n_cont, emb_drop, out_sz, szs, drops,
y_range=None, use_bn=False):
super().__init__()
self.embs = nn.ModuleList([nn.Embedding(c, s) for c,s in emb_szs])
for emb in self.embs: emb_init(emb)
n_emb = sum(e.embedding_dim for e in self.embs)
self.n_emb, self.n_cont=n_emb, n_cont
szs = [n_emb+n_cont] + szs
self.lins = nn.ModuleList([
nn.Linear(szs[i], szs[i+1]) for i in range(len(szs)-1)])
self.bns = nn.ModuleList([
nn.BatchNorm1d(sz) for sz in szs[1:]])
for o in self.lins: kaiming_normal(o.weight.data)
self.outp = nn.Linear(szs[-1], out_sz)
kaiming_normal(self.outp.weight.data)
self.emb_drop = nn.Dropout(emb_drop)
self.drops = nn.ModuleList([nn.Dropout(drop) for drop in drops])
self.bn = nn.BatchNorm1d(n_cont)
self.use_bn,self.y_range = use_bn,y_range
def forward(self, x_cat, x_cont):
if self.n_emb != 0:
x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
x = torch.cat(x, 1)
x = self.emb_drop(x)
if self.n_cont != 0:
x2 = self.bn(x_cont)
x = torch.cat([x, x2], 1) if self.n_emb != 0 else x2
for l,d,b in zip(self.lins, self.drops, self.bns):
x = F.relu(l(x))
if self.use_bn: x = b(x)
x = d(x)
x = self.outp(x)
if self.y_range:
x = F.sigmoid(x)
x = x*(self.y_range[1] - self.y_range[0])
x = x+self.y_range[0]
return x
class ColumnarDataset(Dataset):
def __init__(self, cats, conts, y):
n = len(cats[0]) if cats else len(conts[0])
self.cats = np.stack(cats, 1).astype(np.int64) if cats else np.zeros((n,1))
self.conts = np.stack(conts, 1).astype(np.float32) if conts else np.zeros((n,1))
self.y = np.zeros((n,1)) if y is None else y #y.values # THIS LINE IS CHANGED FROM y[:, None]
def __len__(self): return len(self.y)
def __getitem__(self, idx):
return [self.cats[idx], self.conts[idx], self.y[idx]]
@classmethod
def from_data_frames(cls, df_cat, df_cont, y=None):
cat_cols = [c.values for n,c in df_cat.items()]
cont_cols = [c.values for n,c in df_cont.items()]
return cls(cat_cols, cont_cols, y)
@classmethod
def from_data_frame(cls, df, cat_flds, y=None):
return cls.from_data_frames(df[cat_flds], df.drop(cat_flds, axis=1), y)
class ColumnarModelData(ModelData):
def __init__(self, path, trn_ds, val_ds, bs, test_ds=None, shuffle=True):
test_dl = DataLoader(test_ds, bs, shuffle=False, num_workers=1) if test_ds is not None else None
super().__init__(path, DataLoader(trn_ds, bs, shuffle=shuffle, num_workers=1),
DataLoader(val_ds, bs*2, shuffle=False, num_workers=1), test_dl)
@classmethod
def from_arrays(cls, path, val_idxs, xs, y, bs=64, test_xs=None, shuffle=True):
((val_xs, trn_xs), (val_y, trn_y)) = split_by_idx(val_idxs, xs, y)
test_ds = PassthruDataset(*(test_xs.T), [0] * len(test_xs)) if test_xs is not None else None
return cls(path, PassthruDataset(*(trn_xs.T), trn_y), PassthruDataset(*(val_xs.T), val_y),
bs=bs, shuffle=shuffle, test_ds=test_ds)
@classmethod
def from_data_frames(cls, path, trn_df, val_df, trn_y, val_y, cat_flds, bs, test_df=None):
test_ds = ColumnarDataset.from_data_frame(test_df, cat_flds) if test_df is not None else None
return cls(path, ColumnarDataset.from_data_frame(trn_df, cat_flds, trn_y),
ColumnarDataset.from_data_frame(val_df, cat_flds, val_y), bs, test_ds=test_ds)
@classmethod
def from_data_frame(cls, path, val_idxs, df, y, cat_flds, bs, test_df=None):
((val_df, trn_df), (val_y, trn_y)) = split_by_idx(val_idxs, df, y)
return cls.from_data_frames(path, trn_df, val_df, trn_y, val_y, cat_flds, bs, test_df=test_df)
def get_learner(self, emb_szs, n_cont, emb_drop, out_sz, szs, drops,
y_range=None, use_bn=False, **kwargs):
model = MixedInputModel(emb_szs, n_cont, emb_drop, out_sz, szs, drops, y_range, use_bn)
return StructuredLearner(self, StructuredModel(to_gpu(model)), opt_fn=optim.Adam, **kwargs)
joined_train_all = joined[:307511]
joined_test = joined[307511:]
df, y, nas, mapper = proc_df(joined_train_all, 'TARGET', do_scale=True)
df_test, _, nas, mapper = proc_df(joined_test, 'TARGET', do_scale=True, mapper=mapper, na_dict=nas)
n = len(df)
val_idx = get_cv_idxs(n)
cat_sz = [(c, len(joined[c].cat.categories)+1) for c in cat_vars]
cat_sz
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz]
emb_szs
model = MixedInputModel(emb_szs, n_cont=164, emb_drop=0.1, out_sz=2, szs=[1024, 512], drops=[0.2, 0.2], use_bn=True).cuda()
bm = BasicModel(model, 'binary_classifier')
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, y.astype('int'), cat_flds=cat_vars, bs=128)
class StructuredLearner(Learner):
def __init__(self, data, models, **kwargs):
super().__init__(data, models, **kwargs)
self.crit = F.mse_loss
learn = StructuredLearner(md, bm)
learn.crit = F.cross_entropy
learn.crit
learn.lr_find()
learn.sched.plot()
from sklearn.metrics import roc_auc_score
def roc_val(probs, y):
probs = np.exp(probs[:,1])
return roc_auc_score(y, probs)
torch.cuda.set_device(0)
torch.cuda.is_available()
torch.backends.cudnn.enabled
lr = 1e-3
learn.fit(lr, 4, metrics=[roc_val], cycle_len=2)
learn.lr_find(start_lr=1e-8,end_lr=1e-1)
learn.sched.plot()
learn.fit(5e-4, 5, metrics=[roc_val], cycle_len=1)
learn.fit(5e-4, 2, metrics=[roc_val], cycle_len=1, cycle_mult=2)
log_preds = learn.predict()
log_preds.shape
preds = np.argmax(log_preds, axis=1)
preds
pd.Series(preds).value_counts()
expsums = np.exp(log_preds).sum(axis=1)
probs = np.exp(log_preds) / expsums[:,None]
probs[:10]
np.max(probs[:,1])
out_df = pd.DataFrame({'SK_ID_CURR': meta_df['SK_ID_CURR'][307511:], 'TARGET':probs[:,1]})
out_df.head()
out_df.to_csv('credit_default_submission.csv', index=False)
```
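One pattern in the feature engineering above worth isolating is the recency-weighted average: since `MONTHS_BALANCE` is negative and closer to zero for recent months, weighting by `-1/MONTHS_BALANCE` gives recent months more influence. A toy sketch:

```python
import numpy as np
import pandas as pd

# Toy balances: MONTHS_BALANCE is negative, -1 = most recent month
df = pd.DataFrame({'SK_ID_CURR': [1, 1, 1],
                   'MONTHS_BALANCE': [-3, -2, -1],
                   'BAL': [30.0, 20.0, 10.0]})

# Weights -1/MONTHS_BALANCE = 1/3, 1/2, 1: recent months dominate
wm = lambda x: np.average(x, weights=-1 / df.loc[x.index, 'MONTHS_BALANCE'])
out = df.groupby('SK_ID_CURR')['BAL'].agg(wm)
print(float(out.loc[1]))
```

Here the plain mean would be 20, but the recency-weighted average is pulled toward the most recent value of 10.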
## Issues with the model
As you can see during training, the cross-entropy loss doesn't change much. The ROC score improves with training but usually maxes out around 0.76. One likely issue is that the training dataset is 92% 0s: the model may be learning that it can be mostly right by always outputting a low probability.
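One common hedge against that imbalance is to weight the loss by inverse class frequency, so the rare positive class counts for more. Sketched here with a plain-NumPy cross-entropy (the 92%/8% split and the toy logits are illustrative):

```python
import numpy as np

def weighted_ce(logits, targets, weights):
    # Numerically stable softmax probabilities
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Per-sample weight looked up from the target class
    w = weights[targets]
    return np.mean(-w * np.log(p[np.arange(len(targets)), targets]))

logits = np.array([[2.0, -1.0], [0.5, 0.2]])
targets = np.array([0, 1])
weights = np.array([1.0, 0.92 / 0.08])  # up-weight the rare positive class
print(weighted_ce(logits, targets, weights))
```

With up-weighted positives, always predicting class 0 is no longer a cheap way to a low loss.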
# Demonstration of using Curve registry
### Essential links:
* Source code: <https://github.com/curvefi/curve-pool-registry/tree/b17>;
* ABI: <https://github.com/curvefi/curve-pool-registry/blob/b17/deployed/2020-06-20/registry.abi>;
* Registry contract: `0x7002B727Ef8F5571Cb5F9D70D13DBEEb4dFAe9d1`.
### Complementary information for tests:
* Y pool: `0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51` (underlying coins: DAI/USDC/USDT/TUSD, compounding coins: yDAI/yUSDC/yUSDT/yTUSD);
* DAI: `0x6B175474E89094C44Da98b954EedeAC495271d0F`;
* USDT: `0xdAC17F958D2ee523a2206206994597C13D831ec7`;
```
# Init Brownie environment
# <https://eth-brownie.readthedocs.io/en/stable/python-package.html#accessing-the-network>
from brownie import network, Contract
network.connect('mainnet')
import json
with open('registry.abi', 'r') as f:
abi = json.load(f)
registry = Contract.from_abi('CurveRegistry', '0x7002B727Ef8F5571Cb5F9D70D13DBEEb4dFAe9d1', abi)
```
### Finding a pool by coins
Pools can be found for a given pair (from -> to)
```
registry.find_pool_for_coins("0xdAC17F958D2ee523a2206206994597C13D831ec7",
"0x6B175474E89094C44Da98b954EedeAC495271d0F", 0)
registry.find_pool_for_coins("0xdAC17F958D2ee523a2206206994597C13D831ec7",
"0x6B175474E89094C44Da98b954EedeAC495271d0F", 1)
registry.find_pool_for_coins("0xdAC17F958D2ee523a2206206994597C13D831ec7",
"0x6B175474E89094C44Da98b954EedeAC495271d0F", 10) # ... eventually we hit 0x0000
```
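The enumeration pattern above — increment the index until the registry returns the zero address — can be sketched in plain Python (the `find_pool_for_coins` stub and pool names are made up):

```python
ZERO = '0x0000000000000000000000000000000000000000'

# Hypothetical stand-in for registry.find_pool_for_coins: returns matching
# pools by index, then the zero address once the index runs past them
pools = ['0xPoolA', '0xPoolB']
def find_pool_for_coins(frm, to, i):
    return pools[i] if i < len(pools) else ZERO

# Enumerate every pool for a pair by walking the index until 0x0000...
found, i = [], 0
while True:
    p = find_pool_for_coins('USDT', 'DAI', i)
    if p == ZERO:
        break
    found.append(p)
    i += 1
print(found)
```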
### Getting pool's coins
Information is given in format:
```python
[
(compounding_coin, compounding_coin, ...),
(underlying_coin, underlying_coin, ...),
(compounding_coin_decimals, compounding_coin_decimals, ...),
(underlying_coin_decimals, underlying_coin_decimals, ...)
]
```
```
registry.get_pool_coins('0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51')
```
### Pool info
Format:
```python
[
(balance_0, balance_1, ...),
(underlying_0, underlying_1, ...),
(*precisions_for_compounding_coins ...),
(*precisions_for_underlying_coins ...),
pool_token_address,
amplification,
fee multiplied by 1e10
]
```
```
registry.get_pool_info('0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51')
```
### Pool rates
With these rates, `underlying_balance = balance * rate / 1e18`.
```
registry.get_pool_rates('0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51')
```
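A worked example of that conversion with made-up numbers, using integer math as on-chain code does:

```python
# Converting a compounding-coin balance to the underlying amount using an
# 18-decimal fixed-point rate (all values are illustrative)
balance = 1_000_000 * 10 ** 18      # raw pool balance
rate = 1_050_000_000_000_000_000    # 1.05e18: each coin is worth 1.05 underlying
underlying_balance = balance * rate // 10 ** 18
print(underlying_balance / 1e18)
```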
### Estimate the most probable gas spent for exchanging two coins
```
registry.estimate_gas_used('0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51',
'0xdAC17F958D2ee523a2206206994597C13D831ec7',
'0x6B175474E89094C44Da98b954EedeAC495271d0F')
```
### Calculate exchange amount(s)
We're swapping USDT (6 decimals) to DAI (18 decimals)
```
dai_amount = registry.get_exchange_amount(
'0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51',
'0xdAC17F958D2ee523a2206206994597C13D831ec7',
'0x6B175474E89094C44Da98b954EedeAC495271d0F',
10 ** 6) # Dump 1 USDT
dai_amount / 1e18 # DAI has 18 decimals
```
How much USDT do we need to get 1 DAI?
```
usdt_amount = registry.get_input_amount(
'0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51',
'0xdAC17F958D2ee523a2206206994597C13D831ec7',
'0x6B175474E89094C44Da98b954EedeAC495271d0F',
10 ** 18)
usdt_amount / 1e6
```
Get many exchange amounts (USDT to DAI) at once
```
amounts = registry.get_exchange_amounts(
'0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51',
'0xdAC17F958D2ee523a2206206994597C13D831ec7',
'0x6B175474E89094C44Da98b954EedeAC495271d0F',
[x * 10 ** 6 for x in range(1, 101)])
[x / 1e18 for x in amounts][:5] # Let's show only first 5 out of 100
```
### Exchanges using the registry
Reconnect to a fork of mainnet on ganache-cli
```
network.disconnect()
network.connect('mainnet-fork')
```
Create an account using a saved plaintext private key (never do that at home!)
```
from brownie import accounts
from test_address import private_key
alice = accounts.add(bytes.fromhex(private_key))
alice
```
Make ERC20 contract objects to check balances before and after
```
with open('erc20.abi', 'r') as f:
abi = json.load(f)
dai = Contract.from_abi('ERC20', '0x6B175474E89094C44Da98b954EedeAC495271d0F', abi)
usdc = Contract.from_abi('ERC20', '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48', abi)
dai.balanceOf(alice)/1e18
usdc.balanceOf(alice)/1e6
usdc.approve(registry, 5 * 10 ** 6, {'from': alice})
registry.exchange('0x45F783CCE6B7FF23B2ab2D70e416cdb7D6055f51', # Y pool
'0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48', # from USDC
'0x6B175474E89094C44Da98b954EedeAC495271d0F', # to DAI
10 ** 6, # swap 1 dollar
10 ** 18 // 2, # require no less than half a dollar
{'from': alice})
dai.balanceOf(alice)/1e18
```
## Git Stash
Before you can `git pull`, you need to have committed any changes you have made. If you find you want to pull, but you're not ready to commit, you have to temporarily "put aside" your uncommitted changes.
For this, you can use the `git stash` command, like in the following example:
```
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
%%writefile Wales.md
Mountains In Wales
==================
* Pen y Fan
* Tryfan
* Snowdon
* Glyder Fawr
* Fan y Big
* Cadair Idris
%%bash
git stash
git pull
```
By stashing your work first, your repository becomes clean, allowing you to pull. To restore your changes, use `git stash apply`.
```
%%bash --no-raise-error
git stash apply
```
The "Stash" is a way of temporarily saving your working area, and can help out in a pinch.
## Tagging
Tags are easy to read labels for revisions, and can be used anywhere we would name a commit.
Produce real results *only* with tagged revisions
```
%%bash
git tag -a v1.0 -m "Release 1.0"
git push --tags
%%writefile Pennines.md
Mountains In the Pennines
========================
* Cross Fell
%%bash
git add Pennines.md
git commit -am "Add Pennines"
```
You can also use tag names in place of commit hashes, for example to list the history between particular commits:
```
%%bash
git log v1.0.. --graph --oneline
```
If `..` is used without a following commit name, HEAD is assumed.
## Working with generated files: gitignore
We often end up with files that are generated by our program. It is bad practice to keep these in Git; just keep the sources.
Examples include `.o` and `.x` files for compiled languages, `.pyc` files in Python.
In our example, we might want to make our .md files into a PDF with pandoc:
```
%%writefile Makefile
MDS=$(wildcard *.md)
PDFS=$(MDS:.md=.pdf)
default: $(PDFS)
%.pdf: %.md
pandoc $< -o $@
%%bash
make
```
We now have a bunch of output .pdf files corresponding to each Markdown file.
But we don't want those to show up in git:
```
%%bash
git status
```
Use .gitignore files to tell Git not to pay attention to files with certain paths:
```
%%writefile .gitignore
*.pdf
%%bash
git status
%%bash
git add Makefile
git add .gitignore
git commit -am "Add a makefile and ignore generated files"
git push
```
## Git clean
Sometimes you end up creating various files that you do not want to include in version control. An easy way of deleting them (if that is what you want) is the `git clean` command, which will remove the files that git is not tracking.
```
%%bash
git clean -fX
%%bash
ls
```
* With -f: don't prompt
* with -d: remove directories
* with -x: also remove .gitignored files
* with -X: only remove .gitignored files
## Hunks
### Git Hunks
A "Hunk" is one git change. This changeset has three hunks:
```diff
+import matplotlib
+import numpy as np
from matplotlib import pylab
from matplotlib.backends.backend_pdf import PdfPages
+def increment_or_add(key,hash,weight=1):
+ if key not in hash:
+ hash[key]=0
+ hash[key]+=weight
+
data_path=os.path.join(os.path.dirname(
os.path.abspath(__file__)),
-regenerate=False
+regenerate=True
```
### Interactive add
`git add` and `git reset` can be used to stage/unstage a whole file,
but you can use interactive mode to stage by hunk, choosing
yes or no for each hunk.
``` bash
git add -p myfile.py
```
``` diff
+import matplotlib
+import numpy as np
#Stage this hunk [y,n,a,d,/,j,J,g,e,?]?
```
## GitHub pages
### Yaml Frontmatter
GitHub will publish repositories containing markdown as web pages, automatically.
You'll need to add this content:
> ```
> ---
> ---
> ```
A pair of lines with three dashes, to the top of each markdown file. This is how GitHub knows which markdown files to make into web pages.
[Here's why](https://jekyllrb.com/docs/front-matter/) for the curious.
```
%%writefile index.md
---
title: Github Pages Example
---
Mountains and Lakes in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
%%bash
git commit -am "Add github pages YAML frontmatter"
```
### The gh-pages branch
GitHub creates github pages when you use a special named branch.
This is best used to create documentation for a program you write, but you can use it for anything.
```
os.chdir(working_dir)
%%bash
git checkout -b gh-pages
git push -uf origin gh-pages
```
The first time you do this, GitHub takes a few minutes to generate your pages.
The website will appear at `http://username.github.io/repositoryname`, for example:
http://UCL.github.io/github-example/
### UCL layout for GitHub pages
You can use GitHub pages to make HTML layouts, here's an [example of how to do it](http://github.com/UCL/ucl-github-pages-example),
and [how it looks](http://ucl.github.com/ucl-github-pages-example). We won't go into the detail of this now,
but after the class, you might want to try this.
In [The Mean as Predictor](mean_meaning), we found that the mean had some good
properties as a single best predictor for a whole distribution.
* The mean gives a total prediction error of zero. Put otherwise, on average,
your prediction error is zero.
* The mean gives the lowest squared error. Put otherwise, the mean gives the
lowest average squared difference from the observed value.
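Both properties are easy to check numerically; here is a sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(10, 2, 50)

# Property 1: prediction errors from using the mean sum to zero
errors = values - values.mean()
print(np.isclose(errors.sum(), 0))

# Property 2: the mean beats other single predictors on squared error
sse = lambda c: np.sum((values - c) ** 2)
print(all(sse(values.mean()) <= sse(c) for c in [9.0, 10.5, 12.0]))
```

The candidate predictors 9.0, 10.5, 12.0 are arbitrary; any constant other than the mean gives at least as large a sum of squared errors.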
Now we can consider what predictor we should use when predicting one set of values, from a different set of values.
We load our usual libraries.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make plots look a little bit more fancy
plt.style.use('fivethirtyeight')
# Print to 4 decimal places, show tiny values as 0
np.set_printoptions(precision=4, suppress=True)
import pandas as pd
```
We start with some [data on chronic kidney
disease]({{ site.baseurl }}/data/chronic_kidney_disease).
Download the data to your computer via this link: [ckd_clean.csv]({{
site.baseurl }}/data/ckd_clean.csv).
This is a data table with one row per patient and one column per test on that
patient. Many of the columns are values from blood tests. Most of the patients
have chronic kidney disease.
To make things a bit easier this dataset is a version from which we have already dropped all missing values. See the dataset page linked above for more detail.
```
# Run this cell
ckd = pd.read_csv('ckd_clean.csv')
ckd.head()
```
We are interested in two columns from this data frame, "Packed Cell Volume" and "Hemoglobin".
[Packed Cell Volume](https://en.wikipedia.org/wiki/Hematocrit) (PCV) is a
measurement of the percentage of blood volume taken up by red blood cells. It
is a measurement of anemia, and anemia is a common consequence of chronic
kidney disease.
```
# Get the packed cell volume values as a Series.
pcv_series = ckd['Packed Cell Volume']
# Show the distribution.
pcv_series.hist()
```
"Hemoglobin" (HGB) is the concentration of the
[hemoglobin](https://en.wikipedia.org/wiki/Hemoglobin) molecule in blood, in
grams per deciliter. Hemoglobin is the iron-containing protein in red blood
cells that carries oxygen to the tissues.
```
# Get the hemoglobin concentration values as a Series.
hgb_series = ckd['Hemoglobin']
# Show the distribution.
hgb_series.hist()
```
We convert these Series into arrays, to make them simpler to work with. We do
this with the Numpy `array` function, that makes arrays from many other types
of object.
```
pcv = np.array(pcv_series)
hgb = np.array(hgb_series)
```
## Looking for straight lines
The [Wikipedia page for PCV](https://en.wikipedia.org/wiki/Hematocrit) says (at
the time of writing):
> An estimated hematocrit as a percentage may be derived by tripling the
> hemoglobin concentration in g/dL and dropping the units.
> [source](https://www.doctorslounge.com/hematology/labs/hematocrit.htm).
This rule-of-thumb suggests that the values for PCV will be roughly three times
the values for HGB.
Therefore, if we plot the HGB values on the x-axis of a plot, and the PCV
values on the y-axis, we should see something that is roughly compatible with a
straight line going through 0, 0, and with a slope of about 3.
Here is the plot. This time, for fun, we add a label to the X and Y axes with
`xlabel` and `ylabel`.
```
# Plot HGB on the x axis, PCV on the y axis
plt.plot(hgb, pcv, 'o')
plt.xlabel('Hemoglobin concentration')
plt.ylabel('Packed cell volume')
```
The `'o'` argument to the plot function above is a "plot marker". It tells
Matplotlib to plot the points as points, rather than joining them with lines.
With `'o'` the markers are filled circles, but we can also ask for other
symbols such as plus marks (with `'+'`) and crosses (with `'x'`).
The line does look a bit like it has a slope of about 3. But - is that true?
Is the *best* slope 3? What slope would we find, if we looked for the *best*
slope? What could *best* mean, for *best slope*?
## Adjusting axes
We would like to see what this graph looks like in relation to the origin -
x=0, y=0. In order to do this, we can add a `plt.axis` function call, like this:
```
# Plot HGB on the x axis, PCV on the y axis
plt.plot(hgb, pcv, 'o')
plt.xlabel('Hemoglobin concentration')
plt.ylabel('Packed cell volume')
# Set the x axis to go from 0 to 18, y axis from 0 to 55.
plt.axis([0, 18, 0, 55])
```
It does look plausible that this line goes through the origin, and that makes
sense. All hemoglobin is in red blood cells; we might expect the volume of red
blood cells to be zero when the hemoglobin concentration is zero.
## Putting points on plots
Before we go on, we will need some machinery to plot arbitrary points on plots.
In fact this works in exactly the same way as the points you have already seen
on plots. We use the `plot` function, with a suitable plot marker. The x
coordinates of the points go in the first argument, and the y coordinates go in
the second.
To plot a single point, pass a single x and y coordinate value:
```
plt.plot(hgb, pcv, 'o')
# A red point at x=5, y=40
plt.plot(5, 40, 'o', color='red')
```
To plot more than one point, pass multiple x and y coordinate values:
```
plt.plot(hgb, pcv, 'o')
# Two red points, one at [5, 40], the other at [10, 50]
plt.plot([5, 10], [40, 50], 'o', color='red')
```
## The mean as applied to plots
We want a straight line that fits these points.
The straight line should do the best job it can in *predicting* the PCV values from the HGB values.
We found that the mean was a good predictor for a distribution of values. We
could try and find a line or something similar that went through the mean of
the PCV values, at any given HGB value.
Let's split the HGB values up into bins centered on 7.5, 8.5, and so on. Then
we take the mean of all the PCV values corresponding to HGB values between 7
and 8, 8 and 9, and so on.
```
# The centers for our HGB bins
hgb_bin_centers = np.arange(7.5, 17.5)
hgb_bin_centers
# The number of bins
n_bins = len(hgb_bin_centers)
n_bins
```
Show the center of the bins on the x axis of the plot.
```
plt.plot(hgb, pcv, 'o')
plt.plot(hgb_bin_centers, np.zeros(n_bins), 'o', color='red')
```
Take the mean of the PCV values for each bin.
```
pcv_means = np.zeros(n_bins)
for i in np.arange(n_bins):
    mid = hgb_bin_centers[i]
    # Boolean array identifying indices within the HGB bin
    fr_within_bin = (hgb >= mid - 0.5) & (hgb < mid + 0.5)
    # Take the mean of the corresponding PCV values
    pcv_means[i] = np.mean(pcv[fr_within_bin])
pcv_means
```
These means should be good predictors for PCV values, given an HGB value. We
check the bin of the HGB value and take the corresponding PCV mean as the
prediction.
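That lookup can be written as a small helper. This is a sketch added for illustration, not part of the original analysis: it assumes the `hgb_bin_centers` and `pcv_means` arrays computed above, and simply picks the mean for the nearest bin center.

```python
import numpy as np

def predict_pcv_from_bins(hgb_value, bin_centers, bin_means):
    # Find the bin whose center is closest to the given HGB value,
    # and return the mean PCV for that bin as the prediction.
    i = np.argmin(np.abs(bin_centers - hgb_value))
    return bin_means[i]

# Example with made-up bin means:
centers = np.array([7.5, 8.5, 9.5])
means = np.array([25.0, 28.0, 31.0])
print(predict_pcv_from_bins(8.2, centers, means))  # nearest center is 8.5, so 28.0
```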
Here is a plot of the means of PCV for every bin:
```
plt.plot(hgb, pcv, 'o')
plt.plot(hgb_bin_centers, pcv_means, 'o', color='red')
```
## Finding a predicting line
The means per bin give some prediction of the PCV values from the HGB. Can we
do better? Can we find a line that predicts the PCV data from the HGB data?
Remember, any line can be fully described by an *intercept* $c$ and a *slope*
$s$. A line predicts the $y$ values from the $x$ values, using the slope $s$
and the intercept $c$:
$$
y = c + x * s
$$
The *intercept* is the value of the line when x is equal to 0. It is therefore
where the line crosses the y axis.
In our case, let us assume the intercept is 0. We will assume PCV of 0 if
there is no hemoglobin.
Now we want to find a good *slope*. The *slope* is the amount that the y
values increase for a one unit increase in the x values. In our case, it is
the increase in the PCV for a 1 gram / deciliter increase in the HGB.
Let's guess the slope is 3, as Wikipedia told us it should be:
```
slope = 3
```
Remember our line prediction for y (PCV) is:
$$
y = c + x * s
$$
where x is the HGB. In our case we assume the intercept is 0, so:
```
pcv_predicted = hgb * slope
```
Plot the predictions in red on the original data in blue.
```
plt.plot(hgb, pcv, 'o')
plt.plot(hgb, pcv_predicted, 'o', color='red')
```
The red are the predictions, the blue are the original data. At each PCV value
we have a prediction, and therefore, an error in our prediction; the difference
between the predicted value and the actual values.
```
error = pcv - pcv_predicted
error[:10]
```
In this plot, for each point, we draw a thin dotted line between the prediction
of PCV for each point, and its actual value.
```
plt.plot(hgb, pcv, 'o')
plt.plot(hgb, pcv_predicted, 'o', color='red')
# Draw a line between predicted and actual
for i in np.arange(len(hgb)):
    x = hgb[i]
    y_0 = pcv_predicted[i]
    y_1 = pcv[i]
    plt.plot([x, x], [y_0, y_1], ':', color='black', linewidth=1)
```
## What is a good line?
We have guessed a slope, and so defined a line. We calculated the errors from
our guessed line.
How would we decide whether our slope was a good one? Put otherwise, how would
we decide when we have a good line?
A good line should have small prediction errors. That is, the line should give
a good prediction of the points. That is, the line should result in small
*errors*.
We would like a slope that gives us the smallest error.
## One metric for the line
[The Mean as Predictor](mean_meaning) section showed that the mean is the value
with the smallest squared distance from the other values in the distribution.
The mean is the predictor value that minimizes the sum of squared distances
from the other values.
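A quick numeric check of that property, using made-up values: any nudge away from the mean increases the sum of squared distances.

```python
import numpy as np

values = np.array([2.0, 4.0, 9.0])
center = values.mean()  # 5.0

def sum_of_squares(c):
    # Sum of squared distances from candidate predictor c
    return np.sum((values - c) ** 2)

# The mean beats nearby candidate predictors.
print(sum_of_squares(center) < sum_of_squares(center + 0.1))  # True
print(sum_of_squares(center) < sum_of_squares(center - 0.1))  # True
```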
We can use the same metric for our line. Instead of using a single vector as a
predictor, now we are using the values on the line as predictors. We want the
HGB slope, in our case, that gives the best predictors of the PCV values.
Specifically, we want the slope that gives the smallest sum of squares
difference between the line prediction and the actual values.
We have already calculated the prediction and error for our slope of 3, but
let's do it again, and then calculate the *sum of squares* of the error:
```
slope = 3
pcv_predicted = hgb * slope
error = pcv - pcv_predicted
# The sum of squared error
np.sum(error ** 2)
```
We are about to do this calculation many times, for many different slopes. We
need a *function*.
In the function below, we are using [function world](../07/functions): the
function can see the values of `hgb` and `pcv` defined here at the top level,
outside *function world*.
```
def sos_error(slope):
    predicted = hgb * slope  # 'hgb' comes from the top level
    error = pcv - predicted  # 'pcv' comes from the top level
    return np.sum(error ** 2)
```
First check we get the same answer as the calculation above:
```
sos_error(3)
```
Does 3.5 give a higher or lower sum of squared error?
```
sos_error(3.5)
```
Now we can use the same strategy as we used in the [mean meaning](mean_meaning)
page, to try lots of slopes, and find the one that gives the smallest sum of
squared error.
```
# Slopes to try
some_slopes = np.arange(2, 4, 0.01)
n_slopes = len(some_slopes)
# Try all these slopes, calculate and record sum of squared error
sos_errors = np.zeros(n_slopes)
for i in np.arange(n_slopes):
    slope = some_slopes[i]
    sos_errors[i] = sos_error(slope)
# Show the first 10 values
sos_errors[:10]
```
We plot the slopes we have tried, on the x axis, against the sum of squared
error, on the y-axis.
```
plt.plot(some_slopes, sos_errors)
plt.xlabel('Candidate slopes')
plt.ylabel('Sum of squared error')
```
The minimum of the sum of squared error is:
```
np.min(sos_errors)
```
We want to find the slope that corresponds to this minimum. We can use
[argmin](where_and_argmin).
```
# Index of minimum value
i_of_min = np.argmin(sos_errors)
i_of_min
```
This is the index position of the minimum. We will therefore get the minimum
(again) if we index into the original array with the index we just found:
```
# Check we do in fact get the minimum at this index
sos_errors[i_of_min]
```
Now we can get and show the slope value that corresponds to the minimum sum of
squared error:
```
best_slope = some_slopes[i_of_min]
best_slope
```
Plot the data, predictions and errors for the line that minimizes the sum of
squared error:
```
best_predicted = hgb * best_slope
plt.plot(hgb, pcv, 'o')
plt.plot(hgb, best_predicted, 'o', color='red')
for i in np.arange(len(hgb)):
    x = hgb[i]
    y_0 = best_predicted[i]
    y_1 = pcv[i]
    plt.plot([x, x], [y_0, y_1], ':', color='black', linewidth=1)
plt.title('The best-fit line using least-squared error')
```
The algorithm we have used so far is rather slow and clunky, because we had to
make an array with lots of slopes to try, and then go through each one to find
the slope that minimizes the squared error.
In fact, as we will soon see, we can use some tricks to get Python to do all
this work for us, much more quickly.
Finding techniques for doing this automatically is a whole mathematical field,
called [optimization](https://en.wikipedia.org/wiki/Mathematical_optimization).
For now, let's leap to using these techniques on our problem, of finding the
best slope:
```
from scipy.optimize import minimize
# 3 below is the slope value to start the search.
res = minimize(sos_error, 3)
res
```
The slope is in the `x` attribute of the return value:
```
res.x
```
## The magic of maths
We found the best (sum of squares) slope by trying lots of slopes, above, and then, rather more efficiently, by using `minimize` to do that job for us.
You don't need to understand the argument below, to follow this class, but in
this case we can work out the best slope with some [fairly simple calculus and
algebra](../extra/slope_deviations). It turns out like this:
```
maths_slope = np.sum(hgb * pcv) / np.sum(hgb ** 2)
maths_slope
```
See the page linked above for why this formula works for any set of x and y
values, where the intercept is zero.
But - we won't be using these mathematical short cuts in this course, we will
be using `minimize` and friends to find the best slope by trial and error.
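As a sanity check (not part of the original page), the closed-form slope can be compared against a brute-force search on synthetic data with a known slope of 3:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(5, 15, size=200)        # synthetic "HGB" values
y = 3 * x + rng.normal(0, 1, size=200)  # "PCV" values with true slope 3 plus noise

# Closed-form least-squares slope for a line through the origin
maths_slope = np.sum(x * y) / np.sum(x ** 2)

# Brute-force search over candidate slopes
candidates = np.arange(2, 4, 0.001)
sos = np.array([np.sum((y - x * s) ** 2) for s in candidates])
search_slope = candidates[np.argmin(sos)]

print(abs(maths_slope - search_slope) < 0.001)  # True: the two methods agree
```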
```
!cp drive/MyDrive/cornell-movie-dialog-turns.csv .
!ls -lah
```
# Load data
```
import pandas as pd
import torchtext
import torch
import time
import random
import math
from tqdm.notebook import tqdm
from torch import nn, optim
df = pd.read_csv('cornell-movie-dialog-turns.csv')
df.head(50_000).to_csv('cornell-movie-dialog-turns-mini.csv', index=False)
print(df.shape)
# Use same field for both columns since they have a shared vocabulary
TEXT = torchtext.data.Field(
tokenize='spacy',
lower=True,
init_token='<sos>',
eos_token='<eos>'
)
fields = [('turn1', TEXT), ('turn2', TEXT)]
# Create dataset
start = time.time()
dataset = torchtext.data.TabularDataset(
path='cornell-movie-dialog-turns-mini.csv',
format='CSV',
fields=fields,
skip_header=True
)
end = time.time()
print(f'Duration: {end - start}')
# Train/val split
(train, valid) = dataset.split(split_ratio=[0.85, 0.15])
print(len(train), len(valid))
vars(train[0])
MAX_VOCAB_SIZE = 10_000
TEXT.build_vocab(train, max_size=MAX_VOCAB_SIZE)
print(f'Size of vocab: {len(TEXT.vocab)}')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
BATCH_SIZE = 64
train_iterator, valid_iterator = torchtext.data.BucketIterator.splits(
(train, valid),
batch_size = BATCH_SIZE,
sort_key=lambda x: len(x.turn1),
device = device
)
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.embedding = nn.Embedding(input_dim, emb_dim)  # no dropout as only one layer!
        self.rnn = nn.GRU(emb_dim, hid_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        # src = [src len, batch size]
        embedded = self.dropout(self.embedding(src))
        # embedded = [src len, batch size, emb dim]
        outputs, hidden = self.rnn(embedded)  # no cell state!
        # outputs = [src len, batch size, hid dim * n directions]
        # hidden = [n layers * n directions, batch size, hid dim]
        # outputs are always from the top hidden layer
        return hidden
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.output_dim = output_dim
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)
        self.fc_out = nn.Linear(emb_dim + hid_dim * 2, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, context):
        # input = [batch size]
        # hidden = [n layers * n directions, batch size, hid dim]
        # context = [n layers * n directions, batch size, hid dim]
        # n layers and n directions in the decoder will both always be 1, therefore:
        # hidden = [1, batch size, hid dim]
        # context = [1, batch size, hid dim]
        input = input.unsqueeze(0)
        # input = [1, batch size]
        embedded = self.dropout(self.embedding(input))
        # embedded = [1, batch size, emb dim]
        emb_con = torch.cat((embedded, context), dim=2)
        # emb_con = [1, batch size, emb dim + hid dim]
        output, hidden = self.rnn(emb_con, hidden)
        # output = [seq len, batch size, hid dim * n directions]
        # hidden = [n layers * n directions, batch size, hid dim]
        # seq len, n layers and n directions will always be 1 in the decoder, therefore:
        # output = [1, batch size, hid dim]
        # hidden = [1, batch size, hid dim]
        output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)),
                           dim=1)
        # output = [batch size, emb dim + hid dim * 2]
        prediction = self.fc_out(output)
        # prediction = [batch size, output dim]
        return prediction, hidden
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device
        assert encoder.hid_dim == decoder.hid_dim, \
            "Hidden dimensions of encoder and decoder must be equal!"

    def forward(self, src, trg, teacher_forcing_ratio=0.5):
        # src = [src len, batch size]
        # trg = [trg len, batch size]
        # teacher_forcing_ratio is probability to use teacher forcing
        # e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
        batch_size = trg.shape[1]
        trg_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim
        # tensor to store decoder outputs
        outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
        # last hidden state of the encoder is the context
        context = self.encoder(src)
        # context also used as the initial hidden state of the decoder
        hidden = context
        # first input to the decoder is the <sos> tokens
        input = trg[0, :]
        for t in range(1, trg_len):
            # insert input token embedding, previous hidden state and the context state
            # receive output tensor (predictions) and new hidden state
            output, hidden = self.decoder(input, hidden, context)
            # place predictions in a tensor holding predictions for each token
            outputs[t] = output
            # decide if we are going to use teacher forcing or not
            teacher_force = random.random() < teacher_forcing_ratio
            # get the highest predicted token from our predictions
            top1 = output.argmax(1)
            # if teacher forcing, use actual next token as next input
            # if not, use predicted token
            input = trg[t] if teacher_force else top1
        return outputs
INPUT_DIM = len(TEXT.vocab)
OUTPUT_DIM = len(TEXT.vocab)
ENC_EMB_DIM = 128
DEC_EMB_DIM = 128
HID_DIM = 256
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Seq2Seq(enc, dec, device).to(device)
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.normal_(param.data, mean=0, std=0.01)

model.apply(init_weights)
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
    model.train()
    epoch_loss = 0
    for i, batch in tqdm(enumerate(iterator)):
        src = batch.turn1
        trg = batch.turn2
        optimizer.zero_grad()
        output = model(src, trg)
        # trg = [trg len, batch size]
        # output = [trg len, batch size, output dim]
        output_dim = output.shape[-1]
        output = output[1:].view(-1, output_dim)
        trg = trg[1:].view(-1)
        # trg = [(trg len - 1) * batch size]
        # output = [(trg len - 1) * batch size, output dim]
        loss = criterion(output, trg)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
    model.eval()
    epoch_loss = 0
    with torch.no_grad():
        for i, batch in tqdm(enumerate(iterator)):
            src = batch.turn1
            trg = batch.turn2
            output = model(src, trg, 0)  # turn off teacher forcing
            # trg = [trg len, batch size]
            # output = [trg len, batch size, output dim]
            output_dim = output.shape[-1]
            output = output[1:].view(-1, output_dim)
            trg = trg[1:].view(-1)
            # trg = [(trg len - 1) * batch size]
            # output = [(trg len - 1) * batch size, output dim]
            loss = criterion(output, trg)
            epoch_loss += loss.item()
    return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut2-model.pt')
    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
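The teacher-forcing decision buried inside `Seq2Seq.forward` can be isolated into a tiny helper for illustration. This is a sketch added here for clarity, not part of the original notebook:

```python
import random

def next_decoder_input(true_token, predicted_token, teacher_forcing_ratio):
    # With probability teacher_forcing_ratio, feed the ground-truth token;
    # otherwise feed the model's own prediction (mirroring Seq2Seq.forward).
    return true_token if random.random() < teacher_forcing_ratio else predicted_token

# ratio 1.0 always feeds the ground truth; ratio 0.0 always feeds the prediction
print(next_decoder_input('gt', 'pred', 1.0))  # gt
print(next_decoder_input('gt', 'pred', 0.0))  # pred
```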
```
import pandas as pd
import numpy as np
import datetime
from sklearn.preprocessing import LabelEncoder
```
# Loading processed data
Note: the file is big and hence will take some time to load
```
path = 'C:/Users/FT-LT74/hnm_fashion_recommendation/data/interim'
df = pd.read_csv(path+'/merged-df.csv')
df.head()
df.shape
```
# Initial feature selection
#### Removing first column
```
df = df.iloc[:, 1:]
df.head(1)
```
#### Removing named columns with code/id
To check:
* product_code, prod_name
* product_type_no, product_type_name
* graphical_appearance_no, graphical_appearance_name
* colour_group_code, colour_group_name
* perceived_colour_value_id, perceived_colour_value_name
* perceived_colour_master_id, perceived_colour_master_name
* department_no, department_name
* index_code, index_name,
* index_group_no, index_group_name
* section_no, section_name
* garment_group_no, garment_group_name
```
df.columns
```
```
# create tuple pair
pairs = [
('product_code', 'prod_name'),
('product_type_no', 'product_type_name'),
('graphical_appearance_no', 'graphical_appearance_name'),
('colour_group_code', 'colour_group_name'),
('perceived_colour_value_id', 'perceived_colour_value_name'),
('perceived_colour_master_id', 'perceived_colour_master_name'),
('department_no', 'department_name'),
('index_code', 'index_name'),
('index_group_no', 'index_group_name'),
('section_no', 'section_name'),
('garment_group_no', 'garment_group_name')
]
pairs[0][1]
def check_unique_code(code, name, df):
    product = df.groupby(name)[code].nunique().to_frame().reset_index()
    rows = product[product[code] > 1].shape[0]
    if rows > 0:
        return 'Not unique'
    else:
        return 'Unique'
remove_cols = []
for pair in pairs:
    result = check_unique_code(pair[0], pair[1], df)
    print(pair, ': ', result)
    if result == 'Unique':
        remove_cols.append(pair[1])
# only those unique codes can remove their respective column names
remove_cols
df2 = df.drop(remove_cols, axis=1)
df2.shape
df2.columns
```
#### Removing columns that are not important in modeling
# Manipulating date
Extracting month, date, day of week?
*Note that in time-series, we can include running indexes*
```
# converting to datetime
df2['t_dat'] = pd.to_datetime(df2['t_dat'])
df2['t_dat'][1]
df2['Month'] = df2['t_dat'].dt.month
df2['Day'] = df2['t_dat'].dt.day
df2['Weekday'] = df2['t_dat'].apply(lambda x: x.dayofweek)
df2.head(5)
df2.info()
df2.describe()
```
# Encoding of categorical data
```
for col in df2.columns:
    if df2[col].dtype == 'object' and col != 'customer_id':
        print(col)
        df2[col] = LabelEncoder().fit_transform(df2[col].astype(str))
        # df2[col] = df2[col].astype('category').cat.codes
df2['customer_id_2'] = LabelEncoder().fit_transform(df2['customer_id'].astype(str))
df2.head()
df2['club_member_status'].unique()
df['club_member_status'].unique()
df[df['club_member_status'].isna()]
df2[df2['club_member_status']==0]
df2.info()
# saving file first
df2.to_csv('C:/Users/FT-LT74/hnm_fashion_recommendation/data/interim/encoded-df.csv.gz', compression='gzip')
```
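One caveat with the encoding loop above: a fresh `LabelEncoder` is fitted and then discarded per column, so the integer-to-category mapping is lost. A small sketch (with hypothetical data) of keeping the fitted encoders around for later decoding:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

toy = pd.DataFrame({'colour': ['red', 'blue', 'red'],
                    'status': ['ACTIVE', 'LEFT CLUB', 'ACTIVE']})
encoders = {}
for col in toy.columns:
    enc = LabelEncoder()
    toy[col] = enc.fit_transform(toy[col].astype(str))
    encoders[col] = enc  # keep the fitted encoder for this column

# Recover the original labels when needed
decoded = encoders['colour'].inverse_transform(toy['colour'])
print(list(decoded))  # ['red', 'blue', 'red']
```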
# Final feature selection
Removing t_dat as we have extracted information from it.
```
# read processed data
data = pd.read_csv(path+'/encoded-df.csv.gz', compression='gzip', error_bad_lines=False)
data.head(1)
data2 = data.iloc[:,1:]
data2.head()
data.shape
data2.shape
date = data['t_dat'].unique()
date
# Saving date file for extrapolation later
pd.DataFrame(date).to_csv("C:/Users/FT-LT74/hnm_fashion_recommendation/data/interim/date.csv")
```
# Filling in NaN
Note: NaNs were automatically encoded to 0 above; hence we can standardize and fill the remaining NaNs with 0, provided the integer columns do not already contain the value 0.
```
data2.head(1)
```
#### Checking which columns already have both 0 and NaN values, as we cannot replace their NaNs with 0s
```
for col in data2.columns:
    if data2[col].dtype != 'object':
        val = data2[col].unique()
        if 0 in val and np.isnan(val).any():
            print(col)
            print(val)
```
Since only 'Weekday' has both, we will increase all Weekday values by 1.
```
data2['Weekday'] = data2['Weekday'] + 1
data2['Weekday'].unique()
# Fill up Na with 0
data2.isna().sum(axis=0)
data3 = data2.fillna(0)
data3
# saving file first
data3.to_csv('C:/Users/FT-LT74/hnm_fashion_recommendation/data/interim/replacednull-df.csv.gz', compression='gzip')
```
# Changing data types
```
data3.info()
# checking age
data3['age'].describe()
data3['age'].unique()
```
### Changing all float64 to int64 (besides price)
```
for col in data3.columns:
    if data3[col].dtype == 'float64' and col != 'price':
        # print(col)
        data3[col] = data3[col].astype(int)
data3.head(5)
```
# Checking if article_id values are consecutive
```
# grouping descriptions together?
data3['article_id'].min()
data3['article_id'].max()
ids = sorted(data3['article_id'].unique())
ids
```
From above, we can see that the article_ids are not consecutive.
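Rather than eyeballing the sorted list, the gap structure can be checked directly with `np.diff`; a sketch with hypothetical ids:

```python
import numpy as np

# Hypothetical sorted article_ids
ids = np.array([108775015, 108775044, 110065001, 110065002])
gaps = np.diff(ids)
print(bool(np.all(gaps == 1)))           # False: the ids are not consecutive
print(int(np.count_nonzero(gaps != 1)))  # number of jumps in the sequence
```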
# Saving final file
```
# saving interim file
data3.to_csv('C:/Users/FT-LT74/hnm_fashion_recommendation/data/processed/final-df.csv.gz', compression='gzip')
```
```
%load_ext autoreload
%autoreload 2
```
**References**
- https://arxiv.org/abs/1605.07723
- https://github.com/snorkel-team/snorkel-tutorials
- https://github.com/snorkel-team/snorkel-tutorials/blob/master/spam/01_spam_tutorial.ipynb
- https://medium.com/sculpt/a-technique-for-building-nlp-classifiers-efficiently-with-transfer-learning-and-weak-supervision-a8e2f21ca9c8
```
import sys
from tqdm import tqdm, tqdm_notebook
# tqdm.pandas()
sys.path.append("../src/")
from ssp.snorkel.labelling_function import SSPTweetLabelling
! ls ../../../
labeller = SSPTweetLabelling(config_file_path="../../../config.ini",
lf_train_dataset_path="original/ssp_LF_dataset.parquet",
lf_test_dataset_path="original/ssp_test_dataset.parquet",
lf_dev_dataset_path="original/ssp_val_dataset.parquet")
labeller.run()
import re
ttt = "RT @KS_Beringer: A good read for end of 2019! https://t.co/0jTfGLLqYk"
from ssp.snorkel.ai_key_words import AIKeyWords
def is_ai_tweet(text):
    text = text.replace("#", "").replace("@", "")
    for tag in AIKeyWords.ALL.split("|"):
        if f' {tag.lower()} ' in f' {text.lower()} ':
            print(tag)
            return True
    return False

def labelme(text, keywords=AIKeyWords.ALL.split("|")):
    text = text.replace("#", "").replace("@", "")
    res = 0
    for keyword in keywords:
        if f' {keyword.lower()} ' in f' {text.lower()} ':
            res = 1
    return res
is_ai_tweet(ttt), labelme(ttt)
len(AIKeyWords.ALL.split("|"))
def pick_text(text, rtext, etext):
    ret = ""
    if etext:
        ret = etext
    elif rtext:
        ret = rtext
    else:
        ret = text
    return re.sub("\n|\r", "", ret).strip()
pick_text("text", "rtext", "etext")
pick_text("text", None, None)
ttttt = """By conceptualizing a combination of matter, energy, and information, digitization of physical products and production has become an emerging idea in sustainability. Source @mitsmr Link >> https://t.co/2tpuei2ER8 via @antgrasso #DigitalStrategy #DigitalTransformation https://t.co/XyS9AVC4pf
"""
print(pick_text("None", None, ttttt))
from sklearn.model_selection import train_test_split
df
def utopianTree(n):
    # initial: 0, spring: 1 -> x * 2, summer: 2 -> x + 1
    height = 1
    for x in range(1, n + 1):
        if x % 2 == 1:
            height *= 2
        else:
            height += 1
    return height
utopianTree(1)
list(range(1,1+1))
1 % 2
# Python 3 program to reverse the digits of a number.
# Note: rev_num and base_pos are shared module-level state, so they must be
# reset before each call; otherwise repeated calls accumulate stale values.
rev_num = 0
base_pos = 1

def reversDigits(num):
    global rev_num
    global base_pos
    if num > 0:
        reversDigits(num // 10)
        rev_num += (num % 10) * base_pos
        base_pos *= 10
    return rev_num

%time
for i in tqdm(range(100000)):
    rev_num, base_pos = 0, 1  # reset the shared state between calls
    reversDigits(i)
%time
def reversDigits1(num):
    # Reverse via string slicing; same result, simpler than the recursive version
    return int(str(num)[::-1])
%timeit reversDigits1(123456)
0 % 5
list(range(1, 5+1))
5 // 2
from itertools import cycle
ss = [1,2,3,4]
tt = [1,1]
list(zip(ss, cycle(tt)))
def saveThePrisoner(n, m, s):
    prisoners = list(range(s, n + 1)) + list(range(1, s))
    candies = list(range(1, m + 1))
    for pair in tqdm(zip(cycle(prisoners), candies)):
        last = pair
    print(last)
    return last[0]  # the prisoner who receives the final candy
saveThePrisoner(3, 394274638, 3) == 3
def saveThePrisoner(n, m, s):
    circle_prisoners = n - s
    print(circle_prisoners)
    if circle_prisoners == 0:
        circle_prisoners = 1
    remainder = m % circle_prisoners
    if n > m:
        remainder = remainder
    else:
        remainder = remainder + s
    return remainder
saveThePrisoner(3, 394274638, 3)
saveThePrisoner(5, 2, 1) == 2
saveThePrisoner(5, 2, 2) == 3
def saveThePrisoner(n, m, s):
    prisoners_index = list(range(s, n + 1)) + list(range(1, s))
    print(prisoners_index)
    left_out_candies = m % n
    print(left_out_candies)
    return prisoners_index[left_out_candies - 1]
saveThePrisoner(5, 2, 2)
saveThePrisoner(3, 394274638, 3)
saveThePrisoner(654809340,204894365,472730208)
```
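The list-based versions above allocate `n` entries, which is wasteful for large `n` (the last call builds a list of roughly 650 million ids). The same seat calculation as an O(1) arithmetic sketch:

```python
def save_the_prisoner(n, m, s):
    # Seat that receives the m-th candy, with seats numbered 1..n starting at s.
    seat = (s + m - 1) % n
    return seat if seat != 0 else n

print(save_the_prisoner(5, 2, 1))          # 2
print(save_the_prisoner(5, 2, 2))          # 3
print(save_the_prisoner(3, 394274638, 3))  # 3
```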
<a href="https://colab.research.google.com/github/Ravio1i/ki-lab/blob/master/0_Simple_NN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Simple Neural Network with PyTorch. Original source can be found [here](https://pytorch.org/tutorials/beginner/pytorch_with_examples.html).
```
import torch
import torch.nn.functional as F
from torch import optim
import torchvision
import matplotlib.pyplot as plt
from time import time
print(torch.__version__)
print(torchvision.__version__)
```
# Network
TwoLayerNet with configurable activation function
```
class TwoLayerNet(torch.nn.Module):
    def __init__(self, input_size: int, hidden_size: int, output_size: int, activation_function=F.log_softmax):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(input_size, hidden_size)
        self.linear2 = torch.nn.Linear(hidden_size, output_size)
        # Use the configured activation function (previously hard-coded to F.log_softmax)
        self.activation_function = activation_function

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        # ReLU from PyTorch
        h_relu = F.relu(self.linear1(x))
        # h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return self.activation_function(y_pred)
```
# DATA LOADER
Using QMNIST, because MNIST is not reachable
```
#!wget www.di.ens.fr/~lelarge/MNIST.tar.gz
#!tar -zxvf MNIST.tar.gz
batch_size_train = 64
batch_size_test = 1000
train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.QMNIST('/files/', train=True, download=True,
                                transform=torchvision.transforms.Compose([
                                    torchvision.transforms.ToTensor(),
                                    torchvision.transforms.Normalize((0.1307,), (0.3081,))
                                ])),
    batch_size=batch_size_train, shuffle=True
)
test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.QMNIST('/files/', train=False, download=True,
                                transform=torchvision.transforms.Compose([
                                    torchvision.transforms.ToTensor(),
                                    torchvision.transforms.Normalize((0.1307,), (0.3081,))
                                ])),
    batch_size=batch_size_test, shuffle=True
)
```
# Preprocessing
Showing the data size and sample data
```
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
print(example_data.shape)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Ground Truth: {}".format(example_targets[i]))
    plt.xticks([])
    plt.yticks([])
fig
# get data and labels from train_loader
x, y = next(iter(train_loader))
print(x.shape)
# Flatten tensor
print(x.view(x.shape[0], -1).shape)
#print(x.flatten().shape)
```
# Train
Method to train the model with configurable parameters
```
def train(model, epoch: int, loss_function: torch.nn.functional, optimizer: torch.optim, device: torch.device, log_interval: int = 100):
    """Forward pass: Compute predicted y by passing x to the model
    """
    global train_losses, train_counter
    for batch_idx, (x, y) in enumerate(train_loader):
        x, y = x.to(device), y.to(device)
        x = x.view(x.shape[0], -1)
        optimizer.zero_grad()
        y_pred = model(x)
        # Compute and print loss
        loss = loss_function(y_pred, y)
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(x), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            train_losses.append(loss.item())
            train_counter.append(
                (batch_idx * 64) + ((epoch - 1) * len(train_loader.dataset)))
```
# Test
Method to test model with data from test loader
```
def test(model, device):
    global test_losses
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            x = x.view(x.shape[0], -1)
            y_hat = model(x)
            test_loss += F.nll_loss(y_hat, y, reduction='sum').item()
            pred = y_hat.data.max(1, keepdim=True)[1]
            correct += pred.eq(y.data.view_as(pred)).sum()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
```
## PLOT
Plots the training and test loss.
```
def plot():
    global train_losses, train_counter, test_losses, test_counter
    fig = plt.figure()
    print(train_counter)
    print(train_losses)
    plt.plot(train_counter, train_losses, color='blue')
    print(test_counter)
    print(test_losses)
    plt.scatter(test_counter, test_losses, color='red')
    plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
    plt.xlabel('number of training examples seen')
    plt.ylabel('negative log likelihood loss')
    fig
```
# Execute
Re-runnable execution of training and testing with configurable parameters.
```
def run(device_name: str, input_size: int, hidden_size: int, output_size: int,
        n_epochs: int = 50, activation_function=F.log_softmax, loss_function=F.nll_loss):
    # Initialize the global loss/counter lists
    global train_losses, train_counter, test_losses, test_counter
    train_losses = []
    train_counter = []
    test_losses = []
    test_counter = [0, n_epochs*len(train_loader.dataset)]  #[i*len(train_loader.dataset) for i in range(n_epochs + 1)]
    device = torch.device(device_name)
    out = """
DEVICE: {}
EPOCHS: {}
INPUT_SIZE: {}
HIDDEN_SIZE: {}
OUTPUT_SIZE: {}
ACTIVATION_FUNCTION: {}
LOSS_FUNCTION: {}
""".format(device_name, n_epochs, input_size, hidden_size, output_size, activation_function, loss_function)
    print(out)
    # Construct our model by instantiating the class defined above
    model = TwoLayerNet(input_size, hidden_size, output_size, activation_function)
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    # Evaluate once before training, then train and evaluate again
    test(model, device)
    time_start = time()
    for epoch in range(1, n_epochs + 1):
        train(model, epoch, loss_function, optimizer, device)
    print("Training Time (in minutes) =", (time() - time_start) / 60)
    test(model, device)
    plot()
    train_losses = []
    train_counter = []
    test_losses = []
    test_counter = []
```
## CPU
```
run(
device_name="cpu",
input_size=784,
hidden_size=100,
output_size=10,
n_epochs=50,
)
```
## GPU (CUDA)
```
run(
device_name="cuda",
input_size=784,
hidden_size=100,
output_size=10,
n_epochs=50,
)
```
## Hidden Layers
```
run(
device_name="cuda",
input_size=784,
hidden_size=200,
output_size=10
)
run(
device_name="cuda",
input_size=784,
hidden_size=784,
output_size=10
)
```
## Softmax
```
run(
device_name="cuda",
input_size=784,
hidden_size=100,
output_size=10,
activation_function=F.softmax,
loss_function=torch.nn.CrossEntropyLoss()
)
```
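One caveat worth noting (my observation, not part of the original notebook): `torch.nn.CrossEntropyLoss` applies log-softmax internally and expects raw logits, so feeding it softmax outputs compresses the scores. A minimal NumPy softmax shows the normalization involved:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p.sum())  # 1.0: softmax outputs form a probability distribution
```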
```
# import the mbd package
import pymbd as pymbd # python functions
import pymbd.lib as mbd # fortran functions
print(pymbd)
print(mbd)
import numpy as np
from itertools import chain
import matplotlib.pyplot as plt
%matplotlib inline
bohr = mbd.bohr
print(bohr)
# initialize the frequency grid to 20 points
mbd.init_grid(20)
```
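The `/bohr` divisions below convert lengths from ångström to atomic units. A small sketch using the CODATA value of the Bohr radius (an assumption here; the notebook reads the constant from `mbd.bohr`):

```python
# Bohr radius in angstrom (CODATA value; the notebook uses mbd.bohr instead)
BOHR_IN_ANGSTROM = 0.52917721

def to_bohr(length_angstrom):
    """Convert a length from angstrom to atomic units (bohr)."""
    return length_angstrom / BOHR_IN_ANGSTROM

print(to_bohr(4.0))  # ~7.56 bohr, the Ar-Ar separation used below
```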
## Argon dimer
```
species = ['Ar', 'Ar']
xyz = np.array([(0., 0., 0.), (4., 0., 0.)])/bohr  # wrap in an array so division by bohr broadcasts
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
mbd.get_single_mbd_energy(
'', 'fermi,dip',
xyz, alpha_0, omega,
r_vdw=R_vdw, beta=1., a=6.
)[0]
mbd.get_single_rpa_energy(
'', 'fermi,dip',
xyz, mbd.alpha_dynamic_ts_all('C', mbd.n_grid_omega, alpha_0, c6=C6),
r_vdw=R_vdw, beta=1., a=6.
)[0]
mbd.get_ts_energy(
'', 'fermi2',
xyz, C6, alpha_0,
r_vdw=R_vdw, s_r=1., d=6.
)
```
## Linear argon chain
```
species = ['Ar']
xyz = np.array([(0., 0., 0.)])/bohr  # wrap in an array so division by bohr broadcasts
uc = np.array([(4., 0., 0.), (0., 10., 0.), (0., 0., 10.)])/bohr
mbd.param_vacuum_axis = (False, True, True)
mbd.param_k_grid_shift = 0
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
k_grid = mbd.make_k_grid(mbd.make_g_grid(200, 1, 1), uc)
omegas = mbd.get_reciprocal_mbd_energy(
'REV', 'dip,gg',
xyz, alpha_0, omega, k_grid, uc,
r_vdw=R_vdw, beta=1., a=6.
)[1]
plt.plot(*chain.from_iterable(
(*zip(*sorted(zip(k_grid[:, 0], omegas[:, i]))), 'b-')
for i in range(omegas.shape[1])
))
ns_kpt = [4, 8, 12, 20, 40, 80]
enes_periodic = []
for n_kpt in ns_kpt:
    k_grid = mbd.make_k_grid(mbd.make_g_grid(n_kpt, 1, 1), uc)
    ene = mbd.get_reciprocal_mbd_energy(
        'R', 'fermi,dip',
        xyz, alpha_0, omega, k_grid, uc,
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_periodic.append(ene)
ns_cell = [4, 8, 12, 20, 40, 80]
enes_supercell = []
for n_cell in ns_cell:
    ene = mbd.get_supercell_mbd_energy(
        'C', 'fermi,dip',
        xyz, alpha_0, omega, uc, (n_cell, 1, 1),
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_supercell.append(ene)
plt.plot(
ns_cell, enes_supercell, 'b',
ns_kpt, enes_periodic, 'r'
)
mbd.get_ts_energy('C', 'fermi2', xyz, C6, alpha_0, r_vdw=R_vdw, s_r=1., d=6., unit_cell=uc)
(enes_supercell[-1], enes_periodic[-1])
```
## Linear argon chain (2 atoms in cell)
```
species = ['Ar', 'Ar']
xyz = np.array([(0., 0., 0.), (4., 0., 0.)])/bohr  # wrap in an array so division by bohr broadcasts
uc = np.array([(8., 0., 0.), (0., 10., 0.), (0., 0., 10.)])/bohr
mbd.param_vacuum_axis = (False, True, True)
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
k_grid = mbd.make_k_grid(mbd.make_g_grid(200, 1, 1), uc)
omegas = mbd.get_reciprocal_mbd_energy(
'REV', 'fermi,dip',
xyz, alpha_0, omega, k_grid, uc,
r_vdw=R_vdw, beta=1., a=6.
)[1]
plt.plot(*chain.from_iterable((
*zip(*sorted(zip(k_grid[:, 0], omegas[:, i]))),
'b-',
*zip(*sorted(zip(k_grid[:, 0]+2*np.pi/8*bohr, omegas[:, i]))),
'b-'
) for i in range(omegas.shape[1])))
ns_kpt = [4, 8, 12, 20, 40, 80]
enes_periodic = []
for n_kpt in ns_kpt:
    k_grid = mbd.make_k_grid(mbd.make_g_grid(n_kpt, 1, 1), uc)
    ene = mbd.get_reciprocal_mbd_energy(
        'R', 'fermi,dip',
        xyz, alpha_0, omega, k_grid, uc,
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_periodic.append(ene)
ns_cell = [4, 8, 12, 20, 40, 80]
enes_supercell = []
for n_cell in ns_cell:
    ene = mbd.get_supercell_mbd_energy(
        'C', 'fermi,dip',
        xyz, alpha_0, omega, uc, (n_cell, 1, 1),
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_supercell.append(ene)
plt.plot(
ns_cell, enes_supercell, 'b',
ns_kpt, enes_periodic, 'r'
)
mbd.get_ts_energy('C', 'fermi2', xyz, C6, alpha_0, r_vdw=R_vdw, s_r=1., d=6., unit_cell=uc)/2
(enes_supercell[-1]/2, enes_periodic[-1]/2)
```
## Two parallel argon chains
```
species = ['Ar', 'Ar']
xyz = np.array([(0., 0., 0.), (0., 0., 4.)])/bohr  # wrap in an array so division by bohr broadcasts
uc = np.array([(4., 0., 0.), (0., 10., 0.), (0., 0., 10.)])/bohr
mbd.param_vacuum_axis = (False, True, True)
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
k_grid = mbd.make_k_grid(mbd.make_g_grid(200, 1, 1), uc)
omegas = mbd.get_reciprocal_mbd_energy(
'REV', 'fermi,dip',
xyz, alpha_0, omega, k_grid, uc,
r_vdw=R_vdw, beta=1., a=6.
)[1]
plt.plot(*chain.from_iterable(
(*zip(*sorted(zip(k_grid[:, 0], omegas[:, i]))), 'b-')
for i in range(omegas.shape[1])
))
ns_kpt = [4, 8, 12, 20, 40, 80]
enes_periodic = []
for n_kpt in ns_kpt:
    k_grid = mbd.make_k_grid(mbd.make_g_grid(n_kpt, 1, 1), uc)
    ene = mbd.get_reciprocal_mbd_energy(
        'R', 'fermi,dip',
        xyz, alpha_0, omega, k_grid, uc,
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_periodic.append(ene)
ns_cell = [4, 8, 12, 20, 40, 80]
enes_supercell = []
for n_cell in ns_cell:
    ene = mbd.get_supercell_mbd_energy(
        'C', 'fermi,dip',
        xyz, alpha_0, omega, uc, (n_cell, 1, 1),
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_supercell.append(ene)
plt.plot(ns_cell, enes_supercell, 'b', ns_kpt, enes_periodic, 'r')
mbd.get_ts_energy('C', 'fermi2', xyz, C6, alpha_0, r_vdw=R_vdw, s_r=1., d=6., unit_cell=uc)
(enes_supercell[-1], enes_periodic[-1])
```
## Argon crystal
```
species = ['Ar']
xyz = np.array([(0., 0., 0.)])/bohr  # wrap in an array so division by bohr broadcasts
uc = np.array([(4., 0., 0.), (0., 4., 0.), (0., 0., 4.)])/bohr
mbd.param_vacuum_axis = (False, False, False)
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
k_grid = mbd.make_k_grid(mbd.make_g_grid(200, 1, 1), uc)
omegas = mbd.get_reciprocal_mbd_energy(
'REV', 'fermi,dip',
xyz, alpha_0, omega, k_grid, uc,
r_vdw=R_vdw, beta=1., a=6.
)[1]
plt.plot(*chain.from_iterable(
(*zip(*sorted(zip(k_grid[:, 0], omegas[:, i]))), 'b-')
for i in range(omegas.shape[1])
))
mbd.param_k_grid_shift = 0.5
ns_kpt = [3, 4, 5, 6, 7, 8]
enes_periodic = []
for n_kpt in ns_kpt:
    k_grid = mbd.make_k_grid(mbd.make_g_grid(n_kpt, n_kpt, n_kpt), uc)
    ene = mbd.get_reciprocal_mbd_energy(
        'R', 'fermi,dip',
        xyz, alpha_0, omega, k_grid, uc,
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_periodic.append(ene)
ns_cell = [3, 4, 5, 6, 7]
enes_supercell = []
for n_cell in ns_cell:
    ene = mbd.get_supercell_mbd_energy(
        'C', 'fermi,dip',
        xyz, alpha_0, omega, uc, (n_cell, n_cell, n_cell),
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_supercell.append(ene)
plt.plot(ns_cell, enes_supercell, 'b', ns_kpt, enes_periodic, 'r')
mbd.get_ts_energy('C', 'fermi2', xyz, C6, alpha_0, r_vdw=R_vdw, s_r=1., d=6., unit_cell=uc)
```
## Graphene
```
species = ['C', 'C']
xyz = np.array([(0., 0., 0.), (2.46000413, 1.42034734, 0.)])/bohr  # wrap in an array so division by bohr broadcasts
uc = np.array([
(2.45999892, 0.00000000, 0.00000000),
(1.22999946, 2.13042155, 0.00000000),
(0.00000000, 0.00000000, 100.00000000)
])/bohr
mbd.param_vacuum_axis = (False, False, True)
mbd.param_k_grid_shift = 0.
alpha_0, C6, R_vdw = pymbd.get_free_atom_data(species)
omega = mbd.omega_eff(C6, alpha_0)
k_grid = mbd.make_k_grid(mbd.make_g_grid(200, 1, 1), uc)
omegas = mbd.get_reciprocal_mbd_energy(
'REV', 'fermi,dip',
xyz, alpha_0, omega, k_grid, uc,
r_vdw=R_vdw, beta=1., a=6.
)[1]
plt.plot(*chain.from_iterable(
(*zip(*sorted(zip(k_grid[:, 0], omegas[:, i]))), 'b-')
for i in range(omegas.shape[1])
))
ns_kpt = [4, 6, 8, 10, 15, 20, 30]
enes_periodic = []
for n_kpt in ns_kpt:
    k_grid = mbd.make_k_grid(mbd.make_g_grid(n_kpt, n_kpt, 1), uc)
    ene = mbd.get_reciprocal_mbd_energy(
        'R', 'fermi,dip',
        xyz, alpha_0, omega, k_grid, uc,
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_periodic.append(ene)
ns_cell = [5, 7, 9, 11, 13, 17]
enes_supercell = []
for n_cell in ns_cell:
    ene = mbd.get_supercell_mbd_energy(
        'C', 'fermi,dip',
        xyz, alpha_0, omega, uc, (n_cell, n_cell, 1),
        r_vdw=R_vdw, beta=1., a=6.
    )[0]
    enes_supercell.append(ene)
plt.plot(ns_cell, enes_supercell, 'b', ns_kpt, enes_periodic, 'r')
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Pre_data = pd.read_csv("C:\\Users\\2019A00303\\Desktop\\Code\\Airbnb Project\\Data\\PreProcessingNetherlands.csv")
Pre_data
Pre_data['Price'].plot(kind='hist', bins=100)
Pre_data['group'] = pd.cut(x=Pre_data['Price'],
bins=[0, 50, 100, 150, 200, 1000],
labels=['group_1','group_2','group_3','group_4','group_5'])
Pre_data.head()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(Pre_data, Pre_data["group"]):
    train = Pre_data.loc[train_index]
    test = Pre_data.loc[test_index]
train['group'].value_counts() / len(train)
test['group'].value_counts() / len(test)
train.drop('group', axis=1, inplace=True)
train.head()
test.drop(['Unnamed: 0','group', 'Host Since', 'Country', 'Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
test.head()
train_y = train[['Price']]
train_y.head()
train.drop(['Unnamed: 0', 'Price', 'Host Since', 'Country','Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
train_X = train
train_X.head()
test_y= test[['Price']]
test_y.head()
test.drop('Price', axis=1, inplace=True)
test_X = test
test_X.head()
# from sklearn.linear_model import LinearRegression
# l_reg = LinearRegression()
# l_reg.fit(train_X, train_y)
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
import numpy as np
# predictions = l_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = l_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.tree import DecisionTreeRegressor
# d_reg = DecisionTreeRegressor()
# d_reg.fit(train_X, train_y)
# predictions = d_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = d_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.svm import SVR
# svr = SVR()
# svr.fit(train_X, train_y)
# predictions = svr.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = svr.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neighbors import KNeighborsRegressor
# knn = KNeighborsRegressor()
# knn.fit(train_X, train_y)
# predictions = knn.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = knn.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neural_network import MLPRegressor
# ann = MLPRegressor()
# ann.fit(train_X, train_y)
# predictions = ann.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = ann.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
from sklearn.ensemble import RandomForestRegressor
r_reg = RandomForestRegressor()
r_reg.fit(train_X, train_y)
features = train_X.columns
importances = r_reg.feature_importances_
indices = np.argsort(importances)
plt.title('Netherlands Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='g', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
predictions = r_reg.predict(train_X)
mse = mean_squared_error(train_y, predictions)
mae = mean_absolute_error(train_y, predictions)
rmse = np.sqrt(mse)
print(mse, rmse, mae)
# from sklearn.model_selection import GridSearchCV
# param = {'n_estimators' : [800,900,1000], 'max_features' : ['sqrt','auto','log2'], 'max_depth' : [8,9,10],
# 'min_samples_split': [2,3,4]}
# r_reg = RandomForestRegressor(random_state=42)
# search = GridSearchCV(r_reg, param, cv=5,
# scoring='neg_mean_absolute_error')
# search.fit(train_X, train_y['Price'].ravel())
# from sklearn.ensemble import RandomForestRegressor
# r_reg = RandomForestRegressor(bootstrap=True,
# min_samples_split=2,
# criterion='mse',
# max_depth=None,
# max_features='auto',
# n_estimators=1000,
# random_state=42,
# )
# r_reg.fit(train_X, train_y['Price'].ravel())
# predictions = r_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
```
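Every commented-out model block above repeats the same MSE/RMSE/MAE evaluation. A NumPy sketch of those three metrics (the helper name is mine, not from the notebook):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (mse, rmse, mae), mirroring the evaluation repeated above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    return mse, np.sqrt(mse), np.mean(np.abs(err))

mse, rmse, mae = regression_metrics([100, 150, 200], [110, 140, 190])
print(mse, rmse, mae)  # 100.0 10.0 10.0
```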
# GPU-Accelerated Numerical Computing with MatX
## Tutorial List
1. [Introduction](01_introduction.ipynb)
2. [Operators](02_operators.ipynb)
3. Executors (this tutorial)
4. [Radar Pipeline Example](04_radar_pipeline.ipynb)
## Executors
MatX executors are functions that execute work on the device. Operators and generators were introduced in the last tutorial as a way to generate a CUDA kernel from an expression, but they do not execute any work on the device. The `run` executor took an operator as input and executed it on the device. Many other types of executors exist in MatX where more complex functions can be executed alongside operators. Some executors are wrappers around existing CUDA libraries, while others are custom executors inside of MatX. This distinction is hidden from developers so that the implementation of an executor can change over time without modifying the client code. Some executors can take an operator as input, while others can only take tensors as input. These restrictions are noted in the MatX documentation, and may be relaxed or removed in future versions.
Besides `run`, other executors typically allow non-element-wise kernels to execute using highly-optimized library backends. Some examples of this would be a matrix multiply (GEMM), reduction, FFT, sorting, and linear solvers. Besides the type of inputs allowed, executors may also have restrictions on the rank and/or size of a tensor. For example, performing a GEMM requires that the tensors are at least rank 2 (i.e. be a matrix), and the last dimension of the first tensor must match the second-to-last dimension of the second tensor (`MxK * KxN`). Most executors support batching, and anything above the nominal rank will result in batching dimensions. In a 1D FFT this would mean that any dimension above 1 is treated as another 1D batched FFT, and a 2D FFT would batch any dimensions above 2.
Some executors use CUDA libraries to implement their functionality, and those libraries require either a handle or a plan to operate. MatX hides this complexity by creating and caching the plan on the first call, and reusing the same plan on future calls where possible. More advanced users may use the handle interface directly to avoid the caching. Only the caching interface is covered in this tutorial since it's the recommended approach, but the non-cached version can be found in the documentation.
### Matrix Multiply
The `matmul` executor performs the matrix-matrix multiply of $$C = {\alpha}A * B + {\beta}C$$ where `A` is of dimensions `MxK`, `B` is `KxN`, and `C` is `MxN`. We first populate the `A` and `B` matrices with random values before the multiply as we did in the example above, then the GEMM is performed. Since the random number generator allocates memory sufficient to randomize the entire tensor, we create a random number generator large enough to generate values for both A and B. This allows us to create a single random number generator, but pull different random values for A and B by simply calling `run` twice. As mentioned above, any rank above 2 is considered a batching dimension.
We use rectangular matrices for `A` and `B`, while `C` will be a square matrix due to the outer dimensions of `A` and `B` matching.
```c++
randomGenerator_t<float> randData(C.TotalSize(), 0);
auto randTensor1 = randData.GetTensorView<2>({8, 4}, NORMAL);
auto randTensor2 = randData.GetTensorView<2>({4, 8}, NORMAL);
(A = randTensor1).run();
(B = randTensor2).run();
matmul(C, A, B);
```
Open the file [exercises/example3_gemm.cu](exercises/example3_gemm.cu) and edit the contents where you see TODO markers.
```
!./exercises/compile_and_run.sh example3_gemm
```
Expected output:
```sh
A:
000000: -0.9247 -0.4253 -2.6438 0.1452
000001: -0.1209 -0.5797 -0.6229 -0.3284
000002: -1.0745 -0.3631 -1.6711 2.2655
000003: 0.3117 -0.1842 1.2866 1.1820
000004: -0.1271 1.2169 1.4353 1.0605
000005: -0.4941 -1.4244 -0.7244 -1.2973
000006: 0.0697 -0.0074 1.8969 0.6878
000007: -0.0779 -0.8373 1.3506 -0.2879
B:
000000: 0.9911 1.0676 -0.6272 0.3202 -0.3110 -0.3441 -1.1709 -0.5371
000001: 1.3390 -0.2401 1.2149 -0.2052 1.2999 0.2181 -1.2135 -1.3723
000002: -0.4635 -0.4089 -0.0032 0.2967 -0.3587 -1.0455 -0.0450 -0.0985
000003: 1.7608 0.9107 0.0288 -1.1128 0.0929 -0.1502 -0.9854 0.7889
C:
000000: -0.0050 0.3283 0.0760 -1.1547 0.6966 2.9677 1.5747 1.4554
000001: -1.1856 -0.0342 -0.6359 0.2609 -0.5231 0.6156 1.1966 0.6628
000002: 3.2124 1.6864 0.3035 -3.2863 0.6721 1.6973 -0.4584 3.0275
000003: 1.5472 0.9272 -0.3894 -0.7960 -0.6881 -1.6701 -1.3640 0.8911
000004: 2.7056 -0.0490 1.5840 -1.0446 1.2051 -1.3507 -2.4374 -0.9065
000005: -4.3456 -1.0707 -1.4556 1.3628 -1.5586 0.8115 3.6179 1.2680
000006: 0.3910 -0.0732 -0.0391 -0.1788 -0.6479 -2.1121 -0.8357 0.3284
000007: -2.3314 -0.6966 -0.9810 0.8679 -1.5754 -1.5246 1.3302 0.8306
```
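The GEMM above can be mirrored in NumPy to make the shape rules concrete (a sketch, not MatX code; treating alpha=1 and beta=0 is my simplification):

```python
import numpy as np

M, K, N = 8, 4, 8
rng = np.random.default_rng(0)
A = rng.standard_normal((M, K))   # MxK
B = rng.standard_normal((K, N))   # KxN
C = np.zeros((M, N))              # MxN

alpha, beta = 1.0, 0.0
C = alpha * (A @ B) + beta * C    # C = alpha*A*B + beta*C
print(C.shape)  # (8, 8)
```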
### FFT
MatX provides an interface to do both 1D Fast Fourier Transforms (FFTs) and 2D FFTs. Any tensor above rank 1 will be batched in a 1D FFT, and any tensor above rank 2 will be batched in a 2D FFT. FFTs may either be done in-place or out-of-place by using the same or different variables for the output and inputs. Since the tensors are strongly-typed, the type of FFT (C2C, R2C, etc) is inferred by the tensor type at compile time. Similarly, the input and output size of the executor is deduced by the type of transform, and the input/output tensors must match those sizes. There's one exception to this rule, and it's when the input FFT is to be zero-padded at the end. In this case, the input tensor can be shorter than the output tensor, and the input will be zero-padded to the length of the output tensor. This is a common tactic used in signal and image processing for both speed and FFT resolution.
In this example, we execute a 1D batched FFT on a 2D tensor populated with random complex floating point data. Since the FFT executor is performed in-place, the input and output types of the tensors are the same, and the type of the FFT is inferred as complex-to-complex (`C2C`). The FFT length is specified by the inner dimension of the tensor (4 in this example), and the outer dimension is the number of batches (2). After the FFT completes, we perform an IFFT on the same tensor using the `ifft` interface. Ignoring floating point inaccuracies, the result of `ifft(fft(A))` should be the same as `A`, and this is shown by printing the tensors at each step. To perform a batched FFT on columns instead of rows, the tensor can be transposed by calling the `Permute` function used in the first tutorial. When the library detects that a permuted tensor is being used, it can use techniques to speed up the FFT over the naive method of rearranging the data in memory.
```c++
C.Print();
fft(C, C);
C.Print();
ifft(C, C);
C.Print();
```
Open the file [exercises/example3_1dfft.cu](exercises/example3_1dfft.cu) and edit the contents where you see TODO markers.
```
!./exercises/compile_and_run.sh example3_1dfft
```
Expected output:
```sh
Initial C tensor:
000000: -0.9247+0.9911j -0.4253+1.0676j -2.6438-0.6272j 0.1452+0.3202j
000001: -0.1209-0.3110j -0.5797-0.3441j -0.6229-1.1709j -0.3284-0.5371j
After FFT:
000000: -3.8487+1.7517j 2.4666+2.1889j -3.2883-1.0238j 0.9718+1.0478j
000001: -1.6518-2.3630j 0.6950+1.1112j 0.1644-0.6007j 0.3090+0.6085j
After IFFT and normalization:
000000: -0.9247+0.9911j -0.4253+1.0676j -2.6438-0.6272j 0.1452+0.3202j
000001: -0.1209-0.3110j -0.5797-0.3441j -0.6229-1.1709j -0.3284-0.5371j
```
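The C2C round trip can be checked with NumPy, batching 1D FFTs over the rows of a 2x4 complex array to match the shapes above (an analogy, not MatX code):

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

spec = np.fft.fft(C, axis=-1)      # batched 1D FFT over each row
back = np.fft.ifft(spec, axis=-1)  # normalized inverse transform

print(np.allclose(back, C))  # True, up to floating point error
```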
Next, we take the same 2D tensor and perform a 2D FFT on it. Since the rank is 2, it will not be batched as in the previous example.
```c++
C.Print();
fft2(C, C);
C.Print();
ifft2(C, C);
C.Print();
```
As before, the results after the IFFT closely match the original `C` tensor, but with floating point error.
Open the file [exercises/example3_2dfft.cu](exercises/example3_2dfft.cu) and edit the contents where you see TODO markers.
```
!./exercises/compile_and_run.sh example3_2dfft
```
Expected output:
```sh
Intial C tensor:
000000: -0.9247+0.9911j -0.4253+1.0676j -2.6438-0.6272j 0.1452+0.3202j
000001: -0.1209-0.3110j -0.5797-0.3441j -0.6229-1.1709j -0.3284-0.5371j
After FFT:
000000: -2.0506+1.4036j -0.0405-0.0434j -2.6438-0.6272j 0.1452+0.3202j
000001: -2.0051+2.7593j -0.4662-0.5353j -0.6229-1.1709j -0.3284-0.5371j
After IFFT and normalization:
000000: -1.8493+1.9823j -0.8507+2.1352j -0.6610-0.1568j 0.0363+0.0800j
000001: -0.2417-0.6220j -1.1595-0.6882j -0.1557-0.2927j -0.0821-0.1343j
```
### Reductions
A reduction operation takes multiple values and aggregates them into a smaller number of values. Most reductions take a large number of values and reduce them to a single value. Reductions are one of the most common operations performed on the GPU, which means they've been heavily researched and optimized for highly-parallel processors. Modern NVIDIA GPUs have special instructions for performing reductions to give even larger speedups over naive implementations. All of these details are hidden from the user, and MatX automatically chooses the optimized path based on the hardware capabilities.
MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`. Note that the max and min reductions use the names `rmin` and `rmax` to avoid name collision with the element-wise `min` and `max` operators.
#### Full Reduction
In this example we reduce an entire tensor to a single value by applying the reduction across all dimensions of the tensor. We apply the same random initialization from previous examples on a 2D tensor `A`. Note that the output tensor must be zeroed for a `sum` reduction since that value is continually added to during the reduction. Not initializing the output tensor will give undefined results since the variables are used as accumulators throughout the reduction. With the tensor initialized, we perform both a `max` and `sum` reduction across all dimensions of the tensor:
```c++
rmax(MD0, A);
sum(AD0, A);
```
Open the file [exercises/example3_full_reduce.cu](exercises/example3_full_reduce.cu) and edit the contents where you see TODO markers.
```
!./exercises/compile_and_run.sh example3_full_reduce
```
Expected output:
```sh
A:
000000: -0.9247 -0.4253 -2.6438 0.1452 -0.1209
000001: -0.5797 -0.6229 -0.3284 -1.0745 -0.3631
000002: -1.6711 2.2655 0.3117 -0.1842 1.2866
000003: 1.1820 -0.1271 1.2169 1.4353 1.0605
Max: 2.265505
Sum: -0.162026
```
#### Dimensional Reductions
Reductions can also be performed across certain dimensions instead of the whole tensor. Dimensional reductions are useful in situations where each row contains data for a different user, for example, and we wish to sum up each user's data. By setting the output tensor view to a 1D tensor, independent reductions can be performed across the input tensor where each output element corresponds to a single row reduction from the input. Using the same tensor `A` from the previous example, we only change the output tensor type to be a 1D tensor instead of a scalar:
```c++
rmax(MD1, A);
sum(AD1, A);
```
Printing the new reduction tensors shows the reduced values across each row of the input tensor `A`.
Open the file [exercises/example3_partial_reduce.cu](exercises/example3_partial_reduce.cu) and edit the contents where you see TODO markers.
```
!./exercises/compile_and_run.sh example3_partial_reduce
```
Expected output:
```sh
A:
000000: -0.9247 -0.4253 -2.6438 0.1452 -0.1209
000001: -0.5797 -0.6229 -0.3284 -1.0745 -0.3631
000002: -1.6711 2.2655 0.3117 -0.1842 1.2866
000003: 1.1820 -0.1271 1.2169 1.4353 1.0605
Max:
000000: 0.1452
000001: -0.3284
000002: 2.2655
000003: 1.4353
Sum:
000000: -3.9695
000001: -2.9686
000002: 2.0086
000003: 4.7676
```
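The distinction between full and dimensional reductions maps directly onto NumPy's `axis` argument (an analogy, not MatX code):

```python
import numpy as np

A = np.array([[-0.9, -0.4, -2.6],
              [ 1.1,  2.2,  0.3]])

full_max = A.max()         # reduce across all dimensions -> scalar
row_max = A.max(axis=1)    # reduce each row -> 1D result
row_sum = A.sum(axis=1)

print(full_max)  # 2.2
print(row_max)   # [-0.4  2.2]
```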
### Convolution
MatX supports both 1D and 2D direct convolution using the `conv1d` and `conv2d` functions. FFT-based convolution can also be performed as a combination of existing primitives as a potentially faster alternative to direct convolution for large tensors. Both forms of direct convolution take in an extra mode which specifies how much of the output is saved, where `MATX_C_MODE_FULL` saves the entire filter ramp-up and ramp-down, `MATX_C_MODE_SAME` makes the input and output tensors the same size, and `MATX_C_MODE_VALID` only keeps valid samples (where the entire filter was part of the convolution). Convolution can be used to perform a rolling average of an input by making all filter values 1/N, where N is the length of the filter. In this example, we use a filter of length 3 to create a running average of the last 3 elements:
```c++
conv1d(Co, C, filt, MATX_C_MODE_FULL, 0);
```
```
!./exercises/compile_and_run.sh example3_conv1d
```
Expected output:
```sh
Initial C tensor:
000000: -0.9247
000001: -0.4253
000002: -2.6438
000003: 0.1452
000004: -0.1209
000005: -0.5797
000006: -0.6229
000007: -0.3284
000008: -1.0745
000009: -0.3631
000010: -1.6711
000011: 2.2655
000012: 0.3117
000013: -0.1842
000014: 1.2866
000015: 1.1820
After conv1d:
000000: -0.3082
000001: -0.4500
000002: -1.3313
000003: -0.9747
000004: -0.8732
000005: -0.1851
000006: -0.4411
000007: -0.5103
000008: -0.6753
000009: -0.5887
000010: -1.0362
000011: 0.0771
000012: 0.3020
000013: 0.7977
000014: 0.4714
000015: 0.7615
000016: 0.8229
000017: 0.3940
```
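The running-average trick described above (a length-N filter of all 1/N values) can be sketched with NumPy's direct convolution; `'full'` mode keeps the filter ramp-up and ramp-down just like `MATX_C_MODE_FULL`:

```python
import numpy as np

sig = np.array([3.0, 6.0, 9.0, 6.0, 3.0])
filt = np.ones(3) / 3  # length-3 filter -> 3-sample running average

out = np.convolve(sig, filt, mode='full')
print(len(out))  # len(sig) + len(filt) - 1 = 7
print(out[2])    # ~6.0, the average of 3, 6, 9
```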
Similar to a 1D convolution, a 2D convolution does the same computation over two dimensions. A tensor of at least rank 2 is needed for a 2D convolution. Below we use a filter of all ones using the `ones` operator to demonstrate the filter can also be an operator and not an existing tensor view. The result is the sum of the four values around each cell on the input:
```c++
conv2d(Co, C, filt, MATX_C_MODE_FULL, 0);
```
```
!./exercises/compile_and_run.sh example3_conv2d
```
Last, we mentioned above that convolution can also be done in the frequency domain using FFTs. This is the preferred method for larger tensors since FFTs are much faster than direct convolution at large sizes, and because FFT libraries are highly optimized. FFT convolution uses more memory than direct convolution if the inputs are not to be destroyed, since it requires running an FFT on both the input signal and filter before filtering. If not done in-place, this typically requires `2N + L - 1` new elements in memory, where N is the signal length and L is the filter length. A full FFT convolution example can be found in `fft_conv.cu` in the MatX examples, but the main convolution code is shown below:
```c++
// Perform the FFT in-place on both signal and filter
fft(sig_freq, sig_freq);
fft(filt_freq, filt_freq);
// Perform the pointwise multiply. Overwrite signal buffer with result
(sig_freq = sig_freq * filt_freq).run();
// IFFT in-place
ifft(sig_freq, sig_freq);
```
Since the expected output size of the full filtering operation is signal_len + filter_len - 1, both the filter and signal time domain inputs are shorter than the output. This would normally require a separate stage of allocating buffers of the appropriate size, zeroing them out, copying the time domain data into them, and performing the FFT. However, MatX has an API to do all of this automatically in the library using asynchronous allocations. This makes the first call take a noticeable performance hit, but subsequent calls will be close to the time without allocation. To recognize that automatic padding is wanted, MatX compares the output tensor size to the input tensor size to determine whether to pad the input with zeros. In this case the input signals (sig_time and filt_time) are shorter than the output tensors (sig_freq and filt_freq), so the library will automatically zero-pad the input.
Using the convolution property $ h*x \leftrightarrow H \cdot X$ we simply multiply the signals element-wise after the FFT, then do an IFFT to go back to the time domain.
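That equivalence between full direct convolution and zero-padded FFT convolution is easy to verify in NumPy (a sketch; the padded length follows the `N + L - 1` rule stated above):

```python
import numpy as np

rng = np.random.default_rng(2)
sig = rng.standard_normal(16)   # signal of length N
filt = rng.standard_normal(4)   # filter of length L
n = len(sig) + len(filt) - 1    # full output length: N + L - 1

# Zero-pad both inputs to length n, multiply spectra, transform back
fft_conv = np.fft.ifft(np.fft.fft(sig, n) * np.fft.fft(filt, n)).real
direct = np.convolve(sig, filt, mode='full')

print(np.allclose(fft_conv, direct))  # True
```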
Next, we do the same operation in the time domain using the `conv1d` function:
```c++
conv1d(time_out, sig_time, filt_time, matxConvCorrMode_t::MATX_C_MODE_FULL, 0);
```
To match the FFT results we do a full convolution to get all the samples from the filter ramp-up and ramp-down. However, if we wanted either valid or same mode, we could slice the FFT convolution output at the appropriate places to give the same answer. Edit the file [exercises/example3_fft_conv.cu](exercises/example3_fft_conv.cu) and add the missing code where you see TODOs. After running, the verification code at the bottom will check for accuracy.
Expected output:
```sh
Verification successful
```
```
!./exercises/compile_and_run.sh example3_fft_conv
```
This concludes the third tutorial on MatX. In this tutorial you learned what executors are and how they can be applied to tensor views. In the next example you will walk through an entire radar signal processing pipeline using all the primitives learned up to this point.
[Start Next Tutorial](04_radar_pipeline.ipynb)
\title{Combinational-Circuit Building Blocks aka medium scale integrated circuit (MSI) in myHDL}
\author{Steven K Armour}
\maketitle
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Refs" data-toc-modified-id="Refs-1"><span class="toc-item-num">1 </span>Refs</a></div><div class="lev1 toc-item"><a href="#Python-Libraries-Utilized" data-toc-modified-id="Python-Libraries-Utilized-2"><span class="toc-item-num">2 </span>Python Libraries Utilized</a></div><div class="lev1 toc-item"><a href="#Multiplexers-(mux)" data-toc-modified-id="Multiplexers-(mux)-3"><span class="toc-item-num">3 </span>Multiplexers (mux)</a></div><div class="lev2 toc-item"><a href="#Shannon’s-Expansion-Theorem" data-toc-modified-id="Shannon’s-Expansion-Theorem-31"><span class="toc-item-num">3.1 </span>Shannon’s Expansion Theorem</a></div><div class="lev2 toc-item"><a href="#2:1-MultiPlexer" data-toc-modified-id="2:1-MultiPlexer-32"><span class="toc-item-num">3.2 </span>2:1 MultiPlexer</a></div><div class="lev3 toc-item"><a href="#myHDL-2:1-MUX-Gate-Level-and-Testing" data-toc-modified-id="myHDL-2:1-MUX-Gate-Level-and-Testing-321"><span class="toc-item-num">3.2.1 </span>myHDL 2:1 MUX Gate Level and Testing</a></div><div class="lev3 toc-item"><a href="#myHDL-2:1-MUX-Gate-Level-HDL-Synthesis" data-toc-modified-id="myHDL-2:1-MUX-Gate-Level-HDL-Synthesis-322"><span class="toc-item-num">3.2.2 </span>myHDL 2:1 MUX Gate Level HDL Synthesis</a></div><div class="lev2 toc-item"><a href="#2:1-Multiplexer-Behavioral" data-toc-modified-id="2:1-Multiplexer-Behavioral-33"><span class="toc-item-num">3.3 </span>2:1 Multiplexer Behavioral</a></div><div class="lev3 toc-item"><a href="#myHDL-2:1-MUX-Behavioral-Level-and-Testing" data-toc-modified-id="myHDL-2:1-MUX-Behavioral-Level-and-Testing-331"><span class="toc-item-num">3.3.1 </span>myHDL 2:1 MUX Behavioral Level and Testing</a></div><div class="lev3 toc-item"><a href="#myHDL-2:1-MUX-Behavioral-Level-HDL-Synthesis" data-toc-modified-id="myHDL-2:1-MUX-Behavioral-Level-HDL-Synthesis-332"><span class="toc-item-num">3.3.2 </span>myHDL 2:1 MUX Behavioral Level HDL Synthesis</a></div><div class="lev2 
toc-item"><a href="#4:1-MUX" data-toc-modified-id="4:1-MUX-34"><span class="toc-item-num">3.4 </span>4:1 MUX</a></div><div class="lev3 toc-item"><a href="#!?-Insert-Digram-below" data-toc-modified-id="!?-Insert-Digram-below-341"><span class="toc-item-num">3.4.1 </span>!? Insert Digram below</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Gate-Level-and-Testing" data-toc-modified-id="myHDL-4:1-MUX-Gate-Level-and-Testing-342"><span class="toc-item-num">3.4.2 </span>myHDL 4:1 MUX Gate Level and Testing</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Gate-Level-HDL-Synthesis" data-toc-modified-id="myHDL-4:1-MUX-Gate-Level-HDL-Synthesis-343"><span class="toc-item-num">3.4.3 </span>myHDL 4:1 MUX Gate Level HDL Synthesis</a></div><div class="lev2 toc-item"><a href="#4:1-Multiplexer-Behavioral" data-toc-modified-id="4:1-Multiplexer-Behavioral-35"><span class="toc-item-num">3.5 </span>4:1 Multiplexer Behavioral</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Behavioral-Level-and-Testing" data-toc-modified-id="myHDL-4:1-MUX-Behavioral-Level-and-Testing-351"><span class="toc-item-num">3.5.1 </span>myHDL 4:1 MUX Behavioral Level and Testing</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Behavioral-Level-HDL-Synthesis" data-toc-modified-id="myHDL-4:1-MUX-Behavioral-Level-HDL-Synthesis-352"><span class="toc-item-num">3.5.2 </span>myHDL 4:1 MUX Behavioral Level HDL Synthesis</a></div><div class="lev2 toc-item"><a href="#4:1-Multiplexer-Behavioral-with-bitvectors" data-toc-modified-id="4:1-Multiplexer-Behavioral-with-bitvectors-36"><span class="toc-item-num">3.6 </span>4:1 Multiplexer Behavioral with bitvectors</a></div><div class="lev3 toc-item"><a href="#How-bit-vectors-work-in-myHDL-and-in-Verilog/VHDL" data-toc-modified-id="How-bit-vectors-work-in-myHDL-and-in-Verilog/VHDL-361"><span class="toc-item-num">3.6.1 </span>How bit vectors work in myHDL and in Verilog/VHDL</a></div><div class="lev3 toc-item"><a 
href="#Understanding-BitVector-bit-selection-in-myHDL" data-toc-modified-id="Understanding-BitVector-bit-selection-in-myHDL-362"><span class="toc-item-num">3.6.2 </span>Understanding BitVector bit selection in myHDL</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Behavioral-with-BitVecters-and-Testing" data-toc-modified-id="myHDL-4:1-MUX-Behavioral-with-BitVecters-and-Testing-363"><span class="toc-item-num">3.6.3 </span>myHDL 4:1 MUX Behavioral with BitVecters and Testing</a></div><div class="lev4 toc-item"><a href="#!?-This-needs-to-be-checked" data-toc-modified-id="!?-This-needs-to-be-checked-3631"><span class="toc-item-num">3.6.3.1 </span>!? This needs to be checked</a></div><div class="lev3 toc-item"><a href="#myHDL-4:1-MUX-Behavioral-with-BitVecters-HDL-Synthesis" data-toc-modified-id="myHDL-4:1-MUX-Behavioral-with-BitVecters-HDL-Synthesis-364"><span class="toc-item-num">3.6.4 </span>myHDL 4:1 MUX Behavioral with BitVecters HDL Synthesis</a></div><div class="lev2 toc-item"><a href="#Generic-Expressions-via-MUXs" data-toc-modified-id="Generic-Expressions-via-MUXs-37"><span class="toc-item-num">3.7 </span>Generic Expressions via MUXs</a></div><div class="lev3 toc-item"><a href="#myHDL-Generic-Expression-via-MUXs-and-Testing" data-toc-modified-id="myHDL-Generic-Expression-via-MUXs-and-Testing-371"><span class="toc-item-num">3.7.1 </span>myHDL Generic Expression via MUXs and Testing</a></div><div class="lev3 toc-item"><a href="#myHDL-Generic-Expression-via-MUXs-HDL-Synthesis" data-toc-modified-id="myHDL-Generic-Expression-via-MUXs-HDL-Synthesis-372"><span class="toc-item-num">3.7.2 </span>myHDL Generic Expression via MUXs HDL Synthesis</a></div><div class="lev1 toc-item"><a href="#Demultiplexers" data-toc-modified-id="Demultiplexers-4"><span class="toc-item-num">4 </span>Demultiplexers</a></div><div class="lev1 toc-item"><a href="#Encoders" data-toc-modified-id="Encoders-5"><span class="toc-item-num">5 </span>Encoders</a></div><div class="lev1 
toc-item"><a href="#Decoders" data-toc-modified-id="Decoders-6"><span class="toc-item-num">6 </span>Decoders</a></div>
# Refs
@book{brown_vranesic_2014,
place={New York, NY},
edition={3},
title={Fundamentals of digital logic with Verilog design},
publisher={McGraw-Hill},
author={Brown, Stephen and Vranesic, Zvonko G},
year={2014}
},
@book{lameres_2017,
title={Introduction to logic circuits & logic design with Verilog},
publisher={springer},
author={LaMeres, Brock J},
year={2017}
},
@misc{peeker_simple_mux,
url={http://www.xess.com/static/media/pages/peeker_simple_mux.html},
journal={Xess.com},
year={2017}
},
# Python Libraries Utilized
```
import numpy as np
import pandas as pd
from sympy import *
init_printing()
from myhdl import *
from myhdlpeek import *
import random
from sympy_myhdl_tools import *
pass
```
# Multiplexers (mux)
A multiplexer is a switch that routes one of $n$ inputs to a single output; it is equivalent to an "if" or "case" statement.
Let $Z$ be its output, $m_k$ the minterms of the controls to the mux, and $I_k$ the input feeds to the mux; with $n$ select lines the expression for the mux in terms of boolean algebra becomes
$$Z=\sum^{2^n-1}_{k=0} m_k \cdot I_k= \text{OR}(m_k \& I_k) $$
## Shannon’s Expansion Theorem
The expression above is Shannon's theorem. It can be written more succinctly as:
$$f(x_1, x_2, ..., x_n)=\bar{x_1}f(0, x_2, ..., x_n)+x_1 f(1, x_2, ..., x_n)$$
and then each of $f(0, x_2, ..., x_n)$ \& $f(1, x_2, ..., x_n)$ is broken down in the same way until the maximum number of control statements and the minimum number of inputs are reached.
```
def shannon_exspanson(f, term):
    """
    Shannon-expand boolean expression `f` about `term`; `f` is a sympy
    boolean expression, not a full equation.
    Returns the rebuilt expression and the cofactors f(term=0), f(term=1).
    """
    cof0=simplify(f.subs(term, 0)); cof1=simplify(f.subs(term, 1))
    return ((~term & cof0) | (term & cof1)), cof0, cof1
```
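A quick sanity check of the helper on a two-variable expression (the function is restated here so the snippet is self-contained; the symbol names are illustrative):

```python
from sympy import symbols, simplify

def shannon_exspanson(f, term):
    """Shannon-expand f about term; return (rebuilt expression, cofactor0, cofactor1)."""
    cof0 = simplify(f.subs(term, 0))
    cof1 = simplify(f.subs(term, 1))
    return ((~term & cof0) | (term & cof1)), cof0, cof1

a, b = symbols('a, b')
rebuilt, f0, f1 = shannon_exspanson(a | b, a)
assert f0 == b           # f(0, b) = b
assert bool(f1) is True  # f(1, b) = 1
```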
## 2:1 MultiPlexer
```
sel, x_1in, x_2in=symbols('sel, x_1in, x_2in')
```
Let $f(m_1, m_2, m_3)$ be the total set of minterms for a 3-bit function and let $m_1$ be designated the select term; Shannon's theorem then states
$$f(m_1, m_2, m_3)=\bar{m_1} \cdot f_1'(0, m_2, m_3)+m_1 \cdot f_1(1, m_2, m_3)$$
in other words, we select the two subsets of $f$ where $m_1$ is 0 or 1 and call those two subsets $f_1'$ and $f_1$
```
x_1in, x_2in, sel=symbols('x_1in, x_2in, sel')
```
$$f(m_1, m_2, m_3)$$
```
ConversionTable=pd.DataFrame()
Terms=[bin(i, 3) for i in np.arange(0, 2**3)]
ConversionTable['sel']=[int(j[0]) for j in Terms]
ConversionTable['x_1in']=[int(j[1]) for j in Terms]
ConversionTable['x_2in']=[int(j[2]) for j in Terms]
# this is Shannon's theorem
ConversionTable['f']=list(ConversionTable.loc[ConversionTable['sel'] == 0]['x_1in'])+list(ConversionTable.loc[ConversionTable['sel'] == 1]['x_2in'])
ConversionTable.index.name='MinMaxTerm'
ConversionTable
POS=list(ConversionTable.loc[ConversionTable['f'] == 0].index)
SOP=list(ConversionTable.loc[ConversionTable['f'] == 1].index)
f"POS: {POS}, SOP:{SOP}"
f, _=POS_SOPformCalcater([sel, x_1in, x_2in], SOP, POS)
f
a, b, c=shannon_exspanson(f, sel)
f, '= via shannon', a
```
$$\bar{m_1} \cdot f_1'(0, m_2, m_3)$$
```
m1bar_f0=~sel&x_1in; m1bar_f0
f0Table=ConversionTable.loc[ConversionTable['sel'] == 0].copy()
f0Table['f0']=[m1bar_f0.subs({sel:i, x_1in:j}) for i, j in zip(f0Table['sel'], f0Table['x_1in'])]
f0Table
```
$$m_1 \cdot f_1(1, m_2, m_3)$$
```
m1_f1=sel&x_2in; m1_f1
f1Table=ConversionTable.loc[ConversionTable['sel'] == 1].copy()
f1Table['f1']=[m1_f1.subs({sel:i, x_2in:j}) for i, j in zip(f1Table['sel'], f1Table['x_2in'])]
f1Table
```
Since this is the lowest-order mux, this use of Shannon's theorem is somewhat trivial.
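Before writing any HDL, the 2:1 MUX equation can be verified exhaustively in plain Python (an illustrative check, not myHDL code):

```python
from itertools import product

def mux21(sel, x1, x2):
    # behavioral model: sel == 0 routes x1, sel == 1 routes x2
    return x2 if sel else x1

# exhaustive check against the gate-level equation (sel & x2) | (~sel & x1)
for sel, x1, x2 in product([0, 1], repeat=3):
    gate = (sel and x2) or (x1 and not sel)
    assert bool(gate) == bool(mux21(sel, x1, x2))
```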
### myHDL 2:1 MUX Gate Level and Testing
```
def mux21_gates(sel, x_1in, x_2in, f_out):
    @always_comb
    def logic():
        f_out.next=(sel and x_2in) or (x_1in and not sel)
    return logic
Peeker.clear()
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(sel, 'sel'); Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in')
Peeker(f_out, 'f_out')
DUT=mux21_gates(sel, x_1in, x_2in, f_out)
inputs=[sel, x_1in, x_2in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 2:1 gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
### myHDL 2:1 MUX Gate Level HDL Synthesis
```
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(mux21_gates, sel, x_1in, x_2in, f_out)
#toVHDL(mux21_gates sel, x_1in, x_2in, f_out)
_=VerilogTextReader('mux21_gates')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL 2:1 MUX Gate level verilog code
<img style="float: center;" src="MUX21GateRTLSch.PNG">
However, as will be shown, gate-level implementation of MUXs is not sustainable in HDL code, and thus we will have to use behavioral syntax as follows; the caveat is that this only works for standard MUXs.
## 2:1 Multiplexer Behavioral
### myHDL 2:1 MUX Behavioral Level and Testing
```
def mux21_behavioral(sel, x_1in, x_2in, f_out):
    @always_comb
    def logic():
        # matches the gate-level version and the truth table: sel==1 selects x_2in
        if sel:
            f_out.next=x_2in
        else:
            f_out.next=x_1in
    return logic
Peeker.clear()
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(sel, 'sel'); Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in')
Peeker(f_out, 'f_out')
DUT=mux21_behavioral(sel, x_1in, x_2in, f_out)
inputs=[sel, x_1in, x_2in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 2:1 behavioral type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
### myHDL 2:1 MUX Behavioral Level HDL Synthesis
```
sel, x_1in, x_2in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(mux21_behavioral, sel, x_1in, x_2in, f_out)
#toVHDL(mux21_behavioral sel, x_1in, x_2in, f_out)
_=VerilogTextReader('mux21_behavioral')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL behavioral level 2:1 MUX's verilog code
<img style="float: center;" src="MUX21BehavioralRTLSch.PNG">
## 4:1 MUX
If you try to repeat the above for a 4:1 MUX, which has four input lines and needs two select lines, you can quickly become overwhelmed. Instead, it is easier to use the following diagram and then synthesize the gate-level architecture
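The 4:1 gate-level sum-of-products can likewise be checked exhaustively against a simple index-select model (plain Python, names illustrative):

```python
from itertools import product

def mux41(s1, s2, x):
    """Index-select model: (s1, s2) form a 2-bit select into the 4-tuple x."""
    return x[(s1 << 1) | s2]

# exhaustive check against the gate-level sum-of-products
for s1, s2 in product([0, 1], repeat=2):
    for x in product([0, 1], repeat=4):
        gate = (((not s1) and (not s2) and x[0]) or
                ((not s1) and s2 and x[1]) or
                (s1 and (not s2) and x[2]) or
                (s1 and s2 and x[3]))
        assert bool(gate) == bool(mux41(s1, s2, x))
```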
### !? Insert Digram below
### myHDL 4:1 MUX Gate Level and Testing
```
def MUX41_gates(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out):
    @always_comb
    def logic():
        f_out.next=(((not sel_1) and (not sel_2) and x_1in) or
                    ((not sel_1) and ( sel_2) and x_2in) or
                    (( sel_1) and (not sel_2) and x_3in) or
                    (( sel_1) and ( sel_2) and x_4in))
    return logic
Peeker.clear()
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
Peeker(sel_1, 'sel_1'); Peeker(sel_2, 'sel_2');
Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in'); Peeker(x_3in, 'x_3in'); Peeker(x_4in, 'x_4in')
Peeker(f_out, 'f_out')
DUT=MUX41_gates(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
inputs=[sel_1, sel_2, x_1in, x_2in, x_3in, x_4in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 4:1 gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
### myHDL 4:1 MUX Gate Level HDL Synthesis
```
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
toVerilog(MUX41_gates, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
#toVHDL(MUX41_gates, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
_=VerilogTextReader('MUX41_gates')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL 4:1 MUX Gate level verilog code
<img style="float: center;" src="MUX41GateRTLSch.PNG">
## 4:1 Multiplexer Behavioral
As one can clearly see, this is not sustainable, and thus 'if' statements need to be used via behavioral logic modeling
### myHDL 4:1 MUX Behavioral Level and Testing
```
def MUX41_behavioral(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out):
    @always_comb
    def logic():
        if (not sel_1) and (not sel_2):
            f_out.next=x_1in
        elif (not sel_1) and sel_2:
            f_out.next=x_2in
        elif sel_1 and (not sel_2):
            f_out.next=x_3in
        else:
            f_out.next=x_4in
    return logic
Peeker.clear()
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
Peeker(sel_1, 'sel_1'); Peeker(sel_2, 'sel_2');
Peeker(x_1in, 'x_1in'); Peeker(x_2in, 'x_2in'); Peeker(x_3in, 'x_3in'); Peeker(x_4in, 'x_4in')
Peeker(f_out, 'f_out')
DUT=MUX41_behavioral(sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
inputs=[sel_1, sel_2, x_1in, x_2in, x_3in, x_4in]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='MUX 4:1 behavioral type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
### myHDL 4:1 MUX Behavioral Level HDL Synthesis
```
sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out=[Signal(bool(0)) for _ in range(7)]
toVerilog(MUX41_behavioral, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
#toVHDL(MUX41_behavioral, sel_1, sel_2, x_1in, x_2in, x_3in, x_4in, f_out)
_=VerilogTextReader('MUX41_behavioral')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL behavioral level 4:1 MUX's verilog code
<img style="float: center;" src="MUX41BehaviroalRTLSch.PNG">
## 4:1 Multiplexer Behavioral with bitvectors
Taking this a step further, using bit vectors we can implement the behavioral model with vector inputs instead of single-bit inputs, as follows
### How bit vectors work in myHDL and in Verilog/VHDL
### Understanding BitVector bit selection in myHDL
```
sel=intbv(1)[2:]; x_in=intbv(7)[4:]; f_out=bool(0)
for i in x_in:
    print(i)
for i in range(4):
    print(x_in[i])
```
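myHDL `intbv` indexing is LSB-first with half-open slices. The same semantics can be reproduced with plain integer arithmetic (an illustrative sketch; the helper names are not myHDL APIs):

```python
def bit(value, i):
    """Bit i of value, LSB-first, like myHDL's x_in[i]."""
    return (value >> i) & 1

def bits(value, hi, lo):
    """Half-open slice value[hi:lo], like myHDL's intbv slicing."""
    return (value >> lo) & ((1 << (hi - lo)) - 1)

x = 0b0111  # same value as intbv(7)[4:]
assert [bit(x, i) for i in range(4)] == [1, 1, 1, 0]
assert bits(x, 4, 2) == 0b01
```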
### myHDL 4:1 MUX Behavioral with BitVecters and Testing
#### !? This needs to be checked
```
def MUX41_behavioralVec(sel, x_in, f_out):
    @always_comb
    def logic():
        f_out.next=x_in[sel]
    return logic
Peeker.clear()
sel=Signal(intbv(0)[2:]); Peeker(sel, 'sel')
x_in=Signal(intbv(0)[4:]); Peeker(x_in, 'x_in')
f_out=Signal(bool(0)); Peeker(f_out, 'f_out')
DUT=MUX41_behavioralVec(sel, x_in, f_out)
def MUX41_behavioralVec_TB(sel, x_in):
    selLen=len(sel); x_inLen=len(x_in)
    for i in range(2**x_inLen):
        x_in.next=i
        for j in range(2**selLen):  # sweep all 2**selLen select values, not just selLen
            sel.next=j
            yield delay(1)
            now()
sim=Simulation(DUT, MUX41_behavioralVec_TB(sel, x_in), *Peeker.instances()).run()
Peeker.to_wavedrom(tock=True,
title='MUX 4:1 behavioral vectype simulation')
MakeDFfromPeeker(Peeker.to_wavejson())
```
### myHDL 4:1 MUX Behavioral with BitVecters HDL Synthesis
```
sel=Signal(intbv(0)[2:]); x_in=Signal(intbv(0)[4:]);
f_out=Signal(bool(0))
toVerilog(MUX41_behavioralVec,sel, x_in, f_out)
#toVHDL(MUX41_behavioralVec,sel, x_in, f_out)
_=VerilogTextReader('MUX41_behavioralVec')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL behavioral-level 4:1 MUX using bit vectors verilog code
<img style="float: center;" src="MUX41BehaviroalVecRTLSch.PNG">
## Generic Expressions via MUXs
(clean this up and find a harder example)
While Shannon's theorem did not prove very useful in designing a 4:1 MUX, its true power lies in converting boolean logic expressions from AND/OR gates to MUXs.
Using example 4.5 from Brown & Vranesic 3rd Ed:
```
w1, w2, w3=symbols('w_1, w_2, w_3')
f=(~w1&~w3)|(w1&w2)|(w1&w3)
f
s1=w1
fp, fp0, fp1=shannon_exspanson(f, s1)
fp, fp0, fp1
s2=w2
fpp0, fpp00, fpp01=shannon_exspanson(fp0, s2)
fpp1, fpp10, fpp11=shannon_exspanson(fp1, s2)
fpp0, fpp00, fpp01, fpp1, fpp10, fpp11
```
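The resulting MUX tree can be cross-checked exhaustively against the original expression (plain Python, not myHDL):

```python
from itertools import product

def f(w1, w2, w3):
    # original expression: (~w1 & ~w3) | (w1 & w2) | (w1 & w3)
    return ((not w1) and (not w3)) or (w1 and w2) or (w1 and w3)

def shannon_mux(w1, w2, w3):
    # first level selects on w1, second on w2; leaves are the w3 cofactors
    if not w1:
        return not w3           # both w2 branches reduce to ~w3
    return w3 if not w2 else 1  # w1=1: w2 selects between w3 and constant 1

for w1, w2, w3 in product([0, 1], repeat=3):
    assert bool(f(w1, w2, w3)) == bool(shannon_mux(w1, w2, w3))
```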
### myHDL Generic Expression via MUXs and Testing
```
def Shannon21MUX(s1, s2, w_3in, f_out):
    @always_comb
    def logic():
        if (not s1) and (not s2):
            f_out.next=not w_3in
        elif (not s1) and ( s2):
            f_out.next=not w_3in
        elif ( s1) and (not s2):
            f_out.next= w_3in
        else:
            f_out.next=1
    return logic
Peeker.clear()
s1, s2, w_3in, f_out=[Signal(bool(0)) for _ in range(4)]
Peeker(s1, 's1'); Peeker(s2, 's2');
Peeker(w_3in, 'w_3in')
Peeker(f_out, 'f_out')
DUT=Shannon21MUX(s1, s2, w_3in, f_out)
inputs=[s1, s2, w_3in]  # f_out is the output, not a stimulus input
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='Shannon 2:1 MUX gate type simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
### myHDL Generic Expression via MUXs HDL Synthesis
```
s1, s2, w_3in, f_out=[Signal(bool(0)) for _ in range(4)]
toVerilog(Shannon21MUX,s1, s2, w_3in, f_out)
#toVHDL(Shannon21MUX, s1, s2, w_3in, f_out)
_=VerilogTextReader('Shannon21MUX')
```
The following shows the **Xilinx**'s _Vivado 2016.1_ RTL generated schematic of our myHDL 2:1 MUX expansion of $f$ using Shannon's Expansion Theorem
<img style="float: center;" src="Shannon21MUXRTLSch.PNG">
# Demultiplexers
# Encoders
# Decoders
# GABLS stable ABL case
## Nalu-Wind 3.125m resolution, 1-eqn ksgs
Comparison with GABLS data
**Note**: To convert this notebook to PDF, use the command
```bash
$ jupyter nbconvert --TagRemovePreprocessor.remove_input_tags='{"hide_input"}' --to pdf postpro_gabls.ipynb
```
```
%%capture
# Important header information
naluhelperdir = '../../utilities/'
# Import libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
sys.path.insert(1, naluhelperdir)
import plotABLstats
import gabls
import yaml as yaml
from IPython.display import Image
from matplotlib.lines import Line2D
import matplotlib.image as mpimg
%matplotlib inline
def loadNaluWind(rundir, vtxtfile, ttxtfile, ncfile='', avgtimes=[], usencfile=True, savefile=True):
    """
    Function to automatically load the velocity and temperature profiles.
    rundir     string with location where run files are located
    vtxtfile   text file with previously saved velocity profiles
    ttxtfile   text file with previously saved temperature profiles
    ncfile     netcdf file with ABL statistics (usually 'abl_statistics.nc')
    avgtimes   list with times [t1, t2] to average over (applicable when using netcdf file)
    usencfile  boolean: if True, read the netcdf file; if False, use the previously saved text files
    savefile   boolean: if True, save the data from the netcdf file to text files
    """
    if usencfile:
        # Load the data from the netcdf file
        data = plotABLstats.ABLStatsFileClass(stats_file=rundir+'/'+ncfile)
        Vprof, vheader = plotABLstats.plotvelocityprofile(data, None, tlims=avgtimes, exportdata=True)
        Tprof, theader = plotABLstats.plottemperatureprofile(data, None, tlims=avgtimes, exportdata=True)
        if savefile:
            # Export the Nalu-Wind data for other people to compare
            np.savetxt(vtxtfile, Vprof, header=vheader)
            np.savetxt(ttxtfile, Tprof, header=theader)
    else:
        # Load the data from pre-computed text files
        Vprof = np.loadtxt(rundir+'/'+vtxtfile)  # Velocity profile
        Tprof = np.loadtxt(rundir+'/'+ttxtfile)  # Temperature profile
    return Vprof, Tprof
# Nalu-wind parameters
rundir = '/ascldap/users/lcheung/GPFS1/2020/amrcodes/testruns/gabls.run03.refT'
statsfile = 'abl_statistics.nc'
avgtimes = [3600*8,3600*9]
# Load nalu-wind data
Vprof, Tprof = loadNaluWind(rundir, 'NaluWind_GABLS_velocity.dat', 'NaluWind_GABLS_temperature.dat', avgtimes=avgtimes, ncfile=statsfile, usencfile=True, savefile=True)
# Use this command to load from previous text files
#Vprof, Tprof = loadNaluWind('./', 'NaluWind_GABLS_velocity.dat', 'NaluWind_GABLS_temperature.dat', ncfile=statsfile, avgtimes=avgtimes, usencfile=False, savefile=False)
# Pedersen parameters
datadir = '../gabls_data/'#'/projects/wind_uq/lcheung/HFMQ3compare/gabls_data'
gablsfiles = [['/res_3.125m/LLNL/LLNL_A9_128.dat', 'LLNL'],
['/res_3.125m/CSU/CSU_A9_128.dat', 'CSU'],
['/res_3.125m/NCAR/NCAR_A9_128.dat', 'NCAR'],
['/res_3.125m/NERSC/NERSC_A9_128.dat','NERSC'],
]
# Plot the velocity profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
# Plot the Nalu-Wind data
plt.plot(np.sqrt(Vprof[:,1]**2 + Vprof[:,2]**2), Vprof[:,0], 'b', lw=2, label='Nalu-wind')
# Plot gabls files
for gablfile in gablsfiles:
    gabls.plotvel(datadir+'/'+gablfile[0], lw=0.5, label=gablfile[1])
# Construct a legend
plt.legend()
plt.ylim([0, 400]);
plt.xlim([0, 10])
plt.xlabel('Velocity [m/s]')
plt.ylabel('Z')
#plt.grid()
plt.title('Wind speed')
# Plot the temperature profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
# Plot the Nalu-Wind data
plt.plot(Tprof[:,1], Tprof[:,0], 'b', label='Nalu-wind')
# Plot all of the gabls files
for gablfile in gablsfiles:
    gabls.plotcols(datadir+'/'+gablfile[0], xcol=3, ycol=0, lw=0.5, label=gablfile[1])
# Construct a legend
plt.legend()
plt.ylim([0, 400]);
#plt.xlim([0, 12])
plt.xlabel('Temperature [K]')
plt.ylabel('Z [m]')
#plt.grid()
plt.title('Temperature')
# Extract Utau
#utau, utheader = plotABLstats.plotutauhistory(data, None, tlims=avgtimes, exportdata=True)
#print('Avg Utau = %f'%np.mean(utau[:,1]))
# Plot the veer profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
import math
# Plot the Nalu-Wind data
veer=270-np.arctan2(Vprof[:,2], Vprof[:,1])*180.0/math.pi
plt.plot(veer, Vprof[:,0], 'b', lw=2, label='Nalu-wind')
# Plot gabls files
for gablfile in gablsfiles:
    dat = gabls.readdata(datadir+'/'+gablfile[0])
    veer = 270-np.arctan2(dat[:,2], dat[:,1])*180.0/math.pi
    plt.plot(veer, dat[:,0], '-', lw=0.5, label=gablfile[1])
# Construct a legend
plt.legend()
plt.ylim([0, 400]);
plt.xlabel('Wind direction [deg]')
plt.ylabel('Z')
plt.title('Veer')
# Extract TKE and Reynolds stresses
data = plotABLstats.ABLStatsFileClass(stats_file=rundir+'/'+statsfile);
REstresses, REheader = plotABLstats.plottkeprofile(data, None, tlims=avgtimes, exportdata=True)
# Extract Utau
avgutau = plotABLstats.avgutau(data, None, tlims=avgtimes)
print('Avg Utau = %f'%avgutau)
# Export the Nalu-Wind data for other people to compare
#np.savetxt('NaluWind_GABLS_velocity.dat', Vprof, header=vheader)
#np.savetxt('NaluWind_GABLS_temperature.dat', Tprof, header=theader)
np.savetxt('NaluWind_GABLS_reynoldsstresses.dat', REstresses, header=REheader)
```
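The wind-direction (veer) conversion above, 270° minus the mathematical angle of (u, v), can be sanity-checked on a few cardinal directions (a standalone sketch; the helper name is illustrative, and a wrap into [0, 360) is added here that the plotting code omits):

```python
import math

def wind_dir(u, v):
    """Meteorological direction the wind comes FROM, in degrees, wrapped to [0, 360)."""
    return (270.0 - math.degrees(math.atan2(v, u))) % 360.0

assert wind_dir(1.0, 0.0) == 270.0   # flow toward +x (east) comes from the west
assert wind_dir(0.0, 1.0) == 180.0   # flow toward +y (north) comes from the south
assert wind_dir(-1.0, 0.0) == 90.0   # easterly
```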
# Evaluation
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import warnings
warnings.filterwarnings('ignore')
plt.rcParams['figure.figsize'] = [10, 5]
```
# Continual Learning Metrics
```
# Because of a mistake in my implementation,
# ["no_of_test"] cannot be used, but it can be recovered as ["no_of_correct_prediction"]/["accuracy"]
# (except when ["accuracy"] == 0):
# ((raw["no_of_correct_prediction"]/raw["accuracy"]).apply(np.ceil))
# The mistake has been fixed now, but the data has not been regenerated.
def calculateContinualMetircs(raw):
    task_order = raw["task_order"].unique()
    method = raw["method"].unique()
    print(task_order, method)
    all_MBase = {k:[] for k in method}
    all_Mnew = {k:[] for k in method}
    all_Mnow = {k:[] for k in method}
    for t in task_order:
        rows = raw[raw["task_order"]==t]
        offline = rows[rows["method"]=="offline"]
        for m in method:
            if m=="offline":
                continue
            target = rows[rows["method"]==m]
            # calculate m_base
            _ideal = offline[offline["task_index"]==1]["accuracy"]
            _m = target[target["task_index"]==1][["accuracy", "no_of_test", "no_of_correct_prediction"]]
            _N = len(_m)
            _m = (_m["accuracy"]/float(_ideal)).sum()
            Mbase = float(_m/_N)
            all_MBase[m].append(Mbase)
            _sum = 0.0
            train_session = target["train_session"].unique()
            for s in train_session:
                s = int(s)
                _ideal = offline[offline["task_index"]==s]["accuracy"]
                _m = target[target["train_session"]==str(s)]
                _m = _m[_m["task_index"]==s]["accuracy"]
                assert len(_m)==1
                _sum += float(_m)/float(_ideal)
            if len(train_session)==0:
                all_Mnew[m].append(np.nan)
            else:
                Mnew = _sum/len(train_session)
                all_Mnew[m].append(Mnew)
            _sum = 0.0
            task_index = target["task_index"].unique()
            _m = target[target["train_session"]==str(len(task_index))]
            for t in task_index:
                t = int(t)
                _ideal = offline[offline["task_index"]==t]["accuracy"]
                _m1 = _m[_m["task_index"]==t]["accuracy"]
                assert len(_m1)==1
                _sum += float(_m1)/float(_ideal)
            if len(train_session)==0:
                all_Mnow[m].append(np.nan)
            else:
                Mnow = _sum/len(train_session)
                all_Mnow[m].append(Mnow)
    return all_MBase, all_Mnew, all_Mnow
from scipy import stats
def printCLMetrics(all_MBase, all_Mnew, all_Mnow):
def p(metric, name):
print("Metric: ", name)
for m in metric:
avg = np.mean(metric[m])
err = stats.sem(metric[m])
print("{0} {1:.3f} {2:.3f}".format(m, avg, err))
print("=====================")
print("")
p(all_MBase, "M base")
p(all_Mnew, "M new")
p(all_Mnow, "M now")
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

def plot(values, label, width=0.85, offset_ratio=0.375, xticks=[], models=None, rotation=0, show_legend=True):
    plt.rcParams['figure.figsize'] = [10, 5]
    plt.rcParams.update({'font.size': 20})
    m = []
    merr = []
    if models is None:
        models = ["mp-gan", "mp-wgan", "sg-cgan", "sg-cwgan"]
    for model in models:
        tmp = []
        tmperr = []
        for i, v in enumerate(values):
            avg = np.mean(v[model])
            err = stats.sem(v[model])
            tmp.append(avg)
            tmperr.append(err)
        m.append(tmp)
        merr.append(tmperr)
    ind = np.arange(len(m[0]))  # the x locations for the groups
    fig, ax = plt.subplots()
    patterns = ["/", "\\", "x", "-", "+", "|", "o", "O", ".", "*"]
    for i, model in enumerate(models):
        offset = (float(i)/len(models))*width
        offset -= offset_ratio*width
        ax.bar(ind + offset, m[i], width*(1.0/len(models)), yerr=merr[i], label=model, hatch=patterns[i])
    ax.set_title(label)
    ax.set_xticks(ind)
    ax.set_xticklabels(xticks, rotation=rotation)
    if show_legend:
        ax.legend(prop={'size': 20}, bbox_to_anchor=(1.05, 1), loc=0, borderaxespad=0.)
    fig.tight_layout()
    plt.show()
```
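Several cells in this notebook call a `plotline` helper that is not defined in this excerpt. A minimal sketch consistent with its call sites (`plotline(values, label, x=..., models=...)`) might look like the following; the exact styling of the original is unknown, so this is only an assumed reconstruction:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def plotline(values, label, x=None, models=None):
    """Line-plot counterpart of `plot`: mean of each metric per model, with SEM error bars."""
    if models is None:
        models = ["mp-gan", "mp-wgan", "sg-cgan", "sg-cwgan"]
    if x is None:
        x = np.arange(len(values))
    fig, ax = plt.subplots()
    for model in models:
        means = [np.mean(v[model]) for v in values]
        errs = [stats.sem(v[model]) for v in values]
        ax.errorbar(x, means, yerr=errs, marker='o', label=model)
    ax.set_title(label)
    ax.legend()
    fig.tight_layout()
    plt.show()
```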
# Output function
```
# # This result is not complete
# CSMbase = []
# CSMnew = []
# CSMnow = []
# folder = "../Results/results_output_unit/"
# raw = pd.read_csv(folder+"results.txt")
# raw.columns = [c.strip() for c in raw.columns]
# raw.head()
# cmd = raw["cmd"].unique()
# for c in cmd:
# target = raw[raw["cmd"]==c]
# b, n, nw = calculateContinualMetircs(target)
# CSMbase.append(b)
# CSMnew.append(n)
# CSMnow.append(nw)
# xticks = ["none", "leaky_relu", "sigmoid"]
# models = None
# def fixbugs(data):
# return data
# plot(fixbugs(CSMbase), "Stability of the model in different output unit", xticks=xticks, models=models)
# plot(fixbugs(CSMnew), "Plasticity of the model in different output unit", xticks=xticks, models=models)
# plot(fixbugs(CSMnow),"Overall performance of the model in different output unit", xticks=xticks, models=models)
```
# Number of hidden units per layer 👌
```
CSMbase = []
CSMnew = []
CSMnow = []
folder = "../Results/PAMAP/exp_no_of_hidden/"
raw = pd.read_csv(folder+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
raw.head()
cmd = raw["cmd"].unique()
for c in cmd:
    target = raw[raw["cmd"]==c]
    b, n, nw = calculateContinualMetircs(target)
    CSMbase.append(b)
    CSMnew.append(n)
    CSMnow.append(nw)
xticks = [100, 200, 500, 1000]
models = None
def fixbugs(data):
    return data
models = ["mp-gan", "mp-wgan", "sg-cgan", "sg-cwgan"]
print(CSMbase)
plotline(fixbugs(CSMbase), "Stability of the model", x=xticks, models=models)
plotline(fixbugs(CSMnew), "Plasticity of the model", x=xticks, models=models)
plotline(fixbugs(CSMnow),"Overall performance of the model", x=xticks, models=models)
```
# Generator Training Iterators 👌
```
CSMbase = []
CSMnew = []
CSMnow = []
folder = "../Results/run_house_iter/"
raw = pd.read_csv(folder+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
raw.head()
cmd = raw["cmd"].unique()
for c in cmd:
    target = raw[raw["cmd"]==c]
    b, n, nw = calculateContinualMetircs(target)
    CSMbase.append(b)
    CSMnew.append(n)
    CSMnow.append(nw)
xticks = (1000, 2000, 3000, 4000)
plotline(CSMbase, "Stability of the model", x=xticks)
plotline(CSMnew, "Plasticity of the model", x=xticks)
plotline(CSMnow, "Overall performance of the model", x=xticks)
```
# Sample important 👌
```
CSMbase = []
CSMnew = []
CSMnow = []
folder = "../Results/results_sample_important/"
raw = pd.read_csv(folder+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
raw.head()
cmd = raw["cmd"].unique()
cmd = [1, 3, 4]
for c in cmd:
    target = raw[raw["cmd"]==c]
    b, n, nw = calculateContinualMetircs(target)
    CSMbase.append(b)
    CSMnew.append(n)
    CSMnow.append(nw)
raw = pd.read_csv("../Results/result_iter5000-1000_h500-100_all/"+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
b, n, nw = calculateContinualMetircs(raw)
CSMbase.append(b)
CSMnew.append(n)
CSMnow.append(nw)
folder = "../Results/results_sample_important.v2/"
raw = pd.read_csv(folder+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
raw.head()
cmd = raw["cmd"].unique()
for c in cmd:
    target = raw[raw["cmd"]==c]
    b, n, nw = calculateContinualMetircs(target)
    CSMbase.append(b)
    CSMnew.append(n)
    CSMnow.append(nw)
xticks = ["0.5x", "1x", "2x", "500/class", "1000/class", "2000/class"]
models = ["mp-gan", "sg-cgan"]
plot((CSMbase), "Stability of the model", xticks=xticks, models=models, rotation=-45)
plot((CSMnew), "Plasticity of the model", xticks=xticks, models=models, rotation=-45)
plot((CSMnow),"Overall performance of the model", xticks=xticks, models=models, rotation=-45)
```
# Component sensitivity
```
CSMbase = []
CSMnew = []
CSMnow = []
# folder = "../Results/result_component_sensitivity.v1/results/"
folder = "../Results/result_comp_sense/"
# folder = "./newsrc/result_comp_sense-draft/"
raw = pd.read_csv(folder+"results.txt")
raw.columns = [c.strip() for c in raw.columns]
raw.head()
cmd = raw["cmd"].unique()
cmd = [0, 1, 2, 3, 4, 5, 11]
for c in cmd:
    target = raw[raw["cmd"]==c]
    b, n, nw = calculateContinualMetircs(target)
    CSMbase.append(b)
    CSMnew.append(n)
    CSMnow.append(nw)
xticks = [
"no additional\n components",
"self-verifying",
"oversampling",
"EWC\n in solver",
"knowledge distill \n in solver",
"instance noise\n in GANs",
"final"]
# models = None
# plot((CSMbase), "Stability of the model", xticks=xticks, models=models)
# plot((CSMnew), "Plasticity of the model", xticks=xticks, models=models)
# plot((CSMnow),"Overall performance of the model", xticks=xticks, models=models)
def plot2(values, label, width=0.85, offset_ratio=0, xticks=[], models=None, rotation=0):
    plt.rcParams['figure.figsize'] = [10, 8]
    m = []
    merr = []
    if models is None:
        models = ["mp-gan", "mp-wgan", "sg-cgan", "sg-cwgan"]
    for model in models:
        tmp = []
        tmperr = []
        for i, v in enumerate(values):
            avg = np.mean(v[model])
            err = stats.sem(v[model])
            tmp.append(avg)
            tmperr.append(err)
        m.append(tmp)
        merr.append(tmperr)
    ind = np.arange(len(m[0]))  # the x locations for the groups
    fig, ax = plt.subplots()
    patterns = ["/", "\\", "x", "-", "+", "|", "o", "O", ".", "*"]
    for i, model in enumerate(models):
        offset = (float(i)/len(models))*width
        offset -= (offset_ratio)*width
        ax.bar(ind + offset, m[i], width*(1.0/len(models)), yerr=merr[i], label=model, hatch=patterns[i])
    X = np.arange(-0.5, len(m[0])+0.5)
    Y = [m[i][0] for _ in range(len(X))]
    ax.plot(X, Y, linestyle=':')
    print(m)
    ax.set_title(label)
    ax.set_xticks(ind)
    ax.set_xticklabels(xticks, rotation=rotation, rotation_mode="default")
    # ax.legend()
    fig.tight_layout()
    plt.show()
models = ["mp-gan"]
plot2((CSMbase), "Stability of the model", xticks=xticks, models=models, rotation=-45)
plot2((CSMnew), "Plasticity of the model", xticks=xticks, models=models, rotation=-45)
plot2((CSMnow),"Overall performance of the model", xticks=xticks, models=models, rotation=-45)
```
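The aggregation step inside `plot2` (one mean and one standard error per model, per x position) can be sketched standalone. This is a minimal sketch with stdlib Python only; the model names and scores are made-up stand-ins for the `CSM*` dictionaries used above.

```python
import math

def mean_and_sem(scores):
    """Return (mean, standard error of the mean) for a list of scores."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance (ddof=1), matching scipy.stats.sem's default.
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical per-model accuracies across repeated runs.
runs = {"mp-gan": [0.82, 0.80, 0.84], "sg-cgan": [0.90, 0.88, 0.92]}
bars = {model: mean_and_sem(v) for model, v in runs.items()}
print(bars["mp-gan"])  # mean 0.82, SEM = 0.02 / sqrt(3)
```

Each `(mean, sem)` pair is what `ax.bar(..., yerr=...)` consumes as bar height and error-bar size.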
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
```
### Reading a file
- Each file here is a shard of hydi_track_10_58_0.trk (split into 10 shards). 72.7-165.3MB in size.
- Block size is 32MB
- 5 repetitions
#### 1 File sequential
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_1f_5r_33554432b.out")
df_read = df[df["action"].str.contains("read")]
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
#### 1 file 16 joblib threads
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_1f_5r_33554432b_16j_seq.out")
df_read = df[df["action"].str.contains("read")]
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
#### 9 Files sequential
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_9f_5r_33554432b.out")
df_read = df[df["action"].str.contains("read")]
df_read["rep"] = df_read.index.values
df_read["rep"] = df_read["rep"].apply( lambda v:
0 if 0 <= v <= 10
else 1 if 12 <= v <= 22
else 2 if 24 <= v <= 34
else 3 if 36 <= v <= 46
else 4
)
df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]] = df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]].groupby("rep").sum()
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
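The hand-written index ranges above label each read row with its repetition number. Since the blocks are evenly strided (12 rows per repetition in this layout, inferred from the ranges above), the same labels can be derived arithmetically. A sketch with a toy DataFrame:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for df_read: 5 repetitions of 12 rows each.
df = pd.DataFrame({"runtime": np.linspace(0.5, 2.0, 60)})
df["rep"] = np.arange(len(df)) // 12  # one label per contiguous block of 12 rows
print(df["rep"].unique())  # [0 1 2 3 4]
```

This assumes the rows are contiguous; the lambda version above operates on the original (gappy) index values instead.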
#### 9 files parallel
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_9f_5r_33554432b_16j_seq.out")
df_read = df[df["action"].str.contains("read")]
df_read["rep"] = df_read.index.values
df_read["rep"] = df_read["rep"].apply( lambda v:
0 if 0 <= v <= 10
else 1 if 12 <= v <= 22
else 2 if 24 <= v <= 34
else 3 if 36 <= v <= 46
else 4
)
df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]] = df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]].groupby("rep").sum()
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
## Full segmentation benchmarks
- Each file here is a shard of hydi_track_10_58_0.trk (split into 10 shards). 72.7-165.3MB in size.
- Block size is 32MB
- 5 repetitions
#### 1 file sequential
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_1f_5r_33554432b.out")
df_seg = df[df["action"].str.contains("seg")]
ax = sns.barplot(x="action", y="runtime", data=df_seg)
```
#### 1 file 16 joblib threads
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_1f_5r_33554432b_16j_seq.out")
df_seg = df[df["action"].str.contains("seg")]
ax = sns.barplot(x="action", y="runtime", data=df_seg)
```
#### 9 files sequential
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_9f_5r_33554432b.out")
df_seg = df[df["action"].str.contains("seg")]
ax = sns.barplot(x="action", y="runtime", data=df_seg)
```
#### 9 files 16 joblib threads
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_9f_5r_33554432b_16j_seq.out")
df_seg = df[df["action"].str.contains("seg")]
df_seg.loc[df_seg["action"].str.contains("s3fs"), "action"] = "s3fs"
df_seg.loc[df_seg["action"].str.contains("prefetch"), "action"] = "prefetch"
df_orig = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_orig_waypoint_1f_5r_33554432b_16j_seq.out")
df_orig = df_orig[df_orig["action"].str.contains("seg")]
df_orig["action"] = "unsharded"
df = pd.concat([df_seg, df_orig])
ax = sns.barplot(x="action", y="runtime", data=df)
```
#### Larger shards (original tract files)
#### Read
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_orig_waypoint_10f_5r_33554432b_16j_seq.out")
df_read = df[df["action"].str.contains("read")]
df_read["rep"] = df_read.index.values
df_read["rep"] = df_read["rep"].apply( lambda v:
0 if 0 <= v <= 10
else 1 if 12 <= v <= 22
else 2 if 24 <= v <= 34
else 3 if 36 <= v <= 46
else 4
)
df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]] = df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]].groupby("rep").sum()
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
### Segmentation
```
df_c = pd.read_csv("../results/us-west-2-c5.9/real_1f_5r_67108864b.out")
df_c["instance"] = "c5.9xLarge"
df_c.loc[df_c["action"].str.contains("s3fs"), "action"] = "s3fs"
df_c.loc[df_c["action"].str.contains("prefetch"), "action"] = "RP"
speedup_c = df_c.loc[df_c["action"] == "s3fs", "runtime"].mean() / df_c.loc[df_c["action"] == "RP", "runtime"].mean()
speedup_c
df_c
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_orig_waypoint_10f_5r_33554432b_16j_seq.out")
df_seg = df[df["action"].str.contains("seg")]
df_seg["instance"] = "R5.4xLarge"
df_seg.loc[df_seg["action"].str.contains("s3fs"), "action"] = "s3fs"
df_seg.loc[df_seg["action"].str.contains("prefetch"), "action"] = "RP"
speedup_r = df_seg.loc[df_seg["action"] == "s3fs", "runtime"].mean() / df_seg.loc[df_seg["action"] == "RP", "runtime"].mean()
df_all = pd.concat([df_c, df_seg])
ax = sns.barplot(x="instance", y="runtime", hue="action", data=df_all)
speedup_r
ax = sns.barplot(x=["c5.9xlarge", "r5.4xlarge"], y=[speedup_c, speedup_r], color="seagreen", saturation=0.5)
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
ax.set_ylabel("Avg. Speedup")
ax.set_xlabel("Instance Type")
plt.savefig("speedup.pdf")
```
## 15 files
### Read
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_orig_waypoint_15f_5r_33554432b_16j_seq.out")
df_read = df[df["action"].str.contains("read")]
df_read["rep"] = 0
df_read.loc[df_read.iloc[18:34].index.values,"rep"] = 1
df_read.loc[df_read.iloc[36:52].index.values,"rep"] = 2
df_read.loc[df_read.iloc[54:70].index.values,"rep"] = 3
df_read.loc[df_read.iloc[72:88].index.values,"rep"] = 4
df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]] = df_read.loc[df["action"].str.contains("s3fs"), ["rep","runtime"]].groupby("rep").sum()
ax = sns.barplot(x="action", y="runtime", data=df_read)
```
### Segmentation
```
df_seg = df[df["action"].str.contains("seg")]
ax = sns.barplot(x="action", y="runtime", data=df_seg)
```
### Histogram
#### 15 files 20bins not lazy
```
df = pd.read_csv("../results/us-west-2-R5.4xlarge/histogram_orig_15f_5r_33554432b_20bins_seq.out")
ax = sns.barplot(x="action", y="runtime", data=df)
plt.savefig("histo_15_20bins_notlazy.pdf")
df = pd.read_csv("../results/us-west-2-R5.4xlarge/histogram_orig_45f_5r_33554432b_20bins_3dask_lazy.out")
df["rep"] = 0
df.loc[df.iloc[6:12].index.values,"rep"] = 1
df.loc[df.iloc[12:18].index.values,"rep"] = 2
df.loc[df.iloc[18:24].index.values,"rep"] = 3
df.loc[df.iloc[24:30].index.values,"rep"] = 4
df[["action", "runtime", "rep"]].groupby(by=["action", "rep"]).max()
ax = sns.barplot(x="action", y="runtime", data=df)
plt.savefig("histo_15_20bins_lazy_3workers_dask.pdf")
df_hist = pd.read_csv("../results/us-west-2-R5.4xlarge/us-west-2-R5.4xlarge/histogram_orig_1f_5r_33554432b_20bins_seq_lazy.out")
df_hist.loc[df_hist["action"].str.contains("s3fs"), "action"] = "s3fs"
df_hist.loc[df_hist["action"].str.contains("prefetch"), "action"] = "RP"
df_hist["pipeline"] = "histogram"
ax = sns.barplot(x="action", y="runtime", data=df_hist)
df_hist["runtime"]
#plt.savefig("histo_15_20bins_notlazy.pdf")
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_orig_waypoint_10f_5r_33554432b_16j_seq.out")
df_seg = df[df["action"].str.contains("seg")]
df_seg.loc[df_seg["action"].str.contains("s3fs"), "action"] = "s3fs"
df_seg.loc[df_seg["action"].str.contains("prefetch"), "action"] = "RP"
df_seg["pipeline"] = "segmentation"
df_read = df.loc[df["action"].str.contains("read")]
df_read["rep"] = 0
df_read.loc[df_read.iloc[18:34].index.values,"rep"] = 1
df_read.loc[df_read.iloc[36:52].index.values,"rep"] = 2
df_read.loc[df_read.iloc[54:70].index.values,"rep"] = 3
df_read.loc[df_read.iloc[72:88].index.values,"rep"] = 4
avg_read_pf = df_read.loc[df_read["action"].str.contains("prefetch"), "runtime"].mean()
avg_read_s3fs = df_read.loc[df_read["action"].str.contains("s3fs"), ["runtime", "rep"]].groupby(by=["rep"]).sum()
df_hist.loc[df_hist["action"].str.contains("s3fs"), "compute time (s)"] = df_hist.loc[df_hist["action"].str.contains("s3fs"), "runtime"] - avg_read_s3fs
df_hist.loc[df_hist["action"].str.contains("RP"), "compute time (s)"] = df_hist.loc[df_hist["action"].str.contains("RP"), "runtime"] - avg_read_pf
df_seg["compute time (s)"] = df_seg["runtime"] - avg_read_pf
speedup_hist = df_hist.loc[df_hist["action"] == "s3fs", "runtime"].mean() / df_hist.loc[df_hist["action"] == "RP", "runtime"].mean()
speedup_seg = df_seg.loc[df_seg["action"] == "s3fs", "runtime"].mean() / df_seg.loc[df_seg["action"] == "RP", "runtime"].mean()
df_all = pd.concat([df_hist, df_seg])
ax = sns.barplot(x=["histogram", "segmentation"], y=[speedup_hist, speedup_seg], color="seagreen", saturation=0.5)
df_hist[["runtime","compute time (s)"]]
ax.set_ylabel("Avg. Speedup")
ax.set_xlabel("Application")
plt.savefig("speedup_application.pdf")
avg_read_s3fs.mean(), df_seg.loc[df_seg["action"] == "s3fs", "runtime"].mean(), 1/7
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_1f_5r_33554432b_16j_seq.out")
df["shards"] = 1
df_seg1 = df[df["action"].str.contains("seg")]
df = pd.read_csv("../results/us-west-2-R5.4xlarge/segmentation_shards_waypoint_9f_5r_33554432b_16j_seq.out")
df["shards"] = 9
df_seg9 = df[df["action"].str.contains("seg")]
df = pd.concat([df_seg1, df_seg9])
speedup_1 = df_seg1.loc[df_seg1["action"].str.contains("s3fs"), "runtime"].mean() / df_seg1.loc[df_seg1["action"].str.contains("prefetch"), "runtime"].mean()
speedup_9 = df_seg9.loc[df_seg9["action"].str.contains("s3fs"), "runtime"].mean() / df_seg9.loc[df_seg9["action"].str.contains("prefetch"), "runtime"].mean()
ax = sns.barplot(x=[1, 9], y=[speedup_1, speedup_9], color="seagreen", saturation=0.5)
ax.set_ylabel("Avg. Speedup")
ax.set_xlabel("Number of Shards")
plt.savefig("speedup_shards.pdf")
import matplotlib.patches as mpatches
x=["Instance type", "Instance type", "Number of shards", "Number of shards", "Application", "Application"]
y = [speedup_c, speedup_r, speedup_1, speedup_9, speedup_hist, speedup_seg]
hue = [1,2,1,2,1,2]
labels = ["c5.9", "r5.4", "1", "9", "hist.", "bundle r."]
#sns.set_style("whitegrid", {"axes.facecolor": ".9"})
sns.set_theme()
#sns.set_style("whitegrid", {"axes.facecolor": "1"} )
palette = sns.color_palette("colorblind")
#fig, ax = plt.subplots(figsize=(10, 4))
ax = sns.barplot(
x=x,
y=y,
hue=hue,
palette=palette
)
handles = []
palette = [palette[2]] + [palette[5]] + [palette[6]]
for i, bar in enumerate(ax.patches):
    if i in [0, 1, 2]:
        bar.set_alpha(0.5)
        handles.append(mpatches.Patch(color=palette[(i%3)], alpha=0.5))
    bar.set_facecolor(palette[(i%3)])
    handles.append(mpatches.Patch(color=palette[(i%3)]))
ax.set_ylabel("Avg. Speedup")
ax.set_xticks([-0.2, 0.2, 0.8, 1.2, 1.8, 2.2])
ax.set_xticklabels(labels)
#plt.xticks(rotation=45)
ax.get_legend().remove()
ax.set_xlabel("Instance type Num of files Application ")
#tmp =plt.legend( handles=handles, labels=labels, loc="lower left", bbox_to_anchor=(0, -0.2), borderaxespad=0, ncol=len(labels), )
plt.savefig("speedup.pdf", bbox_inches='tight')
#ax.legend()
y
```
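Every speedup number plotted above follows the same pattern: mean baseline (`s3fs`) runtime divided by mean prefetch runtime. A standalone sketch with made-up runtimes; the column and action names mirror the ones used above:

```python
import pandas as pd

def mean_speedup(df, baseline="s3fs", treatment="RP"):
    """Ratio of mean baseline runtime to mean treatment runtime (>1 means faster)."""
    base = df.loc[df["action"] == baseline, "runtime"].mean()
    treat = df.loc[df["action"] == treatment, "runtime"].mean()
    return base / treat

# Hypothetical runtimes (seconds) over 3 repetitions of each strategy.
df = pd.DataFrame({
    "action": ["s3fs"] * 3 + ["RP"] * 3,
    "runtime": [12.0, 11.0, 13.0, 6.0, 5.5, 6.5],
})
print(mean_speedup(df))  # 12.0 / 6.0 = 2.0
```

Parameterizing the action names keeps the same helper usable for the `prefetch`-labeled shards data as well.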
[spaCy](https://spacy.io) is an [open-source](https://github.com/explosion/spaCy) Python library for Natural
Language Processing, actively updated and maintained. The library can
process several languages, including Brazilian Portuguese.
### Installation
On Linux, the spaCy library can be installed with the usual Python package-management commands, by typing the following in a terminal:
```
# $ pip3 install spacy
# or
# $ pip install spacy
```
__Note__: omit the ‘#’ and ‘\\$’. The ‘#’ symbol was inserted to turn these lines into comments in a cell that runs Python, and the ‘\\$’ symbol indicates a command to be typed in the terminal.
Next, we must download the language-specific tools for Portuguese and English with the following commands:
```
# $ python3 -m spacy download en
# $ python3 -m spacy download pt
!python -m spacy download pt
```
Once the package is installed and the Portuguese and English models are downloaded, we can start using spaCy by importing the package and loading the Portuguese model.
```
import spacy
spacyPT = spacy.load('pt')
```
It is important to note that spaCy assumes the characters are encoded in UTF-8. The first step, therefore, is to produce an input in that format and feed it to the loaded model.
```
entrada = spacyPT(u"Mais vale um asno que me carregue que um cavalo que me derrube.")
entrada
```
### Tokenization
The input we just created is an iterable sequence of tokens (word
instances). To check the text contained in this iterable sequence,
we use:
```
entrada.text
```
To split the input into tokens, we can use the __split__ method:
```
entrada.text.split()
```
Note that the final period was absorbed into the preceding word; the same would
happen to other punctuation marks when using the __split__ method. To separate
punctuation from words, we use the implicit iteration performed by the __in__ statement:
```
[token for token in entrada]
```
Note that the items are not wrapped in quotes, because this list actually contains a sequence of objects of the __Token__ class.
If the goal is to obtain a list of strings, we can proceed as follows.
```
[token.text for token in entrada]
```
And to remove punctuation from the list entirely, just restrict its creation using __is_punct__.
```
[token.text for token in entrada if not token.is_punct]
```
spaCy comes pre-trained for part-of-speech (PoS) tagging, which can be demonstrated as follows.
```
[(token.text, token.pos_) for token in entrada]
```
Note that it was able to identify which occurrences of the word _que_ are relative pronouns and which occurrence is a complementizer conjunction. Unfortunately, it was not able to identify that the word _asno_ is a noun, possibly because this word is not in its internal dictionary; unfortunately, we also have no way here to retrain spaCy's PoS tagger in search of better accuracy. In such cases, our only option is to implement another tagger.
Even so, the tagger's assistance lets us run fairly sophisticated searches. For example, we can look up the lemmas of all the verbs found in the sentence.
```
[token.lemma_ for token in entrada if token.pos_ == 'VERB']
```
The lemmas of conjugated verbs give us their infinitive forms.
### Named entity recognition
The library also ships with a pre-trained mechanism for recognizing
named entities.
```
texto2 = spacyPT(u"O presidente Bolsonaro deu uma ordem ao Ministério do Meio Ambiente, que gerou calafrios no Congresso.")
print(texto2.ents)
[(entidade,entidade.label_) for entidade in texto2.ents]
```
Note that it correctly recognized the three named entities, but misclassified
all three: the first entity is a person and the last two are
organizations. There is plenty of room to improve accuracy with more robust
algorithms trained on much more data.
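As a sketch of the "implement another tagger" fallback mentioned earlier, here is a tiny dictionary-based corrector (the lexicon below is hypothetical and minimal) that overrides the tagger only for words we know it gets wrong, and keeps the original tag otherwise:

```python
# Hypothetical override lexicon: word -> corrected PoS tag.
LEXICON = {"asno": "NOUN"}

def retag(tagged_tokens, lexicon=LEXICON):
    """tagged_tokens: list of (text, pos) pairs, e.g. built from spaCy output.
    Returns the same pairs, with lexicon entries overriding the tagger."""
    return [(text, lexicon.get(text.lower(), pos)) for text, pos in tagged_tokens]

tagged = [("um", "DET"), ("asno", "ADV"), ("que", "PRON")]
print(retag(tagged))  # [('um', 'DET'), ('asno', 'NOUN'), ('que', 'PRON')]
```

In practice one would feed it `[(token.text, token.pos_) for token in entrada]`; this only patches known errors rather than replacing the statistical tagger.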
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Writing layers and models with TensorFlow Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/guide/keras/custom_layers_and_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
### Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals

try:
  %tensorflow_version 2.x  # Colab only.
except Exception:
  pass

import tensorflow as tf

tf.keras.backend.clear_session()  # For easy reset of notebook state.
```
## The Layer class
### Layers encapsulate a state (weights) and some computation
The main data structure you'll work with is the `Layer`.
A layer encapsulates both a state (the layer's "weights")
and a transformation from inputs to outputs (a "call", the layer's
forward pass).
Here's a densely-connected layer. It has a state: the variables `w` and `b`.
```
from tensorflow.keras import layers
class Linear(layers.Layer):

  def __init__(self, units=32, input_dim=32):
    super(Linear, self).__init__()
    w_init = tf.random_normal_initializer()
    self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units),
                                              dtype='float32'),
                         trainable=True)
    b_init = tf.zeros_initializer()
    self.b = tf.Variable(initial_value=b_init(shape=(units,),
                                              dtype='float32'),
                         trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
```
Note that the weights `w` and `b` are automatically tracked by the layer upon
being set as layer attributes:
```
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
```
Note you also have access to a quicker shortcut for adding weight to a layer: the `add_weight` method:
```
class Linear(layers.Layer):

  def __init__(self, units=32, input_dim=32):
    super(Linear, self).__init__()
    self.w = self.add_weight(shape=(input_dim, units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(units,),
                             initializer='zeros',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
```
#### Layers can have non-trainable weights
Besides trainable weights, you can add non-trainable weights to a layer as well.
Such weights are meant not to be taken into account during backpropagation,
when you are training the layer.
Here's how to add and use a non-trainable weight:
```
class ComputeSum(layers.Layer):

  def __init__(self, input_dim):
    super(ComputeSum, self).__init__()
    self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                             trainable=False)

  def call(self, inputs):
    self.total.assign_add(tf.reduce_sum(inputs, axis=0))
    return self.total

x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
```
It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
```
print('weights:', len(my_sum.weights))
print('non-trainable weights:', len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print('trainable_weights:', my_sum.trainable_weights)
```
### Best practice: deferring weight creation until the shape of the inputs is known
In the logistic regression example above, our `Linear` layer took an `input_dim` argument
that was used to compute the shape of the weights `w` and `b` in `__init__`:
```
class Linear(layers.Layer):

  def __init__(self, units=32, input_dim=32):
    super(Linear, self).__init__()
    self.w = self.add_weight(shape=(input_dim, units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(units,),
                             initializer='zeros',
                             trainable=True)
```
In many cases, you may not know in advance the size of your inputs, and you would
like to lazily create weights when that value becomes known,
some time after instantiating the layer.
In the Keras API, we recommend creating layer weights in the `build(inputs_shape)` method of your layer.
Like this:
```
class Linear(layers.Layer):

  def __init__(self, units=32):
    super(Linear, self).__init__()
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b
```
The `__call__` method of your layer will automatically run `build` the first time it is called.
You now have a layer that's lazy and easy to use:
```
linear_layer = Linear(32) # At instantiation, we don't know on what inputs this is going to get called
y = linear_layer(x) # The layer's weights are created dynamically the first time the layer is called
```
### Layers are recursively composable
If you assign a Layer instance as attribute of another Layer,
the outer layer will start tracking the weights of the inner layer.
We recommend creating such sublayers in the `__init__` method (since the sublayers will typically have a `build` method, they will be built when the outer layer gets built).
```
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(layers.Layer):

  def __init__(self):
    super(MLPBlock, self).__init__()
    self.linear_1 = Linear(32)
    self.linear_2 = Linear(32)
    self.linear_3 = Linear(1)

  def call(self, inputs):
    x = self.linear_1(inputs)
    x = tf.nn.relu(x)
    x = self.linear_2(x)
    x = tf.nn.relu(x)
    return self.linear_3(x)

mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64)))  # The first call to the `mlp` will create the weights
print('weights:', len(mlp.weights))
print('trainable weights:', len(mlp.trainable_weights))
```
### Layers recursively collect losses created during the forward pass
When writing the `call` method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling `self.add_loss(value)`:
```
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(layers.Layer):

  def __init__(self, rate=1e-2):
    super(ActivityRegularizationLayer, self).__init__()
    self.rate = rate

  def call(self, inputs):
    self.add_loss(self.rate * tf.reduce_sum(inputs))
    return inputs
```
These losses (including those created by any inner layer) can be retrieved via `layer.losses`.
This property is reset at the start of every `__call__` to the top-level layer, so that `layer.losses` always contains the loss values created during the last forward pass.
```
class OuterLayer(layers.Layer):

  def __init__(self):
    super(OuterLayer, self).__init__()
    self.activity_reg = ActivityRegularizationLayer(1e-2)

  def call(self, inputs):
    return self.activity_reg(inputs)

layer = OuterLayer()
assert len(layer.losses) == 0  # No losses yet since the layer has never been called

_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1  # We created one loss value

# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1  # This is the loss created during the call above
```
In addition, the `loss` property also contains regularization losses created for the weights of any inner layer:
```
class OuterLayer(layers.Layer):

  def __init__(self):
    super(OuterLayer, self).__init__()
    self.dense = layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(1e-3))

  def call(self, inputs):
    return self.dense(inputs)

layer = OuterLayer()
_ = layer(tf.zeros((1, 1)))

# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
```
These losses are meant to be taken into account when writing training loops, like this:
```python
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset:
  with tf.GradientTape() as tape:
    logits = layer(x_batch_train)  # Logits for this minibatch
    # Loss value for this minibatch
    loss_value = loss_fn(y_batch_train, logits)
    # Add extra losses created during this forward pass:
    loss_value += sum(model.losses)

  grads = tape.gradient(loss_value, model.trainable_weights)
  optimizer.apply_gradients(zip(grads, model.trainable_weights))
```
For a detailed guide about writing training loops, see the second section of the [Guide to Training & Evaluation](./training_and_evaluation.ipynb).
### You can optionally enable serialization on your layers
If you need your custom layers to be serializable as part of a [Functional model](./functional.ipynb), you can optionally implement a `get_config` method:
```
class Linear(layers.Layer):

  def __init__(self, units=32):
    super(Linear, self).__init__()
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

  def get_config(self):
    return {'units': self.units}

# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
```
Note that the `__init__` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__` and to include them in the layer config:
```
class Linear(layers.Layer):

  def __init__(self, units=32, **kwargs):
    super(Linear, self).__init__(**kwargs)
    self.units = units

  def build(self, input_shape):
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal',
                             trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='random_normal',
                             trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b

  def get_config(self):
    config = super(Linear, self).get_config()
    config.update({'units': self.units})
    return config

layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
```
If you need more flexibility when deserializing the layer from its config, you can also override the `from_config` class method. This is the base implementation of `from_config`:
```python
def from_config(cls, config):
  return cls(**config)
```
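The same override pattern can be illustrated without TensorFlow, using a plain Python stand-in for a layer. This is a hypothetical sketch (the class name and the "tolerate extra keys" behavior are illustrative, not part of the Keras API):

```python
class PlainLinear:
    """Plain-Python stand-in for a serializable layer."""

    def __init__(self, units=32):
        self.units = units

    def get_config(self):
        return {'units': self.units}

    @classmethod
    def from_config(cls, config):
        # Custom deserialization: ignore keys the constructor doesn't accept.
        known = {k: v for k, v in config.items() if k in ('units',)}
        return cls(**known)

layer = PlainLinear.from_config({'units': 64, 'extra_metadata': 'ignored'})
print(layer.units)  # 64
```

Overriding `from_config` like this is useful when a saved config carries fields the current constructor no longer takes.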
To learn more about serialization and saving, see the complete [Guide to Saving and Serializing Models](./saving_and_serializing.ipynb).
### Privileged `training` argument in the `call` method
Some layers, in particular the `BatchNormalization` layer and the `Dropout` layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a `training` (boolean) argument in the `call` method.
By exposing this argument in `call`, you enable the built-in training and evaluation loops (e.g. `fit`) to correctly use the layer in training and inference.
```
class CustomDropout(layers.Layer):

  def __init__(self, rate, **kwargs):
    super(CustomDropout, self).__init__(**kwargs)
    self.rate = rate

  def call(self, inputs, training=None):
    if training:
      return tf.nn.dropout(inputs, rate=self.rate)
    return inputs
```
## Building Models
### The Model class
In general, you will use the `Layer` class to define inner computation blocks,
and will use the `Model` class to define the outer model -- the object you will train.
For instance, in a ResNet50 model, you would have several ResNet blocks subclassing `Layer`,
and a single `Model` encompassing the entire ResNet50 network.
The `Model` class has the same API as `Layer`, with the following differences:
- It exposes built-in training, evaluation, and prediction loops (`model.fit()`, `model.evaluate()`, `model.predict()`).
- It exposes the list of its inner layers, via the `model.layers` property.
- It exposes saving and serialization APIs.
Effectively, the "Layer" class corresponds to what we refer to in the literature
as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block").
Meanwhile, the "Model" class corresponds to what is referred to in the literature
as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network").
For instance, we could take our mini-resnet example above, and use it to build a `Model` that we could
train with `fit()`, and that we could save with `save_weights`:
```python
class ResNet(tf.keras.Model):

  def __init__(self):
    super(ResNet, self).__init__()
    self.block_1 = ResNetBlock()
    self.block_2 = ResNetBlock()
    self.global_pool = layers.GlobalAveragePooling2D()
    self.classifier = Dense(num_classes)

  def call(self, inputs):
    x = self.block_1(inputs)
    x = self.block_2(x)
    x = self.global_pool(x)
    return self.classifier(x)

resnet = ResNet()
dataset = ...
resnet.fit(dataset, epochs=10)
resnet.save_weights(filepath)
```
### Putting it all together: an end-to-end example
Here's what you've learned so far:
- A `Layer` encapsulates a state (created in `__init__` or `build`) and some computation (in `call`).
- Layers can be recursively nested to create new, bigger computation blocks.
- Layers can create and track losses (typically regularization losses).
- The outer container, the thing you want to train, is a `Model`. A `Model` is just like a `Layer`, but with added training and serialization utilities.
Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.
Our VAE will be a subclass of `Model`, built as a nested composition of layers that subclass `Layer`. It will feature a regularization loss (KL divergence).
```
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon


class Encoder(layers.Layer):
    """Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""

    def __init__(self,
                 latent_dim=32,
                 intermediate_dim=64,
                 name='encoder',
                 **kwargs):
        super(Encoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
        self.dense_mean = layers.Dense(latent_dim)
        self.dense_log_var = layers.Dense(latent_dim)
        self.sampling = Sampling()

    def call(self, inputs):
        x = self.dense_proj(inputs)
        z_mean = self.dense_mean(x)
        z_log_var = self.dense_log_var(x)
        z = self.sampling((z_mean, z_log_var))
        return z_mean, z_log_var, z


class Decoder(layers.Layer):
    """Converts z, the encoded digit vector, back into a readable digit."""

    def __init__(self,
                 original_dim,
                 intermediate_dim=64,
                 name='decoder',
                 **kwargs):
        super(Decoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
        self.dense_output = layers.Dense(original_dim, activation='sigmoid')

    def call(self, inputs):
        x = self.dense_proj(inputs)
        return self.dense_output(x)


class VariationalAutoEncoder(tf.keras.Model):
    """Combines the encoder and decoder into an end-to-end model for training."""

    def __init__(self,
                 original_dim,
                 intermediate_dim=64,
                 latent_dim=32,
                 name='autoencoder',
                 **kwargs):
        super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
        self.original_dim = original_dim
        self.encoder = Encoder(latent_dim=latent_dim,
                               intermediate_dim=intermediate_dim)
        self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)

    def call(self, inputs):
        z_mean, z_log_var, z = self.encoder(inputs)
        reconstructed = self.decoder(z)
        # Add KL divergence regularization loss.
        kl_loss = - 0.5 * tf.reduce_mean(
            z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
        self.add_loss(kl_loss)
        return reconstructed
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Iterate over epochs.
for epoch in range(3):
    print('Start of epoch %d' % (epoch,))

    # Iterate over the batches of the dataset.
    for step, x_batch_train in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            reconstructed = vae(x_batch_train)
            # Compute reconstruction loss
            loss = mse_loss_fn(x_batch_train, reconstructed)
            loss += sum(vae.losses)  # Add KLD regularization loss

        grads = tape.gradient(loss, vae.trainable_weights)
        optimizer.apply_gradients(zip(grads, vae.trainable_weights))

        loss_metric(loss)

        if step % 100 == 0:
            print('step %s: mean loss = %s' % (step, loss_metric.result()))
```
Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
```
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
```
### Beyond object-oriented development: the Functional API
Was this example too much object-oriented development for you? You can also build models using [the Functional API](./functional.ipynb). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match.
For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above.
```
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name='encoder_input')
x = layers.Dense(intermediate_dim, activation='relu')(original_inputs)
z_mean = layers.Dense(latent_dim, name='z_mean')(x)
z_log_var = layers.Dense(latent_dim, name='z_log_var')(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name='encoder')
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name='z_sampling')
x = layers.Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = layers.Dense(original_dim, activation='sigmoid')(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name='decoder')
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name='vae')
# Add KL divergence regularization loss.
kl_loss = - 0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
```
| github_jupyter |
## 0. Display an image with OpenCV
```
import cv2
def cv2_display(image_ndarray):
    windowName = 'display'
    cv2.imshow(windowName, image_ndarray)
    # Press Esc or q to close the window
    pressKey = cv2.waitKey(0)
    if 27 == pressKey or ord('q') == pressKey:
        cv2.destroyAllWindows()
```
## 1. Load two image files as image data
```
image_ndarray_1 = cv2.imread('../resources/1.jpg')
image_ndarray_2 = cv2.imread('../resources/2.jpg')
```
### 1.1 Display the original images
```
# Press Esc or q to close the cv2 display window
cv2_display(image_ndarray_1)
cv2_display(image_ndarray_2)
```
## 2. Image processing
```
def get_processedImage(image_ndarray):
    # Process the captured image: first convert to grayscale, then apply a Gaussian blur
    image_ndarray_1 = cv2.cvtColor(image_ndarray, cv2.COLOR_BGR2GRAY)
    # Gaussian blurring reduces the effect of small changes in brightness, vibration, etc.
    filter_size = 7
    image_ndarray_2 = cv2.GaussianBlur(image_ndarray_1, (filter_size, filter_size), 0)
    return image_ndarray_2
image_ndarray_1_2 = get_processedImage(image_ndarray_1)
image_ndarray_2_2 = get_processedImage(image_ndarray_2)
```
### 2.1 Display the processed images
```
cv2_display(image_ndarray_1_2)
cv2_display(image_ndarray_2_2)
```
## 3. Image differencing
```
absdiff_ndarray = cv2.absdiff(image_ndarray_1_2, image_ndarray_2_2)
```
### 3.1 Display the difference image
```
cv2_display(absdiff_ndarray)
```
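A brief aside on why `cv2.absdiff` is used here rather than a plain `-`: both frames are `uint8` arrays, so ordinary subtraction silently wraps around wherever the second image is brighter than the first. A NumPy-only sketch of the problem (the pixel values below are made up for illustration):

```python
import numpy as np

# Two tiny single-channel "frames" with 8-bit pixel values
frame1 = np.array([[10, 200]], dtype=np.uint8)
frame2 = np.array([[30, 180]], dtype=np.uint8)

# Plain subtraction wraps around for uint8 (10 - 30 becomes 236)
wrapped = frame1 - frame2

# The absolute difference can be computed safely by widening the dtype first
safe = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16)).astype(np.uint8)

print(wrapped)  # [[236  20]]
print(safe)     # [[20 20]]
```

`cv2.absdiff(frame1, frame2)` produces the same result as the widened-dtype version, without the wraparound.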
## 4. Image thresholding (binarization)
```
result_1 = cv2.threshold(absdiff_ndarray, 25, 255, cv2.THRESH_BINARY)
type(result_1)
len(result_1)
type(result_1[0])
result_1[0]
type(result_1[1])
result_1[1].shape
cv2_display(result_1[1])
threshhold_ndarray = result_1[1]
```
### 4.1 Display the thresholded image
```
cv2_display(threshhold_ndarray)
```
## 5. Get the contour list and process each contour
```
contour_list = cv2.findContours(threshhold_ndarray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
import datetime
image_ndarray_3 = image_ndarray_2.copy()
bgr_color = (0, 0, 255)
thickness = 2
for contour in contour_list:
    # Ignore contours whose bounding area is too small
    if cv2.contourArea(contour) < 2000:
        continue
    x1, y1, w, h = cv2.boundingRect(contour)
    x2, y2 = x1 + w, y1 + h
    leftTop_coordinate = x1, y1
    rightBottom_coordinate = x2, y2
    cv2.rectangle(image_ndarray_3, leftTop_coordinate, rightBottom_coordinate, bgr_color, thickness)
    text = "Find motion object! x=%d, y=%d" % (x1, y1)
    print(text)
    cv2.putText(image_ndarray_3, text, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, bgr_color, thickness)
time_string = datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p")
_ = cv2.putText(image_ndarray_3, time_string, (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, bgr_color, thickness)
```
### 5.1 Display the image after drawing boxes around the contours
```
cv2_display(image_ndarray_3)
```
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import copy
from PIL import Image, ImageOps
import scanpy as sc
```
# Download input demo data
```
wget -O 10X_ST_demo.tar.gz https://zenodo.org/record/5524883/files/10X_ST_demo.tar.gz?download=1
tar -zxvf 10X_ST_demo.tar.gz
```
```
# change into the 10X_ST_demo directory that was just downloaded
os.chdir('./10X_ST_demo')
inputfile='inputfiles.txt'
samples = pd.read_csv(inputfile, header = None)[0]
samples
samples[0]
# samples = samples[0:2]
# samples
def read_each(i):
    adata = sc.read_visium(i)
    adata.var_names_make_unique()
    # flip Y axis to show correctly in cellxgene VIP
    adata.obsm['spatial'][:, 1] = -adata.obsm['spatial'][:, 1]
    return adata
adatals = [read_each(i) for i in samples]
import anndata
sampleIDs = samples.str.extract(r'10X_demo_data_(.*)')
sampleIDs = "V1_"+ sampleIDs
sampleIDs
sampleIDs[0].astype("category")
adata_merge = sc.AnnData.concatenate(*adatals, batch_key='sample', join='outer', batch_categories= sampleIDs[0].astype("category"))
adata_merge.obs.head()
list(adatals[3].uns["spatial"])[0]
for i in range(len(adatals)):
    print(i)
    # add back the spatial coordinates as separate embeddings
    adata_merge.obsm['X_spatial_'+list(adatals[i].uns["spatial"])[0]] = np.zeros(adata_merge.obsm['spatial'].shape)
    adata_merge.obsm['X_spatial_'+list(adatals[i].uns["spatial"])[0]][np.where(adata_merge.obs['sample']==list(adatals[i].uns["spatial"])[0])] = adatals[i].obsm['spatial']
adata_merge.uns['spatial'] = dict()
for i in range(len(adatals)):
    adata_merge.uns['spatial']["spatial_"+list(adatals[i].uns["spatial"])[0]] = adatals[i].uns['spatial'][list(adatals[i].uns["spatial"])[0]]
adata_merge.var_names.str.startswith("MT-").sum()
# QC metric
adata_merge.var["mt"] = adata_merge.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata_merge, qc_vars=["mt"], inplace=True)
# QC plots
fig, axs = plt.subplots(1, 4, figsize=(15, 4))
sns.distplot(adata_merge.obs["total_counts"], kde=False, ax=axs[0])
sns.distplot(adata_merge.obs["total_counts"][adata_merge.obs["total_counts"] < 10000], kde=False, bins=40, ax=axs[1])
sns.distplot(adata_merge.obs["n_genes_by_counts"], kde=False, bins=60, ax=axs[2])
sns.distplot(adata_merge.obs["n_genes_by_counts"][adata_merge.obs["n_genes_by_counts"] < 4000], kde=False, bins=60, ax=axs[3])
# filtering, turn this off to keep all spots for visualization also cutoffs are case-by-case based on the QC plots
#sc.pp.filter_cells(adata, min_counts=5000)
#sc.pp.filter_cells(adata, max_counts=35000)
#adata = adata[adata.obs["pct_counts_mt"] < 20]
#print(f"#cells after MT filter: {adata.n_obs}")
#sc.pp.filter_genes(adata, min_cells=10)
# normalization, log1p transformation and select HVGs
sc.pp.normalize_total(adata_merge, inplace=True)
sc.pp.log1p(adata_merge)
sc.pp.highly_variable_genes(adata_merge, flavor="seurat", n_top_genes=2000)
# PCA, UMAP and clustering by leiden
sc.pp.pca(adata_merge)
sc.pp.neighbors(adata_merge)
sc.tl.umap(adata_merge)
sc.tl.leiden(adata_merge, key_added="clusters")
# collect sample names
sampleNames = list()
for f in list(adata_merge.obsm):
    if "spatial_" in f:  # search for the pattern
        library_id = f.replace("X_spatial_", "")  # parse the string and get the sample id
        #library_id=library_id.replace("V1_","")
        sampleNames.append(library_id)
sampleNames
from PIL import Image
spatial=adata_merge.uns["spatial"]
dim=''
import math
if dim == '':
    height = math.ceil(math.sqrt(len(samples)))
    width = math.ceil(len(samples)/height)
else:
    width, height = dim.split('x')
print(height)
print(width)
print(len(sampleNames))
idx = 0
size=700
# create a new empty RGB image large enough to tile all the lowres images
new_im = Image.new('RGB', (size*width, size*height))
for i in range(0, size*width, size):
    if idx >= len(sampleNames):
        break
    for j in range(0, size*height, size):
        # load the lowres image from the object
        #im = Image.fromarray((spatial["spatial_V1_"+samples[idx]]["images"]["lowres"]* 255).round().astype(np.uint8))
        im = Image.fromarray((spatial["spatial_"+sampleNames[idx]]["images"]["lowres"] * 255).round().astype(np.uint8))  # convert float32 to uint8
        # paste images together
        new_im.paste(im, (j, i))
        print(idx)
        idx = idx + 1
        if idx >= len(sampleNames):
            break
# fake a adata.uns by providing merged lowres image and scale factors 1
adata_merge.uns['spatial']['spatial_Merged'] = copy.deepcopy(adata_merge.uns['spatial'][list(adata_merge.uns['spatial'])[0]])
adata_merge.uns['spatial']['spatial_Merged']['images']["hires"] = np.asarray(new_im)
adata_merge.uns['spatial']['spatial_Merged']['images']["lowres"] = np.asarray(new_im)
adata_merge.uns['spatial']['spatial_Merged']['scalefactors']['tissue_lowres_scalef'] = 1
adata_merge.uns['spatial']['spatial_Merged']['scalefactors']['tissue_hires_scalef'] = 1
# add back the spatial coordinates as separate embeddings
idx = 0
adata_merge.obsm['X_spatial_Merged'] = adata_merge.obsm['spatial']
for i in range(0, size*width, size):
    if idx >= len(sampleNames):
        break
    for j in range(0, size*height, size):
        #library_id='spatial_V1_'+samples[idx] # parse the string and get the sample id
        library_id = 'spatial_' + sampleNames[idx]  # parse the string to get the sample id
        print(library_id)
        tissue_lowres_scalef = spatial[library_id]['scalefactors']['tissue_lowres_scalef']
        adata_merge.obsm['X_spatial_Merged'][np.where(adata_merge.obs['sample']==sampleNames[idx])] = copy.deepcopy(adatals[idx].obsm['spatial'])
        adata_merge.obsm['X_spatial_Merged'][np.where(adata_merge.obs['sample']==sampleNames[idx]),1] = adatals[idx].obsm['spatial'][:,1]*tissue_lowres_scalef - i
        adata_merge.obsm['X_spatial_Merged'][np.where(adata_merge.obs['sample']==sampleNames[idx]),0] = adatals[idx].obsm['spatial'][:,0]*tissue_lowres_scalef + j
        idx = idx + 1
        if idx >= len(sampleNames):
            break
outputfile = '10X_data.h5ad'
adata_merge.write_h5ad(outputfile)
adata_merge.obs
```
## Preprocessing
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
np.random.seed(2)
x1 = pd.DataFrame(np.random.normal(size=50), columns=['col1'])
x2 = pd.DataFrame(np.random.normal(size=50), columns=['col2'])
x = pd.concat([x1, x2], axis=1)
x
x.col1.iloc[0:25] += 3  # shift the first 25 observations, as in the book
x.col2.iloc[0:25] -= 4
x
```
***
## 10.5.1 $K$-means clustering
**$K$=2**
```
from sklearn.cluster import KMeans as KM
km_out = KM(n_clusters=2, n_init=20).fit(x)
km_labels = km_out.labels_
km_labels
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.scatter(x.col1[km_labels==0], x.col2[km_labels==0], color='green', s=500, alpha=0.5)
plt.scatter(x.col1[km_labels==1], x.col2[km_labels==1], color='orange', s=500, alpha=0.5)
plt.xlabel('col1', fontsize=20, color='c')
plt.ylabel('col2', fontsize=20, color='c')
plt.title('K-means clustering results with K=2', fontsize=30, color='m')
```
**$K$=3**
```
np.random.seed(4)  # NumPy's RNG differs from R's, so this won't reproduce the book's exact result, but I use the same seed value
km_out = KM(n_clusters=3, n_init=20).fit(x)
km_labels = km_out.labels_
km_labels
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.scatter(x.col1[km_labels==0], x.col2[km_labels==0], color='green', s=500, alpha=0.5)
plt.scatter(x.col1[km_labels==1], x.col2[km_labels==1], color='orange', s=500, alpha=0.5)
plt.scatter(x.col1[km_labels==2], x.col2[km_labels==2], color='blue', s=500, alpha=0.5)
plt.xlabel('col1', fontsize=20, color='c')
plt.ylabel('col2', fontsize=20, color='c')
plt.title('K-means clustering results with K=3', fontsize=30, color='m')
k_cluster_means = pd.DataFrame(km_out.cluster_centers_, columns=['col1', 'col2'])
k_cluster_means
```
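The book compares clusterings via their total within-cluster sum of squares (`tot.withinss` in R). scikit-learn exposes the same quantity as `KMeans.inertia_`; the sketch below (my own toy data, not the lab's `x`) recomputes it by hand to show what it measures:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data for illustration only
rng = np.random.default_rng(0)
data = rng.normal(size=(50, 2))

km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(data)

# Sum of squared distances of each point to its assigned cluster centre
manual = sum(
    np.sum((data[km.labels_ == k] - centre) ** 2)
    for k, centre in enumerate(km.cluster_centers_)
)

print(np.isclose(manual, km.inertia_))  # True
```

With `n_init=20` the algorithm is run 20 times from different starts and the fit with the smallest inertia is kept, which is exactly why the book recommends `nstart` greater than 1.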
***
## 10.5.2 Hierarchical clustering
```
from scipy.cluster.hierarchy import linkage, dendrogram, cut_tree
hc_complete = linkage(y=x, method='complete')
hc_average = linkage(y=x, method='average')
hc_single = linkage(y=x, method='single')
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.title('complete linkage', fontsize=30, color='m')
plt.xlabel('index', fontsize=20, color='c')
plt.ylabel('distance', fontsize=20, color='c')
dendrogram(hc_complete, leaf_rotation=90., leaf_font_size=15., show_leaf_counts=True)
plt.show()
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.title('average linkage', fontsize=30, color='m')
plt.xlabel('index', fontsize=20, color='c')
plt.ylabel('distance', fontsize=20, color='c')
dendrogram(hc_average, leaf_rotation=90., leaf_font_size=15., show_leaf_counts=True)
plt.show()
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.title('single linkage', fontsize=30, color='m')
plt.xlabel('index', fontsize=20, color='c')
plt.ylabel('distance', fontsize=20, color='c')
dendrogram(hc_single, leaf_rotation=90., leaf_font_size=15., show_leaf_counts=True)
plt.show()
cut_tree(hc_complete, n_clusters=2).T
cut_tree(hc_average, n_clusters=2).T
cut_tree(hc_single, n_clusters=2).T
cut_tree(hc_single, n_clusters=4).T
from sklearn.preprocessing import StandardScaler
xsc = StandardScaler().fit_transform(x)
xsc
hc_complete_xsc = linkage(y=xsc, method='complete')
plt.xkcd()
plt.figure(figsize=(25, 10))
plt.title('complete linkage - scaled data', fontsize=30, color='m')
plt.xlabel('index', fontsize=20, color='c')
plt.ylabel('distance', fontsize=20, color='c')
dendrogram(hc_complete_xsc, leaf_rotation=90., leaf_font_size=15., show_leaf_counts=True)
plt.show()
```
```
%matplotlib nbagg
import os
import glob
from collections import defaultdict, namedtuple
import sqlite3
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import lsst.afw.table as afw_table
import lsst.daf.persistence as dp
import lsst.geom
import desc.sims_ci_pipe as scp
def make_SourceCatalog(df):
    schema = afw_table.SourceTable.makeMinimalSchema()
    schema.addField('flux', type=float, doc='flux in nJy')
    src_cat = afw_table.SourceCatalog(schema)
    for iloc in range(len(df)):
        row = df.iloc[iloc]
        new_rec = src_cat.addNew()
        try:
            new_rec.set('id', int(row['id']))
        except ValueError:
            new_rec.set('id', iloc)
        new_rec.set('coord_ra', lsst.geom.Angle(row.ra, lsst.geom.degrees))
        new_rec.set('coord_dec', lsst.geom.Angle(row.dec, lsst.geom.degrees))
        new_rec.set('flux', row['flux'])
    return src_cat
def match_meas_fluxes(butler, visit, truth_df0,
                      flux_type='base_PsfFlux', max_offset=0.1,
                      point_sources=True):
    flux_col = f'{flux_type}_instFlux'
    radius = lsst.geom.Angle(max_offset, lsst.geom.arcseconds)
    dfs = []
    datarefs = butler.subset('src', visit=visit)
    for i, dataref in enumerate(list(datarefs)):
        try:
            calib = dataref.get('calexp').getPhotoCalib()
        except Exception:
            # skip sensors without a calexp
            continue
        src = dataref.get('src')
        if point_sources:
            src = scp.get_point_sources(src)
        ras = np.degrees(src.get('coord_ra'))
        decs = np.degrees(src.get('coord_dec'))
        ra_min, ra_max = min(ras), max(ras)
        dec_min, dec_max = min(decs), max(decs)
        query = f'{ra_min} <= ra <= {ra_max} and {dec_min} <= dec <= {dec_max}'
        truth_df = truth_df0.query(query)
        truth_cat = make_SourceCatalog(truth_df)
        matches = afw_table.matchRaDec(truth_cat, src, radius)
        num_matches = len(matches)
        print(i, len(truth_df), len(src), num_matches)
        ids = np.zeros(num_matches, dtype=int)
        offsets = np.zeros(num_matches, dtype=float)
        true_fluxes = np.zeros(num_matches, dtype=float)
        meas_fluxes = np.zeros(num_matches, dtype=float)
        meas_fluxerrs = np.zeros(num_matches, dtype=float)
        for j, match in enumerate(matches):
            ids[j] = match.first['id']
            offsets[j] = np.degrees(match.distance)*3600*1000.  # offset in mas
            true_fluxes[j] = match.first['flux']
            meas_fluxes[j] = calib.instFluxToNanojansky(match.second[flux_col])
            meas_fluxerrs[j] \
                = calib.instFluxToNanojansky(match.second[flux_col + 'Err'])
        dfs.append(pd.DataFrame(data=dict(id=ids, offset=offsets,
                                          true_flux=true_fluxes,
                                          meas_flux=meas_fluxes,
                                          meas_fluxerr=meas_fluxerrs)))
    df = pd.concat(dfs)
    return df
def zeros():
    return dict(ra=0, dec=0, flux=0, npts=0)

def make_lens_sys_truth_cat(df, id_col='lens_sys_id'):
    data = defaultdict(zeros)
    for iloc in range(len(df)):
        row = df.iloc[iloc]
        record = data[row[id_col]]
        record['ra'] += row.ra
        record['dec'] += row.dec
        record['flux'] += row.flux
        record['npts'] += 1
    df_data = defaultdict(list)
    for obj_id in data:
        df_data['id'].append(obj_id)
        df_data['ra'].append(data[obj_id]['ra']/data[obj_id]['npts'])
        df_data['dec'].append(data[obj_id]['dec']/data[obj_id]['npts'])
        df_data['flux'].append(data[obj_id]['flux'])
    return pd.DataFrame(data=df_data)
#visit = 934713
#band = 'r'
visit = 906935
band = 'i'
df_stars = pd.read_pickle(f'src_truth_match_v{visit}-{band}.pkl')
df_stars.head()
plt.figure()
plt.hexbin(np.log10(df_stars['meas_flux']), df_stars['meas_flux']/df_stars['true_flux'],
mincnt=1)
plt.xlabel('log10(meas_flux/nJy)')
plt.ylabel('meas_flux/true_flux')
plt.title(f'v{visit}-{band}, stars only')
plt.savefig(f'v{visit}-{band}_flux_ratio_vs_meas_flux.png')
butler = dp.Butler('repo_agns')
truth_df0 = pd.read_pickle(f'agn_fluxes_v{visit}.pkl')
df_agns = match_meas_fluxes(butler, visit, truth_df0, max_offset=0.1)
print(len(df_agns))
print()
butler = dp.Butler('repo_lensed_agns')
truth_df0 = make_lens_sys_truth_cat(pd.read_pickle(f'lensed_agn_fluxes_v{visit}.pkl'))
df_lensed_agns = match_meas_fluxes(butler, visit, truth_df0, max_offset=0.1,
flux_type='base_CircularApertureFlux_12_0', point_sources=False)
print(len(df_lensed_agns))
print()
butler = dp.Butler('repo_lensed_sne')
truth_df0 = make_lens_sys_truth_cat(pd.read_pickle(f'lensed_sne_fluxes_v{visit}.pkl'))
df_lensed_sne = match_meas_fluxes(butler, visit, truth_df0)
print(len(df_lensed_sne))
print()
butler = dp.Butler('repo_lensed_hosts')
truth_df0 = make_lens_sys_truth_cat(pd.read_pickle(f'host_fluxes_v{visit}.pkl'), id_col='id')
df_hosts = match_meas_fluxes(butler, visit, truth_df0, flux_type='base_CircularApertureFlux_12_0',
point_sources=False)
print(len(df_hosts))
print()
plt.figure()
plt.scatter(df_stars['true_flux'], df_stars['meas_flux']/df_stars['true_flux'], s=4, label='stars')
plt.scatter(df_lensed_agns['true_flux'], df_lensed_agns['meas_flux']/df_lensed_agns['true_flux'], s=4,
label='lensed AGNs', alpha=1)
plt.scatter(df_agns['true_flux'], df_agns['meas_flux']/df_agns['true_flux'], s=4,
label='AGNs', alpha=1)
#plt.scatter(df_lensed_sne['true_flux'], df_lensed_sne['meas_flux']/df_lensed_sne['true_flux'], s=4,
# label='lensed SNe', alpha=1)
plt.scatter(df_hosts['true_flux'], df_hosts['meas_flux']/df_hosts['true_flux'], s=4,
label='lensed hosts', alpha=1)
plt.xscale('log')
plt.axhline(1, linestyle=':', color='black', alpha=0.5)
plt.legend(fontsize='x-small')
plt.title(f'v{visit}-{band}')
plt.xlabel('true flux (nJy)')
plt.ylabel('meas_flux/true_flux')
plt.ylim(0, 3)
plt.savefig(f'Run3.0i_instcat_flux_checks_v{visit}-{band}.png')
```
# Similar sounding words
This is a list of similar sounding words that I have collected from various sources on the web and added to as I find new pairs.
Unlike most homophone, homograph, and homonym resources this list is not targeting ESL or educational use. Instead it is designed for finding common errors in speech recognition texts. Specifically I use it with [Caster](https://caster.readthedocs.io/en/latest/) for voice programming.
I currently have five different sources. I've downloaded their contents as text files, or in one case HTML and parsed appropriately. I have also linked to the original location of these files both inside the files and in the headings between Jupyter cells below.
Unfortunately I wasn't thinking about reproducibility when I started this project, so most of the text files have had a bit of light preprocessing in a text editor. Given that I don't expect these source lists to change in the future, I don't think it will be a problem.
```
from bs4 import BeautifulSoup # pip install beautifulsoup4
from disjoint_set import DisjointSet # pip install disjoint-set
import re
from pprint import pformat
```
# [7esl.html](https://7esl.com/homophones/)
```
contents = open("7esl.html", encoding="utf8").read()
parser = BeautifulSoup(contents, 'html.parser')
similar_7esl = []
for element in parser.find_all("p"):
    candidate = element.find("strong")
    if candidate:
        partitions = candidate.text.lower().split(" —– ")
        if len(partitions) > 1:
            words = []
            for p in partitions:
                words.extend(s.strip().replace('’', "'") for s in p.split("/"))
            similar_7esl.append(words)
similar_7esl
```
# [ku.txt](https://web.ku.edu/~edit/wordsall.html)
```
contents = open("ku.txt").read().lower().splitlines()[1:]
similar_ku = [s.split(';') for s in contents]
similar_ku
```
# [singularis.txt](http://www.singularis.ltd.uk/bifroest/misc/homophones-list.html)
```
contents = open("singularis.txt").read().lower().splitlines()[1:]
similar_singularis = [s.split(', ') for s in contents]
similar_singularis
```
# [teachingtreasures.txt](https://www.teachingtreasures.com.au/teaching-tools/Basic-worksheets/worksheets-english/upper/homophones-list.htm)
```
contents = open("teachingtreasures.txt").read().lower().splitlines()[1:]
similar_teachingtreasures = [s.split(' ') for s in contents if s]
similar_teachingtreasures
```
# [thoughtco](https://www.thoughtco.com/homonyms-homophones-and-homographs-a-b-1692660)
```
contents = open("thoughtco.txt").read().lower().splitlines()[1:]
similar_thoughtco = [s.split(' ') for s in contents if s]
similar_thoughtco
```
# My personal list of words not found above
These were identified through trial and error (actually, just error) during dictation. Pull Requests welcome. These words tend to be commonly confused in dragon, but are not generally recognized as homophones.
```
contents = open("numbers.txt").read().lower().splitlines()[1:]
similar_numbers = [s.lower().split(',') for s in contents if s]
similar_numbers
contents = open("dusty.txt").read().lower().splitlines()[1:]
similar_dusty = [s.lower().split(',') for s in contents if s]
similar_dusty
```
# Join it all together
We want a list of all possible sets of words. This list of lists will surely contain duplicates (in fact, mostly duplicates).
I have done a visual sanity check in all the outputs above, but I'll do another below.
```
similar_words = []
similar_words.extend(similar_7esl)
similar_words.extend(similar_ku)
similar_words.extend(similar_singularis)
similar_words.extend(similar_teachingtreasures)
similar_words.extend(similar_thoughtco)
similar_words.extend(similar_numbers)
similar_words.extend(similar_dusty)
regex = re.compile("^[a-z'-]+$")
for similar in similar_words:
    if len(set(similar)) < 2:
        print(similar)
    for word in similar:
        if not regex.match(word):
            print(word)
```
# Dedup
Removing duplicates is not trivial, since the different sets of words may include multiple variations (for example, one set has *your* and *you're* and another includes *yore*). It would be easy enough to just do a double loop, but disjoint sets are my favourite data structure, and I've never actually had an opportunity to use them in production code before. Read up on the union-find algorithm if you're unfamiliar with it; it's pretty cool.
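If you are curious what `DisjointSet` does under the hood, here is a minimal union-find sketch (the class and method names below are mine, not the `disjoint-set` package's API): `find` follows parent pointers up to a set's root (compressing the path as it goes), and `union` links the roots of two elements.

```python
class TinyDisjointSet:
    """Minimal union-find: elements with the same root belong to the same set."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

ds = TinyDisjointSet()
ds.union("your", "you're")
ds.union("yore", "you're")
# All three words now share one root, i.e. one set
print(ds.find("your") == ds.find("yore"))  # True
```

The real package adds union-by-rank and an `itersets()` iterator, but the merging behaviour used below is exactly this.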
```
word_set = DisjointSet()
for word_list in similar_words:
    for word in word_list[1:]:
        word_set.union(word_list[0], word)
wordsets = sorted(sorted(s) for s in word_set.itersets())
wordsets
len(wordsets)
```
# Redupe
The final output is a dictionary of words mapping to all the words similar to that word, not including that word.
```
index = {}
for similar in wordsets:
    for word in similar:
        local = similar.copy()
        local.remove(word)
        index[word] = local
index
len(index)
with open("../similar_sounding_words.py", "w") as file:
    file.write("index = " + pformat(index))
```
# Lesson 03: Transience and knickpoints
*This lesson has been written by Simon M. Mudd at the University of Edinburgh*
*Last update 30/09/2021*
Okay, if you have followed through the first two lessons, you will be getting a feel for the shape of channel longitudinal profiles. In this lesson, we will look at landscape transience, using simulations based on the stream power model (see previous lesson for warning labels on this approach).
We are going to use the `channeltoy` Python package, which I wrote for very simple simulations of channel profiles. First, we need to install and import some packages for running `channeltoy` and plotting the results.
```
!pip install channeltoy
import channeltoy as ct
import matplotlib.pyplot as plt
import numpy as np
```
Now create a channel and set it to steady state, increase the uplift rate, and run the simulation:
```
# create a channel and then solve it for steady state
chan = ct.channeltoy(spacing=250, U = 0.0002, K = 0.00005, n=1, m= 0.45)
initial_elevation = chan.solve_steady_state_elevation()
# now change the uplift rate
chan.set_U_values(U = 0.0005)
# Run the transient simulation. You can use the start and end time to tell the model how long to run
# the print_interval tells the model how frequently to print channel profiles that you plot later
times, elevations = chan.transient_simulation(base_level = 0, dt = 200,
start_time = 0, end_time = 100001,
print_interval = 25000)
# Make a plot of the elevations
chan.plot_transient_channel(times = times,
elevations = elevations,
initial_elevation = initial_elevation,
show_figure=True,print_to_file=False)
```
The channel has retained the elevations from the last timestep, so you can actually keep it running for another few steps if you want. I mention this because if you wanted you could build up very complex uplift histories by sequentially running the model.
```
# Continue running the model.
# The start time should be the same as the end time of the last simulation.
times, elevations = chan.transient_simulation(base_level = 0, dt = 200,
start_time = 100000, end_time = 150001,
print_interval = 25000)
chan.plot_transient_channel(times = times,
elevations = elevations,
initial_elevation = initial_elevation,
show_figure=True,print_to_file=False)
```
## Knickpoints
In lesson 1, we looked at how rivers are typically concave (or if you are being mathematically correct, concave up). In lesson 2 we showed how the stream power law also predicts a concave river profile if uplift is steady. In the above simulation, not all of the channel is concave.
What is happening in this simulation? Let's do another simulation with a larger change in the uplift rate and more timesteps:
```
# create a channel
chan = ct.channeltoy(spacing=50, U = 0.0001, K = 0.00005, n=1, m= 0.45)
initial_elevation = chan.solve_steady_state_elevation()
# change the uplift rate
chan.set_U_values(U = 0.0005)
# Run the transient simulation.
times, elevations = chan.transient_simulation(base_level = 0, dt = 200,
start_time = 0, end_time = 70001,
print_interval = 10000)
# Make a plot of the elevations
chan.plot_transient_channel(times = times,
elevations = elevations,
initial_elevation = initial_elevation,
show_figure=True,print_to_file=False)
```
What is happening here? You start from a steady channel (the bottom profile in this case) and increase uplift, and a steeper section then "grows" upstream. This new section has a channel steepness that reflects the new erosion rate (which reflects the new uplift rate). I will show you this in a slope-area plot:
```
A = chan.A_data
z = chan.z_data
S = np.gradient(z)/50 # The 50 is the spacing of the nodes I used above
plt.scatter(A,S)
plt.xlabel("Drainage area ($m^2$)")
plt.ylabel("gradient (m/m)")
plt.yscale('log')
plt.xscale('log')
```
**Important note here**: I use a very basic solution of the equations that "smears" the boundary between the different erosion rates. If you use an exact solution, the boundary is perfectly sharp: a step change. You can read all about that in this paper:
Royden, L., Perron, J.T., 2013. Solutions of the stream power equation and application to the evolution of river longitudinal profiles. Journal of Geophysical Research: Earth Surface 118, 497–518. https://doi.org/10.1002/jgrf.20031
**The change in steepness corresponds to a part of the channel that is convex**.
A **convexity** in a channel profile is an indicator of a **knickpoint**. A knickpoint can be a step change in the channel elevation (a waterfall, basically) or it can be a zone of increased steepness, sometimes called a **knickzone**. In the simulation above, the knickzone actually extends all the way from the outlet to the convexity in the channel profile.
## Some knickzones move upstream
Knickzones occur for a number of reasons, for example due to changes in rock hardness. If they are fixed somewhere at a lithological boundary, we call these stationary knickpoints or knickzones. But knickpoints and knickzones generated by changes in erosion rates will migrate upstream.
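Why upstream? Under the stream power law this falls out of the math. Here is a sketch of the standard kinematic-wave argument (not a full derivation; I take $x$ to increase downstream, $S = -\partial z / \partial x$, and use the same $U$, $K$, $A$, $m$, $n$ as in the `channeltoy` calls above):

```latex
% Stream power evolution of channel elevation z(x, t):
\frac{\partial z}{\partial t} = U - K A^m \left(-\frac{\partial z}{\partial x}\right)^{n}

% For n = 1 this is a kinematic wave equation:
\frac{\partial z}{\partial t} - K A^m \frac{\partial z}{\partial x} = U

% Its characteristics travel at dx/dt = -K A^m, i.e. upstream, so a
% knickpoint migrates with celerity
C = K A^m \quad (n = 1), \qquad C = K A^m S^{\,n-1} \quad \text{in general}
```

Because $A$ shrinks upstream, the knickpoint slows down as it climbs toward the channel head, which you can see in the simulations below.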
You can actually change the uplift rate midway through a run in the `channeltoy` to see how knickzones move through the system, which is what I do in the code below.
You might want to decrease the print interval if you want a detailed look at how the knickpoint is moving (you will need to play with this a bit to see the knickpoint moving but not generate too many overlapping lines).
```
# create a channel
chan = ct.channeltoy(spacing=250, U = 0.0002, K = 0.00005, n=1, m= 0.45)
initial_elevation = chan.solve_steady_state_elevation()
# change the uplift rate
chan.set_U_values(U = 0.0005)
# Run the transient simulation. You can use the start and end time to
times, elevations = chan.transient_simulation(base_level = 0, dt = 200,
start_time = 0, end_time = 50001,
print_interval = 25000)
# Now change the uplift rate
chan.set_U_values(U = 0.0001)
# Run the transient simulation. You can use the start and end time to
times2, elevations2 = chan.transient_simulation(base_level = 0, dt = 200,
start_time = 50000, end_time = 100001,
print_interval = 25000)
# We need to get rid of the first time and elevation of the second run, since they duplicate the last entry of the first run.
times2.pop(0)
elevations2.pop(0)
# Now concatenate the time series
all_times = np.concatenate((times, times2))
all_elevs = np.concatenate((elevations, elevations2), axis=0)
# Make a plot of the elevations
chan.plot_transient_channel(times = all_times,
elevations = all_elevs,
initial_elevation = initial_elevation,
show_figure=True,print_to_file=False)
```
Geomorphologists spend a lot of time looking for either knickzones or changes to channel steepness. They could indicate a change to the rock hardness, but you can often rule that out using geological maps.
In fact, the migration of knickzones upstream does not depend on the stream power law (which has many critics): any number of physically based incision models have this behaviour (but the details differ: different models predict different rates of knickzone migration).
__Task:__ Play around with the parameters above to get a feeling for how quickly knickzones migrate through a channel in response to changing uplift rates.
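One way to build that feeling quantitatively: for n = 1 the knickpoint behaves as a kinematic wave whose celerity is C = K A^m, so it migrates fastest where drainage area is large and slows as it moves upstream. A back-of-the-envelope sketch, with an assumed drainage area (the K and m values match the run above):

```python
# Kinematic-wave celerity of a knickpoint for n = 1: C = K * A**m.
# K and m are taken from the channeltoy run above; A is an assumed,
# illustrative drainage area, not a value computed by channeltoy.
K = 0.00005
m = 0.45
A = 1.0e6                            # drainage area in m^2 (illustrative)
celerity = K * A ** m                # migration rate in m/yr
distance_in_50kyr = celerity * 50000 # distance migrated over 50 kyr
print(celerity, distance_in_50kyr)
```

Halving the drainage area roughly halves A^m only for m = 1; with m = 0.45 the celerity falls off more gently, which is why knickpoints decelerate gradually rather than stalling as they move upstream.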
| github_jupyter |
# AWS Rekognition Text Detection Test
```
import boto3
s3_resource = boto3.resource('s3')
client=boto3.client('rekognition')
import matplotlib.pyplot as plt
%matplotlib inline
```
IMAGE 1
```
bucket='secondpythonbucket6ce9cccf-c429-471c-99a1-f36e849ee381'
photo='00007-4883-13_DB18ED97.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
if text['Id'] < 7:
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
import imageio
import matplotlib.pyplot as plt
pill_img = imageio.imread('./pillbox_images/00007-4883-13_DB18ED97.jpg')
plt.imshow(pill_img);
```
IMAGE 2
```
photo='009045988.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
#if text['Id'] < 7:
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
import imageio
import matplotlib.pyplot as plt
pill_img = imageio.imread('./pillbox_images/009045988.jpg')
plt.imshow(pill_img);
```
IMAGE 3
```
photo='006035484.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
if (text['Id'] < 7) & (text['Confidence'] > 85):
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
import imageio
import matplotlib.pyplot as plt
pill_img = imageio.imread('./pillbox_images/006035484.jpg')
plt.imshow(pill_img);
```
IMAGE 4
```
photo='007773107.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
if (text['Id'] < 7) & (text['Confidence'] > 85):
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
import imageio
import matplotlib.pyplot as plt
pill_img = imageio.imread('./pillbox_images/007773107.jpg')
plt.imshow(pill_img);
```
### Let's try to manipulate an image from S3 Bucket
We'll try to cut the image in half to split into 2 images
(To mimic a scenario where a user will send pictures of front and back of pill)
Then we'll seek to read text from each and keep unique pieces of text (but only those with "Confidence" > 85%)
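The filtering step described above (keep only unique `DetectedText` values whose `Confidence` exceeds a threshold) can be sketched as a small helper. The detection dicts below are invented for illustration and carry only the two keys we use; real Rekognition responses include more fields.

```python
def unique_confident_text(detections, threshold=85):
    """Return unique DetectedText values with Confidence above threshold."""
    found = [d['DetectedText'] for d in detections if d['Confidence'] > threshold]
    return sorted(set(found))

# Illustrative detections, not real Rekognition output
sample = [
    {'DetectedText': 'WATSON', 'Confidence': 99.1},
    {'DetectedText': '853', 'Confidence': 91.4},
    {'DetectedText': 'WATSON', 'Confidence': 96.0},  # duplicate from other side
    {'DetectedText': 'blur', 'Confidence': 42.7},    # below threshold, dropped
]
print(unique_confident_text(sample))  # -> ['853', 'WATSON']
```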
IMAGE 5
```
photo='007811655.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
if (text['Id'] < 7) & (text['Confidence'] > 85):
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
```
### How to get the URL of a file uploaded to an S3 bucket?
Path-style: https://s3.<region>.amazonaws.com/<bucket_name>/<object_name>
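A path-style object URL can be assembled with plain string formatting; the region, bucket, and key below are placeholders, not real objects:

```python
# Build a path-style S3 object URL from its components.
# region, bucket and key are placeholder values for illustration.
region = 'us-east-2'
bucket = 'my-bucket'
key = 'images/pill.jpg'
url = f'https://s3.{region}.amazonaws.com/{bucket}/{key}'
print(url)  # -> https://s3.us-east-2.amazonaws.com/my-bucket/images/pill.jpg
```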
### Getting Image from S3 and Splitting into 2 Images
```
# Downloading image
s3_resource = boto3.resource('s3')
s3_resource.Object(bucket, photo).download_file(f'./{photo}')
import imageio
import matplotlib.pyplot as plt
pic = imageio.imread('./007811655.jpg')
plt.imshow(pic);
pic.shape
height, width = pic.shape[:2]
# Cut the image in half
height_cutoff = height // 2
s1 = pic[:height_cutoff,:]
s2 = pic[height_cutoff:,:]
# Save each half
imageio.imwrite('img1.png', s1)
imageio.imwrite('img2.png', s2)
pic1 = imageio.imread('./img1.png')
plt.imshow(pic1);
pic2 = imageio.imread('./img2.png')
plt.imshow(pic2);
photo='597620119.jpg'
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
textDetections=response['TextDetections']
print ('Detected text')
for text in textDetections:
if (text['Id'] < 7) & (text['Confidence'] > 85):
print ('Detected text:' + text['DetectedText'])
print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
print('\n')
# print ('Id: {}'.format(text['Id']))
import pandas as pd
df_txt = pd.DataFrame(textDetections)
df_txt = df_txt.drop(['Geometry', 'Id', 'ParentId', 'Type'], axis=1)
df_txt
df1 = df_txt.groupby('DetectedText').count()
df1
```
### Code to Get Unique Text Sets for Rekognition Detection
```
text_found = []
for text in textDetections:
text_found.append(text['DetectedText'])
text_set = list(set(text_found))
text_set
text_found
```
## Testing Rekognition with 1 and 2 Sided Test Images
### Uploading Cropped Text Images
- Took images from PillBox and divided them to have 1 image for each side
- Uploaded images into an S3 bucket
```
import os
img_bucket_name = 'firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad'
path = './test_images'
# counting files uploaded
# n_count = 0
# for filename in os.listdir(path):
# s3_resource.Object(img_bucket_name,
# filename).upload_file(
# Filename=f'./test_images/{filename}')
# n_count += 1
# print(f'Number of files uploaded: {n_count}')
```
### Reading Text from Test Images
To be read in pairs (Side A & Side B)
#### First Image Test
```
# Test bucket
bucket='firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad'
# Will need to take the JSON object and extract the 1 or 2 filenames
# then pass them into a variable as a list
photo_sides=['img7a.JPG', 'img7b.JPG']
# Empty list to contain list(s) of text blob(s) extracted with "Rekognition"
# Will contain a list per side (2 lists)
all_text = []
all_confLevels = []
# Looping through each image in "photo_sides" list
for photo in photo_sides:
# Detecting Text from Specified Image in S3 Bucket
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
# Detected Text (List of Dictionaries)
textDetections=response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
confLevel_found = []
for text in textDetections:
text_found.append(text['DetectedText'])
confLevel = "{:.2f}".format(text['Confidence']) + "%"
confLevel_found.append(confLevel)
text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_set)
all_confLevels.append(confLevel_found)
all_text
all_confLevels
reversed_text = all_text.copy()
reversed_text.reverse()
reversed_text
```
#### Second Image Test
This one returns varying results.
We'll need to limit results not just to **"unique" text blobs** but also by **confidence level**.
```
# Test bucket
bucket='firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad'
# Will need to take the JSON object and extract the 1 or 2 filenames
# then pass them into a variable as a list
photo_sides=['img4a.JPG', 'img4b.JPG']
# Empty list to contain list(s) of text blob(s) extracted with "Rekognition"
# Will contain a list per side (2 lists)
all_text = []
all_confLevels = []
# Looping through each image in "photo_sides" list
for photo in photo_sides:
# Detecting Text from Specified Image in S3 Bucket
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
# Detected Text (List of Dictionaries)
textDetections=response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
confLevel_found = []
for text in textDetections:
if text['Confidence'] > 87:
text_found.append(text['DetectedText'])
confLevel = "{:.2f}".format(text['Confidence']) + "%"
confLevel_found.append(confLevel)
#text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_found)
all_confLevels.append(confLevel_found)
all_text
```
Confidence levels are higher for the two 'S's and '1003', both above 85%.
```
all_confLevels
```
#### Third Image Test
```
# Test bucket
bucket='firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad'
# Will need to take the JSON object and extract the 1 or 2 filenames
# then pass them into a variable as a list
photo_sides=['img4a.JPG', 'img4b.JPG']
# Empty list to contain list(s) of text blob(s) extracted with "Rekognition"
# Will contain a list per side (2 lists)
all_text = []
all_confLevels = []
# Looping through each image in "photo_sides" list
for photo in photo_sides:
# Detecting Text from Specified Image in S3 Bucket
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}})
# Detected Text (List of Dictionaries)
textDetections=response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
confLevel_found = []
for text in textDetections:
if text['Confidence'] > 85:
text_found.append(text['DetectedText'])
confLevel = "{:.2f}".format(text['Confidence']) + "%"
confLevel_found.append(confLevel)
#text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_found)
all_confLevels.append(confLevel_found)
all_text
all_confLevels
import re
all_split_text = []
for text_list in all_text:
if len(text_list) > 0:
for text in text_list:
text_split = re.split(r'(\D+)', text)
all_split_text.append(text_split)
unique_list = []
for each in all_split_text:
unique_list.append([i for i in each if i])
unique_list
```
Flattening the returned list of lists:
```
text_list = [blob for sublist in all_text for blob in sublist]
text_list = list(set(text_list))
text_list
```
List of text blobs split where digits and letters are together:
```
unique_list = []
for each in text_list:
num_split = re.findall(r'[A-Za-z]+|\d+', each)
unique_list.append(num_split)
unique_list = [blob for sublist in unique_list for blob in sublist]
unique_list
```
### Now as a Function
```
import json
# Text Detection Function
def text_detection(filename_list):
#THIS IS A TEST BUCKET
bucket='firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad'
# Empty list to contain list(s) of text blob(s) extracted with "Rekognition"
# Will contain a list per side (2 lists)
all_text = []
# Looping through each image in "photo_sides" list
for file in filename_list:
# Detecting Text from Specified Image in S3 Bucket
response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':file}})
# Detected Text (List of Dictionaries)
textDetections=response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
for text in textDetections:
if text['Confidence'] > 87:
text_found.append(text['DetectedText'])
text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_set)
# Flattening 'all_text' (list of lists) into 1 list
text_list = [blob for sublist in all_text for blob in sublist]
text_list = list(set(text_list))
# print(f'text_list: {text_list}')
# Splitting any text blob that may have digits and numbers together
unique_list = []
for each in text_list:
num_split = re.findall(r'[A-Za-z]+|\d+', each)
unique_list.append(num_split)
# Flattening again into one list with just unique values
unique_list = [blob for sublist in unique_list for blob in sublist]
unique_list = list(set(unique_list))
# print(len(unique_list))
if len(unique_list) == 0:
unique_list = ['Unable to detect text']
# Return 'unique_list' as JSON
return json.dumps(unique_list)
```
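The post-processing at the heart of `text_detection` (flatten the per-image lists, split mixed alphanumeric blobs, dedupe, serialize) can be exercised on its own, without calling Rekognition:

```python
import json
import re

def postprocess(all_text):
    """Flatten per-image text lists, split mixed blobs, dedupe, return JSON."""
    flat = {blob for sublist in all_text for blob in sublist}
    pieces = set()
    for blob in flat:
        # Split runs of letters and runs of digits into separate tokens
        pieces.update(re.findall(r'[A-Za-z]+|\d+', blob))
    if not pieces:
        pieces = {'Unable to detect text'}
    return json.dumps(sorted(pieces))

# Illustrative per-side results, not real Rekognition output
print(postprocess([['WATSON 853'], ['853']]))  # -> ["853", "WATSON"]
```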
### Function Detecting Text from URL Image
This takes one image at a time, assuming each URL points to a single image.
```
# test_url = "http://www.gunnerkrigg.com//comics/00000001.jpg"
from skimage.exposure import rescale_intensity
from skimage import color
import urllib.request
import json
import re
import boto3
import numpy as np
import cv2
client=boto3.client('rekognition')
from PIL import Image
# Filter to increase image contrast
def add_contrast(image_path):
#-----Reading the image-----------------------------------------------------
img = cv2.imread(image_path, 1)
#-----Converting image to LAB Color model-----------------------------------
lab= cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
#-----Splitting the LAB image to different channels-------------------------
l, a, b = cv2.split(lab)
#-----Applying CLAHE to L-channel-------------------------------------------
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
cl = clahe.apply(l)
#-----Merge the CLAHE enhanced L-channel with the a and b channel-----------
limg = cv2.merge((cl,a,b))
#-----Converting image from LAB Color model to RGB model--------------------
image_contrast = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
return image_contrast
# Text Detection Function
def post_rekog(pic_json):
# Getting list of image file names
imageURL_list = pic_json.get("image_locations")
# print(f'imageURL_list {imageURL_list}')
# text from image(s) uploaded by user
all_text = []
# text read from image(s) with contrast filter
all_filter_text = []
# Looping through image(s)
ctr1 = 10000
ctr2 = 10001
for imageURL in imageURL_list:
if imageURL != "":
# Saving image URL locally
ctr1 += 2
temp_img = str(ctr1) + ".jpg"
urllib.request.urlretrieve(imageURL, temp_img)
imageFile = './' + temp_img
# ------------- Detecting text from original image ------------
with open(imageFile, 'rb') as image:
# !!!!!! WRAP THIS IN A TRY / CATCH !!!!!!!!!
response = client.detect_text(Image={'Bytes': image.read()})
# response2 =
# Detected Text (List of Dictionaries)
textDetections=response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
for text in textDetections:
if text['Confidence'] > 70:
text_found.append(text['DetectedText'])
# print(text['Confidence'])
# print(f'text_found: {text_found}')
text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_set)
# ------------- Detecting text from filtered image ------------
filtered_img = add_contrast(imageFile)
# Saving image URL locally
ctr2 += 2
temp_img = str(ctr2) + ".jpg"
cv2.imwrite(temp_img, filtered_img)
imageFile2 = './' + temp_img
with open(imageFile2, 'rb') as image:
# !!!!!! WRAP THIS IN A TRY / CATCH !!!!!!!!!
response2 = client.detect_text(Image={'Bytes': image.read()})
# Detected Text (List of Dictionaries)
textDetections2=response2['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found2 = []
for text in textDetections2:
if text['Confidence'] > 0:
text_found2.append(text['DetectedText'])
# print(text['Confidence'])
# print(f'text_found: {text_found}')
text_set2 = list(set(text_found2))
# Appending detected text in image to "all_text" list
all_filter_text.append(text_set2)
# ------------------------------------------------------------
else:
continue
# Flattening 'all_text' (list of lists) into 1 list
text_list = [text for sublist in all_text for text in sublist]
text_list = list(set(text_list))
text_list2 = [text for sublist in all_filter_text for text in sublist]
text_list2 = list(set(text_list2))
# print(f'text_list: {text_list}')
# print(f'text_list2: {text_list2}')
# Splitting any text blob that may have digits and numbers together
unique_list = []
for each in text_list:
num_split = re.findall(r'[A-Za-z]+|\d+', each)
unique_list.append(num_split)
unique_list2 = []
for each in text_list2:
num_split = re.findall(r'[A-Za-z]+|\d+', each)
unique_list2.append(num_split)
# Flattening again into one list with just unique values
unique_list = [text for sublist in unique_list for text in sublist]
unique_list = list(set(unique_list))
unique_list2 = [text for sublist in unique_list2 for text in sublist]
unique_list2 = list(set(unique_list2))
# print(unique_list2)
# Return 'final_list'
final_list = set(unique_list + unique_list2)
# If 'final_list' is empty, return an empty set instead
if len(final_list) == 0:
return {}
# For long resulting lists get only 3!
# (new list length 3 will be random since it's originally a set turned to list)
if len(final_list) > 3:
final_list = set(list(final_list)[:3])
return final_list
```
#### NO FILTER FUNCTION
```
def post_rekogNF(pic_json,elem_limit=3, con_fidence=70):
# Getting list of image file names
imageURL_list = pic_json.get("image_locations")
# text from image(s) uploaded by user
all_text = []
# Looping through image(s)
ctr1 = 10000
for imageURL in imageURL_list:
if imageURL != "":
# Saving image URL locally
ctr1 += 1
temp_img = str(ctr1) + ".jpg"
urllib.request.urlretrieve(imageURL, temp_img)
imageFile = './' + temp_img
# ------------- Detecting text from original image ------------
with open(imageFile, 'rb') as image:
# !!!!!! WRAP THIS IN A TRY / CATCH !!!!!!!!!
response = client.detect_text(Image={'Bytes': image.read()})
# Detected Text (List of Dictionaries)
textDetections = response['TextDetections']
# Parsing Through Detected Text and
# Making list of Unique Sets of Text Detected
text_found = []
for text in textDetections:
if text['Confidence'] > con_fidence:
text_found.append(text['DetectedText'])
# print(text['Confidence'])
# print(f'text_found: {text_found}')
text_set = list(set(text_found))
# Appending detected text in image to "all_text" list
all_text.append(text_set)
else:
continue
# Flattening 'all_text' (list of lists) into 1 list
text_list = [text for sublist in all_text for text in sublist]
text_list = list(set(text_list))
# Splitting any text blob that may have digits and numbers together
unique_list = []
for each in text_list:
num_split = re.findall(r'[A-Za-z]+|\d+', each)
unique_list.append(num_split)
# Flattening again into one list with just unique values
unique_list = [text for sublist in unique_list for text in sublist]
unique_list = list(set(unique_list))
# print('unique_list:\n', unique_list)
# Return 'final_list'
final_list = set(unique_list)
# print(final_list)
# If 'final_list' is empty, return an empty set instead
if len(final_list) == 0:
return {}
# For long resulting lists get only 3!
# (new list length 3 will be random since it's originally a set turned to list)
# if len(final_list) > elem_limit:
# # turning set to a list sorted by string length
# final_list = sorted(list(final_list), key=len)[-elem_limit:]
# final_list = set(final_list)
#print('all detected text', all_text)
return final_list
```
### Passing image pairs into `text_detection` function
#### Testing function on URL!
```
# Cannot detect text in this pill
# test_url = {"image_locations": ["./test_images/img4a.JPG"]}
# post_rekog(test_url)
# import imageio
# pic1 = imageio.imread('./test_images/img4a.JPG')
# pic2 = imageio.imread('./test_images/img4b.JPG')
# plt.imshow(pic1);
# plt.imshow(pic2);
# H;126
test_url = {"image_locations": ["https://raw.githubusercontent.com/ed-chin-git/ed-chin-git.github.io/master/sample_pill_image.jpg", ""]}
post_rekog(test_url)
# Image with tons of pills
test_url = {"image_locations": ["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/adderall.jpg",
"https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/adderall.jpg"]}
post_rekog(test_url)
# this side of the pill is without text
test_url = {"image_locations":["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img2b.JPG",
"https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img2b.JPG"]}
post_rekog(test_url)
test_url = {"image_locations":["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img5b.JPG"]}
post_rekog(test_url)
test_url={"image_locations":["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img8.JPG"]}
post_rekog(test_url)
test_url={"image_locations":["https://s3.amazonaws.com/labs12-rxidstore/reference/000069117.jpg"]}
post_rekog(test_url)
```
#### Testing function NO FILTER!!!
```
# H;126
test_url = {"image_locations": ["https://raw.githubusercontent.com/ed-chin-git/ed-chin-git.github.io/master/sample_pill_image.jpg", ""]}
post_rekogNF(test_url, 3)
# this side of the pill is without text
test_url = {"image_locations":["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img2b.JPG",
"https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/img2b.JPG"]}
post_rekogNF(test_url)
# Image with tons of pills
test_url = {"image_locations": ["https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/adderall.jpg",
"https://s3.us-east-2.amazonaws.com/firstpythonbucketac60bb97-95e1-43e5-98e6-0ca294ec9aad/adderall.jpg"]}
post_rekogNF(test_url)
```
Will converting the object from S3 to grayscale help?
```
from skimage.exposure import rescale_intensity
from skimage import color
obj_key = 'img3a.JPG'
obj = s3_resource.Object(bucket, obj_key)
obj_body = obj.get()['Body'].read()
photo = imageio.imread(obj_body)
bw_photo = rescale_intensity(color.rgb2gray(photo))
plt.imshow(bw_photo);
```
### AWS --> Analyzing an Image Loaded from a Local File System
https://docs.aws.amazon.com/rekognition/latest/dg/images-bytes.html
The following AWS SDK for Python example shows how to load an image from the local file system and call the detect_labels operation. Change the value of imageFile to the path and file name of an image file (.jpg or .png format).
```
import boto3
if __name__ == "__main__":
imageFile='input.jpg'
client=boto3.client('rekognition')
with open(imageFile, 'rb') as image:
response = client.detect_labels(Image={'Bytes': image.read()})
print('Detected labels in ' + imageFile)
for label in response['Labels']:
print (label['Name'] + ' : ' + str(label['Confidence']))
print('Done...')
```
```
'https://s3.amazonaws.com/rxid-images/uploads/00002-4462-30_B215591A.jpg'
```
#### Detecting text from local image
```
imageFile='./test_images/img3b_contrast.JPG'
with open(imageFile, 'rb') as image:
response = client.detect_text(Image={'Bytes': image.read()})
print('Detected labels in ' + imageFile)
for text in response['TextDetections']:
print (text['DetectedText'] + ' : ' + str(text['Confidence']))
# textDetections=response['TextDetections']
# print ('Detected text')
# for text in textDetections:
# print ('Detected text:' + text['DetectedText'])
# print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
# print('\n')
# print ('Id: {}'.format(text['Id']))
```
#### Detecting text from URL image
```
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")
imageFile='./00000001.jpg'
with open(imageFile, 'rb') as image:
response = client.detect_text(Image={'Bytes': image.read()})
print('Detected labels in ' + imageFile)
for text in response['TextDetections']:
print (text['DetectedText'] + ' : ' + str(text['Confidence']))
```
# Improving a model with Grid Search
In this mini-lab, we'll fit a decision tree model to some sample data. This initial model will overfit heavily. Then we'll use Grid Search to find better parameters for this model, to reduce the overfitting.
First, some imports.
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
### 1. Reading and plotting the data
Now, a function that will help us read the csv file, and plot the data.
```
def load_pts(csv_name):
data = np.asarray(pd.read_csv(csv_name, header=None))
X = data[:,0:2]
y = data[:,2]
plt.scatter(X[np.argwhere(y==0).flatten(),0], X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0], X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom=False,
top=False)
return X,y
X, y = load_pts('data.csv')
plt.show()
```
### 2. Splitting our data into training and testing sets
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, make_scorer
#Fixing a random seed
import random
random.seed(42)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
### 3. Fitting a Decision Tree model
```
from sklearn.tree import DecisionTreeClassifier
# Define the model (with default hyperparameters)
clf = DecisionTreeClassifier(random_state=42)
# Fit the model
clf.fit(X_train, y_train)
# Make predictions
train_predictions = clf.predict(X_train)
test_predictions = clf.predict(X_test)
```
Now let's plot the model, and find the testing f1_score, to see how we did.
The following function will help us plot the model.
```
def plot_model(X, y, clf):
plt.scatter(X[np.argwhere(y==0).flatten(),0],X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0],X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom=False,
top=False)
r = np.linspace(-2.1,2.1,300)
s,t = np.meshgrid(r,r)
s = np.reshape(s,(np.size(s),1))
t = np.reshape(t,(np.size(t),1))
h = np.concatenate((s,t),1)
z = clf.predict(h)
s = s.reshape((np.size(r),np.size(r)))
t = t.reshape((np.size(r),np.size(r)))
z = z.reshape((np.size(r),np.size(r)))
plt.contourf(s,t,z,colors = ['blue','red'],alpha = 0.2,levels = range(-1,2))
if len(np.unique(z)) > 1:
plt.contour(s,t,z,colors = 'k', linewidths = 2)
plt.show()
plot_model(X, y, clf)
print('The Training F1 Score is', f1_score(train_predictions, y_train))
print('The Testing F1 Score is', f1_score(test_predictions, y_test))
```
Woah! Some heavy overfitting there, not just from looking at the graph, but also from the difference between the high training score (1.0) and the low testing score (0.7). Let's see if we can find better hyperparameters for this model. We'll use grid search for this.
### 4. (TODO) Use grid search to improve this model.
In here, we'll do the following steps:
1. First define some parameters to perform grid search on. We suggest to play with `max_depth`, `min_samples_leaf`, and `min_samples_split`.
2. Make a scorer for the model using `f1_score`.
3. Perform grid search on the classifier, using the parameters and the scorer.
4. Fit the data to the new classifier.
5. Plot the model and find the f1_score.
6. If the model is not much better, try changing the ranges for the parameters and fit it again.
**_Hint:_ If you're stuck and would like to see a working solution, check the solutions notebook in this same folder.**
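For intuition, grid search is just an exhaustive loop over the Cartesian product of the parameter lists. A stdlib-only sketch of the idea, with a made-up scoring function standing in for cross-validated F1:

```python
from itertools import product

# Same grid as suggested in the steps above
param_grid = {
    'max_depth': [2, 4, 6, 8, 10],
    'min_samples_leaf': [2, 4, 6, 8, 10],
    'min_samples_split': [2, 4, 6, 8, 10],
}

def toy_score(params):
    # Stand-in for a cross-validated f1_score; here it favours shallow trees
    return 1.0 / params['max_depth']

names = list(param_grid)
combos = [dict(zip(names, values)) for values in product(*param_grid.values())]
best = max(combos, key=toy_score)
print(len(combos), best['max_depth'])  # 125 combinations; best max_depth is 2
```

`GridSearchCV` does exactly this enumeration, but scores each combination by cross-validation and refits the best estimator on the full training set.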
```
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
clf = DecisionTreeClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune.
parameters = {'max_depth':[2,4,6,8,10],'min_samples_leaf':[2,4,6,8,10], 'min_samples_split':[2,4,6,8,10]}
# TODO: Make an fbeta_score scoring object.
scorer = make_scorer(f1_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method.
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters.
grid_fit = grid_obj.fit(X_train, y_train)
# TODO: Get the estimator.
best_clf = grid_fit.best_estimator_
# Fit the new model.
best_clf.fit(X_train, y_train)
# Make predictions using the new model.
best_train_predictions = best_clf.predict(X_train)
best_test_predictions = best_clf.predict(X_test)
# Calculate the f1_score of the new model.
print('The training F1 Score is', f1_score(best_train_predictions, y_train))
print('The testing F1 Score is', f1_score(best_test_predictions, y_test))
# Plot the new model.
plot_model(X, y, best_clf)
# Let's also explore what parameters ended up being used in the new model.
best_clf
```
# Build a TF model on private census with Sarus
### In this tutorial for **Data Scientists**, you will see how to:
1. Connect to Sarus gateway and see available datasets
2. Analyze the private data as a pandas dataframe
3. Preprocess the remote private data
4. Train a TF model on the remote real data
```
%%capture
!pip install sarus
from sarus import Client
client = Client(url='https://demo.sarus.tech:5000', email='demo.user@sarus.tech', password='Demo1')
# Here, you can use our demo credentials or your own ones!
client.list_datasets()
# Select a dataset among the ones you've been granted access to: the census one
remote_dataset = client.dataset(slugname="private_census")
# We can explore the remote data as a pandas dataframe
dataframe = remote_dataset.as_pandas()
dataframe.head()
dataframe.mean()
import tensorflow as tf
def preprocess(batch):
X = tf.stack([batch["age"], batch["education_num"], batch["hours_per_week"]], axis=1)
X = tf.nn.relu(X) # Replace encoded NaN by 0
y = batch["income"]
return X, y
tf_ds = remote_dataset.as_tensorflow().batch(16).map(preprocess)
from sarus.tensorflow import Model # Only changing the import line!
from tensorflow.keras.layers import Dense
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
class DNN(Model):
def __init__(self):
super().__init__()
self.dense = Dense(units=10)
self.dense2 = Dense(units=2)
def call(self, x, training=False):
return self.dense2(tf.nn.relu(self.dense(x)))
model = DNN()
model.compile(
optimizer=Adam(learning_rate=1e-3),
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=["sparse_categorical_accuracy"],
)
model.fit(tf_ds, epochs=2, target_epsilon=1)
```
```
import os
os.environ.get('GDS_ENV_VERSION')
```
# Generate illustrations of tessellation
This notebook contains one function, `pipeline`, which for a given point (lat, lon) generates a sequence of seven images illustrating the creation of morphological tessellation within a 250 m buffer. The function is used to generate the animations and figures in the blog post.
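Conceptually, morphological tessellation partitions space so that every location is assigned to its nearest building. A toy, pure-Python sketch of that nearest-seed principle on a small grid (the real pipeline below works on densified building boundaries and a Voronoi diagram via `scipy`/`pygeos`):

```python
# Assign each cell of a small grid to the nearest of two "building" seed
# points - the nearest-seed principle behind morphological tessellation.
# The seed coordinates are illustrative, not taken from real buildings.
seeds = {'A': (1.0, 1.0), 'B': (4.0, 4.0)}

def nearest_seed(x, y):
    # Squared Euclidean distance is enough for comparing which seed is closer
    return min(seeds, key=lambda s: (seeds[s][0] - x) ** 2 + (seeds[s][1] - y) ** 2)

grid = [[nearest_seed(x, y) for x in range(6)] for y in range(6)]
for row in grid:
    print(''.join(row))
```

Working from densified building *boundaries* rather than centroids is what lets the real tessellation respect building shapes instead of producing simple point-Voronoi cells.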
```
import geopandas as gpd
import momepy as mm
import osmnx as ox
import pygeos
import numpy as np
from scipy.spatial import Voronoi
import pandas as pd
from mapclassify import greedy
import contextily as ctx
import matplotlib.pyplot as plt
from palettable.wesanderson import FantasticFox2_5
from shapely.geometry import Point
def pipeline(lat, lon, path, prefix, dist=250, figsize=(12, 12)):
    point = (lat, lon)
    gdf = ox.geometries.geometries_from_point(point, dist=dist, tags={'building': True})
    gdf_projected = ox.projection.project_gdf(gdf)
    bounds = gdf_projected.total_bounds
    limit = Point(np.mean([bounds[0], bounds[2]]), np.mean([bounds[1], bounds[3]])).buffer(250)
    blg = gpd.clip(gdf_projected, limit).explode()
    bounds = limit.bounds
    # figure 1 - aerial
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    gpd.GeoSeries([limit.buffer(150).difference(limit)]).plot(ax=ax, color='white')
    ctx.add_basemap(ax, crs=blg.crs, source=ctx.providers.Esri.WorldImagery)
    ax.set_axis_off()
    plt.savefig(path + prefix + "01.png", bbox_inches='tight')
    plt.close()
    print("Figure 1 saved to " + path + prefix + "01.png")
    # figure 2 - overlay
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    gpd.GeoSeries([limit.buffer(150).difference(limit)]).plot(ax=ax, color='white')
    ctx.add_basemap(ax, crs=blg.crs, source=ctx.providers.Esri.WorldImagery)
    blg.plot(ax=ax, color='#0ea48f', edgecolor='k', alpha=.6)
    ax.set_axis_off()
    plt.savefig(path + prefix + "02.png", bbox_inches='tight')
    plt.close()
    print("Figure 2 saved to " + path + prefix + "02.png")
    # figure 3 - footprints
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    blg.plot(ax=ax, color='#0ea48f', edgecolor='k').set_axis_off()
    plt.savefig(path + prefix + "03.png", bbox_inches='tight')
    plt.close()
    print("Figure 3 saved to " + path + prefix + "03.png")
    shrinked = blg.buffer(-2)
    shrinked = shrinked[~shrinked.is_empty]
    # figure 4 - shrinked
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    blg.plot(ax=ax, facecolor='none', linewidth=.5, edgecolor='k')
    shrinked.plot(ax=ax, color='#0ea48f')
    ax.set_axis_off()
    plt.savefig(path + prefix + "04.png", bbox_inches='tight')
    plt.close()
    print("Figure 4 saved to " + path + prefix + "04.png")
    distance = 4
    points = np.empty((0, 2))
    ids = []
    lines = shrinked.boundary.values.data
    lengths = shrinked.length
    for ix, line, length in zip(shrinked.index, lines, lengths):
        if length > distance:
            pts = pygeos.line_interpolate_point(
                line,
                np.linspace(0.1, length - 0.1, num=int((length - 0.1) // distance)),
            )  # .1 offset to keep a gap between two segments
            if len(pts) > 0:
                points = np.append(points, pygeos.get_coordinates(pts), axis=0)
                ids += [ix] * len(pts)
    # figure 5 - points
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    blg.plot(ax=ax, facecolor='none', linewidth=.5, edgecolor='k')
    gpd.GeoSeries(pygeos.points(points)).plot(ax=ax, markersize=1, color='#0ea48f')
    ax.set_axis_off()
    plt.savefig(path + prefix + "05.png", bbox_inches='tight')
    plt.close()
    print("Figure 5 saved to " + path + prefix + "05.png")
    # add hull to resolve issues with infinity
    # this is just a correction step ensuring the algorithm will work correctly
    stop = points.shape[0]
    series = gpd.GeoSeries(limit)
    hull = series.geometry[[0]].buffer(500)
    line = hull.boundary.values.data[0]
    length = hull.length[0]
    pts = pygeos.line_interpolate_point(
        line,
        np.linspace(0.1, length - 0.1, num=int((length - 0.1) // distance)),
    )  # .1 offset to keep a gap between two segments
    points = np.append(points, pygeos.get_coordinates(pts), axis=0)
    ids += [-1] * len(pts)
    voronoi_diagram = Voronoi(np.array(points))
    vertices = pd.Series(voronoi_diagram.regions).take(voronoi_diagram.point_region)
    polygons = []
    for region in vertices:
        if -1 not in region:
            polygons.append(pygeos.polygons(voronoi_diagram.vertices[region]))
        else:
            polygons.append(None)
    regions_gdf = gpd.GeoDataFrame(
        {'unique_id': ids}, geometry=polygons
    ).dropna()
    regions_gdf = regions_gdf.loc[
        regions_gdf['unique_id'] != -1
    ]  # delete hull-based cells
    voronoi_tessellation = gpd.clip(regions_gdf, limit)
    # figure 6 - voronoi
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    gpd.GeoSeries(pygeos.points(points[:stop])).plot(ax=ax, markersize=1, zorder=3, color='#0ea48f')
    voronoi_tessellation.plot(ax=ax, facecolor='none', linewidth=.2, edgecolor='gray')
    ax.set_axis_off()
    plt.savefig(path + prefix + "06.png", bbox_inches='tight')
    plt.close()
    print("Figure 6 saved to " + path + prefix + "06.png")
    # figure 7 - tessellation
    fig, ax = plt.subplots(figsize=figsize)
    ax.axis([bounds[0], bounds[2], bounds[1], bounds[3]])
    blg = blg[blg.geom_type == 'Polygon']
    blg = blg.reset_index(drop=True)
    blg['uid'] = range(len(blg))
    tessellation = mm.Tessellation(blg, 'uid', limit, verbose=False).tessellation
    tessellation.plot(greedy(tessellation, strategy='smallest_last'), ax=ax, categorical=True, edgecolor='w', alpha=.6, cmap=FantasticFox2_5.mpl_colormap)
    ax.set_axis_off()
    plt.savefig(path + prefix + "07.png", bbox_inches='tight')
    plt.close()
    print("Figure 7 saved to " + path + prefix + "07.png")
pipeline(33.9488360, -118.2372975, path='./', prefix='la_', figsize=(15, 15))
pipeline(41.3907594, 2.1573404, path='./', prefix='bcn_', figsize=(15, 15))
pipeline(38.995888, -77.135073, path='./', prefix='atl_', figsize=(15, 15))
pipeline(44.4942640, 11.3473233, path='./', prefix='bol_', figsize=(15, 15))
pipeline(-15.8038355, -47.8918796, path='./', prefix='bra_', figsize=(15, 15))
```
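The hull-correction step inside `pipeline` relies on the fact that a Voronoi cell is unbounded exactly when its generating point lies on the convex hull of all points, so surrounding the area of interest with distant dummy points makes every inner cell finite. A minimal `scipy` sketch of that idea (the points here are arbitrary, not taken from the notebook):

```python
import numpy as np
from scipy.spatial import Voronoi

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.2, 1.1]])
vor = Voronoi(pts)
# all four points sit on the convex hull, so every region is unbounded
# (an unbounded region contains the sentinel vertex index -1)
print(all(-1 in vor.regions[vor.point_region[i]] for i in range(4)))

# add a distant ring of dummy points around them
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
ring = np.column_stack([10 * np.cos(theta) + 0.5, 10 * np.sin(theta) + 0.5])
vor2 = Voronoi(np.vstack([pts, ring]))
# the original four cells are now finite
print(all(-1 not in vor2.regions[vor2.point_region[i]] for i in range(4)))
```

In the notebook the same role is played by the points interpolated along the 500 m buffer boundary, whose cells are later dropped via the `-1` id.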
# Magic Methods
Below you'll find the same code from the previous exercise, except two more methods have been added: an `__add__` method and a `__repr__` method. Your task is to fill out the code and get all of the unit tests to pass. You'll find the code cell with the unit tests at the bottom of this Jupyter notebook.
As in previous exercises, there is an answer key that you can look at if you get stuck. Click on the "Jupyter" icon at the top of this notebook and open the folder `4.OOP_code_magic_methods`. You'll find the `answer.py` file inside the folder.
```
import math
import matplotlib.pyplot as plt
class Gaussian():
    """Gaussian distribution class for calculating and
    visualizing a Gaussian distribution.

    Attributes:
        mean (float) representing the mean value of the distribution
        stdev (float) representing the standard deviation of the distribution
        data_list (list of floats) a list of floats extracted from the data file
    """
    def __init__(self, mu=0, sigma=1):
        self.mean = mu
        self.stdev = sigma
        self.data = []

    def calculate_mean(self):
        """Method to calculate the mean of the data set.

        Args:
            None
        Returns:
            float: mean of the data set
        """
        # Calculate the mean of the data set stored in self.data,
        # update the mean attribute and return it
        avg = 1.0 * sum(self.data) / len(self.data)
        self.mean = avg
        return self.mean
    def calculate_stdev(self, sample=True):
        """Method to calculate the standard deviation of the data set.

        Args:
            sample (bool): whether the data represents a sample or population
        Returns:
            float: standard deviation of the data set
        """
        # If sample is True, use the Bessel-corrected (n - 1) denominator
        if sample:
            n = len(self.data) - 1
        else:
            n = len(self.data)
        mean = self.mean
        sigma = 0
        for d in self.data:
            sigma += (d - mean) ** 2
        sigma = math.sqrt(sigma / n)
        self.stdev = sigma
        return self.stdev
    def read_data_file(self, file_name, sample=True):
        """Method to read in data from a txt file. The txt file should have
        one number (float) per line. The numbers are stored in the data attribute.
        After reading in the file, the mean and standard deviation are calculated.

        Args:
            file_name (string): name of a file to read from
        Returns:
            None
        """
        # open the data file and append each value to data_list
        with open(file_name) as file:
            data_list = []
            line = file.readline()
            while line:
                data_list.append(float(line))
                line = file.readline()
        # store the data and recompute the summary statistics
        self.data = data_list
        self.mean = self.calculate_mean()
        self.stdev = self.calculate_stdev(sample)
    def plot_histogram(self):
        """Method to output a histogram of the instance variable data using
        the matplotlib pyplot library.

        Args:
            None
        Returns:
            None
        """
        plt.hist(self.data)
        plt.title('Histogram of Data')
        plt.xlabel('data')
        plt.ylabel('count')
    def pdf(self, x):
        """Probability density function calculator for the gaussian distribution.

        Args:
            x (float): point for calculating the probability density function
        Returns:
            float: probability density function output
        """
        return (1.0 / (self.stdev * math.sqrt(2 * math.pi))) * math.exp(-0.5 * ((x - self.mean) / self.stdev) ** 2)
    def plot_histogram_pdf(self, n_spaces=50):
        """Method to plot the normalized histogram of the data and a plot of the
        probability density function along the same range.

        Args:
            n_spaces (int): number of data points
        Returns:
            list: x values for the pdf plot
            list: y values for the pdf plot
        """
        min_range = min(self.data)
        max_range = max(self.data)
        # calculate the interval between x values
        interval = 1.0 * (max_range - min_range) / n_spaces
        x = []
        y = []
        # calculate the x values to visualize
        for i in range(n_spaces):
            tmp = min_range + interval * i
            x.append(tmp)
            y.append(self.pdf(tmp))
        # make the plots
        fig, axes = plt.subplots(2, sharex=True)
        fig.subplots_adjust(hspace=.5)
        axes[0].hist(self.data, density=True)
        axes[0].set_title('Normed Histogram of Data')
        axes[0].set_ylabel('Density')
        axes[1].plot(x, y)
        axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
        plt.show()
        return x, y
    def __add__(self, other):
        """Magic method to add together two Gaussian distributions.

        Args:
            other (Gaussian): Gaussian instance
        Returns:
            Gaussian: Gaussian distribution
        """
        # the mean of the sum is the sum of the means; the standard deviation
        # is the square root of the sum of squares: sqrt(stdev_one**2 + stdev_two**2)
        result = Gaussian()
        result.mean = self.mean + other.mean
        result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
        return result
    def __repr__(self):
        """Magic method to output the characteristics of the Gaussian instance.

        Args:
            None
        Returns:
            string: characteristics of the Gaussian,
            e.g. "mean 3.5, standard deviation 1.3"
        """
        return "mean {}, standard deviation {}".format(self.mean, self.stdev)
# Unit tests to check your solution
import unittest

class TestGaussianClass(unittest.TestCase):
    def setUp(self):
        self.gaussian = Gaussian(25, 2)

    def test_initialization(self):
        self.assertEqual(self.gaussian.mean, 25, 'incorrect mean')
        self.assertEqual(self.gaussian.stdev, 2, 'incorrect standard deviation')

    def test_pdf(self):
        self.assertEqual(round(self.gaussian.pdf(25), 5), 0.19947,
                         'pdf function does not give expected result')

    def test_meancalculation(self):
        self.gaussian.read_data_file('numbers.txt', True)
        self.assertEqual(self.gaussian.calculate_mean(),
                         sum(self.gaussian.data) / float(len(self.gaussian.data)),
                         'calculated mean not as expected')

    def test_stdevcalculation(self):
        self.gaussian.read_data_file('numbers.txt', True)
        self.assertEqual(round(self.gaussian.stdev, 2), 92.87, 'sample standard deviation incorrect')
        self.gaussian.read_data_file('numbers.txt', False)
        self.assertEqual(round(self.gaussian.stdev, 2), 88.55, 'population standard deviation incorrect')

    def test_add(self):
        gaussian_one = Gaussian(25, 3)
        gaussian_two = Gaussian(30, 4)
        gaussian_sum = gaussian_one + gaussian_two
        self.assertEqual(gaussian_sum.mean, 55)
        self.assertEqual(gaussian_sum.stdev, 5)

    def test_repr(self):
        gaussian_one = Gaussian(25, 3)
        self.assertEqual(str(gaussian_one), "mean 25, standard deviation 3")

tests = TestGaussianClass()
tests_loaded = unittest.TestLoader().loadTestsFromModule(tests)
unittest.TextTestRunner().run(tests_loaded)
```
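The mean- and variance-addition rule encoded in `__add__` can also be checked numerically. A quick sketch using only the standard library (the sample size and seed are arbitrary), drawing from the two Gaussians used in `test_add`:

```python
import math
import random

random.seed(0)
n = 100_000
# draw from N(25, 3) and N(30, 4) and sum the independent samples
samples = [random.gauss(25, 3) + random.gauss(30, 4) for _ in range(n)]
mean = sum(samples) / n
stdev = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
print(round(mean, 1), round(stdev, 1))  # expect roughly 55.0 and 5.0
```

The sample mean lands near 25 + 30 = 55 and the sample standard deviation near sqrt(3² + 4²) = 5, matching the values asserted in the unit tests.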
```
# Input: log files from OpenEye docking
# Does: computes the average score for each ligand from the 10 scores (one per cluster structure) and ranks them
# Output: txt file with the ligands ranked by average score, lowest to highest
import matplotlib.pyplot as plt
import numpy as np

n_ligands = 459
n_clusters = 10
file = ["GROMOS_CBA", "GROMOS", "PCA_CBA", "PCA", "TICA_CBA", "TICA"]
Savenames = ["GROMOS_CBA", "GROMOS", "PCA_CBA", "PCA", "TICA_CBA", "TICA"]
for zzz in range(len(file)):
    with open("/home/dhkumar/Formatted_Data/LogFiles/OpenEye/" + file[zzz] + "/" + file[zzz] + '.txt', "r") as input_file:
        content = input_file.readlines()
    # column 1 holds "<name>_<ligand number>", column 2 holds the docking score
    contentLigandNumber = [int(x.strip().split()[1].split("_")[1]) for x in content]
    contentLigandScore = [float(x.strip().split()[2]) for x in content]
    # split the file into one row of scores per cluster structure
    scores = np.zeros((n_clusters, n_ligands))
    for cluster in range(n_clusters):
        start = cluster * n_ligands
        numbers = contentLigandNumber[start:start + n_ligands]
        cluster_scores = contentLigandScore[start:start + n_ligands]
        for number, score in zip(numbers, cluster_scores):
            scores[cluster, number - 1] = score  # ligand numbers are 1-based
    # average score for each ligand across the 10 cluster structures
    average = scores.mean(axis=0)
    # rank ligands by average score, lowest (best) first;
    # a stable sort preserves the original order on ties
    order = np.argsort(average, kind='stable')
    rankings = order + 1  # back to 1-based ligand numbers
    rankings_scores = average[order]
    with open("/home/dhkumar/Formatted_Data/LogFiles/OpenEye/" + file[zzz] + "/" + Savenames[zzz] + "_Avg.txt", "w+") as out_file:
        for i in range(n_ligands):
            out_file.write('CatS_' + str(rankings[i]) + ',' + str(i + 1) + ',' + str(rankings_scores[i]) + ' \n')
    # plt.savefig("/home/dhkumar/Pictures/figures/All10ScoresForAllLigands/GLIDE/NEW/" + Savenames[zzz] + '.png')
```
# MMS Data in Python with pySPEDAS
Eric Grimes, egrimes@igpp.ucla.edu
December 4, 2019
Notes:
* this webinar will be recorded and posted to Youtube
* all of this is still beta
### Getting Started
Note: Python 3.5+ is required
To install the latest pySPEDAS:
`pip install pyspedas --upgrade`
or
`conda install -c spedas pyspedas`
### Tentative agenda:
- Introduction to the load routines and configuration options
- Ephemeris/Coordinates data
- FIELDS data, including curlometer calculations
- ASPOC data
- EPD (FEEPS/EIS) data, including pitch angle distributions
- Plasma (FPI/HPCA) data
Note: all of this depends heavily on the hard work of the pyTplot development team at LASP:
https://github.com/MAVENSDC/PyTplot
https://pytplot.readthedocs.io/en/latest/
```
from pytplot import tplot
```
## Introduction to the load routines and configuration options
The MMS load routines are simple functions with options set via keywords. If the data files don't exist locally, they're downloaded from the SDC or your network mirror.
The available load routines are:
- Fluxgate Magnetometer (fgm)
- Search-coil Magnetometer (scm)
- Combined FGM+SCM data (fsm)
- Electric field Double Probe (edp)
- Electron Drift Instrument (edi)
- Fast Plasma Investigation (fpi)
- Hot Plasma Composition Analyzer (hpca)
- Energetic Ion Spectrometer (eis)
- Fly's Eye Energetic Particle Sensor (feeps)
- Active Spacecraft Potential Control (aspoc)
- Digital Signal Processor (dsp)
- Ephemeris and Coordinates (mec)
You can import the load routines from the `mms` module by their name, e.g.,
```
from pyspedas.mms import mec
```
If you prefer the IDL syntax, you can import the load routines using their IDL names, e.g.,
```
from pyspedas import mms_load_mec
```
The first time you run a load routine, you'll probably be prompted for a username/password. Leave these blank if you don't have an SDC username/password.
If you've previously saved an SDC username/password with IDL, pySPEDAS should find and use those credentials.
### Configuration options
Global configuration options are set in the `CONFIG` dictionary in `mms_config.py`
* `local_data_dir`: your local data directory; can also be set with the `MMS_DATA_DIR` environment variable (ideal)
* `mirror_data_dir`: read-only network mirror
* `download_only`: download the data, but don't load the files into pyTplot variables
* `no_download`: don't contact the SDC; only load the files if they exist on the network mirror or in your local data directory
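As a minimal sketch of the environment-variable route described above (the path is hypothetical and must be set before pySPEDAS first looks for data):

```python
import os

# Hypothetical local data directory; adjust to your setup.
os.environ['MMS_DATA_DIR'] = '/data/mms'
print(os.environ['MMS_DATA_DIR'])  # → /data/mms
```

Editing `local_data_dir` in the `CONFIG` dictionary in `mms_config.py` achieves the same thing for a single installation.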
### Load routine options
* `trange`: two-element array specifying the time range in UT; accepts a wide variety of formats, including datetime objects
* `probe`: spacecraft probe # (int, str or list of ints or strs)
* `level`: data level (str or list of strs)
* `data_rate`: instrument data rate (str or list of strs)
* `datatype`: instrument datatype (str or list of strs)
* `time_clip`: if `True`, time clip the data after loading the pyTplot variables
* `varformat`: limit the variables loaded from the CDF files (str)
* `get_support_data`: if `True`, load `support_data` data from the CDF files
* `suffix`: append a suffix to the loaded variables (str)
* `available`: if `True`, only return a list of available data files from the SDC (no downloading)
* `no_update`: if `True`, only load local or mirror data
* `notplot`: if `True`, return the CDF data in `numpy` arrays instead of creating pyTplot variables
* `cdf_version`: load only a specific CDF version (str)
* `min_version`: set a minimum CDF version to load (str)
* `latest_version`: if `True`, only load the latest CDF version (i.e., X.Y.Z have to match)
* `major_version`: if `True`, load the latest major CDF version (i.e., latest X in X.Y.Z)
* `center_measurement`: (FPI and HPCA only) if `True`, center the measurement to the middle of the accumulation interval
Notes:
- all options have defaults (e.g., probe 1, fast survey L2 data are loaded by default)
- some load routines have additional options; all should be documented in the docstrings - use `help` to see all options
## Ephemeris/Coordinates data
https://lasp.colorado.edu/mms/sdc/public/datasets/mec/
To get started, load the MMS1 spacecraft position and velocity data in GSE coordinates for October 16, 2015
```
mec_vars = mec(trange=['2015-10-16', '2015-10-17'], suffix='_suffix')
```
To list the variables that were loaded, use `tplot_names`, e.g.,
```
from pytplot import tplot_names
tnames = tplot_names()
```
Now plot the position and velocity
```
tplot(['mms1_mec_r_gse_suffix', 'mms1_mec_v_gse_suffix'])
```
Use `get_data` to take the data out of a pyTplot variable and store it into numpy arrays
```
from pytplot import get_data
times, pos_values = get_data('mms1_mec_r_gse_suffix')
```
The `times` values are unix times
```
times[0:5]
```
and the position values are in km
```
pos_values[0, :]
```
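Since the unix times are seconds since the 1970-01-01 UTC epoch, they can be converted to readable timestamps with the standard library alone; a small sketch (the value below is the start of the interval loaded above):

```python
from datetime import datetime, timezone

t = 1444953600.0  # 2015-10-16 00:00:00 UTC as a unix time
print(datetime.fromtimestamp(t, tz=timezone.utc).isoformat())  # → 2015-10-16T00:00:00+00:00
```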
Set the `xarray` keyword to `True` to return the internal `xarray` object instead
```
pos_xr = get_data('mms1_mec_r_gse_suffix', xarray=True)
pos_xr
```
Use `store_data` to create a pyTplot variable using numpy arrays
```
from pytplot import store_data
store_data('gse_pos_var', data={'x': times, 'y': pos_values/6371.2})
tplot('gse_pos_var')
```
Use `options` to set plot metadata
```
from pytplot import options
options('gse_pos_var', 'legend_names', ['X (Re)', 'Y (Re)', 'Z (Re)'])
options('gse_pos_var', 'ytitle', 'MMS1 position in GSE')
tplot('gse_pos_var')
```
### Orbit plots
To create an orbit plot using `matplotlib`:
```
from pyspedas.mms.mms_orbit_plot import mms_orbit_plot
mms_orbit_plot(trange=['2015-10-16', '2015-10-17'])
```
## FIELDS data
https://lasp.colorado.edu/mms/sdc/public/datasets/fields/
```
from pyspedas.mms import fgm, scm, edp, edi, dsp
```
### FGM data
Note: by default, FGM data flagged by the `flag` variable are set to NaN automatically when the data are loaded. To keep these flagged data, set the keyword `keep_flagged` to `True`
```
fgm_vars = fgm(probe=4,
trange=['2015-10-16/13:06', '2015-10-16/13:07'],
data_rate='brst',
time_clip=True)
tplot(['mms4_fgm_b_gse_brst_l2', 'mms4_fgm_b_gsm_brst_l2'])
```
To load the burst mode GSE data from all 4 spacecraft at once:
```
fgm_vars = fgm(probe=[1, 2, 3, 4],
trange=['2015-10-30/05:15:45', '2015-10-30/05:15:48'],
data_rate='brst',
time_clip=True,
varformat='*_gse_*')
tplot(['mms1_fgm_b_gse_brst_l2',
'mms2_fgm_b_gse_brst_l2',
'mms3_fgm_b_gse_brst_l2',
'mms4_fgm_b_gse_brst_l2'])
```
### Curlometer technique
Note:
* Based on the `mms_curl` code in IDL SPEDAS by Jonathan Eastwood and Tai Phan
* For more info on this method, see: Chanteur, G., Spatial Interpolation for Four Spacecraft: Theory, Chapter 14 of Analysis methods for multi-spacecraft data, G. Paschmann and P. W. Daly (Eds.) ISSI Scientific Report SR-001.
You can apply the curlometer technique using the `curlometer` function
```
from pyspedas.mms import curlometer
```
You'll need to provide the position and B-field variables in GSE coordinates
```
positions = ['mms1_fgm_r_gse_brst_l2', 'mms2_fgm_r_gse_brst_l2', 'mms3_fgm_r_gse_brst_l2', 'mms4_fgm_r_gse_brst_l2']
fields = ['mms1_fgm_b_gse_brst_l2', 'mms2_fgm_b_gse_brst_l2', 'mms3_fgm_b_gse_brst_l2', 'mms4_fgm_b_gse_brst_l2']
curlometer_vars = curlometer(fields=fields, positions=positions)
curlometer_vars
tplot(['jtotal', 'jperp', 'jpar'])
```
### SCM data
To load the Search-coil Magnetometer data:
```
scm_vars = scm(trange=['2015-10-16', '2015-10-16/03:00'],
time_clip=True)
scm_vars
tplot('mms1_scm_acb_gse_scsrvy_srvy_l2')
```
### Dynamic power spectra of SCM data
```
from pyspedas import tdpwrspc
ps = tdpwrspc('mms1_scm_acb_gse_scsrvy_srvy_l2', nshiftpoints=512, nboxpoints=512, binsize=1)
ps
tplot(ps)
```
### EDP data
To load the electric field data:
```
e_vars = edp(probe=4,
trange=['2015-10-16/13:06', '2015-10-16/13:07'],
data_rate='brst',
time_clip=True)
tplot('mms4_edp_dce_gse_brst_l2')
```
To load the spacecraft potential data, set the datatype to `scpot`
```
scpot_vars = edp(probe=4,
datatype='scpot',
trange=['2015-10-16/13:06', '2015-10-16/13:07'],
data_rate='brst',
time_clip=True)
tplot('mms4_edp_scpot_brst_l2')
```
### EDI data
Load and plot data from the Electron Drift Instrument:
```
edrift_vars = edi(probe=4,
trange=['2015-10-16', '2015-10-17'])
tplot(['mms4_edi_vdrift_gse_srvy_l2', 'mms4_edi_vdrift_gsm_srvy_l2'])
```
### DSP data
Load and plot the omni-directional electric and magnetic spectral density data from the Digital Signal Processor:
```
dsp_vars = dsp(probe=4,
data_rate='fast',
datatype=['bpsd', 'epsd'],
trange=['2015-10-16', '2015-10-17'])
tplot(['mms4_dsp_epsd_omni', 'mms4_dsp_bpsd_omni_fast_l2'])
```
## ASPOC data
Load and plot the ASPOC ion beam currents
https://lasp.colorado.edu/mms/sdc/public/datasets/aspoc/
```
from pyspedas.mms import aspoc
aspoc_vars = aspoc(probe=4,
trange=['2015-10-16', '2015-10-17'])
tplot(['mms4_aspoc_ionc', 'mms4_asp1_ionc', 'mms4_asp2_ionc'])
```
## EPD (FEEPS/EIS) data
https://lasp.colorado.edu/mms/sdc/public/datasets/epd/
```
from pyspedas.mms import feeps, eis
```
### FEEPS data
Load and plot data from the Fly's Eye Energetic Particle Sensor:
Notes:
* Sun contamination is removed from FEEPS spectrograms
* FEEPS integral channels are removed from the telescope spectrogram data and included in their own tplot variables
* Flat field corrections are applied for ion data
* Bad eyes and energy channels are set to NaN
* FEEPS omni-directional spectrograms are calculated from the individual telescope data
* FEEPS spin-averaged spectrograms are calculated
* Pitch angle distributions can be calculated with `mms_feeps_pad`
```
feeps_vars = feeps(probe=4,
trange=['2015-10-16', '2015-10-17'])
tplot(['mms4_epd_feeps_srvy_l2_electron_intensity_omni_spin',
'mms4_epd_feeps_srvy_l2_electron_intensity_omni'])
```
### FEEPS pitch angle distributions
```
from pyspedas import mms_feeps_pad
pad_vars = mms_feeps_pad(probe=4)
pad_vars
tplot(['mms4_epd_feeps_srvy_l2_electron_intensity_70-600keV_pad_spin',
'mms4_epd_feeps_srvy_l2_electron_intensity_70-600keV_pad'])
```
Limit the energy range by setting the `energy` keyword
```
pad_vars = mms_feeps_pad(energy=[200, 500], probe=4)
tplot(['mms4_epd_feeps_srvy_l2_electron_intensity_200-500keV_pad_spin',
'mms4_epd_feeps_srvy_l2_electron_intensity_200-500keV_pad'])
```
### EIS data
Load and plot data from the Energetic Ion Spectrometer:
Notes:
* EIS omni-directional spectrograms are calculated from the individual telescope data
* EIS spin-averaged spectrograms are calculated
* Pitch angle distributions can be calculated using the routine `mms_eis_pad`
```
eis_vars = eis(datatype=['extof', 'phxtof'],
probe=4,
trange=['2015-10-16', '2015-10-17'])
tplot(['mms4_epd_eis_phxtof_proton_flux_omni_spin',
'mms4_epd_eis_phxtof_proton_flux_omni'])
tplot(['mms4_epd_eis_phxtof_oxygen_flux_omni_spin',
'mms4_epd_eis_phxtof_oxygen_flux_omni'])
tplot(['mms4_epd_eis_extof_proton_flux_omni',
'mms4_epd_eis_extof_oxygen_flux_omni',
'mms4_epd_eis_extof_alpha_flux_omni'])
tplot(['mms4_epd_eis_extof_proton_flux_omni_spin',
'mms4_epd_eis_extof_oxygen_flux_omni_spin',
'mms4_epd_eis_extof_alpha_flux_omni_spin'])
```
### EIS pitch angle distributions
```
from pyspedas import mms_eis_pad
pad_vars = mms_eis_pad(probe=4, datatype='extof')
pad_vars
tplot(['mms4_epd_eis_extof_80-524keV_proton_flux_omni_pad_spin',
'mms4_epd_eis_extof_80-524keV_proton_flux_omni_pad'])
pad_vars = mms_eis_pad(energy=[20, 60], probe=4, datatype='phxtof')
pad_vars
tplot(['mms4_epd_eis_phxtof_24-56keV_proton_flux_omni_pad_spin',
'mms4_epd_eis_phxtof_24-56keV_proton_flux_omni_pad'])
```
## Plasma (FPI/HPCA) data
```
from pyspedas.mms import fpi, hpca
```
### FPI data
https://lasp.colorado.edu/mms/sdc/public/datasets/fpi/
Load the electron and ion moments data for all 4 spacecraft:
Note: set the `center_measurement` keyword to `True` to center the measurement to the middle of the accumulation interval
```
fpi_data = fpi(center_measurement=True,
datatype=['des-moms', 'dis-moms'],
probe=[1, 2, 3, 4],
trange=['2015-10-16/06:00', '2015-10-16/14:00'],
time_clip=True)
tplot(['mms4_des_energyspectr_omni_fast', 'mms4_dis_energyspectr_omni_fast'])
tplot(['mms4_des_numberdensity_fast',
'mms4_des_bulkv_gse_fast',
'mms4_des_pitchangdist_miden_fast'])
tplot(['mms1_des_energyspectr_omni_fast',
'mms2_des_energyspectr_omni_fast',
'mms3_des_energyspectr_omni_fast',
'mms4_des_energyspectr_omni_fast'])
tplot(['mms1_dis_energyspectr_omni_fast',
'mms2_dis_energyspectr_omni_fast',
'mms3_dis_energyspectr_omni_fast',
'mms4_dis_energyspectr_omni_fast'])
```
Load electron distribution data for probe 4:
```
fpi_data = fpi(center_measurement=True,
datatype='des-dist',
probe='4',
trange=['2015-10-16/06:00', '2015-10-16/14:00'],
time_clip=True)
```
Note: `get_data` allows you to return the xarray object with the xarray keyword
```
fpi_dist = get_data('mms4_des_dist_fast', xarray=True)
fpi_dist
```
### HPCA data
https://lasp.colorado.edu/mms/sdc/public/datasets/hpca/
Load the ion moments data for probe 1:
Note: set the `center_measurement` keyword to `True` to center the measurement to the middle of the accumulation interval
```
hpca_data = hpca(center_measurement=True,
datatype='moments',
trange=['2015-10-16/06:00', '2015-10-16/14:00'],
time_clip=True)
tplot(['mms1_hpca_hplus_number_density',
'mms1_hpca_hplus_scalar_temperature',
'mms1_hpca_hplus_ion_bulk_velocity'])
tplot(['mms1_hpca_hplus_ion_bulk_velocity',
'mms1_hpca_oplus_ion_bulk_velocity',
'mms1_hpca_heplus_ion_bulk_velocity',
'mms1_hpca_heplusplus_ion_bulk_velocity'])
```
Load the ion flux and PSD data for probe 1:
```
hpca_data = hpca(center_measurement=True,
datatype='ion',
trange=['2015-10-16/06:00', '2015-10-16/14:00'],
time_clip=True)
```
To calculate the omni-directional energy spectra:
```
from pyspedas import mms_hpca_calc_anodes, mms_hpca_spin_sum
not_spinavgd = mms_hpca_calc_anodes(probe='1', fov=[0, 360])
```
The data must be spin-summed (or averaged) to obtain the full omni-directional data product. To average the data instead of summing, set the `avg` keyword to `True`
```
spinavgd = mms_hpca_spin_sum(probe='1', avg=True)
spinavgd
tplot(spinavgd)
```
To return the xarray object containing the HPCA H+ PSD:
```
hpca_dist = get_data('mms1_hpca_hplus_phase_space_density', xarray=True)
hpca_dist
```
## Appendix 1: trange formats
```
from pyspedas.mms import fgm # we'll use FGM for examples
```
### strings
The `trange` keyword accepts a wide variety of date/time formats. These are assumed to be in UT.
```
trange = ['October 16, 2015', 'October 17, 2015']
bvars = fgm(trange=trange, time_clip=True, suffix='_trange')
tplot('mms1_fgm_b_gse_srvy_l2_trange')
trange = ['Oct 16, 2015 at 13:06', 'Oct 16, 2015 at 13:07']
bvars = fgm(trange=trange, data_rate='brst', time_clip=True, suffix='_trange')
tplot('mms1_fgm_b_gse_brst_l2_trange')
```
### datetime objects
If you set the trange with datetime objects, be sure to set the time zone.
```
from datetime import datetime
from datetime import timezone as tz
start_time = datetime(year=2015, month=10, day=16, hour=13, minute=6, tzinfo=tz.utc)
end_time = datetime(year=2015, month=10, day=16, hour=13, minute=7, tzinfo=tz.utc)
bvars = fgm(trange=[start_time, end_time], probe='4', data_rate='brst')
```
## Appendix 2: finding help on keywords
```
help(fgm)
help(mms_feeps_pad)
help(tplot)
help(options)
```
<img src="ine_400x141.jpg" width="200" height="200" align="right"/>
## <left>DERFE-Dirección de Estadística</left>
# <center>A crash course on Data Science with Python</center>
## <center>Motivation</center>
### <center>Part I: Data Science</center>

### <center>Part II: Become an expert in Data Science</center>

### <center>Part III: What language to choose? Python vs R</center>
<img src="Data science wars Python vs R.jpg" width="1000" height="1000" align="center"/>
### Homework:
- Python's history
https://en.wikipedia.org/wiki/History_of_Python
- Basics of Python
https://www.learnpython.org/
https://realpython.com/jupyter-notebook-introduction/
- Choosing R or Python for Data Analysis? An Infographic
https://www.datacamp.com/community/tutorials/r-or-python-for-data-analysis#gs.nrBsDZQ
- Why become a data scientist?
https://365datascience.com/defining-data-science/
- Comparisons between languages
https://www.codingame.com/blog/best-programming-language-learn-2019/
https://towardsdatascience.com/data-science-101-is-python-better-than-r-b8f258f57b0f
https://www.tiobe.com/tiobe-index/
http://pypl.github.io/PYPL.html
https://redmonk.com/sogrady/2018/03/07/language-rankings-1-18/
https://octoverse.github.com/projects#languages
- Visualizations in Python
https://towardsdatascience.com/5-quick-and-easy-data-visualizations-in-python-with-code-a2284bae952f
## <center>Introduction to Data Science with Python</center>
### <center>Part I: Learning path</center>
<img src="final-1.jpg" width="1000" height="1000" align="center"/>
### Homework:
#### Installation of Python/Anaconda/Jupyter notebook
https://www.anaconda.com/
https://www.freecodecamp.org/news/how-to-get-started-with-python-for-deep-learning-and-data-science-3bed07f91a08/
https://www.w3schools.com/python/default.asp
#### Python basic syntax
https://jupyter.org/index.html
#### Books
- How to Think Like a Computer Scientist: Learning with Python 3 Documentation
- Data Science Essentials in Python
- Introduction to Machine Learning with Python
- Think Python
- Data Science from Scratch
Check the following link: https://inemexico-my.sharepoint.com/:f:/g/personal/miguel_alvarez_ine_mx/EkVbN-eSMI5FpX1NDprCReMBJHPCbxzOCppSUtP79dCsKg?e=JVp4FE
### <center>Part II: Jupyter notebooks</center>
### Launching the Jupyter Notebook
To run the Jupyter Notebook, open an OS terminal, go to ~/minibook/ (or into the directory where you've downloaded the book's notebooks), and type `jupyter notebook`. This will start the Jupyter server and open a new window in your browser (if that's not the case, go to the following URL: http://localhost:8888).
The Notebook is most convenient when you start a complex analysis project that will involve a substantial amount of interactive experimentation with your code. Other common use-cases include keeping track of your interactive session (like a lab notebook), or writing technical documents that involve code, equations, and figures.
In the rest of this section, we will focus on the Notebook interface.
### The Notebook dashboard
The dashboard contains several tabs:
- Files shows all files and notebooks in the current directory.
- Running shows all kernels currently running on your computer.
- Clusters lets you launch kernels for parallel computing (covered in Chapter 5, High-Performance Computing).
A notebook is an interactive document containing code, text, and other elements. A notebook is saved in a file with the .ipynb extension. This file is a plain text file storing a JSON data structure.
A kernel is a process running an interactive session. When using IPython, this kernel is a Python process. There are kernels in many languages other than Python.
In Jupyter, notebooks and kernels are strongly separated. A notebook is a file, whereas a kernel is a process. The kernel receives snippets of code from the Notebook interface, executes them, and sends the outputs and possible errors back to the Notebook interface. Thus, in general, the kernel has no notion of Notebook. A notebook is persistent (it's a file), whereas a kernel may be closed at the end of an interactive session and it is therefore not persistent. When a notebook is re-opened, it needs to be re-executed.
In general, no more than one Notebook interface can be connected to a given kernel. However, several IPython consoles can be connected to a given kernel.
### The Notebook user interface
To create a new notebook, click on the New button, and select Notebook (Python 3). A new browser tab opens and shows the Notebook interface:
```
from IPython.display import Image
Image(filename='nbui-2.png')
```
Here are the main components of the interface, from top to bottom:
The notebook name, that you can change by clicking on it. This is also the name of the .ipynb file.
The menu bar gives you access to several actions pertaining to either the notebook or the kernel.
To the right of the menu bar is the Kernel name. You can change the kernel language of your notebook from the Kernel menu. We will see in Chapter 6, Customizing IPython how to manage different kernel languages.
The toolbar contains icons for common actions. In particular, the dropdown menu showing Code lets you change the type of a cell.
Below is the main component of the UI: the actual Notebook. It consists of a linear list of cells. We will detail below the structure of a cell.
### Structure of a notebook cell
There are two main types of cells: Markdown cells and code cells.
A Markdown cell contains rich text. In addition to classic formatting options like bold or italics, we can add links, images, HTML elements, LaTeX mathematical equations, and more. We will cover Markdown in more detail in the Ten Jupyter/IPython essentials section of this chapter.
A code cell contains code to be executed by the kernel. The programming language corresponds to the kernel's language. We will only use Python in this book, but you can use many other languages.
You can change the type of a cell by first clicking on a cell to select it, and then choosing the cell's type in the toolbar's dropdown menu showing Markdown or Code.
#### Markdown cells
Here is a screenshot of a Markdown cell:
```
Image(filename='markdown-both.png')
```
The top panel shows the cell in edit mode, while the bottom one shows it in render mode. The edit mode lets you edit the text, while the render mode lets you display the rendered cell. We will explain the differences between these modes in greater detail below.
#### Code cells
Here is a screenshot of a complex code cell:

This code cell contains several parts:
- The prompt number shows the cell's number. This number increases every time you run the cell. Since you can run cells of a notebook out of order, nothing guarantees that prompt numbers increase linearly in a given notebook.
- The input area contains a multiline text editor that lets you write one or several lines of code with syntax highlighting.
- The widget area may contain graphical controls; here, it displays a slider.
- The output area can contain multiple outputs, here:
- Standard output (text in black)
- Error output (text with a red background)
- Rich output (an HTML table and an image here)
### Running Code
First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefore runs Python code.
Code cells allow you to enter and run code
Run a code cell using Shift-Enter or pressing the button in the toolbar above:
```
print("Hello world!")
a = 10
a
```
There are two other keyboard shortcuts for running code:
- Alt-Enter runs the current cell and inserts a new one below.
- Ctrl-Enter runs the current cell and enters command mode.
### Cell menu
The "Cell" menu has a number of menu items for running code in different ways. These includes:
Run and Select Below
Run and Insert Below
Run All
Run All Above
Run All Below
```
from IPython.display import HTML, YouTubeVideo
HTML('''
<table style="border: 2px solid black;">
''' +
''.join(['<tr>' +
''.join([f'<td>{row},{col}</td>'
for col in range(5)]) +
'</tr>' for row in range(5)]) +
'''
</table>
''')
YouTubeVideo('VQBZ2MqWBZI')
```
### <center>Part III: Python Basics</center>
### Variables
Let's use Python as a calculator. Here, 2 * 2 is an expression statement. This operation is performed, the result is returned, and IPython displays it in the notebook cell's output.
TIP (Division): In Python 3, 3 / 2 returns 1.5 (floating-point division), whereas it returns 1 in Python 2 (integer division). This can be a source of errors when porting Python 2 code to Python 3. It is recommended to always use the explicit 3.0 / 2.0 for floating-point division (by using floating-point numbers) and 3 // 2 for integer division. Both syntaxes work in Python 2 and Python 3. See http://python3porting.com/differences.html#integer-division for more details.
```
2 * 2
3 // 2
3/2
```
Other built-in mathematical operators include +, -, ** for the exponentiation, and others. You will find more details at https://docs.python.org/3/reference/expressions.html#the-power-operator.
Variables form a fundamental concept of any programming language. A variable has a name and a value. Here is how to create a new variable in Python:
```
a = 2
```
And here is how to use an existing variable:
```
a * 3
```
Several variables can be defined at once (this is called unpacking):
```
a, b = 2, 6
```
There are different types of variables. Here, we have used a number (more precisely, an integer). Other important types include floating-point numbers to represent real numbers, strings to represent text, and booleans to represent True/False values. Here are a few examples:
```
somefloat = 3.1415 # The "dot" character represents the radix point.
sometext = 'pi is about' # Use single or double quotes for strings.
print(sometext, somefloat) # This displays several variables concatenated.
```
Note how we used the # character to write comments. Whereas Python discards the comments completely, adding comments in the code is important when the code is to be read by other humans (including yourself in the future).
### String escaping
String escaping refers to the ability to insert special characters in a string. For example, how can you insert ' and ", given that these characters are used to delimit a string in Python code? The backslash \ is the go-to escape character in Python (and in many other languages too). Here are a few examples:
```
print("Hello \"world\"")
print("A list:\n* item 1\n* item 2")
print("C:\\path\\on\\windows")
print(r"C:\path\on\windows")
```
The special character \n is the new line (or line feed) character. To insert a backslash, you need to escape it, which explains why it needs to be doubled as \\.
You can also disable escaping by using raw literals with a r prefix before the string, like in the last example above. In this case, backslashes are considered as normal characters.
This is convenient when writing Windows paths, since Windows uses backslash separators instead of forward slashes like on Unix systems. A very common error on Windows is forgetting to escape backslashes in paths: writing "C:\path" may lead to subtle errors.
You will find the list of special characters in Python at https://docs.python.org/3.4/reference/lexical_analysis.html#string-and-bytes-literals
### Lists
A list contains a sequence of items. You can concisely instruct Python to perform repeated actions on the elements of a list. Let's first create a list of numbers:
```
items = [1, 3, 0, 4, 1]
```
Note the syntax we used to create the list: square brackets [], and commas , to separate the items.
The built-in function len() returns the number of elements in a list:
```
len(items)
```
Python comes with a set of built-in functions, including print(), len(), max(), functional routines like filter() and map(), and container-related routines like all(), any(), range() and sorted(). You will find the full list of built-in functions at https://docs.python.org/3.4/library/functions.html.
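A few of these built-ins in action, using the `items` list from above:

```python
items = [1, 3, 0, 4, 1]
print(max(items))                   # largest element: 4
print(sorted(items))                # a new sorted list: [0, 1, 1, 3, 4]
print(any(x == 0 for x in items))   # True: at least one element is zero
print(all(x >= 0 for x in items))   # True: no element is negative
```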
Now, let's compute the sum of all elements in the list. Python provides a built-in function for this:
```
sum(items)
```
We can also access individual elements in the list, using the following syntax:
```
items[0]
items[-1]
```
Note that indexing starts at 0 in Python: the first element of the list is indexed by 0, the second by 1, and so on. Also, -1 refers to the last element, -2, to the penultimate element, and so on.
The same syntax can be used to alter elements in the list:
```
items[1] = 9
items
```
We can access sublists with the following syntax:
Here, 1:3 represents a slice going from element 1 included (this is the second element of the list) to element 3 excluded. Thus, we get a sublist with the second and third elements of the original list. The first-included/last-excluded asymmetry leads to an intuitive treatment of overlaps between consecutive slices. Also, note that slicing a list returns a new list (a copy), not a view: changing elements in the sublist does not affect the original list.
```
items[1:3]
```
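A quick check that a slice is an independent copy — modifying the sublist leaves the original untouched:

```python
items = [1, 9, 0, 4, 1]
sub = items[1:3]      # [9, 0] — a new list
sub[0] = 99           # modify the copy...
print(items)          # ...the original is unchanged: [1, 9, 0, 4, 1]
print(sub)            # [99, 0]
```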
Python provides several other types of containers:
Tuples are immutable and contain a fixed number of elements:
```
my_tuple = (1, 2, 3)
my_tuple[1]
```
Dictionaries contain key-value pairs. They are extremely useful and common:
```
my_dict = {'a': 1, 'b': 2, 'c': 3}
print('a:', my_dict['a'])
print(my_dict.keys())
```
Historically, there was no notion of order in a dictionary; since Python 3.7, however, dictionaries preserve insertion order. The collections module additionally provides an OrderedDict structure with a few extra reordering methods (see https://docs.python.org/3.4/library/collections.html).
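A short illustration (assuming Python 3.7 or later for the plain-dict behavior):

```python
from collections import OrderedDict

d = {'b': 2, 'a': 1}
print(list(d))        # ['b', 'a'] — plain dicts keep insertion order (3.7+)

od = OrderedDict([('b', 2), ('a', 1)])
od.move_to_end('b')   # OrderedDict adds reordering helpers
print(list(od))       # ['a', 'b']
```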
Sets, like mathematical sets, contain distinct elements:
```
my_set = set([1, 2, 3, 2, 1])
my_set
```
A Python object is mutable if its value can change after it has been created. Otherwise, it is immutable. For example, a string is immutable; to change it, a new string needs to be created. A list, a dictionary, or a set is mutable; elements can be added or removed. By contrast, a tuple is immutable, and it is not possible to change the elements it contains without recreating the tuple. See https://docs.python.org/3.4/reference/datamodel.html for more details.
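To see the difference, try mutating each container in place:

```python
my_list = [1, 2, 3]
my_list[0] = 10            # lists are mutable: in-place assignment works
print(my_list)             # [10, 2, 3]

my_tuple = (1, 2, 3)
try:
    my_tuple[0] = 10       # tuples are immutable: this raises TypeError
except TypeError as e:
    print("cannot modify a tuple:", e)
```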
### Loops
We can run through all elements of a list using a for loop:
```
for item in items:
print(item)
```
There are several things to note here:
- The for item in items syntax means that a temporary variable named item is created at every iteration. This variable contains the value of every item in the list, one at a time.
- Note the colon : at the end of the for statement. Forgetting it will lead to a syntax error!
- The statement print(item) will be executed for all items in the list.
- Note the four spaces before print: this is called the indentation. You will find more details about indentation in the next subsection.
Python supports a concise syntax to perform a given operation on all elements of a list:
```
squares = [item * item for item in items]
squares
```
This is called a list comprehension. A new list is created here; it contains the squares of all numbers in the list. This concise syntax leads to highly readable and Pythonic code.
### Indentation
Indentation refers to the spaces that may appear at the beginning of some lines of code. This is a particular aspect of Python's syntax.
In most programming languages, indentation is optional and is generally used to make the code visually clearer. But in Python, indentation also has a syntactic meaning. Particular indentation rules need to be followed for Python code to be correct.
In general, there are two ways to indent some text: by inserting a tab character (also referred to as \t), or by inserting a number of spaces (typically, four). It is recommended to use spaces instead of tab characters. Your text editor should be configured such that the Tab key on the keyboard inserts four spaces instead of a tab character.
In the Notebook, indentation is automatically configured properly; so you shouldn't worry about this issue. The question only arises if you use another text editor for your Python code.
Finally, what is the meaning of indentation? In Python, indentation delimits coherent blocks of code, for example, the contents of a loop, a conditional branch, a function, and other objects. Where other languages such as C or JavaScript use curly braces to delimit such blocks, Python uses indentation.
### Conditional branches
Sometimes, you need to perform different operations on your data depending on some condition. For example, let's display all even numbers in our list:
```
for item in items:
if item % 2 == 0:
print(item)
```
Again, here are several things to note:
- An if statement is followed by a boolean expression.
- If a and b are two integers, the modulo operation a % b returns the remainder of the division of a by b. Here, item % 2 is 0 for even numbers, and 1 for odd numbers.
- The equality is represented by a double equal sign == to avoid confusion with the assignment operator = that we use when we create variables.
- Like with the for loop, the if statement ends with a colon :.
- The part of the code that is executed when the condition is satisfied follows the if statement. It is indented. Indentation is cumulative: since this if is inside a for loop, there are eight spaces before the print(item) statement.
Python supports a concise syntax to select all elements in a list that satisfy certain properties. Here is how to create a sublist with only even numbers:
```
even = [item for item in items if item % 2 == 0]
even
```
This is also a form of list comprehension.
### Functions
Code is typically organized into functions. A function encapsulates part of your code. Functions allow you to reuse bits of functionality without copy-pasting the code. Here is a function that tells whether an integer number is even or not:
```
def is_even(number):
"""Return whether an integer is even or not."""
return number % 2 == 0
```
There are several things to note here:
- A function is defined with the def keyword.
- After def comes the function name. A general convention in Python is to only use lowercase characters, and separate words with an underscore _. A function name generally starts with a verb.
- The function name is followed by parentheses, with one or several variable names called the arguments. These are the inputs of the function. There is a single argument here, named number.
- No type is specified for the argument. This is because Python is dynamically typed; you could pass a variable of any type. This function would work fine with floating point numbers, for example (the modulo operation works with floating point numbers in addition to integers).
- The body of the function is indented (and note the colon : at the end of the def statement).
- There is a docstring wrapped by triple quotes """. This is a particular form of comment that explains what the function does. It is not mandatory, but it is strongly recommended to write docstrings for the functions exposed to the user.
- The return keyword in the body of the function specifies the output of the function. Here, the output is a Boolean, obtained from the expression number % 2 == 0. It is possible to return several values; just use a comma to separate them (in this case, a tuple of Booleans would be returned).
Once a function is defined, it can be called like this:
```
is_even(3)
is_even(4)
```
Here, 3 and 4 are successively passed as arguments to the function.
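The earlier note about returning several values can be illustrated with a small helper (`min_max` is a hypothetical name, not from the original example):

```python
def min_max(values):
    """Return the smallest and largest elements as a tuple."""
    return min(values), max(values)

lo, hi = min_max([1, 3, 0, 4, 1])  # tuple unpacking on the caller's side
print(lo, hi)  # 0 4
```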
### Positional and keyword arguments
A Python function can accept an arbitrary number of arguments, called positional arguments. It can also accept optional named arguments, called keyword arguments. Here is an example:
```
def remainder(number, divisor=2):
return number % divisor
```
The second argument of this function, divisor, is optional. If it is not provided by the caller, it will default to the number 2, as shown here:
```
remainder(5)
```
There are two equivalent ways of specifying the divisor when calling the function — positionally, or by name:
```
remainder(5, 3)
remainder(5, divisor=3)
```
In the first case, 3 is understood as the second argument, divisor. In the second case, the name of the argument is given explicitly by the caller. This second syntax is clearer and less error-prone than the first one.
Functions can also accept arbitrary sets of positional and keyword arguments, using the following syntax:
```
def f(*args, **kwargs):
print("Positional arguments:", args)
print("Keyword arguments:", kwargs)
f(1, 2, c=3, d=4)
```
Inside the function, args is a tuple containing positional arguments, and kwargs is a dictionary containing keyword arguments.
### Passage by assignment
When passing a parameter to a Python function, a reference to the object is actually passed (passage by assignment):
- If the passed object is mutable, it can be modified by the function.
- If the passed object is immutable, it cannot be modified by the function.
Here is an example:
```
my_list = [1, 2]
def add(some_list, value):
some_list.append(value)
add(my_list, 3)
my_list
```
The function add() modifies an object defined outside it (in this case, the object my_list); we say this function has side-effects. A function with no side-effects is called a pure function: it doesn't modify anything in the outer context, and it deterministically returns the same result for any given set of inputs. Pure functions are to be preferred over functions with side-effects.
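For comparison, here is a sketch of a side-effect-free variant of `add()` (not part of the original example): it builds and returns a new list instead of mutating its argument.

```python
def add_pure(some_list, value):
    """Pure variant: return a new list, leaving the input untouched."""
    return some_list + [value]

my_list = [1, 2]
new_list = add_pure(my_list, 3)
print(my_list)   # [1, 2] — the original is unchanged
print(new_list)  # [1, 2, 3]
```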
Knowing this can help you spot out subtle bugs. There are further related concepts that are useful to know, including function scopes, naming, binding, and more. Here are a couple of links:
- Passage by reference at https://docs.python.org/3/faq/programming.html#how-do-i-write-a-function-with-output-parameters-call-by-reference
- Naming, binding, and scope at https://docs.python.org/3.4/reference/executionmodel.html
### Errors
Let's discuss errors in Python. As you learn, you will inevitably come across errors and exceptions. Most of the time, the Python interpreter will tell you what the problem is and where it occurred. It is important to understand the vocabulary used by Python so that you can more quickly find and correct your errors.
Let's see an example:
```
def divide(a, b):
return a / b
divide(1, 0)
```
Here, we defined a divide() function, and called it to divide 1 by 0. Dividing a number by 0 is an error in Python. Here, a ZeroDivisionError exception was raised. An exception is a particular type of error that can be raised at any point in a program. It is propagated from the innards of the code up to the command that launched the code. It can be caught and processed at any point. You will find more details about exceptions at https://docs.python.org/3/tutorial/errors.html, and common exception types at https://docs.python.org/3/library/exceptions.html#bltin-exceptions.
The error message you see contains the stack trace and the exception's type and message. The stack trace shows all functions calls between the raised exception and the script calling point.
The top frame, indicated by the first arrow ---->, shows the entry point of the code execution. Here, it is divide(1, 0) which was called directly in the Notebook. The error occurred while this function was called.
The next and last frame is indicated by the second arrow. It corresponds to line 2 in our function divide(a, b). It is the last frame in the stack trace: this means that the error occurred there.
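Since an exception can be caught at any point, wrapping the call in a `try`/`except` block lets the program recover instead of stopping (the fallback value here is just an illustration):

```python
def divide(a, b):
    return a / b

try:
    result = divide(1, 0)
except ZeroDivisionError:
    result = float('inf')   # substitute a fallback value for the bad division
print(result)  # inf
```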
### Object-oriented programming
Object-oriented programming (or OOP) is a relatively advanced topic. Although we won't use it much in this book, it is useful to know the basics. Also, mastering OOP is often essential when you start to have a large code base.
In Python, everything is an object: numbers, strings, and functions are all objects. An object is an instance of a type (also known as a class). An object has attributes and methods, as specified by its type. An attribute is a variable bound to an object, giving some information about it. A method is a function that applies to the object.
For example, the object 'hello' is an instance of the built-in str type (string). The type() function returns the type of an object, as shown here:
```
type('hello')
```
There are native types, like str or int (integer), and custom types, also called classes, that can be created by the user.
In IPython, you can discover the attributes and methods of any object with the dot syntax and tab completion. For example, typing 'hello'.u and pressing Tab automatically shows us the existence of the upper() method:
```
'hello'.upper()
```
Here, upper() is a method available to all str objects; it returns an uppercase copy of a string.
A useful string method is format(). This simple and convenient templating system lets you generate strings dynamically:
```
'Hello {0:s}!'.format('Python')
```
The {0:s} syntax means "replace this with the first argument of format() which should be a string". The variable type after the colon is especially useful for numbers, where you can specify how to display the number (for example, .3f to display three decimals). The 0 makes it possible to replace a given value several times in a given string. You can also use a name instead of a position, for example 'Hello {name}!'.format(name='Python').
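A few format specifications in action (the f-string form, available since Python 3.6, is equivalent):

```python
import math

print('{0:.3f}'.format(math.pi))            # '3.142' — three decimals
print('{0} squared is {1}'.format(7, 49))   # positional replacement
print('Hello {name}!'.format(name='Python'))
print(f'pi is about {math.pi:.2f}')         # f-string equivalent
```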
Some methods are prefixed with an underscore _; they are private and are generally not meant to be used directly. IPython's tab completion won't show you these private attributes and methods unless you explicitly type _ before pressing Tab.
In practice, the most important thing to remember is that appending a dot . to any Python object and pressing Tab in IPython will show you a lot of functionality pertaining to that object.
### Functional programming
Python is a multi-paradigm language; it notably supports imperative, object-oriented, and functional programming models. Python functions are objects and can be handled like other objects. In particular, they can be passed as arguments to other functions (functions that take other functions as arguments are called higher-order functions). This is the essence of functional programming.
Decorators provide a convenient syntax construct to define higher-order functions. Here is an example using the is_even() function from the previous Functions section:
```
def show_output(func):
def wrapped(*args, **kwargs):
output = func(*args, **kwargs)
print("The result is:", output)
return wrapped
```
The show_output() function transforms an arbitrary function func() to a new function, named wrapped(), that displays the result of the function:
```
f = show_output(is_even)
f(3)
```
Equivalently, this higher-order function can also be used with a decorator:
```
@show_output
def square(x):
return x * x
square(3)
```
You can find more information about Python decorators at https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators and at http://thecodeship.com/patterns/guide-to-python-function-decorators/.
### Going beyond the basics
You now know the fundamentals of Python, the bare minimum that you will need in this book. As you can imagine, there is much more to say about Python.
There are a few further basic concepts that are often useful and that we cannot cover here, unfortunately. You are highly encouraged to have a look at them in the references given at the end of this section:
- range and enumerate
- pass, break, and continue, to be used in loops
- working with files
- creating and importing modules
- the Python standard library provides a wide range of functionality (OS, network, file systems, compression, mathematics, and more)
Here are some slightly more advanced concepts that you might find useful if you want to strengthen your Python skills:
- regular expressions for advanced string processing
- lambda functions for defining small anonymous functions
- generators for controlling custom loops
- exceptions for handling errors
- with statements for safely handling contexts
- advanced object-oriented programming
- metaprogramming for modifying Python code dynamically
- the pickle module for persisting Python objects on disk and exchanging them across a network
Finally, here are a few references:
- Getting started with Python: https://www.python.org/about/gettingstarted/
- A Python tutorial: https://docs.python.org/3/tutorial/index.html
- The Python Standard Library: https://docs.python.org/3/library/index.html
- Interactive tutorial: http://www.learnpython.org/
- Codecademy Python course: http://www.codecademy.com/tracks/python
- Language reference (expert level): https://docs.python.org/3/reference/index.html
- Python Cookbook, by David Beazley and Brian K. Jones, O'Reilly Media (advanced level, highly recommended if you want to become a Python expert).
# <center>Examples with Python (under construction ...)</center>
## Exploring a dataset in the Notebook
### Provenance of the data
### Downloading and loading a dataset
```
import zipfile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#with zipfile.ZipFile("nyc_taxi.zip","r") as zip_ref:
# zip_ref.extractall("C:/Users/miguel.alvarez/Desktop/chapter1")
data_filename = 'nyc_data.csv'
fare_filename = 'nyc_fare.csv'
data = pd.read_csv(data_filename, parse_dates=['pickup_datetime',
'dropoff_datetime'])
fare = pd.read_csv(fare_filename, parse_dates=['pickup_datetime'])
data.head(3)
```
### Making plots with matplotlib
```
data.columns
p_lng = data.pickup_longitude
p_lat = data.pickup_latitude
d_lng = data.dropoff_longitude
d_lat = data.dropoff_latitude
p_lng
def lat_lng_to_pixels(lat, lng):
lat_rad = lat * np.pi / 180.0
lat_rad = np.log(np.tan((lat_rad + np.pi / 2.0) / 2.0))
x = 100 * (lng + 180.0) / 360.0
y = 100 * (lat_rad - np.pi) / (2.0 * np.pi)
return (x, y)
px, py = lat_lng_to_pixels(p_lat, p_lng)
px
plt.scatter(px, py)
plt.figure(figsize=(8, 6))
plt.scatter(px, py, s=.1, alpha=.03)
plt.axis('equal')
plt.xlim(29.40, 29.55)
plt.ylim(-37.63, -37.54)
plt.axis('off')
```
### Descriptive statistics with pandas and seaborn
```
px.count(), px.min(), px.max()
px.mean(), px.median(), px.std()
#!conda install seaborn -q -y
import seaborn as sns
sns.__version__
data.trip_distance.hist(bins=np.linspace(0., 10., 100))
```
### References
- https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-learn-data-science-python-scratch-2/
# Data science with IBM Planning Analytics
# Cubike example - Part 1
Cubike is a fictional Bike Sharing company that we use in the series of articles about Data Science with TM1 and Planning Analytics:
* [Part 1: Upload weather data from web services](https://code.cubewise.com/tm1py-help-content/upload-weather-data-from-web-services-into-planning-analytics)
* [Part 2: Explore your TM1 and Planning Analytics data with Pandas and Plotly](https://code.cubewise.com/tm1py-help-content/explore-you-tm1-planning-analytics-with-pandas-and-ploty)
* [Part 3: Timeseries Forecasting with Facebook Prophet](https://code.cubewise.com/tm1py-help-content/timeseries-forecasting-with-facebook-prophet-and-tm1-planning-analytics)
If you are new to TM1py, this article will guide you through setting up TM1py and all the modules required for the Cubike example.
**Note**: All the prerequisites to run this sample in your environment are defined in this article:
* [Setup cubike example](https://code.cubewise.com/tm1py-help-content/setup-cubike-example)
# Part 1: Upload weather data from web services into TM1/Planning Analytics
The objective in this first part is to upload weather data from the [National Oceanic and Atmospheric Administration (NOAA) Web service API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2).
## Step 1: Import libraries
The first step in the Jupyter notebook is to import the packages and define the constants we need:
* **requests**: library for HTTP / REST Request against Webservices
* **json**: standard library for JSON parsing and manipulation
```
import requests
import json
from TM1py import TM1Service
```
## Constants
We are pulling the weather data from the National Oceanic and Atmospheric Administration (NOAA). NOAA has a rich API which allows us to access all kinds of environmental data from the US.
<b>STATION</b> <a href="https://www.ncdc.noaa.gov/cdo-web/datasets/NORMAL_DLY/stations/GHCND:USW00014732/detail">GHCND:USW00014732</a> (40.7792°, -73.88°)
<b>FROM, TO</b> Timeframe
<b>HEADERS</b> Token for Authentication with the API
```
STATION = 'GHCND:USW00014732'
FROM, TO = '2017-01-01', '2017-01-04'
HEADERS = {"token": 'yyqEBOAbHVbtXkfAmZuPNfnSXvdfyhgn'}
```
## Step 2: Build URL for the Query
Build the parametrized URL and print it
```
url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' \
'datasetid=GHCND&' \
'startdate=' + FROM + '&' \
'enddate=' + TO + '&' \
'limit=1000&' \
'datatypeid=TMIN&' \
'datatypeid=TAVG&' \
'datatypeid=TMAX&' \
'stationid=' + STATION
print(url)
```
This is the URL we will get the data from.
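As an alternative to string concatenation, the repeated `datatypeid` keys can be encoded with the standard library (a sketch, not part of the original notebook; `requests.get` also accepts such a list of tuples directly via its `params` argument):

```
from urllib.parse import urlencode

params = [
    ('datasetid', 'GHCND'),
    ('startdate', '2017-01-01'),
    ('enddate', '2017-01-04'),
    ('limit', '1000'),
    ('datatypeid', 'TMIN'),
    ('datatypeid', 'TAVG'),
    ('datatypeid', 'TMAX'),
    ('stationid', 'GHCND:USW00014732'),
]
url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' + urlencode(params)
```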
## Step 3: Query Weather Data
Now that our URL is ready, we need to send the request to the API:
* Execute the URL against the NOAA API to get the results
* Pretty-print the first three items from the result set
```
response = requests.get(url, headers=HEADERS).json()
results = response["results"]
print(json.dumps(results[0:3], indent=2))
```
## Step 4: Rearrange Data
Before sending the data into TM1, we now need to rearrange the data so it matches our TM1 cube structure:
* Version = Actual
* Date = record['date'][0:10]
* City = NYC
* DataType = record['datatype']
```
cells = dict()
for record in results:
    value = record['value'] / 10
    coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
    cells[coordinates] = value
for coordinate, value in cells.items():
    print(coordinate, value)
```
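The rearrangement can be checked without calling the API by feeding it a mocked record (the values below are hypothetical, but shaped like a NOAA result item; the division by 10 matches the notebook's assumption that GHCND temperatures arrive in tenths of a degree):

```
# one record shaped like an item from the NOAA result set (hypothetical values)
record = {'date': '2017-01-01T00:00:00', 'datatype': 'TAVG', 'value': 15}

cells = dict()
value = record['value'] / 10
coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
cells[coordinates] = value
```

The resulting key `("Actual", "2017-01-01", "NYC", "TAVG")` maps to `1.5`, matching the Version/Date/City/DataType cube structure described above.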
## Step 5: Send Data to TM1
Now that the data is ready, we just need to connect to our TM1 instance and finally write the values into the TM1 cube "Weather Data".
First we need to get the authentication parameters of our TM1 instance which are stored in a config.ini file:
```
import configparser
config = configparser.ConfigParser()
config.read(r'..\..\config.ini')
```
With TM1py we can send data to a cube with two lines of code, as long as our cellset matches the dimensions of our cube:
```
with TM1Service(**config['tm1srv01']) as tm1:
tm1.cubes.cells.write_values("Weather Data", cells)
```
# Next
To open the next Jupyter notebook:
1. Go to File at the top left
1. Click Open
1. In the new tab that opens, click on `cubike_data_science.ipynb`
```
!curl -L https://www.dropbox.com/s/qsdq7sx946t39pa/amazon.tar?dl=1 -o amazon.tar
!tar xvf amazon.tar
import pandas as pd
import numpy as np
np.random.seed(0)
import cv2
from tqdm.notebook import tqdm
import os
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.applications import ResNet50, VGG16
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from sklearn.metrics import fbeta_score
from tensorflow.keras.optimizers import Adam
from matplotlib import pyplot as plt
import seaborn as sns
from functools import partial
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
import gc
df_train = pd.read_csv('amazon/train_v2.csv')
df_train.head()
img = cv2.imread('amazon/train-jpg/{}.jpg'.format('train_1'))
img.shape
plt.imshow(img)
plt.show()
all_labels = df_train['tags'].map(lambda x: x.split(' ')).values
labels = list(set([y for x in all_labels for y in x]))
print( len(labels), labels )
def read_data(df_train, labels, resize=(32, 32)):
    train_split = 5000
    test_split = 1000
    df_train = df_train[:train_split + test_split]
    X_train = []
    y_train = []
    label_map = {l: i for i, l in enumerate(labels)}
    inv_label_map = {i: l for l, i in label_map.items()}
    for f, tags in tqdm(df_train.values, miniters=1000):
        if not os.path.exists('amazon/train-jpg/{}.jpg'.format(f)): continue
        img = cv2.imread('amazon/train-jpg/{}.jpg'.format(f))
        targets = np.zeros(len(label_map))
        for t in tags.split(' '):
            targets[label_map[t]] = 1
        X_train.append(cv2.resize(img, resize))
        y_train.append(targets)
    y_train = np.array(y_train, np.uint8)
    X_train = np.array(X_train, np.float16) / 255.
    return X_train[:train_split], X_train[train_split:train_split+test_split], y_train[:train_split], y_train[train_split:train_split+test_split]
def fbeta_score_K(y_true, y_pred):
    beta_squared = 4
    tp = K.sum(y_true * y_pred) + K.epsilon()
    fp = K.sum(y_pred) - tp
    fn = K.sum(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    result = (beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall + K.epsilon())
    return result
def learning_curve(model_fit, key='acc', ylim=(0.5, 1.01)):
    plt.figure(figsize=(12, 6))
    plt.plot(model_fit.history[key])
    plt.plot(model_fit.history['val_' + key])
    plt.title('Learning Curve')
    plt.ylabel(key.title())
    plt.xlabel('Epoch')
    plt.ylim(ylim)
    plt.legend(['train', 'test'], loc='best')
    plt.show()
if 'X_train' in locals(): del X_train
if 'X_test' in locals(): del X_test
if 'y_train' in locals(): del y_train
if 'y_test' in locals(): del y_test
X_train, X_test, y_train, y_test = read_data(df_train, labels, resize=(128, 128))
plt.imshow((X_train[1] * 255).astype(int))
plt.show()
X_train.shape
```
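The `fbeta_score_K` metric above uses $\beta^2 = 4$ (i.e. β = 2), weighting recall more heavily than precision, which suits this multi-label satellite task. Its arithmetic can be sanity-checked on plain NumPy arrays (a sketch, with the epsilon smoothing terms omitted):

```
import numpy as np

def fbeta_numpy(y_true, y_pred, beta_squared=4):
    # same arithmetic as fbeta_score_K above, without the epsilon terms
    tp = np.sum(y_true * y_pred)
    fp = np.sum(y_pred) - tp
    fn = np.sum(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (beta_squared + 1) * precision * recall / (beta_squared * precision + recall)

# tp=2, fp=1, fn=1 -> precision = recall = 2/3, so F2 = 2/3
score = fbeta_numpy(np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0]))
```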
## Data analysis
```
ax = plt.subplot(1,1,1)
plt.bar(range(len(labels)), y_train.sum(axis=0));
plt.bar(range(len(labels)), y_test.sum(axis=0));
plt.xticks(range(len(labels)), rotation=90);
ax.set_xticklabels(labels)
plt.show()
```
## Custom CNN
```
model = Sequential([
Conv2D(32, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3])),
Conv2D(32, kernel_size=(3, 3), activation='relu'),
MaxPool2D(pool_size=(2, 2)),
Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'),
Conv2D(64, kernel_size=(3, 3), activation='relu'),
MaxPool2D(pool_size=(2, 2)),
Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'),
Conv2D(128, kernel_size=(3, 3), activation='relu'),
MaxPool2D(pool_size=(2, 2)),
Conv2D(256, kernel_size=(3, 3), activation='relu', padding='same'),
Conv2D(256, kernel_size=(3, 3), activation='relu'),
MaxPool2D(pool_size=(2, 2)),
Flatten(),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(17, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[fbeta_score_K])
model.summary()
history = model.fit(X_train, y_train,
batch_size=128,
epochs=5,
verbose=1,
validation_data=(X_test, y_test))
learning_curve(history, 'fbeta_score_K')
```
## VGG
```
base_model = VGG16(weights='imagenet', include_top=False)
for layer in base_model.layers:  # freeze every layer...
    layer.trainable = False
for layer in base_model.layers[-4:]:  # ...then unfreeze the last four
    layer.trainable = True
base_model.summary()
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
for layer in base_model.layers:  # freeze every layer...
    layer.trainable = False
for layer in base_model.layers[-4:]:  # ...then unfreeze the last four
    layer.trainable = True
model = Sequential([
base_model,
Flatten(),
Dense(1024, activation='relu'),
Dropout(0.2),
Dense(512, activation='relu'),
Dropout(0.2),
Dense(17, activation='sigmoid')
])
optimizer = Adam(0.0003, decay=0.000005)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[fbeta_score_K])
model.summary()
callbacks = [
    EarlyStopping(patience=10),  # stop if there is no improvement for 10 epochs in a row
    ReduceLROnPlateau(patience=3),  # if there is no improvement for 3 epochs in a row, reduce the learning rate
]
history = model.fit(X_train, y_train,
batch_size=128,
epochs=150,
verbose=1,
validation_data=(X_test, y_test),
callbacks = callbacks)
learning_curve(history, 'fbeta_score_K')
```
## Hyperparameter optimization
```
def test_model(base_model, step=0.0003, decay=0.000005):
    base_model = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
    for layer in base_model.layers:  # freeze every layer...
        layer.trainable = False
    for layer in base_model.layers[-4:]:  # ...then unfreeze the last four
        layer.trainable = True
    model = Sequential([
        base_model,
        Flatten(),
        Dense(1024, activation='relu'),
        Dropout(0.2),
        Dense(512, activation='relu'),
        Dropout(0.2),
        Dense(17, activation='sigmoid')
    ])
    optimizer = Adam(step, decay=decay)
    metric = fbeta_score_K
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[metric])
    callbacks = [
        EarlyStopping(patience=4),
    ]
    history = model.fit(X_train, y_train,
                        batch_size=128,
                        epochs=30,
                        verbose=0,
                        validation_data=(X_test, y_test),
                        callbacks=callbacks)
    learning_curve(history, 'fbeta_score_K')
    return max(history.history['val_' + metric.__name__])
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
score = test_model(base_model)
print()
print(score)
# def test_model(base_model, step=0.0003, decay=0.000005):
def objective(space):
    params = {
        'step': space['step'],
        'decay': space['decay'],
    }
    print('training with params: {}'.format(params))
    base_model = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
    score = test_model(base_model, **params)
    print('score: {}\n'.format(score))
    return {'loss': -score, 'status': STATUS_OK}  # negate: hyperopt minimizes, but a higher F-beta is better
space ={
'step': hp.uniform ('step', 0.0001, 0.01),
'decay': hp.uniform ('decay', 0.000001, 0.0001),
}
trials = Trials()
best_params = fmin(fn=objective,
space=space,
algo=partial(tpe.suggest, n_startup_jobs=5),
max_evals=20,
trials=trials)
print("The best params: ", best_params)
```
# Bayesian Multilevel Modelling using PyStan
This is a tutorial, following through Chris Fonnesbeck's [primer on using PyStan with Bayesian Multilevel Modelling](http://mc-stan.org/documentation/case-studies/radon.html).
# 2. Data Import and Cleaning
```
%pylab inline
import numpy as np
import pandas as pd
```
## Load radon data
We import the radon data, which can be found in the file `data/srrs2.dat`.
For cleanup, we strip whitespace from column headers, restrict data to the state of Minnesota (coded as `MN`) and add a unique numerical identifier (`fips`) for each county, derived from the state identifier `stfips` and the county identifier `cntyfips`.
```
# Import radon data
srrs2 = pd.read_csv('data/srrs2.dat')
srrs2.columns = srrs2.columns.map(str.strip)
# Make a combined state and county ID, by household
srrs_mn = srrs2.assign(fips=srrs2.stfips * 1000 + srrs2.cntyfips)[srrs2.state == 'MN']
```
We inspect the first few lines of the data with the `.head()` method, and examine the columns that are available:
```
# Check first few lines
srrs_mn.head()
# What columns are available?
srrs_mn.columns
```
## Load uranium data
We also import uranium data (found in the file `data/cty.dat`) for each county in the state.
We create the same numerical identifier for county (`fips`) as before, from `stfips` and `ctfips`.
```
# Obtain the uranium level as a county-level predictor
cty = pd.read_csv('data/cty.dat')
cty_mn = cty[cty.st == 'MN'].copy() # MN only data
# Make a combined state and county id, by county
cty_mn['fips'] = 1000 * cty_mn.stfips + cty_mn.ctfips
```
We check the first few lines (the uranium concentration is in the column `Uppm`), and what columns are available:
```
# Check first few lines of data
cty_mn.head()
# What columns are in the data?
cty_mn.columns
```
## Merging datasets
It is convenient to bring all the data into a single dataframe with radon and uranium data by household, so we merge both tables together on the basis of the unique county identifier, to assign uranium data across all households in a county.
We check that the column `Uppm` has been merged.
```
# Combine data into a single dataframe
srrs_mn = srrs_mn.merge(cty_mn[['fips', 'Uppm']], on='fips') # Get uranium level by household (on county basis)
srrs_mn = srrs_mn.drop_duplicates(subset='idnum') # Lose duplicate houses
u = np.log(srrs_mn.Uppm) # log-transform uranium level
n = len(srrs_mn) # number of households
# Check first few lines of data
srrs_mn.head()
```
## Indexing counties
We create a dictionary associating each county with a unique index code, which will be used to build variables to be passed to `Stan`.
```
# Index counties with a lookup dictionary
srrs_mn.county = srrs_mn.county.str.strip()
mn_counties = srrs_mn.county.unique()
counties = len(mn_counties)
county_lookup = dict(zip(mn_counties, range(len(mn_counties))))
```
For construction of a `Stan` model, it is convenient to have the relevant variables as local copies - this aids readability.
* `county`: index code for each county
* `radon`: radon activity
* `log_radon`: log radon activity
* `floor_measure`: which floor measurement was taken
```
# Make local copies of variables
county = srrs_mn['county_code'] = srrs_mn.county.replace(county_lookup).values
radon = srrs_mn.activity
srrs_mn['log_radon'] = log_radon = np.log(radon + 0.1).values
floor_measure = srrs_mn.floor.values
```
## Helper script
To support the following notebooks, the data import, clean-up and variable creation code above is made available in the Python module `clean_data.py`. This will be imported at the top of each notebook as
```
import clean_data
```
# Classical Computation on a Quantum Computer
## Contents
1. [Introduction](#intro)
2. [Consulting an Oracle](#oracle)
3. [Taking Out the Garbage](#garbage)
## 1. Introduction <a id='intro'></a>
One consequence of having a universal set of quantum gates is the ability to reproduce any classical computation. We simply need to compile the classical computation down into the Boolean logic gates that we saw in *The Atoms of Computation*, and then reproduce these on a quantum computer.
This demonstrates an important fact about quantum computers: they can do anything that a classical computer can do, and they can do so with at least the same computational complexity. Though it is not the aim to use quantum computers for tasks at which classical computers already excel, this is nevertheless a good demonstration that quantum computers can solve a general range of problems.
Furthermore, problems that require quantum solutions often involve components that can be tackled using classical algorithms. In some cases, these classical parts can be done on classical hardware. However, in many cases, the classical algorithm must be run on inputs that exist in a superposition state. This requires the classical algorithm to be run on quantum hardware. In this section we introduce some of the ideas used when doing this.
## 2. Consulting an Oracle <a id='oracle'></a>
Many quantum algorithms are based around the analysis of some function $f(x)$. Often these algorithms simply assume the existence of some 'black box' implementation of this function, which we can give an input $x$ and receive the corresponding output $f(x)$. This is referred to as an *oracle*.
Thinking of the oracle in this abstract way allows us to concentrate on the quantum techniques we use to analyze the function, rather than on the function itself.
In order to understand how oracles work within a quantum algorithm, we need to be specific about how they are defined. One of the main forms that oracles take is that of *Boolean oracles*. These are described by the following unitary evolution,
$$
U_f \left|x , \bar 0 \right\rangle = \left|x, f(x)\right\rangle.
$$
Here $\left|x , \bar 0 \right\rangle = \left|x \right\rangle \otimes \left|\bar 0 \right\rangle$ is used to represent a multi-qubit state consisting of two registers. The first register is in state $\left|x\right\rangle$, where $x$ is a binary representation of the input to our function. The number of qubits in this register is the number of bits required to represent the inputs.
The job of the second register is to similarly encode the output. Specifically, the state of this register after applying $U_f$ will be a binary representation of the output $\left|f(x)\right\rangle$, and this register will consist of as many qubits as are required for this. The initial state $\left|\bar 0 \right\rangle$ for this register represents the state in which all qubits are $\left|0 \right\rangle$. For other initial states, applying $U_f$ will lead to different results. The specific results that arise will depend on how we define the unitary $U_f$.
Another form of oracle is the *phase oracle*, which is defined as follows,
$$
P_f \left|x \right\rangle = (-1)^{f(x)} \left|x \right\rangle,
$$
where the output $f(x)$ is typically a simple bit value of $0$ or $1$.
Though it seems much different in form from the Boolean oracle, it is very much another expression of the same basic idea. In fact, it can be realized using the same 'phase kickback' mechanism as described in a previous section.
To see this, consider the Boolean oracle $U_f$ that would correspond to the same function. This can be implemented as something that is essentially a generalized form of the controlled-NOT. It is controlled on the input register, such that it leaves the output bit in state $\left|0 \right\rangle$ for $f(x)=0$, and applies an $X$ to flip it to $\left|1 \right\rangle$ if $f(x)=1$. If the initial state of the output register were $\left|- \right\rangle$ rather than $\left|0 \right\rangle$, the effect of $U_f$ would then be to induce exactly the phase of $(-1)^{f(x)}$ required.
$$
U_f \left( \left|x \right\rangle \otimes \left| - \right\rangle \right) = (P_f \otimes I) \left( \left|x \right\rangle \otimes \left| - \right\rangle \right)
$$
Since the $\left|- \right\rangle$ state of the output qubit is left unchanged by the whole process, it can safely be ignored. The end effect is therefore that the phase oracle is simply implemented by the corresponding Boolean oracle.
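This identity can be verified numerically for the one-bit case $f(x) = x$, where $U_f$ is just a CNOT (a NumPy sketch, independent of any quantum SDK):

```
import numpy as np

# U_f for f(x) = x acting on |x, y> is a CNOT; basis order |00>, |01>, |10>, |11>
Uf = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

minus = np.array([1, -1]) / np.sqrt(2)  # the |-> state

for x in (0, 1):
    ket_x = np.zeros(2)
    ket_x[x] = 1
    state = np.kron(ket_x, minus)
    # U_f (|x> tensor |->) equals (-1)^f(x) (|x> tensor |->), here with f(x) = x
    assert np.allclose(Uf @ state, (-1) ** x * state)
```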
## 3. Taking Out the Garbage <a id='garbage'></a>
The functions evaluated by an oracle are typically those that can be evaluated efficiently on a classical computer. However, the need to implement them as unitaries in one of the forms shown above means that they must instead be implemented using quantum gates. This is not quite as simple as taking the Boolean gates that implement the classical algorithm and replacing them with their quantum counterparts.
One issue that we must take care of is that of reversibility. A unitary of the form $U = \sum_x \left| f(x) \right\rangle \left\langle x \right|$ is only possible if every unique input $x$ results in a unique output $f(x)$, which is not true in general. However, we can force it to be true by simply including a copy of the input in the output. It is this that leads us to the form for Boolean oracles as we saw earlier
$$
U_f \left|x,\bar 0 \right\rangle = \left| x,f(x) \right\rangle
$$
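One standard way to realize this on all basis states is the XOR construction $U_f \left|x, y\right\rangle = \left|x, y \oplus f(x)\right\rangle$, which is always a permutation of the basis states — and hence unitary — even when $f$ itself is not invertible. A NumPy sketch for a one-bit input and the (non-invertible) constant function $f(x) = 1$:

```
import numpy as np

def boolean_oracle(f, n_in):
    # U_f |x, y> = |x, y XOR f(x)> for a one-bit output register
    dim = 2 ** (n_in + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

U = boolean_oracle(lambda x: 1, n_in=1)  # constant f: not invertible on its own
assert np.allclose(U @ U.T, np.eye(4))   # but U_f is still unitary
```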
With the computation written as a unitary, we are able to consider the effect of applying it to superposition states. For example, let us take the superposition over all possible inputs $x$ (unnormalized for simplicity). This will result in a superposition of all possible input/output pairs,
$$
U_f \sum_x \left|x,0\right\rangle = \sum_x \left|x,f(x)\right\rangle.
$$
When adapting classical algorithms, we also need to take care that these superpositions behave as we need them to. Classical algorithms typically do not only compute the desired output, but will also create additional information along the way. Such additional remnants of a computation do not pose a significant problem classically, and the memory they take up can easily be recovered by deleting them. From a quantum perspective, however, things are not so easy.
For example, consider the case that a classical algorithm performs the following process,
$$
V_f \left|x,\bar 0, \bar 0 \right\rangle = \left| x,f(x), g(x) \right\rangle
$$
Here we see a third register, which is used as a 'scratchpad' for the classical algorithm. We will refer to information that is left in this register at the end of the computation as the 'garbage', $g(x)$. Let us use $V_f$ to denote a unitary that implements the above.
Quantum algorithms are typically built upon interference effects. The simplest such effect is to create a superposition using some unitary, and then remove it using the inverse of that unitary. The entire effect of this is, of course, trivial. However, we must ensure that our quantum computer is at least able to do such trivial things.
For example, suppose some process within our quantum computation has given us the superposition state $\sum_x \left|x,f(x)\right\rangle$, and we are required to return this to the state $\sum_x \left|x,0\right\rangle$. For this we could simply apply $U_f^\dagger$. The ability to apply this follows directly from knowing a circuit that would apply $U_f$, since we would simply need to replace each gate in the circuit with its inverse and reverse the order.
However, suppose we don't know how to apply $U_f$, but instead know how to apply $V_f$. This means that we can't apply $U_f^\dagger$ here, but could use $V_f^\dagger$. Unfortunately, the presence of the garbage means that it won't have the same effect.
For an explicit example of this we can take a very simple case. We'll restrict $x$, $f(x)$ and $g(x)$ to all consist of just a single bit. We'll also use $f(x) = x$ and $g(x) = x$, each of which can be achieved with just a single `cx` gate controlled on the input register.
Specifically, the circuit to implement $U_f$ is just the following single `cx` between the single bit of the input and output registers.
```
from qiskit import QuantumCircuit, QuantumRegister, Aer, execute
input_bit = QuantumRegister(1, 'input')
output_bit = QuantumRegister(1, 'output')
garbage_bit = QuantumRegister(1, 'garbage')
Uf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Uf.cx(input_bit[0], output_bit[0])
Uf.draw(output='mpl',justify='none')
```
For $V_f$, where we also need to make a copy of the input for the garbage, we can use the following two `cx` gates.
```
Vf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Vf.cx(input_bit[0], garbage_bit[0])
Vf.cx(input_bit[0], output_bit[0])
Vf.draw(output='mpl',justify='none')
```
Now we can look at the effect of first applying $U_f$, and then applying $V_f^{\dagger}$. The net effect is the following circuit.
```
qc = Uf + Vf.inverse()
qc.draw(output='mpl',justify='none')
```
This circuit begins with two identical `cx` gates, whose effects cancel each other out. All that remains is the final `cx` between the input and garbage registers. Mathematically, this means
$$
V_f^\dagger U_f \left| x,0,0 \right\rangle = V_f^\dagger \left| x,f(x),0 \right\rangle = \left| x , 0 ,g(x) \right\rangle.
$$
Here we see that the action of $V_f^\dagger$ does not simply return us to the initial state, but instead leaves the first qubit entangled with unwanted garbage. Any subsequent steps in an algorithm will therefore not run as expected, since the state is not the one that we need.
For this reason we need a way of removing classical garbage from our quantum algorithms. This can be done by a method known as 'uncomputation'. We simply need to take another blank variable and apply $V_f$
$$
\left| x, 0, 0, 0 \right\rangle \rightarrow \left| x,f(x),g(x),0 \right\rangle.
$$
Then we apply a set of controlled-NOT gates, each controlled on one of the qubits used to encode the output, and targeted on the corresponding qubit in the extra blank variable.
Here's the circuit to do this for our example using single qubit registers.
```
final_output_bit = QuantumRegister(1, 'final-output')
copy = QuantumCircuit(output_bit, final_output_bit)
copy.cx(output_bit, final_output_bit)
copy.draw(output='mpl',justify='none')
```
The effect of this is to copy the information over (if you have heard of the no-cloning theorem, note that this is not the same process). Specifically, it transforms the state in the following way.
$$
\left| x,f(x),g(x),0 \right\rangle \rightarrow \left| x,f(x),g(x),f(x) \right\rangle.
$$
Finally we apply $V_f^\dagger$, which undoes the original computation.
$$
\left| x,f(x),g(x),f(x) \right\rangle \rightarrow \left| x,0,0,f(x) \right\rangle.
$$
The copied output nevertheless remains. The net effect is to perform the computation without garbage, and hence achieve our desired $U_f$.
For our example using single qubit registers, and for which $f(x) = x$, the whole process corresponds to the following circuit.
```
(Vf.inverse() + copy + Vf).draw(output='mpl',justify='none')
```
Using what you know so far of how the `cx` gates work, you should be able to see that the two applied to the garbage register will cancel each other out. We have therefore successfully removed the garbage.
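The whole compute–copy–uncompute sequence can also be checked numerically for this example, representing each `cx` as a permutation matrix (a NumPy sketch mirroring the circuits above, with $f(x) = g(x) = x$):

```
import numpy as np

def cx(n, control, target):
    # CNOT on an n-qubit register as a permutation matrix (qubit 0 = leftmost bit)
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

# qubits: 0 = input, 1 = output, 2 = garbage, 3 = final output
Vf = cx(4, 0, 2) @ cx(4, 0, 1)       # V_f writes f(x) = x and g(x) = x
copy = cx(4, 1, 3)                   # copy the output register to final output
sequence = Vf.conj().T @ copy @ Vf   # apply V_f, then the copy, then V_f-dagger

for x in (0, 1):
    start = np.zeros(16)
    start[x << 3] = 1            # |x, 0, 0, 0>
    expected = np.zeros(16)
    expected[(x << 3) | x] = 1   # |x, 0, 0, f(x)> with the garbage cleared
    assert np.allclose(sequence @ start, expected)
```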
### Quick Exercises
1. Show that the output is correctly written to the 'final output' register (and only to this register) when the 'output' register is initialized as $|0\rangle$.
2. Determine what happens when the 'output' register is initialized as $|1\rangle$.
With this method, and all of the others covered in this chapter, we now have all the tools we need to create quantum algorithms. Now we can move on to seeing those algorithms in action.
```
import qiskit
qiskit.__qiskit_version__
```
# Energy Meter Examples
## Linux Kernel HWMon
More details can be found at https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#linux-hwmon.
```
import logging
from conf import LisaLogging
LisaLogging.setup()
```
#### Import required modules
```
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
```
## Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in **examples/utils/testenv_example.ipynb**.
```
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_HWMON",
# Energy meter configuration for Linux hwmon
"emeter" : {
"instrument" : "hwmon",
"conf" : {
# Prefixes of the HWMon labels
'sites' : ['a53', 'a57'],
# Type of hardware monitor to be used
'kinds' : ['energy']
},
'channel_map' : {
'LITTLE' : 'a53',
'big' : 'a57',
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
```
## Workload Execution and Power Consumption Sampling
Detailed information on RTApp can be found in **examples/wlgen/rtapp_example.ipynb**.
Each **EnergyMeter** derived class has two main methods: **reset** and **report**.
- The **reset** method will reset the energy meter and start sampling from the channels specified in the target configuration. <br>
- The **report** method will stop the capture and retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels' energy and the report file. Each of the samples can also be obtained, as you can see below.
```
# Create an RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
```
## Power Measurements Data
```
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
```
```
import numpy
import urllib.request
import scipy.optimize
import random
from math import exp
from math import log

def parseData(fname):
    # each line of the file is a Python literal
    for l in urllib.request.urlopen(fname):
        yield eval(l)
print("Reading data...")
data = list(parseData("file:train.json"))
print("done")
from collections import defaultdict
train_set = data[0:100000]
valid_set = data[100000:200000]
usersID = []
businessesID = []
visit = {}
nonvisit = {}
for info in data:
    usersID.append(info['userID'])
    businessesID.append(info['businessID'])
    if info['userID'] in visit:
        visit[info['userID']].append(info['businessID'])
    else:
        visit[info['userID']] = [info['businessID']]
numpy.random.shuffle(usersID)
numpy.random.shuffle(businessesID)
count = 0
while count < 100000:
    user = random.choice(usersID)
    business = random.choice(businessesID)
    if business in visit[user]:
        pass
    else:
        if user in nonvisit:
            if business in nonvisit[user]:
                pass
            else:
                nonvisit[user].append(business)
                count += 1
        else:
            nonvisit[user] = [business]
            count += 1
with open('pairs_Visit_valid.txt', 'w+') as f:
    for pos_datum in valid_set:
        f.writelines(pos_datum['userID'] + '-' + pos_datum['businessID'] + ',' + '1\n')
    for neg_datum in nonvisit.keys():
        if len(nonvisit[neg_datum]) > 1:
            for business in nonvisit[neg_datum]:
                f.writelines(neg_datum + '-' + business + ',' + '0\n')
        else:
            f.writelines(neg_datum + '-' + nonvisit[neg_datum][0] + ',' + '0\n')
fread = open("pairs_Visit_valid.txt", "r")
lines = fread.readlines()
fread.close()
random.shuffle(lines)
fwrite = open("pairs_Visit_valid.txt", "w")
fwrite.writelines('userID-businessID,prediction\n')
fwrite.writelines(lines)
fwrite.close()
### Would-visit baseline: just rank which businesses are popular and which are not, and return '1' if a business is among the top-ranked
businessCount = defaultdict(int)
totalPurchases = 0
for l in data:
    user, business = l['userID'], l['businessID']
    businessCount[business] += 1
    totalPurchases += 1
mostPopular = [(businessCount[x], x) for x in businessCount]
mostPopular.sort()
mostPopular.reverse()
print('Threshold\tAccuracy\n')
for i in range(100):
    threshold = i * 0.01
    return1 = set()
    count = 0
    for ic, business in mostPopular:
        count += ic
        return1.add(business)
        if count > totalPurchases * threshold: break
    right_count = 0
    wrong_count = 0
    for l in open("pairs_Visit_valid.txt"):
        if l.startswith("userID"):
            pass
        else:
            info = l.strip().split(',')
            pairs = info[0].split('-')
            if pairs[1] in return1:
                if info[1] == '1':
                    right_count += 1
                else:
                    wrong_count += 1
            else:
                if info[1] == '0':
                    right_count += 1
                else:
                    wrong_count += 1
    print(str(threshold) + '\t\t' + str(float(right_count) / (right_count + wrong_count)))
predictions = open("predictions_Visit.txt", 'w')
for l in open("pairs_Visit.txt"):
    if l.startswith("userID"):
        # header
        predictions.write(l)
        continue
    u, i = l.strip().split('-')
    if i in return1:
        predictions.write(u + '-' + i + ",1\n")
    else:
        predictions.write(u + '-' + i + ",0\n")
predictions.close()
usersID = []
businessesID = []
visit = {}
nonvisit = {}
for info in data:
usersID.append(info['userID'])
businessesID.append(info['businessID'])
if (visit.has_key(info['userID'])):
visit[info['userID']].append(info['businessID'])
else:
visit[info['userID']] = [info['businessID']]
numpy.random.shuffle(usersID)
numpy.random.shuffle(businessesID)
count = 0
while count<100000:
user = random.choice(usersID)
business = random.choice(businessesID)
if business in visit[user]:
pass
else:
if (nonvisit.has_key(user)):
if business in nonvisit[user]:
pass
else:
nonvisit[user].append(business)
count += 1
else:
nonvisit[user] = [business]
count += 1
with open('pairs_Visit_valid.txt','w+') as f:
for pos_datum in valid_set:
f.writelines(pos_datum['userID']+'-'+pos_datum['businessID']+','+'1\n')
for neg_datum in nonvisit.keys():
if len(nonvisit[neg_datum])>1:
for business in nonvisit[neg_datum]:
f.writelines(neg_datum+'-'+business+','+'0\n')
else:
f.writelines(neg_datum+'-'+nonvisit[neg_datum][0]+','+'0\n')
fread = open("pairs_Visit_valid.txt", "r")
lines = fread.readlines()
fread.close()
random.shuffle(lines)
fwrite = open("pairs_Visit_valid.txt", "w")
fwrite.writelines('userID-businessID,prediction\n')
fwrite.writelines(lines)
fwrite.close()
### Would-visit baseline: just rank which businesses are popular and which are not, and return '1' if a business is among the top-ranked
businessCount = defaultdict(int)
totalPurchases = 0
for l in data:
user,business = l['userID'],l['businessID']
businessCount[business] += 1
totalPurchases += 1
mostPopular = [(businessCount[x], x) for x in businessCount]
mostPopular.sort()
mostPopular.reverse()
print('Threshold\tAccuracy\n')
for i in range(100):
threshold = i * 0.01
return1 = set()
count = 0
for ic, i in mostPopular:
count += ic
return1.add(i)
if count > totalPurchases*threshold: break
right_count = 0
wrong_count = 0
for l in open("pairs_Visit_valid.txt"):
if l.startswith("userID"):
pass
else:
info = l.strip().split(',')
pairs = info[0].split('-')
if pairs[1] in return1:
if info[1] == '1':
right_count += 1
else:
wrong_count += 1
else:
if info[1] == '0':
right_count += 1
else:
wrong_count += 1
print(str(threshold) + '\t\t' + str(float(right_count)/(right_count+wrong_count)))
predictions = open("predictions_Visit.txt", 'w')
for l in open("pairs_Visit.txt"):
if l.startswith("userID"):
#header
predictions.write(l)
continue
u,i = l.strip().split('-')
if i in return1:
predictions.write(u + '-' + i + ",1\n")
else:
predictions.write(u + '-' + i + ",0\n")
predictions.close()
### Would-visit baseline: just rank which businesses are popular and which are not, and return '1' if a business is among the top-ranked
businessCount = defaultdict(int)
totalPurchases = 0
for l in train_set:
user,business = l['userID'],l['businessID']
businessCount[business] += 1
totalPurchases += 1
mostPopular = [(businessCount[x], x) for x in businessCount]
mostPopular.sort()
mostPopular.reverse()
print('Threshold\tAccuracy\n')
for i in range(100):
threshold = i * 0.01
return1 = set()
count = 0
for ic, i in mostPopular:
count += ic
return1.add(i)
if count > totalPurchases*threshold: break
right_count = 0
wrong_count = 0
for l in open("pairs_Visit_valid.txt"):
if l.startswith("userID"):
pass
else:
info = l.strip().split(',')
pairs = info[0].split('-')
if pairs[1] in return1:
if info[1] == '1':
right_count += 1
else:
wrong_count += 1
else:
if info[1] == '0':
right_count += 1
else:
wrong_count += 1
print(str(threshold) + '\t\t' + str(float(right_count)/(right_count+wrong_count)))
import gzip
from collections import defaultdict
def readGz(f):
for l in gzip.open(f, 'rt'):
yield eval(l)
### Would-visit baseline: just rank which businesses are popular and which are not, and return '1' if a business is among the top-ranked
businessCount = defaultdict(int)
totalPurchases = 0
#for l in readGz("valid.json.gz"):
# user,business = l['userID'],l['businessID']
# businessCount[business] += 1
# totalPurchases += 1
with open('pairs_Visit_valid.txt','r') as f:
lines = f.readlines()
for line in lines:
info = line.split(',')
predict = info[1]
pair = info[0].split('-')
user,business = pair[0],pair[1]
if (predict.strip()=='1'):
businessCount[business] += 1
totalPurchases += 1
else:
businessCount[business] += 0
totalPurchases += 1
mostPopular = [(businessCount[x], x) for x in businessCount]
mostPopular.sort()
mostPopular.reverse()
pos_count = 0
pos_business = 0
sumall = 0
for x in businessCount:
if businessCount[x] > 0:
pos_count += 1
pos_business += businessCount[x]
for x in businessCount:
sumall += businessCount[x]
print(pos_count)
print(pos_business)
print(sum(businessCount.values()))
print(len(businessCount))
print('total = ' + str(totalPurchases))
return1 = set()
count = 0
for ic, i in mostPopular:
if ic > 0:
#print ic
count += ic
return1.add(i)
if count > totalPurchases*0.5: break
predictions = open("predictions_Visit.txt", 'w')
for l in open("pairs_Visit.txt"):
if l.startswith("userID"):
#header
predictions.write(l)
continue
u,i = l.strip().split('-')
if i in return1:
predictions.write(u + '-' + i + ",1\n")
else:
predictions.write(u + '-' + i + ",0\n")
predictions.close()
print(valid_set[0])
print(valid_set[0]['businessID'])
print(valid_set[0]['userID'])
```
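The cells above use Python 2 idioms; the following is a compact, self-contained Python 3 sketch of the same popularity-threshold baseline. The synthetic `train` and `labeled_pairs` data are stand-ins for the course files, which are assumptions here.

```python
from collections import defaultdict

# Synthetic stand-ins for the (user, business) visit data and the labeled pairs file.
train = [("u1", "b1"), ("u2", "b1"), ("u3", "b2"), ("u4", "b3")]
labeled_pairs = [("u1", "b1", 1), ("u5", "b9", 0), ("u2", "b3", 1)]

# Count visits per business and rank by popularity, as in the cells above.
business_count = defaultdict(int)
for _, business in train:
    business_count[business] += 1
total = sum(business_count.values())

ranked = sorted(business_count.items(), key=lambda kv: kv[1], reverse=True)

def popular_set(threshold):
    """Businesses covering the top `threshold` fraction of all visits."""
    return1, covered = set(), 0
    for business, count in ranked:
        covered += count
        return1.add(business)
        if covered > total * threshold:
            break
    return return1

def accuracy(threshold):
    """Predict 1 iff the business is popular; score against the labels."""
    return1 = popular_set(threshold)
    right = sum(1 for _, b, label in labeled_pairs
                if (b in return1) == (label == 1))
    return right / len(labeled_pairs)

print(accuracy(0.5))  # 2 of 3 synthetic pairs classified correctly
```

Sweeping `threshold` over `[i * 0.01 for i in range(100)]`, as the notebook does, then picks the cutoff with the best validation accuracy.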
```
from mpes import fprocessing as fp, analysis as aly, visualization as vis, utils as u
import matplotlib.pyplot as plt
import scipy.io as sio
import numpy as np
import mpld3 # interactive plots
mpld3.enable_notebook()
```
### 4.1 Energy calibration
Given a set of energy dispersion curves (EDCs) and their corresponding bias voltages, calibration consists of four steps:
1. Normalize photoemission spectra (optional) -- `mpes.analysis.normspec()`
2. Select the spectral regions containing similar features (e.g. a peak)
3. Correspondence landmark detection (optional if they can be determined by other means) -- `mpes.analysis.peaksearch()`
4. Polynomial fit to the conversion formula -- `mpes.analysis.calibrateE()`
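The idea behind step 4 can be sketched with plain NumPy. This is only an illustration of fitting a polynomial between peak shifts and bias-voltage differences, not `mpes`'s actual `calibrateE` implementation; the peak positions and voltages below are made up.

```python
import numpy as np

# Hypothetical peak positions (ToF units) and bias voltages for four scans.
peak_tof = np.array([69800., 71500., 73100., 72300.])
bias_v = np.array([13., 16., 18., 17.])

refid = 0  # index of the reference scan
# ToF shift relative to the reference, scaled down for numerical stability.
dt = (peak_tof - peak_tof[refid]) / 1000.0
# Energy shift (in eV) implied by the bias-voltage change.
dv = bias_v - bias_v[refid]

# Fit a polynomial conversion formula; 4 points and degree 3 interpolate exactly.
coeffs = np.polyfit(dt, dv, deg=3)
tof2ev = np.poly1d(coeffs)

print(tof2ev(dt))  # reproduces dv at the calibration points
```

The real `mpes.analysis.calibrateE()` wraps this kind of least-squares fit and returns the conversion function, the coefficients, or the full fit results depending on `ret`.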
#### 4.1.0 Construct a series of EDCs (shown here for datasets #101 to #104)
```
# axes = ['t']
# bins = [800]
# ranges = [(65000, 90000)]
# barr = []
# for i in range(101, 105):
# fdir = r'../data/data_20180605_'+str(i)+'.h5'
# hp = fp.hdf5Processor(fdir)
# resdict = hp.localBinning(axes=axes, nbins=bins, ranges=ranges, jittered=True)
# barr.append(resdict['binned'])
# edcs = np.asarray(barr)
# sio.savemat('../data/ECalib_EDCs.mat', {'EDCs':edcs, 'ToF':resdict['t']})
```
Load measured and binned EDCs from an existing file
```
fcalib = sio.loadmat('../data/ECalib_EDCs.mat')
edcs, tof = fcalib['EDCs'], fcalib['ToF'].ravel()
plt.figure(figsize=(12, 4))
plt.plot(tof, edcs.T)
plt.xlabel('Time of flight', fontsize=15);
plt.ylabel('Photoemission intensity', fontsize=15);
```
#### 4.1.1 Normalize EDCs
```
normedcs = u.normspec(*edcs)
plt.figure(figsize=(12, 4))
plt.plot(tof, normedcs.T)
plt.xlabel('Time of flight', fontsize=15);
plt.ylabel('Normalized intensity', fontsize=15);
```
#### 4.1.2 Peak detection (if necessary)
Method 1: Using a specified region
```
peakrange = [(69600, 70000), (71200, 71900), (72800, 73400), (72000, 72600)]
pks = aly.peaksearch(normedcs, tof, peakrange, plot=True)
pks
```
Method 2: Using parametric alignment (...tbc)
#### 4.1.3 Fitting to calibration curve
Provide the equivalent peak positions (in drift time) of the different scans and the corresponding bias voltages.
`refid` is the index of the reference EDC that the others are subtracted from.
```
Vs = [13, 16, 18, 17]
tof2ev = aly.calibrateE(pks, Vs, refid=0, ret='func')
```
**tof2ev(E0, t)** is the calibration function for converting a time-of-flight reading (t) to energy in eV, with an adjustable constant offset E0. The accuracy of tof2ev is good over an energy range of about 5 eV, but fails over a much larger range.
```
E0 = 7013
f, axs = plt.subplots(1, 2, figsize=(14, 4))
# Energy calibration in a narrow range (~ 5 eV)
tofcond = (tof >= 68600) & (tof <= 73600)
axs[0].plot(tof2ev(E0, tof[tofcond]), normedcs[1,:][tofcond])
axs[0].set_xlabel('Energy (eV)', fontsize=15)
axs[0].set_title('Narrow energy range behavior', fontsize=15)
# Energy calibration in a broad range
axs[1].plot(tof2ev(E0, tof), normedcs[1,:])
axs[1].set_xlabel('Energy (eV)', fontsize=15)
axs[1].set_title('Large energy range behavior', fontsize=15);
plt.plot(tof, tof2ev(E0, tof))
plt.xlabel('Time of flight', fontsize=15)
plt.ylabel('Energy (eV)', fontsize=15)
plt.title('Nonmonotonic E-ToF dependence', fontsize=15)
```
#### 4.1.4 Alternative returns from `mpes.analysis.calibrateE()`
Polynomial coefficients
```
aly.calibrateE(pks, Vs, refid=0, ret='coeffs')
```
Full fitting results (see [numpy.linalg.lstsq](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html))
```
tof2ev, a, result = aly.calibrateE(pks, Vs, refid=0, ret='full')
result
```
Calibrated values
```
tofcond = (tof >= 68500) & (tof <= 70000)
evals = aly.calibrateE(pks, Vs, refid=0, ret='eVscale', E0=E0, t=tof)
plt.plot(evals[tofcond], normedcs[1,:][tofcond])
plt.xlabel('Energy (eV)', fontsize=15);
# sio.savemat('../data/2018_08_11/energy.mat',{'energy':evals[tofcond]})
```
#### 4.1.5 Check known energy scales (e.g. valence band K-point splitting of WSe$_2$)
```
axes = ['X', 'Y', 't']
bins = [100, 100, 800]
ranges = [(300, 1800), (200, 1800), (65000, 90000)]
fdir = r'../data/data_20180605_101.h5'
hp = fp.hdf5Processor(fdir)
resdict = hp.localBinning(axes=axes, nbins=bins, ranges=ranges, jittered=True)
```
Select K point and calculate EDC
```
f, axk = plt.subplots(1, 2, figsize=(8, 4))
axk[0].imshow(resdict['binned'][..., 110:120].sum(axis=2), origin='lower', cmap='terrain_r', vmax=120)
axk[1].imshow(resdict['binned'][34:40, 74:80, 110:120].sum(axis=2), origin='lower', cmap='terrain_r', vmax=120, aspect=1)
axk[0].set_title('Full view', fontsize=15)
axk[1].set_title('View of selected region', fontsize=15);
tofcond = (tof >= 67600) & (tof <= 73600)
tofseg = tof[tofcond]
plt.figure(figsize=(12, 4))
plt.plot(tof2ev(7015, tofseg), resdict['binned'][34:40, 74:80, :].sum(axis=(0, 1))[tofcond])
plt.xticks(range(0, 8, 1))
plt.axvline(x=0.75, color='k', linestyle='--')
plt.axvline(x=0.45, color='k', linestyle='--')
plt.xlabel('Energy (eV)', fontsize=15);
```
Check if the EDCs overlap
```
tofcond = (tof >= 67600) & (tof <= 73600)
tofseg = tof[tofcond]
plt.figure(figsize=(12, 4))
plt.plot(tof2ev(7018, tofseg), normedcs[0,:][tofcond], label=str(Vs[0]) + ' V')
plt.plot(tof2ev(7015, tofseg), normedcs[1,:][tofcond], label=str(Vs[1]) + ' V')
plt.plot(tof2ev(7013, tofseg), normedcs[2,:][tofcond], label=str(Vs[2]) + ' V')
plt.plot(tof2ev(7014, tofseg), normedcs[3,:][tofcond], label=str(Vs[3]) + ' V')
plt.legend()
plt.xticks(range(-3, 10, 1))
plt.xlabel('Energy (eV)', fontsize=15);
#plt.savefig('AlignedEDCs.png', bbox_inches='tight', dpi=100)
tofcond = (tof >= 67600) & (tof <= 73600)
tofseg = tof[tofcond]
plt.figure(figsize=(12, 4))
plt.plot(tof2ev(7013, tofseg), normedcs[0,:][tofcond], label=str(Vs[0]) + ' V')
plt.plot(tof2ev(7013, tofseg), normedcs[1,:][tofcond], label=str(Vs[1]) + ' V')
plt.plot(tof2ev(7013, tofseg), normedcs[2,:][tofcond], label=str(Vs[2]) + ' V')
plt.plot(tof2ev(7013, tofseg), normedcs[3,:][tofcond], label=str(Vs[3]) + ' V')
plt.legend()
plt.xticks(range(-3, 10, 1))
plt.xlabel('Energy (eV)', fontsize=15);
```
### 4.2 Momentum calibration
Given an energy slice with high symmetry points and a known reciprocal space distance, calibration consists of two steps:
1. Select the pixel coordinates of high symmetry points (e.g. valence band local maxima)
2. Line fitting and calculation of the coordinate grid -- `mpes.analysis.calibrateK()`
```
import matplotlib.patches as patches
```
#### 4.2.0 Load data from saved measurement
```
img = sio.loadmat('../data/MomentumCalib.mat')['Kpts']
```
#### 4.2.1 Mark out the high symmetry points
```
# Select zoomable mode in imshow using the following
%matplotlib notebook
%matplotlib notebook
%matplotlib notebook
# Switch back to inline mode (without zooming)
%matplotlib inline
# Pixel coordinates of the high symmetry points
G = (40, 49)
K = (73, 48)
ofs = 5
plt.imshow(img, origin='lower', cmap='terrain_r')
rectG = patches.Rectangle((G[1]-ofs, G[0]-ofs), 10, 10, linewidth=1, edgecolor='k', facecolor='none')
rectK = patches.Rectangle((K[1]-ofs, K[0]-ofs), 10, 10, linewidth=1, edgecolor='k', facecolor='none')
plt.gca().add_patch(rectG)
plt.gca().add_patch(rectK)
```
#### 4.2.2 Calibrate momentum using a known reciprocal space distance (three types of returns shown here)
Return the extents of axes
```
kext = aly.calibrateK(img, K, G, 1.3, ret='extent')
```
Return pixel-level converted coordinates along the row and column axes
```
krow, kcol = aly.calibrateK(img, K, G, 1.3, ret='axes')
```
Return pixel-level converted coordinates in 2D grid format
```
krowgrid, kcolgrid = aly.calibrateK(img, K, G, 1.3, ret='grid')
```
#### 4.2.3 Display momentum calibration results
```
plt.imshow(img, origin='lower', cmap='terrain_r', extent=kext)
plt.xlabel('$k_x$ ($\AA^{-1}$)', fontsize=15)
plt.ylabel('$k_y$ ($\AA^{-1}$)', fontsize=15);
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here are several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
train_df = pd.read_csv('../input/titanic/train.csv')
test_df = pd.read_csv('../input/titanic/test.csv')
combine = [train_df, test_df]
print(train_df.columns.values)
train_df.info()
print('_'*40)
test_df.info()
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
guess_ages = np.zeros((2,3))
guess_ages
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age']
train_df.head()
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
test_df.head(10)
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
```
# Averaging over a region
Although this may not sound like _real_ regridding, averaging a gridded field
over a region is supported by `ESMF`. This works because the `conservative`
regridding method preserves the areal average of the input field. That is, _the
value at each output grid cell is the average input value over the output grid
area_. Instead of mapping the input field unto rectangular outputs cells, it's
mapped unto an irregular mesh defined by an outer polygon. In other words,
applying the regridding weights computes the exact areal-average of the input
grid over each polygon.
This process relies on converting `shapely.Polygon` and `shapely.MultiPolygon`
objects into `ESMF.Mesh` objects. However, ESMF meshes do not support all
features that come with shapely's (Multi)Polygons. Indeed, mesh elements do not
support interior holes or multiple non-touching parts, as `shapely` objects do.
The `xesmf.SpatialAverager` class works around these issues by computing
independent weights for interior holes and multi-part geometries, before
combining the weights.
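Conceptually, applying the regridding weights is just a sparse matrix product: each polygon's row of weights sums to 1, and its dot product with the flattened input field gives that polygon's areal average. A minimal NumPy sketch with a toy 2x2 grid and made-up overlap fractions (not actual ESMF output):

```python
import numpy as np

field = np.array([[1.0, 2.0],
                  [3.0, 4.0]])  # input grid values

# Fraction of each polygon's area overlapping each input cell; rows sum to 1.
weights = np.array([[0.5, 0.5, 0.0, 0.0],       # polygon covering the top row
                    [0.25, 0.25, 0.25, 0.25]])  # polygon covering the whole grid

# One areal average per polygon.
averages = weights @ field.ravel()
print(averages)  # [1.5 2.5]
```

`xesmf.SpatialAverager` stores exactly this kind of weight matrix (in sparse form) and reapplies it to any field on the same grid.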
Transforming polygons into a `ESMF.Mesh` is a slow process. Users looking for
faster (but approximate) methods may want to explore
[regionmask](https://regionmask.readthedocs.io/) or
[clisops](https://clisops.readthedocs.io).
The following example shows just how simple it is to compute the average over
different countries. The notebook uses `geopandas`, a simple and efficient
container for geometries, and `descartes` for plotting maps. Make sure both
packages are installed, as they are not `xesmf` dependencies.
```
%matplotlib inline
import matplotlib.pyplot as plt
import geopandas as gpd
import pandas as pd
from shapely.geometry import Polygon, MultiPolygon
import numpy as np
import xarray as xr
import xesmf as xe
import warnings
warnings.filterwarnings("ignore")
xr.set_options(display_style='text')
```
## Simple example
In this example we'll create a synthetic global field, then compute its average
over six countries.
### Download country outlines
```
# Load some polygons from the internet
regs = gpd.read_file(
"https://cdn.jsdelivr.net/npm/world-atlas@2/countries-10m.json"
)
# Select a few countries for the sake of the example
regs = regs.iloc[[5, 9, 37, 67, 98, 155]]
# Simplify the geometries to a 0.02 deg tolerance, which is 1/100 of our grid.
# The simpler the polygons, the faster the averaging, but we lose some precision.
regs["geometry"] = regs.simplify(tolerance=0.02, preserve_topology=True)
regs
# Create synthetic global data
ds = xe.util.grid_global(2, 2)
ds = ds.assign(field=xe.data.wave_smooth(ds.lon, ds.lat))
ds
# Display the global field and countries' outline.
fig, ax = plt.subplots()
ds.field.plot(ax=ax, x="lon", y="lat")
regs.plot(ax=ax, edgecolor="k", facecolor="none")
```
### Compute the field average over each country
`xesmf.SpatialAverager` is a class designed to average an `xarray.DataArray`
over a list of polygons. It behaves similarly to `xesmf.Regridder`, but has
options to deal specifically with polygon outputs. It uses the `conservative`
regridding, and can store and reuse weights.
```
savg = xe.SpatialAverager(ds, regs.geometry, geom_dim_name="country")
savg
```
When called, the `SpatialAverager` instance returns a `DataArray` of averages
over the `geom` dimension, here countries. `lon` and `lat` coordinates are the
centroids of each polygon.
```
out = savg(ds.field)
out = out.assign_coords(country=xr.DataArray(regs["name"], dims=("country",)))
out
```
As the order of the polygons is conserved in the output, we can easily include
the results back into our `geopandas` dataframe.
```
regs["field_avg"] = out.values
regs
fig, ax = plt.subplots(figsize=(12, 5))
ds.field.plot(ax=ax, x="lon", y="lat", cmap="Greys_r")
handles = regs.plot(
column="field_avg", ax=ax, edgecolor="k", vmin=1, vmax=3, cmap="viridis"
)
```
### Extract the weight mask from the averager
The weights are stored in a sparse matrix structure in
`SpatialAverager.weights`. The sparse matrix can be converted to a full
DataArray, but note that this will increase memory usage proportional to the
number of polygons.
```
# Convert sparse matrix to numpy array, it has size : (n_in, n_out)
# So reshape to the same shape as ds + polygons
w = xr.DataArray(
savg.weights.toarray().reshape(regs.geometry.size, *ds.lon.shape),
dims=("country", *ds.lon.dims),
coords=dict(country=out.country, **ds.lon.coords),
)
plt.subplots_adjust(top=0.9)
facets = w.plot(col="country", col_wrap=2, aspect=2, vmin=0, vmax=0.05)
facets.cbar.set_label("Averaging weights")
```
This also lets us quickly check that the weights are indeed normalized, i.e.
that the sum of each mask is 1.
```
w.sum(dim=["y", "x"]).values
```
# Python API for Table Display
In addition to APIs for creating and formatting BeakerX's interactive table widget, the Python runtime configures pandas to display tables with the interactive widget instead of static HTML.
```
import pandas as pd
from beakerx import *
pd.read_csv('../resources/data/interest-rates.csv')
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.setAlignmentProviderForColumn('m3', TableDisplayAlignmentProvider.CENTER_ALIGNMENT)
table.setRendererForColumn("y10", TableDisplayCellRenderer.getDataBarsRenderer(False))
table.setRendererForType(ColumnType.Double, TableDisplayCellRenderer.getDataBarsRenderer(True))
table
df = pd.read_csv('../resources/data/interest-rates.csv')
df['time'] = df['time'].str.slice(0,19).astype('datetime64[ns]')
table = TableDisplay(df)
table.setStringFormatForTimes(TimeUnit.DAYS)
table.setStringFormatForType(ColumnType.Double, TableDisplayStringFormat.getDecimalFormat(4,6))
table.setStringFormatForColumn("m3", TableDisplayStringFormat.getDecimalFormat(0, 0))
table
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table
#freeze a column
table.setColumnFrozen("y1", True)
#freeze a column to the right
table.setColumnFrozenRight("y10", True)
#hide a column
table.setColumnVisible("y30", False)
table.setColumnOrder(["m3", "y1", "y5", "time", "y2"])
table
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.addCellHighlighter(TableDisplayCellHighlighter.getHeatmapHighlighter("m3", TableDisplayCellHighlighter.FULL_ROW))
table
```
### Display mode: Pandas default
```
beakerx.pandas_display_default()
pd.read_csv('../resources/data/interest-rates.csv')
```
### Display mode: TableDisplay Widget
```
beakerx.pandas_display_table()
pd.read_csv('../resources/data/interest-rates.csv')
```
## Recognized Formats
```
TableDisplay([{'y1':4, 'm3':2, 'z2':1}, {'m3':4, 'z2':2}])
TableDisplay({"x" : 1, "y" : 2})
```
## Programmable Table Actions
```
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
def dclick(row, column, tabledisplay):
tabledisplay.values[row][column] = sum(map(int,tabledisplay.values[row]))
display.setDoubleClickAction(dclick)
def negate(row, column, tabledisplay):
tabledisplay.values[row][column] = -1 * int(tabledisplay.values[row][column])
def incr(row, column, tabledisplay):
tabledisplay.values[row][column] = int(tabledisplay.values[row][column]) + 1
display.addContextMenuItem("negate", negate)
display.addContextMenuItem("increment", incr)
display
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
#set what happens on a double click
display.setDoubleClickAction("runDoubleClick")
display
print("runDoubleClick fired")
print(display.details)
```
## Set index to DataFrame
```
df = pd.read_csv('../resources/data/interest-rates.csv')
df.set_index(['m3'])
df = pd.read_csv('../resources/data/interest-rates.csv')
df.index = df['time']
df
```
# Update cell
```
dataToUpdate = [
{'a':1, 'b':2, 'c':3},
{'a':4, 'b':5, 'c':6},
{'a':7, 'b':8, 'c':9}
]
tableToUpdate = TableDisplay(dataToUpdate)
tableToUpdate
tableToUpdate.values[0][0] = 99
tableToUpdate.sendModel()
tableToUpdate.updateCell(2,"c",121)
tableToUpdate.sendModel()
```
## HTML format
HTML format allows markup and styling of the cell's content. Interactive JavaScript is not supported however.
```
table = TableDisplay({'x': '<em style="color:red">italic red</em>',
'y': '<b style="color:blue">bold blue</b>',
'z': 'strings without markup work fine too'})
table.setStringFormatForColumn("Value", TableDisplayStringFormat.getHTMLFormat())
table
```
## Auto linking of URLs
The normal string format automatically detects URLs and links them. An underline appears when the mouse hovers over such a string, and when you click it opens in a new window.
```
TableDisplay({'Two Sigma': 'http://twosigma.com', 'BeakerX': 'http://BeakerX.com'})
```
# Introduction to Pytorch
```
import torch
```
## Tensors
A tensor is a number, vector, matrix, or any N-dimensional array.
```
# tensor with a single number
t1 = torch.tensor(4.)
t1
t1.dtype
# more complex tensors
t2 = torch.tensor([1., 2, 3, 4])
print(t2) # All the tensor elements are of the same data type
# Matrix - 2D tensor
t3 = torch.tensor([[5, 6],
[7,8],
[9,10]])
print(t3)
# 3D array - a cuboid-like structure: one matrix stacked behind another
# most of the time we will use floating point numbers
t4 = torch.tensor([
[[11,12,13],
[13,14,15]],
[[15,16,17],
[17,18,19.]]
])
t4
```
Tensors can have any number of dimensions and different lengths along each dimension. We can inspect the length along each dimension using the `.shape` property.
```
print(t1)
t1.shape
print(t2)
t2.shape
print(t3)
t3.shape
print(t4)
t4.shape
# Start from the outermost bracket and count the number of elements in it (here 2 elements, both matrices), then go one bracket in (again 2 list elements), and so on...
```
We cannot create a tensor with an improper (ragged) shape.
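For example, a nested list whose rows have different lengths raises a `ValueError` (the exact error message may vary between PyTorch versions):

```python
import torch

try:
    # Rows of different lengths do not form a valid rectangular shape.
    torch.tensor([[1., 2.],
                  [3.]])
except ValueError as err:
    print("invalid shape:", err)
```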
## Tensor Operations and Gradients
```
x = torch.tensor(3.)
w = torch.tensor(4., requires_grad=True)
b = torch.tensor(5., requires_grad=True)
x,w,b
# Arithmetic operations
y = w * x + b
y
# To Compute derivative of y w.r.t w we'll use y.backward()
y.backward()
# derivatives of y w.r.t each of input tensors are stored in .grad property of respective tensors
print('dy/dx', x.grad)
print('dy/dw', w.grad)
print('dy/db', b.grad)
```
We did not specify `requires_grad=True` for `x`; this tells PyTorch that we are not interested in derivatives of any future output with respect to `x`, but we are for `w` and `b`. The `requires_grad` property is important because it lets PyTorch skip millions of useless derivative computations.
"grad" in `w.grad` is short for gradient, another term for derivative that is used primarily when dealing with vectors and matrices.
## Tensor Functions
```
# create a tensor with a fixed value for every element
t6 = torch.full((3,2), 42)
t6
# concatenation
t7 = torch.cat((t3,t6))
t7
# compute sin of each element
t8 = torch.sin(t7)
t8
# change shape of tensor
t9 = t8.reshape(3,2,2)
t9
```
## Interoperability with NumPy
```
import numpy as np
x = np.array([[1,2],[3, 4.]])
x
# to convert numpy array to a pytorch tensor --> torch.from_numpy
y = torch.from_numpy(x)
y
x.dtype, y.dtype
# to convert pytorch tensor to numpy array --> .numpy()
z = y.numpy()
z
```
| github_jupyter |
# 🛠 03 Computer vision & convolutional neural networks in TensorFlow Exercises
3. Take 10 photos of two different things and build your own CNN image classifier using the techniques we've built here.
4. Find an ideal learning rate for a simple convolutional neural network model on the 10-class dataset.
## 3. Take 10 photos of two different things and build your own CNN image classifier using the techniques we've built here.
```
!rm -rf *
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/pizza_steak.zip
import zipfile
zip_ref = zipfile.ZipFile("pizza_steak.zip")
zip_ref.extractall()
zip_ref.close()
!mkdir train
!mkdir test
# Pizza
## train
!mkdir train/pizza
!mv -f pizza_steak/train/pizza/2965.jpg train/pizza
!mv -f pizza_steak/train/pizza/5764.jpg train/pizza
!mv -f pizza_steak/train/pizza/8917.jpg train/pizza
!mv -f pizza_steak/train/pizza/12301.jpg train/pizza
!mv -f pizza_steak/train/pizza/12718.jpg train/pizza
!mv -f pizza_steak/train/pizza/13983.jpg train/pizza
!mv -f pizza_steak/train/pizza/23199.jpg train/pizza
!mv -f pizza_steak/train/pizza/27963.jpg train/pizza
!mv -f pizza_steak/train/pizza/29417.jpg train/pizza
!mv -f pizza_steak/train/pizza/32004.jpg train/pizza
## test
!mkdir test/pizza
!mv -f pizza_steak/test/pizza/11297.jpg test/pizza
!mv -f pizza_steak/test/pizza/22489.jpg test/pizza
!mv -f pizza_steak/test/pizza/40449.jpg test/pizza
!mv -f pizza_steak/test/pizza/44810.jpg test/pizza
!mv -f pizza_steak/test/pizza/53217.jpg test/pizza
# Steak
## train
!mkdir train/steak
!mv -f pizza_steak/train/steak/3136.jpg train/steak
!mv -f pizza_steak/train/steak/4176.jpg train/steak
!mv -f pizza_steak/train/steak/6709.jpg train/steak
!mv -f pizza_steak/train/steak/6926.jpg train/steak
!mv -f pizza_steak/train/steak/9555.jpg train/steak
!mv -f pizza_steak/train/steak/10380.jpg train/steak
!mv -f pizza_steak/train/steak/15580.jpg train/steak
!mv -f pizza_steak/train/steak/22080.jpg train/steak
!mv -f pizza_steak/train/steak/31881.jpg train/steak
!mv -f pizza_steak/train/steak/32693.jpg train/steak
## test
!mkdir test/steak
!mv -f pizza_steak/test/steak/4889.jpg test/steak
!mv -f pizza_steak/test/steak/6261.jpg test/steak
!mv -f pizza_steak/test/steak/7056.jpg test/steak
!mv -f pizza_steak/test/steak/13023.jpg test/steak
!mv -f pizza_steak/test/steak/13719.jpg test/steak
# remove extra data
!rm -r __MACOSX
!rm -r pizza_steak
!rm -r pizza_steak.zip
# Class names
import pathlib
import numpy as np
data_dir = pathlib.Path("train")
class_names = np.array(sorted([item.name for item in data_dir.glob("*")]))
print(class_names)
# build CNN classifier
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# preprocess data
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# Set dir path
train_dir = "train"
test_dir = "test"
# Import data from directories
train_data = train_datagen.flow_from_directory(directory=train_dir,
batch_size=32,
target_size=(224,224),
class_mode="binary")
test_data = test_datagen.flow_from_directory(directory=test_dir,
batch_size=32,
target_size=(224,224),
class_mode="binary")
# Create a model
model_1 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(10,3,activation="relu",input_shape=(224,224,3)),
tf.keras.layers.Conv2D(10,3,activation="relu"),
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(10,3,activation="relu"),
tf.keras.layers.Conv2D(10,3,activation="relu"),
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1,activation="sigmoid")
])
# Compile model
model_1.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit model
history_1 = model_1.fit(train_data,
epochs=5,
steps_per_epoch=len(train_data),
validation_data=test_data,
validation_steps=len(test_data))
model_1.evaluate(test_data)
```
## 4. Find an ideal learning rate for a simple convolutional neural network model on the 10-class dataset.
```
# Create ImageDataGenerator training instance with data augmentation
train_datagen_augmented = ImageDataGenerator(rescale=1/255.,
rotation_range=0.2,
zoom_range=0.2,
width_shift_range=0.2,
height_shift_range=0.3,
horizontal_flip=True)
# Import data and augment it from training directory
train_data_augmented = train_datagen_augmented.flow_from_directory(train_dir,
target_size=(224,224),
batch_size=32,
class_mode="binary")
# Create a model
model_2 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(10,3,activation="relu",input_shape=(224,224,3)),
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Conv2D(10,3,activation="relu"),
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1,activation="sigmoid")
])
# Compile model
model_2.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit model
history_2 = model_2.fit(train_data_augmented,
epochs=100,
steps_per_epoch=len(train_data_augmented),
validation_data=test_data,
validation_steps=len(test_data))
model_2.evaluate(test_data)
```
| github_jupyter |
# MMS in pyRFU
Louis RICHARD (louis.richard@irfu.se)
## Getting Started
To get up and running with Python, virtual environments and pyRFU, see: \
https://pyrfu.readthedocs.io/en/latest/getting_started.html#installation
Python 3.8 or later is required; we recommend installing Anaconda to get everything up and running.
### Virtual environments
It's best to set up and use virtual environments when using Python; these allow you to avoid common dependency problems when you install multiple packages\
`python -m venv pyspedas-tutorial`\
Then, to run the virtual environment, on Mac and Linux :\
`source pyspedas-tutorial/bin/activate`\
To exit the current virtual environment, type `deactivate`
### Install pyRFU
`pip install pyrfu`
### Upgrade pyRFU
`pip install pyrfu --upgrade`
### Local data directory
We use environment variables to set the local data directories:\
`data_path` (the root data directory for all missions in pyRFU); e.g., if you set `data_path="/Volumes/mms"`, your data will be stored in `/Volumes/mms`.
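One way to set this from Python before importing pyrfu is through the process environment. This is a hypothetical sketch — the exact variable name and mechanism should be confirmed against the pyRFU documentation:

```python
import os

# Hypothetical example: point the pyRFU root data directory at an external drive.
# The variable name follows the text above; confirm it against the pyRFU docs.
os.environ["data_path"] = "/Volumes/mms"

print(os.environ["data_path"])
```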
The load routines supported include:
- Fluxgate Magnetometer (FGM)
- Search-coil Magnetometer (SCM)
- Electric field Double Probe (EDP)
- Fast Plasma Investigation (FPI)
- Hot Plasma Composition Analyzer (HPCA)
- Energetic Ion Spectrometer (EIS)
- Fly's Eye Energetic Particle Sensor (FEEPS)
- Ephemeris and Coordinates (MEC)
## Import MMS routines
```
from pyrfu import mms
```
## Define time interval
```
tint = ["2019-09-14T07:54:00.000", "2019-09-14T08:11:00.000"]
```
## Load data
Keywords for accessing data are listed in the help of `mms.get_data`:
```
help(mms.get_data)
```
### Load magnetic field from (FGM)
```
b_xyz = mms.get_data("b_gse_fgm_srvy_l2", tint, 1)
```
### Load ions and electrons bulk velocity, number density and DEF (FPI)
```
n_i, n_e = [mms.get_data(f"n{s}_fpi_fast_l2", tint, 1) for s in ["i", "e"]]
# n_i, n_e = [mms.get_data("n{}_fpi_fast_l2".format(s), tint, 1) for s in ["i", "e"]]
v_xyz_i, v_xyz_e = [mms.get_data(f"v{s}_gse_fpi_fast_l2", tint, 1) for s in ["i", "e"]]
def_omni_i, def_omni_e = [mms.get_data(f"def{s}_fpi_fast_l2", tint, 1) for s in ["i", "e"]]
```
### Load electric field (EDP)
```
e_xyz = mms.get_data("e_gse_edp_fast_l2", tint, 1)
```
## Plot overview
```
import matplotlib.pyplot as plt
from pyrfu.plot import plot_line, plot_spectr
%matplotlib notebook
legend_options = dict(frameon=True, loc="upper right")
fig, axs = plt.subplots(7, sharex="all", figsize=(8, 11))
fig.subplots_adjust(bottom=.05, top=.95, left=.11, right=.89, hspace=0)
# magnetic field
plot_line(axs[0], b_xyz)
axs[0].legend(["$B_x$", "$B_y$", "$B_z$"], ncol=3, **legend_options)
axs[0].set_ylabel("$B$ [nT]")
# electric field
plot_line(axs[1], e_xyz)
axs[1].legend(["$E_x$", "$E_y$", "$E_z$"], ncol=3, **legend_options)
axs[1].set_ylabel("$E$ [mV.m$^{-1}$]")
# number density
plot_line(axs[2], n_i, "tab:red")
plot_line(axs[2], n_e, "tab:blue")
axs[2].legend(["$Ions$", "$Electrons$"], ncol=2, **legend_options)
axs[2].set_ylabel("$n$ [cm$^{-3}$]")
# Ion bulk velocity
plot_line(axs[3], v_xyz_i)
axs[3].legend(["$V_{i,x}$", "$V_{i,y}$", "$V_{i,z}$"], ncol=3, **legend_options)
axs[3].set_ylabel("$V_i$ [km.s$^{-1}$]")
# Ion DEF
axs[4], caxs4 = plot_spectr(axs[4], def_omni_i, yscale="log", cscale="log", cmap="Spectral_r")
axs[4].set_ylabel("$E_i$ [eV]")
caxs4.set_ylabel("DEF" + "\n" + "[kev/(cm$^2$ s sr keV)]")
# Electron bulk velocity
plot_line(axs[5], v_xyz_e)
axs[5].legend(["$V_{e,x}$", "$V_{e,y}$", "$V_{e,z}$"], ncol=3, **legend_options)
axs[5].set_ylabel("$V_e$ [km.s$^{-1}$]")
# Electron DEF
axs[6], caxs6 = plot_spectr(axs[6], def_omni_e, yscale="log", cscale="log", cmap="Spectral_r")
axs[6].set_ylabel("$E_e$ [eV]")
caxs6.set_ylabel("DEF" + "\n" + "[kev/(cm$^2$ s sr keV)]")
```
## Load data for all spacecraft
### Spacecraft position (MEC)
```
r_mms = [mms.get_data("R_gse", tint, i) for i in range(1, 5)]
```
### Magnetic field (FGM)
```
b_mms = [mms.get_data("b_gse_fgm_srvy_l2", tint, i) for i in range(1, 5)]
```
### Plot
```
f, axs = plt.subplots(3, sharex="all", figsize=(6.5, 5))
f.subplots_adjust(hspace=0)
labels = ["MMS{:d}".format(i + 1) for i in range(4)]
legend_options = dict(ncol=4, frameon=True, loc="upper right")
for ax, j, c in zip(axs, [0, 1, 2], ["x", "y", "z"]):
for i, b in enumerate(b_mms):
plot_line(ax, b[:, j])
ax.legend(labels, **legend_options)
ax.set_ylabel("$B_{}$ [nT]".format(c))
```
| github_jupyter |
```
# # !mkdir out
!gsutil cp gs://mesolitica-general/albert-base-actual/model.ckpt-400000.data-00000-of-00001 out
!gsutil cp gs://mesolitica-general/albert-base-actual/model.ckpt-400000.index out
!gsutil cp gs://mesolitica-general/albert-base-actual/model.ckpt-400000.meta out
!mkdir albert-base-2020-04-10
!cp sp10m.cased.v10.* albert-base-2020-04-10
!cp BASE_config.json albert-base-2020-04-10/config.json
!cp out/model.ckpt-400000* albert-base-2020-04-10
!tar cvzf albert-base-2020-04-10.tar.gz albert-base-2020-04-10
import modeling
import optimization
import tokenization
import tensorflow as tf
import numpy as np
# !pip3 install sentencepiece
tokenizer = tokenization.FullTokenizer(
vocab_file='sp10m.cased.v10.vocab', do_lower_case=False,
spm_model_file='sp10m.cased.v10.model')
tokenizer.tokenize('Husein comel')
albert_config = modeling.AlbertConfig.from_json_file('BASE_config.json')
albert_config
def gather_indexes(sequence_tensor, positions):
"""Gathers the vectors at the specific positions over a minibatch."""
sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3)
batch_size = sequence_shape[0]
seq_length = sequence_shape[1]
width = sequence_shape[2]
flat_offsets = tf.reshape(
tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])
flat_positions = tf.reshape(positions + flat_offsets, [-1])
flat_sequence_tensor = tf.reshape(sequence_tensor,
[batch_size * seq_length, width])
output_tensor = tf.gather(flat_sequence_tensor, flat_positions)
return output_tensor
class Model:
def __init__(
self,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
model = modeling.AlbertModel(
config=albert_config,
is_training=False,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
input_tensor = model.get_sequence_output()
output_weights = model.get_embedding_table()
with tf.variable_scope("cls/predictions"):
with tf.variable_scope("transform"):
input_tensor = tf.layers.dense(
input_tensor,
units=albert_config.embedding_size,
activation=modeling.get_activation(albert_config.hidden_act),
kernel_initializer=modeling.create_initializer(
albert_config.initializer_range))
input_tensor = modeling.layer_norm(input_tensor)
output_bias = tf.get_variable(
"output_bias",
shape=[albert_config.vocab_size],
initializer=tf.zeros_initializer())
logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model()
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
cls = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'cls')
saver = tf.train.Saver(var_list = var_lists + cls)
saver.restore(sess, 'out/model.ckpt-400000')
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'albert-base/model.ckpt')
# !cp sp10m.cased.v10.* albert-base
# !cp BASE_config.json albert-base/config.json
# !tar cvzf albert-base.tar.gz albert-base
import os
out = 'albert-base-bahasa-cased'
os.makedirs(out, exist_ok=True)
from transformers import AlbertTokenizer, AlbertModel, AlbertConfig, AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AlbertTokenizer('sp10m.cased.v10.model', do_lower_case = False)
tokenizer.save_pretrained('albert-base-bahasa-cased')
import torch
import logging
from transformers import AlbertConfig, AlbertForMaskedLM, load_tf_weights_in_albert
logging.basicConfig(level=logging.INFO)
def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, albert_config_file, pytorch_dump_path):
# Initialise PyTorch model
config = AlbertConfig.from_json_file(albert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = AlbertForMaskedLM(config)
# Load weights from tf checkpoint
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
# Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
torch.save(model.state_dict(), pytorch_dump_path)
convert_tf_checkpoint_to_pytorch('albert-base/model.ckpt',
'BASE_config.json',
'albert-base-bahasa-cased/pytorch_model.bin')
tokenizer = AlbertTokenizer.from_pretrained('./albert-base-bahasa-cased', do_lower_case = False)
config = AlbertConfig('BASE_config.json')
config.vocab_size = 32000
config.intermediate_size = 3072
config.hidden_size = 768
config.num_attention_heads = 12
config.num_hidden_groups = 1
model = AutoModelWithLMHead.from_pretrained('./albert-base-bahasa-cased/pytorch_model.bin', config = config)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('tolonglah gov buat something, kami dah [MASK]')
model.save_pretrained('albert-base-bahasa-cased')
# !transformers-cli upload ./albert-base-bahasa-cased
model = AutoModelWithLMHead.from_pretrained('huseinzol05/albert-base-bahasa-cased', config = config)
tokenizer = AlbertTokenizer.from_pretrained('huseinzol05/albert-base-bahasa-cased', do_lower_case = False)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('makan ayam dengan [MASK]')
```
| github_jupyter |
# Setup
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from mlwpy_video_extras import (regression_errors,
regression_residuals)
import collections as co
import itertools as it
from sklearn import (datasets,
dummy,
linear_model,
metrics,
model_selection as skms,
neighbors,
pipeline,
preprocessing as skpre)
import warnings
warnings.filterwarnings("ignore")
np.random.seed(42)
```
# Baseline Regressors
```
diabetes = datasets.load_diabetes()
tts = skms.train_test_split(diabetes.data, diabetes.target,
test_size=.25, random_state=42)
(diabetes_train_ftrs, diabetes_test_ftrs,
diabetes_train_tgt, diabetes_test_tgt) = tts
baseline = dummy.DummyRegressor(strategy='mean')
baseline = dummy.DummyRegressor(strategy='median')
strategies = ['constant', 'quantile', 'mean', 'median', ]
baseline_args = [{"strategy":s} for s in strategies]
# additional args for constant and quantile
baseline_args[0]['constant'] = 50.0
baseline_args[1]['quantile'] = 0.75
# helper to unpack arguments for each DummyRegressor and
# do a fit-predict-eval sequence
def do_one_diabetes(**args):
baseline = dummy.DummyRegressor(**args)
baseline.fit(diabetes_train_ftrs, diabetes_train_tgt)
base_preds = baseline.predict(diabetes_test_ftrs)
return metrics.mean_squared_error(base_preds, diabetes_test_tgt)
# gather all results via a list comprehension
mses = [do_one_diabetes(**bla) for bla in baseline_args]
display(pd.DataFrame({'mse':mses,
'rmse':np.sqrt(mses)},
index=strategies))
```
# Regression Metrics
##### Custom Metrics and RMSE
```
# we could define mean_squared_error as:
# sse = np.sum((actual - predicted)**2) [sum-of-squared errors]
# mse = sse / len(actual)
def rms_error(actual, predicted):
' root-mean-squared-error function '
# lesser values are better (a<b ... a is better)
mse = metrics.mean_squared_error(actual, predicted)
return np.sqrt(mse)
def neg_rmse_score(actual, predicted):
' rmse based score function '
# greater values are better (a<b ... b better)
return -rms_error(actual, predicted)
# routines like cross_val_score need a "scorer"
def neg_rmse_scorer(model, ftrs, tgt_actual):
' rmse scorer suitable for scoring arg '
tgt_pred = model.predict(ftrs)
return neg_rmse_score(tgt_actual, tgt_pred)
knn = neighbors.KNeighborsRegressor(n_neighbors=3)
skms.cross_val_score(knn, diabetes.data, diabetes.target,
scoring=neg_rmse_scorer)
metrics.SCORERS['neg_mean_squared_error']
knn = neighbors.KNeighborsRegressor(n_neighbors=3)
nrmse = skms.cross_val_score(knn, diabetes.data, diabetes.target,
scoring='neg_root_mean_squared_error')
nrmse
# the primary regression metrics available
[k for k in metrics.SCORERS.keys() if k.endswith('error')]
```
### Understanding the Default Regression Metric $R^2$
```
lr = linear_model.LinearRegression()
# help(lr.score) #for full output
print(lr.score.__doc__.splitlines()[0])
```
$$R^2 = 1 - \frac{{SSE}_{our\ predictions}}{{SSE}_{mean\ as\ prediction}}$$
```
# where does the mean come from!?!
# calculate the mean on the training set and evaluate on the test set
# calculate the mean on the **test** set and evaluate on the test set
errors_mean_train_on_train = np.mean(diabetes_train_tgt) - diabetes_train_tgt
errors_mean_train_on_test = np.mean(diabetes_train_tgt) - diabetes_test_tgt
errors_mean_test_on_test = np.mean(diabetes_test_tgt) - diabetes_test_tgt
# calculate sum-of-squared-errors two ways:
via_manual = (errors_mean_train_on_train**2).sum()
via_npdot = np.dot(errors_mean_train_on_train,
errors_mean_train_on_train)
np.allclose(via_manual, via_npdot)
def sse(errors):
    return np.sum(errors**2)  # equivalently: np.dot(errors, errors)
sse_mean_train_on_train = sse(errors_mean_train_on_train)
sse_mean_train_on_test = sse(errors_mean_train_on_test)
sse_mean_test_on_test = sse(errors_mean_test_on_test)
print("mean train on train:", sse_mean_train_on_train)
print("mean train on test:", sse_mean_train_on_test)
print("mean test on test: ", sse_mean_test_on_test)
# now, imagine we have a simple linear regression model:
lr = linear_model.LinearRegression()
lr_preds = (lr.fit(diabetes_train_ftrs, diabetes_train_tgt)
.predict(diabetes_test_ftrs))
lr_r2 = metrics.r2_score(diabetes_test_tgt, lr_preds)
lr_r2
# compare
# the sse of linear_regression trained on train, evaluated on test
sse_lr = sse(lr_preds-diabetes_test_tgt)
# the sse of baseline-mean trained on *test*, evaluated on test
sse_mean_test_on_test = sse(errors_mean_test_on_test)
1 - (sse_lr/sse_mean_test_on_test)
# we can demonstrate that with builtins:
base_model = dummy.DummyRegressor(strategy='mean')
base_model.fit(diabetes_test_ftrs, diabetes_test_tgt) # WARNING! this is the weird step!
base_model_test_preds = base_model.predict(diabetes_test_ftrs)
# you might notice we use MSE instead of SSE:
# it's ok, because we'll do it in two places and a factor of (1/n) will simply cancel out
base_model_mse = metrics.mean_squared_error(diabetes_test_tgt,
base_model_test_preds)
print(base_model_mse)
models = {'knn': neighbors.KNeighborsRegressor(n_neighbors=3),
'lr' : linear_model.LinearRegression()}
results = co.defaultdict(dict)
for name in models:
m = models[name]
preds = (m.fit(diabetes_train_ftrs, diabetes_train_tgt)
.predict(diabetes_test_ftrs))
results[name]['r2'] = metrics.r2_score(diabetes_test_tgt, preds)
results[name]['mse'] = metrics.mean_squared_error(diabetes_test_tgt, preds)
df = pd.DataFrame(results).T
df['r2 via mse'] = 1 - (df['mse'] / base_model_mse)
display(df)
```
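The claim that the 1/n factor cancels can be checked with a tiny worked example in plain Python (the numbers here are made up for illustration):

```python
actual   = [3.0, 5.0, 7.0]
model    = [4.0, 4.0, 6.0]   # some model's predictions (made-up numbers)
baseline = [5.0, 5.0, 5.0]   # mean of actual, used as the baseline prediction

def sse(pred, act):
    # sum-of-squared errors
    return sum((p - a) ** 2 for p, a in zip(pred, act))

n = len(actual)
r2_via_sse = 1 - sse(model, actual) / sse(baseline, actual)
r2_via_mse = 1 - (sse(model, actual) / n) / (sse(baseline, actual) / n)

print(round(r2_via_sse, 12), round(r2_via_mse, 12))  # the 1/n factors cancel
```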
# Errors and Residuals
```
ape_df = pd.DataFrame({'predicted' : [4, 2, 9],
'actual' : [3, 5, 7]})
ape_df['error'] = ape_df['predicted'] - ape_df['actual']
ape_df.index.name = 'example'
display(ape_df)
regression_errors((6,3), ape_df.predicted, ape_df.actual)
lr = linear_model.LinearRegression()
preds = (lr.fit(diabetes_train_ftrs, diabetes_train_tgt)
.predict(diabetes_test_ftrs))
regression_errors((8,4), preds, diabetes_test_tgt, errors=[-20])
ape_df = pd.DataFrame({'predicted' : [4, 2, 9],
'actual' : [3, 5, 7]})
ape_df['error'] = ape_df['predicted'] - ape_df['actual']
ape_df['resid'] = ape_df['actual'] - ape_df['predicted']
ape_df.index.name = 'example'
display(ape_df)
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(8,4))
ax1.plot(ape_df.predicted, ape_df.actual, 'r.', # pred v actual
[0,10], [0,10], 'b-') # perfect line
ax1.set_xlabel('Predicted')
ax1.set_ylabel('Actual')
regression_residuals(ax2, ape_df.predicted, ape_df.actual,
'all', right=True)
lr = linear_model.LinearRegression()
knn = neighbors.KNeighborsRegressor()
models = [lr, knn]
fig, axes = plt.subplots(1, 2, figsize=(10,5),
sharex=True, sharey=True)
fig.tight_layout()
for model, ax, on_right in zip(models, axes, [False, True]):
preds = (model.fit(diabetes_train_ftrs, diabetes_train_tgt)
.predict(diabetes_test_ftrs))
regression_residuals(ax, preds, diabetes_test_tgt, [-20], on_right)
axes[0].set_title('Linear Regression Residuals')
axes[1].set_title('kNN-Regressor Residuals');
print(diabetes_test_tgt[-20])
```
# A Quick Pipeline and Standardization for Linear Regression
```
# 1-D standardization
# place evenly spaced values in a dataframe
xs = np.linspace(-5, 10, 20)
df = pd.DataFrame(xs, columns=['x'])
# center ( - mean) and scale (/ std)
df['std-ized'] = (df.x - df.x.mean()) / df.x.std()
# show original and new data; compute statistics
fig, ax = plt.subplots(1,1,figsize=(3,3))
sns.stripplot(data=df)
display(df.describe().loc[['mean', 'std']])
# 2 1-D standardizations
xs = np.linspace(-5, 10, 20)
ys = 3*xs + 2 + np.random.uniform(20, 40, 20)
print("First Row Values")
df = pd.DataFrame({'x':xs, 'y':ys})
display(df.head())
print("Standardized")
df_std_ized = (df - df.mean()) / df.std()
display(df_std_ized.describe().loc[['mean', 'std']])
fig, ax = plt.subplots(2,2, figsize=(5,5))
ax[0,0].plot(df.x, df.y, '.')
ax[0,1].plot(df_std_ized.x, df_std_ized.y, '.')
ax[0,0].set_ylabel('"Natural" Scale')
ax[1,0].plot(df.x, df.y, '.')
ax[1,1].plot(df_std_ized.x, df_std_ized.y, '.')
ax[1,0].axis([-10, 50, -10, 50])
ax[1,1].axis([-10, 50, -10, 50])
ax[1,0].set_ylabel('Fixed/Shared Scale')
ax[1,0].set_xlabel('Original Data')
ax[1,1].set_xlabel('Standardized Data');
train_xs, test_xs = skms.train_test_split(xs.reshape(-1,1), test_size=.5)
scaler = skpre.StandardScaler()
scaler.fit(train_xs).transform(test_xs)
(train_xs, test_xs,
train_ys, test_ys)= skms.train_test_split(xs.reshape(-1,1),
ys.reshape(-1,1),
test_size=.5)
scaler = skpre.StandardScaler()
lr = linear_model.LinearRegression()
std_lr_pipe = pipeline.make_pipeline(scaler, lr)
std_lr_pipe.fit(train_xs, train_ys).predict(test_xs)
```
# Case Study: A Regression Comparison
```
boston = datasets.load_boston()
boston_df = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_df['tgt'] = boston.target
boston_df.head()
boston_ftrs = boston.data
boston_tgt = boston.target
models = {'base' : dummy.DummyRegressor(strategy='mean'),
'lr' : linear_model.LinearRegression(),
'knn_3' : neighbors.KNeighborsRegressor(n_neighbors=3),
'knn_10': neighbors.KNeighborsRegressor(n_neighbors=10)}
# join these into standardization pipelines
make_p = pipeline.make_pipeline
scaler = skpre.StandardScaler()
pipes = {m:make_p(scaler, models[m]) for m in models}
scorers = {'neg. mae' :metrics.SCORERS['neg_median_absolute_error'],
'neg. rmse':metrics.SCORERS['neg_root_mean_squared_error']}
fig, axes = plt.subplots(2, 1, figsize=(6,4))
fig.tight_layout()
for name in pipes:
p = pipes[name]
cv_results = skms.cross_validate(p, boston_ftrs, boston_tgt,
scoring = scorers, cv=10)
for ax, msr in zip(axes, scorers):
msr_results = cv_results["test_" + msr]
my_lbl = "{:12s} {:.3f} {:.2f}".format(name,
msr_results.mean(),
msr_results.std())
ax.plot(msr_results, 'o--', label=my_lbl)
ax.set_title(msr)
ax.legend()
fig,ax = plt.subplots(1,1,figsize=(6,3))
baseline_mse = skms.cross_val_score(models['base'],
boston_ftrs, boston_tgt,
scoring = scorers['neg. rmse'], cv=10)
for name in pipes:
p = pipes[name]
cv_results = skms.cross_val_score(p, boston_ftrs, boston_tgt,
scoring = scorers['neg. rmse'], cv=10)
my_lbl = "{:12s} {:.3f} {:.2f}".format(name,
cv_results.mean(),
cv_results.std())
ax.plot(cv_results / baseline_mse, 'o--', label=my_lbl)
ax.set_title("RMSE(model) / RMSE(baseline)\n$<1$ is better than baseline")
ax.legend();
# this time, just metrics (not scorers)
msrs = {'mad' : metrics.mean_absolute_error,
'rmse' : rms_error}
results = {}
for name in pipes:
p = pipes[name]
cv_preds = skms.cross_val_predict(p, boston_ftrs, boston_tgt,
cv=10)
for ax, msr in zip(axes, msrs):
msr_results = msrs[msr](boston_tgt, cv_preds)
results.setdefault(msr, []).append(msr_results)
df = pd.DataFrame(results, index=pipes.keys())
df
fig, axes = plt.subplots(1, 4, figsize=(10,5),
sharex=True, sharey=True)
fig.tight_layout()
for name, ax in zip(pipes, axes):
p = pipes[name]
preds = skms.cross_val_predict(p, boston_ftrs, boston_tgt,
cv=10)
regression_residuals(ax, preds, boston_tgt)
ax.set_title(name + " residuals")
pd.DataFrame(boston_tgt).describe().T
```
| github_jupyter |
# Trees
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print(cancer.DESCR)
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=0)
```
# tree visualization
```
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X_train, y_train)
# import from local file, not in sklearn yet
from tree_plotting import plot_tree
plt.figure(dpi=200)
plot_tree(tree, feature_names=cancer.feature_names, filled=True)
```
# Parameter Tuning
```
tree = DecisionTreeClassifier().fit(X_train, y_train)
plt.figure(figsize=(15, 5))
plot_tree(tree, feature_names=cancer.feature_names, filled=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
plt.figure(figsize=(15, 5))
plot_tree(tree, feature_names=cancer.feature_names)
tree = DecisionTreeClassifier(max_leaf_nodes=8).fit(X_train, y_train)
plot_tree(tree, feature_names=cancer.feature_names, filled=True)
tree = DecisionTreeClassifier(min_samples_split=50).fit(X_train, y_train)
plot_tree(tree, feature_names=cancer.feature_names, filled=True)
tree = DecisionTreeClassifier(min_impurity_decrease=.01).fit(X_train, y_train)
plot_tree(tree, feature_names=cancer.feature_names, filled=True)
from sklearn.model_selection import GridSearchCV
param_grid = {'max_depth':range(1, 7)}
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid, cv=10)
grid.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
param_grid = {'max_depth':range(1, 7)}
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid,
cv=StratifiedShuffleSplit(100), return_train_score=True)
grid.fit(X_train, y_train)
scores = pd.DataFrame(grid.cv_results_)
scores.plot(x='param_max_depth', y=['mean_train_score', 'mean_test_score'], ax=plt.gca())
plt.legend(loc=(1, 0))
from sklearn.model_selection import GridSearchCV
param_grid = {'max_leaf_nodes': range(2, 20)}
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid=param_grid,
cv=StratifiedShuffleSplit(100, random_state=1),
return_train_score=True)
grid.fit(X_train, y_train)
scores = pd.DataFrame(grid.cv_results_)
scores.plot(x='param_max_leaf_nodes', y=['mean_train_score', 'mean_test_score'], ax=plt.gca())
plt.legend(loc=(1, 0))
scores = pd.DataFrame(grid.cv_results_)
scores.plot(x='param_max_leaf_nodes', y='mean_train_score', yerr='std_train_score', ax=plt.gca())
scores.plot(x='param_max_leaf_nodes', y='mean_test_score', yerr='std_test_score', ax=plt.gca())
grid.best_params_
plot_tree(grid.best_estimator_, feature_names=cancer.feature_names, filled=True)
pd.Series(grid.best_estimator_.feature_importances_,
index=cancer.feature_names).plot(kind="barh")
```
# Exercise
Apply a decision tree to the "adult" dataset and visualize it.
Tune parameters with grid-search; try at least max_leaf_nodes and max_depth, but separately.
Visualize the resulting tree and its feature importances.
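A minimal sketch of the tuning loop, using a synthetic stand-in for the adult dataset (swap in the real features and labels once loaded):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in; replace with the adult dataset's X, y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Tune max_depth and max_leaf_nodes separately, as the exercise asks.
results = {}
for param, grid_values in [('max_depth', range(1, 7)),
                           ('max_leaf_nodes', range(2, 20))]:
    grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                        param_grid={param: grid_values}, cv=5)
    grid.fit(X_train, y_train)
    results[param] = grid.best_params_[param]

print(results)
```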
| github_jupyter |
# 3. Analysing words
This notebook will introduce you to the basics of analysing words.
You'll learn how to preprocess and represent words.
Legend of symbols:
- 🤓: Tips
- 🤖📝: Your turn
- ❓: Question
- 💫: Extra exercise
## 3.1. Word vectorization
In this section, we'll learn how to transform words into vectors. Let's start with one-hot encodings.
### 3.1.1. One-hot encoding
The library **<tt>sklearn</tt>** has a function that transforms categorical features to one-hot vectors:
🌍 https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
🌍 https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
First, we will import the functions we need.
```
from numpy import array
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
sent1 = "Can I eat the Pizza".lower()
sent2 = "You can eat the Pizza".lower()
sent1
print(sent2)
doc1 = sent1.split()
doc2 = sent2.split()
type(sent1)
doc1
type(doc1)
doc1_array = array(doc1)
doc2_array = array(doc2)
doc1_array
doc3 = doc1+doc2
doc3
type(doc3)
data = list(doc3)
values = array(data)
print(values)
```
❓ What does this code do?
This code transforms string sentences into a list and an array of words that we can manipulate later.
After that, we will transform words into numbers based on their position. To do so, we will use the **<tt>LabelEncoder()</tt>**.
```
# integer encode
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(values)
print(integer_encoded)
```
Viewing the integer-encoded variable as a matrix, we could say it contains 1 row and 10 columns (1 x 10).
```
len(integer_encoded)
type(integer_encoded)
# binary encode
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
```
Now, using the reshape method, we will reshape the integer-encoded array into a matrix of 10 rows and 1 column (10 x 1).
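The effect of reshaping a flat array into a column vector can be seen on a tiny NumPy example:

```python
import numpy as np

flat = np.array([0, 3, 1, 2])        # shape (4,)  - a flat row of labels
column = flat.reshape(len(flat), 1)  # shape (4, 1) - one label per row

print(flat.shape, column.shape)
```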
```
integer_encoded
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print(onehot_encoded)
# This shows the order of words in the matrix
list(label_encoder.classes_)
```
### 🤖📝 **Your turn**
Load the news dataset and calculate the one-hot encoding of the first news article.
```
import pandas as pd
# Import data
df = pd.read_csv('../data/news.csv')
df.head()
first_new = df.iloc[0,2]
first_new
first_new_low = first_new.lower()
first_new_low
first_new_low_list = first_new_low.split()
first_new_low_list
first_new_low_list_array = array(first_new_low_list)
first_new_low_list_array
# integer encode
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(first_new_low_list_array)
print(integer_encoded)
type(integer_encoded)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
integer_encoded
onehot_encoder = OneHotEncoder(sparse=False)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print(onehot_encoded)
list(label_encoder.classes_)
```
❓ What is the number of unique words in that news article?
```
len(label_encoder.classes_)
```
🤓 In that matrix (onehot_encoded), the number of columns represents the total number of **unique** words within the text, while the number of rows represents the total number of words (duplicated or not) within the text.
❓ What is the one-hot expression of *adapted*?
```
onehot_encoded[:,1]
```
🤓 The one-hot encoding representation of adapted corresponds to the second column (position 1) of the one-hot encoding matrix.
```
# One hot encoding of 'arctic'
onehot_encoded[:,13]
```
🤓 Finally, if we sum a column of the one-hot encoding matrix, we obtain the frequency of that word.
```
sum(onehot_encoded[:,1])
```
### 3.1.2. Word embeddings (word2vec)
Training a word2vec model in Python is scarily easy with **`gensim`**.
🌍 https://radimrehurek.com/gensim/
```
from gensim.models import Word2Vec
```
Now, we will train the word2vec model, taking a look at the parameters:
🌍 https://radimrehurek.com/gensim/models/word2vec.html
First, we will prepare the data. To do so, we need to transform each sentence into a list of words and gather those lists in an outer list:
```
sent1 = "Can I eat the Pizza".lower()
sent2 = "You can eat the Pizza".lower()
doc1 = sent1.split()
doc2 = sent2.split()
doc3 = [doc1, doc2]
model = Word2Vec(doc3, size=300, window=3, min_count=1, workers=4)
print(model)
```
Now, we can analyse the vocabulary of this word2vec model
```
print(model.wv.vocab)
```
We can analyse the embeddings by:
```
model['pizza']
```
Using word2vec, we can analyse similarities across words:
```
model.most_similar(positive=['pizza',], topn=1)
model.most_similar(negative=['pizza',], topn=1)
```
And relations between words:
```
print(model.similarity('pizza', 'eat'))
```
🤓 Note that this model was trained on very little text, so the similarities it produces don't make much sense.
### 🤖📝 **Your turn**
Train a word2vec embedding with the news corpus and extract the top 10 most similar words of *ultraviolet*.
Help to prepare the input for the model:
```
# This is a loop that iterates over the dataframe
news_vec = []
for index, row in df.iterrows():
sent = row['corpus'].lower()
sent = sent.split()
news_vec.append(sent)
# Print the first element of the list:
print(news_vec[0])
```
#### 💫 Extra
- Extract the most similar word to *climate*.
- Calculate the similarity between *climate* and *weather*.
- Calculate the most similar word to *humanitarian* + *climate* - *drought*.
Does it make sense?
## 3.2. Word preprocessing
Let's replicate the examples we have seen previously in the lecture.
### 3.2.1. Tokenization
The process of separating symbols by introducing extra white space is called **tokenization**.
```
! pip install spacy
import spacy
!python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
documents = "I've been 2 times to New York in 2011, but did not have the constitution for it. It DIDN'T appeal to me. I preferred Los Angeles."
tokens = [[token.text for token in sentence] for sentence in nlp(documents).sents]
tokens
```
❓ What does this code do?
```
print(tokens)
```
### 3.2.2. Lemmatization
The process of reducing words to their dictionary base form (lemma) is called **lemmatization**.
```
lemmas = [[token.lemma_ for token in sentence] for sentence in nlp(documents).sents]
print(lemmas)
```
### 3.2.3. Stemming
The process of reducing words to their stems is called **stemming**.
This process is more radical than lemmatization.
```
!pip install nltk
from nltk import SnowballStemmer
stemmer = SnowballStemmer('english')
stems = [[stemmer.stem(token) for token in sentence] for sentence in tokens]
print(stems)
```
### 3.2.4. Part of speech
**Part of speech** tagging is the process of classifying words into their grammatical categories: nouns, verbs, adjectives, etc.
```
pos = [[token.pos_ for token in sentence] for sentence in nlp(documents).sents]
print(pos)
```
### 3.2.5. Stop words
**Stop word removal** is the process of removing words that are unlikely to be useful for the analysis, like determiners.
```
content = [[token.text for token in sentence if token.pos_ in {'NOUN', 'VERB', 'PROPN', 'ADJ', 'ADV'} and not token.is_stop]
for sentence in nlp(documents).sents]
print(content)
```
Another alternative using **`nltk`** is:
```
import nltk
nltk.download('stopwords')  # download the stop word list if not already present
from nltk.corpus import stopwords
print(stopwords.words('english'))
# Implement sentiment analysis into tokens
tokens = [[token.sentiment for token in sentence] for sentence in nlp(documents).sents]
print(tokens)
```
### 3.2.6. Parsing
**Parsing** is the process of classifying words in a sentence based on their syntactic relations.
```
tokens = [[(c.text, c.head.text, c.dep_) for c in nlp(sentence.text)] for sentence in nlp(documents).sents]
print(tokens)
```
### 3.2.7. Named Entity Recognition (NER)
**Named Entity Recognition** is the process of classifying words in a sentence based on their entity category (PERSON, FACILITY, ORGANIZATION, GEOPOLITICAL ENTITY, etc.).
```
entities = [[(entity.text, entity.label_) for entity in nlp(sentence.text).ents] for sentence in nlp(documents).sents]
print(entities)
```
### 🤖📝 **Your turn**
Apply the 7 different methods to preprocess words in the first row of the news dataset.
### Resources
📕 Hovy, D. (2020). Text Analysis in Python for Social Scientists: Discovery and Exploration. Cambridge University Press.
🌍 https://medium.com/zero-equals-false/one-hot-encoding-129ccc293cda
🌍 https://markroxor.github.io/gensim/static/notebooks/word2vec.html
```
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error as mse
from scipy.stats import entropy
import warnings
from causalml.inference.meta import LRSRegressor
from causalml.inference.meta import XGBTRegressor, MLPTRegressor
from causalml.inference.meta import BaseXRegressor, BaseRRegressor, BaseSRegressor, BaseTRegressor
from causalml.inference.nn import DragonNet
from causalml.match import NearestNeighborMatch, MatchOptimizer, create_table_one
from causalml.propensity import ElasticNetPropensityModel
from causalml.dataset.regression import *
from causalml.metrics import *
import os, sys
%matplotlib inline
warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
sns.set_palette('Paired')
plt.rcParams['figure.figsize'] = (12,8)
```
# IHDP semi-synthetic dataset
Hill introduced a semi-synthetic dataset constructed from the Infant Health and Development Program (IHDP). This dataset is based on a randomized experiment investigating the effect of home visits by specialists on future cognitive scores. The data has 747 observations (rows). The IHDP simulation is considered the de-facto standard benchmark for neural-network treatment effect estimation methods.
The original [paper](https://arxiv.org/pdf/1906.02120.pdf) uses 1000 realizations from the NPCI package, but for illustration purposes, we use 1 dataset (realization) as an example below.
```
df = pd.read_csv(f'data/ihdp_npci_3.csv', header=None)
cols = ["treatment", "y_factual", "y_cfactual", "mu0", "mu1"] + [f'x{i}' for i in range(1,26)]
df.columns = cols
df.shape
df.head()
pd.Series(df['treatment']).value_counts(normalize=True)
X = df.loc[:,'x1':]
treatment = df['treatment']
y = df['y_factual']
tau = df.apply(lambda d: d['y_factual'] - d['y_cfactual'] if d['treatment']==1
else d['y_cfactual'] - d['y_factual'],
axis=1)
# p_model = LogisticRegressionCV(penalty='elasticnet', solver='saga', l1_ratios=np.linspace(0,1,5),
# cv=StratifiedKFold(n_splits=4, shuffle=True))
# p_model.fit(X, treatment)
# p = p_model.predict_proba(X)[:, 1]
p_model = ElasticNetPropensityModel()
p = p_model.fit_predict(X, treatment)
s_learner = BaseSRegressor(LGBMRegressor())
s_ate = s_learner.estimate_ate(X, treatment, y)[0]
s_ite = s_learner.fit_predict(X, treatment, y)
t_learner = BaseTRegressor(LGBMRegressor())
t_ate = t_learner.estimate_ate(X, treatment, y)[0][0]
t_ite = t_learner.fit_predict(X, treatment, y)
x_learner = BaseXRegressor(LGBMRegressor())
x_ate = x_learner.estimate_ate(X, treatment, y, p)[0][0]
x_ite = x_learner.fit_predict(X, treatment, y, p)
r_learner = BaseRRegressor(LGBMRegressor())
r_ate = r_learner.estimate_ate(X, treatment, y, p)[0][0]
r_ite = r_learner.fit_predict(X, treatment, y, p)
dragon = DragonNet(neurons_per_layer=200, targeted_reg=True)
dragon_ite = dragon.fit_predict(X, treatment, y, return_components=False)
dragon_ate = dragon_ite.mean()
df_preds = pd.DataFrame([s_ite.ravel(),
t_ite.ravel(),
x_ite.ravel(),
r_ite.ravel(),
dragon_ite.ravel(),
tau.ravel(),
treatment.ravel(),
y.ravel()],
index=['S','T','X','R','dragonnet','tau','w','y']).T
df_cumgain = get_cumgain(df_preds)
df_result = pd.DataFrame([s_ate, t_ate, x_ate, r_ate, dragon_ate, tau.mean()],
index=['S','T','X','R','dragonnet','actual'], columns=['ATE'])
df_result['MAE'] = [mean_absolute_error(t,p) for t,p in zip([s_ite, t_ite, x_ite, r_ite, dragon_ite],
[tau.values.reshape(-1,1)]*5 )
] + [None]
df_result['AUUC'] = auuc_score(df_preds)
df_result
plot_gain(df_preds)
```
# `causalml` Synthetic Data Generation Method
```
y, X, w, tau, b, e = simulate_nuisance_and_easy_treatment(n=1000)
X_train, X_val, y_train, y_val, w_train, w_val, tau_train, tau_val, b_train, b_val, e_train, e_val = \
train_test_split(X, y, w, tau, b, e, test_size=0.2, random_state=123, shuffle=True)
preds_dict_train = {}
preds_dict_valid = {}
preds_dict_train['Actuals'] = tau_train
preds_dict_valid['Actuals'] = tau_val
preds_dict_train['generated_data'] = {
'y': y_train,
'X': X_train,
'w': w_train,
'tau': tau_train,
'b': b_train,
'e': e_train}
preds_dict_valid['generated_data'] = {
'y': y_val,
'X': X_val,
'w': w_val,
'tau': tau_val,
'b': b_val,
'e': e_val}
# Predict p_hat because e would not be directly observed in real-life
p_model = ElasticNetPropensityModel()
p_hat_train = p_model.fit_predict(X_train, w_train)
p_hat_val = p_model.fit_predict(X_val, w_val)
for base_learner, label_l in zip([BaseSRegressor, BaseTRegressor, BaseXRegressor, BaseRRegressor],
['S', 'T', 'X', 'R']):
for model, label_m in zip([LinearRegression, XGBRegressor], ['LR', 'XGB']):
# RLearner will need to fit on the p_hat
if label_l != 'R':
learner = base_learner(model())
# fit the model on training data only
learner.fit(X=X_train, treatment=w_train, y=y_train)
try:
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train, p=p_hat_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val, p=p_hat_val).flatten()
except TypeError:
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train, treatment=w_train, y=y_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val, treatment=w_val, y=y_val).flatten()
else:
learner = base_learner(model())
learner.fit(X=X_train, p=p_hat_train, treatment=w_train, y=y_train)
preds_dict_train['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_train).flatten()
preds_dict_valid['{} Learner ({})'.format(
label_l, label_m)] = learner.predict(X=X_val).flatten()
learner = DragonNet(verbose=False)
learner.fit(X_train, treatment=w_train, y=y_train)
preds_dict_train['DragonNet'] = learner.predict_tau(X=X_train).flatten()
preds_dict_valid['DragonNet'] = learner.predict_tau(X=X_val).flatten()
actuals_train = preds_dict_train['Actuals']
actuals_validation = preds_dict_valid['Actuals']
synthetic_summary_train = pd.DataFrame({label: [preds.mean(), mse(preds, actuals_train)] for label, preds
in preds_dict_train.items() if 'generated' not in label.lower()},
index=['ATE', 'MSE']).T
synthetic_summary_train['Abs % Error of ATE'] = np.abs(
(synthetic_summary_train['ATE']/synthetic_summary_train.loc['Actuals', 'ATE']) - 1)
synthetic_summary_validation = pd.DataFrame({label: [preds.mean(), mse(preds, actuals_validation)]
for label, preds in preds_dict_valid.items()
if 'generated' not in label.lower()},
index=['ATE', 'MSE']).T
synthetic_summary_validation['Abs % Error of ATE'] = np.abs(
(synthetic_summary_validation['ATE']/synthetic_summary_validation.loc['Actuals', 'ATE']) - 1)
# calculate kl divergence for training
for label in synthetic_summary_train.index:
stacked_values = np.hstack((preds_dict_train[label], actuals_train))
stacked_low = np.percentile(stacked_values, 0.1)
stacked_high = np.percentile(stacked_values, 99.9)
bins = np.linspace(stacked_low, stacked_high, 100)
distr = np.histogram(preds_dict_train[label], bins=bins)[0]
distr = np.clip(distr/distr.sum(), 0.001, 0.999)
true_distr = np.histogram(actuals_train, bins=bins)[0]
true_distr = np.clip(true_distr/true_distr.sum(), 0.001, 0.999)
kl = entropy(distr, true_distr)
synthetic_summary_train.loc[label, 'KL Divergence'] = kl
# calculate kl divergence for validation
for label in synthetic_summary_validation.index:
stacked_values = np.hstack((preds_dict_valid[label], actuals_validation))
stacked_low = np.percentile(stacked_values, 0.1)
stacked_high = np.percentile(stacked_values, 99.9)
bins = np.linspace(stacked_low, stacked_high, 100)
distr = np.histogram(preds_dict_valid[label], bins=bins)[0]
distr = np.clip(distr/distr.sum(), 0.001, 0.999)
true_distr = np.histogram(actuals_validation, bins=bins)[0]
true_distr = np.clip(true_distr/true_distr.sum(), 0.001, 0.999)
kl = entropy(distr, true_distr)
synthetic_summary_validation.loc[label, 'KL Divergence'] = kl
df_preds_train = pd.DataFrame([preds_dict_train['S Learner (LR)'].ravel(),
preds_dict_train['S Learner (XGB)'].ravel(),
preds_dict_train['T Learner (LR)'].ravel(),
preds_dict_train['T Learner (XGB)'].ravel(),
preds_dict_train['X Learner (LR)'].ravel(),
preds_dict_train['X Learner (XGB)'].ravel(),
preds_dict_train['R Learner (LR)'].ravel(),
preds_dict_train['R Learner (XGB)'].ravel(),
preds_dict_train['DragonNet'].ravel(),
preds_dict_train['generated_data']['tau'].ravel(),
preds_dict_train['generated_data']['w'].ravel(),
preds_dict_train['generated_data']['y'].ravel()],
index=['S Learner (LR)','S Learner (XGB)',
'T Learner (LR)','T Learner (XGB)',
'X Learner (LR)','X Learner (XGB)',
'R Learner (LR)','R Learner (XGB)',
'DragonNet','tau','w','y']).T
synthetic_summary_train['AUUC'] = auuc_score(df_preds_train).iloc[:-1]
df_preds_validation = pd.DataFrame([preds_dict_valid['S Learner (LR)'].ravel(),
preds_dict_valid['S Learner (XGB)'].ravel(),
preds_dict_valid['T Learner (LR)'].ravel(),
preds_dict_valid['T Learner (XGB)'].ravel(),
preds_dict_valid['X Learner (LR)'].ravel(),
preds_dict_valid['X Learner (XGB)'].ravel(),
preds_dict_valid['R Learner (LR)'].ravel(),
preds_dict_valid['R Learner (XGB)'].ravel(),
preds_dict_valid['DragonNet'].ravel(),
preds_dict_valid['generated_data']['tau'].ravel(),
preds_dict_valid['generated_data']['w'].ravel(),
preds_dict_valid['generated_data']['y'].ravel()],
index=['S Learner (LR)','S Learner (XGB)',
'T Learner (LR)','T Learner (XGB)',
'X Learner (LR)','X Learner (XGB)',
'R Learner (LR)','R Learner (XGB)',
'DragonNet','tau','w','y']).T
synthetic_summary_validation['AUUC'] = auuc_score(df_preds_validation).iloc[:-1]
synthetic_summary_train
synthetic_summary_validation
plot_gain(df_preds_train)
plot_gain(df_preds_validation)
```
# Mean Normalization
In machine learning we use large amounts of data to train our models. Some machine learning algorithms may require that the data is *normalized* in order to work correctly. The idea of normalization, also known as *feature scaling*, is to ensure that all the data is on a similar scale, *i.e.* that all the data takes on a similar range of values. For example, we might have a dataset that has values between 0 and 5,000. By normalizing the data we can make the range of values be between 0 and 1.
In this lab, you will be performing a different kind of feature scaling known as *mean normalization*. Mean normalization will scale the data, but instead of making the values be between 0 and 1, it will distribute the values evenly in some small interval around zero. For example, if we have a dataset that has values between 0 and 5,000, after mean normalization the range of values will be distributed in some small range around 0, for example between -3 and 3. Because the values are distributed evenly around zero, this guarantees that the average (mean) of all elements will be zero. Therefore, when you perform *mean normalization* your data will not only be scaled but it will also have an average of zero.
# To Do:
You will start by importing NumPy and creating a rank 2 ndarray of random integers between 0 and 5,000 (inclusive) with 1000 rows and 20 columns. This array will simulate a dataset with a wide range of values. Fill in the code below
```
# import NumPy into Python
import numpy as np
# Create a 1000 x 20 ndarray with random integers in the half-open interval [0, 5001).
X = np.random.randint(0, 5001, size=(1000, 20))

# print the shape of X
print(X.shape)
```
Now that you created the array we will mean normalize it. We will perform mean normalization using the following equation:
$\mbox{Norm\_Col}_i = \frac{\mbox{Col}_i - \mu_i}{\sigma_i}$
where $\mbox{Col}_i$ is the $i$th column of $X$, $\mu_i$ is average of the values in the $i$th column of $X$, and $\sigma_i$ is the standard deviation of the values in the $i$th column of $X$. In other words, mean normalization is performed by subtracting from each column of $X$ the average of its values, and then by dividing by the standard deviation of its values. In the space below, you will first calculate the average and standard deviation of each column of $X$.
```
# Average of the values in each column of X
ave_cols = np.mean(X, axis=0)

# Standard deviation of the values in each column of X
std_cols = np.std(X, axis=0)
```
If you have done the above calculations correctly, then `ave_cols` and `std_cols` should both be vectors with shape `(20,)` since $X$ has 20 columns. You can verify this by filling in the code below:
```
# Print the shape of ave_cols
ave_cols.shape
# Print the shape of std_cols
std_cols.shape
```
You can now take advantage of Broadcasting to calculate the mean normalized version of $X$ in just one line of code using the equation above. Fill in the code below
```
# Mean normalize X
X_norm = (X-ave_cols) / std_cols
print (X_norm.shape)
```
If you have performed the mean normalization correctly, then the average of all the elements in $X_{\tiny{\mbox{norm}}}$ should be close to zero, and they should be evenly distributed in some small interval around zero. You can verify this by filling in the code below:
```
# Print the average of all the values of X_norm
print(X_norm.mean())
# Print the average of the minimum value in each column of X_norm
print(X_norm.min(axis=0).mean())
# Print the average of the maximum value in each column of X_norm
print(X_norm.max(axis=0).mean())
```
You should note that since $X$ was created using random integers, the above values will vary.
# Data Separation
After the data has been mean normalized, it is customary in machine learning to split our dataset into three sets:
1. A Training Set
2. A Cross Validation Set
3. A Test Set
The dataset is usually divided such that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data.
In this part of the lab you will separate `X_norm` into a Training Set, Cross Validation Set, and a Test Set. Each data set will contain rows of `X_norm` chosen at random, making sure that we don't pick the same row twice. This will guarantee that all the rows of `X_norm` are chosen and randomly distributed among the three new sets.
You will start by creating a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this by using the `np.random.permutation()` function. The `np.random.permutation(N)` function creates a random permutation of integers from 0 to `N - 1`. Let's see an example:
```
# We create a random permutation of integers 0 to 4
np.random.permutation(5)
```
# To Do
In the space below create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this in one line of code by extracting the number of rows of `X_norm` using the `shape` attribute and then passing it to the `np.random.permutation()` function. Remember the `shape` attribute returns a tuple with two numbers in the form `(rows,columns)`.
```
# Create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`
row_indices =np.random.permutation(np.size(X_norm,0))
row_indices.shape
```
Now you can create the three datasets using the `row_indices` ndarray to select the rows that will go into each dataset. Remember that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data. Each set requires just one line of code to create. Fill in the code below
```
# Make any necessary calculations.
# You can save your calculations into variables to use later.
t=row_indices[0:600]
c=row_indices[600:800]
te=row_indices[800:1000]
# Create a Training Set
X_train =X_norm[t]
# Create a Cross Validation Set
X_crossVal = X_norm[c]
# Create a Test Set
X_test =X_norm[te]
```
If you performed the above calculations correctly, then `X_train` should have 600 rows and 20 columns, `X_crossVal` should have 200 rows and 20 columns, and `X_test` should have 200 rows and 20 columns. You can verify this by filling in the code below:
```
# Print the shape of X_train
print (X_train.shape)
# Print the shape of X_crossVal
print (X_crossVal.shape)

# Print the shape of X_test
print (X_test.shape)
# Quantum speedups in finance
### Notebook objectives:
In this challenge we will gain the following skills:
1. Understand the basics of how financial instruments are typically priced using Monte Carlo methods
2. Implement a quantum algorithm to price a financial instrument (in this case, we consider derivative contracts called options)
3. Understand the pros/cons of using the quantum approach
## Introduction
The world of finance is a complicated one to model and predict. The financial markets, in particular, are influenced by so many external factors and contain multiple different investors with different objectives. This means that traditional modelling techniques, like multiple linear regression, often struggle when trying to explain investors' behaviours or perform certain tasks in finance, such as predicting the price of a stock. Due to these complexities, we need more sophisticated models and numerous techniques have been proposed to produce such robust models for financial instruments like stocks.
In Africa, the dynamics of the financial markets are arguably even more complicated. In this notebook, we discuss what types of financial instruments we can encounter in the South African market, particularly those that are liquid (meaning they are frequently traded and accurately valued). Illiquid instruments are also prevalent in Africa and are more difficult to evaluate and price, since they are not often traded.
Here, we introduce a common pricing technique for a particular instrument called an option. This technique rests on Monte Carlo sampling, and we show that a quadratic speedup in pricing the option can be achieved by replacing the classical Monte Carlo sampling approach with a quantum algorithm that leverages quantum effects to compute things faster.
## 1. What is a financial instrument?
A financial instrument is any asset that can be traded between various parties. For example, shares of a company are equity instruments, and debt, cash and even contracts can be considered as financial instruments.
In general, we can divide financial instruments into several categories, such as cash instruments, asset or debt-based instruments and derivative instruments. In this challenge, we will focus on the last category, namely, derivatives. These instruments get their name from the fact that their price is *derived* based on the price of a separate underlying quantity. In particular, we will consider derivatives called European options.
<img src="instruments.png" width=1000 height=1000 />
### Option contracts
Options are financial derivatives that are defined explicitly by contracts. These contracts give the buyer of the option the right, but not the obligation, to buy or sell a specific underlying asset at an agreed-upon price sometime in the future.
- Options that give the buyer the right to *buy* the underlying asset are called **call options**
- Options that give the buyer the right to *sell* the underlying asset are called **put options**.
In both cases, the price to buy or sell the asset at, is agreed upon in the contract and called the **strike price**.
For example, let's assume that the share price of a company called ABC is currently trading at $S_0 =$ 50 ZAR. You believe that in 1 month's time, the share price will double to 100 ZAR. You can either buy the share now for 50 ZAR, or you could buy a 1-month call option for a much cheaper price of 5 ZAR. If the price of the ABC share does indeed double in 1 month's time, then you can exercise your option right to buy the ABC share at the agreed-upon strike price (which will be lower than the actual share price). This may sound a little tricky, so let's make it concrete by going through an example with various scenarios we can encounter.
Recall that the current price of ABC is $S_0 =$ 50 ZAR. A 1-month call option with a strike price of $K =$ 80 ZAR is available to purchase for $P_{\mathrm{call}} =$ 5 ZAR. Let's consider the scenarios of buying an ABC share today or the call option.
#### Today:
Scenario 1: Buying a share of ABC today
Investment = $S_0 =$ 50 ZAR
Scenario 2: Buying a 1-month call option on ABC
Investment = $P_{\mathrm{call}} =$ 5 ZAR
#### In 1-months time:
Since option contracts are valid for a pre-determined period of time, their value at the expiration date is called the *payoff* that you will receive. Technically, the price of an ABC share could be trading at any value greater than or equal to 0 ZAR. Let's imagine that the price after 1 month, $S_t$, can be 40 ZAR, 60 ZAR or 100 ZAR and look at the payoffs for each scenario.
- $S_t =$ 40 ZAR \
Scenario 1: Since you own the share of ABC and purchased it at 50 ZAR, you will lose 10 ZAR. I.e. the payoff will be a loss of -10 ZAR. \
Scenario 2: The strike price is higher than the actual share price. Thus, you would not execute the option to buy ABC at $K =$ 80 ZAR when you can purchase the share on the market at 40 ZAR. You will lose out on the price you paid for the call option and so the payoff will be a loss of -5 ZAR.
- $S_t =$ 60 ZAR \
Scenario 1: The payoff will be the difference between the share's current price and the 50 ZAR you bought it at. I.e., payoff = 60 - 50 = 10 ZAR. \
Scenario 2: The strike price is still higher than the share price. Thus, you would not execute the option and lose 5 ZAR.
- $S_t =$ 100 ZAR \
Scenario 1: payoff = 100 - 50 = 50 ZAR \
Scenario 2: Here, the strike price is lower than the market price so it makes sense to execute the option. By doing so, you can buy the share at the strike price of 80 ZAR and the profit is therefore = 100 - 80 = 20 ZAR. However, remember that you also paid a premium for the option so the total payoff is 100 - 80 - 5 = 15 ZAR.
At this stage, you might be wondering why buy the option at all? And it kind of seems like betting? What's important to keep in mind is the initial investment you have to put up in order to buy the actual ABC share vs buying the call option. Buying the share requires a much higher investment and you run the risk of losing a lot more money if the share price drops below the price you initially paid for it. Whereas with the option, if the share price drops below the strike price, you can just let the option expire and lose a maximum of 5 ZAR (the price you paid for the option). There are also other reasons to purchase options, such as hedging against risk or offsetting other trades. In general, you can read more about these strategies here: https://www.investopedia.com/trading/options-strategies/
# Pricing options
Now, how on earth do we begin to price these option contracts? Previously, we said the price of a 1-month call option linked to the ABC share price was 5 ZAR. But is that price fair? And how would we go about determining the fair price of these options in general?
If we think about what should affect the fair price of an option, it boils down to understanding what the price of the underlying asset it is linked to will be in the future. Due to the random (also called stochastic) nature of most of the parameters that go into pricing the underlying assets that options are defined on, calculating the fair price can be a difficult task, and whilst analytical models exist for the basic types of options, the simplifying assumptions required for these models often limit their applicability. Thus, numerical methods that estimate the fair price have to be employed for option pricing, with Monte Carlo being one of the most popular.
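For intuition, here is what such an analytical model looks like: a minimal sketch of the Black–Scholes formula for a European call option, using only the standard library (the parameter values at the bottom are illustrative, not taken from the ABC example):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price(S0, K, T, r, sigma):
    """Black-Scholes price of a European call: S0*N(d1) - K*exp(-rT)*N(d2)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: S0 = K = 100, 1 year to expiry, 20% volatility, zero rates
print(bs_call_price(100, 100, 1.0, 0.0, 0.2))  # ~7.97
```

The closed form only holds under strong assumptions (constant volatility, lognormal prices); once those are relaxed, or the option is more exotic, numerical methods like Monte Carlo take over.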
## 2. How does Monte Carlo work?
Monte Carlo methods are a class of computational algorithms that are based on random sampling and repeated computation. At a high level, imagine you have a function (or equation) that is described by multiple variables and you're interested in solving that function, but it is analytically difficult to do so because the function depends on lots of complicated variables.
A Monte Carlo approach simply samples random values for the parameters of your function from an underlying distribution and computes the function multiple times, each time using a different set of randomly sampled values. In doing so, we can obtain an expected value for the function we are trying to evaluate by taking an average over all the computed values of the function. This allows us to estimate function values without having to analytically solve them directly.
<img src="MCpic.png" width=650 height=650 />
You can read more about how Monte Carlo works in this blog post that discusses how to use the technique to estimate the value of pi: https://medium.com/swlh/estimate-pi-with-monte-carlo-a74995862501
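As a quick illustration of the idea in that post, here is a minimal sketch that estimates π by sampling random points in the unit square and counting the fraction landing inside the quarter circle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sample n points uniformly in the unit square [0, 1) x [0, 1)
x, y = rng.random(n), rng.random(n)

# The fraction of points inside the quarter circle approximates pi/4
pi_estimate = 4.0 * np.mean(x**2 + y**2 <= 1.0)
print(pi_estimate)  # close to 3.14
```

Note the error of such an estimate shrinks like $1/\sqrt{n}$ — the same scaling that the quantum algorithm discussed later improves quadratically.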
### Monte Carlo in finance
Monte Carlo simulation is often used in finance to value and analyse instruments, portfolios and investments by simulating the sources of uncertainty that affect their value (see https://www.cmi.ac.in/~shariq/Shariq%20files/option_pricing.pdf).
In the case of options, Monte Carlo methods are used to develop a price distribution for the underlying asset. If we have a price distribution of the underlying asset, then we can begin to get a sense of what the fair price for the option should be.
To illustrate this, we have to first make assumptions about the factors that could influence the price of the underlying asset and hence, influence the value of the option.
### Monte Carlo methods for option pricing
The goal of option pricing is to estimate the option's payoff at the expiration date. In other words, how much profit can you expect to make from the option? If the underlying asset's price is expected to go up a lot, then you can expect to make more money if you can buy the underlying asset at a cheap price when it is worth a lot. In that case, the option price should be high since the option to buy the underlying asset at a cheaper price is quite valuable.
<br>
The steps to price an option are as follows:
1. Model the price of the underlying asset which the option is based on, and any other sources of uncertainty, as random variables $\textbf{X}=\{X_1, X_2, . . . , X_N\} $ which follow a stochastic process.
$$ \thinspace $$
2. Generate a large number, $M$, of random values which can serve as price paths $\{\textbf{X}_1, \textbf{X}_2, . . . , \textbf{X}_M\} $ for the underlying asset. These random values should be drawn from the probability distribution implied by the stochastic process. Let's call this distribution $\mathbb{P}$.
$$ \thinspace $$
3. Once we have lots of simulated price paths for the underlying asset, we can calculate the option’s payoff for each of the generated price paths, which we can label as $f(\textbf{X}_i)$. Then we can compute an estimator for the expectation value of the payoff as an average across all paths, i.e. $\mathbb{E}_\mathbb{P}[f(\textbf{X})]$ can be approximated by
$$ \mathbb{\hat{E}}_\mathbb{P}[f(\textbf{X})]= \frac{1}{M}\sum_{i=1}^M f(\textbf{X}_i) $$
$$ \thinspace $$
4. Lastly, discount the calculated expectation value to get the option's fair value today.
In step 4, the discounting process requires knowledge of interest rates at future dates which is itself an important question from a financial modelling perspective. However, for the types of options we consider, this process is not computationally challenging and can be performed classically after the payoff calculation. We therefore do not discount the expected payoff for simplicity and can ignore step 4.
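For reference, the discounting in step 4 is computationally trivial once the expected payoff is known: with a constant risk-free rate $r$, it is just multiplication by the discount factor $e^{-rT}$. A minimal sketch:

```python
import math

def discount(expected_payoff, r, T):
    """Present value of a payoff expected at time T, with a constant rate r."""
    return math.exp(-r * T) * expected_payoff

# e.g. a payoff of 10 expected in one year at a 5% rate
print(discount(10.0, 0.05, 1.0))  # about 9.51
```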
Let's look at a simple example to illustrate how classical Monte Carlo works using Python.
```
# Let's import some libraries and functions we will need
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Let's look at a 1-month call option linked to the underlying share price of a new company called QuantumTech. We can formulate the expected payoff in one month's time, and hence the price of the call option, as follows:
$$P_{\mathrm{call}} = \max(S_T - K, 0)$$
where $S_T$ is the price of the QuantumTech share in $T =$ 1 month and $K$ is the pre-agreed upon strike price.
Since we know $K$ and $T$, we need to estimate $S_T$, i.e. we need to use Monte Carlo methods to estimate QuantumTech's share price in one month. But what function can we sample from in order to get the Monte Carlo estimate?
In the financial literature, there are several results that help us answer this question. Share prices are often assumed to follow a random process called a Generalised Wiener Process. We can use this random process to create lots of estimates for the price of a QuantumTech share. Let's assume we expect QuantumTech to have a positive return for the month of roughly $r = 0.05$, with some volatility around this return which we denote as $\sigma = 0.4$. Then, the change in share price, $dS_T$, can be written as
$$ dS_T = rS_TdT + \sigma S_T dW_T $$
where $dW_T$ is a Wiener process (see https://www.whoi.edu/fileserver.do?id=21268&pt=10&p=17232 for more technical details of this process). Using a convenient result called Itô's lemma, we can then derive the formula for the share price at time T as
$$S_T = S_0e^{(r-\frac{1}{2}\sigma^2)T + \sigma \sqrt{T}Z}$$
where $S_0$ is the price of a QuantumTech share today and $Z$ is a standard normal random variable ($\sigma \sqrt{T}Z$ has the same distribution as $\sigma W_T$). Thus, shares are often modelled using log-normal distributions.
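As an aside, because a European call payoff depends only on the terminal price, $S_T$ can also be sampled in a single step directly from this closed-form solution rather than simulating a full path. A hedged sketch, using the same illustrative monthly parameters ($T = 1$ month) as the path simulation below:

```python
import numpy as np

rng = np.random.default_rng(42)
S0, r, sigma, T = 50, 0.05, 0.4, 1  # illustrative monthly parameters
M = 100_000

# Sample S_T directly from the closed-form solution, with Z ~ N(0, 1)
Z = rng.standard_normal(M)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Sanity check: the sample mean of S_T should be close to S0 * exp(r * T)
print(S_T.mean(), S0 * np.exp(r * T))
```

Path simulation (as below) is only needed when the payoff depends on intermediate prices, e.g. for Asian or barrier options.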
Now we can code up this function and evaluate it with lots of random values, in order to calculate a price distribution for QuantumTech in one month's time.
```
# Monte Carlo valuation of a European call option
# set a random seed to reproduce our results
np.random.seed(42)
# set the parameters
S0 = 50 # initial price of the underlying asset
K = 55 # strike price
r = 0.05 # average return of the underlying
sigma = 0.4 # volatility of the underlying
T = 1 # time till execution
t = 30 # number of time steps we want to divide T in
dt = T / t # incremental time step size
M = 1000 # number of paths to simulate
# Simulating M price paths with t time steps
# sum instead of cumsum would also do if only the final values at end of the month (i.e. at time T) are of interest
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt)* np.random.standard_normal((t + 1, M)), axis=0))
# Calculating the Monte Carlo estimator for the expected payoff
P_call = sum(np.maximum(S[-1] - K, 0)) / M
# Results output
print("The call option value is: {:0.2f} ZAR.\n".format(P_call))
```
Let's visualise the multiple paths that the underlying share price of QuantumTech could take. The resulting call option value is calculated using an average of these paths.
```
num_paths_to_plot = 100
plt.figure(figsize= (12,8))
plt.plot(S[:, :num_paths_to_plot])
plt.grid(True)
plt.xlabel('days')
plt.ylabel('Price')
plt.title('Possible price paths for a QuantumTech share')
plt.show()
```
Let’s investigate the frequency of the simulated share prices at the end of the simulation period
```
plt.figure(figsize= (10,5))
plt.hist(S[-1], bins=50)
plt.grid(True)
plt.xlabel('Price')
plt.ylabel('frequency')
```
Let’s look at the histogram of all simulated end-of-period option payoff values
```
plt.figure(figsize= (10,5))
plt.hist(np.maximum(S[-1] - K, 0), bins=50)
plt.grid(True)
plt.xlabel('option payoff value')
plt.ylabel('frequency')
```
## 3. The quantum approach
Traditional Monte Carlo methods generally require extensive computational resources. By leveraging the laws of quantum mechanics, a quantum computer may provide novel ways to solve computationally intensive problems.
Quantitative finance may benefit from quantum computing in many ways. Recently developed applications of gate-based quantum computing algorithms for use in finance include portfolio optimisation, the calculation of risk measures and pricing derivatives. Several of these applications are based on an algorithm called Amplitude Estimation, which can estimate a parameter's value with a convergence rate in the order of $\mathcal{O}(M^{-1})$, where $M$ is the number of samples required. This represents a theoretical quadratic speed-up compared to Monte Carlo methods that run on classical computers with a convergence rate of $\mathcal{O}(M^{-1/2})$.
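The classical $\mathcal{O}(M^{-1/2})$ rate is easy to check empirically. The sketch below estimates $\mathbb{E}[Z^2] = 1$ for a standard normal $Z$ and shows that quadrupling the sample count roughly halves the average error (the quantum $\mathcal{O}(M^{-1})$ rate, of course, cannot be demonstrated with classical sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_error(M, trials=500):
    """Average absolute error of a Monte Carlo estimate of E[Z^2] = 1."""
    samples = rng.standard_normal((trials, M))
    estimates = (samples**2).mean(axis=1)
    return np.abs(estimates - 1.0).mean()

# Quadrupling M should roughly halve the error, consistent with M^{-1/2}
print(mc_error(1_000), mc_error(4_000))
```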
Below is a graphical demonstration of this speedup taken from a paper called Option Pricing Using Quantum Computers (https://arxiv.org/abs/1905.02666). The y-axis depicts the estimation error of the function or parameter we are trying to model, versus the number of samples we use, $M$, to create the estimate.
<img src="scalingMC.png" width=600 height=600 />
### Pricing options with Quantum Amplitude Estimation
If we were to swap out the Monte Carlo estimate for the underlying share price and use Amplitude Estimation, the building blocks needed to price the option on a gate-based quantum computer are the following:
1) Represent the probability distribution $ \mathbb{P} $ describing the evolution of the share price of QuantumTech on a quantum computer.
$$ \thinspace $$
2) Construct the quantum model which computes the payoff of the option, $f(\textbf{X})$.
$$ \thinspace $$
3) Calculate the expectation value of the payoff $\mathbb{E}_\mathbb{P}[f(\textbf{X})]$.
In the paper Quantum Risk Analysis (https://arxiv.org/abs/1806.06893) you can find a detailed description of how to use Amplitude Estimation to calculate the expectation value of a function of <a href="https://www.khanacademy.org/math/statistics-probability/random-variables-stats-library/random-variables-discrete/v/random-variables">random variables</a>, how to load a relevant probability distribution to a quantum register and the construction of the quantum circuits needed to compute the payoff and set up Amplitude Estimation to estimate the expectation value of the payoff. There is also <a href="https://qiskit.org/documentation/tutorials/algorithms/07_grover.html">a chapter in the qiskit textbook</a> explaining Amplitude Estimation in more detail.
Continuing with our example of a 1-month <a href="http://www.theoptionsguide.com/call-option.aspx">European call option</a> based on the share price of QuantumTech $S_T$ and a strike price $K$, recall that the corresponding payoff function is defined as:
$$f(S_T)=\max(S_T - K, 0).$$
We now know that the price of this type of option depends only on the distribution of $S_T$. In the rest of this notebook, we will use a quantum algorithm that employs Amplitude Estimation to estimate the expected payoff, i.e., the fair price for the option:
$$\mathbb{E}\left[ \max(S_T - K, 0) \right]$$
```
#First we import all the required libraries
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.algorithms import IterativeAmplitudeEstimation
from qiskit_finance.circuit.library import LogNormalDistribution, EuropeanCallPricingObjective
```
### Recapping our quantum understanding
Quantum models consist of quantum circuits that contain operations which rotate qubits in different ways. Depending on the types of rotations used, these will influence the outcome of the quantum circuit when we measure each qubit. From the preliminary notebook, we know that when a qubit is measured, it collapses to a classical outcome of either 0 or 1. But, we must also remember that the outcome of quantum measurements when qubits are in superposition is stochastic, meaning we must measure the circuit several times to get an accurate distribution over the possible outcomes. These possible outcomes are what we call basis states, and each of them has a corresponding probability amplitude which tells us how probable that basis state is of being measured. For example, if we have 2 qubits, the basis states are $00, 01, 10$ and $11$ - each of which will have a probability associated with it.
<img src="qubit.png" width=600 height=600 />
### Distribution Loading
Returning to our QuantumTech example, the first component of our option pricing model is a quantum circuit that takes the probability distribution implied for the possible share prices of QuantumTech in one month's time and loads it into a register, such that each basis state represents a possible share value and its amplitude the corresponding probability of the share having that value.
In this notebook, we construct a circuit factory to load a log-normal probability distribution for the share price of QuantumTech into a quantum state. The share price distribution is truncated to a given interval $[low, high]$ and discretised using $2^n$ grid points, where $n$ denotes the number of qubits used. This might sound a bit confusing, so feel free to check out this [blog post](https://medium.com/qiskit/systematic-preparation-of-arbitrary-probability-distribution-with-a-quantum-computer-165dfd8fbd7d) for more details on loading distributions as quantum states.
Essentially, we want to represent a probability distribution of our underlying share price as a quantum state. Thus, we truncate the distribution to an interval we deem relevant and "slice" the possible values within this interval. The number of slices we make will depend on how many qubits we have. If we have $n$ qubits, this means we have $2^n$ equally spaced slices in the interval. Each slice corresponds to a certain share price which we can then associate with a possible basis state. Thus, the more qubits we have, the more possible basis states we will have and the more slices we can create. Let's look at an example.
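The slicing described above can be sketched in plain NumPy: with $n$ qubits we get $2^n$ equally spaced grid points in $[low, high]$, and each point is assigned the (renormalised) log-normal density. The interval and distribution parameters below are purely illustrative:

```python
import numpy as np

n = 2                    # qubits -> 2**n grid points
low, high = 30.0, 75.0   # illustrative truncation interval

# 2**n equally spaced share prices, one per basis state
grid = np.linspace(low, high, 2**n)

# Log-normal density at each grid point, renormalised to sum to 1
mu, sigma = np.log(50), 0.1  # illustrative log-normal parameters
density = np.exp(-(np.log(grid) - mu)**2 / (2 * sigma**2)) / (grid * sigma * np.sqrt(2 * np.pi))
probs = density / density.sum()
print(dict(zip(['00', '01', '10', '11'], probs.round(3))))
```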
### The quantum uncertainty model
```
# number of qubits to represent the uncertainty/distribution
num_uncertainty_qubits = 2
# parameters for considered random distribution
S = 50 # initial spot price
strike_price = 55
vol = 0.4 # volatility of 40%
r = 0.05 # annual interest rate of 5%
T = 30 / 365 # 30 days to maturity
# resulting parameters for log-normal distribution
mu = ((r - 0.5 * vol**2) * T + np.log(S))
sigma = vol * np.sqrt(T)
mean = np.exp(mu + sigma**2/2)
variance = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)
stddev = np.sqrt(variance)
# lowest and highest value considered for the spot price; in between, an equidistant discretization is considered.
# we truncate the distribution to the interval defined by 2 standard deviations around the mean
low = np.maximum(0, mean - 2*stddev)
high = mean + 2*stddev
# construct circuit factory for uncertainty model
uncertainty_model = LogNormalDistribution(num_uncertainty_qubits, mu=mu, sigma=sigma**2, bounds=(low, high))
# plot probability distribution
x = uncertainty_model.values
y = uncertainty_model.probabilities
plt.figure(figsize= (10,5))
plt.bar(x, y, width=1)
plt.xticks(x, size=15, rotation=90)
plt.yticks(size=15)
plt.grid()
plt.xlabel('Share Price in one month $S_T$ (\$)', size=15)
plt.ylabel('Probability ($\%$)', size=15)
plt.show()
```
### Payoff Function
Let's have a look at the payoff function for our QuantumTech option. Recall that the payoff function equals zero as long as the share price in one month's time, $S_T$, is less than the strike price $K$ and then increases linearly thereafter. The code below illustrates this.
```
# plot exact payoff function (evaluated on the grid of the uncertainty model)
x = uncertainty_model.values
y = np.maximum(0, x - strike_price)
plt.figure(figsize= (10,5))
plt.plot(x, y, 'ro-')
plt.grid()
plt.title('Normalised Payoff Function', size=15)
plt.xlabel('Share Price', size=15)
plt.ylabel('Normalised Payoff', size=15)
plt.xticks(x, size=15, rotation=90)
plt.yticks(size=15)
plt.show()
```
### Evaluate the expected payoff
Lastly, we can use Quantum Amplitude Estimation to compute the expected payoff of the option. Thanks to built-in Qiskit algorithms, we can use the EuropeanCallPricing and IterativeAmplitudeEstimation functions to achieve this.
```
from qiskit_finance.applications.estimation import EuropeanCallPricing
european_call_pricing = EuropeanCallPricing(num_state_qubits=num_uncertainty_qubits,
strike_price=strike_price,
rescaling_factor=0.25,
bounds=(low, high),
uncertainty_model=uncertainty_model)
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
shots = 100
simulator = 'qasm_simulator'
qi = QuantumInstance(Aer.get_backend(simulator), shots=shots, seed_simulator=42, seed_transpiler=42)
problem = european_call_pricing.to_estimation_problem()
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
result = ae.estimate(problem)
```
Since this example is simple enough to calculate by hand, we can compute the exact expected value of the option and compare it to the result of Amplitude Estimation using a quantum approach.
We can also create a confidence interval around our quantum solution. Recall that we are modelling probability distributions which are inherently random. Thus, if we want, we can provide a range of values for our option price instead of just one fixed value.
Below is the exact value, followed by the estimated value of the option using Amplitude Estimation. The estimation error is simply the difference between the exact and estimated values. Finally, the confidence interval provides a range of values in which the model predicts the final output to lie, based on the value of alpha.
```
# evaluate exact expected value (normalized to the [0, 1] interval)
exact_value = np.dot(uncertainty_model.probabilities, y)
conf_int = np.array(result.confidence_interval_processed)
print('Exact value: \t%.4f' % exact_value)
print('Estimated value: \t%.4f' % (european_call_pricing.interpret(result)))
print('Estimation error: \t%.4f' %(np.abs(exact_value-european_call_pricing.interpret(result))))
print('Confidence interval:\t[%.4f, %.4f]' % tuple(conf_int))
```
Ideally, we would want our model's estimated value to be as close as possible to the exact value. Thus, we would want to minimise the estimation error as much as possible.
Now that we have implemented the quantum approach to price a call option, we can start to attempt the exercises below to enhance and create more models.
# Exercises
### Exercise 2a
The estimated value for the expected payoff can be made more accurate. Can you make the estimated value closer to the exact value by tweaking various parameters? Try reducing the estimation error as much as possible and input your final values below.
NOTE: Do not change any other variables than the ones specified below.
```
# Run this cell to get a score for your answer, copy the values for the variables here
# HINT: Try lowering the estimation error to below 0.03
num_uncertainty_qubits = 4
low = np.maximum(0, mean - 2*stddev)
high = mean + 2*stddev
epsilon = 0.01
alpha = 0.05
shots = 100
simulator = 'qasm_simulator'
# Run this cell once you are ready to submit your answer
from qc_grader import grade_ex2a
solutions = [num_uncertainty_qubits, low, high, epsilon, alpha, shots, simulator]
grade_ex2a(solutions)
```
In this exercise, your score is calculated based on the estimation error of your model. The lower the estimation error, the higher your score. The highest possible score is 100.
### Exercise 2b
Recall that we can have different types of options. What we have implemented so far in this notebook is a European call option. In this exercise, try to implement a **<a href="https://www.theoptionsguide.com/put-option.aspx">European put option</a>**. A put option gives the buyer of the option the right (but not the obligation) to sell the underlying asset, rather than to buy it. Thus, a put option has a completely different payoff function to a call option.
Let's assume the following **fixed** parameters to implement a put option:
```
num_uncertainty_qubits = 4
S = 200 # initial spot price
vol = 0.3 # volatility of 30%
r = 0.08 # annual interest rate of 8%
T = 60 / 365 # 60 days to maturity
strike_price = 230
epsilon = 0.01
alpha = 0.05
shots = 100
simulator = 'qasm_simulator'
```
Below is some code to get you started.
HINT: if you are stuck, check out this qiskit tutorial and adapt it for our problem: https://qiskit.org/documentation/finance/tutorials/04_european_put_option_pricing.html.
```
# Do not change these variables
num_uncertainty_qubits = 4
S = 200 # initial spot price
vol = 0.3 # volatility of 30%
r = 0.08 # annual interest rate of 8%
T = 60 / 365 # 60 days to maturity
strike_price = 230
epsilon = 0.01
alpha = 0.05
shots = 100
simulator = 'qasm_simulator'
# set the approximation scaling for the payoff function
rescaling_factor = 0.25
# resulting parameters for log-normal distribution
mu = ((r - 0.5 * vol**2) * T + np.log(S))
sigma = vol * np.sqrt(T)
mean = np.exp(mu + sigma**2/2)
variance = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)
stddev = np.sqrt(variance)
low = np.maximum(0, mean - 2*stddev)
high = mean + 2*stddev
breakpoints = [low, high]
slopes = [-1, 0]
offsets = [strike_price - low, 0]
f_min = 0
f_max = strike_price - low
# The distribution loading step will be the same for the underlying asset, in this case the QuantumTech share
uncertainty_model = LogNormalDistribution(num_uncertainty_qubits, mu=mu, sigma=sigma**2, bounds=(low, high))
#'insert lognormal model function here'
from qiskit.circuit.library import LinearAmplitudeFunction
# setup piecewise linear objective function, the LinearAmplitudeFunction
european_put_objective = LinearAmplitudeFunction( num_uncertainty_qubits,
slopes,
offsets,
domain=(low, high),
image=(f_min, f_max),
breakpoints=breakpoints,
rescaling_factor=rescaling_factor,
)
# setup the quantum instance to pass to the IterativeAmplitudeEstimation function
qi = QuantumInstance(Aer.get_backend(simulator), shots=shots, seed_simulator=42, seed_transpiler=42)
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
# Run this cell to get a score for your answer!
from qc_grader import grade_ex2b
grade_ex2b(uncertainty_model, european_put_objective, ae)
```
## Advanced reading (optional)
### Computing the delta of an option
The delta of an option measures the risk or the sensitivity of an option's price to changes in the underlying asset's price. It is often quoted as a simple ratio of the change in price of the option to the change in price of the underlying asset:
$\mathrm{delta} = \frac{\Delta P}{\Delta S}$, where $S$ is the underlying asset's price and $P$ is the price of the option. You can read more about the details of deltas for options here: https://www.investopedia.com/terms/d/delta.asp
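For comparison with the quantum estimate computed below, a delta can be approximated classically by "bump-and-revalue": reprice the option with the initial share price shifted up and down slightly, then take the ratio of changes. A hedged Monte Carlo sketch, reusing the call option parameters from earlier in this notebook:

```python
import numpy as np

def mc_call_price(S0, K, r, sigma, T, M=200_000, seed=0):
    """Monte Carlo estimate of the (undiscounted) expected call payoff."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(M)
    S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.maximum(S_T - K, 0).mean()

# Central finite difference with common random numbers (same seed) to reduce noise
S0, h = 50.0, 0.5
delta = (mc_call_price(S0 + h, 55, 0.05, 0.4, 1) -
         mc_call_price(S0 - h, 55, 0.05, 0.4, 1)) / (2 * h)
print(delta)  # roughly the analytical delta for these parameters
```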
In Qiskit, we can also conveniently compute the delta of an option. Continuing on with the parameters of our put option we just implemented, let's compute the delta.
```
from qiskit.algorithms import EstimationProblem
# setup a piecewise linear objective function
european_put_delta_objective = LinearAmplitudeFunction(
num_uncertainty_qubits,
slopes,
offsets,
domain=(low, high),
image=(f_min, f_max),
breakpoints=breakpoints
)
# construct circuits for payoff function
european_put_delta = european_put_delta_objective.compose(uncertainty_model, front=True)
# Set up the estimation problem using qiskit
problem = EstimationProblem(state_preparation=european_put_delta,
objective_qubits=[num_uncertainty_qubits])
# construct amplitude estimation
ae_delta = IterativeAmplitudeEstimation(epsilon, alpha=alpha, quantum_instance=qi)
# get the estimated result
result_delta = ae_delta.estimate(problem)
# compute the exact result
x = uncertainty_model.values
exact_delta = -sum(uncertainty_model.probabilities[x <= strike_price])
# compare results
conf_int = -np.array(result_delta.confidence_interval)[::-1]
print('Exact delta: \t%.4f' % exact_delta)
print('Estimated value: \t%.4f' % -result_delta.estimation)
print('Estimation error: \t%.4f' %(np.abs(exact_delta+result_delta.estimation)))
print('Confidence interval: \t[%.4f, %.4f]' % tuple(conf_int))
```
# Conclusion
Now that you are familiar with how quantum algorithms can be used for financial applications, go ahead and start exploring the <a href="https://qiskit.org/documentation/apidoc/qiskit_finance.html">Qiskit Finance module</a> and check out all the other types of <a href="https://qiskit.org/documentation/finance/ ">quantum finance applications</a> that are out there! You can even begin to modify the code and tailor them for problems in real life.
### Good luck!
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Czech municipal elections 2018: Nové Město nad Metují
Czech municipal elections use an open-list proportional system that allows panachage - marking candidates across parties. Here, we give an example of how to evaluate such a composite election system.
```
import sys
import os
import csv
import decimal
sys.path.append(os.path.join('..', '..'))
import votelib.candidate
import votelib.convert
import votelib.evaluate.core
import votelib.evaluate.threshold
import votelib.evaluate.proportional
import votelib.evaluate.openlist
```
## Vote loading
The voters are given a huge ballot where they can vote for arbitrary candidates across parties, with the number of votes equal to the number of seats in the municipal council. In addition to this, they can also vote for a party, whose candidates obtain the votes not assigned to candidates elsewhere, counting from the top. The votes are thus counted for the candidates and can be aggregated to the parties.
We use the `Person` and `PoliticalParty` objects to determine the relationships of candidates to their parties and aggregate their votes accordingly later.
```
fpath = os.path.join('..', '..', 'tests', 'real', 'data', 'nmnmet_cc_2018.csv')
votes = {}
party_objs = {}
party_lists = {}
with open(fpath, encoding='utf8') as infile:
for party, name, n_pers_votes in csv.reader(infile, delimiter=';'):
# For each candidate: Get the according party object;
party_obj = party_objs.setdefault(party, votelib.candidate.PoliticalParty(party))
# Construct the person object with a reference to the party;
person = votelib.candidate.Person(name, candidacy_for=party_obj)
# Record the candidate's votes;
votes[person] = int(n_pers_votes)
# Append the candidate to the party list of his or her party.
party_lists.setdefault(party_obj, []).append(person)
```
An example of the votes and party list for the incumbent ruling party:
```
vpm_object = party_objs['VPM']
print([cand.name for cand in party_lists[vpm_object]])
{cand.name: n_votes for cand, n_votes in votes.items() if cand.candidacy_for == vpm_object}
```
## Evaluator construction
Each Czech municipality forms a single constituency for the election.
The evaluation proceeds by first evaluating party results, so the results for the individual candidates must be grouped by their party. This mapping is defined by the candidates' `candidacy_for` attribute, which is recognized by the `IndividualToPartyMapper` object by default. Because independent candidates are not allowed to stand in the election, we add the behavior to recognize them as errors:
```
vote_grouper = votelib.convert.GroupVotesByParty(
votelib.candidate.IndividualToPartyMapper(independents='error')
)
```
The seats are allocated to the parties by the proportional D'Hondt system with a 5 % municipal vote threshold. We thus construct the proportional evaluator conditioned by the vote threshold and pre-aggregated by summing the grouped votes for parties:
```
party_evaluator = votelib.evaluate.core.PreConverted(
votelib.convert.PartyTotals(),
votelib.evaluate.core.Conditioned(
votelib.evaluate.threshold.RelativeThreshold(
decimal.Decimal('.05'), accept_equal=True
),
votelib.evaluate.proportional.HighestAverages('d_hondt'),
)
)
```
Next, the party open lists are evaluated by the votes for their individual candidates. The candidate can advance forward in the list ranking if he or she has more than 5 % of the votes for the list; all such candidates are ranked first by the number of votes in descending order, and the rest goes after them in list order. We can use `PartyListEvaluator` to manage the list election and have `ThresholdOpenList` determine the elected candidates for each party. We use the vote grouper in two places to group both the party votes and list votes, which are passed separately:
```
list_evaluator = votelib.evaluate.core.PreConverted(
vote_grouper,
votelib.evaluate.core.PartyListEvaluator(
party_evaluator,
votelib.evaluate.openlist.ThresholdOpenList(
jump_fraction=decimal.Decimal('.05')
),
list_votes_converter=vote_grouper,
)
)
```
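The jump rule applied by `ThresholdOpenList` can be sketched in plain Python: candidates with more than the jump fraction of their list's votes are ranked first by personal votes, and the rest keep their original list order. The candidates below are hypothetical:

```python
def open_list_order(candidates, votes, jump_fraction=0.05):
    """Reorder a party list: candidates over the jump threshold go first
    (sorted by personal votes, descending); the rest keep their list order."""
    total = sum(votes[c] for c in candidates)
    jumpers = [c for c in candidates if votes[c] > jump_fraction * total]
    jumpers.sort(key=lambda c: votes[c], reverse=True)
    rest = [c for c in candidates if c not in jumpers]
    return jumpers + rest

party_list = ['Alice', 'Bob', 'Carol', 'Dave']  # hypothetical list order
personal_votes = {'Alice': 500, 'Bob': 20, 'Carol': 900, 'Dave': 80}
print(open_list_order(party_list, personal_votes))
```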
Finally, we fix the number of seats - the municipal council of Nové Město nad Metují has 21 seats:
```
evaluator = votelib.evaluate.core.FixedSeatCount(
list_evaluator, 21
)
```
## Performing the evaluation
With the evaluator set up, we obtain the evaluation as lists of candidates per party.
```
list_results = evaluator.evaluate(
votes,
list_votes=votes,
party_lists=party_lists
)
for party, mandates in list_results.items():
print(party.name.ljust(15), ', '.join([cand.name for cand in mandates]))
```
We can see that VPM, the incumbent ruling party, has defended its first place with six seats, but its second place candidate from the original list was not elected because other candidates from the party with more votes jumped over him during open list evaluation.
# PyTorch Model + Transformer Example
This notebook demonstrates how to deploy a PyTorch model and a custom transformer. It uses a cifar10 model that accepts a tensor input. The transformer has a preprocessing step that allows the user to send raw image data, which it converts to a tensor input.
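For intuition, the preprocessing such a transformer performs is essentially the same normalisation used during training: convert the raw image into the normalised CHW float tensor the cifar10 model expects. A hedged NumPy sketch (the actual transformer deployed later in this notebook is a prebuilt container image):

```python
import numpy as np

def preprocess(raw_image):
    """Convert an HxWx3 uint8 image array into the normalised, batched CHW
    float layout the cifar10 model expects (illustrative sketch)."""
    x = raw_image.astype(np.float32) / 255.0      # scale to [0, 1]
    x = (x - 0.5) / 0.5                           # normalise to [-1, 1]
    return np.transpose(x, (2, 0, 1))[None, ...]  # CHW + batch dimension

dummy = np.zeros((32, 32, 3), dtype=np.uint8)     # stand-in for a raw image
print(preprocess(dummy).shape)  # (1, 3, 32, 32)
```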
## Requirements
- Authenticated to gcloud (```gcloud auth application-default login```)
```
!pip install --upgrade -r requirements.txt > /dev/null
import warnings
warnings.filterwarnings('ignore')
```
## 1. Initialize Merlin
### 1.1 Set Merlin Server
```
import merlin
MERLIN_URL = "<MERLIN_HOST>/api/merlin"
merlin.set_url(MERLIN_URL)
```
### 1.2 Set Active Project
`project` represents a project in real life. You may have multiple models within a project.
`merlin.set_project(<project-name>)` will set the active project into the name matched by argument. You can only set it to an existing project. If you would like to create a new project, please do so from the MLP UI.
```
PROJECT_NAME = "sample"
merlin.set_project(PROJECT_NAME)
```
### 1.3 Set Active Model
`model` represents an abstract ML model. Conceptually, `model` in Merlin is similar to a class in programming language. To instantiate a `model` you'll have to create a `model_version`.
Each `model` has a type, currently model type supported by Merlin are: sklearn, xgboost, tensorflow, pytorch, and user defined model (i.e. pyfunc model).
`model_version` represents a snapshot of particular `model` iteration. You'll be able to attach information such as metrics and tag to a given `model_version` as well as deploy it as a model service.
`merlin.set_model(<model_name>, <model_type>)` will set the active model to the name given by parameter, if the model with given name is not found, a new model will be created.
```
from merlin.model import ModelType
MODEL_NAME = "transformer-pytorch"
merlin.set_model(MODEL_NAME, ModelType.PYTORCH)
```
## 2. Train Model
### 2.1 Prepare Training Data
```
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
```
### 2.2 Create PyTorch Model
```
import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
def __init__(self):
super(PyTorchModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
```
### 2.3 Train Model
```
import torch.optim as optim
net = PyTorchModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
```
### 2.4 Check Prediction
```
dataiter = iter(trainloader)
inputs, labels = next(dataiter)  # iterator protocol; works on all PyTorch versions
predict_out = net(inputs[0:1])
predict_out
```
## 3. Deploy Model and Transformer
### 3.1 Serialize Model
```
import os
model_dir = "pytorch-model"
model_path = os.path.join(model_dir, "model.pt")
torch.save(net.state_dict(), model_path)
```
### 3.2 Save PyTorchModel Class
We also need to save the PyTorchModel class and upload it to Merlin alongside the serialized model. The next cell will write the PyTorchModel we defined above to `pytorch-model/model.py` file.
```
%%file pytorch-model/model.py
import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
    def __init__(self):
        super(PyTorchModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```
### 3.3 Create Model Version and Upload
`merlin.new_model_version()` is a convenience method that creates a model version and starts its development process. It is equivalent to the following code:
```
v = model.new_model_version()
v.start()
v.log_pytorch_model(model_dir=model_dir)
v.finish()
```
```
# Create new version of the model
with merlin.new_model_version() as v:
# Upload the serialized model to Merlin
merlin.log_pytorch_model(model_dir=model_dir)
```
### 3.4 Deploy Model and Transformer
To deploy a model together with its transformer, pass a `transformer` object to the `deploy()` function. Each deployed model version will have its own generated URL.
```
from merlin.resource_request import ResourceRequest
from merlin.transformer import Transformer
# Create a transformer object and its resources requests
resource_request = ResourceRequest(min_replica=1, max_replica=1,
                                   cpu_request="100m", memory_request="200Mi")
transformer = Transformer("gcr.io/kubeflow-ci/kfserving/image-transformer:latest",
                          resource_request=resource_request)
endpoint = merlin.deploy(v, transformer=transformer)
```
### 3.5 Send Test Request
```
import json
import requests
with open(os.path.join("input-raw-image.json"), "r") as f:
    req = json.load(f)
resp = requests.post(endpoint.url, json=req)
resp.text
```
## 4. Clean Up
### 4.1 Delete Deployment
```
merlin.undeploy(v)
```
# MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on [MNIST](http://yann.lecun.com/exdb/mnist/) using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). It builds the foundation for this <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb">companion notebook</a>, which explores tackling the same problem with other types of models such as DNN and CNN.
## Learning Objectives
1. Know how to read and display image data
2. Know how to find incorrect predictions to analyze the model
3. Visually see how computers see images
This notebook uses TF 2.0.
Please check your TensorFlow version using the cell below. If it is not 2.0, run the `pip` line below and restart the kernel.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
!python3 -m pip freeze | grep 'tensorflow==2\|tensorflow-gpu==2' || \
python3 -m pip install tensorflow==2
```
## Exploring the data
The MNIST dataset is already included in tensorflow through the keras datasets module. Let's load it and get a sense of the data.
```
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
```
Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
```
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
print("The label for image number", IMGNO, "is", y_test[IMGNO])
```
## Define the model
Let's start with a very simple linear classifier. This was the first method tried on MNIST in 1998, and it achieved 88% accuracy. Quite groundbreaking at the time!
We can build our linear classifier using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras), so we don't have to define or initialize our weights and biases ourselves; this happens automatically in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high-probability predictions that are incorrect.
When building more complex models such as DNNs and CNNs our code will be more readable by using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Let's get one working so we can test it and use it as a benchmark.
```
def linear_model():
# TODO: Build a sequential model and compile it.
return model
```
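One possible completion of the TODO (a sketch, not an official solution): flatten the 28×28 image, add a single `Dense` layer producing one logit per class, append `Softmax`, and compile with categorical cross entropy as described above. `HEIGHT`, `WIDTH`, and `NCLASSES` are hard-coded here so the sketch runs standalone:

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten, Softmax

HEIGHT, WIDTH, NCLASSES = 28, 28, 10  # as computed from the data above

def linear_model():
    model = Sequential([
        tf.keras.Input(shape=(HEIGHT, WIDTH)),
        Flatten(),          # 28x28 image -> 784-element vector
        Dense(NCLASSES),    # one logit per class
        Softmax(),          # logits -> probabilities
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

m = linear_model()
print(m.output_shape)  # (None, 10)
```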
## Write Input Functions
As usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.
**TODO 1**: Define the scale function below and build the dataset
```
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
    # TODO

def load_dataset(training=True):
    """Loads MNIST dataset into a tf.data.Dataset"""
    (x_train, y_train), (x_test, y_test) = mnist
    x = x_train if training else x_test
    y = y_train if training else y_test
    # TODO: a) one-hot encode labels, apply `scale` function, and create dataset.
    # One-hot encode the classes
    if training:
        # TODO
    return dataset

def create_shape_test(training):
    dataset = load_dataset(training=training)
    data_iter = dataset.__iter__()
    (images, labels) = data_iter.get_next()
    expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
    expected_label_ndim = 2
    assert(images.shape == expected_image_shape)
    assert(labels.numpy().ndim == expected_label_ndim)
    test_name = 'training' if training else 'eval'
    print("Test for", test_name, "passed!")

create_shape_test(True)
create_shape_test(False)
```
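One way the `scale` and dataset TODOs could be filled in (a sketch, demonstrated on a small synthetic array so it runs without the MNIST download; in the lab the same pipeline would consume `x_train`/`y_train`):

```python
import numpy as np
import tensorflow as tf

BUFFER_SIZE = 5000
BATCH_SIZE = 100
NCLASSES = 10

def scale(image, label):
    # Cast to float and squash pixel values from [0, 255] into [0, 1].
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

def build_dataset(x, y, training=True):
    # One-hot encode the labels so they match categorical cross entropy.
    y = tf.one_hot(y, NCLASSES)
    dataset = tf.data.Dataset.from_tensor_slices((x, y)).map(scale)
    if training:
        dataset = dataset.shuffle(BUFFER_SIZE).repeat()
    return dataset.batch(BATCH_SIZE)

# Synthetic stand-in for x_train / y_train:
x = np.random.randint(0, 256, size=(200, 28, 28)).astype(np.uint8)
y = np.random.randint(0, NCLASSES, size=(200,))
images, labels = next(iter(build_dataset(x, y)))
print(images.shape, labels.shape)  # (100, 28, 28) (100, 10)
```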
Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
```
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(
    OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)

history = model.fit(
    # TODO: specify training/eval data, # epochs, steps per epoch.
    verbose=2,
    callbacks=[checkpoint_callback, tensorboard_callback]
)
BENCHMARK_ERROR = .12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
assert(accuracy[-1] > BENCHMARK_ACCURACY)
assert(val_accuracy[-1] > BENCHMARK_ACCURACY)
print("Test to beat benchmark accuracy passed!")
assert(accuracy[0] < accuracy[1])
assert(accuracy[1] < accuracy[-1])
assert(val_accuracy[0] < val_accuracy[1])
assert(val_accuracy[1] < val_accuracy[-1])
print("Test model accuracy is improving passed!")
assert(loss[0] > loss[1])
assert(loss[1] > loss[-1])
assert(val_loss[0] > val_loss[1])
assert(val_loss[1] > val_loss[-1])
print("Test loss is decreasing passed!")
```
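For reference, the completed `fit` call has this shape. The sketch below trains a throwaway model on synthetic tensors so it runs standalone; in the lab you would pass `train_data`, `validation_data`, `NUM_EPOCHS`, and `STEPS_PER_EPOCH` instead:

```python
import numpy as np
import tensorflow as tf

# Throwaway stand-in model and data (hypothetical; the lab uses linear_model()
# and the tf.data pipelines built above).
demo_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
demo_model.compile(optimizer='adam', loss='categorical_crossentropy',
                   metrics=['accuracy'])

x = np.random.rand(200, 28, 28).astype(np.float32)
y = tf.one_hot(np.random.randint(0, 10, size=200), 10)

history = demo_model.fit(
    x, y,                    # in the lab: train_data
    validation_data=(x, y),  # in the lab: validation_data
    epochs=2,                # in the lab: NUM_EPOCHS (plus steps_per_epoch=STEPS_PER_EPOCH)
    verbose=0,
)
print(sorted(history.history))  # ['accuracy', 'loss', 'val_accuracy', 'val_loss']
```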
## Evaluating Predictions
Were you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. What would you have guessed for these images?
**TODO 2**: Change the range below to find an incorrect prediction
```
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
    dataset = (x_test[image_numbers], y_test[image_numbers])
    dataset = tf.data.Dataset.from_tensor_slices(dataset)
    dataset = dataset.map(scale).batch(len(image_numbers))
    return dataset

predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
    predicted_value = np.argmax(prediction)
    actual_value = y_test[image_numbers[index]]
    if actual_value != predicted_value:
        print("image number: " + str(image_numbers[index]))
        print("the prediction was " + str(predicted_value))
        print("the actual label is " + str(actual_value))
        print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
```
It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
Each of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened feeding into the model, but we can reshape them back into the original image dimensions to see what the computer sees.
**TODO 3**: Reshape the layer weights to be the shape of an input image and plot.
```
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
```
Did you recognize the digit the computer was trying to learn? Pretty trippy, isn't it! Even with a simple "brain", the computer can form an idea of what a digit should be. The human brain, however, uses [layers and layers of calculations for image recognition](https://www.salk.edu/news-release/brain-recognizes-eye-sees/). Ready for the next challenge? <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/images/mnist_linear.ipynb">Click here</a> to super charge our models with human-like vision.
## Bonus Exercise
Want to push your understanding further? Instead of using Keras' built in layers, try repeating the above exercise with your own [custom layers](https://www.tensorflow.org/tutorials/eager/custom_layers).
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import panel as pn
import numpy as np
import holoviews as hv
pn.extension()
```
For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb); we just want a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired:
* **`header`**: The header area of the HTML page
* **`sidebar`**: A collapsible sidebar
* **`main`**: The main area of the application
* **`modal`**: A modal area which can be opened and closed from Python
These areas behave very similarly to other Panel layout components and have list-like semantics. This means we can easily append new components to them. Unlike other layout components, however, the contents of the areas are fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas.
Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display.
#### Parameters:
In addition to the four content areas we can populate, the `FastListTemplate` also provides additional parameters:
* **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state.
* **`header_background`** (str): Optional header background color override.
* **`header_color`** (str): Optional header text color override.
* **`favicon`** (str): URI of favicon to add to the document head (if local file, favicon is base64 encoded as URI).
* **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI).
* **`theme`** (Theme): A Theme class (available in `panel.template`; one of `DefaultTheme` or `DarkTheme`).
  - For convenience you can provide the "default" or "dark" string to the constructor.
  - Adding `?theme=default` or `?theme=dark` to the URL will set the theme unless it is explicitly declared.
* **`site`** (str): The name of the site. Will be shown in the header and link to the root (/) of the site. Default is '', i.e. not shown.
* **`title`** (str): A title to show in the header. Also added to the document head meta settings and as the browser tab title.
* **`main_max_width`** (str): The maximum width of the main area. For example '800px' or '80%'. If the string is '' (default) no max width is set.
* **`sidebar_footer`** (str): Can be used to insert additional HTML. For example a menu, some additional info, links etc.
* **`enable_theme_toggle`** (boolean): If `True` a switch to toggle the Theme is shown. Default is `True`.
* **`config`** (TemplateConfig): Contains configuration options similar to `pn.config` but applied to the current Template only. (Currently only `css_files` is supported)
________
In this case we are using the `FastListTemplate`, built using the [Fast.design](https://www.fast.design/) framework. Here is an example of how you can set up a display using this template:
```
template = pn.template.FastListTemplate(title='FastListTemplate')
pn.config.sizing_mode = 'stretch_width'
xs = np.linspace(0, np.pi)
freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2)
phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi)
@pn.depends(freq=freq, phase=phase)
def sine(freq, phase):
    return hv.Curve((xs, np.sin(xs*freq+phase))).opts(
        responsive=True, min_height=400, title="Sine")

@pn.depends(freq=freq, phase=phase)
def cosine(freq, phase):
    return hv.Curve((xs, np.cos(xs*freq+phase))).opts(
        responsive=True, min_height=400, title="Cosine")

template.sidebar.append(pn.pane.Markdown("## Settings"))
template.sidebar.append(freq)
template.sidebar.append(phase)

template.main.append(hv.DynamicMap(sine))
template.main.append(hv.DynamicMap(cosine))
template.servable();
```
<h3><b>FastListTemplate with DefaultTheme</b></h3>
<img src="../../assets/FastListTemplate.png" style="margin-left: auto; margin-right: auto; display: block;">
<br>
<h3><b>FastListTemplate with DarkTheme</b></h3>
<img src="../../assets/FastListTemplateDark.png" style="margin-left: auto; margin-right: auto; display: block;">
The app can be displayed within the notebook by using `.servable()`, or rendered in another tab by replacing it with `.show()`.
Themes can be added using the optional keyword argument `theme`. This template comes with a `DarkTheme` and a `DefaultTheme`, which can be set via `FastListTemplate(theme=DarkTheme)`. If no theme is set, `DefaultTheme` will be applied.
It should be noted that **this template currently does not render correctly in a notebook**; for the best results it should ideally be deployed to a server.
# Image Captioning with RNNs
In this exercise you will implement a vanilla recurrent neural network and use it to train a model that can generate novel captions for images.
```
# As usual, a bit of setup
from __future__ import print_function
import time, os, json
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rnn import CaptioningRNN
from cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions
from cs231n.image_utils import image_from_url
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```
## Install h5py
The COCO dataset we will be using is stored in HDF5 format. To load HDF5 files, we will need to install the `h5py` Python package. From the command line, run: <br/>
`pip install h5py` <br/>
If you receive a permissions error, you may need to run the command as root: <br/>
```sudo pip install h5py```
You can also run commands directly from the Jupyter notebook by prefixing the command with the "!" character:
```
!pip3 install h5py
```
# Microsoft COCO
For this exercise we will use the 2014 release of the [Microsoft COCO dataset](http://mscoco.org/) which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk.
You should have already downloaded the data by changing to the `cs231n/datasets` directory and running the script `get_assignment3_data.sh`. If you haven't yet done so, run that script now. Warning: the COCO data download is ~1GB.
We have preprocessed the data and extracted features for you already. For all images we have extracted features from the fc7 layer of the VGG-16 network pretrained on ImageNet; these features are stored in the files `train2014_vgg16_fc7.h5` and `val2014_vgg16_fc7.h5` respectively. To cut down on processing time and memory requirements, we have reduced the dimensionality of the features from 4096 to 512; these features can be found in the files `train2014_vgg16_fc7_pca.h5` and `val2014_vgg16_fc7_pca.h5`.
The raw images take up a lot of space (nearly 20GB) so we have not included them in the download. However all images are taken from Flickr, and URLs of the training and validation images are stored in the files `train2014_urls.txt` and `val2014_urls.txt` respectively. This allows you to download images on the fly for visualization. Since images are downloaded on-the-fly, **you must be connected to the internet to view images**.
Dealing with strings is inefficient, so we will work with an encoded version of the captions. Each word is assigned an integer ID, allowing us to represent a caption by a sequence of integers. The mapping between integer IDs and words is in the file `coco2014_vocab.json`, and you can use the function `decode_captions` from the file `cs231n/coco_utils.py` to convert numpy arrays of integer IDs back into strings.
There are a couple special tokens that we add to the vocabulary. We prepend a special `<START>` token and append an `<END>` token to the beginning and end of each caption respectively. Rare words are replaced with a special `<UNK>` token (for "unknown"). In addition, since we want to train with minibatches containing captions of different lengths, we pad short captions with a special `<NULL>` token after the `<END>` token and don't compute loss or gradient for `<NULL>` tokens. Since they are a bit of a pain, we have taken care of all implementation details around special tokens for you.
You can load all of the MS-COCO data (captions, features, URLs, and vocabulary) using the `load_coco_data` function from the file `cs231n/coco_utils.py`. Run the following cell to do so:
```
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)
# Print out all the keys and values from the data dictionary
for k, v in data.items():
    if type(v) == np.ndarray:
        print(k, type(v), v.shape, v.dtype)
    else:
        print(k, type(v), len(v))
```
## Look at the data
It is always a good idea to look at examples from the dataset before working with it.
You can use the `sample_coco_minibatch` function from the file `cs231n/coco_utils.py` to sample minibatches of data from the data structure returned from `load_coco_data`. Run the following to sample a small minibatch of training data and show the images and their captions. Running it multiple times and looking at the results helps you to get a sense of the dataset.
Note that we decode the captions using the `decode_captions` function and that we download the images on-the-fly using their Flickr URL, so **you must be connected to the internet to view images**.
```
# Sample a minibatch and show the images and captions
batch_size = 3
captions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)
for i, (caption, url) in enumerate(zip(captions, urls)):
    plt.imshow(image_from_url(url))
    plt.axis('off')
    caption_str = decode_captions(caption, data['idx_to_word'])
    plt.title(caption_str)
    plt.show()
```
# Recurrent Neural Networks
As discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file `cs231n/rnn_layers.py` contains implementations of different layer types that are needed for recurrent neural networks, and the file `cs231n/classifiers/rnn.py` uses these layers to implement an image captioning model.
We will first implement different types of RNN layers in `cs231n/rnn_layers.py`.
# Vanilla RNN: step forward
Open the file `cs231n/rnn_layers.py`. This file implements the forward and backward passes for different types of layers that are commonly used in recurrent neural networks.
First implement the function `rnn_step_forward` which implements the forward pass for a single timestep of a vanilla recurrent neural network. After doing so run the following to check your implementation. You should see errors less than 1e-8.
```
N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)
next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
expected_next_h = np.asarray([
[-0.58172089, -0.50182032, -0.41232771, -0.31410098],
[ 0.66854692, 0.79562378, 0.87755553, 0.92795967],
[ 0.97934501, 0.99144213, 0.99646691, 0.99854353]])
print('next_h error: ', rel_error(expected_next_h, next_h))
```
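The check above exercises the standard vanilla-RNN update, `next_h = tanh(x @ Wx + prev_h @ Wh + b)`. For orientation, here is a NumPy sketch of what an implementation can look like (one possible shape of the answer, not the official solution), run on the same test inputs:

```python
import numpy as np

def rnn_step_forward_sketch(x, prev_h, Wx, Wh, b):
    """One vanilla-RNN timestep: next_h = tanh(x @ Wx + prev_h @ Wh + b)."""
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    cache = (x, prev_h, Wx, Wh, next_h)  # everything the backward pass needs
    return next_h, cache

N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)
next_h, _ = rnn_step_forward_sketch(x, prev_h, Wx, Wh, b)
print(next_h.shape)  # (3, 4)
```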
# Vanilla RNN: step backward
In the file `cs231n/rnn_layers.py` implement the `rnn_step_backward` function. After doing so run the following to numerically gradient check your implementation. You should see errors less than `1e-8`.
```
from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward
np.random.seed(231)
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
h = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_step_forward(x, h, Wx, Wh, b)
dnext_h = np.random.randn(*out.shape)
fx = lambda x: rnn_step_forward(x, h, Wx, Wh, b)[0]
fh = lambda prev_h: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_step_forward(x, h, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_step_forward(x, h, Wx, Wh, b)[0]
fb = lambda b: rnn_step_forward(x, h, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dnext_h)
dprev_h_num = eval_numerical_gradient_array(fh, h, dnext_h)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dnext_h)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dnext_h)
db_num = eval_numerical_gradient_array(fb, b, dnext_h)
dx, dprev_h, dWx, dWh, db = rnn_step_backward(dnext_h, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dprev_h error: ', rel_error(dprev_h_num, dprev_h))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
```
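Since the step computes `next_h = tanh(z)` with `z = x @ Wx + prev_h @ Wh + b`, the backward pass scales the incoming gradient by `tanh'(z) = 1 - next_h**2` and then distributes it with the usual matrix-product rules. A sketch of one possible implementation (not the official solution), with a tiny forward-difference spot check on `db`:

```python
import numpy as np

def rnn_step_forward_sketch(x, prev_h, Wx, Wh, b):
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    return next_h, (x, prev_h, Wx, Wh, next_h)

def rnn_step_backward_sketch(dnext_h, cache):
    x, prev_h, Wx, Wh, next_h = cache
    dz = dnext_h * (1 - next_h ** 2)  # backprop through tanh
    dx = dz @ Wx.T
    dprev_h = dz @ Wh.T
    dWx = x.T @ dz
    dWh = prev_h.T @ dz
    db = dz.sum(axis=0)
    return dx, dprev_h, dWx, dWh, db

# Tiny finite-difference spot check on db[0]:
np.random.seed(0)
x, h = np.random.randn(4, 5), np.random.randn(4, 6)
Wx, Wh, b = np.random.randn(5, 6), np.random.randn(6, 6), np.random.randn(6)
out, cache = rnn_step_forward_sketch(x, h, Wx, Wh, b)
dnext_h = np.random.randn(*out.shape)
_, _, _, _, db = rnn_step_backward_sketch(dnext_h, cache)

eps = 1e-6
b2 = b.copy()
b2[0] += eps
out2, _ = rnn_step_forward_sketch(x, h, Wx, Wh, b2)
db0_num = np.sum((out2 - out) * dnext_h) / eps
print(abs(db0_num - db[0]) < 1e-4)  # True
```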
# Vanilla RNN: forward
Now that you have implemented the forward and backward passes for a single timestep of a vanilla RNN, you will combine these pieces to implement an RNN that processes an entire sequence of data.
In the file `cs231n/rnn_layers.py`, implement the function `rnn_forward`. This should be implemented using the `rnn_step_forward` function that you defined above. After doing so run the following to check your implementation. You should see errors less than `1e-7`.
```
N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)
h, _ = rnn_forward(x, h0, Wx, Wh, b)
expected_h = np.asarray([
[
[-0.42070749, -0.27279261, -0.11074945, 0.05740409, 0.22236251],
[-0.39525808, -0.22554661, -0.0409454, 0.14649412, 0.32397316],
[-0.42305111, -0.24223728, -0.04287027, 0.15997045, 0.35014525],
],
[
[-0.55857474, -0.39065825, -0.19198182, 0.02378408, 0.23735671],
[-0.27150199, -0.07088804, 0.13562939, 0.33099728, 0.50158768],
[-0.51014825, -0.30524429, -0.06755202, 0.17806392, 0.40333043]]])
print('h error: ', rel_error(expected_h, h))
```
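`rnn_forward` is just the single-step function applied `T` times while threading the hidden state through. A sketch of one possible implementation (not the official solution), run on the same test shapes as above:

```python
import numpy as np

def rnn_step_forward_sketch(x, prev_h, Wx, Wh, b):
    return np.tanh(x @ Wx + prev_h @ Wh + b)

def rnn_forward_sketch(x, h0, Wx, Wh, b):
    """Apply the single-step function over all T timesteps, collecting hidden states."""
    N, T, D = x.shape
    H = h0.shape[1]
    h = np.zeros((N, T, H))
    prev_h = h0
    for t in range(T):
        prev_h = rnn_step_forward_sketch(x[:, t, :], prev_h, Wx, Wh, b)
        h[:, t, :] = prev_h
    return h

N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)
h = rnn_forward_sketch(x, h0, Wx, Wh, b)
print(h.shape)  # (2, 3, 5)
```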
# Vanilla RNN: backward
In the file `cs231n/rnn_layers.py`, implement the backward pass for a vanilla RNN in the function `rnn_backward`. This should run back-propagation over the entire sequence, calling into the `rnn_step_backward` function that you defined above. You should see errors less than 5e-7.
```
np.random.seed(231)
N, D, T, H = 2, 3, 10, 5
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)
out, cache = rnn_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = rnn_backward(dout, cache)
fx = lambda x: rnn_forward(x, h0, Wx, Wh, b)[0]
fh0 = lambda h0: rnn_forward(x, h0, Wx, Wh, b)[0]
fWx = lambda Wx: rnn_forward(x, h0, Wx, Wh, b)[0]
fWh = lambda Wh: rnn_forward(x, h0, Wx, Wh, b)[0]
fb = lambda b: rnn_forward(x, h0, Wx, Wh, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dh0_num = eval_numerical_gradient_array(fh0, h0, dout)
dWx_num = eval_numerical_gradient_array(fWx, Wx, dout)
dWh_num = eval_numerical_gradient_array(fWh, Wh, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
print('dx error: ', rel_error(dx_num, dx))
print('dh0 error: ', rel_error(dh0_num, dh0))
print('dWx error: ', rel_error(dWx_num, dWx))
print('dWh error: ', rel_error(dWh_num, dWh))
print('db error: ', rel_error(db_num, db))
```
# Word embedding: forward
In deep learning systems, we commonly represent words using vectors. Each word of the vocabulary will be associated with a vector, and these vectors will be learned jointly with the rest of the system.
In the file `cs231n/rnn_layers.py`, implement the function `word_embedding_forward` to convert words (represented by integers) into vectors. Run the following to check your implementation. You should see error around `1e-8`.
```
N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)
out, _ = word_embedding_forward(x, W)
expected_out = np.asarray([
[[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0.42857143, 0.5, 0.57142857]],
[[ 0.42857143, 0.5, 0.57142857],
[ 0.21428571, 0.28571429, 0.35714286],
[ 0., 0.07142857, 0.14285714],
[ 0.64285714, 0.71428571, 0.78571429]]])
print('out error: ', rel_error(expected_out, out))
```
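The forward pass here reduces to NumPy integer (fancy) indexing: indexing the weight matrix with an integer array of shape `(N, T)` yields the `(N, T, D)` output directly. A sketch of one possible implementation (not the official solution), on the same test values as above:

```python
import numpy as np

def word_embedding_forward_sketch(x, W):
    """Integer indexing does the whole job: out[n, t] = W[x[n, t]]."""
    out = W[x]           # (N, T) indices -> (N, T, D) vectors
    cache = (x, W.shape) # enough for the backward pass to scatter gradients
    return out, cache

N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)
out, _ = word_embedding_forward_sketch(x, W)
print(out.shape)  # (2, 4, 3)
```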
# Word embedding: backward
Implement the backward pass for the word embedding function in the function `word_embedding_backward`. After doing so run the following to numerically gradient check your implementation. You should see errors less than `1e-11`.
```
np.random.seed(231)
N, T, V, D = 50, 3, 5, 6
x = np.random.randint(V, size=(N, T))
W = np.random.randn(V, D)
out, cache = word_embedding_forward(x, W)
dout = np.random.randn(*out.shape)
dW = word_embedding_backward(dout, cache)
f = lambda W: word_embedding_forward(x, W)[0]
dW_num = eval_numerical_gradient_array(f, W, dout)
print('dW error: ', rel_error(dW, dW_num))
```
# Temporal Affine layer
At every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the `temporal_affine_forward` and `temporal_affine_backward` functions in the file `cs231n/rnn_layers.py`. Run the following to perform numeric gradient checking on the implementation. You should see errors less than 1e-9.
```
np.random.seed(231)
# Gradient check for temporal affine layer
N, T, D, M = 2, 3, 4, 5
x = np.random.randn(N, T, D)
w = np.random.randn(D, M)
b = np.random.randn(M)
out, cache = temporal_affine_forward(x, w, b)
dout = np.random.randn(*out.shape)
fx = lambda x: temporal_affine_forward(x, w, b)[0]
fw = lambda w: temporal_affine_forward(x, w, b)[0]
fb = lambda b: temporal_affine_forward(x, w, b)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dw_num = eval_numerical_gradient_array(fw, w, dout)
db_num = eval_numerical_gradient_array(fb, b, dout)
dx, dw, db = temporal_affine_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# Temporal Softmax loss
In an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch.
However there is one wrinkle: since we operate over minibatches and different captions may have different lengths, we append `<NULL>` tokens to the end of each caption so they all have the same length. We don't want these `<NULL>` tokens to count toward the loss or gradient, so in addition to scores and ground-truth labels our loss function also accepts a `mask` array that tells it which elements of the scores count towards the loss.
Since this is very similar to the softmax loss function you implemented in assignment 1, we have implemented this loss function for you; look at the `temporal_softmax_loss` function in the file `cs231n/rnn_layers.py`.
Run the following cell to sanity check the loss and perform numeric gradient checking on the function. You should see an error for dx less than 1e-7.
```
# Sanity check for temporal softmax loss
from cs231n.rnn_layers import temporal_softmax_loss
N, T, V = 100, 1, 10
def check_loss(N, T, V, p):
    x = 0.001 * np.random.randn(N, T, V)
    y = np.random.randint(V, size=(N, T))
    mask = np.random.rand(N, T) <= p
    print(temporal_softmax_loss(x, y, mask)[0])
check_loss(100, 1, 10, 1.0) # Should be about 2.3
check_loss(100, 10, 10, 1.0) # Should be about 23
check_loss(5000, 10, 10, 0.1) # Should be about 2.3
# Gradient check for temporal softmax loss
N, T, V = 7, 8, 9
x = np.random.randn(N, T, V)
y = np.random.randint(V, size=(N, T))
mask = (np.random.rand(N, T) > 0.5)
loss, dx = temporal_softmax_loss(x, y, mask, verbose=False)
dx_num = eval_numerical_gradient(lambda x: temporal_softmax_loss(x, y, mask)[0], x, verbose=False)
print('dx error: ', rel_error(dx, dx_num))
```
# RNN for image captioning
Now that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file `cs231n/classifiers/rnn.py` and look at the `CaptioningRNN` class.
Implement the forward and backward pass of the model in the `loss` function. For now you only need to implement the case where `cell_type='rnn'` for vanilla RNNs; you will implement the LSTM case later. After doing so, run the following to check your forward pass using a small test case; you should see error less than `1e-10`.
```
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13
model = CaptioningRNN(word_to_idx,
input_dim=D,
wordvec_dim=W,
hidden_dim=H,
cell_type='rnn',
dtype=np.float64)
# Set all model parameters to fixed values
for k, v in model.params.items():
model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)
features = np.linspace(-1.5, 0.3, num=(N * D)).reshape(N, D)
captions = (np.arange(N * T) % V).reshape(N, T)
loss, grads = model.loss(features, captions)
expected_loss = 9.83235591003
print('loss: ', loss)
print('expected loss: ', expected_loss)
print('difference: ', abs(loss - expected_loss))
```
Run the following cell to perform numeric gradient checking on the `CaptioningRNN` class; you should see errors around `5e-6` or less.
```
np.random.seed(231)
batch_size = 2
timesteps = 3
input_dim = 4
wordvec_dim = 5
hidden_dim = 6
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
vocab_size = len(word_to_idx)
captions = np.random.randint(vocab_size, size=(batch_size, timesteps))
features = np.random.randn(batch_size, input_dim)
model = CaptioningRNN(word_to_idx,
input_dim=input_dim,
wordvec_dim=wordvec_dim,
hidden_dim=hidden_dim,
cell_type='rnn',
dtype=np.float64,
)
loss, grads = model.loss(features, captions)
for param_name in sorted(grads):
f = lambda _: model.loss(features, captions)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s relative error: %e' % (param_name, e))
```
# Overfit small data
Similar to the `Solver` class that we used to train image classification models on the previous assignment, on this assignment we use a `CaptioningSolver` class to train image captioning models. Open the file `cs231n/captioning_solver.py` and read through the `CaptioningSolver` class; it should look very familiar.
Once you have familiarized yourself with the API, run the following to make sure your model can overfit a small sample of training examples (the cell below uses 50). You should see losses of less than 0.1.
```
np.random.seed(231)
small_data = load_coco_data(max_train=50)
small_rnn_model = CaptioningRNN(
cell_type='rnn',
word_to_idx=data['word_to_idx'],
input_dim=data['train_features'].shape[1],
hidden_dim=512,
wordvec_dim=256,
)
small_rnn_solver = CaptioningSolver(small_rnn_model, small_data,
update_rule='adam',
num_epochs=50,
batch_size=25,
optim_config={
'learning_rate': 5e-3,
},
lr_decay=0.95,
verbose=True, print_every=10,
)
small_rnn_solver.train()
# Plot the training losses
plt.plot(small_rnn_solver.loss_history)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Training loss history')
plt.show()
```
# Test-time sampling
Unlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption, so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the vocabulary at each timestep, and feed the sample as input to the RNN at the next timestep.
In the file `cs231n/classifiers/rnn.py`, implement the `sample` method for test-time sampling. After doing so, run the following to sample from your overfitted model on both training and validation data. The samples on training data should be very good; the samples on validation data probably won't make sense.
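Schematically, the decoding loop looks like the sketch below. All names here (`step_fn`, `W_embed`, `W_vocab`, `b_vocab`) are illustrative placeholders, and this variant takes the argmax at each step rather than sampling; the real implementation belongs in `CaptioningRNN.sample`:

```python
import numpy as np

def greedy_sample(h0, W_embed, step_fn, W_vocab, b_vocab, start_token=1, max_length=30):
    """Greedy decoding sketch: at each timestep, embed the previous word,
    advance the hidden state, score the vocabulary, and feed the best word back in.
    h0: (N, H) initial hidden state; step_fn(x, h) -> next hidden state."""
    N = h0.shape[0]
    captions = np.zeros((N, max_length), dtype=int)
    prev_word = np.full(N, start_token, dtype=int)
    h = h0
    for t in range(max_length):
        x = W_embed[prev_word]             # (N, W) embeddings of the previous words
        h = step_fn(x, h)                  # one recurrent step
        scores = h.dot(W_vocab) + b_vocab  # (N, V) vocabulary scores
        prev_word = scores.argmax(axis=1)  # greedy choice; could also sample
        captions[:, t] = prev_word
    return captions
```

The key difference from training is that the model's own output, not the ground-truth word, is fed back in at each step.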
```
for split in ['train', 'val']:
minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
gt_captions, features, urls = minibatch
gt_captions = decode_captions(gt_captions, data['idx_to_word'])
sample_captions = small_rnn_model.sample(features)
sample_captions = decode_captions(sample_captions, data['idx_to_word'])
for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):
plt.imshow(image_from_url(url))
plt.title('%s\n%s\nGT:%s' % (split, sample_caption, gt_caption))
plt.axis('off')
plt.show()
```
# NumPy, Statistics, Probability
## Statistics
In this part we will review concepts of descriptive statistics.
Descriptive statistics seeks to describe, summarize, and understand data.
For this we use measures of central tendency and measures of variability.
The purpose of the measures of central tendency is to provide descriptive information about the numeric value considered most typical for a quantitative variable.
The measures of central tendency are
* **Mean**
Given the n numbers ${\{x_{1},x_{2},\ldots ,x_{n}\}}$, the arithmetic mean is defined as:
\begin{equation}
\bar {x} = {\frac {1}{n}} \sum _{i=1}^{n}x_{i} ={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}
\end{equation}
* **Median**
Let ${ x_{1},x_{2},x_{3},\ldots ,x_{n} }$ be the data of a sample sorted in increasing order, and let $M_{e}$ denote the median; we distinguish two cases:
a) If n is odd, the median is the value in position $(n+1)/2$ once the data have been sorted (in increasing or decreasing order), because it is the central value. That is: $M_{e}=x_{{(n+1)/2}}$.
b) If n is even, the median is the arithmetic mean of the two central values. When $n$ is even, the two data points at the center of the sample occupy positions $n/2$ and $(n/2)+1$. That is: $M_{e}=(x_{{{\frac{n}{2}}}}+x_{{{{\frac{n}{2}}}+1}})/2$.
* **Mode**
It is the most frequent value in a data set.
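As a quick illustration with invented data, the three measures of central tendency can be computed directly in NumPy (`np.bincount` works as a mode here because the data are small non-negative integers):

```python
import numpy as np

datos = np.array([2, 3, 3, 5, 7, 7, 7, 9])   # invented sample data
media = datos.mean()                  # 43 / 8 = 5.375
mediana = np.median(datos)            # even n: mean of the two central values -> 6.0
moda = np.bincount(datos).argmax()    # most frequent value -> 7
print(media, mediana, moda)
```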
The measures of variability or dispersion indicate whether the data values are close to each other or, on the contrary, widely scattered. These measures are defined in terms of the distance between the data and some statistic of central tendency.
The measures of variability are
* **Range**
Let ${ x_{1},x_{2},x_{3},\ldots ,x_{n} }$ be the data of a sample sorted in increasing order; the range is $x_{n} - x_{1}$
* **Variance**
Given a set of values of a variable, the variance is computed as follows:
\begin{equation}
\sigma_{n}^{2} = \frac{1}{n} \sum _{i=1}^{n} (x_i - \bar{X})^{2}
\end{equation}
Where:
$x_{i}$: each data point
$\bar{X}$: mean of the data
$n$: number of data points
* **Standard Deviation**
It is the square root of the variance
\begin{equation}
\sigma_{n} = \sqrt{\frac{1}{n} \sum _{i=1}^{n} (x_i - \bar{X})^{2}}
\end{equation}
* **Coefficient of Variation**
It is the standard deviation divided by the mean
\begin{equation}
CV = \frac{\sigma_{n}}{\bar{X}} \cdot 100
\end{equation}
NumPy has functions to compute all of these measures.
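As an illustration with a small invented sample, the dispersion measures defined above can be computed as:

```python
import numpy as np

datos = np.array([2, 3, 3, 5, 7, 7, 7, 9], dtype=float)  # invented sample data
rango = datos.max() - datos.min()   # 9 - 2 = 7
varianza = datos.var()              # population variance (ddof=0), matching the formula above
desvio = datos.std()                # square root of the variance
cv = desvio / datos.mean() * 100    # coefficient of variation, as a percentage
print(rango, varianza, desvio, cv)
```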
---
Let's see how to compute some descriptive statistics using NumPy.
For that, we will use data from a text file with three populations:
* conejos (hares)
* linces (lynxes)
* zanahorias (carrots)
We will read the data from the file using NumPy's `genfromtxt`
The first column of the matrix corresponds to the year, the second to the hare population, the third to the lynx population, and the fourth to the carrot population.
Then we will turn each of its columns into arrays, which we recently learned to use. How do we do this?
```
import numpy as np
data_location = '../Data/populations.txt'
data = np.genfromtxt(data_location, skip_header=1, delimiter='\t')
data
```
We define variables with the index corresponding to each population:
```
anno_col_index = 0
conejos_col_index = 1
linces_col_index = 2
zanahorias_col_index = 3
```
We will create a NumPy array for each population and for the year, using slicing (if you don't remember it, review the NumPy class):
```
anno = data[:, anno_col_index]
#print(anno)
conejos = data[:, conejos_col_index]
#print(conejos)
linces = data[:, linces_col_index]
#print(linces)
zanahorias = data[:, zanahorias_col_index]
#print(zanahorias)
```
Finally, we will create a variable poblaciones holding all the data of the matrix data except the "# year" column
```
poblaciones = data[:, 1:]
poblaciones
```
<a id="section_descriptive"></a>
### Descriptive statistics
[back to TOC](#section_toc)
For each variable of the historical series, we compute the mean and the standard deviation. We round the values to 2 decimal places.
We will use the NumPy methods
* **mean**: computes the mean of the values passed in the first parameter https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html
* **std**: computes the standard deviation of the values passed in the first parameter https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html
* **around**: rounds the values to the number of decimals passed as a parameter; the default is 0 https://docs.scipy.org/doc/numpy/reference/generated/numpy.around.html
Now we will compute the mean of each population using the matrix `poblaciones` and NumPy's `mean` method, rounding to two decimal places.
Which value of `axis` should we use? Why?
Answer: Each population is represented by a column in the poblaciones matrix, so to compute the mean per population we must collapse the rows, and for that the value of axis must be 0
```
print ("  Hares, Lynxes, Carrots")
print ("Mean:", np.around(poblaciones.mean(axis=0), decimals=2))
print ("Std:", np.around(poblaciones.std(axis=0), decimals=2))
```
Next, we compute for each species the year in which its population was largest.
For that we will use NumPy's `argmax` method, which returns the index of the maximum value in the array https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html
As in the previous step, the populations are represented by columns, so we must collapse rows and thus axis must be 0.
`indice_max_poblacion` will have as many elements as populations considered (3 in this case: hares, lynxes, carrots), and each element will be **the index of the row** where that population reached its maximum number of individuals.
```
indice_max_poblacion = np.argmax(poblaciones, axis=0)
indice_max_poblacion
```
Therefore, the hare population peaked in the fourth row, the lynx population in the fifth row, and the carrot population in the first row.
Let's see which years those rows correspond to, using fancy indexing (if you don't remember it, please review the numpy1 guide!)
```
annos_con_maximos = anno[indice_max_poblacion]
annos_con_maximos
```
So the maximum hare population was in 1903, the lynx maximum in 1904, and the carrot maximum in 1900
```
# Fancy Indexing
print ("  Hares, Lynxes, Carrots")
print ("Years of maximum population:", anno[indice_max_poblacion])
```
As a bonus, let's plot these three populations to check whether the results we obtained match the plot.
In a few days we will have a visualization class and will look in detail at libraries such as `matplotlib`, which for now we use in a very basic way.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(anno, conejos, anno, linces, anno, zanahorias)
plt.legend(('Hares', 'Lynxes', 'Carrots'), loc=(1.05, 0.5))
plt.show()
```
<div id="ejercicio1" style="float:left;width: 100%;">
<div style="float:left;width: 15%;"><img src="../../../common/icons/ponete_a_prueba.png" style="align:left"/> </div>
<div style="float:left;width: 85%;"><label>
<b>Exercise</b>:
In which years was any of the populations above 50000?
Hint: boolean indexing
Hint 2: we can solve it using the poblaciones matrix or using the population arrays separately
We can self-check the exercises by looking at the plot</label></div>
</div>
<div id="ejercicio2" style="float:left;width: 100%;">
<div style="float:left;width: 15%;"><img src="../../../common/icons/ponete_a_prueba.png" style="align:left"/> </div>
<div style="float:left;width: 85%;"><label>
<b>Exercise</b>:
In which two years did each species have its lowest population levels?
Hint: look up the documentation of NumPy's `argsort`
Hint 2: we can solve it using the poblaciones matrix or using the population arrays separately
We can self-check the exercises by looking at the plot</label></div>
</div>
#### Covariance
Covariance is a value that indicates the degree of joint variation of two random variables with respect to their means.
It is the basic quantity for determining whether there is a dependence between the two variables
The NumPy method that computes the covariance is `cov`
https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html
```
np.cov([conejos, linces, zanahorias])
```
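A small invented example (separate from the populations data) showing what `np.cov` returns: the diagonal holds each variable's sample variance (with `ddof=1`), and the off-diagonal entries are the covariances:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])   # y = 2x: perfectly linearly related
C = np.cov([x, y])
print(C[0, 0], x.var(ddof=1))        # diagonal entry equals the sample variance of x
print(C[0, 1])                       # positive covariance: x and y move together
```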
<div id="caja1" style="float:left;width: 100%;">
<div style="float:left;width: 15%;"><img src="../../../common/icons/para_seguir_pensando.png" style="align:left"/> </div>
<div style="float:left;width: 85%;"><label>Now let's try to answer these questions:
* **Can I compare the different variances? Why?**
If the variables are measured in the same units and their values are of similar magnitudes, I can compare their variances. But only in that case, not in general.
* **Which variable has the largest variance? How does this show in the plot?**
We can look at the values on the diagonal of the covariance matrix to obtain the variances of these variables.
The variable with the largest variance is the hare population; its value is in element (0,0) of the variance-covariance matrix.
In the plot we can see that the dispersion of the hare series is similar to that of the lynx series, but the hare series has a larger range (difference between the maximum and minimum values), so it contributes more to the sum of distances to the mean.
* **What does a positive covariance mean? And a negative one?**
Two variables X and Y have positive covariance when they tend to be above their means at the same time, and negative covariance when, at the same time, one tends to be below its mean while the other is above.
In contrast, X and Y have covariance close to zero when each variable can be above or below its mean independently of what the other does.
Covariance measures the linear relationship between the two variables, that is, how closely the relationship resembles a linear function.</label></div>
</div>
#### Correlation
The Pearson correlation coefficient is a measure of the linear relationship between two quantitative random variables.
Unlike covariance, Pearson correlation is independent of the variables' scale of measurement.
We can define the Pearson correlation coefficient as an index that can be used to measure the degree of relationship between two variables, as long as both are quantitative.
The NumPy method that computes Pearson correlation is `corrcoef`. The elements of the correlation matrix take values between -1 and 1 inclusive. It is equivalent to the normalized covariance matrix.
https://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html
```
np.corrcoef([conejos, linces, zanahorias])
```
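A small invented example showing the equivalence stated above: `corrcoef` is the covariance matrix normalized by the standard deviations, so a perfect negative linear relation yields -1:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([8.0, 6.0, 4.0, 2.0])   # y decreases linearly as x grows
C = np.cov([x, y])
R = np.corrcoef([x, y])
print(R[0, 1])                                  # -1.0: perfect negative linear relation
print(C[0, 1] / np.sqrt(C[0, 0] * C[1, 1]))     # same value, from the covariance matrix
```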
```
from multiprocessing import Pool
import igraph
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import stats
import time
# Import clock, accommodating different versions of the time library (time.clock was removed in Python 3.8)
try:
clock = time.clock
except AttributeError:
clock = lambda : time.clock_gettime(1)
import copy
# Display options
np.set_printoptions(precision=2)
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 300
from matplotlib import rc
import matplotlib as mpl
import seaborn as sns
plt.rcParams.update({
"text.usetex": True,
"font.family": "sans-serif",
"font.sans-serif": ["Computer Modern Sans serif"]})
## for Palatino and other serif fonts use:
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Palatino"],
})
# Local modules
import sys
sys.path.append("modules/")
import sampling
import unbiased_estimation
import utils
# Set pool_size for multiprocessing
pool_size = 18
```
# Setup
```
def plot_chains(data, clusts1, clusts2, save_name=None):
f, axarr = plt.subplots(ncols=2)
utils.plot_clusts(data, clusts1, axarr[0])
utils.plot_clusts(data, clusts2, axarr[1])
if save_name is not None: plt.savefig(save_name)
plt.show()
def ex6_gen_data(Ndata, sd, sd0=1, K=2, dp_alpha=10):
# TRANSLATION OF TAMARA BRODERICK's CODE INTO PYTHON
# (https://github.com/tbroderick/mlss2015_bnp_tutorial)
#
# generate Gaussian mixture model data for inference later
#
# Args:
# Ndata: number of data points to generate
# sd: covariance matrix of data points around the
# cluster-specific mean is [sd^2, 0; 0, sd^2];
# i.e. this is the standard deviation in either direction
# sd0: std for prior mean
#
# Returns:
# x: an Ndata x 2 matrix of data points
# z: an Ndata-long vector of cluster assignments
# mu: a K x 2 matrix of cluster means,
# where K is the number of clusters
# matrix of cluster centers: one in each quadrant
mu = np.random.normal(scale=sd0, size=[K, 2])
# vector of component frequencies
rho = stats.dirichlet.rvs(alpha=dp_alpha*np.ones(K))[0]
# assign each data point to a component
z = np.random.choice(range(K), p=rho, replace=True, size=Ndata)
# draw each data point according to the cluster-specific
# likelihood of its component
x = mu[z] + np.random.normal(scale=sd, size=[Ndata,2])
return x
def crp_gibbs_couple(
data, sd, sd0, initz1, initz2,alpha=0.01, plot=True,
log_freq=None, maxIters=100, coupling="Maximal", save_base=None):
"""
Args:
coupling: method of coupling must be "Common_RNG", "Maximal" or "Optimal" ("Common_RNG" used to be "Naive")
"""
# initialize the sampler
z1, z2 = initz1, initz2
z1s, z2s = [z1.copy()], [z2.copy()]
dists_by_iter = []
# set frequency at which to log state of the chain
if log_freq is None: log_freq = int(maxIters/10)
# run the Gibbs sampler
for I in range(maxIters):
z1, z2 = sampling.gibbs_sweep_couple(
data, z1.copy(), z2.copy(), sd, sd0,
alpha=alpha, coupling=coupling)
# data counts at each cluster
clusts1, clusts2 = utils.z_to_clusts(z1), utils.z_to_clusts(z2)
z1s.append(z1); z2s.append(z2)
# compute and log distance between partitions
dist_between_partitions = utils.adj_dists_fast(clusts1, clusts2)
dists_by_iter.append(dist_between_partitions)
if (I%log_freq==0 or dist_between_partitions==0) and plot:
print("Iteration %04d/%04d"%(I, maxIters))
print("n_clusts: ", len(clusts1), len(clusts2))
save_name = save_base + "_%04d.png"%I if save_base is not None else None
plot_chains(data, clusts1, clusts2, save_name=save_name)
if dist_between_partitions == 0: break
return z1, dists_by_iter
```
# Figure 1A
```
def run_rep(K, Ndata, sd=2., sd0=2., alpha=0.5, lag=200, maxIters=int(1e5)):
"""run_rep runs a replicate and returns the trace and time to coupling for maximal and optimal couplings"""
np.random.seed() # set random seed in each process so multi-processing replicates are not identical.
data = ex6_gen_data(Ndata, sd, sd0, K=K)
initz1 = sampling.crp_gibbs(data, sd, sd0, initz, alpha=alpha, plot=False, maxIters=lag)
initz2 = initz.copy()
# simulate maximal coupling
st = clock()
_, trace_maximal = crp_gibbs_couple(
data, sd, sd0, initz1.copy(), initz2.copy(), alpha=alpha, plot=False, maxIters=maxIters,
coupling="Maximal", save_base=None)
end = clock()
time_maximal = end-st
# simulate common rng coupling
st = clock()
_, trace_rng = crp_gibbs_couple(
data, sd, sd0, initz1.copy(), initz2.copy(), alpha=alpha, plot=False, maxIters=maxIters,
coupling="Common_RNG", save_base=None)
end = clock()
time_rng = end-st
# simulate optimal coupling
st = clock()
_, trace_optimal = crp_gibbs_couple(
data, sd, sd0, initz1.copy(), initz2.copy(), alpha=alpha, plot=False, maxIters=maxIters,
coupling="Optimal", save_base=None)
end = clock()
time_optimal = end-st
return trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng
n_reps = 250
Ndata, K, sd, sd0, alpha = 150, 4, 2., 2.5, 0.2
initz = np.zeros(Ndata, dtype=int)  # note: the np.int alias was removed in recent NumPy versions
lag = 250 # number of lag iterations
maxIters = 2000
traces_by_coupling = {"Optimal":[], "Maximal":[], "Common_RNG":[]}
times_by_coupling = {"Optimal":[], "Maximal":[], "Common_RNG":[]}
run_in_parallel = True
if run_in_parallel:
def simulate(rep):
result = run_rep(K=K, Ndata=Ndata, sd=sd, sd0=sd0, alpha=alpha, lag=lag, maxIters=maxIters)
return result
with Pool(pool_size) as p:
results = p.map(simulate, range(n_reps))
for (trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng) in results:
traces_by_coupling["Optimal"].append(trace_optimal)
traces_by_coupling["Maximal"].append(trace_maximal)
traces_by_coupling["Common_RNG"].append(trace_rng)
times_by_coupling["Optimal"].append(time_optimal)
times_by_coupling["Maximal"].append(time_maximal)
times_by_coupling["Common_RNG"].append(time_rng)
else:
for rep in range(n_reps):
trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng = run_rep(
K=K, Ndata=Ndata, sd=sd, sd0=sd0, alpha=alpha, lag=lag, maxIters=maxIters)
traces_by_coupling["Optimal"].append(trace_optimal)
traces_by_coupling["Maximal"].append(trace_maximal)
traces_by_coupling["Common_RNG"].append(trace_rng)
times_by_coupling["Optimal"].append(time_optimal)
times_by_coupling["Maximal"].append(time_maximal)
times_by_coupling["Common_RNG"].append(time_rng)
dirname = "figure1_results/"
if not os.path.exists(dirname):
print("Will make directory %s" %dirname)
os.makedirs(dirname)
fn_base = dirname + "N=150_K=4_sd=2_sd0=2.5_alpha=0.2"
traces_by_coupling_150_4_2_25_02 = copy.deepcopy(traces_by_coupling)
traces_fn = fn_base + "_traces.npy"
np.save(traces_fn, traces_by_coupling_150_4_2_25_02)
traces_by_coupling_150_4_2_25_02 = np.load(traces_fn, allow_pickle=True).item()
times_by_coupling_150_4_2_25_02 = copy.deepcopy(times_by_coupling)
times_fn = fn_base + "_meeting_times.npy"
np.save(times_fn, times_by_coupling_150_4_2_25_02)
times_by_coupling_150_4_2_25_02 = np.load(times_fn, allow_pickle=True).item()
title = "Dirichlet Process Mixture Model"
utils.meeting_times_plots(
traces_by_coupling_150_4_2_25_02, times_by_coupling_150_4_2_25_02,
couplings_plot=['Optimal', 'Maximal', 'Common_RNG'],
couplings_colors=['#2025df', '#39f810','#fe01b5'], title=title, alpha=1.0, nbins=8, max_time=200,
linewidth=1.5, iter_interval=5, n_traces_plot=2, max_iter=1000
)
```
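As background for the couplings compared in these figures, here is a minimal, self-contained sketch of a maximal coupling of two discrete distributions (independent of the notebook's `sampling` and `utils` modules). Under a maximal coupling the two draws coincide with probability equal to the overlap mass $\sum_i \min(p_i, q_i)$:

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Sample a pair (X, Y) with X ~ p and Y ~ q that maximizes P(X == Y)."""
    overlap = np.minimum(p, q)
    w = overlap.sum()                      # overlap mass = meeting probability
    if rng.random() < w:                   # the two draws meet: sample from the overlap
        z = rng.choice(len(p), p=overlap / w)
        return z, z
    # otherwise draw independently from the normalized residuals
    x = rng.choice(len(p), p=(p - overlap) / (1 - w))
    y = rng.choice(len(q), p=(q - overlap) / (1 - w))
    return x, y

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
rng = np.random.default_rng(0)
meet = np.mean([x == y for x, y in (maximal_coupling(p, q, rng) for _ in range(20000))])
print(meet)  # close to the overlap mass 0.2 + 0.3 + 0.2 = 0.7
```

The per-vertex and per-datapoint couplings used below follow this same overlap/residual decomposition.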
# Figure 1B
```
# Generate an Erdos-Renyi random graph
n, p = 20, 0.15
g = igraph.Graph.Erdos_Renyi(n, p)
## number of vertices in the graph
nvertices = g.vcount()
def rinit(g):
"""greedy initialization of graph coloring. Adds a new color whenever needed.
"""
nvertices = g.vcount()
vertex_colors = -np.ones([nvertices], dtype=int)
color_ids = set()
for ivertex in range(nvertices):
n_i = igraph.Graph.neighbors(g, ivertex)
legal_colors = color_ids.difference(vertex_colors[n_i])
if len(legal_colors) == 0:
new_color_id = len(color_ids)
color_ids.add(new_color_id)
legal_colors.add(new_color_id)
vertex_colors[ivertex] = min(legal_colors)
return vertex_colors
vertex_colors_init = rinit(g)
## all possible colours
ncolors = len(set(vertex_colors_init))+1
all_colours = np.array(sns.color_palette("Paired", n_colors=ncolors))
def color_probs(g, ncolors, n, vertex_colors):
"""color_probs returns uniform probability of new color assignments
of vertex across all the legal colors, i.e. those not shared
by neighbors.
Args:
g: igraph Graph object
ncolors: number of different colors
n: index of node to re-color
vertex_colors: array of indices of current colors
"""
legal = np.ones(ncolors)
neighbors = igraph.Graph.neighbors(g, n)
legal[list(set(vertex_colors[neighbors]))] = 0.
probs = legal / sum(legal)
return probs
## Markov chain
def single_kernel(g, ncolors, vertex_colors, n=None):
"""single_kernel makes a single Markov step by reassigning the color of a randomly chosen vertex.
Args:
g: graph object
ncolors: total number of colors that may be used.
vertex_colors: color assignment of each vertex. An np.array
of ints with values between 0 and ncolors-1.
Returns:
New assignments of vertex colors
"""
if n is None: n = np.random.choice(g.vcount())
v_probs = color_probs(g, ncolors, n, vertex_colors)
vertex_colors[n] = np.random.choice(ncolors, p=v_probs)
return vertex_colors
def gibbs_sweep_single(g, ncolors, vertex_colors):
for n in range(g.vcount()): vertex_colors = single_kernel(g, ncolors, vertex_colors.copy(), n)
return vertex_colors
# utilities for the color relabeling step
def color_ordering(ncolors, vertex_colors):
"""color_ordering returns the order of occurrence of each color in vertex_colors.
Unused colors are assigned an order greater than the number of unique colors in
vertex_colors.
"""
complete_list_of_colors = np.array(list(vertex_colors) + list(range(ncolors)))
idx_of_first_occurrence = [np.where(complete_list_of_colors==c)[0][0] for c in range(ncolors)]
return np.argsort(idx_of_first_occurrence)
def relabel_colors(ncolors, vertex_colors, new_order):
old_ordering = color_ordering(ncolors, vertex_colors)
vertex_colors_new = vertex_colors.copy()
for c in range(ncolors):
vertex_colors_new[np.where(vertex_colors==old_ordering[c])] = new_order[c]
return vertex_colors_new
def max_coupling(v1_probs, v2_probs):
"""max_coupling as described in Jacob's chapter 3 notes.
"""
ncolors = len(v1_probs)
# compute overlap pmf
overlap = np.min([v1_probs, v2_probs], axis=0)
overlap_size = np.sum(overlap)
overlap_size = np.min([1.0, overlap_size]) # protect from rounding error
if np.random.choice(2, p=[1-overlap_size, overlap_size]) == 1:
newz = np.random.choice(ncolors, p=overlap/overlap_size)
return newz, newz
# sample from complements independently
v1_probs -= overlap
v1_probs /= (1-overlap_size)
v2_probs -= overlap
v2_probs /= (1-overlap_size)
newz1 = np.random.choice(ncolors, p=v1_probs)
newz2 = np.random.choice(ncolors, p=v2_probs)
return newz1, newz2
def opt_coupling(v1_probs, v2_probs, clusts1, clusts2, intersection_sizes):
"""opt_coupling returns a sample from the optimal coupling of v1_probs and v2_probs.
Args:
v1_probs, v2_probs: marginals for chains 1 and 2
clusts1, clusts2: color group assignments chains 1 and 2
"""
assert len(v1_probs) == len(v2_probs)
ncolors = len(v1_probs)
pairwise_dists = utils.pairwise_dists(clusts1, clusts2, intersection_sizes, allow_new_clust=False)
v1_color, v2_color = utils.optimal_coupling(
v1_probs, v2_probs, pairwise_dists, normalize=True,
change_size=100)
return v1_color, v2_color
def double_kernel(g, ncolors, vertex_colors1, vertex_colors2, n, clusts1, clusts2,
intersection_sizes, coupling="Maximal"):
"""double_kernel simulates one step for a pair of coupled Markov chains over colorings.
A vertex, n_i, is selected uniformly at random from the set of all vertices and has its
color reassigned. Marginally this assigment is uniformly random over the set of
allowable colors. The joint distribution of their coupling is set by the coupling argument.
Args:
g: graph object
ncolors: total number of possible colors
vertex_colors1, vertex_colors2: current color assignments of all vertices in both chains
n: index of vertex to recolor
coupling: method of coupling the Gibbs proposal: "Maximal", "Optimal", "Common_RNG" or "Random"
Returns:
vertex_colors1, vertex_colors2 : new assignments of vertex colors.
"""
# remove node n from clusts and intersection sizes
clusts1[vertex_colors1[n]].remove(n)
clusts2[vertex_colors2[n]].remove(n)
intersection_sizes[vertex_colors1[n], vertex_colors2[n]] -= 1
# compute marginal probabilities
v1_probs = color_probs(g, ncolors, n, vertex_colors1)
v2_probs = color_probs(g, ncolors, n, vertex_colors2)
# Sample new color assignments from coupling
if coupling == "Maximal":
v1_color, v2_color = max_coupling(v1_probs, v2_probs)
elif coupling == "Common_RNG":
v1_color, v2_color = utils.naive_coupling(v1_probs, v2_probs)
elif coupling == "Random":
# This is an independent coupling
v1_color = np.random.choice(ncolors, p=v1_probs)
v2_color = np.random.choice(ncolors, p=v2_probs)
else:
# This defines the coupling by solving an optimal transport problem.
assert coupling == "Optimal"
v1_color, v2_color = opt_coupling(v1_probs, v2_probs, clusts1, clusts2, intersection_sizes)
# update group assignments and intersection sizes
clusts1[v1_color].add(n); clusts2[v2_color].add(n)
intersection_sizes[v1_color, v2_color] += 1
vertex_colors1[n], vertex_colors2[n] = v1_color, v2_color
return vertex_colors1, vertex_colors2
def gibbs_sweep_couple(g, ncolors, vertex_colors1, vertex_colors2, coupling="Maximal"):
"""gibbs_sweep_couple performs Gibbs updates for every node in the graph, coupling
each update across the two chains.
We compute intersection sizes once at the start and then update it for better time complexity.
"""
# Compute clusters and intersection sizes from scratch once
clusts1 = utils.z_to_clusts(vertex_colors1, total_clusts=ncolors)
clusts2 = utils.z_to_clusts(vertex_colors2, total_clusts=ncolors)
intersection_sizes = np.array([[len(c1.intersection(c2)) for c2 in clusts2] for c1 in clusts1])
# sample from conditional for each vertex
for n in range(g.vcount()):
vertex_colors1, vertex_colors2 = double_kernel(
g, ncolors, vertex_colors1.copy(), vertex_colors2.copy(), n,
clusts1, clusts2, intersection_sizes, coupling=coupling)
return vertex_colors1, vertex_colors2
def plot_coupling(colors_history_coupled, dists_by_iteration, max_iters_plot=200):
I = min([max_iters_plot, len(colors_history_coupled)])
plt.figure(figsize=[3,0.5*I])
sep_dist = 0.2
nvertices = len(colors_history_coupled[0][0])
for i in range(I):
vertex_colors1, vertex_colors2 = colors_history_coupled[i]
plt.scatter(np.arange(nvertices),i*np.ones(nvertices), c=all_colours[vertex_colors1], s=100)
plt.scatter(np.arange(nvertices),i*np.ones(nvertices) + sep_dist, c=all_colours[vertex_colors2], s=100)
plt.xlabel("Vertex")
plt.ylabel("Iteration")
plt.show()
plt.plot(dists_by_iteration)
plt.xlabel("Iteration")
plt.ylabel("Distance Between Adjacency Matrices")
plt.show()
def run_rep(n=20, p=0.15, max_iter=1000):
g = igraph.Graph.Erdos_Renyi(n, p)
# initialization for chain 1
colors_history = [rinit(g)]
vertex_colors_init = rinit(g)
## all possible colours
ncolors = len(set(colors_history[-1]))+1 # good
nmcmc = 1000
for imcmc in range(nmcmc):
vertex_colors_new = single_kernel(g, ncolors, colors_history[-1].copy())
colors_history.append(vertex_colors_new)
vertex_colors1_init = colors_history[-1]
vertex_colors2_init = rinit(g)
nmcmc = int(max_iter)
# Optimal Coupling
dists_by_iteration = []
colors_history_coupled = [(vertex_colors1_init.copy(), vertex_colors2_init.copy())]
st = clock()
for imcmc in range(nmcmc):
vertex_colors1, vertex_colors2 = colors_history_coupled[-1]
vertex_colors1, vertex_colors2 = gibbs_sweep_couple(
g, ncolors, vertex_colors1.copy(), vertex_colors2.copy(), coupling="Optimal")
dist = utils.dist_from_labeling(vertex_colors1, vertex_colors2)
dists_by_iteration.append(dist)
colors_history_coupled.append([vertex_colors1, vertex_colors2])
if dist==0: break
end = clock()
trace_optimal = dists_by_iteration
time_optimal = end-st
# Maximal coupling
dists_by_iteration = []
colors_history_coupled = [(vertex_colors1_init.copy(), vertex_colors2_init.copy())]
st = clock()
for imcmc in range(nmcmc):
vertex_colors1, vertex_colors2 = colors_history_coupled[-1]
vertex_colors1, vertex_colors2 = gibbs_sweep_couple(
g, ncolors, vertex_colors1.copy(), vertex_colors2.copy(), coupling="Maximal")
dist = utils.dist_from_labeling(vertex_colors1, vertex_colors2)
dists_by_iteration.append(dist)
colors_history_coupled.append([vertex_colors1, vertex_colors2])
if dist==0: break
end = clock()
trace_maximal = dists_by_iteration
time_maximal = end-st
# Common RNG
dists_by_iteration = []
colors_history_coupled = [(vertex_colors1_init.copy(), vertex_colors2_init.copy())]
st = clock()
for imcmc in range(nmcmc):
vertex_colors1, vertex_colors2 = colors_history_coupled[-1]
vertex_colors1, vertex_colors2 = gibbs_sweep_couple(
g, ncolors, vertex_colors1.copy(), vertex_colors2.copy(), coupling="Common_RNG")
dist = utils.dist_from_labeling(vertex_colors1, vertex_colors2)
dists_by_iteration.append(dist)
colors_history_coupled.append([vertex_colors1, vertex_colors2])
if dist==0: break
end = clock()
trace_rng = dists_by_iteration
time_rng = end-st
return trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng
n_reps = 250
n, p = 25, 0.2 # even better (don't delete)
maxIters = int(1e5)
traces_by_coupling = {"Optimal":[], "Maximal":[], "Common_RNG":[]}
times_by_coupling = {"Optimal":[], "Maximal":[], "Common_RNG":[]}
run_in_parallel = True
if run_in_parallel:
def simulate(_):
result = run_rep(n, p, max_iter=maxIters)
return result
    with Pool(pool_size) as pool:  # renamed from p to avoid shadowing the edge probability p
        results = pool.map(simulate, range(n_reps))
for (trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng) in results:
traces_by_coupling["Optimal"].append(trace_optimal)
traces_by_coupling["Maximal"].append(trace_maximal)
traces_by_coupling["Common_RNG"].append(trace_rng)
times_by_coupling["Optimal"].append(time_optimal)
times_by_coupling["Maximal"].append(time_maximal)
times_by_coupling["Common_RNG"].append(time_rng)
else:
for rep in range(n_reps):
if (10*rep)%n_reps==0: print("Rep %04d/%04d"%(rep, n_reps))
trace_maximal, trace_optimal, trace_rng, time_maximal, time_optimal, time_rng = run_rep(
n, p, max_iter=maxIters)
traces_by_coupling["Optimal"].append(trace_optimal)
traces_by_coupling["Maximal"].append(trace_maximal)
traces_by_coupling["Common_RNG"].append(trace_rng)
times_by_coupling["Optimal"].append(time_optimal)
times_by_coupling["Maximal"].append(time_maximal)
times_by_coupling["Common_RNG"].append(time_rng)
fn_base = "./figure1_results/N=25_p=0.2"
if not os.path.exists(fn_base):
print("Will make directory %s" %fn_base)
os.makedirs(fn_base)
traces_by_coupling_N25_p02 = copy.deepcopy(traces_by_coupling)
traces_fn = fn_base + "_traces.npy"
np.save(traces_fn, traces_by_coupling_N25_p02)
times_by_coupling_N25_p02 = copy.deepcopy(times_by_coupling)
times_fn = fn_base + "_meeting_times.npy"
np.save(times_fn, times_by_coupling_N25_p02)
title = "Graph Coloring"
utils.meeting_times_plots(
traces_by_coupling, times_by_coupling,
couplings_plot=['Optimal', 'Maximal', 'Common_RNG'],
couplings_colors=['#2025df', '#39f810','#fe01b5'], title=title, alpha=1.0, nbins=8, max_time=1.7,
linewidth=1.5, iter_interval=None, n_traces_plot=2, max_iter=None
)
```
# Predicting Titanic Survivers
Like the Titanic itself, this is my maiden voyage (when it comes to Kaggle contests, that is!). I've completed the Data Science track on DataCamp, but I'm a relative newbie when it comes to machine learning. I'm going to attempt to work my way through the Titanic: Machine Learning contest. My aim is to submit an initial entry as quickly as possible to get a baseline score, and then to improve on it by first looking at missing data, then engineering key features before establishing a secondary baseline and trying to improve the model itself. I'd like to achieve a score of 0.80.
Please feel free to post comments or make suggestions as to what I may be doing wrong or could do better, and consider upvoting if you find the notebook useful!
Because this notebook has built up over time, I have commented out some of the lines that output files, so that when I want to test a small change I don't regenerate files for parts of the notebook that haven't changed and that I am not especially interested in. If you are forking this code you can simply remove the hash to output the file. I have also experimented with different models, so the model shown at any given stage is not necessarily the most efficient one (it's just the one I tried last).
# Import the Libraries and Data
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.ensemble import (AdaBoostClassifier,BaggingClassifier,ExtraTreesClassifier,GradientBoostingClassifier,RandomForestClassifier,VotingClassifier)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression, Perceptron, SGDClassifier, PassiveAggressiveClassifier, RidgeClassifierCV
from sklearn.metrics import accuracy_score,auc,classification_report,confusion_matrix,mean_squared_error, precision_score, recall_score,roc_curve
from sklearn.model_selection import cross_val_score,cross_val_predict,cross_validate,train_test_split,GridSearchCV,KFold,learning_curve,RandomizedSearchCV,StratifiedKFold
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn import ensemble, linear_model,neighbors, svm, tree
from scipy.stats import randint
from xgboost import XGBClassifier
#ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
df_train=pd.read_csv('../input/train.csv',sep=',')
df_test=pd.read_csv('../input/test.csv',sep=',')
df_data = df_train.append(df_test) # The entire data: train + test.
PassengerId = df_test['PassengerId']
Submission=pd.DataFrame()
Submission['PassengerId'] = df_test['PassengerId']
```
# Stage 1 : Explore the Data and create a basic model on raw data
# Explore the data Statistically
### Number of rows and columns
```
# How big are the training and test datasets
print(df_train.shape)
print("----------------------------")
print(df_test.shape)
```
### Column Names
```
# What are the column names
df_train.columns
```
### Data Types
```
# What type of data object are in each column and how many missing values are there
df_data.info()
```
### Missing Data
How much Data is missing from the training and test datasets, how important is that data and how much data cleaning might be required.
```
#check for any other unusable values
print(pd.isnull(df_data).sum())
```
## Observations on missing data.
There are 177 missing ages in the training data and 86 missing ages in the test data. Age is an important feature, so it is worth spending time to address this properly.
There are 687 missing Cabin entries in the training data and 327 in the test data. At this stage I'm not sure how important this feature is, so I'm going to revisit it once I know more about it.
There are 2 missing Embarked values in the training data and 1 missing Fare in the test data; at this stage these do not represent a problem.
## Statistical Overview of the data
```
# Get a statistical overview of the training data
df_train.describe()
# Get a statistical overview of the data
df_test.describe()
```
Note: the mean and standard deviation of each column are reasonably close together across the two datasets, so it is safe to assume that any relationships we discover in the training data should hold similarly in the test data.
```
# Take a look at some sample data
df_train.head(5)
df_train.tail(5)
```
# Explore Data Graphically
## Survival by Age, Class and Gender
```
grid = sns.FacetGrid(df_train, col = "Pclass", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
## Survival by Age, Port of Embarkation and Gender
```
grid = sns.FacetGrid(df_train, col = "Embarked", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
This embarkation visualization indicates that a large proportion of passengers embarked at port 'S', with smaller numbers at 'C' and 'Q'. It also shows that, regardless of embarkation port, more women survived than men. It doesn't seem to show any correlation between passenger ID and embarkation port. Interestingly, for port 'Q' it appears that only one man survived, while women with passenger IDs below 500 seem to survive and those above do not; this may be chance, but it does look odd compared to 'S' and 'C'.
## Survival by Age, Number of Siblings and Gender
```
grid = sns.FacetGrid(df_train, col = "SibSp", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
## Survival by Age, Number of parch and Gender
```
grid = sns.FacetGrid(df_train, col = "Parch", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
# Pairplots
To get a very basic idea of the relationships between the different features we can use pairplots from seaborn.
```
g = sns.pairplot(df_train[[u'Survived', u'Pclass', u'Sex', u'Age', u'Parch', u'Fare', u'Embarked']], hue='Survived', palette = 'seismic',size=4,diag_kind = 'kde',diag_kws=dict(shade=True),plot_kws=dict(s=50) )
g.set(xticklabels=[])
```
# Create simple model
Create a baseline score by using only the standard numeric data in a very basic model; this will be used to see how much any changes we make to the data or model improve performance.
```
NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Fare']
# create test and training data
test = df_test[NUMERIC_COLUMNS].fillna(-1000)
data_to_train = df_train[NUMERIC_COLUMNS].fillna(-1000)
y=df_train['Survived']
X_train, X_test, y_train, y_test = train_test_split(data_to_train, y, test_size=0.3,random_state=21, stratify=y)
clf = SVC()
clf.fit(X_train, y_train)
# Print the accuracy
print("Accuracy: {}".format(clf.score(X_test, y_test)))
```
# Create initial predictions¶
```
Submission['Survived']=clf.predict(test)
print(Submission.head())
print('predictions generated')
```
# Make first Submission
```
# write data frame to csv file
#Submission.set_index('PassengerId', inplace=True)
#Submission.to_csv('myfirstsubmission.csv',sep=',')
print('file created')
```
The result of this first submission was a score of 0.57894. This is only just above random: if I'd simply flipped a fair coin for each passenger I could have achieved this kind of score. So there is plenty of room for improvement.
# Stage 2 : Clean Data & Engineer features to improve results
## Cleaning the data : Filling in the blanks
There are a number of missing values, including Fare, Embarked, Age and Cabin. I started off simply using the average value for Fare, Embarked and Age. However, after doing some visual data analysis it became obvious that I could use other features to make better estimates: for Age, the average for people with the same title; for Embarked, an average based on Fare; and for Fare, an average based on Embarked.
Cabin has so much missing data that estimating it would likely add a level of noise that would not be helpful.
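As a minimal sketch of the grouped-average idea (the column names mirror the Titanic schema, but the toy values here are made up for illustration), a missing Age can be filled with the median Age of passengers sharing the same title:

```
import pandas as pd

# Toy frame mimicking the Titanic columns (values are illustrative only)
df = pd.DataFrame({
    "Title": ["Mr", "Mr", "Miss", "Miss", "Master"],
    "Age":   [30.0, None, 22.0, None, 4.0],
})

# Fill each missing Age with the median Age of passengers sharing that title
df["Age"] = df["Age"].fillna(df.groupby("Title")["Age"].transform("median"))
print(df["Age"].tolist())  # [30.0, 30.0, 22.0, 22.0, 4.0]
```

The same `groupby(...).transform(...)` pattern extends to the Fare and Embarked estimates made later in this notebook.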
## Feature conversion
Some models work better with categorical data, others with numerical data, while some work best with binary data. In some cases this is as simple as changing male and female to numeric values like 0 or 1; similarly we can replace a categorical value like embarkation port 'S' with the numeric value 1, or the title Master with the value 3. Values like Age that range from 1 to 80 can be scaled so they are represented by a value between 0 and 1; scaling means that features are not given a disproportionate importance simply because they are larger. Another option for values like Age or Fare is to split them into more manageable bands, which can then be treated as categories. Alternatively, we can put each categorical value into a column of its own, marking each column with 0 if it doesn't apply and 1 if it does. After doing some initial data exploration I decided it was easiest to convert the data into both bands and columns, so that I could compare the models with the different options and decide which was best for each before making final predictions.
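As a small illustration of the three structures described above (raw value, band, and one column per band), using a made-up Fare column rather than the real dataset:

```
import pandas as pd

fares = pd.DataFrame({"Fare": [7.25, 8.05, 26.0, 71.28]})

# 1. Raw numeric value: use the Fare column as-is
# 2. Banded: quantile membership as a small integer category
fares["FareBand"] = pd.qcut(fares["Fare"], 2, labels=[1, 2]).astype(int)

# 3. One column per category, marked 0/1 (or True/False in recent pandas)
fares = pd.concat([fares, pd.get_dummies(fares["FareBand"], prefix="FareBand")], axis=1)
print(list(fares.columns))  # ['Fare', 'FareBand', 'FareBand_1', 'FareBand_2']
```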
## Feature Engineering
Here I attempt to manipulate the existing data to create new features for the model: for example, family size can be calculated from the combination of siblings and parents, and title can be extracted from name.
## Estimate missing Fare Data based on Embarkation
While there is relatively little missing Fare data, the range of possible values is large, so rather than simply using the median of all fares we can look at the passenger class or embarkation port in order to use a more appropriate average. We'll start by looking at boxplots of the fares to make sure we are making sound assumptions before we go on to estimating the missing values.
```
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True,figsize=(12,6))
sns.boxplot(data = df_data, x = "Pclass", y = "Fare",ax=ax1);
plt.figure(1)
sns.boxplot(data = df_data, x = "Embarked", y = "Fare",ax=ax2);
plt.show()
# Fill the na values in Fare based on embarked data
embarked = ['S', 'C', 'Q']
for port in embarked:
    # index the per-port medians by label; positional indexing would pick the wrong port
    fare_to_impute = df_data.groupby('Embarked')['Fare'].median()[port]
df_data.loc[(df_data['Fare'].isnull()) & (df_data['Embarked'] == port), 'Fare'] = fare_to_impute
# Fare in df_train and df_test:
df_train["Fare"] = df_data['Fare'][:891]
df_test["Fare"] = df_data['Fare'][891:]
print('Missing Fares Estimated')
```
## FareBand feature
```
#fill in missing Fare value in training set based on mean fare for that Pclass
for x in range(len(df_train["Fare"])):
if pd.isnull(df_train["Fare"][x]):
pclass = df_train["Pclass"][x] #Pclass = 3
df_train["Fare"][x] = round(df_train[df_train["Pclass"] == pclass]["Fare"].mean(), 8)
#fill in missing Fare value in test set based on mean fare for that Pclass
for x in range(len(df_test["Fare"])):
if pd.isnull(df_test["Fare"][x]):
pclass = df_test["Pclass"][x] #Pclass = 3
df_test["Fare"][x] = round(df_test[df_test["Pclass"] == pclass]["Fare"].mean(), 8)
#map Fare values into groups of numerical values
df_data["FareBand"] = pd.qcut(df_data['Fare'], 8, labels = [1, 2, 3, 4,5,6,7,8]).astype('int')
df_train["FareBand"] = pd.qcut(df_train['Fare'], 8, labels = [1, 2, 3, 4,5,6,7,8]).astype('int')
df_test["FareBand"] = pd.qcut(df_test['Fare'], 8, labels = [1, 2, 3, 4,5,6,7,8]).astype('int')
df_train[["FareBand", "Survived"]].groupby(["FareBand"], as_index=False).mean()
print('FareBand feature created')
```
***Note:*** There are several ways a machine learning model can consume a feature: you can use the discrete value like Fare, you can make it categorical by grouping it into bands as I have done here, or you can turn each band into a column of its own. Different models work differently depending on how you give them the data, so I'm going to create all three structures for some features like Fare and Age and see how they compare. You should not over-emphasize a feature by using multiple structures of the same data in one model, so we'll filter the different structures before we evaluate the models.
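To illustrate why the representations are compared separately, here is a hedged sketch on synthetic data (the names `fare`, `band` and the target are invented for this example, not taken from the notebook): the same classifier is scored on each representation on its own, never on both at once:

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
# Toy data: 100 "passengers", a raw fare and its 4-band version (values are made up)
fare = rng.exponential(scale=30, size=100)
band = np.digitize(fare, np.percentile(fare, [25, 50, 75]))
y = (fare > np.median(fare)).astype(int)  # illustrative target

# Score the same model on each representation separately
scores = {}
for name, X in [("raw", fare.reshape(-1, 1)), ("banded", band.reshape(-1, 1))]:
    scores[name] = cross_val_score(
        RandomForestClassifier(random_state=0), X, y, cv=3).mean()
print(scores)
```

Whichever representation scores better is the one to keep for that feature; using both at once would double-count the same information.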
## Embarked Feature
```
#map each Embarked value to a numerical value
embarked_mapping = {"S": 1, "C": 2, "Q": 3}
df_data["Embarked"] = df_data["Embarked"].map(embarked_mapping)
# split Embarked into df_train and df_test:
df_train["Embarked"] = df_data["Embarked"][:891]
df_test["Embarked"] = df_data["Embarked"][891:]
print('Embarked feature created')
df_data[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean()
```
## Estimate missing Embarkation Data
```
# Fill the na values in Embanked based on fareband data
fareband = [1,2,3,4,5,6,7,8]  # FareBand was built with 8 quantile bands above
for fare in fareband:
embark_to_impute = df_data.groupby('FareBand')['Embarked'].median()[fare]
df_data.loc[(df_data['Embarked'].isnull()) & (df_data['FareBand'] == fare), 'Embarked'] = embark_to_impute
# split Embarked into df_train and df_test:
df_train["Embarked"] = df_data['Embarked'][:891]
df_test["Embarked"] = df_data['Embarked'][891:]
print('Missing Embarkation Estimated')
```
We will come back to fill in the missing age data a little later. Initially I created an estimate based on the mean age and standard deviation, using random numbers to distribute the age estimates evenly, which worked, but there is a better way using title. As we have not yet extracted the title data, we will wait to estimate ages until we have.
## Gender Feature
```
# convert categories to Columns
dummies=pd.get_dummies(df_train[['Sex']], prefix_sep='_') #Gender
df_train = pd.concat([df_train, dummies], axis=1)
testdummies=pd.get_dummies(df_test[['Sex']], prefix_sep='_') #Gender
df_test = pd.concat([df_test, testdummies], axis=1)
print('Gender Feature added ')
#map each Gender value to a numerical value
gender_mapping = {"female": 0, "male": 1}
df_data["Sex"] = df_data['Sex'].map(gender_mapping)
df_data["Sex"]=df_data["Sex"].astype('int')
# split Sex into df_train and df_test:
df_train["Sex"] = df_data["Sex"][:891]
df_test["Sex"] = df_data["Sex"][891:]
print('Gender Category created')
```
## Name Length
```
df_data['NameLen'] = df_data['Name'].apply(lambda x: len(x))
print('Name Length calculated')
# split to test and training
df_train["NameLen"] = df_data["NameLen"][:891]
df_test["NameLen"] = df_data["NameLen"][891:]
df_train["NameBand"] = pd.cut(df_train["NameLen"], bins=5, labels = [1,2,3,4,5])
df_test["NameBand"] = pd.cut(df_test["NameLen"], bins=5, labels = [1,2,3,4,5])
# convert AgeGroup categories to Columns
dummies=pd.get_dummies(df_train[["NameBand"]].astype('category'), prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["NameBand"]].astype('category'), prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print("Name Length categories created")
pd.qcut(df_train['NameLen'],5).value_counts()
```
## Title Feature
```
#Get titles
df_data["Title"] = df_data.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
#Unify common titles.
df_data["Title"] = df_data["Title"].replace('Mlle', 'Miss')
df_data["Title"] = df_data["Title"].replace(['Mme', 'Dona', 'Ms'], 'Mrs')
df_data["Title"] = df_data["Title"].replace(['Jonkheer','Don'],'Mr')
df_data["Title"] = df_data["Title"].replace(['Capt','Major', 'Col','Rev','Dr'], 'Millitary')
df_data["Title"] = df_data["Title"].replace(['Lady', 'Countess','Sir'], 'Honor')
# split Title into df_train and df_test:
df_train["Title"] = df_data['Title'][:891]
df_test["Title"] = df_data['Title'][891:]
# convert Title categories to Columns
titledummies=pd.get_dummies(df_train[['Title']], prefix_sep='_') #Title
df_train = pd.concat([df_train, titledummies], axis=1)
ttitledummies=pd.get_dummies(df_test[['Title']], prefix_sep='_') #Title
df_test = pd.concat([df_test, ttitledummies], axis=1)
print('Title categories added')
```
## Title Category
```
# Mapping titles
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Millitary": 5, "Honor": 6}
df_data["TitleCat"] = df_data['Title'].map(title_mapping)
df_data["TitleCat"] = df_data["TitleCat"].astype(int)
df_train["TitleCat"] = df_data["TitleCat"][:891]
df_test["TitleCat"] = df_data["TitleCat"][891:]
print('Title Category created')
```
## Fill age based on title
The visualisations of age by title suggest that estimating a passenger's age from the average age for their title may produce a more accurate estimate.
```
titles = ['Master', 'Miss', 'Mr', 'Mrs', 'Millitary','Honor']
for title in titles:
age_to_impute = df_data.groupby('Title')['Age'].median()[title]
df_data.loc[(df_data['Age'].isnull()) & (df_data['Title'] == title), 'Age'] = age_to_impute
# Age in df_train and df_test:
df_train["Age"] = df_data['Age'][:891]
df_test["Age"] = df_data['Age'][891:]
print('Missing Ages Estimated')
```
## Create AgeBands
```
# sort Age into band categories
bins = [0,12,24,45,60,np.inf]
labels = ['Child', 'Young Adult', 'Adult','Older Adult','Senior']
df_train["AgeBand"] = pd.cut(df_train["Age"], bins, labels = labels)
df_test["AgeBand"] = pd.cut(df_test["Age"], bins, labels = labels)
print('Age Feature created')
# convert AgeGroup categories to Columns
dummies=pd.get_dummies(df_train[["AgeBand"]], prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["AgeBand"]], prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print('AgeBand feature created')
```
## Visualize Age Data
```
# Visualise Age Data
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
axis1.set_title('Training Age values - Titanic')
axis2.set_title('Test Age values - Titanic')
# plot original Age values
df_train['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
#df_test['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
# plot new Age Values
#df_train['Age'].hist(bins=70, ax=axis2)
df_test['Age'].hist(bins=70, ax=axis2)
# peaks for survived/not survived passengers by their age
facet = sns.FacetGrid(df_train, hue="Survived",palette = 'seismic',aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, df_train['Age'].max()))
facet.add_legend()
sns.boxplot(data = df_train, x = "Title", y = "Age");
```
## Lone Travellers Feature
```
df_train["Alone"] = np.where(df_train['SibSp'] + df_train['Parch'] + 1 == 1, 1,0) # People travelling alone
df_test["Alone"] = np.where(df_test['SibSp'] + df_test['Parch'] + 1 == 1, 1,0) # People travelling alone
print('Lone traveller feature created')
```
## Mother
We know that a higher proportion of women survived than died, but of the women who did not survive, a large number were in families that stayed together. We can add a feature to identify women with children.
```
df_data['Mother'] = (df_data['Title'] == 'Mrs') & (df_data['Parch'] > 0)
df_data['Mother'] = df_data['Mother'].astype(int)
df_train["Mother"] = df_data["Mother"][:891]
df_test["Mother"] = df_data["Mother"][891:]
print('Mother Category created')
```
## Family Size Feature
We know that many families stayed together, and that the bigger the family, the less likely it was to find a lifeboat together.
```
df_train["Family Size"] = (df_train['SibSp'] + df_train['Parch'] + 1)
df_test["Family Size"] = df_test['SibSp'] + df_test['Parch'] + 1
print('Family size feature created')
```
## Family Survival
This is based on code taken from https://www.kaggle.com/shunjiangxu/blood-is-thicker-than-water-friendship-forever
```
# get last name
df_data["Last_Name"] = df_data['Name'].apply(lambda x: str.split(x, ",")[0])
# Set survival value
DEFAULT_SURVIVAL_VALUE = 0.5
df_data["Family_Survival"] = DEFAULT_SURVIVAL_VALUE
# Find Family groups by Fare
for grp, grp_df in df_data[['Survived','Name', 'Last_Name', 'Fare', 'Ticket', 'PassengerId',
'SibSp', 'Parch', 'Age', 'Cabin']].groupby(['Last_Name', 'Fare']):
if (len(grp_df) != 1):
# A Family group is found.
for ind, row in grp_df.iterrows():
smax = grp_df.drop(ind)['Survived'].max()
smin = grp_df.drop(ind)['Survived'].min()
passID = row['PassengerId']
if (smax == 1.0):
df_data.loc[df_data['PassengerId'] == passID, 'Family_Survival'] = 1
elif (smin==0.0):
df_data.loc[df_data['PassengerId'] == passID, 'Family_Survival'] = 0
print("Number of passengers with family survival information:",
df_data.loc[df_data['Family_Survival']!=0.5].shape[0])
# Find Family groups by Ticket
for _, grp_df in df_data.groupby('Ticket'):
if (len(grp_df) != 1):
for ind, row in grp_df.iterrows():
if (row['Family_Survival'] == 0) | (row['Family_Survival']== 0.5):
smax = grp_df.drop(ind)['Survived'].max()
smin = grp_df.drop(ind)['Survived'].min()
passID = row['PassengerId']
if (smax == 1.0):
df_data.loc[df_data['PassengerId'] == passID, 'Family_Survival'] = 1
elif (smin==0.0):
df_data.loc[df_data['PassengerId'] == passID, 'Family_Survival'] = 0
print("Number of passenger with family/group survival information: "
+str(df_data[df_data['Family_Survival']!=0.5].shape[0]))
# Family_Survival in df_train and df_test:
df_train["Family_Survival"] = df_data['Family_Survival'][:891]
df_test["Family_Survival"] = df_data['Family_Survival'][891:]
```
## Cabin feature
```
# check if cabin inf exists
df_data["HadCabin"] = (df_data["Cabin"].notnull().astype('int'))
# split HadCabin into df_train and df_test:
df_train["HadCabin"] = df_data["HadCabin"][:891]
df_test["HadCabin"] = df_data["HadCabin"][891:]
print('Cabin feature created')
```
## Deck feature
```
# Extract the deck letter from the Cabin value, marking missing cabins as "N"
df_data["Deck"] = df_data.Cabin.str.extract('([A-Za-z])', expand=False)
df_data["Deck"] = df_data["Deck"].fillna("N")
# Map each deck letter to a number (decks F, G and T also appear in the data)
deck_mapping = {"N": 0, "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "T": 8}
df_data["Deck"] = df_data["Deck"].map(deck_mapping).astype('int')
# Split to training and test
df_train["Deck"] = df_data["Deck"][:891]
df_test["Deck"] = df_data["Deck"][891:]
print('Deck feature created')
# convert categories to Columns
dummies=pd.get_dummies(df_train[['Deck']].astype('category'), prefix_sep='_') #Gender
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[['Deck']].astype('category'), prefix_sep='_') #Gender
df_test = pd.concat([df_test,dummies], axis=1)
print('Deck Categories created')
```
## Ticket feature
```
## Treat Ticket by extracting the ticket prefix. When there is no prefix it returns X.
Ticket = []
for i in list(df_data.Ticket):
if not i.isdigit() :
Ticket.append(i.replace(".","").replace("/","").strip().split(' ')[0]) #Take prefix
else:
Ticket.append("X")
df_data["Ticket"] = Ticket
df_data["Ticket"].head()
df_train["Ticket"] = df_data["Ticket"][:891]
df_test["Ticket"] = df_data["Ticket"][891:]
print('Ticket feature created')
```
## Ticket Type Feature
```
# ticket prefix
df_data['TicketRef'] = df_data['Ticket'].apply(lambda x: str(x)[0])
df_data['TicketRef'].value_counts()
#df_data["ticketBand"] = pd.qcut(df_data['ticket_ref'], 5, labels = [1, 2, 3, 4,5]).astype('int')
# split to test and training
df_train["TicketRef"] = df_data["TicketRef"][:891]
df_test["TicketRef"] = df_data["TicketRef"][891:]
# convert AgeGroup categories to Columns
dummies=pd.get_dummies(df_train[["TicketRef"]].astype('category'), prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["TicketRef"]].astype('category'), prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print("TicketBand categories created")
```
## Passenger Class Feature
```
# convert AgeGroup categories to Columns
dummies=pd.get_dummies(df_train[["Pclass"]].astype('category'), prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["Pclass"]].astype('category'), prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print("pclass categories created")
```
## Free Passage
I noticed that the minimum fare is 0.00 and that the ticket type for some of those passengers is 'LINE'. All of the people with a zero ticket cost seem to be male with no siblings, so it's possible these people were in some way associated with 'crew' positions. The majority of the people with a ticket price of 0.00 seem not to have survived, so I'm making 'Free' a feature to see whether it makes a difference to the model.
```
# create free feature based on fare = 0
df_data["Free"] = np.where(df_data['Fare'] ==0, 1,0)
df_data["Free"] = df_data['Free'].astype(int)
df_train["Free"] = df_data["Free"][:891]
df_test["Free"] = df_data["Free"][891:]
print('Free Category created')
```
## FareBand
```
Pclass = [1,2,3]
for aclass in Pclass:
fare_to_impute = df_data.groupby('Pclass')['Fare'].median()[aclass]
df_data.loc[(df_data['Fare'].isnull()) & (df_data['Pclass'] == aclass), 'Fare'] = fare_to_impute
df_train["Fare"] = df_data["Fare"][:891]
df_test["Fare"] = df_data["Fare"][891:]
#map Fare values into groups of numerical values
df_train["FareBand"] = pd.qcut(df_train['Fare'], 4, labels = [1, 2, 3, 4]).astype('category')
df_test["FareBand"] = pd.qcut(df_test['Fare'], 4, labels = [1, 2, 3, 4]).astype('category')
# convert FareBand categories to Columns
dummies=pd.get_dummies(df_train[["FareBand"]], prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["FareBand"]], prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print("Fareband categories created")
```
## Embarked categories
```
# convert Embarked categories to Columns
dummies=pd.get_dummies(df_train[["Embarked"]].astype('category'), prefix_sep='_') #Embarked
df_train = pd.concat([df_train, dummies], axis=1)
dummies=pd.get_dummies(df_test[["Embarked"]].astype('category'), prefix_sep='_') #Embarked
df_test = pd.concat([df_test, dummies], axis=1)
print("Embarked feature created")
```
# Exploring the Engineered data
## Missing Data
```
#check for any other unusable values
print(len(df_test.columns))
print(pd.isnull(df_test).sum())
```
## Statistical Overview
```
df_train.describe()
```
# Visualizing age data
We could estimate all of the ages based on the mean and standard deviation of the dataset, but as age is obviously an important feature in predicting survival, we should look at the other features and see if we can work out a way to make a more accurate estimate of age for any given passenger. First let's look at the age distributions of passengers by title.
```
# Groupby title
df_train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
# plot age distribution by title
facet = sns.FacetGrid(data = df_train, hue = "Title", legend_out=True, size = 5)
facet = facet.map(sns.kdeplot, "Age")
facet.add_legend();
```
The age distribution looks slightly suspect and possibly merits further investigation: for example, while Master generally refers to males under 16, there are a number over 40, which might be explained if Master is also a nautical title, as in 'Master Seaman'. You might also expect a roughly normal distribution of ages for any given title, but in many cases this doesn't seem to hold, most likely because our estimated values skew the data. One way to avoid this would be to impute each missing age with a random number based on that title's mean and standard deviation, giving a more natural-looking dataset. We could also use age bands rather than raw age in the model.
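The random-draw alternative mentioned above could be sketched as follows; the toy frame, the seed and the clip-at-zero choice are all my own assumptions, not taken from the notebook:

```
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
df = pd.DataFrame({
    "Title": ["Mr"] * 6,
    "Age":   [20.0, 25.0, 30.0, 35.0, None, None],
})

# Per-title mean and standard deviation of the known ages
stats = df.groupby("Title")["Age"].agg(["mean", "std"])
for idx in df.index[df["Age"].isnull()]:
    m, s = stats.loc[df.loc[idx, "Title"], ["mean", "std"]]
    # Draw around the title's mean so imputed ages keep a natural spread;
    # clip at zero since an age cannot be negative
    df.loc[idx, "Age"] = max(0.0, rng.normal(m, s))

print(df["Age"].isnull().sum())  # 0
```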
### Survival by FareBand and Gender
```
grid = sns.FacetGrid(df_train, col = "FareBand", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
### Survival by Deck and Gender
```
grid = sns.FacetGrid(df_train, col = "Deck", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
### Survival by Family Size and Gender
```
grid = sns.FacetGrid(df_train, col = "Family Size", row = "Sex", hue = "Survived", palette = 'seismic')
grid = grid.map(plt.scatter, "PassengerId", "Age")
grid.add_legend()
grid
```
### Survival by Passenger Class and Family Size
```
# Split into non-survivors (x1) and survivors (x2)
x1 = df_train[df_train["Survived"]==0]
x2 = df_train[df_train["Survived"]==1]
# jointplot creates its own figure for each call, so no subplot grid is needed
sns.jointplot(x="Family Size", y="Pclass", data=x1, kind="kde", color='b');
sns.jointplot(x="Family Size", y="Pclass", data=x2, kind="kde", color='r');
plt.show()
```
### Age Jointplot by Survival
```
sns.jointplot(data=x1, x='PassengerId', y='Age', kind='scatter', color='b')
sns.jointplot(data=x2, x='PassengerId', y='Age', kind='scatter', color='r')
plt.show()
```
# Re-train the model on new features
```
df_train.columns
df_train.head()
```
## Select Columns of Interest
```
# Create list of interesting columns
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','Pclass_1', 'Pclass_2', 'Pclass_3','HadCabin','Free','FareBand_1', 'FareBand_2', 'FareBand_3', 'FareBand_4']
```
# Re-evaluate the model on new features
```
# create test and training data
test = df_test[CATEGORY_COLUMNS].fillna(-1000)
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
X_train, X_test, y_train, y_test = train_test_split(data_to_train, df_train['Survived'], test_size=0.3,random_state=21, stratify=df_train['Survived'])
RandomForest = RandomForestClassifier(random_state = 0)
RandomForest.fit(X_train, y_train)
print('Training complete')
# Print the accuracy
print("Accuracy: {}".format(RandomForest.score(X_test, y_test)))
```
## Feature Correlation
```
#map feature correlation
f,ax = plt.subplots(figsize=(12, 12))
sns.heatmap(df_train[INTERESTING_COLUMNS].corr(),annot=True, linewidths=.5, fmt= '.1f',ax=ax)
```
## Feature Importance (for random forest)
```
RandomForest_checker = RandomForestClassifier()
RandomForest_checker.fit(X_train, y_train)
importances_df = pd.DataFrame(RandomForest_checker.feature_importances_, columns=['Feature_Importance'],
index=X_train.columns)
importances_df.sort_values(by=['Feature_Importance'], ascending=False, inplace=True)
print(importances_df)
```
# Re-forecast predictions based on new features
```
Submission['Survived']=RandomForest.predict(test)
print(Submission.head())
print('Submission created')
```
# Make revised submission
```
# write data frame to csv file
# Submission.set_index('PassengerId', inplace=True)
Submission.to_csv('randomforestcat01.csv',sep=',')
print('file created')
```
The second revised submission scored 0.75598, an improvement on the first revision's 0.64593, which in turn improved on the original score of 0.57894. This advanced the submission to 9117th place on the leaderboard, from the starting point of 10599th! Obviously a step in the right direction, but still needing work.
# Stage 3 : Test Different Models and parameters
## Split data into test and training
```
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
#print(df_test.columns)
# create test and training data
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
test = df_test[CATEGORY_COLUMNS].fillna(-1000)
X_train, X_val, y_train, y_val = train_test_split(data_to_train, prediction, test_size=0.3, random_state=21, stratify=prediction)
print('Data split')
```
## AdaBoost
```
adaboost=AdaBoostClassifier()
adaboost.fit(X_train, y_train)
y_pred = adaboost.predict(X_val)
acc_adaboost = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_adaboost)
```
## Bagging
```
bagging=BaggingClassifier()
bagging.fit(X_train, y_train)
y_pred = bagging.predict(X_val)
acc_bagging = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_bagging)
```
## Decision Tree
```
#Decision Tree
decisiontree = DecisionTreeClassifier()
decisiontree.fit(X_train, y_train)
y_pred = decisiontree.predict(X_val)
acc_decisiontree = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_decisiontree)
```
## Extra Trees
```
# ExtraTreesClassifier
et = ExtraTreesClassifier()
et.fit(X_train, y_train)
y_pred = et.predict(X_val)
acc_et = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_et)
```
## Gaussian Naive Bayes
```
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, y_train)
y_pred = gaussian.predict(X_val)
acc_gaussian = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_gaussian)
```
## Gradient Boosting
```
# Gradient Boosting Classifier
gbk = GradientBoostingClassifier()
gbk.fit(X_train, y_train)
y_pred = gbk.predict(X_val)
acc_gbk = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_gbk)
```
## K Nearest Neighbors
```
# KNN or k-Nearest Neighbors
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
y_pred = knn.predict(X_val)
acc_knn = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_knn)
```
## Linear Discriminant Analysis
```
linear_da=LinearDiscriminantAnalysis()
linear_da.fit(X_train, y_train)
y_pred = linear_da.predict(X_val)
acc_linear_da = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_linear_da)
```
## LinearSVC
```
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, y_train)
y_pred = linear_svc.predict(X_val)
acc_linear_svc = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_linear_svc)
```
## Logistic Regression
```
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_val)
acc_logreg = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_logreg)
```
## MLP
```
MLP = MLPClassifier()
MLP.fit(X_train, y_train)
y_pred = MLP.predict(X_val)
acc_MLP= round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_MLP)
```
## Passive Aggressive
```
passiveaggressive = PassiveAggressiveClassifier()
passiveaggressive.fit(X_train, y_train)
y_pred = passiveaggressive.predict(X_val)
acc_passiveaggressive = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_passiveaggressive)
```
## Perceptron
```
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, y_train)
y_pred = perceptron.predict(X_val)
acc_perceptron = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_perceptron)
```
## Random Forest
```
# Random Forest
randomforest = RandomForestClassifier(random_state = 0)
randomforest.fit(X_train, y_train)
y_pred = randomforest.predict(X_val)
acc_randomforest = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_randomforest)
```
## Ridge Classifier
```
ridge = RidgeClassifierCV()
ridge.fit(X_train, y_train)
y_pred = ridge.predict(X_val)
acc_ridge = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_ridge)
```
## Stochastic Gradient Descent
```
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, y_train)
y_pred = sgd.predict(X_val)
acc_sgd = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_sgd)
```
## Support Vector Machines
A support vector machine finds the decision boundary that maximises the margin between classes. `SVC` supports both dense and sparse input, and kernels such as RBF allow non-linear decision boundaries.
```
# instantiate model
clf = SVC()
# fit model
clf.fit(X_train, y_train)
# predict results
y_pred = clf.predict(X_val)
# check accuracy
acc_clf = round(accuracy_score(y_pred, y_val) * 100, 2)
#print accuracy
print(acc_clf)
```
## xgboost
```
# xgboost
xgb = XGBClassifier(n_estimators=10)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_val)
acc_xgb = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_xgb)
```
## Comparing the results
```
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression', 'Ridge Classifier',
'Random Forest', 'Naive Bayes', 'Linear SVC', 'MLP','AdaBoost','Linear discriminant','Passive Aggressive',
'Decision Tree', 'Gradient Boosting Classifier','Extra Trees','Stochastic Gradient Descent','Perceptron','xgboost'],
'Score': [acc_clf, acc_knn, acc_logreg,acc_ridge,acc_randomforest, acc_gaussian,acc_linear_svc, acc_MLP,acc_adaboost,acc_linear_da,acc_passiveaggressive,acc_decisiontree,acc_gbk,acc_et,acc_sgd,acc_perceptron,acc_xgb]})
models.sort_values(by='Score', ascending=False)
```
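A single 70/30 split gives a noisy estimate of each model's accuracy, so the ranking above can shuffle between runs. Averaging over cross-validation folds is more stable. A minimal sketch of the same comparison, using synthetic data from `make_classification` in place of the Titanic features:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the Titanic feature matrix
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Mean accuracy over 5 folds for each candidate model
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in [
        ("Random Forest", RandomForestClassifier(random_state=0)),
        ("Logistic Regression", LogisticRegression(max_iter=1000)),
    ]
}
print(pd.Series(scores).sort_values(ascending=False))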
# Re-forecast predictions based on best performing model
```
Submission['Survived']=ridge.predict(test)
print(Submission.head(5))
print('Prediction complete')
```
# Make model submission
```
# write data frame to csv file
Submission.set_index('PassengerId', inplace=True)
Submission.to_csv('ridgesubmission02.csv',sep=',')
print('File created')
```
# Stage 4 : Hyper Tuning the Models
```
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
#print(df_test.columns)
# create test and training data
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
test = df_test[CATEGORY_COLUMNS].fillna(-1000)
X_train, X_val, y_train, y_val = train_test_split(data_to_train, prediction, test_size = 0.3,random_state=21, stratify=prediction)
print('Data split')
```
## Support Vector Classifier (SVC)
```
# Support Vector Classifier parameters
param_grid = {'C':np.arange(1, 7),
'degree':np.arange(1, 7),
'max_iter':np.arange(0, 12),
'kernel':['rbf','linear'],
'shrinking':[0,1]}
clf = SVC()
svc_cv=GridSearchCV(clf, param_grid, cv=10)
svc_cv.fit(X_train, y_train)
print("Tuned SVC Parameters: {}".format(svc_cv.best_params_))
print("Best score is {}".format(svc_cv.best_score_))
y_pred = svc_cv.predict(X_val)
acc_svc_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_svc_cv)
```
## Logistic Regression
```
# Logistic Regression
from sklearn.linear_model import LogisticRegression
# create parameter grid as a dictionary where the keys are the hyperparameter names and the values are lists of values that we want to try.
param_grid = {"solver": ['newton-cg','lbfgs','liblinear','sag','saga'],'C': [0.01, 0.1, 1, 10, 100]}
# instantiate classifier
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
logreg_cv = GridSearchCV(logreg, param_grid, cv=30)
logreg_cv.fit(X_train, y_train)
y_pred = logreg_cv.predict(X_val)
print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_))
print("Best score is {}".format(logreg_cv.best_score_))
acc_logreg_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_logreg_cv)
```
## KNN
```
# KNN or k-Nearest Neighbors with GridSearch
# create parameter grid as a dictionary where the keys are the hyperparameter names and the values are lists of values that we want to try.
param_grid = {"n_neighbors": np.arange(1, 50),
"leaf_size": np.arange(20, 40),
"algorithm": ["ball_tree","kd_tree","brute"]
}
# instantiate classifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
knn_cv = GridSearchCV(knn, param_grid, cv=10)
knn_cv.fit(X_train, y_train)
y_pred = knn_cv.predict(X_val)
print("Tuned knn Parameters: {}".format(knn_cv.best_params_))
print("Best score is {}".format(knn_cv.best_score_))
acc_knn_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_knn_cv)
```
## DecisionTree with RandomizedSearch
```
# DecisionTree with RandomizedSearch
# Setup the parameters and distributions to sample from: param_dist
param_dist = {"random_state" : np.arange(0, 10),
"max_depth": np.arange(1, 10),
"max_features": np.arange(1, 10),
"min_samples_leaf": np.arange(1, 10),
"criterion": ["gini","entropy"]}
# Instantiate a Decision Tree classifier: tree
tree = DecisionTreeClassifier()
# Instantiate the RandomizedSearchCV object: tree_cv
tree_cv = RandomizedSearchCV(tree, param_dist, cv=30)
# Fit it to the data
tree_cv.fit(X_train,y_train)
y_pred = tree_cv.predict(X_val)
# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is {}".format(tree_cv.best_score_))
acc_tree_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_tree_cv)
```
## Random Forest
```
# Random Forest
# Setup the parameters and distributions to sample from: param_dist
param_dist = {"random_state" : np.arange(0, 10),
"n_estimators" : np.arange(1, 20),
"max_depth": np.arange(1, 10),
"max_features": np.arange(1, 10),
"min_samples_leaf": np.arange(1, 10),
"criterion": ["gini","entropy"]}
# Instantiate a Random Forest classifier
randomforest = RandomForestClassifier()
# Instantiate the RandomizedSearchCV object: randomforest_cv
randomforest_cv = RandomizedSearchCV(randomforest, param_dist, cv=30)
# Fit it to the data
randomforest_cv.fit(X_train,y_train)
y_pred = randomforest_cv.predict(X_val)
# Print the tuned parameters and score
print("Tuned Random Forest Parameters: {}".format(randomforest_cv.best_params_))
print("Best score is {}".format(randomforest_cv.best_score_))
acc_randomforest_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_randomforest_cv)
```
## Gradient Boosting
```
# Gradient Boosting Classifier
# Setup the parameters and distributions to sample from: param_dist
param_dist = {'max_depth':np.arange(1, 7),
'min_samples_leaf': np.arange(1, 6),
"max_features": np.arange(1, 10),
}
# Instantiate Classifier
gbk = GradientBoostingClassifier()
# Instantiate the RandomizedSearchCV object: gbk_cv
gbk_cv = RandomizedSearchCV(gbk, param_dist, cv=30)
gbk_cv.fit(X_train, y_train)
y_pred = gbk_cv.predict(X_val)
print("Tuned Gradient Boost Parameters: {}".format(gbk_cv.best_params_))
print("Best score is {}".format(gbk_cv.best_score_))
acc_gbk_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_gbk_cv)
```
## xgboost
```
# xgboost
# Setup the parameters and distributions to sample from: param_dist
param_dist = {'learning_rate': [.01, .03, .05, .1, .25], #default: .3
'max_depth': np.arange(1, 10), #default 2
'n_estimators': [10, 50, 100, 300],
'booster':['gbtree','gblinear','dart']
#'seed': 5
}
# Instantiate Classifier
xgb = XGBClassifier()
# Instantiate the RandomizedSearchCV object: xgb_cv
xgb_cv = RandomizedSearchCV(xgb, param_dist, cv=20)
# Fit model
xgb_cv.fit(X_train, y_train)
# Make prediction
y_pred = xgb_cv.predict(X_val)
# Print results
print("xgBoost Parameters: {}".format(xgb_cv.best_params_))
print("Best score is {}".format(xgb_cv.best_score_))
acc_xgb_cv = round(accuracy_score(y_pred, y_val) * 100, 2)
print(acc_xgb_cv)
```
## Comparing the results of the cross validated tuned models (best result)
```
optmodels = pd.DataFrame({
'optModel': ['SVC','KNN','Decision Tree','Gradient Boost','Logistic Regression','xgboost'],
'optScore': [svc_cv.best_score_,knn_cv.best_score_,tree_cv.best_score_,gbk_cv.best_score_,logreg_cv.best_score_,xgb_cv.best_score_]})
optmodels.sort_values(by='optScore', ascending=False)
```
## Comparing the results of the tuned models (accuracy)
```
optmodels = pd.DataFrame({
'optModel': ['SVC','K Nearest Neighbors','Decision Tree','Gradient Boost','Logistic Regression','xgboost'],
'optScore': [acc_svc_cv,acc_knn_cv,acc_tree_cv,acc_gbk_cv,acc_logreg_cv,acc_xgb_cv]})
optmodels.sort_values(by='optScore', ascending=False)
```
## Plotting Learning Curves
```
# define function to plot test and training curves
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=-1, train_sizes=np.linspace(.1, 1.0, 5)):
    """Generate a simple plot of the test and training learning curve."""
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
# Cross validate model with Kfold stratified cross val
kfold = StratifiedKFold(n_splits=10)
# Plot chart for each model
g = plot_learning_curve(svc_cv.best_estimator_,"SVC learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(logreg_cv.best_estimator_,"logistic regression learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(knn_cv.best_estimator_,"knn learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(tree_cv.best_estimator_,"decision tree learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(randomforest_cv.best_estimator_,"random forest learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(gbk_cv.best_estimator_,"gradient boosting learning curves",X_train,y_train,cv=kfold)
g = plot_learning_curve(xgb_cv.best_estimator_,"xg boost learning curves",X_train,y_train,cv=kfold)
```
# Optimising the Model
Adding tuned parameters to the basic models generally improved performance on the training data. These gains did not always translate into the same increase on the test data, due to overfitting.
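One quick way to spot that overfitting is the gap between training and validation accuracy: a large gap means the model has memorised rather than generalised. A sketch on synthetic data (stand-in for the Titanic features):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=21, stratify=y)

# An unconstrained tree can memorise the training data,
# while a depth-limited one is forced to generalise.
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

for name, model in [("deep", deep), ("shallow", shallow)]:
    gap = model.score(X_tr, y_tr) - model.score(X_va, y_va)
    print(f"{name}: train-val gap = {gap:.3f}")
```

The deep tree scores 100% on the training split, so any shortfall on the validation split shows up directly as the gap.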
# Predictions based on tuned model
```
# Select columns
X_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
y_train = df_train["Survived"]
X_test = df_test[CATEGORY_COLUMNS].fillna(-1000)
from sklearn.tree import DecisionTreeClassifier
test = df_test[REVISED_NUMERIC_COLUMNS].fillna(-1000)
# select classifier
#tree = DecisionTreeClassifier(random_state=0,max_depth=5,max_features=7,min_samples_leaf=2,criterion="entropy") #85,87
#tree = DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=4,max_features=7, max_leaf_nodes=None, min_impurity_decrease=0.0,min_impurity_split=None, min_samples_leaf=9,min_samples_split=2, min_weight_fraction_leaf=0.0,presort=False, random_state=8, splitter='best')
#tree = DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=4,max_features=7, max_leaf_nodes=None, min_impurity_decrease=0.0,min_impurity_split=None, min_samples_leaf=9,min_samples_split=2, min_weight_fraction_leaf=0.0,presort=False, random_state=9, splitter='best')
#knn = KNeighborsClassifier(algorithm='kd_tree',leaf_size=20,n_neighbors=5)
#logreg = LogisticRegression(solver='newton-cg')
#xgboost=XGBClassifier(n_estimators= 300, max_depth= 10, learning_rate= 0.01)
#gbk=GradientBoostingClassifier(min_samples_leaf=1,max_features=4,max_depth=5)
#logreg=LogisticRegression(solver='newton-cg',C= 10)
#gboost=GradientBoostingClassifier(random_state= 7,n_estimators=17,min_samples_leaf= 4, max_features=9,max_depth=5, criterion='gini')
randomf=RandomForestClassifier(random_state= 7,n_estimators=17,min_samples_leaf= 4, max_features=9,max_depth=5, criterion='gini')
# train model
randomf.fit(data_to_train, prediction)
# make predictions
Submission['Survived']=randomf.predict(X_test)
#Submission.set_index('PassengerId', inplace=True)
Submission.to_csv('randomforestcats01.csv',sep=',')
print(Submission.head(5))
print('File created')
```
# Stage 5 : Hyper tuning with confusion matrix
I used grid search cross-validation in the previous stages to estimate the best parameters. A confusion matrix shows how well a model actually performs by tabulating correct and incorrect predictions for each class, detail that the headline accuracy score hides.
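As a reminder of how to read the matrix, here is a toy example with hypothetical labels (not the Titanic split): the diagonal holds correct predictions, and precision and recall follow directly from the four cells.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical true labels and predictions (1 = survived)
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0]

# ravel() flattens the 2x2 matrix into (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)                   # 3 1 1 3
print(precision_score(y_true, y_pred))  # tp / (tp + fp) = 0.75
print(recall_score(y_true, y_pred))     # tp / (tp + fn) = 0.75
```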
```
# KNN hyperparameter tuning with confusion matrix
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
# create test and training data
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
X_test2= df_test[CATEGORY_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
X_train, X_test, y_train, y_test = train_test_split(data_to_train, prediction, test_size = 0.3,random_state=21, stratify=prediction)
print('Data Split')
hyperparams = {'algorithm': ['auto'], 'weights': ['uniform', 'distance'] ,'leaf_size': list(range(1,50,5)),
'n_neighbors':[6,7,8,9,10,11,12,14,16,18,20,22]}
gd=GridSearchCV(estimator = KNeighborsClassifier(), param_grid = hyperparams, verbose=True, cv=10, scoring = "roc_auc")
gd.fit(X_train, y_train)
gd.best_estimator_.fit(X_train,y_train)
y_pred=gd.best_estimator_.predict(X_test)
Submission['Survived']=gd.best_estimator_.predict(X_test2)
# Print the results
print('Best Score')
print(gd.best_score_)
print('Best Estimator')
print(gd.best_estimator_)
acc_gd_cv = round(accuracy_score(y_pred, y_test) * 100, 2)
print('Accuracy')
print(acc_gd_cv)
# Generate the confusion matrix and classification report
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred))
print('Classification_report')
print(classification_report(y_test, y_pred))
#Submission.set_index('PassengerId', inplace=True)
print('Sample Prediction')
print(Submission.head(10))
#Submission.to_csv('knngridsearch03.csv',sep=',')
print('KNN prediction created')
# Decision Tree hyperparameter tuning with confusion matrix
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked'] #84
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
# create test and training data
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
X_test2= df_test[CATEGORY_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
X_train, X_test, y_train, y_test = train_test_split(data_to_train, prediction, test_size = 0.3,random_state=21, stratify=prediction)
print('Data Split')
hyperparams = {"random_state" : np.arange(0, 10),
"max_depth": np.arange(1, 10),
"max_features": np.arange(1, 10),
"min_samples_leaf": np.arange(1, 10),
"criterion": ["gini","entropy"]}
gd=GridSearchCV(estimator = DecisionTreeClassifier(), param_grid = hyperparams, verbose=True, cv=10, scoring = "roc_auc")
gd.fit(X_train, y_train)
gd.best_estimator_.fit(X_train,y_train)
y_pred=gd.best_estimator_.predict(X_test)
Submission['Survived']=gd.best_estimator_.predict(X_test2)
# Print the results
print('Best Score')
print(gd.best_score_)
print('Best Estimator')
print(gd.best_estimator_)
acc_gd_cv = round(accuracy_score(y_pred, y_test) * 100, 2)
print('Accuracy')
print(acc_gd_cv)
# Generate the confusion matrix and classification report
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred))
print('Classification_report')
print(classification_report(y_test, y_pred))
#Submission.set_index('PassengerId', inplace=True)
# print head
print(Submission.head(10))
Submission.to_csv('Treegridsearch03.csv',sep=',')
print('Decision Tree prediction created')
# Logistic Regression hyperparameter tuning with confusion matrix
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked']
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked']
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
# create test and training data
data_to_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
X_test2= df_test[CATEGORY_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
X_train, X_test, y_train, y_test = train_test_split(data_to_train, prediction, test_size = 0.3,random_state=21, stratify=prediction)
print('Data Split')
hyperparams = {'solver':["newton-cg", "lbfgs", "liblinear", "sag", "saga"],
'C': [0.01, 0.1, 1, 10, 100]}
gd=GridSearchCV(estimator = LogisticRegression(), param_grid = hyperparams, verbose=True, cv=10, scoring = "roc_auc")
gd.fit(X_train, y_train)
gd.best_estimator_.fit(X_train,y_train)
y_pred=gd.best_estimator_.predict(X_test)
Submission['Survived']=gd.best_estimator_.predict(X_test2)
# Print the results
print('Best Score')
print(gd.best_score_)
print('Best Estimator')
print(gd.best_estimator_)
acc_gd_cv = round(accuracy_score(y_pred, y_test) * 100, 2)
print('Accuracy')
print(acc_gd_cv)
# Generate the confusion matrix and classification report
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred))
print('Classification_report')
print(classification_report(y_test, y_pred))
#Submission.set_index('PassengerId', inplace=True)
# print head
print(Submission.head(10))
Submission.to_csv('Logregwithconfusion01.csv',sep=',')
print('Logistic Regression prediction created')
df_train.columns
# Random Forest with tuned parameters, trained on the full training set
# create test and training data
X_train = df_train[CATEGORY_COLUMNS].fillna(-1000)
y_train = df_train["Survived"]
X_test = df_test[CATEGORY_COLUMNS].fillna(-1000)
randomf=RandomForestClassifier(criterion='gini', n_estimators=700, min_samples_split=10,min_samples_leaf=1,max_features='auto',oob_score=True,random_state=1,n_jobs=-1)
randomf.fit(X_train, y_train)
Submission['Survived']=randomf.predict(X_test)
# Trained on the full training set, so there is no held-out split to score
# against here; the leaderboard result is the evaluation.
#Submission.set_index('PassengerId', inplace=True)
# print head
print(Submission.head(10))
Submission.to_csv('finalrandomforest01.csv',sep=',')
print('Random Forest prediction created')
```
## Plot Area under ROC
```
# List of Machine Learning Algorithm (MLA)
MLA = [
#Ensemble Methods
ensemble.ExtraTreesClassifier(),
ensemble.GradientBoostingClassifier(),
ensemble.RandomForestClassifier(),
#GLM
linear_model.LogisticRegressionCV(),
#Nearest Neighbor
neighbors.KNeighborsClassifier(),
#SVM
svm.SVC(probability=True),
#Trees
#tree.DecisionTreeClassifier(),
#tree.ExtraTreeClassifier(),
]
# Note: X_train/y_train and X_test/y_test must be the held-out split from the
# start of this stage, not the full-data variables reassigned in the last cell.
index = 1
for alg in MLA:
    predicted = alg.fit(X_train, y_train).predict(X_test)
    fp, tp, th = roc_curve(y_test, predicted)
    roc_auc_mla = auc(fp, tp)
    MLA_name = alg.__class__.__name__
    plt.plot(fp, tp, lw=2, alpha=0.3, label='ROC %s (AUC = %0.2f)' % (MLA_name, roc_auc_mla))
    index += 1
plt.title('ROC Curve comparison')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.plot([0,1],[0,1],'r--')
plt.xlim([0,1])
plt.ylim([0,1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
# Stage 6 : Basic Ensemble Modelling
In the last couple of stages I tried a few different models with different parameters to find the one that produced the best results. Another approach is an ensemble model, which generates results from a selection of the best performing models and then feeds those results into another model in a second layer.
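The cells below build that two-layer stack by hand with `cross_val_predict`. For comparison, scikit-learn also ships `StackingClassifier`, which does the out-of-fold bookkeeping automatically. A minimal sketch on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the Titanic features
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=21, stratify=y)

# First layer: diverse base models; second layer: logistic regression
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold predictions feed the second layer
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_va, y_va), 3))
```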
```
REVISED_NUMERIC_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked']
SIMPLE_COLUMNS=['Pclass','Age','SibSp','Parch','Family_Survival','Alone','Sex_female','Sex_male','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','Embarked']
INTERESTING_COLUMNS=['Survived','Pclass','Age','SibSp','Parch','Title','Alone','Mother','Family Size','Family_Survival','Embarked','FareBand','TicketRef']
CATEGORY_COLUMNS=['Pclass','SibSp','Parch','Family Size','Family_Survival','Alone','Mother','Sex_female','Sex_male','AgeBand_Child',
'AgeBand_Young Adult', 'AgeBand_Adult', 'AgeBand_Older Adult',
'AgeBand_Senior','Title_Master', 'Title_Miss','Title_Mr', 'Title_Mrs', 'Title_Millitary','NameBand_1',
'NameBand_2', 'NameBand_3', 'NameBand_4', 'NameBand_5','Embarked','TicketRef_A', 'TicketRef_C', 'TicketRef_F', 'TicketRef_L',
'TicketRef_P', 'TicketRef_S', 'TicketRef_W', 'TicketRef_X','HadCabin','Free']
# create test and training data
data_to_train = df_train[REVISED_NUMERIC_COLUMNS].fillna(-1000)
data_to_test = df_test[REVISED_NUMERIC_COLUMNS].fillna(-1000)
prediction = df_train["Survived"]
X_train, X_val, y_train, y_val = train_test_split(data_to_train, prediction, test_size = 0.3,random_state=21, stratify=prediction)
print('Data Split')
```
## Train first layer
```
#logreg = LogisticRegression()
logreg = LogisticRegression(C=10, solver='newton-cg')
logreg.fit(X_train, y_train)
y_pred_train_logreg = cross_val_predict(logreg,X_val, y_val)
y_pred_test_logreg = logreg.predict(X_test)
print('logreg first layer predicted')
#tree = DecisionTreeClassifier()
tree = DecisionTreeClassifier(random_state=8,min_samples_leaf=6, max_features= 7, max_depth= 4, criterion='gini', splitter='best')
tree.fit(X_train, y_train)
y_pred_train_tree = cross_val_predict(tree,X_val,y_val)
y_pred_test_tree = tree.predict(X_test)
print('decision tree first layer predicted')
# randomforest = RandomForestClassifier()
randomforest = RandomForestClassifier(random_state=8, n_estimators=15, min_samples_leaf= 4, max_features= 6, max_depth=4,criterion='gini')
randomforest.fit(X_train, y_train)
y_pred_train_randomforest = cross_val_predict(randomforest, X_val, y_val)
y_pred_test_randomforest = randomforest.predict(X_test)
print('random forest first layer predicted')
#gbk
gbk = GradientBoostingClassifier(min_samples_leaf=3, max_features= 3, max_depth= 3)
gbk.fit(X_train, y_train)
y_pred_train_gbk = cross_val_predict(gbk, X_val, y_val)
y_pred_test_gbk = gbk.predict(X_test)
print('gbk first layer predicted')
#knn
knn = KNeighborsClassifier(algorithm='auto', leaf_size=36, metric='minkowski',metric_params=None, n_jobs=1, n_neighbors=12, p=2,weights='uniform')
knn.fit(X_train, y_train)
y_pred_train_knn = cross_val_predict(knn, X_val, y_val)
y_pred_test_knn = knn.predict(X_test)  # was gbk.predict(X_test): the knn model should predict here
print('knn first layer predicted')
#clf = SVC()
clf = SVC(C=3, degree=1, kernel='linear', max_iter=1, shrinking=0)
clf.fit(X_train, y_train)
y_pred_train_clf = cross_val_predict(clf, X_val, y_val)
y_pred_test_clf = clf.predict(X_test)
print('clf first layer predicted')
```
## VotingClassifier Ensemble
```
from sklearn.ensemble import VotingClassifier
votingC = VotingClassifier(estimators=[('logreg', logreg_cv.best_estimator_), ('gbk', gbk_cv.best_estimator_),
('tree', tree_cv.best_estimator_), ('randomforest',randomforest_cv.best_estimator_),('knn',knn_cv.best_estimator_) ], voting='soft', n_jobs=4)
votingC = votingC.fit(X_train, y_train)
# write data frame to csv file
Submission['Survived'] = votingC.predict(X_test)
# Submission.set_index('PassengerId', inplace=True)
Submission.to_csv('Votingclassifier02.csv',sep=',')
print('Voting Classifier Ensemble File created')
print(Submission.head())
```
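With `voting='soft'`, the ensemble averages each estimator's predicted class probabilities rather than taking a majority vote, which is why every base model must support `predict_proba`. A minimal, self-contained sketch on synthetic data (the estimators are illustrative, not the tuned `*_cv` models above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
voting = VotingClassifier(
    estimators=[('lr', LogisticRegression(max_iter=1000)),
                ('rf', RandomForestClassifier(n_estimators=10, random_state=0))],
    voting='soft')  # average class probabilities instead of majority vote
voting.fit(X, y)
proba = voting.predict_proba(X)
print(proba.shape)
```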
# Stage 7 : Hyper Tuned Ensemble Modelling
```
# Create Ensemble Model baseline (tuned model!)
second_layer_train = pd.DataFrame({'Logistic Regression': y_pred_train_logreg.ravel(),
                                   'Gradient Boosting': y_pred_train_gbk.ravel(),
                                   'Decision Tree': y_pred_train_tree.ravel(),
                                   'Random Forest': y_pred_train_randomforest.ravel()})
X_train_second = np.concatenate((y_pred_train_logreg.reshape(-1, 1), y_pred_train_gbk.reshape(-1, 1),
                                 y_pred_train_tree.reshape(-1, 1), y_pred_train_randomforest.reshape(-1, 1)),
                                axis=1)
X_test_second = np.concatenate((y_pred_test_logreg.reshape(-1, 1), y_pred_test_gbk.reshape(-1, 1),
                                y_pred_test_tree.reshape(-1, 1), y_pred_test_randomforest.reshape(-1, 1)),
                               axis=1)
#xgb = XGBClassifier(n_estimators= 800,max_depth= 4,min_child_weight= 2,gamma=0.9,subsample=0.8,colsample_bytree=0.8,objective= 'binary:logistic',nthread= -1,scale_pos_weight=1).fit(X_train_second, y_val)
tree = DecisionTreeClassifier(random_state=8,min_samples_leaf=6, max_depth= 4, criterion='gini').fit(X_train_second,y_val)
Submission['Survived'] = tree.predict(X_test_second)
print(Submission.head())
print('Tuned Ensemble model prediction complete')
# write data frame to csv file
#Submission.set_index('PassengerId', inplace=True)
Submission.to_csv('tunedensemblesubmission04.csv',sep=',')
print('tuned Ensemble File created')
```
# Summary
In this project we explored the Titanic dataset: we identified missing data and filled it in as best we could, converted categorical data into columns of numeric features that we can use in machine learning, and engineered new features based on the data we had. We improved our score from a baseline of 0.57894 to 0.78.
Going from a score of 0.57 to 0.77 was the relatively easy part; taking it from 0.78 to 0.8 is a whole different ball game. It's really tempting to overwork the data trying to find new features that might improve the score, but in reality what you gain in new features you lose in the noise you introduce. It's also tempting to keep tweaking the parameters of your model to get the best possible score on the test data, but what you gain in performance on the training data you lose in overfitting. A better approach is to stick to the features that have the strongest relationships, ensure that any data you are estimating or engineering is as accurate as you can possibly make it, and use cross-validation to hyper-tune the model while minimising any overfitting.
When I initially created the project I kept the test and training data completely separate, but I am rapidly coming to the conclusion that combining the two datasets is probably a better approach for estimating missing data based on averages across the entire dataset.
I looked at a range of different models and compared the accuracy of each on the training data before deciding which model to use for the third submission. I then hyper-tuned a handful of the best performers to ensure that I submitted the best-performing hyper-tuned model.
Having hyper-tuned a single model, the next step in my process was to try combining several models in an ensemble. I managed to achieve a result of 0.803, which was OK but not as good as the best hyper-tuned models that I'd produced.
I haven't come anywhere near winning this contest yet, but I survived my first Kaggle contest and got a score of over 0.8, which was my goal. The main thing is that I had fun and learnt a lot along the way by trying different techniques and looking at what other people were doing.
I've also created a kernel that uses the same data with deep learning; you can find it at https://www.kaggle.com/davidcoxon/deeply-titanic
# Credit where credits due
This competition is predominantly a training exercise, and as such I have tried to look at different approaches and try different techniques to see how they work. I have looked at some of the existing entries and adopted some of the techniques that I found interesting. So firstly, a huge thanks to everyone who took the time to document their code and explain step by step what they did and why.
To name names, some of the notebooks that I found most useful and think deserve special mention are:
### Aldemuro M.A.Haris
https://www.kaggle.com/aldemuro/comparing-ml-algorithms-train-accuracy-90
Interesting model comparison and ROC graphs
### Anisotropic
https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python/notebook
Introduction to Ensembling/Stacking in Python is a very useful project on many levels; in particular I liked how elegantly this code was written.
### Bisaria
https://www.kaggle.com/bisaria/titanic-lasso-ridge-implementation/code
While this notebook is based on R and I am working in Python, I found some of the visualizations interesting, specifically the port of embarkation and number of siblings and the mosaic. I also liked the idea of the lone traveller feature and the allocation of the cabin data, based on family.
### CalebCastleberry
https://www.kaggle.com/ccastleberry/titanic-cabin-features
This notebook explains the importance of the deck feature and proves you can score 70% on the deck feature alone.
### Henrique Mello
https://www.kaggle.com/hrmello/introduction-to-data-exploration-using-seaborn/notebook
This has some great visualisations of the data and helped me understand the importance of using title in predicting ages when filling in the missing data.
### Konstantin
https://www.kaggle.com/konstantinmasich/titanic-0-82-0-83
### LD Freeman
https://www.kaggle.com/ldfreeman3/a-data-science-framework-to-achieve-99-accuracy
This not only achieves a fantastic score but is a great tutorial on data science techniques
### Nadin Tamer
https://www.kaggle.com/nadintamer/titanic-survival-predictions-beginner/notebook
I found this another really useful kernel. It is very much a step by step approach, with a particularly good section on different types of model and how they perform for this project.
### Omar El Gabry
https://www.kaggle.com/omarelgabry/a-journey-through-titanic?scriptVersionId=447802/notebook
This kernel has an interesting section on estimating the missing ages and calculating Pearson coefficients for the features.
### Oscar Takeshita
https://www.kaggle.com/pliptor/divide-and-conquer-0-82296/code
This kernel was very useful in trying to get over the 0.8 ceiling. It's based on R rather than Python, so I haven't used any of the code, but it helped me focus on the key features and see the benefits of using the combined training and test dataset for statistics and calculations rather than keeping the two at arm's length.
### Sina
https://www.kaggle.com/sinakhorami/titanic-best-working-classifier?scriptVersionId=566580
A lot of high-scoring kernels reference this notebook, especially the feature engineering discussed in it.
### S.Xu
https://www.kaggle.com/shunjiangxu/blood-is-thicker-than-water-friendship-forever
This kernel is based on an original kernel by Sina; it uses the last name and ticket details to find families and friends, then looks at the survival of the group as a whole.
### Yassine Ghouzam
https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling
This kernel has an interesting section on learning curves.
```
%load_ext watermark
%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka' -v -p numpy,scipy,matplotlib,scikit-learn
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
# SciPy 2016 Scikit-learn Tutorial
# Supervised Learning Part 2 -- Regression Analysis
In regression we are trying to predict a continuous output variable -- in contrast to the nominal variables we were predicting in the previous classification examples.
Let's start with a simple toy example with one feature dimension (explanatory variable) and one target variable. We will create a dataset out of a sine curve with some noise:
```
x = np.linspace(-3, 3, 100)
print(x)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.uniform(size=len(x))
plt.plot(x, y, 'o');
```
Linear Regression
=================
The first model that we will introduce is the so-called simple linear regression, in which we want to fit a line to the data. A linear model is one of the simplest possible: it predicts the data as lying on a line. One way to find such a line is `LinearRegression` (also known as [*Ordinary Least Squares (OLS)*](https://en.wikipedia.org/wiki/Ordinary_least_squares) regression).
The interface for LinearRegression is exactly the same as for the classifiers before, only that ``y`` now contains float values, instead of classes.
As we remember, the scikit-learn API requires us to provide the target variable (`y`) as a 1-dimensional array; scikit-learn's API expects the samples (`X`) in the form of a 2-dimensional array -- even though it may only consist of 1 feature. Thus, let us convert the 1-dimensional `x` NumPy array into an `X` array with 2 axes:
```
print('Before: ', x.shape)
X = x[:, np.newaxis]
print('After: ', X.shape)
```
Again, we start by splitting our dataset into a training (75%) and a test set (25%):
```
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was renamed to model_selection in later releases
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
```
Next, we use the learning algorithm implemented in `LinearRegression` to **fit a regression model to the training data**:
```
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
```
After fitting to the training data, we parameterized a linear regression model with the following values.
```
print('Weight coefficients: ', regressor.coef_)
print('y-axis intercept: ', regressor.intercept_)
```
Since our regression model is a linear one, the relationship between the target variable (y) and the feature variable (x) is defined as
$$y = weight \times x + \text{intercept}$$.
Plugging the min and max values into this equation, we can plot the regression fit to our training data:
```
min_pt = X.min() * regressor.coef_[0] + regressor.intercept_
max_pt = X.max() * regressor.coef_[0] + regressor.intercept_
plt.plot([X.min(), X.max()], [min_pt, max_pt])
plt.plot(X_train, y_train, 'o');
```
Similar to the estimators for classification in the previous notebook, we use the `predict` method to predict the target variable. And we expect these predicted values to fall onto the line that we plotted previously:
```
y_pred_train = regressor.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best')
```
As we can see in the plot above, the line is able to capture the general slope of the data, but not many details.
Next, let's try the test set:
```
y_pred_test = regressor.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best');
```
Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the ``score`` method. For regression tasks, this is the R<sup>2</sup> score. Another popular metric is the Mean Squared Error (MSE). As its name implies, the MSE is simply the average squared difference between the predicted and actual target values:
$$MSE = \frac{1}{n} \sum^{n}_{i=1} (\text{predicted}_i - \text{true}_i)^2$$
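The two metrics are easy to check against each other by hand; a small sketch using `sklearn.metrics` on made-up values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 1.5, 3.5, 3.5])

# MSE by the formula above
mse_manual = np.mean((y_hat - y_true) ** 2)
assert np.isclose(mse_manual, mean_squared_error(y_true, y_hat))

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print(mse_manual, 1 - ss_res / ss_tot)  # 0.25 and 0.8 for these values
```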
```
regressor.score(X_test, y_test)
```
KNeighborsRegression
=======================
As for classification, we can also use a neighbor based method for regression. We can simply take the output of the nearest point, or we could average several nearest points. This method is less popular for regression than for classification, but still a good baseline.
```
from sklearn.neighbors import KNeighborsRegressor
kneighbor_regression = KNeighborsRegressor(n_neighbors=1)
kneighbor_regression.fit(X_train, y_train)
```
Again, let us look at the behavior on training and test set:
```
y_pred_train = kneighbor_regression.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data", markersize=10)
plt.plot(X_train, y_pred_train, 's', label="prediction", markersize=4)
plt.legend(loc='best');
```
On the training set, we do a perfect job: each point is its own nearest neighbor!
```
y_pred_test = kneighbor_regression.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data", markersize=8)
plt.plot(X_test, y_pred_test, 's', label="prediction", markersize=4)
plt.legend(loc='best');
```
On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Let us look at the R<sup>2</sup> score:
```
kneighbor_regression.score(X_test, y_test)
```
Much better than before! Here, the linear model was not a good fit for our problem; it was lacking in complexity and thus under-fit our data.
Exercise
=========
Compare the KNeighborsRegressor and LinearRegression on the boston housing dataset. You can load the dataset using ``sklearn.datasets.load_boston``. You can learn about the dataset by reading the ``DESCR`` attribute.
```
# %load solutions/06A_knn_vs_linreg.py
```
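One possible solution sketch: note that `load_boston` has been removed from recent scikit-learn releases, so a synthetic regression dataset is used here as a stand-in, while the comparison itself carries over unchanged.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Synthetic linear data standing in for the housing dataset
X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare the R^2 test scores of the two regressors
lin_score = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
knn_score = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
print(lin_score, knn_score)
```

On genuinely linear data the linear model wins; on the real housing data the outcome depends on the feature scaling and the number of neighbors chosen.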
# MNIST With SET
This is an example of training an SET network on the MNIST dataset using synapses, pytorch, and torchvision.
```
#Import torch libraries and get SETLayer from synapses
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from synapses import SETLayer
#Some extras for visualizations
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import clear_output
print("done")
```
## SET Layer
The SET layer is a pytorch module that works with a similar API to a standard fully connected layer; to initialize, specify input and output dimensions.<br><br>
NOTE: one condition mentioned in the paper is that epsilon (a hyperparameter controlling layer sparsity) be much less than the input dimension and much less than the output dimension. The default value of epsilon is 11. Keep dimensions much bigger than epsilon! (epsilon can be passed in as an init argument to the layer).
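In the SET paper, the Erdős–Rényi initialization gives a layer on the order of ε·(n_in + n_out) connections instead of the n_in × n_out of a dense layer. A rough back-of-envelope sketch, assuming that formula (synapses' exact count may differ slightly):

```python
def sparse_params(n_in, n_out, epsilon=11):
    """Approximate connection count under the Erdos-Renyi scheme."""
    return epsilon * (n_in + n_out)

dense = 128 * 256                      # parameters in the dense equivalent
sparse = sparse_params(128, 256)       # approximate SET connection count
print(dense, sparse, dense / sparse)   # sparsity ratio around 7.8x here
```

This also makes the ε ≪ dimension condition concrete: if ε approached the layer widths, the "sparse" layer would hold about as many weights as the dense one.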
```
#initialize the layer
sprs = SETLayer(128, 256)
#We can see the layer transforms inputs as we expect
inp = torch.randn((2, 128))
print('Input batch shape: ', tuple(inp.shape))
out = sprs(inp)
print('Output batch shape: ', tuple(out.shape))
```
In terms of behavior, the SETLayer transforms an input vector into the output space just as a fully connected layer (fcl) would.
## Initial Connection Distribution
The initialized layer has randomly assigned connections between input nodes and output nodes; each connection is associated with a weight, drawn from a normal distribution.
```
#Inspect init weight distribution
plt.hist(np.array(sprs.weight.data), bins=40)
plt.title('Weights distribution on initialization')
plt.xlabel('Weight Value')
plt.ylabel('Number of weights')
plt.show()
vec = sprs.connections[:, 0]
vec = np.array(vec)
values, counts = np.unique(vec, return_counts=True)
plt.title('Connections to inputs')
plt.bar(values, counts)
plt.xlabel('Input vector index')
plt.ylabel('Number of connections')
plt.show()
print("done")
```
The weights are sampled from a normal distribution, as is done with a standard fcl. The connections to the inputs are uniformly distributed.<br><br>
## Killing Connections
When connections are reassigned in SET, some proportion (defined by hyperparameter zeta) of the weights closest to zero are removed. We can set these to zero using the zero_connections method on the layer. (This method leaves the connections unchanged.)
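The pruning step itself is simple to sketch in NumPy: zero out the fraction zeta of weights with the smallest magnitude. This is a hand-rolled illustration of the idea, not synapses' internal implementation:

```python
import numpy as np

def prune_smallest(weights, zeta=0.3):
    """Zero out the fraction `zeta` of weights with smallest magnitude."""
    k = int(len(weights) * zeta)
    idx = np.argsort(np.abs(weights))[:k]  # indices of the k smallest |w|
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned

w = np.array([0.5, -0.01, 0.2, 0.02, -0.9, 0.001, 0.3, -0.05, 0.7, 0.04])
pruned = prune_smallest(w)
print(pruned)
```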
```
sprs.zero_connections()
#Inspect init weight distribution
plt.hist(np.array(sprs.weight.data), bins=40)
plt.title('Weights distribution after zeroing connections')
plt.xlabel('Weight Value')
plt.ylabel('Number of weights')
plt.show()
print("done")
```
## Evolving Connections
The evolve_connections() method will reassign these weights to new connections between input and output nodes. By default, these weights are initialized by sampling from the same distribution as the init function. Optionally, these weights can be set at zero (with init=False argument).
```
sprs.evolve_connections()
plt.hist(np.array(sprs.weight.data), bins=40)
plt.title('Weights distribution after evolving connections')
plt.show()
plt.title('Connections to inputs')
plt.bar(values, counts)
plt.xlabel('Input vector index')
plt.ylabel('Number of connections')
plt.show()
print("done")
```
We can see these weight values have been re-distributed; the new connections conform to the same uniform distribution as before. (We see in the SET paper, and here later on, that the adaptive algorithm learns to allocate these connections to more important input values.)
## A Simple SET Model
The following is a simple sparsely-connected model using SETLayers with default hyperparameters.
```
class SparseNet(nn.Module):
    def __init__(self):
        super(SparseNet, self).__init__()
        self.set_layers = []
        self.set1 = SETLayer(784, 512)
        self.set_layers.append(self.set1)
        #self.set2 = SETLayer(512, 512)
        #self.set_layers.append(self.set2)
        self.set2 = SETLayer(512, 128)
        self.set_layers.append(self.set2)
        #Use a dense layer for output because of low output dimensionality
        self.fc1 = nn.Linear(128, 10)

    def zero_connections(self):
        """Sets connections to zero for inference."""
        for layer in self.set_layers:
            layer.zero_connections()

    def evolve_connections(self):
        """Evolves connections."""
        for layer in self.set_layers:
            layer.evolve_connections()

    def forward(self, x):
        x = x.reshape(-1, 784)
        x = F.relu(self.set1(x))
        x = F.relu(self.set2(x))
        #x = F.relu(self.set3(x))
        x = self.fc1(x)
        return F.log_softmax(x, dim=1)

def count_params(model):
    prms = 0
    for parameter in model.parameters():
        n_params = 1
        for prm in parameter.shape:
            n_params *= prm
        prms += n_params
    return prms
device = "cpu"
sparse_net = SparseNet().to(device)
print('number of params: ', count_params(sparse_net))
```
Consider a fully-connected model with the same architecture: It would contain more than 20 times the number of parameters!<br>
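As a quick sanity check of the dense side of that comparison, the fully connected equivalent of the 784→512→128→10 architecture works out to roughly 469k parameters:

```python
def dense_params(dims):
    # weights + biases for each fully connected layer
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

total = dense_params([784, 512, 128, 10])
print(total)  # 468874
```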
## Training on MNIST
This code was adapted directly from the [pytorch mnist tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py).
```
class History(object):
    """Tracks and plots training history"""
    def __init__(self):
        self.train_loss = []
        self.val_loss = []
        self.train_acc = []
        self.val_acc = []

    def plot(self):
        clear_output()
        plt.plot(self.train_loss, label='train loss')
        plt.plot(self.train_acc, label='train acc')
        plt.plot(self.val_loss, label='val loss')
        plt.plot(self.val_acc, label='val acc')
        plt.legend()
        plt.show()
def train(log_interval, model, device, train_loader, optimizer, epoch, history):
    model.train()
    correct = 0
    loss_ = []
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
        correct += pred.eq(target.view_as(pred)).sum().item()
        loss = F.nll_loss(output, target)
        loss.backward()
        loss_.append(loss.item())
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
    history.train_loss.append(np.array(loss_).mean())
    history.train_acc.append(correct / len(train_loader.dataset))
    return history
def test(model, device, test_loader, history):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    acc = correct / len(test_loader.dataset)
    test_loss /= len(test_loader.dataset)
    print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)'.format(
        test_loss, correct, len(test_loader.dataset), 100. * acc))
    history.val_loss.append(test_loss)
    history.val_acc.append(acc)
    return history

print("done")
torch.manual_seed(0)
#Optimizer settings
lr = .01
momentum = .5
epochs = 50
batch_size=128
log_interval = 64
test_batch_size=128
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=test_batch_size, shuffle=True)
print("done")
```
## Dealing with Optimizer Buffers
Synapses recycles parameters: when a connection is broken and reassigned, its parameter is reset to zero.<br><br>
This system is designed to be computationally efficient, but it comes with a nasty side-effect. Often, we use optimizers with some sort of buffer; the simplest example is momentum in SGD. When we reset a parameter, the information about the overwritten parameter in the optimizer buffer is not useful. We need to overwrite specific values in the buffer also. To do this in pytorch, we need to pass the optimizer to each SETLayer to let synapses do this for us. <br><br>
<b>Notice: I'm still working out the best way to initialize adaptive optimizers (current version makes a naive attempt to pick good values); SGD with momentum works fine</b>
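The buffer fix can be illustrated with a minimal NumPy stand-in for an SGD momentum state (a conceptual sketch, not synapses' actual code): when a connection index is reassigned, both the weight and its now-stale momentum entry are reset, otherwise the old velocity keeps pushing the brand-new connection in a meaningless direction.

```python
import numpy as np

# Minimal SGD-with-momentum state for one flat weight vector
weights = np.array([0.4, -0.2, 0.7, 0.1])
momentum_buf = np.array([0.05, -0.03, 0.08, 0.01])

# Suppose connections 1 and 3 get reassigned: re-initialize the weights
# and zero the corresponding momentum entries
reassigned = np.array([1, 3])
weights[reassigned] = np.random.randn(len(reassigned)) * 0.1
momentum_buf[reassigned] = 0.0
print(momentum_buf)
```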
```
optimizer = optim.SGD(sparse_net.parameters(), lr=lr, momentum=momentum, weight_decay=1e-3)
for layer in sparse_net.set_layers:
#here we tell our set layers about
layer.optimizer = optimizer
#This guy will keep track of optimization metrics.
set_history = History()
print("done")
def show_MNIST_connections(model):
    vec = model.set1.connections[:, 0]
    vec = np.array(vec)
    _, counts = np.unique(vec, return_counts=True)
    t = counts.reshape(28, 28)
    sns.heatmap(t, cmap='viridis', xticklabels=[], yticklabels=[], square=True)
    plt.title('Connections per input pixel')
    plt.show()
    v = [t[13-i:15+i, 13-i:15+i].mean() for i in range(14)]
    plt.plot(v)
    plt.show()

print("done")
import time

epochs = 1000
for epoch in range(1, epochs + 1):
    #In the paper, evolutions occur on each epoch
    if epoch != 1:
        set_history.plot()
        show_MNIST_connections(sparse_net)
    if epoch != 1:
        print('Train set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format(
            set_history.train_loss[epoch-2], 100. * set_history.train_acc[epoch-2]))
        print('Test set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format(
            set_history.val_loss[epoch-2], 100. * set_history.val_acc[epoch-2]))
    sparse_net.evolve_connections()
    show_MNIST_connections(sparse_net)
    set_history = train(log_interval, sparse_net, device, train_loader, optimizer, epoch, set_history)
    #And smallest connections are removed during inference.
    sparse_net.zero_connections()
    set_history = test(sparse_net, device, test_loader, set_history)
    time.sleep(10)
```
# Import the modules
```
# import the general modules
import cplex
import sys
# import kbase
import os
# os.environ["HOME"] = 'C:\\Users\\Andrew Freiburger\\Dropbox\\My PC (DESKTOP-M302P50)\\Documents\\UVic Civil Engineering\\Internships\\Agronne\\cobrakbase'
import cobrakbase
token = 'JOSNYJGASTV5BGELWQTUSATE4TNHZ66U'
kbase = cobrakbase.KBaseAPI(token)
# local ModelSEED database path
modelseed_db_path = '../../../ModelSEEDDatabase'
```
# SimpleThermo dgbin parameterization
## sans dgbin
```
# define the example individual model, with integer charges, and associated API media package
model = kbase.get_from_ws('e_coli_core.kb', 94253)
for metabolite in model.metabolites:
    try:
        metabolite.charge = int(metabolite.charge)
    except:
        print(metabolite, metabolite.charge)
        metabolite.charge = None
model.solver = 'optlang-cplex'
# apply the simple thermodynamic constraints
%run ../../modelseedpy/fbapkg/simplethermopkg.py
stp = SimpleThermoPkg(model)
stp.build_package({"dgbin": False})
# execute the model
print(model.objective)
model.summary()
```
## Constrain the model with dgbin
```
# define the example individual model, with integer charges, and associated API media package
model = kbase.get_from_ws('e_coli_core.kb', 94253)
for metabolite in model.metabolites:
    try:
        metabolite.charge = int(metabolite.charge)
    except:
        print(metabolite, metabolite.charge)
        metabolite.charge = None
model.solver = 'optlang-cplex'
stp.clear()
# apply the simple thermodynamic constraints
stp = SimpleThermoPkg(model)
stp.build_package({"dgbin": True})
# export the constrained LP file
with open('SimpleThermo_dgbin.lp', 'w') as out:
    out.write(str(model.solver))
# execute the model
print(model.objective)
model.summary()
```
# FullThermo infeasibility parameterization
## sans dgbin
```
# define the example individual model, with integer charges, and associated API media package
model = kbase.get_from_ws('e_coli_core.kb', 94253)
for metabolite in model.metabolites:
    try:
        metabolite.charge = int(metabolite.charge)
    except:
        print(metabolite, metabolite.charge)
        metabolite.charge = None
model.solver = 'optlang-cplex'
# import the FullThermo package
%run ../../modelseedpy/fbapkg/fullthermopkg.py
ftp = FullThermoPkg(model)
ftp.build_package({"dgbin": False, 'modelseed_path':modelseed_db_path})
# execute the model
print(model.objective)
model.summary()
```
## with dgbin
```
# define the example individual model, with integer charges, and associated API media package
model = kbase.get_from_ws('e_coli_core.kb', 94253)
for metabolite in model.metabolites:
    try:
        metabolite.charge = int(metabolite.charge)
    except:
        print(metabolite, metabolite.charge)
        metabolite.charge = None
model.solver = 'optlang-cplex'
# import the FullThermo package
%run ../../modelseedpy/fbapkg/fullthermopkg.py
ftp.clear()
ftp = FullThermoPkg(model)
ftp.build_package({"dgbin": True, 'modelseed_path':modelseed_db_path})
# export the constrained LP file
with open('FullThermo_dgbin.lp', 'w') as out:
    out.write(str(model.solver))
# execute the model
print(model.objective)
model.summary()
```
# Brainstorming
```
# for reaction in model.reactions:
# if not reaction.id[:3] == 'EX_':
# print(model.solver.status)
# stp.variables['dgbinF'][reaction.id]
# # dir(stp.variables['dgbinF'][reaction.id].primal)
```
# Get your data ready for training
This module defines the basic [`DataBunch`](/basic_data.html#DataBunch) object that is used inside [`Learner`](/basic_train.html#Learner) to train a model. This is the generic class, that can take any kind of fastai [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) or [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). You'll find helpful functions in the data module of every application to directly create this [`DataBunch`](/basic_data.html#DataBunch) for you.
```
from fastai.gen_doc.nbdoc import *
from fastai import *
show_doc(DataBunch)
```
It also ensures all the dataloaders are on `device` and applies `tfms` to them as batches are drawn (like normalization). `path` is used internally to store temporary files; `collate_fn` is passed to the pytorch `Dataloader` (replacing the one there) to explain how to collate the samples picked for a batch. By default, it applies data to the object sent (see in [`vision.image`](/vision.image.html#vision.image) or the [data block API](/data_block.html) why this can be important).
`train_dl`, `valid_dl` and optionally `test_dl` will be wrapped in [`DeviceDataLoader`](/basic_data.html#DeviceDataLoader).
### Factory method
```
show_doc(DataBunch.create)
```
`num_workers` is the number of CPUs to use, `tfms`, `device` and `collate_fn` are passed to the init method.
### Visualization
```
show_doc(DataBunch.show_batch)
```
### Grabbing some data
```
show_doc(DataBunch.dl)
show_doc(DataBunch.one_batch)
show_doc(DataBunch.one_item)
```
### Empty [`DataBunch`](/basic_data.html#DataBunch) for inference
```
show_doc(DataBunch.export)
show_doc(DataBunch.load_empty, full_name='load_empty')
```
This method should be used to create a [`DataBunch`](/basic_data.html#DataBunch) at inference, see the corresponding [tutorial](/tutorial.inference.html).
### Dataloader transforms
```
show_doc(DataBunch.add_tfm)
```
Adds a transform to all dataloaders.
```
show_doc(DeviceDataLoader)
```
Put the batches of `dl` on `device` after applying an optional list of `tfms`. `collate_fn` will replace the one of `dl`. All dataloaders of a [`DataBunch`](/basic_data.html#DataBunch) are of this type.
### Factory method
```
show_doc(DeviceDataLoader.create)
```
The given `collate_fn` will be used to put the samples together in one batch (by default it grabs their data attribute). `shuffle` means the dataloader will take the samples randomly if that flag is set to `True`, or in the right order otherwise. `tfms` are passed to the init method. All `kwargs` are passed to the pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) class initialization.
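Concretely, a collate function takes a list of individual samples and returns one batch; a minimal NumPy version of the idea (fastai's real default additionally grabs each item's `data` attribute, and PyTorch's builds tensors rather than arrays):

```python
import numpy as np

def simple_collate(samples):
    """Stack a list of (x, y) samples into batched arrays."""
    xs, ys = zip(*samples)
    return np.stack(xs), np.array(ys)

# Four samples, each a 3-element feature vector with a binary label
samples = [(np.ones(3) * i, i % 2) for i in range(4)]
xb, yb = simple_collate(samples)
print(xb.shape, yb.shape)
```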
### Methods
```
show_doc(DeviceDataLoader.add_tfm)
show_doc(DeviceDataLoader.remove_tfm)
show_doc(DeviceDataLoader.new)
show_doc(DatasetType, doc_string=False)
```
Internal enumerator to name the training, validation and test dataset/dataloader.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(DeviceDataLoader.proc_batch)
show_doc(DeviceDataLoader.collate_fn)
```
## New Methods - Please document or move to the undocumented section
<a href="https://colab.research.google.com/github/graviraja/100-Days-of-NLP/blob/applications%2Fclustering/applications/clustering/20newsgroup/Improved%20Topic%20Identification%20in%20News%20using%20LDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Installations
```
!pip install pyldavis -q
import nltk
nltk.download('stopwords')
```
### Imports
```
import re
import spacy
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import scipy.sparse
from pprint import pprint
import pyLDAvis
import pyLDAvis.gensim
import pyLDAvis.sklearn
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import seaborn as sns
from wordcloud import WordCloud, STOPWORDS
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
np.random.seed(42)
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
nlp = spacy.load('en', disable=['parser', 'ner'])
```
### 20 Newsgroup Dataset
```
df = pd.read_json('https://raw.githubusercontent.com/selva86/datasets/master/newsgroups.json')
df.head()
df.target_names.unique()
len(df)
plt.figure(figsize=(20, 5))
sns.countplot(df.target_names.values)
data = df.content.values
```
### Tokenization
```
def sentence_to_tokens(sent):
# remove emails
sent = re.sub(r'\S*@\S*\s?', '', sent)
# remove newline chars
sent = re.sub(r'\s+', ' ', sent)
# remove single quotes
sent = re.sub(r"\'", "", sent)
# convert to lowercase tokens and drop tokens that are too short
# or too long; deacc=True also strips accents and punctuation
tokens = simple_preprocess(str(sent), deacc=True)
return tokens
%%time
tokenized_data = [sentence_to_tokens(doc) for doc in data]
tokenized_data[0]
```
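To see concretely what each regex in `sentence_to_tokens` does, here is the same cleanup applied to a toy string (a standalone illustration, separate from the notebook's data):

```python
import re

raw = "Write to me@example.com\nIt's a 'test'   message"
s = re.sub(r'\S*@\S*\s?', '', raw)   # drop email-like tokens
s = re.sub(r'\s+', ' ', s)           # collapse newlines and runs of whitespace
s = re.sub(r"\'", "", s)             # strip single quotes
print(s)  # Write to Its a test message
```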
### Pre-processing
```
%%time
# create bigrams from the tokenized data
bigram = gensim.models.Phrases(tokenized_data, threshold=50)
# make a bigram model
bigram_mod = gensim.models.phrases.Phraser(bigram)
def process_words(texts, allowed_postags=["NOUN", "ADJ", "VERB", "ADV"]):
# remove stopwords
stop_free = [[word for word in doc if word not in stop_words] for doc in texts]
# bigrams
bigram_data = [bigram_mod[doc] for doc in stop_free]
texts_out = []
for sent in bigram_data:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
# remove stopwords
texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts_out]
# join words back into a sentence to make the output usable for tf-idf processing
texts_out = [" ".join(words) for words in texts_out]
return texts_out
%%time
processed_data = process_words(tokenized_data)
processed_data[0]
```
### Tfidf
```
tfidf = TfidfVectorizer(analyzer='word', min_df=10, stop_words='english', lowercase=True, token_pattern='[a-zA-Z0-9]{3,}')
data_vectorized = tfidf.fit_transform(processed_data)
```
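`TfidfVectorizer` weights each word by term frequency times inverse document frequency. A hand-rolled sketch of the core idea on a toy corpus is below; note that sklearn's actual formula adds smoothing and L2 normalization, so its numbers differ from this simplified version:

```python
import math

docs = [
    "space shuttle launch",
    "shuttle mission crew",
    "election vote campaign",
]

def tfidf(term, doc, docs):
    """Simplified tf-idf: raw count times log(N / document frequency)."""
    tf = doc.split().count(term)
    df = sum(term in d.split() for d in docs)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

# "shuttle" appears in 2 of 3 docs -> low idf; "election" in 1 -> higher idf
print(round(tfidf("shuttle", docs[0], docs), 3))   # 0.405
print(round(tfidf("election", docs[2], docs), 3))  # 1.099
```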
### LDA Model
```
%%time
lda_model = LatentDirichletAllocation(
n_components=20,
max_iter=10,
n_jobs=-1,
random_state=42
)
lda_output = lda_model.fit_transform(data_vectorized)
# higher is better
print(f"Log Likelihood: {lda_model.score(data_vectorized)}")
# lower is better
print(f"Perplexity: {lda_model.perplexity(data_vectorized)}")
```
### Grid Search
```
search_params = {
"n_components": [10, 15, 20, 25],
"learning_decay": [.5, .7, .9]
}
%%time
lda = LatentDirichletAllocation()
model = GridSearchCV(lda, param_grid=search_params)
model.fit(data_vectorized)
```
### Best LDA Model
```
best_lda_model = model.best_estimator_
print(f"Best Log likelihood Score: {model.best_score_}")
print(f"Best Perplexity: {best_lda_model.perplexity(data_vectorized)}")
model.best_params_
```
### Visualization of Topics
```
# Visualize the topics
pyLDAvis.enable_notebook()
vis = pyLDAvis.sklearn.prepare(best_lda_model, data_vectorized, tfidf, mds='tsne')
vis
```
### Topic's keyword distribution
```
topicnames = ["Topic" + str(i) for i in range(best_lda_model.n_components)]
# topic keyword matrix
df_topic_keywords = pd.DataFrame(best_lda_model.components_)
# columns are the words
df_topic_keywords.columns = tfidf.get_feature_names()
# rows are the topics
df_topic_keywords.index = topicnames
df_topic_keywords.head()
```
### Top 15 keywords in each topic
```
def top_words(vectorizer=tfidf, lda_model=lda_model, n_words=15):
keywords = np.array(vectorizer.get_feature_names())
topic_keywords = []
for topic_weights in lda_model.components_:
top_keyword_locs = (-topic_weights).argsort()[:n_words]
topic_keywords.append(keywords.take(top_keyword_locs))
return topic_keywords
topic_keywords = top_words(vectorizer=tfidf, lda_model=best_lda_model, n_words=15)
# Topic - Keywords Dataframe
df_topic_top_keywords = pd.DataFrame(topic_keywords)
df_topic_top_keywords.columns = ['Word '+str(i) for i in range(df_topic_top_keywords.shape[1])]
df_topic_top_keywords.index = ['Topic '+str(i) for i in range(df_topic_top_keywords.shape[0])]
df_topic_top_keywords
```
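`top_words` relies on the negated-argsort trick: sorting `-topic_weights` ascending gives indices of the largest weights first. On a toy weight vector:

```python
import numpy as np

weights = np.array([0.1, 0.9, 0.3, 0.7])
keywords = np.array(["cat", "dog", "car", "bus"])

top_locs = (-weights).argsort()[:2]  # indices of the 2 largest weights
print(keywords.take(top_locs))       # ['dog' 'bus']
```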
### Predicting topic of a sentence
```
best_lda_model
def predict_topic(text):
tokens = [sentence_to_tokens(text)]
processed_tokens = process_words(tokens)
tfidf_tokens = tfidf.transform(processed_tokens)
topic_scores = best_lda_model.transform(tfidf_tokens)
topic = np.argmax(topic_scores)
topic_score = topic_scores[0][topic]
topic_keywords = df_topic_top_keywords.iloc[topic, :].values.tolist()
return topic, topic_score, topic_keywords
# Predict the topic
mytext = "I believe in christianity and like the bible"
topic, prob_scores, words = predict_topic(text = mytext)
print(topic)
print(prob_scores)
print(words)
```
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h,1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y,1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
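The broadcasting above is exactly what NumPy does when a $(n, 1)$ column vector is added to a $(n, m)$ matrix. A quick check with small integer matrices (values chosen only for illustration):

```python
import numpy as np

W = np.arange(9).reshape(3, 3)      # stands in for the j..r matrix
X = np.arange(1, 10).reshape(3, 3)  # stands in for the a..i matrix
b = np.array([[100], [200], [300]]) # column vector s, t, u

Z = np.dot(W, X) + b  # b broadcasts across all 3 columns
print(Z.shape)        # (3, 3)
print(Z[:, 0])        # [118 254 390]
```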
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
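The `sigmoid` and `relu` helpers come from `dnn_utils_v2` and are not shown in this notebook. A sketch of what such helpers roughly look like (an assumption for illustration, not the graded file) is:

```python
import numpy as np

def sigmoid(Z):
    """Sigmoid activation; returns the activation and a cache holding Z."""
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu(Z):
    """ReLU activation; returns the activation and a cache holding Z."""
    A = np.maximum(0, Z)
    return A, Z

A, cache = relu(np.array([[-1.0, 2.0]]))
print(A)  # [[0. 2.]]
```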
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], activation="relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], activation="sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
**Expected output**:
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
## 5 - Cost function
With forward propagation done, you now need to compute the cost, because you want to check whether your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = (-1/m) * np.sum(Y * np.log(AL) + (1-Y) * np.log(1-AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
**Exercise**: Use the 3 formulas above to implement linear_backward().
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = (1/m) * np.dot(dZ, A_prev.T)
db = (1/m) * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
### 6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
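These two helpers are also provided rather than shown. A plausible sketch of how they implement equation (11) (an assumption for illustration, not the graded file) is:

```python
import numpy as np

def relu_backward(dA, cache):
    """dZ = dA * g'(Z); the ReLU derivative is 1 where Z > 0, else 0."""
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, cache):
    """dZ = dA * s * (1 - s), where s = sigmoid(Z)."""
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

dZ = relu_backward(np.array([[1.0, 1.0]]), np.array([[-2.0, 3.0]]))
print(dZ)  # [[0. 1.]]
```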
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
**Expected output with relu:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X, W, b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> **Figure 5** : Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus; you don't need in-depth knowledge of the derivation):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
    # the last cache corresponds to the output (sigmoid) layer
grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL,
current_cache,
activation = "sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation = "relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward
    learning_rate -- the learning rate, a scalar
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.04659241]
[-1.28888275]
[ 0.53405496]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[-0.55569196 0.0354055 1.32964895]]</td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.84610769]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment, but it will pay off going forward; the next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
# Mounting your Google Drive
Running these two blocks of code will give the Colab environment access to your data on Google Drive. If you aren't comfortable with this idea, I'd suggest making a new Drive account dedicated to this project!
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
```
Let's navigate to the folder where our data is stored and check everything is there:
```
import os
os.chdir('drive/MGH/Teaching/qtim_Tutorials/tutorial_5/')
!ls
```
# Implementing our network
```
from keras.layers import Input, Conv2D, MaxPool2D, Dense, Dropout, BatchNormalization
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
max_channels = 1024
# First block
input_layer = Input(shape=(240, 240, 4))
conv1 = Conv2D(max_channels // 16, (3, 3), padding='same', activation='relu')(input_layer)
conv2 = Conv2D(max_channels // 16, (3, 3), padding='same', activation='relu')(conv1)
conv2 = BatchNormalization()(conv2)
pool1 = MaxPool2D((2, 2))(conv2)
# Second block
conv3 = Conv2D(max_channels // 8, (3, 3), padding='same', activation='relu')(pool1)
conv4 = Conv2D(max_channels // 8, (3, 3), padding='same', activation='relu')(conv3)
conv4 = BatchNormalization()(conv4)
pool2 = MaxPool2D((2, 2))(conv4)
# Third block
conv5 = Conv2D(max_channels // 4, (3, 3), padding='same', activation='relu')(pool2)
conv6 = Conv2D(max_channels // 4, (3, 3), padding='same', activation='relu')(conv5)
conv6 = BatchNormalization()(conv6)
pool3 = MaxPool2D((2, 2))(conv6)
# Fourth block
conv7 = Conv2D(max_channels // 2, (3, 3), padding='same', activation='relu')(pool3)
conv8 = Conv2D(max_channels // 2, (3, 3), padding='same', activation='relu')(conv7)
conv8 = BatchNormalization()(conv8)
pool4 = MaxPool2D((2, 2))(conv8)
# Fifth block
conv9 = Conv2D(max_channels, (3, 3), padding='same', activation='relu')(pool4)
conv10 = Conv2D(max_channels, (3, 3), padding='same', activation='relu')(conv9)
conv10 = BatchNormalization()(conv10)
pool5 = GlobalAveragePooling2D()(conv10)
# Fully-connected
dense1 = Dense(128, activation='relu')(pool5)
drop1 = Dropout(0.5)(dense1)
output = Dense(1, activation='sigmoid')(drop1)
# Create model object
model = Model(inputs=input_layer, outputs=output)
print(model.summary())
```
# Data generators
Keras provides powerful tools for iterating over datasets and augmenting them in real-time. In just a few lines of code, we can define a generator that yields random batches of the data (without ever loading all of it into memory) with randomly applied transformations. This serves to diversify the dataset and hopefully make the resulting model more generalizable.
```
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.io_utils import HDF5Matrix
seed = 0
data_gen_args = dict(
width_shift_range=0.05,
height_shift_range=0.05,
zoom_range=0.2,
channel_shift_range=0.005,
horizontal_flip=True,
vertical_flip=True
)
# Generator for the training data
train_datagen = ImageDataGenerator(**data_gen_args)
X_train = HDF5Matrix('training.h5', 'train')
y_train = HDF5Matrix('training.h5', 'labels')
train_generator = train_datagen.flow(X_train, y_train, seed=0, batch_size=16)
# Generator for the validation data
val_datagen = ImageDataGenerator() # no augmentation! why?
X_val = HDF5Matrix('validation.h5', 'train')
y_val = HDF5Matrix('validation.h5', 'labels')
val_generator = val_datagen.flow(X_val, y_val, seed=0, batch_size=1)
```
# Training the model
At long last, we can train our model! The process goes something like this:
* Initialize the network randomly, with a certain optimizer, loss function and metric
* Grab a random batch of data from the HDF5 file and randomly augment it
* Push it through the network, and get the predictions
* Calculate the error (loss)
* Calculate the partial derivative of the loss function w.r.t. each of the weights + biases, using back-propagation
* Update the network's weights in the negative direction of the gradient, multiplied by the learning rate
* Repeat until dataset is exhausted
* Run the network on the validation data, but *do not* update the network
* Repeat until convergence/fixed number of iterations (epochs) reached
We specify two 'callbacks' which are run at the end of each epoch:
* Model checkpoint: if the validation loss improves, save the model
* Early stopping: if we fail to make progress after a certain number of epochs, stop early
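The loop above can be sketched in plain NumPy for a single logistic neuron (the toy data and learning rate here are made up; the Keras code below does the real work):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 64))          # 2 features, 64 examples
y = (X[0] + X[1] > 0).astype(float)   # toy binary labels
w, b, lr = np.zeros((1, 2)), 0.0, 0.5

for epoch in range(100):
    z = w @ X + b                     # forward pass
    a = 1 / (1 + np.exp(-z))          # sigmoid prediction
    loss = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))  # binary cross-entropy
    dz = a - y                        # gradient of the loss w.r.t. z
    w -= lr * (dz @ X.T) / X.shape[1] # step opposite the gradient
    b -= lr * np.mean(dz)
```

Each pass over the data is one epoch; Keras adds batching, augmentation, and callbacks on top of this basic loop.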
```
from keras.callbacks import ModelCheckpoint, EarlyStopping
mc_cb = ModelCheckpoint('best_model.h5', save_best_only=True)  # save only when val_loss improves
el_cb = EarlyStopping(patience=5)
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit_generator(train_generator, epochs=50, shuffle='batch',
validation_data=val_generator, callbacks=[mc_cb, el_cb])
model.save('final_model.h5')
from keras.models import load_model
import numpy as np
import h5py
model = load_model('best_model.h5')
# We will use testing data in future... this is somewhat biased!
val_data = h5py.File('validation.h5', 'r')
X_val, y_val = val_data['train'], val_data['labels']
y_pred = model.predict(X_val) # get network predictions over entire dataset
y_true = np.asarray(y_val) # using np.asarray explicitly loads the HDF5 data
import pandas as pd
pd.DataFrame([y_pred.squeeze(), y_true]).T
from sklearn.metrics import roc_curve, auc, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
# Confusion matrix, optionally normalized
normalize = False
cm = confusion_matrix(y_true, np.round(y_pred).astype('bool'))
fmt = 'd' # for displaying the values
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] # optional!
fmt = '.2%'
# Use some fancy plotting
labels = ['No tumor', 'Tumor']
ax = sns.heatmap(cm, annot=True, fmt=fmt, xticklabels=labels, yticklabels=labels, cmap='Blues')
plt.xlabel('Predicted label')
plt.ylabel('True label')
ax.xaxis.set_label_position('top')
ax.xaxis.tick_top()
plt.savefig('confusion.png', dpi=300)
fpr, tpr, _ = roc_curve(y_true, y_pred)
plt.plot(fpr, tpr, label='AUC: {:.2f}'.format(auc(fpr, tpr)))
plt.title('ROC analysis of my first tumor detector')
plt.xlabel('1 - Specificity')
plt.ylabel('Sensitivity')
plt.legend()
plt.savefig('roc.png', dpi=300)
```
# 2-Semi-Random-Independent-Set
```
import os, sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
import time
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import cvxgraphalgs as cvxgr
GRAPH_COLOR = 'green'
HIGHLIGHT_COLOR = 'red'
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
## 2.1 Visualization & Analysis Tools
```
def visualize_highlight(graph, special):
colors = []
for vertex in graph.nodes:
color = HIGHLIGHT_COLOR if vertex in special else GRAPH_COLOR
colors.append(color)
%matplotlib inline
nx.draw(graph, node_color=colors)
plt.show()
def average_performance(graph_generator, algorithm, evaluate, trials=50):
times, outputs = [], []
for _ in range(trials):
graph = graph_generator()
        start = time.perf_counter()  # time.clock() was removed in Python 3.8
        result = algorithm(graph)
        end = time.perf_counter()
elapsed = end - start
times.append(elapsed)
outputs.append(evaluate(result))
return {
'trials': trials,
'time': np.mean(times),
'output': np.mean(outputs)
}
```
## 2.2 Examples on Small Planted Sets
```
GRAPH_SIZE = 20
PLANTED_SIZE = 7
PROB = 0.5
graph, independent = cvxgr.generators.bernoulli_planted_independent(
GRAPH_SIZE, PLANTED_SIZE, PROB)
visualize_highlight(graph, independent)
print('Planted Size:', len(independent))
```
### 2.2.1 Greedy Algorithm
```
result = cvxgr.algorithms.greedy_independent_set(graph)
visualize_highlight(graph, result)
print('Recovered Size (Greedy):', len(result))
```
### 2.2.2 Crude SDP Algorithm
```
result = cvxgr.algorithms.crude_sdp_independent_set(graph)
visualize_highlight(graph, result)
print('Recovered Size (C-SDP):', len(result))
```
## 2.3 Performance Testing
```
GRAPH_SIZES = [5, 10, 25, 50, 100]
PLANTED_SIZES = [int(size / 3) for size in GRAPH_SIZES]
PROB = 0.5
TRIALS = 50
greedy_outputs = []
csdp_outputs = []
spectral_outputs = []
for graph_size, planted_size in zip(GRAPH_SIZES, PLANTED_SIZES):
graph_generator = lambda: cvxgr.generators.bernoulli_planted_independent(
graph_size, planted_size, PROB)[0]
greedy_output = average_performance(
graph_generator,
cvxgr.algorithms.greedy_independent_set,
len,
trials=TRIALS)
greedy_outputs.append(greedy_output)
csdp_output = average_performance(
graph_generator,
cvxgr.algorithms.crude_sdp_independent_set,
len,
trials=TRIALS)
csdp_outputs.append(csdp_output)
spectral_output = average_performance(
graph_generator,
cvxgr.algorithms.planted_spectral_algorithm,
len,
trials=TRIALS)
spectral_outputs.append(spectral_output)
PLOTTING_OPTIONS = {
'title': 'Independent Set Size vs Graph Size',
'legend': [
'Greedy Algorithm Output Size',
'C-SDP Algorithm Output Size',
'Spectral Algorithm Output Size',
'Planted Set Size'
]
}
plt.plot(GRAPH_SIZES, [result['output'] for result in greedy_outputs])
plt.plot(GRAPH_SIZES, [result['output'] for result in csdp_outputs])
plt.plot(GRAPH_SIZES, [result['output'] for result in spectral_outputs])
plt.plot(GRAPH_SIZES, PLANTED_SIZES)
plt.title(PLOTTING_OPTIONS['title'])
plt.legend(PLOTTING_OPTIONS['legend'])
plt.show()
rows = []
for pos in range(len(GRAPH_SIZES)):
rows.append([
GRAPH_SIZES[pos],
greedy_outputs[pos]['output'],
csdp_outputs[pos]['output'],
spectral_outputs[pos]['output'],
PLANTED_SIZES[pos]
])
table = pd.DataFrame(rows)
table.columns = [
'Graph Size',
'Greedy Output Size',
'C-SDP Output Size',
'Spectral Output Size',
'Planted Size']
table
```
# Introduction
Oftentimes data will come to us with column names, index names, or other naming conventions that we are not satisfied with. In this section, you'll learn how to use pandas functions to change the names of the offending entries to something better.
You'll also explore how to combine data from multiple DataFrames and/or Series.
**To start the exercise for this topic, please click [here](#$NEXT_NOTEBOOK_URL$).**
# Renaming
The first function we'll introduce here is `rename()`, which lets you change index names and/or column names. For example, to change the `points` column in our dataset to `score`, we would do:
```
#$HIDE_INPUT$
import pandas as pd
pd.set_option('max_rows', 5)
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
reviews.rename(columns={'points': 'score'})
```
`rename()` lets you rename index _or_ column values by specifying a `index` or `column` keyword parameter, respectively. It supports a variety of input formats, but usually a Python dictionary is the most convenient. Here is an example using it to rename some elements of the index.
```
reviews.rename(index={0: 'firstEntry', 1: 'secondEntry'})
```
You'll probably rename columns very often, but rename index values very rarely. For that, `set_index()` is usually more convenient.
Both the row index and the column index can have their own `name` attribute. The complementary `rename_axis()` method may be used to change these names. For example:
```
reviews.rename_axis("wines", axis='rows').rename_axis("fields", axis='columns')
```
# Combining
When performing operations on a dataset, we will sometimes need to combine different DataFrames and/or Series in non-trivial ways. Pandas has three core methods for doing this. In order of increasing complexity, these are `concat()`, `join()`, and `merge()`. Most of what `merge()` can do can also be done more simply with `join()`, so we will omit it and focus on the first two functions here.
The simplest combining method is `concat()`. Given a list of elements, this function will smush those elements together along an axis.
This is useful when we have data in different DataFrame or Series objects but having the same fields (columns). One example: the [YouTube Videos dataset](https://www.kaggle.com/datasnaek/youtube-new), which splits the data up based on country of origin (e.g. Canada and the UK, in this example). If we want to study multiple countries simultaneously, we can use `concat()` to smush them together:
```
canadian_youtube = pd.read_csv("../input/youtube-new/CAvideos.csv")
british_youtube = pd.read_csv("../input/youtube-new/GBvideos.csv")
pd.concat([canadian_youtube, british_youtube])
```
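The same idea on tiny hypothetical frames (so the YouTube CSVs aren't required):

```python
import pandas as pd

canada = pd.DataFrame({'title': ['a', 'b'], 'views': [10, 20]})
uk = pd.DataFrame({'title': ['c'], 'views': [30]})

combined = pd.concat([canada, uk])  # stacks rows; note the index values repeat
```

Because both frames share the same columns, `concat()` simply stacks their rows; pass `ignore_index=True` if you want a fresh 0..n-1 index.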
The middlemost combiner in terms of complexity is `join()`. `join()` lets you combine different DataFrame objects which have an index in common. For example, to pull down videos that happened to be trending on the same day in _both_ Canada and the UK, we could do the following:
```
left = canadian_youtube.set_index(['title', 'trending_date'])
right = british_youtube.set_index(['title', 'trending_date'])
left.join(right, lsuffix='_CAN', rsuffix='_UK')
```
The `lsuffix` and `rsuffix` parameters are necessary here because the data has the same column names in both British and Canadian datasets. If this wasn't true (because, say, we'd renamed them beforehand) we wouldn't need them.
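A self-contained sketch of the suffix behavior on tiny hypothetical frames:

```python
import pandas as pd

left = pd.DataFrame({'title': ['a', 'b'], 'views': [10, 20]}).set_index('title')
right = pd.DataFrame({'title': ['a', 'c'], 'views': [30, 40]}).set_index('title')

# Shared column 'views' becomes 'views_CAN' and 'views_UK';
# index values present only in `left` get NaN in the right-hand columns.
joined = left.join(right, lsuffix='_CAN', rsuffix='_UK')
```

By default `join()` performs a left join on the index, so only `left`'s index values survive.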
# Your turn
If you haven't started the exercise, you can **[get started here](#$NEXT_NOTEBOOK_URL$)**.
# Modeling and Simulation in Python
Chapter 8: Pharmacokinetics
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
```
### Data
We have data from Pacini and Bergman (1986), "MINMOD: a computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test", *Computer Methods and Programs in Biomedicine*, 23: 113-122.
```
data = pd.read_csv('glucose_insulin.csv', index_col='time')
data
```
Here's what the glucose time series looks like.
```
plot(data.glucose, 'bo', label='glucose')
decorate(xlabel='Time (min)',
ylabel='Concentration (mg/dL)')
```
And the insulin time series.
```
plot(data.insulin, 'go', label='insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
```
For the book, I put them in a single figure, using `subplot`
```
subplot(2, 1, 1)
plot(data.glucose, 'bo', label='glucose')
decorate(ylabel='mg/dL')
subplot(2, 1, 2)
plot(data.insulin, 'go', label='insulin')
decorate(xlabel='Time (min)',
ylabel='$\mu$U/mL')
savefig('chap08-fig01.pdf')
```
### Interpolation
We have measurements of insulin concentration at discrete points in time, but we need to estimate it at intervening points. We'll use `interpolate`, which is a wrapper for `scipy.interpolate.interp1d`
```
%psource interpolate
```
The return value from `interpolate` is a function.
```
I = interpolate(data.insulin)
```
We can use the result, `I`, to estimate the insulin level at any point in time.
```
I(7)
```
`I` can also take an array of time and return an array of estimates, which we can plot.
```
ts = linrange(0, 182, 2)
plot(data.insulin, 'go', label='insulin data')
plot(ts, I(ts), color='green', label='interpolated')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
savefig('chap08-fig02.pdf')
```
**Exercise:** [Read the documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html) of `scipy.interpolate.interp1d`. Pass a keyword argument to `interpolate` to specify one of the other kinds of interpolation, and run the code again to see what it looks like.
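For instance, a sketch using SciPy directly (with hypothetical insulin values), comparing the default linear interpolation with `kind='cubic'`:

```python
import numpy as np
from scipy.interpolate import interp1d

t = np.array([0.0, 2.0, 4.0, 6.0])
insulin = np.array([11.0, 26.0, 130.0, 85.0])  # made-up measurements

linear = interp1d(t, insulin)                  # default kind='linear'
cubic = interp1d(t, insulin, kind='cubic')

# Both pass through the measured points but differ between them.
```

Plotting `linear(ts)` and `cubic(ts)` over a fine grid shows the cubic curve is smoother, at the cost of possible overshoot between points.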
### The glucose minimal model
I'll cheat by starting with parameters that fit the data roughly; then we'll see how to improve them.
```
k1 = 0.03
k2 = 0.02
k3 = 1e-05
G0 = 290
```
To estimate basal levels, we'll use the concentrations at `t=0`.
```
Gb = data.glucose[0]
Ib = data.insulin[0]
```
In the initial conditions, `X(0)=0` and `G(0)=G0`, where `G0` is one of the parameters we'll choose.
```
init = State(G=G0, X=0)
```
Here's the system object with all parameters and the interpolation object `I`.
```
system = System(init=init,
k1=k1, k2=k2, k3=k3,
I=I, Gb=Gb, Ib=Ib,
t0=0, t_end=182, dt=2)
```
And here's the update function. It uses `unpack` to make the system variables accessible without dot notation, which makes the translation of the differential equations more readable and checkable.
```
def update_func(state, t, system):
"""Updates the glucose minimal model.
state: State object
t: time in min
system: System object
returns: State object
"""
G, X = state
unpack(system)
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
G += dGdt * dt
X += dXdt * dt
return State(G=G, X=X)
```
Before running the simulation, it is always a good idea to test the update function using the initial conditions. In this case we can verify that the results are at least qualitatively correct.
```
update_func(init, 0, system)
```
Now `run_simulation` is pretty much the same as it always is.
```
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Adds a TimeFrame to `system` as `results`
system: System object
update_func: function that updates state
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.loc[t0] = init
ts = linrange(t0, t_end-dt, dt)
for t in ts:
frame.loc[t+dt] = update_func(frame.loc[t], t, system)
system.results = frame
```
And here's how we run it. `%time` is a Jupyter magic command that runs the function and reports its run time.
```
%time run_simulation(system, update_func)
```
The results are in a `TimeFrame` object with one column per state variable.
```
system.results
```
The following plot shows the results of the simulation along with the actual glucose data.
```
subplot(2, 1, 1)
plot(system.results.G, 'b-', label='simulation')
plot(data.glucose, style='bo', label='glucose data')
decorate(ylabel='mg/dL')
subplot(2, 1, 2)
plot(system.results.X, style='g-', label='remote insulin')
decorate(xlabel='Time (min)',
ylabel='Arbitrary units')
savefig('chap08-fig03.pdf')
```
### Numerical solution
We can do the same thing using `odeint`. Instead of an update function, we provide a slope function that just evaluates the right-hand side of the differential equations. We don't have to do the update part; `odeint` does it for us.
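As a rough sketch of what the wrapper does, here is the same model passed to `scipy.integrate.odeint` directly, with made-up parameter values and a stand-in insulin curve:

```python
import numpy as np
from scipy.integrate import odeint

k1, k2, k3, Gb, Ib = 0.03, 0.02, 1e-5, 92.0, 11.0  # hypothetical values
I = lambda t: Ib + 100 * np.exp(-0.1 * t)          # stand-in insulin curve

def slope(state, t):
    # right-hand side of the glucose minimal model
    G, X = state
    dGdt = -k1 * (G - Gb) - X * G
    dXdt = k3 * (I(t) - Ib) - k2 * X
    return dGdt, dXdt

ts = np.linspace(0, 182, 92)
solution = odeint(slope, [290.0, 0.0], ts)  # columns: G, X
```

`odeint` chooses its own internal step sizes and returns the solution evaluated at the requested times, which is why no `dt` is needed.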
```
def slope_func(state, t, system):
"""Computes derivatives of the glucose minimal model.
state: State object
t: time in min
system: System object
returns: derivatives of G and X
"""
G, X = state
unpack(system)
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
```
We can test the slope function with the initial conditions.
```
slope_func(init, 0, system)
```
The `System` object we use with `run_odeint` is almost the same as the one we used with `run_simulation`, but instead of providing `t0`, `t_end`, and `dt`, we provide an array of times where we want to evaluate the solution. In this case, we use `data.index`, so the results are evaluated at the same times as the measurements.
```
system2 = System(init=init,
k1=k1, k2=k2, k3=k3,
I=I, Gb=Gb, Ib=Ib,
ts=data.index)
```
`run_odeint` is a wrapper for `scipy.integrate.odeint`
```
%psource run_odeint
```
Here's how we run it.
```
%time run_odeint(system2, slope_func)
```
And here are the results.
```
system2.results
```
Plotting the results from `run_simulation` and `run_odeint`, we can see that they are not very different.
```
plot(system.results.G, 'r-')
plot(system2.results.G, 'b-')
plot(data.glucose, 'bo')
```
The differences are usually less than 1% and always less than 2%.
```
diff = system.results - system2.results
percent_diff = diff / system2.results * 100
percent_diff.dropna()
```
**Exercise:** What happens to these errors if you run the simulation with a smaller value of `dt`?
### Optimization
Now let's find the parameters that yield the best fit for the data.
```
k1 = 0.03
k2 = 0.02
k3 = 1e-05
G0 = 290
```
Again, we'll get basal levels from the initial values.
```
Gb = data.glucose[0]
Ib = data.insulin[0]
```
And the slope function is the same.
```
def slope_func(state, t, system):
"""Computes derivatives of the glucose minimal model.
state: State object
t: time in min
system: System object
returns: derivatives of G and X
"""
G, X = state
unpack(system)
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
```
`make_system` takes the parameters and `DataFrame` and returns a `System` object.
```
def make_system(G0, k1, k2, k3, data):
"""Makes a System object with the given parameters.
G0: initial blood glucose
k1: rate parameter
k2: rate parameter
k3: rate parameter
data: DataFrame
returns: System object
"""
init = State(G=G0, X=0)
system = System(init=init,
k1=k1, k2=k2, k3=k3,
Gb=Gb, Ib=Ib,
I=interpolate(data.insulin),
ts=data.index)
return system
```
`error_func` takes the parameters and actual data, makes a `System` object and runs it, then compares the results of the simulation to the data. It returns an array of errors.
```
def error_func(params, data):
"""Computes an array of errors to be minimized.
params: sequence of parameters
data: DataFrame of values to be matched
returns: array of errors
"""
print(params)
# make a System with the given parameters
system = make_system(*params, data)
# solve the ODE
run_odeint(system, slope_func)
# compute the difference between the model
# results and actual data
error = system.results.G - data.glucose
return error.loc[8:]
```
When we call `error_func`, we provide a sequence of parameters as a single object.
```
params = G0, k1, k2, k3
params
```
Here's how that works:
```
error_func(params, data)
```
`fit_leastsq` is a wrapper for `scipy.optimize.leastsq`
```
%psource fit_leastsq
```
Here's how we call it.
```
best_params = fit_leastsq(error_func, params, data)
```
Now that we have `best_params`, we can use it to make a `System` object and run it.
We have to use the scatter operator, `*`, to make `best_params` behave like four separate parameters, rather than a single object.
```
system = make_system(*best_params, data)
run_odeint(system, slope_func)
```
Here are the results, along with the data. The first few points of the model don't fit the data, but we don't expect them to.
```
plot(system.results.G, label='simulation')
plot(data.glucose, style='bo', label='glucose data')
decorate(xlabel='Time (min)',
ylabel='Concentration (mg/dL)')
savefig('chap08-fig04.pdf')
```
**Exercise:** Since we don't expect the first few points to agree, it's probably better not to make them part of the optimization process. We can ignore them by leaving them out of the `Series` returned by `error_func`. Modify the last line of `error_func` to return `error.loc[8:]`, which includes only the elements of the `Series` from `t=8` and up.
Does that improve the quality of the fit? Does it change the best parameters by much?
Note: You can read more about this use of `loc` [in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer).
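A small standalone illustration of label-based slicing with `loc`, using hypothetical error values indexed by time in minutes:

```python
import pandas as pd

# Hypothetical errors indexed by measurement time (minutes)
errors = pd.Series([5.0, 3.0, 1.0, 0.5], index=[0, 4, 8, 12])

# loc slices by label, and both endpoints are inclusive:
trimmed = errors.loc[8:]   # keeps the entries at t=8 and t=12
```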
**Exercise:** How sensitive are the results to the starting guess for the parameters? If you try different values for the starting guess, do you get the same values for the best parameters?
### Interpreting parameters
Based on the parameters of the model, we can estimate glucose effectiveness and insulin sensitivity.
```
def indices(G0, k1, k2, k3):
"""Compute glucose effectiveness and insulin sensitivity.
G0: initial blood glucose
k1: rate parameter
k2: rate parameter
k3: rate parameter
returns: State object containing S_G and S_I
"""
return State(S_G=k1, S_I=k3/k2)
```
Here are the results.
```
indices(*best_params)
```
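For a sense of how these indices are computed, here is the same arithmetic with hypothetical parameter values (illustrative numbers only, not the fitted results):

```python
# Hypothetical rate parameters, for illustration only
k1, k2, k3 = 0.03, 0.02, 1e-5

S_G = k1        # glucose effectiveness
S_I = k3 / k2   # insulin sensitivity
```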
### The insulin minimal model
In addition to the glucose minimal model, Pacini and Bergman present an insulin minimal model, in which the concentration of insulin, $I$, is governed by this differential equation:
$ \frac{dI}{dt} = -k I(t) + \gamma (G(t) - G_T) t $
**Exercise:** Write a version of `make_system` that takes the parameters of this model, `I0`, `k`, `gamma`, and `G_T` as parameters, along with a `DataFrame` containing the measurements, and returns a `System` object suitable for use with `run_simulation` or `run_odeint`.
Use it to make a `System` object with the following parameters:
```
I0 = 360
k = 0.25
gamma = 0.004
G_T = 80
# Solution goes here
# Solution goes here
```
**Exercise:** Write a slope function that takes `state`, `t`, and `system` as parameters and returns the derivative of `I` with respect to time. Test your function with the initial condition $I(0)=360$.
```
# Solution goes here
# Solution goes here
```
**Exercise:** Run `run_odeint` with your `System` object and slope function, and plot the results, along with the measured insulin levels.
```
# Solution goes here
# Solution goes here
```
**Exercise:** Write an error function that takes a sequence of parameters as an argument, along with the `DataFrame` containing the measurements. It should make a `System` object with the given parameters, run it, and compute the difference between the results of the simulation and the measured values. Test your error function by calling it with the parameters from the previous exercise.
Hint: As we did in a previous exercise, you might want to drop the errors for times prior to `t=8`.
```
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** Use `fit_leastsq` to find the parameters that best fit the data. Make a `System` object with those parameters, run it, and plot the results along with the measurements.
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** Using the best parameters, estimate the first-phase and second-phase pancreatic responsivity:
$ \phi_1 = \frac{I_{max} - I_b}{k (G_0 - G_b)} $
$ \phi_2 = \gamma \times 10^4 $
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
### Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
Adapted from [Keras examples directory](https://github.com/fchollet/keras/tree/master/examples).
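One step worth calling out before the full script: `np_utils.to_categorical` below turns integer class labels into one-hot rows. A standalone NumPy equivalent of that conversion:

```python
import numpy as np

labels = np.array([0, 2, 1])   # example class labels
nb_classes = 3

# One row per label, with a 1 in the label's column
one_hot = np.eye(nb_classes)[labels]
```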
```
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
batch_size = 128
nb_classes = 10
nb_epoch = 12
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
```
# Maximizing the ELBO
> In this post, we will cover the complete implementation of Variational AutoEncoder, which can optimize the ELBO objective function. This is the summary of lecture "Probabilistic Deep Learning with Tensorflow 2" from Imperial College London.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Coursera, Tensorflow_probability, ICL]
- image: images/fashion_mnist_generated.png
## Packages
```
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from IPython.display import HTML, Image
tfd = tfp.distributions
tfpl = tfp.layers
tfb = tfp.bijectors
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams["animation.html"] = "jshtml"
plt.rcParams['animation.embed_limit'] = 2**128
print("Tensorflow Version: ", tf.__version__)
print("Tensorflow Probability Version: ", tfp.__version__)
```
## Overview
### Prior Distribution
$ \text{latent variable } z \sim N(0, I) = p(z) \\
p(x \vert z) = \text{decoder}(z) \\
x \sim p(x \vert z) $
### Approximating True Posterior distribution
$ \text{encoder }(x) = q(z \vert x) \simeq p(z \vert x) \\
\begin{aligned} \log p(x) & \ge \mathbb{E}_{z \sim q(z \vert x)}[-\log q(z \vert x) + \log p(x, z)] \quad \leftarrow \text{maximizing this lower bound} \\
&= - \mathrm{KL} (q(z \vert x) \vert \vert p(z)) + \mathbb{E}_{z \sim q(z \vert x)}[\log p(x \vert z)] \quad \leftarrow \text{Evidence Lower Bound (ELBO)} \end{aligned}$
### Sample Encoder Architecture
```python
latent_size = 2
event_shape = (28, 28, 1)
encoder = Sequential([
Conv2D(8, (5, 5), strides=2, activation='tanh', input_shape=event_shape),
Conv2D(8, (5, 5), strides=2, activation='tanh'),
Flatten(),
Dense(64, activation='tanh'),
Dense(2 * latent_size),
tfpl.DistributionLambda(lambda t: tfd.MultivariateNormalDiag(
loc=t[..., :latent_size], scale_diag=tf.math.exp(t[..., latent_size:]))),
], name='encoder')
encoder(X_train[:16])
```
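The final `Dense(2 * latent_size)` layer packs means and log-scales into one vector; here is a minimal NumPy illustration of the split performed by the `DistributionLambda` above (hypothetical output values):

```python
import numpy as np

latent_size = 2
t = np.array([0.3, -0.1, 0.0, np.log(0.5)])  # hypothetical network output

loc = t[:latent_size]                  # the means
scale_diag = np.exp(t[latent_size:])   # exp keeps the scales strictly positive
```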
### Sample Decoder Architecture
Almost the reverse of the encoder's architecture.
```python
decoder = Sequential([
Dense(64, activation='tanh', input_shape=(latent_size, )),
Dense(128, activation='tanh'),
Reshape((4, 4, 8)), # reshape the 128-vector into the 4x4x8 tensor the Conv2DTranspose layers expect
Conv2DTranspose(8, (5, 5), strides=2, output_padding=1, activation='tanh'),
Conv2DTranspose(8, (5, 5), strides=2, output_padding=1, activation='tanh'),
Conv2D(1, (3, 3), padding='SAME'),
Flatten(),
tfpl.IndependentBernoulli(event_shape)
], name='decoder')
decoder(tf.random.normal([16, latent_size]))
```
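The `Reshape((4, 4, 8))` and the two strided `Conv2DTranspose` layers have to land back on 28×28. A quick check using the Keras output-size formula for `Conv2DTranspose` with the default `padding='valid'` (a sketch; verify against the Keras docs for other padding modes):

```python
def deconv_out(size, kernel, stride, output_padding=0):
    # Keras Conv2DTranspose, padding='valid':
    # out = (in - 1) * stride + kernel + output_padding
    return (size - 1) * stride + kernel + output_padding

h = deconv_out(4, 5, 2, output_padding=1)   # 4 -> 12
h = deconv_out(h, 5, 2, output_padding=1)   # 12 -> 28
```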
### Prior Distribution for zero-mean gaussian with identity covariance matrix
```python
prior = tfd.MultivariateNormalDiag(loc=tf.zeros(latent_size))
```
### ELBO objective function
One way to implement the ELBO objective is to compute the KL divergence analytically.
```python
def loss_fn(X_true, approx_posterior, X_pred, prior_dist):
"""
X_true: batch of data examples
approx_posterior: the output of encoder
X_pred: output of decoder
prior_dist: Prior distribution
"""
return tf.reduce_mean(tfd.kl_divergence(approx_posterior, prior_dist) - X_pred.log_prob(X_true))
```
The other way is to estimate the KL divergence by Monte Carlo sampling instead of computing it analytically.
```python
def loss_fn(X_true, approx_posterior, X_pred, prior_dist):
reconstruction_loss = -X_pred.log_prob(X_true)
approx_posterior_sample = approx_posterior.sample()
kl_approx = (approx_posterior.log_prob(approx_posterior_sample) - prior_dist.log_prob(approx_posterior_sample))
return tf.reduce_mean(kl_approx + reconstruction_loss)
```
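To see that the Monte Carlo estimator targets the same quantity as the analytic KL, here is a standalone NumPy check with univariate Gaussians, q = N(μ, σ²) against the prior p = N(0, 1) (illustration only, separate from the model code):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.8   # hypothetical posterior parameters

# Analytic KL(q || p) for univariate Gaussians
kl_analytic = np.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5

# Monte Carlo estimate: average of log q(z) - log p(z) over samples from q
z = rng.normal(mu, sigma, size=200_000)
log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
kl_mc = np.mean(log_q - log_p)
```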
### Calculating Gradient of Loss function
```python
@tf.function
def get_loss_and_grads(x):
with tf.GradientTape() as tape:
approx_posterior = encoder(x)
approx_posterior_sample = approx_posterior.sample()
X_pred = decoder(approx_posterior_sample)
current_loss = loss_fn(x, approx_posterior, X_pred, prior)
grads = tape.gradient(current_loss, encoder.trainable_variables + decoder.trainable_variables)
return current_loss, grads
```
### Training Loop
```python
optimizer = tf.keras.optimizers.Adam()
for epoch in range(num_epochs):
for train_batch in train_data:
loss, grads = get_loss_and_grads(train_batch)
optimizer.apply_gradients(zip(grads, encoder.trainable_variables + decoder.trainable_variables))
```
### Test
```python
z = prior.sample(1) # (1, 2)
x = decoder(z).sample() # (1, 28, 28, 1)
X_encoded = encoder(X_sample)
def vae(inputs):
approx_posterior = encoder(inputs)
decoded = decoder(approx_posterior.sample())
return decoded.sample()
reconstruction = vae(X_sample)
```
## Tutorial
Review of terminology:
- $p(z)$ = prior
- $q(z|x)$ = encoding distribution
- $p(x|z)$ = decoding distribution
$$
\begin{aligned}
\log p(x) &\geq \mathrm{E}_{Z \sim q(z | x)}\big[−\log q(Z | x) + \log p(x, Z)\big]\\
&= - \mathrm{KL}\big[ \ q(z | x) \ || \ p(z) \ \big] + \mathrm{E}_{Z \sim q(z | x)}\big[\log p(x | Z)\big]
\end{aligned}
$$
```
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Flatten, Reshape
# Import Fashion MNIST, make it a Tensorflow Dataset
(X_train, _), (X_test, _) = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
example_X = X_test[:16]
batch_size = 64
X_train = tf.data.Dataset.from_tensor_slices(X_train).batch(batch_size)
# Define the encoding distribution, q(z | x)
latent_size = 2
event_shape = (28, 28)
encoder = Sequential([
Flatten(input_shape=event_shape),
Dense(256, activation='relu'),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(2 * latent_size),
tfpl.DistributionLambda(
lambda t: tfd.MultivariateNormalDiag(
loc=t[..., :latent_size],
scale_diag=tf.math.exp(t[..., latent_size:])
)
)
])
# Pass an example image through the network - should return a batch of MultivariateNormalDiag
encoder(example_X)
# Define the decoding distribution, p(x | z)
decoder = Sequential([
Dense(32, activation='relu'),
Dense(64, activation='relu'),
Dense(128, activation='relu'),
Dense(256, activation='relu'),
Dense(tfpl.IndependentBernoulli.params_size(event_shape)),
tfpl.IndependentBernoulli(event_shape)
])
# Pass a batch of examples to the decoder
decoder(tf.random.normal([16, latent_size]))
# Define the prior, p(z) - a standard bivariate Gaussian
prior = tfd.MultivariateNormalDiag(loc=tf.zeros(latent_size))
```
The loss function we need to estimate is
$$
-\mathrm{ELBO} = \mathrm{KL}[ \ q(z|x) \ || \ p(z) \ ] - \mathrm{E}_{Z \sim q(z|x)}[\log p(x|Z)]
$$
where $x = (x_1, x_2, \ldots, x_n)$ refers to all observations, $z = (z_1, z_2, \ldots, z_n)$ refers to corresponding latent variables.
Assumed independence of examples implies that we can write this as
$$
\sum_j \mathrm{KL}[ \ q(z_j|x_j) \ || \ p(z_j) \ ] - \mathrm{E}_{Z_j \sim q(z_j|x_j)}[\log p(x_j|Z_j)]
$$
```
# Specify the loss function, an estimate of the -ELBO
def loss(x, encoding_dist, sampled_decoding_dist, prior):
return tf.reduce_sum(
tfd.kl_divergence(encoding_dist, prior) - sampled_decoding_dist.log_prob(x)
)
# Define a function that returns the loss and its gradients
@tf.function
def get_loss_and_grads(x):
with tf.GradientTape() as tape:
encoding_dist = encoder(x)
sampled_z = encoding_dist.sample()
sampled_decoding_dist = decoder(sampled_z)
current_loss = loss(x, encoding_dist, sampled_decoding_dist, prior)
grads = tape.gradient(current_loss, encoder.trainable_variables + decoder.trainable_variables)
return current_loss, grads
# Compile and train the model
num_epochs = 10
optimizer = tf.keras.optimizers.Adam()
for i in range(num_epochs):
for train_batch in X_train:
current_loss, grads = get_loss_and_grads(train_batch)
optimizer.apply_gradients(zip(grads, encoder.trainable_variables + decoder.trainable_variables))
print('-ELBO after epoch {}: {:.0f}'.format(i + 1, current_loss.numpy()))
# Connect encoder and decoder, compute a reconstruction
def vae(inputs):
approx_posterior = encoder(inputs)
decoding_dist = decoder(approx_posterior.sample())
return decoding_dist.sample()
example_reconstruction = vae(example_X).numpy().squeeze()
# Plot examples against reconstructions
f, axs = plt.subplots(2, 6, figsize=(16, 5))
for j in range(6):
axs[0, j].imshow(example_X[j, :, :].squeeze(), cmap='binary')
axs[1, j].imshow(example_reconstruction[j, :, :], cmap='binary')
axs[0, j].axis('off')
axs[1, j].axis('off')
```
Since sampled reconstructions are binary and lose the grayscale detail, using the mean of the decoding distribution gives cleaner results.
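The decoder's `IndependentBernoulli` places a probability on each pixel; sampling gives hard 0/1 values, while the mean returns the probabilities themselves as grayscale intensities. A standalone NumPy illustration (hypothetical per-pixel probabilities):

```python
import numpy as np

rng = np.random.default_rng(1)
probs = np.array([0.1, 0.5, 0.9])   # hypothetical decoder outputs per pixel

sample = rng.binomial(1, probs)     # hard 0/1 pixels, one coin flip each
mean = probs                        # the Bernoulli mean is just p: grayscale values
```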
```
# Connect encoder and decoder, compute a reconstruction with mean
def vae_mean(inputs):
approx_posterior = encoder(inputs)
decoding_dist = decoder(approx_posterior.sample())
return decoding_dist.mean()
example_reconstruction = vae_mean(example_X).numpy().squeeze()
# Plot examples against reconstructions
f, axs = plt.subplots(2, 6, figsize=(16, 5))
for j in range(6):
axs[0, j].imshow(example_X[j, :, :].squeeze(), cmap='binary')
axs[1, j].imshow(example_reconstruction[j, :, :], cmap='binary')
axs[0, j].axis('off')
axs[1, j].axis('off')
# Generate an example - sample a z value, then sample a reconstruction from p(x|z)
z = prior.sample(6)
generated_x = decoder(z).sample()
# Display generated_x
f, axs = plt.subplots(1, 6, figsize=(16, 5))
for j in range(6):
axs[j].imshow(generated_x[j, :, :].numpy().squeeze(), cmap='binary')
axs[j].axis('off')
# Generate an example - sample a z value, then sample a reconstruction from p(x|z)
z = prior.sample(6)
generated_x = decoder(z).mean()
# Display generated_x
f, axs = plt.subplots(1, 6, figsize=(16, 5))
for j in range(6):
axs[j].imshow(generated_x[j, :, :].numpy().squeeze(), cmap='binary')
axs[j].axis('off')
```
What if we use Monte Carlo sampling for the KL divergence?
```
encoder = Sequential([
Flatten(input_shape=event_shape),
Dense(256, activation='relu'),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(2 * latent_size),
tfpl.DistributionLambda(
lambda t: tfd.MultivariateNormalDiag(
loc=t[..., :latent_size],
scale_diag=tf.math.exp(t[..., latent_size:])
)
)
])
decoder = Sequential([
Dense(32, activation='relu'),
Dense(64, activation='relu'),
Dense(128, activation='relu'),
Dense(256, activation='relu'),
Dense(tfpl.IndependentBernoulli.params_size(event_shape)),
tfpl.IndependentBernoulli(event_shape)
])
# Define the prior, p(z) - a standard bivariate Gaussian
prior = tfd.MultivariateNormalDiag(loc=tf.zeros(latent_size))
def loss(x, encoding_dist, sampled_decoding_dist, prior, sampled_z):
reconstruction_loss = -sampled_decoding_dist.log_prob(x)
kl_approx = (encoding_dist.log_prob(sampled_z) - prior.log_prob(sampled_z))
return tf.reduce_sum(kl_approx + reconstruction_loss)
@tf.function
def get_loss_and_grads(x):
with tf.GradientTape() as tape:
encoding_dist = encoder(x)
sampled_z = encoding_dist.sample()
sampled_decoding_dist = decoder(sampled_z)
current_loss = loss(x, encoding_dist, sampled_decoding_dist, prior, sampled_z)
grads = tape.gradient(current_loss, encoder.trainable_variables + decoder.trainable_variables)
return current_loss, grads
# Compile and train the model
num_epochs = 10
optimizer = tf.keras.optimizers.Adam()
for i in range(num_epochs):
for train_batch in X_train:
current_loss, grads = get_loss_and_grads(train_batch)
optimizer.apply_gradients(zip(grads, encoder.trainable_variables + decoder.trainable_variables))
print('-ELBO after epoch {}: {:.0f}'.format(i + 1, current_loss.numpy()))
# Connect encoder and decoder, compute a reconstruction with mean
def vae_mean(inputs):
approx_posterior = encoder(inputs)
decoding_dist = decoder(approx_posterior.sample())
return decoding_dist.mean()
example_reconstruction = vae_mean(example_X).numpy().squeeze()
# Plot examples against reconstructions
f, axs = plt.subplots(2, 6, figsize=(16, 5))
for j in range(6):
axs[0, j].imshow(example_X[j, :, :].squeeze(), cmap='binary')
axs[1, j].imshow(example_reconstruction[j, :, :], cmap='binary')
axs[0, j].axis('off')
axs[1, j].axis('off')
# Generate an example - sample a z value, then sample a reconstruction from p(x|z)
z = prior.sample(6)
generated_x = decoder(z).mean()
# Display generated_x
f, axs = plt.subplots(1, 6, figsize=(16, 5))
for j in range(6):
axs[j].imshow(generated_x[j, :, :].numpy().squeeze(), cmap='binary')
axs[j].axis('off')
```