# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (cv)
# language: python
# name: cv
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # Testing different Hyperparameters and Benchmarking
# In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets.
#
# For an example of how to scale up with remote GPU clusters on Azure Machine Learning, please view [24_exploring_hyperparameters_on_azureml.ipynb](24_exploring_hyperparameters_on_azureml.ipynb).
# ## Table of Contents
#
# * [Testing hyperparameters](#hyperparam)
# * [Using Python](#python)
# * [Using the CLI](#cli)
# * [Visualizing the results](#visualize)
#
# ---
# ## Testing hyperparameters <a name="hyperparam"></a>
#
# Let's say we want to learn more about __how different learning rates and different image sizes affect our model's accuracy when restricted to 10 epochs__, and we want to build an experiment to test out these hyperparameters. We also want to try these parameters out on two different variations of the dataset - one where the images are kept raw (maybe there is a watermark on the image) and one where the images have been altered (the same dataset where there was some attempt to remove the watermark).
#
# In this notebook, we'll walk through how we use the Parameter Sweeper module with the following:
#
# - use Python to perform this experiment
# - use the CLI to perform this experiment
# - evaluate the results using Pandas
# Ensure edits to libraries are loaded and plotting is shown in the notebook.
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# We start by importing the utilities we need.
# +
import sys
import scrapbook as sb
import fastai
sys.path.append("../../")
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from utils_cv.classification.parameter_sweeper import ParameterSweeper, clean_sweeper_df, plot_sweeper_df
fastai.__version__
# -
# To use the Parameter Sweeper tool for single label classification, we'll need to make sure that the data is stored such that images are sorted into their classes inside of a subfolder. In this notebook, we'll use the Fridge Objects dataset, which is already stored in the correct format. We also want to use the Fridge Objects Watermarked dataset. We want to see whether the original images (which are watermarked) will perform just as well as the non-watermarked images.
#
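# The sweeper expects one subfolder per class; a standard-library sketch of that
# layout (folder and file names below are made up for illustration):

```python
import tempfile
from pathlib import Path

# Build a tiny example of the expected layout: one subfolder per class,
# with the images for that class inside it.
root = Path(tempfile.mkdtemp()) / "fridge_objects"
for label in ["can", "carton", "milk_bottle", "water_bottle"]:
    (root / label).mkdir(parents=True)
    (root / label / "img_001.jpg").touch()  # images live inside their class folder

classes = sorted(p.name for p in root.iterdir() if p.is_dir())
print(classes)  # ['can', 'carton', 'milk_bottle', 'water_bottle']
```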
# Define some parameters we will use in this notebook
# + tags=["parameters"]
DATA = [unzip_url(Urls.fridge_objects_path, exist_ok=True), unzip_url(Urls.fridge_objects_watermark_path, exist_ok=True)]
REPS = 3
LEARNING_RATES = [1e-3, 1e-4, 1e-5]
IM_SIZES = [299, 499]
EPOCHS = [10]
# -
# ### Using Python <a name="python"></a>
# We start by creating the Parameter Sweeper object:
sweeper = ParameterSweeper()
# Before we start testing, it's a good idea to see what the default parameters are. We can use the `parameters` property to easily see those default values.
sweeper.parameters
# Now that we know the defaults, we can pass it the parameters we want to test.
#
# In this notebook, we want to see the effect of different learning rates across different image sizes using only 10 epochs (the default number of epochs is 15). To do so, we run the `update_parameters` function as follows:
#
# ```python
# sweeper.update_parameters(learning_rate=[1e-3, 1e-4, 1e-5], im_size=[299, 499], epochs=[10])
# ```
#
# Notice that all parameters must be passed in as a list, including single values such as the number of epochs.
#
# These parameters will be used to calculate the number of permutations to run. In this case, we've passed in three options for learning rate, two for image size, and one for the number of epochs. This results in 3 × 2 × 1 = 6 total permutations.
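# Under the hood this is just the Cartesian product of the parameter lists; a quick
# standard-library sketch of the 6 combinations (illustrative only — the sweeper
# builds these itself):

```python
from itertools import product

learning_rates = [1e-3, 1e-4, 1e-5]
im_sizes = [299, 499]
epochs = [10]

# Every combination of one learning rate, one image size, and one epoch count
permutations = list(product(learning_rates, im_sizes, epochs))
print(len(permutations))  # 3 * 2 * 1 = 6
for lr, size, ep in permutations:
    print(f"lr={lr}, im_size={size}, epochs={ep}")
```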
sweeper.update_parameters(learning_rate=LEARNING_RATES, im_size=IM_SIZES, epochs=EPOCHS)
# Now that we have our parameters defined, we call the `run()` function with the dataset to test on. We can also optionally pass in:
# - the number of repetitions to run each permutation (default is 3)
# - whether or not we want the training to stop early if the metric (accuracy) doesn't improve by 0.01 (1%) over 3 epochs (default is False)
#
# The `run` function returns a multi-index dataframe which we can work with right away.
df = sweeper.run(datasets=DATA, reps=REPS); df
# ### Using the CLI <a name="cli"></a>
#
# Instead of using Python to run this experiment, we may want to test from the CLI. We can do so by using the `scripts/sweep.py` file.
#
# To reproduce the same test (different learning rates across different image sizes using only 10 epochs), and the same settings (3 repetitions, and no early_stopping) we can run the following:
#
# ```sh
# python scripts/sweep.py \
#     --learning-rates 1e-3 1e-4 1e-5 \
#     --im-size 299 499 \
#     --epochs 10 \
#     --repeat 3 \
#     --no-early-stopping \
#     --inputs <my-data-dir> \
#     --output lr_bs_test.csv
# ```
#
# Additionally, we've added an output parameter, which will automatically dump our dataframe into a csv file. To simplify the command, we can use the acronyms of the params. We can also remove `--no-early-stopping`, as that is the default behavior.
#
# ```sh
# python scripts/sweep.py -lr 1e-3 1e-4 1e-5 -is 299 499 -e 10 -i <my-data-dir> -o lr_bs_test.csv
# ```
#
# Once the script completes, load the csv into a dataframe to explore its contents. We'll want to specify `index_col=[0, 1, 2]` since it is a multi-index dataframe.
#
# ```python
# df = pd.read_csv("data/lr_bs_test.csv", index_col=[0, 1, 2])
# ```
#
# HINT: You can learn more about how to use the script with the `--help` flag.
#
# ```sh
# python scripts/sweep.py --help
# ```
# ### Visualize Results <a name="visualize"></a>
# When we read in the multi-index dataframe, index 0 represents the run number, index 1 represents a single permutation of parameters, and index 2 represents the dataset.
# To see the results, show the df using the `clean_sweeper_df` helper function. This will display all the hyperparameters in a nice, readable way.
df = clean_sweeper_df(df)
# Since we've run our benchmarking over 3 repetitions, we may want to just look at the averages across the different __run numbers__.
df.mean(level=(1,2)).T
# Print the average accuracy over the different runs for each dataset independently.
ax = df.mean(level=(1,2))["accuracy"].unstack().plot(kind='bar', figsize=(12, 6))
ax.set_yticklabels(['{:,.2%}'.format(x) for x in ax.get_yticks()])
# Additionally, we may simply want to see which set of hyperparameters performs best across the different __datasets__. We can do that by averaging the results over the datasets.
df.mean(level=(1)).T
# To make it easier to see which permutation did the best, we can plot the results using the `plot_sweeper_df` helper function. This plot will help us easily see which parameters offer the highest accuracies.
plot_sweeper_df(df.mean(level=(1)), sort_by="accuracy")
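# Note: `DataFrame.mean(level=...)`, used above, was removed in pandas 2.0. On newer
# pandas versions the equivalent is a `groupby` on the index levels — a sketch on
# synthetic sweeper-shaped data (index and column names here are illustrative):

```python
import numpy as np
import pandas as pd

# (run number, parameter permutation, dataset) multi-index, mimicking run()'s output
index = pd.MultiIndex.from_product(
    [[0, 1, 2],
     ["lr:0.001|im:299", "lr:0.001|im:499"],
     ["fridge", "fridge_watermark"]],
    names=["run", "permutation", "dataset"],
)
df = pd.DataFrame({"accuracy": np.linspace(0.80, 0.95, len(index))}, index=index)

# Old: df.mean(level=(1, 2))   New: group by the same index levels
means = df.groupby(level=[1, 2]).mean()
print(means.shape)  # (4, 1): 2 permutations x 2 datasets
```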
# Preserve some of the notebook outputs
sb.glue("nr_elements", len(df))
sb.glue("max_accuray", df.max().accuracy)
sb.glue("min_accuray", df.min().accuracy)
sb.glue("max_duration", df.max().duration)
sb.glue("min_duration", df.min().duration)
| scenarios/classification/11_exploring_hyperparameters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import division
import cv2
from matplotlib import pyplot as plt
import numpy as np
from math import cos, sin
def show(image):
    plt.figure(figsize=(10, 10))
    plt.imshow(image, interpolation='nearest')

def circle_contour(image, contour):
    image_with_ellipse = image.copy()
    ellipse = cv2.fitEllipse(contour)
    cv2.ellipse(image_with_ellipse, ellipse, [0, 255, 0], 2)
    return image_with_ellipse

def overlay_image(image, mask):
    rgb = cv2.cvtColor(mask, cv2.COLOR_GRAY2RGB)
    return cv2.addWeighted(rgb, 0.5, image, 0.5, 0)

def find_biggest_contour(image):
    image = image.copy()
    # cv2.findContours returns (image, contours, hierarchy) in OpenCV 3.x but
    # (contours, hierarchy) in 2.x/4.x; taking the last two values handles both.
    contours, hierarchy = cv2.findContours(image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
    biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    mask = np.zeros(image.shape, np.uint8)
    cv2.drawContours(mask, [biggest_contour], -1, 255, -1)
    return biggest_contour, mask

def find_image(image):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    scale = 700 / max(image.shape)
    image = cv2.resize(image, None, fx=scale, fy=scale)
    image_blur = cv2.GaussianBlur(image, (7, 7), 0)
    image_hsv = cv2.cvtColor(image_blur, cv2.COLOR_RGB2HSV)
    # Red straddles the hue wrap-around (0/180), so two ranges are needed
    min_red = np.array([0, 100, 80])
    min_red2 = np.array([160, 100, 80])
    max_red = np.array([10, 255, 255])
    max_red2 = np.array([180, 255, 255])
    mask1 = cv2.inRange(image_hsv, min_red, max_red)
    mask2 = cv2.inRange(image_hsv, min_red2, max_red2)
    mask = mask1 + mask2
    # Segmentation: close then open to clean up the mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    mask_closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask_open = cv2.morphologyEx(mask_closed, cv2.MORPH_OPEN, kernel)
    big_image_contour, mask_image = find_biggest_contour(mask_open)
    overlay = overlay_image(image, mask_image)
    circled = circle_contour(overlay, big_image_contour)
    show(circled)
    return cv2.cvtColor(circled, cv2.COLOR_RGB2BGR)
# -
img = cv2.imread('apple.jpg')
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
res_img = find_image(img)
show(res_img)
cv2.imwrite('apple_detected.jpg', res_img)
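# Red straddles the hue wrap-around point (hue runs 0–180 in OpenCV's 8-bit HSV),
# which is why `find_image` builds two masks and adds them. A numpy-only sketch of
# the same idea on a few synthetic HSV pixels:

```python
import numpy as np

hsv = np.array([[  2, 200, 150],   # reddish, low-hue side of the wrap
                [170, 200, 150],   # reddish, high-hue side of the wrap
                [ 60, 200, 150]],  # green
               dtype=np.uint8)

h = hsv[:, 0]
mask1 = (h <= 10)    # analogous to inRange([0, ...], [10, ...])
mask2 = (h >= 160)   # analogous to inRange([160, ...], [180, ...])
red = mask1 | mask2  # combine both sides of the wrap-around
print(red.tolist())  # [True, True, False]
```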
| Object Detection With OpenCV/ObjectDetectionOpenCV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Time series
from __future__ import division
from pandas import Series, DataFrame
import pandas as pd
from numpy.random import randn
import numpy as np
pd.options.display.max_rows = 12
np.set_printoptions(precision=4, suppress=True)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(12, 4))
# %matplotlib inline
# ## Date and Time Data Types and Tools
from datetime import datetime
now = datetime.now()
now
now.year, now.month, now.day
delta = datetime(2011, 1, 7) - datetime(2008, 6, 24, 8, 15)
delta
delta.days
delta.seconds
from datetime import timedelta
start = datetime(2011, 1, 7)
start + timedelta(12)
start - 2 * timedelta(12)
# ### Converting between string and datetime
stamp = datetime(2011, 1, 3)
str(stamp)
stamp.strftime('%Y-%m-%d')
value = '2011-01-03'
datetime.strptime(value, '%Y-%m-%d')
datestrs = ['7/6/2011', '8/6/2011']
[datetime.strptime(x, '%m/%d/%Y') for x in datestrs]
from dateutil.parser import parse
parse('2011-01-03')
parse('Jan 31, 1997 10:45 PM')
parse('6/12/2011', dayfirst=True)
datestrs
pd.to_datetime(datestrs)
# note: output changed (no '00:00:00' anymore)
idx = pd.to_datetime(datestrs + [None])
idx
idx[2]
pd.isnull(idx)
# ## Time Series Basics
from datetime import datetime
dates = [datetime(2011, 1, 2), datetime(2011, 1, 5), datetime(2011, 1, 7),
datetime(2011, 1, 8), datetime(2011, 1, 10), datetime(2011, 1, 12)]
ts = Series(np.random.randn(6), index=dates)
ts
type(ts)
# note: output changed to "pandas.core.series.Series"
ts.index
ts + ts[::2]
ts.index.dtype
# note: output changed from dtype('datetime64[ns]') to dtype('<M8[ns]')
stamp = ts.index[0]
stamp
# note: output changed from <Timestamp: 2011-01-02 00:00:00> to Timestamp('2011-01-02 00:00:00')
# ### Indexing, selection, subsetting
stamp = ts.index[2]
ts[stamp]
ts['1/10/2011']
ts['20110110']
longer_ts = Series(np.random.randn(1000),
index=pd.date_range('1/1/2000', periods=1000))
longer_ts
longer_ts['2001']
longer_ts['2001-05']
ts[datetime(2011, 1, 7):]
ts
ts['1/6/2011':'1/11/2011']
ts.truncate(after='1/9/2011')
dates = pd.date_range('1/1/2000', periods=100, freq='W-WED')
long_df = DataFrame(np.random.randn(100, 4),
index=dates,
columns=['Colorado', 'Texas', 'New York', 'Ohio'])
long_df.ix['5-2001']
# ### Time series with duplicate indices
dates = pd.DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/2/2000',
'1/3/2000'])
dup_ts = Series(np.arange(5), index=dates)
dup_ts
dup_ts.index.is_unique
dup_ts['1/3/2000'] # not duplicated
dup_ts['1/2/2000'] # duplicated
grouped = dup_ts.groupby(level=0)
grouped.mean()
grouped.count()
# ## Date ranges, Frequencies, and Shifting
ts
ts.resample('D')
# ### Generating date ranges
index = pd.date_range('4/1/2012', '6/1/2012')
index
pd.date_range(start='4/1/2012', periods=20)
pd.date_range(end='6/1/2012', periods=20)
pd.date_range('1/1/2000', '12/1/2000', freq='BM')
pd.date_range('5/2/2012 12:56:31', periods=5)
pd.date_range('5/2/2012 12:56:31', periods=5, normalize=True)
# ### Frequencies and Date Offsets
from pandas.tseries.offsets import Hour, Minute
hour = Hour()
hour
four_hours = Hour(4)
four_hours
pd.date_range('1/1/2000', '1/3/2000 23:59', freq='4h')
Hour(2) + Minute(30)
pd.date_range('1/1/2000', periods=10, freq='1h30min')
# #### Week of month dates
rng = pd.date_range('1/1/2012', '9/1/2012', freq='WOM-3FRI')
list(rng)
# ### Shifting (leading and lagging) data
ts = Series(np.random.randn(4),
index=pd.date_range('1/1/2000', periods=4, freq='M'))
ts
ts.shift(2)
ts.shift(-2)
# + active=""
# ts / ts.shift(1) - 1
# -
ts.shift(2, freq='M')
ts.shift(3, freq='D')
ts.shift(1, freq='3D')
ts.shift(1, freq='90T')
# #### Shifting dates with offsets
from pandas.tseries.offsets import Day, MonthEnd
now = datetime(2011, 11, 17)
now + 3 * Day()
now + MonthEnd()
now + MonthEnd(2)
offset = MonthEnd()
offset.rollforward(now)
offset.rollback(now)
ts = Series(np.random.randn(20),
index=pd.date_range('1/15/2000', periods=20, freq='4d'))
ts.groupby(offset.rollforward).mean()
ts.resample('M', how='mean')
# ## Time Zone Handling
import pytz
pytz.common_timezones[-5:]
tz = pytz.timezone('US/Eastern')
tz
# ### Localization and Conversion
rng = pd.date_range('3/9/2012 9:30', periods=6, freq='D')
ts = Series(np.random.randn(len(rng)), index=rng)
print(ts.index.tz)
pd.date_range('3/9/2012 9:30', periods=10, freq='D', tz='UTC')
ts_utc = ts.tz_localize('UTC')
ts_utc
ts_utc.index
ts_utc.tz_convert('US/Eastern')
ts_eastern = ts.tz_localize('US/Eastern')
ts_eastern.tz_convert('UTC')
ts_eastern.tz_convert('Europe/Berlin')
ts.index.tz_localize('Asia/Shanghai')
# ### Operations with time zone-aware Timestamp objects
stamp = pd.Timestamp('2011-03-12 04:00')
stamp_utc = stamp.tz_localize('utc')
stamp_utc.tz_convert('US/Eastern')
stamp_moscow = pd.Timestamp('2011-03-12 04:00', tz='Europe/Moscow')
stamp_moscow
stamp_utc.value
stamp_utc.tz_convert('US/Eastern').value
# 30 minutes before DST transition
from pandas.tseries.offsets import Hour
stamp = pd.Timestamp('2012-03-12 01:30', tz='US/Eastern')
stamp
stamp + Hour()
# 90 minutes before DST transition
stamp = pd.Timestamp('2012-11-04 00:30', tz='US/Eastern')
stamp
stamp + 2 * Hour()
# ### Operations between different time zones
rng = pd.date_range('3/7/2012 9:30', periods=10, freq='B')
ts = Series(np.random.randn(len(rng)), index=rng)
ts
ts1 = ts[:7].tz_localize('Europe/London')
ts2 = ts1[2:].tz_convert('Europe/Moscow')
result = ts1 + ts2
result.index
# ## Periods and Period Arithmetic
p = pd.Period(2007, freq='A-DEC')
p
p + 5
p - 2
pd.Period('2014', freq='A-DEC') - p
rng = pd.period_range('1/1/2000', '6/30/2000', freq='M')
rng
Series(np.random.randn(6), index=rng)
values = ['2001Q3', '2002Q2', '2003Q1']
index = pd.PeriodIndex(values, freq='Q-DEC')
index
# ### Period Frequency Conversion
p = pd.Period('2007', freq='A-DEC')
p.asfreq('M', how='start')
p.asfreq('M', how='end')
p = pd.Period('2007', freq='A-JUN')
p.asfreq('M', 'start')
p.asfreq('M', 'end')
p = pd.Period('Aug-2007', 'M')
p.asfreq('A-JUN')
rng = pd.period_range('2006', '2009', freq='A-DEC')
ts = Series(np.random.randn(len(rng)), index=rng)
ts
ts.asfreq('M', how='start')
ts.asfreq('B', how='end')
# ### Quarterly period frequencies
p = pd.Period('2012Q4', freq='Q-JAN')
p
p.asfreq('D', 'start')
p.asfreq('D', 'end')
p4pm = (p.asfreq('B', 'e') - 1).asfreq('T', 's') + 16 * 60
p4pm
p4pm.to_timestamp()
rng = pd.period_range('2011Q3', '2012Q4', freq='Q-JAN')
ts = Series(np.arange(len(rng)), index=rng)
ts
new_rng = (rng.asfreq('B', 'e') - 1).asfreq('T', 's') + 16 * 60
ts.index = new_rng.to_timestamp()
ts
# ### Converting Timestamps to Periods (and back)
rng = pd.date_range('1/1/2000', periods=3, freq='M')
ts = Series(randn(3), index=rng)
pts = ts.to_period()
ts
pts
rng = pd.date_range('1/29/2000', periods=6, freq='D')
ts2 = Series(randn(6), index=rng)
ts2.to_period('M')
pts = ts.to_period()
pts
pts.to_timestamp(how='end')
# ### Creating a PeriodIndex from arrays
data = pd.read_csv('ch08/macrodata.csv')
data.year
data.quarter
index = pd.PeriodIndex(year=data.year, quarter=data.quarter, freq='Q-DEC')
index
data.index = index
data.infl
# ## Resampling and Frequency Conversion
rng = pd.date_range('1/1/2000', periods=100, freq='D')
ts = Series(randn(len(rng)), index=rng)
ts.resample('M', how='mean')
ts.resample('M', how='mean', kind='period')
# ### Downsampling
rng = pd.date_range('1/1/2000', periods=12, freq='T')
ts = Series(np.arange(12), index=rng)
ts
ts.resample('5min', how='sum')
# note: output changed (as the default changed from closed='right', label='right' to closed='left', label='left')
ts.resample('5min', how='sum', closed='left')
ts.resample('5min', how='sum', closed='left', label='left')
ts.resample('5min', how='sum', loffset='-1s')
# #### Open-High-Low-Close (OHLC) resampling
ts.resample('5min', how='ohlc')
# note: output changed because of changed defaults
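# Note: the `how=` keyword used throughout this section targets pandas 0.x and was
# removed in later releases. A sketch of the method-chained equivalents (assuming a
# reasonably modern pandas):

```python
import numpy as np
import pandas as pd

rng = pd.date_range('1/1/2000', periods=12, freq='min')
ts = pd.Series(np.arange(12), index=rng)

# Old: ts.resample('5min', how='sum', closed='right', label='right')
summed = ts.resample('5min', closed='right', label='right').sum()

# Old: ts.resample('5min', how='ohlc')
bars = ts.resample('5min').ohlc()

print(summed.sum())       # 66 — every point is binned exactly once
print(list(bars.columns)) # ['open', 'high', 'low', 'close']
```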
# #### Resampling with GroupBy
rng = pd.date_range('1/1/2000', periods=100, freq='D')
ts = Series(np.arange(100), index=rng)
ts.groupby(lambda x: x.month).mean()
ts.groupby(lambda x: x.weekday).mean()
# ### Upsampling and interpolation
frame = DataFrame(np.random.randn(2, 4),
index=pd.date_range('1/1/2000', periods=2, freq='W-WED'),
columns=['Colorado', 'Texas', 'New York', 'Ohio'])
frame
df_daily = frame.resample('D')
df_daily
frame.resample('D', fill_method='ffill')
frame.resample('D', fill_method='ffill', limit=2)
frame.resample('W-THU', fill_method='ffill')
# ### Resampling with periods
frame = DataFrame(np.random.randn(24, 4),
index=pd.period_range('1-2000', '12-2001', freq='M'),
columns=['Colorado', 'Texas', 'New York', 'Ohio'])
frame[:5]
annual_frame = frame.resample('A-DEC', how='mean')
annual_frame
# Q-DEC: Quarterly, year ending in December
annual_frame.resample('Q-DEC', fill_method='ffill')
# note: output changed, default value changed from convention='end' to convention='start' + 'start' changed to span-like
# also the following cells
annual_frame.resample('Q-DEC', fill_method='ffill', convention='start')
annual_frame.resample('Q-MAR', fill_method='ffill')
# ## Time series plotting
close_px_all = pd.read_csv('ch09/stock_px.csv', parse_dates=True, index_col=0)
close_px = close_px_all[['AAPL', 'MSFT', 'XOM']]
close_px = close_px.resample('B', fill_method='ffill')
close_px.info()
close_px['AAPL'].plot()
close_px.ix['2009'].plot()
close_px['AAPL'].ix['01-2011':'03-2011'].plot()
appl_q = close_px['AAPL'].resample('Q-DEC', fill_method='ffill')
appl_q.ix['2009':].plot()
# ## Moving window functions
close_px = close_px.asfreq('B').fillna(method='ffill')
close_px.AAPL.plot()
pd.rolling_mean(close_px.AAPL, 250).plot()
plt.figure()
appl_std250 = pd.rolling_std(close_px.AAPL, 250, min_periods=10)
appl_std250[5:12]
appl_std250.plot()
# Define expanding mean in terms of rolling_mean
expanding_mean = lambda x: pd.rolling_mean(x, len(x), min_periods=1)
pd.rolling_mean(close_px, 60).plot(logy=True)
plt.close('all')
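# Note: `pd.rolling_mean`, `pd.rolling_std` and `pd.ewma` (used in this section) were
# removed in pandas 0.20; the same computations now hang off the `.rolling()`,
# `.ewm()` and `.expanding()` accessors. A sketch assuming a modern pandas:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(500),
              index=pd.date_range('1/1/2000', periods=500, freq='B'))

ma250 = s.rolling(window=250).mean()                  # was pd.rolling_mean(s, 250)
std250 = s.rolling(window=250, min_periods=10).std()  # was pd.rolling_std(s, 250, min_periods=10)
ewma60 = s.ewm(span=60).mean()                        # was pd.ewma(s, span=60)
exp_mean = s.expanding(min_periods=1).mean()          # expanding mean, no lambda needed
```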
# ### Exponentially-weighted functions
# +
fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True, sharey=True,
figsize=(12, 7))
aapl_px = close_px.AAPL['2005':'2009']
ma60 = pd.rolling_mean(aapl_px, 60, min_periods=50)
ewma60 = pd.ewma(aapl_px, span=60)
aapl_px.plot(style='k-', ax=axes[0])
ma60.plot(style='k--', ax=axes[0])
aapl_px.plot(style='k-', ax=axes[1])
ewma60.plot(style='k--', ax=axes[1])
axes[0].set_title('Simple MA')
axes[1].set_title('Exponentially-weighted MA')
# -
# ### Binary moving window functions
close_px
spx_px = close_px_all['SPX']
spx_rets = spx_px / spx_px.shift(1) - 1
returns = close_px.pct_change()
corr = pd.rolling_corr(returns.AAPL, spx_rets, 125, min_periods=100)
corr.plot()
corr = pd.rolling_corr(returns, spx_rets, 125, min_periods=100)
corr.plot()
# ### User-defined moving window functions
from scipy.stats import percentileofscore
score_at_2percent = lambda x: percentileofscore(x, 0.02)
result = pd.rolling_apply(returns.AAPL, 250, score_at_2percent)
result.plot()
# ## Performance and Memory Usage Notes
rng = pd.date_range('1/1/2000', periods=10000000, freq='10ms')
ts = Series(np.random.randn(len(rng)), index=rng)
ts
ts.resample('15min', how='ohlc').info()
# %timeit ts.resample('15min', how='ohlc')
rng = pd.date_range('1/1/2000', periods=10000000, freq='1s')
ts = Series(np.random.randn(len(rng)), index=rng)
# %timeit ts.resample('15s', how='ohlc')
| branches/1st-edition/ch10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Vacuum Wavelength Conversions
#
# This notebook compares various air to vacuum conversion formulas.
#
# * [Greisen *et al.* (2006)](http://adsabs.harvard.edu/abs/2006A%26A...446..747G) cites [International Union of Geodesy and Geophysics (1999)](http://www.iugg.org/assemblies/1999birmingham/1999crendus.pdf)
# - This version is used by [specutils](https://github.com/astropy/specutils).
# - Specifically, this is based on the *phase* refractivity of air, there is a slightly different formula for the *group* refractivity of air.
# * [Ciddor (1996)](http://adsabs.harvard.edu/abs/1996ApOpt..35.1566C)
# - Used by [PyDL](https://github.com/weaverba137/pydl) via the [Goddard IDL library](http://idlastro.gsfc.nasa.gov/).
# - This is the standard used by SDSS, at least since 2011. Prior to 2011, the Goddard IDL library used the IAU formula (below) plus an approximation of its inverse for vacuum to air.
# * [The wcslib *code*](https://github.com/astropy/astropy/blob/master/cextern/wcslib/C/spx.c) uses the formula from Cox, *Allen's Astrophysical Quantities* (2000), itself derived from [Edlén (1953)](http://adsabs.harvard.edu/abs/1953JOSA...43..339E), even though Greisen *et al.* (2006) says, "The standard relation given by Cox (2000) is mathematically intractable and somewhat dated."
# - Interestingly, this is the **IAU** standard, adopted in 1957 and again in 1991. No more recent IAU resolution replaces this formula.
#
# This would be a bit of a deep dive, but it would be interesting to see if these functions are based on measurements of the refractive index of air, *or* on explicit comparison of measured wavelengths in air to measured wavelengths in vacuum.
#
# As shown below, the Greisen formula gives consistently larger values when converting air to vacuum. The Ciddor and wcslib values are almost, but not quite, indistinguishable. The wcslib formula has a singularity at a value less than 2000 Å. The Ciddor formula probably has a similar singularity, but it explicitly hides this by not converting values less than 2000 Å.
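# Greisen's eqn 65, and the naive-inversion claim used later in this notebook, are
# easy to spot-check numerically. A standalone sketch (coefficients from Greisen
# et al. 2006; the test wavelength is arbitrary):

```python
def n_air(wlum):
    # Greisen et al. (2006) eqn 65: refractive index of standard air,
    # with the wavelength wlum in microns.
    return 1 + 1e-6 * (287.6155 + 1.62887 / wlum**2 + 0.01360 / wlum**4)

air = 0.5                    # microns
vac = air * n_air(air)       # air -> vacuum (eqn 65)
air_back = vac / n_air(vac)  # naive inverse: evaluate n at the vacuum wavelength
err = abs(air_back - air) / air
print(err)                   # tiny, so the naive inverse is acceptable in practice
```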
# ```
# int waveawav(dummy, nwave, swave, sawav, wave, awav, stat)
#
# double dummy;
# int nwave, swave, sawav;
# const double wave[];
# double awav[];
# int stat[];
#
# {
# int status = 0;
# double n, s;
# register int iwave, k, *statp;
# register const double *wavep;
# register double *awavp;
#
# wavep = wave;
# awavp = awav;
# statp = stat;
# for (iwave = 0; iwave < nwave; iwave++) {
# if (*wavep != 0.0) {
# n = 1.0;
# for (k = 0; k < 4; k++) {
# s = n/(*wavep);
# s *= s;
# n = 2.554e8 / (0.41e14 - s);
# n += 294.981e8 / (1.46e14 - s);
# n += 1.000064328;
# }
#
# *awavp = (*wavep)/n;
# *(statp++) = 0;
# } else {
# *(statp++) = 1;
# status = SPXERR_BAD_INSPEC_COORD;
# }
#
# wavep += swave;
# awavp += sawav;
# }
#
# return status;
# }
#
# int awavwave(dummy, nawav, sawav, swave, awav, wave, stat)
#
# double dummy;
# int nawav, sawav, swave;
# const double awav[];
# double wave[];
# int stat[];
#
# {
# int status = 0;
# double n, s;
# register int iawav, *statp;
# register const double *awavp;
# register double *wavep;
#
# awavp = awav;
# wavep = wave;
# statp = stat;
# for (iawav = 0; iawav < nawav; iawav++) {
# if (*awavp != 0.0) {
# s = 1.0/(*awavp);
# s *= s;
# n = 2.554e8 / (0.41e14 - s);
# n += 294.981e8 / (1.46e14 - s);
# n += 1.000064328;
# *wavep = (*awavp)*n;
# *(statp++) = 0;
# } else {
# *(statp++) = 1;
# status = SPXERR_BAD_INSPEC_COORD;
# }
#
# awavp += sawav;
# wavep += swave;
# }
#
# return status;
# }
# ```
# %matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.font_manager import fontManager, FontProperties
import astropy.units as u
from pydl.goddard.astro import airtovac, vactoair
wavelength = np.logspace(3,5,200) * u.Angstrom
# +
def waveawav(wavelength):
    """Vacuum to air conversion as actually implemented by wcslib.
    """
    wave = wavelength.to(u.m).value
    n = 1.0
    for k in range(4):
        s = (n/wave)**2
        n = 2.554e8 / (0.41e14 - s)
        n += 294.981e8 / (1.46e14 - s)
        n += 1.000064328
    return wavelength / n

def awavwave(wavelength):
    """Air to vacuum conversion as actually implemented by wcslib.
    Have to convert to meters(!) for this formula to work.
    """
    awav = wavelength.to(u.m).value
    s = (1.0/awav)**2
    n = 2.554e8 / (0.41e14 - s)
    n += 294.981e8 / (1.46e14 - s)
    n += 1.000064328
    return wavelength * n
# +
#
# These functions aren't defined until specutils 0.3.x. However specviz needs 0.2.2.
#
def air_to_vac(wavelength):
    """
    Implements the air to vacuum wavelength conversion described in eqn 65 of
    Griesen 2006
    """
    wlum = wavelength.to(u.um).value
    return (1+1e-6*(287.6155+1.62887/wlum**2+0.01360/wlum**4)) * wavelength

def vac_to_air(wavelength):
    """
    Griesen 2006 reports that the error in naively inverting Eqn 65 is less
    than 10^-9 and therefore acceptable. This is therefore eqn 67
    """
    wlum = wavelength.to(u.um).value
    nl = (1+1e-6*(287.6155+1.62887/wlum**2+0.01360/wlum**4))
    return wavelength/nl
# -
greisen_a2v = air_to_vac(wavelength) / wavelength - 1.0
ciddor_a2v = airtovac(wavelength) / wavelength - 1.0
wcslib_a2v = awavwave(wavelength) / wavelength - 1.0
good = (greisen_a2v > 0) & (ciddor_a2v > 0) & (wcslib_a2v > 0)
fig = plt.figure(figsize=(10, 10), dpi=100)
ax = fig.add_subplot(111)
p1 = ax.semilogx(wavelength[good], greisen_a2v[good], 'k.', label='Greisen')
p2 = ax.semilogx(wavelength[good], ciddor_a2v[good], 'gs', label='Ciddor')
p3 = ax.semilogx(wavelength[good], wcslib_a2v[good], 'ro', label='IAU')
foo = p2[0].set_markeredgecolor('g')
foo = p2[0].set_markerfacecolor('none')
foo = p3[0].set_markeredgecolor('r')
foo = p3[0].set_markerfacecolor('none')
# foo = ax.set_xlim([-5, 10])
# foo = ax.set_ylim([24, 15])
foo = ax.set_xlabel('Air Wavelength [Å]')
foo = ax.set_ylabel('Fractional Change in Wavelength [$\\lambda_{\\mathrm{vacuum}}/\\lambda_{\\mathrm{air}} - 1$]')
foo = ax.set_title('Comparison of Air to Vacuum Conversions')
l = ax.legend(numpoints=1)
greisen_v2a = 1.0 - vac_to_air(wavelength) / wavelength
ciddor_v2a = 1.0 - vactoair(wavelength) / wavelength
wcslib_v2a = 1.0 - waveawav(wavelength) / wavelength
good = (greisen_v2a > 0) & (ciddor_v2a > 0) & (wcslib_v2a > 0)
fig = plt.figure(figsize=(10, 10), dpi=100)
ax = fig.add_subplot(111)
p1 = ax.semilogx(wavelength[good], greisen_v2a[good], 'k.', label='Greisen')
p2 = ax.semilogx(wavelength[good], ciddor_v2a[good], 'gs', label='Ciddor')
p3 = ax.semilogx(wavelength[good], wcslib_v2a[good], 'ro', label='IAU')
foo = p2[0].set_markeredgecolor('g')
foo = p2[0].set_markerfacecolor('none')
foo = p3[0].set_markeredgecolor('r')
foo = p3[0].set_markerfacecolor('none')
# foo = ax.set_xlim([-5, 10])
# foo = ax.set_ylim([24, 15])
foo = ax.set_xlabel('Vacuum Wavelength [Å]')
foo = ax.set_ylabel('Fractional Change in Wavelength [$1 - \\lambda_{\\mathrm{air}}/\\lambda_{\\mathrm{vacuum}}$]')
foo = ax.set_title('Comparison of Vacuum to Air Conversions')
l = ax.legend(numpoints=1)
| docs/notebooks/Vacuum Wavelength Conversions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Single cell TCR/RNA-seq data analysis using Scirpy
#
# * __Notebook version__: `v0.0.3`
# * __Created by:__ `Imperial BRC Genomics Facility`
# * __Maintained by:__ `Imperial BRC Genomics Facility`
# * __Docker image:__ `imperialgenomicsfacility/scirpy-notebook-image:release-v0.0.2`
# * __Github repository:__ [imperial-genomics-facility/scirpy-notebook-image](https://github.com/imperial-genomics-facility/scirpy-notebook-image)
# * __Created on:__ {{ DATE_TAG }}
# * __Contact us:__ [Imperial BRC Genomics Facility](https://www.imperial.ac.uk/medicine/research-and-impact/facilities/genomics-facility/contact-us/)
# * __License:__ [Apache License 2.0](https://github.com/imperial-genomics-facility/scirpy-notebook-image/blob/master/LICENSE)
# * __Project name:__ {{ PROJECT_IGF_ID }}
# {% if SAMPLE_IGF_ID %}* __Sample name:__ {{ SAMPLE_IGF_ID }}{% endif %}
#
# ## Table of contents
#
# * [Introduction](#Introduction)
# * [Tools required](#Tools-required)
# * [Input parameters](#Input-parameters)
# * [Loading required libraries](#Loading-required-libraries)
# * [Reading data from Cellranger output](#Reading-data-from-Cellranger-output)
# * [Data processing and QC](#Data-processing-and-QC)
# * [Checking highly variable genes](#Checking-highly-variable-genes)
# * [Computing metrics for cell QC](#Computing-metrics-for-cell-QC)
# * [Plotting MT gene fractions](#Ploting-MT-gene-fractions)
# * [Counting cells per gene](#Counting-cells-per-gene)
# * [Plotting count depth vs MT fraction](#Plotting-count-depth-vs-MT-fraction)
# * [Checking thresholds and filtering data](#Checking-thresholds-and-filtering-data)
# * [Gene expression data analysis](#Gene-expression-data-analysis)
# * [Normalization](#Normalization)
# * [Highly variable genes](#Highly-variable-genes)
# * [Regressing out technical effects](#Regressing-out-technical-effects)
# * [Principal component analysis](#Principal-component-analysis)
# * [Neighborhood graph](#Neighborhood-graph)
# * [Clustering the neighborhood graph](#Clustering-the-neighborhood-graph)
# * [Embed the neighborhood graph using 2D UMAP](#Embed-the-neighborhood-graph-using-2D-UMAP)
# * [Finding marker genes](#Finding-marker-genes)
# * [Cell annotation](#Cell-annotation)
# * [VDJ data analysis](#VDJ-data-analysis)
# * [TCR quality control](#TCR-quality-control)
# * [Compute CDR3 neighborhood graph and define clonotypes](#Compute-CDR3-neighborhood-graph-and-define-clonotypes)
# * [Re-compute CDR3 neighborhood graph and define clonotype clusters](#Re-compute-CDR3-neighborhood-graph-and-define-clonotype-clusters)
# * [Clonotype analysis](#Clonotype-analysis)
# * [Clonal expansion](#Clonal-expansion)
# * [Clonotype abundance](#Clonotype-abundance)
# * [Gene usage](#Gene-usage)
# * [Spectratype plots](#Spectratype-plots)
#
#
# ## Introduction
# This notebook runs single cell TCR/RNA-seq data analysis (for a single sample) using the Scirpy and Scanpy packages. Most of the code and documentation in this notebook has been adapted from the following sources:
#
# * [Analysis of 3k T cells from cancer](https://icbi-lab.github.io/scirpy/tutorials/tutorial_3k_tcr.html)
# * [Scanpy - Preprocessing and clustering 3k PBMCs](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html)
# * [Single-cell-tutorial](https://github.com/theislab/single-cell-tutorial)
#
# ## Tools required
# * [Scirpy](https://icbi-lab.github.io/scirpy/index.html)
# * [Scanpy](https://scanpy-tutorials.readthedocs.io/en/latest)
# * [Plotly](https://plotly.com/python/)
# * [UCSC Cell Browser](https://pypi.org/project/cellbrowser/)
#
# ## Input parameters
cell_ranger_count_path = '{{ CELLRANGER_COUNT_DIR }}'
cell_ranger_vdj_path = '{{ CELLRANGER_VDJ_DIR }}'
cell_marker_list = '{{ CELL_MARKER_LIST }}'
genome_build = '{{ GENOME_BUILD }}'
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## Loading required libraries
#
# We need to load all the required libraries into the environment before we can run any of the analysis steps. We also check the version information for the major packages used in the analysis.
# %matplotlib inline
import os
import numpy as np
import pandas as pd
import scanpy as sc
import scirpy as ir
import seaborn as sns
from cycler import cycler
from matplotlib import pyplot as plt, cm as mpl_cm
from IPython.display import HTML
sns.set_style('darkgrid')
plt.rcParams['figure.dpi'] = 150
sc.logging.print_header()
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## Reading data from Cellranger output
#
# Load the Cellranger VDJ output to Scanpy
adata_tcr = \
ir.io.read_10x_vdj(os.path.join(cell_ranger_vdj_path,"filtered_contig_annotations.csv"))
# Load the Cellranger output to Scanpy
adata = \
sc.read_10x_h5(os.path.join(cell_ranger_count_path,"filtered_feature_bc_matrix.h5"))
# Converting the gene names to unique values
adata.var_names_make_unique()
adata_tcr.shape
# Checking the data dimensions before QC
adata.shape
ir.pp.merge_with_ir(adata, adata_tcr)
# Scanpy stores the count data in an annotated data matrix ($observations$ e.g. cell barcodes × $variables$ e.g. gene names) called [AnnData](https://anndata.readthedocs.io), together with annotations of observations ($obs$), variables ($var$) and unstructured annotations ($uns$)
adata.obs.head()
adata.var.head()
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## Data processing and QC
#
# ### Checking highly variable genes
#
# Computing the fraction of counts assigned to each gene over all cells. The top genes with the highest mean fraction over all cells are plotted as boxplots.
fig,ax = plt.subplots(1,1,figsize=(4,5))
sc.pl.highest_expr_genes(adata, n_top=20,ax=ax)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Computing metrics for cell QC
#
# Listing the Mitochondrial genes detected in the cell population
# +
mt_genes = [gene for gene in adata.var_names if gene.startswith('MT-')]
mito_genes = adata.var_names.str.startswith('MT-')
if len(mt_genes)==0:
print('Looking for mito genes with "mt-" prefix')
mt_genes = [gene for gene in adata.var_names if gene.startswith('mt-')]
mito_genes = adata.var_names.str.startswith('mt-')
if len(mt_genes)==0:
print("No mitochondrial genes found")
else:
print("Mitochondrial genes: count: {0}, lists: {1}".format(len(mt_genes),mt_genes))
# -
# Typical quality measures for assessing the quality of a cell include the following:
# * Number of molecule counts (UMIs or $n\_counts$ )
# * Number of expressed genes ($n\_genes$)
# * Fraction of counts that are mitochondrial ($percent\_mito$)
#
# We calculate the above-mentioned metrics with the following code
adata.obs['mito_counts'] = np.sum(adata[:, mito_genes].X, axis=1).A1
adata.obs['percent_mito'] = \
np.sum(adata[:, mito_genes].X, axis=1).A1 / np.sum(adata.X, axis=1).A1
adata.obs['n_counts'] = adata.X.sum(axis=1).A1
adata.obs['log_counts'] = np.log(adata.obs['n_counts'])
adata.obs['n_genes'] = (adata.X > 0).sum(1)
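# To make these per-cell metrics concrete, here is a pure-Python toy example (illustrative only; the notebook computes the same quantities on the sparse AnnData matrix above):

```python
# Toy count matrix: 2 cells x 3 genes; the last gene is "mitochondrial".
counts = [
    [10, 5, 5],     # cell 1
    [100, 80, 20],  # cell 2
]
mito_mask = [False, False, True]

# n_counts: total molecules per cell; n_genes: genes with nonzero counts;
# percent_mito: fraction of each cell's counts coming from MT genes.
n_counts = [sum(cell) for cell in counts]
n_genes = [sum(1 for c in cell if c > 0) for cell in counts]
percent_mito = [
    sum(c for c, m in zip(cell, mito_mask) if m) / sum(cell)
    for cell in counts
]

print(n_counts)      # [20, 200]
print(percent_mito)  # [0.25, 0.1]
```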
# Checking $obs$ section of the AnnData object again
adata.obs.head()
# Sorting barcodes based on the $percent\_mito$ column
adata.obs.sort_values('percent_mito',ascending=False).head()
# A high fraction of mitochondrial reads being picked up can indicate cell stress, as there is a low proportion of nuclear mRNA in the cell. It should be noted that high mitochondrial RNA fractions can also be biological signals indicating elevated respiration. <p/>
#
# Cell barcodes with high count depth, few detected genes and a high fraction of mitochondrial counts may indicate cells whose cytoplasmic mRNA has leaked out through a broken membrane, so that only the mRNA located in the mitochondria has survived. <p/>
#
# Cells with high UMI counts and many detected genes may represent doublets (this requires further checking).
#
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Plotting MT gene fractions
plt.rcParams['figure.figsize'] = (8,5)
sc.pl.violin(\
adata,
['n_genes', 'n_counts', 'percent_mito'],
jitter=0.4,
log=True,
stripplot=True,
show=False,
multi_panel=False)
# Violin plot (above) shows the computed quality measures of UMI counts, gene counts and fraction of mitochondrial counts.
plt.rcParams['figure.figsize'] = (8,6)
ax = sc.pl.scatter(adata, 'n_counts', 'n_genes', color='percent_mito',show=False)
ax.set_title('Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Number of genes",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(500, 0,1, color='red')
ax.axvline(1000, 0,1, color='red')
# The above scatter plot shows number of genes vs number of counts with $MT$ fraction information. We will be using a cutoff of 1000 counts and 500 genes (<span style="color:red">red lines</span>) to filter out dying cells.
ax = sc.pl.scatter(adata[adata.obs['n_counts']<10000], 'n_counts', 'n_genes', color='percent_mito',show=False, size=40)
ax.set_title('Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Number of genes",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(500, 0,1, color='red')
ax.axvline(1000, 0,1, color='red')
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Counting cells per gene
adata.var['cells_per_gene'] = np.sum(adata.X > 0, 0).T
ax = sns.histplot(adata.var['cells_per_gene'][adata.var['cells_per_gene'] < 100], kde=False, bins=60)
ax.set_xlabel("Number of cells",fontsize=12)
ax.set_ylabel("Frequency",fontsize=12)
ax.set_title('Cells per gene', fontsize=12)
ax.tick_params(labelsize=12)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Plotting count depth vs MT fraction
#
# The scatter plot below shows count depth vs MT fraction; the <span style="color:red">red line</span> marks the default MT-fraction cutoff of 0.2
ax = sc.pl.scatter(adata, x='n_counts', y='percent_mito',show=False)
ax.set_title('Count depth vs Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Fraction mitochondrial counts",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(0.2, 0,1, color='red')
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Checking thresholds and filtering data
#
# Now we filter out the cells that don't meet our thresholds.
# +
min_counts_threshold = 1000
max_counts_threshold = 50000
min_gene_counts_threshold = 500
max_mito_pct_threshold = 0.2
if adata[adata.obs['n_counts'] > min_counts_threshold].n_obs < 1000:
min_counts_threshold = 100
if adata[adata.obs['n_counts'] < max_counts_threshold].n_obs < 1000:
max_counts_threshold = 80000
if adata[adata.obs['n_genes'] > min_gene_counts_threshold].n_obs < 1000:
min_gene_counts_threshold = 100
if adata[adata.obs['percent_mito'] < max_mito_pct_threshold].n_obs < 1000:
max_mito_pct_threshold = 0.5
# -
if min_counts_threshold != 1000 or min_gene_counts_threshold!= 500:
print("Resetting the thresholds")
ax = sc.pl.scatter(adata, 'n_counts', 'n_genes', color='percent_mito',show=False)
ax.set_title('Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Number of genes",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(min_gene_counts_threshold, 0,1, color='red')
ax.axvline(min_counts_threshold, 0,1, color='red')
# +
print('Total number of cells: {0}'.format(adata.n_obs))
print('Filtering dataset using thresholds')
sc.pp.filter_cells(adata, min_counts = min_counts_threshold)
print('Number of cells after min count ({0}) filter: {1}'.format(min_counts_threshold,adata.n_obs))
sc.pp.filter_cells(adata, max_counts = max_counts_threshold)
print('Number of cells after max count ({0}) filter: {1}'.format(max_counts_threshold,adata.n_obs))
sc.pp.filter_cells(adata, min_genes = min_gene_counts_threshold)
print('Number of cells after gene ({0}) filter: {1}'.format(min_gene_counts_threshold,adata.n_obs))
adata = adata[adata.obs['percent_mito'] < max_mito_pct_threshold]
print('Number of cells after MT fraction ({0}) filter: {1}'.format(max_mito_pct_threshold,adata.n_obs))
print('Total number of cells after filtering: {0}'.format(adata.n_obs))
# -
# We also filter out any genes that are detected in fewer than 20 cells. This operation reduces the dimensions of the matrix by removing zero- and low-count genes which are not really informative.
# +
min_cell_per_gene_threshold = 20
print('Total number of genes: {0}'.format(adata.n_vars))
sc.pp.filter_genes(adata, min_cells=min_cell_per_gene_threshold)
print('Number of genes after cell filter: {0}'.format(adata.n_vars))
# -
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## Gene expression data analysis
#
# ### Normalization
#
# We use a simple total-count normalization (library-size correction) to scale the data matrix $X$ to 10,000 reads per cell, so that counts become comparable among cells.
sc.pp.normalize_total(adata, target_sum=1e4)
# Then logarithmize the data matrix
sc.pp.log1p(adata)
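# The two steps above amount to: scale each cell to 10,000 total counts, then apply log(1 + x). A minimal sketch of the arithmetic on one toy cell:

```python
import math

cell = [90, 10]            # raw counts for one cell
target_sum = 10_000

# Scale the cell so its counts sum to target_sum, then logarithmize.
total = sum(cell)
normalized = [c * target_sum / total for c in cell]
logged = [math.log1p(v) for v in normalized]

print(normalized)  # [9000.0, 1000.0]
```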
# Copying the normalized and logarithmized raw gene expression data to the .raw attribute of the AnnData object for later use.
adata.raw = adata
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Highly variable genes
#
# The following code blocks identify highly variable genes (HVGs) to further reduce the dimensionality of the dataset and to include only the most informative genes. HVGs will be used for clustering, trajectory inference, and dimensionality reduction/visualization.
#
# We use the 'seurat' flavor of HVG detection, then run the following code to do the actual filtering of the data. The plots show how the data were normalized to select highly variable genes irrespective of their mean expression. This is achieved by using the index of dispersion, which divides variance by mean expression, then binning the data by mean expression and selecting the most variable genes within each bin.
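# The index of dispersion underlying this selection is just variance divided by mean. A pure-Python toy (illustrative only, not the actual 'seurat' implementation):

```python
def dispersion(values):
    """Index of dispersion: population variance divided by mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return var / mean

# Two toy genes with the same mean expression but different variability.
stable_gene = [10, 10, 10, 10]
variable_gene = [0, 20, 0, 20]

print(dispersion(stable_gene))    # 0.0
print(dispersion(variable_gene))  # 10.0
```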
plt.rcParams['figure.figsize'] = (7,5)
sc.pp.highly_variable_genes(adata, flavor='seurat', min_mean=0.0125, max_mean=3, min_disp=0.5)
seurat_hvg = np.sum(adata.var['highly_variable'])
print("Number of HVGs: {0}".format(seurat_hvg))
sc.pl.highly_variable_genes(adata)
adata = adata[:, adata.var.highly_variable]
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Regressing out technical effects
#
# Normalization scales count data to make gene counts comparable between cells, but the data still contain unwanted variability. One of the most prominent technical covariates in single-cell data is count depth. Regressing out the effects of total counts per cell and the percentage of mitochondrial genes expressed can improve the performance of trajectory inference algorithms.
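# "Regressing out" a covariate means fitting a linear model of expression on that covariate and keeping only the residuals. A one-covariate sketch (illustrative toy; `sc.pp.regress_out` fits per-gene models on the real data):

```python
def regress_out(y, x):
    """Return residuals of a simple least-squares fit of y on x."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Toy gene whose expression is driven entirely by count depth:
depth = [1.0, 2.0, 3.0, 4.0]
expression = [2.0, 4.0, 6.0, 8.0]
residuals = regress_out(expression, depth)
print(residuals)  # all (near) zero: the depth effect is fully removed
```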
sc.pp.regress_out(adata, ['n_counts', 'percent_mito'])
# Scale each gene to unit variance. Clip values exceeding standard deviation 10.
sc.pp.scale(adata, max_value=10)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Principal component analysis
#
# Reduce the dimensionality of the data by running principal component analysis (PCA), which reveals the main axes of variation and denoises the data.
sc.tl.pca(adata, svd_solver='arpack')
plt.rcParams['figure.figsize']=(8,5)
sc.pl.pca(adata,color=[adata.var_names[0]])
# Let us inspect the contribution of single PCs to the total variance in the data. This gives us information about how many PCs we should consider in order to compute the neighborhood relations of cells.
#
# The first principal component captures variation in count depth between cells and is only marginally informative.
plt.rcParams['figure.figsize'] = (8,5)
sc.pl.pca_variance_ratio(adata, log=True)
# Let us compute the neighborhood graph of cells using the PCA representation of the data matrix.
#
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Neighborhood graph
# Computing the neighborhood graph of cells using the PCA representation of the data matrix.
sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
# #### Clustering the neighborhood graph
#
# Scanpy documentation recommends the Leiden graph-clustering method (community detection based on optimizing modularity) by Traag *et al.* (2018). Note that Leiden clustering (using `resolution=0.5`) directly clusters the neighborhood graph of cells, which we have already computed in the previous section.
sc.tl.leiden(adata,resolution=0.5)
cluster_length = len(adata.obs['leiden'].value_counts().to_dict().keys())
if cluster_length < 20:
cluster_colors = sns.color_palette('Spectral_r',cluster_length,as_cmap=False).as_hex()
else:
cluster_colors = sc.pl.palettes.default_102
# ### Embed the neighborhood graph using 2D UMAP
#
# Scanpy documentation suggests embedding the graph in 2 dimensions using UMAP (McInnes et al., 2018), see below. It is potentially more faithful to the global connectivity of the manifold than tSNE, i.e., it better preserves trajectories.
sc.tl.umap(adata,n_components=2)
# +
gene_name = ['CST3']
if 'CST3' not in adata.var_names:
gene_name = [adata.var_names[0]]
plt.rcParams['figure.figsize']=(8,5)
sc.pl.umap(adata,color=gene_name,size=40)
# -
# You can replace the `['CST3']` in the previous cell with your preferred list of genes.
#
# e.g. `['LTB','IL32','CD3D']`
#
# or perhaps with a Python one-liner to extract gene names with a specific prefix
#
# e.g.
#
# ```python
# sc.pl.umap(adata, color=[gene for gene in adata.var_names if gene.startswith('CD')], ncols=2)
# ```
#
# Plot the scaled and corrected gene expression with `use_raw=False` and color cells by their Leiden clusters.
sc.pl.umap(adata,color='leiden',use_raw=False,palette=cluster_colors,size=40)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Finding marker genes
#
# Let us compute a ranking of the highly differential genes in each cluster. By default, the `.raw` attribute of AnnData is used if it has been initialized. We use the `Wilcoxon rank-sum test` (Soneson & Robinson, 2018) here.
plt.rcParams['figure.figsize'] = (4,4)
sc.tl.rank_genes_groups(adata, 'leiden', method='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20, sharey=False, ncols=2)
# Show the 10 top-ranked genes per cluster 0, 1, …, N in a dataframe
HTML(pd.DataFrame(adata.uns['rank_genes_groups']['names']).head(10).to_html())
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Cell annotation
#
# We downloaded the cell type gene expression markers from [PanglaoDB](https://panglaodb.se/markers.html?cell_type=%27all_cells%27) and generate a cell annotation plot below
df = pd.read_csv(cell_marker_list,delimiter='\t')
if genome_build.upper() in ['HG38','HG38_HIV']:
print('Selecting human specific genes')
markers_df = df[(df['species'] == 'Hs') | (df['species'] == 'Mm Hs')]
elif genome_build.upper() =='MM10':
print('Selecting mouse specific genes')
markers_df = df[(df['species'] == 'Mm') | (df['species'] == 'Mm Hs')]
else:
raise ValueError('Species {0} not supported'.format(genome_build))
markers_df.shape
# Collect the unique cell type list
if 'organ' in markers_df.columns and \
len(markers_df[(markers_df['organ']=='Immune system')|(markers_df['organ']=='Blood')].index) > 100:
cell_types = list(markers_df[(markers_df['organ']=='Immune system')|(markers_df['organ']=='Blood')]['cell type'].unique())
else:
cell_types = list(markers_df['cell type'].unique())
# Generate cell annotation using the marker gene list
markers_dict = {}
for ctype in cell_types:
df = markers_df[markers_df['cell type'] == ctype]
markers_dict[ctype] = df['official gene symbol'].to_list()
cell_annotation = sc.tl.marker_gene_overlap(adata, markers_dict, key='rank_genes_groups',top_n_markers=20)
cell_annotation_norm = sc.tl.marker_gene_overlap(adata, markers_dict, key='rank_genes_groups', normalize='reference',top_n_markers=20)
plt.rcParams['figure.figsize'] = (10,10)
sns.heatmap(cell_annotation_norm.loc[cell_annotation_norm.sum(axis=1) > 0.05,], cbar=False, annot=True)
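# Conceptually, the marker-gene overlap above counts how many of each cluster's top-ranked genes appear in each cell-type marker list. A minimal sketch with hypothetical gene sets (illustrative only, not the `sc.tl.marker_gene_overlap` implementation):

```python
# Hypothetical top-ranked genes for one cluster, and two marker lists.
cluster_top_genes = {"CD3D", "CD3E", "IL7R", "LTB", "CCR7"}
markers = {
    "T cells": {"CD3D", "CD3E", "CD2"},
    "B cells": {"CD79A", "MS4A1"},
}

# Overlap score per cell type: size of the set intersection.
overlap = {ctype: len(cluster_top_genes & genes) for ctype, genes in markers.items()}
print(overlap)  # {'T cells': 2, 'B cells': 0}
```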
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## VDJ data analysis
#
# ### TCR quality control
#
# While most T cells have exactly one pair of α and β chains, up to one third of T cells can have dual TCRs, i.e. two pairs of receptor chains originating from different alleles.
#
# Using the `scirpy.tl.chain_qc()` function, we can add a summary about the T cell receptor compositions to `adata.obs`.
#
# __Chain pairing:__
# * Orphan chain refers to cells that have either a single alpha or beta receptor chain.
# * Extra chain refers to cells that have a full alpha/beta receptor pair, and an additional chain.
# * Multichain refers to cells with more than two receptor pairs detected. These cells are likely doublets.
#
# **receptor type and receptor subtype**
#
# - `receptor_type` refers to a coarse classification into `BCR` and `TCR`. Cells with both `BCR` and `TCR` chains
#   are labelled as `ambiguous`.
# - `receptor_subtype` refers to a more specific classification into α/β, γ/δ, IG-λ, and IG-κ chain configurations.
#
# For more details, see `scirpy.tl.chain_qc`.
ir.tl.chain_qc(adata)
# Following plot shows the receptor types
ir.pl.group_abundance(adata, groupby="receptor_type", target_col="leiden",fig_kws={'figsize': (8, 6), 'dpi': 150})
# Use `groupby="receptor_subtype"` to check receptor subtypes, e.g. if the dataset contains only α/β T-cell receptor
ir.pl.group_abundance(adata, groupby="receptor_subtype", target_col="leiden",fig_kws={'figsize': (8, 6), 'dpi': 150})
# Checking for chain pairing
ir.pl.group_abundance(adata, groupby="chain_pairing", target_col="leiden",fig_kws={'figsize': (8, 6), 'dpi': 150})
# Check the fraction of cells with more than one pair of TCRs:
print(
"Fraction of cells with more than one pair of TCRs: {0}".format(
np.sum(
adata.obs["chain_pairing"].isin(
["extra VJ", "extra VDJ", "two full chains"]
)
)
/ adata.n_obs
)
)
# Visualize the _[Multichain](https://icbi-lab.github.io/scirpy/glossary.html#term-Multichain-cell)_ cells on the UMAP plot and exclude them from downstream analysis:
adata.obs['multi_chain']
plt.rcParams['figure.figsize'] = (8,6)
sc.pl.umap(adata, color="chain_pairing", groups="multichain", size=40)
sc.pl.umap(adata, color="chain_pairing")
adata = adata[adata.obs["chain_pairing"] != "multichain", :].copy()
# Similarly, we can use the `chain_pairing` information to exclude all cells that don't have at least one full pair of receptor sequences:
adata = adata[~adata.obs["chain_pairing"].isin(["orphan VDJ", "orphan VJ"]), :].copy()
# Finally, we re-plot the UMAP to ensure that the filtering worked as expected:
sc.pl.umap(adata, color='leiden',use_raw=False,size=40,palette=sc.pl.palettes.default_102)
plt.rcParams['figure.figsize'] = (7,5)
plots_list = ["has_ir"]
if "CD3E" in adata.var_names:
plots_list.append("CD3E")
sc.pl.umap(adata, color=plots_list,ncols=1,size=40)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## Define clonotypes and clonotype clusters
#
# *Scirpy* implements a network-based approach for [clonotypes](https://icbi-lab.github.io/scirpy/glossary.html#term-Clonotype) definition. The steps to create and visualize the clonotype-network are analogous to the construction of a neighborhood graph from transcriptomics data with *scanpy*.
#
# ### Compute CDR3 neighborhood graph and define clonotypes
#
# `scirpy.pp.ir_dist()` computes distances between [CDR3](https://icbi-lab.github.io/scirpy/glossary.html#term-CDR) nucleotide (nt) or amino acid (aa) sequences, either based on sequence identity or similarity. It creates two distance matrices: one for all unique [VJ](https://icbi-lab.github.io/scirpy/glossary.html#term-Chain-locus) sequences and one for all unique [VDJ](https://icbi-lab.github.io/scirpy/glossary.html#term-Chain-locus) sequences. The distance matrices are added to adata.uns.
#
# The function `scirpy.tl.define_clonotypes()` matches cells based on the distances of their VJ and VDJ CDR3 sequences and the values of the `dual_ir` and `receptor_arms` parameters. Finally, it detects connected modules in the graph and annotates them as clonotypes. This adds `clone_id` and `clone_id_size` columns to `adata.obs`.
#
# The `dual_ir` parameter defines how scirpy handles cells with [more than one pair of receptors](https://icbi-lab.github.io/scirpy/glossary.html#term-Dual-IR). The default value is `any`, which implies that cells matching on any of their primary or secondary receptor chains will be considered to be of the same clonotype.
#
# Here, we define [clonotypes](https://icbi-lab.github.io/scirpy/glossary.html#term-Clonotype) based on nt-sequence identity. In a later step, we will define [clonotype clusters](https://icbi-lab.github.io/scirpy/glossary.html#term-Clonotype-cluster) based on amino-acid similarity.
# using default parameters, `ir_dist` will compute nucleotide sequence identity
ir.pp.ir_dist(adata)
ir.tl.define_clonotypes(adata, receptor_arms="all", dual_ir="primary_only")
# To visualize the network we first call `scirpy.tl.clonotype_network` to compute the layout.
# We can then visualize it using `scirpy.pl.clonotype_network`. We recommend setting the
# `min_cells` parameter to `>=2`, to prevent the singleton clonotypes from cluttering the network.
ir.tl.clonotype_network(adata, min_cells=2)
# The resulting plot is a network, where each dot represents cells with identical receptor configurations. As we define clonotypes as cells with identical CDR3-sequences, each dot is also a clonotype. For each clonotype, the numeric clonotype id is shown in the graph. The size of each dot refers to the number of cells with the same receptor configurations. Categorical variables can be visualized as pie charts.
ir.pl.clonotype_network(
adata, color="leiden", base_size=30, label_fontsize=7, panel_size=(7, 7)
)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Re-compute CDR3 neighborhood graph and define clonotype clusters
#
# We can now re-compute the neighborhood graph based on amino-acid sequence similarity
# and define [clonotype clusters](https://icbi-lab.github.io/scirpy/glossary.html#term-Clonotype-cluster).
#
# To this end, we set `metric="alignment"` and specify a `cutoff` parameter.
# The distance is based on the [BLOSUM62](https://en.wikipedia.org/wiki/BLOSUM) matrix.
# For instance, a distance of `10` is equivalent to 2 `R`s mutating into `N`.
# This approach was initially proposed as *TCRdist* by Dash et al.
#
# All cells with a distance between their CDR3 sequences lower than `cutoff` will be connected in the network.
ir.pp.ir_dist(
adata,
metric="alignment",
sequence="aa",
cutoff=15
)
ir.tl.define_clonotype_clusters(
adata, sequence="aa", metric="alignment", receptor_arms="all", dual_ir="any"
)
ir.tl.clonotype_network(adata, min_cells=3, sequence="aa", metric="alignment")
# Compared to the previous plot, we observe several connected dots. Each fully connected subnetwork represents a "clonotype cluster"; each dot still represents cells with identical receptor configurations.
ir.pl.clonotype_network(
adata, color="leiden", label_fontsize=9, panel_size=(7, 7), base_size=20
)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Clonotype analysis
#
# #### Clonal expansion
#
# Let's visualize the number of expanded clonotypes (i.e. clonotypes consisting of more than one cell) by cell-type. The first option is to add a column with the `scirpy.tl.clonal_expansion` to `adata.obs` and overlay it on the UMAP plot.
#
# `clonal_expansion` refers to expansion categories, i.e. singleton clonotypes, clonotypes with 2 cells, and clonotypes with more than 2 cells.
# The `clonotype_size` refers to the absolute number of cells in a clonotype.
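# As a toy illustration of this binning (assumed categories for illustration; see `scirpy.tl.clonal_expansion` for the actual defaults):

```python
def expansion_category(size, clip_at=3):
    """Bin a clonotype size into an expansion category, clipping at clip_at."""
    return f">= {clip_at}" if size >= clip_at else str(size)

# Hypothetical clonotype sizes and their expansion categories.
clone_sizes = [1, 1, 2, 5, 12]
categories = [expansion_category(s) for s in clone_sizes]
print(categories)  # ['1', '1', '2', '>= 3', '>= 3']
```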
ir.tl.clonal_expansion(adata)
plt.rcParams['figure.figsize'] = (7,5)
sc.pl.umap(adata, color=["clonal_expansion", "clone_id_size"],ncols=1,size=40)
# The second option is to show the number of cells belonging to an expanded clonotype per category in a stacked bar plot, using the `scirpy.pl.clonal_expansion()` plotting function.
ir.pl.clonal_expansion(adata, groupby="leiden", clip_at=4, normalize=False,fig_kws={'figsize': (8, 6), 'dpi': 150})
# The same plot, normalized to cluster size. Clonal expansion is a sign of positive selection for a certain, reactive T-cell clone. It, therefore, makes sense that CD8+ effector T-cells have the largest fraction of expanded clonotypes.
ir.pl.clonal_expansion(adata, groupby="leiden", clip_at=4, normalize=True,fig_kws={'figsize': (8, 6), 'dpi': 150})
# Checking the alpha diversity of clonotypes per cluster using `scirpy.pl.alpha_diversity()`.
ir.pl.alpha_diversity(adata, groupby="leiden",fig_kws={'figsize': (8, 6), 'dpi': 150})
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# #### Clonotype abundance
#
# The function `scirpy.pl.group_abundance()` allows us to create bar charts for arbitrary categorical variables from `obs`. Here, we use it to show the distribution of the 20 largest clonotypes across the cell-type clusters.
ir.pl.group_abundance(adata, groupby="clone_id", target_col="leiden", max_cols=20,fig_kws={'figsize': (8, 6), 'dpi': 150})
# It might be beneficial to normalize the counts to the number of cells per sample to mitigate biases due to different sample sizes:
ir.pl.group_abundance(
    adata, groupby="clone_id", target_col="leiden", max_cols=20, normalize=True, fig_kws={'figsize': (8, 6), 'dpi': 150}
)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Gene usage
#
# `scirpy.pl.group_abundance()` can also give us some information on VDJ usage. We can choose any of the `{TRA,TRB}_{1,2}_{v,d,j,c}_call` columns to make a stacked bar plot. We use `max_cols` to limit the plot to the most abundant V genes.
#
# Plotting `group_abundance` for all the `IR_V*_call` columns that are not entirely missing
ir_v_genes_list = [
i for i in list(adata.obs.columns)
if i.startswith('IR_V') and i.endswith('_call')]
filtered_ir_v_genes_list = list()
for i in ir_v_genes_list:
if adata.obs[i].shape[0] > adata.obs[i].isna().sum():
filtered_ir_v_genes_list.append(i)
for i in range(len(filtered_ir_v_genes_list)):
ir.pl.group_abundance(
adata,
groupby=filtered_ir_v_genes_list[i],
target_col="leiden",
normalize=True,
max_cols=20,
fig_kws={'figsize': (8, 6), 'dpi': 150})
# The exact combinations of VDJ genes can be visualized as a Sankey-plot using `scirpy.pl.vdj_usage()`.
ir.pl.vdj_usage(adata, full_combination=False, max_segments=None, max_ribbons=20,fig_kws={'figsize': (7, 6), 'dpi': 150})
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ### Spectratype plots
#
# `spectratype()` plots give us information about the length distribution of CDR3 regions.
ir.pl.spectratype(adata, color="leiden", viztype="bar", fig_kws={'figsize': (8, 6), 'dpi': 150})
# The same chart visualized as a 'ridge' plot:
ir.pl.spectratype(
adata,
color="leiden",
viztype="curve",
curve_layout="shifted",
fig_kws={'figsize': (8, 6), 'dpi': 150},
kde_kws={"kde_norm": False},
)
# <div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#
# ## References
# * [Analysis of 3k T cells from cancer](https://icbi-lab.github.io/scirpy/tutorials/tutorial_3k_tcr.html)
# * [Scanpy - Preprocessing and clustering 3k PBMCs](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html)
# * [single-cell-tutorial](https://github.com/theislab/single-cell-tutorial)
#
# ## Acknowledgement
# The Imperial BRC Genomics Facility is supported by NIHR funding to the Imperial Biomedical Research Centre.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# for use in tutorial and development; do not include this `sys.path` change in production:
import sys ; sys.path.insert(0, "../")
# # Graph algorithms with `networkx`
#
# Once we have linked data represented as a KG, we can begin to use *graph algorithms* and *network analysis* on the data.
# Perhaps the most famous of these is [PageRank](http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf) which helped launch Google, also known as a stochastic variant of *eigenvector centrality*.
#
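# The idea behind PageRank fits in a few lines. Below is a pure-Python power-iteration sketch on a toy directed graph (illustrative only, not how any library implements it):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration on a dict mapping node -> list of outbound links."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        # Everyone gets the teleport share, then in-links add the rest.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outbound in links.items():
            if outbound:
                share = damping * rank[node] / len(outbound)
                for target in outbound:
                    new_rank[target] += share
        rank = new_rank
    return rank

# "c" receives links from both "a" and "b", so it should rank highest.
toy = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))  # prints: c
```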
# We'll use the [`networkx`](https://networkx.org/) library to run graph algorithms, since `rdflib` lacks support for this.
# Note that in `networkx` an edge connects two nodes, where both nodes and edges may have properties.
# This is effectively a [*property graph*](https://en.wikipedia.org/wiki/Graph_database#Labeled-property_graph).
# In contrast, an RDF graph in `rdflib` allows for multiple relations (predicates) between RDF subjects and objects, although there are no values represented.
# Also, `networkx` requires its own graph representation in memory.
#
# Based on a branch of mathematics related to linear algebra called [*algebraic graph theory*](https://en.wikipedia.org/wiki/Algebraic_graph_theory), it's possible to convert between a simplified graph (such as `networkx` requires) and its matrix representation.
# Many of the popular graph algorithms can be optimized in terms of matrix operations, often leading to orders-of-magnitude performance increases.
# In contrast, the more general form of mathematics for representing complex graphs and networks involves using [*tensors*](https://en.wikipedia.org/wiki/Tensor) instead of matrices.
# For example, you may have heard the word *tensor* used in association with neural networks.
#
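# The graph-to-matrix conversion is simple to sketch: for a directed graph with n nodes, entry (i, j) of the adjacency matrix is 1 when an edge runs from node i to node j. A pure-Python toy with hypothetical node names (`networkx` provides `nx.to_numpy_array` for the real conversion):

```python
nodes = ["recipe1", "butter", "flour"]
edges = [("recipe1", "butter"), ("recipe1", "flour")]

# Map each node to a row/column index, then fill in the edges.
index = {node: i for i, node in enumerate(nodes)}
adj = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    adj[index[src]][index[dst]] = 1

print(adj)  # [[0, 1, 1], [0, 0, 0], [0, 0, 0]]
```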
# See also:
#
# * <https://towardsdatascience.com/10-graph-algorithms-visually-explained-e57faa1336f3>
# * <https://web.stanford.edu/class/cs97si/06-basic-graph-algorithms.pdf>
# * <https://networkx.org/documentation/stable/reference/algorithms/index.html>
# First, let's load our recipe KG:
# +
import kglab
namespaces = {
"nom": "http://example.org/#",
"wtm": "http://purl.org/heals/food/",
"ind": "http://purl.org/heals/ingredient/",
"skos": "http://www.w3.org/2004/02/skos/core#",
}
kg = kglab.KnowledgeGraph(
name = "A recipe KG example based on Food.com",
base_uri = "https://www.food.com/recipe/",
namespaces = namespaces,
)
kg.load_rdf("../dat/recipes.ttl")
# -
# The `kglab.SubgraphMatrix` class transforms graph data from its symbolic representation in an RDF graph into a numerical representation which is an *adjacency matrix*.
# Most graph algorithm libraries such as `NetworkX` use an *adjacency matrix* representation internally.
# Later we'll use the inverse transform in the subgraph to convert graph algorithm results back into their symbolic representation.
# First, we'll define a SPARQL query to use for building a *subgraph* of recipe URLs and their related ingredients.
# Note the bindings `subject` and `object`, for *subject* and *object* respectively.
# The `SubgraphMatrix` class expects these in the results of a SPARQL query used to generate a representation for `NetworkX`.
sparql = """
SELECT ?subject ?object
WHERE {
?subject rdf:type wtm:Recipe .
?subject wtm:hasIngredient ?object .
}
"""
# Now we'll use the result set from this query to build a `NetworkX` subgraph.
# Formally speaking, all RDF graphs are *directed graphs* since the semantics of RDF connect a *subject* through a *predicate* to an *object*.
# We use a [`DiGraph`](https://networkx.org/documentation/stable/reference/classes/digraph.html?highlight=digraph#networkx.DiGraph) for a *directed graph*.
#
# Note: the `bipartite` parameter identifies the subject and object nodes to be within one of two *bipartite* sets, which we'll describe in more detail below.
# +
import networkx as nx
subgraph = kglab.SubgraphMatrix(kg, sparql)
nx_graph = subgraph.build_nx_graph(nx.DiGraph(), bipartite=True)
# -
# ## Graph measures
# One simple measure in `networkx` is to use the [`density()`](https://networkx.org/documentation/stable/reference/generated/networkx.classes.function.density.html) method to calculate [*graph density*](http://olizardo.bol.ucla.edu/classes/soc-111/textbook/_book/6-9-graph-density.html).
# This is a ratio of the edges in the graph to the maximum possible number of edges it could have.
# A *dense graph* will tend toward a density measure of the `1.0` upper bound, while a *sparse graph* will tend toward the `0.0` lower bound.
nx.density(nx_graph)
# While interpretations of the *density* metric depend largely on context, here we could say that the recipe-ingredient relations in our recipe KG are relatively sparse.
# To perform some kinds of graph analysis and traversals, you may need to convert the *directed graph* to an *undirected graph*.
# For example, the following code uses [`bfs_edges()`](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.traversal.breadth_first_search.bfs_edges.html#networkx.algorithms.traversal.breadth_first_search.bfs_edges) to perform a *breadth-first search* (BFS) beginning at a starting node `source` and searching out to a maximum of `depth_limit` hops as a boundary.
#
# We'll use `butter` as the starting node, which is a common ingredient and therefore should have many neighbors. In other words, collect the most immediate "neighbors" for `butter` from the graph:
# +
butter_node = kg.get_ns("ind").Butter
butter_id = subgraph.transform(butter_node)
edges = list(nx.bfs_edges(nx_graph, source=butter_id, depth_limit=2))
print("num edges:", len(edges))
# -
# Zero. No edges were returned, and thus no neighbors were identified by BFS.
#
# So far within the recipe representation in our KG, the `butter` ingredient is a *terminal node*, i.e., other nodes connect to it as an *object*.
# However, it never occurs as the *subject* of an RDF statement, so in a *directed graph* `butter` does not connect out to any other nodes: it becomes a dead end.
#
# Now let's use the `to_undirected()` method to convert to an *undirected graph* first, then run the same BFS again:
for edge in nx.bfs_edges(nx_graph.to_undirected(), source=butter_id, depth_limit=2):
s_id, o_id = edge
s_node = subgraph.inverse_transform(s_id)
s_label = subgraph.n3fy(s_node)
o_node = subgraph.inverse_transform(o_id)
o_label = subgraph.n3fy(o_node)
# for brevity's sake, only show non-butter nodes
if s_node != butter_node:
print(s_label, o_label)
# Among the closest neighbors for `butter` we find `salt`, `milk`, `flour`, `sugar`, `honey`, `vanilla`, etc.
# BFS is a relatively quick and useful approach for building discovery tools and recommender systems to explore neighborhoods of a graph.
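# A minimal pure-Python sketch of what `bfs_edges()` does under the hood, using a hypothetical toy adjacency dict in place of the recipe subgraph (the node names here are made up for illustration):

```python
from collections import deque

def bfs_edges(adj, source, depth_limit):
    """Yield (parent, child) edges in breadth-first order, up to depth_limit hops."""
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == depth_limit:
            continue  # boundary reached; do not expand further
        for neighbor in adj[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
                yield (node, neighbor)

# A toy undirected bipartite graph: ingredients link to recipes and vice versa.
adj = {
    "butter": ["biscuits", "cake"],
    "biscuits": ["butter", "flour"],
    "cake": ["butter", "sugar"],
    "flour": ["biscuits"],
    "sugar": ["cake"],
}
edges = list(bfs_edges(adj, "butter", depth_limit=2))
print(edges)  # [('butter', 'biscuits'), ('butter', 'cake'), ('biscuits', 'flour'), ('cake', 'sugar')]
```

# Collecting the `child` side of each yielded edge gives exactly the kind of neighborhood listing a simple recommender would use.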
# Given how we've built this subgraph, it has two distinct and independent sets of nodes, namely the *recipes* and the *ingredients*.
# In other words, recipes only link to ingredients, and ingredients only link to recipes.
# This structure fits the formal definitions of a [*bipartite graph*](https://en.wikipedia.org/wiki/Bipartite_graph), which is important for working AI applications such as recommender systems, search engines, etc. Let's decompose our subgraph into its two sets of nodes:
# +
from networkx.algorithms import bipartite
rec_nodes, ind_nodes = bipartite.sets(nx_graph)
print("recipes\n", rec_nodes, "\n")
print("ingredients\n", ind_nodes)
# -
# If you remove the `if` statement from the BFS example above that filters output, you may notice some "shapes" or *topology* evident in the full listing of neighbors. In other words, BFS search expands out as `butter` connects to a set of recipes, then those recipes connect to other ingredients, and in turn those ingredients connect to an even broader set of other recipes.
#
# We can measure some of the simpler, more common topologies in the graph by using the [`triadic_census()`](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.triadic_census.html#networkx.algorithms.triads.triadic_census) method, which identifies and counts the occurrences of *dyads* and *triads*:
for triad, occurrences in nx.triadic_census(nx_graph).items():
if occurrences > 0:
print("triad {:>4}: {:7d} occurrences".format(triad, occurrences))
# See "Figure 1" in [[batageljm01]](https://derwen.ai/docs/kgl/biblio/#batageljm01) for discussion about how to decode this output from `triadic_census()` based on the 16 possible forms of triads.
# In this case, we see many occurrences of `021D` and `021U` triads, which is expected in a bipartite graph.
# ## Centrality and connectedness
# Let's make good use of those bipartite sets for filtering results from other algorithms.
#
# Some of the ingredients are used more frequently than others.
# In very simple graphs we could use statistical frequency counts to measure that, although a more general-purpose approach is to measure the *degree centrality*, i.e., "How connected is each node?"
# This is similar to calculating PageRank:
# +
results = nx.degree_centrality(nx_graph)
ind_rank = {}
for node_id, rank in sorted(results.items(), key=lambda item: item[1], reverse=True):
if node_id in ind_nodes:
ind_rank[node_id] = rank
node = subgraph.inverse_transform(node_id)
label = subgraph.n3fy(node)
print("{:6.3f} {}".format(rank, label))
# -
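# Degree centrality is simply a node's degree divided by `n - 1`, the maximum number of connections a node could have. A quick stand-alone sketch of that calculation on a hypothetical toy graph (node names invented for illustration):

```python
# Toy directed graph as an adjacency dict: recipes point to ingredients.
adj = {
    "recipe1": ["flour", "salt"],
    "recipe2": ["flour", "sugar"],
    "flour": [],
    "salt": [],
    "sugar": [],
}

n = len(adj)
degree = {node: 0 for node in adj}
for node, neighbors in adj.items():
    degree[node] += len(neighbors)      # out-degree
    for neighbor in neighbors:
        degree[neighbor] += 1           # in-degree

# Normalize by the maximum possible number of connections, n - 1.
centrality = {node: d / (n - 1) for node, d in degree.items()}
print(centrality["flour"])  # 2 / 4 = 0.5
```

# Here `flour` appears in two recipes, so its centrality of 0.5 outranks single-use ingredients like `salt` (0.25), mirroring the ranking above.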
# We can plot the graph directly from `networkx` using [`matplotlib`](https://matplotlib.org/):
# +
import matplotlib.pyplot as plt
color = [ "red" if n in ind_nodes else "blue" for n in nx_graph.nodes()]
nx.draw(nx_graph, node_color=color, edge_color="gray", with_labels=True)
plt.show()
# -
# Next, let's determine the [*k-cores*](https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)#k-Cores) which are "maximal connected subgraphs" such that each node has `k` connections:
core_g = nx.k_core(nx_graph)
core_g.nodes()
# Now let's plot those k-core nodes in a simplified visualization, which helps reveal the interconnections:
# +
color = [ "red" if n in ind_nodes else "blue" for n in core_g ]
size = [ ind_rank[n] * 800 if n in ind_nodes else 1 for n in core_g ]
nx.draw(core_g, node_color=color, node_size=size, edge_color="gray", with_labels=True)
plt.show()
# -
for node_id, rank in sorted(ind_rank.items(), key=lambda item: item[1], reverse=True):
if node_id in core_g:
node = subgraph.inverse_transform(node_id)
label = subgraph.n3fy(node)
print("{:3} {:6.3f} {}".format(node_id, rank, label))
# In other words, the most popular ingredients across recipes in our graph tend to be `flour`, `eggs`, `salt`, `butter`, `milk`, and `sugar`, though not so much `water` or `vanilla`.
#
# We can show a similar ranking with PageRank, although with different weights:
# +
page_rank = nx.pagerank(nx_graph)
for node_id, rank in sorted(page_rank.items(), key=lambda item: item[1], reverse=True):
if node_id in core_g and node_id in ind_nodes:
node = subgraph.inverse_transform(node_id)
label = subgraph.n3fy(node)
        print("{:3} {:6.3f} {:6.3f} {}".format(node_id, ind_rank[node_id], rank, label))
# -
# ---
#
# ## Exercises
# **Exercise 1:**
#
# Find the `node_id` number for the node that represents the `"black pepper"` ingredient.
#
# Then use the [`bfs_edges()`](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.traversal.breadth_first_search.bfs_edges.html#networkx.algorithms.traversal.breadth_first_search.bfs_edges) function with its source set to `node_id` to perform a *breadth first search* traversal of the graph to depth `2` to find the closest neighbors and print their labels.
# **Exercise 2:**
#
# Use the [`dfs_edges()`](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.traversal.depth_first_search.dfs_edges.html#networkx.algorithms.traversal.depth_first_search.dfs_edges) function to perform a *depth first search* with the same parameters.
# **Exercise 3:**
#
# Find the [*shortest path*](https://networkx.org/documentation/stable/reference/algorithms/shortest_paths.html) that connects between the node for "black pepper" and the node for "honey", then print the labels for each node in the path.
| examples/ex6_0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""
Starter code for exploring the Enron dataset (emails + finances);
loads up the dataset (pickled dict of dicts).
The dataset has the form:
enron_data["LASTNAME FIRSTNAME MIDDLEINITIAL"] = { features_dict }
{features_dict} is a dictionary of features associated with that person.
You should explore features_dict as part of the mini-project,
but here's an example to get you started:
enron_data["SKILLING JEFFREY K"]["bonus"] = 5600000
"""
import pickle

with open("../final_project/final_project_dataset.pkl", "rb") as f:
    enron_data = pickle.load(f)
# -
## Quiz 13: Size of the Enron dataset
len(enron_data)
## Quiz 14: Number of features
enron_data.keys()
## Quiz 14: number of features
len(enron_data['METTS MARK'])
## Quiz 15: number of POIs in the Enron dataset
enron_data['METTS MARK']
# +
## Quiz 15: number of POIs in the Enron dataset
poi = [i for i in enron_data if enron_data[i]['poi']== True]
len(poi)
# +
## Quiz 16: number of POIs in the names file
with open("../final_project/poi_names.txt", "r") as poi_names:
    poi_names_lst = [line for line in poi_names]
# drop the source URL and the blank line at the top of the file
poi_names_lst.remove('http://usatoday30.usatoday.com/money/industries/energy/2005-12-28-enron-participants_x.htm\n')
poi_names_lst.remove('\n')
# -
## Quiz 16: number of POIs in the names file
len(poi_names_lst)
# +
## Quiz 17: We have several POIs in our dataset, but not all of them. Why is that a problem?
## Answer: The amount of data on POIs in the dataset may be insufficient for a study.
# +
## Quiz 18: What is the total value of the stock belonging to <NAME>ntice?
enron_data['PRENTICE JAMES']['total_stock_value']
# +
## Quiz 19: How many emails do we have from <NAME> to POIs?
enron_data['COLWELL WESLEY']['from_this_person_to_poi']
# +
## Quiz 20: What is the value of the stock options exercised by <NAME> Skilling?
enron_data['SKILLING JEFFREY K']['exercised_stock_options']
# +
## Quiz 22: Who was the CEO of Enron during most of the time the fraud was perpetrated?
## <NAME>
# +
## Quiz 23: Who was the chairman of Enron during most of the time the fraud was perpetrated?
## <NAME>
# +
## Quiz 24: Who was the CFO of Enron during most of the time the fraud was perpetrated?
## <NAME>
# +
## Quiz 25: Of these three executives (Lay, Skilling, and Fastow),
## who took home the most money?
print(enron_data['SKILLING JEFFREY K']['total_payments'])
print(enron_data['LAY KENNETH L']['total_payments'])
print(enron_data['FASTOW ANDREW S']['total_payments'])
# +
## Quiz 26: What notation is used when a feature does not have a well-defined value?
## 'NaN', as a string, not a numpy object
# +
## Quiz 27: How many people in the dataset have a quantified salary?
## And how many have a known email address?
quantify_salary = [i for i in enron_data.keys() if enron_data[i]['salary'] != 'NaN']
print(len(quantify_salary))
know_email = [x for x in enron_data.keys() if enron_data[x]['email_address'] != 'NaN']
print(len(know_email))
# -
import sys
sys.path.append("../tools/")
from feature_format import featureFormat
from feature_format import targetFeatureSplit
# +
## Quiz 29: How many people in the E+F dataset (as it currently stands) have "NaN" for their total payments?
## What percentage of the people in the dataset do they represent?
total_payments_NaN = [i for i in enron_data.keys() if enron_data[i]['total_payments'] == 'NaN']
print(len(total_payments_NaN))
print(len(total_payments_NaN)/len(enron_data))
# -
poi
# +
## Quiz 30: How many POIs in the E+F dataset (as it currently stands) have "NaN" for their total payments?
## What percentage of the people in the dataset do they represent?
POIs_payments_NaN = [i for i in poi if enron_data[i]['total_payments'] == 'NaN']
print(len(POIs_payments_NaN))
print(len(POIs_payments_NaN)/len(enron_data))
| Lesson06_Datasets_Questions/explore_enron_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stage 1: Basic content search by tf-idf
# +
import pandas as pd
import numpy as np
import os
from tqdm import tqdm
from sklearn.feature_extraction.text import TfidfVectorizer
import pickle
from scipy.sparse import save_npz, load_npz, csr_matrix
from scipy.spatial.distance import cosine
import preprocessing
import my_tfidf
# -
dtypes = {'cord_uid': str, 'sha': str, 'source_x': str, 'title': str, 'doi': str, 'pmcid': str, 'pubmed_id': str,
'license': str, 'abstract': str, 'publish_time': str, 'authors': str, 'journal': str, 'mag_id': str,
'who_covidence_id': str, 'arxiv_id': str, 'pdf_json_files': str, 'pmc_json_files': str,
'url': str, 's2_id': str, 'search_text': str, 'date': str}
# +
# load dataframe, filter only papers from 2021
path = 'results/final_models/metadata_2021.csv.gz'
data = pd.read_csv(path, sep='\t', dtype=dtypes)
data.date = pd.to_datetime(data.date)
data = data[data.date.apply(lambda x: x.year == 2021)]
data = data[['cord_uid', 'date', 'title', 'abstract', 'authors', 'doi',
'url', 'pdf_json_files', 'pmc_json_files', 'search_text']]
documents = data.search_text
index = data['cord_uid'].values
# +
# # save to csv
# data.to_csv('results/final_models/metadata_2021.csv.gz', index=False, sep='\t', compression='gzip')
# -
# ### Vectorize
path = 'results/final_models/'
# +
# # option 1: create vectorizer (uncomment desired option)
# vectorizer = my_tfidf.make_vectorizer(documents, pickle_path=path, save_files_prefix="_2021")
# option 2: load vectorizer from file
vectorizer = my_tfidf.load_vectorizer(path + 'vectorizer.pkl')

# # optionally, save the loaded vectorizer under the name the streamlit app expects
# with open(path + 'streamlit_vectorizer.pkl', 'wb') as file:
#     pickle.dump(vectorizer, file)
# +
# # option 1: create term-document matrix with vectorizer
# tdm = vectorizer.transform(documents)
# save_npz(path + 'streamlit_tdm.npz', tdm)
# option 2: load term-document matrix from file
tdm = load_npz(path + '2021_tdm.npz')
# -
# ### Run search on queries
def search_write_queries(queries, vectorizer, tdm, index, metadata, save_directory, num_top_results=5):
    def write_results(results_df, query, save_directory, filename):
        path = save_directory + filename
        with open(path, 'w') as file:
            file.write(query + '\n\n\n')
            for i in range(len(results_df)):
                row = results_df.iloc[i]
                file.write(f'Result {i+1}: uid {row.cord_uid}\n\n{row.title}\n\n{row.abstract}\n\n\n')
    for i in range(len(queries)):
        query = queries[i]
        results = my_tfidf.tfidf_search(query, vectorizer, tdm, index,
                                        metadata, num_top_results=num_top_results)
        filename = f'q{i}'
        write_results(results, query, save_directory, filename)
# load list of queries
queries = pd.read_csv('data/processed/questions_expert.csv', sep='\t', index_col=0).question.values
# run search, write results to .txt files
save_directory = 'results/final_models/tfidf_results/'
search_write_queries(queries, vectorizer, tdm, index, data, save_directory)
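# The internals of `my_tfidf.tfidf_search` are not shown here, but tf-idf retrieval typically ranks documents by cosine similarity between the query vector and each row of the term-document matrix. A minimal pure-Python sketch of that scoring step (the document names and vectors below are made up for illustration):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors; 0.0 if either is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical tf-idf rows for three documents, over a 3-term vocabulary.
tdm_rows = {
    "doc_a": [0.1, 0.0, 0.9],
    "doc_b": [0.7, 0.2, 0.0],
    "doc_c": [0.3, 0.3, 0.3],
}
query_vec = [0.0, 0.0, 1.0]  # a query made entirely of the third term

ranked = sorted(tdm_rows,
                key=lambda d: cosine_similarity(query_vec, tdm_rows[d]),
                reverse=True)
print(ranked)  # ['doc_a', 'doc_c', 'doc_b']
```

# Taking the first `num_top_results` entries of `ranked` and joining them back to the metadata table gives the result format written out above.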
| tfidf_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# +
import matplotlib
matplotlib.use('TkAgg')
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import netCDF4 as nc
from scipy.interpolate import interp1d
import scipy as sc
import matplotlib.cm as cm
from salishsea_tools import (nc_tools, gsw_calls, geo_tools, viz_tools)
import seabird
import cmocean as cmo
import gsw
from seabird.cnv import fCNV
import pandas as pd
import seaborn as sns
import matplotlib.gridspec as gridspec
import scipy.io
from matplotlib.offsetbox import AnchoredText
from mpl_toolkits.axes_grid1 import make_axes_locatable, axes_size
from eofs.standard import Eof
from dateutil import parser
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
# +
mesh_mask_large = nc.Dataset('/data/mdunphy/NEP036-N30-OUT/INV/mesh_mask.nc')
glamt = mesh_mask_large.variables['glamt'][:,179:350,479:650]
gphit = mesh_mask_large.variables['gphit'][:,179:350,479:650]
glamu = mesh_mask_large.variables['glamu'][:,179:350,479:650]
gphiv = mesh_mask_large.variables['gphiv'][:,179:350,479:650]
gdepw_0 = mesh_mask_large.variables['gdepw_0'][:,:32,179:350,479:650]
e2u = mesh_mask_large.variables['e2u'][:,179:350,479:650]
e1v = mesh_mask_large.variables['e1v'][:,179:350,479:650]
e1t = mesh_mask_large.variables['e1t'][:,179:350,479:650]
e2t = mesh_mask_large.variables['e2t'][:,179:350,479:650]
e3t_0 = mesh_mask_large.variables['e3t_0'][:,:32,179:350,479:650]
tmask = mesh_mask_large.variables['tmask'][:,:32,179:350,479:650]
# -
glamt.shape
# +
file_mask = nc.Dataset('/data/ssahu/NEP36_2013_summer_hindcast/Ariane_mesh_mask.nc', 'w', zlib=True)
file_mask.createDimension('x', tmask.shape[3]);
file_mask.createDimension('y', tmask.shape[2]);
file_mask.createDimension('z', tmask.shape[1]);
file_mask.createDimension('t', None);
x = file_mask.createVariable('x', 'int32', ('x',), zlib=True);
x.units = 'indices';
x.longname = 'x indices';
y = file_mask.createVariable('y', 'int32', ('y',), zlib=True);
y.units = 'indices';
y.longname = 'y indices';
time_counter = file_mask.createVariable('t', 'int32', ('t',), zlib=True);
time_counter.units = 's';
time_counter.longname = 'time';
glamt_file = file_mask.createVariable('glamt', 'float32', ('t', 'y', 'x'), zlib=True);
gphit_file = file_mask.createVariable('gphit', 'float32', ('t', 'y', 'x'), zlib=True);
glamu_file = file_mask.createVariable('glamu', 'float32', ('t', 'y', 'x'), zlib=True);
gphiv_file = file_mask.createVariable('gphiv', 'float32', ('t', 'y', 'x'), zlib=True);
e2u_file = file_mask.createVariable('e2u', 'float32', ('t', 'y', 'x'), zlib=True);
e1v_file = file_mask.createVariable('e1v', 'float32', ('t', 'y', 'x'), zlib=True);
e1t_file = file_mask.createVariable('e1t', 'float32', ('t', 'y', 'x'), zlib=True);
e2t_file = file_mask.createVariable('e2t', 'float32', ('t', 'y', 'x'), zlib=True);
gdepw_0_file = file_mask.createVariable('gdepw_0', 'float32', ('t','z', 'y', 'x'), zlib=True);
e3t_0_file = file_mask.createVariable('e3t_0', 'float32', ('t','z', 'y', 'x'), zlib=True);
tmask_file = file_mask.createVariable('tmask', 'float32', ('t','z', 'y', 'x'), zlib=True);
glamt_file[:] = glamt[:]
gphit_file[:] = gphit[:]
glamu_file[:] = glamu[:]
gphiv_file[:] = gphiv[:]
e2u_file[:] = e2u[:]
e1v_file[:] = e1v[:]
e1t_file[:] = e1t[:]
e2t_file[:] = e2t[:]
gdepw_0_file[:] = gdepw_0[:]
e3t_0_file[:] = e3t_0[:]
tmask_file[:] = tmask[:]
time_counter[0] = 1
file_mask.close()
# -
tmask.shape
# +
grid_T_small = nc.Dataset('/data/ssahu/NEP36_2013_summer_hindcast/cut_NEP36-S29_1d_20130429_20131025_grid_T_20130429-20130508.nc')
lon_small = grid_T_small.variables['nav_lon'][:]
lat_small = grid_T_small.variables['nav_lat'][:]
# +
lon_A1 = -126.20433
lat_A1 = 48.52958
j, i = geo_tools.find_closest_model_point(lon_A1,lat_A1,\
lon_small,lat_small,grid='NEMO',tols=\
{'NEMO': {'tol_lon': 0.1, 'tol_lat': 0.1},\
'GEM2.5': {'tol_lon': 0.1, 'tol_lat': 0.1}})
print(j,i)
# +
lon_LB08 = -125.4775
lat_LB08 = 48.4217
j, i = geo_tools.find_closest_model_point(lon_LB08,lat_LB08,\
lon_small,lat_small,grid='NEMO',tols=\
{'NEMO': {'tol_lon': 0.1, 'tol_lat': 0.1},\
'GEM2.5': {'tol_lon': 0.1, 'tol_lat': 0.1}})
print(j,i)
# -
# glamt_large.shape
lon_small[0,0], lat_small[0,0]
lon_small[-1,0], lat_small[-1,0]
lon_small[0,-1], lat_small[0,-1]
lon_small[-1,-1], lat_small[-1,-1]
# +
mat_file_str='/data/ssahu/Falkor_2013/mvp/surveyA.mat'
mat = scipy.io.loadmat(mat_file_str)
depths_survey = mat['depths'][:,0]
lat_survey = mat['latitude'][:,0]
lon_survey = mat['longitude'][:,0] - 100
# den_survey = mat['density'][:]
pden_survey = mat['pden'][:]
temp_survey = mat['temp'][:]
sal_survey = mat['salinity'][:]
mtime = mat['mtime'][:,0]
pressure_survey = np.empty_like(temp_survey)
SA_survey = np.empty_like(temp_survey)
CT_survey = np.empty_like(temp_survey)
spic_survey = np.empty_like(temp_survey)
rho_survey = np.empty_like(temp_survey)
y = np.empty_like(lat_survey)
x = np.empty_like(y)
for i in np.arange(lat_survey.shape[0]):
y[i], x[i] = geo_tools.find_closest_model_point(
lon_survey[i],lat_survey[i],lon_small,lat_small,tols={
'NEMO': {'tol_lon': 0.1, 'tol_lat': 0.1},'GEM2.5': {'tol_lon': 0.1, 'tol_lat': 0.1}})
# -
for i in np.arange(y.shape[0]):
print (y[i], x[i])
| Ariane_file_prep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:525]
# language: python
# name: conda-env-525-py
# ---
# # Combining the data
#
# Use one of the following options to combine data CSVs into a single CSV.
#
# **DASK**
#
# When combining the csv files make sure to add extra column called "model" that identifies the model (tip : you can get this column populated from the file name eg: for file name "SAM0-UNICON_daily_rainfall_NSW.csv", the model name is SAM0-UNICON)
#
# Compare run times and memory usages of these options on different machines within your team, and summarize your observations in your milestone notebook.
#
# Warning: Some of you might not be able to do it on your laptop. It's fine if you're unable to do it. Just make sure you check memory usage and discuss the reasons why you might not have been able to run this on your laptop.
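# The tip above (populating the `model` column from the file name) comes down to splitting the base name on its first underscore. A small stand-alone sketch, independent of the dask-specific `include_path_column` approach used below:

```python
import os

def model_from_path(path):
    """Extract the model name from a filename like 'SAM0-UNICON_daily_rainfall_NSW.csv'."""
    filename = os.path.basename(path)
    return filename.split("_", 1)[0]

print(model_from_path("../data/SAM0-UNICON_daily_rainfall_NSW.csv"))  # SAM0-UNICON
```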
# +
# Read in libraries
import re
import os
import glob
import zipfile
import requests
from urllib.request import urlretrieve
import json
from memory_profiler import memory_usage
import dask.dataframe as dd
import pandas as pd
# -
# Read in data into data directory
# %run -i '../src/requests.py'
# +
# Select column names
use_cols = ['time', 'lat_min', 'lat_max', 'lon_min', 'lon_max', 'rain (mm/day)']
# Get extension for all files
all_files = "../data/*NSW.csv"
# Combine all files
ddf = dd.read_csv(all_files, assume_missing=True, usecols=use_cols, include_path_column=True)
# Create model column
ddf['model'] = ddf['path'].str.split("/", expand=True, n=10)[10].str.split("_", expand=True, n=3)[0]
# Drop path column
ddf = ddf.drop(['path'], axis=1)
# Write combined data to single file
ddf.to_csv("../data/combined_NSW.csv", single_file=True)
# -
| notebooks/milestone1_cmb_csv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Drime648/learning-tensorflow/blob/main/05_transfer_learning_with_tf_fine_tuning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="Ax28kr3VQLW8" outputId="cec77098-5956-460c-e84c-ce1612f8c90f"
# !nvidia-smi
# + [markdown] id="wtDqmpVnQ7Rb"
# #Importing helper functions
# + colab={"base_uri": "https://localhost:8080/"} id="P-46WyCMQdPg" outputId="0478509d-5075-4d76-b50f-0e77cc4d6e1d"
# !wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
# + id="L6fblJhRQ4vE"
from helper_functions import create_tensorboard_callback, plot_loss_curves, unzip_data, walk_through_dir
# + [markdown] id="vkJ3FNr-RfCs"
# #Getting the data
# + colab={"base_uri": "https://localhost:8080/"} id="2fOveZQmRgp3" outputId="dcf32c4d-921f-4bd1-ef4f-7dadfddf3427"
# !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip
# + id="yJplh7zmSFTg"
unzip_data("10_food_classes_10_percent.zip")
# + colab={"base_uri": "https://localhost:8080/"} id="6mvT0RjUSK4x" outputId="313dc286-6ea0-486a-98e6-e12fb85ded1c"
walk_through_dir("/content/10_food_classes_10_percent")
# + id="E9A_Yz6ZSWpw"
train_dir = "10_food_classes_10_percent/train/"
test_dir = "10_food_classes_10_percent/test/"
# + colab={"base_uri": "https://localhost:8080/"} id="3l_mNYB8Svig" outputId="b7fbbd79-c939-4bbe-8635-53ec87cce06f"
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
IMG_SIZE = (224, 224)
train_data = image_dataset_from_directory(train_dir, image_size=IMG_SIZE, label_mode="categorical")
test_data = image_dataset_from_directory(test_dir, label_mode="categorical", image_size=IMG_SIZE)
# + id="riFxp9NXT01A"
class_names = train_data.class_names
# + id="ATRAdbiHUZ27" colab={"base_uri": "https://localhost:8080/"} outputId="ace535c6-145b-47f9-cc7a-df74b2d5ae0b"
for images, labels in train_data.take(1):
print(images, labels)
# + [markdown] id="JXt1IJqfatlk"
# #making model 0
# + id="kEXKIS63avPh" colab={"base_uri": "https://localhost:8080/"} outputId="d1096ada-009a-474c-9278-259bc9510589"
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
inputs = tf.keras.layers.Input(shape = (224,224,3), name = "input_layer")
#normalize if its resnet v2 50
# x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255.)(inputs)
x = base_model(inputs)
x = tf.keras.layers.GlobalAveragePooling2D(name = "global_average_pooling")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax", name = "output_layer")(x)
model_0 = tf.keras.Model(inputs, outputs)
model_0.compile(loss = "categorical_crossentropy",
optimizer = tf.keras.optimizers.Adam(),
metrics = ["accuracy"])
history_0 = model_0.fit(train_data, epochs = 5, steps_per_epoch = len(train_data), validation_data = test_data, validation_steps = int(0.25 * len(test_data)))
# + id="P_J8m7HrSQX3" colab={"base_uri": "https://localhost:8080/"} outputId="e8fa2c6d-44f2-42c9-958b-2ac0766fd1dd"
model_0.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 573} id="CbdO4GAsuHLK" outputId="e7aa59f7-7305-41cf-d705-41d45712d7e5"
plot_loss_curves(history_0)
# + [markdown] id="y6uP4SV2xDmp"
# #Get Feature Vector
# + colab={"base_uri": "https://localhost:8080/"} id="wH5P-wssxGU2" outputId="f4ad613c-fc04-420f-caf8-347de9084151"
shape = (1,2,2,3)
tf.random.set_seed(42)
input_tensor = tf.random.normal(shape)
input_tensor
# + colab={"base_uri": "https://localhost:8080/"} id="OoMTMsRjzAf7" outputId="94e79a0e-fb4d-4002-b325-3884ee36b5d8"
pooled_tensor = tf.keras.layers.GlobalAveragePooling2D()(input_tensor)
pooled_tensor
# (0.3274685 - 1.4075519 - 0.5573232 + 0.28893656) / 4 = -0.3371175
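# The arithmetic in the comment above is just a per-channel mean over the spatial dimensions: a `(batch, height, width, channels)` tensor collapses to `(batch, channels)`. A plain-Python sketch with toy values (not the random tensor above):

```python
# Shape (1, 2, 2, 3) as nested lists: one sample, 2x2 spatial grid, 3 channels.
tensor = [
    [[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]],
     [[5.0, 6.0, 7.0], [7.0, 8.0, 9.0]]]
]

def global_average_pool(batch):
    """Average each channel over all spatial positions, per sample."""
    pooled = []
    for sample in batch:
        channels = len(sample[0][0])
        sums = [0.0] * channels
        count = 0
        for row in sample:
            for pixel in row:
                for c, value in enumerate(pixel):
                    sums[c] += value
                count += 1
        pooled.append([s / count for s in sums])
    return pooled

print(global_average_pool(tensor))  # [[4.0, 5.0, 6.0]]
```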
# + [markdown] id="xcTC56_D9mM8"
# #running bunches of experiments
# + [markdown] id="lvodrmsP-RHh"
# ##doin data augmentation
# + colab={"base_uri": "https://localhost:8080/"} id="22e-nVCJ-Sx_" outputId="75f82a88-9c11-4362-a148-ebd73d7ab6a7"
# !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_1_percent.zip
# + id="iEWr32i1-V-6"
unzip_data("/content/10_food_classes_1_percent.zip")
# + id="smGfU3J0-eGd"
small_train_dir = "/content/10_food_classes_1_percent/train/"
small_test_dir = "/content/10_food_classes_1_percent/test/"
# + colab={"base_uri": "https://localhost:8080/"} id="UEcaV35p-pka" outputId="0bba25b2-2235-4067-f807-a02a36a23539"
from tensorflow.keras.preprocessing import image_dataset_from_directory
IMG_SIZE = (224, 224)
small_train_data = image_dataset_from_directory(small_train_dir, image_size=IMG_SIZE, label_mode="categorical")
small_test_data = image_dataset_from_directory(small_test_dir, label_mode="categorical", image_size=IMG_SIZE)
# + id="I_93vPKM_yKr"
import tensorflow as tf
from tensorflow import keras
from keras import layers
from keras.layers.experimental import preprocessing
data_augment = keras.Sequential([
preprocessing.RandomFlip("horizontal"),
preprocessing.RandomRotation(0.2),
preprocessing.RandomZoom(0.2),
preprocessing.RandomHeight(0.2),
preprocessing.RandomWidth(0.2),
  # preprocessing.Rescaling(1./255.)
])
# + id="YVrC31TjAG7L"
#visualize
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import random
def see_augmented_layer(class_name, dir):
target = dir + class_name
random_img = random.choice(os.listdir(target))
path = target + '/' + random_img
img = mpimg.imread(path)
plt.imshow(img)
plt.axis(False)
plt.title('original ' + class_name)
#augment
augmented_img = data_augment(tf.expand_dims(img, axis = 0))
plt.figure()
plt.imshow(tf.squeeze(augmented_img) / 255.)
plt.title("augmented " + class_name)
    return 1
# + colab={"base_uri": "https://localhost:8080/", "height": 528} id="iebUW_vuaWXQ" outputId="5eb7ac93-b08b-4ad4-98ea-6529b54a0fc1"
class_name = random.choice(class_names)
dir = "/content/10_food_classes_1_percent/train/"
a = see_augmented_layer(class_name, dir)
# + [markdown] id="9KdCpg8ljr_s"
# ##Make Model 1
# + colab={"base_uri": "https://localhost:8080/"} id="P19h-hEUjtcN" outputId="5981c397-1e10-45da-bb96-d4c83ed12636"
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
input_shape = (224,224,3)
inputs = tf.keras.layers.Input(shape = input_shape, name = "input_layer")
#add data augmentation
x = data_augment(inputs)
x = base_model(x, training = False)
#condense output
x = tf.keras.layers.GlobalAveragePooling2D(name = "global_average_pooling")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax", name = "output_layer")(x)
#if it is a layer, put the input on the right side, if its a model, put it in the normal parameter slot
model_1 = tf.keras.Model(inputs, outputs)
model_1.compile(loss = "categorical_crossentropy",
optimizer = tf.keras.optimizers.Adam(),
metrics = ["accuracy"])
history_1 = model_1.fit(small_train_data,
epochs = 5,
steps_per_epoch = len(small_train_data),
validation_data = small_test_data,
validation_steps = int(0.25 * len(small_test_data)),
callbacks = [create_tensorboard_callback(dir_name = "tensorboard", experiment_name = "model_1")])
# + id="-qboiM8jlH6U" colab={"base_uri": "https://localhost:8080/"} outputId="00f66a2f-a494-470b-f6bf-37936fdea75f"
model_1.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="35kbQtDkwT6D" outputId="d432b52a-8a75-4a4f-c9b4-2bf1ccac0a5b"
plot_loss_curves(history_0)
plt.figure()
plot_loss_curves(history_1)
# + [markdown] id="9C-QPOLixMya"
# ##Make model 2
# + id="DJJWPcLVzTB_"
#make model checkpoint function
#saves model at different intervals to save the best epoch
checkpoint_path = "/content/checkpoints_2/checkpoint.ckpt"
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True, save_best_only=True)
# + colab={"base_uri": "https://localhost:8080/"} id="zBkhR_hdxOGy" outputId="77b33e64-6dd8-4719-b6b4-0b7c1ddf2202"
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
input_shape = (224,224,3)
inputs = tf.keras.layers.Input(shape = input_shape, name = "input_layer")
#add data augmentation
x = data_augment(inputs)
x = base_model(x, training = False)
#condense output
x = tf.keras.layers.GlobalAveragePooling2D(name = "global_average_pooling")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax", name = "output_layer")(x)
#if it is a layer, put the input on the right side, if its a model, put it in the normal parameter slot
model_2 = tf.keras.Model(inputs, outputs)
model_2.compile(loss = "categorical_crossentropy",
optimizer = tf.keras.optimizers.Adam(),
metrics = ["accuracy"])
history_2 = model_2.fit(train_data,
epochs = 5,
steps_per_epoch = len(train_data),
validation_data = test_data,
validation_steps = len(test_data),
callbacks = [create_tensorboard_callback(dir_name = "tensorboard", experiment_name = "model_2"),
checkpoint_callback])
# + colab={"base_uri": "https://localhost:8080/", "height": 573} id="BM4PERufxwjK" outputId="30c3b036-aba7-4448-f54a-269f978a043a"
plot_loss_curves(history_2)
# + colab={"base_uri": "https://localhost:8080/"} id="HZOrttFp9GUU" outputId="f05639bf-b84d-4c24-a51c-9207e2c2a0b7"
result_2 = model_2.evaluate(test_data)
# + colab={"base_uri": "https://localhost:8080/"} id="Z42qpJax-lI1" outputId="19afe903-ca38-4933-b130-eb4caf3be7cb"
model_2.load_weights(checkpoint_path)
# + colab={"base_uri": "https://localhost:8080/"} id="mZjISARa_yTQ" outputId="d04e1330-d2d7-4ed4-d3ce-b1a374192ccb"
loaded_result_2 = model_2.evaluate(test_data)
# + [markdown] id="wg6ep1fHGxbM"
# ## Make model 3
# + id="KtywLHWIGzG2" colab={"base_uri": "https://localhost:8080/"} outputId="e76edcb4-384a-4366-900f-c9926c0360a0"
for i, layer in enumerate(model_2.layers[2].layers):
print(i, layer.name, layer.trainable)
# + id="QUjPEQsMK2aO"
# for i, layer in enumerate(model_2.layers[2].layers):
# if(i >= len(model_2.layers[2].layers) - 10):
# layer.trainable = True
# print(i, layer.name, layer.trainable)
# model_2.compile(loss = "categorical_crossentropy",
# optimizer = tf.keras.optimizers.Adam(lr = 0.0001),
# metrics = ["accuracy"])
base_model.trainable = True
# Freeze all layers except for the last 10
for layer in base_model.layers[:-10]:
layer.trainable = False
# Recompile the model (always recompile after any adjustments to a model)
model_2.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), # lr is 10x lower than before for fine-tuning
metrics=["accuracy"])
# + colab={"base_uri": "https://localhost:8080/"} id="7JW08AcTMIqG" outputId="73123d8c-ca64-4b38-c396-34ce31fb3614"
history_3 = model_2.fit(train_data,
epochs = 10,
steps_per_epoch = len(train_data),
validation_data = test_data,
validation_steps = int(0.25 * len(test_data)),
initial_epoch = history_2.epoch[-1],
callbacks = [create_tensorboard_callback(dir_name = "tensorboard", experiment_name = "model_3")])
# + colab={"base_uri": "https://localhost:8080/"} id="pxS9VK3OM_1m" outputId="0be5860b-36b2-4994-cb4e-227b4eb9d306"
model_2.evaluate(test_data)
# + [markdown] id="ppj_QTVxuyUY"
# ## Making model 4
# + colab={"base_uri": "https://localhost:8080/"} id="Rdz9hbuwliNH" outputId="61a28dad-b923-480e-8bc0-92b34b5f3ab6"
# !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip
# + id="_Daex_cSljb3"
unzip_data("/content/10_food_classes_all_data.zip")
# + id="_0RApnNLnNkh"
big_train_dir = "/content/10_food_classes_all_data/train/"
big_test_dir = "/content/10_food_classes_all_data/test/"
# + colab={"base_uri": "https://localhost:8080/"} id="fvbT778ynX5A" outputId="d7c12964-1de7-45c1-bab2-fb4c04da9a9f"
walk_through_dir('10_food_classes_all_data')
# + colab={"base_uri": "https://localhost:8080/"} id="cV4mq_QQngQQ" outputId="22b7260a-0afe-4163-81af-c84e6cb38fba"
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
img_size = (224,224)
big_train_data = image_dataset_from_directory(big_train_dir, label_mode="categorical", image_size=img_size)
big_test_data = image_dataset_from_directory(big_test_dir, label_mode="categorical", image_size=img_size)
# + colab={"base_uri": "https://localhost:8080/"} id="SfhQf1AFoJFA" outputId="3d5a8f41-096a-4a6a-8321-40a07a16399c"
model_2.load_weights(checkpoint_path)
# + colab={"base_uri": "https://localhost:8080/"} id="C5LDWIl_pkfg" outputId="dc17e57c-2b03-4f3f-ebfa-aaabecce3e6e"
model_2.evaluate(test_data)
# + colab={"base_uri": "https://localhost:8080/"} id="n8UJbPg4q0R-" outputId="c1452804-f83a-4d0c-8c06-71a5b6861d42"
for i, layer in enumerate(model_2.layers[2].layers):
print(i, layer.name, layer.trainable)
| 05_transfer_learning_with_tf_fine_tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import json
pop = pd.read_csv('pop_history_simple.csv')
pop
pop.index = pop.index + 1
pop
pop_transposed= pop.T
pop_transposed
pop_transposed.to_json(orient='columns')
pop_transposed.to_json(orient="index")
pop_year= pop_transposed[1:]
pop_year
pop_year.to_json(orient="index")
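# A toy sketch of how the two orients differ (hypothetical two-row frame,
# not the population data): `orient="columns"` nests by column name first,
# while `orient="index"` nests by row label first.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=["x", "y"])
print(df.to_json(orient="columns"))  # {"a":{"x":1,"y":2},"b":{"x":3,"y":4}}
print(df.to_json(orient="index"))    # {"x":{"a":1,"b":3},"y":{"a":2,"b":4}}
```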
| WORK/working/test1/pop_as_json.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 (tensorflow)
# language: python
# name: rga
# ---
# # T81-558: Applications of Deep Neural Networks
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
#
# **Module 9 Assignment: Kaggle Submission**
#
# **Student Name: <NAME>**
# # Assignment Instructions
#
# For this assignment you will begin by loading a pretrained neural network that I provide here: [transfer_9.h5](https://data.heatonresearch.com/data/t81-558/networks/transfer_9.h5). You will demonstrate your ability to transfer several layers from this neural network to create a new neural network to be used for feature engineering.
#
# The **transfer_9.h5** neural network is composed of the following four layers:
#
# ```
# Model: "sequential_7"
# _________________________________________________________________
# Layer (type) Output Shape Param #
# =================================================================
# dense_11 (Dense) (None, 25) 225
# _________________________________________________________________
# dense_12 (Dense) (None, 10) 260
# _________________________________________________________________
# dense_13 (Dense) (None, 3) 33
# _________________________________________________________________
# dense_14 (Dense) (None, 1) 4
# =================================================================
# Total params: 522
# Trainable params: 522
# Non-trainable params: 0
# ```
#
# You should only use the first three layers. The final dense layer should be removed, exposing the (None, 3) shaped layer as the new output layer. This is a 3-neuron layer. The output from these 3 layers will become your 3 engineered features.
#
# Complete the following tasks:
#
# * Load the Keras neural network **transfer_9.h5**. Note that you will need to download it to either your hard drive or GDrive (if you're using Google CoLab). Keras does not allow loading of a neural network across HTTP.
# * Create a new neural network with only the first 3 layers, drop the (None, 1) shaped layer.
# * Load the dataset [transfer_data.csv](https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv).
# * Use all columns as input, but do not use *id* as input. You will need to save the *id* column to build your submission.
# * Do not zscore or transform the input columns.
# * Submit the output from the (None, 3) shaped layer, along with the corresponding *id* column. The three output neurons should create columns named *a*, *b*, and *c*.
#
# The submit file will look something like:
#
# |id|a|b|c|
# |-|-|-|-|
# |1|2.3602087|1.4411213|0|
# |2|0.067718446|1.0037427|0.52129996|
# |3|0.74778837|1.0647631|0.052594826|
# |4|1.0594225|1.1211816|0|
# |...|...|...|...|
#
#
#
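# The layer-dropping step can be sketched as follows. This uses a toy
# stand-in built from the summary above (same layer names and parameter
# counts — 225/260/33/4 implies an 8-dimensional input — but untrained
# weights), since the real **transfer_9.h5** must be downloaded first:

```python
import tensorflow as tf

# Toy stand-in for transfer_9.h5: same layer shapes, untrained weights.
inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(25, name="dense_11")(inputs)
x = tf.keras.layers.Dense(10, name="dense_12")(x)
x = tf.keras.layers.Dense(3, name="dense_13")(x)
outputs = tf.keras.layers.Dense(1, name="dense_14")(x)
full = tf.keras.Model(inputs, outputs)

# Drop the final (None, 1) layer, exposing the 3-neuron layer as output:
truncated = tf.keras.Model(inputs, full.layers[-2].output)
print(truncated.output_shape)  # (None, 3)
```

# With the real file you would instead `load_model("transfer_9.h5")` and
# build the truncated model from its layers in the same way.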
# # Assignment Submit Function
#
# You will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems.
#
# **It is unlikely that you should need to modify this function.**
# +
import base64
import os
import numpy as np
import pandas as pd
import requests
# This function submits an assignment. You can submit an assignment as much as you like, only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
#               The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
    if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running in a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
    if ext not in ['.ipynb','.py']: raise Exception("Source file extension is {}; must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
# -
# # Google CoLab Instructions
#
# If you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive.
from google.colab import drive
drive.mount('/content/drive')
# !ls /content/drive/My\ Drive/Colab\ Notebooks
# # Assignment #9 Sample Code
#
# The following code provides a starting point for this assignment.
# +
import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso
# This is your student key that I emailed to you at the beginning of the semester.
key = "<KEY>" # This is an example key and will not work.
# You must also identify your source file. (modify for your local setup)
# file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class9.ipynb' # Google CoLab
# file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb' # Windows
file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb' # Mac/Linux
# Begin assignment
model = load_model("/Users/jheaton/Downloads/transfer_9.h5") # modify to where you stored it
df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/datasets/transfer_data.csv")
submit(source_file=file,data=df_submit,key=key,no=9)
# -
| assignments/assignment_yourname_class9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### About the dataset :
# For my first beginner project on Natural Language Processing (NLP), I chose the SMS Spam Collection Dataset.
# It consists of about 5,572 SMS messages, each with a label classifying the message as "spam" or "ham".
# On this dataset, I am going to explore some common NLP techniques:
#
# 1) Removing stopwords.
# 2) Performing tokenization.
# 3) Performing lemmatization.
# 4) Using Bag of Words.
#
# Based on these preprocessing techniques, I am going to build a Naive Bayes classifier model that will classify unknown messages as "spam" or "ham".
# #### Importing the Dataset
#Importing pandas library.
import pandas as pd
messages = pd.read_csv('SMSSpamCollection', sep='\t', names=["label", "message"])
# #### Data cleaning and preprocessing
# +
#Importing re or Regular Expression module.
#This module supports various things like Modifiers, Identifiers, and White space characters.
import re
# +
#Importing nltk library.
#It supports classification, tokenization, stemming, tagging, parsing, and semantic reasoning functionalities.
import nltk
nltk.download('stopwords')
# +
#Importing other useful packages.
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
# +
#It helps in normalization of words.
ps = PorterStemmer()
# +
#It helps in grouping together the different inflected forms of a word so they can be analysed as a single item.
wordnet=WordNetLemmatizer()
# +
#Corpus represents a collection of (data) texts, typically labeled with text annotations.
corpus = []
# -
for i in range(0, len(messages)):
review = re.sub('[^a-zA-Z]', ' ', messages['message'][i])
review = review.lower()
review = review.split()
review = [ps.stem(word) for word in review if not word in stopwords.words('english')]
review = ' '.join(review)
corpus.append(review)
len(corpus)
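# As a quick illustration of what the stemming step in the loop above does
# to individual words (toy inputs, not taken from the dataset):

```python
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
print(ps.stem("running"))   # run
print(ps.stem("messages"))  # stems are normalized forms, not always real words
```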
# #### Creating the Bag of Words model
# +
#The CountVectorizer provides a simple way to both tokenize a collection of text documents and build a vocabulary of known words.
#It can also be used to encode new documents using that vocabulary.
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features=2500)
X = cv.fit_transform(corpus).toarray()
X
# -
len(corpus)
# +
#Getting dummy values.
y=pd.get_dummies(messages['label'])
y
# +
#iloc is used to select rows and columns by number, in the order that they appear in the data frame.
y=y.iloc[:,1].values
y
# -
# #### Dividing the data into training and testing with 0.20(20%) test size
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
# #### Training model using Naive bayes classifier
# +
#Multinomial Naive Bayes is a specialized version of Naive Bayes that is designed more for text documents.
#Multinomial Naive Bayes explicitly models the word counts and adjusts the underlying calculations to deal with them.
#Importing Multinomial Naive Bayes classifier.
from sklearn.naive_bayes import MultinomialNB
spam_detect_model = MultinomialNB().fit(X_train, y_train) #Fitting the data.
# -
y_pred=spam_detect_model.predict(X_test)
y_pred
y_train
# #### Comparing y_train and y_pred
# #### Confusion Matrix
# +
#Confusion Matrix basically gives us an idea about how well our classifier has performed, with respect to performance on individual classes.
from sklearn.metrics import confusion_matrix
# -
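# On a toy example (hypothetical labels, not our predictions), rows of the
# confusion matrix are true classes and columns are predicted classes:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# [[1 1]
#  [1 2]]
```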
confusion_m=confusion_matrix(y_test, y_pred)
confusion_m
from sklearn.metrics import accuracy_score
accuracy=accuracy_score(y_test, y_pred)
accuracy
# +
#The accuracy obtained by building model using Multinomial Naive Bayes Classifier is 98.56%.
# -
# ## Conclusion :
# This is a very basic and short approach I have carried out for analyzing this dataset.
# I hope it will give you a very basic idea of how to approach your analysis on these types of datasets.
# +
# Thank you!
# Any suggestions or comments are welcome.
| Ham and spam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Session 4: Visualizing Representations
#
# ## Assignment: Deep Dream and Style Net
#
# <p class='lead'>
# Creative Applications of Deep Learning with Google's Tensorflow
# <NAME>
# Kadenze, Inc.
# </p>
#
# # Overview
#
# In this homework, we'll first walk through visualizing the gradients of a trained convolutional network. Recall from the last session that we had trained a variational convolutional autoencoder. We also trained a deep convolutional network. In both of these networks, we learned only a few tools for understanding how the model performs. These included measuring the loss of the network and visualizing the `W` weight matrices and/or convolutional filters of the network.
#
# During the lecture we saw how to visualize the gradients of Inception, Google's state of the art network for object recognition. This resulted in a much more powerful technique for understanding how a network's activations transform or accentuate the representations in the input space. We'll explore this more in Part 1.
#
# We also explored how to use the gradients of a particular layer or neuron within a network with respect to its input for performing "gradient ascent". This resulted in Deep Dream. We'll explore this more in Parts 2-4.
#
# We also saw how the gradients at different layers of a convolutional network could be optimized for another image, resulting in the separation of content and style losses, depending on the chosen layers. This allowed us to synthesize new images that shared another image's content and/or style, even if they came from separate images. We'll explore this more in Part 5.
#
# Finally, you'll package all the GIFs you create throughout this notebook and upload them to Kadenze.
#
#
# <a name="learning-goals"></a>
# # Learning Goals
#
# * Learn how to inspect deep networks by visualizing their gradients
# * Learn how to "deep dream" with different objective functions and regularization techniques
# * Learn how to "stylize" an image using content and style losses from different images
#
#
# # Table of Contents
#
# <!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
#
# - [Part 1 - Pretrained Networks](#part-1---pretrained-networks)
# - [Graph Definition](#graph-definition)
# - [Preprocess/Deprocessing](#preprocessdeprocessing)
# - [Tensorboard](#tensorboard)
# - [A Note on 1x1 Convolutions](#a-note-on-1x1-convolutions)
# - [Network Labels](#network-labels)
# - [Using Context Managers](#using-context-managers)
# - [Part 2 - Visualizing Gradients](#part-2---visualizing-gradients)
# - [Part 3 - Basic Deep Dream](#part-3---basic-deep-dream)
# - [Part 4 - Deep Dream Extensions](#part-4---deep-dream-extensions)
# - [Using the Softmax Layer](#using-the-softmax-layer)
# - [Fractal](#fractal)
# - [Guided Hallucinations](#guided-hallucinations)
# - [Further Explorations](#further-explorations)
# - [Part 5 - Style Net](#part-5---style-net)
# - [Network](#network)
# - [Content Features](#content-features)
# - [Style Features](#style-features)
# - [Remapping the Input](#remapping-the-input)
# - [Content Loss](#content-loss)
# - [Style Loss](#style-loss)
# - [Total Variation Loss](#total-variation-loss)
# - [Training](#training)
# - [Assignment Submission](#assignment-submission)
#
# <!-- /MarkdownTOC -->
# +
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
    print('Try installing the Python 3.5 version of anaconda '
          'and then restart `jupyter notebook`:\n',
          'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils
except ImportError:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
          "the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
# %matplotlib inline
plt.style.use('ggplot')
# -
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
# <a name="part-1---pretrained-networks"></a>
# # Part 1 - Pretrained Networks
#
# In the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include:
#
# * [Inception v3](https://github.com/tensorflow/models/tree/master/inception)
# - This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is only about 50MB!
# * [Inception v5](https://github.com/tensorflow/models/tree/master/inception)
# - This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is only about 50MB! It presents a few extensions to v3 which are not documented anywhere that I've found, as of yet...
# * [Visual Geometry Group @ Oxford's 16 layer](http://www.robots.ox.ac.uk/~vgg/research/very_deep/)
# - This network has been trained on ImageNet and its final output layer is a softmax layer denoting 1 of 1000 possible objects. This model is nearly half a gigabyte, about 10x larger in size than the inception network. The trade off is that it is much slower to run.
# * [Visual Geometry Group @ Oxford's Face Recognition](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/)
# - This network has been trained on the VGG Face Dataset and its final output layer is a softmax layer denoting 1 of 2622 different possible people.
# * [Illustration2Vec](http://illustration2vec.net)
# - This network has been trained on illustrations and manga and its final output layer is 4096 features.
# * [Illustration2Vec Tag](http://illustration2vec.net)
# - Please do not use this network if you are under the age of 18 (seriously!)
# - This network has been trained on manga and its final output layer is one of 1539 labels.
#
# When we use a pre-trained network, we load a network's definition and its weights which have already been trained. The network's definition includes a set of operations such as convolutions, and adding biases, but all of their values, i.e. the weights, have already been trained.
#
# <a name="graph-definition"></a>
# ## Graph Definition
#
# In the libs folder, you will see a few new modules for loading the above pre-trained networks. Each module is structured similarly to help you understand how they are loaded and include example code for using them. Each module includes a `preprocess` function for using before sending the image to the network. And when using deep dream techniques, we'll be using the `deprocess` function to undo the `preprocess` function's manipulations.
#
# Let's take a look at loading one of these. Every network except for `i2v` includes a key 'labels' denoting what labels the network has been trained on. If you are under the age of 18, please do not use the `i2v_tag` model, as its labels are unsuitable for minors.
#
# Let's load the libraries for the different pre-trained networks:
from libs import vgg16, inception, i2v
# Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Stick w/ Inception for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
# -
# Each network returns a dictionary with the following keys defined. Every network has a key for "labels" except for "i2v", since this is a feature only network, e.g. an unsupervised network, and does not have labels.
print(net.keys())
# <a name="preprocessdeprocessing"></a>
# ## Preprocess/Deprocessing
#
# Each network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocessing we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (this will be explained in more depth later).
#
# Whenever we `preprocess` the image and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the `deprocess` function stored in the dictionary. Let's explore how these work. To confirm that `deprocess` performs the inverse operation, let's preprocess the image; then I'll have you try to deprocess it.
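# A hedged numpy sketch of a typical preprocess/deprocess pair (z-score
# normalization and its exact inverse; each network's real functions differ
# in the details, e.g. per-channel means or cropping):

```python
import numpy as np

img = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in image
mean, std = img.mean(), img.std()
norm = (img - mean) / std         # a typical "preprocess"
restored = norm * std + mean      # the matching "deprocess" undoes it
print(np.allclose(restored, img, atol=1e-5))  # True
```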
# First, let's get an image:
og = plt.imread('clinton.png')[..., :3]
plt.imshow(og)
print(og.min(), og.max())
# Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for `vgg16`, we can find the `preprocess` function as `vgg16.preprocess`, or for `inception`, `inception.preprocess`, or for `i2v`, `i2v.preprocess`. Or, we can just use the key `preprocess` in our dictionary `net`, as this is just convenience for us to access the corresponding preprocess function.
# Now call the preprocess function. This will preprocess our
# image ready for being input to the network, except for changes
# to the dimensions. I.e., we will still need to convert this
# to a 4-dimensional Tensor once we input it to the network.
# We'll see how that works later.
img = net['preprocess'](og)
print(img.min(), img.max())
# Let's undo the preprocessing. Recall that the `net` dictionary has the key `deprocess` which is the function we need to use on our processed image, `img`.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
deprocessed = ...
plt.imshow(deprocessed)
plt.show()
# <a name="tensorboard"></a>
# ## Tensorboard
#
# I've added a utility module called `nb_utils` which includes a function `show_graph`. This will use [Tensorboard](https://www.tensorflow.org/versions/r0.10/how_tos/graph_viz/index.html) to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.
#
# Be sure to interact with the graph and click on the various modules.
#
# For instance, if you've loaded the `inception` v5 network, locate the "input" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as `X` in our own networks). From there, it goes to the "conv2d0" variable scope (i.e. this uses the code: `with tf.variable_scope("conv2d0")` to create a set of operations with the prefix "conv2d0/"). If you expand this scope, you'll see another scope, "pre_relu". This is created using another `tf.variable_scope("pre_relu")`, so that any new variables will have the prefix "conv2d0/pre_relu". Finally, inside here, you'll see the convolution operation (`tf.nn.conv2d`) and the 4d weight tensor, "w" (e.g. created using `tf.get_variable`), used for convolution (and so it has the name "conv2d0/pre_relu/w"). Just after the convolution is the addition of the bias, b. And finally, after exiting the "pre_relu" scope, you should be able to see the "conv2d0" operation which applies the relu nonlinearity. In summary, that region of the graph can be created in Tensorflow like so:
#
# ```python
# input = tf.placeholder(...)
# with tf.variable_scope('conv2d0'):
# with tf.variable_scope('pre_relu'):
# w = tf.get_variable(...)
# h = tf.nn.conv2d(input, w, ...)
# b = tf.get_variable(...)
# h = tf.nn.bias_add(h, b)
# h = tf.nn.relu(h)
# ```
nb_utils.show_graph(net['graph_def'])
# If you open up the "mixed3a" node above (double click on it), you'll see the first "inception" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.
#
# <a name="a-note-on-1x1-convolutions"></a>
# ## A Note on 1x1 Convolutions
#
# The 1x1 convolutions are setting the `ksize` parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_I$, where $\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\text{K}_H\ x\ \text{K}_W\ x\ \text{C}_I\ x\ \text{C}_O$ filter, where $\text{K}_H$ is 1 and $\text{K}_W$ is also 1, then the filter's size is: $1\ x\ 1\ x\ \text{C}_I$ and this is performed for each output channel $\text{C}_O$. What this is doing is filtering the information only in the channels dimension, not the spatial dimensions. The output of this convolution will be a $\text{N}\ x\ \text{H}\ x\ \text{W}\ x\ \text{C}_O$ output tensor. The only thing that changes in the output is the number of output filters.
#
# The 1x1 convolution operation is essentially reducing the amount of information in the channels dimensions before performing a much more expensive operation, e.g. a 3x3 or 5x5 convolution. Effectively, it is a very clever trick for dimensionality reduction used in many state of the art convolutional networks. Another way to look at it is that it is preserving the spatial information, but at each location, there is a fully connected network taking all the information from every input channel, $\text{C}_I$, and reducing it down to $\text{C}_O$ channels (or could easily also be up, but that is not the typical use case for this). So it's not really a convolution, but we can use the convolution operation to perform it at every location in our image.
#
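# The per-pixel "fully connected network across channels" view of a 1x1
# convolution can be sketched in plain numpy (toy shapes; the squeezed
# $1\ x\ 1\ x\ \text{C}_I\ x\ \text{C}_O$ kernel is just a $\text{C}_I\ x\ \text{C}_O$ matrix):

```python
import numpy as np

N, H, W, C_I, C_O = 2, 4, 4, 64, 16
x = np.random.randn(N, H, W, C_I)   # input feature maps
w = np.random.randn(C_I, C_O)       # 1x1 kernel, squeezed to a matrix
y = x @ w                           # the same matmul applied at every (n, h, w)
print(y.shape)  # (2, 4, 4, 16)
```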
# If you are interested in reading more about this architecture, I highly encourage you to read [Network in Network](https://arxiv.org/pdf/1312.4400v3.pdf), <NAME>'s work on the [Inception network](http://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf), Highway Networks, Residual Networks, and Ladder Networks.
#
# In this course, we'll stick to focusing on the applications of these, while trying to delve as much into the code as possible.
#
# <a name="network-labels"></a>
# ## Network Labels
#
# Let's now look at the labels:
net['labels']
label_i = 851
print(net['labels'][label_i])
# <a name="using-context-managers"></a>
# ## Using Context Managers
#
# Up until now, we've mostly used a single `tf.Session` within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.
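# Here's a toy, TensorFlow-free sketch of what a context manager guarantees:
# the cleanup code runs as soon as the indented block exits, even if an
# exception is raised inside it. This is exactly why `with tf.Session(...)`
# can't leak the session:

```python
from contextlib import contextmanager

events = []

@contextmanager
def toy_session():
    events.append("open")
    try:
        yield "sess"
    finally:
        # Guaranteed cleanup, just like tf.Session closing when the block ends
        events.append("close")

with toy_session() as s:
    events.append("use " + s)

assert events == ["open", "use sess", "close"]
```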
#
# Let's see how this works w/ VGG:
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Load the VGG network. Scroll back up to where we loaded the inception
# network if you are unsure. It is inside the "vgg16" module...
net = ...
assert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca'))
# +
# Let's explicitly use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
# -
# Now we can get all the operations that belong to the graph `g`:
names = [op.name for op in g.get_operations()]
print(names)
# <a name="part-2---visualizing-gradients"></a>
# # Part 2 - Visualizing Gradients
#
# Now that we know how to load a network and extract layers from it, let's grab only the pooling layers:
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# First find all the pooling layers in the network. You can
# use list comprehension to iterate over all the "names" we just
# created, finding whichever ones have the name "pool" in them.
# Then be sure to append a ":0" to the names
features = ...
# Let's print them
print(features)
# This is what we want to have at the end. You could just copy this list
# if you are stuck!
assert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0'])
# -
# Let's also grab the input layer:
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Use the function 'get_tensor_by_name' and the 'names' array to help you
# get the first tensor in the network. Remember you have to add ":0" to the
# name to get the output of an operation which is the tensor.
x = ...
assert(x.name == 'net/images:0')
# -
# We'll now try to find the gradient activation that maximizes a layer with respect to the input layer `x`.
def plot_gradient(img, x, feature, g, device='/cpu:0'):
"""Let's visualize the network's gradient activation
when backpropagated to the original input image. This
is effectively telling us which pixels contribute to the
    predicted layer, class, or given neuron within the layer"""
# We'll be explicit about the graph and the device
# by using a context manager:
with tf.Session(graph=g) as sess, g.device(device):
saliency = tf.gradients(tf.reduce_mean(feature), x)
this_res = sess.run(saliency[0], feed_dict={x: img})
grad = this_res[0] / np.max(np.abs(this_res))
return grad
# Let's try this w/ an image now. We're going to use the `plot_gradient` function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing its values using the `utils.normalize` function.
# +
og = plt.imread('clinton.png')[..., :3]
img = net['preprocess'](og)[np.newaxis]
fig, axs = plt.subplots(1, len(features), figsize=(20, 10))
for i in range(len(features)):
axs[i].set_title(features[i])
grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g)
axs[i].imshow(utils.normalize(grad))
# -
# <a name="part-3---basic-deep-dream"></a>
# # Part 3 - Basic Deep Dream
#
# In the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds them to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.
#
# Have a look here for inspiration:
#
# https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
#
#
# https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=<KEY>
#
# https://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html
#
# Let's stick the necessary bits in a function and try exploring how deep dream amplifies the representations of the chosen layers:
def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10):
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10))
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
            this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Or we could use the `utils.normalize function:
# this_res = utils.normalize(this_res)
# Experiment with all of the above options. They will drastically
            # affect the resulting dream, and really depend on the network
# you use, and the way the network handles normalization of the
# input image, and the step size you choose! Lots to explore!
# Then add the gradient back to the input image
            # Think about what this gradient represents:
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
axs[it_i // plot_step].imshow(m)
# +
# We'll run it for 3 iterations
n_iterations = 3
# Think of this as our learning rate. This is how much of
# the gradient we'll add back to the input image
step = 1.0
# Every iteration, we'll plot the current deep dream
plot_step = 1
# -
# Let's now try running Deep Dream for every feature, i.e. each of our 5 pooling layers. We'll need to get the layer corresponding to our feature, then find the gradient of this layer's mean activation with respect to our input, `x`, and pass these to our `dream` function. This can take a while (about 10 minutes using the CPU on my Macbook Pro).
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = g.get_tensor_by_name(features[feature_i])
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = tf.gradients(tf.reduce_mean(layer), x)
# Dream w/ our image
dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step)
# Instead of using an image, we can use an image of noise and see how it "hallucinates" the representations that the layer most responds to:
noise = net['preprocess'](
np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis]
# We'll do the same thing as before, now w/ our noise image:
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
for feature_i in range(len(features)):
with tf.Session(graph=g) as sess, g.device(device):
# Get a feature layer
layer = ...
# Find the gradient of this layer's mean activation
# with respect to the input image
gradient = ...
# Dream w/ the noise image. Complete this!
dream(...)
# <a name="part-4---deep-dream-extensions"></a>
# # Part 4 - Deep Dream Extensions
#
# As we saw in the lecture, we can also use the final softmax layer of a network during deep dream. This allows us to be explicit about the object we want hallucinated in an image.
#
# <a name="using-the-softmax-layer"></a>
# ## Using the Softmax Layer
#
# Let's get another image to play with, preprocess it, and then make it 4-dimensional.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Load your own image here
og = ...
plt.imshow(og)
# Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension:
img = ...
assert(img.ndim == 4)
# +
# Let's get the softmax layer
print(names[-2])
layer = g.get_tensor_by_name(names[-2] + ":0")
# And find its shape
with tf.Session(graph=g) as sess, g.device(device):
layer_shape = tf.shape(layer).eval(feed_dict={x:img})
# We can find out how many neurons it has by feeding it an image and
# calculating the shape. The number of output channels is the last dimension.
n_els = layer_shape[-1]
# -
# Let's pick a label. First let's print out every label and then find one we like:
print(net['labels'])
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Pick a neuron. Or pick a random one. This should be 0-n_els
neuron_i = ...
print(net['labels'][neuron_i])
assert(neuron_i >= 0 and neuron_i < n_els)
# +
# And we'll create an activation of this layer which is very close to 0
layer_vec = np.ones(layer_shape) / 100.0
# Except for the randomly chosen neuron which will be very close to 1
layer_vec[..., neuron_i] = 0.99
# -
# Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Explore different parameters for this section.
n_iterations = 51
plot_step = 5
# If you use a different network, you will definitely need to experiment
# with the step size, as each network normalizes the input image differently.
step = 0.2
# -
# Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# Let's normalize it by the maximum activation
        this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
        # Think about what this gradient represents:
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Save the gif
gif.build_gif(imgs, saveto='softmax.gif')
ipyd.Image(url='softmax.gif?i={}'.format(
np.random.rand()), height=300, width=300)
# <a name="fractal"></a>
# ## Fractal
#
# During the lecture we also saw a simple trick for creating an infinite fractal: crop the image and then resize it. This can produce some lovely aesthetics and really show some strong object hallucinations if left long enough and with the right parameters for step size/normalization/regularization. Feel free to experiment with the code below, adding your own regularizations as shown in the lecture to produce different results!
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
n_iterations = 101
plot_step = 10
step = 0.1
crop = 1
imgs = []
n_imgs, height, width, *ch = img.shape
with tf.Session(graph=g) as sess, g.device(device):
# Explore changing the gradient here from max to mean
# or even try using different concepts we learned about
# when creating style net, such as using a total variational
# loss on `x`.
gradient = tf.gradients(tf.reduce_max(layer), x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = img.copy()
with tf.Session(graph=g) as sess, g.device(device):
for it_i in range(n_iterations):
# This will calculate the gradient of the layer
# we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={
x: img_copy, layer: layer_vec})[0]
# This is just one way we could normalize the
# gradient. It helps to look at the range of your image's
# values, e.g. if it is 0 - 1, or -115 to +115,
# and then consider the best way to normalize the gradient.
# For some networks, it might not even be necessary
# to perform this normalization, especially if you
# leave the dream to run for enough iterations.
# this_res = this_res / (np.std(this_res) + 1e-10)
this_res = this_res / (np.max(np.abs(this_res)) + 1e-10)
# Then add the gradient back to the input image
        # Think about what this gradient represents:
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Optionally, we could apply any number of regularization
# techniques... Try exploring different ways of regularizing
# gradient. ascent process. If you are adventurous, you can
# also explore changing the gradient above using a
# total variational loss, as we used in the style net
# implementation during the lecture. I leave that to you
# as an exercise!
# Crop a 1 pixel border from height and width
img_copy = img_copy[:, crop:-crop, crop:-crop, :]
# Resize (Note: in the lecture, we used scipy's resize which
# could not resize images outside of 0-1 range, and so we had
# to store the image ranges. This is a much simpler resize
# method that allows us to `preserve_range`.)
img_copy = resize(img_copy[0], (height, width), order=3,
clip=False, preserve_range=True
)[np.newaxis].astype(np.float32)
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
# Create a GIF
gif.build_gif(imgs, saveto='fractal.gif')
# -
ipyd.Image(url='fractal.gif?i=2', height=300, width=300)
# <a name="guided-hallucinations"></a>
# ## Guided Hallucinations
#
# Instead of following the gradient of an arbitrary mean or max of a particular layer's activation, or of a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to do this is to take one image, the guide, and find its features at a particular layer or layers. Then we take our synthesis image and find the gradient which makes its own layers' activations look like those of the guide image.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Replace these with your own images!
guide_og = plt.imread(...)[..., :3]
dream_og = plt.imread(...)[..., :3]
assert(guide_og.ndim == 3 and guide_og.shape[-1] == 3)
assert(dream_og.ndim == 3 and dream_og.shape[-1] == 3)
# -
# Preprocess both images:
# +
guide_img = net['preprocess'](guide_og)[np.newaxis]
dream_img = net['preprocess'](dream_og)[np.newaxis]
fig, axs = plt.subplots(1, 2, figsize=(7, 4))
axs[0].imshow(guide_og)
axs[1].imshow(dream_og)
# -
# Like w/ Style Net, we are going to measure how similar the features in the guide image are to those of the dream image. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting dream!
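# To make the options concrete, here's a tiny NumPy sketch (random vectors
# standing in for the flattened feature activations) of the three similarity
# measures you could plug in here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the flattened dream and guide features:
f_dream = rng.standard_normal((16, 1))
f_guide = rng.standard_normal((16, 1))

dot = float(f_guide.T @ f_dream)               # maximize: align dream features w/ guide
l2 = float(np.sum((f_guide - f_dream) ** 2))   # minimize: match features exactly
l1 = float(np.sum(np.abs(f_guide - f_dream)))  # minimize: more tolerant of outliers

assert l2 >= 0.0 and l1 >= 0.0
```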
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
x = g.get_tensor_by_name(names[0] + ":0")
# Experiment with the weighting
feature_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
feature_loss = tf.Variable(0.0)
# Explore different layers/subsets of layers. This is just an example.
for feature_i in features[3:5]:
# Get the activation of the feature
layer = g.get_tensor_by_name(feature_i)
# Do the same for our guide image
guide_layer = sess.run(layer, feed_dict={x: guide_img})
# Now we need to measure how similar they are!
# We'll use the dot product, which requires us to first reshape both
# features to a 2D vector. But you should experiment with other ways
# of measuring similarity such as l1 or l2 loss.
# Reshape each layer to 2D vector
layer = tf.reshape(layer, [-1, 1])
guide_layer = guide_layer.reshape(-1, 1)
# Now calculate their dot product
correlation = tf.matmul(guide_layer.T, layer)
# And weight the loss by a factor so we can control its influence
feature_loss += feature_loss_weight * correlation
# -
# We'll now use another measure that we saw when developing Style Net during the lecture. This measures the difference between neighboring pixels. When we optimize a gradient that makes the mean of these differences small, we are saying we want neighboring pixels to be similar, i.e. the image to be smooth. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
n_img, height, width, ch = dream_img.shape
# We'll weight the overall contribution of the total variational loss
# Experiment with this weighting
tv_loss_weight = 1.0
with tf.Session(graph=g) as sess, g.device(device):
# Penalize variations in neighboring pixels, enforcing smoothness
dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
# We will calculate their difference raised to a power to push smaller
# differences closer to 0 and larger differences higher.
    # Experiment w/ the power you raise this to, to see how it affects the result
tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))
# -
# Now we train just like before, except we'll need to combine our two loss terms, `feature_loss` and `tv_loss` by simply adding them! The one thing we have to keep in mind is that we want to minimize the `tv_loss` while maximizing the `feature_loss`. That means we'll need to use the negative `tv_loss` and the positive `feature_loss`. As an experiment, try just optimizing the `tv_loss` and removing the `feature_loss` from the `tf.gradients` call. What happens?
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Experiment with the step size!
step = 0.1
imgs = []
with tf.Session(graph=g) as sess, g.device(device):
# Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing!
gradient = tf.gradients(-tv_loss + feature_loss, x)
# Copy the input image as we'll add the gradient to it in a loop
img_copy = dream_img.copy()
with tf.Session(graph=g) as sess, g.device(device):
sess.run(tf.global_variables_initializer())
for it_i in range(n_iterations):
# This will calculate the gradient of the layer we chose with respect to the input image.
this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]
# Let's normalize it by the maximum activation
        this_res /= (np.max(np.abs(this_res)) + 1e-8)
# Or alternatively, we can normalize by standard deviation
# this_res /= (np.std(this_res) + 1e-8)
# Then add the gradient back to the input image
        # Think about what this gradient represents:
# It says what direction we should move our input
# in order to meet our objective stored in "gradient"
img_copy += this_res * step
# Plot the image
if (it_i + 1) % plot_step == 0:
m = net['deprocess'](img_copy[0])
plt.figure(figsize=(5, 5))
plt.grid('off')
plt.imshow(m)
plt.show()
imgs.append(m)
gif.build_gif(imgs, saveto='guided.gif')
# -
ipyd.Image(url='guided.gif?i=0', height=300, width=300)
# <a name="further-explorations"></a>
# ## Further Explorations
#
# In the `libs` module, I've included a `deepdream` module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.
#
# <a name="part-5---style-net"></a>
# # Part 5 - Style Net
#
# We'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the [Lecture Transcript](lecture-4.ipynb) if you need to. I want you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a `stylenet` implementation under the `libs` module that you can use instead.
#
# Have a look here for inspiration:
#
# https://mtyka.github.io/code/2015/10/02/experiments-with-style-transfer.html
#
# http://kylemcdonald.net/stylestudies/
#
# <a name="network"></a>
# ## Network
#
# Let's reset the graph and load up a network. I'll include code here for loading up any of our pretrained networks so you can explore each of them!
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
sess.close()
tf.reset_default_graph()
# Stick w/ VGG for now, and then after you see how
# the next few sections work w/ this network, come back
# and explore the other networks.
net = vgg16.get_vgg_model()
# net = vgg16.get_vgg_face_model()
# net = inception.get_inception_model(version='v5')
# net = inception.get_inception_model(version='v3')
# net = i2v.get_i2v_model()
# net = i2v.get_i2v_tag_model()
# +
# Let's explicitly use the CPU, since we don't gain anything using the GPU
# when doing Deep Dream (it's only a single image, benefits come w/ many images).
device = '/cpu:0'
# We'll now explicitly create a graph
g = tf.Graph()
# -
# Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.
# And here is a context manager. We use the python "with" notation to create a context
# and create a session that only exists within this indent, as soon as we leave it,
# the session is automatically closed! We also tell the session which graph to use.
# We can pass a second context after the comma,
# which we'll use to be explicit about using the CPU instead of a GPU.
with tf.Session(graph=g) as sess, g.device(device):
# Now load the graph_def, which defines operations and their values into `g`
tf.import_graph_def(net['graph_def'], name='net')
# Let's then grab the names of every operation in our network:
names = [op.name for op in g.get_operations()]
# Now we need an image for our content image and another one for our style image.
# +
content_og = plt.imread('arles.png')[..., :3]
style_og = plt.imread('clinton.png')[..., :3]
fig, axs = plt.subplots(1, 2)
axs[0].imshow(content_og)
axs[0].set_title('Content Image')
axs[0].grid('off')
axs[1].imshow(style_og)
axs[1].set_title('Style Image')
axs[1].grid('off')
# We'll save these with a specific name to include in your submission
plt.imsave(arr=content_og, fname='content.png')
plt.imsave(arr=style_og, fname='style.png')
# -
content_img = net['preprocess'](content_og)[np.newaxis]
style_img = net['preprocess'](style_og)[np.newaxis]
# Let's see what the network classifies these images as just for fun:
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Grab the tensor defining the input to the network
x = ...
# And grab the tensor defining the softmax layer of the network
softmax = ...
for img in [content_img, style_img]:
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Remember from the lecture that we have to set the dropout
# "keep probability" to 1.0.
res = softmax.eval(feed_dict={x: img,
'net/dropout_1/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout_1/random_uniform:0'
).get_shape().as_list()),
'net/dropout/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout/random_uniform:0'
).get_shape().as_list())})[0]
print([(res[idx], net['labels'][idx])
for idx in res.argsort()[-5:][::-1]])
# -
# <a name="content-features"></a>
# ## Content Features
#
# We're going to need to find the layer or layers we want to use to help us define our "content loss". Recall from the lecture when we used VGG, we used the 4th convolutional layer.
print(names)
# Pick a layer to use for the content features. If you aren't using VGG remember to get rid of the dropout stuff!
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Experiment w/ different layers here. You'll need to change this if you
# use another network!
content_layer = 'net/conv3_2/conv3_2:0'
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_features = g.get_tensor_by_name(content_layer).eval(
session=sess,
feed_dict={x: content_img,
'net/dropout_1/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout_1/random_uniform:0'
).get_shape().as_list()),
'net/dropout/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout/random_uniform:0'
).get_shape().as_list())})
# -
# <a name="style-features"></a>
# ## Style Features
#
# Let's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
# Experiment with different layers and layer subsets. You'll need to change these
# if you use a different network!
style_layers = ['net/conv1_1/conv1_1:0',
'net/conv2_1/conv2_1:0',
'net/conv3_1/conv3_1:0',
'net/conv4_1/conv4_1:0',
'net/conv5_1/conv5_1:0']
style_activations = []
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
for style_i in style_layers:
style_activation_i = g.get_tensor_by_name(style_i).eval(
feed_dict={x: style_img,
'net/dropout_1/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout_1/random_uniform:0'
).get_shape().as_list()),
'net/dropout/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout/random_uniform:0'
).get_shape().as_list())})
style_activations.append(style_activation_i)
# -
# Now we find the gram matrix which we'll use to optimize our features.
style_features = []
for style_activation_i in style_activations:
s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]])
gram_matrix = np.matmul(s_i.T, s_i) / s_i.size
style_features.append(gram_matrix.astype(np.float32))
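# A quick check of why the gram matrix captures style rather than content: it
# is invariant to shuffling the spatial positions of the activations, so it
# keeps the statistics of which filters co-occur while discarding where they
# occur (made-up activation shape below):

```python
import numpy as np

rng = np.random.default_rng(2)
act = rng.standard_normal((1, 4, 4, 8))  # hypothetical N x H x W x C activation

def gram(a):
    # Same computation as above: flatten spatial dims, correlate channels
    s = a.reshape(-1, a.shape[-1])
    return s.T @ s / s.size

# Shuffle the spatial positions; the gram matrix is unchanged:
flat = act.reshape(-1, act.shape[-1])
shuffled = flat[rng.permutation(flat.shape[0])].reshape(act.shape)
assert np.allclose(gram(act), gram(shuffled))
```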
# <a name="remapping-the-input"></a>
# ## Remapping the Input
#
# We're almost done building our network. We just have to change the input to the network to become "trainable". Instead of a placeholder, we'll have a `tf.Variable`, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# +
tf.reset_default_graph()
g = tf.Graph()
# Get the network again
net = vgg16.get_vgg_model()
# Load up a session which we'll use to import the graph into.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# We can set the `net_input` to our content image
# or perhaps another image
# or an image of noise
# net_input = tf.Variable(content_img / 255.0)
net_input = tf.get_variable(
name='input',
shape=content_img.shape,
dtype=tf.float32,
initializer=tf.random_normal_initializer(
mean=np.mean(content_img), stddev=np.std(content_img)))
# Now we load the network again, but this time replacing our placeholder
# with the trainable tf.Variable
tf.import_graph_def(
net['graph_def'],
name='net',
input_map={'images:0': net_input})
# -
# <a name="content-loss"></a>
# ## Content Loss
#
# In the lecture we saw that we'll simply find the l2 loss between our content layer features.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) -
content_features) /
content_features.size)
# <a name="style-loss"></a>
# ## Style Loss
#
# Instead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix of each layer and take the loss between those. Intuitively, this finds what is common across all convolution filters, and tries to enforce that commonality between the synthesis image's and the style image's gram matrices.
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
style_loss = np.float32(0.0)
for style_layer_i, style_gram_i in zip(style_layers, style_features):
layer_i = g.get_tensor_by_name(style_layer_i)
layer_shape = layer_i.get_shape().as_list()
layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3]
layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]])
gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size
style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size)))
# <a name="total-variation-loss"></a>
# ## Total Variation Loss
#
# And just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.
# +
def total_variation_loss(x):
    h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[2]
dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :])
dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :])
return tf.reduce_sum(tf.pow(dx + dy, 1.25))
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
tv_loss = total_variation_loss(net_input)
# -
# <a name="training"></a>
# ## Training
#
# We're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.
#
# <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
# Experiment w/ the weighting of these! They produce WILDLY different
# results.
loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss
optimizer = tf.train.AdamOptimizer(0.05).minimize(loss)
# And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network than VGG, then you will not need to feed in the dropout parameters like I've done here.
# +
imgs = []
n_iterations = 100
with tf.Session(graph=g) as sess, g.device('/cpu:0'):
sess.run(tf.global_variables_initializer())
# map input to noise
og_img = net_input.eval()
for it_i in range(n_iterations):
_, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={
'net/dropout_1/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout_1/random_uniform:0'
).get_shape().as_list()),
'net/dropout/random_uniform:0': np.ones(
g.get_tensor_by_name(
'net/dropout/random_uniform:0'
).get_shape().as_list())
})
print("%d: %f, (%f - %f)" %
(it_i, this_loss, np.min(synth), np.max(synth)))
if it_i % 5 == 0:
m = vgg16.deprocess(synth[0])
imgs.append(m)
plt.imshow(m)
plt.show()
gif.build_gif(imgs, saveto='stylenet.gif')
# -
ipyd.Image(url='stylenet.gif?i=0', height=300, width=300)
# <a name="assignment-submission"></a>
# # Assignment Submission
#
# After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
#
# <pre>
# session-4/
# session-4.ipynb
# softmax.gif
# fractal.gif
# guided.gif
# content.png
# style.png
# stylenet.gif
# </pre>
#
# You'll then submit this zip file for your fourth assignment on Kadenze for "Assignment 4: Deep Dream and Style Net"! Remember to complete the rest of the assignment, gallery commenting on your peers work, to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.
#
# To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [#CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
#
# Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!
utils.build_submission('session-4.zip',
('softmax.gif',
'fractal.gif',
'guided.gif',
'content.png',
'style.png',
'stylenet.gif',
'session-4.ipynb'))
| session-4/session-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="sR5znxX_zhO_" outputId="8e6a6bc5-4a83-4431-b102-7fe89d60caf6"
# !pip install facenet-pytorch
# + id="ReCoyRMXzsa7"
import torch
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
# + colab={"base_uri": "https://localhost:8080/"} id="NlNIKmq1gffd" outputId="5c81e5e8-6a8b-4154-83e4-c995829cf6b2"
import torch
import torch.nn as nn
from torch import optim
checkpoint = torch.load('/content/drive/MyDrive/InceptionResNetV1_ArcFace.pt')
facenet = checkpoint.model
facenet.to(device)
facenet.eval()
facenet
# + colab={"base_uri": "https://localhost:8080/"} id="9IzAs2YECgy9" outputId="a19c363f-4f6a-4cbf-b6c6-2d721142cdcf"
from facenet_pytorch import MTCNN
from PIL import Image
mtcnn = MTCNN(image_size=128,margin=0)
# Get a cropped and prewhitened face image; to detect all faces instead, use e.g. mtcnn = MTCNN(keep_all=True, device='cuda:0')
# + [markdown] id="I_XHmnXGeH8e"
#
# + id="pj2R_k0-EHcN"
img_cropped = mtcnn(img, save_path='single_image.jpg')  # `img` is assumed to be a PIL image opened in an earlier cell
# + id="Bcg0SlXxDaMW"
img_embedding1 = facenet(img_cropped.unsqueeze(0))
# + id="j8f7Bi4GICIE"
img2 = Image.open('/content/drive/MyDrive/unnamed.png')
img_image2 = mtcnn(img2,save_path='single_image.jpg')
img_embedding2 = facenet(img_image2.unsqueeze(0))
# + id="vimL02htIUxt"
import numpy
dist = numpy.linalg.norm(img_embedding1.detach().numpy()-img_embedding2.detach().numpy())
# + colab={"base_uri": "https://localhost:8080/"} id="AjYQW96LLueK" outputId="038c2899-5c9f-4ff4-acc2-7cd6ba8300cf"
img_embedding1.detach().numpy().shape
# + colab={"base_uri": "https://localhost:8080/"} id="tJgbud6mLo37" outputId="ef8074b3-26f2-4d6b-d3b2-610ba856beb2"
print(dist)
# + colab={"base_uri": "https://localhost:8080/"} id="CsUzpJOFyfW6" outputId="5b821e11-1634-4f1d-f455-c5b1e3cc407b"
from google.colab import drive
drive.mount('/content/drive')
# + id="tRHaD2nMbnBL"
# + id="Du2d-I1PqtVk"
from facenet_pytorch import MTCNN
from PIL import Image
import numpy
mtcnn = MTCNN(image_size=128,margin=0)
img_image2 = mtcnn(img2,save_path='single_image.jpg')
img_embedding2 = facenet(img_image2.unsqueeze(0))
# + id="2JazqEzl9s64"
def img_to_embedding(image_path, facenet):
image = Image.open(image_path)
image_cropped = mtcnn(image,save_path='single_image.jpg')
img_embedding = facenet(image_cropped.unsqueeze(0))
return img_embedding
# + id="O3TQPOG6_ojB"
def l2distance(input_embedding, output_embedding):
dist = numpy.linalg.norm(input_embedding.detach().numpy()-output_embedding.detach().numpy())
return dist
# + id="4RolM1ge_7CW"
mtcnn = MTCNN(image_size=128,margin=0)
db = {}
char = '/content/drive/MyDrive/image_processed/ááµá·áá¥á·áá®á«_1.jpg'
# db["ê¹ë²ì€_1"] = img_to_embedding('/content/drive/MyDrive/image_processed/ááµá·áá¥á·áá®á«_1.jpg',facenet)
db["ê¹ë²ì€_2"] = img_to_embedding('/content/drive/MyDrive/image_processed/ê¹ë²ì€_2.jpg',facenet)
db["ê¹ë²ì€_3"] = img_to_embedding('/content/drive/MyDrive/image_processed/ê¹ë²ì€_3.jpg',facenet)
db["ìŽêžžë_1"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_1.jpg',facenet)
db["ìŽêžžë_2"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_2.jpg',facenet)
db["ìŽêžžë_3"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_3.jpg',facenet)
db["ë°ìì°œ_1"] = img_to_embedding('/content/drive/MyDrive/SichangPark_1.jpg',facenet)
db["ë°ìì°œ_2"] = img_to_embedding('/content/drive/MyDrive/SichangPark_2.jpg',facenet)
db["ë°ìì°œ_3"] = img_to_embedding('/content/drive/MyDrive/image_processed/ë°ìì°œ_3.jpg',facenet)
db["ì ìžì°_1"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_1.jpg',facenet)
db["ì ìžì°_2"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_2.jpg',facenet)
# db["ì ìžì°_3"] = img_to_embedding('/content/drive/MyDrive/image_processed/_3.jpg',facenet)
db["ê¹ëë³_1"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_1.jpg',facenet)
db["ì ëªì§_1"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_1.jpg',facenet)
db["ì ëªì§_2"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_2.jpg',facenet)
db["ì ëªì§_3"] = img_to_embedding('/content/drive/MyDrive/image_processed/***_3.jpg',facenet)
# + id="SA7zzpx9Duwp"
image = Image.open('/content/drive/MyDrive/áá¢ážáá¥.JPG')
image_cropped = mtcnn(image,save_path='single_image.jpg')
img_embedding = facenet(image_cropped.unsqueeze(0))
# + colab={"base_uri": "https://localhost:8080/", "height": 172} id="ByoEFA94C3Bu" outputId="659583a8-223d-4ffa-ba58-45de8b70eb1f"
image_cropped.show()
# + colab={"base_uri": "https://localhost:8080/"} id="Tlhj0IUqD2jm" outputId="d856a1b8-0245-4d6d-e780-2b669f0642ab"
l2distance(db["ê¹ë²ì€_2"],db["ê¹ë²ì€_3"])
# + colab={"base_uri": "https://localhost:8080/"} id="9f1mSo9FEVf0" outputId="651383b0-c241-4457-9a69-26b3f200493f"
identity = ""
minimum = 100.0
for key, value in db.items():
if l2distance(img_embedding, db[key]) < minimum:
minimum = l2distance(img_embedding, db[key])
identity = key
print(key)
print(minimum)
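# The lookup loop above can be wrapped in a small reusable function. A sketch using plain
# NumPy vectors in place of the facenet embeddings; the `threshold` used to reject unknown
# faces is a hypothetical parameter, not something taken from this notebook.

```python
import numpy as np

def identify(query_embedding, db, threshold=1.0):
    """Return the db key with the smallest L2 distance to the query,
    or None if even the best match is farther than `threshold`."""
    best_key, best_dist = None, float("inf")
    for key, emb in db.items():
        dist = float(np.linalg.norm(query_embedding - emb))
        if dist < best_dist:
            best_key, best_dist = key, dist
    if best_dist > threshold:
        return None, best_dist  # reject as unknown
    return best_key, best_dist

# Toy embeddings standing in for facenet outputs
toy_db = {"a_1": np.array([0.0, 0.0]), "b_1": np.array([3.0, 4.0])}
print(identify(np.array([0.1, 0.0]), toy_db))
```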
# + id="JCtKFm8kGZM7"
import pickle
| AI(BE)/Resnet_Model_Experiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# This script is used to extract Sentinel image data and other layers for a set of
# soil sample point locations. The DN values for each of the image bands and other
# layers are output as a dictionary of tabular data that can be processed using the
# "StockSOC_ProcessPoints" script. Input files are ESRI Shapefiles with the project
# boundary polygon and the soil sample point locations. A Python pickle file is
# exported to be input into the "StockSOC_ProcessPoints" script.
# This script was written by <NAME> [<EMAIL>]
# This script is free software; you can redistribute it and/or modify it under the
# terms of the Apache License 2.0.
# +
import ee
import geemap
import json
import os
import requests
from datetime import datetime
from geemap import geojson_to_ee, ee_to_geojson
import geopandas as gpd
import pandas as pd
import pickle
import math
ee.Initialize()
# +
### Enter start and end date as numbers for year, month, day ###
startDate = ee.Date.fromYMD(2021, 1, 1)
endDate = ee.Date.fromYMD(2021, 12, 31)
# Enter the seasonal portion for each year in the date range to process
startMonth = 1
endMonth = 12
# Scale (resolution) in meters for the analysis
pixScale = 20
# Cloud masking parameters - for more information about the workflow and variables see:
# https://developers.google.com/earth-engine/tutorials/community/sentinel-2-s2cloudless
cloudFilter = 60
cloudProbabilityThreshold = 50
nirDarkThreshold = 0.15
cloudPProjectedDistance = 1
buffer = 50
# -
### Enter input and output file paths and names ###
boundaryShp = "/home/nedhorning/RegenNetwork/Soils/Ruuts/LaEmma/LaEmmaBoundary.shp"
inPoints = "/home/nedhorning/RegenNetwork/Soils/Ruuts/LaEmma/LaEmmaSamplePoints2021.shp"
outPickle = "/home/nedhorning/RegenNetwork/Soils/Ruuts/LaEmma/GEE_Output/extractedPoints.pickle"
# Function to get image data and apply cloud/shadow filter
def get_s2_sr_cld_col(aoi, start_date, end_date):
# Import and filter S2 SR.
s2_sr_col = (ee.ImageCollection('COPERNICUS/S2_SR')
.filterBounds(aoi)
#.filterMetadata('MGRS_TILE', 'equals', '14SKJ') # Use this to specify a specific tile
.filterDate(start_date, end_date)
.filter(ee.Filter.calendarRange(startMonth, endMonth,'month'))
.filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', cloudFilter)))
# Import and filter s2cloudless.
s2_cloudless_col = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
.filterBounds(aoi)
.filterDate(start_date, end_date))
# Join the filtered s2cloudless collection to the SR collection by the 'system:index' property.
return ee.ImageCollection(ee.Join.saveFirst('s2cloudless').apply(**{
'primary': s2_sr_col,
'secondary': s2_cloudless_col,
'condition': ee.Filter.equals(**{
'leftField': 'system:index',
'rightField': 'system:index'
})
}))
# +
# Cloud cover function
def add_cloud_bands(img):
# Get s2cloudless image, subset the probability band.
cld_prb = ee.Image(img.get('s2cloudless')).select('probability')
# Condition s2cloudless by the probability threshold value.
is_cloud = cld_prb.gt(cloudProbabilityThreshold).rename('clouds')
# Add the cloud probability layer and cloud mask as image bands.
return img.addBands(ee.Image([cld_prb, is_cloud]))
# -
def add_shadow_bands(img):
# Identify water pixels from the SCL band.
not_water = img.select('SCL').neq(6)
# Identify dark NIR pixels that are not water (potential cloud shadow pixels).
SR_BAND_SCALE = 1e4
dark_pixels = img.select('B8').lt(nirDarkThreshold*SR_BAND_SCALE).multiply(not_water).rename('dark_pixels')
# Determine the direction to project cloud shadow from clouds (assumes UTM projection).
shadow_azimuth = ee.Number(90).subtract(ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE')));
# Project shadows from clouds for the distance specified by the CLD_PRJ_DIST input.
cld_proj = (img.select('clouds').directionalDistanceTransform(shadow_azimuth, cloudPProjectedDistance *10)
.reproject(**{'crs': img.select(0).projection(), 'scale': 100})
.select('distance')
.mask()
.rename('cloud_transform'))
# Identify the intersection of dark pixels with cloud shadow projection.
shadows = cld_proj.multiply(dark_pixels).rename('shadows')
# Add dark pixels, cloud projection, and identified shadows as image bands.
return img.addBands(ee.Image([dark_pixels, cld_proj, shadows]))
def add_cld_shdw_mask(img):
# Add cloud component bands.
img_cloud = add_cloud_bands(img)
# Add cloud shadow component bands.
img_cloud_shadow = add_shadow_bands(img_cloud)
# Combine cloud and shadow mask, set cloud and shadow as value 1, else 0.
is_cld_shdw = img_cloud_shadow.select('clouds').add(img_cloud_shadow.select('shadows')).gt(0)
# Remove small cloud-shadow patches and dilate remaining pixels by BUFFER input.
# 20 m scale is for speed, and assumes clouds don't require 10 m precision.
is_cld_shdw = (is_cld_shdw.focal_min(2).focal_max(buffer*2/20)
.reproject(**{'crs': img.select([0]).projection(), 'scale': 20})
.rename('cloudmask'))
# Add the final cloud-shadow mask to the image.
return img_cloud_shadow.addBands(is_cld_shdw)
# return img.addBands(is_cld_shdw)
def apply_cld_shdw_mask(img):
# Subset the cloudmask band and invert it so clouds/shadow are 0, else 1.
not_cld_shdw = img.select('cloudmask').Not()
# Subset reflectance bands and update their masks, return the result.
return img.select('B.*').updateMask(not_cld_shdw)
# Function to make the server-side feature collection accessible to the client
def getValues(fc):
features = fc.getInfo()['features']
dictarr = []
for f in features:
attr = f['properties']
dictarr.append(attr)
return dictarr
# Convert input boundary Shapefile to a GEE boundary feature to constrain spatial extent
boundary_ee = geemap.shp_to_ee(boundaryShp)
# Get image data using temporal and spatial constraints
s2_sr_cld_col = get_s2_sr_cld_col(boundary_ee, startDate, endDate)
# Apply cloud/shadow mask and add NDVI layer
sentinelCollection = (s2_sr_cld_col.map(add_cld_shdw_mask)
.map(apply_cld_shdw_mask))
# Create a list of dates for all images in the collection
datesObject = sentinelCollection.aggregate_array("system:time_start")
dateList = datesObject.getInfo()
dateList=[datetime.fromtimestamp(x/1000).strftime('%Y_%m_%d') for x in dateList]
# Image display parameters
sentinel_vis = {
'min': 0,
'max': 2500,
'gamma': [1.1],
'bands': ['B4', 'B3', 'B2']}
# Convert input sample points Shapefile to a GEE feature
points_ee = geemap.shp_to_ee(inPoints)
# Dictionary to store all points
allPoints = {}
# +
# Calculate Topographic wetness index and extract points
upslopeArea = (ee.Image("MERIT/Hydro/v1_0_1")
.select('upa'))
elv = (ee.Image("MERIT/Hydro/v1_0_1")
.select('elv'))
slope = ee.Terrain.slope(elv)
upslopeArea = upslopeArea.multiply(1000000).rename('UpslopeArea')
slopeRad = slope.divide(180).multiply(math.pi)
TWI = ee.Image.log(upslopeArea.divide(slopeRad.tan())).rename('TWI')
extractedPointsTWI = geemap.extract_values_to_points(points_ee, TWI, scale=pixScale)
dictarrTWI = getValues(extractedPointsTWI)
# -
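# The cell above follows the standard topographic wetness index formula, TWI = ln(a / tan(beta)),
# where a is the upslope contributing area and beta is the local slope. A scalar sketch of the
# same arithmetic with made-up values (the Earth Engine version applies it per pixel):

```python
import math

def twi(upslope_area_m2, slope_deg):
    """Topographic wetness index: ln(a / tan(beta))."""
    slope_rad = math.radians(slope_deg)
    return math.log(upslope_area_m2 / math.tan(slope_rad))

# e.g. a cell with 5000 m^2 of upslope area on a 5-degree slope
print(twi(5000.0, 5.0))
```

# Flatter terrain with the same contributing area yields a higher (wetter) index.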
# Read in the continuous heat-insolation load index (CHILI) layer and extract point values
chili = (ee.Image("CSP/ERGo/1_0/Global/SRTM_CHILI"))
extractedPointsCHILI = geemap.extract_values_to_points(points_ee, chili, scale=pixScale)
dictarrCHILI = getValues(extractedPointsCHILI)
# Create a list of the images for processing
images = sentinelCollection.toList(sentinelCollection.size())
# +
Map=geemap.Map()
Map.centerObject(boundary_ee, 13)
for index in range(0, sentinelCollection.size().getInfo()-1):
print("Processing " + dateList[index] + ": " + str(sentinelCollection.size().getInfo()-1 - index - 1) + " images to go ", end = "\r")
image = ee.Image(images.get(index))
extractedPoints = geemap.extract_values_to_points(points_ee, image, scale=pixScale)
dictarr = getValues(extractedPoints)
points = gpd.GeoDataFrame(dictarr)
# Add the following variables to the collection of point data
points['stock'] = points['BD'] * points['C%']
points['twi'] = gpd.GeoDataFrame(dictarrTWI)['first']
points['chili'] = gpd.GeoDataFrame(dictarrCHILI)['first']
# Use band 3 to select only points not covered by clouds
if ('B3' in points):
allPoints.update({dateList[index] : points})
# Add the image layer for display
Map.addLayer(image, sentinel_vis, dateList[index])
# Add boundary to display images
Map.addLayer(boundary_ee, {}, "Boundary EE")
# Display the map.
Map
# -
# Output the dictionary with all points - this will be input to the "StockSOC_ProcessPoints" Notebook
with open(outPickle, 'wb') as handle:
pickle.dump(allPoints, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Print a list of all the image dates
list(allPoints.keys())
# Print all of the points starting with the earliest date
allPoints
| socMapping/StockSOC_ExtractPoints.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Factoring Polynomials with SymPy
# Here is an example that uses [SymPy](http://sympy.org/en/index.html) to factor polynomials.
from ipywidgets import interact
from IPython.display import display
from sympy import Symbol, Eq, factor, init_printing
init_printing(use_latex='mathjax')
x = Symbol('x')
def factorit(n):
display(Eq(x**n-1, factor(x**n-1)))
# Notice how the output of the `factorit` function is properly formatted LaTeX.
factorit(12)
interact(factorit, n=(2,40));
| examples/Interactive Widgets/Factoring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:metis] *
# language: python
# name: conda-env-metis-py
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import pylab as pl
import pandas as pd
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score, roc_curve, accuracy_score, precision_score, recall_score
# +
df = pd.read_csv('track_data.csv', index_col=0)
df.drop_duplicates(subset=['Track IDs'], keep='first', inplace=True)
df.head()
# +
X = df.iloc[:, 5:].values
y = df.iloc[:, 1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=13)
# +
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# -
logregcv = LogisticRegressionCV(Cs=[100000,200000,300000], cv=3)
logregcv.fit(X_train,y_train)
acc_train_cv = logregcv.score(X_train, y_train)
acc_test_cv = logregcv.score(X_test, y_test)
print("train_score=%.3f\ntest_score =%.3f\n" % (acc_train_cv, acc_test_cv))
print(logregcv.classes_)
print(logregcv.C_)
print(logregcv.intercept_)
print(logregcv.coef_)
# +
y_pred = logregcv.predict(X_test)
log_confusion = confusion_matrix(y_test, y_pred)
print(log_confusion)
print(classification_report(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
print(precision_score(y_test, y_pred, pos_label='happy'))
print(recall_score(y_test, y_pred, pos_label='happy'))
# +
plt.figure(dpi=450)
sns.heatmap(log_confusion, cmap=plt.cm.Blues,
xticklabels=['happy', 'sad'],
yticklabels=['happy', 'sad'])
plt.xlabel('Predicted Mood')
plt.ylabel('Actual Mood')
plt.title('Logistic Regression confusion matrix');
plt.savefig('LogReg.png')
# -
y_test = 'sad' == y_test
y_pred_proba = logregcv.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.figure(dpi=450)
plt.plot(fpr,tpr,label="auc="+str(auc))
plt.legend(loc=4)
plt.xlabel('Specificity')
plt.ylabel('Sensitivity')
plt.title('Logistic Regression ROC Curve');
plt.savefig('LogRegROC.png')
plt.show()
| data/LogRegModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from yattag import Doc
doc, tag, text = Doc().tagtext()
with tag('h1'):
text('Hello world!')
print(doc.getvalue())
# -
with tag('icecream', id = '2', flavour = 'pistachio'):
text("This is really delicious.")
print(doc.getvalue())
# +
from yattag import Doc
doc, tag, text = Doc().tagtext()
doc.asis('<!DOCTYPE html>')
with tag('html'):
with tag('body'):
text('Hello world!')
print(doc.getvalue())
# +
from yattag import Doc
doc, tag, text = Doc().tagtext()
with tag('simple-method', ("result-name", "x")):
# doc.attr(("result-name", "x"))
with tag('set'):
pass
print(doc.getvalue())
# +
from yattag import Doc
doc, tag, text = Doc().tagtext()
with tag('simple-method', name='testFieldToResult'):
doc.stag('set', field='resultValue', value='someResultValue')
doc.stag('set', field='result1', value='dynamicResultName')
doc.stag('field-to-result', ('field', 'resultValue'), ('result-name', 'constantResultName'))
doc.stag('field-to-result', ('field', 'resultValue'), ('result-name', '${result1}'))
print(doc.getvalue())
# +
def set_fields(doc, **kwargs):
for key, value in kwargs.items():
doc.stag('set', field=key, value=value)
def field_to_result(doc, **kwargs):
for key, value in kwargs.items():
doc.stag('field-to-result', ('field', key), ('result-name', value))
def print_doc(doc):
from yattag import indent
result = indent(
doc.getvalue(),
indentation = ' ',
newline = '\r\n',
indent_text = True
)
print(result)
doc, tag, text = Doc().tagtext()
with tag('simple-method', name='testFieldToResult'):
set_fields(doc, resultValue='someResultValue',
result1='dynamicResultName')
field_to_result(doc, resultValue='constantResultName')
print_doc(doc)
# -
from yattag import Doc
from yattag import indent
class MinilangProc(object):
def __init__(self, oc, name):
self.oc=oc
self.name=name
self.doc, self.tag, self.text = Doc().tagtext()
def set_fields(self, **kwargs):
for key, value in kwargs.items():
self.doc.stag('set', field=key, value=value)
def field_to_result(self, **kwargs):
for key, value in kwargs.items():
self.doc.stag('field-to-result', ('field', key), ('result-name', value))
def desc(self):
# with self.tag('simple-method', name='testFieldToResult'):
result = indent(
self.doc.getvalue(),
indentation = ' ',
newline = '\r\n',
indent_text = True
)
print(result)
ml=MinilangProc(oc, "testFieldToResult")
ml.set_fields(resultValue='someResultValue',
result1='dynamicResultName')
ml.field_to_result(resultValue='constantResultName')
ml.desc()
# +
from yattag import indent
result = indent(
doc.getvalue(),
indentation = ' ',
newline = '\r\n',
indent_text = True
)
print(result)
# +
from py4j.java_gateway import java_import
from sagas.ofbiz.runtime_context import platform
oc = platform.oc
finder = platform.finder
helper = platform.helper
# -
java_import(oc.j, 'com.sagas.generic.*')
director=oc.j.MiniLangDirector(oc.dispatcher)
ctx=director.createServiceMethodContext()
user=ctx.getUserLogin()
print(user)
invoker=director.createSimpleMethod(doc.getvalue())
result=invoker.exec(ctx)
print(result)
print(invoker.getDefaultSuccessCode())
print(ctx.getResult('constantResultName'))
method_xml='''
<simple-method name="testFieldToResult">
<set field="resultValue" value="someResultValue" />
<set field="result1" value="dynamicResultName" />
<field-to-result field="resultValue" result-name="constantResultName" />
<field-to-result field="resultValue" result-name="${result1}" />
</simple-method>
'''
invoker=director.createSimpleMethod(method_xml)
result=invoker.exec(ctx)
print(result)
| notebook/procs-ofbiz-minilang.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Milestone 2 Task 1
# <NAME> (Student no. 28684314)
#
# ---
# This is just a jupyter notebook to show that I understand and can use markdown elements.
# # Big Header
# ## Header
# ### Smaller Header
#
# *This is italicized*
#
# **This is bold**
#
# ---
| analysis/Hayward/milestone2_markdown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
from tqdm import tqdm_notebook
# # Assignment 2
#
# ## Question 1: Genetic Drift without Mutation
#
# Take a population of N=500 individuals half of which consist of type 0 and the remaining half of type 1,
# initially. Assume that the fitness of both types are equal and neither type can mutate to the other.
#
# 1. Write a program to obtain the time-evolution of the frequencies of the two types in the population when individuals making up the population in the next generation are chosen randomly from members of the current generation following Moran process. **Run the simulation for as long as it takes for any one of the two types to get fixed in the population.**
#
# 2. **Obtaining the Fixation Probability**: Repeat the above simulation for Nt=100 trials and find out the fraction of times each of the two types 0 and 1 get fixed ? What do you expect the theoretical value of fixation probability for either sub-type to be ?
#
# **For any one trial, plot the evolution of frequency of type 0 and type 1 with time.**
#
#
# +
from numpy.random import randint
import matplotlib.pyplot as plt
N=500
population =[ 0 if i< N/2 else 1 for i in range(N)]
def neutral_drift_moran_process(population):
history = []
while True:
history.append(sum(population)/N)
if sum(population) in {N,0}:
break
else:
reproduce,replace = randint(0,N,size=2)
population[replace] = population[reproduce]
return history
evol = neutral_drift_moran_process(population)
plot(evol)
len(evol)
# -
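# The fixation-probability part (question 2) can be estimated by repeating the simulation many
# times. Under neutral drift the theoretical fixation probability of a type equals its initial
# frequency, i/N = 1/2 here. A sketch with a smaller population so the trials finish quickly
# (N=20 and 200 trials are arbitrary choices for speed, not the assignment's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def moran_neutral(n, n0):
    """Run one neutral Moran process; return True if type 0 reaches fixation."""
    pop = np.array([0] * n0 + [1] * (n - n0))
    while 0 < pop.sum() < n:
        reproduce, replace = rng.integers(0, n, size=2)
        pop[replace] = pop[reproduce]
    return pop.sum() == 0  # all type 0

N, trials = 20, 200
frac0 = sum(moran_neutral(N, N // 2) for _ in range(trials)) / trials
print(frac0)  # expected to land near the theoretical value 0.5
```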
# ## Moran Process with constant selection but without mutation
# +
from numpy.random import random
fitness_A = 1.01 # 0
fitness_B = 1 # 1
norm_fit_A = fitness_A/(fitness_A+fitness_B)
norm_fit_B = fitness_B/(fitness_A+fitness_B)
N=100
population =[ 'A' if i< 1 else 'B' for i in range(N)]
def fitness(member):
return norm_fit_A if member=='A' else norm_fit_B
def advantageous_mutant_invasion(population):
history = []
while True:
history.append(population.count('A')/N)
if population.count('A') in {0,N}:
break
else:
reproduce,replace = randint(0,N,size=2)
if random() < fitness(population[reproduce]):
population[replace] = population[reproduce]
return history
evol = advantageous_mutant_invasion(population)
plot(evol)
len(evol)
# -
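# For the Moran process with constant selection, the fixation probability of a single mutant
# with relative fitness r in a population of size N has the well-known closed form
# rho = (1 - 1/r) / (1 - r^(-N)). A sketch evaluating it for the parameters above
# (r = 1.01, N = 100), useful as a check against the simulated invasion:

```python
def fixation_probability(r, n):
    """Fixation probability of a single mutant with relative fitness r
    in a Moran process with population size n."""
    if r == 1.0:
        return 1.0 / n  # neutral limit
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))

print(fixation_probability(1.01, 100))  # roughly 0.0157, above the neutral 1/N = 0.01
```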
# ## Random drift with constant selection
# +
from numpy.random import random, randint
fitness_A = 0.99 # 0
fitness_B = 1 # 1
norm_fit_A = fitness_A/(fitness_A+fitness_B)
norm_fit_B = fitness_B/(fitness_A+fitness_B)
N=100
population =[ 'A' if i< N//2 else 'B' for i in range(N)]
def fitness(member):
return norm_fit_A if member=='A' else norm_fit_B
def disadvantageous_mutant_invasion(population):
history = []
while True:
history.append(population.count('A')/N)
if population.count('A') in {0,N}:
break
else:
reproduce,replace = randint(0,N,size=2)
if random() < fitness(population[reproduce]):
population[replace] = population[reproduce]
return history
evol = disadvantageous_mutant_invasion(population)
plot(evol)
len(evol)
| Assignment 2/Assignment 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="tAsiR8a_p17E" colab_type="text"
# # bit.ly/fuseai4
# ## follow this link for today's notebooks
# + id="EpyYOXyflzWG" colab_type="code" colab={}
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
# + id="0Bni5eDsl2qp" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mlxtend.plotting import plot_decision_regions
# + id="Z4lVS48OmHbf" colab_type="code" colab={}
X = np.array([[x/10,y/10] for x in range(0,20) for y in range(0,20)])
y = np.array([1 if (sum((x-np.array([1.5,1.5]))**2) < .5) else 0 for x in X])
# + id="zA7XrXzj5MZF" colab_type="code" outputId="27f93d67-73c2-4b46-83c9-addfa834c33f" executionInfo={"status": "ok", "timestamp": 1549052911263, "user_tz": -345, "elapsed": 3042, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-WHZMn63EOic/AAAAAAAAAAI/AAAAAAAAA2k/CE8LZgMOTMU/s64/photo.jpg", "userId": "03372508101739375813"}} colab={"base_uri": "https://localhost:8080/", "height": 368}
plt.axes().set_facecolor('#aaaaaa')
plt.scatter(*zip(*X),c=y,cmap='binary')
# + id="KcvHpmIfv73H" colab_type="code" outputId="c301d7c0-3d8d-4031-c080-633bc129cb06" executionInfo={"status": "ok", "timestamp": 1549052972026, "user_tz": -345, "elapsed": 1065, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-WHZMn63EOic/AAAAAAAAAAI/AAAAAAAAA2k/CE8LZgMOTMU/s64/photo.jpg", "userId": "03372508101739375813"}} colab={"base_uri": "https://localhost:8080/", "height": 387}
clf = DecisionTreeClassifier()
clf.fit(X,y)
plot_decision_regions(X,y,clf,legend=2)
plt.xlim([0,2])
plt.ylim([0,2])
print(confusion_matrix(y,clf.predict(X)))
# + id="-jgXt_jl3k_u" colab_type="code" outputId="19cefd2f-8b95-4347-f1da-9fecc33d3553" executionInfo={"status": "ok", "timestamp": 1549051620484, "user_tz": -345, "elapsed": 1191, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-WHZMn63EOic/AAAAAAAAAAI/AAAAAAAAA2k/CE8LZgMOTMU/s64/photo.jpg", "userId": "03372508101739375813"}} colab={"base_uri": "https://localhost:8080/", "height": 387}
clf = LogisticRegression(penalty = 'l2', solver = 'liblinear')
clf.fit(X,y)
plot_decision_regions(X,y,clf,legend=2)
plt.xlim([0,2])
plt.ylim([0,2])
print(confusion_matrix(y,clf.predict(X)))
# + id="fnR7FWncz4QK" colab_type="code" outputId="eb2da6ba-6fe4-4acd-b41a-13c7845a234f" executionInfo={"status": "ok", "timestamp": 1549051747342, "user_tz": -345, "elapsed": 1715, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-WHZMn63EOic/AAAAAAAAAAI/AAAAAAAAA2k/CE8LZgMOTMU/s64/photo.jpg", "userId": "03372508101739375813"}} colab={"base_uri": "https://localhost:8080/", "height": 387}
clf = SVC(gamma = 10)
clf.fit(X,y)
plot_decision_regions(X,y,clf,legend=2)
plt.xlim([0,2])
plt.ylim([0,2])
print(confusion_matrix(y, clf.predict(X)))
# + id="xGWMqd0csDXG" colab_type="code" outputId="4995a5d3-367a-4e5d-fa97-51bd2c50a88b" executionInfo={"status": "ok", "timestamp": 1549051788267, "user_tz": -345, "elapsed": 1464, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-WHZMn63EOic/AAAAAAAAAAI/AAAAAAAAA2k/CE8LZgMOTMU/s64/photo.jpg", "userId": "03372508101739375813"}} colab={"base_uri": "https://localhost:8080/", "height": 387}
clf = KNeighborsClassifier()
y_ = y.copy()
y_[50]=1
y_[51]=1
y_[70]=1
y_[71]=1
clf.fit(X,y_)
plot_decision_regions(X,y_,clf,legend=2)
plt.xlim([0,2])
plt.ylim([0,2])
print(confusion_matrix(y,clf.predict(X)))
| Session4/extra_for_mlxtend.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
# %matplotlib inline
from matplotlib.ticker import FormatStrFormatter
from datetime import datetime
import subprocess
import os
mpl.rcParams['figure.figsize'] = (16, 9)
# -
# # Standard process in data science
# 
#
#
# # Data Preparation
#
# * Data structure must be clear and understandable
# * Visualize data in plots and graphs
#
# ## GitHub CSV data : <NAME>
#
# First we will scrape data for confirmed cases country-wise, and will do it for a limited number of countries
# +
git_repo = 'https://github.com/CSSEGISandData/COVID-19.git'
git_clone = subprocess.Popen( "git clone " + git_repo ,
cwd = os.path.dirname( '../data/raw/' ),
shell = True,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE )
(out, error) = git_clone.communicate()
print('out:', out)
print('error:', error)
# +
# load data from csv file
filepath = '../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw_confirmed = pd.read_csv(filepath)
pd_raw_confirmed.head()
# -
# ## Filter raw data
# +
t_idx = pd_raw_confirmed.columns[4:]
df_confirmed = pd.DataFrame({'date':t_idx})
df_confirmed.head()
# -
# get daily cases for one country, e.g. Germany
pd_raw_confirmed[pd_raw_confirmed['Country/Region']=='Germany'].iloc[:,4::].sum(axis=0)[-4:]
# +
# do same for multiple countries
countries =['Italy', 'US', 'Spain', 'Germany', 'Russia' , 'India', 'Brazil']
for con in countries:
df_confirmed[con]=np.array(pd_raw_confirmed[pd_raw_confirmed['Country/Region']==con].iloc[:,4::].sum(axis=0))
df_confirmed.tail()
# -
df_confirmed.set_index('date').plot()
plt.xlabel('Date')
plt.ylabel('Total cases')
plt.gca().yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
# ## Datatype of date
df_confirmed.tail()
# +
# convert to datetime df_confirmed
t_idx = [datetime.strptime(date,"%m/%d/%y") for date in df_confirmed.date]
# convert back to date ISO norm (str)
t_str = [each.strftime('%Y-%m-%d') for each in t_idx]
# set back to DataFrame
df_confirmed['date'] = t_idx
# -
# cross check
type(df_confirmed['date'][0])
df_confirmed.to_csv('../data/processed/COVID_small_flat_table.csv',sep=';',index=False)
# ### Scrape recovered cases, currently infected cases, and deaths
def store_JH_small_data(filepath, country_list):
# load data from csv file
df = pd.read_csv(filepath)
t_idx = df.columns[4:]
df_processed = pd.DataFrame({'date':t_idx})
for each in country_list:
df_processed[each]=np.array(df[df['Country/Region']==each].iloc[:,4::].sum(axis=0))
t_idx = [datetime.strptime(date,"%m/%d/%y") for date in df_processed.date]
df_processed['date'] = t_idx
return df_processed
# #### Recovered
filepath = '../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv'
df_recovered = store_JH_small_data(filepath, countries)
df_recovered.tail()
df_recovered.to_csv('../data/processed/COVID_small_flat_table_recovered.csv',sep=';',index=False)
# #### Deaths
filepath = '../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv'
df_deaths = store_JH_small_data(filepath, countries)
df_deaths.tail()
df_deaths.to_csv('../data/processed/COVID_small_flat_table_deaths.csv',sep=';',index=False)
# #### Infected
df_infected = pd.DataFrame()
df_infected['date'] = t_idx
df_infected = pd.concat([df_infected, df_confirmed.iloc[:, 1::] - df_recovered.iloc[:, 1::] - df_deaths.iloc[:, 1::]],
axis=1)
df_infected.to_csv('../data/processed/COVID_small_flat_table_infected.csv',sep=';',index=False)
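# The infected count above is plain columnwise DataFrame arithmetic; a tiny sketch with hypothetical numbers shows how pandas aligns on index and column labels before subtracting:

```python
import pandas as pd

# toy country columns (made-up values)
confirmed = pd.DataFrame({'Germany': [10, 20], 'US': [5, 8]})
recovered = pd.DataFrame({'Germany': [2, 5], 'US': [1, 2]})
deaths = pd.DataFrame({'Germany': [1, 1], 'US': [0, 1]})
infected = confirmed - recovered - deaths
# infected['Germany'] == [7, 14], infected['US'] == [4, 5]
```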
# ## Relational data model - defining a primary key
#
# A primary key's main features are:
#
# * It must contain a unique value for each row of data.
# * It cannot contain NaN values.
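# Both requirements can be checked directly in pandas; a sketch with hypothetical rows, using the composite key (state, country, date) that the relational model below relies on:

```python
import pandas as pd

# hypothetical rows: the composite key must be unique and non-null
df = pd.DataFrame({'state': ['no', 'no'],
                   'country': ['Germany', 'France'],
                   'date': ['1/22/20', '1/22/20']})
key = ['state', 'country', 'date']
is_unique = not df.duplicated(subset=key).any()   # no two rows share the key
has_no_nan = df[key].notna().all().all()          # no NaN in any key column
# both checks pass for this frame
```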
data_path = '../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw = pd.read_csv(data_path)
pd_raw.head()
# +
# adjust column name
pd_data_base = pd_raw.rename(columns = {'Country/Region':'country',
'Province/State':'state'})
pd_data_base['state'] = pd_data_base['state'].fillna('no')
# drop unnecessary columns
pd_data_base = pd_data_base.drop(['Lat','Long'],axis=1)
pd_data_base.head()
# -
pd_relational=pd_data_base.set_index(['state','country']).T.stack(level=[0,1]).reset_index().rename(columns={'level_0': 'date',
0:'confirmed'
})
pd_relational.head()
pd_relational.dtypes
# change datatype of date
pd_relational['date'] = pd_relational.date.astype('datetime64[ns]')
pd_relational['confirmed'] = pd_relational.confirmed.astype(int)
pd_relational.dtypes
pd_relational[pd_relational['country']=='US'].tail()
pd_relational.to_csv('../data/processed/COVID_relational_confirmed.csv',sep=';',index=False)
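# The `set_index` / transpose / `stack` pattern above turns the wide table into long (relational) form; the same reshaping on a toy frame with made-up values:

```python
import pandas as pd

# toy wide table: one row per (state, country), one column per date
wide = pd.DataFrame({'state': ['no', 'no'],
                     'country': ['Germany', 'France'],
                     '1/22/20': [0, 2],
                     '1/23/20': [1, 3]})
long = (wide.set_index(['state', 'country']).T
            .stack(level=[0, 1])
            .reset_index()
            .rename(columns={'level_0': 'date', 0: 'confirmed'}))
# 4 rows (2 dates x 2 countries); columns: date, state, country, confirmed
```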
# ## Relational data model for US region from the <NAME> dataset
data_path='../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv'
pd_raw_US=pd.read_csv(data_path)
pd_raw_US.head()
# remove unwanted columns and change column names
pd_raw_US=pd_raw_US.drop(['UID', 'iso2', 'iso3', 'code3', 'Country_Region','FIPS', 'Admin2', 'Lat', 'Long_', 'Combined_Key'],axis=1)
pd_data_base_US=pd_raw_US.rename(columns={'Province_State':'state'}).copy()
# +
# stack data in relational form
pd_relational_US=pd_data_base_US.set_index(['state']).T.stack().reset_index() \
.rename(columns={'level_0':'date', 0:'confirmed'})
# convert to datetime
pd_relational_US['country']='US'
pd_relational_US['date']=[datetime.strptime( each,"%m/%d/%y") for each in pd_relational_US.date]
pd_relational_US.head()
# +
# merge US data into main relational DataFrame
pd_relational_model_all=pd_relational[pd_relational['country']!='US'].reset_index(drop=True)
pd_relational_model_all=pd.concat([pd_relational_model_all,pd_relational_US],ignore_index=True)
pd_relational_model_all[pd_relational_model_all['country']=='US'].tail()
# -
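# The merge step above is a replace-then-append: drop the aggregate 'US' rows, then concatenate the state-level detail. A sketch with made-up frames:

```python
import pandas as pd

# hypothetical frames: replace the aggregate 'US' rows with state-level detail
main = pd.DataFrame({'country': ['Germany', 'US'], 'confirmed': [2, 1]})
us_detail = pd.DataFrame({'country': ['US', 'US'], 'confirmed': [3, 4]})
merged = pd.concat([main[main['country'] != 'US'].reset_index(drop=True), us_detail],
                   ignore_index=True)
# merged['country'] == ['Germany', 'US', 'US']
```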
# export data to csv
pd_relational_model_all.to_csv('../data/processed/20200730_COVID_relational_confirmed.csv',sep=';',index=False)
| notebooks/02_Data_preparation_v1.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
from enmspring.spring import Spring
from enmspring import k_b0_util
from enmspring.k_b0_plot import BoxPlot
import pandas as pd
import numpy as np
# ### Part 0: Initialize
rootfolder = '/Users/alayah361/fluctmatch/enm/cg_13'
cutoff = 4.7
host = 'nome'
type_na = 'bdna+bdna'
n_bp = 13 ## 21
spring_obj = Spring(rootfolder, host, type_na, n_bp)
# ### Part 1 : Process Data
plot_agent = BoxPlot(rootfolder, cutoff)
# ### Part 2: PP
category = 'PP'
key = 'k'
nrows = 2
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 10))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'R'
nrows = 1
ncols = 3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(22, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'RB'
nrows = 2
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 10))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'PB'
key = 'k'
nrows = 1
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'st'
nrows = 1
ncols = 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 8))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'bp'
nrows = 1
ncols = 3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(24, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
df_nome = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/nome/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me1 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me1/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me2 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me2/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me3 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me3/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me12 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me12/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me23 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me23/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me123 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me123/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me1.tail(60)
# +
columns = ['system', 'PairType', 'median', 'mean', 'std']
d_pair_sta = {column: [] for column in columns}
sys_names = ['nome', 'me1', 'me2', 'me3', 'me12', 'me23', 'me123']
dfs = [df_nome, df_me1, df_me2, df_me3, df_me12, df_me23, df_me123]
res = (4, 5, 6, 7, 8, 9, 10)
pair_types = ['same-P-P-0', 'same-P-P-1', 'same-P-S-0', 'same-P-S-1',
              'same-P-B-0', 'same-P-B-1', 'same-S-S-0', 'same-S-B-0',
              'same-S-B-1', 'Within-Ring']
for df, sysname in zip(dfs, sys_names):
    for pair_type in pair_types:
        k_values = df.loc[df['PairType'] == pair_type, 'k']
        d_pair_sta['system'].append(sysname)
        d_pair_sta['PairType'].append(pair_type)
        d_pair_sta['median'].append(round(np.median(k_values), 3))
        d_pair_sta['mean'].append(round(np.mean(k_values), 3))
        d_pair_sta['std'].append(round(np.std(k_values), 3))
# -
pd.DataFrame(d_pair_sta).to_csv('/Users/alayah361/fluctmatch/enm/cg_13/me1/bdna+bdna/pd_dfs/pairtypes_statis.csv', index=False)
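# The per-pair-type statistics can also be computed with a `groupby` aggregation; a sketch on a hypothetical tiny frame (note that pandas' `std` uses ddof=1, whereas `np.std` above uses ddof=0):

```python
import pandas as pd

# hypothetical frame with the same columns as the pairtypes CSVs
df_toy = pd.DataFrame({'PairType': ['same-P-P-0', 'same-P-P-0', 'same-P-P-1'],
                       'k': [1.0, 3.0, 5.0]})
stats = (df_toy.groupby('PairType')['k']
               .agg(['median', 'mean', 'std'])   # pandas std uses ddof=1
               .round(3)
               .reset_index())
# same-P-P-0 row: median 2.0, mean 2.0, std 1.414
```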
| notebooks/.ipynb_checkpoints/boxplot_k_b0-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
import cv2
import os
import scipy
import matplotlib.pyplot as plt
import pandas as pd
pwd = os.getcwd()
# -
def detect(filename, cascade_file = '/'.join([pwd, "lbpcascade_animeface.xml"])):
"""Modifed from example: github.com/nagadomi/lbpcascade_animeface."""
if not os.path.isfile(cascade_file):
raise RuntimeError("%s: not found" % cascade_file)
cascade = cv2.CascadeClassifier(cascade_file)
image = cv2.imread(filename)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)
faces = cascade.detectMultiScale(gray,
# detector options
scaleFactor = 1.1,
minNeighbors = 5,
minSize = (24, 24))
name = filename.split('/')[-1].split('.')[0]
    if len(faces) > 0:
        # store the first detected face's bounding box as JSON; indexing faces[0]
        # before this check crashed when no face was found
        j_faces = [[s, faces[0][e]] for e, s in enumerate(list('xywh'))]
        pd.DataFrame(j_faces).set_index(0).to_json("faces/jsons/"+name+".json")
        cv2.imwrite("faces/pngs/"+name+".png", image)
return faces
def loadImage(f):
    x, y, w, h = detect(f)[0]  # use the first detected face
    img = {'full': scipy.ndimage.imread(f)}
    w4, h4, Y = w // 4, h // 4, img['full'].shape[0]  # integer division: slice indices must be ints
    img['hair'] = img['full'][x:x+w, y-h4:y+h4]
    img['deco'] = img['full'][x+w4:x+3*w4, y+h4:Y]
    return img
files = ['/'.join([pwd, 'faces/pngs', x]) for x in os.listdir('faces/pngs')]
for f in files:
detect(f)
| face-extraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Color Bins Style
#
# Helper function for quickly creating a color bins style. Use `help(color_bins_style)` to get more information.
# +
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
# +
from cartoframes.viz import Layer, color_bins_style
Layer('eng_wales_pop', color_bins_style('pop_sq_km'))
| docs/examples/data_visualization/styles/color_bins_style.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # UMAP Plotting (IE)
#
# ## Imports
# +
import sys
import os
import pathlib
print(sys.version)
utils_path = pathlib.Path(os.getcwd() + '/utils')  # probably not needed
print(utils_path.exists())
print(os.getcwd())
#sys.path.append(str(utils_path))  # may not be necessary
#sys.path.append(os.getcwd())
sys.path.append('../')  # one level up, so the utils package is importable
print(sys.path)
import numpy as np
import sklearn
from sklearn.datasets import load_iris, load_digits
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from utils.data import Data
from utils.config import Config
# +
## plotting functions
# -
def plot_umaps(plot_df, colorby, title=None, filename=None):
    n = plot_df.shape[0]
    # create the figure once; creating it inside the loop put each subplot
    # on its own fresh figure instead of side by side
    plt.figure(figsize=(20, 10))
    for i, c in enumerate(colorby):
        plt.subplot(1, 2, i + 1)
        plt.scatter(data=plot_df, x='x', y='y', c=c, alpha=.1)
        plt.gca().set_aspect('equal', 'datalim')
        plt.title(f'UMAP {colorby[0]} projection\n 300K features \n({c},{n})', fontsize=24)
        plt.axis('off')
        plt.colorbar()
    #plt.savefig(f'plots/300K_{colorby[0]}_umap_{n}_samples_colorby_{c}.png', transparent=True)
def plot_umap(plot_df, colorby, filename=None):
plt.figure(figsize=(10, 10))
sns.scatterplot(
x='x', y='y',
hue=colorby,
palette=sns.color_palette("husl", 2),
data=plot_df,
alpha=0.9
)
plt.legend(loc='upper left')
plt.axis('off')
#plt.savefig(f'plots/300K_{colorby[0]}_umap_{n}_samples_colorby_commensurate_bool.png', transparent=True)
plt.show()
def plot_umap_v2(plot_df, colorby, imagename=None):
    plt.figure(figsize=(10, 10))
    sns.scatterplot(
        x='x', y='y',
        palette=sns.color_palette("husl", 2),
        data=plot_df,
        alpha=0.01
    )
    for i, c in enumerate(colorby):
        sns.scatterplot(
            x='x', y='y',
            palette=sns.color_palette("husl", 2),
            data=plot_df[plot_df[c] == True],
            alpha=0.1 + (i * .5)
        )
    plt.axis('off')
    if imagename is not None:  # parameter is imagename; referencing filename raised NameError
        plt.savefig(f'plots/{imagename}.png', transparent=True)
    plt.show()
# # Plotting
#
# ## load plotting data files
colorby = ['commensurate', 'dft_ie']
neighbors = [40]
filename = 'umap_300Kdf_IE_296835_40_2_flags.csv'
plot_df = pd.read_csv(Config().get_datapath(filename))
# ## create layered plot
#
# use plot_umap_v2 for commensurate and dft flags... may need to update the function for three flags
plot_df.head()
plot_umap_v2(plot_df, colorby, '300K_umap_296835_IE_comm_dft')
colorby
[colorby[0]]
plt.figure(figsize=(10, 10))
sns.scatterplot(
x='x', y='y',
palette=sns.color_palette("husl", 2),
data=plot_df[plot_df[colorby[0]] == True],
alpha=0.05
)
plt.axis('off')
plt.show()
colorby[0]
plot_df.shape
len(colorby)
for i,c in enumerate(colorby): print(i,c)
| vis-umap_300K_flags_IE_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from collections import defaultdict
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
import jieba
import requests
import nltk
from bs4 import BeautifulSoup
# %matplotlib inline
# get fund code
url = "http://www.yesfund.com.tw/w/wq/wq01.djhtm?fbclid=IwAR3x-JBkHxHawdA3dv3-PmFMtR-FP3LxT5nk1S-tvPaR3q9Zgz2jZbYdOwE"
response = requests.get(url).text
soup = BeautifulSoup(response, 'html.parser')
a_list = soup.find_all("a")
code_name_tup = []
code_name_tup_full = []
code_name_dict = {}
for a in a_list:
code = a["href"].split("_")[1].split(".")[0].split("-")[0]
name = a.text.split(" ")[-1]
code_name_tup_full.append((code,name))
if "环ç©å" in name:
name = name.replace("环ç©å","环ç©é¡å")
if "æé
å" in name:
name = name.replace("æé
å","æé
é¡å")
if "å£é
å" in name:
name = name.replace("å£é
å","å£é
é¡å")
if "(å°å¹£)" in name:
name = name.replace("(å°å¹£)","æ°èºå¹£èšå¹")
if "(人æ°å¹£)" in name:
name = name.replace("(人æ°å¹£)","人æ°å¹£èšå¹")
name = name.split("(")[0]
code_name_tup.append((code,name))
try:
code_name_dict[name] = code
except:
print(name)
code_name_tup_full
invest = pd.read_excel("invest_data.xlsx")
pd.DataFrame(invest["åºéç°¡çš±"].unique()).to_csv("fund_name_code.csv", encoding="utf_8_sig", index=None)
code = []
invest_name = [n.replace(" ","") for n in invest["åºéç°¡çš±"].unique()]
code = []
for name in invest_name:
if name in code_name_dict:
code.append(code_name_dict[name])
else:
code.append('NaN')
pd.DataFrame(data={"name":invest["åºéç°¡çš±"].unique(),"code":code}).to_csv("fund_name_code.csv", encoding="utf_8_sig", index=None)
# https://docs.google.com/spreadsheets/d/1K4VwQfRFwBDVnkHF7fEIGjDRfxegreY_9np0UwSG29k/edit?usp=sharing
# # Compute standard deviation and Sharpe ratio
#
fund_name_code = pd.read_excel("fund_name_code.xlsx")
fund_name_code.head()
len(price)
# +
# scrape fund net asset values (NAV)
sharp_list = []
std_list = []
ES_list = []
for fund in fund_name_code["code"]:
api = "https://www.moneydj.com/funddj/bcd/tBCDNavList.djbcd?a={}&B=2013-5-24&C=2019-5-24".format(fund)
response = requests.get(api)
content = response.text
try:
date = content.split(" ")[0].split(",")
price = np.array([float(num) for num in content.split(" ")[1].split(",")])
price_prev = np.roll(price,1)
returns = ((price-price_prev)/price_prev)[1:]
        # Sharpe ratio = [(mean daily return - risk-free rate) / (std of daily returns)] * sqrt(252)
rf = 0.011/252
# rf = 0
mean = np.mean(returns)
std = np.std(returns)
sharp_ratio = ((mean-rf)/std)*252**0.5
sharp_list.append(sharp_ratio)
std_list.append(std*252**0.5)
        # weekly data
price_prev = np.roll(price,7)
weekly_returns = ((price-price_prev)/price_prev)[7:]
ES = np.mean(sorted(weekly_returns)[:int(len(weekly_returns)*0.1)])
ES_list.append(ES)
except:
sharp_list.append(0)
std_list.append(0)
ES_list.append(0)
print(fund)
# -
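# The annualized Sharpe computation in the loop can be checked on a toy series (made-up prices, same daily risk-free rate as above):

```python
import numpy as np

# hypothetical daily price series
price = np.array([100.0, 101.0, 100.5, 102.0])
returns = np.diff(price) / price[:-1]   # simple daily returns
rf = 0.011 / 252                        # daily risk-free rate
sharpe = (returns.mean() - rf) / returns.std() * 252 ** 0.5
annualized_std = returns.std() * 252 ** 0.5
# sharpe comes out positive for this upward-drifting series
```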
fund_name_code["sharp_ratio"] = sharp_list
fund_name_code["std"] = std_list
fund_name_code["ES"] = ES_list
fund_name_code
fund_name_code.to_excel("fund_name_risk.xlsx", index=None,)
| final/src/fund risk level.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data exploration of the investment dataset
# One of the first steps for this second milestone was to investigate the information contained in the investment dataset.
# #### Notes
# * quarterly
# * institutional investement managers with holdings over 100M
# * Form 13F is required to be filed within 45 days of the end of a calendar quarter (which should be considered as significant information latency)
# * only reports long positions (not short)
# ** different investment managers pursue different strategies with may bias results
# ** however, the vast majority of investment managers rely significantly on long positions for significant portion of fund performance
# * 13F does not reveal international holdings (except for American depositary receipts).
# * Section 13(f) securities generally include equity securities that trade on an exchange (including Nasdaq), certain equity options and warrants, shares of closed-end investment companies, and certain convertible debt securities.
# * shares of open-end investment companies (i.e. mutual funds) are not Section 13(f) securities
# * official list of qualifying securities: https://www.sec.gov/divisions/investment/13flists.htm
# * excludes total portfolio value and percentage allocation of each stock listed
# * Money managers allocate the most capital to their best ideas. Pay attention to "new positions" in their disclosures as these are their most recent ideas
# * 13F is not their whole portfolio and that it's a past snapshot
# ### Initial Imports
# +
import pandas as pd
import numpy as np
import html5lib
pd.set_option( 'display.notebook_repr_html', False )
from IPython.display import HTML # useful for snippets
# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')
from IPython.display import Image
# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works
from IPython.display import YouTubeVideo
# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)
from IPython.core import page
get_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)
# Generate PLOTS inside notebook, "inline" generates static png:
# %matplotlib inline
# "notebook" argument allows interactive zoom and resize.
# note: https cannot be read by lxml
# -
# load Q3 2018 report URLs
Q3Y18_index_df = pd.read_table('13f_Q3Y18_index.tsv', sep=',', index_col=False, encoding='latin-1')
# inspect size of dataset
Q3Y18_index_df.shape
# +
# take sample of dataset for testing
percentage_sample = 5 # use 5% of the dataset as a test sample
test_df= Q3Y18_index_df.head(int(np.round(Q3Y18_index_df.shape[0]*percentage_sample/100)))
test_df
# -
# inspect if URL to be parsed is valid
test_df['Filing URL .html'].iloc[0]
# +
# initialize empty list to store dataframes from different investors (to be appended later)
appended_data = []
# loop through all reports, filter relevant data, create normalized dataframes per investor, add to list of dataframes to be appended
for index, row in test_df.iterrows():
# need to parse initial html file for name of html file with investment data
url = 'https://www.sec.gov/Archives/' + row['Filing URL .html'] #.iloc[index]
page = pd.read_html( url )
df = page[0]
table_url_suffix = df[2].iloc[4]
report_suffix = row['Filing URL .html']
investor = row['Company Name']
date = row['Filing Date']
### SET TO RETURN TOP 20 STOCKS PER INVESTOR (BY SIZE OF INVESTMENT)
num_stocks_returned = 20
stem = 'http://www.sec.gov/Archives/'
xml_suffix = '/xslForm13F_X01/'
report_suffix = report_suffix.replace('-index.html', '')
report_suffix = report_suffix.replace('-', '')
# build URL to html file with investment data
url = stem + report_suffix + xml_suffix + table_url_suffix
print(url)
# turn HTML file into dataframe
page = pd.read_html( url )
    # the last element of page contains relevant investment data
df = page[-1]
# rename columns:
df.columns = [ 'stock', 'class', 'cusip', 'usd', 'size', 'sh_prin', 'putcall', 'discret', 'manager', 'vote1', 'vote2', 'vote3']
# But first three rows are SEC labels, not data,
# so delete them:
df = df[3:]
    # Start a new index from 0 instead of 3 (assign the result, otherwise it is discarded):
    df = df.reset_index( drop=True )
    # Delete irrelevant columns:
dflite = df.drop( df.columns[[1, 4, 5, 7, 8, 9, 10, 11]], axis=1 )
# usd needs float type since usd was read as string:
dflite[['usd']] = dflite[['usd']].astype( float )
# NOTE: int as type will fail for NaN
# Type change allows proper sort:
dfusd = dflite.sort_values( by=['usd'], ascending=[False] )
usdsum = sum( dfusd.usd )
# Portfolio total in USD:
#usdsum
# New column for percentage of total portfolio:
dfusd['pcent'] = np.round(( dfusd.usd / usdsum ) * 100, 2)
# New column for date of report filling
dfusd.insert(0, 'date', date)
# New column for investor
dfusd.insert(0, 'investor', investor)
# Dataframe per investor with top num_stocks_returned
appended_data.append(dfusd.head( num_stocks_returned ))
# show list of dataframes
#appended_data
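# The percentage-of-portfolio step inside the loop can be illustrated on a toy holdings frame (hypothetical tickers and amounts):

```python
import numpy as np
import pandas as pd

# hypothetical holdings
holdings = pd.DataFrame({'stock': ['AAPL', 'MSFT'], 'usd': [3000.0, 1000.0]})
holdings['pcent'] = np.round(holdings['usd'] / holdings['usd'].sum() * 100, 2)
# pcent: 75.0 and 25.0
```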
# +
# Concat investor dataframes together
appended_data = pd.concat(appended_data, axis=0)
# Export as CSV file
appended_data.to_csv('test_results.csv')
# -
| research_code/data_exploration/data_exploration_investment_dataset_ms2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''Importing the required libraries'''
import numpy as np
import pandas as pd
import joblib
import string
# +
'''Data loading and reading'''
df = pd.read_csv("1-P-3-ISEAR.csv",header=None)
df.head()
# +
'''adding names to the columns'''
df.columns = ['sn','Target','Sentence']
df.drop('sn',inplace=True,axis =1)
# -
df.head()
# +
'''Checking and removing duplicates if any '''
df.duplicated().sum()
df.drop_duplicates(inplace = True)
# +
'''removing any punctuation or digits that exist in the dataset'''
def remove_punc(text):
text = "".join([char for char in text if char not in string.punctuation and not char.isdigit()])
return text
df['Sentence'] = df['Sentence'].apply(remove_punc)
# -
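# A self-contained sketch of the same punctuation-and-digit filter (standalone helper name chosen for illustration):

```python
import string

def strip_punct_digits(text):
    # keep only characters that are neither punctuation nor digits
    return "".join(ch for ch in text if ch not in string.punctuation and not ch.isdigit())

# strip_punct_digits("Hello, world! 123") == "Hello world "
```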
df.head()
# +
'''A stop word is a commonly used word (such as 'the', 'a', 'an', 'in') in English; we remove those words during pre-processing.'''
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
# +
def remove_stopwords(text):
text = [w for w in text.split() if w not in stopwords.words('english')]
return ' '.join(text)
df['Sentence'] = df['Sentence'].apply(remove_stopwords)
# -
df.head()
# +
'''During the process of lemmatization a word is reduced to its root word,'''
'''e.g. travelling to travel'''
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
from nltk.corpus import wordnet
# +
lemmatizer = WordNetLemmatizer()
def lemmatize(text):
text = [lemmatizer.lemmatize(word,'v') for word in text.split()]
return ' '.join(text)
df['Sentence'] = df['Sentence'].apply(lemmatize)
# -
df.head()
# +
'''Splitting the data for training and testing purpose'''
from sklearn.model_selection import train_test_split
X = df['Sentence']
y = df['Target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2,random_state=10)
# +
'''TF-IDF is a technique for transforming text into a meaningful vector format. It penalizes words that pop up too often and don't help much in prediction,
rescaling the frequency of common words so that the vector representation is more meaningful.'''
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(min_df=2, max_df=0.5, ngram_range=(1, 2))
train_tfidf = tfidf.fit_transform(X_train)
test_tfidf = tfidf.transform(X_test)
# +
'''I am building models with Logistic Regression and Naive Bayes, as both algorithms can be used for this classification task.'''
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
# +
'''Logistic Regression '''
logi = LogisticRegression(max_iter=1000)
logi.fit(train_tfidf,y_train)
logi.score(train_tfidf, y_train), logi.score(test_tfidf, y_test)
# -
y_pre = logi.predict(test_tfidf)
from sklearn.metrics import confusion_matrix, classification_report
print(classification_report(y_test, y_pre))
# +
#accuracy score
logi.score(train_tfidf, y_train), logi.score(test_tfidf, y_test)
# +
#confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
classes = ['joy' , 'guilt' , 'disgust' , 'sadness' , 'shame', 'anger', 'fear']
c_m =confusion_matrix(y_test, y_pre)
df_cm = pd.DataFrame(c_m, index = [i for i in classes], columns = [i for i in classes])
sns.heatmap(df_cm, annot=True)
plt.ylabel('True label')
plt.xlabel('Predicted label')
# -
y_test.value_counts()
# +
'''Naive Bayes'''
naive = MultinomialNB()
naive.fit(train_tfidf,y_train)
y_pre = naive.predict(test_tfidf)
# -
print(classification_report(y_test, y_pre))
# +
#accuracy score
naive.score(train_tfidf, y_train), naive.score(test_tfidf, y_test)
# +
#confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
classes = ['joy' , 'guilt' , 'disgust' , 'sadness' , 'shame', 'anger', 'fear']
c_m =confusion_matrix(y_test, y_pre)
df_cm = pd.DataFrame(c_m, index = [i for i in classes], columns = [i for i in classes])
sns.heatmap(df_cm, annot=True)
plt.ylabel('True label')
plt.xlabel('Predicted label')
# +
'''Predicting on new data'''
predict_me = ['i am very disappointed at you']
predict_me = tfidf.transform(predict_me)
logi.predict(predict_me)
# +
'''Getting prediction probabilities using Naive Bayes'''
nb_emotions = naive.predict_proba(predict_me)
nb_datas = naive.classes_
naive_decection = pd.DataFrame()
naive_decection['Emotion'] = nb_datas
naive_decection['emotion_propb'] = nb_emotions.T
# +
'''pie chart for prediction probability using Naive Bayes'''
import matplotlib.pyplot as plt
fig1, ax1 = plt.subplots()
ax1.pie(naive_decection['emotion_propb'], labels=naive_decection['Emotion'], autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
# +
#prediction prob
lr_emotions = logi.predict_proba(predict_me)
lr_datas = logi.classes_
logi_decection = pd.DataFrame()
logi_decection['Emotion'] = lr_datas
logi_decection['emotion_propb'] = lr_emotions.T
# +
'''pie chart for prediction probability using Logistic Regression'''
fig1, ax1 = plt.subplots()
ax1.pie(logi_decection['emotion_propb'], labels=logi_decection['Emotion'], autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
# +
'''Saving Models using joblib'''
joblib.dump(logi, './mymodel/logistic_model.joblib')
joblib.dump(naive, './mymodel/naive_bayes_model.joblib')
joblib.dump(tfidf, './mymodel/tfidf_model.joblib')
| nlp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Singular Value Decomposition
#
# So far in this lesson, you have gained some exposure to Singular Value Decomposition. In this notebook, you will get some hands on practice with this technique.
#
# Let's get started by reading in our libraries and setting up the data we will be using throughout this notebook
#
# `1.` Run the cell below to create the **user_movie_subset** dataframe. This will be the dataframe you will be using for the first part of this notebook.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import svd_tests as t
# %matplotlib inline
# Read in the datasets
movies = pd.read_csv('data/movies_clean.csv')
reviews = pd.read_csv('data/reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
# Create user-by-item matrix
user_items = reviews[['user_id', 'movie_id', 'rating']]
user_by_movie = user_items.groupby(['user_id', 'movie_id'])['rating'].max().unstack()
user_movie_subset = user_by_movie[[73486, 75314, 68646, 99685]].dropna(axis=0)
print(user_movie_subset)
# -
# `2.` Now that you have the **user_movie_subset** matrix, use this matrix to correctly match each key to the correct value in the dictionary below. Use the cells below the dictionary as necessary.
# +
print(user_movie_subset.shape[0])
print(user_movie_subset.shape[1])
# -
col = user_movie_subset.columns
user_movie_subset['avg_rating'] = user_movie_subset[col].mean(axis=1)
user_movie_subset[user_movie_subset['avg_rating'] == np.max(user_movie_subset['avg_rating'])].index[0]
user_movie_subset.mean(axis=0)
# +
# match each letter to the best statement in the dictionary below - each will be used at most once
a = 20
b = 68646
c = 'The Godfather'
d = 'Goodfellas'
e = 265
f = 30685
g = 4
sol_1_dict = {
    'the number of users in the user_movie_subset': a,
    'the number of movies in the user_movie_subset': g,
    'the user_id with the highest average ratings given': e,
    'the movie_id with the highest average ratings received': b,
    'the name of the movie that received the highest average rating': c
}
#test dictionary here
t.test1(sol_1_dict)
# +
# Cell for work
# user with the highest average rating
# movie with highest average rating
# list of movie names
# users by movies
user_movie_subset = user_movie_subset.drop(columns=['avg_rating'])
# -
# Now that you have a little more context about the matrix we will be performing Singular Value Decomposition on, we're going to do just that. To get started, let's remind ourselves about the dimensions of each of the matrices we are going to get back. Essentially, we are going to split the **user_movie_subset** matrix into three matrices:
#
# $$ U \Sigma V^T $$
#
#
# `3.` Given what you learned about in the previous parts of this lesson, provide the dimensions for each of the matrices specified above using the dictionary below.
# +
# match each letter in the dictionary below - a letter may appear more than once.
a = 'a number that you can choose as the number of latent features to keep'
b = 'the number of users'
c = 'the number of movies'
d = 'the sum of the number of users and movies'
e = 'the product of the number of users and movies'
sol_2_dict = {
'the number of rows in the U matrix':b, # letter here,
'the number of columns in the U matrix': a,# letter here,
'the number of rows in the V transpose matrix': a,# letter here,
'the number of columns in the V transpose matrix':c # letter here
}
#test dictionary here
t.test2(sol_2_dict)
# -
# Now let's verify the above dimensions by performing SVD on our user-movie matrix.
#
# `4.` Below you can find the code used to perform SVD in numpy. You can see more about this functionality in the [documentation here](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.linalg.svd.html). What do you notice about the shapes of your matrices? If you try to take the dot product of the three objects you get back, can you directly do this to get back the user-movie matrix?
u, s, vt = np.linalg.svd(user_movie_subset, full_matrices=False)# perform svd here on user_movie_subset
s.shape, u.shape, vt.shape
# Run this cell for our thoughts on the questions posted above
t.question4thoughts()
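# As a hedged aside on a small random matrix (not our ratings data), the shapes `np.linalg.svd` returns depend on the `full_matrices` flag, and `s` always comes back as a 1-D array of singular values that needs `np.diag` before you can multiply the pieces back together:

```python
import numpy as np

# toy matrix standing in for a small user-by-movie table
m = np.random.RandomState(0).rand(6, 4)

u_full, s_full, vt_full = np.linalg.svd(m)                       # u_full is (6, 6)
u_thin, s_thin, vt_thin = np.linalg.svd(m, full_matrices=False)  # u_thin is (6, 4)
print(u_full.shape, u_thin.shape, s_thin.shape, vt_thin.shape)

# s is 1-D, so rebuild the diagonal sigma matrix before reconstructing m
print(np.allclose(u_thin @ np.diag(s_thin) @ vt_thin, m))
```

With `full_matrices=True` (the default), the shapes of the three pieces are not directly compatible for a plain dot product; the "thin" form plus `np.diag(s)` reproduces the original matrix.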
# `5.` Use the thoughts from the above question to create **u**, **s**, and **vt** with four latent features. When you have all three matrices created correctly, run the test below to show that the dot product of the three matrices creates the original user-movie matrix. The matrices should have the following dimensions:
#
# $$ U_{n x k} $$
#
# $$\Sigma_{k x k} $$
#
# $$V^T_{k x m} $$
#
# where:
#
# 1. n is the number of users
# 2. k is the number of latent features to keep (4 for this case)
# 3. m is the number of movies
#
u.shape
np.diag(s)
# +
# Change the dimensions of u, s, and vt as necessary to use four latent features
# update the shape of u and store in u_new
u_new = u# change the shape of u here
# update the shape of s and store in s_new
s_new = np.diag(s)#change the shape of s as necessary
# Because we are using 4 latent features and there are only 4 movies,
# vt and vt_new are the same
vt_new = vt # change the shape of vt as necessary
# -
# Check your matrices against the solution
assert u_new.shape == (20, 4), "Oops! The shape of the u matrix doesn't look right. It should be 20 by 4."
assert s_new.shape == (4, 4), "Oops! The shape of the sigma matrix doesn't look right. It should be 4 x 4."
assert vt_new.shape == (4, 4), "Oops! The shape of the v transpose matrix doesn't look right. It should be 4 x 4."
assert np.allclose(np.dot(np.dot(u_new, s_new), vt_new), user_movie_subset), "Oops! Something went wrong with the dot product. Your result didn't reproduce the original movie_user matrix."
print("That's right! The dimensions of u should be 20 x 4, and both v transpose and sigma should be 4 x 4. The dot product of the three matrices now equals the original user-movie matrix!")
# It turns out that the sigma matrix can actually tell us how much of the original variability in the user-movie matrix is captured by each latent feature. The total amount of variability to be explained is the sum of the squared diagonal elements. The amount of variability explained by the first component is the square of the first value in the diagonal. The amount of variability explained by the second component is the square of the second value in the diagonal.
#
# `6.` Using the above information, can you determine the amount of variability in the original user-movie matrix that can be explained by only using the first two components? Use the cell below for your work, and then test your answer against the solution with the following cell.
# +
total_var = np.sum(s_new**2)# Total variability here
var_exp_comp1_and_comp2 = s[0]**2 + s[1]**2 # Variability Explained by the first two components here
perc_exp = round(var_exp_comp1_and_comp2 *100 / total_var, 2) # Percent of variability explained by the first two components here
# Run the below to print your results
print("The total variance in the original matrix is {}.".format(total_var))
print("The percentage of variability captured by the first two components is {}%.".format(perc_exp))
# -
assert np.round(perc_exp, 2) == 98.55, "Oops! That doesn't look quite right. You should have total variability as the sum of all the squared elements in the sigma matrix. Then just the sum of the squared first two elements is the amount explained by the first two latent features. Try again."
print("Yup! That all looks good!")
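# As a quick standalone sanity check of this bookkeeping (with made-up singular values, not the ones from our matrix):

```python
import numpy as np

# hypothetical singular values, largest first, as SVD returns them
s_demo = np.array([10.0, 5.0, 1.0, 0.5])

total_var_demo = np.sum(s_demo ** 2)              # 100 + 25 + 1 + 0.25 = 126.25
first_two_demo = s_demo[0] ** 2 + s_demo[1] ** 2  # 100 + 25 = 125.0

# percent of variability captured by the first two latent features
print(round(100 * first_two_demo / total_var_demo, 2))  # 99.01
```

The smaller trailing singular values contribute very little squared mass, which is exactly why dropping them loses so little variability.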
# `7.` Similar to in the previous question, change the shapes of your u, sigma, and v transpose matrices. However, this time consider only using the first 2 components to reproduce the user-movie matrix instead of all 4. After you have your matrices set up, check your matrices against the solution by running the tests. The matrices should have the following dimensions:
#
# $$ U_{n x k} $$
#
# $$\Sigma_{k x k} $$
#
# $$V^T_{k x m} $$
#
# where:
#
# 1. n is the number of users
# 2. k is the number of latent features to keep (2 for this case)
# 3. m is the number of movies
# +
# Change the dimensions of u, s, and vt as necessary to use two latent features
# keep only the first two columns of u
u_2 = u[:, :2]
# build a 2 x 2 diagonal matrix from the two largest singular values
s_2 = np.diag(s[:2])
# keep only the first two rows of vt
vt_2 = vt[:2, :]
# -
# Check that your matrices are the correct shapes
assert u_2.shape == (20, 2), "Oops! The shape of the u matrix doesn't look right. It should be 20 by 2."
assert s_2.shape == (2, 2), "Oops! The shape of the sigma matrix doesn't look right. It should be 2 x 2."
assert vt_2.shape == (2, 4), "Oops! The shape of the v transpose matrix doesn't look right. It should be 2 x 4."
print("That's right! The dimensions of u should be 20 x 2, sigma should be 2 x 2, and v transpose should be 2 x 4. \n\n The question is now that we don't have all of the latent features, how well can we really re-create the original user-movie matrix?")
# `8.` When using all 4 latent features, we saw that we could exactly reproduce the user-movie matrix. Now that we only have 2 latent features, we might measure how well we are able to reproduce the original matrix by looking at the sum of squared errors from each rating produced by taking the dot product as compared to the actual rating. Find the sum of squared error based on only the two latent features, and use the following cell to test against the solution.
# +
# Compute the dot product of the three truncated matrices
pred_ratings = np.dot(np.dot(u_2, s_2), vt_2)
# Compute the sum of squared differences between predicted and actual ratings
sum_square_errs = np.sum(np.square(user_movie_subset.values - pred_ratings))
# -
# Check against the solution
assert np.round(sum_square_errs, 2) == 85.34, "Oops! That doesn't look quite right. You should return a single number for the whole matrix."
print("That looks right! Nice job!")
# At this point, you may be thinking... why would we want to choose a k that doesn't just give us back the full user-movie matrix with all the original ratings? This is a good question. One reason might be computational - you may want to reduce the dimensionality of the data you are keeping - but really this isn't the main reason we would want to reduce k to less than the minimum of the number of movies or users.
#
# Let's take a step back for a second. In this example we just went through, your matrix was very clean. That is, for every user-movie combination, we had a rating. **There were no missing values.** But what we know from the previous lesson is that the user-movie matrix is full of missing values.
#
# A matrix similar to the one we just performed SVD on:
#
# <img src="imgs/nice_ex.png" width="400" height="400">
#
# The real world:
#
# <img src="imgs/real_ex.png" width="400" height="400">
#
#
# Therefore, if we keep all k latent features, it is likely that latent features with smaller values in the sigma matrix will explain variability that is due to noise rather than signal. Furthermore, if we use these "noisy" latent features to assist in re-constructing the original user-movie matrix, it will potentially (and likely) lead to worse ratings than if we only use the latent features associated with signal.
#
# `9.` Let's try introducing just a little of the real world into this example by performing SVD on a matrix with missing values. Below we set a single rating in our matrix to NaN, as if that user hadn't rated that movie. Try performing SVD on the new matrix. What happens?
# +
# This line adds one nan value as the very first entry in our matrix
user_movie_subset.iloc[0, 0] = np.nan # no changes to this line
# Try svd with this new matrix
u, s, vt = np.linalg.svd(user_movie_subset)  # raises a LinAlgError - SVD cannot handle the missing value
# -
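# To see the failure in isolation, here is a small standalone sketch (a made-up 2 x 2 matrix, assuming your numpy build behaves the same way) - numpy's SVD raises a `LinAlgError` as soon as the matrix contains a missing value:

```python
import numpy as np

# toy matrix with a single missing rating
toy = np.array([[np.nan, 8.0],
                [9.0, 10.0]])

try:
    np.linalg.svd(toy)
except np.linalg.LinAlgError as err:
    # traditional SVD requires a fully observed matrix
    print("SVD failed:", err)
```

This is the core limitation that motivates the gradient-based FunkSVD-style approaches used later in this lesson, which fit the latent factors using only the ratings that are present.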
#
# **Write your response here.**
| lessons/Recommendations/2_Matrix_Factorization_for_Recommendations/1_Intro_to_SVD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Summary
# # Imports
# +
import importlib
import os
import sys
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import seaborn as sns
from scipy import stats
from sklearn import metrics
# -
# %matplotlib inline
pd.set_option("display.max_columns", 100)
# +
SRC_PATH = Path.cwd().joinpath('..', 'src').resolve(strict=True)
if SRC_PATH.as_posix() not in sys.path:
sys.path.insert(0, SRC_PATH.as_posix())
import helper
importlib.reload(helper)
# -
# # Parameters
NOTEBOOK_PATH = Path('validation_homology_models_combined')
NOTEBOOK_PATH
OUTPUT_PATH = Path(os.getenv('OUTPUT_DIR', NOTEBOOK_PATH.name)).resolve()
OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
OUTPUT_PATH
PROJECT_VERSION = os.getenv("PROJECT_VERSION")
DEBUG = "CI" not in os.environ
DEBUG
# +
if DEBUG:
PROJECT_VERSION = "0.1"
else:
assert PROJECT_VERSION is not None
PROJECT_VERSION
# +
# if DEBUG:
# %load_ext autoreload
# %autoreload 2
# -
# # `DATAPKG`
DATAPKG = {}
DATAPKG['validation_homology_models'] = sorted(
Path(os.environ['DATAPKG_OUTPUT_DIR'])
.joinpath("adjacency-net-v2", f"v{PROJECT_VERSION}", "validation_homology_models")
.glob("*/*_dataset.parquet")
)
DATAPKG['validation_homology_models']
# # Dataset
# ## Construct datasets
# ### `homology_models_dataset`
# +
validation_df = None
def assert_eq(a1, a2):
if isinstance(a1[0], np.ndarray):
for b1, b2 in zip(a1, a2):
b1 = b1[~np.isnan(b1)]
b2 = b2[~np.isnan(b2)]
assert len(b1) == len(b2)
assert (b1 == b2).all()
else:
assert (a1 == a2).all()
for file in DATAPKG['validation_homology_models']:
df = pq.read_table(file, use_pandas_metadata=True).to_pandas(integer_object_nulls=True)
df.drop(pd.Index(['error']), axis=1, inplace=True)
if validation_df is None:
validation_df = df
else:
validation_df = (
validation_df
.merge(df, how="outer", left_index=True, right_index=True, validate="1:1", suffixes=("", "_dup"))
)
for col in validation_df.columns:
        if col.endswith("_dup"):
col_ref = col[:-4]
assert_eq(validation_df[col], validation_df[col_ref])
del validation_df[col]
# -
homology_models_dataset = validation_df.copy()
homology_models_dataset.head(2)
# ### `homology_models_dataset_filtered`
fg, ax = plt.subplots()
homology_models_dataset["identity_calc"].hist(bins=100, ax=ax)
# +
IDENTITY_CUTOFF = 1.00
query_ids_w3plus = {
query_id
for query_id, group in
homology_models_dataset[
(homology_models_dataset["identity_calc"] <= IDENTITY_CUTOFF)
]
.groupby('query_id')
if len(group) >= 10
}
homology_models_dataset_filtered = (
homology_models_dataset[
(homology_models_dataset["identity_calc"] <= IDENTITY_CUTOFF) &
(homology_models_dataset['query_id'].isin(query_ids_w3plus))
]
.copy()
)
print(len(homology_models_dataset))
print(len(homology_models_dataset_filtered))
# -
# ### `homology_models_dataset_final`
# +
# homology_models_dataset_final = homology_models_dataset.copy()
# -
homology_models_dataset_final = homology_models_dataset_filtered.copy()
# # Plotting
# ## Prepare data
# ### Correlations for the entire dataset
# + run_control={"marked": false}
target_columns = [
'dope_score',
'dope_score_norm',
'ga341_score',
'rosetta_score',
]
feature_columns = [
"identity_calc",
# "coverage_calc",
"identity",
"similarity",
"score", # "probability", "evalue",
"sum_probs",
# "query_match_length",
# "template_match_length",
]
network_columns = [
c
for c in homology_models_dataset_final.columns
if (c.endswith("_pdb") or c.endswith("_hm"))
and not (c.startswith("adjacency_idx") or c.startswith("frac_aa_wadj"))
]
results_df = homology_models_dataset_final.dropna(subset=network_columns).copy()
print(f"Lost {len(homology_models_dataset_final) - len(results_df)} rows with nulls!")
for col in ['dope_score', 'dope_score_norm', 'rosetta_score']:
results_df[col] = -results_df[col]
# -
len(network_columns)
# ### Correlations for each sequence independently
# +
data = []
for query_id, group in results_df.groupby('query_id'):
assert (group['sequence'].str.replace('-', '') == group['sequence'].iloc[0].replace('-', '')).all()
assert (group['query_match_length'] == group['query_match_length'].iloc[0]).all()
if len(group) < 3:
print(f"Skipping small group for query_id = '{query_id}'")
continue
for y_col in target_columns:
if len(group) < 3 or len(set(group[y_col])) == 1:
print(f"skipping y_col '{y_col}'")
continue
for x_col in feature_columns + network_columns:
if x_col in ['query_match_length']:
continue
if len(group) < 3 or len(set(group[x_col])) == 1:
print(f"skipping x_col '{x_col}'")
continue
corr, pvalue = stats.spearmanr(group[x_col], group[y_col])
data.append((y_col, x_col, corr, pvalue))
correlations_df = pd.DataFrame(data, columns=['target', 'feature', 'correlation', 'pvalue'])
# +
network_columns_sorted = (
correlations_df[
(correlations_df['target'] == 'dope_score_norm') &
(correlations_df['feature'].isin(network_columns))
]
.groupby("feature", as_index=True)
['correlation']
.mean()
.sort_values(ascending=False)
.index
.tolist()
)
assert len(network_columns_sorted) == len(network_columns)
# -
# ## Make Plots
def plot(df, columns):
mat = np.zeros((len(columns), len(columns)), float)
for i, c1 in enumerate(columns):
for j, c2 in enumerate(columns):
mat[i, j] = stats.spearmanr(df[c1], df[c2])[0]
fig, ax = plt.subplots()
im = ax.imshow(mat)
# We want to show all ticks...
ax.set_xticks(np.arange(len(columns)))
ax.set_yticks(np.arange(len(columns)))
# ... and label them with the respective list entries
ax.set_xticklabels(columns)
ax.set_yticklabels(columns)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(len(columns)):
for j in range(len(columns)):
text = ax.text(j, i, f"{mat[i, j]:.2f}", ha="center", va="center", color="w")
ax.set_title("Spearman correlation between alignment, structure, and network scores")
# +
features = target_columns + feature_columns + network_columns_sorted
dim = 4 + 0.4 * len(features)
with plt.rc_context(rc={'figure.figsize': (dim, dim), 'font.size': 11}):
plot(results_df, features)
plt.tight_layout()
plt.savefig(OUTPUT_PATH.joinpath("validation_homology_models_corr_all.png"), dpi=300, bbox_inches="tight")
plt.savefig(OUTPUT_PATH.joinpath("validation_homology_models_corr_all.pdf"), bbox_inches="tight")
# +
ignore = ['query_match_length']
features = [c for c in feature_columns + network_columns_sorted if c not in ignore]
figsize = (2 + 0.5 * len(features), 6)
for i, target in enumerate(target_columns):
corr = [
correlations_df[
(correlations_df['target'] == target) &
(correlations_df['feature'] == feature)
]['correlation'].values
for feature in features
]
with plt.rc_context(rc={'figure.figsize': figsize, 'font.size': 14}):
plt.boxplot(corr)
plt.ylim(-0.55, 1.05)
plt.xticks(range(1, len(features) + 1), features, rotation=45, ha="right", rotation_mode="anchor")
plt.ylabel("Spearman R")
plt.title(f"{target} (identity cutoff: {IDENTITY_CUTOFF:.2})")
plt.tight_layout()
plt.savefig(OUTPUT_PATH.joinpath(f"{target}_corr_gby_query.png"), dpi=300, bbox_inches="tight", transparent=False, frameon=True)
plt.savefig(OUTPUT_PATH.joinpath(f"{target}_corr_gby_query.pdf"), bbox_inches="tight", transparent=False, frameon=True)
plt.show()
# -
| notebooks2/05-validation_homology_models_combined.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tensorflow as tf
from stackn import stackn
from fedn.utils.kerashelper import KerasHelper
from client.models.mnist_model import create_seed_model
# +
path_to_model_weights = '/home/jovyan/work/minio-vol/fedn-models/4f80355a-0655-49b7-a97f-7a9798f256df'
helper = KerasHelper()
weights = helper.load_model(path_to_model_weights)
model = create_seed_model()
model.set_weights(weights)
# -
tf.saved_model.save(model, 'models/1/')
# If you are running locally with self signed certificate, then CHANGE the secure_mode variable to False
secure_mode = True
stackn.create_object('fedn-mnist', release_type="minor", secure_mode=secure_mode)
| tutorials/studio/quickstart/deploy_fedn_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problem Statement
#
# To build a multi-headed model that's capable of detecting different types of toxicity like threats, obscenity, insults and identity-based hate. The dataset used are comments from Wikipedia's talk page edits.
#
# Google built a range of publicly available models served through the Perspective API, including toxicity. But the current models still make errors, and they don't allow users to select which types of toxicity they're interested in finding (e.g. some platforms may be fine with profanity, but not with other types of toxic content).
#
# We have extended our neural network model further to show that it can detect the level of toxicity and also correctly classify the type of toxicity, which is not provided by Google's Perspective API. Below is an example of an input test comment "I will kill you", which has been rightly classified by this model as 95% threat.
#
# We then compared our model outputs with those of Google's Perspective API's model for various input sentences.
#
# We further intend on extending our model to run over twitter comments.
# # Approach
#
# The idea is to use the train dataset to train the models, with the words in the comments as predictor variables, to predict the probability of each toxicity level of a comment - a pre-trained model that generalizes well to diverse user comments can help combat the ongoing issue of online forum abuse. Our project is focused on developing a series of neural network models. The goal is to find the strengths and weaknesses of different Deep Learning models on the text classification task. We developed 3 specific Neural Network models for this project, which are as follows:
# 1. Convolution Neural Network (CNN with character-level embedding)
# 2. Convolution Neural Network (CNN with word embedding)
# 3. Recurrent Neural Network (RNN) with Long Short Term Memory (LSTM) cells
# We intend to test the above models trained with the Wikipedia data and word embeddings.
# # Toxic Comment Classification
#
# In this notebook, we'll be developing Neural Network models that can classify string comments based on their toxicity:
# * `toxic`
# * `severe_toxic`
# * `obscene`
# * `threat`
# * `insult`
# * `identity_hate`
#
# This is a part of the [Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) Kaggle competition. From the site:
#
# >In this competition, you're challenged to build a multi-headed model that's capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate better than Perspective's current models. You'll be using a dataset of comments from Wikipedia's talk page edits. Improvements to the current model will hopefully help online discussion become more productive and respectful.
# # Import Data
# The data we'll be using consists of a large number of Wikipedia comments which have been labeled by humans according to their relative toxicity. The data can be found here. Download the following and store in the directory's data folder.
#
# * train.csv - the training set; contains comments with their binary labels.
# * test.csv - the test set; predict toxicity probabilities for these comments.
# * sample_submission.csv - a sample submission in the correct format.
import sys, os, re, csv, codecs, numpy as np, pandas as pd
import matplotlib.pyplot as plt
from IPython.core.display import display, HTML
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
# function for plotting the history of model training
# (assumes a Spark `sqlContext` is available in the environment, e.g. on Databricks)
from pyspark.sql import Row
def plot_history(history_arg):
array = []
i =1
j =1
for acc in history_arg.history['acc']:
array.append(Row(epoch=i, accuracy=float(acc)))
i = i+1
acc_df = sqlContext.createDataFrame((array))
array = []
for loss in history_arg.history['loss']:
array.append(Row(epoch = j, loss = float(loss)))
j = j+1
loss_df = sqlContext.createDataFrame(array)
display_df = acc_df.join(loss_df,on=("epoch")).orderBy("epoch")
return display_df
# # Explore Data
# Extract the features and labels and take a look at sample data.
# +
from pyspark.sql.types import *
#Class labels
list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
#Read the data
toxicWordsTrain = pd.read_csv("train.csv")
toxicWordsTest = pd.read_csv("test.csv")
y_train = toxicWordsTrain[list_classes].values
x_train = toxicWordsTrain["comment_text"]
x_test = toxicWordsTest["comment_text"]
submission = pd.read_csv('sample_submission.csv')
# -
# #### Train Data
# On analyzing the train data set, it was noted that the toxic levels in the comments are classified as shown below:
# +
import seaborn as sns
colors_list = ["brownish green", "pine green", "ugly purple",
"blood", "deep blue", "brown", "azure"]
palette= sns.xkcd_palette(colors_list)
x=toxicWordsTrain.iloc[:,2:].sum()
#print(x.index)
plt.figure(figsize=(9,6))
ax= sns.barplot(x.index, x.values,palette=palette)
plt.title("Class")
plt.ylabel('Occurrences', fontsize=12)
plt.xlabel('Type ')
rects = ax.patches
labels = x.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 10, label,
ha='center', va='bottom')
display(plt.show())
# -
# Sample from dataset
for sample_i in range(3):
print('Comment #{}: {}'.format(sample_i + 1, x_train[sample_i]))
print('Label #{}: {}'.format(sample_i + 1, y_train[sample_i]))
# +
# Explore vocabulary
import collections
from tqdm import tqdm
# Create a counter object for each dataset
word_counter = collections.Counter([word for sentence in tqdm(x_train, total=len(x_train)) \
for word in sentence.split()])
print('{} words.'.format(len([word for sentence in x_train for word in sentence.split()])))
print('{} unique words.'.format(len(word_counter)))
print('10 Most common words in the dataset:')
print('"' + '" "'.join(list(zip(*word_counter.most_common(10)))[0]) + '"')
# -
# The dataset contains 10,734,904 words, 532,299 of which are unique, and the 10 most common being: "the", "to", "of", "and", "a", "I", "is", "you", "that", and "in". One problem here is that we are counting uppercase words as different from lower case words and a bunch of other symbols that aren't really useful for our goal. A data cleanup will be done in the next step.
# # Preprocessing the data
# We preprocess our data a bit so that it's in a format we can input into a neural network. The process includes:
#
# 1. Remove irrelevant characters (!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n).
# 2. Convert all letters to lowercase (HeLlO -> hello).
# 3. As this is character-level embedding, consider every character as a separate token.
# 4. Tokenize our words (hi how are you -> [23, 1, 5, 13]).
# 5. Standardize our input length with padding (hi how are you -> [23, 1, 5, 13, 0, 0, 0]).
# We can go further and consider combining misspelled, slang, or different word inflections into single base words. However, the benefit of using a neural network is that they do well with raw input, so we'll stick with what we have listed.
# +
# Tokenize and Pad
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# Create tokenizer
tokenizer = Tokenizer(num_words=None,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,
split=" ",
char_level=True)
# Fit and run tokenizer
tokenizer.fit_on_texts(list(x_train))
tokenized_train = tokenizer.texts_to_sequences(x_train)
tokenized_test = tokenizer.texts_to_sequences(x_test)
word_index = tokenizer.word_index
# Extract variables
vocab_size = len(word_index)
print('Vocab size: {}'.format(vocab_size))
longest = max(len(seq) for seq in tokenized_train)
print("Longest comment size: {}".format(longest))
average = np.mean([len(seq) for seq in tokenized_train])
print("Average comment size: {}".format(average))
stdev = np.std([len(seq) for seq in tokenized_train])
print("Stdev of comment size: {}".format(stdev))
max_len = int(average + stdev * 3)
print('Max comment size: {}'.format(max_len))
print()
# Pad sequences
processed_X_train = pad_sequences(tokenized_train, maxlen=max_len, padding='post', truncating='post')
processed_X_test = pad_sequences(tokenized_test, maxlen=max_len, padding='post', truncating='post')
# Sample tokenization
for sample_i, (sent, token_sent) in enumerate(zip(x_train[:2], tokenized_train[:2])):
print('Sequence {}'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent))
# -
# After preprocessing, our vocabulary size drops to a more manageable 210,337 with a max comment size of 371 words and an average comment size of about 68 words per sentence.
# # Embedding
# The data representation for our vocabulary is one-hot encoding where every word is transformed into a vector with a 1 in its corresponding location. For example, if our word vector is [hi, how, are, you] and the word we are looking at is "you", the input vector for "you" would just be [0, 0, 0, 1]. This works fine unless our vocabulary is huge - in this case, 210,000 - which means we would end up with word vectors that consist mainly of a bunch of 0s.
#
# Instead, we can use a Word2Vec technique to find continuous embeddings for our words. Here, we'll be using the pretrained FastText embeddings from Facebook to produce a 300-dimension vector for each word in our vocabulary.
#
# <NAME>, <NAME>, <NAME>, <NAME>, Enriching Word Vectors with Subword Information
# The benefit of this continuous embedding is that words with similar predictive power will appear closer together on our word vector. The downside is that this creates more of a black box where the words with the most predictive power get lost in the numbers.
# +
embedding_dim = 300
# Get embeddings
embeddings_index = {}
f = open('wiki.en.vec', encoding="utf8")
for line in f:
values = line.rstrip().rsplit(' ', embedding_dim)
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found {} word vectors.'.format(len(embeddings_index)))
# -
# Build embedding matrix
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# The above is a lengthy process, and hence we save the output to disk.
# Save embeddings
import h5py
with h5py.File('embeddings.h5', 'w') as hf:
hf.create_dataset("fasttext", data=embedding_matrix)
# Load embeddings
with h5py.File('embeddings.h5', 'r') as hf:
embedding_matrix = hf['fasttext'][:]
# # Model
# Convolution Neural Network(CNN) with character embeddings:
#
#
# >A character level model will use the character as the smallest entity. This can help in dealing with common misspellings, different permutations of words and languages that rely on the context for word conjugations. The model reads characters one by one, including spaces, and creates a one-hot embedding of the comment. The CNN model we used consists of a 1-dimensional convolutional layer across the concatenated character embeddings for each character in the input comment.
#
#
# Now that the data is preprocessed and our embeddings are ready, we can build a model. We will build a neural network architecture that comprises the following:
#
# * Embedding layer - word vector representations.
# * Convolutional layer - run multiple filters over the temporal data.
# * Fully connected layer - classify input based on the filters.
# +
import keras.backend
from keras.models import Sequential
from keras.layers import CuDNNGRU, Dense, Conv1D, MaxPooling1D
from keras.layers import Dropout, GlobalMaxPooling1D, BatchNormalization
from keras.layers import Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Nadam
# Initialize model
model = Sequential()
# Add Embedding layer
model.add(Embedding(vocab_size + 1, embedding_dim, weights=[embedding_matrix], input_length=max_len, trainable=True))
# Add Recurrent layers
#model.add(Bidirectional(CuDNNGRU(300, return_sequences=True)))
# Add Convolutional layer
model.add(Conv1D(filters=128, kernel_size=5, padding='same', activation='relu'))
model.add(MaxPooling1D(3))
model.add(GlobalMaxPooling1D())
model.add(BatchNormalization())
# Add fully connected layers
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(6, activation='sigmoid'))
# Summarize the model
model.summary()
# -
# # Compile the model
# We'll be using binary crossentropy as our loss function and clipping our gradients to avoid any explosions.
# +
def loss(y_true, y_pred):
return keras.backend.binary_crossentropy(y_true, y_pred)
lr = .0001
model.compile(loss=loss, optimizer=Nadam(lr=lr, clipnorm=1.0),
metrics=['binary_accuracy'])
# -
# # Evaluation Metric
# To evaluate our model, we'll be looking at its AUC ROC score (area under the receiver operating characteristic curve). We will be looking at the probability that our model ranks a randomly chosen positive instance higher than a randomly chosen negative one. With data that mostly consists of negative labels (no toxicity), our model could just learn to always predict negative and end up with a pretty high accuracy. AUC ROC helps correct this by putting more weight on the positive examples.
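# This intuition can be reproduced with a small pure-Python sketch (made-up scores, not our model's outputs) - AUC is literally the fraction of positive/negative pairs that are ranked correctly:

```python
import random

random.seed(0)
# 95 non-toxic comments scored low, 5 toxic comments scored high (made-up data)
neg_scores = [random.uniform(0.0, 0.4) for _ in range(95)]
pos_scores = [random.uniform(0.6, 1.0) for _ in range(5)]

# A degenerate model that always predicts "not toxic" is 95% accurate...
accuracy_always_negative = 95 / 100

# ...but AUC = P(a random positive is scored above a random negative),
# which rewards ranking the rare toxic examples correctly
pairs_correct = sum(p > n for p in pos_scores for n in neg_scores)
auc = pairs_correct / (len(pos_scores) * len(neg_scores))
print(accuracy_always_negative, auc)  # 0.95 1.0
```

A constant predictor would score an AUC of 0.5 regardless of class balance, which is why we monitor AUC rather than raw accuracy during training.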
# +
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from keras.callbacks import Callback
class RocAucEvaluation(Callback):
    def __init__(self, filepath, validation_data=(), interval=1, max_epoch=100):
        super(RocAucEvaluation, self).__init__()
        # Initialize state variables
self.interval = interval
self.filepath = filepath
self.stopped_epoch = max_epoch
self.best = 0
self.X_val, self.y_val = validation_data
self.y_pred = np.zeros(self.y_val.shape)
    def on_epoch_end(self, epoch, logs={}):
if epoch % self.interval == 0:
y_pred = self.model.predict_proba(self.X_val, verbose=0)
current = roc_auc_score(self.y_val, y_pred)
logs['roc_auc_val'] = current
if current > self.best: #save model
print(" - AUC - improved from {:.5f} to {:.5f}".format(self.best, current))
self.best = current
self.y_pred = y_pred
self.stopped_epoch = epoch+1
self.model.save(self.filepath, overwrite=True)
else:
print(" - AUC - did not improve")
[X, X_val, y, y_val] = train_test_split(processed_X_train, y_train, test_size=0.03, shuffle=False)
RocAuc = RocAucEvaluation(filepath='model.best.hdf5',validation_data=(X_val, y_val), interval=1)
# -
# # Train the model
# Train with a batch size of 64 and a single epoch.
# +
from keras.callbacks import EarlyStopping, ModelCheckpoint
# Note: the model was already compiled above with Nadam and gradient clipping,
# so it is not recompiled here (recompiling would discard those settings).
# Set variables
batch_size = 64
epochs = 1
# Set early stopping
early_stop = EarlyStopping(monitor="roc_auc_val", mode="max", patience=2)
# Train
graph = model.fit(X, y, batch_size=batch_size, epochs=epochs,
validation_data=(X_val, y_val), callbacks=[RocAuc, early_stop],
verbose=1, shuffle=False)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
# Visualize history of loss
plt.plot(graph.history['loss'])
plt.plot(graph.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# -
# ### Load best weights and predict
model.load_weights('model.best.hdf5')
predictions = model.predict(processed_X_test, verbose=0)
# ### Kaggle competition submission format
sample_submission = pd.read_csv("sample_submission.csv")
sample_submission[list_classes] = predictions
sample_submission.to_csv("submission3.csv", index=False)
# ### App prediction
# An app pipeline that can be put into production for toxic comment classification. It takes in a string and returns the probability of each of the toxic classifications.
# +
def toxicity_level(string):
"""
    Return toxicity probability based on input string.
"""
# Process string
new_string = [string]
new_string = tokenizer.texts_to_sequences(new_string)
new_string = pad_sequences(new_string, maxlen=max_len, padding='post', truncating='post')
# Predict
prediction = model.predict(new_string)
# Print output
print("Toxicity levels for '{}':".format(string))
print('Toxic: {:.0%}'.format(prediction[0][0]))
print('Severe Toxic: {:.0%}'.format(prediction[0][1]))
print('Obscene: {:.0%}'.format(prediction[0][2]))
print('Threat: {:.0%}'.format(prediction[0][3]))
print('Insult: {:.0%}'.format(prediction[0][4]))
print('Identity Hate: {:.0%}'.format(prediction[0][5]))
print()
return
toxicity_level('go jump off a bridge jerk')
toxicity_level('i will kill you')
toxicity_level('have a nice day')
toxicity_level('hola, como estas')
toxicity_level('hola mierda joder')
toxicity_level('fuck off!!')
# -
# From the sample data above, it can be seen that the predictions are not very accurate.
# This model is not able to predict the probabilities as accurately as the CNN with word embeddings.
# This information is not available from Google's Perspective API model, where it can be subcategorized as a threat.
# ### Sample test data
toxicity_level('Whats up')
# The same data on Google's Perspective API gives a toxicity level of 5%, while the above model gives a toxicity level of 31%.
# ## REFERENCES
# [[1] https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge#description](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge#description)
# <br>
# [[2] https://motherboard.vice.com/en_us/article/qvvv3p/googles-anti-bullying-ai-mistakes-civility-for-decency](https://motherboard.vice.com/en_us/article/qvvv3p/googles-anti-bullying-ai-mistakes-civility-for-decency)
# <br>
# [[3] http://colah.github.io/posts/2015-08-Understanding-LSTMs/](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
# <br>
# [[4] http://nymag.com/selectall/2017/02/google-introduces-perspective-a-tool-for-toxic-comments.html](http://nymag.com/selectall/2017/02/google-introduces-perspective-a-tool-for-toxic-comments.html)
# <br>
# [[5] https://web.stanford.edu/class/cs224n/reports/2762092.pdf](https://web.stanford.edu/class/cs224n/reports/2762092.pdf)
# <br>
# [[6] https://www.kaggle.com/sbongo/for-beginners-tackling-toxic-using-keras](https://www.kaggle.com/sbongo/for-beginners-tackling-toxic-using-keras)
# <br>
# [[7] https://datascience.stackexchange.com/questions/11619/rnn-vs-cnn-at-a-high-level](https://datascience.stackexchange.com/questions/11619/rnn-vs-cnn-at-a-high-level)
# <br>
# [[8] https://www.depends-on-the-definition.com/classify-toxic-comments-on-wikipedia/](https://www.depends-on-the-definition.com/classify-toxic-comments-on-wikipedia/)
# <br>
# [[9] http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)
# <br>
# [[10] http://web.stanford.edu/class/cs224n/reports/6838795.pdf](http://web.stanford.edu/class/cs224n/reports/6838795.pdf)
# <br>
# [[11] http://dsbyprateekg.blogspot.com/2017/12/can-you-build-model-to-predict-toxic.html](http://dsbyprateekg.blogspot.com/2017/12/can-you-build-model-to-predict-toxic.html)
# <br>
# [[12] https://arxiv.org/pdf/1802.09957.pdf](https://arxiv.org/pdf/1802.09957.pdf)
# <br>
# [[13] http://web.stanford.edu/class/cs224n/reports/6909170.pdf](http://web.stanford.edu/class/cs224n/reports/6909170.pdf)
# <br>
# [[14] http://web.stanford.edu/class/cs224n/reports/6838601.pdf](http://web.stanford.edu/class/cs224n/reports/6838601.pdf)
# <br>
# [[15] http://web.stanford.edu/class/cs224n/reports.html](http://web.stanford.edu/class/cs224n/reports.html)
# <br>
# [[16] https://medium.com/@srjoglekar246/first-time-with-kaggle-a-convnet-to-classify-toxic-comments-with-keras-ef84b6d18328](https://medium.com/@srjoglekar246/first-time-with-kaggle-a-convnet-to-classify-toxic-comments-with-keras-ef84b6d18328)
#
#
# ## License
# The text in the document by "<NAME>, <NAME>" is licensed under CC BY 3.0 https://creativecommons.org/licenses/by/3.0/us/
#
# The code in the document by <Author(s)> is licensed under the MIT License https://opensource.org/licenses/MIT
# Source notebook: CNNwithCharEmbedding.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Statistical Inference and Confidence Intervals
#
# ### Learning Objectives
# - Explain the relationships among parameter, sample, statistic, and population.
# - Define and describe sampling distribution.
# - Describe the Central Limit Theorem.
# - Generate and interpret a theoretical confidence interval.
#
# ## Video Game Example
# Let's say you are playing a video game (like "Halo" or "Call of Duty") where the goal is to kill your opponent. Additionally, let's say your opponent is invisible.
#
# When deciding which weapon to use, you have two options:
# - a sniper rifle with one bullet in it, or
# - a grenade launcher with one grenade in it.
#
# <details><summary>Which weapon would you prefer?</summary>
#
# - You're likely going to prefer the grenade launcher!
# - Why? Well, an explosion from a grenade will cover more area than one bullet fired from a rifle.
#
# 
# </details>
#
# This is the same as the logic behind confidence intervals. By calculating a statistic on a sample, ***maybe*** we get lucky and our statistic is exactly equal to our parameter... however, we're probably not going to get this lucky.
#
# Let's see an example of that.
#
# ## Polling Example
#
# You're running for office in a small town of 1,000 voters. Everyone in your town cares deeply about voting, so all 1,000 of them are going to vote.
#
# You'd like to ask "All in all, do you think things in the nation are generally headed in the right direction?"
# +
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# -
# Set a seed so we get the same results.
np.random.seed(42)
# PEP 8 - Style Guide for Python
x = 1 + 2 + 3 + 4
y = 1 + 2*3 + 4
# We are simulating a population of 1,000.
# Each person has a 40% chance of saying
# "Yes, things are headed in the right direction."
population = np.random.binomial(n=1,
p=0.4,
size=1000)
# +
# What percentage of our population thinks the country is headed in the right direction?
np.mean(population)
# -
# Above, we simulated a population of people where **38.7%** of them think the country is headed in the right direction.
#
# **But your campaign doesn't know this. Your campaign wants to learn what the true value of $p$ is!**
#
# The problem is, you don't have enough money and time to call all 1,000 of them. You can only call 50.
sample = np.random.choice(population,
size = 50,
replace = False)
np.mean(sample)
sample_2 = np.random.choice(population,
size = 50,
replace = False)
np.mean(sample_2)
sample_3 = np.random.choice(population,
size = 50,
replace = False)
np.mean(sample_3)
sample_4 = np.random.choice(population,
size = 50,
replace = False)
np.mean(sample_4)
# #### Even if we randomly sample, we aren't guaranteed to get a good sample!
#
# <details><summary>How do we get around this?</summary>
#
# 
# ### By switching to our grenade launcher.
# </details>
#
# When a poll is reported, you likely see something like this:
#
# 
#
# In the upper-right corner, you can see "$\text{margin of error }\pm\text{ }3.1$".
#
# #### What is a margin of error?
# This means that it's pretty likely that these poll results are within "plus 3.1%" or "minus 3.1%" of the real value.
#
# #### Why is there a margin of error?
# We recognize that one sample of 50 people can't definitively speak for all registered voters! If I had taken a different sample of 50 people, then my results might be pretty different. We hope not, but it's entirely possible.
#
# The margin of error is a way for us to describe our uncertainty in our statistic based on how much our statistic changes from one sample to another sample.
# - Realistically, we only pull one sample of size $n$ out of all possible samples of size $n$.
# - We only see one sample percentage out of all possible statistics.
# - We won't ever actually **see** the sample-to-sample variability!
# - This makes sense, right? It doesn't make sense for me to take ten samples of size 50... instead, I would just take one sample of 500!
#
# #### If we don't ever actually observe how much our statistic changes from one sample to another sample, then how can we get a margin of error?
#
# There are two ways to do this:
# - We can get theory to do it. (i.e. relying on statistics and probability theory)
# - We can estimate it empirically from our existing data.
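# A quick sketch of the empirical route (bootstrap resampling, on simulated poll data rather than the sample above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical poll: 50 yes(1)/no(0) responses (not the notebook's sample).
sample = rng.binomial(n=1, p=0.4, size=50)

# Bootstrap: resample with replacement many times and watch how much
# the sample percentage moves from one resample to the next.
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(2000)]
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap interval: ({lower:.3f}, {upper:.3f})")
```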
# ## Confidence Interval Based on Theory
#
# By quantifying the margin of error, we can construct what is known as a **confidence interval**.
#
# A confidence interval is a set of likely values for the parameter of interest.
#
# ---
#
# <details><summary>If I could theoretically plot all possible sample percentages and how frequently I see each sample percentage... what is this?</summary>
#
# - This is the distribution of all sample percentages!
# - This is known as the **sampling distribution**.
# </details>
#
# Luckily, there is a theoretical result about this exact thing!
#
# ### The Central Limit Theorem
# The Central Limit Theorem is the most important theorem in all of statistics. It states:
#
# As the size of our sample $n$ gets closer and closer to infinity, our sampling distribution (the distribution of all possible sample means) approaches a Normal distribution with mean $\mu$ and standard deviation $\frac{\sigma}{\sqrt{n}}$.
#
# **In English**: This means that if I take a sample of size $n$ and find the mean of that sample, then do it for all possible samples of size $n$, this distribution of sample means should be Normally distributed as long as $n$ is big enough.
#
# **Practically**: If I want to study the sample mean (or the sample percentage), I can use the Normal distribution to generate a confidence interval, as long as the size of our sample $n$ is large enough!
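# We can check the Central Limit Theorem by simulation: draw many samples from a decidedly non-Normal population and look at the distribution of their means.

```python
import numpy as np

rng = np.random.default_rng(42)
# A clearly non-Normal (right-skewed) population.
population = rng.exponential(scale=2.0, size=100_000)

# Means of many samples of size n = 50.
n = 50
sample_means = rng.choice(population, size=(10_000, n)).mean(axis=1)

# CLT: the means should center on mu with spread about sigma / sqrt(n).
print(sample_means.mean(), population.mean())         # both close to 2.0
print(sample_means.std(), population.std() / n**0.5)  # both close to 0.28
```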
# ### Confidence Interval Formula
#
# The formula for a confidence interval is:
#
# $$
# \text{[sample statistic]} \pm \text{[multiplier]} \times \text{[standard deviation of sampling distribution]}
# $$
#
# - The **sample statistic** is the statistic of our sample!
# - The **standard deviation of the sampling distribution** quantifies that sample-to-sample variability for us. (This is commonly called the [standard error](https://stattrek.com/estimation/standard-error.aspx).)
# - The **multiplier** is a number drawn from the Normal distribution that makes sure our confidence interval is appropriately wide given how confident we want to be in our result.
# - The **margin of error** is the multiplier times the standard deviation of the sampling distribution.
#
# *Extra:* To learn about the derivation of the confidence interval for a given confidence level, [head here](https://amsi.org.au/ESA_Senior_Years/SeniorTopic4/4h/4h_2content_11.html).
#
# ---
#
# Example: I want to find the 95% confidence interval for the percentage of people who think the nation is on the right track.
#
# The formula is:
#
# $$
# \begin{eqnarray*}
# \text{[sample statistic] } &\pm& \text{[multiplier] } \times \text{[standard deviation of sampling distribution]} \\
# \bar{x} &\pm& z^* \times \frac{\sigma}{\sqrt{n}} \\
# \Rightarrow \bar{x} &\pm& 1.96 \times \frac{\sigma}{\sqrt{n}}
# \end{eqnarray*}
# $$
sample_mean = np.mean(sample)
sigma = np.std(sample)
n = len(sample)
sample_mean - 1.96 *sigma/(n ** 0.5)
sample_mean + 1.96 *sigma/(n ** 0.5)
# Our 95% confidence interval for the percentage of people who think our country is on the right track is **(46.42%, 73.57%)**.
#
# #### Interpretation (*this will come up in interviews*)
#
# In general: **"With confidence level 95%, the true population mean lies in the confidence interval."**
#
# For this example: **"With confidence level 95%, the true population percentage of people who think our country is on the right track is between 46.42% and 73.57%."**
# - Generally, we would say:
# - "I am {confidence level}% confident
# - that the true population {parameter}
# - is between {lower confidence bound} and {upper confidence bound}."
#
# ---
#
# Two common misconceptions:
#
# 1. There is *not* a 95% probability that the true parameter lies within a particular confidence interval. Make sure you do not use the word probability! Instead, we are confident that over a large number of samples, 95% of the intervals will contain the population parameter.
#
# 2. As the sample size $n$ increases, the standard deviation of the sampling distribution decreases. However, a small standard deviation by itself does not imply that the mean is accurate. (For example, units matter!)
#
# ---
#
# Write a function called `conf_int()` to take in an array of data and return a 95% confidence interval. Run your function on `sample_2` and interpret your results.
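# One way the `conf_int()` exercise could be completed (a sketch, using the 1.96 multiplier for 95% confidence; the toy responses passed in are made up):

```python
import numpy as np

def conf_int(data, multiplier=1.96):
    """Return a (lower, upper) 95% confidence interval for the mean of data."""
    data = np.asarray(data, dtype=float)
    mean = data.mean()
    margin = multiplier * data.std() / np.sqrt(len(data))
    return mean - margin, mean + margin

# e.g. on ten made-up yes/no poll responses:
lo, hi = conf_int([0, 1, 1, 0, 1, 0, 0, 1, 1, 1])
print(lo, hi)
```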
# <details><summary>Interpretation:</summary>"I am 95% confident that the true population percentage of people who believe our country is on the right track is between 30.24% and 57.76%."</details>
#
# ---
#
# Note: For a confidence interval, our multiplier is 1.96. The number 1.96 comes from a standard Normal distribution.
# - The area under the standard Normal distribution between -1.96 and +1.96 is 95%.
# - For 90% confidence, use 1.645.
# - For 99% confidence, use 2.576.
#
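# These multipliers come straight from the inverse CDF of the standard Normal distribution, available in the Python standard library:

```python
from statistics import NormalDist

# The multiplier for confidence level C is the z-value leaving (1 - C) / 2
# in each tail of the standard Normal distribution.
for level in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf((1 + level) / 2)
    print(f"{level:.0%} confidence -> multiplier {z:.3f}")
```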
# #### This seems straightforward enough... why don't we always just "use theory?"
# - The "standard deviation of the statistic" formula is easy when we're generating confidence intervals for one mean or one percentage.
# - That formula gets more complicated if we want to calculate a confidence interval for a correlation coefficient, for the difference between two means, or for something else.
# - Also, the Central Limit Theorem above describes how sample means work. Relying on the Normal distribution is tough when our sample size $n$ is small (below 30) or when we're calculating something other than basic means and percentages.
# # To sum up:
# - Our goal is usually to learn about a population.
# - Oftentimes, money, time, energy, and other constraints prevent us from measuring the entire population directly.
# - We take a sample from this population and calculate a statistic on our sample.
# - We want to use this sample statistic to understand our population parameter!
# - By just calculating a statistic, we're effectively using our sniper rifle. Instead, we want a grenade launcher!
# - The statistical equivalent of a grenade launcher is a **confidence interval**. A confidence interval is a set of likely values for the parameter of interest.
# - In order to construct our confidence interval, we use our sample statistic and attach a margin of error to it. We can then quantify how confident we are that the true population parameter is inside the interval.
# - The formula for any confidence interval is given by $\text{[sample statistic] } \pm \text{[multiplier] } \times \text{[standard deviation of sampling distribution]}$.
# - The formula for a 95% confidence interval for sample means or proportions is $\bar{x} \pm 1.96\frac{\sigma}{\sqrt{n}}$.
# - I would interpret a 95% confidence interval $(a,b)$ as follows:
# - "I am 95% confident that the true population parameter is in between $a$ and $b$."
# Source notebook: Notebook/lesson-statistical_inference_confidence_intervals/starter-code.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
basedir=""
net_df = pd.read_csv(basedir+"Sweden/Sweden_Type_of_Power_Net.csv")
net_df
sweden_net = net_df.T
sweden_net.columns = sweden_net.iloc[0]
final_net = sweden_net.drop(["type of power plants"])
final_net = final_net.rename(columns = {"type of power plants" : "year"})
gross_df = pd.read_csv(basedir+"Sweden/Sweden_Type_of_Power_Gross.csv")
gross_df.set_index("type of power plants")
sweden_gross = gross_df.T
sweden_gross.columns = sweden_gross.iloc[0]
final_gross = sweden_gross.drop(["type of power plants"])
# +
# final_gross
# -
Own_df = pd.read_csv(basedir+"Sweden/Sweden_Type_of_Power_Own_Use.csv")
Own_df.set_index("type of power plants")
sweden_own = Own_df.T
sweden_own.columns = sweden_own.iloc[0]
final_own = sweden_own.drop(["type of power plants"])
# +
# final_own
# +
# final_own.astype('float64').dtypes
# final_net.astype('float64').dtypes
# final_gross.astype('float64').dtypes
# -
final_own = final_own.reset_index().rename(columns={"index":"Year"})
final_net = final_net.reset_index().rename(columns={"index":"Year"})
final_gross = final_gross.reset_index().rename(columns={"index":"Year"})
final_own = final_own.rename(columns={"sum of supply": "total_supply"})
final_net = final_net.rename(columns={"sum of supply": "total_supply"})
final_gross = final_gross.rename(columns={"sum of supply": "total_supply"})
final_net.drop("total_supply", axis=1).plot.area(figsize = [15,10], title = "Supply of Electricity, Sweden (1986-2017) in GWh")
plt.show()
final_net["wind"].plot(grid=True)
plt.show()
new_df = final_gross.total_supply.values.reshape(-1, 1).astype("float64")
# new_df
n_df = final_gross.hydro.values.reshape(-1, 1).astype("float64")
# new_df
# +
# final_net.to_csv("sweden_net.csv")
# final_gross.to_csv("sweden_gross.csv")
# final_own.to_csv("sweden_own.csv")
# -
X = final_gross.drop("total_supply", axis=1)
y = new_df.astype("float64")
print(X.shape, y.shape)
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=89)
# -
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# +
model.fit(X_train, y_train)
training_score = model.score(X_train, y_train)
testing_score = model.score(X_test, y_test)
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
# -
plt.scatter(model.predict(X_train), model.predict(X_train) - y_train, c="blue", label="Training Data")
plt.scatter(model.predict(X_test), model.predict(X_test) - y_test, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y.min(), xmax=y.max())
plt.title("Residual Plot")
predictions = model.predict(X)
# Plot Residuals
plt.scatter(predictions, predictions - y)
plt.hlines(y=0, xmin=predictions.min(), xmax=predictions.max())
plt.show()
# +
# predictions
# +
# y
# +
X = n_df.astype("float64")
y = new_df.astype("float64")
print("Shape: ", X.shape, y.shape)
X
plt.scatter(X, y)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
print('Weight coefficients: ', model.coef_)
print('y-axis intercept: ', model.intercept_)
x_min = np.array([[X.min()]])
x_max = np.array([[X.max()]])
print(f"Min X Value: {x_min}")
print(f"Max X Value: {x_max}")
y_min = model.predict(x_min)
y_max = model.predict(x_max)
plt.scatter(X, y, c='blue')
plt.plot([x_min[0], x_max[0]], [y_min[0], y_max[0]], c='red')
predictions = model.predict(X)
print(f"True output: {y[0]}")
print(f"Predicted output: {predictions[0]}")
print(f"Prediction Error: {predictions[0]-y[0]}")
# -
merge_df=final_gross.iloc[24:]
price_df = pd.read_csv(basedir+"Sweden/sweden_electricty_price_10-18.csv")
price_df
price_df = price_df.drop([0])
price_df = price_df.rename(columns={"Electricity prices for households in Sweden 2010-2018, annually": "Year", "Unnamed: 1": "price"})
regress_df = merge_df.set_index('Year').join(price_df.set_index('Year'))
regress_df
# +
X = regress_df.total_supply.values.reshape(-1, 1)
y = regress_df.price.values.reshape(-1, 1)
print("Shape: ", X.shape, y.shape)
# -
plt.scatter(X, y)
model = LinearRegression()
model.fit(X, y)
print('Weight coefficients: ', model.coef_)
print('y-axis intercept: ', model.intercept_)
x_min = np.array([[X.min()]])
x_max = np.array([[X.max()]])
print(f"Min X Value: {x_min}")
print(f"Max X Value: {x_max}")
y_min = model.predict(x_min)
y_max = model.predict(x_max)
plt.scatter(X, y, c='blue')
plt.plot([x_min[0], x_max[0]], [y_min[0], y_max[0]], c='red')
predictions = model.predict(X)
print(f"True output: {y[0]}")
print(f"Predicted output: {predictions[0]}")
print(f"Prediction Error: {predictions[0]-y[0]}")
cons_df = pd.read_csv(basedir + "Sweden/Sweden_Electricty_Consumption.csv")
cons_df
sweden_cons = cons_df.T
sweden_cons.columns = sweden_cons.iloc[0]
sweden_cons = sweden_cons.reset_index().rename(columns={"index":"Year"}).drop([0])
sweden_cons
sweden_cons.drop("Total electricity consumption", axis=1).plot.line(figsize = [15,5], title = "Consumption of Electricity, Sweden (2008-2017) in GWh")
plt.show()
# Source notebook: Sweden CSV analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # imgMS example usage
#
# imgMS is a python package aimed at LA-ICP-MS data reduction. This example shows a minimalistic use of this package for bulk analysis. (In the future this example will also show elemental imaging.)
from imgMS import MSData
from imgMS import MSEval
from imgMS.side_functions import *
# ### Create Logger
# Using the function get_logger you can create a log file, where all steps of the analysis and recommendations will be saved.
logger = get_logger('/Users/nikadilli/code/imgMS/log.txt')
logger.info('Starting new session.')
# ### Create Reader
# DataReader is a class handling the import of measurement data. You can use the sample data included in the imgMS package. For help use ?MSData.DataReader.
reader = MSData.DataReader(filename='../data/data.csv', filetype='csv', instrument='raw')
# ### Create MSData object
# MSData object is the top level object storing whole analysis and offers methods for data reduction.
data = MSData.MSData(reader, logger)
# ### Add reference values
# Read the standard reference material. This excel file is included in the package and contains a few NIST values. If you need any other reference material, you can simply edit the excel file.
data.read_srms()
# ### Import Iolite
# The .Iolite.csv file is exported from NWR laser ablation instruments using Chromium software. This file contains names as well as timestamps of individual analyses (such as spots or line scans) and is essential for the selection of peaks in MSData. (Although there is another way of selecting peaks, it is recommended to use the Iolite file.)
io = MSEval.Iolite('../data/Iolite.csv')
# ### Peak selection
# Using the iolite object we can select peaks in the time-resolved analysis. It is necessary to specify the start (in seconds) of the first peak to synchronise the Iolite file and the MS data.
data.select('iolite', s=30, iolite=io)
# ### Visualise and check the results
#
# MSData offers a simple way to plot the selection of peaks, where peaks are highlighted in green and background is highlighted in red.
import matplotlib.pyplot as plt  # needed if not already pulled in by side_functions
fig, ax = plt.subplots(figsize=(15, 10))
data.graph(ax=ax)
# ### Set peak names
# To differentiate single peaks, and especially to identify the SRMs, a list of names for the peaks needs to be passed to MSData. If Iolite is used, it can take the names directly from the Iolite file.
data.names = io.names_from_iolite()
# ### Calculate average peaks
# For each measured peak we need an average value, which can be calculated in two ways: from the average intensity or by integrating the area under the curve. In both cases we need to subtract the background, for which there are also multiple methods. There is also an option for despiking before calculating the average.
data.average_isotopes(despiked=True, bcgcor_method='all', method='intensity')
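# The two averaging strategies can be illustrated generically on a synthetic signal (this is not the imgMS implementation, just a sketch of the idea):

```python
import numpy as np

# Synthetic time-resolved signal: a flat background with a 20 s peak on top.
t = np.arange(60, dtype=float)      # time in seconds
signal = np.full(60, 100.0)         # background level (counts/s)
signal[20:40] += 500.0              # peak region, 500 counts/s above background

bkg = signal[:20].mean()            # background from the pre-peak region
peak = signal[20:40] - bkg          # background-corrected peak

mean_intensity = peak.mean()        # "intensity" method
# "area" method: trapezoid-rule integration of the corrected peak
area = float(np.sum((peak[:-1] + peak[1:]) / 2 * np.diff(t[20:40])))
print(mean_intensity)  # 500.0
print(area)            # 9500.0 (500 counts/s over a 19 s base)
```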
# ### Quantify the sample
# In previous step an average intensity for each peak was calculated. There are 2 reference materials and one sample. The SRM with values closer to the sample should be used for quantification, the second SRM is used as a controll sample.
data.quantify_isotopes(srm_name='NIST612')
# ### Calculate Detection limit
# Data evaluation wouldn't be complete without detection limits. MSData offers a simple method for their calculation. The method and scale should correspond to the method and background correction used for averaging the peaks.
data.detection_limit(method='intensity', scale='all')
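# For intuition, a widely used convention for a detection limit is the mean background plus three standard deviations; this is only an assumption for illustration, as imgMS's exact formula may differ:

```python
import numpy as np

# Hypothetical background readings (counts/s) for one isotope.
background = np.array([98.0, 101.0, 100.0, 99.0, 102.0, 100.0])

# 3-sigma convention: detection limit = mean background + 3 * std.
# (An assumption for illustration; check your method's definition.)
lod = background.mean() + 3 * background.std(ddof=1)
print(round(lod, 2))  # 104.24
```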
# ### Clean the data
#
# MSData offers a report method, which will round all values to a reasonable number of decimal places and replace all values under the detection limit.
data.report()
# ### Export
# The last step is to get the results in a format that can be shared or published. MSData exports all steps of the analysis into single excel file.
data.export('../data/results.xlsx')
# # imgMS example of imaging
logger = get_logger('/Users/nikadilli/code/imgMS/log.txt')
logger.info('Starting new session.')
reader = MSData.DataReader(filename='../data/mapa1_data.csv', filetype='csv', instrument='raw')
data = MSData.MSData(reader, logger)
io = MSEval.Iolite('../data/mapa1.Iolite.csv')
data.select('iolite', s=10, iolite=io)
fig, ax = plt.subplots(figsize=(15, 10))
data.graph(ax=ax)
data.create_maps(bcgcor_method='end')
data.isotopes['Ca44'].elmap()
# Source notebook: imgMS/imgMS_example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import dask.array as da
import dask.dataframe as dd
import dask.delayed as delayed
import time
import math
#import graphviz
from netCDF4 import Dataset
import os,datetime,sys,fnmatch
import h5py
import dask
# +
def read_filelist(loc_dir, prefix, unie, fileformat):
    # List the files in the specified directory matching the pattern
    out = os.popen("ls " + loc_dir + prefix + unie + "*." + fileformat).read()
    fname = np.array(out.split("\n"))
    fname = np.delete(fname, len(fname) - 1)  # drop the trailing empty entry
    return fname
def read_MODIS(fname1, fname2, verbose=False):  # READ THE HDF FILE
    # Read the cloud mask from the MYD06_L2 product
ncfile=Dataset(fname1,'r')
CM1km = np.array(ncfile.variables['Cloud_Mask_1km'])
CM = (np.array(CM1km[:,:,0],dtype='byte') & 0b00000110) >>1
ncfile.close()
ncfile=Dataset(fname2,'r')
lat = np.array(ncfile.variables['Latitude'])
lon = np.array(ncfile.variables['Longitude'])
attr_lat = ncfile.variables['Latitude']._FillValue
attr_lon = ncfile.variables['Longitude']._FillValue
return lat,lon,CM
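# The bitwise expression above isolates bits 1-2 of the packed cloud-mask byte. A small standalone demonstration with hand-crafted byte values (the category meanings follow the MODIS cloud-mask bit layout):

```python
import numpy as np

# Byte 0 of the MODIS cloud mask packs several flags into one byte;
# bits 1-2 hold the cloudiness category (0 = cloudy ... 3 = confident clear).
raw = np.array([0b00000001, 0b00000011, 0b00000101, 0b00000111], dtype=np.uint8)

# Keep only bits 1-2, then shift them down to the values 0-3.
category = (raw & 0b00000110) >> 1
print(category)  # [0 1 2 3]
```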
# +
def countzero(x, axis=1):
#print(x)
count0 = 0
count1 = 0
for i in x:
if i <= 1:
count0 +=1
#print(count0/len(x))
return (count0/len(x))
def results(concat_list):
#b2=dd.concat(b1)
#b_2=b2.groupby(['Latitude','Longitude']).mean().reset_index()
#b3=b_2.reset_index()
combs=[]
for x in range(0,180):
for y in range(0,360):
combs.append((x, y))
df_1=pd.DataFrame(combs)
df_1.columns=['Latitude','Longitude']
df4=pd.merge(df_1, df2,on=('Latitude','Longitude'), how='left')
df_cm=df4['CM'].values
np_cm=df_cm.reshape(180,360)
return np_cm
# +
MYD06_dir= '/Users/dprakas1/Desktop/modis_one/'
MYD06_prefix = 'MYD06_L2.A2008'
MYD03_dir= '/Users/dprakas1/Desktop/modis_one/'
MYD03_prefix = 'MYD03.A2008'
fileformat = 'hdf'
fname1,fname2 = [],[]
days = np.arange(1, 31, dtype=int)  # np.int was removed from NumPy
for day in days:
dc ='%03i' % day
fname_tmp1 = read_filelist(MYD06_dir,MYD06_prefix,dc,fileformat)
fname_tmp2 = read_filelist(MYD03_dir,MYD03_prefix,dc,fileformat)
fname1 = np.append(fname1,fname_tmp1)
fname2 = np.append(fname2,fname_tmp2)
# Initiate the number of day and total cloud fraction
files = np.arange(len(fname1))
for j in range(0,1):#hdfs:
    print('steps: ', j+1, '/ ', len(fname1))
# Read Level-2 MODIS data
lat,lon,CM = read_MODIS(fname1[j],fname2[j])
print((fname2))
print((fname1))
#rint(CM)
#lat = lat.ravel()
#lon = lon.ravel()
#CM = CM.ravel()
CM.shape
# -
combs=[]
for x in range(0,180):
for y in range(0,360):
combs.append((x, y))
df_1=pd.DataFrame(combs)
df_1.columns=['Latitude','Longitude']
# %%time
b1=[]
def aggregateOneFileData(M06_file, M03_file):
cm = np.zeros((2030,1354), dtype=np.float32)
lat = np.zeros((2030,1354), dtype=np.float32)
lon = np.zeros((2030,1354), dtype=np.float32)
#print(fname1,fname2)
myd06 = Dataset(M06_file, "r")
CM = myd06.variables["Cloud_Mask_1km"][:,:,0]# Reading Specific Variable 'Cloud_Mask_1km'.
CM = (np.array(CM,dtype='byte') & 0b00000110) >>1
CM = np.array(CM).byteswap().newbyteorder()
#print("CM intial shape:",CM.shape)
cm = da.concatenate((cm,CM),axis=0)
#print("CM shape after con:",cm.shape)
cm=da.ravel(cm)
#print("cm shape after ravel:",cm.shape)
myd03 = Dataset(M03_file, "r")
latitude = myd03.variables["Latitude"][:,:]
longitude = myd03.variables["Longitude"][:,:]
#print("Lat intial shape:",latitude.shape)
#print("lon intial shape:",longitude.shape)
lat = da.concatenate((lat,latitude),axis=0)
lon = da.concatenate((lon,longitude),axis=0)
#print("lat shape after con:",lat.shape)
#print("lon shape after con:",lon.shape)
lat=da.ravel(lat)
lon=da.ravel(lon)
#print("lat shape after ravel:",lat.shape)
#print("lon shape after ravel:",lon.shape)
cm=cm.astype(int)
lon=lon.astype(int)
lat=lat.astype(int)
lat=lat+90
lon=lon+180
Lat=lat.to_dask_dataframe()
Lon=lon.to_dask_dataframe()
CM=cm.to_dask_dataframe()
df=dd.concat([Lat,Lon,CM],axis=1,interleave_partitions=False)
print(type(df))
cols = {0:'Latitude',1:'Longitude',2:'CM'}
df = df.rename(columns=cols)
df2=(df.groupby(['Longitude','Latitude']).CM.apply(countzero).reset_index())
df3=df2.compute()
df4=pd.merge(df_1, df3,on=('Latitude','Longitude'), how='left')
df_cm=df4['CM'].values
np_cm=df_cm.reshape(180,360)
return np_cm
tt=aggregateOneFileData(fname1[0],fname2[0])
tt
from dask.distributed import Client, LocalCluster
cluster = LocalCluster()
client = Client(cluster)
result=client.map(aggregateOneFileData, fname1, fname2) # map the function over the full file lists
result
import pandas as pd
import numpy as np
import dask.array as da
import dask.dataframe as dd
import dask.delayed as delayed
import time
import math
#import graphviz
from netCDF4 import Dataset
import os,datetime,sys,fnmatch
import h5py
import dask
import glob
M03_dir = "/Users/dprakas1/Desktop/modis_one/MYD03/"
M06_dir = "/Users/dprakas1/Desktop/modis_one/MYD06_L2/"
M03_files = sorted(glob.glob(M03_dir + "MYD03.A2008*"))
M06_files = sorted(glob.glob(M06_dir + "MYD06_L2.A2008*"))
myd06 = Dataset(M06_files[0], "r") # netCDF4.Dataset expects a single file path, not a list
CM = myd06.variables["Cloud_Mask_1km"][:,:,0]# Reading Specific Variable 'Cloud_Mask_1km'.
CM = (np.array(CM,dtype='byte') & 0b00000110) >>1
CM = np.array(CM).byteswap().newbyteorder()
#MOD03_path = sys.argv[1] # when run as a script, e.g. 'HDFFiles/'
MOD03_path = '/Users/dprakas1/Desktop/modis_one/'
#MOD06_path = sys.argv[2] # when run as a script, e.g. 'HDFFiles/'
MOD06_path = '/Users/dprakas1/Desktop/modis_one/'
# +
def read_MODIS_level2_data(MOD06_file,MOD03_file):
#print('Reading The Cloud Mask From MOD06_L2 Product:')
myd06 = Dataset(MOD06_file, "r")
CM = myd06.variables["Cloud_Mask_1km"][:,:,:] # Reading Specific Variable 'Cloud_Mask_1km'.
CM = (np.array(CM[:,:,0],dtype='byte') & 0b00000110) >>1
CM = np.array(CM).byteswap().newbyteorder()
# print('The Level-2 Cloud Mask Array Shape',CM.shape)
print(' ')
myd03 = Dataset(MOD03_file, "r")
#print('Reading The Latitude-Longitude From MOD03 Product:')
latitude = myd03.variables["Latitude"][:,:] # Reading Specific Variable 'Latitude'.
latitude = np.array(latitude).byteswap().newbyteorder() # Addressing Byteswap For Big Endian Error.
longitude = myd03.variables["Longitude"][:,:] # Reading Specific Variable 'Longitude'.
longitude = np.array(longitude).byteswap().newbyteorder() # Addressing Byteswap For Big Endian Error.
#print('The Level-2 Latitude-Longitude Array Shape',latitude.shape)
print(' ')
return latitude,longitude,CM
# +
MOD03_filepath = 'MYD03.A*.hdf'
MOD06_filepath = 'MYD06_L2.A*.hdf'
MOD03_filename, MOD06_filename =[],[]
MOD03_filename2, MOD06_filename2 =[],[]
for MOD06_filelist in os.listdir(MOD06_path):
if fnmatch.fnmatch(MOD06_filelist, MOD06_filepath):
MOD06_filename = MOD06_filelist
MOD06_filename2.append(MOD06_filelist)
for MOD03_filelist in os.listdir(MOD03_path):
if fnmatch.fnmatch(MOD03_filelist, MOD03_filepath):
MOD03_filename = MOD03_filelist
MOD03_filename2.append(MOD03_filelist)
if MOD03_filename and MOD06_filename:
print('Reading Level 2 GeoLocation & Cloud Data')
Lat,Lon,CM = read_MODIS_level2_data(MOD06_path+MOD06_filename,MOD03_path+MOD03_filename)
print('The Number Of Files In The MOD03 List: ')
print(len(MOD03_filename2))
print(' ')
print('The Number Of Files In The MOD06_L2 List: ')
print(len(MOD06_filename2))
print(' ')
# -
MOD03_filename2
result=aggregateOneFileData(MOD06_path+MOD06_filename2[0], MOD03_path+MOD03_filename2[0]) # the function expects single file paths, not lists
# %%time
b1=[]
def aggregateOneFileData(MOD06_filename2, MOD03_filename2):
cm = np.zeros((2030,1354), dtype=np.float32)
lat = np.zeros((2030,1354), dtype=np.float32)
lon = np.zeros((2030,1354), dtype=np.float32)
#print(fname1,fname2)
myd06 = Dataset(MOD06_filename2, "r")
CM = myd06.variables["Cloud_Mask_1km"][:,:,0]# Reading Specific Variable 'Cloud_Mask_1km'.
CM = (np.array(CM,dtype='byte') & 0b00000110) >>1
CM = np.array(CM).byteswap().newbyteorder()
#print("CM intial shape:",CM.shape)
cm = da.concatenate((cm,CM),axis=0)
#print("CM shape after con:",cm.shape)
cm=da.ravel(cm)
#print("cm shape after ravel:",cm.shape)
myd03 = Dataset(MOD03_filename2, "r")
latitude = myd03.variables["Latitude"][:,:]
longitude = myd03.variables["Longitude"][:,:]
#print("Lat intial shape:",latitude.shape)
#print("lon intial shape:",longitude.shape)
lat = da.concatenate((lat,latitude),axis=0)
lon = da.concatenate((lon,longitude),axis=0)
#print("lat shape after con:",lat.shape)
#print("lon shape after con:",lon.shape)
lat=da.ravel(lat)
lon=da.ravel(lon)
#print("lat shape after ravel:",lat.shape)
#print("lon shape after ravel:",lon.shape)
cm=cm.astype(int)
lon=lon.astype(int)
lat=lat.astype(int)
lat=lat+90
lon=lon+180
Lat=lat.to_dask_dataframe()
Lon=lon.to_dask_dataframe()
CM=cm.to_dask_dataframe()
df=dd.concat([Lat,Lon,CM],axis=1,interleave_partitions=False)
print(type(df))
cols = {0:'Latitude',1:'Longitude',2:'CM'}
df = df.rename(columns=cols)
df2=(df.groupby(['Longitude','Latitude']).CM.apply(countzero).reset_index())
print(type(df2))
combs=[]
for x in range(0,180):
for y in range(0,360):
combs.append((x, y))
df_1=pd.DataFrame(combs)
df_1.columns=['Latitude','Longitude']
df_2=dd.from_pandas(df_1,npartitions=10500)
df4=dd.merge(df_2, df2, on=('Latitude','Longitude'), how='left') # merge the groupby result (df2) onto the full lat/lon grid
a = df4['CM'].to_dask_array(lengths=True)
arr = da.asarray(a, chunks=(9257))
final_array=arr.reshape(180,360).compute()
return final_array
myd06_name = '/Users/saviosebastian/Documents/Project/CMAC/HDFFiles/'
def aggregateOneFileData(M06_file, M03_file):
cm = np.zeros((2030,1354), dtype=np.float32)
lat = np.zeros((2030,1354), dtype=np.float32)
lon = np.zeros((2030,1354), dtype=np.float32)
cmfilelist = []
for MOD06_file in MOD06_filename2:
MOD06_file2 = myd06_name + MOD06_file
myd06 = Dataset(MOD06_file2, "r")
CM = myd06.variables["Cloud_Mask_1km"][:,:,0]# Reading Specific Variable 'Cloud_Mask_1km'.
CM = (np.array(CM,dtype='byte') & 0b00000110) >>1
CM = np.array(CM).byteswap().newbyteorder()
#----#
cm = da.from_array(CM, chunks =(2030,1354))
#cm = da.from_array(CM, chunks =(500,500))
cmfilelist.append(cm)
cm = da.stack(cmfilelist, axis=0)
print('The Cloud Mask Array Shape Is: ',cm.shape)
#print("CM intial shape:",CM.shape)
#cm = da.concatenate((cm,CM),axis=0) # skipped: cm is already the stacked 3-D array; concatenating the 2-D CM would fail
#print("CM shape after con:",cm.shape)
cm=da.ravel(cm)
#print("cm shape after ravel:",cm.shape)
myd03 = Dataset(M03_file, "r") # use the function argument, not the undefined global fname2
latitude = myd03.variables["Latitude"][:,:]
longitude = myd03.variables["Longitude"][:,:]
#print("Lat intial shape:",latitude.shape)
#print("lon intial shape:",longitude.shape)
lat = da.concatenate((lat,latitude),axis=0)
lon = da.concatenate((lon,longitude),axis=0)
#print("lat shape after con:",lat.shape)
#print("lon shape after con:",lon.shape)
lat=da.ravel(lat)
lon=da.ravel(lon)
#print("lat shape after ravel:",lat.shape)
#print("lon shape after ravel:",lon.shape)
cm=cm.astype(int)
lon=lon.astype(int)
lat=lat.astype(int)
lat=lat+90
lon=lon+180
Lat=lat.to_dask_dataframe()
Lon=lon.to_dask_dataframe()
CM=cm.to_dask_dataframe()
df=dd.concat([Lat,Lon,CM],axis=1,interleave_partitions=False)
print(type(df))
cols = {0:'Latitude',1:'Longitude',2:'CM'}
df = df.rename(columns=cols)
df2=(df.groupby(['Longitude','Latitude']).CM.apply(countzero).reset_index())
print(type(df2))
combs=[]
for x in range(0,180):
for y in range(0,360):
combs.append((x, y))
df_1=pd.DataFrame(combs)
df_1.columns=['Latitude','Longitude']
df_2=dd.from_pandas(df_1,npartitions=10500)
df4=dd.merge(df_2, df2, on=('Latitude','Longitude'), how='left') # merge the groupby result (df2) onto the full lat/lon grid
a = df4['CM'].to_dask_array(lengths=True)
arr = da.asarray(a, chunks=(9257))
final_array=arr.reshape(180,360).compute()
return final_array
| source/dask/dask_map_working.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Elastic Tensor Dataset
# This notebook demonstrates how to pull all computed elastic tensors and associated data from the Materials Project (only the entries with no warnings; more exist). A few minimal modifications are made to the data in data_clean, largely so that a JSON file with all the data can be written and the MP API doesn't need to be called each time. At the bottom of the notebook, unphysical bulk modulus data points are pruned.
# Import libraries - the necessary packages are pandas and pymatgen
# +
import pandas as pd
import json
from pymatgen import MPRester
from pymatgen.core import Structure
# Load plotly to display inline static images
import plotly.plotly as py
# -
# to display inline static images place plotly username and api_key here
py.sign_in(username='username', api_key='api_key')
# ## Pull the data from the Materials Project API
# +
# replace 'your_mp_api_key' with your materials project api_key
# -
with MPRester('your_mp_api_key') as m:
elast = m.query(criteria={"elasticity.warnings":None, "elasticity":{"$exists":True}},
properties=['material_id', 'pretty_formula', 'nsites',
'spacegroup', 'volume', 'structure', 'elasticity',
'energy', 'energy_per_atom', 'formation_energy_per_atom', 'e_above_hull'])
len(elast)
# +
# data_clean makes sure the data is in a format that can be converted
# into a pandas dataframe and written to a JSON file
# -
def data_clean(elast):
elast_update = []
for i, elast_i in enumerate(elast):
elast_update.append({})
for k, v in elast_i.items():
if k == 'elasticity':
if 'third_order' in v.keys():
del v['third_order']
elast_update[i].update(v)
elif k == 'spacegroup':
elast_update[i]['spacegroup'] = v['number']
elif k == 'structure':
elast_update[i]['structure'] = v.as_dict()
else:
elast_update[i][k] = v
return elast_update
elast_data = data_clean(elast)
len(elast_data)
# ## Write Data to a File
# +
# write all the data to a JSON file to avoid calling the MP API every time
# +
# more data pruning might need to be done but keeping everything for now
# +
# structures will have to be converted back into pymatgen Structure objects using the from_dict method
# -
#with open('../data/elast_full_data.json', 'w') as f:
#json.dump(elast_data, f)
# ## Prune the Dataset for Bulk Modulus (K)
with open('../data/elast_full_data.json') as f:
K = json.load(f)
# +
# Look at the Bulk modulus data
# -
K_df = pd.DataFrame.from_dict(K)
len(K_df)
# +
# drop unwanted data
# -
unwanted_columns = ['G_Reuss', 'G_Voigt', 'G_Voigt_Reuss_Hill', 'K_Reuss',
'K_Voigt', 'K_Voigt_Reuss_Hill', 'compliance_tensor', 'e_above_hull',
'elastic_tensor', 'elastic_tensor_original', 'energy', 'energy_per_atom',
'formation_energy_per_atom', 'homogeneous_poisson', 'nsites',
'universal_anisotropy', 'volume', 'warnings']
K_df = K_df.drop(unwanted_columns, axis=1)
K_df.head()
# +
# We know that something was wrong with the PuBi calculation, so we remove it
# -
K_df.loc[K_df['pretty_formula'] == "PuBi"]
K_df = K_df.drop(5944)
len(K_df)
#def convert_struct(struct):
#return Structure.from_dict(struct)
# +
#K_df['structure'] = K_df['structure'].apply(convert_struct)
# -
idx = [i for i in range(len(K_df))]
# +
from matminer.figrecipes.plot import PlotlyFig
pf_K = PlotlyFig(x_title='Index',
y_title='K (GPa)',
title='Bulk Modulus',
mode='notebook')
# to have a nice interactive plotly figure set return_plot=False and comment out py.image.ishow()
# interactive labeled plots are nice for finding outliers
pd_K = pf_K.xy([(idx, K_df["K_VRH"]), ([0, 400], [0, 400])],
labels=K_df['pretty_formula'], modes=['markers', 'lines'],
lines=[{}, {'color': 'black', 'dash': 'dash'}], showlegends=False, return_plot=True)
# displays inline static image
py.image.ishow(pd_K)
# -
# ## Drop data with K values below 0 and above diamond ~436 GPa
# This drops the unphysical data and resets the numeric indices
K_cut_df = K_df.loc[(K_df['K_VRH'] > 0.0) & (K_df['K_VRH'] < 437.0)].reset_index(drop=True)
len(K_cut_df)
K_cut_df.describe()
idx_cut = [i for i in range(len(K_cut_df))]
# +
from matminer.figrecipes.plot import PlotlyFig
pf_K = PlotlyFig(x_title='Index',
y_title='K (GPa)',
title='Bulk Modulus',
mode='notebook')
# to have a nice interactive plotly figure set return_plot=False and comment out py.image.ishow()
pd_K_cut = pf_K.xy([(idx_cut, K_cut_df["K_VRH"]), ([0, 400], [0, 400])],
labels=K_cut_df['pretty_formula'], modes=['markers', 'lines'],
lines=[{}, {'color': 'black', 'dash': 'dash'}], showlegends=False, return_plot=True)
# displays inline static image
py.image.ishow(pd_K_cut)
# -
K_cut_df.loc[K_cut_df['pretty_formula'] == "PuBi"]
# ## Write K data to a file
# +
#K_cut_df.to_json(path_or_buf='../data/K_data.json', orient='index')
| notebooks/elastic_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from matplotlib import pyplot as plt
from scipy import linalg
# %matplotlib inline
np.set_printoptions(suppress=True)
from sklearn.datasets import fetch_20newsgroups
from sklearn.datasets import get_data_home
import os
from sklearn.datasets import load_files
from sklearn import decomposition
from sklearn.feature_extraction.text import TfidfVectorizer
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
# ### Load data from existing files
# Because the AWS download is very slow for this dataset, it was downloaded manually and is loaded into the notebook from local files.
# +
### the below code is from sklearn implementation of preprocessing the news group data
import re
_QUOTE_RE = re.compile(r'(writes in|writes:|wrote:|says:|said:'
r'|^In article|^Quoted from|^\||^>)')
def strip_newsgroup_quoting(text):
"""
Given text in "news" format, strip lines beginning with the quote
characters > or |, plus lines that often introduce a quoted section
(for example, because they contain the string 'writes:'.)
"""
good_lines = [line for line in text.split('\n')
if not _QUOTE_RE.search(line)]
return '\n'.join(good_lines)
# -
def strip_newsgroup_footer(text):
"""
Given text in "news" format, attempt to remove a signature block.
As a rough heuristic, we assume that signatures are set apart by either
a blank line or a line made of hyphens, and that it is the last such line
in the file (disregarding blank lines at the end).
"""
lines = text.strip().split('\n')
for line_num in range(len(lines) - 1, -1, -1):
line = lines[line_num]
if line.strip().strip('-') == '':
break
if line_num > 0:
return '\n'.join(lines[:line_num])
else:
return text
# +
TRAIN_FOLDER = "20news-bydate-train"
TEST_FOLDER = "20news-bydate-test"
def strip_newsgroup_header(text):
"""
Given text in "news" format, strip the headers, by removing everything
before the first blank line.
"""
_before, _blankline, after = text.partition('\n\n')
return after
def preprocess_fetch_data(categories,remove,subset='train',data_home = None):
data_home = get_data_home(data_home=data_home)
twenty_home = os.path.join(data_home, "20news_home")
train_path = os.path.join(twenty_home, TRAIN_FOLDER)
test_path = os.path.join(twenty_home, TEST_FOLDER)
cache = dict(train=load_files(train_path, encoding='latin1'),
test=load_files(test_path, encoding='latin1'))
if subset in ('train', 'test'):
data = cache[subset]
elif subset == 'all':
data_lst = list()
target = list()
filenames = list()
for subset in ('train', 'test'):
data = cache[subset]
data_lst.extend(data.data)
target.extend(data.target)
filenames.extend(data.filenames)
data.data = data_lst
data.target = np.array(target)
data.filenames = np.array(filenames)
else:
raise ValueError(
"subset can only be 'train', 'test' or 'all', got '%s'" % subset)
data.description = 'the 20 newsgroups by date dataset'
if 'headers' in remove:
data.data = [strip_newsgroup_header(text) for text in data.data]
if 'footers' in remove:
data.data = [strip_newsgroup_footer(text) for text in data.data]
if 'quotes' in remove:
data.data = [strip_newsgroup_quoting(text) for text in data.data]
if categories is not None:
labels = [(data.target_names.index(cat), cat) for cat in categories]
# Sort the categories to have the ordering of the labels
labels.sort()
labels, categories = zip(*labels)
mask = np.in1d(data.target, labels)
data.filenames = data.filenames[mask]
data.target = data.target[mask]
# searchsorted to have continuous labels
data.target = np.searchsorted(labels, data.target)
data.target_names = list(categories)
# Use an object array to shuffle: avoids memory copy
data_lst = np.array(data.data, dtype=object)
data_lst = data_lst[mask]
data.data = data_lst.tolist()
return data
# -
newsgroups_train = preprocess_fetch_data(categories,remove,subset='train')
newsgroups_test = preprocess_fetch_data(categories,remove,subset='test')
newsgroups_train.filenames.shape,newsgroups_train.target.shape
# +
#print("\n".join(newsgroups_train.data[:3]))
# -
np.array(newsgroups_train.target_names)[newsgroups_train.target[:3]]
newsgroups_train.target[:10]
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense()
vectors.shape
print(len(newsgroups_train.data),vectors.shape)
type(newsgroups_train)
vocab = np.array(vectorizer.get_feature_names())
# ### SVD(Singular Value Decomposition)
U, s, Vh = linalg.svd(vectors,full_matrices=False)
print(U.shape, s.shape, Vh.shape,vectors.shape)
#Exercise: confirm that U, s, Vh is a decomposition of the var vectors
m,n = vectors.shape
D = np.diag(s)
U.shape,D.shape,Vh.shape
np.allclose(vectors,np.dot(U,np.dot(D,Vh)))
plt.plot(s);
plt.plot(s[:10])
# +
num_top_words=8
def show_topics(a):
top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]
topic_words = ([top_words(t) for t in a])
return [' '.join(t) for t in topic_words]
# -
show_topics(Vh[:10])
# ### NMF for topic modelling
### scikit-learn implementation of NMF
m,n = vectors.shape
d = 5
clf = decomposition.NMF(n_components=d,random_state=1)
W1 = clf.fit_transform(vectors)
H1 = clf.components_
show_topics(H1)
# ### TF-IDF for topic modelling
tfidf = TfidfVectorizer(stop_words='english')
vec_tfidf = tfidf.fit_transform(newsgroups_train.data)
W1 = clf.fit_transform(vec_tfidf)
H1 = clf.components_
categories
show_topics(H1)
plt.plot(H1[0])
clf.reconstruction_err_
# ### NMF from scratch in numpy using SGD
lam=1e3
lr=1e-2
m, n = vec_tfidf.shape
W1 = clf.fit_transform(vectors)
H1 = clf.components_
show_topics(H1)
mu = 1e-6
def grads(M, W, H):
R = W@H-M
return R@H.T + penalty(W, mu)*lam, W.T@R + penalty(H, mu)*lam # dW, dH
def penalty(M, mu):
return np.where(M>=mu, 0, np.minimum(M - mu, 0)) # elementwise minimum with 0 (np.min would reduce along an axis; cf. torch.clamp below)
def upd(M, W, H, lr):
dW,dH = grads(M,W,H)
W -= lr*dW; H -= lr*dH
def report(M,W,H):
print(np.linalg.norm(M-W@H), W.min(), H.min(), (W<0).sum(), (H<0).sum())
W = np.abs(np.random.normal(scale=0.01, size=(m,d)))
H = np.abs(np.random.normal(scale=0.01, size=(d,n)))
report(vec_tfidf, W, H)
upd(vec_tfidf,W,H,lr)
report(vec_tfidf, W, H)
for i in range(50):
upd(vec_tfidf,W,H,lr)
if i % 10 == 0: report(vec_tfidf,W,H)
show_topics(H)
# ### PyTorch to create NMF
import torch
import torch.cuda as tc
from torch.autograd import Variable
def V(M):
return Variable(M,requires_grad = True)
v = vec_tfidf.todense()
t_vec = torch.Tensor(v.astype(np.float32)).cuda()
mu = 1e-5
# +
def grads_t(M, W, H):
R = W.mm(H)-M
return (R.mm(H.t()) + penalty_t(W, mu)*lam,
W.t().mm(R) + penalty_t(H, mu)*lam) # dW, dH
def penalty_t(M, mu):
return (M<mu).type(tc.FloatTensor)*torch.clamp(M - mu, max=0.)
def upd_t(M, W, H, lr):
dW,dH = grads_t(M,W,H)
W.sub_(lr*dW); H.sub_(lr*dH)
def report_t(M,W,H):
print((M-W.mm(H)).norm(2), W.min(), H.min(), (W<0).sum(), (H<0).sum())
# -
t_W = tc.FloatTensor(m,d)
t_H = tc.FloatTensor(d,n)
t_W.normal_(std=0.01).abs_();
t_H.normal_(std=0.01).abs_();
d=6; lam=100; lr=0.05
for i in range(1000):
upd_t(t_vec,t_W,t_H,lr)
if i % 100 == 0:
report_t(t_vec,t_W,t_H)
lr *= 0.9
show_topics(t_H.cpu().numpy())
plt.plot(t_H.cpu().numpy()[0])
t_W.mm(t_H).max()
t_vec.max()
# ### PyTorch AutoGrad.
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
x.data
print(x.grad)
y = x+2
y
z = y*y*3
out=z.sum()
print(z,out)
out.backward()
print(x.grad)
x.grad
| Python/NLP/Topic_modelling_NMF_SVD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # REINFORCE
#
# ---
#
# In this notebook, we will train REINFORCE with OpenAI Gym's Cartpole environment.
# ### 1. Import the Necessary Packages
# +
import gym
gym.logger.set_level(40) # suppress warnings (please remove if gives error)
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
import torch
torch.manual_seed(0) # set random seed
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
# !python -m pip install pyvirtualdisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
is_ipython = 'inline' in plt.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# -
# ### 2. Define the Architecture of the Policy
# +
env = gym.make('CartPole-v0')
env.seed(0)
print('observation space:', env.observation_space)
print('action space:', env.action_space)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Policy(nn.Module):
def __init__(self, s_size=4, h_size=16, a_size=2):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim=1)
def act(self, state):
'''
A trajectory is a sequence of (state, action) pairs.
Assuming:
m = 1, a single sample
H = a full episode (the horizon, i.e. the length of the trajectory)
tau = spans the full episode, so a trajectory tau also represents a full episode
Then the gradient of the expected return reduces to:
gradient of expected return = R(tau) * sum_t grad_theta log pi_theta(a_t | s_t)
i.e. a weighted sum over the episode's time steps t = 0 .. H (the full episode):
gradient of expected return = sum_t (weight of the episode's time step t) * (value of the trajectory)
gradient of expected return = sum_t (probability of the episode's time step t) * (value of the trajectory)
gradient of expected return = sum_t (grad_theta log pi_theta(a_t | s_t)) * R(tau)
where:
sum_t = sum over time steps from t=0 to t=H (the full episode)
R(tau) = the trajectory value: the sum of rewards over all of the trajectory's time steps
grad_theta = derivative w.r.t. theta
log = log function
pi_theta = the policy
pi_theta(a_t | s_t) = same as p(a_t | s_t): the probability of action a_t given state s_t (its weight)
Keep in mind that:
pi_theta(state) = the action to take ~ deterministic approach
pi_theta(action | state) = the probability of an action given a state ~ stochastic approach
pi_theta(action | state) is implemented by the NN as:
output: a softmax over actions (i.e. the probability of each action)
input: the state
The act function:
1. returns the selected action.
2. returns the log of the selected action's weight, i.e. log pi_theta(a_t | s_t).
3. sampling is most likely to give the action with the highest probability.
'''
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu() # list of probabilities, one per action
m = Categorical(probs) # creates a distribution over the given probabilities
action = m.sample() # picks an action from the distribution, to execute on the environment
return action.item(), m.log_prob(action) # selected action, and log(selected action's probability), i.e. log pi_theta(a_t | s_t)
# -
# ### 3. Train the Agent with REINFORCE
# +
policy = Policy().to(device) #defines the model
optimizer = optim.Adam(policy.parameters(), lr=1e-2) #Important
def reinforce(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100):
scores_deque = deque(maxlen=100)
scores = []
for i_episode in range(1, n_episodes+1):
saved_log_probs = []
rewards = []
state = env.reset()
for t in range(max_t):
action, log_prob = policy.act(state) # action= (left/right), log_prob = log(current action weight/prob)
saved_log_probs.append(log_prob)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = sum([a*b for a,b in zip(discounts, rewards)])
policy_loss = []
'''
We take the negative, -log_prob * R, and perform gradient descent (policy_loss.backward()).
But since all of the gradients then carry a negative sign,
and gradient descent subtracts the gradients from our parameters,
the two negative signs cancel and we get a positive update.
So we end up taking a step in the direction of the gradients (gradient ascent):
gradient of expected return = R(tau) * sum_t grad_theta log pi_theta(a_t | s_t)
'''
for log_prob in saved_log_probs:
policy_loss.append(-log_prob * R)
policy_loss = torch.cat(policy_loss).sum()
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
return scores
scores = reinforce()
# -
# ### 4. Plot the Scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# ### 5. Watch a Smart Agent!
# +
env = gym.make('CartPole-v0')
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
for t in range(1000):
action, _ = policy.act(state)
img.set_data(env.render(mode='rgb_array'))
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
state, reward, done, _ = env.step(action)
if done:
break
env.close()
# -
| reinforce/REINFORCE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="zL9RLHo4YfCP"
# # Setup
# + id="CS8RvR92lNT5"
# %%capture
# ! pip install datasets metrics transformers wandb
# + id="rPEgvdLiYOij"
from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score
from transformers import BertTokenizerFast, BertForSequenceClassification
from sklearn.model_selection import train_test_split
from transformers import Trainer, TrainingArguments
from transformers import BertModel, BertTokenizer
from transformers.file_utils import ModelOutput
from datasets import Dataset
from torch import nn
import pandas as pd
import torch
import gc
# + id="Cw6BRBvc3gH_"
gc.collect()
torch.cuda.empty_cache()
# + [markdown] id="PScS_63IYhzw"
# # Custom BERT model
# + colab={"base_uri": "https://localhost:8080/"} id="mUVqrd6Jlfqy" outputId="222fe3f6-fd19-4fa7-ab05-accff2437d83"
class BERTCNN(nn.Module):
def __init__(self, model_name='bert-base-uncased', device='cuda'):
super(BERTCNN, self).__init__()
self.bert = BertModel.from_pretrained(model_name).to(device)
self.conv = nn.Conv2d(in_channels=13, out_channels=13, kernel_size=(3, 768), padding='valid').to(device)
self.relu = nn.ReLU()
# change the kernel size either to (3,1), e.g. 1D max pooling
# or remove it altogether
self.pool = nn.MaxPool2d(kernel_size=(3, 1), stride=1).to(device)
self.dropout = nn.Dropout(0.1)
# be careful here, this needs to be changed according to your max pooling
# without pooling: 443, with 3x1 pooling: 416
# FC
self.fc = nn.Linear(598, 3).to(device) ## 416, 66004???
self.flat = nn.Flatten()
self.softmax = nn.LogSoftmax(dim=1).to(device)
def forward(self, input_ids, attention_mask, labels=None):
outputs = self.bert(input_ids,
attention_mask=attention_mask,
output_hidden_states=True)
x = torch.transpose(torch.cat(tuple([t.unsqueeze(0) for t in outputs[2]]), 0), 0, 1)
x = self.dropout(x)
x = self.conv(x)
x = self.relu(x)
x = self.dropout(x)
x = self.pool(x)
x = self.dropout(x)
x = self.flat(x)
x = self.dropout(x)
x = self.fc(x)
c = self.softmax(x)
# Clean cache
gc.collect()
torch.cuda.empty_cache()
del outputs
# Compute loss
loss = None
if labels is not None:
ce_loss = nn.CrossEntropyLoss()
loss = ce_loss(c, labels)
return ModelOutput({
'loss': loss,
'last_hidden_state': c
})
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=True)
model = BERTCNN(model_name)
# + [markdown] id="MNH27sMbYno4"
# # Pre-processing
# + id="3CS9g-WOoj4H"
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(
batch["1"],
max_length=50,
padding="max_length",
truncation=True,
)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
labels = list(map(lambda x: int(x.split('__label__')[1]), batch['0']))
batch["label"] = labels
return batch
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
# calculate f1 using sklearn's function
f_score = f1_score(labels, preds, pos_label=0, average='binary')
accuracy = accuracy_score(labels, preds)
precision = precision_score(labels, preds, pos_label=0, average='binary')
recall = recall_score(labels, preds, pos_label=0, average='binary')
return {
'f1_score': f_score,
'accuracy': accuracy,
'precision': precision,
'recall': recall
}
tokenize_batch_size = 2048
df = pd.read_csv('/content/drive/MyDrive/PoliTo/Deep Natural Language Procesing/Project/labeledEligibilitySample1000000.csv', sep='\t', header=None)
# + colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["3e6c6b6fe3b7434e8bca8ca635fd80f7", "b7aab1ce47fd411b9077e6b197bb1c78", "9e1ca3c612e64c88914b6a45fa000955", "1cd621d09b714f5c92581c8ef301252d", "7bea5474438c4017a38578bb235c654c", "7c5e77b1730145bd8b858bdd52dff8b3", "742b84e9335d4d2ea8b7f406ac717296", "669ba8b50fef4a6599772b63a12cbce3", "dbd8c1f3e26149c1b9bcaaca2ac10643", "d27ef23dc0e44d00b97e331d14aa1fcd", "cebb18d7a8c0451ab30a6e508f0980d5"]} id="botUA7zqLsFS" outputId="f45ae991-ffcf-4be9-d463-248bd9e77e44"
train_ds = Dataset.from_pandas(df)
train_ds = train_ds.map(
process_data_to_model_inputs,
batched=True,
batch_size=tokenize_batch_size,
remove_columns=["0", "1"]
)
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "label"]
)
# + id="gfupyDfSR_1Y"
train_testvalid = train_ds.train_test_split(train_size=0.9)
test_valid = train_testvalid['test'].train_test_split(test_size=0.5)
# + [markdown] id="cS-joqbuY9pQ"
# # Training and evaluation
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ND48bydf0ypB" outputId="5edfdcfc-2825-41b8-c38f-c3b3b532b6a2"
training_args = TrainingArguments(
output_dir='./CNN_CTC_results', # output directory
num_train_epochs=2, # total number of training epochs
per_device_train_batch_size=128, # batch size per device during training
per_device_eval_batch_size=128, # batch size for evaluation
fp16=True, # enable fp16 apex training
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
load_best_model_at_end=True, # load the best model when finished training (default metric is loss)
# but you can specify `metric_for_best_model` argument to change to accuracy or other metric
logging_steps=400, # log & save weights each logging_steps
save_steps=400,
evaluation_strategy="steps", # evaluate each `logging_steps`
report_to="wandb"
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_testvalid['train'], # training dataset
eval_dataset=test_valid['train'], # evaluation dataset
compute_metrics=compute_metrics, # the callback that computes metrics of interest
)
trainer.train()
preds = trainer.predict(test_valid['test'])
preds[2]
# + id="nrZpBwXZWQEH"
import torch
torch.save(trainer.model, '/content/drive/MyDrive/PoliTo/Deep Natural Language Procesing/Project/model.model')
| BERT_CTC/CNN_Clin_Trials_on_Cancer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import os
# +
with tf.name_scope('input1'):
input1 = tf.constant([1.0, 2.0, 3.0], name='input1')
with tf.name_scope('input2'):
input2 = tf.Variable(tf.random_uniform([3]), name='input2')
output = tf.add_n([input1, input2], name='add')
writer = tf.summary.FileWriter('log', tf.get_default_graph())
writer.close()
# -
| tensorflow/book_caicloud/chapter_9_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import pickle
import numpy as np
import xgboost
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold, KFold,StratifiedShuffleSplit
from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score,roc_auc_score, roc_curve, average_precision_score,precision_recall_curve
import matplotlib.pyplot as plt
# %matplotlib inline
pd.set_option("display.max_columns",80)
# # Load Training Set
#
# To review, the training set was created by:
# 1. In the first notebook, we pulled hourly weather feeds for the last 7 years.
# 2. In the second notebook, we created a static feature set. These are the features that overall don't change with time.
# 3. In the third notebook, we joined the weather and temporal features (such as solar position) with the static set and augmented the positive examples with negative examples that are very similar.
#
# Now that the training set has been created, we can train the model, but first we need to transform some of the variables to make it better suited for training.
# Load the training set
df = pd.read_csv('training_data/utah_training_set.csv')
df = df.dropna(how='any',axis=0)
df.shape
# +
ohe_fields=['one_way','surface_type','street_type','hour','weekday','month']
# One-Hot encode a couple of variables
df_ohe = pd.get_dummies(df,columns=ohe_fields)
# Get the one-hot variable names
ohe_feature_names = pd.get_dummies(df[ohe_fields],columns=ohe_fields).columns.tolist()
df_ohe.head()
# -
# # Continuous Features
# These are currently:
# * Historical Accident Count
# * Speed Limit (if available)
# * Sinuosity (Road curvature metric)
# * AADT (Annual Average Daily Traffic, if available)
# * Road Surface Width (If available)
# * Precipitation Depth
# * Snow Depth
# * Temperature
# * Visibility
# * Wind Speed
# * Road Orientation
# * Solar Azimuth
# * Solar Elevation
#
# These will be rescaled by scikit-learn's standard rescaler
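# As a quick aside before applying it to the training data below, this toy sketch shows what the standard rescaler does: subtract each column's mean and divide by its standard deviation, leaving the column with zero mean and unit variance.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0], [2.0], [3.0]])  # one toy feature column
scaled = StandardScaler().fit_transform(x)  # (x - mean) / std, column-wise
print(scaled.ravel())  # roughly [-1.2247, 0.0, 1.2247]
```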
# +
# Sinuosity is typically close to 1, even for moderately curvy roads. A high sinuosity means a longer road.
feature_transforms = {
'sinuosity': np.log
}
for feature,transform in feature_transforms.items():
df_ohe[feature] = transform(df_ohe[feature])
# Continuously valued features
float_feature_names = [
'accident_counts',
'speed_limit',
'aadt',
'surface_width',
'sinuosity',
'euclidean_length',
'segment_length',
'road_orient_approx',
'precip_depth',
'snow_depth',
'temperature',
'visibility',
'wind_speed',
'proximity_to_billboard',
'proximity_to_major_road',
'proximity_to_signal',
'proximity_to_nearest_intersection',
'population_density',
'solar_azimuth',
'solar_elevation',
]
float_features = df_ohe[float_feature_names].values
# Use scikit-learn's StandardScaler
scaler = StandardScaler()
float_scaled = scaler.fit_transform(float_features)
#print (float_features.mean(axis=0))
df_ohe[float_feature_names] = float_scaled
with open('scalers.pkl','wb') as fp:
pickle.dump(scaler,fp)
# +
y = df['target'].values
binary_feature_names = [
'snowing',
'raining',
'icy',
'thunderstorm',
'hailing',
'foggy',
'at_intersection',
]
df_ohe = df_ohe[float_feature_names + binary_feature_names + ohe_feature_names]
# -
X = df_ohe.values
y = df['target'].values
feature_names = df_ohe.columns.tolist()
# This "wrangler" object is simply a dictionary that stores our feature transforms and such. Note that if your end to end workflow was using scikit-learn, you could easily build a Pipeline to perform feature transforms and have the ability to save it off for future use. This takes a little bit more work for the simple task we are performing here.
wrangler = {
'scaler': scaler,
'float_feature_names': float_feature_names,
'ohe_fields': ohe_fields,
'feature_names': feature_names,
'feature_transforms': feature_transforms
}
with open('wrangler_new.pkl','wb') as fp:
pickle.dump(wrangler,fp)
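# The scikit-learn Pipeline alternative mentioned above could be sketched roughly as follows. This is a hypothetical sketch (the column names are illustrative stand-ins for the real feature lists), not the workflow actually used in this notebook:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative stand-ins for float_feature_names / ohe_fields above
num_cols = ['temperature', 'wind_speed']
cat_cols = ['weekday']

# The ColumnTransformer applies each transform to its own column subset; a
# model could be appended as a second Pipeline step, and the whole object
# can be pickled in one piece instead of a hand-rolled "wrangler" dict.
preprocess = Pipeline([('prep', ColumnTransformer([
    ('scale', StandardScaler(), num_cols),
    ('onehot', OneHotEncoder(handle_unknown='ignore'), cat_cols),
]))])

demo = pd.DataFrame({'temperature': [0.0, 10.0, 20.0],
                     'wind_speed': [5.0, 0.0, 2.0],
                     'weekday': [0, 1, 0]})
X_demo = preprocess.fit_transform(demo)
print(X_demo.shape)  # (3, 4): two scaled columns + two one-hot columns
```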
# # Define Model (Gradient Boosting)
#
# We use XGBoost to build the gradient boosting model with some hyperparameters set. You could optimize these using CV and grid search. These parameters were set to these values partly through that process and partly through some manual fine-tuning. They certainly aren't optimal, but perform well for this task.
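# The CV + grid search mentioned above could look roughly like this sketch. It uses scikit-learn's GradientBoostingClassifier on toy data as a stand-in (xgboost also provides xgboost.cv and a scikit-learn-compatible XGBClassifier that plugs into GridSearchCV directly); the parameter values here are illustrative, not the ones tuned for this task:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Toy binary classification data where class depends mostly on feature 0
rng = np.random.RandomState(0)
X_toy = rng.randn(200, 5)
y_toy = (X_toy[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

param_grid = {
    'max_depth': [3, 6],          # analogous to max_depth above
    'learning_rate': [0.1, 0.5],  # analogous to eta above
}
search = GridSearchCV(GradientBoostingClassifier(n_estimators=50),
                      param_grid, scoring='roc_auc', cv=3)
search.fit(X_toy, y_toy)
print(search.best_params_)  # the combination with the best cross-validated AUC
```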
# +
feature_sel = range(len(feature_names))
#feature_sel = [-1,-2,-3]
Xs = X[:,feature_sel]
X_train, X_test, y_train, y_test = train_test_split(Xs, y, test_size=0.1)#, random_state=2)
fnames = np.array(feature_names)[feature_sel]
dtrain = xgboost.DMatrix(X_train,label=y_train,feature_names=fnames)
dtest = xgboost.DMatrix(X_test,label=y_test,feature_names=fnames)
params = {
'max_depth':6,
'min_child_weight': 5.0,
'reg_lambda': 1.0,
'reg_alpha':0.0,
'scale_pos_weight':1.0,
'eval_metric':'auc',
'objective':'binary:logistic',
'eta':0.5
}
# -
booster = xgboost.train(params,dtrain,
evals = [(dtest, 'eval')],
num_boost_round=3000,
early_stopping_rounds=25
)
print(fnames)
# # Which features are most important?
#
# The feature importance is the sum of the weights for each feature, essentially how often it is used in the boosting model. You can see the top three features are solar position and temperature. The solar az/el mostly encode time and are colinear with some other variables such as hour and month. There's also correlation between temperature and other variables. Temperature is a terrific proxy variable, because it encodes the season, time of day, and likelihood for surfaces to be slippery/sticky. It's not in itself a predictor of accidents, but helps us encode conditions we can't easily directly observe.
plt.figure(figsize=(15,15))
xgboost.plot_importance(booster,ax=plt.gca(),importance_type='weight')
booster.save_model('new_0001.model')
# # Model Performance
#
# One of the best metrics for model performance is the ROC curve, which displays what the false positive rate is given a choice of true positive rate. The area under this curve was how we evaluated the performance when training.
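# As a small self-contained illustration of the ROC computation (toy scores, not this model's output): roc_curve sweeps the decision threshold over the predicted scores, and the AUC is the trapezoidal area under the resulting (FPR, TPR) curve.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

y_true  = np.array([0, 0, 1, 1])          # toy labels
y_score = np.array([0.1, 0.4, 0.35, 0.8]) # toy predicted scores

# Each threshold yields one (FPR, TPR) point; AUC integrates under the curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(auc(fpr, tpr))  # 0.75 for this score ordering
```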
# +
y_pred_test = booster.predict(dtest)
fpr, tpr, thresholds = roc_curve(y_test,y_pred_test)
y_pred_train = booster.predict(dtrain)
fpr_train, tpr_train, thresholds_train = roc_curve(y_train,y_pred_train)
fig,ax = plt.subplots()
plt.plot([0,1],[0,1],'-',label='Random Guess',color='orange',lw=3)
plt.plot(fpr,tpr,label='ROC (Test)',lw=3)
plt.plot(fpr_train,tpr_train,':',label='ROC (Train)',color='steelblue',lw=3)
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
# +
plt.plot(thresholds,tpr,'-',label='TPR (Test)',color='orange',lw=3)
plt.plot(thresholds_train,tpr_train,':',label='TPR (Train)',color='orange',lw=3)
plt.plot(thresholds,fpr,'-',label='FPR (Test)',color='steelblue',lw=3)
plt.plot(thresholds_train,fpr_train,':',label='FPR (Train)',color='steelblue',lw=3)
plt.gca().set_xbound(lower=0,upper=1)
plt.xlabel('Threshold')
plt.ylabel('True/False Positive Rate')
plt.legend()
# -
# Another good metric is the precision/recall curve, which tells us two things:
# 1. Precision: The fraction of the time we are correct when making a positive prediction (saying there is an accident)
# 2. Recall: The fraction of accidents we predict
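# The two definitions above can be computed by hand from the confusion counts and checked against scikit-learn on a toy example (illustrative labels, not this model's output):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 1, 1, 0, 0, 0])  # toy ground truth
y_pred = np.array([1, 1, 0, 1, 0, 0])  # toy predictions

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives: 2
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives: 1
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives: 1

precision = tp / (tp + fp)  # fraction of positive predictions that are correct
recall    = tp / (tp + fn)  # fraction of actual positives we catch
print(precision, precision_score(y_true, y_pred))  # both 2/3
print(recall, recall_score(y_true, y_pred))        # both 2/3
```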
# +
plt.figure(figsize=(15,15))
y_pred_test = booster.predict(dtest)
y_pred_train = booster.predict(dtrain)
precision,recall,thresholds = precision_recall_curve(y_test,y_pred_test)
precision_train, recall_train, thresholds_train = precision_recall_curve(y_train,y_pred_train)
fig,ax = plt.subplots()
plt.plot(precision,recall,label='PR (Test)',lw=3)
plt.plot(precision_train,recall_train,label='PR (Train)',lw=3)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.grid()
plt.legend()
# -
plt.plot(thresholds,precision[:-1],'-',label='P (Test)',color='orange',lw=3)
plt.plot(thresholds_train,precision_train[:-1],':',label='P (Train)',color='orange',lw=3)
plt.plot(thresholds,recall[:-1],'-',label='R (Test)',color='steelblue',lw=3)
plt.plot(thresholds_train,recall_train[:-1],':',label='R (Train)',color='steelblue',lw=3)
#plt.plot([0,1],[0,1],'k-',lw=2)
plt.gca().set_xbound(lower=0,upper=1)
plt.xlabel('Threshold')
plt.ylabel('Precision/Recall')
plt.legend()
# A particular choice of threshold (0.19) yields these results:
# +
y_pred_test = booster.predict(dtest) > 0.19
print ('Test Accuracy:',accuracy_score(y_test,y_pred_test))
print ('Test F1:',f1_score(y_test,y_pred_test))
print ('Test Precision:',precision_score(y_test,y_pred_test))
print ('Test Recall:',recall_score(y_test,y_pred_test))
y_pred_test = booster.predict(dtest)
print ('Test AUC:',roc_auc_score(y_test,y_pred_test))
print ('Test AP:',average_precision_score(y_test,y_pred_test))
y_pred_train = booster.predict(dtrain) > 0.19
print ('Train Accuracy:',accuracy_score(y_train,y_pred_train))
print ('Train F1:',f1_score(y_train,y_pred_train))
print ('Train Precision:',precision_score(y_train,y_pred_train))
print ('Train Recall:',recall_score(y_train,y_pred_train))
y_pred_train = booster.predict(dtrain)
print ('Train AUC:',roc_auc_score(y_train,y_pred_train))
print ('Train AP:',average_precision_score(y_train,y_pred_train))
# -
# The split histogram is a histogram of where splits in the decision trees were made. This tells us about important areas in the feature space.
def plot_split_histogram(feature_name):
hist = booster.get_split_value_histogram(feature_name)
try:
i = float_feature_names.index(feature_name)
fake_data = np.zeros((hist.Count.size,len(float_feature_names)))
fake_data[:,i] = hist.SplitValue
hist.loc[:,'SplitValue'] = scaler.inverse_transform(fake_data)[:,i]
except ValueError: pass  # feature is not one of the scaled float features
hist.plot(kind='area',x='SplitValue',y='Count')
# Note the peaks around 0 (Celsius). This indicates that this is an important differentiator in temperature. This makes sense, as roads start to get icy around 0 degrees Celsius.
plot_split_histogram('temperature')
plot_split_histogram('solar_azimuth')
# This looks more uniform, but there is a peak right around zero, when the sun is on the horizon. Perhaps this causes more accidents on east/west facing roads? It could also represent that most accidents happen during daytime, so this is a clear choice.
plot_split_histogram('solar_elevation')
plot_split_histogram('proximity_to_billboard')
plot_split_histogram('population_density')
plot_split_histogram('snow_depth')
plot_split_histogram('precip_depth')
plot_split_histogram('sinuosity')
| notebooks/4_train_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd
import pathlib
url = r"https://muppet.fandom.com/wiki/Episode_5019"
# Get raw content
req = requests.get(url)
# Turn Content into BS4
document = BeautifulSoup(req.content.decode(encoding="utf-8"), 'html.parser')
# Find all table elements in the document
tables = document.find_all(name='table')
# Get the table that contains a 'Picture' header
for tbl in tables:
if tbl.find(text=re.compile('.*Picture.*')):
res = tbl
from itertools import product
# +
seasons = [str(season) for season in range(46, 51)]
episodes = [f"{i:02d}" for i in range(1, 36)]
episodeList = [''.join(i) for i in product(seasons,episodes)]
episodeSummaries = []
#for episode in episodeList[:]:
episode=4715
imgName = pathlib.Path(f'C:\\Users\\riwolff\\Documents\\me_want_cookie\\data\\raw\\images\\{episode}')
imgName.mkdir(parents=True, exist_ok=True)
url = f"https://muppet.fandom.com/wiki/Episode_{episode}"
req = requests.get(url)
document = BeautifulSoup(req.content.decode(encoding="utf-8"), 'html.parser')
tables = document.find_all(name='table')
# Get the table that contains a 'Picture' header
for tbl in tables:
if tbl.find(text=re.compile('.*Picture.*')):
res = tbl
break
txt = []
rows = res.find_all(name='tr')
i = 0
for row in rows:
cols = row.find_all(name='td')
rowText = [episode]
for j, col in enumerate(cols):
if j == 0:
for a in col.find_all('a', href=True):
imgNum = '00'+str(i)
imgNum = imgNum[-3:] + '.jpg'
save_img(a['href'], imgName / imgNum)
i += 1
elif j == 1:
rowText.append(col.text.strip().replace("cont'd",''))
else:
wordsCleaned = [r.getText().strip() if not isinstance(r,str) else r.strip() for r in col.contents]
rowText.append(' '.join(wordsCleaned).strip())
txt.append(rowText)
episodeSummaries.append(pd.DataFrame(txt,columns=['Episode','Segment','Description']))
# -
df = pd.concat(episodeSummaries)
df['Episode'] = df['Episode'].astype(int)
df = df[df['Episode']<5024].dropna(subset=['Segment','Description'])
df.to_csv(pathlib.Path(f'C:\\Users\\riwolff\\Documents\\me_want_cookie\\data\\raw\\seasons46to50.csv'), index=False)
df[df['Episode']==4715]
# +
pic_url = 'https://vignette.wikia.nocookie.net/muppet/images/8/87/5019a.png/revision/latest?cb=20200322030436'
def save_img(pic_url, imgName):
with open(imgName, 'wb') as handle:
response = requests.get(pic_url, stream=True)
if not response.ok:
print(response)
for block in response.iter_content(1024):
if not block:
break
handle.write(block)
return None
# -
season = 5019
imgName = pathlib.Path(f'C:\\Users\\riwolff\\Documents\\me_want_cookie\\data\\raw\\images\\{season}')
imgName
#save_img(pic_url, path.joinpath('001.jpg'))
| notebooks/1.00_rw_data_extraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Modify Display Utils
# Utilities for modifying the default display behavior of [`fastai`](/fastai.html#fastai) training
from fastai.utils.mod_display import *
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.utils.collect_env import *
# + hide_input=true
show_doc(progress_disabled_ctx)
# + hide_input=true
from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
# -
# `learn.fit()` will display a progress bar and give the final results once completed:
learn.fit(1)
# [`progress_disabled_ctx`](/utils.mod_display.html#progress_disabled_ctx) will remove all that update and only show the total time once completed.
with progress_disabled_ctx(learn) as learn:
learn.fit(1)
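# A minimal sketch of how a context manager like this can work (hypothetical toy code, not fastai's actual implementation): swap the learner's reporting hook for a no-op on entry and restore it on exit.

```python
from contextlib import contextmanager

class ToyLearner:
    """Toy stand-in for a Learner with a per-epoch reporting hook."""
    def __init__(self):
        self.report = lambda msg: print(msg)  # stand-in for the progress bar
    def fit(self, epochs):
        for e in range(epochs):
            self.report(f'epoch {e} done')

@contextmanager
def progress_disabled(learner):
    saved = learner.report
    learner.report = lambda msg: None  # silence per-epoch updates
    try:
        yield learner
    finally:
        learner.report = saved  # always restore, even on error

learn_toy = ToyLearner()
with progress_disabled(learn_toy) as l:
    l.fit(2)       # prints nothing
learn_toy.fit(1)   # printing is restored after the context exits
```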
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# ## New Methods - Please document or move to the undocumented section
# + hide_input=true
show_doc(progress_disabled_ctx)
# -
#
| docs_src/utils.mod_display.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from skimage.io import imread
from skimage.measure import regionprops
from skimage.morphology import remove_small_objects
# %matplotlib notebook
import matplotlib.pyplot as plt
from utils.multi_slice_viewer import multi_slice_viewer
import os
import copy  # needed by cropImage below (copy.deepcopy)
import numpy as np
from scipy.spatial import Delaunay, Voronoi
import pandas as pd
from skimage.measure import regionprops_table
# ### Primary functions
# #### Shape based feature extractor. This is the primary feature extractor used to grab geometric features from segmented nuclei. The properties from each labeled nuclei is output as a pandas dataframe for convenience.
# +
def getObjectProperties(labeled_image):
"""
Returns labeled object properties in a pandas DataFrame for convenient sorting.
Parameters
----------
labeled_image : 3D numpy array
Segmented image of nuclei where each individual object has been assigned a
unique integer id.
Returns
-------
object_props : pd.DataFrame
DataFrame object with selected properties extracted using skimage.measure.regionprops_table
"""
#object properties for extraction
properties=[ 'equivalent_diameter', 'inertia_tensor',
'inertia_tensor_eigvals', 'major_axis_length',
'minor_axis_length', 'moments',
'moments_central', 'label', 'area',
'solidity', 'feret_diameter_max',
'moments_normalized', 'centroid', 'bbox',
'bbox_area', 'extent',
'convex_area', 'convex_image']
#extract features and return as dataframe
object_props = pd.DataFrame(regionprops_table(labeled_image,properties=properties))
return object_props
# -
# #### Centroid reorganization - graph based features are constructed using a set of nodes, in this case the centroids of segmented nuclei. This method reorganizes the centroids and labels extracted by the method above into arrays: an N x 3 array of 3D centroid coordinates and a matching vector of labels for the segmented nuclei in the labeled image.
def getCentroids(proptable):
"""
Returns labeled object centroids and labels in a dictionary.
Parameters
----------
proptable : pd.DataFrame
labeled object properties with centroid & label columns
Returns
-------
props_dict : dict
Dictionary with 'centroids' and 'labels' as keys, with corresponding
centroids and labels extracted from proptable as numpy arrays.
"""
props_dict = {}
#get centroid column titles
filter_col = [col for col in proptable if col.startswith('centroid')]
props_dict['centroids'] = proptable[filter_col].to_numpy().astype(int)
props_dict['labels'] = proptable['label'].to_numpy()
return props_dict
# #### Graph based feature extraction - the following method extracts graph based features (Delaunay & Voronoi diagrams) using the set of nuclear centroids as the input.
# +
def getTesselations(centroids):
"""
Return two graph based features from the scipy.spatial module
Parameters
----------
centroids : numpy array
Array of centroids extracted from segmented nuclei
Returns
-------
tesselD : scipy.spatial.Delaunay
Fully connected graph based feature where nuclear centroids are
input as nodes on the graph.
tesselV : scipy.spatial.Voronoi
Region based graph (derived from Delaunay) where individual regions
are grown from points i.e nuclear centroids.
"""
#extract delaunay diagram from scipy.spatial
tesselD = Delaunay(centroids)
#extract voronoi diagram from scipy.spatial
tesselV = Voronoi(centroids)
return tesselD, tesselV
# -
def cropImage(image, image_props, object_label, clean=False):
"""
crops section of input image based on bounding box of labeled objects
labeled objects are determined by the object_label which is a label in a
property table
Parameters
----------
image : 3D numpy array
labeled segmented image of nuclei
image_props : pd.DataFrame
pandas dataframe of properties with label and bbox as extracted features
object_label : int
label of object to crop from input image
clean : bool, optional
clear objects without input label
Returns
-------
crop : 3D numpy array
cropped region containing the labeled object, crop coordinates are
based on the bounding box.
"""
assert isinstance(object_label, int)
prop = image_props.loc[image_props['label'] == object_label]
if len(image.shape) == 2:
coords = [prop['bbox-0'].values[0], prop['bbox-2'].values[0],
prop['bbox-1'].values[0], prop['bbox-3'].values[0]]
print(coords)
crop = copy.deepcopy(image[coords[0]:coords[1], coords[2]:coords[3]])
else:
coords = [prop['bbox-0'].values[0], prop['bbox-3'].values[0],
prop['bbox-1'].values[0], prop['bbox-4'].values[0],
prop['bbox-2'].values[0], prop['bbox-5'].values[0]]
crop = copy.deepcopy(image[coords[0]:coords[1],
coords[2]:coords[3],
coords[4]:coords[5]])
if clean:
crop = np.ma.masked_where(crop != object_label, crop).filled(0)
crop = (crop > 0).astype(int)
return crop
# ### Load example labeled 3D data from disk
data_file = os.path.join('./data/region1_3D_crop.tif')
data = imread(data_file)
# ### Display 3D data using multi-slice-viewer (use j & k keys to pan through volume)
multi_slice_viewer(data, figsize= (8,8))
# ### Extract shape based features
# +
data = remove_small_objects(data, min_size=150)
properties = getObjectProperties(data)
# -
min(properties['area'])
plt.subplots()
plt.hist(properties['area'], bins = 100)
plt.subplots()
plt.hist(properties['convex_area'], bins = 100)
# ### Collect nuclear centroids
centroids = getCentroids(properties)
# ### Extract graph-based tesselations
tesselD, tesselV = getTesselations(centroids['centroids'])
| notebook/homework 5 workflow - two graph based features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparing TCdata360 and Govdata360 indicator lists
# Data sources:
# - For TCdata360 indicator list: http://tcdata360-backend.worldbank.org/api/v1/indicators/
# - For Govdata360 indicator list: http://govdata360-backend.worldbank.org/api/v1/indicators/
#
# ## Import Modules
# +
import pandas as pd
import numpy as np
import requests
import matplotlib.pyplot as plt
import datetime
import re
from nltk import distance
from fuzzywuzzy import fuzz, process
# %matplotlib inline
# -
date_today = datetime.date.today().isoformat()
# ## Generate TCdata360 and Govdata360 indicator datasets
url ='http://tcdata360-backend.worldbank.org/api/v1/indicators/'
response = requests.get(url)
tc_indicators = pd.read_json(response.text)
tc_indicators.shape
url ='http://govdata360-backend.worldbank.org/api/v1/indicators/'
response = requests.get(url)
gv_indicators = pd.read_json(response.text)
gv_indicators.shape
# # Comparing TCdata360 and Govdata360 indicator lists
# ## Pre-processing: Normalizing and tokenizing indicator names
tc_indicators['name_lower'] = tc_indicators['name'].apply(lambda x: x.lower().strip())
gv_indicators['name_lower'] = gv_indicators['name'].apply(lambda x: x.lower().strip())
# To increase chances of matching, we also replace commonly used symbols such as:
# - "%" or "proportion" to "percent"
# - "#" to "number"
# - '/' to "per"
# - replace other special characters (e.g., -, ?, .) except $ with blanks.
gv_indicators['name_lower'] = gv_indicators['name_lower'].apply(lambda x: x.replace("%", "percent").replace("proportion", "percent").replace("#", "number").replace("/", " per "))
tc_indicators['name_lower'] = tc_indicators['name_lower'].apply(lambda x: x.replace("%", "percent").replace("proportion", "percent").replace("#", "number").replace("/", " per "))
tc_indicators['name_lower'] = tc_indicators['name_lower'].apply(lambda x: re.sub(r'[^\w$ ]', '', x))
gv_indicators['name_lower'] = gv_indicators['name_lower'].apply(lambda x: re.sub(r'[^\w$ ]', '', x))
tc_indicators['name_lower'] = tc_indicators['name_lower'].apply(lambda x: re.sub(r'[_]', ' ', x))
gv_indicators['name_lower'] = gv_indicators['name_lower'].apply(lambda x: re.sub(r'[_]', ' ', x))
tc_indicators[['name','name_lower']].head()
gv_indicators[['name','name_lower']].tail()
# ## Shortlisting datasets common to both indicators
# - Assumption: easy API ingestion only works if the dataset source is the same for both the TCdata360 and Govdata360 indicators. Even if the indicator name is the same, indicators coming from different dataset sources are not treated as a match.
# - Rationale: this speeds up the matching process since it reduces the number of indicator pairs to be checked.
set(tc_indicators['dataset'])
set(gv_indicators['dataset'])
common_datasets = list(set(tc_indicators['dataset']) & set(gv_indicators['dataset'])) + ['World Economic Forum Global Competitiveness Index', 'Global Competitiveness Index']
common_datasets
gv_indicators[gv_indicators['dataset'].isin(common_datasets)].shape
tc_indicators[tc_indicators['dataset'].isin(common_datasets)].shape
# We look at the number of indicators per dataset.
common_datasets = [[dataset] for dataset in list(set(tc_indicators['dataset']) & set(gv_indicators['dataset']))] + [['World Economic Forum Global Competitiveness Index', 'Global Competitiveness Index']]
common_datasets
# +
dataset_count = {}
for dataset in common_datasets:
tc_count = tc_indicators[tc_indicators['dataset'].isin(dataset)].shape[0]
gv_count = gv_indicators[gv_indicators['dataset'].isin(dataset)].shape[0]
dataset_count[dataset[0]] = [tc_count, gv_count]
df_dataset_count = pd.DataFrame(dataset_count).T
df_dataset_count.columns = ['tc_indicator_count', 'gv_indicator_count']
# -
df_dataset_count
# ## Matching indicators using `fuzzywuzzy` ratio, partial ratio, token sort ratio, and token set ratio
# We generate the fuzzywuzzy statistics for all indicator pairs for all datasets.
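# The four statistics differ only in how they pre-process the strings before scoring. As a rough stdlib illustration of the idea (difflib's ratio is not the Levenshtein-based ratio fuzzywuzzy uses, so scores will differ), token_sort_ratio can be approximated by sorting tokens before comparing, which makes the score insensitive to word order:

```python
import difflib

def ratio(a, b):
    """Plain similarity ratio in [0, 100] (difflib stand-in for fuzz.ratio)."""
    return round(100 * difflib.SequenceMatcher(None, a, b).ratio())

def token_sort_ratio(a, b):
    """Sort whitespace tokens before comparing, like fuzz.token_sort_ratio."""
    return ratio(' '.join(sorted(a.split())), ' '.join(sorted(b.split())))

a = 'gross domestic product per capita'
b = 'per capita gross domestic product'
# Word order hurts the plain ratio but not the token-sorted one
print(ratio(a, b), token_sort_ratio(a, b))
```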
# +
df_name_lower_comp_all = pd.DataFrame()
fuzz_stats = ['ratio', 'partial_ratio', 'token_sort_ratio', 'token_set_ratio']
df_cols = ['gv_indicator', 'tc_indicator', 'ratio', 'partial_ratio', 'token_sort_ratio', 'token_set_ratio']
for dataset in common_datasets:
tc_indicators_shortlist = tc_indicators[tc_indicators['dataset'].isin(dataset)]
gv_indicators_shortlist = gv_indicators[gv_indicators['dataset'].isin(dataset)]
tc_list = list(set(tc_indicators_shortlist['name_lower']))
gv_list = list(set(gv_indicators_shortlist['name_lower']))
name_lower_comparison_fuzz = []
for gv_ind in gv_list:
for tc_ind in tc_list:
name_lower_comparison_fuzz.append([gv_ind,tc_ind, fuzz.ratio(gv_ind,tc_ind), fuzz.partial_ratio(gv_ind,tc_ind), fuzz.token_sort_ratio(gv_ind,tc_ind), fuzz.token_set_ratio(gv_ind,tc_ind)])
df_name_lower_comparison_fuzz = pd.DataFrame(name_lower_comparison_fuzz, columns = df_cols)
df_name_lower_comparison_fuzz['stat_mean'] = df_name_lower_comparison_fuzz[fuzz_stats].mean(axis=1)
df_name_lower_comparison_fuzz['stat_min'] = df_name_lower_comparison_fuzz[fuzz_stats].min(axis=1)
df_name_lower_comparison_fuzz['stat_max'] = df_name_lower_comparison_fuzz[fuzz_stats].max(axis=1)
df_name_lower_comparison_fuzz['source_dataset'] = dataset[0]
df_name_lower_comp_all = pd.concat([df_name_lower_comp_all, df_name_lower_comparison_fuzz])
# -
df_name_lower_comp_all.head()
df_name_lower_comp_all.groupby('gv_indicator')['stat_mean'].max().hist(bins=100)
plt.title('Histogram of Max Average Fuzzy Statistics per paired indicator')
df_name_lower_comp_all.groupby('gv_indicator')['stat_mean'].max().hist(bins=500)
plt.title('Histogram of Max Average Fuzzy Statistics per paired indicator')
plt.xlim(95,100)
df_name_lower_comp_all.groupby('gv_indicator')['stat_max'].max().hist(bins=100)
plt.title('Histogram of Max Fuzzy Statistic per paired indicator')
df_name_lower_comp_all.groupby('gv_indicator')['stat_min'].max().hist(bins=100)
plt.title('Histogram of Min Fuzzy Statistic per paired indicator')
# # Generating Matched Indicator Shortlists
# ## Automated Matching: Generate Top 1 High-accuracy Matched Indicators
# For high-accuracy match results, we automatically generate the top 1 matched indicator.
#
# Based on iterations to improve results, we choose the following cut-off points using the following fuzzy statistics: ratio, partial ratio, token set ratio, token sort ratio
# - Get top match based on average fuzzy statistics.
# - Keep matches which satisfy all of the following:
# - Average fuzzy statistics at least 80
# - Minimum fuzzy statistic is at least 70
df_name_lower_comp_fuzz_top1 = df_name_lower_comp_all.groupby('gv_indicator').apply(lambda x: x.sort_values(by='stat_mean', ascending=False).head(1)).reset_index(drop=True)
df_name_lower_comp_fuzz_top1_filtered = df_name_lower_comp_fuzz_top1[(df_name_lower_comp_fuzz_top1['stat_mean'] >= 80) & (df_name_lower_comp_fuzz_top1['stat_min'] >= 70)].reset_index(drop=True).reset_index()
pd.DataFrame(df_name_lower_comp_fuzz_top1_filtered['source_dataset'].value_counts().sort_index())
df_name_lower_comp_fuzz_top1_filtered.shape
df_name_lower_comp_fuzz_top1_filtered.to_csv("%s-TC-Gov_High-Accuracy_Automatic-Matched-Indicators.csv" % (date_today), index=False)
# Merge Top 1 High-accuracy list with full indicator data.
# +
df_name_lower_comp_fuzz_top1_filtered_short = df_name_lower_comp_fuzz_top1_filtered[['index', 'gv_indicator', 'tc_indicator', 'source_dataset']]
tc_shortlist_fuzz = tc_indicators.merge(df_name_lower_comp_fuzz_top1_filtered_short, how='inner', left_on=['name_lower', 'dataset'], right_on=['tc_indicator', 'source_dataset']).drop(['source_dataset', 'tc_indicator','gv_indicator'], axis=1)
gv_shortlist_fuzz = gv_indicators.replace({'dataset': {'Global Competitiveness Index': 'World Economic Forum Global Competitiveness Index'}}).merge(df_name_lower_comp_fuzz_top1_filtered_short, how='inner', left_on=['name_lower', 'dataset'], right_on=['gv_indicator', 'source_dataset']).drop(['source_dataset', 'tc_indicator','gv_indicator'], axis=1)
merged_shortlist_fuzz = tc_shortlist_fuzz.merge(gv_shortlist_fuzz, how='inner', on=['index'], suffixes = ('_tc', '_gv'))
merged_shortlist_fuzz = merged_shortlist_fuzz.merge(df_name_lower_comp_fuzz_top1_filtered, on='index')
# -
merged_shortlist_fuzz.to_csv("%s-TC-Gov_High-Accuracy_Automatic-Matched-Indicators_complete.csv" % (date_today), index=False)
# ## Semi-automated Matching: Generate Possible Matches Shortlist for Low-Accuracy Matched Indicators
# For low-accuracy match results, we generate the top 5 matches per fuzzy statistic (which gives at most 20 matches per Govdata360 indicator).
#
# The user can then manually pick the best match from this shortlist. To make manual search easier, we only keep all unique suggested matches and sort unique results based on score.
df_name_lower_comp_fuzz_low_acc = df_name_lower_comp_fuzz_top1[~((df_name_lower_comp_fuzz_top1['stat_mean'] >= 80) & (df_name_lower_comp_fuzz_top1['stat_min'] >= 70))][['gv_indicator', 'source_dataset']]
# +
df_name_lower_comp_all_choices = pd.DataFrame()
df_cols = ['gv_indicator', 'source_dataset', 'ratio1', 'ratio2', 'ratio3', 'token_sort1', 'token_sort2', 'token_sort3']
for dataset in common_datasets:
tc_list = list(set(tc_indicators[tc_indicators['dataset'].isin(dataset)]['name_lower']))
gv_list = list(set(df_name_lower_comp_fuzz_low_acc[df_name_lower_comp_fuzz_low_acc['source_dataset'].isin(dataset)]['gv_indicator']))
name_lower_comparison_fuzz = []
for gv_ind in gv_list:
for val in process.extract(gv_ind, tc_list, scorer=fuzz.ratio, limit=5):
name_lower_comparison_fuzz.append([gv_ind, val[0], val[1], dataset[0]])
for val in process.extract(gv_ind, tc_list, scorer=fuzz.partial_ratio, limit=5):
name_lower_comparison_fuzz.append([gv_ind, val[0], val[1], dataset[0]])
for val in process.extract(gv_ind, tc_list, scorer=fuzz.token_set_ratio, limit=5):
name_lower_comparison_fuzz.append([gv_ind, val[0], val[1], dataset[0]])
for val in process.extract(gv_ind, tc_list, scorer=fuzz.token_sort_ratio, limit=5):
name_lower_comparison_fuzz.append([gv_ind, val[0], val[1], dataset[0]])
df_name_lower_comparison_fuzz = pd.DataFrame(name_lower_comparison_fuzz, columns = ['gv_indicator', 'tc_indicator','match_score', 'source_dataset'])
df_name_lower_comparison_fuzz.drop_duplicates(subset = ['gv_indicator', 'tc_indicator', 'source_dataset'], inplace=True)
df_name_lower_comparison_fuzz = df_name_lower_comparison_fuzz.sort_values(by='match_score', ascending=False)
df_name_lower_comp_all_choices = pd.concat([df_name_lower_comp_all_choices, df_name_lower_comparison_fuzz])
df_name_lower_comp_all_choices.reset_index(inplace=True, drop=True)
# -
df_name_lower_comp_all_choices['match?'] = np.nan
df_name_lower_comp_all_choices = df_name_lower_comp_all_choices[['gv_indicator', 'tc_indicator','match?', 'match_score', 'source_dataset']]
df_name_lower_comp_all_choices.to_csv("%s-TC-Gov_Low-Accuracy_Semi-Automatic-Matched-Indicators-Choices.csv" % (date_today), index=False)
# We import the filled-out CSV with manual matching.
df_manual_matching = pd.read_csv("2017-07-05-TC-Gov_Low-Accuracy_Semi-Automatic-Matched-Indicators-Choices_with-manual-matching.csv")
df_manual_matching = df_manual_matching[df_manual_matching['match?'] == 1.0].reset_index(drop=True).reset_index()
df_manual_matching.head()
# +
df_manual_matching_matches = df_manual_matching[['index', 'gv_indicator', 'tc_indicator', 'source_dataset']]
tc_shortlist_fuzz = tc_indicators.merge(df_manual_matching_matches, how='inner', left_on=['name_lower', 'dataset'], right_on=['tc_indicator', 'source_dataset']).drop(['source_dataset', 'tc_indicator','gv_indicator'], axis=1)
gv_shortlist_fuzz = gv_indicators.replace({'dataset': {'Global Competitiveness Index': 'World Economic Forum Global Competitiveness Index'}}).merge(df_manual_matching_matches, how='inner', left_on=['name_lower', 'dataset'], right_on=['gv_indicator', 'source_dataset']).drop(['source_dataset', 'tc_indicator','gv_indicator'], axis=1)
merged_shortlist_fuzz_manual = tc_shortlist_fuzz.merge(gv_shortlist_fuzz, how='inner', on=['index'], suffixes = ('_tc', '_gv'))
merged_shortlist_fuzz_manual = merged_shortlist_fuzz_manual.merge(df_manual_matching_matches, on='index')
# -
merged_shortlist_fuzz_manual.head()
merged_shortlist_fuzz_manual.to_csv("%s-TC-Gov_High-Accuracy_Automatic-Matched-Indicators_complete.csv" % (date_today), index=False)
merged_shortlist_fuzz.shape, merged_shortlist_fuzz_manual.shape
merged_shortlist_fuzz['match_type'] = 'automatic_high-accuracy'
merged_shortlist_fuzz_manual['match_type'] = 'semi-automatic_low-accuracy'
df_final_matched = merged_shortlist_fuzz.append(merged_shortlist_fuzz_manual)
df_final_matched.shape
df_final_matched.groupby(['match_type'])['source_dataset'].value_counts().unstack(level=0).fillna(0)
df_final_matched.to_csv("%s-TC-Gov_Full-Matched-Indicators_complete.csv" % (date_today), index=False)
| 2017-07-04-Compare-Govdata360-and-TCdata360-indicators-v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: brandon_env
# language: python
# name: brandon_env
# ---
# <div style="font-size:30px" align="center"> <b> Training Word2Vec on Biomedical Abstracts in PubMed </b> </div>
#
# <div style="font-size:18px" align="center"> <b> <NAME> - University of Virginia's Biocomplexity Institute </b> </div>
#
# <br>
#
# This notebook borrows from several resources to train a Word2Vec model on a subset of the PubMed database taken from January 2021. Overall, I am interested in testing whether diversity and racial terms are becoming more closely related over time. To do this, I train the model on 1990-1995 data and then a random sample of 2015-2020 data.
#
# #### Import packages and ingest data
#
# Let's load all of our packages
# +
# load packages
import os
import psycopg2 as pg
import pandas.io.sql as psql
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from textblob import Word
from gensim.models import Word2Vec
import multiprocessing
# set cores, grab stop words
cores_available = multiprocessing.cpu_count() - 1
stop = stopwords.words('english')
# connect to the database, download data
connection = pg.connect(host = 'postgis1', database = 'sdad',
user = os.environ.get('db_user'),
password = os.environ.get('db_<PASSWORD>'))
early_query = '''SELECT fk_pmid, year, abstract, publication
FROM pubmed_2021.biomedical_human_abstracts
WHERE year <= 2000'''
later_query = '''SELECT fk_pmid, year, abstract, publication
FROM pubmed_2021.biomedical_human_abstracts
WHERE year >= 2010'''
# read each query result into a dataframe and preview the earlier one
pubmed_earlier = pd.read_sql_query(early_query, con=connection)
pubmed_later = pd.read_sql_query(later_query, con=connection)
pubmed_earlier.head()
# -
# #### Matching the Sample Sizes
#
# Since the 2010-2020 data is larger than the 1990-2000 data, we want to take a random sample of the later data to make the sample sizes the same for comparison later.
abstracts_to_sample = pubmed_earlier.count().year
pubmed_later = pubmed_later.sample(n=abstracts_to_sample, random_state=1)
pubmed_later.count().year
# #### Cleaning the text data
#
# Convert all text to lower case; remove punctuation, digits, and stop words; and finally lemmatize.
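# The cleaning steps described above can be sketched on a single string in plain Python — a tiny hard-coded stopword list stands in for NLTK's, and lemmatization is omitted:

```python
import re

STOP = {"the", "of", "in", "a", "an", "and", "were", "was"}  # stand-in stopword list

def clean(text):
    text = text.lower()                       # lower-case
    text = re.sub(r"[^\w\s]+", "", text)      # strip punctuation (same regex as below)
    tokens = [t for t in text.split() if not t.isdigit()]   # drop bare numbers
    tokens = [t for t in tokens if t not in STOP]           # drop stop words
    return " ".join(tokens)

print(clean("In 1995, 120 patients were enrolled in the trial."))  # patients enrolled trial
```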
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract'].str.lower()
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].str.replace(r'[^\w\s]+', '', regex=True)
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x.isdigit()))
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x in stop))
pubmed_earlier['abstract_clean'] = pubmed_earlier['abstract_clean'].apply(lambda x:' '.join([Word(word).lemmatize() for word in x.split()]))
pubmed_earlier.head()
pubmed_later['abstract_clean'] = pubmed_later['abstract'].str.lower()
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].str.replace(r'[^\w\s]+', '', regex=True)
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x.isdigit()))
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join(x for x in x.split() if not x in stop))
pubmed_later['abstract_clean'] = pubmed_later['abstract_clean'].apply(lambda x:' '.join([Word(word).lemmatize() for word in x.split()]))
pubmed_later.head()
# #### Training the Word2Vec Models
#
# Now, let's train these Word2Vec models and save them as a binary file to visualize later.
# +
# run the model on the earlier data
earlier_list=[]
for i in pubmed_earlier['abstract_clean']:
li = list(i.split(" "))
earlier_list.append(li)
# note: in gensim >= 4.0 the size/iter arguments are named vector_size/epochs
earlier_model = Word2Vec(earlier_list, min_count=5, size=512, window=5, iter=5, workers=cores_available)
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
earlier_model.save("word2vec_1990_2000.model")
earlier_model.save("word2vec_1990_2000.bin")
# run the model on the later data
later_list=[]
for i in pubmed_later['abstract_clean']:
li = list(i.split(" "))
later_list.append(li)
later_model = Word2Vec(later_list, min_count=5, size=512, window=5, iter=5, workers=cores_available)
os.chdir("/sfs/qumulo/qhome/kb7hp/git/diversity/data/word_embeddings/")
later_model.save("word2vec_2010_2020.model")
later_model.save("word2vec_2010_2020.bin")
# -
# #### References
#
# [Guru 99's Tutorial on Word Embeddings](https://www.guru99.com/word-embedding-word2vec.html)
#
# [Stackoverflow Post on Lemmatizing in Pandas](https://stackoverflow.com/questions/47557563/lemmatization-of-all-pandas-cells)
| src/04_word2vec/00_antiquated/.ipynb_checkpoints/01_w2v_train-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# ## Introduction
# + jupyter={"outputs_hidden": true} tags=[]
from IPython.display import YouTubeVideo
YouTubeVideo(id="3sJnTpeFXZ4", width="100%")
# -
# In order to get you familiar with graph ideas,
# I have deliberately chosen to steer away from
# the more pedantic matters
# of loading graph data to and from disk.
# That said, the following scenario will eventually happen,
# where a graph dataset lands on your lap,
# and you'll need to load it in memory
# and start analyzing it.
#
# Thus, we're going to go through graph I/O,
# specifically the APIs on how to convert
# graph data that comes to you
# into that magical NetworkX object `G`.
#
# Let's get going!
#
# ## Graph Data as Tables
#
# Let's recall what we've learned in the introductory chapters.
# Graphs can be represented using two **sets**:
#
# - Node set
# - Edge set
#
# ### Node set as tables
#
# Let's say we had a graph with 3 nodes in it: `A, B, C`.
# We could represent it in plain text, computer-readable format:
#
# ```csv
# A
# B
# C
# ```
#
# Suppose the nodes also had metadata.
# Then, we could tag on metadata as well:
#
# ```csv
# A, circle, 5
# B, circle, 7
# C, square, 9
# ```
#
# Does this look familiar to you?
# Yes, node sets can be stored in CSV format,
# with one of the columns being node ID,
# and the rest of the columns being metadata.
#
# ### Edge set as tables
#
# If, between the nodes, we had 4 edges (this is a directed graph),
# we can also represent those edges in plain text, computer-readable format:
#
# ```csv
# A, C
# B, C
# A, B
# C, A
# ```
#
# And let's say we also had other metadata,
# we can represent it in the same CSV format:
#
# ```csv
# A, C, red
# B, C, orange
# A, B, yellow
# C, A, green
# ```
#
# If you've been in the data world for a while,
# this should not look foreign to you.
# Yes, edge sets can be stored in CSV format too!
# Two of the columns represent the nodes involved in an edge,
# and the rest of the columns represent the metadata.
#
# ### Combined Representation
#
# In fact, one might also choose to combine
# the node set and edge set tables together in a merged format:
#
# ```
# n1, n2, colour, shape1, num1, shape2, num2
# A, C, red, circle, 5, square, 9
# B, C, orange, circle, 7, square, 9
# A, B, yellow, circle, 5, circle, 7
# C, A, green, square, 9, circle, 5
# ```
#
# In this chapter, the datasets that we will be looking at
# are going to be formatted in both ways.
# Let's get going.
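# A combined table like the one above can be split back into separate node and edge sets with pandas. A sketch using the toy data (an in-memory string stands in for a file):

```python
import io
import pandas as pd

csv_text = """n1,n2,colour,shape1,num1,shape2,num2
A,C,red,circle,5,square,9
B,C,orange,circle,7,square,9
A,B,yellow,circle,5,circle,7
C,A,green,square,9,circle,5
"""
df = pd.read_csv(io.StringIO(csv_text))

# edge set: the two node columns plus edge metadata
edges = df[["n1", "n2", "colour"]]

# node set: stack the per-endpoint metadata, one row per unique node
nodes = pd.concat([
    df[["n1", "shape1", "num1"]].rename(columns={"n1": "node", "shape1": "shape", "num1": "num"}),
    df[["n2", "shape2", "num2"]].rename(columns={"n2": "node", "shape2": "shape", "num2": "num"}),
]).drop_duplicates().reset_index(drop=True)

print(sorted(nodes["node"]))  # ['A', 'B', 'C']
```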
# ## Dataset
#
# We will be working with the Divvy bike sharing dataset.
#
# > Divvy is a bike sharing service in Chicago.
# > Since 2013, Divvy has released their bike sharing dataset to the public.
# > The 2013 dataset is comprised of two files:
# > - `Divvy_Stations_2013.csv`, containing the stations in the system, and
# > - `DivvyTrips_2013.csv`, containing the trips.
#
# Let's dig into the data!
from pyprojroot import here
# Firstly, we need to unzip the dataset:
# +
import zipfile
import os
import sys
if not (r'C:\Users\pui_s\Documents\concordia-bootcamps\Network-Analysis-Made-Simple' in sys.path):
sys.path.insert(0, r'C:\Users\pui_s\Documents\concordia-bootcamps\Network-Analysis-Made-Simple')
from nams.load_data import datasets
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir(datasets):
print('Unzipping the divvy_2013.zip file in the datasets folder.')
with zipfile.ZipFile(datasets / "divvy_2013.zip","r") as zip_ref:
zip_ref.extractall(datasets)
# -
# Now, let's load in both tables.
#
# First is the `stations` table:
# +
import pandas as pd
stations = pd.read_csv(datasets / 'divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], encoding='utf-8')
# -
stations.head(10)
# > `id` and `node` are distinct, so we can potentially use either one as the node label. Depending on the application, one might be more convenient than the other.
stations.describe()
# Now, let's load in the `trips` table.
trips = pd.read_csv(datasets / 'divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'], low_memory=False)
# + tags=[]
trips.head(10)
# + jupyter={"outputs_hidden": true} tags=[]
# import janitor
# trips_summary = (
# trips
# .groupby(["from_station_id", "to_station_id"])
# .count()
# .reset_index()
# .select_columns(
# [
# "from_station_id",
# "to_station_id",
# "trip_id"
# ]
# )
# .rename_column("trip_id", "num_trips")
# )
# -
trips_summary = trips.groupby(["from_station_id",
"to_station_id"]).count().reset_index()[["from_station_id",
"to_station_id", "trip_id"]].rename({"trip_id":
"num_trips"}, axis='columns')
trips_summary.head()
trips_summary.info()
# ## Graph Model
#
# Given the data, if we wished to use a graph as a data model
# for the number of trips between stations,
# then naturally, nodes would be the stations,
# and edges would be trips between them.
#
# This graph would be directed,
# as one could have more trips from station A to B
# and less in the reverse.
#
# With this definition,
# we can begin graph construction!
#
# ### Create NetworkX graph from pandas edgelist
#
# NetworkX provides an extremely convenient way
# to load data from a pandas DataFrame:
# +
import networkx as nx
G = nx.from_pandas_edgelist(
df=trips_summary,
source="from_station_id",
target="to_station_id",
edge_attr=["num_trips"],
create_using=nx.DiGraph
)
# -
# ### Inspect the graph
#
# Once the graph is in memory,
# we can inspect it to get out summary graph statistics.
print(nx.info(G))
# You'll notice that the edge metadata have been added correctly: we have recorded in there the number of trips between stations.
list(G.edges(data=True))[0:5]
# However, the node metadata is not present:
list(G.nodes(data=True))[0:5]
Gt = G.copy()
attrs = {85: {'name': "<NAME> & <NAME>"}}
# nx.set_node_attributes(Gt, attrs)
nx.set_node_attributes(Gt, attrs)
# + tags=[]
Gt.nodes[85]
# -
# ### Annotate node metadata
#
# We have rich station data on hand,
# such as the longitude and latitude of each station,
# and it would be a pity to discard it,
# especially when we can potentially use it as part of the analysis
# or for visualization purposes.
# Let's see how we can add this information in.
#
# Firstly, recall what the `stations` dataframe looked like:
stations.head()
# The `id` column gives us the node ID in the graph,
# so if we set `id` to be the index,
# if we then also loop over each row,
# we can treat the rest of the columns as dictionary keys
# and values as dictionary values,
# and add the information into the graph.
#
# Let's see this in action.
for node, metadata in stations.set_index("id").iterrows():
for key, val in metadata.items():
G.nodes[node][key] = val
# Now, our node metadata should be populated.
# + tags=[]
list(G.nodes(data=True))[0:5]
# -
# In `nxviz`, a `GeoPlot` object is available
# that allows you to quickly visualize
# a graph that has geographic data.
# However, being `matplotlib`-based,
# it is going to be quickly overwhelmed
# by the sheer number of edges.
#
# As such, we are going to first filter the edges.
#
# ### Exercise: Filter graph edges
#
# > Leveraging what you know about how to manipulate graphs,
# > now try _filtering_ edges.
# >
#
# _Hint: NetworkX graph objects can be deep-copied using `G.copy()`:_
#
# ```python
# G_copy = G.copy()
# ```
#
# _Hint: NetworkX graph objects also let you remove edges:_
#
# ```python
# G.remove_edge(node1, node2) # does not return anything
# ```
# +
def filter_graph(G, minimum_num_trips):
"""
Filter the graph such that
only edges that have minimum_num_trips or more
are present.
"""
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
if d['num_trips'] < minimum_num_trips:
G_filtered.remove_edge(u, v)
return G_filtered
# from nams.solutions.io import filter_graph
G_filtered = filter_graph(G, 50)
# + tags=[]
G_filtered.nodes(data=True)
# -
import matplotlib.pyplot as plt
import numpy as np
# draw the network using latitude and longitude to see whether the bike network resembles the city's layout
locs = {n: np.array([di['latitude'], di['longitude']]) for n, di in G_filtered.nodes(data=True)}
nx.draw_networkx_nodes(G_filtered, pos=locs, node_size=3)
nx.draw_networkx_edges(G_filtered, pos=locs)
plt.show()
# let's try a Circos plot (less suitable in this case)
import nxviz as nv
# first we annotate each node with its connectivity, i.e. the number of neighbors it is connected to
for n in G_filtered.nodes():
G_filtered.nodes[n]['connectivity'] = len(list(G.neighbors(n)))
c = nv.CircosPlot(G_filtered, node_order = 'connectivity')
c.draw()
plt.show()
# ### Visualize using GeoPlot
#
# `nxviz` provides a GeoPlot object
# that lets you quickly visualize geospatial graph data.
# A note on geospatial visualizations:
#
# > As the creator of `nxviz`,
# > I would recommend using proper geospatial packages
# > to build custom geospatial graph viz,
# > such as [`pysal`](http://pysal.org/).
# >
# > That said, `nxviz` can probably do what you need
# > for a quick-and-dirty view of the data.
# + tags=[]
# import nxviz as nv
c = nv.GeoPlot(G_filtered, node_lat='latitude', node_lon='longitude', node_color="dpcapacity")
c.draw()
plt.show()
# -
# Does that look familiar to you? Looks quite a bit like Chicago, I'd say :)
#
# Jesting aside, this visualization does help illustrate
# that the majority of trips occur between stations that are
# near the city center.
# ## Pickling Graphs
#
# Since NetworkX graphs are Python objects,
# the canonical way to save them is by pickling them.
# You can do this using:
#
# ```python
# nx.write_gpickle(G, file_path)
# ```
#
# Here's an example in action:
nx.write_gpickle(G, r"divvy.pkl")
# And just to show that it can be loaded back into memory:
G_loaded = nx.read_gpickle(r"divvy.pkl")
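# Note that `write_gpickle`/`read_gpickle` were removed in NetworkX 3.0; since a graph is an ordinary Python object, the standard `pickle` module does the same job. A dependency-free sketch of the round trip (a plain adjacency dict stands in for `G`):

```python
import os
import pickle
import tempfile

graph = {"A": ["B", "C"], "B": ["C"], "C": []}  # adjacency-dict stand-in for G

path = os.path.join(tempfile.mkdtemp(), "divvy_standin.pkl")
with open(path, "wb") as f:
    pickle.dump(graph, f)          # equivalent of nx.write_gpickle(G, path)

with open(path, "rb") as f:
    graph_loaded = pickle.load(f)  # equivalent of nx.read_gpickle(path)

print(graph_loaded == graph)  # True
```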
# ### Exercise: checking graph integrity
#
# If you get a graph dataset as a pickle,
# you should always check it against reference properties
# to make sure of its data integrity.
#
# > Write a function that tests that the graph
# > has the correct number of nodes and edges inside it.
def test_graph_integrity(G):
"""Test integrity of raw Divvy graph."""
# Your solution here
pass
# +
from nams.solutions.io import test_graph_integrity
test_graph_integrity(G)
# +
# test_graph_integrity??
# -
# ## Other text formats
#
# CSV files and `pandas` DataFrames
# give us a convenient way to store graph data,
# and if possible, do insist with your data collaborators
# that they provide you with graph data that are in this format.
# If they don't, however, no sweat!
# After all, Python is super versatile.
#
# In this ebook, we have loaded data in
# from non-CSV sources,
# sometimes by parsing text files raw,
# sometimes by treating special characters as delimiters in a CSV-like file,
# and sometimes by resorting to parsing JSON.
#
# You can see other examples of how we load data
# by browsing through the source file of `load_data.py`
# and studying how we construct graph objects.
# ## Solutions
#
# The solutions to this chapter's exercises are below
# + tags=[]
from nams.solutions import io
import inspect
print(inspect.getsource(io))
| notebooks/03-practical/01-io.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.5 64-bit (''stan'': conda)'
# name: python3
# ---
# # Advanced Topic: Using External C++ Functions
#
# This is based on the relevant portion of the CmdStan documentation [here](https://mc-stan.org/docs/cmdstan-guide/using-external-cpp-code.html)
# Consider the following Stan model, based on the bernoulli example.
# + nbsphinx="hidden"
import os
try:
os.remove('bernoulli_external')
except:
pass
# -
from cmdstanpy import CmdStanModel
model_external = CmdStanModel(stan_file='bernoulli_external.stan', compile=False)
print(model_external.code())
# As you can see, it features a function declaration for `make_odds`, but no definition. If we try to compile this, we will get an error.
model_external.compile()
# Even enabling the `--allow-undefined` flag to stanc3 will not allow this model to be compiled quite yet.
model_external.compile(stanc_options={'allow-undefined':True})
# To resolve this, we need to both tell the Stan compiler an undefined function is okay **and** let C++ know what it should be.
#
# We can provide a definition in a C++ header file by using the `user_header` argument to either the CmdStanModel constructor or the `compile` method.
#
# This enables the `allow-undefined` flag automatically.
model_external.compile(user_header='make_odds.hpp')
# We can then run this model and inspect the output
fit = model_external.sample(data={'N':10, 'y':[0,1,0,0,0,0,0,0,0,1]})
fit.stan_variable('odds')
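# Since `make_odds` just computes theta / (1 - theta), the returned draws can be sanity-checked against the theta draws. A plain-Python mirror of the C++ definition (the example value below is hypothetical):

```python
def make_odds(theta):
    # mirrors the C++ definition: theta / (1 - theta)
    return theta / (1 - theta)

theta_draw = 0.2  # hypothetical posterior draw of theta, near the sample mean 2/10
print(make_odds(theta_draw))  # 0.25
```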
# The contents of this header file are a bit complicated unless you are familiar with the C++ internals of Stan, so they are presented without comment:
#
# ```c++
# #include <boost/math/tools/promotion.hpp>
# #include <ostream>
#
# namespace bernoulli_model_namespace {
# template <typename T0__> inline typename
# boost::math::tools::promote_args<T0__>::type
# make_odds(const T0__& theta, std::ostream* pstream__) {
# return theta / (1 - theta);
# }
# }
# ```
| docsrc/examples/Using External C++.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from tqdm import tqdm
X = np.array([(2,3),(5,4),(9,6),(4,7),(8,1),(7,2)])
X
# ## 1. Building the Tree
# +
class TreeNode:
def __init__(self,data=None,fi=None,fv=None,left=None,right=None):
self.data = data
self.fi = fi
self.fv = fv
self.left = left
self.right = right
class KDTree:
def buildTree(self,X,depth):
n_size,n_feature = X.shape
#recursion termination condition: a single sample becomes a leaf
if n_size == 1:
tree = TreeNode(data=X[0])
return tree
fi = depth % n_feature
argsort = np.argsort(X[:,fi])
middle_idx = argsort[n_size // 2]
left_idxs,right_idxs = argsort[:n_size//2],argsort[n_size//2+1:]
fv = X[middle_idx,fi]
data = X[middle_idx]
left,right = None,None
if len(left_idxs) > 0:
left = self.buildTree(X[left_idxs],depth+1)
if len(right_idxs) > 0:
right = self.buildTree(X[right_idxs],depth+1)
tree = TreeNode(data,fi,fv,left,right)
return tree
def fit(self,X):
self.tree = self.buildTree(X,0)
# -
kdtree = KDTree()
kdtree.fit(X)
kdtree.tree.right.data
# ## 2. Finding the Nearest Neighbor
def distance(a,b):
return np.sqrt(((a-b)**2).sum())
x = np.array([2,4.5])
nearest_point = kdtree.tree.data
nearest_dis = distance(kdtree.tree.data,x)
def find_nearest(kdtree,x):
global nearest_dis,nearest_point
if kdtree == None:
return
#if the distance from this node's point to the target is less than the current nearest distance, update nearest_point and nearest_dis
if distance(kdtree.data,x) < nearest_dis:
nearest_dis = distance(kdtree.data,x)
nearest_point = kdtree.data
if kdtree.fi == None or kdtree.fv == None:
return
#descend into the appropriate child node; revisit the other side if the splitting plane lies within nearest_dis
if x[kdtree.fi] < kdtree.fv:
find_nearest(kdtree.left,x)
if x[kdtree.fi] + nearest_dis > kdtree.fv:
find_nearest(kdtree.right,x)
elif x[kdtree.fi] > kdtree.fv:
find_nearest(kdtree.right,x)
if x[kdtree.fi] - nearest_dis < kdtree.fv:
find_nearest(kdtree.left,x)
else:
find_nearest(kdtree.left,x)
find_nearest(kdtree.right,x)
find_nearest(kdtree.tree,x)
print(nearest_dis)
print(nearest_point)
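# The kd-tree answer can be sanity-checked with a brute-force scan over the same six points and query used above:

```python
import numpy as np

X_check = np.array([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
x_query = np.array([2, 4.5])

# brute force: Euclidean distance from the query to every point, take the minimum
dists = np.sqrt(((X_check - x_query) ** 2).sum(axis=1))
best_idx = int(np.argmin(dists))
print(X_check[best_idx], dists[best_idx])  # [2 3] 1.5
```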
# ## 3. Complete KDTree-Based KNN Implementation
# +
class TreeNode:
def __init__(self,data=None,label=None,fi=None,fv=None,left=None,right=None):
self.data = data
self.label = label
self.fi = fi
self.fv = fv
self.left = left
self.right = right
class KDTreeKNN:
def __init__(self,k=3):
self.k = k
def buildTree(self,X,y,depth):
n_size,n_feature = X.shape
#recursion termination condition: a single sample becomes a leaf
if n_size == 1:
tree = TreeNode(data=X[0],label=y[0])
return tree
fi = depth % n_feature
argsort = np.argsort(X[:,fi])
middle_idx = argsort[n_size // 2]
left_idxs,right_idxs = argsort[:n_size//2],argsort[n_size//2+1:]
fv = X[middle_idx,fi]
data,label = X[middle_idx],y[middle_idx]
left,right = None,None
if len(left_idxs) > 0:
left = self.buildTree(X[left_idxs],y[left_idxs],depth+1)
if len(right_idxs) > 0:
right = self.buildTree(X[right_idxs],y[right_idxs],depth+1)
tree = TreeNode(data,label,fi,fv,left,right)
return tree
def fit(self,X,y):
self.tree = self.buildTree(X,y,0)
def _predict(self,x):
finded = []
labels = []
for i in range(self.k):
nearest_point,nearest_dis,nearest_label = self.find_nearest(x,finded)
finded.append(nearest_point)
labels.append(nearest_label)
counter={}
for i in labels:
counter.setdefault(i,0)
counter[i]+=1
sort=sorted(counter.items(),key=lambda x:x[1],reverse=True)  # most common label first
return sort[0][0]
def predict(self,X):
return np.array([self._predict(x) for x in tqdm(X)])
def score(self,X,y):
return np.sum(self.predict(X)==y) / len(y)
def _isin(self,x,finded):
for f in finded:
if KDTreeKNN.distance(x,f) < 1e-6: return True
return False
@staticmethod
def distance(a,b):
return np.sqrt(((a-b)**2).sum())
def find_nearest(self,x,finded):
nearest_point = None
nearest_dis = np.inf
nearest_label = None
def travel(kdtree,x):
nonlocal nearest_dis,nearest_point,nearest_label
if kdtree == None:
return
#if the distance from this node's point to the target is less than the current nearest distance, update nearest_point and nearest_dis
if KDTreeKNN.distance(kdtree.data,x) < nearest_dis and not self._isin(kdtree.data,finded) :
nearest_dis = KDTreeKNN.distance(kdtree.data,x)
nearest_point = kdtree.data
nearest_label = kdtree.label
if kdtree.fi == None or kdtree.fv == None:
return
#descend into the appropriate child node; revisit the other side if the splitting plane lies within nearest_dis
if x[kdtree.fi] < kdtree.fv:
travel(kdtree.left,x)
if x[kdtree.fi] + nearest_dis > kdtree.fv:
travel(kdtree.right,x)
elif x[kdtree.fi] > kdtree.fv:
travel(kdtree.right,x)
if x[kdtree.fi] - nearest_dis < kdtree.fv:
travel(kdtree.left,x)
else:
travel(kdtree.left,x)
travel(kdtree.right,x)
travel(self.tree,x)
return nearest_point,nearest_dis,nearest_label
# -
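# The hand-rolled vote counting above can also be done with `collections.Counter`, whose `most_common` returns labels ordered by count descending — a sketch:

```python
from collections import Counter

labels = [1, -1, 1, 1, -1]  # hypothetical votes from the k nearest neighbors
winner, count = Counter(labels).most_common(1)[0]
print(winner, count)  # 1 3
```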
# # 4. Testing the KDTree KNN
from sklearn.datasets import make_gaussian_quantiles
X, y = make_gaussian_quantiles(n_samples=200, n_features=2, n_classes=2, mean=[1,2],cov=2,random_state=222)
y_neg = y.copy()
y_neg[y==0] = -1
knn = KDTreeKNN()
knn.fit(X,y_neg)
import matplotlib.pyplot as plt
# %matplotlib inline
def plot_clf(X,y,cls,name):
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
np.arange(y_min, y_max, 0.02))
points = np.c_[xx.ravel(), yy.ravel()]
Z = cls.predict(points)
cs = plt.contourf(xx, yy, Z.reshape(xx.shape))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y)
plt.title(name)
plot_clf(X,y,knn,"knn")
# ## 4.2 Brute-Force KNN
class KNNclassifier:
def __init__(self,k):
assert k>=1,"k must be a positive integer"
self.k=k
self.X_train=None
self.y_train=None
def fit(self,X_train,y_train):
self.X_train=X_train
self.y_train=y_train
return self
def _predict(self,X):
distance = [np.sum((X_i-X)**2) for X_i in self.X_train]
arg = np.argsort(distance)
top_k = [self.y_train[i] for i in arg[:self.k]]
# c=Counter(top_k)
# return c.most_common(1)[0][0]
counter={}
for i in top_k:
counter.setdefault(i,0)
counter[i]+=1
sort=sorted(counter.items(),key=lambda x:x[1],reverse=True)  # most common label first
return sort[0][0]
def predict(self, X_test):
y_predict = [self._predict(i) for i in tqdm(X_test)]
return np.array(y_predict)
def score(self, X_test ,y_test):
y_predict = self.predict(X_test)
return np.sum(y_predict==y_test)/len(X_test)
knn = KNNclassifier(3)
knn.fit(X,y)
plot_clf(X,y,knn,"knn")
| KNN/KDTree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Idea Working Towards:
# - Load well with LASIO
# - Export LASIO style JSON
# - CONVERT LASIO style JSON to WELLIO style JSON
# - Visualize with wellioviz
# #### Note !!
# Node variables don't seem to be re-initializable once they are defined via pixiedust_node.
#
# To work-around this:
# when needing to reset a node variable, restart the Jupyter kernel.
import json
import lasio
import numpy as np
from datetime import datetime
# + [markdown] raw_mimetype="text/markdown"
# #### Pixiedust node module enables running Node.js code in Jupyter.
# https://github.com/pixiedust/pixiedust_node
# -
import pixiedust_node
# #### Install wellio and wellioviz if not already installed.
# %%node
var install_wellio = 0;
try {
require.resolve('wellio');
} catch (e) {
if (e.code === 'MODULE_NOT_FOUND') {
console.log("Will install Wellio.js")
install_wellio = 1;
} else {
console.log(e);
install_wellio = 1;
}
}
if (install_wellio) {
npm.install('wellio');
}
require.resolve('wellio')
# %%node
var install_wellioviz = 0;
try {
require.resolve('wellioviz');
} catch (e) {
if (e.code === 'MODULE_NOT_FOUND') {
console.log("Will install Wellioviz.js");
install_wellioviz = 1;
} else {
console.log(e);
install_wellioviz = 1;
}
}
if (install_wellioviz) {
npm.install('wellioviz');
}
# +
# %%node
var wellio = null;
var wellioviz = null;
try {
wellio = require('wellio');
wellioviz = require('wellioviz');
} catch (err) {
console.log(err);
}
# -
# #### Create a las file via lasio and translate it to json.
# +
las = lasio.LASFile()
las.well.DATE = datetime.today().strftime('%Y-%m-%d %H:%M:%S')
las.params['ENG'] = lasio.HeaderItem('ENG', value='Kent Inverarity')
las.params['LMF'] = lasio.HeaderItem('LMF', value='GL')
las.other = 'Example of how to create a LAS file from scratch using lasio'
depths = np.arange(10, 50, 0.5)
synth = np.log10(depths)*5+np.random.random(len(depths))
synth[:8] = np.nan
las.add_curve('DEPT', depths, unit='m')
las.add_curve('SYNTH', synth, descr='fake data')
las.write('scratch_v2.las', version=2)
json_images = json.dumps(las, cls=lasio.JSONEncoder)
# -
# #### Example 1: Read the in-memory Lasio json string into Wellio json
# +
# %%node
var wellio_obj = '';
try {
let lasio_obj = '';
lasio_obj = JSON.parse(json_images);
wellio_obj = wellio.lasio_obj_2_wellio_obj(lasio_obj);
} catch (e) {
console.log('[');
console.log(e.name + ":: " + e.message);
console.log(']');
}
console.log(wellio_obj);
# -
# #### Introduce wellioviz and pass the wellio_obj to wellioviz
# %%node
console.log(wellioviz.define_wellioviz());
# %%node
let three_things = wellioviz.fromJSONofWEllGetThingsForPlotting(wellio_obj,"DEPT");
console.log(three_things);
# + [markdown] raw_mimetype="text/markdown"
# #### Example 2: Write Lasio json to file, then get current path and read the created json file into wellio json
# -
las_json_dict =json.loads(json_images)
with open('data.json', 'w') as outfile:
json.dump(las_json_dict, outfile)
# +
# %%node
const path = require('path');
let mydir = process.env.PWD;
let myfile = mydir + path.sep + 'data.json';
let lasio_json_str = '';
let lasio_obj_2 = '';
let wellio_obj_2 = '';
try {
lasio_json_str = wellio.read_lasio_json_file(myfile);
lasio_obj_2 = JSON.parse(lasio_json_str);
wellio_obj_2 = wellio.lasio_obj_2_wellio_obj(lasio_obj_2);
} catch (e) {
console.log('[');
console.log(e.name + ":: " + e.message);
console.log(']');
}
console.log(wellio_obj_2);
# -
# #### Example 3: Read created Lasio Las file
# %%node
let three_things_2 = wellioviz.fromJSONofWEllGetThingsForPlotting(wellio_obj_2,"DEPT");
console.log(three_things_2);
# +
# %%node
const path = require('path');
let mydir_3 = process.env.PWD;
let myfile_3 = mydir_3 + path.sep + 'scratch_v2.las';
let las_str_3 = '';
let wellio_obj_3 = '';
try {
las_str_3 = wellio.loadLAS(myfile_3);
wellio_obj_3 = wellio.las2json(las_str_3);
} catch (e) {
console.log('[');
console.log(e.name + ":: " + e.message);
console.log(']');
}
console.log(wellio_obj_3);
# -
# %%node
let three_things_3 = wellioviz.fromJSONofWEllGetThingsForPlotting(wellio_obj_3,"DEPT");
console.log(three_things_3);
| notebooks/wellio.js and lasio test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reddit Datasets
#
# ## Using [Pushshift](https://github.com/pushshift/api) to download Reddit data
#
# There are currently a couple Reddit datasets available to download:
#
# ### Headline mentions
# * `reddit_headline_counts.csv`: Contains how many times a candidate's name was mentioned in a /r/politics, /r/news, or /r/worldnews post from January 1, 2019 to June 2019.
#
# ### Article Text Parsing
# * `reddit_2016_05_31.pkl`: Contains the post id, number of comments, karma score, subreddit, number of subscribers in that subreddit, title of post link, and url of article links posted to /r/politics and a couple of the specific Republican candidates' subreddits from January 1, 2015 to May 31, 2016 (Trump won delegate majority in late May 2016).
# * `reddit_2019_06_15.pkl`: Contains the post id, number of comments, karma score, subreddit, number of subscribers in that subreddit, title of post link, and url of article links posted to /r/politics and a couple of the specific Democratic candidates' subreddits from January 1, 2019 to June 15, 2019
#
# (Those three files above can be found at https://berkeley-politics-capstone.s3.amazonaws.com/reddit.zip)
#
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019jun16tojul1_articleurls.pkl : This is a dataframe, from **June 16, 2019 to July 1, 2019** to append onto `reddit_2019_06_15.pkl`
#
# ### Article Text Parsing Supplements
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2016_dates.pkl : Used with the 2016 Article Text Parsing data to grab the dates of the article from Reddit
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019_dates.pkl : Used with the 2019 Article Text Parsing data to grab the dates of the article from Reddit
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019jun16tojul1_dates.pkl : This is a dataframe, from **June 16, 2019 to July 1, 2019** to append onto the URL above
#
# ### Reddit uncleaned comments
#
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2016_comments.pkl : Comments left in subreddits that were gathered from `reddit_2016_05_31.pkl` (/r/politics and the subreddits for certain Republican candidates)
#
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019_comments.pkl : Comments left in subreddits that were gathered from `reddit_2019_06_15.pkl` (/r/politics and the subreddits for certain Democratic candidates)
#
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019jun16tojul1_comments.pkl : This is a dataframe, from **June 16, 2019 to July 1, 2019** to append onto the URL above
#
# ### Reddit cleaned comments
#
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2016_comments_clean1.pkl : 2016 comments that have their markdown formatting stripped
# * https://berkeley-politics-capstone.s3.amazonaws.com/reddit_2019_comments_clean1.pkl : 2019 (Jan 1 to Jul 1) comments that have their markdown formatting stripped
#
# ### How to use the data
#
# * Within the data folder, make a new folder called Reddit and place the files in there.
# * The pickled files are pandas data frames when unpickled, so use the following command: `df = pd.read_pickle(file)`
# +
# Libraries
import requests
import praw
import praw.models
import configparser
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import nltk
import json
import os
import html
from bs4 import BeautifulSoup
from markdown import markdown
from datetime import datetime, timedelta, date
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from nltk.sentiment.vader import SentimentIntensityAnalyzer as SIA
# -
# ## Pushshift API - a free API with no credentials needed
#
# The Pushshift API is most useful for gathering aggregate data about a subreddit, such as the number of times a query is mentioned, or for pulling fields that meet certain conditions. Additionally, Pushshift is free to use and does not require any credentials.
#
# See https://www.reddit.com/r/pushshift/comments/bcxguf/new_to_pushshift_read_this_faq/ for more helpful information.
# ### List of URLs
#
# Most of the time in /r/politics, users submit link posts to articles. We'd like to gather a list of these articles so that we can use the `news-please` library to grab the text and subsequently do NLP on it. To get this list, I used Pushshift with the following call:
#
# https://api.pushshift.io/reddit/submission/search/?subreddit=politics&after=2019-06-01&before=2019-06-10&is_self=false&filter=title,subreddit,url,score,num_comments,subreddit_subscribers,id&limit=1000
#
# Note about subreddits: <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> are candidates who do not have dedicated subreddits as of May 30, 2019.
# +
# Initial step: Creating the list
output = pd.DataFrame(columns=['id','num_comments','score','subreddit','subreddit_subscribers','title','url'])
# +
# Set the date range
start_date = date(2019, 6, 16)
day_count = (date(2019, 7, 1) - start_date).days + 1 # The last time I pulled the 2019 dataset together
#day_count = (date(2016, 5, 31) - start_date).days + 1 # For use with the 2015 dataset
# Set the subreddits to go through
subreddits = {'<NAME>': 'JoeBiden',
'<NAME>': 'corybooker',
'<NAME>': 'Pete_Buttigieg',
'<NAME>': 'tulsi',
'<NAME>': 'Kirsten_Gillibrand',
'<NAME>': 'gravelforpresident',
'<NAME>': 'Kamala',
'<NAME>': 'inslee2020',
'<NAME>': 'BaemyKlobaechar',
'<NAME>': 'Beto2020',
'<NAME>': 'SandersForPresident',
'<NAME>': 'The_Donald',
'<NAME>': 'ElizabethWarren',
'politics': 'politics'}
# Subreddits for 2016 candidates
subreddits_2016 = {'<NAME>': 'The_Donald',
'<NAME>': 'TedCruz',
'<NAME>': 'JebBush',
'<NAME>': 'BenCarson',
'<NAME>': 'ChrisChristie',
'<NAME>': 'KasichForPresident',
'<NAME>': 'RandPaul',
'<NAME>': 'Marco_Rubio',
'politics': 'politics'}
# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away
for single_date in (start_date + timedelta(n) for n in range(day_count)):
after = single_date.strftime("%Y-%m-%d")
before = (single_date + timedelta(1)).strftime("%Y-%m-%d")
for subreddit in subreddits.values():
url = 'https://api.pushshift.io/reddit/submission/search/?subreddit={0}&after={1}&before={2}&is_self=false&filter=id,title,subreddit,url,score,num_comments,subreddit_subscribers&limit=1000'.format(subreddit,after,before)
r = requests.get(url)
if r.status_code != 200:
continue
else:
response = r.json()
if bool(response['data']):
temp = pd.DataFrame.from_dict(response['data'], orient='columns')
output = output.append(temp, ignore_index = True)
# +
# Remove non article URLs
remove = ['twitter.com','hbo.com','youtube.com','youtu.be','reddit.com','streamable.com','imgur.com',
'i.imgur.com','forgifs.com','i.redd.it']
searchrm = '|'.join(remove)
output = output[~output['url'].str.contains(searchrm)].reset_index(drop=True)
# +
# Pickling the dataframe
output.to_pickle('reddit_2019jun16tojul1.pkl')
# -
# ## Getting the dates of a post
#
# `news-please` has an issue with generating dates for the articles, so we have opted to use Reddit's post datetime instead. This will be done with Pushshift again; the code below is a refactored version of the code above (I originally tried PRAW, but it was too slow to feed in post IDs one by one).
# Instantiate the date dataframe
dates = pd.DataFrame(columns=['created_utc','id','url'])
# +
# Set the date range
start_date = date(2019, 6, 16)
day_count = (date(2019, 7, 1) - start_date).days + 1 # The last time I pulled the 2019 dataset together
#day_count = (date(2016, 5, 31) - start_date).days + 1 # For use with the 2015 dataset
# Set the subreddits to go through
subreddits = {'<NAME>': 'JoeBiden',
'<NAME>': 'corybooker',
'<NAME>': 'Pete_Buttigieg',
'<NAME>': 'tulsi',
'<NAME>': 'Kirsten_Gillibrand',
'<NAME>': 'gravelforpresident',
'<NAME>': 'Kamala',
'<NAME>': 'inslee2020',
'<NAME>': 'BaemyKlobaechar',
'<NAME>': 'Beto2020',
'<NAME>': 'SandersForPresident',
'<NAME>': 'The_Donald',
'<NAME>': 'ElizabethWarren',
'politics': 'politics'}
# Subreddits for 2016 candidates
subreddits_2016 = {'<NAME>': 'The_Donald',
'<NAME>': 'TedCruz',
'<NAME>': 'JebBush',
'<NAME>': 'BenCarson',
'<NAME>': 'ChrisChristie',
'<NAME>': 'KasichForPresident',
'<NAME>': 'RandPaul',
'<NAME>': 'Marco_Rubio',
'politics': 'politics'}
# For loop that iterates through the day to get 1000 link posts, then scrapes some choice domains away
for single_date in (start_date + timedelta(n) for n in range(day_count)):
after = single_date.strftime("%Y-%m-%d")
before = (single_date + timedelta(1)).strftime("%Y-%m-%d")
for subreddit in subreddits.values():
url = 'https://api.pushshift.io/reddit/submission/search/?subreddit={0}&after={1}&before={2}&is_self=false&filter=id,created_utc,url&limit=1000'.format(subreddit,after,before)
r = requests.get(url)
if r.status_code != 200:
continue
else:
response = r.json()
if bool(response['data']):
temp = pd.DataFrame.from_dict(response['data'], orient='columns')
dates = dates.append(temp, ignore_index = True)
# +
# Remove non article URLs
remove = ['twitter.com','hbo.com','youtube.com','youtu.be','reddit.com','streamable.com','imgur.com',
'i.imgur.com','forgifs.com','i.redd.it']
searchrm = '|'.join(remove)
dates = dates[~dates['url'].str.contains(searchrm)].reset_index(drop=True)
# -
# Now remove the URL column, and convert the date column into datetime
dates = dates.drop("url", axis=1)
dates['created_utc'] = dates['created_utc'].apply(lambda x: datetime.utcfromtimestamp(x))
dates
# +
# Pickling the dataframe
dates.to_pickle('reddit_2019jun16tojul1_dates.pkl')
# -
# ## Logistic Regression
#
# Will be switching over to Pushshift API for this process, which allows us to get data between dates
#
# Example call: https://api.pushshift.io/reddit/submission/search/?after=2019-06-02&before=2019-06-03&q=trump&sort_type=score&sort=desc&subreddit=politics&limit=500
#
# Where the output is a JSON file, the after date is the date of interest, and the before date is one day in the future.
#
# ### Headline mentions vs donations
# #### Consider calculating the cumulative karma score and comments along with this later on
#
# The Pushshift API call used here will rely on `aggs=subreddit` to get counts and will look like the following:
#
# https://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q=trump&size=0&after=2019-06-02&before=2019-06-03
#
# This data will be stored in a csv file that can be pulled in later: `mentions = pd.read_csv('reddit_headline_counts.csv')`
# +
politicians = ['williamson', 'harris', 'buttigieg', 'klobuchar', 'yang', 'gillibrand', 'delaney', 'inslee',
'hickenlooper', 'o\%27rourke', 'warren', 'castro', 'sanders', 'gabbard', 'booker', 'trump', 'biden']
# Set the date range
start_date = date(2019, 3, 8)
day_count = (date(2019, 3, 14) - start_date).days + 1
# Set the rows_list holder
rows_list = []
# Loop over each day and candidate to collect headline mention counts per subreddit
for single_date in (start_date + timedelta(n) for n in range(day_count)):
after = single_date.strftime("%Y-%m-%d")
before = (single_date + timedelta(1)).strftime("%Y-%m-%d")
for candidate in politicians:
url = 'https://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q={0}&size=0&after={1}&before={2}'.format(candidate,after,before)
response = requests.get(url).json()
for thing in response['aggs']['subreddit']:
dict1 = {}
dict1.update({'date': after,
'candidate': candidate,
'subreddit': thing['key'],
'doc_count': thing['doc_count']})
rows_list.append(dict1)
mentions = pd.DataFrame(rows_list)
# +
#mentions
#mentions.to_csv('mar8tomar14.csv',index=False)
# To reload, use:
mentions = pd.read_csv('../../data/reddit/reddit_headline_counts.csv')
# +
# O'Rourke candidate name change
mentions = mentions.replace('o\%27rourke', 'orourke')
by_politics = mentions[mentions['subreddit']=='politics']
# -
# From Andrew:
# find the path to each fec file, store paths in a nested dict
fec_2020_paths = {}
base_path = os.path.join("..","..","data","fec","2020") # This notebook was one more level down
for party_dir in os.listdir(base_path):
if(party_dir[0]!="."):
fec_2020_paths[party_dir] = {}
for cand_dir in os.listdir(os.path.join(base_path,party_dir)):
if(cand_dir[0]!="."):
fec_2020_paths[party_dir][cand_dir] = {}
for csv_path in os.listdir(os.path.join(base_path,party_dir,cand_dir)):
if(csv_path.find("schedule_a")>=0):
fec_2020_paths[party_dir][cand_dir]["donations"] = \
os.path.join(base_path,party_dir,cand_dir,csv_path)
elif(csv_path.find("schedule_b")>=0):
fec_2020_paths[party_dir][cand_dir]["spending"] = \
os.path.join(base_path,party_dir,cand_dir,csv_path)
print(json.dumps(fec_2020_paths, indent=4))
dataset = pd.DataFrame()
for candid in fec_2020_paths["democrat"].keys():
if("donations" in fec_2020_paths["democrat"][candid].keys()):
# process donations dataset
df1 = pd.read_csv(fec_2020_paths["democrat"][candid]["donations"])
df1["contribution_receipt_date"] = pd.to_datetime(df1["contribution_receipt_date"]).dt.date
df1 = df1.loc[df1["entity_type"]=="IND"]
df1 = df1.loc[df1["contribution_receipt_amount"]<=2800]
df1 = df1.groupby(by="contribution_receipt_date", as_index=False)["contribution_receipt_amount"].sum()
df1.name = "individual_donations"
df1 = pd.DataFrame(df1)
df1["candidate"] = candid
# attaching to the mentions dataset
#result = mentions.merge(df1, how='inner', left_on=['', 'B'])
# append to main df
dataset = dataset.append(df1)
# +
dataset.rename(index=str, columns={'contribution_receipt_date':'date'}, inplace=True)
res = pd.merge(by_politics, dataset, how='outer', left_on=['date','candidate'],
right_on = ['date','candidate'])
res
# +
# Replace NaN with 0
res['contribution_receipt_amount'].fillna(0, inplace = True)
res['doc_count'].fillna(0, inplace = True)
res
# -
X_train = res["doc_count"].values.reshape(-1, 1)  # Series.reshape was removed from pandas; reshape the underlying array
y_train = res["contribution_receipt_amount"]
X_test = np.array(range(0,400)).reshape(-1, 1)
linear_fit = LinearRegression().fit(X_train, y_train)
y_pred = linear_fit.predict(X_test)
linear_fit.score(X_train,y_train)
fig, ax = plt.subplots(figsize=(12,8))
plt.scatter(X_train, y_train, color='black', alpha=0.2)
#plt.plot(X_test, y_pred, color='blue', linewidth=3)
# If we want an aggregate of comments and score for this particular dataset later, I've been playing around with the following Pushshift API call: https://api.pushshift.io/reddit/submission/search/?subreddit=politics,news,worldnews&aggs=subreddit&q=trump&after=2019-06-02&before=2019-06-03&filter=subreddit,score,num_comments
# ## Gathering comments from the subreddits above
comments = pd.DataFrame(columns=['body','created_utc','parent_id','score'])
# +
# Set the date range
start_date = date(2019, 6, 16)
day_count = (date(2019, 7, 1) - start_date).days + 1 # The last time I pulled the 2019 dataset together
#day_count = (date(2016, 5, 31) - start_date).days + 1 # For use with the 2015 dataset
# Set the subreddits to go through
subreddits = {'<NAME>': 'JoeBiden',
'<NAME>': 'corybooker',
'<NAME>': 'Pete_Buttigieg',
'<NAME>': 'tulsi',
'<NAME>': 'Kirsten_Gillibrand',
'<NAME>': 'gravelforpresident',
'<NAME>': 'Kamala',
'<NAME>': 'inslee2020',
'<NAME>': 'BaemyKlobaechar',
'<NAME>': 'Beto2020',
'<NAME>': 'SandersForPresident',
'<NAME>': 'The_Donald',
'<NAME>': 'ElizabethWarren',
'politics': 'politics'}
# Subreddits for 2016 candidates
subreddits_2016 = {'<NAME>': 'The_Donald',
'<NAME>': 'TedCruz',
'<NAME>': 'JebBush',
'<NAME>': 'BenCarson',
'<NAME>': 'ChrisChristie',
'<NAME>': 'KasichForPresident',
'<NAME>': 'RandPaul',
'<NAME>': 'Marco_Rubio',
'politics': 'politics'}
# Loop over each day and subreddit to collect up to 1000 top-scoring comments
for single_date in (start_date + timedelta(n) for n in range(day_count)):
after = single_date.strftime("%Y-%m-%d")
before = (single_date + timedelta(1)).strftime("%Y-%m-%d")
for subreddit in subreddits.values():
url = 'https://api.pushshift.io/reddit/search/comment/?subreddit={0}&after={1}&before={2}&limit=1000&sort=desc&sort_type=score&filter=parent_id,score,body,created_utc'.format(subreddit,after,before)
r = requests.get(url)
if r.status_code != 200:
continue
else:
response = r.json()
temp = pd.DataFrame.from_dict(response['data'], orient='columns')
comments = comments.append(temp, ignore_index = True)
# +
# Cleaning up the dataframe by removing [deleted] comments, and converting created_utc to a date string
#temp = temp[temp['parent_id'].str.startswith('t3')]
comments = comments[~comments['body'].str.startswith('[deleted]')]
comments['created_utc'] = comments['created_utc'].apply(lambda x:
datetime.utcfromtimestamp(x).strftime("%Y-%m-%d"))
comments
# +
# Pickling
comments.to_pickle('reddit_2019jun16tojul1_comments.pkl')
# -
# # NLP
#
# By day, we can gather:
#
# 1. What the sentiment of all post titles is (by subreddit, and by whether the candidate was mentioned in the title)
# 2. What the sentiment of the top 10 comments of each post is (by subreddit, and by whether the candidate was mentioned in the title)
# 3. How many posts there are in the subreddit, or with the candidate mentioned in the title (more of a general feature, not NLP)
#
# * Day of post
# * Post ID
# * Headline
# * Subreddit
# * Entity recognition (candidates addressed in headline, binary as 1 or 0)
# * Topic recognition (This may be difficult to grab from a headline, but could try in the comments)
# * Sentiment of headline
# * Sentiment of comments
# ## Cleaning of comments
#
# The Reddit comment dataset consists of the following:
#
# * body: Text itself
# * created_utc: Date of the comment
# * parent_id: The ID of the comment's parent. **If the ID starts with t3, then it is a top-level comment, with the remaining alphanumeric characters indicating the post ID**
# * score: Karma score of the comment
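The `t3` convention above is easy to put to work. Here is a hedged sketch with a few toy rows (not the real dataset) that flags top-level comments and recovers their post IDs:

```python
import pandas as pd

# Toy rows standing in for the real comment dataset (illustrative only)
comments = pd.DataFrame({
    'body': ['great point', 'I disagree', 'source?'],
    'parent_id': ['t3_abc123', 't1_def456', 't3_xyz789'],
    'score': [42, 7, 3],
})

# A parent_id starting with "t3" marks a top-level comment;
# the characters after the underscore are the post ID
top_level = comments[comments['parent_id'].str.startswith('t3_')].copy()
top_level['post_id'] = top_level['parent_id'].str.slice(3)
```

Grouping on `post_id` then lets comment sentiment be rolled up per post.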
# +
# Let's begin by unpickling
# For 2019, I had to combine dataframes
#df1 = pd.read_pickle('../../data/reddit/reddit_2019_comments.pkl')
#df2 = pd.read_pickle('../../data/reddit/reddit_2019jun16tojul1_comments.pkl')
#frames = [df1, df2]
#df = pd.concat(frames)
# 2016 is a straightforward read
df = pd.read_pickle('../../data/reddit/reddit_2016_comments.pkl')
# -
# ### Convert Markdown to HTML to regular text
# +
# Markdown clean up
example = df.iloc[12]['body']
example
# -
# #### For now, I will keep text inserted into blockquotes
df['clean'] = df['body'].apply(lambda x: BeautifulSoup(markdown(html.unescape(x)),'lxml').get_text())
df = df.reset_index(drop=True)
# +
# The above process has retained some of the newline (\n) syntax, so let's remove those.
df['clean'] = df['clean'].apply(lambda x: x.replace('\n\n',' ').replace('\n',' ').replace('\'s','s'))
# +
# Pickled it away:
df.to_pickle('reddit_2016_comments_clean1.pkl')  # name should match the dataset loaded above (2016 here)
# Or unpickle:
#df = pd.read_pickle('../../data/reddit/reddit_2019_comments_clean1.pkl')
# -
# ## NOT IN USE: PRAW API - a Python wrapper to access Reddit
#
# This Python wrapper API can look at a subreddit and pull information such as the title, url, and body of a post (if it's not a link post). We can also get the karma score, the number of comments, and when the post was created.
#
# This API needs credentials in order to run. I have stored the credentials away in an INI file that will not be uploaded to Github.
# +
# The following cells uses an INI file to pull in credentials needed to access the PRAW API.
# This INI file is stored locally only
config = configparser.RawConfigParser()
config.read("config.txt")
reddit = praw.Reddit(client_id=config.get("reddit","client_id"),
client_secret=config.get("reddit","client_secret"),
password=config.get("reddit","password"),
user_agent="Political exploration",
username=config.get("reddit","username"))
# +
# Example data pull
posts = []
for post in reddit.subreddit('politics').hot(limit=10):
posts.append([post.title,
post.score,
post.id,
post.subreddit,
post.url,
post.num_comments,
post.selftext,
datetime.utcfromtimestamp(post.created)
])
df = pd.DataFrame(posts,
columns=['title',
'score',
'id',
'subreddit',
'url',
'num_comments',
'body',
'created'
])
df
| src/reddit/Reddit API.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Reference:
# https://www.cnblogs.com/wsine/p/5180778.html
import numpy as np
import matplotlib.pyplot as plt
import math
import time
UNCLASSIFIED = False
NOISE = 0
def loadDataSet(fileName, splitChar='\t'):
"""
Description: load data from a file into a list of rows
Parameters:
fileName: path to the data file
splitChar: field delimiter (default tab)
Return:
a list of rows, each a list of floats
"""
dataSet = []
with open(fileName) as f:
for line in f.readlines():
curline = line.strip().split(splitChar)
fltline = list(map(float, curline))
dataSet.append(fltline)
return dataSet
def dist(a,b):
"""
Description: calculate euclidean distance
Parameters:
a, b - two vectors
Return:
euclidean distance
"""
return math.sqrt(np.power(a-b,2).sum())
def eps_neighbor(a,b, eps):
"""
Description: check whether a and b are within eps distance of each other
Parameters:
a,b - vectors
eps - distance threshold
Return: boolean
"""
return dist(a,b) < eps
def region_query(data, pointId, eps):
"""
Description: find the indices of all points in data within eps distance of pointId
Parameters:
data - search from data
pointId - core point index
eps - distance
Return:
index of all points in eps distance of core point pointId
"""
nPoints = data.shape[1]
seeds = []
for i in range(nPoints):
if eps_neighbor(data[:, pointId], data[:, i],eps):
seeds.append(i)
return seeds
def expand_cluster(data, clusterResult, pointId, clusterId, eps, minPts):
"""
Description: expand cluster for core pointId
Parameters:
data - training data set
clusterResult - cluster result with data index, same size with data
pointId - core point index
clusterId - id of cluster to be expanded
eps - distance
minPts - min neighbor points for core point
Return:
True - expand succeed
False - pointId is a NOISE
"""
seeds = region_query(data, pointId, eps)
if len(seeds) < minPts:
clusterResult[pointId] = NOISE
return False
else:
clusterResult[pointId] = clusterId
for seed in seeds:
clusterResult[seed] = clusterId
while len(seeds) > 0: #keep expanding
currentPoint = seeds[0]
queryResults = region_query(data, currentPoint, eps)
if len(queryResults) >= minPts:
for i in range(len(queryResults)):
resultPoint = queryResults[i]
if clusterResult[resultPoint] == UNCLASSIFIED:
seeds.append(resultPoint)
clusterResult[resultPoint] = clusterId
elif clusterResult[resultPoint] == NOISE:
clusterResult[resultPoint] = clusterId
seeds = seeds[1:]
return True
def dbscan(data, eps, minPts):
"""
Description: DBSCAN clustering algorithm
Parameters:
data - training data set
eps - distance threshold
minPts - min neighbor points for a core point
Return:
clusterResult - cluster id assigned to each point
clusterNum - number of clusters found
"""
clusterId = 1
nPoints = data.shape[1]
clusterResult = [UNCLASSIFIED] * nPoints
for pointId in range (nPoints):
point = data[:,pointId]
if clusterResult[pointId] == UNCLASSIFIED:
if expand_cluster(data, clusterResult, pointId, clusterId, eps, minPts):
clusterId = clusterId + 1
return clusterResult, clusterId - 1
def saveResults(clusterResult, filename):
np.savetxt(filename, clusterResult, delimiter=',')
def plotCluster(data, clusters, clusterNum):
nPoints = data.shape[1]
matClusters = np.mat(clusters).transpose()
print(matClusters[:10])
fig = plt.figure()
scatterColors = ["black","blue", "green", "yellow", "red", "purple", "orange", "brown"]
ax = fig.add_subplot(111)
for i in range(clusterNum+1):
colorStyle = scatterColors[i % len(scatterColors)]
subCluster = data[:, np.nonzero(matClusters[:, 0].A == i)[0]]  # [0] takes the row indices from the nonzero tuple
if i == 7:
print(data.shape)
print(subCluster.shape)
print(subCluster)
print(subCluster[0, :].flatten())
ax.scatter(subCluster[0, :].flatten().A[0], subCluster[1, :].flatten().A[0], c=colorStyle, s=50)
plt.show()
def main():
dataSet = loadDataSet("788points.txt", splitChar=",")
dataSet = np.mat(dataSet).transpose()
print(dataSet[:, :10])
clusterResult, clusterNum = dbscan(dataSet, 2, 15)
saveResults(clusterResult, "788Results.txt")
print("cluster Numbers = ", clusterNum)
print(clusterResult[:10])
plotCluster(dataSet, clusterResult, clusterNum)
if __name__ == "__main__":
start = time.perf_counter()  # time.clock() was removed in Python 3.8
main()
end = time.perf_counter()
print("finish in %s seconds" % str(end - start))
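As a quick sanity check, the same (eps, minPts) semantics can be compared against scikit-learn's `DBSCAN` on a toy dataset — assuming scikit-learn is available. Both implementations count a point as its own neighbor, though scikit-learn uses `<=` for the eps test rather than the strict `<` above:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two well-separated triples of points
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]], dtype=float)

# With eps=2 and min_samples=2 every point is a core point,
# so we expect two clusters and no noise label (-1)
labels = DBSCAN(eps=2, min_samples=2).fit_predict(X)
```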
| algorithms/dbscan/dbscan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Data Visualization in R
#
# In this example, we will be plotting two flower measurements of three _Iris_ species.
#
# We start by importing additional packages necessary for this visualization.
# +
library("ggplot2")
print("Libraries loaded!")
# -
# Next, we look at the first few rows of data.
head(iris)
# Our first plot will show petal length vs. sepal length. Using the ggplot2 function `geom_smooth` with `method = "lm"`, a linear-regression line is drawn automatically.
ggplot(data = iris, mapping = aes(x = Sepal.Length, y = Petal.Length)) +
geom_point() +
geom_smooth(method = "lm")
# We should update our axis labels, using the `xlab` and `ylab` commands.
ggplot(data = iris, mapping = aes(x = Sepal.Length, y = Petal.Length)) +
geom_point() +
geom_smooth(method = "lm") +
ylab("Petal length (mm)") +
xlab("Sepal length (mm)")
# Next, we will modify our code to change what we plot on the x-axis. In this case we want to plot `Petal.Width` on the x-axis. Update the code below to change the values we are plotting on the x-axis. _Hint_: you'll need to change the name of the variable passed to x on the first line, as well as the axis label on the last line.
ggplot(data = iris, mapping = aes(x = Sepal.Length, y = Petal.Length)) +
geom_point() +
geom_smooth(method = "lm") +
ylab("Petal length (mm)") +
xlab("Sepal length (mm)")
# By default, it will add a linear regression line and confidence intervals. But remember, these are data for three species of _Iris_, so it would be more informative to display which species each point corresponds to. We can do this by adding `color = Species` to the `aes` mapping (immediately following `y = Petal.Length`).
# Paste your code from above here, and update
ggplot(data = iris, mapping = aes(x = Petal.Width, y = Petal.Length, color = Species)) +
geom_point() +
geom_smooth(method = "lm") +
ylab("Petal length (mm)") +
xlab("Petal width (mm)")
# To finish off this plot, we want to write the plot to a png file. Paste the code from above and run the cell.
# +
# Paste your plotting code here:
# Leave the next line as-is
ggsave(file = "output/iris-plot.png")
| intro-iris-ggplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Config
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
from common import *
from competitions.dsb2017 import dsbconfig as cfg
import dicom
import scipy.ndimage
from skimage import measure, morphology
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from nbpapaya import Brain, clear_brain, Surface, Overlay
import SimpleITK as sitk
import csv
import xml
from bs4 import BeautifulSoup
PROJECT_PATH = os.path.join('/bigguy/data/luna')
DATA_PATH = os.path.join('/bigguy/data/luna/data')
META_PATH = os.path.join(PROJECT_PATH, 'csv')
EXTRACTED_IMG_PATH = os.path.join(PROJECT_PATH, 'extracted_imgs')
EXTRACTED_LABEL_PATH = os.path.join(PROJECT_PATH, 'extracted_labels')
ANNOTATIONS_PATH = os.path.join(META_PATH, 'annotations.csv')
LIDC_ANNO_PATH = os.path.join(META_PATH, 'lidc_annotations')
MEAN_PIXEL_VALUE_NODULE = 41
SEGMENTER_IMG_SIZE = 320
TARGET_VOXEL_MM = 1.00
VOXEL_SPACING = [TARGET_VOXEL_MM, TARGET_VOXEL_MM, TARGET_VOXEL_MM]
subset_path = os.path.join(DATA_PATH, 'subset0')
fpaths = glob(subset_path+"/*.mhd")
"""
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/21_Transforms_and_Resampling.html
https://github.com/dhammack/DSB2017/blob/master/training_code/DLung/data_generator_fn.py
https://github.com/juliandewit/kaggle_ndsb2017/blob/master/step1_preprocess_luna16.py
https://www.kaggle.com/arnavkj95/candidate-generation-and-luna16-preprocessing/notebook
https://github.com/booz-allen-hamilton/DSB3Tutorial
https://gist.github.com/ajsander/ea33b90cc6fcff2696cd3b350ed7f86c
https://github.com/juliandewit/kaggle_ndsb2017/blob/master/step1b_preprocess_make_train_cubes.py
https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI#96d911248b584775bb65cdd4a4883550
https://pyscience.wordpress.com/2014/11/02/multi-modal-image-segmentation-with-python-simpleitk/
https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks
https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/blob/master/Python/03_Image_Details.ipynb
https://github.com/nipy/nibabel
https://pyscience.wordpress.com/2014/11/02/multi-modal-image-segmentation-with-python-simpleitk/
""";
# + [markdown] heading_collapsed=true
# ## Kaggle
# + hidden=true
"""
http://pydicom.readthedocs.io/en/stable/getting_started.html
https://www.kaggle.com/gzuidhof/full-preprocessing-tutorial/notebook
https://www.kaggle.com/c/data-science-bowl-2017
https://github.com/lfz/DSB2017
http://nbviewer.jupyter.org/github/bckenstler/dsb17-walkthrough/blob/master/Part%201.%20DSB17%20Preprocessing.ipynb
https://www.youtube.com/watch?v=BmkdAqd5ReY (Intro to CT scans)
http://www.dspguide.com/ch25/5.htm
""";
# + hidden=true
patients = sorted(os.listdir(cfg.SAMPLE_IMAGE_PATH))
# + hidden=true
# Load the scans in given 'patient' dir
# One directory is one scan of multiple slices
# We calculate the pixel size in the Z direction (slice thickness), since not provided
# Returns list of slices (dicom format)
def load_scan(path):
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))
try:
slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
except:
slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
for s in slices:
s.SliceThickness = slice_thickness
return slices
# + hidden=true
patient_idx = random.randint(0, len(patients) - 1)
patient_path = os.path.join(cfg.SAMPLE_IMAGE_PATH, patients[patient_idx])
patient_fnames, pat_fpaths = utils.files.get_paths_to_files(patient_path)
scan1 = load_scan(patient_path)
# + hidden=true
# Metadata
scan1
# + hidden=true
# Convert to Hounsfield Units (HU)
# Unit of measurement in CT scans is the Hounsfield Unit (HU), which is a measure of radiodensity. CT scanners are carefully calibrated to accurately measure this.
# Radiodensity - measurement of how absorbent a material is to X-rays. This naturally differs for different materials, so by measuring this we have a way of visualizing the interior tissues and so forth.
# Convert to HU by multiplying with the rescale slope and adding the intercept (which are conveniently stored in the metadata of the scans!).
# Returns np array of slices in HUs
def get_pixels_hu(slices):
image = np.stack([s.pixel_array for s in slices])
# Convert to int16 (from sometimes uint16),
# should be possible as values should always be low enough (<32k)
image = image.astype(np.int16)
# Set outside-of-scan pixels to 0
# The intercept is usually -1024, so air is approximately 0
image[image == -2000] = 0
# Convert to Hounsfield units (HU)
for slice_number in range(len(slices)):
intercept = slices[slice_number].RescaleIntercept
slope = slices[slice_number].RescaleSlope
if slope != 1:
image[slice_number] = slope * image[slice_number].astype(np.float64)
image[slice_number] = image[slice_number].astype(np.int16)
image[slice_number] += np.int16(intercept)
return np.array(image, dtype=np.int16)
# + hidden=true
first_patient_scan = load_scan(patient_path)
first_patient_pixels = get_pixels_hu(first_patient_scan)
plt.hist(first_patient_pixels.flatten(), bins=80, color='c')
plt.xlabel("Hounsfield Units (HU)")
plt.ylabel("Frequency")
plt.show()
# Show some slice in the middle
plt.imshow(first_patient_pixels[80], cmap=plt.cm.gray)
plt.show()
# + hidden=true
"""
A scan may have a pixel spacing of [2.5, 0.5, 0.5], which means that the distance between slices is 2.5 millimeters and each pixel is 0.5 x 0.5 mm. For a different scan this may be [1.5, 0.725, 0.725]; such inconsistency can be problematic for automatic analysis (e.g. using ConvNets)!
A common method of dealing with this is resampling the full dataset to a certain isotropic resolution. If we resample everything to 1mm x 1mm x 1mm pixels, we can use 3D ConvNets without worrying about learning zoom/slice-thickness invariance.
"""
def resample(image, scan, new_spacing=[1,1,1]):
# Determine current pixel spacing
spacing = np.array([scan[0].SliceThickness] + scan[0].PixelSpacing, dtype=np.float32)
resize_factor = spacing / new_spacing
new_real_shape = image.shape * resize_factor
new_shape = np.round(new_real_shape)
real_resize_factor = new_shape / image.shape
new_spacing = spacing / real_resize_factor
image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest')
return image, new_spacing
# + hidden=true
first_patient_resampled_img, resample_spacing = resample(first_patient_pixels, first_patient_scan)
print("Shape before resampling\t", first_patient_pixels.shape)
print("Shape after resampling\t", first_patient_resampled_img.shape)
# + hidden=true
def plot_3d(image, threshold=-300):
# Position the scan upright,
# so the head of the patient would be at the top facing the camera
p = image.transpose(2,1,0)
verts, faces, _, _ = measure.marching_cubes(p, threshold)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
# Fancy indexing: `verts[faces]` to generate a collection of triangles
mesh = Poly3DCollection(verts[faces], alpha=0.70)
face_color = [0.45, 0.45, 0.75]
mesh.set_facecolor(face_color)
ax.add_collection3d(mesh)
ax.set_xlim(0, p.shape[0])
ax.set_ylim(0, p.shape[1])
ax.set_zlim(0, p.shape[2])
plt.show()
# + hidden=true
plot_3d(first_patient_resampled_img, 400)
# + hidden=true
## Lung Segmentation
"""
1) Threshold the image (-320 HU is a good threshold, but it doesn't matter much for this approach)
2) Do connected components, determine label of air around person, fill this with 1s in the binary image
3) Optionally: For every axial slice in the scan, determine the largest solid connected component (the body+air around the person), and set others to 0. This fills the structures in the lungs in the mask.
4) Keep only the largest air pocket (the human body has other pockets of air here and there).
""";
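# Step 2 of the recipe above (connected components to find the air connected
# to the border) can be sketched without any imaging libraries: a toy 5x5
# "slice" and a pure-Python BFS flood fill.

```python
from collections import deque

# Toy 2D "slice": 0 = air, 1 = body; (2,2) is an air pocket inside the body
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]

def flood(grid, start):
    """Return all 4-connected cells reachable from start with the same value."""
    h, w = len(grid), len(grid[0])
    val = grid[start[0]][start[1]]
    seen, q = {start}, deque([start])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and grid[ny][nx] == val:
                seen.add((ny, nx))
                q.append((ny, nx))
    return seen

outside_air = flood(grid, (0, 0))  # air touching the border
all_air = {(y, x) for y in range(5) for x in range(5) if grid[y][x] == 0}
lung_air = all_air - outside_air   # air pockets enclosed by the body
print(lung_air)  # {(2, 2)}
```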
# + hidden=true
"""
Normalization --
Our values currently range from -1024 to around 2000. Anything above 400 is not interesting to us, as these are simply bones with different radiodensity. A commonly used set of thresholds in the LUNA16 competition to normalize between are -1000 and 400. Here's some code you can use:
Zero Centering --
Zero center your data so that your mean value is 0. Subtract the mean pixel value from all pixels. To determine this mean you simply average all images in the whole dataset. If that sounds like a lot of work, we found this to be around 0.25 in the LUNA16 competition.
* DO NOT zero center with the mean per `image`
"""
MIN_BOUND = -1000.0
MAX_BOUND = 400.0
def normalize(image):
image = (image - MIN_BOUND) / (MAX_BOUND - MIN_BOUND)
image[image>1] = 1.
image[image<0] = 0.
return image
PIXEL_MEAN = 0.25
def zero_center(image):
image = image - PIXEL_MEAN
return image
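# The two helpers above compose as follows; the np.clip one-liner is
# equivalent to normalize(), and zero_center() then subtracts the dataset
# mean (0.25 for LUNA16).

```python
import numpy as np

MIN_BOUND, MAX_BOUND, PIXEL_MEAN = -1000.0, 400.0, 0.25

hu = np.array([-2000.0, -1000.0, -300.0, 400.0, 3000.0])
# Squash into [0, 1] between -1000 and 400 HU, clipping everything outside
norm = np.clip((hu - MIN_BOUND) / (MAX_BOUND - MIN_BOUND), 0.0, 1.0)
centered = norm - PIXEL_MEAN

print(norm)      # [0.  0.  0.5 1.  1. ]
print(centered)  # [-0.25 -0.25  0.25  0.75  0.75]
```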
# + hidden=true
"""
Run Pre-processing Tonight!!
To save storage space, don't do normalization and zero centering beforehand, but do this online (during training, just after loading)
"""
# -
# ## Luna
# ### Annotation Schemas
# LIDC
# https://wiki.cancerimagingarchive.net/download/attachments/3539039/LIDC_XML_Documentation_1_Jan_2009.doc?version=1&modificationDate=1319224566057&api=v2
"""
The LIDC used a two phase reading process. In the first phase, multiple readers (N=4 as of Jan 2006) read and annotated each case independently in a blinded fashion. That is, they all read the same cases, but the readings were done asynchronously and independently. After the results of that first, blinded reading were complete, they were compiled and sent back out to the same readers so that they could see both their own markings as well as the markings from the other three readers. Each reader then, again independently, read each case, this time with the benefit of information as to what other readers saw/marked, and then made a final decision about the markings for that case.
3 step annotation process:
1) Blind read by 4 radiologists
2) Each radiologist is shown the annotations of the others
3) Unblind read by 4 radiologists
IMAGES ONLY INCLUDE THE 2nd Unblinded Read
Interesting!!
Malignancy - Radiologist subjective assessment of likelihood of malignancy of this nodule, ASSUMING 60-year-old male smoker
3 Types of Annotations:
1) Nodules >= 3mm diameter
- Complete outline (segmentation)
- Include characteristics (e.g. malignancy)
2) Nodules < 3mm diameter
- Center of mass labeled (x,y,z)
- Do not include characteristics
3) Non-Nodules > 3mm diameter
- Center of mass labeled (x,y,z)
- Do not include characteristics
* Non-Nodules < 3mm were NOT marked! (might confuse the model)
Terms
-----
* Scan files = 158.xml, 162.xml..
* SeriesInstanceUid = Patient
* StudyInstanceUID = Scan
* nodule id - a unique id for the nodule marked by this reader
* Nodule Contour ROI - this is the description of the complete three dimensional contour of the nodule (remembering that the radiologist was instructed to mark the first voxel outside the nodule)
* Inclusion - "True" means that the roi that follows is considered part of the nodule; "False" means that the roi that follows should be subtracted from the nodule.
* Locus - is unique to non-nodules (and is used in place of "edge map") and indicates that the indicator point of the non-nodule is to follow:
<locus> beginning of non-nodule description
<xCoord>215</xCoord> x coordinate location of non-nodule
<yCoord>312</yCoord> y coordinate location of non-nodule
</locus> end of non-nodule description
<SeriesInstanceUid>1.3.6.1.4.1.14519.5.2.1.6279.6001.303494235102183795724852353824</SeriesInstanceUid>
<StudyInstanceUID>1.3.6.1.4.1.14519.5.2.1.6279.6001.339170810277323131167631068432</StudyInstanceUID>
* Nodules >= 3mm diameter
<unblindedReadNodule>
<noduleID>6</noduleID>
<characteristics>
<subtlety>5</subtlety>
<internalStructure>1</internalStructure>
<calcification>4</calcification>
<sphericity>3</sphericity>
<margin>5</margin>
<lobulation>2</lobulation>
<spiculation>3</spiculation>
<texture>5</texture>
<malignancy>4</malignancy>
</characteristics>
## ROIS are the same nodule on different slices
<roi>
<imageZposition>1616.5</imageZposition>
<imageSOP_UID>1.3.6.1.4.1.14519.5.2.1.6279.6001.315628943944666928553332863367</imageSOP_UID>
<inclusion>TRUE</inclusion>
<edgeMap><xCoord>339</xCoord><yCoord>240</yCoord></edgeMap>
<edgeMap><xCoord>338</xCoord><yCoord>241</yCoord></edgeMap>
<edgeMap><xCoord>337</xCoord><yCoord>242</yCoord></edgeMap>
</roi>
<roi>
<imageZposition>1616.5</imageZposition>
<imageSOP_UID>1.3.6.1.4.1.14519.5.2.1.6279.6001.315628943944666928553332863367</imageSOP_UID>
<inclusion>TRUE</inclusion>
<edgeMap><xCoord>339</xCoord><yCoord>240</yCoord></edgeMap>
<edgeMap><xCoord>338</xCoord><yCoord>241</yCoord></edgeMap>
<edgeMap><xCoord>337</xCoord><yCoord>242</yCoord></edgeMap>
</roi>
</unblindedReadNodule>
* Nodules < 3mm diameter
<unblindedReadNodule>
<noduleID>5</noduleID>
<roi>
<imageZposition>1631.5</imageZposition>
<imageSOP_UID>1.3.6.1.4.1.14519.5.2.1.6279.6001.349696112719071080933041621585</imageSOP_UID>
<inclusion>TRUE</inclusion>
<edgeMap><xCoord>197</xCoord><yCoord>321</yCoord></edgeMap>
</roi>
</unblindedReadNodule>
* Non-Nodules > 3mm diameter:
<nonNodule>
<nonNoduleID>2058</nonNoduleID>
<imageZposition>1628.5</imageZposition>
<imageSOP_UID>1.3.6.1.4.1.14519.5.2.1.6279.6001.216194661683946632889617404306</imageSOP_UID>
<locus>
<xCoord>199</xCoord><yCoord>320</yCoord>
</locus>
</nonNodule>
"""
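# A minimal sketch of pulling a bounding box out of the roi/edgeMap schema
# documented above, using only the standard library. (The notebook itself
# uses BeautifulSoup, and real LIDC files carry XML namespaces, which this
# toy snippet omits.)

```python
import xml.etree.ElementTree as ET

# Inline copy of one <roi> from the schema notes above
snippet = """
<roi>
  <imageZposition>1616.5</imageZposition>
  <inclusion>TRUE</inclusion>
  <edgeMap><xCoord>339</xCoord><yCoord>240</yCoord></edgeMap>
  <edgeMap><xCoord>338</xCoord><yCoord>241</yCoord></edgeMap>
  <edgeMap><xCoord>337</xCoord><yCoord>242</yCoord></edgeMap>
</roi>
"""
roi = ET.fromstring(snippet)
xs = [int(e.find("xCoord").text) for e in roi.iter("edgeMap")]
ys = [int(e.find("yCoord").text) for e in roi.iter("edgeMap")]
# The min/max extent of the edge points gives the in-plane bounding box
print(min(xs), max(xs), min(ys), max(ys))  # 337 339 240 242
```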
# +
def find_mhd_file(patient_id):
for subject_no in range(10):
src_dir = os.path.join(
DATA_PATH, "subset" + str(subject_no)) + "/"
for src_path in glob(src_dir + "*.mhd"):
if patient_id in src_path:
return src_path
return None
def load_lidc_xml(xml_path, agreement_threshold=0, only_patient=None, save_nodules=False):
"""
Writes 2 CSV files with nodule and non-nodule annotations
- nodules >= 3mm
- non-nodules
Nodule annotations include: id, z, y, x, diameter, malignancy
Coords and Diameter are stored as percent of image size
Diameter is calculated as the max of x and y
- We reduce irregularly shaped nodules into circles (boxes)
Optionally include only nodules with radiologist agreement
Ignores nodules < 3mm
"""
pos_lines = []
neg_lines = []
extended_lines = []
# Each LIDC xml file represents a read of a single 3D CT Scan (multiple slices)
with open(xml_path, 'r') as xml_file:
markup = xml_file.read()
xml = BeautifulSoup(markup, features="xml")
# Catch corrupt files
if xml.LidcReadMessage is None:
return None, None, None
patient_id = xml.LidcReadMessage.ResponseHeader.SeriesInstanceUid.text
# Option to filter for single patient
if only_patient is not None:
if only_patient != patient_id:
return None, None, None
# Load the CT Scan image by patient_id
src_path = find_mhd_file(patient_id)
if src_path is None:
return None, None, None
print(patient_id)
# Load the CT Scan with SimpleITK
# This a 3D volume containing multiple 2D slices
itk_img = sitk.ReadImage(src_path)
# Convert to Numpy (z, 512, 512)
img_array = sitk.GetArrayFromImage(itk_img)
# z,y,x (height before width)
num_z, height, width = img_array.shape #heightXwidth constitute the transverse plane
# Needed to calculate offset and normalize
# Follow-up on this..
origin = np.array(itk_img.GetOrigin()) # x,y,z Origin in world coordinates (mm)
spacing = np.array(itk_img.GetSpacing()) # spacing of voxels in world coor. (mm)
# 1.00 is a hyperparameter
# Rescale so that every voxel represents a volume of 1x1x1 mm
# Needed to ensure consistency across scans
rescale = spacing / 1.00 #1x1x1
# Up to 4 per scan, one per radiologist
reading_sessions = xml.LidcReadMessage.find_all("readingSession")
# A reading session is all slices in CT Scan read by one radiologist
for reading_session in reading_sessions:
# Get the list of nodules (since up to 4 reads, many will identify the same nodule)
nodules = reading_session.find_all("unblindedReadNodule")
# Includes both >= 3 (characteristics and outline) and <3 (just the centroid)
for nodule in nodules:
nodule_id = nodule.noduleID.text # e.g. 1823
# Same nodule appears in multiple slices (3D)
rois = nodule.find_all("roi")
# To create the annotations we find the edges
# of the outline and calculate the center,
# then use the max x/y extent as the diameter
x_min = y_min = z_min = 999999
x_max = y_max = z_max = -999999
# Skip nodules < 3mm (they only have 1 point (x,y) marked on 1 slice (the center))
if len(rois) < 2:
continue
# For each slice in nodule >= 3mm
for roi in rois:
# If Z is < ZMin or >ZMax, update
z_pos = float(roi.imageZposition.text)
z_min = min(z_min, z_pos)
z_max = max(z_max, z_pos)
# Edge maps are single points (x,y) in the outline
edge_maps = roi.find_all("edgeMap")
for edge_map in edge_maps:
x = int(edge_map.xCoord.text)
y = int(edge_map.yCoord.text)
x_min = min(x_min, x)
y_min = min(y_min, y)
x_max = max(x_max, x)
y_max = max(y_max, y)
# Skip degenerate annotations where the
# outline has zero width or height
if x_max == x_min:
continue
if y_max == y_min:
continue
# Calculate the diameter + center
x_diameter = x_max - x_min
x_center = x_min + x_diameter / 2
y_diameter = y_max - y_min
y_center = y_min + y_diameter / 2
z_diameter = z_max - z_min
z_center = z_min + z_diameter / 2
# Adjust the center based on origin + spacing
# Since each scan taken by different machine there
# is variation..
z_center -= origin[2]
z_center /= spacing[2]
# Calculate the percent (normalized location) of the center
# with respect to the image size
# img_array is indexed (z, y, x), so shape[2] is the
# x (width) dimension and shape[1] is y (height)
x_center_perc = round(x_center / img_array.shape[2], 4)
y_center_perc = round(y_center / img_array.shape[1], 4)
z_center_perc = round(z_center / img_array.shape[0], 4)
# Set the diameter to the max of x, y
# This simplifies the annotation by ignoring ovals
# and non-circular nodules..
diameter = max(x_diameter , y_diameter)
# What percentage is the nodule size of the whole image..
diameter_perc = round(max(x_diameter / img_array.shape[2], y_diameter / img_array.shape[1]), 4)
# Skip nodules with important missing fields
if nodule.characteristics is None:
print("!!!!Nodule:", nodule_id, " has no characteristics")
continue
if nodule.characteristics.malignancy is None:
print("!!!!Nodule:", nodule_id, " has no malignancy")
continue
# Extract characteristics
malignacy = nodule.characteristics.malignancy.text
sphericiy = nodule.characteristics.sphericity.text
margin = nodule.characteristics.margin.text
spiculation = nodule.characteristics.spiculation.text
texture = nodule.characteristics.texture.text
calcification = nodule.characteristics.calcification.text
internal_structure = nodule.characteristics.internalStructure.text
lobulation = nodule.characteristics.lobulation.text
subtlety = nodule.characteristics.subtlety.text
# "line" is the primary one we use for model
# We save the x,y,z,diameter percent relative to image size
line = [nodule_id, x_center_perc, y_center_perc, z_center_perc, diameter_perc, malignacy]
extended_line = [patient_id, nodule_id, x_center_perc, y_center_perc, z_center_perc, diameter_perc,
malignacy, sphericiy, margin, spiculation, texture, calcification,
internal_structure, lobulation, subtlety ]
# Since this is a nodule >= 3mm, we add this to our list of nodules (TPs)
pos_lines.append(line)
# Only includes nodules >= 3mm with all attributes
extended_lines.append(extended_line)
# Non-Nodules > 3mm diameter
# We only have a single z,y,x point for these
nonNodules = reading_session.find_all("nonNodule")
for nonNodule in nonNodules:
z_center = float(nonNodule.imageZposition.text)
# Adjust for offset
z_center -= origin[2]
z_center /= spacing[2]
x_center = int(nonNodule.locus.xCoord.text)
y_center = int(nonNodule.locus.yCoord.text)
nodule_id = nonNodule.nonNoduleID.text
x_center_perc = round(x_center / img_array.shape[2], 4)
y_center_perc = round(y_center / img_array.shape[1], 4)
z_center_perc = round(z_center / img_array.shape[0], 4)
# Non-nodules are marked with a single point and no size,
# so a default diameter of 6 pixels is assumed
diameter_perc = round(max(6 / img_array.shape[2], 6 / img_array.shape[1]), 4)
# Add to list of non-nodules (TNs)
# Same layout as the nodule lines, with malignancy set to 0
line = [nodule_id, x_center_perc, y_center_perc, z_center_perc, diameter_perc, 0]
neg_lines.append(line)
# Option to ignore nodules where
# multiple radiologists did NOT agree
if agreement_threshold > 1:
filtered_lines = []
# Loop through all the nodules
for pos_line1 in pos_lines:
id1 = pos_line1[0]
x1 = pos_line1[1]
y1 = pos_line1[2]
z1 = pos_line1[3]
d1 = pos_line1[4]
overlaps = 0
# Loop through all nodules again
for pos_line2 in pos_lines:
id2 = pos_line2[0]
# Skip the original nodule
if id1 == id2:
continue
x2 = pos_line2[1]
y2 = pos_line2[2]
z2 = pos_line2[3]
d2 = pos_line2[4]
# Euclidean distance between the two nodule centers
# (coords are fractions of the image size)
dist = math.sqrt(math.pow(x1 - x2, 2) + math.pow(y1 - y2, 2) + math.pow(z1 - z2, 2))
# If the centers are closer than either diameter,
# treat the two annotations as the same nodule
if dist < d1 or dist < d2:
overlaps += 1
# Add nodule if more than one radiologist agrees
if overlaps >= agreement_threshold:
filtered_lines.append(pos_line1)
# Only overlapping nodule annotations become nodules
pos_lines = filtered_lines
# Create DF of all nodules for this CT scan
df_annos = pd.DataFrame(
pos_lines, columns=["anno_index", "coord_x", "coord_y", "coord_z", "diameter", "malscore"])
df_annos.to_csv(os.path.join(EXTRACTED_LABEL_PATH, patient_id + "_annos_pos_lidc.csv"), index=False)
# Create DF of all non-nodules for this CT scan
df_neg_annos = pd.DataFrame(
neg_lines, columns=["anno_index", "coord_x", "coord_y", "coord_z", "diameter", "malscore"])
df_neg_annos.to_csv(os.path.join(EXTRACTED_LABEL_PATH, patient_id + "_annos_neg_lidc.csv"), index=False)
# We've now saved two csv files for each scan (patient read)
# one for nodules and one for non-nodules
return pos_lines, neg_lines, extended_lines
def process_lidc_annotations(only_patient=None, agreement_threshold=0):
"""
Save nodule and non-nodule annotations for each scan
Save all nodule >= 3mm annotations to single master file
By default, we include overlapping annotations from multiple radiologists
This means the same nodule will show up twice or more
Agreement=0, returns about 5900 nodules
"""
file_no = 0
pos_count = 0
neg_count = 0
all_lines = []
# Loop through all the LIDC annotation files (one per CT scan)
# Each includes up to 4 radiologist reading sessions
for anno_dir in [d for d in glob(LIDC_ANNO_PATH+"/*") if os.path.isdir(d)]:
xml_paths = glob(anno_dir + "/*.xml")
for xml_path in xml_paths:
print(file_no, ": ", xml_path)
# This method saves the individual CSVs per scan
pos, neg, extended = load_lidc_xml(
xml_path=xml_path, only_patient=only_patient,
agreement_threshold=agreement_threshold)
# load_lidc_xml returns (None, None, None) for corrupt, missing, or filtered-out scans
if pos is not None:
pos_count += len(pos)
neg_count += len(neg)
print("Pos: ", pos_count, " Neg: ", neg_count)
file_no += 1
all_lines += extended
# Save all nodules >= 3mm
# Nodules < 3mm are ignored
df_annos = pd.DataFrame(all_lines, columns=["patient_id", "anno_index", "coord_x", "coord_y", "coord_z", "diameter",
"malscore", "sphericiy", "margin", "spiculation", "texture", "calcification",
"internal_structure", "lobulation", "subtlety"])
df_annos.to_csv(os.path.join(META_PATH, "lidc_annotations.csv"), index=False)
# -
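# The agreement_threshold filter in load_lidc_xml treats two annotations as
# the same nodule when the distance between their centers (in fractional
# image coordinates) is smaller than either recorded diameter. A toy check
# of that rule, with made-up annotation tuples in the same layout as the
# pos_lines entries above:

```python
import math

n1 = ("a", 0.50, 0.50, 0.50, 0.10)  # id, x, y, z, diameter
n2 = ("b", 0.52, 0.50, 0.50, 0.10)  # center 0.02 away -> same nodule
n3 = ("c", 0.90, 0.10, 0.10, 0.10)  # far away -> different nodule

def same_nodule(a, b):
    # Euclidean distance between centers, compared against either diameter
    dist = math.sqrt(sum((a[i] - b[i]) ** 2 for i in (1, 2, 3)))
    return dist < a[4] or dist < b[4]

print(same_nodule(n1, n2), same_nodule(n1, n3))  # True False
```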
process_lidc_annotations()
lidc_pos_df = pd.read_csv(os.path.join(META_PATH, "lidc_annotations.csv"))
patient_id = lidc_pos_df.iloc[100].patient_id
all_patient_nodules = lidc_pos_df[lidc_pos_df.patient_id == patient_id]
coord_z_w_patient_nodules = all_patient_nodules['coord_z'].values
coord_z = coord_z_w_patient_nodules[0]
print(coord_z)
single_slice_patient_nodules = all_patient_nodules.loc[(all_patient_nodules["coord_z"] == coord_z) & (all_patient_nodules["patient_id"] == patient_id)]
# ### Process Luna
# +
def resample_img(img_arr, old_spacing, new_spacing):
resize_factor = old_spacing / new_spacing
print("Resize factor", resize_factor)
new_real_shape = img_arr.shape * resize_factor
print("New real shape", new_real_shape)
new_shape = np.round(new_real_shape)
print("New shape", new_shape)
real_resize_factor = new_shape / img_arr.shape
print("Real resize factor", real_resize_factor)
new_spacing = old_spacing / real_resize_factor
print("New spacing", new_spacing)
image = scipy.ndimage.zoom(img_arr, real_resize_factor, mode='nearest')
return image, new_spacing
def process_luna_img():
pass
# +
anno_df = pd.read_csv(os.path.join(META_PATH, "annotations.csv"))
print(anno_df.columns)
anno_df.columns = ['patient_id', 'coord_x', 'coord_y', 'coord_z', 'diameter']
patient_idx = 5
patient_id = lidc_pos_df.iloc[patient_idx].patient_id
print ("Patient Id", patient_id)
img_fpath = get_mhd_path_from_patient_id(patient_id)
img_arr, origin, spacing = load_arr_from_mhd(img_fpath)
print("Old img", img_arr.shape, origin, spacing)
# Rescale Image
img_arr, spacing = resample_img(img_arr, spacing, VOXEL_SPACING)
# -
plot_slice(img_arr[100,:,:])
# Normalize Image
img_arr = normalize(img_arr)
print("New img", img_arr.shape, origin, spacing)
plot_slice(img_arr[100,:,:])
# +
# #%ls /bigguy/data/luna/data/subset0/
# -
# +
patient_id = '1.3.6.1.4.1.14519.5.2.1.6279.6001.111172165674661221381920536987'
img_path = os.path.join(DATA_PATH, 'subset0', patient_id+'.mhd')
print(img_path)
itk_img = sitk.ReadImage(img_path)
img_arr, origin, spacing = load_arr_from_mhd(img_path)
print("Shape", img_arr.shape, "Origin", origin, "Spacing", spacing)
itk_img.GetDepth(), itk_img.GetHeight(), itk_img.GetWidth(), itk_img.GetOrigin(), itk_img.GetSpacing()
print("Size", itk_img.GetSize()) #(x,y,z)
print("Direction", itk_img.GetDirection())
img_arr = sitk.GetArrayFromImage(itk_img)
img_arr.shape #(z,y,x)
rescale = spacing / TARGET_VOXEL_MM
print("Rescale", rescale)
def resample(image, old_spacing, new_spacing=[1, 1, 1]):
resize_factor = old_spacing / new_spacing
new_real_shape = image.shape * resize_factor
new_shape = np.round(new_real_shape)
real_resize_factor = new_shape / image.shape
new_spacing = old_spacing / real_resize_factor
image = scipy.ndimage.zoom(image, real_resize_factor, mode='nearest')
return image, new_spacing
# img_arr, new_spacing = resample(img_arr, spacing)
# img_arr.shape
# -
TARGET_VOXEL_MM = [1.,1.,1.]
nodules = anno_df[anno_df.patient_id == patient_id]
print(nodules.columns)
assert nodules['patient_id'].values[0] == patient_id
nodule = nodules.values[0]
nodule_z, nodule_y, nodule_x = nodule[3], nodule[2], nodule[1]
nodule_coords = nodule[1:4]
print("Init Nodule coords", nodule_coords, nodule_z, nodule_y, nodule_x)
nodule_coords = np.array([nodule_z, nodule_y, nodule_x])
print("Reversed Nodule coords", nodule_coords, nodule_z, nodule_y, nodule_x)
new_nodule_coords = world_2_voxel(nodule_coords, origin, spacing)
print(np.ceil(new_nodule_coords).astype(int))
margin=0.05
dpi=80
npa_zslice = scan_arr[187,:,:]
ysize = npa_zslice.shape[0]
xsize = npa_zslice.shape[1]
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
t = ax.imshow(npa_zslice, interpolation=None)
xy = int(round(new_nodule_coords[2])), int(round(new_nodule_coords[1]))
print(xy)
box = plt.Rectangle(xy, 50, 50, fill=False,
edgecolor='white', linewidth=1)
ax.add_patch(box)
plt.imshow(npa_zslice, cmap=plt.cm.Greys_r,
vmin=npa_zslice.min(), vmax=npa_zslice.max());
# +
npa_zslice = scan_arr[100,:,:]
ysize = npa_zslice.shape[0]
xsize = npa_zslice.shape[1]
#fig = plt.figure()
#fig.set_size_inches(15,30)
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
t = ax.imshow(npa_zslice, interpolation=None)
box = plt.Rectangle((100,100), 50, 50, fill=False,
edgecolor='white', linewidth=1)
ax.add_patch(box)
text = "nodule0"
# if 'score' in bbox:
#     text = '%s: %.2f' % (bbox['label'], bbox['score'])
# else:
#     text = bbox['label']
ax.text(100, 100, text,
bbox={'facecolor':'white', 'alpha':0.5})
plt.imshow(npa_zslice, cmap=plt.cm.Greys_r,
vmin=scan_arr.min(), vmax=scan_arr.max());
plt.title("slice1")
# +
# Get Nodules
nodules = anno_df[anno_df.patient_id == patient_id]
def create_bb(z, y, x, diameter, label="nodule"):
radius = diameter/2.0
return {
'label': label,
'slice': z,
'diameter': diameter,
'xmin': int(round(x - radius)),
'ymin': int(round(y - radius)),
'xmax': int(round(x + radius)),
'ymax': int(round(y + radius))
}
bbs = []
for index, nodule in nodules.iterrows():
z,y,x = nodule['coord_z'], nodule['coord_y'], nodule['coord_x']
diameter = nodule['diameter']
print("Old anno", z,y,x,diameter)
# Rescale Annotation
z,y,x = world_2_voxel((z,y,x), origin, spacing)
z,y,x = int(round(z)), int(round(y)), int(round(x))
print("New anno", z,y,x)
bb = create_bb(z,y,x, diameter)
print(bb)
bbs.append(bb)
print("imgshape coords", img_arr.shape, z,y,x)
print("Bounding Boxes:", bbs)
# -
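# create_bb (restated below so the cell runs standalone) turns a center
# point and diameter into a pixel bounding box. A 10 px nodule centered at
# (x=200, y=100) on slice 40:

```python
def create_bb(z, y, x, diameter, label="nodule"):
    # Half the diameter on each side of the center gives the box extents
    radius = diameter / 2.0
    return {'label': label, 'slice': z, 'diameter': diameter,
            'xmin': int(round(x - radius)), 'ymin': int(round(y - radius)),
            'xmax': int(round(x + radius)), 'ymax': int(round(y + radius))}

bb = create_bb(40, 100, 200, 10)
print(bb)  # xmin=195, xmax=205, ymin=95, ymax=105
```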
bb_idx = 0
slice_idx = bbs[bb_idx]['slice']
print("Slice", slice_idx)
print("BB", bbs[bb_idx:bb_idx+1])
plot_slice_bbs(img_arr[slice_idx], bbs[bb_idx:bb_idx+1])
# +
margin = 0.05
dpi = 80
title = "slice1"
text = "nodule0"
npa_zslice = scan_arr[100,:,:]
ysize = npa_zslice.shape[0]
xsize = npa_zslice.shape[1]
#fig = plt.figure()
#fig.set_size_inches(15,30)
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
t = ax.imshow(npa_zslice, interpolation=None)
box = plt.Rectangle((100,100), 50, 50, fill=False,
edgecolor='white', linewidth=1)
ax.add_patch(box)
# if 'score' in bbox:
# text = '%s: %.2f' % (bbox['label'], bbox['score'])
# else:
# text = bbox['label']
ax.text(100, 100, text,
bbox={'facecolor':'white', 'alpha':0.5})
plt.imshow(npa_zslice, cmap=plt.cm.Greys_r,
vmin=scan_arr.min(), vmax=scan_arr.max());
plt.title(title)
# -
anno_df.columns, lidc_pos_df.columns
# +
lidc_pos_df = pd.read_csv(os.path.join(META_PATH, "lidc_annotations.csv"))
luna_pos_df = pd.read_csv(os.path.join(META_PATH, "annotations.csv"))
luna_pos_df
# -
# ### Visualize 2D
"""
SimpleITK convention = x,y,z
Spacing = physical distance (mm) between voxel centers along each axis (x, y, z each have a spacing)
Origin = world (mm) coordinates of the voxel at index (0,0,0)
SimpleITK and numpy indexing access is in opposite order!
SimpleITK: image[x,y,z]
numpy: image_numpy_array[z,y,x]
GetArrayFromImage(): returns a copy of the image data. You can then freely modify the data as it has no effect on the original SimpleITK image.
GetArrayViewFromImage(): returns a view on the image data which is useful for display in a memory efficient manner. You cannot modify the data and the view will be invalid if the original SimpleITK image is deleted.
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/03_Image_Details.html
"""
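# A numpy-only illustration of the index-order flip described above: arrays
# from GetArrayFromImage are indexed [z, y, x], while SimpleITK reports
# size/origin/spacing in (x, y, z) order; hence the reversed() calls in the
# loaders below.

```python
import numpy as np

arr = np.zeros((3, 4, 5))          # z=3 slices, y=4 rows, x=5 cols
# SimpleITK would report this image's size in (x, y, z) order
sitk_style_size = tuple(reversed(arr.shape))
print(arr.shape, sitk_style_size)  # (3, 4, 5) (5, 4, 3)
```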
# +
def plot_slice(slice_arr):
fig = plt.figure()
fig.set_size_inches(15,30)
plt.title('Slice1')
plt.imshow(slice_arr, cmap=plt.cm.Greys_r,
vmin=slice_arr.min(), vmax=slice_arr.max());
COLORS = {
'nodule': 'white',
'non_nodule': 'red'
}
def plot_slice_bbs(slice_arr, bboxes, spacing=(1.0, 1.0), margin=0.05, dpi=80, title="slice"):
print("Slice Shape", slice_arr.shape)
ysize = slice_arr.shape[0]
xsize = slice_arr.shape[1]
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
#fig.set_size_inches(15,30)
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
t = ax.imshow(slice_arr, extent=extent, interpolation=None)
plt.imshow(slice_arr, cmap=plt.cm.Greys_r,
vmin=slice_arr.min(), vmax=slice_arr.max());
for bbox in bboxes:
print(bbox)
xy = bbox['xmin'], bbox['ymin']
width = bbox['xmax'] - bbox['xmin']
height = bbox['ymax'] - bbox['ymin']
color = COLORS[bbox['label']]
box = plt.Rectangle(xy, width, height, fill=False,
edgecolor=color, linewidth=1)
ax.add_patch(box)
if 'score' in bbox:
text = '%s: %.2f' % (bbox['label'], bbox['score'])
else:
text = bbox['label']
ax.text(bbox['xmin'], bbox['ymin'], text,
bbox={'facecolor':color, 'alpha':0.5})
plt.title(title)
def normalize(image):
MIN_BOUND = -1000.0
MAX_BOUND = 400.0
image = (image - MIN_BOUND) / (MAX_BOUND - MIN_BOUND)
image[image > 1] = 1.
image[image < 0] = 0.
return image
def get_mhd_path_from_patient_id(patient_id):
for subject_no in range(10):
src_dir = os.path.join(
DATA_PATH, "subset" + str(subject_no)) + "/"
for src_path in glob(src_dir + "*.mhd"):
if patient_id in src_path:
return src_path
return None
def load_arr_from_mhd(filename):
itkimage = sitk.ReadImage(filename)
ct_scan = sitk.GetArrayFromImage(itkimage)
origin = np.array(list(reversed(itkimage.GetOrigin())))
spacing = np.array(list(reversed(itkimage.GetSpacing())))
return ct_scan, origin, spacing
def load_viewable_arr_from_mhd(filename):
itkimage = sitk.ReadImage(filename)
ct_scan = sitk.GetArrayViewFromImage(itkimage)
origin = np.array(list(reversed(itkimage.GetOrigin())))
spacing = np.array(list(reversed(itkimage.GetSpacing())))
return ct_scan, origin, spacing
def world_2_voxel(world_coordinates, origin, spacing):
stretched_voxel_coordinates = np.absolute(world_coordinates - origin)
voxel_coordinates = stretched_voxel_coordinates / spacing
return voxel_coordinates
def voxel_2_world(voxel_coordinates, origin, spacing):
stretched_voxel_coordinates = voxel_coordinates * spacing
world_coordinates = stretched_voxel_coordinates + origin
return world_coordinates
def resize_voxel(x, desired_shape):
factors = np.array(x.shape).astype('float32') / np.array(desired_shape).astype('float32')
output = ndimage.zoom(x, 1.0 / factors, order=1)
assert output.shape == desired_shape, 'resize error'
return output
def percent_to_pixels(x_perc, y_perc, z_perc, diam_perc, img):
res_x = int(round(x_perc * img.shape[2]))
res_y = int(round(y_perc * img.shape[1]))
res_z = int(round(z_perc * img.shape[0]))
diameter = int(round(diam_perc * max(img.shape[1], img.shape[2])))
return res_x, res_y, res_z, diameter
# -
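# Round-trip check of world_2_voxel / voxel_2_world above (restated so this
# cell is standalone). Origin and spacing are made-up values in (z, y, x)
# order, matching what load_arr_from_mhd returns.

```python
import numpy as np

def world_2_voxel(world, origin, spacing):
    return np.absolute(world - origin) / spacing

def voxel_2_world(voxel, origin, spacing):
    return voxel * spacing + origin

origin = np.array([-100.0, -200.0, -200.0])   # (z, y, x), mm
spacing = np.array([2.5, 0.7, 0.7])           # (z, y, x), mm per voxel
world = np.array([-50.0, -130.0, -60.0])

voxel = world_2_voxel(world, origin, spacing)
print(voxel)                                  # approximately [20. 100. 200.]
print(voxel_2_world(voxel, origin, spacing))  # back to the world coords
```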
# +
lidc_pos_df = pd.read_csv(os.path.join(META_PATH, "lidc_annotations.csv"))
luna_pos_df = pd.read_csv(os.path.join(META_PATH, "annotations.csv"))
patient_id = lidc_pos_df.iloc[100].patient_id
mhd_path = get_mhd_path_from_patient_id(patient_id)
itk_img = sitk.ReadImage(mhd_path)
print("ITK Image")
print("Origin", itk_img.GetOrigin())
print("Size", itk_img.GetSize())
print("Spacing", itk_img.GetSpacing())
print("Direction", itk_img.GetDirection())
print(itk_img.GetDimension())
print(itk_img.GetWidth())
print(itk_img.GetHeight())
print(itk_img.GetDepth())
# Get Numpy Array from SimpleITK format
scan_arr, origin, spacing = load_arr_from_mhd(mhd_path)
viewable_scan_arr, origin, spacing = load_viewable_arr_from_mhd(mhd_path)
# +
# itk_img = SimpleITK.ReadImage(src_path)
# img_array = SimpleITK.GetArrayFromImage(itk_img)
# num_z, height, width = img_array.shape #heightXwidth constitute the transverse plane
# origin = numpy.array(itk_img.GetOrigin()) # x,y,z Origin in world coordinates (mm)
# spacing = numpy.array(itk_img.GetSpacing()) # spacing of voxels in world coor. (mm)
# rescale = spacing / settings.TARGET_VOXEL_MM
# +
npa_zslice = scan_arr[100,:,:]
fig = plt.figure()
fig.set_size_inches(15,30)
fig.add_subplot(1,3,1)
plt.imshow(npa_zslice)
plt.title('default colormap')
plt.axis('off')
fig.add_subplot(1,3,2)
plt.imshow(npa_zslice,cmap=plt.cm.Greys_r);
plt.title('grey colormap')
plt.axis('off')
fig.add_subplot(1,3,3)
plt.title('grey colormap,\n scaling based on volumetric min and max values')
plt.imshow(npa_zslice,cmap=plt.cm.Greys_r, vmin=scan_arr.min(), vmax=scan_arr.max())
plt.axis('off');
# -
npa_zslice = scan_arr[100,:,:]
fig = plt.figure()
fig.set_size_inches(15,30)
plt.title('Slice1')
plt.imshow(npa_zslice, cmap=plt.cm.Greys_r,
vmin=scan_arr.min(), vmax=scan_arr.max());
# +
margin = 0.05
dpi = 80
title = "slice1"
text = "nodule0"
spacing = itk_img.GetSpacing()
npa_zslice = scan_arr[100,:,:]
ysize = npa_zslice.shape[0]
xsize = npa_zslice.shape[1]
#fig = plt.figure()
#fig.set_size_inches(15,30)
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
t = ax.imshow(npa_zslice, extent=extent, interpolation=None)
box = plt.Rectangle((100,100), 50, 50, fill=False,
edgecolor='white', linewidth=1)
ax.add_patch(box)
# if 'score' in bbox:
# text = '%s: %.2f' % (bbox['label'], bbox['score'])
# else:
# text = bbox['label']
ax.text(100, 100, text,
bbox={'facecolor':'white', 'alpha':0.5})
plt.imshow(npa_zslice, cmap=plt.cm.Greys_r,
vmin=scan_arr.min(), vmax=scan_arr.max());
plt.title(title)
# -
# Get Nodules
lidc_patient_nodules = lidc_pos_df[lidc_pos_df.patient_id == patient_id]
luna_patient_nodules = luna_pos_df[luna_pos_df.seriesuid == patient_id]
luna_patient_nodules, scan_arr.shape, viewable_scan_arr.shape
# +
def get_bbs_from_lidc_anno(img_arr, anno_df, patient_id, z_coord_pct, label):
img_z, img_y, img_x = img_arr.shape
print(img_z, img_y, img_x)
# Filter to this patient's nodules on the requested slice
nodules = anno_df.loc[
(anno_df["coord_z"] == z_coord_pct) &
(anno_df["patient_id"] == patient_id)
]
bbs = []
for index, nodule in nodules.iterrows():
print(nodule)
z = int(round(nodule['coord_z'] * img_z))
y = int(round(nodule['coord_y'] * img_y))
x = int(round(nodule['coord_x'] * img_x))
diameter = int(round(nodule['diameter'] * max(img_y, img_x)))
print("coords", z, y, x, diameter)
bbs.append({
'label': label,
'xmin': x - diameter//2,
'ymin': y - diameter//2,
'xmax': x + diameter//2,
'ymax': y + diameter//2
})
return bbs
def make_bbs_from_lidc_nodules(img_arr, nodule_df, slice_idx):
img_z, img_y, img_x = img_arr.shape
print(img_z, img_y, img_x)
bbs = []
for index, nodule in nodule_df.iterrows():
x, y, z, _ = percent_to_pixels(
nodule['coord_x'], nodule['coord_y'],
nodule['coord_z'], nodule['diameter'], img_arr)
diameter = int(round(nodule['diameter'] * max(img_y, img_x)))
print("coords", z, y, x, diameter)
if z == slice_idx:
bbs.append({
'label': 'nodule',
'xmin': x - diameter//2,
'ymin': y - diameter//2,
'xmax': x + diameter//2,
'ymax': y + diameter//2
})
return bbs
# -
lidc_bbs = make_bbs_from_lidc_nodules(scan_arr, lidc_patient_nodules, 89)
lidc_bbs
# +
slice_idx = 89
slice_arr = scan_arr[slice_idx,:,:]
lidc_bbs = make_bbs_from_lidc_nodules(
scan_arr, lidc_patient_nodules, slice_idx)
spacing = itk_img.GetSpacing()
plot_slice_bbs(slice_arr, lidc_bbs, spacing)
# box = plt.Rectangle((100,100), 50, 50, fill=False,
# edgecolor='white', linewidth=1)
# ax.add_patch(box)
# # if 'score' in bbox:
# # text = '%s: %.2f' % (bbox['label'], bbox['score'])
# # else:
# # text = bbox['label']
# ax.text(100, 100, text,
# bbox={'facecolor':'white', 'alpha':0.5})
# -
def myshow(img, slice_idx, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
print("Spacing", spacing)
nda = nda[slice_idx,:,:]
ysize = nda.shape[0]
xsize = nda.shape[1]
# Make a figure big enough to accommodate an axis of xpixels by ypixels
    figsize = (1 + margin) * xsize / dpi, (1 + margin) * ysize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
# Make the axis the right size...
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
print(extent)
t = ax.imshow(nda, extent=extent, interpolation=None)
print(nda.shape)
if nda.ndim == 2:
t.set_cmap("gray")
if(title):
plt.title(title)
img_file = list(annotations_df["file"])[0]
itk_img = sitk.ReadImage(img_file)
img_array = sitk.GetArrayFromImage(itk_img) # indexes are z,y,x (notice the ordering)
nda = sitk.GetArrayViewFromImage(itk_img)
center = np.array([node_x,node_y,node_z]) # nodule center
origin = np.array(itk_img.GetOrigin()) # x,y,z Origin in world coordinates (mm)
spacing = np.array(itk_img.GetSpacing()) # spacing of voxels in world coor. (mm)
v_center = np.rint((center - origin) / spacing)  # nodule center in voxel space (still x,y,z ordering)
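# The world-to-voxel conversion above can be sanity-checked with concrete numbers (illustrative values, not from the dataset):

```python
import numpy as np

center = np.array([-100.0, 50.0, -200.0])    # hypothetical nodule centre (x, y, z) in mm
origin = np.array([-200.0, -150.0, -300.0])  # hypothetical scan origin in world coordinates (mm)
spacing = np.array([0.7, 0.7, 2.5])          # hypothetical voxel size (mm)

v_center = np.rint((center - origin) / spacing)  # voxel indices, still x, y, z
print(v_center)  # → [143. 286.  40.]
```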
# ### Plot BBs
# +
def plot_itk_img(img, bboxes=None, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
if nda.ndim == 3:
# fastest dim, either component or x
c = nda.shape[-1]
        # if the number of components is 3 or 4, consider it an RGB image
        if c not in (3, 4):
nda = nda[nda.shape[0]//2,:,:]
elif nda.ndim == 4:
c = nda.shape[-1]
        if c not in (3, 4):
            raise RuntimeError("Unable to show 3D-vector Image")
# take a z-slice
nda = nda[nda.shape[0]//2,:,:,:]
ysize = nda.shape[0]
xsize = nda.shape[1]
# Make a figure big enough to accommodate an axis of xpixels by ypixels
# as well as the ticklabels, etc...
    figsize = (1 + margin) * xsize / dpi, (1 + margin) * ysize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
# Make the axis the right size...
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
t = ax.imshow(nda,extent=extent,interpolation=None)
if nda.ndim == 2:
t.set_cmap("gray")
if(title):
plt.title(title)
colors = plt.cm.hsv(np.linspace(0, 1, len(
comp.LABEL_TO_IDX.keys()))).tolist()
print(colors)
    for bbox in (bboxes or []):
print(bbox)
xy = bbox['xmin'], bbox['ymin']
width = bbox['xmax'] - bbox['xmin']
height = bbox['ymax'] - bbox['ymin']
color = colors[4] #comp.LABEL_TO_IDX[bbox['label']]]
print(color)
box = plt.Rectangle(xy, width, height, fill=False,
edgecolor=color, linewidth=3)
ax.add_patch(box)
def plot_img_w_bboxes(img_arr, pos_bbs, neg_bbs, title=None):
    """
    img_arr: single slice numpy array
bboxes: [
{
'label':'nodule',
'xmin':34,
'ymin':120,
'xmax':233,
'ymax':231
}
...
]
"""
plt.clf()
plt.imshow(img_arr)
plt.title(title)
plt.axis('off')
ax = plt.gca()
    colors = plt.cm.hsv(np.linspace(0, 1, len(
        comp.LABEL_TO_IDX.keys()))).tolist()
    for bbox in (pos_bbs or []) + (neg_bbs or []):
        print(bbox)
xy = bbox['xmin'], bbox['ymin']
width = bbox['xmax'] - bbox['xmin']
height = bbox['ymax'] - bbox['ymin']
color = colors[comp.LABEL_TO_IDX[bbox['label']]]
box = plt.Rectangle(xy, width, height, fill=False,
edgecolor=color, linewidth=3)
ax.add_patch(box)
if 'score' in bbox:
text = '%s: %.2f' % (bbox['label'], bbox['score'])
else:
text = bbox['label']
ax.text(bbox['xmin'], bbox['ymin'], text,
bbox={'facecolor':color, 'alpha':0.5})
# +
patient_id = lidc_pos_df.iloc[100].patient_id
all_patient_nodules = lidc_pos_df[lidc_pos_df.patient_id == patient_id]
coord_z_w_patient_nodules = all_patient_nodules['coord_z'].values
coord_z = coord_z_w_patient_nodules[0]
print(coord_z)
#single_slice_patient_nodules = all_patient_nodules.loc[(all_patient_nodules["coord_z"] == coord_z) & (all_patient_nodules["patient_id"] == patient_id)]
#all_patient_nodules, zcoords_w_patient_nodules, len(single_slice_patient_nodules)
img_path = find_mhd_file(patient_id)
img_arr, origin, spacing = load_itk(img_path, viewable=True)
img_z, img_y, img_x = img_arr.shape
slice_idx = round(coord_z * img_z)
print(slice_idx)
img_arr[0].shape
#img_file = list(annotations_df["file"])[0]
itk_img = sitk.ReadImage(img_path)
viewable_arr = sitk.GetArrayFromImage(itk_img) # indexes are z,y,x (notice the ordering)
nda = sitk.GetArrayViewFromImage(itk_img)
center = np.array([node_x,node_y,node_z]) # nodule center
origin = np.array(itk_img.GetOrigin()) # x,y,z Origin in world coordinates (mm)
spacing = np.array(itk_img.GetSpacing()) # spacing of voxels in world coor. (mm)
v_center = np.rint((center - origin) / spacing)  # nodule center in voxel space (still x,y,z ordering)
bbs = get_bbs_from_lidc_anno(img_arr, lidc_pos_df, patient_id, coord_z, 'nodule')
# plot_img_w_bboxes(img_arr[slice_idx], pos_bbs, neg_bbs, title=None)
viewable_arr.shape
# -
bbs
plot_itk_img(itk_img, bbs)
# ### 3D Nodule Viewer
# * https://www.kaggle.com/rodenluo/crop-save-and-view-nodules-in-3d
# +
# Starting with LUNA subset0
subset_path = os.path.join(DATA_PATH, 'subset0')
fpaths = glob(subset_path+"/*.mhd")
def get_filename(case):
global fpaths
for f in fpaths:
if case in f:
return(f)
annotations_df = pd.read_csv(ANNOTATIONS_PATH)
print(len(annotations_df))
annotations_df["file"] = annotations_df["seriesuid"].apply(get_filename)
annotations_df = annotations_df.dropna()
len(annotations_df)
# +
## Define resample method to make images isomorphic, default spacing is [1, 1, 1]mm
# Learned from <NAME>
# https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
def resample(image, old_spacing, new_spacing=[1, 1, 1]):
resize_factor = old_spacing / new_spacing
new_real_shape = image.shape * resize_factor
new_shape = np.round(new_real_shape)
real_resize_factor = new_shape / image.shape
new_spacing = old_spacing / real_resize_factor
    image = scipy.ndimage.zoom(image, real_resize_factor, mode='nearest')  # scipy.ndimage.interpolation.zoom is deprecated
return image, new_spacing
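# The shape arithmetic inside `resample` can be illustrated without running the actual interpolation (illustrative spacing values):

```python
import numpy as np

old_spacing = np.array([2.5, 0.7, 0.7])  # hypothetical z, y, x voxel size in mm
shape = np.array([100, 512, 512])        # scan shape in voxels

resize_factor = old_spacing / np.array([1, 1, 1])  # target spacing of 1 mm per axis
new_shape = np.round(shape * resize_factor)
print(new_shape)  # → [250. 358. 358.]
```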
def normalize(image):
MIN_BOUND = -1000.0
MAX_BOUND = 400.0
image = (image - MIN_BOUND) / (MAX_BOUND - MIN_BOUND)
image[image > 1] = 1.
image[image < 0] = 0.
return image
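# `normalize` maps the Hounsfield window [-1000, 400] onto [0, 1] and clips everything outside it; the function is restated below so the check runs on its own:

```python
import numpy as np

def normalize(image):
    MIN_BOUND = -1000.0
    MAX_BOUND = 400.0
    image = (image - MIN_BOUND) / (MAX_BOUND - MIN_BOUND)
    image[image > 1] = 1.
    image[image < 0] = 0.
    return image

# below the window, lower edge, middle, upper edge, above the window
hu = np.array([-2000.0, -1000.0, 0.0, 400.0, 1000.0])
out = normalize(hu)
print(out)  # values outside [-1000, 400] clip to 0 or 1
```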
def write_meta_header(filename, meta_dict):
header = ''
# do not use tags = meta_dict.keys() because the order of tags matters
tags = ['ObjectType','NDims','BinaryData',
'BinaryDataByteOrderMSB','CompressedData','CompressedDataSize',
'TransformMatrix','Offset','CenterOfRotation',
'AnatomicalOrientation',
'ElementSpacing',
'DimSize',
'ElementType',
'ElementDataFile',
'Comment','SeriesDescription','AcquisitionDate','AcquisitionTime','StudyDate','StudyTime']
for tag in tags:
if tag in meta_dict.keys():
header += '%s = %s\n'%(tag,meta_dict[tag])
f = open(filename,'w')
f.write(header)
f.close()
def dump_raw_data(filename, data):
""" Write the data into a raw format file. Big endian is always used. """
#Begin 3D fix
data=data.reshape([data.shape[0],data.shape[1]*data.shape[2]])
#End 3D fix
rawfile = open(filename,'wb')
a = array.array('f')
for o in data:
a.fromlist(list(o))
#if is_little_endian():
# a.byteswap()
a.tofile(rawfile)
rawfile.close()
def write_mhd_file(mhdfile, data, dsize):
assert(mhdfile[-4:]=='.mhd')
meta_dict = {}
meta_dict['ObjectType'] = 'Image'
meta_dict['BinaryData'] = 'True'
meta_dict['BinaryDataByteOrderMSB'] = 'False'
meta_dict['ElementType'] = 'MET_FLOAT'
meta_dict['NDims'] = str(len(dsize))
meta_dict['DimSize'] = ' '.join([str(i) for i in dsize])
meta_dict['ElementDataFile'] = os.path.split(mhdfile)[1].replace('.mhd','.raw')
write_meta_header(mhdfile, meta_dict)
pwd = os.path.split(mhdfile)[0]
if pwd:
data_file = pwd +'/' + meta_dict['ElementDataFile']
else:
data_file = meta_dict['ElementDataFile']
dump_raw_data(data_file, data)
def save_nodule(nodule_crop, name_index):
np.save(str(name_index) + '.npy', nodule_crop)
write_mhd_file(str(name_index) + '.mhd', nodule_crop, nodule_crop.shape[::-1])
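# The resulting .mhd header is plain `key = value` text; a sketch of what one might look like for a small float volume (illustrative sizes, following the tag order used in `write_meta_header`):

```python
meta = {
    'ObjectType': 'Image',
    'NDims': '3',
    'BinaryData': 'True',
    'BinaryDataByteOrderMSB': 'False',
    'DimSize': '64 64 32',
    'ElementType': 'MET_FLOAT',
    'ElementDataFile': 'nodule.raw',
}
order = ['ObjectType', 'NDims', 'BinaryData', 'BinaryDataByteOrderMSB',
         'DimSize', 'ElementType', 'ElementDataFile']
header = ''.join('%s = %s\n' % (tag, meta[tag]) for tag in order)
print(header)
```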
# +
def get_mhd_path_from_patient_id(patient_id):
for subject_no in range(10):
src_dir = os.path.join(
DATA_PATH, "subset" + str(subject_no)) + "/"
for src_path in glob(src_dir + "*.mhd"):
if patient_id in src_path:
return src_path
return None
def load_arr_from_mhd(filename):
itkimage = sitk.ReadImage(filename)
img_arr = sitk.GetArrayFromImage(itkimage)
# SimpleITK output is [x,y,z] but numpy is [z,y,x], so we reverse
origin = np.array(list(reversed(itkimage.GetOrigin())))
spacing = np.array(list(reversed(itkimage.GetSpacing())))
return img_arr, origin, spacing
def load_scan_arr(patient_id):
img_fpath = get_mhd_path_from_patient_id(patient_id)
img_arr, origin, spacing = load_arr_from_mhd(img_fpath)
return img_arr, origin, spacing
def get_scan_bbs(patient_id, anno_df):
img_path = get_mhd_path_from_patient_id(patient_id)
img_arr, origin, spacing = load_arr_from_mhd(img_path)
nodules = anno_df[anno_df.patient_id == patient_id]
bbs = []
for idx,nodule in nodules.iterrows():
bbs.append(make_bb_from_nodule(nodule, origin, spacing))
return bbs
def make_bb_from_nodule(nodule, origin, spacing):
print(nodule)
coords_mm = np.array([nodule['coord_z'], nodule['coord_y'], nodule['coord_x']])
print(coords_mm)
coords_mm = coords_mm - origin
diameter = nodule['diameter']
print(diameter)
bb = make_bb_from_mm_coords(
coords_mm[0], coords_mm[1], coords_mm[2], diameter, spacing)
return bb
def make_bb_from_mm_coords(z, y_center, x_center, diameter, spacing):
    radius_mm = diameter / 2
    y_spacing = spacing[1]
    x_spacing = spacing[2]
    y_min_mm = y_center - radius_mm
    x_min_mm = x_center - radius_mm
    y_max_mm = y_center + radius_mm
    x_max_mm = x_center + radius_mm
    y_min_pixels = int(round(y_min_mm / y_spacing))
    x_min_pixels = int(round(x_min_mm / x_spacing))
    y_max_pixels = int(round(y_max_mm / y_spacing))
    x_max_pixels = int(round(x_max_mm / x_spacing))
bb = make_bb_from_pixel_coords(z, y_min_pixels, y_max_pixels,
x_min_pixels, x_max_pixels)
return bb
def make_bb_from_pixel_coords(z, ymin, ymax, xmin, xmax, label="nodule"):
return {
'label': label,
'slice': int(round(z)),
'xmin': int(round(xmin)),
'ymin': int(round(ymin)),
'xmax': int(round(xmax)),
'ymax': int(round(ymax))
}
def get_slice_idx_to_bb_map(bbs):
idxs = {}
for bb in bbs:
if bb['slice'] in idxs:
idxs[bb['slice']].append(bb)
else:
idxs[bb['slice']] = [bb]
return idxs
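# For example, grouping hypothetical boxes from two slices (the same grouping `get_slice_idx_to_bb_map` performs, written here with `dict.setdefault`):

```python
bbs = [
    {'label': 'nodule', 'slice': 42, 'xmin': 10, 'ymin': 10, 'xmax': 20, 'ymax': 20},
    {'label': 'nodule', 'slice': 42, 'xmin': 50, 'ymin': 60, 'xmax': 70, 'ymax': 80},
    {'label': 'nodule', 'slice': 89, 'xmin': 5, 'ymin': 5, 'xmax': 15, 'ymax': 15},
]

idxs = {}
for bb in bbs:
    idxs.setdefault(bb['slice'], []).append(bb)  # append to this slice's list, creating it if needed

print(sorted(idxs))   # → [42, 89]
print(len(idxs[42]))  # → 2
```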
def plot_slice_w_bbs(slice_arr, bbs, title=None):
fig = plt.figure()
fig.set_size_inches(15,30)
ax = plt.gca()
for bb in bbs:
nodule_xy = bb['xmin'], bb['ymin']
width = bb['xmax'] - bb['xmin']
height = bb['ymax'] - bb['ymin']
box = plt.Rectangle(nodule_xy, width, height, fill=False,
edgecolor='white', linewidth=1)
ax.add_patch(box)
plt.imshow(slice_arr, cmap=plt.cm.Greys_r,
vmin=slice_arr.min(), vmax=slice_arr.max());
| explore-dsb2017.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/chetansonaje/LetsUpgrade-Python-Essential/blob/master/Day%202.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="XmRYuioSUAy4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="dce596b4-a839-470a-978a-63c55a5daa3e"
#List
my_list = ['qw', 12, 'gh', 56]  # avoid shadowing the builtin `list`
my_list1 = [18, 'fg']
print(my_list)
print(my_list + my_list1)
print(my_list[0])
print(my_list1 * 2)
print(my_list[2:])
# + id="WZR0upS-UEIB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="c9dc0ee7-f404-44c6-d04c-8f048103a6c3"
#Dictionary
my_dict = {'name': 'Chetan', 'Id': 25, 'City': 'Mumbai', 1: [4, 5]}  # avoid shadowing the builtin `dict`
my_dict1 = {1: 2, 3: 5}
print(my_dict)
print(my_dict.values())
print(my_dict.keys())
print(my_dict1)
print(my_dict1.values())
print(my_dict['name'])
# + id="uR4azGTZUEaX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="26324b50-0631-4c22-84bf-7409f17267df"
#set
st = {"chetan",2,2,4,3.3,5,5}
st1 = {3,3,2}
print (st)
print (st1)
print (len(st))
st.remove(4)
print (st)
st.add(9)
print(st)
st.update(st1)
print (st)
b=st.pop()
print (b)
st.discard(2)
print (st)
# + id="QALxMORHUEYW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="5dc90327-eb93-4a9f-e771-a088b80587a5"
#Tuple
my_tuple = (12, 4.5, 12, 'hj', 'ty')  # avoid shadowing the builtin `tuple`
my_tuple1 = (4, 4, 4.5)
print(my_tuple)
print(my_tuple1)
print(my_tuple[3])
print(my_tuple + my_tuple1)
print(my_tuple1 * 2)
# + id="RNRw2bL2UEXA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="4d46335d-9de9-4077-ac93-d539ae6f9b6c"
#String
s = 'lets upgradians'  # avoid shadowing the builtin `str`
print(s)
print(s * 2)
print(s + '34')
print(s[0])
print(s[5:8])
# + id="2C_anvFqUEUd" colab_type="code" colab={}
# + id="PQOreARZUESC" colab_type="code" colab={}
# + id="EwnERf7nUEOl" colab_type="code" colab={}
# + id="dIsdCPFGUDNb" colab_type="code" colab={}
| Day 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os.path
from datetime import datetime, timedelta
from scipy import stats
# +
root_path = os.path.dirname(os.getcwd())
# Load inspections
inspections = pd.read_csv(os.path.join(root_path, "DATA/food_inspections.csv"))
# Load observation datasets
burglaries = pd.read_csv(os.path.join(root_path, "DATA/burglaries.csv"))
carts = pd.read_csv(os.path.join(root_path, "DATA/garbage_carts.csv"))
complaints = pd.read_csv(os.path.join(root_path, "DATA/sanitation_complaints.csv"))
# -
# Create datetime columns
inspections["datetime"] = pd.to_datetime(inspections.inspection_date)
burglaries["datetime"] = pd.to_datetime(burglaries.date)
carts["datetime"] = pd.to_datetime(carts.creation_date)
complaints["datetime"] = pd.to_datetime(complaints.creation_date)
# FILTER: consider only inspections since 2012
# Otherwise early inspections have few/no observations within window
inspections = inspections.loc[inspections.inspection_date >= "2012"]
def get_estimates(observations, column_name, window, bandwidth):
# Sort chronologically and index by datetime
observations.sort_values("datetime", inplace=True)
observations.index = observations.datetime.values
    # Generate kernel from the previous `window` days of observations
def get_estimates_given_date(group):
stop = group.datetime.iloc[0]
start = stop - timedelta(days=window)
recent = observations.loc[start:stop]
x1 = recent.longitude
y1 = recent.latitude
values = np.vstack([x1, y1])
        kernel = stats.gaussian_kde(values, bw_method=bandwidth)  # bandwidth controls kernel smoothing
x2 = group.longitude
y2 = group.latitude
samples = np.vstack([x2, y2])
group[column_name] = kernel(samples)
return group[["inspection_id", column_name]]
# Group inspections by date, generate kernels, sample
return inspections.groupby("inspection_date").apply(get_estimates_given_date)
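# The kernel step can be illustrated in isolation with synthetic coordinates: `gaussian_kde` fits a 2-D kernel to stacked [x; y] rows and is then evaluated at the inspection locations (all numbers below are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x1 = rng.normal(-87.6, 0.05, size=200)  # hypothetical observation longitudes
y1 = rng.normal(41.8, 0.05, size=200)   # hypothetical observation latitudes
kernel = stats.gaussian_kde(np.vstack([x1, y1]))

samples = np.vstack([[-87.6, -87.5], [41.8, 41.9]])  # two query points (lon row, lat row)
density = kernel(samples)
print(density.shape)  # one density estimate per query point
```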
# Calculate kde given observation window, kernel bandwidth
burglary_kde = get_estimates(burglaries, "burglary_kde", 90, 1)
# Calculate kde given observation window, kernel bandwidth
cart_kde = get_estimates(carts, "cart_kde", 90, 1)
# Calculate kde given observation window, kernel bandwidth
complaint_kde = get_estimates(complaints, "complaint_kde", 90, 1)
thing = pd.merge(inspections, cart_kde, on="inspection_id").sample(1000)
# +
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
def randrange(n, vmin, vmax):
'''
Helper function to make an array of random numbers having shape (n, )
with each number distributed Uniform(vmin, vmax).
'''
return (vmax - vmin)*np.random.rand(n) + vmin
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
n = 100
# Plot the sampled KDE values as a 3D scatter over longitude/latitude,
# once per (color, marker) style setting; the randrange demo ranges are unused here.
for c, m, zlow, zhigh in [('r', 'o', -50, -25), ('b', '^', -30, -5)]:
xs = thing.longitude#randrange(n, 23, 32)
ys = thing.latitude#randrange(n, 0, 100)
zs = thing.cart_kde#randrange(n, zlow, zhigh)
ax.scatter(xs, ys, zs, c=c, marker=m)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
# -
import os.path
root_path = os.path.dirname(os.getcwd())
# Save result
burglary_kde.to_csv(os.path.join(root_path, "DATA/burglary_kde.csv"), index=False)
# Save result
cart_kde.to_csv(os.path.join(root_path, "DATA/cart_kde.csv"), index=False)
# Save result
complaint_kde.to_csv(os.path.join(root_path, "DATA/complaint_kde.csv"), index=False)
| CODE/.ipynb_checkpoints/22_calculate_kde_values-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:cf_step]
# language: python
# name: conda-env-cf_step-py
# ---
# +
# default_exp networks
# -
# hide
from nbdev.showdoc import *
# +
# export
import torch
import torch.nn as nn
from torch import tensor
# -
# # Networks
#
# > Common neural network architectures for *Collaborative Filtering*.
# # Overview
#
# This package implements several neural network architectures that can be used to build recommendation systems. Users of the library can add or define their own implementations or use the existing ones. There are two layers that every architecture should define:
#
# * **user_embeddings**: The user embedding matrix
# * **item_embeddings**: The item embedding matrix
#
# Every implementation should be a subclass of `torch.nn.Module`.
# ## Simple Collaborative Filtering
#
# This architecture is the simplest one to implement *Collaborative Filtering*. It only defines the embedding matrices for users and items and the final rating is computed by the dot product of the corresponding rows.
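# The dot-product rating can be sketched with plain NumPy (hypothetical 4-dimensional embeddings):

```python
import numpy as np

user_embedding = np.array([0.1, -0.2, 0.3, 0.4])  # one row of the user embedding matrix
item_embedding = np.array([0.5, 0.1, -0.3, 0.2])  # one row of the item embedding matrix

rating = user_embedding @ item_embedding  # dot product = predicted affinity
print(round(float(rating), 2))  # → 0.02
```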
# export
class SimpleCF(nn.Module):
def __init__(self, n_users: int, n_items: int, factors: int = 16,
user_embeddings: torch.tensor = None, freeze_users: bool = False,
item_embeddings: torch.tensor = None, freeze_items: bool = False,
init: torch.nn.init = torch.nn.init.normal_, binary: bool =False, **kwargs):
super().__init__()
self.binary = binary
self.user_embeddings = self._create_embedding(n_users, factors,
user_embeddings, freeze_users,
init, **kwargs)
self.item_embeddings = self._create_embedding(n_items, factors,
item_embeddings, freeze_items,
init, **kwargs)
self.sigmoid = nn.Sigmoid()
def forward(self, u: torch.tensor, i: torch.tensor) -> torch.tensor:
user_embedding = self.user_embeddings(u)
user_embedding = user_embedding[:, None, :]
item_embedding = self.item_embeddings(i)
item_embedding = item_embedding[:, None, :]
rating = torch.matmul(user_embedding, item_embedding.transpose(1, 2))
if self.binary:
return self.sigmoid(rating)
return rating
    def _create_embedding(self, num_embeddings, factors, weights, freeze, init, **kwargs):
        embedding = nn.Embedding(num_embeddings, factors)
init(embedding.weight.data, **kwargs)
if weights is not None:
embedding.load_state_dict({'weight': weights})
if freeze:
embedding.weight.requires_grad = False
return embedding
# Arguments:
#
# * n_users (int): The number of unique users
# * n_items (int): The number of unique items
# * factors (int): The dimension of the embedding space
# * user_embeddings (torch.tensor): Pre-trained weights for the user embedding matrix
# * freeze_users (bool): `True` if we want to keep the user weights as is (i.e. non-trainable)
# * item_embeddings (torch.tensor): Pre-trained weights for the item embedding matrix
# * freeze_item (bool): `True` if we want to keep the item weights as is (i.e. non-trainable)
# * init (torch.nn.init): The initialization method of the embedding matrices - default: torch.nn.init.normal_
# +
# initialize the model with 100 users, 50 items and a 16-dimensional embedding space
model = SimpleCF(100, 50, 16, mean=0., std=.1, binary=True)
# predict the rating that user 3 would give to item 33
model(torch.tensor([2]), torch.tensor([32]))
| nbs/networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Imports
# %load_ext PWE_NB_Extension
from PW_explorer.load_worlds import load_worlds
from PW_explorer.visualize import PWEVisualization
from PW_explorer.nb_helper import ASPRules
from PW_explorer.query import PWEQuery
from PW_explorer.helper import pw_slicer
import numpy as np
import os
import networkx as nx
from nxpd import draw
from nxpd import nxpdParams
nxpdParams['show'] = 'ipynb'
# ##### Boring Boilerplate code
# Fixes positions of the nodes, so we get nicer looking outputs with consistent node placement.
def mod_g(g):
g.nodes['a']['pos'] = '"1,2!"'
g.nodes['b']['pos'] = '"2,1!"'
g.nodes['c']['pos'] = '"2,3!"'
g.nodes['d']['pos'] = '"3,1!"'
g.nodes['e']['pos'] = '"3,3!"'
g.nodes['f']['pos'] = '"4,2!"'
return g
def get_vizs(exp, groups):
vizs = {}
for q_rels, group in groups.items():
group_vizs = []
for pw_id in group:
temp_dfs, _ = pw_slicer(exp['pw_rel_dfs'], None, [pw_id])
g = PWEVisualization.graphviz_from_meta_data(temp_dfs, exp['meta_data']['graphviz'])
g = mod_g(g)
group_vizs.append(g)
#display(draw(g, layout='neato'))
vizs[q_rels] = group_vizs
return vizs
def save_vizs(vizs, folder_name):
os.makedirs(folder_name, exist_ok=True)
for i, (_, g_vizs) in enumerate(vizs.items()):
if len(g_vizs) > 1:
group_folder = "group_{}".format(str(i+1))
os.makedirs(os.path.join(folder_name, group_folder), exist_ok=True)
for j, g in enumerate(g_vizs):
fname = 'ex_{}.{}'.format(str(j+1), 'png')
draw(g, layout='neato', filename=os.path.join(folder_name, group_folder, fname))
else:
group_viz_name = 'group_{}.{}'.format(str(i+1), 'png')
draw(g_vizs[0], layout='neato', filename=os.path.join(folder_name, group_viz_name))
# # Transitive Closure
# Let us consider a small graph.
# +
# %%dlv --donot-run -exp e1 --donot-display_input -lci g_e1
% A simple base graph:
% a -> b -> d
% \ \ \
% \->c \->e-\->f
% graphviz graph graph_type=directed
% graphviz edge e(HEAD,TAIL) color=black ord=3
e(a,b). e(a,c).
e(b,d). e(b,e).
e(d,f). e(e,f).
# -
# %dlv --run -l g_e1 -exp e1 --donot-display_input
e1['pw_rel_dfs'], e1['rel_schemas'], e1['pw_objects'] = load_worlds(e1['asp_soln'], e1['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(e1['pw_rel_dfs'], e1['meta_data']['graphviz'])), layout='neato')
# ### Transitive Closure Encoding
# +
# %%dlv --donot-run --donot-display_input -lci tc
% Playing with transitive closure and its provenance..
% Simple (right) recursive rules from computing the transitive closure tc of e:
% graphviz edge tc(HEAD,TAIL) color=blue ord=1
tc(X,Y) :- e(X,Y).
tc(X,Y) :- e(X,Z), tc(Z,Y).
# -
# Now we generate the tc/2 edges on the graph shown above.
# %dlv --run -l g_e1 tc -exp tc_e1 --donot-display_input
tc_e1['pw_rel_dfs'], tc_e1['rel_schemas'], tc_e1['pw_objects'] = load_worlds(tc_e1['asp_soln'], tc_e1['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(tc_e1['pw_rel_dfs'], tc_e1['meta_data']['graphviz'])), layout='neato')
# The blue edges are the tc/2 relations derived from the recursive case (line 9 in cell[12]), and the black edges are the original edge set.
# ### Transitive Closure (along with some provenance)
# +
# %%dlv --donot-run --donot-display_input -lci tc4
% Playing with transitive closure and its provenance..
% EXAMPLE 1:
% $ dlv tc4.dlv e1.dlv -filter=q -silent
% {q(a,b), q(b,d), q(b,e), q(d,f), q(e,f)}
% EXAMPLE 2:
% $ dlv tc4.dlv e1.dlv e2.dlv -filter=q -silent
% {q(a,b), q(a,c), q(b,d), q(b,e), q(c,d), q(c,e), q(d,f), q(e,f)}
% Now a version that also reports 'intermediate' edges IX->IY, i.e.,
% tc4(X,Y, IX, IY) holds iff (1) Y is reachable from X via an e-path,
% .. and (2) the edge (IX,IY) is on some such path from X to Y:
% BASE CASE:
tc4(X,Y, X,Y) :- e(X,Y). % (B)
% In the RECURSIVE CASE, we need to keep all intermediate edges we already have:
tc4(X,Y, IX,IY) :- e(X,Z), tc4(Z,Y, IX,IY). % (R1)
% ... plus any "new" edges we get:
tc4(X,Y, X,Z) :- e(X,Z), tc4(Z,Y, _,_). % (R2)
% EXERCISE:
% Do we need both (R1) and (R2) for this to work?
% => Why (or why not)?
# -
# Let's say we want to focus on the edges that are on some path/walk from 'a' to 'f'. We can filter these as follows:
# +
# %%dlv --donot-run --donot-display_input -lci a_f_path_q
% Output relation: report all intermediate edges on some path from a to f:
q(IX,IY) :- tc4(a,f,IX,IY).
% graphviz edge q(HEAD,TAIL) color=red ord=5
# -
# We now run the above two along with the graph definition.
# %dlv --run -l g_e1 tc4 a_f_path_q -exp tc4_e1 --donot-display_input
tc4_e1['pw_rel_dfs'], tc4_e1['rel_schemas'], tc4_e1['pw_objects'] = load_worlds(tc4_e1['asp_soln'], tc4_e1['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(tc4_e1['pw_rel_dfs'], tc4_e1['meta_data']['graphviz'])), layout='neato')
# The red edges are the ones that lie on some path/walk from 'a' to 'f'.
# ##### We can now add more edges to the original graph, and re-run the two queries (tc/2 and tc/4 + q/2).
# +
# %%dlv --donot-run --donot-display_input -lci g_e2
% Include/Exclude some edges:
e(c,d). e(c,e).
# -
# %dlv --run -l g_e1 g_e2 -exp e2 --donot-display_input
e2['pw_rel_dfs'], e2['rel_schemas'], e2['pw_objects'] = load_worlds(e2['asp_soln'], e2['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(e2['pw_rel_dfs'], e2['meta_data']['graphviz'])), layout='neato')
# The graph with the additional edges looks as above.
# Running the tc/2 query on the graph.
# %dlv --run -l g_e1 g_e2 tc -exp tc_e2 --donot-display_input
tc_e2['pw_rel_dfs'], tc_e2['rel_schemas'], tc_e2['pw_objects'] = load_worlds(tc_e2['asp_soln'], tc_e2['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(tc_e2['pw_rel_dfs'], tc_e2['meta_data']['graphviz'])), layout='neato')
# As before, the blue edges are the tc/2 relations derived from the recursive case.
# Running the tc/4 + q/2 query.
# %dlv --run -l g_e1 g_e2 tc4 a_f_path_q -exp tc4_e2 --donot-display_input
tc4_e2['pw_rel_dfs'], tc4_e2['rel_schemas'], tc4_e2['pw_objects'] = load_worlds(tc4_e2['asp_soln'], tc4_e2['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(tc4_e2['pw_rel_dfs'], tc4_e2['meta_data']['graphviz'])), layout='neato')
# Now, all the edges lie on some path/walk from 'a' to 'f'.
# ## Problem at Hand
#
# We can use the above machinery to analyze workflow/dependency graphs.
# #### Simple Di-Graph with two components
#
# Suppose we know that components 'b' and 'c' depend on 'a' and that 'f' depends on 'd' and 'e'.
# +
# %%dlv --donot-run --donot-display_input -lci g_e3
% Now let's create a small dependency graph with two components:
% graphviz graph graph_type=directed
% graphviz edge e(HEAD,TAIL) color=black ord=3
% Given edges:
e(a,b).
e(a,c).
e(d,f).
e(e,f).
# -
# %dlv --run -l g_e3 -exp e3 --donot-display_input
e3['pw_rel_dfs'], e3['rel_schemas'], e3['pw_objects'] = load_worlds(e3['asp_soln'], e3['meta_data'], reasoner='dlv')
draw(mod_g(PWEVisualization.graphviz_from_meta_data(e3['pw_rel_dfs'], e3['meta_data']['graphviz'])), layout='neato')
# However, let's say we want to analyze the "possible" ways that 'f' can depend on 'a'. We use ASP and PWE to generate and analyze these possibilities.
# #### Enlisting potential edge additions
#
# To constrain the problem further, let's say that we can only add dependencies between nodes 'b', 'c', 'd' and 'e'.
# +
# %%dlv --donot-run --donot-display_input -lci gen_new_edges
% All nodes could be derived from e/2 (commented rules below), but here we restrict to four candidates:
% n(X) :- e(X,_).
% n(X) :- e(_,X).
n(b). n(c). n(d). n(e).
% Let's use a GENERATOR: For every pair of edges (X,Y) in a given set of nodes n/1,
% ... let's either have the edge "in" (= member of i/2) or "out" (= in o/2):
i(X,Y) v o(X,Y) :- n(X), n(Y), X!=Y.
% Now let's use the same tc + provenance (= tc with 4 arguments) rules:
% First: let's copy all generated "in" pairs from i/2 to e/2:
e(X,Y) :- i(X,Y).
# -
# Now, given this constraint, we generate the Possible Worlds (PWs).
# %dlv -l g_e3 gen_new_edges --donot-display_input --donot-display_output -exp new_edges
# +
#new_edges['pw_rel_dfs'], new_edges['rel_schemas'], new_edges['pw_objects'] = load_worlds(new_edges['asp_soln'], new_edges['meta_data'], reasoner='dlv')
# -
# The above cell has been commented out because it can take a long time to parse all the solutions. Since we are for now only interested in the number of PWs, we can do that using the simple "hack" below.
# Number of PWs
len(new_edges['asp_soln'].splitlines())
# As we can observe, this already generates 4096 possibilities.
#
# Combinatorially, this makes sense:
#
# 4096 = 2^12, i.e. for each of the 4*3 = 12 ordered pairs of nodes, we have 2 options: add the edge or don't.
#
# 4096 = 4^6, i.e. for each of the (4 choose 2) = 6 unordered pairs of nodes (x,y), we have 4 choices: add xy, add yx, add both, or add neither.
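# A quick sanity check of these counts:

```python
from itertools import combinations

nodes = ['b', 'c', 'd', 'e']
ordered_pairs = [(x, y) for x in nodes for y in nodes if x != y]
print(len(ordered_pairs))                      # → 12
print(2 ** len(ordered_pairs))                 # → 4096
print(4 ** len(list(combinations(nodes, 2))))  # → 4096
```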
# +
# Uncomment the line below to see what the solution looks like.
# ASPRules("\n".join(new_edges['asp_soln'].splitlines()[:2]))
# -
# However, since this is a dependency graph, a dependency between two components can only occur in one direction. So we add this constraint as shown below.
# +
# %%dlv --donot-run --donot-display_input -lci remove_back_edges
% Let's also say that if an edge X,Y is "in", then the reverse edge Y,X cannot be in.
% This is just to show how one can easily add new constraints (reducing the set of PWs / solutions):
:- i(X,Y), i(Y,X).
# -
# Generating the PWs with the new constraint...
# %dlv -l g_e3 remove_back_edges gen_new_edges --donot-display_input --donot-display_output -exp new_edges
# +
#new_edges['pw_rel_dfs'], new_edges['rel_schemas'], new_edges['pw_objects'] = load_worlds(new_edges['asp_soln'], new_edges['meta_data'], reasoner='dlv')
# -
# The above cell has been commented out because it can take a long time to parse all the solutions. Since we are for now only interested in the number of PWs, we can do that using the simple "hack" below.
# Number of PWs
len(new_edges['asp_soln'].splitlines())
# As we can see, we now have only 729 PWs (as opposed to the 4096 PWs earlier). This number also makes sense combinatorially:
#
# 729 = 3^6, i.e. for each of the (4 choose 2) = 6 unordered pairs of nodes (x,y), we have 3 choices: add xy, add yx, or add neither.
# +
# Uncomment the line below to see what the solution looks like.
# ASPRules("\n".join(new_edges['asp_soln'].splitlines()[:2]))
# -
# Now, we can generate the edges on paths/walks from 'a' to 'f' in these PWs.
# %dlv -l g_e3 gen_new_edges remove_back_edges tc4 a_f_path_q --donot-display_input --donot-display_output -exp potential_graphs
potential_graphs['pw_rel_dfs'], potential_graphs['rel_schemas'], potential_graphs['pw_objects'] = load_worlds(potential_graphs['asp_soln'], potential_graphs['meta_data'], reasoner='dlv')
ASPRules("\n".join(potential_graphs['asp_soln'].splitlines()[:2]))
# Preview of the first 2 PWs. Hard to read and interpret.
# ### Interested in unique a --> f path sets
#
# We know these are recorded in the q/2 relation defined earlier. However, we want to find patterns or clusters within them so we can analyze them more easily. We can use a distance metric for this; here, a simple one: the number of elements in the symmetric difference between the two PWs' q/2 relations.
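# A minimal sketch of this metric on two hypothetical q/2 edge sets (Python's `^` operator is the symmetric difference):

```python
q_pw1 = {('a', 'b'), ('c', 'd'), ('d', 'f')}
q_pw2 = {('a', 'b'), ('d', 'c'), ('d', 'f')}

dist = len(q_pw1 ^ q_pw2)  # edges in exactly one of the two sets
print(dist)  # → 2
```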
# For a taste, this is what the symmetric difference looks like for PWs 1 and 2.
PWEQuery.difference_both_ways(dfs=potential_graphs['pw_rel_dfs'],
rl_name='q_2', pw_id_1=1, pw_id_2=2, do_print=False)
# As we can see, one of them has the edge d-->c while the other has the edge c-->d.
#
# Similarly for PWs 1 and 100:
PWEQuery.difference_both_ways(dfs=potential_graphs['pw_rel_dfs'], rl_name='q_2', pw_id_1=1, pw_id_2=100, do_print=False)
# We can construct a distance matrix of the PWs using this metric.
# +
# n = len(potential_graphs['pw_objects'])
# dist_matrix = np.zeros((n,n))
# for i in range(n):
# for j in range(i+1, n):
# dist_matrix[i][j] = dist_matrix[j][i] = len(PWEQuery.difference_both_ways(dfs=potential_graphs['pw_rel_dfs'],
# rl_name='q_2',
# do_print=False,
# pw_id_1=i+1,
# pw_id_2=j+1))
# dist_matrix
# -
# This naive (but well-defined) way takes too long, so we use an equivalent version below, optimized for this particular problem.
# +
def get_pw_q_rels(q_2_df, pw_id):
pw_q_2_df = q_2_df[q_2_df['pw'] == pw_id]
pw_q_rels = []
for i, row in pw_q_2_df.iterrows():
pw_q_rels.append((row['x1'], row['x2']))
return set(pw_q_rels)
def compare_q_sets(pw_1_q_set, pw_2_q_set):
return len(pw_1_q_set-pw_2_q_set) + len(pw_2_q_set-pw_1_q_set)
def get_q_set_dist_matrix(exp):
n = len(exp['pw_objects'])
dist_matrix = np.zeros((n,n))
q_2_df = exp['pw_rel_dfs']['q_2']
q_sets = list(map(lambda pw_id: get_pw_q_rels(q_2_df, pw_id), range(1, n+1)))
for i in range(n):
for j in range(i+1, n):
dist_matrix[i][j] = dist_matrix[j][i] = compare_q_sets(q_sets[i], q_sets[j])
return dist_matrix
# -
dist_matrix = get_q_set_dist_matrix(potential_graphs)
dist_matrix
# Now we try to make sense of this using several clustering techniques at our disposal.
_ = PWEVisualization.cluster_map_viz(dist_matrix)
# There seems to be some structure to these PWs. There appears to be a big group of equivalent PWs, as evidenced by the large black square in the middle. There also seem to be many smaller sets of equivalent solutions, as evidenced by the small black squares in the bottom right. The rest appear to be groups of their own.
_ = PWEVisualization.mds_sklearn(dist_matrix)
# Hard to make much sense of the Multi-Dimensional Scaling (MDS) output. Perhaps it is not particularly suited to analyze this problem.
# All the visualizations so far are very pretty, but hard to interpret.
_ = PWEVisualization.dbscan_clustering(dist_matrix)
# This gives us something interesting. It says there are 266 unique clusters or groups. Let's see if we can't find these.
# These are probably just the unique sets of q/2 relations themselves. Let's see if that's true.
def get_q_2_groups(exp):
groups = {}
n = len(exp['pw_objects'])
q_2_df = exp['pw_rel_dfs']['q_2']
for pw_id in range(1, n+1):
pw_q_2_df = q_2_df[q_2_df['pw'] == pw_id]
pw_q_rels = []
for i, row in pw_q_2_df.iterrows():
pw_q_rels.append((row['x1'], row['x2']))
pw_q_rels = frozenset(pw_q_rels)
if pw_q_rels not in groups:
groups[pw_q_rels] = []
groups[pw_q_rels].append(pw_id)
return groups
groups = get_q_2_groups(potential_graphs)
len(groups.keys())
# Voila! Now, let's visualize these.
vizs = get_vizs(potential_graphs, groups)
# +
# Uncomment the line below to re-write the visualizations.
# save_vizs(vizs, 'Vizs')
# -
# Some notable ones are:
#
# Group 253: 144 ways to add 0 or more edges such that there are no paths/walks from 'a' to 'f', and hence no dependency.
#
# Group 265: Looking at these makes two things clear. We are currently capturing 'walks' and not 'paths'. Secondly, and more importantly, there are cycles within the generated PWs. Let's remove the PWs with cycles.
#
# Note: The group numbers above might change on re-execution.
# #### Removing Cycles
#
# Removing cycles is a simple operation in ASP, given that we already have the tc/4 query.
# +
# %%dlv --donot-run --donot-display_input -lci remove_cycles
% nn(A) :- e(A,_).
% nn(A) :- e(_,A).
nn(a). nn(b). nn(c). nn(d). nn(e). nn(f).
% It must not be the case that A can reach B AND B can reach A, else there'd be a cycle.
:- tc4(A,B,_,_), tc4(B,A,_,_), nn(A), nn(B).
# -
# Now, we re-run the experiment with this additional constraint.
# %dlv -l g_e3 gen_new_edges remove_back_edges remove_cycles tc4 a_f_path_q --donot-display_input --donot-display_output -exp potential_graphs_no_cycles
len(potential_graphs_no_cycles['asp_soln'].splitlines())
potential_graphs_no_cycles['pw_rel_dfs'], potential_graphs_no_cycles['rel_schemas'], potential_graphs_no_cycles['pw_objects'] = load_worlds(potential_graphs_no_cycles['asp_soln'], potential_graphs_no_cycles['meta_data'], reasoner='dlv')
# As we can see, this time around we only have 543 PWs. Now let's see if we can group them as we did earlier.
groups_no_cycle = get_q_2_groups(potential_graphs_no_cycles)
len(groups_no_cycle.keys())
# We can! Only 136 unique ways, a lot fewer than the 4096 PWs we started off with.
# For a sanity check, we can confirm this using the DBScan Algorithm using the same distance metric as earlier.
dist_matrix = get_q_set_dist_matrix(potential_graphs_no_cycles)
dist_matrix
_ = PWEVisualization.dbscan_clustering(dist_matrix)
# It checks out!
# Also as a sanity check, we want to ensure that the new groups (without cycles) are a subset of the groups with cycles.
set(groups.keys()) >= set(groups_no_cycle.keys())
# This checks out too.
# Now we generate visualizations of these 136 groups.
vizs_no_cycles = get_vizs(potential_graphs_no_cycles, groups_no_cycle)
# +
# Uncomment the line below to re-write the visualizations.
# save_vizs(vizs_no_cycles, 'Vizs_no_cycles')
# -
| TC-Prov-Gen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mp
# language: python
# name: mp
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # The Materials API
#
# ### Presented by: <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# In this lesson, we cover:
#
# * The Materials Project API (MAPI) and its documentation, the [mapidoc](https://github.com/materialsproject/mapidoc).
# * Getting your Materials Project API key.
# * Using the MPRester to access the MP database.
# * A hands-on example of using the API and pymatgen to screen the MP database for interesting materials.
# + slideshow={"slide_type": "skip"}
# This suppresses warnings.
import warnings
warnings.filterwarnings('ignore')
# This is a helper function to shorten lists during the
# live presentation of this lesson for better readability.
# You can ignore it.
def shortlist(long_list, n=5):
    print("First {} of {} items:".format(min(n, len(long_list)), len(long_list)))
for item in long_list[0:n]:
print(item)
# + [markdown] slideshow={"slide_type": "slide"}
# ***
# ## Section 0: Getting an API key
#
# The first step to getting started with the API is to get an API key. We do this on the Materials Project website (https://materialsproject.org/janrain/loginpage/?next=/dashboard).
#
# 1. Click the `Generate API key` button.
# 2. Copy your shiny new key.
# 3. Paste your key in the line below and run the cell.
# + slideshow={"slide_type": "fragment"}
# !pmg config --add PMG_MAPI_KEY <your_api_key>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Section 1: The MAPIDOC
#
# The [mapidoc](https://github.com/materialsproject/mapidoc) is a key source of information regarding the Materials Project API. It should be the first thing you consult whenever you are having trouble with the API. Let's take a look!
# + [markdown] slideshow={"slide_type": "slide"}
# ***
# ## Section 2: Basic Queries In the Web Browser
#
# To request data from the Materials Project, you will need to make requests to our API. To do this, you could simply make a GET request through your web browser, providing your API key as an argument.
#
# For example,
#
# `https://www.materialsproject.org/rest/v2/materials/mp-1234/vasp?API_KEY=<your api key>`
#
# + [markdown] slideshow={"slide_type": "subslide"}
# For example,
#
# `https://www.materialsproject.org/rest/v2/materials/mp-1234/vasp?API_KEY=<your api key>`
#
# returns the following JSON document:
# ```
# {"response": [{"energy": -26.94573468, "energy_per_atom": -4.49095578, "volume": 116.92375473740876, "formation_energy_per_atom": -0.4835973866666663, "nsites": 6, "unit_cell_formula": {"Al": 4.0, "Lu": 2.0}, "pretty_formula": "LuAl2", "is_hubbard": false, "elements": ["Al", "Lu"], "nelements": 2, "e_above_hull": 0, "hubbards": {}, "is_compatible": true, "spacegroup": {"source": "spglib", "symbol": "Fd-3m", "number": 227, "point_group": "m-3m", "crystal_system": "cubic", "hall": "F 4d 2 3 -1d"}, "task_ids": ["mp-1234", "mp-925833", "mp-940234", "mp-940654"], "band_gap": 0.0, "density": 6.502482433523648, "icsd_id": null, "icsd_ids": [608375, 57958, 608376, 608372, 608371, 608370], "cif": "# generated using pymatgen\ndata_LuAl2\n_symmetry_space_group_name_H-M 'P 1'\n_cell_length_a 5.48873905\n_cell_length_b 5.48873905\n_cell_length_c 5.48873905\n_cell_angle_alpha 60.00000005\n_cell_angle_beta 60.00000003\n_cell_angle_gamma 60.00000007\n_symmetry_Int_Tables_number 1\n_chemical_formula_structural LuAl2\n_chemical_formula_sum 'Lu2 Al4'\n_cell_volume 116.92375474\n_cell_formula_units_Z 2\nloop_\n _symmetry_equiv_pos_site_id\n _symmetry_equiv_pos_as_xyz\n 1 'x, y, z'\nloop_\n _atom_site_type_symbol\n _atom_site_label\n _atom_site_symmetry_multiplicity\n _atom_site_fract_x\n _atom_site_fract_y\n _atom_site_fract_z\n _atom_site_occupancy\n Al Al1 1 0.500000 0.500000 0.500000 1\n Al Al2 1 0.500000 0.500000 0.000000 1\n Al Al3 1 0.000000 0.500000 0.500000 1\n Al Al4 1 0.500000 0.000000 0.500000 1\n Lu Lu5 1 0.875000 0.875000 0.875000 1\n Lu Lu6 1 0.125000 0.125000 0.125000 1\n", "total_magnetization": 0.0012519, "material_id": "mp-1234", "oxide_type": "None", "tags": ["High pressure experimental phase", "Aluminium lutetium (2/1)"], "elasticity": null, "full_formula": "Lu2Al4"}], "valid_response": true, "created_at": "2018-08-08T18:52:53.042666", "version": {"db": "3.0.0", "pymatgen": "2018.7.23", "rest": "2.0"}, "copyright": "Materials Project, 2018"}
# ```
# + [markdown] slideshow={"slide_type": "skip"}
# For obvious reasons, typing these kinds of urls into your web browser is not an ideal way to request MP data. Instead, we should try to access the API programmatically with Python. Let's make the same request that we did above using Python's *requests* library.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Making Requests With Python
# + slideshow={"slide_type": "fragment"}
import requests
response = requests.get("https://www.materialsproject.org/rest/v2/materials/mp-1234/vasp",
                        {"API_KEY": "<KEY>"})
print(response.text)
# + [markdown] slideshow={"slide_type": "slide"}
# ***
# ## Section 3: The MPRester
#
# In this section we will:
#
# * Open the pymatgen.MPRester web documentation.
# * Create our first instance of an MPRester object.
# * Get our feet wet with calling a few of the MPRester's "specialty" methods.
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Background and Documentation
#
# * Code connects to the MP Database through REST requests.
# * Pymatgen's MPRester class is helpful for accessing our API in python.
# * The [documentation](http://pymatgen.org/pymatgen.ext.matproj.html#pymatgen.ext.matproj.MPRester) for the MPRester is *very* helpful. Let's take a look!
# + [markdown] slideshow={"slide_type": "skip"}
# #### Background and Documentation
#
# REST is a widely used type of standardization that allows different computer systems to work together. In RESTful systems, information is organized into resources, each of which is uniquely identified via a uniform resource identifier (URI). Since MAPI is a RESTful system, users can interact with the MP database regardless of their computer system or programming language (as long as it supports basic HTTP requests).
#
# To facilitate researchers in using our API, we implemented a convenient wrapper for it in the Python Materials Genomics (pymatgen) library called the `MPRester`. You can find the relevant pymatgen documentation for it [here](http://pymatgen.org/pymatgen.ext.matproj.html?highlight=mprester#pymatgen.ext.matproj.MPRester).
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Starting up an instance of the MPRester
#
# We'll import the MPRester and create an instance of it.
#
# *Note: You may need to use your API key as an input argument if it has not been pre-configured.*
# + slideshow={"slide_type": "fragment"}
from pymatgen import MPRester
mpr = MPRester()
print(mpr.supported_properties)
# + [markdown] slideshow={"slide_type": "slide"}
# However, we recommend that you use the `with` context manager to ensure that sessions are properly closed after usage:
# + slideshow={"slide_type": "fragment"}
with MPRester() as mpr:
print(mpr.supported_properties)
# + [markdown] slideshow={"slide_type": "slide"}
# ### MPRester Methods:
#
# The MPRester has many methods that you might want to use in your research. For example, there is a method to get the bandstructure for a material, `get_bandstructure_by_material_id`.
#
# Let's use this method and the following bandstructure plotting function to get and plot a bandstructure for mp-1234:
# + slideshow={"slide_type": "fragment"}
### Don't edit this code ####
from pymatgen.electronic_structure.plotter import BSPlotter
# Helpful function for plotting a bandstructure.
def plot_bandstructure(bs):
BSPlotter(bs).get_plot().show()
#############################
# + slideshow={"slide_type": "slide"}
# Exercise: Use the MPRester's get_bandstructure_by_material_id method to
# get a bandstructure from the MP Database and plot it using the
# plot_bandstructure function defined above.
with MPRester() as mpr:
bs = mpr.get_bandstructure_by_material_id("mp-1234")
plot_bandstructure(bs)
# + [markdown] slideshow={"slide_type": "slide"}
# There's also a method to get MPIDs for a formula or chemical system called `get_materials_ids`.
# + slideshow={"slide_type": "fragment"}
with MPRester() as mpr:
# You can pass in a formula to get_materials_ids
shortlist(mpr.get_materials_ids("LiFePO4"))
# Or you can pass in a "chemsys" such as "Li-Fe-P-O"
shortlist(mpr.get_materials_ids("Li-Fe-P-O"))
# + [markdown] slideshow={"slide_type": "slide"}
# ### Using the API to achieve research goals:
#
# Imagine you want to get the **structure** for the multiferroic material BiFeO3 (**`mp-23501`**) and suggest some **substrates** for growing it.
#
# We can use methods of the MPRester to get this information from the Materials Project API.
#
# Hints:
#
# * `MPRester.get_structure_by_material_id`
# * `MPRester.get_substrates`
# + slideshow={"slide_type": "subslide"}
# Get the structure for BiFeO3 (mp-23501) and
# suggest some substrates for growing it.
with MPRester() as mpr:
structure = mpr.get_structure_by_material_id("mp-23501")
substrates = mpr.get_substrates("mp-23501")
print(structure)
print([s["sub_form"] for s in substrates])
# + [markdown] slideshow={"slide_type": "slide"}
# At this point, you should be comfortable with:
#
# * Finding documentation on the MPRester.
# * Creating an instance of the MPRester.
# * Using methods of the MPRester.
# -
# ***
# ## Section 4: Using the MPRester.query method.
#
# The MPRester also has a very powerful method called `query`, which allows us to perform sophisticated searches on the database. The `query` method uses MongoDB's [query syntax](https://docs.mongodb.com/manual/tutorial/query-documents/). In this syntax, query submissions have two parts: a set of criteria that you want to base the search on (in the form of a python dict), and a set of properties that you want the database to return (in the form of either a list or dict).
#
# You will probably find yourself using the MPRester's query method frequently.
#
# The general structure of a MPRester query is:
#
# mpr.query(criteria, properties)
#
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# The general structure of a MPRester query is:
#
# mpr.query(criteria, properties)
#
# * `criteria` is usually a string or a dict.
# * `properties` is always a list of strings
# + [markdown] slideshow={"slide_type": "slide"}
# Let's try out some queries to learn how it works!
#
# First, we'll query for $SiO_2$ compounds by chemical formula through 'pretty_formula'.
# + slideshow={"slide_type": "fragment"}
with MPRester() as mpr:
results = mpr.query({'pretty_formula':"SiO2"}, properties=['material_id', 'pretty_formula'])
print(len(results))
# + [markdown] slideshow={"slide_type": "slide"}
# If we investigate the object that the query method returns, we find that it is a list of dicts. Furthermore, we find that the keys of the dictionaries are the very same keywords that we passed to the query method as the `properties` argument.
# + slideshow={"slide_type": "fragment"}
print('Results are returned as a {} of {}.\n'.format(type(results), type(results[0])))
for r in results[0:5]:
print(r)
# + [markdown] slideshow={"slide_type": "slide"}
# In fact, if you are just looking for materials based on formula/composition/stoichiometry, there is an easier way to use the `query` method: just pass in a string as the criteria!
#
# You can even use **wildcard** characters in your searches. For example, if we want to find all $ABO_3$ compounds in the Materials Project:
# + slideshow={"slide_type": "fragment"}
with MPRester() as mpr:
results = mpr.query('**O3', properties=["material_id", "pretty_formula"])
shortlist(results)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Putting it into practice:
#
# There are 296 variants of $SiO_2$ in the MP database, but how many $Si_xO_y$ compounds are there in the Materials Project?
#
# Hint:
#
# * Query using a `chemsys` string instead of a formula.
# + slideshow={"slide_type": "fragment"}
with MPRester() as mpr:
print(len(mpr.query("Si-O", ["material_id"])))
# + [markdown] slideshow={"slide_type": "slide"}
# # EXERCISE 1
# + [markdown] slideshow={"slide_type": "slide"}
# ## MongoDB Operators
#
# Above, we specified the chemical formula SiO$_2$ for our query. This is an example of the "specify" operator. However, MongoDB's syntax also includes other [query operators](https://docs.mongodb.com/manual/reference/operator/query/#query-selectors), allowing us to build complex conditionals into our queries. These all start with the "$" character.
#
#
# Some important MongoDB operators you should be familiar with are:
#
# - \$in (in)
# - \$nin (not in)
# - \$gt (greater than)
# - \$gte (greater than or equal to)
# - \$lt (less than)
# - \$lte (less than or equal to)
# - \$not (is not)
#
# We use these more advanced operators as follows:
#
# `{"field_name": {"$op": value}}`
#
# For example, "entries with e_above_hull that is less than 0.25 eV" would be:
#
# `{"e_above_hull": {"$lt": 0.25}}`
#
# + [markdown] slideshow={"slide_type": "slide"}
#
# A paper by McEnany et al. proposes a novel ammonia synthesis process based on the electrochemical cycling of lithium ([link](http://pubs.rsc.org/en/content/articlelanding/2017/ee/c7ee01126a#!divAbstract)). As an exercise, let's use some of MongoDB's operators and ask the database for nitrides of alkali metals.
# + slideshow={"slide_type": "fragment"}
# Find all nitrides of alkali metals
alkali_metals = ['Li', 'Na', 'K', 'Rb', 'Cs']
criteria={"elements":{"$in":alkali_metals, "$all": ["N"]}, "nelements":2}
properties=['material_id', 'pretty_formula']
shortlist(mpr.query(criteria, properties))
# + slideshow={"slide_type": "subslide"}
#Bonus short way to do this with wildcards
shortlist(mpr.query('{Li,Na,K,Rb,Cs}-N', ['material_id', 'pretty_formula']))
# + [markdown] slideshow={"slide_type": "slide"}
# We can also perform the same query, but ask the database to only return compounds with energies above the hull less than 10 meV/atom by using the "less than" operator, "`$lt`". (The energy above the convex hull gives us a sense of how stable a compound is relative to other compounds with the same composition.)
# + slideshow={"slide_type": "fragment"}
criteria={"elements":{"$in":alkali_metals, "$all":["N"]}, "nelements":2,
'e_above_hull':{"$lt":0.010}}
properties=['material_id', 'pretty_formula']
mpr.query(criteria, properties)
# + [markdown] slideshow={"slide_type": "slide"}
# # EXERCISE 2
# -
# In this lesson, we have covered:
#
# * The Materials Project API (MAPI) and its documentation, the [mapidoc](https://github.com/materialsproject/mapidoc).
# * Getting your Materials Project API key.
# * Using the MPRester to access the MP database.
# * Hands-on examples of using the API and pymatgen to screen the MP database for interesting materials.
| workshop/lessons/04_materials_api/kittiphong.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import os
import pickle
import gzip
from tf.app import use
from tf.fabric import Fabric
# -
ghBase = os.path.expanduser("~/github")
org = "etcbc"
repo = "dss"
subdir = "parallels"
mainpath = f"{org}/{repo}/tf"
path = f"{org}/{repo}/{subdir}/tf"
location = f"{ghBase}/{path}"
mainlocation = f"{ghBase}/{mainpath}"
version = "0.6"
module = version
tempdir = f"{ghBase}/{org}/{repo}/_temp"
TF = Fabric(locations=mainlocation, modules=module)
api = TF.load("lex type")
docs = api.makeAvailableIn(globals())
# # Parallels
#
# We make edges between similar lines.
#
# When are lines similar?
#
# If a certain distance metric is above a certain threshold.
#
# We choose this metric:
#
# * we reduce a line to the set of lexemes in it.
# * the similarity between two lines is the length of the intersection divided by the length of the union of their sets times 100.
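# As a toy illustration of this metric (made-up lexeme strings, not actual DSS data): the score is 100 * |intersection| / |union|, rounded to an integer.

```python
# Two hypothetical lexeme sets for two lines
lex1 = {"XJH", "BR>", "MJM", ">RY"}
lex2 = {"XJH", "BR>", "MJM", "CMJM"}

# 3 shared lexemes out of 5 distinct ones -> 60
similarity = int(round(100 * len(lex1 & lex2) / len(lex1 | lex2)))
print(similarity)  # 60
```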
# # Preparation
#
# We pre-compute all sets for lines.
#
# But because not all lines are filled with definite material, we exclude lines with fewer than 5 consonants.
# +
CONS = "cons"
valid = set()
allLines = F.otype.s("line")
TF.indent(reset=True)
for ln in F.otype.s("line"):
if ln in valid:
continue
if sum(1 for s in L.d(ln, otype="sign") if F.type.v(s) == CONS) >= 5:
valid.add(ln)
TF.info(f"{len(valid)} contentful lines out of {len(allLines)}")
# -
def makeSet(ln):
lineSet = set()
for s in L.d(ln, otype="word"):
r = F.lex.v(s)
if r:
lineSet.add(r)
return lineSet
# +
lines = {}
TF.indent(reset=True)
for ln in valid:
lineSet = makeSet(ln)
if lineSet:
lines[ln] = lineSet
nLines = len(lines)
TF.info(f"{nLines} lines")
# -
# # Measure
def sim(lSet, mSet):
return int(round(100 * len(lSet & mSet) / len(lSet | mSet)))
# # Compute all similarities
#
# We are going to perform more than half a billion comparisons, each of which is more than an elementary operation.
#
# Let's measure time.
# +
THRESHOLD = 60
def computeSim(limit=None):
similarity = {}
lineNodes = sorted(lines.keys())
nLines = len(lineNodes)
nComparisons = nLines * (nLines - 1) // 2
print(f"{nComparisons} comparisons to make")
chunkSize = nComparisons // 1000
co = 0
b = 0
si = 0
p = 0
TF.indent(reset=True)
stop = False
for i in range(nLines):
nodeI = lineNodes[i]
lineI = lines[nodeI]
for j in range(i + 1, nLines):
nodeJ = lineNodes[j]
lineJ = lines[nodeJ]
s = sim(lineI, lineJ)
co += 1
b += 1
if b == chunkSize:
p += 1
                TF.info(f"{p:>3}‰ - {co:>12} comparisons and {si:>10} similarities")
b = 0
if limit is not None and p >= limit:
stop = True
break
if s < THRESHOLD:
continue
            similarity[(nodeI, nodeJ)] = s
si += 1
if stop:
break
    TF.info(f"{p:>3}‰ - {co:>12} comparisons and {si:>10} similarities")
return similarity
# -
# We are going to run it for a few ‰ first and then do some checks.
similarity = computeSim(limit=3)
# We check the sanity of the results.
print(min(similarity.values()))
print(max(similarity.values()))
eq = [x for x in similarity.items() if x[1] >= 100]
neq = [x for x in similarity.items() if x[1] <= 70]
print(len(eq))
print(len(neq))
print(eq[0])
print(neq[0])
print(T.text(eq[0][0][0]))
print(T.text(eq[0][0][1]))
# Looks good.
#
# Now the whole computation.
#
# But if we have done this before, and nothing has changed, we load previous results from disk.
#
# If we do not find previous results, we compute them and save the results to disk.
# +
PARA_DIR = f"{tempdir}/parallels"
def writeResults(data, location, name):
if not os.path.exists(location):
os.makedirs(location, exist_ok=True)
path = f"{location}/{name}"
with gzip.open(path, "wb") as f:
pickle.dump(data, f)
TF.info(f"Data written to {path}")
def readResults(location, name):
TF.indent(reset=True)
path = f"{location}/{name}"
if not os.path.exists(path):
print(f"File not found: {path}")
return None
with gzip.open(path, "rb") as f:
data = pickle.load(f)
TF.info(f"Data read from {path}")
return data
# -
similarity = readResults(PARA_DIR, f"sim-{version}.zip")
if not similarity:
similarity = computeSim()
writeResults(similarity, PARA_DIR, f"sim-{version}.zip")
len(similarity)
# So, just over 50,000 pairs of similar lines.
# # Add parallels to the TF dataset
#
# We can add this information to the DSS dataset as an *edge feature*.
#
# An edge feature links two nodes and may annotate that link with a value.
#
# For parallels, we link each line to each of its parallel lines and we annotate that link with the similarity between
# the two lines. The similarity is a percentage, and we round it to integer values.
#
# If *n1* is similar to *n2*, then *n2* is similar to *n1*.
# In order to save space, we only add such links once.
#
# We can then use
# [`E.sim.b(node)`](https://annotation.github.io/text-fabric/Api/Features/#edge-features)
# to find all nodes that are parallel to node.
#
metaData = {
"": {
"acronym": "dss",
"description": "parallel lines in the DSS (computed)",
"createdBy": "<NAME>",
"createdDate": "2019-05-09",
"sourceCreatedDate": "2015",
"sourceCreatedBy": "<NAME>, Jr., <NAME>, and <NAME>",
"convertedBy": "<NAME>, <NAME> and <NAME>",
"source": "<NAME>'s data files, personal communication",
"license": "Creative Commons Attribution-NonCommercial 4.0 International License",
"licenseUrl": "http://creativecommons.org/licenses/by-nc/4.0/",
"sourceDescription": "Dead Sea Scrolls: biblical and non-biblical scrolls",
},
"sim": {
"valueType": "int",
"edgeValues": True,
"description": "similarity between lines, as a percentage of the common material wrt the combined material",
},
}
# +
simData = {}
for ((f, t), d) in similarity.items():
simData.setdefault(f, {})[t] = d
# -
TF.save(
edgeFeatures=dict(sim=simData), metaData=metaData, location=location, module=module
)
# # Turn the parallels feature into a module
#
# Here we show how to turn the new feature `sim` into a module, so that users can easily load it in a Jupyter notebook or in the TF browser.
# + language="bash"
# text-fabric-zip 'etcbc/dss/parallels/tf'
# -
# I have added this file to a new release of the DSS Github repo.
# # Use the parallels module
#
# We load the DSS corpus again, but now with the parallels module.
A = use("dss:clone", checkout="clone", hoist=globals())
# Lo and behold: you see the parallels module listed with one feature: `sim`. It is in *italics*, which indicates
# it is an edge feature.
#
# We just do a quick check here and in another notebook we study parallels a bit more, using the feature `sim`.
#
# We count how many similar pairs there are, and how many 100% similar pairs there are.
query = """
line
-sim> line
"""
results = A.search(query)
refNode = results[20000][0]
refNode
query = """
line
-sim=100> line
"""
results = A.search(query)
# Let's show a few of the pairs that are 100 percent similar.
A.table(results, start=1, end=10, withNodes=True)
# There is also a lower level way to work with edge features.
#
# We can list all edges going out from a reference node.
# What we see is a tuple of pairs: the target node and the similarity between the reference node and that target node.
E.sim.f(refNode)
# Likewise, we can observe the nodes that target the reference node:
E.sim.t(refNode)
# Both sets of nodes are similar to the reference node and it is inconvenient to use both `.f()` and `.t()` to get the similar lines.
#
# But there is another way:
E.sim.b(refNode)
# Let's make sure that `.b()` gives the combination of `.f()` and `.t()`.
# +
f = {x[0] for x in E.sim.f(refNode)}
b = {x[0] for x in E.sim.b(refNode)}
t = {x[0] for x in E.sim.t(refNode)}
# Are f and t disjoint?
print(f"the intersection of f and t is {f & t}")
# Is b the union of f and t?
print(f"t | f = b ? {f | t == b}")
| programs/parallels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
# +
transfer_file_location = "s3://prm-gp2gp-data-sandbox-dev/transfers-sample-3/"
transfer_files = [
"1-2021-transfers.parquet",
"10-2020-transfers.parquet",
"11-2020-transfers.parquet",
"12-2020-transfers.parquet",
"2-2021-transfers.parquet",
"9-2020-transfers.parquet",
]
transfer_input_files = [transfer_file_location + f for f in transfer_files]
transfers = pd.concat((
pd.read_parquet(f)
for f in transfer_input_files
))
asid_lookup_file = "s3://prm-gp2gp-data-sandbox-dev/asid-lookup/asidLookup-Mar-2021.csv.gz"
asid_lookup = pd.read_csv(asid_lookup_file)
# +
supplier_renaming = {
"EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)":"EMIS",
"IN PRACTICE SYSTEMS LTD":"Vision",
"MICROTEST LTD":"Microtest",
"THE PHOENIX PARTNERSHIP":"TPP",
None: "Unknown"
}
lookup = asid_lookup[["ASID", "MName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid'}, axis=1)
transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
# -
def pivot_by_status(df, index_column):
    status_counts = df.pivot_table(index=index_column, columns="status", values="conversation_id", aggfunc=len, fill_value=0)
    return (status_counts.div(status_counts.sum(axis=1), axis=0) * 100).round(decimals=2)
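# To see what pivot_by_status computes, here is a minimal sketch on made-up transfer rows (supplier names and statuses are illustrative only): per-status counts are normalised into row percentages.

```python
import pandas as pd

# Hypothetical mini transfer log
df = pd.DataFrame({
    "supplier": ["EMIS", "EMIS", "EMIS", "TPP"],
    "status": ["INTEGRATED", "INTEGRATED", "FAILED", "INTEGRATED"],
    "conversation_id": ["a", "b", "c", "d"],
})

counts = df.pivot_table(index="supplier", columns="status",
                        values="conversation_id", aggfunc=len, fill_value=0)
pct = (counts.div(counts.sum(axis=1), axis=0) * 100).round(decimals=2)
print(pct)  # each row sums to 100: EMIS 33.33/66.67, TPP 0.0/100.0
```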
def plot_status_breakdown(statuses):
columns = ["INTEGRATED", "PENDING", "PENDING_WITH_ERROR", "FAILED"]
colours = ['green', 'orange', 'purple', 'red']
statuses[columns].plot.barh(stacked=True, figsize=(13, 8), color=colours)
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
status_by_requester = pivot_by_status(transfers, "requesting_supplier")
plot_status_breakdown(status_by_requester)
status_by_requester
status_by_sender = pivot_by_status(transfers, "sending_supplier")
plot_status_breakdown(status_by_sender)
status_by_sender
status_by_pathway = pivot_by_status(transfers, ["requesting_supplier", "sending_supplier"])
plot_status_breakdown(status_by_pathway)
status_by_pathway
| notebooks/15-Adhoc-pending-breakdown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Notebook for testing the PyTorch setup
#
# This notebook is for testing the [PyTorch](http://pytorch.org/) setup for the ML hands-on. Below is a set of required imports.
#
# Run the cell, and no error messages should appear.
#
# Some warnings may appear; this should be fine.
# + deletable=true editable=true
# %matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
print('Using PyTorch version:', torch.__version__, 'CUDA:', torch.cuda.is_available())
# + deletable=true editable=true
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D3_MultiLayerPerceptrons/student/W1D3_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Tutorial 1: Biological vs. Artificial Neural Networks
#
# **Week 1, Day 3: Multi Layer Perceptrons**
#
# **By Neuromatch Academy**
#
# __Content creators:__ <NAME>, <NAME>
#
# __Content reviewers:__ <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
#
# __Content editors:__ <NAME>, <NAME>, <NAME>
#
# __Production editors:__ <NAME>, <NAME>, <NAME>
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# ---
# # Tutorial objectives
#
# In this tutorial, we will explore Multi-Layer Perceptrons (MLPs). MLPs are arguably one of the most tractable models (due to their flexibility) that we can use to study deep learning fundamentals. Here we will learn why they are:
#
# * similar to biological networks
# * good at function approximation
# * implemented the way they are in PyTorch
# + cellView="form"
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/4ye56/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# -
# ---
# # Setup
#
# This is a GPU-free notebook!
# + cellView="form"
# @title Install dependencies
# !pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# generate airtable form
atform = AirtableForm('appn7VdPRseSoMXEG','W1D3_T1','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')
# +
# Imports
import random
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
from tqdm.auto import tqdm
from IPython.display import display
from torch.utils.data import DataLoader, TensorDataset
# + cellView="form"
# @title Figure settings
import ipywidgets as widgets # interactive display
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# + cellView="form"
# @title Plotting functions
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.axis(False)
plt.show()
def plot_function_approximation(x, relu_acts, y_hat):
fig, axes = plt.subplots(2, 1)
# Plot ReLU Activations
axes[0].plot(x, relu_acts.T);
axes[0].set(xlabel='x',
ylabel='Activation',
title='ReLU Activations - Basis Functions')
labels = [f"ReLU {i + 1}" for i in range(relu_acts.shape[0])]
axes[0].legend(labels, ncol = 2)
# Plot function approximation
axes[1].plot(x, torch.sin(x), label='truth')
axes[1].plot(x, y_hat, label='estimated')
axes[1].legend()
axes[1].set(xlabel='x',
ylabel='y(x)',
title='Function Approximation')
plt.tight_layout()
plt.show()
# + cellView="form"
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` sets the seed
# In DL it's critical to set the random seed so that students have a
# baseline to compare their results to the expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call the `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
# + cellView="form"
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules are used.
# Inform the user whether the notebook uses a GPU or the CPU.
def set_device():
  device = "cuda" if torch.cuda.is_available() else "cpu"
  if device != "cuda":
    print("GPU is not enabled in this notebook. \n"
          "If you want to enable it, in the menu go to "
          "`Runtime` -> `Change runtime type` \n"
          "and select `GPU` from the `Hardware accelerator` dropdown menu")
  else:
    print("GPU is enabled in this notebook. \n"
          "If you want to disable it, in the menu go to "
          "`Runtime` -> `Change runtime type` \n"
          "and select `None` from the `Hardware accelerator` dropdown menu")
  return device
# -
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
# ---
# # Section 0: Introduction to MLPs
# + cellView="form"
# @title Video 0: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1E3411r7TL", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Gh0KYl7ViAc", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 0: Introduction')
display(out)
# -
# ---
# # Section 1: The Need for MLPs
#
# *Time estimate: ~35 mins*
# + cellView="form"
# @title Video 1: Universal Approximation Theorem
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1SP4y147Uv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"tg8HHKo1aH4", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 1: Universal Approximation Theorem')
display(out)
# -
# ## Coding Exercise 1: Function approximation with ReLU
# We learned that MLPs with a single hidden layer are enough to approximate any smooth function! Now let's manually fit a sine function using the ReLU activation.
#
# We will approximate the sine function using a linear combination (a weighted sum) of ReLUs with slope 1. We need to determine the bias terms (which determine where each ReLU's inflection point from 0 to linear occurs) and how to weight each ReLU. The idea is to set the weights iteratively so that the slope changes in the new sample's direction.
#
# First, we generate our "training data" from a sine function using the `torch.sin` function.
#
# ```python
# >>> import torch
# >>> torch.manual_seed(2021)
# <torch._C.Generator object at 0x7f8734c83830>
# >>> a = torch.randn(5)
# >>> print(a)
# tensor([ 2.2871, 0.6413, -0.8615, -0.3649, -0.6931])
# >>> torch.sin(a)
# tensor([ 0.7542, 0.5983, -0.7588, -0.3569, -0.6389])
# ```
#
# These are the points we will use to learn how to approximate the function. We have 10 training data points so we will have 9 ReLUs (we don't need a ReLU for the last data point as we don't have anything to the right of it to model).
#
# We first need to figure out the bias term for each ReLU and compute the activation of each ReLU where:
#
# \begin{equation}
# y(x) = \text{max}(0,x+b)
# \end{equation}
#
# We then need to figure out the correct weights on each ReLU so the linear combination approximates the desired function.
# +
def approximate_function(x_train, y_train):
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Complete approximate_function!")
####################################################################
# Number of relus
n_relus = x_train.shape[0] - 1
# x axis points (more than x train)
x = torch.linspace(torch.min(x_train), torch.max(x_train), 1000)
## COMPUTE RELU ACTIVATIONS
# First determine what bias terms should be for each of `n_relus` ReLUs
b = ...
# Compute ReLU activations for each point along the x axis (x)
relu_acts = torch.zeros((n_relus, x.shape[0]))
for i_relu in range(n_relus):
relu_acts[i_relu, :] = torch.relu(x + b[i_relu])
## COMBINE RELU ACTIVATIONS
# Set up weights for weighted sum of ReLUs
combination_weights = torch.zeros((n_relus, ))
# Figure out weights on each ReLU
prev_slope = 0
for i in range(n_relus):
delta_x = x_train[i+1] - x_train[i]
slope = (y_train[i+1] - y_train[i]) / delta_x
combination_weights[i] = ...
prev_slope = slope
# Get output of weighted sum of ReLU activations for every point along x axis
y_hat = ...
return y_hat, relu_acts, x
# add event to airtable
atform.add_event('Coding Exercise 1: Function approximation with ReLU')
# Make training data from sine function
N_train = 10
x_train = torch.linspace(0, 2*np.pi, N_train).view(-1, 1)
y_train = torch.sin(x_train)
## Uncomment the lines below to test your function approximation
# y_hat, relu_acts, x = approximate_function(x_train, y_train)
# plot_function_approximation(x, relu_acts, y_hat)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_e09a57e7.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D3_MultiLayerPerceptrons/static/W1D3_Tutorial1_Solution_e09a57e7_0.png>
#
#
# -
# As you see in the top panel, we obtain 9 shifted ReLUs with the same slope. These are the basis functions that the MLP uses to span the function space, i.e., the MLP finds a linear combination of these ReLUs.
# ---
# # Section 2: MLPs in Pytorch
#
# *Time estimate: ~1hr and 20 mins*
# + cellView="form"
# @title Video 2: Building MLPs in PyTorch
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1zh411z7LY", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"XtwLnaYJ7uc", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 2: Building MLPs in PyTorch')
display(out)
# -
# In the previous segment, we implemented a function to approximate any smooth function using a weighted sum of ReLUs. We saw that, using Lipschitz continuity, we can prove that our approximation is mathematically correct. MLPs are fascinating, but before we get into the details of designing them, let's familiarize ourselves with some basic MLP terminology: layer, neuron, depth, width, weight, bias, and activation function. Armed with these ideas, we can now design an MLP given its input size, hidden layers, and output size.
# ## Coding Exercise 2: Implement a general-purpose MLP in Pytorch
# The objective is to design an MLP with these properties:
# * works with any input (1D, 2D, etc.)
# * construct any number of given hidden layers using `nn.Sequential()` and `add_module()` function
# * use the same given activation function (i.e., [Leaky ReLU](https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html)) in all hidden layers.
#
# **Leaky ReLU** is described by the following mathematical formula:
#
# \begin{equation}
# \text{LeakyReLU}(x) = \max(0,x) + \text{negative\_slope} \cdot \min(0, x) =
# \left\{
# \begin{array}{ll}
# x & ,\; \text{if} \; x \ge 0 \\
# \text{negative\_slope} \cdot x & ,\; \text{otherwise}
# \end{array}
# \right.
# \end{equation}
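# Before using it in the exercise, here is a quick numeric sanity check of the formula (an illustrative NumPy sketch of what `nn.LeakyReLU(0.1)` computes elementwise):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.1):
    # max(0, x) + negative_slope * min(0, x), applied elementwise
    return np.maximum(0.0, x) + negative_slope * np.minimum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
# Negative entries are scaled by 0.1; non-negative entries pass through unchanged
print(leaky_relu(x))
```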
# +
class Net(nn.Module):
def __init__(self, actv, input_feature_num, hidden_unit_nums, output_feature_num):
super(Net, self).__init__()
    self.input_feature_num = input_feature_num  # save the input size for reshaping later
self.mlp = nn.Sequential() # Initialize layers of MLP
in_num = input_feature_num # initialize the temporary input feature to each layer
for i in range(len(hidden_unit_nums)): # Loop over layers and create each one
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Create MLP Layer")
####################################################################
out_num = hidden_unit_nums[i] # assign the current layer hidden unit from list
layer = ... # use nn.Linear to define the layer
in_num = out_num # assign next layer input using current layer output
self.mlp.add_module('Linear_%d'%i, layer) # append layer to the model with a name
actv_layer = eval('nn.%s'%actv) # Assign activation function (eval allows us to instantiate object from string)
self.mlp.add_module('Activation_%d'%i, actv_layer) # append activation to the model with a name
out_layer = nn.Linear(in_num, output_feature_num) # Create final layer
self.mlp.add_module('Output_Linear', out_layer) # append the final layer
def forward(self, x):
# reshape inputs to (batch_size, input_feature_num)
# just in case the input vector is not 2D, like an image!
x = x.view(-1, self.input_feature_num)
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Run MLP model")
####################################################################
logits = ... # forward pass of MLP
return logits
# add event to airtable
atform.add_event('Coding Exercise 2: Implement a general-purpose MLP in Pytorch')
input = torch.zeros((100, 2))
## Uncomment below to create network and test it on input
# net = Net(actv='LeakyReLU(0.1)', input_feature_num=2, hidden_unit_nums=[100, 10, 5], output_feature_num=1).to(DEVICE)
# y = net(input.to(DEVICE))
# print(f'The output shape is {y.shape} for an input of shape {input.shape}')
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_b0675c60.py)
#
#
# -
# ```
# The output shape is torch.Size([100, 1]) for an input of shape torch.Size([100, 2])
# ```
# ## Section 2.1: Classification with MLPs
# + cellView="form"
# @title Video 3: Cross Entropy
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Ag41177mB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"N8pVCbTlves", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 3: Cross Entropy')
display(out)
# -
# The main loss function we could use out of the box for multi-class classification with `N` samples and `C` classes is:
# * CrossEntropyLoss:
# This criterion expects a batch of predictions `x` with shape `(N, C)` and, as the target (label) for each of the `N` samples, a class index in the range $[0, C-1]$, hence a batch of `labels` with shape `(N, )`. There are other optional parameters, such as class weights and ignored classes. Feel free to check the PyTorch documentation [here](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) for more detail. Additionally, [here](https://sparrow.dev/cross-entropy-loss-in-pytorch/) you can learn where it is appropriate to use CrossEntropyLoss.
#
# To get the CrossEntropyLoss of a sample $i$, we could first calculate $-\log(\text{softmax}(x_i))$ and then take the element corresponding to $\text{labels}_i$ as the loss. However, for numerical stability, we implement this more stable equivalent form,
#
# \begin{equation}
# \operatorname{loss}(x_i, \text { labels }_i)=-\log \left(\frac{\exp (x_i[\text { labels }_i])}{\sum_{j=1}^{C} \exp (x_i[j])}\right)=-x_i[\text { labels }_i]+\log \left(\sum_{j=1}^C \exp (x_i[j])\right)
# \end{equation}
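# A side note on why the second form matters in practice (an illustrative NumPy demonstration, separate from the exercise below): exponentiating large logits directly overflows, while subtracting the maximum before exponentiating (the log-sum-exp trick) stays finite.

```python
import numpy as np

x = np.array([1000.0, 1001.0])  # large logits

with np.errstate(over='ignore', invalid='ignore'):
    # Naive log-softmax: exp(1000) overflows to inf, and inf/inf gives nan
    naive = np.log(np.exp(x) / np.exp(x).sum())

# Stable log-softmax: subtract the max before exponentiating
m = x.max()
stable = x - (m + np.log(np.exp(x - m).sum()))

print(naive)   # all nan
print(stable)  # finite, approximately [-1.3133, -0.3133]
```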
# ### Coding Exercise 2.1: Implement Batch Cross Entropy Loss
#
# To recap, since we will be doing batch learning, we'd like a loss function that given:
# * a batch of predictions `x` with shape `(N, C)`
# * a batch of `labels` with shape `(N, )` that ranges from `0` to `C-1`
#
# returns the average loss $L$ calculated according to:
#
# \begin{align}
# loss(x_i, \text { labels }_i) &= -x_i[\text { labels }_i]+\log \left(\sum_{j=1}^C \exp (x_i[j])\right) \\
# L &= \frac{1}{N} \sum_{i=1}^{N}{loss(x_i, \text { labels }_i)}
# \end{align}
#
# Steps:
#
# 1. Use indexing operation to get predictions of class corresponding to the labels (i.e., $x_i[\text { labels }_i]$)
# 2. Compute $loss(x_i, \text { labels }_i)$ vector (`losses`) using `torch.log()` and `torch.exp()` without Loops!
# 3. Return the average of the loss vector
#
#
# +
def cross_entropy_loss(x, labels):
  # x is the model predictions we'd like to evaluate using labels
x_of_labels = torch.zeros(len(labels))
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Cross Entropy Loss")
####################################################################
# 1. prediction for each class corresponding to the label
for i, label in enumerate(labels):
x_of_labels[i] = x[i, label]
# 2. loss vector for the batch
losses = ...
# 3. Return the average of the loss vector
avg_loss = ...
return avg_loss
# add event to airtable
atform.add_event('Coding Exercise 2.1: Implement Batch Cross Entropy Loss')
labels = torch.tensor([0, 1])
x = torch.tensor([[10.0, 1.0, -1.0, -20.0], # correctly classified
[10.0, 10.0, 2.0, -10.0]]) # Not correctly classified
CE = nn.CrossEntropyLoss()
pytorch_loss = CE(x, labels).item()
## Uncomment below to test your function
# our_loss = cross_entropy_loss(x, labels).item()
# print(f'Our CE loss: {our_loss:0.8f}, Pytorch CE loss: {pytorch_loss:0.8f}')
# print(f'Difference: {np.abs(our_loss - pytorch_loss):0.8f}')
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_f3376d8f.py)
#
#
# -
# ```
# Our CE loss: 0.34672737, Pytorch CE loss: 0.34672749
# Difference: 0.00000012
# ```
# ## Section 2.2: Spiral classification dataset
# Before we could start optimizing these loss functions, we need a dataset!
#
# Let's turn this fancy-looking equation into a classification dataset:
# \begin{equation}
# \begin{array}{c}
# X_{k}(t)=t\left(\begin{array}{c}
# \sin \left[\frac{2 \pi}{K}\left(2 t+k-1\right)\right]+\mathcal{N}\left(0, \sigma\right) \\
# \cos \left[\frac{2 \pi}{K}\left(2 t+k-1\right)\right]+\mathcal{N}\left(0, \sigma\right)
# \end{array}\right)
# \end{array}, \quad 0 \leq t \leq 1, \quad k=1, \ldots, K
# \end{equation}
# +
def create_spiral_dataset(K, sigma, N):
# Initialize t, X, y
t = torch.linspace(0, 1, N)
X = torch.zeros(K*N, 2)
y = torch.zeros(K*N)
# Create data
for k in range(K):
X[k*N:(k+1)*N, 0] = t*(torch.sin(2*np.pi/K*(2*t+k)) + sigma*torch.randn(N))
X[k*N:(k+1)*N, 1] = t*(torch.cos(2*np.pi/K*(2*t+k)) + sigma*torch.randn(N))
y[k*N:(k+1)*N] = k
return X, y
# Set parameters
K = 4
sigma = 0.16
N = 1000
set_seed(seed=SEED)
X, y = create_spiral_dataset(K, sigma, N)
plt.scatter(X[:, 0], X[:, 1], c = y)
plt.show()
# -
# ## Section 2.3: Training and Evaluation
# + cellView="form"
# @title Video 4: Training and Evaluating an MLP
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1QV411p7mF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"DfXZhRfBEqQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 4: Training and Evaluating an MLP')
display(out)
# -
# ### Coding Exercise 2.3: Implement it for a classification task
# Now that we have the Spiral dataset and a loss function, it's your turn to implement a simple train/test split for training and validation.
#
# Steps to follow:
# * Dataset shuffle
# * Train/Test split (20% for test)
# * Dataloader definition
# * Training and Evaluation
# +
def shuffle_and_split_data(X, y, seed):
# set seed for reproducibility
torch.manual_seed(seed)
# Number of samples
N = X.shape[0]
####################################################################
# Fill in missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Shuffle & split data")
####################################################################
# Shuffle data
shuffled_indices = ... # get indices to shuffle data, could use torch.randperm
X = X[shuffled_indices]
y = y[shuffled_indices]
# Split data into train/test
  test_size = ...  # assign test dataset size using 20% of samples
X_test = X[:test_size]
y_test = y[:test_size]
X_train = X[test_size:]
y_train = y[test_size:]
return X_test, y_test, X_train, y_train
# add event to airtable
atform.add_event('Coding Exercise 2.3: Implement it for a classification task')
## Uncomment below to test your function
# X_test, y_test, X_train, y_train = shuffle_and_split_data(X, y, seed=SEED)
# plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test)
# plt.title('Test data')
# plt.show()
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_083a4d77.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D3_MultiLayerPerceptrons/static/W1D3_Tutorial1_Solution_083a4d77_0.png>
#
#
# -
# And we need to make a PyTorch data loader out of it. Data loading in PyTorch can be separated into two parts:
# * Data must be wrapped in a `Dataset` subclass, where the `__getitem__` and `__len__` methods must be overridden. Note that at this point the data is not loaded into memory; PyTorch will only load what is needed. Here `TensorDataset` does this for us directly.
# * Use a `DataLoader` that will actually read the data in batches and put it into memory. Also, the option `num_workers > 0` enables loading in parallel worker processes, which prepares multiple batches in a queue to speed things up.
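# `TensorDataset` hides the boilerplate. A minimal hand-rolled equivalent (an illustrative sketch with a hypothetical class name) only needs `__len__` and `__getitem__`, since `DataLoader` accepts any map-style object implementing those two methods:

```python
class PairDataset:
    """Minimal map-style dataset wrapping parallel feature/label sequences.

    Hypothetical illustration; `TensorDataset` plays this role in the cell below.
    """
    def __init__(self, X, y):
        assert len(X) == len(y), "features and labels must align"
        self.X, self.y = X, y

    def __len__(self):
        return len(self.X)               # number of samples

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]  # one (sample, label) pair

ds = PairDataset([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]], [0, 1, 0])
print(len(ds))  # 3
print(ds[1])    # ([0.3, 0.4], 1)
```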
# +
g_seed = torch.Generator()
g_seed.manual_seed(SEED)
batch_size = 128
test_data = TensorDataset(X_test, y_test)
test_loader = DataLoader(test_data, batch_size=batch_size,
shuffle=False, num_workers=2,
worker_init_fn=seed_worker,
generator=g_seed)
train_data = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_data, batch_size=batch_size, drop_last=True,
shuffle=True, num_workers=2,
worker_init_fn=seed_worker,
generator=g_seed)
# -
# Let's write general-purpose training and evaluation code and keep it in our pocket for the next tutorial as well. So make sure you review it to see what it does.
#
# Note that `model.train()` tells your model that you are training it. Layers like dropout and batchnorm, which behave differently during training and testing, then know what is going on and can act accordingly. To turn off training mode, we call `model.eval()`.
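# As a pure-Python illustration of what mode switching changes (a sketch of inverted dropout, not the actual `nn.Dropout` implementation): in training mode each element is zeroed with probability `p` and survivors are rescaled by `1/(1-p)`; in eval mode the layer is the identity.

```python
import random

def dropout(x, p=0.5, training=True):
    # Inverted dropout, as used by nn.Dropout:
    # train: zero each element with prob p, scale survivors by 1/(1-p)
    # eval:  pass the input through unchanged
    if not training:
        return list(x)
    return [0.0 if random.random() < p else v / (1 - p) for v in x]

random.seed(0)
x = [1.0, 1.0, 1.0, 1.0]
print(dropout(x, training=True))   # some entries 0.0, the rest scaled to 2.0
print(dropout(x, training=False))  # [1.0, 1.0, 1.0, 1.0]
```

# Rescaling during training keeps the expected activation the same in both modes, so no extra correction is needed at eval time.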
def train_test_classification(net, criterion, optimizer, train_loader,
test_loader, num_epochs=1, verbose=True,
training_plot=False, device='cpu'):
net.train()
training_losses = []
for epoch in tqdm(range(num_epochs)): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs = inputs.to(device).float()
labels = labels.to(device).long()
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
if verbose:
training_losses += [loss.item()]
net.eval()
def test(data_loader):
correct = 0
total = 0
for data in data_loader:
inputs, labels = data
inputs = inputs.to(device).float()
labels = labels.to(device).long()
outputs = net(inputs)
_, predicted = torch.max(outputs, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = 100 * correct / total
return total, acc
train_total, train_acc = test(train_loader)
test_total, test_acc = test(test_loader)
if verbose:
print(f"Accuracy on the {train_total} training samples: {train_acc:0.2f}")
print(f"Accuracy on the {test_total} testing samples: {test_acc:0.2f}")
if training_plot:
plt.plot(training_losses)
plt.xlabel('Batch')
plt.ylabel('Training loss')
plt.show()
return train_acc, test_acc
# ### Think! 2.3.1: What's the point of .eval() and .train()?
#
# Is it necessary to use `net.train()` and `net.eval()` for our MLP model? Why?
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q1', text.value)
print("Submission successful!")
button.on_click(on_button_clicked)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_51988471.py)
#
#
# -
# Now let's put everything together and train your first deep-ish model!
# +
set_seed(SEED)
net = Net('ReLU()', X_train.shape[1], [128], K).to(DEVICE)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=1e-3)
num_epochs = 100
_, _ = train_test_classification(net, criterion, optimizer, train_loader,
test_loader, num_epochs=num_epochs,
training_plot=True, device=DEVICE)
# -
# And finally, let's visualize the learned decision map. We know you're probably running out of time, so we won't make you write the code now! But make sure you review it, since we'll start with another visualization technique next time.
# +
def sample_grid(M=500, x_max=2.0):
ii, jj = torch.meshgrid(torch.linspace(-x_max, x_max, M),
torch.linspace(-x_max, x_max, M))
X_all = torch.cat([ii.unsqueeze(-1),
jj.unsqueeze(-1)],
dim=-1).view(-1, 2)
return X_all
def plot_decision_map(X_all, y_pred, X_test, y_test,
M=500, x_max=2.0, eps=1e-3):
decision_map = torch.argmax(y_pred, dim=1)
for i in range(len(X_test)):
indices = (X_all[:, 0] - X_test[i, 0])**2 + (X_all[:, 1] - X_test[i, 1])**2 < eps
decision_map[indices] = (K + y_test[i]).long()
decision_map = decision_map.view(M, M)
plt.imshow(decision_map, extent=[-x_max, x_max, -x_max, x_max], cmap='jet')
plt.show()
# -
X_all = sample_grid()
y_pred = net(X_all)
plot_decision_map(X_all, y_pred, X_test, y_test)
# ### Think! 2.3.2: Does it generalize well?
# Do you think this model is performing well outside its training distribution? Why?
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q2' , text.value)
print("Submission successful!")
button.on_click(on_button_clicked)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_d90d9e4b.py)
#
#
# -
# What would be your suggestions to increase the model's ability to generalize? Think about it and discuss with your pod.
# ---
# # Summary
#
# In this tutorial, we have explored Multi-Layer Perceptrons (MLPs). More specifically, we have discussed the similarities between artificial and biological neural networks (for more information see the Bonus section), we have learned about the Universal Approximation Theorem, and we have implemented MLPs in PyTorch.
# + cellView="form"
# @title Airtable Submission Link
from IPython import display as IPydisplay
IPydisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/AirtableSubmissionButton.png?raw=1"
alt="button link to Airtable" style="width:410px"></a>
</div>""" )
# -
# ---
# # Bonus: Neuron Physiology and Motivation to Deep Learning
#
# This section will motivate one of the most popular nonlinearities in deep learning, the ReLU nonlinearity, by starting from the biophysics of neurons and obtaining the ReLU nonlinearity through a sequence of approximations. We will also show that neuronal biophysics sets a time scale for signal propagation speed through the brain. This time scale implies that neural circuits underlying fast perceptual and motor processing in the brain may not be excessively deep.
#
# This biological motivation for deep learning is not strictly necessary to follow the rest of this course, so this section can be safely skipped and **is a bonus section**.
# + cellView="form"
# @title Video 5: Biological to Artificial Neurons
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
    from IPython.display import IFrame

    class BiliVideo(IFrame):
        def __init__(self, id, page=1, width=400, height=300, **kwargs):
            self.id = id
            src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
            super(BiliVideo, self).__init__(src, width, height, **kwargs)

    video = BiliVideo(id="BV1mf4y157vf", width=854, height=480, fs=1)
    print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
    display(video)

out1 = widgets.Output()
with out1:
    from IPython.display import YouTubeVideo
    video = YouTubeVideo(id="ELAbflymSLo", width=854, height=480, fs=1, rel=0)
    print("Video available at https://youtube.com/watch?v=" + video.id)
    display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 5: Biological to Artificial Neurons')
display(out)
# -
# ## Leaky Integrate-and-fire (LIF)
# The basic idea of the LIF neuron was proposed in 1907 by <NAME>, long before we understood the electrophysiology of a neuron (see a translation of [Lapicque's paper](https://pubmed.ncbi.nlm.nih.gov/17968583/)). More details of the model can be found in the book [**Theoretical Neuroscience**](http://www.gatsby.ucl.ac.uk/~dayan/book/) by <NAME> and Lauren<NAME>.
#
# The model dynamics are defined by the following formula:
#
#
# \begin{equation}
# \frac{d V}{d t}=\left\{\begin{array}{cc}
# \frac{1}{C}\left(-\frac{V}{R}+I \right) & t>t_{\rm rest} \\
# 0 & \text{otherwise}
# \end{array}\right.
# \end{equation}
#
#
# Note that $V$, $C$, and $R$ are the membrane voltage, capacitance, and resistance of the neuron, respectively, and $-\frac{V}{R}$ is the leakage current. When $I$ is sufficiently strong that $V$ reaches a certain threshold value $V_{\rm th}$, it momentarily spikes and then $V$ is reset to $V_{\rm reset}< V_{\rm th}$, and the voltage stays at $V_{\rm reset}$ for $\tau_{\rm ref}$ ms, mimicking the refractoriness of the neuron during an action potential (note that $V_{\rm reset}$ and $\tau_{\rm ref}$ are assumed to be zero in the lecture):
#
#
# \begin{eqnarray}
# V(t)=V_{\rm reset} \text{ for } t\in(t_{\text{sp}}, t_{\text{sp}} + \tau_{\text{ref}}]
# \end{eqnarray}
#
#
# where $t_{\rm sp}$ is the spike time when $V(t)$ just exceeded $V_{\rm th}$.
#
# Thus, the LIF model captures the facts that a neuron:
# - performs spatial and temporal integration of synaptic inputs
# - generates a spike when the voltage reaches a certain threshold
# - goes refractory during the action potential
# - has a leaky membrane
#
# For in-depth content on computational models of neurons, follow the [NMA](https://www.neuromatchacademy.org/) tutorial 1 of *Biological Neuron Models*. Specifically, for NMA-CN 2021 follow this [Tutorial](https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W2D3_BiologicalNeuronModels/W2D3_Tutorial1.ipynb).
#
# ## Simulating an LIF Neuron
#
# The cell below defines a function for the LIF neuron model, with its arguments described.
#
# Note that we use Euler's method to make a numerical approximation to the derivative. Hence we use the following discretized implementation of the model dynamics,
#
# \begin{equation}
# V_n=\left\{\begin{array}{cc}
# V_{n-1} + \frac{1}{C}\left(-\frac{V_{n-1}}{R}+I \right) \Delta t & t>t_{\rm rest} \\
# 0 & \text{otherwise}
# \end{array}\right.
# \end{equation}
# +
def run_LIF(I, T=50, dt=0.1, t_ref=10,
            Rm=1, Cm=10, Vth=1, V_spike=0.5):
    """
    Simulate the LIF dynamics with an external input current

    Args:
      I       : input current (mA)
      T       : total time to simulate (msec)
      dt      : simulation time step (msec)
      t_ref   : refractory period (msec)
      Rm      : resistance (kOhm)
      Cm      : capacitance (uF)
      Vth     : spike threshold (V)
      V_spike : spike delta (V)

    Returns:
      time : time points
      Vm   : membrane potentials
    """
    # Set up array of time steps
    time = torch.arange(0, T + dt, dt)
    # Set up array for tracking Vm
    Vm = torch.zeros(len(time))
    # Iterate over each time step
    t_rest = 0
    for i, t in enumerate(time):
        # If t is after the refractory period
        if t > t_rest:
            Vm[i] = Vm[i-1] + 1 / Cm * (-Vm[i-1] / Rm + I) * dt
        # If Vm is over the threshold
        if Vm[i] >= Vth:
            # Increase voltage by the spike delta
            Vm[i] += V_spike
            # Set up a new refractory period
            t_rest = t + t_ref
    return time, Vm
sim_time, Vm = run_LIF(1.5)
# Plot the membrane voltage across time
plt.plot(sim_time, Vm)
plt.title('LIF Neuron Output')
plt.ylabel('Membrane Potential (V)')
plt.xlabel('Time (msec)')
plt.show()
# -
# ### Interactive Demo: Neuron's transfer function explorer for different $R_m$ and $t_{ref}$
# We know that real neurons communicate by modulating their spike count: more input current causes a neuron to spike more often. Therefore, to find an input-output relationship, it makes sense to characterize the spike count as a function of input current. This is called the neuron's input-output transfer function. Let's plot the transfer function and see how it changes with respect to the **membrane resistance** and the **refractory time**.
# + cellView="form"
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout = widgets.Layout()
@widgets.interact(Rm=widgets.FloatSlider(1., min=1, max=100.,
step=0.1, layout=my_layout),
t_ref=widgets.FloatSlider(1., min=1, max=100.,
step=0.1, layout=my_layout)
)
def plot_IF_curve(Rm, t_ref):
    T = 1000  # total time to simulate (msec)
    dt = 1  # simulation time step (msec)
    Vth = 1  # spike threshold (V)
    Is_max = 2
    Is = torch.linspace(0, Is_max, 10)
    spike_counts = []
    for I in Is:
        _, Vm = run_LIF(I, T=T, dt=dt, Vth=Vth, Rm=Rm, t_ref=t_ref)
        spike_counts += [torch.sum(Vm > Vth)]
    plt.plot(Is, spike_counts)
    plt.title('LIF Neuron: Transfer Function')
    plt.ylabel('Spike count')
    plt.xlabel('I (mA)')
    plt.xlim(0, Is_max)
    plt.ylim(0, 80)
    plt.show()
# -
# ### Think!: Real and Artificial neuron similarities
#
# What happens at infinite membrane resistance ($R_m$) and small refractory time ($t_{ref}$)? Why?
#
# Take 10 mins to discuss the similarity between a real neuron and an artificial one with your pod.
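The limiting behaviour asked about above can also be checked with a closed-form expression: integrating the LIF equation from the reset value up to threshold gives the steady-state firing rate for a constant input current. This is a self-contained sanity check, not part of the tutorial; the default parameters below mirror `run_LIF`.

```python
import math

def lif_rate(I, Rm=1.0, Cm=10.0, Vth=1.0, t_ref=10.0):
    """Steady-state firing rate of an LIF neuron for constant input I.

    Integrating dV/dt = (-V/Rm + I) / Cm from the reset value (0) up to
    Vth gives a time-to-threshold of tau_m * ln(I*Rm / (I*Rm - Vth)),
    with tau_m = Rm * Cm; the refractory period t_ref is added on top.
    """
    if I * Rm <= Vth:
        return 0.0  # subthreshold input: the neuron never fires
    tau_m = Rm * Cm
    t_spike = tau_m * math.log(I * Rm / (I * Rm - Vth))
    return 1.0 / (t_ref + t_spike)

# As Rm -> infinity and t_ref -> 0, the rate approaches I / (Cm * Vth):
# a rescaled ReLU of the input current.
for I in [0.5, 1.5, 3.0]:
    print(I, lif_rate(I), lif_rate(I, Rm=1e6, t_ref=0.0))
```

In that limit the leak term vanishes, so the transfer function becomes linear above zero current, which is one way to motivate the ReLU nonlinearity discussed in the lecture.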
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D3_MultiLayerPerceptrons/solutions/W1D3_Tutorial1_Solution_d58d2933.py)
#
#
| tutorials/W1D3_MultiLayerPerceptrons/student/W1D3_Tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Author: <NAME>
#Code Purpose: Make Single JSON for all EGamma SFs
# +
# Technical part below
from ROOT import TFile, TH1F, TCanvas, TString, TH2F, TList
import correctionlib.schemav1 as schema
from correctionlib.schemav1 import Correction
corrsphoton=[]
corrsele=[]
#Function to extract SFs from EGamma standard ROOT files
def getSFs(fn="filename", IsSF=1):
    tf = TFile(fn)
    fo = tf.Get(sfhist)
    Xbins = fo.GetNbinsX()
    Ybins = fo.GetNbinsY()
    X = [None] * (Xbins + 1)
    Y = [None] * (Ybins + 1)
    values = []
    errors = []
    for i in range(1, Xbins + 1):
        X[i-1] = fo.GetXaxis().GetBinLowEdge(i)
    X[Xbins] = fo.GetXaxis().GetBinUpEdge(Xbins)
    for j in range(1, Ybins + 1):
        Y[j-1] = fo.GetYaxis().GetBinLowEdge(j)
    Y[Ybins] = fo.GetYaxis().GetBinUpEdge(Ybins)
    for i in range(1, Xbins + 1):
        for j in range(1, Ybins + 1):
            values.append(fo.GetBinContent(i, j))
            errors.append(fo.GetBinError(i, j))
    if IsSF == 1:
        valSFs = schema.MultiBinning.parse_obj({
            "nodetype": "multibinning",
            "edges": [
                X,
                Y,
            ],
            "content": values,
        })
        return valSFs
    if IsSF == 0:
        valerrors = schema.MultiBinning.parse_obj({
            "nodetype": "multibinning",
            "edges": [
                X,
                Y,
            ],
            "content": errors,
        })
        return valerrors
def SFyearwise(files=[], names=[]):
    output = schema.Category.parse_obj({
        "nodetype": "category",
        "keys": ["sf", "syst"],
        "content": [
            schema.Category.parse_obj({
                "nodetype": "category",
                "keys": names,
                "content": [
                    getSFs(fn=files[name], IsSF=1)
                    for name in names
                ]
            }),
            schema.Category.parse_obj({
                "nodetype": "category",
                "keys": names,
                "content": [
                    getSFs(fn=files[name], IsSF=0)
                    for name in names
                ]
            }),
        ]
    })
    return output
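For reference, the flat `content` list built by `getSFs` runs the X loop outermost and the Y loop innermost, so bin `(i, j)` sits at flat index `i * Ybins + j`. The sketch below illustrates that lookup convention in plain Python (`multibinning_lookup` is a hypothetical helper, not part of correctionlib, and it simply errors on out-of-range points rather than clamping):

```python
import bisect

def multibinning_lookup(x_edges, y_edges, content, x, y):
    """Look up a value in a flattened 2D multibinning table.

    `content` is assumed to be flattened the way getSFs() fills it:
    outer loop over X bins, inner loop over Y bins, so bin (i, j)
    lives at flat index i * n_y + j.
    """
    n_y = len(y_edges) - 1
    i = bisect.bisect_right(x_edges, x) - 1  # X bin containing x
    j = bisect.bisect_right(y_edges, y) - 1  # Y bin containing y
    if not (0 <= i < len(x_edges) - 1 and 0 <= j < n_y):
        raise ValueError("point outside binning")
    return content[i * n_y + j]

# Toy 2x2 binning in (eta, pt): rows are eta bins, columns are pt bins
eta_edges = [-2.5, 0.0, 2.5]
pt_edges = [10.0, 20.0, 500.0]
sf = [0.95, 0.97, 1.01, 0.99]
print(multibinning_lookup(eta_edges, pt_edges, sf, 1.1, 50.0))  # bin (1, 1) -> 0.99
```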
# +
#For electrons
#Names that you want for SFs for files above (same order please)
namesSFs=["RecoBelow20","RecoAbove20","Veto", "Loose", "Medium", "Tight", "wp80iso", "wp80noiso", "wp90iso", "wp90noiso"]
#All the files for SFs
files2016preVFP={
"RecoBelow20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptBelow20.txt_EGM2D_UL2016preVFP.root",
"RecoAbove20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptAbove20.txt_EGM2D_UL2016preVFP.root",
"Veto":"EGdata/SF-Repository/UL16/preVFP/Electrons/Veto/egammaEffi.txt_Ele_Veto_preVFP_EGM2D.root",
"Loose":"EGdata/SF-Repository/UL16/preVFP/Electrons/Loose/egammaEffi.txt_Ele_Loose_preVFP_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL16/preVFP/Electrons/Medium/egammaEffi.txt_Ele_Medium_preVFP_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL16/preVFP/Electrons/Tight/egammaEffi.txt_Ele_Tight_preVFP_EGM2D.root",
"wp80iso":"EGdata/SF-Repository/UL16/preVFP/Electrons/wp80iso/egammaEffi.txt_Ele_wp80iso_preVFP_EGM2D.root",
"wp80noiso":"EGdata/SF-Repository/UL16/preVFP/Electrons/wp80noiso/egammaEffi.txt_Ele_wp80noiso_preVFP_EGM2D.root",
"wp90iso":"EGdata/SF-Repository/UL16/preVFP/Electrons/wp90iso/egammaEffi.txt_Ele_wp90iso_preVFP_EGM2D.root",
"wp90noiso":"EGdata/SF-Repository/UL16/preVFP/Electrons/wp90noiso/egammaEffi.txt_Ele_wp90noiso_preVFP_EGM2D.root",
}
files2016postVFP={
"RecoBelow20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptBelow20.txt_EGM2D_UL2016postVFP.root",
"RecoAbove20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptAbove20.txt_EGM2D_UL2016postVFP.root",
"Veto":"EGdata/SF-Repository/UL16/postVFP/Electrons/Veto/egammaEffi.txt_EGM2D.root",
"Loose":"EGdata/SF-Repository/UL16/postVFP/Electrons/Loose/egammaEffi.txt_Ele_Loose_postVFP_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL16/postVFP/Electrons/Medium/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL16/postVFP/Electrons/Tight/egammaEffi.txt_EGM2D.root",
"wp80iso":"EGdata/SF-Repository/UL16/postVFP/Electrons/wp80iso/egammaEffi.txt_EGM2D.root",
"wp80noiso":"EGdata/SF-Repository/UL16/postVFP/Electrons/wp80noiso/egammaEffi.txt_EGM2D.root",
"wp90iso":"EGdata/SF-Repository/UL16/postVFP/Electrons/wp90iso/egammaEffi.txt_EGM2D.root",
"wp90noiso":"EGdata/SF-Repository/UL16/postVFP/Electrons/wp90noiso/egammaEffi.txt_EGM2D.root",
}
files2017={
"RecoBelow20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptBelow20.txt_EGM2D_UL2017.root",
"RecoAbove20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptAbove20.txt_EGM2D_UL2017.root",
"Veto":"EGdata/SF-Repository/UL17/Electrons/Veto/passingVeto94XV2/egammaEffi.txt_EGM2D.root",
"Loose":"EGdata/SF-Repository/UL17/Electrons/Loose/passingLoose94XV2/egammaEffi.txt_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL17/Electrons/Medium/passingMedium94XV2/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL17/Electrons/Tight/passingTight94XV2/egammaEffi.txt_EGM2D.root",
"wp80iso":"EGdata/SF-Repository/UL17/Electrons/MVA80Iso/passingMVA94Xwp80isoV2/egammaEffi.txt_EGM2D.root",
"wp80noiso":"EGdata/SF-Repository/UL17/Electrons/MVA80NoIso/passingMVA94Xwp80noisoV2/egammaEffi.txt_EGM2D.root",
"wp90iso":"EGdata/SF-Repository/UL17/Electrons/MVA90Iso/passingMVA94Xwp90isoV2/egammaEffi.txt_EGM2D.root",
"wp90noiso":"EGdata/SF-Repository/UL17/Electrons/MVA90NoIso/passingMVA94Xwp90noisoV2/egammaEffi.txt_EGM2D.root",
}
files2018={
"RecoBelow20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptBelow20.txt_EGM2D_UL2018.root",
"RecoAbove20":"EGdata/SF-Repository/UL_RecoSFs/egammaEffi_ptAbove20.txt_EGM2D_UL2018.root",
"Veto":"EGdata/SF-Repository/UL18/Electrons/Veto/passingVeto94XV2/egammaEffi.txt_EGM2D.root",
"Loose":"EGdata/SF-Repository/UL18/Electrons/Loose/passingLoose94XV2/egammaEffi.txt_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL18/Electrons/Medium/passingMedium94XV2/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL18/Electrons/Tight/passingTight94XV2/egammaEffi.txt_EGM2D.root",
"wp80iso":"EGdata/SF-Repository/UL18/Electrons/wp80iso/passingMVA94Xwp80isoV2/egammaEffi.txt_EGM2D.root",
"wp80noiso":"EGdata/SF-Repository/UL18/Electrons/wp80noiso/passingMVA94Xwp80noisoV2/egammaEffi.txt_EGM2D.root",
"wp90iso":"EGdata/SF-Repository/UL18/Electrons/wp90iso/passingMVA94Xwp90isoV2/egammaEffi.txt_EGM2D.root",
"wp90noiso":"EGdata/SF-Repository/UL18/Electrons/wp90noiso/passingMVA94Xwp90noisoV2/egammaEffi.txt_EGM2D.root",
}
#Names that you want for errors for files above (same order please)
nameJSON="egcorrs_Electrons_UL.json" # Name of final JSON
sfhist="EGamma_SF2D"
# +
corre = Correction.parse_obj(
{
"version": 1,
"name": "ElectronsUL",
"inputs": [
{"name": "year","type": "string"},
{"name": "SF or syst","type": "string"},
{"name": "wp","type": "string"},
{"type": "real", "name": "eta", "description": "possibly supercluster eta?"},
{"name": "pt", "type": "real"},
],
"output": {"name": "weight", "type": "real"},
"data": schema.Category.parse_obj({
"nodetype": "category",
"keys": ["2016preVFP","2016postVFP","2017","2018"],
"content": [
SFyearwise(files=files2016preVFP,names=namesSFs),
SFyearwise(files=files2016postVFP,names=namesSFs),
SFyearwise(files=files2017,names=namesSFs),
SFyearwise(files=files2018,names=namesSFs),
]
})
})
corrsele.append(corre)
# +
#Names that you want for SFs for files above (same order please)
namesSFs=["Loose", "Medium", "Tight", "wp80", "wp90"]
#All the files for SFs
files2016preVFP={
"Loose":"EGdata/SF-Repository/UL16/preVFP/Photons/Loose/egammaEffi.txt_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL16/preVFP/Photons/Medium/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL16/preVFP/Photons/Tight/egammaEffi.txt_EGM2D.root",
"wp80":"EGdata/SF-Repository/UL16/preVFP/Photons/MVA80/egammaEffi.txt_EGM2D.root",
"wp90":"EGdata/SF-Repository/UL16/preVFP/Photons/MVA90/egammaEffi.txt_EGM2D.root",
}
files2016postVFP={
"Loose":"EGdata/SF-Repository/UL16/postVFP/Photons/Loose/egammaEffi.txt_EGM2D_Pho_Loose_UL16_postVFP.root",
"Medium":"EGdata/SF-Repository/UL16/postVFP/Photons/Medium/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL16/postVFP/Photons/Tight/egammaEffi.txt_EGM2D_Pho_Tight_UL16_postVFP.root",
"wp80":"EGdata/SF-Repository/UL16/postVFP/Photons/MVA80/egammaEffi.txt_EGM2D_Pho_MVA80_UL16_postVFP.root",
"wp90":"EGdata/SF-Repository/UL16/postVFP/Photons/MVA90/egammaEffi.txt_EGM2D_Pho_MVA90_UL16_postVFP.root",
}
files2017={
"Loose":"EGdata/SF-Repository/UL17/Photons/Loose/passingLoose100XV2_lowpTChebychev_addGaus/egammaEffi.txt_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL17/Photons/Medium/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL17/Photons/Tight/egammaEffi.txt_EGM2D.root",
"wp80":"EGdata/SF-Repository/UL17/Photons/MVA80/passingMVA94XV2wp80/egammaEffi.txt_EGM2D.root",
"wp90":"EGdata/SF-Repository/UL17/Photons/MVA90/passingMVA94XV2wp90/egammaEffi.txt_EGM2D.root",
}
files2018={
"Loose":"EGdata/SF-Repository/UL18/Photons/Loose/passingLoose100XV2/egammaEffi.txt_EGM2D.root",
"Medium":"EGdata/SF-Repository/UL18/Photons/Medium/passingMedium100XV2/egammaEffi.txt_EGM2D.root",
"Tight":"EGdata/SF-Repository/UL18/Photons/Tight/passingTight100XV2/egammaEffi.txt_EGM2D.root",
"wp80":"EGdata/SF-Repository/UL18/Photons/MVA80/passingMVA94XV2wp80/egammaEffi.txt_EGM2D.root",
"wp90":"EGdata/SF-Repository/UL18/Photons/MVA90/passingMVA94XV2wp90/egammaEffi.txt_EGM2D.root",
}
#Names that you want for errors for files above (same order please)
nameJSON="egcorrs_Photons_UL.json" # Name of final JSON
sfhist="EGamma_SF2D"
# +
corrp = Correction.parse_obj(
{
"version": 1,
"name": "PhotonsUL",
"inputs": [
{"name": "year","type": "string"},
{"name": "SF or syst","type": "string"},
{"name": "wp","type": "string"},
{"type": "real", "name": "eta", "description": "possibly supercluster eta?"},
{"name": "pt", "type": "real"},
],
"output": {"name": "weight", "type": "real"},
"data": schema.Category.parse_obj({
"nodetype": "category",
"keys": ["2016preVFP","2016postVFP","2017","2018"],
"content": [
SFyearwise(files=files2016preVFP,names=namesSFs),
SFyearwise(files=files2016postVFP,names=namesSFs),
SFyearwise(files=files2017,names=namesSFs),
SFyearwise(files=files2018,names=namesSFs),
]
})
})
corrsphoton.append(corrp)
# +
nameJSON="egcorrs_UL.json"
#Save JSON
from correctionlib.schemav1 import CorrectionSet
import gzip
cset = CorrectionSet.parse_obj({
"schema_version": 1,
"corrections": [
corre,
corrp,
]
})
with open(nameJSON, "w") as fout:
    fout.write(cset.json(exclude_unset=True, indent=4))
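Note that `gzip` is imported above but never used; correction JSON files are commonly shipped compressed. A self-contained sketch of writing and reading back a `.json.gz` (using a stand-in payload here; in this notebook you would write `cset.json(exclude_unset=True, indent=4)` instead):

```python
import gzip
import json

payload = {"schema_version": 1, "corrections": []}  # stand-in for the real cset

# gzip.open in text mode lets us write the JSON string directly
with gzip.open("egcorrs_UL.json.gz", "wt", encoding="utf-8") as fout:
    fout.write(json.dumps(payload, indent=4))

# round-trip check: decompress and parse
with gzip.open("egcorrs_UL.json.gz", "rt", encoding="utf-8") as fin:
    assert json.load(fin)["schema_version"] == 1
```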
# +
#Evaluator Example
import correctionlib
cset = correctionlib.CorrectionSet.from_file(nameJSON)
# +
valsf= cset["ElectronsUL"].evaluate("2016postVFP","sf","Medium",1.1, 15.0)
print("sf is:"+str(valsf))
valsf= cset["ElectronsUL"].evaluate("2016postVFP","sf","RecoAbove20",1.1, 25.0)
print("sf is:"+str(valsf))
valsyst= cset["ElectronsUL"].evaluate("2017","syst","Medium",1.1, 34.0)
print("syst is:"+str(valsyst))
# +
valsf= cset["PhotonsUL"].evaluate("2016postVFP","sf","Medium",1.1, 50.0)
print("sf is:"+str(valsf))
valsf= cset["PhotonsUL"].evaluate("2016postVFP","sf","Loose",1.1, 50.0)
print("sf is:"+str(valsf))
# -
| convertRootToNewJson_forEGamma-Combined.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''fop'': conda)'
# language: python
# name: python38564bitfopconda92cc46f9b3c2401aadcd233b327c3156
# ---
# # Frequent opiate prescriber
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import preprocessors as pp
sns.set(style="darkgrid")
# -
data = pd.read_csv('../data/prescriber-info.csv')
data.head()
# ## Variable Separation
uniq_cols = ['NPI']
# + tags=[]
cat_cols = list(data.columns[1:5])
cat_cols
# -
num_cols = list(data.columns[5:-1])
# print(num_cols)
target = [data.columns[-1]]
target
# ## Categorical Variable Analysis & EDA
# ### Missing values
# checking for missing values
data[cat_cols].isnull().sum()
# checking for missing value percentage
data[cat_cols].isnull().sum()/data.shape[0] *100
# checking for null value in drugs column
data[num_cols].isnull().sum().sum()
data['NPI'].nunique()
# Remarks:
#
# 1. We don't need the `NPI` column; all its values are unique.
# 2. The `Credentials` column has missing values (~3% of the total).
# <!-- 3. All the `med_clos` are sparse in nature -->
# ### Basic plots
data[num_cols].iloc[:,2].value_counts()
cat_cols
for item in cat_cols[1:]:
    print('-'*25)
    print(data[item].value_counts())
cat_cols
# +
# Gender analysis
# -
plt.figure(figsize=(7,5))
sns.countplot(data=data,x='Gender')
plt.title('Count plot of Gender column')
plt.show()
# +
# State column
# -
plt.figure(figsize=(15,5))
sns.countplot(data=data,x='State')
plt.title('Count plot of State column')
plt.show()
# +
# lets check out `Speciality` column
# -
data['Specialty'].nunique()
plt.figure(figsize=(20,5))
sns.countplot(data=data,x='Specialty')
plt.title('Count plot of Specialty column')
plt.xticks(rotation=90)
plt.show()
data['Specialty'].value_counts()[:20]
# +
# filling missing values with mean
# -
# In `Credentials` we could do a lot more:
#
# 1. The `Credentials` column has multiple occupations in the same row.
# 2. \[PHD, MD\] and \[MD, PHD\] are treated differently.
# 3. P,A, is treated differently from P.A and PA.
# 4. MD also appears as M.D., M.D, M D, MD\`
# 5. This column is a mess.
cat_cols
# Remarks:
#
# 1. We don't need the `Credentials` column, which is a real mess; the `Specialty` column carries the same information.
#
# 2. Cat Features to remove - `NPI`, `Credentials`
#
# 3. Cat Features to keep - `Gender`, `State`, `Specialty`
#
# 4. Cat encoder pipeline -
#    1. Gender - simple 1/0 encoding using category_encoders
#    2. State - frequency encoding using category_encoders
#    3. Specialty - frequency encoding
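The encoding plan above can be sketched directly in pandas (a toy frame here, using the `Gender` and `State` column names from this dataset; the notebook's actual pipeline uses `category_encoders` instead):

```python
import pandas as pd

df = pd.DataFrame({
    "Gender": ["M", "F", "M", "M"],
    "State": ["NY", "CA", "NY", "TX"],
})

# Gender: simple 1/0 encoding
df["Gender_enc"] = (df["Gender"] == "M").astype(int)

# State: frequency encoding - each value replaced by its share of rows
state_freq = df["State"].value_counts(normalize=True)
df["State_enc"] = df["State"].map(state_freq)

print(df)  # NY maps to 0.5, CA and TX map to 0.25
```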
# ### Numerical Variable Analysis & Engineering
for item in num_cols:
    print('-'*25)
    print(f'frequency - {data[item].nunique()}')
print(f'Min \t Average \t Max \t Prob>0')
for item in num_cols:
    print('-'*40)
    prob = sum(data[item] > 0) / data[item].shape[0]
    print(f'{data[item].min()}\t{data[item].mean(): .4f} \t{data[item].max()} \t {prob:.4f}')
print(f'Maximum of all maxes - {data[num_cols].max().max()}')
print(f'Average of all maxes - {data[num_cols].max().mean()}')
print(f'Minimum of all maxes - {data[num_cols].max().min()}')
print(f'Maximum of all mins - {data[num_cols].min().max()}')
print(f'Minimum of all mins - {data[num_cols].min().min()}')
sns.distplot(data[num_cols[0]]);
sns.boxplot(data = data, x = num_cols[0],orient="v");
# Problem:
#
# 1. All the continuous columns have a large number of zeros, and the remaining values are counts.
# 2. The solutions I stumbled across are `two-part models (twopm)`, `hurdle models`, and `zero-inflated Poisson models (ZIP)`.
# 3. These models assume the *target* variable has lots of zeros and that the non-zero values are counts rather than just 1s. If the values were only 0s and 1s we could use a classification model, but here they are mostly 0s and otherwise count-like values such as 100, 120, 234, 898, etc.
# 4. In our case it is our *feature* variables that have lots of zeros.
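The two-part (hurdle) idea mentioned above can be sketched with scikit-learn on synthetic data: one classifier for "any prescriptions at all?" and one count regressor fit only on the non-zero rows. This is a hedged illustration of the technique, not the modeling actually used in this notebook:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
is_pos = rng.random(500) < 0.3                 # ~30% of rows are non-zero
y = np.where(is_pos, rng.poisson(5, 500), 0)   # zero-inflated counts

# Part 1: probability of any non-zero outcome
clf = LogisticRegression().fit(X, (y > 0).astype(int))
# Part 2: expected count, fit only where the outcome is positive
reg = PoissonRegressor().fit(X[y > 0], y[y > 0])

# Combined expectation: P(y > 0 | x) * E[y | y > 0, x]
expected = clf.predict_proba(X)[:, 1] * reg.predict(X)
print(expected[:3])
```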
data[data[num_cols[0]] > 0][num_cols[0]]
temp = 245
sns.distplot(data[data[num_cols[temp]] > 0][num_cols[temp]]);
temp = 5
sns.distplot(np.log(data[data[num_cols[temp]] > 0][num_cols[temp]]));
# +
from sklearn.preprocessing import power_transform
temp = 5
# data_without_0 = data[data[num_cols[temp]] > 0][num_cols[temp]]
data_without_0 = data[num_cols[temp]]
data_0 = np.array(data_without_0).reshape(-1,1)
data_0_trans = power_transform(data_0, method='yeo-johnson')
sns.distplot(data_0_trans);
# -
temp = 5
# data_without_0 = data[data[num_cols[temp]] > 0][num_cols[temp]]
data_without_0 = data[num_cols[temp]]
data_0 = np.array(data_without_0).reshape(-1,1)
data_0_trans = power_transform(data_0+1, method='box-cox')
# data_0_trans = np.log(data_0 + 1 )
# data_0
sns.distplot(data_0_trans);
from sklearn.decomposition import PCA
pca = PCA(n_components=0.8,svd_solver='full')
# pca = PCA(n_components='mle',svd_solver='full')
pca.fit(data[num_cols])
pca_var_ratio = pca.explained_variance_ratio_
pca_var_ratio
len(pca_var_ratio)
plt.plot(pca_var_ratio[:],'-*');
sum(pca_var_ratio[:10])
data[num_cols].sample(2)
pca.transform(data[num_cols].sample(1))
pca2 = pp.PCATransformer(cols=num_cols,n_components=0.8)
pca2.fit(data)
pca2.transform(data[num_cols].sample(1))
# ### Train test split and data saving
# train test split
from sklearn.model_selection import train_test_split
X = data.drop(target,axis=1)
y = data[target]
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.20, random_state=1)
pd.concat([X_train,y_train],axis=1).to_csv('../data/train.csv',index=False)
pd.concat([X_test,y_test],axis=1).to_csv('../data/test.csv',index=False)
# ## Data Engineering
from sklearn.preprocessing import LabelBinarizer
lbin = LabelBinarizer()
lbin.fit(X_train['Gender'])
gen_tra = lbin.transform(X_train['Gender'])
gen_tra
X_train[num_cols[:5]].info();
| notebooks/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### New to Plotly?
# Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
# <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
# <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
# #### Vertical and Horizontal Lines Positioned Relative to the Axes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 3.5, 6],
y=[1, 1.5, 1],
text=['Vertical Line', 'Horizontal Dashed Line', 'Diagonal dotted Line'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 7]
},
'yaxis': {
'range': [0, 2.5]
},
'shapes': [
# Line Vertical
{
'type': 'line',
'x0': 1,
'y0': 0,
'x1': 1,
'y1': 2,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
},
# Line Horizontal
{
'type': 'line',
'x0': 2,
'y0': 2,
'x1': 5,
'y1': 2,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 4,
'dash': 'dashdot',
},
},
# Line Diagonal
{
'type': 'line',
'x0': 4,
'y0': 0,
'x1': 6,
'y1': 2,
'line': {
'color': 'rgb(128, 0, 128)',
'width': 4,
'dash': 'dot',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-lines')
# -
# #### Lines Positioned Relative to the Plot & to the Axes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 6],
y=[1, 1],
text=['Line positioned relative to the plot',
'Line positioned relative to the axes'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 8]
},
'yaxis': {
'range': [0, 2]
},
'shapes': [
# Line reference to the axes
{
'type': 'line',
'xref': 'x',
'yref': 'y',
'x0': 4,
'y0': 0,
'x1': 8,
'y1': 1,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
},
# Line reference to the plot
{
'type': 'line',
'xref': 'paper',
'yref': 'paper',
'x0': 0,
'y0': 0,
'x1': 0.5,
'y1': 0.5,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 3,
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-line-ref')
# -
# #### Creating Tangent Lines with Shapes
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.linspace(1, 3, 200)
y0 = x0 * np.sin(np.power(x0, 2)) + 1
trace0 = go.Scatter(
x=x0,
y=y0,
)
data = [trace0]
layout = {
'title': "$f(x)=x\\sin(x^2)+1\\\\ f\'(x)=\\sin(x^2)+2x^2\\cos(x^2)$",
'shapes': [
{
'type': 'line',
'x0': 1,
'y0': 2.30756,
'x1': 1.75,
'y1': 2.30756,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
{
'type': 'line',
'x0': 2.5,
'y0': 3.80796,
'x1': 3.05,
'y1': 3.80796,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
{
'type': 'line',
'x0': 1.90,
'y0': -1.1827,
'x1': 2.50,
'y1': -1.1827,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='tangent-line')
# -
# #### Rectangles Positioned Relative to the Axes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 4.5],
y=[0.75, 0.75],
text=['Unfilled Rectangle', 'Filled Rectangle'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 7],
'showgrid': False,
},
'yaxis': {
'range': [0, 3.5]
},
'shapes': [
# unfilled Rectangle
{
'type': 'rect',
'x0': 1,
'y0': 1,
'x1': 2,
'y1': 3,
'line': {
'color': 'rgba(128, 0, 128, 1)',
},
},
# filled Rectangle
{
'type': 'rect',
'x0': 3,
'y0': 1,
'x1': 6,
'y1': 2,
'line': {
'color': 'rgba(128, 0, 128, 1)',
'width': 2,
},
'fillcolor': 'rgba(128, 0, 128, 0.7)',
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-rectangle')
# -
# #### Rectangle Positioned Relative to the Plot & to the Axes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 3],
y=[2.5, 2.5],
text=['Rectangle reference to the plot',
'Rectangle reference to the axes'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 4],
'showgrid': False,
},
'yaxis': {
'range': [0, 4]
},
'shapes': [
# Rectangle reference to the axes
{
'type': 'rect',
'xref': 'x',
'yref': 'y',
'x0': 2.5,
'y0': 0,
'x1': 3.5,
'y1': 2,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
'fillcolor': 'rgba(55, 128, 191, 0.6)',
},
# Rectangle reference to the plot
{
'type': 'rect',
'xref': 'paper',
'yref': 'paper',
'x0': 0.25,
'y0': 0,
'x1': 0.5,
'y1': 0.5,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 3,
},
'fillcolor': 'rgba(50, 171, 96, 0.6)',
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-rectangle-ref')
# -
# #### Highlighting Time Series Regions with Rectangle Shapes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=['2015-02-01', '2015-02-02', '2015-02-03', '2015-02-04', '2015-02-05',
'2015-02-06', '2015-02-07', '2015-02-08', '2015-02-09', '2015-02-10',
'2015-02-11', '2015-02-12', '2015-02-13', '2015-02-14', '2015-02-15',
'2015-02-16', '2015-02-17', '2015-02-18', '2015-02-19', '2015-02-20',
'2015-02-21', '2015-02-22', '2015-02-23', '2015-02-24', '2015-02-25',
'2015-02-26', '2015-02-27', '2015-02-28'],
y=[-14, -17, -8, -4, -7, -10, -12, -14, -12, -7, -11, -7, -18, -14, -14,
-16, -13, -7, -8, -14, -8, -3, -9, -9, -4, -13, -9, -6],
mode='lines',
name='temperature'
)
data = [trace0]
layout = {
# to highlight the timestamp we use shapes and create a rectangular
'shapes': [
# 1st highlight during Feb 4 - Feb 6
{
'type': 'rect',
# x-reference is assigned to the x-values
'xref': 'x',
# y-reference is assigned to the plot paper [0,1]
'yref': 'paper',
'x0': '2015-02-04',
'y0': 0,
'x1': '2015-02-06',
'y1': 1,
'fillcolor': '#d3d3d3',
'opacity': 0.2,
'line': {
'width': 0,
}
},
# 2nd highlight during Feb 20 - Feb 23
{
'type': 'rect',
'xref': 'x',
'yref': 'paper',
'x0': '2015-02-20',
'y0': 0,
'x1': '2015-02-22',
'y1': 1,
'fillcolor': '#d3d3d3',
'opacity': 0.2,
'line': {
'width': 0,
}
}
]
}
py.iplot({'data': data, 'layout': layout}, filename='timestamp-highlight')
# -
# #### Circles Positioned Relative to the Axes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 3.5],
y=[0.75, 2.5],
text=['Unfilled Circle',
'Filled Circle'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 4.5],
'zeroline': False,
},
'yaxis': {
'range': [0, 4.5]
},
'width': 800,
'height': 800,
'shapes': [
# unfilled circle
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': 1,
'y0': 1,
'x1': 3,
'y1': 3,
'line': {
'color': 'rgba(50, 171, 96, 1)',
},
},
# filled circle
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'fillcolor': 'rgba(50, 171, 96, 0.7)',
'x0': 3,
'y0': 3,
'x1': 4,
'y1': 4,
'line': {
'color': 'rgba(50, 171, 96, 1)',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-circle')
# -
# #### Highlighting Clusters of Scatter Points with Circle Shapes
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.normal(2, 0.45, 300)
y0 = np.random.normal(2, 0.45, 300)
x1 = np.random.normal(6, 0.4, 200)
y1 = np.random.normal(6, 0.4, 200)
x2 = np.random.normal(4, 0.3, 200)
y2 = np.random.normal(4, 0.3, 200)
trace0 = go.Scatter(
x=x0,
y=y0,
mode='markers',
)
trace1 = go.Scatter(
x=x1,
y=y1,
mode='markers'
)
trace2 = go.Scatter(
x=x2,
y=y2,
mode='markers'
)
trace3 = go.Scatter(
x=x1,
y=y0,
mode='markers'
)
layout = {
'shapes': [
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x0),
'y0': min(y0),
'x1': max(x0),
'y1': max(y0),
'opacity': 0.2,
'fillcolor': 'blue',
'line': {
'color': 'blue',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x1),
'y0': min(y1),
'x1': max(x1),
'y1': max(y1),
'opacity': 0.2,
'fillcolor': 'orange',
'line': {
'color': 'orange',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x2),
'y0': min(y2),
'x1': max(x2),
'y1': max(y2),
'opacity': 0.2,
'fillcolor': 'green',
'line': {
'color': 'green',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x1),
'y0': min(y0),
'x1': max(x1),
'y1': max(y0),
'opacity': 0.2,
'fillcolor': 'red',
'line': {
'color': 'red',
},
},
],
'showlegend': False,
}
data = [trace0, trace1, trace2, trace3]
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='clusters')
# -
# #### Venn Diagram with Circle Shapes
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 1.75, 2.5],
y=[1, 1, 1],
text=['$A$', '$A+B$', '$B$'],
mode='text',
textfont=dict(
color='black',
size=18,
family='Arial',
)
)
data = [trace0]
layout = {
'xaxis': {
'showticklabels': False,
'showgrid': False,
'zeroline': False,
},
'yaxis': {
'showticklabels': False,
'showgrid': False,
'zeroline': False,
},
'shapes': [
{
'opacity': 0.3,
'xref': 'x',
'yref': 'y',
'fillcolor': 'blue',
'x0': 0,
'y0': 0,
'x1': 2,
'y1': 2,
'type': 'circle',
'line': {
'color': 'blue',
},
},
{
'opacity': 0.3,
'xref': 'x',
'yref': 'y',
'fillcolor': 'gray',
'x0': 1.5,
'y0': 0,
'x1': 3.5,
'y1': 2,
'type': 'circle',
'line': {
'color': 'gray',
},
}
],
'margin': {
'l': 20,
'r': 20,
'b': 100
},
'height': 600,
'width': 800,
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='venn-diagram')
# -
# #### SVG Paths
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 1, 8, 8],
y=[0.25, 9, 2, 6],
text=['Filled Triangle',
'Filled Polygon',
'Quadratic Bezier Curves',
'Cubic Bezier Curves'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 9],
'zeroline': False,
},
'yaxis': {
'range': [0, 11],
'showgrid': False,
},
'shapes': [
# Quadratic Bezier Curves
{
'type': 'path',
'path': 'M 4,4 Q 6,0 8,4',
'line': {
'color': 'rgb(93, 164, 214)',
},
},
# Cubic Bezier Curves
{
'type': 'path',
'path': 'M 1,4 C 2,8 6,4 8,8',
'line': {
'color': 'rgb(207, 114, 255)',
},
},
# filled Triangle
{
'type': 'path',
'path': ' M 1 1 L 1 3 L 4 1 Z',
'fillcolor': 'rgba(44, 160, 101, 0.5)',
'line': {
'color': 'rgb(44, 160, 101)',
},
},
# filled Polygon
{
'type': 'path',
'path': ' M 3,7 L2,8 L2,9 L3,10 L4,10 L5,9 L5,8 L4,7 Z',
'fillcolor': 'rgba(255, 140, 184, 0.5)',
'line': {
'color': 'rgb(255, 140, 184)',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-path')
# -
# ### Dash Example
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-shapesplot/", width="100%", height="650px", frameBorder="0")
# Find the dash app source code [here](https://github.com/plotly/simple-example-chart-apps/tree/master/shapes).
# #### Reference
# See https://plot.ly/python/reference/#layout-shapes for more information and chart attribute options!
# +
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
# ! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'shapes.ipynb', 'python/shapes/', 'Shapes | plotly',
'How to make SVG shapes in python. Examples of lines, circle, rectangle, and path.',
title = 'Shapes | plotly',
name = 'Shapes',
thumbnail='thumbnail/shape.jpg', language='python',
page_type='example_index', has_thumbnail='true', display_as='style_opt', order=5,
ipynb='~notebook_demo/14')
| _posts/python/style/shapes/shapes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hafezgh/Hate-Speech-Detection-in-Social-Media/blob/main/BertRNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="s4ga5LLSSPQN"
# # Imports
# + colab={"base_uri": "https://localhost:8080/"} id="lqOFKm6QLoua" outputId="b55f41b2-0f75-4ace-f673-46435d894f1e"
# !pip install transformers==3.0.0
# !pip install emoji
import gc
import os
import emoji
import re
import string
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from transformers import AutoModel
from transformers import BertModel, BertTokenizer
# + colab={"base_uri": "https://localhost:8080/"} id="f3lx2XKTSOn6" outputId="53e6764e-9d5c-43a1-f20b-c9d00204bc45"
# !git clone https://github.com/hafezgh/Hate-Speech-Detection-in-Social-Media
# + [markdown] id="7tylYXo4SRoi"
# # Model
# + id="CUcU4tEVLvun"
class BERT_Arch(nn.Module):
def __init__(self, bert):
super(BERT_Arch, self).__init__()
self.bert = bert  # use the BERT instance passed in (frozen later), rather than loading a second copy
### RNN
self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
## Fully-connected
self.fc = nn.Linear(256*2, 3)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, sent_id, mask):
sequence_output, pooled_output = self.bert(sent_id, attention_mask=mask)
gc.collect()
torch.cuda.empty_cache()
lstm_output, (h,c) = self.lstm(sequence_output)
hidden = torch.cat((lstm_output[:,-1, :256],lstm_output[:,0, 256:]),dim=-1)
linear_output = self.fc(hidden.view(-1,256*2))
c = self.softmax(linear_output)
return c
def read_dataset():
data = pd.read_csv("Hate-Speech-Detection-in-Social-Media/labeled_data.csv")
data = data.drop(['count', 'hate_speech', 'offensive_language', 'neither'], axis=1)
#data = data.loc[0:9599,:]
print(len(data))
return data['tweet'].tolist(), data['class']
def pre_process_dataset(values):
new_values = list()
# Emoticons
emoticons = [':-)', ':)', '(:', '(-:', ':))', '((:', ':-D', ':D', 'X-D', 'XD', 'xD', 'xD', '<3', '</3', ':\*',
';-)',
';)', ';-D', ';D', '(;', '(-;', ':-(', ':(', '(:', '(-:', ':,(', ':\'(', ':"(', ':((', ':D', '=D',
'=)',
'(=', '=(', ')=', '=-O', 'O-=', ':o', 'o:', 'O:', 'O:', ':-o', 'o-:', ':P', ':p', ':S', ':s', ':@',
':>',
':<', '^_^', '^.^', '>.>', 'T_T', 'T-T', '-.-', '*.*', '~.~', ':*', ':-*', 'xP', 'XP', 'XP', 'Xp',
':-|',
':->', ':-<', '$_$', '8-)', ':-P', ':-p', '=P', '=p', ':*)', '*-*', 'B-)', 'O.o', 'X-(', ')-X']
for value in values:
# Remove dots
text = value.replace(".", "").lower()
text = re.sub(r"[^a-zA-Z?.!,¿]+", " ", text)
users = re.findall("[@]\w+", text)
for user in users:
text = text.replace(user, "<user>")
urls = re.findall(r'(https?://[^\s]+)', text)
if len(urls) != 0:
for url in urls:
text = text.replace(url, "<url >")
for emo in text:
if emo in emoji.UNICODE_EMOJI:
text = text.replace(emo, "<emoticon >")
for emo in emoticons:
text = text.replace(emo, "<emoticon >")
numbers = re.findall('[0-9]+', text)
for number in numbers:
text = text.replace(number, "<number >")
text = text.replace('#', "<hashtag >")
text = re.sub(r"([?.!,¿])", r" ", text)
text = "".join(l for l in text if l not in string.punctuation)
text = re.sub(r'[" "]+', " ", text)
new_values.append(text)
return new_values
def data_process(data, labels):
input_ids = []
attention_masks = []
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
for sentence in data:
bert_inp = bert_tokenizer(sentence, max_length=36,
padding='max_length',
truncation=True, return_token_type_ids=False)
input_ids.append(bert_inp['input_ids'])
attention_masks.append(bert_inp['attention_mask'])
#del bert_tokenizer
#gc.collect()
#torch.cuda.empty_cache()
input_ids = np.asarray(input_ids)
attention_masks = np.array(attention_masks)
labels = np.array(labels)
return input_ids, attention_masks, labels
def load_and_process():
data, labels = read_dataset()
num_of_labels = len(labels.unique())
input_ids, attention_masks, labels = data_process(pre_process_dataset(data), labels)
return input_ids, attention_masks, labels
# function to train the model
def train():
model.train()
total_loss, total_accuracy = 0, 0
# empty list to save model predictions
total_preds = []
# iterate over batches
total = len(train_dataloader)
for i, batch in enumerate(train_dataloader):
step = i+1
percent = "{0:.2f}".format(100 * (step / float(total)))
lossp = "{0:.2f}".format(total_loss/(total*batch_size))
filledLength = int(100 * step // total)
bar = '█' * filledLength + '>' * (filledLength < 100) + '.' * (99 - filledLength)
print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='')
# push the batch to gpu
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
del batch
gc.collect()
torch.cuda.empty_cache()
# clear previously calculated gradients
model.zero_grad()
# get model predictions for the current batch
#sent_id = torch.tensor(sent_id).to(device).long()
preds = model(sent_id, mask)
# compute the loss between actual and predicted values
loss = cross_entropy(preds, labels)
# add on to the total loss
total_loss += float(loss.item())
# backward pass to calculate the gradients
loss.backward()
# clip the the gradients to 1.0. It helps in preventing the exploding gradient problem
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# update parameters
optimizer.step()
# model predictions are stored on GPU. So, push it to CPU
#preds = preds.detach().cpu().numpy()
# append the model predictions
#total_preds.append(preds)
total_preds.append(preds.detach().cpu().numpy())
gc.collect()
torch.cuda.empty_cache()
# compute the training loss of the epoch
avg_loss = total_loss / (len(train_dataloader)*batch_size)
# predictions are in the form of (no. of batches, size of batch, no. of classes).
# reshape the predictions in form of (number of samples, no. of classes)
total_preds = np.concatenate(total_preds, axis=0)
# returns the loss and predictions
return avg_loss, total_preds
# function for evaluating the model
def evaluate():
print("\n\nEvaluating...")
# deactivate dropout layers
model.eval()
total_loss, total_accuracy = 0, 0
# empty list to save the model predictions
total_preds = []
# iterate over batches
total = len(val_dataloader)
for i, batch in enumerate(val_dataloader):
step = i+1
percent = "{0:.2f}".format(100 * (step / float(total)))
lossp = "{0:.2f}".format(total_loss/(total*batch_size))
filledLength = int(100 * step // total)
bar = '█' * filledLength + '>' * (filledLength < 100) + '.' * (99 - filledLength)
print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='')
# push the batch to gpu
batch = [t.to(device) for t in batch]
sent_id, mask, labels = batch
del batch
gc.collect()
torch.cuda.empty_cache()
# deactivate autograd
with torch.no_grad():
# model predictions
preds = model(sent_id, mask)
# compute the validation loss between actual and predicted values
loss = cross_entropy(preds, labels)
total_loss += float(loss.item())
#preds = preds.detach().cpu().numpy()
#total_preds.append(preds)
total_preds.append(preds.detach().cpu().numpy())
gc.collect()
torch.cuda.empty_cache()
# compute the validation loss of the epoch
avg_loss = total_loss / (len(val_dataloader)*batch_size)
# reshape the predictions in form of (number of samples, no. of classes)
total_preds = np.concatenate(total_preds, axis=0)
return avg_loss, total_preds
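The reshape described in the comments of `train()` and `evaluate()` — stacking per-batch prediction arrays into one `(number of samples, no. of classes)` array — can be seen in miniature with numpy (illustrative shapes, not the real batches):

```python
import numpy as np

# Three batches of 3-class predictions; the last batch is smaller,
# just as the final DataLoader batch usually is.
batches = [np.zeros((32, 3)), np.zeros((32, 3)), np.zeros((17, 3))]
all_preds = np.concatenate(batches, axis=0)
print(all_preds.shape)  # (81, 3)
```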
# + [markdown] id="5v8P9TN4So5Y"
# # Train
# + id="EH3HDzr9WDgY"
# Specify the GPU
# Setting up the device for GPU usage
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Load Data-set ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
input_ids, attention_masks, labels = load_and_process()
df = pd.DataFrame(list(zip(input_ids, attention_masks)), columns=['input_ids', 'attention_masks'])
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# class = class label for majority of CF users. 0 - hate speech 1 - offensive language 2 - neither
# ~~~~~~~~~~ Split train data-set into train, validation and test sets ~~~~~~~~~~#
train_text, temp_text, train_labels, temp_labels = train_test_split(df, labels,
random_state=2018, test_size=0.2, stratify=labels)
val_text, test_text, val_labels, test_labels = train_test_split(temp_text, temp_labels,
random_state=2018, test_size=0.5, stratify=temp_labels)
del temp_text
gc.collect()
torch.cuda.empty_cache()
train_count = len(train_labels)
test_count = len(test_labels)
val_count = len(val_labels)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~ Import BERT Model and BERT Tokenizer ~~~~~~~~~~~~~~~~~~~~~#
# import BERT-base pretrained model
bert = AutoModel.from_pretrained('bert-base-uncased')
# Load the BERT tokenizer
#tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
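The class-distribution comment above can be checked with a quick pandas sketch (dummy integer-coded labels in the same scheme — 0 hate speech, 1 offensive, 2 neither — not the real dataset):

```python
import pandas as pd

# Illustrative labels only; the real ones come from read_dataset().
labels = pd.Series([0, 1, 1, 2, 1, 0, 2, 1])
counts = labels.value_counts().sort_index()
print(counts.tolist())  # [2, 4, 2]
```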
# + id="bUtglOzvL6bU" outputId="26f580f9-69f7-4633-d58f-a210299b1fc5" colab={"base_uri": "https://localhost:8080/"}
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tokenization ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# for train set
train_seq = torch.tensor(train_text['input_ids'].tolist())
train_mask = torch.tensor(train_text['attention_masks'].tolist())
train_y = torch.tensor(train_labels.tolist())
# for validation set
val_seq = torch.tensor(val_text['input_ids'].tolist())
val_mask = torch.tensor(val_text['attention_masks'].tolist())
val_y = torch.tensor(val_labels.tolist())
# for test set
test_seq = torch.tensor(test_text['input_ids'].tolist())
test_mask = torch.tensor(test_text['attention_masks'].tolist())
test_y = torch.tensor(test_labels.tolist())
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Create DataLoaders ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
# define a batch size
batch_size = 32
# wrap tensors
train_data = TensorDataset(train_seq, train_mask, train_y)
# sampler for sampling the data during training
train_sampler = RandomSampler(train_data)
# dataLoader for train set
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
# wrap tensors
val_data = TensorDataset(val_seq, val_mask, val_y)
# sampler for sampling the data during training
val_sampler = SequentialSampler(val_data)
# dataLoader for validation set
val_dataloader = DataLoader(val_data, sampler=val_sampler, batch_size=batch_size)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Freeze BERT Parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# freeze all the parameters
for param in bert.parameters():
param.requires_grad = False
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# pass the pre-trained BERT to our define architecture
model = BERT_Arch(bert)
# push the model to GPU
model = model.to(device)
# optimizer from hugging face transformers
from transformers import AdamW
# define the optimizer
optimizer = AdamW(model.parameters(), lr=2e-5)
#from sklearn.utils.class_weight import compute_class_weight
# compute the class weights
#class_wts = compute_class_weight('balanced', np.unique(train_labels), train_labels)
#print(class_wts)
# convert class weights to tensor
#weights = torch.tensor(class_wts, dtype=torch.float)
#weights = weights.to(device)
# loss function
#cross_entropy = nn.NLLLoss(weight=weights)
cross_entropy = nn.NLLLoss()
# set initial loss to infinite
best_valid_loss = float('inf')
# empty lists to store training and validation loss of each epoch
#train_losses = []
#valid_losses = []
#if os.path.isfile("/content/drive/MyDrive/saved_weights.pth") == False:
#if os.path.isfile("saved_weights.pth") == False:
# number of training epochs
epochs = 3
current = 1
# for each epoch
while current <= epochs:
print(f'\nEpoch {current} / {epochs}:')
# train model
train_loss, _ = train()
# evaluate model
valid_loss, _ = evaluate()
# save the best model
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
#torch.save(model.state_dict(), 'saved_weights.pth')
# append training and validation loss
#train_losses.append(train_loss)
#valid_losses.append(valid_loss)
print(f'\n\nTraining Loss: {train_loss:.3f}')
print(f'Validation Loss: {valid_loss:.3f}')
current = current + 1
#else:
#print("Got weights!")
# load weights of best model
#model.load_state_dict(torch.load("saved_weights.pth"))
#model.load_state_dict(torch.load("/content/drive/MyDrive/saved_weights.pth"), strict=False)
# get predictions for test data
gc.collect()
torch.cuda.empty_cache()
with torch.no_grad():
preds = model(test_seq.to(device), test_mask.to(device))
#preds = model(test_seq, test_mask)
preds = preds.detach().cpu().numpy()
print("Performance:")
# model's performance
preds = np.argmax(preds, axis=1)
print('Classification Report')
print(classification_report(test_y, preds))
print("Accuracy: " + str(accuracy_score(test_y, preds)))
# + id="LmpduihHL7wb"
| BertRNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.3
# language: julia
# name: julia-1.5
# ---
# # GPS
# Based on <NAME>'s method, I expand the four constraint equations as follows:
#
# $
# \begin{align}
# 2.4x + 4.6y + 0.4z - 2(0.047^2)9.9999 t &= 1.2^2 + 2.3^2 + 0.2^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
# -1x + 3y + 3.6z - 2(0.047^2)13.0681 t &= 0.5^2 + 1.5^2 + 1.8^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
# -3.4x + 1.6y + 2.6z - 2(0.047^2)2.0251 t &= 1.7^2 + 0.8^2 + 1.3^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
# 3.4x + 2.8y - 1z - 2(0.047^2)10.5317 t &= 1.7^2 + 1.4^2 + 0.5^2 + x^2 + y^2 + z^2 - 0.047^2 t^2
# \end{align},
# $
# And subtract the first equation from the rest.
#
# $
# \begin{align}
# -3.4x - 1.6y + 3.2z - 2(0.047^2)3.0682 t &= -1.0299 \\
# -5.8x - 3y + 2.2z + 2(0.047^2)7.9750 t &= -1.5499 \\
# 1x - 1.8y - 1.4z - 2(0.047^2)0.5318 t &= -1.6699
# \end{align},
# $
# Converting it to matrix format as follows:
#
# $
# \begin{equation}
# \begin{bmatrix}
# -3.4 & -1.6 & 3.2 & -0.01355 \\
# -5.8 & -3 & 2.2 & 0.0352 \\
# 1 & -1.8 & -1.4 & -0.00235
# \end{bmatrix}
# \begin{bmatrix}
# x \\ y \\ z \\ t
# \end{bmatrix}
# =
# \begin{bmatrix}
# -1.0299 \\ -1.5499 \\ -1.6699
# \end{bmatrix}
# \end{equation}.
# $
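As a quick numeric sanity check (a sketch in numpy rather than the notebook's Julia), the t-column entries of the matrix above equal -2*(0.047^2)*(t_i - t_1) for each satellite's clock value t_i:

```python
import numpy as np

# Clock values from the four constraint equations above.
t = np.array([9.9999, 13.0681, 2.0251, 10.5317])
t_coeffs = -2 * 0.047**2 * (t[1:] - t[0])
print(t_coeffs)  # matches -0.01355, 0.0352, -0.00235 up to rounding
```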
# # Newton's Method
#
# To solve $f(x)=0$ with Newton's method, we collect the state variables $x=(x, y, z, t)$ and write the nonlinear distance equations as:
#
# $
# \begin{equation}
# f(x) = \frac{1}{2}
# \begin{bmatrix}
# (x-1.2)^2+(y-2.3)^2+(z-0.2)^2-(0.047*(t-09.9999))^2 \\
# (x+0.5)^2+(y-1.5)^2+(z-1.8)^2-(0.047*(t-13.0681))^2 \\
# (x+1.7)^2+(y-0.8)^2+(z-1.3)^2-(0.047*(t-02.0251))^2 \\
# (x-1.7)^2+(y-1.4)^2+(z+0.5)^2-(0.047*(t-10.5317))^2
# \end{bmatrix}
# = 0.
# \end{equation}
# $
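For completeness, the update rule that the `gps` solver below iterates is the standard Newton step (stated here explicitly; it is implicit in the code):

```latex
x_{k+1} = x_k - Df(x_k)^{-1} f(x_k)
```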
# +
f(x) = [(x[1]-1.2)^2+(x[2]-2.3)^2+(x[3]-0.2)^2-(0.047*(x[4]-09.9999))^2,
(x[1]+0.5)^2+(x[2]-1.5)^2+(x[3]-1.8)^2-(0.047*(x[4]-13.0681))^2,
(x[1]+1.7)^2+(x[2]-0.8)^2+(x[3]-1.3)^2-(0.047*(x[4]-02.0251))^2,
(x[1]-1.7)^2+(x[2]-1.4)^2+(x[3]+0.5)^2-(0.047*(x[4]-10.5317))^2]./2
f([1.2 2.3 0.2 9.9999])
# -
# ## Jacobian
#
# $
# \begin{equation}
# Df(x_0) =
# \begin{bmatrix}
# (x-1.2) & (y-2.3) & (z-0.2) & -0.047^2(t-09.9999)\\
# (x+0.5) & (y-1.5) & (z-1.8) & -0.047^2(t-13.0681)\\
# (x+1.7) & (y-0.8) & (z-1.3) & -0.047^2(t-02.0251)\\
# (x-1.7) & (y-1.4) & (z+0.5) & -0.047^2(t-10.5317)
# \end{bmatrix}.
# \end{equation}
# $
# +
D(x) = [x[1]-1.2 x[2]-2.3 x[3]-0.2 -0.047^2*(x[4]-09.9999);
x[1]+0.5 x[2]-1.5 x[3]-1.8 -0.047^2*(x[4]-13.0681);
x[1]+1.7 x[2]-0.8 x[3]-1.3 -0.047^2*(x[4]-02.0251);
x[1]-1.7 x[2]-1.4 x[3]+0.5 -0.047^2*(x[4]-10.5317)]
D([1.7 1.4 -0.5 10.5317])
# -
# # Solutions
# +
using LinearAlgebra
function gps(x)
t = 1 # tolerance of answers
for i=1:1e5
d = D(x)\f(x)
x -= d
if norm(d)<1e-15 return x end
end
return x
end
s = gps([0 0 0 0])
# +
b = [-1.0299;-1.5499;-1.6699]
A = [-3.4 -1.6 3.2 -0.01355 ;
-5.8 -3 2.2 0.0352 ;
1 -1.8 -1.4 -0.00235]
s = A\b
# -
# ## Adjourn
# +
using Dates
println("mahdiar")
Dates.format(now(), "Y/U/d HH:MM")
| HW08/3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# ## Series
s = pd.Series([3, 4, 5, 6])
s
type(s)
s.values
s.index
s2 = pd.Series([10, 11, 12, 13], index=['A', 'B', 'C', 'D'])
s2
s2.values
s2.index
s2
s2[0]
s2['A']
# ## DataFrame
data = {
'Country': ['Bangladesh', 'India', 'Pakistan'],
'Code': [1200, 1300, 1400],
'Population': [1122000, 11222222, 50000000]
}
data
df = pd.DataFrame(data, columns=['Country', 'Code', 'Population'])
df
type(df)
| numpy/notebooks/01-Pandas Series and DataFrame.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false}
# %matplotlib inline
# -
#
# Particle stepper
# ================
#
# An example of PlasmaPy's particle stepper class, currently in need of a rewrite
# for speed.
#
# + jupyter={"outputs_hidden": false}
import numpy as np
from astropy import units as u
from plasmapy.plasma import Plasma
from plasmapy.simulation import ParticleTracker
from plasmapy.formulary import gyrofrequency
# + raw_mimetype="text/restructuredtext" active=""
# Take a look at the docs to :func:`~plasmapy.formulary.parameters.gyrofrequency` and :class:`~plasmapy.simulation.particletracker.ParticleTracker` for more information
# -
# Initialize a plasma. This will be a source of electric and magnetic
# fields for our particles to move in.
#
#
# + jupyter={"outputs_hidden": false}
plasma = Plasma(domain_x=np.linspace(-1, 1, 10) * u.m,
domain_y=np.linspace(-1, 1, 10) * u.m,
domain_z=np.linspace(-1, 1, 10) * u.m)
# -
# Initialize the fields. We'll take B in the x direction
# and E in the y direction, which gets us an E cross B drift
# in the z direction.
#
#
# + jupyter={"outputs_hidden": false}
B0 = 4 * u.T
plasma.magnetic_field[0, :, :, :] = np.ones((10, 10, 10)) * B0
E0 = 2 * u.V / u.m
plasma.electric_field[1, :, :, :] = np.ones((10, 10, 10)) * E0
# -
# Calculate the timestep. We'll take one proton `p`, compute its gyrofrequency, invert that
# to get the gyroperiod, and resolve that into 10 steps for higher accuracy.
#
#
# + jupyter={"outputs_hidden": false}
freq = gyrofrequency(B0, 'p').to(u.Hz, equivalencies=u.dimensionless_angles())
gyroperiod = (1/freq).to(u.s)
steps_to_gyroperiod = 10
timestep = gyroperiod / steps_to_gyroperiod
# -
# Initialize the trajectory calculation.
#
#
# + jupyter={"outputs_hidden": false}
number_steps = steps_to_gyroperiod * int(2 * np.pi)
trajectory = ParticleTracker(plasma, 'p', 1, 1, timestep, number_steps)
# -
# We still have to initialize the particle's velocity. We'll limit ourselves to
# one in the x direction, parallel to the magnetic field B -
# that way, it won't turn in the z direction.
#
#
# + jupyter={"outputs_hidden": false}
trajectory.v[0][0] = 1 * (u.m / u.s)
# -
# Run the pusher and plot the trajectory versus time.
#
#
# + jupyter={"outputs_hidden": false} tags=["nbsphinx-thumbnail"]
trajectory.run()
trajectory.plot_time_trajectories()
# -
# Plot the shape of the trajectory in 3D.
#
#
# + jupyter={"outputs_hidden": false}
trajectory.plot_trajectories()
# -
# As a test, we calculate the mean velocity in the z direction from the
# velocity and position
#
#
# + jupyter={"outputs_hidden": false}
vmean = trajectory.velocity_history[:, :, 2].mean()
print(f"The calculated drift velocity is {vmean:.4f} to compare with the "
f"theoretical E0/B0 = {E0/B0:.4f}")
# -
# and from position:
#
#
# + jupyter={"outputs_hidden": false}
Vdrift = trajectory.position_history[-1, 0, 2] / (trajectory.NT * trajectory.dt)
print(f"The calculated drift velocity from position is {Vdrift:.4f}")
| docs/notebooks/particle_stepper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import praw
reddit = praw.Reddit(client_id='2j7m151wFhaamg',
client_secret='2evAeoLsEeGAkScvahK6-UypRP4',
user_agent='bot-dec',
username = 'iamcool_7',
password = '<PASSWORD>')
# +
import requests
import praw
import time
import pandas as pd
# Authentication: http://praw.readthedocs.io/en/latest/getting_started/authentication.html
# reddit = praw.Reddit(client_id='SI8pN3DSbt0zor', client_secret='<KEY>',
# password='<PASSWORD>', user_agent='testscript by /u/fakebot3',
# username='fakebot3')
def submissions_pushshift_praw(subreddit, start=None, end=None, limit=100, extra_query=""):
"""
A simple function that returns a list of PRAW submission objects during a particular period from a defined sub.
This function serves as a replacement for the now deprecated PRAW `submissions()` method.
:param subreddit: A subreddit name to fetch submissions from.
:param start: A Unix time integer. Posts fetched will be AFTER this time. (default: None)
:param end: A Unix time integer. Posts fetched will be BEFORE this time. (default: None)
:param limit: There needs to be a defined limit of results (default: 100), or Pushshift will return only 25.
:param extra_query: A query string is optional. If an extra_query string is not supplied,
the function will just grab everything from the defined time period. (default: empty string)
Submissions are yielded newest first.
For more information on PRAW, see: https://github.com/praw-dev/praw
For more information on Pushshift, see: https://github.com/pushshift/api
"""
matching_praw_submissions = []
# Default time values if none are defined (credit to u/bboe's PRAW `submissions()` for this section)
utc_offset = 28800
now = int(time.time())
start = max(int(start) + utc_offset if start else 0, 0)
end = min(int(end) if end else now, now) + utc_offset
# Format our search link properly.
search_link = ('https://api.pushshift.io/reddit/submission/search/'
'?subreddit={}&after={}&before={}&sort_type=score&sort=asc&limit={}&q={}')
search_link = search_link.format(subreddit, start, end, limit, extra_query)
# Get the data from Pushshift as JSON.
retrieved_data = requests.get(search_link)
returned_submissions = retrieved_data.json()['data']
# Iterate over the returned submissions to convert them to PRAW submission objects.
for submission in returned_submissions:
# Take the ID, fetch the PRAW submission object, and append to our list
praw_submission = reddit.submission(id=submission['id'])
matching_praw_submissions.append(praw_submission)
# Return all PRAW submissions that were obtained.
return matching_praw_submissions
# +
posts = []
# DEC 2020
print("\n# Example 1") # Simple query with just times and a subreddit.
end_time_utc = 1577709749
start_time_utc = end_time_utc - 50000
diff = 50000
count = 1
# print(type(diff))
# print(type(start_time_utc))
while(start_time_utc > 1575204149):
print("posts collected:"+str(len(posts))+" utc:"+str(start_time_utc))
for data in submissions_pushshift_praw('india',start_time_utc, end_time_utc, 1000):
posts.append([str(data.title)+str(data.selftext),data.created_utc, data.link_flair_background_color, data.link_flair_text, data.link_flair_text_color , data.num_comments, data.score ])
if(count % 50 == 0):
print(count)
count += 1
end_time_utc = start_time_utc
start_time_utc = start_time_utc - diff
posts = pd.DataFrame(posts, columns = ['text', 'created_utc', 'flair_colour', 'flair', 'flair_text_colour', 'num_comments', 'score'])
posts.tail()
posts.to_csv('DEC_2020_R_INDIA.csv')
# # print("\n# Example 2") # Contains a specific query.
# # for submission in submissions_pushshift_praw(subreddit='translator', start=1514793600, end=1514880000,
# # extra_query="French"):
# # print(submission.title)
# print("\n# Example 3") # Just a subreddit specified.
# for data in submissions_pushshift_praw('india'):
# print(data.link_flair_background_color)
# print(data.link_flair_text_color)
# print(str(data.title)+str(data.selftext))
# print(data.created_utc)
# print(data.flair)
# example_bot()
# -
# 1577059749
posts = pd.DataFrame(posts, columns = ['text', 'created_utc', 'flair_colour', 'flair', 'flair_text_colour', 'num_comments', 'score'])
posts.tail()
posts.to_csv('DEC_2020_R_INDIA.csv')
| SCRAPPING/srapper-parallel-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/pcsilcan/pcd/blob/master/20202/pcd_20202_0901_intro_channels.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="aFTtUOzjZ8zn"
# # Channels!
#
# One process must write and a distinct process must read. A synchronous channel cannot be read and written at the same time by a single process.
# + id="Y9Ex3b7eZ4mp" outputId="f9c2d156-fcb4-46ad-f2ae-cca19ba1f0e2" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%writefile 1.go
package main
import (
"fmt"
)
func main() {
var ch chan string
ch = make(chan string)
go func() {
ch<- "Hola, Mundo canalizado!"
}()
str := <-ch
fmt.Printf("Mensaje: %s\n", str)
}
# + id="jFkB1lHrbWAb" outputId="0f58a139-3f35-4592-d1f9-9573eb602efd" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !go run 1.go
# + id="r7xkFQ92bYUZ" outputId="ab49b1fa-15a8-4baa-d40d-0c690c53c257" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%writefile 2.go
package main
import (
"fmt"
"math/rand"
"time"
)
func rng(id, n int, ch chan string) {
for i := 0; i < n; i++ {
time.Sleep(time.Millisecond)
num := rand.Intn(100)
ch <- fmt.Sprintf("pid=%d\ti=%d\tnumber=%d", id, i, num);
}
}
func main() {
rand.Seed(time.Now().UTC().UnixNano())
var ch chan string
ch = make(chan string)
n1, n2 := 11, 14
go rng(1, n1, ch)
go rng(2, n2, ch)
for i := 0; i < n1+n2; i++ {
fmt.Printf("Mensaje: %s\n", <-ch)
}
}
# + id="sCgftkFZbbWV" outputId="648ccba8-1dc0-4750-db5d-0f7609674c7c" colab={"base_uri": "https://localhost:8080/", "height": 437}
# !go run 2.go
# + id="FJCMb24reaom" outputId="2b6d9204-e8b8-4d05-eb43-72a7f80673f6" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%writefile 3.go
package main
import (
"fmt"
"math/rand"
"time"
)
func rng(id, n int, ch chan string) {
for i := 0; i < n; i++ {
time.Sleep(time.Millisecond)
num := rand.Intn(100)
ch <- fmt.Sprintf("pid=%d\ti=%d\tnumber=%d", id, i, num);
}
close(ch)
}
func main() {
rand.Seed(time.Now().UTC().UnixNano())
var ch1, ch2 chan string
ch1 = make(chan string)
ch2 = make(chan string)
n1, n2 := 11, 14
go rng(1, n1, ch1)
go rng(2, n2, ch2)
go func() {
for msg := range ch1 {
fmt.Printf("Mensaje: %s\n", msg)
}
}()
for msg := range ch2 {
fmt.Printf("Mensaje: %s\n", msg)
}
}
# + id="tSoRsG42hCtQ" outputId="86aee924-3ee8-41bf-f914-b6ba1ffdffea" colab={"base_uri": "https://localhost:8080/", "height": 638}
# !go run 3.go
# + id="mDgx-SwFhZAX"
| 20202/pcd_20202_0901_intro_channels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
import matplotlib.pyplot as plt
import time
import datetime
import math
colors = {'blue': (255, 0, 0),
'green': (0, 255, 0),
'red': (0, 0, 255),
'yellow': (0, 255, 255),
'magenta': (255, 0, 255),
'light_gray': (220, 220, 220),
'white':(255,255,255)}
image=np.zeros((640,640,3),np.uint8)
plt.figure(figsize=(10,6))
plt.imshow(image)
hours_sp=np.array(
[(620, 320), (580, 470), (470, 580), (320, 620), (170, 580), (60, 470), (20, 320), (60, 170), (169, 61), (319, 20),
(469, 60), (579, 169)])
hours_ep=np.array(
[(600, 320), (563, 460), (460, 562), (320, 600), (180, 563), (78, 460), (40, 320), (77, 180), (179, 78), (319, 40),
(459, 77), (562, 179)])
def arraytotuple(arr):
return tuple(arr.reshape(1,-1)[0])
arraytotuple(hours_sp[0])
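`arraytotuple` exists because OpenCV drawing functions expect plain tuples rather than numpy rows; in isolation (a minimal sketch):

```python
import numpy as np

def arraytotuple(arr):
    # Flatten a point array such as np.array([620, 320]) into a tuple.
    return tuple(arr.reshape(1, -1)[0])

pt = arraytotuple(np.array([620, 320]))
print(pt)
```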
for i in range(12):
cv2.line(image,arraytotuple(hours_sp[i]),arraytotuple(hours_ep[i]),colors['white'],3)
plt.imshow(image)
cv2.circle(image,(320,320),310,colors['white'],8)
plt.imshow(image)
cv2.rectangle(image,(150,175),(490,270),colors['white'],-1)
cv2.putText(image,"AI Basics",(230,240),1,2.5,(0,0,0),3,cv2.LINE_AA)
plt.imshow(image)
imagedup=image.copy()
# +
while True:
date=datetime.datetime.now()
time=date.time()
hour=math.fmod(time.hour,12)
minute=time.minute
second=time.second
print(f"hour-{hour},minute-{minute},second-{second}")
second_angle=math.fmod(second*6+270,360)
minute_angle=math.fmod(minute*6+270,360)
hour_angle=math.fmod((hour*30)+(minute/2)+270,360)
print(f"hourangle-{hour_angle},minuteangle-{minute_angle},secondangle-{second_angle}")
    second_x=round(320+310*math.cos(math.radians(second_angle)))
    second_y=round(320+310*math.sin(math.radians(second_angle)))
    cv2.line(image,(320,320),(second_x,second_y),colors['red'],2)
    minute_x=round(320+310*math.cos(math.radians(minute_angle)))
    minute_y=round(320+310*math.sin(math.radians(minute_angle)))
    cv2.line(image,(320,320),(minute_x,minute_y),colors['red'],2)
    hour_x=round(320+310*math.cos(math.radians(hour_angle)))
    hour_y=round(320+310*math.sin(math.radians(hour_angle)))
    cv2.line(image,(320,320),(hour_x,hour_y),colors['red'],2)
time=str(hour)+"-"+str(minute)+"-"+str(second)
cv2.putText(image,time,(210,140),1,2.5,colors['white'],3,cv2.LINE_AA)
cv2.circle(image,(320,320),10,colors['white'],-1)
cv2.imshow("clock",image)
image=imagedup.copy()
if cv2.waitKey(100) & 0xFF==ord('m'):
break
cv2.destroyAllWindows()
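# The hand-angle arithmetic above can be checked in isolation. A small sketch
# (using `math.radians` rather than the `3.14/180` approximation): at 15 seconds
# the second hand should point due right, toward the 3 o'clock mark.

```python
import math

# 6 degrees per second; +270 rotates so that 0 seconds points at 12 o'clock
second = 15
angle = math.fmod(second * 6 + 270, 360)

# endpoint of the hand on a radius-310 circle centered at (320, 320)
x = round(320 + 310 * math.cos(math.radians(angle)))
y = round(320 + 310 * math.sin(math.radians(angle)))

print(angle, x, y)  # → 0.0 630 320
```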
# +
out = cv2.VideoWriter('clock.avi',
cv2.VideoWriter_fourcc('M','J','P','G'),20,
(640,640))
while True:
date=datetime.datetime.now()
time=date.time()
hour=math.fmod(time.hour,12)
minute=time.minute
second=time.second
print(f"hour-{hour},minute-{minute},second-{second}")
second_angle=math.fmod(second*6+270,360)
minute_angle=math.fmod(minute*6+270,360)
hour_angle=math.fmod((hour*30)+(minute/2)+270,360)
print(f"hourangle-{hour_angle},minuteangle-{minute_angle},secondangle-{second_angle}")
    second_x=round(320+310*math.cos(math.radians(second_angle)))
    second_y=round(320+310*math.sin(math.radians(second_angle)))
    cv2.line(image,(320,320),(second_x,second_y),colors['red'],2)
    minute_x=round(320+310*math.cos(math.radians(minute_angle)))
    minute_y=round(320+310*math.sin(math.radians(minute_angle)))
    cv2.line(image,(320,320),(minute_x,minute_y),colors['red'],2)
    hour_x=round(320+310*math.cos(math.radians(hour_angle)))
    hour_y=round(320+310*math.sin(math.radians(hour_angle)))
    cv2.line(image,(320,320),(hour_x,hour_y),colors['red'],2)
time=str(hour)+"-"+str(minute)+"-"+str(second)
cv2.putText(image,time,(210,140),1,2.5,colors['white'],3,cv2.LINE_AA)
cv2.circle(image,(320,320),10,colors['white'],-1)
out.write(image)
cv2.imshow("clock",image)
image=imagedup.copy()
if cv2.waitKey(100) & 0xFF==ord('m'):
break
cv2.destroyAllWindows()
# -
date=datetime.datetime.now()
time=date.time()
hour=math.fmod(time.hour,12)
hour
| Clock/Clock.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Videos Pipeline
# +
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# load helper functions
# %run -i "0. Functions_Clases Pipeline.py"
# %run -i "Line.py"
# Load Camera calibration params
[ret, mtx, dist, rvecs, tvecs] = pickle.load(open( "pickle_data/camera_calibration_params.p", "rb" ) )
# -
# TODO update code with notebook 10.
from time import time
def process_frame_profiling(img):
###### Resize image
start = time()
img = resizeImage(img)
end = time()
print(f'resizeImage took {end - start} seconds!')
###### Undistort image
start = time()
undistorted = undistort_image(img, mtx, dist)
end = time()
print(f'undistorted took {end - start} seconds!')
###### Color Enhancement
start = time()
imageCE = colorEnhancement(img)
end = time()
print(f'colorEnhancement took {end - start} seconds!')
###### GrayScale
start = time()
imageGray = grayscale(imageCE)
end = time()
print(f'grayscale took {end - start} seconds!')
###### Gauss Smoothing
start = time()
imageGauss = gaussian_blur(imageGray,kernel_size=5)
end = time()
print(f'gaussian_blur took {end - start} seconds!')
#### Edge detection
start = time()
sbinary = sobel_thresh(imageGauss, sobel_kernel=5, x_thresh=[80,100], y_thresh=[40,100], mag_thresh=[50,255], dir_thresh=[100,200])
end = time()
print(f'sobel_thresh took {end - start} seconds!')
#### ROI
start = time()
ysize =sbinary.shape[0]
xsize =sbinary.shape[1]
ROI_upperWidth = 350 #Width of the upper horizontal straight in px
ROI_upperHeight = 300 #Height of the upper horizontal straight from the bottom of the image in px
ROI_lowerWidth = 1000 #Width of the lower horizontal straight in px
ROI_lowerHeight = 50 #Height of the lower horizontal straight from the bottom of the image in px
limitLL = ((xsize/2)-(ROI_lowerWidth/2),ysize-ROI_lowerHeight);
limitLR = (xsize - ((xsize/2)-(ROI_lowerWidth/2)),ysize-ROI_lowerHeight);
limitUL = ((xsize/2)-(ROI_upperWidth/2), ysize-ROI_upperHeight);
limitUR = ((xsize/2)+(ROI_upperWidth/2), ysize-ROI_upperHeight);
vertices = np.array([[limitLL,limitUL,limitUR , limitLR]], dtype=np.int32)
imageROI = region_of_interest(sbinary,vertices)
end = time()
print(f'region_of_interest took {end - start} seconds!')
#### Perspective transform
start = time()
warped_img,M, Minv = warp_image(imageROI, hwidth = 250 ,offset = 0, height = -600, overplotLines= False )
end = time()
print(f'warp_image took {end - start} seconds!')
#### Find lines
    # Find x line points based on histogram values
leftx_base, rightx_base = find_lane_x_points(warped_img)
# Update x base points
lineLeft.updateXbase(leftx_base)
lineRight.updateXbase(rightx_base)
#Speed up coef with area search
#Left line
if lineLeft.missdetections > 0 or np.any((lineLeft.recent_poly_fits == 0)):
## Find lane pixels
start = time()
leftx, lefty, rightx, righty, out_img = find_lane_pixels(warped_img, lineLeft.bestx, lineRight.bestx, showRectangles = False)
end = time()
print(f'find_lane_pixels took {end - start} seconds!')
## Update lane pixels
lineLeft.updatePixels(leftx, lefty)
# Search blindly image
start = time()
coeffs_fit_L, lineDetectedL, left_fitx, ploty, img_line = fit_polynomial(out_img, lineLeft.allx, lineLeft.ally, drawPoly = False)
end = time()
        print(f'Blind search took {end - start} seconds!')
else:
# Search based on coefs
start = time()
leftx, lefty, coeffs_fit_L, lineDetectedL, left_fitx, out_img = search_around_poly(warped_img, lineLeft)
end = time()
        print(f'Coefficient-based search took {end - start} seconds!')
lineLeft.updatePixels(leftx, lefty)
#Right line
if lineRight.missdetections > 0 or np.any((lineRight.recent_poly_fits == 0)):
## Update lane pixels
lineRight.updatePixels(rightx, righty)
# Search blindly image
coeffs_fit_R, lineDetectedR, right_fitx, ploty, img_line = fit_polynomial(out_img, lineRight.allx, lineRight.ally, drawPoly = False)
else:
# Search based on coefs
rightx, righty, coeffs_fit_R, lineDetectedR, right_fitx, out_img = search_around_poly(out_img, lineRight)
lineRight.updatePixels(rightx, righty)
#Update line class instances
lineLeft.updateCoeffsLine(lineDetectedL, coeffs_fit_L, left_fitx, lineLeft.poly_ploty,coefLimits=[0.01,1,100],movingAvg=5 )
lineRight.updateCoeffsLine(lineDetectedR, coeffs_fit_R,right_fitx,lineRight.poly_ploty,coefLimits=[0.01,1,100],movingAvg=5 )
### Unwarp images
#color_warp = np.zeros_like(out_img).astype(np.uint8)
color_warp = out_img
#color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
pts_left = np.array([np.transpose(np.vstack([lineLeft.poly_plotx, lineLeft.poly_ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([lineRight.poly_plotx, lineRight.poly_ploty])))])
pts = np.hstack((pts_left, pts_right))
#Draw the lane onto the warped blank image
start = time()
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
newwarp = cv2.warpPerspective(color_warp, M, (img.shape[1], img.shape[0]))
result = cv2.addWeighted(img, 1, newwarp, 0.8, 0)
end = time()
print(f'Draw the lane {end - start} seconds!')
    ### Annotate Radius of curvature
start = time()
diff, mean, text = checkRadius(lineLeft, lineRight )
result_annotated = cv2.putText(result, text, org= (50, 50), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=2, color= (255, 255, 255), thickness=2, lineType=cv2.LINE_AA)
end = time()
print(f'Radius of curvature {end - start} seconds!')
    ### Annotate Vehicle position
start = time()
dev, text = calculateDeviation(result, lineLeft,lineRight,)
result_annotated = cv2.putText(result, text, org= (50, 100), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=2, color= (255, 255, 255), thickness=2, lineType=cv2.LINE_AA)
end = time()
print(f'Vehicle position {end - start} seconds!')
#out = np.dstack((out_img*255, out_img*255, out_img*255))
return result_annotated
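# The repeated `start = time(); ...; end = time(); print(...)` pairs in the
# profiling function above can be factored into a small context manager. A
# sketch only — the `timed` helper is not part of the original pipeline:

```python
from contextlib import contextmanager
from time import time

@contextmanager
def timed(label):
    # print wall-clock time for the enclosed block, matching the log style above
    start = time()
    try:
        yield
    finally:
        print(f'{label} took {time() - start} seconds!')

# usage: wrap any stage of the pipeline
with timed('resizeImage'):
    sum(range(1000))  # stand-in for the real work
```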
# +
# Instantiate Line class objects
lineLeft = Line()
lineRight = Line()
white_output = 'output_videos/solidWhiteRight.mp4'
clip1 = VideoFileClip("test_videos/project_video.mp4").subclip(0,1)
#clip1 = VideoFileClip("test_videos/project_video.mp4")
white_clip = clip1.fl_image(process_frame_profiling) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# -
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
| 11. Profiling Videos Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
from matplotlib import pyplot as plt
# %matplotlib inline
# +
#import and format data
feature_type = 'DHS'
metadata_columns = ['tumor_fraction','cancer_present','sample_type','Stage']
probabilities = pd.read_csv(feature_type+'_results/probabilities.txt', sep='\t')
n_iter = 1+probabilities.drop(columns = ['sample','status']+metadata_columns).head().columns.astype(int).max()
probabilities[np.arange(n_iter)] = probabilities[[str(m)for m in np.arange(n_iter)]]
probabilities = probabilities.drop(columns = [str(m)for m in np.arange(n_iter)])
probabilities = probabilities.sort_values(by='sample_type').reset_index(drop=True)
print(n_iter)
# -
probabilities.head()
# +
#get AUC for each bootstrap iteration
AUCs = pd.DataFrame(columns = ['group','AUC'])
for i in range(n_iter):
if i%100==0:
print(i)
current = probabilities[~(probabilities[i].isnull())][['status','tumor_fraction','sample_type','Stage',i]].copy()
#overall accuracy and AUC
group = 'overall'
fpr,tpr,_ = roc_curve(current['status'],current[i])
AUC = auc(fpr,tpr)
AUCs = AUCs.append({'group':group, 'AUC':AUC}, ignore_index = True)
#separate out the healthy samples to be used in every AUC
healthy_df = current[current['sample_type']=='Healthy']
for group,df in current.groupby('sample_type'):
if group == 'Healthy' or group == 'Duodenal_Cancer':
continue
df2 = df.append(healthy_df, ignore_index=True)
fpr,tpr,_ = roc_curve(df2['status'],df2[i])
AUC = auc(fpr,tpr)
AUCs = AUCs.append({'group':group, 'AUC':AUC}, ignore_index = True)
for group,df in current.groupby('Stage'):
if group == '0' or group == 'X':
continue
df2 = df.append(healthy_df, ignore_index=True)
fpr,tpr,_ = roc_curve(df2['status'],df2[i])
AUC = auc(fpr,tpr)
AUCs = AUCs.append({'group':group, 'AUC':AUC}, ignore_index = True)
zero_tfx = current[(current['tumor_fraction']==0) & (current['sample_type']!='Healthy')]
low_tfx = current[(current['tumor_fraction']>0) & (current['tumor_fraction']<0.05) & (current['sample_type']!='Healthy')]
high_tfx = current[(current['tumor_fraction']>=0.05) & (current['sample_type']!='Healthy')]
for group,df in zip(['0_TFx','>0-0.05_TFx','>=0.05_TFx'],[zero_tfx,low_tfx,high_tfx]):
df2 = df.append(healthy_df, ignore_index=True)
fpr,tpr,_ = roc_curve(df2['status'],df2[i])
AUC = auc(fpr,tpr)
AUCs = AUCs.append({'group':group, 'AUC':AUC}, ignore_index = True)
# +
#calculate confidence intervals
AUC_CI_df = AUCs.groupby('group').mean()
AUC_CI_df = AUC_CI_df.rename(columns = {'AUC':'mean'})
#get 2.5th and 97.5th percentiles across bootstrap iterations
AUC_CI_df['lower'] = AUCs.groupby('group').quantile(.025)
AUC_CI_df['upper'] = AUCs.groupby('group').quantile(.975)
AUC_CI_df['metric']='AUC'
AUC_CI_df.to_csv(feature_type+'_results/CI_metrics.txt', sep='\t')
CI = AUC_CI_df.reset_index() #used for plotting below
# -
CI
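# The percentile interval above takes the 2.5th and 97.5th quantiles of the
# bootstrap AUC draws per group. A self-contained sketch with synthetic draws
# (the values are illustrative, not from this analysis):

```python
import random

random.seed(0)
# synthetic bootstrap "AUC" draws for one group
draws = sorted(0.80 + random.gauss(0, 0.02) for _ in range(1000))

mean = sum(draws) / len(draws)
lower = draws[int(0.025 * len(draws))]   # 2.5th percentile
upper = draws[int(0.975 * len(draws))]   # 97.5th percentile

print(f'{mean:.3f} ({lower:.3f}-{upper:.3f})')
```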
# +
number_of_HD = len(probabilities[(probabilities['sample_type']=='Healthy')])
probabilities['median_probability'] = probabilities[np.arange(n_iter)].median(axis=1)
fig,ax = plt.subplots(figsize = (9.5,6))
for sample_type in probabilities['sample_type'].unique():
if sample_type == 'Duodenal_Cancer' or sample_type == 'Healthy':#there is only one sample for Duodenal_Cancer, Healthy is included in all roc curves
continue
current = probabilities[(probabilities['sample_type']==sample_type) | (probabilities['sample_type']=='Healthy')]
fpr, tpr, _ = roc_curve(current['status'].values,current['median_probability'].values)
auc_val = auc(fpr,tpr)
print(sample_type, auc_val)
lower_AUC = CI[(CI['group']==sample_type) & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']==sample_type) & (CI['metric']=='AUC')]['upper'].values[0]
auc_val = CI[(CI['group']==sample_type) & (CI['metric']=='AUC')]['mean'].values[0]
print(sample_type, auc_val)
label = sample_type+' '+\
'AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+')'+\
' n='+str(len(current)-number_of_HD)
ax.plot(fpr,tpr, label=label)
#add overall AUC
fpr, tpr, _ = roc_curve(probabilities['status'].values,probabilities['median_probability'].values)
auc_val = auc(fpr,tpr)
lower_AUC = CI[(CI['group']=='overall') & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']=='overall') & (CI['metric']=='AUC')]['upper'].values[0]
label = 'overall AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+') '+\
'cancer='+str(len(probabilities)-number_of_HD)+', HD='+str(number_of_HD)
ax.plot(fpr,tpr,color='black', label = label)
ax.legend(bbox_to_anchor = [1,1],loc = 'upper left', frameon = False)
ax.plot([0,1],[0,1], color = 'grey', dashes = (2,2))
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.set_title(feature_type+' cancer detection ROC curve')
ax.set_aspect('equal')
fig.subplots_adjust(right=.55)
plt.savefig(feature_type+'_results/'+feature_type+'_ROC_by_cancer_type.pdf')
# +
fig,ax = plt.subplots(figsize = (9,6))
for stage in ['I','II', 'III','IV']:
current = probabilities[(probabilities['Stage']==stage) | (probabilities['sample_type']=='Healthy')]
fpr, tpr, _ = roc_curve(current['status'].values,current['median_probability'].values)
auc_val = auc(fpr,tpr)
lower_AUC = CI[(CI['group']==stage) & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']==stage) & (CI['metric']=='AUC')]['upper'].values[0]
label = stage+' '+\
'AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+')'+\
' n='+str(len(current)-number_of_HD)
ax.plot(fpr,tpr, label=label)
ax.legend(bbox_to_anchor = [1,1],loc = 'upper left', frameon = False)
ax.plot([0,1],[0,1], color = 'grey', dashes = (2,2))
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.set_title('cancer detection ROC curve')
ax.set_aspect('equal')
fig.subplots_adjust(right=.55)
plt.savefig(feature_type+'_results/'+feature_type+'_ROC_by_stage.pdf')
# +
#plot the ROC curves
fig,ax = plt.subplots(figsize = (9,6))
HD_samples = probabilities[probabilities['sample'].str.contains('Healthy')]
#plot zero tfx samples
current_cancer = probabilities[(probabilities['tumor_fraction']==0) & (probabilities['status']==1)]
current = current_cancer.append(HD_samples)
print(len(current))
fpr, tpr, _ = roc_curve(current['status'].values,current['median_probability'])
auc_val = auc(fpr,tpr)
group = '0_TFx'
lower_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['upper'].values[0]
label = group+' '+\
'AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+')'+\
' n='+str(len(current)-number_of_HD)
ax.plot(fpr,tpr, label = label)
#plot 0 to 0.05
current_cancer = probabilities[(probabilities['tumor_fraction']>0) & (probabilities['tumor_fraction']<0.05) & (probabilities['status']==1)]
current = current_cancer.append(HD_samples)
print(len(current))
fpr, tpr, _ = roc_curve(current['status'].values,current['median_probability'])
auc_val = auc(fpr,tpr)
group = '>0-0.05_TFx'
lower_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['upper'].values[0]
label = group+' '+\
'AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+')'+\
' n='+str(len(current)-number_of_HD)
ax.plot(fpr,tpr, label = label)
#plot >=0.05 tfx samples
current_cancer = probabilities[(probabilities['tumor_fraction']>=0.05) & (probabilities['status']==1)]
current = current_cancer.append(HD_samples)
print(len(current))
fpr, tpr, _ = roc_curve(current['status'].values,current['median_probability'])
auc_val = auc(fpr,tpr)
group = '>=0.05_TFx'
lower_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['lower'].values[0]
upper_AUC = CI[(CI['group']==group) & (CI['metric']=='AUC')]['upper'].values[0]
label = group+' '+\
'AUC: '+ format(auc_val,'.2f')+' ('+format(lower_AUC, '.2f')+'-'+format(upper_AUC, '.2f')+')'+\
' n='+str(len(current)-number_of_HD)
ax.plot(fpr,tpr, label = label)
ax.legend(bbox_to_anchor = [1,1],loc = 'upper left', frameon = False)
ax.plot([0,1],[0,1], color = 'grey', dashes = (2,2))
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.set_title('ROC by tumor fraction')
ax.set_aspect('equal')
fig.subplots_adjust(right=.55)
plt.savefig(feature_type+'_results/'+feature_type+'_ROC_curves_by_tfx.pdf')
# -
| cancer_detection/DHS_analysis/analysis/classification/DHS_log_reg_bootstrap/calc_CI_and_plot_ROC.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.0
# language: julia
# name: julia-1.3
# ---
# [](https://mybinder.org/v2/gh/alan-turing-institute/MLJ.jl/master?filepath=binder%2FMLJ_demo.ipynb)
# <img src="https://julialang.org/assets/infra/logo.svg" alt="Julia" width="200" style="max-width:100%;">
#
# # A taste of the Julia programming language and the MLJ machine learning toolbox
# This first cell instantiates a Julia project environment, reproducing a collection of mutually compatible packages for use in this demonstration:
using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
# ## Lightning encounter with Julia programming language
#
# ###### Julia related content prepared by [@ablaom](https://github.com/ablaom)
#
# Interacting with Julia at the REPL, or in a notebook, feels very much
#
# the same as python, MATLAB or R:
print("Hello world!")
2 + 2
typeof(42.0)
# ## Multiple dispatch
#
# You will never see anything like `A.add(B)` in Julia because Julia
# is not a traditional object-oriented language. In Julia, function and
# structure are kept separate, with the help of abstract types and
# multiple dispatch, as we explain next.
# In addition to regular concrete types, such as `Float64` and
# `String`, Julia has a built-in hierarchy of *abstract* types. These
# generally have subtypes but no instances:
typeof(42)
supertype(Int64)
supertype(Signed)
subtypes(Integer)
Bool <: Integer # is Bool a subtype of Integer?
Bool <: String
# In Julia, which is optionally typed, one uses type annotations to
# adapt the behaviour of functions to their types. If we define
divide(x, y) = x / y
# then `divide(x, y)` will make sense whenever `x / y` makes sense (for
# the built-in function `/`). For example, we can use it to divide two
# integers, or two matrices:
divide(1, 2)
divide([1 2; 3 4], [1 2; 3 7])
# To vary the behaviour for specific types we make type annotations:
divide(x::Integer, y::Integer) = floor(x/y)
divide(x::String, y::String) = join([x, y], " / ")
divide(1, 2)
divide("Hello", "World!")
# In the case of `Float64` the original "fallback" method still
# applies:
divide(1.0, 2.0)
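# Julia dispatches on the types of all arguments. Python has no built-in
# multiple dispatch, but `functools.singledispatch` gives a rough single-argument
# analogue of the `divide` example above, dispatching on the first argument only:

```python
from functools import singledispatch

@singledispatch
def divide(x, y):
    # fallback, used for floats and anything unregistered
    return x / y

@divide.register
def _(x: int, y):
    # integer version mirrors Julia's floor(x/y)
    return x // y

@divide.register
def _(x: str, y):
    return f"{x} / {y}"

print(divide(1, 2))               # → 0
print(divide("Hello", "World!"))  # → Hello / World!
print(divide(1.0, 2.0))           # → 0.5
```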
# ## User-defined types
#
# Users can define their own abstract types and composite types:
# +
abstract type Organism end
struct Animal <: Organism
name::String
    is_herbivore::Bool
end
struct Plant <: Organism
name::String
is_flowering::Bool
end
describe(o::Organism) = string(o.name) # fall-back method
function describe(p::Plant)
if p.is_flowering
text = " is a flowering plant."
else
text = " is a non-flowering plant."
end
return p.name*text
end
# -
describe(Animal("Elephant", true))
describe(Plant("Fern", false))
# For more on multiple dispatch, see this [blog post](http://www.stochasticlifestyle.com/type-dispatch-design-post-object-oriented-programming-julia/) by [<NAME>](http://www.chrisrackauckas.com/).
# ## Automatic differentiation
#
# Differentiation of almost arbitrary programs with respect to their input. ([source]( https://render.githubusercontent.com/view/ipynb?commit=89317894e2e5370a80e45d52db8a4055a4fdecd6&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6d6174626573616e636f6e2f454d455f4a756c69615f776f726b73686f702f383933313738393465326535333730613830653435643532646238613430353561346664656364362f315f496e74726f64756374696f6e2e6970796e62&nwo=matbesancon%2FEME_Julia_workshop&path=1_Introduction.ipynb&repository_id=270611906&repository_type=Repository#Automatic-differentiation) by [@matbesancon](https://github.com/matbesancon))
# +
using ForwardDiff
function sqrt_babylonian(s)
x = s / 2
while abs(x^2 - s) > 0.001
x = (x + s/x) / 2
end
x
end
# -
sqrt_babylonian(2) - sqrt(2)
@show ForwardDiff.derivative(sqrt_babylonian, 2);
@show ForwardDiff.derivative(sqrt, 2);
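# The same trick can be sketched in Python with a minimal forward-mode dual
# number — just enough operator overloads to push a derivative through the
# Babylonian square root. A toy, not a substitute for ForwardDiff:

```python
import math

class Dual:
    """A value plus its derivative, propagated through arithmetic."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = float(val), float(eps)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.eps - o.eps)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.eps * o.val + self.val * o.eps)
    def __truediv__(self, o):
        o = self._lift(o)
        return Dual(self.val / o.val,
                    (self.eps * o.val - self.val * o.eps) / o.val ** 2)
    def __abs__(self):
        return abs(self.val)

def sqrt_babylonian(s):
    x = s / 2
    while abs(x * x - s) > 0.001:
        x = (x + s / x) / 2
    return x

d = sqrt_babylonian(Dual(2.0, 1.0))  # seed the input's derivative with 1
print(d.val, d.eps)                  # d.eps ≈ 1 / (2 * sqrt(2))
```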
# ## Unitful computations
# Physicists' dreams finally made true. ([source](https://render.githubusercontent.com/view/ipynb?commit=89317894e2e5370a80e45d52db8a4055a4fdecd6&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6d6174626573616e636f6e2f454d455f4a756c69615f776f726b73686f702f383933313738393465326535333730613830653435643532646238613430353561346664656364362f315f496e74726f64756374696f6e2e6970796e62&nwo=matbesancon%2FEME_Julia_workshop&path=1_Introduction.ipynb&repository_id=270611906&repository_type=Repository#Unitful-computations) by [@matbesancon](https://github.com/matbesancon))
using Unitful
using Unitful: J, kg, m, s
3J + 1kg * (1m / 1s)^2
# <img src="https://github.com/alan-turing-institute/MLJ.jl/raw/master/material/MLJLogo2.svg?sanitize=true" alt="MLJ" width="200" style="max-width:100%;">
#
# # MLJ
#
# MLJ (Machine Learning in Julia) is a toolbox written in Julia
# providing a common interface and meta-algorithms for selecting,
# tuning, evaluating, composing and comparing machine learning models
# written in Julia and other languages. In particular MLJ wraps a large
# number of [scikit-learn](https://scikit-learn.org/stable/) models.
#
# ## Key goals
#
# * Offer a consistent way to use, compose and tune machine learning
# models in Julia,
#
# * Promote the improvement of the Julia ML/Stats ecosystem by making it
# easier to use models from a wide range of packages,
#
# * Unlock performance gains by exploiting Julia's support for
# parallelism, automatic differentiation, GPU, optimisation etc.
#
#
# ## Key features
#
# * Data agnostic, train models on any data supported by the
# [Tables.jl](https://github.com/JuliaData/Tables.jl) interface,
#
# * Extensive support for model composition (*pipelines* and *learning
# networks*),
#
# * Convenient syntax to tune and evaluate (composite) models.
#
# * Consistent interface to handle probabilistic predictions.
#
# * Extensible [tuning
# interface](https://github.com/alan-turing-institute/MLJTuning.jl),
# to support growing number of optimization strategies, and designed
# to play well with model composition.
#
#
# More information is available from the [MLJ design paper](https://github.com/alan-turing-institute/MLJ.jl/blob/master/paper/paper.md)
# Here's how to generate the full list of models supported by MLJ:
using MLJ
models()
# ## Performance evaluation
#
# The following example shows how to evaluate the performance of a supervised learning model in MLJ. We'll start by loading a well-known canned data set:
X, y = @load_iris;
# Here `X` is a table of input features, and `y` the target observations (iris species).
# Next, we can inspect a list of models that apply immediately to this data:
models(matching(X, y))
# We'll choose one and invoke the `@load` macro, which simultaneously loads the code for the chosen model, and instantiates the model, using default hyper-parameters:
tree_model = @load RandomForestClassifier pkg=DecisionTree
# Now we can evaluate its performance using, say, 6-fold cross-validation, and the `cross_entropy` performance measure:
evaluate(tree_model, X, y, resampling=CV(nfolds=6, shuffle=true), measure=cross_entropy)
# ## Fit and predict
#
# We'll now evaluate the performance of our model by hand, but using a simple holdout set, to illustrate a typical `fit!` and `predict` workflow.
#
# First note that a *model* in MLJ is an object that only serves as a container for the hyper-parameters of the model, and that's all. A *machine* is an object binding a model to some data, and is where *learned* parameters are stored (among other things):
tree = machine(tree_model, X, y)
# ### Splitting the data
#
# To split the data into a training and testing set, you can use the function `partition` to obtain indices for data points that should be considered either as training or testing data:
train, test = partition(eachindex(y), 0.7, shuffle=true)
test[1:3]
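# For intuition, MLJ's `partition` behaves roughly like the following Python
# sketch — the helper's name and behaviour here are assumptions based on the
# call above, not MLJ's actual implementation:

```python
import random

def partition(indices, fraction, shuffle=False, seed=0):
    # split an index collection into a head of ~fraction of the items and the rest
    idx = list(indices)
    if shuffle:
        random.Random(seed).shuffle(idx)
    cut = round(fraction * len(idx))
    return idx[:cut], idx[cut:]

train, test = partition(range(10), 0.7)
print(train, test)  # → [0, 1, 2, 3, 4, 5, 6] [7, 8, 9]
```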
# To fit the machine, you can use the function `fit!` specifying the rows to be used for the training:
fit!(tree, rows=train)
# Note that this modifies the machine, which now contains the trained parameters of the decision tree. You can inspect the result of the fitting with the `fitted_params` method:
fitted_params(tree)
# You can now use the machine to make predictions with the `predict` function specifying rows to be used for the prediction:
ŷ = predict(tree, rows=test)
@show ŷ[1]
# Note that the output is probabilistic, effectively a vector with a score for each class. You could get the mode by using the `mode` function on `ŷ` or using `predict_mode`:
ȳ = predict_mode(tree, rows=test)
@show ȳ[1]
@show mode(ŷ[1])
# To measure the discrepancy between ŷ and y you could use the average cross entropy:
mce = cross_entropy(ŷ, y[test]) |> mean
round(mce, digits=4)
# # A more advanced example
#
# As in other frameworks, MLJ also supports a variety of unsupervised models for pre-processing data, reducing dimensionality, etc. It also provides a [wrapper](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/) for tuning model hyper-parameters in various ways. Data transformations, and supervised models are then typically combined into linear [pipelines](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/#Linear-pipelines-1). However, a more advanced feature of MLJ not common in other frameworks allows you to combine models in more complicated ways. We give a simple demonstration of that next.
#
# We start by loading the model code we'll need:
@load RidgeRegressor pkg=MultivariateStats
@load RandomForestRegressor pkg=DecisionTree;
# The next step is to define "learning network" - a kind of blueprint for the new composite model type. Later we "export" the network as a new stand-alone model type.
#
# Our learning network will:
#
# - standardize the input data
#
# - learn and apply a Box-Cox transformation to the target variable
#
# - blend the predictions of two supervised learning models - a ridge regressor and a random forest regressor; we'll blend using a simple average (for a more sophisticated stacking example, see [here](https://alan-turing-institute.github.io/DataScienceTutorials.jl/getting-started/stacking/))
#
# - apply the *inverse* Box-Cox transformation to this blended prediction
# The basic idea is to proceed as if one were composing the various steps "by hand", but to wrap the training data in "source nodes" first. In place of production data, one typically uses some dummy data, to test the network as it is built. When the learning network is "exported" as a new stand-alone model type, it will no longer be bound to any data. You bind the exported model to production data when you're ready to use your new model type (just like you would with any other MLJ model).
#
# There is no need to `fit!` the machines you create, as this will happen automatically when you *call* the final node in the network (assuming you provide the dummy data).
#
# *Input layer*
# +
# define some synthetic data:
X, y = make_regression(100)
y = abs.(y)
test, train = partition(eachindex(y), 0.8);
# wrap as source nodes:
Xs = source(X)
ys = source(y)
# -
# *First layer and target transformation*
# +
std_model = Standardizer()
stand = machine(std_model, Xs)
W = MLJ.transform(stand, Xs)
box_model = UnivariateBoxCoxTransformer()
box = machine(box_model, ys)
z = MLJ.transform(box, ys)
# -
# *Second layer*
# +
ridge_model = RidgeRegressor(lambda=0.1)
ridge = machine(ridge_model, W, z)
forest_model = RandomForestRegressor(n_trees=50)
forest = machine(forest_model, W, z)
ẑ = 0.5*predict(ridge, W) + 0.5*predict(forest, W)
# -
# *Output*
ŷ = inverse_transform(box, ẑ)
# No fitting has been done thus far; we have just defined a sequence of operations. We can test the network by fitting the final prediction node and then calling it to retrieve the prediction:
fit!(ŷ);
ŷ()[1:4]
# To "export" the network as a new stand-alone model type, we can use a macro:
@from_network machine(Deterministic(), Xs, ys, predict=ŷ) begin
mutable struct CompositeModel
rgs1 = ridge_model
rgs2 = forest_model
end
end
# Here's an instance of our new type:
composite = CompositeModel()
# Since we made our model mutable, we could change the regressors for different ones.
#
# For now we'll evaluate this model on the famous Boston data set:
X, y = @load_boston
evaluate(composite, X, y, resampling=CV(nfolds=6, shuffle=true), measures=[rms, mae])
# ## Check out more [Data Science Tutorials in Julia](https://alan-turing-institute.github.io/DataScienceTutorials.jl/).
| binder/MLJ_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Distributed HPO with Ray Tune and XGBoost-Ray
#
# © 2019-2022, Anyscale. All Rights Reserved
#
# This demo introduces **Ray Tune's** key concepts using a classification example. This example is derived from [Hyperparameter Tuning with Ray Tune and XGBoost-Ray](https://github.com/ray-project/xgboost_ray#hyperparameter-tuning). Three basic steps make up the Ray Tune pattern that a newcomer can follow to get started.
#
# Three simple steps:
#
# 1. Setup your config space and define your trainable and objective function
# 2. Use Tune to execute your training hyperparameter sweep, supplying the appropriate arguments including: search space, [search algorithms](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html#summary) or [trial schedulers](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#tune-schedulers)
# 3. Examine or analyse the results returned
#
# <img src="images/tune_flow.png" height="60%" width="80%">
#
#
# See also the [Understanding Hyperparameter Tuning](https://github.com/anyscale/academy/blob/main/ray-tune/02-Understanding-Hyperparameter-Tuning.ipynb) notebook and the [Tune documentation](http://tune.io), in particular, the [API reference](https://docs.ray.io/en/latest/tune/api_docs/overview.html).
#
# +
import os
from xgboost_ray import RayDMatrix, RayParams, train
from sklearn.datasets import load_breast_cancer
import ray
from ray import tune
CONNECT_TO_ANYSCALE=False
# -
if ray.is_initialized():
ray.shutdown()
if CONNECT_TO_ANYSCALE:
ray.init("anyscale://jsd-ray-core-tutorial")
else:
ray.init()
# ## Step 1: Define a 'Trainable' training function to use with Ray Tune's `tune.run(...)`
# +
NUM_OF_ACTORS = 4               # degree of parallelism; each actor runs a separate trial with a unique config drawn from the search space
NUM_OF_CPUS_PER_ACTOR = 1 # number of CPUs per actor
ray_params = RayParams(num_actors=NUM_OF_ACTORS, cpus_per_actor=NUM_OF_CPUS_PER_ACTOR)
# -
def train_func_model(config:dict, checkpoint_dir=None):
# create the dataset
train_X, train_y = load_breast_cancer(return_X_y=True)
# Convert to RayDMatrix data structure
train_set = RayDMatrix(train_X, train_y)
# Empty dictionary for the evaluation results reported back
# to tune
evals_result = {}
# Train the model with XGBoost train
bst = train(
params=config, # our hyperparameter search space
dtrain=train_set, # our RayDMatrix data structure
        evals_result=evals_result,  # placeholder for results
evals=[(train_set, "train")],
verbose_eval=False,
ray_params=ray_params) # distributed parameters configs for Ray Tune
# save the model in the checkpoint dir for each trial run
with tune.checkpoint_dir(step=0) as checkpoint_dir:
bst.save_model(os.path.join(checkpoint_dir, "model.xgb"))
# ## Step 2: Define a hyperparameter search space
# Specify the typical hyperparameter search space
config = {
"tree_method": "approx",
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
"eta": tune.loguniform(1e-4, 1e-1),
"subsample": tune.uniform(0.5, 1.0),
"max_depth": tune.randint(1, 9)
}
# ## Step 3: Run Ray tune main trainer and examine the results
#
# Ray Tune will launch distributed HPO, using four remote actors, each with its own instance of the trainable function
#
# <img src="images/ray_tune_dist_hpo.png" height="60%" width="70%">
# Run tune
analysis = tune.run(
train_func_model,
config=config,
metric="train-error",
mode="min",
num_samples=4,
verbose=1,
resources_per_trial=ray_params.get_tune_resources()
)
print("Best hyperparameters", analysis.best_config)
analysis.results_df.head(5)
# ---
analysis.best_logdir
ray.shutdown()
# ### Homework
#
# 1. Read the references below
# 2. Try some of the examples in the references
# ## References
#
# * [Ray Tune: Scalable Hyperparameter Tuning](https://docs.ray.io/en/master/tune/index.html)
# * [Introducing Distributed XGBoost Training with Ray](https://www.anyscale.com/blog/distributed-xgboost-training-with-ray)
# * [How to Speed Up XGBoost Model Training](https://www.anyscale.com/blog/how-to-speed-up-xgboost-model-training)
# * [XGBoost-Ray Project](https://github.com/ray-project/xgboost_ray)
# * [Distributed XGBoost on Ray](https://docs.ray.io/en/latest/xgboost-ray.html)
| ex_06_xbgboost_train_tune.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from pyspark.sql import SparkSession,HiveContext
from pyspark.sql.functions import to_json,col
from pyspark.sql.types import *
import pyspark
# -
spark = SparkSession\
.builder\
.appName("pyspark-notebook")\
.master("spark://spark-master:7077")\
.config("spark.executor.memory", "512m")\
.config("hive.metastore.uris", "thrift://hive-metastore:9083")\
.config("spark.sql.warehouse.dir", "/user/hive/warehouse")\
.enableHiveSupport()\
.getOrCreate()
spark
spark.sql("show databases").show()
spark.sql("show tables").show()
spark.sql("select * from outro_teste").show()
# +
df = spark.createDataFrame(
[
(1, "Augusto"),
(2, "Enzo"),
],
["id", "label"]
)
df.printSchema()
# -
sparkDF = df
sparkDF.createOrReplaceTempView("temp_table")
# HiveContext is deprecated, and the session above was already built with Hive
# support enabled, so spark.sql can be used directly to create the table.
spark.sql("create table default.sales as select * from temp_table")
| docker/workspace/teste_augusto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + slideshow={"slide_type": "skip"}
import scipy.ndimage as ndi
import ocrodeg
import cv2
import glob
import shutil
import numpy as np
import random
pi = np.pi  # use the precise value of pi (was hard-coded as 3.14)
# -
with open('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/ImageSets/Main/test.txt') as f:
content = f.readlines()
content = [x.strip() for x in content]
for filename in content:
_str = '/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/JPEGImages/'+filename+'.jpg'
aug_str = '/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/test/'+filename+'.jpg'
shutil.move(_str, aug_str)
with open('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/ImageSets/Main/val.txt') as f:
content = f.readlines()
content = [x.strip() for x in content]
for filename in content:
_str = '/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/JPEGImages/'+filename+'.jpg'
aug_str = '/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/val/'+filename+'.jpg'
shutil.move(_str, aug_str)
for filename in glob.glob('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/JPEGImages/*.jpg'):
for i in range(8):
strng = filename.split('/')
strng2 = strng[-1:][0][:-4]
image = cv2.imread(filename)
img_yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)
image = y.astype(np.float32)/256
if random.random() > 0.5:
image = ocrodeg.transform_image(image, angle=random.choice([-2, -1, 0, 1])*pi/180)
if random.random() > 0.5:
noise = ocrodeg.noise_distort1d(image.shape, magnitude=random.choice([5.0, 10.0, 20.0]))
image = ocrodeg.distort_with_noise(image, noise)
if random.random() > 0.5:
image = ndi.gaussian_filter(image, random.choice([0, 1, 2]))
if random.random() > 0.2:
image = ocrodeg.printlike_multiscale(image)
y = image*256
y = y.astype(np.uint8)
y = np.expand_dims(y, axis=2)
u = np.expand_dims(u, axis=2)
v = np.expand_dims(v, axis=2)
img = np.concatenate((y,u,v), axis=2)
img = cv2.cvtColor(img, cv2.COLOR_YUV2BGR)
aug_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+strng[7]+'/'+strng2+'_aug'+str(i)+'.jpg'
cv2.imwrite(aug_str,img)
aug_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+'Annotations'+'/'+strng2+'_aug'+str(i)+'.xml'
_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+'Annotations'+'/'+strng2+'.xml'
shutil.copy(_str, aug_str)
# + slideshow={"slide_type": "slide"}
for filename in glob.glob('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/val/*.jpg'):
for i in range(8):
strng = filename.split('/')
strng2 = strng[-1:][0][:-4]
image = cv2.imread(filename)
img_yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)
image = y.astype(np.float32)/256
if random.random() > 0.5:
image = ocrodeg.transform_image(image, angle=random.choice([-2, -1, 0, 1])*pi/180)
if random.random() > 0.5:
noise = ocrodeg.noise_distort1d(image.shape, magnitude=random.choice([5.0, 10.0, 20.0]))
image = ocrodeg.distort_with_noise(image, noise)
if random.random() > 0.5:
image = ndi.gaussian_filter(image, random.choice([0, 1, 2]))
if random.random() > 0.2:
image = ocrodeg.printlike_multiscale(image)
y = image*256
y = y.astype(np.uint8)
y = np.expand_dims(y, axis=2)
u = np.expand_dims(u, axis=2)
v = np.expand_dims(v, axis=2)
img = np.concatenate((y,u,v), axis=2)
img = cv2.cvtColor(img, cv2.COLOR_YUV2BGR)
aug_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+strng[7]+'/'+strng2+'_aug'+str(i)+'.jpg'
cv2.imwrite(aug_str,img)
aug_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+'Annotations'+'/'+strng2+'_aug'+str(i)+'.xml'
_str = '/'+strng[1]+'/'+strng[2]+'/'+strng[3]+'/'+strng[4]+'/'+strng[5]+'/'+strng[6]+'/'+'Annotations'+'/'+strng2+'.xml'
shutil.copy(_str, aug_str)
# -
for filename in glob.glob('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/JPEGImages/*.jpg'):
strng = filename.split('/')
strng2 = strng[-1:][0][:-4]
with open('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/ImageSets/Main/train.txt', 'a') as the_file:
the_file.write(strng2+'\n')
for filename in glob.glob('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/test/*.jpg'):
strng = filename.split('/')
strng2 = strng[-1:][0][:-4]
with open('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/ImageSets/Main/test.txt', 'a') as the_file:
the_file.write(strng2+'\n')
for filename in glob.glob('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/val/*.jpg'):
strng = filename.split('/')
strng2 = strng[-1:][0][:-4]
with open('/home/mcyavuz/.mxnet/datasets/voc/VOCNACDwNegswAug/ImageSets/Main/val.txt', 'a') as the_file:
the_file.write(strng2+'\n')
| augmentation/augmentation-degradation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Data Science 100 Knocks (Structured Data Processing) - Python
# ## Introduction
# - First, run the cell below
#     - It imports the required libraries and loads the data from the database (PostgreSQL)
#     - Libraries expected to be used, such as pandas, are imported in the cell below
#     - If there are other libraries you would like to use, install them as needed (installation is possible with "!pip install <library name>")
# - The processing may be split across multiple cells
# - Names, addresses, etc. are dummy data and do not refer to real people or places
# +
import os
import pandas as pd
import numpy as np
from datetime import datetime, date
from dateutil.relativedelta import relativedelta
import math
import psycopg2
from sqlalchemy import create_engine
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import RandomUnderSampler
pgconfig = {
'host': 'db',
'port': os.environ['PG_PORT'],
'database': os.environ['PG_DATABASE'],
'user': os.environ['PG_USER'],
'password': os.environ['PG_PASSWORD'],
}
# Connection for pd.read_sql
conn = psycopg2.connect(**pgconfig)
df_customer = pd.read_sql(sql='select * from customer', con=conn)
df_category = pd.read_sql(sql='select * from category', con=conn)
df_product = pd.read_sql(sql='select * from product', con=conn)
df_receipt = pd.read_sql(sql='select * from receipt', con=conn)
df_store = pd.read_sql(sql='select * from store', con=conn)
df_geocode = pd.read_sql(sql='select * from geocode', con=conn)
# -
# # Exercises
# ---
# > P-001: Display the first 10 rows of all columns from the receipt detail dataframe (df_receipt) and visually check what kind of data it holds.
df_receipt.head(10)
# ---
# > P-002: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and display 10 rows.
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']].head(10)
# ---
# > P-003: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and display 10 rows. However, rename the sales_ymd column to sales_date while extracting.
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. \
rename(columns={'sales_ymd': 'sales_date'}).head(10)
# ---
# > P-004: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and extract the data satisfying the following condition.
# > - Customer ID (customer_id) is "CS018205000001"
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. \
query('customer_id == "CS018205000001"')
# ---
# > P-005: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and extract the data satisfying all the following conditions.
# > - Customer ID (customer_id) is "CS018205000001"
# > - Sales amount (amount) is 1,000 or more
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & amount >= 1000')
# ---
# > P-006: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), sales quantity (quantity), and sales amount (amount) in this order, and extract the data satisfying all the following conditions.
# > - Customer ID (customer_id) is "CS018205000001"
# > - Sales amount (amount) is 1,000 or more, or sales quantity (quantity) is 5 or more
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'quantity', 'amount']].\
query('customer_id == "CS018205000001" & (amount >= 1000 | quantity >=5)')
# ---
# > P-007: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and extract the data satisfying all the following conditions.
# > - Customer ID (customer_id) is "CS018205000001"
# > - Sales amount (amount) is between 1,000 and 2,000 inclusive
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & 1000 <= amount <= 2000')
# ---
# > P-008: From the receipt detail dataframe (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount) in this order, and extract the data satisfying all the following conditions.
# > - Customer ID (customer_id) is "CS018205000001"
# > - Product code (product_cd) is not "P071401019"
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & product_cd != "P071401019"')
# ---
# > P-009: In the following expression, rewrite OR to AND without changing the output.
#
# `df_store.query('not(prefecture_cd == "13" | floor_area > 900)')`
df_store.query('prefecture_cd != "13" & floor_area <= 900')
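# The rewrite relies on De Morgan's law: not(A or B) == (not A) and (not B). A minimal sketch with a hypothetical mini store table (made-up values, not the real df_store) confirms both queries select the same rows:

```python
import pandas as pd

# Hypothetical stand-in for df_store with just the two columns involved.
df = pd.DataFrame({'prefecture_cd': ['13', '12', '13'],
                   'floor_area': [1000.0, 800.0, 500.0]})

# not(A | B)  ==  (not A) & (not B)   (De Morgan's law)
a = df.query('not(prefecture_cd == "13" | floor_area > 900)')
b = df.query('prefecture_cd != "13" & floor_area <= 900')
print(a.equals(b))  # True: both keep only the second row
```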
# ---
# > P-010: From the store dataframe (df_store), extract all columns for rows whose store code (store_cd) begins with "S14", and display 10 rows.
df_store.query("store_cd.str.startswith('S14')", engine='python').head(10)
# ---
# > P-011: From the customer dataframe (df_customer), extract all columns for rows whose customer ID (customer_id) ends with 1, and display 10 rows.
df_customer.query("customer_id.str.endswith('1')", engine='python').head(10)
# ---
# > P-012: From the store dataframe (df_store), display all columns for stores located in Yokohama City (横浜市).
df_store.query("address.str.contains('横浜市')", engine='python')
# ---
# > P-013: From the customer dataframe (df_customer), extract all columns for rows whose status code (status_cd) begins with a letter from A to F, and display 10 rows.
df_customer.query("status_cd.str.contains('^[A-F]', regex=True)",
engine='python').head(10)
# ---
# > P-014: From the customer dataframe (df_customer), extract all columns for rows whose status code (status_cd) ends with a digit from 1 to 9, and display 10 rows.
df_customer.query("status_cd.str.contains('[1-9]$', regex=True)", engine='python').head(10)
# ---
# > P-015: From the customer dataframe (df_customer), extract all columns for rows whose status code (status_cd) begins with a letter from A to F and ends with a digit from 1 to 9, and display 10 rows.
df_customer.query("status_cd.str.contains('^[A-F].*[1-9]$', regex=True)",
engine='python').head(10)
# ---
# > P-016: From the store dataframe (df_store), display all columns for rows whose phone number (tel_no) is in 3-digit–3-digit–4-digit format.
df_store.query("tel_no.str.contains('^[0-9]{3}-[0-9]{3}-[0-9]{4}$',regex=True)",
engine='python')
# ---
# > P-017: Sort the customer dataframe (df_customer) by date of birth (birth_day), oldest first, and display all columns of the first 10 rows.
df_customer.sort_values('birth_day', ascending=True).head(10)
# ---
# > P-018: Sort the customer dataframe (df_customer) by date of birth (birth_day), youngest first, and display all columns of the first 10 rows.
df_customer.sort_values('birth_day', ascending=False).head(10)
# ---
# > P-019: For the receipt detail dataframe (df_receipt), assign ranks in descending order of per-row sales amount (amount), and extract the first 10 rows. The columns should be customer ID (customer_id), sales amount (amount), and the assigned rank. When sales amounts are equal, assign the same rank.
df_tmp = pd.concat([df_receipt[['customer_id', 'amount']]
,df_receipt['amount'].rank(method='min',
ascending=False)], axis=1)
df_tmp.columns = ['customer_id', 'amount', 'ranking']
df_tmp.sort_values('ranking', ascending=True).head(10)
# ---
# > P-020: For the receipt detail dataframe (df_receipt), assign ranks in descending order of per-row sales amount (amount), and extract the first 10 rows. The columns should be customer ID (customer_id), sales amount (amount), and the assigned rank. Even when sales amounts are equal, assign distinct ranks.
df_tmp = pd.concat([df_receipt[['customer_id', 'amount']]
,df_receipt['amount'].rank(method='first',
ascending=False)], axis=1)
df_tmp.columns = ['customer_id', 'amount', 'ranking']
df_tmp.sort_values('ranking', ascending=True).head(10)
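# The difference between rank method='min' and method='first' above is easiest to see on a tiny series with a tie; a minimal sketch on made-up values:

```python
import pandas as pd

s = pd.Series([300, 200, 200, 100])

# 'min': tied values share the best rank available to them.
rank_min = s.rank(method='min', ascending=False)
# 'first': ties are broken by order of appearance, so ranks are unique.
rank_first = s.rank(method='first', ascending=False)

print(rank_min.tolist())    # [1.0, 2.0, 2.0, 4.0]
print(rank_first.tolist())  # [1.0, 2.0, 3.0, 4.0]
```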
# ---
# > P-021: Count the number of rows in the receipt detail dataframe (df_receipt).
len(df_receipt)
# ---
# > P-022: Count the number of unique customer IDs (customer_id) in the receipt detail dataframe (df_receipt).
len(df_receipt['customer_id'].unique())
# ---
# > P-023: For the receipt detail dataframe (df_receipt), sum the sales amount (amount) and sales quantity (quantity) by store code (store_cd).
df_receipt.groupby('store_cd').agg({'amount':'sum',
'quantity':'sum'}).reset_index()
# ---
# > P-024: For the receipt detail dataframe (df_receipt), find the most recent sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows.
df_receipt.groupby('customer_id').sales_ymd.max().reset_index().head(10)
# ---
# > P-025: For the receipt detail dataframe (df_receipt), find the oldest sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows.
df_receipt.groupby('customer_id').agg({'sales_ymd':'min'}).head(10)
# ---
# > P-026: For the receipt detail dataframe (df_receipt), find both the most recent and the oldest sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows where the two differ.
df_tmp = df_receipt.groupby('customer_id'). \
agg({'sales_ymd':['max','min']}).reset_index()
df_tmp.columns = ["_".join(pair) for pair in df_tmp.columns]
df_tmp.query('sales_ymd_max != sales_ymd_min').head(10)
# ---
# > P-027: For the receipt detail dataframe (df_receipt), compute the mean sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
df_receipt.groupby('store_cd').agg({'amount':'mean'}).reset_index(). \
sort_values('amount', ascending=False).head(5)
# ---
# > P-028: For the receipt detail dataframe (df_receipt), compute the median sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
df_receipt.groupby('store_cd').agg({'amount':'median'}).reset_index(). \
sort_values('amount', ascending=False).head(5)
# ---
# > P-029: For the receipt detail dataframe (df_receipt), find the mode of the product code (product_cd) for each store code (store_cd).
df_receipt.groupby('store_cd').product_cd. \
apply(lambda x: x.mode()).reset_index()
# ---
# > P-030: For the receipt detail dataframe (df_receipt), compute the sample variance (ddof=0) of the sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
df_receipt.groupby('store_cd').amount.var(ddof=0).reset_index(). \
sort_values('amount', ascending=False).head(5)
# ---
# > P-031: For the receipt detail dataframe (df_receipt), compute the sample standard deviation (ddof=0) of the sales amount (amount) per store code (store_cd) and display the top 5 in descending order.
# TIPS:
#
# Note that the default value of ddof differs between Pandas and NumPy:
# ```
# PandasïŒ
# DataFrame.std(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)
# Numpy:
# numpy.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=)
# ```
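# A quick check of the differing ddof defaults, on made-up numbers:

```python
import numpy as np
import pandas as pd

data = [1.0, 2.0, 3.0, 4.0]

pandas_std = pd.Series(data).std()  # ddof=1 by default (unbiased estimator)
numpy_std = np.std(data)            # ddof=0 by default (population formula)

print(round(pandas_std, 4))  # 1.291
print(round(numpy_std, 4))   # 1.118
# Passing ddof explicitly reconciles the two.
print(np.isclose(pd.Series(data).std(ddof=0), numpy_std))  # True
```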
df_receipt.groupby('store_cd').amount.std(ddof=0).reset_index(). \
sort_values('amount', ascending=False).head(5)
# ---
# > P-032: For the sales amount (amount) of the receipt detail dataframe (df_receipt), find the percentile values at 25% intervals.
# Code example 1
np.percentile(df_receipt['amount'], q=[25, 50, 75,100])
# Code example 2
df_receipt.amount.quantile(q=np.arange(5)/4)
# ---
# > P-033: For the receipt detail dataframe (df_receipt), compute the mean sales amount (amount) per store code (store_cd) and extract those that are 330 or more.
df_receipt.groupby('store_cd').amount.mean(). \
reset_index().query('amount >= 330')
# ---
# > P-034: For the receipt detail dataframe (df_receipt), sum the sales amount (amount) per customer ID (customer_id), then compute the average over all customers. However, exclude customer IDs starting with "Z", as they represent non-members.
#
# Without using query
df_receipt[~df_receipt['customer_id'].str.startswith("Z")]. \
groupby('customer_id').amount.sum().mean()
# Using query
df_receipt.query('not customer_id.str.startswith("Z")',
engine='python').groupby('customer_id').amount.sum().mean()
# ---
# > P-035: For the receipt detail dataframe (df_receipt), sum the sales amount (amount) per customer ID (customer_id) and compute the average over all customers; then extract the customers whose total is at or above that average. However, exclude customer IDs starting with "Z" (non-members). Displaying 10 rows of data is sufficient.
amount_mean = df_receipt[~df_receipt['customer_id'].str.startswith("Z")].\
groupby('customer_id').amount.sum().mean()
df_amount_sum = df_receipt.groupby('customer_id').amount.sum().reset_index()
df_amount_sum[df_amount_sum['amount'] >= amount_mean].head(10)
# ---
# > P-036: Inner join the receipt detail dataframe (df_receipt) and the store dataframe (df_store), and display all columns of the receipt detail dataframe plus the store name (store_name) from the store dataframe, for 10 rows.
pd.merge(df_receipt, df_store[['store_cd','store_name']],
how='inner', on='store_cd').head(10)
# ---
# > P-037: Inner join the product dataframe (df_product) and the category dataframe (df_category), and display all columns of the product dataframe plus the small category name (category_small_name) from the category dataframe, for 10 rows.
pd.merge(df_product
, df_category[['category_small_cd','category_small_name']]
, how='inner', on='category_small_cd').head(10)
# ---
# > P-038: From the customer dataframe (df_customer) and the receipt detail dataframe (df_receipt), compute the total sales amount for each customer. However, for customers with no purchase record, display the sales amount as 0. Target customers whose gender code (gender_cd) is female (1), and exclude non-members (customer IDs starting with 'Z'). Displaying 10 rows of the result is sufficient.
df_amount_sum = df_receipt.groupby('customer_id').amount.sum().reset_index()
df_tmp = df_customer. \
query('gender_cd == "1" and not customer_id.str.startswith("Z")',
engine='python')
pd.merge(df_tmp['customer_id'], df_amount_sum,
how='left', on='customer_id').fillna(0).head(10)
# ---
# > P-039: From the receipt detail dataframe (df_receipt), extract the top 20 customers by number of sales days and the top 20 customers by total sales amount, and perform a full outer join of the two. However, exclude non-members (customer IDs starting with 'Z').
# +
df_sum = df_receipt.groupby('customer_id').amount.sum().reset_index()
df_sum = df_sum.query('not customer_id.str.startswith("Z")', engine='python')
df_sum = df_sum.sort_values('amount', ascending=False).head(20)
df_cnt = df_receipt[~df_receipt.duplicated(subset=['customer_id', 'sales_ymd'])]
df_cnt = df_cnt.query('not customer_id.str.startswith("Z")', engine='python')
df_cnt = df_cnt.groupby('customer_id').sales_ymd.count().reset_index()
df_cnt = df_cnt.sort_values('sales_ymd', ascending=False).head(20)
pd.merge(df_sum, df_cnt, how='outer', on='customer_id')
# -
# ---
# > P-040: To investigate how many rows result from combining every store with every product, compute the number of rows in the Cartesian product of the stores (df_store) and products (df_product).
# +
df_store_tmp = df_store.copy()
df_product_tmp = df_product.copy()
df_store_tmp['key'] = 0
df_product_tmp['key'] = 0
len(pd.merge(df_store_tmp, df_product_tmp, how='outer', on='key'))
# -
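# The dummy-key trick above works in any pandas version; since pandas 1.2, merge also accepts how='cross' directly. A minimal sketch on hypothetical mini tables standing in for df_store and df_product:

```python
import pandas as pd

stores = pd.DataFrame({'store_cd': ['S1', 'S2']})
products = pd.DataFrame({'product_cd': ['P1', 'P2', 'P3']})

# Cartesian product without adding a temporary key column.
crossed = pd.merge(stores, products, how='cross')
print(len(crossed))  # 6 == len(stores) * len(products)
```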
# ---
# > P-041: Aggregate the sales amount (amount) of the receipt detail dataframe (df_receipt) by date (sales_ymd), and compute the increase or decrease relative to the previous day's sales. Displaying 10 rows of the result is sufficient.
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\
groupby('sales_ymd').sum().reset_index()
df_sales_amount_by_date = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift()], axis=1)
df_sales_amount_by_date.columns = ['sales_ymd','amount','lag_ymd','lag_amount']
df_sales_amount_by_date['diff_amount'] = \
df_sales_amount_by_date['amount'] - df_sales_amount_by_date['lag_amount']
df_sales_amount_by_date.head(10)
# ---
# > P-042: Aggregate the sales amount (amount) of the receipt detail dataframe (df_receipt) by date (sales_ymd), and join each date's data with the data from 1 day, 2 days, and 3 days before. Displaying 10 rows of the result is sufficient.
# Code example 1: long (vertical) format
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']]. \
groupby('sales_ymd').sum().reset_index()
for i in range(1, 4):
if i == 1:
df_lag = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift(i)],axis=1)
    else:
        # DataFrame.append was removed in pandas 2.0; stack with pd.concat instead
        df_lag = pd.concat([df_lag,
                            pd.concat([df_sales_amount_by_date,
                                       df_sales_amount_by_date.shift(i)],
                                      axis=1)])
df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd', 'lag_amount']
df_lag.dropna().sort_values(['sales_ymd','lag_ymd']).head(10)
# Code example 2: wide (horizontal) format
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\
groupby('sales_ymd').sum().reset_index()
for i in range(1, 4):
if i == 1:
df_lag = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift(i)],axis=1)
else:
df_lag = pd.concat([df_lag, df_sales_amount_by_date.shift(i)],axis=1)
df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd_1', 'lag_amount_1',
'lag_ymd_2', 'lag_amount_2', 'lag_ymd_3', 'lag_amount_3']
df_lag.dropna().sort_values(['sales_ymd']).head(10)
# ---
# > P-043: Join the receipt detail dataframe (df_receipt) and the customer dataframe (df_customer), total the sales amount (amount) by gender (gender) and age decade (computed from age), and create a sales summary dataframe (df_sales_summary). Gender is represented as 0 for male, 1 for female, and 9 for unknown.
# >
# > The columns should be the four items age decade, female sales amount, male sales amount, and unknown-gender sales amount (a cross tabulation with decades vertically and gender horizontally). Decades should be 10-year bins.
# Code example 1
df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id")
df_tmp['era'] = df_tmp['age'].apply(lambda x: math.floor(x / 10) * 10)
df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd',
values='amount', aggfunc='sum').reset_index()
df_sales_summary.columns = ['era', 'male', 'female', 'unknown']
df_sales_summary
# Code example 2
df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id")
df_tmp['era'] = np.floor(df_tmp['age'] / 10).astype(int) * 10
df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd',
values='amount', aggfunc='sum').reset_index()
df_sales_summary.columns = ['era', 'male', 'female', 'unknown']
df_sales_summary
# ---
# > P-044: The sales summary dataframe (df_sales_summary) created in the previous exercise holds the gender sales horizontally. Convert gender to a vertical layout with three columns: age decade, gender code, and sales amount. The gender codes should be '00' for male, '01' for female, and '99' for unknown.
df_sales_summary = df_sales_summary.set_index('era'). \
stack().reset_index().replace({'female':'01','male':'00','unknown':'99'}). \
rename(columns={'level_1':'gender_cd', 0: 'amount'})
df_sales_summary
# ---
# > P-045: The date of birth (birth_day) of the customer dataframe (df_customer) holds data as a date type. Convert it to a string in YYYYMMDD format and extract it together with the customer ID (customer_id). Extracting 10 rows of data is sufficient.
pd.concat([df_customer['customer_id'],
pd.to_datetime(df_customer['birth_day']).dt.strftime('%Y%m%d')],
axis = 1).head(10)
# ---
# > P-046: The application date (application_date) of the customer dataframe (df_customer) holds data as a string in YYYYMMDD format. Convert it to a date type and extract it together with the customer ID (customer_id). Extracting 10 rows of data is sufficient.
pd.concat([df_customer['customer_id'],
pd.to_datetime(df_customer['application_date'])], axis=1).head(10)
# ---
# > P-047: The sales date (sales_ymd) of the receipt detail dataframe (df_receipt) holds data as a number in YYYYMMDD format. Convert it to a date type and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows of data is sufficient.
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_ymd'].astype('str'))],
axis=1).head(10)
# ---
# > P-048: The sales epoch seconds (sales_epoch) of the receipt detail dataframe (df_receipt) holds data as numeric UNIX seconds. Convert it to a date type and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows of data is sufficient.
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s')],
axis=1).head(10)
# ---
# > P-049: Convert the sales epoch seconds (sales_epoch) of the receipt detail dataframe (df_receipt) to a date type, extract only the "year", and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows of data is sufficient.
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s').dt.year],
axis=1).head(10)
# ---
# > P-050: Convert the sales epoch seconds (sales_epoch) of the receipt detail dataframe (df_receipt) to a date type, extract only the "month", and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Note that the "month" should be extracted zero-padded to 2 digits. Extracting 10 rows of data is sufficient.
# dt.month would give the month, but strftime is used here to get it zero-padded to 2 digits
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \
dt.strftime('%m')],axis=1).head(10)
# ---
# > P-051: Convert the sales epoch seconds of the receipt detail dataframe (df_receipt) to a date type, extract only the "day", and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Note that the "day" should be extracted zero-padded to 2 digits. Extracting 10 rows of data is sufficient.
# dt.day would give the day, but strftime is used here to get it zero-padded to 2 digits
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \
dt.strftime('%d')],axis=1).head(10)
# ---
# > P-052: Sum the sales amount (amount) of the receipt detail dataframe (df_receipt) per customer ID (customer_id), then binarize the total into 0 for 2,000 yen or less and 1 for amounts over 2,000 yen, and display it together with the customer ID and total sales amount, for 10 rows. However, exclude customer IDs starting with "Z" (non-members).
# Code example 1
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python')
df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
df_sales_amount['sales_flg'] = df_sales_amount['amount']. \
apply(lambda x: 1 if x > 2000 else 0)
df_sales_amount.head(10)
# Code example 2 (using np.where)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python')
df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
df_sales_amount['sales_flg'] = np.where(df_sales_amount['amount'] > 2000, 1, 0)
df_sales_amount.head(10)
# ---
# > P-053: Binarize the postal code (postal_cd) of the customer dataframe (df_customer) into 1 for Tokyo (first 3 digits from 100 to 209) and 0 for anything else. In addition, join it with the receipt detail dataframe (df_receipt) and count the number of customers with purchase records over the whole period, for each binary value.
# +
# Code example 1
df_tmp = df_customer[['customer_id', 'postal_cd']].copy()
df_tmp['postal_flg'] = df_tmp['postal_cd']. \
apply(lambda x: 1 if 100 <= int(x[0:3]) <= 209 else 0)
pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). \
groupby('postal_flg').agg({'customer_id':'nunique'})
# -
# Code example 2 (using np.where and between)
df_tmp = df_customer[['customer_id', 'postal_cd']].copy()
df_tmp['postal_flg'] = np.where(df_tmp['postal_cd'].str[0:3].astype(int)
.between(100, 209), 1, 0)
pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). \
groupby('postal_flg').agg({'customer_id':'nunique'})
# ---
# > P-054: The address (address) of the customer dataframe (df_customer) is in one of Saitama, Chiba, Tokyo, or Kanagawa prefectures. Create a code value for each prefecture and extract it together with the customer ID and address. The values should be 11 for Saitama, 12 for Chiba, 13 for Tokyo, and 14 for Kanagawa. Displaying 10 rows of the result is sufficient.
pd.concat([df_customer[['customer_id', 'address']],
           df_customer['address'].str[0:3].map({'埼玉県': '11',
                                                '千葉県': '12',
                                                '東京都': '13',
                                                '神奈川': '14'})],axis=1).head(10)
# ---
# > P-055: Sum the sales amount (amount) of the receipt detail dataframe (df_receipt) per customer ID (customer_id), and find the quartile points of the totals. Then create category values for each customer's total sales amount according to the criteria below, and display them together with the customer ID and total sales amount. The category values should be 1 to 4 in ascending order. Displaying 10 rows of the result is sufficient.
# >
# > - From the minimum up to, but not including, the first quartile
# > - From the first quartile up to, but not including, the second quartile
# > - From the second quartile up to, but not including, the third quartile
# > - From the third quartile and above
# +
# ã³ãŒãäŸ1
df_sales_amount = df_receipt[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
pct25 = np.quantile(df_sales_amount['amount'], 0.25)
pct50 = np.quantile(df_sales_amount['amount'], 0.5)
pct75 = np.quantile(df_sales_amount['amount'], 0.75)
def pct_group(x):
if x < pct25:
return 1
elif pct25 <= x < pct50:
return 2
elif pct50 <= x < pct75:
return 3
elif pct75 <= x:
return 4
df_sales_amount['pct_group'] = df_sales_amount['amount'].apply(pct_group)
df_sales_amount.head(10)
# -
# For verification
print('pct25:', pct25)
print('pct50:', pct50)
print('pct75:', pct75)
# Code example 2
df_temp = df_receipt.groupby('customer_id')[['amount']].sum()
df_temp['quantile'], bins = \
pd.qcut(df_receipt.groupby('customer_id')['amount'].sum(), 4, retbins=True)
display(df_temp.head())
print('quantiles:', bins)
# ---
# > P-056: Compute the age decade in 10-year bins from the age (age) of the customer dataframe (df_customer), and extract it together with the customer ID (customer_id) and date of birth (birth_day). However, group everyone aged 60 or over into the 60s bin. The category name representing the decade is arbitrary. Displaying the first 10 rows is sufficient.
# +
# Code example 1
df_customer_era = pd.concat([df_customer[['customer_id', 'birth_day']],
df_customer['age']. \
apply(lambda x: min(math.floor(x / 10) * 10, 60))],
axis=1)
df_customer_era.head(10)
# -
# Code example 2
df_customer['age_group'] = pd.cut(df_customer['age'],
bins=[0, 10, 20, 30, 40, 50, 60, np.inf],
right=False)
df_customer[['customer_id', 'birth_day', 'age_group']].head(10)
# ---
# > P-057: Combine the extraction result of the previous question with gender (gender) to create new category data representing gender × age-decade combinations. The values representing the combinations are arbitrary. Showing the first 10 rows is fine.
df_customer_era['era_gender'] = \
df_customer['gender_cd'] + df_customer_era['age'].astype('str')
df_customer_era.head(10)
# ---
# > P-058: Dummy-encode (one-hot) the gender code (gender_cd) of the customer data frame (df_customer) and extract it together with the customer ID (customer_id). Showing 10 rows is fine.
pd.get_dummies(df_customer[['customer_id', 'gender_cd']],
columns=['gender_cd']).head(10)
# ---
# > P-059: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer ID (customer_id), standardize the totals to mean 0 and standard deviation 1, and show them together with the customer ID and total sales amount. Either the population or the sample standard deviation may be used. Exclude customer IDs starting with "Z", which denote non-members. Showing 10 rows is fine.
# TIPS:
# - The engine argument of query() accepts 'python' or 'numexpr'; the default is numexpr if it is installed, otherwise python. Note that string methods only work inside query() with engine='python'.
#
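As a minimal illustration of this tip (toy frame with hypothetical IDs), the string accessor works inside `query()` once `engine='python'` is passed:

```python
import pandas as pd

df = pd.DataFrame({'customer_id': ['A001', 'Z001', 'A002']})

# str accessor calls inside query() need engine='python'
members = df.query('not customer_id.str.startswith("Z")', engine='python')
print(list(members['customer_id']))  # → ['A001', 'A002']
```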
# preprocessing.scale from sklearn is used, so the standard deviation is computed with ddof=0
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount'])
df_sales_amount.head(10)
# Code example 2 (calling fit allows other data to be standardized with the same mean and standard deviation)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
scaler = preprocessing.StandardScaler()
scaler.fit(df_sales_amount[['amount']])
df_sales_amount['amount_ss'] = scaler.transform(df_sales_amount[['amount']])
df_sales_amount.head(10)
# ---
# > P-060: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer ID (customer_id), normalize the totals to minimum 0 and maximum 1, and show them together with the customer ID and total sales amount. Exclude customer IDs starting with "Z", which denote non-members. Showing 10 rows is fine.
# Code example 1
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_mm'] = \
preprocessing.minmax_scale(df_sales_amount['amount'])
df_sales_amount.head(10)
# Code example 2 (calling fit allows other data to be scaled with the same minimum and maximum)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
scaler = preprocessing.MinMaxScaler()
scaler.fit(df_sales_amount[['amount']])
df_sales_amount['amount_mm'] = scaler.transform(df_sales_amount[['amount']])
df_sales_amount.head(10)
# ---
# > P-061: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer ID (customer_id), take the common logarithm (base 10) of the totals, and show them together with the customer ID and total sales amount. Exclude customer IDs starting with "Z", which denote non-members. Showing 10 rows is fine.
# add 1 before taking the log so a zero total does not produce -inf
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_log10'] = np.log10(df_sales_amount['amount'] + 1)
df_sales_amount.head(10)
# ---
# > P-062: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer ID (customer_id), take the natural logarithm (base e) of the totals, and show them together with the customer ID and total sales amount. Exclude customer IDs starting with "Z", which denote non-members. Showing 10 rows is fine.
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_loge'] = np.log(df_sales_amount['amount'] + 1)
df_sales_amount.head(10)
# ---
# > P-063: From the unit price (unit_price) and unit cost (unit_cost) in the product data frame (df_product), compute the profit of each product. Showing 10 rows is fine.
df_tmp = df_product.copy()
df_tmp['unit_profit'] = df_tmp['unit_price'] - df_tmp['unit_cost']
df_tmp.head(10)
# ---
# > P-064: From the unit price (unit_price) and unit cost (unit_cost) in the product data frame (df_product), compute the overall average profit rate of all products.
# Note that unit_price and unit_cost contain NULLs.
df_tmp = df_product.copy()
df_tmp['unit_profit_rate'] = \
(df_tmp['unit_price'] - df_tmp['unit_cost']) / df_tmp['unit_price']
df_tmp['unit_profit_rate'].mean(skipna=True)
# ---
# > P-065: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit rate, truncating fractions of a yen. Show 10 rows and confirm the resulting profit rate is around 30%. Note that unit_price and unit_cost contain NULLs.
# Code example 1
# math.floor raises an error on NaN, while numpy.floor does not
df_tmp = df_product.copy()
df_tmp['new_price'] = df_tmp['unit_cost'].apply(lambda x: np.floor(x / 0.7))
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
# Code example 2
df_tmp = df_product.copy()
df_tmp['new_price'] = np.floor(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
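The NaN behavior noted in the comments can be verified directly: `math.floor` tries to return an int and fails on NaN, while `numpy.floor` simply propagates it.

```python
import math
import numpy as np

print(np.floor(np.nan))  # numpy propagates NaN → nan
try:
    math.floor(float('nan'))
except ValueError:
    print('math.floor raised ValueError on NaN')
```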
# ---
# > P-066: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit rate. This time, round fractions of a yen (rounding 0.5 toward the even digit is acceptable). Show 10 rows and confirm the resulting profit rate is around 30%. Note that unit_price and unit_cost contain NULLs.
# Code example 1
# the built-in round raises an error on NaN, while numpy.round does not
df_tmp = df_product.copy()
df_tmp['new_price'] = df_tmp['unit_cost'].apply(lambda x: np.round(x / 0.7))
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
# Code example 2
df_tmp = df_product.copy()
df_tmp['new_price'] = np.round(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
# ---
# > P-067: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit rate. This time, round fractions of a yen up. Show 10 rows and confirm the resulting profit rate is around 30%. Note that unit_price and unit_cost contain NULLs.
# Code example 1
# math.ceil raises an error on NaN, while numpy.ceil does not
df_tmp = df_product.copy()
df_tmp['new_price'] = df_tmp['unit_cost'].apply(lambda x: np.ceil(x / 0.7))
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
# Code example 2
df_tmp = df_product.copy()
df_tmp['new_price'] = np.ceil(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
# ---
# > P-068: For each product in the product data frame (df_product), compute the tax-included price with a 10% consumption tax, truncating fractions of a yen. Showing 10 rows is fine. Note that unit_price contains NULLs.
# Code example 1
# math.floor raises an error on NaN, while numpy.floor does not
df_tmp = df_product.copy()
df_tmp['price_tax'] = df_tmp['unit_price'].apply(lambda x: np.floor(x * 1.1))
df_tmp.head(10)
# Code example 2
df_tmp = df_product.copy()
df_tmp['price_tax'] = np.floor(df_tmp['unit_price'] * 1.1)
df_tmp.head(10)
# ---
# > P-069: Join the receipt detail data frame (df_receipt) and the product data frame (df_product), and for each customer compute the total sales amount of all products and the total sales amount of category major code (category_major_cd) "07" (bottled/canned food), then compute the ratio of the latter to the former. Restrict the extraction to customers with purchases in category major code "07". Showing 10 rows is fine.
# +
# Code example 1
df_tmp_1 = pd.merge(df_receipt, df_product,
how='inner', on='product_cd').groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_tmp_2 = pd.merge(df_receipt, df_product.query('category_major_cd == "07"'),
how='inner', on='product_cd').groupby('customer_id').\
agg({'amount':'sum'}).reset_index()
df_tmp_3 = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id')
df_tmp_3['rate_07'] = df_tmp_3['amount_y'] / df_tmp_3['amount_x']
df_tmp_3.head(10)
# -
# Code example 2
df_temp = df_receipt.merge(df_product, how='left', on='product_cd'). \
groupby(['customer_id', 'category_major_cd'])['amount'].sum().unstack()
df_temp = df_temp[df_temp['07'] > 0]
df_temp['sum'] = df_temp.sum(axis=1)
df_temp['07_rate'] = df_temp['07'] / df_temp['sum']
df_temp.head(10)
# ---
# > P-070: For each sales date (sales_ymd) in the receipt detail data frame (df_receipt), compute the number of days elapsed since the membership application date (application_date) in the customer data frame (df_customer), and show it with the customer ID (customer_id), sales date, and application date. Showing 10 rows is fine (note that sales_ymd is held as a number and application_date as a string).
# +
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp['sales_ymd'] - df_tmp['application_date']
df_tmp.head(10)
# -
# ---
# > P-071: For each sales date (sales_ymd) in the receipt detail data frame (df_receipt), compute the number of months elapsed since the membership application date (application_date) in the customer data frame (df_customer), and show it with the customer ID (customer_id), sales date, and application date. Showing 10 rows is fine (note that sales_ymd is held as a number and application_date as a string). Truncate fractions of a month.
# +
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \
apply(lambda x: relativedelta(x[0], x[1]).years * 12 + \
relativedelta(x[0], x[1]).months, axis=1)
df_tmp.sort_values('customer_id').head(10)
# -
# ---
# > P-072: For each sales date (sales_ymd) in the receipt detail data frame (df_receipt), compute the number of years elapsed since the membership application date (application_date) in the customer data frame (df_customer), and show it with the customer ID (customer_id), sales date, and application date. Showing 10 rows is fine (note that sales_ymd is held as a number and application_date as a string). Truncate fractions of a year.
# +
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \
apply(lambda x: relativedelta(x[0], x[1]).years, axis=1)
df_tmp.head(10)
# -
# ---
# > P-073: For each sales date (sales_ymd) in the receipt detail data frame (df_receipt), compute the elapsed time in epoch seconds since the membership application date (application_date) in the customer data frame (df_customer), and show it with the customer ID (customer_id), sales date, and application date. Showing 10 rows is fine (note that sales_ymd is held as a number and application_date as a string). Since no time-of-day information is held, each date represents 0:00:00.
# +
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = \
    df_tmp['sales_ymd'].astype('int64') // 10**9 - \
    df_tmp['application_date'].astype('int64') // 10**9
df_tmp.head(10)
# -
# ---
# > P-074: For each sales date (sales_ymd) in the receipt detail data frame (df_receipt), compute the number of days elapsed since the Monday of that week, and show it with the sales date and that Monday's date. Showing 10 rows is fine (note that sales_ymd is held as a number).
df_tmp = df_receipt[['customer_id', 'sales_ymd']]
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['monday'] = df_tmp['sales_ymd']. \
apply(lambda x: x - relativedelta(days=x.weekday()))
df_tmp['elapsed_weekday'] = df_tmp['sales_ymd'] - df_tmp['monday']
df_tmp.head(10)
# ---
# > P-075: Randomly sample 1% of the data from the customer data frame (df_customer) and extract the first 10 rows.
df_customer.sample(frac=0.01).head(10)
# ---
# > P-076: Randomly extract 10% of the customer data frame (df_customer), stratified by the proportions of gender code (gender_cd), and count the rows per gender.
# Example using sklearn.model_selection.train_test_split
_, df_tmp = train_test_split(df_customer, test_size=0.1,
stratify=df_customer['gender_cd'])
df_tmp.groupby('gender_cd').agg({'customer_id' : 'count'})
df_tmp.head(10)
# ---
# > P-077: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer and extract the outliers of the totals. Exclude customer IDs starting with "Z", which denote non-members. Here, treat values more than 3σ away from the mean as outliers. Showing 10 rows is fine.
# preprocessing.scale from sklearn is used, so the standard deviation is computed with ddof=0
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount'])
df_sales_amount.query('abs(amount_ss) >= 3').head(10)
# ---
# > P-078: Sum the sales amount (amount) in the receipt detail data frame (df_receipt) per customer and extract the outliers of the totals. Exclude customer IDs starting with "Z", which denote non-members. Here, define outliers via the IQR (the difference between the third and first quartiles): values below "first quartile − 1.5 × IQR" or above "third quartile + 1.5 × IQR". Showing 10 rows is fine.
# +
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
pct75 = np.percentile(df_sales_amount['amount'], q=75)
pct25 = np.percentile(df_sales_amount['amount'], q=25)
iqr = pct75 - pct25
amount_low = pct25 - (iqr * 1.5)
amount_high = pct75 + (iqr * 1.5)
df_sales_amount.query('amount < @amount_low or @amount_high < amount').head(10)
# -
# ---
# > P-079: Check the number of missing values in each column of the product data frame (df_product).
df_product.isnull().sum()
# ---
# > P-080: Delete every record with a missing value in any column of the product data frame (df_product) to create a new df_product_1. Show the row counts before and after the deletion, and confirm the decrease matches the count from the previous question.
df_product_1 = df_product.copy()
print('before drop:', len(df_product_1))
df_product_1.dropna(inplace=True)
print('after drop:', len(df_product_1))
# ---
# > P-081: Fill the missing values of unit price (unit_price) and unit cost (unit_cost) with their respective means to create a new df_product_2. Round the means to the nearest yen (rounding 0.5 toward the even digit is acceptable). After imputation, confirm no column has missing values.
# +
df_product_2 = df_product.fillna({
'unit_price':np.round(np.nanmean(df_product['unit_price'])),
'unit_cost':np.round(np.nanmean(df_product['unit_cost']))})
df_product_2.isnull().sum()
# -
# ---
# > P-082: Fill the missing values of unit price (unit_price) and unit cost (unit_cost) with their respective medians to create a new df_product_3. Round the medians to the nearest yen (rounding 0.5 toward the even digit is acceptable). After imputation, confirm no column has missing values.
df_product_3 = df_product.fillna({
'unit_price':np.round(np.nanmedian(df_product['unit_price'])),
'unit_cost':np.round(np.nanmedian(df_product['unit_cost']))})
df_product_3.isnull().sum()
# ---
# > P-083: Fill the missing values of unit price (unit_price) and unit cost (unit_cost) with the median computed per category small code (category_small_cd) of each product to create a new df_product_4. Round the medians to the nearest yen (rounding 0.5 toward the even digit is acceptable). After imputation, confirm no column has missing values.
# +
# Code example 1
df_tmp = df_product.groupby('category_small_cd'). \
agg({'unit_price':'median', 'unit_cost':'median'}).reset_index()
df_tmp.columns = ['category_small_cd', 'median_price', 'median_cost']
df_product_4 = pd.merge(df_product, df_tmp, how='inner', on='category_small_cd')
df_product_4['unit_price'] = df_product_4[['unit_price', 'median_price']]. \
apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1)
df_product_4['unit_cost'] = df_product_4[['unit_cost', 'median_cost']]. \
apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1)
df_product_4.isnull().sum()
# +
# Code example 2 (using mask)
df_tmp = (df_product
.groupby('category_small_cd')
.agg(median_price=('unit_price', 'median'),
median_cost=('unit_cost', 'median'))
.reset_index())
df_product_4 = df_product.merge(df_tmp,
how='inner',
on='category_small_cd')
df_product_4['unit_price'] = (df_product_4['unit_price']
.mask(df_product_4['unit_price'].isnull(),
df_product_4['median_price'].round()))
df_product_4['unit_cost'] = (df_product_4['unit_cost']
.mask(df_product_4['unit_cost'].isnull(),
df_product_4['median_cost'].round()))
df_product_4.isnull().sum()
# +
# Code example 3 (using fillna and transform)
df_product_4 = df_product.copy()
for x in ['unit_price', 'unit_cost']:
df_product_4[x] = (df_product_4[x]
.fillna(df_product_4.groupby('category_small_cd')[x]
.transform('median')
.round()))
df_product_4.isnull().sum()
# -
# ---
# > P-084: For every customer in the customer data frame (df_customer), compute the ratio of 2019 sales to all-period sales. Treat customers with no sales record as 0, and extract those whose ratio is greater than 0. Showing 10 rows is fine. Also confirm the created data contains no NA/NaN.
# +
df_tmp_1 = df_receipt.query('20190101 <= sales_ymd <= 20191231')
df_tmp_1 = pd.merge(df_customer['customer_id'],
df_tmp_1[['customer_id', 'amount']],
how='left', on='customer_id'). \
groupby('customer_id').sum().reset_index(). \
rename(columns={'amount':'amount_2019'})
df_tmp_2 = pd.merge(df_customer['customer_id'],
df_receipt[['customer_id', 'amount']],
how='left', on='customer_id'). \
groupby('customer_id').sum().reset_index()
df_tmp = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id')
df_tmp['amount_2019'] = df_tmp['amount_2019'].fillna(0)
df_tmp['amount'] = df_tmp['amount'].fillna(0)
df_tmp['amount_rate'] = df_tmp['amount_2019'] / df_tmp['amount']
df_tmp['amount_rate'] = df_tmp['amount_rate'].fillna(0)
# -
df_tmp.query('amount_rate > 0').head(10)
df_tmp.isnull().sum()
# ---
# > P-085: For all customers in the customer data frame (df_customer), use the postal code (postal_cd) to attach the longitude/latitude conversion data frame (df_geocode), creating a new df_customer_1. When multiple geocode rows match, compute the mean of longitude (longitude) and latitude (latitude).
#
# +
df_customer_1 = pd.merge(df_customer[['customer_id', 'postal_cd']],
df_geocode[['postal_cd', 'longitude' ,'latitude']],
how='inner', on='postal_cd')
df_customer_1 = df_customer_1.groupby('customer_id'). \
agg({'longitude':'mean', 'latitude':'mean'}).reset_index(). \
rename(columns={'longitude':'m_longitude', 'latitude':'m_latitude'})
df_customer_1 = pd.merge(df_customer, df_customer_1,
how='inner', on='customer_id')
df_customer_1.head(3)
# -
# ---
# > P-086: For the customer data frame with latitude/longitude created in the previous question (df_customer_1), join the store data frame (df_store) on the application store code (application_store_cd). Then compute the distance (km) between the application store's latitude (latitude)/longitude (longitude) and the customer's latitude/longitude, and show it together with the customer ID (customer_id), customer address (address), and store address (address). The simplified formula below is fine; a library implementing a more precise method may also be used. Showing 10 rows is fine.
# $$
# \mbox{latitude (radians)}: \phi \\
# \mbox{longitude (radians)}: \lambda \\
# L = 6371 \cdot \arccos(\sin \phi_1 \cdot \sin \phi_2
# + \cos \phi_1 \cdot \cos \phi_2 \cdot \cos(\lambda_1 - \lambda_2))
# $$
# +
# Code example 1
def calc_distance(x1, y1, x2, y2):
distance = 6371 * math.acos(math.sin(math.radians(y1))
* math.sin(math.radians(y2))
+ math.cos(math.radians(y1))
* math.cos(math.radians(y2))
* math.cos(math.radians(x1) - math.radians(x2)))
return distance
df_tmp = pd.merge(df_customer_1, df_store, how='inner', left_on='application_store_cd', right_on='store_cd')
df_tmp['distance'] = df_tmp[['m_longitude', 'm_latitude','longitude', 'latitude']]. \
apply(lambda x: calc_distance(x[0], x[1], x[2], x[3]), axis=1)
df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10)
# +
# Code example 2
def calc_distance_numpy(x1, y1, x2, y2):
x1_r = np.radians(x1)
x2_r = np.radians(x2)
y1_r = np.radians(y1)
y2_r = np.radians(y2)
return 6371 * np.arccos(np.sin(y1_r) * np.sin(y2_r)
+ np.cos(y1_r) * np.cos(y2_r)
* np.cos(x1_r - x2_r))
df_tmp = df_customer_1.merge(df_store,
how='inner',
left_on='application_store_cd',
right_on='store_cd')
df_tmp['distance'] = calc_distance_numpy(df_tmp['m_longitude'],
df_tmp['m_latitude'],
df_tmp['longitude'],
df_tmp['latitude'])
df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10)
# -
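As a quick sanity check of the law-of-cosines formula, a self-contained version of the same computation gives a plausible great-circle distance for a known pair of points (the coordinates below are approximate and used only for illustration):

```python
import numpy as np

def spherical_distance(lon1, lat1, lon2, lat2):
    # great-circle distance in km by the spherical law of cosines (R = 6371 km)
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    return 6371 * np.arccos(np.sin(lat1) * np.sin(lat2)
                            + np.cos(lat1) * np.cos(lat2)
                            * np.cos(lon1 - lon2))

# roughly Tokyo Station to Osaka Station, around 400 km
print(spherical_distance(139.767, 35.681, 135.495, 34.702))
```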
# ---
# > P-087: In the customer data frame (df_customer), the same customer is registered multiple times, e.g. through applications at different stores. Treat customers with the same name (customer_name) and postal code (postal_cd) as the same customer and deduplicate so that one record remains per customer, creating the deduplicated data frame df_customer_u. For duplicates, keep the record with the highest total sales amount; for ties, or for customers with no sales record, keep the one with the smallest customer ID (customer_id).
# +
df_tmp = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index()
df_customer_u = pd.merge(df_customer, df_tmp,
how='left',
on='customer_id').sort_values(['amount', 'customer_id'],
ascending=[False, True])
df_customer_u.drop_duplicates(subset=['customer_name', 'postal_cd'], keep='first', inplace=True)
print('reduction: ', len(df_customer) - len(df_customer_u))
# -
# ---
# > P-088: Based on the data created in the previous question, create df_customer_n, a data frame that adds an integrated dedup ID to the customer data frame. Assign the ID according to the following spec:
# >
# > - Non-duplicated customers: set their customer ID (customer_id)
# > - Duplicated customers: set the customer ID of the record extracted in the previous question
# +
df_customer_n = pd.merge(df_customer,
df_customer_u[['customer_name',
'postal_cd', 'customer_id']],
how='inner', on =['customer_name', 'postal_cd'])
df_customer_n.rename(columns={'customer_id_x':'customer_id',
'customer_id_y':'integration_id'}, inplace=True)
print('difference in ID counts:', len(df_customer_n['customer_id'].unique())
      - len(df_customer_n['integration_id'].unique()))
# -
# ---
# > P-Extra: df_customer_1 and df_customer_n are no longer used, so delete them.
del df_customer_1
del df_customer_n
# ---
# > P-089: To build a prediction model for customers with sales records, randomly split them into training data and test data at a ratio of 8:2.
df_sales = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index()
df_tmp = pd.merge(df_customer, df_sales['customer_id'],
how='inner', on='customer_id')
df_train, df_test = train_test_split(df_tmp, test_size=0.2, random_state=71)
print('train ratio: ', len(df_train) / len(df_tmp))
print('test ratio: ', len(df_test) / len(df_tmp))
# ---
# > P-090: The receipt detail data frame (df_receipt) holds data from January 1, 2017 through October 31, 2019. Aggregate the sales amount (amount) monthly and create 3 model-building datasets of 12 months of training data and 6 months of test data each.
# +
df_tmp = df_receipt[['sales_ymd', 'amount']].copy()
df_tmp['sales_ym'] = df_tmp['sales_ymd'].astype('str').str[0:6]
df_tmp = df_tmp.groupby('sales_ym').agg({'amount':'sum'}).reset_index()
# wrapping this in a function makes it easy to generate many datasets from long time-series data in a loop, etc.
def split_data(df, train_size, test_size, slide_window, start_point):
train_start = start_point * slide_window
test_start = train_start + train_size
return df[train_start : test_start], df[test_start : test_start + test_size]
df_train_1, df_test_1 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=0)
df_train_2, df_test_2 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=1)
df_train_3, df_test_3 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=2)
# -
df_train_1
df_test_1
# ---
# > P-091: Undersample the customer data frame (df_customer) so that the number of customers with sales records and the number without are 1:1.
# +
# Code example 1
# Example using RandomUnderSampler from imbalanced-learn
df_tmp = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index()
df_tmp = pd.merge(df_customer, df_tmp, how='left', on='customer_id')
df_tmp['buy_flg'] = df_tmp['amount'].apply(lambda x: 0 if np.isnan(x) else 1)
print('count of 0:', len(df_tmp.query('buy_flg == 0')))
print('count of 1:', len(df_tmp.query('buy_flg == 1')))
positive_count = len(df_tmp.query('buy_flg == 1'))
rs = RandomUnderSampler(random_state=71)
df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg)
print('count of 0:', len(df_sample.query('buy_flg == 0')))
print('count of 1:', len(df_sample.query('buy_flg == 1')))
# +
# Code example 2
# Example using RandomUnderSampler from imbalanced-learn
df_tmp = df_customer.merge(df_receipt
.groupby('customer_id')['amount'].sum()
.reset_index(),
how='left',
on='customer_id')
df_tmp['buy_flg'] = np.where(df_tmp['amount'].isnull(), 0, 1)
print("buy_flg counts before sampling")
print(df_tmp['buy_flg'].value_counts(), "\n")
positive_count = (df_tmp['buy_flg'] == 1).sum()
rs = RandomUnderSampler(random_state=71)
df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg)
print("buy_flg counts after sampling")
print(df_sample['buy_flg'].value_counts())
# -
# ---
# > P-092: In the customer data frame (df_customer), gender information is held in a denormalized state. Normalize it into third normal form.
df_gender = df_customer[['gender_cd', 'gender']].drop_duplicates()
df_customer_s = df_customer.drop(columns='gender')
# ---
#
# > P-093: The product data frame (df_product) holds only the category code values, not the category names. Combine it with the category data frame (df_category) to create a new, denormalized product data frame that also holds the category names.
df_product_full = pd.merge(df_product, df_category[['category_small_cd',
'category_major_name',
'category_medium_name',
'category_small_name']],
how = 'inner', on = 'category_small_cd')
# ---
# > P-094: Output the product data with category names created earlier to a file with the following spec. The output path is under the data directory.
# >
# > - File format: CSV (comma-separated)
# > - With header
# > - Character encoding: UTF-8
# Code example 1
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv',
encoding='UTF-8', index=False)
# Code example 2 (with a BOM, to prevent garbled characters in Excel)
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv',
encoding='utf_8_sig',
index=False)
# ---
# > P-095: Output the product data with category names created earlier to a file with the following spec. The output path is under the data directory.
# >
# > - File format: CSV (comma-separated)
# > - With header
# > - Character encoding: CP932
df_product_full.to_csv('../data/P_df_product_full_CP932_header.csv',
encoding='CP932', index=False)
# ---
# > P-096: Output the product data with category names created earlier to a file with the following spec. The output path is under the data directory.
# >
# > - File format: CSV (comma-separated)
# > - No header
# > - Character encoding: UTF-8
df_product_full.to_csv('../data/P_df_product_full_UTF-8_noh.csv',
header=False ,encoding='UTF-8', index=False)
# ---
# > P-097: Read the file created earlier in the format below into a data frame, show the first 10 rows, and verify it was loaded correctly.
# >
# > - File format: CSV (comma-separated)
# > - With header
# > - Character encoding: UTF-8
df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_header.csv')
df_tmp.head(10)
# ---
# > P-098: Read the file created earlier in the format below into a data frame, show the first 10 rows, and verify it was loaded correctly.
# >
# > - File format: CSV (comma-separated)
# > - No header
# > - Character encoding: UTF-8
df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_noh.csv', header=None)
df_tmp.head(10)
# ---
# > P-099: Output the product data with category names created earlier to a file with the following spec. The output path is under the data directory.
# >
# > - File format: TSV (tab-separated)
# > - With header
# > - Character encoding: UTF-8
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.tsv',
sep='\t', encoding='UTF-8', index=False)
# ---
# > P-100: Read the file created earlier in the format below into a data frame, show the first 10 rows, and verify it was loaded correctly.
# >
# > - File format: TSV (tab-separated)
# > - With header
# > - Character encoding: UTF-8
df_tmp = pd.read_table('../data/P_df_product_full_UTF-8_header.tsv',
encoding='UTF-8')
df_tmp.head(10)
# # That's all 100 knocks. Great work!
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import pandas as pd
import numpy as np
import os, sys, gc
from plotnine import *
import plotnine
from tqdm import tqdm_notebook
import seaborn as sns
import warnings
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
import matplotlib as mpl
from matplotlib import rc
import re
from matplotlib.ticker import PercentFormatter
import datetime
from math import log  # for computing IDF
# +
# %config InlineBackend.figure_format = 'retina'
mpl.font_manager._rebuild()
mpl.pyplot.rc('font', family='NanumBarunGothic')
plt.rc('font', family='NanumBarunGothic')
plt.rcParams['font.family'] = 'NanumBarunGothic'
fontpath = 'C:/Users/User/Anaconda3/Lib/site-packages/matplotlib/mpl-data/fonts/ttf/NanumBarunGothic.ttf'
font = fm.FontProperties(fname=fontpath, size=9).get_name()
# -
# ## Building baseline models
# - Popular-based recommendation
# - Popular-based recommendation with followed authors
path = 'C:/Users/User/Documents/Tì칎ë°ë¯ž/T ì칎ë°ë¯ž/input/'
# pd.read_json: load a JSON-formatted file as a DataFrame
magazine = pd.read_json(path + 'magazine.json', lines=True) # lines = True : Read the file as a json object per line.
metadata = pd.read_json(path + 'metadata.json', lines=True)
users = pd.read_json(path + 'users.json', lines=True)
# +
# %%time
import itertools
from itertools import chain
import glob
import os
input_read_path = path + 'read/read/'
# os.listdir: list every file under the given path
file_list = os.listdir(input_read_path)
exclude_file_lst = ['read.tar', '.2019010120_2019010121.un~']
read_df_list = []
for file in tqdm_notebook(file_list):
# skip excluded files
if file in exclude_file_lst:
continue
else:
file_path = input_read_path + file
df_temp = pd.read_csv(file_path, header=None, names=['raw'])
        # extract the read window (from, to) from the file name
df_temp['from'] = file.split('_')[0]
df_temp['to'] = file.split('_')[1]
read_df_list.append(df_temp)
read_df = pd.concat(read_df_list)
# reshape the reads file so that each row is one user-article pair
read_df['user_id'] = read_df['raw'].apply(lambda x: x.split(' ')[0])
read_df['article_id'] = read_df['raw'].apply(lambda x: x.split(' ')[1:])
def chainer(s):
return list(itertools.chain.from_iterable(s))
read_cnt_by_user = read_df['article_id'].map(len)
read_rowwise = pd.DataFrame({'from': np.repeat(read_df['from'], read_cnt_by_user),
'to': np.repeat(read_df['to'], read_cnt_by_user),
'user_id': np.repeat(read_df['user_id'], read_cnt_by_user),
'article_id': chainer(read_df['article_id'])})
read_rowwise.reset_index(drop=True, inplace=True)
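The np.repeat/chain construction above can also be expressed with pandas' `DataFrame.explode` (available since pandas 0.25); a minimal sketch on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({'user_id': ['u1', 'u2'],
                    'article_id': [['a1', 'a2'], ['a3']]})

# explode gives each list element its own row, repeating the other columns
rowwise = toy.explode('article_id').reset_index(drop=True)
print(rowwise)
```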
# +
from datetime import datetime
metadata['reg_datetime'] = metadata['reg_ts'].apply(lambda x : datetime.fromtimestamp(x/1000.0))
metadata.loc[metadata['reg_datetime'] == metadata['reg_datetime'].min(), 'reg_datetime'] = datetime(2090, 12, 31)
metadata['reg_dt'] = metadata['reg_datetime'].dt.date
metadata['type'] = metadata['magazine_id'].apply(lambda x : '개인' if x == 0.0 else '매거진')
metadata['reg_dt'] = pd.to_datetime(metadata['reg_dt'])
# -
# ## Popular Based Recommendation
# - Recommend the top 100 articles among those written in 2019 or later
# - Post-filter so that articles the user has already read are not recommended
# Recommend the top 100 articles written in 2019 or later,
# excluding articles the user has already read
read_rowwise = read_rowwise.merge(metadata[['id', 'reg_dt']], how='left', left_on='article_id', right_on='id')
read_rowwise.head()
# store the list of articles each user has read
read_total = pd.DataFrame(read_rowwise.groupby(['user_id'])['article_id'].unique()).reset_index()
read_total.columns = ['user_id', 'article_list']
# +
# 1. Drop rows where article_id is missing (the article was deleted)
# 2. Drop rows where reg_dt is missing (no entry in the metadata)
read_rowwise = read_rowwise[read_rowwise['article_id'] != '']
read_rowwise = read_rowwise[(read_rowwise['id'].notnull()) & (read_rowwise['reg_dt'].notnull())]
read_rowwise = read_rowwise[(read_rowwise['reg_dt'] >= '2019-01-01') & (read_rowwise['reg_dt'] < '2090-12-31')].reset_index(drop=True)
del read_rowwise['id']
# -
valid = pd.read_csv(path + '/predict/predict/dev.users', header=None)
popular_rec_model = read_rowwise['article_id'].value_counts().index[0:1000]
with open('./recommend.txt', 'w') as f:
for user in tqdm_notebook(valid[0].values):
        # build recommendation candidates, skipping already-read articles
seen = chainer(read_total[read_total['user_id'] == user]['article_list'].values)
recs = []
for r in popular_rec_model:
if len(recs) == 100:
break
else:
if r not in seen: recs.append(r)
f.write('%s %s\n' % (user, ' '.join(recs)))
# ## Popular Based Recommendation with Followed Authors
# - Preferentially recommend articles written in 2019 or later by authors the user follows
# - Post-process so that articles a user has already read are not recommended
# +
following_cnt_by_user = users['following_list'].map(len)
following_rowwise = pd.DataFrame({'user_id': np.repeat(users['id'], following_cnt_by_user),
'author_id': chainer(users['following_list'])})
following_rowwise.reset_index(drop=True, inplace=True)
# -
following_rowwise = following_rowwise[following_rowwise['user_id'].isin(valid[0].values)]
following_rowwise.head()
# %%time
metadata_ = metadata[['user_id', 'id', 'reg_dt']]
metadata_.columns = ['author_id', 'article_id', 'reg_dt']
following_popular_model = pd.merge(following_rowwise, metadata_, how='left', on='author_id')
# Problems that arise when recommending with the model above:
# 1. What should be recommended to users who do not follow any authors?
# 2. When a user follows several authors, each with several articles, which articles should be recommended first?
#
# Simple solutions:
# 1. Recommend the top 100 articles from the Popular Based Model
# 2. Pick each user's preferred author and recommend that author's popular articles
#    - Preference: the user has read the most articles by that author
#    - Besides the definition above, preference could also be defined as "on how many days the user came back to read"
#      or "how many of the author's articles the user has read", etc.
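The count-based preference chosen above can be sketched on a toy reading log; `u1`, `a1`, etc. are made-up ids, not values from the Brunch data:

```python
import pandas as pd

# Hypothetical toy reading log: one row per (user, author) read event.
log = pd.DataFrame({
    'user_id':   ['u1', 'u1', 'u1', 'u2'],
    'author_id': ['a1', 'a1', 'a2', 'a1'],
})

# Preference = number of the author's articles the user has read,
# i.e. the count-based definition described above.
pref = log.groupby(['user_id', 'author_id']).size().reset_index(name='count')
print(pref)
```

The same `groupby`/count pattern is what the `%%time` cell below computes on the full `read_rowwise` table.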
# %%time
read_rowwise['author_id'] = read_rowwise['article_id'].apply(lambda x: x.split('_')[0])
author_favor = read_rowwise.groupby(['user_id', 'author_id']).size().reset_index(name='count')
popular_model = pd.DataFrame(read_rowwise['article_id'].value_counts()).reset_index()
popular_model.columns = ['article_id', 'count']
following_popular_model = pd.merge(following_popular_model, author_favor, how='left', on=['user_id', 'author_id'])
following_popular_model = following_popular_model[following_popular_model['count'].notnull()].reset_index(drop=True)
following_popular_model = pd.merge(following_popular_model, popular_model, how='left', on='article_id')
following_popular_model.head()
# - count_x : the individual user's preference for the author
# - count_y : the article's popularity across all users
following_popular_model = following_popular_model.sort_values(by=['count_x', 'count_y', 'reg_dt'], ascending=[False, False, False])
following_popular_model[following_popular_model['user_id'] == '#a6f7a5ff90a19ec4d583f0db1836844d'].head()
with open('./recommend.txt', 'w') as f:
for user in tqdm_notebook(valid[0].values):
        # Recommendation candidates
seen = chainer(read_total[read_total['user_id'] == user]['article_list'].values)
following_rec_model = following_popular_model[following_popular_model['user_id'] == user]['article_id'].values
recs = []
for r in following_rec_model:
if len(recs) == 100:
break
else:
if r not in seen + recs: recs.append(r)
if len(recs) < 100:
for r in popular_rec_model:
if len(recs) == 100:
break
else:
if r not in seen + recs: recs.append(r)
f.write('%s %s\n' % (user, ' '.join(recs)))
| code/02. Brunch Baseline - Popular Based Recommendation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Preparation
# ## Import Libraries
import numpy as np
import pandas as pd
# ## Import Data
# The dataset contains all available data for more than 800,000 consumer loans issued from 2007 to 2015 by Lending Club: a large US peer-to-peer lending company. There are several different versions of this dataset. We have used a version available on kaggle.com. You can find it here: https://www.kaggle.com/wendykan/lending-club-loan-data/version/1
# We divided the data into two periods because we assume that some data are available at the moment we need to build the Expected Loss models, while the rest comes from applications submitted afterwards. Later, we investigate whether the applications received after we built the Probability of Default (PD) model have characteristics similar to those of the applications we used to build it.
loan_data_backup = pd.read_csv('loan_data_2007_2014.csv')
loan_data = loan_data_backup.copy()
# ## Explore Data
loan_data
pd.options.display.max_columns = None
#pd.options.display.max_rows = None
# Sets the pandas dataframe options to display all columns/rows.
loan_data
loan_data.head()
loan_data.tail()
loan_data.columns.values
# Displays all column names.
loan_data.info()
# Displays column names, complete (non-missing) cases per column, and datatype per column.
| Credit Risk Modeling/Credit Risk Modeling - Preparation - With Comments - 4-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Dependencies
from bs4 import BeautifulSoup
from splinter import Browser
import requests
import pandas as pd
# URLs of pages to be scraped
nasa_mars_news_url = 'https://mars.nasa.gov/news/'
mars_image_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
mars_weather_url = 'https://twitter.com/marswxreport?lang=en'
mars_facts_url = 'http://space-facts.com/mars/'
mars_hemispheres_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
# Retrieve page with the requests module
news_response = requests.get(nasa_mars_news_url)
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(news_response.text, 'html.parser')
#Display the result to figure out what you want to scrape
print(soup.prettify())
# results are returned as an iterable list
results = soup.find_all(class_="slide")
titles_list = []
paragraphs_list = []
# Loop through returned results
for result in results:
# Error handling
try:
#Find title and paragraph for each link. The title is found within the second link in each slide, the paragraph
#is found inside an inner description div tag.
links = result.find_all('a')
title = links[1].text
paragraph = result.find(class_="rollover_description_inner").text
#Append both to a list
titles_list.append(title)
paragraphs_list.append(paragraph)
#Print title and body
print(title)
print(paragraph)
except AttributeError as e:
print(e)
#Save the first title and body into variables for use later
news_title = titles_list[0]
news_p = paragraphs_list[0]
print(news_title)
print(news_p)
#Second Web Scrape for Mars Image
# Retrieve page with the requests module
image_response = requests.get(mars_image_url)
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(image_response.text, 'html.parser')
# Examine the results
print(soup.prettify())
# results are returned as an iterable list
results = soup.find_all(class_="carousel_items")
# Loop through returned results
for result in results:
# Error handling
try:
#Find article tag and note that the link is in the 'style' parameter
article = result.find('article', class_="carousel_item")
article_link = article['style']
        #Note: lstrip/rstrip strip character *sets* rather than substrings, so
        #remove the 'background-image: url(...);' wrapper with replace instead
        cleaned_article_link = article_link.replace('background-image: url(', '').replace(');', '')
#Print the link, the cleaned link, and the article it was pulled from
print(article_link)
print(cleaned_article_link)
print(article)
except AttributeError as e:
print(e)
#Remove single quotes from the start and end of the string and then construct the image url
cleaned_article_link = cleaned_article_link.replace("'", "")
featured_image_link = 'https://www.jpl.nasa.gov'+cleaned_article_link
#Print image url as a test
print(featured_image_link)
#Third Web Scrape for Mars Weather Tweet
# Retrieve page with the requests module
weather_response = requests.get(mars_weather_url)
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(weather_response.text, 'html.parser')
# Examine the results
print(soup.prettify())
# results are returned as an iterable list
results = soup.find_all(class_="content")
tweets_list = []
# Loop through returned results
for result in results:
# Error handling
try:
#Find the text of each tweet and append it to the list of tweet texts
tweet = result.find('p', class_="TweetTextSize").text
tweets_list.append(tweet)
#Print the text of each tweet as it is processed
print(tweet)
except AttributeError as e:
print(e)
#Save the most recent tweet text, i.e. the first entry returned in the list.
#If a weather entry is not the first tweet in the list, modify the index
#into tweets_list to return the first weather entry.
mars_weather = tweets_list[0]
print(mars_weather)
#Scrape using pandas
facts_table = pd.read_html(mars_facts_url)
facts_table
#Create the dataframe by pulling the correct information from above and turning it into a dictionary
keys = list(facts_table[0][0])
values = list(facts_table[0][1])
facts_dict = dict(zip(keys, values))
print(facts_dict)
#Drop unnecessary columns and display dataframe
facts_df = pd.DataFrame(facts_dict, index=[0])
facts_df = facts_df.drop(['First Record:', 'Recorded By:'], axis=1)
facts_df
#Rename the dataframe columns
cols = facts_df.columns.tolist()
print(cols)
new_cols = ['Equatorial Diameter:', 'Polar Diameter:', 'Mass:', 'Orbit Distance:', 'Orbit Period:', 'Surface Temperature:', 'Moons:']
facts_df = facts_df[new_cols]
facts_df
#Chromedriver setup
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
# +
#Create base url and lists to hold all of the links
usgs_base_url= 'https://astrogeology.usgs.gov'
links_list = []
hemispheres_response = requests.get(mars_hemispheres_url)
# Create BeautifulSoup object; parse with 'html.parser'
soup = BeautifulSoup(hemispheres_response.text, 'html.parser')
results = soup.find_all(class_='item')
# Loop through returned results
for result in results:
# Error handling
try:
#Find html a tag, which will be the link to the specific page
links = result.find('a')
#Get the link and print it
link=links['href']
print(link)
#Add in the usgs url to create the complete url and add it to a list of completed links
links_list.append(usgs_base_url+link)
except AttributeError as e:
print(e)
#Print list of completed links
print(links_list)
# -
#Use splinter to visit the hemispheres url
browser.visit(mars_hemispheres_url)
#Create lists to hold all of the urls and titles
hemispheres_image_urls = []
titles_list = []
for x in range(0, 4):
#Go through the four page links and visit each one
browser.visit(links_list[x])
    #Print each link as it's being processed
print(links_list[x])
#Create soup object
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
#The jpg we are looking for is in the downloads class
images = soup.find(class_='downloads')
#Find the link tag and then get its link
image = images.find('a')
image_url= image['href']
#Append that to the list
hemispheres_image_urls.append(image_url)
#Find title and append title to the list of titles
titles = soup.find('h2', class_='title')
title=titles.text
    #strip('Enhanced') would strip a character set; remove the suffix instead
    title = title.replace(' Enhanced', '')
titles_list.append(title)
#print each title and image url as they are processed
print(title)
print(image_url)
#Print the completed lists at the end
print(hemispheres_image_urls)
print(len(hemispheres_image_urls))
#Create a hemispheres dictionary with the title and url
hemispheres_dict = {'Title': titles_list,
'URL': hemispheres_image_urls }
print(hemispheres_dict)
#Create a dictionary with all the scraped information
scraped_dict = {'Title': titles_list,
'URL': hemispheres_image_urls,
'Weather': mars_weather,
'Featured Image': featured_image_link,
'News Title': news_title,
'News Body': news_p
}
| mission_to_mars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pyarc import CBA,TransactionDB
from pyarc.data_structures import ClassAssocationRule, Antecedent, Consequent
import pandas as pd
# -
txns = TransactionDB.from_DataFrame(pd.read_csv("../../iris.csv"), target="sepallength")
cba = CBA()
cba.fit(txns)
cba.clf.rules
cba.target_class
cba.predict_probability(txns)
ClassAssocationRule(Antecedent({}), Consequent("class", "default_class1"), 0.1, 0.2)
cba.predict_matched_rules(txns)
| notebooks/testing/predict_probability_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
x=np.linspace(0,10,10000)
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x))
plt.show()
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 1000)
plt.plot(x, np.sin(x))
plt.show()
# -
pi =np.pi
x=np.linspace(-4*pi,4*pi,100)
plt.plot(x,np.sin(x))
plt.plot(x,np.cos(x))
plt.show()
x=[2,4,6,8,10]
y=[5,2,-9,11,7]
plt.figure(figsize = [10,10])
plt.subplot(3,2,1)
plt.plot(x,y)
plt.subplot(3,2,2)
plt.scatter(x,y)
plt.subplot(3,2,3)
plt.hist(x)
plt.subplot(3,2,4)
plt.bar(x,y)
plt.subplot(3,2,5)
plt.pie(x)
plt.fill(x,y)
plt.savefig('testfig.eps')
| Untitled8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# * This notebook contains the complete code to run the test example of modeling the Microsoft stock market time series with LSTM neural networks, using the daily observations extracted between 2010 and 2018. The variations of the example are:
#
#
# 1. Modeling the time series only with LSTM.
# 2. The same model, but adding a signal derived from the sentiment analysis of online news as an extra feature.
# # 1. First model: modeling the stock market time series without any extra features
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.4f' % x)
import seaborn as sns
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
import warnings
warnings.filterwarnings('ignore')
from time import time
import matplotlib.ticker as tkr
# %matplotlib inline
from scipy import stats
from statsmodels.tsa.stattools import adfuller
from sklearn import preprocessing
from statsmodels.tsa.stattools import pacf
import math
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from keras.callbacks import EarlyStopping
# -
# * The data set _df test 1_ contains the values of the stock market time series
# +
result= pd.read_pickle("data sets/data_to_paper_microsoft_case.pkl")
# original time series (Y)
y = result.MSFT.values
y = y.astype('float32')
y = np.reshape(y, (-1, 1))
scaler = MinMaxScaler(feature_range=(0, 1))
y = scaler.fit_transform(y)
# training and testing settings (size)
percent_of_training = 0.7
train_size = int(len(y) * percent_of_training)
test_size = len(y) - train_size
#
train_y, test_y = y[0:train_size,:], y[train_size:len(y),:]
def create_dataset(dataset, look_back=1):
X, Y = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
X.append(a)
Y.append(dataset[i + look_back, 0])
return np.array(X), np.array(Y)
# -
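As a quick sanity check on the windowing above: with `look_back = 7`, `create_dataset` yields `len(dataset) - look_back - 1` samples, each a window of 7 consecutive values paired with the next value as target. A minimal sketch on a toy series (the function is repeated here so the example is self-contained):

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    # same sliding-window logic as the cell above
    X, Y = [], []
    for i in range(len(dataset) - look_back - 1):
        X.append(dataset[i:(i + look_back), 0])
        Y.append(dataset[i + look_back, 0])
    return np.array(X), np.array(Y)

demo = np.arange(20, dtype='float32').reshape(-1, 1)  # toy series 0..19
X, Y = create_dataset(demo, look_back=7)
print(X.shape, Y.shape)  # (12, 7) (12,)
```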
# +
look_back = 7
# features of the original time series (y)
X_train_features_1, y_train = create_dataset(train_y, look_back)
X_test_features_1, y_test = create_dataset(test_y, look_back)
# join all the features into one
## reshape arrays
X_train_features = np.reshape(X_train_features_1, (X_train_features_1.shape[0], 1, X_train_features_1.shape[1]))
X_test_features = np.reshape(X_test_features_1, (X_test_features_1.shape[0], 1, X_test_features_1.shape[1]))
# +
model = Sequential()
model.add(LSTM(200, input_shape=(X_train_features.shape[1], X_train_features.shape[2])))
model.add(Dropout(0.20))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X_train_features,y_train, epochs=300, batch_size=25, validation_data=(X_test_features, y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=10)], verbose=0, shuffle=False)
model.summary()
# +
train_predict = model.predict(X_train_features)
test_predict = model.predict(X_test_features)
#train_predict = scaler.inverse_transform(train_predict)
#Y_train = scaler.inverse_transform(y_train)
#test_predict = scaler.inverse_transform(test_predict)
#Y_test = scaler.inverse_transform(y_test)
print('Train Mean Absolute Error:', mean_absolute_error(np.reshape(y_train,(y_train.shape[0],1)), train_predict[:,0]))
print('Train Root Mean Squared Error:',np.sqrt(mean_squared_error(np.reshape(y_train,(y_train.shape[0],1)), train_predict[:,0])))
print('Test Mean Absolute Error:', mean_absolute_error(np.reshape(y_test,(y_test.shape[0],1)), test_predict[:,0]))
print('Test Root Mean Squared Error:',np.sqrt(mean_squared_error(np.reshape(y_test,(y_test.shape[0],1)), test_predict[:,0])))
# +
plt.figure(figsize=(8,4))
plt.style.use('seaborn-dark')
plt.plot(history.history['loss'], label='Train Loss',color="green")
plt.plot(history.history['val_loss'], label='Test Loss',color = "yellow")
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(loc='upper right')
plt.grid()
plt.show();
# -
# +
time_y_train = pd.DataFrame(data = train_y, index = result[0:train_size].index,columns= [""])
time_y_test = pd.DataFrame(data = test_y, index = result[train_size:].index,columns= [""])
time_y_train_prediction = pd.DataFrame(data = train_predict, index = time_y_train[8:].index,columns= [""])
time_y_test_prediction = pd.DataFrame(data = test_predict, index = time_y_test[8:].index,columns= [""])
plt.style.use('seaborn-dark')
plt.figure(figsize=(15,10))
plt.plot(time_y_train,label = "training",color ="green",marker='.')
plt.plot(time_y_test,label = "test",marker='.')
plt.plot(time_y_train_prediction,color="red",label = "prediction")
plt.plot(time_y_test_prediction,color="red")
plt.title("LSTM fit of Microsoft Stock Market Prices",size = 20)
plt.tight_layout()
sns.despine(top=True)
plt.ylabel('', size=15)
plt.xlabel('', size=15)
plt.legend(fontsize=15)
plt.grid()
plt.show();
# -
# # 2. Second model: modeling the stock market time series with the sentiment analysis of associated online news as extra features
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.4f' % x)
import seaborn as sns
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
import warnings
warnings.filterwarnings('ignore')
from time import time
import matplotlib.ticker as tkr
from scipy import stats
from statsmodels.tsa.stattools import adfuller
from sklearn import preprocessing
from statsmodels.tsa.stattools import pacf
# %matplotlib inline
import math
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from keras.callbacks import EarlyStopping
# +
result= pd.read_pickle("data sets/data_to_paper_microsoft_case.pkl")
# original time series (Y)
y = result.MSFT.values #numpy.ndarray
y = y.astype('float32')
y = np.reshape(y, (-1, 1))
scaler = MinMaxScaler(feature_range=(0, 1))
y = scaler.fit_transform(y)
# extra information: features of the sentiment analysis
X = result.open.values
X = X.astype('float32')
X = np.reshape(X, (-1, 1))
# training and testing settings (size)
percent_of_training = 0.7
train_size = int(len(y) * percent_of_training)
test_size = len(y) - train_size
#
train_y, test_y = y[0:train_size,:], y[train_size:len(y),:]
train_x, test_x = X[0:train_size,:], X[train_size:len(X),:]
def create_dataset(dataset, look_back=1):
X, Y = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
X.append(a)
Y.append(dataset[i + look_back, 0])
return np.array(X), np.array(Y)
# +
look_back = 7
# features of the original time series (y)
X_train_features_1, y_train = create_dataset(train_y, look_back)
X_test_features_1, y_test = create_dataset(test_y, look_back)
# calculate extra features in (X)
X_train_features_2, auxiliar_1 = create_dataset(train_x, look_back)
X_test_features_2, auxiliar_2 = create_dataset(test_x, look_back)
# join all the features into one
## reshape arrays
X_train_features_1 = np.reshape(X_train_features_1, (X_train_features_1.shape[0], 1, X_train_features_1.shape[1]))
X_test_features_1 = np.reshape(X_test_features_1, (X_test_features_1.shape[0], 1, X_test_features_1.shape[1]))
X_train_features_2 = np.reshape(X_train_features_2, (X_train_features_2.shape[0], 1, X_train_features_2.shape[1]))
X_test_features_2 = np.reshape(X_test_features_2, (X_test_features_2.shape[0], 1, X_test_features_2.shape[1]))
## put all together
X_train_all_features = np.append(X_train_features_1,X_train_features_2,axis=1)
X_test_all_features = np.append(X_test_features_1,X_test_features_2,axis=1)
# -
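The `np.append(..., axis=1)` calls above stack the two feature blocks along the middle (timestep) axis, so two `(samples, 1, look_back)` arrays become one `(samples, 2, look_back)` LSTM input. A quick shape check with made-up sizes:

```python
import numpy as np

# Illustrative shapes only: 100 samples, look_back of 7.
a = np.zeros((100, 1, 7))  # windows of the target series
b = np.ones((100, 1, 7))   # windows of the extra sentiment feature
combined = np.append(a, b, axis=1)
print(combined.shape)  # (100, 2, 7)
```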
# +
model = Sequential()
model.add(LSTM(200, input_shape=(X_train_all_features.shape[1], X_train_all_features.shape[2])))
model.add(Dropout(0.20))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X_train_all_features,y_train, epochs=300, batch_size=25, validation_data=(X_test_all_features, y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=10)], verbose=0, shuffle=False)
model.summary()
# +
train_predict = model.predict(X_train_all_features)
test_predict = model.predict(X_test_all_features)
#train_predict = scaler.inverse_transform(train_predict)
#Y_train = scaler.inverse_transform(y_train)
#test_predict = scaler.inverse_transform(test_predict)
#Y_test = scaler.inverse_transform(y_test)
print('Train Mean Absolute Error:', mean_absolute_error(np.reshape(y_train,(y_train.shape[0],1)), train_predict[:,0]))
print('Train Root Mean Squared Error:',np.sqrt(mean_squared_error(np.reshape(y_train,(y_train.shape[0],1)), train_predict[:,0])))
print('Test Mean Absolute Error:', mean_absolute_error(np.reshape(y_test,(y_test.shape[0],1)), test_predict[:,0]))
print('Test Root Mean Squared Error:',np.sqrt(mean_squared_error(np.reshape(y_test,(y_test.shape[0],1)), test_predict[:,0])))
# +
plt.figure(figsize=(8,4))
plt.style.use('seaborn-dark')
plt.plot(history.history['loss'], label='Train Loss',color="green")
plt.plot(history.history['val_loss'], label='Test Loss',color = "yellow")
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(loc='upper right')
plt.grid()
plt.show();
# +
time_y_train = pd.DataFrame(data = train_y, index = result[0:train_size].index,columns= [""])
time_y_test = pd.DataFrame(data = test_y, index = result[train_size:].index,columns= [""])
time_y_train_prediction = pd.DataFrame(data = train_predict, index = time_y_train[8:].index,columns= [""])
time_y_test_prediction = pd.DataFrame(data = test_predict, index = time_y_test[8:].index,columns= [""])
plt.style.use('seaborn-dark')
plt.figure(figsize=(15,10))
plt.plot(time_y_train,label = "training",color ="green",marker='.')
plt.plot(time_y_test,label = "test",marker='.')
plt.plot(time_y_train_prediction,color="red",label = "prediction")
plt.plot(time_y_test_prediction,color="red")
plt.title("LSTM fit of Microsoft Stock Market Prices Including Sentiment Signal",size = 20)
plt.tight_layout()
sns.despine(top=True)
plt.ylabel('', size=15)
plt.xlabel('', size=15)
plt.legend(fontsize=15)
plt.grid()
plt.show();
# -
| collab/05 modeling with LSTM .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import nltk
from nltk.tag import StanfordNERTagger
eng_tagger = StanfordNERTagger(model_filename=r'E:\Glasgow\TeamProject\entityParser\stanford-ner-2018-02-27\classifiers\english.muc.7class.distsim.crf.ser.gz',path_to_jar=r'E:\Glasgow\TeamProject\entityParser\stanford-ner-2018-02-27\stanford-ner.jar')
f = open('The BBC Scotland_news.txt','r')
content = f.read()
res = eng_tagger.tag(content.split())
print(res)
list_per = []
list_org = []
list_loc = []
print(len(res))
for i in range(len(res)):
    if 'PERSON' in res[i][1]:
        per = res[i][0]
        list_per.append(per)
    elif 'ORGANIZATION' in res[i][1]:
        org = res[i][0]
        list_org.append(org)
    elif 'LOCATION' in res[i][1]:
        loc = res[i][0]
        list_loc.append(loc)
print(list_per)
print(list_org)
print(list_loc)
from collections import Counter
counts_per = Counter(list_per)
counts_org = Counter(list_org)
counts_loc = Counter(list_loc)
#print(counts_per)
#print(counts_org)
#print(counts_loc)
#print(counts_per.most_common(10))
top_10_per = counts_per.most_common(10)
top_10_org = counts_org.most_common(10)
top_10_loc = counts_loc.most_common(10)
top_10_pers, top_10_orgs,top_10_locs =[],[],[]
for i in range(10):
top_10_pers.append(top_10_per[i][0])
top_10_orgs.append(top_10_org[i][0])
top_10_locs.append(top_10_loc[i][0])
print(top_10_pers)
print(top_10_orgs)
print(top_10_locs)
| ner_news_whole.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="zWFaAnmetOTr" colab_type="code" colab={}
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import matplotlib.pyplot as plt
tf.enable_eager_execution()
# + [markdown] id="wmYMjSIyL2F8" colab_type="text"
# ### Prepare dataset
# + id="wQ0EcmdNXEI_" colab_type="code" colab={}
encoding = {
'a': np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]),
'b': np.array([1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0]),
'c': np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]),
'd': np.array([1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]),
'e': np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0]),
'f': np.array([0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]),
'g': np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]),
'h': np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0]),
'i': np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0]),
'j': np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0]),
'k': np.array([1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]),
'l': np.array([1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0]),
'm': np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0]),
'n': np.array([0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0]),
'o': np.array([0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
'p': np.array([0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0]),
'q': np.array([0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]),
'r': np.array([0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]),
's': np.array([0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]),
't': np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0]),
'u': np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]),
'v': np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0]),
'w': np.array([1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0]),
'x': np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]),
'y': np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]),
'z': np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0]),
'1': np.array([0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0]),
'2': np.array([0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0]),
'3': np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
'4': np.array([1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0]),
'5': np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0]),
'6': np.array([1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0]),
'7': np.array([0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]),
'8': np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0]),
'0': np.array([0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]),
'=>': np.array([0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]),
}
# + id="HBvfNv757Dbp" colab_type="code" colab={}
input_string = ['=>','t','y','s','o','n','t','h','o','m','a','s']
# + id="fodngzZR0dLl" colab_type="code" colab={}
output_pred = np.array([
[0,0,1,0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0,0,0,0],
[0,0,0,0,1,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0,0,0],
[0,0,0,0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,1,0,0,0,0],
[0,0,0,0,0,0,0,0,1,0,0,0],
[0,0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,0,0,1,0],
[0,0,0,0,0,0,0,0,0,0,0,1],
[1,0,0,0,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0,0,0,0],
])
# + id="xrEkN9pG0eIw" colab_type="code" outputId="6c0117b5-6582-4e0d-f073-d437171fb4d9" colab={"base_uri": "https://localhost:8080/", "height": 34}
output_pred.shape
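# As a plain-NumPy aside (not part of the original notebook), the one-hot rows of `output_pred` can be decoded by argmax to see which input token each output position points to:

```python
import numpy as np

# Rebuild the same one-hot pointer targets as above and decode them:
# each row selects one input position, argmax recovers the pointer index.
input_string = ['=>', 't', 'y', 's', 'o', 'n', 't', 'h', 'o', 'm', 'a', 's']
output_pred = np.eye(12)[[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 0]]
pointers = output_pred.argmax(axis=1)
print([input_string[p] for p in pointers])
# ['y', 's', 'o', 'n', 't', 'h', 'o', 'm', 'a', 's', '=>', '=>']
```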
# + id="-I2uWMrqJPf1" colab_type="code" colab={}
num_samples = 6 * 100
# + id="R8ZWNatXTVti" colab_type="code" colab={}
encoder_input_data = np.zeros(
(num_samples, len(input_string),len(encoding['=>'])),
dtype='float32')
target_data = np.zeros(
(num_samples, len(input_string),len(input_string)),
dtype='float32')
input_encoder = np.zeros((len(input_string),len(encoding['=>'])), dtype='float32')
for i in range(len(input_string)):
    input_encoder[i] = encoding[input_string[i]]
for i in range(num_samples):
    encoder_input_data[i] = input_encoder
    target_data[i] = output_pred
# + id="BZkNxs8Yr2Vf" colab_type="code" outputId="45d70573-49f1-4c67-ef85-edaad3398e85" colab={"base_uri": "https://localhost:8080/", "height": 34}
input_encoder.shape
# + id="Ih6aChJ92FuU" colab_type="code" outputId="be31d7cf-f29d-4a73-acbc-016ed02f7581" colab={"base_uri": "https://localhost:8080/", "height": 34}
encoder_input_data.shape
# + id="OlhqrxIL2Cz3" colab_type="code" outputId="2bf98ec3-4072-4067-8f9b-50a7d129ffb7" colab={"base_uri": "https://localhost:8080/", "height": 34}
target_data.shape
# + [markdown] id="rtYM-s5wh2RY" colab_type="text"
# #### Example input -> output sequence
# + [markdown] id="_GmpDXMih7YQ" colab_type="text"
# `['=>','t','t','h','o', 'm', 'a', '1', '7'] -> ['t','h','o', 'm', 'a', '1', '7','=>']`
# + [markdown] id="97GlpqgvL8PK" colab_type="text"
# ## Encoder & Decoder Network
# + id="uqAZrGBMTGDj" colab_type="code" colab={}
class Encoder(tf.keras.Model):
    def __init__(self, hidden_dimensions):
        super(Encoder, self).__init__()
        self.lstm = layers.LSTM(hidden_dimensions, return_sequences=True, return_state=True)
    def call(self, x):
        output, state_h, state_c = self.lstm(x)
        return output, [state_h, state_c]
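# Because the LSTM is built with return_sequences=True and return_state=True, a call returns the per-step outputs plus the final (h, c) states. A shape-only NumPy sketch (no real LSTM math; batch=32, seq_len=9, hidden=256 mirror the comments used later in this notebook):

```python
import numpy as np

# Shape-only illustration of what the encoder returns.
batch, seq_len, hidden = 32, 9, 256
output = np.zeros((batch, seq_len, hidden))  # return_sequences=True -> one vector per step
state_h = np.zeros((batch, hidden))          # return_state=True -> final hidden state
state_c = np.zeros((batch, hidden))          # ... and final cell state
print(output.shape, state_h.shape, state_c.shape)
```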
# + id="tLNzTuJfWPhk" colab_type="code" colab={}
class Decoder(tf.keras.Model):
    def __init__(self, hidden_dimensions):
        super(Decoder, self).__init__()
        self.lstm = layers.LSTM(hidden_dimensions, return_sequences=True, return_state=True)
    def call(self, x, hidden_states):
        dec_output, state_h, state_c = self.lstm(x, initial_state=hidden_states)
        # dec_output shape -> (batch_size, 1, hidden_dimension) -> (32, 1, 256)
        return dec_output, [state_h, state_c]
# + id="eTSf9SdkBjcM" colab_type="code" colab={}
class Attention(tf.keras.Model):
    def __init__(self, hidden_dimensions):
        super(Attention, self).__init__()
        # Note: a Dense layer computes dot(input, kernel), so Ui = vT . tanh(W1 . e + W2 . di) becomes Ui = tanh(e . W1 + di . W2) . v
        self.W1 = tf.keras.layers.Dense(hidden_dimensions, use_bias=False)  # weights -> (256, 256)
        self.W2 = tf.keras.layers.Dense(hidden_dimensions, use_bias=False)  # weights -> (256, 256)
        self.V = tf.keras.layers.Dense(1, use_bias=False)  # weights -> (256, 1)
    def call(self, encoder_outputs, dec_output):
        # encoder_outputs shape -> (batch_size, input_sequence_length, hidden_dimension) -> (32, 9, 256)
        # dec_output shape -> (batch_size, 1, hidden_dimension) -> (32, 1, 256)
        # w1_e -> (*, 9, 256) . (256, 256) -> (*, 9, 256)
        w1_e = self.W1(encoder_outputs)
        # w2_d -> (*, 1, 256) . (256, 256) -> (*, 1, 256)
        w2_d = self.W2(dec_output)
        # tanh_output -> (*, 9, 256) + (*, 1, 256) -> (*, 9, 256)
        tanh_output = tf.nn.tanh(w1_e + w2_d)
        # v_dot_tanh -> (*, 9, 256) . (256, 1) -> (*, 9, 1)
        v_dot_tanh = self.V(tanh_output)
        # attention_weights -> (batch_size, input_sequence_length, 1) -> (32, 9, 1)
        attention_weights = tf.nn.softmax(v_dot_tanh, axis=1)
        return tf.reshape(attention_weights, (attention_weights.shape[0], attention_weights.shape[1]))  # (batch_size, input_sequence_length) -> (32, 9)
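# As a sanity check on the additive (Bahdanau-style) scoring above, the same shape arithmetic can be sketched in plain NumPy; the toy sizes (batch=2, seq_len=9, hidden=4) are illustrative, not the notebook's real dimensions:

```python
import numpy as np

batch, seq_len, hidden = 2, 9, 4
enc = np.random.randn(batch, seq_len, hidden)  # encoder outputs
dec = np.random.randn(batch, 1, hidden)        # one decoder step
W1 = np.random.randn(hidden, hidden)
W2 = np.random.randn(hidden, hidden)
v = np.random.randn(hidden, 1)

scores = np.tanh(enc @ W1 + dec @ W2) @ v      # (batch, seq_len, 1)
e = np.exp(scores - scores.max(axis=1, keepdims=True))
weights = e / e.sum(axis=1, keepdims=True)     # softmax over the sequence axis
print(weights.shape)        # (2, 9, 1)
print(weights.sum(axis=1))  # each batch row sums to 1
```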
# + id="2Hiu44IFLq4D" colab_type="code" colab={}
num_encoder_tokens = len(encoding['=>'])
encoder_seq_length = len(input_string)
hidden_dimensions = 256
# + id="sz3B3FHW1MsN" colab_type="code" outputId="d4bf6882-b2f1-4b2f-c1eb-403c1cac0c95" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("Input dimension: ", num_encoder_tokens)
# + id="YMFkSiR5DLV8" colab_type="code" outputId="024e6ee9-7d6d-4f1a-97a0-8d08dd5a43e3" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("Input sequence length: ", encoder_seq_length)
# + id="_8MjNP5GCvQv" colab_type="code" outputId="589f8e3a-d561-4dba-d3b2-341b228721da" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("RNN network hidden dimension: ", hidden_dimensions)
# + [markdown] id="w_shAefxqfVs" colab_type="text"
# ### Initialize encoder and decoder network
# + id="xLlEdEXmVFGq" colab_type="code" colab={}
encoder = Encoder(hidden_dimensions)
decoder = Decoder(hidden_dimensions)
attention = Attention(hidden_dimensions)
# + [markdown] id="dUHrlclMqu7L" colab_type="text"
# ### Set up the TensorFlow dataset for training
# + id="oT5uZj9w3egC" colab_type="code" colab={}
dataset = tf.data.Dataset.from_tensor_slices((
encoder_input_data,
target_data
))
# Shuffle the dataset and batch it with a batch size of 32
batches = 32
dataset = dataset.shuffle(100).batch(batches)
# + id="tS35_maF4p-e" colab_type="code" outputId="faa0caa0-3a01-41ed-f50d-fcc11b1773db" colab={"base_uri": "https://localhost:8080/", "height": 34}
dataset
# + [markdown] id="LkM2Lp_zq3VK" colab_type="text"
# #### Output of network before training
# + id="nHndKpviD0-Q" colab_type="code" outputId="f483c8f3-ab56-4857-e54d-1d096d7f49f4" colab={"base_uri": "https://localhost:8080/", "height": 748}
# Input sequence
print("Input sequence: %s"% input_string)
#Initialize values
attention_vector_array = []
final_output_sequence = []
# expand the sequences to be batch size of 1 to be fed as input
encoder_input = tf.expand_dims(input_encoder, 0)
target_data = tf.expand_dims(output_pred, 0)
# encoder_input shapes -> (batch_size, input_sequence_length, input_dimension)
# encoder_outputs shape -> (batch_size, input_sequence_length, hidden_dimension)
encoder_outputs, encoder_states = encoder(encoder_input)
# first decoder input '=>'
dec_input = tf.expand_dims(encoder_input[:, 0], 1)
# loading the final encoder states to decoder network as initial hidden states
decoder_states = encoder_states
print("\nPrediction for ")
for i in range(0, encoder_input.shape[1]):
    decoder_output, decoder_states = decoder(dec_input, decoder_states)
    target_prediction = attention(encoder_outputs, decoder_output)
    # Save the attention vector
    attention_vector_array.append(target_prediction.numpy()[0])
    final_output_sequence.append(np.round(target_prediction.numpy()[0]))
    print("%dth position -> %d -> %s" % (i, np.argmax(target_prediction), input_string[np.argmax(target_prediction)]))
    # pass the predicted value as next input state to decoder network
    dec_input = tf.expand_dims(encoder_input[:, np.argmax(target_prediction)], 1)  # works only for one input combination
print("\nTarget output values (softmax values over the input sequence size)")
print(target_data.numpy()[0])
print("\nPredicted output values")
print(np.array(final_output_sequence))
# + [markdown] id="01MfSZZoMNwM" colab_type="text"
# ### Train Model
# + id="WGwifkQinyC4" colab_type="code" colab={}
optimizer = tf.train.AdamOptimizer()
# + id="9uolR2wiKDUQ" colab_type="code" outputId="ea14bd52-c1d2-45e0-ca6f-348a2a66f22f" colab={"base_uri": "https://localhost:8080/", "height": 544}
epochs = 10
loss_history = []
total_attention = []
for epoch in range(epochs):
    # encoder_input shapes -> (batch_size, input_sequence_length, input_dimension)
    for (batch, (encoder_input, target_data)) in enumerate(dataset):
        loss = 0
        with tf.GradientTape() as tape:
            encoder_outputs, encoder_states = encoder(encoder_input)
            # encoder_outputs shape -> (batch_size, input_sequence_length, hidden_state_dimension) -> (32, 9, 256)
            # encoder states shape -> (batch_size, hidden_state_dimension) -> (32, 256)
            # first decoder input '=>'
            # dec_input shape -> (batch_size, input_length=1, input_dimension) -> (32, 1, 64)
            dec_input = tf.expand_dims(encoder_input[:, 0], 1)  # The '=>' symbol loaded as input
            # loading the final encoder states to decoder network as initial hidden states
            # decoder states shape -> (batch_size, hidden_state_dimension) -> (32, 256)
            decoder_states = encoder_states
            # track attention over each output sequence
            attention_vector_array = []
            # iterate up to the input sequence length, or until the output points to the '=>' symbol
            for i in range(0, encoder_input.shape[1]):
                # decoder outputs shape -> (batch_size, input_length=1, hidden_state_dimension) -> (32, 1, 256)
                # decoder states shape -> (batch_size, hidden_state_dimension) -> (32, 256)
                decoder_output, decoder_states = decoder(dec_input, decoder_states)
                # target prediction -> (batch_size, input_sequence_length) -> (32, 9)
                # target prediction points to one of the input sequence elements -> the element with the highest value
                target_prediction = attention(encoder_outputs, decoder_output)
                if batch % 10 == 0:
                    attention_vector_array.append(target_prediction.numpy()[0])
                # used for training the network by feeding in the target data (teacher forcing)
                tar_data = target_data[:, i]
                # load the input state to decoder network for next prediction
                dec_input = tf.expand_dims(encoder_input[:, np.argmax(tar_data)], 1)  # Works only for one input combination
                # loss value calculated as categorical crossentropy
                loss += tf.reduce_mean(tf.keras.backend.categorical_crossentropy(tar_data, target_prediction))
            batch_loss = (loss / batches)
        if batch % 10 == 0:
            total_attention.append(attention_vector_array)
            print("\tEpoch {:03d}/{:03d}: Loss at step {:02d}: {:.9f}".format((epoch + 1), epochs, batch, tf.reduce_mean(tf.keras.backend.categorical_crossentropy(tar_data, target_prediction))))
        # store the loss history
        loss_history.append(batch_loss.numpy())
        # fetch the trainable variables
        variables = encoder.variables + decoder.variables
        # calculate the gradient
        grads = tape.gradient(loss, variables)
        # update the weights of the network
        optimizer.apply_gradients(zip(grads, variables), global_step=tf.train.get_or_create_global_step())
    print("Epoch {:03d}/{:03d} completed \t - \tBatch loss: {:.9f}".format((epoch + 1), epochs, tf.reduce_mean(tf.keras.backend.categorical_crossentropy(tar_data, target_prediction))))
print("Final loss: {:.9f}".format(tf.reduce_mean(tf.keras.backend.categorical_crossentropy(tar_data, target_prediction))))
# + [markdown] id="0fcEN_jWXrbk" colab_type="text"
# #### Attention plot over the input sequence
# + id="4vJ1vMKzSevq" colab_type="code" outputId="80f6565b-8311-4be4-d68a-8d360474f133" colab={"base_uri": "https://localhost:8080/", "height": 910}
fig = plt.figure(figsize=(20, 20))
cols = 4
rows = int(np.ceil(len(total_attention) / cols))  # add_subplot needs integer grid sizes
for i in range(1, len(total_attention) + 1):
    fig.add_subplot(rows, cols, i).matshow(np.array(total_attention[i - 1]), cmap='YlGn')
plt.show()
# + [markdown] id="0GCUvlzuXxya" colab_type="text"
# ### Loss plot over the entire training sequence
# + id="miigFwpMW_lO" colab_type="code" outputId="6ff5bb2d-98e1-42d0-b8c5-0904eeb668e5" colab={"base_uri": "https://localhost:8080/", "height": 361}
plt.plot(loss_history)
plt.ylabel('loss value')
plt.xlabel('batches')
plt.show()
# + [markdown] id="dWkPK4bKxwuN" colab_type="text"
# ### Inference Model
# + [markdown] id="7wrftwSNLy-4" colab_type="text"
# #### Output of network after training
# + id="uTnwRFAFJuN5" colab_type="code" outputId="32436d18-50ad-4082-b003-d584cf306046" colab={"base_uri": "https://localhost:8080/", "height": 748}
# Input sequence
print("Input sequence: %s"% input_string)
#Initialize values
attention_vector_array = []
final_output_sequence = []
# expand the sequences to be batch size of 1 to be fed as input
encoder_input = tf.expand_dims(input_encoder, 0)
target_data = tf.expand_dims(output_pred, 0)
# encoder_input shapes -> (number_of_inputs, input_sequence_length, input_dimension)
# encoder_outputs shape -> (number_of_inputs, input_sequence_length, hidden_dimension)
encoder_outputs, encoder_states = encoder(encoder_input)
# first decoder input '=>'
dec_input = tf.expand_dims(encoder_input[:, 0], 1)
# loading the final encoder states to decoder network as initial hidden states
decoder_states = encoder_states
print("\nPrediction for ")
for i in range(0, encoder_input.shape[1]):
    decoder_output, decoder_states = decoder(dec_input, decoder_states)
    target_prediction = attention(encoder_outputs, decoder_output)
    # Save the attention vector
    attention_vector_array.append(target_prediction.numpy()[0])
    final_output_sequence.append(np.round(target_prediction.numpy()[0]))
    print("%dth position -> %d -> %s" % (i, np.argmax(target_prediction), input_string[np.argmax(target_prediction)]))
    # pass the predicted value as next input state to decoder network
    dec_input = tf.expand_dims(encoder_input[:, np.argmax(target_prediction)], 1)  # works only for one input combination
print("\nTarget output values (softmax values over the input sequence size)")
print(target_data.numpy()[0])
print("\nPredicted output values")
print(np.array(final_output_sequence))
# + id="Szl9S7PwjGVn" colab_type="code" outputId="dc18e3a2-eeb2-4f02-c343-08dbe925ebad" colab={"base_uri": "https://localhost:8080/", "height": 646}
np.array(attention_vector_array)
# + [markdown] id="5EkUHT0HHAhV" colab_type="text"
# References:
#
# * https://arxiv.org/pdf/1506.03134.pdf
# * https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense
| Pointer_Networks_Toy_Problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="vrm-gPCET1dB" colab_type="text"
# ## The Dataset
#
# We'll work with a dataset on the top [IMDB movies](https://www.imdb.com/search/title?count=100&groups=top_1000&sort=user_rating), as rated by IMDB.
#
#
# Specifically, we have a CSV that contains:
# - IMDB star rating
# - Movie title
# - Year
# - Content rating
# - Genre
# - Duration
# - Gross
#
# _[Details available at the above link]_
#
# + [markdown] id="GgUzw7tJT1dC" colab_type="text"
# ### Import our necessary libraries
# + id="iFIedFsqT1dD" colab_type="code" colab={}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
# %matplotlib inline
# + [markdown] id="ekNKei2nT1dG" colab_type="text"
# ### Read in the dataset
#
# First, read in the dataset, called `movies.csv` into a DataFrame called "movies."
# + id="uN89LgB5T1dH" colab_type="code" colab={}
movies = pd.read_csv('./data/movies.csv')
# + [markdown] id="Ps_hFBktT1dJ" colab_type="text"
# ## Check the dataset basics
#
# Let's first explore our dataset to verify we have what we expect.
# + [markdown] id="S1dl9-rWT1dK" colab_type="text"
# Print the first five rows.
# + id="RKpHJGo5T1dK" colab_type="code" colab={} outputId="b6c73a47-a635-4188-ab5d-60a2c012a3f2"
movies.head()
# + [markdown] id="k9jrPiwmT1dP" colab_type="text"
# How many rows and columns are in the dataset?
# + id="hLEhJ45FT1dQ" colab_type="code" colab={} outputId="ef688135-ce59-4a05-f75c-b3cfc159c680"
movies.shape
# + [markdown] id="q2ajjTVWT1dT" colab_type="text"
# What are the column names?
# + id="_NAROi4rT1dT" colab_type="code" colab={} outputId="a6c51788-5c0e-4ece-9552-6b767fabce66"
print(movies.columns)
# + [markdown] id="XhFpt44WT1dX" colab_type="text"
# How many unique genres are there?
# + id="W3FsdgJ0T1dX" colab_type="code" colab={} outputId="a6d3f3d9-5250-4fc4-a177-d6c3d870fcd4"
movies['genre'].nunique()
# + [markdown] id="e6BFcVJLT1db" colab_type="text"
# How many movies are there per genre?
# + id="AVfBLrBXT1db" colab_type="code" colab={} outputId="d32ea4e2-3f15-4eba-c52a-ac988d76fe6c"
movies['genre'].value_counts()
# + [markdown] id="N4Y1jQ8LT1ek" colab_type="text"
# ## Exploratory data analysis with visualizations
#
# For each of these prompts, create a plot to visualize the answer. Consider what plot is *most appropriate* to explore the given prompt.
#
# + [markdown] id="UQs8D0GRT1el" colab_type="text"
# What is the relationship between IMDB ratings and Rotten Tomatoes ratings?
# + id="2mCkDKSST1em" colab_type="code" colab={} outputId="a097a152-1e63-469e-b729-d5963c63636b"
movies_rated.plot(kind='scatter', x='Internet Movie Database', y='Rotten Tomatoes')
# + [markdown] id="k6si6769T1ep" colab_type="text"
# What is the relationship between IMDB rating and movie duration?
# + id="7TIobsuAT1eq" colab_type="code" colab={} outputId="e3c6177c-555f-48ef-8b4a-7f3d7336cdef"
movies_rated.plot(kind='scatter', x='duration', y='Internet Movie Database')
# + [markdown] id="BjMrpla6T1es" colab_type="text"
# How many movies are there in each genre category? (Remember to create a plot here)
# + id="JYBkEGIhT1et" colab_type="code" colab={} outputId="68415b5f-efdf-415e-c2b0-e9714dcbc29b"
movies_rated['genre'].value_counts().plot(kind='bar', color='dodgerblue')
# + [markdown] id="KykFGmptT1ev" colab_type="text"
# What does the distribution of Rotten Tomatoes ratings look like?
# + id="PgkXDWqwT1ev" colab_type="code" colab={} outputId="9917de48-9aed-47bc-e073-c2f9e80a69ab"
movies_rated['Rotten Tomatoes'].plot(kind='hist', bins=15)
# + [markdown] id="JMjiEiyRT1ey" colab_type="text"
# ## Bonus
#
# There are many things left unexplored! Consider investigating something about gross revenue and genres.
# + id="8OHxHP9JT1ey" colab_type="code" colab={} outputId="d9ac4eb4-18c9-466d-cbac-b3b8df0e8092"
movies_rated['gross'].plot(kind='hist', bins=15, color='dodgerblue')
# + id="keMVSxNLT1e2" colab_type="code" colab={} outputId="de810bf3-3933-4e11-bc4b-512458de3d3e"
# top 10 grossing films
movies_rated.sort_values(by='gross', ascending=False).head(10)
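# One possible bonus direction is average gross by genre via a groupby. A hypothetical mini-example (column names mirror the dataset; the values here are made up):

```python
import pandas as pd

# Made-up data illustrating the groupby pattern for revenue by genre.
df = pd.DataFrame({'genre': ['Drama', 'Action', 'Drama'],
                   'gross': [100.0, 250.0, 140.0]})
print(df.groupby('genre')['gross'].mean())
# Action    250.0
# Drama     120.0
```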
# + id="uBJPez44T1e4" colab_type="code" colab={}
| src/Notebooks/omdb_viz_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Tests of code for plotting probability distributions and density matrices
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import itertools
# Define pandas dataframe for a probability distribution
states = ['Apple', 'Strawberry','Coconut']
pd_df = pd.DataFrame(np.array([.1, .3, .6]), index = states)
pd_df
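# In keeping with this notebook's testing theme, a quick standalone check (same values as above) that the distribution is properly normalized:

```python
import numpy as np

# The plotted probabilities should sum to one.
probs = np.array([.1, .3, .6])
print(np.isclose(probs.sum(), 1.0))  # True
```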
# Define pandas dataframe for a density matrix
z01 = 2+1j*1
z02 = .3-1j*.2
z12 = .4+1j*.7
z01c, z02c, z12c = np.conjugate([z01, z02, z12])
rho_arr = np.array([[.2, z01, z02],
[z01c, .3, z12],
[z02c, z12c, .5]])
rho_arr
states = ['Apple', 'Strawberry','Coconut']
rho_df = pd.DataFrame(rho_arr, columns = states, index = states)
rho_df
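# A valid density matrix is Hermitian with unit trace (eigenvalue positivity is not checked here). A standalone check on the same array defined above:

```python
import numpy as np

# Rebuild the density-matrix array from above and verify its properties.
z01, z02, z12 = 2 + 1j * 1, .3 - 1j * .2, .4 + 1j * .7
z01c, z02c, z12c = np.conjugate([z01, z02, z12])
rho_arr = np.array([[.2, z01, z02],
                    [z01c, .3, z12],
                    [z02c, z12c, .5]])
print(np.allclose(rho_arr, rho_arr.conj().T))   # True: Hermitian
print(np.isclose(np.trace(rho_arr).real, 1.0))  # True: unit trace
```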
# Plot multiple probability distributions
# +
node_names = ['Day1', 'Day2']
num_nodes = len(node_names)
pd_df_list = [pd_df, pd_df]
def single_pd(ax, node_name, pd_df):
    y_pos = np.arange(len(pd_df.index)) + .5
    plt.sca(ax)
    plt.yticks(y_pos, pd_df.index)
    ax.invert_yaxis()
    ax.set_xticks([0, .25, .5, .75, 1])
    ax.set_xlim(0, 1)
    ax.grid(True)
    ax.set_title(node_name)
    width = list(itertools.chain.from_iterable(pd_df.values))
    ax.barh(y_pos, width, align='center')
plt.close('all')
fig, ax_list = plt.subplots(nrows=num_nodes, ncols=1)
for k, vtx in enumerate(node_names):
    single_pd(ax_list[k], vtx, pd_df_list[k])
plt.tight_layout()
plt.show()
# -
# Plot multiple density matrices
# +
node_names = ['Day1', 'Day2']
num_nodes = len(node_names)
rho_df_list = [rho_df, rho_df]
def single_rho(ax, node_name, rho_df):
    states = rho_df.index
    num_sts = len(states)
    x = np.linspace(0, num_sts - 1, num_sts)
    y = x
    xx, yy = np.meshgrid(x, y)
    print(xx)
    print(yy)
    ax.set_xlim(-1, num_sts)
    ax.set_xticks(np.arange(0, num_sts))
    ax.xaxis.tick_top()
    ax.set_ylim(-1, num_sts)
    ax.set_yticks(np.arange(0, num_sts))
    ax.invert_yaxis()
    ax.set_aspect('equal', adjustable='box')
    ax.set_title(node_name, y=1.35)
    for k, nom in enumerate(states):
        ax.annotate(str(k) + ': ' + nom, xy=(num_sts + .25, k), annotation_clip=False)
    max_mag = np.max(np.absolute(rho_df.values))
    q = ax.quiver(xx, yy, rho_df.values.real, rho_df.values.imag, scale=max_mag, units='x')
    qk = plt.quiverkey(q, 0, -2.2, max_mag,
                       '= {:.2e}'.format(max_mag), labelpos='E', coordinates='data')
plt.close('all')
fig, ax_list = plt.subplots(nrows=num_nodes, ncols=1)
for k, vtx in enumerate(node_names):
    single_rho(ax_list[k], vtx, rho_df_list[k])
plt.tight_layout(pad=1)
plt.show()
# -
| jupyter-notebooks/plotting-density-matrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firedrake
# language: python
# name: firedrake
# ---
# Now we'll look at a more interesting and clearly nonlinear example -- inferring an unknown conductivity rather than an unknown right-hand side.
# The PDE that we wish to solve is
#
# $$-\nabla\cdot k\nabla u = f$$
#
# and we wish to estimate the conductivity $k$.
# The conductivity has to be strictly positive, so rather than try to infer it directly we'll write
#
# $$k = k_0e^q$$
#
# and try to infer $q$.
# No matter what sign $q$ takes, the conductivity is positive.
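# A quick numerical illustration of that claim: $k = k_0e^q$ is strictly positive for any sign of $q$ (the value $k_0 = 2.0$ below is an arbitrary illustrative constant):

```python
import numpy as np

# exp maps the whole real line to (0, inf), so k stays positive.
k0 = 2.0
qs = np.linspace(-10.0, 10.0, 5)
print(np.all(k0 * np.exp(qs) > 0))  # True
```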
import firedrake
import firedrake_adjoint
mesh = firedrake.UnitSquareMesh(32, 32)
V = firedrake.FunctionSpace(mesh, family='CG', degree=2)
Q = firedrake.FunctionSpace(mesh, family='CG', degree=2)
# +
from firedrake import Constant, cos, sin
import numpy as np
from numpy import pi as π
from numpy import random
seed = 1729
generator = random.default_rng(seed)
degree = 5
x = firedrake.SpatialCoordinate(mesh)
q_true = firedrake.Function(Q)
for k in range(degree):
    for l in range(int(np.sqrt(degree**2 - k**2))):
        Z = np.sqrt(1 + k**2 + l**2)
        φ = 2 * π * (k * x[0] + l * x[1])
        A_kl = generator.standard_normal() / Z
        B_kl = generator.standard_normal() / Z
        expr = Constant(A_kl) * cos(φ) + Constant(B_kl) * sin(φ)
        mode = firedrake.interpolate(expr, Q)
        q_true += mode
# -
import matplotlib.pyplot as plt
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(q_true, axes=axes, shading='gouraud')
fig.colorbar(colors);
# Compute the true solution of the PDE.
from firedrake import exp, inner, grad, dx
u = firedrake.Function(V)
f = Constant(1.0)
J = (0.5 * exp(q_true) * inner(grad(u), grad(u)) - f * u) * dx
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = firedrake.derivative(J, u)
firedrake.solve(F == 0, u, bc)
u_true = u.copy(deepcopy=True)
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(u_true, axes=axes, shading='gouraud')
fig.colorbar(colors);
# Generate the observational data.
# +
num_points = 50
δs = np.linspace(-0.5, 2, num_points + 1)
X, Y = np.meshgrid(δs, δs)
xs = np.vstack((X.flatten(), Y.flatten())).T
θ = np.pi / 12
R = np.array([
    [np.cos(θ), -np.sin(θ)],
    [np.sin(θ), np.cos(θ)]
])
xs = np.array([
    x for x in (xs - np.array([0.5, 0.5])) @ R
    if (0 <= x[0] <= 1) and (0 <= x[1] <= 1)
])
# -
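# The sampling grid above is rotated through an angle of $\pi/12$. A standalone check that the rotation matrix is in fact a proper rotation (orthogonal, determinant one):

```python
import numpy as np

# A 2D rotation matrix satisfies R R^T = I and det(R) = 1.
theta = np.pi / 12
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R @ R.T, np.eye(2)))    # True
print(np.isclose(np.linalg.det(R), 1.0))  # True
```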
# Synthesize some observational data.
# +
U = u_true.dat.data_ro[:]
u_range = U.max() - U.min()
signal_to_noise = 20
σ = firedrake.Constant(u_range / signal_to_noise)
ζ = generator.standard_normal(len(xs))
u_obs = np.array(u_true.at(xs)) + float(σ) * ζ
point_cloud = firedrake.VertexOnlyMesh(mesh, xs)
Z = firedrake.FunctionSpace(point_cloud, 'DG', 0)
u_o = firedrake.Function(Z)
u_o.dat.data[:] = u_obs
# -
# Start with an initial guess of $q = 0$.
u = firedrake.Function(V)
q = firedrake.Function(Q)
J = (0.5 * exp(q) * inner(grad(u), grad(u)) - f * u) * dx
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = firedrake.derivative(J, u)
firedrake.solve(F == 0, u, bc)
# Very different!
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(u, axes=axes, shading='gouraud')
fig.colorbar(colors);
# Form the objective functional.
Πu = firedrake.interpolate(u, Z)
E = 0.5 * ((u_o - Πu) / σ)**2 * dx
α = firedrake.Constant(0.5)
R = 0.5 * α**2 * inner(grad(q), grad(q)) * dx
J = firedrake.assemble(E) + firedrake.assemble(R)
# Create the reduced functional and minimize the objective.
q̂ = firedrake_adjoint.Control(q)
Ĵ = firedrake_adjoint.ReducedFunctional(J, q̂)
q_min = firedrake_adjoint.minimize(Ĵ, method='Newton-CG', options={'disp': True})
# +
fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True)
for ax in axes:
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
kw = {'vmin': -5, 'vmax': +5, 'shading': 'gouraud'}
axes[0].set_title('Estimated q')
firedrake.tripcolor(q_min, axes=axes[0], **kw)
axes[1].set_title('True q')
firedrake.tripcolor(q_true, axes=axes[1], **kw);
# -
# Summarise the results
# +
fig, axes = plt.subplots(ncols=2, nrows=2, sharex=True, sharey=True, figsize=(20,12), dpi=200)
plt.suptitle('Estimating Log-Conductivity $q$ \nwhere $k = k_0e^q$ and $-\\nabla \\cdot k \\nabla u = f$ for known $f$', fontsize=25)
for ax in axes.ravel():
    ax.set_aspect('equal')
    # ax.get_xaxis().set_visible(False)
axes[0, 0].set_title('True $u$', fontsize=25)
colors = firedrake.tripcolor(u_true, axes=axes[0, 0], shading='gouraud')
fig.colorbar(colors, ax=axes[0, 0])
axes[1, 0].set_title('Sampled Noisy $u$', fontsize=25)
colors = axes[1, 0].scatter(xs[:, 0], xs[:, 1], c=u_obs)
fig.colorbar(colors, ax=axes[1, 0])
kw = {'vmin': -5, 'vmax': +5, 'shading': 'gouraud'}
axes[0, 1].set_title('True $q$', fontsize=25)
colors = firedrake.tripcolor(q_true, axes=axes[0, 1], **kw)
fig.colorbar(colors, ax=axes[0, 1])
axes[1, 1].set_title('Estimated $q$', fontsize=25)
colors = firedrake.tripcolor(q_min, axes=axes[1, 1], **kw);
fig.colorbar(colors, ax=axes[1, 1])
plt.savefig('poisson-inverse-conductivity-summary.png')
# -
| 2-poisson-inverse-conductivity/poisson-inverse-conductivity.ipynb |
# +
try:
import probml_utils as pml
except ModuleNotFoundError:
# %pip install -qq git+https://github.com/probml/probml-utils.git
import probml_utils as pml
import matplotlib.pyplot as plt
import numpy as np
try:
import pandas as pd
except ModuleNotFoundError:
# %pip install -qq pandas
import pandas as pd
from scipy.special import logsumexp
try:
from sklearn.linear_model import LinearRegression
except ModuleNotFoundError:
# %pip install -qq scikit-learn
from sklearn.linear_model import LinearRegression
from scipy.stats import multivariate_normal
n = 200
np.random.seed(1)
y = np.random.rand(n, 1)
eta = np.random.randn(n,1)*0.05
x = y + 0.3*np.sin(2*np.pi*y) + eta
data = np.concatenate((x, y), axis=1)
K = 3
X = x.reshape(-1, 1)
y = y.reshape(-1, 1)
xtest = x
ytest = y
plt.figure()
plt.scatter(x, y, edgecolors='blue', color="none")
plt.title('Inverse problem')
pml.savefig('mixexp_inverse.pdf')
plt.show()
def normalizelogspace(x):
L = logsumexp(x, axis=1).reshape(-1, 1)
Lnew = np.repeat(L, 3, axis=1)
y = x - Lnew
return y, Lnew
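# As a quick sanity check of `normalizelogspace` above: exponentiating the
# normalized log values should give rows that sum to one.

```python
import numpy as np
from scipy.special import logsumexp

logw = np.log(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
L = logsumexp(logw, axis=1).reshape(-1, 1)
post = np.exp(logw - L)  # rows: [1/6, 2/6, 3/6] and [4/15, 5/15, 6/15]
```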
def is_pos_def(x):
return np.all(np.linalg.eigvals(x) > 0)
K = 3 #nmix
D = np.size(X, axis=1)
N = np.size(X, axis=0)
norm = 50
max_iter = 39
iteration = 0
r = np.zeros((N, K))
# Initialize the parameters once, before the EM loop; the original code
# re-drew them at random on every iteration, which discards each EM update.
np.random.seed(0)
Wy = 0.1*np.random.randn(D, K)
bias = 0.3*np.random.randn(D, K)
mixweights = np.random.rand(1, K)
mixweights = mixweights/np.linalg.norm(mixweights)
sigma2 = 0.1*np.random.randn(1, K)
while iteration < max_iter:
    # E-step:
    q = np.log(mixweights)
logprior = np.repeat(q, N, axis=0)
loglik = np.zeros((N, K))
for k in range(K):
vecM = X*Wy[:, k] + bias[:, k]
vecM = vecM.reshape(200, )
cov = sigma2[0, k]
cov = np.abs(cov)
vecX = y
x = multivariate_normal.logpdf(vecX, mean=vecM, cov=cov)
x = x /norm
loglik[:, k] = x
logpost = loglik + logprior
logpost, logZ = normalizelogspace(logpost)
ll = np.sum(logZ)
post = np.exp(logpost)
#M-step:
r = post
mixweights = np.sum(r, axis=0)/N
mixweights = mixweights.reshape(1, -1)
for k in range(K):
reg = LinearRegression()
model = reg.fit(X, y, r[:, k])
Wy[:, k] = model.coef_
bias[:, k] = model.intercept_
yhat_ = np.multiply(X, Wy[:, k]) + bias[:, k]
sigma2[:, k] = np.sum(np.multiply(r[:, k], np.square(y-yhat_))) / sum(r[:, k])
iteration = iteration + 1
N = np.size(X, axis=0)
D = np.size(X, axis=1)
K = 3
weights = np.repeat(mixweights, N, axis=0)
muk = np.zeros((N, K))
vk = np.zeros((N, K))
mu = np.zeros((N, ))
v = np.zeros((N, 1))
b = 0.3*np.random.randn(D, K)
for k in range(K):
w = X*Wy[:, k] + bias[:, k]
w = w.reshape(-1, )
muk[:, k] = w
q = np.multiply(weights[:, k], muk[:, k])
mu = mu + q
vk[:, k] = sigma2[:, k]
v = v + np.multiply(weights[:, k], (vk[:, k] + np.square(muk[:, k]))).reshape(-1, 1)
v = v - np.square(mu).reshape(-1, 1)
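# The accumulation above follows the standard mixture-moment identities for the predictive mean and variance:
#
# $$\mu = \sum_k w_k\,\mu_k, \qquad v = \sum_k w_k\left(v_k + \mu_k^2\right) - \mu^2.$$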
plt.figure()
plt.scatter(xtest, y, edgecolors='blue', color="none")
plt.plot(xtest, muk[:, 0])
plt.plot(xtest, muk[:, 1])
plt.plot(xtest, muk[:, 2])
plt.title('Expert-predictions')
pml.savefig('mixexp_expert_predictions.pdf')
plt.show()
plt.figure()
for i in range(K):
plt.scatter(y, post[:, i])
plt.title('Gating functions')
pml.savefig('mixexp_gating_functions.pdf')
plt.show()
map = np.argmax(post, axis=1).reshape(-1, 1)
yhat = np.empty((N, 1))
for i in range(N):
yhat[i, 0] = muk[i, map[i, 0]]
plt.figure()
plt.scatter(xtest, yhat, marker=6, color='black')
plt.scatter(xtest, mu, marker='X', color='red')
plt.scatter(xtest, y, edgecolors='blue', color="none")
plt.title('prediction')
plt.legend(['mode', 'mean'])
pml.savefig('mixexp_predictions.pdf')
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Object Detection
# ## Import everything
# +
import os
import sys
import cv2
import numpy as np
from ote_sdk.configuration.helper import create as create_parameters_from_parameters_schema
from ote_sdk.entities.inference_parameters import InferenceParameters
from ote_sdk.entities.label_schema import LabelSchemaEntity
from ote_sdk.entities.model import ModelEntity
from ote_sdk.entities.resultset import ResultSetEntity
from ote_sdk.entities.subset import Subset
from ote_sdk.entities.task_environment import TaskEnvironment
from ote_sdk.usecases.tasks.interfaces.export_interface import ExportType
from ote_cli.datasets import get_dataset_class
from ote_cli.registry import Registry
from ote_cli.utils.importing import get_impl_class
# -
# ## Register templates
templates_dir = '../../external'
registry = Registry(templates_dir)
registry = registry.filter(task_type=sys.executable.split(os.sep)[-4])
print(registry)
# ## Load model template and its hyper parameters
model_template = registry.get('Custom_Object_Detection_Gen3_ATSS')
hyper_parameters = model_template.hyper_parameters.data
# ## Get dataset instantiated
# +
Dataset = get_dataset_class(model_template.task_type)
dataset = Dataset(
train_subset={'ann_file': '../../data/airport/annotation_faces_train.json',
'data_root': '../../data/airport/'},
val_subset={'ann_file': '../../data/airport/annotation_faces_train.json',
'data_root': '../../data/airport'}
)
labels_schema = LabelSchemaEntity.from_labels(dataset.get_labels())
# -
# ## Have a look at existing parameters
# +
hyper_parameters = create_parameters_from_parameters_schema(hyper_parameters)
for p in hyper_parameters.learning_parameters.parameters:
print(f'{p}: {getattr(hyper_parameters.learning_parameters, p)}')
# -
# ## Tweak parameters
# +
hyper_parameters.learning_parameters.batch_size = 8
hyper_parameters.learning_parameters.num_iters = 5
for p in hyper_parameters.learning_parameters.parameters:
print(f'{p}: {getattr(hyper_parameters.learning_parameters, p)}')
# -
# ## Create Task
# +
Task = get_impl_class(model_template.entrypoints.base)
environment = TaskEnvironment(
model=None,
hyper_parameters=hyper_parameters,
label_schema=labels_schema,
model_template=model_template)
task = Task(task_environment=environment)
# -
# ## Run training
# +
output_model = ModelEntity(
dataset,
environment.get_model_configuration(),
)
task.train(dataset, output_model)
# -
# ## Evaluate quality metric
# +
validation_dataset = dataset.get_subset(Subset.VALIDATION)
predicted_validation_dataset = task.infer(
validation_dataset.with_empty_annotations(),
InferenceParameters(is_evaluation=True))
resultset = ResultSetEntity(
model=output_model,
ground_truth_dataset=validation_dataset,
prediction_dataset=predicted_validation_dataset,
)
task.evaluate(resultset)
print(resultset.performance)
# -
# ## Export model to OpenVINO format
exported_model = ModelEntity(
dataset,
environment.get_model_configuration(),
)
task.export(ExportType.OPENVINO, exported_model)
# ## Evaluate the exported model
environment.model = exported_model
ov_task = get_impl_class(model_template.entrypoints.openvino)(environment)
predicted_validation_dataset = ov_task.infer(
validation_dataset.with_empty_annotations(),
InferenceParameters(is_evaluation=True))
resultset = ResultSetEntity(
model=output_model,
ground_truth_dataset=validation_dataset,
prediction_dataset=predicted_validation_dataset,
)
ov_task.evaluate(resultset)
print(resultset.performance)
# ## Draw bounding boxes
# +
import IPython
import PIL
for predictions, item in zip(predicted_validation_dataset, validation_dataset.with_empty_annotations()):
image = item.numpy.astype(np.uint8)
for box in predictions.annotation_scene.shapes:
x1 = int(box.x1 * image.shape[1])
x2 = int(box.x2 * image.shape[1])
y1 = int(box.y1 * image.shape[0])
y2 = int(box.y2 * image.shape[0])
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 3)
IPython.display.display(PIL.Image.fromarray(image))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0
# ---
# # Targeted Direct Marketing with Amazon SageMaker XGBoost
# _**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_
#
# ---
#
# ---
#
# ## Contents
#
# 1. [Background](#Background)
# 1. [Preparation](#Preparation)
# 1. [Data](#Data)
# 1. [Exploration](#Exploration)
# 1. [Transformation](#Transformation)
# 1. [Training](#Training)
# 1. [Hosting](#Hosting)
# 1. [Evaluation](#Evaluation)
# 1. [Automatic model Tuning (optional)](#Automatic-model-Tuning-(optional))
# 1. [Extensions](#Extensions)
#
# ---
#
# ## Background
#
# Direct marketing, whether by mail, email, phone, or other channels, is a common tactic for acquiring customers. Because resources and a customer's attention are limited, the goal is to target only the subset of prospects likely to respond to a particular offer. Predicting those prospects from readily available information such as demographics, past interactions, and environmental factors is a common machine learning problem.
#
# This notebook presents an example classification problem: predicting whether a customer will enroll for a term deposit at a bank after one or more phone contacts. The steps are:
#
# * Preparing your Amazon SageMaker notebook
# * Downloading data from the internet into Amazon SageMaker
# * Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms
# * Estimating a model using the Gradient Boosting algorithm
# * Evaluating the effectiveness of the model
# * Setting the model up to make ongoing predictions
#
# ---
#
# ## Preparation
#
# _This notebook was created and tested on an ml.m4.xlarge notebook instance._
#
# Let's start by specifying:
#
# - The S3 bucket and prefix to use for training and model data. This should be in the same region as the notebook instance, training, and hosting.
# - The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
# !conda update pandas
# + isConfigCell=true
import sagemaker
bucket=sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-dm'
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
# -
# Now, let's bring in the Python libraries that we'll use throughout the analysis.
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker
import zipfile # Amazon SageMaker's Python SDK provides many helper functions
pd.__version__
# Make sure that the pandas version is 1.2.4 or later. If it is not, restart the kernel before proceeding.
# ---
#
# ## Data
#
# Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data S3 bucket.
#
#
# \[Moro et al., 2014\] <NAME>, <NAME> and <NAME>. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
#
# +
# !wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
with zipfile.ZipFile('bank-additional.zip', 'r') as zip_ref:
zip_ref.extractall('.')
# -
# Now let's read this into a Pandas data frame and take a look.
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 20) # Keep the output on one page
data
# Let's talk about the data. At a high level, we can see:
#
# * We have a little over 40K customer records, with 20 features for each customer.
# * The features are mixed: some numeric, some categorical.
# * The data appears to be sorted, at least by `time` and `contact`, and maybe more.
#
# _**Specifics on each of the features:**_
#
# *Demographics:*
# * `age`: Customer's age (numeric)
# * `job`: Type of job (categorical: 'admin.', 'service', ...)
# * `marital`: Marital status (categorical: 'married', 'single', ...)
# * `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)
#
# *Past customer events:*
# * `default`: Has credit in default? (categorical: 'no', 'unknown', ...)
# * `housing`: Has a housing loan? (categorical: 'no', 'yes', ...)
# * `loan`: Has a personal loan? (categorical: 'no', 'yes', ...)
#
# *Past direct marketing contacts:*
# * `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)
# * `month`: Last contact month of the year (categorical: 'may', 'nov', ...)
# * `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)
# * `duration`: Last contact duration, in seconds (numeric). Important: if duration = 0 then `y` = 'no'.
#
# *Campaign information:*
# * `campaign`: Number of contacts performed during this campaign for this client (numeric, includes the last contact)
# * `pdays`: Number of days since the client was last contacted during a previous campaign (numeric)
# * `previous`: Number of contacts performed before this campaign for this client (numeric)
# * `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent', 'success', ...)
#
# *External environment factors:*
# * `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)
# * `cons.price.idx`: Consumer price index - monthly indicator (numeric)
# * `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)
# * `euribor3m`: Euribor 3-month rate - daily indicator (numeric)
# * `nr.employed`: Number of employees - quarterly indicator (numeric)
#
# *Target variable:*
# * `y`: Has the client subscribed to a term deposit? (binary: 'yes', 'no')
# ### Exploration
#
# Let's start exploring the data. First, let's understand how the features are distributed.
# +
# Frequency tables for each categorical feature
for column in data.select_dtypes(include=['object']).columns:
display(pd.crosstab(index=data[column], columns='% observations', normalize='columns'))
# Histograms for each numeric features
display(data.describe())
# %matplotlib inline
hist = data.hist(bins=30, sharey=True, figsize=(10, 10))
# -
# Notice that:
#
# * Almost 90% of the values for our target variable `y` are "no", so most customers did not subscribe to a term deposit.
# * Many of the predictive features take on values of "unknown", some more commonly than others. We should think carefully about what causes a value of "unknown" (are these customers non-representative in some way?) and how it should be handled.
# * Even if "unknown" is included as its own distinct category, what does it mean, given that in reality those observations likely fall within one of the other categories of that feature?
# * Many of the predictive features have categories with very few observations in them. If we find a small category to be highly predictive of our target outcome, do we have enough evidence to generalize from it?
# * Contact timing is particularly skewed: almost a third of contacts happened in May and less than 1% in December. What does this mean for predicting our target variable next December?
# * There are no missing values in our numeric features, or missing values have already been imputed.
# * `pdays` takes a value near 1000 for almost all customers, likely a placeholder value signifying that there was no previous contact.
# * Several numeric features have very long tails. Do we need to handle the few observations with extremely large values differently?
# * Several numeric features (particularly the macroeconomic ones) occur in distinct buckets. Should these be treated as categorical?
#
# Next, let's look at how our features relate to the target that we are attempting to predict.
# +
for column in data.select_dtypes(include=['object']).columns:
if column != 'y':
display(pd.crosstab(index=data[column], columns=data['y'], normalize='columns'))
for column in data.select_dtypes(exclude=['object']).columns:
print(column)
hist = data[[column, 'y']].hist(by='y', bins=30)
plt.show()
# -
# Notice that:
#
# * Customers who are "blue-collar", "married", have an "unknown" default status, were contacted by "telephone", or were last contacted in "may" are a substantially smaller share of "yes" than of "no" for subscribing.
# * Distributions of the numeric variables differ across the "yes" and "no" subscribing groups, but the relationships may not be straightforward.
#
# Next, let's look at how our features relate to one another.
display(data.corr())
pd.plotting.scatter_matrix(data, figsize=(12, 12))
plt.show()
# Notice that:
# * Features vary widely in their relationships with one another: some have strongly negative correlation, others strongly positive.
# * Relationships between features are non-linear and, in many cases, discrete.
# ### Transformation
#
# Cleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly, and is one of the more subjective aspects of the process. Several common techniques include:
#
# * Handling missing values: some machine learning algorithms are capable of handling missing values, but most would rather not. Options include:
#  * Removing observations with missing values: this works well if only a very small fraction of observations have incomplete information.
#  * Removing features with missing values: this works well if there is a small number of feature columns with many missing values.
#  * Imputing missing values: entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.
# * Converting categorical values to numeric: the most common method is one-hot encoding, which for each feature maps every distinct value of that column to its own feature, taking a value of 1 when the categorical feature is equal to that value, and 0 otherwise.
# * Oddly distributed data: although for non-linear models like Gradient Boosted Trees this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. In some cases, simply taking the natural log of the features is enough to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model once one-hot encoded.
# * Handling more complicated data types: manipulating images, text, or data at varying grains is left for other notebook templates.
#
# Luckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well with sparse or oddly distributed data. Therefore, let's keep the pre-processing simple.
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators
# Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting the weather far enough out to plan umbrella inventory is probably just as difficult as forecasting umbrella sales without knowing the weather. So, including this in your model may give you a false sense of precision.
#
# Following this logic, let's remove the economic features and `duration` from our data, as they would need to be forecasted with high precision to be used as inputs in future predictions.
#
# Even if we were to use values of the economic indicators from the previous quarter, those values are likely not as relevant for prospects contacted early in the next quarter as for those contacted later on.
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
# When building a model whose primary goal is to predict a target value on new data, it is important to understand overfitting. Supervised learning models are designed to minimize the error between their predictions of the target value and the actual values in the data they are given. This last part is key: in their quest for greater accuracy, machine learning models can latch onto minor idiosyncrasies in the data they are shown. These idiosyncrasies do not repeat themselves in subsequent data, so the model's real-world predictions end up less accurate, at the expense of more accurate predictions during the training phase.
#
# The most common way of preventing this is to build models with the principle that a model should be judged not only on its fit to the data it was trained on, but also on its fit to "new" data. There are several ways of operationalizing this concept: holdout validation, cross-validation, leave-one-out validation, and so on. Here, we'll simply split the data randomly into three uneven groups: the model will be trained on 70% of the data, evaluated on 20% of the data to estimate the accuracy we can hope for on "new" data, and 10% will be held back as a final test dataset.
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
# Amazon SageMaker's XGBoost container expects data in libSVM or CSV format. In this example we'll use CSV. Note that the first column must be the target variable and that the CSV must not include a header. Also, although it is repetitive, it's easiest to do this after the train/validation/test split rather than before, which avoids any misalignment issues due to the random reordering.
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
# Now we'll copy the files to S3 so that Amazon SageMaker's managed training can pick them up.
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
# ---
#
# ## End of Lab 1
#
# ---
#
# ## Training
#
# We now know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, when targeting future prospects, good predictive accuracy is preferred over being able to explain why a prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm.
#
# There are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining the predictions of many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate further on gradient boosted trees and on how they differ from similar algorithms.
#
# `xgboost` is an extremely popular open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been used successfully in many machine learning competitions. Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework.
#
# First we need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost.
container = sagemaker.image_uris.retrieve(region=boto3.Session().region_name, framework='xgboost', version='latest')
# Then, because we're training with the CSV file format, we'll create `TrainingInput`s that the training function can use as pointers to the files in S3, and which also specify that the content type is CSV.
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
# Next, we need to specify training parameters for the estimator. This includes:
# 1. The `xgboost` algorithm container
# 1. The IAM role to use
# 1. The training instance type and count
# 1. The S3 location for output data
# 1. The algorithm hyperparameters
#
# And then a `.fit()` call, which specifies:
# 1. The S3 locations of the input data. In this case we pass both a training and a validation set.
#
# +
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sess)
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
# -
# ---
#
# ## Hosting
#
# Now that we've trained the `xgboost` algorithm on our data, let's deploy the model behind a real-time endpoint.
xgb_predictor = xgb.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
# ---
#
# ## Evaluation
#
# There are many ways to compare the performance of a machine learning model, but let's start by simply comparing actual to predicted values. In this case, we're just predicting whether the customer subscribed to a term deposit (`1`) or not (`0`), which produces a simple confusion matrix.
#
# First, we need to determine how we pass data into and receive data from our endpoint. The data currently sits in memory on our notebook instance as NumPy arrays. To send it in an HTTP POST request, we'll serialize it as a CSV string and then decode the resulting CSV response.
#
# *Note: for inference with CSV format, SageMaker XGBoost requires that the data does NOT include the target variable.*
xgb_predictor.serializer = sagemaker.serializers.CSVSerializer()
# Now, we'll use a simple function to:
# 1. Loop over our test dataset
# 1. Split it into mini-batches of rows
# 1. Convert those mini-batches into CSV string payloads (notice that we drop the target variable from the dataset first)
# 1. Retrieve mini-batch predictions by invoking the XGBoost endpoint
# 1. Collect the predictions and convert the CSV output our model provides into a NumPy array
# +
def predict(data, predictor, rows=500):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = ''
for array in split_array:
predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')])
return np.fromstring(predictions[1:], sep=',')
predictions = predict(test_data.drop(['y_no', 'y_yes'], axis=1).to_numpy(), xgb_predictor)
# -
# Now let's check the confusion matrix to see how our predictions compare to the actuals.
pd.crosstab(index=test_data['y_yes'], columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
# So, of the ~4000 potential customers, we predicted that 136 would subscribe, and 94 of them actually did. We also had 389 subscribers whom we did not predict would subscribe. This is less than ideal, but the model can (and should) be tuned to improve it. Most importantly, note that with minimal effort our model achieved accuracy similar to that reported [here](http://media.salford-systems.com/video/tutorial/2015/targeted_marketing.pdf).
#
# _Note that because there is some element of randomness in the algorithm's subsampling, your results may differ slightly from the text above._
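# Reading those counts as confusion-matrix cells, precision and recall follow directly; a small sketch using the numbers quoted above:

```python
# Counts quoted in the text above: 136 predicted subscribers, of whom 94
# actually subscribed, plus 389 subscribers that the model missed.
tp = 94            # true positives
fp = 136 - 94      # false positives
fn = 389           # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
```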
# ## Automatic model Tuning (optional)
#
# Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in the model that performs best, as measured by a metric that you choose.
#
# For example, suppose that you want to solve a binary classification problem on this marketing dataset. Your goal is to maximize the area under the curve (auc) metric of the algorithm by training an XGBoost model. You don't know which values of the `eta`, `alpha`, `min_child_weight`, and `max_depth` hyperparameters to use to train the best model. To find the best values, you can specify ranges that Amazon SageMaker hyperparameter tuning searches to find the combination of values resulting in the training job that performs best as measured by your chosen objective metric. Hyperparameter tuning launches training jobs that use hyperparameter values in the ranges you specified, and returns the training job with the highest AUC.
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'max_depth': IntegerParameter(1, 10)}
objective_metric_name = 'validation:auc'
tuner = HyperparameterTuner(xgb,
objective_metric_name,
hyperparameter_ranges,
max_jobs=20,
max_parallel_jobs=3)
tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
# return the best training job name
tuner.best_training_job()
# Deploy the best trained or user specified model to an Amazon SageMaker endpoint
tuner_predictor = tuner.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
# Create a serializer
tuner_predictor.serializer = sagemaker.serializers.CSVSerializer()
# Predict
predictions = predict(test_data.drop(['y_no', 'y_yes'], axis=1).to_numpy(),tuner_predictor)
# Collect predictions and convert from the CSV output our model provides into a NumPy array
pd.crosstab(index=test_data['y_yes'], columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
# ---
#
# ## Extensions
#
# This example analyzed a relatively small dataset, but it used Amazon SageMaker features such as distributed managed training and real-time model hosting, which could easily be applied to much larger problems. To further improve predictive accuracy, we could tweak the threshold at which we cut off our predictions to alter the mix of false positives and false negatives, or explore techniques like hyperparameter tuning. In a real-world scenario, we would also spend more time engineering features by hand, and would likely look for additional datasets containing customer information not available in our initial dataset.
# ### (Optional) Clean-up
#
# If you are done with this notebook, please run the cells below. This removes the hosted endpoints you created and prevents charges from stray instances being left running.
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
tuner_predictor.delete_endpoint(delete_endpoint_config=True)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cityscapes
# +
import tensorflow as tf
class Config:
class data:
shape = (600, 600)
shuffle = False
class model:
input_shape = (600, 600)
# -
# ## Setup
# +
from math import ceil
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# +
def clean_display():
from matplotlib import rc
rc('axes.spines',top=False,bottom=False,left=False,right=False);
rc('axes',facecolor=(1,1,1,0),edgecolor=(1,1,1,0));
rc(('xtick','ytick'),color=(1,1,1,0));
clean_display()
# +
def display_segment_example(i, s, rows=1, cols=2, ix=1, alpha=.5):
plt.subplot(rows, cols, ix)
plt.imshow(i)
plt.imshow(s, interpolation='none', alpha=alpha)
def display_segment_many(samples, rows=None, cols=4, full=True, figsize=None):
if not rows: rows = ceil(len(samples) / cols)
if not figsize: figsize = (16, int(2 * rows / (0.25*cols)))
if full: plt.figure(figsize=figsize)
for ix, (i, s) in enumerate(samples):
display_segment_example(i, s, rows=rows, cols=cols, ix=ix + 1)
if full: plt.tight_layout()
# -
# ## Load Cityscapes Dataset
# +
import tensorflow_datasets as tfds
ds, info = tfds.load('cityscapes/semantic_segmentation',
split='train',
shuffle_files=Config.data.shuffle,
with_info=True)
# +
@tf.function
def load_fn(d):
x, y = tf.numpy_function(augment_fn,
inp=[d['image_left'], d['segmentation_label']],
Tout=[tf.uint8, tf.uint8])
x = tf.ensure_shape(x, (*Config.data.shape, 3))
y = tf.ensure_shape(y, (*Config.data.shape, 1))
return x, y
def augment_fn(image, mask):
    # Runs inside tf.numpy_function, so `image` and `mask` arrive as NumPy arrays.
    # Minimal placeholder augmentation (the original cell was truncated here):
    # take a random crop of the configured shape and flip horizontally half the time.
    h, w = Config.data.shape
    top = np.random.randint(0, image.shape[0] - h + 1)
    left = np.random.randint(0, image.shape[1] - w + 1)
    image = image[top:top + h, left:left + w]
    mask = mask[top:top + h, left:left + w]
    if np.random.rand() < 0.5:
        image, mask = image[:, ::-1].copy(), mask[:, ::-1].copy()
    return image, mask
# -
ds_augmented = ds.map(load_fn
                      # , num_parallel_calls=tf.data.AUTOTUNE
                      )
items = list(ds_augmented.take(4))
display_segment_many(items, rows=2, cols=2)
# ## Network
ef = tf.keras.applications.EfficientNetB7(weights='imagenet')
ef.output.shape
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Time Series Analysis - I
# In finance, most data-driven decisions are made by looking at the past trends of the various factors influencing the economy. How each of these factors affects the subject (a bond, portfolio, or price), and its correlation with other events or factors, is generally derived by calculating the exposure of the bond or portfolio to the factor, or by finding the covariance between different factor values to measure their interaction. Both of these quantities are derived by looking at historical changes and the sequence of events. Hence, time series analysis is a crucial component of finance and risk management.
#
# A time series is a series of data points indexed in time order; the ordering may be by minute, hour, day, month, or year. The goal of quantitative researchers is to identify trends, seasonal variations, and correlation in this financial time series data using statistical methods, and ultimately to generate trading signals by carefully evaluating the risks involved. Time series analysis provides us with a robust statistical framework for assessing the behaviour of time series, such as asset prices, in order to help us trade off of this behaviour.
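# The "exposure" mentioned above can be estimated as a regression beta, cov(portfolio, factor) / var(factor). A minimal sketch on simulated returns; the 1.5 factor loading and the noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
factor = rng.normal(size=1000)                                # factor returns
portfolio = 1.5 * factor + rng.normal(scale=0.5, size=1000)   # true beta = 1.5

# Exposure (beta) of the portfolio to the factor
beta = np.cov(portfolio, factor)[0, 1] / np.var(factor, ddof=1)
```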
# **Objective:** This notebook covers some essential basic concepts of statistical time series analysis and forecasting techniques.
# ### Stationarity
#
# A time series is considered to be stationary when the following three conditions hold (illustrated below):
#
# 1) The mean of the series is not a function of time.
# <img src="images/ts1.png">
#
# 2) The variance of the series is not a function of time. This is called homoscedasticity.
# <img src="images/ts2.png">
#
# 3) The covariance of the $i$th term and the $(i+m)$th term is not a function of time.
# <img src="images/ts3.png">
#
#
# Stationarity of a time series is important because many statistical techniques assume the series to be stationary: in that case we can assume that the future statistical properties of the series will be the same as its current statistical properties. If a series is not stationary, we try to transform it into a stationary one.
#
# ### Autocorrelation
#
# A time series model decomposes the series into three components: trend, seasonal, and random.
#
# The random component is called the residual or error: the difference between the predicted and observed values. Autocorrelation occurs when these residuals are correlated with each other, that is, when the error of the $i$th term depends on the errors of any of the preceding terms $0, \ldots, i-1$.
#
# The Autocorrelation Function (ACF) of a series gives the correlation between the series $x(t)$ and lagged values of the series for lags of 1, 2, ...
# The ACF can be used to identify the possible structure of time series data, and the ACF of a model's residuals is a useful diagnostic.
# Following is an ACF plot of the residuals of a time series. The lag is shown on the horizontal axis and the autocorrelation on the vertical. The red lines indicate the bounds of statistical significance. This is a good ACF for residuals: nothing is significant, meaning the residuals do not depend on the past and are random in nature, which is what we would like them to be.
#
# <img src="images/ACF11.gif">
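# The significance bounds drawn in red correspond approximately to $\pm 1.96/\sqrt{N}$ for a series of length $N$. As an illustrative sketch (the helper function below is not part of the notebook), the sample ACF can be computed directly with NumPy:

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation for lags 0..nlags (illustrative helper)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom for k in range(nlags + 1)])

rng = np.random.default_rng(42)
noise = rng.normal(size=1000)
r = sample_acf(noise, nlags=10)
bound = 1.96 / np.sqrt(len(noise))  # approximate 95% significance band
print(r.round(3), f"bound = ±{bound:.3f}")
```

# For white noise, the autocorrelations at all lags beyond 0 should fall inside the band.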
#
# **Why should we care about autocorrelation?**
#
# Serial correlation matters for the validity of our model predictions: the residuals (errors) of a stationary TS are serially uncorrelated by definition. It is critical that we account for autocorrelation in our model; otherwise the standard errors of our parameter estimates will be biased and underestimated, making any tests based on the model invalid. In layman's terms, ignoring autocorrelation means we are likely to draw incorrect conclusions about the impact of the independent variables in our model.
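# A quick check for first-order serial correlation in residuals is the Durbin-Watson statistic, sketched below with NumPy (this helper is not part of the original notebook). Values near 2 indicate no first-order autocorrelation; values toward 0 or 4 indicate positive or negative correlation, respectively.

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: sum of squared successive differences over sum of squares."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(3)
uncorrelated = rng.normal(size=1000)           # i.i.d. noise
correlated = np.cumsum(rng.normal(size=1000))  # random walk: strong positive autocorrelation

dw_wn = durbin_watson(uncorrelated)
dw_rw = durbin_watson(correlated)
print(f"white noise: DW = {dw_wn:.2f}")  # near 2
print(f"random walk: DW = {dw_rw:.2f}")  # near 0
```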
#
#
# ### Partial Autocorrelation (PACF)
#
# Suppose you have three points in a time series: $x_3$, $x_2$, $x_1$. Using the ACF you would find the correlation between $x_1$ and $x_2$. The value thus obtained is not the true direct correlation, because the value of $x_2$ is likely to be influenced by $x_3$. The PACF is the portion of the correlation between $x_1$ and $x_2$ that is not explained by the correlation between $x_3$ and $x_2$.
#
# For an AR model, the theoretical PACF cuts off past the order of the model: the partial autocorrelations are equal to zero beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model.
#
# ### White Noise
#
# By definition, a time series that is a white noise process has serially uncorrelated errors, and the expected mean of those errors is equal to zero. This means that the errors (residuals) are drawn completely at random from some probability distribution, i.e. they are independent and identically distributed (i.i.d.).
#
# If our time series model results in white noise residuals, it means we have successfully captured the underlying process and explained every form of correlation, leaving only errors (residuals) that are completely random. Our predicted values then differ from the observed values only by a random error component that cannot be forecasted or modeled.
#
# Much of time series analysis is, in essence, the attempt to fit a model to the series such that the residual series is indistinguishable from white noise.
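# A common formal check for whiteness is the Ljung-Box test, sketched here directly from its definition (the helper below is illustrative, not part of the notebook): $Q = n(n+2)\sum_{k=1}^{h} \hat{\rho}_k^2/(n-k)$, which is approximately $\chi^2_h$ under the null hypothesis of no autocorrelation up to lag $h$.

```python
import numpy as np
from scipy import stats

def ljung_box(x, h):
    """Ljung-Box Q statistic and p-value for lags 1..h (illustrative helper)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.dot(xc, xc)
    r = np.array([np.dot(xc[:n - k], xc[k:]) / denom for k in range(1, h + 1)])
    q = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, h + 1)))
    return q, stats.chi2.sf(q, df=h)

rng = np.random.default_rng(7)
resid = rng.normal(size=500)               # i.i.d. residuals
q, p = ljung_box(resid, h=10)
print(f"white noise: Q = {q:.2f}, p = {p:.3f}")    # large p: cannot reject whiteness

q2, p2 = ljung_box(np.cumsum(resid), h=10)         # random walk: heavily autocorrelated
print(f"random walk: Q = {q2:.2f}, p = {p2:.3g}")  # tiny p: reject whiteness
```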
# **The following cells contain code to plot the series values, its ACF and PACF, and QQ and probability plots that check how close the residuals are to a normal distribution.**
# +
# Importing the needed packages
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
import statsmodels.stats as sms
# -
def tsplot(y, lags=None, figsize=(15, 10), style='bmh'):
    '''
    Prepares a (3, 2) grid of plots for visualizing the time series values, its autocorrelation and
    partial autocorrelation, and QQ and probability plots for comparison with the normal distribution.
    Args:
        y: time series values
        lags: how many lagged values to consider.
    '''
    if not isinstance(y, pd.Series):
        y = pd.Series(y)
    with plt.style.context(style):
        fig = plt.figure(figsize=figsize)
        layout = (3, 2)
        ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
        acf_ax = plt.subplot2grid(layout, (1, 0))
        pacf_ax = plt.subplot2grid(layout, (1, 1))
        qq_ax = plt.subplot2grid(layout, (2, 0))
        pp_ax = plt.subplot2grid(layout, (2, 1))
        y.plot(ax=ts_ax)
        ts_ax.set_title('Time Series Analysis Plots')
        smt.graphics.plot_acf(y, lags=lags, ax=acf_ax, alpha=0.05)
        smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax, alpha=0.05)
        sm.qqplot(y, line='s', ax=qq_ax)
        qq_ax.set_title('QQ Plot')
        scs.probplot(y, sparams=(y.mean(), y.std()), plot=pp_ax)
        plt.tight_layout()
    return
# +
np.random.seed(1)
# plot of discrete white noise
randser = np.random.normal(size=1000)
tsplot(randser, lags=30)
# -
# **Description of the plots:**
#
# 1) The first plot shows the values of the time series plotted against time.
#
# 2) The second row shows the autocorrelation (ACF) and partial autocorrelation (PACF) plots.
#
# 3) The third row shows the QQ plot and the probability plot.
#
#
| Time Series - I.ipynb |