markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Dataset saving | # Create a list of all input images
if not ip_files or not upsample_files:
for lidar_file in processed_lidar_files:
if 'front' in lidar_file:
out_ip = get_image_files(lidar_file, 'ip')
out_upsample = get_image_files(lidar_file, 'upsample')
ip_files.append(list(o... | _____no_output_____ | MIT | data_processing/lidar_data_processing.ipynb | abhitoronto/KITTI_ROAD_SEGMENTATION |
Ground Truth Conditioning | def create_binary_gt(lidar_files, color=(255,0,255)):
gt_files = []
for lidar_file in lidar_files:
if 'front' in lidar_file:
assert Path(lidar_file).is_file(), f'{lidar_file} is not a file'
# Get Label file
label_file = extract_semantic_file_name_from_any_file_name(li... | _____no_output_____ | MIT | data_processing/lidar_data_processing.ipynb | abhitoronto/KITTI_ROAD_SEGMENTATION |
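The two cells above are truncated; to make the ground-truth conditioning step concrete, here is a minimal sketch of how a KITTI road label image (road pixels painted in the magenta `(255, 0, 255)` passed as `color`) can be turned into a binary mask. This is an illustrative assumption about what `create_binary_gt` does per file, not the repository's actual implementation, and the `cv2` usage is hypothetical:

```python
import numpy as np
import cv2

def binarize_gt_image(gt_path, color=(255, 0, 255)):
    # Read the label image and convert to RGB so the channel order matches `color`
    img = cv2.cvtColor(cv2.imread(gt_path), cv2.COLOR_BGR2RGB)
    # Pixels equal to the road color become foreground (255), everything else 0
    mask = np.all(img == np.array(color), axis=-1).astype(np.uint8) * 255
    return mask
```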
Beyond SIR modeling [](https://colab.research.google.com/github/collectif-codata/pyepidemics/blob/master/docs/tutorials/beyond-sir.ipynb) Note: In this tutorial we will see how we can build differential-equation models and go from the simple SIR mode... | %matplotlib inline
%load_ext autoreload
%autoreload 2
# Developer import
import sys
sys.path.append("../../") | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
On Google Colab: Uncomment the following line to install the library locally | # !pip install pyepidemics | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Verify the library is correctly installed | import pyepidemics
from pyepidemics.models import SIR,SEIR,SEIDR,SEIHDR | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Introduction Tip: This tutorial is largely inspired by this great article [Infectious Disease Modelling: Beyond the Basic SIR Model](https://towardsdatascience.com/infectious-disease-modelling-beyond-the-basic-sir-model-216369c584c4) by Henri Froese, from which actually a huge part of the code from this library is in... | N = 1000
beta = 1
gamma = 1/4
# Define model
sir = SIR(N,beta,gamma)
# Solve the equations
states = sir.solve(init_state = 1)
states.show(plotly = False) | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
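For readers new to compartmental models, the SIR system solved above is just three coupled ODEs; a minimal standalone sketch with `scipy`, independent of pyepidemics, makes the dynamics explicit:

```python
import numpy as np
from scipy.integrate import odeint

N, beta, gamma = 1000, 1, 1/4  # same parameters as above

def deriv(y, t):
    S, I, R = y
    dS = -beta * S * I / N             # susceptibles become infected
    dI = beta * S * I / N - gamma * I  # infected recover at rate gamma
    dR = gamma * I
    return dS, dI, dR

t = np.linspace(0, 100, 1000)
S, I, R = odeint(deriv, (N - 1, 1, 0), t).T
```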
You can visualize the transitions between compartments with the command ``.network.show()`` (not especially useful for a simple SIR model, but handy for checking more complex ones) | sir.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIR model  | # Population
N = 1e6
beta = 1
delta = 1/3
gamma = 1/4
# Define the model
seir = SEIR(N,beta,delta,gamma)
# Solve the equations
states = seir.solve(init_state = 1)
states.show(plotly = False)
seir.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIDR model  | # Population
N = 1e6
gamma = 1/4
beta = 3/4
delta = 1/3
alpha = 0.2 # probability of death
rho = 1/9 # 9 days before death
# Define the model
seidr = SEIDR(N,beta,delta,gamma,rho,alpha)
# Solve the equations
states = seidr.solve(init_state = 1)
states.show(plotly = False)
seidr.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIHDR model | # Population
N = 1e6
beta = 1/4 * 5 # R0 = 2.5
delta = 1/5
gamma = 1/4
theta = 1/5 # 5 days before complication
kappa = 1/10 # 10 days before symptoms disappear
phi = 0.5 # probability of complications
alpha = 0.2 # probability of death
rho = 1/9 # 9 days before death
# Define the model
seihdr = SEIHDR(N,beta,delta,gamm... | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Towards COVID19 modeling To model COVID19 epidemics, we can use a more complex compartmental model to account for different levels of symptoms and patients going to ICU. You can read more about it in this [tutorial](https://collectif-codata.github.io/pyepidemics/tutorials/covid/) Modeling policies Simulating paramet... | date_lockdown = 53
def beta(t):
if t < date_lockdown:
return 3.3/4
else:
return 1/4
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,100)
y = np.vectorize(beta)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
For convenience we can use the helper function defined in pyepidemics | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
]
fn = make_dynamic_fn(policies,sigmoid = False)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
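To make the helper's behavior concrete, here is a minimal sketch of what a purely step-wise version of `make_dynamic_fn` could look like. This is an illustrative re-implementation, not the library's code; the real pyepidemics helper also supports the sigmoid blending shown below:

```python
def make_step_fn(policies):
    """Return f(t): start at the first value, switch to `value` at each (value, date)."""
    base, changes = policies[0], list(policies[1:])
    def fn(t):
        value = base
        for new_value, date in changes:
            if t >= date:
                value = new_value
        return value
    return fn

fn = make_step_fn([3.3/4, (1/4, 53)])
assert fn(0) == 3.3/4 and fn(60) == 1/4
```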
The result is the same, but we can use this function for more complex policies | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = False)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Gradual transitions with sigmoid. Behaviors don't change in a single day; to model this phenomenon we may prefer gradual transitions from one value to the next, using sigmoid functions. We can use the previous function for that: | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = True)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
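The gradual change is obtained by blending consecutive values with a logistic function; a sketch of one such blend between two values (the exact parameterization of the transition width in pyepidemics may differ):

```python
import numpy as np

def sigmoid_blend(v0, v1, t, t_switch, width=5.0):
    # Logistic weight goes from ~0 to ~1 around t_switch over roughly `width` days
    w = 1.0 / (1.0 + np.exp(-(t - t_switch) / (width / 4)))
    return v0 + (v1 - v0) * w
```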
We can even specify the transition durations as follows | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = True,transition = 8)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Or even for each transition | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53,15),
(2/4,80,5),
]
fn = make_dynamic_fn(policies,sigmoid = True)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Lockdown Instead of passing a constant beta to the previous SEIHDR model, we can pass any function of time | lockdown_date = 53
policies = [
3.3/4,
(1/4,lockdown_date),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
# Population
N = 1e6
delta = 1/5
gamma = 1/4
theta = 1/5 # 5 days before complication
kappa = 1/10 # 10 days before symptoms disappear
phi = 0.5 # probability of complications
a... | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Lockdown exit Now that you've understood how to change a parameter over time, it's easy to simulate a lockdown exit by adding a new parameter. | for R_post_lockdown in [0.1,0.5,1,2,3.3]:
lockdown_date = 53
duration_lockdown = 60
policies = [
3.3/4,
(0.6/4,lockdown_date),
(R_post_lockdown/4,lockdown_date+duration_lockdown),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
... | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
AMUSE: Community codes | import numpy
numpy.random.seed(11)
from amuse.lab import *
from amuse.support.console import set_printing_strategy
set_printing_strategy(
"custom",
preferred_units=[units.MSun, units.parsec, units.Myr, units.kms],
precision=6, prefix="", separator=" [", suffix="]",
)
converter = nbody_system.nbody_to_si(1 |... | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Amuse contains many community codes, which can be found in `amuse.community`. These are often codes that have been in use as standalone codes for a long time (e.g. Gadget2), but some are unique to AMUSE (e.g. ph4, a 4th-order parallel Hermite N-body integrator with GPU support). Each community code must be instantiated to ... | test_sphere = new_plummer_model(1000, converter)
test_sphere.mass = new_salpeter_mass_distribution(1000, mass_min=0.3 | units.MSun)
def new_gravity(particles):
gravity = ph4(converter, number_of_workers=1)
gravity.parameters.epsilon_squared = (0.01 | units.parsec)**2
gravity.particles.add_particles(particle... | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Note that the original particles (`test_sphere`) were not modified, while those maintained by the code were (for performance reasons). Also, small numerical errors can arise at this point, the magnitude of which depends on the chosen converter units. To synchronise the particle sets, AMUSE uses "channels". These can cop... | gravity, gravity_to_model = new_gravity(test_sphere)
print(gravity.particles.center_of_mass())
gravity.evolve_model(0.1 | units.Myr)
gravity_to_model.copy()
print(gravity.particles.center_of_mass())
print(test_sphere.center_of_mass())
gravity.stop() | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Combining codes: gravity and stellar evolution In a simulation of a star cluster, we may want to combine several codes to address different parts of the problem: an N-body code for gravity, and a stellar evolution code. In the simplest case, these interact only via the stellar mass, which is changed over time by the stell... | def new_evolution(particles):
evolution = SSE()
evolution.parameters.metallicity = 0.01
evolution.particles.add_particles(particles)
evolution_to_model = evolution.particles.new_channel_to(particles)
return evolution, evolution_to_model
evolution, evolution_to_model = new_evolution(test_sphere)
gra... | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
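A sketch of the coupling loop this section describes, reusing the `new_gravity`/`new_evolution` helpers above: advance stellar evolution, push the updated masses into the gravity code through a channel, then advance gravity. The step size and end time are illustrative choices, not values from the original notebook:

```python
from amuse.lab import units

model_to_gravity = test_sphere.new_channel_to(gravity.particles)

model_time = 0 | units.Myr
dt = 0.1 | units.Myr
while model_time < (1 | units.Myr):
    model_time += dt
    evolution.evolve_model(model_time)
    evolution_to_model.copy()                    # updated stellar masses -> model set
    model_to_gravity.copy_attributes(["mass"])   # masses -> gravity code
    gravity.evolve_model(model_time)
    gravity_to_model.copy()                      # positions/velocities -> model set
```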
Data Science Academy - Python Fundamentos - Chapter 4 Download: http://github.com/dsacademybr | # Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version()) | Python version used in this Jupyter Notebook: 3.7.6
| MIT | Data Science Academy/Cap04/Notebooks/DSA-Python-Cap04-10-Enumerate.ipynb | srgbastos/Artificial-Intelligence |
Enumerate | # Create a list
seq = ['a','b','c']
enumerate(seq)
list(enumerate(seq))
# Print the values of a list with the enumerate() function and their respective indices
for indice, valor in enumerate(seq):
    print(indice, valor)
for indice, valor in enumerate(seq):
if indice >= 2:
break
else:
pri... | 0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
| MIT | Data Science Academy/Cap04/Notebooks/DSA-Python-Cap04-10-Enumerate.ipynb | srgbastos/Artificial-Intelligence |
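`enumerate` also accepts a `start` argument, which is handy when you want 1-based indices:

```python
seq = ['a', 'b', 'c']
for indice, valor in enumerate(seq, start=1):
    print(indice, valor)
# 1 a
# 2 b
# 3 c
```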
Engineer features and convert time series data to images Imports & Settings To install `talib` with Python 3.7 follow [these](https://medium.com/@joelzhang/install-ta-lib-in-python-3-7-51219acacafb) instructions. | import warnings
warnings.filterwarnings('ignore')
from talib import (RSI, BBANDS, MACD,
NATR, WILLR, WMA,
EMA, SMA, CCI, CMO,
MACD, PPO, ROC,
ADOSC, ADX, MOM)
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.regression.rol... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Loading Quandl Wiki Stock Prices & Meta Data | adj_ohlcv = ['adj_open', 'adj_close', 'adj_low', 'adj_high', 'adj_volume']
with pd.HDFStore(DATA_STORE) as store:
prices = (store['quandl/wiki/prices']
.loc[idx[START:END, :], adj_ohlcv]
.rename(columns=lambda x: x.replace('adj_', ''))
.swaplevel()
.sort_index... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Rolling universe: pick 500 most-traded stocks | dollar_vol = prices.close.mul(prices.volume).unstack('symbol').sort_index()
years = sorted(np.unique([d.year for d in prices.index.get_level_values('date').unique()]))
train_window = 5 # years
universe_size = 500
universe = []
for i, year in enumerate(years[5:], 5):
start = str(years[i-5])
end = str(years[i])
... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
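The selection cell above is truncated; a plausible completion of the rolling loop, ranking tickers by median daily dollar volume inside each five-year window and keeping the `universe_size` most-traded ones (an illustrative reconstruction, not the book's exact code):

```python
# Hypothetical completion of the truncated loop
universe_members = []
for i, year in enumerate(years[5:], 5):
    start, end = str(years[i - 5]), str(years[i])
    most_traded = (dollar_vol.loc[start:end]
                   .median()                  # median dollar volume per ticker
                   .nlargest(universe_size)   # 500 most-traded tickers
                   .index
                   .tolist())
    universe_members.append((end, most_traded))
```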
Generate Technical Indicators Factors | T = list(range(6, 21)) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Relative Strength Index | for t in T:
universe[f'{t:02}_RSI'] = universe.groupby(level='symbol').close.apply(RSI, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Williams %R | for t in T:
universe[f'{t:02}_WILLR'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: WILLR(x.high, x.low, x.close, timeperiod=t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Compute Bollinger Bands | def compute_bb(close, timeperiod):
high, mid, low = BBANDS(close, timeperiod=timeperiod)
return pd.DataFrame({f'{timeperiod:02}_BBH': high, f'{timeperiod:02}_BBL': low}, index=close.index)
for t in T:
bbh, bbl = f'{t:02}_BBH', f'{t:02}_BBL'
universe = (universe.join(
universe.groupby(level='symb... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Normalized Average True Range | for t in T:
universe[f'{t:02}_NATR'] = universe.groupby(level='symbol',
group_keys=False).apply(lambda x:
NATR(x.high, x.low, x.close, timeperiod=t)) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Percentage Price Oscillator | for t in T:
universe[f'{t:02}_PPO'] = universe.groupby(level='symbol').close.apply(PPO, fastperiod=t, matype=1) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Moving Average Convergence/Divergence | def compute_macd(close, signalperiod):
macd = MACD(close, signalperiod=signalperiod)[0]
return (macd - np.mean(macd))/np.std(macd)
for t in T:
universe[f'{t:02}_MACD'] = (universe
.groupby('symbol', group_keys=False)
.close
.apply(compute_macd, signalper... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Momentum | for t in T:
universe[f'{t:02}_MOM'] = universe.groupby(level='symbol').close.apply(MOM, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Weighted Moving Average | for t in T:
universe[f'{t:02}_WMA'] = universe.groupby(level='symbol').close.apply(WMA, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Exponential Moving Average | for t in T:
universe[f'{t:02}_EMA'] = universe.groupby(level='symbol').close.apply(EMA, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Commodity Channel Index | for t in T:
universe[f'{t:02}_CCI'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: CCI(x.high, x.low, x.close, timeperiod=t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Chande Momentum Oscillator | for t in T:
universe[f'{t:02}_CMO'] = universe.groupby(level='symbol').close.apply(CMO, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Rate of Change Rate of change is a technical indicator that illustrates the speed of price change over a period of time. | for t in T:
universe[f'{t:02}_ROC'] = universe.groupby(level='symbol').close.apply(ROC, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Chaikin A/D Oscillator | for t in T:
universe[f'{t:02}_ADOSC'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: ADOSC(x.high, x.low, x.close, x.volume, fastperiod=t-3, slowperiod=4+t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Average Directional Movement Index | for t in T:
universe[f'{t:02}_ADX'] = universe.groupby(level='symbol',
group_keys=False).apply(lambda x:
ADX(x.high, x.low, x.close, timeperiod=t))
universe.drop(ohlcv, axis=1).to_hdf('data.h5', 'features') | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Compute Historical Returns Historical Returns | by_sym = universe.groupby(level='symbol').close
for t in [1,5]:
universe[f'r{t:02}'] = by_sym.pct_change(t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Remove outliers | universe[[f'r{t:02}' for t in [1, 5]]].describe()
outliers = universe[universe.r01>1].index.get_level_values('symbol').unique()
len(outliers)
universe = universe.drop(outliers, level='symbol') | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Historical return quantiles | for t in [1, 5]:
universe[f'r{t:02}dec'] = (universe[f'r{t:02}'].groupby(level='date')
.apply(lambda x: pd.qcut(x, q=10, labels=False, duplicates='drop'))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Rolling Factor Betas | factor_data = (web.DataReader('F-F_Research_Data_5_Factors_2x3_daily', 'famafrench',
start=START)[0].rename(columns={'Mkt-RF': 'Market'}))
factor_data.index.names = ['date']
factor_data.info()
windows = list(range(15, 90, 5))
len(windows)
t = 1
ret = f'r{t:02}'
factors = ['Market', 'SMB',... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
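The rolling-betas cell is also truncated; a sketch of the per-stock rolling regression it sets up, using `statsmodels`' `RollingOLS` (consistent with the truncated `statsmodels.regression.rol...` import above; the function and column names here are illustrative):

```python
from statsmodels.regression.rolling import RollingOLS
import statsmodels.api as sm

def rolling_betas(stock_returns, factor_returns, window=21):
    """Rolling OLS of one stock's daily returns on the factor returns."""
    data = factor_returns.join(stock_returns.rename('ret')).dropna()
    exog = sm.add_constant(data.drop('ret', axis=1))
    model = RollingOLS(endog=data['ret'], exog=exog, window=window)
    return model.fit(params_only=True).params.drop('const', axis=1)
```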
Compute Forward Returns | for t in [1, 5]:
universe[f'r{t:02}_fwd'] = universe.groupby(level='symbol')[f'r{t:02}'].shift(-t)
universe[f'r{t:02}dec_fwd'] = universe.groupby(level='symbol')[f'r{t:02}dec'].shift(-t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Store Model Data | universe = universe.drop(ohlcv, axis=1)
universe.info(null_counts=True)
drop_cols = ['r01', 'r01dec', 'r05', 'r05dec']
outcomes = universe.filter(like='_fwd').columns
universe = universe.sort_index()
with pd.HDFStore('data.h5') as store:
store.put('features', universe.drop(drop_cols, axis=1).drop(outcomes, axis=1)... | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker _Deep Learning Nanodegree Program | Deployment_ --- Now that we have a basic understanding of how SageMaker works, we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to ente... | # Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0 | Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)
Requirement already satisfied: boto3>... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of t... | %mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data | mkdir: cannot create directory ‘../data’: File exists
--2021-03-07 19:37:15-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response...... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a... | import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[d... | IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records. | from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
... | IMDb reviews (combined): train = 25000, test = 25000
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loade... | print(train_X[100])
print(train_y[100]) | Think of this pilot as "Hawaii Five-O Lite". It's set in Hawaii, it's an action/adventure crime drama, lots of scenes feature boats and palm trees and polyester fabrics and garish shirts...it even stars the character actor "Zulu" in a supporting role. Oh, there are some minor differences - Roy Thinnes is supposed to be... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The first step in processing the reviews is to remove any html tags that appear. In addition we wish to tokenize our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis. | import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.su... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
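The `review_to_words` cell is truncated after the `re.sub` call; a plausible completion consistent with the description above (lowercase, strip non-alphanumerics, drop stopwords, stem), written out in full for reference — the original notebook's body may differ slightly:

```python
def review_to_words_full(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    text = BeautifulSoup(review, "html.parser").get_text()  # remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())       # lowercase, keep letters/digits
    words = [w for w in text.split() if w not in stopwords.words("english")]
    return [stemmer.stem(w) for w in words]                 # 'entertaining' -> 'entertain'
```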
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set. | # TODO: Apply review_to_words to a review (train_X[100] or any other review)
print('Original review:')
print(train_X[100])
print('Tokenized review:')
print(review_to_words(train_X[100])) | Original review:
Think of this pilot as "Hawaii Five-O Lite". It's set in Hawaii, it's an action/adventure crime drama, lots of scenes feature boats and palm trees and polyester fabrics and garish shirts...it even stars the character actor "Zulu" in a supporting role. Oh, there are some minor differences - Roy Thinnes ... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** Above we mentioned that the `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do t... | import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl... | Read preprocessed data from cache file: preprocessed_data.pkl
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Transform the data. In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will build a very similar feature representation. To start, we will represent each word as an integer. Of cou... | import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
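`build_dict` is truncated as well; a sketch of the counting-and-mapping logic the TODO asks for, reserving indices 0 and 1 for 'no word' and 'infrequent word' as `convert_and_pad` below expects (an illustrative solution, not the notebook's graded answer):

```python
from collections import Counter

def build_dict_sketch(data, vocab_size=5000):
    # Count every tokenized word across all reviews
    word_count = Counter(word for sentence in data for word in sentence)
    # Most frequent first; indices 0 and 1 are reserved, so keep vocab_size - 2 words
    sorted_words = [w for w, _ in word_count.most_common()]
    return {word: idx for idx, word in enumerate(sorted_words[:vocab_size - 2], start=2)}
```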
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** The most common tokenized words appearing in the training set are 'movi', 'film', 'one', 'like' and 'time'. The first two words are quite... | # TODO: Use this space to determine the five most frequently appearing words in the training set.
list(word_dict)[0:5] | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Save `word_dict`. Later on, when we construct an endpoint which processes a submitted review, we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use. | data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Transform the reviews. Now that we have our word dictionary, which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`. | def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pa... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
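`convert_and_pad_data`, used later on whole datasets, is presumably a thin wrapper that applies `convert_and_pad` to every review and records the original lengths; a sketch under that assumption:

```python
import numpy as np

def convert_and_pad_data_sketch(word_dict, data, pad=500):
    result, lengths = [], []
    for sentence in data:
        converted, length = convert_and_pad(word_dict, sentence, pad)
        result.append(converted)
        lengths.append(length)
    return np.array(result), np.array(lengths)
```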
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set? | # Use this cell to examine one of the processed reviews to make sure everything is working as intended.
n_sample=15
print(train_X[n_sample])
print(len(train_X[n_sample])) | [ 641 4 174 2 56 47 8 175 2663 168 2 19 5 1
632 341 154 4 1 1 349 977 82 1108 134 60 3756 1
189 111 1408 17 320 13 672 2529 501 1 551 1 1 85
318 52 1632 1 1438 1 3416 85 3441 258 718 296 1 130
31 82 7 25 892 496 212 ... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** It's important to use the same function for both processes in order to ensure there will be no misalignment in the codificat... | import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Uploading the training data. Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model. | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also ... | !pygmentize train/model.py | import torch.nn as nn
class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysi... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training ... | import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later. | def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
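The training cell is truncated mid-loop; a plausible completion of the inner loop — the standard zero-grad/forward/backward/step pattern with the running BCE loss printed per epoch, matching the log format shown below:

```python
def train_full(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch_X, batch_y in train_loader:
            batch_X, batch_y = batch_X.to(device), batch_y.to(device)
            optimizer.zero_grad()
            output = model(batch_X)           # forward pass
            loss = loss_fn(output, batch_y)   # binary cross-entropy
            loss.backward()                   # backward pass
            optimizer.step()
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```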
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early ... | import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device) | Epoch: 1, BCELoss: 0.6889122724533081
Epoch: 2, BCELoss: 0.6780008792877197
Epoch: 3, BCELoss: 0.6685242891311646
Epoch: 4, BCELoss: 0.6583548784255981
Epoch: 5, BCELoss: 0.6465497970581054
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one)... | from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
py_version="py3",
train_instance_count=1,
train_instance_ty... | 'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have ... | # TODO: Deploy the trained model
# Solution:
# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = estimator.deploy(instance_type='ml.m4.xlarge',
initial_instance_count=1)
| Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is. | test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk seperately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array i... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** The results were quite good for the PyTorch model in comparison with XGBoost. The advantage of the pyto... | test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a se... | # TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data=[]
test_data, test_data_len = convert_and_pad_data(word_dict, [review_to_words(test_review)])
test_data_full = pd.concat([pd.DataFrame(test_data_len), pd.DataFrame(test_data)], axis=1)
print(test_data_full)
len(test_... | 0 0 1 2 3 4 5 6 7 8 ... 490 491 492 493 \
0 20 1 1376 49 53 3 4 878 173 392 ... 0 0 0 0
494 495 496 497 498 499
0 0 0 0 0 0 0
[1 rows x 501 columns]
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review. | predict(test_data_full.values) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delet... | estimator.delete_endpoint() | estimator.delete_endpoint() will be deprecated in SageMaker Python SDK v2. Please use the delete_endpoint() function on your predictor instead.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 6 (again) - Deploy the model for the web app. Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review. As we saw above, by default the estimator which we created, w... | !pygmentize serve/predict.py | import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
... | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code, and the `input_fn` and `output_fn` methods are very simple; your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory. **TODO**: Complete th... | from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchMod... | Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Testing the model. Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is t... | import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path... | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
As an additional test, we can try sending the `test_review` that we looked at earlier. | predictor.predict(test_review) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for th... | predictor.endpoint | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API Gateway. Now that our Lambda function is set up, it is time to create a new API using API Gateway that wi... | predictor.delete_endpoint()
Submission Instructions | # Now click the 'Submit Assignment' button above. | _____no_output_____ | MIT | Informatics/Deep Learning/TensorFlow - deeplearning.ai/2. CNN/utf-8''Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | MarcosSalib/Cocktail_MOOC |
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. | %%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000); | _____no_output_____ | MIT | Informatics/Deep Learning/TensorFlow - deeplearning.ai/2. CNN/utf-8''Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | MarcosSalib/Cocktail_MOOC |
`Microstripline` object in the `structure` module. Analytical modeling of a microstrip line in Scikit-microwave-design. In this file, we show how the `scikit-microwave-design` library can be used to implement and analyze basic microstrip line structures. Defining a microstrip line in `skmd`. There are two ways in which we can d... | import numpy as np
import skmd as md
import matplotlib.pyplot as plt
### Define frequency
pts_freq = 1000
freq = np.linspace(1e9,3e9,pts_freq)
omega = 2*np.pi*freq
#### define substrate
epsilon_r = 10.8 # dielectric constant or the effective dielectric constant
h_subs = 1.27*md.MILLI # meters.
| _____no_output_____ | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
1. Defining msl with characteristic impedance. | msl1 = md.structure.Microstripline(er=epsilon_r,h=h_subs,Z0=93,text_tag='Line-abc')
| ============
Defining Line-abc
Line-abc defined with Z0
==============
| BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
The `Microstripline` object is defined in the `structure` module of the `skmd` library. With the above command, we have defined a _msl_ by giving the characteristic impedance $Z_0$ with a text identifier 'Line-abc'. The library will compute the required line width to achieve the desired characteristic impedance for the giv... | msl1.print_specs() | --------- Line-abc Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.00019296747453793648
Characteristics impedance= 93
Length of the line = 1
Effective dielectric constant er_eff = 6.555924417931664
Frequency defined ?: False
-----------------... | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
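These numbers can be sanity-checked against the closed-form Hammerstad formulas for a narrow microstrip (W/h ≤ 1); skmd may use a slightly refined variant, so expect small deviations:

```python
import numpy as np

er, h, W = 10.8, 1.27e-3, 0.00019296747453793648
u = W / h
# Effective dielectric constant (Hammerstad), narrow-strip branch
er_eff = (er + 1) / 2 + (er - 1) / 2 * ((1 + 12 / u) ** -0.5 + 0.04 * (1 - u) ** 2)
# Characteristic impedance for W/h <= 1
Z0 = 60 / np.sqrt(er_eff) * np.log(8 / u + u / 4)
print(er_eff, Z0)  # ~6.59 and ~92.7 ohm, close to the 6.556 and 93 reported above
```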
2. Defining the msl by width. We can also define the msl by giving the width at the time of definition. The characteristic impedance will be computed by the code in this case. | msl2 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w = 1.1*md.MILLI,text_tag='Line-xyz')
msl2.print_specs() | ============
Defining Line-xyz
Line-xyz defined with width.
==============
--------- Line-xyz Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.0011
Characteristics impedance= 50.466917262179905
Length of the line = 1
Effective dielectric con... | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
Either the width or the characteristic impedance must be defined; otherwise an error will be raised. If both the characteristic impedance and the width are given, then the width is used in the definition and the characteristic impedance is computed. Defining frequency range and network parameters for the microstrip line. We can also ... | msl3 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w = 1.1*md.MILLI,omega = omega,text_tag='Line-with-frequency')
msl3.print_specs()
# msl.
msl2.print_specs()
msl2.fun_add_frequency(omega)
| --------- Line-xyz Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.0011
Characteristics impedance= 50.466917262179905
Length of the line = 1
Effective dielectric constant er_eff = 7.12610312997174
Frequency defined ?: False
------------------... | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
Microstrip-line filters. Designing microstrip-line filters and computing them analytically becomes very simple in the `scikit-microwave-design` library. Since a microwave network object is created for each microstrip-line section, implementing and testing filters becomes a matter of a few lines of code. In addition excellen... | f0 = 1.5*md.GIGA
omega0 = md.f2omega(f0)
msl_Tx1 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=1.1*md.MILLI,l=5*md.MILLI,text_tag='Left-line',omega=omega)
msl_Tx2 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=1.1*md.MILLI,l=5*md.MILLI,text_tag='Right-line',omega=omega)
msl_Tx1.print_specs()
w_stub =... | _____no_output_____ | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
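Analytically, cascading line sections and stubs boils down to multiplying ABCD matrices. A standalone sketch of the ABCD matrix of a lossless line section — the textbook building block behind such filter computations, written with plain numpy rather than skmd's network API:

```python
import numpy as np

def abcd_line(Z0, beta_l):
    """ABCD matrix of a lossless transmission line with electrical length beta*l."""
    return np.array([[np.cos(beta_l), 1j * Z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / Z0, np.cos(beta_l)]])

# Cascading two sections is a matrix product
abcd_total = abcd_line(50, 0.3) @ abcd_line(93, 0.7)
```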
`Sampler` | import sys
sys.path.append('../..')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import pandas as pd | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Intro Welcome! In this section you'll learn about the `Sampler` class. Instances of `Sampler` can be used for flexible sampling of multivariate distributions. To begin with, `Sampler` gives rise to several building-block classes such as `NumpySampler` (`NS`) and `ScipySampler` (`SS`). What's more, `Sampler` incorporates a s... | from batchflow import NumpySampler as NS
# truncated normal and uniform
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, ... | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Building `Samplers` 1. Numpy, Scipy, TensorFlow - `Samplers` To build a `NumpySampler` (`NS`) you need to specify the name of a distribution from `numpy.random` (or its [alias](https://github.com/analysiscenter/batchflow/blob/master/batchflow/sampler.py#L15)) and the number of independent dimensions: | from batchflow import NumpySampler as NS
ns = NS('n', dim=2) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Take a look at a sample generated by our sampler: | smp = ns.sample(size=200)
plt.scatter(*np.transpose(smp)) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
The same goes for `ScipySampler` (`SS`), based on `scipy.stats` distributions ("mvn" stands for multivariate normal): | from batchflow import ScipySampler as SS
ss = SS('mvn', mean=[0, 0], cov=[[2, 1], [1, 2]]) # note also that you can pass the same params as in
smp = ss.sample(2000) # scipy.stats.multivariate_normal, such as `mean` and `cov`
plt.scatter(*np.transpose(smp)) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
2. `HistoSampler` as an estimate of a distribution generating a cloud of points `HistoSampler`, or `HS` can be used for building samplers, with underlying distributions given by a histogram. You can either pass a `np.histogram`-output into the initialization of `HS` | from batchflow import HistoSampler as HS
histo = np.histogramdd(ss.sample(1000000))
hs = HS(histo)
plt.scatter(*np.transpose(hs.sample(150))) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
...or you can specify empty bins and estimate their weights using the `HS.update` method and a cloud of points: | hs = HS(edges=2 * [np.linspace(-4, 4)])
hs.update(ss.sample(1000000))
plt.imshow(hs.bins, interpolation='bilinear') | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
3. Algebra of `Samplers`; operations on `Samplers` `Sampler` instances support arithmetic operations (`+`, `*`, `-`, ...). Arithmetic works on either a (`Sampler`, `Sampler`) pair or a (`Sampler`, `array-like`) pair | # blur using "+"
u = NS('u', dim=2)
noise = NS('n', dim=2)
blurred = u + noise * 0.2 # decrease the magnitude of the noise
both = blurred | u + (2, 2)
plt.imshow(np.histogramdd(both.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
You may also want to truncate a sampler's distribution so that sampled points belong to a specific region. A common use case is to sample normal points inside a box... or inside a ring: | n = NS('n', dim=2).truncate(3, 0.3, expr=lambda m: np.sum(m**2, axis=1))
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Not infrequently you need to obtain a "normal" sample in integers. For this you can use the `Sampler.apply` method: | n = (4 * NS('n', dim=2)).apply(lambda m: m.astype(int)).truncate([6, 6], [-6, -6])
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Note that the `Sampler.apply` method allows you to add an arbitrary transformation to a sampler. For instance, the [Box-Muller](https://en.wikipedia.org/wiki/Box–Muller_transform) transform: | bm = lambda vec2: np.sqrt(-2 * np.log(vec2[:, 0:1])) * np.concatenate([np.cos(2 * np.pi * vec2[:, 1:2]),
np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)
n = NS('u', dim=2).apply(bm)
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Another useful thing is coordinate stacking ("&" stands for multiplication of distribution functions): | n, u = NS('n'), SS('u') # initialize one-dimensional notrmal and uniform samplers
s = n & u # stack them together
s.sample(3) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |