# Classification metrics
Author: Geraldine Klarenberg
Based on the Google Machine Learning Crash Course
## Thresholds
In previous lessons, we have talked about using regression models to predict values. But sometimes we are interested in **classifying** things: "spam" vs "not spam", "bark" vs "not barking", etc.
Logistic regression is a great tool to use in ML classification models. We can use the outputs from these models by defining **classification thresholds**. For instance, if our model tells us there's a probability of 0.8 that an email is spam (based on some characteristics), the model classifies it as such. If the probability estimate is less than 0.8, the model classifies it as "not spam". The threshold allows us to map a logistic regression value to a binary category (the prediction).
Thresholds are problem-dependent, so they will have to be tuned for the specific problem you are dealing with.
In this lesson we will look at metrics you can use to evaluate a classification model's predictions, and what changing the threshold does to your model and predictions.
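As a minimal sketch (with made-up probabilities), applying a threshold to a model's probability outputs looks like this:

```
import numpy as np

# Hypothetical spam probabilities from a logistic regression model
probs = np.array([0.95, 0.80, 0.45, 0.10])

# Probabilities at or above the threshold map to the positive class (1 = "spam")
threshold = 0.8
labels = (probs >= threshold).astype(int)
print(labels)  # [1 1 0 0]
```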
## True, false, positive, negative...
Now, we could simply look at "accuracy": the ratio of all correct predictions to all predictions. This is simple, intuitive and straightforward.
But there are some problems with this approach:
* This approach does not work well if there is (class) imbalance, i.e. situations where certain negative or positive outcomes are rare;
* and, most importantly: different kinds of mistakes can have different costs.
### The boy who cried wolf...
We all know the story!

For this example, we define "there actually is a wolf" as the positive class and "there is no wolf" as the negative class. The predictions that a model makes can be true or false for each class, generating four outcomes:

This table is also called a *confusion matrix*.
There are 2 metrics we can derive from these outcomes: precision and recall.
## Precision
Precision asks: what proportion of the positive predictions was actually correct?
To calculate the precision of your model, take all true positives divided by *all* positive predictions:
$$\text{Precision} = \frac{TP}{TP+FP}$$
Basically: **did the model cry 'wolf' too often or too little?**
**NB** If your model produces no false positives, its precision is 1.0. The more false positives it produces, the closer precision gets to 0; precision always lies between 0 and 1.
### Exercise
Calculate the precision of a model with the following outcomes
true positives (TP): 1 | false positives (FP): 1
-------|--------
**false negatives (FN): 8** | **true negatives (TN): 90**
## Recall
Recall tries to answer the question: what proportion of actual positives was identified correctly?
To calculate recall, divide all true positives by the true positives plus the false negatives:
$$\text{Recall} = \frac{TP}{TP+FN}$$
Basically: **how many wolves that tried to get into the village did the model actually get?**
**NB** If the model produces no false negatives, recall equals 1.0.
### Exercise
For the same confusion matrix as above, calculate the recall.
## Balancing precision and recall
To evaluate your model, you should look at **both** precision and recall. They are often in tension, though: improving one reduces the other.
Lowering the classification threshold improves recall (your model will cry wolf at every little sound it hears) but will negatively affect precision (it will cry wolf too often).
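To make the trade-off concrete, here is a small numpy sketch (all scores and labels are made up) that computes precision and recall at two thresholds:

```
import numpy as np

# Hypothetical labels (1 = wolf) and predicted probabilities
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
probs = np.array([0.9, 0.65, 0.4, 0.8, 0.1, 0.35, 0.6, 0.05])

def precision_recall(y_true, probs, threshold):
    preds = (probs >= threshold).astype(int)
    tp = np.sum((preds == 1) & (y_true == 1))
    fp = np.sum((preds == 1) & (y_true == 0))
    fn = np.sum((preds == 0) & (y_true == 1))
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(y_true, probs, 0.5))  # precision 0.75, recall 0.75
print(precision_recall(y_true, probs, 0.3))  # recall rises to 1.0, precision falls to ~0.67
```

Lowering the threshold from 0.5 to 0.3 trades precision for recall, matching the intuition above.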
### Exercise
#### Part 1
Look at the outputs of a model that classifies incoming emails as "spam" or "not spam".

The confusion matrix looks as follows
true positives (TP): 8 | false positives (FP): 2
-------|--------
**false negatives (FN): 3** | **true negatives (TN): 17**
Calculate the precision and recall for this model.
#### Part 2
Now see what happens to the outcomes (below) if we increase the threshold

The confusion matrix looks as follows
true positives (TP): 7 | false positives (FP): 1
-------|--------
**false negatives (FN): 4** | **true negatives (TN): 18**
Calculate the precision and recall again.
**Compare the precision and recall from the first and second model. What do you notice?**
## Evaluate model performance
We can evaluate the performance of a classification model at all classification thresholds. For each threshold, calculate the *true positive rate* and the *false positive rate*. The true positive rate is synonymous with recall (and sometimes called *sensitivity*) and is thus calculated as
$$\text{TPR} = \frac{TP}{TP+FN}$$
The false positive rate (equal to one minus the *specificity*) is:
$$\text{FPR} = \frac{FP}{FP+TN}$$
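A small sketch with made-up scores shows how each threshold yields one (TPR, FPR) pair:

```
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # made-up labels
probs = np.array([0.9, 0.65, 0.4, 0.8, 0.1, 0.35, 0.6, 0.05])  # made-up scores

for t in [0.2, 0.5, 0.8]:
    preds = (probs >= t).astype(int)
    tp = np.sum((preds == 1) & (y_true == 1))
    fn = np.sum((preds == 0) & (y_true == 1))
    fp = np.sum((preds == 1) & (y_true == 0))
    tn = np.sum((preds == 0) & (y_true == 0))
    print(t, tp / (tp + fn), fp / (fp + tn))  # threshold, TPR, FPR
```

Raising the threshold moves the point toward the origin; the collection of such points traces out the ROC curve.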
When you plot the pairs of TPR and FPR for all the different thresholds, you get a Receiver Operating Characteristics (ROC) curve. Below is a typical ROC curve.

To evaluate the model, we look at the area under the curve (AUC). The AUC has a probabilistic interpretation: it represents the probability that a random positive (green) example is positioned to the right of a random negative (red) example.

So if the AUC is 0.9, that's the probability the pair-wise prediction is correct. Below are a few visualizations of AUC results. On top are the score distributions of the negative and positive classes at various thresholds; below each is the corresponding ROC curve.


**This AUC suggests a perfect model** (which is suspicious!)


**This is what most AUCs look like**. In this case, AUC = 0.7 means that there is 70% chance the model will be able to distinguish between positive and negative classes.


**This is actually the worst case scenario.** This model has no discrimination capacity at all...
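The probabilistic interpretation can be checked directly: with made-up scores, the fraction of (positive, negative) pairs ranked correctly equals the AUC.

```
import numpy as np

pos_scores = np.array([0.9, 0.65, 0.4, 0.8])   # made-up scores of positive examples
neg_scores = np.array([0.1, 0.35, 0.6, 0.05])  # made-up scores of negative examples

# Fraction of (positive, negative) pairs where the positive is ranked higher
# (ties count half)
pairs = [(p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores]
auc = np.mean(pairs)
print(auc)  # 0.9375 (15 of the 16 pairs are ordered correctly)
```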
## Prediction bias
Logistic regression should be unbiased, meaning that the average of the predictions should be more or less equal to the average of the observations. **Prediction bias** is the difference between the average of the predictions and the average of the labels in a data set.
This approach is not perfect: e.g. if your model almost always predicts the average, there will not be much bias. However, if there **is** bias ("significant nonzero bias"), that means there is something going on that needs to be checked; specifically, the model is wrong about the frequency of positive labels.
Possible root causes of prediction bias are:
* Incomplete feature set
* Noisy data set
* Buggy pipeline
* Biased training sample
* Overly strong regularization
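Computing the bias itself is a one-liner; a sketch with made-up predictions and labels:

```
import numpy as np

preds = np.array([0.2, 0.7, 0.9, 0.4, 0.6])  # hypothetical predicted probabilities
labels = np.array([0, 1, 1, 0, 1])           # hypothetical observed labels

# Prediction bias = mean prediction minus mean label
prediction_bias = preds.mean() - labels.mean()
print(prediction_bias)  # about -0.04: the model slightly underestimates the positive rate
```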
### Buckets and prediction bias
For logistic regression, this process is a bit more involved, as the labels assigned to an example are either 0 or 1, so you cannot accurately determine the prediction bias from a single example. You need to group the data into "buckets" and examine the prediction bias on those. Prediction bias for logistic regression only makes sense when grouping enough examples together to be able to compare a predicted value (for example, 0.392) to observed values (for example, 0.394).
You can create buckets by linearly breaking up the target predictions, or create quantiles.
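A sketch of quantile bucketing on simulated data (the data and bucket count are made up): sort by predicted value, split into equal-size buckets, then compare the mean prediction to the mean observation in each bucket.

```
import numpy as np

rng = np.random.default_rng(0)
preds = rng.uniform(0, 1, 10_000)                         # simulated predicted probabilities
labels = (rng.uniform(0, 1, 10_000) < preds).astype(int)  # simulated 0/1 outcomes

# Sort by prediction and split into 10 equal-size (quantile) buckets
order = np.argsort(preds)
for chunk in np.array_split(order, 10):
    print(preds[chunk].mean().round(3), labels[chunk].mean().round(3))
```

For a well-calibrated model the two columns track each other closely; plotting them against each other gives a calibration plot.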
The plot below is a calibration plot. Each dot represents a bucket with 1000 values. On the x-axis we have the average value of the predictions for that bucket and on the y-axis the average of the actual observations. Note that the axes are on logarithmic scales.

## Coding
Recall the logistic regression model we made in the previous lesson. That was a perfect fit, so not that useful when we look at the metrics we just discussed.
In the scatter plot of sepal length against petal width, it is clear that the other two iris species are less well separated. Let's use one of these as an example. We'll rework the example so we're classifying irises as "virginica" or "not virginica".
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
import pandas as pd
iris = load_iris()
X = iris.data
y = iris.target
df = pd.DataFrame(X,
                  columns=['sepal_length(cm)',
                           'sepal_width(cm)',
                           'petal_length(cm)',
                           'petal_width(cm)'])
df['species_id'] = y
species_map = {0: 'setosa', 1: 'versicolor', 2: 'virginica'}
df['species_name'] = df['species_id'].map(species_map)
df.head()
```
Now extract the data we need and create the necessary dataframes again.
```
# Keep sepal length (column 0) and petal width (column 3) as features
X = np.c_[X[:, 0], X[:, 3]]
# The last 50 samples (indices 100-149) are virginica: label them 1, the rest 0
y = []
for i in range(len(X)):
    if i > 99:
        y.append(1)
    else:
        y.append(0)
y = np.array(y)
plt.scatter(X[:, 0], X[:, 1], c=y)
```
Create our test and train data, and run a model. The default classification threshold is 0.5: if the predicted probability is greater than 0.5, the predicted result is 'virginica'; if it is less than 0.5, the predicted result is 'not virginica'.
```
random = np.random.permutation(len(X))
x_train = X[random][30:]
x_test = X[random][:30]
y_train= y[random][30:]
y_test = y[random][:30]
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(x_train,y_train)
```
Instead of looking at the probabilities and the plot, like in the last lesson, let's run some classification metrics on the training dataset.
If you use ".score", you get the mean accuracy.
```
log_reg.score(x_train, y_train)
```
Let's predict values and see what this output means and how we can look at other metrics.
```
predictions = log_reg.predict(x_train)
predictions, y_train
```
There is a function to compute the confusion matrix. Note that scikit-learn arranges the output differently from the confusion matrices we showed earlier: rows correspond to the true classes and columns to the predicted classes, so the layout is
true negatives (TN) | false positives (FP)
-------|--------
**false negatives (FN)** | **true positives (TP)**
```
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train, predictions)
```
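A quick toy check of scikit-learn's layout (`.ravel()` flattens the matrix in the order TN, FP, FN, TP):

```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

# rows = true classes, columns = predicted classes
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 1 1 1 2
```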
Indeed, this matches the accuracy calculation: 81 + 33 = 114 predictions on the diagonal were correct (true negatives and true positives), and 114/120 (remember, our training data had 120 points) = 0.95.
There are also functions to calculate recall and precision:
```
from sklearn.metrics import recall_score
recall_score(y_train, predictions)
from sklearn.metrics import precision_score
precision_score(y_train, predictions)
```
And, of course, there are also built-in functions to compute the ROC curve and AUC! For these functions, the inputs are the labels of the original dataset and the predicted probabilities (not the predicted labels: **why?**). Remember what the two columns of `predict_proba` mean?
```
proba_virginica = log_reg.predict_proba(x_train)
proba_virginica[0:10]
from sklearn.metrics import roc_curve
fpr_model, tpr_model, thresholds_model = roc_curve(y_train, proba_virginica[:,1])
fpr_model
tpr_model
thresholds_model
```
Plot the ROC curve as follows:
```
plt.plot(fpr_model, tpr_model,label='our model')
plt.plot([0,1],[0,1],label='random')
plt.plot([0,0,1,1],[0,1,1,1],label='perfect')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
```
The AUC:
```
from sklearn.metrics import roc_auc_score
auc_model = roc_auc_score(y_train, proba_virginica[:,1])
auc_model
```
You can use the ROC curve and AUC to evaluate competing models. Many people prefer these metrics because they do not require selecting a classification threshold and they balance the true positive rate against the false positive rate.
Now let's do the same thing for our test data (but again, this dataset is fairly small, and K-fold cross-validation is recommended).
```
log_reg.score(x_test, y_test)
predictions = log_reg.predict(x_test)
predictions, y_test
confusion_matrix(y_test, predictions)
recall_score(y_test, predictions)
precision_score(y_test, predictions)
proba_virginica = log_reg.predict_proba(x_test)
fpr_model, tpr_model, thresholds_model = roc_curve(y_test, proba_virginica[:,1])
plt.plot(fpr_model, tpr_model,label='our model')
plt.plot([0,1],[0,1],label='random')
plt.plot([0,0,1,1],[0,1,1,1],label='perfect')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
auc_model = roc_auc_score(y_test, proba_virginica[:,1])
auc_model
```
Learn more about the logistic regression function and options at https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
# Multi-panel detector
The AGIPD detector, which is already in use at the SPB experiment, consists of 16 modules of 512×128 pixels each. Each module is further divided into 8 ASICs (application-specific integrated circuit).
<img src="AGIPD.png" width="300" align="left"/> <img src="agipd_geometry_14_1.png" width="420" align="right"/>
<div style="clear: both"><small>Photo © European XFEL</small></div>
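As a quick sanity check on the geometry (simple arithmetic, not part of the simulation itself): 16 modules of 512×128 pixels give about one million pixels, which is where the "1M" in `AGIPD_1MGeometry` comes from.

```
# Module count and per-module slow-scan / fast-scan pixel dimensions
modules, slow_scan, fast_scan = 16, 512, 128
total_pixels = modules * slow_scan * fast_scan
print(total_pixels)  # 1048576, i.e. roughly one megapixel
```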
## Simulation Demonstration
```
import os, shutil, sys
import h5py
import matplotlib.pyplot as plt
import numpy as np
import modules
from extra_geom import AGIPD_1MGeometry
modules.load('maxwell' ,'openmpi/3.1.6')
SimExPath = '/gpfs/exfel/data/user/juncheng/simex-branch/Sources/python/'
SimExExtLib = '/gpfs/exfel/data/user/juncheng/simex-branch/lib/python3.7/site-packages/'
SimExBin = ':/gpfs/exfel/data/user/juncheng/miniconda3/envs/simex-branch/bin/'
sys.path.insert(0,SimExPath)
sys.path.insert(0,SimExExtLib)
os.environ["PATH"] += SimExBin
from SimEx.Calculators.AbstractPhotonDiffractor import AbstractPhotonDiffractor
from SimEx.Calculators.CrystFELPhotonDiffractor import CrystFELPhotonDiffractor
from SimEx.Parameters.CrystFELPhotonDiffractorParameters import CrystFELPhotonDiffractorParameters
from SimEx.Parameters.PhotonBeamParameters import PhotonBeamParameters
from SimEx.Parameters.DetectorGeometry import DetectorGeometry, DetectorPanel
from SimEx.Utilities.Units import electronvolt, joule, meter, radian
```
## Data path setup
```
data_path = './diffr'
```
Clean up any data from a previous run:
```
if os.path.isdir(data_path):
shutil.rmtree(data_path)
if os.path.isfile(data_path + '.h5'):
os.remove(data_path + '.h5')
```
## Set up X-ray Beam Parameters
```
beamParam = PhotonBeamParameters(
photon_energy = 4972.0 * electronvolt, # photon energy in eV
beam_diameter_fwhm=130e-9 * meter, # focus diameter in m
pulse_energy=45e-3 * joule, # pulse energy in J
photon_energy_relative_bandwidth=0.003, # relative bandwidth dE/E
divergence=0.0 * radian, # Beam divergence in rad
photon_energy_spectrum_type='tophat', # Spectrum type. Acceptable values are "tophat", "SASE", and "twocolor"
)
```
## Detector Setting
```
geom = AGIPD_1MGeometry.from_quad_positions(quad_pos=[
(-525, 625),
(-550, -10),
(520, -160),
(542.5, 475),
])
geom.inspect()
geom_file = 'agipd_simple_2d.geom'
geom.write_crystfel_geom(
geom_file,
dims=('frame', 'ss', 'fs'),
adu_per_ev=1.0,
clen=0.13, # Sample - detector distance in m
photon_energy=4972, # eV
data_path='/data/data',
)
```
## Diffractor Settings
```
diffParam = CrystFELPhotonDiffractorParameters(
sample='3WUL.pdb', # Looks up pdb file in cwd, if not found, queries from RCSB pdb mirror.
uniform_rotation=True, # Apply random rotation
number_of_diffraction_patterns=2, # Number of diffraction patterns to simulate
powder=False, # Set to True to create a virtual powder diffraction pattern (untested)
intensities_file=None, # File that contains reflection intensities. If set to none, use uniform intensity distribution
crystal_size_range=[1e-7, 1e-7], # Range ([min,max]) in units of metres of crystal size.
poissonize=False, # Set to True to add Poisson noise.
number_of_background_photons=0, # Change number to add uniformly distributed background photons.
suppress_fringes=False, # Set to True to suppress side maxima between reflection peaks.
beam_parameters=beamParam, # Beam parameters object from above
detector_geometry=geom_file, # External file that contains the detector geometry in CrystFEL notation.
)
diffractor = CrystFELPhotonDiffractor(
parameters=diffParam, output_path=data_path
)
```
## Run the simulation
```
diffractor.backengine()
diffractor.saveH5_geom()
data_f = h5py.File(data_path + '.h5', 'r')
frame = data_f['data/0000001/data'][...].reshape(16, 512, 128)
fig, ax = plt.subplots(figsize=(12, 10))
geom.plot_data_fast(frame, axis_units='m', ax=ax, vmax=1000);
```
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 823852.
tgb - 6/12/2021 - The goal is to see whether it would be possible to train a NN/MLR outputting results in quantile space while still penalizing them following the mean squared error in physical space.
tgb - 4/15/2021 - Recycling this notebook but fitting in percentile space (no scale_dict, use output in percentile units)
tgb - 4/15/2020
- Adapting Ankitesh's notebook that builds and trains a "brute-force" network to David Walling's hyperparameter search
- Adding the option to choose between aquaplanet and real-geography data
```
import sys
sys.path.insert(1,"/home1/07064/tg863631/anaconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages") #work around for h5py
from cbrain.imports import *
from cbrain.cam_constants import *
from cbrain.utils import *
from cbrain.layers import *
from cbrain.data_generator import DataGenerator
from cbrain.climate_invariant import *
import tensorflow as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.config.experimental.set_memory_growth(physical_devices[1], True)
tf.config.experimental.set_memory_growth(physical_devices[2], True)
import os
os.environ["CUDA_VISIBLE_DEVICES"]="1"
from tensorflow import math as tfm
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
import tensorflow_probability as tfp
import xarray as xr
import numpy as np
from cbrain.model_diagnostics import ModelDiagnostics
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as imag
import scipy.integrate as sin
# import cartopy.crs as ccrs
import matplotlib.ticker as mticker
# from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import pickle
# from climate_invariant import *
from tensorflow.keras import layers
import datetime
from climate_invariant_utils import *
import yaml
```
## Global Variables
```
# Load coordinates (just pick any file from the climate model run)
# Comet path below
# coor = xr.open_dataset("/oasis/scratch/comet/ankitesh/temp_project/data/sp8fbp_minus4k.cam2.h1.0000-01-01-00000.nc",\
# decode_times=False)
# GP path below
path_0K = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/fluxbypass_aqua/'
coor = xr.open_dataset(path_0K+"AndKua_aqua_SPCAM3.0_sp_fbp_f4.cam2.h1.0000-09-02-00000.nc")
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
# Comet path below
# TRAINDIR = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/CRHData/'
# path = '/home/ankitesh/CBrain_project/CBRAIN-CAM/cbrain/'
# GP path below
TRAINDIR = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/SPCAM_PHYS/'
path = '/export/nfs0home/tbeucler/CBRAIN-CAM/cbrain/'
path_nnconfig = '/export/nfs0home/tbeucler/CBRAIN-CAM/nn_config/'
# Load hyam and hybm to calculate pressure field in SPCAM
path_hyam = 'hyam_hybm.pkl'
hf = open(path+path_hyam,'rb')
hyam,hybm = pickle.load(hf)
# Scale dictionary to convert the loss to W/m2
scale_dict = load_pickle(path_nnconfig+'scale_dicts/009_Wm2_scaling.pkl')
```
New Data generator class for the climate-invariant network. Calculates the physical rescalings needed to make the NN climate-invariant
## Data Generators
### Choose between aquaplanet and realistic geography here
```
# GP paths below
#path_aquaplanet = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/SPCAM_PHYS/'
#path_realgeography = ''
# GP /fast paths below
path_aquaplanet = '/fast/tbeucler/climate_invariant/aquaplanet/'
# Comet paths below
# path_aquaplanet = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/'
# path_realgeography = '/oasis/scratch/comet/ankitesh/temp_project/PrepData/geography/'
path = path_aquaplanet
```
### Data Generator using RH
```
#scale_dict_RH = load_pickle('/home/ankitesh/CBrain_project/CBRAIN-CAM/nn_config/scale_dicts/009_Wm2_scaling_2.pkl')
scale_dict_RH = scale_dict.copy()
scale_dict_RH['RH'] = 0.01*L_S/G # Arbitrary 0.01 factor as specific humidity is generally below 2%
in_vars_RH = ['RH','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
# if path==path_realgeography: out_vars_RH = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
# elif path==path_aquaplanet: out_vars_RH = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
if path==path_aquaplanet: out_vars_RH = ['PHQ','TPHYSTND','QRL','QRS']
# New GP path below
TRAINFILE_RH = '2021_01_24_O3_small_shuffle.nc'
NORMFILE_RH = '2021_02_01_NORM_O3_RH_small.nc'
# Comet/Ankitesh path below
# TRAINFILE_RH = 'CI_RH_M4K_NORM_train_shuffle.nc'
# NORMFILE_RH = 'CI_RH_M4K_NORM_norm.nc'
# VALIDFILE_RH = 'CI_RH_M4K_NORM_valid.nc'
train_gen_RH = DataGenerator(
data_fn = path+TRAINFILE_RH,
input_vars = in_vars_RH,
output_vars = out_vars_RH,
norm_fn = path+NORMFILE_RH,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict_RH,
batch_size=1024,
shuffle=True,
)
```
### Data Generator using QSATdeficit
We only need the norm file for this generator as we are solely using it as an input to determine the right normalization for the combined generator
```
# New GP path below
TRAINFILE_QSATdeficit = '2021_02_01_O3_QSATdeficit_small_shuffle.nc'
NORMFILE_QSATdeficit = '2021_02_01_NORM_O3_QSATdeficit_small.nc'
in_vars_QSATdeficit = ['QSATdeficit','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
# if path==path_realgeography: out_vars_RH = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
# elif path==path_aquaplanet: out_vars_RH = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
if path==path_aquaplanet: out_vars_QSATdeficit = ['PHQ','TPHYSTND','QRL','QRS']
train_gen_QSATdeficit = DataGenerator(
data_fn = path+TRAINFILE_QSATdeficit,
input_vars = in_vars_QSATdeficit,
output_vars = out_vars_QSATdeficit,
norm_fn = path+NORMFILE_QSATdeficit,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
)
```
### Data Generator using TNS
```
in_vars = ['QBP','TfromNS','PS', 'SOLIN', 'SHFLX', 'LHFLX']
if path==path_aquaplanet: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
elif path==path_realgeography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
TRAINFILE_TNS = '2021_02_01_O3_TfromNS_small_shuffle.nc'
NORMFILE_TNS = '2021_02_01_NORM_O3_TfromNS_small.nc'
VALIDFILE_TNS = 'CI_TNS_M4K_NORM_valid.nc'
train_gen_TNS = DataGenerator(
data_fn = path+TRAINFILE_TNS,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_TNS,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
)
```
### Data Generator using BCONS
```
in_vars = ['QBP','BCONS','PS', 'SOLIN', 'SHFLX', 'LHFLX']
if path==path_aquaplanet: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
elif path==path_realgeography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
TRAINFILE_BCONS = '2021_02_01_O3_BCONS_small_shuffle.nc'
NORMFILE_BCONS = '2021_02_01_NORM_O3_BCONS_small.nc'
train_gen_BCONS = DataGenerator(
data_fn = path+TRAINFILE_BCONS,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_BCONS,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=1024,
shuffle=True,
)
```
### Data Generator using NSto220
```
in_vars = ['QBP','T_NSto220','PS', 'SOLIN', 'SHFLX', 'LHFLX']
if path==path_aquaplanet: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
elif path==path_realgeography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
TRAINFILE_T_NSto220 = '2021_03_31_O3_T_NSto220_small.nc'
NORMFILE_T_NSto220 = '2021_03_31_NORM_O3_T_NSto220_small.nc'
train_gen_T_NSto220 = DataGenerator(
data_fn = path+TRAINFILE_T_NSto220,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_T_NSto220,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=8192,
shuffle=True,
)
```
### Data Generator using LHF_nsDELQ
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHF_nsDELQ']
if path==path_aquaplanet: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
elif path==path_realgeography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
TRAINFILE_LHF_nsDELQ = '2021_02_01_O3_LHF_nsDELQ_small_shuffle.nc'
NORMFILE_LHF_nsDELQ = '2021_02_01_NORM_O3_LHF_nsDELQ_small.nc'
train_gen_LHF_nsDELQ = DataGenerator(
data_fn = path+TRAINFILE_LHF_nsDELQ,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_LHF_nsDELQ,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=8192,
shuffle=True,
)
```
### Data Generator using LHF_nsQ
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHF_nsQ']
if path==path_aquaplanet: out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
elif path==path_realgeography: out_vars = ['PTEQ','PTTEND','FSNT','FSNS','FLNT','FLNS']
TRAINFILE_LHF_nsQ = '2021_02_01_O3_LHF_nsQ_small_shuffle.nc'
NORMFILE_LHF_nsQ = '2021_02_01_NORM_O3_LHF_nsQ_small.nc'
train_gen_LHF_nsQ = DataGenerator(
data_fn = path+TRAINFILE_LHF_nsQ,
input_vars = in_vars,
output_vars = out_vars,
norm_fn = path+NORMFILE_LHF_nsQ,
input_transform = ('mean', 'maxrs'),
output_transform = scale_dict,
batch_size=8192,
shuffle=True,
)
```
### Data Generator Combined (latest flexible version)
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
#if path==path_aquaplanet: out_vars=['PHQPERC','TPHYSTNDPERC','QRLPERC','QRSPERC']
out_vars = ['PHQ','TPHYSTND','QRL','QRS']
# TRAINFILE = '2021_01_24_O3_TRAIN_shuffle.nc'
NORMFILE = '2021_01_24_NORM_O3_small.nc'
# VALIDFILE = '2021_01_24_O3_VALID.nc'
# GENTESTFILE = 'CI_SP_P4K_valid.nc'
# In physical space
TRAINFILE = '2021_03_18_O3_TRAIN_M4K_shuffle.nc'
VALIDFILE = '2021_03_18_O3_VALID_M4K.nc'
TESTFILE_DIFFCLIMATE = '2021_03_18_O3_TRAIN_P4K_shuffle.nc'
TESTFILE_DIFFGEOG = '2021_04_18_RG_TRAIN_M4K_shuffle.nc'
# In percentile space
#TRAINFILE = '2021_04_09_PERC_TRAIN_M4K_shuffle.nc'
#TRAINFILE = '2021_01_24_O3_small_shuffle.nc'
#VALIDFILE = '2021_04_09_PERC_VALID_M4K.nc'
#TESTFILE = '2021_04_09_PERC_TEST_P4K.nc'
```
Ankitesh's old data generator has been replaced by the improved, flexible version below.
```
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
```
## Add callback class to track loss on multiple sets during training
Adapted from [this StackOverflow answer](https://stackoverflow.com/questions/47731935/using-multiple-validation-sets-with-keras).
```
test_diffgeog_gen_CI[0][0].shape
np.argwhere(np.isnan(test_diffclimate_gen_CI[0][1]))
np.argwhere(np.isnan(test_diffclimate_gen_CI[0][0]))
class AdditionalValidationSets(Callback):
    def __init__(self, validation_sets, verbose=0, batch_size=None):
        """
        :param validation_sets:
            a list of 2-tuples (validation_generator, validation_set_name)
        :param verbose:
            verbosity mode, 1 or 0
        :param batch_size:
            batch size to be used when evaluating on the additional datasets
        """
        super(AdditionalValidationSets, self).__init__()
        self.validation_sets = validation_sets
        self.epoch = []
        self.history = {}
        self.verbose = verbose
        self.batch_size = batch_size

    def on_train_begin(self, logs=None):
        self.epoch = []
        self.history = {}

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.epoch.append(epoch)
        # record the same values as History() as well
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)
        # evaluate on the additional validation sets
        for validation_set in self.validation_sets:
            valid_generator, valid_name = validation_set
            results = self.model.evaluate_generator(generator=valid_generator)
            for metric, result in zip(self.model.metrics_names, [results]):
                valuename = valid_name + '_' + metric
                self.history.setdefault(valuename, []).append(result)
```
## Quick test to develop custom loss fx (no loss tracking across multiple datasets)
#### Input and Output Rescaling (T=BCONS)
```
Tscaling_name = 'BCONS'
train_gen_T = train_gen_BCONS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
pdf = {}
for fname in [TRAINFILE, VALIDFILE, TESTFILE_DIFFCLIMATE, TESTFILE_DIFFGEOG]:
    # Use a dedicated loop variable: `path` is the data-directory prefix
    # used by the generators above, so shadowing it would break them.
    with open(pathPKL + '/' + fname + '_PERC.pkl', 'rb') as hf:
        pdf[fname] = pickle.load(hf)
def mse_physical(pdf):
    # MSE evaluated in physical space: map the percentile-space columns back
    # through the per-level percentile table `pdf` before differencing.
    # Requires tensorflow_probability imported as tfp.
    def loss(y_true, y_pred):
        # TF tensors do not support column assignment, so interpolate
        # level by level and stack the results.
        y_true_cols, y_pred_cols = [], []
        for ilev in range(120):
            y_true_cols.append(tfp.math.interp_regular_1d_grid(
                y_true[:, ilev], x_ref_min=0., x_ref_max=1., y_ref=pdf[:, ilev]))
            y_pred_cols.append(tfp.math.interp_regular_1d_grid(
                y_pred[:, ilev], x_ref_min=0., x_ref_max=1., y_ref=pdf[:, ilev]))
        y_true_physical = tf.stack(y_true_cols, axis=1)
        y_pred_physical = tf.stack(y_pred_cols, axis=1)
        return tf.reduce_mean(
            tf.math.squared_difference(y_pred_physical, y_true_physical), axis=-1)
    return loss
# model = load_model('model.h5',
# custom_objects={'loss': asymmetric_loss(alpha)})
model.compile(tf.keras.optimizers.Adam(),
loss=mse_physical(pdf=np.float32(pdf['2021_03_18_O3_TRAIN_M4K_shuffle.nc']['PERC_array'][:,94:])))
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
#save_name = '2021_06_12_LOGI_PERC_RH_BCONS_LHF_nsDELQ'
save_name = '2021_06_12_Test'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
                    callbacks=[earlyStopping, mcp_save_pos])  # history callback not registered in this quick test
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
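The custom loss maps percentile-space values back to physical space with `tfp.math.interp_regular_1d_grid`. On a regular grid over [0, 1] this is plain 1-D linear interpolation; a NumPy sketch with a hypothetical 5-point per-level lookup table:

```python
import numpy as np

# Hypothetical per-level percentile table: y_ref[i] is the physical
# value at percentile i / (N - 1).
y_ref = np.array([-5.0, -1.0, 0.0, 2.0, 10.0])
x_ref = np.linspace(0.0, 1.0, y_ref.size)  # regular grid on [0, 1]

def to_physical(p):
    # Scalar equivalent of interp_regular_1d_grid(p, 0, 1, y_ref).
    return float(np.interp(p, x_ref, y_ref))

print(to_physical(0.5))    # 0.0, the table midpoint
print(to_physical(0.875))  # 6.0, halfway between 2.0 and 10.0
```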
## Models tracking losses across climates and geography (Based on cold Aquaplanet)
### MLR or Logistic regression
#### BF
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_MLR'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
#model.load_weights(path_HDF5+save_name+'.hdf5')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
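The history pickle is written by string-concatenating a directory and a file name; `os.path.join` makes the separator explicit and avoids silently writing next to, rather than inside, the target directory. A self-contained sketch using a temporary directory (paths and names here are placeholders, not the cluster paths above):

```python
import os
import pickle
import tempfile

pathPKL = tempfile.mkdtemp()   # placeholder for the PKL_DATA directory
save_name = 'demo_run'
fname = os.path.join(pathPKL, save_name + '_hist.pkl')

# Write and read back a small history dict, closing handles promptly.
with open(fname, 'wb') as hf:
    pickle.dump({'hist': {'val_loss': [1.0, 0.5]}}, hf)
with open(fname, 'rb') as hf:
    F_data = pickle.load(hf)
print(F_data['hist']['val_loss'])  # [1.0, 0.5]
```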
#### Input Rescaling (T=T-TNS)
```
Tscaling_name = 'TfromNS'
train_gen_T = train_gen_TNS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_MLR_RH_TfromNS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
#### Input Rescaling (T=BCONS)
```
Tscaling_name = 'BCONS'
train_gen_T = train_gen_BCONS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_MLR_RH_BCONS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
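All of these runs use `EarlyStopping(monitor='val_loss', patience=10, mode='min')`. A pure-Python sketch of that stopping rule (an approximation of the Keras logic, not the library code):

```python
def early_stop_epoch(val_losses, patience=10):
    # Stop once val_loss has failed to improve for `patience`
    # consecutive epochs; return the epoch index where training halts,
    # or None if the rule never triggers.
    best = float('inf')
    wait = 0
    for epoch, v in enumerate(val_losses):
        if v < best:
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Best value at epoch 1, then a 13-epoch plateau: halts at epoch 11.
print(early_stop_epoch([1.0, 0.9] + [0.95] * 13, patience=10))
```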
#### Input and Output Rescaling (T=T-TNS)
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
out_vars=['PHQPERC','TPHYSTNDPERC','QRLPERC','QRSPERC']
# TRAINFILE = '2021_01_24_O3_TRAIN_shuffle.nc'
NORMFILE = '2021_01_24_NORM_O3_small.nc'
# VALIDFILE = '2021_01_24_O3_VALID.nc'
# GENTESTFILE = 'CI_SP_P4K_valid.nc'
# In percentile space
TRAINFILE = '2021_04_09_PERC_TRAIN_M4K_shuffle.nc'
VALIDFILE = '2021_04_09_PERC_VALID_M4K.nc'
TESTFILE_DIFFCLIMATE = '2021_04_09_PERC_TRAIN_P4K_shuffle.nc'
TESTFILE_DIFFGEOG = '2021_04_24_RG_PERC_TRAIN_M4K_shuffle.nc'
Tscaling_name = 'TfromNS'
train_gen_T = train_gen_TNS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_LOGI_PERC_RH_TfromNS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
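The LOGI variants append a sigmoid to the linear layer so predictions stay in (0, 1), matching the percentile-space targets; a quick numeric sketch of that bound:

```python
import math

def sigmoid(x):
    # Logistic function applied elementwise by the model's output layer.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5
print(sigmoid(10.0))  # close to, but strictly below, 1
```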
#### Input and Output Rescaling (T=BCONS)
```
Tscaling_name = 'BCONS'
train_gen_T = train_gen_BCONS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_LOGI_PERC_RH_BCONS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### NN
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
#if path==path_aquaplanet: out_vars=['PHQPERC','TPHYSTNDPERC','QRLPERC','QRSPERC']
out_vars = ['PHQ','TPHYSTND','QRL','QRS']
# TRAINFILE = '2021_01_24_O3_TRAIN_shuffle.nc'
NORMFILE = '2021_01_24_NORM_O3_small.nc'
# VALIDFILE = '2021_01_24_O3_VALID.nc'
# GENTESTFILE = 'CI_SP_P4K_valid.nc'
# In physical space
TRAINFILE = '2021_03_18_O3_TRAIN_M4K_shuffle.nc'
VALIDFILE = '2021_03_18_O3_VALID_M4K.nc'
TESTFILE_DIFFCLIMATE = '2021_03_18_O3_TRAIN_P4K_shuffle.nc'
TESTFILE_DIFFGEOG = '2021_04_18_RG_TRAIN_M4K_shuffle.nc'
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling=None,
Tscaling=None,
LHFscaling=None,
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=None,
inp_div_Qscaling=None,
inp_sub_Tscaling=None,
inp_div_Tscaling=None,
inp_sub_LHFscaling=None,
inp_div_LHFscaling=None,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
```
#### BF
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_NN'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
#model.load_weights(path_HDF5+save_name+'.hdf5')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+'/'+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
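As a sanity check against `model.summary()`, the parameter count of the NN above (64 inputs, seven Dense(128) hidden layers, linear Dense(120) output; LeakyReLU adds no parameters) can be computed by hand:

```python
# Weights + biases per Dense layer: n_in * n_out + n_out.
n_in, n_hidden, n_out = 64, 128, 120
params = n_in * n_hidden + n_hidden             # input -> first hidden layer
params += 6 * (n_hidden * n_hidden + n_hidden)  # six more hidden layers
params += n_hidden * n_out + n_out              # hidden -> output layer
print(params)  # 122872
```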
#### Input Rescaling (T=T-TNS)
```
Tscaling_name = 'TfromNS'
train_gen_T = train_gen_TNS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_NN_RH_TfromNS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
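Each run above ends by pickling the recorded history as `{'hist': hist_rec}`. A minimal, self-contained sketch of writing and reloading that layout (a dummy history dict and a temporary file stand in for the real `save_name+'_hist.pkl'`):

```python
import os
import pickle
import tempfile

# Dummy stand-in for the history.history dict recorded by the callbacks above.
hist_rec = {'loss': [0.9, 0.5, 0.3], 'val_loss': [1.0, 0.6, 0.4]}

# Write with the same {'hist': ...} layout used in the notebook.
pkl_path = os.path.join(tempfile.mkdtemp(), 'demo_hist.pkl')
with open(pkl_path, 'wb') as hf:
    pickle.dump({'hist': hist_rec}, hf)

# Reload for later analysis/plotting.
with open(pkl_path, 'rb') as hf:
    F_data = pickle.load(hf)
recovered = F_data['hist']

# Epoch with the lowest validation loss.
best_epoch = min(range(len(recovered['val_loss'])),
                 key=recovered['val_loss'].__getitem__)
```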
#### Input Rescaling (T=BCONS)
```
Tscaling_name = 'BCONS'
train_gen_T = train_gen_BCONS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=scale_dict,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after RH and BCONS transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_NN_RH_BCONS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
#### Input and Output Rescaling (T=T-TNS)
```
in_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
out_vars=['PHQPERC','TPHYSTNDPERC','QRLPERC','QRSPERC']
# TRAINFILE = '2021_01_24_O3_TRAIN_shuffle.nc'
NORMFILE = '2021_01_24_NORM_O3_small.nc'
# VALIDFILE = '2021_01_24_O3_VALID.nc'
# GENTESTFILE = 'CI_SP_P4K_valid.nc'
# In percentile space
TRAINFILE = '2021_04_09_PERC_TRAIN_M4K_shuffle.nc'
VALIDFILE = '2021_04_09_PERC_VALID_M4K.nc'
TESTFILE_DIFFCLIMATE = '2021_04_09_PERC_TRAIN_P4K_shuffle.nc'
TESTFILE_DIFFGEOG = '2021_04_24_RG_PERC_TRAIN_M4K_shuffle.nc'
Tscaling_name = 'TfromNS'
train_gen_T = train_gen_TNS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_NN_PERC_RH_TfromNS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
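The "Input and Output Rescaling" runs predict percentile-space targets (`PHQPERC`, `TPHYSTNDPERC`, ...), which live in [0, 1], so a sigmoid is appended to the final linear layer to squash every output into that interval. A minimal NumPy sketch of the bounding behavior (the pre-activation values are illustrative, not real model outputs):

```python
import numpy as np

def sigmoid(x):
    # Same squashing function as tf.keras.activations.sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

# A linear output layer can emit any real value...
pre_activation = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
# ...but the sigmoid maps each one strictly inside (0, 1),
# matching targets that live in percentile space.
bounded = sigmoid(pre_activation)
```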
#### Input and Output Rescaling (T=BCONS)
```
Tscaling_name = 'BCONS'
train_gen_T = train_gen_BCONS
train_gen_CI = DataGeneratorCI(data_fn = path+TRAINFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
valid_gen_CI = DataGeneratorCI(data_fn = path+VALIDFILE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffclimate_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFCLIMATE,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
test_diffgeog_gen_CI = DataGeneratorCI(data_fn = path+TESTFILE_DIFFGEOG,
input_vars=in_vars,
output_vars=out_vars,
norm_fn=path+NORMFILE,
input_transform=('mean', 'maxrs'),
output_transform=None,
batch_size=8192,
shuffle=True,
xarray=False,
var_cut_off=None,
Qscaling='RH',
Tscaling=Tscaling_name,
LHFscaling='LHF_nsDELQ',
SHFscaling=None,
output_scaling=False,
interpolate=False,
hyam=hyam,hybm=hybm,
inp_sub_Qscaling=train_gen_RH.input_transform.sub,
inp_div_Qscaling=train_gen_RH.input_transform.div,
inp_sub_Tscaling=train_gen_T.input_transform.sub,
inp_div_Tscaling=train_gen_T.input_transform.div,
inp_sub_LHFscaling=train_gen_LHF_nsDELQ.input_transform.sub,
inp_div_LHFscaling=train_gen_LHF_nsDELQ.input_transform.div,
inp_sub_SHFscaling=None,
inp_div_SHFscaling=None,
lev=None, interm_size=40,
lower_lim=6,is_continous=True,Tnot=5,
epsQ=1e-3,epsT=1,mode='train')
inp = Input(shape=(64,)) ## input after RH and BCONS transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_26_NN_PERC_RH_BCONS_LHF_nsDELQ'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_diffclimate_gen_CI,'trainP4K'),(test_diffgeog_gen_CI,'trainM4K_RG')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
## Models tracking losses across climates/geography (Warm to Cold)
## Brute-Force Model
### Climate-invariant (T,Q,PS,S0,SHF,LHF)$\rightarrow$($\dot{T}$,$\dot{q}$,RADFLUX)
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(64, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
# Where to save the model
path_HDF5 = '/oasis/scratch/comet/tbeucler/temp_project/CBRAIN_models/'
save_name = 'BF_temp'
model.compile(tf.keras.optimizers.Adam(), loss=mse)
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
# tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=0, update_freq=1000,embeddings_freq=1)
Nep = 10
model.fit_generator(train_gen, epochs=Nep, validation_data=valid_gen,\
callbacks=[earlyStopping, mcp_save_pos])
```
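Every training call above pairs `EarlyStopping(monitor='val_loss', patience=10)` with `ModelCheckpoint(save_best_only=True)`: training halts once ten consecutive epochs fail to improve the validation loss, while the `.hdf5` file on disk always holds the best-so-far weights. A minimal pure-Python sketch of that patience rule (the loss trace is made up):

```python
def early_stop_epoch(val_losses, patience):
    """Return the index of the last epoch run under an early-stopping rule:
    stop once `patience` consecutive epochs fail to improve the best val_loss."""
    best, wait = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0  # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch  # patience exhausted: stop here
    return len(val_losses) - 1  # ran all epochs without triggering the stop

# Made-up val_loss trace: improves for three epochs, then plateaus.
trace = [1.0, 0.8, 0.7] + [0.75] * 12
stop = early_stop_epoch(trace, patience=10)
```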
### Ozone (T,Q,$O_{3}$,S0,PS,LHF,SHF)$\rightarrow$($\dot{q}$,$\dot{T}$,lw,sw)
```
inp = Input(shape=(94,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_01_25_O3'
model.compile(tf.keras.optimizers.Adam(), loss=mse)
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
# tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=0, update_freq=1000,embeddings_freq=1)
Nep = 10
model.fit_generator(train_gen_O3, epochs=Nep, validation_data=valid_gen_O3,\
callbacks=[earlyStopping, mcp_save_pos])
Nep = 10
model.fit_generator(train_gen_O3, epochs=Nep, validation_data=valid_gen_O3,\
callbacks=[earlyStopping, mcp_save_pos])
```
### No Ozone (T,Q,S0,PS,LHF,SHF)$\rightarrow$($\dot{q}$,$\dot{T}$,lw,sw)
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_01_25_noO3'
model.compile(tf.keras.optimizers.Adam(), loss=mse)
model.load_weights(path_HDF5+save_name+'.hdf5')
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
# tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=0, update_freq=1000,embeddings_freq=1)
# Nep = 15
# model.fit_generator(train_gen_noO3, epochs=Nep, validation_data=valid_gen_noO3,\
# callbacks=[earlyStopping, mcp_save_pos])
Nep = 10
model.fit_generator(train_gen_noO3, epochs=Nep, validation_data=valid_gen_noO3,\
callbacks=[earlyStopping, mcp_save_pos])
```
### BF linear version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
# densout = Dense(128, activation='linear')(inp)
# densout = LeakyReLU(alpha=0.3)(densout)
# for i in range(6):
#     densout = Dense(128, activation='linear')(densout)
#     densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_15_MLR_PERC'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 15
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
history.history
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### BF Logistic version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
# densout = Dense(128, activation='linear')(inp)
# densout = LeakyReLU(alpha=0.3)(densout)
# for i in range(6):
#     densout = Dense(128, activation='linear')(densout)
#     densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(inp)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_15_Log_PERC'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 15
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
history.history
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### BF NN version with test loss tracking
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    densout = Dense(128, activation='linear')(densout)
    densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_08_NN6L'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
history.history
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH Logistic version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
# densout = Dense(128, activation='linear')(inp)
# densout = LeakyReLU(alpha=0.3)(densout)
# for i in range(6):
#     densout = Dense(128, activation='linear')(densout)
#     densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(inp)
dense_out = tf.keras.activations.sigmoid(dense_out)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_15_Log_PERC_RH'
# history = AdditionalValidationSets([(train_gen_CI,valid_gen_CI,test_gen_CI)])
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 15
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
history.history
hist_rec = history.history
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH linear version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_RH'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### QSATdeficit linear version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_QSATdeficit'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### TfromNS linear version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_TfromNS'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### BCONS linear version
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_BCONS'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
## Mixed Model
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_RH_BCONS'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA/'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+(T-TNS)
### RH+NSto220
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_31_MLR_RH_NSto220'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+LHF_nsQ
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_19_MLR_RH_LHF_nsQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+TfromNS+LHF_nsDELQ NN version with test loss tracking
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
densout = Dense(128, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
densout = Dense(128, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
dense_out = Dense(120, activation='linear')(densout)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_09_NN7L_RH_TfromNS_LHF_nsDELQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 20
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+TfromNS+LHF_nsQ
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_23_MLR_RH_TfromNS_LHF_nsQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+BCONS+LHF_nsDELQ
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_03_23_MLR_RH_BCONS_LHF_nsDELQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+NSto220+LHF_nsDELQ
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_01_MLR_RH_NSto220_LHF_nsDELQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
### RH+NSto220+LHF_nsQ
```
inp = Input(shape=(64,)) ## input after rh and tns transformation
dense_out = Dense(120, activation='linear')(inp)
model = tf.keras.models.Model(inp, dense_out)
model.summary()
model.compile(tf.keras.optimizers.Adam(), loss=mse)
# Where to save the model
path_HDF5 = '/DFS-L/DATA/pritchard/tbeucler/SPCAM/HDF5_DATA/'
save_name = '2021_04_03_MLR_RH_NSto220_LHF_nsQ'
history = AdditionalValidationSets([(test_gen_CI,'testP4K')])
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_pos = ModelCheckpoint(path_HDF5+save_name+'.hdf5',save_best_only=True, monitor='val_loss', mode='min')
Nep = 10
model.fit_generator(train_gen_CI, epochs=Nep, validation_data=valid_gen_CI,\
callbacks=[earlyStopping, mcp_save_pos, history])
hist_rec = history.history
hist_rec
pathPKL = '/export/home/tbeucler/CBRAIN-CAM/notebooks/tbeucler_devlog/PKL_DATA'
hf = open(pathPKL+save_name+'_hist.pkl','wb')
F_data = {'hist':hist_rec}
pickle.dump(F_data,hf)
hf.close()
```
# Association Analysis
```
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
import pandas as pd
te = TransactionEncoder()
te_ary = te.fit_transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)
frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
display(frequent_itemsets)
from mlxtend.frequent_patterns import association_rules
strong_rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
display(strong_rules)
```
### 1. What is the advantage of using the Apriori algorithm in comparison with computing the support of every subset of an itemset in order to find the frequent itemsets in a transaction dataset? [0.5 marks out of 5]
1. In Apriori Algorithm, the level wise generation of frequent itemsets uses the Apriori property to reduce the search space.
2. According to the property, "All nonempty subsets of a frequent itemset must also be frequent".
3. So suppose for ['Milk', 'Apple', 'Kidney Beans', 'Eggs'], if any of its subsets is not frequent, this itemset is removed from the frequent itemsets.
4. Thus the Apriori algorithm eliminates unwanted supersets by checking for non-frequent subsets.
5. Also, after each join step, candidates that do not meet the minimum support (or confidence) threshold are removed.
6. They are therefore not used in successive higher-level joins of candidates.
7. At each level, the subsets are filtered out using the minimum support (or confidence) threshold and the supersets are filtered out using the Apriori property.
8. This considerably reduces the computation and search space.
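The pruning step described above can be sketched in a few lines of plain Python (a minimal illustration, not the mlxtend implementation): a candidate $k$-itemset survives only if every one of its $(k-1)$-subsets is frequent.

```python
from itertools import combinations

def prune_candidates(candidates, prev_frequent):
    """Keep a candidate k-itemset only if every (k-1)-subset of it is
    frequent (the Apriori property)."""
    kept = []
    for cand in candidates:
        if all(frozenset(s) in prev_frequent
               for s in combinations(cand, len(cand) - 1)):
            kept.append(cand)
    return kept

# toy example: frequent 2-itemsets and two candidate 3-itemsets
L2 = {frozenset(p) for p in [(1, 2), (1, 5), (2, 3), (3, 4), (3, 5)]}
C3 = [frozenset(s) for s in [(1, 2, 5), (3, 4, 5)]]
# {1,2,5} is pruned because {2,5} is not frequent;
# {3,4,5} is pruned because {4,5} is not frequent
print(prune_candidates(C3, L2))  # -> []
```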
### 2. Let $\mathcal{L}_1$ denote the set of frequent $1$-itemsets. For $k \geq 2$, why must every frequent $k$-itemset be a superset of an itemset in $\mathcal{L}_1$? [0.5 marks out of 5]
1. $\mathcal{L}_1$ is the set of frequent $1$-itemsets. This is the lowest level, where a minimum support threshold is used to eliminate non-frequent itemsets.
2. Every higher-level itemset is built from this set: $\mathcal{L}_2$ contains only itemsets obtained by joining members of $\mathcal{L}_1$, $\mathcal{L}_3$ only itemsets obtained by joining members of $\mathcal{L}_2$, and so on.
3. Thus, eventually, every frequent $k$-itemset is a superset of itemsets of sizes $1$ through $k-1$.
4. Moreover, $\mathcal{L}_{k-1}$ is used to find $\mathcal{L}_k$ for every $k \geq 2$, which further implies that every frequent $k$-itemset is a superset of an itemset in $\mathcal{L}_1$.
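The downward-closure argument can be checked directly on the tutorial's toy transaction dataset; the `support` helper below is a plain-Python sketch written here for illustration:

```python
from itertools import combinations

# transaction dataset from the tutorial above
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    return sum(1 for t in transactions if set(itemset) <= set(t)) / len(transactions)

min_support = 0.6
items = {i for t in dataset for i in t}
L1 = {frozenset({i}) for i in items if support({i}, dataset) >= min_support}

# every frequent 2-itemset must be a superset of an itemset in L1
frequent_2 = [frozenset(p) for p in combinations(items, 2)
              if support(p, dataset) >= min_support]
assert all(any(s <= f for s in L1) for f in frequent_2)
print(sorted(sorted(f) for f in frequent_2))
```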
### 3. Let $\mathcal{L}_2 = \{ \{1,2\}, \{1,5\}, \{2, 3\}, \{3, 4\}, \{3, 5\}\}$. Compute the set of candidates $\mathcal{C}_3$ that is obtained by joining every pair of joinable itemsets from $\mathcal{L}_2$. [0.5 marks out of 5]
1. Each of the itemsets is already sorted, and all are of the same length.
2. Therefore, two itemsets can be joined if they share the same items except for the last one.
3. Only the following itemsets can be joined.
| Joining itemsets | $\mathcal{C}_3$ itemset |
|:-|:-|
| $\{1,2\}$, $\{1,5\}$ | $\{1,2,5\}$ |
| $\{3,4\}$, $\{3,5\}$ | $\{3,4,5\}$ |
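The join step can be reproduced in a short helper (an illustrative sketch: two sorted $(k-1)$-itemsets are joinable when they agree on everything but the last item):

```python
def join_step(prev_level):
    """Join pairs of sorted (k-1)-itemsets that share all items except the last."""
    itemsets = sorted(sorted(s) for s in prev_level)
    candidates = set()
    for i in range(len(itemsets)):
        for j in range(i + 1, len(itemsets)):
            if itemsets[i][:-1] == itemsets[j][:-1]:  # same prefix -> joinable
                candidates.add(frozenset(itemsets[i]) | frozenset(itemsets[j]))
    return candidates

L2 = [{1, 2}, {1, 5}, {2, 3}, {3, 4}, {3, 5}]
C3 = join_step(L2)
print(C3)  # the two candidates {1, 2, 5} and {3, 4, 5}
```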
### 4. Let $S_1$ denote the support of the association rule $\{ \text{popcorn, soda} \} \Rightarrow \{ \text{movie} \}$. Let $S_2$ denote the support of the association rule $\{ \text{popcorn} \} \Rightarrow \{ \text{movie} \}$. What is the relationship between $S_1$ and $S_2$? [0.5 marks out of 5]
1. $S_1$ denote the support of the association rule $\{ \text{popcorn, soda} \} \Rightarrow \{ \text{movie} \}$
2. $S_2$ denote the support of the association rule $\{ \text{popcorn} \} \Rightarrow \{ \text{movie} \}$
3. $\{ \text{popcorn, soda} \}$ is a superset of $\{ \text{popcorn} \} $
4. The support of a subset is greater than or equal to the support of its superset. Therefore $S_2 \geq S_1$.
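This anti-monotonicity is easy to confirm numerically; the transactions below are a hypothetical toy example, not data from the tutorial:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# hypothetical toy transactions, purely for illustration
transactions = [{'popcorn', 'soda', 'movie'},
                {'popcorn', 'movie'},
                {'popcorn', 'soda'},
                {'movie'},
                {'popcorn', 'movie', 'candy'}]

S1 = support({'popcorn', 'soda', 'movie'}, transactions)  # {popcorn, soda} => {movie}
S2 = support({'popcorn', 'movie'}, transactions)          # {popcorn} => {movie}
print(S1, S2)  # 0.2 0.6
assert S2 >= S1  # a superset can never occur in more transactions than its subset
```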
### 5. What is the support of the rule $\{ \} \Rightarrow \{ \text{Kidney Beans} \}$ in the transaction dataset used in the tutorial presented above? [0.5 marks out of 5]
```
frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
display(frequent_itemsets[frequent_itemsets['itemsets'] == frozenset({'Kidney Beans'})])
```
1. As per the dataset provided, {Kidney Beans} occurs 5 times in the dataset of 5 transactions.
2. Support = transactions_where_item(s)_occur / total_transactions = 5/5 = 1
3. The same is confirmed by the code above wherein using the apriori function, the frequent_itemsets are gathered based on the min_support of 0.6.
4. Then the itemset "Kidney Beans" is displayed. It shows the same support of 1.0
### 6. In the transaction dataset used in the tutorial presented above, what is the maximum length of a frequent itemset for a support threshold of 0.2? [0.5 marks out of 5]
```
frequent_itemsets = apriori(df, min_support=0.2, use_colnames=True)
frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x)) # length of each frozenset
biggest = frequent_itemsets['length'].max()
display(frequent_itemsets[frequent_itemsets['length']== biggest])
```
1. The frequent itemsets having the support threshold of 0.2 are gathered using the function apriori by setting min_support=0.2 and stored in the dataframe frequent_itemsets
2. The "length" column is added to the frequent_itemset dataframe.
3. Using lambda, the length of each 'itemsets' is gathered and stored in the length column of the dataframe.
4. The max function is used to get the maximum length and stored in the variable "biggest".
5. Finally the frequent_itemsets having the "biggest" length is displayed by checking the length of each "frequent_itemsets['length'] == biggest"
6. The maximum length of a frequent itemset for a support threshold of 0.2 is 6, and two itemsets have been identified with that length.
### 7. Implement a function that receives a ``DataFrame`` of frequent itemsets and a **strong** association rule (represented by a ``frozenset`` of antecedents and a ``frozenset`` of consequents). This function should return the corresponding Kulczynski measure. Include the code in your report. [1 mark out of 5]
```
def KulczynskiMeasure(frequentItemset, antecedent, consequent):
actualItemset = frozenset().union(antecedent, consequent)
supportofA = frequentItemset[frequentItemset['itemsets'] == antecedent]['support'].iloc[0]
supportofB = frequentItemset[frequentItemset['itemsets'] == consequent]['support'].iloc[0]
supportofAUB = frequentItemset[frequentItemset['itemsets'] == actualItemset]['support'].iloc[0]
vAtoB = supportofAUB/supportofA
vBtoA = supportofAUB/supportofB
return (vAtoB+vBtoA)/2
print("Kulczynski Measure of Strong Rule having two way assocation for",
frozenset({'Eggs'}), "and",
frozenset({'Kidney Beans'}), "is",
KulczynskiMeasure(frequent_itemsets, frozenset({'Eggs'}), frozenset({'Kidney Beans'})))
```
1. The function KulczynskiMeasure is created with three input parameters.
2. The parameters include the following:
frequentItemset -> It is a dataframe of frequent itemsets that are identified using the apriori function, setting a minimum support threshold.
-> The dataframe also contains the support of each frequent itemset
antecedent -> It is a frozenset of antecedent
consequent -> It is a frozenset of consequent
3. It is assumed that the antecedent and consequent passed to this function form a strong association rule, i.e., A => B and B => A both hold.
4. Using the union function a new set of combined antecedent and consequent is created to gather supportAUB
5. If the antecedent is present in the frequentItemset, the corresponding support is stored in supportofA
6. If the consequent is present in the frequentItemset, the corresponding support is stored in supportofB
7. If the union(antecedent, consequent) frozenset is present in the frequentItemset, the corresponding support is stored in supportofAUB
8. The confidence of A => B is calculated by dividing the support of AUB by Support of A
9. The confidence of B => A is calculated by dividing the support of AUB by Support of B
10. The Kulczynski Measure is the average of confidence of A => B and confidence of B => A given by the following formula and same is used in the code
$K_{A,B}$ = $\frac{V_{{A}\Rightarrow {B}} + V_{{B}\Rightarrow {A}}}{2}$
11. The function is called in the print statement and the output is displayed below the code.
### 8. Implement a function that receives a ``DataFrame`` of frequent itemsets and a **strong** association rule (represented by a ``frozenset`` of antecedents and a ``frozenset`` of consequents). This function should return the corresponding imbalance ratio. Include the code in your report. [1 mark out of 5]
```
def ImbalanceRatio(frequentItemset, antecedent, consequent):
actualItemset = frozenset().union(antecedent, consequent)
supportofA = frequentItemset[frequentItemset['itemsets'] == antecedent]['support'].iloc[0]
supportofB = frequentItemset[frequentItemset['itemsets'] == consequent]['support'].iloc[0]
supportofAUB = frequentItemset[frequentItemset['itemsets'] == actualItemset]['support'].iloc[0]
return abs(supportofA-supportofB)/(supportofA+supportofB-supportofAUB)
print("Imbalance Ratio of Strong Rule having two way assocation for",
frozenset({'Eggs'}), "and",
frozenset({'Kidney Beans'}), "is",
ImbalanceRatio(frequent_itemsets, frozenset({'Eggs'}), frozenset({'Kidney Beans'})))
```
1. The function ImbalanceRatio is created with three input parameters.
2. The parameters include the following:
frequentItemset -> It is a dataframe of frequent itemsets that are identified using the apriori function, setting a minimum support threshold.
-> The dataframe also contains the support of each frequent itemset
antecedent -> It is a frozenset of antecedent
consequent -> It is a frozenset of consequent
3. It is assumed that the antecedent and consequent passed to this function form a strong association rule, i.e., A => B and B => A both hold.
4. Using the union function a new set of combined antecedent and consequent is created to gather supportAUB
5. If the antecedent is present in the frequentItemset, the corresponding support is stored in supportofA
6. If the consequent is present in the frequentItemset, the corresponding support is stored in supportofB
7. If the union(antecedent, consequent) frozenset is present in the frequentItemset, the corresponding support is stored in supportofAUB
8. The Imbalance ratio is given by the following formula and the same is implemented using support values.
$I_{A,B}$ = $\frac{|N_{A} - N_{B}|} {N_{A} + N_{B} - N_{AUB}}$
9. The function is called in the print statement and the output is displayed below the code.
# Outlier Detection
## 1. For an application on credit card fraud detection, we are interested in detecting contextual outliers. Suggest 2 possible contextual attributes and 2 possible behavioural attributes that could be used for this application, and explain why each of your suggested attribute should be considered as either contextual or behavioural. [0.5 marks out of 5]
Contextual Attributes: income level, bank balance, age, gender, transaction mode. These are contextual because they define the context of a transaction (who is spending, and through which channel) rather than the behaviour being evaluated.

Behavioural Attributes: expenditure patterns, credit limit usage. These are behavioural because they describe the behaviour evaluated within a given context: a spending pattern that is normal for a high-income customer may be anomalous for a low-income one.
## 2. Assume that you are provided with the [University of Wisconsin breast cancer dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data) from the Week 3 lab, and that you are asked to detect outliers from this dataset. Additional information on the dataset attributes can be found [online](https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names). Explain one possible outlier detection method that you could apply for detecting outliers for this particular dataset, explain what is defined as an outlier for your suggested approach given this particular dataset, and justify why would you choose this particular method for outlier detection. [1 mark out of 5]
```
import pandas as pd
import numpy as np
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data', header=None)
data.columns = ['Sample code', 'Clump Thickness', 'Uniformity of Cell Size', 'Uniformity of Cell Shape',
'Marginal Adhesion', 'Single Epithelial Cell Size', 'Bare Nuclei', 'Bland Chromatin',
'Normal Nucleoli', 'Mitoses','Class']
data = data.drop(['Sample code'],axis=1)
data = data.replace('?',np.NaN)
# convert 'Bare Nuclei' to numeric and impute missing values with the column median
data['Bare Nuclei'] = pd.to_numeric(data['Bare Nuclei'])
data['Bare Nuclei'] = data['Bare Nuclei'].fillna(data['Bare Nuclei'].median())
data2 = data.drop(['Class'],axis=1)
data2.boxplot(figsize=(20,3))
```
## 3. The monthly rainfall in the London borough of Tower Hamlets in 2018 had the following amount of precipitation (measured in mm, values from January-December 2018): {22.93, 20.59, 25.65, 23.74, 25.24, 4.55, 23.45, 28.18, 23.52, 22.32, 26.73, 23.42}. Assuming that the data is based on a normal distribution, identify outlier values in the above dataset using the maximum likelihood method. [1 mark out of 5]
```
import numpy as np
df = pd.DataFrame(np.array([22.93, 20.59, 25.65, 23.74, 25.24, 4.55,
                            23.45, 28.18, 23.52, 22.32, 26.73, 23.42]), columns=['data'])
mean = df['data'].mean()
standard_deviation = df['data'].std(ddof=0)  # population std, as in the derivation below
deviation = df['data'].values - mean         # deviation of each month from the mean
h = deviation[5]                             # June (4.55 mm), the most deviating value
display(h)
j = h / standard_deviation                   # z-score of the suspected outlier
display(j)
```
$Precipitation = \{22.93, 20.59, 25.65, 23.74, 25.24, 4.55, 23.45, 28.18, 23.52, 22.32, 26.73, 23.42\}$
$Mean, \mu = \frac {22.93 + 20.59 + 25.65 + 23.74 + 25.24 + 4.55 + 23.45 + 28.18 + 23.52 + 22.32 + 26.73 + 23.42}{12}$
> $Mean, \mu = 22.53$
$Standard Deviation,\sigma = \sqrt {\frac {(22.93-22.53)^2 + (20.59-22.53)^2 + (25.65-22.53)^2 + (23.74-22.53)^2 + (25.24-22.53)^2 + (4.55-22.53)^2 + (23.45-22.53)^2 + (28.18-22.53)^2 + (23.52-22.53)^2 + (22.32-22.53)^2 + (26.73-22.53)^2 + (23.42-22.53)^2}{12}}$
$Standard Deviation, \sigma = \sqrt {\frac {0.16 + 3.76 + 9.73 + 1.46 + 7.34 + 323.28 + 0.85 + 31.92 + 0.98 + 0.04 + 17.64 + 0.79}{12}}$
$Standard Deviation, \sigma = \sqrt {\frac {397.95}{12}}$
$Standard Deviation, \sigma = \sqrt {{33.16}}$
> $Standard Deviation, \sigma = 5.76$
Finding Most Deviating Value:
| data point - mean | gives |
|:-|:-|
| 22.93 - 22.53 | 0.4 |
| 20.59 - 22.53 | -1.94 |
| 25.65 - 22.53 | 3.12 |
| 23.74 - 22.53 | 1.21 |
| 25.24 - 22.53 | 2.71 |
| 4.55 - 22.53 | -17.98 |
| 23.45 - 22.53 | 0.92 |
| 28.18 - 22.53 | 5.65 |
| 23.52 - 22.53 | 0.99 |
| 22.32 - 22.53 | -0.21 |
| 26.73 - 22.53 | 4.2 |
| 23.42 - 22.53 | 0.89 |
The most deviating value is -17.98, which points to 4.55 as the outlier candidate.

In a normal distribution, values lying more than $3\sigma$ from $\mu$ are treated as outliers. Here $|4.55 - 22.53| = 17.98 > 3\sigma = 17.28$ (a z-score of about $3.12$), which confirms 4.55 as an outlier.
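The same conclusion can be reached numerically (using the population standard deviation, to match the hand calculation above):

```python
import numpy as np

rain = np.array([22.93, 20.59, 25.65, 23.74, 25.24, 4.55,
                 23.45, 28.18, 23.52, 22.32, 26.73, 23.42])
mu = rain.mean()
sigma = rain.std()              # population standard deviation, as above
z = np.abs(rain - mu) / sigma   # z-score of each month
outliers = rain[z > 3]          # the 3-sigma rule
print(round(mu, 2), round(sigma, 2), outliers)
```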
## 4. You are provided with the graduation rate dataset used in the Week 4 lab (file graduation_rate.csv in the Week 4 lab supplementary data). For the 'high school gpa' attribute, compute the relative frequency (i.e. frequency normalised by the size of the dataset) of each value. Show these computed relative frequencies in your report. Two new data points are included in the dataset, one with a 'high school gpa' value of 3.6, and one with a 'high school gpa' value of 2.8. Using the above computed relative frequencies, which of the two new data points would you consider as an outlier and why? [0.5 marks out of 5]
```
import pandas as pd
df = pd.read_csv('graduation_rate.csv')
print('Dataset (head and tail):')
display(df)
print("high school gpa:")
freq_education = df['high school gpa'].value_counts()/len(df)
display(freq_education)
g= pd.DataFrame(freq_education)
display(g)
import numpy as np
def removeOutliers(x, outlierConstant):
a= np.array(x)
upper_quartile = np.percentile(a,75)
print(upper_quartile)
lower_quartile = np.percentile(a,25)
print(lower_quartile)
IQR = (upper_quartile - lower_quartile) * outlierConstant
print(IQR)
quartileSet = (lower_quartile - IQR, upper_quartile + IQR)
resultList = []
for y in a.tolist():
if y >= quartileSet[0] and y <=quartileSet[1]:
resultList.append(y)
return resultList
removeOutliers(g,4)
```
## 5. Using the stock prices dataset used in sections 1 and 2, estimate the outliers in the dataset using the one-class SVM classifier approach. As input to the classifier, use the percentage of changes in the daily closing price of each stock, as was done in section 1 of the notebook. Plot a 3D scatterplot of the dataset, where each object is color-coded according to whether it is an outlier or an inlier. Also compute a histogram and the frequencies of the estimated outlier and inlier labels. In terms of the plotted results, how does the one-class SVM approach for outlier detection differ from the parametric and proximity-based methods used in the lab notebook? What percentage of the dataset objects are classified as outliers? [1 mark out of 5]
```
import pandas as pd
import numpy as np
from sklearn.svm import OneClassSVM
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#alias the OneClassSVM class; it is instantiated with parameters further below
ocs = OneClassSVM
#Load CSV file, set the 'Date' values as the index of each row
#display the first rows of the dataframe
stocks = pd.read_csv('stocks.csv', header='infer')
stocks.index = stocks['Date']
stocks = stocks.drop(['Date'],axis=1)
stocks.head()
N,d = stocks.shape
#Compute delta
#this denotes the percentage of changes in daily closing price of each stock
delta = pd.DataFrame(100*np.divide(stocks.iloc[1:,:].values-stocks.iloc[:N-1,:]
.values, stocks.iloc[:N-1,:].values),columns=stocks.columns,
index=stocks.iloc[1:].index)
delta
# Extracting the values from the dataframe
data = delta.values
# Split dataset into input and output elements
X, y = data[:, :-1], data[:, -1]
# Summarize the shape of the dataset
print(X.shape, y.shape)
clf = ocs(nu=0.01,gamma='auto')
# Perform fit on input data and returns labels for that input data.
svm = clf.fit_predict(delta)
#store the predicted labels in a list
b = list(svm)
# Print labels: -1 for outliers and 1 for inliers.
print(b)
# Plot 3D scatterplot of outlier scores
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111, projection='3d')
p = ax.scatter(delta.MSFT,delta.F,delta.BAC,c=b,cmap='jet')
ax.set_xlabel('Microsoft')
ax.set_ylabel('Ford')
ax.set_zlabel('Bank of America')
fig.colorbar(p)
plt.show()
#to find the percentage of outliers and inliers
df= pd.Series(b).value_counts()
print(df)
Fi=(df/len(b))
Fi
#plot histogram for outliers and inliers
sns.set_style('darkgrid')
sns.distplot(b)
```
### 6. This question will combine concepts from both data preprocessing and outlier detection. Using the house prices dataset from Section 3 of this lab notebook, perform dimensionality reduction on the dataset using PCA with 2 principal components (make sure that the dataset is z-score normalised beforehand, and remember that PCA should only be applied on the input attributes). Then, perform outlier detection on the pre-processed dataset using the k-nearest neighbours approach using k=2. Display a scatterplot of the two principal components, where each object is colour-coded according to the computed outlier score. [1 marks out of 5]
```
import pandas as pd
from pandas import read_csv
from scipy.stats import zscore
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
import numpy as np
from scipy.spatial import distance
from numpy import sqrt
from numpy import hstack
import matplotlib.pyplot as plt
#Loading the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
df = read_csv(url, header=None)
#Extracting the values from the dataframe
data = df.values
#Split dataset into input and output elements
X, y = data[:, :-1], data[:, -1]
#Summarize the shape of the dataset
print(X.shape, y.shape)
#z score normalization is done
X_normalized = zscore(X)
#Principal component analysis is done for 2 components
pca = PCA(n_components=2)
#pca is fit transformed of X_normalized data(array)
principalComponents = pca.fit_transform(X_normalized)
knn = 2
nbrs = NearestNeighbors(n_neighbors=knn,
metric=distance.euclidean).fit(principalComponents)
distances, indices = nbrs.kneighbors(principalComponents)
print("Neighbour distances:\n", distances)
#the nearest neighbour of each point is the point itself (distance 0),
#so the distance to the k-th nearest neighbour is used as the outlier score
outlier_score = distances[:, -1]
#scatter plot with colour given by the kNN outlier score
plt.scatter(principalComponents[:,0],principalComponents[:,1],c=outlier_score, cmap='nipy_spectral')
#colorbar is added
plt.colorbar()
#labeling of plot
plt.xlabel("Principal_Components 1")
plt.ylabel("Principal_Components 2")
plt.title("outlier score")
#displaying the plot
plt.show()
```
```
import json
import glob
import re
import malaya
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
def is_number_regex(s):
    if re.match(r"^\d+?\.\d+?$", s) is None:
        return s.isdigit()
    return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(string)
tokenized = [w.lower() for w in tokenized if len(w) > 2]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
left, right, label = [], [], []
for file in glob.glob('quora/*.json'):
with open(file) as fopen:
x = json.load(fopen)
for i in x:
splitted = i[0].split(' <> ')
if len(splitted) != 2:
continue
left.append(splitted[0])
right.append(splitted[1])
label.append(i[1])
len(left), len(right), len(label)
with open('synonym0.json') as fopen:
s = json.load(fopen)
with open('synonym1.json') as fopen:
s1 = json.load(fopen)
synonyms = {}
for l, r in (s + s1):
if l not in synonyms:
synonyms[l] = r + [l]
else:
synonyms[l].extend(r)
synonyms = {k: list(set(v)) for k, v in synonyms.items()}
import random
def augmentation(s, maximum = 0.8):
s = s.lower().split()
for i in range(int(len(s) * maximum)):
index = random.randint(0, len(s) - 1)
word = s[index]
sy = synonyms.get(word, [word])
sy = random.choice(sy)
s[index] = sy
return s
train_left, test_left = left[:-50000], left[-50000:]
train_right, test_right = right[:-50000], right[-50000:]
train_label, test_label = label[:-50000], label[-50000:]
len(train_left), len(test_left)
aug = [' '.join(augmentation(train_left[0])) for _ in range(10)] + [train_left[0].lower()]
aug = list(set(aug))
aug
aug = [' '.join(augmentation(train_right[0])) for _ in range(10)] + [train_right[0].lower()]
aug = list(set(aug))
aug
train_label[0]
from tqdm import tqdm
LEFT, RIGHT, LABEL = [], [], []
for i in tqdm(range(len(train_left))):
aug_left = [' '.join(augmentation(train_left[i])) for _ in range(3)] + [train_left[i].lower()]
aug_left = list(set(aug_left))
aug_right = [' '.join(augmentation(train_right[i])) for _ in range(3)] + [train_right[i].lower()]
aug_right = list(set(aug_right))
for l in aug_left:
for r in aug_right:
LEFT.append(l)
RIGHT.append(r)
LABEL.append(train_label[i])
len(LEFT), len(RIGHT), len(LABEL)
for i in tqdm(range(len(LEFT))):
LEFT[i] = preprocessing(LEFT[i])
RIGHT[i] = preprocessing(RIGHT[i])
for i in tqdm(range(len(test_left))):
test_left[i] = preprocessing(test_left[i])
test_right[i] = preprocessing(test_right[i])
with open('train-similarity.json', 'w') as fopen:
json.dump({'left': LEFT, 'right': RIGHT, 'label': LABEL}, fopen)
with open('test-similarity.json', 'w') as fopen:
json.dump({'left': test_left, 'right': test_right, 'label': test_label}, fopen)
```
```
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.model_selection import KFold
from sklearn.pipeline import make_union, make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.base import TransformerMixin
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMRegressor
import optuna
from twws.select_variables import Selector, get_units_only
from twws.variables_list import category_columns, non_object_columns
pd.set_option('display.max_rows', None)
df = pd.read_csv('../data/data.csv')
df = get_units_only(df)
df.shape
df.missile_primary__ignition_amount = df.missile_primary__ignition_amount.map({'True': 1, '0': 0, '25': 25})
for col in non_object_columns:
df[col] = df[col] * 1
df[col] = df[col].fillna(-1)
for col in category_columns:
df[col] = df[col].fillna('missing')
df = df.loc[:, non_object_columns + category_columns + ['name']]
df = df.sample(frac=1)
df.shape
y = df.multiplayer_cost.apply(np.log)
x = df.drop('multiplayer_cost', axis=1)
columns_to_remove = ['multiplayer_cost',
'speed',
'charge_speed',
'missile_primary__total_accuracy',
'knock_interrupts_ignore_chance',
'missile_primary__reload_time',
'missile_primary__ammo',
'deceleration',
'unit_size',
'melee__ap_damage']
for col in columns_to_remove:
non_object_columns.remove(col)
features = ["melee_defence",
"health",
"melee_attack",
"leadership",
"armour",
"health_per_entity",
"run_speed",
"missile_primary__damage",
"charge_bonus",
"melee__ap_ratio",
"melee__base_damage",
"mass",
"melee__damage",
"height",
"melee__melee_attack_interval",
"acceleration",
"melee__bonus_v_large",
"melee__bonus_v_infantry",
"melee__splash_attack_max_attacks",
"parry_chance",
"reload",
"hit_reactions_ignore_chance",
"melee__weapon_length",
"damage_mod_physical",
"accuracy",
"combat_reaction_radius",
"fly_speed",
"melee__splash_attack_power_multiplier"]
model = make_pipeline(
Selector(features),
LGBMRegressor(objective='mae')
)
def objective(trial):
lgbmregressor__num_leaves = trial.suggest_int('lgbmregressor__num_leaves', 2, 500)
lgbmregressor__max_depth = trial.suggest_int('lgbmregressor__max_depth', 2, 150)
lgbmregressor__n_estimators = trial.suggest_int('lgbmregressor__n_estimators', 10, 500)
lgbmregressor__subsample_for_bin = trial.suggest_int('lgbmregressor__subsample_for_bin', 2000, 500_000)
lgbmregressor__min_child_samples = trial.suggest_int('lgbmregressor__min_child_samples', 4, 500)
lgbmregressor__reg_alpha = trial.suggest_uniform('lgbmregressor__reg_alpha', 0.0, 1.0)
lgbmregressor__colsample_bytree = trial.suggest_uniform('lgbmregressor__colsample_bytree', 0.6, 1.0)
lgbmregressor__learning_rate = trial.suggest_loguniform('lgbmregressor__learning_rate', 1e-5, 1e-0)
params = {
'lgbmregressor__num_leaves': lgbmregressor__num_leaves,
'lgbmregressor__max_depth': lgbmregressor__max_depth,
'lgbmregressor__n_estimators': lgbmregressor__n_estimators,
'lgbmregressor__subsample_for_bin': lgbmregressor__subsample_for_bin,
'lgbmregressor__min_child_samples': lgbmregressor__min_child_samples,
'lgbmregressor__reg_alpha': lgbmregressor__reg_alpha,
'lgbmregressor__colsample_bytree': lgbmregressor__colsample_bytree,
'lgbmregressor__learning_rate': lgbmregressor__learning_rate
}
model.set_params(**params)
cv = KFold(n_splits=8)
return -np.mean(cross_val_score(model, x, y, cv=cv, scoring='neg_median_absolute_error'))
study = optuna.create_study()
study.optimize(objective, n_trials=200)
features = ["melee_defence",
"health",
"melee_attack",
"leadership",
"armour",
"health_per_entity",
"run_speed",
"missile_primary__damage",
"charge_bonus",
"melee__ap_ratio",
"melee__base_damage",
"mass",
"melee__damage",
"height",
"melee__melee_attack_interval",
"acceleration",
"melee__bonus_v_large",
"melee__bonus_v_infantry",
"melee__splash_attack_max_attacks",
"parry_chance",
"reload",
"hit_reactions_ignore_chance",
"melee__weapon_length",
"damage_mod_physical",
"accuracy",
"combat_reaction_radius",
"fly_speed",
"melee__splash_attack_power_multiplier"]
model = make_pipeline(
Selector(features),
LGBMRegressor(objective='mae')
)
best_params = {'lgbmregressor__num_leaves': 434,
'lgbmregressor__max_depth': 110,
'lgbmregressor__n_estimators': 256,
'lgbmregressor__subsample_for_bin': 82449,
'lgbmregressor__min_child_samples': 4,
'lgbmregressor__reg_alpha': 0.14126751650012037,
'lgbmregressor__colsample_bytree': 0.7745213812855362,
'lgbmregressor__learning_rate': 0.04242070860113767}
model.set_params(**best_params)
cv = KFold(n_splits=8)
ys = []
yhats = []
xs = []
for train_index, test_index in cv.split(x):
x_train, x_test = x.iloc[train_index, :], x.iloc[test_index, :]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
model.fit(x_train, y_train)
yhat = model.predict(x_test)
ys.append(y_test.values)
yhats.append(yhat)
xs.append(x_test.name.values)
print(np.median(np.abs((yhat - y_test.values))), np.median(np.abs((np.exp(yhat) - np.exp(y_test.values)))))
ys = np.concatenate(ys)
yhats = np.concatenate(yhats)
xs = pd.DataFrame(np.concatenate(xs), columns=['name'])
xs['yhat'] = np.exp(yhats).reshape(-1, 1)
xs['y'] = np.exp(ys).reshape(-1, 1)
xs['res'] = xs.yhat - xs.y
xs['res_perc'] = xs.yhat / xs.y - 1
xs.sort_values(by='res_perc', ascending=False, inplace=True)
sns.regplot(xs.y, xs.yhat, scatter=False)
sns.scatterplot(xs.y, xs.yhat, alpha=0.1);
np.abs(xs.res).median()
xs
```
| github_jupyter |
# Script for uploading our rProtein sequences
Uses a pregenerated csv file with the columns:
*Txid*, *Accession*, *Origin database*, *Protein name*, *Description*, and *Full sequence*
Updates tables: **Polymer_Data**, **Polymer_metadata**, and **Residues**
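For reference, a row in that CSV must match the column order the script unpacks; a minimal parsing sketch (with made-up values, not real accessions) looks like:

```python
import csv
import io

# Hypothetical CSV row in the expected column order (made-up values, illustration only)
sample = "562,WP_000059,NCBI,uL2,50S ribosomal protein L2,MAVVK\n"
row = next(csv.reader(io.StringIO(sample)))
txid, accession, database, prot_name, description, sequence = row
print(prot_name, sequence)  # uL2 MAVVK
```

The script's `main()` unpacks entries by these same indices (`entry[0]` = Txid through `entry[5]` = Full sequence).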
```
#!/usr/bin/env python3
import csv, sys, getopt, getpass, mysql.connector
def usage():
print (\
"USAGE:\n./upload_accession.py -c [csv_file_path] -h\n\
-c: defines path to csv file with txids, accessions, database, protein name, description, and sequence.\tREQUIRED\n\
-h: prints this\
")
try:
opts, args = getopt.getopt(sys.argv[1:], 'c:h', ['csv=', 'help'])
except getopt.GetoptError:
usage()
sys.exit(2)
for opt, arg in opts:
if opt in ('-h', '--help'):
usage()
sys.exit(2)
elif opt in ('-c', '--csv'):
csv_path = arg
else:
usage()
sys.exit(2)
uname = input("User name: ")
pw = getpass.getpass("Password: ")
cnx = mysql.connector.connect(user=uname, password=pw, host='130.207.36.75', database='SEREB')
cursor = cnx.cursor()
def read_csv(csv_path):
with open(csv_path, 'r') as csv_file:
reader = csv.reader(csv_file)
csv_list = list(reader)
return csv_list
def superkingdom_info(ID):
'''
Gets the superkingdom for a strain ID
'''
#print(ID)
cursor.execute("SELECT SEREB.TaxGroups.groupName FROM SEREB.Species_TaxGroup\
INNER JOIN SEREB.TaxGroups ON SEREB.Species_TaxGroup.taxgroup_id=SEREB.TaxGroups.taxgroup_id\
INNER JOIN SEREB.Species ON SEREB.Species_TaxGroup.strain_id=SEREB.Species.strain_id\
WHERE SEREB.TaxGroups.groupLevel = 'superkingdom' AND SEREB.Species.strain_id = '"+ID+"'")
results = cursor.fetchall()
#print(ID,results)
try:
superkingdom=(results[0][0])
except:
raise ValueError ("No result for species "+str(ID)+" in the MYSQL query")
return superkingdom
def check_nomo_id(occur, prot_name):
'''
Gets nom_id for new name and superkingdom
'''
cursor.execute("SELECT SEREB.Nomenclature.nom_id FROM SEREB.Nomenclature\
INNER JOIN SEREB.Old_name ON SEREB.Nomenclature.nom_id=SEREB.Old_name.nomo_id\
WHERE SEREB.Old_name.old_name = '"+prot_name+"' AND SEREB.Old_name.N_B_Y_H_A = 'BAN' AND SEREB.Nomenclature.occurrence = '"+occur+"'")
result = cursor.fetchall()
#nom_id=result[0][0]
try:
nom_id=result[0][0]
except:
raise ValueError ("No result for nom_id "+prot_name+" and occurrence "+occur+" in the MYSQL query")
return nom_id
def upload_resi(poldata_id, fullseq):
i = 1
for resi in fullseq:
query = "INSERT INTO `SEREB`.`Residues`(`PolData_id`,`resNum`,`unModResName`) VALUES('"+poldata_id+"','"+str(i)+"','"+resi+"')"
cursor.execute(query)
#print(query)
i+=1
return True
def main():
csv_list = read_csv(csv_path)
for entry in csv_list:
superK = superkingdom_info(entry[0])
nom_id = check_nomo_id(superK[0], entry[3])
query = "INSERT INTO `SEREB`.`Polymer_Data`(`GI`,`strain_ID`,`nomgd_id`, `GeneDescription`) VALUES('"+entry[1]+"','"+str(entry[0])+"','"+str(nom_id)+"','"+entry[4]+"')"
print(query)
cursor.execute(query)
lastrow_id = str(cursor.lastrowid)
query = "INSERT INTO `SEREB`.`Polymer_metadata`(`polymer_id`,`accession_type`,`polymer_type`, `Fullseq`) VALUES('"+str(lastrow_id)+"','LDW-prot','protein','"+entry[5]+"')"
cursor.execute(query)
#print(query)
upload_resi(str(lastrow_id), entry[5])
if __name__ == "__main__":
main()
#cnx.commit()
cursor.close()
cnx.close()
print("Success!")
```
| github_jupyter |
```
%matplotlib inline
import numpy as np
import pylab as pl
from psi.application import get_default_io, get_default_calibration, get_calibration_file
from psi.controller import util
from psi.controller.calibration.api import FlatCalibration, PointCalibration
from psi.controller.calibration.util import load_calibration, psd, psd_df, db, dbtopa
from psi.controller.calibration import tone
from psi.core.enaml.api import load_manifest_from_file
frequencies = [250, 500, 1000, 2000, 4000, 8000, 16000, 32000]
io_file = get_default_io()
cal_file = get_calibration_file(io_file)
print(cal_file)
io_manifest = load_manifest_from_file(io_file, 'IOManifest')
io = io_manifest()
audio_engine = io.find('NI_audio')
channels = audio_engine.get_channels(active=False)
load_calibration(cal_file, channels)
mic_channel = audio_engine.get_channel('microphone_channel')
mic_channel.gain = 40
cal_mic_channel = audio_engine.get_channel('reference_microphone_channel')
cal_mic_channel.gain = 0
speaker_channel = audio_engine.get_channel('speaker_1')
print(speaker_channel.calibration)
fixed_gain_result = tone.tone_sens(
frequencies=frequencies,
engine=audio_engine,
ao_channel_name='speaker_1',
ai_channel_names=['reference_microphone_channel', 'microphone_channel'],
gains=-30,
debug=True,
duration=0.1,
iti=0,
trim=0.01,
repetitions=2,
)
fixed_gain_result['norm_spl'].unstack('channel_name').eval('microphone_channel-reference_microphone_channel')
rms = fixed_gain_result.loc['reference_microphone_channel']['rms'].loc[1000]
figure, axes = pl.subplots(3, 2)
for ax, freq in zip(axes.ravel(), frequencies):
w = fixed_gain_result.loc['microphone_channel', freq]['waveform'].T
ax.plot(w)
ax.set_title(freq)
figure, axes = pl.subplots(3, 2)
for ax, freq in zip(axes.ravel(), frequencies):
w = fixed_gain_result.loc['reference_microphone_channel', freq]['waveform'].T
ax.plot(w)
ax.set_title(freq)
tone_sens = fixed_gain_result.loc['microphone_channel', 'sens']
calibration = PointCalibration(tone_sens.index, tone_sens.values)
gains = calibration.get_gain(frequencies, 80)
variable_gain_result = tone.tone_spl(
frequencies=frequencies,
engine=audio_engine,
ao_channel_name='speaker_1',
ai_channel_names=['microphone_channel'],
gains=gains,
debug=True,
duration=0.1,
iti=0,
trim=0.01,
repetitions=2,
)
variable_gain_result['spl']
from psiaudio.stim import ChirpFactory
factory = ChirpFactory(fs=speaker_channel.fs,
start_frequency=500,
end_frequency=50000,
duration=0.02,
level=-40,
calibration=FlatCalibration.as_attenuation())
n = factory.n_samples_remaining()
chirp_waveform = factory.next(n)
result = util.acquire(audio_engine, chirp_waveform, 'speaker_1', ['microphone_channel'], repetitions=64, trim=0)
chirp_response = result['microphone_channel'][0]
chirp_psd = psd_df(chirp_response, mic_channel.fs)
chirp_psd_mean = chirp_psd.mean(axis=0)
chirp_psd_mean_db = db(chirp_psd_mean)
signal_psd = db(psd_df(chirp_waveform, speaker_channel.fs))
freq = chirp_psd.columns.values
chirp_spl = mic_channel.calibration.get_spl(freq, chirp_psd)
chirp_spl_mean = chirp_spl.mean(axis=0)
chirp_sens = signal_psd - chirp_spl_mean - db(20e-6)
chirp_sens.loc[1000]
figure, axes = pl.subplots(1, 3, figsize=(12, 3))
chirp_response_mean = np.mean(chirp_response, axis=0)
print(chirp_response_mean.min(), chirp_response_mean.max())
axes[0].plot(chirp_response_mean)
freq = chirp_spl_mean.index.values
axes[1].semilogx(freq[1:], chirp_spl_mean[1:])
x = psd_df(chirp_response_mean, mic_channel.fs)
y = mic_channel.calibration.get_spl(x.index, x.values)
axes[1].semilogx(freq[1:], y[1:], 'r')
axes[1].axis(xmin=500, xmax=50000)
axes[2].semilogx(freq[1:], chirp_sens[1:])
axes[2].plot(tone_sens, 'ko')
axes[2].axis(xmin=500, xmax=50000)
```
| github_jupyter |
## Importing libraries (packages)
Libraries for dataframes, matrices, and plotting
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Loading data
Load data from the current folder
```
ExampleData = pd.read_csv('./ExampleData', sep=',', header=None)
ExampleData
```
Load data from a subfolder
```
path = './Subfolder/ExampleData2' # file path
ExampleData2 = pd.read_csv(path, sep=',',names=['time', 'Accelerometer', 'Voltage', 'Current'])
ExampleData2
```
## Data handling
Method 1: extract the sensor data with the time column removed
```
SensorDataOnly = ExampleData.iloc[:,1:]
SensorDataOnly
```
Method 2: extract the sensor data with the time column removed
```
Data = ExampleData.drop(0, axis=1)
Data
```
Add a column to the data (the number of rows must match)
```
Data = pd.concat([pd.DataFrame(np.arange(0, 0.2167, 1/12800)), Data], axis=1)
Data
```
Extract the data from 0.01 s to 0.02 s
```
StartPoint = np.where(ExampleData.iloc[:,0].values == 0.01)[0][0]
EndPoint = np.where(ExampleData.iloc[:,0].values == 0.02)[0][0]
StartPoint, EndPoint
NewData = ExampleData.iloc[StartPoint:EndPoint, :]
NewData
```
Save the data to a (.csv) file
```
path = './Subfolder/NewData_1'
NewData.to_csv(path , sep=',' , header=None , index=None)
```
Transpose the data
```
Data = np.transpose(Data)
Data.shape
# transpose again (restore the original shape)
Data = np.transpose(Data)
Data.shape
```
## Plotting the data
Grid, labels, title, legend, etc.
```
# ExampleData column 0 (1st column) = time
# ExampleData column 1 (2nd column) = accelerometer sensor data
plt.plot(ExampleData.iloc[:,0], ExampleData.iloc[:,1])
plt.grid() # show grid
plt.xlabel('time(s)') # x-axis label
plt.ylabel('Acceleration(g)') # y-axis label
plt.title('Spot Welding Acceleration Data') # title
plt.legend(['Acc'], loc = 'upper right', fontsize=10) # legend
#plt.xlim(0,0.02) # x-axis range
plt.ylim(-1.5,1.5) # y-axis range
plt.show()
# ExampleData column 0 (1st column) = time
# ExampleData column 2 (3rd column) = voltage sensor data
plt.plot(ExampleData.iloc[:,0], ExampleData.iloc[:,2])
plt.grid() # show grid
plt.xlabel('time(s)') # x-axis label
plt.ylabel('Voltage(V)') # y-axis label
plt.title('Spot Welding Voltage Data') # title
plt.legend(['Voltage'], loc = 'upper right', fontsize=10) # legend
plt.show()
# ExampleData column 0 (1st column) = time
# ExampleData column 3 (4th column) = current sensor data
plt.plot(ExampleData.iloc[:,0], ExampleData.iloc[:,3])
plt.grid() # show grid
plt.xlabel('time(s)') # x-axis label
plt.ylabel('Current(kA)') # y-axis label
plt.title('Spot Welding Current Data') # title
plt.legend(['Current'], loc = 'upper right', fontsize=10) # legend
plt.show()
```
Changing the appearance of a graph
```
plt.figure(figsize = (12,9))
plt.plot(ExampleData.iloc[:100,0],ExampleData.iloc[:100,1],
linestyle = '-.',
linewidth = 2.0,
color = 'b',
marker = 'o',
markersize = 8,
markeredgecolor = 'g',
markeredgewidth = 1.5,
markerfacecolor = 'r',
alpha = 0.5)
plt.grid()
plt.xlabel('time(s)')
plt.ylabel('Acceleration(g)')
plt.title('Spot Welding Acceleration Data')
plt.legend(['Acc'], loc = 'upper right', fontsize=10)
```
Overlaying multiple graphs
```
DataLength = 100 # limit to the first 100 samples (up to 100/12800 s)
plt.plot(ExampleData.iloc[:DataLength,0],ExampleData.iloc[:DataLength,1])
plt.plot(ExampleData.iloc[:DataLength,0],ExampleData.iloc[:DataLength,2])
plt.plot(ExampleData.iloc[:DataLength,0],ExampleData.iloc[:DataLength,3])
plt.xlabel('time(s)')
plt.ylabel('Acceleration(g)')
plt.title('Spot Welding Acceleration Data')
plt.legend(['Acc', 'Voltage', 'Current'], loc = 'upper right', fontsize=10)
plt.grid()
plt.show()
```
### ****** MUST READ !! Assignment notes ******
- Check your "student number" (announced on i-Campus)
- Use your "student number" in submitted assignment files (do NOT write your name, student ID, etc.)
- The specific file name for each assignment will be announced individually each time
(example for student number 123: 'ST123_HW2_1.csv')
- Points will be deducted if the file-naming format is not followed!
# [Assignment 1]
## 1. Extract only the 'sensor data' for 0.03 s ~ 0.05 s of ExampleData and save it to a file
#### >>>>>> Submit the saved data file
#### >>>>>> Data file name: ST(student number)_HW1_1 (examples: 'ST000_HW1_1' // 'ST00_HW1_1' // 'ST0_HW1_1')
#### >>>>>> All letters in the file name, such as 'ST' and 'HW', must be uppercase
## 2. For the three sensor data series from 0.03~0.05 s above, overlay the graphs as in the last cell of the practice code
#### >>>>>> Submit the code file (ipynb) that performs item 1 (extracting only the 0.03~0.05 s sensor data from ExampleData) and item 2 (graphs)
#### >>>>>> Code file name: ST(student number)_HW1_2 (examples: 'ST000_HW1_2.ipynb' // 'ST00_HW1_2.ipynb' // 'ST0_HW1_2.ipynb')
#### >>>>>> All letters in the file name, such as 'ST' and 'HW', must be uppercase
## ***** Compress the file for item 1 (data) and the file for item 2 (code) together into a zip file and submit
### >>> Zip file name: ST(student number)_HW1 (examples: 'ST000_HW1.zip' // 'ST00_HW1' // 'ST0_HW1')
| github_jupyter |
```
image_shape = (56,64,1)
train_path = "D:\\Projects\\EYE_GAME\\eye_img\\datav2\\train\\"
test_path = "D:\\Projects\\EYE_GAME\\eye_img\\datav2\\test\\"
import os
import pandas as pd
from glob import glob
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread
import seaborn as sns
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Conv2D, MaxPooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
%matplotlib inline
os.listdir(train_path)
img=imread(train_path+'left\\'+'68.jpg')
folders=glob(test_path + '/*')
traindata_gen=ImageDataGenerator(
rotation_range=10,
rescale=1/255.,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1,
fill_mode='nearest'
)
testdata_gen=ImageDataGenerator(
rescale=1./255)
traindata_gen.flow_from_directory(train_path)
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same',input_shape=image_shape, activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same',input_shape=image_shape, activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=128, kernel_size=(3,3), strides=1, padding='same',input_shape=image_shape, activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(len(folders)))
model.add(Activation('sigmoid'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss',patience=2)
batch_size = 32
traning_set=traindata_gen.flow_from_directory(train_path,
target_size =image_shape[:2],
batch_size = batch_size,
color_mode="grayscale",
class_mode = 'categorical')
testing_set=testdata_gen.flow_from_directory(test_path,
target_size = image_shape[:2],
batch_size = batch_size,
color_mode="grayscale",
class_mode = 'categorical',
shuffle=False)
testing_set.class_indices
result = model.fit(
traning_set,
epochs=8,
validation_data=testing_set,
callbacks=[early_stop]
)
losses = pd.DataFrame(model.history.history)
losses[['loss','val_loss']].plot()
losses[['loss','val_loss']].plot()
model.metrics_names
model.save('gazev3.1.h5')
```
| github_jupyter |
## Modeling the Impact of Root Distributions Parameterizations on Total Evapotranspiration in the Reynolds Mountain East catchment using pySUMMA
## 1. Introduction
One part of the Clark et al. (2015) study explored the impact of root distribution on total evapotranspiration (ET) using a SUMMA model for the Reynolds Mountain East catchment. The study examined the sensitivity of ET to three root distribution exponents (0.25, 0.5, 1.0), since the distribution of roots dictates the capability of plants to access soil water.
In this Jupyter Notebook, the pySUMMA library is used to reproduce this analysis: the model is run with each of the three root distribution exponents (0.25, 0.5, 1.0) and the resulting sensitivity of ET is described.
The Results section shows how to use pySUMMA and the Pandas library to reproduce Figure 8(left) from Clark et al. (2015).
Collectively, this Jupyter Notebook serves as an example of how hydrologic modeling can be conducted directly within a Jupyter Notebook by leveraging the pySUMMA library.
## 2. Background
### The Transpiration from soil layers available in SUMMA
```
#import libraries to display equations within the notebook
from IPython.display import display, Math, Latex
```
\begin{equation*}
(S_{et}^{soil})_j = \frac{(f_{roots})_j(\beta_{v})_j}{\beta_v} \frac{(Q_{trans}^{veg})}{L_{vap}\rho_{liq}(\Delta z)_j} + (S_{evap}^{soil})_j
\end{equation*}
The transpiration sink term $(S_{et}^{soil})_j$ is computed for a given soil layer $j$.
$Q_{trans}^{veg} (W/m^2)$ : the transpiration flux, $(\beta_{v})_j$ : the soil water stress for the j-th soil layer
$\beta_v$ : the total water availability stress factor, $(f_{roots})_j$ : the fraction of roots in the j-th soil layer
$(\Delta z)_j$ : the depth of the j-th soil layer, $L_{vap} (J/kg), \rho_{liq} (kg/m^3)$ : respectively the latent heat of vaporization and the intrinsic density of liquid water
$(S_{evap}^{soil})_j (s^{-1})$ : the ground evaporation (only defined for the upper-most soil layer)
The above equations are taken from the Stomatal Resistance Method section within the manual Structure for Unifying Multiple Modeling Alternatives (SUMMA), Version 1.0: Technical Description (April, 2015).
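As a quick numerical sketch of how the terms in this equation combine (all layer values below are made up for illustration, and $\beta_v$ is assumed here to be the root-weighted sum of the layer stress factors):

```python
import numpy as np

# Hypothetical values for a 3-layer soil column (illustration only)
f_roots = np.array([0.5, 0.3, 0.2])   # fraction of roots in each layer
beta_j  = np.array([0.9, 0.7, 0.4])   # soil water stress factor per layer
dz      = np.array([0.1, 0.3, 0.6])   # layer depth (m)
Q_trans = 100.0                       # transpiration flux (W/m^2)
L_vap   = 2.501e6                     # latent heat of vaporization (J/kg)
rho_liq = 1000.0                      # intrinsic density of liquid water (kg/m^3)
S_evap  = np.array([1e-9, 0.0, 0.0])  # ground evaporation, upper-most layer only (s^-1)

# Total water availability stress factor (assumed root-weighted sum, for illustration)
beta_v = np.sum(f_roots * beta_j)

# Transpiration sink term for each soil layer j, per the equation above
S_et = (f_roots * beta_j / beta_v) * Q_trans / (L_vap * rho_liq * dz) + S_evap
```

In SUMMA itself these quantities come from the model state at each time step; the sketch is only meant to show how the terms combine per layer.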
## 3. Methods
### 1) Study Area
#### The Reynolds Mountain East catchment is located in southwestern Idaho as shown in the figure below.
```
from ipyleaflet import Map, GeoJSON
import json
m = Map(center=[43.06745, -116.75489], zoom=15)
with open('reynolds_geojson_latlon.geojson') as f:
data = json.load(f)
g = GeoJSON(data=data)
m.add_layer(g)
m
```
### 2) Create pySUMMA Simulation Object
```
from pysumma.Simulation import Simulation
from pysumma.Plotting import Plotting
# create a pySUMMA simulation object using the SUMMA 'file manager' input file
S = Simulation('/glade/u/home/ydchoi/summaTestCases_2.x/settings/wrrPaperTestCases/figure08/summa_fileManager_riparianAspenPerturbRoots.txt')
```
### 4) Check root Distribution Exponents
```
# create a trial parameter object to check root distribution exponents
rootDistExp = Plotting(S.setting_path.filepath+S.para_trial.value)
# open netCDF file
hru_rootDistExp = rootDistExp.open_netcdf()
# check root distribution exponents at each hru
hru_rootDistExp['rootDistExp']
```
### 5) Run SUMMA for the different root Distribution Exponents with the develop version of the Docker image
```
# set the simulation start and finish times
S.decision_obj.simulStart.value = "2007-07-01 00:00"
S.decision_obj.simulFinsh.value = "2007-08-20 00:00"
# set SUMMA executable file
S.executable = "/glade/u/home/ydchoi/summa/bin/summa.exe"
# run the model giving the output the suffix "rootDistExp"
results_rootDistExp, output_path = S.execute(run_suffix="rootDistExp_local", run_option = 'local')
```
## 4. Results
### Recreate the Figure 8 plot from Clark et al., 2015: The total ET Sensitivity with different root Distribution Exponents
```
from pysumma.Plotting import Plotting
from jupyterthemes import jtplot
import matplotlib.pyplot as plt
import pandas as pd
jtplot.figsize(x=10, y=10)
```
#### 4.1) Create a function to calculate total ET by hour of day from SUMMA output for the period 1 June to 20 August 2007
```
def calc_total_et(et_output_df):
# Total Evapotranspiration = Canopy Transpiration + Canopy Evaporation + Ground Evaporation
# Change unit from kg m-2 s-1 to mm/hr (multiply by 3600)
total_et_data = (et_output_df['scalarCanopyTranspiration'] + et_output_df['scalarCanopyEvaporation'] + et_output_df['scalarGroundEvaporation'])*3600
# create dates (X-axis) attribute from output netcdf
dates = total_et_data.coords['time'].data
# create data values (Y-axis) attribute from output netcdf
data_values = total_et_data.data
# create two dimensional tabular data structure
total_et_df = pd.DataFrame(data_values, index=dates)
# round time to nearest hour (ex. 2006-10-01T00:59:59.99 -> 2006-10-01T01:00:00)
total_et_df.index = total_et_df.index.round("H")
# set the time period to display plot
total_et_df = total_et_df.loc["2007-06-01":"2007-08-20"]
# resample data by the average value hourly
total_et_df_hourly = total_et_df.resample("H").mean()
# resample data by the average for hour of day
total_et_by_hour = total_et_df_hourly.groupby(total_et_df_hourly.index.hour).mean()
return total_et_by_hour
```
#### 4.2) Get hour of day output of the Parameterization of Root Distributions for the period 1 June to 20 August 2007
```
rootDistExp_hour = calc_total_et(results_rootDistExp)
# create each hru(1~3) object
rootDistExp_hru_1 = rootDistExp_hour[0]
rootDistExp_hru_2 = rootDistExp_hour[1]
rootDistExp_hru_3 = rootDistExp_hour[2]
```
#### 4.3) Combine the Parameterization of Root Distributions into a single Pandas Dataframe
```
# Combine ET for each hru(1~3)
ET_Combine = pd.concat([rootDistExp_hru_1, rootDistExp_hru_2, rootDistExp_hru_3], axis=1)
# add label
ET_Combine.columns = ['hru_1(Root Exp = 1.0)', 'hru_2(Root Exp = 0.5)', 'hru_3(Root Exp = 0.25)']
ET_Combine
```
#### 4.4) Add observation data from the Aspen station in Reynolds Mountain East to the plot
```
# create pySUMMA Plotting Object
Val_eddyFlux = Plotting('/glade/u/home/ydchoi/summaTestCases_2.x/testCases_data/validationData/ReynoldsCreek_eddyFlux.nc')
# read Total Evapotranspiration(LE-wpl) from validation netcdf file
Obs_Evapotranspitaton = Val_eddyFlux.ds['LE-wpl']
# create dates(X-axis) attribute from validation netcdf file
dates = Obs_Evapotranspitaton.coords['time'].data
# Change unit from Wm-2 to mm/hr (1 Wm-2 = 0.0864 MJm-2day-1, 1 MJm-2day-1 = 0.408 mmday-1, 1day = 24h)
data_values = Obs_Evapotranspitaton.data*0.0864*0.408/24
# create two dimensional tabular data structure
df = pd.DataFrame(data_values, index=dates)
# set the time period to display plot
df_filt = df.loc["2007-06-01":"2007-08-20"]
# select the aspen observation station among the three different stations
df_filt.columns = ['-','Observation (aspen)','-']
# resample data by the average for hour of day
df_gp_hr = df_filt.groupby([df_filt.index.hour, df_filt.index.minute]).mean()
# reset index so each row has an hour an minute column
df_gp_hr.reset_index(inplace=True)
# add hour and minute columns for plotting
xvals = df_gp_hr.reset_index()['level_0'] + df_gp_hr.reset_index()['level_1']/60.
```
#### 4.5) Plotting output of the Parameterization of Root Distributions and observation data
```
# create plot with the Parameterization of Root Distributions(Root Exp : 1.0, 0.5, 0.25 )
ET_Combine_Graph = ET_Combine.plot()
# invert y axis
ET_Combine_Graph.invert_yaxis()
# plot scatter with x='xvals', y='Observation (aspen)'
ET_Combine_Graph.scatter(xvals, df_gp_hr['Observation (aspen)'])
# add x, y label
ET_Combine_Graph.set(xlabel='Time of day (hr)', ylabel='Total evapotranspiration (mm h-1) ')
# show up the legend
ET_Combine_Graph.legend()
```
## 5. Discussion
As stated in Clark et al., 2015, the following insights can be gained from this analysis:
* The simulation in Figure 8 illustrates the sensitivity of evapotranspiration to the distribution of roots, which dictates the capability of plants to access water
* The results in Figure 8 demonstrate strong sensitivity to the rooting profile. Lower root distribution exponents place more roots near the surface, which makes it more difficult for plants to extract soil water lower in the soil profile and decreases transpiration
| github_jupyter |
### Trade Demo
#### Goal:
- Load the trade data for the country `Canada`
- Launch a domain node for canada
- Login into the domain node
- Format the `Canada` trade dataset and convert to Numpy array
- Convert the dataset to a private tensor
- Upload `Canada's` trade on the domain node
- Create a Data Scientist User
```
%load_ext autoreload
%autoreload 2
import pandas as pd
canada = pd.read_csv("../../trade_demo/datasets/ca - feb 2021.csv")
```
### Step 1: Load the dataset
We have trade data for the country, reported for February 2021. The key columns are:
- Commodity Code: the official code of that type of good
- Reporter: the country claiming the import/export value
- Partner: the country being claimed about
- Trade Flow: the direction of the goods being reported about (imports, exports, etc)
- Trade Value (US$): the declared USD value of the good
Let's have a quick look at the top five rows of the dataset.
```
canada.head()
```
### Step 2: Spin up the Domain Node (if you haven't already)
SKIP THIS STEP IF YOU'VE ALREADY RUN IT!!!
As the main requirement of this demo is to perform analysis on the Canada's trade dataset. So, we need to spin up a domain node for Canada.
Assuming you have [Docker](https://www.docker.com/) installed and configured with >=8GB of RAM, navigate to PySyft/packages/hagrid and run the following commands in separate terminals (can be done at the same time):
```bash
# install hagrid cli tool
pip install -e .
```
```bash
hagrid launch Canada domain
```
<div class="alert alert-block alert-info">
<b>Quick Tip:</b> Don't run this now, but later when you want to stop these nodes, you can simply run the same argument with the "stop" command. So from the PySyft/grid directory you would run. Note that these commands will delete the database by default. Add the flag "--keep_db=True" to keep the database around. Also note that simply killing the thread created by ./start is often insufficient to actually stop all nodes. Run the ./stop script instead. To stop the nodes listed above (and delete their databases) run:
```bash
hagrid land Canada
```
</div>
### Step 3: Login into the Domain as the Admin User
```
import syft as sy
# Let's login into the domain node
domain_node = sy.login(email="info@openmined.org", password="changethis", port=8081)
canada.head()
canada[canada["Partner"] == "Egypt"]
# For, simplicity we will upload the first 10000 rows of the dataset.
canada = canada[:10000]
```
### Step 4: Format dataset and convert to numpy array
```
# In order to the convert the whole dataset into an numpy array,
# We need to format string to integer values.
# Let's create a function that converts string to int.
import hashlib
from math import isnan, nan
hash_db = {}
hash_db[nan] = nan
def convert_string(s: str, digits: int = 15):
"""Map a string to an int hash using SHA-256, truncated to `digits` digits; return non-string values unchanged"""
if type(s) is str:
new_hash = int(hashlib.sha256(s.encode("utf-8")).hexdigest(), 16) % 10 ** digits
hash_db[s] = new_hash
return new_hash
else:
return s
# Let's find the string/object type columns
string_cols = []
for col, dtype in canada.dtypes.items():
if dtype in ['object', 'str']:
string_cols.append(col)
# Convert string values to integer
for col in canada.columns:
canada[col] = canada[col].map(lambda x: convert_string(x))
# Let's checkout the formatted dataset
canada.head()
# Great !!! now let's convert the whole dataset to numpy array.
np_dataset = canada.values
# Type cast to float values to prevent overflow
np_dataset = np_dataset.astype(float)
```
### Step 5: Converting the dataset to private tensors
```
from syft.core.adp.entity import DataSubject
# The 'Partner' column i.e the countries to which the data is exported
# is private, therefore let's create entities for each of the partner defined
entities = [DataSubject(name=partner) for partner in canada["Partner"]]
# Let's convert the whole dataset to a private tensor
private_dataset_tensor = sy.Tensor(np_dataset).private(0.01, 1e15, entity=DataSubject(name="Canada")).tag("private_canada_trade_dataset")
private_dataset_tensor[:, 0]
```
### Step 6: Upload Canada's trade data on the domain
```
# Awesome, now let's upload the dataset to the domain.
# For, simplicity we will upload the first 10000 rows of the dataset.
domain_node.load_dataset(
assets={"feb2020": private_dataset_tensor},
name="Canada Trade Data - First 10000 rows",
description="""A collection of reports from Canada's statistics
bureau about how much it thinks it imports and exports from other countries.""",
)
private_dataset_tensor.send(domain_node)
```
Cool !!! The dataset was successfully uploaded onto the domain.
```
# Now, let's check datasets available on the domain.
domain_node.store.pandas
```
### Step 7: Create a Data Scientist User
Open http://localhost:8081, login is the root user (username: info@openmined.org, password:changethis), and create a user with the following attributes:
- Name: Sheldon Cooper
- Email: sheldon@caltech.edu
- Password: bazinga
```
# Alternatively, you can create the same from the notebook itself.
domain_node.users.create(
**{
"name": "Sheldon Cooper",
"email": "sheldon@caltech.edu",
"password": "bazinga",
},
)
domain_node.users.create(
**{
"name": "Leonard Hodfstadder",
"email": "leonard@caltech.edu",
"password": "penny",
},
)
```
Great !!! We were successfully able to create a new user.
Now, let's move to the Data Scientist notebook, to check out their experience.
### Step 8: Decline request to download entire dataset
```
# Let's check if there are any requests pending for approval.
domain_node.requests.pandas
domain_node.requests[-1].accept()
# Looks like the DS wants to download the whole dataset. We cannot allow that.
# Let's select and deny this request.
domain_node.requests[0].deny()
```
### STOP: Return to Data Scientist - Canada.ipynb - STEP 3!!
| github_jupyter |
```
import boto3
import sagemaker
import pandas as pd
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker import RandomCutForest
bucket = 'anomaly-detection-team-vypin' # <--- specify a bucket you have access to
prefix = 'vishal/sagemaker/rcf-benchmarks'
s3_training_data_location='s3://anomaly-detection-team-vypin/vishal/train.csv'
cleaned_data=pd.read_csv(s3_training_data_location)
cleaned_data.shape
session=sagemaker.Session()
execution_role = get_execution_role()
train_data = sagemaker.s3_input(
s3_data=s3_training_data_location,
content_type='text/csv;label_size=0',
distribution='ShardedByS3Key')
# specify general training job information
rcf = RandomCutForest(role=execution_role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
base_job_name='vishal',
data_location='s3://{}/{}/'.format(bucket, prefix),
output_path='s3://{}/{}/output'.format(bucket, prefix),
num_samples_per_tree=512,
num_trees=50)
rcf.fit(rcf.record_set(cleaned_data.to_numpy()))  # DataFrame.as_matrix() was removed in pandas 1.0
#rcf.fit({'train': train_data})
rcf_inference = rcf.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
)
print('Endpoint name: {}'.format(rcf_inference.endpoint))
from sagemaker.predictor import csv_serializer, json_deserializer
rcf_inference.content_type = 'text/csv'
rcf_inference.serializer = csv_serializer
rcf_inference.accept = 'application/json'
rcf_inference.deserializer = json_deserializer
cleaned_data_numpy = cleaned_data.to_numpy()
print(cleaned_data_numpy[:6])
results = rcf_inference.predict(cleaned_data_numpy[:6])
s3_validation_data_location='s3://anomaly-detection-team-vypin/vishal/validation.csv'
s3_validation_results_location='s3://anomaly-detection-team-vypin/vishal/batchtransofrm_results'
# Initialize the transformer object
transformer = rcf.transformer(
instance_type='ml.c4.xlarge',
instance_count=1,
strategy='MultiRecord',
assemble_with='Line',
output_path=s3_validation_results_location
)
# Start a transform job
transformer.transform(s3_validation_data_location, content_type='text/csv', split_type='Line')
# Then wait until the transform job has completed
transformer.wait()
import json
s3 = boto3.resource('s3')
s3.Bucket('anomaly-detection-team-vypin').download_file('vishal/batchtransofrm_results/validation.csv.out', 'validation.csv.out')
#results=pd.read_csv('s3://anomaly-detection-team-vypin/vishal/batchtransofrm_results/validation.csv.out')
with open("validation.csv.out", "r") as fo:
    results = fo.readlines()
scores = [json.loads(datum)['score'] for datum in results]
validation_data=pd.read_csv('s3://anomaly-detection-team-vypin/vishal/validation_full_data.csv')
#validation_data.head()
validation_data.shape
validation_data['score']=scores
validation_data.head()
score_vs_fraud=validation_data[['isFraud','score']]
score_vs_fraud.head()
import matplotlib.pyplot as plt
score_vs_fraud.plot(kind='scatter',x='isFraud',y='score')
```
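The batch-transform output read back above is newline-delimited JSON with one `score` field per record. The extraction step can be exercised on its own with made-up lines in the same shape:

```python
import json

# made-up lines mimicking the RCF batch-transform output format
lines = ['{"score": 1.23}', '{"score": 0.45}', '{"score": 2.01}']
scores = [json.loads(line)["score"] for line in lines]
```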
| github_jupyter |
```
import math
import random
from array import *
from math import gcd as bltin_gcd
from fractions import Fraction
import matplotlib.pyplot as plt
import numpy as np
#import RationalMatrices as Qmtx
##########################################
###### Methods for Qmatrix #####
# --------------------------------------------
def printQmtx(M):
print('(' , M[0,0],' ', M[0,1], ')')
print('(' , M[1,0],' ', M[1,1], ')')
return
# --------------------------------------------
def det(M):
return M[0,0]*M[1,1]-M[1,0]*M[0,1]
# --------------------------------------------
def tr(M):
return M[0,0]+M[1,1]
# --------------------------------------------
def multiply(N,M):
# Returns the product NM, each entry in N, M and NM is Fraction
a1=M[0,0].numerator
b1=M[0,0].denominator
a2=M[0,1].numerator
b2=M[0,1].denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
c1=N[0,0].numerator
d1=N[0,0].denominator
c2=N[0,1].numerator
d2=N[0,1].denominator
c3=N[1,0].numerator
d3=N[1,0].denominator
c4=N[1,1].numerator
d4=N[1,1].denominator
R00 = Fraction(a1*d2*c1*b3 + a3*b1*c2*d1 , d1*d2*b1*b3)
R01 = Fraction(a2*c1*d2*b4 + c2*a4*b2*d1 , b2*d1*d2*b4)
R10 = Fraction(a1*b3*c3*d4 + a3*b1*c4*d3 , d3*d4*b1*b3)
R11 = Fraction(a2*c3*b4*d4 + c4*a4*b2*d3 , b2*b4*d3*d4)
return np.matrix( [ (R00,R01) , (R10, R11) ] )
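# --------------------------------------------
###### Sanity-check sketch for multiply #####
# (standalone, illustrative values) The hand-expanded formula above should
# agree with the product computed entry by entry with plain Fractions.
from fractions import Fraction
Mchk = [[Fraction(1,2), Fraction(3)], [Fraction(0), Fraction(1)]]
Nchk = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1,3)]]
NM_direct = [[sum(Nchk[i][k]*Mchk[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]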
# --------------------------------------------
def mult(k,M):
a1=M[0,0].numerator
b1=M[0,0].denominator
a2=M[0,1].numerator
b2=M[0,1].denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
return np.matrix( [ (Fraction(k*a1,b1),Fraction(k*a2,b2)) , ( Fraction(k*a3,b3), Fraction(k*a4,b4))] )
# --------------------------------------------
def inverse(M):
a1=M[0,0].numerator
b1=M[0,0].denominator
a2=M[0,1].numerator
b2=M[0,1].denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
dnum = a1*a4*b2*b3-a2*a3*b1*b4 # Numerator and denominator of determinant
ddem = b1*b2*b3*b4
    # N is a scalar multiple of M^(-1); the extra factor is harmless since N is
    # only ever used as a Mobius transformation
    N = np.matrix([(Fraction(dnum*a4,ddem*b4) ,Fraction(-dnum*a2,ddem*b2)),(Fraction(-dnum*a3,ddem*b3) ,Fraction(dnum*a1,ddem*b1))])
    return N
def mob_transf(M, a):
    # Mobius transformation associated to matrix M, where
# M has all type Fraction entries (rational)
# a must be Fraction or string INF
# a is assumed to be rational on x-axis (imaginary coord =0)
# returns a Fraction or string INF if it sends a to oo
a1=M[0,0].numerator
b1=M[0,0].denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
if( a == "INF"):
if (a3 == 0):
return "INF"
else:
return Fraction(a1*b3, a3*b1)
x=a.numerator
y=a.denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
if (a3*b4*x + a4*b3*y) ==0:
return "INF"
a2=M[0,1].numerator
b2=M[0,1].denominator
# print('type of matrix entry', type (M[0,0]))
p=(b3*b4*y)*(a1*b2*x + a2*b1*y)
q=(b1*b2*y)*(a3*b4*x + a4*b3*y)
# print('p=',p)
# print('q=',q)
# return Decimal(p/q)
return Fraction(p,q)
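# --------------------------------------------
# Standalone sketch of the Mobius action computed by mob_transf:
# a matrix [[a,b],[c,d]] acts on a rational x as (a*x + b)/(c*x + d),
# sending x to infinity exactly when c*x + d == 0. (Illustrative values only.)
from fractions import Fraction
def mob_sketch(a, b, c, d, x):
    den = c*x + d
    return "INF" if den == 0 else (a*x + b) / den
# example: z -> -1/z sends 1/2 to -2 and sends 0 to infinity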
# --------------------------------------------
def sends2inf(M, a):
# the type of both M and x is Fraction
# x is assumed to be rational on (imaginary coord =0)
# returns a Fraction
x=a.numerator
y=a.denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
if (a3*b4*x + a4*b3*y) ==0:
return True
else:
return False
# --------------------------------------------
def toinfelement(M):
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
return Fraction(-a4*b3,b4*a3)
class jigsawset:
# the jigsaw will be formed with triangles from the jigsaw set
# it only admits at most two different types of tiles.
def __init__(self, tiles=[1,2], sign=[1,0]):
#Default jigsaw is generated by Delta(1,1,1) in canonical position
jigsawset.tiles=tiles # types of the tiles
jigsawset.sign=sign # number of tiles of each type
#Length of fundamental interval of group generated by self
        jigsawset.L = self.sign[0]*(2+self.tiles[0]) + self.sign[1]*(2+self.tiles[1])
    def size(self): # number of triangles in the jigsawset (with multiplicity)
return self.sign[0]+self.sign[1]
def print(self):
print('************** JIGSAW.print ******************')
print("tile types:", self.tiles)
print("signature:", self.sign)
print('************** ************ ******************')
print('')
################################################################################
def Jigsaw_vertices(jigsawset):
# Returns the x-coords of the vertices of the jigsaw formed by tiles glued from jigsawset
# assuming they all have inf as a common vertex and a Delta(1) is in standard position
# Coords go from negative to positive.
# vertex at infinity is not included in the list
# Type of vertices is integer
vertices=[-1,0] #Jigsaw always has the tile [infty, -1,0]
i=1
while i<jigsawset.sign[0]: #First glue all 1-tiles to the left (negative vertices)
vertices.insert(0,-(i+1))
i+=1
j=0
while j<jigsawset.sign[1]: #Then glue all n-tiles to the right (positive vertices)
if (j%3 != 1):
vertices.append(vertices[i]+1)
if(j%3 ==1):
vertices.append(vertices[i]+jigsawset.tiles[1])
i+=1
j+=1
return vertices
# ################################################################################
###### TEST for Jigsaw_vertices #####
# JS= jigsawset([1,2],[4,6])
# JS.print()
# Jigsaw_vertices(JS)
################################################################################
################################################################################
# ......... rotation_info(n, v2,v3) .........
# Say (m,y) is the (unknown) point of rotation of the geodesic [v2,v3] with side type k,
# the function returns (m,y^2), m and y^2 are Fraction type
# This is done to have only integer values, since y is often a square root
# Uses proposition 4.3 to consider different cases according to side_type.
# v2,v3 = vertices of ideal side, assumed to come from tiles of the type [infty, v2, v3]
# n = type of the side [v2,v3]
# isfrac = whether the distance of the rotation point from midpoint is n or 1/n,
# both are considered type n, so we need to specify if it is a fraction
def rotation_info(n,isfrac,v2,v3): #renamed from info_for_rotation
l= v3-v2
if(n==1):
#side_type==1 => edge [v2, v3] is a semi-circle of diameter n (n=type of triangle)
return ( Fraction(2*v2+l,2) , Fraction(l*l,4) )
if(isfrac == False): #Input n represents the integer n>1
#side_type>1 => edge [v2, v3] is a semi-circle of diameter 1
return ( Fraction(n+(1+n)*v2,1+n), Fraction(n,(1+n)*(1+n)) )
if(isfrac == True): #Input n represents the fraction 1/n
#side_type>1 => edge [v2, v3] is a semi-circle of diameter 1
return ( Fraction(1+(1+n)*v2,1+n), Fraction(n,(1+n)*(1+n)) )
################################################################################
################################################################################
def rotation_points(jigsawset):
#Given jigsaw set returns a list in CCW order with the rotation points on exterior sides
# ** The y-coord of every point is the square of the actual coordinate **
# renamed from info_rotation_points
V = Jigsaw_vertices(jigsawset)
#By construction first midpoint is on vertical side of a type 1 tile.
points=[(Fraction(V[0]),Fraction(1))]
i=0
# Add all midpoints of type 1 tiles
N=jigsawset.sign[0]
while i<N:
points.append(rotation_info(1,0,V[i],V[i+1]))
i+=1
# Now add all midpoints of type n tiles
# taking gluing rotation into account (prop 4.3)
N+=jigsawset.sign[1]
j=1
while i<N:
if(j%3==1):
type=jigsawset.tiles[1]
isfrac= True
if(j%3==2):
type=1
isfrac = False
if(j%3==0):
type=jigsawset.tiles[1]
isfrac = False
points.append(rotation_info(type,isfrac,V[i],V[i+1]))
i+=1
j+=1
# Last midpoint is from vertical side
if(jigsawset.sign[1]==0):
points.append((Fraction(V[i]),Fraction(1)))
else: # The right vertical side of the last tile you glued (prop 4.3 - 5)
j= j-1
if(j%3==0):
points.append( (Fraction(V[i]) , Fraction(1)) )
else:
points.append( (Fraction(V[i]) , Fraction(jigsawset.tiles[1],1)) )
return points
# ################################################################################
#JS= jigsawset([1,2],[1,0])
#JS.print()
#R = rotation_points(JS)
#print('********* ROTATION POINTS WITH INFO *********')
#print('recall y-coord is the square of the actual coordinate')
#for i in range (0,JS.size()+2):
# print('(' , R[i][0],',', R[i][1], ')')
#################################################################################
def pi_rotation_special(x,y2):
# Given the point (x,y^2) returns the matrix representing pi rotation about (x,y)
# This can be calculated to be:
# ( -x y^2+x^2 )
# ( -1 x )
# Matrix is in GL(2,Q) (det may not be 1). that's ok since we only use it as a mob transf
# Coordinates of matrix are type Fraction, so assumes inputs are integers
Rotation = np.matrix( [ (Fraction(-x),Fraction(y2+x*x)) , (Fraction(-1), Fraction(x))] )
return Rotation
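#################################################################################
###### Sanity-check sketch for pi_rotation_special #####
# (standalone, hypothetical rotation point) The matrix above squares to
# -y^2 times the identity, so projectively it is an order-2 (pi) rotation.
from fractions import Fraction
xs, y2s = Fraction(-1,2), Fraction(1,4)
Ms = [[-xs, y2s + xs*xs], [Fraction(-1), xs]]
Msq = [[sum(Ms[i][k]*Ms[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]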
#################################################################################
###### TEST FOR pi_rotation_special #####
# print('---------------')
# Q= pi_rotation_special(Fraction(-1,2),Fraction(1,4)) # <--- recall y is squared for this function
# printQmtx(Q)
################################################################################
def jigsaw_generators(jigsawset):
# Returns a list TAU with the matrices that represent rotations around
# midpoints of exterior sides of jigsaw formed by jigsaw set
# coordinates of the matrices are type Fraction
# renamed from jigsaw_generators_Q
P = rotation_points(jigsawset)
N = jigsawset.size()
TAU=[]
for i in range (0, N+2):
TAU.append(pi_rotation_special(P[i][0],P[i][1]))
return TAU
################################################################################
###### TEST for jigsaw_generators #####
JS= jigsawset([1,2],[1,1])
JS.print()
Q = rotation_points(JS)
print('')
print('********* ROTATION POINTS WITH INFO *********')
print('recall y-coord is the square of the actual coordinate')
for i in range (0,JS.size()+2):
print('(' , Q[i][0],',', Q[i][1], ')')
print('')
TAU = jigsaw_generators(JS)
print('')
print('**************************************************')
for i in range (0,JS.size() +2):
print('rotation about', '(' , Q[i][0],', sqrt(', Q[i][1], ') ) ' , 'is')
print('T',i, '=')
printQmtx(TAU[i])
print('***')
def calculateToInf(vertex,generator):
# Returns an array transf where
    #   transf[i][0] = matrix that sends vertex[i] to infinity
    #   transf[i][1] = that matrix as a word in the generators
# Not optimized
transf = []
n = len(generator)
for i in range (0,n-1): # There is one less vertex than generators
word = 0
M = np.matrix([(Fraction(1), Fraction(0)), (Fraction(0),Fraction(1))])
for j in range (i+1,n):
M = multiply(generator[j],M)
word = (j+1)*10**(j-i-1) + word # generators indices start from 1 when printed
transf.append((M,word))
#print(word)
if(sends2inf(M,vertex[i]) == False):
print('error', i)
return transf
################################################################################
class JigsawGroup (object):
def __init__(self, tiles=[1,2], sign=[1,0]):
#Default group is the one generated by canonical Delta(1,1,1)
JigsawGroup.tiles = tiles
JigsawGroup.sign = sign
JigsawGroup.Jset = jigsawset(tiles,sign)
# Following attributes are shared with WeirGroup class
JigsawGroup.rank = sign[0]+sign[1]+2 #Generators = number of exterior sides of jigsaw
JigsawGroup.vertices = Jigsaw_vertices(self.Jset)
JigsawGroup.pts_Y2 = rotation_points(self.Jset)
JigsawGroup.generators = jigsaw_generators(self.Jset)
        # For each vertex, the transformation (and its word) sending that vertex to infinity
        JigsawGroup.RotationsToInf = calculateToInf(self.vertices, self.generators)
#Length of fundamental interval of group, calculated using Lou, Tan & Vo 4.5
JigsawGroup.L = self.Jset.sign[0]*(2+self.Jset.tiles[0]) +self.Jset.sign[1]*(2+self.Jset.tiles[1])
def print(self):
print(' ****** print Jigsaw Group ******')
print(' ')
print('This group comes from a jigsaw with ', self.sign[0],' tiles of type', self.tiles[0])
print(' and', self.sign[1],'tiles of type', self.tiles[1],'.')
print('Length of fund interval = ', self.L)
print("Number of generators:",self.rank, '. These are:')
print(' ')
for i in range (0,self.rank):
print('rotation about', '(' , self.pts_Y2[i][0],', sqrt(', self.pts_Y2[i][1], ') ) ' , 'is')
print('T',i+1, '=')
printQmtx(self.generators[i])
#det=detQmatrix(self.generators[i])
#print('det(T',i,')=', det)
print('')
print('The jigsaw has vertices (apart from oo):')
for i in range(0, len(self.vertices)):
print('(',self.vertices[i], ',0)')
return
def printNOGENS(self):
print(' ****** print Jigsaw Group ******')
print(' ')
print('This group comes from a jigsaw with ', self.sign[0],' tiles of type', self.tiles[0])
print(' and', self.sign[1],'tiles of type', self.tiles[1],'.')
print('Length of fund interval = ', self.L)
print("Number of generators:",self.rank)
print(' ')
print('The jigsaw has vertices (apart from oo):')
print(self.vertices)
return
def printSet(self):
self.Jset.print()
################################################################################
##### CHECK Jigsawgroup class ####
JG= JigsawGroup([1,2],[1,1])
JG.print()
print('***********')
print('The words of the transformations that send vertices to oo')
for i in range (0,JG.rank-1):
print(JG.RotationsToInf[i][1])
# SPECIAL EXAMPLE: WEIERSTRASS GROUPS
################################################################################
def generatorsWeirGroup(k1,k2,k3):
# Calculates generators according to equation (2) in paper
# Determinant may not be 1, and type of entries is Fraction
T1 = np.matrix( [(k1, 1+k1),(-k1,-k1)])
T2 = np.matrix( [(1, 1),(-k2-1,-1)])
T3 = np.matrix( [(0, k3),(-1,0)])
return [T1, T2, T3]
def info_rotation_points_Wgrp(k1,k2,k3):
# Given k1,k2,k3 Fractions, returns a list in CCW order with the rotation points on exterior sides
# y-coordinate is squared to avoid floating point error
# Calculations come from equation (1) in paper
x1 = (Fraction(-1,1),Fraction(k1.denominator,k1.numerator))
a2 = k2.numerator
b2 = k2.denominator
x2 = (Fraction(-b2,a2+b2), Fraction(b2*a2 , b2*b2+2*a2*b2+a2*a2))
x3 = (Fraction(0,1),Fraction(k3.numerator,k3.denominator))
return [x1, x2, x3]
################################################################################
class WeirGroup (object):
def __init__(self, k2=Fraction(1,1), k3=Fraction(1,1)):
#Default group is the one generated by canonical Delta(1,1,1)
# k1, k2, k3 are fractions
WeirGroup.k2 = k2
WeirGroup.k3 = k3
WeirGroup.k1 = Fraction(k2.denominator*k3.denominator,k2.numerator*k3.numerator)
# The following attributes are shared with JigsawGroup class
WeirGroup.rank = 3 #Generators = number of exterior sides of jigsaw
WeirGroup.vertices = [-1,0] #Vertices of triangle are -1,0, inf
WeirGroup.pts_Y2 = info_rotation_points_Wgrp(self.k1, self.k2, self.k3)
WeirGroup.generators = generatorsWeirGroup(self.k1,self.k2,self.k3)
WeirGroup.RotationsToInf = calculateToInf(self.vertices, self.generators)
WeirGroup.L = np.absolute(1 + k3+ k2*k3) #Length of fundamental interval of group
# def L(self):
# return self.Length
def print(self):
        print(' ****** print Weierstrass Group ******')
print( ' ')
        print('This is the Weierstrass group with parameters k1=',self.k1,', k2=', self.k2,', k3=', self.k3,'.')
print('Length of fund interval = ', self.L)
print('Its generators are:')
print(' ')
for i in range (0,3):
print('rotation about', '(' , self.pts_Y2[i][0],', sqrt(', self.pts_Y2[i][1], ') ) ' , 'is')
print('T',i+1, '=')
printQmtx(self.generators[i])
print('')
################################################################################
##### Test for WeirGroup class ######
W = WeirGroup(Fraction(1,3),Fraction(3,1))
W.print()
################################################################################
def locateQ(x,y,vertices):
# returns k, where 0<=k<=N is the index of the vertex that forms the right endpoint
# of interval where x/y is. N = len(vertices)
# k=0 => x/y in [oo,v0]
# 1<=k <= N-1 => x/y in [vk-1,vk]
# k = N => x/y in [vN-1, oo]
N = len(vertices)
X = np.full(N,Fraction(x,y))
# print('vertices=',vertices)
comparison = X<=vertices
lower = np.full(N,1)
upper = np.full(N,0)
if(comparison == lower).all(): # x is in (inf, v0)
#print(x,'/',y,'is in [ oo,', vertices[0],']')
return 0
if(comparison == upper).all(): # x is in (vN, inf)
#print(x,'/',y,'is in [', vertices[N-1],',oo]')
return N
k=0
while(comparison[k] == comparison[k+1]):
k+=1
#print(x,'/',y, 'is in [', vertices[k],',',vertices[k+1],']')
return k+1
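################################################################################
# Standalone sketch: for a point strictly between two vertices, the index
# convention of locateQ matches bisect_left from the standard library.
# (Equivalence at the vertices themselves is not claimed here.)
from bisect import bisect_left
from fractions import Fraction
k_sketch = bisect_left([-1, 0, 1], Fraction(2, 33))  # same k as locateQ(2,33,[-1,0,1])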
################################################################################
print('###### locateQ TEST #######')
k = locateQ(2,33,[-1,0,1])
print('k=',k)
print('##########################')
################################################################################
################################################################################
def is_cusp(Group, a, maxL, currentL):
# Checks if a (Fraction type) is a cusp of Jigsawgroup in at most maxL steps
    # To do that it follows the path to a and sees if applying successive transformations
# takes a to a vertex of the fundamental domain of JigsawGroup
# Assumes a !=infinity
if currentL>maxL:
return False
currentL +=1 #You will give one more tile a chance
x = a.numerator
y = a.denominator
k=locateQ(x,y, Group.vertices)
if(k!= Group.rank-1): #k=rank => x is bigger than all vertices
# rank of group goes from 1 to N, vertices go from 0 to N-2
if(a == Group.vertices[k]): #If you found that x is vertex of the jigsaw
return True
M = Group.generators[k] # generator corresponding to interval x is on
if(sends2inf(M,a)==True):
return True
# If it was not a cusp in this step, rotate by M and try again
a = mob_transf(M,a)
#print('x transform=', x)
#print('*************')
return is_cusp(Group,a,maxL,currentL)
################################################################################
JG= JigsawGroup([1,2],[1,1])
W = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
#JG.print()
#def is_cusp_word(JigsawGroup, x, maxL, currentL):
print(is_cusp(JG,Fraction(3,1), 2,0)) # Group, rational, iterations, 0
print(is_cusp(W,Fraction(2,1), 2,0)) # Group, rational, iterations, 0)
################################################################################
def check_cusps(Maxden, Group):
    # Checks if rationals up to denominator Maxden in the fundamental interval of Group
    # are cusps of the Group, approximating with at most maxL=100 rotations each
L = Group.L # Length of fundamental interval of JG
maxL = 100 # Maximum number of iterations done to find the cusp
q=1
while(q<=Maxden):
p=0
while(p/q <= L ):
if(bltin_gcd(p, q)==1):
siesonoes = is_cusp(Group, Fraction(p,q), maxL,0)
if(siesonoes == False):
print('****** CHECK_CUSPS RESULTS ******')
print('Bad news...')
print('I found', p,'/', q, 'is not a cusp when doing ',maxL,'rotations towards it.')
return False
p+=1
q+=1
print('****** CHECK_CUSPS RESULTS ******')
print('Good news!')
print('All rationals with denominator at most', Maxden, 'which are less than fund length=', L, 'are cusps!')
print(' ')
return True
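################################################################################
# Standalone sketch of the enumeration pattern used by check_cusps: all reduced
# fractions p/q with q <= maxden lying in [0, L] (small illustrative bounds).
from fractions import Fraction
from math import gcd
L_sk, maxden_sk = 2, 3
rationals_sk = sorted({Fraction(p, q)
                       for q in range(1, maxden_sk + 1)
                       for p in range(0, L_sk * q + 1)
                       if gcd(p, q) == 1})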
################################################################################
JG= JigsawGroup([1,2],[1,1])
W = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
#JG.print()
esonoes = False
#def check_cusps(Maxden, maxL, JigsawGroup)
check_cusps(50,JG)
check_cusps(10,W)
################################################################################
def IS_CUSP_WORD (Group, a, maxL, currentL, word, M):
# Checks if a (Fraction type) is a cusp of Jigsawgroup in at most maxL steps
# RECURSIVE (this is inner function of is_cusp_word)
# - currentL, word, M are parameters for the recursion
# Assumes a !=infinity
# Returns tuple (True/False, word, a, M),
# where - T/F indicates if a is a cusp,
# - M = matrix in group s.t. M(a) = infty
# - word is M as word in terms of generators of Group
# - a = last number in the iteration
if currentL>maxL:
return (False, word, a, M)
currentL +=1 #You will give one more tile a chance
x = a.numerator
y = a.denominator
k=locateQ(x,y, Group.vertices)
if(k!= Group.rank-1): #k=rank => x is bigger than all vertices
# rank of group goes from 1 to N, vertices go from 0 to N-2
if(a == Group.vertices[k]): #If you found that x is vertex of the jigsaw
word = int(str(Group.RotationsToInf[k][1]) + str(word))
            M = multiply(Group.RotationsToInf[k][0],M) # Multiply by the appropriate matrix to send a to oo
return (True,word, a, M)
N = Group.generators[k] # generator corresponding to interval x is on
word = (10**currentL)*(k+1)+word # Update word and transformation
M = multiply(N,M)
if(sends2inf(N,a)==True):
return (True,word, a, M)
# If it was not a cusp in this step, rotate by M and try again
a = mob_transf(N,Fraction(x,y))
return IS_CUSP_WORD(Group,a,maxL,currentL,word,M)
#---------------------------------------------------
def is_cusp_word (Group, a, maxL):
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
return IS_CUSP_WORD(Group, a, maxL,0,0,Id)
#---------------------------------------------------
def is_cusp_word_PRINT (Group, a, maxL):
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
R= IS_CUSP_WORD(Group, a, maxL,0,0,Id)
print('--------- is_cusp_word RESULTS ---------')
if (R[0]==True):
print('TRUE')
print(a,' is a cusp, sent to infinity by the element')
printQmtx(R[3])
print('word in generators: ',R[1])
else:
print('FALSE')
print('could not determine if', a, 'is a cusp by doing', maxL, 'iterations')
print('closest approximation:')
printQmtx(R[3])
print('word in generators: ',R[1])
return
################################################################################
################################################################################
def explore_cusps(Maxden, Group):
# Checks all rationals with denominator leq Maxden inside the fundamental interval
# of Group to see if they are cusps or not
# For each rational x it creates a tuple (True/False, word, x, M )
# where M(x)= oo and word is the word of M in the generators of Group
    # returns the list of these tuples
L = Group.L # Length of fundamental interval of JG
maxL = 100 # Maximum number of iterations done to find the cusp
wordscusps =[]
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
q=1
while(q<=Maxden):
p=0
while(p/q <= L ):
if(bltin_gcd(p, q)==1):
word = is_cusp_word(Group, Fraction(p,q), maxL) #,0,0,Id)
goodword = (word[0], word[1], Fraction(p,q), word[3]) # is_cusp_word changes the cusp
wordscusps.append(goodword)
p+=1
q+=1
return wordscusps
################################################################################
def print_explore_cusps(Maxden, Group):
# Prints the results of explore_cusps
print('****** explore_cusps RESULTS ******')
L = Group.L # Length of fundamental interval of JG
maxL = 100 # Maximum number of iterations done to find the cusp
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
q=1
while(q<=Maxden):
p=0
while(p/q <= L ):
if(bltin_gcd(p, q)==1):
word = is_cusp_word(Group, Fraction(p,q), maxL)#,0,0,Id)
#print(p,'/', q, siesonoes)
if(word[0] == False):
print('False: ', p,'/', q, ', approximation = ', word[1])
else:
print('True: ', p,'/', q, 'is cusp, -> infty by ', word[1])
p+=1
q+=1
print(' ')
print(' ')
return
JG= JigsawGroup([1,2],[1,1])
W = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
#JG.print()
M = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
printQmtx(M)
is_cusp_word(JG,Fraction(4,1), 8)
V = explore_cusps(2,JG)
print_explore_cusps(3,JG)
print_explore_cusps(3,W)
def check(wordscusps):
N = len(wordscusps)
#onlycusps = wordscusps
onlycusps=[] #Only checks if the cusps
for i in range(0,N):
if(wordscusps[i][0] == True):
onlycusps.append(wordscusps[i])
N= len(onlycusps)
for i in range(0,N):
M = onlycusps[i][3]
a = onlycusps[i][2]
if(sends2inf(M,a) == False): #only checks that M(a)=oo
#if(vertices.count(r) == 0): #Option to check if M(a) \in vertices
print('***** message from CHECK in killer intervals *****')
print('NO GO: ', a , 'is marked as cusp but does not get sent to oo by corresponding matrix.')
print(' ')
return False
#print('***************')
#print(len(onlycusps))
#print(len(wordscusps))
if (len(onlycusps) == len(wordscusps)):
#print('***** message from CHECK in killer intervals *****')
#print('GO! : all the cusps in the list are cusps and matrices correspond.')
print(' ')
else:
#print('***** message from CHECK in killer intervals *****')
#print('OK : all the elements marked as cusps get set to oo by their matrix, but there were non-cusps in the list')
print(' ')
return True
def prepare_matrix(M):
# To compute the killer intervals we need to send a representative of M(oo)=x
# that has integer entries (3.1). To do so we multiply the entries of M by
# a common denominator so they are in Z and then reduce them
a1=M[0,0].numerator
b1=M[0,0].denominator
a2=M[0,1].numerator
b2=M[0,1].denominator
a3=M[1,0].numerator
b3=M[1,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
a11 = a1*b2*b3*b4
a22 = a2*b1*b3*b4
a33 = a3*b1*b2*b4
a44 = a4*b1*b2*b3
cdiv = bltin_gcd(a11,a22)
cdiv = bltin_gcd(cdiv,a33)
cdiv = bltin_gcd(cdiv, a44)
M00 = Fraction(a11,cdiv)
M01 = Fraction(a22, cdiv)
M10 = Fraction(a33, cdiv)
M11 = Fraction(a44, cdiv)
if M10 < 0:
M00 = -M00
M01 = -M01
M10 = -M10
M11 = -M11
M = np.matrix( [ (M00, M01), (M10, M11)] )
return M
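################################################################################
# Standalone sketch of the rescaling idea in prepare_matrix: clear denominators,
# then divide out the gcd. (Illustrative entries; this variant uses the lcm of
# the denominators, which yields the same projective result.)
from fractions import Fraction
from functools import reduce
from math import gcd
entries_sk = [Fraction(1, 2), Fraction(3, 4), Fraction(-1), Fraction(5, 6)]
lcm_sk = reduce(lambda a, b: a * b // gcd(a, b),
                (e.denominator for e in entries_sk))
ints_sk = [int(e * lcm_sk) for e in entries_sk]
g_sk = reduce(gcd, (abs(i) for i in ints_sk))
ints_sk = [i // g_sk for i in ints_sk]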
################################################################################
################################################################################
def matrix_infty_to_cusp(M):
# Given that the matrix M sends a known cusp to infinity
# returns a matrix corresponding to M^(-1) with all coordinates in Z and gcd=1
# see proposition 3.1 in paper
    N = inverse(M)  # use the rational-matrix inverse defined above
N = prepare_matrix(N)
return N
################################################################################
def killing_interval(M):
# Given a matrix M representing an element in Jigsawgroup, of the form of prop 3.1
# returns the killer interval (tuple) of the cusp M(oo)=M[0,0]/M[1,0] associated to M
return ( Fraction(M[0,0] -1, M[1,0]), Fraction(M[0,0]+1,M[1,0]))
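################################################################################
# Worked example sketch for killing_interval: a prepared matrix whose first
# column is (3, 2) represents the cusp 3/2, and its killer interval is
# ((p-1)/q, (p+1)/q). (Hypothetical cusp chosen for illustration.)
from fractions import Fraction
p_sk, q_sk = 3, 2
interval_sk = (Fraction(p_sk - 1, q_sk), Fraction(p_sk + 1, q_sk))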
################################################################################
def generate_killer_intervals(wordscusps):
# wordscusps is an array of the form (True/False, word, a, M)
# where - T/F indicates if a is a cusp,
# - M = matrix in group s.t. M(a) = infty
# - word is M as word in terms of generators of Group
# coming from function is_cusp_word
# Returns an array where each element is a tuple:
# (killer_interval, cusp corresponding to killer interval)
if(check(wordscusps) == False):
print('******* generate_killer_intervals MESSAGE *******')
print('Alert! There are false cusps in your array.')
return
killers = []
onlycusps=[] # First separates all the cusps from wordscusps
for i in range(0, len(wordscusps)):
if(wordscusps[i][0] == True):
onlycusps.append(wordscusps[i])
for i in range(0, len(onlycusps)):
N = inverse(onlycusps[i][3])
N = prepare_matrix(N)
killers.append( (killing_interval(N), onlycusps[i][2]) )
return killers
################################################################################
def killer_intervals(maxden, Group):
# Returns an array of tuples, where each element is:
# (killer_interval, cusp corresponding to killer interval)
    # Cusps of Group are calculated up to denominator maxden.
# (old tell_killer_intervals)
# Max#iterations to find cusps = 100 (defined @ explore_cusps)
V = explore_cusps(maxden, Group) #maxden, maxL, group
return generate_killer_intervals(V)
def print_killer_intervals(maxden, Group):
# Prints cusps and corresponding killer intervals of Group (up to denominator maxden)
# return is void.
# Max#iterations to find cusps = 100 (defined @ explore_cusps)
V = explore_cusps(maxden, Group)
killers = generate_killer_intervals(V)
print('')
intervals = []
for i in range (0, len(killers)):
intervals.append( killers[i][0])
for i in range (0, len(killers)):
print('killer around ', killers[i][1], ':')
print(intervals[i][0],',',intervals[i][1])
print(' ')
return
#####################################################################
# ***** Calculate killer intervals for all the cusps found among
# ***** the rationals up to denominator Maxden (first parameter)
JG= JigsawGroup([1,2],[1,1])
print_killer_intervals(2, JG)  # pass the instance created above, not the class
################################################################################
################################################################################
def do_intervals_cover(x,y, cusps, Xend,Yend, cover):
# if possible, finds a subcollection of given intervals that covers [Xend, Yend]
# x,y are arrays with endpoints of opencover
# [Xend, Yend] is the interval you are trying to cover
# cover has to be externally initialized to []
# if cover covers: returns subcover that covers
# else: returns False
#previous does_cover_cover4
checkX = [] # Find all intervals that contain Xend of interval
checkY = []
checkCusp = []
# Method 1: when Xend is cusp of Group
# if X end is cusp and endpoint of some open in cover
if(Xend in cusps and Xend in x ):
# Add killer interval of the cusp to cover
k = cusps.index(Xend)
cover.append((x[k], y[k], cusps[k]))
if(y[k]>Yend): # If that interval covers, finish
return cover
# Look for the cusps that have Xend as x-endpt of their killer interval
for i in range (0, len(x)):
if(Xend==x[i] and y[k]<y[i]):
checkX.append(x[i])
checkY.append(y[i])
checkCusp.append(cusps[i])
if(len(checkX) == 0):
Xend = y[k]
# Method 2: if Xend not a (known) cusp of Group
else:
for i in range (0,len(x)): # Find all intervals that contain Xend of interval
if(x[i]<Xend and Xend<y[i]):
checkX.append(x[i])
checkY.append(y[i])
checkCusp.append(cusps[i])
if(len(checkX) == 0): # The cover doesn't cover Xend of interval
print(' ****** do_intervals_cover RESULTS ****** ')
print('did not cover', Xend)
return False
# From the intervals that contain Xend, find the one that covers the most
if(len(checkX)!=0):
maxi = 0
for i in range (1,len(checkY)):
if(checkY[i]>checkY[maxi]): # compare against the current best, not the previous element
maxi=i
cover.append((checkX[maxi], checkY[maxi], checkCusp[maxi]))
if(checkY[maxi]> Yend): # That intervals covers!
return cover
Xend = checkY[maxi] # Construct new interval and new cover
newx = []
newy = []
newcusps = []
for i in range(0,len(y)): # Only keep the opens that have a chance of covering the remaining interval
if(y[i]>Xend):
newx.append(x[i])
newy.append(y[i])
newcusps.append(cusps[i])
return do_intervals_cover( newx, newy, newcusps, Xend,Yend, cover)
################################################################################
################################################################################
def check_cover(x,y, Xend, Yend):
# Checks if the cover given by do_intervals_cover indeed covers interval [Xend,Yend]
if(x[0]> Xend):
return False
if(y[len(y)-1]< Yend):
return False
for i in range (0,len(x)-1):
if y[i]<x[i+1]: #there is a gap between intervals
return False
return True
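#####################################################################
# Added illustration (not part of the original computation): a minimal,
# self-contained sketch of the greedy idea behind do_intervals_cover and
# check_cover, on hypothetical toy intervals. All names here are made up.
def _greedy_cover_sketch(opens, Xend, Yend):
    # opens = list of (x, y) open intervals; repeatedly pick, among the
    # intervals containing the current left end, the one reaching furthest right.
    cover = []
    while Xend < Yend:
        candidates = [(x, y) for (x, y) in opens if x < Xend < y or x == Xend]
        if not candidates:
            return False  # a gap: this collection does not cover
        best = max(candidates, key=lambda iv: iv[1])
        cover.append(best)
        Xend = best[1]
    return cover
# Example: _greedy_cover_sketch([(-0.1, 0.4), (0.3, 0.8), (0.7, 1.1)], 0, 1)
# returns all three intervals; with a gap it returns False.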
################################################################################
################################################################################
def cover_with_killers(Group,maxden):
# checks if the collection of killer intervals corresponding to cusps in Group
# (up to denom maxden) cover the fundamental interval of the Group
# It also prints the results
# ... can be altered to return: False if the intervals don't cover
# cover (subcollection of killer intervals) that cover ...
#previous cover_with_killers4
# Generates killer intervals for all cusps with den<=maxden
killers = killer_intervals(maxden, Group)
# Separate intervals and cusps
intervals = []
cusps = []
for i in range (0, len(killers)):
intervals.append( killers[i][0])
cusps.append(killers[i][1])
# Separate x and y ends of intervals
x = []
y = []
for i in range(0, len(intervals)):
x.append(intervals[i][0])
y.append(intervals[i][1])
# See if the killer interval collection covers
cover = do_intervals_cover(x,y, cusps, 0,Group.L, [])
if( cover == False):
print(' ****** cover_with_killers RESULTS ****** ')
print('Bad news...')
print('The cover generated by the cusps found in group does not cover the fundamental interval.')
print(' ')
return
# Double check the cover covers
x=[]
y=[]
for i in range(0, len(cover)):
x.append(cover[i][0])
y.append(cover[i][1])
siono = check_cover(x, y, 0, Group.L)
if(siono == True):
print(' ****** cover_with_killers RESULTS ****** ')
print('Good news!')
print('The cover generated by the cusps found among rationals with denominator at most', maxden,'covers the fundamental interval [0,', Group.L,'].')
print('The cover has', len(cover),' intervals:')
for i in range(0, len(cover)):
print(cover[i][2] ,' ---' , cover[i][0], ' , ', cover[i][1] )
print(' ')
return
print(' ****** cover_with_killers RESULTS ****** ')
print('Bad news...')
print('The program computed a false cover.')
print(' ')
return
################################################################################
#####################################################################
######## everything that can be done with the previous code #########
# ***** generate a jigsaw group ***** #
JG= JigsawGroup([1,4],[1,2])
print('fundamental interval = [0,', JG.L,']')
JG.printNOGENS()
# ***** generate a Weierstrass group *****
# ***** check if rationals up to denominator Maxden are cusps ***** #
# ***** for a given rational stops after 100 iterations ****** #
#esonoes = False
#def check_cusps(Maxden, maxL, JigsawGroup)
#check_cusps(20,50,JG)
# ***** check if rationals up to denominator Maxden are cusps ***** #
# ***** for a given rational stops after 100 iterations ****** #
#print_check_cusps(20,50,JG)
#print(' ')
# ***** Calculate killer intervals for all the cusps found among
# ***** the rationals up to denominator Maxden (first parameter)
#JG= JigsawGroup([1,3],[1,1])
#print_killer_intervals(2,JG)
cover_with_killers(JG, 7)
## HERE
#W2 = WeirGroup(Fraction(3,2),Fraction(2,5))
#cover_with_killers(W2, 10)
#W6 = WeirGroup(Fraction(9,5),Fraction(5,7))
#cover_with_killers(W6, 50)
# -------------------------------------------------------------------------------
# ------------ THIS SECTION OF THE CODE DEALS WITH FINDING SPECIALS -------------
# -------------------------------------------------------------------------------
def look_for_wholes(Group, maxL):
# Tries to build a cover with killer intervals whose endpoints are cusps,
# starting with the killer interval around 0.
# if at some stage it cannot continue, it returns the number that could not be
# determined to be a cusp.
L = Group.L
x = 0
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
k=0
# maxL=1000
Yends = []
while( x< L and k<maxL):
#is_cusp_word(Group, a, maxL, currentL, word, M):
info = IS_CUSP_WORD(Group, x, maxL, 0,0, Id ) # has form (T/F, word, cusp, M)
if(info[0]== False):
print(' ***** look_for_wholes RESULTS ***** ')
print(x, 'is not a cusp of the group (up to', maxL ,' rotations)')
print('An approximation to it is', info[1])
print(' ')
return
k_interval = generate_killer_intervals([(info[0], info[1], x, info[3])])
# has form [(endX,endY) , cusp]
# Take the right end of the killer interval around x and repeat the process
x= k_interval[0][0][1]
Yends.append(x)
k+=1
if(k == maxL):
print(' ***** look_for_wholes RESULTS ***** ')
print('Did not cover the interval. Endpoints were:')
for i in range (0, len(Yends)):
print(Yends[i])
print(' ')
return
if( x>= L):
print(' ***** look_for_wholes RESULTS ***** ')
print('A cover was generated!')
print(' ')
return
# -------------------------------------------------------------------------------
W = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
look_for_wholes(W,200)
#W = WeirGroup(Fraction(2,5), Fraction(5,7)) # This group has specials 1, 2
# look_for_wholes(W,1755)
################################################################################
def word2mtx(Group, word):
# word is a sequence of digits where each represents a generator in the group.
# returns the matrix X corresponding to this word
if word % 10 == 0:
word = Fraction(word, 10)
L = len(str(word)) # !!! assumes each digit represents a generator
X = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
for i in range(0,L):
k = int(word % 10)
X = multiply(X,Group.generators[k-1])
word = Fraction(word - k,10)
return X
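################################################################################
# Added illustration (not from the original notebook): word2mtx reads the word
# digit by digit from the right; each digit selects one generator. The helper
# below (a made-up name) shows just the digit-peeling order on a toy word.
def _peel_digits_sketch(word):
    digits = []
    while word > 0:
        k = word % 10
        digits.append(k)
        word = (word - k) // 10
    return digits
# _peel_digits_sketch(3121) returns [1, 2, 1, 3]: rightmost digit first,
# so the corresponding generators are multiplied in that order.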
################################################################################
def subword2mtx(Group, word, i, j):
# Returns (subword, M) where
# - subword = subword of word going from the i-th digit to the j-th (including both)
# - M = transformation corresponding to subword according to Group.generators
out = word % (10**i)
word = word % (10**j) - out
word = Fraction(word, 10**i)
M = word2mtx(Group, word)
return (word,M)
def conjugate_word(Group, word, subword, j):
# Conjugates subword by the first j digits of word
# Returns tuple (conjugated_word, M) where
# - M= matrix corresponding to conjugated_word according to Group.generators
for i in range(0, j):
k = int(word % 10) # !!! Only works for at most 9 generators. (8 tiles)
subword = int( str(k) + str(subword) + str(k))
word = Fraction(word - k,10)
return(subword, word2mtx(Group, subword))
################################################################################
def check_special(Group, a,maxL):
# Given a (rational type), the function tries to determine if it is a special for Group
# maxL = max number of iterations allowed
# prints results
#maxL=400 #digits in word
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
info = IS_CUSP_WORD(Group, a, maxL, 0,0, Id )
word = int(Fraction(info[1], 10)) # Adjust because words always have 0 at the beginning.
# If a is a cusp, finish.
if(info[0] == True):
print(' ****** check_special RESULTS ******')
print('This is a cusp! :', a)
print('Sent to oo by:')
printQmtx(info[3])
print('its word has length ',len(str(word)),'and it is:', word)
print(' ')
return
woord = word # Need a copy because word is going to get destroyed
orbit = [a]
distinct = True
i = 0
while( distinct==True and i<maxL):
k = int(word % 10) # !!! Only works for at most 9 generators. (less than 8 tiles)
M = Group.generators[k-1]
newpoint = mob_transf(M, orbit[i])
# Check if you are returning to newpoint
if( newpoint in orbit):
j=0
while(orbit[j]!= newpoint):
j+=1
# Have to conjugate the word of the first special found to get the word for a
#subword = (word, matrix)
subword = subword2mtx(Group, woord, j, i+1) #element in group that fixes element orbit[j]=orbit[i]
subword = conjugate_word(Group, woord, subword[0], j) # Conjugates subword by the first j digits of word
print(' ****** check_special RESULTS ******')
print(i,':',newpoint,'= first appearance',j,':',orbit[j])
print('This is a special:', a)
print('Fixed by Mobius transf=')
printQmtx(subword[1])
trace= tr(subword[1])*tr(subword[1])/det(subword[1])
print('as an element in PSL(2,R) this has squared trace',trace )
if np.absolute(trace)>4:
print(' hyperbolic')
if np.absolute(trace)<4:
print(' elliptic')
if np.absolute(trace)==4:
print(' parabolic')
print('its word has length ',len(str(subword[0])) ,' and it is', subword[0])
print(' ')
return
orbit.append(newpoint)
word = Fraction(word - k,10)
i+=1
if i== maxL:
print(' ****** check_special RESULTS ******')
print('Could not determine if',a,' is a cusp or a special. (', maxL,' steps)')
print(' ')
return
#W3 = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
#check_special(W3, Fraction(1,1),10)
#check_special(W3, Fraction(2,1),10)
W = WeirGroup(Fraction(1,3), Fraction(3,1)) # This group has specials 1, 2
W.print()
check_special(W, Fraction(1,1),80)
# -------------------------------------------------------------------------------
# ------- THIS SECTION OF THE CODE DEALS WITH TILING THE VERTICAL STRIPE --------
# -------------------------------------------------------------------------------
def calculate_side_types(n, k):
# n = type of the triangle
# k = how many vertices on x-axis
# returns an array sidetypes, where sidetypes[i] is the type of the side
# from the i-th vertex to the (i+1)-vertex.
# First vertex is assumed to be oo and then goes in CCW.
sidetypes = [1,1] # Jigsaw always has first tile type Delta(1,1,1)
for i in range (0,k-2):
if(i%3 == 0):
sidetypes.append(3) # 3 = 1/n
if (i%3 == 1):
sidetypes.append(1) # 1 = 1
if(i%3 == 2):
sidetypes.append(2) # 2 = n
# Vertical side from last vertex to infty
if ( sidetypes[k-1] == 1):
sidetypes.append(3)
if ( sidetypes[k-1] == 2):
sidetypes.append(1)
if ( sidetypes[k-1] == 3):
sidetypes.append(2)
return sidetypes
def cover_by_vertical_rotations(Group):
# Returns list of vertices of the tiling (given by Group)
# that have a vertex on the fundamental interval
sidetype = calculate_side_types(Group.sign[1], len(Group.vertices))
vertices = list(Group.vertices)
cusps = list(Group.vertices) # Where we save all the generated non-oo cusps.
vertices.insert(0, "INF") # now sidetype[i] = sidetype from vertices[i] to vertices[i+1]
L = Group.L #length of fund interval
numV = len(vertices)
k = numV-1
largest = vertices[k]
n = Group.tiles[1]
while(largest < L ):
# calculate rotation about [largest, oo], take side type into account
if (sidetype[k] == 1):
R = pi_rotation_special(largest, 1)
else:
R = pi_rotation_special(largest, n) # proposition 4.3 (5)
for i in range (0,len(vertices)):
a = vertices.pop(numV-1)
x = mob_transf(R, a)
vertices.insert(0,x)
if ( i!= numV-k-1):
cusps.append(x)
largest = max(cusps)
k = vertices.index(largest)
cusps = sorted(set(cusps))
return cusps
############################################################################
JG = JigsawGroup([1,4],[1,3])
print('length of fund interval: ', JG.L)
C = cover_by_vertical_rotations(JG)
for i in range(0, len(C)):
print(C[i])
#is_cusp(JG, Fraction(10,1), 3, 0)
W = WeirGroup(Fraction(1,4),Fraction(4,1))
cover_with_killers(W, 10)
JG = JigsawGroup([1,4],[1,4])
#print_explore_cusps(3,JG)
is_cusp_word(JG, Fraction(22,1), 10)
#JG.print()
M= np.matrix( [ (Fraction(-5,1),Fraction(29,1)) , (Fraction(-1,1), Fraction(5,1)) ] )
a=mob_transf(M,Fraction(6,1))
print(a)
N= np.matrix( [ (Fraction(-6,1),Fraction(37,1)) , (Fraction(-1,1), Fraction(6,1)) ] )
a=mob_transf(N,Fraction(15,1))
#print(a)
M = inverse(N)
b=mob_transf(N,Fraction(4,1))
print(b)
K= np.matrix( [ (Fraction(-1,5),Fraction(1,5)) , (Fraction(-1,1), Fraction(1,5)) ] )
c=mob_transf(M,Fraction(6,1))
print(c)
J=pi_rotation_special(Fraction(33,5),Fraction(4,25))
print(mob_transf(J,Fraction(6,1)))
#*** Calculating contraction constant of 4 = m+3
t1 = np.matrix( [ (Fraction(1,1),Fraction(2,1)) , (Fraction(-1,1), Fraction(-1,1)) ] )
t2 = np.matrix( [ (Fraction(1,1),Fraction(1,1)) , (Fraction(-2,1), Fraction(-1,1)) ] )
T9 =np.matrix( [ (Fraction(1,1),Fraction(-9,1)) , (Fraction(0,1), Fraction(1,1)) ] )
T9i=np.matrix( [ (Fraction(1,1),Fraction(9,1)) , (Fraction(0,1), Fraction(1,1)) ] )
M = np.matrix( [ (Fraction(-5,1),Fraction(29,1)) , (Fraction(-1,1), Fraction(5,1)) ] )
B=M
print(mob_transf(B,Fraction(4,1)))
B=multiply(T9,M)
print(mob_transf(B,Fraction(4,1)))
B=multiply(t2,B)
print(mob_transf(B,Fraction(4,1)))
B=multiply(t1,B)
print(mob_transf(B,Fraction(4,1)))
printQmtx(B)
#*** Calculating contraction constant of 11/3 = m+2 + 2/3
t1 = np.matrix( [ (Fraction(1,1),Fraction(2,1)) , (Fraction(-1,1), Fraction(-1,1)) ] )
T9 = np.matrix( [ (Fraction(1,1),Fraction(-9,1)) , (Fraction(0,1), Fraction(1,1)) ] )
M = np.matrix( [ (Fraction(-5,1),Fraction(29,1)) , (Fraction(-1,1), Fraction(5,1)) ] )
B=M
print(mob_transf(B,Fraction(11,3)))
B=multiply(T9,M)
print(mob_transf(B,Fraction(11,3)))
B=multiply(t1,B)
print(mob_transf(B,Fraction(11,3)))
printQmtx(B)
#*** Calculating contraction constant of 7/3 = m+2 - 2/3
t2 = np.matrix( [ (Fraction(1,1),Fraction(1,1)) , (Fraction(-2,1), Fraction(-1,1)) ] )
T7 = np.matrix( [ (Fraction(1,1),Fraction(-7,1)) , (Fraction(0,1), Fraction(1,1)) ] )
M = np.matrix( [ (Fraction(-5,1),Fraction(29,1)) , (Fraction(-1,1), Fraction(5,1)) ] )
B=M
print(mob_transf(B,Fraction(7,3)))
B=multiply(T7,M)
print(mob_transf(B,Fraction(7,3)))
B=multiply(t2,B)
print(mob_transf(B,Fraction(7,3)))
printQmtx(B)
#*** Calculating contraction constant of 2 = m+1
A = np.matrix( [ (Fraction(-7,1),Fraction(50,1)) , (Fraction(-1,1), Fraction(7,1)) ] )
B = np.matrix( [ (Fraction(-17,1),Fraction(145,1)) , (Fraction(-2,1), Fraction(17,1)) ] )
M = np.matrix( [ (Fraction(-5,1),Fraction(29,1)) , (Fraction(-1,1), Fraction(5,1)) ] )
C=M
print(mob_transf(C,Fraction(2,1)))
C=multiply(A,C)
print(mob_transf(C,Fraction(2,1)))
C=multiply(B,C)
print(mob_transf(C,Fraction(2,1)))
printQmtx(C)
# ################################################################################
###### TEST for Jigsaw_vertices #####
# JS= jigsawset([1,2],[4,6])
# JS.print()
# Jigsaw_vertices(JS)
################################################################################
def find_special_in_orbit(Group, a,maxL):
# Given a (rational type), the function tries to determine if there is a special in the orbit of a
# orbit is calculated ....
# maxL = max number of iterations allowed
# prints results
#maxL=400 #digits in word
Id = np.matrix( [ (Fraction(1),Fraction(0)) , (Fraction(0), Fraction(1)) ] )
info = IS_CUSP_WORD(Group, a, maxL, 0,0, Id )
word = int(Fraction(info[1], 10)) # Adjust because words always have 0 at the beginning.
# If a is a cusp, finish.
if(info[0] == True):
print(' ****** find_special_in_orbit RESULTS ******')
print('This is a cusp! :', a)
print('Sent to oo by:')
printQmtx(info[3])
print('its word has length ',len(str(word)),'and it is:', word)
print(' ')
return
woord = word # Need a copy because word is going to get destroyed
orbit = [a]
distinct = True
i = 0
while( distinct==True and i<maxL):
k = int(word % 10) # !!! Only works for at most 9 generators. (less than 8 tiles)
M = Group.generators[k-1]
newpoint = mob_transf(M, orbit[i])
# Check if you are returning to newpoint
if( newpoint in orbit):
j=0
while(orbit[j]!= newpoint):
j+=1
# Have to conjugate the word of the first special found to get the word for a
#subword = (word, matrix)
subword = subword2mtx(Group, woord, j, i+1) #element in group that fixes element orbit[j]=orbit[i]
print(' ****** find_special_in_orbit RESULTS ******')
print('This is a special:', a)
print(i,':',newpoint,' first appearance =',j,':',orbit[j])
print(orbit[j], 'fixed by Mobius transf=')
printQmtx(subword[1])
trace= tr(subword[1])*tr(subword[1])/det(subword[1])
print('as an element in PSL(2,R) this has squared trace',trace )
if np.absolute(trace)>4:
print(' hyperbolic')
if np.absolute(trace)<4:
print(' elliptic')
if np.absolute(trace)==4:
print(' parabolic')
print('its word has length ',len(str(subword[0])) ,' and it is', subword[0])
print(' ')
return subword[1]
orbit.append(newpoint)
word = Fraction(word - k,10)
i+=1
if i== maxL:
print(' ****** find_special_in_orbit RESULTS ******')
print('Could not determine if',a,' is a cusp or a special. (', maxL,' steps)')
print(' ')
return
# --------------------------------------------
def fixpts (M): # this does not work, use fixpts2 in "workspace looking for specials"
d= det(M) #checkpoint
if d!= 1:
if d!= -1:
print('matrix is not in PSL(2,R)')
return
# if d==-1:
# M=-1*M
a3=M[1,0].numerator
b3=M[1,0].denominator
if a3==0: #** fill this
return
a1=M[0,0].numerator
b1=M[0,0].denominator
a4=M[1,1].numerator
b4=M[1,1].denominator
disD = (a1**2)*(b4**2) +2*a1*a4*b1*b4 + (a4**2)*(b1**2)-4*(b1**2)*(b4**2) #numerator of the discriminant
if disD == 0:
print('parabolic element')
#fill this
return
if disD <0:
print('elliptic element')
#fill this
return
disN = (b1**2)*(b4**2) #denominator of the discriminant
init = Fraction(a1*b4-a4*b1,b1*b4)
div = Fraction(b3,2*a3)
rootD = int(math.sqrt(disD))
if (rootD**2 == disD):
rootN = int(math.sqrt(disN))
if (rootN**2 == disN):
disc = Fraction(rootD, rootN)
root1 = (init + disc)*div #all these are fraction type
root2 = (init - disc)*div
print('Fixed pts (calculated exactly):')
print(root1, ' , ',root2)
return [root1, root2]
disc= math.sqrt(Fraction(disD,disN))
root1 = (init + disc)*div
root2 = (init - disc)*div
print('Fixed pts (float approximation):')
print(root1, ' , ',root2)
return [root1, root2]
```
# LSTM Stock Predictor Using Fear and Greed Index
In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin fear and greed index values to predict the 11th day closing price.
You will need to:
1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model
## Data Preparation
In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.
You will need to:
1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:
```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
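As a quick illustration of the rolling-window idea (on a hypothetical toy series, not the Bitcoin data): with a window of 3, each sample uses 3 consecutive values as features and the next value as the target.

```python
import numpy as np

# Toy series standing in for the fng values; window = 3
series = np.array([10, 11, 12, 13, 14, 15])
window = 3
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
print(X[0], y[0])  # [10 11 12] 13
```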
```
import numpy as np
import pandas as pd
from numpy.random import seed
from tensorflow import random
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
%matplotlib inline
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
seed(1)
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous fng values
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 0
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
# Use the MinMaxScaler to scale data between 0 and 1.
# Creating a MinMaxScaler object
scaler = MinMaxScaler()
# Fitting the MinMaxScaler object with the features data X
scaler.fit(X)
# Scaling the features training and testing sets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Fitting the MinMaxScaler object with the target data Y
scaler.fit(y)
# Scaling the target training and testing sets
y_train_scaled = scaler.transform(y_train)
y_test_scaled = scaler.transform(y_test)
# Reshape the features for the model
X_train_scaled = X_train_scaled.reshape((X_train_scaled.shape[0], X_train_scaled.shape[1], 1))
X_test_scaled = X_test_scaled.reshape((X_test_scaled.shape[0], X_test_scaled.shape[1], 1))
```
---
## Build and Train the LSTM RNN
In this section, you will design a custom LSTM RNN and fit (train) it using the training data.
You will need to:
1. Define the model architecture
2. Compile the model
3. Fit the model to the training data
### Hints:
You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
```
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# Defining the LSTM RNN model.
model = Sequential()
# Initial model setup
number_units = 50
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train_scaled.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train_scaled, y_train_scaled, epochs=50, shuffle=False, batch_size=10, verbose=1)
```
---
## Model Performance
In this section, you will evaluate the model using the test data.
You will need to:
1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart
### Hints
Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
```
# Evaluate the model
model.evaluate(X_test_scaled, y_test_scaled)
# Make some predictions
predicted = model.predict(X_test_scaled)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test_scaled.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Real Vs. Predicted Prices")
```
# Coding Recommendation Engines Ground Up
***
## Overview
Recommendation engines are programs that compute the similarity between two entities and, on that basis, give us a targeted output. At their root, all recommendation engines are trying to quantify how similar two entities are; the computed similarities can then be used to produce various kinds of results.
**Recommendation Engines are mostly based on the following concepts:**
1. Popularity Model
2. Collaborative Filtering Technique (Content Based / User Based)
3. Matrix Factorization Techniques.
### Popularity Model
***
The most basic form of a recommendation engine is one where the engine recommends the most popular items to all customers. Such recommendations are generalized: everyone gets the same ones, with no personalization. Engines of this kind are based on the **Popularity Model**. A use case for this model is the 'Top News' section of a news website, where the most popular news of the day is the same for every user irrespective of individual interests. That makes logical sense, because news is a generalized item that has little to do with personal taste.
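As a quick sketch (toy data, with column names mirroring the MovieLens ratings used later in this notebook), a popularity model can be a simple group-by:

```python
import pandas as pd

# Hypothetical toy ratings with the same columns as the ratings dataset below
toy = pd.DataFrame({
    'user_id':  [1, 1, 2, 2, 3, 3, 3],
    'movie_id': [10, 20, 10, 30, 10, 20, 30],
    'rating':   [5, 3, 4, 2, 5, 4, 1],
})
# Most popular = rated by the most users (ties broken by mean rating)
popularity = (toy.groupby('movie_id')['rating']
                 .agg(['count', 'mean'])
                 .sort_values(['count', 'mean'], ascending=False))
print(popularity.index.tolist())  # [10, 20, 30]
```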
### Collaborative Filtering Techniques
***
**User Based Collaborative Filtering**
In user based collaborative filtering, we find a similarity score between two users. On the basis of that score, we recommend items bought/liked by one user to the other, assuming he might like them given the similarity. This will become clearer when we go ahead and implement it.
**Content Based Filtering**
In the user based technique, we saw that we recommend items to a user based on the similarity score between two users, and it does not matter whether the items are of a similar type. In content based filtering, by contrast, our interest is in the content rather than the users. Here, if user 1 likes watching movies of genre A (most of the movies he has watched/rated highly are of genre A), then we will recommend him more movies of the same genre. That is how this technique works.
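A minimal sketch of the content based idea (hypothetical one-hot genre vectors; cosine similarity between item profiles):

```python
import numpy as np

# Toy one-hot genre matrix: rows = movies, columns = genres
genres = np.array([
    [1, 0, 1],   # movie A
    [1, 0, 1],   # movie B: same genres as A
    [0, 1, 0],   # movie C: a different genre
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(genres[0], genres[1]))  # ~1.0 -> recommend B to fans of A
print(cosine(genres[0], genres[2]))  # 0.0
```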
### Matrix Factorization Techniques
***
In this technique, our objective is to find latent features: we approximate the m*n rating matrix by the dot product of an m*k and a k*n matrix, where k is the number of latent features. For example, if m is the row index of the users, n is the column index of the items, and the data are the ratings provided by every user, then we start with m*k and k*n matrices and adjust their values until their product converges to approximately the m*n matrix (not exactly the same, of course). This is a very expensive approach, but highly accurate.
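The factorization idea can be sketched with plain gradient descent on a toy matrix (the latent size k, the learning rate, and the iteration count below are arbitrary illustrative choices, not tuned values):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5.0, 3.0], [4.0, 1.0]])  # toy m*n rating matrix
m, n, k = 2, 2, 2
P = rng.random((m, k)) * 0.1            # m*k user-factor matrix
Q = rng.random((k, n)) * 0.1            # k*n item-factor matrix
lr = 0.01
for _ in range(10000):
    err = R - P @ Q                     # reconstruction error
    P += lr * err @ Q.T                 # gradient step on P
    Q += lr * P.T @ err                 # gradient step on Q
print(np.round(P @ Q, 2))               # converges to approximately R
```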
## Problem Statement
***
We have a MovieLens database, and our objective is to apply various recommendation techniques from scratch: finding similarities between users, the most popular movies, and personalized recommendations for a targeted user based on user based collaborative filtering.
```
# Importing the required libraries.
import pandas as pd
from sklearn.model_selection import train_test_split
from math import pow, sqrt
# Reading users dataset into a pandas dataframe object.
u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']
users = pd.read_csv('data/users.dat', sep='::', names=u_cols,
encoding='latin-1')
users.head(8)
# Reading ratings dataset into a pandas dataframe object.
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv('data/ratings.dat', sep='::', names=r_cols,
encoding='latin-1')
ratings.head(8)
# Reading movies dataset into a pandas dataframe object.
m_cols = ['movie_id', 'movie_title', 'genre']
movies = pd.read_csv('data/movies.dat', sep='::', names=m_cols, encoding='latin-1')
movies.head(8)
```
As seen in the above dataframe, the genre column has pipe-separated values, which cannot be processed for recommendations as such. Hence, we need to generate a column for every genre type, whose value is 1 if the movie belongs to that genre and 0 otherwise (a sort of one-hot encoding).
```
# Getting series of lists by applying split operation.
movies.genre = movies.genre.str.split('|')
# Getting distinct genre types for generating columns of genre type.
genre_columns = list(set([j for i in movies['genre'].tolist() for j in i]))
# Iterating over every list to create and fill values into columns.
for j in genre_columns:
movies[j] = 0
for i in range(movies.shape[0]):
for j in genre_columns:
if(j in movies['genre'].iloc[i]):
movies.loc[i,j] = 1
movies.head(7)
```
Also, we need to separate the year part of the 'movie_title' column for better interpretability and processing. Hence, a column named 'release_year' is created using the code below.
```
# Separating movie title and year part using the split function
split_values = movies['movie_title'].str.split("(", n = 1, expand = True)
# setting 'movie_title' values to title part and creating 'release_year' column.
movies.movie_title = split_values[0]
movies['release_year'] = split_values[1]
# Cleaning the release_year series and dropping 'genre' columns as it has already been one hot encoded.
movies['release_year'] = movies.release_year.str.replace(')', '', regex=False)
movies.drop('genre',axis=1,inplace=True)
```
Let's visualize all the dataframes after all the preprocessing we did.
```
movies[['movie_title', 'release_year']]
ratings.head()
users.head()
ratings.shape
```
### Writing generally used getter functions in the implementation
Here, we have written a few getters so that we do not need to write them again and again; this also increases the readability and reusability of the code.
```
#Function to get the rating given by a user to a movie.
def get_rating_(userid,movieid):
return (ratings.loc[(ratings.user_id==userid) & (ratings.movie_id == movieid),'rating'].iloc[0])
# Function to get the list of all movie ids the specified user has rated.
def get_movieids_(userid):
return (ratings.loc[(ratings.user_id==userid),'movie_id'].tolist())
# Function to get the movie titles against the movie id.
def get_movie_title_(movieid):
return (movies.loc[(movies.movie_id == movieid),'movie_title'].iloc[0])
```
## Similarity Scores
***
In this implementation, the similarity between two users is calculated on the basis of the distance between them (i.e. Euclidean distance) and by calculating the Pearson correlation between the two users.
We have written two functions.
```
def distance_similarity_score(user1,user2):
    '''
    user1 & user2 : user ids of the two users between which the similarity score is to be calculated.
    '''
    both_watch_count = 0
    both_watch_list = []
    for element in ratings.loc[ratings.user_id==user1,'movie_id'].tolist():
        if element in ratings.loc[ratings.user_id==user2,'movie_id'].tolist():
            both_watch_count += 1
            both_watch_list.append(element)
    if both_watch_count == 0:
        return 0
    distance = []
    for element in both_watch_list:
        rating1 = get_rating_(user1,element)
        rating2 = get_rating_(user2,element)
        distance.append(pow(rating1 - rating2, 2))
    total_distance = sum(distance)
    return 1/(1+sqrt(total_distance))

distance_similarity_score(1,310)
```
Calculating similarity scores based on distances has an inherent problem: there is no natural threshold for deciding how far apart two users can be and still count as close. The Pearson correlation method resolves this, as it always returns a value between -1 and 1, which gives us clear boundaries for closeness.
```
def pearson_correlation_score(user1,user2):
    '''
    user1 & user2 : user ids of the two users between which the similarity score is to be calculated.
    '''
    both_watch_count = []
    for element in ratings.loc[ratings.user_id==user1,'movie_id'].tolist():
        if element in ratings.loc[ratings.user_id==user2,'movie_id'].tolist():
            both_watch_count.append(element)
    if len(both_watch_count) == 0:
        return 0
    ratings_1 = [get_rating_(user1,element) for element in both_watch_count]
    ratings_2 = [get_rating_(user2,element) for element in both_watch_count]
    rating_sum_1 = sum(ratings_1)
    rating_sum_2 = sum(ratings_2)
    rating_squared_sum_1 = sum([pow(element,2) for element in ratings_1])
    rating_squared_sum_2 = sum([pow(element,2) for element in ratings_2])
    product_sum_rating = sum([get_rating_(user1,element) * get_rating_(user2,element) for element in both_watch_count])
    numerator = product_sum_rating - ((rating_sum_1 * rating_sum_2) / len(both_watch_count))
    denominator = sqrt((rating_squared_sum_1 - pow(rating_sum_1,2) / len(both_watch_count)) * (rating_squared_sum_2 - pow(rating_sum_2,2) / len(both_watch_count)))
    if denominator == 0:
        return 0
    return numerator/denominator

pearson_correlation_score(1,310)
```
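As a sanity check, the hand-rolled formula above can be compared against NumPy's built-in correlation on a pair of hypothetical rating lists (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical ratings two users gave to the same four movies.
ratings_1 = [5.0, 3.0, 4.0, 1.0]
ratings_2 = [4.0, 2.0, 5.0, 1.0]

# np.corrcoef returns the correlation matrix; the off-diagonal entry
# is the Pearson correlation between the two lists.
r = np.corrcoef(ratings_1, ratings_2)[0, 1]
print(round(r, 4))  # -> 0.8552
```

The same value should come out of `pearson_correlation_score` for two users whose co-rated movies carry these ratings.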
### Most Similar Users
The objective is to find the **Most Similar Users** to the targeted user. Here we have two metrics to compute the score: distance and correlation.
```
def most_similar_users_(user1,number_of_users,metric='pearson'):
    '''
    user1 : targeted user
    number_of_users : number of most similar users you want for user1.
    metric : metric used to calculate the inter-user similarity score ('pearson', or anything else for distance).
    '''
    # Getting distinct user ids.
    user_ids = ratings.user_id.unique().tolist()
    # Getting the similarity score between the targeted user and every other user in the list (or a subset of the list).
    if(metric == 'pearson'):
        similarity_score = [(pearson_correlation_score(user1,nth_user),nth_user) for nth_user in user_ids[:100] if nth_user != user1]
    else:
        similarity_score = [(distance_similarity_score(user1,nth_user),nth_user) for nth_user in user_ids[:100] if nth_user != user1]
    # Sorting in descending order.
    similarity_score.sort(reverse=True)
    # Returning the top 'number_of_users' most similar users.
    return similarity_score[:number_of_users]
```
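The descending sort works because Python orders tuples element by element, so `(score, user_id)` pairs sort by score first. A small sketch with hypothetical scores:

```python
# Hypothetical (score, user_id) pairs, as produced inside most_similar_users_.
similarity_score = [(0.31, 7), (0.92, 3), (-0.10, 5), (0.92, 1)]

# One descending sort is equivalent to sort() followed by reverse().
similarity_score.sort(reverse=True)
print(similarity_score[:2])  # -> [(0.92, 3), (0.92, 1)]
```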
## Getting Movie Recommendations for Targeted User
***
The concept is simple. First, we iterate only over movies not watched (or rated) by the targeted user, restricting ourselves to users highly correlated with the targeted user. We use a weighted-similarity approach: each rating is multiplied by the similarity score, so that highly similar users affect the recommendations more than less similar ones. Finally, we sort the list of scores along with the movie ids and return the movie titles for the top movie ids.
```
def get_recommendation_(userid):
    user_ids = ratings.user_id.unique().tolist()
    total = {}
    similarity_sum = {}
    # Iterating over a subset of user ids.
    for user in user_ids[:100]:
        # Not comparing the user to itself (obviously!).
        if user == userid:
            continue
        # Getting the similarity score between the users.
        score = pearson_correlation_score(userid,user)
        # Not considering users with a zero or negative similarity score.
        if score <= 0:
            continue
        # Accumulating the weighted ratings and the sum of similarities.
        for movieid in get_movieids_(user):
            # Only considering movies the targeted user has not watched/rated.
            if movieid not in get_movieids_(userid) or get_rating_(userid,movieid) == 0:
                # setdefault initializes each accumulator once, instead of
                # resetting it to 0 for every similar user.
                total.setdefault(movieid, 0)
                total[movieid] += get_rating_(user,movieid) * score
                similarity_sum.setdefault(movieid, 0)
                similarity_sum[movieid] += score
    # Normalizing the weighted ratings.
    ranking = [(tot/similarity_sum[movieid],movieid) for movieid,tot in total.items()]
    # Sorting in descending order of score.
    ranking.sort(reverse=True)
    # Getting the movie titles against the movie ids.
    recommendations = [get_movie_title_(movieid) for score,movieid in ranking]
    return recommendations[:10]
```
**NOTE**: We have applied the above three techniques to only a specific subset of the dataset, as the full dataset is too big and iterating over every row multiple times would increase the runtime manifold.
### Implementations
***
We will call all the functions one by one and see whether they return the desired output (or whether they return any output at all!) ;)
```
print(most_similar_users_(23,5))
```
You may have noticed that the most-similar-users logic can be strengthened by considering other features as well, such as age. Here, we have built our logic on the basis of only one feature: the rating.
```
print(get_recommendation_(320))
```
Next in line, we will discuss and implement matrix factorization approach.
| github_jupyter |
# ism Import and Plotting
This example shows how to measure an impedance spectrum and then plot it in Bode and Nyquist using the Python library [matplotlib](https://matplotlib.org/).
```
import sys
from thales_remote.connection import ThalesRemoteConnection
from thales_remote.script_wrapper import PotentiostatMode,ThalesRemoteScriptWrapper
from thales_file_import.ism_import import IsmImport
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import EngFormatter
from jupyter_utils import executionInNotebook, notebookCodeToPython
```
# Connect Python to the already launched Thales-Software
```
if __name__ == "__main__":
    zenniumConnection = ThalesRemoteConnection()
    connectionSuccessful = zenniumConnection.connectToTerm("localhost", "ScriptRemote")
    if connectionSuccessful:
        print("connection successful")
    else:
        print("connection not possible")
        sys.exit()

    zahnerZennium = ThalesRemoteScriptWrapper(zenniumConnection)
    zahnerZennium.forceThalesIntoRemoteScript()
```
# Setting the parameters for the measurement
After the connection with Thales is established, the naming of the measurement result files is set.
EIS spectra are measured with a sequential number in the specified file name, starting with number 1.
```
zahnerZennium.setEISNaming("counter")
zahnerZennium.setEISCounter(1)
zahnerZennium.setEISOutputPath(r"C:\THALES\temp\test1")
zahnerZennium.setEISOutputFileName("spectra")
```
Setting the parameters for the spectra.
Alternatively a rule file can be used as a template.
```
zahnerZennium.setPotentiostatMode(PotentiostatMode.POTMODE_POTENTIOSTATIC)
zahnerZennium.setAmplitude(10e-3)
zahnerZennium.setPotential(0)
zahnerZennium.setLowerFrequencyLimit(0.01)
zahnerZennium.setStartFrequency(1000)
zahnerZennium.setUpperFrequencyLimit(200000)
zahnerZennium.setLowerNumberOfPeriods(3)
zahnerZennium.setLowerStepsPerDecade(5)
zahnerZennium.setUpperNumberOfPeriods(20)
zahnerZennium.setUpperStepsPerDecade(10)
zahnerZennium.setScanDirection("startToMax")
zahnerZennium.setScanStrategy("single")
```
After setting the parameters, the measurement is started.
<div class="alert alert-block alert-info">
<b>Note:</b> If the potentiostat is set to potentiostatic before the impedance measurement and is switched off, the measurement is performed at the open circuit voltage/potential.
</div>
After the measurement the potentiostat is switched off.
```
zahnerZennium.enablePotentiostat()
zahnerZennium.measureEIS()
zahnerZennium.disablePotentiostat()
zenniumConnection.disconnectFromTerm()
```
# Importing the ism file
Import the spectrum from the previous measurement. This was saved under the set path and name with the number expanded.
The numbering starts at 1, therefore the following path results: "C:\THALES\temp\test1\spectra_0001.ism".
```
ismFile = IsmImport(r"C:\THALES\temp\test1\spectra_0001.ism")
impedanceFrequencies = ismFile.getFrequencyArray()
impedanceAbsolute = ismFile.getImpedanceArray()
impedancePhase = ismFile.getPhaseArray()
impedanceComplex = ismFile.getComplexImpedanceArray()
```
The Python datetime object of the measurement date is output to the console next.
```
print("Measurement end time: " + str(ismFile.getMeasurementEndDateTime()))
```
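Without the instrument or an .ism file at hand, the four arrays above can be stood in for by synthetic data for a simple circuit model, a series resistor in front of a parallel RC element; all component values below are hypothetical:

```python
import numpy as np

# Hypothetical circuit: R_s in series with (R_p parallel to C).
R_s, R_p, C = 100.0, 1000.0, 1e-6

impedanceFrequencies = np.logspace(-2, np.log10(200e3), 60)  # 10 mHz .. 200 kHz
omega = 2 * np.pi * impedanceFrequencies
impedanceComplex = R_s + R_p / (1 + 1j * omega * R_p * C)
impedanceAbsolute = np.abs(impedanceComplex)
impedancePhase = np.angle(impedanceComplex)  # radians, like the .ism arrays

# Sanity check: |Z| -> R_s + R_p at low frequency, -> R_s at high frequency.
print(int(round(impedanceAbsolute[0])), int(round(impedanceAbsolute[-1])))  # -> 1100 100
```

The plotting code in the next sections then runs unchanged on these arrays.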
# Displaying the measurement results
The spectra are presented in the Bode and Nyquist representation. For this test, the Zahner test box was measured in the lin position.
## Nyquist Plot
The matplotlib diagram is configured to match the Nyquist representation. For this, the diagram aspect is set equal and the axes are labeled in engineering units. The axis labeling is realized with [LaTeX](https://www.latex-project.org/) for subscript text.
The possible settings of the graph can be found in the detailed documentation and tutorials of [matplotlib](https://matplotlib.org/).
```
figNyquist, (nyquistAxis) = plt.subplots(1, 1)
figNyquist.suptitle("Nyquist")
nyquistAxis.plot(np.real(impedanceComplex), -np.imag(impedanceComplex), marker="x", markersize=5)
nyquistAxis.grid(which="both")
nyquistAxis.set_aspect("equal")
nyquistAxis.xaxis.set_major_formatter(EngFormatter(unit=r"$\Omega$"))
nyquistAxis.yaxis.set_major_formatter(EngFormatter(unit=r"$\Omega$"))
nyquistAxis.set_xlabel(r"$Z_{\rm re}$")
nyquistAxis.set_ylabel(r"$-Z_{\rm im}$")
figNyquist.set_size_inches(18, 18)
plt.show()
figNyquist.savefig("nyquist.svg")
```
## Bode Plot
The matplotlib representation was also adapted for the Bode plot. A figure with two plots was created for the separate display of phase and impedance which are plotted over the same x-axis.
```
figBode, (impedanceAxis, phaseAxis) = plt.subplots(2, 1, sharex=True)
figBode.suptitle("Bode")
impedanceAxis.loglog(impedanceFrequencies, impedanceAbsolute, marker="+", markersize=5)
impedanceAxis.xaxis.set_major_formatter(EngFormatter(unit="Hz"))
impedanceAxis.yaxis.set_major_formatter(EngFormatter(unit=r"$\Omega$"))
impedanceAxis.set_xlabel(r"$f$")
impedanceAxis.set_ylabel(r"$|Z|$")
impedanceAxis.grid(which="both")
phaseAxis.semilogx(impedanceFrequencies, np.abs(np.degrees(impedancePhase)), marker="+", markersize=5)
phaseAxis.xaxis.set_major_formatter(EngFormatter(unit="Hz"))
phaseAxis.yaxis.set_major_formatter(EngFormatter(unit="$°$", sep=""))
phaseAxis.set_xlabel(r"$f$")
phaseAxis.set_ylabel(r"$|Phase|$")
phaseAxis.grid(which="both")
phaseAxis.set_ylim([0, 90])
figBode.set_size_inches(18, 12)
plt.show()
figBode.savefig("bode.svg")
```
# Deployment of the source code
**The following instruction is not needed by the user.**
It automatically extracts the pure python code from the jupyter notebook to provide it to the user. Thus the user does not need jupyter itself and does not have to copy the code manually.
The source code is saved in a .py file with the same name as the notebook.
```
if executionInNotebook() == True:
notebookCodeToPython("EISImportPlot.ipynb")
```
| github_jupyter |
#### Omega and Xi
To implement Graph SLAM, a matrix and a vector (omega and xi, respectively) are introduced. The matrix is square and labelled with all the robot poses (xi) and all the landmarks (Li). Every time you make an observation, for example, as you move between two poses by some distance `dx` and can relate those two positions, you can represent this as a numerical relationship in these matrices.
It's easiest to see how these work in an example. Below you can see a matrix representation of omega and a vector representation of xi.
<img src='images/omega_xi.png' width="20%" height="20%" />
Next, let's look at a simple example that relates 3 poses to one another.
* When you start out in the world most of these values are zeros or contain only values from the initial robot position
* In this example, you have been given constraints, which relate these poses to one another
* Constraints translate into matrix values
<img src='images/omega_xi_constraints.png' width="70%" height="70%" />
If you have ever solved linear systems of equations before, this may look familiar, and if not, let's keep going!
### Solving for x
To "solve" for all these x values, we can use linear algebra; all the values of x are in the vector `mu` which can be calculated as a product of the inverse of omega times xi.
<img src='images/solution.png' width="30%" height="30%" />
---
**You can confirm this result for yourself by executing the math in the cell below.**
```
import numpy as np
# define omega and xi as in the example
omega = np.array([[1,0,0],
[-1,1,0],
[0,-1,1]])
xi = np.array([[-3],
[5],
[3]])
# calculate the inverse of omega
omega_inv = np.linalg.inv(omega)

# calculate the solution, mu, as the matrix product of omega's inverse and xi
mu = omega_inv @ xi
# print out the values of mu (x0, x1, x2)
print(mu)
```
## Motion Constraints and Landmarks
In the last example, the constraint equations, relating one pose to another were given to you. In this next example, let's look at how motion (and similarly, sensor measurements) can be used to create constraints and fill up the constraint matrices, omega and xi. Let's start with empty/zero matrices.
<img src='images/initial_constraints.png' width="35%" height="35%" />
This example also includes relationships between poses and landmarks. Say we move from x0 to x1 with a displacement `dx` of 5. Then we have created a motion constraint that relates x0 to x1, and we can start to fill up these matrices.
<img src='images/motion_constraint.png' width="50%" height="50%" />
In fact, the one constraint equation can be written in two ways. So, the motion constraint that relates x0 and x1 by the motion of 5 has affected the matrix, adding values for *all* elements that correspond to x0 and x1.
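The bookkeeping above can be sketched in a few lines: start from zero matrices, pin down the initial pose, then add the `dx = 5` motion constraint once from each pose's point of view (the two-pose world below is a minimal example, not the project setup):

```python
import numpy as np

# Two poses (x0, x1): one initial-position constraint and one motion constraint.
omega = np.zeros((2, 2))
xi = np.zeros((2, 1))

# Initial position constraint: x0 = 0.
omega[0, 0] += 1
xi[0, 0] += 0

# Motion constraint x1 - x0 = 5, written symmetrically for both poses.
omega[0, 0] += 1; omega[0, 1] += -1; xi[0, 0] += -5
omega[1, 1] += 1; omega[1, 0] += -1; xi[1, 0] += 5

# Solve for the poses, mu = omega^-1 * xi.
mu = np.linalg.solve(omega, xi)
print(mu.ravel())  # x0 stays at 0, x1 lands at 5
```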
---
### 2D case
In these examples, we've been showing you change in only one dimension, the x-dimension. In the project, it will be up to you to represent x and y positional values in omega and xi. One solution could be to create an omega and xi that are twice as large as the number of robot poses (generated over a series of time steps) plus the number of landmarks, so that they can hold both x and y values for poses and landmark locations. I might suggest drawing out a rough solution to graph SLAM as you read the instructions in the next notebook; that always helps me organize my thoughts. Good luck!
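As a rough sketch of that sizing (the pose and landmark counts below are hypothetical), each pose and each landmark contributes one x and one y row/column:

```python
# Hypothetical counts for a run of the project.
num_poses, num_landmarks = 20, 5

# One x entry and one y entry per pose and per landmark.
size = 2 * (num_poses + num_landmarks)
print(size)  # omega would be size x size, xi a vector of length size
```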
| github_jupyter |
<a href="https://colab.research.google.com/github/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/Intro_to_Python_plus_NumPy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src="https://github.com/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/assets/icons/bitproject.png?raw=1" width="200" align="left">
<img src="https://github.com/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/assets/icons/data-science.jpg?raw=1" width="300" align="right">
# Introduction to Python
### Table of Contents
- Why, Where, and How we use Python
- What we will be learning today
- Goals
- Numbers
- Types of Numbers
- Basic Arithmetic
- Arithmetic Continued
- Variable Assignment
- Strings
- Creating Strings
- Printing Strings
- String Basics
- String Properties
- Basic Built-In String Methods
- Print Formatting
- **1.0 Now Try This**
- Booleans
- Lists
- Creating Lists
- Basic List Methods
- Nesting Lists
- List Comprehensions
- **2.0 Now Try This**
- Tuples
- Constructing Tuples
- Basic Tuple Methods
- Immutability
- When To Use Tuples
- **3.0 Now Try This**
- Dictionaries
- Constructing a Dictionary
- Nesting With Dictionaries
- Dictionary Methods
- **4.0 Now Try This**
- Comparison Operators
- Functions
- Intro to Functions
- `def` Statements
- Examples
- Using `return`
- **5.0 Now Try This**
- Modules and Packages
- Overview
- NumPy
- Creating Arrays
- Indexing
- Slicing
- **6.0 Now Try This**
- Data Types
- **7.0 Now Try This**
- Copy vs. View
- **8.0 Now Try This**
- Shape
- **9.0 Now Try This**
- Iterating Through Arrays
- Joining Arrays
- Splitting Arrays
- Searching Arrays
- Sorting Arrays
- Filtering Arrays
- **10.0 Now Try This**
- Resources
## Why, Where, and How we use Python
Python is a very popular scripting language that you can use to create applications and programs of all sizes and complexity. It is very easy to learn and has very little syntax, making it very efficient to code with. Python is also the language of choice for many when performing comprehensive data analysis.
## What we will be learning today
### Goals
- Understanding key Python data types, operators and data structures
- Understanding functions
- Understanding modules
- Understanding errors and exceptions
First data type we'll cover in detail is Numbers!
## Numbers
### Types of numbers
Python has various "types" of numbers. We'll strictly cover integers and floating point numbers for now.
Integers are just whole numbers, positive or negative. (2,4,-21,etc.)
Floating point numbers in Python have a decimal point in them, or use an exponential (e). For example 3.14 and 2.17 are *floats*. 5E7 (5 times 10 to the power of 7) is also a float. This is scientific notation and something you've probably seen in math classes.
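A quick way to see these types in action (a minimal sketch using the built-in `type()` function):

```python
# type() reports which kind of number Python sees.
print(type(2))     # <class 'int'>
print(type(3.14))  # <class 'float'>
print(type(5E7))   # <class 'float'> -- scientific notation always gives a float
print(5E7)         # 50000000.0
```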
Let's start working through numbers and arithmetic:
### Basic Arithmetic
```
# Addition
4+5
# Subtraction
5-10
# Multiplication
4*8
# Division
25/5
# Floor Division
12//4
```
What happened here?
The reason we get this result is because we are using "*floor*" division. The // operator (two forward slashes) rounds the result down to the nearest whole number instead of keeping the decimals. With two integer operands it always produces an integer answer.
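One detail worth seeing for yourself: floor division rounds *down* (toward negative infinity), which matters for negative numbers:

```python
print(13 // 4)   # -> 3   (3.25 floored to 3)
print(-13 // 4)  # -> -4  (-3.25 floored to -4, not truncated to -3)
```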
**So what if we just want the remainder of division?**
```
# Modulo
9 % 4
```
4 goes into 9 twice, with a remainder of 1. The % operator returns the remainder after division.
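Division, floor division, and modulo fit together through a simple identity, which you can check directly:

```python
a, b = 9, 4
# For integers a and b (b != 0): a == b * (a // b) + (a % b)
print(b * (a // b) + (a % b))  # -> 9
```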
### Arithmetic continued
```
# Powers
4**2
# A way to do roots
144**0.5
# Order of Operations
4 + 20 * 52 + 5
# Can use parentheses to specify orders
(21+5) * (4+89)
```
## Variable Assignments
We can do a lot more with Python than just using it as a calculator. We can store any numbers we create in **variables**.
We use a single equals sign to assign labels or values to variables. Let's see a few examples of how we can do this.
```
# Let's create a variable called "a" and assign to it the number 10
a = 10
a
```
Now if I call *a* in my Python script, Python will treat it as the integer 10.
```
# Adding the objects
a+a
```
What happens on reassignment? Will Python let us write it over?
```
# Reassignment
a = 20
# Check
a
```
Yes! Python allows you to write over assigned variable names. We can also use the variables themselves when doing the reassignment. Here is an example of what I mean:
```
# Use A to redefine A
a = a+a
# check
a
```
The names you use when creating these labels need to follow a few rules:
1. Names can not start with a number.
2. There can be no spaces in the name, use _ instead.
3. Can't use any of these symbols :'",<>/?|\()!@#$%^&*~-+
4. Using lowercase names are best practice.
5. Can't use words that have special meaning in Python, like "list" and "str"; we'll see why later.
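The standard library's `keyword` module can tell you whether a name is truly reserved. Names like `list` and `str` are not keywords but built-ins, so Python lets you rebind them, but doing so shadows the built-in (a small sketch):

```python
import keyword

print(keyword.iskeyword('for'))   # True: 'for' can never be a variable name
print(keyword.iskeyword('list'))  # False: 'list' is a built-in, not a keyword,
                                  # but rebinding it hides the built-in
```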
Using variable names can be a very useful way to keep track of different variables in Python. For example:
```
# Use object names to keep better track of what's going on in your code!
income = 1000
tax_rate = 0.2
taxes = income*tax_rate
# Show the result!
taxes
```
So what have we learned? We learned some of the basics of numbers in Python. We also learned how to do arithmetic and use Python as a basic calculator. We then wrapped it up with learning about Variable Assignment in Python.
Up next we'll learn about Strings!
## Strings
Strings are used in Python to record text information, such as names. Python treats a string as a *sequence*: a consecutive series of characters. For example, Python understands the string 'hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).
### Creating Strings
To create a string in Python you need to use either single quotes or double quotes. For example:
```
# A word
'hi'
# A phrase
'A string can even be a sentence like this.'
# Using double quotes
"The quote type doesn't really matter."
# Be wary of contractions and apostrophes!
'I'm using single quotes, but this will create an error!'
```
The reason for the error above is because the single quote in <code>I'm</code> stopped the string. You can use combinations of double and single quotes to get the complete statement.
```
"This shouldn't cause an error now."
```
Now let's learn about printing strings!
### Printing Strings
Jupyter Notebooks have many neat behaviors that aren't available in base Python. One of those is the ability to display a string by just typing it into a cell. The universal way to display strings, however, is to use the **print()** function.
```
# In Jupyter, this is all we need
'Hello World'
# This is the same as:
print('Hello World')
# Without the print function, we can't print multiple times in one block of code:
'Hello World'
'Second string'
```
A print statement can look like the following.
```
print('Hello World')
print('Second string')
print('\n prints a new line')
print('\n')
print('Just to prove it to you.')
```
Now let's move on to understanding how we can manipulate strings in our programs.
### String Basics
Oftentimes, we would like to know how many characters are in a string. We can do this very easily with the **len()** function (short for 'length').
```
len('Hello World')
```
Python's built-in len() function counts all of the characters in the string, including spaces and punctuation.
Naturally, we can assign strings to variables.
```
# Assign 'Hello World' to mystring variable
mystring = 'Hello World'
# Did it work?
mystring
# Print it to make sure
print(mystring)
```
As stated before, Python treats strings as a sequence of characters. That means we can interact with each letter in a string and manipulate it. The way we access these letters is called **indexing**. Each letter has an index, which corresponds to their position in the string. In python, indices start at 0. For instance, in the string 'Hello World', 'H' has an index of 0, 'e' has an index of 1, the 'W' has an index of 6 (because spaces count as characters), and 'd' has an index of 10. The syntax for indexing is shown below.
```
# Extract first character in a string.
mystring[0]
mystring[1]
mystring[2]
```
We can use a <code>:</code> to perform *slicing* which grabs everything up to a designated index. For example:
```
# Grab everything past the first letter, all the way to the end of the string
mystring[1:]
# This does not change the original string in any way
mystring
# Grab everything UP TO the 5th index
mystring[:5]
```
Note what happened above. We told Python to grab everything from 0 up to 5. It doesn't include the character in the 5th index. You'll notice this a lot in Python, where statements are usually in the context of "up to, but not including".
```
# The whole string
mystring[:]
# The 'default' values, if you leave the sides of the colon blank, are 0 and the length of the string
end = len(mystring)
# See that is matches above
mystring[0:end]
```
But we don't have to go forwards. Negative indexing allows us to start from the *end* of the string and work backwards.
```
# The LAST letter (one index 'behind' 0, so it loops back around)
mystring[-1]
# Grab everything but the last letter
mystring[:-1]
```
We can also use indexing and slicing to grab characters by a specified step size (1 is the default). See the following examples.
```
# Grab everything (default), go in steps size of 1
mystring[::1]
# Grab everything, but go in step sizes of 2 (every other letter)
mystring[0::2]
# A handy way to reverse a string!
mystring[::-1]
```
Strings have certain properties to them that affect the way we can, and cannot, interact with them.
### String Properties
It's important to note that strings are *immutable*. This means that once a string is created, the elements within it can not be changed or replaced. For example:
```
mystring
# Let's try to change the first letter
mystring[0] = 'a'
```
The error tells it to us straight. Strings do not support item assignment the way some other data types do.
However, we *can* **concatenate** strings.
```
mystring
# Combine strings through concatenation
mystring + ". It's me."
# We can reassign mystring to a new value, however
mystring = mystring + ". It's me."
mystring
```
One neat trick we can do with strings is use multiplication whenever we want to repeat characters a certain number of times.
```
letter = 'a'
letter*20
```
We already saw how to use len(). This is an example of a built-in string method, but there are quite a few more which we will cover next.
### Basic Built-in String methods
Objects in Python usually have built-in methods. These methods are functions inside the object that can perform actions or commands on the object itself.
We call methods with a period and then the method name. Methods are in the form:
object.method(parameters)
Parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. We will be going into more depth with these later.
Here are some examples of built-in methods in strings:
```
mystring
# Make all letters in a string uppercase
mystring.upper()
# Make all letters in a string lowercase
mystring.lower()
# Split strings with a specified character as the separator. Spaces are the default.
mystring.split()
# Split by a specific character (doesn't include the character in the resulting string)
mystring.split('W')
```
### 1.0 Now Try This
Given the string 'Amsterdam' give an index command that returns 'd'. Enter your code in the cell below:
```
s = 'Amsterdam'
# Print out 'd' using indexing
answer1 = # INSERT CODE HERE
print(answer1)
```
Reverse the string 'Amsterdam' using slicing:
```
s ='Amsterdam'
# Reverse the string using slicing
answer2 = # INSERT CODE HERE
print(answer2)
```
Given the string Amsterdam, extract the letter 'm' using negative indexing.
```
s ='Amsterdam'
# Print out the 'm'
answer3 = # INSERT CODE HERE
print(answer3)
```
## Booleans
Python comes with *booleans* (values that are essentially binary: True or False, 1 or 0). It also has a placeholder object called None. Let's walk through a few quick examples of Booleans.
```
# Set object to be a boolean
a = True
#Show
a
```
We can also use comparison operators to create booleans. We'll cover comparison operators a little later.
```
# Output is boolean
1 > 2
```
We can use None as a placeholder for an object that we don't want to reassign yet:
```
# None placeholder
b = None
# Show
print(b)
```
That's all to booleans! Next we start covering data structures. First up, lists.
## Lists
Earlier, when discussing strings, we introduced the concept of a *sequence*. Lists are the most general version of sequences in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed!
Lists are constructed with brackets [] and commas separating every element in the list.
Let's start with seeing how we can build a list.
### Creating Lists
```
# Assign a list to an variable named my_list
my_list = [1,2,3]
```
We just created a list of integers, but lists can actually hold elements of multiple data types. For example:
```
my_list = ['A string',23,100.232,'o']
```
Just like strings, the len() function will tell you how many items are in the sequence of the list.
```
len(my_list)
my_list = ['one','two','three',4,5]
# Grab element at index 0
my_list[0]
# Grab index 1 and everything past it
my_list[1:]
# Grab everything UP TO index 3
my_list[:3]
```
We can also use + to concatenate lists, just like we did for strings.
```
my_list + ['new item']
```
Note: This doesn't actually change the original list!
```
my_list
```
You would have to reassign the list to make the change permanent.
```
# Reassign
my_list = my_list + ['add new item permanently']
my_list
```
We can also use the * for a duplication method similar to strings:
```
# Make the list double
my_list * 2
# Again doubling not permanent
my_list
```
Use the **append** method to permanently add an item to the end of a list:
```
# Append
my_list.append('append me!')

# Show
my_list
```
### List Comprehensions
Python has an advanced feature called list comprehensions. They allow for quick construction of lists. To fully understand list comprehensions we need to understand for loops. So don't worry if you don't completely understand this section, and feel free to just skip it since we will return to this topic later.
But in case you want to know now, here are a few examples!
```
# First define a matrix as a list of lists
matrix = [[1,2,3],[4,5,6],[7,8,9]]

# Build a list comprehension by deconstructing a for loop within a []
first_col = [row[0] for row in matrix]
first_col
```
We used a list comprehension here to grab the first element of every row in the matrix object. We will cover this in much more detail later on!
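Another quick taste of the same idea, building a list of squares in one line:

```python
# The expression x**2 runs once for each x produced by range(5).
squares = [x**2 for x in range(5)]
print(squares)  # -> [0, 1, 4, 9, 16]
```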
### 2.0 Now Try This
Build this list [0,0,0] using any of the shown ways.
```
# Build the list
answer1 = #INSERT CODE HERE
print(answer1)
```
## Tuples
In Python, tuples are very similar to lists; however, unlike lists they are *immutable*, meaning they cannot be changed. You would use tuples to represent things that shouldn't change, such as days of the week, or dates on a calendar.
You'll have an intuition of how to use tuples based on what you've learned about lists. We can treat them very similarly with the major distinction being that tuples are immutable.
### Constructing Tuples
The construction of a tuples use () with elements separated by commas. For example:
```
# Create a tuple
t = (1,2,3)
# Check len just like a list
len(t)
# Can also mix object types
t = ('one',2)
# Show
t
# Use indexing just like we did in lists
t[0]
# Slicing just like a list
t[-1]
```
### Basic Tuple Methods
Tuples have built-in methods, but not as many as lists do. Let's look at two of them:
```
# Use .index to enter a value and return the index
t.index('one')
# Use .count to count the number of times a value appears
t.count('one')
```
### Immutability
It can't be stressed enough that tuples are immutable. To drive that point home:
```
t[0]= 'change'
```
Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.
```
t.append('nope')
```
### When to use Tuples
You may be wondering, "Why bother using tuples when they have fewer available methods?" To be honest, tuples are not used as often as lists in programming, but are used when immutability is necessary. If in your program you are passing around an object and need to make sure it does not get changed, then a tuple becomes your solution. It provides a convenient source of data integrity.
You should now be able to create and use tuples in your programming as well as have an understanding of their immutability.
### 3.0 Now Try This
Create a tuple.
```
answer1 = #INSERT CODE HERE
print(type(answer1))
```
## Dictionaries
We've been learning about *sequences* in Python but now we're going to switch gears and learn about *mappings* in Python. If you're familiar with other languages you can think of dictionaries as hash tables.
So what are mappings? Mappings are collections of objects that are stored by a *key*, unlike a sequence, which stores objects by their relative position. This is an important distinction, since mappings won't retain order: there is no inherent *order* to keys.
A Python dictionary consists of a key and then an associated value. That value can be almost any Python object.
### Constructing a Dictionary
Let's see how we can build dictionaries and better understand how they work.
```
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}
# Call values by their key
my_dict['key2']
```
It's important to note that dictionaries are very flexible in the data types they can hold. For example:
```
my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']}
# Let's call items from the dictionary
my_dict['key3']
# Can call an index on that value
my_dict['key3'][0]
# Can then even call methods on that value
my_dict['key3'][0].upper()
```
We can affect the values of a key as well. For instance:
```
my_dict['key1']
# Subtract 123 from the value
my_dict['key1'] = my_dict['key1'] - 123
#Check
my_dict['key1']
```
A quick note: Python has built-in operators for in-place subtraction and addition (and multiplication and division). We could also have used += or -= for the statement above. For example:
```
# Set the object equal to itself minus 123
my_dict['key1'] -= 123
my_dict['key1']
```
We can also create keys by assignment. For instance if we started off with an empty dictionary, we could continually add to it:
```
# Create a new dictionary
d = {}
# Create a new key through assignment
d['animal'] = 'Dog'
# Can do this with any object
d['answer'] = 42
#Show
d
```
### Nesting with Dictionaries
Hopefully you're starting to see how powerful Python is with its flexibility of nesting objects and calling methods on them. Let's see a dictionary nested inside a dictionary:
```
# Dictionary nested inside a dictionary nested inside a dictionary
d = {'key1':{'nestkey':{'subnestkey':'value'}}}
```
Seems complicated, but let's see how we can grab that value:
```
# Keep calling the keys
d['key1']['nestkey']['subnestkey']
```
### Dictionary Methods
There are a few methods we can call on a dictionary. Let's get a quick introduction to a few of them:
```
# Create a typical dictionary
d = {'key1':1,'key2':2,'key3':3}
# Method to return a list of all keys
d.keys()
# Method to grab all values
d.values()
# Method to return all items as key-value tuples
d.items()
```
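A common follow-up to these methods (a brief sketch, not from the original) is looping over the key-value pairs that `.items()` returns:

```
d = {'key1': 1, 'key2': 2, 'key3': 3}

# .items() yields (key, value) tuples that we can unpack in the loop
for key, value in d.items():
    print(key, '->', value)
```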
### 4.0 Now Try This
Using keys and indexing, grab the 'hello' from the following dictionaries:
```
d = {'simple_key':'hello'}
# Grab 'hello'
answer1 = #INSERT CODE HERE
print(answer1)
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
answer2 = #INSERT CODE HERE
print(answer2)
# Getting a little trickier
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
#Grab hello
answer3 = #INSERT CODE HERE
print(answer3)
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
# Grab hello
answer4 = #INSERT CODE HERE
print(answer4)
```
## Comparison Operators
As stated previously, comparison operators allow us to compare variables and output a Boolean value (True or False).
These operators are the same ones you've seen in math, so there's nothing new here.
First we'll present a table of the comparison operators and then work through some examples:
<h2> Table of Comparison Operators </h2><p> In the table below, a=9 and b=11.</p>
<table class="table table-bordered">
<tr>
<th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th>
</tr>
<tr>
<td>==</td>
<td>If the values of two operands are equal, then the condition becomes true.</td>
<td> (a == b) is not true.</td>
</tr>
<tr>
<td>!=</td>
<td>If the values of two operands are not equal, then the condition becomes true.</td>
<td>(a != b) is true</td>
</tr>
<tr>
<td>></td>
<td>If the value of the left operand is greater than the value of the right operand, then the condition becomes true.</td>
<td> (a > b) is not true.</td>
</tr>
<tr>
<td><</td>
<td>If the value of the left operand is less than the value of the right operand, then the condition becomes true.</td>
<td> (a < b) is true.</td>
</tr>
<tr>
<td>>=</td>
<td>If the value of the left operand is greater than or equal to the value of the right operand, then the condition becomes true.</td>
<td> (a >= b) is not true. </td>
</tr>
<tr>
<td><=</td>
<td>If the value of the left operand is less than or equal to the value of the right operand, then the condition becomes true.</td>
<td> (a <= b) is true. </td>
</tr>
</table>
Let's now work through quick examples of each of these.
#### Equal
```
4 == 4
1 == 0
```
Note that <code>==</code> is a <em>comparison</em> operator, while <code>=</code> is an <em>assignment</em> operator.
#### Not Equal
```
4 != 5
1 != 1
```
#### Greater Than
```
8 > 3
1 > 9
```
#### Less Than
```
3 < 8
7 < 0
```
#### Greater Than or Equal to
```
7 >= 7
9 >= 4
```
#### Less than or Equal to
```
4 <= 4
1 <= 3
```
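One extra feature worth knowing alongside the examples above (a short sketch): Python lets you chain comparison operators, which reads like mathematical notation.

```
# Chained comparison: equivalent to (1 < 2) and (2 < 3)
print(1 < 2 < 3)   # True
# Equivalent to (1 < 2) and (2 > 5)
print(1 < 2 > 5)   # False
```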
Hopefully this was more of a review than anything new! Next, we move on to one of the most important aspects of building programs: functions and how to use them.
## Functions
### Introduction to Functions
Here, we will explain what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems.
**So what is a function?**
Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.
On a more fundamental level, functions let us avoid writing the same code again and again. If you remember back to the lessons on strings and lists, we used the function len() to get the length of a sequence. Since checking the length of a sequence is such a common task, it makes sense to have a function that does it on demand.
Functions are one of the most basic levels of reusing code in Python, and they also let us start thinking about program design.
### def Statements
Let's see how to build out a function's syntax in Python. It has the following form:
```
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (docstring) goes
'''
# Do stuff here
# Return desired result
```
We begin with <code>def</code>, then a space, followed by the name of the function. Try to keep names relevant; for example, len() is a good name for a length function. Also be careful with names: you wouldn't want to call a function the same name as a [built-in function in Python](https://docs.python.org/3/library/functions.html) (such as len).
Next comes a pair of parentheses containing any arguments, separated by commas. These arguments are the inputs for your function; you'll be able to reference them inside it. After this you put a colon.
Now here is the important step: you must indent the code inside your function correctly. Python makes use of *whitespace* to organize code. Lots of other programming languages do not, so keep that in mind.
Next you'll see the docstring; this is where you write a basic description of the function. Docstrings are not necessary for simple functions, but it's good practice to include them so you or other people can easily understand the code you write.
After all this you begin writing the code you wish to execute.
The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before.
### A simple print 'hello' function
```
def say_hello():
print('hello')
```
Call the function:
```
say_hello()
```
### A simple greeting function
Let's write a function that greets people with their name.
```
def greeting(name):
print('Hello %s' %(name))
greeting('Bob')
```
### Using return
Let's see some examples that use a <code>return</code> statement. <code>return</code> allows a function to *return* a result that can then be stored as a variable, or used in whatever manner the user wants.
### Example 3: Addition function
```
def add_num(num1,num2):
return num1+num2
add_num(4,5)
# Can also save as variable due to return
result = add_num(4,5)
print(result)
```
What happens if we input two strings?
```
add_num('one','two')
```
Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts in the correct arguments into a function.
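One hedged sketch of such a check (using `isinstance`, one common approach; the name `add_num_checked` is hypothetical):

```
def add_num_checked(num1, num2):
    # Reject anything that is not an int or float before adding
    if not isinstance(num1, (int, float)) or not isinstance(num2, (int, float)):
        raise TypeError('both arguments must be numbers')
    return num1 + num2

print(add_num_checked(4, 5))
```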
Let's also start using <code>break</code>, <code>continue</code>, and <code>pass</code> statements in our code. We introduced these during the <code>while</code> lecture.
Finally let's go over a full example of creating a function to check if a number is prime (a common interview exercise).
We know a number is prime if it is only evenly divisible by 1 and itself. Let's write a first version of the function that checks every number from 2 up to N - 1 and performs modulo checks.
```
def is_prime(num):
    '''
    Naive method of checking for primes.
    '''
    for n in range(2, num):  # range(2, num) yields the integers from 2 up to, but not including, num
        if num % n == 0:
            print(num, 'is not prime')
            break  # exit the loop as soon as a divisor is found
    else:  # the for-loop's else runs only if we never hit break, i.e. no divisor was found
        print(num, 'is prime!')
is_prime(16)
is_prime(17)
```
Note how the <code>else</code> lines up under <code>for</code> and not <code>if</code>. This is because we want the <code>for</code> loop to exhaust all possibilities in the range before printing our number is prime.
Also note how we break the code after the first print statement. As soon as we determine that a number is not prime we break out of the <code>for</code> loop.
We can actually improve this function by only checking to the square root of the target number, and by disregarding all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements:
```
import math
def is_prime2(num):
'''
Better method of checking for primes.
'''
if num % 2 == 0 and num > 2:
return False
for i in range(3, int(math.sqrt(num)) + 1, 2):
if num % i == 0:
return False
return True
is_prime2(27)
```
Why don't we have any <code>break</code> statements? It should be noted that as soon as a function *returns* something, it shuts down. A function can deliver multiple print statements, but it will only obey one <code>return</code>.
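A tiny sketch (not from the original) illustrating that a function stops at its first executed `return`:

```
def first_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            return n  # the function exits here immediately
    return None  # reached only if no even number was found

print(first_even([3, 5, 8, 10]))  # 8, not 10 - we never reach the later match
```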
### 5.0 Now Try This
Write a function that capitalizes the first and fourth letters of a name. For this, you might want to make use of a string's `.upper()` method.
`cap_four('macdonald')` --> `'MacDonald'`
Note: `'macdonald'.capitalize()` returns `'Macdonald'`
```
def cap_four(name):
return new_name
# Check
answer1 = cap_four('macdonald')
print(answer1)
```
## Modules and Packages
### Understanding modules
Modules in Python are simply Python files with the .py extension which implement a set of functions. To use a module, we import it with the `import` command. You can check out the full list of built-in modules in the [Python standard library documentation](https://docs.python.org/3/py-modindex.html).
The first time a module is loaded into a running Python script, it is initialized by executing the code in the module once. If another module in your code imports the same module again, it will not be loaded twice.
If we want to import the math module, we simply import the name of the module:
```
# import the library
import math
# use it (ceiling rounding)
math.ceil(3.2)
```
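Two other common import forms you will encounter (a brief sketch): importing a single name from a module, and importing a module under an alias.

```
# Import a single name from a module
from math import floor
print(floor(3.8))

# Import a module under an alias
import math as m
print(m.ceil(3.2))
```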
## Why, Where, and How we use NumPy
NumPy is a library for Python that allows you to create matrices and multidimensional arrays, as well as perform many sophisticated mathematical operations on them. Previously, dealing with anything more than a one-dimensional array was very difficult in base Python. Additionally, there wasn't much built-in functionality for the standard mathematical operations data scientists typically perform on data, such as transposes, dot products, and cumulative sums. All of this makes NumPy very useful for statistical analyses and for analyzing datasets to produce insights.
### Creating Arrays
NumPy allows you to work with arrays very efficiently. The array object in NumPy is called *ndarray*. This is short for 'n-dimensional array'.
We can create a NumPy ndarray object by using the array() function.
```
import numpy as np
arr = np.array([1,2,3,4,5,6,7,8,9,10])
print(arr)
print(type(arr))
```
### Indexing
Indexing is the same thing as accessing an element of a list or string. In this case, we will be accessing an array element.
You can access an array element by referring to its **index number**. The indexes in NumPy arrays start with 0, also like in base Python.
The following example shows how you can access multiple elements of an array and perform operations on them.
```
import numpy as np
arr = np.array([1,2,3,4,5,6,7,8,9,10])
print(arr[4] + arr[8])
```
### Slicing
Slicing in NumPy behaves much like in base Python; a quick recap from above:
We slice using this syntax: `[start:end]`.
We can also define the step, like this: `[start:end:step]`.
```
# Reverse an array through backwards/negative stepping
import numpy as np
arr = np.array([3,7,9,0])
print(arr[::-1])
# Slice elements from the beginning to index 8
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6, 7,8,9,10])
print(arr[:8])
```
You'll notice we only got to index 7. That's because the end is always *non-inclusive*. We slice up to but not including the end value. The start index on the other hand, **is** inclusive.
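The step form described above in action (a short sketch):

```
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# Every second element from index 0 up to (not including) index 8
print(arr[0:8:2])  # [1 3 5 7]
```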
### 6.0 Now Try This:
Create an array of at least size 10 and populate it with random numbers. Then use slicing to split it into two new arrays. Finally, find the sum of the third elements of each array.
```
# Answer here
```
### Data Types
Just like base Python, NumPy has many data types available. They are all differentiated by a single character, and here are the most common few:
* i - int / integer (whole numbers)
* b - boolean (true/false)
* f - float (decimal numbers)
* S - string
* There are many more too!
```
# Checking the data type of an array
import numpy as np
arr = np.array([5, 7, 3, 1])
print(arr.dtype)
# How to convert between types
import numpy as np
arr = np.array([4.4, 24.1, 3.7])
print(arr)
print(arr.dtype)
# Convert to integers; astype('i') truncates the decimal part
newarr = arr.astype('i')
print(newarr)
print(newarr.dtype)
```
### 7.0 Now Try This:
Modify the code below to fix the error and make the addition work:
```
import numpy as np
arr = np.array([1,3,5,7],dtype='S')
arr2 = np.array([2,4,6,8],dtype='i')
print(arr + arr2)
```
### Copy vs. View
In NumPy, you can work with either a copy of the data or the data itself, and it's very important that you know the difference. Namely, modifying a copy of the data will not change the original dataset but modifying the view **will**. Here are some examples:
```
# A Copy
import numpy as np
arr = np.array([6, 2, 1, 5, 3])
x = arr.copy()
arr[0] = 8
print(arr)
print(x)
# A View
import numpy as np
arr = np.array([6, 2, 1, 5, 3])
x = arr.view()
arr[0] = 8
print(arr)
print(x)
```
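If you're ever unsure whether an array owns its data, NumPy exposes a `base` attribute: it is `None` for a copy and references the original array for a view. A quick sketch:

```
import numpy as np

arr = np.array([6, 2, 1, 5, 3])
x = arr.copy()
y = arr.view()

print(x.base)         # None - the copy owns its data
print(y.base is arr)  # True - the view points back at arr
```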
### 8.0 Now Try This:
A student wants to create a copy of an array and modify the first element. The following is the code they wrote for it:
```
arr = np.array([1,2,3,4,5])
x = arr
x[0] = 0
```
Is this correct?
### Shape
All NumPy arrays have an attribute called *shape*. It is most helpful for 2-d or n-dimensional arrays; for a simple 1-d array it is just a one-element tuple holding the number of elements.
```
# Print the shape of an array
import numpy as np
arr = np.array([2,7,3,7])
print(arr.shape)
```
### 9.0 Now Try This:
Without using Python, what is the shape of this array? Answer in the same format as the `shape` attribute's output.
```
arr = np.array([[0,1,2],[3,4,5]])
```
### Iterating Through Arrays
Iterating simply means to traverse or travel through an object. In the case of arrays, we can iterate through them by using simple for loops.
```
import numpy as np
arr = np.array([1, 5, 7])
for x in arr:
print(x)
```
### Joining Arrays
Joining means combining the elements of multiple arrays into one.
The basic way to do it is like this:
```
import numpy as np
arr1 = np.array([7, 1, 0])
arr2 = np.array([2, 8, 1])
arr = np.concatenate((arr1, arr2))
print(arr)
```
### Splitting Arrays
Splitting is the opposite of joining: it takes one array and creates multiple arrays from it.
```
# Split array into 4
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6,7,8])
newarr = np.array_split(arr, 4)
print(newarr)
```
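`array_split()` also handles the case where the array does not divide evenly, distributing the remainder across the first sub-arrays (a brief sketch):

```
import numpy as np

# 7 elements into 3 parts: the first part gets the extra element
arr = np.array([1, 2, 3, 4, 5, 6, 7])
parts = np.array_split(arr, 3)
print(parts)  # [array([1, 2, 3]), array([4, 5]), array([6, 7])]
```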
### Searching Arrays
Searching an array for a certain element is a very important and basic operation. We can do this using the *where()* function.
```
import numpy as np
arr = np.array([1, 2, 5, 9, 5, 3, 4])
x = np.where(arr == 4) # Returns the index of the array element(s) that matches this condition
print(x)
# Find all the odd numbers in an array
import numpy as np
arr = np.array([10, 20, 30, 40, 50, 60, 70, 80,99])
x = np.where(arr%2 == 1)
print(x)
```
### Sorting Arrays
Sorting an array is another very important and commonly used operation. NumPy has a function called sort() for this task.
```
import numpy as np
arr = np.array([4, 1, 0, 3])
print(np.sort(arr))
# Sorting a string array alphabetically
import numpy as np
arr = np.array(['zephyr', 'gate', 'match'])
print(np.sort(arr))
```
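If you need the *indices* that would sort the array, rather than the sorted values themselves, `argsort()` is the usual companion to `sort()` (a short sketch):

```
import numpy as np

arr = np.array([4, 1, 0, 3])
order = np.argsort(arr)
print(order)       # [2 1 3 0] - positions of the values in sorted order
print(arr[order])  # [0 1 3 4] - same result as np.sort(arr)
```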
### Filtering Arrays
Sometimes you want to create a new array from an existing one by selecting only the elements that satisfy a certain condition. Say you have an array with all the integers from 1 to 10 and you want a new array containing only the odd numbers. You can do this very efficiently with **filtering**: when you filter, you keep only what you want.

NumPy filters with what's called a **boolean index list**: an array of True and False values that corresponds element-by-element to the target array, marking which values to keep. Using the example above, the target array would look like this:
[1,2,3,4,5,6,7,8,9,10]
And if you wanted to filter out the odd values, you would use this particular boolean index list:
[True,False,True,False,True,False,True,False,True,False]
Applying this list onto the target array will get you what you want:
[1,3,5,7,9]
A working code example is shown below:
```
import numpy as np
arr = np.array([51, 52, 53, 54])
x = [False, False, True, True]
newarr = arr[x]
print(newarr)
```
We don't need to hard-code the True and False values. As stated previously, we can filter based on conditions.
```
arr = np.array([51, 52, 53, 54])
# Create an empty list
filter_arr = []
# go through each element in arr
for element in arr:
# if the element is higher than 52, set the value to True, otherwise False:
if element > 52:
filter_arr.append(True)
else:
filter_arr.append(False)
newarr = arr[filter_arr]
print(filter_arr)
print(newarr)
```
Filtering is a very common task when working with data and as such, NumPy has an even more efficient way to perform it. It is possible to create a boolean index list directly from the target array and then apply it to obtain the filtered array. See the example below:
```
import numpy as np
arr = np.array([10,20,30,40,50,60,70,80,90,100])
mask = arr > 50  # boolean index list built directly from the condition
filter_arr = arr[mask]
print(mask)
print(filter_arr)
```
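Conditions can also be combined with the element-wise operators `&` (and) and `|` (or); note these are NumPy's operators, not Python's `and`/`or`, and each condition needs its own parentheses. A sketch:

```
import numpy as np

arr = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])

# Elements greater than 30 AND less than 80
combined = arr[(arr > 30) & (arr < 80)]
print(combined)  # [40 50 60 70]
```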
### 10.0 Now Try This:
Create an array with the first 10 numbers of the Fibonacci sequence. Split this array into two. On each half, search for any multiples of 4. Next, filter both arrays for multiples of 5. Finally, take the two filtered arrays, join them, and sort them.
```
# Answer here
```
## Resources
- [Python Documentation](https://docs.python.org/3/)
- [Official Python Tutorial](https://docs.python.org/3/tutorial/)
- [W3Schools Python Tutorial](https://www.w3schools.com/python/)
Change the current working directory (useful when developing the machine-learning program on a remote machine; otherwise it is fine to skip this single line).
```
%cd /tmp/pycharm_project_881
import numpy as np
import pandas as pd
def sigmoid(x):
return 1/(1+np.exp(-x))
def softmax(x):
x = x - x.max(axis=1, keepdims=True)
return np.exp(x)/np.sum(np.exp(x),axis=1, keepdims=True)
df = pd.read_csv("adult.data.txt", names=["age","workclass","fnlwgt","education","education-num","marital-status" \
,"occupation","relationship","race","sex","capital-gain","capital-loss","hours-per-week","native-country","class"])
dx = pd.read_csv("adult.test.txt", names=["age","workclass","fnlwgt","education","education-num","marital-status" \
,"occupation","relationship","race","sex","capital-gain","capital-loss","hours-per-week","native-country","class"])
df.head()
for lf in df:
if df[lf].dtype == "object":
df[lf] = df[lf].astype("category").cat.codes
dx[lf] = dx[lf].astype("category").cat.codes
else :
df[lf] = (df[lf] - df[lf].mean())/(df[lf].max() - df[lf].min())
dx[lf] = (dx[lf] - dx[lf].mean()) / (dx[lf].max() - dx[lf].min())
df.head()
```
Prepare the data, set the initial hyperparameters, and initialize the weights.
```
x = df.drop(columns=["class"])
y = df["class"].values
x_test = dx.drop(columns=["class"])
y_test = dx["class"].values
multi_y = np.zeros((y.size, y.max()+1))
multi_y[np.arange(y.size), y] = 1
multi_y_test = np.zeros((y_test.size, y_test.max()+1))
multi_y_test[np.arange(y_test.size), y_test] = 1
inputSize = len(x.columns)
numberOfNodes = 150
numberOfClass = y.max() + 1
numberOfExamples = x.shape[0]
w1 = np.random.random_sample(size=(inputSize, numberOfNodes))
b1 = np.random.random_sample(numberOfNodes)
w2 = np.random.random_sample(size=(numberOfNodes, numberOfClass))
b2 = np.random.random_sample(numberOfClass)
batchSize = 32
trainNum = 150
learningRate = 0.01
# Start Training
for k in range(trainNum + 1):
cost = 0
accuracy = 0
for i in range(int(numberOfExamples/batchSize)):
# Forward-Propagation
z = x[i * batchSize : (i+1) * batchSize]
z_y = multi_y[i * batchSize : (i+1) * batchSize]
layer1 = np.matmul(z, w1) + b1
sig_layer1 = sigmoid(layer1)
layer2 = np.matmul(sig_layer1, w2) + b2
soft_layer2 = softmax(layer2)
pred = np.argmax(soft_layer2, axis=1)
# Cost Function: Cross-Entropy loss
cost += -(z_y * np.log(soft_layer2 + 1e-9) + (1-z_y) * np.log(1 - soft_layer2 + 1e-9)).sum()
accuracy += (pred == y[i * batchSize : (i + 1) * batchSize]).sum()
# Back-Propagation
dlayer2 = soft_layer2 - multi_y[i * batchSize : (i+1) * batchSize]
dw2 = np.matmul(sig_layer1.T, dlayer2) / batchSize
db2 = dlayer2.mean(axis=0)
dsig_layer1 = (dlayer2.dot(w2.T))
dlayer1 = sigmoid(layer1) * (1 - sigmoid(layer1)) * dsig_layer1
dw1 = np.matmul(z.T, dlayer1) / batchSize
db1 = dlayer1.mean(axis=0)
w2 -= learningRate * dw2
w1 -= learningRate * dw1
b2 -= learningRate * db2
b1 -= learningRate * db1
if k % 10 == 0 :
print("-------- # : {} ---------".format(k))
print("cost: {}".format(cost/numberOfExamples))
print("accuracy: {} %".format(accuracy/numberOfExamples * 100))
# Test the trained model
test_cost = 0
test_accuracy = 0
# Forward-Propagation
layer1 = np.matmul(x_test, w1) + b1
sig_layer1 = sigmoid(layer1)
layer2 = np.matmul(sig_layer1, w2) + b2
soft_layer2 = softmax(layer2)
pred = np.argmax(soft_layer2, axis=1)
# Cost Function: Cross-Entropy loss
test_cost += -(multi_y_test * np.log(soft_layer2 + 1e-9) + (1-multi_y_test) * np.log(1 - soft_layer2 + 1e-9)).sum()
test_accuracy += (pred == y_test).sum()
print("---- Result of applying test data to the trained model")
print("cost: {}".format(test_cost/y_test.size))
print("accuracy: {} %".format(test_accuracy/y_test.size * 100))
```
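The `softmax` above subtracts each row's maximum before exponentiating. This is the standard numerical-stability trick, and it does not change the result, since softmax is invariant to adding a constant per row. A small standalone check (re-defining the same function):

```
import numpy as np

def softmax(x):
    # Subtracting the row maximum prevents np.exp from overflowing
    x = x - x.max(axis=1, keepdims=True)
    return np.exp(x) / np.sum(np.exp(x), axis=1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0]])
shifted = logits + 1000.0  # np.exp(1003.0) would overflow without the trick

print(np.allclose(softmax(logits), softmax(shifted)))  # True
```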
# 3D Nuclear Segmentation with RetinaNet
```
import os
import errno
import numpy as np
import deepcell
import deepcell_retinamask
# Download the data (saves to ~/.keras/datasets)
filename = 'HEK293.trks'
test_size = 0.1 # % of data saved as test
seed = 0 # seed for random train-test split
(X_train, y_train), (X_test, y_test) = deepcell.datasets.tracked.hek293.load_tracked_data(
filename, test_size=test_size, seed=seed)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
```
### Set up filepath constants
```
# the path to the data file is currently required for `train_model_()` functions
# NOTE: Change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
```
### Set up training parameters
```
from tensorflow.keras.optimizers import SGD, Adam
from deepcell.utils.train_utils import rate_scheduler
model_name = 'retinamovie_model'
backbone = 'resnet50' # vgg16, vgg19, resnet50, densenet121, densenet169, densenet201
n_epoch = 3 # Number of training epochs
lr = 1e-5
optimizer = Adam(lr=lr, clipnorm=0.001)
lr_sched = rate_scheduler(lr=lr, decay=0.99)
batch_size = 1
num_classes = 1 # "object" is the only class
# Each head of the model uses its own loss
from deepcell_retinamask.losses import RetinaNetLosses
sigma = 3.0
alpha = 0.25
gamma = 2.0
iou_threshold = 0.5
max_detections = 100
mask_size = (28, 28)
retinanet_losses = RetinaNetLosses(
sigma=sigma, alpha=alpha, gamma=gamma,
iou_threshold=iou_threshold,
mask_size=mask_size)
loss = {
'regression': retinanet_losses.regress_loss,
'classification': retinanet_losses.classification_loss,
}
```
## Create the RetinaMovieMask Model
```
from deepcell_retinamask.utils.anchor_utils import get_anchor_parameters
flat_shape = [y_train.shape[0] * y_train.shape[1]] + list(y_train.shape[2:])
flat_y = np.reshape(y_train, tuple(flat_shape)).astype('int')
# Generate backbone information from the data
backbone_levels, pyramid_levels, anchor_params = get_anchor_parameters(flat_y)
fpb = 5 # number of frames in each training batch
from deepcell_retinamask import model_zoo
# Pass frames_per_batch > 1 to enable 3D mode!
model = model_zoo.RetinaNet(
backbone=backbone,
use_imagenet=True,
panoptic=False,
frames_per_batch=fpb,
num_classes=num_classes,
input_shape=X_train.shape[2:],
backbone_levels=backbone_levels,
pyramid_levels=pyramid_levels,
num_anchors=anchor_params.num_anchors())
prediction_model = model_zoo.retinanet_bbox(
model,
panoptic=False,
frames_per_batch=fpb,
max_detections=100,
anchor_params=anchor_params)
model.compile(loss=loss, optimizer=optimizer)
```
## Train the model
```
from deepcell_retinamask.image_generators import RetinaMovieDataGenerator
datagen = RetinaMovieDataGenerator(
rotation_range=180,
zoom_range=(0.8, 1.2),
horizontal_flip=True,
vertical_flip=True)
datagen_val = RetinaMovieDataGenerator()
train_data = datagen.flow(
{'X': X_train, 'y': y_train},
batch_size=1,
include_masks=False,
frames_per_batch=fpb,
pyramid_levels=pyramid_levels,
anchor_params=anchor_params)
val_data = datagen_val.flow(
{'X': X_test, 'y': y_test},
batch_size=1,
include_masks=False,
frames_per_batch=fpb,
pyramid_levels=pyramid_levels,
anchor_params=anchor_params)
from tensorflow.keras import callbacks
from deepcell_retinamask.callbacks import RedirectModel, Evaluate
iou_threshold = 0.5
score_threshold = 0.01
max_detections = 100
model.fit_generator(
train_data,
steps_per_epoch=X_train.shape[0] // batch_size,
epochs=n_epoch,
validation_data=val_data,
validation_steps=X_test.shape[0] // batch_size,
callbacks=[
callbacks.LearningRateScheduler(lr_sched),
callbacks.ModelCheckpoint(
os.path.join(MODEL_DIR, model_name + '.h5'),
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False),
RedirectModel(
Evaluate(val_data,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
max_detections=max_detections,
frames_per_batch=fpb,
weighted_average=True),
prediction_model)
])
```
## Evaluate results
```
from deepcell_retinamask.utils.anchor_utils import evaluate
iou_threshold = 0.5
score_threshold = 0.01
max_detections = 100
average_precisions = evaluate(
val_data,
prediction_model,
frames_per_batch=fpb,
iou_threshold=iou_threshold,
score_threshold=score_threshold,
max_detections=max_detections,
)
# print evaluation
total_instances = []
precisions = []
for label, (average_precision, num_annotations) in average_precisions.items():
print('{:.0f} instances of class'.format(num_annotations),
label, 'with average precision: {:.4f}'.format(average_precision))
total_instances.append(num_annotations)
precisions.append(average_precision)
if sum(total_instances) == 0:
print('No test instances found.')
else:
print('mAP using the weighted average of precisions among classes: {:.4f}'.format(
sum([a * b for a, b in zip(total_instances, precisions)]) / sum(total_instances)))
print('mAP: {:.4f}'.format(sum(precisions) / sum(x > 0 for x in total_instances)))
```
## Display results
```
import matplotlib.pyplot as plt
import os
import time
import numpy as np
from deepcell_retinamask.utils.plot_utils import draw_detections
index = np.random.randint(low=0, high=X_test.shape[0])
frame = np.random.randint(low=0, high=X_test.shape[1] - fpb)
print('Image Number:', index)
print('Frame Number:', frame)
image = X_test[index:index + 1, frame:frame + fpb]
mask = y_test[index:index + 1, frame:frame + fpb]
boxes, scores, labels = prediction_model.predict(image)
display = 0.01 * np.tile(np.expand_dims(image[0, ..., 0], axis=-1), (1, 1, 3))
mask = np.squeeze(mask)
draw_list = []
for i in range(fpb):
draw = 0.1 * np.tile(image[0, i].copy(), (1, 1, 3))
# draw detections
draw_detections(
draw, boxes[0, i], scores[0, i], labels[0, i],
label_to_name=lambda x: 'cell', score_threshold=0.5)
draw_list.append(draw)
fig, axes = plt.subplots(ncols=3, nrows=fpb, figsize=(25, 25), sharex=True, sharey=True)
for i in range(fpb):
axes[i, 0].imshow(display[i, ..., 0], cmap='jet')
axes[i, 0].set_title('Source Image - Frame {}'.format(frame + i))
axes[i, 1].imshow(mask[i], cmap='jet')
axes[i, 1].set_title('Labeled Image - Frame {}'.format(frame + i))
axes[i, 2].imshow(draw_list[i], cmap='jet')
axes[i, 2].set_title('Detections')
fig.tight_layout()
plt.show()
```
## Fashion Item Recognition with CNN
> Antonopoulos Ilias (p3352004) <br />
> Ndoja Silva (p3352017) <br />
> MSc Data Science AUEB
## Table of Contents
- [Data Loading](#Data-Loading)
- [Hyperparameter Tuning](#Hyperparameter-Tuning)
- [Model Selection](#Model-Selection)
- [Evaluation](#Evaluation)
```
import gc
import itertools
import numpy as np
import keras_tuner as kt
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import confusion_matrix
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices("GPU")))
```
### Data Loading
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images.shape
train_labels
set(train_labels)
test_images.shape
```
This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,
along with a test set of 10,000 images.
The classes are:
| Label | Description |
|:-----:|-------------|
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
```
class_names = [
"T-shirt/top",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Shirt",
"Sneaker",
"Bag",
"Ankle boot",
]
```
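Fashion-MNIST pixel values are integers in [0, 255]. A common preprocessing step before feeding them to a CNN (a sketch under that assumption; shown here with synthetic data standing in for `train_images`) is scaling to [0, 1] and adding the channel axis that the Conv2D input shape `(28, 28, 1)` expects:

```
import numpy as np

# Synthetic stand-in for train_images: shape (N, 28, 28), uint8 in [0, 255]
images = np.random.randint(0, 256, size=(5, 28, 28), dtype=np.uint8)

scaled = images.astype("float32") / 255.0  # values now in [0, 1]
scaled = scaled[..., np.newaxis]           # add channel axis -> (5, 28, 28, 1)

print(scaled.shape)
```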
### Hyperparameter Tuning
```
SEED = 123456
np.random.seed(SEED)
tf.random.set_seed(SEED)
def clean_up(model_):
tf.keras.backend.clear_session()
del model_
gc.collect()
def cnn_model_builder(hp):
"""Creates a HyperModel instance (or callable that takes hyperparameters and returns a Model instance)."""
model = tf.keras.Sequential(
[
tf.keras.layers.Conv2D(
filters=hp.Int("1st-filter", min_value=32, max_value=128, step=16),
kernel_size=(3, 3),
strides=(1, 1),
padding="same",
kernel_regularizer="l2",
dilation_rate=(1, 1),
activation="relu",
input_shape=(28, 28, 1),
name="1st-convolution",
),
tf.keras.layers.MaxPool2D(
pool_size=(2, 2), strides=(2, 2), padding="same", name="1st-max-pooling"
),
tf.keras.layers.Dropout(
rate=hp.Float("1st-dropout", min_value=0.0, max_value=0.4, step=0.1),
name="1st-dropout",
),
tf.keras.layers.Conv2D(
filters=hp.Int("2nd-filter", min_value=32, max_value=64, step=16),
kernel_size=(3, 3),
strides=(1, 1),
padding="same",
kernel_regularizer="l2",
dilation_rate=(1, 1),
activation="relu",
name="2nd-convolution",
),
tf.keras.layers.MaxPool2D(
pool_size=(2, 2), strides=(2, 2), padding="same", name="2nd-max-pooling"
),
tf.keras.layers.Dropout(
rate=hp.Float("2nd-dropout", min_value=0.0, max_value=0.4, step=0.1),
name="2nd-dropout",
),
tf.keras.layers.Flatten(name="flatten-layer"),
tf.keras.layers.Dense(
units=hp.Int("dense-layer-units", min_value=32, max_value=128, step=16),
kernel_regularizer="l2",
activation="relu",
name="dense-layer",
),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(units=10, activation="softmax", name="output-layer"),
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(
learning_rate=hp.Choice(
"learning-rate", values=[1e-3, 1e-4, 2 * 1e-4, 4 * 1e-4]
)
),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=["accuracy"],
)
return model
# BayesianOptimization tuning with Gaussian process
# THERE IS A BUG HERE: https://github.com/keras-team/keras-tuner/pull/655
# tuner = kt.BayesianOptimization(
# cnn_model_builder,
# objective="val_accuracy",
# max_trials=5, # the total number of trials (model configurations) to test at most
# allow_new_entries=True,
# tune_new_entries=True,
# seed=SEED,
# directory="hparam-tuning",
# project_name="cnn",
# )
# Li, Lisha, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar.
# "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization."
# Journal of Machine Learning Research 18 (2018): 1-52.
# https://jmlr.org/papers/v18/16-558.html
tuner = kt.Hyperband(
cnn_model_builder,
objective="val_accuracy",
max_epochs=50, # the maximum number of epochs to train one model
seed=SEED,
directory="hparam-tuning",
project_name="cnn",
)
tuner.search_space_summary()
stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
tuner.search(
train_images, train_labels, epochs=40, validation_split=0.2, callbacks=[stop_early]
)
# get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(
f"""
The hyperparameter search is complete. \n
Results
=======
|
---- optimal number of output filters in the 1st convolution : {best_hps.get('1st-filter')}
|
---- optimal first dropout rate : {best_hps.get('1st-dropout')}
|
---- optimal number of output filters in the 2nd convolution : {best_hps.get('2nd-filter')}
|
---- optimal second dropout rate : {best_hps.get('2nd-dropout')}
|
---- optimal number of units in the densely-connected layer : {best_hps.get('dense-layer-units')}
|
---- optimal learning rate for the optimizer : {best_hps.get('learning-rate')}
"""
)
```
### Model Selection
```
model = tuner.get_best_models(num_models=1)[0]
model.summary()
tf.keras.utils.plot_model(
model, to_file="static/cnn_model.png", show_shapes=True, show_layer_names=True
)
clean_up(model)
# build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(train_images, train_labels, epochs=50, validation_split=0.2)
# keep best epoch
val_acc_per_epoch = history.history["val_accuracy"]
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print("Best epoch: %d" % (best_epoch,))
clean_up(model)
hypermodel = tuner.hypermodel.build(best_hps)
# retrain the model
history = hypermodel.fit(
train_images, train_labels, epochs=best_epoch, validation_split=0.2
)
```
### Evaluation
```
eval_result = hypermodel.evaluate(test_images, test_labels, verbose=3)
print("[test loss, test accuracy]:", eval_result)
def plot_history(hs, epochs, metric):
print()
plt.style.use("dark_background")
plt.rcParams["figure.figsize"] = [15, 8]
plt.rcParams["font.size"] = 16
plt.clf()
for label in hs:
plt.plot(
hs[label].history[metric],
label="{0:s} train {1:s}".format(label, metric),
linewidth=2,
)
plt.plot(
hs[label].history["val_{0:s}".format(metric)],
label="{0:s} validation {1:s}".format(label, metric),
linewidth=2,
)
x_ticks = np.arange(0, epochs + 1, epochs / 10)
x_ticks[0] += 1
plt.xticks(x_ticks)
plt.ylim((0, 1))
plt.xlabel("Epochs")
plt.ylabel("Loss" if metric == "loss" else "Accuracy")
plt.legend()
plt.show()
print("Train Loss : {0:.5f}".format(history.history["loss"][-1]))
print("Validation Loss : {0:.5f}".format(history.history["val_loss"][-1]))
print("Test Loss : {0:.5f}".format(eval_result[0]))
print("-------------------")
print("Train Accuracy : {0:.5f}".format(history.history["accuracy"][-1]))
print("Validation Accuracy : {0:.5f}".format(history.history["val_accuracy"][-1]))
print("Test Accuracy : {0:.5f}".format(eval_result[1]))
# Plot train and validation error per epoch.
plot_history(hs={"CNN": history}, epochs=best_epoch, metric="loss")
plot_history(hs={"CNN": history}, epochs=best_epoch, metric="accuracy")
def plot_confusion_matrix(
cm, classes, normalize=False, title="Confusion matrix", cmap=plt.cm.PuBuGn
):
plt.style.use("default")
plt.rcParams["figure.figsize"] = [11, 9]
if normalize:
# Normalize before plotting so the image and the cell text use the same values.
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation="nearest", cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j,
i,
cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
)
plt.tight_layout()
plt.ylabel("True label")
plt.xlabel("Predicted label")
# Predict class probabilities for the test dataset
Y_pred = hypermodel.predict(test_images)
# Convert probability vectors to predicted class indices
Y_pred_classes = np.argmax(Y_pred, axis=1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(test_labels, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(
confusion_mtx,
classes=class_names,
)
incorrect = []
for i in range(len(test_labels)):
if not Y_pred_classes[i] == test_labels[i]:
incorrect.append(i)
if len(incorrect) == 4:
break
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.set_size_inches(10, 10)
ax[0, 0].imshow(test_images[incorrect[0]].reshape(28, 28), cmap="gray")
ax[0, 0].set_title(
"Predicted Label : "
+ class_names[Y_pred_classes[incorrect[0]]]
+ "\n"
+ "Actual Label : "
+ class_names[test_labels[incorrect[0]]]
)
ax[0, 1].imshow(test_images[incorrect[1]].reshape(28, 28), cmap="gray")
ax[0, 1].set_title(
"Predicted Label : "
+ class_names[Y_pred_classes[incorrect[1]]]
+ "\n"
+ "Actual Label : "
+ class_names[test_labels[incorrect[1]]]
)
ax[1, 0].imshow(test_images[incorrect[2]].reshape(28, 28), cmap="gray")
ax[1, 0].set_title(
"Predicted Label : "
+ class_names[Y_pred_classes[incorrect[2]]]
+ "\n"
+ "Actual Label : "
+ class_names[test_labels[incorrect[2]]]
)
ax[1, 1].imshow(test_images[incorrect[3]].reshape(28, 28), cmap="gray")
ax[1, 1].set_title(
"Predicted Label : "
+ class_names[Y_pred_classes[incorrect[3]]]
+ "\n"
+ "Actual Label : "
+ class_names[test_labels[incorrect[3]]]
)
```
```
%matplotlib inline
# Packages
import os, glob, scipy, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Project directory
base_dir = os.path.realpath('..')
print(base_dir)
# Project-specific functions
funDir = os.path.join(base_dir,'Code/Functions')
print(funDir)
sys.path.append(funDir)
import choiceModels, costFunctions, penalizedModelFit, simulateModel
# General-use python functions
dbPath = '/'.join(base_dir.split('/')[0:4])
sys.path.append('%s/Python'%dbPath)
import FigureTools
```
## Choose set
#### Select subs who are constant in their study 1 cluster
```
model = 'MP_ppSOE'
study = 1
clusters_4 = pd.read_csv(os.path.join(base_dir,'Data/Study1/ComputationalModel',
'ParamsClusters_study-1_baseMult-4_model-MP_ppSOE_precision-100.csv'),index_col=0)[
['sub','ClustName']]
clusters_6 = pd.read_csv(os.path.join(base_dir,'Data/Study1/ComputationalModel',
'ParamsClusters_study-1_baseMult-6_model-MP_ppSOE_precision-100.csv'),index_col=0)[
['sub','ClustName']]
exclude = np.array(pd.read_csv(os.path.join(base_dir,'Data/Study1/HMTG/exclude.csv'),index_col=None,header=None).T)[0]
clusters = clusters_4.merge(clusters_6,on='sub')
clusters = clusters.loc[~clusters['sub'].isin(exclude)]
clusters.columns = ['sub','x4','x6']
clusters['stable'] = 1*(clusters['x4']==clusters['x6'])
clusters.head()
clusters = clusters[['sub','x4','stable']]
clusters.columns = ['sub','cluster','stable']
clusters_study2 = pd.read_csv(os.path.join(base_dir,'Data/Study2/ComputationalModel',
'ParamsClusters_study-2_model-MP_ppSOE_precision-100.csv'),index_col=0)[
['sub','ClustName']]
exclude = np.array(pd.read_csv(os.path.join(base_dir,'Data/Study2/HMTG/exclude.csv'),index_col=0,header=0).T)[0]
clusters_study2 = clusters_study2.loc[~clusters_study2['sub'].isin(exclude)]
clusters_study2.columns = ['sub','cluster']
clusters_study2['stable'] = 1
clusters = clusters.append(clusters_study2)
clusters.head()
print(clusters.query('sub < 150')['stable'].sum())
print(clusters.query('sub > 150')['stable'].sum())
print(clusters['stable'].sum())
```
#### Load self-reported strategy
```
strat_1 = pd.read_csv(os.path.join(base_dir,
'Data/Study%i/SelfReportStrategy/parsed.csv'%1),index_col=0)
strat_1['sub'] = strat_1['record']-110000
strat_1.replace(to_replace=np.nan,value=0,inplace=True)
strat_1.head()
strat_2 = pd.read_csv(os.path.join(base_dir,
'Data/Study%i/SelfReportStrategy/parsed.csv'%2),index_col=0)
strat_2.head()
strat_2.replace(to_replace=np.nan,value=0,inplace=True)
strat_2_map = pd.read_csv(os.path.join(base_dir,'Data/Study2/SubCastorMap.csv'),index_col=None,header=None)
strat_2_map.columns = ['sub','record']
strat_2['record'] = strat_2['record'].astype(int)
strat_2 = strat_2.merge(strat_2_map,on='record')
strat_2.head()
strat_both = strat_1.append(strat_2)
strat_both = strat_both[['sub','GR','IA','GA','Altruism','AdvantEquity','DoubleInv','MoralOpport','Reciprocity','Return10','ReturnInv','RiskAssess','SplitEndow']]
strat_both.replace(to_replace=np.nan,value=0,inplace=True)
strat_both.head()
### Merge with clustering and additional measures
strat_use = strat_both.merge(clusters,on='sub')
strat_use = strat_use.loc[(strat_use['stable']==1)]
strat_use.head()
print (strat_use.shape)
```
## Plot
```
strategyList = ['GR','IA','GA','Altruism','AdvantEquity','DoubleInv','MoralOpport',
'Reciprocity','Return10','ReturnInv','RiskAssess','SplitEndow']
allStrategies_melted = strat_use.melt(id_vars=['sub','cluster'],value_vars=strategyList,
var_name='Strategy',value_name='Weight')
allStrategies_melted.head()
FigureTools.mydesign(context='poster')
sns.set_palette('tab10',len(strategyList))
strategyListOrder = [list(strategyList).index(list(strat_use.iloc[:,1:-2].mean().sort_values(
ascending=False).index)[i]) for i in range(len(strategyList))]
strategyListOrdered = [strategyList[i] for i in strategyListOrder]
fig,ax = plt.subplots(1,1,figsize=[16,5])
sns.barplot(data=allStrategies_melted,x='Strategy',y='Weight',ax=ax,
errwidth = 1, capsize = 0.1,errcolor='k',alpha=.9,
hue='cluster',hue_order=['GR','GA','IA','MO'],
order = strategyListOrdered,
)
strategyListOrdered_renamed = list(['50-50','Keep','Expectation'])+strategyListOrdered[3:]
plt.xticks(range(len(strategyList)),strategyListOrdered_renamed,rotation=45);
for i,strat in enumerate(strategyListOrdered):
allImp = allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat),'Weight']
stats = scipy.stats.f_oneway(
allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']=='GR'),'Weight'],
allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']=='GA'),'Weight'],
allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']=='IA'),'Weight'],
allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']=='MO'),'Weight'])
if stats[1] < 0.05:
FigureTools.add_sig_markers(ax,relationships=[[i-.2,i+.2,stats[1]]],linewidth=0,ystart=70)
print ('%s: F = %.2f, p = %.4f'%(strat,stats[0],stats[1]))
plt.xlabel('Self-reported strategy')
plt.ylabel('Importance (%)')
plt.legend(title='Model-derived strategy')
groups = ['GR','GA','IA','MO']
pairs = [[0,1],[0,2],[0,3],[1,2],[2,3],[1,3]]
for strat in ['IA','GA','GR']:
print (strat)
stratResults = pd.DataFrame(columns=['group1','group2','t','df','p'])
for pair in pairs:
group1 = groups[pair[0]]
group2 = groups[pair[1]]
samp1 = allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']==group1),'Weight']
samp2 = allStrategies_melted.loc[(allStrategies_melted['Strategy']==strat) &
(allStrategies_melted['cluster']==group2),'Weight']
df = len(samp1) + len(samp2) - 2 # two-sample t-test degrees of freedom
stats = scipy.stats.ttest_ind(samp1,samp2)
# print '%s vs %s: t(%i) = %.2f, p = %.4f, p-corr = %.4f'%(
# group1,group2,df,stats[0],stats[1],stats[1]*len(pairs))
stratResults = stratResults.append(pd.DataFrame([[group1,group2,df,stats[0],stats[1]]],
columns=stratResults.columns))
stratResults = stratResults.sort_values(by='p',ascending=False)
stratResults['p_holm'] = np.multiply(np.array(stratResults['p']),np.arange(1,7))
print (stratResults)
savedat = allStrategies_melted.loc[allStrategies_melted['Strategy'].isin(['IA','GA','GR','Altruism'])].reset_index(drop=True)
savedat.to_csv(base_dir+'/Data/Pooled/SelfReportStrategies/SelfReportStrategies2.csv')
```
## Plot by group in 3-strat space
```
stratsInclude = ['GR', 'IA', 'GA']
dat = allStrategies_melted.loc[allStrategies_melted['Strategy'].isin(stratsInclude)]
dat.head()
sns.barplot(data=dat,x='cluster',y='Weight',
errwidth = 1, capsize = 0.1,errcolor='k',alpha=.9,
hue='Strategy',hue_order=stratsInclude,
order = ['GR','GA','IA','MO'],
)
plt.legend(loc=[1.1,.5])
# plt.legend(['Keep','50-50','Expectation','Altruism'])
dat_piv = dat.pivot_table(index=['sub','cluster'],columns='Strategy',values='Weight').reset_index()
dat_piv.head()
sns.lmplot(data=dat_piv,x='GA',y='IA',hue='cluster',fit_reg=False)
FigureTools.mydesign()
sns.set_context('talk')
colors = sns.color_palette('tab10',4)
markers = ['o','*','s','d']
sizes = [70,170,60,80]
clusters = ['GR','GA','IA','MO']
fig,ax = plt.subplots(1,3,figsize=[12,4])
axisContents = [['IA','GA'],['GA','GR'],['GR','IA']]
faceWhiteFactor = 3
faceColors = colors
for i in range(faceWhiteFactor):
faceColors = np.add(faceColors,np.tile([1,1,1],[4,1]))
faceColors = faceColors/(faceWhiteFactor+1)
stratTranslate = dict(zip(['IA','GA','GR'],['50-50','Expectation','Keep']))
for i in range(3):
points = []
axCur = ax[i]
for clustInd,clust in enumerate(clusters):
print (clust)
x_point = dat_piv.loc[dat_piv['cluster']==clust,axisContents[i][0]].mean()
y_point = dat_piv.loc[dat_piv['cluster']==clust,axisContents[i][1]].mean()
handle = axCur.scatter(x_point,y_point, alpha=1,zorder=10, linewidth=2, edgecolor=colors[clustInd],
c=[faceColors[clustInd]], s=sizes[clustInd], marker=markers[clustInd])
points.append(handle)
x_sterr = scipy.stats.sem(dat_piv.loc[dat_piv['cluster']==clust,axisContents[i][0]])
y_sterr = scipy.stats.sem(dat_piv.loc[dat_piv['cluster']==clust,axisContents[i][1]])
x_range = [x_point - x_sterr, x_point + x_sterr]
y_range = [y_point - y_sterr, y_point + y_sterr]
axCur.plot(x_range,[y_point,y_point],c=colors[clustInd],linewidth=2,zorder=1)#,alpha=.5)
axCur.plot([x_point,x_point],y_range,c=colors[clustInd],linewidth=2,zorder=1)#,alpha=.5)
axCur.set(xlabel = 'Percentage %s'%stratTranslate[axisContents[i][0]],
ylabel = 'Percentage %s'%stratTranslate[axisContents[i][1]])
ax[2].legend(points,clusters)#,loc=[1.1,.5])
for i in range(3):
ax[i].set(xlim = [0,85], ylim = [0,85], aspect=1)
plt.tight_layout()
plt.suptitle('Relative importance of main 3 motives',y=1.05)
plt.show()
# FigureTools.mysavefig(fig,'Motives')
```
##### Set up 3d plot
```
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
FigureTools.mydesign()
sns.set_style('darkgrid', {"axes.facecolor": "1"})
sns.set_context('paper')
colors = sns.color_palette('tab10',4)
markers = ['o','*','s','d']
sizes = [70,170,60,80]
clusters = ['GR','GA','IA','MO']
faceWhiteFactor = 3
faceColors = colors
for i in range(faceWhiteFactor):
faceColors = np.add(faceColors,np.tile([1,1,1],[4,1]))
faceColors = faceColors/(faceWhiteFactor+1)
stratTranslate = dict(zip(['IA','GA','GR'],['50-50','Expectation','Keep']))
fig = plt.figure(figsize = [11,8])
ax = fig.add_subplot(111, projection='3d')
sns.set_context('talk')
points = []
for clustInd,clust in enumerate(clusters):
dat = dat_piv.query('cluster == @clust')
means = dat[['IA','GA','GR']].mean().values
sterrs = scipy.stats.sem(dat[['IA','GA','GR']])
handle = ax.scatter(*means, linewidth=1, edgecolor=colors[clustInd],
c=[faceColors[clustInd]], s=sizes[clustInd]/2, marker=markers[clustInd])
points.append(handle)
ax.plot([0,means[0]],[means[1],means[1]],[means[2],means[2]],':',color=colors[clustInd])
ax.plot([means[0],means[0]],[0,means[1]],[means[2],means[2]],':',color=colors[clustInd])
ax.plot([means[0],means[0]],[means[1],means[1]],[0,means[2]],':',color=colors[clustInd])
ax.plot([means[0] - sterrs[0],means[0] + sterrs[0]], [means[1],means[1]], [means[2],means[2]],
c=colors[clustInd],linewidth=2,zorder=1)
ax.plot([means[0],means[0]], [means[1] - sterrs[1],means[1] + sterrs[1]], [means[2],means[2]],
c=colors[clustInd],linewidth=2,zorder=1)
ax.plot([means[0],means[0]], [means[1],means[1]], [means[2] - sterrs[2],means[2] + sterrs[2]],
c=colors[clustInd],linewidth=2,zorder=1)
ax.set(xlabel = '%% %s'%stratTranslate['IA'],
ylabel = '%% %s'%stratTranslate['GA'],
zlabel = '%% %s'%stratTranslate['GR'])
ax.legend(points,clusters, title = 'Participant\ngroup', loc = [1.1,.5], frameon=False)
ax.set(xlim = [0,85], ylim = [0,50], zlim = [0,85])
plt.title('Self-reported importance of motives',y=1.05)
plt.tight_layout()
ax.view_init(elev=35,azim=-15) # Or azim -110
plt.savefig(base_dir + '/Results/Figure6.pdf',bbox_inches='tight')
```
```
#!/usr/bin/env python
# coding: utf-8
'''
This module helps to predict new data sets using a trained model
Author: Tadele Belay Tuli, Valay Mukesh Patel,
University of Siegen, Germany (2022)
License: MIT
'''
import glob
import os
import subprocess
import pandas as pd
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
import tensorflow
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Activation, Flatten, Dropout
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import load_model
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import seaborn as sn
# Lists of methods
def merge_rot_pos(df1,df2,label):
"""
This function merges the position and orientation of BVH data into CSV format.
The output is a concatenated data frame."""
# df1 is for rotation and df2 is for position
df1 = df1.drop(columns=['Time']) # Drop the time column from the rotation CSV file.
df2 = df2.drop(columns=['Time']) # Drop the time column from the position CSV file.
df_concat = pd.concat([df1, df2], axis=1) # Merge rotation and position CSV data.
df_concat = df_concat.dropna()
df_concat['category'] = label # Add the associated label (folder name) to the merged rotation/position data.
return df_concat
def convert_dataset_to_csv(file_loc):
"""
Function takes the file from dataset folder and convert it into CSV using BVH-converter library from https://github.com/tekulvw/bvh-converter.
"""
for directory in glob.glob(file_loc): # Path of dataset directory.
for file in glob.glob(directory+"*.bvh"): # Fetch each BVH file in dataset directory.
f = file.split('/')
command_dir = f[0]+'/'+f[1]
command_file = f[2]
command = "bvh-converter -r " + command_file # Build the BVH-to-CSV converter command.
subprocess.call(command, shell=True, cwd=command_dir) # Execute the converter via the shell.
#return command
def convert_CSV_into_df(file_loc):
"""
Generate a pandas dataframe from the CSV data (rotation and position).
"""
df = pd.DataFrame()
for directory in glob.glob(file_loc): # Selecting all the folders in dataset directory.
d = [] # Empty list.
f = directory.split('/')
for file in glob.glob(directory+"*.csv"): # Reading all the CSV files in dataset directory one by one.
d.append(file)
d = sorted(d) # Ensures rotation and position are together
while len(d)!=0:
rot = d.pop(0) # Pop the rotation CSV path...
pos = d.pop(0) # ...and the matching position CSV path.
df1 = pd.read_csv(rot, nrows=200) # Read the first 200 rows from the rotation and position CSVs (value can be 200 or 150).
df2 = pd.read_csv(pos, nrows=200)
df_merge = merge_rot_pos(df1,df2,f[1]) # Merge the fetched rotation and position data with the class label.
df = df.append(df_merge,ignore_index=True) # Append the merged data to the dataframe.
return df
file_loc = "Dataset/*/"
for directory in glob.glob(file_loc): # Path of dataset directory.
for file in glob.glob(directory+"*.bvh"): # Fetch each BVH file in dataset directory.
#f = file.split('\')
path_to_files, file_name = os.path.split(file)
command = "bvh-converter -r " + file_name # Build the BVH-to-CSV converter command.
subprocess.call(command, shell=True, cwd=path_to_files) # Execute the converter via the shell.
print(command)
#new_df = df.drop('category',axis = 1) # drop the class lable coloumn from panda dataframe.
#print(new_df.shape)
df
# Function takes the file from dataset folder and convert it into CSV.
for directory in glob.glob("Dataset/*/"): # Path of dataset directory.
for file in glob.glob(directory+"*.bvh"): # Fetch each BVH file in dataset directory.
#f = file.split('/')
path_to_files, file_name = os.path.split(file)
command_dir = path_to_files
command_file = file_name
command = "bvh-converter -r " + command_file # Build the BVH-to-CSV converter command.
subprocess.call(command, shell=True, cwd=command_dir) # Execute the converter via the shell.
# Function to merge the rotation and position CSV files generated by the BVH-to-CSV converter.
def merge_rot_pos(df1,df2,label):
# df1 is for rotation and df2 is for position
df1 = df1.drop(columns=['Time']) # Drop the time column from the rotation CSV file.
df2 = df2.drop(columns=['Time']) # Drop the time column from the position CSV file.
df_concat = pd.concat([df1, df2], axis=1) # Merge rotation and position CSV data.
df_concat = df_concat.dropna()
df_concat['category'] = label # Add the associated label (folder name) to the merged rotation/position data.
return df_concat
# A pandas dataframe is generated from the CSV data (rotation and position).
df = pd.DataFrame()
for directory in glob.glob("Dataset/*/"): # Selecting all the folders in dataset directory.
d = [] # Empty list.
f = directory.split('/')
for file in glob.glob(directory+"*.csv"): # Reading all the CSV files in dataset directory one by one.
d.append(file)
d = sorted(d) # Ensures rotation and position are together
while len(d)!=0:
rot = d.pop(0) # Pop the rotation CSV path...
pos = d.pop(0) # ...and the matching position CSV path.
df1 = pd.read_csv(rot, nrows=200) # Read the first 200 rows from the rotation and position CSVs (value can be 200 or 150).
df2 = pd.read_csv(pos, nrows=200)
df_merge = merge_rot_pos(df1,df2,f[1]) # Merge the fetched rotation and position data with the class label.
df = df.append(df_merge,ignore_index=True) # Append the merged data to the dataframe.
#new_df = df.drop('category',axis = 1) # drop the class lable coloumn from panda dataframe.
#print(new_df.shape)
df1
import subprocess
dir(subprocess)
```
# Writing custom steps
<!-- Add overview -->
The Graph executes built-in task classes, or task classes and functions that you implement.
The task parameters include the following:
* `class_name` (str): the relative or absolute class name.
* `handler` (str): the handler function to invoke (when `class_name` is not specified, `handler` refers to a standalone function).
* `**class_args`: a set of class `__init__` arguments.
For example, see the following simple `echo` class:
```
import mlrun
# mlrun: start
# echo class, custom class example
class Echo:
def __init__(self, context, name=None, **kw):
self.context = context
self.name = name
self.kw = kw
def do(self, x):
print("Echo:", self.name, x)
return x
# mlrun: end
```
Test the graph: first convert the code to function, and then add the step to the graph:
```
fn_echo = mlrun.code_to_function("echo_function", kind="serving", image="mlrun/mlrun")
graph_echo = fn_echo.set_topology("flow")
graph_echo.to(class_name="Echo", name="pre-process", some_arg='abc')
graph_echo.plot(rankdir='LR')
```
Create a mock server to test this locally:
```
echo_server = fn_echo.to_mock_server(current_function="*")
result = echo_server.test("", {"inputs": 123})
print(result)
```
**For more information, see the [Advanced Graph Notebook Example](./graph-example.ipynb)**
<!-- Requires better description - and title? -->
You can use any Python function by specifying the handler name (e.g. `handler=json.dumps`).
The function is triggered with the `event.body` as the first argument, and its result
is passed to the next step.
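The function-as-a-step behavior described above can be sketched as follows (a simplified stand-in with an illustrative `FunctionStep` name, not mlrun's actual internals):

```python
import json

# Simplified stand-in for a graph step that wraps a plain function
# (illustrative only; not mlrun's real implementation).
class FunctionStep:
    def __init__(self, handler):
        self.handler = handler

    def run(self, event_body):
        # The handler is triggered with event.body as its first argument;
        # its return value is passed on to the next step.
        return self.handler(event_body)

step = FunctionStep(json.dumps)
print(step.run({"inputs": 123}))  # the body serialized to a JSON string
```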
Alternatively, you can use classes, which can also store state/configuration and separate the
one-time init logic from the per-event logic. The classes are initialized with the `class_args`.
If the class init args contain `context` or `name`, they are initialized with the
[graph context](./realtime-pipelines.ipynb) and the step name.
By default, the `class_name` and handler specify a class/function name in the `globals()` (i.e. this module).
Alternatively, those can be full paths to the class (module.submodule.class), e.g. `storey.WriteToParquet`.
You can also pass the module as an argument to functions such as `function.to_mock_server(namespace=module)`.
In this case the class or handler names are also searched in the provided module.
<!-- the previous sentence needs clarification -->
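The lookup order just described can be sketched like this (assumed behavior for illustration, not mlrun's exact code; `resolve` is a hypothetical helper name):

```python
import importlib

# Sketch of the name-resolution order described above: try a full dotted
# path first, then an optional user-provided namespace module, then this
# module's globals().
def resolve(name, namespace=None):
    if "." in name:  # full path, e.g. "json.dumps" or "storey.WriteToParquet"
        module_path, _, attr = name.rpartition(".")
        return getattr(importlib.import_module(module_path), attr)
    if namespace is not None and hasattr(namespace, name):
        return getattr(namespace, name)
    return globals()[name]

dumps = resolve("json.dumps")
print(dumps({"x": 1}))  # the dict serialized to a JSON string
```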
When using classes, the class event handler is invoked on every event with the `event.body`.
If the Task step's `full_event` parameter is set to `True`, the handler is invoked with (and returns)
the full `event` object. If the class event handler is not specified, the class `do()` method is invoked.
If you need to implement async behavior, then subclass `storey.MapClass`.
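The dispatch rules above can be illustrated with a small sketch (assumed names such as `invoke`, not the real mlrun internals): the configured class handler, or `do()` by default, is called with `event.body`, or with the full event when `full_event=True`.

```python
# Illustrative sketch only; not mlrun's implementation.
class Event:
    def __init__(self, body):
        self.body = body

class Double:
    def do(self, x):
        return x * 2

def invoke(step, event, handler=None, full_event=False):
    # Pick the named handler if one is configured, else fall back to do().
    fn = getattr(step, handler) if handler else step.do
    # Pass the full event object only when full_event is set.
    return fn(event) if full_event else fn(event.body)

print(invoke(Double(), Event(21)))  # 42
```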
```
import sys
sys.path.append('../src')
from mcmc_norm_learning.algorithm_1_v4 import to_tuple
from mcmc_norm_learning.rules_4 import get_log_prob
from pickle_wrapper import unpickle
import pandas as pd
import yaml
import tqdm
from numpy import log
with open("../params_nc.yaml", 'r') as fd:
params = yaml.safe_load(fd)
num_obs=params["num_observations"]
true_norm=params['true_norm']['exp']
num_obs
base_path="../data_nc/exp_nc3/"
exp_paths=!ls $base_path
def get_num_viols(nc_obs):
n_viols=0
for obs in nc_obs:
for action_pairs in zip(obs, obs[1:]):
if action_pairs[0] in [(('pickup', 8), ('putdown', 8, '1')),(('pickup', 40), ('putdown', 40, '1'))]:
if action_pairs[1][1][2] =='1': #not in obl zone
n_viols+=1
elif action_pairs[1][1][2] =='3':
if action_pairs[1][1][1] not in [35,13]: #permission not applicable
n_viols+=1
return (n_viols)
z1=pd.DataFrame()
for exp_path in exp_paths:
temp=pd.DataFrame()
#Add params
obs_path=base_path+exp_path+"/obs.pickle"
obs = unpickle(obs_path)
temp["w_nc"] = [float(exp_path.split("w_nc=")[1].split(",")[0])]
trial=1 if "trial" not in exp_path else exp_path.split(",trial=")[-1]
temp["trial"]=[int(trial)]
#Add violations
n_viols=get_num_viols(obs)
temp["violation_rate"]=[n_viols/num_obs]
#Add lik,post
prior_true=!grep "For True Norm" {base_path+exp_path+"/run.log"}
lik_true=!grep "lik_no_norm" {base_path+exp_path+"/run.log"}
post_true=float(prior_true[0].split("log_prior=")[-1]) + float(lik_true[0].split("lik_true_norm=")[1])
temp["true_norm_posterior"]=[post_true]
#Add if True Norm found in some chain
if_true_norm=!grep "True norm in some chain(s)" {base_path+exp_path+"/chain_info.txt"}
temp["if_true_norm_found"]= ["False" not in if_true_norm[0]]
#Rank of True Norm if found as per posterior
rank_df=pd.read_csv(base_path+exp_path+"/ranked_posteriors.csv",index_col=False)
rank_true=rank_df.loc[rank_df.expression==str(to_tuple(true_norm))][["post_rank","log_posterior"]].values
rank=rank_true[0][0] if rank_true.shape[0]==1 else None
temp["true_norm_rank_wrt_posterior"]= [rank]
#max posterior found in chains
rank_1=rank_df.loc[rank_df.post_rank==1]
temp["max_posterior_in_chain"]= [rank_1.log_posterior.values[0]]
temp["norm_wi_max_post"]= [rank_1.expression.values[0]]
#chain summary
chain_details = pd.read_csv(f"{base_path+exp_path}/chain_posteriors_nc.csv")
n_chains1=chain_details.loc[chain_details.expression==str(true_norm)].chain_number.nunique()
temp["#chains_wi_true_norm"]= [n_chains1]
chain_max_min=chain_details.groupby(["chain_number"])[["log_posterior"]].agg(['min', 'max', 'mean', 'std'])
n_chains2=(chain_max_min["log_posterior","max"]>post_true).sum()
temp["#chains_wi_post_gt_true_norm"]= [n_chains2]
#Posterior Estimation
n=params["n"]
top_norms=chain_details.loc[chain_details.chain_pos>2*n\
].groupby(["expression"]).agg({"log_posterior":["mean","count"]})
top_norms["chain_rank"]=top_norms[[('log_posterior', 'count')]].rank(method='dense',ascending=False)
top_norms.sort_values(by=["chain_rank"],inplace=True)
rank_true_wi_freq=top_norms.iloc[top_norms.index==str(true_norm)]["chain_rank"].values
rank_true_wi_freq = float(rank_true_wi_freq[0]) if rank_true_wi_freq.size>0 else None
temp["#rank_true_wi_freq"]= [rank_true_wi_freq]
post_norm_top=top_norms.loc[top_norms.chain_rank==1]["log_posterior","mean"].values
post_norm_top = post_norm_top[0] if post_norm_top.size>0 else None
temp["posterior_norm_top"]= [post_norm_top]
#Num equivalent norms in posterior
log_lik=float(lik_true[0].split("lik_true_norm=")[1])
top_norms["log_prior"]=top_norms.index.to_series().apply(lambda x: get_log_prob("NORMS",eval(x)))[0]
top_norms["log_lik"]=top_norms[('log_posterior', 'mean')]-top_norms["log_prior"]
mask_equiv=abs((top_norms["log_lik"]-log_lik)/log_lik)<=0.0005
n_equiv=mask_equiv.sum()
temp["total_equiv_norms_in_top_norms"]= [n_equiv]
n_equiv_20=mask_equiv[:20].sum()
temp["total_equiv_norms_in_top_20_norms"]= [n_equiv_20]
best_equiv_norm_rank=top_norms.loc[mask_equiv]["chain_rank"].min()
temp["best_equiv_norm_rank"]= [best_equiv_norm_rank]
best_equiv_norm=eval(top_norms.loc[mask_equiv].index[0]) if n_equiv>0 else None
temp["best_equiv_norm"]= [best_equiv_norm]
z1 = pd.concat([z1, temp])  # DataFrame.append() is deprecated in newer pandas
z1.columns
z1["if_equiv_norm_found"]=z1["total_equiv_norms_in_top_norms"]>0
z1["if_true_or_equiv_norm_found"]=z1["if_equiv_norm_found"] | z1["if_true_norm_found"]
z1["true_post/max_post"]=z1["true_norm_posterior"]/z1["max_posterior_in_chain"]
z1["%chains_wi_true_norm"]=z1["#chains_wi_true_norm"]/10
z1["%chains_wi_post_gt_true_norm"]=z1["#chains_wi_post_gt_true_norm"]/10
z1["expected_violation_rate"]=z1["w_nc"]*108/243
z1["chk"]=z1["violation_rate"]/z1["expected_violation_rate"]
```
### Summary
```
print("%trials where true norms found: {:.2%}".format(z1["if_true_norm_found"].mean()))
print("%trials where equiv norms found: {:.2%}".format(z1["if_equiv_norm_found"].mean()))
print("%trials where true/equiv norms found: {:.2%}".format(z1["if_true_or_equiv_norm_found"].mean()))
```
### Where are neither true nor equivalent norms found?
```
z1.groupby(["chk"]).agg({"if_true_or_equiv_norm_found":"mean","trial":"count"})
import matplotlib.pyplot as plt
z1.plot(x="chk",y=["max_posterior_in_chain","true_norm_posterior",\
"#rank_true_wi_freq","best_equiv_norm_rank"],subplots=True,\
marker="o",kind = 'line',ls="none",figsize = (10,10))
#z1.plot(x="chk",y="true_norm_posterior",kind="scatter")
108/243*0.3
z1.groupby(["w_nc","violation_rate"]).agg({"trial":"count","if_true_or_equiv_norm_found":"mean"})
z1.groupby(["w_nc"]).agg({"trial":"count","if_true_norm_found":[("mean")],"if_equiv_norm_found":"mean",\
"if_true_or_equiv_norm_found":"mean","true_norm_posterior":"mean",\
"true_post/max_post":"mean","%chains_wi_true_norm":"mean"})
z1.dtypes
z1.groupby(["w_nc"]).mean()
true_norm
z1.loc[~(z1.if_true_norm_found)][["w_nc","best_equiv_norm_rank","best_equiv_norm"]].values
```
# AbuseCH Data Scraper
## SSLBL
> The SSL Blacklist (SSLBL) is a project of abuse.ch with the goal of detecting malicious SSL connections, by identifying and blacklisting SSL certificates used by botnet C&C servers. In addition, SSLBL identifies JA3 fingerprints that helps you to detect & block malware botnet C&C communication on the TCP layer.
Reference: https://sslbl.abuse.ch/
```
from pprint import pprint
from abusech.sslbl import SslBl
sslbl = SslBl()
```
### IPAddress
> An SSL certificate can be associated with one or more servers (IP address:port combination). SSLBL collects IP addresses that are running with an SSL certificate blacklisted on SSLBL. These are usually botnet Command&Control servers (C&C). SSLBL hence publishes a blacklist containing these IPs which can be used to detect botnet C2 traffic from infected machines towards the internet, leaving your network. The CSV format is useful if you want to process the blacklisted IP addresses further, e.g. loading them into your SIEM.
Reference: https://sslbl.abuse.ch/blacklist/#botnet-c2-ips-csv
```
data = sslbl.get_ip_blacklist()
pprint(data[0:2])
```
### IpAddress - Aggressive
> If you want to fetch a comprehensive list of all IP addresses that SSLBL has ever seen, please use the CSV provided below.
Reference: https://sslbl.abuse.ch/blacklist/#botnet-c2-ips-csv
```
data = sslbl.get_ip_blacklist(aggressive=True)
pprint(data[0:2])
```
### SSLs
> The SSL Certificate Blacklist (CSV) is a CSV that contains SHA1 Fingerprint of all SSL certificates blacklisted on SSLBL. This format is useful if you want to process the blacklisted SSL certificate further, e.g. loading them into your SIEM.
Reference: https://sslbl.abuse.ch/blacklist/#ssl-certificates-csv
```
data = sslbl.get_ssl_blacklist()
pprint(data[0:2])
```
### SSL Details
> An SSL certificate is identified by a unique SHA1 hash (aka SSL certificate fingerprint). The following table shows further information as well as a list of malware samples including the corresponding botnet C&C associated with the SSL certificate fingerprint 7cf902ff50b3869ccaa4715b25bbea3cb18a18b5.
Reference: https://sslbl.abuse.ch/ssl-certificates/sha1/7cf902ff50b3869ccaa4715b25bbea3cb18a18b5/
```
data = sslbl.get_ssl_details(sha1='7cf902ff50b3869ccaa4715b25bbea3cb18a18b5')
pprint(data)
```
### JA3 fingerprint
> JA3 is an open source tool used to fingerprint SSL/TLS client applications. In the best case, you can use JA3 to identify malware and botnet C2 traffic that is leveraging SSL/TLS. The CSV format is useful if you want to process the JA3 fingerprints further, e.g. loading them into your SIEM. The JA3 fingerprints blacklisted on SSLBL have been collected by analysing more than 25,000,000 PCAPs generated by malware samples. These fingerprints have not been tested against known good traffic yet and may cause a significant amount of FPs!
Reference: https://sslbl.abuse.ch/blacklist/#ja3-fingerprints-csv
```
data = sslbl.get_ja3_blacklist()
pprint(data[0:2])
```
### JA3 details
> JA3 is an open source tool used to fingerprint SSL/TLS client applications. In the best case, you can use JA3 to identify malware traffic that is leveraging SSL/TLS. You can find further information about the JA3 fingerprint d76ee64fb7273733cbe455ac81c292e6, including the corresponding malware samples as well as the associated botnet C&Cs.
Reference: https://sslbl.abuse.ch/ja3-fingerprints/d76ee64fb7273733cbe455ac81c292e6/
```
data = sslbl.get_ja3_details(md5='d76ee64fb7273733cbe455ac81c292e6')
pprint(data)
```
# Data Management with OpenACC
This version of the lab is intended for C/C++ programmers. The Fortran version of this lab is available [here](../../Fortran/jupyter_notebook/openacc_fortran_lab2.ipynb).
You will receive a warning five minutes before the lab instance shuts down. Remember to save your work! If you are about to run out of time, please see the [Post-Lab](#Post-Lab-Summary) section for saving this lab to view offline later.
---
Let's execute the cell below to display information about the GPUs running on the server. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
```
!pgaccelinfo
```
---
## Introduction
Our goal for this lab is to use the OpenACC Data Directives to properly manage our data.
<img src="images/development_cycle.png" alt="OpenACC development cycle" width="50%">
This is the OpenACC 3-Step development cycle.
**Analyze** your code, and predict where potential parallelism can be uncovered. Use a profiler to help understand what is happening in the code, and where parallelism may exist.
**Parallelize** your code, starting with the most time consuming parts. Focus on maintaining correct results from your program.
**Optimize** your code, focusing on maximizing performance. Performance may not increase all-at-once during early parallelization.
We are currently tackling the **parallelize** and **optimize** steps by adding the *data clauses* necessary to parallelize the code without CUDA Managed Memory and then *structured data directives* to optimize the data movement of our code.
---
## Run the Code (With Managed Memory)
In the [previous lab](openacc_c_lab1.ipynb), we added OpenACC loop directives and relied on a feature called CUDA Managed Memory to deal with the separate CPU & GPU memories for us. Just by adding OpenACC to our two loop nests, we achieved a considerable performance boost. However, managed memory is not compatible with all GPUs or all compilers, and it sometimes performs worse than programmer-defined memory management. Let's start with our solution from the previous lab and use this as our performance baseline. Note the runtime from the following cell.
```
!cd ../source_code/lab2 && make clean && make laplace_managed && ./laplace_managed
```
### Optional: Analyze the Code
If you would like a refresher on the code files that we are working on, you may view both of them using the two links below.
[jacobi.c](../source_code/lab2/jacobi.c)
[laplace2d.c](../source_code/lab2/laplace2d.c)
## Building Without Managed Memory
For this exercise we ultimately don't want to use CUDA Managed Memory, so let's remove the managed option from our compiler options. Try building and running the code now. What happens?
```
!cd ../source_code/lab2 && make clean && make laplace_no_managed && ./laplace
```
Uh-oh, this time our code failed to build. Let's take a look at the compiler output to understand why:
```
jacobi.c:
laplace2d.c:
PGC-S-0155-Compiler failed to translate accelerator region (see -Minfo messages): Could not find allocated-variable index for symbol (laplace2d.c: 47)
calcNext:
47, Accelerator kernel generated
Generating Tesla code
48, #pragma acc loop gang /* blockIdx.x */
50, #pragma acc loop vector(128) /* threadIdx.x */
54, Generating implicit reduction(max:error)
48, Accelerator restriction: size of the GPU copy of Anew,A is unknown
50, Loop is parallelizable
PGC-F-0704-Compilation aborted due to previous errors. (laplace2d.c)
PGC/x86-64 Linux 18.7-0: compilation aborted
```
This error message is not very intuitive, so let's break it down:
* `PGC-S-0155-Compiler failed to translate accelerator region (see -Minfo messages): Could not find allocated-variable index for symbol (laplace2d.c: 47)` - The compiler doesn't like something about a variable from line 47 of our code.
* `48, Accelerator restriction: size of the GPU copy of Anew,A is unknown` - I don't see any further information about line 47, but at line 48 the compiler is struggling to understand the size and shape of the arrays Anew and A. It turns out, this is our problem.
So, what these cryptic compiler errors are telling us is that the compiler needs to create copies of A and Anew on the GPU in order to run our code there, but it doesn't know how big they are, so it's giving up. We'll need to give the compiler more information about these arrays before it can move forward, so let's find out how to do that.
## OpenACC Data Clauses
Data clauses allow the programmer to specify data transfers between the host and device (or in our case, the CPU and the GPU). Because they are clauses, they can be added to other directives, such as the `parallel loop` directive that we used in the previous lab. Let's look at an example where we do not use a data clause.
```cpp
int *A = (int*) malloc(N * sizeof(int));
#pragma acc parallel loop
for( int i = 0; i < N; i++ )
{
A[i] = 0;
}
```
We have allocated an array `A` outside of our parallel region. This means that `A` is allocated in the CPU memory. However, we access `A` inside of our loop, and that loop is contained within a *parallel region*. Within that parallel region, `A[i]` is attempting to access a memory location within the GPU memory. We didn't explicitly allocate `A` on the GPU, so one of two things will happen.
1. The compiler will understand what we are trying to do, and automatically copy `A` from the CPU to the GPU.
2. The program will check for an array `A` in GPU memory, it won't find it, and it will throw an error.
Instead of hoping that we have a compiler that can figure this out, we could instead use a *data clause*.
```cpp
int *A = (int*) malloc(N * sizeof(int));
#pragma acc parallel loop copy(A[0:N])
for( int i = 0; i < N; i++ )
{
A[i] = 0;
}
```
We will learn the `copy` data clause first, because it is the easiest to use. With the inclusion of the `copy` data clause, our program will now copy the content of `A` from the CPU memory into GPU memory. Then, during the execution of the loop, it will properly access `A` from the GPU memory. After the parallel region is finished, our program will copy `A` from the GPU memory back to the CPU memory. Let's look at one more concrete example.
```cpp
int *A = (int*) malloc(N * sizeof(int));
for( int i = 0; i < N; i++ )
{
A[i] = 0;
}
#pragma acc parallel loop copy(A[0:N])
for( int i = 0; i < N; i++ )
{
A[i] = 1;
}
```
Now we have two loops; the first loop will execute on the CPU (since it does not have an OpenACC parallel directive), and the second loop will execute on the GPU. Array `A` will be allocated on the CPU, and then the first loop will execute. This loop will set the contents of `A` to be all 0. Then the second loop is encountered; the program will copy the array `A` (which is full of 0's) into GPU memory. Then, we will execute the second loop on the GPU. This will edit the GPU's copy of `A` to be full of 1's.
At this point, we have two separate copies of `A`. The CPU copy is full of 0's, and the GPU copy is full of 1's. Now, after the parallel region finishes, the program will copy `A` back from the GPU to the CPU. After this copy, both the CPU and the GPU will contain a copy of `A` that contains all 1's. The GPU copy of `A` will then be deallocated.
This image offers another step-by-step example of using the copy clause.

We are also able to copy multiple arrays at once by using the following syntax.
```cpp
#pragma acc parallel loop copy(A[0:N], B[0:N])
for( int i = 0; i < N; i++ )
{
A[i] = B[i];
}
```
Of course, we might not want to copy our data both to and from the GPU memory. Maybe we only need the array's values as inputs to the GPU region, or maybe it's only the final results we care about, or perhaps the array is only used temporarily on the GPU and we don't want to copy it in either direction. The following OpenACC data clauses provide a bit more control than just the `copy` clause.
* `copyin` - Create space for the array and copy the input values of the array to the device. At the end of the region, the array is deleted without copying anything back to the host.
* `copyout` - Create space for the array on the device, but don't initialize it to anything. At the end of the region, copy the results back and then delete the device array.
* `create` - Create space for the array on the device, but do not copy anything to the device at the beginning of the region, nor back to the host at the end. The array will be deleted from the device at the end of the region.
* `present` - Don't do anything with these variables. I've put them on the device somewhere else, so just assume they're available.
You may also use them to operate on multiple arrays at once, by including those arrays as a comma separated list.
```cpp
#pragma acc parallel loop copy( A[0:N], B[0:M], C[0:Q] )
```
You may also use more than one data clause at a time.
```cpp
#pragma acc parallel loop create( A[0:N] ) copyin( B[0:M] ) copyout( C[0:Q] )
```
### Array Shaping
The shape of the array specifies how much data needs to be transferred. Let's look at an example:
```cpp
#pragma acc parallel loop copy(A[0:N])
for( int i = 0; i < N; i++ )
{
A[i] = 0;
}
```
Focusing specifically on the `copy(A[0:N])`, the shape of the array is defined within the brackets. The syntax for array shape is `[starting_index:size]`. This means that (in the code example) we are copying data from array `A`, starting at index 0 (the start of the array), and copying N elements (which is most likely the length of the entire array).
We are also able to only copy a portion of the array:
```cpp
#pragma acc parallel loop copy(A[1:N-2])
```
This would copy all of the elements of `A` except for the first, and last element.
Lastly, if you do not specify a starting index, 0 is assumed. This means that
```cpp
#pragma acc parallel loop copy(A[0:N])
```
is equivalent to
```cpp
#pragma acc parallel loop copy(A[:N])
```
## Making the Sample Code Work without Managed Memory
In order to build our example code without CUDA managed memory we need to give the compiler more information about the arrays. How do our two loop nests use the arrays `A` and `Anew`? The `calcNext` function takes `A` as input and generates `Anew` as output, but it also needs `Anew` copied in because we need to maintain that *hot* boundary at the top. So you will want to add a `copyin` clause for `A` and a `copy` clause for `Anew` on your region. The `swap` function takes `Anew` as input and `A` as output, so it needs the exact opposite data clauses. It's also necessary to tell the compiler the size of the two arrays by using array shaping. Our arrays are `m` times `n` in size, so we'll tell the compiler their shape starts at `0` and has `n*m` elements, using the syntax above. Go ahead and add data clauses to the two `parallel loop` directives in `laplace2d.c`.
From the top menu, click on *File*, and *Open* `laplace2d.c` from the `C/source_code/lab2` directory. Remember to **SAVE** your code after making changes, before running the cells below.
Then try to build again.
```
!cd ../source_code/lab2 && make clean && make laplace_no_managed && ./laplace
```
Well, the good news is that it should have built correctly and run. If it didn't, check your data clauses carefully. The bad news is that now it runs a whole lot slower than it did before. Let's try to figure out why. The PGI compiler provides your executable with built-in timers, so let's start by enabling them and seeing what it shows. You can enable these timers by setting the environment variable `PGI_ACC_TIME=1`. Run the cell below to get the program output with the built-in profiler enabled.
**Note:** Profiling will not be covered in this lab. Please have a look at the supplementary [slides](https://drive.google.com/file/d/1Asxh0bpntlmYxAPjBxOSThFIz7Ssd48b/view?usp=sharing).
```
!cd ../source_code/lab2 && make clean && make laplace_no_managed && PGI_ACC_TIME=1 ./laplace
```
Your output should look something like what you see below.
```
total: 189.014216 s
Accelerator Kernel Timing data
/labs/lab2/English/C/laplace2d.c
calcNext NVIDIA devicenum=0
time(us): 53,290,779
47: compute region reached 1000 times
47: kernel launched 1000 times
grid: [4094] block: [128]
device time(us): total=2,427,090 max=2,447 min=2,406 avg=2,427
elapsed time(us): total=2,486,633 max=2,516 min=2,464 avg=2,486
47: reduction kernel launched 1000 times
grid: [1] block: [256]
device time(us): total=19,025 max=20 min=19 avg=19
elapsed time(us): total=48,308 max=65 min=44 avg=48
47: data region reached 4000 times
47: data copyin transfers: 17000
device time(us): total=33,878,622 max=2,146 min=6 avg=1,992
57: data copyout transfers: 10000
device time(us): total=16,966,042 max=2,137 min=9 avg=1,696
/labs/lab2/English/C/laplace2d.c
swap NVIDIA devicenum=0
time(us): 36,214,666
62: compute region reached 1000 times
62: kernel launched 1000 times
grid: [4094] block: [128]
device time(us): total=2,316,826 max=2,331 min=2,305 avg=2,316
elapsed time(us): total=2,378,419 max=2,426 min=2,366 avg=2,378
62: data region reached 2000 times
62: data copyin transfers: 8000
device time(us): total=16,940,591 max=2,352 min=2,114 avg=2,117
70: data copyout transfers: 9000
device time(us): total=16,957,249 max=2,133 min=13 avg=1,884
```
The total runtime was roughly 190 seconds with the profiler turned on, but only about 130 seconds without. We can see that `calcNext` required roughly 53 seconds to run by looking at the `time(us)` line under the `calcNext` line. We can also look at the `data region` section and determine that 34 seconds were spent copying data to the device and 17 seconds copying data out of the device. The `swap` function has very similar numbers. That means that the program is actually spending very little of its runtime doing calculations. Why is the program copying so much data around? The screenshot below comes from the Nsight Systems profiler and shows part of one step of our outer while loop. The greenish and pink colors are data movement and the blue colors are our kernels (calcNext and swap). Notice that for each kernel we have copies to the device (greenish) before and copies from the device (pink) after. That means we have 4 segments of data copies for every iteration of the outer while loop.

Let's contrast this with the managed memory version. The image below shows the same program built with managed memory. Notice that there's a lot of "data migration" at the first kernel launch, where the data is first used, but there's no data movement between kernels after that. This tells me that the data movement isn't really needed between these kernels, but we need to tell the compiler that.

Because the loops are in two separate functions, the compiler can't really see that the data is reused on the GPU between those functions. We need to move our data movement up to a higher level where we can reuse it for each step through the program. To do that, we'll add OpenACC data directives.
---
## OpenACC Structured Data Directive
The OpenACC data directives allow the programmer to explicitly manage the data on the device (in our case, the GPU). Specifically, the structured data directive will mark a static region of our code as a **data region**.
```cpp
< Initialize data on host (CPU) >
#pragma acc data < data clauses >
{
< Code >
}
```
Device memory allocation happens at the beginning of the region, and device memory deallocation happens at the end of the region. Additionally, any data movement from the host to the device (CPU to GPU) happens at the beginning of the region, and any data movement from the device to the host (GPU to CPU) happens at the end of the region. Memory allocation/deallocation and data movement is defined by which clauses the programmer includes (the `copy`, `copyin`, `copyout`, and `create` clauses we saw above).
### Encompassing Multiple Compute Regions
A single data region can contain any number of parallel/kernels regions. Take the following example:
```cpp
#pragma acc data copyin(A[0:N], B[0:N]) create(C[0:N])
{
#pragma acc parallel loop
for( int i = 0; i < N; i++ )
{
C[i] = A[i] + B[i];
}
#pragma acc parallel loop
for( int i = 0; i < N; i++ )
{
A[i] = C[i] + B[i];
}
}
```
You may also encompass function calls within the data region:
```cpp
void copy(int *A, int *B, int N)
{
#pragma acc parallel loop
for( int i = 0; i < N; i++ )
{
A[i] = B[i];
}
}
...
#pragma acc data copyout(A[0:N],B[0:N]) copyin(C[0:N])
{
copy(A, C, N);
copy(A, B, N);
}
```
### Adding the Structured Data Directive to our Code
Add a structured data directive to the code to properly handle the arrays `A` and `Anew`. We've already added data clauses to our two functions, so this time we'll move up the call tree and add a structured data region around our while loop in the main program. Think about the input and output to this while loop and choose your data clauses for `A` and `Anew` accordingly.
From the top menu, click on *File*, and *Open* `jacobi.c` from the `C/source_code/lab2` directory. Remember to **SAVE** your code after making changes, before running the cells below.
Then, run the following script to check your solution. Your code should run just as well as (or slightly better than) our managed memory code.
```
!cd ../source_code/lab2 && make clean && make laplace_no_managed && ./laplace
```
Did your runtime go down? It should have but the answer should still match the previous runs. Let's take a look at the profiler now.

Notice that we no longer see the greenish and pink bars on either side of each iteration, like we did before. Instead, we see a red OpenACC `Enter Data` region which contains some greenish bars corresponding to host-to-device data transfers preceding any GPU kernel launches. This is because our data movement is now handled by the outer data region, not the data clauses on each loop. Data clauses count how many times an array has been placed into device memory and only copy data the outermost time they encounter an array. This means that the data clauses we added to our two functions are now used only for shaping, and no data movement will actually occur here anymore, thanks to our outer `data` region.
This reference counting behavior is really handy for code development and testing. Just as we did here, you can add clauses to each of your OpenACC `parallel loop` or `kernels` regions to get everything running on the accelerator, and then simply wrap those functions with a data region when you're done and the data movement magically disappears. Furthermore, if you want to isolate one of those functions into a standalone test case you can do so easily, because the data clause is already in the code.
---
## OpenACC Update Directive
When we use data clauses, we are only able to copy data between host and device memory at the beginning and end of our regions, but what if we need to copy data in the middle? For example, what if we wanted to debug our code by printing out the array every 100 steps to make sure it looks right? In order to transfer data at those times, we can use the `update` directive. The update directive will explicitly transfer data between the host and the device. The `update` directive has two clauses:
* `self` - The self clause will transfer data from the device to the host (GPU to CPU). You will sometimes see this clause called the `host` clause.
* `device` - The device clause will transfer data from the host to the device (CPU to GPU).
The syntax would look like:
`#pragma acc update self(A[0:N])`
`#pragma acc update device(A[0:N])`
All of the array shaping rules apply.
As an example, let's create a version of our laplace code where we want to print the array `A` after every 100 iterations of our loop. The code will look like this:
```cpp
#pragma acc data copyin( A[:m*n],Anew[:m*n] )
{
while ( error > tol && iter < iter_max )
{
error = calcNext(A, Anew, m, n);
swap(A, Anew, m, n);
if(iter % 100 == 0)
{
printf("%5d, %0.6f\n", iter, error);
for( int i = 0; i < n; i++ )
{
for( int j = 0; j < m; j++ )
{
printf("%0.2f ", A[i+j*m]);
}
printf("\n");
}
}
iter++;
}
}
```
Let's run this code (on a very small data set, so that we don't overload the console by printing thousands of numbers).
```
!cd ../source_code/lab2/update && make clean && make laplace_no_update && ./laplace_no_update 10 10
```
We can see that the array is not changing. This is because the host copy of `A` is not being *updated* between loop iterations. Let's add the update directive, and see how the output changes.
```cpp
#pragma acc data copyin( A[:m*n],Anew[:m*n] )
{
while ( error > tol && iter < iter_max )
{
error = calcNext(A, Anew, m, n);
swap(A, Anew, m, n);
if(iter % 100 == 0)
{
printf("%5d, %0.6f\n", iter, error);
#pragma acc update self(A[0:m*n])
for( int i = 0; i < n; i++ )
{
for( int j = 0; j < m; j++ )
{
printf("%0.2f ", A[i+j*m]);
}
printf("\n");
}
}
iter++;
}
}
```
```
!cd ../source_code/lab2/update/solution && make clean && make laplace_update && ./laplace_update 10 10
```
Although you weren't required to add an `update` directive to this code outside of the contrived example above, it's an extremely important directive for real applications, because it allows you to do the I/O or communication necessary for your code to execute without having to pay the cost of allocating and deallocating arrays on the device each time you do so.
---
## Conclusion
Relying on managed memory to handle data management can reduce the effort the programmer needs to parallelize their code; however, not all GPUs work with managed memory, and it can also perform worse than explicit data management. In this lab you learned how to use *data clauses* and *structured data directives* to explicitly manage device memory and remove your reliance on CUDA Managed Memory.
---
## Bonus Task
If you would like some additional lessons on using OpenACC, there is an Introduction to OpenACC video series available from the OpenACC YouTube page. The fifth video in the series covers a lot of the content that was covered in this lab.
[Introduction to Parallel Programming with OpenACC - Part 5](https://youtu.be/0zTX7-CPvV8)
## Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.
You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
```
%%bash
cd ..
rm -f openacc_files.zip
zip -r openacc_files.zip *
```
**After** executing the above zip command, you should be able to download the zip file [here](../openacc_files.zip)
---
## Licensing
This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).
```
%matplotlib inline
import cv2
import numpy as np
import scipy
from tqdm.notebook import tqdm
import seaborn as sns
import matplotlib.pyplot as plt
import utils
import disparity_functions
data_ix = 1
if data_ix == 0:
img_list = [cv2.imread("dataset/data_disparity_estimation/Plastic/view1.png"),
cv2.imread("dataset/data_disparity_estimation/Plastic/view5.png")]
elif data_ix == 1:
img_list = [cv2.imread("dataset/data_disparity_estimation/Cloth1/view1.png"),
cv2.imread("dataset/data_disparity_estimation/Cloth1/view5.png")]
fig = plt.figure(figsize=(8*len(img_list), 8))
fig.patch.set_facecolor('white')
for i in range(len(img_list)):
plt.subplot(1, len(img_list), i+1)
plt.imshow(cv2.cvtColor(img_list[i], cv2.COLOR_BGR2RGB))
keypoints, descriptors, img_keypoints = utils.find_keypoints(img_list)
fig = plt.figure(figsize=(8*len(img_keypoints), 8))
fig.patch.set_facecolor('white')
for i in range(len(img_keypoints)):
plt.subplot(1, len(img_keypoints), i+1)
plt.imshow(cv2.cvtColor(img_keypoints[i], cv2.COLOR_BGR2RGB))
matched_points, _ = utils.find_matches(descriptors, use_nndr=False, number_of_matches=50)
filtered_keypoints = []
filtered_keypoints.append([keypoints[0][match[0]] for match in matched_points])
filtered_keypoints.append([keypoints[1][match[1]] for match in matched_points])
img_keypoints = []
img_keypoints.append(cv2.drawKeypoints(img_list[0], filtered_keypoints[0], None))
img_keypoints.append(cv2.drawKeypoints(img_list[1], filtered_keypoints[1], None))
fig = plt.figure(figsize=(8*len(img_keypoints), 8))
fig.patch.set_facecolor('white')
for i in range(len(img_keypoints)):
plt.subplot(1, len(img_keypoints), i+1)
plt.imshow(cv2.cvtColor(img_keypoints[i], cv2.COLOR_BGR2RGB))
(_, mask) = cv2.findHomography(np.float32([kp.pt for kp in filtered_keypoints[1]]),
np.float32([kp.pt for kp in filtered_keypoints[0]]),
cv2.RANSAC, ransacReprojThreshold=3.0)
good_matches = []
good_points_l = []
good_points_r = []
for i in range(len(mask)):
if mask[i] == 1:
good_points_l.append(filtered_keypoints[0][i])
good_points_r.append(filtered_keypoints[1][i])
good_matches.append(good_points_l)
good_matches.append(good_points_r)
matches1to2 = [cv2.DMatch(i, i, 0) for i in range(len(good_matches[0]))]
matching_img = cv2.drawMatches(img_list[0], filtered_keypoints[0], img_list[1], filtered_keypoints[1], matches1to2, None)
fig = plt.figure(figsize=(16, 8))
fig.patch.set_facecolor('white')
plt.imshow(cv2.cvtColor(matching_img, cv2.COLOR_BGR2RGB))
left_img_rectified, right_img_rectified = disparity_functions.rectify_images(img_list, filtered_keypoints)
fig = plt.figure(figsize=(8*2, 8))
fig.patch.set_facecolor('white')
plt.subplot(1, 2, 1)
plt.imshow(cv2.cvtColor(left_img_rectified, cv2.COLOR_BGR2RGB))
plt.subplot(1, 2, 2)
plt.imshow(cv2.cvtColor(right_img_rectified, cv2.COLOR_BGR2RGB))
img_list_r = [cv2.resize(img, (0,0), fx=0.5, fy=0.5) for img in img_list]
img_l = img_list_r[0]
img_r = img_list_r[1]
keypoints_r, descriptors_l, descriptors_r, max_i, max_j = disparity_functions.compute_descriptors(img_l, img_r)
disp_img = np.zeros((img_l.shape[0], img_l.shape[1]))
for i in tqdm(range(img_l.shape[1])):
for j in range(img_l.shape[0]):
matched_point = disparity_functions.match_point(keypoints_r, descriptors_l, descriptors_r, (i, j), 40, max_i, max_j)
disp_img[j][i] = np.sum(np.abs(np.subtract(matched_point, (i, j))))
fig = plt.figure(figsize=(8, 8))
fig.patch.set_facecolor('white')
plt.imshow(cv2.resize(disp_img, (0,0), fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST), cmap='gray')
cv2.imwrite(f"output/disparity_{data_ix}_1.png", cv2.resize((((disp_img - np.min(disp_img)) / (np.max(disp_img) - np.min(disp_img))) * 255).astype(np.uint8),
(0,0), fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST))
disp_img = np.zeros((img_l.shape[0], img_l.shape[1]))
for i in tqdm(range(img_l.shape[1])):
for j in range(img_l.shape[0]):
matched_point = disparity_functions.match_point(keypoints_r, descriptors_l, descriptors_r, (i, j), 40, max_i, max_j, compute_right_img=True)
disp_img[j][i] = np.sum(np.abs(np.subtract(matched_point, (i, j))))
fig = plt.figure(figsize=(8, 8))
fig.patch.set_facecolor('white')
plt.imshow(cv2.resize(disp_img, (0,0), fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST), cmap='gray')
cv2.imwrite(f"output/disparity_{data_ix}_2.png", cv2.resize((((disp_img - np.min(disp_img)) / (np.max(disp_img) - np.min(disp_img))) * 255).astype(np.uint8),
(0,0), fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST))
```
| github_jupyter |
```
import sys
sys.path.insert(1, '../functions')
import importlib
import numpy as np
import nbformat
import plotly.express as px
import pandas as pd
import scipy.optimize as optimization
import food_bank_functions
import food_bank_bayesian
import matplotlib.pyplot as plt
import seaborn as sns
from food_bank_functions import *
from food_bank_bayesian import *
import time
importlib.reload(food_bank_functions)
np.random.seed(1)
problem = 'poisson'
loc = '../simulations/' + problem + '/'
plt.style.use('PaperDoubleFig.mplstyle.txt')
# Make some style choices for plotting
colorWheel =['#2bd1e5',
'#281bf5',
'#db1bf5',
'#F5CD1B',
'#FF5733','#9cf51b',]
dash_styles = ["",
(4, 1.5),
(1, 1),
(3, 1, 1.5, 1),
(5, 1, 1, 1),
(5, 1, 2, 1, 2, 1),
(2, 2, 3, 1.5),
(1, 2.5, 3, 1.2)]
```
# Scaling with n dataset
```
algos_to_exclude = ['Threshold','Expected-Filling', 'Expect-Threshold', 'Fixed-Threshold', 'Expected_Filling', 'Expect_Threshold', 'Fixed_Threshold']
df_one = pd.read_csv(loc+'scale_with_n.csv')
# algos_to_exclude = ['Threshold','Expected-Filling']
df_one = (df_one[~df_one.variable.isin(algos_to_exclude)]
.rename({'variable': 'Algorithm'}, axis = 1)
)
df_one = df_one.sort_values(by='Algorithm')
print(df_one.Algorithm.str.title())
df_one.Algorithm.unique()
```
# Expected Waterfilling Levels
```
df_two = pd.read_csv(loc+'comparison_of_waterfilling_levels.csv')
df_two = (df_two[~df_two.variable.isin(algos_to_exclude)].rename({'variable': 'Algorithm'}, axis=1))
df_two['Algorithm'] = df_two['Algorithm'].replace({'hope_Online':'Hope-Online', 'hope_Full':'Hope-Full', 'et_Online':'ET-Online', 'et_Full':'ET-Full', 'Max_Min_Heuristic':'Max-Min'})
df_two = df_two.sort_values(by='Algorithm')
print(df_two.Algorithm.unique())
df_two.head()
```
# Group allocation difference
```
df_three = pd.read_csv(loc+'fairness_group_by_group.csv')
df_three = (df_three[~df_three.variable.isin(algos_to_exclude)]
.rename({'variable': 'Algorithm'}, axis = 1)
)
df_three = df_three.sort_values(by='Algorithm')
df_three.Algorithm.unique()
legends = False
fig = plt.figure(figsize = (20,15))
# Create an array with the colors you want to use
colors = ["#FFC20A", "#1AFF1A", "#994F00", "#006CD1", "#D35FB7", "#40B0A6", "#E66100"]  # Set your custom color palette
plt.subplot(2,2,1)
sns.set_palette(sns.color_palette(colors))
if legends:
g = sns.lineplot(x='NumGroups', y='value', hue='Algorithm', style = 'Algorithm', dashes = dash_styles, data=df_one[df_one.Norm == 'Linf'])
else:
g = sns.lineplot(x='NumGroups', y='value', hue='Algorithm', style = 'Algorithm', dashes = dash_styles, data=df_one[df_one.Norm == 'Linf'], legend=False)
plt.xlabel('Number of Agents')
plt.ylabel('Distance')
plt.title('Maximum Difference Between OPT and ALG Allocations')
plt.subplot(2,2,2)
sns.set_palette(sns.color_palette(colors))
if legends:
g = sns.lineplot(x='NumGroups', y='value', hue='Algorithm', style = 'Algorithm', dashes = dash_styles, data=df_one[df_one.Norm == 'L1'])
else:
g = sns.lineplot(x='NumGroups', y='value', hue='Algorithm', style = 'Algorithm', dashes = dash_styles, data=df_one[df_one.Norm == 'L1'], legend=False)
plt.xlabel('Number of Agents')
plt.ylabel('Distance')
plt.title('Total Difference Between OPT and ALG Allocations')
plt.subplot(2,2,3)
new_colors = colors[1:3] + colors[4:]+['#000000']
new_dashes = dash_styles[1:3]+dash_styles[4:]
sns.set_palette(sns.color_palette(new_colors))
if legends:
g = sns.lineplot(x='Group', y='value', style='Algorithm', hue = 'Algorithm', data=df_two, dashes=new_dashes)
else:
g = sns.lineplot(x='Group', y='value', style='Algorithm', hue = 'Algorithm', data=df_two, dashes=new_dashes, legend=False)
plt.title('Estimated Threshold Level by Agent')
plt.xlabel('Agent')
plt.ylabel('Level')
# plt.xlabel('Estimated Level')
plt.subplot(2,2,4)
sns.set_palette(sns.color_palette(colors))
try:
sns.lineplot(x='Agent', y='value', hue='Algorithm', data=df_three, style = 'Algorithm', dashes = dash_styles)
except ValueError:
sns.lineplot(x='Group', y='value', hue='Algorithm', data=df_three, style = 'Algorithm', dashes = dash_styles)
plt.title('Allocation Difference per Agent between OPT and ALG')
plt.ylabel('Difference')
plt.xlabel('Agent')
plt.show()
fig.savefig(problem+'.pdf', bbox_inches = 'tight',pad_inches = 0.01, dpi=900)
print(colors)
print(new_colors)
```
| github_jupyter |
```
import sys
import keras
import tensorflow as tf
print('python version:', sys.version)
print('keras version:', keras.__version__)
print('tensorflow version:', tf.__version__)
```
# 6.3 Advanced use of recurrent neural networks
---
## A temperature-forecasting problem
### Inspecting the data of the Jena weather dataset
```
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
data_dir = 'jena_climate'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
print(header)
print(len(lines))
```
### Parsing the data
```
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
```
### Plotting the temperature timeseries
```
temp = float_data[:, 1]
plt.plot(range(len(temp)), temp)
plt.show()
```
### Plotting the first 10 days of the temperature timeseries
```
plt.plot(range(1440), temp[:1440])
plt.show()
```
### Normalizing the data
```
mean = float_data[:200000].mean(axis = 0)
float_data -= mean
std = float_data[:200000].std(axis = 0)
float_data /= std
```
### Generator yielding timeseries samples and their targets
```
def generator(data, lookback, delay, min_index, max_index,
shuffle = False, batch_size = 128, step = 6, revert = False):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(min_index + lookback, max_index, size = batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows), lookback//step, data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
if revert:
yield samples[:, ::-1, :], targets
else:
yield samples, targets
```
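A quick sanity check of the sampling logic (a simplified, non-shuffled restatement of the generator above, run on toy data with arbitrary small parameters):

```python
import numpy as np

def toy_generator(data, lookback, delay, min_index, max_index,
                  batch_size=4, step=2):
    # Simplified copy of the generator above: sequential (non-shuffled) batches.
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while True:
        if i + batch_size >= max_index:
            i = min_index + lookback
        rows = np.arange(i, min(i + batch_size, max_index))
        i += len(rows)
        samples = np.zeros((len(rows), lookback // step, data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(row - lookback, row, step)
            samples[j] = data[indices]
            targets[j] = data[row + delay][1]  # target is feature column 1, delay steps ahead
        yield samples, targets

data = np.arange(200.0).reshape(100, 2)   # 100 timesteps, 2 features
gen = toy_generator(data, lookback=20, delay=5, min_index=0, max_index=None)
samples, targets = next(gen)
print(samples.shape)   # (4, 10, 2)
```

Each yielded batch has shape `(batch_size, lookback // step, features)`, which is why the models below use `input_shape = (lookback // step, float_data.shape[-1])`.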
### Preparing the training, validation and test generators
```
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 0,
max_index = 200000,
shuffle = True,
step = step,
batch_size = batch_size)
val_gen = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 200001,
max_index = 300000,
step = step,
batch_size = batch_size)
test_gen = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 300001,
max_index = None,
step = step,
batch_size = batch_size)
train_gen_r = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 0,
max_index = 200000,
shuffle = True,
step = step,
batch_size = batch_size,
revert = True)
val_gen_r = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 200001,
max_index = 300000,
step = step,
batch_size = batch_size,
revert = True)
test_gen_r = generator(float_data,
lookback = lookback,
delay = delay,
min_index = 300001,
max_index = None,
step = step,
batch_size = batch_size,
revert = True)
# How many steps to draw from val_gen in order to see the entire validation set
val_steps = (300000 - 200001 - lookback) // batch_size
# How many steps to draw from test_gen in order to see the entire test set
test_steps = (len(float_data) - 300001 - lookback) // batch_size
```
### Computing the common-sense baseline MAE
```
def evaluate_naive_method():
batch_maes = []
for step in range(val_steps):
samples, targets = next(val_gen)
preds = samples[:, -1, 1]
mae = np.mean(np.abs(preds - targets))
batch_maes.append(mae)
print(np.mean(batch_maes))
evaluate_naive_method()
```
### Training and evaluating a densely connected model
```
from keras import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Flatten(input_shape = (lookback // step, float_data.shape[-1])))
model.add(layers.Dense(32, activation = 'relu'))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen,
steps_per_epoch = 500,
epochs = 20,
validation_data = val_gen,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Training and evaluating a GRU-based model
```
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
implementation = 1,
input_shape = (None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen,
steps_per_epoch = 500,
epochs = 20,
validation_data = val_gen,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Training and evaluating a dropout-regularized GRU-based model
```
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
implementation = 1,
dropout = 0.2,
recurrent_dropout = 0.2,
input_shape = (None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen,
steps_per_epoch = 500,
epochs = 40,
validation_data = val_gen,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Training and evaluating a dropout-regularized, stacked GRU model
```
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
implementation = 1,
dropout = 0.1,
recurrent_dropout = 0.5,
return_sequences = True,
input_shape = (None, float_data.shape[-1])))
model.add(layers.GRU(64,
implementation = 1,
activation = 'relu',
dropout = 0.1,
recurrent_dropout = 0.5))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen,
steps_per_epoch = 500,
epochs = 40,
validation_data = val_gen,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Training and evaluating a GRU-based model using reversed sequences
```
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.GRU(32,
implementation = 1,
input_shape = (None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen_r,
steps_per_epoch = 500,
epochs = 20,
validation_data = val_gen_r,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Training and evaluating an LSTM using reversed sequences
```
from keras.datasets import imdb
from keras.preprocessing import sequence
from keras import layers
from keras.models import Sequential
max_features = 10000 # Number of words to consider as features
maxlen = 500 # Cuts off texts after this number of words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = max_features)
# Reverses sequences
x_train = [x[::-1] for x in x_train]
x_test = [x[::-1] for x in x_test]
# Pads sequences
x_train = sequence.pad_sequences(x_train, maxlen = maxlen)
x_test = sequence.pad_sequences(x_test, maxlen = maxlen)
model = Sequential()
model.add(layers.Embedding(max_features, 128))
model.add(layers.LSTM(32))
model.add(layers.Dense(1, activation = 'sigmoid'))
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
history = model.fit(x_train, y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2)
```
### Training and evaluating a bidirectional LSTM
```
model = Sequential()
model.add(layers.Embedding(max_features, 32))
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(1, activation = 'sigmoid'))
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
history = model.fit(x_train, y_train,
epochs = 10,
batch_size = 128,
validation_split = 0.2)
```
### Training a bidirectional GRU
```
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Bidirectional(layers.GRU(32, implementation = 1),
input_shape = (None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer = RMSprop(), loss = 'mae')
history = model.fit_generator(train_gen,
steps_per_epoch = 500,
epochs = 40,
validation_data = val_gen,
validation_steps = val_steps)
```
### Plotting results
```
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
| github_jupyter |
# Discover, Customize and Access NSIDC DAAC Data
This notebook is based on the [NSIDC-Data-Access-Notebook](https://github.com/nsidc/NSIDC-Data-Access-Notebook) provided through NSIDC's Github organization.
Now that we've visualized our study areas, we will first explore data coverage, size, and customization (subsetting, reformatting, reprojection) service availability, and then access those associated files. The __Data access for all datasets__ notebook provides the steps needed to subset and download all the data we'll be using in the final __Visualize and Analyze Data__ notebook.
___A note on data access options:___
We will be pursuing data discovery and access "programmatically" using Application Programming Interfaces, or APIs.
*What is an API?* You can think of an API as a middle man between an application or end user (in this case, us) and a data provider. In this case the data provider is both the Common Metadata Repository (CMR) housing data information, and NSIDC as the data distributor. These APIs are generally structured as a URL with a base plus individual key-value pairs separated by ‘&’.
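As a minimal sketch of that structure (the endpoint and parameter values below are illustrative, not a request this tutorial makes):

```python
from urllib.parse import urlencode

# A base URL plus key-value pairs, joined by '&', forms the full API request.
base_url = 'https://cmr.earthdata.nasa.gov/search/granules.json'
params = {'short_name': 'ATL07', 'version': '005', 'page_size': 100}

request_url = f'{base_url}?{urlencode(params)}'
print(request_url)
# https://cmr.earthdata.nasa.gov/search/granules.json?short_name=ATL07&version=005&page_size=100
```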
There are other discovery and access methods available from NSIDC including access from the data set landing page 'Download Data' tab (e.g. [ATL07 Data Access](https://nsidc.org/data/atl07?qt-data_set_tabs=1#qt-data_set_tabs)) and [NASA Earthdata Search](https://search.earthdata.nasa.gov/). Programmatic API access is beneficial for those of you who want to incorporate data access into your visualization and analysis workflow. This method is also reproducible and documented to ensure data provenance.
Here are the steps you will learn in this customize and access notebook:
1. Search for data programmatically using the Common Metadata Repository API by time and area of interest.
2. Determine subsetting, reformatting, and reprojection capabilities for our data of interest.
3. Access and customize data using NSIDC's data access and service API.
## Import packages
```
import requests
import getpass
import json
# This is our functions module. We created several functions used in this notebook and the Visualize and Analyze notebook.
import tutorial_helper_functions as fn
```
## Explore data availability using the Common Metadata Repository
The Common Metadata Repository (CMR) is a high-performance, high-quality, continuously evolving metadata system that catalogs Earth Science data and associated service metadata records. These metadata records are registered, modified, discovered, and accessed through programmatic interfaces leveraging standard protocols and APIs. Note that not all NSIDC data can be searched at the file level using CMR, particularly those outside of the NASA DAAC program.
CMR API documentation: https://cmr.earthdata.nasa.gov/search/site/docs/search/api.html
### Select data set and determine version number
Data sets are selected by data set IDs (e.g. ATL07). In the CMR API documentation, a data set ID is referred to as a "short name". These short names are located at the top of each NSIDC data set landing page in gray above the full title. We are using the Python Requests package to access the CMR. Data are then converted to [JSON](https://en.wikipedia.org/wiki/JSON) format, a language-independent, human-readable, open-standard file format. More than one version can exist for a given data set:
```
CMR_COLLECTIONS_URL = 'https://cmr.earthdata.nasa.gov/search/collections.json' # CMR collection metadata endpoint
response = requests.get(CMR_COLLECTIONS_URL, params={'short_name': 'ATL07'}) # Request metadata of specified short name
results = json.loads(response.content) # load JSON results
# for each version entry, print version number
for entry in results['feed']['entry']:
fn.print_cmr_metadata(entry)
```
We will specify the most recent version for our remaining data set queries.
### Select time and area of interest
We will create a simple dictionary with our short name, version, time, and area of interest. We'll continue to add to this dictionary as we discover more information about our data set. The bounding box coordinates cover our region of interest over the East Siberian Sea and the temporal range covers March 23, 2019.
```
bounding_box = '140,72,153,80' # Bounding Box spatial parameter in decimal degree 'W,S,E,N' format.
temporal = '2019-03-23T00:00:00Z,2019-03-23T23:59:59Z' # Each date in yyyy-MM-ddTHH:mm:ssZ format; date range in start,end format
```
Start our data dictionary with our data set, version, and area and time of interest.
**Note that some version IDs include 3 digits and others include only 1 digit. Make sure to enter this value exactly as reported above.**
```
data_dict = {'short_name': 'ATL07',
'version': '005',
'bounding_box': bounding_box,
'temporal': temporal }
```
### Determine how many files exist over this time and area of interest, as well as the average size and total volume of those granules
We will use the `granule_info` function to query the CMR granule API. The function prints the number of granules, average size, and total volume of those granules. It returns the granule number value, which we will add to our data dictionary.
```
gran_num = fn.granule_info(data_dict)
data_dict['gran_num'] = gran_num # add file number to data dictionary
```
Note that subsetting, reformatting, or reprojecting can alter the size of the granules if those services are applied to your request.
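The `granule_info` helper lives in an external module; its core logic can be sketched against a hand-made sample of the CMR granule response (the JSON below is an illustrative stand-in for a real `granules.json` response, not actual data):

```python
# Sketch of granule-counting logic on a made-up sample of the CMR
# granule search response (a real response has the same 'feed'/'entry' shape).
sample_response = {
    'feed': {'entry': [
        {'granule_size': '12.5'},   # sizes are reported as strings, in MB
        {'granule_size': '7.5'},
    ]}
}

granules = sample_response['feed']['entry']
sizes = [float(g['granule_size']) for g in granules]
gran_num = len(sizes)
print(f'{gran_num} granules, {sum(sizes)/gran_num:.1f} MB average, {sum(sizes):.1f} MB total')
```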
## ***On your own***: Discover data availability for MOD29
Go back to the "Select data set and determine version number" heading. Replace all `ATL07` instances with `MOD29` along with its most recent version number, keeping your time and area of interest the same. ***Note that MOD29 has a 1-digit version number, whereas ATL07 has 3 digits; enter the version exactly as reported.*** How does the data volume compare to ATL07?
____
## Determine the subsetting, reformatting, and reprojection services enabled for your data set of interest.
The NSIDC DAAC supports customization (subsetting, reformatting, reprojection) services on many of our NASA Earthdata mission collections. Let's discover whether or not our data set has these services available using the `print_service_options` function. If services are available, we will also determine the specific service options supported for this data set, which we will then add to our data dictionary.
### Input Earthdata Login credentials
An Earthdata Login account is required to query data services and to access data from the NSIDC DAAC. If you do not already have an Earthdata Login account, visit http://urs.earthdata.nasa.gov to register. We will input our credentials below, and we'll add our email address to our dictionary for use in our final access request.
```
uid = '' # Enter Earthdata Login user name
pswd = getpass.getpass('Earthdata Login password: ') # Input and store Earthdata Login password
email = ''  # Enter email associated with Earthdata Login account
data_dict['email'] = email # Add to data dictionary
```
We now need to create an HTTP session in order to store cookies and pass our credentials to the data service URLs. The capability URL below is what we will query to determine service information.
```
# Query service capability URL
capability_url = f'https://n5eil02u.ecs.nsidc.org/egi/capabilities/{data_dict["short_name"]}.{data_dict["version"]}.xml'
# Create session to store cookie and pass credentials to capabilities url
session = requests.session()
s = session.get(capability_url)
response = session.get(s.url,auth=(uid,pswd))
response.raise_for_status() # Raise bad request to check that Earthdata Login credentials were accepted
```
This function provides a list of all available services:
```
fn.print_service_options(data_dict, response)
```
### Populate data dictionary with services of interest
We already added our CMR search keywords to our data dictionary, so now we need to add the service options we want to request. A list of all available service keywords for use with NSIDC's access and service API are available in our [Key-Value-Pair table](https://nsidc.org/support/tool/table-key-value-pair-kvp-operands-subsetting-reformatting-and-reprojection-services), as a part of our [Programmatic access guide](https://nsidc.org/support/how/how-do-i-programmatically-request-data-services). For our ATL07 request, we are interested in bounding box, temporal, and variable subsetting. These options crop the data values to the specified ranges and variables of interest. We will enter those values into our data dictionary below.
__Bounding box subsetting:__ Output files are cropped to the specified bounding box extent.
__Temporal subsetting:__ Output files are cropped to the specified temporal range extent.
```
data_dict['bbox'] = '140,72,153,80' # Just like with the CMR bounding box search parameter, this value is provided in decimal degree 'W,S,E,N' format.
data_dict['time'] = '2019-03-23T00:00:00,2019-03-23T23:59:59' # Each date in yyyy-MM-ddTHH:mm:ss format; Date range in start,end format
```
__Variable subsetting:__ Subsets the data set variable or group of variables. For hierarchical data, all lower level variables are returned if a variable group or subgroup is specified.
For ATL07, we will use only strong beams since these groups contain higher coverage and resolution due to higher surface returns. According to the user guide, the spacecraft was in the backwards orientation during our day of interest, setting the `gt*l` beams as the strong beams.
We'll use these primary geolocation, height and quality variables of interest for each of the three strong beams. The following descriptions are provided in the [ATL07 Data Dictionary](https://nsidc.org/sites/nsidc.org/files/technical-references/ATL07-data-dictionary-v001.pdf), with additional information on the algorithm and variable descriptions in the [ATBD (Algorithm Theoretical Basis Document)](https://icesat-2.gsfc.nasa.gov/sites/default/files/page_files/ICESat2_ATL07_ATL10_ATBD_r002.pdf).
`delta_time`: Number of GPS seconds since the ATLAS SDP epoch.
`latitude`: Latitude, WGS84, North=+, Lat of segment center
`longitude`: Longitude, WGS84, East=+,Lon of segment center
`height_segment_height`: Mean height from along-track segment fit determined by the sea ice algorithm
`height_segment_confidence`: Confidence level in the surface height estimate based on the number of photons; the background noise rate; and the error analysis
`height_segment_quality`: Height segment quality flag, 1 is good quality, 0 is bad
`height_segment_surface_error_est`: Error estimate of the surface height (reported in meters)
`height_segment_length_seg`: along-track length of segment containing n_photons_actual
```
data_dict['coverage'] = '/gt1l/sea_ice_segments/delta_time,\
/gt1l/sea_ice_segments/latitude,\
/gt1l/sea_ice_segments/longitude,\
/gt1l/sea_ice_segments/heights/height_segment_confidence,\
/gt1l/sea_ice_segments/heights/height_segment_height,\
/gt1l/sea_ice_segments/heights/height_segment_quality,\
/gt1l/sea_ice_segments/heights/height_segment_surface_error_est,\
/gt1l/sea_ice_segments/heights/height_segment_length_seg,\
/gt2l/sea_ice_segments/delta_time,\
/gt2l/sea_ice_segments/latitude,\
/gt2l/sea_ice_segments/longitude,\
/gt2l/sea_ice_segments/heights/height_segment_confidence,\
/gt2l/sea_ice_segments/heights/height_segment_height,\
/gt2l/sea_ice_segments/heights/height_segment_quality,\
/gt2l/sea_ice_segments/heights/height_segment_surface_error_est,\
/gt2l/sea_ice_segments/heights/height_segment_length_seg,\
/gt3l/sea_ice_segments/delta_time,\
/gt3l/sea_ice_segments/latitude,\
/gt3l/sea_ice_segments/longitude,\
/gt3l/sea_ice_segments/heights/height_segment_confidence,\
/gt3l/sea_ice_segments/heights/height_segment_height,\
/gt3l/sea_ice_segments/heights/height_segment_quality,\
/gt3l/sea_ice_segments/heights/height_segment_surface_error_est,\
/gt3l/sea_ice_segments/heights/height_segment_length_seg'
```
### Select data access configurations
The data request can be accessed asynchronously or synchronously. The asynchronous option will allow concurrent requests to be queued and processed as orders. Those requested orders will be delivered to the specified email address, or they can be accessed programmatically as shown below. Synchronous requests will automatically download the data as soon as processing is complete. For this tutorial, we will be selecting the asynchronous method.
```
base_url = 'https://n5eil02u.ecs.nsidc.org/egi/request' # Set NSIDC data access base URL
data_dict['request_mode'] = 'async' # Set the request mode to asynchronous
data_dict['page_size'] = 2000 # Set the page size to the maximum of 2000, which equals the number of output files that can be returned
```
## Create the data request API endpoint
Programmatic API requests are formatted as HTTPS URLs that contain key-value-pairs specifying the service operations that we specified above. We will first create a string of key-value-pairs from our data dictionary and we'll feed those into our API endpoint. This API endpoint can be executed via command line, a web browser, or in Python below.
```
# Create a new param_dict with CMR configuration parameters removed from our data_dict
param_dict = {k: v for k, v in data_dict.items() if k not in ('gran_num', 'page_num')}
param_string = '&'.join(f'{k}={v}' for k, v in param_dict.items()) # Convert param_dict to a query string
API_request = f'{base_url}?{param_string}'
print(API_request) # Print API base URL + request parameters
```
## Request data and clean up Output folder
We will now download data using the `request_data` function, which utilizes the Python requests library. Our param_dict and HTTP session will be passed to the function to allow Earthdata Login access. The data will be downloaded directly to this notebook directory in a new Outputs folder. The progress of the order will be reported. The data are returned in separate files, so we'll use the `clean_folder` function to remove those individual folders.
```
fn.request_data(param_dict,session)
fn.clean_folder()
```
To review, we have explored data availability and volume over a region and time of interest, discovered and selected data customization options, constructed API endpoints for our requests, and downloaded data. Let's move on to the analysis portion of the tutorial.
| github_jupyter |
# Improving Data Quality
**Learning Objectives**
1. Resolve missing values
2. Convert the Date feature column to a datetime format
3. Rename a feature column, remove a value from a feature column
4. Create one-hot encoding features
5. Understand temporal feature conversions
## Introduction
Recall that machine learning models can only consume numeric data, and that categorical values must therefore be encoded numerically, e.g. as one-hot "1"s and "0"s. Data is said to be "messy" or "untidy" if it is missing attribute values, contains noise or outliers, has duplicates, wrong data, or inconsistent upper/lower case column names, and is essentially not ready for ingestion by a machine learning algorithm.
This notebook presents and solves some of the most common issues of "untidy" data. Note that different problems will require different methods, some of which are beyond the scope of this notebook.
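As a preview of the fixes covered below, here is a minimal sketch on a hypothetical "untidy" frame (the column names are made up, not taken from the vehicle dataset):

```python
import pandas as pd
import numpy as np

# A tiny "untidy" frame: a missing value, a string date, a categorical column.
df = pd.DataFrame({'Date': ['2019/01/01', '2019/02/01', '2019/03/01'],
                   'Make': ['Toyota', 'Honda', 'Toyota'],
                   'Vehicles': [12.0, np.nan, 7.0]})

df['Vehicles'] = df['Vehicles'].fillna(df['Vehicles'].mean())  # resolve missing values
df['Date'] = pd.to_datetime(df['Date'])                        # convert to datetime
one_hot = pd.get_dummies(df['Make'], prefix='Make')            # one-hot encode

print(df['Vehicles'].tolist())      # [12.0, 9.5, 7.0]
print(one_hot.columns.tolist())     # ['Make_Honda', 'Make_Toyota']
```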
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/launching_into_ml/labs/improve_data_quality.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
```
# Use the chown command to change the ownership of the repository to user
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
```
### Import Libraries
```
import os
# Here we'll import Pandas and Numpy data processing libraries
import pandas as pd
import numpy as np
from datetime import datetime
# Use matplotlib for visualizing the model
import matplotlib.pyplot as plt
# Use seaborn for data visualization
import seaborn as sns
%matplotlib inline
```
### Load the Dataset
The dataset is based on California's [Vehicle Fuel Type Count by Zip Code](https://data.ca.gov/dataset/vehicle-fuel-type-count-by-zip-code) report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes.
```
# Creating directory to store dataset
if not os.path.isdir("../data/transport"):
os.makedirs("../data/transport")
# Download the raw .csv data by copying the data from a cloud storage bucket.
!gsutil cp gs://cloud-training-demos/feat_eng/transport/untidy_vehicle_data.csv ../data/transport
# ls shows the working directory's contents.
# Using the -l parameter lists files with their assigned permissions
!ls -l ../data/transport
```
### Read Dataset into a Pandas DataFrame
Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also call the Pandas .head() function to show the top 5 rows of data in the DataFrame. .head() and .tail() are "best-practice" functions for a first look at a dataset.
```
# Reading "untidy_vehicle_data.csv" file using the read_csv() function included in the pandas library.
df_transport = pd.read_csv('../data/transport/untidy_vehicle_data.csv')
# Output the first five rows.
df_transport.head()
```
### DataFrame Column Data Types
DataFrames may have heterogeneous or "mixed" data types, that is, some columns are numbers, some are strings, and some are dates, etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a column contains only numbers, Pandas will set that column's data type to numeric: integer or float.
Run the next cell to see information on the DataFrame.
```
# The .info() function displays a concise summary of a DataFrame.
df_transport.info()
```
From what the .info() function shows us, we have six string ("object") columns and one float column. The prevalence of "string" object values is now clearly visible!
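As a minimal illustration of how Pandas infers dtypes from a CSV (a hypothetical toy example, not the vehicle dataset):

```python
import io
import pandas as pd

# Hypothetical CSV text: two all-numeric columns and one string column.
csv_text = "zip,make,vehicles\n90001,TOYOTA,120\n90002,HONDA,85\n"
toy = pd.read_csv(io.StringIO(csv_text))

# Pandas infers int64 for the numeric columns and object for the string column.
print(toy.dtypes)
```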
```
# Let's print the first and last five rows of the DataFrame.
print(df_transport.head(5))
print(df_transport.tail(5))
```
### Summary Statistics
At this point, we have only one column which contains a numerical value (Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. Note that because we only have one numeric feature, we see only one summary statistic - for now.
```
# We can use .describe() to see some summary statistics for the numeric fields in our dataframe.
df_transport.describe()
```
Let's investigate a bit more of our data by using the .groupby() function.
```
# The .groupby() function is used for splitting the data into groups based on some criteria.
grouped_data = df_transport.groupby(['Zip Code','Model Year','Fuel','Make','Light_Duty','Vehicles'])
# Get the first entry for each fuel type.
df_transport.groupby('Fuel').first()
```
### Checking for Missing Values
Missing values adversely impact data quality, as they can lead the machine learning model to make inaccurate inferences about the data. Missing values can be the result of numerous factors, e.g. "bits" lost during streaming transmission, data entry, or perhaps a user forgot to fill in a field. Note that Pandas recognizes both empty cells and “NaN” types as missing values.
#### Let's show the null values for all features in the DataFrame.
```
df_transport.isnull().sum()
```
To see a sampling of which values are missing, enter the feature column name. You'll notice that "False" and "True" correspond to the presence or absence of a value by index number.
```
print (df_transport['Date'])
print (df_transport['Date'].isnull())
print (df_transport['Make'])
print (df_transport['Make'].isnull())
print (df_transport['Model Year'])
print (df_transport['Model Year'].isnull())
```
### What can we deduce about the data at this point?
Let's summarize our data by rows, columns, features, unique values, and missing values.
```
# In Pandas, the .shape attribute gives the dimensions of a DataFrame.
# The number of rows is given by .shape[0]; the number of columns by .shape[1].
# Thus, .shape is a tuple with two elements -- rows and columns.
print ("Rows : " ,df_transport.shape[0])
print ("Columns : " ,df_transport.shape[1])
print ("\nFeatures : \n" ,df_transport.columns.tolist())
print ("\nUnique values : \n",df_transport.nunique())
print ("\nMissing values : ", df_transport.isnull().sum().values.sum())
```
Let's see the data again -- this time the last five rows in the dataset.
```
# Output the last five rows in the dataset.
df_transport.tail()
```
### What Are Our Data Quality Issues?
1. **Data Quality Issue #1**:
> **Missing Values**:
Each feature column has multiple missing values. In fact, we have a total of 18 missing values.
2. **Data Quality Issue #2**:
> **Date DataType**: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our business requirement is to see the Date parsed out to year, month, and day.
3. **Data Quality Issue #3**:
> **Model Year**: We are only interested in years greater than 2006, not "<2006".
4. **Data Quality Issue #4**:
> **Categorical Columns**: The feature column "Light_Duty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. In addition, we need to "one-hot encode" the remaining "string"/"object" columns.
5. **Data Quality Issue #5**:
> **Temporal Features**: How do we handle year, month, and day?
#### Data Quality Issue #1:
##### Resolving Missing Values
Most algorithms do not accept missing values, yet when we see missing values in our dataset, there is a tendency to just "drop all the rows" with missing values. Pandas will fill in blank cells with "NaN", but we should still "handle" the missing values in some way.
While covering all the methods to handle missing values is beyond the scope of this lab, there are a few you should consider: for numeric columns, use the "mean" to fill in missing numeric values; for categorical columns, use the "mode" (the most frequent value) to fill in missing categorical values.
In this lab, we use the .apply and Lambda functions to fill every column with its own most frequent value. You'll learn more about Lambda functions later in the lab.
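A minimal sketch of the mean/mode strategies on a hypothetical toy DataFrame (not the vehicle data):

```python
import numpy as np
import pandas as pd

# Hypothetical frame with one numeric and one categorical column, each with a gap.
toy = pd.DataFrame({'vehicles': [10.0, np.nan, 30.0],
                    'fuel': ['Gasoline', None, 'Gasoline']})

# Numeric column: fill with the mean; categorical column: fill with the mode.
toy['vehicles'] = toy['vehicles'].fillna(toy['vehicles'].mean())
toy['fuel'] = toy['fuel'].fillna(toy['fuel'].mode()[0])
print(toy)  # no NaN values remain
```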
Let's check again for missing values by showing how many rows contain NaN values for each feature column.
```
# The isnull() method is used to check and manage NULL values in a data frame.
# TODO 1a
df_transport.isnull().sum()
```
Run the cell to apply the lambda function.
```
# Here we are using the apply function with lambda.
# We can use the apply() function to apply the lambda function to both rows and columns of a dataframe.
# TODO 1b
df_transport = df_transport.apply(lambda x:x.fillna(x.value_counts().index[0]))
```
Let's check again for missing values.
```
# The isnull() method is used to check and manage NULL values in a data frame.
# TODO 1c
df_transport.isnull().sum()
```
#### Data Quality Issue #2:
##### Convert the Date Feature Column to a Datetime Format
```
# The date column is indeed shown as a string object. We can convert it to the datetime datatype with the to_datetime() function in Pandas.
# TODO 2a
df_transport['Date'] = pd.to_datetime(df_transport['Date'],
format='%m/%d/%Y')
# Date is now converted; .info() will display a concise summary of the DataFrame.
# TODO 2b
df_transport.info()
# Now we will parse Date into three columns: year, month, and day.
df_transport['year'] = df_transport['Date'].dt.year
df_transport['month'] = df_transport['Date'].dt.month
df_transport['day'] = df_transport['Date'].dt.day
#df['hour'] = df['date'].dt.hour - you could use this if your date format included hour.
#df['minute'] = df['date'].dt.minute - you could use this if your date format included minute.
# The .info() function displays a concise summary of a DataFrame.
df_transport.info()
```
Let's confirm the Date parsing. This will also give us another view of the data.
```
# Here, we create a new dataframe called "grouped_data", grouping on the column "Make".
grouped_data = df_transport.groupby(['Make'])
# Get the first entry for each fuel type.
df_transport.groupby('Fuel').first()
```
Now that we have the date parsed into integer columns, let's do some additional plotting.
```
# Here we visualize our data using the figure() function in matplotlib's pyplot module, which creates a new figure.
plt.figure(figsize=(10,6))
# Seaborn's .jointplot() displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins. This plot is a convenience class that wraps JointGrid.
sns.jointplot(x='month',y='Vehicles',data=df_transport)
# The title() method in matplotlib module is used to specify title of the visualization depicted and displays the title using various attributes.
plt.title('Vehicles by Month')
```
#### Data Quality Issue #3:
##### Rename a Feature Column and Remove a Value.
Our feature column names use inconsistent capitalization, and some contain spaces. In addition, we are only interested in years greater than 2006, not "<2006".
We can resolve the capitalization problem by making all the feature column names lower case.
```
# Let's remove all the spaces for feature columns by renaming them.
# TODO 3a
df_transport.rename(columns = { 'Date': 'date', 'Zip Code':'zipcode', 'Model Year': 'modelyear', 'Fuel': 'fuel', 'Make': 'make', 'Light_Duty': 'lightduty', 'Vehicles': 'vehicles'}, inplace = True)
# Output the first two rows.
df_transport.head(2)
```
**Note:** Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column.
```
# Here, we create a copy of the dataframe to avoid copy warning issues.
# TODO 3b
df = df_transport.loc[df_transport.modelyear != '<2006'].copy()
# Here we will confirm that the modelyear value '<2006' has been removed by doing a value count.
df['modelyear'].value_counts()
```
#### Data Quality Issue #4:
##### Handling Categorical Columns
The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We will use the "apply" method with a lambda expression. Pandas' .apply() takes a function and applies it to all values of a Pandas series.
##### What is a Lambda Function?
Typically, Python requires that you define a function using the def keyword. However, lambda functions are anonymous -- which means there is no need to name them. The most common use case for lambda functions is in code that requires a simple one-line function (e.g. lambdas only have a single expression).
As you progress through the Course Specialization, you will see many examples where lambda functions are being used. Now is a good time to become familiar with them.
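For example, a named function and a lambda expressing the same one-line rule are interchangeable (a small sketch, not code from the lab):

```python
# A named function defined with def...
def to_int(x):
    return 0 if x == 'No' else 1

# ...and the equivalent anonymous lambda.
to_int_lambda = lambda x: 0 if x == 'No' else 1

print(to_int('No'), to_int_lambda('Yes'))  # 0 1
```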
```
# Let's count the number of "Yes" and "No" values in the 'lightduty' feature column.
df['lightduty'].value_counts()
# Let's convert Yes to 1 and No to 0.
# .apply takes a function and applies it to all values of a Pandas series (e.g. lightduty).
df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' else 1)
df['lightduty'].value_counts()
# Confirm that "lightduty" has been converted.
df.head()
```
#### One-Hot Encoding Categorical Feature Columns
Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors.
One transformation method is to create dummy variables for our categorical features. Dummy variables are a set of binary (0 or 1) variables that each represent a single class from a categorical feature. We simply encode the categorical variable as a one-hot vector, i.e. a vector where only one element is non-zero, or hot. With one-hot encoding, a categorical feature becomes an array whose size is the number of possible choices for that feature.
Pandas provides a function called "get_dummies" to convert a categorical variable into dummy/indicator variables.
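As a toy sketch of what get_dummies produces (hypothetical fuel values, not the real column):

```python
import pandas as pd

# Hypothetical categorical column with three classes.
toy = pd.DataFrame({'fuel': ['Gasoline', 'Diesel', 'Electric', 'Gasoline']})

# Each class becomes its own indicator column; exactly one is "hot" per row.
dummies = pd.get_dummies(toy['fuel'])
print(dummies)
```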
```
# Making dummy variables for categorical data with more inputs.
data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True)
# Output the first five rows.
data_dummy.head()
# Merging (concatenate) original data frame with 'dummy' dataframe.
# TODO 4a
df = pd.concat([df,data_dummy], axis=1)
df.head()
# Dropping attributes for which we made dummy variables. Let's also drop the Date column.
# TODO 4b
df = df.drop(['date','zipcode','modelyear', 'fuel', 'make'], axis=1)
# Confirm that 'zipcode','modelyear', 'fuel', and 'make' have been dropped.
df.head()
```
#### Data Quality Issue #5:
##### Temporal Feature Columns
Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked.
Note that the Feature Engineering course in this Specialization will provide more depth on methods to handle year, month, day, and hour feature columns.
```
# Let's print the unique values for "month", "day" and "year" in our dataset.
print ('Unique values of month:',df.month.unique())
print ('Unique values of day:',df.day.unique())
print ('Unique values of year:',df.year.unique())
```
Don't worry, this is the last time we will use this code; you can develop an input pipeline in TensorFlow and Keras that handles these temporal feature columns much more easily. But sometimes it helps to appreciate what such a pipeline is doing for you as you move through the course!
Run the cell to view the output.
```
# Here we map each temporal variable onto a circle such that the lowest value for that variable appears right next to the largest value. We compute the x- and y- component of that point using the sin and cos trigonometric functions.
df['day_sin'] = np.sin(df.day*(2.*np.pi/31))
df['day_cos'] = np.cos(df.day*(2.*np.pi/31))
df['month_sin'] = np.sin((df.month-1)*(2.*np.pi/12))
df['month_cos'] = np.cos((df.month-1)*(2.*np.pi/12))
# Let's drop month, day, and year
# TODO 5
df = df.drop(['month','day','year'], axis=1)
# Scroll right to see the converted month and day columns.
df.tail(4)
```
### Conclusion
This notebook introduced a few concepts to improve data quality. We resolved missing values, converted the Date feature column to a datetime format, renamed feature columns, removed a value from a feature column, created one-hot encoding features, and converted temporal features to meaningful representations. By the end of our lab, we gained an understanding as to why data should be "cleaned" and "pre-processed" before input into a machine learning model.
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Books Recommender System

This is the second part of my project on Book Data Analysis and Recommendation Systems.
In my first notebook ([The Story of Book](https://www.kaggle.com/omarzaghlol/goodreads-1-the-story-of-book/)), I attempted at narrating the story of book by performing an extensive exploratory data analysis on Books Metadata collected from Goodreads.
In this notebook, I will attempt at implementing a few recommendation algorithms (Basic Recommender, Content-based and Collaborative Filtering) and try to build an ensemble of these models to come up with our final recommendation system.
# What's in this kernel?
- [Importing Libraries and Loading Our Data](#1)
- [Clean the dataset](#2)
- [Simple Recommender](#3)
- [Top Books](#4)
- [Top "Genres" Books](#5)
- [Content Based Recommender](#6)
- [Cosine Similarity](#7)
- [Popularity and Ratings](#8)
- [Collaborative Filtering](#9)
- [User Based](#10)
- [Item Based](#11)
- [Hybrid Recommender](#12)
- [Conclusion](#13)
- [Save Model](#14)
# Importing Libraries and Loading Our Data <a id="1"></a> <br>
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime
import warnings
warnings.filterwarnings('ignore')
books = pd.read_csv('../input/goodbooks-10k//books.csv')
ratings = pd.read_csv('../input/goodbooks-10k//ratings.csv')
book_tags = pd.read_csv('../input/goodbooks-10k//book_tags.csv')
tags = pd.read_csv('../input/goodbooks-10k//tags.csv')
```
# Clean the dataset <a id="2"></a> <br>
As with nearly any real-life dataset, we need to do some cleaning first. When exploring the data I noticed that for some combinations of user and book there are multiple ratings, while in theory there should only be one (unless users can rate a book several times). Furthermore, for the collaborative filtering it is better to have more ratings per user. So I decided to remove users who have rated fewer than 3 books.
```
books['original_publication_year'] = books['original_publication_year'].fillna(-1).apply(lambda x: int(x) if x != -1 else -1)
ratings_rmv_duplicates = ratings.drop_duplicates()
unwanted_users = ratings_rmv_duplicates.groupby('user_id')['user_id'].count()
unwanted_users = unwanted_users[unwanted_users < 3]
unwanted_ratings = ratings_rmv_duplicates[ratings_rmv_duplicates.user_id.isin(unwanted_users.index)]
new_ratings = ratings_rmv_duplicates.drop(unwanted_ratings.index)
new_ratings['title'] = books.set_index('id').title.loc[new_ratings.book_id].values
new_ratings.head(10)
```
# Simple Recommender <a id="3"></a> <br>
The Simple Recommender offers generalized recommendations to every user based on book popularity and (sometimes) genre. The basic idea behind this recommender is that books that are more popular and more critically acclaimed will have a higher probability of being liked by the average audience. This model does not give personalized recommendations based on the user.
The implementation of this model is extremely trivial. All we have to do is sort our books based on ratings and popularity and display the top books of our list. As an added step, we can pass in a genre argument to get the top books of a particular genre.
I will use IMDB's *weighted rating* formula to construct my chart. Mathematically, it is represented as follows:
Weighted Rating (WR) = $\left(\frac{v}{v + m} \cdot R\right) + \left(\frac{m}{v + m} \cdot C\right)$
where,
* *v* is the number of ratings for the book
* *m* is the minimum ratings required to be listed in the chart
* *R* is the average rating of the book
* *C* is the mean rating across the whole report
The next step is to determine an appropriate value for *m*, the minimum ratings required to be listed in the chart. We will use **95th percentile** as our cutoff. In other words, for a book to feature in the charts, it must have more ratings than at least 95% of the books in the list.
I will build our overall Top 250 Chart and will define a function to build charts for a particular genre. Let's begin!
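To make the formula concrete, here is a worked example with hypothetical numbers before we apply it to the full dataset:

```python
# Hypothetical book: v = 1,000 ratings averaging R = 4.5, with a chart
# cutoff of m = 500 ratings and a global mean rating of C = 3.9.
v, m, R, C = 1000, 500, 4.5, 3.9
W = (v / (v + m)) * R + (m / (v + m)) * C
print(W)  # ~4.3: the book's own average is pulled toward the global mean
```

The fewer ratings a book has relative to m, the more its score shrinks toward the global mean C.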
```
v = books['ratings_count']
m = books['ratings_count'].quantile(0.95)
R = books['average_rating']
C = books['average_rating'].mean()
W = (R*v + C*m) / (v + m)
books['weighted_rating'] = W
qualified = books.sort_values('weighted_rating', ascending=False).head(250)
```
## Top Books <a id="4"></a> <br>
```
qualified[['title', 'authors', 'average_rating', 'weighted_rating']].head(15)
```
We see that J.K. Rowling's **Harry Potter** Books occur at the very top of our chart. The chart also indicates a strong bias of Goodreads Users towards particular genres and authors.
Let us now construct our function that builds charts for particular genres. For this, we will relax our default condition from the **95th** to the **85th** percentile.
## Top "Genres" Books <a id="5"></a> <br>
```
book_tags.head()
tags.head()
genres = ["Art", "Biography", "Business", "Chick Lit", "Children's", "Christian", "Classics",
"Comics", "Contemporary", "Cookbooks", "Crime", "Ebooks", "Fantasy", "Fiction",
"Gay and Lesbian", "Graphic Novels", "Historical Fiction", "History", "Horror",
"Humor and Comedy", "Manga", "Memoir", "Music", "Mystery", "Nonfiction", "Paranormal",
"Philosophy", "Poetry", "Psychology", "Religion", "Romance", "Science", "Science Fiction",
"Self Help", "Suspense", "Spirituality", "Sports", "Thriller", "Travel", "Young Adult"]
genres = list(map(str.lower, genres))
genres[:4]
available_genres = tags.loc[tags.tag_name.str.lower().isin(genres)]
available_genres.head()
available_genres_books = book_tags[book_tags.tag_id.isin(available_genres.tag_id)]
print('There are {} books that are tagged with above genres'.format(available_genres_books.shape[0]))
available_genres_books.head()
available_genres_books['genre'] = available_genres.tag_name.loc[available_genres_books.tag_id].values
available_genres_books.head()
def build_chart(genre, percentile=0.85):
df = available_genres_books[available_genres_books['genre'] == genre.lower()]
qualified = books.set_index('book_id').loc[df.goodreads_book_id]
v = qualified['ratings_count']
m = qualified['ratings_count'].quantile(percentile)
R = qualified['average_rating']
C = qualified['average_rating'].mean()
qualified['weighted_rating'] = (R*v + C*m) / (v + m)
qualified.sort_values('weighted_rating', ascending=False, inplace=True)
return qualified
```
Let us see our method in action by displaying the Top 15 Fiction Books (Fiction almost didn't feature at all in our Generic Top Chart despite being one of the most popular book genres).
```
cols = ['title','authors','original_publication_year','average_rating','ratings_count','work_text_reviews_count','weighted_rating']
genre = 'Fiction'
build_chart(genre)[cols].head(15)
```
For simplicity, you can just pass the index of the wanted genre from below.
```
list(enumerate(available_genres.tag_name))
idx = 24 # romance
build_chart(list(available_genres.tag_name)[idx])[cols].head(15)
```
# Content Based Recommender <a id="6"></a> <br>

The recommender we built in the previous section suffers from some severe limitations. For one, it gives the same recommendation to everyone, regardless of the user's personal taste. If a person who loves business books (and hates fiction) were to look at our Top 15 Chart, s/he probably wouldn't like most of the books. If s/he were to go one step further and look at our charts by genre, s/he still wouldn't be getting the best recommendations.
For instance, consider a person who loves *The Fault in Our Stars* and *Twilight*. One inference we can draw is that the person loves romantic books. Yet even if s/he were to access the romance chart, s/he wouldn't find these as the top recommendations.
To personalise our recommendations more, I am going to build an engine that computes the similarity between books based on certain metrics and suggests books that are most similar to a particular book that a user liked. Since we will be using book metadata (or content) to build this engine, this is also known as **Content Based Filtering.**
I will build this recommender based on book's *Title*, *Authors* and *Genres*.
```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity
```
My approach to building the recommender is going to be extremely *hacky*. These are the steps I plan to take:
1. **Strip spaces and convert to lowercase** in the authors column. This way, our engine will not confuse **Stephen Covey** with **Stephen King**.
2. Combine each book with its corresponding **genres**.
3. Use a **Count Vectorizer** to create our count matrix.
4. Calculate the cosine similarities and return the books that are most similar.
```
books['authors'] = books['authors'].apply(lambda x: [str.lower(i.replace(" ", "")) for i in x.split(', ')])
def get_genres(x):
t = book_tags[book_tags.goodreads_book_id==x]
return [i.lower().replace(" ", "") for i in tags.tag_name.loc[t.tag_id].values]
books['genres'] = books.book_id.apply(get_genres)
books['soup'] = books.apply(lambda x: ' '.join([x['title']] + x['authors'] + x['genres']), axis=1)
books.soup.head()
count = CountVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
count_matrix = count.fit_transform(books['soup'])
```
## Cosine Similarity <a id="7"></a> <br>
I will be using the Cosine Similarity to calculate a numeric quantity that denotes the similarity between two books. Mathematically, it is defined as follows:
$\text{cosine}(x,y) = \frac{x \cdot y^\intercal}{\left\|x\right\| \left\|y\right\|}$
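As a toy illustration (hypothetical 3-term count vectors, not rows of our actual count matrix):

```python
import numpy as np

# Two small count vectors that share one of their two non-zero terms.
x = np.array([1.0, 0.0, 1.0])
y = np.array([1.0, 1.0, 0.0])

# Dot product divided by the product of the norms.
cos = x.dot(y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos)  # ~0.5
```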
```
cosine_sim = cosine_similarity(count_matrix, count_matrix)
indices = pd.Series(books.index, index=books['title'])
titles = books['title']
def get_recommendations(title, n=10):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:31]
book_indices = [i[0] for i in sim_scores]
return list(titles.iloc[book_indices].values)[:n]
get_recommendations("The One Minute Manager")
```
What if I want a specific book but can't remember its full name?
So I created the following *method* to get book titles from a **partial** title.
```
def get_name_from_partial(title):
return list(books.title[books.title.str.lower().str.contains(title) == True].values)
title = "business"
l = get_name_from_partial(title)
list(enumerate(l))
get_recommendations(l[1])
```
## Popularity and Ratings <a id="8"></a> <br>
One thing we notice about our recommendation system is that it recommends books regardless of ratings and popularity. It is true that ***Across the River and Into the Trees*** and ***The Old Man and the Sea*** were both written by **Ernest Hemingway**, but the former is widely considered a bad (though not his worst) book that shouldn't be recommended to anyone, since most readers disliked it for its static plot and overwrought emotion.
Therefore, we will add a mechanism to remove bad books and return books which are popular and have had a good critical response.
I will take the top 30 books based on similarity scores and calculate the ratings count of the book at the 60th percentile. Then, using this as the value of $m$, we will calculate the weighted rating of each book using IMDB's formula, as we did in the Simple Recommender section.
```
def improved_recommendations(title, n=10):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:31]
book_indices = [i[0] for i in sim_scores]
df = books.iloc[book_indices][['title', 'ratings_count', 'average_rating', 'weighted_rating']]
v = df['ratings_count']
m = df['ratings_count'].quantile(0.60)
R = df['average_rating']
C = df['average_rating'].mean()
df['weighted_rating'] = (R*v + C*m) / (v + m)
qualified = df[df['ratings_count'] >= m]
qualified = qualified.sort_values('weighted_rating', ascending=False)
return qualified.head(n)
improved_recommendations("The One Minute Manager")
improved_recommendations(l[1])
```
The ranking of similar books is better now than before.
Therefore, we will conclude our Content Based Recommender section here and come back to it when we build a hybrid engine.
# Collaborative Filtering <a id="9"></a> <br>

Our content based engine suffers from some severe limitations. It is only capable of suggesting books which are *close* to a certain book. That is, it is not capable of capturing tastes and providing recommendations across genres.
Also, the engine that we built is not really personal in that it doesn't capture the personal tastes and biases of a user. Anyone querying our engine for recommendations based on a book will receive the same recommendations for that book, regardless of who s/he is.
Therefore, in this section, we will use a technique called **Collaborative Filtering** to make recommendations to book readers. Collaborative Filtering is based on the idea that users similar to me can be used to predict how much I will like a particular product or service that those users have used/experienced but I have not.
I will not be implementing Collaborative Filtering from scratch. Instead, I will use the **Surprise** library, which provides extremely powerful algorithms like **Singular Value Decomposition (SVD)** to minimise the RMSE (Root Mean Square Error) and give great recommendations.
There are two classes of Collaborative Filtering:

- **User-based**, which measures the similarity between target users and other users.
- **Item-based**, which measures the similarity between the items that target users rate or interact with and other items.
## - User Based <a id="10"></a> <br>
```
# ! pip install surprise
from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate
reader = Reader()
data = Dataset.load_from_df(new_ratings[['user_id', 'book_id', 'rating']], reader)
svd = SVD()
cross_validate(svd, data, measures=['RMSE', 'MAE'])
```
We get a mean **Root Mean Square Error** of about 0.8419, which is more than good enough for our case. Let us now train on our dataset and arrive at predictions.
```
trainset = data.build_full_trainset()
svd.fit(trainset);
```
Let us pick user 10 and check the ratings s/he has given.
```
new_ratings[new_ratings['user_id'] == 10]
svd.predict(10, 1506)
```
For the book with ID 1506, we get an estimated rating of **3.393**. One startling feature of this recommender system is that it doesn't care what the book is (or what it contains). It works purely on the basis of an assigned book ID and tries to predict the rating based on how other users have rated the book.
## - Item Based <a id="11"></a> <br>
Here we will build a table for users with their corresponding ratings for each book.
```
# bookmat = new_ratings.groupby(['user_id', 'title'])['rating'].mean().unstack()
bookmat = new_ratings.pivot_table(index='user_id', columns='title', values='rating')
bookmat.head()
def get_similar(title, mat):
title_user_ratings = mat[title]
similar_to_title = mat.corrwith(title_user_ratings)
corr_title = pd.DataFrame(similar_to_title, columns=['correlation'])
corr_title.dropna(inplace=True)
corr_title.sort_values('correlation', ascending=False, inplace=True)
return corr_title
title = "Twilight (Twilight, #1)"
smlr = get_similar(title, bookmat)
smlr.head(10)
```
Ok, we got similar books, but we need to filter them by their *ratings_count*.
```
smlr = smlr.join(books.set_index('title')['ratings_count'])
smlr.head()
```
Get similar books with at least 500k ratings.
```
smlr[smlr.ratings_count > 5e5].sort_values('correlation', ascending=False).head(10)
```
That's a more interesting and reasonable result, since we now find the *Twilight* book series in our top results.
# Hybrid Recommender <a id="12"></a> <br>

In this section, I will try to build a simple hybrid recommender that brings together techniques we have implemented in the content based and collaborative filter based engines. This is how it will work:
* **Input:** User ID and the Title of a Book
* **Output:** Similar books sorted on the basis of expected ratings by that particular user.
```
def hybrid(user_id, title, n=10):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:51]
book_indices = [i[0] for i in sim_scores]
df = books.iloc[book_indices][['book_id', 'title', 'original_publication_year', 'ratings_count', 'average_rating']]
df['est'] = df['book_id'].apply(lambda x: svd.predict(user_id, x).est)
df = df.sort_values('est', ascending=False)
return df.head(n)
hybrid(4, 'Eat, Pray, Love')
hybrid(10, 'Eat, Pray, Love')
```
We see that for our hybrid recommender, we get (almost) different recommendations for different users although the book is the same. But maybe we can make it better through following steps:
1. Use our *improved_recommendations* technique from the **Content Based** section above
2. Combine it with the user *estimations* by averaging the two
3. Finally, put the result into a new feature ***score***
```
def improved_hybrid(user_id, title, n=10):
idx = indices[title]
sim_scores = list(enumerate(cosine_sim[idx]))
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:51]
book_indices = [i[0] for i in sim_scores]
df = books.iloc[book_indices][['book_id', 'title', 'ratings_count', 'average_rating', 'original_publication_year']]
v = df['ratings_count']
m = df['ratings_count'].quantile(0.60)
R = df['average_rating']
C = df['average_rating'].mean()
df['weighted_rating'] = (R*v + C*m) / (v + m)
df['est'] = df['book_id'].apply(lambda x: svd.predict(user_id, x).est)
df['score'] = (df['est'] + df['weighted_rating']) / 2
df = df.sort_values('score', ascending=False)
return df[['book_id', 'title', 'original_publication_year', 'ratings_count', 'average_rating', 'score']].head(n)
improved_hybrid(4, 'Eat, Pray, Love')
improved_hybrid(10, 'Eat, Pray, Love')
```
Ok, we see that the new results make more sense; moreover, the recommendations are more personalized and tailored to particular users.
# Conclusion <a id="13"></a> <br>
In this notebook, I have built 4 different recommendation engines based on different ideas and algorithms. They are as follows:
1. **Simple Recommender:** This system used overall Goodreads ratings counts and rating averages to build Top Books charts, both in general and for specific genres. The IMDB weighted rating system was used to calculate the ratings on which the sorting was finally performed.
2. **Content Based Recommender:** We built content based engines that took book title, authors and genres as input to come up with predictions. We also devised a simple filter to give greater preference to books with more votes and higher ratings.
3. **Collaborative Filtering:** We built two collaborative filters;
    - one that uses the powerful Surprise library to build a **user-based** filter based on singular value decomposition; the RMSE obtained was less than 1, and the engine gave estimated ratings for a given user and book.
    - and another (**item-based**) that built a pivot table of user ratings for each book, and gave similar books for a given book.
4. **Hybrid Engine:** We brought together ideas from the content based and collaborative filtering approaches to build an engine that gave book suggestions to a particular user based on the estimated ratings it had internally calculated for that user.
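The IMDB weighted rating mentioned in point 1 (and computed inline as `(R*v + C*m) / (v + m)` in the hybrid code above) can be sketched in isolation; the sample numbers below are made up:

```python
def weighted_rating(R, v, m, C):
    # IMDB-style weighted rating: blends a book's own average rating R
    # (from v ratings) with the global mean C, anchored by a
    # minimum-ratings threshold m
    return (v * R + m * C) / (v + m)

# hypothetical numbers: a book rated 4.5 from 600k ratings,
# global mean 3.9, threshold 500k
print(round(weighted_rating(4.5, 600_000, 500_000, 3.9), 3))  # 4.227
```

Books with few ratings are pulled toward the global mean, which is the same effect the "at least 500k ratings" filter approximates.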
Previous -> [The Story of Book](https://www.kaggle.com/omarzaghlol/goodreads-1-the-story-of-book/)
| github_jupyter |
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import gaussian_kde, chi2, pearsonr
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
SEED = 35010732 # from random.org
np.random.seed(SEED)
print(plt.style.available)
plt.style.use('seaborn-white')
cor1000 = pd.read_csv("correlations1kbig.csv")
cor10k = pd.read_csv("correlations10kbig.csv")
cor1000
corr1000_avg = cor1000.groupby('rho').mean().reset_index()
corr1000_std = cor1000.groupby('rho').std().reset_index()
corr1000_avg
plt.figure(figsize=(5,5))
rho_theory = np.linspace(-0.95,0.95,100)
c_theory = 2*np.abs(rho_theory)/(1-np.abs(rho_theory))*np.sign(rho_theory)
plt.scatter(cor1000['rho'],cor1000['C'])
plt.plot(rho_theory,c_theory)
plt.axhline(y=0.0, color='r')
plt.figure(figsize=(5,5))
rho_theory = np.linspace(-0.95,0.95,100)
c_theory = 2*np.abs(rho_theory)/(1-np.abs(rho_theory))*np.sign(rho_theory)
plt.errorbar(corr1000_avg['rho'],corr1000_avg['C'],yerr=corr1000_avg['dC'],fmt="o",color='k')
plt.plot(rho_theory,c_theory,"k")
plt.axhline(y=0.0, color='k')
plt.xlabel(r'$\rho$')
plt.ylabel("C")
plt.savefig("corr.png",format='png',dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
for rho in corr1000_avg['rho']:
data1000_rho = cor1000[cor1000['rho']==rho]
print(rho,data1000_rho['A1'].mean(),data1000_rho['A1'].std(),data1000_rho['dA1'].mean())
print(rho,data1000_rho['A2'].mean(),data1000_rho['A2'].std(),data1000_rho['dA2'].mean())
data1000_05 = cor1000[cor1000['rho']==0.4999999999999997]
data1000_05
plt.hist(data1000_05['A1'],bins=10,density=True)
data1k05 = pd.read_csv('correlations1k05.csv')
data1k05
plt.hist(data1k05['a2'],bins=30,density=True)
print(data1k05['A1'].mean(),data1k05['A1'].std(),data1k05['dA1'].mean(),data1k05['dA1'].std())
print(data1k05['a1'].mean(),data1k05['a1'].std(),data1k05['da1'].mean(),data1k05['da1'].std())
print(data1k05['A2'].mean(),data1k05['A2'].std(),data1k05['dA2'].mean(),data1k05['dA2'].std())
print(data1k05['a2'].mean(),data1k05['a2'].std(),data1k05['da2'].mean(),data1k05['da2'].std())
plt.figure(facecolor="white")
xs = np.linspace(0.25,2,200)
densityA1 = gaussian_kde(data1k05['A1'])
densityA2 = gaussian_kde(data1k05['A2'])
densitya1 = gaussian_kde(data1k05['a1'])
densitya2 = gaussian_kde(data1k05['a2'])
plt.plot(xs,densityA1(xs),"k-",label=r"$A_{1}$ MCMC")
plt.plot(xs,densitya1(xs),"k:",label=r"$A_{1}$ ML")
plt.axvline(x=1.0,color="k")
plt.legend()
plt.xlabel(r"$A_1$")
plt.ylabel(r"$p(A_{1})$")
plt.savefig("A1kde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
plt.figure(facecolor="white")
xs = np.linspace(0.25,0.5,200)
densityA2 = gaussian_kde(data1k05['A2'])
densitya2 = gaussian_kde(data1k05['a2'])
plt.plot(xs,densityA2(xs),"k-",label=r"$A_{2}$ MCMC")
plt.plot(xs,densitya2(xs),"k:",label=r"$A_{2}$ ML")
plt.axvline(x=0.3333,color="k")
plt.legend()
plt.xlabel(r"$A_2$")
plt.ylabel(r"$p(A_{2})$")
plt.savefig("A2kde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
data1k025 = pd.read_csv('correlations1k025.csv')
data1k025
plt.hist(data1k025['a2'],bins=30,density=True)
print(data1k025['A1'].mean(),data1k025['A1'].std(),data1k025['dA1'].mean(),data1k025['dA1'].std())
print(data1k025['a1'].mean(),data1k025['a1'].std(),data1k025['da1'].mean(),data1k025['da1'].std())
print(data1k025['A2'].mean(),data1k025['A2'].std(),data1k025['dA2'].mean(),data1k025['dA2'].std())
print(data1k025['a2'].mean(),data1k025['a2'].std(),data1k025['da2'].mean(),data1k025['da2'].std())
plt.figure(facecolor="white")
xs = np.linspace(0.25,2,200)
densityA1 = gaussian_kde(data1k025['A1'])
densityA2 = gaussian_kde(data1k025['A2'])
densitya1 = gaussian_kde(data1k025['a1'])
densitya2 = gaussian_kde(data1k025['a2'])
plt.plot(xs,densityA1(xs),"k-",label=r"$A_{1}$ MCMC")
plt.plot(xs,densitya1(xs),"k:",label=r"$A_{1}$ ML")
plt.axvline(x=1.0,color="k")
plt.legend()
plt.xlabel(r"$A_1$")
plt.ylabel(r"$p(A_{1})$")
plt.savefig("A1kde025.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
plt.figure(facecolor="white")
xs = np.linspace(0.35,1,200)
densityA2 = gaussian_kde(data1k025['A2'])
densitya2 = gaussian_kde(data1k025['a2'])
plt.plot(xs,densityA2(xs),"k-",label=r"$A_{2}$ MCMC")
plt.plot(xs,densitya2(xs),"k:",label=r"$A_{2}$ ML")
plt.axvline(x=0.6,color="k")
plt.legend()
plt.xlabel(r"$A_2$")
plt.ylabel(r"$p(A_{2})$")
plt.savefig("A2kde025.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
plt.figure(facecolor="white")
plt.scatter(data1k05['D'],data1k05['d'])
plt.xlabel(r"$D$ MCMC")
plt.ylabel(r"$D$ ML")
plt.savefig("A1corrkde025.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
print(pearsonr(data1k025['A1'],data1k025['a1']))
print(pearsonr(data1k025['A2'],data1k025['a2']))
print(pearsonr(data1k025['D'],data1k025['d']))
p1 = np.polyfit(data1k05['dA1'],data1k05['da1'],1)
print(p1)
print("factor of underestimation: ",1/p1[0])
dA1 = np.linspace(0.09,0.4,200)
da1 = p1[0]*dA1 + p1[1]
plt.figure(facecolor="white")
plt.scatter(data1k05['dA1'],data1k05['da1'],color="k")
plt.plot(dA1,da1,"k:")
plt.xlabel(r"$dA_1$ MCMC")
plt.ylabel(r"$dA_{1}$ ML")
plt.savefig("dA1corrkde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
p2 = np.polyfit(data1k05['dA2'],data1k05['da2'],1)
print(p2)
print("factor of underestimation: ",1/p2[0])
dA2 = np.linspace(0.03,0.15,200)
da2 = p2[0]*dA2 + p2[1]
plt.figure(facecolor="white")
plt.scatter(data1k05['dA2'],data1k05['da2'],color="k")
plt.plot(dA2,da2,"k:")
plt.xlabel(r"$dA_2$ MCMC")
plt.ylabel(r"$dA_{2}$ ML")
plt.savefig("dA2corrkde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
p1 = np.polyfit(data1k025['dA1'],data1k025['da1'],1)
print(p1)
p1 = np.polyfit(data1k05['dA1'],data1k05['da1'],1)
print(p1)
print("factor of underestimation: ",1/p1[0])
dA1 = np.linspace(0.05,0.4,200)
da1 = p1[0]*dA1 + p1[1]
plt.figure(facecolor="white")
plt.scatter(data1k05['dA1'],data1k05['da1'],color="k")
plt.plot(dA1,da1,"k:")
plt.xlabel(r"$dA_1$ MCMC")
plt.ylabel(r"$dA_{1}$ ML")
plt.savefig("dA1corrkde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
p2 = np.polyfit(data1k05['dA2'],data1k05['da2'],1)
print(p2)
print("factor of underestimation: ",1/p2[0])
dA2 = np.linspace(0.015,0.05,200)
da2 = p2[0]*dA2 + p2[1]
plt.figure(facecolor="white")
plt.scatter(data1k05['dA2'],data1k05['da2'],color="k")
plt.plot(dA2,da2,"k:")
plt.xlabel(r"$dA_2$ MCMC")
plt.ylabel(r"$dA_{2}$ ML")
plt.savefig("dA2corrkde05.png",format="png",dpi=300,bbox_inches='tight',facecolor="white",backgroundcolor="white")
```
| github_jupyter |
## Face and Facial Keypoint Detection
After training a neural network to detect facial keypoints, you can apply it to *any* image that contains faces. The network expects an input Tensor of a certain size, so to detect an arbitrary face you first have to do some preprocessing.
1. Detect all faces in the image with a face detector. In this notebook, we will use a Haar cascade detector.
2. Pre-process those face images so that they are grayscale and converted into Tensors of the input size the network expects. This step is similar to the `data_transform` you created and applied in Notebook 2: it rescales, normalizes, and turns every image into a Tensor to use as input to the CNN.
3. Use your trained model to detect the facial keypoints in the image.
---
In the next Python cell, we load the libraries needed for this part of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
#### Select an image
Select an image to run facial keypoint detection on. You can pick any image of a face from the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in the image
To detect all the faces in the selected image, you'll next use one of OpenCV's pre-trained Haar cascade classifiers, all of which can be found in the `detector_architectures/` directory.
In the code below, we loop over each face found in the original image and draw a red square around each one on a copy of the image, leaving the original unmodified. As an optional exercise with the Haar detector, you can also [add eye detection](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html).
Below are some examples of face detection on a variety of images.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
# draw a rectangle around each detected face
# you may also need to change the width of the rectangle drawn depending on image resolution
cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Load in a trained model
Once you have an image to work with (again, you can pick any face image from the `images/` directory), the next step is to pre-process it and feed it into your CNN facial keypoint detector.
First, load your best saved model by its filename.
```
import torch
from models import Net
net = Net()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
# net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
## print out your net and prepare it for testing (uncomment the line below)
# net.eval()
```
## Keypoint detection
Now we loop over each detected face in the image once more, except this time you'll transform each face into the kind of Tensor input image your CNN can accept.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0, 1] instead of [0, 255]
3. Rescale the detected face to the expected square size of your CNN (we suggest 224x224)
4. Reshape the numpy image into a torch image.
**Hint**: The faces found by a Haar detector come in a different size than the faces the network was trained on. If the keypoints your model produces look too small for a given face, try adding some padding to the detected `roi` before passing it to the model as input.
You may find the transform code in `data_load.py` helpful for carrying out these processing steps.
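The padding suggestion might look like the sketch below (the helper name and pad amount are hypothetical):

```python
import numpy as np

def padded_roi(image, x, y, w, h, pad=40):
    # enlarge a Haar detection box by `pad` pixels on each side,
    # clamped so it never leaves the image borders
    H, W = image.shape[:2]
    y0, y1 = max(0, y - pad), min(H, y + h + pad)
    x0, x1 = max(0, x - pad), min(W, x + w + pad)
    return image[y0:y1, x0:x1]

img = np.zeros((400, 300, 3))
print(padded_roi(img, 100, 100, 50, 50).shape)  # (130, 130, 3)
```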
### TODO: Detect and display the predicted keypoints
Once each face has been appropriately converted into an input Tensor for the network, you can apply `net` to each one. The output should be the predicted facial keypoints, which need to be "un-normalized" before they can be displayed. You may find it helpful to write a helper function like `show_keypoints`. In the end, you should get an image like the one below, where the facial keypoints closely match the facial features of each face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
image_copy = np.copy(image)
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
# Select the region of interest that is the face in the image
roi = image_copy[y:y+h, x:x+w]
## TODO: Convert the face region from RGB to grayscale
## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
## TODO: Make facial keypoint predictions using your loaded, trained network
## TODO: Display each detected face and the corresponding keypoints
```
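For reference, here is a hedged NumPy-only sketch of steps 1-4 (it uses nearest-neighbor resizing to stay self-contained; in the notebook you would more likely use `cv2.cvtColor` and `cv2.resize`, and the helper name is hypothetical):

```python
import numpy as np

def face_to_input(roi, size=224):
    # step 1: RGB -> grayscale via the usual luminance weights
    gray = roi @ np.array([0.299, 0.587, 0.114])
    # step 2: scale the color range from [0, 255] to [0, 1]
    gray = gray / 255.0
    # step 3: nearest-neighbor resize to a square of side `size`
    h, w = gray.shape
    gray = gray[np.arange(size) * h // size][:, np.arange(size) * w // size]
    # step 4: reshape (H x W) into the torch layout (batch, channel, H, W)
    return gray[None, None, :, :]

roi = np.random.randint(0, 256, (150, 120, 3)).astype(float)
print(face_to_input(roi).shape)  # (1, 1, 224, 224)
```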
| github_jupyter |
# WGAN
Original paper: Wasserstein GAN https://arxiv.org/abs/1701.07875 (2017)
WGAN changes the GAN loss so that, mathematically, training for image generation behaves better.
A standard GAN uses the KL divergence to pull the probability distribution induced by the Generator toward the underlying distribution of the images we want to generate. But since the KL divergence does not guarantee continuity, WGAN performs the approximation with the Wasserstein distance instead.
To realize a loss based on the Wasserstein distance, the WGAN Discriminator does not apply a sigmoid at the end. Accordingly, the loss is not a sigmoid cross-entropy either; the raw Discriminator output is used as-is.
At every iteration, the WGAN algorithm alternates between the following Discriminator and Generator updates.
- Optimizer: RMSProp (learning rate: 0.0005)
#### Training the Discriminator (repeat the steps below n_critic times)
1. Sample real images, and sample z from a uniform distribution
2. Compute the loss $L_D = \frac{1}{|Minibatch|} \{ \sum_{i} D(x^{(i)}) - \sum_i D (G(z^{(i)})) \}$ and take an SGD step
3. Clip all Discriminator parameters to [-clip, clip]
#### Training the Generator
1. Sample z from a uniform distribution
2. Compute the loss $L_G = \frac{1}{|Minibatch|} \sum_i D (G(z^{(i)})) $ and take an SGD step
(Note: WGAN converges very slowly and needs an enormous number of training iterations, so be warned!)
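The critic update described above can be sketched on toy data (stand-in linear models, not the notebook's networks; this follows the paper's sign convention of minimizing E[D(fake)] − E[D(real)]):

```python
import torch

D = torch.nn.Linear(1, 1)                     # stand-in critic: no sigmoid at the end
G = torch.nn.Linear(4, 1)                     # stand-in generator
opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-4)

x_real = torch.randn(32, 1) + 2.0             # "real" samples
z = torch.rand(32, 4) * 2 - 1                 # z ~ Uniform(-1, 1)

opt_D.zero_grad()
loss_D = D(G(z).detach()).mean() - D(x_real).mean()  # minimize E[D(fake)] - E[D(real)]
loss_D.backward()
opt_D.step()
for p in D.parameters():                      # clip every critic weight to [-0.01, 0.01]
    p.data.clamp_(-0.01, 0.01)
print(all((p.abs() <= 0.01).all() for p in D.parameters()))  # True
```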
## Import and Config
```
import torch
import torch.nn.functional as F
import torchvision
import numpy as np
from collections import OrderedDict
from easydict import EasyDict
import argparse
import os
import matplotlib.pyplot as plt
import pandas as pd
from _main_base import *
#---
# config
#---
cfg = EasyDict()
# class
cfg.CLASS_LABEL = ['akahara', 'madara'] # list, dict('label' : '[B, G, R]')
cfg.CLASS_NUM = len(cfg.CLASS_LABEL)
# model
cfg.INPUT_Z_DIM = 128
cfg.INPUT_MODE = None
cfg.OUTPUT_HEIGHT = 32
cfg.OUTPUT_WIDTH = 32
cfg.OUTPUT_CHANNEL = 3
cfg.OUTPUT_MODE = 'RGB' # RGB, GRAY, EDGE, CLASS_LABEL
cfg.G_DIM = 64
cfg.D_DIM = 64
cfg.CHANNEL_AXIS = 1 # 1 ... [mb, c, h, w], 3 ... [mb, h, w, c]
cfg.GPU = False
cfg.DEVICE = torch.device('cuda' if cfg.GPU and torch.cuda.is_available() else 'cpu')
# train
cfg.TRAIN = EasyDict()
cfg.TRAIN.DISPAY_ITERATION_INTERVAL = 50
cfg.PREFIX = 'WGAN'
cfg.TRAIN.MODEL_G_SAVE_PATH = 'models/' + cfg.PREFIX + '_G_{}.pt'
cfg.TRAIN.MODEL_D_SAVE_PATH = 'models/' + cfg.PREFIX + '_D_{}.pt'
cfg.TRAIN.MODEL_SAVE_INTERVAL = 200
cfg.TRAIN.ITERATION = 5000
cfg.TRAIN.MINIBATCH = 32
cfg.TRAIN.OPTIMIZER_G = torch.optim.Adam
cfg.TRAIN.LEARNING_PARAMS_G = {'lr' : 0.0002, 'betas' : (0.5, 0.9)}
cfg.TRAIN.OPTIMIZER_D = torch.optim.Adam
cfg.TRAIN.LEARNING_PARAMS_D = {'lr' : 0.0002, 'betas' : (0.5, 0.9)}
cfg.TRAIN.LOSS_FUNCTION = None
cfg.TRAIN.DATA_PATH = './data/'
cfg.TRAIN.DATA_HORIZONTAL_FLIP = False # data augmentation : holizontal flip
cfg.TRAIN.DATA_VERTICAL_FLIP = False # data augmentation : vertical flip
cfg.TRAIN.DATA_ROTATION = False # data augmentation : rotation False, or integer
cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE = True
cfg.TRAIN.LEARNING_PROCESS_RESULT_INTERVAL = 500
cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH = 'result/' + cfg.PREFIX + '_result_{}.jpg'
cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH = 'result/' + cfg.PREFIX + '_loss.txt'
#---
# WGAN config
#---
cfg.TRAIN.WGAN_CLIPS_VALUE = 0.01
cfg.TRAIN.WGAN_CRITIC_N = 5
# test
cfg.TEST = EasyDict()
cfg.TEST.MODEL_G_PATH = cfg.TRAIN.MODEL_G_SAVE_PATH.format('final')
cfg.TEST.DATA_PATH = './data'
cfg.TEST.MINIBATCH = 10
cfg.TEST.ITERATION = 2
cfg.TEST.RESULT_SAVE = False
cfg.TEST.RESULT_IMAGE_PATH = 'result/' + cfg.PREFIX + '_result_{}.jpg'
# random seed
torch.manual_seed(0)
# make model save directory
def make_dir(path):
if '/' in path:
model_save_dir = '/'.join(path.split('/')[:-1])
os.makedirs(model_save_dir, exist_ok=True)
make_dir(cfg.TRAIN.MODEL_G_SAVE_PATH)
make_dir(cfg.TRAIN.MODEL_D_SAVE_PATH)
make_dir(cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH)
make_dir(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH)
```
## Define Model
```
class Generator(torch.nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.module = torch.nn.Sequential(OrderedDict({
'G_layer_1' : torch.nn.ConvTranspose2d(cfg.INPUT_Z_DIM, cfg.G_DIM * 4, kernel_size=[cfg.OUTPUT_HEIGHT // 8, cfg.OUTPUT_WIDTH // 8], stride=1, bias=False),
'G_layer_1_bn' : torch.nn.BatchNorm2d(cfg.G_DIM * 4),
'G_layer_1_ReLU' : torch.nn.ReLU(),
'G_layer_2' : torch.nn.ConvTranspose2d(cfg.G_DIM * 4, cfg.G_DIM * 2, kernel_size=4, stride=2, padding=1, bias=False),
'G_layer_2_bn' : torch.nn.BatchNorm2d(cfg.G_DIM * 2),
'G_layer_2_ReLU' : torch.nn.ReLU(),
'G_layer_3' : torch.nn.ConvTranspose2d(cfg.G_DIM * 2, cfg.G_DIM, kernel_size=4, stride=2, padding=1, bias=False),
'G_layer_3_bn' : torch.nn.BatchNorm2d(cfg.G_DIM),
'G_layer_3_ReLU' : torch.nn.ReLU(),
'G_layer_out' : torch.nn.ConvTranspose2d(cfg.G_DIM, cfg.OUTPUT_CHANNEL, kernel_size=4, stride=2, padding=1, bias=False),
'G_layer_out_tanh' : torch.nn.Tanh()
}))
def forward(self, x):
x = self.module(x)
return x
class Discriminator(torch.nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.module = torch.nn.Sequential(OrderedDict({
'D_layer_1' : torch.nn.Conv2d(cfg.OUTPUT_CHANNEL, cfg.D_DIM, kernel_size=4, padding=1, stride=2, bias=False),
'D_layer_1_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True),
'D_layer_2' : torch.nn.Conv2d(cfg.D_DIM, cfg.D_DIM * 2, kernel_size=4, padding=1, stride=2, bias=False),
'D_layer_2_bn' : torch.nn.BatchNorm2d(cfg.D_DIM * 2),
'D_layer_2_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True),
'D_layer_3' : torch.nn.Conv2d(cfg.D_DIM * 2, cfg.D_DIM * 4, kernel_size=4, padding=1, stride=2, bias=False),
'D_layer_3_bn' : torch.nn.BatchNorm2d(cfg.D_DIM * 4),
'D_layer_3_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True),
'D_layer_out' : torch.nn.Conv2d(cfg.D_DIM * 4, 1, kernel_size=[cfg.OUTPUT_HEIGHT // 8, cfg.OUTPUT_WIDTH // 8], padding=0, stride=1, bias=False),
}))
def forward(self, x):
x = self.module(x)
return x
```
## Train
```
def result_show(G, z, path=None, save=False, show=False):
if (save or show) is False:
print('argument save >> {} and show >> {}, so skip'.format(save, show))
return
Gz = G(z)
Gz = Gz.detach().cpu().numpy()
Gz = (Gz * 127.5 + 127.5).astype(np.uint8)
Gz = Gz.reshape([-1, cfg.OUTPUT_CHANNEL, cfg.OUTPUT_HEIGHT, cfg.OUTPUT_WIDTH])
Gz = Gz.transpose(0, 2, 3, 1)
for i in range(cfg.TEST.MINIBATCH):
_G = Gz[i]
plt.subplot(1, cfg.TEST.MINIBATCH, i + 1)
plt.imshow(_G)
plt.axis('off')
if path is not None:
plt.savefig(path)
print('result was saved to >> {}'.format(path))
if show:
plt.show()
# train
def train():
# model
G = Generator().to(cfg.DEVICE)
D = Discriminator().to(cfg.DEVICE)
opt_G = cfg.TRAIN.OPTIMIZER_G(G.parameters(), **cfg.TRAIN.LEARNING_PARAMS_G)
opt_D = cfg.TRAIN.OPTIMIZER_D(D.parameters(), **cfg.TRAIN.LEARNING_PARAMS_D)
#path_dict = data_load(cfg)
#paths = path_dict['paths']
#paths_gt = path_dict['paths_gt']
trainset = torchvision.datasets.CIFAR10(root=cfg.TRAIN.DATA_PATH , train=True, download=True, transform=None)
train_Xs = trainset.data
train_ys = trainset.targets
# training
mbi = 0
train_N = len(train_Xs)
train_ind = np.arange(train_N)
np.random.seed(0)
np.random.shuffle(train_ind)
list_iter = []
list_loss_G = []
list_loss_D = []
list_loss_D_real = []
list_loss_D_fake = []
list_loss_WDistance = []
one = torch.FloatTensor([1])
minus_one = one * -1
print('training start')
progres_bar = ''
for i in range(cfg.TRAIN.ITERATION):
if mbi + cfg.TRAIN.MINIBATCH > train_N:
mb_ind = train_ind[mbi:]
np.random.shuffle(train_ind)
mb_ind = np.hstack((mb_ind, train_ind[ : (cfg.TRAIN.MINIBATCH - (train_N - mbi))]))
mbi = cfg.TRAIN.MINIBATCH - (train_N - mbi)
else:
mb_ind = train_ind[mbi : mbi + cfg.TRAIN.MINIBATCH]
mbi += cfg.TRAIN.MINIBATCH
# update D
for _ in range(cfg.TRAIN.WGAN_CRITIC_N):
opt_D.zero_grad()
# parameter clipping > [-clip_value, clip_value]
for param in D.parameters():
param.data.clamp_(- cfg.TRAIN.WGAN_CLIPS_VALUE, cfg.TRAIN.WGAN_CLIPS_VALUE)
# sample X
Xs = torch.tensor(preprocess(train_Xs[mb_ind], cfg, cfg.OUTPUT_MODE), dtype=torch.float).to(cfg.DEVICE)
# sample x
z = np.random.uniform(-1, 1, size=(cfg.TRAIN.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1))
z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE)
# forward
Gz = G(z)
loss_D_fake = D(Gz).mean(0).view(1)
loss_D_real = D(Xs).mean(0).view(1)
loss_D = loss_D_fake - loss_D_real
loss_D_real.backward(one)
loss_D_fake.backward(minus_one)
opt_D.step()
Wasserstein_distance = loss_D_real - loss_D_fake
# update G
opt_G.zero_grad()
z = np.random.uniform(-1, 1, size=(cfg.TRAIN.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1))
z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE)
loss_G = D(G(z)).mean(0).view(1)
loss_G.backward(one)
opt_G.step()
progres_bar += '|'
print('\r' + progres_bar, end='')
_loss_G = loss_G.item()
_loss_D = loss_D.item()
_loss_D_real = loss_D_real.item()
_loss_D_fake = loss_D_fake.item()
_Wasserstein_distance = Wasserstein_distance.item()
if (i + 1) % 10 == 0:
progres_bar += str(i + 1)
print('\r' + progres_bar, end='')
# save process result
if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE:
list_iter.append(i + 1)
list_loss_G.append(_loss_G)
list_loss_D.append(_loss_D)
list_loss_D_real.append(_loss_D_real)
list_loss_D_fake.append(_loss_D_fake)
list_loss_WDistance.append(_Wasserstein_distance)
# display training state
if (i + 1) % cfg.TRAIN.DISPAY_ITERATION_INTERVAL == 0:
print('\r' + ' ' * len(progres_bar), end='')
print('\rIter:{}, LossG (fake:{:.4f}), LossD:{:.4f} (real:{:.4f}, fake:{:.4f}), WDistance:{:.4f}'.format(
i + 1, _loss_G, _loss_D, _loss_D_real, _loss_D_fake, _Wasserstein_distance))
progres_bar = ''
# save parameters
if (cfg.TRAIN.MODEL_SAVE_INTERVAL != False) and ((i + 1) % cfg.TRAIN.MODEL_SAVE_INTERVAL == 0):
G_save_path = cfg.TRAIN.MODEL_G_SAVE_PATH.format('iter{}'.format(i + 1))
D_save_path = cfg.TRAIN.MODEL_D_SAVE_PATH.format('iter{}'.format(i + 1))
torch.save(G.state_dict(), G_save_path)
torch.save(D.state_dict(), D_save_path)
print('save G >> {}, D >> {}'.format(G_save_path, D_save_path))
# save process result
if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE and ((i + 1) % cfg.TRAIN.LEARNING_PROCESS_RESULT_INTERVAL == 0):
result_show(
G, z, cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH.format('iter' + str(i + 1)),
save=cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE, show=True)
G_save_path = cfg.TRAIN.MODEL_G_SAVE_PATH.format('final')
D_save_path = cfg.TRAIN.MODEL_D_SAVE_PATH.format('final')
torch.save(G.state_dict(), G_save_path)
torch.save(D.state_dict(), D_save_path)
print('final paramters were saved to G >> {}, D >> {}'.format(G_save_path, D_save_path))
if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE:
f = open(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH, 'w')
df = pd.DataFrame({'iteration' : list_iter, 'loss_G' : list_loss_G, 'loss_D' : list_loss_D,
'loss_D_real' : list_loss_D_real, 'loss_D_fake' : list_loss_D_fake, 'Wasserstein_Distance' : list_loss_WDistance})
df.to_csv(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH, index=False)
print('loss was saved to >> {}'.format(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH))
train()
```
## Test
```
# test
def test():
print('-' * 20)
print('test function')
print('-' * 20)
G = Generator().to(cfg.DEVICE)
G.load_state_dict(torch.load(cfg.TEST.MODEL_G_PATH, map_location=torch.device(cfg.DEVICE)))
G.eval()
np.random.seed(0)
for i in range(cfg.TEST.ITERATION):
z = np.random.uniform(-1, 1, size=(cfg.TEST.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1))
z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE)
result_show(G, z, cfg.TEST.RESULT_IMAGE_PATH.format(i + 1), save=cfg.TEST.RESULT_SAVE, show=True)
test()
def arg_parse():
parser = argparse.ArgumentParser(description='WGAN implemented with PyTorch')
parser.add_argument('--train', dest='train', action='store_true')
parser.add_argument('--test', dest='test', action='store_true')
args = parser.parse_args()
return args
# main
if __name__ == '__main__':
args = arg_parse()
if args.train:
train()
if args.test:
test()
if not (args.train or args.test):
print("please select train or test flag")
print("train: python main.py --train")
print("test: python main.py --test")
print("both: python main.py --train --test")
```
| github_jupyter |
```
#hide
# default_exp script
```
# Script - command line interfaces
> A fast way to turn your python function into a script.
Part of [fast.ai](https://www.fast.ai)'s toolkit for delightful developer experiences.
## Overview
Sometimes, you want to create a quick script, either for yourself, or for others. But in Python, that involves a whole lot of boilerplate and ceremony, especially if you want to support command line arguments, provide help, and other niceties. You can use [argparse](https://docs.python.org/3/library/argparse.html) for this purpose, which comes with Python, but it's complex and verbose.
`fastcore.script` makes life easier. There are much fancier modules to help you write scripts (we recommend [Python Fire](https://github.com/google/python-fire), and [Click](https://click.palletsprojects.com/en/7.x/) is also popular), but fastcore.script is very fast and very simple. In fact, it's <50 lines of code! Basically, it's just a little wrapper around `argparse` that uses modern Python features and some thoughtful defaults to get rid of the boilerplate.
For full details, see the [docs](https://fastcore.script.fast.ai) for `core`.
## Example
Here's a complete example (available in `examples/test_fastcore.py`):
```python
from fastcore.script import *
@call_parse
def main(msg:Param("The message", str),
upper:Param("Convert to uppercase?", store_true)):
"Print `msg`, optionally converting to uppercase"
print(msg.upper() if upper else msg)
```
If you copy that into a file and run it, you'll see:
```
$ examples/test_fastcore.py --help
usage: test_fastcore.py [-h] [--upper] [--pdb PDB] [--xtra XTRA] msg
Print `msg`, optionally converting to uppercase
positional arguments:
msg The message
optional arguments:
-h, --help show this help message and exit
--upper Convert to uppercase? (default: False)
--pdb PDB Run in pdb debugger (default: False)
--xtra XTRA Parse for additional args (default: '')
```
As you see, we didn't need any `if __name__ == "__main__"`, we didn't have to parse arguments, we just wrote a function, added a decorator to it, and added some annotations to our function's parameters. As a bonus, we can also use this function directly from a REPL such as Jupyter Notebook - it's not just for command line scripts!
## Param annotations
Each parameter in your function should have an annotation `Param(...)` (as in the example above). You can pass the following when calling `Param`: `help`,`type`,`opt`,`action`,`nargs`,`const`,`choices`,`required` . Except for `opt`, all of these are just passed directly to `argparse`, so you have all the power of that module at your disposal. Generally you'll want to pass at least `help` (since this is provided as the help string for that parameter) and `type` (to ensure that you get the type of data you expect). `opt` is a bool that defines whether a param is optional or required (positional) - but you'll generally not need to set this manually, because fastcore.script will set it for you automatically based on *default* values.
You should provide a default (after the `=`) for any *optional* parameters. If you don't provide a default for a parameter, then it will be a *positional* parameter.
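Since fastcore.script is a thin wrapper, the optional/positional split maps directly onto plain `argparse`; a rough stdlib equivalent (with hypothetical parameter names) is:

```python
import argparse

# a parameter without a default becomes positional; one with a default
# becomes an optional --flag carrying that default
p = argparse.ArgumentParser(description="Greet `name`")
p.add_argument("name", type=str, help="Who to greet")
p.add_argument("--times", type=int, default=1, help="How many times")

args = p.parse_args(["Alice", "--times", "2"])
print(args.name, args.times)  # Alice 2
```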
## setuptools scripts
There's a really nice feature of pip/setuptools that lets you create commandline scripts directly from functions, makes them available in the `PATH`, and even makes your scripts cross-platform (e.g. in Windows it creates an exe). fastcore.script supports this feature too. The trick to making a function available as a script is to add a `console_scripts` section to your setup file, of the form: `script_name=module:function_name`. E.g. in this case we use: `test_fastcore.script=fastcore.script.test_cli:main`. With this, you can then just type `test_fastcore.script` at any time, from any directory, and your script will be called (once it's installed using one of the methods below).
You don't actually have to write a `setup.py` yourself. Instead, just use [nbdev](https://nbdev.fast.ai). Then modify `settings.ini` as appropriate for your module/script. To install your script directly, you can type `pip install -e .`. Your script, when installed this way (this is called an [editable install](http://codumentary.blogspot.com/2014/11/python-tip-of-year-pip-install-editable.html)), will automatically be up to date even if you edit it - there's no need to reinstall it after editing. With nbdev you can even make your module and script available for installation directly from pip and conda by running `make release`.
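The `script_name=module:function_name` form is a standard setuptools entry point; a minimal sketch of the mapping you'd hand to `setup()` (with nbdev, `settings.ini` generates this for you):

```python
# the `console_scripts` mapping you'd pass to setuptools.setup()
entry_points = {
    "console_scripts": [
        # script_name = module:function_name
        "test_fastcore.script=fastcore.script.test_cli:main",
    ],
}

script, target = entry_points["console_scripts"][0].split("=")
print(script, "->", target)  # test_fastcore.script -> fastcore.script.test_cli:main
```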
## API details
```
from fastcore.test import *
#export
import inspect,functools,argparse,shutil
from fastcore.imports import *
from fastcore.utils import *
#export
def store_true():
"Placeholder to pass to `Param` for `store_true` action"
pass
#export
def store_false():
"Placeholder to pass to `Param` for `store_false` action"
pass
#export
def bool_arg(v):
"Use as `type` for `Param` to get `bool` behavior"
return str2bool(v)
#export
def clean_type_str(x:str):
x = str(x)
x = re.sub("(enum |class|function|__main__\.|\ at.*)", '', x)
x = re.sub("(<|>|'|\ )", '', x) # spl characters
return x
class Test: pass
test_eq(clean_type_str(argparse.ArgumentParser), 'argparse.ArgumentParser')
test_eq(clean_type_str(Test), 'Test')
test_eq(clean_type_str(int), 'int')
test_eq(clean_type_str(float), 'float')
test_eq(clean_type_str(store_false), 'store_false')
#export
class Param:
"A parameter in a function used in `anno_parser` or `call_parse`"
def __init__(self, help=None, type=None, opt=True, action=None, nargs=None, const=None,
choices=None, required=None, default=None):
if type==store_true: type,action,default=None,'store_true' ,False
if type==store_false: type,action,default=None,'store_false',True
if type and isinstance(type,typing.Type) and issubclass(type,enum.Enum) and not choices: choices=list(type)
store_attr()
def set_default(self, d):
if self.default is None:
if d==inspect.Parameter.empty: self.opt = False
else: self.default = d
if self.default is not None: self.help += f" (default: {self.default})"
@property
def pre(self): return '--' if self.opt else ''
@property
def kwargs(self): return {k:v for k,v in self.__dict__.items()
if v is not None and k!='opt' and k[0]!='_'}
def __repr__(self):
if not self.help and self.type is None: return ""
if not self.help and self.type is not None: return f"{clean_type_str(self.type)}"
if self.help and self.type is None: return f"<{self.help}>"
if self.help and self.type is not None: return f"{clean_type_str(self.type)} <{self.help}>"
test_eq(repr(Param("Help goes here")), '<Help goes here>')
test_eq(repr(Param("Help", int)), 'int <Help>')
test_eq(repr(Param(help=None, type=int)), 'int')
test_eq(repr(Param(help=None, type=None)), '')
```
Each parameter in your function should have an annotation `Param(...)`. You can pass the following when calling `Param`: `help`,`type`,`opt`,`action`,`nargs`,`const`,`choices`,`required` (i.e. it takes the same parameters as `argparse.ArgumentParser.add_argument`, plus `opt`). Except for `opt`, all of these are just passed directly to `argparse`, so you have all the power of that module at your disposal. Generally you'll want to pass at least `help` (since this is provided as the help string for that parameter) and `type` (to ensure that you get the type of data you expect).
`opt` is a bool that defines whether a param is optional or required (positional) - but you'll generally not need to set this manually, because fastcore.script will set it for you automatically based on *default* values. You should provide a default (after the `=`) for any *optional* parameters. If you don't provide a default for a parameter, then it will be a *positional* parameter.
Param's `__repr__` also allows for more informative function annotation when looking up the function's doc using shift+tab. You see the type annotation (if there is one) and the accompanying help documentation with it.
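To make the translation concrete, here is a plain-`argparse` sketch (illustrative only, not fastcore's actual implementation) of what a `Param`-annotated signature corresponds to:

```python
import argparse

# Hand-written argparse equivalent (a sketch) of:
#   def f(required:Param("Required param", int), b:Param("param 2", str)="test"): "my docs"
p = argparse.ArgumentParser(description="my docs")
p.add_argument("required", type=int, help="Required param")  # no default -> positional
p.add_argument("--b", type=str, default="test", help="param 2 (default: test)")  # default -> optional
args = p.parse_args(["3", "--b", "hello"])
print(args.required, args.b)  # -> 3 hello
```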
```
def f(required:Param("Required param", int),
a:Param("param 1", bool_arg),
b:Param("param 2", str)="test"):
"my docs"
...
help(f)
p = Param(help="help", type=int)
p.set_default(1)
test_eq(p.kwargs, {'help': 'help (default: 1)', 'type': int, 'default': 1})
#export
def anno_parser(func, prog=None, from_name=False):
"Look at params (annotated with `Param`) in func and return an `ArgumentParser`"
cols = shutil.get_terminal_size((120,30))[0]
fmtr = partial(argparse.HelpFormatter, max_help_position=cols//2, width=cols)
p = argparse.ArgumentParser(description=func.__doc__, prog=prog, formatter_class=fmtr)
for k,v in inspect.signature(func).parameters.items():
param = func.__annotations__.get(k, Param())
param.set_default(v.default)
p.add_argument(f"{param.pre}{k}", **param.kwargs)
p.add_argument(f"--pdb", help=argparse.SUPPRESS, action='store_true')
p.add_argument(f"--xtra", help=argparse.SUPPRESS, type=str)
return p
```
This converts a function with parameter annotations of type `Param` into an `argparse.ArgumentParser` object. Function arguments with a default provided are optional, and other arguments are positional.
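The core loop can be sketched without fastcore; `mini_anno_parser` below is a hypothetical, stripped-down analogue that reads plain dicts from the annotations in place of `Param` objects:

```python
import argparse, inspect

def mini_anno_parser(func):
    "Stripped-down sketch of anno_parser: build an ArgumentParser from annotations."
    p = argparse.ArgumentParser(description=func.__doc__)
    for name, prm in inspect.signature(func).parameters.items():
        kw = dict(func.__annotations__.get(name, {}))
        if prm.default is inspect.Parameter.empty:
            p.add_argument(name, **kw)  # no default -> positional
        else:
            p.add_argument(f"--{name}", default=prm.default, **kw)  # default -> optional
    return p

def f(required: {"type": int, "help": "Required param"},
      b: {"type": str, "help": "param 2"} = "test"):
    "my docs"

args = mini_anno_parser(f).parse_args(["7"])
print(args.required, args.b)  # -> 7 test
```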
```
_en = str_enum('_en', 'aa','bb','cc')
def f(required:Param("Required param", int),
a:Param("param 1", bool_arg),
b:Param("param 2", str)="test",
c:Param("param 3", _en)=_en.aa):
"my docs"
...
p = anno_parser(f, 'progname')
p.print_help()
#export
def args_from_prog(func, prog):
"Extract args from `prog`"
if prog is None or '#' not in prog: return {}
if '##' in prog: _,prog = prog.split('##', 1)
progsp = prog.split("#")
args = {progsp[i]:progsp[i+1] for i in range(0, len(progsp), 2)}
for k,v in args.items():
t = func.__annotations__.get(k, Param()).type
if t: args[k] = t(v)
return args
```
Sometimes it's convenient to extract arguments from the actual name of the called program. `args_from_prog` will do this, assuming that names and values of the params are separated by a `#`. Optionally there can also be a prefix, separated by `##` (a doubled `#`).
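The `#`-splitting logic can be sketched in a few lines; `mini_args_from_prog` below is a hypothetical simplification that skips the type-conversion step the real function performs:

```python
def mini_args_from_prog(prog):
    "Sketch of args_from_prog: parse name#value pairs from a program name."
    if prog is None or '#' not in prog: return {}
    if '##' in prog: _, prog = prog.split('##', 1)  # drop the optional prefix
    parts = prog.split('#')
    return {parts[i]: parts[i+1] for i in range(0, len(parts), 2)}

print(mini_args_from_prog('myscript##a#0#b#baa'))  # -> {'a': '0', 'b': 'baa'}
```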
```
exp = {'a': False, 'b': 'baa'}
test_eq(args_from_prog(f, 'foo##a#0#b#baa'), exp)
test_eq(args_from_prog(f, 'a#0#b#baa'), exp)
#export
SCRIPT_INFO = SimpleNamespace(func=None)
#export
def call_parse(func):
"Decorator to create a simple CLI from `func` using `anno_parser`"
mod = inspect.getmodule(inspect.currentframe().f_back)
if not mod: return func
@functools.wraps(func)
def _f(*args, **kwargs):
mod = inspect.getmodule(inspect.currentframe().f_back)
if not mod: return func(*args, **kwargs)
if not SCRIPT_INFO.func and mod.__name__=="__main__": SCRIPT_INFO.func = func.__name__
if len(sys.argv)>1 and sys.argv[1]=='': sys.argv.pop(1)
p = anno_parser(func)
args = p.parse_args().__dict__
xtra = otherwise(args.pop('xtra', ''), eq(1), p.prog)
tfunc = trace(func) if args.pop('pdb', False) else func
tfunc(**merge(args, args_from_prog(func, xtra)))
if mod.__name__=="__main__":
setattr(mod, func.__name__, _f)
SCRIPT_INFO.func = func.__name__
return _f()
else: return _f
@call_parse
def test_add(a:Param("param a", int), b:Param("param 1",int)): return a + b
```
`call_parse` decorated functions work as regular functions and also as command-line interface functions.
```
test_eq(test_add(1,2), 3)
```
This is the main way to use `fastcore.script`; decorate your function with `call_parse`, add `Param` annotations as shown above, and it can then be used as a script.
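The dual behavior can be sketched as follows; `mini_call_parse` is a hypothetical toy version that hard-codes two int arguments rather than reading `Param` annotations:

```python
import argparse, functools

def mini_call_parse(func):
    "Toy sketch of call_parse: callable as a plain function, or as a CLI entry point."
    @functools.wraps(func)
    def _f(*args, **kwargs):
        if args or kwargs:  # called as a regular function
            return func(*args, **kwargs)
        p = argparse.ArgumentParser(description=func.__doc__)  # called from the CLI
        p.add_argument("a", type=int); p.add_argument("b", type=int)
        ns = p.parse_args()
        return func(ns.a, ns.b)
    return _f

@mini_call_parse
def add(a, b):
    "add two ints"
    return a + b

print(add(1, 2))  # works as a plain function -> 3
```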
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
'''
The purpose of this code is to use the neurosketch data to examine an issue found in the realtime data: the ceiling evidence is sometimes smaller than the floor.
The logic of this code is:
train 2-way classifiers on the first five neurosketch runs, then use the last run to compute the ceiling and floor values and check whether they are reasonable.
'''
'''
purpose:
find the best-performing mask from the result of aggregate_greedy.py and save it as chosenMask
train all possible pairs of 2-way classifiers and save them for evidence calculation
load the saved classifiers and calculate different forms of evidence
steps:
load the result of aggregate_greedy.py
display the result of aggregate_greedy.py
find the best-performing ROI for each subject, display each subject's accuracy, and save the best-performing ROI as chosenMask
load the functional and behavioral data and chosenMask, and train all possible pairs of 2-way classifiers
calculate the evidence floor and ceiling for each subject and display different forms of evidence.
'''
'''
load the result of aggregate_greedy.py
'''
# To visualize the greedy result starting for 31 ROIs, in total 25 subjects.
import os
os.chdir("/gpfs/milgram/project/turk-browne/projects/rtTest/kp_scratch/")
from glob import glob
import matplotlib.pyplot as plt
from tqdm import tqdm
import pickle5 as pickle
import subprocess
import numpy as np
print(f"conda env={os.environ['CONDA_DEFAULT_ENV']}")
import nibabel as nib
import sys
import time
import pandas as pd
from sklearn.linear_model import LogisticRegression
import itertools
import pickle # note: this shadows the `pickle5 as pickle` import above
from subprocess import call
workingDir="/gpfs/milgram/project/turk-browne/projects/rtTest/"
def save_obj(obj, name):
with open(name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name):
with open(name + '.pkl', 'rb') as f:
return pickle.load(f)
roiloc="schaefer2018"
dataSource="neurosketch"
subjects_correctly_aligned=['1206161','0119173','1206162','1130161','1206163','0120171','0111171','1202161','0125172','0110172','0123173','0120173','0110171','0119172','0124171','0123171','1203161','0118172','0118171','0112171','1207162','0117171','0119174','0112173','0112172']
subjects=subjects_correctly_aligned
N=25
workingPath="/gpfs/milgram/project/turk-browne/projects/rtTest/"
GreedyBestAcc={}
numberOfROIs={}
for ii,subject in enumerate(subjects):
# try:
# GreedyBestAcc[ii,N]=np.load(workingPath+"./{}/{}/output/uniMaskRanktag2_top{}.npy".format(roiloc, subject, N))
# except:
# pass
t=np.load(workingPath+"./{}/{}/output/uniMaskRanktag2_top{}.npy".format(roiloc, subject, N))
GreedyBestAcc[subject]=[float(t)] # np.float was removed in NumPy 1.24; use the builtin
numberOfROIs[subject]=[N]
# for len_topN_1 in range(N-1,0,-1):
for len_topN in range(1,N):
# Wait(f"./tmp/{subject}_{N}_{roiloc}_{dataSource}_{len_topN_1}.pkl")
try:
# {current subject}_{number of ROIs greedy starts with, i.e. 25}_{mask type, schaefer2018}_{data source, neurosketch}_{number of ROIs in the current megaROI}
di = load_obj(f"./tmp__folder/{subject}_{N}_{roiloc}_{dataSource}_{len_topN}")
GreedyBestAcc[subject].append(float(di['bestAcc']))
numberOfROIs[subject].append(len_topN)
# GreedyBestAcc[ii,len_topN] = di['bestAcc']
except:
pass
# '''
# to load the intermediate results from the greedy code to examine the system
# '''
# def wait(tmpFile):
# while not os.path.exists(tmpFile+'_result.npy'):
# time.sleep(5)
# print(f"waiting for {tmpFile}_result.npy\n")
# return np.load(tmpFile+'_result.npy')
# subject= '0119173' #sys.argv[1]
# sub_id = [i for i,x in enumerate(subjects) if x == subject][0]
# intermediate_result=np.zeros((N+1,N+1))
# # how many should there be? 25 with 24 ROIs, 2 with 1 ROI, 24 ...
# for i in range(N,1,-1):
# for j in range(i):
# tmpFile=f"./tmp__folder/{subject}_{N}_{roiloc}_{dataSource}_{i}_{j}"
# sl_result=wait(tmpFile)
# intermediate_result[i,j]=sl_result
# # _=plt.imshow(intermediate_result)
# # the last row is the 25 results with 24 ROIs; the 2nd row is the 2 results with 1 ROI
'''
display the result of aggregate_greedy.py
'''
# GreedyBestAcc=GreedyBestAcc.T
# plt.imshow(GreedyBestAcc)
# _=plt.figure()
# for i in range(GreedyBestAcc.shape[0]):
# plt.scatter([i]*GreedyBestAcc.shape[1],GreedyBestAcc[i,:],c='g',s=2)
# plt.plot(np.arange(GreedyBestAcc.shape[0]),np.nanmean(GreedyBestAcc,axis=1))
# # plt.ylim([0.19,0.36])
# # plt.xlabel("number of ROIs")
# # plt.ylabel("accuracy")
# _=plt.figure()
# for j in range(GreedyBestAcc.shape[1]):
# plt.plot(GreedyBestAcc[:,j])
# GreedyBestAcc=GreedyBestAcc.T
# _=plt.figure()
# plt.imshow(GreedyBestAcc)
'''
find the best-performing ROI for each subject, display each subject's accuracy, and save the best-performing ROI as chosenMask
'''
#find best ID for each subject
bestID={}
for ii,subject in enumerate(subjects):
t=GreedyBestAcc[subject]
bestID[subject] = numberOfROIs[subject][np.where(t==np.nanmax(t))[0][0]] # bestID is the number of ROIs contained in each subject's best megaROI
chosenMask={}
for subject in bestID:
# best ID
# {current subject}_{number of ROIs greedy starts with, i.e. 25}_{mask type, schaefer2018}_{data source, neurosketch}_{number of ROIs in the best megaROI}
di = load_obj(f"./tmp__folder/{subject}_{N}_{roiloc}_{dataSource}_{bestID[subject]}")
chosenMask[subject] = di['bestROIs']
def getMask(topN, subject):
workingDir="/gpfs/milgram/project/turk-browne/projects/rtTest/"
for pn, parc in enumerate(topN):
_mask = nib.load(workingDir+"/{}/{}/{}".format(roiloc, subject, parc))
aff = _mask.affine
_mask = _mask.get_fdata() # get_data() is deprecated in recent nibabel
_mask = _mask.astype(int)
# say some things about the mask.
mask = _mask if pn == 0 else mask + _mask
mask[mask>0] = 1
return mask
for sub in chosenMask:
mask=getMask(chosenMask[sub], sub)
# if not os.path.exists(f"{workingDir}/{roiloc}/{sub}/chosenMask.npy"):
np.save(f"{workingDir}/{roiloc}/{sub}/chosenMask",mask)
from scipy.stats import zscore
def normalize(X):
_X=X.copy()
_X = zscore(_X, axis=0)
_X[np.isnan(_X)]=0
return _X
def mkdir(folder):
if not os.path.isdir(folder):
os.mkdir(folder)
'''
load the functional and behavioral data and chosenMask, and train all possible pairs of 2-way classifiers
'''
def minimalClass(subject):
'''
purpose:
train offline models
steps:
load preprocessed and aligned behavior and brain data
select data with the wanted pattern like AB AC AD BC BD CD
train corresponding classifiers and save the classifier performance and the classifiers themselves.
'''
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import joblib
import nibabel as nib
import itertools
from sklearn.linear_model import LogisticRegression
def gaussian(x, mu, sig):
# mu and sig are determined before each neurofeedback session using 2 recognition runs.
return round(1+18*(1 - np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.))))) # map from (0,1) -> [1,19]
def jitter(size,const=0):
jit = np.random.normal(0+const, 0.05, size)
X = np.zeros((size))
X = X + jit
return X
def other(target):
other_objs = [i for i in ['bed', 'bench', 'chair', 'table'] if i not in target]
return other_objs
def red_vox(n_vox, prop=0.1):
return int(np.ceil(n_vox * prop))
def get_inds(X, Y, pair, testRun=None):
inds = {}
# return relative indices
if testRun:
trainIX = Y.index[(Y['label'].isin(pair)) & (Y['run_num'] != int(testRun))]
else:
trainIX = Y.index[(Y['label'].isin(pair))]
# pull training and test data
trainX = X[trainIX]
trainY = Y.iloc[trainIX].label
# Main classifier on 5 runs, testing on 6th
clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000,
multi_class='multinomial').fit(trainX, trainY)
B = clf.coef_[0] # pull betas
# retrieve only the first object, then only the second object
if testRun:
obj1IX = Y.index[(Y['label'] == pair[0]) & (Y['run_num'] != int(testRun))]
obj2IX = Y.index[(Y['label'] == pair[1]) & (Y['run_num'] != int(testRun))]
else:
obj1IX = Y.index[(Y['label'] == pair[0])]
obj2IX = Y.index[(Y['label'] == pair[1])]
# Get the average of the first object, then the second object
obj1X = np.mean(X[obj1IX], 0)
obj2X = np.mean(X[obj2IX], 0)
# Build the importance map
mult1X = obj1X * B
mult2X = obj2X * B
# Sort these so that they are from least to most important for a given category.
sortmult1X = mult1X.argsort()[::-1]
sortmult2X = mult2X.argsort()
# add to a dictionary for later use
inds[clf.classes_[0]] = sortmult1X
inds[clf.classes_[1]] = sortmult2X
return inds
if 'milgram' in os.getcwd():
main_dir='/gpfs/milgram/project/turk-browne/projects/rtTest/'
else:
main_dir='/Users/kailong/Desktop/rtTest'
working_dir=main_dir
os.chdir(working_dir)
objects = ['bed', 'bench', 'chair', 'table']
if dataSource == "neurosketch":
funcdata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/subjects/{sub}_neurosketch/data/nifti/realtime_preprocessed/{sub}_neurosketch_recognition_run_{run}.nii.gz"
metadata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/data/features/recog/metadata_{sub}_V1_{phase}.csv"
anat = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/subjects/{sub}_neurosketch/data/nifti/{sub}_neurosketch_anat_mprage_brain.nii.gz"
elif dataSource == "realtime":
funcdata = "/gpfs/milgram/project/turk-browne/projects/rtcloud_kp/subjects/{sub}/ses{ses}_recognition/run0{run}/nifti/{sub}_functional.nii.gz"
metadata = "/gpfs/milgram/project/turk-browne/projects/rtcloud_kp/subjects/{sub}/ses{ses}_recognition/run0{run}/{sub}_0{run}_preprocessed_behavData.csv"
anat = "$TO_BE_FILLED"
else:
funcdata = "/gpfs/milgram/project/turk-browne/projects/rtTest/searchout/feat/{sub}_pre.nii.gz"
metadata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/data/features/recog/metadata_{sub}_V1_{phase}.csv"
anat = "$TO_BE_FILLED"
# print('mask dimensions: {}'. format(mask.shape))
# print('number of voxels in mask: {}'.format(np.sum(mask)))
phasedict = dict(zip([1,2,3,4,5,6],["12", "12", "34", "34", "56", "56"]))
imcodeDict={"A": "bed", "B": "chair", "C": "table", "D": "bench"} # lowercase to match the labels used elsewhere
chosenMask = np.load(f"/gpfs/milgram/project/turk-browne/projects/rtTest/schaefer2018/{subject}/chosenMask.npy")
print(f"np.sum(chosenMask)={np.sum(chosenMask)}")
# Compile preprocessed data and corresponding indices
metas = []
for run in range(1, 7):
print(run, end='--')
# retrieve from the dictionary which phase it is, assign the session
phase = phasedict[run]
# Build the path for the preprocessed functional data
this4d = funcdata.format(run=run, phase=phase, sub=subject)
# Read in the metadata, and reduce it to only the TR values from this run, add to a list
thismeta = pd.read_csv(metadata.format(run=run, phase=phase, sub=subject))
if dataSource == "neurosketch":
_run = 1 if run % 2 == 0 else 2
else:
_run = run
thismeta = thismeta[thismeta['run_num'] == int(_run)]
if dataSource == "realtime":
TR_num = list(thismeta.TR.astype(int))
labels = list(thismeta.Item)
labels = [imcodeDict[label] for label in labels]
else:
TR_num = list(thismeta.TR_num.astype(int))
labels = list(thismeta.label)
print("LENGTH OF TR: {}".format(len(TR_num)))
# Load the functional data
runIm = nib.load(this4d)
affine_mat = runIm.affine
runImDat = runIm.get_fdata()
# Use the TR numbers to select the correct features
features = [runImDat[:,:,:,n+3] for n in TR_num] # here shape is from (94, 94, 72, 240) to (80, 94, 94, 72)
features = np.array(features)
features = features[:, chosenMask==1]
print("shape of features", features.shape, "shape of chosenMask", chosenMask.shape)
features = normalize(features)
# features = np.expand_dims(features, 0)
# Append both so we can use it later
# metas.append(labels)
# metas['label']
t=pd.DataFrame()
t['label']=labels
t["run_num"]=run
behav_data=t if run==1 else pd.concat([behav_data,t])
runs = features if run == 1 else np.concatenate((runs, features))
dimsize = runIm.header.get_zooms()
brain_data = runs
print(brain_data.shape)
print(behav_data.shape)
FEAT=brain_data
print(f"FEAT.shape={FEAT.shape}")
META=behav_data
def Class(brain_data,behav_data):
accs = []
for run in range(1,7):
trainIX = behav_data['run_num']!=int(run)
testIX = behav_data['run_num']==int(run)
trainX = brain_data[trainIX]
trainY = behav_data.iloc[np.asarray(trainIX)].label
testX = brain_data[testIX]
testY = behav_data.iloc[np.asarray(testIX)].label
clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000,
multi_class='multinomial').fit(trainX, trainY)
# Monitor progress by printing accuracy (only useful if you're running a test set)
acc = clf.score(testX, testY)
accs.append(acc)
return np.mean(accs)
accs=Class(brain_data,behav_data)
print(f"new trained 4 way classifier accuracy={accs}")
# convert item column to label column
imcodeDict={
'A': 'bed',
'B': 'chair',
'C': 'table',
'D': 'bench'}
# Which run to use as test data (leave as None to not have test data)
testRun = 6 # when testing: testRun = 2 ; META['run_num'].iloc[:5]=2
# Decide on the proportion of crescent data to use for classification
include = 1
objects = ['bed', 'bench', 'chair', 'table']
allpairs = itertools.combinations(objects,2)
accs={}
# Iterate over all the possible target pairs of objects
for pair in allpairs:
# Find the control (remaining) objects for this pair
altpair = other(pair)
# pull sorted indices for each of the critical objects, in order of importance (low to high)
# inds = get_inds(FEAT, META, pair, testRun=testRun)
# Find the number of voxels that will be left given your inclusion parameter above
# nvox = red_vox(FEAT.shape[1], include)
for obj in pair:
# foil = [i for i in pair if i != obj][0]
for altobj in altpair:
# establish a naming convention where it is $TARGET_$CLASSIFICATION
# Target is the NF pair (e.g. bed/bench)
# Classification is between one of the targets and a control (e.g. bed/chair or bed/table, NOT bed/bench)
naming = '{}{}_{}{}'.format(pair[0], pair[1], obj, altobj)
# Pull the relevant inds from your previously established dictionary
# obj_inds = inds[obj]
# If you're using testdata, this function will split it up. Otherwise it leaves out run as a parameter
# if testRun:
# trainIX = META.index[(META['label'].isin([obj, altobj])) & (META['run_num'] != int(testRun))]
# testIX = META.index[(META['label'].isin([obj, altobj])) & (META['run_num'] == int(testRun))]
# else:
# trainIX = META.index[(META['label'].isin([obj, altobj]))]
# testIX = META.index[(META['label'].isin([obj, altobj]))]
# # pull training and test data
# trainX = FEAT[trainIX]
# testX = FEAT[testIX]
# trainY = META.iloc[trainIX].label
# testY = META.iloc[testIX].label
# print(f"obj={obj},altobj={altobj}")
# print(f"unique(trainY)={np.unique(trainY)}")
# print(f"unique(testY)={np.unique(testY)}")
# assert len(np.unique(trainY))==2
# for testRun in range(6):
if testRun:
trainIX = ((META['label']==obj) + (META['label']==altobj)) * (META['run_num']!=int(testRun))
testIX = ((META['label']==obj) + (META['label']==altobj)) * (META['run_num']==int(testRun))
else:
trainIX = ((META['label']==obj) + (META['label']==altobj))
testIX = ((META['label']==obj) + (META['label']==altobj))
# pull training and test data
trainX = FEAT[trainIX]
testX = FEAT[testIX]
trainY = META.iloc[np.asarray(trainIX)].label
testY = META.iloc[np.asarray(testIX)].label
# print(f"obj={obj},altobj={altobj}")
# print(f"unique(trainY)={np.unique(trainY)}")
# print(f"unique(testY)={np.unique(testY)}")
assert len(np.unique(trainY))==2
# # If you're selecting high-importance features, this bit handles that
# if include < 1:
# trainX = trainX[:, obj_inds[-nvox:]]
# testX = testX[:, obj_inds[-nvox:]]
# Train your classifier
clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000,
multi_class='multinomial').fit(trainX, trainY)
model_folder = f"{working_dir}{roiloc}/{subject}/clf/"
mkdir(model_folder)
# Save it for later use
joblib.dump(clf, model_folder +'/{}.joblib'.format(naming))
# Monitor progress by printing accuracy (only useful if you're running a test set)
acc = clf.score(testX, testY)
# print(naming, acc)
accs[naming]=acc
# _=plt.figure()
# _=plt.hist(list(accs.values()))
return accs
# sub_id=7
import sys
subject= '0119173' #sys.argv[1]
sub_id = [i for i,x in enumerate(subjects) if x == subject][0]
print("best 4way classifier accuracy = ",GreedyBestAcc[subject][bestID[subject]])
accs = minimalClass(subject)
for acc in accs:
print(acc,accs[acc])
'''
calculate the evidence floor and ceiling for each subject and display different forms of evidence.
'''
def morphingTarget(subject):
'''
purpose:
get the morphing target function
steps:
load train clf
load brain data and behavior data
get the morphing target function
evidence_floor is C evidence for CD classifier(can also be D evidence for CD classifier)
evidence_ceil is A evidence in AC and AD classifier
'''
import os
import numpy as np
import pandas as pd
import joblib
import nibabel as nib
phasedict = dict(zip([1,2,3,4,5,6],["12", "12", "34", "34", "56", "56"]))
imcodeDict={"A": "bed", "B": "chair", "C": "table", "D": "bench"} # lowercase to match the labels used elsewhere
if 'milgram' in os.getcwd():
main_dir='/gpfs/milgram/project/turk-browne/projects/rtTest/'
else:
main_dir='/Users/kailong/Desktop/rtTest'
working_dir=main_dir
os.chdir(working_dir)
funcdata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/subjects/{sub}_neurosketch/data/nifti/realtime_preprocessed/{sub}_neurosketch_recognition_run_{run}.nii.gz"
metadata = "/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/data/features/recog/metadata_{sub}_V1_{phase}.csv"
metas = []
# for run in range(1, 7):
# print(run, end='--')
# # retrieve from the dictionary which phase it is, assign the session
# phase = phasedict[run]
# ses = 1
# # Build the path for the preprocessed functional data
# this4d = funcdata.format(ses=ses, run=run, phase=phase, sub=subject)
# # Read in the metadata, and reduce it to only the TR values from this run, add to a list
# thismeta = pd.read_csv(metadata.format(ses=ses, run=run, phase=phase, sub=subject))
# if dataSource == "neurosketch":
# _run = 1 if run % 2 == 0 else 2
# else:
# _run = run
# thismeta = thismeta[thismeta['run_num'] == int(_run)]
# if dataSource == "realtime":
# TR_num = list(thismeta.TR.astype(int))
# labels = list(thismeta.Item)
# labels = [imcodeDict[label] for label in labels]
# else:
# TR_num = list(thismeta.TR_num.astype(int))
# labels = list(thismeta.label)
# print("LENGTH OF TR: {}".format(len(TR_num)))
# # Load the functional data
# runIm = nib.load(this4d)
# affine_mat = runIm.affine
# runImDat = runIm.get_fdata()
# # Use the TR numbers to select the correct features
# features = [runImDat[:,:,:,n+3] for n in TR_num]
# features = np.array(features)
# chosenMask = np.load(f"/gpfs/milgram/project/turk-browne/projects/rtTest/schaefer2018/{subject}/chosenMask.npy")
# features = features[:, chosenMask==1]
# print("shape of features", features.shape, "shape of mask", mask.shape)
# # featmean = features.mean(1).mean(1).mean(1)[..., None,None,None] #features.mean(1)[..., None]
# # features = features - featmean
# # features = features - features.mean(0)
# features = normalize(features)
# # features = np.expand_dims(features, 0)
# # Append both so we can use it later
# # metas.append(labels)
# # metas['label']
# t=pd.DataFrame()
# t['label']=labels
# t["run_num"]=run
# behav_data=t if run==1 else pd.concat([behav_data,t])
# runs = features if run == 1 else np.concatenate((runs, features))
# for run in range(1, 7):
run=6
print(run, end='--')
# retrieve from the dictionary which phase it is, assign the session
phase = phasedict[run]
ses = 1
# Build the path for the preprocessed functional data
this4d = funcdata.format(ses=ses, run=run, phase=phase, sub=subject)
# Read in the metadata, and reduce it to only the TR values from this run, add to a list
thismeta = pd.read_csv(metadata.format(ses=ses, run=run, phase=phase, sub=subject))
if dataSource == "neurosketch":
_run = 1 if run % 2 == 0 else 2
else:
_run = run
thismeta = thismeta[thismeta['run_num'] == int(_run)]
if dataSource == "realtime":
TR_num = list(thismeta.TR.astype(int))
labels = list(thismeta.Item)
labels = [imcodeDict[label] for label in labels]
else:
TR_num = list(thismeta.TR_num.astype(int))
labels = list(thismeta.label)
print("LENGTH OF TR: {}".format(len(TR_num)))
# Load the functional data
runIm = nib.load(this4d)
affine_mat = runIm.affine
runImDat = runIm.get_fdata()
# Use the TR numbers to select the correct features
features = [runImDat[:,:,:,n+3] for n in TR_num]
features = np.array(features)
chosenMask = np.load(f"/gpfs/milgram/project/turk-browne/projects/rtTest/schaefer2018/{subject}/chosenMask.npy")
features = features[:, chosenMask==1]
print("shape of features", features.shape, "shape of chosenMask", chosenMask.shape) # `mask` is not defined in this function; chosenMask is the loaded mask
# featmean = features.mean(1).mean(1).mean(1)[..., None,None,None] #features.mean(1)[..., None]
# features = features - featmean
# features = features - features.mean(0)
features = normalize(features)
# features = np.expand_dims(features, 0)
# Append both so we can use it later
# metas.append(labels)
# metas['label']
t=pd.DataFrame()
t['label']=labels
t["run_num"]=run
behav_data=t
runs = features
dimsize = runIm.header.get_zooms()
brain_data = runs
print(brain_data.shape)
print(behav_data.shape)
FEAT=brain_data
print(f"FEAT.shape={FEAT.shape}")
META=behav_data
# print('mask dimensions: {}'. format(mask.shape))
# print('number of voxels in mask: {}'.format(np.sum(mask)))
# runRecording = pd.read_csv(f"{cfg.recognition_dir}../runRecording.csv")
# actualRuns = list(runRecording['run'].iloc[list(np.where(1==1*(runRecording['type']=='recognition'))[0])]) # can be [1,2,3,4,5,6,7,8] or [1,2,4,5]
# objects = ['bed', 'bench', 'chair', 'table']
# for ii,run in enumerate(actualRuns[:2]): # load behavior and brain data for current session
# t = np.load(f"{cfg.recognition_dir}brain_run{run}.npy")
# # mask = nib.load(f"{cfg.chosenMask}").get_data()
# mask = np.load(cfg.chosenMask)
# t = t[:,mask==1]
# t = normalize(t)
# brain_data=t if ii==0 else np.concatenate((brain_data,t), axis=0)
# t = pd.read_csv(f"{cfg.recognition_dir}behav_run{run}.csv")
# behav_data=t if ii==0 else pd.concat([behav_data,t])
# FEAT=brain_data.reshape(brain_data.shape[0],-1)
# # FEAT_mean=np.mean(FEAT,axis=1)
# # FEAT=(FEAT.T-FEAT_mean).T
# # FEAT_mean=np.mean(FEAT,axis=0)
# # FEAT=FEAT-FEAT_mean
# META=behav_data
# convert item column to label column
imcodeDict={
'A': 'bed',
'B': 'chair',
'C': 'table',
'D': 'bench'}
# label=[]
# for curr_trial in range(META.shape[0]):
# label.append(imcodeDict[META['Item'].iloc[curr_trial]])
# META['label']=label # merge the label column with the data dataframe
# def classifierEvidence(clf,X,Y): # X shape is [trials,voxelNumber], Y is ['bed', 'bed'] for example # return a 1-d array of probability
# # This function get the data X and evidence object I want to know Y, and output the trained model evidence.
# targetID=[np.where((clf.classes_==i)==True)[0][0] for i in Y]
# # Evidence=(np.sum(X*clf.coef_,axis=1)+clf.intercept_) if targetID[0]==1 else (1-(np.sum(X*clf.coef_,axis=1)+clf.intercept_))
# Evidence=(X@clf.coef_.T+clf.intercept_) if targetID[0]==1 else (-(X@clf.coef_.T+clf.intercept_))
# Evidence = 1/(1+np.exp(-Evidence))
# return np.asarray(Evidence)
# def classifierEvidence(clf,X,Y):
# ID=np.where((clf.classes_==Y[0])*1==1)[0][0]
# p = clf.predict_proba(X)[:,ID]
# BX=np.log(p/(1-p))
# return BX
def classifierEvidence(clf,X,Y):
ID=np.where((clf.classes_==Y[0])*1==1)[0][0]
Evidence=(X@clf.coef_.T+clf.intercept_) if ID==1 else (-(X@clf.coef_.T+clf.intercept_))
# Evidence=(X@clf.coef_.T+clf.intercept_) if ID==0 else (-(X@clf.coef_.T+clf.intercept_))
return np.asarray(Evidence)
A_ID = (META['label']=='bed')
X = FEAT[A_ID]
# evidence_floor is C evidence for AC_CD BC_CD CD_CD classifier(can also be D evidence for CD classifier)
# Y = ['table'] * X.shape[0]
# CD_clf=joblib.load(cfg.usingModel_dir +'bedbench_benchtable.joblib') # These 4 clf are the same: bedbench_benchtable.joblib bedtable_tablebench.joblib benchchair_benchtable.joblib chairtable_tablebench.joblib
# CD_C_evidence = classifierEvidence(CD_clf,X,Y)
# evidence_floor = np.mean(CD_C_evidence)
# print(f"evidence_floor={evidence_floor}")
model_folder = f"{working_dir}{roiloc}/{subject}/clf/"
# #try out other forms of floor: C evidence in AC and D evidence for AD
# Y = ['bench'] * X.shape[0]
# AD_clf=joblib.load(model_folder +'bedchair_bedbench.joblib') # These 4 clf are the same: bedchair_bedbench.joblib bedtable_bedbench.joblib benchchair_benchbed.joblib benchtable_benchbed.joblib
# AD_D_evidence = classifierEvidence(AD_clf,X,Y)
# evidence_floor = np.mean(AD_D_evidence)
# print(f"evidence_floor2={np.mean(evidence_floor)}")
# # floor
# Y = ['bench'] * X.shape[0]
# CD_clf=joblib.load(model_folder +'bedbench_benchtable.joblib') # These 4 clf are the same: bedbench_benchtable.joblib bedtable_tablebench.joblib benchchair_benchtable.joblib chairtable_tablebench.joblib
# CD_D_evidence = classifierEvidence(CD_clf,X,Y)
# evidence_floor = np.mean(CD_D_evidence)
# print(f"evidence_floor={evidence_floor}")
# Y = ['table'] * X.shape[0]
# CD_clf=joblib.load(model_folder +'bedbench_benchtable.joblib') # These 4 clf are the same: bedbench_benchtable.joblib bedtable_tablebench.joblib benchchair_benchtable.joblib chairtable_tablebench.joblib
# CD_C_evidence = classifierEvidence(CD_clf,X,Y)
# evidence_floor = np.mean(CD_C_evidence)
# print(f"evidence_floor={evidence_floor}")
# # evidence_ceil is A evidence in AC and AD classifier
# Y = ['bed'] * X.shape[0]
# AC_clf=joblib.load(model_folder +'benchtable_tablebed.joblib') # These 4 clf are the same: bedbench_bedtable.joblib bedchair_bedtable.joblib benchtable_tablebed.joblib chairtable_tablebed.joblib
# AC_A_evidence = classifierEvidence(AC_clf,X,Y)
# evidence_ceil1 = AC_A_evidence
# print(f"evidence_ceil1={np.mean(evidence_ceil1)}")
# Y = ['bed'] * X.shape[0]
# AD_clf=joblib.load(model_folder +'bedchair_bedbench.joblib') # These 4 clf are the same: bedchair_bedbench.joblib bedtable_bedbench.joblib benchchair_benchbed.joblib benchtable_benchbed.joblib
# AD_A_evidence = classifierEvidence(AD_clf,X,Y)
# evidence_ceil2 = AD_A_evidence
# print(f"evidence_ceil2={np.mean(evidence_ceil2)}")
# # evidence_ceil = np.mean(evidence_ceil1)
# # evidence_ceil = np.mean(evidence_ceil2)
# evidence_ceil = np.mean((evidence_ceil1+evidence_ceil2)/2)
# print(f"evidence_ceil={evidence_ceil}")
store="\n"
print("floor")
# D evidence for AD_clf when A is presented.
Y = ['bench'] * X.shape[0]
AD_clf=joblib.load(model_folder +'bedchair_bedbench.joblib') # These 4 clf are the same: bedchair_bedbench.joblib bedtable_bedbench.joblib benchchair_benchbed.joblib benchtable_benchbed.joblib
AD_D_evidence = classifierEvidence(AD_clf,X,Y)
evidence_floor = np.mean(AD_D_evidence)
print(f"D evidence for AD_clf when A is presented={evidence_floor}")
store=store+f"D evidence for AD_clf when A is presented={evidence_floor}"
# C evidence for AC_clf when A is presented.
Y = ['table'] * X.shape[0]
AC_clf=joblib.load(model_folder +'benchtable_tablebed.joblib') # These 4 clf are the same: bedbench_bedtable.joblib bedchair_bedtable.joblib benchtable_tablebed.joblib chairtable_tablebed.joblib
AC_C_evidence = classifierEvidence(AC_clf,X,Y)
evidence_floor = np.mean(AC_C_evidence)
print(f"C evidence for AC_clf when A is presented={evidence_floor}")
store=store+"\n"+f"C evidence for AC_clf when A is presented={evidence_floor}"
# D evidence for CD_clf when A is presented.
Y = ['bench'] * X.shape[0]
CD_clf=joblib.load(model_folder +'bedbench_benchtable.joblib') # These 4 clf are the same: bedbench_benchtable.joblib bedtable_tablebench.joblib benchchair_benchtable.joblib chairtable_tablebench.joblib
CD_D_evidence = classifierEvidence(CD_clf,X,Y)
evidence_floor = np.mean(CD_D_evidence)
print(f"D evidence for CD_clf when A is presented={evidence_floor}")
store=store+"\n"+f"D evidence for CD_clf when A is presented={evidence_floor}"
# C evidence for CD_clf when A is presented.
Y = ['table'] * X.shape[0]
CD_clf=joblib.load(model_folder +'bedbench_benchtable.joblib') # These 4 clf are the same: bedbench_benchtable.joblib bedtable_tablebench.joblib benchchair_benchtable.joblib chairtable_tablebench.joblib
CD_C_evidence = classifierEvidence(CD_clf,X,Y)
evidence_floor = np.mean(CD_C_evidence)
print(f"C evidence for CD_clf when A is presented={evidence_floor}")
store=store+"\n"+f"C evidence for CD_clf when A is presented={evidence_floor}"
print("ceil")
store=store+"\n"+"ceil"
# evidence_ceil is A evidence in AC and AD classifier
Y = ['bed'] * X.shape[0]
AC_clf=joblib.load(model_folder +'benchtable_tablebed.joblib') # These 4 clf are the same: bedbench_bedtable.joblib bedchair_bedtable.joblib benchtable_tablebed.joblib chairtable_tablebed.joblib
AC_A_evidence = classifierEvidence(AC_clf,X,Y)
evidence_ceil1 = AC_A_evidence
print(f"A evidence in AC_clf when A is presented={np.mean(evidence_ceil1)}")
store=store+"\n"+f"A evidence in AC_clf when A is presented={np.mean(evidence_ceil1)}"
Y = ['bed'] * X.shape[0]
AD_clf=joblib.load(model_folder +'bedchair_bedbench.joblib') # These 4 clf are the same: bedchair_bedbench.joblib bedtable_bedbench.joblib benchchair_benchbed.joblib benchtable_benchbed.joblib
AD_A_evidence = classifierEvidence(AD_clf,X,Y)
evidence_ceil2 = AD_A_evidence
print(f"A evidence in AD_clf when A is presented={np.mean(evidence_ceil2)}")
store=store+"\n"+f"A evidence in AD_clf when A is presented={np.mean(evidence_ceil2)}"
# evidence_ceil = np.mean(evidence_ceil1)
# evidence_ceil = np.mean(evidence_ceil2)
evidence_ceil = np.mean((evidence_ceil1+evidence_ceil2)/2)
print(f"evidence_ceil={evidence_ceil}")
store=store+"\n"+f"evidence_ceil={evidence_ceil}"
return evidence_floor, evidence_ceil,store
floor, ceil,store = morphingTarget(subject)
mu = (ceil+floor)/2
sig = (ceil-floor)/2.3548
print(f"floor={floor}, ceil={ceil}")
print(f"mu={mu}, sig={sig}")
store=store+"\n"+f"floor={floor}, ceil={ceil}"
store=store+"\n"+f"mu={mu}, sig={sig}"
save_obj(store,f"./{subject}store")
# # floorCeilNeurosketch_child.sh
# #!/usr/bin/env bash
# # Input python command to be submitted as a job
# #SBATCH --output=logs/floorCeil-%j.out
# #SBATCH --job-name floorCeil
# #SBATCH --partition=short,day,scavenge,verylong
# #SBATCH --time=1:00:00 #20:00:00
# #SBATCH --mem=10000
# #SBATCH -n 5
# # Set up the environment
# subject=$1
# echo source activate /gpfs/milgram/project/turk-browne/users/kp578/CONDA/rtcloud
# source activate /gpfs/milgram/project/turk-browne/users/kp578/CONDA/rtcloud
# python -u ./floorCeilNeurosketch.py $subject
# # floorCeilNeurosketch_parent.sh
# subjects="1206161 0119173 1206162 1130161 1206163 0120171 0111171 1202161 0125172 0110172 0123173 0120173 0110171 0119172 0124171 0123171 1203161 0118172 0118171 0112171 1207162 0117171 0119174 0112173 0112172" #these subjects are done with the batchRegions code
# for sub in $subjects
# do
# for num in 25; #best ID is 30 thus the best num is 31
# do
# echo sbatch --requeue floorCeilNeurosketch_child.sh $sub
# sbatch --requeue floorCeilNeurosketch_child.sh $sub
# done
# done
def subLoop(subject):
data={}
accs = minimalClass(subject)
print("best 4way classifier accuracy = ",GreedyBestAcc[subject][bestID[subject]])
data['best 4way classifier accuracy']=GreedyBestAcc[subject][bestID[subject]]
for acc in accs:
print(acc,accs[acc])
data["accs"]=accs
floor, ceil,store = morphingTarget(subject)
mu = (ceil+floor)/2
sig = (ceil-floor)/2.3548
print(f"floor={floor}, ceil={ceil}")
print(f"mu={mu}, sig={sig}")
store=store+"\n"+f"floor={floor}, ceil={ceil}"
store=store+"\n"+f"mu={mu}, sig={sig}"
data["store"]=store
save_obj(store,f"./{subject}store")
return data
import warnings
warnings.filterwarnings("ignore")
data={}
for subject in subjects:
data[subject]=subLoop(subject)
for sub in data:
print("---------------------------------------------------------------")
print()
print(f"subject={sub}")
print(data[sub]["store"])
```
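The constant 2.3548 used for `sig` above is presumably the Gaussian full-width-at-half-maximum (FWHM) factor, 2·sqrt(2·ln 2) ≈ 2.3548; that is, the floor-to-ceiling range is treated as the FWHM of a Gaussian centred at `mu`. A quick sketch with hypothetical floor and ceiling values:

```
import math

# 2 * sqrt(2 * ln 2): converts a Gaussian's full width at half maximum to sigma
fwhm_factor = 2 * math.sqrt(2 * math.log(2))
print(round(fwhm_factor, 4))  # 2.3548

# Hypothetical floor/ceiling evidence values (not from the experiment above)
floor, ceil = 0.2, 0.8
mu = (ceil + floor) / 2             # centre of the range
sig = (ceil - floor) / fwhm_factor  # sigma of a Gaussian whose FWHM spans the range
```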
# Transfer Learning
A Convolutional Neural Network (CNN) for image classification is made up of multiple layers that extract features, such as edges and corners, followed by a final fully-connected layer that classifies objects based on these features. You can visualize this like this:
<table>
<tr><td rowspan=2 style='border: 1px solid black;'>⇒</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Fully Connected Layer</td><td rowspan=2 style='border: 1px solid black;'>⇒</td></tr>
<tr><td colspan=4 style='border: 1px solid black; text-align:center;'>Feature Extraction</td><td style='border: 1px solid black; text-align:center;'>Classification</td></tr>
</table>
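As a rough sketch of how the feature-extraction stages shrink the input, here is the size arithmetic for a hypothetical 224×224 input, assuming 3×3 convolutions with 'same' padding and 2×2 pooling (the actual layer shapes depend on the model):

```
def conv_same(size):
    # a convolution with 'same' padding keeps the spatial size
    return size

def pool2(size):
    # 2x2 max pooling halves each spatial dimension
    return size // 2

size = 224  # hypothetical input width/height
for step in (conv_same, pool2, conv_same, pool2):
    size = step(size)
print(size)  # 56
```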
*Transfer Learning* is a technique where you can take an existing trained model and re-use its feature extraction layers, replacing its final classification layer with a fully-connected layer trained on your own custom images. With this technique, your model benefits from the feature extraction training that was performed on the base model (which may have been based on a larger training dataset than you have access to) to build a classification model for your own specific set of object classes.
How does this help? Well, think of it this way. Suppose you take a professional tennis player and a complete beginner, and try to teach them both how to play racquetball. It's reasonable to assume that the professional tennis player will be easier to train, because many of the underlying skills involved in racquetball are already learned. Similarly, a pre-trained CNN model may be easier to train to classify a specific set of objects because it's already learned how to identify the features of common objects, such as edges and corners. Fundamentally, a pre-trained model can be a great way to produce an effective classifier even when you have limited data with which to train it.
In this notebook, we'll see how to implement transfer learning for a classification model using TensorFlow.
## Install and import TensorFlow libraries
Let's start by ensuring that we have the latest version of the **TensorFlow** package installed, and import the TensorFlow libraries we're going to use.
```
!pip install --upgrade tensorflow
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
```
## Prepare the base model
To use transfer learning, we need a base model from which we can use the trained feature extraction layers. The ***resnet*** model is a CNN-based image classifier that has been pre-trained using a huge dataset of 3-color-channel images of 224x224 pixels. Let's create an instance of it with some pretrained weights, excluding its final (top) prediction layer.
```
base_model = keras.applications.resnet.ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))
print(base_model.summary())
```
## Prepare the image data
The pretrained model has many layers, beginning with a convolutional layer that starts the feature extraction process from image data.
For feature extraction to work with our own images, we need to ensure that the image data we use to train our prediction layer has the same number of features (pixel values) as the images originally used to train the feature extraction layers, so we need data loaders for color images that are 224x224 pixels in size.
Tensorflow includes functions for loading and transforming data. We'll use these to create a generator for training data, and a second generator for test data (which we'll use to validate the trained model). The loaders will transform the image data to match the format used to train the original resnet CNN model and normalize them.
Run the following cell to define the data generators and list the classes for our images.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
data_folder = 'data/shapes'
pretrained_size = (224,224)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=pretrained_size, # resize to match model expected input
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=pretrained_size, # resize to match model expected input
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print("class names: ", classnames)
```
## Create a prediction layer
We downloaded the complete *resnet* model excluding its final prediction layer, so we need to combine these layers with a fully-connected (*dense*) layer that takes the flattened outputs from the feature extraction layers and generates a prediction for each of our image classes.
We also need to freeze the feature extraction layers to retain the trained weights. Then when we train the model using our images, only the final prediction layer will learn new weight and bias values - the pre-trained weights already learned for feature extraction will remain the same.
```
from tensorflow.keras import applications
from tensorflow.keras import Model
from tensorflow.keras.layers import Flatten, Dense
# Freeze the already-trained layers in the base model
for layer in base_model.layers:
layer.trainable = False
# Create prediction layer for classification of our images
x = base_model.output
x = Flatten()(x)
prediction_layer = Dense(len(classnames), activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=prediction_layer)
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Now print the full model, which will include the layers of the base model plus the dense layer we added
print(model.summary())
```
## Train the Model
With the layers of the CNN defined, we're ready to train it using our image data. The weights used in the feature extraction layers from the base resnet model will not be changed by training, only the final dense layer that maps the features to our shape classes will be trained.
```
# Train the model over 3 epochs
num_epochs = 3
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
```
## View the loss history
We tracked average training and validation loss for each epoch. We can plot these to verify that the loss reduced over the training process and to detect *over-fitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase).
```
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
```
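The over-fitting pattern described above can also be checked programmatically. A minimal sketch, using hypothetical loss curves rather than the `history` object from this run:

```
def overfitting_epoch(training_loss, validation_loss):
    """Return the first epoch (1-based) where validation loss rises
    while training loss keeps falling, or None if that never happens."""
    for i in range(1, len(training_loss)):
        if validation_loss[i] > validation_loss[i - 1] and training_loss[i] < training_loss[i - 1]:
            return i + 1
    return None

# Hypothetical loss curves for a 5-epoch run
train = [1.2, 0.8, 0.5, 0.3, 0.2]
val = [1.1, 0.9, 0.7, 0.8, 0.9]
epoch = overfitting_epoch(train, val)
print(epoch)  # 4
```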
## Evaluate model performance
We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
```
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the model to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are one-hot encoded (e.g. [0 1 0]), so get the index of the element with the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Predicted Shape")
plt.ylabel("Actual Shape")
plt.show()
```
## Use the trained model
Now that we've trained the model, we can use it to predict the class of an image.
```
from tensorflow.keras import models
import numpy as np
from random import randint
import os
%matplotlib inline
# Function to predict the class of an image
def predict_image(classifier, img):
# The model expects a batch of images as input, so we'll create an array of 1 image
imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
# We need to format the input to match the training data
# The generator loaded the values as floating point numbers
# and normalized the pixel values, so...
imgfeatures = imgfeatures.astype('float32')
imgfeatures /= 255
# Use the model to predict the image class
class_probabilities = classifier.predict(imgfeatures)
# Find the class predictions with the highest predicted probability
index = int(np.argmax(class_probabilities, axis=1)[0])
return index
# Function to create a random image (of a square, circle, or triangle)
def create_image (size, shape):
from random import randint
import numpy as np
from PIL import Image, ImageDraw
xy1 = randint(10,40)
xy2 = randint(60,100)
col = (randint(0,200), randint(0,200), randint(0,200))
img = Image.new("RGB", size, (255, 255, 255))
draw = ImageDraw.Draw(img)
if shape == 'circle':
draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col)
elif shape == 'triangle':
draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col)
else: # square
draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col)
del draw
return np.array(img)
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((224,224), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# Use the classifier to predict the class
class_idx = predict_image(model, img)
print (classnames[class_idx])
```
## Learn More
* [Tensorflow Documentation](https://www.tensorflow.org/tutorials/images/transfer_learning)
# Check Lipid differences in WT, KO and DKO
- Show if some Lipids are particularly high in one of the three categories
### Included libraries
```
from matplotlib import cm
from matplotlib.lines import Line2D
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from matplotlib import pylab as plt
import numpy as np
import seaborn as sns
```
### Functions and definitions
```
#define classes of lipids e.g. PC = Phosphatidylcholines
types_of_Lipids = ['CE','Cer','DAG','LPC','LPE','PC','PE','PI','PS','SM','TAG']
#colormap (20 unique colors)
cmap = cm.get_cmap('tab20')
#assign for each class of lipid a unique color
lipid_color = {}
for i,l in enumerate(types_of_Lipids):
lipid_color[l] = cmap(i)
```
### Main Code
```
#Load the actual lipid results
LipidData = pd.read_excel('../data/Report_MEC025_LIPID01_jb.xlsx' ,header=2, na_values='<LOD')
#extract the lipids
columns = LipidData.columns
Lipids = columns[7:]
print (Lipids)
#take those columns/entries with serum results (NOT cells) as well as WT, PTEN and DKO (all HFD)
data = LipidData.loc[(LipidData['Specimen'] == 'serum') & ((LipidData['Experiment'] == 'WT_HFD') | (LipidData['Experiment'] == 'PTEN_HFD') | (LipidData['Experiment'] == 'DKO_HFD'))]
#remove entries that have no values
data = data.dropna(axis=1,how='all')
data.head(12)
#remaining lipids contains all valid columns (=lipids)
remaining_Lipids = data.columns.values[7:]
print ('Number of remaining Lipids: %d' %len(remaining_Lipids))
# Make bar plot Lipids together
fp_out = open('../results/Difference_WT_KO_DKO_Serum/Normalization.csv','w')
fp_out.write('Lipid,Mean_WT,Normalized_WT,Mean_KO,Normalized_KO,Mean_DKO,Normalized_DKO,Max_Val,Min_Val\n')
#dictionary that contains the results for WT, KO (=PTEN) and DKO (=PTEN and MARCO KO)
Lipid_Group_Results = {'WT':{},'KO':{},'DKO':{}}
#possible_groups contains the lipid classes
possible_groups = set()
#go through all lipids
for Lipid in remaining_Lipids:
# get the lipid replicates for the three groups (remove empty rows)
#WT
WT_values = data.loc[data['Experiment'] == 'WT_HFD'][Lipid]
WT_values = WT_values.dropna()
#PTEN
KO_values = data.loc[data['Experiment'] == 'PTEN_HFD'][Lipid]
KO_values = KO_values.dropna()
#DKO
DKO_values = data.loc[data['Experiment'] == 'DKO_HFD'][Lipid]
DKO_values = DKO_values.dropna()
#Only run the analysis if all three groups have at least one entry (=replicate)
if len(WT_values) > 0 and len(KO_values) > 0 and len(DKO_values) > 0:
#calculate the mean for the three groups
WT_values = WT_values.mean()
KO_values = KO_values.mean()
DKO_values = DKO_values.mean()
#extract Max/Min of the three means
max_Val = max([WT_values,KO_values,DKO_values])
min_Val = min([WT_values,KO_values,DKO_values])
#normalize between this max/min (so the smallest of the three means maps to 0, the largest to 1, and the other lies in between)
WT_values_norm = (WT_values-min_Val)/(max_Val-min_Val)
KO_values_norm = (KO_values-min_Val)/(max_Val-min_Val)
DKO_values_norm = (DKO_values-min_Val)/(max_Val-min_Val)
#write results
fp_out.write(Lipid+','+str(WT_values)+','+str(WT_values_norm)+','+
str(KO_values)+','+str(KO_values_norm)+','+
str(DKO_values)+','+str(DKO_values_norm)+','+
str(max_Val)+','+str(min_Val)+'\n')
#if the result dictionary does not have an entry for this lipid class then add to dictionary
if Lipid.split(' ')[0] not in Lipid_Group_Results['WT']:
Lipid_Group_Results['WT'][Lipid.split(' ')[0]] = []
if Lipid.split(' ')[0] not in Lipid_Group_Results['KO']:
Lipid_Group_Results['KO'][Lipid.split(' ')[0]] = []
if Lipid.split(' ')[0] not in Lipid_Group_Results['DKO'] :
Lipid_Group_Results['DKO'][Lipid.split(' ')[0]] = []
#add lipid class to set of all possible lipid classes
possible_groups.add(Lipid.split(' ')[0])
#write results
Lipid_Group_Results['WT'][Lipid.split(' ')[0]].append(WT_values_norm)
Lipid_Group_Results['KO'][Lipid.split(' ')[0]].append(KO_values_norm)
Lipid_Group_Results['DKO'][Lipid.split(' ')[0]].append(DKO_values_norm)
#close file
fp_out.close()
# Make actual plot
##
#create legend entries
legend_elements = []
for key in possible_groups:
legend_elements.append(Line2D([0], [0], marker='o', color='w', label=key,
markerfacecolor=lipid_color[key], markersize=10))
#list of means (go through the results to calculate the actual mean (per lipid class))
WT_means = []
KO_means = []
DKO_means = []
#go through all lipid groups
for key in possible_groups:
#make plot showing the mean results for this lipid group (no errorbars)
plt.errorbar([0,1,2,3,4,5],[np.mean(Lipid_Group_Results['WT'][key]),np.mean(Lipid_Group_Results['WT'][key]),
np.mean(Lipid_Group_Results['KO'][key]), np.mean(Lipid_Group_Results['KO'][key]),
np.mean(Lipid_Group_Results['DKO'][key]), np.mean(Lipid_Group_Results['DKO'][key])],
#assign associated color
color=lipid_color[key], alpha=0.8,lw=1.5)
#add result to result list
WT_means.append(np.mean(Lipid_Group_Results['WT'][key]))
KO_means.append(np.mean(Lipid_Group_Results['KO'][key]))
DKO_means.append(np.mean(Lipid_Group_Results['DKO'][key]))
#create legend element
plt.legend(handles=legend_elements, loc='right',prop={'size': 5})
#plot the overall mean (over all lipids, black dashed line)
plt.plot([0,1,2,3,4,5],[np.mean(WT_means),np.mean(WT_means),
np.mean(KO_means), np.mean(KO_means),
np.mean(DKO_means), np.mean(DKO_means)],
color='black', alpha=1,ls = '--', lw=2,zorder=100)
#Plot actual plot
plt.ylabel('Mean Normalized Relative Abundance')
plt.xlabel('Condition')
plt.xticks([0.5,2.5,4.5],['WT','KO','DKO'])
plt.savefig('../results/Difference_WT_KO_DKO_Serum/LipidGroups.pdf')
plt.close()
```
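The per-lipid min-max normalization used above can be illustrated in isolation, with made-up group means:

```
def minmax_normalize(values):
    """Scale values so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical mean abundances for WT, KO and DKO
means = [2.0, 5.0, 8.0]
print(minmax_normalize(means))  # [0.0, 0.5, 1.0]
```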
### Additional plot as heatmap showing lipids
```
#lists for data to plot
data = []
data_allLipids = []
col_colors = []
#go through all lipid groups to define correct color
for key in possible_groups:
for lipid in Lipid_Group_Results['WT'][key]:
col_colors.append(lipid_color[key])
#calculate mean expression for this lipid
for group in ['WT','KO','DKO']:
tmp = []
tmp_allLipids = []
for key in possible_groups:
tmp.append(np.mean(Lipid_Group_Results[group][key]))
for lipid in Lipid_Group_Results[group][key]:
tmp_allLipids.append(lipid)
data.append(tmp)
data_allLipids.append(tmp_allLipids)
#Make heatmap for LIPIDGROUPS
sns.heatmap(data=data)
plt.xlabel('Lipid Group')
plt.ylabel('Category')
plt.xticks([x-0.5 for x in range(1,len(possible_groups)+1)],possible_groups)
plt.yticks([0.5,1.5,2.5],['WT','PTEN','DKO'])
plt.savefig('../results/Difference_WT_KO_DKO_Serum/LipidGroups_Heatmap.pdf')
plt.close()
#Make heatmap for LIPIDS INDIVIDUALLY
sns.heatmap(data=data_allLipids)
plt.xlabel('Lipid')
plt.ylabel('Category')
plt.yticks([0.5,1.5,2.5],['WT','PTEN','DKO'])
plt.xticks()
plt.savefig('../results/Difference_WT_KO_DKO_Serum/Lipid_Heatmap.pdf')
plt.close()
#Make clustermap for LIPIDS INDIVIDUALLY
sns.clustermap(data=data_allLipids, row_cluster=True, col_colors=col_colors, yticklabels=['WT','PTEN','DKO'], method='weighted')
plt.savefig('../results/Difference_WT_KO_DKO_Serum/Lipid_Clustermap.pdf')
plt.close()
```
# Image Processing Dense Array, JPEG, PNG
> In this post, we will cover the basics of working with images in Matplotlib, OpenCV and Keras.
- toc: true
- badges: true
- comments: true
- categories: [Image Processing, Computer Vision]
- image: images/freedom.png
Images are dense matrices with a certain number of rows and columns. They can have 1 (greyscale), 3 (RGB) or 4 (RGB + alpha transparency) channels.
The dimension of the image matrix is (height, width, channels).
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import cv2
from sys import getsizeof
import tensorflow as tf
```
# 1. Load image files (*.jpg, *.png, *.bmp, *.tif)
- ~~using PIL~~
- using matplotlib: reads image as RGB
- using cv2 : reads image as BGR
- imread: reads a file from disk and decodes it
- imsave: encodes an image and writes it to a file on disk
```
#using PIL
#image = Image.open("images/freedom.png")
#plt.show(image)
```
#### Load image using Matplotlib
The Matplotlib image tutorial recommends using matplotlib.image.imread to read image formats from disk. For PNG files, this function automatically scales the image array values to floats between zero and one, and it doesn't give any other options about how to read the image.
- imshow works on 0-1 floats & 0-255 uint8 values
- It doesn't work on int!
```
#using matplotlib.image
image = mpimg.imread("images/freedom.png")
plt.imshow(image)
plt.colorbar()
print(image.dtype)
freedom_array_uint8 = (image*255).astype(np.uint8) #convert to 0-255 values
```
#### Load image using OpenCV
```
#using opencv
image = cv2.imread("images/freedom.png")
#OpenCV uses BGR as its default colour order for images, matplotlib uses RGB
RGB_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)# cv2.cvtColor() method is used to convert an image from one color space to another
plt.imshow(RGB_image)
plt.colorbar()
```
For this image, the matrix will have 600 x 400 x 3 = 720,000 values. Each value is an unsigned 8-bit integer, in total 720,000 bytes.
Using unsigned 8-bit integers (256 possible values) for each value in the image array is enough for displaying images to humans. But when working with image data, it isn't uncommon to switch to 32-bit floats, for example. This tremendously increases the size of the data.
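The storage arithmetic above can be checked directly:

```
height, width, channels = 600, 400, 3

uint8_bytes = height * width * channels        # 1 byte per value
float32_bytes = height * width * channels * 4  # 4 bytes per value

print(uint8_bytes, float32_bytes)  # 720000 2880000
```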
By loading the image files we can save them as arrays. Typical array operations can be performed on them.
```
print (RGB_image.shape, RGB_image.dtype)
```
### Load image using keras.preprocessing.image
- load_img(image): loads and decodes image
- img_to_array(image)
```
image_keras = tf.keras.preprocessing.image.load_img("images/freedom.png") # loads and decodes image
print(type(image_keras))
print(image_keras.format)
print(image_keras.mode)
print(image_keras.size)
#image_keras.show()
```
# 2. Image Processing
#### Dense Array
One way to store complete raster image data is by serializing a NumPy array to disk.
freedom.npy = 720,128 bytes
The file freedom.npy has 128 bytes more than required to store the array values. Those extra bytes specify things like the array shape/dimensions.
```
np.save("images/freedom.npy", RGB_image)
freedomnpy = np.load('images/freedom.npy')
print("Size of array:", freedomnpy.nbytes)
print("Size of disk:", getsizeof(freedomnpy))
```
Storing one pixel takes several bytes. There are two main options for saving images: whether to lose some information while saving, or not.
#### JPG format
- JPEG is lossy by default
- When saving an image as $*$.JPEG and reading it back, you will not necessarily get the same values
- The "freedom_jpg.jpg" file has 6.3 kB, less than 7% of the $*$.npy file that generated it
- cv2.IMWRITE_JPEG_QUALITY is between 0 and 100: higher values give better quality and bigger files (JPEG remains lossy even at 100)
```
cv2.imwrite("images/freedom_jpg.jpg", freedomnpy, [cv2.IMWRITE_JPEG_QUALITY, 0])
freedom_jpg = cv2.imread("images/freedom_jpg.jpg")
plt.imshow(freedom_jpg)
```
#### PNG format
- PNG is lossless
- When saving an image as $*$.PNG and reading it back, one gets the same values back
- cv2.IMWRITE_PNG_COMPRESSION is between 0 and 9: higher values give smaller files but slower compression
- freedom_png.png = 721.8 kB, close to freedomnpy
```
cv2.imwrite("images/freedom_png.png", freedomnpy, [cv2.IMWRITE_PNG_COMPRESSION, 0])
freedom_png = cv2.imread("images/freedom_png.png")
plt.imshow(freedom_png)
```
References:
<https://planspace.org/20170403-images_and_tfrecords/>
<https://subscription.packtpub.com/book/application_development/9781788474443/1/ch01lvl1sec14/saving-images-using-lossy-and-lossless-compression>
<https://www.tensorflow.org/tutorials/load_data/tfrecord>
<https://machinelearningmastery.com/how-to-load-convert-and-save-images-with-the-keras-api/>
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Unfairness Mitigation with Fairlearn and Azure Machine Learning
**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio**
## Table of Contents
1. [Introduction](#Introduction)
1. [Loading the Data](#LoadingData)
1. [Training an Unmitigated Model](#UnmitigatedModel)
1. [Mitigation with GridSearch](#Mitigation)
1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)
1. Registering models
1. Computing Fairness Metrics
1. Uploading to Azure
1. [Conclusion](#Conclusion)
<a id="Introduction"></a>
## Introduction
This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).
We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.
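Demographic parity compares the rate of positive predictions (the "selection rate") across groups defined by a sensitive feature. A minimal sketch of the idea on made-up predictions (the names and data here are illustrative, not from the census dataset):

```
def selection_rates(y_pred, groups):
    """Fraction of positive predictions within each sensitive group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

# Made-up binary predictions and group memberships
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ['F', 'F', 'F', 'F', 'M', 'M', 'M', 'M']

rates = selection_rates(y_pred, groups)
disparity = max(rates.values()) - min(rates.values())
print(sorted(rates.items()), disparity)  # [('F', 0.75), ('M', 0.25)] 0.5
```

Demographic parity is satisfied when this disparity is (near) zero; GridSearch produces a set of models trading off such a disparity against accuracy.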
### Setup
To use this notebook, an Azure Machine Learning workspace is required.
Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.
This notebook also requires the following packages:
* `azureml-contrib-fairness`
* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)
* `joblib`
* `shap`
Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:
```
# !pip install --upgrade scikit-learn>=0.22.1
```
Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook.
<a id="LoadingData"></a>
## Loading the Data
We use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:
```
from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate
from fairlearn.widget import FairlearnDashboard
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_selector as selector
from sklearn.pipeline import Pipeline
import pandas as pd
```
We can now load and inspect the data:
```
from fairness_nb_utils import fetch_openml_with_retries
data = fetch_openml_with_retries(data_id=1590)
# Extract the items we want
X_raw = data.data
y = (data.target == '>50K') * 1
X_raw["race"].value_counts().to_dict()
```
We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:
```
A = X_raw[['sex','race']]
X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)
```
We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset.
```
(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(
X_raw, y, A, test_size=0.3, random_state=12345, stratify=y
)
# Ensure indices are aligned between X, y and A,
# after all the slicing and splitting of DataFrames
# and Series
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
y_test = y_test.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
```
We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).
For this preprocessing, we make use of `Pipeline` objects from `sklearn`:
```
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
```
Now that the preprocessing pipeline is defined, we can fit it on our training data and apply the fitted transform to our test data:
```
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
```
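The fit/transform split above is what prevents leakage: scaling statistics are estimated from the training set only and then reused unchanged on the test set. A minimal NumPy sketch of the same idea (standalone toy numbers, not the census data):

```python
import numpy as np

# Hypothetical train/test split of a single numeric feature
train = np.array([1.0, 2.0, 3.0, 4.0])
test = np.array([10.0, 0.0])

# "fit": estimate scaling parameters from the training data only
mu, sigma = train.mean(), train.std()

# "transform": apply the same parameters to both splits
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma  # test-set statistics are never consulted

print(train_scaled.mean())  # ~0 by construction
print(test_scaled)
```

If we instead fitted the scaler on the full dataset, information about the test distribution would leak into the training features.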
<a id="UnmitigatedModel"></a>
## Training an Unmitigated Model
So that we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:
```
unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
unmitigated_predictor.fit(X_train, y_train)
```
We can view this model in the fairness dashboard, and see the disparities which appear:
```
FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],
y_true=y_test,
y_pred={"unmitigated": unmitigated_predictor.predict(X_test)})
```
Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.
Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact.
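One way to see this "proxy feature" effect is to check how well the remaining features predict the dropped attribute. A small synthetic sketch (the data and the threshold rule here are illustrative, not taken from the census dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sex = rng.integers(0, 2, n)             # protected attribute (dropped from X)
# A remaining feature that happens to be correlated with the protected attribute
proxy = sex + rng.normal(0, 0.5, n)

# A trivial threshold "classifier" recovering sex from the proxy alone
pred = (proxy > 0.5).astype(int)
accuracy = (pred == sex).mean()
print(f"protected attribute recoverable with accuracy ~{accuracy:.2f}")
```

Because the attribute can be reconstructed far better than chance from a single correlated column, a model trained without the column can still behave very differently across groups.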
<a id="Mitigation"></a>
## Mitigation with GridSearch
The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.
For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used.
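Concretely, demographic parity compares *selection rates* across groups, and the demographic parity difference is the gap between the largest and smallest group rate. A hand-rolled sketch with made-up predictions (not the census model's output):

```python
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])               # hypothetical binary predictions
group = np.array(['F', 'F', 'F', 'F', 'M', 'M', 'M', 'M'])  # hypothetical group labels

# Selection rate = fraction of positive predictions within each group
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
dp_difference = max(rates.values()) - min(rates.values())
print(rates)          # selection rate per group
print(dp_difference)  # 0 would mean perfect demographic parity
```

Fairlearn's constraints and metrics compute this same quantity internally; the sketch just makes the definition explicit.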
```
sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),
constraints=DemographicParity(),
grid_size=71)
```
With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.
The following cell trains many copies of the underlying estimator, and may take a minute or two to run:
```
sweep.fit(X_train, y_train,
sensitive_features=A_train.sex)
# For Fairlearn v0.5.0, need sweep.predictors_
predictors = sweep._predictors
```
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute).
```
errors, disparities = [], []
for m in predictors:
classifier = lambda X: m.predict(X)
error = ErrorRate()
error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
disparity = DemographicParity()
disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)
errors.append(error.gamma(classifier)[0])
disparities.append(disparity.gamma(classifier).max())
all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities})
dominant_models_dict = dict()
base_name_format = "census_gs_model_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity]
if row.error <= errors_for_lower_or_eq_disparity.min():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
```
We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):
```
predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)}
models_dominant = {"census_unmitigated": unmitigated_predictor}
for name, predictor in dominant_models_dict.items():
value = predictor.predict(X_test)
predictions_dominant[name] = value
models_dominant[name] = predictor
```
These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large.
```
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=y_test.tolist(),
y_pred=predictions_dominant)
```
When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy axis, so we can reduce disparity substantially for only a small loss in accuracy. Finally, we also see that the unmitigated model sits towards the top right of the plot, with high accuracy but the worst disparity.
By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints.
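The dominance filtering performed earlier is exactly a Pareto-front computation over (error, disparity) pairs, where lower is better on both axes. A compact standalone sketch with illustrative numbers:

```python
def pareto_front(points):
    """Keep the points that no other point dominates in both coordinates (lower is better)."""
    front = []
    for e, d in points:
        dominated = any(e2 <= e and d2 <= d and (e2, d2) != (e, d) for e2, d2 in points)
        if not dominated:
            front.append((e, d))
    return front

# Hypothetical (error, disparity) pairs for four candidate models
models = [(0.10, 0.30), (0.12, 0.10), (0.15, 0.05), (0.13, 0.20)]
print(pareto_front(models))  # (0.13, 0.20) is dominated by (0.12, 0.10)
```

Each surviving point is a distinct trade-off; which one to deploy depends on the context in which the model is used.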
<a id="AzureUpload"></a>
## Uploading a Fairness Dashboard to Azure
Uploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:
1. Register the dominant models
1. Precompute all the required metrics
1. Upload to Azure
Before that, we need to connect to Azure Machine Learning Studio:
```
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
```
<a id="RegisterModels"></a>
### Registering Models
The fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:
```
import joblib
import os
os.makedirs('models', exist_ok=True)
def register_model(name, model):
print("Registering ", name)
model_path = "models/{0}.pkl".format(name)
joblib.dump(value=model, filename=model_path)
registered_model = Model.register(model_path=model_path,
model_name=name,
workspace=ws)
print("Registered ", registered_model.id)
return registered_model.id
model_name_id_mapping = dict()
for name, model in models_dominant.items():
m_id = register_model(name, model)
model_name_id_mapping[name] = m_id
```
Now, produce new predictions dictionaries, with the updated names:
```
predictions_dominant_ids = dict()
for name, y_pred in predictions_dominant.items():
predictions_dominant_ids[model_name_id_mapping[name]] = y_pred
```
<a id="PrecomputeMetrics"></a>
### Precomputing Metrics
We create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:
```
sf = { 'sex': A_test.sex, 'race': A_test.race }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=y_test,
predictions=predictions_dominant_ids,
sensitive_features=sf,
prediction_type='binary_classification')
```
<a id="DashboardUpload"></a>
### Uploading the Dashboard
Now, we import our `contrib` package which contains the routine to perform the upload:
```
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
```
Now we can create an Experiment, then a Run, and upload our dashboard to it:
```
exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo")
print(exp)
run = exp.start_logging()
try:
dashboard_title = "Dominant Models from GridSearch"
upload_id = upload_dashboard_dictionary(run,
dash_dict,
dashboard_name=dashboard_title)
print("\nUploaded to id: {0}\n".format(upload_id))
downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
run.complete()
```
The dashboard can be viewed in the Run Details page.
Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
```
print(dash_dict == downloaded_dict)
```
<a id="Conclusion"></a>
## Conclusion
In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.github.io/) provides that discussion.
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
```
# Data Base Generation
### Basic Frame Capture
```
## This is just an example to illustrate how to display video from a webcam ##
vid = cv2.VideoCapture(0) # define a video capture object
status = True # Initialize status
while(status): # Iterate while status is true, that is while there is a frame being captured
status, frame = vid.read() # Capture the video frame by frame, returns status (Boolean) and frame (numpy.ndarray)
cv2.imshow('frame', frame) # Display the resulting frame
## Exit if user presses q ##
if cv2.waitKey(1) & 0xFF == ord('q'):
break
vid.release() # After the loop release the cap object
cv2.destroyAllWindows() # Destroy all the windows
```
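The `cv2.waitKey(1) & 0xFF` idiom in the loop above masks the key code down to its lowest byte, because on some platforms `waitKey` returns extra high-order bits alongside the ASCII code; `ord('q')` is simply the ASCII value being compared against. A pure-Python illustration (no webcam needed; the raw value is hypothetical):

```python
# waitKey can return e.g. 0x100071 on some platforms when 'q' is pressed;
# masking with 0xFF recovers the plain ASCII code
raw_key = 0x100071                  # hypothetical raw return value
masked = raw_key & 0xFF
print(masked, ord('q'))             # both are 113 (0x71)
```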
### Create Screenshots off of Video
```
## This is just an example to illustrate how to capture frames from webcam ##
path = "Bounding_box" # Name of folder where information will be stored
frame_id = 0 # Id of image
vid = cv2.VideoCapture(0) # define a video capture object
status = True # Initialize status
while(status): # Iterate while status is True
status, frame = vid.read() # Capture the video frame by frame
cv2.imshow('frame', frame) # Display the resulting frame
wait_key=cv2.waitKey(1) & 0xFF # Save Waitkey object in variable since we will use it multiple times
if wait_key == ord('a'): # If a is pressed
name ="eye"+str(frame_id)+'.jpg'
name = path + "\\" + name # Set name and path
cv2.imwrite(name, frame) # Save image
frame_id += 1 # Increment frame_id
elif wait_key == ord('q'): # If user presses "q"
break # Exit from while Loop
vid.release() # After the loop release the cap object
cv2.destroyAllWindows() # Destroy all the windows
```
## Use Haar Cascade to detect objects
```
## This is just an example to illustrate how to use Haar Cascades in order to detect objects (LIVE) ##
face = cv2.CascadeClassifier('Haarcascade/haarcascade_frontalface_default.xml') # Face Haar Cascade loading
eye = cv2.CascadeClassifier('Haarcascade/haarcascade_eye.xml') # Eye Haar Cascade Loading
path = "Bounding_box" # Path to Store Photos
frame_id = 0 # Frame Id
vid = cv2.VideoCapture(0) # Define a video capture object
status = True # Initialize status
while(status):
status, frame = vid.read() # Capture the video frame by frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert to gray scale
face_info = face.detectMultiScale(gray, 1.3, 5) # Get face information
for (x,y,w,h) in face_info: # Iterate over this information
cv2.rectangle(frame,(x,y),(x+w,y+h),(255,255,0),1) # Draw rectangle
cropped_face = gray[y:y+h, x:x+w] # Crop face
eye_info = eye.detectMultiScale(gray) # Get info of eyes
for (ex,ey,ew,eh) in eye_info: # Iterate over eye information
cv2.rectangle(frame,(ex,ey),(ex+ew,ey+eh),(0,255,0),2) # Draw over eye information
cv2.imshow('frame', frame) # Display the resulting frame
wait_key = cv2.waitKey(1) & 0xFF # Store Waitkey object
if wait_key == ord('a'): # If a is pressed
name = "eye"+str(frame_id)+'.jpg' # Set name
name = path + "\\" + name # Add path
cv2.imwrite(name, frame) # Set photo
frame_id += 1 # Increment frame id
elif wait_key == ord('q'): # If q is pressed
break # Break while loop
vid.release() # After the loop release the cap object
cv2.destroyAllWindows() # Destroy all the windows
```
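The face crop `gray[y:y+h, x:x+w]` used above is plain NumPy slicing: rows (the `y` axis) come first, then columns (the `x` axis). A standalone sketch on a synthetic image:

```python
import numpy as np

img = np.arange(100).reshape(10, 10)   # fake 10x10 grayscale "image"
x, y, w, h = 2, 3, 4, 5                # bounding box in detectMultiScale order

roi = img[y:y+h, x:x+w]                # rows are y, columns are x
print(roi.shape)  # (5, 4): h rows by w columns
```

Swapping the axes (`img[x:x+w, y:y+h]`) is a common bug that silently crops the wrong region.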
## Capture face gestures
```
## This is just an example to illustrate how to use Haar Cascades in order to detect objects (LIVE) ##
face = cv2.CascadeClassifier('Haarcascade/haarcascade_frontalface_default.xml') # Face Haar Cascade loading
eye = cv2.CascadeClassifier('Haarcascade/haarcascade_eye.xml') # Eye Haar Cascade Loading
path = "Bounding_box" # Path to Store Photos
frame_id = 0 # Frame Id
vid = cv2.VideoCapture(0) # Define a video capture object
status = True # Initialize status
while(status):
status, frame = vid.read() # Capture the video frame by frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert to gray scale
face_info = face.detectMultiScale(gray, 1.3, 5) # Get face information
for (x,y,w,h) in face_info: # Iterate over this information
cv2.rectangle(frame,(x,y),(x+w,y+h),(255,255,0),1) # Draw rectangle
cropped_face_color = frame[y:y+h, x:x+w] # Crop face (color)
cv2.imshow('frame', frame) # Display the resulting frame
wait_key = cv2.waitKey(1) & 0xFF # Store Waitkey object
if wait_key == ord('a'): # If a is pressed
name = "eye"+str(frame_id)+'.jpg' # Set name
name = path + "\\" + name # Add path
cv2.imwrite(name, cropped_face_color) # Set photo
frame_id += 1 # Increment frame id
elif wait_key == ord('q'): # If q is pressed
break # Break while loop
vid.release() # After the loop release the cap object
cv2.destroyAllWindows() # Destroy all the windows
```
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import time
import os
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from tests import test_prediction, test_generation
# load all that we need
dataset = np.load('../dataset/wiki.train.npy', allow_pickle=True)
devset = np.load('../dataset/wiki.valid.npy', allow_pickle=True)
fixtures_pred = np.load('../fixtures/prediction.npz') # dev
fixtures_gen = np.load('../fixtures/generation.npy') # dev
fixtures_pred_test = np.load('../fixtures/prediction_test.npz') # test
fixtures_gen_test = np.load('../fixtures/generation_test.npy') # test
vocab = np.load('../dataset/vocab.npy')
# set device as per system
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# data loader
class LanguageModelDataLoader(DataLoader):
"""
TODO: Define data loader logic here
"""
def __init__(self, dataset, batch_size, shuffle=True):
self.dataset, self.batch_size, self.shuffle = dataset, batch_size, shuffle
def __iter__(self):
# concatenate dataset articles into a single stream of token ids
# dataset shape: (579,), dataset[0].shape: (3803,)
concatenate_string = np.concatenate(self.dataset) # concatenated shape: (2075677,)
# generate input, output sequences eg: I ate an apple -> inp_seq: I ate an, out_seq: ate an apple
# (also convert to torch tensors)
input_sequence = torch.as_tensor(concatenate_string[:-1]) # first element to second last element
output_sequence = torch.as_tensor(concatenate_string[1:]) # second element to last element
# calculate excess length while batching and truncate it off
excess_length = len(input_sequence)%self.batch_size
truncated_length = len(input_sequence) - excess_length
input_sequence, output_sequence = input_sequence[:truncated_length], output_sequence[:truncated_length]
# convert to long tensors
input_sequence, output_sequence = (input_sequence).type(torch.LongTensor), (output_sequence).type(torch.LongTensor)
# batch the input and output sequences
num_batches = truncated_length // self.batch_size
input_sequence = input_sequence.reshape(self.batch_size, num_batches)
output_sequence = output_sequence.reshape(self.batch_size, num_batches)
# print(f'input sequence: {input_sequence.shape} \noutput sequence: {output_sequence.shape}')
# YIELD single batch of input, output for each batch (since we are designing an iter)
prev = curr = 0 # current index for indexing data from sequences
while curr < num_batches:
# random BPTT length, https://arxiv.org/pdf/1708.02182.pdf section 5
bptt_length = self.random_length()
prev = curr
curr += bptt_length
yield input_sequence[:, prev:curr], output_sequence[:, prev:curr]
# random BPTT length, https://arxiv.org/pdf/1708.02182.pdf section 5
def random_length(self):
random_probability = np.random.random_sample()
if random_probability > 0.95:
bptt_length = np.random.normal(70, 5)
else:
bptt_length = np.random.normal(35, 5)
return round(bptt_length) #round off so we have integers
# test code
loader = LanguageModelDataLoader(dataset=dataset, batch_size=60, shuffle=True)
loader.__iter__()
# print(f'x:{x.shape}, y:{y.shape}')
# model
class LanguageModel(nn.Module):
"""
TODO: Define your model here
"""
def __init__(self, vocab_size):
super(LanguageModel, self).__init__()
# embedding size = 400 (https://arxiv.org/pdf/1708.02182.pdf section 5)
self.embedding = nn.Embedding(num_embeddings = vocab_size, embedding_dim = 400) # simple lookup table that stores embeddings of a fixed dictionary and size
# hidden size = 1150 (https://arxiv.org/pdf/1708.02182.pdf section 5)
self.lstm = nn.LSTM(input_size=400, hidden_size=1150, num_layers=3, batch_first=True)
# linear output = vocabulary size
self.linear = nn.Linear(in_features=1150, out_features=vocab_size)
def forward(self, x, hiddens=None):
# Feel free to add extra arguments to forward (like an argument to pass in the hiddens)
# embedding
embeddings = self.embedding(x)
# lstm / rnn
out, hiddens = self.lstm(embeddings, hiddens) if hiddens is not None else self.lstm(embeddings) # operate on hidden states only if they are available
# linear
out = self.linear(out)
return out, hiddens
model = LanguageModel(len(vocab))
print(model)
# model hyperparameters
LEARNING_RATE = 1e-3
WEIGHT_DECAY = 1e-6
# model trainer
import time
class LanguageModelTrainer:
def __init__(self, model, loader, max_epochs=1, run_id='exp'):
"""
Use this class to train your model
"""
# feel free to add any other parameters here
self.model = model
self.loader = loader
self.train_losses = []
self.val_losses = []
self.predictions = []
self.predictions_test = []
self.generated_logits = []
self.generated = []
self.generated_logits_test = []
self.generated_test = []
self.epochs = 0
self.max_epochs = max_epochs
self.run_id = run_id
# TODO: Define your optimizer and criterion here
self.optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY) #
self.criterion = nn.CrossEntropyLoss()
def train(self):
self.model.train() # set to training mode
epoch_loss = 0
num_batches = 0
start_time = time.time()
for batch_num, (inputs, targets) in enumerate(self.loader):
if batch_num % 50 == 0:
print(f'batch : {batch_num} \ttotal time elapsed : {round(time.time()-start_time, 2)} sec')
epoch_loss += self.train_batch(inputs, targets)
epoch_loss = epoch_loss / (batch_num + 1)
self.epochs += 1
print('[TRAIN] Epoch [%d/%d] Loss: %.4f'
% (self.epochs + 1, self.max_epochs, epoch_loss))
print(f'time taken = {round((time.time()-start_time)/60, 2)} min')
self.train_losses.append(epoch_loss)
def train_batch(self, inputs, targets):
"""
TODO: Define code for training a single batch of inputs
"""
# initialize and move to the active device (make sure everything (inputs, model, outputs, targets) on same device)
self.optimizer.zero_grad()
inputs = inputs.to(device)
targets = targets.to(device)
# get output from model
outputs, _ = self.model(inputs)
# reshape outputs and targets
outputs = outputs.reshape(-1, outputs.shape[2])
targets = targets.reshape(-1)
# judge quality of output against the target using loss function
loss = self.criterion(outputs, targets)
# optimize weights
loss.backward()
self.optimizer.step()
return loss.item() # return a plain float so the computation graph is not kept alive across batches
def test(self):
# don't change these
self.model.eval() # set to eval mode
predictions = TestLanguageModel.prediction(fixtures_pred['inp'], self.model) # get predictions
self.predictions.append(predictions)
generated_logits = TestLanguageModel.generation(fixtures_gen, 10, self.model) # generated predictions for 10 words
generated_logits_test = TestLanguageModel.generation(fixtures_gen_test, 10, self.model)
nll = test_prediction(predictions, fixtures_pred['out'])
generated = test_generation(fixtures_gen, generated_logits, vocab)
generated_test = test_generation(fixtures_gen_test, generated_logits_test, vocab)
self.val_losses.append(nll)
self.generated.append(generated)
self.generated_test.append(generated_test)
self.generated_logits.append(generated_logits)
self.generated_logits_test.append(generated_logits_test)
# generate predictions for test data
predictions_test = TestLanguageModel.prediction(fixtures_pred_test['inp'], self.model) # get predictions
self.predictions_test.append(predictions_test)
print('[VAL] Epoch [%d/%d] Loss: %.4f'
% (self.epochs + 1, self.max_epochs, nll))
print('='*60)
return nll
def save(self):
# don't change these
model_path = os.path.join('experiments', self.run_id, 'model-{}.pkl'.format(self.epochs))
torch.save({'state_dict': self.model.state_dict()},
model_path)
np.save(os.path.join('experiments', self.run_id, 'predictions-{}.npy'.format(self.epochs)), self.predictions[-1])
np.save(os.path.join('experiments', self.run_id, 'predictions-test-{}.npy'.format(self.epochs)), self.predictions_test[-1])
np.save(os.path.join('experiments', self.run_id, 'generated_logits-{}.npy'.format(self.epochs)), self.generated_logits[-1])
np.save(os.path.join('experiments', self.run_id, 'generated_logits-test-{}.npy'.format(self.epochs)), self.generated_logits_test[-1])
with open(os.path.join('experiments', self.run_id, 'generated-{}.txt'.format(self.epochs)), 'w') as fw:
fw.write(self.generated[-1])
with open(os.path.join('experiments', self.run_id, 'generated-{}-test.txt'.format(self.epochs)), 'w') as fw:
fw.write(self.generated_test[-1])
class TestLanguageModel:
def prediction(inp, model):
"""
TODO: write prediction code here
:param inp:
:return: a np.ndarray of logits
"""
# every input across notebook needs to be converted to long tensor
# convert inputs to long tensor
inp = torch.LongTensor(inp)
# move to active device
inp = inp.to(device)
# get model output
out, _ = model(inp) # model returns (logits, hidden state); only the logits are needed here
out = out[:, -1]
# detatch logits array from tensor
predictions = out.cpu().detach().numpy()
return predictions
def generation(inp, forward, model):
"""
TODO: write generation code here
Generate a sequence of words given a starting sequence.
:param inp: Initial sequence of words (batch size, length)
:param forward: number of additional words to generate
:return: generated words (batch size, forward)
"""
model.eval()
with torch.no_grad():
result = [] # array of strings of length = forward
# change to long type
inp = torch.LongTensor(inp)
# move inputs to device
inp = inp.to(device)
hidden = None
for i in range(forward):
out, hidden = model(inp, hidden) if hidden is not None else model(inp) # pass in hidden state only if available
predictions = torch.argmax(out, dim=2)
predictions = predictions[:,-1]
inp = predictions.unsqueeze(1)
result.append(inp)
# concatenate result shape
result = torch.cat(result, dim=1)
# detatch words array from tensor
result = result.cpu().detach().numpy()
return result
# TODO: define other hyperparameters here
NUM_EPOCHS = 6 # based on writeup
BATCH_SIZE = 80 # based on https://arxiv.org/pdf/1708.02182.pdf section 5
run_id = str(int(time.time()))
if not os.path.exists('./experiments'):
os.mkdir('./experiments')
os.mkdir('./experiments/%s' % run_id)
print("Saving models, predictions, and generated words to ./experiments/%s" % run_id)
loader = LanguageModelDataLoader(dataset=dataset, batch_size=BATCH_SIZE, shuffle=True)
model = LanguageModel(len(vocab))
model = model.to(device)
trainer = LanguageModelTrainer(model=model, loader=loader, max_epochs=NUM_EPOCHS, run_id=run_id)
print(f'length of dataloader = {len(loader.dataset)}')
best_nll = 1e30
for epoch in range(NUM_EPOCHS):
trainer.train()
nll = trainer.test()
if nll < best_nll:
best_nll = nll
print("Saving model, predictions and generated output for epoch "+str(epoch)+" with NLL: "+ str(best_nll))
trainer.save()
# Don't change these
# plot training curves
plt.figure()
plt.plot(range(1, trainer.epochs + 1), trainer.train_losses, label='Training losses')
plt.plot(range(1, trainer.epochs + 1), trainer.val_losses, label='Validation losses')
plt.xlabel('Epochs')
plt.ylabel('NLL')
plt.legend()
plt.show()
# see generated output
print (trainer.generated[-1]) # get last generated output
```
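The batching scheme in `LanguageModelDataLoader` above (shift by one for next-token targets, truncate the excess, then reshape so each row is one contiguous stream) can be seen on a toy sequence; a NumPy sketch with made-up numbers:

```python
import numpy as np

tokens = np.arange(10)       # toy token-id stream
batch_size = 3

inp, out = tokens[:-1], tokens[1:]          # targets are inputs shifted by one
n = (len(inp) // batch_size) * batch_size   # truncate the excess length
num_cols = n // batch_size
inp = inp[:n].reshape(batch_size, num_cols)
out = out[:n].reshape(batch_size, num_cols)

print(inp)   # [[0 1 2] [3 4 5] [6 7 8]]
print(out)   # [[1 2 3] [4 5 6] [7 8 9]]
```

Each row is then sliced column-wise into variable-length BPTT windows, as in the loader's `__iter__`.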
# Degradation Module
```
## Importing Packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math # used below by the degradation model (math.exp)
## Pre-processing cycle dataframe for the battery pack in question
def deg_preprocessing(df):
df['avg_ch_C'] = (df["avg_ch_MW"]*1000)/design_dict['tot_cap'] # Charge C rate
df['avg_disch_C'] = (df["avg_disch_MW"]*1000)/design_dict['tot_cap'] # Discharge C rate
df['av_temp'] = [25]*len(df['cycle_num'].tolist())
df['SOC'] = [0]*len(df['cycle_num'].tolist())
df['max_SOC'] = [0]*len(df['cycle_num'].tolist())
df['min_SOC'] = [0]*len(df['cycle_num'].tolist())
## Iterating through cycle dataframe
for index,row in df.iterrows():
if index == 0: # first row exception
start_SOC = 0 ##start at 0 SOC
df.loc[df['cycle_num']==row['cycle_num'],'max_SOC'] = -row['ch_th_kWh']
df.loc[df['cycle_num']==row['cycle_num'],'min_SOC'] = start_SOC
df.loc[df['cycle_num']==row['cycle_num'],'SOC'] = -row['ch_th_kWh']-row['disch_th_kWh']
prev_row = df.loc[df['cycle_num']==row['cycle_num'],:]
else:
df.loc[df['cycle_num']==row['cycle_num'],'max_SOC'] = -row['ch_th_kWh']+prev_row['SOC'].values
df.loc[df['cycle_num']==row['cycle_num'],'min_SOC'] = min((-row['ch_th_kWh']-row['disch_th_kWh']),
prev_row['SOC'].values)
df.loc[df['cycle_num']==row['cycle_num'],'SOC'] = -row['ch_th_kWh']-row['disch_th_kWh']+prev_row['SOC'].values
prev_row = df.loc[df['cycle_num']==row['cycle_num'],:]
## Determining the start SOC so SOC never goes <0
overlap = min(df['SOC'].tolist()) # should be negative
df['min_SOC']= 100*(df['min_SOC'] - overlap)/design_dict['tot_cap']
df['max_SOC']= 100*(df['max_SOC'] - overlap)/design_dict['tot_cap']
df['SOC']= 100*(df['SOC'] - overlap)/design_dict['tot_cap']
df['DOD'] = df['max_SOC'] - df['SOC']
return df
# ## Function to graph cell dispatch
# def graphCellDispatch(res_path, df):
# fig, axs = plt.subplots(2,2, figsize =(10,6))
# axs[0,0].plot(df['cycle_num'],df['SOC'], label = 'SOC', c='turquoise')
# axs[0,0].plot(df['cycle_num'],df['min_SOC'], label = 'min SOC', c= 'powderblue')
# axs[0,0].plot(df['cycle_num'],df['max_SOC'], label = 'max SOC', c= 'powderblue')
# axs[0,0].set_ylabel('end of cycle SOC (%)')
# axs[0,0].set_xlabel('Cycle Number')
# axs[0,0].legend()
# axs[0,1].plot(df['cycle_num'],df['DOD'], label = 'DOD', c='turquoise')
# axs[0,1].set_ylabel('Depth of Discharge (%)')
# axs[0,1].set_xlabel('Cycle Number')
# axs[0,1].legend()
# axs[1,0].plot(df['cycle_num'],df['avg_disch_C'], label = 'Discharge', c='turquoise')
# axs[1,0].plot(df['cycle_num'],df['avg_ch_C'], label = 'Charge', c='red')
# axs[1,0].set_ylabel('C rate')
# axs[1,0].set_xlabel('Cycle Number')
# axs[1,0].legend()
# axs[1,1].plot(df['cycle_num'],df['av_temp'], label = 'Average', c='turquoise')
# axs[1,1].set_ylabel('Cell Temperature (degC)')
# axs[1,1].set_xlabel('Cycle Number')
# axs[1,1].legend()
# fig.savefig(res_path/'02_Dispatch Data'/f'{today}_Cell Dispatch_{red_factor_th}_{minimum_voltage}.png')
# #
# return
def getNextHigher(num, li):
higher = []
for number in li:
if number>num:
higher.append(number)
if higher:
lowest = sorted(higher)[0]
return lowest
else: ## for when C rate is higher than all listed
return 0
def getNextLower(num, li):
lower = []
for number in li:
if number<num:
lower.append(number)
if lower:
highest = sorted(lower, reverse = True)[0]
return highest
else: ## for when C rate is lower than all listed
return 0
## Function that carries out linear interpolation
def linInterp(low_a,high_a,low_p,high_p,p):
# a = actual data being interpolated, p = data that determines interpolation coefficient
## Handle the boundary case first, so the division below never sees low_p == high_p
if low_p == high_p:
return low_a if p < low_p else high_a
result = low_a + (high_a-low_a)*((p-low_p)/(high_p-low_p))
return result
# Empirical Model Coefficients
empiricalModelCoeffs = {}
## FOR LFP (from paper: https://www.sciencedirect.com/science/article/abs/pii/S0378775310021269)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [30330, 19330, 12000, 11500]
Coef['Ea']=[31500, 31000, 29500, 28000]
Coef['z']=[0.552, 0.554,0.56,0.56]
empiricalModelCoeffs['LFP'] = Coef
## FOR NMC (estimate)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [40000, 30000, 25000, 20000]
Coef['Ea']=[30000, 29500, 29000, 28000]
Coef['z']=[ 0.552, 0.554,0.56,0.56]
empiricalModelCoeffs['NMC'] = Coef
## FOR NCA (estimate)
# (short cycle life, high energy density)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [60660, 38000, 25000, 23000]
Coef['Ea']=[29000, 29000, 28000, 28000]
Coef['z']=[0.6, 0.62,0.64,0.65]
empiricalModelCoeffs['NCA'] = Coef
## FOR LCO (estimate)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [50000, 40000, 30000, 25000]
Coef['Ea']=[31500, 31000, 29500, 28000]
Coef['z']=[0.57, 0.572,0.58,0.58]
empiricalModelCoeffs['LCO'] = Coef
## FOR IMR (estimate)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [50500, 45000, 30000, 30000]
Coef['Ea']=[31500, 31000, 29500, 28000]
Coef['z']=[0.552, 0.554,0.56,0.56]
empiricalModelCoeffs['IMR'] = Coef
## FOR LTO (estimate)
Coef = pd.DataFrame()
Coef['C-rate'] = [0.5,2,6,10]
Coef['B'] = [15000, 10000, 6000, 5000]
Coef['Ea']=[31500, 31000, 29500, 28000]
Coef['z']=[0.552, 0.554,0.56,0.56]
empiricalModelCoeffs['LTO'] = Coef
## Function to carry out degradation modelling from processed data
def empiricalDegModel(df, design_dict, EOL, chem, chem_nom_voltage, chem_nom_cap):
    ## Empirical model
    tot_dur = (df.iloc[[-1]]['time']+df.iloc[[-1]]['disch_dur']+df.iloc[[-1]]['ch_dur']+df.iloc[[-1]]['rest_dur']).values[0]
    t_df = df[['cycle_num','time','disch_dur','ch_dur','rest_dur','disch_th_kWh','ch_th_kWh','av_temp','avg_ch_C']]
    t_df['avg_ch_C'] = abs(t_df['avg_ch_C'])  # making all C rates positive
    getNearest = lambda num, collection: min(collection, key=lambda x: abs(x-num))
    # Empirical model coefficients: dependent on chemistry
    Coef = empiricalModelCoeffs[chem]
    # Initial conditions
    R = 8.3144626  # gas constant
    running_deg = 0
    running_SOH = 100
    deg_dict = {}
    for index, row in t_df.iterrows():
        ## Getting boundary C rates to interpolate coefficients: capped by the highest and lowest values (no extrapolation)
        highC = getNextHigher(row['avg_ch_C'], Coef['C-rate'].tolist())
        if highC == 0:
            highC = max(Coef['C-rate'].tolist())
            lowC = highC
        else:
            lowC = getNextLower(row['avg_ch_C'], Coef['C-rate'].tolist())
            if lowC == 0:
                lowC = min(Coef['C-rate'].tolist())
        t_df.loc[t_df['cycle_num']==row['cycle_num'],'avg_ch_C'] = row['avg_ch_C']
        ## Interpolating degradation coefficients (for the relevant C rate)
        B = linInterp(Coef.loc[Coef['C-rate']==lowC,'B'].values[0],
                      Coef.loc[Coef['C-rate']==highC,'B'].values[0],
                      lowC, highC, row['avg_ch_C'])
        Ea = linInterp(Coef.loc[Coef['C-rate']==lowC,'Ea'].values[0],
                       Coef.loc[Coef['C-rate']==highC,'Ea'].values[0],
                       lowC, highC, row['avg_ch_C'])
        z = linInterp(Coef.loc[Coef['C-rate']==lowC,'z'].values[0],
                      Coef.loc[Coef['C-rate']==highC,'z'].values[0],
                      lowC, highC, row['avg_ch_C'])
        av_th = (row['disch_th_kWh']+(-row['ch_th_kWh']))/2
        # Ah throughput per cell (average of charge and discharge),
        # equivalent to the throughput through a 2.2 Ah LFP cell (derated to 2 Ah) with the same number of cycles
        Ah = ((av_th*1000)/(chem_nom_voltage*design_dict['tot_cells']))*(2/chem_nom_cap)
        temp = row['av_temp']+273.15  # temperature (K)
        Qloss = B*(math.exp(-(Ea)/(R*temp)))*(Ah**z)  # % capacity loss
        t_df.loc[t_df['cycle_num']==row['cycle_num'],'Qloss'] = Qloss
        running_SOH = running_SOH*((100-Qloss)/100)
        running_deg = 100 - running_SOH
        t_df.loc[t_df['cycle_num']==row['cycle_num'],'running_deg'] = running_deg
    end_SOH = running_SOH
    ## Calculation of degradation life from the EoL condition and the running SoH
    deg_life = tot_dur*(np.log(EOL/100)/np.log(end_SOH/100))
    ## Storing data into dictionary
    deg_dict['test_dur'] = tot_dur
    deg_dict['end_SOH'] = end_SOH
    deg_dict['deg_life'] = deg_life
    return deg_dict
```
| github_jupyter |
```
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
%matplotlib inline
```
<h3>Load the files and standardize missing values</h3>
```
missing_values = ["n/a", "na", "--", " "] # most common representations of missing data
df = pd.read_csv("/Users/filipetatarli/Documents/WA_Fn-UseC_-Telco-Customer-Churn.csv", na_values = missing_values)
df.head(3)
```
<h3>Check for missing values</h3>
```
df.isnull().sum() # checking for missing values
df.TotalCharges.fillna(value = df.tenure * df.MonthlyCharges, inplace = True)
df.isnull().sum() # checking again after imputation
```
<h2> Explore the variables </h2>
```
colors = ['#ff9999','#66b3ff','#99ff99','#ffcc99']
ax = df.groupby('Churn').size().transform(lambda x: x/x.sum())
#ax.plot.pie(legend="True")
ax.plot.pie(autopct='%1.1f%%', startangle=90,shadow=True,colors = colors)
## https://medium.com/@kvnamipara/a-better-visualisation-of-pie-charts-by-matplotlib-935b7667d77f
df.info()
# STANDARDIZING COLUMNS
df.drop(["customerID"], axis=1, inplace=True)
df['gender'] = df['gender'].apply(lambda x: 1 if x == 'Male' else 0)
# Map yes/no (and "no internet service") answers to 1/0 for all binary columns
yes_no_map = {"yes": 1, "no": 0, "no internet service": 0}
binary_cols = ['Partner', 'Dependents', 'PhoneService', 'OnlineSecurity', 'TechSupport',
               'StreamingTV', 'StreamingMovies', 'PaperlessBilling', 'OnlineBackup',
               'DeviceProtection', 'Churn']
for col in binary_cols:
    df[col] = df[col].str.lower().replace(yes_no_map)
display(df[['tenure','MonthlyCharges','TotalCharges']].describe())
sns.countplot(x="Contract", data=df)
g = sns.FacetGrid(df, col="Churn", height=10, palette="Set1")
g.map(sns.countplot, "PaymentMethod", alpha=.7);
g = sns.FacetGrid(df, row="Churn", height=1.7, aspect=4,)
g.map(sns.distplot, "MonthlyCharges", hist=False, rug=False);
```
<h2> Achando Outliers </h2>
```
sns.boxplot(x=df['MonthlyCharges'])
pal = dict([(1,'seagreen'), (0,'gray')])
g = sns.FacetGrid(df, hue="Churn", palette=pal, height=8)
g.map(plt.scatter, "TotalCharges", "tenure", s=50, alpha=.5, linewidth=.4, edgecolor="white")
g.add_legend();
# https://github.com/mwaskom/seaborn/issues/1114
#sns.pairplot(df[['tenure','MonthlyCharges','TotalCharges']]);
sns.distplot(df['MonthlyCharges']);
df_log_transformed = df  # NB: this is an alias, not a copy, so the transform below also changes df
df_log_transformed['MonthlyCharges'] = df['MonthlyCharges'].apply(lambda x: np.log(x + 1))
# REMOVING OUTLIERS
sns.boxplot(y=df["TotalCharges"])
# Produce a scatter matrix for each pair of attributes in the data
scatter_matrix(df[['tenure','MonthlyCharges','TotalCharges']], alpha=0.3, figsize = (10,8), diagonal = 'kde')
plt.figure(figsize=(12,8))
subjective_corr = df.corr()
mask = np.zeros_like(subjective_corr, dtype=bool)  # the mask must match the correlation matrix's shape
mask[np.triu_indices_from(mask)] = True
sns.heatmap(subjective_corr, annot = True, linewidths=.5, cmap='coolwarm', mask = mask)
# EXPLANATION
# PLOTS
gp = df.groupby('Contract')["Churn"].value_counts()/len(df)
gp = gp.to_frame().rename({"Churn": "Porcentagem"}, axis=1).reset_index()
sns.barplot(x='Contract', y="Porcentagem", hue="Churn", data=gp)
## CORRELATION
df.dtypes.value_counts()
# TODO: Apply one-hot encoding to the data in 'features_log_minmax_transform' using pandas.get_dummies()
features_final = pd.get_dummies(df)
encoded = list(features_final.columns)
print(encoded)
features_final
df['SeniorCitizen'].value_counts()
sns.countplot(x="Contract", data=df)
```
| github_jupyter |
# CTW dataset tutorial (Part 1: basics)
Hello, welcome to the tutorial of _Chinese Text in the Wild_ (CTW) dataset. In this tutorial, we will show you:
1. [Basics](#CTW-dataset-tutorial-(Part-1:-basics))
- [The structure of this repository](#The-structure-of-this-repository)
- [Dataset split](#Dataset-Split)
- [Download images and annotations](#Download-images-and-annotations)
- [Annotation format](#Annotation-format)
- [Draw annotations on images](#Draw-annotations-on-images)
- [Appendix: Adjusted bounding box conversion](#Appendix:-Adjusted-bounding-box-conversion)
2. Classification baseline
- Train classification model
- Results format and evaluation API
- Evaluate your classification model
3. Detection baseline
- Train detection model
- Results format and evaluation API
 - Evaluate your detection model
Our homepage is https://ctwdataset.github.io/, where you may find more useful information.
If you don't want to run the baseline code, please jump to [Dataset split](#Dataset-Split) and [Annotation format](#Annotation-format) sections.
Notes:
> This notebook MUST be run under `$CTW_ROOT/tutorial`.
>
> All the code SHOULD be run with `Python>=3.4`. We make it compatible with `Python>=2.7` with best effort.
>
> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
## The structure of this repository
Our git repository is `git@github.com:yuantailing/ctw-baseline.git`, which you can browse from [GitHub](https://github.com/yuantailing/ctw-baseline).
There are several directories under `$CTW_ROOT`.
- **tutorial/**: this tutorial
- **data/**: download and place images and annotations
- **prepare/**: prepare dataset splits
- **classification/**: classification baselines using [TensorFlow](https://www.tensorflow.org/)
- **detection/**: a detection baseline using [YOLOv2](https://pjreddie.com/darknet/yolo/)
- **judge/**: evaluate testing results and draw results and statistics
- **pythonapi/**: APIs to traverse annotations, to evaluate results, and for common use
 - **cppapi/**: a faster implementation of detection AP evaluation
- **codalab/**: which we run on [CodaLab](https://competitions.codalab.org/competitions/?q=CTW) (our evaluation server)
- **ssd/**: a detection method using [SSD](https://github.com/weiliu89/caffe/tree/ssd)
Most of the above directories have some similar structures.
- **\*/settings.py**: configure directory of images, file path to annotations, and dedicated configurations for each step
- **\*/products/**: store temporary files, logs, middle products, and final products
- **\*/pythonapi**: a symbolic link to `pythonapi/`, in order to use Python API more conveniently
Most of the code is written in Python, while some code is written in C++, Shell, etc.
All the code is purposed to run in subdirectories, e.g., it's correct to execute `cd $CTW_ROOT/detection && python3 train.py`, and it's incorrect to execute `cd $CTW_ROOT && python3 detection/train.py`.
All our code won't create or modify any files out of `$CTW_ROOT` (except `/tmp/`), and don't need a privilege elevation (except for running docker workers on the evaluation server). You SHOULD install requirements before you run our code.
- git>=1
- Python>=3.4
- Jupyter notebook>=5.0
- gcc>=5
- g++>=5
- CUDA driver
- CUDA toolkit>=8.0
- CUDNN>=6.0
- OpenCV>=3.0
- requirements listed in `$CTW_ROOT/requirements.txt`
Recommended hardware requirements:
- RAM >= 32GB
- GPU memory >= 12 GB
- Hard Disk free space >= 200 GB
- CPU logical cores >= 8
- Network connection
## Dataset Split
We split the dataset into 4 parts:
1. Training set (~75%)
 For each image in the training set, the annotation contains a list of sentences (text lines), and each sentence contains some character instances.
Each character instance contains:
- its underlying character,
- its bounding box (polygon),
- and 6 attributes.
Only Chinese character instances are completely annotated, non-Chinese characters (e.g., ASCII characters) are partially annotated.
 Some ignore regions are annotated, which contain character instances that cannot be recognized by humans (e.g., too small, too fuzzy).
We will show the annotation format in [next sections](#Annotation-format).
2. Validation set (~5%)
 Annotations in the validation set are the same as those in the training set.
The split between training set and validation set is only a recommendation. We make no restriction on how you split them. To enlarge training data, you MAY use TRAIN+VAL to train your models.
3. Testing set for classification (~10%)
 For this testing set, we make the images and annotated bounding boxes publicly available. The underlying characters, attributes, and ignore regions are not available.
To evaluate your results on testing set, please visit our evaluation server.
4. Testing set for detection (~10%)
 For this testing set, we make only the images publicly available.
To evaluate your results on testing set, please visit our evaluation server.
Notes:
> You MUST NOT use annotations of testing set to fine tune your models or hyper-parameters. (e.g. use annotations of classification testing set to fine tune your detection models)
>
> You MUST NOT use evaluation server to fine tune your models or hyper-parameters.
## Download images and annotations
Visit our homepage (https://ctwdataset.github.io/) and gain access to the dataset.
1. Clone our git repository.
```sh
$ git clone git@github.com:yuantailing/ctw-baseline.git
```
1. Download images, and unzip all the images to `$CTW_ROOT/data/all_images/`.
For image file path, both `$CTW_ROOT/data/all_images/0000001.jpg` and `$CTW_ROOT/data/all_images/any/path/0000001.jpg` are OK, do not modify file name.
1. Download annotations, and unzip it to `$CTW_ROOT/data/annotations/downloads/`.
```sh
$ mkdir -p ../data/annotations/downloads && tar -xzf /path/to/ctw-annotations.tar.gz -C ../data/annotations/downloads
```
1. In order to run evaluation and analysis code locally, we will use validation set as testing sets in this tutorial.
```sh
$ cd ../prepare && python3 fake_testing_set.py
```
 If you intend to train your model on TRAIN+VAL, you can execute `cp ../data/annotations/downloads/* ../data/annotations/` instead of running the above code. But you will not be able to run evaluation and analysis code locally; just submit the results to our evaluation server.
1. Create symbolic links for TRAIN+VAL (`$CTW_ROOT/data/images/trainval/`) and TEST(`$CTW_ROOT/data/images/test/`) set, respectively.
```sh
$ cd ../prepare && python3 symlink_images.py
```
## Annotation format
In this section, we will show you:
- Overall information format
- Training set annotation format
- Classification testing set format
We will display some examples in the next section.
#### Overall information format
Overall information file (`../data/annotations/info.json`) is UTF-8 (no BOM) encoded [JSON](https://www.json.org/).
The data struct for this information file is described below.
```
information:
{
train: [image_meta_0, image_meta_1, image_meta_2, ...],
val: [image_meta_0, image_meta_1, image_meta_2, ...],
test_cls: [image_meta_0, image_meta_1, image_meta_2, ...],
test_det: [image_meta_0, image_meta_1, image_meta_2, ...],
}
image_meta:
{
image_id: str,
file_name: str,
width: int,
height: int,
}
```
`train`, `val`, `test_cls`, and `test_det` keys denote the training set, validation set, testing set for classification, and testing set for detection, respectively.
The resolution of each image is always $2048 \times 2048$. Each image ID is a 7-digit string; the first digit indicates the camera orientation, following this rule:
- '0': back
- '1': left
- '2': front
- '3': right
The `file_name` field doesn't contain a directory name, and is always `image_id + '.jpg'`.
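As a quick illustration, the orientation rule above can be applied with a small helper like the following (a hypothetical convenience function, not part of the dataset API):

```python
# Map the first digit of a CTW image ID to the camera orientation,
# following the rule listed above.
ORIENTATIONS = {'0': 'back', '1': 'left', '2': 'front', '3': 'right'}

def camera_orientation(image_id):
    """Return the camera orientation encoded in a 7-digit image ID."""
    return ORIENTATIONS[image_id[0]]

print(camera_orientation('2040368'))  # front
```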
#### Training set annotation format
All `.jsonl` annotation files (e.g. `../data/annotations/train.jsonl`) are UTF-8 encoded [JSON Lines](http://jsonlines.org/); each line corresponds to the annotation of one image.
The data struct for each of the annotations in training set (and validation set) is described below.
```
annotation (corresponding to one line in .jsonl):
{
image_id: str,
file_name: str,
width: int,
height: int,
annotations: [sentence_0, sentence_1, sentence_2, ...], # MUST NOT be empty
ignore: [ignore_0, ignore_1, ignore_2, ...], # MAY be an empty list
}
sentence:
[instance_0, instance_1, instance_2, ...] # MUST NOT be empty
instance:
{
polygon: [[x0, y0], [x1, y1], [x2, y2], [x3, y3]], # x, y are floating-point numbers
text: str, # the length of the text MUST be exactly 1
is_chinese: bool,
attributes: [attr_0, attr_1, attr_2, ...], # MAY be an empty list
adjusted_bbox: [xmin, ymin, w, h], # x, y, w, h are floating-point numbers
}
attr:
"occluded" | "bgcomplex" | "distorted" | "raised" | "wordart" | "handwritten"
ignore:
{
polygon: [[x0, y0], [x1, y1], [x2, y2], [x3, y3]],
bbox: [xmin, ymin, w, h],
}
```
Original bounding box annotations are polygons; we describe how `polygon` is converted to `adjusted_bbox` in the [appendix](#Appendix:-Adjusted-bounding-box-conversion).
Notes:
> The order of lines is not guaranteed to be consistent with `info.json`.
>
> A polygon MUST be a quadrangle.
>
> All characters in `CJK Unified Ideographs` are considered to be Chinese, while characters in `ASCII` and `CJK Unified Ideographs Extension`(s) are not.
>
> Adjusted bboxes of character `instance`s MUST intersect with the image, while bboxes of `ignore` regions may not.
>
> Some logos on the camera car (e.g., "`腾讯街景地图`" in `2040368.jpg`) and licence plates are ignored to avoid bias.
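The "Chinese character" rule quoted in the notes can be sketched as a one-line check. This is an illustrative approximation (the basic `CJK Unified Ideographs` block spans U+4E00..U+9FFF), not the dataset's official implementation:

```python
def is_chinese_char(ch):
    # Characters in the basic CJK Unified Ideographs block count as Chinese;
    # ASCII and the CJK extension blocks do not.
    return '\u4e00' <= ch <= '\u9fff'

print(is_chinese_char('腾'))  # True
print(is_chinese_char('A'))   # False
```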
#### Classification testing set format
The data struct for each of the annotations in classification testing set is described below.
```
annotation:
{
image_id: str,
file_name: str,
width: int,
height: int,
proposals: [proposal_0, proposal_1, proposal_2, ...],
}
proposal:
{
polygon: [[x0, y0], [x1, y1], [x2, y2], [x3, y3]],
adjusted_bbox: [xmin, ymin, w, h],
}
```
Notes:
> The order of `image_id` in each line is not guaranteed to be consistent with `info.json`.
>
> Non-Chinese characters (e.g., ASCII characters) MUST NOT appear in proposals.
```
from __future__ import print_function
from __future__ import unicode_literals
import json
import pprint
import settings
from pythonapi import anno_tools
print('Image meta info format:')
with open(settings.DATA_LIST) as f:
    data_list = json.load(f)
pprint.pprint(data_list['train'][0])
print('Training set annotation format:')
with open(settings.TRAIN) as f:
    anno = json.loads(f.readline())
pprint.pprint(anno, depth=3)
print('Character instance format:')
pprint.pprint(anno['annotations'][0][0])
print('Traverse character instances in an image')
for instance in anno_tools.each_char(anno):
    print(instance['text'], end=' ')
print()
print('Classification testing set format')
with open(settings.TEST_CLASSIFICATION) as f:
    anno = json.loads(f.readline())
pprint.pprint(anno, depth=2)
print('Classification testing set proposal format')
pprint.pprint(anno['proposals'][0])
```
## Draw annotations on images
In this section, we will draw annotations on images. This would help you to understand the format of annotations.
We show polygon bounding boxes of Chinese character instances in **<span style="color: #0f0;">green</span>**, non-Chinese character instances in **<span style="color: #f00;">red</span>**, and ignore regions in **<span style="color: #ff0;">yellow</span>**.
```
import cv2
import json
import matplotlib.patches as patches
import matplotlib.pyplot as plt
import os
import settings
from pythonapi import anno_tools
%matplotlib inline
with open(settings.TRAIN) as f:
    anno = json.loads(f.readline())
path = os.path.join(settings.TRAINVAL_IMAGE_DIR, anno['file_name'])
assert os.path.exists(path), 'file not exists: {}'.format(path)
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(16, 16))
ax = plt.gca()
plt.imshow(img)
for instance in anno_tools.each_char(anno):
    color = (0, 1, 0) if instance['is_chinese'] else (1, 0, 0)
    ax.add_patch(patches.Polygon(instance['polygon'], fill=False, color=color))
for ignore in anno['ignore']:
    color = (1, 1, 0)
    ax.add_patch(patches.Polygon(ignore['polygon'], fill=False, color=color))
plt.show()
```
## Appendix: Adjusted bounding box conversion
In order to create a tighter bounding box around character instances, we compute `adjusted_bbox` in the following steps, instead of using the real bounding box.
1. Take trisections for each edge of the polygon. (<span style="color: #f00;">red points</span>)
2. Compute the bounding box of the above points. (<span style="color: #00f;">blue rectangles</span>)
Adjusted bounding box is better than the real bounding box, especially for sharp polygons.
```
from __future__ import division
import collections
import matplotlib.patches as patches
import matplotlib.pyplot as plt
%matplotlib inline
def poly2bbox(poly):
    key_points = list()
    rotated = collections.deque(poly)
    rotated.rotate(1)
    for (x0, y0), (x1, y1) in zip(poly, rotated):
        for ratio in (1/3, 2/3):
            key_points.append((x0 * ratio + x1 * (1 - ratio), y0 * ratio + y1 * (1 - ratio)))
    x, y = zip(*key_points)
    adjusted_bbox = (min(x), min(y), max(x) - min(x), max(y) - min(y))
    return key_points, adjusted_bbox
polygons = [
[[2, 1], [11, 2], [12, 18], [3, 16]],
[[21, 1], [30, 5], [31, 19], [22, 14]],
]
plt.figure(figsize=(10, 6))
plt.xlim(0, 35)
plt.ylim(0, 20)
ax = plt.gca()
for polygon in polygons:
    color = (0, 1, 0)
    ax.add_patch(patches.Polygon(polygon, fill=False, color=color))
    key_points, adjusted_bbox = poly2bbox(polygon)
    ax.add_patch(patches.Rectangle(adjusted_bbox[:2], *adjusted_bbox[2:], fill=False, color=(0, 0, 1)))
    for kp in key_points:
        ax.add_patch(patches.Circle(kp, radius=0.1, fill=True, color=(1, 0, 0)))
plt.show()
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import os
import datetime
import numpy as np
import scipy
import pandas as pd
import torch
from torch import nn
import criscas
from criscas.utilities import create_directory, get_device, report_available_cuda_devices
from criscas.predict_model import *
base_dir = os.path.abspath('..')
base_dir
```
### Read sample data
```
seq_df = pd.read_csv(os.path.join(base_dir, 'sample_data', 'abemax_sampledata.csv'), header=0)
seq_df
```
The models expect sequences (i.e. target sites) to be wrapped in a `pandas.DataFrame` with a header that includes an `ID` column (the sequence identifier) and a `seq` column.
The sequences should be 20 bases long and represent the protospacer target site.
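For instance, a valid input frame can be built by hand as below (the IDs and 20-base sequences here are made up for illustration):

```python
import pandas as pd

seq_df = pd.DataFrame({
    'ID': ['seq_0', 'seq_1'],           # sequence identifiers
    'seq': ['ACGTACGTACGTACGTACGT',     # 20-base protospacer target sites
            'TTGGCCAATTGGCCAATTGG'],
})
assert seq_df['seq'].str.len().eq(20).all()  # every target site is 20 bases
seq_df
```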
```
# create a directory where we dump the predictions of the models
csv_dir = create_directory(os.path.join(base_dir, 'sample_data', 'predictions'))
```
### Specify device (i.e. CPU or GPU) to run the models on
Specify the device to run the model on. The models can run on `GPU` or `CPU`. We can instantiate a device by calling the `get_device(to_gpu, gpu_index)` function.
- To run on a GPU we pass `to_gpu = True` and, in case multiple GPU cards are installed, specify which one to use with `gpu_index=int` (counting from 0).
- If there is no GPU installed, the function will return a `CPU` device.
We can get a detailed information on the GPU cards installed on the compute node by calling `report_available_cuda_devices` function.
```
report_available_cuda_devices()
# instantiate a device using the only one available :P
device = get_device(True, 0)
device
```
### Create a BE-DICT model by specifying the target base editor
We instantiate a `BE-DICT` model by calling `BEDICT_CriscasModel(base_editor, device)`, where we specify which base editor to use (i.e. `ABEmax`, `BE4max`, `ABE8e`, `Target-AID`) and the `device` we created earlier to run on.
```
base_editor = 'ABEmax'
bedict = BEDICT_CriscasModel(base_editor, device)
```
We generate predictions by calling `predict_from_dataframe(seq_df)` where we pass the data frame wrapping the target sequences. The function returns two objects:
- `pred_w_attn_runs_df` which is a data frame that contains predictions per target base and the attentions scores across all positions.
- `proc_df` which is a data frame that represents the processed sequence data frame we passed (i.e. `seq_df`)
```
pred_w_attn_runs_df, proc_df = bedict.predict_from_dataframe(seq_df)
```
`pred_w_attn_runs_df` contains predictions from 5 trained models for `ABEmax` base editor (we have 5 runs trained per base editor). For more info, see our [paper](https://www.biorxiv.org/content/10.1101/2020.07.05.186544v1) on biorxiv.
Target positions in the sequence reported in `base_pos` column in `pred_w_attn_runs_df` uses 0-based indexing (i.e. 0-19)
```
pred_w_attn_runs_df
proc_df
```
Given that we have 5 predictions per sequence, we can further reduce them to one prediction by either `averaging` across all models, or taking the `median` or `max` prediction based on the probability of editing scores. For this we use `select_prediction(pred_w_attn_runs_df, pred_option)`, where `pred_w_attn_runs_df` is the data frame containing predictions from the 5 models for each sequence. `pred_option` can assume one of {`mean`, `median`, `max`}.
```
pred_option = 'mean'
pred_w_attn_df = bedict.select_prediction(pred_w_attn_runs_df, pred_option)
pred_w_attn_df
```
We can dump the prediction results to a specified directory on disk. We will dump both the predictions from all 5 runs (`pred_w_attn_runs_df`) and the predictions aggregated across runs (`pred_w_attn_df`).
Under `sample_data` directory we will have the following tree:
<pre>
sample_data
└── predictions
├── predictions_allruns.csv
└── predictions_predoption_mean.csv
</pre>
```
pred_w_attn_runs_df.to_csv(os.path.join(csv_dir, f'predictions_allruns.csv'))
pred_w_attn_df.to_csv(os.path.join(csv_dir, f'predictions_predoption_{pred_option}.csv'))
```
### Generate attention plots
We can generate attention plots for the prediction of each target base in the sequence using `highlight_attn_per_seq` method that takes the following arguments:
- `pred_w_attn_runs_df`: data frame that contains model's predictions (5 runs) for each target base of each sequence (see above).
- `proc_df`: data frame that represents the processed sequence data frame we passed (i.e. seq_df)
- `seqid_pos_map`: dictionary `{seq_id:list of positions}` where `seq_id` is the ID of the target sequence, and list of positions that we want to generate attention plots for. Users can specify a `position from 1 to 20` (i.e. length of protospacer sequence)
- `pred_option`: selection option for aggregating across 5 models' predictions. That is we can average the predictions across 5 runs, or take `max`, `median`, `min` or `None` (i.e. keep all 5 runs)
- `apply_attnscore_filter`: boolean (`True` or `False`) to further apply filtering on the generated attention scores. This filter keeps only predictions whose attention scores have a maximum that is >= 3 times the baseline attention score value (i.e. 3 * 1/20)
- `fig_dir`: directory where to dump the generated plots or `None` (to return the plots inline)
```
# create a dictionary to specify target sequence and the position we want attention plot for
# we are targeting position 5 in the sequence
seqid_pos_map = {'CTRL_HEKsiteNO1':[5], 'CTRL_HEKsiteNO2':[5]}
pred_option = 'mean'
apply_attn_filter = False
bedict.highlight_attn_per_seq(pred_w_attn_runs_df,
proc_df,
seqid_pos_map=seqid_pos_map,
pred_option=pred_option,
apply_attnscore_filter=apply_attn_filter,
fig_dir=None)
```
We can save the plots to disk without returning them by specifying `fig_dir`
```
# create a dictionary to specify target sequence and the position I want attention plot for
# we are targeting position 5 in the sequence
seqid_pos_map = {'CTRL_HEKsiteNO1':[5], 'CTRL_HEKsiteNO2':[5]}
pred_option = 'mean'
apply_attn_filter = False
fig_dir = create_directory(os.path.join(base_dir, 'sample_data', 'fig_dir'))
bedict.highlight_attn_per_seq(pred_w_attn_runs_df,
proc_df,
seqid_pos_map=seqid_pos_map,
pred_option=pred_option,
apply_attnscore_filter=apply_attn_filter,
fig_dir=create_directory(os.path.join(fig_dir, pred_option)))
```
We will generate the following files:
<pre>
sample_data
├── abemax_sampledata.csv
├── fig_dir
│ └── mean
│ ├── ABEmax_seqattn_CTRL_HEKsiteNO1_basepos_5_predoption_mean.pdf
│ └── ABEmax_seqattn_CTRL_HEKsiteNO2_basepos_5_predoption_mean.pdf
└── predictions
├── predictions_allruns.csv
└── predictions_predoption_mean.csv
</pre>
Similarly, we can change the other arguments, such as `pred_option` and `apply_attnscore_filter`, to get different filtering options. We leave this as an exercise for the user/reader :D
| github_jupyter |
# Day 3
batch size 256, lr 1e-3, normed weighted, rotated, cartesian, split by jet mult (1)
### Import modules
```
%matplotlib inline
from __future__ import division
import sys
import os
os.environ['MKL_THREADING_LAYER']='GNU'
sys.path.append('../')
from Modules.Basics import *
from Modules.Class_Basics import *
```
## Options
```
nJets = '1'
inputPipe, outputPipe = getPreProcPipes(normIn=True)
classModel = 'modelSwish'
varSet = "filtered_rot_cart_features"
nSplits = 10
ensembleSize = 10
ensembleMode = 'loss'
maxEpochs = 200
compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}
trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}
modelParams = {'version':classModel, 'nIn':22, 'compileArgs':compileArgs}
```
## Import data
```
trainData = h5py.File(dirLoc + 'train_' + nJets + '.hdf5', "r+")
valData = h5py.File(dirLoc + 'val_' + nJets + '.hdf5', "r+")
```
## Determine LR
```
lrFinder = batchLRFindClassifier(trainData, nSplits, getClassifier, modelParams, trainParams, lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)
compileArgs['lr'] = 1e-3
```
## Train classifier
```
results, histories = batchTrainClassifier(trainData, nSplits, getClassifier, modelParams, trainParams, patience=100, cosAnnealMult=2, trainOnWeights=True, maxEpochs=maxEpochs, verbose=1)
```
## Construct ensemble
```
with open('train_weights/resultsFile.pkl', 'rb') as fin:  # pickles must be opened in binary mode
    results = pickle.load(fin)
ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)
```
## Response on development data
```
batchEnsemblePredict(ensemble, weights, trainData, ensembleSize=10, verbose=1)
print('Training ROC AUC: unweighted {}, weighted {}'.format(
    roc_auc_score(getFeature('targets', trainData), getFeature('pred', trainData)),
    roc_auc_score(getFeature('targets', trainData), getFeature('pred', trainData), sample_weight=getFeature('weights', trainData))))
```
## Response on val data
```
batchEnsemblePredict(ensemble, weights, valData, ensembleSize=10, verbose=1)
print('Testing ROC AUC: unweighted {}, weighted {}'.format(
    roc_auc_score(getFeature('targets', valData), getFeature('pred', valData)),
    roc_auc_score(getFeature('targets', valData), getFeature('pred', valData), sample_weight=getFeature('weights', valData))))
```
## Evaluation
### Import in dataframe
```
def convertToDF(datafile, columns={'gen_target', 'gen_weight', 'pred_class'}, nLoad=-1):
    data = pandas.DataFrame()
    data['gen_target'] = getFeature('targets', datafile, nLoad)
    data['gen_weight'] = getFeature('weights', datafile, nLoad)
    data['pred_class'] = getFeature('pred', datafile, nLoad)
    print(len(data), "candidates loaded")
    return data
valData = convertToDF(valData)
sigVal = (valData.gen_target == 1)
bkgVal = (valData.gen_target == 0)
```
### MVA distributions
```
getClassPredPlot([valData[bkgVal], valData[sigVal]], weightName='gen_weight')
amsScan(valData)
def scoreTest(ensemble, weights, nJets):
    testData = h5py.File(dirLoc + 'testing_' + nJets + '.hdf5', "r+")
    batchEnsemblePredict(ensemble, weights, testData, ensembleSize=10, verbose=1)

def saveTest(cut, name, nJets):
    testData = h5py.File(dirLoc + 'testing_' + nJets + '.hdf5', "r+")
    data = pandas.DataFrame()
    data['EventId'] = getFeature('EventId', testData)
    data['pred_class'] = getFeature('pred', testData)
    data['Class'] = 'b'
    data.loc[data.pred_class >= cut, 'Class'] = 's'
    data.sort_values(by=['pred_class'], inplace=True)
    data['RankOrder'] = range(1, len(data)+1)
    data.sort_values(by=['EventId'], inplace=True)
    print(dirLoc + name + '_test.csv')
    data.to_csv(dirLoc + name + '_test.csv', columns=['EventId', 'RankOrder', 'Class'], index=False)

scoreTest(ensemble, weights, nJets)
saveTest(0.9622855186462402, 'Day_2_Basic_Features_256_1e-3_swish_mult2_200E_normedweighted_rot_cart', nJets)
```
```
!kaggle competitions submit -c higgs-boson -f ../Data/Day_2_Basic_Features_256_1e-3_swish_mult2_200E_normedweighted_rot_cart_test.csv -m "Day2"
```
| github_jupyter |
# Procedure for Word Correction Strategy as mentioned in Page 43 in the dissertation report
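Before walking through the full pipeline below, the core idea of word correction can be sketched with only the standard library, using `difflib.get_close_matches` as a stand-in for `SpellChecker`; the vocabulary and the misspelling here are made up for illustration:

```python
import difflib

# Tiny illustrative vocabulary; a real spell checker draws on a full frequency dictionary.
vocabulary = ['depression', 'anxiety', 'therapy', 'counselling']

def correct_word(word, vocab=vocabulary):
    """Return the closest vocabulary word, or the word unchanged if nothing is close."""
    matches = difflib.get_close_matches(word, vocab, n=1, cutoff=0.8)
    return matches[0] if matches else word

print(correct_word('depresion'))  # depression
print(correct_word('hello'))      # hello
```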
```
import numpy as np
import pandas as pd
import os
import nltk
import re
import string
from bs4 import BeautifulSoup
from spellchecker import SpellChecker
def read_file(df_new):
    print("Started extracting data from file", df_new.shape)
    dfnew = pd.DataFrame()
    dfnew.insert(0, 'Post', None)
    dfnew.insert(1, 'class', None)
    for val in df_new.values:
        appList = []
        sp = np.array_str(val).split(",")
        if len(sp) == 2:
            appList.append(sp[0])
            appList.append(sp[1])
            dfnew.loc[len(dfnew)] = appList
    for i in range(0, dfnew.shape[0]):
        dfnew.values[i][1] = int(dfnew.values[i][1].strip("\'|]|\""))
    print(dfnew['class'].value_counts())
    print("Finished extracting data from file", dfnew.shape)
    return dfnew
def post_tokenizing_dataset1(df):
print("Started cleaning data in dataframe", df.shape)
#print(df.head(5))
wpt = nltk.WordPunctTokenizer()
stop_words = nltk.corpus.stopwords.words('english')
token_list=[]
phrase_list=[]
token_df=pd.DataFrame()
token_df.insert(0,'Post',None)
token_df.insert(1,'class',None)
for val in df.values:
append_list=[]
filter_val=re.sub(r'Q:','',val[0])
filter_val=re.sub(r"'[a-z]{1}",'',filter_val)
filter_val=re.sub('<[a-z]+>',' ',filter_val).lower()
filter_val=re.sub(r'[^a-zA-Z\s]', '', filter_val, re.I|re.A)
filter_val=[token for token in wpt.tokenize(filter_val)]
filter_val=[word for word in filter_val if word.isalpha()]
if(filter_val):
append_list.append(' '.join(filter_val))
append_list.append(val[1])
token_df.loc[len(token_df)]=append_list
print("Finished cleaning data in dataframe",token_df.shape)
#print(token_df.head(5))
return token_df
def post_tokenizing_dataset3(df):
print("Started cleaning data in dataframe", df.shape)
#print(df.head(5))
wpt = nltk.WordPunctTokenizer()
stop_words = nltk.corpus.stopwords.words('english')
token_df=pd.DataFrame()
token_df.insert(0,'Post',None)
token_df.insert(1,'class',None)
for val in df.values:
filter_val=[]
value=re.sub(r'@\w*','',val[0])
value=re.sub(r'&.*;','',value)
value=re.sub(r'http[s?]?:\/\/.*[\r\n]*','',value)
tokens=[token for token in wpt.tokenize(value)]
tokens=[word for word in tokens if word.isalpha()]
if len(tokens)!=0:
filter_val.append(' '.join(tokens).lower())
filter_val.append(val[1])
token_df.loc[len(token_df)]=filter_val
print("Finished cleaning data in dataframe",token_df.shape)
#print(token_df.head(5))
return token_df
def correct_words(token_df_copy,badWordsDict):
spell = SpellChecker()
token_df_ones=token_df_copy[token_df_copy['class']==1]
post_list=[]
for val in token_df_ones.values:
post_list.append(val[0])
count=0
val_counts=token_df_copy['class'].value_counts()
print(val_counts[0],val_counts[0]+val_counts[1])
for val in range(val_counts[0],val_counts[0]+val_counts[1]):
sentiment=token_df_copy.loc[val][1]
if sentiment==1:
post=post_list[count]
for word in post.split(' '):
misspelled = spell.unknown([word])
for value in misspelled:
get_list=badWordsDict.get(word[0])
if(get_list):
candi_list=spell.candidates(word)
list3 = list(set(get_list)&set(candi_list))
if list3:
post=[w.replace(word, list3[0]) for w in post.split()]
post=' '.join(post)
break
token_df_copy.loc[val][0]=post
count+=1
print(count)
token_df_copy.to_csv("cor.csv",index=False, header=True)
return token_df_copy
wordList=[]
for val in string.ascii_lowercase:
with open("../swear-words/"+val+".html") as fp:
soup = BeautifulSoup(fp, 'html.parser')
wordSet=soup.find_all('table')[2]('b')
for i in range(0,len(wordSet)-1):
wordList.append(wordSet[i].string)
badWordsDict={}
for val in wordList:
if not badWordsDict.get(val[0]):
badWordsDict[val[0]]=[]
badWordsDict.get(val[0]).append(val)
df_data_1=read_file(pd.read_csv("../post.csv",sep="\t"))
df_data_2=read_file(pd.read_csv("../new_data.csv",sep=","))
df_data_3=pd.read_csv("../dataset_4.csv",sep=",")
df_data_1=post_tokenizing_dataset1(df_data_1)
token_data_2=df_data_2[df_data_2['class']==1].iloc[:,]
token_data_2=post_tokenizing_dataset1(token_data_2)
token_data_3=df_data_3[df_data_3['class']==1].iloc[0:3147,]
token_data_3=post_tokenizing_dataset3(token_data_3)
print(df_data_2['class'].value_counts())
df_data_1_new=pd.DataFrame()
df_data_1_new=df_data_1_new.append(df_data_1[df_data_1['class']==0].iloc[0:7500,],ignore_index=True)
df_data_1_new=df_data_1_new.append(df_data_1[df_data_1['class']==1],ignore_index=True)
df_data_1_new=df_data_1_new.append(token_data_2 ,ignore_index=True)
df_data_1_new=df_data_1_new.append(token_data_3,ignore_index=True)
token_df_2=df_data_1_new.copy()
token_df_new=correct_words(token_df_2,badWordsDict)
token_df_new.to_csv("corrected_post_2.csv",index=False, header=True)
df_data_1_new.to_csv("without_correction_2.csv",index=False, header=True)
print(token_df_2.loc[11859])
dfnew=pd.DataFrame()
dfnew.insert(0,'Post',None)
dfnew.insert(1,'class',None)
for val in token_df_new.values:
value=re.sub(r'http[a-z0-9]*[\r\n\s]?','',val[0])
dfnew.loc[len(dfnew)]=value
dfnew['class']=token_df_new['class']
dfnew.to_csv("without_correction_3.csv",index=False, header=True)
```
```
from extra import *
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras import regularizers
from keras.layers import Dense, Dropout, Conv2D, Input, GlobalAveragePooling2D, GlobalMaxPooling2D
from keras.layers import Add, Concatenate, BatchNormalization
import keras.backend as K
from keras.optimizers import Adam
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
batch_size = 128
num_classes = 10
# input image dimensions
HEIGHT, WIDTH = 28, 28
K.set_image_data_format('channels_first')
keras.__version__
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print('pixel range',x_train.min(), x_train.max())
```
Images are stored as pixel values ranging from 0 to 255.
```
pd.DataFrame(y_train)[0].value_counts().plot(kind='bar')
## scales the pixel range from [0, 255] to [0, 1]
def normalize(images):
images /= 255.
return images
x_train = normalize(x_train.astype(np.float32))
x_test = normalize(x_test.astype(np.float32))
x_train = x_train.reshape(x_train.shape[0], 1, WIDTH, HEIGHT)
x_test = x_test.reshape(x_test.shape[0], 1, WIDTH, HEIGHT)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
```
Now the images are normalized and the labels are one-hot encoded.
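One-hot encoding as performed by `keras.utils.to_categorical` can be sketched in plain NumPy, which makes the label transformation explicit:

```
import numpy as np

def one_hot(labels, num_classes):
    # one row per label, with a single 1.0 at the label's index
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out
```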
```
def show_images(rows, columns):
fig, axes = plt.subplots(rows,columns)
for rows in axes:
for ax in rows:
idx = np.random.randint(0, len(y_train))
ax.title.set_text(np.argmax(y_train[idx]))
ax.imshow(x_train[idx][0], cmap='gray')
ax.axis('off')
plt.show()
show_images(2,4)
def build_model():
inp = Input((1, HEIGHT, WIDTH))
x = Conv2D(16, kernel_size=(7,7), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(inp)
x = BatchNormalization()(x)
y = Conv2D(16, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x)
y = BatchNormalization()(y)
y = Conv2D(16, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(y)
y = BatchNormalization()(y)
x = Add()([x,y])
x = Conv2D(32, kernel_size=(3,3), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x)
x = BatchNormalization()(x)
y = Conv2D(32, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x)
y = BatchNormalization()(y)
y = Conv2D(32, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(y)
y = BatchNormalization()(y)
x = Add()([x,y])
x = Conv2D(64, kernel_size=(3,3), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x)
x = BatchNormalization()(x)
x = Concatenate()([GlobalMaxPooling2D(data_format='channels_first')(x) , GlobalAveragePooling2D(data_format='channels_first')(x)])
x = Dropout(0.3)(x)
out = Dense(10, activation='softmax')(x)
return Model(inputs=inp, outputs=out)
model = build_model()
model.summary()
model.compile(Adam(), loss='categorical_crossentropy', metrics=['acc'])
K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1)
lr_find(model, data=(x_train, y_train)) ## if training with a generator, pass it instead of (x_train, y_train) and set generator=True
```
selecting lr as 2e-3
### a deliberately high lr to demonstrate decay; from the graph above, anything between 0.002 and 0.004 looks reasonable
```
recorder = RecorderCallback()
clr = CyclicLRCallback(max_lr=0.4, cycles=4, decay=0.6, DEBUG_MODE=True, patience=1, auto_decay=True, pct_start=0.3, monitor='val_loss')
K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1)
model.fit(x_train, y_train, batch_size=128, epochs=4, callbacks=[recorder, clr], validation_data=(x_test, y_test))
K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1)
recorder.plot_losses()
recorder.plot_losses(log=True) #take log scale for loss
recorder.plot_losses(clip=True) #clips loss between the 2.5th and 97.5th percentile
recorder.plot_losses(clip=True, log=True)
recorder.plot_lr()
recorder.plot_mom() ##plots momentum, beta_1 in adam family of optimizers
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# a)
import sse
Lx, Ly = 8, 8
n_updates_measure = 10000
# b)
spins, op_string, bonds = sse.init_SSE_square(Lx, Ly)
for beta in [0.1, 1., 64.]:
op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10)
ns = sse.measure(spins, op_string, bonds, beta, n_updates_measure)
plt.figure()
plt.hist(ns, bins=np.arange(len(op_string)+1))
plt.axvline(len(op_string), color='r', ) # mark the length of the operator string
plt.xlim(0, len(op_string)*1.1)
plt.title("T=1./{beta:.1f}, len of op_string={l:d}".format(beta=beta, l=len(op_string)))
plt.xlabel("number of operators $n$")
```
The red bar indicates the size of the operator string after thermalization.
These histograms justify that we can fix the length of the operator string `M` (called $n^*$ in the lecture notes).
Since `M` is automatically chosen as large as needed, we effectively take into account *all* relevant terms of the full series $\sum_{n=0}^\infty$ in the expansion, even if our numerical simulations only use a finite `M`.
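How such an adaptive `M` might be maintained is sketched below (an illustrative assumption, not the `sse` module's actual code; identities are encoded as `-1`, as in `init_SSE_square`):

```
import numpy as np

def maybe_grow(op_string, n, margin=1.3):
    # grow the operator string when the sampled n approaches its length M
    M = len(op_string)
    if n > M / margin:
        new_M = int(margin * n) + 1
        grown = -1 * np.ones(new_M, np.intp)  # pad with identities (-1)
        grown[:M] = op_string
        return grown
    return op_string
```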
```
# c)
Ts = np.linspace(2., 0., 20, endpoint=False)
betas = 1./Ts
Ls = [4, 8, 16]
Es_Eerrs = []
for L in Ls:
print("="*80)
print("L =", L)
E = sse.run_simulation(L, L, betas)
Es_Eerrs.append(E)
plt.figure()
for E, L in zip(Es_Eerrs, Ls):
plt.errorbar(Ts, E[:, 0], yerr=E[:, 1], label="L={L:d}".format(L=L))
plt.legend()
plt.xlim(0, np.max(1./betas))
plt.xlabel("temperature $T$")
plt.ylabel("energy $E$ per site")
```
# specific heat
```
# d)
def run_simulation(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10):
"""A full simulation: initialize, thermalize and measure for various betas."""
spins, op_string, bonds = sse.init_SSE_square(Lx, Ly)
n_sites = len(spins)
n_bonds = len(bonds)
Es_Eerrs = []
Cs_Cerrs = []
for beta in betas:
print("beta = {beta:.3f}".format(beta=beta), flush=True)
op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10)
Es = []
Cs = []
for _ in range(n_bins):
ns = sse.measure(spins, op_string, bonds, beta, n_updates_measure)
# energy per site
n_mean = np.mean(ns)
E = (-n_mean/beta + 0.25*n_bonds) / n_sites
Es.append(E)
Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites
Cs.append(Cv)
E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins)
Es_Eerrs.append((E, Eerr))
C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins)
Cs_Cerrs.append((C, Cerr))
return np.array(Es_Eerrs), np.array(Cs_Cerrs)
Es_Errs, Cs_Cerrs = run_simulation(8, 8, betas)
plt.figure()
plt.errorbar(Ts, Cs_Cerrs[:, 0], yerr=Cs_Cerrs[:, 1], label="L=8")
plt.xlim(0, np.max(1./betas))
plt.xlabel("temperature $T$")
plt.ylabel("Specific heat $C_v$ per site")
```
## Interpretation
We see the behaviour expected from the previous plot, considering $C_v = \partial_T \langle E \rangle$.
However, as $T \rightarrow 0$ (i.e. $\beta \rightarrow \infty$), the error of $C_v$ blows up!
Looking at the formula $C_v = \langle n^2 \rangle - \langle n \rangle^2 - \langle n \rangle$, we see that it consists of large terms which should nearly cancel.
The statistical noise is of the order of the large term $\langle n^2 \rangle$, hence the relative error in $C_v$ explodes.
This is the essence of the infamous "sign problem" of quantum Monte Carlo (QMC): in many models (e.g., in our case of the SSE, if we don't have a bipartite lattice) one encounters negative weights for some configurations in the partition function, and hence a cancellation of different terms. As with $C_v$ at low temperatures, this often leads to error bars which are exponentially large in the system size. Obviously, phases arising from a "time evolution" lead to a similar problem. There is no generic way to circumvent the sign problem (it is NP-hard!), but for many specific models sign-problem-free formulations have actually been found.
On the other hand, whenever QMC has no sign problem, it is certainly one of the most powerful numerical methods we have. For example, it allows beautiful finite-size scaling collapses to extract critical exponents for quantum phase transitions, even in 2D or 3D.
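The error bars throughout these plots come from the binning analysis used in `run_simulation` (mean over bins, standard error $\mathrm{std}/\sqrt{n_\mathrm{bins}}$); as a self-contained sketch:

```
import numpy as np

def binned_mean_err(samples, n_bins=10):
    # split samples into bins, average each, and report mean and standard error
    bins = np.array_split(np.asarray(samples, dtype=float), n_bins)
    bin_means = np.array([b.mean() for b in bins])
    return bin_means.mean(), bin_means.std() / np.sqrt(n_bins)
```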
# Staggered Magnetization
```
# e)
def get_staggering(Lx, Ly):
stag = np.zeros(Lx*Ly, np.intp)
for x in range(Lx):
for y in range(Ly):
s = sse.site(x, y, Lx, Ly)
stag[s] = (-1)**(x+y)
return stag
def staggered_magnetization(spins, stag):
return 0.5*np.sum(spins * stag)
def measure(spins, op_string, bonds, stag, beta, n_updates_measure):
"""Perform a lot of updates with measurements."""
ns = []
ms = []
for _ in range(n_updates_measure):
n = sse.diagonal_update(spins, op_string, bonds, beta)
m = staggered_magnetization(spins, stag)
sse.loop_update(spins, op_string, bonds)
ns.append(n)
ms.append(m)
return np.array(ns), np.array(ms)
def run_simulation(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10):
"""A full simulation: initialize, thermalize and measure for various betas."""
spins, op_string, bonds = sse.init_SSE_square(Lx, Ly)
stag = get_staggering(Lx, Ly)
n_sites = len(spins)
n_bonds = len(bonds)
Es_Eerrs = []
Cs_Cerrs = []
Ms_Merrs = []
for beta in betas:
print("beta = {beta:.3f}".format(beta=beta), flush=True)
op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10)
Es = []
Cs = []
Ms = []
for _ in range(n_bins):
ns, ms = measure(spins, op_string, bonds, stag, beta, n_updates_measure)
# energy per site
n_mean = np.mean(ns)
E = (-n_mean/beta + 0.25*n_bonds) / n_sites
Es.append(E)
Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites
Cs.append(Cv)
Ms.append(np.mean(np.abs(ms))/n_sites) # note that we need the absolute value here!
# there is a symmetry of flipping all spins which ensures that <Ms> = 0
E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins)
Es_Eerrs.append((E, Eerr))
C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins)
Cs_Cerrs.append((C, Cerr))
M, Merr = np.mean(Ms), np.std(Ms)/np.sqrt(n_bins)
Ms_Merrs.append((M, Merr))
return np.array(Es_Eerrs), np.array(Cs_Cerrs), np.array(Ms_Merrs)
# f)
Ls = [4, 8, 16]
Ms_Merrs = []
for L in Ls:
print("="*80)
print("L =", L)
E, C, M = run_simulation(L, L, betas)
Ms_Merrs.append(M)
plt.figure()
for M, L in zip(Ms_Merrs, Ls):
plt.errorbar(Ts, M[:, 0], yerr=M[:, 1], label="L={L:d}".format(L=L))
plt.legend()
plt.xlim(0, np.max(1./betas))
plt.xlabel("temperature $T$")
plt.ylabel("staggered magnetization $<|M_s|>$ per site")
```
# Honeycomb lattice
```
def site_honeycomb(x, y, u, Lx, Ly):
"""Defines a numbering of the sites, given positions x and y and u=0,1 within the unit cell"""
return y * Lx * 2 + x*2 + u
def init_SSE_honeycomb(Lx, Ly):
"""Initialize a starting configuration on a 2D square lattice."""
n_sites = Lx*Ly*2
# initialize spins randomly with numbers +1 or -1, but the average magnetization is 0
spins = 2*np.mod(np.random.permutation(n_sites), 2) - 1
op_string = -1 * np.ones(10, np.intp) # initialize with identities
bonds = []
for x0 in range(Lx):
for y0 in range(Ly):
sA = site_honeycomb(x0, y0, 0, Lx, Ly)
sB0 = site_honeycomb(x0, y0, 1, Lx, Ly)
bonds.append([sA, sB0])
sB1 = site_honeycomb(np.mod(x0+1, Lx), np.mod(y0-1, Ly), 1, Lx, Ly)
bonds.append([sA, sB1])
sB2 = site_honeycomb(x0, np.mod(y0-1, Ly), 1, Lx, Ly)
bonds.append([sA, sB2])
bonds = np.array(bonds, dtype=np.intp)
return spins, op_string, bonds
def get_staggering_honeycomb(Lx, Ly):
stag = np.zeros(Lx*Ly*2, np.intp)
for x in range(Lx):
for y in range(Ly):
stag[site_honeycomb(x, y, 0, Lx, Ly)] = +1
stag[site_honeycomb(x, y, 1, Lx, Ly)] = -1
return stag
def run_simulation_honeycomb(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10):
"""A full simulation: initialize, thermalize and measure for various betas."""
spins, op_string, bonds = init_SSE_honeycomb(Lx, Ly)
stag = get_staggering_honeycomb(Lx, Ly)
n_sites = len(spins)
n_bonds = len(bonds)
Es_Eerrs = []
Cs_Cerrs = []
Ms_Merrs = []
for beta in betas:
print("beta = {beta:.3f}".format(beta=beta), flush=True)
op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10)
Es = []
Cs = []
Ms = []
for _ in range(n_bins):
ns, ms = measure(spins, op_string, bonds, stag, beta, n_updates_measure)
# energy per site
n_mean = np.mean(ns)
E = (-n_mean/beta + 0.25*n_bonds) / n_sites
Es.append(E)
Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites
Cs.append(Cv)
Ms.append(np.mean(np.abs(ms))/n_sites)
E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins)
Es_Eerrs.append((E, Eerr))
C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins)
Cs_Cerrs.append((C, Cerr))
M, Merr = np.mean(Ms), np.std(Ms)/np.sqrt(n_bins)
Ms_Merrs.append((M, Merr))
return np.array(Es_Eerrs), np.array(Cs_Cerrs), np.array(Ms_Merrs)
# just to check: plot the generated lattice
L = 4
spins, op_string, bonds = init_SSE_honeycomb(L, L)
stag = get_staggering_honeycomb(L, L)
n_sites = len(spins)
n_bonds = len(bonds)
# use non-trivial unit-vectors
unit_vectors = np.array([[1, 0], [0.5, 0.5*np.sqrt(3)]])
dx = np.array([0., 0.5])
site_positions = np.zeros((n_sites, 2), np.float64)
for x in range(L):
for y in range(L):
pos = x* unit_vectors[0, :] + y*unit_vectors[1, :]
s0 = site_honeycomb(x, y, 0, L, L)
site_positions[s0, :] = pos
s1 = site_honeycomb(x, y, 1, L, L)
site_positions[s1, :] = pos + dx
# plot the sites and bonds
plt.figure()
for bond in bonds:
linestyle = '-'
s0, s1 = bond
if np.max(np.abs(site_positions[s0, :] - site_positions[s1, :])) > L/2:
linestyle = ':' # plot bonds from the periodic boundary conditions dotted
plt.plot(site_positions[bond, 0], site_positions[bond, 1], linestyle=linestyle, color='k')
plt.plot(site_positions[:, 0], site_positions[:, 1], marker='o', linestyle='')
plt.show()
Ls = [4, 8, 16]
result_honeycomb = []
for L in Ls:
print("="*80)
print("L =", L)
res = run_simulation_honeycomb(L, L, betas)
result_honeycomb.append(res)
fig, axes = plt.subplots(nrows=3, figsize=(10, 15), sharex=True)
for res, L in zip(result_honeycomb, Ls):
for data, ax in zip(res, axes):
ax.errorbar(Ts, data[:, 0], yerr=data[:, 1], label="L={L:d}".format(L=L))
for ax, ylabel in zip(axes, ["energy $E$", "specific heat $C_v$", "stag. magnetization $<|M_s|>$"]):
ax.legend()
ax.set_ylabel(ylabel)
axes[0].set_xlim(0, np.max(1./betas))
axes[-1].set_xlabel("temperature $T$")
```
# Register Client and Create Access Token Notebook
- Find detailed information about client registration and access tokens in this blog post: [Authentication to SAS Viya: a couple of approaches](https://blogs.sas.com/content/sgf/2021/09/24/authentication-to-sas-viya/)
- Use the client_id to create an access token you can use in the Jupyter environment or externally for API calls to SAS Viya.
- You must add the following info to the script: client_id, client_secret, baseURL, and consul_token
- Additional access token information is found at the end of this notebook.
### Run the cells below and follow the resulting instructions.
# Get register access token
```
import requests
import json
import os
import base64
# set/create variables
client_id=""
client_secret=""
baseURL = "" # sasserver.sas.com
consul_token = ""
# generate API call for register access token
url = f"https://{baseURL}/SASLogon/oauth/clients/consul?callback=false&serviceId={client_id}"
headers = {
'X-Consul-Token': consul_token
}
# process the results
response = requests.request("POST", url, headers=headers, verify=False)
register_access_token = json.loads(response.text)['access_token']
print(json.dumps(response.json(), indent=4, sort_keys=True))
```
# Register the client
```
# create API call payload data
payload='{"client_id": "' + client_id +'","client_secret": "'+ client_secret +'","scope": ["openid", "*"],"authorized_grant_types": ["authorization_code","refresh_token"],"redirect_uri": "urn:ietf:wg:oauth:2.0:oob","access_token_validity": "43199"}'
# generate API call for register access token
url = f"https://{baseURL}/SASLogon/oauth/clients"
headers = {
'Content-Type': 'application/json',
'Authorization': "Bearer " + register_access_token
}
# process the results
response = requests.request("POST", url, headers=headers, data=payload, verify=False)
print(json.dumps(response.json(), indent=4, sort_keys=True))
```
# Create access token
```
# create authorization url
codeURL = "https://" + baseURL + "/SASLogon/oauth/authorize?client_id=" + client_id + "&response_type=code"
# encode client string
client_string = client_id + ":" + client_secret
message_bytes = client_string.encode('ascii')
base64_bytes = base64.b64encode(message_bytes)
base64_message = base64_bytes.decode('ascii')
# prompt with instructions and entry for auth code
print(f"* Please visit the following site {codeURL}")
print("* If provided a login prompt, add your SAS login credentials")
print("* Once authenticated, you'll be redirected to an authorization screen; check all of the boxes that appear")
print("* This will result in a short string of numbers and letters such as `VAxVFVEnKr`; this is your authorization code; copy the code")
code = input("Please enter the authorization code you generated through the previous instructions, and then press Enter: ")
# generate API call for access token
url = f"https://{baseURL}/SASLogon/oauth/token?grant_type=authorization_code&code={code}"
headers = {
'Accept': 'application/json',
'Content-Type': 'application/x-www-form-urlencoded',
'Authorization': "Basic " + base64_message
}
# process the results
response = requests.request("GET", url, headers=headers, verify=False)
access_token = json.loads(response.text)['access_token']
print(json.dumps(response.json(), indent=4, sort_keys=True))
# Create access_token.txt file
directory = os.getcwd()
with open(directory + '/access_token.txt', 'w') as f:
f.write(access_token)
print('The access token was stored for you as ' + directory + '/access_token.txt')
```
## Notes on the access token
- The access token has a 12 hour time-to-live (ttl) by default.
- The authorization code is good for 30 minutes and is only good for a single use.
- You can generate a new authorization code by reusing the authorization url.
- The access_token is valid in this Notebook and is transferable to other notebooks and used for external API calls.
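To reuse the stored token from another notebook or script, read the file back and attach it as a Bearer header (a minimal sketch; the endpoint called afterwards is shown only as a hypothetical comment):

```
def auth_header(token_path='access_token.txt'):
    # build the Authorization header from the stored access token file
    with open(token_path) as f:
        token = f.read().strip()
    return {'Authorization': 'Bearer ' + token}

# hypothetical usage for an external API call:
# requests.get(f"https://{baseURL}/folders/folders", headers=auth_header(), verify=False)
```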
# Classifying Names with a Character-Level RNN
- Each character (e.g. a, b, ..., z) is represented as a one-hot vector for prediction
- The vector length for a single character equals the length of the alphabet (26).
- After training on thousands of surnames from 18 languages, the model predicts which language a name comes from based on its spelling
# Data Load
- The data/names directory contains 18 text files.
- Each file contains one name per line (in Roman letters).
- These need to be converted to ASCII.
```
# inspect the data
from io import open
import glob
import os
path = 'data/names/'
filenames = glob.glob(path + '*.txt')
print(filenames)
print('')
print(len(filenames))
import unicodedata
import string # imported to access all alphabet characters
print(string.hexdigits) # characters for hexadecimal numbers
print(string.punctuation) # punctuation and special symbols
print(string.whitespace) # whitespace characters
print(string.printable) # all printable characters and symbols
all_letters = string.ascii_letters + ' .,;' # alphabet (upper + lower) + space + .,;
all_letters
# Convert a Unicode string to plain ASCII
# Split a word into individual characters and convert each to ASCII
# Keep a character only if its category is not 'Mn' and it appears in all_letters
def Unicode_to_Ascii(s):
word = ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn' # Nonspacing Mark, diacritics used in certain languages
and c in all_letters) # only characters contained in all_letters
return word
print(Unicode_to_Ascii('Ślusàrski'))
# Read names from each file
def readline(filename):
lines = open(filename, encoding = 'utf-8').read().strip().split('\n')
names = [name for name in lines]
return names
lan_list = []
lan_name_list = {}
# Build a dictionary of names per language {lan : [name1, name2, ...]}
for filename in filenames:
lan = os.path.splitext(os.path.basename(filename))[0]
lan_list.append(lan)
name = readline(filename)
lan_name_list[lan] = name
lan_n = len(lan_name_list)
```
### Variable descriptions
- lan_list : list of languages
- lan_name_list : names per language
- lan_n : number of languages
# Converting Names to Tensors
- To represent a single character (e.g. a), a one-hot vector of size 1 x n_letters is used.
- a : Tensor[[1,0,0,0......,0]]
- b : Tensor[[0,1,0,0.......0]]
- z : Tensor[[0,0,0,0.......1]]
<br><br>
- To represent a whole word, these are stacked into a 3D tensor (len_of_word x 1 x n_letters)
```
import torch
# Return the index of a single letter
def Letter_to_Index(letter):
letter_index = all_letters.find(letter)
return letter_index
# Build a one-hot vector for each letter
def Letter_to_Tensor(letter):
letter_tensor = torch.zeros(1, len(all_letters))
letter_tensor[0][Letter_to_Index(letter)] = 1
return letter_tensor
def Name_to_Tensor(name):
tensor = torch.zeros(len(name), 1, len(all_letters))
for i, c in enumerate(name):
tensor[i][0][Letter_to_Index(c)] = 1
return tensor
print(Letter_to_Tensor('j'))
print(Name_to_Tensor('justin').size())
```
# Building the RNN
- The input and hidden layers are combined to form the output layer (i2o)
- The input and hidden layers are combined to form the next hidden layer (i2h)
- For i2o, a softmax function produces probabilities, and the loss against the target label is computed
- For i2h, the hidden state is passed on to the next step
# Process
1. Each letter of a name is fed into the network one character at a time.
2. The hidden state is carried over to the next letter; the output after the last letter is compared with the target language.
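The per-letter unrolling described above can be illustrated with a toy NumPy recurrence (shapes only; a hedged stand-in for the torch model defined below, with hypothetical weight matrices):

```
import numpy as np

rng = np.random.default_rng(0)
n_letters, n_hidden, n_classes = 57, 128, 18
Wih = rng.normal(size=(n_letters + n_hidden, n_hidden)) * 0.01   # i2h weights
Wio = rng.normal(size=(n_letters + n_hidden, n_classes)) * 0.01  # i2o weights

def forward_name(name_onehots):
    hidden = np.zeros(n_hidden)
    output = np.zeros(n_classes)
    for x in name_onehots:                      # one letter at a time
        combined = np.concatenate([x, hidden])  # cat(input, hidden)
        hidden = combined @ Wih                 # next hidden state
        output = combined @ Wio                 # scores for this step
    return output                               # scores after the last letter
```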
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim = 1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1) # concatenate along columns
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
hidden_n = 128
letter_n = len(all_letters)
lan_n = len(lan_name_list)
rnn = RNN(letter_n, hidden_n, lan_n)
with torch.no_grad():
input = Letter_to_Tensor('a')
hidden = torch.zeros(1, hidden_n)
output, hidden = rnn(input, hidden)
print(output) # the output is a score per language; higher values mean higher likelihood.
with torch.no_grad():
input = Name_to_Tensor('justin')
hidden = torch.zeros(1, hidden_n)
output, hidden = rnn(input[0], hidden)
print(output) # the output is a score per language; higher values mean higher likelihood.
```
# Preparation Before Training
A few helper functions are needed:
1. From the output, return the most likely language (the largest value)
2. A function that produces random training examples
```
def LanfromOutput(output):
top_val, top_i = output.topk(1) # find the top value and its index in the tensor (the argument selects how many)
lan_i = top_i[0].item()
return lan_list[lan_i], lan_i
LanfromOutput(output)
# We also need a quick way to get training examples (a name and its language).
import random
def RandomChoice(l):
lan_random = l[random.randint(0, len(l) - 1)]
return lan_random
def RandomTrainingExample():
lan_random = RandomChoice(lan_list) # pick a random language
name = RandomChoice(lan_name_list[lan_random]) # pick a random name from that language
lan_tensor = torch.tensor([lan_list.index(lan_random)], dtype = torch.long) # tensor of the language index
name_tensor = Name_to_Tensor(name) # name as a tensor
return lan_random, name, lan_tensor, name_tensor
for i in range(10):
lan, name, lan_tensor, name_tensor = RandomTrainingExample()
print('Language : %s / name = %s' %(lan, name))
```
# Training Process
1. Create the input and target tensors
2. Create a zero-initialized hidden layer
3. Read each character, keeping the hidden state for the next character
4. Compare the output with the target and compute the loss
5. Backpropagate the loss
6. Return the output and the loss
```
# The original ran without an optimizer; here we use one
import torch.optim as optim
optimizer = optim.SGD(rnn.parameters(), lr = 0.005)
loss_function = nn.NLLLoss()
learning_rate = 0.005
def train(lan_tensor, name_tensor):
hidden = rnn.initHidden()
optimizer.zero_grad()
for i in range(name_tensor.size()[0]):
output, hidden = rnn(name_tensor[i], hidden)
loss = loss_function(output, lan_tensor)
loss.backward()
optimizer.step()
#for p in rnn.parameters():
# p.data.add_(-learning_rate, p.grad.data)
return output, loss.item()
import time
import math
iter_n = 100000
loss_avg = 0
loss_list = []
def timeSince(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
for i in range(1, iter_n + 1):
lan, name, lan_tensor, name_tensor = RandomTrainingExample()
output, loss = train(lan_tensor, name_tensor)
loss_avg += loss
if i % 5000 == 0:
guess, guess_i = LanfromOutput(output)
correct = '✓' if guess == lan else '✗ (%s)' % lan
print('%d %d%% (%s) %.4f %s / %s %s' % (i, i / iter_n * 100, timeSince(start), loss, name, guess, correct))
if i % 1000 == 0:
loss_list.append(loss_avg / 1000)
loss_avg = 0
```
# Visualizing the Loss
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(loss_list)
```
# Retrying in My Own Way
- The approach in the documentation is overly complicated.
- Attempting a simpler implementation!
- Most of it is actually similar, but the details differ
# Loading the Data
```
import glob
from io import open
path = 'data/names/'
filenames = glob.glob(path + '*.txt')
print(filenames)
```
# Building a Dictionary of Names per Language
- {language1 : [name1, name2...], language2 : [name1, name2...]}
```
import os
from io import open
lan_name_dict = {}
lan_list = []
for filename in filenames:
names = open(filename, encoding = 'utf-8').read().strip().split('\n')
lan = os.path.splitext(os.path.basename(filename))[0]
lan_list.append(lan)
lan_name_dict[lan] = names
lan_list_n = len(lan_list)
lan_name_dict_n = len(lan_name_dict)
names_n = []
for lan in lan_list:
name_n = len(lan_name_dict[lan])
names_n.append(name_n)
print('Number of languages : %d' %lan_list_n)
print('Number of names : %d' %sum(names_n))
```
# Converting Names from Unicode to ASCII
```
import unicodedata
import string
letters = string.ascii_letters + " .,;'"
letters_n = len(letters)
def Unicode_to_Ascii(s):
w = ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in letters)
return w
for lan, names in lan_name_dict.items():
converted_names = [Unicode_to_Ascii(name) for name in names]
lan_name_dict[lan] = converted_names
lan_name_dict['Italian'][:5]
```
# Converting Names to Tensors
## Two approaches
1. Build a letter_to_vector dictionary and use lookups.
2. Build the vector for a given letter directly with a function.
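Approach 2 — building the vector directly from a function instead of a lookup table — might look like this (a sketch over a toy alphabet; the real code below uses the full `letters` string):

```
import numpy as np

toy_letters = 'abcdefghijklmnopqrstuvwxyz'  # toy alphabet for illustration

def letter_to_vec_direct(letter):
    # build the one-hot vector on the fly instead of from a dictionary
    vec = np.zeros((1, len(toy_letters)))
    vec[0][toy_letters.index(letter)] = 1
    return vec
```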
```
# 1. the lookup approach
import torch
letter_to_vec = {}
for i, l in enumerate(letters):
vec = torch.zeros(1, letters_n)
vec[0][i] = 1
letter_to_vec[l] = vec
def Name_to_Vec(name):
vec = torch.zeros(len(name), 1, letters_n)
for i, l in enumerate(name):
vec[i][0] = letter_to_vec[l]
return vec
print(letter_to_vec['J'])
print(Name_to_Vec('Jones').size())
```
# RNN Model
- a combined layer joining input and hidden
- an i2o layer mapping input to output
- an i2h layer mapping input to hidden
- a softmax layer applying softmax to the output
```
import torch
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.softmax = nn.LogSoftmax(dim = 1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
hidden_n = 128
rnn = RNN(letters_n, hidden_n, lan_list_n)
# example with a single letter
with torch.no_grad():
input_example = letter_to_vec['a']
hidden_example = torch.zeros(1, hidden_n)
output, next_hidden = rnn(input_example, hidden_example)
print(output)
# example with a single name
with torch.no_grad():
input_example = Name_to_Vec('Justin')
hidden_example = torch.zeros(1, hidden_n)
output, next_hidden = rnn(input_example[0], hidden_example)
print(output)
```
# Generating Random Input Data
- Randomly generate names to train on:
1. Pick a random language
2. Pick a random name from that language
- Output = language, name, language tensor, name tensor
```
import random
def RandomPick():
lan_random_index = random.randint(0, lan_list_n - 1)
lan_random = lan_list[lan_random_index]
name_random_index = random.randint(0, len(lan_name_dict[lan_random]) - 1)
name_random = lan_name_dict[lan_random][name_random_index]
lan_tensor = torch.tensor([lan_list.index(lan_random)], dtype = torch.long)
name_tensor = Name_to_Vec(name_random)
return lan_random, name_random, lan_tensor, name_tensor
for i in range(10):
lan, name, lan_tensor, name_tensor = RandomPick()
print('Language : %s / Name : %s' %(lan, name))
```
# Training
```
import torch.optim as optim
loss_function = nn.NLLLoss()
optimizer = optim.SGD(rnn.parameters(), lr = 0.01)
iter_n = 10000
loss_current = 0
loss_list = []
for i in range(1, iter_n):
lan, name, lan_tensor, name_tensor = RandomPick()
input = name_tensor
target = lan_tensor
hidden = rnn.initHidden()
optimizer.zero_grad()
for j in range(name_tensor.size()[0]):
output, hidden = rnn(name_tensor[j], hidden)
loss = loss_function(output, lan_tensor)
loss.backward()
optimizer.step()
loss_current += loss.item()  # .item() avoids retaining the autograd graph across iterations
if i % 500 == 0:
print(loss_current / 50)
if i % 50 == 0:
loss_current = loss_current / 50
loss_list.append(loss_current)
loss_current = 0
```
# Visualizing the loss
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(loss_list)
```
# New things learned
1. What unicodedata.normalize('NFD' or 'NFC', word) does
- NFD (Normalization Form D): decomposes a character into its base and combining parts
- NFC (Normalization Form C): composes those parts back into a single character
- That is, for characters that carry combining marks (an A with an accent above it rather than a plain A), these encodings either split the two parts apart or join them back together
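The NFD/NFC behaviour described above can be illustrated directly with an accented character:

```
import unicodedata

# 'é' as one precomposed code point vs. 'e' + combining acute accent
composed = "\u00e9"      # é, a single code point
decomposed = "e\u0301"   # 'e' followed by COMBINING ACUTE ACCENT
assert unicodedata.normalize("NFD", composed) == decomposed  # NFD splits it apart
assert unicodedata.normalize("NFC", decomposed) == composed  # NFC joins it back
```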
# Sklearn
# Data visualization
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as sts
import seaborn as sns
from contextlib import contextmanager
sns.set()
sns.set_style("whitegrid")
color_palette = sns.color_palette('deep') + sns.color_palette('husl', 6) + sns.color_palette('bright') + sns.color_palette('pastel')
%matplotlib inline
sns.palplot(color_palette)
def ndprint(a, precision=3):
with np.printoptions(precision=precision, suppress=True):
print(a)
from sklearn import datasets, metrics, model_selection as mdsel
```
### Loading the dataset
```
digits = datasets.load_digits()
print(digits.DESCR)
print('target:', digits.target[0])
print('features: \n', digits.data[0])
print('number of features:', len(digits.data[0]))
```
## Visualizing dataset objects
```
# won't work: Invalid dimensions for image data
plt.imshow(digits.data[0])
digits.data[0].shape
digits.data[0].reshape(8,8)
digits.data[0].reshape(8,8).shape
plt.imshow(digits.data[0].reshape(8,8))
digits.keys()
digits.images[0]
plt.imshow(digits.images[0])
plt.figure(figsize=(8, 8))
plt.subplot(2, 2, 1)
plt.imshow(digits.images[0])
plt.subplot(2, 2, 2)
plt.imshow(digits.images[0], cmap='hot')
plt.subplot(2, 2, 3)
plt.imshow(digits.images[0], cmap='gray')
plt.subplot(2, 2, 4)
plt.imshow(digits.images[0], cmap='gray', interpolation='sinc')
plt.figure(figsize=(20, 8))
for plot_number, plot in enumerate(digits.images[:10]):
plt.subplot(2, 5, plot_number + 1)
plt.imshow(plot, cmap = 'gray')
plt.title('digit: ' + str(digits.target[plot_number]))
```
## Dimensionality reduction
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from collections import Counter
data = digits.data[:1000]
labels = digits.target[:1000]
print(Counter(labels))
plt.figure(figsize = (10, 6))
plt.bar(Counter(labels).keys(), Counter(labels).values())
classifier = KNeighborsClassifier()
classifier.fit(data, labels)
print(classification_report(labels, classifier.predict(data)))  # y_true first, then predictions
```
### Random projection
```
from sklearn import random_projection
projection = random_projection.SparseRandomProjection(n_components = 2, random_state = 0)
data_2d_rp = projection.fit_transform(data)
plt.figure(figsize=(10, 6))
plt.scatter(data_2d_rp[:, 0], data_2d_rp[:, 1], c = labels)
classifier.fit(data_2d_rp, labels)
print(classification_report(labels, classifier.predict(data_2d_rp)))
```
### PCA
```
from sklearn.decomposition import PCA
pca = PCA(n_components = 2, random_state = 0, svd_solver='randomized')
data_2d_pca = pca.fit_transform(data)
plt.figure(figsize = (10, 6))
plt.scatter(data_2d_pca[:, 0], data_2d_pca[:, 1], c = labels)
classifier.fit(data_2d_pca, labels)
print(classification_report(labels, classifier.predict(data_2d_pca)))
```
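On a toy example you can sanity-check that `fit_transform` behaves as expected: for near-collinear data the first principal component should capture almost all of the variance. A minimal sketch with synthetic data:

```
import numpy as np
from sklearn.decomposition import PCA

# Points almost on a line in 3-D: one component explains ~all the variance
rng = np.random.RandomState(0)
t = rng.rand(50, 1)
X = np.hstack([t, 2 * t, 3 * t]) + 1e-3 * rng.randn(50, 3)
pca = PCA(n_components=2, random_state=0)
X2 = pca.fit_transform(X)
assert X2.shape == (50, 2)
assert pca.explained_variance_ratio_[0] > 0.99
```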
### MDS
```
from sklearn import manifold
mds = manifold.MDS(n_components = 2, n_init = 1, max_iter = 100)
data_2d_mds = mds.fit_transform(data)
plt.figure(figsize=(10, 6))
plt.scatter(data_2d_mds[:, 0], data_2d_mds[:, 1], c = labels)
classifier.fit(data_2d_mds, labels)
print(classification_report(labels, classifier.predict(data_2d_mds)))
```
### t-SNE
```
tsne = manifold.TSNE(n_components = 2, init = 'pca', random_state = 0)
data_2d_tsne = tsne.fit_transform(data)
plt.figure(figsize = (10, 6))
plt.scatter(data_2d_tsne[:, 0], data_2d_tsne[:, 1], c = labels)
classifier.fit(data_2d_tsne, labels)
print(classification_report(labels, classifier.predict(data_2d_tsne)))
```
```
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import Pdb; pdb = Pdb()
def get_down_centre_last_low(p_list):
zn_num = len(p_list) - 1
available_num = min(9, (zn_num - 6))
index = len(p_list) - 4
for i in range(0, available_num // 2):
if p_list[index - 2] < p_list[index]:
index = index -2
else:
return index
return index + 2
def get_down_centre_first_high(p_list):
s = max(enumerate(p_list[3:]), key=lambda x: x[1])[0]
return s + 3
def down_centre_expand_spliter(p_list):
lr0 = get_down_centre_last_low(p_list)
hl0 = get_down_centre_first_high(p_list[: lr0 - 2])
hr0 = lr0 -1
while hr0 < len(p_list) - 8:
if p_list[hr0] > p_list[hl0] and (len(p_list) - hr0) > 5:
hl0 = hr0
lr0 = (len(p_list) - 1 + hr0) // 2
if lr0 % 2 == 1:
lr0 = lr0 -1
# lr0 = hr0 + 3
break
hr0 = hr0 + 2
return [0, hl0, lr0, len(p_list) - 1], [p_list[0], p_list[hl0], p_list[lr0], p_list[-1]]
# y = [0, 100, 60, 130, 70, 120, 40, 90, 50, 140, 85, 105]
# y = [0, 100, 60, 110, 70, 72, 61, 143, 77, 91, 82, 100, 83, 124, 89, 99]
# y = [0, 100, 60, 110, 70, 115, 75, 120, 80, 125, 85, 130, 90, 135]
# y = [0, 100, 60, 110, 70, 78, 77, 121, 60, 93, 82, 141, 78, 134]
# y = [0, 110, 70, 100, 60, 100, 78, 90, 53, 109, 56, 141, 99, 106, 89, 99, 93, 141]
# x = list(range(0, len(y)))
# gg = [min(y[1], y[3])] * len(y)
# dd = [max(y[2], y[4])] * len(y)
# plt.figure(figsize=(len(y),4))
# plt.grid()
# plt.plot(x, y)
# plt.plot(x, gg, '--')
# plt.plot(x, dd, '--')
# sx, sy = down_centre_expand_spliter(y)
# plt.plot(sx, sy)
# plt.show()
# Centre Expand Prototype
%matplotlib inline
import matplotlib.pyplot as plt
y_base = [0, 100, 60, 130, 70, 120, 40, 90, 50, 140, 85, 105, 55, 80]
for i in range(10, len(y_base)):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
if i % 2 == 1:
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
# Random Centre Generator
%matplotlib inline
import random
import matplotlib.pyplot as plt
y_max = 150
y_min = 50
num_max = 18
def generate_next(y_list, direction):
if direction == 1:
y_list.append(random.randint(max(y_list[2], y_list[4], y_list[-1]) + 1, y_max))
elif direction == -1:
y_list.append(random.randint(y_min, min(y_list[1], y_list[3], y_list[-1]) - 1))
# y_base = [0, 100, 60, 110, 70]
y_base = [0, 110, 70, 100, 60]
# y_base = [0, 100, 60, 90, 70]
# y_base = [0, 90, 70, 100, 60]
direction = 1
for i in range(5, num_max):
generate_next(y_base, direction)
direction = 0 - direction
print(y_base)
for i in range(11, len(y_base), 2):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.title(y)
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
# Group 1
y_base = [0, 100, 60, 110, 70, 99, 66, 121, 91, 141, 57, 111, 69, 111]
# y_base = [0, 100, 60, 110, 70, 105, 58, 102, 74, 137, 87, 142, 55, 128]
# y_base = [0, 100, 60, 110, 70, 115, 75, 120, 80, 125, 85, 130, 90, 135]
# y_base = [0, 100, 60, 110, 70, 120, 80, 130, 90, 140, 50, 75]
# y_base = [0, 100, 60, 110, 70, 114, 52, 75, 54, 77, 65, 100, 66, 87, 70, 116]
# y_base = [0, 100, 60, 110, 70, 72, 61, 143, 77, 91, 82, 100, 83, 124, 89, 99, 89, 105]
# Group 2
# y_base = [0, 110, 70, 100, 60, 142, 51, 93, 78, 109, 60, 116, 50, 106]
# y_base = [0, 110, 70, 100, 60, 88, 70, 128, 82, 125, 72, 80, 63, 119]
# y_base = [0, 110, 70, 100, 60, 74, 66, 86, 57, 143, 50, 95, 70, 91]
# y_base = [0, 110, 70, 100, 60, 77, 73, 122, 96, 116, 82, 124, 69, 129]
# y_base = [0, 110, 70, 100, 60, 147, 53, 120, 77, 103, 56, 76, 74, 92]
# y_base = [0, 110, 70, 100, 60, 95, 55, 90, 50, 85, 45, 80, 40, 75]
# y_base = [0, 110, 70, 100, 60, 100, 78, 90, 53, 109, 56, 141, 99, 106, 89, 99, 93, 141]
# Group 3
# y_base = [0, 100, 60, 90, 70, 107, 55, 123, 79, 112, 64, 85, 74, 110]
# y_base = [0, 100, 60, 90, 70, 77, 55, 107, 76, 141, 87, 91, 60, 83]
# y_base = [0, 100, 60, 90, 70, 114, 67, 93, 58, 134, 53, 138, 64, 107]
# y_base = [0, 100, 60, 90, 70, 77, 66, 84, 79, 108, 87, 107, 72, 89]
# y_base = [0, 100, 60, 90, 70, 88, 72, 86, 74, 84, 76, 82, 74, 80]
# Group 4
# y_base = [0, 90, 70, 100, 60, 131, 57, 144, 85, 109, 82, 124, 87, 101]
# y_base = [0, 90, 70, 100, 60, 150, 56, 112, 63, 95, 84, 118, 58, 110]
# y_base = [0, 90, 70, 100, 60, 145, 64, 112, 69, 86, 71, 119, 54, 95]
# y_base = [0, 90, 70, 100, 60, 105, 55, 110, 50, 115, 45, 120, 40, 125]
for i in range(11, len(y_base), 2):
y = y_base[:(i + 1)]
x = list(range(0, len(y)))
gg = [min(y[1], y[3])] * len(y)
dd = [max(y[2], y[4])] * len(y)
plt.figure(figsize=(i,4))
plt.title(y)
plt.grid()
plt.plot(x, y)
plt.plot(x, gg, '--')
plt.plot(x, dd, '--')
sx, sy = down_centre_expand_spliter(y)
plt.plot(sx, sy)
plt.show()
```
#Create the environment
```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/ESoWC
import pandas as pd
import xarray as xr
import numpy as np
import pandas as pd
from sklearn import preprocessing
import seaborn as sns
#Our class
from create_dataset.make_dataset import CustomDataset
fn_land = 'Data/land_cover_data.nc'
fn_weather = 'Data/05_2019_weather_and_CO_for_model.nc'
fn_conc = 'Data/totalcolConcentretations_featured.nc'
fn_traffic = 'Data/emissions_traffic_hourly_merged.nc'
```
#Load datasets
##Land
```
land_instance = CustomDataset(fn_land)
land_instance.get_dataset()
land_instance.resample("1H")
land_fixed = land_instance.get_dataset()
land_fixed = land_fixed.drop_vars('NO emissions') #They are already in the weather dataset
land_fixed = land_fixed.transpose('latitude','longitude','time')
land_fixed
```
##Weather
```
weather = xr.open_dataset(fn_weather)
weather
# This variable is too strongly correlated with tcw
weather_fixed = weather.drop_vars('tcwv')
weather_fixed = weather_fixed.transpose('latitude','longitude','time')
weather_fixed
```
##Conc
```
conc_fidex = xr.open_dataset(fn_conc)
conc_fidex
```
##Traffic
```
traffic_instance = CustomDataset(fn_traffic)
traffic_ds= traffic_instance.get_dataset()
traffic_ds
traffic_ds=traffic_ds.drop_vars('emissions')
lat_bins = np.arange(43,51.25,0.25)
lon_bins = np.arange(4,12.25,0.25)
traffic_ds = traffic_ds.sortby(['latitude','longitude','hour'])
traffic_ds = traffic_ds.interp(latitude=lat_bins, longitude=lon_bins, method="linear")
days = np.arange(1,32,1)
traffic_ds=traffic_ds.expand_dims({'Days':days})
traffic_ds
trafic_df = traffic_ds.to_dataframe()
trafic_df = trafic_df.reset_index()
trafic_df['time'] = (pd.to_datetime(trafic_df['Days']-1,errors='ignore',
unit='d',origin='2019-05') +
pd.to_timedelta(trafic_df['hour'], unit='h'))
trafic_df=trafic_df.drop(columns=['Days', 'hour'])
trafic_df = trafic_df.set_index(['latitude','longitude','time'])
trafic_df.head()
traffic_fixed = trafic_df.to_xarray()
traffic_fixed = traffic_fixed.transpose('latitude','longitude','time')
traffic_fixed
traffic_fixed.isel(time=[15]).traffic.plot()
```
#Merge
```
tot_dataset = weather_fixed.merge(land_fixed)
tot_dataset = tot_dataset.merge(conc_fidex)
tot_dataset = tot_dataset.merge(traffic_fixed)
tot_dataset
```
#Check
```
weather_fixed.to_dataframe().isnull().sum()
land_fixed.to_dataframe().isnull().sum()
conc_fidex.to_dataframe().isnull().sum()
traffic_fixed.to_dataframe().isnull().sum()
tot_dataset.to_dataframe().isnull().sum()
tot_dataset.isel(time=[12]).EMISSIONS_2019.plot()
tot_dataset.isel(time=[12]).u10.plot()
tot_dataset.isel(time=[15]).height.plot()
tot_dataset.isel(time=[12]).NO_tc.plot()
tot_dataset.isel(time=[15]).traffic.plot()
```
#Save the dataset
```
tot_dataset.to_netcdf('Data/05_2019_dataset_complete_for_model_CO.nc', 'w', 'NETCDF4')
```
## Amazon SageMaker Feature Store: Client-side Encryption using AWS Encryption SDK
This notebook demonstrates how client-side encryption with SageMaker Feature Store is done using the [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) to encrypt your data prior to ingesting it into your Online or Offline Feature Store. We first demonstrate how to encrypt your data using the AWS Encryption SDK library, and then show how to use [Amazon Athena](https://aws.amazon.com/athena/) to query for a subset of encrypted columns of features for model training.
Currently, Feature Store supports encryption at rest and encryption in transit. With this notebook, we showcase an additional layer of security where your data is encrypted and then stored in your Feature Store. This notebook also covers the scenario where you want to query a subset of encrypted data using Amazon Athena for model training. This becomes particularly useful when you want to store encrypted data sets in a single Feature Store, and want to perform model training using only a subset of encrypted columns, preserving privacy over the remaining columns.
If you are interested in server side encryption with Feature Store, see [Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html).
For more information on the AWS Encryption library, see [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html).
For detailed information about Feature Store, see the [Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html).
### Overview
1. Set up
2. Load in and encrypt your data using AWS Encryption library (`aws-encryption-sdk`)
3. Create Feature Group and ingest your encrypted data into it
4. Query your encrypted data in your feature store using Amazon Athena
5. Decrypt the data you queried
### Prerequisites
This notebook uses the Python SDK library for Feature Store, the AWS Encryption SDK library `aws-encryption-sdk`, and the `Python 3 (DataScience)` kernel. To use the `aws-encryption-sdk` library you will need an active KMS key that you created. If you do not have a KMS key, then you can create one by following the [KMS Policy Template](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html#KMS-Policy-Template) steps, or you can visit the [KMS section in the console](https://console.aws.amazon.com/kms/home) and follow the button prompts for creating a KMS key. This notebook works with SageMaker Studio, Jupyter, and JupyterLab.
### Library Dependencies:
* `sagemaker>=2.0.0`
* `numpy`
* `pandas`
* `aws-encryption-sdk`
### Data
This notebook uses a synthetic data set that has the following features: `customer_id`, `ssn` (social security number), `credit_score`, `age`, and aims to simulate a relaxed data set that has some important features that would be needed during the credit card approval process.
```
import sagemaker
import pandas as pd
import numpy as np
pip install -q 'aws-encryption-sdk'
```
### Set up
```
sagemaker_session = sagemaker.Session()
s3_bucket_name = sagemaker_session.default_bucket()
prefix = "sagemaker-featurestore-demo"
role = sagemaker.get_execution_role()
region = sagemaker_session.boto_region_name
```
Instantiate an encryption SDK client and provide your KMS ARN key to the `StrictAwsKmsMasterKeyProvider` object. This will be needed for data encryption and decryption by the AWS Encryption SDK library. You will need to substitute your KMS Key ARN for `kms_key`.
```
import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy
client = aws_encryption_sdk.EncryptionSDKClient(
commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)
kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
key_ids=[kms_key] ## Add your KMS key here
)
```
Load in your data.
```
credit_card_data = pd.read_csv("data/credit_card_approval_synthetic.csv")
credit_card_data.head()
credit_card_data.dtypes
```
### Client-Side Encryption Methods
Below are some methods that use the AWS Encryption SDK library for data encryption and decryption. Note that the encrypted values are of type bytes; we convert them to integers prior to storing them in Feature Store, and do the reverse prior to decrypting, because Feature Store doesn't support the bytes format directly.
```
def encrypt_data_frame(df, columns):
"""
Input:
df: A pandas Dataframe
columns: A list of column names.
Encrypt the provided columns in df. This method assumes that column names provided in columns exist in df,
and uses the AWS Encryption SDK library.
"""
for col in columns:
buffer = []
for entry in np.array(df[col]):
entry = str(entry)
encrypted_entry, encryptor_header = client.encrypt(
source=entry, key_provider=kms_key_provider
)
buffer.append(encrypted_entry)
df[col] = buffer
def decrypt_data_frame(df, columns):
"""
Input:
df: A pandas Dataframe
columns: A list of column names.
Decrypt the provided columns in df. This method assumes that column names provided in columns exist in df,
and uses the AWS Encryption SDK library.
"""
for col in columns:
buffer = []
for entry in np.array(df[col]):
decrypted_entry, decryptor_header = client.decrypt(
source=entry, key_provider=kms_key_provider
)
buffer.append(float(decrypted_entry))
df[col] = np.array(buffer)
def bytes_to_int(df, columns):
"""
Input:
df: A pandas Dataframe
columns: A list of column names.
Convert the provided columns in df of type bytes to integers. This method assumes that column names provided
in columns exist in df and that the columns passed in are of type bytes.
"""
for col in columns:
for index, entry in enumerate(np.array(df[col])):
df[col][index] = int.from_bytes(entry, "little")
def int_to_bytes(df, columns):
"""
Input:
df: A pandas Dataframe
columns: A list of column names.
Convert the provided columns in df of type integers to bytes. This method assumes that column names provided
in columns exist in df and that the columns passed in are of type integers.
"""
for col in columns:
buffer = []
for index, entry in enumerate(np.array(df[col])):
current = int(df[col][index])
current_bit_length = current.bit_length() + 1 # include the sign bit, 1
current_byte_length = (current_bit_length + 7) // 8
buffer.append(current.to_bytes(current_byte_length, "little"))
df[col] = pd.Series(buffer)
## Encrypt credit card data. Note that we treat `customer_id` as a primary key, and since its encryption is unique we can encrypt it.
encrypt_data_frame(credit_card_data, ["customer_id", "age", "SSN", "credit_score"])
credit_card_data
print(credit_card_data.dtypes)
## Cast encryption of type bytes to an integer so it can be stored in Feature Store.
bytes_to_int(credit_card_data, ["customer_id", "age", "SSN", "credit_score"])
print(credit_card_data.dtypes)
credit_card_data
def cast_object_to_string(data_frame):
"""
Input:
data_frame: A pandas Dataframe
Cast all columns of data_frame of type object to type string.
"""
for label in data_frame.columns:
if data_frame.dtypes[label] == object:
data_frame[label] = data_frame[label].astype("str").astype("string")
return data_frame
credit_card_data = cast_object_to_string(credit_card_data)
print(credit_card_data.dtypes)
credit_card_data
```
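The bytes↔int round trip that `bytes_to_int`/`int_to_bytes` implement can be illustrated standalone. The payload below is a stand-in for a ciphertext, not actual SDK output:

```
# Stand-in ciphertext; real values come from client.encrypt() above
payload = b"ciphertext-bytes"
as_int = int.from_bytes(payload, "little")    # what gets stored in Feature Store
n_bytes = (as_int.bit_length() + 7) // 8      # bytes needed to hold the value
assert as_int.to_bytes(n_bytes, "little") == payload  # round trip recovers the bytes
```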
### Create your Feature Group and Ingest your encrypted data into it
Below we start by appending the `EventTime` feature to your data to timestamp entries, then we load the feature definitions and instantiate the Feature Group object. Lastly, we ingest the data into your feature store.
```
from time import gmtime, strftime, sleep
credit_card_feature_group_name = "credit-card-feature-group-" + strftime("%d-%H-%M-%S", gmtime())
```
Instantiate a FeatureGroup object for `credit_card_data`.
```
from sagemaker.feature_store.feature_group import FeatureGroup
credit_card_feature_group = FeatureGroup(
name=credit_card_feature_group_name, sagemaker_session=sagemaker_session
)
import time
current_time_sec = int(round(time.time()))
## Recall that customer_id is encrypted and its encryption is unique, so it can be used as a record identifier.
record_identifier_feature_name = "customer_id"
```
Append the `EventTime` feature to your data frame. This parameter is required and timestamps each data point.
```
credit_card_data["EventTime"] = pd.Series(
[current_time_sec] * len(credit_card_data), dtype="float64"
)
credit_card_data.head()
print(credit_card_data.dtypes)
credit_card_feature_group.load_feature_definitions(data_frame=credit_card_data)
credit_card_feature_group.create(
s3_uri=f"s3://{s3_bucket_name}/{prefix}",
record_identifier_name=record_identifier_feature_name,
event_time_feature_name="EventTime",
role_arn=role,
enable_online_store=False,
)
time.sleep(60)
```
Ingest your data into your feature group.
```
credit_card_feature_group.ingest(data_frame=credit_card_data, max_workers=3, wait=True)
time.sleep(30)
```
Continually check your offline store until your data is available in it.
```
s3_client = sagemaker_session.boto_session.client("s3", region_name=region)
credit_card_feature_group_s3_uri = (
credit_card_feature_group.describe()
.get("OfflineStoreConfig")
.get("S3StorageConfig")
.get("ResolvedOutputS3Uri")
)
credit_card_feature_group_s3_prefix = credit_card_feature_group_s3_uri.replace(
f"s3://{s3_bucket_name}/", ""
)
offline_store_contents = None
while offline_store_contents is None:
objects_in_bucket = s3_client.list_objects(
Bucket=s3_bucket_name, Prefix=credit_card_feature_group_s3_prefix
)
if "Contents" in objects_in_bucket and len(objects_in_bucket["Contents"]) > 1:
offline_store_contents = objects_in_bucket["Contents"]
else:
print("Waiting for data in offline store...\n")
time.sleep(60)
print("Data available.")
```
### Use Amazon Athena to Query your Encrypted Data in your Feature Store
Using Amazon Athena, we query columns `customer_id`, `age`, and `credit_score` from your offline feature store where your encrypted data is.
```
credit_card_query = credit_card_feature_group.athena_query()
credit_card_table = credit_card_query.table_name
query_credit_card_table = 'SELECT customer_id, age, credit_score FROM "' + credit_card_table + '"'
print("Running " + query_credit_card_table)
# Run the Athena query
credit_card_query.run(
query_string=query_credit_card_table,
output_location="s3://" + s3_bucket_name + "/" + prefix + "/query_results/",
)
time.sleep(60)
credit_card_dataset = credit_card_query.as_dataframe()
print(credit_card_dataset.dtypes)
credit_card_dataset
int_to_bytes(credit_card_dataset, ["customer_id", "age", "credit_score"])
credit_card_dataset
decrypt_data_frame(credit_card_dataset, ["customer_id", "age", "credit_score"])
```
In this notebook, we queried a subset of encrypted features. From here you can now train a model on this new dataset while maintaining privacy over the other columns, e.g., `ssn`.
```
credit_card_dataset
```
### Clean Up Resources
Remove the Feature Group that was created.
```
credit_card_feature_group.delete()
```
### Next Steps
In this notebook we covered client-side encryption with Feature Store. If you are interested in understanding how server-side encryption is done with Feature Store, see [Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html).
For more information on the AWS Encryption library, see [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html).
For detailed information about Feature Store, see the [Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html).
# Strings
### **Splitting strings**
```
'a,b,c'.split(',')
latitude = '37.24N'
longitude = '-115.81W'
'Coordinates {0},{1}'.format(latitude,longitude)
f'Coordinates {latitude},{longitude}'
'{0},{1},{2}'.format(*('abc'))
coord = {"latitude":latitude,"longitude":longitude}
'Coordinates {latitude},{longitude}'.format(**coord)
```
### **Accessing an argument's attributes**
```
class Point:
def __init__(self,x,y):
self.x,self.y = x,y
def __str__(self):
return 'Point({self.x},{self.y})'.format(self = self)
def __repr__(self):
return f'Point({self.x},{self.y})'
test_point = Point(4,2)
test_point
str(Point(4,2))
```
### **Conversions with !r and !s** :
```
" repr() shows the quote {!r}, while str() doesn't:{!s} ".format('a1','a2')
```
### **Aligning the text with width** :
```
'{:<30}'.format('left aligned')
'{:>30}'.format('right aligned')
'{:^30}'.format('centerd')
'{:*^30}'.format('centerd')
```
### **Converting values to different bases (d, x, o, b)** :
```
"int:{0:d}, hex:{0:x}, oct:{0:o}, bin:{0:b}".format(42)
'{:,}'.format(12345677)
```
### **Percentages and dates** :
```
points = 19
total = 22
'Correct answers: {:.2%}'.format(points/total)
import datetime as dt
f"{dt.datetime.now():%Y-%m-%d}"
f"{dt.datetime.now():%d_%m_%Y}"
today = dt.datetime.today().strftime("%d_%m_%Y")
today
```
### **Splitting without parameters** :
```
"this is a test".split()
```
### **Concatenating and joining strings** :
```
'do'*2
orig_string ='Hello'
orig_string+',World'
full_sentence = orig_string+',World'
full_sentence
```
### **Concatenating with join(), other basic functions** :
```
strings = ['do','re','mi']
', '.join(strings)
'z' not in 'abc'
ord('a'), ord('#')
chr(97)
s = "foodbar"
s[2:5]
s[:4] + s[4:]
s[:4] + s[4:] == s
t=s[:]
id(s)
id(t)
s is t
s[0:6:2]
s[5:0:-2]
s = 'tomorrow is monday'
reverse_s = s[::-1]
reverse_s
s.capitalize()
s.upper()
s.title()
s.count('o')
"foobar".startswith('foo')
"foobar".endswith('ar')
"foobar".endswith('oob',0,4)
"foobar".endswith('oob',2,4)
"My name is yaozeliang, I work at Societe Generale".find('yao')
# If the string can't be found, find() returns -1
"My name is yaozeliang, I work at Societe Generale".find('gent')
# Check whether a string consists of alphanumeric characters
"abc123".isalnum()
"abc%123".isalnum()
"abcABC".isalpha()
"abcABC1".isalpha()
'123'.isdigit()
'123abc'.isdigit()
'abc'.islower()
"This Is A Title".istitle()
"This is a title".istitle()
'ABC'.isupper()
'ABC1%'.isupper()
'foo'.center(10)
' foo bar baz '.strip()
' foo bar baz '.lstrip()
' foo bar baz '.rstrip()
"foo abc foo def fo ljk ".replace('foo','yao')
'www.realpython.com'.strip('w.moc')
'www.realpython.com'.strip('w.com')
'www.realpython.com'.strip('w.ncom')
```
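`strip()` treats its argument as a *set* of characters rather than a substring, which explains the three results above:

```
# strip removes leading/trailing chars that are in the given set, stopping at
# the first char that isn't -- the order of characters in the argument is irrelevant
assert "www.realpython.com".strip("w.moc") == "realpython"
assert "www.realpython.com".strip("w.com") == "realpython"  # same character set
assert "www.realpython.com".strip("w.ncom") == "realpyth"   # trailing 'on' stripped too
```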
### **Convert between strings and lists** :
```
', '.join(['foo','bar','baz','qux'])
list('corge')
':'.join('corge')
'www.foo'.partition('.')
'foo@@bar@@baz'.partition('@@')
'foo@@bar@@baz'.rpartition('@@')
'foo.bar'.partition('@@')
# By default, rsplit splits a string on whitespace
'foo bar adf yao'.rsplit()
'foo.bar.adf.ert'.split('.')
'foo\nbar\nadfa\nlko'.splitlines()
```
# 9. Incorporating OD Veto Data
```
import sys
import os
import h5py
from collections import Counter
from progressbar import *
import re
import numpy as np
from scipy import signal
import matplotlib
from repeating_classifier_training_utils import *
from functools import reduce
# Add the path to the parent directory to augment search for module
par_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if par_dir not in sys.path:
sys.path.append(par_dir)
%load_ext autoreload
%matplotlib inline
%autoreload 2
veto_path = '/fast_scratch/WatChMaL/data/IWCDmPMT_4pm_full_tank_ODveto.h5'
odv_file = h5py.File(veto_path,'r')
odv_info = {}
for key in odv_file.keys():
odv_info[key] = np.array(odv_file[key])
odv_dict = {}
pbar = ProgressBar(widgets=['Creating Event-Index Dictionary: ', Percentage(), ' ', Bar(marker='0',left='[',right=']'),
' ', ETA()], maxval=len(odv_info['event_ids']))
pbar.start()
for i in range(len(odv_info['event_ids'])):
odv_dict[(odv_info['root_files'][i], odv_info['event_ids'][i])] = i
pbar.update(i)
pbar.finish()
```
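The lookup built above is just a plain dict keyed on `(root_file, event_id)` pairs; a miniature version with made-up values shows the idea:

```
# Miniature (root_file, event_id) -> row-index lookup; values are made up
root_files = ["a.root", "a.root", "b.root"]
event_ids = [0, 1, 0]
index_of = {(rf, ev): i for i, (rf, ev) in enumerate(zip(root_files, event_ids))}
assert index_of[("b.root", 0)] == 2  # O(1) lookup instead of scanning both arrays
```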
## Load test set
```
# Get original h5 file info
# Import test events from h5 file
filtered_index = "/fast_scratch/WatChMaL/data/IWCD_fulltank_300_pe_idxs.npz"
filtered_indices = np.load(filtered_index, allow_pickle=True)
test_filtered_indices = filtered_indices['test_idxs']
original_data_path = "/data/WatChMaL/data/IWCDmPMT_4pi_fulltank_9M.h5"
f = h5py.File(original_data_path, "r")
original_eventids = np.array(f['event_ids'])
original_rootfiles = np.array(f['root_files'])
filtered_eventids = original_eventids[test_filtered_indices]
filtered_rootfiles = original_rootfiles[test_filtered_indices]
odv_mapping_indices = np.zeros(len(filtered_rootfiles))
pbar = ProgressBar(widgets=['Mapping Progress: ', Percentage(), ' ', Bar(marker='0',left='[',right=']'),
' ', ETA()], maxval=len(filtered_rootfiles))
pbar.start()
for i in range(len(filtered_rootfiles)):
odv_mapping_indices[i] = odv_dict[(filtered_rootfiles[i], filtered_eventids[i])]
pbar.update(i)
pbar.finish()
odv_mapping_indices = np.int32(odv_mapping_indices)
pbar = ProgressBar(widgets=['Verification Progress: ', Percentage(), ' ', Bar(marker='0',left='[',right=']'),
' ', ETA()], maxval=len(filtered_rootfiles))
pbar.start()
for i in range(len(filtered_rootfiles)):
assert odv_info['root_files'][odv_mapping_indices[i]] == filtered_rootfiles[i]
assert odv_info['event_ids'][odv_mapping_indices[i]] == filtered_eventids[i]
pbar.update(i)
pbar.finish()
np.savez(os.path.join(os.getcwd(), 'Index_Storage/od_veto_mapping_idxs.npz'), mapping_idxs_full_set=odv_mapping_indices)
```
###### Name: Deepak Vadithala
###### Course: MSc Data Science
###### Project Name: MOOC Recommender System
##### Notes:
This notebook contains the analysis of **Google's Word2Vec** model, which is trained on news articles.
Two variables **(Role and Skill Scores)** are used to predict the course category.
The Skill Score is calculated from the similarity between skills from LinkedIn and the course descriptions and keywords from Coursera.
*Model Source Code Path: /mooc-recommender/Model/Cosine_Distance.py*
*Github Repo: https://github.com/iamdv/mooc-recommender*
```
# **************************** IMPORTANT ****************************
'''
This cell configuration settings for the Notebook.
You can run one role at a time to evaluate the performance of the model
Change the variable names to run for multiple roles
In this model:
1. Google word2vec model has two variables Roles and Skills with
50% weightage for each
'''
# *******************************************************************
# For each role a list of category names are grouped.
# Please don't change these variables
label_DataScientist = ['Data Science','Data Analysis','Data Mining','Data Visualization']
label_SoftwareDevelopment = ['Software Development','Computer Science',
'Programming Languages', 'Algorithms and Data Structures',
'Information Technology']
label_DatabaseAdministrator = ['Databases']
label_Cybersecurity = ['Cybersecurity']
label_FinancialAccountant = ['Finance', 'Accounting']
label_MachineLearning = ['Machine Learning', 'Deep Learning']
label_Musician = ['Music']
label_Dietitian = ['Nutrition & Wellness', 'Health & Medicine']
# *******************************************************************
# *******************************************************************
# Environment and Config Variables. Change these variables as per the requirement.
my_fpath_model = "../Data/Final_Model_Output.csv"
my_fpath_courses = "../Data/main_coursera.csv"
my_fpath_skills_DataScientist = "../Data/Word2Vec-Google/Word2VecGoogle_DataScientist.csv"
my_fpath_skills_SoftwareDevelopment = "../Data/Word2Vec-Google/Word2VecGoogle_SoftwareDevelopment.csv"
my_fpath_skills_DatabaseAdministrator = "../Data/Word2Vec-Google/Word2VecGoogle_DatabaseAdministrator.csv"
my_fpath_skills_Cybersecurity = "../Data/Word2Vec-Google/Word2VecGoogle_Cybersecurity.csv"
my_fpath_skills_FinancialAccountant = "../Data/Word2Vec-Google/Word2VecGoogle_FinancialAccountant.csv"
my_fpath_skills_MachineLearning = "../Data/Word2Vec-Google/Word2VecGoogle_MachineLearning.csv"
my_fpath_skills_Musician = "../Data/Word2Vec-Google/Word2VecGoogle_Musician.csv"
my_fpath_skills_Dietitian = "../Data/Word2Vec-Google/Word2VecGoogle_Dietitian.csv"
# *******************************************************************
# *******************************************************************
# Weighting Variables. Change them as per the requirement.
# Role score is not applicable for Google's Word2Vec model.
my_role_weight = 0.5
my_skill_weight = 0.5
my_threshold = 0.37
# *******************************************************************
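# A minimal sketch (hypothetical scores, not part of the model) of how the
# weights and threshold configured above map two similarity scores to a
# binary prediction, mirroring the classification-threshold idea this
# notebook evaluates.
def predict_course(role_score, skill_score,
                   role_weight=0.5, skill_weight=0.5, threshold=0.37):
    # Weighted combination of the two scores, then a hard cutoff.
    final_score = role_score * role_weight + skill_score * skill_weight
    return final_score >= threshold

# predict_course(0.4, 0.5) -> True  (0.45 >= 0.37)
# predict_course(0.1, 0.2) -> False (0.15 <  0.37)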
# Importing required modules/packages
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk, string
import csv
import json
# Downloading the stopwords like i, me, and, is, the etc.
nltk.download('stopwords')
# Loading courses and skills data from the CSV files
df_courses = pd.read_csv(my_fpath_courses)
df_DataScientist = pd.read_csv(my_fpath_skills_DataScientist)
df_DataScientist = df_DataScientist.drop('Role', 1)
df_DataScientist.columns = ['Course Id', 'DataScientist_Skill_Score', 'DataScientist_Role_Score', 'DataScientist_Keyword_Score']
df_SoftwareDevelopment = pd.read_csv(my_fpath_skills_SoftwareDevelopment)
df_SoftwareDevelopment = df_SoftwareDevelopment.drop('Role', 1)
df_SoftwareDevelopment.columns = ['Course Id','SoftwareDevelopment_Skill_Score', 'SoftwareDevelopment_Role_Score', 'SoftwareDevelopment_Keyword_Score']
df_DatabaseAdministrator = pd.read_csv(my_fpath_skills_DatabaseAdministrator)
df_DatabaseAdministrator = df_DatabaseAdministrator.drop('Role', 1)
df_DatabaseAdministrator.columns = ['Course Id','DatabaseAdministrator_Skill_Score', 'DatabaseAdministrator_Role_Score', 'DatabaseAdministrator_Keyword_Score']
df_Cybersecurity = pd.read_csv(my_fpath_skills_Cybersecurity)
df_Cybersecurity = df_Cybersecurity.drop('Role', 1)
df_Cybersecurity.columns = ['Course Id','Cybersecurity_Skill_Score', 'Cybersecurity_Role_Score', 'Cybersecurity_Keyword_Score']
df_FinancialAccountant = pd.read_csv(my_fpath_skills_FinancialAccountant)
df_FinancialAccountant = df_FinancialAccountant.drop('Role', 1)
df_FinancialAccountant.columns = ['Course Id','FinancialAccountant_Skill_Score', 'FinancialAccountant_Role_Score', 'FinancialAccountant_Keyword_Score']
df_MachineLearning = pd.read_csv(my_fpath_skills_MachineLearning)
df_MachineLearning = df_MachineLearning.drop('Role', 1)
df_MachineLearning.columns = ['Course Id','MachineLearning_Skill_Score', 'MachineLearning_Role_Score', 'MachineLearning_Keyword_Score']
df_Musician = pd.read_csv(my_fpath_skills_Musician)
df_Musician = df_Musician.drop('Role', 1)
df_Musician.columns = ['Course Id','Musician_Skill_Score', 'Musician_Role_Score', 'Musician_Keyword_Score']
df_Dietitian = pd.read_csv(my_fpath_skills_Dietitian)
df_Dietitian = df_Dietitian.drop('Role', 1)
df_Dietitian.columns = ['Course Id','Dietitian_Skill_Score', 'Dietitian_Role_Score','Dietitian_Keyword_Score']
# Merging the csv files
df_cosdist = df_DataScientist.merge(df_SoftwareDevelopment, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_DatabaseAdministrator, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_Cybersecurity, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_FinancialAccountant, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_MachineLearning, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_Musician, on = 'Course Id', how = 'outer')
df_cosdist = df_cosdist.merge(df_Dietitian, on = 'Course Id', how = 'outer')
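# Toy illustration (made-up course ids, not project data) of the outer merges
# above: an outer join keeps every 'Course Id' seen in either frame, filling
# the other frame's score columns with NaN where that course is missing.
import pandas as pd  # already imported above; repeated so this cell is standalone
_toy_a = pd.DataFrame({'Course Id': [1, 2], 'A_Skill_Score': [0.9, 0.4]})
_toy_b = pd.DataFrame({'Course Id': [2, 3], 'B_Skill_Score': [0.7, 0.2]})
_toy_merged = _toy_a.merge(_toy_b, on='Course Id', how='outer')
# _toy_merged has 3 rows; course 1 has NaN B_Skill_Score, course 3 NaN A_Skill_Score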
# Exploring data dimensionality, feature names, and feature types.
print(df_courses.shape,"\n")
print(df_cosdist.shape,"\n")
print(df_courses.columns, "\n")
print(df_cosdist.columns, "\n")
print(df_courses.describe(), "\n")
print(df_cosdist.describe(), "\n")
# Quick check to see if the dataframe showing the right results
df_cosdist.head(20)
# Joining two dataframes - the courses and the cosine-similarity results - on the 'Course Id' variable.
# An inner join keeps only the rows whose 'Course Id' appears in both tables.
df_courses_score = df_courses.merge(df_cosdist, on ='Course Id', how='inner')
print(df_courses_score.shape,"\n")
# Transforming and shaping the data to create the confusion matrix for the ROLE: DATA SCIENTIST
y_actu_DataScientist = ''
y_pred_DataScientist = ''
df_courses_score['DataScientist_Final_Score'] = (df_courses_score['DataScientist_Role_Score'] * my_role_weight) + (df_courses_score['DataScientist_Skill_Score'] * my_skill_weight)
df_courses_score['DataScientist_Predict'] = (df_courses_score['DataScientist_Final_Score'] >= my_threshold)
df_courses_score['DataScientist_Label'] = df_courses_score.Category.isin(label_DataScientist)
y_pred_DataScientist = pd.Series(df_courses_score['DataScientist_Predict'], name='Predicted')
y_actu_DataScientist = pd.Series(df_courses_score['DataScientist_Label'], name='Actual')
df_confusion_DataScientist = pd.crosstab(y_actu_DataScientist, y_pred_DataScientist , rownames=['Actual'], colnames=['Predicted'], margins=False)
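# Standalone sketch of what pd.crosstab builds above: actual labels as rows,
# predictions as columns, so cell [True, True] counts true positives. The
# series values here are toy data, not model output.
import pandas as pd  # already imported above; repeated so this cell is standalone
_actual = pd.Series([True, True, False, False, False], name='Actual')
_pred = pd.Series([True, False, True, False, False], name='Predicted')
_cm = pd.crosstab(_actual, _pred)
# _cm.loc[True, True] == 1 (TP), _cm.loc[True, False] == 1 (FN),
# _cm.loc[False, True] == 1 (FP), _cm.loc[False, False] == 2 (TN)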
# Transforming and shaping the data to create the confusion matrix for the ROLE: SOFTWARE ENGINEER/DEVELOPER
y_actu_SoftwareDevelopment = ''
y_pred_SoftwareDevelopment = ''
df_courses_score['SoftwareDevelopment_Final_Score'] = (df_courses_score['SoftwareDevelopment_Role_Score'] * my_role_weight) + (df_courses_score['SoftwareDevelopment_Skill_Score'] * my_skill_weight)
df_courses_score['SoftwareDevelopment_Predict'] = (df_courses_score['SoftwareDevelopment_Final_Score'] >= my_threshold)
df_courses_score['SoftwareDevelopment_Label'] = df_courses_score.Category.isin(label_SoftwareDevelopment)
y_pred_SoftwareDevelopment = pd.Series(df_courses_score['SoftwareDevelopment_Predict'], name='Predicted')
y_actu_SoftwareDevelopment = pd.Series(df_courses_score['SoftwareDevelopment_Label'], name='Actual')
df_confusion_SoftwareDevelopment = pd.crosstab(y_actu_SoftwareDevelopment, y_pred_SoftwareDevelopment , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: DATABASE DEVELOPER/ADMINISTRATOR
y_actu_DatabaseAdministrator = ''
y_pred_DatabaseAdministrator = ''
df_courses_score['DatabaseAdministrator_Final_Score'] = (df_courses_score['DatabaseAdministrator_Role_Score'] * my_role_weight) + (df_courses_score['DatabaseAdministrator_Skill_Score'] * my_skill_weight)
df_courses_score['DatabaseAdministrator_Predict'] = (df_courses_score['DatabaseAdministrator_Final_Score'] >= my_threshold)
df_courses_score['DatabaseAdministrator_Label'] = df_courses_score.Category.isin(label_DatabaseAdministrator)
y_pred_DatabaseAdministrator = pd.Series(df_courses_score['DatabaseAdministrator_Predict'], name='Predicted')
y_actu_DatabaseAdministrator = pd.Series(df_courses_score['DatabaseAdministrator_Label'], name='Actual')
df_confusion_DatabaseAdministrator = pd.crosstab(y_actu_DatabaseAdministrator, y_pred_DatabaseAdministrator , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: CYBERSECURITY CONSULTANT
y_actu_Cybersecurity = ''
y_pred_Cybersecurity = ''
df_courses_score['Cybersecurity_Final_Score'] = (df_courses_score['Cybersecurity_Role_Score'] * my_role_weight) + (df_courses_score['Cybersecurity_Skill_Score'] * my_skill_weight)
df_courses_score['Cybersecurity_Predict'] = (df_courses_score['Cybersecurity_Final_Score'] >= my_threshold)
df_courses_score['Cybersecurity_Label'] = df_courses_score.Category.isin(label_Cybersecurity)
y_pred_Cybersecurity = pd.Series(df_courses_score['Cybersecurity_Predict'], name='Predicted')
y_actu_Cybersecurity = pd.Series(df_courses_score['Cybersecurity_Label'], name='Actual')
df_confusion_Cybersecurity = pd.crosstab(y_actu_Cybersecurity, y_pred_Cybersecurity , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: FINANCIAL ACCOUNTANT
y_actu_FinancialAccountant = ''
y_pred_FinancialAccountant = ''
df_courses_score['FinancialAccountant_Final_Score'] = (df_courses_score['FinancialAccountant_Role_Score'] * my_role_weight) + (df_courses_score['FinancialAccountant_Skill_Score'] * my_skill_weight)
df_courses_score['FinancialAccountant_Predict'] = (df_courses_score['FinancialAccountant_Final_Score'] >= my_threshold)
df_courses_score['FinancialAccountant_Label'] = df_courses_score.Category.isin(label_FinancialAccountant)
y_pred_FinancialAccountant = pd.Series(df_courses_score['FinancialAccountant_Predict'], name='Predicted')
y_actu_FinancialAccountant = pd.Series(df_courses_score['FinancialAccountant_Label'], name='Actual')
df_confusion_FinancialAccountant = pd.crosstab(y_actu_FinancialAccountant, y_pred_FinancialAccountant , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: MACHINE LEARNING ENGINEER
y_actu_MachineLearning = ''
y_pred_MachineLearning = ''
df_courses_score['MachineLearning_Final_Score'] = (df_courses_score['MachineLearning_Role_Score'] * my_role_weight) + (df_courses_score['MachineLearning_Skill_Score'] * my_skill_weight)
df_courses_score['MachineLearning_Predict'] = (df_courses_score['MachineLearning_Final_Score'] >= my_threshold)
df_courses_score['MachineLearning_Label'] = df_courses_score.Category.isin(label_MachineLearning)
y_pred_MachineLearning = pd.Series(df_courses_score['MachineLearning_Predict'], name='Predicted')
y_actu_MachineLearning = pd.Series(df_courses_score['MachineLearning_Label'], name='Actual')
df_confusion_MachineLearning = pd.crosstab(y_actu_MachineLearning, y_pred_MachineLearning , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: MUSICIAN
y_actu_Musician = ''
y_pred_Musician = ''
df_courses_score['Musician_Final_Score'] = (df_courses_score['Musician_Role_Score'] * my_role_weight) + (df_courses_score['Musician_Skill_Score'] * my_skill_weight)
df_courses_score['Musician_Predict'] = (df_courses_score['Musician_Final_Score'] >= my_threshold)
df_courses_score['Musician_Label'] = df_courses_score.Category.isin(label_Musician)
y_pred_Musician = pd.Series(df_courses_score['Musician_Predict'], name='Predicted')
y_actu_Musician = pd.Series(df_courses_score['Musician_Label'], name='Actual')
df_confusion_Musician = pd.crosstab(y_actu_Musician, y_pred_Musician , rownames=['Actual'], colnames=['Predicted'], margins=False)
# Transforming and shaping the data to create the confusion matrix for the ROLE: NUTRITIONIST/DIETITIAN
y_actu_Dietitian = ''
y_pred_Dietitian = ''
df_courses_score['Dietitian_Final_Score'] = (df_courses_score['Dietitian_Role_Score'] * my_role_weight) + (df_courses_score['Dietitian_Skill_Score'] * my_skill_weight)
df_courses_score['Dietitian_Predict'] = (df_courses_score['Dietitian_Final_Score'] >= my_threshold)
df_courses_score['Dietitian_Label'] = df_courses_score.Category.isin(label_Dietitian)
y_pred_Dietitian = pd.Series(df_courses_score['Dietitian_Predict'], name='Predicted')
y_actu_Dietitian = pd.Series(df_courses_score['Dietitian_Label'], name='Actual')
df_confusion_Dietitian = pd.crosstab(y_actu_Dietitian, y_pred_Dietitian , rownames=['Actual'], colnames=['Predicted'], margins=False)
df_confusion_DataScientist
df_confusion_SoftwareDevelopment
df_confusion_DatabaseAdministrator
df_confusion_Cybersecurity
df_confusion_FinancialAccountant
df_confusion_MachineLearning
df_confusion_Musician
df_confusion_Dietitian
# Performance summary for the ROLE: DATA SCIENTIST
# A cell can be missing from the crosstab when a class never occurs, so catch
# only the lookup errors rather than using a bare except.
try:
    tn_DataScientist = df_confusion_DataScientist.iloc[0][False]
except (KeyError, IndexError):
    tn_DataScientist = 0
try:
    tp_DataScientist = df_confusion_DataScientist.iloc[1][True]
except (KeyError, IndexError):
    tp_DataScientist = 0
try:
    fn_DataScientist = df_confusion_DataScientist.iloc[1][False]
except (KeyError, IndexError):
    fn_DataScientist = 0
try:
    fp_DataScientist = df_confusion_DataScientist.iloc[0][True]
except (KeyError, IndexError):
    fp_DataScientist = 0
total_count_DataScientist = tn_DataScientist + tp_DataScientist + fn_DataScientist + fp_DataScientist
print('Data Scientist Accuracy Rate : ', '{0:.2f}'.format((tn_DataScientist + tp_DataScientist) / total_count_DataScientist * 100))
print('Data Scientist Misclassification Rate : ', '{0:.2f}'.format((fn_DataScientist + fp_DataScientist) / total_count_DataScientist * 100))
print('Data Scientist True Positive Rate : ', '{0:.2f}'.format(tp_DataScientist / (tp_DataScientist + fn_DataScientist) * 100))
print('Data Scientist False Positive Rate : ', '{0:.2f}'.format(fp_DataScientist / (tn_DataScientist + fp_DataScientist) * 100))
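# Generic helper (an illustrative alternative to the repeated per-role blocks
# in this notebook, not code the model requires) computing the same rates,
# plus precision and recall, from the four confusion-matrix counts.
def classification_rates(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        'accuracy': (tp + tn) / total,
        'misclassification': (fp + fn) / total,
        'tpr': tp / (tp + fn) if (tp + fn) else 0.0,        # recall
        'fpr': fp / (tn + fp) if (tn + fp) else 0.0,
        'precision': tp / (tp + fp) if (tp + fp) else 0.0,
    }

# classification_rates(50, 30, 10, 10)['accuracy'] -> 0.8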
# Performance summary for the ROLE: SOFTWARE ENGINEER
try:
    tn_SoftwareDevelopment = df_confusion_SoftwareDevelopment.iloc[0][False]
except (KeyError, IndexError):
    tn_SoftwareDevelopment = 0
try:
    tp_SoftwareDevelopment = df_confusion_SoftwareDevelopment.iloc[1][True]
except (KeyError, IndexError):
    tp_SoftwareDevelopment = 0
try:
    fn_SoftwareDevelopment = df_confusion_SoftwareDevelopment.iloc[1][False]
except (KeyError, IndexError):
    fn_SoftwareDevelopment = 0
try:
    fp_SoftwareDevelopment = df_confusion_SoftwareDevelopment.iloc[0][True]
except (KeyError, IndexError):
    fp_SoftwareDevelopment = 0
total_count_SoftwareDevelopment = tn_SoftwareDevelopment + tp_SoftwareDevelopment + fn_SoftwareDevelopment + fp_SoftwareDevelopment
print('Software Engineer Accuracy Rate : ', '{0:.2f}'.format((tn_SoftwareDevelopment + tp_SoftwareDevelopment) / total_count_SoftwareDevelopment * 100))
print('Software Engineer Misclassification Rate : ', '{0:.2f}'.format((fn_SoftwareDevelopment + fp_SoftwareDevelopment) / total_count_SoftwareDevelopment * 100))
print('Software Engineer True Positive Rate : ', '{0:.2f}'.format(tp_SoftwareDevelopment / (tp_SoftwareDevelopment + fn_SoftwareDevelopment) * 100))
print('Software Engineer False Positive Rate : ', '{0:.2f}'.format(fp_SoftwareDevelopment / (tn_SoftwareDevelopment + fp_SoftwareDevelopment) * 100))
# Performance summary for the ROLE: DATABASE DEVELOPER/ ADMINISTRATOR
try:
    tn_DatabaseAdministrator = df_confusion_DatabaseAdministrator.iloc[0][False]
except (KeyError, IndexError):
    tn_DatabaseAdministrator = 0
try:
    tp_DatabaseAdministrator = df_confusion_DatabaseAdministrator.iloc[1][True]
except (KeyError, IndexError):
    tp_DatabaseAdministrator = 0
try:
    fn_DatabaseAdministrator = df_confusion_DatabaseAdministrator.iloc[1][False]
except (KeyError, IndexError):
    fn_DatabaseAdministrator = 0
try:
    fp_DatabaseAdministrator = df_confusion_DatabaseAdministrator.iloc[0][True]
except (KeyError, IndexError):
    fp_DatabaseAdministrator = 0
total_count_DatabaseAdministrator = tn_DatabaseAdministrator + tp_DatabaseAdministrator + fn_DatabaseAdministrator + fp_DatabaseAdministrator
print('Database Administrator Accuracy Rate : ', '{0:.2f}'.format((tn_DatabaseAdministrator + tp_DatabaseAdministrator) / total_count_DatabaseAdministrator * 100))
print('Database Administrator Misclassification Rate : ', '{0:.2f}'.format((fn_DatabaseAdministrator + fp_DatabaseAdministrator) / total_count_DatabaseAdministrator * 100))
print('Database Administrator True Positive Rate : ', '{0:.2f}'.format(tp_DatabaseAdministrator / (tp_DatabaseAdministrator + fn_DatabaseAdministrator) * 100))
print('Database Administrator False Positive Rate : ', '{0:.2f}'.format(fp_DatabaseAdministrator / (tn_DatabaseAdministrator + fp_DatabaseAdministrator) * 100))
# Performance summary for the ROLE: CYBERSECURITY CONSULTANT
try:
    tn_Cybersecurity = df_confusion_Cybersecurity.iloc[0][False]
except (KeyError, IndexError):
    tn_Cybersecurity = 0
try:
    tp_Cybersecurity = df_confusion_Cybersecurity.iloc[1][True]
except (KeyError, IndexError):
    tp_Cybersecurity = 0
try:
    fn_Cybersecurity = df_confusion_Cybersecurity.iloc[1][False]
except (KeyError, IndexError):
    fn_Cybersecurity = 0
try:
    fp_Cybersecurity = df_confusion_Cybersecurity.iloc[0][True]
except (KeyError, IndexError):
    fp_Cybersecurity = 0
total_count_Cybersecurity = tn_Cybersecurity + tp_Cybersecurity + fn_Cybersecurity + fp_Cybersecurity
print('Cybersecurity Consultant Accuracy Rate : ', '{0:.2f}'.format((tn_Cybersecurity + tp_Cybersecurity) / total_count_Cybersecurity * 100))
print('Cybersecurity Consultant Misclassification Rate : ', '{0:.2f}'.format((fn_Cybersecurity + fp_Cybersecurity) / total_count_Cybersecurity * 100))
print('Cybersecurity Consultant True Positive Rate : ', '{0:.2f}'.format(tp_Cybersecurity / (tp_Cybersecurity + fn_Cybersecurity) * 100))
print('Cybersecurity Consultant False Positive Rate : ', '{0:.2f}'.format(fp_Cybersecurity / (tn_Cybersecurity + fp_Cybersecurity) * 100))
# Performance summary for the ROLE: FINANCIAL ACCOUNTANT
try:
    tn_FinancialAccountant = df_confusion_FinancialAccountant.iloc[0][False]
except (KeyError, IndexError):
    tn_FinancialAccountant = 0
try:
    tp_FinancialAccountant = df_confusion_FinancialAccountant.iloc[1][True]
except (KeyError, IndexError):
    tp_FinancialAccountant = 0
try:
    fn_FinancialAccountant = df_confusion_FinancialAccountant.iloc[1][False]
except (KeyError, IndexError):
    fn_FinancialAccountant = 0
try:
    fp_FinancialAccountant = df_confusion_FinancialAccountant.iloc[0][True]
except (KeyError, IndexError):
    fp_FinancialAccountant = 0
total_count_FinancialAccountant = tn_FinancialAccountant + tp_FinancialAccountant + fn_FinancialAccountant + fp_FinancialAccountant
print('Financial Accountant Accuracy Rate : ', '{0:.2f}'.format((tn_FinancialAccountant + tp_FinancialAccountant) / total_count_FinancialAccountant * 100))
print('Financial Accountant Misclassification Rate : ', '{0:.2f}'.format((fn_FinancialAccountant + fp_FinancialAccountant) / total_count_FinancialAccountant * 100))
print('Financial Accountant True Positive Rate : ', '{0:.2f}'.format(tp_FinancialAccountant / (tp_FinancialAccountant + fn_FinancialAccountant) * 100))
print('Financial Accountant False Positive Rate : ', '{0:.2f}'.format(fp_FinancialAccountant / (tn_FinancialAccountant + fp_FinancialAccountant) * 100))
# Performance summary for the ROLE: MACHINE LEARNING ENGINEER
try:
    tn_MachineLearning = df_confusion_MachineLearning.iloc[0][False]
except (KeyError, IndexError):
    tn_MachineLearning = 0
try:
    tp_MachineLearning = df_confusion_MachineLearning.iloc[1][True]
except (KeyError, IndexError):
    tp_MachineLearning = 0
try:
    fn_MachineLearning = df_confusion_MachineLearning.iloc[1][False]
except (KeyError, IndexError):
    fn_MachineLearning = 0
try:
    fp_MachineLearning = df_confusion_MachineLearning.iloc[0][True]
except (KeyError, IndexError):
    fp_MachineLearning = 0
total_count_MachineLearning = tn_MachineLearning + tp_MachineLearning + fn_MachineLearning + fp_MachineLearning
print('Machine Learning Engineer Accuracy Rate : ', '{0:.2f}'.format((tn_MachineLearning + tp_MachineLearning) / total_count_MachineLearning * 100))
print('Machine Learning Engineer Misclassification Rate : ', '{0:.2f}'.format((fn_MachineLearning + fp_MachineLearning) / total_count_MachineLearning * 100))
print('Machine Learning Engineer True Positive Rate : ', '{0:.2f}'.format(tp_MachineLearning / (tp_MachineLearning + fn_MachineLearning) * 100))
print('Machine Learning Engineer False Positive Rate : ', '{0:.2f}'.format(fp_MachineLearning / (tn_MachineLearning + fp_MachineLearning) * 100))
# Performance summary for the ROLE: MUSICIAN
try:
    tn_Musician = df_confusion_Musician.iloc[0][False]
except (KeyError, IndexError):
    tn_Musician = 0
try:
    tp_Musician = df_confusion_Musician.iloc[1][True]
except (KeyError, IndexError):
    tp_Musician = 0
try:
    fn_Musician = df_confusion_Musician.iloc[1][False]
except (KeyError, IndexError):
    fn_Musician = 0
try:
    fp_Musician = df_confusion_Musician.iloc[0][True]
except (KeyError, IndexError):
    fp_Musician = 0
total_count_Musician = tn_Musician + tp_Musician + fn_Musician + fp_Musician
print('Musician Accuracy Rate : ', '{0:.2f}'.format((tn_Musician + tp_Musician) / total_count_Musician * 100))
print('Musician Misclassification Rate : ', '{0:.2f}'.format((fn_Musician + fp_Musician) / total_count_Musician * 100))
print('Musician True Positive Rate : ', '{0:.2f}'.format(tp_Musician / (tp_Musician + fn_Musician) * 100))
print('Musician False Positive Rate : ', '{0:.2f}'.format(fp_Musician / (tn_Musician + fp_Musician) * 100))
# Performance summary for the ROLE: DIETITIAN
try:
    tn_Dietitian = df_confusion_Dietitian.iloc[0][False]
except (KeyError, IndexError):
    tn_Dietitian = 0
try:
    tp_Dietitian = df_confusion_Dietitian.iloc[1][True]
except (KeyError, IndexError):
    tp_Dietitian = 0
try:
    fn_Dietitian = df_confusion_Dietitian.iloc[1][False]
except (KeyError, IndexError):
    fn_Dietitian = 0
try:
    fp_Dietitian = df_confusion_Dietitian.iloc[0][True]
except (KeyError, IndexError):
    fp_Dietitian = 0
total_count_Dietitian = tn_Dietitian + tp_Dietitian + fn_Dietitian + fp_Dietitian
print('Dietitian Accuracy Rate : ', '{0:.2f}'.format((tn_Dietitian + tp_Dietitian) / total_count_Dietitian * 100))
print('Dietitian Misclassification Rate : ', '{0:.2f}'.format((fn_Dietitian + fp_Dietitian) / total_count_Dietitian * 100))
print('Dietitian True Positive Rate : ', '{0:.2f}'.format(tp_Dietitian / (tp_Dietitian + fn_Dietitian) * 100))
print('Dietitian False Positive Rate : ', '{0:.2f}'.format(fp_Dietitian / (tn_Dietitian + fp_Dietitian) * 100))
df_final_model = df_courses_score[['Course Id', 'Course Name', 'Course Description', 'Slug',
'Provider', 'Universities/Institutions', 'Parent Subject',
'Child Subject', 'Category', 'Url', 'Length', 'Language',
'Credential Name', 'Rating', 'Number of Ratings', 'Certificate',
'Workload',
'DataScientist_Final_Score', 'DataScientist_Predict',
'SoftwareDevelopment_Final_Score', 'SoftwareDevelopment_Predict',
'DatabaseAdministrator_Final_Score', 'DatabaseAdministrator_Predict',
'Cybersecurity_Final_Score', 'Cybersecurity_Predict',
'FinancialAccountant_Final_Score', 'FinancialAccountant_Predict',
'MachineLearning_Final_Score', 'MachineLearning_Predict',
'Musician_Final_Score', 'Musician_Predict',
'Dietitian_Final_Score', 'Dietitian_Predict']]
df_final_model
test = df_final_model.sort_values('FinancialAccountant_Final_Score', ascending=False)
test
# Save the model results to the CSV File
df_final_model.columns
df_final_model = df_final_model.drop(df_final_model.columns[df_final_model.columns.str.contains('unnamed',case = False)],axis = 1)
df_final_model = df_final_model.replace(np.nan, '', regex=True)
df_final_model.columns = ['courseId', 'courseName', 'courseDescription', 'slug', 'provider',
'universitiesInstitutions', 'parentSubject', 'childSubject',
'category', 'url', 'length', 'language', 'credentialName', 'rating',
'numberOfRatings', 'certificate', 'workload',
'dataScientistFinalScore', 'dataScientistPredict',
'softwareDevelopmentFinalScore', 'softwareDevelopmentPredict',
'databaseAdministratorFinalScore', 'databaseAdministratorPredict',
'cybersecurityFinalScore', 'cybersecurityPredict',
'financialAccountantFinalScore', 'financialAccountantPredict',
'machineLearningFinalScore', 'machineLearningPredict',
'musicianFinalScore', 'musicianPredict',
'dietitianFinalScore', 'dietitianPredict']
df_final_model
# index=False avoids writing an extra unnamed index column to the CSV
df_final_model.to_csv(my_fpath_model, sep=',', encoding='utf-8', index=False)
```
### End of the Notebook. Thank you!
<a href="https://colab.research.google.com/github/sreyaschaithanya/football_analysis/blob/main/Football_1_Plotting_pass_and_shot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#! git clone https://github.com/statsbomb/open-data.git
from google.colab import drive
drive.mount('/content/drive')
#!rm -rf /content/open-data
#!cp -r "/content/drive/My Drive/Football/open-data" "open-data"
#!cp "/content/drive/My Drive/Football/open-data.zip" "open-data.zip"
#!unzip /content/open-data.zip -d /content/
#from google.colab import files
#files.download('open-data.zip')
DATA_PATH = "/content/drive/My Drive/Football/open-data/data/"
MATCHES_PATH = DATA_PATH+"matches/"
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Mar 25 17:32:00 2020
@author: davsu428
"""
import matplotlib.pyplot as plt
from matplotlib.patches import Arc
def createPitch(length,width, unity,linecolor): # in meters
# Code by @JPJ_dejong
"""
creates a plot in which the 'length' is the length of the pitch (goal to goal).
And 'width' is the width of the pitch (sideline to sideline).
Fill in the unity in meters or in yards.
"""
#Set unity
if unity == "meters":
# Set boundaries
if length >= 120.5 or width >= 75.5:
return(str("Field dimensions are too big for meters as unity, didn't you mean yards as unity?\
Otherwise the maximum length is 120 meters and the maximum width is 75 meters. Please try again"))
#Run program if unity and boundaries are accepted
else:
#Create figure
fig=plt.figure()
#fig.set_size_inches(7, 5)
ax=fig.add_subplot(1,1,1)
#Pitch Outline & Centre Line
plt.plot([0,0],[0,width], color=linecolor)
plt.plot([0,length],[width,width], color=linecolor)
plt.plot([length,length],[width,0], color=linecolor)
plt.plot([length,0],[0,0], color=linecolor)
plt.plot([length/2,length/2],[0,width], color=linecolor)
#Left Penalty Area
plt.plot([16.5 ,16.5],[(width/2 +16.5),(width/2-16.5)],color=linecolor)
plt.plot([0,16.5],[(width/2 +16.5),(width/2 +16.5)],color=linecolor)
plt.plot([16.5,0],[(width/2 -16.5),(width/2 -16.5)],color=linecolor)
#Right Penalty Area
plt.plot([(length-16.5),length],[(width/2 +16.5),(width/2 +16.5)],color=linecolor)
plt.plot([(length-16.5), (length-16.5)],[(width/2 +16.5),(width/2-16.5)],color=linecolor)
plt.plot([(length-16.5),length],[(width/2 -16.5),(width/2 -16.5)],color=linecolor)
#Left 5-meters Box
plt.plot([0,5.5],[(width/2+7.32/2+5.5),(width/2+7.32/2+5.5)],color=linecolor)
plt.plot([5.5,5.5],[(width/2+7.32/2+5.5),(width/2-7.32/2-5.5)],color=linecolor)
plt.plot([5.5,0],[(width/2-7.32/2-5.5),(width/2-7.32/2-5.5)],color=linecolor)
#Right 5-meters Box
plt.plot([length,length-5.5],[(width/2+7.32/2+5.5),(width/2+7.32/2+5.5)],color=linecolor)
plt.plot([length-5.5,length-5.5],[(width/2+7.32/2+5.5),width/2-7.32/2-5.5],color=linecolor)
plt.plot([length-5.5,length],[width/2-7.32/2-5.5,width/2-7.32/2-5.5],color=linecolor)
#Prepare Circles
centreCircle = plt.Circle((length/2,width/2),9.15,color=linecolor,fill=False)
centreSpot = plt.Circle((length/2,width/2),0.8,color=linecolor)
leftPenSpot = plt.Circle((11,width/2),0.8,color=linecolor)
rightPenSpot = plt.Circle((length-11,width/2),0.8,color=linecolor)
#Draw Circles
ax.add_patch(centreCircle)
ax.add_patch(centreSpot)
ax.add_patch(leftPenSpot)
ax.add_patch(rightPenSpot)
#Prepare Arcs
leftArc = Arc((11,width/2),height=18.3,width=18.3,angle=0,theta1=308,theta2=52,color=linecolor)
rightArc = Arc((length-11,width/2),height=18.3,width=18.3,angle=0,theta1=128,theta2=232,color=linecolor)
#Draw Arcs
ax.add_patch(leftArc)
ax.add_patch(rightArc)
#Axis titles
#check unity again
elif unity == "yards":
#check boundaries again
if length <= 95:
return(str("Didn't you mean meters as unity?"))
elif length >= 131 or width >= 101:
return(str("Field dimensions are too big. Maximum length is 130, maximum width is 100"))
#Run program if unity and boundaries are accepted
else:
#Create figure
fig=plt.figure()
#fig.set_size_inches(7, 5)
ax=fig.add_subplot(1,1,1)
#Pitch Outline & Centre Line
plt.plot([0,0],[0,width], color=linecolor)
plt.plot([0,length],[width,width], color=linecolor)
plt.plot([length,length],[width,0], color=linecolor)
plt.plot([length,0],[0,0], color=linecolor)
plt.plot([length/2,length/2],[0,width], color=linecolor)
#Left Penalty Area
plt.plot([18 ,18],[(width/2 +18),(width/2-18)],color=linecolor)
plt.plot([0,18],[(width/2 +18),(width/2 +18)],color=linecolor)
plt.plot([18,0],[(width/2 -18),(width/2 -18)],color=linecolor)
#Right Penalty Area
plt.plot([(length-18),length],[(width/2 +18),(width/2 +18)],color=linecolor)
plt.plot([(length-18), (length-18)],[(width/2 +18),(width/2-18)],color=linecolor)
plt.plot([(length-18),length],[(width/2 -18),(width/2 -18)],color=linecolor)
#Left 6-yard Box
plt.plot([0,6],[(width/2+7.32/2+6),(width/2+7.32/2+6)],color=linecolor)
plt.plot([6,6],[(width/2+7.32/2+6),(width/2-7.32/2-6)],color=linecolor)
plt.plot([6,0],[(width/2-7.32/2-6),(width/2-7.32/2-6)],color=linecolor)
#Right 6-yard Box
plt.plot([length,length-6],[(width/2+7.32/2+6),(width/2+7.32/2+6)],color=linecolor)
plt.plot([length-6,length-6],[(width/2+7.32/2+6),width/2-7.32/2-6],color=linecolor)
plt.plot([length-6,length],[(width/2-7.32/2-6),width/2-7.32/2-6],color=linecolor)
#Prepare Circles; 10 yards distance. penalty on 12 yards
centreCircle = plt.Circle((length/2,width/2),10,color=linecolor,fill=False)
centreSpot = plt.Circle((length/2,width/2),0.8,color=linecolor)
leftPenSpot = plt.Circle((12,width/2),0.8,color=linecolor)
rightPenSpot = plt.Circle((length-12,width/2),0.8,color=linecolor)
#Draw Circles
ax.add_patch(centreCircle)
ax.add_patch(centreSpot)
ax.add_patch(leftPenSpot)
ax.add_patch(rightPenSpot)
#Prepare Arcs
# Arcs centred on the penalty spots (12 yards), matching the spots drawn above
leftArc = Arc((12,width/2),height=20,width=20,angle=0,theta1=312,theta2=48,color=linecolor)
rightArc = Arc((length-12,width/2),height=20,width=20,angle=0,theta1=130,theta2=230,color=linecolor)
#Draw Arcs
ax.add_patch(leftArc)
ax.add_patch(rightArc)
#Tidy Axes
plt.axis('off')
return fig,ax
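# Standalone sanity check (illustrative, not required by createPitch) for the
# penalty-arc angles used above: the 9.15 m circle around the 11 m penalty
# spot meets the 16.5 m box edge where cos(theta) = (16.5 - 11) / 9.15,
# i.e. theta is about 53 degrees either side of the spot, consistent with
# theta1=308 (= -52) .. theta2=52 up to rounding.
import math
_theta = math.degrees(math.acos((16.5 - 11) / 9.15))
# _theta is roughly 53 degrees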
def createPitchOld(linecolor='black'):
    #Taken from FC Python; linecolor parameter added (the original referenced an undefined name)
#Create figure
fig=plt.figure()
ax=fig.add_subplot(1,1,1)
#Pitch Outline & Centre Line
plt.plot([0,0],[0,90], color=linecolor)
plt.plot([0,130],[90,90], color=linecolor)
plt.plot([130,130],[90,0], color=linecolor)
plt.plot([130,0],[0,0], color=linecolor)
plt.plot([65,65],[0,90], color=linecolor)
#Left Penalty Area
plt.plot([16.5,16.5],[65,25],color=linecolor)
plt.plot([0,16.5],[65,65],color=linecolor)
plt.plot([16.5,0],[25,25],color=linecolor)
#Right Penalty Area
plt.plot([130,113.5],[65,65],color=linecolor)
plt.plot([113.5,113.5],[65,25],color=linecolor)
plt.plot([113.5,130],[25,25],color=linecolor)
#Left 6-yard Box
plt.plot([0,5.5],[54,54],color=linecolor)
plt.plot([5.5,5.5],[54,36],color=linecolor)
plt.plot([5.5,0],[36,36],color=linecolor)
#Right 6-yard Box
plt.plot([130,124.5],[54,54],color=linecolor)
plt.plot([124.5,124.5],[54,36],color=linecolor)
plt.plot([124.5,130],[36,36],color=linecolor)
#Prepare Circles
centreCircle = plt.Circle((65,45),9.15,color=linecolor,fill=False)
centreSpot = plt.Circle((65,45),0.8,color=linecolor)
leftPenSpot = plt.Circle((11,45),0.8,color=linecolor)
rightPenSpot = plt.Circle((119,45),0.8,color=linecolor)
#Draw Circles
ax.add_patch(centreCircle)
ax.add_patch(centreSpot)
ax.add_patch(leftPenSpot)
ax.add_patch(rightPenSpot)
#Prepare Arcs
leftArc = Arc((11,45),height=18.3,width=18.3,angle=0,theta1=310,theta2=50,color=linecolor)
rightArc = Arc((119,45),height=18.3,width=18.3,angle=0,theta1=130,theta2=230,color=linecolor)
#Draw Arcs
ax.add_patch(leftArc)
ax.add_patch(rightArc)
#Tidy Axes
plt.axis('off')
return fig,ax
def createGoalMouth():
#Adapted from FC Python
#Create figure
fig=plt.figure()
ax=fig.add_subplot(1,1,1)
linecolor='black'
#Pitch Outline & Centre Line
plt.plot([0,65],[0,0], color=linecolor)
plt.plot([65,65],[50,0], color=linecolor)
plt.plot([0,0],[50,0], color=linecolor)
#Left Penalty Area
plt.plot([12.5,52.5],[16.5,16.5],color=linecolor)
plt.plot([52.5,52.5],[16.5,0],color=linecolor)
plt.plot([12.5,12.5],[0,16.5],color=linecolor)
#Left 6-yard Box
plt.plot([41.5,41.5],[5.5,0],color=linecolor)
plt.plot([23.5,41.5],[5.5,5.5],color=linecolor)
plt.plot([23.5,23.5],[0,5.5],color=linecolor)
#Goal
plt.plot([41.5-5.34,41.5-5.34],[-2,0],color=linecolor)
plt.plot([23.5+5.34,41.5-5.34],[-2,-2],color=linecolor)
plt.plot([23.5+5.34,23.5+5.34],[0,-2],color=linecolor)
#Prepare Circles
leftPenSpot = plt.Circle((65/2,11),0.8,color=linecolor)
#Draw Circles
ax.add_patch(leftPenSpot)
#Prepare Arcs
leftArc = Arc((32.5,11),height=18.3,width=18.3,angle=0,theta1=38,theta2=142,color=linecolor)
#Draw Arcs
ax.add_patch(leftArc)
#Tidy Axes
plt.axis('off')
return fig,ax
import pandas as pd
import json
competitions = pd.read_json(DATA_PATH+"competitions.json")
competitions.head()
competitions.describe()
competitions.info()
competitions["competition_id"].unique(), competitions["competition_id"].nunique()
#show all the competitions and the related files for matches
pd.set_option("display.max_rows", None, "display.max_columns", None)
for i in competitions["competition_id"].unique():
print(competitions[competitions["competition_id"]==i])
# show files
import glob
competitions_path_list = glob.glob(MATCHES_PATH+"/*")
competition_file_dict = {}
for i in competitions_path_list:
competition_file_dict[i] = glob.glob(i+"/*.json")
competition_file_dict
DATA_PATH
with open(DATA_PATH + 'matches/72/30.json') as f:
matches = json.load(f)
matches[0]
for match in matches:
if match["home_team"]['home_team_name']=="Sweden Women's" or match["away_team"]['away_team_name']=="Sweden Women's":
print("match between: "+ match["home_team"]['home_team_name']+" vs " +match["away_team"]['away_team_name'] +" with score {}:{}".format(match["home_score"],match["away_score"]))
```
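The filtering loop above can be adapted to collect the match IDs of Sweden Women's games for later event lookups. A minimal, self-contained sketch (the inline `matches_sample` data here is hypothetical; the real data comes from the StatsBomb JSON loaded above):

```python
# Hypothetical two-match sample mirroring the StatsBomb matches JSON structure.
matches_sample = [
    {"match_id": 69301,
     "home_team": {"home_team_name": "Sweden Women's"},
     "away_team": {"away_team_name": "England Women's"},
     "home_score": 2, "away_score": 1},
    {"match_id": 69299,
     "home_team": {"home_team_name": "Norway Women's"},
     "away_team": {"away_team_name": "Germany Women's"},
     "home_score": 0, "away_score": 3},
]

# Keep the match_id of every game in which Sweden Women's played, home or away.
sweden_match_ids = [
    m["match_id"] for m in matches_sample
    if "Sweden Women's" in (m["home_team"]["home_team_name"],
                            m["away_team"]["away_team_name"])
]
print(sweden_match_ids)  # [69301]
```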
# Pitch map
```
import matplotlib.pyplot as plt
import numpy as np
pitchLenX = 120
pitchWidY = 80
match_id = 69301
def get_match(match_id):
for i in matches:
if i["match_id"]==match_id:
return i
def get_event(match_id):
with open(DATA_PATH+"events/"+str(match_id)+".json") as f:
event = json.load(f)
return event
match = get_match(match_id)
events = get_event(match_id)
match
events_df = pd.json_normalize(events)
events_df.columns.values
shots = events_df[events_df["type.name"]=="Shot"]
shots[["period","minute","location","team.name","shot.outcome.name"]]
(fig,ax) = createPitch(pitchLenX,pitchWidY,"yards","grey")
for i,shot in shots.iterrows():
x = shot.location[0]
y = shot.location[1]
goal = shot["shot.outcome.name"] == "Goal"
shot_team = shot["team.name"]
circle_size = np.sqrt(shot["shot.statsbomb_xg"]*15)
print(circle_size)
if shot_team == "Sweden Women's":
if goal:
shotCircle = plt.Circle((x,pitchWidY-y),circle_size,color="red")
#plt.text(x,pitchWidY-y,"hi")
else:
shotCircle = plt.Circle((x,pitchWidY-y),circle_size,color="red")
shotCircle.set_alpha(0.2)
else:
if goal:
shotCircle = plt.Circle((pitchLenX-x,y),circle_size,color="blue")
#plt.text((pitchLenX-x+1),y+1,shot['player.name'])
else:
shotCircle = plt.Circle((pitchLenX-x,y),circle_size,color="blue")
shotCircle.set_alpha(0.2)
ax.add_patch(shotCircle)
#"England Women's"
#plt.show()
fig
```
# passes plotting
```
#passes = events_df[(events_df["type.name"]=="Pass") & (events_df["player.name"]=="Sara Caroline Seger") & (events_df["play_pattern.name"]=="Regular Play")]
passes = events_df[(events_df["type.name"]=="Pass") & (events_df["player.name"]=="Sara Caroline Seger")]
# shots[["period","minute","location","team.name","shot.outcome.name"]]
#
#events_df["type.name"].unique()
#passes[["location"]+[i for i in passes.columns.values if "pass" in i]]
passes
(fig,ax) = createPitch(pitchLenX,pitchWidY,"yards","green")
fig.set_size_inches(15, 10.5)
for i,shot in passes.iterrows():
x_start = shot.location[0]
y_start = shot.location[1]
x_end = shot["pass.end_location"][0]
y_end = shot["pass.end_location"][1]
#goal = shot["shot.outcome.name"] == "Goal"
#circle_size = np.sqrt(shot["shot.statsbomb_xg"]*15)
#print(circle_size)
shotarrow = plt.Arrow(x_start, pitchWidY-y_start, x_end-x_start, pitchWidY-y_end-pitchWidY+y_start,width=2,color="blue")
ax.add_patch(shotarrow)
#"England Women's"
#plt.show()
(fig, ax) = createPitch(120, 80, 'yards', 'gray')
for i, p in passes.iterrows():
x, y = p.location
x, y = (x,pitchWidY-y)
end_x, end_y = p["pass.end_location"]
end_x, end_y = (end_x, pitchWidY-end_y)
start_circle = plt.Circle((x, y), 1, alpha=.2, color="blue")
pass_arrow = plt.Arrow(x, y, end_x - x, end_y - y, width=2, color="blue")
ax.add_patch(pass_arrow)
ax.add_patch(start_circle)
fig.set_size_inches(15, 10.5)
plt.show()
```
# Vectors in Python
In the following exercises, you will work on coding vectors in Python.
Assume that you have a state vector
$$\mathbf{x_0}$$
representing the x position, y position, velocity in the x direction, and velocity in the y direction of a car that is driving in front of your vehicle. You are tracking the other vehicle.
Currently, the other vehicle is 5 meters ahead of you along your x-axis, 2 meters to your left along your y-axis, driving 10 m/s in the x direction and 0 m/s in the y-direction. How would you represent this in a Python list where the vector contains `<x, y, vx, vy>` in exactly that order?
### Vector Assignment: Example 1
```
## Practice working with Python vectors
## TODO: Assume the state vector contains values for <x, y, vx, vy>
## Currently, x = 5, y = 2, vx = 10, vy = 0
## Represent this information in a list
x0 = [5, 2, 10, 0]
```
### Test your code
Run the cell below to test your code.
The test code uses a Python assert statement. Given a code statement that resolves to either True or False, an assert statement will either:
* do nothing if the statement is True
* throw an AssertionError if the statement is False
So if your answer is as expected, running the test cell will produce no output.
```
### Test Cases
### Run these test cases to see if your results are as expected
### Running this cell should produce no output if all assertions are True
assert x0 == [5, 2, 10, 0]
```
### Vector Assignment: Example 2
The vehicle ahead of you has now moved farther away from you. You know that the vehicle has moved 3 meters forward in the x-direction, 5 meters forward in the y-direction, has increased its x velocity by 2 m/s and has increased its y velocity by 5 m/s.
Store the change in position and velocity in a list variable called xdelta
```
## TODO: Assign the change in position and velocity to the variable
## xdelta. Remember that the order of the vector is x, y, vx, vy
xdelta = [3, 5, 2, 5]
### Test Case
### Run this test case to see if your results are as expected
### Running this cell should produce no output if all assertions are True
assert xdelta == [3, 5, 2, 5]
```
### Vector Math: Addition
Calculate the tracked vehicle's new position and velocity. Here are the steps to carry this out:
* initialize an empty list called x1
* add xdelta to x0 using a for loop
* store your results in x1 as you iterate through the for loop using the append method
```
## TODO: Add the vectors together element-wise. For example,
## element-wise addition of [2, 6] and [10, 3] is [12, 9].
## Place the answer in the x1 variable.
##
## Hint: You can use a for loop. The append method might also
## be helpful.
x1 = []
for i in range(len(x0)):
x1.append(x0[i]+xdelta[i])
print(x1)
### Test Case
### Run this test case to see if your results are as expected
### Running this cell should produce no output if all assertions are True
assert x1 == [8, 7, 12, 5]
```
### Vector Math: Scalar Multiplication
You have your current position in meters and current velocity in meters per second. But you need to report your results at a company meeting where most people will only be familiar with working in feet rather than meters. Convert your state vector x1 to feet and feet/second.
This will involve scalar multiplication. The process for coding scalar multiplication is very similar to vector addition. You will need to:
* initialize an empty list
* use a for loop to access each element in the vector
* multiply each element by the scalar
* append the result to the empty list
```
## TODO: Multiply each element in the x1 vector by the conversion
## factor shown below and store the results in the variable x1feet.
## Use a for loop
meters_to_feet = 1.0 / 0.3048
x1feet = []
for i in range(len(x1)):
x1feet.append(meters_to_feet*x1[i])
print(x1feet)
### Test Cases
### Run this test case to see if your results are as expected
### Running this cell should produce no output if all assertions are True
x1feet_sol = [8/.3048, 7/.3048, 12/.3048, 5/.3048]
assert(len(x1feet) == len(x1feet_sol))
for response, expected in zip(x1feet, x1feet_sol):
assert(abs(response-expected) < 0.001)
```
### Vector Math: Dot Product
The tracked vehicle is currently at the state represented by
$$\mathbf{x_1} = [8, 7, 12, 5] $$.
Where will the vehicle be in two seconds?
You could actually solve this problem very quickly using Matrix multiplication, but we have not covered that yet. Instead, think about the x-direction and y-direction separately and how you could do this with the dot product.
#### Solving with the Dot Product
You know that the tracked vehicle at x1 is 8m ahead of you in the x-direction and traveling at 12m/s. Assuming constant velocity, the new x-position after 2 seconds would be
$$8 + 12*2 = 32$$
The new y-position would be
$$7 + 5*2 = 17$$
You could actually solve each of these equations using the dot product:
$$x_2 = [8, 7, 12, 5]\cdot[1, 0, 2, 0] \\\
= 8\times1 + 7\times0 + 12\times2 + 5\times0 \\\
= 32$$
$$y_2 = [8, 7, 12, 5]\cdot[0, 1, 0, 2] \\\
= 8\times0 + 7\times1 + 12\times0 + 5\times2 \\\
= 17$$
Since you are assuming constant velocity, the final state vector would be
$$\mathbf{x_2} = [32, 17, 12, 5]$$
#### Coding the Dot Product
Now, calculate the state vector $$\mathbf{x_2}$$ but with code. You will need to calculate the dot product of two vectors. Rather than writing the dot product code for the x-direction and then copying the code for the y-direction, write a function that calculates the dot product of two Python lists.
Here is an outline of the steps:
* initialize an empty list
* initialize a variable with value zero to accumulate the sum
* use a for loop to iterate through the vectors. Assume the two vectors have the same length
* accumulate the sum as you multiply elements together
You will see in the starter code that x2 is already being calculated for you based on the results of your dotproduct() function.
```
## TODO: Fill in the dotproduct() function to calculate the
## dot product of two vectors.
##
## Here are the inputs and outputs of the dotproduct() function:
## INPUTS: vector, vector
## OUTPUT: dot product of the two vectors
##
##
## The dot product involves multiplying the vectors element
## by element and then taking the sum of the results
##
## For example, the dot product of [9, 7, 5] and [2, 3, 4] is
## 9*2 + 7*3 + 5*4 = 59
##
## Hint: You can use a for loop. You will also need to accumulate
## the sum as you iterate through the vectors. In Python, you can accumulate
## sums with syntax like w = w + 1
x2 = []
def dotproduct(vectora, vectorb):
# variable for accumulating the sum
result = 0
# TODO: Use a for loop to multiply the two vectors
# element by element. Accumulate the sum in the result variable
for i in range(len(vectora)):
result+=vectora[i]*vectorb[i]
return result
x2 = [dotproduct([8, 7, 12, 5], [1, 0, 2, 0]),
dotproduct([8, 7, 12, 5], [0, 1, 0, 2]),
12,
5]
### Test Case
### Run this test case to see if your results are as expected
### Running this cell should produce no output if all assertions are True
assert x2 == [32, 17, 12, 5]
```
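The matrix-multiplication shortcut mentioned earlier can be sketched with plain Python lists. The prediction step is a matrix-vector product x2 = F * x1, where F is a state-transition matrix encoding "position += velocity * dt" with dt = 2 seconds (this sketch is an illustration, not part of the original exercise):

```python
# State-transition matrix for a constant-velocity model with dt = 2 seconds.
F = [[1, 0, 2, 0],
     [0, 1, 0, 2],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

x1 = [8, 7, 12, 5]

def matrix_vector_multiply(matrix, vector):
    """Multiply a matrix by a vector: each output element is the
    dot product of one matrix row with the vector."""
    result = []
    for row in matrix:
        total = 0
        for a, b in zip(row, vector):
            total += a * b
        result.append(total)
    return result

x2 = matrix_vector_multiply(F, x1)
print(x2)  # [32, 17, 12, 5]
```

Each row of F reproduces one of the dot products computed by hand above, so the single matrix-vector product replaces four separate dotproduct() calls.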
# Workflows for reproducible and trustworthy data science wrap-up
This topic serves as a wrap-up of the course: it summarizes the course learning objectives, revisits what is meant by reproducible and trustworthy data science, and provides data analysis project critique exercises to reinforce what has been learned in the course.
## Course learning objectives
By the end of this course, students should be able to:
- Defend and justify the importance of creating data science workflows that are
reproducible and trustworthy and the elements that go into such a workflow (e.g.,
writing clear, robust, accurate and reproducible code, managing and sharing compute
environments, defined collaboration strategies, etc).
- Constructively criticize the workflows and data analyses of others with regard to their reproducibility and trustworthiness.
- Develop a data science project (including code and non-code documents such as
reports) that uses reproducible and trustworthy workflows
- Demonstrate how to effectively share and collaborate on data science projects and
software by creating robust code packages, using reproducible compute environments,
and leveraging collaborative development tools.
- Defend and justify the benefit of, and employ automated testing regimes, continuous
integration and continuous deployment for managing and maintaining data science
projects and packages.
- Demonstrate strong communication, teamwork, and collaborative skills by working on
a significant data science project with peers throughout the course.
## Definitions review
### Data science
*the study, development and practice of __reproducible and auditable processes__ to obtain __insight from data.__*
From this definition, we must also define reproducible and auditable analysis:
### Reproducible analysis:
*reaching the same result given the same input, computational methods and conditions $^1$.*
- input = data
- computational methods = computer code
- conditions = computational environment (e.g., programming language & its dependencies)
### Auditable/transparent analysis:
*a readable record of the steps used to carry out the analysis as well as a record of how the analysis methods evolved $^2$.*
1. [National Academies of Sciences, 2019](https://www.nap.edu/catalog/25303/reproducibility-and-replicability-in-science)
2. [Parker, 2017](https://peerj.com/preprints/3210/) and [Ram, 2013](https://scfbm.biomedcentral.com/articles/10.1186/1751-0473-8-7)
## What makes trustworthy data science?
Some possible criteria:
1. It should be reproducible and auditable
2. It should be correct
3. It should be fair, equitable and honest
There are many ways a data analysis can be untrustworthy... In this course we will focus on workflows that can help build trust. I highly recommend taking a course in data science ethics* to help round out your education in how to do this. Further training in statistics and machine learning will also help with making sure your analysis is correct.
>*\* UBC's CPSC 430 (Computers and Society) will have a section reserved for DSCI minor students next year in 2022 T1, which will focus on ethics in data science.*
#### Exercise
Answer the questions below to more concretely connect with the criteria suggested above.
1. Give an example of a data science workflow that affords reproducibility, and one that affords auditable analysis.
2. Can a data analysis be reproducible but not auditable? How about auditable, but not reproducible?
3. Name at least two ways that a data analysis project could be correct (or incorrect).
## Critiquing data analysis projects
Critiquing is defined as evaluating something in a detailed and analytical way
(source: [Oxford Languages Dictionary](https://languages.oup.com/dictionaries/)).
It is used in many domains as a means to improve something (related to peer review),
but also serves as an excellent pedagogical tool to actively practice evaluation.
We will work together in class to critique the following projects from the lens of reproducible and trustworthy workflows:
- [Genomic data and code](https://github.com/ttimbers/Million-Mutation-Project-dye-filling-SKAT) to accompany the "Accelerating Gene Discovery by Phenotyping Whole-Genome Sequenced Multi-mutation Strains and Using the Sequence Kernel Association Test (SKAT)" manuscript by [Timbers et al., PLoS Genetics, 2015](https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1006235)
- [Data and code](https://github.com/jasonpriem/plos_altmetrics_study) to accompany the "Altmetrics in the wild: Using social media to explore scholarly impact" manuscript by [Priem et al., arXiv, 2021](https://arxiv.org/abs/1203.4745)
- [Code](https://github.com/sacadena/Cadena2019PlosCB) to accompany the "Deep convolutional models improve predictions of macaque V1 responses to natural images" manuscript by [Cadena et al., PLoS Computational Biology, 2019](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006897)
### Exercise
When prompted for each project listed above:
- In groups of ~5, take 10 minutes to review the project from the lens of reproducible and trustworthy workflows. You want to evaluate the project with the questions in mind:
- Would I know where to get started reproducing the project?
- If I could get started, do I think I could reproduce the project?
- As a group, come up with at least 1-2 things that have been done well, as well as 1-2 things that could be improved
from this lens. Justify these with respect to reproducibility and trustworthiness.
- Choose one person to be the reporter to bring back these critiques to the larger group.
```
%matplotlib inline
```
DCGAN Tutorial
==============
**Author**: `Nathan Inkawhich <https://github.com/inkawhich>`__
Introduction
------------
This tutorial will give an introduction to DCGANs through an example. We
will train a generative adversarial network (GAN) to generate new
celebrities after showing it pictures of many real celebrities. Most of
the code here is from the dcgan implementation in
`pytorch/examples <https://github.com/pytorch/examples>`__, and this
document will give a thorough explanation of the implementation and shed
light on how and why this model works. But don’t worry, no prior
knowledge of GANs is required, but it may require a first-timer to spend
some time reasoning about what is actually happening under the hood.
Also, for the sake of time it will help to have a GPU, or two. Lets
start from the beginning.
Generative Adversarial Networks
-------------------------------
What is a GAN?
~~~~~~~~~~~~~~
GANs are a framework for teaching a DL model to capture the training
data’s distribution so we can generate new data from that same
distribution. GANs were invented by Ian Goodfellow in 2014 and first
described in the paper `Generative Adversarial
Nets <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__.
They are made of two distinct models, a *generator* and a
*discriminator*. The job of the generator is to spawn ‘fake’ images that
look like the training images. The job of the discriminator is to look
at an image and output whether or not it is a real training image or a
fake image from the generator. During training, the generator is
constantly trying to outsmart the discriminator by generating better and
better fakes, while the discriminator is working to become a better
detective and correctly classify the real and fake images. The
equilibrium of this game is when the generator is generating perfect
fakes that look as if they came directly from the training data, and the
discriminator is left to always guess at 50% confidence that the
generator output is real or fake.
Now, let’s define some notation to be used throughout the tutorial, starting
with the discriminator. Let $x$ be data representing an image.
$D(x)$ is the discriminator network which outputs the (scalar)
probability that $x$ came from training data rather than the
generator. Here, since we are dealing with images the input to
$D(x)$ is an image of CHW size 3x64x64. Intuitively, $D(x)$
should be HIGH when $x$ comes from training data and LOW when
$x$ comes from the generator. $D(x)$ can also be thought of
as a traditional binary classifier.
For the generator’s notation, let $z$ be a latent space vector
sampled from a standard normal distribution. $G(z)$ represents the
generator function which maps the latent vector $z$ to data-space.
The goal of $G$ is to estimate the distribution that the training
data comes from ($p_{data}$) so it can generate fake samples from
that estimated distribution ($p_g$).
So, $D(G(z))$ is the probability (scalar) that the output of the
generator $G$ is a real image. As described in `Goodfellow’s
paper <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__,
$D$ and $G$ play a minimax game in which $D$ tries to
maximize the probability it correctly classifies reals and fakes
($logD(x)$), and $G$ tries to minimize the probability that
$D$ will predict its outputs are fake ($log(1-D(G(z)))$).
From the paper, the GAN loss function is
\begin{align}\underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]\end{align}
In theory, the solution to this minimax game is where
$p_g = p_{data}$, and the discriminator guesses randomly if the
inputs are real or fake. However, the convergence theory of GANs is
still being actively researched and in reality models do not always
train to this point.
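To make the objective concrete, here is a small self-contained sketch (not from the tutorial; the probabilities are invented for illustration) that evaluates the two expectation terms of $V(D,G)$ for hand-picked discriminator outputs:

```python
import math

# D(x) for a batch of real images: the discriminator wants these HIGH.
d_real = [0.9, 0.8, 0.95]
# D(G(z)) for a batch of fakes: D wants these LOW, G wants them HIGH.
d_fake = [0.1, 0.2, 0.05]

# Monte Carlo estimates of the two expectations in the minimax objective.
real_term = sum(math.log(p) for p in d_real) / len(d_real)      # E[log D(x)]
fake_term = sum(math.log(1 - p) for p in d_fake) / len(d_fake)  # E[log(1 - D(G(z)))]

value = real_term + fake_term  # V(D, G): D maximizes this, G minimizes it
print(round(value, 4))  # -0.2532
```

A well-performing discriminator pushes both terms toward 0 (value rises), while a well-performing generator pushes the second term toward minus infinity (value falls); at the theoretical equilibrium D(x) = D(G(z)) = 0.5 everywhere.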
What is a DCGAN?
~~~~~~~~~~~~~~~~
A DCGAN is a direct extension of the GAN described above, except that it
explicitly uses convolutional and convolutional-transpose layers in the
discriminator and generator, respectively. It was first described by
Radford et. al. in the paper `Unsupervised Representation Learning With
Deep Convolutional Generative Adversarial
Networks <https://arxiv.org/pdf/1511.06434.pdf>`__. The discriminator
is made up of strided
`convolution <https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d>`__
layers, `batch
norm <https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm2d>`__
layers, and
`LeakyReLU <https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU>`__
activations. The input is a 3x64x64 input image and the output is a
scalar probability that the input is from the real data distribution.
The generator is comprised of
`convolutional-transpose <https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d>`__
layers, batch norm layers, and
`ReLU <https://pytorch.org/docs/stable/nn.html#relu>`__ activations. The
input is a latent vector, $z$, that is drawn from a standard
normal distribution and the output is a 3x64x64 RGB image. The strided
conv-transpose layers allow the latent vector to be transformed into a
volume with the same shape as an image. In the paper, the authors also
give some tips about how to setup the optimizers, how to calculate the
loss functions, and how to initialize the model weights, all of which
will be explained in the coming sections.
```
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
```
Inputs
------
Let’s define some inputs for the run:
- **dataroot** - the path to the root of the dataset folder. We will
talk more about the dataset in the next section
- **workers** - the number of worker threads for loading the data with
the DataLoader
- **batch_size** - the batch size used in training. The DCGAN paper
uses a batch size of 128
- **image_size** - the spatial size of the images used for training.
This implementation defaults to 64x64. If another size is desired,
the structures of D and G must be changed. See
`here <https://github.com/pytorch/examples/issues/70>`__ for more
details
- **nc** - number of color channels in the input images. For color
images this is 3
- **nz** - length of latent vector
- **ngf** - relates to the depth of feature maps carried through the
generator
- **ndf** - sets the depth of feature maps propagated through the
discriminator
- **num_epochs** - number of training epochs to run. Training for
longer will probably lead to better results but will also take much
longer
- **lr** - learning rate for training. As described in the DCGAN paper,
this number should be 0.0002
- **beta1** - beta1 hyperparameter for Adam optimizers. As described in
paper, this number should be 0.5
- **ngpu** - number of GPUs available. If this is 0, code will run in
CPU mode. If this number is greater than 0 it will run on that number
of GPUs
```
# Root directory for dataset
dataroot = "data/celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
```
Data
----
In this tutorial we will use the `Celeb-A Faces
dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`__ which can
be downloaded at the linked site, or in `Google
Drive <https://drive.google.com/drive/folders/0B7EVK8r0v71pTUZsaXdaSnZBZzg>`__.
The dataset will download as a file named *img_align_celeba.zip*. Once
downloaded, create a directory named *celeba* and extract the zip file
into that directory. Then, set the *dataroot* input for this notebook to
the *celeba* directory you just created. The resulting directory
structure should be:
::
/path/to/celeba
-> img_align_celeba
-> 188242.jpg
-> 173822.jpg
-> 284702.jpg
-> 537394.jpg
...
This is an important step because we will be using the ImageFolder
dataset class, which requires there to be subdirectories in the
dataset’s root folder. Now, we can create the dataset, create the
dataloader, set the device to run on, and finally visualize some of the
training data.
```
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
```
Implementation
--------------
With our input parameters set and the dataset prepared, we can now get
into the implementation. We will start with the weight initialization
strategy, then talk about the generator, discriminator, loss functions,
and training loop in detail.
Weight Initialization
~~~~~~~~~~~~~~~~~~~~~
From the DCGAN paper, the authors specify that all model weights shall
be randomly initialized from a Normal distribution with mean=0,
stdev=0.02. The ``weights_init`` function takes an initialized model as
input and reinitializes all convolutional, convolutional-transpose, and
batch normalization layers to meet this criteria. This function is
applied to the models immediately after initialization.
```
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
```
Generator
~~~~~~~~~
The generator, $G$, is designed to map the latent space vector
($z$) to data-space. Since our data are images, converting
$z$ to data-space means ultimately creating a RGB image with the
same size as the training images (i.e. 3x64x64). In practice, this is
accomplished through a series of strided two dimensional convolutional
transpose layers, each paired with a 2d batch norm layer and a relu
activation. The output of the generator is fed through a tanh function
to return it to the input data range of $[-1,1]$. It is worth
noting the existence of the batch norm functions after the
conv-transpose layers, as this is a critical contribution of the DCGAN
paper. These layers help with the flow of gradients during training. An
image of the generator from the DCGAN paper is shown below.
.. figure:: /_static/img/dcgan_generator.png
:alt: dcgan_generator
Notice how the inputs we set in the input section (*nz*, *ngf*, and
*nc*) influence the generator architecture in code. *nz* is the length
of the z input vector, *ngf* relates to the size of the feature maps
that are propagated through the generator, and *nc* is the number of
channels in the output image (set to 3 for RGB images). Below is the
code for the generator.
```
# Generator Code
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
```
Now, we can instantiate the generator and apply the ``weights_init``
function. Check out the printed model to see how the generator object is
structured.
```
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
# Print the model
print(netG)
```
Discriminator
~~~~~~~~~~~~~
As mentioned, the discriminator, $D$, is a binary classification
network that takes an image as input and outputs a scalar probability
that the input image is real (as opposed to fake). Here, $D$ takes
a 3x64x64 input image, processes it through a series of Conv2d,
BatchNorm2d, and LeakyReLU layers, and outputs the final probability
through a Sigmoid activation function. This architecture can be extended
with more layers if necessary for the problem, but there is significance
to the use of the strided convolution, BatchNorm, and LeakyReLUs. The
DCGAN paper mentions it is a good practice to use strided convolution
rather than pooling to downsample because it lets the network learn its
own pooling function. Also batch norm and leaky relu functions promote
healthy gradient flow which is critical for the learning process of both
$G$ and $D$.
Discriminator Code
```
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
```
Now, as with the generator, we can create the discriminator, apply the
``weights_init`` function, and print the model’s structure.
```
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
# Print the model
print(netD)
```
Loss Functions and Optimizers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With $D$ and $G$ setup, we can specify how they learn
through the loss functions and optimizers. We will use the Binary Cross
Entropy loss
(`BCELoss <https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss>`__)
function which is defined in PyTorch as:
\begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\end{align}
Notice how this function provides the calculation of both log components
in the objective function (i.e. $log(D(x))$ and
$log(1-D(G(z)))$). We can specify what part of the BCE equation to
use with the $y$ input. This is accomplished in the training loop
which is coming up soon, but it is important to understand how we can
choose which component we wish to calculate just by changing $y$
(i.e. GT labels).
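To see how the $y$ input selects a component, here is a toy scalar version of the BCE formula above (plain Python for illustration, not the PyTorch implementation):

```python
import math

# BCE for a single prediction x with target y:
#   l = -[ y*log(x) + (1-y)*log(1-x) ]
def bce(x, y):
    return -(y * math.log(x) + (1 - y) * math.log(1 - x))

x = 0.9                    # stand-in for a discriminator output D(.)
# y=1 keeps only the -log(x) term; y=0 keeps only -log(1-x)
assert math.isclose(bce(x, 1.0), -math.log(x))
assert math.isclose(bce(x, 0.0), -math.log(1 - x))
```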
Next, we define our real label as 1 and the fake label as 0. These
labels will be used when calculating the losses of $D$ and
$G$, and this is also the convention used in the original GAN
paper. Finally, we set up two separate optimizers, one for $D$ and
one for $G$. As specified in the DCGAN paper, both are Adam
optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track
of the generator’s learning progression, we will generate a fixed batch
of latent vectors that are drawn from a Gaussian distribution
(i.e. fixed_noise) . In the training loop, we will periodically input
this fixed_noise into $G$, and over the iterations we will see
images form out of the noise.
```
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
```
Training
~~~~~~~~
Finally, now that we have all of the parts of the GAN framework defined,
we can train it. Be mindful that training GANs is somewhat of an art
form, as incorrect hyperparameter settings lead to mode collapse with
little explanation of what went wrong. Here, we will closely follow
Algorithm 1 from Goodfellow’s paper, while abiding by some of the best
practices shown in `ganhacks <https://github.com/soumith/ganhacks>`__.
Namely, we will “construct different mini-batches for real and fake”
images, and also adjust G’s objective function to maximize
$logD(G(z))$. Training is split up into two main parts. Part 1
updates the Discriminator and Part 2 updates the Generator.
**Part 1 - Train the Discriminator**
Recall, the goal of training the discriminator is to maximize the
probability of correctly classifying a given input as real or fake. In
terms of Goodfellow, we wish to “update the discriminator by ascending
its stochastic gradient”. Practically, we want to maximize
$log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batch
suggestion from ganhacks, we will calculate this in two steps. First, we
will construct a batch of real samples from the training set, forward
pass through $D$, calculate the loss ($log(D(x))$), then
calculate the gradients in a backward pass. Secondly, we will construct
a batch of fake samples with the current generator, forward pass this
batch through $D$, calculate the loss ($log(1-D(G(z)))$),
and *accumulate* the gradients with a backward pass. Now, with the
gradients accumulated from both the all-real and all-fake batches, we
call a step of the Discriminator’s optimizer.
**Part 2 - Train the Generator**
As stated in the original paper, we want to train the Generator by
minimizing $log(1-D(G(z)))$ in an effort to generate better fakes.
As mentioned, this was shown by Goodfellow to not provide sufficient
gradients, especially early in the learning process. As a fix, we
instead wish to maximize $log(D(G(z)))$. In the code we accomplish
this by: classifying the Generator output from Part 1 with the
Discriminator, computing G’s loss *using real labels as GT*, computing
G’s gradients in a backward pass, and finally updating G’s parameters
with an optimizer step. It may seem counter-intuitive to use the real
labels as GT labels for the loss function, but this allows us to use the
$log(x)$ part of the BCELoss (rather than the $log(1-x)$
part) which is exactly what we want.
Finally, we will do some statistic reporting and at the end of each
epoch we will push our fixed_noise batch through the generator to
visually track the progress of G’s training. The training statistics
reported are:
- **Loss_D** - discriminator loss calculated as the sum of losses for
the all-real and all-fake batches ($log(D(x)) + log(1 - D(G(z)))$).
- **Loss_G** - generator loss calculated as $log(D(G(z)))$
- **D(x)** - the average output (across the batch) of the discriminator
for the all real batch. This should start close to 1 then
theoretically converge to 0.5 when G gets better. Think about why
this is.
- **D(G(z))** - average discriminator outputs for the all fake batch.
The first number is before D is updated and the second number is
after D is updated. These numbers should start near 0 and converge to
0.5 as G gets better. Think about why this is.
**Note:** This step might take a while, depending on how many epochs you
run and if you removed some data from the dataset.
```
# Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
errG = criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
```
Results
-------
Finally, let's check out how we did. Here, we will look at three
different results. First, we will see how D and G’s losses changed
during training. Second, we will visualize G’s output on the fixed_noise
batch for every epoch. And third, we will look at a batch of real data
next to a batch of fake data from G.
**Loss versus training iteration**
Below is a plot of D & G’s losses versus training iterations.
```
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
```
**Visualization of G’s progression**
Remember how we saved the generator’s output on the fixed_noise batch
after every epoch of training. Now, we can visualize the training
progression of G with an animation. Press the play button to start the
animation.
```
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
```
**Real Images vs. Fake Images**
Finally, let's take a look at some real images and fake images side by
side.
```
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
```
Where to Go Next
----------------
We have reached the end of our journey, but there are several places you
could go from here. You could:
- Train for longer to see how good the results get
- Modify this model to take a different dataset and possibly change the
size of the images and the model architecture
- Check out some other cool GAN projects
`here <https://github.com/nashory/gans-awesome-applications>`__
- Create GANs that generate
`music <https://deepmind.com/blog/wavenet-generative-model-raw-audio/>`__
# Kernel-based Time-varying Regression - Part II
The previous tutorial covered the basic syntax and structure of **KTR** (also called **BTVC**); a time series was fitted with a KTR model accounting for trend and seasonality. In this tutorial a KTR model is fit with trend, seasonality, and additional regressors. To summarize part 1, **KTR** considers a time series as an additive combination of local trend, seasonality, and additional regressors. The coefficients for all three components are allowed to vary over time, and this time variation is modeled using kernel smoothing of latent variables. This flexibility is an advantage over models with static regression coefficients.
This tutorial covers:
1. KTR model structure with regression
2. syntax to initialize, fit and predict a model with regressors
3. visualization of regression coefficients
```
import pandas as pd
import numpy as np
from math import pi
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTR
from orbit.diagnostics.plot import plot_predicted_components
from orbit.utils.plot import get_orbit_style
from orbit.constants.palette import OrbitPalette
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.5f' % x)
orbit_style = get_orbit_style()
plt.style.use(orbit_style);
print(orbit.__version__)
```
## Model Structure
This section gives the mathematical structure of the KTR model. In short, it considers a time series ($y_t$) as the linear combination of three parts: the local-trend ($l_t$), seasonality ($s_t$), and regression ($r_t$) terms at time $t$. That is
$$y_t = l_t + s_t + r_t + \epsilon_t, ~ t = 1,\cdots, T,$$
where
- $\epsilon_t$s comprise a stationary random error process.
- $r_t$ is the regression component which can be further expressed as $\sum_{i=1}^{I} {x_{i,t}\beta_{i, t}}$ with covariate $x$ and coefficient $\beta$ on indexes $i,t$
For details on $l_t$ and $s_t$, please refer to **Part I**.
Recall in **KTR**, we express coefficients as
$$B=K b^T$$
where
- *coefficient matrix* $\text{B}$ has size $T \times P$ with rows equal to the $\beta_t$
- *knot matrix* $b$ with size $P\times J$; each entry is a latent variable $b_{p, j}$. The $b_j$ can be viewed as the "knots" from the perspective of spline regression and $j$ is a time index such that $t_j \in [1, \cdots, T]$.
- *kernel matrix* $K$ with size $T\times J$ where the $t$th row and $j$th column can be viewed as the normalized weight $k(t_j, t) / \sum_{j=1}^{J} k(t_j, t)$
In regression, we generate the matrix $K$ with Gaussian kernel $k_\text{reg}$ as such:
$k_\text{reg}(t, t_j;\rho) = \exp ( -\frac{(t-t_j)^2}{2\rho^2} ),$
where $\rho$ is the scale hyper-parameter.
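As a sketch, the kernel matrix $K$ and the coefficient matrix $B = Kb^\top$ can be computed directly with NumPy (the sizes, knot locations, and knot values below are toy choices for illustration, not Orbit's internals):

```python
import numpy as np

def gaussian_kernel_matrix(T, knot_times, rho):
    """Row-normalized Gaussian kernel weights k_reg(t, t_j; rho)."""
    t = np.arange(1, T + 1)[:, None]              # shape (T, 1)
    tj = np.asarray(knot_times)[None, :]          # shape (1, J)
    K = np.exp(-(t - tj) ** 2 / (2 * rho ** 2))
    return K / K.sum(axis=1, keepdims=True)       # each row sums to 1

K = gaussian_kernel_matrix(T=10, knot_times=[1, 5, 10], rho=2.0)
# time-varying coefficients: B = K @ b.T for a latent knot matrix b (P x J)
b = np.ones((2, 3))                               # toy knot values
B = K @ b.T                                       # shape (T, P)
```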
## Data Simulation Module
In this example, we will use simulated data in order to have true regression coefficients for comparison. We simulate two data sets, each with three predictors:
- random walk
- sine-cosine like
Note the data are random, so it may be worthwhile to repeat the next few steps a few times to see how different data sets behave.
### Random Walk Simulated Dataset
```
def sim_data_seasonal(n, RS):
""" coefficients curve are sine-cosine like
"""
np.random.seed(RS)
# make the time varing coefs
tau = np.arange(1, n+1)/n
data = pd.DataFrame({
'tau': tau,
'date': pd.date_range(start='1/1/2018', periods=n),
'beta1': 2 * tau,
'beta2': 1.01 + np.sin(2*pi*tau),
'beta3': 1.01 + np.sin(4*pi*(tau-1/8)),
'x1': np.random.normal(0, 10, size=n),
'x2': np.random.normal(0, 10, size=n),
'x3': np.random.normal(0, 10, size=n),
'trend': np.cumsum(np.concatenate((np.array([1]), np.random.normal(0, 0.1, n-1)))),
'error': np.random.normal(0, 1, size=n) #stats.t.rvs(30, size=n),#
})
data['y'] = data.x1 * data.beta1 + data.x2 * data.beta2 + data.x3 * data.beta3 + data.error
return data
def sim_data_rw(n, RS, p=3):
""" coefficients curve are random walk like
"""
np.random.seed(RS)
# initializing coefficients at zeros, simulate all coefficient values
lev = np.cumsum(np.concatenate((np.array([5.0]), np.random.normal(0, 0.01, n-1))))
beta = np.concatenate(
[np.random.uniform(0.05, 0.12, size=(1,p)),
np.random.normal(0.0, 0.01, size=(n-1,p))],
axis=0)
beta = np.cumsum(beta, 0)
# simulate regressors
covariates = np.random.normal(0, 10, (n, p))
# observation with noise
y = lev + (covariates * beta).sum(-1) + 0.3 * np.random.normal(0, 1, n)
regressor_col = ['x{}'.format(pp) for pp in range(1, p+1)]
data = pd.DataFrame(covariates, columns=regressor_col)
beta_col = ['beta{}'.format(pp) for pp in range(1, p+1)]
beta_data = pd.DataFrame(beta, columns=beta_col)
data = pd.concat([data, beta_data], axis=1)
data['y'] = y
data['date'] = pd.date_range(start='1/1/2018', periods=len(y))
return data
rw_data = sim_data_rw(n=300, RS=2021, p=3)
rw_data.head(10)
```
### Sine-Cosine Like Simulated Dataset
```
sc_data = sim_data_seasonal(n=80, RS=2021)
sc_data.head(10)
```
## Fitting a Model with Regressors
The metadata for simulated data sets.
```
# num of predictors
p = 3
regressor_col = ['x{}'.format(pp) for pp in range(1, p + 1)]
response_col = 'y'
date_col='date'
```
As in **Part I**, KTR follows the sklearn model API style. First, an instance of the Orbit class `KTR` is created. Second, the fit and predict methods are called for that instance. Besides providing metadata such as `response_col`, `date_col` and `regressor_col`, there are additional args to specify the estimator and its settings. For details, please refer to other tutorials on the **Orbit** site.
```
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
```
Here `predict` has the additional argument `decompose=True`. This returns the components ($l_t$, $s_t$, and $r_t$) of the regression along with the prediction.
```
ktr.fit(df=rw_data)
ktr.predict(df=rw_data, decompose=True).head(5)
```
## Visualization of Regression Coefficient Curves
The function `get_regression_coefs` extracts the coefficients (they will have central credibility intervals if the argument `include_ci=True` is used).
```
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
coef_mid.head(5)
```
Because this is simulated data it is possible to overlay the estimate with the true coefficients.
```
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, rw_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1,0.5));
```
To plot the coefficients, use the function `plot_regression_coefs` from the KTR class.
```
ktr.plot_regression_coefs(figsize=(10, 5), include_ci=True);
```
These types of time-varying coefficient detection problems are not new. Bayesian approaches include the R packages Bayesian Structural Time Series (a.k.a. **BSTS**) by Scott and Varian (2014) and **tvReg** by Isabel Casas and Ruben Fernandez-Casal (2021). Wu and Chiang (2000) take a frequentist approach.
For further studies on benchmarking coefficients detection, Ng, Wang and Dai (2021) provides a detailed comparison of **KTR** with other popular time-varying coefficients methods; **KTR** demonstrates superior performance in the random walk data simulation.
## Customizing Priors and Number of Knot Segments
To demonstrate how to specify the number of knots and priors, consider the sine-cosine like simulated dataset. Fitting this dataset is trickier, since there are obvious "change points" within the sine-cosine like curves, so the number and position of the knots matter more. In **KTR** there are a few arguments that can be leveraged to assign a priori knot attributes:
1. `regressor_init_knot_loc` is used to define the prior mean of the knot value. e.g. in this case, there is not a lot of prior knowledge so zeros are used.
2. The `regressor_init_knot_scale` and `regressor_knot_scale` are used to tune the prior sd of the global mean of the knot and the sd of each knot from the global mean respectively. These create a plausible range for the knot values.
3. The `regression_segments` defines the number of between-knot segments (the number of knots minus 1). The higher the number of segments, the more change points are possible.
```
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
regressor_init_knot_loc=[0] * len(regressor_col),
regressor_init_knot_scale=[10.0] * len(regressor_col),
regressor_knot_scale=[2.0] * len(regressor_col),
regression_segments=6,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
ktr.fit(df=sc_data)
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, sc_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1, 0.5));
```
Visualize the knots using the `plot_regression_coefs` function with `with_knot=True`.
```
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=True);
```
There are more ways to define knots for regression as well as for seasonality and trend (a.k.a. levels). These are described in **Part III**.
## References
1. Ng, Wang and Dai (2021). Bayesian Time Varying Coefficient Model with Applications to Marketing Mix Modeling, arXiv preprint arXiv:2106.03322
2. Isabel Casas and Ruben Fernandez-Casal (2021). tvReg: Time-Varying Coefficients Linear Regression for Single and Multi-Equations. https://CRAN.R-project.org/package=tvReg R package version 0.5.4.
3. Steven L Scott and Hal R Varian (2014). Predicting the present with bayesian structural time series. International Journal of Mathematical Modelling and Numerical Optimisation 5, 1-2 (2014), 4–23.
# Extension Input Data Validation
When using extensions in Fugue, you may want to validate the input data inside your code. Fugue provides a standard way to add such validation logic. Here is a simple example:
```
from typing import List, Dict, Any
# partitionby_has: a
# schema: a:int,ct:int
def get_count(df:List[Dict[str,Any]]) -> List[List[Any]]:
return [[df[0]["a"],len(df)]]
```
The commented-out line below would fail, because the hint `partitionby_has: a` requires the input dataframe to be pre-partitioned by at least column `a`.
```
from fugue import FugueWorkflow
with FugueWorkflow() as dag:
df = dag.df([[0,1],[1,1],[0,2]], "a:int,b:int")
# df.transform(get_count).show() # will fail because of no partition by
df.partition(by=["a"]).transform(get_count).show()
df.partition(by=["b","a"]).transform(get_count).show() # b,a is a super set of a
```
You can also have multiple rules. The following requires the partition keys to contain `a`, and the presort to be exactly `b asc` (`b` is shorthand for `b asc`):
```
from typing import List, Dict, Any
# partitionby_has: a
# presort_is: b
# schema: a:int,ct:int
def get_count2(df:List[Dict[str,Any]]) -> List[List[Any]]:
return [[df[0]["a"],len(df)]]
from fugue import FugueWorkflow
with FugueWorkflow() as dag:
df = dag.df([[0,1],[1,1],[0,2]], "a:int,b:int")
# df.partition(by=["a"]).transform(get_count).show() # will fail because of no presort
df.partition(by=["a"], presort="b asc").transform(get_count).show()
```
## Supported Validations
The following are all supported validations. **Compile time validations** happen when you construct the [FugueWorkflow](/dag.ipynb), while **runtime validations** happen during execution. Compile time validations are very useful for quickly identifying logical issues. Runtime validations surface later, but they are still useful. On the Fugue level, we are trying to move runtime validations to compile time as much as we can.
Rule | Description | Compile Time | Order Matters | Examples
:---|:---|:---|:---|:---
**partitionby_has** | assert the input dataframe is prepartitioned, and the partition keys contain these values | Yes | No | `partitionby_has: a,b` means the partition keys must contain `a` and `b` columns
**partitionby_is** | assert the input dataframe is prepartitioned, and the partition keys are exactly these values | Yes | Yes | `partitionby_is: a,b` means the partition keys must contain and only contain `a` and `b` columns
**presort_has** | assert the input dataframe is prepartitioned and [presorted](./partition.ipynb#Presort), and the presort keys contain these values | Yes | No | `presort_has: a,b desc` means the presort contains `a asc` and `b desc` (`a == a asc`)
**presort_is** | assert the input dataframe is prepartitioned and [presorted](./partition.ipynb#Presort), and the presort keys are exactly these values | Yes | Yes | `presort_is: a,b desc` means the presort is exactly `a asc, b desc`
**schema_has** | assert input dataframe schema has certain keys or key type pairs | No | No | `schema_has: a,b:str` means input dataframe schema contains column `a` regardless of type, and `b` of type string, order doesn't matter. So `b:str,a:int` is valid, `b:int,a:int` is invalid because of `b` type, and `b:str` is invalid because `a` is not in the schema
**schema_is** | assert input dataframe schema is exactly this value (the value must be a [schema expression](./schema_dataframes.ipynb#Schema)) | No | Yes | `schema_is: a:int,b:str`, then `b:str,a:int` is invalid because of order, `a:str,b:str` is invalid because of `a` type
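The `schema_has` / `schema_is` semantics in the table above can be sketched in plain Python (illustrative only; this is not Fugue's actual implementation):

```python
# rule entries look like "a" (name only) or "b:str" (name and type),
# matching the schema expression syntax in the examples above.
def schema_has(schema, rule):
    # order doesn't matter; missing type means "any type"
    for item in rule.split(","):
        name, _, typ = item.partition(":")
        if name not in schema:
            return False
        if typ and schema[name] != typ:
            return False
    return True

def schema_is(schema, rule):
    # exact match, including column order
    return [f"{k}:{v}" for k, v in schema.items()] == rule.split(",")

s = {"b": "str", "a": "int"}
assert schema_has(s, "a,b:str")          # order doesn't matter
assert not schema_has(s, "b:int")        # wrong type for b
assert not schema_is(s, "a:int,b:str")   # order matters for schema_is
```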
## Extensions Compatibility
Extension Type | Supported | Not Supported
:---|:---|:---
Transformer | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None
CoTransformer | None | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is`
OutputTransformer | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None
OutputCoTransformer | None | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is`
Creator | N/A | N/A
Processor | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None
Outputter | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None
## How To Add Validations
It depends on how you write your extension: by comment, by decorator, or by interface. Feature-wise, they are equivalent.
## By Comment
```
from typing import List, Dict, Any
# partitionby_has: a
# schema: a:int,ct:int
def get_count2(df:List[Dict[str,Any]]) -> List[List[Any]]:
return [[df[0]["a"],len(df)]]
```
## By Decorator
```
import pandas as pd
from typing import List, Dict, Any
from fugue import processor, transformer
@transformer(schema="*", partitionby_has=["a","d"], presort_is="b, c desc")
def example1(df:pd.DataFrame) -> pd.DataFrame:
return df
@transformer(schema="*", partitionby_has="a,d", presort_is=["b",("c",False)])
def example2(df:pd.DataFrame) -> pd.DataFrame:
return df
# partitionby_has: a
# presort_is: b
@transformer(schema="*")
def example3(df:pd.DataFrame) -> pd.DataFrame:
return df
@processor(partitionby_has=["a","d"], presort_is="b, c desc")
def example4(df:pd.DataFrame) -> pd.DataFrame:
return df
```
## By Interface
In every extension, you can override `validation_rules`
```
from fugue import Transformer
class T(Transformer):
@property
def validation_rules(self):
return {
"partitionby_has": ["a"]
}
def get_output_schema(self, df):
return df.schema
def transform(self, df):
return df
```
##### Copyright 2020 The TF-Agents Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Tutorial on Multi Armed Bandits in TF-Agents
### Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/bandits_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/bandits_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
### Setup
If you haven't installed the following dependencies, run:
```
!pip install tf-agents
```
### Imports
```
import abc
import numpy as np
import tensorflow as tf
from tf_agents.agents import tf_agent
from tf_agents.drivers import driver
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.policies import tf_policy
from tf_agents.specs import array_spec
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
from tf_agents.trajectories import trajectory
from tf_agents.trajectories import policy_step
# Clear any leftover state from previous colabs run.
# (This is not necessary for normal programs.)
tf.compat.v1.reset_default_graph()
tf.compat.v1.enable_resource_variables()
tf.compat.v1.enable_v2_behavior()
nest = tf.compat.v2.nest
```
# Introduction
The Multi-Armed Bandit problem (MAB) is a special case of Reinforcement Learning: an agent collects rewards in an environment by taking some actions after observing some state of the environment. The main difference between general RL and MAB is that in MAB, we assume that the action taken by the agent does not influence the next state of the environment. Therefore, agents do not model state transitions, credit rewards to past actions, or "plan ahead" to get to reward-rich states.
As in other RL domains, the goal of a MAB *agent* is to find a *policy* that collects as much reward as possible. It would be a mistake, however, to always try to exploit the action that promises the highest reward, because then there is a chance that we miss out on better actions if we do not explore enough. This is the main problem to be solved in MAB, often called the *exploration-exploitation dilemma*.
Bandit environments, policies, and agents for MAB can be found in subdirectories of [tf_agents/bandits](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits).
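As a minimal, self-contained illustration of the exploration-exploitation trade-off (independent of TF-Agents; the arm reward probabilities below are made up), here is an epsilon-greedy agent on a toy Bernoulli bandit:

```python
import random

# Explore a random arm with probability eps; otherwise exploit the arm
# with the best empirical mean reward so far.
def run_bandit(true_probs, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_probs)
    values = [0.0] * len(true_probs)     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_probs))                    # explore
        else:
            arm = max(range(len(true_probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = run_bandit([0.2, 0.5, 0.8])
```

With enough steps, the agent ends up pulling the best arm (probability 0.8) most often, while the forced exploration keeps the estimates of the other arms from going stale.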
# Environments
In TF-Agents, the environment class serves the role of giving information on the current state (this is called **observation** or **context**), receiving an action as input, performing a state transition, and outputting a reward. This class also takes care of resetting when an episode ends, so that a new episode can start. This is realized by calling a `reset` function when a state is labelled as "last" of the episode.
For more details, see the [TF-Agents environments tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb).
As mentioned above, MAB differs from general RL in that actions do not influence the next observation. Another difference is that in Bandits, there are no "episodes": every time step starts with a new observation, independently of previous time steps.
To make sure observations are independent and to abstract away the concept of RL episodes, we introduce subclasses of `PyEnvironment` and `TFEnvironment`: [BanditPyEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/bandit_py_environment.py) and [BanditTFEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/bandit_tf_environment.py). These classes expose two private member functions that remain to be implemented by the user:
```python
@abc.abstractmethod
def _observe(self):
```
and
```python
@abc.abstractmethod
def _apply_action(self, action):
```
The `_observe` function returns an observation. Then, the policy chooses an action based on this observation. The `_apply_action` receives that action as an input, and returns the corresponding reward. These private member functions are called by the functions `reset` and `step`, respectively.
```
class BanditPyEnvironment(py_environment.PyEnvironment):

  def __init__(self, observation_spec, action_spec):
    self._observation_spec = observation_spec
    self._action_spec = action_spec
    super(BanditPyEnvironment, self).__init__()

  # Helper functions.
  def action_spec(self):
    return self._action_spec

  def observation_spec(self):
    return self._observation_spec

  def _empty_observation(self):
    return tf.nest.map_structure(lambda x: np.zeros(x.shape, x.dtype),
                                 self.observation_spec())

  # These two functions below should not be overridden by subclasses.
  def _reset(self):
    """Returns a time step containing an observation."""
    return ts.restart(self._observe(), batch_size=self.batch_size)

  def _step(self, action):
    """Returns a time step containing the reward for the action taken."""
    reward = self._apply_action(action)
    return ts.termination(self._observe(), reward)

  # These two functions below are to be implemented in subclasses.
  @abc.abstractmethod
  def _observe(self):
    """Returns an observation."""

  @abc.abstractmethod
  def _apply_action(self, action):
    """Applies `action` to the Environment and returns the corresponding reward.
    """
```
The above interim abstract class implements `PyEnvironment`'s `_reset` and `_step` functions and exposes the abstract functions `_observe` and `_apply_action` to be implemented by subclasses.
## A Simple Example Environment Class
The following class gives a very simple environment for which the observation is a random integer between -2 and 2, there are 3 possible actions (0, 1, 2), and the reward is the product of the action and the observation.
```
class SimplePyEnvironment(BanditPyEnvironment):

  def __init__(self):
    action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
    observation_spec = array_spec.BoundedArraySpec(
        shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')
    super(SimplePyEnvironment, self).__init__(observation_spec, action_spec)

  def _observe(self):
    self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
    return self._observation

  def _apply_action(self, action):
    return action * self._observation
```
Now we can use this environment to get observations, and receive rewards for our actions.
```
environment = SimplePyEnvironment()
observation = environment.reset().observation
print("observation: %d" % observation)
action = 2 #@param
print("action: %d" % action)
reward = environment.step(action).reward
print("reward: %f" % reward)
```
## TF Environments
One can define a bandit environment by subclassing `BanditTFEnvironment`, or, similarly to RL environments, one can define a `BanditPyEnvironment` and wrap it with `TFPyEnvironment`. For the sake of simplicity, we go with the latter option in this tutorial.
```
tf_environment = tf_py_environment.TFPyEnvironment(environment)
```
# Policies
A *policy* in a bandit problem works the same way as in an RL problem: it provides an action (or a distribution of actions), given an observation as input.
For more details, see the [TF-Agents Policy tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/3_policies_tutorial.ipynb).
As with environments, there are two ways to construct a policy: One can create a `PyPolicy` and wrap it with `TFPyPolicy`, or directly create a `TFPolicy`. Here we elect to go with the direct method.
Since this example is quite simple, we can define the optimal policy manually. The action only depends on the sign of the observation: 0 when the observation is negative and 2 when it is positive.
```
class SignPolicy(tf_policy.TFPolicy):

  def __init__(self):
    observation_spec = tensor_spec.BoundedTensorSpec(
        shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
    time_step_spec = ts.time_step_spec(observation_spec)

    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=2)

    super(SignPolicy, self).__init__(time_step_spec=time_step_spec,
                                     action_spec=action_spec)

  def _distribution(self, time_step):
    pass

  def _variables(self):
    return ()

  def _action(self, time_step, policy_state, seed):
    observation_sign = tf.cast(tf.sign(time_step.observation[0]), dtype=tf.int32)
    action = observation_sign + 1
    return policy_step.PolicyStep(action, policy_state)
```
Now we can request an observation from the environment, call the policy to choose an action, then the environment will output the reward:
```
sign_policy = SignPolicy()

current_time_step = tf_environment.reset()
print('Observation:')
print(current_time_step.observation)

action = sign_policy.action(current_time_step).action
print('Action:')
print(action)

reward = tf_environment.step(action).reward
print('Reward:')
print(reward)
```
The way bandit environments are implemented ensures that every time we take a step, we not only receive the reward for the action we took, but also the next observation.
```
step = tf_environment.reset()
action = 1
next_step = tf_environment.step(action)
reward = next_step.reward
next_observation = next_step.observation
print("Reward: ")
print(reward)
print("Next observation:")
print(next_observation)
```
# Agents
Now that we have bandit environments and bandit policies, it is time to define bandit agents, which take care of changing the policy based on training samples.
The API for bandit agents does not differ from that of RL agents: the agent just needs to implement the `_initialize` and `_train` methods, and define a `policy` and a `collect_policy`.
## A More Complicated Environment
Before we write our bandit agent, we need an environment that is a bit harder to figure out. To spice things up a little, the next environment will either always give `reward = observation * action` or always give `reward = -observation * action`. This will be decided when the environment is initialized.
```
class TwoWayPyEnvironment(BanditPyEnvironment):

  def __init__(self):
    action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
    observation_spec = array_spec.BoundedArraySpec(
        shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')

    # Flipping the sign with probability 1/2.
    self._reward_sign = 2 * np.random.randint(2) - 1
    print("reward sign:")
    print(self._reward_sign)

    super(TwoWayPyEnvironment, self).__init__(observation_spec, action_spec)

  def _observe(self):
    self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
    return self._observation

  def _apply_action(self, action):
    return self._reward_sign * action * self._observation[0]

two_way_tf_environment = tf_py_environment.TFPyEnvironment(TwoWayPyEnvironment())
```
## A More Complicated Policy
A more complicated environment calls for a more complicated policy. We need a policy that detects the behavior of the underlying environment. There are three situations that the policy needs to handle:
0. The agent has not yet detected which version of the environment is running.
1. The agent detected that the original version of the environment is running.
2. The agent detected that the flipped version of the environment is running.
We define a `tf.Variable` named `_situation` to store this information, encoded as an integer in `[0, 2]`, then make the policy behave accordingly.
```
class TwoWaySignPolicy(tf_policy.TFPolicy):
  def __init__(self, situation):
    observation_spec = tensor_spec.BoundedTensorSpec(
        shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=2)
    time_step_spec = ts.time_step_spec(observation_spec)
    self._situation = situation
    super(TwoWaySignPolicy, self).__init__(time_step_spec=time_step_spec,
                                           action_spec=action_spec)

  def _distribution(self, time_step):
    pass

  def _variables(self):
    return [self._situation]

  def _action(self, time_step, policy_state, seed):
    sign = tf.cast(tf.sign(time_step.observation[0, 0]), dtype=tf.int32)

    def case_unknown_fn():
      # Choose 1 so that we get information on the sign.
      return tf.constant(1, shape=(1,))

    # Choose 0 or 2, depending on the situation and the sign of the observation.
    def case_normal_fn():
      return tf.constant(sign + 1, shape=(1,))

    def case_flipped_fn():
      return tf.constant(1 - sign, shape=(1,))

    cases = [(tf.equal(self._situation, 0), case_unknown_fn),
             (tf.equal(self._situation, 1), case_normal_fn),
             (tf.equal(self._situation, 2), case_flipped_fn)]
    action = tf.case(cases, exclusive=True)
    return policy_step.PolicyStep(action, policy_state)
```
## The Agent
Now it's time to define the agent that detects the sign of the environment and sets the policy appropriately.
```
class SignAgent(tf_agent.TFAgent):
  def __init__(self):
    self._situation = tf.compat.v2.Variable(0, dtype=tf.int32)
    policy = TwoWaySignPolicy(self._situation)
    time_step_spec = policy.time_step_spec
    action_spec = policy.action_spec
    super(SignAgent, self).__init__(time_step_spec=time_step_spec,
                                    action_spec=action_spec,
                                    policy=policy,
                                    collect_policy=policy,
                                    train_sequence_length=None)

  def _initialize(self):
    return tf.compat.v1.variables_initializer(self.variables)

  def _train(self, experience, weights=None):
    observation = experience.observation
    action = experience.action
    reward = experience.reward

    # We only need to change the value of the situation variable if it is
    # unknown (0) right now, and we can infer the situation only if the
    # observation is not 0.
    needs_action = tf.logical_and(tf.equal(self._situation, 0),
                                  tf.not_equal(reward, 0))

    def new_situation_fn():
      """This returns either 1 or 2, depending on the signs."""
      return (3 - tf.sign(tf.cast(observation[0, 0, 0], dtype=tf.int32) *
                          tf.cast(action[0, 0], dtype=tf.int32) *
                          tf.cast(reward[0, 0], dtype=tf.int32))) / 2

    new_situation = tf.cond(needs_action,
                            new_situation_fn,
                            lambda: self._situation)
    new_situation = tf.cast(new_situation, tf.int32)
    tf.compat.v1.assign(self._situation, new_situation)
    return tf_agent.LossInfo((), ())

sign_agent = SignAgent()
```
In the above code, the agent defines the policy, and the variable `_situation` is shared by the agent and the policy.
Also, the parameter `experience` of the `_train` function is a trajectory:
# Trajectories
In TF-Agents, `trajectories` are named tuples that contain samples from previous steps taken. These samples are then used by the agent to train and update the policy. In RL, trajectories must contain information about the current state, the next state, and whether the current episode has ended. Since in the Bandit world we do not need these things, we set up a helper function to create a trajectory:
```
# We need to add another dimension here because the agent expects the
# trajectory of shape [batch_size, time, ...], but in this tutorial we assume
# that both batch size and time are 1. Hence all the expand_dims.
def trajectory_for_bandit(initial_step, action_step, final_step):
  return trajectory.Trajectory(observation=tf.expand_dims(initial_step.observation, 0),
                               action=tf.expand_dims(action_step.action, 0),
                               policy_info=action_step.info,
                               reward=tf.expand_dims(final_step.reward, 0),
                               discount=tf.expand_dims(final_step.discount, 0),
                               step_type=tf.expand_dims(initial_step.step_type, 0),
                               next_step_type=tf.expand_dims(final_step.step_type, 0))
```
# Training an Agent
Now all the pieces are ready for training our bandit agent.
```
step = two_way_tf_environment.reset()
for _ in range(10):
  action_step = sign_agent.collect_policy.action(step)
  next_step = two_way_tf_environment.step(action_step.action)
  experience = trajectory_for_bandit(step, action_step, next_step)
  print(experience)
  sign_agent.train(experience)
  step = next_step
```
From the output one can see that after the second step (unless the observation was 0 in the first step), the policy chooses the action in the right way and thus the reward collected is always non-negative.
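The inference inside `_train` rests on one arithmetic trick: in the normal environment the product `observation * action * reward` equals `(observation * action)**2`, which is non-negative, while in the flipped environment it is the negative of that square. Hence `(3 - sign(observation * action * reward)) / 2` maps the two cases to situations 1 and 2. A plain-Python sketch of the same logic (the function name is ours, not part of the tutorial code):

```python
import numpy as np

def infer_situation(observation, action, reward):
    """Return 1 for the normal environment, 2 for the flipped one.

    Only valid when observation, action, and reward are all nonzero.
    """
    return int((3 - np.sign(observation * action * reward)) // 2)

# Normal environment: reward = observation * action.
obs, action = -2, 1
assert infer_situation(obs, action, obs * action) == 1

# Flipped environment: reward = -observation * action.
assert infer_situation(obs, action, -obs * action) == 2
```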
# A Real Contextual Bandit Example
In the rest of this tutorial, we use the pre-implemented [environments](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/) and [agents](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/agents/) of the TF-Agents Bandits library.
```
# Imports for example.
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.bandits.metrics import tf_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.replay_buffers import tf_uniform_replay_buffer
import matplotlib.pyplot as plt
```
## Stationary Stochastic Environment with Linear Payoff Functions
The environment used in this example is the [StationaryStochasticPyEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_py_environment.py). This environment takes as parameter a (usually noisy) function for giving observations (context), and for every arm takes an (also noisy) function that computes the reward based on the given observation. In our example, we sample the context uniformly from a d-dimensional cube, and the reward functions are linear functions of the context, plus some Gaussian noise.
```
batch_size = 2 # @param
arm0_param = [-3, 0, 1, -2] # @param
arm1_param = [1, -2, 3, 0] # @param
arm2_param = [0, 0, 1, 1] # @param
def context_sampling_fn(batch_size):
  """Contexts from [-10, 10]^4."""
  def _context_sampling_fn():
    return np.random.randint(-10, 10, [batch_size, 4]).astype(np.float32)
  return _context_sampling_fn

class LinearNormalReward(object):
  """A class that acts as linear reward function when called."""
  def __init__(self, theta, sigma):
    self.theta = theta
    self.sigma = sigma
  def __call__(self, x):
    mu = np.dot(x, self.theta)
    return np.random.normal(mu, self.sigma)

arm0_reward_fn = LinearNormalReward(arm0_param, 1)
arm1_reward_fn = LinearNormalReward(arm1_param, 1)
arm2_reward_fn = LinearNormalReward(arm2_param, 1)

environment = tf_py_environment.TFPyEnvironment(
    sspe.StationaryStochasticPyEnvironment(
        context_sampling_fn(batch_size),
        [arm0_reward_fn, arm1_reward_fn, arm2_reward_fn],
        batch_size=batch_size))
```
## The LinUCB Agent
The agent below implements the [LinUCB](http://rob.schapire.net/papers/www10.pdf) algorithm.
```
observation_spec = tensor_spec.TensorSpec([4], tf.float32)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
    dtype=tf.int32, shape=(), minimum=0, maximum=2)

agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,
                                     action_spec=action_spec)
```
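To get a feel for what the agent does under the hood, here is a minimal NumPy sketch of the disjoint LinUCB algorithm from the linked paper, run on the same linear arms as above (noise-free, for brevity). The class and parameter names are our own, not the TF-Agents API:

```python
import numpy as np

class MiniLinUCB:
    """Disjoint LinUCB: one ridge-regression model per arm."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # regularized Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # reward-weighted contexts

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # per-arm coefficient estimate
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # exploration bonus
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy run on the three linear arms used in this example.
params = np.array([[-3, 0, 1, -2], [1, -2, 3, 0], [0, 0, 1, 1]], dtype=float)
agent_sketch = MiniLinUCB(n_arms=3, dim=4)
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-10, 10, size=4)
    arm = agent_sketch.choose(x)
    agent_sketch.update(arm, x, params[arm] @ x)
```

Each arm keeps its own ridge-regression estimate; the `alpha * sqrt(x @ A_inv @ x)` term is the upper-confidence bonus that shrinks as an arm accumulates data, so exploration tapers off automatically.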
## Regret Metric
Bandits' most important metric is *regret*, calculated as the difference between the reward collected by the agent and the expected reward of an oracle policy that has access to the reward functions of the environment. The [RegretMetric](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py) thus needs a *baseline_reward_fn* function that calculates the best achievable expected reward given an observation. For our example, we need to take the maximum of the no-noise equivalents of the reward functions that we already defined for the environment.
```
def compute_optimal_reward(observation):
  expected_reward_for_arms = [
      tf.linalg.matvec(observation, tf.cast(arm0_param, dtype=tf.float32)),
      tf.linalg.matvec(observation, tf.cast(arm1_param, dtype=tf.float32)),
      tf.linalg.matvec(observation, tf.cast(arm2_param, dtype=tf.float32))]
  optimal_action_reward = tf.reduce_max(expected_reward_for_arms, axis=0)
  return optimal_action_reward

regret_metric = tf_metrics.RegretMetric(compute_optimal_reward)
```
## Training
Now we put together all the components that we introduced above: the environment, the policy, and the agent. We run the policy on the environment and output training data with the help of a *driver*, and train the agent on the data.
Note that there are two parameters that together specify the number of steps taken. `num_iterations` specifies how many times we run the trainer loop, while the driver will take `steps_per_loop` steps per iteration. The main reason behind keeping both of these parameters is that some operations are done per iteration, while some are done by the driver in every step. For example, the agent's `train` function is only called once per iteration. The trade-off here is that if we train more often then our policy is "fresher", on the other hand, training in bigger batches might be more time efficient.
```
num_iterations = 90 # @param
steps_per_loop = 1 # @param

replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.policy.trajectory_spec,
    batch_size=batch_size,
    max_length=steps_per_loop)

observers = [replay_buffer.add_batch, regret_metric]

driver = dynamic_step_driver.DynamicStepDriver(
    env=environment,
    policy=agent.collect_policy,
    num_steps=steps_per_loop * batch_size,
    observers=observers)

regret_values = []

for _ in range(num_iterations):
  driver.run()
  loss_info = agent.train(replay_buffer.gather_all())
  replay_buffer.clear()
  regret_values.append(regret_metric.result())

plt.plot(regret_values)
plt.ylabel('Average Regret')
plt.xlabel('Number of Iterations')
```
After running the last code snippet, the resulting plot (hopefully) shows that the average regret is going down as the agent is trained and the policy gets better in figuring out what the right action is, given the observation.
# What's Next?
To see more working examples, please see the [bandits/agents/examples](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/agents/examples) directory that has ready-to-run examples for different agents and environments.
The TF-Agents library is also capable of handling Multi-Armed Bandits with per-arm features. To that end, we refer the reader to the per-arm bandit [tutorial](https://github.com/tensorflow/agents/blob/master/tf_agents/g3doc/tutorials/per_arm_bandits_tutorial.ipynb).
# Week 4
Yay! It's week 4. Today we'll keep things light.
I've noticed that many of you are struggling a bit to keep up and are still working on exercises from the previous weeks. Thus, this week we only have two components, with no lectures and very little reading.
## Overview
* An exercise on visualizing geodata using a different set of tools from the ones we played with during Lecture 2.
* Thinking about visualization, data quality, and binning. Why ***looking at the details of the data before applying fancy methods*** is often important.
## Part 1: Visualizing geo-data
It turns out that `plotly` (which we used during Week 2) is not the only way of working with geo-data. There are many different ways to go about it. (The hard-core PhD and PostDoc researchers in my group simply use matplotlib, since that provides more control. For an example of that kind of thing, check out [this one](https://towardsdatascience.com/visualizing-geospatial-data-in-python-e070374fe621).)
Today, we'll try another library for geodata called "[Folium](https://github.com/python-visualization/folium)". It's good for you all to try out a few different libraries - remember that data visualization and analysis in Python is all about the ability to use many different tools.
The exercise below is based on the code illustrated in this nice [tutorial](https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data), so let us start by taking a look at that one.
*Reading*. Read through the following tutorial
* "How to: Folium for maps, heatmaps & time data". Get it here: https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data
* (Optional) There are also some nice tricks in "Spatial Visualizations and Analysis in Python with Folium". Read it here: https://towardsdatascience.com/data-101s-spatial-visualizations-and-analysis-in-python-with-folium-39730da2adf
> *Exercise 1.1*: A new take on geospatial data.
>
>A couple of weeks ago (Part 3 of Week 2), we worked with spatial data by using color-intensity of shapefiles to show the counts of certain crimes within those individual areas. Today, we look at studying geospatial data by plotting raw data points as well as heatmaps on top of actual maps.
>
> * First start by plotting a map of San Francisco with a nice tight zoom. Simply use the command `folium.Map([lat, lon], zoom_start=13)`, where you'll have to look up San Francisco's longitude and latitude.
> * Next, use the coordinates for SF City Hall `37.77919, -122.41914` to indicate its location on the map with a nice, pop-up enabled marker. (In the screenshot below, I used the black & white Stamen tiles, because they look cool.)
> <img src="https://raw.githubusercontent.com/suneman/socialdata2022/main/files/city_hall_2022.png" alt="drawing" width="600"/>
> * Now, let's plot some more data (no need for pop-ups this time). Select a couple of months of data for `'DRUG/NARCOTIC'` and draw a little dot for each arrest for those two months. You could, for example, choose June-July 2016, but you can choose anything you like - the main concern is to not have too many points as this uses a lot of memory and makes Folium behave non-optimally.
> We can call this kind of visualization a *point scatter plot*.
Ok. Time for a little break. Note that a nice thing about Folium is that you can zoom in and out of the maps.
> *Exercise 1.2*: Heatmaps.
> * Now, let's play with **heatmaps**. You can figure out the appropriate commands by grabbing code from the main [tutorial](https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data) and modifying it to suit your needs.
> * To create your first heatmap, grab all arrests for the category `'SEX OFFENSES, NON FORCIBLE'` across all time. Play with parameters to get plots you like.
> * Now, comment on the differences between scatter plots and heatmaps.
> - What can you see using the scatter-plots that you can't see using the heatmaps?
> - And *vice versa*: what do the heatmaps help you see that's difficult to distinguish in the scatter-plots?
> * Play around with the various parameters for heatmaps. You can find a list here: https://python-visualization.github.io/folium/plugins.html
> * Comment on the effect of the various parameters on the heatmaps. How do they change the picture? (At least discuss `radius` and `blur`.)
> For one combination of settings, my heatmap plot looks like this.
> <img src="https://raw.githubusercontent.com/suneman/socialdata2022/main/files/crime_hot_spot.png" alt="drawing" width="600"/>
> * In that screenshot, I've (manually) highlighted a specific hotspot for this type of crime. Use your detective skills to find out what's going on in that building on the 800 block of Bryant street ... and explain in your own words.
(*Fun fact*: I remembered the concentration of crime-counts discussed at the end of this exercise from when I did the course back in 2016. It popped up when I used a completely different framework for visualizing geodata called [`geoplotlib`](https://github.com/andrea-cuttone/geoplotlib). You can spot it if you go to that year's [lecture 2](https://nbviewer.jupyter.org/github/suneman/socialdataanalysis2016/blob/master/lectures/Week3.ipynb), exercise 4.)
For the final element of working with heatmaps, let's now use the cool Folium functionality `HeatMapWithTime` to create a visualization of how the patterns of your favorite crime-type changes over time.
> *Exercise 1.3*: Heatmap movies. This exercise is a bit more independent than above - you get to make all the choices.
> * Start by choosing your favorite crimetype. Preferably one with spatial patterns that change over time (use your data exploration from the previous lectures to choose a good one).
> * Now, choose a time-resolution. You could use daily, weekly, or monthly datasets as the frames of your movie. Again, the goal is to find interesting temporal patterns to display. We want at least 20 frames, though.
> * Create the movie using `HeatMapWithTime`.
> * Comment on your results:
> - What patterns does your movie reveal?
> - Motivate/explain the reasoning behind your choice of crimetype and time-resolution.
## Part 2: Errors in the data. The importance of looking at raw (or close to raw) data.
We started the course by plotting simple histograms and bar plots that showed a lot of cool patterns. But sometimes binning can hide imprecision, irregularity, and simple errors in the data that could be misleading. In the work we've done so far, we've already come across at least three examples of this in the SF data.
1. In the temporal activity for `PROSTITUTION` something surprising is going on on Thursday. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2022/main/files/prostitution.png), where I've highlighted the phenomenon I'm talking about.
2. When we investigated the details of how the timestamps are recorded using jitter-plots, we saw that many more crimes were recorded e.g. on the hour, 15 minutes past the hour, and, to a lesser extent, at whole increments of 10 minutes. Crimes didn't appear to be recorded as frequently in between those round numbers. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2022/main/files/jitter.png), where I've highlighted the phenomenon I'm talking about.
3. And, today we saw that the Hall of Justice seemed to be an unlikely hotspot for sex offences. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2022/main/files/crime_hot_spot.png).
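A tiny synthetic sketch (made-up numbers, not the SF data) of the second phenomenon: per-minute counts make the round-number recording artifact obvious, but binning into quarter-hour buckets absorbs exactly one spike per bucket, so the counts look uniform again and the artifact disappears:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "minute of the hour" field: most records fall uniformly over the
# hour, but a fraction were rounded to 0, 15, 30, or 45 minutes when recorded.
uniform = rng.integers(0, 60, size=8_000)
rounded = rng.choice([0, 15, 30, 45], size=2_000)
minutes = np.concatenate([uniform, rounded])

per_minute = np.bincount(minutes, minlength=60)
spikes = per_minute[[0, 15, 30, 45]].mean()
background = np.delete(per_minute, [0, 15, 30, 45]).mean()
print(f"round minutes: {spikes:.0f} records on average, others: {background:.0f}")

# Re-binned into quarter-hour buckets, every bucket absorbs exactly one spike,
# so the counts look uniform and the recording artifact vanishes.
per_quarter = per_minute.reshape(4, 15).sum(axis=1)
print("per quarter-hour:", per_quarter)
```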
> *Exercise 2*: Data errors. The data errors we discovered above become difficult to notice when we aggregate data (and when we calculate mean values, as well as statistics more generally). Thus, when we visualize, errors become difficult to notice when binning the data. We explore this process in the exercise below.
>
>This last exercise for today has two parts:
> * In each of the examples above, describe in your own words how the data-errors I call attention to above can bias the binned versions of the data. Also, briefly mention how not noticing these errors can result in misconceptions about the underlying patterns of what's going on in San Francisco (and our modeling).
> * Find your own example of human noise in the data and visualize it.
# DSCI 525 - Web and Cloud Computing
***Milestone 4:*** In this milestone, you will deploy the machine learning model you trained in milestone 3.
You might want to go over [this sample project](https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone4/sampleproject.ipynb) and get it done before starting this milestone.
Milestone 4 checklist :
- [x] Use an EC2 instance.
- [x] Develop your API here in this notebook.
- [x] Copy it to ```app.py``` file in EC2 instance.
- [x] Run your API for other consumers and test among your colleagues.
- [x] Summarize your journey.
In this milestone, you will do certain things that you learned. For example...
- Login to the instance
- Work with Linux and use some basic commands
- Configure security groups so that it accepts your webserver requests from your laptop
- Configure AWS CLI
In some places, I explicitly mentioned these to remind you.
```
## Import all the packages that you need
from flask import Flask, request, jsonify
import joblib
import numpy as np
```
## 1. Develop your API
rubric={mechanics:45}
You probably got how to set up primary URL endpoints from the [sampleproject.ipynb](https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone4/sampleproject.ipynb) and have them process and return some data. Here we are going to create a new endpoint that accepts a POST request of the features required to run the machine learning model that you trained and saved in the last milestone (i.e., a user will post the 25 climate model rainfall predictions, the features, needed to predict with your machine learning model). Your code should then process this data, use your model to make a prediction, and return that prediction to the user. To get you started, I've given you a template that you should fill out to set up this functionality:
***NOTE:*** You won't be able to test the flask module (or the API you make here) unless you go through steps in ```2. Deploy your API```. However, you can make sure that you develop all your functions and inputs properly here.
```python
from flask import Flask, request, jsonify
import joblib
import numpy as np
## Import any other packages that are needed

app = Flask(__name__)

# 1. Load your model here
model = joblib.load(...)

# 2. Define a prediction function
def return_prediction(...):
    # format input_data here so that you can pass it to model.predict()
    return model.predict(...)

# 3. Set up home page using basic html
@app.route("/")
def index():
    # feel free to customize this if you like
    return """
    <h1>Welcome to our rain prediction service</h1>
    To use this service, make a JSON post request to the /predict url with 25 climate model outputs.
    """

# 4. define a new route which will accept POST requests and return model predictions
@app.route('/predict', methods=['POST'])
def rainfall_prediction():
    content = request.json  # this extracts the JSON content we sent
    prediction = return_prediction(...)
    results = {...}  # return whatever data you wish, it can be just the prediction
                     # or it can be the prediction plus the input data, it's up to you
    return jsonify(results)
```
```
from flask import Flask, request, jsonify
from flask.logging import create_logger
import logging

import joblib
import numpy as np
import pandas as pd

app = Flask(__name__)
logger = create_logger(app)
logger.setLevel(logging.INFO)
# app.run(debug=True)

# 1. Load your model here
model = joblib.load("model.joblib")

# 2. Define a prediction function
def return_prediction(payload):
    # format input_data here so that you can pass it to model.predict()
    df = pd.DataFrame(payload)
    return model.predict(df)

# 3. Set up home page using basic html
@app.route("/")
def index():
    # feel free to customize this if you like
    return """
    <h1>Welcome to our rain prediction service</h1>
    To use this service, make a JSON post request to the /predict url with 25 climate model outputs.
    """

# 4. define a new route which will accept POST requests and return model predictions
@app.route("/predict", methods=["POST"])
def rainfall_prediction():
    content = request.json  # this extracts the JSON content we sent
    logger.info(f"Making prediction for {content}")
    prediction = return_prediction(content)
    # return whatever data you wish; it can be just the prediction
    # or the prediction plus the input data, it's up to you
    prediction = list(prediction)
    logger.info(f"Returning prediction {prediction}")
    return jsonify({"prediction": prediction})
```
## 2. Deploy your API
rubric={mechanics:40}
Once your API (app.py) is working, we're ready to deploy it! For this, do the following:
1. Setup an EC2 instance. Make sure you add a rule in security groups to accept `All TCP` connections from `Anywhere`. SSH into your EC2 instance from milestone2.
2. Make a file `app.py` file in your instance and copy what you developed above in there.
2.1 You can use the Linux editor using ```vi```. More details on vi Editor [here](https://www.guru99.com/the-vi-editor.html). Use your previous learnings, notes, mini videos, etc. You can copy code from your jupyter and paste it into `app.py`.
2.2 Alternatively, you can make a file on your laptop called `app.py` and copy it over to your EC2 instance using ```scp```. Eg: ```scp -r -i "ggeorgeAD.pem" ~/Desktop/app.py ubuntu@ec2-xxx.ca-central-1.compute.amazonaws.com:~/```
3. Download your model from s3 to your EC2 instance. You want to configure your S3 for this. Use your previous learnings, notes, mini videos, etc.
4. You should use a package manager to install the dependencies of your API, like `flask`, `joblib`, `sklearn`, etc...
4.1. (Additional help) you can install the required packages inside your terminal.
- Install conda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
- Install packages (there might be others):
conda install flask scikit-learn joblib
5. Now you're ready to start your service, go ahead and run `flask run --host=0.0.0.0 --port=8080`. This will make your service available at your EC2 instance's `Public IPv4 address` on port 8080. Please ensure that you run this from where ```app.py``` and ```model.joblib``` reside.
6. You can now access your service by typing your EC2 instance's `Public IPv4 address` appended with `:8080` into a browser, so something like `http://Public IPv4 address:8080`. From step 5, you might notice the flask output saying "Running on http://XXXX:8080/ (Press CTRL+C to quit)", where XXXX is the `Private IPv4 address`; you want to replace it with the `Public IPv4 address`.
7. You should use `curl` to send a post request to your service to make sure it's working as expected.
>EG: curl -X POST http://your_EC2_ip:8080/predict -d '{"data":[1,2,3,4,53,11,22,37,41,53,11,24,31,44,53,11,22,35,42,53,12,23,31,42,53]}' -H "Content-Type: application/json"
8. Now, what happens if you exit your connection with the EC2 instance? Can you still reach your service?
9. We could use several options to help us persist our server even after we exit our shell session. We'll be using `screen`. `screen` will allow us to create a separate session within which we can run `flask` and won't shut down when we exit the main shell session. Read [this](https://linuxize.com/post/how-to-use-linux-screen/) to learn more on ```screen```.
10. Now, create a new `screen` session (think of this as a new, separate shell) using: `screen -S myapi`. If you want to list already created sessions, do ```screen -list```. If you want to get into an existing one, do ```screen -x myapi```.
11. Within that session, start up your flask app. You can then detach from the session by pressing `Ctrl + A`, then `D`. Once you log back into the EC2 instance, you can reattach it using ```screen -x myapi```.
12. Feel free to exit your connection with the EC2 instance now and try reaccessing your service with `curl`. You should find that the service has now persisted!
13. ***CONGRATULATIONS!!!*** You have successfully got to the end of our milestones. Move to Task 3 and submit it.
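As an aside, the `curl` call in step 7 can be mirrored in Python with only the standard library. The host name below is a placeholder, and the payload should match whatever features your model expects:

```python
import json
import urllib.request

def build_predict_request(host, payload):
    # Hypothetical helper mirroring the curl call in step 7:
    # POST JSON to http://<host>:8080/predict
    return urllib.request.Request(
        f"http://{host}:8080/predict",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request("your_EC2_ip", {"data": [1, 2, 3, 4, 53]})
# urllib.request.urlopen(req) would send it once the service is running
```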
> https://github.com/UBC-MDS/dsci_525_group_22/blob/main/screenshots/api.png
```
from IPython.display import Image
Image(filename='../screenshots/api.png')
Image(filename='../screenshots/payload.png')
```
**Note** - We used different input values in the example above. The following files look more like the sample ipynb file and also use the same values as input.
1. https://github.com/UBC-MDS/dsci_525_group_22/blob/michelle/notebooks/Milestone4.ipynb
2. https://github.com/UBC-MDS/dsci_525_group_22/blob/milestone4-mahsa/notebooks/mahsa/Milestone4.ipynb
Below is the screenshot from one of the versions mentioned above.
> https://github.com/UBC-MDS/dsci_525_group_22/blob/main/screenshots/milestone4_curl_ms.png
```
Image("../screenshots/milestone4_curl_ms.png")
```
## 3. Summarize your journey from Milestone 1 to Milestone 4
rubric={mechanics:10}
>There is no format or structure on how you write this. (also, no minimum number of words). It's your choice on how well you describe it.
**Milestone 1**
In milestone 1, we learned about different methods of making local data loading faster, such as loading only the columns needed or using chunking. We also learned about file formats that are optimized for loading big data, for example Parquet. Parquet files use the Arrow engine and make use of projection/predicate pushdown to speed up reading columns and rows of data. The 'Arrow Exchange' method works best in terms of speed and implementation because it is compatible with both Python and R, and Arrow's serialization/deserialization overhead is minimal (a zero-copy process). By converting to the Arrow format, reading dataframes and analyzing the data was much faster for large files.
**Milestone 2**
In milestone 2 we were concerned with the basics of cloud computing and setting up simple cloud instances. We also focused on the interaction between stable long-term storage (S3) and the dynamic, ephemeral nature of computational power in the cloud. We practiced setting both of these services up and configuring them.
There is a mental hurdle for many in understanding what it means to connect remotely via Secure Shell - that you are connected to a computer just like your local terminal! So for beginners, understanding that is a big moment. Configuring our instances this way, and interacting with our S3 buckets are foundational cloud computing ideas that we explored and became comfortable with in this milestone.
**Milestone 3**
We put what we learnt in lectures 5 and 6 into practice in this milestone. We learnt more about scaling up and scaling out in data processing, how to set up EMR, the architecture of an EMR cluster, and the distinction between the master and the worker nodes. EMR is one of AWS's primary services for scaling out with the Hadoop framework and the Spark engine. We developed our machine learning model on the local machine with scikit-learn. For hyperparameter tuning, we built an AWS EMR cluster and learned how to spin one up with the elements we want from the Hadoop ecosystem.
**Milestone 4**
Milestone 4's primary goal was to gain hands-on experience creating REST APIs to serve our model and deploying it on an AWS EC2 instance. In this milestone, we created `app.py`, which adheres to the flask app naming convention. We added code in `app.py` for initializing the flask app, writing a function to provide predictions, adding a home page, and adding a route for prediction in flask. Then we copied `app.py` to AWS and served our model by running the flask app on the remote machine. The API was tested by using a curl command to send a request to the server with a data instance containing all of the features required by our model.
## 4. Submission instructions
rubric={mechanics:5}
In the textbox provided on Canvas please put a link where TAs can find the following-
- [x] This notebook with solution to ```1 & 3```
- [x] Screenshot from
- [x] Output after trying curl. Here is a [sample](https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone4/images/curl_deploy_sample.png). This is just an example; your input/output doesn't have to look like this, you can design the way you like. But at a minimum, it should show your prediction value.
# Inference with your model
This is the third and final tutorial of our [beginner tutorial series](https://github.com/awslabs/djl/tree/master/jupyter/tutorial) that will take you through creating, training, and running inference on a neural network. In this tutorial, you will learn how to execute your image classification model for a production system.
In the [previous tutorial](02_train_your_first_model.ipynb), you successfully trained your model. Now, we will learn how to implement a `Translator` to convert between POJO and `NDArray` as well as a `Predictor` to run inference.
## Preparation
This tutorial requires the installation of the Java Jupyter Kernel. To install the kernel, see the [Jupyter README](https://github.com/awslabs/djl/blob/master/jupyter/README.md).
```
// Add the snapshot repository to get the DJL snapshot artifacts
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
// Add the maven dependencies
%maven ai.djl:api:0.6.0
%maven ai.djl:model-zoo:0.6.0
%maven ai.djl.mxnet:mxnet-engine:0.6.0
%maven ai.djl.mxnet:mxnet-model-zoo:0.6.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// for more MXNet library selection options
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-b
import java.awt.image.*;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;
import ai.djl.*;
import ai.djl.basicmodelzoo.basic.*;
import ai.djl.ndarray.*;
import ai.djl.modality.*;
import ai.djl.modality.cv.*;
import ai.djl.modality.cv.util.NDImageUtils;
import ai.djl.translate.*;
```
## Step 1: Load your handwritten digit image
We will start by loading the image that we want to run our model to classify.
```
var img = ImageFactory.getInstance().fromUrl("https://djl-ai.s3.amazonaws.com/resources/images/0.png");
img.getWrappedImage();
```
## Step 2: Load your model
Next, we need to load the model to run inference with. This model should have been saved to the `build/mlp` directory when running the [previous tutorial](02_train_your_first_model.ipynb).
TODO: Mention model zoo? List models in model zoo?
TODO: Key Concept ZooModel
TODO: Link to Model javadoc
```
Path modelDir = Paths.get("build/mlp");
Model model = Model.newInstance("mlp");
model.setBlock(new Mlp(28 * 28, 10, new int[] {128, 64}));
model.load(modelDir);
```
## Step 3: Create a `Translator`
The [`Translator`](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/translate/Translator.html) is used to encapsulate the pre-processing and post-processing functionality of your application. The input to the `processInput` and `processOutput` methods should be single data items, not batches.
```
Translator<Image, Classifications> translator = new Translator<Image, Classifications>() {
@Override
public NDList processInput(TranslatorContext ctx, Image input) {
// Convert Image to NDArray
NDArray array = input.toNDArray(ctx.getNDManager(), Image.Flag.GRAYSCALE);
return new NDList(NDImageUtils.toTensor(array));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
NDArray probabilities = list.singletonOrThrow().softmax(0);
List<String> indices = IntStream.range(0, 10).mapToObj(String::valueOf).collect(Collectors.toList());
return new Classifications(indices, probabilities);
}
@Override
public Batchifier getBatchifier() {
return Batchifier.STACK;
}
};
```
## Step 4: Create Predictor
Using the translator, we will create a new [`Predictor`](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/inference/Predictor.html). The predictor is the main class to orchestrate the inference process. During inference, a trained model is used to predict values, often for production use cases. The predictor is NOT thread-safe, so if you want to do prediction in parallel, you should create a predictor object (with the same model) for each thread.
```
var predictor = model.newPredictor(translator);
```
## Step 5: Run inference
With our predictor, we can simply call the predict method to run inference. Afterwards, the same predictor should be used for further inference calls.
```
var classifications = predictor.predict(img);
classifications
```
## Summary
Now, you've successfully built a model, trained it, and run inference. Congratulations on finishing the [beginner tutorial series](https://github.com/awslabs/djl/tree/master/jupyter/tutorial). After this, you should read our other [examples](https://github.com/awslabs/djl/tree/master/examples) and [jupyter notebooks](https://github.com/awslabs/djl/tree/master/jupyter) to learn more about DJL.
You can find the complete source code for this tutorial in the [examples project](https://github.com/awslabs/djl/blob/master/examples/src/main/java/ai/djl/examples/inference/ImageClassification.java).
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified "YOLO" instructions preceding the code.
* Added details about anchor boxes.
* Added explanation of how score is calculated.
* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.
* `iou`: clarify instructions for finding the intersection.
* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.
* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.
* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.
* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.
* `predict`: hint on calling sess.run.
* Spelling, grammar, wording and formatting updates to improve clarity.
## Import libraries
Run the following cell to load the packages and dependencies that you will find useful as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
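This reshape can be sanity-checked with a quick NumPy sketch (random placeholder values; only the shapes matter here):

```python
import numpy as np

# Simulated Deep CNN output: 19x19 grid, 5 anchor boxes, 85 numbers per box
encoding = np.random.rand(19, 19, 5, 85)

# Flatten the last two dimensions: 5 * 85 = 425
flattened = encoding.reshape(19, 19, 5 * 85)
print(flattened.shape)  # (19, 19, 425)
```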
#### Class score
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
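The arithmetic in this example can be reproduced directly (numbers taken from the bullets above; the 0.44 in the text is $0.60 \times 0.73 = 0.438$ rounded to two decimals):

```python
p_1 = 0.60  # probability that an object exists in box 1 (cell 1)
c_3 = 0.73  # probability that the object is class 3 (a car)

# Class score = p_c * c_i
score_1_3 = p_1 * c_3
print(round(score_1_3, 2))  # 0.44
```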
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300px;">
<caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200px;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
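As an illustration of this rearrangement, here is plain NumPy slicing on random placeholder data (the assignment's actual conversion is done for you by `yolo_head`, so this is only a sketch of the layout):

```python
import numpy as np

# One 85-number encoding per box: [p_c, b_x, b_y, b_h, b_w, c_1 ... c_80]
raw = np.random.rand(19 * 19, 5, 85)

box_confidence = raw[..., 0:1]   # (361, 5, 1)  -> p_c
boxes = raw[..., 1:5]            # (361, 5, 4)  -> (b_x, b_y, b_h, b_w)
box_class_probs = raw[..., 5:]   # (361, 5, 80) -> (c_1, ..., c_80)
```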
#### **Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (multiplying vectors of different sizes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful references**
* [Keras argmax](https://keras.io/backend/#argmax)
* [Keras max](https://keras.io/backend/#max)
**Additional Hints**
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. We don't need to keep the last dimension after applying the maximum here.
* Even though the documentation shows `keras.backend.argmax`, use `K.argmax`. Similarly, use `K.max` (recall that this notebook imports the Keras backend as `K`).
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.
**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.
**Reminder**: to call a Keras function, you should use `K.function(...)`.
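Putting steps 1-4 together, the same logic can be sketched in plain NumPy (an illustration only — the graded function below must use the Keras/TensorFlow calls described above):

```python
import numpy as np

np.random.seed(0)
box_confidence = np.random.rand(19 * 19, 5, 1)
boxes = np.random.rand(19 * 19, 5, 4)
box_class_probs = np.random.rand(19 * 19, 5, 80)

# Step 1: elementwise product; (..., 1) broadcasts against (..., 80)
box_scores = box_confidence * box_class_probs      # (361, 5, 80)

# Step 2: best class per box, and its score
box_classes = np.argmax(box_scores, axis=-1)       # (361, 5)
box_class_scores = np.max(box_scores, axis=-1)     # (361, 5)

# Step 3: keep boxes whose best score clears the threshold
mask = box_class_scores >= 0.6                     # boolean, (361, 5)

# Step 4: boolean-mask each tensor down to the kept boxes
scores = box_class_scores[mask]
kept_boxes = boxes[mask]
classes = box_classes[mask]
```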
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = ((box_class_scores) >= threshold)
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask, name='boolean_mask')
boxes = tf.boolean_mask(boxes, filtering_mask, name='boolean_mask')
classes = tf.boolean_mask(box_classes, filtering_mask, name='boolean_mask')
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
**Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative.
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400px;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400px;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
#### **Exercise**: Implement iou(). Some hints:
- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.
- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1,y_1)$ is the top left and $(x_2,y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
# Assign variable names to coordinates for clarity
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 7 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
inter_width = xi2 - xi1
inter_height = yi2 - yi1
inter_area = max(inter_height, 0) * max(inter_width, 0)
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
#### YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
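The three steps above can be sketched in plain Python (a simplified reference only, not the TensorFlow version you will implement below; the IoU helper is re-inlined here so the sketch is self-contained):

```python
def iou_xyxy(b1, b2):
    # Intersection over union for corner-format boxes (x1, y1, x2, y2)
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms(boxes, scores, iou_threshold=0.5, max_boxes=10):
    # Indices of boxes sorted by descending score
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)          # step 1: highest-scoring remaining box
        keep.append(best)
        # step 2: drop boxes that overlap the selected box too much
        order = [i for i in order
                 if iou_xyxy(boxes[best], boxes[i]) < iou_threshold]
    return keep                      # step 3 is the loop itself

boxes = [(0, 0, 2, 2), (0.1, 0.1, 2, 2), (3, 3, 5, 5)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate of box 0 is suppressed
```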
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
**Reference documentation**
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
boxes,
scores,
max_output_size,
iou_threshold=0.5,
name=None
)
```
Note that in the version of TensorFlow used here, there is no `score_threshold` parameter (it appears in the documentation for the latest version), so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'*.
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()` (the Keras backend is imported as `K` in this notebook).
```
K.gather(
reference,
indices
)
```
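`K.gather(reference, indices)` simply selects the rows of `reference` at the given `indices`. The semantics are the same as NumPy fancy indexing (a sketch of the behavior only; the real call operates on tensors):

```python
import numpy as np

scores = np.array([0.3, 0.9, 0.1, 0.7])
boxes = np.array([[0, 0, 1, 1],
                  [1, 1, 2, 2],
                  [2, 2, 3, 3],
                  [3, 3, 4, 4]])
nms_indices = np.array([1, 3])   # indices kept by non-max suppression

# K.gather(scores, nms_indices) behaves like this row selection:
print(scores[nms_indices])       # [0.9 0.7]
print(boxes[nms_indices])        # rows 1 and 3 of boxes
```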
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None,), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
Note: The "None" dimension of the output tensors is at most max_boxes; non-max suppression keeps
only that many of the highest-scoring boxes.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes = boxes, scores = scores, max_output_size = max_boxes_tensor, iou_threshold = iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
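For intuition, the midpoint-to-corners conversion is simple arithmetic. This is a sketch of the idea only; the provided `yolo_boxes_to_corners` works on full tensors, and the exact coordinate ordering it returns should be treated as an implementation detail:

```python
def box_to_corners(x, y, w, h):
    # Convert a box given by its midpoint (x, y) and size (w, h)
    # to corner coordinates (x1, y1, x2, y2).
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

print(box_to_corners(2, 2, 2, 2))  # (1.0, 1.0, 3.0, 3.0)
```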
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
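Conceptually, `scale_boxes` just multiplies each coordinate by the target image dimensions. Below is a minimal sketch, under the assumption that boxes are normalized to [0, 1] and stored as (y1, x1, y2, x2); the provided function operates on tensors:

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    # boxes: (N, 4) array of normalized (y1, x1, y2, x2) coordinates
    height, width = image_shape
    return boxes * np.array([height, width, height, width])

boxes = np.array([[0.5, 0.5, 1.0, 1.0]])
print(scale_boxes_sketch(boxes, (720.0, 1280.0)))  # [[ 360.  640.  720. 1280.]]
```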
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (720., 1280.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs[0], yolo_outputs[1], yolo_outputs[2], yolo_outputs[3]
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes = max_boxes, iou_threshold = iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
## Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
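The shape bookkeeping in this summary is easy to verify with NumPy (dummy data; this just checks the arithmetic 425 = 5 × 85):

```python
import numpy as np

cnn_output = np.zeros((19, 19, 5, 85))        # 5 anchor boxes x (5 + 80) numbers
flattened = cnn_output.reshape(19, 19, 5 * 85)
print(flattened.shape)   # (19, 19, 425)
print(5 * (5 + 80))      # 425 numbers per grid cell
```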
## 3 - Test YOLO pre-trained model on images
In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
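`read_classes` and `read_anchors` are simple text-file parsers. If you are curious, they amount to something like the sketch below, under the assumption that the class file holds one name per line and the anchor file holds a single comma-separated list of (width, height) values, as in YAD2K:

```python
def read_classes_sketch(text):
    # One class name per line, e.g. "person\nbicycle\ncar\n"
    return [line.strip() for line in text.splitlines() if line.strip()]

def read_anchors_sketch(text):
    # A single comma-separated list, e.g. "0.57,0.67,1.87,2.06"
    values = [float(v) for v in text.split(',')]
    # Pair the values up as (width, height) anchors
    return [(values[i], values[i + 1]) for i in range(0, len(values), 2)]

print(read_classes_sketch("person\nbicycle\ncar\n"))   # ['person', 'bicycle', 'car']
print(read_anchors_sketch("0.57,0.67,1.87,2.06"))      # [(0.57, 0.67), (1.87, 2.06)]
```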
### 3.2 - Loading a pre-trained model
* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
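Under the hood, the preprocessing amounts to resizing the image to the model's input size, scaling pixel values to [0, 1], and adding a batch dimension. Here is a sketch with NumPy standing in for PIL (the resize step is assumed to have already happened):

```python
import numpy as np

def preprocess_sketch(pixels):
    # pixels: (H, W, 3) uint8 image, already resized to the model input size
    data = pixels.astype(np.float32) / 255.0   # scale to [0, 1]
    return np.expand_dims(data, axis=0)        # add batch dim -> (1, H, W, 3)

image = np.random.randint(0, 256, size=(608, 608, 3), dtype=np.uint8)
print(preprocess_sketch(image).shape)   # (1, 608, 608, 3)
```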
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
#### Hint: Using the TensorFlow Session object
* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:
```
sess.run(fetches=[tensor1,tensor2,tensor3],
feed_dict={yolo_model.input: the_input_variable,
K.learning_phase():0
})
```
* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
## <font color='darkblue'>What you should remember:</font>
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
```
addressProviderAddr = '0xcF64698AFF7E5f27A11dff868AF228653ba53be0' #mainnet
#addressProviderAddr = '0xA526311C39523F60b184709227875b5f34793bD4' #kovan
import os
from dotenv import load_dotenv
load_dotenv() # add this line
providerRPC = os.getenv('RPC_NODE')
from web3 import Web3
import json
w3 = Web3(Web3.HTTPProvider(providerRPC))
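# The long ABI strings below are plain JSON; before handing one to web3, it can
# help to sanity-check it with the standard library. Toy ABI shown here, not one
# of the Gearbox ABIs; purely illustrative.
import json  # already imported above; repeated so this snippet stands alone
toy_abi = json.loads('[{"inputs":[],"name":"head","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"}]')
view_fns = [e['name'] for e in toy_abi
            if e.get('type') == 'function' and e.get('stateMutability') == 'view']
print(view_fns)  # ['head']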
afAbi = json.loads( '[{"inputs":[{"internalType":"address","name":"addressProvider","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"miner","type":"address"}],"name":"AccountMinerChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"creditManager","type":"address"}],"name":"InitializeCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"account","type":"address"}],"name":"NewCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"account","type":"address"}],"name":"ReturnCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"creditAccount","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"}],"name":"TakeForever","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"},{"inputs":[],"name":"_contractsRegister","outputs":[{"internalType":"contract ContractsRegister","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"addCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"components":[{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"swapContract","type":"address"}],"internalType":"struct 
DataTypes.MiningApproval[]","name":"_miningApprovals","type":"tuple[]"}],"name":"addMiningApprovals","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"},{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"targetContract","type":"address"}],"name":"cancelAllowance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"countCreditAccounts","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"countCreditAccountsInStock","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"id","type":"uint256"}],"name":"creditAccounts","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"finishMining","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"getNext","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"head","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"addr","type":"address"}],"name":"isCreditAccount","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"isMiningFinished","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"masterCreditAccount","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"mineCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function
"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"miningApprovals","outputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"swapContract","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"usedAccount","type":"address"}],"name":"returnCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"tail","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"_borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"_cumulativeIndexAtOpen","type":"uint256"}],"name":"takeCreditAccount","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"prev","type":"address"},{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"address","name":"to","type":"address"}],"name":"takeOut","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}]' )
caAbi = json.loads( '[{"inputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"swapContract","type":"address"}],"name":"approveToken","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"borrowedAmount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"targetContract","type":"address"}],"name":"cancelAllowance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_creditManager","type":"address"},{"internalType":"uint256","name":"_borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"_cumulativeIndexAtOpen","type":"uint256"}],"name":"connectTo","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"creditManager","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"cumulativeIndexAtOpen","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"destination","type":"address"},{"internalType":"bytes","name":"data","type":"bytes"}],"name":"execute","outputs":[{"internalType":"bytes","name":"","type":"bytes"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"factory","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"initialize","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"safeTransfer","outputs":[],"stateMutability":"nonpayable","type":"function"},{"input
s":[],"name":"since","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"_borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"_cumulativeIndexAtOpen","type":"uint256"}],"name":"updateParameters","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}] ' )
addressProviderAbi = json.loads( '[{"inputs":[],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"service","type":"bytes32"},{"indexed":true,"internalType":"address","name":"newAddress","type":"address"}],"name":"AddressSet","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"user_id","type":"uint256"},{"indexed":false,"internalType":"address","name":"account","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"},{"indexed":false,"internalType":"bytes32","name":"leaf","type":"bytes32"}],"name":"Claimed","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"},{"inputs":[],"name":"ACCOUNT_FACTORY","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"ACL","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"CONTRACTS_REGISTER","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"DATA_COMPRESSOR","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"GEAR_TOKEN","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"LEVERAGED_ACTIONS","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"PRICE_ORACLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"TREASURY_CONTRACT","outputs":[{"in
ternalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"WETH_GATEWAY","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"WETH_TOKEN","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"name":"addresses","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getACL","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getAccountFactory","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getContractsRegister","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getDataCompressor","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getGearToken","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getLeveragedActions","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getPriceOracle","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getTreasuryContract","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getWETHGateway","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getWethToken","outputs":[{"internalType":"address","name":"","type
":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"renounceOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setACL","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setAccountFactory","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setContractsRegister","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setDataCompressor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setGearToken","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setLeveragedActions","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setPriceOracle","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setTreasuryContract","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setWETHGateway","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"setWethToken","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":
"transferOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}]')
contractsRegistryAbi = json.loads( '[{"inputs":[{"internalType":"address","name":"addressProvider","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"creditManager","type":"address"}],"name":"NewCreditManagerAdded","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"pool","type":"address"}],"name":"NewPoolAdded","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"},{"inputs":[{"internalType":"address","name":"newCreditManager","type":"address"}],"name":"addCreditManager","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"newPoolAddress","type":"address"}],"name":"addPool","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"creditManagers","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getCreditManagers","outputs":[{"internalType":"address[]","name":"","type":"address[]"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getCreditManagersCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getPools","outputs":[{"internalType":"address[]","name":"","type":"address[]"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"getPoolsCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"isCreditManager","outputs":[{"internalType":
"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"isPool","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"pools","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}]')
creditManagerAbi = json.loads( '[{"inputs":[{"internalType":"address","name":"_addressProvider","type":"address"},{"internalType":"uint256","name":"_minAmount","type":"uint256"},{"internalType":"uint256","name":"_maxAmount","type":"uint256"},{"internalType":"uint256","name":"_maxLeverage","type":"uint256"},{"internalType":"address","name":"_poolService","type":"address"},{"internalType":"address","name":"_creditFilterAddress","type":"address"},{"internalType":"address","name":"_defaultSwapContract","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"onBehalfOf","type":"address"},{"indexed":true,"internalType":"address","name":"token","type":"address"},{"indexed":false,"internalType":"uint256","name":"value","type":"uint256"}],"name":"AddCollateral","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"},{"indexed":false,"internalType":"uint256","name":"remainingFunds","type":"uint256"}],"name":"CloseCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"borrower","type":"address"},{"indexed":true,"internalType":"address","name":"target","type":"address"}],"name":"ExecuteOrder","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"borrower","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"}],"name":"IncreaseBorrowedAmount","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"liquidator","type":"address"},{"indexed":false,"internalType":"uint256","name":"remainingFunds","type":"uint256"}],"name":"LiquidateCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalTyp
e":"uint256","name":"minAmount","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"maxAmount","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"maxLeverage","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"feeInterest","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"feeLiquidation","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"liquidationDiscount","type":"uint256"}],"name":"NewParameters","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"sender","type":"address"},{"indexed":true,"internalType":"address","name":"onBehalfOf","type":"address"},{"indexed":true,"internalType":"address","name":"creditAccount","type":"address"},{"indexed":false,"internalType":"uint256","name":"amount","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"borrowAmount","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"referralCode","type":"uint256"}],"name":"OpenCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"}],"name":"RepayCreditAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"oldOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"TransferAccount","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"uint256","name":"totalValue","type":"uint256"},{"internalType":"bool","name":"isLiquidated","type":"bool"}],"name":"_calcClosePa
yments","outputs":[{"internalType":"uint256","name":"_borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"amountToPool","type":"uint256"},{"internalType":"uint256","name":"remainingFunds","type":"uint256"},{"internalType":"uint256","name":"profit","type":"uint256"},{"internalType":"uint256","name":"loss","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"totalValue","type":"uint256"},{"internalType":"bool","name":"isLiquidated","type":"bool"},{"internalType":"uint256","name":"borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"cumulativeIndexAtCreditAccountOpen_RAY","type":"uint256"},{"internalType":"uint256","name":"cumulativeIndexNow_RAY","type":"uint256"}],"name":"_calcClosePaymentsPure","outputs":[{"internalType":"uint256","name":"_borrowedAmount","type":"uint256"},{"internalType":"uint256","name":"amountToPool","type":"uint256"},{"internalType":"uint256","name":"remainingFunds","type":"uint256"},{"internalType":"uint256","name":"profit","type":"uint256"},{"internalType":"uint256","name":"loss","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"onBehalfOf","type":"address"},{"internalType":"address","name":"token","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"addCollateral","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"targetContract","type":"address"},{"internalType":"address","name":"token","type":"address"}],"name":"approve","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"},{"internalType":"bool","name":"isLiquidated","type":"bool"}],"name":"calcRepayAmount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"to","type":"address"
},{"components":[{"internalType":"address[]","name":"path","type":"address[]"},{"internalType":"uint256","name":"amountOutMin","type":"uint256"}],"internalType":"struct DataTypes.Exchange[]","name":"paths","type":"tuple[]"}],"name":"closeCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"creditAccounts","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"creditFilter","outputs":[{"internalType":"contract ICreditFilter","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"defaultSwapContract","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"},{"internalType":"address","name":"target","type":"address"},{"internalType":"bytes","name":"data","type":"bytes"}],"name":"executeOrder","outputs":[{"internalType":"bytes","name":"","type":"bytes"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"feeInterest","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"feeLiquidation","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"}],"name":"getCreditAccountOrRevert","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"}],"name":"hasOpenedCreditAccount","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"increaseBorrowedAmount","outputs":[],"stateMutabili
ty":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"bool","name":"force","type":"bool"}],"name":"liquidateCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"liquidationDiscount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"maxAmount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"maxLeverageFactor","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"minAmount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"minHealthFactor","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"amount","type":"uint256"},{"internalType":"address","name":"onBehalfOf","type":"address"},{"internalType":"uint256","name":"leverageFactor","type":"uint256"},{"internalType":"uint256","name":"referralCode","type":"uint256"}],"name":"openCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"poolService","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"address","name":"targetContract","type":"address"},{"internalType":"address","name":"token","type":"address"}],"name":"provideCre
ditAccountAllowance","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"to","type":"address"}],"name":"repayCreditAccount","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"borrower","type":"address"},{"internalType":"address","name":"to","type":"address"}],"name":"repayCreditAccountETH","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"uint256","name":"_minAmount","type":"uint256"},{"internalType":"uint256","name":"_maxAmount","type":"uint256"},{"internalType":"uint256","name":"_maxLeverageFactor","type":"uint256"},{"internalType":"uint256","name":"_feeInterest","type":"uint256"},{"internalType":"uint256","name":"_feeLiquidation","type":"uint256"},{"internalType":"uint256","name":"_liquidationDiscount","type":"uint256"}],"name":"setParams","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":"transferAccountOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"underlyingToken","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"wethAddress","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"wethGateway","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}]' )
erc20Abi = json.loads( '[{"inputs":[{"internalType":"uint256","name":"chainId_","type":"uint256"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"src","type":"address"},{"indexed":true,"internalType":"address","name":"guy","type":"address"},{"indexed":false,"internalType":"uint256","name":"wad","type":"uint256"}],"name":"Approval","type":"event"},{"anonymous":true,"inputs":[{"indexed":true,"internalType":"bytes4","name":"sig","type":"bytes4"},{"indexed":true,"internalType":"address","name":"usr","type":"address"},{"indexed":true,"internalType":"bytes32","name":"arg1","type":"bytes32"},{"indexed":true,"internalType":"bytes32","name":"arg2","type":"bytes32"},{"indexed":false,"internalType":"bytes","name":"data","type":"bytes"}],"name":"LogNote","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"src","type":"address"},{"indexed":true,"internalType":"address","name":"dst","type":"address"},{"indexed":false,"internalType":"uint256","name":"wad","type":"uint256"}],"name":"Transfer","type":"event"},{"constant":true,"inputs":[],"name":"DOMAIN_SEPARATOR","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"PERMIT_TYPEHASH","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"address","name":"","type":"address"}],"name":"allowance","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"usr","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"approve","outputs":[{"internalType":"bool","name":"","typ
e":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"balanceOf","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"usr","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"burn","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"decimals","outputs":[{"internalType":"uint8","name":"","type":"uint8"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"guy","type":"address"}],"name":"deny","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"usr","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"mint","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"src","type":"address"},{"internalType":"address","name":"dst","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"move","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"name","outputs":[{"internalType":"string","name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"nonces","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"holder","type":"address"},{"internalType":"address","name":"spender","type":"address"},{"internalType":"uint256","name":"nonce","
type":"uint256"},{"internalType":"uint256","name":"expiry","type":"uint256"},{"internalType":"bool","name":"allowed","type":"bool"},{"internalType":"uint8","name":"v","type":"uint8"},{"internalType":"bytes32","name":"r","type":"bytes32"},{"internalType":"bytes32","name":"s","type":"bytes32"}],"name":"permit","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"usr","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"pull","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"usr","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"push","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"guy","type":"address"}],"name":"rely","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"symbol","outputs":[{"internalType":"string","name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"dst","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"transfer","outputs":[{"internalType":"bool","name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"src","type":"address"},{"internalType":"address","name":"dst","type":"address"},{"internalType":"uint256","name":"wad","type":"uint256"}],"name":"transferFrom","outputs":[{"internalType":"bool","name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable"
,"type":"function"},{"constant":true,"inputs":[],"name":"version","outputs":[{"internalType":"string","name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"wards","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"}]')
creditFilterAbi = json.loads( '[{"inputs":[{"internalType":"address","name":"_addressProvider","type":"address"},{"internalType":"address","name":"_underlyingToken","type":"address"}],"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"protocol","type":"address"},{"indexed":true,"internalType":"address","name":"adapter","type":"address"}],"name":"ContractAllowed","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"protocol","type":"address"}],"name":"ContractForbidden","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"chiThreshold","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"fastCheckDelay","type":"uint256"}],"name":"NewFastCheckParameters","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"newPriceOracle","type":"address"}],"name":"PriceOracleUpdated","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"token","type":"address"},{"indexed":false,"internalType":"uint256","name":"liquidityThreshold","type":"uint256"}],"name":"TokenAllowed","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"token","type":"address"}],"name":"TokenForbidden","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"from","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"},{"indexed":false,"internalType":"bool","name":"state","type":"bool"}],"name":"TransferAccountAllowed","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"pugin","type":"address"},{"indexed":false,"internalType":"bool","name":"state","type":"bool"}],"name":"Tra
nsferPluginAllowed","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"},{"inputs":[],"name":"addressProvider","outputs":[{"internalType":"contract AddressProvider","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"targetContract","type":"address"},{"internalType":"address","name":"adapter","type":"address"}],"name":"allowContract","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"plugin","type":"address"},{"internalType":"bool","name":"state","type":"bool"}],"name":"allowPlugin","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"uint256","name":"liquidationThreshold","type":"uint256"}],"name":"allowToken","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"}],"name":"allowanceForAccountTransfers","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"allowedAdapters","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"i","type":"uint256"}],"name":"allowedContracts","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"allowedContractsCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"allowedPlugins","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view"
,"type":"function"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"allowedTokens","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"allowedTokensCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"bool","name":"state","type":"bool"}],"name":"approveAccountTransfers","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"calcCreditAccountAccruedInterest","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"calcCreditAccountHealthFactor","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"percentage","type":"uint256"},{"internalType":"uint256","name":"times","type":"uint256"}],"name":"calcMaxPossibleDrop","outputs":[{"internalType":"uint256","name":"value","type":"uint256"}],"stateMutability":"pure","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"calcThresholdWeightedValue","outputs":[{"internalType":"uint256","name":"total","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"calcTotalValue","outputs":[{"internalType":"uint256","name":"total","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"address","name":"token","type":"address"}],"name":"checkAndEnableToken","outputs":[],"stateMutability":"nonpayable","type":"functi
on"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"address","name":"tokenIn","type":"address"},{"internalType":"address","name":"tokenOut","type":"address"},{"internalType":"uint256","name":"amountIn","type":"uint256"},{"internalType":"uint256","name":"amountOut","type":"uint256"}],"name":"checkCollateralChange","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"uint256[]","name":"amountIn","type":"uint256[]"},{"internalType":"uint256[]","name":"amountOut","type":"uint256[]"},{"internalType":"address[]","name":"tokenIn","type":"address[]"},{"internalType":"address[]","name":"tokenOut","type":"address[]"}],"name":"checkMultiTokenCollateral","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"chiThreshold","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_creditManager","type":"address"}],"name":"connectCreditManager","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"contractToAdapter","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"creditManager","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"enabledTokens","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"fastCheckCounter","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"targetContract","typ
e":"address"}],"name":"forbidContract","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"token","type":"address"}],"name":"forbidToken","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"uint256","name":"id","type":"uint256"}],"name":"getCreditAccountTokenById","outputs":[{"internalType":"address","name":"token","type":"address"},{"internalType":"uint256","name":"balance","type":"uint256"},{"internalType":"uint256","name":"tv","type":"uint256"},{"internalType":"uint256","name":"tvw","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"hfCheckInterval","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"}],"name":"initEnabledTokens","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"isTokenAllowed","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"liquidationThresholds","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"poolService","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"priceOracle","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"owner","t
ype":"address"},{"internalType":"address","name":"newOwner","type":"address"}],"name":"revertIfAccountTransferIsNotAllowed","outputs":[],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"creditAccount","type":"address"},{"internalType":"uint256","name":"minHealthFactor","type":"uint256"}],"name":"revertIfCantIncreaseBorrowing","outputs":[],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"token","type":"address"}],"name":"revertIfTokenNotAllowed","outputs":[],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"_chiThreshold","type":"uint256"},{"internalType":"uint256","name":"_hfCheckInterval","type":"uint256"}],"name":"setFastCheckParameters","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"","type":"address"}],"name":"tokenMasksMap","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"underlyingToken","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"updateUnderlyingTokenLiquidationThreshold","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"upgradePriceOracle","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"version","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"wethAddress","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}]' )
def initContract(address, abi):
    checksumAddr = Web3.toChecksumAddress(address)
    contract = w3.eth.contract(checksumAddr, abi=abi)
    return contract

def toChecksumArray(arr):
    result = []
    for addr in arr:
        result.append(Web3.toChecksumAddress(addr))
    return result

def erc20Params(address):
    token = initContract(address, erc20Abi)
    symbol = token.functions.symbol().call()
    decimals = token.functions.decimals().call()
    return {'token': address, 'symbol': symbol, 'decimals': decimals}

def normalizeNumber(numb, decimals):
    return numb / 10**decimals

def filterParams(cFilter):
    creditFilter = initContract(cFilter, creditFilterAbi)
    print('CreditFilter: ', cFilter)
    print(" hfCheckInterval: ", creditFilter.functions.hfCheckInterval().call())
    print(" Paused: ", creditFilter.functions.paused().call())
    allowedContractsCount = creditFilter.functions.allowedContractsCount().call()
    print(' Allowed Contracts: ')
    for i in range(allowedContractsCount):
        allowedContractAddr = Web3.toChecksumAddress(creditFilter.functions.allowedContracts(i).call())
        print('  ', i, ': ', allowedContractAddr)
    print(' Adapters: ')
    for i in range(allowedContractsCount):
        print('  ', i, ': ', creditFilter.functions.contractToAdapter(i).call())
    allowedTokensCount = creditFilter.functions.allowedTokensCount().call()
    print(' Allowed Tokens: ')
    for i in range(allowedTokensCount):
        tokenAddr = creditFilter.functions.allowedTokens(i).call()
        lt = creditFilter.functions.liquidationThresholds(tokenAddr).call()
        token = erc20Params(tokenAddr)
        print('  ', i, ': ', tokenAddr, token['symbol'], normalizeNumber(lt, 4))

def managerParams(manager):
    creditManager = initContract(manager, creditManagerAbi)
    print('------------------------------------')
    tokenAddr = creditManager.functions.underlyingToken().call()
    token = erc20Params(tokenAddr)
    print("CreditManager: ", manager)
    print("DefaultSwapContract: ", creditManager.functions.defaultSwapContract().call())
    print("Fee Interest(%): ", normalizeNumber(creditManager.functions.feeInterest().call(), 2))
    print("Fee Liquidation(%): ", normalizeNumber(creditManager.functions.feeLiquidation().call(), 2))
    print("Liquidation Discount(%): ", normalizeNumber(creditManager.functions.liquidationDiscount().call(), 2))
    print("Min amount: ", normalizeNumber(creditManager.functions.minAmount().call(), token['decimals']))
    print("Max amount: ", normalizeNumber(creditManager.functions.maxAmount().call(), token['decimals']))
    print("Max Leverage Factor: ", normalizeNumber(creditManager.functions.maxLeverageFactor().call(), 2))
    print("Min Health Factor: ", normalizeNumber(creditManager.functions.minHealthFactor().call(), 4))
    print("Paused: ", creditManager.functions.paused().call())
    print("Underlying token: ", token['symbol'])
    cFilter = creditManager.functions.creditFilter().call()
    filterParams(cFilter)

# find and init ContractRegistry
addressProvider = initContract(addressProviderAddr, addressProviderAbi)
contractRegistryAddr = addressProvider.functions.getContractsRegister().call()
contractRegistry = initContract(contractRegistryAddr, contractsRegistryAbi)

# find pools and credit managers
pools = contractRegistry.functions.getPools().call()
managers = contractRegistry.functions.getCreditManagers().call()

# print credit manager parameters
for i in range(len(managers)):
    managerParams(managers[i])
```
```
# Copyright (c) 2020-2021 Adrian Georg Herrmann
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import interpolate
from sklearn.linear_model import LinearRegression
from datetime import datetime
data_root = "../../data"
locations = {
"berlin": ["52.4652025", "13.3412466"],
"wijchen": ["51.8235504", "5.7329005"]
}
dfs = { "berlin": None, "wijchen": None }
```
## Sunlight angles
```
def get_julian_day(time):
    if time.month > 2:
        y = time.year
        m = time.month
    else:
        y = time.year - 1
        m = time.month + 12
    d = time.day + time.hour / 24 + time.minute / 1440 + time.second / 86400
    b = 2 - np.floor(y / 100) + np.floor(y / 400)
    jd = np.floor(365.25 * (y + 4716)) + np.floor(30.6001 * (m + 1)) + d + b - 1524.5
    return jd

def get_angle(time, latitude, longitude):
    # Source:
    # https://de.wikipedia.org/wiki/Sonnenstand#Genauere_Ermittlung_des_Sonnenstandes_f%C3%BCr_einen_Zeitpunkt
    # 1. Ecliptical coordinates of the sun
    # Julian day
    jd = get_julian_day(time)
    n = jd - 2451545
    # Mean ecliptic longitude of the sun
    l = np.mod(280.46 + 0.9856474 * n, 360)
    # Mean anomaly
    g = np.mod(357.528 + 0.9856003 * n, 360)
    # Ecliptic longitude of the sun
    lbd = l + 1.915 * np.sin(np.radians(g)) + 0.01997 * np.sin(np.radians(2*g))
    # 2. Equatorial coordinates of the sun
    # Obliquity of the ecliptic
    eps = 23.439 - 0.0000004 * n
    # Right ascension
    alpha = np.degrees(np.arctan(np.cos(np.radians(eps)) * np.tan(np.radians(lbd))))
    if np.cos(np.radians(lbd)) < 0:
        alpha += 180
    # Declination
    delta = np.degrees(np.arcsin(np.sin(np.radians(eps)) * np.sin(np.radians(lbd))))
    # 3. Horizontal coordinates of the sun
    t0 = (get_julian_day(time.replace(hour=0, minute=0, second=0)) - 2451545) / 36525
    # Mean sidereal time
    theta_hg = np.mod(6.697376 + 2400.05134 * t0 + 1.002738 * (time.hour + time.minute / 60), 24)
    theta_g = theta_hg * 15
    theta = theta_g + longitude
    # Hour angle of the sun
    tau = theta - alpha
    # Elevation angle
    h = np.cos(np.radians(delta)) * np.cos(np.radians(tau)) * np.cos(np.radians(latitude))
    h += np.sin(np.radians(delta)) * np.sin(np.radians(latitude))
    h = np.degrees(np.arcsin(h))
    return (h if h > 0 else 0)
```
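The `get_julian_day` routine above follows the standard Meeus formula. As a sanity check, here is a self-contained variant (hypothetical name `julian_day`, taking the fractional day directly instead of a `datetime`) verified against the J2000.0 reference epoch, 2000-01-01 12:00 UT, whose Julian day is 2451545.0:

```python
import math

def julian_day(year, month, day_frac):
    # Meeus' formula: January and February count as months 13 and 14
    # of the previous year; b is the Gregorian-calendar correction.
    if month <= 2:
        year -= 1
        month += 12
    b = 2 - math.floor(year / 100) + math.floor(year / 400)
    return (math.floor(365.25 * (year + 4716))
            + math.floor(30.6001 * (month + 1))
            + day_frac + b - 1524.5)

# J2000.0 reference epoch: 2000-01-01 12:00 UT
assert julian_day(2000, 1, 1.5) == 2451545.0
```

Here `day_frac` bundles the day of the month with the fractional time of day, exactly as `d` does inside `get_julian_day`.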
## Energy data
```
for location, _ in locations.items():
# This list contains all time points for which energy measurements exist, therefore delimiting
# the time frame that is to our interest.
energy = {}
data_path = os.path.join(data_root, location)
for filename in os.listdir(data_path):
with open(os.path.join(data_path, filename), "r") as file:
for line in file:
key = datetime.strptime(line.split(";")[0], '%Y-%m-%d %H:%M:%S').timestamp()
energy[key] = int(line.split(";")[1].strip())
df = pd.DataFrame(
data={"time": energy.keys(), "energy": energy.values()},
columns=["time", "energy"]
)
dfs[location] = df.sort_values(by="time", ascending=True)
# Summarize energy data per hour instead of keeping it per 15 minutes
for location, _ in locations.items():
times = []
energy = []
df = dfs[location]
for i, row in dfs[location].iterrows():
if row["time"] % 3600 == 0:
try:
t4 = row["time"]
e4 = row["energy"]
e3 = df["energy"][df["time"] == t4 - 900].values[0]
e2 = df["energy"][df["time"] == t4 - 1800].values[0]
e1 = df["energy"][df["time"] == t4 - 2700].values[0]
times += [t4]
energy += [e1 + e2 + e3 + e4]
except IndexError:  # one of the three previous quarter-hour readings is missing
pass
df = pd.DataFrame(data={"time": times, "energy_h": energy}, columns=["time", "energy_h"])
df = df.sort_values(by="time", ascending=True)
dfs[location] = dfs[location].join(df.set_index("time"), on="time", how="right").drop("energy", axis=1)
dfs[location].rename(columns={"energy_h": "energy"}, inplace=True)
# These lists contain the time tuples that delimit connected ranges without interruptions.
time_delimiters = {}
for location, _ in locations.items():
delimiters = []
df = dfs[location]
next_couple = [df["time"].iloc[0], None]
interval = df["time"].iloc[1] - df["time"].iloc[0]
for i in range(len(df["time"].index) - 1):
if df["time"].iloc[i+1] - df["time"].iloc[i] > interval:
next_couple[1] = df["time"].iloc[i]
delimiters += [next_couple]
next_couple = [df["time"].iloc[i+1], None]
next_couple[1] = df["time"].iloc[-1]
delimiters += [next_couple]
time_delimiters[location] = delimiters
# These are lists of dataframes containing connected ranges without interruptions.
dataframes_wijchen = []
for x in time_delimiters["wijchen"]:
dataframes_wijchen += [dfs["wijchen"].loc[(dfs["wijchen"].time >= x[0]) & (dfs["wijchen"].time <= x[1])]]
dataframes_berlin = []
for x in time_delimiters["berlin"]:
dataframes_berlin += [dfs["berlin"].loc[(dfs["berlin"].time >= x[0]) & (dfs["berlin"].time <= x[1])]]
for location, _ in locations.items():
print(location, ":")
for delimiters in time_delimiters[location]:
t0 = datetime.fromtimestamp(delimiters[0])
t1 = datetime.fromtimestamp(delimiters[1])
print(t0, "-", t1)
print()
```
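The delimiter loop above boils down to: scan consecutive timestamps and start a new range whenever the gap exceeds the sampling interval. A minimal standalone sketch of that idea, with a hypothetical helper name and plain lists instead of dataframes:

```python
def split_ranges(times, interval):
    """Split a sorted list of timestamps into (start, end) pairs of
    contiguous runs; a gap larger than `interval` starts a new run."""
    ranges = []
    start = times[0]
    for prev, cur in zip(times, times[1:]):
        if cur - prev > interval:
            ranges.append((start, prev))
            start = cur
    ranges.append((start, times[-1]))
    return ranges

# hourly samples with one gap between 7200 and 18000
print(split_ranges([0, 3600, 7200, 18000, 21600], 3600))  # → [(0, 7200), (18000, 21600)]
```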
### Wijchen dataset
```
for d in dataframes_wijchen:
print(len(d))
plt.figure(figsize=(200, 25))
plt.plot(dfs["wijchen"]["time"], dfs["wijchen"]["energy"], drawstyle="steps-pre")
energy_max_wijchen = dfs["wijchen"]["energy"].max()
energy_max_wijchen_idx = dfs["wijchen"]["energy"].argmax()
energy_max_wijchen_time = datetime.fromtimestamp(dfs["wijchen"]["time"].iloc[energy_max_wijchen_idx])
print(energy_max_wijchen_time, ":", energy_max_wijchen)
energy_avg_wijchen = dfs["wijchen"]["energy"].mean()
print(energy_avg_wijchen)
```
### Berlin dataset
```
for d in dataframes_berlin:
print(len(d))
plt.figure(figsize=(200, 25))
plt.plot(dfs["berlin"]["time"], dfs["berlin"]["energy"], drawstyle="steps-pre")
energy_max_berlin = dfs["berlin"]["energy"].max()
energy_max_berlin_idx = dfs["berlin"]["energy"].argmax()
energy_max_berlin_time = datetime.fromtimestamp(dfs["berlin"]["time"].iloc[energy_max_berlin_idx])
print(energy_max_berlin_time, ":", energy_max_berlin)
energy_avg_berlin = dfs["berlin"]["energy"].mean()
print(energy_avg_berlin)
```
## Sunlight angles
```
for location, lonlat in locations.items():
angles = [
get_angle(
datetime.fromtimestamp(x - 3600), float(lonlat[0]), float(lonlat[1])
) for x in dfs[location]["time"]
]
dfs[location]["angles"] = angles
```
## Weather data
```
# Contact the author for a sample of data, see doc/thesis.pdf, page 72.
weather_data = np.load(os.path.join(data_root, "weather.npy"), allow_pickle=True).item()
# There is no cloud cover data for berlin2, so use the data of berlin1.
weather_data["berlin2"]["cloud"] = weather_data["berlin1"]["cloud"]
# There is no radiation data for berlin1, so use the data of berlin2.
weather_data["berlin1"]["rad"] = weather_data["berlin2"]["rad"]
# Preprocess weather data
weather_params = [ "temp", "humid", "press", "cloud", "rad" ]
stations = [ "wijchen1", "wijchen2", "berlin1", "berlin2" ]
for station in stations:
for param in weather_params:
to_del = []
for key, val in weather_data[station][param].items():
if val is None:
to_del.append(key)
for x in to_del:
del weather_data[station][param][x]
def interpolate_map(map, time_range):
ret = {
"time": [],
"value": []
}
keys = list(map.keys())
values = list(map.values())
f = interpolate.interp1d(keys, values)
ret["time"] = time_range
ret["value"] = f(ret["time"])
return ret
def update_df(df, time_range, map1, map2, param1, param2):
map1_ = interpolate_map(map1, time_range)
df1 = pd.DataFrame(
data={"time": map1_["time"], param1: map1_["value"]},
columns=["time", param1]
)
map2_ = interpolate_map(map2, time_range)
df2 = pd.DataFrame(
data={"time": map2_["time"], param2: map2_["value"]},
columns=["time", param2]
)
df_ = df.join(df1.set_index("time"), on="time").join(df2.set_index("time"), on="time")
return df_
# Insert weather data into dataframes
for location, _ in locations.items():
df = dfs[location]
station1 = location + "1"
station2 = location + "2"
for param in weather_params:
param1 = param + "1"
param2 = param + "2"
df = update_df(
df, df["time"], weather_data[station1][param], weather_data[station2][param], param1, param2
)
dfs[location] = df.set_index(keys=["time"], drop=False)
# These are lists of dataframes containing connected ranges without interruptions.
dataframes_wijchen = []
for x in time_delimiters["wijchen"]:
dataframes_wijchen += [dfs["wijchen"].loc[(dfs["wijchen"].time >= x[0]) & (dfs["wijchen"].time <= x[1])]]
dataframes_berlin = []
for x in time_delimiters["berlin"]:
dataframes_berlin += [dfs["berlin"].loc[(dfs["berlin"].time >= x[0]) & (dfs["berlin"].time <= x[1])]]
```
### Linear regression model
#### Wijchen
```
df_train = dataframes_wijchen[9].iloc[17:258]
# df_train = dataframes_wijchen[9].iloc[17:234]
# df_train = pd.concat([dataframes_wijchen[9].iloc[17:], dataframes_wijchen[10], dataframes_wijchen[11]])
df_val = dataframes_wijchen[-3].iloc[:241]
# df_val = dataframes_wijchen[-2].iloc[:241]
lr_x1 = df_train[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy()
lr_y1 = df_train[["energy"]].to_numpy()
lr_model1 = LinearRegression()
lr_model1.fit(lr_x1, lr_y1)
lr_model1.score(lr_x1, lr_y1)
lr_x2 = df_train[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy()
lr_y2 = df_train[["energy"]].to_numpy()
lr_model2 = LinearRegression()
lr_model2.fit(lr_x2, lr_y2)
lr_model2.score(lr_x2, lr_y2)
lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
lr_y3 = df_train[["energy"]].to_numpy()
lr_model3 = LinearRegression()
lr_model3.fit(lr_x3, lr_y3)
lr_model3.score(lr_x3, lr_y3)
# filename = "lr_model.pkl"
# with open(filename, 'wb') as file:
# pickle.dump(lr_model3, file)
xticks = df_train["time"].iloc[::24]
lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
ax.set_xticks(ticks=xticks)
ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks])
ax.tick_params(labelsize=18)
ax.plot(df_train["time"], df_train["energy"], label="Actual energy production in Wh", drawstyle="steps-pre")
ax.plot(df_train["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh (Volkel + Deelen)", drawstyle="steps-pre")
ax.legend(prop={'size': 18})
xticks = df_val["time"].iloc[::24]
lr_x1 = df_val[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy()
lr_x2 = df_val[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy()
lr_x3 = df_val[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
print(lr_model1.score(lr_x1, df_val[["energy"]].to_numpy()))
print(lr_model2.score(lr_x2, df_val[["energy"]].to_numpy()))
print(lr_model3.score(lr_x3, df_val[["energy"]].to_numpy()))
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
ax.set_xticks(ticks=xticks)
ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks])
ax.tick_params(labelsize=18)
ax.plot(df_val["time"], df_val["energy"], label="Actual energy production in Wh", drawstyle="steps-pre")
ax.plot(df_val["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh (Volkel + Deelen)", drawstyle="steps-pre")
ax.legend(prop={'size': 18})
print(df["angles"].min(), df_val["angles"].max())
print(df["angles"].min(), df_train["angles"].max())
```
#### Berlin
```
df_train = dataframes_berlin[1].iloc[:241]
# df_train = dataframes_berlin[1].iloc[:720]
df_val = dataframes_berlin[1].iloc[312:553]
# df_val = dataframes_berlin[1].iloc[720:961]
lr_x1 = df_train[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy()
lr_y1 = df_train[["energy"]].to_numpy()
lr_model1 = LinearRegression()
lr_model1.fit(lr_x1, lr_y1)
lr_model1.score(lr_x1, lr_y1)
lr_x2 = df_train[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy()
lr_y2 = df_train[["energy"]].to_numpy()
lr_model2 = LinearRegression()
lr_model2.fit(lr_x2, lr_y2)
lr_model2.score(lr_x2, lr_y2)
lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
lr_y3 = df_train[["energy"]].to_numpy()
lr_model3 = LinearRegression()
lr_model3.fit(lr_x3, lr_y3)
lr_model3.score(lr_x3, lr_y3)
# filename = "lr_model.pkl"
# with open(filename, 'wb') as file:
# pickle.dump(lr_model3, file)
xticks = df_train["time"].iloc[::24]
lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
ax.set_xticks(ticks=xticks)
ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks])
ax.tick_params(labelsize=18)
ax.plot(df_train["time"], df_train["energy"], label="Actual energy production in Wh", drawstyle="steps-pre")
ax.plot(df_train["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh", drawstyle="steps-pre")
ax.legend(prop={'size': 18})
xticks = df_val["time"].iloc[::24]
lr_x1 = df_val[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy()
lr_x2 = df_val[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy()
lr_x3 = df_val[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy()
print(lr_model1.score(lr_x1, df_val[["energy"]].to_numpy()))
print(lr_model2.score(lr_x2, df_val[["energy"]].to_numpy()))
print(lr_model3.score(lr_x3, df_val[["energy"]].to_numpy()))
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
ax.set_xticks(ticks=xticks)
ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks])
ax.tick_params(labelsize=18)
ax.plot(df_val["time"], df_val["energy"], label="Actual energy production in Wh", drawstyle="steps-pre")
ax.plot(df_val["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh", drawstyle="steps-pre")
ax.legend(prop={'size': 18})
```
# GAMA-09 Selection Functions
## Depth maps and selection functions
The simplest selection function available is the field MOC which specifies the area for which there is Herschel data. Each pristine catalogue also has a MOC defining the area for which that data is available.
The next stage is to provide mean flux standard deviations, which act as a proxy for the catalogue's 5$\sigma$ depth.
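`flux_to_mag` is a HELP-internal helper; assuming the standard AB magnitude zero point, the conversion from a mean flux error in Jy to an n$\sigma$ depth can be sketched as follows (the notebook's fluxes are in $\mu$Jy, hence the `1.e-6` factors used later):

```python
import math

def flux_to_mag_ab(flux_jy):
    # AB magnitude: m = -2.5 log10(f / 3631 Jy), i.e. roughly -2.5 log10(f) + 8.90
    return -2.5 * math.log10(flux_jy) + 8.90

# a band with a mean flux error of 1 uJy has a 5-sigma depth of about 22.15 AB mag
depth_5sigma = flux_to_mag_ab(5 * 1e-6)
```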
```
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
import datetime
print("This notebook was executed on: \n{}".format(datetime.datetime.now()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
import os
import time
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table, join
import numpy as np
from pymoc import MOC
import healpy as hp
#import pandas as pd #Astropy has a group_by function so pandas isn't required.
import seaborn as sns
import warnings
#We ignore warnings - this is a little dangerous but a huge number of warnings are generated by empty cells later
warnings.filterwarnings('ignore')
from herschelhelp_internal.utils import inMoc, coords_to_hpidx, flux_to_mag
from herschelhelp_internal.masterlist import find_last_ml_suffix, nb_ccplots
from astropy.io.votable import parse_single_table
FIELD = 'GAMA-09'
#FILTERS_DIR = "/Users/rs548/GitHub/herschelhelp_python/database_builder/filters/"
FILTERS_DIR = "/opt/herschelhelp_python/database_builder/filters/"
TMP_DIR = os.environ.get('TMP_DIR', "./data_tmp")
OUT_DIR = os.environ.get('OUT_DIR', "./data")
SUFFIX = find_last_ml_suffix()
#SUFFIX = "20171016"
master_catalogue_filename = "master_catalogue_{}_{}.fits".format(FIELD.lower(), SUFFIX)
master_catalogue = Table.read("{}/{}".format(OUT_DIR, master_catalogue_filename))
print("Depth maps produced using: {}".format(master_catalogue_filename))
ORDER = 10
#TODO write code to decide on appropriate order
field_moc = MOC(filename="../../dmu2/dmu2_field_coverages/{}_MOC.fits".format(FIELD))
```
## I - Group masterlist objects by healpix cell and calculate depths
We add a column to the masterlist catalogue for the target order healpix cell <i>per object</i>.
```
#Add a column to the catalogue with the order=ORDER hp_idx
master_catalogue.add_column(Column(data=coords_to_hpidx(master_catalogue['ra'],
master_catalogue['dec'],
ORDER),
name="hp_idx_O_{}".format(str(ORDER))
)
)
# Convert catalogue to pandas and group by the order=ORDER pixel
group = master_catalogue.group_by(["hp_idx_O_{}".format(str(ORDER))])
#Downgrade the groups from order=ORDER to order=13 and then fill out the appropriate cells
#hp.pixelfunc.ud_grade([2599293, 2599294], nside_out=hp.order2nside(13))
```
## II - Create a table of all Order=13 healpix cells in the field and populate it
We create a table with every order=13 healpix cell in the field MOC. We then calculate the healpix cell at lower order that the order=13 cell is in. We then fill in the depth at every order=13 cell as calculated for the lower order cell that the order=13 cell is inside.
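For the NESTED pixel ordering used here, this parent/child relation is purely arithmetic: each cell at order $k$ splits into four children at order $k+1$, so the ancestor index is an integer shift of two bits per order. A small sketch with a hypothetical helper, which for nested indices should match the `pix2ang`/`ang2pix` round trip used below:

```python
def parent_cell(hp_idx, order_from, order_to):
    # NESTED scheme: two index bits per order, so drop 2 bits per order difference
    assert order_to <= order_from
    return hp_idx >> (2 * (order_from - order_to))

# e.g. the order=13 cell 2599293 lies inside the order=10 cell 40613
assert parent_cell(2599293, 13, 10) == 40613
```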
```
depths = Table()
depths['hp_idx_O_13'] = list(field_moc.flattened(13))
depths[:10].show_in_notebook()
depths.add_column(Column(data=hp.pixelfunc.ang2pix(2**ORDER,
hp.pixelfunc.pix2ang(2**13, depths['hp_idx_O_13'], nest=True)[0],
hp.pixelfunc.pix2ang(2**13, depths['hp_idx_O_13'], nest=True)[1],
nest = True),
name="hp_idx_O_{}".format(str(ORDER))
)
)
depths[:10].show_in_notebook()
for col in master_catalogue.colnames:
if col.startswith("f_"):
errcol = "ferr{}".format(col[1:])
depths = join(depths,
group["hp_idx_O_{}".format(str(ORDER)), errcol].groups.aggregate(np.nanmean),
join_type='left')
depths[errcol].name = errcol + "_mean"
depths = join(depths,
group["hp_idx_O_{}".format(str(ORDER)), col].groups.aggregate(lambda x: np.nanpercentile(x, 90.)),
join_type='left')
depths[col].name = col + "_p90"
depths[:10].show_in_notebook()
```
## III - Save the depth map table
```
depths.write("{}/depths_{}_{}.fits".format(OUT_DIR, FIELD.lower(), SUFFIX))
```
## IV - Overview plots
### IV.a - Filters
First we simply plot all the filters available on this field to give an overview of coverage.
```
tot_bands = [column[2:] for column in master_catalogue.colnames
if (column.startswith('f_') & ~column.startswith('f_ap_'))]
ap_bands = [column[5:] for column in master_catalogue.colnames
if column.startswith('f_ap_') ]
bands = set(tot_bands) | set(ap_bands)
bands
for b in bands:
plt.plot(Table(data = parse_single_table(FILTERS_DIR + b + '.xml').array.data)['Wavelength']
,Table(data = parse_single_table(FILTERS_DIR + b + '.xml').array.data)['Transmission']
, label=b)
plt.xlabel('Wavelength ($\AA$)')
plt.ylabel('Transmission')
plt.xscale('log')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Passbands on {}'.format(FIELD))
```
### IV.b - Depth overview
Then we plot the mean depth across the area over which a given band is available.
```
average_depths = []
for b in ap_bands:
mean_err = np.nanmean(depths['ferr_ap_{}_mean'.format(b)])
print("{}: mean flux error: {}, 3sigma in AB mag (Aperture): {}".format(b, mean_err, flux_to_mag(3.0*mean_err*1.e-6)[0]))
average_depths += [('ap_' + b, flux_to_mag(1.0*mean_err*1.e-6)[0],
flux_to_mag(3.0*mean_err*1.e-6)[0],
flux_to_mag(5.0*mean_err*1.e-6)[0])]
for b in tot_bands:
mean_err = np.nanmean(depths['ferr_{}_mean'.format(b)])
print("{}: mean flux error: {}, 3sigma in AB mag (Total): {}".format(b, mean_err, flux_to_mag(3.0*mean_err*1.e-6)[0]))
average_depths += [(b, flux_to_mag(1.0*mean_err*1.e-6)[0],
flux_to_mag(3.0*mean_err*1.e-6)[0],
flux_to_mag(5.0*mean_err*1.e-6)[0])]
average_depths = np.array(average_depths, dtype=[('band', "<U16"), ('1s', float), ('3s', float), ('5s', float)])
def FWHM(X,Y):
half_max = max(Y) / 2.
#find when function crosses line half_max (when sign of diff flips)
#take the 'derivative' of signum(half_max - Y[])
d = half_max - Y
#plot(X,d) #if you are interested
#find the left and right most indexes
low_end = X[np.where(d < 0)[0][0]]
high_end = X[np.where(d < 0)[0][-1]]
return low_end, high_end, (high_end - low_end)
data = []
for b in ap_bands:
data += [['ap_' + b, Table(data = parse_single_table(FILTERS_DIR + b +'.xml').array.data)]]
for b in tot_bands:
data += [[b, Table(data = parse_single_table(FILTERS_DIR + b +'.xml').array.data)]]
wav_range = []
for dat in data:
print(dat[0], FWHM(np.array(dat[1]['Wavelength']), np.array(dat[1]['Transmission'])))
wav_range += [[dat[0], FWHM(np.array(dat[1]['Wavelength']), np.array(dat[1]['Transmission']))]]
for dat in data:
wav_deets = FWHM(np.array(dat[1]['Wavelength']), np.array(dat[1]['Transmission']))
depth = average_depths['5s'][average_depths['band'] == dat[0]]
#print(depth)
plt.plot([wav_deets[0],wav_deets[1]], [depth,depth], label=dat[0])
plt.xlabel('Wavelength ($\AA$)')
plt.ylabel('Depth')
plt.xscale('log')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Depths on {}'.format(FIELD))
```
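The `FWHM` helper above finds the outermost grid points lying strictly above half of the maximum transmission. An equivalent pure-Python sketch, useful for seeing the logic without the numpy indexing:

```python
def fwhm(xs, ys):
    # outermost x values where y is strictly above half the maximum
    half = max(ys) / 2.0
    above = [x for x, y in zip(xs, ys) if y > half]
    low_end, high_end = above[0], above[-1]
    return low_end, high_end, high_end - low_end

# a symmetric triangular profile peaking at 4
print(fwhm(list(range(9)), [0, 1, 2, 3, 4, 3, 2, 1, 0]))  # → (3, 5, 2)
```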
### IV.c - Depth vs coverage comparison
How best to do this? Colour/intensity plot over area? Percentage coverage vs mean depth?
```
for dat in data:
wav_deets = FWHM(np.array(dat[1]['Wavelength']), np.array(dat[1]['Transmission']))
depth = average_depths['5s'][average_depths['band'] == dat[0]]
#print(depth)
coverage = np.sum(~np.isnan(depths['ferr_{}_mean'.format(dat[0])]))/len(depths)
plt.plot(coverage, depth, 'x', label=dat[0])
plt.xlabel('Coverage')
plt.ylabel('Depth')
#plt.xscale('log')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Depths (5 $\sigma$) vs coverage on {}'.format(FIELD))
```
<a href="https://colab.research.google.com/github/jereyel/LinearAlgebra/blob/main/Assignment2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Welcome to Python Fundamentals
In this module, we are going to establish our skills in Python Programming. In this notebook we are going to cover:
* Variables and Data Types
* Operations
* Input and Output Operations
* Iterables
* Functions
## Variables and Data Types
```
x = 1
a, b = 3, -2
type(x)
y = 3.0
type(y)
x = float(x)
type(x)
s, t, u = "1", '3', 'three'
type(s)
```
## Operations
### Arithmetic
```
w, x, y, z = 4.0, -3.0, 1, -32
### Addition
S = w + x
### Subtractions
D = y - z
### Multiplication
P = w*z
### Division
Q = y/x
### Floor Division
Qf = w//z
Qf
### Exponentiation
E = w**w
E
### Modulo
mod = z%x
mod
```
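One subtlety behind the floor-division and modulo results above: Python's `//` rounds toward negative infinity and `%` takes the sign of the divisor, so the identity `a == b * (a // b) + a % b` always holds. A quick check:

```python
# floor division rounds toward -infinity; the remainder takes the sign of the divisor
q, r = divmod(-32, 5)
print(q, r)  # → -7 3
assert 5 * q + r == -32
```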
### Assignment
```
A, B, C, D, E = 0, 100, 2, 1, 2
A += w
B -= x
C *= w
D /= x
E **= y
E
```
### Comparators
```
size_1, size_2, size_3 = 1, 2.0, "1"
true_size = 1.0
## Equality
size_1 == true_size
## Non-Equality
size_2 != true_size
## Inequality
s1 = size_1 > size_2
s2 = size_1 < size_2/2
s3 = true_size <= size_1
s4 = size_2 <= true_size
```
### Logical
```
size_1 == true_size
size_1
size_1 is true_size
size_1 is not true_size
P, Q = True, False
conj = P and Q
disj = P or Q
disj
nand = not (P and Q)
nand
xor = (not P and Q) or (P and not Q)
xor
```
## Input and Output
```
print("Hello World!")
cnt = 14000
string = "Hello World!"
print(string, ", Current COVID count is", cnt)
cnt += 10000
print(f"{string}, current count is: {cnt}")
sem_grade = 85.25
name = "jerbox"
print("Hello {}, your semestral grade is: {}".format(name, sem_grade))
pg, mg, fg = 0.3, 0.3, 0.4
print("The weights of your semestral grade are:\
\n\t {:.2%} for Prelims\
\n\t {:.2%} for Midterms, and\
\n\t {:.2%} for Finals.".format(pg, mg, fg))
e = input("Enter a number: ")
name = input("Enter your name: ")
pg = float(input("Enter prelim grade: "))
mg = float(input("Enter midterm grade: "))
fg = float(input("Enter final grade: "))
# weighted semestral grade, using the weights defined above
sem_grade = 0.3 * pg + 0.3 * mg + 0.4 * fg
print("Hello {}, your semestral grade is: {:.2f}".format(name, sem_grade))
```
### Looping Statements
## While
```
i, j = 0, 10
while i <= j:
    print(f"{i}\t|\t{j}")
    i += 1
```
## For
```
i = 0
for i in range(11):
    print(i)

playlist = ["Bahay Kubo", "Magandang Kanta", "Kubo"]
print('Now Playing:\n')
for song in playlist:
    print(song)
```
## Flow Control
### Condition Statements
```
num_1, num_2 = 14, 12
if num_1 == num_2:
    print("HAHA")
elif num_1 > num_2:
    # the elif branch had no body; a minimal print keeps the example runnable
    print("HEHE")
else:
    print("HUHU")
```
## Functions
```
# void Deleteuser (int userid){
#     delete(userid);
# }
def delete_user(userid):
    print("Successfully deleted user: {}".format(userid))

addend1, addend2 = 5, 6

def add(addend1, addend2):
    sum = addend1 + addend2
    return sum
add(5, 4)
```
# 2A.eco - SNCF API exercise, corrected
Working with a [REST API](https://fr.wikipedia.org/wiki/Representational_state_transfer), taking the SNCF API as an example, with corrected exercises.
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
## Part 0 - Recommended modules and connecting to the API
You will most likely need the following modules:
- requests
- datetime
- pandas
- matplotlib
Create a login to connect to the SNCF API: https://data.sncf.com/api
You can now begin. This notebook can take a while to run, especially from Part 3 onwards.
```
# !!!!! Make sure to put your own token here !!!!!
token_auth = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
import keyring, os
if "XXXXXX" in token_auth:
token_auth = keyring.get_password("sncf", os.environ["COMPUTERNAME"] + "key")
```
## Part 1 - Finding the stations reachable _via_ the SNCF
- Find all the stations available through the API and create a csv file with each station's code, its name and its latitude/longitude coordinates, as well as the administrative information of the region when it is available
- Plot them on a chart
import pandas as pd
import requests
def page_gares(numero_page):
    return requests.get(
        'https://api.sncf.com/v1/coverage/sncf/stop_areas?start_page={}'.format(numero_page),
        auth=(token_auth, ''))

######################################
# start with the first page, which gives the number of results per page
# as well as the total number of results
page_initiale = page_gares(0)
item_per_page = page_initiale.json()['pagination']['items_per_page']
total_items = page_initiale.json()['pagination']['total_result']
dfs = []
# loop over all the following pages
print_done = {}
for page in range(int(total_items/item_per_page)+1) :
stations_page = page_gares(page)
ensemble_stations = stations_page.json()
if 'stop_areas' not in ensemble_stations:
# no stop areas on this page
continue
# keep only the information we are interested in
for station in ensemble_stations['stop_areas']:
station['lat'] = station['coord']['lat']
station["lon"] = station['coord']['lon']
if 'administrative_regions' in station.keys() :
for var_api, var_df in zip(['insee','name','label','id','zip_code'],
['insee','region','label_region','id_region','zip_code']) :
try:
station[var_df] = station['administrative_regions'][0][var_api]
except KeyError:
if var_api not in print_done:
print("key '{0}' not here but {1}".format(var_api,
",".join(station['administrative_regions'][0].keys())))
print_done[var_api] = var_api
[station.pop(k,None) for k in ['coord','links','administrative_regions', 'type', 'codes']]
stations = ensemble_stations['stop_areas']
try:
dp = pd.DataFrame(stations)
except Exception as e:
# The SNCF sometimes changes its data schema.
# Show the station data to get a better idea than the error returned by pandas.
raise Exception("Data problem\n{0}".format(stations)) from e
dfs.append(dp)
if page % 10 == 0:
print("at page", page, "---", dp.shape)
import pandas
df = pandas.concat(dfs)
df.to_csv("./ensemble_gares.csv")
print(df.shape)
df.head()
df = pd.read_csv("./ensemble_gares.csv", encoding = "ISO-8859-1")
print(df.columns)
print(df.shape)
# Example of the information stored for one station
df.iloc[317]
# build dictionaries mapping station labels and names to station codes
dict_label_gare_code = df[['label','id']].set_index('label').to_dict()['id']
dict_nom_gare_code = df[['name','id']].set_index('name').to_dict()['id']
print(df.columns)
# scatter plot of the station coordinates
%matplotlib inline
import matplotlib.pyplot as plt
lng_var = df[(df['lat']>35) & (df['lat']<60)]["lon"].tolist()
lat_var = df[(df['lat']>35) & (df['lat']<60)]["lat"].tolist()
plt.scatter(x = lng_var , y = lat_var,marker = "o")
```
## Journeys from the Gare de Lyon
### Let's go to Lyon: 17 November 2016 at 19:57
Imagine you want to get away from Paris for a few days, and as it happens you have been invited to spend some time in Lyon. You leave on 17 November around 19:50 so as not to cut your workday too short.
#### Question 1
- Start by retrieving the information about the journey between Paris Gare de Lyon and Lyon Perrache on 17 November at 19:57
- Paris - Gare de Lyon (station code: __stop\_area:OCE:SA:87686006__)
- Lyon - Gare Lyon Perrache (station code: __stop\_area:OCE:SA:87722025__)
- Hint: use the "journeys" request
- Another hint: the date format is YYYYMMDDTHHMMSS (year, month, day, 'T', hour, minutes, seconds)
- Answer the following questions
- how many stops are there between these two stations? (use the 'journeys' key)
- how long does the train stop at each of them?
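Before writing the full request, it may help to see the shape of a "journeys" URL. A minimal sketch (the actual call, left commented out, requires a personal `token_auth` API key from SNCF):

```python
# Sketch of a "journeys" request URL for the SNCF API.
gare_depart = "stop_area:OCE:SA:87686006"   # Paris - Gare de Lyon
gare_arrivee = "stop_area:OCE:SA:87722025"  # Lyon - Perrache
date_depart = "20161117T195700"             # YYYYMMDDTHHMMSS

url = ("https://api.sncf.com/v1/coverage/sncf/journeys"
       "?from={}&to={}&datetime={}").format(gare_depart, gare_arrivee, date_depart)
print(url)
# The actual call would then be:
# requests.get(url, auth=(token_auth, '')).json()
```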
```
##### a helper function for computing times
from datetime import datetime, timedelta
def convertir_en_temps(chaine) :
    ''' convert the API's date string into a datetime '''
    return datetime.strptime(chaine.replace('T',''),'%Y%m%d%H%M%S')
```
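For example, the API returns timestamps such as `20161117T195700`; the helper (restated here so the snippet is self-contained) turns them into a `datetime`:

```python
from datetime import datetime

def convertir_en_temps(chaine):
    '''Convert the API's date string into a datetime.'''
    return datetime.strptime(chaine.replace('T', ''), '%Y%m%d%H%M%S')

print(convertir_en_temps('20161117T195700'))
# 2016-11-17 19:57:00
```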
And the inverse:
```
def convertir_en_chaine(dt) :
    ''' convert a datetime back into the API's string format '''
    return datetime.strftime(dt, '%Y%m%dT%H%M%S')
# information about a journey chosen in the future
# (the API does not return results very far in the past)
now = datetime.now()
dt = now + timedelta(14) # in two weeks
date_depart = convertir_en_chaine(dt)
gare_depart = 'stop_area:OCE:SA:87686006'
gare_arrivee = 'stop_area:OCE:SA:87722025'
# all departures
paris_lyon = requests.get('https://api.sncf.com/v1/coverage/sncf/journeys?'\
'from={}&to={}&datetime={}'.format(gare_depart, gare_arrivee, date_depart), \
auth=(token_auth, '')).json()
date_depart
# the stations along the route between Paris and Lyon on this journey,
# together with the stop time at each one
session = paris_lyon['journeys'][0]['sections'][1]
if "stop_date_times" in session:
for i in session['stop_date_times'] :
print(i['stop_point']['name'],
convertir_en_temps(i['departure_date_time'])-convertir_en_temps(i['arrival_date_time']),"minutes d'arrêt")
```
#### Question 2
You are in a bit of a hurry and afraid of boarding the wrong train, since other TGVs leave the Gare de Lyon at about the same time (from 19:00 onward).
- If you ask the API, how many results does it return?
```
### trains leaving around 19:00
departs_paris = requests.get('https://api.sncf.com/v1/coverage/sncf/stop_points/stop_point:OCE:SP:'\
'TGV-87686006/departures?from_datetime={}'.format(date_depart) ,
auth=(token_auth, '')).json()
# number of trains the API returns from that time onward
print(len(departs_paris['departures']))
```
- What are the departure times of these trains?
```
for i in range(len(departs_paris['departures'])) :
print(departs_paris['departures'][i]['stop_date_time']['departure_date_time'])
```
- Among these trains, how many have Lyon as their final destination and still leave on 17 November?
```
nombre_trains_pour_lyon = 0
for depart in departs_paris['departures'] :
if "Lyon" in depart['display_informations']['direction'] :
if convertir_en_temps(depart['stop_date_time']['arrival_date_time']) > convertir_en_temps(date_depart) and \
convertir_en_temps(depart['stop_date_time']['arrival_date_time']) < datetime(2016,11,18,0,0,0):
nombre_trains_pour_lyon += 1
print("le prochain départ pour Lyon sera le", convertir_en_temps(depart['stop_date_time']['arrival_date_time']))
print("Il y a" , nombre_trains_pour_lyon, "train(s) pour Lyon dans les trains proposés",
"par l'API qui partent encore le 17 novembre")
```
-------------------------
### Where to, and when?
- Actually, you are no longer so sure you want to go to Lyon. Still, you are now at the Gare de Lyon and it is 18:00.
#### Question 3
- How many TGVs leave between 18:00 and 20:00?
- Which one reaches its final destination earliest?
```
# two helper functions:
def trouver_destination_tgv(origine, date_heure) :
    '''Return the next 10 departures from a given station.'''
    return requests.get('https://api.sncf.com/v1/coverage/sncf/stop_points/{}/' \
                        'departures?from_datetime={}'.format(origine, date_heure) ,
                        auth=(token_auth, '')).json()
def trouver_trajet_dispo_max_heure(gare_depart, date_heure_depart, date_heure_max) :
    ''' Return full information on journeys leaving a station between times X and Y '''
    destinations = []
    # keep querying the API as long as it returns departures before the cutoff
    while convertir_en_temps(date_heure_depart) < convertir_en_temps(date_heure_max) :
        # collect every destination leaving from the given time onward
        destinations = destinations + trouver_destination_tgv(gare_depart, date_heure_depart)['departures']
        nombre_resultats = trouver_destination_tgv(gare_depart, date_heure_depart)['pagination']['items_on_page']
        # take the latest departure time among the results just returned
        # and use it as the starting time for the next query
        date_heure_depart = trouver_destination_tgv(gare_depart,
            date_heure_depart)['departures'][nombre_resultats-1]['stop_date_time']['departure_date_time']
    return destinations
# find all journeys whose departure falls between two times
# information about a journey chosen in the future
# (the API does not return results very far in the past)
now = datetime.now()
if now.hour < 6:
    # not too early in the morning
    now += timedelta(hours=4)
dt = now + timedelta(14) # in two weeks
date_heure = convertir_en_chaine(dt)
max_date_heure = convertir_en_chaine(dt + timedelta(hours=4))
print("entre", date_heure, "et", max_date_heure)
gare_initiale = 'stop_point:OCE:SP:TGV-87686006'
# request all journeys leaving the Gare de Lyon between 18:00 and 20:00
destinations_depuis_paris_max_20h = trouver_trajet_dispo_max_heure(gare_initiale, date_heure, max_date_heure)
# drop those whose departure is after the cutoff time
dictionnaire_destinations = {}
i = 0
for depart in destinations_depuis_paris_max_20h :
print(depart['display_informations']['direction'],depart['stop_date_time']['departure_date_time'])
if convertir_en_temps(depart['stop_date_time']['departure_date_time']) < convertir_en_temps(max_date_heure) :
i += 1
dictionnaire_destinations[i] = depart
print("Je peux prendre", len(dictionnaire_destinations.keys()),
"trains qui partent entre 18h et 20h de Gare de Lyon le 17 novembre 2016")
# find the one that reaches its destination earliest
def trouver_info_trajet(dep, arr, heure) :
    '''Return the first journey between two stations at a given time, or None.'''
    res = requests.get('https://api.sncf.com/v1/coverage/sncf/journeys?from={}&to={}&datetime={}'.format(dep,arr,heure), \
                       auth=(token_auth, '')).json()
    if 'journeys' not in res:
        if 'error' in res and "no solution" in res["error"]['message']:
            print("No solution for '{0}' --> '{1}' at {2}.".format(dep, arr, heure))
        return None
    return res['journeys'][0]
# initialize the arrival-time upper bound, which we will then minimize;
# start it 8 hours after the departure time
heure_minimale = dt + timedelta(hours=8)
destination_la_plus_rapide = None
print("heure_minimale", heure_minimale, " len ", len(dictionnaire_destinations))
# among all possible destinations, look for the train that reaches its final destination earliest
for code, valeurs in dictionnaire_destinations.items() :
    # station code of the train's final destination
    code_destination = dictionnaire_destinations[code]['route']['direction']['id']
    # at what time does this train arrive?
    trajet = trouver_info_trajet('stop_area:OCE:SA:87686006', code_destination,
                                 dictionnaire_destinations[code]['stop_date_time']['arrival_date_time'])
    if trajet is None:
        continue
    if heure_minimale > convertir_en_temps(trajet['arrival_date_time']) :
        heure_minimale = convertir_en_temps(trajet['arrival_date_time'])
        destination_la_plus_rapide = dictionnaire_destinations[code]
if destination_la_plus_rapide is not None:
    print(destination_la_plus_rapide['display_informations']['direction'], heure_minimale)
else:
    print("no result found")
```
### What about connections?
#### Question 4
- Let us see how far we can get on trains leaving from the Gare de Lyon:
- Which stations can be reached when leaving on 17 November, with no changes and without departing after midnight?
- If we take one of those trains, how far can we then get with one connection, without departing after 8:00 the next morning?
```
# find all stations that lie on the selected trains' routes, i.e. reachable without a connection
def trouver_toutes_les_gares_du_trajet(gare_depart, gare_arrivee_finale, horaire_depart) :
return requests.get('https://api.sncf.com/v1/coverage/sncf/journeys?from={}&to={}' \
'&datetime={}'.format(gare_depart,gare_arrivee_finale,horaire_depart), \
auth=(token_auth, '')).json()
# Example for the first station in the list
gare_depart = dictionnaire_destinations[1]['stop_point']['id']
gare_arrivee = dictionnaire_destinations[1]['route']['direction']['id']
horaire_train = dictionnaire_destinations[1]['stop_date_time']['arrival_date_time']
######################
trajet_recherche = trouver_toutes_les_gares_du_trajet(gare_depart,gare_arrivee,horaire_train)
session = trajet_recherche['journeys'][0]['sections'][0]
if "stop_date_times" in session:
for i in session['stop_date_times']:
print(i['stop_point']['name'])
#### build the list of stations reachable without a connection
liste_gares_direct = []
for x in dictionnaire_destinations.keys():
# on prend les deux gares départ + finale
gare_depart = dictionnaire_destinations[x]['stop_point']['id']
gare_arrivee = dictionnaire_destinations[x]['route']['direction']['id']
horaire_train = dictionnaire_destinations[x]['stop_date_time']['arrival_date_time']
# on appelle la fonction définie précédemment
trajet_recherche = trouver_toutes_les_gares_du_trajet(gare_depart,gare_arrivee,horaire_train)
if 'error' in trajet_recherche:
continue
session = trajet_recherche['journeys'][0]['sections'][0]
if "stop_date_times" in session:
for i in session['stop_date_times']:
print(i['stop_point']['name'], i['arrival_date_time'])
liste_gares_direct.append(i['stop_point']['name'])
print("-------------")
#### this is the list of stations reachable without a connection
liste_gares_direct = set(liste_gares_direct)
```
#### Example: find all possible connections from the journey between Paris and Perpignan
```
# for the first journey found in the previous step,
# look for every possible connection between the arrival time
# and 8 o'clock the next morning
gare_depart = dictionnaire_destinations[1]['stop_point']['id']
gare_arrivee = dictionnaire_destinations[1]['route']['direction']['id']
horaire_train = dictionnaire_destinations[1]['stop_date_time']['arrival_date_time']
horaire_max = convertir_en_chaine(dt + timedelta(hours=8))
print("horaire_max", horaire_max)
###################### leaving the Gare de Lyon toward Perpignan
trajet_recherche = trouver_toutes_les_gares_du_trajet(gare_depart,gare_arrivee,horaire_train)
dictionnaire_correspondances = {}
for i in trajet_recherche['journeys'][0]['sections'][0]['stop_date_times']:
#print("la gare où on est descendu depuis Paris", i['stop_point']['name'])
if i['stop_point']['id'] == "stop_point:OCE:SP:TGV-87686006" :
#print("on ne prend pas la gare de Lyon - ce n'est pas une gare du trajet")
pass
else :
# on va appliquer à nouveau la fonction des trajets disponibles mais pour l'ensemble des gares
gare_dep_connexion = i['stop_point']['id']
nom_gare_dep = i['stop_point']['name']
heure_dep_connexion = i['arrival_date_time']
trajet_recherche_connexion = trouver_trajet_dispo_max_heure(gare_dep_connexion, heure_dep_connexion, horaire_max)
test_as_connexion_on_time = True
# pour chaque trajet possible depuis la gare où on est arrivé depuis paris, on va vérifier qu'on part bien
# avant 8h le lendemain
autre_gare = None
for vers_autre_gare in trajet_recherche_connexion :
heure_depart_depuis_autre_gare = vers_autre_gare['stop_date_time']['departure_date_time']
destination_trajet = vers_autre_gare['display_informations']['direction']
if convertir_en_temps(heure_depart_depuis_autre_gare) < convertir_en_temps(horaire_max) :
dictionnaire_correspondances[(nom_gare_dep,heure_depart_depuis_autre_gare)] = destination_trajet
test_as_connexion_on_time = False
# print(nom_gare_dep,heure_depart_depuis_autre_gare, "gare finale du trajet", destination_trajet)
autre_gare = vers_autre_gare
if autre_gare and test_as_connexion_on_time:
dictionnaire_correspondances[(nom_gare_dep,autre_gare['stop_date_time']['departure_date_time'])] = ""
# keep every station reachable from one of the connecting stations, with a departure before 8:00
dictionnaire_correspondances
# for journeys leaving those stations before 8:00, list every station along the route
gares_avec_connexion = []
for k,v in dictionnaire_correspondances.items() :
if len(v) == 0 :
pass
else :
if k[0] not in dict_nom_gare_code:
print("'{0}' pas trouvé dans {1}".format(k[0], ", ".join(
sorted(_ for _ in dict_nom_gare_code if _[:4] == k[0][:4]))))
continue
if v not in dict_label_gare_code:
print("'{0}' pas trouvé dans {1}".format(v, ", ".join(
sorted(_ for _ in dict_label_gare_code if _[:4] == v[:4]))))
continue
dep = dict_nom_gare_code[k[0]]
arr = dict_label_gare_code[v]
gares_entre_dep_arr = trouver_toutes_les_gares_du_trajet(dep, arr,k[1])
for gare in gares_entre_dep_arr['journeys'][0]['sections'][1]['stop_date_times']:
#print("gare depart:", k[0], gare['stop_point']['name'])
gares_avec_connexion.append(gare['stop_point']['name'])
# the list of stations reachable with one connection
gares_avec_connexion = set(gares_avec_connexion)
print(gares_avec_connexion)
# build the list of stations reachable only with a connection (not reachable directly)
gares_atteintes_avec_connexion = [a for a in gares_avec_connexion if (a not in liste_gares_direct)]
print(gares_atteintes_avec_connexion)
```
##### Example: find all possible connections from the trains taken from the Gare de Lyon
Now that we have worked through one example, we do the same for every journey leaving the Gare de Lyon.
!!! Warning: this cell takes a long time to run (a very, very long time) !!!!
```
gares_avec_connexion = []
for gare_initiale in dictionnaire_destinations:
# pour le premier trajet gare de la liste trouvée à l'étape précédente
# on va chercher toutes les connexions des gares possibles
print(gare_initiale, "/", len(dictionnaire_destinations))
gare_depart = dictionnaire_destinations[gare_initiale]['stop_point']['id']
gare_arrivee = dictionnaire_destinations[gare_initiale]['route']['direction']['id']
horaire_train = dictionnaire_destinations[gare_initiale]['stop_date_time']['arrival_date_time']
# Pour les trajets qui partent avant 8h des gares, on va chercher toutes les gares qui sont sur le trajet
trajet_recherche = trouver_toutes_les_gares_du_trajet(gare_depart, gare_arrivee, horaire_train)
dictionnaire_correspondances = {}
if 'journeys' not in trajet_recherche:
print("Pas de trajet entre '{0}' et '{1}' h={2}.".format(gare_depart, gare_arrivee, horaire_train))
continue
session = trajet_recherche['journeys'][0]['sections'][0]
if "stop_date_times" in session:
for i in session['stop_date_times']:
if i['stop_point']['id'] == "stop_point:OCE:SP:TGV-87686006" :
#print("on ne prend pas la gare de Lyon - ce n'est pas une gare du trajet")
pass
else :
# on va appliquer à nouveau la fonction des trajets disponibles mais pour l'ensemble des gares
gare_dep_connexion = i['stop_point']['id']
nom_gare_dep = i['stop_point']['name']
heure_dep_connexion = i['arrival_date_time']
trajet_recherche_connexion = trouver_trajet_dispo_max_heure(gare_dep_connexion, heure_dep_connexion, horaire_max)
test_as_connexion_on_time = True
# pour chaque trajet possible depuis la gare où on est arrivé depuis paris, on va vérifier qu'on part bien
# avant 8h le lendemain
for vers_autre_gare in trajet_recherche_connexion :
heure_depart_depuis_autre_gare = vers_autre_gare['stop_date_time']['departure_date_time']
destination_trajet = vers_autre_gare['display_informations']['direction']
if convertir_en_temps(heure_depart_depuis_autre_gare) < convertir_en_temps(horaire_max) :
dictionnaire_correspondances[(nom_gare_dep,heure_depart_depuis_autre_gare)] = destination_trajet
test_as_connexion_on_time = False
if test_as_connexion_on_time == True :
dictionnaire_correspondances[(nom_gare_dep,vers_autre_gare['stop_date_time']['departure_date_time'])] = ""
# on garde toutes les gares où on peut aller depuis une des gares de correspondance, avec un départ avant 8H
for k,v in dictionnaire_correspondances.items() :
if len(v) == 0:
continue
if k[0] not in dict_nom_gare_code:
print("'{0}' pas trouvé dans {1}".format(k[0], ", ".join(
sorted(_ for _ in dict_nom_gare_code if _[:4] == k[0][:4]))))
continue
if v not in dict_label_gare_code:
print("'{0}' pas trouvé dans {1}".format(v, ", ".join(
sorted(_ for _ in dict_label_gare_code if _[:4] == v[:4]))))
continue
dep = dict_nom_gare_code[k[0]]
arr = dict_label_gare_code[v]
gares_entre_dep_arr = trouver_toutes_les_gares_du_trajet(dep, arr, k[1])
if 'journeys' not in gares_entre_dep_arr:
print("Pas de trajet entre '{0}' et '{1}'.".format(k[0], v))
continue
session = gares_entre_dep_arr['journeys'][0]['sections'][1]
if "stop_date_times" in session:
for gare in session['stop_date_times'] :
gares_avec_connexion.append(gare['stop_point']['name'])
# la liste des gares atteignables avec 1 correspondance
gares_avec_connexion = set(gares_avec_connexion)
gares_connexion = [a for a in gares_avec_connexion if a not in liste_gares_direct]
print(gares_connexion)
```
#### Question 5
- Plot all reachable stations on a scatter chart, distinguishing stations reached on a single journey from those reached with a connection.
```
######### category of each station for the plot
dict_type_gares = {}
for a in liste_gares_direct :
dict_type_gares[a] = "direct"
for a in gares_connexion :
dict_type_gares[a] = "correspondance"
dict_type_gares['Paris-Gare-de-Lyon'] = 'depart'
dict_type_gares
```
Let us show all of this on a chart.
```
# plot the stations using the latitude/longitude data
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.lines import Line2D
mpl.rcParams['axes.facecolor'] = "whitesmoke"
palette = plt.cm.spring
liste_couleurs = [palette(0), palette(0.5), palette(0.8)]
data_all = pd.read_csv("./ensemble_gares.csv", encoding = "ISO-8859-1")
connexions = []
lat = []
lon = []
labels = []
dict_lat = data_all.set_index('name')['lat'].to_dict()
dict_lon = data_all.set_index('name')['lon'].to_dict()
#dict_lab = data_all.set_index('name')['name'].str.replace("gare de","").to_dict()
for gare in dict_type_gares:
if gare not in dict_lat:
print("'{0}' pas trouvé dans dict_lat (problème d'accents?)".format(gare))
continue
if gare not in dict_lon:
print("'{0}' pas trouvé dans dict_lon (problème d'accents?)".format(gare))
continue
lat.append(dict_lat[gare])
lon.append(dict_lon[gare])
labels.append(gare)
%matplotlib inline
### The map
###################################################################################################
def liste_unique(liste) :
    # keep the first occurrence of each element, preserving order
    unicite = []
    for x in liste :
        if x not in unicite :
            unicite.append(x)
    return unicite
# deduplicate (label, lon, lat) triples together, so the three lists stay aligned
# (deduplicating each list independently could pair a latitude with the wrong longitude)
triples = liste_unique(list(zip(labels, lon, lat)))
lab_un = [t[0] for t in triples]
lon_un = [t[1] for t in triples]
lat_un = [t[2] for t in triples]
fig = plt.figure(figsize=(12,10))
for label, x, y in set(zip(labels, lon, lat)) :
if dict_type_gares[label] == "direct" :
plt.annotate(label, xy = (x - 0.05, y - 0.05), horizontalalignment = 'right', size = 13)
else :
plt.annotate(label, xy = (x + 0.05, y + 0.05), horizontalalignment = 'left', size = 13)
colors = []
for x in lab_un :
if dict_type_gares[x] == "depart" :
colors.append(liste_couleurs[0])
if dict_type_gares[x] == "direct" :
colors.append(liste_couleurs[1])
if dict_type_gares[x] == "correspondance" :
colors.append(liste_couleurs[2])
plt.scatter(x = lon_un , y = lat_un, marker = "o", c = colors, s = 100, alpha = 0.5)
#### Legend
circ1 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.5, markersize=10, markerfacecolor = liste_couleurs[0])
circ2 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.5, markersize=10, markerfacecolor = liste_couleurs[1])
circ3 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.5, markersize=10, markerfacecolor = liste_couleurs[2])
legende = plt.legend((circ1, circ2, circ3), ("Departure station", "Direct from the Gare de Lyon on the evening of 17 November",
                     "With one connection from a directly reached station"), numpoints=1, loc="best")
legende.get_frame().set_facecolor('white')
plt.title("Stations reachable before midnight from the Gare de Lyon", size = 20);
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Pre_data = pd.read_csv("C:\\Users\\2019A00303\\Desktop\\Code\\Airbnb Project\\Data\\PreProcessingAustralia.csv")
Pre_data
Pre_data['Price'].plot(kind='hist', bins=100)
Pre_data['group'] = pd.cut(x=Pre_data['Price'],
bins=[0, 50, 100, 150, 200, 1000],
labels=['group_1','group_2','group_3','group_4','group_5'])
Pre_data.head()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(Pre_data, Pre_data["group"]):
train = Pre_data.loc[train_index]
test = Pre_data.loc[test_index]
train['group'].value_counts() / len(train)
test['group'].value_counts() / len(test)
train.drop('group', axis=1, inplace=True)
train.head()
test.drop(['Unnamed: 0','group', 'Host Since', 'Country', 'Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
test.head()
train_y = train[['Price']]
train_y.head()
train.drop(['Unnamed: 0', 'Price', 'Host Since', 'Country','Airbed', 'Couch', 'Futon', 'Pull-out Sofa', 'Real Bed', 'Cleaning Fee'], axis=1, inplace=True)
train_X = train
train_X.head()
test_y= test[['Price']]
test_y.head()
test.drop('Price', axis=1, inplace=True)
test_X = test
test_X.head()
# from sklearn.linear_model import LinearRegression
# l_reg = LinearRegression()
# l_reg.fit(train_X, train_y)
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
# predictions = l_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = l_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.tree import DecisionTreeRegressor
# d_reg = DecisionTreeRegressor()
# d_reg.fit(train_X, train_y)
# predictions = d_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = d_reg.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.svm import SVR
# svr = SVR()
# svr.fit(train_X, train_y)
# predictions = svr.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = svr.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neighbors import KNeighborsRegressor
# knn = KNeighborsRegressor()
# knn.fit(train_X, train_y)
# predictions = knn.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = knn.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# from sklearn.neural_network import MLPRegressor
# ann = MLPRegressor()
# ann.fit(train_X, train_y)
# predictions = ann.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
# predictions = ann.predict(test_X)
# mse = mean_squared_error(test_y, predictions)
# mae = mean_absolute_error(test_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
from sklearn.ensemble import RandomForestRegressor
r_reg = RandomForestRegressor()
r_reg.fit(train_X, train_y['Price'].ravel())  # ravel avoids a column-vector DataConversionWarning
features = train_X.columns
importances = r_reg.feature_importances_
indices = np.argsort(importances)
plt.title('Australia Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='g', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
predictions = r_reg.predict(train_X)
mse = mean_squared_error(train_y, predictions)
mae = mean_absolute_error(train_y, predictions)
rmse = np.sqrt(mse)
print(mse, rmse, mae)
# from sklearn.model_selection import GridSearchCV
# param = {'n_estimators' : [800,900,1000], 'max_features' : ['sqrt','auto','log2'], 'max_depth' : [8,9,10],
# 'min_samples_split': [2,3,4]}
# r_reg = RandomForestRegressor(random_state=42)
# search = GridSearchCV(r_reg, param, cv=5,
# scoring='neg_mean_absolute_error')
# search.fit(train_X, train_y['Price'].ravel())
# from sklearn.ensemble import RandomForestRegressor
# r_reg = RandomForestRegressor(bootstrap=True,
# min_samples_split=2,
# criterion='mse',
# max_depth=None,
# max_features='auto',
# n_estimators=1000,
# random_state=42,
# )
# r_reg.fit(train_X, train_y['Price'].ravel())
# predictions = r_reg.predict(train_X)
# mse = mean_squared_error(train_y, predictions)
# mae = mean_absolute_error(train_y, predictions)
# rmse = np.sqrt(mse)
# print(mse, rmse, mae)
```
# KNN(K Nearest Neighbours) for classification of glass types
We will use the KNN algorithm to classify the type of glass.
### What is covered?
- About KNN algorithm
- Exploring the dataset using visualization - scatterplot, pairplot, heatmap (correlation matrix).
- Feature scaling
- Using KNN to predict
- Optimization
- Distance metrics
- Finding the best K value
### About KNN
- It is an instance-based algorithm.
- As opposed to model-based algorithms, which train on the data and then discard it, instance-based algorithms retain the data and use it to classify each new data point when it arrives.
- A distance metric (Euclidean, Manhattan) is used to find the nearest neighbors.
- It can solve both classification problems (by taking the majority class of the nearest neighbors) and regression problems (by taking the mean of the nearest neighbors).
- If the majority of the nearest neighbors of a new data point belong to a certain class, the model assigns the new data point to that class.

For example, in the above plot, assuming k=5, the black point (the new data) is classified as class 1 (blue), because 3 out of 5 of its nearest neighbors belong to class 1.
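The procedure above can be sketched in a few lines of plain Python. This is a toy illustration on made-up points, not the scikit-learn implementation used later:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    """Toy KNN: Euclidean distances, majority vote among the k nearest points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distance to every training point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]])
y = np.array([1, 1, 1, 2, 2])
print(knn_predict(X, y, np.array([0.5, 0.5]), k=3))  # 1
```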
### Dataset
[Glass classification dataset](https://www.kaggle.com/uciml/glass). Download it to follow along.
**Description** -
This is a Glass Identification Data Set from UCI. It contains 10 attributes, including the id. The response is the glass type (7 discrete values).
- Id number: 1 to 214 (removed from CSV file)
- RI: refractive index
- Na: Sodium (unit measurement: weight percent in corresponding oxide, as are attributes 4-10)
- Mg: Magnesium
- Al: Aluminum
- Si: Silicon
- K: Potassium
- Ca: Calcium
- Ba: Barium
- Fe: Iron
- Type of glass: (class attribute)
- 1 building windows (float processed)
- 2 building windows (non-float processed)
- 3 vehicle windows (float processed)
- 4 vehicle windows (non-float processed) (none in this database)
- 5 containers
- 6 tableware
- 7 headlamps
About types 2 and 4: **float-processed glass** means the glass is made by floating molten glass on a bed of molten metal; this gives the sheet uniform thickness and flat surfaces.
## Load dependencies and data
```
#import dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import seaborn as sns
from sklearn.metrics import classification_report, accuracy_score
from sklearn.model_selection import cross_val_score
#load data
df = pd.read_csv('./data/glass.csv')
df.head()
# value count for glass types
df.Type.value_counts()
```
## Data exploration and visualization
#### Correlation matrix
```
cor = df.corr()
sns.heatmap(cor)
```
We can see that the Ca and K values are only weakly correlated with Type.
Also, Ca and RI are highly correlated, which means keeping RI alone is enough.
So we can go ahead and drop Ca, and also K (done later, before fitting the model).
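One way to check such claims numerically is to rank features by their absolute correlation with the target. The sketch below uses a small synthetic stand-in for the glass data so it runs on its own; with the real `df`, `df.corr()['Type'].abs().sort_values(ascending=False)` does the same job:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the glass DataFrame (same idea, tiny data).
rng = np.random.default_rng(0)
ri = rng.normal(size=50)
df_demo = pd.DataFrame({
    'RI': ri,
    'Ca': ri * 0.9 + rng.normal(scale=0.1, size=50),  # strongly tied to RI
    'K':  rng.normal(size=50),                         # unrelated noise
    'Type': (ri > 0).astype(int),
})

# Rank features by absolute correlation with the target.
print(df_demo.corr()['Type'].abs().sort_values(ascending=False))
```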
## Scatter plot of two features
```
sns.scatterplot(x=df['RI'], y=df['Na'], hue=df['Type'])  # df_feat is not defined yet at this point
```
Suppose we use only the RI and Na values to classify the glass type.
- From the above plot, we first find the nearest neighbors of the new data point to be classified.
- If the majority of those neighbors belong to a particular class, say type 4, then we classify the new point as type 4.
But there are many more than two features available for classification,
so let us look at a pairwise plot that captures all of them.
```
#pairwise plot of all the features
sns.pairplot(df,hue='Type')
plt.show()
```
The pairplot shows that the classes are not linearly separable, so KNN is a reasonable choice for classifying the glass types.
## Feature Scaling
Scaling is necessary for distance-based algorithms such as KNN,
to avoid features with larger magnitudes dominating the distance calculation.
Using `StandardScaler` we can rescale each feature to zero mean and unit variance.
**Formula:**
z = (x - u) / s
where x -> value, u -> mean, s -> standard deviation
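As a sanity check, applying this formula by hand matches what `StandardScaler` computes; a minimal sketch on toy data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0]])
z_manual = (x - x.mean()) / x.std()          # z = (x - u) / s
z_sklearn = StandardScaler().fit_transform(x)
print(np.allclose(z_manual, z_sklearn))      # True
```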
```
scaler = StandardScaler()
scaler.fit(df.drop('Type',axis=1))
#perform transformation
scaled_features = scaler.transform(df.drop('Type',axis=1))
scaled_features
df_feat = pd.DataFrame(scaled_features,columns=df.columns[:-1])
df_feat.head()
```
## Applying KNN
- Drop features that are not required
- Use random state while splitting the data to ensure reproducibility and consistency
- Experiment with distance metrics - Euclidean, manhattan
```
dff = df_feat.drop(['Ca','K'],axis=1) #Removing features - Ca and K
X_train,X_test,y_train,y_test = train_test_split(dff,df['Type'],test_size=0.3,random_state=45) # a fixed random state makes the split the same every time, so results are comparable
knn = KNeighborsClassifier(n_neighbors=4,metric='manhattan')
knn.fit(X_train,y_train)
y_pred = knn.predict(X_test)
print(classification_report(y_test,y_pred))
accuracy_score(y_test,y_pred)
```
### Finding the best K value
We can do this either -
- by plotting the accuracy
- or by plotting the error rate
Note that plotting both is not required; both are plotted here as an example.
```
k_range = range(1,25)
k_scores = []
error_rate =[]
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
#kscores - accuracy
scores = cross_val_score(knn,dff,df['Type'],cv=5,scoring='accuracy')
k_scores.append(scores.mean())
#error rate
knn.fit(X_train,y_train)
y_pred = knn.predict(X_test)
error_rate.append(np.mean(y_pred!=y_test))
#plot k vs accuracy
plt.plot(k_range,k_scores)
plt.xlabel('value of k - knn algorithm')
plt.ylabel('Cross validated accuracy score')
plt.show()
#plot k vs error rate
plt.plot(k_range,error_rate)
plt.xlabel('value of k - knn algorithm')
plt.ylabel('Error rate')
plt.show()
```
We can see that k=4 produces the most accurate results.
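Rather than reading the value off the plot, the best k can also be picked programmatically. A small sketch, with made-up scores standing in for the `k_scores` computed above:

```python
import numpy as np

# Hypothetical accuracy scores for k = 1..5 (stand-ins for k_scores above).
k_range = range(1, 6)
k_scores = [0.60, 0.65, 0.70, 0.72, 0.68]

best_k = list(k_range)[int(np.argmax(k_scores))]  # k with the highest accuracy
print(best_k)  # 4
```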
## Findings -
- Manhattan distance produced better results (improved accuracy - more than 5%)
- Applying feature scaling improved accuracy by almost 5%.
- The best k value was found to be 4.
- Dropping Ca improved results slightly; dropping K did not affect the results at all.
- Also, we noticed that RI and Ca are highly correlated.
This makes sense, as the refractive index of glass has been found to increase with increasing CaO content. (https://link.springer.com/article/10.1134/S1087659614030249)
## Further improvements -
The model can likely be improved further for better accuracy. Some suggestions -
- Using KFold Cross-validation
- Try different algorithms to find the best one for this problem - (SVM, Random forest, etc)
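The k-fold cross-validation suggestion can be combined with the k/metric search from earlier into a single grid search. This is a sketch on synthetic data (the real inputs would be `dff` and `df['Type']`), so the scores it prints are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the scaled glass features and labels
X, y = make_classification(n_samples=200, n_features=6, random_state=45)

# Search jointly over k and the distance metric, scored by 5-fold CV accuracy
param_grid = {'n_neighbors': list(range(1, 25)), 'metric': ['euclidean', 'manhattan']}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=45)
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=cv, scoring='accuracy')
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```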
## Other Useful resources -
- [K Nearest Neighbour Easily Explained with Implementation by Krish Naik - video](https://www.youtube.com/watch?v=wTF6vzS9fy4)
- [KNN by sentdex -video](https://www.youtube.com/watch?v=1i0zu9jHN6U)
- [KNN sklearn - docs ](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)
- [Complete guide to K nearest neighbours - python and R - blog](https://kevinzakka.github.io/2016/07/13/k-nearest-neighbor/)
- [Why scaling is required in KNN and K-Means - blog](https://medium.com/analytics-vidhya/why-is-scaling-required-in-knn-and-k-means-8129e4d88ed7)
<a href="https://colab.research.google.com/github/krakowiakpawel9/machine-learning-bootcamp/blob/master/unsupervised/04_anomaly_detection/01_local_outlier_factor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### scikit-learn
Library website: [https://scikit-learn.org](https://scikit-learn.org)
Documentation/User Guide: [https://scikit-learn.org/stable/user_guide.html](https://scikit-learn.org/stable/user_guide.html)
The core machine learning library for Python.
To install the scikit-learn library, use the command below:
```
!pip install scikit-learn
```
To upgrade the scikit-learn library to the latest version, use the command below:
```
!pip install --upgrade scikit-learn
```
This course was built against version `0.22.1`.
### Table of contents:
1. [Import libraries](#0)
2. [Data generation](#1)
3. [Data visualization](#2)
4. [Local Outlier Factor algorithm](#3)
5. [Outlier visualization](#4)
### <a name='0'></a> Import libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs
import plotly.express as px
import plotly.graph_objects as go
sns.set(font_scale=1.2)
np.random.seed(10)
```
### <a name='1'></a> Data generation
```
data = make_blobs(n_samples=300, cluster_std=2.0, random_state=10)[0]
data[:5]
```
### <a name='2'></a> Data visualization
```
tmp = pd.DataFrame(data=data, columns=['x1', 'x2'])  # a list (not a set) keeps the column order deterministic
px.scatter(tmp, x='x1', y='x2', width=950, title='Local Outlier Factor', template='plotly_dark')
fig = go.Figure()
fig1 = px.density_heatmap(tmp, x='x1', y='x2', width=700, title='Outliers', nbinsx=20, nbinsy=20)
fig2 = px.scatter(tmp, x='x1', y='x2', width=700, title='Outliers', opacity=0.5)
fig.add_trace(fig1['data'][0])
fig.add_trace(fig2['data'][0])
fig.update_traces(marker=dict(size=4, line=dict(width=2, color='white')), selector=dict(mode='markers'))
fig.update_layout(template='plotly_dark', width=950)
fig.show()
plt.figure(figsize=(12, 7))
plt.scatter(data[:, 0], data[:, 1], label='data', cmap='tab10')
plt.title('Local Outlier Factor')
plt.legend()
plt.show()
from sklearn.neighbors import LocalOutlierFactor
lof = LocalOutlierFactor(n_neighbors=20)
y_pred = lof.fit_predict(data)
y_pred[:10]
all_data = np.c_[data, y_pred]
all_data[:5]
tmp['y_pred'] = y_pred
px.scatter(tmp, x='x1', y='x2', color='y_pred', width=950,
title='Local Outlier Factor', template='plotly_dark')
plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], c=all_data[:, 2], cmap='tab10', label='data')
plt.title('Local Outlier Factor')
plt.legend()
plt.show()
LOF_scores = lof.negative_outlier_factor_
radius = (LOF_scores.max() - LOF_scores) / (LOF_scores.max() - LOF_scores.min())
radius[:5]
plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], label='data', cmap='tab10')
plt.scatter(all_data[:, 0], all_data[:, 1], s=2000 * radius, edgecolors='r', facecolors='none', label='outlier scores')
plt.title('Local Outlier Factor')
legend = plt.legend()
legend.legendHandles[1]._sizes = [40]
plt.show()
plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], c=all_data[:, 2], cmap='tab10', label='data')
plt.scatter(all_data[:, 0], all_data[:, 1], s=2000 * radius, edgecolors='r', facecolors='none', label='outlier scores')
plt.title('Local Outlier Factor')
legend = plt.legend()
legend.legendHandles[1]._sizes = [40]
plt.show()
```
```
import numpy as np
import pandas as pd
import glob
import emcee
import corner
import scipy.stats
from scipy.ndimage import gaussian_filter1d
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
from fit_just_early_lc import prep_light_curve, multifcqfid_lnlike_big_unc, multifcqfid_lnprior_big_unc, multifcqfid_lnposterior_big_unc, lnlike_big_unc
from multiprocessing import Pool
import time
from corner_hack import corner_hack
from light_curve_plot import f_t, plot_both_filt
%matplotlib notebook
info_path = "../../forced_lightcurves/sample_lc_v2/"
salt_df = pd.read_csv(info_path + "../../Nobs_cut_salt2_spec_subtype_pec.csv")
```
## Measure the Deviance Information Criterion
$$\mathrm{DIC} = 2\,\overline{D(\theta)} - D(\bar{\theta})$$
where $D(\theta) = -2 \log P(x|\theta)$.
Thus, we need the deviance at the posterior-mean parameters, $D(\bar{\theta})$, and the mean deviance over the posterior samples, $\overline{D(\theta)}$. This requires the `multifcqfid_lnlike_big_unc` function.
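The two ingredients can be sketched generically. Below, a Gaussian likelihood with known unit variance stands in for `multifcqfid_lnlike_big_unc`, and random draws stand in for the emcee chain; the structure of the DIC computation is the same as in the cells that follow.

```python
import numpy as np

# Toy setup: data x ~ N(theta, 1), posterior samples of theta
rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, size=50)                                  # "observed" data
samples = rng.normal(x.mean(), 1.0 / np.sqrt(len(x)), size=2000)   # posterior draws

def loglike(theta):
    # Analytic Gaussian log-likelihood; the notebook's multifcqfid_lnlike_big_unc plays this role
    return -0.5 * np.sum((x - theta) ** 2) - 0.5 * len(x) * np.log(2.0 * np.pi)

deviance = np.array([-2.0 * loglike(s) for s in samples])
dbar = deviance.mean()                   # mean deviance over posterior samples
dhat = -2.0 * loglike(samples.mean())    # deviance at the posterior-mean parameters
dic = 2.0 * dbar - dhat
print(dic)
```

For a convex deviance (as here), Jensen's inequality guarantees `dbar >= dhat`, so the effective-parameter penalty `dbar - dhat` is non-negative.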
```
thin_by = 100
rel_flux_cutoff = 0.4
sn = 'ZTF18abauprj'
h5_file = info_path + 'big_unc/{}_emcee_40_varchange.h5'.format(sn)
reader = emcee.backends.HDFBackend(h5_file)
nsteps = thin_by*np.shape(reader.get_chain())[0]
tau = reader.get_autocorr_time(tol=0)
burnin = int(5*np.max(tau))
samples = reader.get_chain(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
lnpost = reader.get_log_prob(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
t_max = float(salt_df['t0_g_adopted'][salt_df['name'] == sn].values)
z = float(salt_df['z_adopt'][salt_df['name'] == sn].values)
g_max = float(salt_df['fratio_gmax_2adam'][salt_df['name'] == sn].values)
r_max = float(salt_df['fratio_rmax_2adam'][salt_df['name'] == sn].values)
t_data, f_data, f_unc_data, fcqfid_data = prep_light_curve(info_path+"{}_force_phot.h5".format(sn),
t_max=t_max,
z=z,
g_max=g_max,
r_max=r_max,
rel_flux_cutoff=rel_flux_cutoff)
loglike_samples = np.zeros(len(samples))
for samp_num, sample in enumerate(samples):
loglike_samples[samp_num] = multifcqfid_lnlike_big_unc(sample, f_data, t_data, f_unc_data, fcqfid_data)
dhat = -2*multifcqfid_lnlike_big_unc(np.mean(samples, axis=0), f_data, t_data, f_unc_data, fcqfid_data)
dbar = -2*np.mean(loglike_samples)
dic = 2*dbar - dhat
print(dic)
```
#### What about for the $t^2$ model?
```
h5_file = info_path + 'big_unc/{}_emcee_40_tsquared.h5'.format(sn)
reader = emcee.backends.HDFBackend(h5_file)
nsteps = thin_by*np.shape(reader.get_chain())[0]
tau = reader.get_autocorr_time(tol=0)
burnin = int(5*np.max(tau))
samples_tsquared = reader.get_chain(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
loglike_samples_tsquared = np.zeros(len(samples_tsquared))
for samp_num, sample in enumerate(samples_tsquared):
loglike_samples_tsquared[samp_num] = multifcqfid_lnlike_big_unc(sample, f_data, t_data, f_unc_data, fcqfid_data,
prior='delta2')
dhat_tsquared = -2*multifcqfid_lnlike_big_unc(np.mean(samples_tsquared, axis=0), f_data, t_data, f_unc_data, fcqfid_data,
                                              prior='delta2')
dbar_tsquared = np.mean(-2*loglike_samples_tsquared)
dic_tsquared = 2*dbar_tsquared - dhat_tsquared
print(dic_tsquared)
```
### Loop over all SNe
```
salt_df.name.values
dic_uniformative_arr = np.zeros(len(salt_df))
dic_tsquared_arr = np.zeros(len(salt_df))
dic_alpha_r_plus_colors_arr = np.zeros(len(salt_df))
def get_dic(sn):
# sn, bw = tup
sn_num = np.where(salt_df.name == sn)[0]
h5_file = info_path + 'big_unc/{}_emcee_40_varchange.h5'.format(sn)
reader = emcee.backends.HDFBackend(h5_file)
thin_by = 100
nsteps = thin_by*np.shape(reader.get_chain())[0]
tau = reader.get_autocorr_time(tol=0)
burnin = int(5*np.max(tau))
samples = reader.get_chain(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
rel_flux_cutoff = 0.4
t_max = float(salt_df['t0_g_adopted'][salt_df['name'] == sn].values)
z = float(salt_df['z_adopt'][salt_df['name'] == sn].values)
g_max = float(salt_df['fratio_gmax_2adam'][salt_df['name'] == sn].values)
r_max = float(salt_df['fratio_rmax_2adam'][salt_df['name'] == sn].values)
t_data, f_data, f_unc_data, fcqfid_data = prep_light_curve(info_path+"{}_force_phot.h5".format(sn),
t_max=t_max,
z=z,
g_max=g_max,
r_max=r_max,
rel_flux_cutoff=rel_flux_cutoff)
loglike_samples = np.zeros(len(samples))
for samp_num, sample in enumerate(samples):
loglike_samples[samp_num] = multifcqfid_lnlike_big_unc(sample, f_data, t_data, f_unc_data, fcqfid_data)
dhat = -2*multifcqfid_lnlike_big_unc(np.mean(samples, axis=0), f_data, t_data, f_unc_data, fcqfid_data)
dbar = -2*np.mean(loglike_samples)
dic = 2*dbar - dhat
h5_file = info_path + 'big_unc/{}_emcee_40_tsquared.h5'.format(sn)
reader = emcee.backends.HDFBackend(h5_file)
nsteps = thin_by*np.shape(reader.get_chain())[0]
tau = reader.get_autocorr_time(tol=0)
burnin = int(5*np.max(tau))
samples_tsquared = reader.get_chain(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
loglike_samples_tsquared = np.zeros(len(samples_tsquared))
for samp_num, sample in enumerate(samples_tsquared):
loglike_samples_tsquared[samp_num] = multifcqfid_lnlike_big_unc(sample, f_data, t_data, f_unc_data, fcqfid_data,
prior='delta2')
dhat_tsquared = -2*multifcqfid_lnlike_big_unc(np.mean(samples_tsquared, axis=0), f_data, t_data, f_unc_data, fcqfid_data,
prior='delta2')
dbar_tsquared = np.mean(-2*loglike_samples_tsquared)
dic_tsquared = 2*dbar_tsquared - dhat_tsquared
dic_uniformative_arr[sn_num] = dic
dic_tsquared_arr[sn_num] = dic_tsquared
h5_file = info_path + 'big_unc/{}_emcee_40_alpha_r_plus_colors.h5'.format(sn)
reader = emcee.backends.HDFBackend(h5_file)
nsteps = thin_by*np.shape(reader.get_chain())[0]
tau = reader.get_autocorr_time(tol=0)
burnin = int(5*np.max(tau))
samples_alpha_r_plus_colors = reader.get_chain(discard=burnin, thin=np.max([int(np.max(tau)), 1]), flat=True)
loglike_samples_alpha_r_plus_colors = np.zeros(len(samples_alpha_r_plus_colors))
for samp_num, sample in enumerate(samples_alpha_r_plus_colors):
loglike_samples_alpha_r_plus_colors[samp_num] = multifcqfid_lnlike_big_unc(sample, f_data, t_data, f_unc_data, fcqfid_data,
prior='alpha_r_plus_colors')
dhat_alpha_r_plus_colors = -2*multifcqfid_lnlike_big_unc(np.mean(samples_alpha_r_plus_colors, axis=0), f_data, t_data, f_unc_data, fcqfid_data,
prior='alpha_r_plus_colors')
dbar_alpha_r_plus_colors = np.mean(-2*loglike_samples_alpha_r_plus_colors)
dic_alpha_r_plus_colors = 2*dbar_alpha_r_plus_colors - dhat_alpha_r_plus_colors
dic_alpha_r_plus_colors_arr[sn_num] = dic_alpha_r_plus_colors
return (dic, dic_tsquared, dic_alpha_r_plus_colors)
pool = Pool()
dic_res = pool.map(get_dic, salt_df.name.values)
dic_res
dic_uninformative_arr = np.array(dic_res)[:,0]
dic_tsquared_arr = np.array(dic_res)[:,1]
dic_alpha_r_plus_colors_arr = np.array(dic_res)[:,2]
dic_df = pd.DataFrame(salt_df.name.values, columns=['ztf_name'])
dic_df['dic_uninformative'] = dic_uninformative_arr
dic_df['dic_delta2'] = dic_tsquared_arr
dic_df['dic_alpha_r_plus'] = dic_alpha_r_plus_colors_arr
len(np.where(np.exp((dic_tsquared_arr - dic_alpha_r_plus_colors_arr)/2) > 30)[0])
dic_evidence = np.array(['very strong']*len(salt_df))
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) <= 1))] = 'negative'
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) > 1) &
(np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) <= 3))] = 'weak'
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) > 3) &
(np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) <= 10))] = 'substantial'
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) > 10) &
(np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) <= 30))] = 'strong'
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) > 30) &
(np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) <= 100))] = 'very strong'
dic_evidence[np.where((np.exp((dic_tsquared_arr - dic_uninformative_arr)/2) > 100))] = 'decisive'
dic_evidence
np.unique(dic_evidence, return_counts=True)
dic_df['dic_evidence'] = dic_evidence
dic_df.to_csv('dic_results.csv', index=False)
```
## Analyze which SN prefer $t^2$ model
```
dic_df = pd.read_csv('dic_results.csv')
dic_df.head()
res = pd.read_csv('results_40percent.csv')
decisive = np.where(dic_df.dic_evidence == 'decisive')
vstrong = np.where(dic_df.dic_evidence == 'very strong')
strong = np.where(dic_df.dic_evidence == 'strong')
substantial = np.where(dic_df.dic_evidence == 'substantial')
weak = np.where(dic_df.dic_evidence == 'weak')
res[['ztf_name','final_selection', 't_rise_95', 't_rise_05', 'n_nights_gr_post']].iloc[decisive]
res_tsquared = pd.read_csv('results_40_tsquared.csv')
colors_sample = np.where( (((dic_df.dic_evidence == 'decisive') | (dic_df.dic_evidence == 'very strong'))
& (res.final_selection == 1)))
tsquared_sample = np.where( (((dic_df.dic_evidence == 'decisive') | (dic_df.dic_evidence == 'very strong'))
& (res.final_selection == 0) & (res_tsquared.final_selection == 1)) |
(((dic_df.dic_evidence != 'decisive') & (dic_df.dic_evidence != 'very strong'))
& (res_tsquared.final_selection == 1)))
```
The upshot here is that the best-constrained SNe (i.e., low $z$, high $N_\mathrm{det}$, and low $CR_{90}$) and the very worst (the opposite) are the ones that show significant evidence for a departure from $\alpha = 2$ according to the DIC. These SNe, therefore, should not be "lumped in" with a uniform $\alpha = 2$ analysis.
This notebook demonstrates how to use this code to calculate various quantities for both Ngen=1 and Ngen=3.
```
import numpy as np
import time
#-- Change working directory to the main one with omegaH2.py and omegaH2_ulysses.py--#
import os
#print(os.getcwd())
os.chdir('../')
#print(os.getcwd())
```
# Define input variables
The inputs depend on whether you are using omegaH2.py directly or the SU2LDM class defined in omegaH2_ulysses.py.
###### For the SU2LDM class defined in omegaH2_ulysses.py:
Since the ulysses scan is performed in log10(parameters), the inputs here are the base-10 logarithms ("powers") of the variables. For example, for BP1,
gs_pow = log10(0.8) $\Rightarrow$ gs = 10**(gs_pow) = 0.8
###### For omegaH2.py directly:
Here the variables are their true values. For example, for BP1,
gs = 0.8
The input variables below match those in data/test.dat and correspond to BP1.
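The round trip between the two parametrisations is just `power = log10(value)` and `value = 10**power`, sketched here on the BP1 scan parameters:

```python
import numpy as np

# BP1 scan parameters in physical units (as used by omegaH2.py directly)
bp1 = {'gs': 0.8, 'eQ': 0.5, 'sQsq': 0.3, 'kappa': 1.0, 'fpi': 60000.0}

# The "power" form expected by the SU2LDM class in omegaH2_ulysses.py
bp1_pow = {k: np.log10(v) for k, v in bp1.items()}

# Recover the physical values and check the round trip
recovered = {k: 10.0 ** p for k, p in bp1_pow.items()}
assert all(np.isclose(recovered[k], bp1[k]) for k in bp1)
print(bp1_pow['gs'])
```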
```
#-- Define input variables (standard) --#
kwargs = {}
# Scan parameters
kwargs["gs"] = 0.8
kwargs["eQ"] = 0.5
kwargs["sQsq"] = 0.3
kwargs["kappa"] = 1.
kwargs["fpi"] = 60000. # GeV
mDM_GeV = 5000. # GeV
kwargs["bsmall"] = (1./(4.*np.pi*kwargs["fpi"]))*mDM_GeV
# NOTE:
# The input parameter bsmall is related to the mass of the DM constituent.
# This controls how far the mass is below the weak confinement scale (lamW).
# Namely: mDM = bsmall*lamW
# Since lamW is also assumed to be related to fpi (namely lamW = 4*np.pi*fpi GeV) we get
# bsmall = (1/(4*np.pi*fpi))*mDM_GeV
print(kwargs)
#-- Define input variables (powers) --#
# Note that there are 16 extra parameters necessary for interfacing with ulysses
# We set these all to have a power of zero
kwargs_pow = { "m":0.000000, "M1":0.000000, "M2":0.000000, "M3":0.000000, "delta":0.000000,
"a21":0.000000, "a31":0.000000, "x1":0.000000, "x2":0.000000, "x3":0.000000,
"y1":0.000000, "y2":0.000000, "y3":0.000000, "t12":0.000000, "t13":0.000000,
"t23":0.000000}
# Scan parameters
kwargs_pow["gs"] = np.log10(kwargs["gs"])
kwargs_pow["fpi"] = np.log10(kwargs["fpi"])
kwargs_pow["kappa"] = np.log10(kwargs["kappa"])
kwargs_pow["eQ"] = np.log10(kwargs["eQ"])
kwargs_pow["bsmall"] = np.log10(kwargs["bsmall"])
kwargs_pow["sQsq"] = np.log10(kwargs["sQsq"])
print(kwargs_pow)
```
# Calculate with omegaH2.py
```
#-- Ngen=1 --#
kwargs["Ngen"] = 1
#-- Calculate oh2 and therm values --#
from omegaH2 import omegaH2
start = time.process_time()
oh2, therm = omegaH2(**kwargs, DEBUG=False)
end = time.process_time()
print("oh2 = ", oh2, therm)
print("TIME = ", end - start)
print("")
#-- Calculate m1 and aeff values (demonstration of RETURN option) --#
start = time.process_time()
m1, aeff = omegaH2(**kwargs, DEBUG=False, RETURN='m1_aeff')
end = time.process_time()
print("m1, aeff = ", m1, aeff)
print("TIME = ", end - start)
print("")
#-- Ngen=3 --#
kwargs["Ngen"] = 3
#-- Calculate oh2 and therm values --#
from omegaH2 import omegaH2
start = time.process_time()
oh2, therm = omegaH2(**kwargs, DEBUG=False)
end = time.process_time()
print("oh2 = ", oh2, therm)
print("TIME = ", end - start)
print("")
#-- Calculate m1 and aeff values (demonstration of RETURN option) --#
start = time.process_time()
m1, aeff = omegaH2(**kwargs, DEBUG=False, RETURN='m1_aeff')
end = time.process_time()
print("m1, aeff = ", m1, aeff)
print("TIME = ", end - start)
print("")
```
# Calculate with SU2LDM class defined in omegaH2_ulysses.py
NOTE: Ngen argument must be changed in the omegaH2_ulysses.py file and the kernel must be restarted. Run cells 1-3, and then run the cell below. #! Fix this later
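As a possible alternative to restarting the kernel after editing `omegaH2_ulysses.py`, `importlib.reload` re-executes an already-imported module's source in the running session. The sketch below uses a stdlib module as a stand-in; whether this suffices here depends on how `Ngen` is consumed at import time.

```python
import importlib
import json  # stand-in for omegaH2_ulysses; any already-imported module works

# reload() re-executes the module's source and returns the same module object,
# picking up edits made to the file on disk without a kernel restart
reloaded = importlib.reload(json)
print(reloaded is json)
```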
```
from omegaH2_ulysses import SU2LDM
start = time.process_time()
objectSU2LDM = SU2LDM()
objectSU2LDM.setParams(pdict=kwargs_pow)
oh2 = objectSU2LDM.EtaB #EtaB is treated like a property of the class not a function
end = time.process_time()
print("oh2 = ", oh2)
print("TIME = ", end - start)
#-- Ngen=1 Result --#
# oh2 = 0.10010536788423431
# TIME = 10.036499000000001
#-- Ngen=3 Result --#
# oh2 = 0.07774433729213447
# TIME = 119.14895899999999
```
```
import gradio as gr
import torch
from torchvision import transforms
import requests
from PIL import Image
from net import Net, Vgg16
from torch.autograd import Variable  # used in evaluate() below; missing from the original imports
import numpy as np
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def preprocess_batch(batch):
batch = batch.transpose(0, 1)
(r, g, b) = torch.chunk(batch, 3)
batch = torch.cat((b, g, r))
batch = batch.transpose(0, 1)
return batch
def tensor_load_rgbimage(filename, size=None, scale=None, keep_asp=False):
img = Image.open(filename).convert('RGB')
if size is not None:
if keep_asp:
size2 = int(size * 1.0 / img.size[0] * img.size[1])
img = img.resize((size, size2), Image.ANTIALIAS)
else:
img = img.resize((size, size), Image.ANTIALIAS)
elif scale is not None:
img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
img = np.array(img).transpose(2, 0, 1)
img = torch.from_numpy(img).float()
return img
def tensor_load_rgbimage_2(filename, size=None, scale=None, keep_asp=False):
img = Image.fromarray(filename).convert('RGB')
if size is not None:
if keep_asp:
size2 = int(size * 1.0 / img.size[0] * img.size[1])
img = img.resize((size, size2), Image.ANTIALIAS)
else:
img = img.resize((size, size), Image.ANTIALIAS)
elif scale is not None:
img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
img = np.array(img).transpose(2, 0, 1)
img = torch.from_numpy(img).float()
return img
def evaluate(img):
content_image = tensor_load_rgbimage_2(img, size=1024, keep_asp=True)
#content_image = img
content_image = content_image.unsqueeze(0)
style = tensor_load_rgbimage("images/21styles/candy.jpg", size=512)
style = style.unsqueeze(0)
style = preprocess_batch(style)
style_model = Net(ngf=128)
model_dict = torch.load("models/21styles.model")
model_dict_clone = model_dict.copy()
for key, value in model_dict_clone.items():
if key.endswith(('running_mean', 'running_var')):
del model_dict[key]
style_model.load_state_dict(model_dict, False)
style_v = Variable(style)
content_image = Variable(preprocess_batch(content_image))
style_model.setTarget(style_v)
output = style_model(content_image)
img = output.data[0].clone().clamp(0, 255).numpy()
img = img.transpose(1, 2, 0).astype('uint8')
img = Image.fromarray(img)
#output = utils.color_match(output, style_v)
return img
def predict(inp):
return inp
inputs = gr.inputs.Image()
outputs = gr.outputs.Image()
gr.Interface(fn=evaluate, inputs=inputs, outputs=outputs).launch(share=False)
```
# Initial data and problem exploration
```
import xarray as xr
import pandas as pd
import urllib.request
import numpy as np
from glob import glob
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import os
import cartopy.feature as cfeature
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
```
# Data preprocessing
## TIGGE ECMWF
### Control run
```
tigge_ctrl = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc")
tigge_ctrl
tigge_ctrl.lat.min()
tigge_2dslice = tigge_ctrl.t2m.isel(lead_time=4, init_time=0)
p = tigge_2dslice.plot(
subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"),
transform=ccrs.PlateCarree(),)
#p.axes.set_global()
p.axes.coastlines()
```
### TIGGE CTRL precip
```
prec = xr.open_mfdataset("/datadrive/tigge/raw/total_precipitation/*.nc")
prec # aggregated precipitation
prec.tp.mean('init_time').diff('lead_time').plot(col='lead_time', col_wrap=3) # that takes a while!
```
### Checking regridding
```
t2m_raw = xr.open_mfdataset("/datadrive/tigge/raw/2m_temperature/2019-10.nc")
t2m_32 = xr.open_mfdataset("/datadrive/tigge/32km/2m_temperature/2019-10.nc")
t2m_16 = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc")
for ds in [t2m_raw, t2m_16, t2m_32]:
tigge_2dslice = ds.t2m.isel(lead_time=4, init_time=-10)
plt.figure()
p = tigge_2dslice.plot(levels=np.arange(270,305),
subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"),
transform=ccrs.PlateCarree(),)
p.axes.coastlines()
```
### Ensemble
```
!ls -lh ../data/tigge/2020-10-23_ens2.grib
tigge = xr.open_mfdataset('../data/tigge/2020-10-23_ens2.grib', engine='pynio').isel()
tigge = tigge.rename({
'tp_P11_L1_GGA0_acc': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time0': 'lead_time',
'lat_0': 'latitude',
'lon_0': 'longitude',
'ensemble0' : 'member'
}).diff('lead_time').tp
tigge = tigge.where(tigge >= 0, 0)
# tigge = tigge * 1000 # m to mm
tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time')
tigge
tigge.to_netcdf('../data/tigge/2020-10-23_ens_preprocessed.nc')
```
### Deterministic
```
tigge = xr.open_mfdataset('../data/tigge/2020-10-23.grib', engine='pynio')
tigge = tigge.rename({
'tp_P11_L1_GGA0_acc': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time0': 'lead_time',
'lat_0': 'latitude',
'lon_0': 'longitude',
}).diff('lead_time').tp
tigge = tigge.where(tigge >= 0, 0)
tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time')
tigge
tigge.to_netcdf('../data/tigge/2020-10-23_preprocessed.nc')
```
## YOPP
```
yopp = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio').TP_GDS4_SFC
yopp2 = xr.open_dataset('../data/yopp/2020-10-23_12.grib', engine='pynio').TP_GDS4_SFC
yopp = xr.merge([yopp, yopp2]).rename({
'TP_GDS4_SFC': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time1': 'lead_time',
'g4_lat_2': 'latitude',
'g4_lon_3': 'longitude'
})
yopp = yopp.diff('lead_time').tp
yopp = yopp.where(yopp >= 0, 0)
yopp = yopp * 1000 # m to mm
yopp.coords['valid_time'] = xr.concat([i + yopp.lead_time for i in yopp.init_time], 'init_time')
yopp.to_netcdf('../data/yopp/2020-10-23_preprocessed.nc')
```
## MRMS data
```
def time_from_fn(fn):
s = fn.split('/')[-1].split('_')[-1]
year = s[:4]
month = s[4:6]
day = s[6:8]
hour = s[9:11]
return np.datetime64(f'{year}-{month}-{day}T{hour}')
def open_nrms(path):
fns = sorted(glob(f'{path}/*'))
dss = [xr.open_dataset(fn, engine='pynio') for fn in fns]
times = [time_from_fn(fn) for fn in fns]
times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times})
ds = xr.concat(dss, times).rename({'lat_0': 'latitude', 'lon_0': 'longitude'})
da = ds[list(ds)[0]].rename('tp')
return da
def get_mrms_fn(path, source, year, month, day, hour):
month, day, hour = [str(x).zfill(2) for x in [month, day, hour]]
fn = f'{path}/{source}/MRMS_{source}_00.00_{year}{month}{day}-{hour}0000.grib2'
# print(fn)
return fn
def load_mrms_data(path, start_time, stop_time, accum=3):
times = pd.to_datetime(np.arange(start_time, stop_time, np.timedelta64(accum, 'h'), dtype='datetime64[h]'))
das = []
for t in times:
if os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour), engine='pynio')
elif os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour), engine='pynio')
elif os.path.exists(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour), engine='pynio')
else:
raise Exception(f'No data found for {t}')
ds = ds.rename({'lat_0': 'latitude', 'lon_0': 'longitude'})
da = ds[list(ds)[0]].rename('tp')
das.append(da)
times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times})
da = xr.concat(das, times)
return da
mrms = load_mrms_data('../data/', '2020-10-23', '2020-10-25')
mrms6h = mrms.rolling(time=2).sum().isel(time=slice(0, None, 2))
mrms.to_netcdf('../data/mrms/mrms_preprocessed.nc')
mrms6h.to_netcdf('../data/mrms/mrms6_preprocessed.nc')
```
# Analysis
```
!ls ../data
tigge_det = xr.open_dataarray('../data/tigge/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
tigge_ens = xr.open_dataarray('../data/tigge/2020-10-23_ens_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
yopp = xr.open_dataarray('../data/yopp/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
mrms = xr.open_dataarray('../data/mrms/mrms_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
mrms6h = xr.open_dataarray('../data/mrms/mrms6_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
```
## Regrid
```
import xesmf as xe
lons = slice(260, 280)
lats = slice(45, 25)
def regrid(ds, km, lats, lons):
deg = km/100.
grid = xr.Dataset(
{
'lat': (['lat'], np.arange(lats.start, lats.stop, -deg)),
'lon': (['lon'], np.arange(lons.start, lons.stop, deg))
}
)
regridder = xe.Regridder(ds.sel(lat=lats, lon=lons), grid, 'bilinear')
return regridder(ds.sel(lat=lats, lon=lons), keep_attrs=True)
mrms4km = regrid(mrms, 4, lats, lons)
mrms2km = regrid(mrms, 2, lats, lons)
mrms4km6h = regrid(mrms6h, 4, lats, lons)
mrms2km6h = regrid(mrms6h, 2, lats, lons)
mrms4km6h = mrms4km6h.rename('tp')
mrms2km6h = mrms2km6h.rename('tp')
yopp16km = regrid(yopp, 16, lats, lons)
yopp32km = regrid(yopp, 32, lats, lons)
tigge_det16km = regrid(tigge_det, 16, lats, lons)
tigge_det32km = regrid(tigge_det, 32, lats, lons)
tigge_ens16km = regrid(tigge_ens, 16, lats, lons)
tigge_ens32km = regrid(tigge_ens, 32, lats, lons)
!mkdir ../data/regridded
mrms2km.to_netcdf('../data/regridded/mrms2km.nc')
mrms4km.to_netcdf('../data/regridded/mrms4km.nc')
mrms2km6h.to_netcdf('../data/regridded/mrms2km6h.nc')
mrms4km6h.to_netcdf('../data/regridded/mrms4km6h.nc')
yopp16km.to_netcdf('../data/regridded/yopp16km.nc')
yopp32km.to_netcdf('../data/regridded/yopp32km.nc')
tigge_det16km.to_netcdf('../data/regridded/tigge_det16km.nc')
tigge_det32km.to_netcdf('../data/regridded/tigge_det32km.nc')
tigge_ens16km.to_netcdf('../data/regridded/tigge_ens16km.nc')
tigge_ens32km.to_netcdf('../data/regridded/tigge_ens32km.nc')
mrms2km = xr.open_dataarray('../data/regridded/mrms2km.nc')
mrms4km = xr.open_dataarray('../data/regridded/mrms4km.nc')
mrms2km6h = xr.open_dataarray('../data/regridded/mrms2km6h.nc')
mrms4km6h = xr.open_dataarray('../data/regridded/mrms4km6h.nc')
yopp16km = xr.open_dataarray('../data/regridded/yopp16km.nc')
yopp32km = xr.open_dataarray('../data/regridded/yopp32km.nc')
tigge_det16km = xr.open_dataarray('../data/regridded/tigge_det16km.nc')
tigge_det32km = xr.open_dataarray('../data/regridded/tigge_det32km.nc')
tigge_ens16km = xr.open_dataarray('../data/regridded/tigge_ens16km.nc')
tigge_ens32km = xr.open_dataarray('../data/regridded/tigge_ens32km.nc')
```
### Matplotlib
#### Compare different resolutions
```
mrms4km
np.arange(lons.start, lons.stop, 512/100)
def add_grid(axs):
for ax in axs:
ax.set_xticks(np.arange(lons.start, lons.stop, 512/100))
ax.set_yticks(np.arange(lats.start, lats.stop, -512/100))
ax.grid(True)
ax.set_aspect('equal')
yopp16km
yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
i = 3
valid_time = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
figsize = (16, 5)
axs = mrms4km.sel(time=valid_time.values).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = yopp32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
i = 2
valid_time = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
figsize = (16, 5)
axs = mrms4km6h.sel(time=valid_time.values, method='nearest').assign_coords({'time': valid_time.values}).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = tigge_det32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
tigge_ens16km.isel(init_time=i, lead_time=l)
i = 3
l = 0
t = tigge_ens16km.isel(init_time=i, lead_time=slice(l, l+2)).valid_time.values
axs = mrms4km6h.sel(time=t, method='nearest').assign_coords({'time': t}).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(10, 4), col='time').axes[0]
add_grid(axs)
axs = tigge_ens16km.isel(init_time=i, lead_time=l, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0]
add_grid(axs)
axs = tigge_ens16km.isel(init_time=i, lead_time=l+1, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0]
add_grid(axs)
```
### Holoviews
```
import holoviews as hv
hv.extension('bokeh')
hv.config.image_rtol = 1
# from holoviews import opts
# opts.defaults(opts.Scatter3D(color='Value', cmap='viridis', edgecolor='black', s=50))
lons2 = slice(268, 273)
lats2 = slice(40, 35)
lons2 = lons
lats2 = lats
def to_hv(da, dynamic=False, opts={'clim': (1, 50)}):
hv_ds = hv.Dataset(da)
img = hv_ds.to(hv.Image, kdims=["lon", "lat"], dynamic=dynamic)
return img.opts(**opts)
valid_time = yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).valid_time
valid_time2 = tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).valid_time
mrms2km_hv = to_hv(mrms2km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2))
mrms4km_hv = to_hv(mrms4km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2))
mrms2km6h_hv = to_hv(mrms2km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2))
mrms4km6h_hv = to_hv(mrms4km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2))
yopp16km_hv = to_hv(yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
yopp32km_hv = to_hv(yopp32km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
tigge_det16km_hv = to_hv(tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
tigge_det32km_hv = to_hv(tigge_det32km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
```
### Which resolution for MRMS?
```
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') [width=600, height=600]
# mrms4km6h_hv + tigge_det16km_hv + tigge_det32km_hv
# mrms4km_hv + yopp16km_hv + yopp32km_hv
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') [width=600, height=600]
mrms4km_hv + mrms4km6h_hv
hv_yopp = yopp.isel(init_time=0).sel(latitude=lats, longitude=lons)
hv_yopp.coords['time'] = hv_yopp.init_time + hv_yopp.lead_time
hv_yopp = hv_yopp.swap_dims({'lead_time': 'time'})
# hv_yopp
hv_mrms = hv.Dataset(mrms.sel(latitude=lats, longitude=lons)[1:])
hv_yopp = hv.Dataset(hv_yopp.sel(time=mrms.time[1:]))
img1 = hv_mrms.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
img2 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') plot[colorbar=True]
%%opts Image [width=500, height=400]
img1 + img2
hv_yopp = yopp.sel(latitude=lats, longitude=lons)
hv_yopp = hv.Dataset(hv_yopp)
img1 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') plot[colorbar=True]
%%opts Image [width=500, height=400]
img1
hv_ds = hv.Dataset(da.sel(latitude=lats, longitude=lons))
hv_ds
a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
a.opts(colorbar=True, fig_size=200, cmap='viridis')
```
# Old
```
path = '../data/MultiSensor_QPE_01H_Pass1/'
da1 = open_nrms('../data/MultiSensor_QPE_01H_Pass1/')
da3 = open_nrms('../data/MultiSensor_QPE_03H_Pass1/')
dar = open_nrms('../data/RadarOnly_QPE_03H/')
da3p = open_nrms('../data/MultiSensor_QPE_03H_Pass2/')
da1
da3
da13 = da1.rolling(time=3).sum()
(da13 - da3).isel(time=3).sel(latitude=lats, longitude=lons).plot()
da13.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('1h accumulation with rolling(time=3).sum()', y=1.05)
da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation', y=1.05)
dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation radar', y=1.05)
da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation', y=1.05)
dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation radar', y=1.05)
for t in np.arange('2020-10-23', '2020-10-25', np.timedelta64(3, 'h'), dtype='datetime64[h]'):
print(t)
print('Radar', (dar.time.values == t).sum() > 0)
print('Pass1', (da3.time.values == t).sum() > 0)
print('Pass2', (da3p.time.values == t).sum() > 0)
t
(dar.time.values == t).sum() > 0
da3.time.values
def plot_facet(da, title='', **kwargs):
p = da.plot(
col='time', col_wrap=3,
subplot_kws={'projection': ccrs.PlateCarree()},
transform=ccrs.PlateCarree(),
figsize=(15, 15), **kwargs
)
for ax in p.axes.flat:
ax.coastlines()
ax.add_feature(states_provinces, edgecolor='gray')
# ax.set_extent([113, 154, -11, -44], crs=ccrs.PlateCarree())
plt.suptitle(title);
plot_facet(da.isel(time=slice(0, 9)).sel(latitude=lats, longitude=lons), vmin=0, vmax=10, add_colorbar=False)
import holoviews as hv
hv.extension('matplotlib')
from holoviews import opts
opts.defaults(opts.Scatter3D(color='Value', cmap='fire', edgecolor='black', s=50))
hv_ds = hv.Dataset(da.sel(latitude=lats, longitude=lons))
hv_ds
a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
a.opts(colorbar=True, fig_size=200, cmap='viridis')
da.longitude.diff('longitude').min()
!cp ../data/yopp/2020-10-23.nc ../data/yopp/2020-10-23.grib
a = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio')
a
a.g4_lat_2.diff('g4_lat_2')
a.g4_lon_3.diff('g4_lon_3')
!cp ../data/tigge/2020-10-23.nc ../data/tigge/2020-10-23.grib
b = xr.open_dataset('../data/tigge/2020-10-23.grib', engine='pynio')
b
```
| github_jupyter |
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np
import random
%matplotlib inline
mnist = input_data.read_data_sets('data/MNIST/', one_hot=True)  # one_hot=True -> if y=5, it is converted to a 10-element vector: y=[0,0,0,0,0,1,0,0,0,0]
def show_digit(pixels):
img = pixels.reshape(28, 28)
plt.axis('off')
plt.imshow(img, cmap='gray_r')
sample = random.choice(mnist.train.images)
show_digit(sample)
print(u"Training data set: %d" % len(mnist.train.images))
print(u"Test data set: %d" % len(mnist.test.images))
# placeholders to feed the dataset into
X = tf.placeholder(tf.float32, [None, 784])  # placeholder TensorFlow tensor; dtype float32; None because we don't know the number of rows yet; 784 pixel values per image
y = tf.placeholder(tf.float32, [None, 10])  # one_hot=True -> if y=5, it becomes a 10-element vector: y=[0,0,0,0,0,1,0,0,0,0]
W = tf.Variable(tf.truncated_normal(shape=[784, 10], stddev=0.1))  # random initial weights of shape 784x10; stddev=0.1 controls their spread
b = tf.Variable(tf.constant(shape=[10], value=0.1))  # constant initial bias value of 0.1, length 10
y_pred = tf.nn.softmax(tf.matmul(X, W) + b)  # e.g. y_pred=[0.01, 0.2, ..., 0.75, ..., 0.1]; easy to compare with y when one_hot is true
loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_pred),
                                     reduction_indices=[1]))  # cost functions are problem-specific; this one is cross entropy -- the larger the value, the worse the model
# we look for the point where the loss is minimal
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)  # 0.05 -> learning rate
correct_predictions = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1))  # produces a list of True/False values
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))  # correct -> 1.0, wrong -> 0; reduce_mean sums the m values and divides by m (i.e. averages)
sess = tf.Session()  # create a session to be able to run the graph
sess.run(tf.global_variables_initializer())
for i in range(10000):
    xs, ys = mnist.train.next_batch(150)  # sending one image at a time is costly, so send a batch of 150 images; the batch size can be tuned to the hardware
    sess.run(optimizer, feed_dict={X: xs, y: ys})
    if i % 500 == 0:  # every 500 steps, run the accuracy op on the whole test set to measure the model
        acc = sess.run(accuracy, feed_dict={X: mnist.test.images,
                                            y: mnist.test.labels})
        print("[*] Step: %d, test accuracy: %.2f%%" % (i, acc * 100))
weig = tf.Print(W,[W])
bia = tf.Print(b,[b])
# print sess.run(weig), sess.run(bia)
sample = random.choice(mnist.test.images)
predict = sess.run(y_pred, feed_dict={X: [sample]})[0]
for i, v in enumerate(predict):
print("Probability of being %d: %.2f%%" % (i, v * 100))
show_digit(sample)
random_img = np.random.rand(784)
predict = sess.run(y_pred, feed_dict={X: [random_img]})[0]
for i, v in enumerate(predict):
print("Probability of being %d: %.2f%%" % (i, v * 100))
show_digit(random_img)
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from numpy import *
from ipywidgets import *
import matplotlib.pyplot as plt
from IPython.core.display import clear_output
```
# Principal Component Analysis and EigenFaces
In this notebook, I will go through the basic concepts behind the principal component analysis (PCA). I will then apply PCA to a face dataset to find the characteristic faces ("eigenfaces").
## What is PCA?
PCA is a **linear** transformation. Suppose I have a $N \times P$ data matrix ${\bf X}$, where $N$ is the number of samples and $P$ is the dimension of each sample. Then PCA will find you a $K \times P$ matrix ${\bf V}$ such that
$$ \underbrace{{\bf X}}_{N \times P} = \underbrace{{\bf S}}_{N \times K} \underbrace{{\bf V}}_{K \times P}. $$
Here, $K$ is the number of **principal components** with $K \le P$.
## But what does the V matrix do?
${\bf V}$ can be thought of in many different ways.
The first way is to think of it as a de-correlating transformation: originally, each variable (or dimension) in ${\bf X}$ - there are $P$ of them - may be *correlated*. That is, if I take any two column vectors of ${\bf X}$, say ${\bf x}_0$ and ${\bf x}_1$, their covariance is not going to be zero.
Let's try this on randomly generated data:
```
from numpy.random import standard_normal # Gaussian variables
N = 1000; P = 5
X = standard_normal((N, P))
W = X - X.mean(axis=0,keepdims=True)
print(dot(W[:,0], W[:,1]))
```
I'll skip ahead and use a pre-canned PCA routine from `scikit-learn` (but I'll dig into it a bit later!) Let's see what happens to the transformed variables, ${\bf S}$:
```
from sklearn.decomposition import PCA
S=PCA(whiten=True).fit_transform(X)
print(dot(S[:,0], S[:,1]))
```
Another way to look at ${\bf V}$ is to think of its rows as **projections**. Since the row vectors of ${\bf V}$ are *orthogonal* to each other, the projected data ${\bf S}$ lies in a new "coordinate system" specified by ${\bf V}$. Furthermore, the new coordinate system is sorted in decreasing order of *variance* in the original data. So, PCA can be thought of as calculating a new coordinate system whose basis vectors point toward the directions of largest variance first.
<img src="files/images/PCA/pca.png" style="margin:auto; width: 483px;"/>
Exercise 1. Let's get a feel for this in the following interactive example. Try moving the sliders around to generate the data, and see how the principal component vectors change.
In this demo, `mu_x` and `mu_y` specify the center of the data, `sigma_x` and `sigma_y` the standard deviations, and everything is rotated by the angle `theta`. The two blue arrows are the rows of ${\bf V}$ that get calculated.
When you click on `center`, the data is first centered (the mean is subtracted from the data). (Question: why is it necessary to "center" the data when `mu_x` and `mu_y` are not zero?)
```
from numpy.random import standard_normal
from matplotlib.patches import Ellipse
from numpy.linalg import svd
@interact
def plot_2d_pca(mu_x=FloatSlider(min=-3.0, max=3.0, value=0),
mu_y=FloatSlider(min=-3.0, max=3.0, value=0),
sigma_x=FloatSlider(min=0.2, max=1.8, value=1.8),
sigma_y=FloatSlider(min=0.2, max=1.8, value=0.3),
theta=FloatSlider(min=0.0, max=pi, value=pi/6), center=False):
mu=array([mu_x, mu_y])
sigma=array([sigma_x, sigma_y])
R=array([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]])
X=dot(standard_normal((1000, 2)) * sigma[newaxis,:],R.T) + mu[newaxis,:]
# Plot the points and the ellipse
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(X[:200,0], X[:200,1], marker='.')
ax.grid()
M=8.0
ax.set_xlim([-M,M])
ax.set_ylim([-M,M])
e=Ellipse(xy=array([mu_x, mu_y]), width=sigma_x*3, height=sigma_y*3, angle=theta/pi*180,
facecolor=[1.0,0,0], alpha=0.3)
ax.add_artist(e)
# Perform PCA and plot the vectors
if center:
X_mean=X.mean(axis=0,keepdims=True)
else:
X_mean=zeros((1,2))
# Doing PCA here... I'm using svd instead of scikit-learn PCA, I'll come back to this.
U,s,V =svd(X-X_mean, full_matrices=False)
for v in dot(diag(s/sqrt(X.shape[0])),V): # Each eigenvector
ax.arrow(X_mean[0,0],X_mean[0,1],-v[0],-v[1],
head_width=0.5, head_length=0.5, fc='b', ec='b')
Ustd=U.std(axis=0)
ax.set_title('std(U*s) [%f,%f]' % (Ustd[0]*s[0],Ustd[1]*s[1]))
```
Yet another use for ${\bf V}$ is to perform **dimensionality reduction**. In many scenarios you encounter in image manipulation (as we'll see soon), I might want a more concise representation of the data ${\bf X}$. PCA with $K < P$ is one way to *reduce the dimensionality*: because PCA picks the directions with the highest data variances, a small number of top $K$ rows is often sufficient to approximate (reconstruct) ${\bf X}$.
## How do I actually *perform* PCA?
Well, we can use `from sklearn.decomposition import PCA`. But for learning, let's dig just one step into what it actually does.
One of the easiest ways to perform PCA is to use the singular value decomposition (SVD). SVD decomposes a matrix ${\bf X}$ into a unitary matrix ${\bf U}$, a rectangular diagonal matrix ${\bf \Sigma}$ (containing the "singular values"), and another unitary matrix ${\bf W}$ such that
$$ {\bf X} = {\bf U} {\bf \Sigma} {\bf W}$$
So how can I use that to do PCA? Well, it turns out ${\bf \Sigma} {\bf W}$ of SVD is exactly what I need to calculate the ${\bf V}$ matrix for PCA, so I just have to run SVD and set ${\bf V} = {\bf \Sigma} {\bf W}$.
(Note: `svd` of `numpy` returns only the diagonal elements of ${\bf \Sigma}$.)
Exercise 2. Generate 1000 samples of 10-dimensional data and perform PCA this way. Plot the squares of the singular values.
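A possible sketch of a solution for this exercise (the variable names are my own choices; the plot call is left as a comment), following the ${\bf V} = {\bf \Sigma} {\bf W}$ convention above:

```
import numpy as np
from numpy.linalg import svd

# Generate 1000 samples in 10 dimensions, center them, run SVD, and look at
# the squared singular values -- they are proportional to the variance
# captured by each component.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
Xc = X - X.mean(axis=0, keepdims=True)  # center before SVD
U, s, W = svd(Xc, full_matrices=False)
V = np.diag(s) @ W                      # the PCA "V" matrix as defined above
# plt.plot(s**2) would show the squared singular values in decreasing order
```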
To reduce the $P$-dimensional data ${\bf X}$ to $K$ dimensions, I just need to pick the top $K$ row vectors of ${\bf V}$ - let's call that ${\bf W}$ - then calculate ${\bf T} = {\bf X} {\bf W}^\intercal$. ${\bf T}$ then has the dimension $N \times K$.
If I want to reconstruct the data from ${\bf T}$, I simply do ${\hat {\bf X}} = {\bf T} {\bf W}$ (and re-add the means for ${\bf X}$, if necessary).
Exercise 3. Reduce the same data to 5 dimensions, then based on the projected data ${\bf T}$, reconstruct ${\bf X}$. What's the mean squared error of the reconstruction?
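One possible sketch for this exercise (assumed names; I use the orthonormal rows returned by numpy's `svd` as the projection matrix, which makes the un-projection a simple transpose):

```
import numpy as np
from numpy.linalg import svd

# Keep the top K=5 components, project, reconstruct, and compute the
# mean squared reconstruction error.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))
X_mean = X.mean(axis=0, keepdims=True)
_, _, Vrows = svd(X - X_mean, full_matrices=False)
K = 5
Wk = Vrows[:K]                 # top K row vectors (K x P)
T = (X - X_mean) @ Wk.T        # projected data, N x K
X_hat = T @ Wk + X_mean        # reconstruction, re-adding the means
mse = np.mean((X - X_hat) ** 2)
```

For purely random Gaussian data the top 5 of 10 components capture only about half the variance, so the error stays noticeably above zero; on structured data (like faces) it drops much faster.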
# Performing PCA on a face dataset
Now that I have a handle on the PCA method, let's try applying it to a dataset consisting of face data. I will use the California Facial Expressions dataset (CAFE) from http://cseweb.ucsd.edu/~gary/CAFE/ . The following code loads the dataset into the `dataset` variable:
```
import pickle
dataset=pickle.load(open('data/cafe.pkl','rb'))
disp('dataset.images shape is %s' % str(dataset.images.shape))
disp('dataset.data shape is %s' % str(dataset.data.shape))
@interact
def plot_face(image_id=(0, dataset.images.shape[0]-1)):
plt.imshow(dataset.images[image_id],cmap='gray')
plt.title('Image Id = %d, Gender = %d' % (dataset.target[image_id], dataset.gender[image_id]))
plt.axis('off')
```
## Preprocessing
I'll center the data by subtracting the mean. The first axis (`axis=0`) is the `n_samples` dimension.
```
X=dataset.data.copy() # So that I won't mess up the data in the dataset
X_mean=X.mean(axis=0,keepdims=True) # Mean for each dimension across sample (centering)
X_std=X.std(axis=0,keepdims=True)
X-=X_mean
disp(all(abs(X.mean(axis=0))<1e-12)) # Are means for all dimensions very close to zero?
```
Then I perform SVD to calculate the projection matrix $V$. By default, `U,s,V=svd(...)` returns full matrices, which will return $n \times n$ matrix `U`, $n$-dimensional vector of singular values `s`, and $d \times d$ matrix `V`. But here, I don't really need $d \times d$ matrix `V`; with `full_matrices=False`, `svd` only returns $n \times d$ matrix for `V`.
```
from numpy.linalg import svd
U,s,V=svd(X,compute_uv=True, full_matrices=False)
disp(str(U.shape))
disp(str(s.shape))
disp(str(V.shape))
```
I can also plot how much each eigenvector in `V` contributes to the overall variance by plotting `variance_ratio` = $\frac{s^2}{\sum s^2}$. (Notice that `s` is already in the decreasing order.) The `cumsum` (cumulative sum) of `variance_ratio` then shows how much of the variance is explained by components up to `n_components`.
```
variance_ratio=s**2/(s**2).sum() # Normalized so that they add to one.
@interact
def plot_variance_ratio(n_components=(1, len(variance_ratio))):
n=n_components-1
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
axs[0].plot(variance_ratio)
axs[0].set_title('Explained Variance Ratio')
axs[0].set_xlabel('n_components')
axs[0].axvline(n, color='r', linestyle='--')
axs[0].axhline(variance_ratio[n], color='r', linestyle='--')
axs[1].plot(cumsum(variance_ratio))
axs[1].set_xlabel('n_components')
axs[1].set_title('Cumulative Sum')
captured=cumsum(variance_ratio)[n]
axs[1].axvline(n, color='r', linestyle='--')
axs[1].axhline(captured, color='r', linestyle='--')
axs[1].annotate(s='%f%% with %d components' % (captured * 100, n_components), xy=(n, captured),
xytext=(10, 0.5), arrowprops=dict(arrowstyle="->"))
```
Since I'm dealing with face data, each row vector of ${\bf V}$ is called an "eigenface". The first "eigenface" is the one that explains the most variance in the data, whereas the last one explains the least.
```
image_shape=dataset.images.shape[1:] # (H x W)
@interact
def plot_eigenface(eigenface=(0, V.shape[0]-1)):
v=V[eigenface]*X_std
plt.imshow(v.reshape(image_shape), cmap='gray')
plt.title('Eigenface %d (%f to %f)' % (eigenface, v.min(), v.max()))
plt.axis('off')
```
Now I'll try reconstructing faces with different numbers of principal components (PCs)! The transformed `X` is reconstructed by un-projecting and re-adding the sample mean for each dimension. For this reason, even for zero components you get a face-like (mean-face) image!
The rightmost plot is the "relative" reconstruction error (image minus the reconstruction squared, divided by the data standard deviations). White is where the error is close to zero, and black is where the relative error is large (1 or more). As you increase the number of PCs, you should see the error mostly going to zero (white).
```
@interact
def plot_reconstruction(image_id=(0,dataset.images.shape[0]-1), n_components=(0, V.shape[0]-1),
pc1_multiplier=FloatSlider(min=-2,max=2, value=1)):
    # This is where I perform the projection and un-projection
Vn=V[:n_components]
M=ones(n_components)
if n_components > 0:
M[0]=pc1_multiplier
X_hat=dot(multiply(dot(X[image_id], Vn.T), M), Vn)
# Un-center
I=X[image_id] + X_mean
I_hat = X_hat + X_mean
D=multiply(I-I_hat,I-I_hat) / multiply(X_std, X_std)
# And plot
fig, axs = plt.subplots(1, 3, figsize=(10, 10))
axs[0].imshow(I.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[0].axis('off')
axs[0].set_title('Original')
axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[1].axis('off')
axs[1].set_title('Reconstruction')
axs[2].imshow(1-D.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[2].axis('off')
axs[2].set_title('Difference^2 (mean = %f)' % sqrt(D.mean()))
plt.tight_layout()
```
## Image morphing
As a fun exercise, I'll morph two images by taking averages of the two images within the transformed data space. How is it different from simply morphing them in the pixel space?
```
def plot_morph(left=0, right=1, mix=0.5):
# Projected images
x_lft=dot(X[left], V.T)
x_rgt=dot(X[right], V.T)
# Mix
x_avg = x_lft * (1.0-mix) + x_rgt * (mix)
# Un-project
X_hat = dot(x_avg[newaxis,:], V)
I_hat = X_hat + X_mean
# And plot
fig, axs = plt.subplots(1, 3, figsize=(10, 10))
axs[0].imshow(dataset.images[left], cmap='gray', vmin=0, vmax=1)
axs[0].axis('off')
axs[0].set_title('Left')
axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1)
axs[1].axis('off')
axs[1].set_title('Morphed (%.2f %% right)' % (mix * 100))
axs[2].imshow(dataset.images[right], cmap='gray', vmin=0, vmax=1)
axs[2].axis('off')
axs[2].set_title('Right')
plt.tight_layout()
interact(plot_morph,
left=IntSlider(max=dataset.images.shape[0]-1),
right=IntSlider(max=dataset.images.shape[0]-1,value=1),
mix=FloatSlider(value=0.5, min=0, max=1.0))
```
(The answer: not very much...)
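There is a simple reason for this: with *all* components kept, project-then-average-then-un-project is the same linear map as averaging the pixels directly, because the full ${\bf V}$ is orthogonal. A minimal sketch with synthetic stand-ins (in the notebook these would be rows of the centered data matrix and the full `V`):

```
import numpy as np

# With the FULL set of components, mixing in PCA space equals mixing in
# pixel space, since V.T @ V is the identity for square orthogonal V.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 16))        # 20 "images" of 16 pixels, centered
_, _, V = np.linalg.svd(X, full_matrices=False)
mix = 0.5
x_avg = (X[0] @ V.T) * (1 - mix) + (X[1] @ V.T) * mix  # mix in PCA space
pca_morph = x_avg @ V                                   # un-project
pixel_morph = X[0] * (1 - mix) + X[1] * mix             # mix in pixel space
# pca_morph and pixel_morph agree up to floating point
```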
## Data Mining and Machine Learning
### k-nn Classification
#### Edgar Acuna
#### November 2018
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Example 1. Predicting the final grade in a course based on Ex1 and Ex2 scores
```
df=pd.read_csv("c://PW-PR/eje1dis.csv")
# Converting the table of predictors and the class column into matrices
y=df['Nota']
X=df.iloc[:,0:2]
# Creating a numeric "pass" column to represent the classes
lb_make = LabelEncoder()
df["pass"] = lb_make.fit_transform(df["Nota"])
# y.to_numpy() can also be used for the classification
y2=df['pass']
y1=y2.to_numpy()
X1=X.to_numpy()
#Applying knn with k=3 and finding the accuracy
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X1, y1)
neigh.score(X1, y1)
#Finding the predictions
pred=neigh.predict(X1)
print(pred)
#Finding the number of errors
error=(y1!=pred).sum()
print("This is the number of errors =", error)
#Visualizing the decision boundary with k=1
from matplotlib.colors import ListedColormap
neigh = KNeighborsClassifier(n_neighbors=1)
neigh.fit(X1,y1)
eje1=np.arange(start = X1[:, 0].min()-1, stop = X1[:, 0].max() + 1, step = 0.1)
eje2=np.arange(start = X1[:, 1].min()-1, stop = X1[:, 1].max() + 1, step = 0.11)
Y1, Y2 = np.meshgrid(eje1,eje2)
pred2=neigh.predict(np.c_[Y1.ravel(), Y2.ravel()]).reshape(Y1.shape)
print(pred2)
plt.figure(figsize=(10, 10))
plt.pcolormesh(Y1, Y2, pred2,cmap=plt.cm.Paired)
# Plot also the training points#
plt.scatter(X1[:, 0], X1[:, 1], c=y2, edgecolors='k')
plt.xlabel('Ex1')
plt.ylabel('Ex2')
plt.xlim(Y1.min(), Y1.max())
plt.ylim(Y2.min(), Y2.max())
plt.xticks(())
plt.yticks(())
plt.show()
#Visualizing the decision boundary with k=7
from matplotlib.colors import ListedColormap
neigh = KNeighborsClassifier(n_neighbors=7)
neigh.fit(X1,y1)
eje1=np.arange(start = X1[:, 0].min()-1, stop = X1[:, 0].max() + 1, step = 0.1)
eje2=np.arange(start = X1[:, 1].min()-1, stop = X1[:, 1].max() + 1, step = 0.11)
Y1, Y2 = np.meshgrid(eje1,eje2)
pred2=neigh.predict(np.c_[Y1.ravel(), Y2.ravel()]).reshape(Y1.shape)
plt.figure(figsize=(10, 10))
plt.pcolormesh(Y1, Y2, pred2,cmap=plt.cm.Paired)
# Plot also the training points#
plt.scatter(X1[:, 0], X1[:, 1], c=y2, edgecolors='k')
plt.xlabel('Ex1')
plt.ylabel('Ex2')
plt.xlim(Y1.min(), Y1.max())
plt.ylim(Y2.min(), Y2.max())
plt.xticks(())
plt.yticks(())
plt.show()
```
#### Example 2. K-nn applied to Diabetes
```
url= "http://academic.uprm.edu/eacuna/diabetes.dat"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pd.read_table(url, names=names)
print(data.shape)
data.head()
y=data['class']
X=data.iloc[:,0:8]
y1=y.to_numpy()
X1=X.to_numpy()
#Estimating the accuracy with k=3 neighbors using the holdout method
X_train, X_test, y_train, y_test = train_test_split(X1,y1, test_size=0.4, random_state=0)
X_train, y_train
X_test, y_test
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X_train, y_train)
pred=neigh.predict(X_test)
(pred==1).sum()
(pred==2).sum()
neigh.score(X_test, y_test)
print(classification_report(y_test, pred))
```
Precision is the ratio tp / (tp + fp), where tp=168 is the number of true positives and fp=53 is the number of false positives. It measures the capability of the classifier not to label as positive instances that are actually negative.
Recall (or sensitivity) is the ratio tp / (tp + fn), where fn=37 is the number of false negatives. It measures the capability of the classifier to find all the positive instances.
The f1-score is the harmonic mean of precision and recall.
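These definitions can be checked directly against the counts quoted above (tp=168, fp=53, fn=37); a small Python 3 sketch:

```
# Computing precision, recall and F1 from the raw confusion counts above.
tp, fp, fn = 168, 53, 37
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print("precision=%.3f recall=%.3f f1=%.3f" % (precision, recall, f1))
```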
```
#Estimating the accuracy with k=5 neighbors using cross-validation
from sklearn.model_selection import cross_val_score
neigh = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(neigh, X1, y1, cv=10)
scores
print("Accuracy using k=5 neighbors: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```
### Example 3. Landsat dataset visualized with principal components
```
#Loading the Landsat dataset
url='http://academic.uprm.edu/eacuna/landsat.txt'
data = pd.read_table(url, header=None,delim_whitespace=True)
y=data.iloc[:,36]
X=data.iloc[:,0:36]
y1=y.to_numpy()
X1=X.to_numpy()
#Estimating the accuracy with k=3 neighbors using the holdout method
X_train, X_test, y_train, y_test = train_test_split(X1,y1, test_size=0.4, random_state=0)
X_train, y_train
X_test, y_test
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X_train, y_train)
pred=neigh.predict(X_test)
neigh.score(X_test, y_test)
print(classification_report(y_test, pred))
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X = StandardScaler().fit_transform(X)
principalComponents = pca.fit_transform(X)
pcaDF=pd.DataFrame(data = principalComponents, columns = ['PC1', 'PC2'])
print(pca.explained_variance_)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.cumsum())
#Applying the knn classifier with k=9 and computing the accuracy
neigh = KNeighborsClassifier(n_neighbors=9)
neigh.fit(principalComponents, y)
pcaDF['class']=y
finalDf=pcaDF
#Accuracy rate
ypred=neigh.predict(principalComponents)
precision=(y==ypred).sum()/float(len(y))
print("This is the accuracy with two PCs =", precision)
from matplotlib.colors import ListedColormap
eje1=np.arange(start = finalDf['PC1'].min()-1, stop = finalDf['PC1'].max() + 1, step = 0.1)
eje2=np.arange(start = finalDf['PC2'].min()-1, stop = finalDf['PC2'].max() + 1, step = 0.11)
Y1, Y2 = np.meshgrid(eje1,eje2)
pred2=neigh.predict(np.c_[Y1.ravel(), Y2.ravel()]).reshape(Y1.shape)
plt.figure(figsize=(10, 10))
plt.pcolormesh(Y1, Y2, pred2,cmap=plt.cm.Paired)
# Plot also the training points#
plt.scatter(finalDf['PC1'], finalDf['PC2'], c=y)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.xlim(Y1.min(), Y1.max())
plt.ylim(Y2.min(), Y2.max())
plt.xticks(())
plt.yticks(())
plt.show()
```
# Module 4 Required Coding Activity
Introduction to Python Unit 1
The activity is based on modules 1 - 4 and is similar to the Jupyter Notebooks **`Practice_MOD04_1-6_IntroPy.ipynb`** and **`Practice_MOD04_1-7_IntroPy.ipynb`** which you may have completed as practice. This activity is a new version of the str_analysis() function.
| Some Assignment Requirements |
|:-------------------------------|
|This program requires the use of<ul><li>**`while`** loop to get non-empty input</li><li>**`if, else`**</li><li>**`if, else`** (nested)</li><li>**`.isdigit()`** check for integer only input</li><li>**`.isalpha()`** check for alphabetic only input</li></ul><br/>The program should **only** use code syntax covered in modules 1 - 4.<br/><br/>The program must result in printed message analysis of the input. |
## Program: `str_analysis()` Function
Create the str_analysis() function that takes 1 string argument and returns a string message. The message will be an analysis of a test string that is passed as an argument to str_analysis(). The function should respond with messages such as:
- "big number"
- "small number"
- "all alphabetic"
- "multiple character types"
The program will call str_analysis() with a string argument from input collected within a while loop. The while loop will test if input is empty (an empty string "") and continue to loop and gather input until the user submits at least 1 character (input cannot be empty).
The program then calls the str_analysis() function and prints the **return** message.
#### Sample input and output:
enter nothing (twice) then enter a word
```
enter word or integer:
enter word or integer:
enter word or integer: Hello
"Hello" is all alphabetical characters!
```
-----
alphabetical word input
```
enter word or integer: carbonization
"carbonization" is all alphabetical characters!
```
-----
numeric inputs
```
enter word or integer: 30
30 is a smaller number than expected
enter word or integer: 1024
1024 is a pretty big number
```
-----
### loop until non-empty input is submitted
This diagram represents the input part of the assignment - it is the loop to keep prompting the user for input until they submit some input (non-empty).

Once the user gives input with characters use the input in calling the str_analysis() function.
### Additional Details
In the body of the str_analysis() function:
- Check `if` string is digits
- if digits: convert to `int` and check `if` greater than 99
- if greater than 99 print a message about a "big number"
- if not greater than 99 print message about "small number"
- check if string isalpha then (since not digits)
- if isalpha print message about being all alpha
- if not isalpha print a message about being neither all alpha nor all digit
call the function with a string from user input
- Run and test your code before submitting
```
# [ ] create, call and test the str_analysis() function
def str_analysis(string):
    if string.isdigit():
        string = int(string)
        if string > 99:
            return str(string) + " That's a big number."
        else:
            return str(string) + " That's a small number."
    else:
        if string.isalpha():
            return string + " uses all alpha characters."
        else:
            return string + " is neither all alpha nor all digit."

print("StringTester\n")

while True:
    user_input = input("Input string for testing: ")
    if user_input == "":
        print("No input detected.")
    else:
        print(str_analysis(user_input))
        break
```
Submit this by creating a python file (.py) and submitting it in D2L. Be sure to test that it works.
# Descriptive analysis for the manuscript
Summarize geotagged tweets of the multiple regions used for the experiment and the application.
```
%load_ext autoreload
%autoreload 2
import os
import numpy as np
import pandas as pd
import yaml
import scipy.stats as stats
from tqdm import tqdm
def load_region_tweets(region=None):
df = pd.read_csv(f'../../dbs/{region}/geotweets.csv')
df['day'] = df['createdat'].apply(lambda x: x.split(' ')[0])
df['createdat'] = pd.to_datetime(df['createdat'], infer_datetime_format=True)
t_max, t_min = df.createdat.max(), df.createdat.min()
time_span = f'{t_min} - {t_max}'
num_users = len(df.userid.unique())
num_geo = len(df)
num_days = np.median(df.groupby(['userid'])['day'].nunique())
num_geo_freq = np.median(df.groupby(['userid']).size() / df.groupby(['userid'])['day'].nunique())
return region, time_span, num_users, num_geo, num_days, num_geo_freq
def user_stats_cal(data):
time_span = data.createdat.max() - data.createdat.min()
time_span = time_span.days
if time_span == 0:
time_span += 1
num_days = data['day'].nunique()
num_geo = len(data)
geo_freq = num_geo / num_days
share_active = num_days / time_span
return pd.DataFrame.from_dict({'time_span': [time_span],
'num_days': [num_days],
'num_geo': [num_geo],
'geo_freq': [geo_freq],
'share_active': [share_active]
})
def region_tweets_stats_per_user(region=None):
df = pd.read_csv(f'../../dbs/{region}/geotweets.csv')
df['day'] = df['createdat'].apply(lambda x: x.split(' ')[0])
df['createdat'] = pd.to_datetime(df['createdat'], infer_datetime_format=True)
tqdm.pandas(desc=region)
df_users = df.groupby('userid').progress_apply(user_stats_cal).reset_index()
df_users.loc[:, 'region'] = region
df_users.drop(columns=['level_1'], inplace=True)
return df_users
region_list = ['sweden', 'netherlands', 'saopaulo', 'australia', 'austria', 'barcelona',
'capetown', 'cebu', 'egypt', 'guadalajara', 'jakarta',
'johannesburg', 'kualalumpur', 'lagos', 'madrid', 'manila', 'mexicocity', 'moscow', 'nairobi',
'rio', 'saudiarabia', 'stpertersburg', 'surabaya']
with open('../../lib/regions.yaml', encoding='utf8') as f:
region_manager = yaml.load(f, Loader=yaml.FullLoader)
```
## 1 Summarize the geotagged tweets used as input to the model
Geotagged tweets: Time span, No. of Twitter users, No. of geotagged tweets,
Days covered/user, No. of geotagged tweets/day/user
```
df = pd.DataFrame([load_region_tweets(region=x) for x in region_list],
columns=('region', 'time_span', 'num_users', 'num_geo', 'num_days', 'num_geo_freq'))
df.loc[:, 'gdp_capita'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['gdp_capita'])
df.loc[:, 'country'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['country'])
df.loc[:, 'pop'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['pop'])
df.loc[:, 'time_span'] = df.loc[:, 'time_span'].apply(lambda x: ' - '.join([x_t.split(' ')[0] for x_t in x.split(' - ')]))
df.loc[:, 'region'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['name'])
df
df.to_clipboard(index=False)
```
## 1-extra Summarize the geotagged tweets used as input to the model - by user
This is for dissertation presentation - sparsity issue.
Geotagged tweets: Time span, No. of Twitter users, No. of geotagged tweets,
Days covered/user, No. of geotagged tweets/day/user
```
df = pd.concat([region_tweets_stats_per_user(region=x) for x in region_list])
df.loc[:, 'gdp_capita'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['gdp_capita'])
df.loc[:, 'country'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['country'])
df.loc[:, 'pop'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['pop'])
df.loc[:, 'region'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['name'])
df.to_csv('../../dbs/regional_stats.csv', index=False)
```
## 2 Merge ODMs for visualisation
This part applies to Sweden, The Netherlands, and Sao Paulo, Brazil.
Separate files will be deleted.
```
for region in ['sweden', 'netherlands', 'saopaulo']:
df = pd.read_csv(f'../../dbs/{region}/odm_gt.csv')
df_c = pd.read_csv(f'../../dbs/{region}/odm_calibration.csv')
df_v = pd.read_csv(f'../../dbs/{region}/odm_validation.csv')
df_cb = pd.read_csv(f'../../dbs/{region}/odm_benchmark_c.csv')
df_vb = pd.read_csv(f'../../dbs/{region}/odm_benchmark_v.csv')
df = pd.merge(df, df_c, on=['ozone', 'dzone'])
df = df.rename(columns={'model': 'model_c'})
df = pd.merge(df, df_v, on=['ozone', 'dzone'])
df = df.rename(columns={'model': 'model_v'})
df = pd.merge(df, df_cb, on=['ozone', 'dzone'])
df = df.rename(columns={'benchmark': 'benchmark_c'})
df = pd.merge(df, df_vb, on=['ozone', 'dzone'])
df = df.rename(columns={'benchmark': 'benchmark_v'})
df.loc[:, ['ozone', 'dzone',
'gt', 'model_c', 'model_v',
'benchmark_c', 'benchmark_v']].to_csv(f'../../dbs/{region}/odms.csv', index=False)
os.remove(f'../../dbs/{region}/odm_gt.csv')
os.remove(f'../../dbs/{region}/odm_calibration.csv')
os.remove(f'../../dbs/{region}/odm_validation.csv')
os.remove(f'../../dbs/{region}/odm_benchmark_c.csv')
os.remove(f'../../dbs/{region}/odm_benchmark_v.csv')
```
## 3 Quantify the od-pair similarity
This part applies to Sweden, The Netherlands, and Sao Paulo, Brazil.
The overall similarity.
```
quant_list = []
for region in ['sweden', 'netherlands', 'saopaulo']:
df = pd.read_csv(f'../../dbs/{region}/odms.csv')
    df_c = df.loc[(df['gt'] != 0) & (df.model_c != 0) & (df.benchmark_c != 0), :]  # use df['gt']: df.gt is the DataFrame greater-than method, not the column
mc = stats.kendalltau(df_c.loc[:, 'gt'], df_c.loc[:, 'model_c'])
quant_list.append((region, 'model', 'c', mc.correlation, mc.pvalue))
bc = stats.kendalltau(df_c.loc[:, 'gt'], df_c.loc[:, 'benchmark_c'])
quant_list.append((region, 'benchmark', 'c', bc.correlation, bc.pvalue))
    df_v = df.loc[(df['gt'] != 0) & (df.model_v != 0) & (df.benchmark_v != 0), :]  # again use df['gt'] to avoid the DataFrame.gt method
mv = stats.kendalltau(df_v.loc[:, 'gt'], df_v.loc[:, 'model_v'])
quant_list.append((region, 'model', 'v', mv.correlation, mv.pvalue))
bv = stats.kendalltau(df_v.loc[:, 'gt'], df_v.loc[:, 'benchmark_v'])
quant_list.append((region, 'benchmark', 'v', bv.correlation, bv.pvalue))
df_stats = pd.DataFrame(quant_list, columns=['region', 'type', 'data', 'cor', 'p'])
df_stats
df_stats.groupby(['region', 'type'])['cor'].mean()
```
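The cells above score each OD matrix against the ground truth with Kendall's rank correlation. As a minimal, self-contained illustration of the `scipy.stats.kendalltau` call used there (the flow values below are made up):

```python
from scipy import stats

# Two flow series whose orderings agree everywhere; identical rankings
# give a correlation of exactly 1.0 (the values themselves are invented).
gt_flows = [10, 40, 25, 5, 60]
model_flows = [12, 35, 30, 4, 70]
tau = stats.kendalltau(gt_flows, model_flows)
print(tau.correlation, tau.pvalue)
```

Kendall's tau compares rankings rather than magnitudes, which is why the zero-flow OD pairs are filtered out above before scoring.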
You now know how to:
1. Generate open-loop control from a given route
2. Simulate vehicular robot motion using a bicycle/unicycle model
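As a quick recap, both models are a single Euler step of simple kinematics; here is a minimal standalone version of the unicycle update (mirroring the `unicycle_model` defined in the code cell below):

```python
import numpy as np

def unicycle_step(pose, v, w, dt=1.0):
    # One Euler step of the unicycle kinematics:
    #   x' = v*cos(theta), y' = v*sin(theta), theta' = w
    x, y, theta = pose
    return (float(x + v * np.cos(theta) * dt),
            float(y + v * np.sin(theta) * dt),
            float(theta + w * dt))

# Driving straight for one step moves the robot one unit along its heading.
print(unicycle_step((0.0, 0.0, 0.0), 1.0, 0.0))  # (1.0, 0.0, 0.0)
```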
Imagine you want to make a utility for your co-workers to try out and understand vehicle models.
Dashboards are a common way to do this.
There are several options out there: Streamlit, Voila, Observable, etc.
Follow this
<a href="https://medium.com/plotly/introducing-jupyterdash-811f1f57c02e">Medium post</a> on JupyterDash to see how to package what you learnt today in an interactive manner.
Here is a <a href="https://stackoverflow.com/questions/53622518/launch-a-dash-app-in-a-google-colab-notebook">Stack Overflow question</a> on how to run Dash applications on Colab.
What can you assume?
+ Fix $v,\omega$ or $v,\delta$ depending on the model (users can still pick the actual value)
+ fixed wheelbase for bicycle model
Users can choose
+ unicycle and bicycle models
+ A pre-configured route ("S", "inverted-S", "figure-of-eight" etc)
+ 1 of 3 values for $v, \omega$ (or $\delta$)
```
!pip install jupyter-dash
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd
import numpy as np
# Load Data
velocities = ['1','2','3']
omegas = ['15','30','45']
shapes = ["S", "Inverted-S", "Figure of 8"]
models = ["Unicycle", "Bicycle"]
def unicycle_model(curr_pose, v, w, dt=1.0):
'''
>>> unicycle_model((0.0,0.0,0.0), 1.0, 0.0)
(1.0, 0.0, 0.0)
>>> unicycle_model((0.0,0.0,0.0), 0.0, 1.0)
(0.0, 0.0, 1.0)
>>> unicycle_model((0.0, 0.0, 0.0), 1.0, 1.0)
(1.0, 0.0, 1.0)
'''
    # Unicycle kinematics: x' = v*cos(theta), y' = v*sin(theta), theta' = w
x, y, theta = curr_pose
x += v*np.cos(theta)*dt
y += v*np.sin(theta)*dt
theta += w*dt
# Keep theta bounded between [-pi, pi]
theta = np.arctan2(np.sin(theta), np.cos(theta))
# return calculated (x, y, theta)
return x, y, theta
def bicycle_model(curr_pose, v, delta, dt=1.0):
'''
>>> bicycle_model((0.0,0.0,0.0), 1.0, 0.0)
(1.0, 0.0, 0.0)
>>> bicycle_model((0.0,0.0,0.0), 0.0, np.pi/4)
(0.0, 0.0, 0.0)
    >>> bicycle_model((0.0, 0.0, 0.0), 1.0, np.pi/4)  # theta = (v/L)*tan(delta), approximately 1.11
    (1.0, 0.0, 1.11)
'''
    # Bicycle kinematics: x' = v*cos(theta), y' = v*sin(theta), theta' = (v/L)*tan(delta)
    L = 0.9  # fixed wheelbase, per the assumptions above
x, y, theta = curr_pose
x += v*np.cos(theta)*dt
y += v*np.sin(theta)*dt
theta += (v/L)*np.tan(delta)*dt
# Keep theta bounded between [-pi, pi]
theta = np.arctan2(np.sin(theta), np.cos(theta))
# return calculated (x, y, theta)
return x, y, theta
def get_open_loop_commands(route, vc_fast=1, wc=np.pi/12, dt=1.0):
all_w = []
omegas = {'straight': 0, 'left': wc, 'right': -wc}
for manoeuvre, command in route:
        t_straight = np.ceil(command / vc_fast).astype('int')     # steps to cover the distance
        t_turn = np.ceil(np.deg2rad(command) / wc).astype('int')  # steps to sweep the angle
        t_cmd = t_straight if manoeuvre == 'straight' else t_turn
all_w += [omegas[manoeuvre]]*t_cmd
all_v = vc_fast * np.ones_like(all_w)
return all_v, all_w
def get_commands(shape):
if(shape == shapes[0]):
return [("right", 180),("left", 180)]
elif(shape == shapes[1]):
return [("left", 180),("right", 180)]
return [("right", 180),("left", 180),("left", 180),("right", 180)]
def get_angle(omega):
if(omega == omegas[0]):
return np.pi/12
elif(omega == omegas[1]):
return np.pi/6
return np.pi/4
# Build App
app = JupyterDash(__name__)
app.layout = html.Div([
html.H1("Unicycle/Bicycle"),
html.Label([
"velocity",
dcc.Dropdown(
id='velocity', clearable=False,
value='1', options=[
{'label': c, 'value': c}
for c in velocities
])
]),
html.Label([
"omega/delta",
dcc.Dropdown(
id='omega', clearable=False,
value='15', options=[
{'label': c, 'value': c}
for c in omegas
])
]),
html.Label([
"shape",
dcc.Dropdown(
id='shape', clearable=False,
value='S', options=[
{'label': c, 'value': c}
for c in shapes
])
]),
html.Label([
"model",
dcc.Dropdown(
id='model', clearable=False,
value='Unicycle', options=[
{'label': c, 'value': c}
for c in models
])
]),
dcc.Graph(id='graph'),
])
# Define callback to update graph
@app.callback(
Output('graph', 'figure'),
[Input("velocity", "value"), Input("omega", "value"), Input("shape", "value"), Input("model", "value")]
)
def update_figure(velocity, omega, shape, model):
robot_trajectory = []
all_v, all_w = get_open_loop_commands(get_commands(shape), int(velocity), get_angle(omega))
pose = (0, 0, np.pi/2)
for v, w in zip(all_v, all_w):
robot_trajectory.append(pose)
if model == models[0]:
pose = unicycle_model(pose, v, w)
else:
pose = bicycle_model(pose,v,w)
robot_trajectory = np.array(robot_trajectory)
    traj = pd.DataFrame({'x-axis': robot_trajectory[:, 0], 'y-axis': robot_trajectory[:, 1]})
    return px.line(traj, x="x-axis", y="y-axis", title='Simulate vehicular robot motion using unicycle/bicycle model')
# Run app and display result inline in the notebook
app.run_server(mode='inline')
```
# U-Net: nuclei segmentation 2
This is an implementation of a [U-Net](https://arxiv.org/abs/1505.04597), based on a [Kaggle kernel](https://www.kaggle.com/c0conuts/unet-imagedatagenerator-lb-0-336/notebook).
Changes:
* added model time elapsed with timeit
* modelling:
* batch_size=100
* epochs=3
* data augmentation:
* shear_range=0.3
* rotation_range=90
* zoom_range=0.4
* width_shift_range=0.3
* height_shift_range=0.3
* fill_mode='reflect'
Ideas:
* add watershed transform to mask image
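One idea listed above is a watershed transform to split touching nuclei in the predicted masks. A hedged sketch of what that post-processing could look like, assuming a recent scikit-image where `watershed` lives in `skimage.segmentation` (older releases had it in `skimage.morphology`):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Toy mask: two overlapping squares standing in for touching nuclei.
mask = np.zeros((80, 80), dtype=bool)
mask[10:40, 10:40] = True
mask[30:60, 30:60] = True

# Seeds are local maxima of the distance transform; watershed on the
# negated distance then splits the merged blob into separate labels.
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=10, labels=mask)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=mask)
```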
```
%pwd
import os
import sys
import random
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tqdm import tqdm
from datetime import datetime
from itertools import chain
from skimage.io import imread, imsave, imshow, imread_collection, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from keras.preprocessing import image
from keras.models import Model, load_model
from keras.layers import Input
from keras.layers.core import Dropout, Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend as K
from utils.imaging import get_path, get_image_ids, label_mask, segmented_annotate
from utils.evaluate import keras_mean_iou, submit_kaggle
from utils import run_length_encoding
%matplotlib inline
warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
```
Get the model name from the notebook name using JavaScript
```
%%javascript
IPython.notebook.kernel.execute('nb_name = ' + '"' + IPython.notebook.notebook_name + '"')
notebook_name = os.path.splitext(os.path.basename(nb_name))[0]
model_name = notebook_name + '.h5'
model_path = get_path('models') + model_name
submission_name = notebook_name + '.csv'
submission_path = get_path('submission') + submission_name
```
### 0. U-Net Parameters
```
seed = 42
# model parameters
BATCH_SIZE = 100 # larger batches give smoother gradient estimates but need more memory
IMG_WIDTH = 128 # for faster computing on kaggle
IMG_HEIGHT = 128 # for faster computing on kaggle
IMG_CHANNELS = 3
TRAIN_PATH = get_path('data_train_1')
TEST_PATH = get_path('data_test_1')
```
### 1. Preprocess data
```
# Get train and test IDs
train_ids = get_image_ids(TRAIN_PATH)
test_ids = get_image_ids(TEST_PATH)
np.random.seed(10)
# Get and resize train images and masks
X_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
Y_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, 1), dtype=bool)  # np.bool is deprecated; use the builtin bool
print('Getting and resizing train images and masks ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
path = TRAIN_PATH + id_
img = imread(path + '/images/' + id_ + '.png')[:,:,:IMG_CHANNELS]
img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
X_train[n] = img
    mask = np.zeros((IMG_HEIGHT, IMG_WIDTH, 1), dtype=bool)
for mask_file in next(os.walk(path + '/masks/'))[2]:
mask_ = imread(path + '/masks/' + mask_file)
mask_ = np.expand_dims(resize(mask_, (IMG_HEIGHT, IMG_WIDTH), mode='constant',
preserve_range=True), axis=-1)
mask = np.maximum(mask, mask_)
Y_train[n] = mask
# Get and resize test images
X_test = np.zeros((len(test_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
sizes_test = []
print('Getting and resizing test images ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(test_ids), total=len(test_ids)):
path = TEST_PATH + id_
img = imread(path + '/images/' + id_ + '.png')[:,:,:IMG_CHANNELS]
sizes_test.append([img.shape[0], img.shape[1]])
img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
X_test[n] = img
```
### 2. Data augmentation
```
# Creating the training Image and Mask generator
image_datagen = image.ImageDataGenerator(shear_range=0.3, rotation_range=90, zoom_range=0.4, width_shift_range=0.3, height_shift_range=0.3, fill_mode='reflect')
mask_datagen = image.ImageDataGenerator(shear_range=0.3, rotation_range=90, zoom_range=0.4, width_shift_range=0.3, height_shift_range=0.3, fill_mode='reflect')
# Keep the same seed for image and mask generators so they fit together
image_datagen.fit(X_train[:int(X_train.shape[0]*0.9)], augment=True, seed=seed)
mask_datagen.fit(Y_train[:int(Y_train.shape[0]*0.9)], augment=True, seed=seed)
x=image_datagen.flow(X_train[:int(X_train.shape[0]*0.9)],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
y=mask_datagen.flow(Y_train[:int(Y_train.shape[0]*0.9)],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
# Creating the validation Image and Mask generator
image_datagen_val = image.ImageDataGenerator()
mask_datagen_val = image.ImageDataGenerator()
image_datagen_val.fit(X_train[int(X_train.shape[0]*0.9):], augment=True, seed=seed)
mask_datagen_val.fit(Y_train[int(Y_train.shape[0]*0.9):], augment=True, seed=seed)
x_val=image_datagen_val.flow(X_train[int(X_train.shape[0]*0.9):],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
y_val=mask_datagen_val.flow(Y_train[int(Y_train.shape[0]*0.9):],batch_size=BATCH_SIZE,shuffle=True, seed=seed)
f, axarr = plt.subplots(2,2,figsize=(12,12))
axarr[0,0].imshow(x.next()[0].astype(np.uint8))
axarr[0,1].imshow(np.squeeze(y.next()[0].astype(np.uint8)))
axarr[1,0].imshow(x_val.next()[0].astype(np.uint8))
axarr[1,1].imshow(np.squeeze(y_val.next()[0].astype(np.uint8)))
#creating a training and validation generator that generate masks and images
train_generator = zip(x, y)
val_generator = zip(x_val, y_val)
```
### 3. Initialise U-Net model
```
# Build U-Net model
inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = Lambda(lambda x: x / 255) (inputs)
c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (s)
c1 = Dropout(0.1) (c1)
c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)
c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p1)
c2 = Dropout(0.1) (c2)
c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)
c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p2)
c3 = Dropout(0.2) (c3)
c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)
c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p3)
c4 = Dropout(0.2) (c4)
c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (p4)
c5 = Dropout(0.3) (c5)
c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c5)
u6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u6)
c6 = Dropout(0.2) (c6)
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c6)
u7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u7)
c7 = Dropout(0.2) (c7)
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c7)
u8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u8)
c8 = Dropout(0.1) (c8)
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c8)
u9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (u9)
c9 = Dropout(0.1) (c9)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (c9)
outputs = Conv2D(1, (1, 1), activation='sigmoid') (c9)
model = Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[keras_mean_iou])
model.summary()
```
### 4. Train U-Net model
```
# Fit model
start_time = datetime.now()
earlystopper = EarlyStopping(patience=3, verbose=1)
checkpointer = ModelCheckpoint(model_path, verbose=1, save_best_only=True)
results = model.fit_generator(train_generator,
validation_data=val_generator,
validation_steps=10,
steps_per_epoch=250,
epochs=5,
callbacks=[earlystopper, checkpointer]
)
time_elapsed = datetime.now() - start_time
print('\n Model training time elapsed (hh:mm:ss.ms) {}'.format(time_elapsed))
```
### 5. Predict with U-Net model
```
# Predict on train, val and test
model = load_model(model_path, custom_objects={'keras_mean_iou': keras_mean_iou})
preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
preds_val = model.predict(X_train[int(X_train.shape[0]*0.9):], verbose=1)
preds_test = model.predict(X_test, verbose=1)
# Threshold predictions
preds_train_t = (preds_train > 0.5).astype(np.uint8)
preds_val_t = (preds_val > 0.5).astype(np.uint8)
preds_test_t = (preds_test > 0.5).astype(np.uint8)
# Create list of upsampled test masks
preds_test_upsampled = []
for i in range(len(preds_test)):
preds_test_upsampled.append(resize(np.squeeze(preds_test[i]),
(sizes_test[i][0], sizes_test[i][1]),
mode='constant', preserve_range=True))
```
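`keras_mean_iou` is imported from the local `utils.evaluate` module and not shown in this notebook; for intuition, here is a minimal NumPy intersection-over-union for binary masks (a simplified stand-in, not the exact Keras metric used above):

```python
import numpy as np

def binary_iou(y_true, y_pred):
    # IoU = |A ∩ B| / |A ∪ B| for boolean masks; defined as 1.0 when both are empty.
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union if union else 1.0

# Masks that overlap in 1 of 3 "on" pixels give an IoU of 1/3.
print(binary_iou(np.array([1, 1, 0, 0], dtype=bool),
                 np.array([1, 0, 1, 0], dtype=bool)))
```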
sanity check on some training examples
```
f, axarr = plt.subplots(2,3,figsize=(12,12))
ix1 = random.randint(0, len(preds_train_t) - 1)  # randint is inclusive on both ends
ix2 = random.randint(0, len(preds_train_t) - 1)
axarr[0,0].imshow(X_train[ix1])
axarr[0,1].imshow(np.squeeze(Y_train[ix1]))
axarr[0,2].imshow(np.squeeze(preds_train_t[ix1]))
axarr[1,0].imshow(X_train[ix2])
axarr[1,1].imshow(np.squeeze(Y_train[ix2]))
axarr[1,2].imshow(np.squeeze(preds_train_t[ix2]))
```
### 7. Output image labels
Saving test labelled images
```
for idx, image_id in tqdm(enumerate(test_ids), total=len(test_ids)):
mask = preds_test_upsampled[idx] > 0.5
labels = label_mask(mask)
imsave(get_path('output_test_1_lab_seg') + image_id + '.png', labels)
```
Saving test annotated images
```
segmented_annotate(image_type = 'test', stage_num = 1)
df = run_length_encoding.rle_images_in_dir(image_type = 'test', stage_num = 1)
df.to_csv(submission_path, index=False)
```
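The `run_length_encoding` helper above also lives in the local `utils` package; here is a sketch of the core encoding, assuming the standard Kaggle convention of column-major pixel order and 1-indexed run starts:

```python
import numpy as np

def rle_encode(mask):
    # Run-length encode a binary mask in column-major (Fortran) pixel order,
    # emitting 1-indexed "start length" pairs, as the competition expects.
    pixels = np.concatenate([[0], mask.flatten(order='F'), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

# A 2x2 mask whose column-major flattening is [0, 1, 1, 1] encodes as "2 3".
print(rle_encode(np.array([[0, 1], [1, 1]])))  # 2 3
```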
### 8. Kaggle submit
```
message = "batch size of 100 and 5 epochs"
submit_string = submit_kaggle(notebook_name, submission_path, message)
!$submit_string
```
# Jupyter Notebook Users Manual (source):
https://athena.brynmawr.edu/jupyter/hub/dblank/public/Jupyter%20Notebook%20Users%20Manual.ipynb
# Jupyter Notebook Users Manual
This page describes the functionality of the [Jupyter](http://jupyter.org) electronic document system. Jupyter documents are called "notebooks" and can be seen as many things at once. For example, notebooks allow:
* creation in a **standard web browser**
* direct **sharing**
* using **text with styles** (such as italics and titles) to be explicitly marked using a [wikitext language](http://en.wikipedia.org/wiki/Wiki_markup)
* easy creation and display of beautiful **equations**
* creation and execution of interactive embedded **computer programs**
* easy creation and display of **interactive visualizations**
Jupyter notebooks (previously called "IPython notebooks") are thus interesting and useful to different groups of people:
* readers who want to view and execute computer programs
* authors who want to create executable documents or documents with visualizations
<hr size="5"/>
### Table of Contents
* [1. Getting to Know your Jupyter Notebook's Toolbar](#1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar)
* [2. Different Kinds of Cells](#2.-Different-Kinds-of-Cells)
* [2.1 Code Cells](#2.1-Code-Cells)
* [2.1.1 Code Cell Layout](#2.1.1-Code-Cell-Layout)
* [2.1.1.1 Row Configuration (Default Setting)](#2.1.1.1-Row-Configuration-%28Default-Setting%29)
* [2.1.1.2 Cell Tabbing](#2.1.1.2-Cell-Tabbing)
* [2.1.1.3 Column Configuration](#2.1.1.3-Column-Configuration)
* [2.2 Markdown Cells](#2.2-Markdown-Cells)
* [2.3 Raw Cells](#2.3-Raw-Cells)
* [2.4 Header Cells](#2.4-Header-Cells)
* [2.4.1 Linking](#2.4.1-Linking)
* [2.4.2 Automatic Section Numbering and Table of Contents Support](#2.4.2-Automatic-Section-Numbering-and-Table-of-Contents-Support)
* [2.4.2.1 Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
* [2.4.2.2 Table of Contents Support](#2.4.2.2-Table-of-Contents-Support)
* [2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support](#2.4.2.3-Using-Both-Automatic-Section-Numbering-and-Table-of-Contents-Support)
* [3. Keyboard Shortcuts](#3.-Keyboard-Shortcuts)
* [4. Using Markdown Cells for Writing](#4.-Using-Markdown-Cells-for-Writing)
* [4.1 Block Elements](#4.1-Block-Elements)
* [4.1.1 Paragraph Breaks](#4.1.1-Paragraph-Breaks)
* [4.1.2 Line Breaks](#4.1.2-Line-Breaks)
* [4.1.2.1 Hard-Wrapping and Soft-Wrapping](#4.1.2.1-Hard-Wrapping-and-Soft-Wrapping)
* [4.1.2.2 Soft-Wrapping](#4.1.2.2-Soft-Wrapping)
* [4.1.2.3 Hard-Wrapping](#4.1.2.3-Hard-Wrapping)
* [4.1.3 Headers](#4.1.3-Headers)
* [4.1.4 Block Quotes](#4.1.4-Block-Quotes)
* [4.1.4.1 Standard Block Quoting](#4.1.4.1-Standard-Block-Quoting)
* [4.1.4.2 Nested Block Quoting](#4.1.4.2-Nested-Block-Quoting)
* [4.1.5 Lists](#4.1.5-Lists)
* [4.1.5.1 Ordered Lists](#4.1.5.1-Ordered-Lists)
* [4.1.5.2 Bulleted Lists](#4.1.5.2-Bulleted-Lists)
* [4.1.6 Section Breaks](#4.1.6-Section-Breaks)
* [4.2 Backslash Escape](#4.2-Backslash-Escape)
* [4.3 Hyperlinks](#4.3-Hyperlinks)
* [4.3.1 Automatic Links](#4.3.1-Automatic-Links)
* [4.3.2 Standard Links](#4.3.2-Standard-Links)
* [4.3.3 Standard Links With Mouse-Over Titles](#4.3.3-Standard-Links-With-Mouse-Over-Titles)
* [4.3.4 Reference Links](#4.3.4-Reference-Links)
* [4.3.5 Notebook-Internal Links](#4.3.5-Notebook-Internal-Links)
* [4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles](#4.3.5.1-Standard-Notebook-Internal-Links-Without-Mouse-Over-Titles)
* [4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles](#4.3.5.2-Standard-Notebook-Internal-Links-With-Mouse-Over-Titles)
* [4.3.5.3 Reference-Style Notebook-Internal Links](#4.3.5.3-Reference-Style-Notebook-Internal-Links)
* [4.4 Tables](#4.4-Tables)
* [4.4.1 Cell Justification](#4.4.1-Cell-Justification)
* [4.5 Style and Emphasis](#4.5-Style-and-Emphasis)
* [4.6 Other Characters](#4.6-Other-Characters)
* [4.7 Including Code Examples](#4.7-Including-Code-Examples)
* [4.8 Images](#4.8-Images)
* [4.8.1 Images from the Internet](#4.8.1-Images-from-the-Internet)
* [4.8.1.1 Reference-Style Images from the Internet](#4.8.1.1-Reference-Style-Images-from-the-Internet)
* [4.9 LaTeX Math](#4.9-LaTeX-Math)
* [5. Bibliographic Support](#5.-Bibliographic-Support)
* [5.1 Creating a Bibtex Database](#5.1-Creating-a-Bibtex-Database)
* [5.1.1 External Bibliographic Databases](#5.1.1-External-Bibliographic-Databases)
* [5.1.2 Internal Bibliographic Databases](#5.1.2-Internal-Bibliographic-Databases)
* [5.1.2.1 Hiding Your Internal Database](#5.1.2.1-Hiding-Your-Internal-Database)
* [5.1.3 Formatting Bibtex Entries](#5.1.3-Formatting-Bibtex-Entries)
* [5.2 Cite Commands and Citation IDs](#5.2-Cite-Commands-and-Citation-IDs)
* [6. Turning Your Jupyter Notebook into a Slideshow](#6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow)
# 1. Getting to Know your Jupyter Notebook's Toolbar
At the top of your Jupyter Notebook window there is a toolbar. It looks like this:

Below is a table which helpfully pairs a picture of each of the items in your toolbar with a corresponding explanation of its function.
Button|Function
-|-
|This is your save button. You can click this button to save your notebook at any time, though keep in mind that Jupyter Notebooks automatically save your progress very frequently.
|This is the new cell button. You can click this button any time you want a new cell in your Jupyter Notebook.
|This is the cut cell button. If you click this button, the cell you currently have selected will be deleted from your Notebook.
|This is the copy cell button. If you click this button, the currently selected cell will be duplicated and stored in your clipboard.
|This is the paste button. It allows you to paste the duplicated cell from your clipboard into your notebook.
|These buttons allow you to move the location of a selected cell within a Notebook. Simply select the cell you wish to move and click either the up or down button until the cell is in the location you want it to be.
|This button will "run" your cell, meaning that it will interpret your input and render the output in a way that depends on [what kind of cell] [cell kind] you're using.
|This is the stop button. Clicking this button will stop your cell from continuing to run. This tool can be useful if you are trying to execute more complicated code, which can sometimes take a while, and you want to edit the cell before waiting for it to finish rendering.
|This is the restart kernel button. See your kernel documentation for more information.
|This is a drop down menu which allows you to tell your Notebook how you want it to interpret any given cell. You can read more about the [different kinds of cells] [cell kind] in the following section.
|Individual cells can have their own toolbars. This is a drop down menu from which you can select the type of toolbar that you'd like to use with the cells in your Notebook. Some of the options in the cell toolbar menu will only work in [certain kinds of cells][cell kind]. "None," which is how you specify that you do not want any cell toolbars, is the default setting. If you select "Edit Metadata," a toolbar that allows you to edit data about [Code Cells][code cells] directly will appear in the corner of all the Code cells in your notebook. If you select "Raw Cell Format," a tool bar that gives you several formatting options will appear in the corner of all your [Raw Cells][raw cells]. If you want to view and present your notebook as a slideshow, you can select "Slideshow" and a toolbar that enables you to organize your cells in to slides, sub-slides, and slide fragments will appear in the corner of every cell. Go to [this section][slideshow] for more information on how to create a slideshow out of your Jupyter Notebook.
|These buttons allow you to move the location of an entire section within a Notebook. Simply select the Header Cell for the section or subsection you wish to move and click either the up or down button until the section is in the location you want it to be. If you have used [Automatic Section Numbering][section numbering] or [Table of Contents Support][table of contents], remember to rerun those tools so that your section numbers or table of contents reflect your Notebook's new organization.
|Clicking this button will automatically number your Notebook's sections. For more information, check out the Reference Guide's [section on Automatic Section Numbering][section numbering].
|Clicking this button will generate a table of contents using the titles you've given your Notebook's sections. For more information, check out the Reference Guide's [section on Table of Contents Support][table of contents].
|Clicking this button will search your document for [cite commands][] and automatically generate intext citations as well as a references cell at the end of your Notebook. For more information, you can read the Reference Guide's [section on Bibliographic Support][bib support].
|Clicking this button will toggle [cell tabbing][], which you can learn more about in the Reference Guide's [section on the layout options for Code Cells][cell layout].
|Clicking this button will toggle the [column configuration][collumn configuration] for Code Cells, which you can learn more about in the Reference Guide's [section on the layout options for Code Cells][cell layout].
|Clicking this button will toggle spell checking. Spell checking only works in unrendered [Markdown Cells][] and [Header Cells][]. When spell checking is on all incorrectly spelled words will be underlined with a red squiggle. Keep in mind that the dictionary cannot tell what are [Markdown][md writing] commands and what aren't, so it will occasionally underline a correctly spelled word surrounded by asterisks, brackets, or other symbols that have specific meaning in Markdown.
[cell kind]: #2.-Different-Kinds-of-Cells "Different Kinds of Cells"
[code cells]: #2.1-Code-Cells "Code Cells"
[raw cells]: #2.3-Raw-Cells "Raw Cells"
[slideshow]: #6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow "Turning Your Jupyter Notebook Into a Slideshow"
[section numbering]: #2.4.2.1-Automatic-Section-Numbering
[table of contents]: #2.4.2.2-Table-of-Contents-Support
[cell tabbing]: #2.1.1.2-Cell-Tabbing
[cell layout]: #2.1.1-Code-Cell-Layout
[bib support]: #5.-Bibliographic-Support
[cite commands]: #5.2-Cite-Commands-and-Citation-IDs
[md writing]: #4.-Using-Markdown-Cells-for-Writing
[collumn configuration]: #2.1.1.3-Column-Configuration
[Markdown Cells]: #2.2-Markdown-Cells
[Header Cells]: #2.4-Header-Cells
# 2. Different Kinds of Cells
There are essentially four kinds of cells in your Jupyter notebook: Code Cells, Markdown Cells, Raw Cells, and Header Cells, though there are six levels of Header Cells.
## 2.1 Code Cells
By default, Jupyter Notebooks' Code Cells will execute Python. Jupyter Notebooks generally also support JavaScript, HTML, and Bash commands. For a more comprehensive list, see your kernel's documentation.
### 2.1.1 Code Cell Layout
Code cells have both an input and an output component. You can view these components in three different ways.
#### 2.1.1.1 Row Configuration (Default Setting)
Unless you specify otherwise, your Code Cells will always be configured this way, with both the input and output components appearing as horizontal rows and with the input above the output. Below is an example of a Code Cell in this default setting:
```
2 + 3
```
#### 2.1.1.2 Cell Tabbing
Cell tabbing allows you to look at the input and output components of a cell separately. It also allows you to hide either component behind the other, which can be useful when creating visualizations of data. Below is an example of a tabbed Code Cell:
```
2+3
```
#### 2.1.1.3 Column Configuration
Like the row configuration, the column layout option allows you to look at both the input and the output components at once. In the column layout, however, the two components appear beside one another, with the input on the left and the output on the right. Below is an example of a Code Cell in the column configuration:
```
2+3
```
## 2.2 Markdown Cells
In Jupyter Notebooks, Markdown Cells are the easiest way to write and format text. For a more thorough explanation of how to write in Markdown cells, refer to [this section of the guide][writing markdown].
[writing markdown]: #4.-Using-Markdown-Cells-for-Writing "Using Markdown Cells for Writing"
## 2.3 Raw Cells
Raw Cells, unlike all other Jupyter Notebook cells, have no input-output distinction. This means that Raw Cells cannot be rendered into anything other than what they already are. If you click the run button in your tool bar with a Raw Cell selected, the cell will remain exactly as is and your Jupyter Notebook will automatically select the cell directly below it. Raw cells have no style options, just the same monospace font that you use in all other unrendered Notebook cells. You cannot bold, italicize, or enlarge any text or characters in a Raw Cell.
Because they have no rendered form, Raw Cells are mainly used to create examples. If you save and close your Notebook and then reopen it, all of the Code, Markdown, and Header Cells will automatically render in whatever form you left them when you first closed the document. This means that if you wanted to preserve the unrendered version of a cell, say if you were writing a computer science paper and needed code examples, or if you were writing [documentation on how to use Markdown] [writing markdown] and needed to demonstrate what input would yield which output, then you might want to use a Raw Cell to make sure your examples stayed in their most useful form.
[writing markdown]: #4.-Using-Markdown-Cells-for-Writing "Using Markdown Cells for Writing"
## 2.4 Header Cells
While it is possible to organize your document using [Markdown headers][], Header Cells provide a deeper structural organization for your Notebook, and thus there are several advantages to using them.
[Markdown headers]: #4.1.3-Headers "Headers"
### 2.4.1 Linking
Header Cells have specific locations inside your Notebook. This means you can use them to [create Notebook-internal links](#4.3.5-Notebook-Internal-Links "Notebook-Internal Links").
### 2.4.2 Automatic Section Numbering and Table of Contents Support
Your Jupyter Notebook has two helpful tools that utilize the structural organization that Header Cells give your document: automatic section numbering and table of contents generation.
#### 2.4.2.1 Automatic Section Numbering
Suppose you are writing a paper and, as is prone to happen when you have a lot of complicated thoughts buzzing around your brain, you've reorganized your ideas several times. Automatic section numbering will go through your Notebook and number your sections and subsections as designated by your Header Cells. This means that if you've moved one or more big sections around several times, you won't have to go through your paper and renumber it, as well as all its subsections, yourself.
**Note:** Automatic Section Numbering is a tri-toggling tool, so when you click the Number Sections button one of three actions will occur: Automatic Section Numbering will number your sections, correct inconsistent numbering, or unnumber your sections (if all of your sections are already consistently and correctly numbered).
So, even if you have previously numbered your sections, Automatic Section Numbering will go through your document, delete the current section numbers, and replace them with the correct numbers in a linear sequence. This means that if your third section was once your second, Automatic Section Numbering will delete the "2" in front of your section's name and replace it with a "3."
While this function saves you a lot of time, it creates one limitation. Maybe you're writing a paper about children's books and one of the books you're discussing is called **`2 Cats`**. You've unsurprisingly titled the section where you summarize and analyze this book **`2 Cats`**. Automatic Section Numbering will assume the number 2 is section information and delete it, leaving just the title **`Cats`** behind. If you bold, italicize, or place the title of the section inside quotes, however, the entire section title will be preserved without any trouble. It should also be noted that if you must title a section with a number occurring before any letters, and you do not want to bold it, italicize it, or place it inside quotes, you can always run Automatic Section Numbering and then go to that section and retype its name by hand.
Because Automatic Section Numbering uses your header cells, its performance relies somewhat on the clarity of your organization. If you have two sections that begin with Header 1 Cells in your paper, and each of the sections has two subsections that begin with Header 2 Cells, Automatic Section Numbering will number them 1, 1.1, 1.2, 2, 2.1, and 2.2 respectively. If, however, you have used a Header 3 Cell to indicate the beginning of what would have been section 2.1, Automatic Section Numbering will number that section 2.0.1 and an error message will appear telling you that "You placed a Header 3 cell under a Header 2 Cell in section 2". Similarly, if you begin your paper with any Header Cell smaller than a Header 1, say a Header 3 Cell, then Automatic Section Numbering will number your first section 0.0.3 and an error message will appear telling you that "Notebook begins with a Header 3 Cell."
#### 2.4.2.2 Table of Contents Support
The Table of Contents tool will automatically generate a table of contents for your paper by taking all your Header Cell titles and ordering them in a list, which it will place in a new cell at the very beginning of your Notebook. Because your Notebook does not utilize formal page breaks or numbers, each listed section will be hyperlinked to the actual section within your document.
**Note:** Because Table of Contents Support uses your header cells, its performance relies somewhat on the clarity of your organization. If you have two sections that begin with Header 1 Cells in your paper, and each of the sections has two subsections that begin with Header 2 Cells, Table of Contents will order them in the following way:
* 1.
* 1.1
* 1.2
* 2.
* 2.1
* 2.2
If, however, you have used a Header 3 Cell to indicate the beginning of what would have been section 2.1, Table of Contents Support will insert a dummy line so that your table of contents looks like this:
* 1.
* 1.1
* 1.2
* 2.
*
* 2.0.1
* 2.2
#### 2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support
Automatic Section Numbering will always update every aspect of your notebook that is dependent on the title of one or more of your sections. This means that it will automatically correct an existing table of contents and all of your Notebook-internal links to reflect the new numbered section titles.
# 3. Keyboard Shortcuts
Jupyter Notebooks support many helpful keyboard shortcuts, including ones for most of the buttons in [your toolbar][]. To view these shortcuts, you can click the help menu and then select Keyboard Shortcuts, as pictured below.
[your toolbar]: #1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar "Getting to know Your Jupyter Notebook's Toolbar"

# 4. Using Markdown Cells for Writing
**Why aren't there font and font size selection drop down menus, buttons I can press to bold and italicize my text, or other advanced style options in my Notebook?**
When you use Microsoft Word, Google Docs, Apple Pages, Open Office, or any other word processing software, you generally use your mouse to select various style options, like line spacing, font size, font color, paragraph format etc. This kind of system is often described as a WYSIWYG (What You See Is What You Get) interface. This means that the input (what you tell the computer) exactly matches the output (what the computer gives back to you). If you type the letter **`G`**, highlight it, select the color green and up the font size to 64 pt, your word processor will show you a fairly large green colored letter **`G`**. And if you print out that document you will print out a fairly large green colored letter **`G`**.
This Notebook, however, does not use a WYSIWYG interface. Instead it uses something called a "[markup language][]". When you use a markup language, your input does not necessarily exactly equal your output.
[markup language]: http://en.wikipedia.org/wiki/Markup_language "Wikipedia Article on Markup"
For example, if I type "#Header 1" at the beginning of a cell, but then press Shift-Enter (or click the play button at the top of the window), this notebook will turn my input into a somewhat different output in the following way:
<pre>
#Header 1
</pre>
#Header 1
And if I type "##Header 2" (at the beginning of a cell), this notebook will turn that input into another output:
<pre>
##Header 2
</pre>
##Header 2
In these examples, the hashtags are markers which tell the Notebook how to typeset the text. There are many markup languages, but one family, or perhaps guiding philosophy, of markup languages is called "Markdown," named somewhat jokingly for its simplicity. Your Notebook uses "marked," a Markdown library, to interpret typesetting and other formatting instructions, like the hashtags in the examples above.
Markdown is a markup language that generates HTML, which the cell can interpret and render. This means that Markdown Cells can also render plain HTML code. If you're interested in learning HTML, check out this [helpful online tutorial][html tutorial].
[html tutorial]: http://www.w3schools.com/html/ "w3schools.com HTML Tutorial"
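Because Markdown generates HTML, you can also mix raw HTML tags directly into a Markdown Cell when Markdown alone cannot produce the effect you want. As a small illustration (the tags used here are ordinary HTML, nothing Notebook-specific):
<pre>
Markdown *italics* next to &lt;b&gt;HTML bold&lt;/b&gt;, plus &lt;span style="color:green"&gt;inline-styled text&lt;/span&gt;.
</pre>
Markdown *italics* next to <b>HTML bold</b>, plus <span style="color:green">inline-styled text</span>.
When the cell is run, the HTML renders right alongside the Markdown.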
**Why Use Markdown (and not a WYSIWYG)?**
Why is Markdown better? Well, it’s worth saying that maybe it isn't. Mainly, it’s not actually a question of better or worse, but of what’s in front of you and of who you are. A definitive answer depends on the user and on that user’s goals and experience. These Notebooks don't use Markdown because it's definitely better, but rather because it's different and thus encourages users to think about their work differently.
It is very important for computer science students to learn how to conceptualize input and output as dependent, but also distinct. One good reason to use Markdown is that it encourages this kind of thinking. Relatedly, it might also promote focus on substance over surface aesthetic. Markdown is somewhat limited in its style options, which means that there are inherently fewer non-subject-specific concerns to agonize over while working. The conceit of this philosophy is that, by using Markdown and this Notebook, you would begin to think of the specific stylistic rendering of your cells as distinct from what you type into those same cells, and thus also think of the content of your writing as necessarily separate from its formatting and appearance.
## 4.1 Block Elements
### 4.1.1 Paragraph Breaks
Paragraphs consist of one or more consecutive lines of text and they are separated by one or more blank lines. If a line contains only spaces, it is a blank line.
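For example, the following input produces two paragraphs; removing the blank line would merge it into one:
<pre>
This is the first paragraph.

This is the second paragraph.
</pre>
This is the first paragraph.

This is the second paragraph.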
### 4.1.2 Line Breaks
#### 4.1.2.1 Hard-Wrapping and Soft-Wrapping
If you're used to word processing software, you've been writing with automatically hard-wrapped lines and paragraphs. In a hard-wrapped paragraph the line breaks are not dependent on the size of the viewing window. If you click and drag your mouse to expand a word processing document, for example, the shape of the paragraphs and the length of the lines will not change. In other words, the length of a hard-wrapped line is determined either by the number of words in the line (in the case of word processing software where this number is predetermined and the program wraps for the user automatically), or individual intention (when a user manually presses an Enter or Return key to control exactly how long a line is).
Soft-wrapped paragraphs and lines, however, *do* depend on the size of their viewing window. If you increase the size of a window where soft-wrapped paragraphs are displayed, they too will expand into longer lines, becoming shorter and wider to fill the increased window space horizontally. Unsurprisingly, then, if you *narrow* a window, soft-wrapped lines will shrink and the paragraphs will become longer vertically.
Markdown, unlike most word processing software, does not automatically hard-wrap. If you want your paragraphs to have a particular or deliberate shape and size, you must insert your own break by ending the line with two spaces and then typing Return.
#### 4.1.2.2 Soft-Wrapping
<tt>
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
</tt>
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
#### 4.1.2.3 Hard-Wrapping
<tt>
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah
</tt>
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah
### 4.1.3 Headers
<pre>
#Header 1
</pre>
#Header 1
<pre>
##Header 2
</pre>
##Header 2
<pre>
###Header 3
</pre>
###Header 3
<pre>
####Header 4
</pre>
####Header 4
<pre>
#####Header 5
</pre>
#####Header 5
<pre>
######Header 6
</pre>
######Header 6
### 4.1.4 Block Quotes
#### 4.1.4.1 Standard Block Quoting
<tt>
>blah blah block quote blah blah block quote blah blah block
quote blah blah block quote blah blah block
quote blah blah block quote blah blah block quote blah blah block quote
</tt>
>blah blah block quote blah blah block quote blah blah block
quote blah blah block quote blah blah block
quote blah blah block quote blah blah block quote blah blah block quote
**Note**: Block quotes work best if you intentionally hard-wrap the lines.
#### 4.1.4.2 Nested Block Quoting
<pre>
>blah blah block quote blah blah block quote blah blah block
block quote blah blah block block quote blah blah block
>>quote blah blah block quote blah blah
block block quote blah blah block
>>>quote blah blah block quote blah blah block quote blah blah block quote
</pre>
>blah blah block quote blah blah block quote blah blah block
block quote blah blah block block quote blah blah block
>>quote blah blah block quote blah blah
block block quote blah blah block
>>>quote blah blah block quote blah blah block quote blah blah block quote
### 4.1.5 Lists
#### 4.1.5.1 Ordered Lists
In Markdown, you can list items using numbers, a **`+`**, a **` - `**, or a **`*`**. However, if the first item in a list or sublist is numbered, Markdown will interpret the entire list as ordered and will automatically number the items linearly, no matter what character you use to denote any given separate item.
<pre>
####Groceries:
0. Fruit:
6. Pears
0. Peaches
3. Plums
4. Apples
2. Granny Smith
7. Gala
* Oranges
- Berries
8. Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
9. Whole Wheat
0. With oats on crust
0. Without oats on crust
0. Rye
0. White
0. Dairy:
0. Milk
0. Whole
0. Skim
0. Cheese
0. Wisconsin Cheddar
0. Pepper Jack
</pre>
####Groceries:
0. Fruit:
6. Pears
0. Peaches
3. Plums
4. Apples
2. Granny Smith
7. Gala
* Oranges
- Berries
8. Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
9. Whole Wheat
0. With oats on crust
0. Without oats on crust
0. Rye
0. White
0. Dairy:
0. Milk
0. Whole
0. Skim
0. Cheese
0. Wisconsin Cheddar
0. Pepper Jack
#### 4.1.5.2 Bulleted Lists
If you begin your list or sublist with a **`+`**, a **` - `**, or a **`*`**, then Markdown will interpret the whole list as unordered and will use bullets regardless of the characters you type before any individual list item.
<pre>
####Groceries:
* Fruit:
* Pears
0. Peaches
3. Plums
4. Apples
- Granny Smith
7. Gala
* Oranges
- Berries
- Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
* Whole Wheat
* With oats on crust
0. Without oats on crust
+ Rye
0. White
0. Dairy:
* Milk
+ Whole
0. Skim
- Cheese
- Wisconsin Cheddar
0. Pepper Jack
</pre>
####Groceries:
* Fruit:
* Pears
0. Peaches
3. Plums
4. Apples
- Granny Smith
7. Gala
* Oranges
- Berries
- Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
* Whole Wheat
* With oats on crust
0. Without oats on crust
+ Rye
0. White
0. Dairy:
* Milk
+ Whole
0. Skim
- Cheese
- Wisconsin Cheddar
0. Pepper Jack
### 4.1.6 Section Breaks
<pre>
___
</pre>
___
<pre>
***
</pre>
***
<pre>------</pre>
------
<pre>
* * *
</pre>
* * *
<pre>
_ _ _
</pre>
_ _ _
<pre>
- - -
</pre>
- - -
## 4.2 Backslash Escape
What happens if you want to include a literal character, like a **`#`**, that usually has a specific function in Markdown? Backslash Escape is a function that prevents Markdown from interpreting a character as an instruction, rather than as the character itself. It works like this:
<pre>
\# Wow, this isn't a header.
# This is definitely a header.
</pre>
\# Wow, this isn't a header.
# This is definitely a header.
Markdown allows you to use a backslash to escape from the functions of the following characters:
* \ backslash
* ` backtick
* \* asterisk
* _ underscore
* {} curly braces
* [] square brackets
* () parentheses
* \# hashtag
* \+ plus sign
* \- minus sign (hyphen)
* . dot
* ! exclamation mark
## 4.3 Hyperlinks
### 4.3.1 Automatic Links
<pre>
http://en.wikipedia.org
</pre>
http://en.wikipedia.org
### 4.3.2 Standard Links
<pre>
[click this link](http://en.wikipedia.org)
</pre>
[click this link](http://en.wikipedia.org)
### 4.3.3 Standard Links With Mouse-Over Titles
<pre>
[click this link](http://en.wikipedia.org "Wikipedia")
</pre>
[click this link](http://en.wikipedia.org "Wikipedia")
### 4.3.4 Reference Links
Suppose you are writing a document in which you intend to include many links. The format above is a little arduous and if you have to do it repeatedly *while* you're trying to focus on the content of what you're writing, it's going to be a really big pain.
Fortunately, there is an alternative way to insert hyperlinks into your text, one where you indicate that there is a link, name that link, and then use the name to provide the actually URL later on when you're less in the writing zone. This method can be thought of as a "reference-style" link because it is similar to using in-text citations and then defining those citations later in a more detailed reference section or bibliography.
<pre>
This is [a reference] [identification tag for link]
[identification tag for link]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference] [identification tag for link]
[identification tag for link]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note:** The "identification tag for link" can be anything. For example:
<pre>
This is [a reference] [lfskdhflhslgfh333676]
[lfskdhflhslgfh333676]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference] [lfskdhflhslgfh333676]
[lfskdhflhslgfh333676]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
This means you can give your link an intuitive, easy to remember, and relevant ID:
<pre>
This is [a reference][Chile]
[chile]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference][Chile]
[chile]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note**: Link IDs are not case-sensitive.
If you don't want to give your link an ID, you don't have to. As a shortcut, Markdown will understand if you just use the words in the first set of brackets to define the link later on. This works in the following way:
<pre>
This is [a reference][]
[a reference]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference][]
[a reference]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
Another really helpful feature of a reference-style link is that you can define the link anywhere in the cell (though it must be in the same cell). For example:
<tt>
This is [a reference] [ref] blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah <br/><br/>
[ref]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</tt>
This is [a reference] [ref] blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
[ref]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note:** Providing a mouse-over title for any link, regardless of whether it is a standard or reference-style type, is optional. With reference-style links, you can include the mouse-over title by placing it in quotes, single quotes, or parentheses. For standard links, you can only define a mouse-over title in quotes.
### 4.3.5 Notebook-Internal Links
When you create a Header you also create a discrete location within your Notebook. This means that, just like you can link to a specific location on the web, you can also link to a Header Cell inside your Notebook. Internal links have very similar Markdown formatting to regular links. The only difference is that the name of the link, which is the URL in the case of external links, is just a hashtag plus the name of the Header Cell you are linking to (case-sensitive) with dashes in between every word. If you hover your mouse over a Header Cell, a blue Greek pi letter will appear next to your title. If you click on it, the URL at the top of your window will change and the internal link to that section will appear last in the address. You can copy and paste it in order to make an internal link inside a Markdown Cell.
#### 4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles
<pre>
[Here's a link to the section of Automatic Section Numbering](#Automatic-Section-Numbering)
</pre>
[Here's a link to the section of Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
#### 4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles
<pre>
[Here's a link to the section on lists](#Lists "Lists")
</pre>
[Here's a link to the section on lists](#4.1.5-Lists "Lists")
#### 4.3.5.3 Reference-Style Notebook-Internal Links
<pre>
[Here's a link to the section on Table of Contents Support][TOC]
[TOC]: #Table-of-Contents-Support
</pre>
[Here's a link to the section on Table of Contents Support][TOC]
[TOC]: #2.4.2.2-Table-of-Contents-Support
## 4.4 Tables
In Markdown, you can make a table by using vertical bars and dashes to define the cell and header borders:
<pre>
|Header|Header|Header|Header|
|------|------|------|------|
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
</pre>
|Header|Header|Header|Header|
|------|------|------|------|
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
Making a table this way might be especially useful if you want your document to be legible both rendered and unrendered. However, you don't *need* to include all of those dashes, vertical bars, and spaces for Markdown to understand that you're making a table. Here's the bare minimum you would need to create the table above:
<pre>
Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
</pre>
Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
It's important to note that the second line of dashes and vertical bars is essential. If you have just the line of headers and the second line of dashes and vertical bars, that's enough for Markdown to make a table.
Another important formatting issue has to do with the vertical bars that define the left and right edges of the table. If you include all the vertical bars on the far left and right of the table, like in the first example above, Markdown will ignore them completely. *But*, if you leave out some and include others, Markdown will interpret any extra vertical bar as an additional cell on the side that the bar appears in the unrendered version of the text. This also means that if you include the far left or right vertical bar in the second line of bars and dashes, you must include all of the otherwise optional vertical bars (like in the first example above).
### 4.4.1 Cell Justification
If not otherwise specified, the text in each header and cell of a table will justify to the left. If, however, you wish to specify either right justification or centering, you may do so like this:
<tt>
**Centered, Right-Justified, and Regular Cells and Headers**:
centered header | regular header | right-justified header | centered header | regular header
:-:|-|-:|:-:|-
centered cell|regular cell|right-justified cell|centered cell|regular cell
centered cell|regular cell|right-justified cell|centered cell|regular cell
</tt>
**Centered, Right-Justified, and Regular Cells and Headers**:
centered header | regular header | right-justified header | centered header | regular header
:-:|-|-:|:-:|-
centered cell|regular cell|right-justified cell|centered cell|regular cell
centered cell|regular cell|right-justified cell|centered cell|regular cell
If it is difficult to see that the headers are justified differently from one another, that is only because the longest line of characters in any column defines the width of every header and cell in that column.
**Note:** You cannot make tables directly beneath a line of text. You must put a blank line between the end of a paragraph and the beginning of a table.
## 4.5 Style and Emphasis
<pre>
*Italics*
</pre>
*Italics*
<pre>
_Italics_
</pre>
_Italics_
<pre>
**Bold**
</pre>
**Bold**
<pre>
__Bold__
</pre>
__Bold__
**Note:** If you want actual asterisks or underscores to appear in your text, you can use the [backslash escape function] [backslash] like this:
[backslash]: #4.2-Backslash-Escape "Backslash Escape"
<pre>
\*awesome asterisks\* and \_incredible under scores\_
</pre>
\*awesome asterisks\* and \_incredible under scores\_
## 4.6 Other Characters
<pre>
Ampersand &amp; Ampersand
</pre>
Ampersand & Ampersand
<pre>
&lt; angle brackets &gt;
</pre>
< angle brackets >
<pre>
&quot; quotes &quot;
</pre>
" quotes "
## 4.7 Including Code Examples
If you want to signify that a particular section of text is actually an example of code, you can use backquotes to surround the code example. These will switch the font to monospace, which creates a clear visual formatting difference between the text that is meant to be code and the text that isn't.
Code can appear either in the middle of a paragraph or as a block. Use a single backquote to start and stop code in the middle of a paragraph. Here's an example:
<pre>
The word `monospace` will appear in a code-like form.
</pre>
The word `monospace` will appear in a code-like form.
**Note:** If you want to include a literal backquote in your code example you must surround the whole text block in double backquotes like this:
<pre>
`` Look at this literal backquote ` ``
</pre>
`` Look at this literal backquote ` ``
To include a complete code-block inside a Markdown cell, use triple backquotes. Optionally, you can put the name of the language that you are quoting after the starting triple backquotes, like this:
<pre>
```python
def function(n):
return n + 1
```
</pre>
That will format the code-block (sometimes called "fenced code") with syntax coloring. The above code block will be rendered like this:
```python
def function(n):
return n + 1
```
The language formatting names that you can currently use after the triple backquote are:
<pre>
apl django go jinja2 ntriples q smalltalk toml
asterisk dtd groovy julia octave r smarty turtle
clike dylan haml less pascal rpm smartymixed vb
clojure ecl haskell livescript pegjs rst solr vbscript
cobol eiffel haxe lua perl ruby sparql velocity
coffeescript erlang htmlembedded markdown php rust sql verilog
commonlisp fortran htmlmixed pig sass stex xml
css gas http mirc properties scheme tcl xquery
d gfm jade mllike puppet shell tiddlywiki yaml
diff gherkin javascript nginx python sieve tiki z80
</pre>
## 4.8 Images
### 4.8.1 Images from the Internet
Inserting an image from the internet is almost identical to inserting a link. You just also type a **`!`** before the first set of brackets:
<pre>

</pre>

**Note:** Unlike with a link, the words that you type in the first set of brackets do not appear when they are rendered into html by Markdown.
#### 4.8.1.1 Reference-Style Images from the Internet
Just like with links, you can also use a reference-style format when inserting images from the internet. This involves indicating where you want to place a picture, giving that picture an ID tag, and then later defining that ID tag. The process is nearly identical to using the reference-style format to insert a link:
<pre>
![][giraffe]
[giraffe]:http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/South_African_Giraffe,_head.jpg/877px-South_African_Giraffe,_head.jpg "Picture of a Giraffe"
</pre>
![][giraffe]
[giraffe]: http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/South_African_Giraffe,_head.jpg/877px-South_African_Giraffe,_head.jpg "Picture of a Giraffe"
## 4.9 LaTeX Math
Jupyter Notebooks' Markdown cells support LaTeX for formatting mathematical equations. To tell Markdown to interpret your text as LaTeX, surround your input with dollar signs like this:
<pre>
$z=\dfrac{2x}{3y}$
</pre>
$z=\dfrac{2x}{3y}$
An equation can be very complex:
<pre>
$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k x} dx$
</pre>
$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k x} dx$
If you want your LaTeX equations to be indented towards the center of the cell, surround your input with two dollar signs on each side like this:
<pre>
$$2x+3y=z$$
</pre>
$$2x+3y=z$$
For a comprehensive guide to the mathematical symbols and notations supported by Jupyter Notebooks' Markdown cells, check out [Martin Keefe's helpful reference materials on the subject][mkeefe].
[mkeefe]: http://martinkeefe.com/math/mathjax1 "Martin Keefe's MathJax Guide"
# 5. Bibliographic Support
Bibliographic Support makes managing references and citations in your Notebook much easier by automating some of the bibliographic process every person goes through when doing research or writing in an academic context. There are essentially three steps to this process for which your Notebook's Bibliographic Support can assist: gathering and organizing sources you intend to use, citing those sources within the text you are writing, and compiling all of the material you referenced into an organized, correctly formatted list, the kind which usually appears at the end of a paper in a section titled "References," "Bibliography," or "Works Cited."
In order to benefit from this functionality, you need to do two things while writing your paper: first, you need to create a [Bibtex database][bibdb] of information about your sources and second, you must use the [cite command][cc] in your Markdown writing cells to indicate where you want in-text citations to appear.
If you do both these things, the "Generate References" button will be able to do its job by replacing all of your cite commands with validly formatted in-text citations and creating a References section at the end of your document, which will only ever include the works you specifically cited within in your Notebook.
**Note:** References are generated without a header cell, just a [markdown header][]. This means that if you want a References section to appear in your table of contents, you will have to unrender the References cell, delete the "References" header, make a Header Cell of the appropriate level and title it "References" yourself, and then generate a table of contents using [Table of Contents Support][table of contents]. This way, you can also title your References section "Bibliography" or "Works Cited," if you want.
[markdown header]: #4.1.3-Headers
[table of contents]: #2.4.2.2-Table-of-Contents-Support
[bibdb]: #5.1-Creating-a-Bibtex-Database
[cc]:#5.2-Cite-Commands-and-Citation-IDs
## 5.1 Creating a Bibtex Database
Bibtex is reference management software for formatting lists of references ([from Wikipedia](http://en.wikipedia.org/wiki/BibTeX "Wikipedia Article On Bibtex")). While your Notebook does not use the Bibtex software, it does use [Bibtex formatting](#5.1.3-Formatting-Bibtex-Entries) for creating references within your Bibliographic database.
In order for the Generate References button to work, you need a bibliographic database for it to search and match up with the sources you've indicated you want to credit using [cite commands and citation IDs](#5.2-Cite-Commands-and-Citation-IDs).
When creating a bibliographic database for your Notebook, you have two options: you can make an external database, which will exist in a separate Notebook from the one you are writing in, or you can make an internal database which will exist in a single cell inside the Notebook in which you are writing. Below are explanations of how to use these database creation strategies, as well as a discussion of the pros and cons for each.
### 5.1.1 External Bibliographic Databases
To create an external bibliographic database, you will need to create a new Notebook and title it **`Bibliography`** in the top-level folder of your current Jupyter session. As long as you do not also have an internal bibliographic database, when you click the Generate References button your Notebook's Bibliographic Support will search this other **`Bibliography`** Notebook for Bibtex entries. Bibtex entries can be in any kind of cell in your **`Bibliography`** Notebook as long as the cell begins with **`<!--bibtex`** and ends with **`-->`**. Go to [this section][bibfor] for examples of valid BibTex formatting.
Not every cell has to contain BibTex entries for the external bibliographic database to work as intended with your Notebook's bibliographic support. This means you can use the same helpful organization features that you use in other Notebooks, like [Automatic Section Numbering][asn] and [Table of Contents Support][toc], to structure your own little library of references. The best part of this is that any Notebook containing validly formatted [cite commands][cc] can check your external database and find only the items that you have indicated you want to cite. So you only ever have to make the entry once, and your external database can grow large and comprehensive over the course of your academic writing career.
There are several advantages to using an external database over [an internal one][internal database]. The biggest one, which has already been described, is that you will only ever need to create one and you can organize it into sections by using headers and generating [automatic section numbers][asn] and a [table of contents][toc]. These tools will help you to easily find the right [citation ID][cc] for a given source you want to cite. The other major advantage is that an external database is not visible when viewing the Notebook in which you are citing sources and generating a References list. Bibtex databases are not very attractive or readable and you probably won't want one to show up in your finished document. There are [ways to hide internal databases][hiding bibtex cell], but it's convenient not to have to worry about that.
[asn]: #2.4.2.1-Automatic-Section-Numbering
[toc]: #2.4.2.2-Table-of-Contents-Support
[cc]: #5.2-Cite-Commands-and-Citation-IDs
[hiding bibtex cell]: #5.1.2.1-Hiding-Your-Internal-Database
[bibfor]:#5.1.3-Formatting-Bibtex-Entries
### 5.1.2 Internal Bibliographic Databases
Unlike [external bibliographic databases][exd], which span an entire separate Notebook, internal bibliographic databases consist of only one cell within the Notebook in which you are citing sources and compiling a References list. The single cell, like all of the many BibTex cells that can make up an external database, must begin with **`<!--bibtex`** and end with **`-->`** in order to be validly formatted and correctly interpreted by your Notebook's Bibliographic Support. It's probably best to keep this cell at the very end or the very beginning of your Notebook so you always know where it is: because an internal database can only ever consist of one cell, you will need to keep track of that single cell during every step of the research and writing process.
Internal bibliographic databases make more sense when your project is a small one and the list of total sources is short. This is especially convenient if you don't already have a built-up external database. With an internal database you don't have to create and organize a whole separate Notebook, a task that's only useful when you have to keep track of a lot of different material. Additionally, if you want to share your finished Notebook with others in a form that retains its structural validity, you only have to send one Notebook, as opposed to both the project itself and the Notebook that comprises your external bibliographic database. This is especially useful for a group project, where you want to give another reader the ability to edit, not simply read, your References section.
[exd]:#5.1.1-External-Bibliographic-Databases
#### 5.1.2.1 Hiding Your Internal Database
Even though they have some advantages, especially for smaller projects, internal databases have one major drawback: they are not very attractive or polished-looking, and you probably won't want one to appear in your final product. Fortunately, there are two methods for hiding your internal bibliographic database.
While your Notebook's bibliographic support will be able to interpret [correctly formatted BibTex entries][bibfor] in any [kind of cell][cell kind], if you use a [Markdown Cell][md cell] to store your internal bibliographic database, then when you run the cell all of the ugly BibTex formatting will disappear. This is handy, but it also makes the cell very difficult to find, so keep careful track of where your hidden BibTex database is if you're planning to edit it later. If you want your final product to be viewed stably as HTML, then you can make your internal BibTex database inside a [Raw Cell][RC], use the [cell toolbar][] to select "Raw Cell Format", and then select "None" in the toolbar that appears in the corner of your Raw Cell BibTex database. This way, you will still be able to easily find and edit the database when you are working on your Notebook, but others won't be able to see the database when viewing your project in its final form.
[cell toolbar]: #1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar
[bibfor]:#5.1.3-Formatting-Bibtex-Entries
[RC]:#2.3-Raw-Cells
[md cell]: #2.2-Markdown-Cells
[cell kind]: #2.-Different-Kinds-of-Cells
### 5.1.3 Formatting Bibtex Entries
BibTex entries consist of three crucial components: one, the type of source you are citing (a book, article, website, etc.); two, the unique [citation ID][cc] you wish to remember the source by; and three, the fields of information about that source (author, title of work, date of publication, etc.). Below is an example entry, with each of these three components designated clearly:
<pre>
<!--bibtex
@ENTRY TYPE{CITATION ID,
FIELD 1 = {source specific information},
FIELD 2 = {source specific information},
FIELD 3 = {source specific information},
FIELD 4 = {source specific information}
}
-->
</pre>
More comprehensive documentation of what entry types and corresponding sets of required and optional fields BibTex supports can be found in the [Wikipedia article on BibTex][wikibibt].
Below is a section of the external bibliographic database for a fake history paper about the fictional island nation of Calico. (None of the entries contain information about real books or articles):
[cc]: #5.2-Cite-Commands-and-Citation-IDs
[wikibibt]: http://en.wikipedia.org/wiki/BibTeX
<pre>
<!--bibtex
@book{wellfarecut,
title = {Our Greatest Threat: The Rise of Anti-Wellfare Politics in Calico in the 21st Century},
author = {Jacob, Bernadette},
year = {2010},
publisher = {Jupyter University Press}
}
@article{militaryex2,
title = {Rethinking Calican Military Expansion for the New Century},
author = {Collier, Brian F.},
journal = {Modern Politics},
volume = {60},
issue = {25},
pages = {35 - 70},
year = {2012}
}
@article{militaryex1,
title = {Conservative Majority Passes Budget to Grow Military},
author = {Lane, Lois},
journal = {The Daily Calican},
month = {October 19th, 2011},
pages = {15 - 17},
year = {2011}
}
@article{oildrill,
title = {Oil Drilling Off the Coast of Jupyter Approved for Early Next Year},
author = {Marks, Meghan L.},
journal = {The Python Gazette},
month = {December 5th, 2012},
pages = {8 - 9},
year = {2012}
}
@article{rieseinterview,
title = {Interview with Up and Coming Freshman Senator, Alec Riese of Python},
author = {Wilmington, Oliver},
journal = {The Jupyter Times},
month = {November 24th, 2012},
pages = {4 - 7},
year = {2012}
}
@book{calicoww2:1,
title = {Calico and WWII: Untold History},
author = {French, Viola},
year = {1997},
publisher = {Calicia City Free Press}
}
@book{calicoww2:2,
title = {Rebuilding Calico After Japanese Occupation},
author = {Kepps, Milo },
year = {2002},
publisher = {Python Books}
}
-->
</pre>
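The three components can also be pulled out of an entry programmatically. The helper below is an illustrative sketch (it is not part of the Notebook's bibliographic machinery) and only reads the entry type and citation ID from the first line of an entry:

```python
import re

# Illustrative sketch: extract the entry type and citation ID from the
# opening line of a BibTex entry (not the parser the Notebook actually uses).
def parse_entry_header(entry):
    match = re.search(r"@(\w+)\{([^,]+),", entry)
    if match is None:
        return None
    return match.group(1).lower(), match.group(2)

sample = "@book{calicoww2:2,\n  title = {Rebuilding Calico After Japanese Occupation}\n}"
print(parse_entry_header(sample))  # ('book', 'calicoww2:2')
```

Note that citation IDs may themselves contain symbols like `:`, which is why the sketch captures everything up to the first comma.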
## 5.2 Cite Commands and Citation IDs
When you want to cite a bibliographic entry from a database (either internal or external), you must know the citation ID, sometimes called the "key", for that entry. Citation IDs are strings of letters, numbers, and symbols that *you* make up, so they can be any word or combination of words you find easy to remember. Once you've given an entry a citation ID, however, you do need to use that same ID every time you cite that source, so it may behoove you to keep your database organized. This way it will be much easier to locate any given source's entry and its potentially forgotten citation ID.
Once you know the citation ID for a given entry, use the following format to indicate to your Notebook's bibliographic support that you'd like to insert an in-text citation:
<pre>
[](#cite-CITATION ID)
</pre>
This format is the cite command. For example, if you wanted to cite *Rebuilding Calico After Japanese Occupation* listed above, you would use the cite command and the specific citation ID for that source:
<pre>
[](#cite-calicoww2:2)
</pre>
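For a sense of how such a substitution could find its targets, the cite-command format can be matched with a regular expression. This is an illustrative sketch, not the actual implementation behind the "Generate References" button:

```python
import re

# Illustrative pattern for the [](#cite-CITATION ID) format described above.
CITE_RE = re.compile(r"\[\]\(#cite-([^)]+)\)")

text = "Rebuilding Calico took many years [](#cite-calicoww2:2)."
print(CITE_RE.findall(text))  # ['calicoww2:2']
```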
Before clicking the "Generate References" button, your unrendered text might look like this:
<pre>
Rebuilding Calico took many years [](#cite-calicoww2:2).
</pre>
After clicking the "Generate References" button, your unrendered text might look like this:
<pre>
Rebuilding Calico took many years <a name="ref-1"/>[(Kepps, 2002)](#cite-calicoww2:2).
</pre>
and then the text would render as:
>Rebuilding Calico took many years <a name="ref-1"/>[(Kepps, 2002)](#cite-calicoww2:2).
In addition, a cell would be added at the bottom with the following contents:
>#References
><a name="cite-calicoww2:2"/><sup>[^](#ref-1) [^](#ref-2) </sup>Kepps, Milo . 2002. _Rebuilding Calico After Japanese Occupation_.
# 6. Turning Your Jupyter Notebook into a Slideshow
To install slideshow support for your Notebook, go [here](http://nbviewer.ipython.org/github/fperez/nb-slideshow-template/blob/master/install-support.ipynb).
To see a tutorial and example slideshow, go [here](http://www.damian.oquanta.info/posts/make-your-slides-with-ipython.html).
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.plotting.register_matplotlib_converters.html
# Register converters for handling timestamp values in plots
```
<h2>Kaggle Bike Sharing Demand Dataset</h2>
<h4>To download dataset, sign-in and download from this link: https://www.kaggle.com/c/bike-sharing-demand/data</h4>
<br>
Input Features:<br>
['season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']<br>
Target:<br>
['count']<br>
Objective:
You are provided hourly rental data spanning two years.
For this competition, the training set comprises the first 19 days of each month, while the test set runs from the 20th to the end of each month.
You must predict the total count of bikes rented during each hour covered by the test set, using only information available prior to the rental period.
Reference: https://www.kaggle.com/c/bike-sharing-demand/data
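The split rule described above can be sketched in plain Python; `is_train` is a hypothetical helper based on the competition description, not part of any provided kit:

```python
from datetime import datetime

# Assumed split rule from the description above: the first 19 days of each
# month are training data, the 20th onward is test data.
def is_train(timestamp):
    return timestamp.day <= 19

print(is_train(datetime(2011, 1, 19)))  # True
print(is_train(datetime(2011, 1, 20)))  # False
```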
```
columns = ['count', 'season', 'holiday', 'workingday', 'weather', 'temp',
'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']
df = pd.read_csv('train.csv', parse_dates=['datetime'],index_col=0)
df_test = pd.read_csv('test.csv', parse_dates=['datetime'],index_col=0)
df.head()
# We need to convert datetime to numeric for training.
# Let's extract key features into separate numeric columns
def add_features(df):
df['year'] = df.index.year
df['month'] = df.index.month
df['day'] = df.index.day
df['dayofweek'] = df.index.dayofweek
df['hour'] = df.index.hour
# Add New Features
add_features(df)
add_features(df_test)
df.head()
# Need to predict the missing data
plt.title('Rental Count - Gaps')
df['2011-01':'2011-02']['count'].plot()
plt.show()
# Rentals change hourly!
plt.plot(df['2011-01-01']['count'])
plt.xticks(fontsize=14, rotation=45)
plt.xlabel('Date')
plt.ylabel('Rental Count')
plt.title('Hourly Rentals for Jan 01, 2011')
plt.show()
# Seasonal
plt.plot(df['2011-01']['count'])
plt.xticks(fontsize=14, rotation=45)
plt.xlabel('Date')
plt.ylabel('Rental Count')
plt.title('Jan 2011 Rentals (1 month)')
plt.show()
group_hour = df.groupby(['hour'])
average_by_hour = group_hour['count'].mean()
plt.plot(average_by_hour.index,average_by_hour)
plt.xlabel('Hour')
plt.ylabel('Rental Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Average Hourly Rental Count')
# Year to year trend
plt.plot(df['2011']['count'],label='2011')
plt.plot(df['2012']['count'],label='2012')
plt.xticks(fontsize=14, rotation=45)
plt.xlabel('Date')
plt.ylabel('Rental Count')
plt.title('2011 and 2012 Rentals (Year to Year)')
plt.legend()
plt.show()
group_year_month = df.groupby(['year','month'])
average_year_month = group_year_month['count'].mean()
average_year_month
for year in average_year_month.index.levels[0]:
plt.plot(average_year_month[year].index,average_year_month[year],label=year)
plt.legend()
plt.xlabel('Month')
plt.ylabel('Count')
plt.grid(True)
plt.title('Average Monthly Rental Count for 2011, 2012')
plt.show()
group_year_hour = df.groupby(['year','hour'])
average_year_hour = group_year_hour['count'].mean()
for year in average_year_hour.index.levels[0]:
#print (year)
#print(average_year_month[year])
plt.plot(average_year_hour[year].index,average_year_hour[year],label=year)
plt.legend()
plt.xlabel('Hour')
plt.ylabel('Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Average Hourly Rental Count - 2011, 2012')
group_workingday_hour = df.groupby(['workingday','hour'])
average_workingday_hour = group_workingday_hour['count'].mean()
for workingday in average_workingday_hour.index.levels[0]:
#print (year)
#print(average_year_month[year])
plt.plot(average_workingday_hour[workingday].index,average_workingday_hour[workingday],
label=workingday)
plt.legend()
plt.xlabel('Hour')
plt.ylabel('Count')
plt.xticks(np.arange(24))
plt.grid(True)
plt.title('Average Hourly Rental Count by Working Day')
plt.show()
# Let's look at correlation between features and target
df.corr()['count']
# Any relation between temperature and rental count?
plt.scatter(x=df.temp,y=df["count"])
plt.grid(True)
plt.xlabel('Temperature')
plt.ylabel('Count')
plt.title('Temperature vs Count')
plt.show()
# Any relation between humidity and rental count?
plt.scatter(x=df.humidity,y=df["count"],label='Humidity')
plt.grid(True)
plt.xlabel('Humidity')
plt.ylabel('Count')
plt.title('Humidity vs Count')
plt.show()
# Save all data
df.to_csv('bike_all.csv',index=True,index_label='datetime',columns=columns)
```
## Training and Validation Set
### Target Variable as first column followed by input features
### Training, Validation files do not have a column header
```
# Training = 70% of the data
# Validation = 30% of the data
# Randomize the dataset
np.random.seed(5)
l = list(df.index)
np.random.shuffle(l)
df = df.loc[l]
rows = df.shape[0]
train = int(.7 * rows)
test = rows-train
rows, train, test
columns
# Write Training Set
df.iloc[:train].to_csv('bike_train.csv'
,index=False,header=False
,columns=columns)
# Write Validation Set
df.iloc[train:].to_csv('bike_validation.csv'
,index=False,header=False
,columns=columns)
# Test Data has only input features
df_test.to_csv('bike_test.csv',index=True,index_label='datetime')
print(','.join(columns))
# Write Column List
with open('bike_train_column_list.txt','w') as f:
f.write(','.join(columns))
```
#### Naive Bayes
```
#import library
import pandas as pd
#read data
df = pd.read_csv("spam.csv")
#display all data
df
#print full summary
df.info()
#display data first five rows
df.head()
#display data last five rows
df.tail()
#groupby category and describe
df.groupby('Category').describe()
#describe non-spam emails
df[df.Category == 'ham'].describe()
#describe spam emails
df[df.Category == 'spam'].describe()
#check the number of null values in each column
df.isnull().sum()
```
It can be observed that there are no NaN or None values in the data set.
#### Machine learning models understand numerical values
- Need to convert Category and Message into numbers first.
```
#take the Category column and apply a lambda that returns 1 for spam, else 0
#create a new spam column
df['Spam'] = df['Category'].apply(lambda x: 1 if x=='spam' else 0)
# display data first five rows
df.head()
```
- Created a new column named "Spam", encoding each email as 1 (spam) or 0 (ham).
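The same mapping can be expressed without pandas; a minimal sketch on toy labels:

```python
# Minimal sketch of the Category -> 0/1 mapping, shown without pandas.
labels = ["ham", "spam", "ham"]
spam_flags = [1 if c == "spam" else 0 for c in labels]
print(spam_flags)  # [0, 1, 0]
```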
```
#train model using sklearn and train test data
from sklearn.model_selection import train_test_split
#use X, y as input and test_size as the split ratio; returns 4 arrays
#25% test and 75% train
#each run of the cell splits the samples into new train and test sets
X_train, X_test, y_train, y_test = train_test_split(df.Message,df.Spam,test_size=0.25)
#convert a collection of text documents to a matrix of token counts
#find unique words, treat as columns and build matrix
#each column represents one unique word in the data set
from sklearn.feature_extraction.text import CountVectorizer
v = CountVectorizer()
X_train_count = v.fit_transform(X_train.values)
X_train_count.toarray()[:3]
#use in discrete data and have certain frequency to represent
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
#fit method to train model
#X_train_count is the email text converted into a numeric count matrix
model.fit(X_train_count,y_train)
#predict data
emails = [
'You have updated Instagram username.',
'Click on this link to change your account details.'
]
emails_count = v.transform(emails)
model.predict(emails_count)
```
Based on the emails above, the model classifies the first (a routine notice that a username was updated) as not spam, while it flags the second as spam.
```
#the model works on numerical values, not text
#so convert X_test into count vectors before predicting
X_test_count = v.transform(X_test)
#check accuracy of model by calling score method
#score predicts on X_test_count and compares with y_test to compute accuracy
model.score(X_test_count, y_test)
```
An accuracy of roughly 98% shows that this naive Bayes spam filter detects spam emails reliably on the held-out test set.
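A single accuracy number can hide class imbalance (spam is the minority class here), so precision and recall are worth checking alongside `model.score`. A pure-Python sketch on toy labels, independent of the model above:

```python
# Pure-Python sketch: precision and recall from binary labels, to
# complement an accuracy score.
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 0, 0, 1]  # toy labels, not the SMS data above
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

With imbalanced classes, a model that always predicts "ham" can still score high accuracy, which is exactly what these two metrics expose.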
```
import matplotlib.cbook
import warnings
import plotnine
warnings.filterwarnings(module='plotnine*', action='ignore')
warnings.filterwarnings(module='matplotlib*', action='ignore')
%matplotlib inline
```
# Querying SQL (intro)
## Reading in data
In this tutorial, we'll use the mtcars data ([source](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/mtcars.html)) that comes packaged with siuba. This data contains information about 32 cars, like their miles per gallon (`mpg`), and number of cylinders (`cyl`). This data in siuba is a pandas DataFrame.
```
from siuba.data import mtcars
mtcars.head()
```
First, we'll use sqlalchemy's `create_engine` function and the pandas method `to_sql` to copy the data into a sqlite table.
Once we have that, `siuba` can use a class called `LazyTbl` to connect to the table.
```
from sqlalchemy import create_engine
from siuba.sql import LazyTbl
# copy in to sqlite
engine = create_engine("sqlite:///:memory:")
mtcars.to_sql("mtcars", engine, if_exists = "replace")
# connect with siuba
tbl_mtcars = LazyTbl(engine, "mtcars")
tbl_mtcars
```
Notice that `siuba` by default prints a glimpse into the current data, along with some extra information about the database we're connected to. However, in this case, there are more than 5 rows of data. In order to get all of it back as a pandas DataFrame we need to `collect()` it.
## Connecting to existing database
While we used `sqlalchemy.create_engine` to connect to a database in the previous section, `LazyTbl` also accepts a string as its first argument, followed by a table name.
This is shown below, with placeholder variables, like "username" and "password". See this [SqlAlchemy doc](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls) for more.
```python
tbl = LazyTbl(
"postgresql://username:password@localhost:5432/dbname",
"tablename"
)
```
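The placeholder URL above can be assembled from its parts; all of the values below are, of course, hypothetical:

```python
# Hypothetical credentials, matching the placeholders above.
user, password, host, port, dbname = "username", "password", "localhost", 5432, "dbname"
url = f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
print(url)  # postgresql://username:password@localhost:5432/dbname
```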
## Collecting data and previewing queries
```
from siuba import head, collect, show_query
tbl_mtcars >> head(2) >> collect()
tbl_mtcars >> head(2) >> show_query()
```
## Basic queries
A core goal of `siuba` is to make sure most column operations and methods that work on a pandas DataFrame also work with a SQL table. As a result, the examples in these docs also work when applied to SQL.
This is shown below for `filter`, `summarize`, and `mutate`.
```
from siuba import _, filter, select, group_by, summarize, mutate
tbl_mtcars >> filter(_.cyl == 6)
(tbl_mtcars
>> group_by(_.cyl)
>> summarize(avg_mpg = _.mpg.mean())
)
tbl_mtcars >> select(_.mpg, _.cyl, _.endswith('t'))
tbl_mtcars >> \
mutate(feetpg = _.mpg * 5280, inchpg = _.feetpg * 12)
```
Note that the two SQL implementations supported are PostgreSQL and SQLite. Support for window and aggregate functions is currently limited.
## Diving deeper
**TODO**
* using raw SQL
* how methods and function calls are translated
* uses sqlalchemy column objects
## Regular Expressions
A regular expression is a formal method of specifying a text pattern.
More precisely, it is a composition of symbols (characters with special functions) that, grouped together with literal characters, form a sequence: an expression. That expression is interpreted as a rule which reports a match only if an input string satisfies all of its conditions.
```
# import the re (regular expressions) module
# this module provides operations with regular expressions (REs)
import re
# list of search terms
lista_pesquisa = ['informações', 'Negócios']
# text to parse
texto = 'Existem muitos desafios para o Big Data. O primerio deles é a coleta dos dados, pois fala-se aquie de'\
'enormes quantidades sendo geradas em uma taxa maior do que um servidor comum seria capaz de processar e armazenar.'\
'O segundo desafio é justamente o de processar essas informações. Com elas então distribuídas, a aplicação deve ser'\
'capaz de consumir partes das informações e gerar pequenas quantidades de dados processados, que serão calculados em'\
'conjunto depois para criar o resultado final. Outro desafio é a exibição dos resultados, de forma que as informações'\
'estejam disponíveis de forma clara para os tomadores de decisão.'
# basic Data Mining example
for item in lista_pesquisa:
print('Buscando por "%s" em :\n\n"%s"'% (item, texto))
# check whether the search term exists in the text
if re.search(item, texto):
print('\n')
print('Palavra encontrada. \n')
print('\n')
else:
print('\n')
print('Palavra não encontrada. \n')
print('\n')
# term used to split a string
split_term = '@'
frase = 'Qual o domínio de alguém com o e-mail: aluno@gamail.com'
# split the sentence
re.split(split_term, frase)
def encontrar_padrao(lista, frase):
for item in lista:
print('Pesquisa na frase: %r'% item)
print(re.findall(item, frase))
print('\n')
frase_padrao = 'zLzL..zzzLLL...zLLLzLLL...LzLz..dzzzzz...zLLLLL'
lista_padroes = [ 'zL*', # z followed by zero or more L
'zL+', # z followed by one or more L
'zL?', # z followed by zero or one L
'zL{3}', # z followed by exactly three L
'zL{2,3}', # z followed by two to three L (no space inside the braces, or re treats it as literal text)
]
encontrar_padrao(lista_padroes, frase_padrao)
frase = 'Esta é uma string com pontuação. Isso pode ser um problema quando fazemos mineração de dados em busca'\
'de padrões! Não seria melhor retirar os sinais ao fim de cada frase?'
# The expression [^!.? ]+ matches runs of characters that are not
# punctuation (!, ., ?) or spaces; the plus sign (+) requires the item
# to appear at least once. In other words: return only the words in
# the sentence
re.findall('[^!.? ]+', frase)
frase = 'Está é uma frase do exemplo. Vamos verificar quais padrões serâo encontradas.'
lista_padroes = ['[a-z]+', # sequence of lowercase letters
'[A-Z]+', # sequence of uppercase letters
'[a-zA-Z]+', # sequence of lowercase and uppercase letters
'[A-Z][a-z]+'] # one uppercase letter followed by lowercase letters
encontrar_padrao(lista_padroes, frase)
```
## Escape codes
Specific escape codes can be used to find patterns in data, such as digits, non-digits, whitespace, and so on.
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
```
# The r prefix before a regular expression keeps the language from
# pre-processing the string. Place the r modifier (from English 'raw')
# immediately before the quotes
r'\b'
'\b'
frase = 'Está é uma string com alguns números, como 1287 e um símbolo #hashtag'
lista_padroes = [r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
encontrar_padrao(lista_padroes, frase)
```
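Building on the `re.split('@', frase)` example earlier in this section, named groups can capture the user and domain of an e-mail address in a single pass; the pattern below is an illustrative sketch:

```python
import re

# Named groups give each captured piece a readable label (illustrative pattern).
pattern = re.compile(r"(?P<user>[\w.]+)@(?P<domain>[\w.]+)")
m = pattern.search("Qual o domínio de alguém com o e-mail: aluno@gamail.com")
print(m.group("user"), m.group("domain"))  # aluno gamail.com
```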
<a href="https://colab.research.google.com/github/MoRebaie/Sequences-Time-Series-Prediction-in-Tensorflow/blob/master/Course_4_Week_4_Lesson_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install tf-nightly-2.0-preview
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
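# Plain-Python sketch (an illustration only, not used by the model): the
# windowing above produces (input window, one-step-shifted target) pairs.
def windows_sketch(seq, window_size):
    pairs = []
    for i in range(len(seq) - window_size):
        chunk = seq[i:i + window_size + 1]
        pairs.append((chunk[:-1], chunk[1:]))
    return pairs
# e.g. windows_sketch([1, 2, 3, 4], 2) -> [([1, 2], [2, 3]), ([2, 3], [3, 4])]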
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
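# Sanity check (plain Python, illustration only): the scheduler above
# multiplies the learning rate by a factor of 10 every 20 epochs,
# starting from 1e-8.
def lr_at(epoch):
    return 1e-8 * 10 ** (epoch / 20)
# lr_at(0) -> 1e-8; lr_at(20) is ~1e-7; lr_at(40) is ~1e-6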
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
```
# Protein-ligand complex MD Setup tutorial using BioExcel Building Blocks (biobb)
### --***AmberTools package version***--
**Based on the [MDWeb](http://mmb.irbbarcelona.org/MDWeb2/) [Amber FULL MD Setup tutorial](https://mmb.irbbarcelona.org/MDWeb2/help.php?id=workflows#AmberWorkflowFULL)**
***
This tutorial aims to illustrate the process of **setting up a simulation system** containing a **protein in complex with a ligand**, step by step, using the **BioExcel Building Blocks library (biobb)** wrapping the **AmberTools** utility from the **AMBER package**. The particular example used is the **T4 lysozyme** protein (PDB code [3HTB](https://www.rcsb.org/structure/3HTB)) with two residue modifications ***L99A/M102Q*** complexed with the small ligand **2-propylphenol** (3-letter code [JZ4](https://www.rcsb.org/ligand/JZ4)).
***
## Settings
### Biobb modules used
- [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases.
- [biobb_amber](https://github.com/bioexcel/biobb_amber): Tools to setup and run Molecular Dynamics simulations with AmberTools.
- [biobb_structure_utils](https://github.com/bioexcel/biobb_structure_utils): Tools to modify or extract information from a PDB structure file.
- [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories.
- [biobb_chemistry](https://github.com/bioexcel/biobb_chemistry): Tools to perform chemical conversions.
### Auxiliary libraries used
- [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels): Enables a Jupyter Notebook or JupyterLab application in one conda environment to access kernels for Python, R, and other languages found in other environments.
- [nglview](http://nglviewer.org/#nglview): Jupyter/IPython widget to interactively view molecular structures and trajectories in notebooks.
- [ipywidgets](https://github.com/jupyter-widgets/ipywidgets): Interactive HTML widgets for Jupyter notebooks and the IPython kernel.
- [plotly](https://plot.ly/python/offline/): Python interactive graphing library integrated in Jupyter notebooks.
- [simpletraj](https://github.com/arose/simpletraj): Lightweight coordinate-only trajectory reader based on code from GROMACS, MDAnalysis and VMD.
### Conda Installation and Launch
```console
git clone https://github.com/bioexcel/biobb_wf_amber_md_setup.git
cd biobb_wf_amber_md_setup
conda env create -f conda_env/environment.yml
conda activate biobb_AMBER_MDsetup_tutorials
jupyter-nbextension enable --py --user widgetsnbextension
jupyter-nbextension enable --py --user nglview
jupyter-notebook biobb_wf_amber_md_setup/notebooks/mdsetup_lig/biobb_amber_complex_setup_notebook.ipynb
```
***
## Pipeline steps
1. [Input Parameters](#input)
2. [Fetching PDB Structure](#fetch)
3. [Preparing PDB file for AMBER](#pdb4amber)
4. [Create ligand system topology](#ligtop)
5. [Create Protein-Ligand Complex System Topology](#top)
6. [Energetically Minimize the Structure](#minv)
7. [Create Solvent Box and Solvating the System](#box)
8. [Adding Ions](#ions)
9. [Energetically Minimize the System](#min)
10. [Heating the System](#heating)
11. [Equilibrate the System (NVT)](#nvt)
12. [Equilibrate the System (NPT)](#npt)
13. [Free Molecular Dynamics Simulation](#free)
14. [Post-processing and Visualizing Resulting 3D Trajectory](#post)
15. [Output Files](#output)
16. [Questions & Comments](#questions)
***
<img src="https://bioexcel.eu/wp-content/uploads/2019/04/Bioexcell_logo_1080px_transp.png" alt="Bioexcel2 logo"
title="Bioexcel2 logo" width="400" />
***
<a id="input"></a>
## Input parameters
**Input parameters** needed:
- **pdbCode**: PDB code of the protein structure (e.g. 3HTB)
- **ligandCode**: 3-letter code of the ligand (e.g. JZ4)
- **mol_charge**: Charge of the ligand (e.g. 0)
```
import nglview
import ipywidgets
import plotly
from plotly import subplots
import plotly.graph_objs as go
pdbCode = "3htb"
ligandCode = "JZ4"
mol_charge = 0
```
<a id="fetch"></a>
***
## Fetching PDB structure
Downloading **PDB structure** with the **protein molecule** from the RCSB PDB database.<br>
Alternatively, a **PDB file** can be used as starting structure. <br>
Stripping from the **downloaded structure** any **crystallographic water** molecule or **heteroatom**. <br>
***
**Building Blocks** used:
- [pdb](https://biobb-io.readthedocs.io/en/latest/api.html#module-api.pdb) from **biobb_io.api.pdb**
- [remove_pdb_water](https://biobb-structure-utils.readthedocs.io/en/latest/utils.html#module-utils.remove_pdb_water) from **biobb_structure_utils.utils.remove_pdb_water**
- [remove_ligand](https://biobb-structure-utils.readthedocs.io/en/latest/utils.html#module-utils.remove_ligand) from **biobb_structure_utils.utils.remove_ligand**
***
```
# Import module
from biobb_io.api.pdb import pdb
# Create properties dict and inputs/outputs
downloaded_pdb = pdbCode+'.pdb'
prop = {
'pdb_code': pdbCode,
'filter': False
}
#Create and launch bb
pdb(output_pdb_path=downloaded_pdb,
properties=prop)
# Show protein
view = nglview.show_structure_file(downloaded_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.1', selection='water')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ligand')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ion')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
# Import module
from biobb_structure_utils.utils.remove_pdb_water import remove_pdb_water
# Create properties dict and inputs/outputs
nowat_pdb = pdbCode+'.nowat.pdb'
#Create and launch bb
remove_pdb_water(input_pdb_path=downloaded_pdb,
output_pdb_path=nowat_pdb)
# Import module
from biobb_structure_utils.utils.remove_ligand import remove_ligand
# Removing PO4 ligands:
# Create properties dict and inputs/outputs
nopo4_pdb = pdbCode+'.noPO4.pdb'
prop = {
'ligand' : 'PO4'
}
#Create and launch bb
remove_ligand(input_structure_path=nowat_pdb,
output_structure_path=nopo4_pdb,
properties=prop)
# Removing BME ligand:
# Create properties dict and inputs/outputs
nobme_pdb = pdbCode+'.noBME.pdb'
prop = {
'ligand' : 'BME'
}
#Create and launch bb
remove_ligand(input_structure_path=nopo4_pdb,
output_structure_path=nobme_pdb,
properties=prop)
```
## Visualizing 3D structure
```
# Show protein
view = nglview.show_structure_file(nobme_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='hetero')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
```
<a id="pdb4amber"></a>
***
## Preparing PDB file for AMBER
Before starting a **protein MD setup**, it is always strongly recommended to take a look at the initial structure and try to identify important **properties** and also possible **issues**. These properties and issues can be serious, as for example the definition of **disulfide bridges**, the presence of **non-standard amino acids** or **ligands**, or **missing residues**. Other **properties** and **issues** might not be so serious, but they still need to be addressed before starting the **MD setup process**. **Missing hydrogen atoms**, presence of **alternate atomic location indicators** or **inserted residue codes** (see [PDB file format specification](https://www.wwpdb.org/documentation/file-format-content/format33/sect9.html#ATOM)) are examples of these not so crucial characteristics. Please visit the [AMBER tutorial: Building Protein Systems in Explicit Solvent](http://ambermd.org/tutorials/basic/tutorial7/index.php) for more examples. **AmberTools** utilities from the **AMBER MD package** contain a tool able to analyse **PDB files** and clean them for further usage, especially with the **AmberTools LEaP program**: the **pdb4amber tool**. The next step of the workflow is running this tool to analyse our **input PDB structure**.<br>
For the particular **T4 lysozyme** example, the most important property identified by the **pdb4amber** utility is the presence of **disulfide bridges** in the structure. These are marked by changing the residue names **from CYS to CYX**, which is the code that **AMBER force fields** use to distinguish between cysteines forming or not forming **disulfide bridges**. This will be used in the following step to correctly form a **bond** between these cysteine residues.
We invite you to check what the tool does with different, more complex structures (e.g. PDB code [6N3V](https://www.rcsb.org/structure/6N3V)).
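To see the renaming for yourself, here is a minimal sketch that counts residues with a given name in a PDB file, relying on the fixed-column PDB format (the helper function and its usage below are illustrative, not part of the workflow):

```python
# Minimal sketch: count residues with a given name in a PDB file,
# e.g. CYX residues (disulfide-bonded cysteines) after pdb4amber.
def count_resnames(pdb_path, resname):
    residues = set()
    with open(pdb_path) as handle:
        for line in handle:
            # Fixed PDB columns: residue name 18-20, chain 22, residue number 23-26
            if line.startswith(("ATOM", "HETATM")) and line[17:20].strip() == resname:
                residues.add((line[21], line[22:26]))
    return len(residues)

# Usage (once the pdb4amber step below has produced its output file):
# count_resnames('structure.pdb4amber.pdb', 'CYX')
```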
***
**Building Blocks** used:
- [pdb4amber_run](https://biobb-amber.readthedocs.io/en/latest/pdb4amber.html#pdb4amber-pdb4amber-run-module) from **biobb_amber.pdb4amber.pdb4amber_run**
***
```
# Import module
from biobb_amber.pdb4amber.pdb4amber_run import pdb4amber_run
# Create inputs/outputs
output_pdb4amber_path = 'structure.pdb4amber.pdb'
# Create and launch bb
pdb4amber_run(input_pdb_path=nobme_pdb,
output_pdb_path=output_pdb4amber_path)
```
<a id="ligtop"></a>
***
## Create ligand system topology
**Building AMBER topology** corresponding to the ligand structure.<br>
Force field used in this tutorial step is **amberGAFF**: [General AMBER Force Field](http://ambermd.org/antechamber/gaff.html), designed for rational drug design.<br>
- [Step 1](#ligandTopologyStep1): Extract **ligand structure**.
- [Step 2](#ligandTopologyStep2): Add **hydrogen atoms** if missing.
- [Step 3](#ligandTopologyStep3): **Energetically minimize the system** with the new hydrogen atoms.
- [Step 4](#ligandTopologyStep4): Generate **ligand topology** (parameters).
***
**Building Blocks** used:
- [ExtractHeteroAtoms](https://biobb-structure-utils.readthedocs.io/en/latest/utils.html#module-utils.extract_heteroatoms) from **biobb_structure_utils.utils.extract_heteroatoms**
- [ReduceAddHydrogens](https://biobb-chemistry.readthedocs.io/en/latest/ambertools.html#module-ambertools.reduce_add_hydrogens) from **biobb_chemistry.ambertools.reduce_add_hydrogens**
- [BabelMinimize](https://biobb-chemistry.readthedocs.io/en/latest/babelm.html#module-babelm.babel_minimize) from **biobb_chemistry.babelm.babel_minimize**
- [AcpypeParamsAC](https://biobb-chemistry.readthedocs.io/en/latest/acpype.html#module-acpype.acpype_params_ac) from **biobb_chemistry.acpype.acpype_params_ac**
***
<a id="ligandTopologyStep1"></a>
### Step 1: Extract **Ligand structure**
```
# Create Ligand system topology, STEP 1
# Extracting Ligand JZ4
# Import module
from biobb_structure_utils.utils.extract_heteroatoms import extract_heteroatoms
# Create properties dict and inputs/outputs
ligandFile = ligandCode+'.pdb'
prop = {
'heteroatoms' : [{"name": "JZ4"}]
}
extract_heteroatoms(input_structure_path=output_pdb4amber_path,
output_heteroatom_path=ligandFile,
properties=prop)
```
<a id="ligandTopologyStep2"></a>
### Step 2: Add **hydrogen atoms**
```
# Create Ligand system topology, STEP 2
# Reduce_add_hydrogens: add Hydrogen atoms to a small molecule (using Reduce tool from Ambertools package)
# Import module
from biobb_chemistry.ambertools.reduce_add_hydrogens import reduce_add_hydrogens
# Create prop dict and inputs/outputs
output_reduce_h = ligandCode+'.reduce.H.pdb'
prop = {
'nuclear' : 'true'
}
# Create and launch bb
reduce_add_hydrogens(input_path=ligandFile,
output_path=output_reduce_h,
properties=prop)
```
<a id="ligandTopologyStep3"></a>
### Step 3: **Energetically minimize the system** with the new hydrogen atoms.
```
# Create Ligand system topology, STEP 3
# Babel_minimize: Structure energy minimization of a small molecule after being modified adding hydrogen atoms
# Import module
from biobb_chemistry.babelm.babel_minimize import babel_minimize
# Create prop dict and inputs/outputs
output_babel_min = ligandCode+'.H.min.mol2'
prop = {
'method' : 'sd',
'criteria' : '1e-10',
'force_field' : 'GAFF'
}
# Create and launch bb
babel_minimize(input_path=output_reduce_h,
output_path=output_babel_min,
properties=prop)
```
### Visualizing 3D structures
Visualizing the small molecule generated **PDB structures** using **NGL**:
- **Original Ligand Structure** (Left)
- **Ligand Structure with hydrogen atoms added** (with Reduce program) (Center)
- **Ligand Structure with hydrogen atoms added** (with Reduce program), **energy minimized** (with Open Babel) (Right)
```
# Show different structures generated (for comparison)
view1 = nglview.show_structure_file(ligandFile)
view1.add_representation(repr_type='ball+stick')
view1._remote_call('setSize', target='Widget', args=['350px','400px'])
view1.camera='orthographic'
view1
view2 = nglview.show_structure_file(output_reduce_h)
view2.add_representation(repr_type='ball+stick')
view2._remote_call('setSize', target='Widget', args=['350px','400px'])
view2.camera='orthographic'
view2
view3 = nglview.show_structure_file(output_babel_min)
view3.add_representation(repr_type='ball+stick')
view3._remote_call('setSize', target='Widget', args=['350px','400px'])
view3.camera='orthographic'
view3
ipywidgets.HBox([view1, view2, view3])
```
<a id="ligandTopologyStep4"></a>
### Step 4: Generate **ligand topology** (parameters).
```
# Create Ligand system topology, STEP 4
# Acpype_params_ac: Generation of topologies for AMBER with ACPype
# Import module
from biobb_chemistry.acpype.acpype_params_ac import acpype_params_ac
# Create prop dict and inputs/outputs
output_acpype_inpcrd = ligandCode+'params.inpcrd'
output_acpype_frcmod = ligandCode+'params.frcmod'
output_acpype_lib = ligandCode+'params.lib'
output_acpype_prmtop = ligandCode+'params.prmtop'
output_acpype = ligandCode+'params'
prop = {
'basename' : output_acpype,
'charge' : mol_charge
}
# Create and launch bb
acpype_params_ac(input_path=output_babel_min,
output_path_inpcrd=output_acpype_inpcrd,
output_path_frcmod=output_acpype_frcmod,
output_path_lib=output_acpype_lib,
output_path_prmtop=output_acpype_prmtop,
properties=prop)
```
<a id="top"></a>
***
## Create protein-ligand complex system topology
**Building AMBER topology** corresponding to the protein-ligand complex structure.<br>
*IMPORTANT: the previous pdb4amber building block is changing the proper cysteines residue naming in the PDB file from CYS to CYX so that this step can automatically identify and add the disulfide bonds to the system topology.*<br>
The **force field** used in this tutorial is [**ff14SB**](https://doi.org/10.1021/acs.jctc.5b00255) for the **protein**, an evolution of the **ff99SB** force field with improved accuracy of protein side chain and backbone parameters; and the [**gaff**](https://doi.org/10.1002/jcc.20035) force field for the small molecule. The **water model** used in this tutorial is [**tip3p**](https://doi.org/10.1021/jp003020w).<br>
Adding **side chain atoms** and **hydrogen atoms** if missing. Forming **disulfide bridges** according to the info added in the previous step. <br>
*NOTE: From this point on, the **protein-ligand complex structure and topology** generated can be used in a regular MD setup.*
Generating three output files:
- **AMBER structure** (PDB file)
- **AMBER topology** (AMBER [Parmtop](https://ambermd.org/FileFormats.php#topology) file)
- **AMBER coordinates** (AMBER [Coordinate/Restart](https://ambermd.org/FileFormats.php#restart) file)
***
**Building Blocks** used:
- [leap_gen_top](https://biobb-amber.readthedocs.io/en/latest/leap.html#module-leap.leap_gen_top) from **biobb_amber.leap.leap_gen_top**
***
```
# Import module
from biobb_amber.leap.leap_gen_top import leap_gen_top
# Create prop dict and inputs/outputs
output_pdb_path = 'structure.leap.pdb'
output_top_path = 'structure.leap.top'
output_crd_path = 'structure.leap.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"]
}
# Create and launch bb
leap_gen_top(input_pdb_path=output_pdb4amber_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_pdb_path,
output_top_path=output_top_path,
output_crd_path=output_crd_path,
properties=prop)
```
### Visualizing 3D structure
Visualizing the **PDB structure** using **NGL**. <br>
Try to identify the differences between the structure generated for the **system topology** and the **original one** (e.g. hydrogen atoms).
```
import nglview
import ipywidgets
# Show protein
view = nglview.show_structure_file(output_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', opacity='0.4')
view.add_representation(repr_type='ball+stick', selection='protein')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='JZ4')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
```
<a id="minv"></a>
## Energetically minimize the structure
**Energetically minimize** the **protein-ligand complex structure** (in vacuo) using the **sander tool** from the **AMBER MD package**. This step is **relaxing the structure**, usually **constrained**, especially when coming from an X-ray **crystal structure**. <br/>
The **minimization process** is done in two steps:
- [Step 1](#minv_1): **Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.
- [Step 2](#minv_2): **System** minimization, applying **position restraints** only to the **small molecule**.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_minout) from **biobb_amber.process.process_minout**
***
<a id="minv_1"></a>
### Step 1: Minimize Hydrogens
**Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_h_min_traj_path = 'sander.h_min.x'
output_h_min_rst_path = 'sander.h_min.rst'
output_h_min_log_path = 'sander.h_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'ntr' : 1,
'restraintmask' : '\":*&!@H=\"',
'restraint_wt' : 50.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_crd_path,
input_ref_path=output_crd_path,
output_traj_path=output_h_min_traj_path,
output_rst_path=output_h_min_rst_path,
output_log_path=output_h_min_log_path,
properties=prop)
```
### Checking Energy Minimization results
Checking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
```
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_h_min_dat_path = 'sander.h_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_h_min_log_path,
output_dat_path=output_h_min_dat_path,
properties=prop)
# Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_h_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
```
<a id="minv_2"></a>
### Step 2: Minimize the system
**System** minimization, with **restraints** only on the **small molecule**, to avoid a possible change in position due to **protein repulsion**.
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_n_min_traj_path = 'sander.n_min.x'
output_n_min_rst_path = 'sander.n_min.rst'
output_n_min_log_path = 'sander.n_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'restraintmask' : '\":' + ligandCode + '\"',
'restraint_wt' : 500.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_traj_path=output_n_min_traj_path,
output_rst_path=output_n_min_rst_path,
output_log_path=output_n_min_log_path,
properties=prop)
```
### Checking Energy Minimization results
Checking **energy minimization** results. Plotting **potential energy** by time during the **minimization process**.
```
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_n_min_dat_path = 'sander.n_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_n_min_log_path,
output_dat_path=output_n_min_dat_path,
properties=prop)
# Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_n_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
```
<a id="box"></a>
***
## Create solvent box and solvating the system
Define the unit cell for the **protein structure MD system** to fill it with water molecules.<br>
A **truncated octahedron box** is used to define the unit cell, with a **distance from the protein to the box edge of 9.0 Angstroms**. <br>
The solvent type used is the default **TIP3P** water model, a generic 3-point solvent model.
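Why a truncated octahedron? For the same minimum distance between periodic images, it encloses roughly 77% of the volume of the corresponding cubic box (a rhombic dodecahedron roughly 71%), so fewer water molecules are needed. A toy comparison using these standard approximate geometric ratios (illustrative only, not a workflow step):

```python
# Approximate box volume for a given minimum image distance d,
# using rough volume ratios relative to a cubic box (illustrative only).
BOX_VOLUME_RATIO = {
    "cubic": 1.0,
    "truncated_octahedron": 0.77,
    "rhombic_dodecahedron": 0.71,
}

def box_volume(d, box_type="cubic"):
    """Approximate box volume (in d's units cubed) for minimum image distance d."""
    return BOX_VOLUME_RATIO[box_type] * d**3

# A truncated octahedron holds ~77% of the waters of a cube with the same d:
box_volume(60.0, "truncated_octahedron") / box_volume(60.0)
```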
***
**Building Blocks** used:
- [amber_to_pdb](https://biobb-amber.readthedocs.io/en/latest/ambpdb.html#module-ambpdb.amber_to_pdb) from **biobb_amber.ambpdb.amber_to_pdb**
- [leap_solvate](https://biobb-amber.readthedocs.io/en/latest/leap.html#module-leap.leap_solvate) from **biobb_amber.leap.leap_solvate**
***
### Getting minimized structure
Getting the result of the **energetic minimization** and converting it to **PDB format** to be then used as input for the **water box generation**. <br/>This is achieved by converting from **AMBER topology + coordinates** files to a **PDB file** using the **ambpdb** tool from the **AMBER MD package**.
```
# Import module
from biobb_amber.ambpdb.amber_to_pdb import amber_to_pdb
# Create prop dict and inputs/outputs
output_ambpdb_path = 'structure.ambpdb.pdb'
# Create and launch bb
amber_to_pdb(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_pdb_path=output_ambpdb_path)
```
### Create water box
Define the **unit cell** for the **protein-ligand complex structure MD system** and fill it with **water molecules**.<br/>
```
# Import module
from biobb_amber.leap.leap_solvate import leap_solvate
# Create prop dict and inputs/outputs
output_solv_pdb_path = 'structure.solv.pdb'
output_solv_top_path = 'structure.solv.parmtop'
output_solv_crd_path = 'structure.solv.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"water_type": "TIP3PBOX",
"distance_to_molecule": "9.0",
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_solvate(input_pdb_path=output_ambpdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_solv_pdb_path,
output_top_path=output_solv_top_path,
output_crd_path=output_solv_crd_path,
properties=prop)
```
<a id="ions"></a>
## Adding ions
**Neutralizing** the system and adding an additional **ionic concentration** using the **leap tool** from the **AMBER MD package**. <br/>
Using **Sodium (Na+)** and **Chloride (Cl-)** counterions and an **additional ionic concentration** of 150mM.
***
**Building Blocks** used:
- [leap_add_ions](https://biobb-amber.readthedocs.io/en/latest/leap.html#module-leap.leap_add_ions) from **biobb_amber.leap.leap_add_ions**
***
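As a back-of-the-envelope check, the number of ion pairs corresponding to a molar concentration follows directly from the box volume. A small sketch (the 80 Å box edge is a made-up example value; LEaP computes the real count internally):

```python
# Estimate how many Na+/Cl- pairs correspond to a target molar concentration
# in a given box volume. Illustrative only; LEaP does this internally.
AVOGADRO = 6.022e23  # particles per mole

def ion_pairs(concentration_molar, box_volume_angstrom3):
    """Ion pairs for a concentration (mol/L) in a volume given in cubic Angstroms."""
    volume_liters = box_volume_angstrom3 * 1e-27  # 1 A^3 = 1e-27 L
    return round(concentration_molar * AVOGADRO * volume_liters)

# e.g. a hypothetical (80 A)^3 water box at 150 mM:
ion_pairs(0.150, 80**3)
```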
```
# Import module
from biobb_amber.leap.leap_add_ions import leap_add_ions
# Create prop dict and inputs/outputs
output_ions_pdb_path = 'structure.ions.pdb'
output_ions_top_path = 'structure.ions.parmtop'
output_ions_crd_path = 'structure.ions.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"neutralise" : True,
"positive_ions_type": "Na+",
"negative_ions_type": "Cl-",
"ionic_concentration" : 150, # 150mM
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_add_ions(input_pdb_path=output_solv_pdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_ions_pdb_path,
output_top_path=output_ions_top_path,
output_crd_path=output_ions_crd_path,
properties=prop)
```
### Visualizing 3D structure
Visualizing the **protein-ligand complex system** with the newly added **solvent box** and **counterions** using **NGL**.<br> Note the **truncated octahedron box** filled with **water molecules** surrounding the **protein structure**, as well as the randomly placed **positive** (Na+, blue) and **negative** (Cl-, gray) **counterions**.
```
# Show protein
view = nglview.show_structure_file(output_ions_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein')
view.add_representation(repr_type='ball+stick', selection='solvent')
view.add_representation(repr_type='spacefill', selection='Cl- Na+')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
```
<a id="min"></a>
## Energetically minimize the system
**Energetically minimize** the **system** (protein structure + ligand + solvent + ions) using the **sander tool** from the **AMBER MD package**. **Restraining heavy atoms** with a force constant of 15 Kcal/mol.$Å^{2}$ to their initial positions.
- [Step 1](#emStep1): Energetically minimize the **system** through 500 minimization cycles.
- [Step 2](#emStep2): Checking **energy minimization** results. Plotting energy by time during the **minimization** process.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_minout) from **biobb_amber.process.process_minout**
***
<a id="emStep1"></a>
### Step 1: Running Energy Minimization
The **minimization** type of the **simulation_type property** contains the main default parameters to run an **energy minimization**:
- imin = 1 ; Minimization flag, perform an energy minimization.
- maxcyc = 500; The maximum number of cycles of minimization.
- ntb = 1; Periodic boundaries: constant volume.
- ntmin = 2; Minimization method: steepest descent.
In this particular example, the method used to run the **energy minimization** is the default **steepest descent**, with a **maximum number of 500 cycles** and **periodic conditions**.
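For orientation, these defaults correspond to a sander input (*mdin*) file along the following lines (a sketch only; the building block writes the actual file, and the cell below overrides **maxcyc** and adds restraints):

```
Minimization
&cntrl
  imin=1,     ! perform an energy minimization
  maxcyc=500, ! maximum number of minimization cycles
  ntb=1,      ! periodic boundaries at constant volume
  ntmin=2,    ! steepest descent method
&end
```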
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_min_traj_path = 'sander.min.x'
output_min_rst_path = 'sander.min.rst'
output_min_log_path = 'sander.min.log'
prop = {
"simulation_type" : "minimization",
"mdin" : {
'maxcyc' : 300, # Reducing the number of minimization steps for the sake of time
'ntr' : 1, # Overwriting restraint parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
'restraint_wt' : 15.0 # With a force constant of 15 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_ions_crd_path,
input_ref_path=output_ions_crd_path,
output_traj_path=output_min_traj_path,
output_rst_path=output_min_rst_path,
output_log_path=output_min_log_path,
properties=prop)
```
<a id="emStep2"></a>
### Step 2: Checking Energy Minimization results
Checking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
```
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_dat_path = 'sander.min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_min_log_path,
output_dat_path=output_dat_path,
properties=prop)
# Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
```
<a id="heating"></a>
## Heating the system
**Warming up** the **prepared system** using the **sander tool** from the **AMBER MD package**. Going from 0 K to the desired **temperature**, in this particular example, 300K. **Solute atoms restrained** (force constant of 10 Kcal/mol.$Å^{2}$). Length: 5 ps.
***
- [Step 1](#heatStep1): Warming up the **system** through 2500 MD steps.
- [Step 2](#heatStep2): Checking results for the **system warming up**. Plotting **temperature** along time during the **heating** process.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_mdout) from **biobb_amber.process.process_mdout**
***
<a id="heatStep1"></a>
### Step 1: Warming up the system
The **heat** type of the **simulation_type property** contains the main default parameters to run a **system warming up**:
- imin = 0; Run MD (no minimization)
- ntx = 5; Read initial coords and vels from restart file
- cut = 10.0; Cutoff for non bonded interactions in Angstroms
- ntr = 0; No restrained atoms
- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms
- ntf = 2; Bond interactions involving H omitted
- ntt = 3; Constant temperature using Langevin dynamics
- ig = -1; Seed for pseudo-random number generator
- ioutfm = 1; Write trajectory in netcdf format
- iwrap = 1; Wrap coords into primary box
- nstlim = 5000; Number of MD steps
- dt = 0.002; Time step (in ps)
- tempi = 0.0; Initial temperature (0 K)
- temp0 = 300.0; Final temperature (300 K)
- irest = 0; No restart from previous simulation
- ntb = 1; Periodic boundary conditions at constant volume
- gamma_ln = 1.0; Collision frequency for Langevin dynamics (in 1/ps)
In this particular example, the **heating** of the system is done in **2500 steps** (5ps) and is going **from 0K to 300K** (note that the number of steps has been reduced in this tutorial, for the sake of time).
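As a quick sanity check on that schedule (simple arithmetic, not a workflow step), 2500 steps at a 2 fs time step give the 5 ps mentioned, and a 0 K to 300 K ramp over that window implies a heating rate of 60 K/ps:

```python
# Simulated time and heating rate implied by the heating parameters above.
nstlim = 2500              # number of MD steps (as set in the next cell)
dt = 0.002                 # time step in ps (default used by the "heat" preset)
tempi, temp0 = 0.0, 300.0  # initial / target temperature in K

sim_time_ps = nstlim * dt                      # 5 ps of simulated time
heating_rate = (temp0 - tempi) / sim_time_ps   # 60 K per ps
```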
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_heat_traj_path = 'sander.heat.netcdf'
output_heat_rst_path = 'sander.heat.rst'
output_heat_log_path = 'sander.heat.log'
prop = {
"simulation_type" : "heat",
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
        'ntr' : 1,                        # Overwriting restraint parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
'restraint_wt' : 10.0 # With a force constant of 10 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_min_rst_path,
input_ref_path=output_min_rst_path,
output_traj_path=output_heat_traj_path,
output_rst_path=output_heat_rst_path,
output_log_path=output_heat_log_path,
properties=prop)
```
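Conceptually, the `mdin` dictionary passed above is merged over the `heat` defaults listed earlier and rendered into a sander `&cntrl` namelist. The helper below is a hypothetical sketch of that merge, for illustration only — it is not biobb's actual implementation:

```python
# Sketch: how "mdin" overrides combine with the simulation_type defaults.
# The defaults are copied from the "heat" parameter list above.
heat_defaults = {
    'imin': 0, 'ntx': 5, 'cut': 10.0, 'ntr': 0, 'ntc': 2, 'ntf': 2,
    'ntt': 3, 'ig': -1, 'ioutfm': 1, 'iwrap': 1, 'nstlim': 5000,
    'dt': 0.002, 'tempi': 0.0, 'temp0': 300.0, 'irest': 0, 'ntb': 1,
    'gamma_ln': 1.0,
}

def build_mdin(defaults, overrides):
    """Merge user overrides over the defaults and render a &cntrl namelist."""
    merged = {**defaults, **overrides}  # overrides win on key collisions
    body = ',\n'.join(f"  {k}={v}" for k, v in merged.items())
    return f"&cntrl\n{body},\n&end"

mdin_text = build_mdin(heat_defaults, {'nstlim': 2500, 'ntr': 1})
```

Rendering `mdin_text` shows `nstlim=2500` and `ntr=1` replacing the defaults, while untouched keys such as `temp0=300.0` pass through unchanged.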
<a id="heatStep2"></a>
### Step 2: Checking results from the system warming up
Checking **system warming up** output. Plotting **temperature** along time during the **heating process**.
```
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_heat_path = 'sander.md.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_heat_log_path,
output_dat_path=output_dat_heat_path,
properties=prop)
# Read data from file, skipping comment lines and filtering out
# spurious temperature values above 1000 K
with open(output_dat_heat_path, 'r') as energy_file:
    x, y = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]))
            for line in energy_file
            if not line.startswith(("#", "@"))
            if float(line.split()[1]) < 1000
        ])
    )
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Heating process",
xaxis=dict(title = "Heating Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
```
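The two-column parsing pattern used above reappears in the NVT, NPT and free MD analysis steps below; it could be factored into a small reusable helper (hypothetical, not part of biobb):

```python
def read_dat_columns(path, max_value=None):
    """Read the first two whitespace-separated numeric columns from a
    sander/process_mdout .dat file, skipping '#'/'@' comment lines and
    optionally dropping rows whose second column exceeds max_value."""
    xs, ys = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith(('#', '@')) or not line.strip():
                continue
            t, v = map(float, line.split()[:2])
            if max_value is None or v < max_value:
                xs.append(t)
                ys.append(v)
    return xs, ys
```

With it, the parsing in the plotting cells reduces to, e.g., `x, y = read_dat_columns(output_dat_heat_path, max_value=1000)`.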
<a id="nvt"></a>
***
## Equilibrate the system (NVT)
Equilibrate the **protein-ligand complex system** in **NVT ensemble** (constant Number of particles, Volume and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.
- [Step 1](#eqNVTStep1): Equilibrate the **protein system** with **NVT** ensemble.
- [Step 2](#eqNVTStep2): Checking **NVT Equilibration** results. Plotting **system temperature** by time during the **NVT equilibration** process.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_mdout) from **biobb_amber.process.process_mdout**
***
<a id="eqNVTStep1"></a>
### Step 1: Equilibrating the system (NVT)
The **nvt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NVT ensemble**:
- imin = 0; Run MD (no minimization)
- ntx = 5; Read initial coords and vels from restart file
- cut = 10.0; Cutoff for non bonded interactions in Angstroms
- ntr = 0; No restrained atoms
- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms
- ntf = 2; Bond interactions involving H omitted
- ntt = 3; Constant temperature using Langevin dynamics
- ig = -1; Seed for pseudo-random number generator
- ioutfm = 1; Write trajectory in netcdf format
- iwrap = 1; Wrap coords into primary box
- nstlim = 5000; Number of MD steps
- dt = 0.002; Time step (in ps)
- irest = 1; Restart previous simulation
- ntb = 1; Periodic boundary conditions at constant volume
- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)
In this particular example, the **NVT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_nvt_traj_path = 'sander.nvt.netcdf'
output_nvt_rst_path = 'sander.nvt.rst'
output_nvt_log_path = 'sander.nvt.log'
prop = {
"simulation_type" : 'nvt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
        'ntr' : 1,                        # Overwriting restraint parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 5.0 # With a force constant of 5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_heat_rst_path,
input_ref_path=output_heat_rst_path,
output_traj_path=output_nvt_traj_path,
output_rst_path=output_nvt_rst_path,
output_log_path=output_nvt_log_path,
properties=prop)
```
<a id="eqNVTStep2"></a>
### Step 2: Checking NVT Equilibration results
Checking **NVT Equilibration** results. Plotting **system temperature** by time during the NVT equilibration process.
```
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_nvt_path = 'sander.md.nvt.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_nvt_log_path,
output_dat_path=output_dat_nvt_path,
properties=prop)
# Read data from file, skipping comment lines and filtering out
# spurious temperature values above 1000 K
with open(output_dat_nvt_path, 'r') as energy_file:
    x, y = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]))
            for line in energy_file
            if not line.startswith(("#", "@"))
            if float(line.split()[1]) < 1000
        ])
    )
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="NVT equilibration",
xaxis=dict(title = "Equilibration Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
```
<a id="npt"></a>
***
## Equilibrate the system (NPT)
Equilibrate the **protein-ligand complex system** in **NPT ensemble** (constant Number of particles, Pressure and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.
- [Step 1](#eqNPTStep1): Equilibrate the **protein system** with **NPT** ensemble.
- [Step 2](#eqNPTStep2): Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_mdout) from **biobb_amber.process.process_mdout**
***
<a id="eqNPTStep1"></a>
### Step 1: Equilibrating the system (NPT)
The **npt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NPT ensemble**:
- imin = 0; Run MD (no minimization)
- ntx = 5; Read initial coords and vels from restart file
- cut = 10.0; Cutoff for non bonded interactions in Angstroms
- ntr = 0; No restrained atoms
- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms
- ntf = 2; Bond interactions involving H omitted
- ntt = 3; Constant temperature using Langevin dynamics
- ig = -1; Seed for pseudo-random number generator
- ioutfm = 1; Write trajectory in netcdf format
- iwrap = 1; Wrap coords into primary box
- nstlim = 5000; Number of MD steps
- dt = 0.002; Time step (in ps)
- irest = 1; Restart previous simulation
- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)
- pres0 = 1.0; Reference pressure
- ntp = 1; Constant pressure dynamics: md with isotropic position scaling
- taup = 2.0; Pressure relaxation time (in ps)
In this particular example, the **NPT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_npt_traj_path = 'sander.npt.netcdf'
output_npt_rst_path = 'sander.npt.rst'
output_npt_log_path = 'sander.npt.log'
prop = {
"simulation_type" : 'npt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
        'ntr' : 1,                        # Overwriting restraint parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 2.5 # With a force constant of 2.5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_nvt_rst_path,
input_ref_path=output_nvt_rst_path,
output_traj_path=output_npt_traj_path,
output_rst_path=output_npt_rst_path,
output_log_path=output_npt_log_path,
properties=prop)
```
<a id="eqNPTStep2"></a>
### Step 2: Checking NPT Equilibration results
Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.
```
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_npt_path = 'sander.md.npt.dat'
prop = {
"terms" : ['PRES','DENSITY']
}
# Create and launch bb
process_mdout(input_log_path=output_npt_log_path,
output_dat_path=output_dat_npt_path,
properties=prop)
# Read pressure and density data from file
with open(output_dat_npt_path, 'r') as pd_file:
    x, y, z = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]), float(line.split()[2]))
            for line in pd_file
            if not line.startswith(("#", "@"))
        ])
    )
plotly.offline.init_notebook_mode(connected=True)
trace1 = go.Scatter(
x=x,y=y
)
trace2 = go.Scatter(
x=x,y=z
)
fig = subplots.make_subplots(rows=1, cols=2, print_grid=False)
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)
fig['layout']['xaxis1'].update(title='Time (ps)')
fig['layout']['xaxis2'].update(title='Time (ps)')
fig['layout']['yaxis1'].update(title='Pressure (bar)')
fig['layout']['yaxis2'].update(title='Density (Kg*m^-3)')
fig['layout'].update(title='Pressure and Density during NPT Equilibration')
fig['layout'].update(showlegend=False)
plotly.offline.iplot(fig)
```
<a id="free"></a>
***
## Free Molecular Dynamics Simulation
Upon completion of the **two equilibration phases (NVT and NPT)**, the system is now well-equilibrated at the desired temperature and pressure. The **position restraints** can now be released. The last step of the **protein** MD setup is a short, **free MD simulation**, to check the **stability** of the equilibrated system.
- [Step 1](#mdStep1): Run short MD simulation of the **protein system**.
- [Step 2](#mdStep2): Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step.
***
**Building Blocks** used:
- [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun**
- [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_mdout) from **biobb_amber.process.process_mdout**
- [cpptraj_rms](https://biobb-analysis.readthedocs.io/en/latest/ambertools.html#module-ambertools.cpptraj_rms) from **biobb_analysis.cpptraj.cpptraj_rms**
- [cpptraj_rgyr](https://biobb-analysis.readthedocs.io/en/latest/ambertools.html#module-ambertools.cpptraj_rgyr) from **biobb_analysis.cpptraj.cpptraj_rgyr**
***
<a id="mdStep1"></a>
### Step 1: Creating portable binary run file to run a free MD simulation
The **free** type of the **simulation_type property** contains the main default parameters to run an **unrestrained MD simulation**:
- imin = 0; Run MD (no minimization)
- ntx = 5; Read initial coords and vels from restart file
- cut = 10.0; Cutoff for non bonded interactions in Angstroms
- ntr = 0; No restrained atoms
- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms
- ntf = 2; Bond interactions involving H omitted
- ntt = 3; Constant temperature using Langevin dynamics
- ig = -1; Seed for pseudo-random number generator
- ioutfm = 1; Write trajectory in netcdf format
- iwrap = 1; Wrap coords into primary box
- nstlim = 5000; Number of MD steps
- dt = 0.002; Time step (in ps)
In this particular example, a short, **5ps-length** simulation (2500 steps) is run, for the sake of time.
```
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_free_traj_path = 'sander.free.netcdf'
output_free_rst_path = 'sander.free.rst'
output_free_log_path = 'sander.free.log'
prop = {
"simulation_type" : 'free',
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
'ntwx' : 500 # Print coords to trajectory every 500 steps (1 ps)
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_npt_rst_path,
output_traj_path=output_free_traj_path,
output_rst_path=output_free_rst_path,
output_log_path=output_free_log_path,
properties=prop)
```
<a id="mdStep2"></a>
### Step 2: Checking free MD simulation results
Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. **RMSd** against the **experimental structure** (input structure of the pipeline) and against the **minimized and equilibrated structure** (output structure of the NPT equilibration step).
```
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against minimized and equilibrated snapshot (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_first = pdbCode+'_rms_first.dat'
prop = {
'mask': 'backbone',
'reference': 'first'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_first,
properties=prop)
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against experimental structure (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_exp = pdbCode+'_rms_exp.dat'
prop = {
'mask': 'backbone',
'reference': 'experimental'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_exp,
input_exp_path=output_pdb_path,
properties=prop)
# Read RMS vs first snapshot data from file
with open(output_rms_first, 'r') as rms_first_file:
    x, y = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]))
            for line in rms_first_file
            if not line.startswith(("#", "@"))
        ])
    )
# Read RMS vs experimental structure data from file
with open(output_rms_exp, 'r') as rms_exp_file:
    x2, y2 = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]))
            for line in rms_exp_file
            if not line.startswith(("#", "@"))
        ])
    )
trace1 = go.Scatter(
    x = x,
    y = y,
    name = 'RMSd vs first'
)
trace2 = go.Scatter(
    x = x2,
    y = y2,
    name = 'RMSd vs exp'
)
data = [trace1, trace2]
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": data,
"layout": go.Layout(title="RMSd during free MD Simulation",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "RMSd (Angstrom)")
)
}
plotly.offline.iplot(fig)
# cpptraj_rgyr: Computing Radius of Gyration to measure the protein compactness during the free MD simulation
# Import module
from biobb_analysis.ambertools.cpptraj_rgyr import cpptraj_rgyr
# Create prop dict and inputs/outputs
output_rgyr = pdbCode+'_rgyr.dat'
prop = {
'mask': 'backbone'
}
# Create and launch bb
cpptraj_rgyr(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rgyr,
properties=prop)
# Read Rgyr data from file
with open(output_rgyr, 'r') as rgyr_file:
    x, y = map(
        list,
        zip(*[
            (float(line.split()[0]), float(line.split()[1]))
            for line in rgyr_file
            if not line.startswith(("#", "@"))
        ])
    )
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Radius of Gyration",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "Rgyr (Angstrom)")
)
}
plotly.offline.iplot(fig)
```
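For reference, the RMSd values plotted above follow the standard definition: the root of the mean squared atomic displacement over the selected (backbone) atoms. The sketch below shows the bare formula only — cpptraj additionally performs the optimal superposition onto the reference before computing it:

```python
import numpy as np

def rmsd(coords, ref):
    """Root mean square deviation between two (N, 3) coordinate arrays.
    Assumes the structures are already superimposed; cpptraj (used above)
    fits them first."""
    diff = np.asarray(coords) - np.asarray(ref)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```

A rigid 1 Å shift of every atom along x, for instance, yields an RMSd of exactly 1 Å.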
<a id="post"></a>
***
## Post-processing and Visualizing resulting 3D trajectory
Post-processing and visualizing the **resulting trajectory** of the **protein system** MD setup using **NGL**
- [Step 1](#ppStep1): *Imaging* the resulting trajectory, **stripping out water molecules and ions** and **correcting periodicity issues**.
- [Step 2](#ppStep2): Visualizing the *imaged* trajectory using the *dry* structure as a **topology**.
***
**Building Blocks** used:
- [cpptraj_image](https://biobb-analysis.readthedocs.io/en/latest/ambertools.html#module-ambertools.cpptraj_image) from **biobb_analysis.cpptraj.cpptraj_image**
***
<a id="ppStep1"></a>
### Step 1: *Imaging* the resulting trajectory.
Stripping out **water molecules and ions** and **correcting periodicity issues**
```
# cpptraj_image: "Imaging" the resulting trajectory
# Removing water molecules and ions from the resulting structure
# Import module
from biobb_analysis.ambertools.cpptraj_image import cpptraj_image
# Create prop dict and inputs/outputs
output_imaged_traj = pdbCode+'_imaged_traj.trr'
prop = {
'mask': 'solute',
'format': 'trr'
}
# Create and launch bb
cpptraj_image(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_imaged_traj,
properties=prop)
```
<a id="ppStep2"></a>
### Step 2: Visualizing the generated dehydrated trajectory.
Using the **imaged trajectory** (output of the [Post-processing step 1](#ppStep1)) with the **dry structure** as a **topology**.
```
# Show trajectory
view = nglview.show_simpletraj(nglview.SimpletrajTrajectory(output_imaged_traj, output_ambpdb_path), gui=True)
view.clear_representations()
view.add_representation('cartoon', color='sstruc')
view.add_representation('licorice', selection='JZ4', color='element', radius=1)
view
```
<a id="output"></a>
## Output files
Important **Output files** generated:
- {{output_ions_pdb_path}}: **System structure** of the MD setup protocol. Structure generated during the MD setup and used in the MD simulation. With hydrogen atoms, solvent box and counterions.
- {{output_free_traj_path}}: **Final trajectory** of the MD setup protocol.
- {{output_free_rst_path}}: **Final checkpoint file**, with information about the state of the simulation. It can be used to **restart** or **continue** an MD simulation.
- {{output_ions_top_path}}: **Final topology** of the MD system in AMBER Parm7 format.
**Analysis** (MD setup check) output files generated:
- {{output_rms_first}}: **Root Mean Square deviation (RMSd)** against **minimized and equilibrated structure** of the final **free MD run step**.
- {{output_rms_exp}}: **Root Mean Square deviation (RMSd)** against **experimental structure** of the final **free MD run step**.
- {{output_rgyr}}: **Radius of Gyration** of the final **free MD run step** of the **setup pipeline**.
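As a quick sanity check that the pipeline produced all of the files listed above, a small hypothetical helper (not part of biobb) can flag any that are missing or empty:

```python
import os

def check_outputs(paths):
    """Return the subset of expected output files that are missing or empty."""
    return [p for p in paths if not os.path.isfile(p) or os.path.getsize(p) == 0]

# Example (variable names from the notebook above):
# missing = check_outputs([output_free_traj_path, output_free_rst_path,
#                          output_ions_top_path])
# assert not missing, f"Missing/empty outputs: {missing}"
```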
***
<a id="questions"></a>
## Questions & Comments
Questions, issues, suggestions and comments are really welcome!
* GitHub issues:
* [https://github.com/bioexcel/biobb](https://github.com/bioexcel/biobb)
* BioExcel forum:
* [https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library](https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library)
# Object Recognition using CNN model
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
#detecting license plate on the vehicle
plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')
#detect the plate and return car + plate image
def plate_detect(img):
    plateImg = img.copy()
    roi = img.copy()
    plate_part = None  # stays None if no plate is detected
    plateRect = plateCascade.detectMultiScale(plateImg, scaleFactor=1.2, minNeighbors=7)
    for (x, y, w, h) in plateRect:
        plate_part = roi[y:y+h, x:x+w, :]
        cv2.rectangle(plateImg, (x+2, y), (x+w-3, y+h-5), (0, 255, 0), 3)
    return plateImg, plate_part
#normal function to display
def display_img(img):
    img_ = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(img_)
    plt.show()
#test image is used for detecting plate
inputImg = cv2.imread('car.jpg')
inpImg, plate = plate_detect(inputImg)
display_img(inpImg)
display_img(plate)
```
# Now we extract each character from the plate
```
def find_contours(dimensions, img):
    # Find all contours in the image using
    #   retrieval mode: RETR_TREE
    #   contour approximation method: CHAIN_APPROX_SIMPLE
    cntrs, _ = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Approximate size limits for character contours
    lower_width = dimensions[0]
    upper_width = dimensions[1]
    lower_height = dimensions[2]
    upper_height = dimensions[3]
    # Keep the 15 largest contours as license plate character candidates
    cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]
    ci = cv2.imread('contour.jpg')
    x_cntr_list = []
    img_res = []
    for cntr in cntrs:
        # Bounding rectangle of the contour in the binary image
        intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)
        # Filter out non-character contours by size
        if lower_width < intWidth < upper_width and lower_height < intHeight < upper_height:
            x_cntr_list.append(intX)  # x-coordinate, used below to sort characters
            char_copy = np.zeros((44, 24))
            # Extract each character using the enclosing rectangle's coordinates
            char = img[intY:intY+intHeight, intX:intX+intWidth]
            char = cv2.resize(char, (20, 40))
            cv2.rectangle(ci, (intX, intY), (intWidth+intX, intY+intHeight), (50, 21, 200), 2)
            plt.imshow(ci, cmap='gray')
            # Invert colours and add a 2px black frame around the character
            char = cv2.subtract(255, char)
            char_copy[2:42, 2:22] = char
            char_copy[0:2, :] = 0
            char_copy[:, 0:2] = 0
            char_copy[42:44, :] = 0
            char_copy[:, 22:24] = 0
            img_res.append(char_copy)  # character's binary image (unsorted)
    plt.show()
    # Return characters in ascending order of their x-coordinate (left to right)
    indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])
    img_res_copy = []
    for idx in indices:
        img_res_copy.append(img_res[idx])  # reorder character images by index
    img_res = np.array(img_res_copy)
    return img_res
def segment_characters(image):
    # Pre-process the cropped plate image:
    #   threshold: convert to pure b&w with sharp edges
    #   erode: grow the black background
    #   dilate: grow the white characters
    img_lp = cv2.resize(image, (333, 75))
    img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)
    _, img_binary_lp = cv2.threshold(img_gray_lp, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img_binary_lp = cv2.erode(img_binary_lp, (3, 3))
    img_binary_lp = cv2.dilate(img_binary_lp, (3, 3))
    LP_WIDTH = img_binary_lp.shape[0]
    LP_HEIGHT = img_binary_lp.shape[1]
    # Whiten a 3px border around the plate
    img_binary_lp[0:3, :] = 255
    img_binary_lp[:, 0:3] = 255
    img_binary_lp[72:75, :] = 255
    img_binary_lp[:, 330:333] = 255
    # Estimated size range of character contours in the cropped plate
    dimensions = [LP_WIDTH/6,
                  LP_WIDTH/2,
                  LP_HEIGHT/10,
                  2*LP_HEIGHT/3]
    plt.imshow(img_binary_lp, cmap='gray')
    plt.show()
    cv2.imwrite('contour.jpg', img_binary_lp)
    # Get the character contours
    char_list = find_contours(dimensions, img_binary_lp)
    return char_list
char = segment_characters(plate)
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(char[i], cmap='gray')
    plt.axis('off')
import keras.backend as K
import tensorflow as tf
from sklearn.metrics import f1_score
from keras import optimizers
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Flatten, MaxPooling2D, Dropout, Conv2D
train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.1, height_shift_range=0.1)
path = 'data/data/'
train_generator = train_datagen.flow_from_directory(
path+'/train',
target_size=(28,28),
batch_size=1,
class_mode='sparse')
validation_generator = train_datagen.flow_from_directory(
path+'/val',
target_size=(28,28),
class_mode='sparse')
#It is the harmonic mean of precision and recall
#Output range is [0, 1]
#Works for both multi-class and multi-label classification
def f1score(y, y_pred):
    return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro')

def custom_f1score(y, y_pred):
    return tf.py_function(f1score, (y, y_pred), tf.double)
K.clear_session()
model = Sequential()
model.add(Conv2D(16, (22,22), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (16,16), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (8,8), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (4,4), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(36, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001), metrics=[custom_f1score])
class stop_training_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('val_custom_f1score', 0) > 0.99:
            self.model.stop_training = True
batch_size = 1
callbacks = [stop_training_callback()]
model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
epochs = 80, verbose=1, callbacks=callbacks)
def fix_dimension(img):
    new_img = np.zeros((28, 28, 3))
    for i in range(3):
        new_img[:, :, i] = img
    return new_img
def show_results():
    dic = {}
    characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    for i, c in enumerate(characters):
        dic[i] = c  # map class index -> character
    output = []
    for i, ch in enumerate(char):
        img_ = cv2.resize(ch, (28, 28), interpolation=cv2.INTER_AREA)
        img = fix_dimension(img_)
        img = img.reshape(1, 28, 28, 3)
        y_ = model.predict_classes(img)[0]
        character = dic[y_]  # predicted character
        output.append(character)
    plate_number = ''.join(output)
    return plate_number
final_plate = show_results()
print(final_plate)
import requests
import xmltodict
import json
def get_vehicle_info(plate_number):
    r = requests.get("http://www.regcheck.org.uk/api/reg.asmx/CheckIndia?RegistrationNumber={0}&username=geerling".format(str(plate_number)))
    data = xmltodict.parse(r.content)
    jdata = json.dumps(data)
    df = json.loads(jdata)
    df1 = json.loads(df['Vehicle']['vehicleJson'])
    return df1
get_vehicle_info(final_plate)
model.save('license_plate_character.pkl')
get_vehicle_info('WB06F5977')
```
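A side note on the custom metric above: with exactly one label per sample, micro-averaged F1 reduces to plain accuracy, so the 0.99 early-stopping threshold behaves like an accuracy target. A dependency-free sketch of the computation (assuming single-label multiclass data):

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for single-label multiclass predictions.
    Micro-averaging pools TP/FP/FN over all classes; with one predicted
    and one true label per sample, FP == FN, so F1 equals accuracy."""
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp   # every wrong prediction is a FP for some class
    fn = len(y_true) - tp   # ...and a FN for the true class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

accuracy = lambda y, p: sum(t == q for t, q in zip(y, p)) / len(y)
```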