```
import trackml
from trackml.dataset import load_event
import sys
import os
import numpy as np
import torch
sys.path.append('..')
sys.path.append('/global/homes/c/caditi97/exatrkx-iml2020/exatrkx/src/')
sys.path.append('/global/homes/c/caditi97/exatrkx-iml2020/exatrkx/src/tests')
%matplotlib inline
os.environ['TRKXINPUTDIR']="/global/cfs/cdirs/m3443/data/trackml-kaggle/train_10evts"
os.environ['TRKXOUTPUTDIR']= "/global/cfs/projectdirs/m3443/usr/caditi97/iml2020/out0"
from utils_robust import *
from gen_tests import *
hits, cells, particles, truth = load_event('/global/cfs/cdirs/m3443/data/trackml-kaggle/train_10evts/event000001000')
hits
(truth['hit_id'].values == hits['hit_id'].values).all()
truth
noise_keeps = ["0", "0.2", "0.4", "0.6", "0.8", "1"]
for i in [0, 0.2, 0.4, 0.6, 0.8, 1]:
    test_noise_reduced(i)
for i in noise_keeps:
    test_noise_perc(float(i))
emb_ckpt_path = '/global/cfs/cdirs/m3443/data/lightning_models/embedding/checkpoints/epoch=10.ckpt'
emb_ckpt = get_emb_ckpt(emb_ckpt_path, [8,1,1], 'build_edges')
emb_ckpt['hyper_parameters']
best_emb = load_cktp(emb_ckpt, emb_ckpt_path, True)
d_path = f"/global/cfs/projectdirs/m3443/usr/caditi97/iml2020/out0.2/feature_store/1000"
data_in = torch.load(d_path)
spatial = best_emb(torch.cat([data_in.cell_data, data_in.x], axis=-1))
spatial.shape
spatial_np = spatial.detach().numpy()
spatial_np
pid_np = data_in.pid.detach().numpy()
np.sum(pid_np == 0)
if torch.cuda.is_available():
    spatial = spatial.cuda()
e_spatial = utils_torch.build_edges(spatial, best_emb.hparams['r_val'], best_emb.hparams['knn'])
e_spatial.shape
e_spatial[1]
e_spatial_np = e_spatial.cpu().detach().numpy()
e_spatial_np.T
e_bidir = torch.cat([data_in.layerless_true_edges,
                     torch.stack([data_in.layerless_true_edges[1],
                                  data_in.layerless_true_edges[0]], axis=1).T], axis=-1)
e_bidir_np = e_bidir.cpu().detach().numpy().T
e_bidir_np
e_spatialn, y_cluster = graph_intersection(e_spatial, e_bidir)
espt = e_spatialn.cpu().detach().numpy().T
espt
espt.shape
espt[1000][1]
for i in [0, 0.2, 0.4, 0.6, 0.8, 1]:
    test_cluster_noise(i)
emb_ckpt_path = '/global/cfs/cdirs/m3443/data/lightning_models/embedding/checkpoints/epoch=10.ckpt'
emb_ckpt = get_emb_ckpt(emb_ckpt_path, [8,1,1], 'build_edges')
best_emb = load_cktp(emb_ckpt, emb_ckpt_path, True)
d_path = f"/global/cfs/projectdirs/m3443/usr/caditi97/iml2020/out0.2/feature_store/1000"
batch = torch.load(d_path)
pid_np, espt, y_cluster = get_cluster(best_emb,batch)
# count noise hits (pid == 0) on either end of the edges labelled true
noise1 = 0
noise2 = 0
for idx in range(len(y_cluster)):
    if y_cluster[idx]:
        hitid1 = espt[idx][0]
        hitid2 = espt[idx][1]
        if pid_np[hitid1] == 0:
            noise1 += 1
        if pid_np[hitid2] == 0:
            noise2 += 1
print(noise1, noise2)
```
```
import tensorsignatures as ts
%matplotlib inline
from helper import hide_toggle
hide_toggle()
```
# The TensorSignatures CLI
The TensorSignatures CLI comes with six subroutines,
* `boot`: computes bootstrap intervals for a TensorSignature initialisation,
* `data`: simulates mutation count data for a TensorSignature inference,
* `prep`: computes a normalisation constant and formats a count tensor,
* `refit`: refits the exposures to a set of fixed tensor signatures (Sec. A.2.3),
* `train`: runs a de novo extraction of tensor signatures (Sec. A.2.3),
* `write`: creates a hdf5 file out of dumped tensor signatures pkls.
The goal of this tutorial is to illustrate how to run TensorSignatures in a practical setting. For this reason, we will first simulate mutation count data using `tensorsignatures data`, and subsequently run `tensorsignatures train` to extract the constituent signatures. In the next section, we will then analyse the results of this experiment in Jupyter with the help of the `tensorsignatures` API.
## Simulate data via CLI
To create a reproducible synthetic dataset from 5 mutational signatures with the CLI (the first positional argument, 573, sets the seed; the second sets the number of signatures), we invoke the data subprogram
```
%%bash
tensorsignatures data 573 5 data.h5 -s 100 -m 10000 -d 3 -d 5
```
which will simulate 100 samples (`-s 100`) each with 10,000 mutations (`-m 10000`), and two additional genomic dimensions with 3 and 5 states (`-d 3 -d 5`) respectively. The program writes a `hdf5` file `data.h5` to the current folder containing the datasets `SNV` and `OTHER` representing the SNV count tensor and all other variants respectively.
## Running TensorSignatures using the command line interface
Since we know the number of signatures that made up the dataset we can run a TensorSignatures decomposition simply by executing
```
%%bash
tensorsignatures --verbose train data.h5 my_first_run.pkl 5
```
which saves a pickleable binary file to disk that we can load into an interactive Python session (e.g. a Jupyter notebook) for further investigation
```
init = ts.load_dump('my_first_run.pkl')
init.S.shape
```
However, we usually do not know the number of active mutational processes a priori. For this reason, it is necessary to run the algorithm with different decomposition ranks, and subsequently to select the most appropriate model for the data. Moreover, we recommend running several initialisations of the algorithm at each decomposition rank. This is necessary because non-negative matrix factorisation produces stochastic solutions, i.e. each decomposition represents a local minimum of the objective function used to train the model. As a result, it is worthwhile to sample the solution space thoroughly and to pick the solution that maximises the log-likelihood. Running TensorSignatures at different decomposition ranks while computing several initialisations is easy using the CLI. For example, to compute decompositions from rank 2 to 10 with 3 initialisations each, we would simply write a nested bash loop (*Caution: this may take some time*).
```
%%bash
for rank in {2..10}; do
for init in {0..2}; do
tensorsignatures train data.h5 sol_${rank}_${init}.pkl ${rank} -i ${init} -j MyFirstExperiment;
done;
done;
```
Also note the additional arguments we pass to the programme: the `-i` argument identifies each initialisation uniquely (mandatory), and the `-j` parameter allows us to name the experiment, which in this context denotes multiple TensorSignature decompositions across a range of ranks extracted with the same hyperparameters (number of epochs, dispersion, etc.).
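Once all runs have finished, picking the maximum-likelihood initialisation per rank amounts to a simple scan over the loaded dumps. A minimal sketch of that selection step (the log-likelihoods below are mocked placeholder numbers, since in practice they would come from the objects returned by `ts.load_dump`):

```python
# Mocked results: (rank, init) -> final log-likelihood of that run.
# In a real experiment these values would be read from the dumped solutions.
results = {
    (2, 0): -10500.3, (2, 1): -10498.7, (2, 2): -10501.2,
    (3, 0): -10210.9, (3, 1): -10207.4, (3, 2): -10212.0,
}

def best_init_per_rank(results):
    """Return {rank: (init, log_likelihood)} keeping the best run per rank."""
    best = {}
    for (rank, init), ll in results.items():
        if rank not in best or ll > best[rank][1]:
            best[rank] = (init, ll)
    return best

print(best_init_per_rank(results))
# {2: (1, -10498.7), 3: (1, -10207.4)}
```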
## Summarising the result from many initialisations with `tensorsignatures write`
The loop above produces three initialisations for each rank (2-10) and saves the results as pickleable binary files to the hard disk. Loading the 9 x 3 initialisations manually using `ts.load_dump` would be quite tedious, and even impracticable in larger experiments. For this reason, we included the subprogram `tensorsignatures write`, which takes a `glob` filename pattern and an output filename as arguments to generate a `hdf5` file containing all initialisations.
```
%%bash
tensorsignatures write "sol_*.pkl" results.h5
```
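The `sol_${rank}_${init}.pkl` naming scheme also makes it easy to recover the rank and initialisation of each dump from its filename alone. A small sketch of such a parser (the regex and helper are our own illustration, not part of the CLI):

```python
import re

def parse_solution_name(filename):
    """Extract (rank, init) from a 'sol_<rank>_<init>.pkl' filename."""
    m = re.match(r'sol_(\d+)_(\d+)\.pkl$', filename)
    if m is None:
        raise ValueError('unexpected filename: %s' % filename)
    return int(m.group(1)), int(m.group(2))

print(parse_solution_name('sol_2_0.pkl'))   # (2, 0)
print(parse_solution_name('sol_10_2.pkl'))  # (10, 2)
```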
```
import os
import numpy as np
import numpy.matlib  # needed by np.matlib.repmat in Step 3
import cPickle
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import timeit
import sklearn
import cv2
import sys
import glob
sys.path.append('./recognition')
from embedding import Embedding
from menpo.visualize import print_progress
from menpo.visualize.viewmatplotlib import sample_colours_from_colourmap
from prettytable import PrettyTable
from pathlib import Path
import warnings
warnings.filterwarnings("ignore")
def read_template_media_list(path):
    ijb_meta = np.loadtxt(path, dtype=str)
    templates = ijb_meta[:,1].astype(np.int)
    medias = ijb_meta[:,2].astype(np.int)
    return templates, medias

def read_template_pair_list(path):
    pairs = np.loadtxt(path, dtype=str)
    t1 = pairs[:,0].astype(np.int)
    t2 = pairs[:,1].astype(np.int)
    label = pairs[:,2].astype(np.int)
    return t1, t2, label

def read_image_feature(path):
    with open(path, 'rb') as fid:
        img_feats = cPickle.load(fid)
    return img_feats
def get_image_feature(img_path, img_list_path, model_path, gpu_id):
    img_list = open(img_list_path)
    embedding = Embedding(model_path, 0, gpu_id)
    files = img_list.readlines()
    img_feats = []
    faceness_scores = []
    for img_index, each_line in enumerate(print_progress(files)):
        name_lmk_score = each_line.strip().split(' ')
        img_name = os.path.join(img_path, name_lmk_score[0])
        img = cv2.imread(img_name)
        lmk = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32)
        lmk = lmk.reshape( (5,2) )
        img_feats.append(embedding.get(img,lmk))
        faceness_scores.append(name_lmk_score[-1])
    img_feats = np.array(img_feats).astype(np.float32)
    faceness_scores = np.array(faceness_scores).astype(np.float32)
    return img_feats, faceness_scores
def image2template_feature(img_feats=None, templates=None, medias=None):
    # ==========================================================
    # 1. face image feature l2 normalization. img_feats:[number_image x feats_dim]
    # 2. compute media feature.
    # 3. compute template feature.
    # ==========================================================
    unique_templates = np.unique(templates)
    template_feats = np.zeros((len(unique_templates), img_feats.shape[1]))
    for count_template, uqt in enumerate(unique_templates):
        (ind_t,) = np.where(templates == uqt)
        face_norm_feats = img_feats[ind_t]
        face_medias = medias[ind_t]
        unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True)
        media_norm_feats = []
        for u, ct in zip(unique_medias, unique_media_counts):
            (ind_m,) = np.where(face_medias == u)
            if ct == 1:
                media_norm_feats += [face_norm_feats[ind_m]]
            else:  # image features from the same video will be aggregated into one feature
                media_norm_feats += [np.mean(face_norm_feats[ind_m], 0, keepdims=True)]
        media_norm_feats = np.array(media_norm_feats)
        # media_norm_feats = media_norm_feats / np.sqrt(np.sum(media_norm_feats ** 2, -1, keepdims=True))
        template_feats[count_template] = np.sum(media_norm_feats, 0)
        if count_template % 2000 == 0:
            print('Finish Calculating {} template features.'.format(count_template))
    template_norm_feats = template_feats / np.sqrt(np.sum(template_feats ** 2, -1, keepdims=True))
    return template_norm_feats, unique_templates
def verification(template_norm_feats=None, unique_templates=None, p1=None, p2=None):
    # ==========================================================
    # Compute set-to-set Similarity Score.
    # ==========================================================
    template2id = np.zeros((max(unique_templates)+1, 1), dtype=int)
    for count_template, uqt in enumerate(unique_templates):
        template2id[uqt] = count_template
    score = np.zeros((len(p1),))  # save cosine distance between pairs
    total_pairs = np.array(range(len(p1)))
    batchsize = 100000  # small batchsize instead of all pairs in one batch due to the memory limitation
    sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)]
    total_sublists = len(sublists)
    for c, s in enumerate(sublists):
        feat1 = template_norm_feats[template2id[p1[s]]]
        feat2 = template_norm_feats[template2id[p2[s]]]
        similarity_score = np.sum(feat1 * feat2, -1)
        score[s] = similarity_score.flatten()
        if c % 10 == 0:
            print('Finish {}/{} pairs.'.format(c, total_sublists))
    return score

def read_score(path):
    with open(path, 'rb') as fid:
        img_feats = cPickle.load(fid)
    return img_feats
```
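To make the media-then-template aggregation in `image2template_feature` concrete, here is a toy illustration on hand-made features (the numbers below are invented, not IJB-B data): images from the same media are averaged first, media features are summed per template, and the result is L2-normalised.

```python
import numpy as np

# Toy setup: 4 images with 3-dim features, two templates.
# Images 0 and 1 come from the same media (video), so they are averaged first.
feats = np.array([[1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.],
                  [1., 1., 0.]])
templates = np.array([0, 0, 0, 1])
medias = np.array([7, 7, 8, 9])

template_feats = []
for t in np.unique(templates):
    idx = np.where(templates == t)[0]
    # one averaged feature per media, then sum the media features
    media_feats = [feats[idx][medias[idx] == m].mean(axis=0)
                   for m in np.unique(medias[idx])]
    template_feats.append(np.sum(media_feats, axis=0))
template_feats = np.array(template_feats)

# L2-normalise, as in image2template_feature
template_norm = template_feats / np.linalg.norm(template_feats, axis=-1, keepdims=True)
print(template_norm)  # rows are unit-norm template features
```

Template 0 combines the average of images 0-1 with image 2 before normalisation, so a single over-represented video cannot dominate the template.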
# Step 1: Load Meta Data
```
# =============================================================
# load image and template relationships for template feature embedding
# tid --> template id, mid --> media id
# format:
# image_name tid mid
# =============================================================
start = timeit.default_timer()
templates, medias = read_template_media_list(os.path.join('IJBB/meta', 'ijbb_face_tid_mid.txt'))
stop = timeit.default_timer()
print('Time: %.2f s. ' % (stop - start))
# =============================================================
# load template pairs for template-to-template verification
# tid : template id, label : 1/0
# format:
# tid_1 tid_2 label
# =============================================================
start = timeit.default_timer()
p1, p2, label = read_template_pair_list(os.path.join('IJBB/meta', 'ijbb_template_pair_label.txt'))
stop = timeit.default_timer()
print('Time: %.2f s. ' % (stop - start))
```
# Step 2: Get Image Features
```
# =============================================================
# load image features
# format:
# img_feats: [image_num x feats_dim] (227630, 512)
# =============================================================
start = timeit.default_timer()
#img_feats = read_image_feature('./MS1MV2/IJBB_MS1MV2_r100_arcface.pkl')
img_path = './IJBB/loose_crop'
img_list_path = './IJBB/meta/ijbb_name_5pts_score.txt'
model_path = './pretrained_models/VGG2-ResNet50-Arcface/model'
gpu_id = 0
img_feats, faceness_scores = get_image_feature(img_path, img_list_path, model_path, gpu_id)
stop = timeit.default_timer()
print('Time: %.2f s. ' % (stop - start))
print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1]))
```
# Step 3: Get Template Features
```
# =============================================================
# compute template features from image features.
# =============================================================
start = timeit.default_timer()
# ==========================================================
# Norm feature before aggregation into template feature?
# Feature norm from embedding network and faceness score are able to decrease weights for noise samples (not face).
# ==========================================================
# 1. FaceScore (Feature Norm)
# 2. FaceScore (Detector)
use_norm_score = False # if True, TestMode(N1)
use_detector_score = True # if True, TestMode(D1)
use_flip_test = True # if True, TestMode(F1)
if use_flip_test:
    # concat --- F1
    # img_input_feats = img_feats
    # add --- F2 (integer division keeps the slice index an int)
    img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:]
else:
    img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2]
if use_norm_score:
    img_input_feats = img_input_feats
else:
    # normalise features to remove norm information
    img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True))
if use_detector_score:
    img_input_feats = img_input_feats * np.matlib.repmat(faceness_scores[:, np.newaxis], 1, img_input_feats.shape[1])
else:
    img_input_feats = img_input_feats
template_norm_feats, unique_templates = image2template_feature(img_input_feats, templates, medias)
stop = timeit.default_timer()
print('Time: %.2f s. ' % (stop - start))
```
# Step 4: Get Template Similarity Scores
```
# =============================================================
# compute verification scores between template pairs.
# =============================================================
start = timeit.default_timer()
score = verification(template_norm_feats, unique_templates, p1, p2)
stop = timeit.default_timer()
print('Time: %.2f s. ' % (stop - start))
score_save_name = './IJBB/result/VGG2-ResNet50-ArcFace-TestMode(N0D1F2).npy'
np.save(score_save_name, score)
```
# Step 5: Get ROC Curves and TPR@FPR Table
```
score_save_path = './IJBB/result'
files = glob.glob(score_save_path + '/VGG2*.npy')
methods = []
scores = []
for file in files:
    methods.append(Path(file).stem)
    scores.append(np.load(file))
methods = np.array(methods)
scores = dict(zip(methods,scores))
colours = dict(zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2')))
#x_labels = [1/(10**x) for x in np.linspace(6, 0, 6)]
x_labels = [10**-6, 10**-5, 10**-4,10**-3, 10**-2, 10**-1]
tpr_fpr_table = PrettyTable(['Methods'] + map(str, x_labels))
fig = plt.figure()
for method in methods:
    fpr, tpr, _ = roc_curve(label, scores[method])
    roc_auc = auc(fpr, tpr)
    fpr = np.flipud(fpr)
    tpr = np.flipud(tpr)  # select largest tpr at same fpr
    plt.plot(fpr, tpr, color=colours[method], lw=1, label=('[%s (AUC = %0.4f %%)]' % (method.split('-')[-1], roc_auc*100)))
    tpr_fpr_row = []
    tpr_fpr_row.append(method)
    for fpr_iter in np.arange(len(x_labels)):
        _, min_index = min(list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))
        tpr_fpr_row.append('%.4f' % tpr[min_index])
    tpr_fpr_table.add_row(tpr_fpr_row)
plt.xlim([10**-6, 0.1])
plt.ylim([0.3, 1.0])
plt.grid(linestyle='--', linewidth=1)
plt.xticks(x_labels)
plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True))
plt.xscale('log')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC on IJB-B')
plt.legend(loc="lower right")
plt.show()
#fig.savefig('IJB-B.pdf')
print(tpr_fpr_table)
# setting N0D1F2 is the best
```
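The TPR@FPR table above reports, for each target FPR, the TPR at the closest measured FPR on the curve; the pairing-and-`min` trick in the loop is equivalent to a simple `argmin` over the absolute differences. A sketch with toy curve values (placeholders, not the real IJB-B results):

```python
import numpy as np

# Toy ROC samples, sorted by ascending FPR (invented numbers).
fpr = np.array([1e-6, 5e-5, 2e-4, 1e-3, 1e-2, 1e-1])
tpr = np.array([0.35, 0.52, 0.64, 0.78, 0.90, 0.97])

def tpr_at_fpr(fpr, tpr, target):
    """TPR at the measured FPR closest to the target FPR."""
    return tpr[np.argmin(np.abs(fpr - target))]

print(tpr_at_fpr(fpr, tpr, 1e-4))  # 0.52 (the closest measured FPR is 5e-5)
```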
# Test Setting Conclusions
#### (1) Add is better than concat for the flip test (N1D1F2 vs. N1D1F1)
#### (2) The detection score contains faceness information that decreases the weights of noise samples within the template (N0D1F0 vs. N0D0F0)
# Sustainable energy transitions data model
```
import pandas as pd, numpy as np, json, copy, zipfile, random, requests, StringIO
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
from IPython.core.display import Image
Image('favicon.png')
```
## Country and region name converters
```
#country name converters
#EIA->pop
clist1={'North America':'Northern America',
'United States':'United States of America',
'Central & South America':'Latin America and the Caribbean',
'Bahamas, The':'Bahamas',
'Saint Vincent/Grenadines':'Saint Vincent and the Grenadines',
'Venezuela':'Venezuela (Bolivarian Republic of)',
'Macedonia':'The former Yugoslav Republic of Macedonia',
'Moldova':'Republic of Moldova',
'Russia':'Russian Federation',
'Iran':'Iran (Islamic Republic of)',
'Palestinian Territories':'State of Palestine',
'Syria':'Syrian Arab Republic',
'Yemen':'Yemen ',
'Congo (Brazzaville)':'Congo',
'Congo (Kinshasa)':'Democratic Republic of the Congo',
'Cote dIvoire (IvoryCoast)':"C\xc3\xb4te d'Ivoire",
'Gambia, The':'Gambia',
'Libya':'Libyan Arab Jamahiriya',
'Reunion':'R\xc3\xa9union',
'Somalia':'Somalia ',
'Sudan and South Sudan':'Sudan',
'Tanzania':'United Republic of Tanzania',
'Brunei':'Brunei Darussalam',
'Burma (Myanmar)':'Myanmar',
'Hong Kong':'China, Hong Kong Special Administrative Region',
'Korea, North':"Democratic People's Republic of Korea",
'Korea, South':'Republic of Korea',
'Laos':"Lao People's Democratic Republic",
'Macau':'China, Macao Special Administrative Region',
'Timor-Leste (East Timor)':'Timor-Leste',
'Virgin Islands, U.S.':'United States Virgin Islands',
'Vietnam':'Viet Nam'}
#BP->pop
clist2={u' European Union #':u'Europe',
u'Rep. of Congo (Brazzaville)':u'Congo (Brazzaville)',
'Republic of Ireland':'Ireland',
'China Hong Kong SAR':'China, Hong Kong Special Administrative Region',
u'Total Africa':u'Africa',
u'Total North America':u'Northern America',
u'Total S. & Cent. America':'Latin America and the Caribbean',
u'Total World':u'World',
u'Total World ':u'World',
'South Korea':'Republic of Korea',
u'Trinidad & Tobago':u'Trinidad and Tobago',
u'US':u'United States of America'}
#WD->pop
clist3={u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Congo, Rep.':u'Congo (Brazzaville)',
u'Caribbean small states':'Carribean',
u'East Asia & Pacific (all income levels)':'Eastern Asia',
u'Egypt, Arab Rep.':'Egypt',
u'European Union':u'Europe',
u'Hong Kong SAR, China':u'China, Hong Kong Special Administrative Region',
u'Iran, Islamic Rep.':u'Iran (Islamic Republic of)',
u'Kyrgyz Republic':u'Kyrgyzstan',
u'Korea, Rep.':u'Republic of Korea',
u'Latin America & Caribbean (all income levels)':'Latin America and the Caribbean',
u'Macedonia, FYR':u'The former Yugoslav Republic of Macedonia',
u'Korea, Dem. Rep.':u"Democratic People's Republic of Korea",
u'South Asia':u'Southern Asia',
u'Sub-Saharan Africa (all income levels)':u'Sub-Saharan Africa',
u'Slovak Republic':u'Slovakia',
u'Venezuela, RB':u'Venezuela (Bolivarian Republic of)',
u'Yemen, Rep.':u'Yemen ',
u'Congo, Dem. Rep.':u'Democratic Republic of the Congo'}
#COMTRADE->pop
clist4={u"Bosnia Herzegovina":"Bosnia and Herzegovina",
u'Central African Rep.':u'Central African Republic',
u'China, Hong Kong SAR':u'China, Hong Kong Special Administrative Region',
u'China, Macao SAR':u'China, Macao Special Administrative Region',
u'Czech Rep.':u'Czech Republic',
u"Dem. People's Rep. of Korea":"Democratic People's Republic of Korea",
u'Dem. Rep. of the Congo':"Democratic Republic of the Congo",
u'Dominican Rep.':u'Dominican Republic',
u'Fmr Arab Rep. of Yemen':u'Yemen ',
u'Fmr Ethiopia':u'Ethiopia',
u'Fmr Fed. Rep. of Germany':u'Germany',
u'Fmr Panama, excl.Canal Zone':u'Panama',
u'Fmr Rep. of Vietnam':u'Viet Nam',
u"Lao People's Dem. Rep.":u"Lao People's Democratic Republic",
u'Occ. Palestinian Terr.':u'State of Palestine',
u'Rep. of Korea':u'Republic of Korea',
u'Rep. of Moldova':u'Republic of Moldova',
u'Serbia and Montenegro':u'Serbia',
u'US Virgin Isds':u'United States Virgin Islands',
u'Solomon Isds':u'Solomon Islands',
u'United Rep. of Tanzania':u'United Republic of Tanzania',
u'TFYR of Macedonia':u'The former Yugoslav Republic of Macedonia',
u'USA':u'United States of America',
u'USA (before 1981)':u'United States of America',
}
#Jacobson->pop
clist5={u"Korea, Democratic People's Republic of":"Democratic People's Republic of Korea",
u'All countries':u'World',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Iran, Islamic Republic of':u'Iran (Islamic Republic of)',
u'Macedonia, Former Yugoslav Republic of':u'The former Yugoslav Republic of Macedonia',
u'Congo, Democratic Republic of':u"Democratic Republic of the Congo",
u'Korea, Republic of':u'Republic of Korea',
u'Tanzania, United Republic of':u'United Republic of Tanzania',
u'Moldova, Republic of':u'Republic of Moldova',
u'Hong Kong, China':u'China, Hong Kong Special Administrative Region',
u'All countries.1':"World"
}
#NREL solar->pop
clist6={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u"Brunei":u'Brunei Darussalam',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u"Iran":u'Iran (Islamic Republic of)',
u"Laos":u"Lao People's Democratic Republic",
u"Libya":'Libyan Arab Jamahiriya',
u"Moldova":u'Republic of Moldova',
u"North Korea":"Democratic People's Republic of Korea",
u"Reunion":'R\xc3\xa9union',
u'Sao Tome & Principe':u'Sao Tome and Principe',
u'Solomon Is.':u'Solomon Islands',
u'St. Lucia':u'Saint Lucia',
u'St. Vincent & the Grenadines':u'Saint Vincent and the Grenadines',
u'The Bahamas':u'Bahamas',
u'The Gambia':u'Gambia',
u'Virgin Is.':u'United States Virgin Islands',
u'West Bank':u'State of Palestine'
}
#NREL wind->pop
clist7={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u'Occupied Palestinian Territory':u'State of Palestine',
u'China Macao SAR':u'China, Macao Special Administrative Region',
#"C\xc3\xb4te d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'East Timor':u'Timor-Leste',
u'TFYR Macedonia':u'The former Yugoslav Republic of Macedonia',
u'IAM-country Total':u'World'
}
#country centroids->pop
clist8={u'Burma':'Myanmar',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Republic of the Congo':u'Congo (Brazzaville)',
u'Reunion':'R\xc3\xa9union'
}
def cnc(country):
    if country in clist1: return clist1[country]
    elif country in clist2: return clist2[country]
    elif country in clist3: return clist3[country]
    elif country in clist4: return clist4[country]
    elif country in clist5: return clist5[country]
    elif country in clist6: return clist6[country]
    elif country in clist7: return clist7[country]
    elif country in clist8: return clist8[country]
    else: return country
```
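The converter simply tries each mapping in order and falls back to the input name. The same pattern condensed, with a list of dictionaries instead of a chain of `elif` branches (the two sample entries are taken from `clist1` and `clist2` above):

```python
# Condensed chain-of-mappings lookup; entries trimmed for illustration.
clists = [
    {'Russia': 'Russian Federation'},    # EIA -> pop (from clist1)
    {'US': 'United States of America'},  # BP  -> pop (from clist2)
]

def cnc(country):
    for cl in clists:
        if country in cl:
            return cl[country]
    return country  # fall back to the original name

print(cnc('Russia'))  # Russian Federation
print(cnc('France'))  # France (unchanged)
```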
# Population
Consult the notebook entitled *pop.ipynb* for the details of mining the data from the UN Statistics Division online database.
Because it serves as the reference database for country names, the cell below needs to be run first, before any other database.
```
try:
    import zlib
    compression = zipfile.ZIP_DEFLATED
except:
    compression = zipfile.ZIP_STORED
#pop_path='https://dl.dropboxusercontent.com/u/531697/datarepo/Set/db/
pop_path='E:/Dropbox/Public/datarepo/Set/db/'
#suppress warnings
import warnings
warnings.simplefilter(action = "ignore")
cc=pd.read_excel(pop_path+'Country Code and Name ISO2 ISO3.xls')
#http://unstats.un.org/unsd/tradekb/Attachment321.aspx?AttachmentType=1
ccs=cc['Country Code'].values
neighbors=pd.read_csv(pop_path+'contry-geotime.csv')
#https://raw.githubusercontent.com/ppKrauss/country-geotime/master/data/contry-geotime.csv
#country name converter from iso to comtrade and back
iso2c={}
isoc2={}
for i in cc.T.iteritems():
    iso2c[i[1][0]]=i[1][1]
    isoc2[i[1][1]]=i[1][0]
#country name converter from pop to iso
pop2iso={}
for i in cc.T.iteritems():
    pop2iso[cnc(i[1][1])]=int(i[1][0])
#country name converter from alpha 2 to iso
c2iso={}
for i in neighbors.T.iteritems():
    c2iso[str(i[1][0])]=i[1][1]
c2iso['NA']=c2iso['nan'] #adjust for namibia
c2iso.pop('nan');
#create country neighbor adjacency list based on iso country number codes
c2neighbors={}
for i in neighbors.T.iteritems():
    z=str(i[1][4]).split(' ')
    if (str(i[1][1])!='nan'): c2neighbors[int(i[1][1])]=[c2iso[k] for k in z if k!='nan']
#extend iso codes not yet encountered
iso2c[729]="Sudan"
iso2c[531]="Curacao"
iso2c[535]="Bonaire, Sint Eustatius and Saba"
iso2c[728]="South Sudan"
iso2c[534]="Sint Maarten (Dutch part)"
iso2c[652]="Saint Barthélemy"
#load h2 min
h2=json.loads(file(pop_path+'h2.json','r').read())
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
    tradealpha[c]={}
    for year in predata[c]:
        tradealpha[c][int(year)]=predata[c][year]
predata={}
#load savedata
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
    data[c]={}
    for year in predata[c]:
        data[c][int(year)]=predata[c][year]
predata={}
#load grids
grid=json.loads(file(pop_path+'grid.json','r').read())
grid5=json.loads(file(pop_path+'grid5.json','r').read())
gridz=json.loads(file(pop_path+'gridz.json','r').read())
gridz5=json.loads(file(pop_path+'gridz5.json','r').read())
#load ndists
ndists=json.loads(file(pop_path+'ndists.json','r').read())
distancenorm=7819.98
#load goodcountries
#goodcountries=list(set(data.keys()).intersection(set(tradealpha.keys())))
goodcountries=json.loads(file(pop_path+'GC.json','r').read())
#goodcountries=goodcountries[:20] #dev
rgc={} #reverse goodcountries coder
for i in range(len(goodcountries)):
    rgc[goodcountries[i]]=i
cid={} #reverse goodcountries coder
for i in range(len(goodcountries)):
    cid[goodcountries[i]]=i
def save3(sd,countrylist=[]):
    #if True:
    print 'saving... ',sd,
    popsave={}
    countries=[]
    if countrylist==[]:
        c=sorted(goodcountries)
    else: c=countrylist
    for country in c:
        popdummy={}
        tosave=[]
        for year in data[country]:
            popdummy[year]=data[country][year]['population']
            for fuel in data[country][year]['energy']:
                #for fuel in allfuels:
                if fuel not in {'nrg','nrg_sum'}:
                    tosave.append({"t":year,"u":fuel,"g":"f","q1":"pp","q2":999,
                                   "s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['prod']) \
                                       and (np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']))) else \
                                       data[country][year]['energy'][fuel]['prod']['navg3'] if \
                                       'navg3' in data[country][year]['energy'][fuel]['prod'] else 0,3)
                                   })
                    tosave.append({"t":year,"u":fuel,"g":"m","q1":"cc","q2":999,
                                   "s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['cons']) \
                                       and (np.isnan(data[country][year]['energy'][fuel]['cons']['navg3']))) else \
                                       data[country][year]['energy'][fuel]['cons']['navg3'] if \
                                       'navg3' in data[country][year]['energy'][fuel]['cons'] else 0,3)
                                   })
            #save balances - only for dev
            #if (year > min(balance.keys())):
            #    if year in balance:
            #        if country in balance[year]:
            #            tosave.append({"t":year,"u":"balance","g":"m","q1":"cc","q2":999,
            #                           "s":balance[year][country]})
        #no import export flows on global
        if country not in {"World"}:
            flowg={"Import":"f","Export":"m","Re-Export":"m","Re-Import":"f"}
            if country in tradealpha:
                for year in tradealpha[country]:
                    for fuel in tradealpha[country][year]:
                        for flow in tradealpha[country][year][fuel]:
                            for partner in tradealpha[country][year][fuel][flow]:
                                tosave.append({"t":int(float(year)),"u":fuel,"g":flowg[flow],"q1":flow,"q2":partner,
                                               "s":round(tradealpha[country][year][fuel][flow][partner],3)
                                               })
        popsave[country]=popdummy
        countries.append(country)
        file('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','w').write(json.dumps(tosave))
        zf = zipfile.ZipFile('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+str(country.encode('utf-8').replace('/','&&'))+'.zip', mode='w')
        zf.write('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','data.json',compress_type=compression)
        zf.close()
    #save all countries list
    file('E:/Dropbox/Public/datarepo/Set/universal/countries.json','w').write(json.dumps(countries))
    #save countries populations
    #file('E:/Dropbox/Public/datarepo/Set/json/pop.json','w').write(json.dumps(popsave))
    print ' done'
```
## Impex updating
```
def updatenormimpex(reporter,partner,flow,value,weight=0.1):
    global nimportmatrix
    global nexportmatrix
    global nrimportmatrix
    global nrexportmatrix
    i=cid[reporter]
    j=cid[partner]
    #exponentially weighted update of the normalised (re-)import/(re-)export matrices
    if flow in {"Export","Re-Export"}:
        nexportmatrix[i][j]=(nexportmatrix[i][j]*(1-weight))+(value*weight)
        nrimportmatrix[j][i]=(nrimportmatrix[j][i]*(1-weight))+(value*weight)
    if flow in {"Import","Re-Import"}:
        nimportmatrix[i][j]=(nimportmatrix[i][j]*(1-weight))+(value*weight)
        nrexportmatrix[j][i]=(nrexportmatrix[j][i]*(1-weight))+(value*weight)
    return
def influence(reporter,partner,selfinfluence=1.0,expfactor=3.0):
    #country trade influence tends to follow an exponential distribution, therefore we convert to linear
    #with a strength of expfactor
    i=cid[reporter]
    j=cid[partner]
    if i==j: return selfinfluence
    else: return (12.0/36*nimportmatrix[i][j]
                 +6.0/36*nexportmatrix[j][i]
                 +4.0/36*nrimportmatrix[i][j]
                 +2.0/36*nrexportmatrix[j][i]
                 +6.0/36*nexportmatrix[i][j]
                 +3.0/36*nimportmatrix[j][i]
                 +2.0/36*nrexportmatrix[i][j]
                 +1.0/36*nrimportmatrix[j][i])**(1.0/expfactor)
#load ! careful, need to rebuild index if tradealpha or data changes
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
import scipy
import pylab
import scipy.cluster.hierarchy as sch
import matplotlib as mpl
import matplotlib.font_manager as font_manager
from matplotlib.ticker import NullFormatter
path = 'Inconsolata-Bold.ttf'
prop = font_manager.FontProperties(fname=path)
labeler=json.loads(file(pop_path+'../universal/labeler.json','r').read())
isoico=json.loads(file(pop_path+'../universal/isoico.json','r').read())
risoico=json.loads(file(pop_path+'../universal/risoico.json','r').read())
def dendro(sd='00',selfinfluence=1.0,expfactor=3.0):
    returnmatrix=scipy.zeros([len(goodcountries),len(goodcountries)])
    matrix=scipy.zeros([len(goodcountries),len(goodcountries)])
    global labs
    global labsorder
    global labs2
    global labs3
    labs=[]
    labs2=[]
    labs3=[]
    for i in range(len(goodcountries)):
        labs.append(labeler[goodcountries[i]])
    labsorder = pd.Series(np.array(labs)) #create labelorder
    labsorder=labsorder.rank(method='dense').values.astype(int)-1
    alphabetvector=[0 for i in range(len(labsorder))]
    for i in range(len(labsorder)):
        alphabetvector[labsorder[i]-1]=i
    labs=[]
    for i in range(len(goodcountries)):
        labs.append(labeler[goodcountries[alphabetvector[i]]])
        labs2.append(goodcountries[alphabetvector[i]])
        labs3.append(isoico[goodcountries[alphabetvector[i]]])
        for j in alphabetvector:
            matrix[i][j]=influence(goodcountries[alphabetvector[i]],goodcountries[alphabetvector[j]],selfinfluence,expfactor)
            returnmatrix[i][j]=influence(goodcountries[i],goodcountries[j],selfinfluence,expfactor)
    title=u'Partner Importance of COLUMN Country for ROW Country in Energy Trade [self-influence $q='+\
        str(selfinfluence)+'$, power factor $p='+str(expfactor)+'$]'
    #cmap=plt.get_cmap('RdYlGn_r') #for logplot
    cmap=plt.get_cmap('YlGnBu')
    labelpad=32
    # Copy the influence matrix into the distance matrix.
    D = scipy.zeros([len(matrix),len(matrix)])
    for i in range(len(matrix)):
        for j in range(len(matrix)):
            D[i,j] = matrix[i][j]
    # Compute and plot first dendrogram.
    fig = pylab.figure(figsize=(17,15))
    sch.set_link_color_palette(10*["#ababab"])
    # Plot original matrix.
    axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
    im = axmatrix.matshow(D[::-1], aspect='equal', origin='lower', cmap=cmap)
    #im = axmatrix.matshow(E[::-1], aspect='auto', origin='lower', cmap=cmap) #for logplot
    axmatrix.set_xticks([])
    axmatrix.set_yticks([])
    # Plot colorbar.
    axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
    pylab.colorbar(im, cax=axcolor)
    # Label up
    axmatrix.set_xticks(range(len(matrix)))
    mlabs=list(labs)
    for i in range(len(labs)):
        kz='-'
        for k in range(labelpad-len(labs[i])):kz+='-'
        if i%2==1: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
        else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
    axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
    axmatrix.xaxis.set_label_position('top')
    axmatrix.xaxis.tick_top()
    pylab.xticks(rotation=-90, fontsize=8)
    axmatrix.set_yticks(range(len(matrix)))
    mlabs=list(labs)
    for i in range(len(labs)):
        kz='-'
        for k in range(labelpad-len(labs[i])):kz+='-'
        if i%2==0: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
        else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
    axmatrix.set_yticklabels(mlabs[::-1], minor=False,fontsize=7,fontproperties=prop)
    axmatrix.yaxis.set_label_position('left')
    axmatrix.yaxis.tick_left()
    xlabels = axmatrix.get_xticklabels()
    for label in range(len(xlabels)):
        xlabels[label].set_rotation(90)
    axmatrix.text(1.1, 0.5, title,
                  horizontalalignment='left',
                  verticalalignment='center',rotation=270,
                  transform=axmatrix.transAxes,size=10)
    axmatrix.xaxis.grid(False)
    axmatrix.yaxis.grid(False)
    plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
    plt.close()
    m1='centroid'
    m2='single'
    # Compute and plot first dendrogram.
    fig = pylab.figure(figsize=(17,15))
    ax1 = fig.add_axes([0.1245,0.1,0.1,0.6])
    Y = sch.linkage(D, method=m1)
    Z1 = sch.dendrogram(Y,above_threshold_color="#ababab", orientation='left')
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_axis_bgcolor('None')
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.335,0.825,0.5295,0.1])
Y = sch.linkage(D, method=m2)
Z2 = sch.dendrogram(Y,above_threshold_color="#ababab")
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_axis_bgcolor('None')
# Plot distance matrix.
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
#D = E[idx1,:] #for logplot
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='equal', origin='lower', cmap=cmap)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
ac=pylab.colorbar(im, cax=axcolor)
# Label up
axmatrix.set_xticks(np.arange(len(matrix))-0)
mlabs=list(np.array(labs)[idx2])
for i in range(len(np.array(labs)[idx2])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx2][i])):kz+='-'
if i%2==1: mlabs[i]=kz+u' '+np.array(labs)[idx2][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx2][i]+u' '+kz
axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.xaxis.set_label_position('top')
axmatrix.xaxis.tick_top()
pylab.xticks(rotation=-90, fontsize=8)
axmatrix.set_yticks(np.arange(len(matrix))+0)
mlabs=list(np.array(labs)[idx1])
for i in range(len(np.array(labs)[idx1])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx1][i])):kz+='-'
if i%2==0: mlabs[i]=kz+u' '+np.array(labs)[idx1][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx1][i]+u' '+kz
axmatrix.set_yticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.yaxis.set_label_position('left')
axmatrix.yaxis.tick_left()
xlabels = axmatrix.get_xticklabels()
for label in xlabels:
label.set_rotation(90)
axmatrix.text(1.11, 0.5, title,
horizontalalignment='left',
verticalalignment='center',rotation=270,
transform=axmatrix.transAxes,size=10)
axmatrix.xaxis.grid(False)
axmatrix.yaxis.grid(False)
plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.close()
return [returnmatrix,returnmatrix.T]
```
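The two-dendrogram layout above hinges on one idiom: cluster the matrix, read the leaf order back from `sch.dendrogram`, and permute both axes by it. A minimal, self-contained sketch (Python 3 syntax; a random symmetric matrix stands in for the influence matrix):

```python
import numpy as np
import scipy.cluster.hierarchy as sch

rng = np.random.RandomState(0)
D = rng.rand(6, 6)
D = (D + D.T) / 2.0                     # symmetric, like a mutual-influence matrix

Y = sch.linkage(D, method='centroid')   # rows treated as observations
Z = sch.dendrogram(Y, no_plot=True)     # no_plot: we only want the leaf order
idx = Z['leaves']                       # permutation of row indices

D_ordered = D[idx, :][:, idx]           # reorder rows and columns consistently
```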
##################################
```
#run once
GC=[] #create backup of global country list
for i in goodcountries: GC.append(i)
file('E:/Dropbox/Public/datarepo/Set/db/GC.json','w').write(json.dumps(GC))
#create mini-world
goodcountries2=["United States of America",#mostinfluential
"Russian Federation",
"Netherlands",
"United Kingdom",
"Italy",
"France",
"Saudi Arabia",
"Singapore",
"Germany",
"United Arab Emirates",
"China",
"India",
"Iran (Islamic Republic of)",
"Nigeria",
"Venezuela (Bolivarian Republic of)",
"South Africa"]
```
######################################
```
#[importancematrix,influencematrix]=dendro('00',1,5)
c=['seaGreen','royalBlue','#dd1c77']
levels=[3]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('99',4,levels[j])
z=[np.mean(i) for i in influencematrix] #average country influence across columns (row means)
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Influence Vector")
ax[1].set_xlabel("Average Country Influence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Influence",fontsize=14)
#plt.savefig('powerfactor.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
civector={}
for i in range(len(influencevector)):
civector[i+1]={"inf":np.round(z[influencevector[i]],2),"country":labeler[goodcountries[influencevector[i]]]}
pd.DataFrame(civector).T.to_excel('c.xlsx')
c=['seaGreen','royalBlue','#dd1c77']
levels=[1,3,5]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('00',1,levels[j])
z=[np.mean(i) for i in importancematrix] #average country dependence across columns (row means)
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Dependence Vector")
ax[1].set_xlabel("Average Country Dependence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Dependence",fontsize=14)
plt.savefig('powerfactor2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
```
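The ranking idiom used repeatedly in the cells above (dense-rank `1/z`, then invert the resulting permutation) can be isolated like this (Python 3 syntax, made-up influence values):

```python
import numpy as np
import pandas as pd

z = np.array([0.5, 2.0, 1.0])   # mean influence per country (made up)
# dense-rank 1/z so the largest influence gets rank 0
s = pd.Series(1 / z).rank(method='dense').values.astype(int) - 1
# invert: influencevector[k] is the index of the k-th most influential country
influencevector = [0] * len(s)
for i in range(len(s)):
    influencevector[s[i]] = i
# influencevector == [1, 2, 0]: country 1 (influence 2.0) ranks first
```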
Create the energy cost matrix by filling each cell with the cost for the row country to import 1 TWh from the column country. Neglecting transport energy costs for now, this is the extraction energy cost. Let us consider only solar for now; try the optimization with all three sources and choose the one with the best objective value. The 1 TWh tier changes based on granularity.
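A hedged sketch of that matrix (Python 3 syntax): the extraction costs and distances below are made-up placeholders, and the distance scaler stands in for the grid/transport terms computed later by `gridtestimator`:

```python
import numpy as np

extraction = np.array([1.0, 1.5, 0.8])   # cost for each column country to produce 1 TWh (hypothetical)
dist = np.array([[0.0, 1.0, 2.0],
                 [1.0, 0.0, 1.0],
                 [2.0, 1.0, 0.0]])       # normalized pairwise distances (hypothetical)

# C[row, col]: cost for row to import 1 TWh from col
C = extraction[None, :] * (1 + 0.1 * dist)
```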
```
#weighted resource class calculator
def re(dic,total):
if dic!={}:
i=max(dic.keys())
mi=min(dic.keys())
run=True
keys=[]
weights=[]
counter=0
while run:
counter+=1 #safety break
if counter>1000: run=False
if i in dic:
if total<dic[i]:
keys.append(i)
weights.append(total)
run=False
else:
total-=dic[i]
keys.append(i)
weights.append(dic[i])
i-=1
if i<mi: run=False
if sum(weights)==0: return 0
else: return np.average(keys,weights=weights)
else: return 0
region=pd.read_excel(pop_path+'regions.xlsx').set_index('Country')
#load
aroei=json.loads(file(pop_path+'aroei.json','r').read())
groei=json.loads(file(pop_path+'groei.json','r').read())
ndists=json.loads(file(pop_path+'ndists.json','r').read())
#average resource quality calculator for the globe
def update_aroei():
global aroei
aroei={}
groei={}
for c in res:
for r in res[c]:
if r not in groei: groei[r]={}
for cl in res[c][r]['res']:
if cl not in groei[r]: groei[r][cl]=0
groei[r][cl]+=res[c][r]['res'][cl]
for r in groei:
x=[]
y=[]
for i in range(len(sorted(groei[r].keys()))):
x.append(float(sorted(groei[r].keys())[i]))
y.append(float(groei[r][sorted(groei[r].keys())[i]]))
aroei[r]=np.average(x,weights=y)
#https://www.researchgate.net/publication/299824220_First_Insights_on_the_Role_of_solar_PV_in_a_100_Renewable_Energy_Environment_based_on_hourly_Modeling_for_all_Regions_globally
cost=pd.read_excel(pop_path+'/maps/storage.xlsx')
#1Bdi - grid
def normdistance(a,b):
return ndists[cid[a]][cid[b]]
def gridtestimator(country,partner,forceptl=False):
#return normdistance(country,partner)
def electricitytrade(country,partner):
scaler=1
gridpartners=grid5['electricity']
#existing trade partners
if ((partner in gridpartners[country]) or (country in gridpartners[partner])):
scaler+=cost.loc[region.loc[country]]['egrid'].values[0]/2.0
#neighbors, but need to build
elif pop2iso[country] in c2neighbors:
if (pop2iso[partner] in c2neighbors[pop2iso[country]]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]/2.0*normdistance(country,partner)
#not neighbors or partners but in the same region, need to build
elif (region.loc[country][0]==region.loc[partner][0]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*3.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
return scaler
def ptltrade(country,partner):
#ptg costs scale with distance
scaler=1+cost.loc[11]['ptg']*100.0*normdistance(country,partner)
return scaler
if ptltrade(country,partner)<electricitytrade(country,partner) or forceptl:
return {"scaler":ptltrade(country,partner),"tradeway":"ptl"}
else: return {"scaler":electricitytrade(country,partner),"tradeway":"grid"}
#1Bdii - storage &curtailment
def storagestimator(country):
return cost.loc[region.loc[country]]['min'].values[0]
#curtoversizer
def curtestimator(country):
return cost.loc[region.loc[country]]['curt'].values[0]
#global benchmark eroei, due to state of technology
eroei={
#'oil':13,
#'coal':27,
#'gas':14,
#'nuclear':10,
#'biofuels':1.5,
#'hydro':84,
#'geo_other':22,
'pv':17.6,
'csp':10.2,
'wind':20.2 #was 24
}
#without esoei
#calibrated from global
```
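To make the weighted resource-class calculator concrete, here is a compact Python 3 re-statement of `re` with a worked call (keys are resource classes, values are available TWh; the numbers are made up):

```python
import numpy as np

def re(dic, total):
    """Capacity-weighted average class used when drawing `total` TWh,
    consuming the best (highest-numbered) classes first."""
    if not dic:
        return 0
    keys, weights = [], []
    i, mi = max(dic), min(dic)
    while i >= mi and total > 0:
        if i in dic:
            take = min(total, dic[i])
            keys.append(i)
            weights.append(take)
            total -= take
        i -= 1
    return np.average(keys, weights=weights) if sum(weights) else 0

# 10 TWh of class 5 plus 5 TWh of class 4 -> (5*10 + 4*5) / 15
re({5: 10.0, 4: 10.0}, 15.0)
```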
# ALLINONE
```
#initialize renewable totals for learning
total2014={'csp':0,'solar':0,'wind':0}
learning={'csp':0.04,'solar':0.04,'wind':0.02}
year=2014
for fuel in total2014:
total2014[fuel]=np.nansum([np.nansum(data[partner][year]['energy'][fuel]['cons']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])
total2014
#scenario id (folder id)
#first is scenario family, then do 4 variations of scenarios (2 self-influence, 2 power factor) as 01, 02...
sd='00' #only fossil profiles and non-scalable
#import resources
###################################
###################################
#load resources
predata=json.loads(file(pop_path+'maps/newres.json','r').read())
res={}
for c in predata:
res[c]={}
for f in predata[c]:
res[c][f]={}
for r in predata[c][f]:
res[c][f][r]={}
for year in predata[c][f][r]:
res[c][f][r][int(year)]=predata[c][f][r][year]
predata={}
print 'scenario',sd,'loaded resources',
###################################
###################################
#load demand2
predata=json.loads(file(pop_path+'demand2.json','r').read())
demand2={}
for c in predata:
demand2[c]={}
for year in predata[c]:
demand2[c][int(year)]=predata[c][year]
predata={}
print 'demand',
###################################
###################################
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
print 'tradedata',
###################################
###################################
#reload impex and normalize
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
print 'impex',
###################################
###################################
#load latest savedata
#we dont change the data for now, everything is handled through trade
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata={}
print 'data'
###################################
###################################
save3('00') #save default
#reset balance
ybalance={}
#recalculate balances
for year in range(2015,2101):
balance={}
if year not in ybalance:ybalance[year]={}
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]-=f1
balance[c]+=demand2[c][year]*8760*1e-12
if 'balance' not in data[c][year]['energy']:
data[c][year]['energy']['balance']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[c][year]['energy']['balance']['prod']['navg3']=max(0,balance[c])#balance can't be negative
data[c][year]['energy']['balance']['cons']['navg3']=max(0,balance[c])
ybalance[year]=balance
save3('01') #save default
def cbalance(year,c):
balance=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if '_' in fuel:
fuel=fuel[fuel.find('_')+1:]
#if fuel in data[c][year]['energy']:
# f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
for fuel in data[c][year]['energy']:
if fuel not in {"nrg_sum","nrg"}:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance-=f1
balance+=demand2[c][year]*8760*1e-12
return balance
def res_adv(country,fuel): #this country's wavg resource compared to global
x=[]
y=[]
if fuel=='solar':fuel='pv'
d=groei[fuel] #global wavg resource class
for i in range(len(sorted(d.keys()))):
if float(d[sorted(d.keys())[i]])>0.1:
x.append(float(sorted(d.keys())[i]))
y.append(float(d[sorted(d.keys())[i]]))
x2=[]
y2=[]
if country not in res: return 0
d2=res[country][fuel]['res'] #country's wavg resource class
for i in range(len(sorted(d2.keys()))):
if float(d2[sorted(d2.keys())[i]])>0.1:
x2.append(float(sorted(d2.keys())[i]))
y2.append(float(d2[sorted(d2.keys())[i]]))
if y2!=[]: return np.average(x2,weights=y2)*1.0/np.average(x,weights=y)
else: return 0
def costvectorranker(cv):
k={}
for i in cv:
for j in cv[i]:
k[(i)+'_'+str(j)]=cv[i][j]
return sorted(k.items(), key=lambda value: value[1])
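#aside (Python 3 sketch, hypothetical numbers): what costvectorranker yields
#for a tiny nested {partner: {fuel: cost}} dict - flattened to 'partner_fuel'
#keys and sorted ascending by cost, so the cheapest option comes first
def _rank_demo(cv):
    flat = {p + '_' + f: cv[p][f] for p in cv for f in cv[p]}
    return sorted(flat.items(), key=lambda kv: kv[1])
assert _rank_demo({'A': {'pv': 2.0, 'wind': 1.0}, 'B': {'pv': 0.5}}) == \
    [('B_pv', 0.5), ('A_wind', 1.0), ('A_pv', 2.0)]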
def trade(country,partner,y0,fuel,value,l0):
lifetime=l0+int(random.random()*l0)
tradeable[partner][fuel]-=value
key=tradeway[country][partner]+'_'+fuel
for year in range(y0,min(2101,y0+lifetime)):
#add production
if fuel not in data[partner][year]['energy']:
data[partner][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[partner][year]['energy'][fuel]['prod']['navg3']+=value
data[partner][year]['energy']['nrg_sum']['prod']['navg3']+=value
#add consumption
if fuel not in data[country][year]['energy']:
data[country][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][fuel]['cons']['navg3']+=value
data[country][year]['energy']['nrg_sum']['cons']['navg3']+=value
#add storage on country side (if not ptl)
if tradeway[country][partner]=='grid':
if fuel not in {'csp'}:
if 'storage' not in data[country][year]['energy']:
data[country][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy']['storage']['prod']['navg3']+=value*storagestimator(country)
data[country][year]['energy']['storage']['cons']['navg3']+=value*storagestimator(country)
if country!=partner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[partner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]+=value
#add export flow
if key not in tradealpha[partner][year]:tradealpha[partner][year][key]={}
if 'Export' not in tradealpha[partner][year][key]:tradealpha[partner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[partner][year][key]["Export"]:
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]+=value
#trade diversification necessity
def divfill(cv,divfactor,divbalance):
scaler=min(1.0,divbalance/\
sum([tradeable[cv[i][0][:cv[i][0].find('_')]]\
[cv[i][0][cv[i][0].find('_')+1:]] for i in range(divfactor)])) #take all or partial
for i in range(divfactor):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
trade(country,partner,year,fuel,max(0,tradeable[partner][fuel])*scaler,lifetime)
def tradefill(cv):
totrade=[]
tradesum=0
for i in range(len(cv)):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
if tradeable[partner][fuel]>balance-tradesum:
totrade.append((cv[i][0],balance-tradesum))
tradesum+=balance-tradesum
break
else:
totrade.append((cv[i][0],tradeable[partner][fuel]))
tradesum+=tradeable[partner][fuel]
for i in totrade:
partner=i[0][:i[0].find('_')]
fuel=i[0][i[0].find('_')+1:]
trade(country,partner,year,fuel,i[1],lifetime)
def omegafill(cv):
global wasalready
totrade=[]
tradesum=0
for i in range(len(cv)):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
if country==partner:
if fuel not in wasalready:
wasalready.add(fuel)
if tradeable[partner][fuel]>balance-tradesum:
totrade.append((cv[i][0],balance-tradesum))
tradesum+=balance-tradesum
break
else:
totrade.append((cv[i][0],tradeable[partner][fuel]))
tradesum+=tradeable[partner][fuel]
#trade(country,partner,year,fuel,min(cv[i][1],tradeable[partner][fuel]),lifetime)
for i in totrade:
partner=i[0][:i[0].find('_')]
fuel=i[0][i[0].find('_')+1:]
trade(country,partner,year,fuel,i[1],lifetime)
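#aside (Python 3 sketch, toy numbers): tradefill above walks the cost-ranked
#list greedily, taking each partner's tradeable capacity until the remaining
#balance is covered; the core of that loop is:
def _greedy_fill_demo(ranked, capacity, need):
    taken, total = [], 0.0
    for key, _cost in ranked:
        take = min(capacity[key], need - total)
        if take > 0:
            taken.append((key, take))
            total += take
        if total >= need:
            break
    return taken
assert _greedy_fill_demo([('A_pv', 1.0), ('B_wind', 2.0)],
                         {'A_pv': 5.0, 'B_wind': 5.0}, 8.0) == \
    [('A_pv', 5.0), ('B_wind', 3.0)]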
def nrgsum(country,year):
return np.nansum([data[country][year]['energy'][i]['prod']['navg3'] for i in data[country][year]['energy'] if i not in ['nrg_sum','sum','nrg']])
def liquidcheck(year,country):
oil=data[country][year]['energy']['oil']['prod']['navg3']
gas=data[country][year]['energy']['gas']['prod']['navg3']
try: ptl=sum([sum(tradealpha[country][year][i]['Import'].values()) for i in tradealpha[country][year] if 'ptl' in i])
except: ptl=0
liquidshare=(oil+gas+ptl)/nrgsum(country,year)
return max(0,(h2[country]-liquidshare)*nrgsum(country,year)) #return amount to fill with liquids
def liquidfill(country,year):
toadjust=0
tofill=liquidcheck(year,country)
adjustable={}
if tofill>0:
for fuel in data[country][year]['energy']:
if fuel not in {"nrg","nrg_sum","storage","oil","gas"}:
if data[country][year]['energy'][fuel]['prod']['navg3']>0:
if not np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']):
toadjust+=data[country][year]['energy'][fuel]['prod']['navg3']
for fuel in tradealpha[country][year]:
if fuel not in {"coal","oil","gas"}:
if 'ptl' not in fuel:
if 'Import' in tradealpha[country][year][fuel]:
toadjust+=np.nansum(tradealpha[country][year][fuel]["Import"].values())
#scan fuels to adjust, calculate adjust scaler
adjustscaler=1.0-tofill*1.0/toadjust
#scale down fuels, record what to put back as ptl
for fuel in data[country][year]['energy']:
if fuel not in {"nrg","nrg_sum","storage","oil","gas"}:
if data[country][year]['energy'][fuel]['prod']['navg3']>0:
if not np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']):
data[country][year]['energy'][fuel]['prod']['navg3']*=adjustscaler
if fuel not in adjustable: adjustable[fuel]={}
adjustable[fuel][pop2iso[country]]=data[country][year]['energy'][fuel]['prod']['navg3']*(1-adjustscaler)
for fuel in tradealpha[country][year]:
if fuel not in {"coal","oil","gas"}:
if 'ptl' not in fuel:
if 'Import' in tradealpha[country][year][fuel]:
for p in tradealpha[country][year][fuel]["Import"]:
tradealpha[country][year][fuel]["Import"][p]*=adjustscaler
if fuel[fuel.find('_')+1:] not in adjustable: adjustable[fuel[fuel.find('_')+1:]]={}
adjustable[fuel[fuel.find('_')+1:]][p]=tradealpha[country][year][fuel]["Import"][p]*(1-adjustscaler)
#put back ptl
for fuel in adjustable:
for p in adjustable[fuel]:
if 'ptl_'+str(fuel) not in tradealpha[country][year]:
tradealpha[country][year]['ptl_'+str(fuel)]={}
if 'Import' not in tradealpha[country][year]['ptl_'+str(fuel)]:
tradealpha[country][year]['ptl_'+str(fuel)]["Import"]={}
tradealpha[country][year]['ptl_'+str(fuel)]["Import"][p]=adjustable[fuel][p]
[importancematrix,influencematrix]=dendro('56',4,3) #2,5, or 4,3
z=[np.mean(i) for i in influencematrix] #average country influence across columns (row means)
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
CV={}
CV2={}
TB={}
#load data - if already saved
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
fc={"solar":'pv',"csp":'csp',"wind":'wind'}
divfactor=10 #min trade partners in trade diversification
divshare=0.2 #min share of the trade diversification, total
tradeway={}
maxrut=0.01 #for each type #max rampup total, if zero 5% of 1% 0.05 / 0.001
maxrur=1.5 #growth rate for each techno #max rampup rate 0.5
omegamin=0.1 #min share of the in-country diversification, per fuel
random.seed(2)
cs=set()
for year in range(2015,2101):
tradeable={}
if year not in TB:TB[year]={}
for i in range(len(goodcountries)):
country=goodcountries[i]
if country not in tradeable:tradeable[country]={'solar':0,'csp':0,'wind':0}
for fuel in {"solar","csp","wind"}:
if fuel not in data[country][year-1]['energy']:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
elif data[country][year-1]['energy'][fuel]['prod']['navg3']==0:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
else: tradeable[country][fuel]=max(nrgsum(country,year-1)*maxrut,
data[country][year-1]['energy'][fuel]['prod']['navg3']*maxrur)
for i in range(len(influencevector))[:]:#4344
country=goodcountries[influencevector[i]]
cs.add(country)
#if year==2015:
if True:
costvector={}
for j in range(len(goodcountries)):
partner=goodcountries[j]
if partner not in costvector:costvector[partner]={}
transactioncost=gridtestimator(country,partner)
if country not in tradeway:tradeway[country]={}
if partner not in tradeway[country]:tradeway[country][partner]=transactioncost["tradeway"]
for fuel in {"solar","csp","wind"}:
ru0=0
if fuel not in data[partner][year]['energy']: ru = ru0
elif partner not in res: ru = ru0
elif sum(res[partner][fc[fuel]]['res'].values())==0: ru=1
elif data[partner][year]['energy'][fuel]['prod']['navg3']==0: ru=ru0
else: ru=data[partner][year]['energy'][fuel]['prod']['navg3']*1.0/\
sum(res[partner][fc[fuel]]['res'].values())
ru=max(ru,0)
ru=max(1,0.3+ru**0.1) #or 0.3
costvector[partner][fuel]=1.0/influencematrix[influencevector[i]][j]*\
transactioncost['scaler']*\
ru*\
1.0/(eroei[fc[fuel]]*1.0/np.mean(eroei.values())*\
res_adv(partner,fuel)*\
aroei[fc[fuel]]*1.0/np.mean(aroei.values()))
cv=costvectorranker(costvector)
#fulfill trade diversification criterion
balance=divshare*cbalance(year,country)
if balance>0:
divfill(cv,divfactor,balance)
#fulfill in-country diversification criterion
wasalready=set()
balance=cbalance(year,country)*omegamin
if balance>0:
omegafill(cv) #fill first best source to min share
omegafill(cv) #fill second best source to min share
#fill up rest of trade
balance=cbalance(year,country)
if balance>0:
tradefill(cv)
#fill liquids up to min liquid level
liquidfill(country,year)
print i,
#CV2[country]=cv
print year
save3('56',cs)
file('E:/Dropbox/Public/datarepo/Set/db/CV.json','w').write(json.dumps(CV))
c='United States of America'
for i in range(len(CV[c])):
p=CV[c][i][0][:CV[c][i][0].find('_')]
f=CV[c][i][0][CV[c][i][0].find('_')+1:]
print p,f,tradeway[c][p],CV[c][i][1]
c='United States of America'
for i in range(len(CV2[c])):
p=CV2[c][i][0][:CV2[c][i][0].find('_')]
f=CV2[c][i][0][CV2[c][i][0].find('_')+1:]
print i,p,f,tradeway[c][p],CV2[c][i][1],tradeable[p][f]
save3('96',cs)
adjustscaler
toadjust
wasalready=set()
balance=cbalance(year,country)*omegamin
#if balance>0:
# omegafill(cv)
# omegafill(cv)
print balance
save3('99',cs)
save3('98',cs)
file('E:/Dropbox/Public/datarepo/Set/56data.json','w').write(json.dumps(data))
file('E:/Dropbox/Public/datarepo/Set/56trade.json','w').write(json.dumps(tradealpha))
#load data - if already saved
predata=json.loads(file('E:/Dropbox/Public/datarepo/Set/56data.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata=json.loads(file('E:/Dropbox/Public/datarepo/Set/56trade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
#global summarizer
for c in goodcountries:
for y in range(2017,2101):
for fuel in data[c][y]['energy']:
for flow in data[c][y]['energy'][fuel]:
if fuel not in data['World'][y]['energy']:data['World'][y]['energy'][fuel]={}
if flow not in data['World'][y]['energy'][fuel]:data['World'][y]['energy'][fuel][flow]={}
if 'navg3' not in data['World'][y]['energy'][fuel][flow]:data['World'][y]['energy'][fuel][flow]['navg3']=0
if np.isnan(data['World'][y]['energy'][fuel][flow]['navg3']):
data['World'][y]['energy'][fuel][flow].pop('navg3');
data['World'][y]['energy'][fuel][flow]['navg3']=0
if 'navg3' in data[c][y]['energy'][fuel][flow]:
if not np.isnan(data[c][y]['energy'][fuel][flow]['navg3']):
if data[c][y]['energy'][fuel][flow]['navg3']>0:
data['World'][y]['energy'][fuel][flow]['navg3']+=data[c][y]['energy'][fuel][flow]['navg3']
save3('99',['World'])
fuel="storage"
for y in range(1950,2101):
if fuel not in data['World'][y]["energy"]:cw=0
elif np.isnan(data['World'][y]["energy"][fuel]["prod"]["navg3"]):cw=0
else: cw=data['World'][y]["energy"][fuel]["prod"]["navg3"]
print y,' ',cw
#scenario run notes:
#8  - as ok, top 40, 20 y
#61 - all, 1 y
#62 - all, 20 y
#72 - all, 10 y
sum([sum(tradealpha[country][year][i]['Import'].values()) for i in tradealpha[country][year] if 'ptl' in i])
save3('07')
save3('06',[country])
#5 - lifetime 10r40
#6 - lifetime 1
#countries bid, all of them
#givers give, proportionally with the country influence
#getters get the energy for lifetime years
#countries see how much they have
#fill up the rest locally from the top two sources, proportionally with rq
cv[:5]
#res uti!!
###################################
###################################
###################################
###################################
gi={"open":{},"notrade":{}}
eroei={}
once=True
release={} #release reserves
for year in range(2015,2040):
print year
#SET PARAMETERS
#------------------------------------------------
#reset balance
balance={}
#recalculate balances
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]=-(demand2[c][year]*8760*1e-12-f1)
#1A
avgbalance=np.mean(balance.values())
needers=sorted([c for c in balance if balance[c]<0])[:]
givers=sorted([c for c in balance if balance[c]>avgbalance])
#update global technical eroei
fuel2={'csp':'csp','pv':'solar','wind':'wind'}
for t in fuel2:
fuel=fuel2[t]
eroei[t]=eroei0[t]*(np.nansum([np.nansum(data[partner][year]['energy'][fuel]['prod']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])*1.0/total2015[fuel])**learning[fuel]
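#aside (Python 3, illustrative numbers): the eroei update above is a learning
#curve, eroei = eroei0 * (cumulative_production / base_production)**learning_rate
def _learning_demo(eroei0, cumulative, base, rate):
    return eroei0 * (cumulative / base) ** rate
assert abs(_learning_demo(17.6, 2.0, 1.0, 0.04) - 17.6 * 2 ** 0.04) < 1e-9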
#################################################
#1B
#import random
#random.seed(sd*year)
#shuffle order of parsing countries
#random.shuffle(needers)
#------------------------------------------------
#1Ba
#counter for parsing the needers list
for counter in range(len(needers)):
country=needers[counter]
#print country,
need=-balance[country] #as a convention switch to positive, defined as 'need'
mintier=1 #in TWh
midtier=10 #mid tier TWh
hitier=100 #high tier TWh
if need>hitier: tiernumber=10
elif need>midtier: tiernumber=5
elif need>mintier: tiernumber=3
else: tiernumber=1
#OVERWRITE TIERNUMBER
tiernumber=3
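#aside (Python 3, illustrative): the bucketing above splits a need of N TWh
#into equal tranches by size class (before the override to 3 just above)
def _tiers_demo(need, mintier=1, midtier=10, hitier=100):
    if need > hitier: n = 10
    elif need > midtier: n = 5
    elif need > mintier: n = 3
    else: n = 1
    return [need * 1.0 / n] * n
assert _tiers_demo(250.0) == [25.0] * 10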
#MIN SHARE LIMIT
homeshare={'csp':False,'pv':False,'wind':False}
minshare=0.10
homesum=np.sum([data[country][year]['energy'][ii]['prod']['navg3'] \
for ii in data[country][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if homesum>0:
for fuel in {'csp','pv','wind'}:
if fuel2[fuel] in data[country][year]['energy']:
if (minshare>data[country][year]['energy'][fuel2[fuel]]['prod']['navg3']*1.0/homesum):
homeshare[fuel]=True
#if all are fulfilled, no need for the constraint
if np.array(list(homeshare.values())).all(): homeshare={'csp':False,'pv':False,'wind':False}
for tier in range(tiernumber):
tierneed=need*1.0/tiernumber
#------------------------------------------------
#1Bb
costvector={}
update_aroei() #update state of the resources globally to be able to rank between technologies
for partner in givers+[country]:
if partner in res:
for fuel in {'csp','pv','wind'}:
#if satisfies min share constraint
if not homeshare[fuel]:
#at each time step you must import each fuel type at least once
if res[partner][fuel]['res']!={}:
#query if giver can ramp up production this fast
#max investment cannot exceed rampuplimit (=15%)
ok=False
su=np.sum([data[partner][year]['energy'][ii]['prod']['navg3'] \
for ii in data[partner][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if su*rampuplimit>tierneed: #not tierneed
if fuel2[fuel] in data[partner][year]['energy']:
if np.isnan(data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']): ok=True
elif data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']==0: ok=True
elif (tierneed<data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']*fuelrampuplimit):ok=True
#again not tierneed
else: ok=False
else: ok=True #new resource, build it
if ok:
#rq (resource query) returns the average resource class at which this tierneed can be provided
#we multiply by the storage/curtailment needs
storagescaler=(1+storagestimator(partner)+curtestimator(partner))
rq=re(res[partner][fuel]['res'],tierneed)/storagescaler
#the costvector takes the resource class and converts it to eroei by comparing it
#the average resource class at a known point with a know eroei (at start in 2015)
#we are looking for high values, as a marginal quality of resource
costvector[fuel+'_'+partner]=(rq/aroei[fuel]*eroei[fuel]) #normalized resource quality over eroei
if costvector=={}:
print('impossible to fulfill demand', country, 'in tier', tier)
#1Bbi - normalize costvector to be able to compare with trade influence
else:
normcostvector=copy.deepcopy(costvector)
for i in normcostvector:
costvector[i]/=np.nanmean(list(costvector.values()))
#1Bbii - create costfactor, weights are tweakable
costfactor={}
for key in costvector:
partner=key[key.find('_')+1:]
costfactor[key]=((costvector[key]**2)*(influence(country,partner,selfinfluence)**2))**(1/4.0)
#costfactor[key]=costvector[key]
#The geometric mean is more appropriate than the arithmetic mean for describing proportional growth,
#both exponential growth (constant proportional growth) and varying growth; i
#n business the geometric mean of growth rates is known as the compound annual growth rate (CAGR).
#The geometric mean of growth over periods yields the equivalent constant growth rate that would
#yield the same final amount.
#influence(country,partner,2) - third parameter : relative importance of
#self compared to most influential country
#1Bc - choose partner
best=max(costfactor, key=costfactor.get)
tradepartner=best[best.find('_')+1:]
tradefuel=best[:best.find('_')]
#------------------------------------------------
#1Be - IMPLEMENT TRADE
lt=int(20+random.random()*15) #lifetime
#otherwise we have to implement resource updating
#1Beii - Reduce provider reserves within year
levels=list(res[tradepartner][tradefuel]['res'].keys())
level=max(levels)
tomeet=tierneed*1.0
#record release lt years in the future
if year+lt not in release:release[year+lt]={}
if tradepartner not in release[year+lt]:release[year+lt][tradepartner]={}
if tradefuel not in release[year+lt][tradepartner]:release[year+lt][tradepartner][tradefuel]={}
#hold resources for lt
while level>min(levels):
if level not in res[tradepartner][tradefuel]['res']: level-=1
elif res[tradepartner][tradefuel]['res'][level]<tomeet:
tomeet-=res[tradepartner][tradefuel]['res'][level]
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=res[tradepartner][tradefuel]['res'][level]
res[tradepartner][tradefuel]['res'].pop(level)
level-=1
else:
res[tradepartner][tradefuel]['res'][level]-=tomeet
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=tomeet
level=0
#------------------------------------------------
#1Be-implement country trade
#only production capacity stays, trade does not have to
gyear=int(1.0*year)
for year in range(gyear,min(2100,gyear+lt)):
#update globalinvestment
if year not in globalinvestment:globalinvestment[year]={"net":0,"inv":0}
globalinvestment[year]["net"]+=tierneed
globalinvestment[year]["inv"]+=tierneed/normcostvector[best]
#add production
if tradefuel not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy'][tradefuel]['prod']['navg3']+=tierneed
#add storage
if tradefuel not in {'csp'}:
if 'storage' not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy']['storage']['prod']['navg3']+=tierneed*storagestimator(tradepartner)
data[tradepartner][year]['energy']['storage']['cons']['navg3']+=tierneed*storagestimator(tradepartner)
year=gyear
#add consumption
if tradefuel not in data[country][year]['energy']:
data[country][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][tradefuel]['cons']['navg3']+=tierneed
#add trade flows if not self
key=gridtestimator(country,partner)['tradeway']+'_'+tradefuel
if country!=tradepartner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[tradepartner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]+=tierneed
#add export flow
if key not in tradealpha[tradepartner][year]:tradealpha[tradepartner][year][key]={}
if 'Export' not in tradealpha[tradepartner][year][key]:tradealpha[tradepartner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[tradepartner][year][key]["Export"]:
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]+=tierneed
#record trade to influence - could be weighted, default is 10%
updatenormimpex(country,tradepartner,'Import',tierneed/need)
updatenormimpex(tradepartner,country,'Export',tierneed/need)
#save data for processed countries
print('saving...')
if selfinfluence==10:
sde=10
sdk="open"
else:
sde=20
sdk="notrade"
gi[sdk]=globalinvestment
save3(sde,goodcountries)
open('E:/Dropbox/Public/datarepo/Set/gi.json','w').write(json.dumps(gi))
print('done', sde)
###################################
###################################
###################################
###################################
gi={"open":{},"notrade":{}}
eroei={}
once=True
rampuplimit=0.08 #overall generation ramp-up limit
fuelrampuplimit=0.25 #individual fuel ramp-up limit
for selfinfluence in {1,10}:
globalinvestment={}
release={} #release reserves
for year in range(2015,2040):
print(year)
#SET PARAMETERS
#------------------------------------------------
#release reserves
if year in release:
for c in release[year]:
for fuel in release[year][c]:
for level in release[year][c][fuel]:
if level in res[c][fuel]['res']:
res[c][fuel]['res'][level]+=release[year][c][fuel][level]
else: res[c][fuel]['res'][level]=release[year][c][fuel][level]
#reset balance
balance={}
#recalculate balances
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]=-(demand2[c][year]*8760*1e-12-f1)
#1A
avgbalance=np.mean(list(balance.values()))
needers=sorted([c for c in balance if balance[c]<0])[:]
givers=sorted([c for c in balance if balance[c]>avgbalance])
#update global technical eroei
fuel2={'csp':'csp','pv':'solar','wind':'wind'}
for t in fuel2:
fuel=fuel2[t]
eroei[t]=eroei0[t]*(np.nansum([np.nansum(data[partner][year]['energy'][fuel]['prod']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])*1.0/total2015[fuel])**learning[fuel]
#################################################
#1B
#import random
#random.seed(sd*year)
#shuffle order of parsing countries
#random.shuffle(needers)
#------------------------------------------------
#1Ba
#country for parsing the needers list
for counter in range(len(needers)):
country=needers[counter]
#print country,
need=-balance[country] #as a convention switch to positive, defined as 'need'
mintier=1 #in TWh
midtier=10 #mid tier TWh
hitier=100 #mid tier TWh
if need>hitier: tiernumber=10
elif need>midtier: tiernumber=5
elif need>mintier: tiernumber=3
else: tiernumber=1
#OVERWRITE TIERNUMBER
tiernumber=3
#MIN SHARE LIMIT
homeshare={'csp':False,'pv':False,'wind':False}
minshare=0.10
homesum=np.sum([data[country][year]['energy'][ii]['prod']['navg3'] \
for ii in data[country][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if homesum>0:
for fuel in {'csp','pv','wind'}:
if fuel2[fuel] in data[country][year]['energy']:
if (minshare>data[country][year]['energy'][fuel2[fuel]]['prod']['navg3']*1.0/homesum):
homeshare[fuel]=True
#if all are fulfilled, no need for the constraint
if np.array(list(homeshare.values())).all(): homeshare={'csp':False,'pv':False,'wind':False}
for tier in range(tiernumber):
tierneed=need*1.0/tiernumber
#------------------------------------------------
#1Bb
costvector={}
update_aroei() #update state of the resources globally to be able to rank between technologies
for partner in givers+[country]:
if partner in res:
for fuel in {'csp','pv','wind'}:
#if satisfies min share constraint
if not homeshare[fuel]:
#at each time step you must import each fuel type at least once
if res[partner][fuel]['res']!={}:
#query if giver can ramp up production this fast
#max investment cannot exceed rampuplimit (=15%)
ok=False
su=np.sum([data[partner][year]['energy'][ii]['prod']['navg3'] \
for ii in data[partner][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if su*rampuplimit>tierneed: #not tierneed
if fuel2[fuel] in data[partner][year]['energy']:
if np.isnan(data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']): ok=True
elif data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']==0: ok=True
elif (tierneed<data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']*fuelrampuplimit):ok=True
#again not tierneed
else: ok=False
else: ok=True #new resource, build it
if ok:
#rq (resource query) returns the average resource class at which this tierneed can be provided
#we multiply by the storage/curtailment needs
storagescaler=(1+storagestimator(partner)+curtestimator(partner))
rq=re(res[partner][fuel]['res'],tierneed)/storagescaler
#the costvector takes the resource class and converts it to eroei by comparing it
#the average resource class at a known point with a know eroei (at start in 2015)
#we are looking for high values, as a marginal quality of resource
costvector[fuel+'_'+partner]=(rq/aroei[fuel]*eroei[fuel]) #normalized resource quality over eroei
if costvector=={}:
print('impossible to fulfill demand', country, 'in tier', tier)
#1Bbi - normalize costvector to be able to compare with trade influence
else:
normcostvector=copy.deepcopy(costvector)
for i in normcostvector:
costvector[i]/=np.nanmean(list(costvector.values()))
#1Bbii - create costfactor, weights are tweakable
costfactor={}
for key in costvector:
partner=key[key.find('_')+1:]
costfactor[key]=((costvector[key]**2)*(influence(country,partner,selfinfluence)**2))**(1/4.0)
#costfactor[key]=costvector[key]
#The geometric mean is more appropriate than the arithmetic mean for describing proportional growth,
#both exponential growth (constant proportional growth) and varying growth; i
#n business the geometric mean of growth rates is known as the compound annual growth rate (CAGR).
#The geometric mean of growth over periods yields the equivalent constant growth rate that would
#yield the same final amount.
#influence(country,partner,2) - third parameter : relative importance of
#self compared to most influential country
#1Bc - choose partner
best=max(costfactor, key=costfactor.get)
tradepartner=best[best.find('_')+1:]
tradefuel=best[:best.find('_')]
#------------------------------------------------
#1Be - IMPLEMENT TRADE
lt=int(20+random.random()*15) #lifetime
#otherwise we have to implement resource updating
#1Beii - Reduce provider reserves within year
levels=list(res[tradepartner][tradefuel]['res'].keys())
level=max(levels)
tomeet=tierneed*1.0
#record release lt years in the future
if year+lt not in release:release[year+lt]={}
if tradepartner not in release[year+lt]:release[year+lt][tradepartner]={}
if tradefuel not in release[year+lt][tradepartner]:release[year+lt][tradepartner][tradefuel]={}
#hold resources for lt
while level>min(levels):
if level not in res[tradepartner][tradefuel]['res']: level-=1
elif res[tradepartner][tradefuel]['res'][level]<tomeet:
tomeet-=res[tradepartner][tradefuel]['res'][level]
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=res[tradepartner][tradefuel]['res'][level]
res[tradepartner][tradefuel]['res'].pop(level)
level-=1
else:
res[tradepartner][tradefuel]['res'][level]-=tomeet
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=tomeet
level=0
#------------------------------------------------
#1Be-implement country trade
#only production capacity stays, trade does not have to
gyear=int(1.0*year)
for year in range(gyear,min(2100,gyear+lt)):
#update globalinvestment
if year not in globalinvestment:globalinvestment[year]={"net":0,"inv":0}
globalinvestment[year]["net"]+=tierneed
globalinvestment[year]["inv"]+=tierneed/normcostvector[best]
#add production
if tradefuel not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy'][tradefuel]['prod']['navg3']+=tierneed
#add storage
if tradefuel not in {'csp'}:
if 'storage' not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy']['storage']['prod']['navg3']+=tierneed*storagestimator(tradepartner)
data[tradepartner][year]['energy']['storage']['cons']['navg3']+=tierneed*storagestimator(tradepartner)
year=gyear
#add consumption
if tradefuel not in data[country][year]['energy']:
data[country][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][tradefuel]['cons']['navg3']+=tierneed
#add trade flows if not self
key=gridtestimator(country,partner)['tradeway']+'_'+tradefuel
if country!=tradepartner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[tradepartner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]+=tierneed
#add export flow
if key not in tradealpha[tradepartner][year]:tradealpha[tradepartner][year][key]={}
if 'Export' not in tradealpha[tradepartner][year][key]:tradealpha[tradepartner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[tradepartner][year][key]["Export"]:
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]+=tierneed
#record trade to influence - could be weighted, default is 10%
updatenormimpex(country,tradepartner,'Import',tierneed/need)
updatenormimpex(tradepartner,country,'Export',tierneed/need)
#save data for processed countries
print('saving...')
if selfinfluence==10:
sde=10
sdk="open"
else:
sde=20
sdk="notrade"
gi[sdk]=globalinvestment
save3(sde,goodcountries)
open('E:/Dropbox/Public/datarepo/Set/gi.json','w').write(json.dumps(gi))
print('done', sde)
```
# Introduction to Data Science
# Lecture 21: Dimensionality Reduction
*COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/*
In this lecture, we'll discuss
* dimensionality reduction
* Principal Component Analysis (PCA)
* using PCA for visualization
Recommended Reading:
* G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning, Ch. 10.2 [digital version available here](http://www-bcf.usc.edu/~gareth/ISL/)
* V. Powell, [Principal Component Analysis: Explained Visually](http://setosa.io/ev/principal-component-analysis/)
* B. Everitt and T. Hothorn, [An Introduction to Applied Multivariate Analysis with R](https://www.springer.com/us/book/9781441996497), Ch. 3
```
# imports and setup
import numpy as np
import pandas as pd
pd.set_option('display.notebook_repr_html', False)
from sklearn.datasets import load_iris, load_digits
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn import metrics
from sklearn.metrics import homogeneity_score
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.style.use('ggplot')
import seaborn as sns
```
## Recap: Supervised vs. Unsupervised Learning
### Supervised Learning
**Data:** both the features, $x$, and a response, $y$, for each item in the dataset.
**Goal:** 'learn' how to predict the response from the features.
**Examples:**
* Regression
* Classification
### Unsupervised Learning
**Data:** only the features, $x$, for each item in the dataset.
**Goal:** discover 'interesting' things about the dataset.
**Examples:**
* Clustering
* Dimensionality reduction, Principal Component Analysis (PCA)
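The distinction shows up directly in scikit-learn's `fit` calls: supervised estimators take both the features and the response, while unsupervised ones take only the features. A minimal sketch on synthetic data (the particular model choices here are just illustrations):
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = rng.randn(30, 2)                      # features only
y = 2.0 * X[:, 0] + 0.1 * rng.randn(30)   # a response, for the supervised case

LinearRegression().fit(X, y)              # supervised: needs X and y
KMeans(n_clusters=2, n_init=10).fit(X)    # unsupervised: needs only X
```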
## Dimensionality Reduction
In data science, [**dimensionality reduction**](https://en.wikipedia.org/wiki/Dimensionality_reduction) is the process of reducing the number of features in a dataset.
There are two approaches to dimensionality reduction: **feature selection** and **feature extraction**.
In **feature selection**, one just picks a subset of the available features. We discussed feature selection some in Lecture 15 in the context of classification.
In **feature extraction**, the data is transformed from a high-dimensional space to a lower dimensional space. The most common method is called **principal component analysis (PCA)**, where the transformation is taken to be linear, but many other methods exist. In this class, we'll focus on PCA.
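The difference between the two approaches is easy to see in code; here is a small sketch (the columns kept in the selection step are an arbitrary choice):
```
import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(1).randn(50, 5)

# feature selection: keep a subset of the original columns
X_selected = X[:, [0, 2]]

# feature extraction: each new feature is a linear combination of all columns
X_extracted = PCA(n_components=2).fit_transform(X)

print(X_selected.shape, X_extracted.shape)  # both are (50, 2)
```
Both results have 2 columns, but the selected features are original measurements, while the extracted ones are new derived variables.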
**Why dimensionality reduction?**
- It removes redundancies and simplifies the dataset, making it easier to understand.
- It's easier to visualize low-dimensional data.
- It reduces storage space for large datasets (because there are fewer features).
- It reduces time for computationally intensive tasks (because less computation is required).
- Reducing dimensionality can help avoid overfitting in supervised learning tasks.
## Principal Component Analysis (PCA)
**Problem:** Many datasets have too many features to be able to explore or understand in a reasonable way. It's difficult to even make a reasonable plot for a high-dimensional dataset.
**Idea**: In a [Principal Component Analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis), we find a small number of new features, which are linear combinations of the old features, that 'explain' most of the variance in the data. The *principal component directions* are the directions in feature space in which the data is the most variable.
Before we get into the mathematical description of Principal Component Analysis (PCA), we can gain a lot of intuition by taking a look at [this visual overview](http://setosa.io/ev/principal-component-analysis/) by Victor Powell.
**Mathematical description:** Let the $p$ features in our dataset be $x = (x_1, x_2, \ldots x_p)$. We define a new feature, the *first principal component direction*, by
$$
z_1 = \phi_{1,1} x_1 + \phi_{2,1} x_2 + \cdots + \phi_{p,1} x_p = \phi_1^t x
$$
Here, the coefficients $\phi_{j,1}$ are the *loadings* of the $j$-th feature on the first principal component. The vector $\phi_1 = (\phi_{1,1}, \phi_{2,1},\cdots, \phi_{p,1})$ is called the *loadings vector* for the first principal component.
We want to find the loadings so that $z_1$ has maximal sample variance.
Let $X$ be the $n\times p$ matrix where $X_{i,j}$ is the $j$-th feature for item $i$ in the dataset. $X$ is just the collection of the data in a matrix.
**Important:** Assume each of the variables has been normalized to have mean zero, *i.e.*, the columns of $X$ should have zero mean.
A short calculation shows that the sample variance of $z_1$ is then given by
$$
Var(z_1) = \frac{1}{n} \sum_{i=1}^n \left( \sum_{j=1}^p \phi_{j,1} X_{i,j} \right)^2.
$$
The variance can be arbitrarily large if the $\phi_{j,1}$ are allowed to be arbitrarily large. We constrain the $\phi_{j,1}$ to satisfy $\sum_{j=1}^p \phi_{j,1}^2 = 1$. In vector notation, this can be written $\| \phi_1 \| = 1$.
Putting this together, the first principal component is defined by $z_1 = \phi_1^t x$ where $\phi_1$ is the solution to the optimization problem
\begin{align*}
\max_{\phi_1} \quad & \textrm{Var}(z_1) \\
\text{subject to} \quad & \| \phi_1\|^2 = 1.
\end{align*}
Using linear algebra, it can be shown that $\phi_1$ is exactly the eigenvector corresponding to the largest eigenvalue of the *Gram matrix*, $X^tX$.
We similarly define the second principal direction to be the linear combination of the features,
$z_2 = \phi_2^t x$ with the largest variance, subject to the additional constraint that $z_2$ be uncorrelated with $z_1$. This is equivalent to $\phi_1^t \phi_2 = 0$. This corresponds to taking $\phi_2$ to be the eigenvector corresponding to the second largest eigenvalue of $X^tX$. Higher principal directions are defined analogously.
## PCA in practice
We can use the [```PCA``` function](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) from the ```sklearn.decomposition``` library.
### Example: the Iris dataset
The dataset contains 4 features (attributes) for 150 samples: 50 from each of 3 different types of iris plants.
**Features (attributes):**
1. sepal length (cm)
+ sepal width (cm)
+ petal length (cm)
+ petal width (cm)
**Classes:**
1. Iris Setosa
+ Iris Versicolour
+ Iris Virginica
```
# import dataset
iris = load_iris()
X = iris.data
y = iris.target
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
```
### Previous ideas for plotting the data:
1. just plot along first two dimensions and ignore other dimensions
+ plot in three dimensions (3d scatter plot) and ignore other dimensions
+ make a scatterplot matrix with all pairs of dimensions
```
# plot along first two dimensions and ignore other dimensions
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,s=30)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.show()
# 3D scatter plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 0], X[:, 1],zs= X[:, 2], c=y, cmap=cmap_bold,s=30)
ax.set_xlabel('Dimension 1')
ax.set_ylabel('Dimension 2')
ax.set_zlabel('Dimension 3')
plt.show()
# scatterplot matrix
sns.set()
sns.pairplot(sns.load_dataset("iris"), hue="species");
```
### New idea: use PCA to plot the 2 most 'important' directions
```
# PCA analysis
pca_model = PCA()
X_PCA = pca_model.fit_transform(X)
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=y, cmap=cmap_bold,s=30)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
```
### Example: use PCA to visualize cluster analysis of iris data
Principal components are very helpful for visualizing clusters.
```
cluster_model = AgglomerativeClustering(linkage="average", affinity='euclidean', n_clusters=3)
y_pred = cluster_model.fit_predict(X)
h = homogeneity_score(labels_true = y, labels_pred = y_pred)
print('homogeneity score for clustering is ' + str(h))
# plot using PCA
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=y_pred, cmap=cmap_bold,s=30)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
```
Compare this to the previous way we plotted using just the first two features
```
# plot using first two features
plt.scatter(X[:, 0], X[:, 1], c=y_pred, cmap=cmap_bold,s=30)
plt.xlabel('sepal length (cm) ')
plt.ylabel('sepal width (cm) ')
plt.show()
```
## Number of principal components
For plotting the data, we generally just use the first 2 principal components. In other applications requiring dimensionality reduction, you might want to identify the number of principal components that can be used to explain the data. This can be done by considering the percentage of variance explained by each component or a *scree plot*.
```
# Variance ratio of the four principal components
var_ratio = pca_model.explained_variance_ratio_
print(var_ratio)
plt.plot([1,2,3,4], var_ratio, '-o')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,4.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4])
plt.show()
```
## Example: visualizing clusters in the MNIST handwritten digit dataset
The MNIST handwritten digit dataset consists of images of handwritten digits, together with labels indicating which digit is in each image.
Because both the features and the labels are present in this dataset (and labels for large datasets are generally difficult/expensive to obtain), this dataset is frequently used as a benchmark to compare various methods.
For example, [this webpage](http://yann.lecun.com/exdb/mnist/) describes a variety of different classification results on MNIST (Note, the tests on this website are for a larger and higher resolution dataset than we'll use.) To see a comparison of classification methods implemented in scikit-learn on the MNIST dataset, see
[this page](http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html).
The MNIST dataset is also frequently used for benchmarking clustering algorithms, and because it has labels, we can evaluate the homogeneity or purity of the clusters.
There are several versions of the dataset. We'll use the one that is built-in to scikit-learn, described [here](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html).
* Classes: 10
* Samples per class: $\approx$180
* Samples total: 1797
* Feature Dimension: 64 (8 pixels by 8 pixels)
* Features: integers 0-16
Here are some examples of the images. Note that the digits have been size-normalized and centered in a fixed-size ($8\times8$ pixels) image.
<img src="http://scikit-learn.org/stable/_images/sphx_glr_plot_digits_classification_001.png" width="500">
```
digits = load_digits()
X = scale(digits.data)
y = digits.target
print(type(X))
n_samples, n_features = X.shape
n_digits = len(np.unique(digits.target))
print("n_digits: %d, n_samples %d, n_features %d" % (n_digits, n_samples, n_features))
plt.figure(figsize= (10, 10))
for ii in np.arange(25):
plt.subplot(5, 5, ii+1)
plt.imshow(np.reshape(X[ii,:],(8,8)), cmap='Greys',interpolation='none')
plt.show()
```
Here we'll use PCA to visualize the results of a clustering of the MNIST dataset.
This example was taken from the [scikit-learn examples](http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html).
```
X_PCA = PCA(n_components=2).fit_transform(X)
kmeans_model = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans_model.fit(X_PCA)
print(metrics.homogeneity_score(labels_true=y, labels_pred=kmeans_model.labels_))
# Plot the decision boundaries. For that, we will assign a color to each point in a mesh
x_min, x_max = X_PCA[:, 0].min() - 1, X_PCA[:, 0].max() + 1
y_min, y_max = X_PCA[:, 1].min() - 1, X_PCA[:, 1].max() + 1
h = .1 # step size of the mesh .02
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh.
Z = kmeans_model.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize= (10, 10))
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(X_PCA[:, 0], X_PCA[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans_model.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
```
## Example: 1988 Olympic heptathlon results
The [heptathlon](https://en.wikipedia.org/wiki/Heptathlon) is an Olympic event consisting of seven events:
1. 100m hurdles
+ shot
+ high jump
+ 200m run
+ long jump
+ javelin
+ 800m run
The values for each of these events are then combined according to [official Olympic rules](https://en.wikipedia.org/wiki/Heptathlon#Points_system) to generate a *score* for each athlete. The athlete with the largest score wins.
We'll use PCA to analyze the results for the women's heptathlon from the 1988 Olympics held in Seoul, Korea.
The results for all 25 athletes are contained in the file `heptathlon.csv`.
This example was taken from 'An Introduction to Applied Multivariate Analysis with R' by Brian Everitt and Torsten Hothorn, Ch. 3.10.2.
```
hept = pd.read_csv("heptathlon.csv")
print(hept)
hept.describe()
sns.set()
sns.pairplot(hept);
```
**Question:** Why are some of the results in the scatterplot matrix negatively correlated?
The scatterplot matrix also reveals that there is an outlier, namely Launa (PNG). We'll remove this athlete before continuing.
```
# remove outlier
hept = hept.drop(24)
```
Now, we'll do a principal component analysis on this data.
```
# scale the dataset
X = scale(hept.drop(['name ',' score '],axis=1).values)
# find PCA and transform to new coordinates
pca_model = PCA()
X_PCA = pca_model.fit_transform(X)
# create a new pandas dataframe
df_plot = pd.DataFrame(X_PCA, columns=['PC1', 'PC2', 'PC3', 'PC4', 'PC5', 'PC6', 'PC7'])
df_plot.head()
fig,ax1 = plt.subplots()
ax1.set_xlim(X_PCA[:,0].min()-1,X_PCA[:,0].max()+1)
ax1.set_ylim(X_PCA[:,1].min()-1,X_PCA[:,1].max()+1)
# Plot Principal Components 1 and 2
for i,name in enumerate(hept['name '].values):
ax1.annotate(name, (X_PCA[i,0], X_PCA[i,1]), ha='center',fontsize=10)
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
plt.show()
# Variance ratio of the four principal components
var_ratio = pca_model.explained_variance_ratio_
print(var_ratio)
plt.plot([1,2,3,4,5,6,7], var_ratio, '-o')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,7.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4,5,6,7])
plt.show()
```
Most of the variance in the athletes is contained in the first principal component.
Let's make a plot of the first principal component vs. the score.
```
fig,ax1 = plt.subplots()
ax1.set_xlim(X_PCA[:,0].min()-1,X_PCA[:,0].max()+1)
ax1.set_ylim(hept[' score '].min()-10,hept[' score '].max()+10)
# Plot Principal Components 1 and score
for i,name in enumerate(hept['name '].values):
    ax1.annotate(name, (X_PCA[i,0], hept[' score '][i]), ha='center', fontsize=10)
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Score')
plt.show()
```
The first principal component is highly correlated with the score determined by Olympic rules. Note that the winner of the heptathlon, [Jackie Joyner-Kersee](https://en.wikipedia.org/wiki/Jackie_Joyner-Kersee) really stands out.
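That claim is easy to quantify. A small helper (the function name is ours; in the notebook you would pass `X_PCA[:, 0]` and `hept[' score '].values` — here it is demonstrated on synthetic data):

```python
import numpy as np

def pc_score_correlation(pc_values, scores):
    """Pearson correlation between one principal-component coordinate and the official score."""
    return np.corrcoef(pc_values, scores)[0, 1]

# demo on synthetic, perfectly linearly related data
pc = np.array([3.0, 1.0, -1.0, -3.0])
score = np.array([1000.0, 3000.0, 5000.0, 7000.0])
print(round(pc_score_correlation(pc, score), 6))  # -1.0
```

Note that the sign of a principal component is arbitrary, so the correlation with the score may come out strongly negative rather than positive.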
Read more about the 1988 Summer Olympics Women's heptathlon [here](https://en.wikipedia.org/wiki/Athletics_at_the_1988_Summer_Olympics_%E2%80%93_Women%27s_heptathlon).
| github_jupyter |

# Populations of Countries
What are the most and least populated countries in the world?
We are going to use Gapminder data from http://gapm.io/dpop to find out.
First we need to download the data from the Google spreadsheet. Select the following code cell and use the `Run` button to run the code.
```
spreadsheet_key = '18Ep3s1S0cvlT1ovQG9KdipLEoQ1Ktz5LtTTQpDcWbX0' # from the URL
spreadsheet_gid = '1668956939' # the first sheet
csv_link = 'https://docs.google.com/spreadsheets/d/'+spreadsheet_key+'/export?gid='+spreadsheet_gid+'&format=csv'
import pandas as pd
data = pd.read_csv(csv_link)
data
```
## Current Population
Since that data set contains the years 1800 to 2100 (including expected future population sizes), we need to filter the data by year.
The following code will also sort the countries by population (in descending order) and re-number the rows. Run the code cell.
```
year = 2019
current_population = data[data['time']==year].sort_values('population', ascending=False).reset_index(drop=True)
current_population
```
### Ten Most Populated Countries
To find the 10 most populated countries from this data set, we use the `.head(10)` method. Run the cell.
Notice that the row numbering starts from 0.
```
current_population.head(10)
```
### Ten Least Populated Countries
Display the 10 least populated countries using `.tail(10)`.
```
current_population.tail(10)
```
## Population Rank of a Specific Country
To see population, and rank, of a particular country, run the following code cell. Change the name in the first line to look at a different country.
```
country = 'Canada'
current_population[current_population['name']==country]
```
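Because `current_population` is sorted by population in descending order and re-numbered from 0, a country's rank is simply its row index plus one. A sketch of that idea (the helper name is ours, demonstrated on a tiny hand-made table):

```python
import pandas as pd

def population_rank(df, country):
    """1-based rank of a country in a table sorted by population (descending, index reset)."""
    matches = df.index[df['name'] == country]
    return int(matches[0]) + 1 if len(matches) else None

demo = pd.DataFrame({'name': ['China', 'India', 'Canada'],
                     'population': [1.43e9, 1.37e9, 3.7e7]})
print(population_rank(demo, 'Canada'))  # 3
```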
## Listing All Countries
To see a list of all countries in the data set, run the next cell.
```
current_population['name'].values
```
## Comparing Populations
To compare populations of countries, we can make a horizontal bar graph.
After you run the following cell, you can mouse over and zoom in on parts of the graph to have a closer look.
```
import cufflinks as cf
cf.go_offline()
height=1000
current_population.iplot(kind='barh', x='name', y='population', title='Populations of Countries in '+str(year), layout=cf.Layout(height=height))
```
## Population Change
To look at the population of a country over time, we can make a line graph.
```
country = 'Canada'
data[data['name']==country].iplot(x='time', y='population', title='Population of '+country)
```
# Summary
This notebook allowed you to discover and rank the populations of countries. For more information about this data set, and other data sets to explore, check out [Gapminder.org](https://www.gapminder.org).
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| github_jupyter |
```
import pandas as pd
import numpy as np
from datetime import date, datetime
from dateutil.parser import parse
import matplotlib.pyplot as plt
# Date is up to Nov 6
date_data = pd.read_csv('/Users/liuye/ForPython/Optimal-Cryptocurrency-Trading-Strategies-Step2/Medium_Analysis/Webscrapping/kybermedium.csv')
date_data.loc[0, 'Date'] = '·Nov 6'  # use .loc to avoid pandas chained-assignment issues
date_data.loc[1, 'Date'] = '·Nov 5'
dt = []
for x in date_data['Date']:
    d = x[1:]
    dt.append(parse(d))
len(dt)
date_data['Date'][119]
price = pd.read_csv('/Users/liuye/ForPython/Optimal-Cryptocurrency-Trading-Strategies-Step2/Medium_Analysis/Data Extraction/KNCUSDT-1d-binance.csv')
price
ddt = pd.DataFrame(dt, columns = ['timestamp'])
ddt['signal'] = 1
def convert_datetime(adt):
    return datetime.strftime(adt, '%Y-%m-%d')
ddt['timestamp']= ddt['timestamp'].apply(convert_datetime)
ddt['timestamp']
df = pd.merge(price,ddt,how="left", on='timestamp')
df = df.fillna(0)
df
#Statistical Analysis
df['clop'] = np.log(df['close'])-np.log(df['open'])
df[['signal','clop','volume','trades']].corr()
df[['open','high','low','close']].corr()
plt.plot(df['open'])
# buy it at the open and sell it at next day open
pnl = []
pnl.append(1000)
for k in range(len(df) - 1):  # looks ahead to k+1, so stop one row early
    if df.iloc[k]['signal'] == 1:
        share = pnl[-1]/df.iloc[k]['open']
        earn = share * df.iloc[k+1]['open']
        pnl.append(earn)
pnl1 = pnl
# buy it at the open and sell it at next day close
pnl = []
pnl.append(1000)
for k in range(len(df) - 1):  # looks ahead to k+1, so stop one row early
    if df.iloc[k]['signal'] == 1:
        share = pnl[-1]/df.iloc[k]['open']
        pnl.append(share * df.iloc[k+1]['close'])
pnl[-1]
# buy it at the close and sell it at next day close
pnl = []
pnl.append(1000)
for k in range(len(df) - 1):  # looks ahead to k+1, so stop one row early
    if df.iloc[k]['signal'] == 1:
        share = pnl[-1]/df.iloc[k]['close']
        pnl.append(share * df.iloc[k+1]['close'])
pnl[-1]
# buy it at the open and sell it at close
pnl = []
pnl.append(1000)
for k in range(len(df)):
    if df.iloc[k]['signal'] == 1:
        share = pnl[-1]/df.iloc[k]['open']
        earn = share * df.iloc[k]['close']
        pnl.append(earn)
pnl2 = pnl
baseline = df['open']/df['open'][0] * 1000
plt.figure(figsize=(8, 6))
plt.plot(baseline,label = "baseline", linestyle=":")
plt.plot(pnl1,label = "strategy 1", linestyle="--")
plt.plot(pnl2,label = "strategy 2", linestyle="-")
plt.xlabel('Date from beginning',fontsize=20)
plt.ylabel('price',fontsize=20)
plt.title('The price change of our two strategies compared with baseline',fontsize=15)
plt.legend( prop={'size': 15})
for x in df['signal']:
    print(x)
```
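The loop-based backtests above can also be written in vectorized form, avoiding the row-by-row `iloc` calls. A sketch of the "buy at the open, sell at the same day's close" strategy, assuming `open`, `close`, and `signal` columns like those in `df` (the function name is ours):

```python
import numpy as np
import pandas as pd

def backtest_open_to_close(df, start_cash=1000.0):
    """Multiply capital by close/open on every signal day (buy the open, sell the close)."""
    daily_return = np.where(df['signal'] == 1, df['close'] / df['open'], 1.0)
    return start_cash * np.cumprod(daily_return)

demo = pd.DataFrame({'open':  [10.0, 10.0, 20.0],
                     'close': [11.0, 12.0, 10.0],
                     'signal': [1, 0, 1]})
curve = backtest_open_to_close(demo)
print(round(curve[-1], 6))  # 550.0
```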
| github_jupyter |
# `Lib-INVENT`: Reinforcement Learning - ROCS + reaction filter
The purpose of this notebook is to illustrate the assembly of a configuration input file containing a ROCS input.
ROCS is a licensed virtual screening software based on similarity between input compounds and a specified reference (or target) molecule. For more information on ROCS, please refer to the OpenEye website: https://www.eyesopen.com/rocs
*Note that in order to use ROCS, an OpenEye license is needed.*
```
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
project_directory = "</path/to/project/directory>"
output_directory = "</path/to/output/directory>"
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
    os.mkdir(output_directory)
except FileExistsError:
    pass
```
## Setting up the configuration
The configuration is set up analogously to the previous tutorials. The difference arises in the scoring function composition, where the QSAR predictive property component is now replaced with the ROCS model.
The previously discussed selective Amide-coupling/Buchwald reaction filters are imposed.
### 1. Run Type Block
```
# initialize the dictionary
configuration = {
"run_type": "reinforcement_learning"
}
```
### 2. Logging Block
```
# Add logging
configuration.update({
"logging": {
"sender": " ",
"recipient": "local",
"logging_path": os.path.join(output_directory, "run.log"),
"job_name": "RL Demo ROCS",
"job_id": " "
}
})
```
### 3. Parameters Block
```
#start assembling parameters block
configuration.update({
"parameters": {
"actor": os.path.join(project_directory, "trained_models/reaction_based.model"),
"critic": os.path.join(project_directory, "trained_models/reaction_based.model"),
"scaffolds": ["[*:0]N1CCN(CC1)CCCCN[*:1]"],
"n_steps": 100,
"learning_rate": 0.0001,
"batch_size": 128,
"randomize_scaffolds": False # important since a RF is to be imposed.
}
})
#configure learning strategy
learning_strategy = {
"name": "DAP",
"parameters": {
"sigma": 120
}
}
configuration["parameters"]["learning_strategy"] = learning_strategy
```
#### Scoring strategy
```
# configure scoring strategy
scoring_strategy = {
"name": "standard"
}
# configure diversity filter
diversity_filter = {
"name": "NoFilterWithPenalty", # To use a DF. The alternative option is "NoFilter"
}
scoring_strategy["diversity_filter"] = diversity_filter
# configure reaction filter
reaction_filter = {
"type":"selective",
"reactions":{"1": ["[#6;!$(C(C=*)(C=*));!$([#6]~[O,N,S]);$([#6]~[#6]):1][C:2](=[O:3])[N;D2;$(N(C=[O,S]));!$(N~[O,P,S,N]):4][#6;!$(C=*);!$([#6](~[O,N,S])N);$([#6]~[#6]):5]>>[#6:1][C:2](=[O:3])[*].[*][N:4][#6:5]"],
"0": ["[c;$(c1:[c,n]:[c,n]:[c,n]:[c,n]:[c,n]:1):1]-!@[N;$(NC)&!$(N=*)&!$([N-])&!$(N#*)&!$([ND1])&!$(N[O])&!$(N[C,S]=[S,O,N]),H2&$(Nc1:[c,n]:[c,n]:[c,n]:[c,n]:[c,n]:1):2]>>[*][c;$(c1:[c,n]:[c,n]:[c,n]:[c,n]:[c,n]:1):1].[*][N:2]"]}
}
scoring_strategy["reaction_filter"] = reaction_filter
```
##### Define the scoring function
This is the important point of difference from the previous tutorials. A ROCS similarity measure is imposed to guide the agent to propose compounds structurally resembling a known haloperidol ligand.
The inputs used for the publication are provided in the tutorial/models/ROCS/ folder.
```
scoring_function = {
"name": "custom_sum",
"parallel": False,
"parameters": [
{
"component_type": "parallel_rocs_similarity",
"model_path": None,
"name": "RefTversky ROCS sim",
"smiles": [],
"specific_parameters": {
"color_weight": 0.5,
"custom_cff": "<project_directory>/tutorial/models/ROCS/implicit_MD_mod_generic_hydrophobe.cff",
"enumerate_stereo": True,
"input_type": "shape_query",
"max_num_cpus": 8,
"rocs_input": "<project_directory>/tutorial/models/ROCS/haloperidol_3_feats.sq",
"shape_weight": 0.5,
"similarity_measure": "RefTversky",
"transformation_type": "no_transformation"
},
"weight": 1
},
{
"component_type": "custom_alerts",
"name": "Custom alerts",
"weight": 1,
"model_path": None,
"smiles": [
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
],
"specific_parameters": None
}]
}
scoring_strategy["scoring_function"] = scoring_function
# Update the paramters block with the scoring strategy
configuration["parameters"]["scoring_strategy"] = scoring_strategy
```
## Write out the configuration
```
configuration_JSON_path = os.path.join(output_directory, "input.json")
with open(configuration_JSON_path, 'w') as f:
    json.dump(configuration, f, indent=4, sort_keys=True)
```
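Before launching a run, it can be worth checking that the file on disk parses back and contains the expected top-level blocks. A minimal sanity check (the helper name is ours; the block names match those assembled above):

```python
import json, os, tempfile

def check_config(path, required_keys=("run_type", "logging", "parameters")):
    """Reload a configuration JSON and report any missing top-level blocks."""
    with open(path) as f:
        cfg = json.load(f)
    missing = [k for k in required_keys if k not in cfg]
    return cfg, missing

# demo with a throwaway file
demo_cfg = {"run_type": "reinforcement_learning", "logging": {}, "parameters": {}}
tmp = os.path.join(tempfile.mkdtemp(), "input.json")
with open(tmp, "w") as f:
    json.dump(demo_cfg, f)
print(check_config(tmp)[1])  # [] -> nothing missing
```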
# Run
Execute in a Jupyter notebook:
```
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {project_directory}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_directory, "run.err"), 'w') as file:
    file.write(captured_err_stream.stdout)
```
Execute from the command line:
```
$ conda activate lib-invent
$ python <project_directory>/input.py <configuration_JSON_path>
```
| github_jupyter |
# This file contains the Form Recognizer model training and inferencing code
#### Read the configuration file to get the service endpoint and key
```
########### Python Form Recognizer Labeled Async Train #############
import json
import time
from requests import get, post
#read form recognizer service parameters
with open('config.json','r') as config_file:
    config = json.load(config_file)
# Endpoint URL
endpoint = config['endpoint']
post_url = endpoint + r"/formrecognizer/v2.1/custom/models"
apim_key = config['apim-key']
filetype = 'application/json'
headers = {
# Request headers
'Content-Type': filetype,
'Ocp-Apim-Subscription-Key': apim_key,
}
body = {
"source": "",
"sourceFilter": {
"prefix": "",
"includeSubFolders": False
},
"useLabelFile": False
}
```
# Unsupervised training
### Function to train an unsupervised Form Recognizer model
```
n_tries = 60
n_try = 0
wait_sec = 60
def train_fr_model(sas_url, folder_path, model_file):
    body['source'] = sas_url
    body['sourceFilter']['prefix'] = folder_path
    # trigger training
    try:
        resp = post(url = post_url, json = body, headers = headers)
        #print(body)
        #print(headers)
        if resp.status_code != 201:
            print("Training model failed (%s):\n%s" % (resp.status_code, json.dumps(resp.json())))
            return
        print("Training Started:\n%s" % resp.headers)
        get_url = resp.headers["location"]
    except Exception as e:
        print("Error occurred when triggering training:\n%s" % str(e))
        quit()
    n_try = 0
    # wait for training to complete and save the model info to a JSON file
    while n_try < n_tries:
        try:
            resp = get(url = get_url, headers = headers)
            resp_json = resp.json()
            if resp.status_code != 200:
                print("Model training failed (%s):\n%s" % (resp.status_code, json.dumps(resp_json)))
                break
            model_status = resp_json["modelInfo"]["status"]
            print("Model Status:", model_status)
            if model_status == "ready":
                #print("Training succeeded:\n%s" % json.dumps(resp_json))
                print("Training succeeded:")
                with open(model_file, "w") as f:
                    json.dump(resp_json, f)
                break
            if model_status == "invalid":
                print("Training failed. Model is invalid:\n%s" % json.dumps(resp_json))
                break
            # training still running: wait and retry
            time.sleep(wait_sec)
            n_try += 1
        except Exception as e:
            msg = "Model training returned error:\n%s" % str(e)
            print(msg)
            break
    if resp.status_code != 200:
        print("Train operation did not complete within the allocated time.")
```
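The train-then-poll pattern above (POST to start a job, then GET the returned `location` URL until the status settles) generalizes beyond training. A sketch with the status fetcher injected, so the loop can be exercised without a live endpoint (the names here are illustrative, not part of the Form Recognizer API):

```python
import time

def poll_until_done(fetch_status, done_states=("ready", "invalid"),
                    n_tries=60, wait_sec=0.0):
    """Call fetch_status() until it returns a terminal state; None means we timed out."""
    for _ in range(n_tries):
        status = fetch_status()
        if status in done_states:
            return status
        time.sleep(wait_sec)
    return None

# demo: a fake service that reports "creating" twice, then "ready"
responses = iter(["creating", "creating", "ready"])
result = poll_until_done(lambda: next(responses))
print(result)  # ready
```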
# FR Model Inferencing
```
import requests
import glob
import os
import datetime
import tempfile
import pandas as pd
import shutil
from concurrent.futures import ThreadPoolExecutor, as_completed
params_infer = {
"includeTextDetails": True
}
headers_infer = {
# Request headers
'Content-Type': 'application/pdf',
'Ocp-Apim-Subscription-Key': apim_key,
}
```
#### Form Recognizer inferencing function
```
#######################################################
# FR Inference function multithreading
#######################################################
def fr_mt_inference(files, json_fld, model_id):
    post_url = endpoint + "/formrecognizer/v2.1/custom/models/%s/analyze" % model_id
    ###################
    # send all requests in one go
    ###################
    session = requests.Session()
    url_list = []
    for fl in files:
        # read file
        fname = os.path.basename(fl)
        #print("working on file: %s, %s" %(fl, datetime.datetime.now()))
        with open(fl, "rb") as f:
            data_bytes = f.read()
        # set variables to default values
        get_url = None
        st_time = datetime.datetime.now()  # the datetime *module* is imported, so qualify the class
        gap_between_requests = 1  # in seconds
        try:
            # send post request (wait and resend if the service is overloaded)
            while True:
                resp = session.post(url = post_url, data = data_bytes, headers = headers_infer, params = params_infer)
                if resp.status_code != 429:
                    break
                time.sleep(1)
            #print(fl, resp.status_code)
            if resp.status_code != 202:
                print("POST analyze failed:\n%s" % json.dumps(resp.json()))
            #print("POST analyze succeeded for %s \n" % fl)
            get_url = resp.headers["operation-location"]
        except Exception as e:
            print("POST analyze failed 1:\n%s" % str(e))
        url_list.append((fl, fname, get_url))
        end_time = datetime.datetime.now()
        delta = (end_time - st_time).total_seconds()
        if delta < gap_between_requests:
            time.sleep(gap_between_requests - delta)
    ####################################
    # get all responses in one go
    ####################################
    n_tries = 15
    wait_sec = 15
    for cnt in range(n_tries):
        # get results of requests sent
        completed = []
        for i in range(len(url_list)):
            fl, fname, get_url = url_list[i]
            if get_url is not None:
                try:
                    resp = session.get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
                    resp_json = resp.json()
                    if resp.status_code != 200:
                        print("GET analyze results failed: %s\n%s" % (fl, json.dumps(resp_json)))
                        break
                    status = resp_json["status"]
                    if status == "succeeded":
                        print("Analysis succeeded for %s:\n" % fl)
                        with open(os.path.join(json_fld, fname.replace('.pdf', '.json')), 'w') as outfile:
                            json.dump(resp_json, outfile)
                        completed.append(i)
                    if status == "failed":
                        print("Analysis failed: %s\n%s" % (fl, json.dumps(resp_json)))
                        break
                except Exception as e:
                    msg = "GET analyze results failed 2:\n%s" % str(e)
                    print(msg)
                    break
        # remove completed requests from the pending list (back to front so indices stay valid)
        completed.sort(reverse=True)
        for i in completed:
            url_list.pop(i)
        print("iteration", cnt, "complete. Still", len(url_list), "to infer")
        if len(url_list) == 0:
            break
        time.sleep(wait_sec)
    ####################################
    # return files not inferred
    ####################################
    session.close()
    if len(url_list) == 0:
        return "All files successfully inferred by FR"
    else:
        return url_list
```
#### Form Recognizer multi-threading inferencing function
```
# Form Recognizer inference
def fr_model_inference(src_dir, json_dir, model_file, thread_cnt):
    # read model details
    with open(model_file, 'r') as mf:
        model = json.load(mf)
    if model['modelInfo']['modelId'] is not None:
        model_id = model['modelInfo']['modelId']
        print("model id: %s" % model_id)
    else:
        print("Model details not present; either model training was not performed or the file is missing")
        return
    # read files and divide into chunks
    fls = glob.glob(os.path.join(src_dir, "*.pdf"))
    print("inferencing", len(fls), "files with", thread_cnt, "thread count")
    fchunk = chunkify(fls, 100)
    for chunk in fchunk:
        fr_threads = min(len(chunk), thread_cnt)
        flist = chunkify(chunk, fr_threads)
        # call FR inference
        threads = []
        with ThreadPoolExecutor(max_workers=thread_cnt) as executor:
            for files in flist:
                threads.append(executor.submit(fr_mt_inference, files, json_dir, model_id))
            for task in as_completed(threads):
                print(task.result())
```
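`fr_model_inference` relies on a `chunkify` helper that is not defined in this excerpt. Its two call sites (`chunkify(fls, 100)` and `chunkify(chunk, fr_threads)`) are consistent with splitting a list into `n` roughly equal parts; a minimal implementation under that reading (the original may differ):

```python
def chunkify(lst, n):
    """Split lst into n contiguous, roughly equal-sized chunks."""
    n = max(1, min(n, len(lst)))  # never ask for more chunks than items
    k, m = divmod(len(lst), n)
    return [lst[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

print(chunkify(list(range(7)), 3))  # [[0, 1, 2], [3, 4], [5, 6]]
```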
| github_jupyter |
```
import sys
import os
from glob import glob
import random
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
import pysam
import h5py
from joblib import Parallel, delayed
%env KERAS_BACKEND tensorflow
import tensorflow as tf
from keras import (
models, layers, activations,
optimizers, losses, callbacks
)
import keras.backend as K
import keras
from sklearn.metrics import (
roc_curve, precision_recall_curve, roc_auc_score
)
from sklearn.model_selection import KFold, train_test_split
from sklearn.utils import shuffle
from adapter_detector.utils import (
get_fast5_read_id_mapping
)
from adapter_detector.signal import (
get_fast5_fiveprime
)
from adapter_detector.train import (
generate_training_data,
build_model, train_model,
SquiggleSequence
)
## Default plotting params
%matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 2
style['ytick.major.size'] = 2
sns.set(font_scale=2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7'])
cmap = ListedColormap(pal.as_hex())
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
keras.__version__
tf.__version__
```
First we need to create the mapping between read ids and Fast5 files so that we can pair full-length reads from the BAM files with the signal that they originated from. This information is present in the sequencing summary files:
```
read_id_filemap = get_fast5_read_id_mapping(
'/cluster/ggs_lab/mtparker/analysis_notebooks/fiveprime_adapter_identification/data/201804_col0_5adapterLIG/5prime_adapter_rebasecalled/',
'/cluster/ggs_lab/mtparker/analysis_notebooks/fiveprime_adapter_identification/data/201804_col0_5adapterLIG/5prime_adapter_fast5s/'
)
positive_bam_fn = '/cluster/gjb_lab/nschurch/git/NS_Nanopore_paper/manuscript/supplementary/fulllength_data/201902_col0_2916_5adapter_exp2_fulllength.bam'
negative_bam_fn = '/cluster/gjb_lab/nschurch/git/NS_Nanopore_paper/manuscript/supplementary/fulllength_data/201902_col0_2916_5adapter_exp2_short_relaxed.bam'
with pysam.AlignmentFile(positive_bam_fn) as p:
    print(p.mapped)
with pysam.AlignmentFile(negative_bam_fn) as n:
    print(n.mapped)
```
Now we can extract the 5' end signals (the last 3000 signal measurements) from each Fast5 file. We also extract a set of internal signals to use as additional negative examples.
```
def get_fiveprime_for_reads_in_bam(bam_fn, read_id_filemap, size=3000, internal=False):
    with pysam.AlignmentFile(bam_fn) as bam:
        signals = Parallel(n_jobs=16)(
            delayed(get_fast5_fiveprime)(
                r.query_name, read_id_filemap[r.query_name], size, internal)
            for r in bam.fetch()
        )
    if internal:
        read_ids, sig_lens, fiveprime_signals, internal_signals = zip(*signals)
        internal_signals = [x for x in internal_signals if x is not None and len(x) == size]
        return np.asarray(fiveprime_signals), np.asarray(internal_signals)
    else:
        read_ids, sig_lens, fiveprime_signals = zip(*signals)
        return np.asarray(fiveprime_signals)
pos_signals = get_fiveprime_for_reads_in_bam(
positive_bam_fn, read_id_filemap, size=3000, internal=False)
neg_signals, neg_signals_internal = get_fiveprime_for_reads_in_bam(
negative_bam_fn, read_id_filemap, size=3000, internal=True)
def write_to_hdf5(h5_fn, pos_signals, neg_signals, neg_signals_internal):
    with h5py.File(h5_fn, 'w') as f:
        f.create_dataset('pos_signals', data=pos_signals)
        f.create_dataset('neg_signals', data=neg_signals)
        f.create_dataset('neg_internal_signals', data=neg_signals_internal)
write_to_hdf5(
'data/training_data_round1.h5',
pos_signals, neg_signals, neg_signals_internal
)
EPOCHS = 100
BATCH_SIZE = 128
STEPS_PER_EPOCH = 100
VAL_STEPS_PER_EPOCH = 20
train_gen, val_gen, test_data, _ = generate_training_data(
'data/training_data_round1.h5',
batch_size=BATCH_SIZE,
steps_per_epoch=STEPS_PER_EPOCH,
val_steps_per_epoch=VAL_STEPS_PER_EPOCH
)
model = build_model(3000)  # build the model before the first round of training
model, history = train_model(
    model, train_gen, val_gen,
    epochs=EPOCHS,
    steps_per_epoch=STEPS_PER_EPOCH,
    val_steps_per_epoch=VAL_STEPS_PER_EPOCH
)
X_test, y_test = test_data
y_pred = model.predict(X_test)
fpr, tpr, thresh = roc_curve(y_test, y_pred)
i = len(thresh) - np.searchsorted(thresh[::-1], 0.9)
print(thresh[i], fpr[i], tpr[i])
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(fpr, tpr, color=pal[0], lw=2, label='AUC={:.3f}'.format(roc_auc_score(y_test, y_pred)))
ax.plot([0, 1], [0, 1], color='#555555', ls='--')
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize=(8, 5))
ax.hist(y_pred[y_test == 1], bins=np.linspace(0, 1, 101), color=pal[0], alpha=0.5)
ax.hist(y_pred[y_test == 0], bins=np.linspace(0, 1, 101), color=pal[1], alpha=0.5)
plt.show()
# use the existing model to remove false negatives from BLAST
with h5py.File('data/training_data_round1.h5') as f:
    pos_signals = f['pos_signals'][:]
    neg_signals = f['neg_signals'][:]
    neg_signals_internal = f['neg_internal_signals'][:]
preds = model.predict(neg_signals.reshape(-1, 3000, 1)).squeeze()
neg_signals_filt = neg_signals[preds < 0.5]
write_to_hdf5(
'data/training_data_round2.h5',
pos_signals, neg_signals_filt, neg_signals_internal
)
with h5py.File('data/training_data_round2.h5') as f:
    print(f['pos_signals'].shape)
    print(f['neg_signals'].shape)
train_gen, val_gen, test_data, _ = generate_training_data(
'data/training_data_round2.h5',
batch_size=BATCH_SIZE,
steps_per_epoch=STEPS_PER_EPOCH,
val_steps_per_epoch=VAL_STEPS_PER_EPOCH
)
model = build_model(3000)
model, history = train_model(
model, train_gen, val_gen,
epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
val_steps_per_epoch=VAL_STEPS_PER_EPOCH
)
X_test, y_test = test_data
y_pred = model.predict(X_test)
fpr, tpr, thresh = roc_curve(y_test, y_pred)
i = len(thresh) - np.searchsorted(thresh[::-1], 0.9)
print(thresh[i], fpr[i], tpr[i])
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(fpr, tpr, color=pal[0], lw=2, label='AUC={:.3f}'.format(roc_auc_score(y_test, y_pred)))
ax.plot([0, 1], [0, 1], color='#555555', ls='--')
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize=(8, 5))
ax.hist(y_pred[y_test == 1], bins=np.linspace(0, 1, 101), color=pal[0], alpha=0.5)
ax.hist(y_pred[y_test == 0], bins=np.linspace(0, 1, 101), color=pal[1], alpha=0.5)
plt.show()
model.save('data/model_weights_070319.h5')
```
Use K Fold cross validation to do proper model evaluation:
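The cell below uses scikit-learn's `KFold` on the positive and negative sets separately. The index bookkeeping it performs is simple enough to sketch directly (a plain-NumPy stand-in for illustration, not the sklearn implementation):

```python
import numpy as np

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs covering n samples in k contiguous folds."""
    folds = np.array_split(np.arange(n), k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

splits = list(kfold_indices(10, 5))
print(len(splits), [len(test) for _, test in splits])  # 5 [2, 2, 2, 2, 2]
```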
```
def get_data(h5_fn):
    with h5py.File(h5_fn) as h5_file:
        pos = h5_file['pos_signals'][:]
        neg = np.concatenate([
            h5_file['neg_signals'][:],
            h5_file['neg_internal_signals'][:]
        ])
    return pos, neg
pos_data, neg_data = get_data('data/training_data_round2.h5')
pos_kf = KFold(n_splits=5, shuffle=False).split(pos_data)
neg_kf = KFold(n_splits=5, shuffle=False).split(neg_data)
pos_pred = []
neg_pred = []
for (pos_train_idx, pos_test_idx), (neg_train_idx, neg_test_idx) in zip(pos_kf, neg_kf):
    pos_train = pos_data[pos_train_idx]
    pos_test = pos_data[pos_test_idx]
    neg_train = neg_data[neg_train_idx]
    neg_test = neg_data[neg_test_idx]
    pos_train, pos_val = train_test_split(pos_train, test_size=0.1)
    neg_train, neg_val = train_test_split(neg_train, test_size=0.1)
    train_gen = SquiggleSequence(
        pos_train, neg_train,
        batch_size=128, steps_per_epoch=100,
    )
    val_gen = SquiggleSequence(
        pos_val, neg_val,
        batch_size=128, steps_per_epoch=20,
    )
    model = build_model(3000)
    model, history = train_model(
        model, train_gen, val_gen,
        epochs=100,
        steps_per_epoch=100,
        val_steps_per_epoch=20
    )
    pos_pred.append(model.predict(pos_test.reshape(-1, 3000, 1)).squeeze())
    neg_pred.append(model.predict(neg_test.reshape(-1, 3000, 1)).squeeze())
pos_pred = np.concatenate(pos_pred)
neg_pred = np.concatenate(neg_pred)
y_pred = np.concatenate([pos_pred, neg_pred])
y_true = np.concatenate([[1] * len(pos_pred), [0] * len(neg_pred)])
fpr, tpr, thresh = roc_curve(y_true, y_pred)
prec, rec, pr_thresh = precision_recall_curve(y_true, y_pred)
i = len(thresh) - np.searchsorted(thresh[::-1], 0.9)
print(thresh[i], fpr[i], tpr[i])
fig, axes = plt.subplots(figsize=(10, 5), ncols=2)
axes[0].plot(fpr, tpr, color=pal[0], lw=2)
axes[0].plot([0, 1], [0, 1], color='#555555', ls='--')
axes[0].set_xlabel('False Positive Rate')
axes[0].set_ylabel('True Positive Rate')
axes[1].plot(rec, prec)
axes[1].plot([0, 1], [prec[1], prec[1]], color='#555555', ls='--')
axes[1].set_xlabel('Recall')
axes[1].set_ylabel('Precision')
axes[1].set_ylim(0, 1.05)
plt.tight_layout()
plt.savefig('figures/roc_pr_curve.svg')
plt.show()
```
| github_jupyter |
# (7) Signal (CPD) Search and Detection Criteria
In Part (6), we learned that any CPDs of interest in the SR 4 disk should be point sources (i.e., their size is $\ll$ the resolution) and could be pretty faint, perhaps comparable to the residuals from the circumstellar disk model. We need to develop a means of quantifying what level of emission we are able to robustly detect. One intuitive way for doing that is to inject a fake signal into the data, perform the same fitting/post-processing analysis we used on the real observations to get a residual dataset, and then try to recover the fake signal.
For the last part, *recovery*, there is not a widely agreed-upon metric. We'll have to develop our own approach and demonstrate that it works in practice. Your goal in this part of the project is to establish an **automated** way to search for and quantify a point source in the SR 4 disk gap. This machinery should (1) quantify the *significance* of any such feature (i.e., its signal-to-noise ratio, or the ratio of the peak to the "local" standard deviation); and (2) measure its flux; and (3) measure its position in the SR 4 disk gap (given how narrow the gap is, the radius should be just about 0.08", but the *azimuth* is unknown a priori). There are no right or wrong answers here: your job is to experiment and see what might work. Start simple...if we can make it work with a straightforward search in a (r,az)-map like before, let's do that!
I know this is sort of backwards, worrying about the recovery part before the injection part. But I think it makes more sense in terms of the work that needs to be done. The *injection* doesn't involve much activity from a research perspective (it's mostly code machinery in a big loop). I will show you how that works once your machinery is tested. To help you develop, I've posted two example residual images online (see below) with mock CPDs already injected.
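As one concrete starting point for that machinery, the recipe below is sketched on a synthetic (r, az)-map: restrict to an annulus around the expected gap radius, take the brightest bin, and compare it to the scatter of the remaining bins in the annulus. The function name, the tolerance, and the injected mock source are all illustrative choices, not a final detection criterion:

```python
import numpy as np

def detect_gap_source(rtmap, rbins, tbins, r_gap=0.08, r_tol=0.02):
    """Return (snr, peak_flux, peak_azimuth) for the brightest bin near r_gap."""
    in_gap = np.abs(rbins - r_gap) <= r_tol           # columns inside the gap annulus
    gap = rtmap[:, in_gap]
    j, i = np.unravel_index(np.argmax(gap), gap.shape)
    peak = gap[j, i]
    noise = np.std(np.delete(gap, j, axis=0))         # scatter excluding the peak azimuth row
    return peak / noise, peak, tbins[j]

# synthetic check: pure-noise map with a mock source injected at az = +90 deg, r ~ 0.08"
rng = np.random.default_rng(0)
rbins = np.arange(0.003, 0.6, 0.003)
tbins = np.linspace(-180, 180, 181)
rtmap = rng.normal(0.0, 1.0, (len(tbins), len(rbins)))
rtmap[np.argmin(np.abs(tbins - 90.0)), np.argmin(np.abs(rbins - 0.08))] += 10.0
snr, peak, az = detect_gap_source(rtmap, rbins, tbins)
print(az)  # 90.0
```

The recovered azimuth matches the injection, and the SNR comes out well above the background scatter; a more careful version might estimate the noise from azimuths far from the candidate peak only.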
```
import os, sys, time
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import fits
import cmasher as cmr
from astropy.visualization import (AsinhStretch, LogStretch, LinearStretch, ImageNormalize)
from matplotlib.patches import Ellipse
# residuals - imaged like in DSHARP I, II
data = fits.open('SR4_residtest.fits')
dimage = np.squeeze(data[0].data)
dheader = data[0].header
# coordinates
nx, ny = dheader['NAXIS1'], dheader['NAXIS2']
RA = dheader['CRVAL1'] + dheader['CDELT1'] * (np.arange(nx) - (dheader['CRPIX1'] - 1))
DEC = dheader['CRVAL2'] + dheader['CDELT2'] * (np.arange(ny) - (dheader['CRPIX2'] - 1))
RAo, DECo = 3600 * (RA - dheader['CRVAL1']), 3600 * (DEC - dheader['CRVAL2'])
offRA, offDEC = -0.060, -0.509
RAo_shift, DECo_shift = RAo - offRA, DECo - offDEC
dRA, dDEC = np.meshgrid(RAo_shift, DECo_shift)
im_bounds = (dRA.max(), dRA.min(), dDEC.min(), dDEC.max())
# now set the RA and DEC offset ranges you want to show
dRA_lims = [0.5, -0.5] # (same reverse ordering to ensure E is to the left)
dDEC_lims = [-0.5, 0.5]
# now define the color-map, intensity limits, and stretch
cmap = cmr.eclipse
vmin, vmax = -0.03, 2.0 # these are in mJy/beam units
norm = ImageNormalize(vmin=vmin, vmax=vmax, stretch=AsinhStretch())
# set up and plot the image on the specified scale
fig, ax = plt.subplots(figsize=(10,4))
im = ax.imshow(1e3*dimage, origin='lower', cmap=cmap, extent=im_bounds, norm=norm, aspect='equal')
ax.set_xlim(dRA_lims)
ax.set_ylim(dDEC_lims)
ax.set_xlabel('RA offset [arcsec]')
ax.set_ylabel('DEC offset [arcsec]')
# add a scalebar
cb = plt.colorbar(im, ax=ax, pad=0.05)
cb.set_label('surface brightness [mJy / beam]', rotation=270, labelpad=13)
# make an ellipse to show the PSF (beam) dimensions, in the lower left corner (its very small!)
dbeam_maj, dbeam_min, dbeam_PA = 3600 * dheader['BMAJ'], 3600 * dheader['BMIN'], dheader['BPA']
dbeam = Ellipse((dRA_lims[0] + 0.15*np.diff(dRA_lims), dDEC_lims[0] + 0.15*np.diff(dDEC_lims)),
dbeam_maj, dbeam_min, 90-dbeam_PA)
dbeam.set_facecolor('w')
ax.add_artist(dbeam)
from scipy.interpolate import interp1d
# inclination and PA
incl, PA = 22., 26.
# convert these to radius
inclr, PAr = np.radians(incl), np.radians(PA)
# deproject and rotate to new coordinate frame
xp = (dRA * np.cos(PAr) - dDEC * np.sin(PAr)) / np.cos(inclr)
yp = (dRA * np.sin(PAr) + dDEC * np.cos(PAr))
# now convert to polar coordinates (r in arcseconds, theta in degrees)
# note that theta starts along the minor axis (theta = 0), and rotates clockwise in the sky plane)
r = np.sqrt(xp**2 + yp**2)
theta = np.degrees(np.arctan2(yp, xp))
# radius and azimuth bin centers (and their widths)
#rbins = np.linspace(0.003, 0.6, 300) # in arcseconds
rbins = np.arange(0.003, 0.6, 0.003)
tbins = np.linspace(-180, 180, 181) # in degrees
def razmap(imarray, rbins, tbins):
# bin widths
dr = np.abs(rbins[1] - rbins[0])
dt = np.abs(tbins[1] - tbins[0])
# initialize the (r, az)-map and radial profile
rtmap = np.empty((len(tbins), len(rbins)))
SBr, err_SBr = np.empty(len(rbins)), np.empty(len(rbins))
# loop through the bins to populate the (r, az)-map and radial profile
for i in range(len(rbins)):
# identify pixels that correspond to the radial bin (i.e., in this annulus)
in_annulus = ((r >= (rbins[i] - 0.5 * dr)) & (r < (rbins[i] + 0.5 * dr)))
# accumulate the azimuth values and surface brightness values in this annulus
az_annulus = theta[in_annulus]
SB_annulus = imarray[in_annulus]
# average the intensities (and their scatter) in the annulus
SBr[i], err_SBr[i] = np.average(SB_annulus), np.std(SB_annulus)
# populate the azimuthal bins for the (r, az)-map at this radius
for j in range(len(tbins)):
# identify pixels that correspond to the azimuthal bin
in_wedge = ((az_annulus >= (tbins[j] - 0.5 * dt)) & (az_annulus < (tbins[j] + 0.5 * dt)))
# if there are pixels in that bin, average the corresponding intensities
if (len(SB_annulus[in_wedge]) > 0):
rtmap[j,i] = np.average(SB_annulus[in_wedge])
else:
rtmap[j,i] = -1e10 # this is a temporary placeholder; it will be fixed in next piece of code
# now "fix" the (r, az)-map where there are too few pixels in certain az bins (inner disk)
# it's OK if this part is a "black box": it is not important / relevant
for i in range(len(rbins)):
# extract an azimuthal slice of the (r, az)-map
az_slice = rtmap[:,i]
# identify if there's missing information in an az bin along that slice:
# if so, fill it in with linear interpolation along the slice
if np.any(az_slice < -1e5):
# extract non-problematic bins in the slice
x_slice, y_slice = tbins[az_slice >= -1e5], az_slice[az_slice >= -1e5]
# pad the arrays to make sure they span a full circle in azimuth
x_slice_ext = np.pad(x_slice, 1, mode='wrap')
x_slice_ext[0] -= 360.
x_slice_ext[-1] += 360.
y_slice_ext = np.pad(y_slice, 1, mode='wrap')
# define the interpolation function
raz_func = interp1d(x_slice_ext, y_slice_ext, bounds_error=True)
# interpolate and replace those bins in the (r, az)-map
fixed_slice = raz_func(tbins)
rtmap[:,i] = fixed_slice
class raz_out:
def __init__(self, razmap, r, az, prof, eprof):
self.razmap = razmap
self.r = r
self.az = az
self.prof = prof
self.eprof = eprof
return raz_out(rtmap, rbins, tbins, SBr, err_SBr)
deproj_d = razmap(dimage, rbins, tbins)
# temp
raz_d = deproj_d.razmap
# define the full (r, az)-map boundaries with a list of the min/max (r, az) bins
rtmap_bounds = (rbins.min(), rbins.max(), tbins.min(), tbins.max())
# set the radius and azimuth ranges you want to show
t_lims = [-180, 180]
r_lims = [0, 0.3]
# now define the color-map, intensity limits, and stretch
cmap = cmr.eclipse
vmin, vmax = 0, 250
norm = ImageNormalize(vmin=vmin, vmax=vmax, stretch=LinearStretch())
# set up and plot the images on the specified scale
fig, ax = plt.subplots(figsize=(7, 5))
im = ax.imshow(1e6*raz_d, origin='lower', cmap=cmap, extent=rtmap_bounds, norm=norm, aspect='auto')
ax.set_xlim(r_lims)
ax.set_ylim(t_lims)
ax.plot([0.065, 0.065], [-180, 180], ':w')
ax.plot([0.095, 0.095], [-180, 180], ':w')
ax.set_ylabel('azimuth [degrees]')
ax.set_xlabel('radius [arcsec]')
# add a scalebar
cb = plt.colorbar(im, ax=ax, pad=0.03)
cb.set_label(r'surface brightness [$\mu$Jy / beam]', rotation=270, labelpad=17)
# isolate the emission in the gap region (+/-15 mas from mean gap radius of 80 mas)
r_wedge = ((rbins >= 0.065) & (rbins <= 0.095))
raz_gap = raz_d[:, r_wedge]
rbins_gap = rbins[r_wedge]
print(raz_gap.shape)
# find the peak
tpeak_idx, rpeak_idx = np.unravel_index(np.argsort(raz_gap, axis=None), raz_gap.shape)
az_peak, r_peak = tbins[tpeak_idx[-1]], rbins_gap[rpeak_idx[-1]]
# peak brightness
f_cpd = raz_gap[tpeak_idx[-1], rpeak_idx[-1]]
print(1e6*f_cpd, az_peak, r_peak)
# local significance of peak
# make a 2-D mask
mask = np.zeros_like(raz_gap, dtype='bool')
az_wid = 15.
t_exc = ((tbins >= (az_peak - az_wid)) & (tbins <= (az_peak + az_wid)))
mask[t_exc, :] = True
# calculate noise outside the masked region
noise_gap = np.std(raz_gap[~mask])
# SNR of the peak
print(noise_gap)
print(f_cpd / noise_gap)
```
| github_jupyter |
# Demo: How to scrape multiple things from multiple pages
The goal is to scrape info about the **five top-grossing movies** for each year, for 10 years. I want the title and rank of the movie, and also, how much money did it gross at the box office. In the end I will put the scraped data into a CSV file.
```
from bs4 import BeautifulSoup
import requests
url = 'https://www.boxofficemojo.com/year/2018/'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
```
Using [Developer Tools](https://developers.google.com/web/tools/chrome-devtools#elements), I discover the data I want is in an HTML **table.** I also discover that it is the only table on the page.
I store it in a variable named `table`.
```
table = soup.find( 'table' )
```
I use trial-and-error testing with `print()` to discover whether I can get row and cell data cleanly from the table.
```
# get all the rows from that one table
rows = table.find_all('tr')
# some more trial-and-error testing to find out which row holds the first movie
print(rows[1])
# now that I have the right row, get all the cells in that row
cells = rows[1].find_all('td')
# see whether I can print the movie title cleanly
title = cells[1].text
print(title)
```
Next I try a for-loop to see if I can cleanly get the first five movies in the table.
```
# get top 5 movies on this page - I know the first row is [1]
for i in range(1, 6):
cells = rows[i].find_all('td')
title = cells[1].text
print(title)
```
Try a similar for-loop to get total gross for the top five movies. Developer Tools show me this value is in the eighth cell in each row.
```
# I would like to get the total gross number also
for i in range(1, 6):
cells = rows[i].find_all('td')
gross = cells[7].text
print(gross)
```
Now I test getting all the values I want from each row, and it works!
```
# next I want to get rank (1-5), title and gross all on one line
for i in range(1, 6):
cells = rows[i].find_all('td')
print(cells[0].text, cells[1].text, cells[7].text)
```
I want this same data for each of 10 years, so first I will create a list of the years I want.
```
# create a list of the 10 years I want
years = []
start = 2019
for i in range(0, 10):
years.append(start - i)
print(years)
```
Still prepping for the 10 years, I create a base URL to use when I open each year's page.
```
# create base url
base_url = 'https://www.boxofficemojo.com/year/'
# test it
# print(base_url + years[0] + '/') -- ERROR!
print( base_url + str(years[0]) + '/')
```
Now I *should* have all the pieces I need ... I will test the code with a print statement --
```
# collect all necessary pieces (tested above) to make a loop that gets
# top 5 movies for each of the 10 years in my list of years
for year in years:
url = base_url + str(year) + '/'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find( 'table' )
rows = table.find_all('tr')
for i in range(1, 6):
cells = rows[i].find_all('td')
print(cells[0].text, cells[1].text, cells[7].text)
```
When I see the result, I realize I need to make two adjustments.
1. Each line needs to have the year also
2. Maybe I should clean the gross so it's a pure integer
I can get rid of the dollar sign and the commas with a combination of two string methods --
`.strip()` and `.replace()`
```
# testing the clean-up code
num = '$293,004,164'
print(num.strip('$').replace(',', ''))
# testing a way to add the year to each line, using a list with only two years in it to save time
miniyears = [2017, 2014]
for year in miniyears:
url = base_url + str(year) + '/'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find( 'table' )
rows = table.find_all('tr')
for i in range(1, 6):
cells = rows[i].find_all('td')
gross = cells[7].text.strip('$').replace(',', '')
print(year, cells[0].text, cells[1].text, gross)
```
Now that I know it all works, I want to save the data in a CSV file.
Python has a handy **built-in module** for reading and writing CSVs. We need to import it before we can use it.
```
import csv
# open new file for writing - this creates the file
csvfile = open("movies.csv", 'w', newline='', encoding='utf-8')
# make a new variable, c, for Python's CSV writer object -
c = csv.writer(csvfile)
# write a header row to the csv
c.writerow( ['year', 'rank', 'title', 'gross'] )
# modified code from above
for year in years:
url = base_url + str(year) + '/'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find( 'table' )
rows = table.find_all('tr')
for i in range(1, 6):
cells = rows[i].find_all('td')
gross = cells[7].text.strip('$').replace(',', '')
# print(year, cells[0].text, cells[1].text, gross)
# instead of printing, I need to make a LIST and write that list to the CSV as one row
# I use the same cells that I had printed before
c.writerow( [year, cells[0].text, cells[1].text, gross] )
# close the file
csvfile.close()
print("The CSV is done!")
```
The result is a CSV file, named movies.csv, that has 51 rows: the header row plus 5 movies for each year from 2010 through 2019. It has four columns: year, rank, title, and gross.
Note that **only the final cell above** is needed to create this CSV by scraping 10 separate web pages. Everything before that final cell is instruction and demonstration. It is intended to show the problem-solving you need to go through to reach a desired scraping result.
You would not need to keep all the other work. Those cells could be deleted.
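As a quick sanity check, the finished file can be read back with the same `csv` module to confirm the row and column counts. The sketch below runs on a small in-memory sample (the titles and figures are made up) so it stays self-contained; swapping `sample` for `open('movies.csv', newline='', encoding='utf-8')` would check the real file.

```python
import csv
import io

# stand-in for open('movies.csv', newline='', encoding='utf-8')
sample = io.StringIO(
    "year,rank,title,gross\n"
    "2019,1,Movie A,858000000\n"
    "2019,2,Movie B,543000000\n"
)

rows = list(csv.reader(sample))
header, data = rows[0], rows[1:]

# the header should match what we wrote, and every data row
# should have the same number of columns as the header
assert header == ['year', 'rank', 'title', 'gross']
assert all(len(row) == len(header) for row in data)
print(f"{len(data)} data rows, {len(header)} columns")
```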
```
import logging
from django.db import models
from utils.merge_model_objects import merge_instances
from fuzzywuzzy import fuzz
from tqdm import tqdm
from collections import Counter
# note: the model classes used below (ImageFile, Byline, Contributor) are
# assumed to be auto-imported by the notebook environment (e.g. shell_plus)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
logger = logging.getLogger('jupyter')
logger.info('logging works')
help(merge_instances)
```
# Run merge operations
```
def sort_images(images):
def key_func(im):
return -im.usage, -len(im.exif_data), im.created
return sorted(
images.annotate(usage=models.Count('storyimage')),
key=key_func,
)
def merge_duplicates(qs, attrs=('id',), sort_func=None):
proc = tqdm(qs)
for item in proc:
proc.set_description_str(f'{str(item)[:30]:<30} ')
kwargs = {attr: getattr(item, attr) for attr in attrs}
clones = qs.filter(**kwargs)
if len(clones) > 1:
proc.set_postfix(item=str(item.pk), clones=len(clones)-1)
if sort_func:
clones = sort_func(clones)
merge_instances(*clones)
logger.info(f'{qs.model.__qualname__} count: {qs.count()} -> {qs.all().count()}')
def merge_images_by_field(field='imagehash', qs=ImageFile.objects.all()):
proc = tqdm(qs)
for item in proc:
proc.set_description_str(f'{str(item)[:30]:<30} ')
clones = item.similar(field) | qs.filter(pk=item.pk)
if len(clones) > 1:
proc.set_postfix(item=str(item.pk), clones=len(clones)-1)
merge_instances(*sort_images(clones))
logger.info(f'{qs.model.__qualname__} count: {qs.count()} -> {qs.all().count()}')
def merge_bylines():
attrs = ['story', 'contributor', 'credit']
qs = Byline.objects.all()
merge_duplicates(qs, attrs)
def merge_images_by_md5():
vals = [im['stat']['md5'] for im in ImageFile.objects.all().values('stat')]
dupes = [h for h, n in Counter(vals).most_common() if n > 1]
proc = tqdm(dupes)
for md5 in proc:
proc.set_description_str(f'md5: {md5}')
imgs = ImageFile.objects.filter(stat__md5=md5)
merge_instances(*list(imgs))
def _clone(*items):
for item in items:
item.pk = None
item.save()
def test_merge():
_clone(*Byline.objects.order_by('?')[:3])
merge_bylines()
_clone(*ImageFile.objects.order_by('?')[:3])
merge_images_by_field('md5')
_clone(*ImageFile.objects.order_by('?')[:3])
merge_images_by_field('imagehash')
def dupes(qs, attr):
vals = qs.values_list(attr, flat=True)
dupes = [h for h, n in Counter(vals).most_common() if n > 1]
return qs.filter(**{f'{attr}__in': dupes})
merge_images_by_md5()
duplicates = dupes(ImageFile.objects.all(), '_imagehash')
merge_images_by_field('imagehash', duplicates)
#duplicates = dupes(ImageFile.objects.all(), 'stat')
#merge_images_by_field('md5', duplicates)
[st['stat']['md5'] for st in ImageFile.objects.values('stat')]
def fuzz_diff(a, attr):
def fn(b):
return fuzz.ratio(getattr(a, attr), getattr(b, attr))
return fn
def merge_contributors(cutoff=90):
"""find and merge contributors."""
qs = Contributor.objects.order_by('-pk')
for n, item in enumerate(qs):
clones = qs.filter(display_name__trigram_similar=item.display_name)
if len(clones) > 1:
ratio = fuzz_diff(item, 'display_name')
clones = [c for c in clones if ratio(c) > cutoff]
clones.sort(key=keyfn, reverse=True)
if len(clones) > 1:
msg = f'{str(n):<5}: merge ' + ' + '.join(f'{c}({ratio(c)} {c.bylines_count()})' for c in clones)
logger.info(msg)
merge_instances(*clones)
logger.info(f'{qs.model.__qualname__} count: {qs.count()} -> {qs.all().count()}')
def keyfn(cn):
return (cn.bylines_count(), bool(cn.email))
merge_contributors()
```
# Add model: translation attention encoder-decoder over the b3 dataset
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchtext import data
import pandas as pd
import unicodedata
import string
import re
import random
import copy
from contra_qa.plots.functions import simple_step_plot, plot_confusion_matrix
import matplotlib.pyplot as plt
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from nltk.translate.bleu_score import sentence_bleu
%matplotlib inline
```
### Preparing data
```
df2 = pd.read_csv("data/boolean3_train.csv")
df2_test = pd.read_csv("data/boolean3_test.csv")
df2["text"] = df2["sentence1"] + df2["sentence2"]
df2_test["text"] = df2_test["sentence1"] + df2_test["sentence2"]
all_sentences = list(df2.text.values) + list(df2_test.text.values)
df2train = df2
df2train.tail()
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
example = "ddddda'''~~çãpoeéééééÈ'''#$$##@!@!@AAS@#12323fdf"
print("Before:", example)
print()
print("After:", normalizeString(example))
pairs_A = list(zip(list(df2train.sentence1.values), list(df2train.and_A.values)))
pairs_B = list(zip(list(df2train.sentence1.values), list(df2train.and_B.values)))
pairs_A = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_A]
pairs_B = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_B]
all_text_pairs = zip(all_sentences, all_sentences)
all_text_pairs = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in all_text_pairs]
def readLangs(lang1, lang2, pairs, reverse=False):
# Reverse pairs, make Lang instances
if reverse:
pairs = [tuple(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
f = lambda x: len(x.split(" "))
MAX_LENGTH = np.max(list(map(f, all_sentences)))
def filterPair(p):
cond1 = len(p[0].split(' ')) < MAX_LENGTH
cond2 = len(p[1].split(' ')) < MAX_LENGTH
return cond1 and cond2
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
def prepareData(lang1, lang2, pairs, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, pairs, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
_, _, training_pairs_A = prepareData("eng_enc",
"eng_dec",
pairs_A)
print()
input_lang, _, _ = prepareData("eng_enc",
"eng_dec",
all_text_pairs)
output_lang = copy.deepcopy(input_lang)
_, _, training_pairs_B = prepareData("eng_enc",
"eng_dec",
pairs_B)
```
### sentences 2 tensors
```
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]
def tensorFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
indexes.append(EOS_token)
return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
def tensorsFromPair(pair):
input_tensor = tensorFromSentence(input_lang, pair[0])
target_tensor = tensorFromSentence(output_lang, pair[1])
return (input_tensor, target_tensor)
def tensorsFromTriple(triple):
input_tensor = tensorFromSentence(input_lang, triple[0])
target_tensor = tensorFromSentence(output_lang, triple[1])
label_tensor = torch.tensor(triple[2], dtype=torch.long).view((1))
return (input_tensor, target_tensor, label_tensor)
```
### models
```
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(
self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]), dim=1)
return output, hidden, attn_weights
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
hidden_size = 256
eng_enc_v_size = input_lang.n_words
eng_dec_v_size = output_lang.n_words
input_lang.n_words
encoderA = EncoderRNN(eng_enc_v_size, hidden_size)
decoderA = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderA.load_state_dict(torch.load("b3_encoder1_att.pkl"))
decoderA.load_state_dict(torch.load("b3_decoder1_att.pkl"))
encoderB = EncoderRNN(eng_enc_v_size, hidden_size)
decoderB = AttnDecoderRNN(hidden_size, eng_dec_v_size)
encoderB.load_state_dict(torch.load("b3_encoder2_att.pkl"))
decoderB.load_state_dict(torch.load("b3_decoder2_att.pkl"))
```
## translating
```
def translate(encoder,
decoder,
sentence,
max_length=MAX_LENGTH):
with torch.no_grad():
input_tensor = tensorFromSentence(input_lang, sentence)
input_length = input_tensor.size()[0]
encoder_hidden = encoder.initHidden()
encoder_outputs = torch.zeros(
max_length, encoder.hidden_size, device=device)
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei],
encoder_hidden)
encoder_outputs[ei] += encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device) # SOS
decoder_hidden = encoder_hidden
decoded_words = []
for di in range(max_length):
decoder_output, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_hidden, encoder_outputs)
_, topone = decoder_output.data.topk(1)
if topone.item() == EOS_token:
break
else:
decoded_words.append(output_lang.index2word[topone.item()])
decoder_input = topone.squeeze().detach()
return " ".join(decoded_words)
def projectA(sent):
neural_translation = translate(encoderA,
decoderA,
sent,
max_length=MAX_LENGTH)
return neural_translation
def projectB(sent):
neural_translation = translate(encoderB,
decoderB,
sent,
max_length=MAX_LENGTH)
return neural_translation
```
## translation of a trained model: and A
```
for t in training_pairs_A[0:3]:
print("input_sentence : " + t[0])
neural_translation = projectA(t[0])
print("neural translation : " + neural_translation)
reference = t[1]
print("reference translation : " + reference)
reference = reference.split(" ")
candidate = neural_translation.split(" ")
score = sentence_bleu([reference], candidate)
print("bleu score = {:.2f}".format(score))
print()
```
## translation of a trained model: and B
```
for t in training_pairs_B[0:3]:
print("input_sentence : " + t[0])
neural_translation = projectB(t[0])
print("neural translation : " + neural_translation)
reference = t[1]
print("reference translation : " + reference)
reference = reference.split(" ")
candidate = neural_translation.split(" ")
score = sentence_bleu([reference], candidate)
print("bleu score = {:.2f}".format(score))
print()
```
## generating new data for training
```
df2train.sentence1 = df2train.sentence1.map(normalizeString)
df2train["project A"] = df2train.sentence1.map(projectA)
df2.head()
df2train["project B"] = df2train.sentence1.map(projectB)
df2.head()
df2train["sentence1_p"] = df2train["project A"] + " and " + df2train["project B"]
df2.head()
df_train_plus = df2train[["sentence1_p", "sentence2", "label"]]
df_train_plus.sentence2 = df_train_plus.sentence2.map(normalizeString)
df_train_plus.rename(columns={"sentence1_p": "sentence1"}, inplace=True)
df_train_plus.head()
df_train_plus.to_csv("data/boolean3_plus_train.csv", index=False)
# df2 = pd.read_csv("data/boolean3_train.csv")
# df2_test = pd.read_csv("data/boolean3_test.csv")
```
## generating new data for test
```
df2_test.sentence1 = df2_test.sentence1.map(normalizeString)
df2_test["project A"] = df2_test.sentence1.map(projectA)
df2_test.head()
df2_test["project B"] = df2_test.sentence1.map(projectB)
df2_test.head()
df2_test["sentence1_p"] = df2_test["project A"] + " and " + df2_test["project B"]
df2_test.head()
df2_test_plus = df2_test[["sentence1_p", "sentence2", "label"]]
df2_test_plus.sentence2 = df2_test_plus.sentence2.map(normalizeString)
df2_test_plus.rename(columns={"sentence1_p": "sentence1"}, inplace=True)
df2_test_plus.head()
df2_test_plus.to_csv("data/boolean3_plus_test.csv", index=False)
```
Random Forests - Data Exploration
===
***
## Introduction
Now we're going to take a large and messy data set from a familiar source and prepare it for analysis using Random Forests.
Why do we want to use Random Forests? This will become clear very shortly.
We will use a data set of mobile phone accelerometer and gyroscope readings to create a predictive model. The data set is available in R Data form [1] on Amazon S3 and in raw form at the UCI Repository [2]. The readings encode data on mobile phone orientation and the motion of the wearer of the phone.
The subject is known to be doing one of six activities - sitting, standing, lying down, walking, walking up, and walking down.
## Methods
Our goal is to predict, given one data point, which activity the subject is doing.
We set ourself a goal of creating a model with understandable variables rather than a black box model. We have the choice of creating a black box model that just has variables and coefficients. When given a data point we feed it to the model and out pops an answer. This generally works but is simply too much "magic" to give us any help in building our intuition or giving us any opportunity to use our domain knowledge.
So we are going to open the box a bit and we are going to use domain knowledge combined with the massive power of Random Forests once we have some intuition going. We find that in the long run this is a much more satisfying approach and also, it appears, a much more powerful one.
We will reduce the independent variable set to 36 variables using domain knowledge alone and then use Random Forests to predict the variable ‘activity’.
This may not be the best model from the point of view of accuracy, but we want to understand what is going on and from that perspective it turns out to be much better.
We use accuracy measures Positive and Negative Prediction Value, Sensitivity and Specificity to rate our model.
### Data Cleanup
* The given data set contains activity data for 21 subjects.
* The data set has 7,352 rows with 561 numeric data columns plus 2 columns ‘subject’, an integer, and ‘activity’, a character string.
* Since we have 563 total columns we will dispense with the step of creating a formal data dictionary and refer to feature_info.txt instead
* Initial exploration of the data shows it has dirty column name text with a number of problems:
* Duplicate column names - multiple occurrences.
* Inclusion of ( ) in column names.
* Extra ) in some column names.
* Inclusion of ‘-’ in column names.
* Inclusion of multiple ‘,’ in column names
* Many column names contain “BodyBody” which we assume is a typo.
```
import pandas as pd
df = pd.read_csv('../datasets/samsung/samsungdata.csv')
```
* We change ‘activity’ to be a categorical variable
* We keep ‘subject’ as integer
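In pandas, both dtype changes are one-liners. Here is a minimal sketch on a small stand-in frame (the real notebook would apply the same calls to the loaded `df`):

```python
import pandas as pd

# tiny stand-in frame with the same two columns as the Samsung data
df = pd.DataFrame({
    'subject': [1, 1, 2],
    'activity': ['walk', 'sit', 'walk'],
})

# 'activity' becomes a categorical variable; 'subject' stays an integer
df['activity'] = df['activity'].astype('category')

assert df['activity'].dtype.name == 'category'
assert pd.api.types.is_integer_dtype(df['subject'])
```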
We want to create an interpretable model rather than use Random Forests as a black box. So we will need to understand our variables and leverage our intuition about them.
To plan the data exploration, the documentation of the data set from the UCI website [2] is very useful and we study it in detail.
Especially the file feature_info.txt is very important in understanding our variables. It is, in effect, the data dictionary which we have avoided listing here.
Also the explanation for terminology which we use is in feature_info.txt. So going through it in some detail is critical.
### Exercise
Do each of the above data cleanup activities on the data set.
i.e.
* Identify and remove duplicate column names - multiple occurrences.
* Identify and fix inclusion of ( ) in column names. How will you fix this?
* Identify and fix extra ) in some column names. How will you fix this?
* Identify and fix inclusion of ‘-’ in column names. How will you fix this?
* Identify and fix inclusion of multiple ‘,’ in column names. How will you fix this?
* Identify and fix column names containing “BodyBody”.
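One possible approach to the exercise - a sketch, not the only answer - is to normalize each name with a regular expression and then suffix any names that still collide. The sample names below are illustrative:

```python
import re
from collections import Counter

def clean_name(name):
    # fix the "BodyBody" typo, then drop parentheses, '-' and ','
    name = name.replace('BodyBody', 'Body')
    return re.sub(r'[()\-,]', '', name)

def dedupe(names):
    # append a running suffix to second and later occurrences
    seen = Counter()
    out = []
    for n in names:
        seen[n] += 1
        out.append(n if seen[n] == 1 else f'{n}.{seen[n] - 1}')
    return out

raw = ['tBodyAcc-mean()-X', 'fBodyBodyGyroMag-std()', 'tBodyAcc-mean()-X']
cleaned = dedupe([clean_name(n) for n in raw])
print(cleaned)  # ['tBodyAccmeanX', 'fBodyGyroMagstd', 'tBodyAccmeanX.1']
```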
--------
#### Note to students
_The major value of this data set is as follows_
It teaches the implicit lesson that
* You can just use blind brute force techniques and get useful results OR
* You can short circuit a lot of that and use domain knowledge. This data set highlights the power you get from domain knowledge.
* It also nudges us out of our comfort zone to seek supporting knowledge from semantically adjacent data sources to empower the analysis further.
* This underlines the fact, obvious in retrospect, that you never get the data and all supporting information in a neat bundle.
* You have to clean it up - we learnt that earlier, but we also may have to be willing to expand our knowledge, do a little research to enhance our background expertise.
So this particular data set may seem a little techy but it could easily be in the direction of bio, or finance or mechanics of fractures or sports analytics or whatever - a data scientist should be willing to get hands *and* mind dirty. The most successful ones are/will be the ones that are willing to be interdisciplinary.
__That's__ the implicit lesson here.
----------
Aside from understanding what each variable represents, we also want to get some technical background about the meaning of each variable.
So we use the Android Developer Reference [3] to educate ourselves about each of the physical parameters that are important.
In this way we extend our domain knowledge so that we understand the language of the data - we allow it to come alive and figuratively speak to us and reveal its secrets. The more we learn about the background context from which the data comes, the better, faster, and deeper our exploration of the data will be.
In this case we see that the variables have X, Y, Z prefixes/suffixes, and the Android Developer Reference [3] gives us the specific reference frame in which these are measured. They are vector components of jerk and acceleration; the angles are measured with respect to the direction of gravity, or more precisely the vector acceleration due to gravity.
We use this information and combine it with some intuition about motion, velocity, acceleration etc.
### Variable Reduction
So we dig into the variables and make some quick notes.
Before we go further, you'll need to open a file in the dataset directory for the HAR data set.
There is a file called feature_info.txt. This file describes each feature and its physical significance, and also describes features that are derived from raw data by averaging, sampling, or some other operation that gives a numerical result.
We want to look at
a) all the variable names
b) physical quantities
and take some time to understand these.
Once we spend some time doing all that, we can extract some useful guidelines using physical understanding and common sense.
* In static activities (sit, stand, lie down) motion information will not be very useful.
* In the dynamic activities (3 types of walking) motion will be significant.
* Angle variables will be useful both in differentiating “lie vs stand” and “walk up vs walk down”.
* Acceleration and Jerk variables are important in distinguishing various kinds of motion while filtering out random tremors while static.
* Mag and angle variables contain the same info as (= strongly correlated with) XYZ variables
* We choose to focus on the latter as they are simpler to reason about.
* This is a very important point to understand as it results in elimination of a few hundred variables.
* We ignore the *band* variables as we have no simple way to interpret the meaning and relate them to physical activities.
* -mean and -std are important; -skewness and -kurtosis may also be, hence we include all of these.
* We see the usefulness of some of these variables as predictors in Figure 1, which shows some of our exploration and validates our thinking.
---
<img src="files/images/randomforests_c2_figure1.png" />
Figure 1. Using a histogram of Body Acceleration Magnitude to evaluate that variable as a predictor of static vs dynamic activities. This is an example of data exploration in support of our heuristic variable selection using domain knowledge.
---
### Eliminating confounders
In dropping the -X -Y -Z variables (cartesian coordinates) we removed a large number of confounding variables as these have information strongly correlated with Magnitude + Angle (polar coordinates). There may still be some confounding influences but the remaining effects are hard to interpret.
From common sense we see other variables -min, -max, -mad have correlations with mean/std so we drop all these confounders also.
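Once the names are clean, the -min/-max/-mad drop can be done mechanically by filtering on name patterns. A sketch with illustrative column names:

```python
import re

# illustrative cleaned column names
cols = ['tAccMean', 'tAccSD', 'tAccmin', 'tAccmax', 'tAccmad',
        'fGyroMean', 'fGyromin', 'activity', 'subject']

# drop any column whose name ends in one of the confounder suffixes
drop_pat = re.compile(r'(min|max|mad)$')
keep = [c for c in cols if not drop_pat.search(c)]
print(keep)  # ['tAccMean', 'tAccSD', 'fGyroMean', 'activity', 'subject']
```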
The number of variables is now reduced to 37 as below.
----
Note to reviewers - we do some tedious name mapping to keep the semantics intact since we want to have a "white box" like model.
If we don't, we can just take the remaining variables and map them to v1, v2, ..., v37. This would be a couple of lines of code and explanation, but we would lose a lot of the value we derived from retaining interpretability using domain knowledge. So we soldier on for just one last step, and then we are into the happy land of analysis.
----
### Name transformations
To be able to explore the data easily we rename variables and simplify them for readability as follows.
We drop the "Body" and "Mag" wherever they occur as these are common to all our remaining variables.
We map ‘mean’ to Mean and ‘std’ to SD
So e.g.
tAccBodyMag-mean -> tAccMean
fAccBodyMag-std -> fAccSD
etc.
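The renaming can be sketched with a small helper; the raw names here are taken from the mapping above, but the exact string manipulation is an illustrative assumption, not the code we actually ran.

```python
import re

def simplify_name(name):
    """Simplify a raw feature name, e.g. 'tAccBodyMag-mean' -> 'tAccMean'."""
    # Drop "Body" and "Mag", common to all remaining variables
    name = name.replace("Body", "").replace("Mag", "")
    # Map the statistic suffixes to readable names
    name = re.sub(r"-mean$", "Mean", name)
    name = re.sub(r"-std$", "SD", name)
    return name

print(simplify_name("tAccBodyMag-mean"))  # tAccMean
print(simplify_name("fAccBodyMag-std"))   # fAccSD
```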
## Results
The reduced set of selected variables with transformed names is now (with meaningful groupings):
* tAccMean, tAccSD, tJerkMean, tJerkSD
* tGyroMean, tGyroSD, tGyroJerkMean, tGyroJerkSD
* fAccMean, fAccSD, fJerkMean, fJerkSD
* fGyroMean, fGyroSD, fGyroJerkMean, fGyroJerkSD
* fGyroMeanFreq, fGyroJerkMeanFreq, fAccMeanFreq, fJerkMeanFreq
* fAccSkewness, fAccKurtosis, fJerkSkewness, fJerkKurtosis
* fGyroSkewness, fGyroKurtosis, fGyroJerkSkewness, fGyroJerkKurtosis
* angleAccGravity, angleJerkGravity, angleGyroGravity, angleGyroJerkGravity
* angleXGravity, angleYGravity, angleZGravity
* subject, activity
## Conclusions
After all these data-cleanup calisthenics, we raise our weary heads and notice something pleasantly surprising and positively encouraging.
These variables are primarily magnitudes of acceleration and jerk with their statistics, along with angle variables. This encourages us to think that our approach of focusing on domain knowledge, doing some extra reading and research, and using elementary physical intuition is bearing fruit.
This is a set of variables that is semantically compact, interpretable and relatively easy to reason about.
We can do another round of winnowing down the variables, because we might have a feeling that 37 variables is too many to hold in our mind at one time - and we would be right. But at this point we bring in the heavy artillery and let the modeling software do the work, using Random Forests on this variable set.
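A minimal sketch of that modeling step on synthetic stand-in data (the real feature matrix and labels come from the cleaned dataset; nothing here reproduces our actual results):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 37))    # stand-in for the 37 selected variables
y = rng.integers(0, 6, size=200)  # stand-in for the 6 activity labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Feature importances are what let the forest do the variable winnowing for us
print(clf.feature_importances_.shape)  # (37,)
```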
## References
[1] <https://spark-public.s3.amazonaws.com/dataanalysis/samsungData.rda>
[2] Human Activity Recognition Using Smartphones <http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones>
[3] Android Developer Reference <http://developer.android.com/reference/android/hardware/Sensor.html>
[4] Random Forests <http://en.wikipedia.org/wiki/Random_forest>
[5] Code for computation of error measures <https://gist.github.com/nborwankar/5131870>
```
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Train a Medical Specialty Detector on SageMaker Using HuggingFace Transformers.
In this workshop, we will show how you can train an NLP classifier using transformers from [HuggingFace](https://huggingface.co/). HuggingFace provides prebuilt transformers that you can fine-tune for your own use cases.
In this workshop, we will use the SageMaker-supplied HuggingFace container to train an algorithm that distinguishes between physician notes belonging to either the General Medicine (encoded as 0) or Radiology (encoded as 1) medical specialties. The data is a subsample from [MTSamples](https://www.mtsamples.com/) which was downloaded from [here](https://www.kaggle.com/tboyle10/medicaltranscriptions).
```
import json
from get_dependencies import get_dependencies
get_dependencies()
import pandas as pd
import os
import tensorflow as tf
import numpy as np
import sagemaker
from sklearn.model_selection import train_test_split
pd.set_option('display.max_colwidth', 500) # to allow for better display of dataframes
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session()
BUCKET=sagemaker_session.default_bucket()
PREFIX='mtsample_speciality_prediction'
```
## Read in data and examine sample data
First we will read in the data; we will then copy it to S3.
```
df_1=pd.read_csv('MTsample_input_data.csv')
print(f'''The data has {df_1.shape[0]} rows''')
X_train, X_test = train_test_split(df_1, test_size=0.3)
X_train.to_csv('train.csv')
X_test.head(2)
!aws s3 cp train.csv s3://$BUCKET/$PREFIX/ #Copy the data to S3
```
## Configure the SageMaker training job
We will leverage the SageMaker provided container definition to build and train the transformer. In this approach we specify our training script (`train.py`) but rely on the SageMaker HuggingFace container.
For more information, see [here](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) and [here](https://huggingface.co/docs/sagemaker/main).
```
from sagemaker.huggingface import HuggingFace
# hyperparameters which are passed to the training job
hyperparameters={}
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point='train.py',
instance_type='ml.p2.xlarge',
instance_count=1,
role=role,
transformers_version='4.11.0',
tensorflow_version='2.5.1',
py_version='py37',
hyperparameters = hyperparameters
)
!pygmentize train.py #specify our training script
```
Now train the model by calling the `fit` method.
```
huggingface_estimator.fit(
{'train': f's3://{BUCKET}/{PREFIX}/train.csv'}
)
```
## Deploy the Model as an endpoint
Now we will deploy the model as an endpoint, which can be queried with independent data.
```
endpoint=huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
```
## Invoke the endpoint with test data
We will pass some holdout data to the endpoint to get an estimate of performance.
```
from sagemaker.serializers import JSONSerializer
my_serializer=JSONSerializer()
my_predictor=sagemaker.predictor.Predictor(endpoint.endpoint_name,sagemaker_session=sagemaker_session,serializer=my_serializer)
the_inputs=X_test['text'].tolist()
all_results=[]
for i in range(0,len(the_inputs)):
the_input= the_inputs[i][0:512] #truncate to 512 characters
the_result=my_predictor.predict({"inputs":the_input})
all_results.append(the_result)
print(all_results[0]) # see what one result looks like.
#If the predicted label is NEGATIVE, subtract the score from 1 to normalize,
#so that lower scores mean predictions of General Medicine and higher scores mean predictions of Radiology.
all_results_2=[]
for i in all_results:
the_result= json.loads(i)[0]
the_score=the_result['score']
the_label=the_result['label']
if the_label=="NEGATIVE":
all_results_2.append(1-the_score)
else:
all_results_2.append(the_score)
```
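The normalization loop above can be factored into a small helper. The JSON shape here assumes the standard HuggingFace text-classification output of `[{"label": ..., "score": ...}]`:

```python
import json

def radiology_score(result_json):
    """Return a score where higher means Radiology, lower means General Medicine."""
    r = json.loads(result_json)[0]
    # NEGATIVE predictions are flipped so all scores share one orientation
    return 1 - r["score"] if r["label"] == "NEGATIVE" else r["score"]

print(radiology_score('[{"label": "NEGATIVE", "score": 0.9}]'))
print(radiology_score('[{"label": "POSITIVE", "score": 0.8}]'))  # 0.8
```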
Measure the performance using a ROC curve.
```
import sklearn.metrics as metrics
# calculate the fpr and tpr for all thresholds of the classification
preds=all_results_2
fpr, tpr, threshold = metrics.roc_curve(X_test['specialty_encoded'].tolist(), preds)
roc_auc = metrics.auc(fpr, tpr)
# method I: plt
import matplotlib.pyplot as plt
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
Note that because this is a small dataset, you may get a performance of 0.95 or even higher.
```
#r "./../../../../../../public/src/L4-application/BoSSSpad/bin/Release/net5.0/BoSSSpad.dll"
using System;
using ilPSP;
using ilPSP.Utils;
using BoSSS.Platform;
using BoSSS.Foundation;
using BoSSS.Foundation.XDG;
using BoSSS.Foundation.Grid;
using BoSSS.Solution;
using BoSSS.Application.XNSE_Solver;
using BoSSS.Application.BoSSSpad;
using BoSSS.Foundation.Grid.Classic;
using BoSSS.Foundation.IO;
using BoSSS.Solution.AdvancedSolvers;
using BoSSS.Solution.Control;
using BoSSS.Solution.XNSECommon;
using BoSSS.Solution.NSECommon;
using BoSSS.Application.XNSE_Solver.LoadBalancing;
using BoSSS.Solution.LevelSetTools;
using BoSSS.Solution.XdgTimestepping;
using static BoSSS.Application.BoSSSpad.BoSSSshell;
Init();
// set parameters
int[] core_sweep = {64};
int[] PolyDegS = new int[] {3};
int[] ResArray = new int[] {15};
int MemoryPerCore = 2000;
bool useAMR = false;
bool useLoadBal = true;
int NoOfTimeSteps = 60;
bool Steady = false;
bool IncludeConvection = true;
var Gshape = Shape.Cube;
var DBshowcase = OpenOrCreateDatabase(@"W:\work\scratch\jw52xeqa\DB_rotCubePaper");
List<ISessionInfo> restartSess = new List<ISessionInfo>();
restartSess.Add(DBshowcase.Sessions.Pick(9)); restartSess.Pick(0).Timesteps.Count()
// ==================================
// setup Client & Workflow & Database
// ==================================
/*
var myBatch = (SlurmClient)ExecutionQueues[3];
var AddSbatchCmds = new List<string>();
AddSbatchCmds.AddRange(new string[]{"#SBATCH -p test24", "#SBATCH -C avx512", "#SBATCH --mem-per-cpu="+MemoryPerCore});
myBatch.AdditionalBatchCommands = AddSbatchCmds.ToArray();
myBatch.AdditionalBatchCommands
*/
//var myBatch = (MsHPC2012Client)ExecutionQueues[1];
static var myBatch = GetDefaultQueue();
if(myBatch is SlurmClient){
(myBatch as SlurmClient).AdditionalBatchCommands = new string[]{"#SBATCH -p test24", "#SBATCH -C avx512", "#SBATCH --mem-per-cpu=2000"};
}
string WFlowName = "Cube_Inlet_test";
BoSSS.Application.BoSSSpad.BoSSSshell.WorkflowMgm.Init(WFlowName);
BoSSS.Application.BoSSSpad.BoSSSshell.WorkflowMgm.SetNameBasedSessionJobControlCorrelation();
/*
myBatch.AllowedDatabasesPaths.Clear();
//string dirname = "DB_rotCubePaper";
//string dirname = "DB_rotSphere_CoreScaling";
//string dirname = "DB_rotSphere_DOFScaling";
string dirname ="DB_rotCubePaper";
string winpath = @"W:\work\scratch\jw52xeqa\"+dirname;
string remotepath = @"/work/scratch/jw52xeqa/"+dirname;
//string winpath = @"\\hpccluster\hpccluster-scratch\"+dirname;
var myDB = OpenOrCreateDatabase(winpath);
//myBatch.AllowedDatabasesPaths.Add(new AllowedDatabasesPair(winpath,remotepath)); myBatch.AllowedDatabasesPaths
*/
static var pair = myBatch.AllowedDatabasesPaths.Pick(0);
static string DBName = @"\"+"DB_rotCubePaper";
static string localpath=pair.LocalMountPath+DBName;
string remotepath=pair.PathAtRemote+DBName;
static var myDB = OpenOrCreateDatabase(localpath);
static class Utils {
// DOF per cell in 3D
public static int Np(int p) {
return (p*p*p + 6*p*p + 11*p + 6)/6;
}
}
```
using either
```
double xMax = 4.0, yMax = 1.0, zMax = 1.0;
double xMin = -2.0, yMin = -1.0, zMin = -1.0;
```
or
```
double xMax = 1.0, yMax = 1.0, zMax = 1.0;
double xMin = -1.0, yMin = -1.0, zMin = -1.0;
```
```
/*
List<IGridInfo> Grids = new List<IGridInfo>();
foreach(var Res in ResArray){
int Stretching = (int)Math.Floor(Math.Abs(xMax-xMin)/Math.Abs(yMax-yMin));
var _xNodes = GenericBlas.Linspace(xMin, xMax, Stretching*Res + 1);
var _yNodes = GenericBlas.Linspace(yMin, yMax, Res + 1);
var _zNodes = GenericBlas.Linspace(zMin, zMax, Res + 1);
GridCommons grd;
string gname = "RotBenchmarkGrid";
var tmp = new List<IGridInfo>();
foreach(var grid in myDB.Grids){
try{
bool IsMatch = grid.Name.Equals(gname)&&grid.NumberOfCells==_xNodes.Length*_yNodes.Length*_zNodes.Length;
if(IsMatch) tmp.Add(grid);
}
catch(Exception ex) {
Console.WriteLine(ex.Message);
}
}
if(tmp.Count()>=1){
Console.WriteLine("Grid found: "+tmp.Pick(0).Name);
Grids.Add(tmp.Pick(0));
continue;
}
grd = Grid3D.Cartesian3DGrid(_xNodes, _yNodes, _zNodes);
grd.Name = gname;
//grd.AddPredefinedPartitioning("debug", MakeDebugPart);
grd.EdgeTagNames.Add(1, "Velocity_inlet");
grd.EdgeTagNames.Add(2, "Wall");
grd.EdgeTagNames.Add(3, "Pressure_Outlet");
grd.DefineEdgeTags(delegate (double[] _X) {
var X = _X;
double x, y, z;
x = X[0];
y = X[1];
z = X[2];
if(Math.Abs(x-xMin)<1E-8)
return 1;
else
return 3;
});
myDB.SaveGrid(ref grd);
Grids.Add(grd);
} Grids
*/
int SpaceDim = 3;
double _partRad = 0.3800;
_partRad/(2/(double)ResArray[0]/4)
Func<ISessionInfo, int, double, XNSE_Control> GenXNSECtrl = delegate(ISessionInfo Session, int k, double Viscosity){
XNSE_Control C = new XNSE_Control();
// basic database options
// ======================
C.RestartInfo = new Tuple<Guid, TimestepNumber>(Session.ID, new TimestepNumber("60"));
IGridInfo grd = Session.GetGrids().Pick(0);
C.AlternateDbPaths = new[] {
(localpath, ""),
(remotepath,"")};
//C.DbPath=@"\\hpccluster\hpccluster-scratch\DB_rotSphereBenchmark";
C.savetodb = true;
int J = grd.NumberOfCells;
C.SessionName = string.Format("J{0}_k{1}_t{2}", J, k,NoOfTimeSteps);
if(IncludeConvection){
C.SessionName += "_NSE";
C.Tags.Add("NSE");
} else {
C.SessionName += "_Stokes";
C.Tags.Add("Stokes");
}
C.Tags.Add(SpaceDim + "D");
if(Steady)C.Tags.Add("steady");
else C.Tags.Add("transient");
// DG degrees
// ==========
C.SetFieldOptions(k, Math.Max(k, 2));
C.saveperiod = 1;
//C.TracingNamespaces = "*";
//C.GridGuid = grd.ID;
C.GridPartType = GridPartType.clusterHilbert;
C.DynamicLoadbalancing_ClassifierType = ClassifierType.CutCells;
C.DynamicLoadBalancing_On = useLoadBal;
C.DynamicLoadBalancing_RedistributeAtStartup = true;
C.DynamicLoadBalancing_Period = 1;
C.DynamicLoadBalancing_ImbalanceThreshold = 0.1;
// Physical Parameters
// ===================
const double rhoA = 1;
const double Re = 100;
double muA = Viscosity;
double partRad = _partRad;
double d_hyd = 2*partRad;
double anglev = Re*muA/rhoA/d_hyd;
double VelocityIn = Re*muA/rhoA/d_hyd;
double[] pos = new double[SpaceDim];
double ts = Math.PI/anglev/NoOfTimeSteps;
double inletdelay = 5*ts;
C.PhysicalParameters.IncludeConvection = IncludeConvection;
C.PhysicalParameters.Material = true;
C.PhysicalParameters.rho_A = rhoA;
C.PhysicalParameters.mu_A = muA;
C.Rigidbody.SetParameters(pos,anglev,partRad,SpaceDim);
C.Rigidbody.SpecifyShape(Gshape);
C.Rigidbody.SetRotationAxis("x");
C.AddInitialValue(VariableNames.LevelSetCGidx(0), new Formula("X => -1"));
C.UseImmersedBoundary = true;
C.AddInitialValue("Pressure", new Formula(@"X => 0"));
C.AddBoundaryValue("Pressure_Outlet");
//C.AddBoundaryValue("Velocity_inlet","VelocityX",new Formula($"(X,t) => {VelocityIn}*(double)(t<={inletdelay}?(t/{inletdelay}):1)",true));
C.AddBoundaryValue("Velocity_inlet","VelocityX",new Formula($"(X) => {VelocityIn}"));
C.CutCellQuadratureType = BoSSS.Foundation.XDG.XQuadFactoryHelper.MomentFittingVariants.Saye;
C.UseSchurBlockPrec = true;
C.AgglomerationThreshold = 0.1;
C.AdvancedDiscretizationOptions.ViscosityMode = ViscosityMode.FullySymmetric;
C.Option_LevelSetEvolution2 = LevelSetEvolution.Prescribed;
C.Option_LevelSetEvolution = LevelSetEvolution.None;
//C.Timestepper_LevelSetHandling = LevelSetHandling.Coupled_Once;
C.Timestepper_LevelSetHandling = LevelSetHandling.None;
C.LinearSolver.NoOfMultigridLevels = 4;
C.LinearSolver.ConvergenceCriterion = 1E-6;
C.LinearSolver.MaxSolverIterations = 200;
C.LinearSolver.MaxKrylovDim = 50;
C.LinearSolver.TargetBlockSize = 10000;
C.LinearSolver.verbose = true;
C.LinearSolver.SolverCode = LinearSolverCode.exp_Kcycle_schwarz;
C.NonLinearSolver.SolverCode = NonLinearSolverCode.Newton;
C.NonLinearSolver.ConvergenceCriterion = 1E-3;
C.NonLinearSolver.MaxSolverIterations = 5;
C.NonLinearSolver.verbose = true;
C.AdaptiveMeshRefinement = useAMR;
if (useAMR) {
C.SetMaximalRefinementLevel(2);
C.AMR_startUpSweeps = 0;
}
// Timestepping
// ============
double dt = -1;
if(Steady){
C.TimesteppingMode = AppControl._TimesteppingMode.Steady;
dt = 1000;
C.NoOfTimesteps = 1;
} else {
C.TimesteppingMode = AppControl._TimesteppingMode.Transient;
dt = ts;
C.NoOfTimesteps = NoOfTimeSteps;
}
C.TimeSteppingScheme = TimeSteppingScheme.BDF2;
C.dtFixed = dt;
return C;
};
var ViscositySweep = new double[]{1E-2};
List<XNSE_Control> controls = new List<XNSE_Control>();
foreach(ISessionInfo SesInfo in restartSess){
foreach(int k in PolyDegS){
foreach(double v in ViscositySweep)
controls.Add(GenXNSECtrl(SesInfo,k,v));
}
} controls.Select(s=>s.SessionName)
static Func<XNSE_Control,int,int> NodeRegression = delegate (XNSE_Control thisctrl, int cores) {
var Session = myDB.Sessions.Where(s=>s.ID==thisctrl.RestartInfo.Item1).Pick(0);
int Ncells = Session.GetGrids().Pick(0).NumberOfCells;
thisctrl.FieldOptions.TryGetValue("Velocity*",out FieldOpts Foptions);
int k = Foptions.Degree;
int DOFperCell = Utils.Np(k)*3+Utils.Np(k-1);
int DOFperCore = DOFperCell*Ncells/cores;
double mempercore = -9E-7*DOFperCore*DOFperCore+3.84E-1*DOFperCore+13500;
double memoryneed = mempercore * cores;
double memorypernode = 384*1024; // MB
int nodes2allocate = (int)Math.Ceiling(memoryneed / memorypernode);
// these 2 lines make it a power of 2
double bla = Math.Ceiling(Math.Log(nodes2allocate)/Math.Log(2));
nodes2allocate = (int)Math.Pow(2,bla);
return nodes2allocate;
};
NodeRegression(controls.Pick(0),128)
foreach(var ctrl in controls){
string sessname = ctrl.SessionName;
foreach(int cores in core_sweep){
ctrl.SessionName = sessname + "_c"+cores+"_mue_"+ctrl.PhysicalParameters.mu_A+"_test2";
var aJob = new Job("rotting_"+Gshape+ctrl.SessionName,typeof(XNSE));
aJob.SetControlObject(ctrl);
aJob.NumberOfMPIProcs = cores;
aJob.ExecutionTime = "24:00:00";
aJob.UseComputeNodesExclusive = true;
if(myBatch is SlurmClient){
int nodes2allocate = NodeRegression(ctrl,cores);
var Cmdtmp = new List<string>();
Cmdtmp.AddRange((myBatch as SlurmClient).AdditionalBatchCommands.ToList());
Cmdtmp.Add($"#SBATCH -N {nodes2allocate}");
(myBatch as SlurmClient).AdditionalBatchCommands = Cmdtmp.ToArray();
}
aJob.Activate(myBatch);
}
}
```
```
import os
import sys
import numpy as np
import pandas as pd
import pysubgroup as ps
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'sd-4sql\\packages'))
saved_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd()))),'Data\\saved-data\\')
from sd_analysis import *
from subgroup_discovery import *
from sd_postprocessing import *
import matplotlib.pyplot as plt
import matplotlib
import warnings
warnings.filterwarnings("ignore")
```
### Servers with blocking session issues
```
queries = pd.read_csv(saved_path + 'dataset-d4.csv')
```
## Use cases 3 : Blocking sessions alert
#### Support
```
result_supp = sd_binary_conds (queries, dict_conds = {},_target = 'blockedSessions_disc', mesure = 'Support',_depth = 1,
threshold = 10000, result_size = 100, algorithm = 'Beam Search', _beam_width = 100,
features_ignore = ['blockedSessions'])
res_supp = result_supp.to_dataframe()
```
#### Lift
```
result_lift = sd_binary_conds (queries, dict_conds = {},_target = 'blockedSessions_disc', mesure = 'Lift',_depth = 1,
threshold = 10000, result_size = 100, algorithm = 'Beam Search', _beam_width = 100,
features_ignore = ['blockedSessions'])
res_lift = result_lift.to_dataframe()
```
#### WRAcc
```
result_wracc = sd_binary_conds (queries, dict_conds = {},_target = 'blockedSessions_disc', mesure = 'WRAcc',_depth = 1,
threshold = 10000, result_size = 100, algorithm = 'Beam Search', _beam_width = 100,
features_ignore = ['blockedSessions'])
res_wracc = result_wracc.to_dataframe()
```
#### Binomial
```
result_binomial = sd_binary_conds (queries, dict_conds = {},_target = 'blockedSessions_disc', mesure = 'Binomial',
_depth = 1,threshold = 10000, result_size = 100, algorithm = 'Beam Search',
_beam_width = 100,features_ignore = ['blockedSessions'])
res_binomial = result_binomial.to_dataframe()
```
#### Post-processing
```
plot_sgbars(res_lift, 10, ylabel="target share", title="Discovered Subgroups", dynamic_widths=False, _suffix="")
plot_npspace(res_lift, 10, queries, annotate=True, fixed_limits=False)
d, d_names, sg_names = greedy_jaccard(result_lift.to_descriptions(),10, queries, 0.8)
for sg in d_names.keys() :
print(sg)
similarity_sgs(result_lift.to_descriptions(), 10, queries, color=True)
similarity_dendrogram(result_lift.to_descriptions(), 20, queries)
indices = similarity_dendrogram(result_lift.to_descriptions(), 20, queries,truncated = True, p = 5)
res_raf = res_lift[res_lift.index.isin(indices)]
plot_sgbars(res_raf, res_raf.shape[0], ylabel="target share", title="Discovered Subgroups",
dynamic_widths=False, _suffix="")
plot_npspace(res_raf, res_raf.shape[0], queries, annotate=True, fixed_limits=False)
```
# Digit Recognizer - CNN
## Importing Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
```
## Importing Data and data Preprocessing
```
#Spliting the data to train and dev sets
(x_train, y_train), (x_dev, y_dev) = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train, -1)
x_dev = np.expand_dims(x_dev, -1)
#Normalizing the data
x_train = x_train / 255.0
x_dev = x_dev / 255.0
#Reshaping the data
x_train = x_train.reshape(-1,28,28,1)
x_dev = x_dev.reshape(-1,28,28,1)
#Encoding the Labels
y_train = to_categorical(y_train, num_classes = 10)
y_dev = to_categorical(y_dev, num_classes = 10)
```
## Model Making and training
```
#Building the Model
Model = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(filters = 32 , kernel_size=(5, 5), activation="relu", padding = 'Same'),
layers.MaxPooling2D(pool_size=(2, 2), strides = 2),
layers.Conv2D(filters = 64 , kernel_size=(5, 5), activation="relu", padding = 'Valid'),
layers.MaxPooling2D(pool_size=(2, 2), strides = 2),
layers.BatchNormalization(),
layers.Flatten(),
layers.Dense(100, activation="relu"),
layers.Dense(10, activation="softmax"),
]
)
Model.summary()
#Declaring Hyperparametrs
batch_size = 64
epochs = 15
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.01,
decay_steps=1000,
decay_rate=0.9)
optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)
# Training Model
Model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
history = Model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
```
## Model Evaluation
```
score = Model.evaluate(x_dev, y_dev, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.grid(linestyle="--")
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.grid(linestyle="--")
plt.legend(['train', 'test'], loc='upper right')
plt.show()
```
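Since `confusion_matrix` was imported above but not used, a per-class breakdown is a natural next step; with the trained model this would be `confusion_matrix(np.argmax(y_dev, axis=1), np.argmax(Model.predict(x_dev), axis=1))`. A tiny synthetic example to show the output:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Synthetic stand-ins for the decoded dev labels and predictions
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 2, 2, 2, 1])
cm = confusion_matrix(y_true, y_pred)  # rows = true class, columns = predicted class
print(cm.tolist())  # [[1, 0, 0], [0, 1, 1], [0, 0, 2]]
```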
```
import pandas as pd
import numpy as np
"""
Hypothesis: Links contain more information about duplicate data. Create a test
exploring whether further investigation is neccessary.
"""
def loadandcleandata(filepath):
"""Upload csv file and remove unneccesary columns for testing."""
# Loading .csv file
df = pd.read_csv(filepath, sep="|")
# drop all unnecessary columns for testing
# df.drop(labels=['id', 'city', 'state', 'lat', 'long', 'description',
# 'tags', 'Verbalization', 'Empty_Hand_Soft',
# 'Empty_Hand_Hard', 'Less_Lethal_Methods', 'Lethal_Force',
# 'Uncategorized'], axis=1, inplace=True)
return df
filepath = '/content/Labs28_AllSources_Data2020-11-10 23_02_00.093189+00_00.csv'
last_updated = '2020-11-09 03_02_06.620162+00_00'
df = loadandcleandata(filepath)
# df
# Print Here
def see_all_links(col):
"""Visualize all links. Create a dataframe from each individual link."""
l = []
for everyrow in col:
l.append(everyrow)
return pd.DataFrame(data=l)
col = df["links"]
test = see_all_links(col) # create a dataframe of only links
print("Length of test", len(test))
print("Length of data", len(df))
# test
# In theory you can add this Series to df
# use the function below on the column***
# Print Here
def removeUniqueInstance(col):
"""Removes all unique instances and rows. Returns updated dataframe"""
url = []
instances = []
for i, item in enumerate(vc_df[0]):
if item != 1:
# print("index: ", i, item, vc_df['url'][i])
url.append(vc_df['url'][i])
instances.append(item)
else:
url.append(None)
instances.append(None)
return url, instances
vc_df = pd.DataFrame(data=test[0].value_counts()) # Value counts dataframe
vc_df['url'] = vc_df.index # Create a column from the urls within the index
vc_df['url'], vc_df[0] = removeUniqueInstance(vc_df[0]) # Unpack function
finaltest = vc_df.reset_index().dropna().drop(labels=['index'], axis=1)
finaltest
# Print Here
# # SINGLE substring to be searched
# # sub = "'https://twitter.com/AFriendlyDad/status/1318740334541590529'"
# sub = finaltest['url'][-1]
# # creating and passsing series to new column
# test["Indexes"]= test[0].str.find(sub)
# # print frequency substring shows; -1 means it does not exist.
# print("Occurrence(s):\n", test['Indexes'].value_counts())
# # print the matching row, row title, and the indeces.
# for i, item in enumerate(test['Indexes']):
# if item == 0:
# print(df['date'][i], df['title'][i], "\nindex: ", i, test[0][i])
###############################################################################
# Multipule Substring Search
def findandprint(col):
"""
A function that takes a column and searches for the rows whose links
value_counts reported more than once.
input: column of string hyperlinks
return: lists of row data: date reported, title, index of the link, the
duplicated url, and the remaining columns
"""
index, date, links, id, city, state, lat, lon, title, description, tags, Verbalization, Empty_Hand_Soft, Empty_Hand_Hard, Less_Lethal_Methods, Lethal_Force, Uncategorized = [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []
search_array = []
row = 0
counter = 0
for i in range(len(col)):
sub = col[row]
search_array = test[0].str.find(sub)
for i, item in enumerate(search_array):
if item == 0:
print(df['date'][i],
df['title'][i],
"\nIndexed Row: ", i,
test[0][i])
index.append(i)
date.append(df['date'][i])
links.append(test[0][i])
id.append(df['id'][i])
city.append(df['city'][i])
state.append(df['state'][i])
lat.append(df['lat'][i])
lon.append(df['long'][i])
title.append(df['title'][i])
description.append(df['description'][i])
tags.append(df['tags'][i])
Verbalization.append(df['Verbalization'][i])
Empty_Hand_Soft.append(df['Empty_Hand_Soft'][i])
Empty_Hand_Hard.append(df['Empty_Hand_Hard'][i])
Less_Lethal_Methods.append(df['Less_Lethal_Methods'][i])
Lethal_Force.append(df['Lethal_Force'][i])
Uncategorized.append(df['Uncategorized'][i])
row += 1
counter += 1
search_array = []
percentageofdata = counter / df.shape[0] * 100
print()
print(f'------------------ {counter} occurrences of duplicate data. --------------------')
print(f'The uploaded .csv contains {round(percentageofdata, 4)} % of duplicate data.')
print()
return index, date, links, id, city, state, lat, lon, title, description, tags, Verbalization, Empty_Hand_Soft, Empty_Hand_Hard, Less_Lethal_Methods, Lethal_Force, Uncategorized
# Print Here
datas = findandprint(finaltest['url'])
FinalOutput_transposed = pd.DataFrame(data=datas).T
FinalOutput_transposed.drop_duplicates(subset=2, inplace=True)
FinalOutput_transposed
df.iloc[1076] ## insert row_index number (1st instance)
df.iloc[1086] ## insert row_index number (2nd instance)
# Clean Up Data
FinalOutput_transposed.rename(
mapper={0: 'duplicate_idx',
1: 'date',
2: 'links',
3: 'id',
4: "city",
5: "state",
6: "lat",
7: "long",
8: "title",
9: "description",
10: "tags",
11: "verbalization",
12: "empty_hand_soft",
13: "empty_hand_hard",
14: "less_lethal_methods",
15: "lethal_force",
16: "uncategorized"
},
axis=1, inplace=True
)
FinalOutput_transposed.sort_values(by='date', inplace=True)
FinalOutput_transposed.reset_index(drop=True, inplace=True)
FinalOutput_transposed.drop(labels='duplicate_idx', axis=1, inplace=True)
FinalOutput_transposed.drop_duplicates(subset='links', inplace=True)
# Replace NaNs with int zero
FinalOutput_transposed.replace({np.NaN: 0}, inplace=True)
# Update the "date" column to timestamps
FinalOutput_transposed['date'] = pd.to_datetime(FinalOutput_transposed['date'],format='%Y-%m-%d')
# change the column type from float to int
FinalOutput_transposed['verbalization'] = FinalOutput_transposed['verbalization'].astype(int)
FinalOutput_transposed['empty_hand_soft'] = FinalOutput_transposed['empty_hand_soft'].astype(int)
FinalOutput_transposed['empty_hand_hard'] = FinalOutput_transposed['empty_hand_hard'].astype(int)
FinalOutput_transposed['less_lethal_methods'] = FinalOutput_transposed['less_lethal_methods'].astype(int)
FinalOutput_transposed['lethal_force'] = FinalOutput_transposed['lethal_force'].astype(int)
FinalOutput_transposed['uncategorized'] = FinalOutput_transposed['uncategorized'].astype(int)
FinalOutput_transposed['lat'] = FinalOutput_transposed['lat'].replace({'None': 0.0})
FinalOutput_transposed['long'] = FinalOutput_transposed['long'].replace({'None': 0.0})
FinalOutput_transposed['lat'] = FinalOutput_transposed['lat'].astype(float)
FinalOutput_transposed['long'] = FinalOutput_transposed['long'].astype(float)
FinalOutput_transposed.info()
# Create a copy of the data
cleaned_df_wo_duplicates = FinalOutput_transposed.copy()
cleaned_df_wo_duplicates
# Saved the data in on .csv file for all sources.
# cleaned_df_wo_duplicates.to_csv(f'Labs28_AllSources_wo_DuplicateLinks{last_updated}.csv', sep="|",index=False) # Uncomment to save.
cleaned_df_wo_duplicates.to_csv(f'Test{last_updated}.csv', sep="|",index=False) # Uncomment to save.
```
# * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
#Proceed to Find_Sources_and_Users.ipynb
# * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
```
test = pd.read_csv('/content/Test2020-11-09 03_02_06.620162+00_00.csv', sep='|')
test
test.id.nunique()
```
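As an aside, pandas' built-in `duplicated` can flag repeated links directly, without the manual `value_counts`/`str.find` passes above. A minimal sketch on synthetic data:

```python
import pandas as pd

df = pd.DataFrame({
    "links": ["a.com", "b.com", "a.com", "c.com", "b.com"],
    "title": ["t1", "t2", "t3", "t4", "t5"],
})
# keep=False marks every occurrence of a duplicated value, not just the repeats
dupes = df[df["links"].duplicated(keep=False)]
print(dupes["links"].tolist())  # ['a.com', 'b.com', 'a.com', 'b.com']
```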
# 100 pandas puzzles
Inspired by [100 Numpy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.
Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.
If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...
- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)
- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)
Enjoy the puzzles!
\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.*
## Importing pandas
### Getting started and checking your pandas setup
Difficulty: *easy*
**1.** Import pandas under the name `pd`.
```
import pandas as pd
```
**2.** Print the version of pandas that has been imported.
```
pd.__version__
```
**3.** Print out all the version information of the libraries that are required by the pandas library.
```
pd.show_versions()
```
## DataFrame basics
### A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty: *easy*
Note: remember to import numpy using:
```python
import numpy as np
```
Consider the following Python dictionary `data` and Python list `labels`:
``` python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
```
df = pd.DataFrame(data, index=labels)
```
**5.** Display a summary of the basic information about this DataFrame and its data.
```
df.info()
# ...or...
df.describe()
```
**6.** Return the first 3 rows of the DataFrame `df`.
```
df.iloc[:3]
# or equivalently
df.head(3)
```
**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
```
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
```
**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
```
df.loc[df.index[[3, 4, 8]], ['animal', 'age']]
```
**9.** Select only the rows where the number of visits is greater than 3.
```
df[df['visits'] > 3]
```
**10.** Select the rows where the age is missing, i.e. is `NaN`.
```
df[df['age'].isnull()]
```
**11.** Select the rows where the animal is a cat *and* the age is less than 3.
```
df[(df['animal'] == 'cat') & (df['age'] < 3)]
```
**12.** Select the rows where the age is between 2 and 4 (inclusive).
```
df[df['age'].between(2, 4)]
```
**13.** Change the age in row 'f' to 1.5.
```
df.loc['f', 'age'] = 1.5
```
**14.** Calculate the sum of all visits (the total number of visits).
```
df['visits'].sum()
```
**15.** Calculate the mean age for each different animal in `df`.
```
df.groupby('animal')['age'].mean()
```
**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
```
# values in the same order as the columns: animal, age, visits, priority
df.loc['k'] = ['dog', 5.5, 2, 'no']
# and then deleting the new row...
df = df.drop('k')
```
**17.** Count the number of each type of animal in `df`.
```
df['animal'].value_counts()
```
**18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order.
```
df.sort_values(by=['age', 'visits'], ascending=[False, True])
```
**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
```
df['priority'] = df['priority'].map({'yes': True, 'no': False})
```
**20.** In the 'animal' column, change the 'snake' entries to 'python'.
```
df['animal'] = df['animal'].replace('snake', 'python')
```
**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
```
df.pivot_table(index='animal', columns='visits', values='age', aggfunc='mean')
```
## DataFrames: beyond the basics
### Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: *medium*
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
**22.** You have a DataFrame `df` with a column 'A' of integers. For example:
```python
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
```
How do you filter out rows which contain the same integer as the row immediately above?
```
df.loc[df['A'].shift() != df['A']]
# Alternatively, we could use drop_duplicates() here. Note
# that this removes *all* duplicates though, so it won't
# work as desired if A is [1, 1, 2, 2, 1, 1] for example.
df.drop_duplicates(subset='A')
```
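To see the difference between the two approaches concretely, here is a quick check using the pathological input mentioned in the comment above:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 2, 1, 1]})

# shift() keeps the first element of every *run* of equal values...
kept_shift = df.loc[df['A'].shift() != df['A'], 'A'].tolist()

# ...whereas drop_duplicates() keeps only the first occurrence of each value overall
kept_dedup = df.drop_duplicates(subset='A')['A'].tolist()

print(kept_shift)  # [1, 2, 1]
print(kept_dedup)  # [1, 2]
```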
**23.** Given a DataFrame of numeric values, say
```python
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
```
how do you subtract the row mean from each element in the row?
```
df.sub(df.mean(axis=1), axis=0)
```
**24.** Suppose you have DataFrame with 10 columns of real numbers, for example:
```python
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
```
Which column of numbers has the smallest sum? (Find that column's label.)
```
df.sum().idxmin()
```
**25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
```
len(df) - df.duplicated(keep=False).sum()
# or perhaps more simply...
len(df.drop_duplicates(keep=False))
```
The next three puzzles are slightly harder...
**26.** You have a DataFrame that consists of 10 columns of floating-point numbers. Suppose that exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the *column* which contains the *third* NaN value.
(You should return a Series of column labels.)
```
(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
```
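A small sanity check of this idiom, on a 2-row frame where the position of the third NaN in each row is easy to read off by eye:

```python
import numpy as np
import pandas as pd

nan = np.nan
df = pd.DataFrame({'a': [nan, 1.0], 'b': [nan, nan], 'c': [1.0, nan],
                   'd': [nan, 2.0], 'e': [2.0, nan]})

# cumulative NaN count along each row; idxmax finds the first column
# where that count reaches 3, i.e. the column holding the third NaN
third_nan = (df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
print(third_nan.tolist())  # ['d', 'e']
```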
**27.** A DataFrame has a column of groups 'grps' and a column of numbers 'vals'. For example:
```python
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
```
For each *group*, find the sum of the three greatest values.
```
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
```
**28.** A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.
```
df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()
```
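As an illustration of how `pd.cut` forms the bins, note that the intervals are right-closed by default, so a value of exactly 10 lands in `(0, 10]`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 10, 11, 25, 100], 'B': [1, 2, 4, 8, 16]})
binned = df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()

# (0, 10] collects A=1 and A=10; (90, 100] collects A=100
print(binned.loc[pd.Interval(0, 10)])    # 3
print(binned.loc[pd.Interval(90, 100)])  # 16
```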
## DataFrames: harder problems
### These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).
Difficulty: *hard*
**29.** Consider a DataFrame `df` where there is an integer column 'X':
```python
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
```
For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be `[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]`. Make this a new column 'Y'.
```
izero = np.r_[-1, (df['X'] == 0).nonzero()[0]] # indices of zeros
idx = np.arange(len(df))
df['Y'] = idx - izero[np.searchsorted(izero - 1, idx) - 1]
# http://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series/
# credit: Behzad Nouri
```
Here's an alternative approach based on a [cookbook recipe](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping):
```
x = (df['X'] != 0).cumsum()
y = x != x.shift()
df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()
```
And another approach using a groupby:
```
df['Y'] = df.groupby((df['X'] == 0).cumsum()).cumcount()
# We're off by one before we reach the first zero.
first_zero_idx = (df['X'] == 0).idxmax()
df['Y'].iloc[0:first_zero_idx] += 1
```
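The approaches above should all give the same answer; here is a direct check of two of them against the expected output (note that `Series.nonzero()` was removed in later versions, so `np.flatnonzero` is used here instead):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
expected = [1, 2, 0, 1, 2, 3, 4, 0, 1, 2]

# NumPy approach: for each position, distance back to the previous zero
izero = np.r_[-1, np.flatnonzero(df['X'].values == 0)]  # indices of zeros
idx = np.arange(len(df))
y_numpy = list(idx - izero[np.searchsorted(izero - 1, idx) - 1])

# groupby approach: restart a counter after each zero
y_group = df.groupby((df['X'] == 0).cumsum()).cumcount()
first_zero_idx = (df['X'] == 0).idxmax()
y_group.iloc[0:first_zero_idx] += 1  # off-by-one before the first zero

print(y_numpy)           # [1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
print(y_group.tolist())  # [1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
```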
**30.** Consider a DataFrame containing rows and columns of purely numerical data. Create a list of the row-column index locations of the 3 largest values.
```
df.unstack().sort_values()[-3:].index.tolist()
# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/
# credit: DSM
```
**31.** Given a DataFrame with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
```
def replace(group):
    mask = group < 0
    group[mask] = group[~mask].mean()
    return group
df.groupby(['grps'])['vals'].transform(replace)
# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/
# credit: unutbu
```
**32.** Implement a rolling mean over groups with window size 3, which ignores NaN value. For example consider the following DataFrame:
```python
>>> df = pd.DataFrame({'group': list('aabbabbbabab'),
'value': [1, 2, 3, np.nan, 2, 3,
np.nan, 1, 7, 3, np.nan, 8]})
>>> df
group value
0 a 1.0
1 a 2.0
2 b 3.0
3 b NaN
4 a 2.0
5 b 3.0
6 b NaN
7 b 1.0
8 a 7.0
9 b 3.0
10 a NaN
11 b 8.0
```
The goal is to compute the Series:
```
0 1.000000
1 1.500000
2 3.000000
3 3.000000
4 1.666667
5 3.000000
6 3.000000
7 2.000000
8 3.666667
9 2.000000
10 4.500000
11 4.000000
```
E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN, the value in the new column at this row index should be 3.0: just the two non-NaN values are used to compute the mean, (3+3)/2.
```
g1 = df.groupby(['group'])['value'] # group values
g2 = df.fillna(0).groupby(['group'])['value'] # fillna, then group values
s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count() # compute means
s.reset_index(level=0, drop=True).sort_index() # drop/sort index
# http://stackoverflow.com/questions/36988123/pandas-groupby-and-rolling-apply-ignoring-nans/
```
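To convince yourself the trick works, compare one window by hand: for group 'b' at row index 5 the window holds 3.0, NaN and 3.0, so the fillna-sum divided by the non-NaN count gives (3 + 0 + 3) / 2 = 3.0:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'group': list('aabbabbbabab'),
                   'value': [1, 2, 3, np.nan, 2, 3,
                             np.nan, 1, 7, 3, np.nan, 8]})

g1 = df.groupby(['group'])['value']            # count() skips NaN
g2 = df.fillna(0).groupby(['group'])['value']  # sum() treats NaN as 0
s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count()
s = s.reset_index(level=0, drop=True).sort_index()

print(round(s.loc[5], 6))  # 3.0
print(round(s.loc[4], 6))  # 1.666667
```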
## Series and DatetimeIndex
### Exercises for creating and manipulating Series with datetime data
Difficulty: *easy/medium*
pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.
**33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`.
```
dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B')
s = pd.Series(np.random.rand(len(dti)), index=dti)
```
**34.** Find the sum of the values in `s` for every Wednesday.
```
s[s.index.weekday == 2].sum()
```
**35.** For each calendar month in `s`, find the mean of values.
```
s.resample('M').mean()
```
**36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred.
```
# pd.TimeGrouper is deprecated; pd.Grouper is the modern equivalent
s.groupby(pd.Grouper(freq='4M')).idxmax()
```
**37.** Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.
```
pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')
```
## Cleaning Data
### Making a DataFrame easier to work with
Difficulty: *easy/medium*
It happens all the time: someone gives you data containing malformed strings, Python lists and missing data. How do you tidy it up so you can get on with the analysis?
Take this monstrosity as the DataFrame to use in the following puzzles:
```python
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
```
(It's some flight data I made up; it's not meant to be accurate in any way.)
**38.** Some values in the FlightNumber column are missing. These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Fill in these missing numbers and make the column an integer column (instead of a float column).
```
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)
```
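A quick check that linear interpolation really recovers the intended numbers, since each gap sits exactly halfway between two known flight numbers:

```python
import numpy as np
import pandas as pd

s = pd.Series([10045, np.nan, 10065, np.nan, 10085])
filled = s.interpolate().astype(int)
print(filled.tolist())  # [10045, 10055, 10065, 10075, 10085]
```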
**39.** The From\_To column would be better as two separate columns! Split each string on the underscore delimiter `_` to give a new temporary DataFrame with the correct values. Assign the correct column names to this temporary DataFrame.
```
temp = df.From_To.str.split('_', expand=True)
temp.columns = ['From', 'To']
```
**40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
```
temp['From'] = temp['From'].str.capitalize()
temp['To'] = temp['To'].str.capitalize()
```
**41.** Delete the From_To column from `df` and attach the temporary DataFrame from the previous questions.
```
df = df.drop('From_To', axis=1)
df = df.join(temp)
```
**42**. In the Airline column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`.
```
df['Airline'] = df['Airline'].str.extract(r'([a-zA-Z\s]+)', expand=False).str.strip()
# note: using .strip() gets rid of any leading/trailing spaces
```
**43**. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named `delays`, rename the columns `delay_1`, `delay_2`, etc. and replace the unwanted RecentDelays column in `df` with `delays`.
```
# there are several ways to do this, but the following approach is possibly the simplest
delays = df['RecentDelays'].apply(pd.Series)
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns)+1)]
df = df.drop('RecentDelays', axis=1).join(delays)
```
The DataFrame should look much better now.
## Using MultiIndexes
### Go beyond flat DataFrames with additional index levels
Difficulty: *medium*
Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.
The set of puzzles below explores how you might use multiple index levels to enhance data analysis.
To warm up, we'll make a Series with two index levels.
**44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`.
```
letters = ['A', 'B', 'C']
numbers = list(range(10))
mi = pd.MultiIndex.from_product([letters, numbers])
s = pd.Series(np.random.rand(30), index=mi)
```
**45.** Check the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex).
```
s.index.is_lexsorted()
# or more verbosely...
s.index.lexsort_depth == s.index.nlevels
```
**46**. Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series.
```
s.loc[:, [1, 3, 6]]
```
**47**. Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level.
```
s.loc[pd.IndexSlice[:'B', 5:]]
# or equivalently without IndexSlice...
s.loc[slice(None, 'B'), slice(5, None)]
```
**48**. Sum the values in `s` for each label in the first level (you should have Series giving you a total for labels A, B and C).
```
s.sum(level=0)
```
**49**. Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`?
```
# One way is to use .unstack()...
# This method should convince you that s is essentially
# just a regular DataFrame in disguise!
s.unstack().sum(axis=0)
```
**50**. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.
```
new_s = s.swaplevel(0, 1)
# check
new_s.index.is_lexsorted()
# sort
new_s = new_s.sort_index()
```
## Minesweeper
### Generate the numbers for safe squares in a Minesweeper grid
Difficulty: *medium* to *hard*
If you've ever used an older version of Windows, there's a good chance you've played with [Minesweeper](https://en.wikipedia.org/wiki/Minesweeper_(video_game)). If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.
In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares.
**51**. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.
```
X = 5
Y = 4
```
To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'` containing every coordinate for this grid. That is, the DataFrame should start:
```
x y
0 0 0
1 0 1
2 0 2
```
```
# pd.tools.util.cartesian_product was removed in later pandas versions;
# a plain list comprehension works everywhere:
df = pd.DataFrame([(x, y) for x in range(X) for y in range(Y)], columns=['x', 'y'])
```
**52**. For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). The probability of a mine occurring at each location should be 0.4.
```
# One way is to draw samples from a binomial distribution.
df['mine'] = np.random.binomial(1, 0.4, X*Y)
```
**53**. Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.)
```
# Here is one way to solve using merges.
# It's not necessarily the optimal way, just
# the solution I thought of first...
df['adjacent'] = \
df.merge(df + [ 1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, -1, 0], on=['x', 'y'], how='left')\
.iloc[:, 3:]\
.sum(axis=1)
# An alternative solution is to pivot the DataFrame
# to form the "actual" grid of mines and use convolution.
# See https://github.com/jakevdp/matplotlib_pydata2013/blob/master/examples/minesweeper.py
from scipy.signal import convolve2d
mine_grid = df.pivot_table(columns='x', index='y', values='mine')
counts = convolve2d(mine_grid.astype(complex), np.ones((3, 3)), mode='same').real.astype(int)
df['adjacent'] = (counts - mine_grid).ravel('F')
```
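The convolution idea is easy to verify on a tiny hand-made grid; the pure-NumPy version below (summing the eight neighbour-shifted copies of a padded grid) avoids the SciPy dependency. With a single mine in the centre of a 3x3 grid, every other square should see exactly one adjacent mine:

```python
import numpy as np

mine_grid = np.zeros((3, 3), dtype=int)
mine_grid[1, 1] = 1  # one mine in the centre

padded = np.pad(mine_grid, 1)
# sum the eight neighbour-shifted copies; (0, 0) is excluded so a
# square's own mine is not counted, only *adjacent* mines
counts = sum(padded[1 + dy:4 + dy, 1 + dx:4 + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))

print(counts)
# [[1 1 1]
#  [1 0 1]
#  [1 1 1]]
```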
**54**. For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN.
```
df.loc[df['mine'] == 1, 'adjacent'] = np.nan
```
**55**. Finally, convert the DataFrame to grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate.
```
df.drop('mine', axis=1)\
.set_index(['y', 'x']).unstack()
```
## Plotting
### Visualize trends and patterns in data
Difficulty: *medium*
To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.
**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:
```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```
matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to ```plt```.
```%matplotlib inline``` tells the notebook to show plots inline, instead of creating them in a separate window.
```plt.style.use('ggplot')``` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.
For starters, make a scatter plot of this random data, but use black X's instead of the default markers.
```df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})```
Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck!
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
df.plot.scatter("xs", "ys", color = "black", marker = "x")
```
**57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.
(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)
*The chart doesn't have to be pretty: this isn't a course in data viz!*
```
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
```
```
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
df.plot.scatter("hours_in", "productivity", s = df.happiness * 30, c = df.caffienated)
```
**58.** What if we want to plot multiple things? Pandas allows you to pass in a matplotlib *Axis* object for plots, and plots will also return an Axis object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)
```
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
```
```
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
ax = df.plot.bar("month", "revenue", color = "green")
df.plot.line("month", "advertising", secondary_y = True, ax = ax)
ax.set_xlim((-1,12))
```
Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.

This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this.
The below cell contains helper functions. Call ```day_stock_data()``` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call ```plot_candlestick(df)``` on your properly aggregated and formatted stock data to print the candlestick chart.
```
#This function is designed to create semi-interesting random stock price data
import numpy as np
def float_to_time(x):
    return str(int(x)) + ":" + str(int(x%1 * 60)).zfill(2) + ":" + str(int(x*60 % 1 * 60)).zfill(2)

def day_stock_data():
    # NYSE is open from 9:30 to 4:00
    time = 9.5
    price = 100
    results = [(float_to_time(time), price)]
    while time < 16:
        elapsed = np.random.exponential(.001)
        time += elapsed
        if time > 16:
            break
        price_diff = np.random.uniform(.999, 1.001)
        price *= price_diff
        results.append((float_to_time(time), price))
    df = pd.DataFrame(results, columns = ['time','price'])
    df.time = pd.to_datetime(df.time)
    return df

def plot_candlestick(agg):
    fig, ax = plt.subplots()
    for time in agg.index:
        ax.plot([time.hour] * 2, agg.loc[time, ["high","low"]].values, color = "black")
        ax.plot([time.hour] * 2, agg.loc[time, ["open","close"]].values, color = agg.loc[time, "color"], linewidth = 10)
    ax.set_xlim((8,16))
    ax.set_ylabel("Price")
    ax.set_xlabel("Hour")
    ax.set_title("OHLC of Stock Value During Trading Day")
    plt.show()
```
**59.** Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices.
```
df = day_stock_data()
df.head()
df.set_index("time", inplace = True)
agg = df.resample("H").ohlc()
agg.columns = agg.columns.droplevel()
agg["color"] = (agg.close > agg.open).map({True:"green",False:"red"})
agg.head()
```
**60.** Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the ```plot_candlestick(df)``` function above, or matplotlib's [```plot``` documentation](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.plot.html) if you get stuck.
```
plot_candlestick(agg)
```
*More exercises to follow soon...*
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.rcParams["figure.dpi"] = 80
def remove_frame():
    for spine in plt.gca().spines.values():
        spine.set_visible(False)
np.random.seed(111)
# classification
from sklearn.datasets import make_blobs
X, y = make_blobs(centers=3)
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y>=1, 0], X[y>=1, 1], c="r")
plt.xticks([])
plt.yticks([])
plt.grid()
remove_frame()
plt.savefig("classification.png")
# regression
def true_f(x):
    return 0.1 * (x-2) ** 3 + x ** 2 - 8.0 * x - 1.0

def generate(n_samples):
    X = np.random.rand(n_samples) * 20.0 - 10.0
    y = true_f(X) + 5 * np.random.randn(n_samples)
    return X.reshape(n_samples, 1), y
X_train, y_train = generate(15)
xs = np.linspace(-10, 10, num=1000)
plt.plot(xs, true_f(xs), c="r", label="$g(x)$")
plt.scatter(X_train, y_train, label="$y = g(x) + \epsilon$")
plt.xticks([])
plt.yticks([])
plt.grid()
remove_frame()
plt.savefig("regression.png")
def true_f(x):
    return 0.1 * (x-2) ** 3 + x ** 2 - 6.0 * x + 1.0

def generate(n_samples):
    X = np.random.rand(n_samples) * 20.0 - 10.0
    y = true_f(X) + 10 * np.random.randn(n_samples)
    return X.reshape(n_samples, 1), y
X_train, y_train = generate(15)
# data
xs = np.linspace(-10, 10, num=1000)
plt.plot(xs, true_f(xs), c="r", label="$g(x)$")
plt.scatter(X_train, y_train, label="$y = g(x) + \epsilon$")
plt.grid()
plt.legend()
remove_frame()
plt.savefig("data.png")
from sklearn.preprocessing import PolynomialFeatures
def model(X, beta, poly):
    Xp = poly.transform(X)
    return np.dot(Xp, beta)

for degree in [1, 2, 3, 4, 5, 10]:
    poly = PolynomialFeatures(degree=degree, include_bias=True)
    Xp = poly.fit_transform(X_train)
    beta = np.dot(np.dot(np.linalg.inv(np.dot(Xp.T, Xp)), Xp.T), y_train)
    error = np.mean((y_train - model(X_train, beta, poly)) ** 2)
    plt.plot(xs, true_f(xs), c="r", label="$g(x)$")
    plt.scatter(X_train, y_train, label="$y = g(x) + \epsilon$")
    plt.plot(xs, model(xs.reshape(-1,1), beta, poly), c="b", label="$\hat{y} = f(x)$")
    plt.title("degree = %d, $\hat{R}(f, d) = %.2f$" % (degree, error))
    plt.ylim(-20, 80)
    plt.grid()
    plt.legend()
    remove_frame()
    plt.savefig("poly-%d.png" % degree)
    plt.show()
train_error = []
test_error = []
X_test, y_test = generate(10000)
for degree in range(1, 11):
    poly = PolynomialFeatures(degree=degree, include_bias=True)
    Xp = poly.fit_transform(X_train)
    beta = np.dot(np.dot(np.linalg.inv(np.dot(Xp.T, Xp)), Xp.T), y_train)
    error = np.mean((y_train - model(X_train, beta, poly)) ** 2)
    train_error.append(error)
    error = np.mean((y_test - model(X_test, beta, poly)) ** 2)
    test_error.append(error)
plt.plot(range(1,11), train_error, c="r", label="training error")
plt.plot(range(1,11), test_error, c="b", label="test error")
#plt.ylim(-20, 80)
plt.yscale("log")
plt.grid()
plt.legend()
remove_frame()
plt.savefig("training-test-error.png")
plt.show()
train_error
np.random.seed(777)
for N in [5, 10, 50, 100, 500, 1000]:
    X_train, y_train = generate(N)
    poly = PolynomialFeatures(degree=3, include_bias=True)
    Xp = poly.fit_transform(X_train)
    beta = np.dot(np.dot(np.linalg.inv(np.dot(Xp.T, Xp)), Xp.T), y_train)
    error = np.mean((y_train - model(X_train, beta, poly)) ** 2)
    plt.plot(xs, true_f(xs), c="r", label="$g(x)$")
    plt.scatter(X_train, y_train, label="$y = g(x) + \epsilon$")
    plt.plot(xs, model(xs.reshape(-1,1), beta, poly), c="b", label="$\hat{y} = f(x)$")
    plt.title("N = %d" % N)
    plt.ylim(-20, 80)
    plt.grid()
    plt.legend()
    remove_frame()
    plt.savefig("poly-N-%d.png" % N)
    plt.show()
np.random.seed(42)
for degree in [1, 2, 3, 4, 5]:
    y_pred = []
    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    plt.plot(xs, true_f(xs), c="r", label="$g(x)$")
    for i in range(200):
        N = 15
        X_train, y_train = generate(N)
        poly = PolynomialFeatures(degree=degree, include_bias=True)
        Xp = poly.fit_transform(X_train)
        beta = np.dot(np.dot(np.linalg.inv(np.dot(Xp.T, Xp)), Xp.T), y_train)
        error = np.mean((y_train - model(X_train, beta, poly)) ** 2)
        pred = model(xs.reshape(-1,1), beta, poly)
        plt.plot(xs, pred, c="b", alpha=0.025)
        y_pred.append(pred)
    m = np.mean(y_pred, axis=0)
    s = np.std(y_pred, axis=0)
    plt.plot(xs, m, c="b", label="$\mathbb{E}_{d} [\hat{y}]$")
    ax1.fill_between(xs, m+s, m-s, color="b", alpha=0.1)
    plt.title("degree = %d, N = %d" % (degree, N))
    plt.ylim(-20, 80)
    plt.grid()
    plt.legend(loc="upper left")
    remove_frame()
    plt.savefig("poly-avg-degree-%d.png" % degree)
    plt.show()
```
# Quantum Phase Estimation
## Contents
1. [Overview](#overview)
1.1 [Intuition](#intuition)
1.2 [Mathematical Foundation](#maths)
2. [Example: T-gate](#example_t_gate)
2.1 [Creating the Circuit](#creating_the_circuit)
2.2 [Results](#results)
3. [Getting More Precision](#getting_more_precision)
3.1 [The Problem](#the_problem)
3.2 [The Solution](#the_solution)
4. [Experimenting on Real Devices](#real_devices)
4.1 [With the Circuit from 2.1](#circuit_2.1)
5. [Exercises](#exercises)
6. [Looking Forward](#looking_forward)
7. [References](#references)
8. [Contributors](#contributors)
Quantum phase estimation is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms. The objective of the algorithm is the following:
Given a unitary operator $U$, the algorithm estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$. Here $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. Since $U$ is unitary, all of its eigenvalues have a norm of 1.
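As a purely classical sanity check of this statement, one can verify with NumPy that the eigenvalues of a unitary lie on the unit circle and that $\theta$ can be read off from an eigenvalue's complex argument (the diagonal matrix below is just an illustrative example, not part of the quantum algorithm):

```python
import numpy as np

theta = 0.125  # the phase to recover; for the T-gate, theta = 1/8
U = np.diag([1.0, np.exp(2j * np.pi * theta)])  # unitary with eigenvalue e^{2 pi i theta}
psi = np.array([0.0, 1.0])                      # the corresponding eigenvector

# U|psi> = e^{2 pi i theta}|psi>, so theta = arg(eigenvalue) / (2 pi)
eigval = (U @ psi)[1] / psi[1]
recovered = np.angle(eigval) / (2 * np.pi)

print(abs(eigval))  # 1.0 -- eigenvalues of a unitary have unit norm
print(recovered)    # 0.125
```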
## 1. Overview <a id='overview'></a>
The general quantum circuit for phase estimation is shown below. The top register contains $t$ 'counting' qubits, and the bottom contains qubits in the state $|\psi\rangle$:

### 1.1 Intuition <a id='intuition'></a>
The quantum phase estimation algorithm uses phase kickback to write the phase of $U$ (in the Fourier basis) to the $t$ qubits in the counting register. We then use the inverse QFT to translate this from the Fourier basis into the computational basis, which we can measure.
We remember (from the QFT chapter) that in the Fourier basis the topmost qubit completes one full rotation when counting between $0$ and $2^t$. To count to a number, $x$ between $0$ and $2^t$, we rotate this qubit by $\tfrac{x}{2^t}$ around the z-axis. For the next qubit we rotate by $\tfrac{2x}{2^t}$, then $\tfrac{4x}{2^t}$ for the third qubit.

When we use a qubit to control the $U$-gate, the qubit will turn (due to kickback) proportionally to the phase $e^{2i\pi\theta}$. We can use successive $CU$-gates to repeat this rotation an appropriate number of times until we have encoded the phase theta as a number between $0$ and $2^t$ in the Fourier basis.
Then we simply use $QFT^\dagger$ to convert this into the computational basis.
### 1.2 Mathematical Foundation <a id='maths'></a>
As mentioned above, this circuit estimates the phase of a unitary operator $U$. It estimates $\theta$ in $U\vert\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, where $|\psi\rangle$ is an eigenvector and $e^{\boldsymbol{2\pi i}\theta}$ is the corresponding eigenvalue. The circuit operates in the following steps:
i. **Setup**: $\vert\psi\rangle$ is in one set of qubit registers. An additional set of $n$ qubits form the counting register on which we will store the value $2^n\theta$:
$$ \psi_0 = \lvert 0 \rangle^{\otimes n} \lvert \psi \rangle$$
ii. **Superposition**: Apply a $n$-bit Hadamard gate operation $H^{\otimes n}$ on the counting register:
$$ \psi_1 = {\frac {1}{2^{\frac {n}{2}}}}\left(|0\rangle +|1\rangle \right)^{\otimes n} \lvert \psi \rangle$$
iii. **Controlled Unitary Operations**: We need to introduce the controlled unitary $CU$ that applies the unitary operator $U$ on the target register only if its corresponding control qubit is $|1\rangle$. Since $U$ is a unitary operator with eigenvector $|\psi\rangle$ such that $U|\psi \rangle =e^{\boldsymbol{2\pi i} \theta }|\psi \rangle$, this means:
$$U^{2^{j}}|\psi \rangle =U^{2^{j}-1}U|\psi \rangle =U^{2^{j}-1}e^{2\pi i\theta }|\psi \rangle =\cdots =e^{2\pi i2^{j}\theta }|\psi \rangle$$
Applying all the $n$ controlled operations $CU^{2^{j}}$ with $0\leq j\leq n-1$, and using the relation $|0\rangle \otimes |\psi \rangle +|1\rangle \otimes e^{2\pi i\theta }|\psi \rangle =\left(|0\rangle +e^{2\pi i\theta }|1\rangle \right)\otimes |\psi \rangle$:
\begin{aligned}
\psi_{2} & =\frac {1}{2^{\frac {n}{2}}} \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{n-1}}}|1\rangle \right) \otimes \cdots \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{1}}}\vert1\rangle \right) \otimes \left(|0\rangle+{e^{\boldsymbol{2\pi i} \theta 2^{0}}}\vert1\rangle \right) \otimes |\psi\rangle\\\\
& = \frac{1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes \vert\psi\rangle
\end{aligned}
where $k$ denotes the integer representation of the $n$-bit binary numbers.
iv. **Inverse Fourier Transform**: Notice that the above expression is exactly the result of applying a quantum Fourier transform as we derived in the notebook on [Quantum Fourier Transform and its Qiskit Implementation](qft.ipynb). Recall that QFT maps an n-qubit input state $\vert x\rangle$ into an output as
$$
QFT\vert x \rangle = \frac{1}{2^\frac{n}{2}}
\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right)
\otimes
\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right)
\otimes
\ldots
\otimes
\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right)
\otimes
\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right)
$$
Replacing $x$ by $2^n\theta$ in the above expression gives exactly the expression derived in step iii above. Therefore, to recover the state $\vert2^n\theta\rangle$, apply an inverse Fourier transform on the counting register. Doing so, we find
$$
\vert\psi_3\rangle = \frac {1}{2^{\frac {n}{2}}}\sum _{k=0}^{2^{n}-1}e^{\boldsymbol{2\pi i} \theta k}|k\rangle \otimes | \psi \rangle \xrightarrow{\mathcal{QFT}_n^{-1}} \frac {1}{2^n}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1} e^{-\frac{2\pi i k}{2^n}(x - 2^n \theta)} |x\rangle \otimes |\psi\rangle
$$
v. **Measurement**:
The above expression peaks near $x = 2^n\theta$. For the case when $2^n\theta$ is an integer, measuring the counting register in the computational basis gives the phase exactly:
$$ |\psi_4\rangle = | 2^n \theta \rangle \otimes | \psi \rangle$$
For the case when $2^n\theta$ is not an integer, it can be shown that the above expression still peaks near $x = 2^n\theta$ with probability better than $4/\pi^2 \approx 40\%$ [1].
## 2. Example: T-gate <a id='example_t_gate'></a>
Let’s take a gate we know well, the $T$-gate, and use Quantum Phase Estimation to estimate its phase. You will remember that the $T$-gate adds a phase of $e^\frac{i\pi}{4}$ to the state $|1\rangle$:
$$ T|1\rangle =
\begin{bmatrix}
1 & 0\\
0 & e^\frac{i\pi}{4}\\
\end{bmatrix}
\begin{bmatrix}
0\\
1\\
\end{bmatrix}
= e^\frac{i\pi}{4}|1\rangle $$
Since QPE will give us $\theta$ where:
$$ T|1\rangle = e^{2i\pi\theta}|1\rangle $$
We expect to find:
$$\theta = \frac{1}{8}$$
In this example we will use three counting qubits and obtain an _exact_ result (not an estimate!).
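This value can be checked numerically by equating the two forms of the eigenvalue, $e^{i\pi/4} = e^{2\pi i\theta}$ (a one-line check, not part of the original circuit):

```python
import math

# T adds a phase of pi/4, and QPE reports theta where the eigenvalue is e^{2*pi*i*theta}
theta = (math.pi / 4) / (2 * math.pi)
print(theta)
```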
### 2.1 Creating the Circuit <a id='creating_the_circuit'></a>
Let's first prepare our environment:
```
#initialization
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
```
Now, set up the quantum circuit. We will use four qubits: qubits 0 to 2 as counting qubits, and qubit 3 to hold the eigenstate of the unitary operator ($T$).
We initialize $\vert\psi\rangle = \vert1\rangle$ by applying an $X$ gate:
```
qpe = QuantumCircuit(4, 3)
qpe.x(3)
qpe.draw()
```
Next, we apply Hadamard gates to the counting qubits:
```
for qubit in range(3):
    qpe.h(qubit)
qpe.draw()
```
Next we perform the controlled unitary operations. **Remember:** Qiskit orders its qubits the opposite way round to the image above.
```
repetitions = 1
for counting_qubit in range(3):
    for i in range(repetitions):
        qpe.cu1(math.pi/4, counting_qubit, 3); # This is C-U
    repetitions *= 2
qpe.draw()
```
We apply the inverse quantum Fourier transformation to convert the state of the counting register. Here we provide the code for $QFT^\dagger$:
```
def qft_dagger(circ, n):
    """Apply an inverse QFT to the first n qubits in circ"""
    # Don't forget the Swaps!
    for qubit in range(n//2):
        circ.swap(qubit, n-qubit-1)
    for j in range(n):
        for m in range(j):
            circ.cu1(-math.pi/float(2**(j-m)), m, j)
        circ.h(j)
```
We then measure the counting register:
```
qpe.barrier()
# Apply inverse QFT
qft_dagger(qpe, 3)
# Measure
qpe.barrier()
for n in range(3):
    qpe.measure(n,n)
qpe.draw()
```
### 2.2 Results <a id='results'></a>
```
backend = Aer.get_backend('qasm_simulator')
shots = 2048
results = execute(qpe, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```
We see that we get one result (`001`) with certainty, which is `1` in decimal. We now divide this result by $2^n$ to get $\theta$:
$$ \theta = \frac{1}{2^3} = \frac{1}{8} $$
This is exactly the result we expected!
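In general, turning a measured counting-register bitstring back into a phase estimate is a single division by $2^n$; a small helper (illustrative, not part of the original notebook) makes this explicit:

```python
def counts_to_theta(bitstring):
    """Convert a measured counting-register bitstring to the estimated phase."""
    n = len(bitstring)
    return int(bitstring, 2) / 2**n

print(counts_to_theta('001'))
```

For the result above, `counts_to_theta('001')` recovers $\theta = 1/8$.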
## 3. Example: Getting More Precision <a id='getting_more_precision'></a>
### 3.1 The Problem <a id='the_problem'></a>
Instead of a $T$-gate, let’s use a gate with $\theta = \frac{1}{3}$. We set up our circuit as with the last example:
```
# Create and set up circuit
qpe2 = QuantumCircuit(4, 3)
# Apply H-Gates to counting qubits:
for qubit in range(3):
    qpe2.h(qubit)
# Prepare our eigenstate |psi>:
qpe2.x(3)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(3):
    for i in range(repetitions):
        qpe2.cu1(angle, counting_qubit, 3);
    repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe2, 3)
# Measure of course!
for n in range(3):
    qpe2.measure(n,n)
qpe2.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe2, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```
We are expecting the result $\theta = 0.3333\dots$, and we see our most likely results are `010(bin) = 2(dec)` and `011(bin) = 3(dec)`. These two results would tell us that $\theta = 0.25$ (off by 25%) and $\theta = 0.375$ (off by 13%) respectively. The true value of $\theta$ lies between the values we can get from our counting bits, and this gives us uncertainty and imprecision.
### 3.2 The Solution <a id='the_solution'></a>
To get more precision we simply add more counting qubits. We are going to add two more counting qubits:
```
# Create and set up circuit
qpe3 = QuantumCircuit(6, 5)
# Apply H-Gates to counting qubits:
for qubit in range(5):
    qpe3.h(qubit)
# Prepare our eigenstate |psi>:
qpe3.x(5)
# Do the controlled-U operations:
angle = 2*math.pi/3
repetitions = 1
for counting_qubit in range(5):
    for i in range(repetitions):
        qpe3.cu1(angle, counting_qubit, 5);
    repetitions *= 2
# Do the inverse QFT:
qft_dagger(qpe3, 5)
# Measure of course!
qpe3.barrier()
for n in range(5):
    qpe3.measure(n,n)
qpe3.draw()
# Let's see the results!
backend = Aer.get_backend('qasm_simulator')
shots = 4096
results = execute(qpe3, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```
The two most likely measurements are now `01011` (decimal 11) and `01010` (decimal 10). Measuring these results would tell us $\theta$ is:
$$
\theta = \frac{11}{2^5} = 0.344,\;\text{ or }\;\; \theta = \frac{10}{2^5} = 0.313
$$
These two results differ from $\frac{1}{3}$ by about 3% and 6% respectively: much better precision!
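The errors quoted above can be reproduced directly by comparing each estimate against the true $\theta = 1/3$ (plain Python, illustrative only):

```python
true_theta = 1 / 3
for n, x in [(3, 3), (5, 11)]:  # best 3-qubit and 5-qubit measurements
    est = x / 2**n
    rel_err = abs(est - true_theta) / true_theta
    print(f'{n} counting qubits: theta ~ {est:.4f}, relative error {rel_err:.1%}')
```

Each extra counting qubit halves the spacing between representable phases, so the worst-case error shrinks geometrically.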
## 4. Experiment with Real Devices <a id='real_devices'></a>
### 4.1 Circuit from 2.1 <a id='circuit_2.1'></a>
We can run the circuit in section 2.1 on a real device, let's remind ourselves of the circuit:
```
qpe.draw()
# Load our saved IBMQ account and select a backend
IBMQ.load_account()
from qiskit.tools.monitor import job_monitor
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')
# Run with 2048 shots
shots = 2048
job = execute(qpe, backend=backend, shots=shots, optimization_level=3)
job_monitor(job)
# get the results from the computation
results = job.result()
answer = results.get_counts(qpe)
plot_histogram(answer)
```
We can hopefully see that the most likely result is `001`, which is the result we would expect from the simulator. Unlike the simulator, there is some probability of measuring something other than `001`; this is due to noise and gate errors in the quantum computer.
## 5. Exercises <a id='exercises'></a>
1. Try the experiments above with different gates ($\text{CNOT}$, $S$, $T^\dagger$). What results do you expect? What results do you get?
2. Try the experiment with a $Y$-gate, do you get the correct result? (Hint: Remember to make sure $|\psi\rangle$ is an eigenstate of $Y$!)
## 6. Looking Forward <a id='looking_forward'></a>
The quantum phase estimation algorithm may seem pointless, since we have to know $\theta$ to perform the controlled-$U$ operations on our quantum computer. We will see in later chapters that it is possible to create circuits for which we don't know $\theta$, and for which learning $\theta$ can tell us something very useful (most famously, how to factor a number!)
## 7. References <a id='references'></a>
[1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
## 8. Contributors <a id='contributors'></a>
03/20/2020 — Hwajung Kang (@HwajungKang) — Fixed inconsistencies with qubit ordering
```
import qiskit
qiskit.__qiskit_version__
```
```
import shp_process
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import geopandas as gpd
import geoplot
from pysal.lib import weights
import networkx as nx
from scipy.spatial import distance
import momepy
import pickle
import math
import sys
import statsmodels.api as sm
mount_path = "/mnt/c/Users/jason/Dropbox (MIT)/"
# mount_path = "/Users/shenhaowang/Dropbox (MIT)/project_media_lab_South_Australia/"
cur_shp = gpd.read_file('../../data_process/shapefiles/sa2_adelaide.shp')
road_file = gpd.read_file(mount_path + "/SA data/dataSA/roads/Roads_GDA2020_without_South_Road.shp")
node_df, edge_df = shp_process.get_final_node_edge_dfs(cur_shp, road_file, mount_path)
node_df
edge_df
node_df.to_pickle("../../data_process/test_wosouth_node.pickle")
edge_df.to_pickle("../../data_process/test_wosouth_edge.pickle")
model1 = sm.load("../../models/best_model1.pickle")
X = np.log(edge_df[['od_distance_value', 'class_ART', 'class_BUS', 'class_COLL', 'class_FREE',
'class_HWY', 'class_LOCL','class_SUBA', 'class_TRK2', 'class_TRK4',
'class_UND', "road_counts",
'num_nodes_pth', 'num_1degree_pth', 'num_2degree_pth',
'num_3degree_pth', 'num_4degree_pth',
'num_greater5degree_pth']])
X = sm.add_constant(X)
pred_od_duration_value = model1.predict(X)
with open('wo_south_road_pred_od_duration_value.pkl', 'wb') as f:
    pickle.dump(pred_od_duration_value, f)
plt.clf()
plt.scatter(np.log(edge_df["od_duration_value"].values),pred_od_duration_value,s=1)
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.axis('square')
plt.show()
model2 = sm.load("../../models/best_model2.pickle")
X = edge_df[['num_jobs_000_persons_origin', 'num_jobs_000_persons_destination',
'od_duration_value',"poi_count_x","poi_count_y","poi_entropy_x","poi_entropy_y"]].copy()
X[["poi_entropy_x","poi_entropy_y"]] = X[["poi_entropy_x","poi_entropy_y"]] + 1
X = np.log(X)
# no need to log this column: it is already in log space
X["od_duration_value"] = pred_od_duration_value
X = sm.add_constant(X)
pred_stays = model2.predict(X)
with open('wo_south_road_pred_stays.pkl', 'wb') as f:
    pickle.dump(pred_stays, f)
plt.clf()
plt.scatter(np.log(edge_df["total_stays"].values),pred_stays,s=1)
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.axis('square')
plt.show()
helper = edge_df[["origin","destination"]].copy()
helper["pred_stays"] = np.exp(pred_stays)
helper
helper.groupby(["origin"]).sum()
helper.groupby(["destination"]).sum()
helper_origin = helper.groupby(["origin"]).sum()
helper_dest = helper.groupby(["destination"]).sum()
# rename returns a new frame, so assign the result
helper_origin = helper_origin.rename({"pred_stays": "pred_stays_origin"}, axis="columns")
helper_dest = helper_dest.rename({"pred_stays": "pred_stays_dest"}, axis="columns")
node_df = node_df.merge(helper_origin, how="left", left_on="SA2_MAIN16", right_on="origin")
node_df = node_df.merge(helper_dest, how="left", left_on="SA2_MAIN16", right_on="destination")
node_df
node_df['sum_stay_duration_origin_counts'] = node_df.pred_stays_origin
node_df['sum_stay_duration_destination_counts'] = node_df.pred_stays_dest
node_df[["SA2_MAIN16", 'sum_stay_duration_origin_counts', 'sum_stay_duration_destination_counts']]
model3 = sm.load("../../models/best_model3.pickle")
# y
y = np.log(node_df['median_income_per_job_aud_persons'])
X = node_df[['sum_stay_duration_origin_counts', 'sum_stay_duration_destination_counts',' m_percent', ' f_percent', ' avg_med_age', ' p_tot_tot',
'degree_percentage', " p_cer_tot_tot"]].copy()
X = np.log(X)
X = sm.add_constant(X)
#in log space
pred_median_income_per_job_aud_persons = model3.predict(X)
with open('wo_south_road_pred_median_income_per_job_aud_persons.pkl', 'wb') as f:
    pickle.dump(pred_median_income_per_job_aud_persons, f)
plt.clf()
plt.scatter(np.log(node_df['median_income_per_job_aud_persons'].values),pred_median_income_per_job_aud_persons,s=1)
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.axis('square')
plt.show()
```
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/283-tweet-train-5fold-roberta-onecycle-label2/'
with open(input_base_path + 'config.json') as json_file:
    config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
# vocab_path = base_path + 'roberta-base-vocab.json'
# merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
    last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
    logits = layers.Dense(2, name="qa_outputs", use_bias=False)(last_hidden_state)
    start_logits, end_logits = tf.split(logits, 2, axis=-1)
    start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
    end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
    model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
    return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
    print(model_path)
    model = model_fn(config['MAX_LEN'])
    model.load_weights(model_path)
    test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
    # Sum fold logits; the argmax is unchanged by the missing 1/n_folds factor
    test_start_preds += test_preds[0]
    test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
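The word-filtering step above can be illustrated with plain Python strings (hypothetical inputs, no pandas required):

```python
def keep_known_words(selected_text, text):
    """Drop tokens from selected_text that do not appear verbatim in text."""
    text_words = text.split()
    return ' '.join(w for w in selected_text.split() if w in text_words)

print(keep_known_words('so happy!! today', 'i am so happy today'))
```

Tokens are compared as whole whitespace-separated words, so `'happy!!'` is dropped while `'so'` and `'today'` survive.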
# Visualize predictions
```
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
```
import matplotlib.pyplot as plt
%matplotlib inline
```
Import the TensorFlow library.
TensorFlow provides a helper function that loads the MNIST data automatically. It downloads the data into the "MNIST_data" folder and reads in the training, validation, and test sets. Setting the `one_hot` option converts the labels into one-hot vectors.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```
`mnist.train.images` holds the training images and `mnist.test.images` holds the test images. Check the sizes of these arrays.
`matplotlib` provides the `imshow()` function for drawing images. Each row of the `mnist.train.images` array we loaded has length 784. Pick one of the 55,000 rows and display it.
To render it as an image, reshape it to the original square image size, [28, 28]. Since it is a black-and-white image, set the colormap to gray scale.
```
plt.imshow(mnist.train.images[..].reshape([.., ..]), cmap=plt.get_cmap('gray_r'))
```
`mnist.train.labels` holds the target values y. Check one of the 55,000 labels to confirm it was loaded as a one-hot vector.
```
mnist.train.labels[..]
```
The training set has 55,000 examples, which is too many to process at once, so we will use mini-batch gradient descent. To use mini-batches, we repeatedly split off part of the training data and feed it into the TensorFlow model.
Define placeholders that the running TensorFlow model uses to receive input data. There are two placeholders: x (images) and y (target labels).
`x = tf.placeholder("float32", [None, 784])
y = tf.placeholder("float32", shape=[None, 10])`
To apply a convolution, the data must be shaped like an image rather than a 1-D array. The `reshape` command changes the dimensions of a tensor. The first dimension is the number of examples in the mini-batch, so we leave it alone and change the remaining dimensions to 28x28x1.
`x_image = tf.reshape(x, [-1,28,28,1])`
```
x_image = ...
x_image
```
To apply the convolution we use the tf.layers.conv2d function, with 32 kernels of size 5x5, strides of 1x1, and 'same' padding. We use the ReLU activation function.
For pooling, we apply 2x2 max pooling using tf.layers.max_pooling2d.
`conv1 = tf.layers.conv2d(x_image, 32, (5, 5), strides=(1, 1),
padding="same", activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=(2, 2), strides=(2, 2))`
Compare the sizes of the tensors for the input data, right after the convolution, and right after the pooling.
```
print(x_image.get_shape())
print(conv1.get_shape())
print(pool1.get_shape())
```
The second convolution uses 64 kernels of size 5x5, strides of 1x1, and 'same' padding, again with the ReLU activation.
For pooling, we apply 2x2 max pooling using tf.layers.max_pooling2d.
`conv2 = tf.layers.conv2d(pool1, 64, (5, 5), strides=(1, 1),
padding="same", activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=(2, 2), strides=(2, 2))`
Compare the tensor sizes after the second convolution and after the pooling.
```
print(conv2.get_shape())
print(pool2.get_shape())
```
To connect to the dense network, we flatten the result of the second pooling layer, again using the `reshape` command. The first dimension is left alone while dimensions 2 through 4 are merged into one.
Unlike before, instead of writing the matrix operations by hand, we build the layer with the `tf.layers.dense` function, using the ReLU activation.
`pool2_flat = tf.reshape(pool2, [-1, 7*7*64])
fc = tf.layers.dense(pool2_flat, 1024, activation=tf.nn.relu)`
Before passing the result to the final output layer, we apply dropout to deactivate some of the units during training. At inference time, however, all neurons must be active, so rather than fixing the value as a constant in the computation graph, we make it a placeholder so the dropout behavior can be controlled from outside.
`drop_prob = tf.placeholder("float")
fc_drop = tf.nn.dropout(fc, keep_prob=drop_prob)`
We build the final output layer and apply the softmax function to normalize the outputs so they can be compared with the labels.
`z = tf.layers.dense(fc_drop, 10)
y_hat=tf.nn.softmax(z)`
To compute the cross-entropy loss we could use the y_hat computed above, but TensorFlow has a built-in function that computes the softmax cross entropy directly from the pre-softmax values z. We use `softmax_cross_entropy` to compute the loss between z and the labels y.
`loss = tf.losses.softmax_cross_entropy(y, z)`
We apply gradient descent with a learning rate of 0.1 and create the training node from the loss function defined above.
`optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)`
To compute the classification accuracy, we compare y, the one-hot vector holding the labels, with y_hat, the one-hot-like vector that came out of the softmax. Both tensors have shape [None, 10], so we find the index of the largest value along the row direction (axis 1) with argmax and check whether they are equal.
`correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_hat,1))`
correct_prediction is an array like [True, False, ...], so we cast the booleans to numbers (1, 0), then sum and average them to obtain the accuracy.
`accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))`
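The argmax/equal/mean pipeline can be checked with plain Python lists (a small hypothetical batch, no TensorFlow required):

```python
def accuracy(y_true, y_pred):
    """Fraction of rows whose prediction argmax matches the label argmax."""
    argmax = lambda row: max(range(len(row)), key=row.__getitem__)
    correct = [argmax(t) == argmax(p) for t, p in zip(y_true, y_pred)]
    return sum(correct) / len(correct)

y_batch     = [[0, 1, 0], [1, 0, 0]]              # one-hot labels
y_hat_batch = [[0.1, 0.8, 0.1], [0.2, 0.3, 0.5]]  # softmax outputs
print(accuracy(y_batch, y_hat_batch))
```

Here the first row is predicted correctly and the second is not, giving an accuracy of 0.5.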
We create a session object and initialize the variables used by the model.
```
sess = tf.Session()
sess.run(tf.global_variables_initializer())
```
We iterate 5,000 times, drawing 100 examples at a time from the training data (mnist.train.next_batch) and feeding them into the model together with the dropout value. To feed the model's placeholders, bundle the placeholder names and the values to pass into a dictionary and hand it to the feed_dict parameter.
We run the training node train, and also compute the loss value so we can plot the training process as a graph later, accumulating it in the costs list.
```
costs = []
for i in range(5000):
    x_data, y_data = mnist.train.next_batch(100)
    _, cost = sess.run([train, loss],
                       feed_dict={x: x_data, y: y_data, drop_prob: 0.5})
    costs.append(cost)
```
We plot the costs list.
```
plt.plot(costs)
```
We run the accuracy node we built. The input here is mnist.test, data that was not used during training. When computing the accuracy, feed drop_prob: 1.0 so that all neurons are used.
`sess.run(accuracy, feed_dict={x: mnist.test.images,
y: mnist.test.labels, drop_prob: 1.0})`
To check that the predictions match the actual images, we display the first five test images together with their predicted values.
```
for i in range(5):
    plt.imshow(mnist.test.images[i].reshape([28, 28]), cmap=plt.get_cmap('gray_r'))
    plt.show()
    print(sess.run(tf.argmax(y_hat,1), feed_dict={x: mnist.test.images[i].reshape([1,784]),
                                                  drop_prob: 1.0}))
```
We print all the trained variables. These include the weights and biases of the two convolutional layers and of the two dense layers.
```
[x.name for x in tf.global_variables()]
```
We extract the weight tensor of the first convolutional layer. As we defined above, it consists of 32 kernels of size 5x5.
```
with tf.variable_scope('conv2d', reuse=True):
    kernel = tf.get_variable('kernel')
    weight = sess.run(kernel)
weight.shape
```
Let's display these weights one at a time as images. Can you see what the first convolutional layer has learned?
```
fig, axes = plt.subplots(4, 8, figsize=(10, 10))
for i in range(4):
    for j in range(8):
        axes[i][j].imshow(weight[:, :, :, i*8+j].reshape([5, 5]), cmap=plt.get_cmap('gray_r'))
plt.show()
```
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    set_random_seed(seed)

seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-split-oldnew/hold-out_5.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4 * FACTOR
WARMUP_LEARNING_RATE = 1e-3 * FACTOR
HEIGHT = 300
WIDTH = 300
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
```
# Pre-procecess images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
    shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
    shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
    shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
    if img.ndim == 2:
        mask = img > tol
        return img[np.ix_(mask.any(1), mask.any(0))]
    elif img.ndim == 3:
        gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        mask = gray_img > tol
        check_shape = img[:,:,0][np.ix_(mask.any(1), mask.any(0))].shape[0]
        if (check_shape == 0):  # image is too dark so that we crop out everything,
            return img          # return original image
        else:
            img1 = img[:,:,0][np.ix_(mask.any(1), mask.any(0))]
            img2 = img[:,:,1][np.ix_(mask.any(1), mask.any(0))]
            img3 = img[:,:,2][np.ix_(mask.any(1), mask.any(0))]
            img = np.stack([img1, img2, img3], axis=-1)
        return img
def circle_crop(img):
    img = crop_image(img)
    height, width, depth = img.shape
    largest_side = np.max((height, width))
    img = cv2.resize(img, (largest_side, largest_side))
    height, width, depth = img.shape
    x = width//2
    y = height//2
    r = np.amin((x, y))
    circle_img = np.zeros((height, width), np.uint8)
    cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
    img = cv2.bitwise_and(img, img, mask=circle_img)
    img = crop_image(img)
    return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
    image = cv2.imread(base_path + image_id)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = circle_crop(image)
    image = cv2.resize(image, (HEIGHT, WIDTH))
    # image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
    cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
    df = df.reset_index()
    for i in range(df.shape[0]):
        item = df.iloc[i]
        image_id = item['id_code']
        item_set = item['set']
        item_data = item['data']
        if item_set == 'train':
            if item_data == 'new':
                preprocess_image(image_id, new_data_base_path, train_dest_path)
            if item_data == 'old':
                preprocess_image(image_id, old_data_base_path, train_dest_path)
        if item_set == 'validation':
            if item_data == 'new':
                preprocess_image(image_id, new_data_base_path, validation_dest_path)
            if item_data == 'old':
                preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
    df = df.reset_index()
    for i in range(df.shape[0]):
        image_id = df.iloc[i]['id_code']
        preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
                             learning_rate_base,
                             total_steps,
                             warmup_learning_rate=0.0,
                             warmup_steps=0,
                             hold_base_rate_steps=0):
    """
    Cosine decay schedule with warm up period.
    In this schedule, the learning rate grows linearly from warmup_learning_rate
    to learning_rate_base for warmup_steps, then transitions to a cosine decay
    schedule.
    :param global_step {int}: global step.
    :param learning_rate_base {float}: base learning rate.
    :param total_steps {int}: total number of training steps.
    :param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
    :param warmup_steps {int}: number of warmup steps. (default: {0}).
    :param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
    :Returns : a float representing learning rate.
    :Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
    """
    if total_steps < warmup_steps:
        raise ValueError('total_steps must be larger or equal to warmup_steps.')
    learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
        np.pi *
        (global_step - warmup_steps - hold_base_rate_steps
        ) / float(total_steps - warmup_steps - hold_base_rate_steps)))
    if hold_base_rate_steps > 0:
        learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
                                 learning_rate, learning_rate_base)
    if warmup_steps > 0:
        if learning_rate_base < warmup_learning_rate:
            raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
        slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
        warmup_rate = slope * global_step + warmup_learning_rate
        learning_rate = np.where(global_step < warmup_steps, warmup_rate,
                                 learning_rate)
    return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
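The schedule above can be exercised in isolation with a simplified scalar version (omitting `hold_base_rate_steps`); this is a sketch for sanity-checking the shape of the curve, not the training callback itself:

```python
import math

def lr_at(step, base_lr, total_steps, warmup_lr=0.0, warmup_steps=0):
    # linear warmup from warmup_lr to base_lr, then cosine decay down to zero
    if step < warmup_steps:
        return warmup_lr + (base_lr - warmup_lr) * step / warmup_steps
    if step > total_steps:
        return 0.0
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

# warmup ramps linearly, peaks at base_lr, decays to zero at total_steps
assert lr_at(0, 0.1, 100, warmup_steps=10) == 0.0
assert abs(lr_at(5, 0.1, 100, warmup_steps=10) - 0.05) < 1e-12
assert abs(lr_at(10, 0.1, 100, warmup_steps=10) - 0.1) < 1e-12
assert abs(lr_at(55, 0.1, 100, warmup_steps=10) - 0.05) < 1e-12
assert lr_at(101, 0.1, 100, warmup_steps=10) == 0.0
```

The halfway point of the decay phase sits at exactly half the base learning rate, which is a quick way to verify the cosine shape.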
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB3(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b3_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE,
total_steps=TOTAL_STEPS_1st,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_1st,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [cosine_lr_1st]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
callbacks=callback_list,
verbose=2).history
```
# Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [es, cosine_lr_2nd]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6))
ax1.plot(cosine_lr_1st.learning_rates)
ax1.set_title('Warm up learning rates')
ax2.plot(cosine_lr_2nd.learning_rates)
ax2.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create an empty DataFrame to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
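The `classify` helper above turns the continuous regression output into a grade by rounding at the 0.5 boundaries and clipping to the range [0, 4]; a quick standalone check:

```python
def classify(x):
    # round a continuous prediction to the nearest DR grade, clipped to [0, 4]
    if x < 0.5:
        return 0
    elif x < 1.5:
        return 1
    elif x < 2.5:
        return 2
    elif x < 3.5:
        return 3
    return 4

assert [classify(v) for v in (0.49, 0.5, 1.9, 3.2, 7.0)] == [0, 1, 2, 3, 4]
```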
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
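For reference, the quadratic weighted kappa reported by `cohen_kappa_score(..., weights='quadratic')` penalizes disagreements by the squared distance between grades. A from-scratch sketch of the same quantity (a didactic re-implementation, not the sklearn code):

```python
from collections import Counter

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    n = len(y_true)
    # observed confusion matrix
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    # expected matrix under independence of labels and predictions
    hist_t, hist_p = Counter(y_true), Counter(y_pred)
    expected = [[hist_t[i] * hist_p[j] / n for j in range(n_classes)]
                for i in range(n_classes)]
    # quadratic disagreement weights
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    num = sum(w[i][j] * observed[i][j] for i in range(n_classes) for j in range(n_classes))
    den = sum(w[i][j] * expected[i][j] for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den

assert quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]) == 1.0  # perfect agreement
assert 0.0 < quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 3]) < 1.0
```

Because the weights grow quadratically with the grade distance, confusing grade 0 with grade 4 hurts the score far more than confusing adjacent grades.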
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
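The averaging in `apply_tta` can be sketched with a deterministic stand-in for the stochastic augmented passes (`predict_fn` below is an illustrative toy, not the Keras generator):

```python
def apply_tta_sketch(predict_fn, n_steps):
    # average the predictions from n_steps independent passes
    runs = [predict_fn(step) for step in range(n_steps)]
    return sum(runs) / n_steps

# a stand-in predictor whose per-pass error alternates around the true value 2.0;
# averaging an even number of passes cancels the alternating error
tta_estimate = apply_tta_sketch(lambda step: 2.0 + (-1) ** step * 0.3, 10)
assert abs(tta_estimate - 2.0) < 1e-9
```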
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
## Save model
```
model.save_weights('../working/effNetB3_img224.h5')
```
# Welding Example #01: Basics
The goal of this small example is to introduce the main functionalities and interfaces to create and describe a simple welding application using the WelDX package.
## Imports
```
# enable interactive plots on Jupyterlab with ipympl and jupyterlab-matplotlib installed
# %matplotlib widget
import numpy as np
import pandas as pd
from weldx import (
Q_,
CoordinateSystemManager,
Geometry,
LinearHorizontalTraceSegment,
LocalCoordinateSystem,
Trace,
WeldxFile,
WXRotation,
get_groove,
)
```
## create a simple welding groove shape
We start off our welding application example by defining the base groove shape. For this example we assume the groove shape to be constant along the entire workpiece.
The groove shall have the following attributes:
- a workpiece thickness of 5 mm
- a single-sided V-Groove butt weld with a 50 degree groove angle
- a root gap and root face of 1 mm
To generate the groove shape, we can use the `get_groove` function of `iso_groove`, which implements all groove types and shapes defined in *ISO 9692-1:2013*. For all available groove types and options take a look at the extensive docstring of `get_groove` and the groove_type tutorial notebook.
Be aware that we must pass along all groove parameters as Quantities with a specified unit using the default `Q_` type imported above. All units are automatically converted to SI units for most mathematical operations in the background so we can specify parameters in any appropriate unit we desire.
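Under the hood `Q_` is a pint-based quantity. The mechanics can be sketched with a toy stand-in (the `Quantity` class and `_SI_FACTORS` registry below are illustrative, not the weldx/pint API):

```python
from dataclasses import dataclass

# toy unit registry covering only the units used in this example (assumption)
_SI_FACTORS = {"mm": ("m", 1e-3), "m": ("m", 1.0), "deg": ("deg", 1.0)}

@dataclass
class Quantity:
    magnitude: float
    units: str

    def to_base_units(self) -> "Quantity":
        # convert magnitude by the registry factor and switch to the base unit
        base, factor = _SI_FACTORS[self.units]
        return Quantity(self.magnitude * factor, base)

thickness = Quantity(5, "mm").to_base_units()
assert abs(thickness.magnitude - 0.005) < 1e-12 and thickness.units == "m"
```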
```
groove = get_groove(
groove_type="VGroove",
workpiece_thickness=Q_(5, "mm"),
groove_angle=Q_(50, "deg"),
root_face=Q_(1, "mm"),
root_gap=Q_(1, "mm"),
)
```
Using the `plot` function of the created groove instance gives a quick overview of the created geometry.
```
groove.plot(raster_width="0.2mm")
```
## create 3d workpiece geometry
Once we have created our desired 2d groove shape, we can simply extend the groove shape into 3d-space to create a volumetric workpiece.
To do this, two steps are missing:
1. we have to decide on a weld seam length first (we will use 300 mm in this example)
2. create a trace object that defines the path of our element through space. (we use a simple linear trace in this example)
```
# define the weld seam length in mm
seam_length = Q_(300, "mm")
# create a linear trace segment for the complete weld seam trace
trace_segment = LinearHorizontalTraceSegment(seam_length)
trace = Trace(trace_segment)
```
Once we have defined the trace object, we can create the workpiece geometry by combining the groove profile with the trace.
```
# create 3d workpiece geometry from the groove profile and trace objects
geometry = Geometry(groove.to_profile(width_default=Q_(5, "mm")), trace)
```
To visualize the geometry we simply call its `plot` function. Since it internally rasterizes the data, we need to provide the raster widths:
```
# rasterize geometry
profile_raster_width = "2mm"  # resolution of each profile
trace_raster_width = "30mm"  # spacing between profiles
```
Here is the plot:
```
def ax_setup(ax):
ax.legend()
ax.set_xlabel("x / mm")
ax.set_ylabel("y / mm")
ax.set_zlabel("z / mm")
ax.view_init(30, -10)
ax.set_ylim([-5.5, 5.5])
ax.set_zlim([0, 13])
color_dict = {
"tcp_contact": (255, 0, 0),
"tcp_wire": (0, 150, 0),
"T1": (255, 0, 150),
"T2": (255, 150, 150),
"T3": (255, 150, 0),
"specimen": (0, 0, 255),
}
ax = geometry.plot(
profile_raster_width,
trace_raster_width,
color=color_dict["specimen"],
show_wireframe=True,
label="groove",
)
ax_setup(ax)
```
## Setup the Coordinate System Manager (CSM)
Once we have created the 3d geometry it is now time to describe the movement of the wire during the welding process. To handle different moving coordinate systems and objects we use the CoordinateSystemManager.
Start by creating a new instance of the CSM. When setting up a CSM instance we have to supply a name that indicates the reference coordinate system which is a static Cartesian coordinate system that defines an origin.
```
# create a new coordinate system manager with default base coordinate system
csm = CoordinateSystemManager("base")
```
The trace we created earlier to extend the groove shape into 3d has its own associated coordinate system that starts in the origin of the groove (see point (0,0) in our first plot of the groove profile) and has the x-axis running along the direction of the weld seam by convention.
We simply add the trace coordinate system to our coordinate system manager defining it as the *workpiece* coordinate system.
```
# add the workpiece coordinate system
csm.add_cs(
coordinate_system_name="workpiece",
reference_system_name="base",
lcs=trace.coordinate_system,
)
```
Now that we have added the workpiece coordinate system to the CSM, we can attach a rasterized representation of our geometry to it:
```
csm.assign_data(
geometry.spatial_data(profile_raster_width, trace_raster_width),
"specimen",
"workpiece",
)
```
## generate the tcp movement of the wire tip
For this example, we want the tip of the wire (i.e. the robot TCP during welding) to move along the center of the groove at 2 mm from the bottom of the workpiece with a speed of 10 mm/s.
We begin by defining the start and end points relative to our workpiece coordinate system. Note that the z-axis of the workpiece coordinate system is pointing upwards (see the geometry plot above). Hence we use a positive offset of 2 mm in z direction from our workpiece. For the x-axis we start the weld 5 mm into the weld seam and stop 5 mm before reaching the end of the weld seam.
```
tcp_start_point = Q_([5.0, 0.0, 2.0], "mm")
tcp_end_point = Q_([-5.0, 0.0, 2.0], "mm") + np.append(seam_length, Q_([0, 0], "mm"))
```
To completely describe the TCP movement in space __and__ time we need to supply time information for the start and end point. Let's say the weld starts on 2020-04-20 10:00:00. We calculate the time it takes to complete the weld from the weld length and the welding speed:
```
v_weld = Q_(10, "mm/s")
s_weld = (tcp_end_point - tcp_start_point)[0] # length of the weld
t_weld = s_weld / v_weld
t_start = pd.Timedelta("0s")
t_end = pd.Timedelta(str(t_weld.to_base_units()))
```
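Plugging in the numbers from above (weld starts 5 mm into the 300 mm seam, ends 5 mm before its end, speed 10 mm/s), the weld time works out to 29 s; a unit-free check:

```python
# all lengths in mm, speed in mm/s (values taken from the example above)
seam_length = 300.0
tcp_start_x = 5.0
tcp_end_x = seam_length - 5.0

v_weld = 10.0
s_weld = tcp_end_x - tcp_start_x  # length of the weld in mm
t_weld = s_weld / v_weld          # weld duration in s

assert s_weld == 290.0
assert t_weld == 29.0
```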
The two points and timestamps are enough to create the linear moving coordinate system. We can interpolate the movement with a higher resolution later.
The orientation of the wire has the z coordinate facing downwards towards the workpiece. The workpiece z-coordinate is facing upwards. We add a constant 180 degree rotation around the x-axis to orient the wire coordinate system correctly. Orientations can be described using scipy `Rotation` objects.
```
rot = WXRotation.from_euler(seq="x", angles=180, degrees=True)
```
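As a sanity check, a 180 degree rotation about the x-axis corresponds to the matrix diag(1, -1, -1), which flips the y- and z-axes; a minimal sketch:

```python
import math

def rot_x(angle_deg):
    # rotation matrix about the x-axis
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s, c]]

R = rot_x(180)
assert abs(R[1][1] + 1.0) < 1e-12 and abs(R[2][2] + 1.0) < 1e-12  # y and z flipped
assert abs(R[1][2]) < 1e-12 and abs(R[2][1]) < 1e-12
```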
With the defined coordinates, the constant orientation and the associated times we can create the coordinate system for the wire tcp.
```
coords = [tcp_start_point.magnitude, tcp_end_point.magnitude]
tcp_wire = LocalCoordinateSystem(
coordinates=coords, orientation=rot, time=[t_start, t_end]
)
```
Add the new coordinate system to the coordinate system manager relative to the workpiece.
```
# add the tcp_wire coordinate system
csm.add_cs(
coordinate_system_name="tcp_wire",
reference_system_name="workpiece",
lcs=tcp_wire,
)
```
Let's say the wire extends 10 mm from the contact tip. We can add the contact tip as another point using the coordinate system manager. To simplify things we now use the _tcp_wire_ coordinate system as reference. All we need to add is the z-offset along the wire. Note that we have to provide a negative offset along the z-axis since the _tcp_wire_ z-axis is pointing downwards.
```
tcp_contact = LocalCoordinateSystem(coordinates=[0, 0, -10])
# add the tcp_contact coordinate system
csm.add_cs(
coordinate_system_name="tcp_contact",
reference_system_name="tcp_wire",
lcs=tcp_contact,
)
```
We can create a simple plot of the relations between our coordinate systems:
```
csm
```
## plot the TCP trajectory
To examine the movement of our wire TCP and contact tip, let's create a simple plot. We only have a linear movement, so we don't have to add additional timestamps to the moving coordinate systems to increase the resolution of the traces.
```
ax = csm.plot(
coordinate_systems=["tcp_contact", "tcp_wire"],
colors=color_dict,
show_vectors=False,
show_wireframe=True,
)
ax_setup(ax)
```
## Add static temperature measurement points
With everything set up we can now start adding some measurements with associated points in space. We add temperature measurements __T1__, __T2__ and __T3__ to the surface of the weld seam.
```
# add the temperature measurement coordinate systems
csm.add_cs("T1", "workpiece", LocalCoordinateSystem(coordinates=[200, 3, 5]))
csm.add_cs("T2", "T1", LocalCoordinateSystem(coordinates=[0, 1, 0]))
csm.add_cs("T3", "T2", LocalCoordinateSystem(coordinates=[0, 1, 0]))
ax = csm.plot(
coordinate_systems=["tcp_contact", "tcp_wire", "T1", "T2", "T3"],
reference_system="workpiece",
colors=color_dict,
show_vectors=False,
show_wireframe=True,
)
ax_setup(ax)
csm
```
## K3D Visualization
```
csm.plot(
backend="k3d",
coordinate_systems=["tcp_contact", "tcp_wire", "T1", "T2", "T3"],
colors=color_dict,
limits=[-5, 150],
show_vectors=False,
show_traces=True,
show_data_labels=False,
show_labels=False,
show_origins=True,
)
```
## using ASDF
Now we write all of our structured data to an ASDF file and have a look at the ASDF header.
```
tree = {"workpiece": {"groove": groove, "length": seam_length}, "CSM": csm}
file = WeldxFile(tree=tree, mode="rw")
file.show_asdf_header()
```
```
%load_ext watermark
%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka' -v -p numpy,scipy,matplotlib,scikit-learn
```
# SciPy 2016 Scikit-learn Tutorial
# Cross-Validation and scoring methods
In the previous sections and notebooks, we split our dataset into two parts, a training set and a test set. We used the training set to fit our model, and we used the test set to evaluate its generalization performance -- how well it performs on new, unseen data.
<img src="figures/train_test_split.svg" width="100%">
However, often (labeled) data is precious, and this approach lets us use only ~3/4 of our data for training. On the other hand, we will only ever evaluate our model on 1/4 of our data.
A common way to use more of the data to build a model, but also get a more robust estimate of the generalization performance, is cross-validation.
In cross-validation, the data is split repeatedly into training and non-overlapping test sets, with a separate model built for every pair. The test-set scores are then aggregated for a more robust estimate.
The most common way to do cross-validation is k-fold cross-validation, in which the data is first split into k (often 5 or 10) equal-sized folds, and then for each iteration, one of the k folds is used as test data, and the rest as training data:
<img src="figures/cross_validation.svg" width="100%">
This way, each data point will be in the test-set exactly once, and we can use all but a k'th of the data for training.
Let us apply this technique to evaluate the KNeighborsClassifier algorithm on the Iris dataset:
```
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
```
The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2:
```
y
```
To avoid this problem in evaluation, we first shuffle our data:
```
import numpy as np
rng = np.random.RandomState(0)
permutation = rng.permutation(len(X))
X, y = X[permutation], y[permutation]
print(y)
```
Now implementing cross-validation is easy:
```
k = 5
n_samples = len(X)
fold_size = n_samples // k
scores = []
masks = []
for fold in range(k):
# generate a boolean mask for the test set in this fold
test_mask = np.zeros(n_samples, dtype=bool)
test_mask[fold * fold_size : (fold + 1) * fold_size] = True
# store the mask for visualization
masks.append(test_mask)
# create training and test sets using this mask
X_test, y_test = X[test_mask], y[test_mask]
X_train, y_train = X[~test_mask], y[~test_mask]
# fit the classifier
classifier.fit(X_train, y_train)
# compute the score and record it
scores.append(classifier.score(X_test, y_test))
```
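A standalone sanity check of the claim that this masking scheme puts each sample in the test set exactly once (assuming `n_samples` is divisible by `k`, as with 150 iris samples and 5 folds):

```python
n_samples, k = 150, 5
fold_size = n_samples // k

# count how often each sample index lands in a test fold
counts = [0] * n_samples
for fold in range(k):
    for i in range(fold * fold_size, (fold + 1) * fold_size):
        counts[i] += 1

assert all(c == 1 for c in counts)  # each sample is tested exactly once
```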
Let's check that our test mask does the right thing:
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.matshow(masks, cmap='gray_r')
```
And now let's look at the scores we computed:
```
print(scores)
print(np.mean(scores))
```
As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer.
As cross-validation is such a common pattern in machine learning, there are functions to do the above for you with much more flexibility and less code.
The ``sklearn.model_selection`` module has all functions related to cross-validation. The easiest function is ``cross_val_score``, which takes an estimator and a dataset, and will do all of the splitting for you:
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(classifier, X, y)
print(scores)
print(np.mean(scores))
```
As you can see, the function uses three folds by default. You can change the number of folds using the cv argument:
```
score = cross_val_score(classifier, X, y, cv=5)
print(score)
print(np.mean(score))
```
There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold:
```
from sklearn.model_selection import KFold, StratifiedKFold, ShuffleSplit#, LeavePLabelOut
```
By default, cross_val_score will use ``StratifiedKFold`` for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data points belonging to class 0, that would mean that in each fold, 90% of data points would belong to class 0.
If you would just use KFold cross-validation, it is likely that you would generate a split that only contains class 0.
It is generally a good idea to use ``StratifiedKFold`` whenever you do classification.
``StratifiedKFold`` would also remove our need to shuffle ``iris``.
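The stratification idea can be sketched without sklearn: group the indices by class and deal them round-robin into the folds (a toy illustration of the class-proportion guarantee, not sklearn's exact algorithm):

```python
def stratified_folds(labels, k):
    # deal each class's indices round-robin across the k folds
    folds = [[] for _ in range(k)]
    by_class = {}
    for idx, label in enumerate(labels):
        by_class.setdefault(label, []).append(idx)
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = [0] * 50 + [1] * 50 + [2] * 50  # iris-like label layout
folds = stratified_folds(labels, 5)
for fold in folds:
    # every fold reflects the 1/3-1/3-1/3 class proportions
    assert [sum(1 for i in fold if labels[i] == c) for c in (0, 1, 2)] == [10, 10, 10]
```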
Let's see what kinds of folds it generates on the unshuffled iris dataset.
Each cross-validation class is a generator of sets of training and test indices:
```
cv = StratifiedKFold(n_splits=5)
# print the train and test indices for each fold
for train, test in cv.split(iris.data, iris.target):
    print(train)
print(test)
```
As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds.
This way, the class ratios are preserved. Let's visualize the split:
```
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.matshow(masks, cmap='gray_r')
plot_cv(StratifiedKFold(n_splits=5), iris.data, iris.target)
```
For comparison, again the standard KFold, that ignores the labels:
```
plot_cv(KFold(n_splits=5), iris.data, iris.target)
```
Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation:
```
plot_cv(KFold(n_splits=10), iris.data, iris.target)
```
Another helpful cross-validation generator is ``ShuffleSplit``. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently:
```
plot_cv(ShuffleSplit(n_splits=5, test_size=.2), iris.data, iris.target)
```
If you want a more robust estimate, you can just increase the number of splits:
```
plot_cv(ShuffleSplit(n_splits=20, test_size=.2), iris.data, iris.target)
```
You can use all of these cross-validation generators with the `cross_val_score` method:
```
cv = ShuffleSplit(n_splits=5, test_size=.2)
cross_val_score(classifier, X, y, cv=cv)
```
# Exercise
Perform three-fold cross-validation using the ``KFold`` class on the iris dataset without shuffling the data. Can you explain the result?
```
#%load solutions/13_cross_validation.py
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
bunch_obj = load_iris()
samples = bunch_obj.data
results = bunch_obj.target
# print(samples.shape)
# print(results.shape)
kfold = KFold(n_splits=3)
for train, test in kfold.split(samples, results):
    train_X = samples[train]
    train_y = results[train]
    test_X = samples[test]
    test_y = results[test]
    print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
    print(KNeighborsClassifier().fit(train_X, train_y).score(test_X, test_y))
cv = KFold(n_splits=3)
cross_val_score(classifier, iris.data, iris.target, cv=cv)
from sklearn.neighbors import KNeighborsClassifier
```
In this notebook, we:
1. Create a basic stochastic Multi-Armed Bandit (MAB) environment;
2. Create an epsilon-greedy player and an adaptive epsilon-greedy player;
3. Simulate the two-party game between the environment and a MAB player.
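The core epsilon-greedy decision rule — explore a uniformly random arm with probability epsilon, otherwise exploit the current best estimate — can be sketched as:

```python
import random

def epsilon_greedy_arm(mean_estimates, epsilon, rng):
    # with probability epsilon pick a random arm, else the greedy one
    if rng.random() < epsilon:
        return rng.randrange(len(mean_estimates))
    return max(range(len(mean_estimates)), key=mean_estimates.__getitem__)

rng = random.Random(0)
picks = [epsilon_greedy_arm([0.1, 0.9, 0.5], 0.1, rng) for _ in range(10_000)]
# with epsilon = 0.1 the greedy arm (index 1) dominates, but all arms get explored
assert picks.count(1) > 0.85 * len(picks)
assert set(picks) == {0, 1, 2}
```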
```
import numpy as np
class MultiArmedBanditEnvironment:
""" Class for the multi-armed bandit environment.
"""
def __init__(self, arm_num=3, arm_type='stochastic-normal') -> None:
self._arms_mean_reward = [0.2 + 0.5 * i for i in range(arm_num)]
self._arm_type = arm_type
def generate_reward(self, played_arm)-> float:
""" Generate reward if the input arm is played.
"""
assert played_arm in range(len(self._arms_mean_reward)), 'Arm does not exist.'
if 'stochastic' in self._arm_type and 'normal' in self._arm_type:
return np.random.normal(self._arms_mean_reward[played_arm], 0.5 , 1)
        else:
            raise NotImplementedError
class MultiArmedBanditPlayer:
""" Class for the multi-armed bandit player/algorithm
"""
def __init__(self, arm_num=3, epsilon=None, total_budget=10):
# this is needed in general
self._arm_num = arm_num
self._arms_mean_reward_estimate = [0.1] * arm_num
self._arms_times_played = [0] * arm_num
# this is needed for epsilon-style algorithm
self._epsilon = epsilon
self._total_budget = total_budget
self._total_budget_used = 0.0
def act(self) -> int:
""" Pull and return an arm. This is the place you implement your
arm selection strategy.
"""
        raise NotImplementedError
def learn(self, arm, reward):
""" Update your model. This is the place where you (the player/algorithm)
digest the feedback received.
"""
        raise NotImplementedError
def is_budget_left(self):
return True if (self._total_budget - self._total_budget_used) > 0 else False
class EpsilonGreedyPlayer(MultiArmedBanditPlayer):
"""Class for the epsilon greedy player.
"""
def act(self):
self._total_budget_used += 1.0
eps = np.random.uniform(0, 1, 1)
if eps < self._epsilon:
# random exploration
return np.random.randint(self._arm_num)
else:
# being greedy based on the empirical mean, break tie randomly
arm_reward = np.array(self._arms_mean_reward_estimate)
best_arms = np.flatnonzero(arm_reward == arm_reward.max())
return np.random.choice(best_arms)
def learn(self, arm, reward):
# update the empirical reward and the number of times the arm is played
self._arms_mean_reward_estimate[arm] = self._arms_mean_reward_estimate[arm] + \
(reward - self._arms_mean_reward_estimate[arm]) / (self._arms_times_played[arm] + 1)
        # note this is equivalent to the following more straightforward equation
# self._arms_mean_reward_estimate[arm] = (self._arms_mean_reward_estimate[arm] * self._arms_times_played[arm]
# + reward)/ (self._arms_times_played[arm] + 1)
self._arms_times_played[arm] += 1
class AdaptiveEpsilonGreedyPlayer(EpsilonGreedyPlayer):
def __init__(self, arm_num=3, epsilon=0, total_budget=10):
super().__init__(arm_num, epsilon, total_budget)
self._init_epsilon = epsilon
def learn(self, arm, reward):
super().learn(arm, reward)
self._epsilon = self._init_epsilon/np.sqrt(self._total_budget_used)
class UCB1Player(EpsilonGreedyPlayer):
def __init__(self, arm_num=3, epsilon=0, total_budget=10):
super().__init__(arm_num, epsilon, total_budget)
# initial values for CB
self._arms_CB = [1.0] * arm_num
def act(self):
for i in range(self._arm_num):
if self._arms_times_played[i] == 0:
return i
self._total_budget_used += 1.0
arms_UCB = np.array([self._arms_mean_reward_estimate[i] + self._arms_CB[i] for i in range(self._arm_num)])
best_arms = np.flatnonzero(arms_UCB == arms_UCB.max())
return np.random.choice(best_arms)
def learn(self, arm, reward):
super().learn(arm, reward)
# update CB
self._arms_CB[arm] = np.sqrt(2*np.log(self._total_budget)/self._arms_times_played[arm])
# Simulate the game between the environment and the player
total_budget = 500
arm_num = 5
mab_env = MultiArmedBanditEnvironment(arm_num=arm_num)
greedy_player = EpsilonGreedyPlayer(arm_num=arm_num, epsilon=0.0, total_budget=total_budget)
greedy_player05 = EpsilonGreedyPlayer(arm_num=arm_num, epsilon=0.5, total_budget=total_budget)
greedy_player01 = EpsilonGreedyPlayer(arm_num=arm_num, epsilon=0.1, total_budget=total_budget)
greedy_player001 = EpsilonGreedyPlayer(arm_num=arm_num, epsilon=0.001, total_budget=total_budget)
adaptive_greedy = AdaptiveEpsilonGreedyPlayer(arm_num=arm_num, epsilon=1.0, total_budget=total_budget)
ucb1 = UCB1Player(arm_num=arm_num, epsilon=1.0, total_budget=total_budget)
players = {"greedy_player": greedy_player,
"greedy_player05": greedy_player05,
"greedy_player01": greedy_player01,
"greedy_player001": greedy_player001,
"adaptive_greedy": adaptive_greedy,
"ucb1": ucb1,
}
players_reward = {}
for k, mab_player in players.items():
player_reward = []
budget_left = True
while budget_left:
# the player select an arm
arm_pulled = mab_player.act()
# the environment gives the reward
reward = mab_env.generate_reward(arm_pulled)
# the player learn
mab_player.learn(arm_pulled, reward)
player_reward.append(reward)
# print(arm_pulled, reward)
budget_left = mab_player.is_budget_left()
# print(budget_left)
players_reward[k] = player_reward
# plot
import matplotlib.pyplot as plt
import numpy as np
for k, player_reward in players_reward.items():
player_progressive_avg_reward = [sum(player_reward[:i])/i for i in range(1, len(player_reward)+1)]
plt.xlabel('Number of iterations')
plt.ylabel('Average Reward')
plt.plot(range(len(player_reward)), player_progressive_avg_reward, label=k)
plt.legend()
```
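The incremental update in `learn` is the standard running-mean recursion. Starting from a zero estimate (the players above use an optimistic 0.1 initial value, which only biases the early estimates) it reproduces the batch mean exactly:

```python
rewards = [1.0, 0.5, 2.0, 1.5, 0.0]

estimate, n_played = 0.0, 0
for r in rewards:
    # same recursion as in EpsilonGreedyPlayer.learn
    estimate += (r - estimate) / (n_played + 1)
    n_played += 1

batch_mean = sum(rewards) / len(rewards)
assert abs(estimate - batch_mean) < 1e-12
```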
# Observations
- Pure greedy is sensitive to the initial values of each arm
- Epsilon-greedy is sentitive to `epsilon` and how to set `epsilon` is non-trivial
- Adaptive-greedy seems a robust alternative of epsilon-greedy
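The observations above can be reproduced with a minimal NumPy simulation (the Bernoulli arm means and the function below are illustrative stand-ins, not this notebook's player classes):

```python
import numpy as np

rng = np.random.default_rng(42)
true_means = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # hypothetical Bernoulli arm means

def run_epsilon_greedy(epsilon, budget=500):
    """Play `budget` rounds of epsilon-greedy; return the total reward collected."""
    counts = np.zeros(len(true_means))   # times each arm was played
    values = np.zeros(len(true_means))   # running average reward per arm
    total = 0.0
    for _ in range(budget):
        if rng.random() < epsilon:
            arm = int(rng.integers(len(true_means)))  # explore: random arm
        else:
            arm = int(np.argmax(values))              # exploit: best arm so far
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total

pure_greedy_reward = run_epsilon_greedy(0.0)
eps_01_reward = run_epsilon_greedy(0.1)
```

With `epsilon=0` the player never explores, so whichever arm looks best after the first few pulls tends to be played forever; that is the initial-value sensitivity noted above.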
| github_jupyter |
# Diamond Practice Project
I already did the diamond project in Alteryx, and now I want to do it again in Python to learn some new features of the statistics and numerics packages.
What I found fascinating in Alteryx is the tool's ability to recognise nominal data and introduce dummy variables for it directly. In the last project I extended the Excel data file myself and added the dummies. You know what? Pandas can do it for us!
```
# a reference to the pandas library
import pandas as pd
# To visualize the data
import matplotlib.pyplot as plt
# This is new. Let's try a library which does
# the linear regression for us
import statsmodels.api as sm
from sklearn import linear_model
from sklearn.linear_model import LinearRegression
# To visualize the data
import matplotlib.pyplot as plt
# the csv file must be in the same directory as this notebook
# be sure to use the right data file.
# This one is the diamonds csv file for building the model
diamond_file = 'diamonds.csv'
# via pandas, the contents are read into a data frame named diamond_data
# pandas is able not only to read Excel files, it does a great job on csv, too.
diamond_data = pd.read_csv(diamond_file)
# Up to now, I used print to show the data
print(diamond_data)
# now I know I can show the data by typing the variable name
# and the output is a lot nicer than with "print"
diamond_data
```
At least column "cut" in the dataset is a nominal column. In the example before I used Excel to enrich the data file. Now pandas can do the job for us. May I introduce the get_dummies() function:
```
# get the column cut from the diamond_data data frame
# and generate dummy values for it
# save the dummy values in enriched_nominal_data
enriched_nominal_data = pd.get_dummies(diamond_data['cut'])
enriched_nominal_data
```
So far, so good. Now we have a data frame containing the dummy values. But... how can we combine the dummy values with the diamond_data? Very simple: the data frame has a `join()` function to join data sets. This should be no problem for us, as both data frames have the same number of rows.
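A toy frame (made-up values, not the diamond data) shows the `get_dummies()` + `join()` pattern in miniature:

```python
import pandas as pd

toy = pd.DataFrame({"carat": [0.3, 0.4, 0.5],
                    "cut": ["Ideal", "Good", "Ideal"]})
dummies = pd.get_dummies(toy["cut"])   # one 0/1 column per category
combined = toy.join(dummies)           # row counts match, so join is safe
```

With `drop_first=True` (used later in the notebook) one dummy column per variable is dropped, which avoids perfect collinearity in the regression.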
```
diamond_data = diamond_data.join(enriched_nominal_data)
diamond_data
```
Now we have almost everything we need for the diamond project. Let's get started!
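Why the constant column matters: OLS only fits a Y-intercept if the design matrix contains one, which is exactly what `sm.add_constant` adds. A plain-NumPy sketch with made-up points:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 5.0  # made-up line: slope 2, intercept 5

# The column of ones is the "constant" that sm.add_constant prepends
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = coef  # recovers 5.0 and 2.0
```

Without the ones column, the fitted line would be forced through the origin.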
```
# the csv file must be in the same directory as this notebook
# be sure to use the right data file.
# This one is the diamonds csv file for building the model
diamond_file= 'diamonds.csv'
diamond_data = pd.read_csv(diamond_file)
# create dummy variables for the nominal values of color, clarity and cut.
# drop the first column of each of the dummy variables.
# You remember, we need one column less
enriched_nominal_data = pd.get_dummies(diamond_data, columns=['color','clarity','cut'], drop_first=True)
# We want to calculate the price of the new diamonds to make a bid, so this is our dependent variable
# and has to be put on the Y-Axis
Y = enriched_nominal_data['price']
# this is another nice feature of pandas. we simply drop the columns we do not need anymore...
enriched_nominal_data = enriched_nominal_data.drop(['cut_ord', 'clarity_ord', 'price'], axis=1)
# ...and assign the values to X
X = enriched_nominal_data
# let's do the evaluation with statsmodels
# we have to add a constant to the calculation or
# we do not have a Y-intercept
X = sm.add_constant(X)
# build the model
model = sm.OLS(Y,X).fit()
model_prediction = model.predict(X)
model_details = model.summary()
# let's print the results, so we can compare to alteryx
print(model_details)
# Now we have a working model, we can use the data from
# new-diamonds to test the model and to make a prediction
diamond_test_file = "new-diamonds.csv"
diamond_test_data = pd.read_csv(diamond_test_file)
# see above
enriched_nominal_test_data = pd.get_dummies(diamond_test_data, columns=['color','clarity','cut'], drop_first=True)
enriched_nominal_test_data = enriched_nominal_test_data.drop(['cut_ord', 'clarity_ord'], axis=1)
# assign to X
X_test = enriched_nominal_test_data
# add a constant
X_test = sm.add_constant(X_test)
# and make a prediction with the test data
model_prediction_test = model.predict(X_test)
print("")
print("")
# calculate the bid
print("The bid should be:",'${:,.2f}'.format(sum(model_prediction_test)*0.7))
# how far are we from the alteryx solution
print("Our solution is:",format(sum(model_prediction_test)*0.7/8230695.69*100,'.2f'),"% correct to the alteryx solution.")
print("We have a deviation of ", (1-sum(model_prediction_test)*0.7/8230695.69)*100,"%")
# Now we use sklearn instead of statsmodel to see
# where the deviation may come from
sk_model = LinearRegression()
sk_model.fit(X,Y)
print("Intercept: ",sk_model.intercept_)
print("Coefficients: ",sk_model.coef_)
print("")
print("")
# calculate the bid
print("The bid should be:",'${:,.2f}'.format(sum(sk_model.predict(X_test))*0.7))
# how far are we from the alteryx solution
print("Our solution is:",format(sum(sk_model.predict(X_test))*0.7/8230695.69*100,'.2f'),"% correct to the alteryx solution.")
print("We have a deviation of ", (1-sum(sk_model.predict(X_test))*0.7/8230695.69)*100,"%")
# ... but we get exactly the same result
```
```
import os
import re
import json
import utils
import scipy
import torch
import random
import gensim
import warnings
import numpy as np
import pandas as pd
from tasks import *
from pprint import pprint
from transformers import *
from tqdm.notebook import tqdm
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
complete_df = pd.read_csv("data/clean_df.csv")
complete_df.shape
complete_df.head(2)
complete_df.describe()
# Keep only texts with minimal number of words
complete_df = complete_df[complete_df['text'].apply(lambda x: len(re.findall(r"(?i)\b[a-z]+\b", x))) > 1000]
complete_df.shape
frac_of_articles = 1
train_df = complete_df.sample(frac = frac_of_articles, random_state = 42)
train_corpus = (list(utils.read_corpus(train_df, 'abstract')))
# Using distributed memory model
model = gensim.models.doc2vec.Doc2Vec(dm = 1, vector_size = 50, min_count = 10, dm_mean = 1, epochs = 20, seed = 42, workers = 6)
model.build_vocab(train_corpus)
model.train(train_corpus, total_examples = model.corpus_count, epochs = model.epochs)
list_of_tasks = [task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9]
abstract_vectors = model.docvecs.vectors_docs
array_of_tasks = [utils.get_doc_vector(task, model) for task in list_of_tasks]
train_df['abstract_vector'] = [vec for vec in abstract_vectors]
```
### Nearest Neighbors search
```
train_df = train_df[train_df['abstract'].apply(lambda x: len(re.findall(r"(?i)\b[a-z]+\b", x))) > 40]
train_df.shape
train_array = train_df['abstract_vector'].values.tolist()
ball_tree = NearestNeighbors(algorithm = 'ball_tree', leaf_size = 20).fit(train_array)
# Query for all tasks
distances, indices = ball_tree.kneighbors(array_of_tasks, n_neighbors = 3)
print("="*80, f"\n\nTask = {list_of_tasks[3]}\n", )
df = train_df.iloc[indices[3]]
abstracts = df['abstract']
titles = df['title']
dist = distances[3]
for l in range(len(dist)):
print(f" Text index = {indices[3][l]} \n Distance = {distances[3][l]} \n Title: {titles.iloc[l]} \n Abstract extract: {abstracts.iloc[l]}\n\n")
```
## Level 2 Abstraction using SciBERT
```
tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')
transformer = AutoModel.from_pretrained('allenai/scibert_scivocab_uncased').to('cuda')
number_top_matches = 3
def convert(sentence):
tokenized_sentence = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_sentence)
with torch.no_grad():
vector = transformer(torch.LongTensor([indexed_tokens]).to('cuda'))[1].cpu().numpy().tolist()
return vector[0]
query = 'What are the possible medications against COVID-19?'
query_embeddings = convert(query)
print("="*80, f"\n\nTask = \n\n {list_of_tasks[3]}\n", )
print("\n\n======================\n\n")
print("Searching in Abstracts")
df = train_df.iloc[indices[3]]
abstracts = [i.split(".") for i in df['abstract']]
for abstract in abstracts:
print("\n\n======================\n\n")
print("Abstract:", '.'.join(abstract))
abstracts_vector = [convert(i) for i in abstract if not(len(i) < 5)]
distance = scipy.spatial.distance.cdist([query_embeddings], abstracts_vector, "cosine")[0]
results = zip(range(len(distance)), distance)
results = sorted(results, key = lambda x: x[1])
print("\n\n======================\n\n")
print("Query:", query)
print("\nTop 3 most similar sentences are:")
for idx, dist in results[0:number_top_matches]:
print(abstract[idx].strip(), "(Cosine Score: %.4f)" % (1-dist))
print("\n\n======================\n\n")
print("Searching in Texts")
texts = [i.split(".") for i in df['text']]
for text in texts:
print("\n\n======================\n\n")
print("Text:", text[:100])
text_vector = [convert(i) for i in text if not(len(i) < 5)]
distance = scipy.spatial.distance.cdist([query_embeddings], text_vector, "cosine")[0]
results = zip(range(len(distance)), distance)
results = sorted(results, key = lambda x: x[1])
print("\n\n======================\n\n")
print("Query:", query)
print("\nTop 3 most similar sentences are:")
for idx, dist in results[0:number_top_matches]:
print(text[idx].strip(), "\n(Cosine Score: %.4f)" % (1-dist))
```
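The `Cosine Score` printed above is computed as `1 - dist`, i.e. the cosine similarity. A minimal NumPy version of that scoring, on toy vectors rather than SciBERT sentence embeddings:

```python
import numpy as np

def cosine_score(a, b):
    """Cosine similarity of two vectors, i.e. 1 - cosine distance."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [1.0, 0.0]
sentences = {"parallel": [2.0, 0.0], "orthogonal": [0.0, 3.0]}
scores = {k: cosine_score(query, v) for k, v in sentences.items()}
```

A score of 1 means the embeddings point the same way (very similar sentences); a score near 0 means they are unrelated.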
<a href="https://colab.research.google.com/github/mrdbourke/tensorflow-deep-learning/blob/main/video_notebooks/05_transfer_learning_in_tensorflow_part_2_fine_tuning_video.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Transfer Learning with TensorFlow Part 2: Fine-tuning
In the previous notebook, we covered transfer learning feature extraction, now it's time to learn about a new kind of transfer learning: fine-tuning.
See full course materials on GitHub: https://github.com/mrdbourke/tensorflow-deep-learning/
```
# Check if we're using a GPU
!nvidia-smi
```
## Creating helper functions
In previous notebooks we've created a bunch of helper functions. We could rewrite them all here, but that would be tedious.
So, it's a good idea to put functions you'll want to use again in a script you can download and import into your notebooks (or elsewhere).
We've done this for some of the functions we've used previously here: https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
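The same pattern works with any local file: write reusable functions to a `.py` script once, then import them wherever needed (a minimal sketch; `my_helpers` and `double` are made-up names, not part of the course materials):

```python
import pathlib
import sys
import tempfile

# Write a tiny helper "script" to a temporary directory
tmp_dir = pathlib.Path(tempfile.mkdtemp())
(tmp_dir / "my_helpers.py").write_text(
    "def double(x):\n"
    "    return 2 * x\n"
)

# Make the directory importable, then import just like helper_functions above
sys.path.insert(0, str(tmp_dir))
import my_helpers

result = my_helpers.double(21)  # -> 42
```

In Colab the `!wget` above does the "write the file" step for you; the `import` step is identical.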
```
!wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
# Import helper functions we're going to use in this notebook
from helper_functions import create_tensorboard_callback, plot_loss_curves, unzip_data, walk_through_dir
```
> 🔑 **Note:** If you're running this notebook in Google Colab, when it times out Colab will delete `helper_functions.py`, so you'll have to redownload it if you want access to your helper functions.
## Let's get some data
This time we're going to see how we can use the pretrained models within `tf.keras.applications` and apply them to our own problem (recognizing images of food).
link: https://www.tensorflow.org/api_docs/python/tf/keras/applications
```
# Get 10% of training data of 10 classes of Food101
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip
unzip_data("10_food_classes_10_percent.zip")
# Check out how many images and subdirectories are in our dataset
walk_through_dir("10_food_classes_10_percent")
# Create training and test directory paths
train_dir = "10_food_classes_10_percent/train"
test_dir = "10_food_classes_10_percent/test"
import tensorflow as tf
IMG_SIZE = (224, 224)
BATCH_SIZE = 32
train_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(directory=train_dir,
image_size=IMG_SIZE,
label_mode="categorical",
batch_size=BATCH_SIZE)
test_data = tf.keras.preprocessing.image_dataset_from_directory(directory=test_dir,
image_size=IMG_SIZE,
label_mode="categorical",
batch_size=BATCH_SIZE)
train_data_10_percent
# Check out the class names of our dataset
train_data_10_percent.class_names
# See an example of a batch of data
for images, labels in train_data_10_percent.take(1):
print(images, labels)
```
## Model 0: Building a transfer learning feature extraction model using the Keras Functional API
The Sequential API is straightforward: it runs our layers in sequential order.
But the functional API gives us more flexibility with our models - https://www.tensorflow.org/guide/keras/functional
```
# 1. Create base model with tf.keras.applications
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
# 2. Freeze the base model (so the underlying pre-trained patterns aren't updated during training)
base_model.trainable = False
# 3. Create inputs into our model
inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")
# 4. If using a model like ResNet50V2 you will need to normalize inputs (you don't have to for EfficientNet(s))
# x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(inputs)
# 5. Pass the inputs to the base_model
x = base_model(inputs)
print(f"Shape after passing inputs through base model: {x.shape}")
# 6. Average pool the outputs of the base model (aggregate the most important information, reduce the number of computations)
x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)
print(f"Shape after GlobalAveragePooling2D: {x.shape}")
# 7. Create the output activation layer
outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x)
# 8. Combine the inputs with the outputs into a model
model_0 = tf.keras.Model(inputs, outputs)
# 9. Compile the model
model_0.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# 10. Fit the model and save its history
history_10_percent = model_0.fit(train_data_10_percent,
epochs=5,
steps_per_epoch=len(train_data_10_percent),
validation_data=test_data,
validation_steps=int(0.25 * len(test_data)),
callbacks=[create_tensorboard_callback(dir_name="transfer_learning",
experiment_name="10_percent_feature_extraction")])
# Evaluate on the full test dataset
model_0.evaluate(test_data)
# Check the layers in our base model
for layer_number, layer in enumerate(base_model.layers):
print(layer_number, layer.name)
# How about we get a summary of the base model?
base_model.summary()
# How about a summary of our whole model?
model_0.summary()
# Check out our model's training curves
plot_loss_curves(history_10_percent)
```
## Getting a feature vector from a trained model
Let's demonstrate the Global Average Pooling 2D layer...
We have a tensor after our model goes through `base_model` of shape (None, 7, 7, 1280).
But then when it passes through GlobalAveragePooling2D, it turns into (None, 1280).
Let's use a similar shaped tensor of (1, 4, 4, 3) and then pass it to GlobalAveragePooling2D.
```
# Define the input shape
input_shape = (1, 4, 4, 3)
# Create a random tensor
tf.random.set_seed(42)
input_tensor = tf.random.normal(input_shape)
print(f"Random input tensor:\n {input_tensor}\n")
# Pass the random tensor through a global average pooling 2D layer
global_average_pooled_tensor = tf.keras.layers.GlobalAveragePooling2D()(input_tensor)
print(f"2D global average pooled random tensor:\n {global_average_pooled_tensor}\n")
# Check the shape of the different tensors
print(f"Shape of input tensor: {input_tensor.shape}")
print(f"Shape of Global Average Pooled 2D tensor: {global_average_pooled_tensor.shape}")
# Let's replicate the GlobalAveragePool2D layer
tf.reduce_mean(input_tensor, axis=[1, 2])
```
> 🛠 **Practice:** Try to do the same with the above two cells but this time use `GlobalMaxPool2D`... and see what happens.
> 🔑 **Note:** One of the reasons feature extraction transfer learning is named the way it is is that the pretrained model outputs a **feature vector** (a long tensor of numbers representing the model's learned representation of a particular sample; in our case, the output of the `tf.keras.layers.GlobalAveragePooling2D()` layer) which can then be used to extract patterns for our own specific problem.
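For the practice suggestion above, here's a plain-NumPy sketch (a toy random tensor, not real model output) of what the two pooling layers compute on a `(1, 4, 4, 3)` tensor:

```python
import numpy as np

rng = np.random.default_rng(42)
tensor = rng.normal(size=(1, 4, 4, 3))  # (batch, height, width, channels)

# GlobalAveragePooling2D averages over the spatial (height, width) axes...
global_avg = tensor.mean(axis=(1, 2))
# ...while GlobalMaxPool2D takes the maximum over the same axes
global_max = tensor.max(axis=(1, 2))
```

Both collapse `(1, 4, 4, 3)` down to a `(1, 3)` feature vector; max pooling keeps only the strongest activation per channel instead of the average.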
## Running a series of transfer learning experiments
We've seen the incredible results transfer learning can get with only 10% of the training data, but how does it go with 1% of the training data... how about we set up a bunch of experiments to find out:
1. `model_1` - use feature extraction transfer learning with 1% of the training data with data augmentation
2. `model_2` - use feature extraction transfer learning with 10% of the training data with data augmentation
3. `model_3` - use fine-tuning transfer learning on 10% of the training data with data augmentation
4. `model_4` - use fine-tuning transfer learning on 100% of the training data with data augmentation
> 🔑 **Note:** throughout all experiments the same test dataset will be used to evaluate our model... this ensures consistency across evaluation metrics.
### Getting and preprocessing data for model_1
```
# Download and unzip data - preprocessed from Food101
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_1_percent.zip
unzip_data("10_food_classes_1_percent.zip")
# Create training and test dirs
train_dir_1_percent = "10_food_classes_1_percent/train"
test_dir = "10_food_classes_1_percent/test"
# How many images are we working with?
walk_through_dir("10_food_classes_1_percent")
# Setup data loaders
IMG_SIZE = (224, 224)
train_data_1_percent = tf.keras.preprocessing.image_dataset_from_directory(train_dir_1_percent,
label_mode="categorical",
image_size=IMG_SIZE,
batch_size=BATCH_SIZE) # default = 32
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
label_mode="categorical",
image_size=IMG_SIZE,
batch_size=BATCH_SIZE)
```
## Adding data augmentation right into the model
To add data augmentation right into our models, we can use the layers inside:
* the `tf.keras.layers.experimental.preprocessing` module
We can see the benefits of doing this within the TensorFlow Data augmentation documentation: https://www.tensorflow.org/tutorials/images/data_augmentation#use_keras_preprocessing_layers
Off the top of our heads, after reading the docs, the benefits of using data augmentation inside the model are:
* Preprocessing of images (augmenting them) happens on the GPU (much faster) rather than the CPU.
* Image data augmentation only happens during training, so we can still export our whole model and use it elsewhere.
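The train-only behaviour can be sketched in plain NumPy (the `training` flag below is a hand-rolled stand-in for what Keras handles automatically):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_horizontal_flip(image, training=True):
    """Flip an (H, W, C) image left-right half the time, but only in training."""
    if training and rng.random() < 0.5:
        return image[:, ::-1, :]
    return image

img = np.arange(12).reshape(2, 2, 3)
augmented = random_horizontal_flip(img, training=True)   # maybe flipped
untouched = random_horizontal_flip(img, training=False)  # never flipped
```

At inference time (`training=False`) the image always passes through unchanged, which is why the exported model behaves deterministically.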
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
# Create data augmentation stage with horizontal flipping, rotations, zooms, etc
data_augmentation = keras.Sequential([
preprocessing.RandomFlip("horizontal"),
preprocessing.RandomRotation(0.2),
preprocessing.RandomZoom(0.2),
preprocessing.RandomHeight(0.2),
preprocessing.RandomWidth(0.2),
# preprocessing.Rescaling(1./255) # keep for models like ResNet50V2, but EfficientNets have rescaling built-in
], name="data_augmentation")
```
### Visualize our data augmentation layer (and see what happens to our data)
```
# View a random image and compare it to its augmented version
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import random
target_class = random.choice(train_data_1_percent.class_names)
target_dir = "10_food_classes_1_percent/train/" + target_class
random_image = random.choice(os.listdir(target_dir))
random_image_path = target_dir + "/" + random_image
# Read and plot in the random image
img = mpimg.imread(random_image_path)
plt.imshow(img)
plt.title(f"Original random image from class: {target_class}")
plt.axis(False);
# Now let's plot our augmented random image
augmented_img = data_augmentation(tf.expand_dims(img, axis=0))
plt.figure()
plt.imshow(tf.squeeze(augmented_img)/255.)
plt.title(f"Augmented random image from class: {target_class}")
plt.axis(False);
# print(augmented_img)
```
## Model 1: Feature extraction transfer learning on 1% of the data with data augmentation
```
# Setup input shape and base model, freezing the base model layers
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
# Create input layer
inputs = layers.Input(shape=input_shape, name="input_layer")
# Add in data augmentation Sequential model as a layer
x = data_augmentation(inputs)
# Give base_model the inputs (after augmentation) and don't train it
x = base_model(x, training=False)
# Pool output features of the base model
x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)
# Put a dense layer on as the output
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)
# Make a model using the inputs and outputs
model_1 = keras.Model(inputs, outputs)
# Compile the model
model_1.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
# Fit the model
history_1_percent = model_1.fit(train_data_1_percent,
epochs=5,
steps_per_epoch=len(train_data_1_percent),
validation_data=test_data,
validation_steps=int(0.25 * len(test_data)),
# Track model training logs
callbacks=[create_tensorboard_callback(dir_name="transfer_learning",
experiment_name="1_percent_data_aug")])
# Check out a model summary
model_1.summary()
# Evaluate on the full test dataset
results_1_percent_data_aug = model_1.evaluate(test_data)
results_1_percent_data_aug
# How do the loss curves look for the model trained on 1% of the data with data augmentation?
plot_loss_curves(history_1_percent)
```
## Model 2: feature extraction transfer learning model with 10% of data and data augmentation
```
# Get 10% of data... (uncomment if you don't have it)
# !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip
# unzip_data("10_food_classes_10_percent.zip")
train_dir_10_percent = "10_food_classes_10_percent/train"
test_dir = "10_food_classes_10_percent/test"
# How many images are in our directories?
walk_through_dir("10_food_classes_10_percent")
# Set data inputs
import tensorflow as tf
IMG_SIZE = (224, 224)
train_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(train_dir_10_percent,
label_mode="categorical",
image_size=IMG_SIZE)
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
label_mode="categorical",
image_size=IMG_SIZE)
# Create model 2 with data augmentation built in
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.models import Sequential
# Build data augmentation layer
data_augmentation = Sequential([
preprocessing.RandomFlip("horizontal"),
preprocessing.RandomHeight(0.2),
preprocessing.RandomWidth(0.2),
preprocessing.RandomZoom(0.2),
preprocessing.RandomRotation(0.2),
# preprocessing.Rescaling(1./255) # if you're using a model such as ResNet50V2, you'll need to rescale your data, efficientnet has rescaling built-in
], name="data_augmentation")
# Setup the input shape to our model
input_shape = (224, 224, 3)
# Create a frozen base model (also called the backbone)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False
# Create the inputs and outputs (including the layers in between)
inputs = layers.Input(shape=input_shape, name="input_layer")
x = data_augmentation(inputs) # augment our training images (augmentation doesn't occur on test data)
x = base_model(x, training=False) # pass augmented images to base model but keep it in inference mode; this also ensures batchnorm layers don't get updated - https://keras.io/guides/transfer_learning/#build-a-model
x = layers.GlobalAveragePooling2D(name="global_average_pooling_2D")(x)
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)
model_2 = tf.keras.Model(inputs, outputs)
# Compile
model_2.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(),
metrics=["accuracy"])
```
### Creating a ModelCheckpoint callback
The ModelCheckpoint callback saves our model (the full model or just the weights) at intervals during training. This is useful so we can come back and start where we left off.
```
# Set checkpoint path
checkpoint_path = "ten_percent_model_checkpoints_weights/checkpoint.ckpt"
# Create a ModelCheckpoint callback that saves the model's weights only
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
save_best_only=False,
save_freq="epoch", # save every epoch
verbose=1)
```
### Fit model 2 passing in the ModelCheckpoint callback
```
# Fit the model saving checkpoints every epoch
initial_epochs = 5
history_10_percent_data_aug = model_2.fit(train_data_10_percent,
epochs=initial_epochs,
validation_data=test_data,
validation_steps=int(0.25 * len(test_data)),
callbacks=[create_tensorboard_callback(dir_name="transfer_learning",
experiment_name="10_percent_data_aug"),
checkpoint_callback])
# What were model_0 results?
model_0.evaluate(test_data)
# Check model_2 results on all test_data
results_10_percent_data_aug = model_2.evaluate(test_data)
results_10_percent_data_aug
# Plot model loss curves
plot_loss_curves(history_10_percent_data_aug)
```
### Loading in checkpointed weights
Loading in checkpointed weights returns a model to a specific checkpoint.
```
# Load in saved model weights and evaluate model
model_2.load_weights(checkpoint_path)
# Evaluate model_2 with loaded weights
loaded_weights_model_results = model_2.evaluate(test_data)
# If the results from our previously evaluated model_2 match the loaded weights, everything has worked!
results_10_percent_data_aug == loaded_weights_model_results
results_10_percent_data_aug
loaded_weights_model_results
# Check to see if loaded model results are very close to our previous non-loaded model results
import numpy as np
np.isclose(np.array(results_10_percent_data_aug), np.array(loaded_weights_model_results))
# Check the difference between the two results
print(np.array(results_10_percent_data_aug) - np.array(loaded_weights_model_results))
```
## Model 3: Fine-tuning an existing model on 10% of the data
> 🔑 **Note:** Fine-tuning usually works best *after* training a feature extraction model for a few epochs with large amounts of custom data.
```
# Layers in loaded model
model_2.layers
# Are these layers trainable?
for layer in model_2.layers:
print(layer, layer.trainable)
# What layers are in our base_model (EfficientNetB0) and are they trainable?
for i, layer in enumerate(model_2.layers[2].layers):
print(i, layer.name, layer.trainable)
# How many trainable variables are in our base model?
print(len(model_2.layers[2].trainable_variables))
# To begin fine-tuning, let's make the last 10 layers of our base_model trainable
base_model.trainable = True
# Freeze all layers except for the last 10
for layer in base_model.layers[:-10]:
layer.trainable = False
# Recompile (we have to recompile our models every time we make a change)
model_2.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), # when fine-tuning, you typically want to lower the learning rate by 10x
metrics=["accuracy"])
```
> 🔑 **Note:** When using fine-tuning it's best practice to lower your learning rate by some amount. How much? This is a hyperparameter you can tune. But a good rule of thumb is at least 10x (though different sources will claim other values). A good resource for information on this is the ULMFiT paper: https://arxiv.org/abs/1801.06146
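The freezing logic used above (`for layer in base_model.layers[:-10]: layer.trainable = False`) is plain list slicing plus a boolean flag; a toy sketch with stand-in layer objects (not real Keras layers):

```python
class ToyLayer:
    """Stand-in for a Keras layer: just a name and a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

layers = [ToyLayer(f"block_{i}") for i in range(30)]

# Unfreeze everything first, then re-freeze all but the last 10 layers
for layer in layers:
    layer.trainable = True
for layer in layers[:-10]:
    layer.trainable = False

num_trainable = sum(layer.trainable for layer in layers)  # 10 layers remain trainable
```

In Keras the same slicing works because `base_model.layers` is an ordered list from input to output, so `[:-10]` freezes everything except the 10 layers closest to the output.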
```
# Check which layers are tunable (trainable)
for layer_number, layer in enumerate(model_2.layers[2].layers):
print(layer_number, layer.name, layer.trainable)
# Now we've unfrozen some of the layers closer to the top, how many trainable variables are there?
print(len(model_2.trainable_variables))
# Fine tune for another 5 epochs
fine_tune_epochs = initial_epochs + 5
# Refit the model (same as model_2 except with more trainable layers)
history_fine_10_percent_data_aug = model_2.fit(train_data_10_percent,
epochs=fine_tune_epochs,
validation_data=test_data,
validation_steps=int(0.25 * len(test_data)),
initial_epoch=history_10_percent_data_aug.epoch[-1], # start training from previous last epoch
callbacks=[create_tensorboard_callback(dir_name="transfer_learning",
experiment_name="10_percent_fine_tune_last_10")])
# Evaluate the fine-tuned model (model_3, which is actually model_2 fine-tuned for another 5 epochs)
results_fine_tune_10_percent = model_2.evaluate(test_data)
# Check out the loss curves of our fine-tuned model
plot_loss_curves(history_fine_10_percent_data_aug)
```
The `plot_loss_curves` function works great with models which have only been fit once; however, we want something to compare one series of `fit()` runs with another (e.g. before and after fine-tuning).
```
# Let's create a function to compare training histories
def compare_historys(original_history, new_history, initial_epochs=5):
"""
Compares two TensorFlow History objects.
"""
# Get original history measurements
acc = original_history.history["accuracy"]
loss = original_history.history["loss"]
val_acc = original_history.history["val_accuracy"]
val_loss = original_history.history["val_loss"]
# Combine original history metrics with new_history metrics
total_acc = acc + new_history.history["accuracy"]
total_loss = loss + new_history.history["loss"]
total_val_acc = val_acc + new_history.history["val_accuracy"]
total_val_loss = val_loss + new_history.history["val_loss"]
# Make plot for accuracy
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(total_acc, label="Training Accuracy")
plt.plot(total_val_acc, label="Val Accuracy")
plt.plot([initial_epochs-1, initial_epochs-1], plt.ylim(), label="Start Fine Tuning")
plt.legend(loc="lower right")
plt.title("Training and Validation Accuracy")
# Make plot for loss
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 2)
plt.plot(total_loss, label="Training Loss")
plt.plot(total_val_loss, label="Val Loss")
plt.plot([initial_epochs-1, initial_epochs-1], plt.ylim(), label="Start Fine Tuning")
plt.legend(loc="upper right")
plt.title("Training and Validation Loss")
compare_historys(history_10_percent_data_aug,
history_fine_10_percent_data_aug,
initial_epochs=5)
```
## Model 4: Fine-tuning an existing model on all of the data
```
# Download and unzip 10 classes of Food101 data with all images
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip
unzip_data("10_food_classes_all_data.zip")
# Setup training and test dir
train_dir_all_data = "10_food_classes_all_data/train"
test_dir = "10_food_classes_all_data/test"
# How many images are we working with now?
walk_through_dir("10_food_classes_all_data")
# Setup data inputs
import tensorflow as tf
IMG_SIZE = (224, 224)
train_data_10_classes_full = tf.keras.preprocessing.image_dataset_from_directory(train_dir_all_data,
label_mode="categorical",
image_size=IMG_SIZE)
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
label_mode="categorical",
image_size=IMG_SIZE)
```
The test dataset we've loaded in is the same as what we've been using for previous experiments (all experiments have used the same test dataset).
Let's verify this...
```
# Evaluate model 2 (this is the fine-tuned on 10 percent of data version)
model_2.evaluate(test_data)
results_fine_tune_10_percent
```
To train a fine-tuning model (model_4) we need to revert model_2 back to its feature extraction weights.
```
# Load weights from checkpoint, that way we can fine-tune from
# the same stage the 10 percent data model was fine-tuned from
model_2.load_weights(checkpoint_path)
# Let's evaluate model_2 now
model_2.evaluate(test_data)
# Check to see if our model_2 has been reverted back to feature extraction results
results_10_percent_data_aug
```
Alright, the previous steps might seem quite confusing but all we've done is:
1. Trained a feature extraction transfer learning model for 5 epochs on 10% of the data with data augmentation (model_2) and we saved the model's weights using `ModelCheckpoint` callback.
2. Fine-tuned the same model on the same 10% of the data for a further 5 epochs with the top 10 layers of the base model unfrozen (model_3).
3. Saved the results and training logs each time.
4. Reloaded the model from step 1 to do the same steps as step 2 except this time we're going to use all of the data (model_4).
```
# Check which layers are tunable in the whole model
for layer_number, layer in enumerate(model_2.layers):
print(layer_number, layer.name, layer.trainable)
# Let's drill into our base_model (efficientnetb0) and see what layers are trainable
for layer_number, layer in enumerate(model_2.layers[2].layers):
print(layer_number, layer.name, layer.trainable)
# Compile
model_2.compile(loss="categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
metrics=["accuracy"])
# Continue to train and fine-tune the model to our data (100% of training data)
fine_tune_epochs = initial_epochs + 5
history_fine_10_classes_full = model_2.fit(train_data_10_classes_full,
epochs=fine_tune_epochs,
validation_data=test_data,
validation_steps=int(0.25 * len(test_data)),
initial_epoch=history_10_percent_data_aug.epoch[-1],
callbacks=[create_tensorboard_callback(dir_name="transfer_learning",
experiment_name="full_10_classes_fine_tune_last_10")])
```
> 🔑 **Note:** Fine-tuning generally takes longer per epoch (more layers being updated) and in the case of the model we just ran we used 10x more data than before (more patterns to find) so it makes sense training took longer.
```
# Let's evaluate on all of the test data
results_fine_tune_full_data = model_2.evaluate(test_data)
results_fine_tune_full_data
# How did fine-tuning go with more data?
compare_historys(original_history=history_10_percent_data_aug,
new_history=history_fine_10_classes_full,
initial_epochs=5)
```
## Viewing our experiment data on TensorBoard
> 🔑 **Note:** Anything you upload to TensorBoard.dev is going to be public. So if you have private data, do not upload.
```
# View tensorboard logs of transfer learning modelling experiments (should be ~4 models)
# Upload TensorBoard dev records
!tensorboard dev upload --logdir ./transfer_learning \
--name "Transfer Learning Experiments with 10 Food101 Classes" \
--description "A series of different transfer learning experiments with varying amounts of data and fine-tuning." \
--one_shot # exits the uploader once it's finished uploading
```
My TensorBoard experiments are available at: https://tensorboard.dev/experiment/vcySzjmkRkKBLVSdAQMO8g/
If you need to view or delete a previous TensorBoard experiment, you can run the commands below.
```
# # View all of your uploaded TensorBoard.dev experiments (public)
# !tensorboard dev list
# # To delete an experiment
# !tensorboard dev delete --experiment_id vcySzjmkRkKBLVSdAQMO8g # Change this for the experiment ID you want to delete
```
# 25 TFE exercises
#### 1. Import tensorflow package under the name `tf` and enable eager (★☆☆)
```
import tensorflow as tf
tf.enable_eager_execution()
```
#### 2. Check eager is enabled (★☆☆)
```
tf.executing_eagerly()
```
#### 3. Show number of GPU (★☆☆)
```
tfe = tf.contrib.eager
tfe.num_gpus()
```
#### 4. Create 1D Tensor from 0 to 10 step 2 - [0, 2, 4, 6, 8] (★☆☆)
```
tf.range(0,10,2)
```
#### 5. How to get last 3 elements of 1D Tensor? (★☆☆)
```
a = tf.range(10)
tf.slice(a,[a.shape.num_elements()-3],[3])
```
#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
```
b= tf.SparseTensor(indices=[[0, 4]], values=[1], dense_shape=[1, 10])
tf.sparse_tensor_to_dense(b)
```
#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)
```
tf.range(10,50,1)
```
#### 8. Reverse a vector (first element becomes last) (★☆☆)
```
a=tf.range(10)
a[::-1]
```
#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
```
a=tf.range(9)
tf.reshape(a,shape=(3,3))
```
#### 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
```
a=tf.constant([1,2,0,0,4,0])
tf.where(a>0)
```
#### 11. Create a 3x3 identity matrix (★☆☆)
```
tf.eye(3)
```
#### 12. Create a 3x3x3 tensor with random values (★☆☆)
```
tf.random_normal(shape=(3,3,3))
```
#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
```
a=tf.random_normal(shape=(10,10))
tf.reduce_min(a),tf.reduce_max(a)
```
#### 14. Create a random vector of size 30 and find the mean value (★☆☆)
```
a=tf.random_normal(mean=0,shape=(30,))
tf.reduce_mean(a)
```
#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
```
a=tf.zeros(shape=(3,3))
paddings = tf.constant([[1, 1],[1,1]])
tf.pad(a, paddings, "CONSTANT",constant_values=1)
```
#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)
```
a=tf.ones(shape=(3,3))
paddings = tf.constant([[1, 1],[1,1]])
tf.pad(a, paddings, "CONSTANT",constant_values=0)
```
#### 17. What is the result of the following expression? (★☆☆)
```python
a=tf.constant(0.3)
b=tf.constant(0.1)
c=tf.constant(3,dtype=b.dtype)
print(c*b==a)#?
print(tf.equal(c*b,a)) #?
```
```
a=tf.constant(0.3)
b=tf.constant(0.1)
c=tf.constant(3,dtype=b.dtype)
print(c*b==a)
print(tf.equal(c*b,a))
```
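The takeaway generalizes beyond TensorFlow: binary floating point cannot represent 0.1 exactly, so `3 * 0.1 != 0.3` in plain Python too. A minimal stdlib-only sketch:

```python
import math

a = 0.3
b = 0.1

# 3 * 0.1 accumulates representation error, so exact equality fails...
print(3 * b == a)              # False
# ...but the two values agree to within a few ULPs
print(math.isclose(3 * b, a))  # True
print(abs(3 * b - a))          # on the order of 1e-17
```

This is why `tf.equal(c*b, a)` above returns `False` as well: the comparison is exact, not tolerance-based.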
#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
```
diagonal=tf.range(1,5)
b=tf.matrix_diag(diagonal)
paddings = tf.constant([[1,0 ],[0,1]])
tf.pad(b,paddings,"CONSTANT",constant_values=0)
```
#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
```
max_size = tf.constant(8)
f_row = tf.range(max_size)%2
s_row = tf.range(1,max_size+1)%2
tf.fill([8, 8], 1)*f_row*tf.expand_dims(f_row,axis=1)+ \
tf.fill([8, 8], 1)*s_row*tf.expand_dims(s_row,axis=1)
max_size = tf.constant(8)
f_row = tf.range(max_size)%2
s_row = tf.range(1,max_size+1)%2
X, Y = tf.meshgrid(f_row, s_row)
tf.abs(X-Y)
```
#### 20. Make One Hot Encoding for [0,1,1,0] vector?
```
indices = [0, 1, 1,0]
tf.one_hot(indices, 2)
```
#### 21. Get Maximum [5,1,7,3,4,5,6,8,2] of each group, where group indexes are [0,0,0,1,1,2,2,2,2] (★☆☆) result = [7, 4, 8]
```
tf.segment_max(
[5,1,7,3,4,5,6,8,2],
[0,0,0,1,1,2,2,2,2],
)
```
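For comparison, the same grouped maximum can be computed in plain NumPy with `np.maximum.reduceat`, given the start index of each segment (segment ids are assumed sorted and contiguous, as `tf.segment_max` also requires):

```python
import numpy as np

data = np.array([5, 1, 7, 3, 4, 5, 6, 8, 2])
segment_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])

# start index of each segment: positions where the id changes
starts = np.concatenate(([0], np.flatnonzero(np.diff(segment_ids)) + 1))
result = np.maximum.reduceat(data, starts)
print(result)  # [7 4 8]
```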
#### 22. Normalize a 5x5 random matrix (★☆☆)
```
a= tf.random_uniform(shape=(5,5))
Amax, Amin = tf.reduce_max(a), tf.reduce_min(a)
A_normalized = (a - Amin)/(Amax - Amin)
A_normalized
```
#### 23. Split vector [1,2,3,4,5,6] into 2 parts [1,2,3] and [4,5,6] (★☆☆)
```
a=tf.constant([1,2,3,4,5,6])
tf.split(a,2)
```
#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
```
a = tf.ones((5,3))
b = tf.ones((3,2))
Z = tf.matmul(a,b )
Z
```
#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
```
a=tf.range(11)
# eager tensors are immutable, so "in place" becomes building a new tensor
tf.where((a<8)&(a>3),-1*a,a)
```
```
%load_ext autoreload
%autoreload 2
!python -m pip install --upgrade --user jax==0.2.8 jaxlib==0.1.59+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!git checkout dev;
!git pull
pwd
!python setup.py install
from jax.config import config
config.update("jax_debug_nans", True)
config.update('jax_enable_x64', True)
import itertools
import math
from functools import partial
import numpy as onp
import jax
print("jax version: ", jax.__version__)
import jax.experimental.optimizers as optimizers
import jax.experimental.stax as stax
import jax.numpy as np
from jax import jit
import matplotlib.pyplot as plt
import IMNN
print("IMNN version: ", IMNN.__version__)
from IMNN.experimental.jax.imnn import (
AggregatedGradientIMNN,
AggregatedNumericalGradientIMNN,
AggregatedSimulatorIMNN,
GradientIMNN,
NumericalGradientIMNN,
SimulatorIMNN,
)
from IMNN.experimental.jax.lfi import (
ApproximateBayesianComputation,
GaussianApproximation,
)
from IMNN.experimental.jax.utils import value_and_jacrev, value_and_jacfwd
rng = jax.random.PRNGKey(0)
import os
os.environ["XLA_FLAGS"] = "--xla_cpu_enable_fast_math=false"  # a `!XLA_FLAGS=...` shell assignment would not persist
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
```
# Model in STAX
```
n_summaries = 2
n_s = 10000
n_d = 5000
λ = 100.0
ϵ = 0.1
# define inception block layer
def InceptBlock2(filters, strides, do_5x5=True, do_3x3=True):
"""InceptNet convolutional striding block.
filters: tuple: (f1,f2,f3)
filters1: for conv1x1
filters2: for conv1x1,conv3x3
filters3: for conv1x1,conv5x5"""
filters1, filters2, filters3 = filters
conv1x1 = stax.serial(stax.Conv(filters1, (1,1), strides, padding="SAME"))
filters4 = filters2
conv3x3 = stax.serial(stax.Conv(filters2, (1,1), strides=None, padding="SAME"),
stax.Conv(filters4, (3,3), strides, padding="SAME"))
filters5 = filters3
conv5x5 = stax.serial(stax.Conv(filters3, (1,1), strides=None, padding="SAME"),
stax.Conv(filters5, (5,5), strides, padding="SAME"))
# conv5x5 = stax.serial(stax.Conv(filters1, (1,1), strides=None, padding="SAME"),
# stax.Conv(filters3, (3,3), strides=None, padding="SAME"),
# stax.Conv(filters3, (3,3), strides, padding="SAME"))
maxpool = stax.serial(stax.MaxPool((3,3), padding="SAME"),
stax.Conv(filters4, (1,1), strides, padding="SAME"))
if do_3x3:
if do_5x5:
return stax.serial(
stax.FanOut(4), # should num=3 or 2 here ?
stax.parallel(conv1x1, conv3x3, conv5x5, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
else:
return stax.serial(
stax.FanOut(3), # should num=3 or 2 here ?
stax.parallel(conv1x1, conv3x3, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
else:
return stax.serial(
stax.FanOut(2), # should num=3 or 2 here ?
stax.parallel(conv1x1, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
rng,drop_rng = jax.random.split(rng)
# fs = 4 #for 128x128 sims
# layers = []
# for i in range(7):
# if i == 5:
# layers.append(InceptBlock2((fs,fs,fs), strides=(2,2), do_5x5=False)
# )
# elif i == 6:
# layers.append(InceptBlock2((fs,fs,fs), strides=(2,2), do_5x5=False, do_3x3=False)
# )
# else:
# layers.append(InceptBlock2((fs,fs,fs), strides=(2,2))
# )
# if i % 4 == 0:
# fs *= 2
# layers.append(stax.Conv(n_summaries, (1,1), strides=(1,1), padding="SAME"))
# layers.append(stax.Flatten)
# model = stax.serial(*layers)
fs = 32
# model = stax.serial(InceptBlock((fs,fs,fs), strides=(2,2)), # output 64x64
# InceptBlock((fs,fs,fs), strides=(2,2)), # output 32x32
# InceptBlock((fs,fs,fs), strides=(2,2)), # output 16x16
# InceptBlock((fs,fs,fs), strides=(2,2)), # output 8x8
# InceptBlock((fs,fs,fs), strides=(2,2)), # output 4x4
# InceptBlock((fs,fs,fs), strides=(2,2), do_5x5=False), # output 2x2
# InceptBlock((fs,fs,fs), strides=(2,2), do_5x5=False), # output 1x1
# stax.Conv(n_summaries, (1,1), strides=(1,1), padding="SAME"),
# stax.Flatten,
# )
model = stax.serial(
# InceptBlock2((fs,fs,fs), strides=(4,4)),
InceptBlock2((fs,fs,fs), strides=(4,4)),
InceptBlock2((fs,fs,fs), strides=(4,4)),
InceptBlock2((fs,fs,fs), strides=(2,2), do_5x5=False, do_3x3=False),
stax.Conv(n_summaries, (1,1), strides=(1,1), padding="SAME"),
stax.Flatten
)
optimiser = optimizers.adam(step_size=1e-3)
# weights = model[0](rng, input_shape)[1]
# model[1](weights, np.zeros(input_shape, dtype=np.float32), rng=rng).shape
# state = optimiser[0](weights)
# model[1](optimiser[2](state), np.zeros(input_shape, dtype=np.float32), rng=rng)
```
# Random seeds for IMNN
```
rng, initial_model_key = jax.random.split(rng)
rng, fitting_key = jax.random.split(rng)
```
# Random seeds for ABC
```
rng, abc_key = jax.random.split(rng)
```
# 2D Gaussian Field Simulator in JAX
Steps to create an $(N \times N)$ 2D Gaussian field for the IMNN:
1. Generate an $(N\times N)$ white noise field $\varphi$ such that $\langle \varphi_k \varphi_{-k} \rangle' = 1$
2. Fourier transform $\varphi$ to Fourier space: $R_{\rm white}(\textbf{x}) \rightarrow R_{\rm white}(\textbf{k})$
- note: NumPy's DFT convention is:
$$\phi_{ab}^{\textbf{k}} = \sum_{c,d = 0}^{N-1} \exp{(-i x_c k_a - i x_d k_b)}\, \phi^{\textbf{x}}_{cd}$$
$$\phi_{ab}^{\textbf{x}} = \frac{1}{N^2}\sum_{c,d = 0}^{N-1} \exp{(i x_c k_a + i x_d k_b)}\, \phi^{\textbf{k}}_{cd}$$
3. Scale white noise $R_{\rm white}(\textbf{k})$ by the chosen power spectrum evaluated over a field of $k$ values:
$$ R_P(\textbf{k}) = P^{1/2}(k) R_{\rm white}(\textbf{k}) $$
- note: here we need to ensure that this array of amplitudes is Hermitian, e.g. $\phi^{* \textbf{k}}_{a(N/2 + b)} = \phi^{\textbf{k}}_{a(N/2 - b)}$. This is accomplished by choosing indexes $k_a = k_b = \frac{2\pi}{N} (0, \dots, N/2, -N/2+1, \dots, -1)$ and then evaluating the square root of the outer product of the meshgrid between the two: $k = \sqrt{k^2_a + k^2_b}$. We can then evaluate $P^{1/2}(k)$.
4. Fourier transform $R_{P}(\textbf{k})$ back to real space: $ R_P(\textbf{x}) = \int d^d \tilde{k}\, e^{i\textbf{k} \cdot \textbf{x}} R_P(\textbf{k}) $:
$$R_{ab}^{\textbf{x}} = \frac{1}{N^2}\sum_{c,d = 0}^{N-1} \exp{(i x_c k_a + i x_d k_b)}\, R^{\textbf{k}}_{cd}$$
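The recipe above can be sketched in plain NumPy (a simplified stand-alone version; the function name `gaussian_field` is mine, and Hermitian symmetry is enforced crudely by taking the real part at the end, whereas the simulator class below does this more carefully):

```python
import numpy as np

def gaussian_field(N, A=1.0, B=0.5, seed=0):
    """Generate an NxN Gaussian random field with power spectrum P(k) = A * k**-B."""
    rng = np.random.default_rng(seed)
    # step 1: white noise with unit power
    phi = rng.normal(size=(N, N))
    # step 2: to Fourier space
    phi_k = np.fft.fft2(phi)
    # step 3: scale amplitudes by sqrt(P(k)); zero the k=0 (mean) mode
    kax = 2 * np.pi / N * np.concatenate((np.arange(0, N // 2 + 1),
                                          np.arange(-N // 2 + 1, 0)))
    kx, ky = np.meshgrid(kax, kax)
    k = np.sqrt(kx**2 + ky**2)
    sqrt_P = np.zeros_like(k)
    sqrt_P[k > 0] = np.sqrt(A * k[k > 0]**-B)
    # step 4: back to real space; discard the (numerically tiny) imaginary part
    return np.real(np.fft.ifft2(sqrt_P * phi_k))

f = gaussian_field(32)
print(f.shape)  # (32, 32)
```

Because the $k=0$ mode is zeroed, the resulting field has (numerically) zero mean.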
```
# SET 32-BiT floats for model !
θ_fid = np.array([1.0, 0.5], dtype=np.float32)
δθ = np.array([0.1, 0.1], dtype=np.float32)
n_params = 2
N = 32
dim = 2
L = 32
field_shape = (N,N)
dx = L / N
fourier_b = 2*np.pi
input_shape = (1,1, N,N)
simulator_args = {"N": N, "L": L, "dim": dim, "shape": field_shape, 'vol_norm': False, "N_scale": True, "squeeze": False}
rng,fg_key = jax.random.split(rng)
foregrounds = jax.random.normal(fg_key, (1000, 1,) + simulator_args['shape'])*0
def default_P(k, A, B):
return A*k**-B
class powerBoxJax:
def __init__(self, shape, pk=None, k=None):
if pk is None:
self.pk = default_P
else:
self.pk = pk
if k is None:
self.k = np.sqrt(np.sum(np.array(np.meshgrid(*(
(np.hstack((np.arange(0, _shape//2 + 1),
np.arange(-_shape//2 + 1, 0))) * 2*np.pi / _shape)**2
for _shape in shape))), axis=0))
else:
self.k = k
self.shape = shape
self.N = shape[0]
def simulator(self, rng, θ, simulator_args=simulator_args, add_foregrounds=False):
def P(k, A=1, B=1):
return self.pk(k, A, B)
def fn(key, A, B):
shape = self.shape #simulator_args["shape"]
k = self.k
new_shape = ()
for _shape in shape:
if _shape % 2 == 0:
new_shape += (_shape+1,)
else:
new_shape += (_shape,)
key1,key2 = jax.random.split(key)
if add_foregrounds:
foreground = foregrounds[jax.random.randint(key2,
minval=0, maxval=1000, shape=())]
else:
foreground = 0.
# L is in length units, like Gpc
L = simulator_args['L']
dim = simulator_args['dim']
if np.isscalar(L):
L = [L]*int(dim)
else:
L = np.array(L)
V = np.prod(np.array(L))
scale = V**(1./dim)
Lk = ()
_N = 1
for i,_shape in enumerate(shape):
_N *= _shape
Lk += (_shape / L[i],) # 1 / dx
fft_norm = np.prod(np.array(Lk))
_dims = len(shape)
tpl = ()
for _d in range(_dims):
tpl += (_d,)
# POWERBOX IMPLEMENTATION
mag = jax.random.normal(key1, shape=tuple(N for N in new_shape))
# random phases
pha = 2 * np.pi * jax.random.uniform(key1, shape=tuple(N for N in new_shape))
# now make hermitian field (reality condition)
revidx = (slice(None, None, -1),) * len(mag.shape)
mag = (mag + mag[revidx]) / np.sqrt(2)
pha = (pha - pha[revidx]) / 2 + np.pi
dk = mag * (np.cos(pha) + 1j * np.sin(pha)) # output is complex
cutidx = (slice(None, -1),) * len(new_shape)
dk = dk[cutidx]
powers = np.concatenate((np.zeros(1),
np.sqrt(P(k.flatten()[1:], A=A, B=B)))).reshape(k.shape)
# normalize power by volume
if simulator_args['vol_norm']:
powers = powers/V
fourier_field = powers * dk
fourier_field = jax.ops.index_update(
fourier_field,
np.zeros(len(shape), dtype=int),
np.zeros((1,)))
field = np.real(np.fft.ifftn(fourier_field) * fft_norm * V)
if simulator_args["N_scale"]:
field *= scale
field = np.expand_dims(field + foreground, (0,))
if not simulator_args["squeeze"]:
field = np.expand_dims(field, (0,))
return np.array(field, dtype='float32')
shape = self.shape #simulator_args["shape"]
A, B = θ
if A.shape == B.shape:
if len(A.shape) == 0:
return fn(rng, A, B)
else:
keys = jax.random.split(rng, num=A.shape[0] + 1)
rng = keys[0]
keys = keys[1:]
return jax.vmap(
lambda key, A, B: self.simulator(key, (A, B), simulator_args=simulator_args)
)(keys, A, B)
else:
if len(A.shape) > 0:
keys = jax.random.split(rng, num=A.shape[0] + 1)
rng = keys[0]
keys = keys[1:]
return jax.vmap(
lambda key, A: self.simulator(key, (A, B), simulator_args=simulator_args)
)(keys, A)
elif len(B.shape) > 0:
keys = jax.random.split(rng, num=B.shape[0])
return jax.vmap(
lambda key, B: self.simulator(key, (A, B), simulator_args=simulator_args)
)(keys, B)
def AnalyticFisher(self,
θ,
kvec=None,
N=None
):
"""
Code for computing the Analytic Fisher for a Gaussian
Field with power spectrum P(k) = Ak^-B
"""
A,B = θ
if N is None:
N = self.N
# we want all UNIQUE fourier modes
if kvec is None:
kvec = self.k[:N//2, :N//2]
pk = lambda k : A*(k**-B) # P(k) = Ak^(-B)
p_a = lambda k : k**-B # deriv w.r.t. A
p_b = lambda k : -A*(k**-B)*np.log(k) # deriv w.r.t. B
powers = np.concatenate((np.ones(1),
(pk(kvec.flatten()[1:]))))
powera = np.concatenate((np.zeros(1),
(p_a(kvec.flatten()[1:]))))
powerb = np.concatenate((np.zeros(1),
(p_b(kvec.flatten()[1:]))))
Cinv = np.diag(1. / powers) # diagonal inv. covariance
Ca = np.diag(powera / 1.) # C_{,A}
Cb = np.diag(powerb / 1.) # C_{,B}
Faa = 0.5 * np.trace((Ca @ Cinv @ Ca @ Cinv))
Fab = 0.5 * np.trace((Ca @ Cinv @ Cb @ Cinv))
Fba = 0.5 * np.trace((Cb @ Cinv @ Ca @ Cinv))
Fbb = 0.5 * np.trace((Cb @ Cinv @ Cb @ Cinv))
return np.array([[Faa, Fab], [Fba, Fbb]])
class analyticFieldLikelihood:
def __init__(self,
PBJ,
field_shape,
Δ,
prior,
k=None,
pk=None,
gridsize=20,
tiling=2):
"""code for computing a gaussian field's likelihood for power spectrum parameters
PBJ : powerBox simulator object
field_shape : list. shape of field input
Δ : array-like. FFT of the real-space field
prior : array-like. range over which to compute the likelihood
k : array-like. fourier modes over which to compute P(k)
tiling : list or int. tiling=2 means likelihood will be computed as 2x2 grid
gridsize : how large to make the likelihood surface
"""
if k is None:
self.k = PBJ.k
if pk is None:
self.pk = PBJ.pk
self.field_shape = field_shape
self.gridsize = gridsize
if np.isscalar(tiling):
self.tiling = [tiling]*2
else:
self.tiling = tiling
#self.tilesize = gridsize // tiling
self.N = np.sqrt(np.prod(np.array(field_shape))) # should just be N for NxN grid
self.prior = prior
if k is not None:
    self.k = k
self.Δ = Δ
def Pk(self, k, A=1, B=0.5):
return self.pk(k, A, B)
def log_likelihood(self, k, A, B, Δ):
Δ = Δ.flatten()
k = k
dlength = len(k.flatten())
def fn(_A, _B):
nrm = np.pad(np.ones(dlength-2)*2, (1,1), constant_values=1.)
nrm = jax.ops.index_update(
nrm, np.array([[0],[(dlength-2)]]), np.array([[1],[1]]))
#nrm = 1
powers = np.concatenate((np.ones(1),
(self.Pk(k.flatten()[1:], A=_A, B=_B))))
# covariance is P(k)
C = powers * nrm
invC = np.concatenate((np.zeros(1),
(1./self.Pk(k.flatten()[1:], A=_A, B=_B))))
logdetC = np.sum(np.log(C))
pi2 = np.pi * 2.
m_half_size = -0.5 * len(Δ)
exponent = - 0.5 * np.sum(np.conj(Δ) * invC * Δ)
norm = -0.5 * logdetC + m_half_size*np.log(pi2)
return (exponent + norm)
return jax.vmap(fn)(A, B)
def get_likelihood(self, return_grid=False, shift=None):
A_start = self.prior[0][0]
A_end = self.prior[1][0]
B_start = self.prior[0][1]
B_end = self.prior[1][1]
region_size = [self.gridsize // self.tiling[i] for i in range(len(self.tiling))]
print("computing likelihood on a %dx%d grid \n \
in tiles of size %dx%d"%(self.gridsize, self.gridsize, region_size[0], region_size[1]))
def get_like_region(A0, A1, B0, B1, qsize):
A_range = np.linspace(A0, A1, qsize)
B_range = np.linspace(B0, B1, qsize)
A, B = np.meshgrid(A_range, B_range)
return (self.log_likelihood(self.k,
A.ravel(), B.ravel(), self.Δ).reshape(qsize,qsize))
A_incr = (A_end - A_start) / self.tiling[0]
B_incr = (B_end - B_start) / self.tiling[1]
# marks the ends of linspace
A_starts = [A_start + (i)*A_incr for i in range(self.tiling[0])]
A_ends = [A_start + (i+1)*A_incr for i in range(self.tiling[0])]
B_starts = [B_start + (i)*B_incr for i in range(self.tiling[1])]
B_ends = [B_start + (i+1)*B_incr for i in range(self.tiling[1])]
_like_cols = []
for _col in range(self.tiling[0]):
# slide horizontally in A
_like_row = []
for _row in range(self.tiling[1]):
# slide vertically in B
_like = get_like_region(A_starts[_row],
A_ends[_row],
B_starts[_col],
B_ends[_col],
region_size[0],
)
_like_row.append(_like)
_like_cols.append(np.concatenate(_like_row, axis=1))
_log_likelihood = np.real(np.concatenate(_like_cols, axis=0))
if shift is None:
shift = np.max(_log_likelihood)
print('shift', shift)
print('loglike mean', np.mean(_log_likelihood))
_log_likelihood = _log_likelihood - shift
if return_grid:
_A_range = np.linspace(self.prior[0,0], self.prior[1,0], self.gridsize)
_B_range = np.linspace(self.prior[0,1], self.prior[1,1], self.gridsize)
return np.exp(_log_likelihood), _A_range, _B_range
return np.exp(_log_likelihood)
def plot_contours(self, ax=None,
θ_ref=None, shift=None,
xlabel='A', ylabel='B',
return_like=True):
_like, _A, _B = self.get_likelihood(return_grid=True, shift=shift)
_A, _B = np.meshgrid(_A, _B)
if ax is None:
fig,ax = plt.subplots(figsize=(10,10))
mesh = ax.contourf(_A, _B, _like)
plt.colorbar(mesh, ax=ax)
if θ_ref is not None:
ax.scatter(θ_ref[0], θ_ref[1], zorder=10, marker='+', s=100, color='r')
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if return_like:
return _like, ax
else:
return ax
def plot_corner(self, ax=None, label="Analytic likelihood"):
_like, _A_range, _B_range = self.get_likelihood(return_grid=True)
likelihoodA = _like.sum(0)
likelihoodA /= likelihoodA.sum() * (_A_range[1] - _A_range[0])
likelihoodB = _like.sum(1)
likelihoodB /= likelihoodB.sum() * (_B_range[1] - _B_range[0])
sorted_marginal = np.sort(_like.flatten())[::-1]
cdf = np.cumsum(sorted_marginal / sorted_marginal.sum())
value = []
for level in [0.95, 0.68]:
this_value = sorted_marginal[np.argmin(np.abs(cdf - level))]
if len(value) == 0:
value.append(this_value)
elif this_value <= value[-1]:
break
else:
value.append(this_value)
# add in the likelihood estimate
ax[0, 0].plot(_A_range, likelihoodA, color="C2", label=label)
ax[0, 1].axis("off")
ax[1, 0].contour(_A_range, _B_range, _like, levels=value, colors="C2")
ax[1, 1].plot(likelihoodB, _B_range, color="C2", label=label)
return ax
PBJ = powerBoxJax(simulator_args['shape'])
simulator = PBJ.simulator
```
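The contour levels used in `plot_corner` come from a standard trick: sort the gridded likelihood in descending order and find the height at which the cumulative mass crosses each credible level. A standalone NumPy sketch on a toy Gaussian grid (the helper name `contour_levels` is mine):

```python
import numpy as np

def contour_levels(like, masses=(0.95, 0.68)):
    """Likelihood heights enclosing the given probability masses."""
    s = np.sort(like.ravel())[::-1]      # descending heights
    cdf = np.cumsum(s) / s.sum()         # enclosed mass as heights drop
    return [s[np.argmin(np.abs(cdf - m))] for m in masses]

# toy 2D Gaussian likelihood on a grid
x = np.linspace(-3, 3, 101)
X, Y = np.meshgrid(x, x)
like = np.exp(-0.5 * (X**2 + Y**2))

lv95, lv68 = contour_levels(like)
print(lv95 < lv68)  # the 95% contour sits at a lower height than the 68% one
```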
## sim and gradient
```
def simulator_gradient(rng, θ, simulator_args=simulator_args):
return value_and_jacrev(simulator, argnums=1, allow_int=True, holomorphic=True)(rng, θ, simulator_args=simulator_args)
rng, key = jax.random.split(rng)
field_shape
# plot example simulation and derivative
deriv_args = {"N": N, "L": 32, "dim": dim, "shape": field_shape, "vol_norm": True, "N_scale": True, "squeeze": False}
simulation, simulation_gradient = value_and_jacfwd(simulator, argnums=1)(rng, θ_fid, simulator_args=deriv_args)
plt.imshow(np.squeeze(simulation[0]), extent=(0,1,0,1))
plt.colorbar()
plt.title('example simulation')
plt.show()
plt.imshow(np.squeeze(simulation_gradient[0].T[0].T), extent=(0,1,0,1))
plt.title('gradient of simulation')
plt.colorbar()
plt.show()
def get_simulations(rng, n_s, θ, simulator_args=simulator_args):
def get_simulator(key):
return simulator(key, θ, simulator_args=simulator_args)
keys = jax.random.split(rng, num=n_s)
return jax.vmap(get_simulator)(np.array(keys))
def get_simulation_gradients(rng, n_s, n_d, θ, simulator_args=simulator_args):
def get_batch_gradient(key):
return simulator_gradient(key, θ, simulator_args=simulator_args)
keys = jax.random.split(rng, num=n_s)
return jax.vmap(get_batch_gradient)(np.array(keys)[:n_d])
```
# known analytic Fisher information
For a gaussian field, the likelihood is written
$$ \mathcal{L}(\Delta | \theta) = \frac{1}{(2\pi)^{N_p / 2} \det(C)^{1/2}}\exp{\left(-\frac{1}{2} \Delta^\dagger C^{-1} \Delta \right)}$$
where $\Delta \in \mathbb{C}^{N_p},\ N_p=N_k=V=N\times N$ is the Fourier transform of the observed real-space field.
This yields a Fisher information matrix of
$$F_{\alpha \beta} = \langle -\frac{\partial^2 \ln \mathcal{L}}{\partial \lambda_\alpha \partial \lambda_\beta} \rangle= \frac{1}{2} {\rm Tr} (C_{, \alpha} C^{-1} C_{, \beta} C^{-1}) $$
where the covariance is
$$ C_{k_i, k_j} = P(k_i)\delta_{ij}$$
The associated derivatives for a power law $P(k) = Ak^{-B}$ are
$$\begin{align}
C_{,A} &= \left( k^{-B} \right)\delta_{ij} \\
C_{,B} &= \left( -Ak^{-B}\ln k \right) \delta_{ij}
\end{align} $$
We notice that the Fisher information is *only* a function of the power spectrum parameters. It tells us the curvature of the chosen model (likelihood function) at a given $\theta$. The analytic Fisher information is the maximum amount of information we can expect the IMNN to extract from our simulations.
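With a diagonal covariance $C = P(k)\delta_{ij}$ the trace collapses to a sum over modes, $F_{\alpha\beta} = \frac{1}{2}\sum_k P_{,\alpha}(k)\,P_{,\beta}(k)/P(k)^2$. A small NumPy check of this reduction against the full matrix-trace expression (mirroring the structure of `AnalyticFisher` above, on a toy set of modes):

```python
import numpy as np

A, B = 1.0, 0.5
k = np.linspace(0.1, 3.0, 50)        # toy set of Fourier modes

P   = A * k**-B                      # P(k) = A k^-B
P_A = k**-B                          # dP/dA
P_B = -A * k**-B * np.log(k)         # dP/dB

# full expression: F_ab = 0.5 Tr(C_,a C^-1 C_,b C^-1) with diagonal C
Cinv = np.diag(1.0 / P)
Ca, Cb = np.diag(P_A), np.diag(P_B)
F_trace = 0.5 * np.array([[np.trace(Ca @ Cinv @ Ca @ Cinv),
                           np.trace(Ca @ Cinv @ Cb @ Cinv)],
                          [np.trace(Cb @ Cinv @ Ca @ Cinv),
                           np.trace(Cb @ Cinv @ Cb @ Cinv)]])

# reduced expression: plain sum over modes
F_sum = 0.5 * np.array([[np.sum(P_A * P_A / P**2), np.sum(P_A * P_B / P**2)],
                        [np.sum(P_B * P_A / P**2), np.sum(P_B * P_B / P**2)]])

print(np.allclose(F_trace, F_sum))  # True
```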
<!-- Alternatively, we can explore a volume integral analytically from the definition of C :
where the Fisher matrix is given by
$$ F_{\alpha \beta} = \sum_k \frac{1}{(\delta P_k)^2} \frac{\partial P_k}{\partial \lambda_\alpha} \frac{\partial P_k}{\partial \lambda_\beta}$$
and the error on $P_k$ is given (for a square, 2D box) as
$$ \delta P_k = \sqrt{\frac{2}{k (\Delta k) V} } \left( P_k + \frac{1}{\bar{n}} \right) $$ -->
<!-- For a gaussian field, the likelihood is written
$$ \ln \mathcal{L}(\theta | \vec{d}) = \ln \mathcal{L}(\theta | \Delta) = \sqrt{\frac{1}{2\pi C}} \exp{\frac{-{\Delta}^2}{2C}}$$
where $\vec{d} = \Delta$ is the overdensity field (in a cosmological context this is the measured temperature or galaxy count in a sky survey). Given that the power spectrum describes the correlations at different scales $k$, we can define the correlation via the power spectrum $C = P(k) = Ak^{-B}$ to compute the log-likelihood. The Fisher information matrix, given as
$$ F_{\alpha \beta} = \langle - \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_\alpha \partial \theta_\beta} \rangle $$
can then be computed analytically for our likelihood:
$$ F_{\alpha \beta} = \sum_k \frac{1}{(\delta P_k)^2} \frac{\partial P_k}{\partial \theta_\alpha} \frac{\partial P_k}{\partial \theta_\beta} $$
where $\delta P_k = \sqrt{\frac{2}{4\pi \Delta k V k_{\rm tot}^2}} (P_k + \frac{1}{\bar{n}})$ is the error on $P_k$ with survey volume $V$, sampling interval $\Delta k$, and shot noise $1/\bar{n}$. Using the fact that $d\ln P_k = \frac{d P_k}{P_k}$, we can rewrite the sum as an integral:
$$ F_{\alpha \beta} = 2 \pi \left( \frac{V_{\rm eff}}{\lambda^3} \right) \int_{k_{\rm min}}^{k_{\rm max}} d \ln k \frac{\partial \ln P_k}{\partial \theta_\alpha} \frac{\partial \ln P_k}{\partial \theta_\beta}$$
Where $V_{\rm eff}$ and $\lambda^3$ are our effective windowed survey size and survey extent, respectively (set to 1 for now). Doing the integration explicitly, we obtain the Fisher matrix for parameters $\theta = (A, B)$:
$$ F = 2 \pi \left( \frac{V_{\rm eff}}{\lambda^3} \right) \begin{bmatrix}
\frac{1}{A^2} \ln (\frac{k_{\rm max}}{k_{\rm min}}) & \frac{1}{2A} ((\ln k_{\rm min})^2 - (\ln k_{\rm max})^2) \\
\frac{1}{2A} ((\ln k_{\rm min})^2 - (\ln k_{\rm max})^2) & \frac{1}{3}((\ln k_{\rm max})^3 - (\ln k_{\rm min})^3) \\
\end{bmatrix}$$
-->
For our fiducial model with a data vector of size $N\times N$, our $\rm det|F|$ reads:
```
N = simulator_args["N"]
shape = simulator_args["shape"]
kbin = np.sqrt(np.sum(np.array(np.meshgrid(*(
np.hstack((np.arange(0, _shape//2 + 1),
np.arange(-_shape//2 + 1, 0))) *2* np.pi / _shape
for _shape in shape)))**2, axis=0))
f_expected = PBJ.AnalyticFisher(θ_fid, kvec=None)
print("analytic F(θ_fid): ", f_expected)
detf_expected = np.linalg.det(f_expected)
print("analytic det(F(θ_fid)): ", detf_expected)
# MAKE SIMULATION
N = simulator_args["N"]
shape = (N,N)
θ_sim = np.array([0.7, 0.8])
simulator_args = {"N": N, "L": 32, "dim": dim, "shape": shape, "vol_norm": True, "N_scale": False, "squeeze": True}
simulator_args["shape"] = (N,N)
simkey,rng = jax.random.split(rng)
#sim = np.squeeze(target_data)#
sim = np.squeeze(simulator(simkey, θ_sim, simulator_args=simulator_args))
sim_fft = (np.fft.fft2(sim)) #/ (N**2)
gridsize = 100 # for likelihood gridding
Δ = sim_fft[:N//2, :N//2]
k = kbin[:N//2, :N//2]
prior_range = np.array([[0.1, 0.1], [1.25, 1.25]])
AL = analyticFieldLikelihood(PBJ, shape, Δ, prior_range, k=k, gridsize=gridsize, tiling=[5,5])
%%time
#plt.style.use('default')
likelihood,_ = AL.plot_contours(θ_ref=θ_sim, shift=None, xlabel=r'$A$', ylabel=r'$B$', return_like=True)
```
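The tiling in `get_likelihood` just splits the `gridsize` × `gridsize` evaluation into blocks and reassembles them, which keeps each vmapped batch small. A toy NumPy sketch of the same split-and-reassemble pattern (`eval_tiled` and the stand-in function `f` are mine, not part of the class):

```python
import numpy as np

def eval_tiled(f, x0, x1, y0, y1, gridsize, tiles):
    """Evaluate f on a gridsize x gridsize grid, in a tiles x tiles block layout."""
    tile = gridsize // tiles
    dx, dy = (x1 - x0) / tiles, (y1 - y0) / tiles
    rows = []
    for j in range(tiles):                      # slide vertically
        cols = []
        for i in range(tiles):                  # slide horizontally
            xs = np.linspace(x0 + i * dx, x0 + (i + 1) * dx, tile)
            ys = np.linspace(y0 + j * dy, y0 + (j + 1) * dy, tile)
            X, Y = np.meshgrid(xs, ys)
            cols.append(f(X, Y))                # evaluate one block
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)         # stitch blocks back together

f = lambda X, Y: X + 10 * Y
Z = eval_tiled(f, 0.0, 1.0, 0.0, 1.0, 20, tiles=2)
print(Z.shape)  # (20, 20)
```

Note that, like the original, this duplicates the shared edge value between adjacent tiles rather than producing one seamless `linspace`; for a smooth likelihood surface the seam is negligible.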
# Initialise IMNN
```
from IMNN.experimental.jax.imnn._imnn import _IMNN
from IMNN.experimental.jax.utils import check_simulator, value_and_jacrev
class SimIMNN(_IMNN):
def __init__(self, n_s, n_d, n_params, n_summaries, input_shape, θ_fid,
model, optimiser, key_or_state, simulator, verbose=True):
super().__init__(
n_s=n_s,
n_d=n_d,
n_params=n_params,
n_summaries=n_summaries,
input_shape=input_shape,
θ_fid=θ_fid,
model=model,
key_or_state=key_or_state,
optimiser=optimiser,
verbose=verbose)
self.simulator = check_simulator(simulator)
self.simulate = True
def get_fitting_keys(self, rng):
return jax.random.split(rng, num=3)
def get_summaries(self, w, key, validate=False):
def get_summary(key, θ):
return self.model(w, self.simulator(key, θ))
def get_derivatives(key):
return value_and_jacrev(get_summary, argnums=1)(key, self.θ_fid)
keys = np.array(jax.random.split(key, num=self.n_s))
summaries, derivatives = jax.vmap(get_derivatives)(keys[:self.n_d])
if self.n_s > self.n_d:
summaries = np.vstack([
summaries,
jax.vmap(partial(get_summary, θ=self.θ_fid))(keys[self.n_d:])])
return np.squeeze(summaries), np.squeeze(derivatives)
import jax
import jax.numpy as np
from IMNN.experimental.jax.imnn import SimulatorIMNN
from IMNN.experimental.jax.utils import value_and_jacrev, check_devices, \
check_type, check_splitting
class AggregatedSimulatorIMNN(SimulatorIMNN):
def __init__(self, n_s, n_d, n_params, n_summaries, input_shape, θ_fid,
model, optimiser, key_or_state, simulator, devices,
n_per_device, verbose=True):
super().__init__(
n_s=n_s,
n_d=n_d,
n_params=n_params,
n_summaries=n_summaries,
input_shape=input_shape,
θ_fid=θ_fid,
model=model,
key_or_state=key_or_state,
optimiser=optimiser,
simulator=simulator,
verbose=verbose)
self.devices = check_devices(devices)
self.n_devices = len(self.devices)
self.n_per_device = check_type(n_per_device, int, "n_per_device")
if self.n_s == self.n_d:
check_splitting(self.n_s, "n_s and n_d", self.n_devices,
self.n_per_device)
else:
check_splitting(self.n_s, "n_s", self.n_devices, self.n_per_device)
check_splitting(self.n_d, "n_d", self.n_devices, self.n_per_device)
def get_summaries(self, w, key=None, validate=False):
def derivative_scan(counter, rng):
def get_device_summaries(rng):
def get_summary(key, θ):
return self.model(w, self.simulator(key, θ))
def get_derivatives(rng):
return value_and_jacrev(get_summary, argnums=1)(
rng, self.θ_fid)
keys = np.array(jax.random.split(rng, num=self.n_per_device))
return jax.vmap(get_derivatives)(keys)
keys = np.array(jax.random.split(rng, num=self.n_devices))
summaries, derivatives = jax.pmap(
get_device_summaries, devices=self.devices)(keys)
return counter, (summaries, derivatives)
def summary_scan(counter, rng):
def get_device_summaries(rng):
def get_summary(key):
return self.model(w, self.simulator(key, self.θ_fid))
keys = np.array(jax.random.split(rng, num=self.n_per_device))
return jax.vmap(get_summary)(keys)
keys = np.array(jax.random.split(rng, num=self.n_devices))
summaries = jax.pmap(
get_device_summaries, devices=self.devices)(keys)
return counter, summaries
n = self.n_d // (self.n_devices * self.n_per_device)
if self.n_s > self.n_d:
n_r = (self.n_s - self.n_d) // (self.n_devices * self.n_per_device)
key, *keys = jax.random.split(key, num=n_r + 1)
counter, remaining_summaries = jax.lax.scan(
summary_scan, n_r, np.array(keys))
keys = np.array(jax.random.split(key, num=n))
counter, results = jax.lax.scan(
derivative_scan, 0, keys)
summaries, derivatives = results
if self.n_s > self.n_d:
summaries = np.vstack([summaries, remaining_summaries])
return (summaries.reshape((-1, self.n_summaries)),
derivatives.reshape((-1, self.n_summaries, self.n_params)))
simulator_args["squeeze"] = False
simulator_args['vol_norm'] = True
simulator_args['N_scale'] = True # false
simulator_args['L'] = 32.0
simulator_args
IMNN = SimIMNN(
n_s=10000,
n_d=10000,
n_params=n_params,
n_summaries=n_summaries,
input_shape=input_shape,
θ_fid=θ_fid,
model=model,
optimiser=optimiser,
key_or_state=initial_model_key,
simulator=lambda rng, θ: simulator(rng, θ, simulator_args=simulator_args),
# devices=[jax.devices()[0]],
# n_per_device=1000
)
```
# Fit
```
# new_optimiser = jax.experimental.optimizers.sgd(1e-5)
# weights = np.load('./model/best_w.npy', allow_pickle=True)
# IMNN.opt_initialiser, IMNN.update, IMNN.get_parameters = optimiser
# IMNN.state = IMNN.opt_initialiser(list(weights))
# SAVING IMNN ATTRIBUTES
import cloudpickle as pickle
import os
def save_weights(IMNN, folder_name='./model', weights='final'):
# create output directory
if not os.path.exists(folder_name):
os.mkdir(folder_name)
def pckl_me(obj, path):
with open(path, 'wb') as file_pi:
pickle.dump(obj, file_pi)
file_pi.close()
# save IMNN (optimiser) state:
savestate = jax.experimental.optimizers.unpack_optimizer_state(IMNN.state)
pckl_me(savestate, os.path.join(folder_name, 'IMNN_state'))
# save weights
if weights == 'final':
np.save(os.path.join(folder_name, 'final_w'), IMNN.final_w)
else:
np.save(os.path.join(folder_name, 'best_w'), IMNN.best_w)
# save initial weights
np.save(os.path.join(folder_name, 'initial_w'), IMNN.initial_w)
# save training history
pckl_me(IMNN.history, os.path.join(folder_name, 'history'))
# save important attributes as a dict
imnn_attributes = {
'n_s': IMNN.n_s,
'n_d': IMNN.n_d,
'input_shape': IMNN.input_shape,
'n_params' : IMNN.n_params,
'n_summaries': IMNN.n_summaries,
'θ_fid': IMNN.θ_fid,
'F': IMNN.F,
'validate': IMNN.validate,
'simulate': IMNN.simulate,
}
pckl_me(imnn_attributes, os.path.join(folder_name, 'IMNN_attributes'))
print('saved weights and attributes to the file ', folder_name)
def load_weights(IMNN, folder_name='./model', weights='final', load_attributes=True):
def unpckl_me(path):
file = open(path, 'rb')
return pickle.load(file)
# load and assign weights
if weights=='final':
weights = np.load(os.path.join(folder_name, 'final_w.npy'), allow_pickle=True)
IMNN.final_w = weights
else:
weights = np.load(os.path.join(folder_name, 'best_w.npy'), allow_pickle=True)
IMNN.best_w = weights
# re-pack and load the optimiser state
loadstate = unpckl_me(os.path.join(folder_name, 'IMNN_state'))
IMNN.state = jax.experimental.optimizers.pack_optimizer_state(loadstate)
# load history
IMNN.history = unpckl_me(os.path.join(folder_name, 'history'))
# load important attributes
if load_attributes:
IMNN.initial_w = np.load(os.path.join(folder_name, 'initial_w.npy'), allow_pickle=True)
attributes = unpckl_me(os.path.join(folder_name, 'IMNN_attributes'))
IMNN.θ_fid = attributes['θ_fid']
IMNN.n_s = attributes['n_s']
IMNN.n_d = attributes['n_d']
IMNN.input_shape = attributes['input_shape']
print('loaded IMNN with these attributes: ', attributes)
# # test save functions
# save_weights(IMNN, folder_name='./model')
# # test load functions
# # initialize a new imnn with different attributes and then load the old file
# # to overwrite them
# my_new_IMNN = SimIMNN(
# n_s=300,
# n_d=100,
# n_params=n_params,
# n_summaries=n_summaries,
# input_shape=input_shape,
# θ_fid=np.array([1.0,1.0]),
# key=initial_model_key,
# model=model,
# optimiser=optimiser,
# simulator=lambda rng, θ: simulator(rng, θ, simulator_args=simulator_args),
# )
# load_weights(my_new_IMNN, folder_name='./model', load_attributes=True)
# my_new_IMNN.set_F_statistics(rng, my_new_IMNN.best_w, my_new_IMNN.θ_fid, my_new_IMNN.n_s, my_new_IMNN.n_d, validate=True)
IMNN_rngs = 1 * [fitting_key] #+ 12 * [None]
labels = [
"Simulator, InceptNet\n"
]
θ_fid
%%time
for i in range(1):
rng,fit_rng = jax.random.split(rng)
IMNN.fit(λ=10., ϵ=ϵ, rng=fit_rng, min_iterations=500) #for IMNN, IMNN_rng in zip(IMNNs, IMNN_rngs);
#save_weights(IMNN, folder_name='./big_incept128')
IMNNs = [IMNN]
for i, (IMNN, label) in enumerate(zip(IMNNs, labels)):
if i == 0:
ax = IMNN.training_plot(expected_detF=detf_expected, colour="C{}".format(i), label=label)
elif i == 10:
other_ax = IMNN.training_plot(
expected_detF=detf_expected, colour="C{}".format(i), label=label
)
elif i == 11:
IMNN.training_plot(
ax=other_ax,
expected_detF=50, colour="C{}".format(i), label=label
)
other_ax[0].set_yscale("log")
other_ax[2].set_yscale("log")
else:
IMNN.training_plot(
ax=ax, expected_detF=None, colour="C{}".format(i), label=label, ncol=5
);
ax[0].set_yscale("log")
latexify(fig_width=3.37)
plt.plot(IMNN.history['detF'][:])
plt.plot(np.ones(len(IMNN.history['detF'][:]))*detf_expected, c='k', linestyle='--')
plt.ylim(1e-2, 1e7)
plt.ylabel(r'$\det \textbf{F}$')
plt.xlabel('number of epochs')
plt.yscale('log')
plt.tight_layout()
#plt.savefig('/mnt/home/tmakinen/repositories/field-plots/128x128-training.png', dpi=400)
np.linalg.det(IMNNs[0].F) #/ (detf_expected)
IMNNs[0].F
print('IMNN F:', IMNN.F)
print('IMNN det F:', np.linalg.det(IMNN.F))
print('IMNN F / analytic det F: ', (np.linalg.det(IMNN.F)) / detf_expected)
```
# Data for ABC example
```
class uniform:
def __init__(self, low, high):
self.low = np.array(low)
self.high = np.array(high)
self.event_shape = [[] for i in range(self.low.shape[0])]
def sample(self, n=None, seed=None):
if n is None:
n = 1
keys = np.array(jax.random.split(
seed,
num=len(self.event_shape)))
return jax.vmap(
lambda key, low, high : jax.random.uniform(
key,
shape=(n,),
minval=low,
maxval=high))(
keys, self.low, self.high)
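# Hedged numpy analogue of the jax `uniform` prior above (names are
# illustrative; _np is aliased to avoid shadowing jax.numpy): one independent
# U(low_i, high_i) stream per parameter, returned with shape (n_params, n)
# like the vmap over keys above.
import numpy as _np
def _uniform_sample(low, high, n, seed=0):
    rng = _np.random.default_rng(seed)
    return rng.uniform(low[:, None], high[:, None], size=(low.shape[0], n))
_draws = _uniform_sample(_np.array([0.1, 0.1]), _np.array([1.25, 1.25]), n=100)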
prior = uniform([0.1, 0.1], [1.25, 1.25])
simulator_args
simulator_args = {"N": N, "L": 32, "dim": dim, "shape": shape, "N_scale": True, "vol_norm": True, "squeeze": True}
rng, key = jax.random.split(rng)
θ_target = np.array([0.8, 0.8])
target_data = simulator(
key,
θ_target,
simulator_args={**simulator_args, **{'squeeze':False}})
%matplotlib inline
plt.imshow(np.squeeze(target_data))
plt.colorbar()
```
# analytic likelihood calculation
```
gridsize = 100 # for likelihood gridding
Δ = np.fft.fftn(np.squeeze(target_data))[:N//2, :N//2] / N
k = kbin[:N//2, :N//2]
prior_range = np.array([[0.1, 0.1], [1.25, 1.25]])
AL = analyticFieldLikelihood(PBJ, shape, Δ, prior_range, k=k, gridsize=gridsize, tiling=[5,5])
%%time
%matplotlib inline
#plt.style.use('default')
likelihood,_ = AL.plot_contours(θ_ref=θ_target, shift=None, xlabel=r'$A$', ylabel=r'$B$', return_like=True)
```
# Gaussian approximation
```
IMNN.set_F_statistics(IMNN.best_w, key=rng, validate=True);
@jit #partial(jax.jit, static_argnums=0)
def get_estimate(d):
if len(d.shape) == 1:
return IMNN.θ_fid + np.einsum(
"ij,kj,kl,l->i",
IMNN.invF,
IMNN.dμ_dθ,
IMNN.invC,
IMNN.model(IMNN.best_w, d, rng=rng) - IMNN.μ)
else:
return IMNN.θ_fid + np.einsum(
"ij,kj,kl,ml->mi",
IMNN.invF,
IMNN.dμ_dθ,
IMNN.invC,
IMNN.model(IMNN.best_w, d, rng=rng) - IMNN.μ)
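# Hedged numpy sketch of the quasi-maximum-likelihood estimator implemented by
# the einsum above: theta_hat = theta_fid + F^{-1} (dmu/dtheta)^T C^{-1} (t - mu).
import numpy as _np
def _score_estimate(theta_fid, invF, dmu_dtheta, invC, summary, mu):
    return theta_fid + _np.einsum(
        "ij,kj,kl,l->i", invF, dmu_dtheta, invC, summary - mu)
# toy check: with dmu/dtheta = C = F = identity this is theta_fid + (t - mu)
_theta_hat = _score_estimate(_np.array([1., 1.]), _np.eye(2), _np.eye(2),
                             _np.eye(2), _np.array([1.3, 0.7]), _np.zeros(2))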
estimates = get_estimate(target_data) #[i.get_estimate(target_data) for i in IMNNs];
detf_expected
GAs = [GaussianApproximation(get_estimate(target_data), IMNN.invF, prior)]
#GaussianApproximation(get_estimate(target_data), np.linalg.inv(f_expected), prior)]
%matplotlib inline
for i, (GA, label) in enumerate(zip(GAs, labels)):
if i == 0:
ax = GA.marginal_plot(
axis_labels=[r"$A$", r"$B$"], label='on-the-fly IMNN', colours="C{}".format(i)
)
else:
GA.marginal_plot(ax=ax, label=label, colours="C{}".format(i), ncol=8)
fig, ax = plt.subplots(2, 6, figsize=(30, 8))
for i, (GA, label) in enumerate(zip(GAs, labels)):
if i == 0:
ax1 = GA.marginal_plot(
ax=ax[:, :2],
axis_labels=[r"$A$", r"$B$"],
label=label,
target=0,
format=True,
colours="C{}".format(i),
)
ax2 = GA.marginal_plot(
ax=ax[:, 2:4],
axis_labels=[r"$A$", r"$B$"],
target=1,
format=True,
colours="C{}".format(i),
)
ax3 = GA.marginal_plot(
ax=ax[:, 4:],
axis_labels=[r"$A$", r"$B$"],
target=2,
format=True,
colours="C{}".format(i),
)
else:
GA.marginal_plot(ax=ax1, label=label, target=0, colours="C{}".format(i), bbox_to_anchor=(-0.05, 1.0))
GA.marginal_plot(ax=ax2, target=1, colours="C{}".format(i))
GA.marginal_plot(ax=ax3, target=2, colours="C{}".format(i))
```
# ABC
```
{**simulator_args, **{'squeeze':True}}
ABC = ApproximateBayesianComputation(
target_data, prior,
lambda rng, θ : simulator(rng, θ, simulator_args={**simulator_args, **{'squeeze':True}}),
get_estimate, F=IMNN.F, gridsize=50
)
ABCs = [
ApproximateBayesianComputation(
target_data, prior, simulator, IMNN.get_estimate, F=IMNN.F, gridsize=50
)
for IMNN, estimate in zip(IMNNs, estimates)
]
%%time
rng,abc_key = jax.random.split(rng)
ABC(rng=abc_key,
n_samples=int(1e3),
min_accepted=15000,
max_iterations=20000,
ϵ=0.05,
smoothing=0.);
ABC.parameters.accepted[0].shape
#ax = ABC.scatter_plot(points=ABC.parameters.rejected, colours='red')
ax = ABC.scatter_plot()
#np.save("accepted.npy", ABC.parameters.accepted)
#ax = ABC.scatter_summaries(points=ABC.summaries.rejected, colours='red')
ABC.scatter_summaries( colours='blue')
likelihood,A_range,B_range = AL.get_likelihood(return_grid=True)
#A_range = np.linspace(0.1, 3.0, 25)
#B_range = np.linspace(0.1, 2.5, 25)
likelihoodA = np.real(likelihood).sum(0)
likelihoodA /= likelihoodA.sum() * (A_range[1] - A_range[0])
likelihoodB = np.real(likelihood).sum(1)
likelihoodB /= likelihoodB.sum() * (B_range[1] - B_range[0])
sorted_marginal = np.sort(np.real(likelihood).flatten())[::-1]
cdf = np.cumsum(sorted_marginal / sorted_marginal.sum())
value = []
for level in [0.95, 0.68]:
this_value = sorted_marginal[np.argmin(np.abs(cdf - level))]
if len(value) == 0:
value.append(this_value)
elif this_value <= value[-1]:
break
else:
value.append(this_value)
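# Hedged numpy sketch of the contour-level search above: sort the gridded
# likelihood in descending order, accumulate its normalised CDF, and read off
# the likelihood values enclosing the requested mass fractions.
import numpy as _np
def _hdi_levels(like, fractions=(0.95, 0.68)):
    s = _np.sort(_np.real(like).flatten())[::-1]
    cdf = _np.cumsum(s / s.sum())
    return [s[_np.argmin(_np.abs(cdf - f))] for f in fractions]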
#fig, ax = plt.subplots(2, 2, figsize=(10, 10))
%matplotlib inline
#plt.style.use('default')
new_colors = [ '#2c0342', '#286d87', '#4fb49d', '#9af486']
fig,ax = plt.subplots(nrows=2, ncols=2, figsize=(3.37*2, 3.37*2))
latexify(fig_width=3.37, fig_height=3.37)
ABC.scatter_plot(ax=ax,
colours=new_colors[0],
axis_labels=[r"$A$", r"$B$"],
s=8,
label='ABC estimate')
# ABC.marginal_plot(ax=ax,
# axis_labels=[r"$A$", r"$B$"], colours='green',
# label='ABC marginal plot')
GAs[0].marginal_plot(ax=ax, colours=new_colors[2], axis_labels=[r"$A$", r"$B$"], label=None, ncol=1)
ax[0,1].imshow(target_data[0, 0])
#ax[0,1].set_title(r'$\theta_{\rm target} = A,B = (%.2f,%.2f)$'%(θ_target[0], θ_target[1]))
ax[0,0].axvline(θ_target[0], linestyle='--', c='k')
ax[1,0].axvline(θ_target[0], linestyle='--', c='k')
ax[1,0].axhline(θ_target[1], linestyle='--', c='k')
ax[1,1].axhline(θ_target[1], linestyle='--', c='k', label=r'$\theta_{\rm target}$')
ax[1,0].set_xlabel(r'$A$')
ax[1,0].set_ylabel(r'$B$')
ax[0,0].axvline(θ_fid[0], linestyle='--', c='k', alpha=0.4)
#ax[1,0].contourf(A_range, B_range, L1.reshape((size, size)))
#ax[0, 0].plot(A_range, np.real(loglikeA), color='g', label='loglikeA')
ax[1,0].axvline(θ_fid[0], linestyle='--', c='k', alpha=0.4)
ax[1,0].axhline(θ_fid[1], linestyle='--', c='k', alpha=0.4)
ax[1,1].axhline(θ_fid[1], linestyle='--', c='k', alpha=0.4, label=r'$\theta_{\rm fid}$')
ax[1,1].legend(framealpha=0.)
# add in the likelihood estimate
ax[0, 0].plot(A_range, likelihoodA, color='#FF8D33', label=None)
ax[0, 1].axis("off")
ax[1, 0].contour(A_range, B_range, np.real(likelihood), levels=value, colors='#FF8D33')
ax[1, 1].plot(likelihoodB, B_range, color='#FF8D33', label='loglike')
ax[0,0].legend(framealpha=0.)
#plt.savefig('/mnt/home/tmakinen/repositories/field-plots/128x128-contours.png', dpi=400)
#plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
# do PMC-ABC
import tensorflow_probability
tfp = tensorflow_probability.experimental.substrates.jax
tfd = tfp.distributions
tfb = tfp.bijectors
def new_ABC(rng, n_points, proposal_distribution, simulator, data_summary, f, n_parallel_simulations=None, simulator_parameters=None):
def get_distance(summaries, data_summary, f):
if len(data_summary.shape) > 1:
return jax.vmap(lambda data_summary, f : get_distance(summaries, data_summary, f))(data_summary, f)
if len(summaries.shape) == 1:
difference = summaries - data_summary
distance = difference.dot(f).dot(difference)
return jax.lax.cond(np.isnan(distance), lambda _ : np.inf, lambda distance : distance, distance)
elif len(summaries.shape) == 2:
return jax.vmap(lambda summaries : get_distance(summaries, data_summary, f), out_axes=-1)(summaries)
else:
return jax.vmap(lambda summaries : get_distance(summaries, data_summary, f), out_axes=-2)(summaries)
shape = (n_points,)
if n_parallel_simulations is not None:
shape = shape + (n_parallel_simulations,)
rng, key = jax.random.split(rng)
proposed = proposal_distribution.sample(shape, seed=key)
key = jax.random.split(rng, num=np.prod(np.array(proposed).shape[:-1]))
summaries = simulator(key.reshape(proposed.shape[:-1] + (2,)), proposed, simulator_parameters=simulator_parameters)
distances = get_distance(summaries, data_summary, f)
return proposed, summaries, distances
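# Hedged numpy sketch of the Fisher-weighted distance computed in
# `get_distance` above: d = (s - s_obs)^T F (s - s_obs), with NaN simulations
# mapped to inf so they can never be accepted.
import numpy as _np
def _fisher_distance(summary, data_summary, F):
    diff = summary - data_summary
    d = diff @ F @ diff
    return _np.inf if _np.isnan(d) else d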
def w_cov(proposed, weighting):
weighted_samples = proposed * weighting[:, np.newaxis]
return weighted_samples.T.dot(weighted_samples) / weighting.T.dot(weighting)
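# Hedged check of `w_cov` above: with equal weights the weighted second-moment
# matrix reduces to the unweighted one, X^T X / n.
import numpy as _np
def _w_cov(x, w):
    xw = x * w[:, None]
    return xw.T @ xw / (w @ w)
_x = _np.array([[1., 0.], [0., 2.], [1., 1.]])
_cov_eq = _w_cov(_x, _np.ones(3))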
class tmvn():
def __init__(self, loc, scale, low, high, max_counter=int(1e3)):
self.loc = loc
self.scale = scale
self.low = low
self.high = high
if len(loc.shape) > 1:
self.n_samples = loc.shape[0]
else:
self.n_samples = None
self.n_params = low.shape[0]
self.max_counter = max_counter
def mvn(self, rng, loc):
u = jax.random.normal(rng, shape=(self.n_params,))
return loc + u.dot(self.scale)
def w_cond(self, args):
_, loc, counter = args
return np.logical_and(
np.logical_or(
np.any(np.greater(loc, self.high)),
np.any(np.less(loc, self.low))),
np.less(counter, self.max_counter))
def __sample(self, args):
rng, loc, counter = args
rng, key = jax.random.split(rng)
return (rng, self.mvn(key, loc), counter+1)
def _sample(self, rng, loc):
rng, key = jax.random.split(rng)
_, loc, counter = jax.lax.while_loop(
self.w_cond,
self.__sample,
(rng, self.mvn(key, loc), 0))
return jax.lax.cond(
np.greater_equal(counter, self.max_counter),
lambda _ : np.nan * np.ones((self.n_params,)),
lambda _ : loc,
None)
def _sample_n(self, rng, loc, n=None):
if n is None:
return self._sample(rng, loc)
else:
key = jax.random.split(rng, num=n)
return jax.vmap(self._sample)(key,
np.repeat(loc[np.newaxis], n, axis=0))
def sample(self, shape=None, seed=None):
if shape is None:
if self.n_samples is None:
return self._sample_n(seed, self.loc)
else:
key = jax.random.split(seed, num=self.n_samples)
return jax.vmap(lambda key, loc : self._sample_n(key, loc))(key, self.loc)
elif len(shape) == 1:
if self.n_samples is None:
return self._sample_n(seed, self.loc, n=shape[0])
else:
key = jax.random.split(seed, num=self.n_samples)
return jax.vmap(lambda key, loc : self._sample_n(key, loc, n=shape[0]))(key, self.loc)
else:
key = jax.random.split(seed, num=shape[-1])
return jax.vmap(lambda key: self.sample(shape=tuple(shape[:-1]), seed=key), out_axes=-2)(key)
def PMC(rng, n_initial_points, n_points, prior, simulator, data_summary, f, percentile=75, acceptance_ratio=0.1,
max_iteration=10, max_acceptance=10, max_samples=int(1e3), n_parallel_simulations=None, simulator_parameters=None):
low = np.array([dist.low for dist in prior.distributions])
high = np.array([dist.high for dist in prior.distributions])
def single_PMC(rng, samples, summaries, distances, weighting, data_summary, f):
def single_iteration_condition(args):
return np.logical_and(
np.greater(args[-3], acceptance_ratio),
np.less(args[-2], max_iteration))
def single_iteration(args):
def single_acceptance_condition(args):
return np.logical_and(
np.less(args[-2], 1),
np.less(args[-1], max_acceptance))
def single_acceptance(args):
rng, loc, summ, dist, draws, accepted, acceptance_counter = args
#rng, loc, summ, dist, draws, rejected, acceptance_counter = args
rng, key = jax.random.split(rng)
proposed, summaries, distances = new_ABC(
key, None, tmvn(loc, scale, low, high, max_counter=max_samples),
simulator, data_summary, f, n_parallel_simulations=n_parallel_simulations,
simulator_parameters=simulator_parameters)
if n_parallel_simulations is not None:
min_distance_index = np.argmin(distances)
min_distance = distances[min_distance_index]
closer = np.less(min_distance, ϵ)
loc = jax.lax.cond(closer, lambda _ : proposed[min_distance_index], lambda _ : loc, None)
summ = jax.lax.cond(closer, lambda _ : summaries[min_distance_index], lambda _ : summ, None)
dist = jax.lax.cond(closer, lambda _ : distances[min_distance_index], lambda _ : dist, None)
iteration_draws = n_parallel_simulations - np.isinf(distances).sum()
draws += iteration_draws
accepted = closer.sum()
#rejected = iteration_draws - closer.sum()
else:
closer = np.less(distances, dist)
loc = jax.lax.cond(closer, lambda _ : proposed, lambda _ : loc, None)
summ = jax.lax.cond(closer, lambda _ : summaries, lambda _ : summ, None)
dist = jax.lax.cond(closer, lambda _ : distances, lambda _ : dist, None)
iteration_draws = 1 - np.isinf(distances).sum()
draws += iteration_draws
accepted = closer.sum()
#rejected = iteration_draws - closer.sum()
return (rng, loc, summ, dist, draws, accepted, acceptance_counter+1)
#return (rng, loc, summ, dist, draws, rejected, acceptance_counter+1)
rng, samples, summaries, distances, weighting, data_summary, f, acceptance_reached, iteration_counter, total_draws = args
ϵ = distances[ϵ_ind]
loc = samples[ϵ_ind:]
cov = w_cov(samples, weighting)
inv_cov = np.linalg.inv(cov)
scale = np.linalg.cholesky(cov)
rng, *key = jax.random.split(rng, num=loc.shape[0]+1)
draws = np.zeros(loc.shape[0], dtype=np.int32)
accepted = np.zeros(loc.shape[0], dtype=np.int32)
#rejected = np.zeros(loc.shape[0], dtype=np.int32)
acceptance_counter = np.zeros(loc.shape[0], dtype=np.int32)
results = jax.vmap(
lambda key, loc, summaries, distances, draws, accepted, acceptance_counter : jax.lax.while_loop(
#lambda key, loc, summaries, distances, draws, rejected, acceptance_counter : jax.lax.while_loop(
single_acceptance_condition, single_acceptance, (key, loc, summaries, distances, draws, accepted, acceptance_counter)))(
#single_acceptance_condition, single_acceptance, (key, loc, summaries, distances, draws, rejected, acceptance_counter)))(
np.array(key), loc, summaries[ϵ_ind:], distances[ϵ_ind:], draws, accepted, acceptance_counter)
#np.array(key), loc, summaries[ϵ_ind:], distances[ϵ_ind:], draws, rejected, acceptance_counter)
weighting = jax.vmap(
lambda proposed : (
prior.prob(proposed)
/ (np.sum(weighting * tfd.MultivariateNormalTriL(
loc=samples,
scale_tril=np.repeat(
scale[np.newaxis],
samples.shape[0],
axis=0)).prob(proposed)))))(
np.vstack([samples[:ϵ_ind], results[1]]))
samples = jax.ops.index_update(samples, jax.ops.index[ϵ_ind:, :], results[1])
summaries = jax.ops.index_update(summaries, jax.ops.index[ϵ_ind:, :], results[2])
distances = jax.ops.index_update(distances, jax.ops.index[ϵ_ind:], results[3])
acceptance_reached = results[-2].sum() / results[-3].sum()
return (rng, samples, summaries, distances, weighting, data_summary, f, acceptance_reached, iteration_counter+1, total_draws+results[-3].sum())
acceptance_reached = np.inf
iteration_counter = 0
total_draws = 0
results = jax.lax.while_loop(
single_iteration_condition,
single_iteration,
(rng, samples, summaries, distances, weighting, data_summary, f, acceptance_reached, iteration_counter, total_draws))
return results[1], results[2], results[3], results[4], results[7], results[8], results[9]
rng, key = jax.random.split(rng)
proposed, summaries, distances = new_ABC(
key, n_initial_points, prior, simulator, data_summary, f,
n_parallel_simulations=n_parallel_simulations, simulator_parameters=simulator_parameters)
if n_parallel_simulations is not None:
proposed = proposed.reshape((n_initial_points * n_parallel_simulations, -1))
summaries = summaries.reshape((n_initial_points * n_parallel_simulations, -1))
if len(data_summary.shape) > 1:
distances = distances.reshape((data_summary.shape[0], -1))
else:
distances = distances.reshape((-1,))
if len(data_summary.shape) == 1:
sample_indices = np.argsort(distances)[:n_points]
samples = proposed[sample_indices]
summaries = summaries[sample_indices]
distances = distances[sample_indices]
else:
sample_indices = np.argsort(distances, axis=1)[:, :n_points]
samples = jax.vmap(lambda x: proposed[x])(sample_indices)
summaries = jax.vmap(lambda x: summaries[x])(sample_indices)
distances = np.take_along_axis(distances, sample_indices, axis=1)
weighting = prior.prob(samples)
if percentile is None:
ϵ_ind = -1
to_accept = 1
else:
ϵ_ind = int(percentile / 100 * n_points)
to_accept = n_points - ϵ_ind
if len(data_summary.shape) == 1:
return single_PMC(rng, samples, summaries, distances, weighting, data_summary, f)
else:
key = jax.random.split(rng, num=data_summary.shape[0])
return jax.vmap(single_PMC)(key, samples, summaries, distances, weighting, data_summary, f)
target_data_summary = get_estimate(target_data)
low = np.array([0.1, 0.1])
high = np.array([2., 2.])
myprior = tfd.Blockwise([tfd.Uniform(low=low[i], high=high[i]) for i in range(low.shape[0])])
rng, key = jax.random.split(rng)
ppmc_prop, ppmc_summ, ppmc_dist, ppmc_w, ppmc_crit, ppmc_it, ppmc_draws = PMC(
rng=key, n_initial_points=1000, n_points=250, prior=myprior, simulator=simulator,
data_summary=target_data_summary, f=IMNN.F, percentile=75, acceptance_ratio=0.5,
max_iteration=int(1e2), max_acceptance=int(1e3), max_samples=int(1e3),
n_parallel_simulations=100, simulator_parameters=(low, high, input_shape))
# Create figures in Python that handle LaTeX, and save images to files in my
# preferred formatting. I typically place this code in the root of each of my
# projects, and import using:
# from latexify import *
# which will also run the latexify() function on the import.
# Based on code from https://nipunbatra.github.io/blog/2014/latexify.html
import matplotlib
import matplotlib.pyplot as plt
from math import sqrt
#Back-end to use depends on the system
from matplotlib.backends.backend_pgf import FigureCanvasPgf
matplotlib.backend_bases.register_backend('pdf', FigureCanvasPgf)
# matplotlib.use('pgf')
# from matplotlib.backends.backend_pgf import FigureCanvasPgf
# matplotlib.backend_bases.register_backend('ps', FigureCanvasPgf)
import seaborn as sns
sns.set_style("white")
#my preferred palette. From
#https://seaborn.pydata.org/tutorial/color_palettes.html: "The cubehelix color
#palette system makes sequential palettes with a linear increase or decrease in
#brightness and some variation in hue. This means that the information in your
#colormap will be preserved when converted to black and white (for printing) or
#when viewed by a colorblind individual."
# I typically set the number of colors (below, 8) to the distinct colors I need
# in a given plot, so as to use the full range.
sns.set_palette(sns.color_palette("cubehelix", 8))
# The following is the latexify function. It allows you to create 2 column or 1
# column figures. You may also wish to alter the height or width of the figure.
# The default settings are good for most cases. You may also change the
# parameters such as labelsize and fontsize based on your classfile.
def latexify(fig_width=None, fig_height=None, columns=1):
"""Set up matplotlib's RC params for LaTeX plotting.
Call this before plotting a figure.
Parameters
----------
fig_width : float, optional, inches
fig_height : float, optional, inches
columns : {1, 2}
"""
# code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
# Width and max height in inches for IEEE journals taken from
# computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
assert(columns in [1, 2])
if fig_width is None:
fig_width = 6.9 if columns == 1 else 13.8 # width in inches #3.39
if fig_height is None:
golden_mean = (sqrt(5) - 1.0) / 2.0 # Aesthetic ratio
fig_height = fig_width * golden_mean # height in inches
MAX_HEIGHT_INCHES = 16.0
if fig_height > MAX_HEIGHT_INCHES:
print("WARNING: fig_height too large: {}, so will reduce to {} inches.".format(fig_height, MAX_HEIGHT_INCHES))
fig_height = MAX_HEIGHT_INCHES
params = {
# 'backend': 'ps',
# 'pgf.rcfonts': False,
# 'pgf.preamble': ['\\usepackage{gensymb}', '\\usepackage[dvipsnames]{xcolor}'],
# "pgf.texsystem": "pdflatex",
# 'text.latex.preamble': ['\\usepackage{gensymb}', '\\usepackage[dvipsnames]{xcolor}'],
'text.latex.preamble': '\\usepackage{mathptmx}',
#values below are useful defaults. individual plot fontsizes are
#modified as necessary.
'axes.labelsize': 8, # fontsize for x and y labels
'axes.titlesize': 8,
'font.size': 8,
'legend.fontsize': 8,
'xtick.labelsize': 6,
'ytick.labelsize': 6,
'text.usetex': True,
'figure.figsize': [fig_width, fig_height],
'font.family': 'serif',
'font.serif': 'Times',
'lines.linewidth': 1.5,
'lines.markersize':1,
'xtick.major.pad' : 2,
'ytick.major.pad' : 2,
'axes.xmargin' : .0, # x margin. See `axes.Axes.margins`
'axes.ymargin' : .0, # y margin See `axes.Axes.margins`
}
matplotlib.rcParams.update(params)
def saveimage(name, fig = plt, extension = 'pdf', folder = 'plots/'):
sns.despine()
#Minor ticks off by default in matplotlib
# plt.minorticks_off()
#grid being off is the default for seaborn white style, so not needed.
# plt.grid(False, axis = "x")
# plt.grid(False, axis = "y")
fig.savefig('{}{}.{}'.format(folder,name, extension), bbox_inches = 'tight')
latexify()
```
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=2
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### dataset information
```
from datetime import datetime
dataset = "fmnist"
dims = (28, 28, 1)
num_classes = 10
labels_per_class = 1024 # full
batch_size = 128
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_baseline_augmented'
)
print(datestring)
```
### Load packages
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
```
### Load dataset
```
from tfumap.load_datasets import load_FMNIST, mask_labels
X_train, X_test, X_valid, Y_train, Y_test, Y_valid = load_FMNIST(flatten=False)
X_train.shape
if labels_per_class == "full":
X_labeled = X_train
Y_masked = Y_labeled = Y_train
else:
X_labeled, Y_labeled, Y_masked = mask_labels(
X_train, Y_train, labels_per_class=labels_per_class
)
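# Hedged sketch of what `mask_labels` is assumed to do (the real helper lives
# in tfumap.load_datasets, so the names below are illustrative): keep
# `labels_per_class` labelled examples per class and mask the rest with -1.
import numpy as _np
def _mask_labels(y, labels_per_class, n_classes, seed=0):
    rng = _np.random.default_rng(seed)
    masked = _np.full_like(y, -1)
    for c in range(n_classes):
        idx = _np.flatnonzero(y == c)
        keep = rng.choice(idx, size=min(labels_per_class, len(idx)), replace=False)
        masked[keep] = c
    return masked
_y_demo = _np.repeat(_np.arange(3), 10)
_y_masked = _mask_labels(_y_demo, labels_per_class=2, n_classes=3)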
```
### Build network
```
from tensorflow.keras import datasets, layers, models
from tensorflow_addons.layers import WeightNormalization
def conv_block(filts, name, kernel_size = (3, 3), padding = "same", **kwargs):
return WeightNormalization(
layers.Conv2D(
filts, kernel_size, activation=None, padding=padding, **kwargs
),
name="conv"+name,
)
#CNN13
#See:
#https://github.com/vikasverma1077/ICT/blob/master/networks/lenet.py
#https://github.com/brain-research/realistic-ssl-evaluation
lr_alpha = 0.1
dropout_rate = 0.5
num_classes = 10
input_shape = dims
model = models.Sequential()
model.add(tf.keras.Input(shape=input_shape))
### conv1a
name = '1a'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1b
name = '1b'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1c
name = '1c'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp1"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop1"))
### conv2a
name = '2a'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv2b
name = '2b'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv2c
name = '2c'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp2"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop2"))
### conv3a
name = '3a'
model.add(conv_block(name = name, filts = 512, kernel_size = (3,3), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3b
name = '3b'
model.add(conv_block(name = name, filts = 256, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3c
name = '3c'
model.add(conv_block(name = name, filts = 128, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.AveragePooling2D(pool_size=(3, 3), strides=2, padding='valid'))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation=None, name='z'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc1'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc2'))
model.add(WeightNormalization(layers.Dense(num_classes, activation=None)))
model.summary()
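# Hedged sanity check on the spatial sizes in the CNN13 stack above for 28x28
# inputs: two 2x2/stride-2 max-pools, one valid 3x3 conv, then the 3x3/stride-2
# average-pool leave a 2x2 map before Flatten (the "same"-padded and 1x1 valid
# convs do not change the size).
def _valid_out(n, k, s):
    # output size of a valid-padded conv/pool with kernel k and stride s
    return (n - k) // s + 1
_n = 28
_n = _valid_out(_n, 2, 2)  # mp1
_n = _valid_out(_n, 2, 2)  # mp2
_n = _valid_out(_n, 3, 1)  # conv3a, valid 3x3
_n = _valid_out(_n, 3, 2)  # average pool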
```
### Augmentation
```
import tensorflow_addons as tfa
def norm(x):
return( x - tf.reduce_min(x))#/(tf.reduce_max(x) - tf.reduce_min(x))
def augment(image, label):
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
# stretch
randint_hor = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
randint_vert = tf.random.uniform((2,), minval=0, maxval = 8, dtype=tf.int32)[0]
image = tf.image.resize(image, (dims[0]+randint_vert*2, dims[1]+randint_hor*2))
#image = tf.image.crop_to_bounding_box(image, randint_vert,randint_hor,28,28)
image = tf.image.resize_with_pad(
image, dims[0], dims[1]
)
image = tf.image.resize_with_crop_or_pad(
image, dims[0] + 3, dims[1] + 3
) # pad 3 pixels, then randomly crop back to the original size
image = tf.image.random_crop(image, size=dims)
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
image = tfa.image.rotate(
image,
tf.squeeze(tf.random.uniform(shape=(1, 1), minval=-0.25, maxval=0.25)),
interpolation="BILINEAR",
)
image = tf.image.random_flip_left_right(image)
image = tf.clip_by_value(image, 0, 1)
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness
image = tf.image.random_contrast(image, lower=0.5, upper=1.75)
image = norm(image)
image = tf.clip_by_value(image, 0, 1)
if tf.random.uniform((1,), minval=0, maxval = 2, dtype=tf.int32)[0] == 0:
image = tfa.image.random_cutout(
tf.expand_dims(image, 0), (8, 8), constant_values=0.5
)[0]
image = tf.clip_by_value(image, 0, 1)
return image, label
nex = 10
for i in range(5):
fig, axs = plt.subplots(ncols=nex +1, figsize=((nex+1)*2, 2))
axs[0].imshow(np.squeeze(X_train[i]), cmap = plt.cm.Greys)
axs[0].axis('off')
for ax in axs.flatten()[1:]:
aug_img = np.squeeze(augment(X_train[i], Y_train[i])[0])
ax.matshow(aug_img, cmap = plt.cm.Greys, vmin=0, vmax=1)
ax.axis('off')
```
### train
```
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_accuracy', min_delta=0, patience=100, verbose=1, mode='auto',
baseline=None, restore_best_weights=True
)
import tensorflow_addons as tfa
opt = tf.keras.optimizers.Adam(1e-4)
opt = tfa.optimizers.MovingAverage(opt)
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)
model.compile(opt, loss = loss, metrics=['accuracy'])
Y_valid_one_hot = tf.keras.backend.one_hot(
Y_valid, num_classes
)
Y_labeled_one_hot = tf.keras.backend.one_hot(
Y_labeled, num_classes
)
from livelossplot import PlotLossesKerasTF
# plot losses callback
plotlosses = PlotLossesKerasTF()
train_ds = (
tf.data.Dataset.from_tensor_slices((X_labeled, Y_labeled_one_hot))
.repeat()
.shuffle(len(X_labeled))
.map(augment, num_parallel_calls=tf.data.experimental.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.experimental.AUTOTUNE)
)
steps_per_epoch = int(len(X_train)/ batch_size)
history = model.fit(
train_ds,
epochs=500,
validation_data=(X_valid, Y_valid_one_hot),
callbacks = [early_stopping, plotlosses],
steps_per_epoch = steps_per_epoch,
)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
submodel = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
z = submodel.predict(X_train)
np.shape(z)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.prod(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
z_valid = submodel.predict(X_valid)
np.shape(z_valid)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z_valid.reshape(len(z_valid), np.prod(np.shape(z_valid)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 1, cmap = plt.cm.tab10)
predictions = model.predict(X_valid)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=np.argmax(predictions, axis=1), s= 1, alpha = 1, cmap = plt.cm.tab10)
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
```
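The loss above is configured with `label_smoothing=0.2`. Label smoothing replaces a one-hot target $y$ with $(1-\alpha)\,y + \alpha/K$ for $K$ classes, which discourages over-confident logits. A quick numpy sketch of the transformation (illustrative only, not the Keras internals):

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.2):
    # Mix the one-hot target with the uniform distribution over K classes.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - alpha) + alpha / k

y = np.eye(10)[3]              # one-hot target for class 3
y_s = smooth_labels(y, 0.2)
# true class gets 1 - alpha + alpha/K = 0.82; every other class gets alpha/K = 0.02
print(y_s[3], y_s[0])
```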
### save results
```
# save score, valid embedding, weights, results
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder)
```
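`ensure_dir` comes from the project's `tfumap.paths` helpers; it presumably just creates the folder tree if it does not exist yet. A pathlib stand-in under that assumption (the folder names below are illustrative):

```python
from pathlib import Path
import tempfile

def ensure_dir(path):
    # Create the directory and any missing parents; no error if it already exists.
    Path(path).mkdir(parents=True, exist_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "semisupervised-keras" / "mnist" / "16"
    ensure_dir(target)
    created = target.is_dir()
print(created)  # True
```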
#### save weights
```
encoder = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
encoder.save_weights((save_folder / "encoder").as_posix())
classifier = tf.keras.models.Model(
[tf.keras.Input(tensor=model.get_layer('weight_normalization').input)], [model.outputs[0]]
)
print([i.name for i in classifier.layers])
classifier.save_weights((save_folder / "classifier").as_posix())
```
#### save score
```
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
np.save(save_folder / 'test_loss.npy', result)
```
#### save embedding
```
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.prod(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
```
#### save results
```
import pickle
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump(history.history, file_pi)
```
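The pickled history can be reloaded later, e.g. to re-plot the loss curves without re-training. A round-trip sketch with a stand-in dictionary:

```python
import os
import pickle
import tempfile

history_dict = {"loss": [1.2, 0.8, 0.5], "val_loss": [1.3, 0.9, 0.7]}
path = os.path.join(tempfile.mkdtemp(), "history.pickle")
with open(path, "wb") as f:
    pickle.dump(history_dict, f)        # same call as above
with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored["val_loss"][-1])  # 0.7
```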
| github_jupyter |
```
import pandas as pd
import numpy as np
train = pd.read_csv("train.csv")
train.head()
test = pd.read_csv("test.csv")
test.head()
train_id = train["id"]
test_id = test["id"]
train = train.drop('id', axis=1)
test = test.drop('id', axis=1)
train["Gender"] = train["Gender"].replace({'Male':0, 'Female':1})
train["Vehicle_Damage"] = train["Vehicle_Damage"].replace({'Yes':1,"No":0})
test["Gender"] = test["Gender"].replace({'Male':0, 'Female':1})
test["Vehicle_Damage"] = test["Vehicle_Damage"].replace({'Yes':1,"No":0})
map_dict={'> 2 Years':2,'1-2 Year':1,'< 1 Year':0}
train['Vehicle_Age']=train['Vehicle_Age'].map(map_dict)
map_dict={'> 2 Years':2,'1-2 Year':1,'< 1 Year':0}
test['Vehicle_Age']=test['Vehicle_Age'].map(map_dict)
def analysis(df, target):
    instance = df.shape[0]
    types = df.dtypes
    counts = df.apply(lambda x: x.count())
    uniques = df.T.apply(pd.Series.unique, axis=1)
    nulls = df.apply(lambda x: x.isnull().sum())
    distincts = df.apply(pd.Series.nunique)
    null_perc = (df.isnull().sum() / instance) * 100
    skewness = df.skew()
    kurtosis = df.kurt()
    corr = df.corr()[target]
    # named `summary` to avoid shadowing the builtin `str`
    summary = pd.concat([types, counts, uniques, nulls, distincts, null_perc, skewness, kurtosis, corr], axis=1, sort=False)
    corr_col = 'corr ' + target
    summary.columns = ['types', 'counts', 'uniques', 'nulls', 'distincts', 'null_perc', 'skewness', 'kurtosis', corr_col]
    return summary
details = analysis(train, "Response")
details.sort_values("corr Response",ascending=False)
len(train)
len(test)
y = train.Response
X = train.copy()
X = X.drop('Response', axis=1)
dataset = pd.concat([X,test])
len(dataset)
#data_sqrt = np.sqrt(dataset)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
data_scaled = scaler.fit_transform(dataset)
data_scaled = pd.DataFrame(data_scaled,columns = X.columns)
data_scaled.head()
train_df =data_scaled[:len(train)].copy()
len(train_df)
train_df['Response']=y
test_df =data_scaled[len(train):].copy()
len(test_df)
y = train_df.Response
X = train_df.copy()
X = X.drop('Response', axis=1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,train_size=0.7)
y_train.value_counts()
from imblearn.over_sampling import RandomOverSampler
from collections import Counter
ros = RandomOverSampler()
X_train_ns, y_train_ns = ros.fit_resample(X_train, y_train)
print("The number of classes before fit {}".format(Counter(y_train)))
print("The number of classes after fit {}".format(Counter(y_train_ns)))
from sklearn.linear_model import LogisticRegression
log_clf = LogisticRegression()
log_clf.fit(X_train_ns,y_train_ns)
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report
y_pred=log_clf.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(roc_auc_score(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
from sklearn.linear_model import SGDClassifier
SGD = SGDClassifier(loss='hinge')
SGD.fit(X_train_ns,y_train_ns)
y_pred=SGD.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(roc_auc_score(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
submission = pd.read_csv("sample_submission.csv")
prediction = SGD.predict(test_df)
submission.Response = prediction
submission.to_csv('SGD_hinge.csv', index=False)
import lightgbm
categorical_features =['Gender','Driving_License','Region_Code','Previously_Insured','Vehicle_Age','Vehicle_Damage','Policy_Sales_Channel']
train_data = lightgbm.Dataset(X_train_ns, label=y_train_ns, categorical_feature=categorical_features)
test_data = lightgbm.Dataset(X_test, label=y_test)
parameters = {
'application': 'binary',
'objective': 'binary',
'metric': 'auc',
'is_unbalance': 'true',
'boosting': 'gbdt',
'num_leaves': 31,
'feature_fraction': 0.5,
'bagging_fraction': 0.5,
'bagging_freq': 20,
'learning_rate': 0.01,
'verbose': 0
}
model = lightgbm.train(parameters,
train_data,
valid_sets=test_data,
num_boost_round=5000,
early_stopping_rounds=100)
```
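Note that `roc_auc_score` is computed above on hard class predictions; with only two distinct predicted values the ROC curve has a single operating point, which typically understates the model's true AUC. Computed on continuous scores (e.g. `predict_proba` or `decision_function` outputs), AUC equals the probability that a randomly chosen positive is ranked above a randomly chosen negative. A small numpy illustration of that rank formulation on synthetic data (not this notebook's model):

```python
import numpy as np

def auc_from_scores(y_true, scores):
    # Mann-Whitney U formulation: fraction of (positive, negative) pairs
    # in which the positive receives the higher score (ties count 1/2).
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_from_scores(y, scores))  # 0.75
```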
| github_jupyter |
```
#Define libraries
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv1D, MaxPooling1D, BatchNormalization, Flatten
from sklearn.model_selection import KFold
from keras.utils import multi_gpu_model
#from sklearn.cross_validation import StratifiedKFold
from contextlib import redirect_stdout
from keras.utils import plot_model
from IPython.display import Image
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import auc
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.utils.vis_utils import plot_model
from IPython.display import SVG
import datetime
from keras.utils.vis_utils import model_to_dot
from keras.callbacks import EarlyStopping, ModelCheckpoint
gpu_options = tf.GPUOptions(allow_growth=True)
sess =tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
tf.keras.backend.set_session(sess)
NBname='_again12FC'
%matplotlib inline
# =======
# 441PANet2
# np.random.seed(100)
# kernel_len 25
# half (3,6, 9, 12, 15)
# decay=0.0000125
# dropout 0.25
# # # diff b/w 441PANet2 & 10p121PANet2
# FC 2x12
# patience 10
# epochs 50
# lr=0.00000625
# # # diff b/w 10p121PANet2 & 12FC
# no ES
# =======
SMALL_SIZE = 10
MEDIUM_SIZE = 15
BIGGER_SIZE = 18
# font = {'family' : 'monospace',
# 'weight' : 'bold',
# 'size' : 'larger'}
#plt.rc('font', **font) # pass in the font dict as kwargs
plt.rc('font', size=MEDIUM_SIZE, weight='normal') # controls default text sizes ('normal' is a font weight, not a valid family)
plt.rc('axes', titlesize=MEDIUM_SIZE,) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE,) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE,titleweight='bold') # fontsize of the figure title
#plt.rc('xtick', labelsize=15)
#plt.rc('ytick', labelsize=15)
print(str(datetime.datetime.now()))
def save_models(mod, last):
for i in range(len(mod)):
name=str(i+1)+last
mod[i].model.save(name)
def plot_perform2(mod, metric, last,ttl):
#plt.figure(figsize=(13,13))
plt.figure(figsize=(11,11))
for i in range(len(mod)):
name=str(i+1)
val = plt.plot(mod[i].epoch, mod[i].history['val_'+metric],
'--', label=name.title()+'_Val',linewidth=1.5)
plt.plot(mod[i].epoch, mod[i].history[metric],
color=val[0].get_color(), label=name.title()+'_Train',linewidth=1.2)
plt.xlabel('Epochs')
plt.ylabel(metric.replace('_',' ').title())
plt.ylabel(metric.title())
plt.title(ttl)
plt.legend(loc='best')
plt.xlim([0,max(mod[i].epoch)])
figname=metric+last+'.png'
plt.savefig(figname,dpi=500)
def create_model0(shape1):
model0 = Sequential()
model0.add(Conv1D(3, 25, strides=1,padding='same',activation='relu', batch_input_shape=(None,shape1,1)))
model0.add(BatchNormalization())
model0.add(Conv1D(3, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(6, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(6, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(9, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(9, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(12, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(12, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(15, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(15, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Flatten())
model0.add(Dense(12, activation='relu'))
model0.add(Dense(12, activation='relu'))
#model0.add(Dense(8, activation='relu'))
model0.add(Dropout(0.25))
model0.add(Dense(2, activation='softmax'))
return model0
%%time
batch_size = 10
N_epochs = 50
N_folds=4
np.random.seed(100)
kf = KFold(n_splits=N_folds, shuffle=False)
# fm='train_x.npy'
# fl='train_y.npy'
# data=np.load(os.path.abspath(fm))
# dlabels=np.load(os.path.abspath(fl))
rm='res_x.npy'
rl='res_y.npy'
rdata=np.load(os.path.abspath(rm))
rlabels=np.load(os.path.abspath(rl))
sm='sen_x.npy'
sl='sen_y.npy'
sdata=np.load(os.path.abspath(sm))
slabels=np.load(os.path.abspath(sl))
# =================
# Do once!
# =================
sen_batch = np.random.RandomState(seed=45).permutation(sdata.shape[0])
bins = np.linspace(0, 200, 41)
digitized = np.digitize(sen_batch, bins,right=False)
# ================
#adamax=keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)
#adam=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
#default reco
#nadam=keras.optimizers.Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.004)
# sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# adadelta=keras.optimizers.Adadelta(lr=1, rho=0.95, epsilon=None, decay=0.0)
# rmsp=keras.optimizers.RMSprop(lr=0.01, rho=0.9, epsilon=None, decay=0.0)
# adagrad=keras.optimizers.Adagrad(lr=0.01, epsilon=None, decay=0.0
i=0
adamax=[]
callbacks = [EarlyStopping(monitor='val_loss', patience=10),
ModelCheckpoint(filepath='best_model'+NBname+'.h5', monitor='val_loss', save_best_only=True)]
for train_idx_k, val_idx_k in kf.split(rdata):
print ("Running Fold", i+1, "/", N_folds)
# ===============================
# select train
# ===============================
s_train_x=sdata[np.isin(digitized,train_idx_k+1)]
s_train_y=slabels[np.isin(digitized,train_idx_k+1)]
r_train_x=np.concatenate((rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k]))
r_train_y=np.concatenate((rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k]))
# ===============================
# select val
# ===============================
s_val_x=sdata[np.isin(digitized,val_idx_k+1)]
s_val_y=slabels[np.isin(digitized,val_idx_k+1)]
r_val_x=np.concatenate((rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k]))
r_val_y=np.concatenate((rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k]))
# ===============================
# concatenate F_train/val_x/y
# ===============================
f_train_x, f_train_y = np.concatenate((s_train_x,r_train_x)), np.concatenate((s_train_y,r_train_y))
# train_shuf_idx = np.random.permutation(f_train_x.shape[0])
# F_train_x, F_train_y = f_train_x[train_shuf_idx], f_train_y[train_shuf_idx]
f_val_x, f_val_y = np.concatenate((s_val_x,r_val_x)), np.concatenate((s_val_y,r_val_y))
# val_shuf_idx = np.random.permutation(f_val_x.shape[0])
# F_val_x, F_val_y = f_val_x[val_shuf_idx], f_val_y[val_shuf_idx]
# ===============================
# shuffle just because we can?
# ===============================
train_shuf_idx = np.random.permutation(f_train_x.shape[0])
x_train_CV, y_train_CV = f_train_x[train_shuf_idx], f_train_y[train_shuf_idx]
val_shuf_idx = np.random.permutation(f_val_x.shape[0])
x_val_CV, y_val_CV = f_val_x[val_shuf_idx], f_val_y[val_shuf_idx]
# ===============================
# clear and create empty model
# ===============================
model0 = None # Clearing the NN.
model0 = create_model0(rdata.shape[1])
# x_train_CV, y_train_CV, = data[train_idx_k], dlabels[train_idx_k]
# x_val_CV, y_val_CV, = data[val_idx_k], dlabels[val_idx_k]
# parallel_model = None
# parallel_model = multi_gpu_model(model0, gpus=2)
# #default
# #parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0),
# parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.004, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.005),
# loss='categorical_crossentropy',
# metrics=['accuracy','categorical_crossentropy'])
# model0_adamax = parallel_model.fit(x_train_CV, y_train_CV,
# epochs=N_epochs,
# batch_size=batch_size,
# validation_data=(x_val_CV,y_val_CV),
# verbose=1)
#default
#parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0),
model0.compile(optimizer=keras.optimizers.Adamax(lr=0.00000625, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0000125),
loss='categorical_crossentropy',
metrics=['accuracy','categorical_crossentropy'])
model0_adamax = model0.fit(x_train_CV, y_train_CV,
epochs=N_epochs,
batch_size=batch_size,
validation_data=(x_val_CV,y_val_CV),
verbose=2)#,callbacks=callbacks)
adamax.append(model0_adamax)
i=i+1
plot_perform2(adamax,'acc',NBname,'CV:Performance-I')
plot_perform2(adamax,'loss',NBname,'CV:Performance-II')
with open('summary'+NBname+'.txt', 'w') as f:
with redirect_stdout(f):
model0.summary()
model0.summary()
#save_models(adamax,NBname)
print(str(datetime.datetime.now()))
# produces an extremely tall png that doesn't really fit on a screen
# plot_model(model0, to_file='model'+NBname+'.png', show_shapes=True,show_layer_names=False)
# produces a low-quality SVG object; leave commented unless needed
# SVG(model_to_dot(model0, show_shapes=True,show_layer_names=False).create(prog='dot', format='svg'))
# # =====================================
# # Legacy block, life saver truly
# # =====================================
# # sdata.shape
# # (200, 1152012, 1)
# print('\n')
# sen_batch = np.random.RandomState(seed=45).permutation(sdata.shape[0])
# print(sen_batch)
# print('\n')
# bins = np.linspace(0, 200, 41)
# print(bins.shape)
# print(bins)
# print('\n')
# digitized = np.digitize(sen_batch, bins,right=False)
# print(digitized.shape)
# print(digitized)
# # #instead of 10, run counter
# # print(np.where(digitized==10))
# # print(sdata[np.where(digitized==10)].shape)
# # # (array([ 0, 96, 101, 159, 183]),)
# # # (5, 1152012, 1)
# # dig_sort=digitized
# # dig_sort.sort()
# # # print(dig_sort)
# # # [ 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5
# # # 5 6 6 6 6 6 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10
# # # 10 10 11 11 11 11 11 12 12 12 12 12 13 13 13 13 13 14 14 14 14 14 15 15
# # # 15 15 15 16 16 16 16 16 17 17 17 17 17 18 18 18 18 18 19 19 19 19 19 20
# # # 20 20 20 20 21 21 21 21 21 22 22 22 22 22 23 23 23 23 23 24 24 24 24 24
# # # 25 25 25 25 25 26 26 26 26 26 27 27 27 27 27 28 28 28 28 28 29 29 29 29
# # # 29 30 30 30 30 30 31 31 31 31 31 32 32 32 32 32 33 33 33 33 33 34 34 34
# # # 34 34 35 35 35 35 35 36 36 36 36 36 37 37 37 37 37 38 38 38 38 38 39 39
# # # 39 39 39 40 40 40 40 40]
# # print(val_idx_k)
# # # array([ 2, 3, 8, 10, 14, 15, 23, 24, 30, 32])
# # print(val_idx_k+1)
# # # array([ 3, 4, 9, 11, 15, 16, 24, 25, 31, 33])
# # print('\n')
# # print(sdata[np.isin(digitized,train_idx_k+1)].shape)
# # # (150, 1152012, 1)
# # print(sdata[np.isin(digitized,val_idx_k+1)].shape)
# # # (50, 1152012, 1)
# # ==========================================================================
# # # DO NOT UNCOMMENT UNTIL THE END; DECLARES FUNCTION FOR AN UNBIASED TEST
# # ==========================================================================
# def plot_auc(aucies,fprs,tprs, last):
# #plt.figure(figsize=(13,13))
# plt.figure(figsize=(11,11))
# plt.plot([0, 1], [0, 1], 'k--')
# for i in range(len(aucies)):
# st='CV_'+str(i+1)+' '
# plt.plot(fprs[i], tprs[i], label='{} (AUC= {:.3f})'.format(st,aucies[i]),linewidth=1.5)
# plt.xlabel('False positive rate')
# plt.ylabel('True positive rate')
# plt.title('ROC curve')
# plt.legend(loc='best')
# figname='ROC'+last+'.png'
# plt.savefig(figname,dpi=500)
# # ==========================================================================
# # # THIS IS THE UNBIASED TEST; DO NOT UNCOMMENT UNTIL THE END
# # ==========================================================================
# fpr_x=[]
# tpr_x=[]
# thresholds_x=[]
# auc_x=[]
# pre_S=[]
# rec_S=[]
# f1_S=[]
# kap_S=[]
# acc_S=[]
# mat_S=[]
# y_pred = model0_adamax.model.predict(test)#.ravel()
# fpr_0, tpr_0, thresholds_0 = roc_curve(tlabels[:,1], y_pred[:,1])
# fpr_x.append(fpr_0)
# tpr_x.append(tpr_0)
# thresholds_x.append(thresholds_0)
# auc_x.append(auc(fpr_0, tpr_0))
# # predict probabilities for test set
# yhat_probs = model0_adamax.model.predict(test, verbose=0)
# # predict crisp classes for test set
# yhat_classes = model0_adamax.model.predict_classes(test, verbose=0)
# # reduce to 1d array
# testy=tlabels[:,1]
# #testy1=tlabels[:,1]
# #yhat_probs = yhat_probs[:, 0]
# #yhat_classes = yhat_classes[:, 0]
# # accuracy: (tp + tn) / (p + n)
# acc_S.append(accuracy_score(testy, yhat_classes))
# #print('Accuracy: %f' % accuracy_score(testy, yhat_classes))
# #precision tp / (tp + fp)
# pre_S.append(precision_score(testy, yhat_classes))
# #print('Precision: %f' % precision_score(testy, yhat_classes))
# #recall: tp / (tp + fn)
# rec_S.append(recall_score(testy, yhat_classes))
# #print('Recall: %f' % recall_score(testy, yhat_classes))
# # f1: 2 tp / (2 tp + fp + fn)
# f1_S.append(f1_score(testy, yhat_classes))
# #print('F1 score: %f' % f1_score(testy, yhat_classes))
# # kappa
# kap_S.append(cohen_kappa_score(testy, yhat_classes))
# #print('Cohens kappa: %f' % cohen_kappa_score(testy, yhat_classes))
# # confusion matrix
# mat_S.append(confusion_matrix(testy, yhat_classes))
# #print(confusion_matrix(testy, yhat_classes))
# with open('perform'+NBname+'.txt', "w") as f:
# f.writelines("AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
# f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
# for x in range(len(fpr_x)):
# f.writelines(map("{}\n".format, mat_S[x]))
# f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# # ==========================================================================
# # # THIS IS THE UNBIASED TEST; DO NOT UNCOMMENT UNTIL THE END
# # ==========================================================================
# plot_auc(auc_x,fpr_x,tpr_x,NBname)
# plt.figure(figsize=(16,10))
# plt.plot([0, 1], [0, 1], 'k--')
# plt.plot(fpr_x[0], tpr_x[0], label='CV1 (area= {:.3f})'.format(auc_x[0]))
# plt.plot(fpr_x[1], tpr_x[1], label='CV2 (area= {:.3f})'.format(auc_x[1]))
# plt.plot(fpr_x[2], tpr_x[2], label='CV3 (area= {:.3f})'.format(auc_x[2]))
# plt.xlabel('False positive rate')
# plt.ylabel('True positive rate')
# plt.title('ROC curve')
# plt.legend(loc='best')
# figname='model0_011GWAS'+'_ROC.png'
# plt.savefig(figname,dpi=400)
# As index starts from 0, changed from general form
# [(M*(k-i)):(M*k-1)]
for train_idx,val_idx in kf.split(rdata):
print(train_idx)
print(5*train_idx)
print(5*train_idx+4)
print('\n')
print(val_idx)
print(5*val_idx)
print(5*val_idx+4)
print('\n \n')
# plot_perform([#('1_nadam', nadam[0]),
# ('1_adamax', adamax[0]),
# #('2_nadam', nadam[1]),
# ('2_adamax', adamax[1]),
# #('3_nadam', nadam[2]),
# ('3_adamax', adamax[2])],
# #('3_nadam', nadam[2]),
# #('4_adamax', adamax[3]),
# #('3_nadam', nadam[2]),
# #('5_adamax', adamax[4])],
# 'acc','model0_011GWAS')
# plot_perform([#('1_nadam', nadam[0]),
# ('1_adamax', adamax[0]),
# #('2_nadam', nadam[1]),
# ('2_adamax', adamax[1]),
# #('3_nadam', nadam[2]),
# ('3_adamax', adamax[2])],
# #('3_nadam', nadam[2]),
# #('4_adamax', adamax[3]),
# #('3_nadam', nadam[2]),
# #('5_adamax', adamax[4])],
# 'loss','model0_011GWAS')
# adamax[0].model.save('adamax_1_011GWAS')
# adamax[1].model.save('adamax_2_011GWAS')
# adamax[2].model.save('adamax_3_011GWAS')
# # adamax[3].model.save('adamax_4_011GWAS')
# # adamax[4].model.save('adamax_5_011GWAS')
# # plot_perform([#('1_nadam', nadam[0]),
# # ('1_adamax', adamax[0]),
# # #('2_nadam', nadam[1]),
# # ('2_adamax', adamax[1]),
# # #('3_nadam', nadam[2]),
# # ('3_adamax', adamax[2])],
# # #('3_nadam', nadam[2]),
# # #('4_adamax', adamax[3]),
# # #('3_nadam', nadam[2]),
# # #('5_adamax', adamax[4])],
# # 'acc','model0_011GWAS')
# def plot_perform(histories, metric,initial):
# plt.figure(figsize=(16,10))
# for name, history in histories:
# val = plt.plot(history.epoch, history.history['val_'+metric],
# '--', label=name.title()+' Val')
# #print(val) [<matplotlib.lines.Line2D object at 0x7fbb1899a940>]
# #print(val[0]) Line2D(Baseline Val)
# #print(val[0].get_color()) #1f77b4
# plt.plot(history.epoch, history.history[metric],
# color=val[0].get_color(), label=name.title()+' Train')
# plt.xlabel('Epochs')
# plt.ylabel(metric.replace('_',' ').title())
# plt.ylabel(metric.title())
# plt.legend()
# plt.xlim([0,max(history.epoch)])
# figname=initial+"_"+metric+".png"
# plt.savefig(figname,dpi=400)
```
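The fold selection in the training loop above assigns the 200 "sen" samples to 40 groups of 5 via `np.digitize` over bins of width 5, and then matches those groups against the `KFold` indices computed on the 40 "res" samples (`train_idx_k+1` because `digitize` returns 1-based bin numbers). A compact check of that binning logic with a toy array, mirroring the commented "legacy block":

```python
import numpy as np

perm = np.random.RandomState(seed=45).permutation(200)  # shuffled sample ids 0..199
bins = np.linspace(0, 200, 41)                          # 41 edges -> 40 bins of width 5
groups = np.digitize(perm, bins, right=False)           # 1-based group ids

print(groups.min(), groups.max())  # 1 40
print((groups == 10).sum())        # 5 -- every group holds exactly 5 samples
```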
| github_jupyter |
# Lab 2 - Regression
## Predicting Boston house prices in the 1970s
Predicting the price of 1970s Boston houses, using data from the *Boston House Prices* database created by D. Harrison and D.L. Rubinfeld and hosted at the University of California, Irvine (http://archive.ics.uci.edu/ml/machine-learning-databases/housing/), is a classic supervised learning problem.
<img src="https://1.bp.blogspot.com/-sCZIatDf9LQ/XGm-lEHXnAI/AAAAAAAAPxQ/kv8S8fdgudAwWTFuJhuAoiykLmWLCoOtgCLcBGAs/s1600/197010xx-GovernmentCenter-Boston_resize.JPG" width=600 />
More precisely, the label to predict in this database is the median house price per neighborhood (in thousands of dollars). It is therefore a regression problem, since we want to infer continuous values. To do so, we have 13 inputs providing the following information:
- CRIM - per capita crime rate by town
- ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS - proportion of non-retail business acres per town.
- CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
- NOX - nitric oxides concentration (parts per 10 million)
- RM - average number of rooms per dwelling
- AGE - proportion of owner-occupied units built prior to 1940
- DIS - weighted distances to five Boston employment centres
- TAX - full-value property-tax rate per \$10,000
- RAD - index of accessibility to radial highways
- PTRATIO - pupil-teacher ratio by town
- B - $1000(B_k - 0.63)^2$ where $B_k$ is the proportion of blacks by town
- LSTAT - percentage lower status of the population
The goal of this lab is to predict the median house prices per neighborhood as closely as possible.

```
from __future__ import print_function
import numpy as np
from matplotlib import pyplot as plt
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import regularizers
%matplotlib inline
```
We reuse the loss-plotting function introduced in Lab 1.
```
def plot_loss(val_loss, train_loss, ymax=100):
    plt.plot(val_loss, color='green', label='Validation loss')
    plt.plot(train_loss, color='blue', linestyle='--', label='Training loss')
    plt.xlabel('Epochs')
    plt.ylim(0, ymax)
    plt.title('Training and validation loss over the course of training')
    plt.legend()
```
# Data preparation
We start by loading the training and test data.
```
from keras.datasets import boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
```
# A simple approach to be corrected
## Building the model
```
model = Sequential()
model.add(Dense(4, activation='relu', input_dim=13))
model.add(Dense(1, activation='sigmoid'))
```
## Training the network
```
optim = optimizers.SGD(lr = 0.01)
model.compile(optimizer=optim, loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=50, batch_size=32)
```
## Evaluating the model
```
train_loss=(history.history['loss'])
plot_loss([], train_loss, ymax=800)
#val_loss=(history.history['val_loss'])
#plot_loss(val_loss, train_loss, ymax=500)
model.evaluate(x_test, y_test)
```
We get an MAE of about 22, which means our predictions are off from the ground truth by about \$22,000 on average.
# Your task
The approach presented above yields disappointing results, owing to a few missteps, indeed outright errors. As a first step, you must **find and fix these problems**.
Next, try to improve the model's performance. Without too much difficulty, you can reach an MAE below 3 on the test set. To help you, take inspiration from the course video below. After each new experiment, assess whether your network is underfitting or overfitting, and deduce possible changes to improve its performance.
Test MAE to beat, if you enjoy a challenge: **2.20**!
```
from IPython.display import IFrame
IFrame("https://video.polymny.studio/?v=c9e5c27b-2228-488e-b64d-8fd57ed30056/", width=640, height=360)
```
| github_jupyter |
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Oversampling
[Oversampling](https://en.wikipedia.org/wiki/Oversampling) is a technique which is applied in [analog-to-digital converters](https://en.wikipedia.org/wiki/Analog-to-digital_converter) to lower the average power of the quantization error. It requires a joint consideration of sampling and quantization.
### Ideal Analog-to-Digital Conversion
Let's consider the ideal sampling of a signal followed by its quantization, as given by the following block diagram

Ideal sampling is modeled by multiplying the continuous signal $x(t)$ with a series of equidistant Dirac impulses, resulting in the discrete signal $x[k] = x(k T)$, where $T$ denotes the sampling interval. The discrete signal $x[k]$ is then quantized. The output of the ideal analog-to-digital converter is the quantized discrete signal $x_\text{Q}[k]$.
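The two stages can be written out directly: evaluate $x(t)$ on the grid $k T$, then map each sample to the nearest quantization level. A minimal sketch with a mid-tread uniform quantizer (the names and values here are illustrative):

```python
import numpy as np

def ideal_adc(x, t_grid, Q):
    # Ideal sampling: evaluate the continuous signal on the grid k*T,
    # then uniform mid-tread quantization with step Q.
    xk = x(t_grid)              # x[k] = x(k T)
    xq = Q * np.round(xk / Q)   # nearest quantization level
    return xk, xq

T = 1 / 100                     # sampling interval
k = np.arange(32)
xk, xq = ideal_adc(lambda t: np.sin(2 * np.pi * 5 * t), k * T, Q=0.25)
print(np.abs(xq - xk).max())    # at most Q/2 = 0.125
```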
### Nyquist Sampling
Sampling of the continuous signal $x(t)$ leads to repetitions of the spectrum $X(j \omega) = \mathcal{F} \{ x(t) \}$ at multiples of $\omega_\text{S} = \frac{2 \pi}{T}$. We limit ourselves to a continuous, real-valued, band-limited signal $x(t) \in \mathbb{R}$ with $| X(j \omega) | = 0$ for $|\omega| > \omega_\text{C}$, where $\omega_\text{C}$ denotes its cut-off frequency. The spectral repetitions due to sampling do not overlap if the [sampling theorem](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem) $\omega_\text{S} \geq 2 \cdot \omega_\text{C}$ is fulfilled. In the case of Nyquist (critical) sampling, the sampling frequency is chosen as $\omega_\text{S} = 2 \cdot \omega_\text{C}$.
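When the sampling theorem is violated, a component above half the sampling frequency becomes indistinguishable after sampling from one folded back into the base band. A quick numerical check (illustrative values): sampled at $f_\text{S} = 10$ Hz, a 9 Hz cosine yields exactly the same samples as a 1 Hz cosine.

```python
import numpy as np

fs = 10.0                                  # sampling frequency in Hz
k = np.arange(50)
x_high = np.cos(2 * np.pi * 9 * k / fs)    # violates fs >= 2*f
x_alias = np.cos(2 * np.pi * 1 * k / fs)   # folded-back frequency |9 - fs| = 1 Hz
print(np.allclose(x_high, x_alias))        # True
```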
### Oversampling
The basic idea of oversampling is to sample the input signal at frequencies which are significantly higher than the Nyquist criterion dictates. After quantization, the signal is low-pass filtered by a discrete filter $H_\text{LP}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and resampled back to the Nyquist rate. In order to avoid aliasing due to the resampling this filter has to be chosen as an ideal low-pass
\begin{equation}
H_\text{LP}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \text{rect} \left( \frac{\Omega}{2 \, \Omega_\text{C}} \right)
\end{equation}
where $\Omega_\text{C} = \omega_\text{C} \cdot T$. For an oversampling factor of $L \in \mathbb{N}$ we have $\omega_\text{S} = L \cdot 2 \omega_\text{C}$. In this case, the resampling can be realized by keeping only every $L$-th sample, which is known as decimation. The following block diagram illustrates the building blocks of oversampled analog-to-digital conversion, where $\downarrow L$ denotes decimation by a factor of $L$

In order to conclude on the benefits of oversampling we have to derive the average power of the overall quantization error. According to our [model of the quantization error](linear_uniform_quantization_error.ipynb#Model-for-the-Quantization-Error), the quantization error $e[k]$ can be modeled as uniformly distributed white noise. Its power spectral density (PSD) is given as
\begin{equation}
\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{Q^2}{12}
\end{equation}
where $Q$ denotes the quantization step. Before the discrete low-pass filter $H_\text{LP}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$, the power of the quantization error is uniformly distributed over the entire frequency range $-\pi < \Omega \leq \pi$. However, after the ideal low-pass filter its frequency range is limited to $- \frac{\pi}{L} < \Omega \leq \frac{\pi}{L}$. The average power of the quantization error is then given as
\begin{equation}
\sigma_{e, \text{LP}}^2 = \frac{1}{2 \pi} \int\limits_{- \frac{\pi}{L}}^{\frac{\pi}{L}} \Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \; \mathrm{d}\Omega = \frac{1}{L} \cdot \frac{Q^2}{12}
\end{equation}
The average power $\sigma_x^2$ of the sampled signal $x[k]$ is not affected, since the cutoff frequency of the low-pass filter has been chosen as the upper frequency limit $\omega_\text{C}$ of the input signal $x(t)$.
In order to calculate the SNR of the oversampled analog-to-digital converter we assume that the input signal is drawn from a wide-sense stationary (WSS) uniformly distributed zero-mean random process with $x_\text{min} \leq x[k] < x_\text{max}$. With the results from our discussion of [linear uniform quantization](linear_uniform_quantization_error.ipynb#Uniformly-Distributed-Signal) and $\sigma_{e, \text{LP}}^2$ from above we get
\begin{equation}
SNR = 10 \cdot \log_{10} \left( 2^{2 w} \right) + 10 \cdot \log_{10} \left( L \right) \approx 6.02 \, w + 10 \cdot \log_{10} \left( L \right) \quad \text{in dB}
\end{equation}
where $w$ denotes the number of bits used for a binary representation of the quantization index. Hence, oversampling by a factor of $L$ yields a gain of $10 \cdot \log_{10} \left( L \right)$ dB in terms of SNR. For instance, oversampling by a factor of $L = 4$ results in an SNR which is approximately 6 dB higher. For equal SNR, the quantization step $Q$ can hence be chosen larger. In terms of the wordlength of the quantizer this corresponds to a reduction by one bit. Consequently, there is a trade-off between the accuracy of the quantizer and its sampling frequency.
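The SNR gain for a few common oversampling factors can be tabulated directly (illustrative sketch; the `6.02` dB-per-bit figure follows from $20 \log_{10} 2$):

```
import numpy as np

for L in [2, 4, 8, 16]:
    gain = 10 * np.log10(L)  # SNR gain in dB due to oversampling by L
    print(f"L = {L:2d}: gain = {gain:5.2f} dB (~{gain / 6.02:.1f} bit)")
```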
### Example
The following numerical simulation illustrates the benefit in terms of SNR for an oversampled linear uniform quantizer with $w = 16$ for the quantization of the harmonic signal $x[k] = \cos[\Omega_0 k]$.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
w = 16 # wordlength of the quantized signal
L = 2**np.arange(1, 10) # oversampling factors
N = 8192 # length of signal
Om0 = 100*2*np.pi/N # frequency of harmonic signal
Q = 1/(2**(w-1)) # quantization step
def uniform_midtread_quantizer(x, Q):
    '''Uniform mid-tread quantizer with limiter.'''
    # limiter
    x = np.copy(x)
    idx = np.where(x <= -1)
    x[idx] = -1
    idx = np.where(x > 1 - Q)
    x[idx] = 1 - Q
    # linear uniform quantization
    xQ = Q * np.floor(x/Q + 1/2)
    return xQ
def SNR_oversampled_ADC(L):
    '''Estimate SNR of oversampled analog-to-digital converter.'''
    x = (1-Q)*np.cos(Om0*np.arange(N))
    xu = (1-Q)*np.cos(Om0*np.arange(N*L)/L)
    # quantize signal
    xQu = uniform_midtread_quantizer(xu, Q)
    # low-pass filtering and decimation
    xQ = sig.resample(xQu, N)
    # estimate SNR
    e = xQ - x
    return 10*np.log10((np.var(x)/np.var(e)))
# compute SNR for oversampled ADC
SNR = [SNR_oversampled_ADC(l) for l in L]
# plot result
plt.figure(figsize=(10, 4))
plt.semilogx(L, SNR, label='SNR with oversampling')
plt.plot(L, (6.02*w+1.76)*np.ones(L.shape), label='SNR w/o oversampling')
plt.xlabel(r'oversampling factor $L$')
plt.ylabel(r'SNR in dB')
plt.legend(loc='upper left')
plt.grid()
```
**Exercise**
* What SNR can be achieved for an oversampling factor of $L=16$?
* By how many bits could the word length $w$ be reduced in order to gain the same SNR as without oversampling?
Solution: The SNR for the quantization of a uniformly distributed input signal without oversampling is $\text{SNR} \approx 6.02 w \approx 96$ dB, and with 16 times oversampling $\text{SNR}_{L} \approx 6.02 w + 10 \cdot \log_{10} (16) \approx 96 + 12$ dB. Since the [quantization of a harmonic signal](linear_uniform_quantization_error.ipynb#Harmonic-Signal) is considered, an offset of $1.76$ dB has to be added to both values. The wordlength could be reduced by 2 bits according to these numbers.
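These numbers are easy to check numerically (sketch only, using the formulas from above):

```
import numpy as np

w = 16
snr = 6.02 * w + 1.76             # harmonic signal, no oversampling
snr_os = snr + 10 * np.log10(16)  # with 16x oversampling
print(snr, snr_os)  # approx. 98.1 dB and 110.1 dB
```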
### Anti-Aliasing Filter
Besides an increased SNR, oversampling also has another benefit. In order to ensure that the input signal $x(t)$ is band-limited before sampling, a low-pass filter $H_\text{LP}(\mathrm{j}\,\omega)$ is applied in typical analog-to-digital converters. This is illustrated in the following

The filter $H_\text{LP}(\mathrm{j}\,\omega)$ is also known as [anti-aliasing filter](https://en.wikipedia.org/wiki/Anti-aliasing_filter). The ideal low-pass filter is given as $H_\text{LP}(\mathrm{j}\,\omega) = \text{rect}\left( \frac{\omega}{\omega_\text{S}} \right)$. The ideal $H_\text{LP}(\mathrm{j}\,\omega)$ can only be approximated in the analog domain. Since the sampling rate is higher than the Nyquist rate, there is no need for a steep slope of the filter in order to avoid aliasing. However, the pass-band of the filter within $|\omega| < |\omega_\text{C}|$ has to be flat.
Before decimation, the discrete filter $H_\text{LP}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ has to remove the spectral contributions that may lead to aliasing. However, a discrete filter $H_\text{LP}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ with steep slope can be realized much easier than in the analog domain.
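In practice, the combination of discrete low-pass filtering and downsampling is available as a single operation; for instance, `scipy.signal.decimate` applies an anti-aliasing filter before discarding samples (illustrative sketch):

```
import numpy as np
import scipy.signal as sig

L = 4  # downsampling factor
x = np.cos(2 * np.pi * 0.01 * np.arange(1024))
xd = sig.decimate(x, L, ftype='fir')  # low-pass filter, then keep every L-th sample
print(xd.shape)  # (256,)
```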
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/MachineLearning/clustering.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/MachineLearning/clustering.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=MachineLearning/clustering.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/MachineLearning/clustering.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
```
import subprocess
try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
```
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Load a pre-computed Landsat composite for input.
input = ee.Image('LANDSAT/LE7_TOA_1YEAR/2001')
# Define a region in which to generate a sample of the input.
region = ee.Geometry.Rectangle(29.7, 30, 32.5, 31.7)
# Display the sample region.
Map.setCenter(31.5, 31.0, 8)
Map.addLayer(ee.Image().paint(region, 0, 2), {}, 'region')
# Make the training dataset.
training = input.sample(**{
'region': region,
'scale': 30,
'numPixels': 5000
})
# Instantiate the clusterer and train it.
clusterer = ee.Clusterer.wekaKMeans(15).train(training)
# Cluster the input using the trained clusterer.
result = input.cluster(clusterer)
# Display the clusters with random colors.
Map.addLayer(result.randomVisualizer(), {}, 'clusters')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
```
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import colour
from colour.plotting import *
import pylab
from pylab import *
from matplotlib import path
from scipy.interpolate import interp1d
from scipy.integrate import simps, trapz
%matplotlib inline
rcParams['legend.numpoints'] = 1
# ITE Traffic Color Specification
Traf_Spec = np.array(pd.read_csv('./datasets/ITE_color_spec.csv'))
Traf_Spec = np.transpose(Traf_Spec)
Traf_Red = Traf_Spec[0:2]
Traf_Amber = Traf_Spec[3:5]
Traf_Green = Traf_Spec[6:8]
Blackbody_xy = np.array(pd.read_csv('./datasets/BlackBody_xy.csv'))
Blackbody_xy = np.transpose(Blackbody_xy)
# Color Calculation Conclusion - Target inside Green A, nominal 0.2, 0.6
CIE1931 = np.array(pd.read_csv('./datasets/CIE1931_1nm.csv'))
CIE1931 = np.transpose(CIE1931)
CIE1931_x = CIE1931[0,:]
CIE1931_y = CIE1931[1,:]
CIE_1931_chromaticity_diagram_plot(standalone = False)
#planckian_locus_CIE_1931_chromaticity_diagram_plot[()]
plt.xlabel('x', fontsize = 20)
plt.ylabel('y', fontsize = 20)
plt.tick_params(axis='x', labelsize=15)
plt.tick_params(axis='y', labelsize=15)
plt.plot(0.464, 0.523, 'ro', markersize = 10, label = 'LE174-H00-N50-2A CW7 DOE')
plt.plot(0.511, 0.477, 'mo', markersize = 10, label = 'LE174-H00-N30 (PC Cover CW8) DOE')
plt.plot(0.531, 0.464, 'bo', markersize = 10, label = 'LE174-H00-N30-2A CW9 DOE')
plt.plot(0.562, 0.432, 'wo', markersize = 10, label = 'PC Converted Amber LED')
plt.plot(0.45, 0.41, 'co', markersize = 10, label = '3000K Blackbody Source')
plt.plot(0.35, 0.355, 'go', markersize = 10, label = '5000K Blackbody Source')
plt.plot(Blackbody_xy[0], Blackbody_xy[1], '--', color = 'black', linewidth = 0.5)
plt.plot(Traf_Red[0], Traf_Red[1], '-', color='white', linewidth = 2)
plt.plot(Traf_Amber[0], Traf_Amber[1], '-', color ='white', linewidth=2)
plt.plot(Traf_Green[0], Traf_Green[1], '-', color ='white', linewidth=2)
plt.xlabel('x', fontsize = 20)
plt.ylabel('y', fontsize = 20)
plt.grid(True)
plt.legend(loc=1, fontsize =15)
plt.xlim(-.1,.9), plt.ylim(-.1,.9)
plt.show()
# Color Calculation Conclusion - Target inside Green A, nominal 0.2, 0.6
CIE1931 = np.array(pd.read_csv('./datasets/CIE1931_1nm.csv'))
CIE1931 = np.transpose(CIE1931)
CIE1931_x = CIE1931[0,:]
CIE1931_y = CIE1931[1,:]
CIE_1931_chromaticity_diagram_plot(standalone = False)
#planckian_locus_CIE_1931_chromaticity_diagram_plot[()]
plt.xlabel('x', fontsize = 20)
plt.ylabel('y', fontsize = 20)
plt.tick_params(axis='x', labelsize=15)
plt.tick_params(axis='y', labelsize=15)
plt.plot(0.464, 0.523, 'ro', markersize = 10, label = 'LE174-H00-N50-2A CW7 DOE')
plt.plot(0.511, 0.477, 'mo', markersize = 10, label = 'LE174-H00-N30 (PC Cover CW8) DOE')
plt.plot(0.531, 0.464, 'bo', markersize = 10, label = 'LE174-H00-N30-2A CW9 DOE')
plt.plot(0.562, 0.432, 'wo', markersize = 10, label = 'PC Converted Amber LED')
plt.plot(0.45, 0.41, 'co', markersize = 10, label = '3000K Blackbody Source')
plt.plot(0.35, 0.355, 'go', markersize = 10, label = '5000K Blackbody Source')
plt.plot(Blackbody_xy[0], Blackbody_xy[1], '--', color = 'black', linewidth = 0.5)
plt.plot(Traf_Red[0], Traf_Red[1], '-', color='white', linewidth = 2)
plt.plot(Traf_Amber[0], Traf_Amber[1], '-', color ='white', linewidth=2)
plt.plot(Traf_Green[0], Traf_Green[1], '-', color ='white', linewidth=2)
plt.xlabel('x', fontsize = 20)
plt.ylabel('y', fontsize = 20)
plt.grid(True)
plt.legend(loc=1, fontsize =15)
plt.xlim(.3,.9), plt.ylim(.3,.6)
plt.show()
night_glow = np.array(pd.read_csv('./datasets/Night_glow.csv'))
night_glow = np.transpose(night_glow)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('F($\lambda$) [10$^{-17}$ erg s$^{-1}$ cm$^{-2}$ $\AA$ arcsec$^{-2}$]', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(night_glow[0], night_glow[1], '-', color = 'b', linewidth = 1)
plt.grid(True)
plt.annotate('[O1]', (520, 6), color ='r')
plt.annotate('Na' , (580, 4.2), color ='r')
plt.annotate('[O1]', (635, 4.2), color ='r')
plt.xlim(400, 1000)
plt.show()
```
The above plot shows measurements of our atmosphere's natural light emissions
at night, also known as night glow. We can observe that the blue 400-550 nm
band has little natural emission, whereas the rest of the terrestrial visible
light spectrum shows intense emissions.
Although blue is not naturally emitted, it scatters very easily in our atmosphere.
This is why artificial blue light so greatly impacts terrestrial viewing of the
night sky.
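Since Rayleigh scattering scales as $\lambda^{-4}$, the relative scattering of blue versus red light is easy to estimate (illustrative numbers, comparing 450 nm and 700 nm):

```
# Rayleigh scattering efficiency scales as 1/wavelength^4
blue, red = 450.0, 700.0  # wavelengths in nm
ratio = (red / blue) ** 4
print(round(ratio, 1))  # blue scatters ~5.9x more strongly than red
```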
```
thinned_CCD = np.array(pd.read_csv('./datasets/Thinned_CCD_Response.csv'))
thinned_CCD = np.transpose(thinned_CCD)
thinned_CCD[0] = thinned_CCD[0]*1000
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('QE', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(thinned_CCD[0], thinned_CCD[1], '-', color = 'green', linewidth = 1)
plt.grid(True)
plt.xlim(400, 1000)
plt.ylim(0,1)
plt.show()
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
if 1:
    host = host_subplot(111, axes_class=AA.Axes)
    plt.subplots_adjust(right=0.75)
    par1 = host.twinx()
    #par2 = host.twinx()
    offset = 60
    #new_fixed_axis = par2.get_grid_helper().new_fixed_axis
    #par2.axis["right"] = new_fixed_axis(loc="right", axes=par2, offset=(offset, 0))
    #par2.axis["right"].toggle(all=True)
    #host.set_xlim(0, 2)
    #host.set_ylim(0, 1.0)
    host.set_xlabel("Wavelength (nm)")
    host.set_ylabel("Irradiance")
    par1.set_ylabel("QE")
    #par2.set_ylabel("Velocity")
    p1, = host.plot(night_glow[0], night_glow[1], label="Irradiance", linewidth = 0.3)
    p2, = par1.plot(thinned_CCD[0], thinned_CCD[1], label="QE")
    #p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")
    #par1.set_ylim(0, 4)
    #par2.set_ylim(1, 65)
    #host.legend()
    host.axis["left"].label.set_color(p1.get_color())
    par1.axis["right"].label.set_color(p2.get_color())
    #par2.axis["right"].label.set_color(p3.get_color())
    plt.xlim(400, 1000)
    plt.draw()
    plt.show()
from mpl_toolkits.axes_grid1 import host_subplot
import matplotlib.pyplot as plt
host = host_subplot(111)
par = host.twinx()
host.set_xlabel("Wavelength (nm)")
host.set_ylabel("Irradiance")
par.set_ylabel("QE")
p1, = host.plot(night_glow[0], night_glow[1], label="Irradiance", linewidth = 0.3)
p2, = par.plot(thinned_CCD[0], thinned_CCD[1], label="QE")
#leg = plt.legend()
host.yaxis.get_label().set_color(p1.get_color())
#leg.texts[0].set_color(p1.get_color())
par.yaxis.get_label().set_color(p2.get_color())
#leg.texts[1].set_color(p2.get_color())
plt.xlim(400, 1000)
par.set_ylim(0,1 )
plt.show()
Sloan_Filters = np.array(pd.read_csv('./datasets/Sloan_Filters.csv'))
Sloan_Filters = np.transpose(Sloan_Filters)
wl_sloan = pd.Series(Sloan_Filters[0, 3:1804]).astype(float)
u_prime = pd.Series(Sloan_Filters[1, 3:1804]).astype(float)
g_prime = pd.Series(Sloan_Filters[2, 3:1804]).astype(float)
r_prime = pd.Series(Sloan_Filters[3, 3:1804]).astype(float)
i_prime = pd.Series(Sloan_Filters[4, 3:1804]).astype(float)
z_prime = pd.Series(Sloan_Filters[5, 3:1804]).astype(float)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Transmission', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
#plt.plot(wl_sloan, u_prime, '-', color = 'm', linewidth = 1, label = "u'")
plt.plot(wl_sloan, g_prime, '-', color = 'c', linewidth = 1, label = "g'")
plt.plot(wl_sloan, r_prime, '-', color = 'r', linewidth = 1, label = "r'")
plt.plot(wl_sloan, i_prime, '-', color = 'g', linewidth = 1, label = "i'")
plt.plot(wl_sloan, z_prime, '-', color = 'b', linewidth = 1, label = "z'")
plt.ylim(0, 100)
plt.xlim(400, 1000)
plt.legend(loc=3)
plt.grid(True)
plt.show()
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
if 1:
    host = host_subplot(111, axes_class=AA.Axes)
    plt.subplots_adjust(right=0.75)
    par1 = host.twinx()
    par2 = host.twinx()
    offset = 60
    new_fixed_axis = par2.get_grid_helper().new_fixed_axis
    par2.axis["right"] = new_fixed_axis(loc="right", axes=par2, offset=(offset, 0))
    par2.axis["right"].toggle(all=True)
    host.set_xlabel("Wavelength (nm)")
    host.set_ylabel("Irradiance")
    par1.set_ylabel("QE")
    par2.set_ylabel("Transmission")
    p1, = host.plot(night_glow[0], night_glow[1], label="Irradiance", linewidth = 0.3)
    p2, = par1.plot(thinned_CCD[0], thinned_CCD[1], label="QE", linewidth = 0.4)
    p3, = par2.plot(wl_sloan, g_prime, label="Transmission", color = 'c')
    plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
    plt.ylim(0, 7)
    par1.set_ylim(0, 1)
    par2.set_ylim(0, 100)
    #host.legend()
    host.axis["left"].label.set_color(p1.get_color())
    par1.axis["right"].label.set_color(p2.get_color())
    par2.axis["right"].label.set_color(p3.get_color())
    plt.xlim(400, 1000)
    plt.draw()
    plt.show()
#Rayleigh Scattering
wl_rs = pd.Series(range(400, 1000, 5))
rs = 1/(wl_rs**4)
rs = rs/max(rs)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Normalized Scattering Efficiency', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(wl_rs, rs, '-', color = 'm', linewidth = 1)
plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
plt.ylim(0, 1)
plt.xlim(400, 1000)
plt.annotate('Rayleigh Scattering Efficiency ~ $\lambda^{-4}$', (600, 0.8), fontsize = 12)
plt.grid(True)
plt.show()
CW_LED = np.array(pd.read_csv('./datasets/LE174_H00_N50_2A_DOE.csv'))
CW_LED = np.transpose(CW_LED)
CW_wl = CW_LED[0, 20:]
CW_rs = CW_LED[2, 20:]
CW_rs = CW_rs/max(CW_rs)
WW_LED = np.array(pd.read_csv('./datasets/LE174_H00_N30_2A_29_DOE.csv'))
WW_LED = np.transpose(WW_LED)
WW_wl = WW_LED[0, 20:]
WW_rs = WW_LED[2, 20:]
WW_rs = WW_rs/max(WW_rs)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Normalized Irradiance', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(wl_rs, rs, '-', color = 'm', linewidth = .5)
plt.plot(CW_wl, CW_rs, '-', color = 'r', linewidth = 1)
plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
plt.ylim(0, 1)
plt.xlim(400, 1000)
plt.annotate('Cool White LED', (800, 0.8), fontsize = 12)
plt.grid(True)
plt.show()
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Normalized Irradiance', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(wl_rs, rs, '-', color = 'm', linewidth = .5)
plt.plot(WW_wl, WW_rs, '-', color = 'r', linewidth = 1)
plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
plt.ylim(0, 1)
plt.xlim(400, 1000)
plt.annotate('Warm White LED', (800, 0.8), fontsize = 12)
plt.grid(True)
plt.show()
PC_Red = np.array(pd.read_csv('./datasets/PC_Amber.csv'))
PC_Red = np.transpose(PC_Red)
PC_Red_wl = PC_Red[0, 24:].astype(float)
PC_Red = PC_Red[1, 24:].astype(float)
PC_Amber = np.array(pd.read_csv('./datasets/Philips_Amber_PC_LED.csv'))
PC_Amber = np.transpose(PC_Amber)
PC_Amber_wl = pd.Series(PC_Amber[0, 139:])
PC_Amber = pd.Series(PC_Amber[1, 139:])
PC_Amber = PC_Amber/max(PC_Amber)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Normalized Irradiance', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(wl_rs, rs, '-', color = 'm', linewidth = .5)
plt.plot(PC_Red_wl, PC_Red, '-', color = 'r', linewidth = 1)
plt.plot(PC_Amber_wl, PC_Amber, '-', color = 'b', linewidth = 1)
plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
plt.ylim(0, 1)
plt.xlim(400, 1000)
plt.annotate('Phosphor Converted LEDs', (650, 0.8), fontsize = 12)
plt.grid(True)
plt.show()
WW_Fled = np.array(pd.read_csv('./datasets/WW_CW9.csv'))
WW_Fled = np.transpose(WW_Fled)
WW_Fled_wl = pd.Series(WW_Fled[0, 106:]).astype(float)
WW_Fled = pd.Series(WW_Fled[1, 106:]).astype(float)
WW_Fled = WW_Fled/max(WW_Fled)
CW_Fled = np.array(pd.read_csv('./datasets/LE174_H00_N50_2A_CW9_DOE.csv'))
CW_Fled = np.transpose(CW_Fled)
CW_Fled_wl = CW_Fled[0, 20:]
CW_Fled = CW_Fled[2, 20:]
CW_Fled = CW_Fled/max(CW_Fled)
plt.xlabel('Wavelength (nm)', fontsize = 12)
plt.ylabel('Normalized Irradiance', fontsize = 12)
plt.tick_params(axis='x', labelsize=10)
plt.tick_params(axis='y', labelsize=10)
plt.plot(wl_rs, rs, '-', color = 'm', linewidth = .5)
plt.plot(WW_Fled_wl, WW_Fled, '-', color = 'r', linewidth = 1)
plt.plot(CW_Fled_wl, CW_Fled, '-', color = 'b', linewidth = 1)
plt.fill_between(wl_sloan, g_prime, color = 'c', alpha = 0.4)
plt.ylim(0, 1)
plt.xlim(400, 1000)
plt.annotate('Cool White and Warm White Filtered LEDs', (624, 0.9), fontsize = 10)
plt.grid(True)
plt.show()
```
```
# !pip install numpy --upgrade
!pip install backoff
!git clone https://github.com/solpaul/fpl-prediction.git
%cd fpl-prediction/
from fpl_predictor.util import *
import pandas as pd
import numpy as np
from tqdm import tqdm
from IPython.display import clear_output
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
print(tf.__version__)
print(np.__version__)
# from numpy.lib.stride_tricks import sliding_window_view
!nvidia-smi
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
# path to project directory
path = Path('./')
# read in training dataset
train_df = pd.read_csv(path/'fpl_predictor/data/train_v8.csv',
index_col=0,
dtype={'season':str,
'squad':str,
'comp':str})
train_df.info()
# some players with the same names and after transfers causing duplicates
# delete them for now
train_df = train_df[~((train_df['player'] == 'Ben Davies') & (train_df['team'] == 'Liverpool'))]
# train_df = train_df[~((train_df['player'] == 'Dale Stephens') & (train_df['kickoff_time'] == '2020-09-20T13:00:00Z'))]
# train_df = train_df[~((train_df['player'] == 'Ross Barkley') & (train_df['kickoff_time'] == '2020-09-26T16:30:00Z'))]
# can try only using data when xg/xa is available
# still seems to do better with the additional incomplete data, but getting marginal
# train_df = train_df[train_df['season'] != '1617']
train_df.shape
train_df[train_df.duplicated(subset=['date','player','team'], keep=False)]
sum(train_df.duplicated(subset=['date','player','team'], keep=False))
# enhanced stats are NA if the player didn't have any minutes in the match
# also NA for 16/17 season
# set all these to 0
train_df[['xg', 'xa']] = train_df[['xg', 'xa']].fillna(0)
# one-hot categorical variables
train_df['s'] = train_df['season']
train_df = pd.get_dummies(train_df, columns=['position', 's'])
train_df
valid_season = '2021'
valid_gw = 20
valid_len = 6
identifiers = ['player', 'team', 'season', 'gw', 'kickoff_time']
cont_vars = ['minutes',
#'total_points_team', 'goals_scored',
'xg', 'xa',
'xg_team', 'xg_team_conceded']#,
# 'goals_scored_team', 'goals_scored_team_conceded']
cat_vars = ['position_1', 'position_2', 'position_3', 'position_4',
's_1617',
's_1718', 's_1819', 's_1920', 's_2021',
'was_home']
# could try adding gw in some way
# may adapt better to different stages of season
next_vars = ['next_was_home']#'next_minutes']
features_opponent = [
# 'total_points_team_pg_last_all_opponent',
# 'total_points_team_pg_last_10_opponent',
# 'total_points_team_pg_last_4_opponent',
# 'total_points_team_pg_last_2_opponent',
# 'total_points_team_conceded_pg_last_10_opponent',
# 'total_points_team_conceded_pg_last_20_opponent',
# 'goals_scored_team_pg_last_10_opponent',
# 'goals_scored_team_pg_last_20_opponent',
# 'goals_scored_team_conceded_pg_last_10_opponent',
# 'goals_scored_team_conceded_pg_last_20_opponent',
'xg_team_pg_last_10_opponent',
'xg_team_pg_last_20_opponent',
'xg_team_conceded_pg_last_10_opponent',
'xg_team_conceded_pg_last_20_opponent'
]
# adding opponent goals scored seems to help it converge very quickly (not quite as good)
features_next_opponent = ['next_' + x for x in features_opponent]
dep_var = ['total_points']
fields = cont_vars + cat_vars + next_vars + features_opponent + features_next_opponent + dep_var
dims = len(fields)
# lag points needed for opponent
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points', 'goals_scored', 'xg'], [10, 20])#'all', 10, 4, 2])
# dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(lag_train_df, cat_vars + identifiers, cont_vars,
[], features_opponent, dep_var,
valid_season, valid_gw, valid_len)
# new teams as opposition have NA, replace with 0
train_valid_df[features_opponent] = train_valid_df[features_opponent].fillna(0)
# add season match numbers for players
train_valid_df = train_valid_df.sort_values(['kickoff_time', 'team'])
train_valid_df['season_match_no'] = train_valid_df.groupby(['player', 'season'])['kickoff_time'].apply(lambda x: (~pd.Series(x).duplicated()).cumsum())
# combine with season to get overall match numbers
train_valid_df['overall_match_no'] = train_valid_df['season'] + [str(x).zfill(2) for x in train_valid_df['season_match_no']]
# at some point I really should try normalising the continuous variables
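# Illustrative aside (hypothetical, not used by the pipeline below): the
# continuous variables could be z-scored with statistics computed on the
# training rows only, e.g. on a toy column:
_demo = pd.DataFrame({'minutes': [0.0, 45.0, 90.0]})
_mu, _sd = _demo['minutes'].mean(), _demo['minutes'].std()
print(((_demo['minutes'] - _mu) / _sd).round(2).tolist())  # [-1.0, 0.0, 1.0]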
def flattened_dataframes(df, fields, features_opponent, features_next_opponent,
                         valid_season, valid_gw):
    # training set is everything up to (and including) validation gw
    train_base = df.copy()[(df['season'] < valid_season) |
                           ((df['season'] == valid_season) &
                            (df['gw'] < valid_gw))]
    train = train_base.copy()
    # add next gameweek's minutes
    # NOT DOING THIS ANYMORE
    # train['next_minutes'] = (train.groupby(['player'])['minutes']
    #                          .apply(lambda x: x.shift(-1)
    #                                 .rolling(1, min_periods = 0).sum()))
    # add next gameweek's was_home
    train['next_was_home'] = (train.groupby(['player'])['was_home']
                              .apply(lambda x: x.shift(-1)
                                     .rolling(1, min_periods = 0).sum()))
    # add next opponent stats
    for feature, feature_next in zip(features_opponent, features_next_opponent):
        train[feature_next] = (train.groupby(['player'])[feature]
                               .apply(lambda x: x.shift(-1)
                                      .rolling(1, min_periods = 0).sum()))
    # one row of data for each player
    train = train.set_index(['player', 'overall_match_no']).unstack()[fields]
    # for validation we'll loop through each kickoff time
    valid_df = df.copy()[(df['season'] == valid_season) &
                         (df['gw'] >= valid_gw)]
    valid_kickoffs = valid_df['kickoff_time'].unique()
    valid = []
    for kickoff in valid_kickoffs:
        # for each kickoff time we take all the players
        valid_rows = valid_df[valid_df['kickoff_time'] == kickoff]
        valid_players = valid_rows['player'].unique()
        # we want all previous data from the train portion for each player
        train_rows = train_base[train_base['player'].isin(valid_players)]
        # concat the valid and train portions
        combined = train_rows.append(valid_rows, ignore_index=True)
        # add next gameweek's minutes
        # NOT DOING THIS ANYMORE
        # combined['next_minutes'] = (combined.groupby(['player'])['minutes']
        #                             .apply(lambda x: x.shift(-1)
        #                                    .rolling(1, min_periods = 0).sum()))
        # add next gameweek's was_home
        combined['next_was_home'] = (combined.groupby(['player'])['was_home']
                                     .apply(lambda x: x.shift(-1)
                                            .rolling(1, min_periods = 0).sum()))
        # add next opponent stats
        for feature, feature_next in zip(features_opponent, features_next_opponent):
            combined[feature_next] = (combined.groupby(['player'])[feature]
                                      .apply(lambda x: x.shift(-1)
                                             .rolling(1, min_periods = 0).sum()))
        # now add flattened version to list
        valid.append(combined.set_index(['player', 'overall_match_no']).unstack()[fields])
    # concat all to get the validation set
    # we will only take one sequence of the last window_size per row
    valid = pd.concat(valid)
    # make sure it is in the right order
    valid = valid[fields]
    return train, valid
train, valid = flattened_dataframes(train_valid_df, fields, features_opponent,
features_next_opponent, valid_season, valid_gw)
print(train.shape)
print(train)
print(train.iloc[1])
print(valid.shape)
print(valid)
print(valid.iloc[5])
# add columns for minutes shifted back once
# gameweeks = valid.shape[1]//dims
# valid_shifted = valid.iloc[:,:gameweeks].shift(-1,axis=1)
# valid_shifted.iloc[:,-1] = valid_shifted.iloc[:,-2]
# valid = pd.concat([valid, valid_shifted], axis=1)
for column in valid.columns.get_level_values(0).unique():
    print(column + " - " + str(len(valid.iloc[0][column].dropna())))
# for column in valid.columns.get_level_values(0).unique():
#     print(column + " - " + str(len(valid.loc['Emiliano Martínez'][column].iloc[0].dropna())))
# need to reshape into no of weeks x no of fields
for row in valid[3:4].iterrows():
    one_player = row[1].to_numpy()
# couldn't figure how to do this with reshape or similar
# so just loop through and stack
length = len(one_player)//(dims)
array = [one_player[i*length:(i+1)*length] for i in range(dims)]
array = np.stack(array, axis=1)
print(array[155:])
print(array.shape)
array.shape
# beware duplicates...
# lag_train_df[lag_train_df.duplicated(subset=['player', 'overall_match_no'], keep=False)].head(10)
# create windows for each player
# window_size long sequences of vectors
window_size = 38
batch_size = 64
shuffle_buffer_size = 1000
def windowed_dataset(df, dims, window_size, batch_size, shuffle_buffer_size,
                     valid=False, valid_len=None):
    players = []
    for row in df.iterrows():
        # remove NaNs (most players haven't played maximum possible games)
        # (also need to remove validation NAs)
        one_player = row[1].dropna().to_numpy().astype(float)
        assert(len(one_player) % dims == 0)
        length = len(one_player)//dims
        one_player = np.stack([one_player[i*length:(i+1)*length] for i in range(dims)], axis=1)
        # pad with zeros to front of sequence
        one_player = np.vstack((np.tile([0]*dims, (window_size, 1)), one_player))
        # create window sequence indexer (split by 1)
        indexer = np.arange(window_size + 1)[None, :] + np.arange(len(one_player) - window_size)[:, None]
        # add to list
        if valid:
            # for validation we only want the final sequence
            # also only rows where final minutes value is not 0 NOT DOING THIS ANYMORE
            sequences = np.expand_dims(one_player[-window_size - 1:], axis=0)
            # mask = sequences[:,-1,0] != 0
            # players.append(sequences[mask])
            players.append(sequences)
        else:
            # for train we want all sequences
            players.append(one_player[indexer])
    # concatenate all the sequences and add a dimension
    ds = np.concatenate(players)
    # check the shape, should be (n, window_size + 1, dims)
    assert(ds.shape[1] == window_size + 1)
    assert(ds.shape[2] == dims)
    # create dataset
    ds = tf.data.Dataset.from_tensor_slices(ds)
    # shuffle it if in train
    if valid == False:
        ds = ds.shuffle(shuffle_buffer_size)
    # create input and label sequences
    # label is just the last element (total_points)
    # both of length window_size, separated by 1
    ds = ds.map(lambda w: (w[:-1], w[1:,-1]))
    # this version removes total points from the input sequences
    # ds = ds.map(lambda w: (w[:-1,:-1], w[1:,-1]))
    return ds.batch(batch_size).prefetch(1)
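# Quick sanity check of the broadcasting-based window indexer used above
# (illustrative only): for a length-5 sequence and window_size = 2 the
# indexer yields all length-3 slices shifted by one step.
_seq = np.arange(5)
_idx = np.arange(2 + 1)[None, :] + np.arange(len(_seq) - 2)[:, None]
print(_seq[_idx].tolist())  # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]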
ds_valid = windowed_dataset(valid[:1], dims, window_size, batch_size, shuffle_buffer_size,
valid=True, valid_len=valid_len)
ds_train = windowed_dataset(train[:1], dims, window_size, batch_size, shuffle_buffer_size)
ds_train = windowed_dataset(train, dims, window_size, batch_size, shuffle_buffer_size)
ds_valid = windowed_dataset(valid, dims, window_size, batch_size, shuffle_buffer_size,
valid=True, valid_len=valid_len)
# sequences = ds_train.take(1)
# for input, label in sequences:
# print(input)
# print(label)
labels = []
for _, label in ds_valid:
labels.extend(label[:,-1])
labels = np.array(labels)
labels.shape
# we need a custom validation metric
# only takes last output (i.e. the gw after the last in the sequence)
class MaeLast(tf.keras.metrics.Metric):
def __init__(self, name='mae_last', **kwargs):
super(MaeLast, self).__init__(name=name, **kwargs)
self.mae_last = self.add_weight(name='mae_last')
self.m = tf.keras.metrics.MeanAbsoluteError()
def update_state(self, y_true, y_pred, sample_weight=None):
# score only the final timestep of each sequence
y_true = y_true[:,-1]
y_pred = y_pred[:,-1]
self.mae_last.assign(self.m(y_true, y_pred))
def result(self):
return self.mae_last
def reset_state(self):
# also reset the wrapped metric so its running mean doesn't leak across epochs
self.m.reset_state()
self.mae_last.assign(0.)
class RmseLast(tf.keras.metrics.Metric):
def __init__(self, name='rmse_last', **kwargs):
super(RmseLast, self).__init__(name=name, **kwargs)
self.rmse_last = self.add_weight(name='rmse_last')
self.m = tf.keras.metrics.RootMeanSquaredError()
def update_state(self, y_true, y_pred, sample_weight=None):
# score only the final timestep of each sequence
y_true = y_true[:,-1]
y_pred = y_pred[:,-1]
self.rmse_last.assign(self.m(y_true, y_pred))
def result(self):
return self.rmse_last
def reset_state(self):
# also reset the wrapped metric so its running value doesn't leak across epochs
self.m.reset_state()
self.rmse_last.assign(0.)
# instantiate the validation metric
metric = RmseLast()
# early stopping callback
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_rmse_last',
patience=5,
restore_best_weights=True)
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
model = tf.keras.models.Sequential([
# tf.keras.layers.Conv1D(filters=32, kernel_size=5,
# strides=1, padding="causal",
# activation="relu",
# input_shape=[None, dims]),
tf.keras.layers.LSTM(128, return_sequences=True),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 20)
])
# lr_schedule = tf.keras.callbacks.LearningRateScheduler(
# lambda epoch: 1e-4 * 0.5**(epoch / 3))
# optimizer = tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9)
# try adam
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
model.compile(
# loss=tf.keras.losses.Huber(),
loss=tf.keras.losses.MeanSquaredError(),
optimizer=optimizer,
metrics=[metric])
history = model.fit(ds_train, epochs=100, validation_data=ds_valid,
callbacks=[early_stopping])#, callbacks=[lr_schedule])
preds = model.predict(ds_valid)[:,-1,0]
preds.shape
max(preds)
labels.shape
tf.keras.metrics.mean_absolute_error(labels, preds).numpy()
np.sqrt(tf.keras.metrics.mean_squared_error(labels, preds).numpy())
valid.index.array
results = pd.DataFrame({'player': valid.index.array, 'prediction': preds})
results.sort_values('prediction', ascending=False).head(50)
train_df[train_df['player'] == 'Mohamed Salah'].tail(40)
from google.colab import drive
drive.mount('/content/gdrive')
def lstm_season(df, cat_vars, cont_vars, identifiers, team_lag_vars, features_opponent,
dep_var, valid_season='2021'):
# empty list for scores and gws
scores_rmse = []
scores_mae = []
valid_gws = []
# or load them in from existing temp file
# temp_scores = pd.read_csv('/content/gdrive/My Drive/FPL Tool/temp_scores.csv', index_col=0)
# scores_rmse = temp_scores['rmse'].tolist()
# scores_mae = temp_scores['mae'].tolist()
# valid_gws = temp_scores['gw'].tolist()
valid_len = 6
window_size = 38
batch_size = 64
shuffle_buffer_size = 1000
for valid_gw in range(1, 34): #:tqdm(range(1,34)):
# dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(df, cat_vars + identifiers, cont_vars,
[], features_opponent, dep_var,
valid_season, valid_gw, valid_len)
# new teams as opposition have NA, replace with 0
train_valid_df[features_opponent] = train_valid_df[features_opponent].fillna(0)
# add season match numbers for teams
train_valid_df = train_valid_df.sort_values(['kickoff_time', 'team'])
train_valid_df['season_match_no'] = train_valid_df.groupby(['player', 'season'])['kickoff_time'].apply(lambda x: (~pd.Series(x).duplicated()).cumsum())
# # combine with season to get overall match numbers
train_valid_df['overall_match_no'] = train_valid_df['season'] + [str(x).zfill(2) for x in train_valid_df['season_match_no']]
train, valid = flattened_dataframes(train_valid_df, fields, features_opponent,
features_next_opponent, valid_season, valid_gw)
ds_train = windowed_dataset(train, dims, window_size, batch_size, shuffle_buffer_size)
ds_valid = windowed_dataset(valid, dims, window_size, batch_size, shuffle_buffer_size,
valid=True, valid_len=valid_len)
labels = []
for _, label in ds_valid:
labels.extend(label[:,-1])
labels = np.array(labels)
# instantiate the validation metric
metric = RmseLast()
# early stopping callback
# early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_rmse_last',
# patience=5,
# restore_best_weights=True)
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(128, return_sequences=True),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 20)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
model.compile(
loss=tf.keras.losses.MeanSquaredError(),
optimizer=optimizer,
metrics=[metric])
history = model.fit(ds_train, epochs=30, validation_data=ds_valid)#,
# callbacks=[early_stopping])
preds = model.predict(ds_valid)[:,-1,0]
gw_mae = tf.keras.metrics.mean_absolute_error(labels, preds).numpy()
gw_rmse = np.sqrt(tf.keras.metrics.mean_squared_error(labels, preds).numpy())
scores_rmse.append(gw_rmse)
scores_mae.append(gw_mae)
valid_gws.append(valid_gw)
scores = pd.DataFrame({'gw':valid_gws, 'rmse':scores_rmse, 'mae':scores_mae})
scores.to_csv('/content/gdrive/My Drive/FPL Tool/temp_scores.csv')
clear_output(wait=True)
for gw, rmse, mae in zip(valid_gws, scores_rmse, scores_mae):
print('GW%d RMSE %f MAE %f' % (gw, rmse, mae))
return [scores_rmse, scores_mae]
scores = lstm_season(lag_train_df, cat_vars, cont_vars, identifiers, team_lag_vars, features_opponent,
dep_var, valid_season='2021')
results = pd.read_csv('/content/gdrive/My Drive/FPL Tool/temp_scores.csv', index_col=0)['mae']
plt.plot(results)
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.45, 'Season Avg MAE: %.2f' % np.mean(results), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
results
```
# Load Packages
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
```
# Pseudocode
The pseudocode below briefly explains how the whole process works:
---------------
```python
data = raw_data()
#assign activity label
prevRow = null
foreach row in data:
if row.ActivityLabel is null:
row.ActivityLabel = prevRow.ActivityLabel
prevRow = row
#group according to its activity
groups = grouping(data)
#apply sliding windows to each group
windows = []
foreach g in groups:
sp = split(g, SECONDS_WINDOW)
if sp is not empty:
windows += sp
#convert each window to a normalised count vector over sensor IDs
vectors = []
foreach subg in windows:
vector = zeros(num_sensors)
foreach item in subg:
if item.SensorValue == "ON":
vector[item.sensorID] += 1
normalise(vector)
vectors.append(vector)
return vectors
```
---------------
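As a concrete illustration, the windowing-and-counting idea above can be sketched as a small self-contained Python function. The toy events and sensor IDs here are made up for the example; the real pipeline below works on the annotated dataframe.

```python
# Toy version of the pseudocode: split one activity's events into fixed
# time windows, then turn each window into a normalised count vector over
# sensor IDs. Events are (timestamp_seconds, sensor_id, sensor_value).
SECONDS_WINDOW = 15 * 60

def to_vectors(events, sensor_ids, window=SECONDS_WINDOW):
    if not events:
        return []
    start = events[0][0]
    index = {s: i for i, s in enumerate(sensor_ids)}
    windows = {}
    for t, sensor, value in events:
        # bucket each event by its window relative to the activity start
        windows.setdefault((t - start) // window, []).append((sensor, value))
    vectors = []
    for _, items in sorted(windows.items()):
        vec = [0.0] * len(sensor_ids)
        total = 0.0
        for sensor, value in items:
            if value == "ON":  # count only activations
                vec[index[sensor]] += 1
                total += 1
        vectors.append([v / total if total else 0.0 for v in vec])
    return vectors

events = [(0, "M01", "ON"), (10, "M02", "ON"), (20, "M01", "OFF"),
          (1000, "M02", "ON")]
print(to_vectors(events, ["M01", "M02"]))  # -> [[0.5, 0.5], [0.0, 1.0]]
```

The two `ON` events in the first 15-minute window split the mass evenly between the two sensors; the lone later event forms its own window.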
```
fileName = "annotated.sample"
dataDir = '../Data/twor.2009/'
columns = ["Date", "Time", "SensorID", "SensorValue", "ActivityLabel", "Status"]
raw = pd.read_csv(dataDir+fileName, sep=" ", names=columns, header=None)
# data = raw
data = raw[(raw["ActivityLabel"].notna()) | (raw["SensorValue"] != "OFF")].copy()  # copy so adding columns below doesn't trigger SettingWithCopyWarning
data['DateTime'] = data['Date'] + " "+data['Time']
data['Timestamp'] = pd.to_datetime(data['DateTime'], format='%Y-%m-%d %H:%M:%S.%f')
len(data)
def transform(d):
g = []
activity = None
currGroup = []
d["na"] = d[["ActivityLabel"]].isna()
for i, row in d.iterrows():
if not row["na"]:
currGroup.append(row)
activity = row["ActivityLabel"]
if row['Status'] == "end":
g.append(currGroup)
currGroup = []
else:
row["ActivityLabel"] = activity
row["Status"] = "working"
currGroup.append(row)
g.append(currGroup)
return g
g = transform(data)
SECONDS_WINDOW = 15*60
def toVector(act, cols):
if (len(act)==0):
return []
rows = []
start = act[0]["Timestamp"].timestamp()
activityLabel = act[0]["ActivityLabel"]
actinwindow = {}
for a in act:
t = a["Timestamp"].timestamp()
interval = ((t-start)//SECONDS_WINDOW)
# group events by the window they fall into, relative to the activity start
if interval in actinwindow:
actinwindow[interval].append(a)
else:
actinwindow[interval] = [a]
colIndex = {}
for c in range(len(cols)):
colIndex[cols[c]] = c
vec = []
for interval in actinwindow:
v = np.zeros(len(cols))
normalise = 0
for a in actinwindow[interval]:
if a["SensorValue"] in ["O", "ON", "OF", "ONF", "OPEN", "PRESENT"]:
i = colIndex[a["SensorID"]]
v[i] += 1
normalise += 1
elif a["SensorValue"] in ["OFFF", "OFF", "CLOSE", "ABSENT"]:
continue
else:
# check if the value is numeric https://stackoverflow.com/questions/354038/how-do-i-check-if-a-string-is-a-number-float
value = a["SensorValue"].replace('F','',1)
isNumeric = value.replace('.','',1).isdigit()
if isNumeric:
value = float(value)
i = colIndex[a["SensorID"]]
v[i] += value
normalise += value
else:
raise Exception('Unknown keyword: '+a["SensorValue"])
# guard against windows with no positive readings (avoids division by zero)
vec.append([value/normalise if normalise else 0.0 for value in v])
rows = pd.DataFrame(vec, columns=cols)
rows["ActivityName"] = activityLabel
return rows
vColumns = np.insert(np.sort(raw.SensorID.unique()), 0, "ActivityName", axis=0)
vectors = pd.DataFrame(columns=vColumns)
for activity in g:
v = toVector(activity, vColumns)
if len(v) > 0:
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
vectors = pd.concat([vectors, v], ignore_index=True)
vectors.head()
# TODO: optimise by using aggregate function
# GroupID = 1
# def aggActivity(d):
# global GroupID
# na = d.isna()
# if not na["ActivityLabel"] and d["Status"] == "end":
# GroupID += 1
# return GroupID - 1
# return GroupID
# data['GroupID'] = data.apply(aggActivity, axis=1)
# SECONDS_WINDOW = 10*60
# StartInterval = None
# def splitTimestamp(d):
# global StartInterval
# global SECONDS_WINDOW
# na = d.isna()
# if not na["ActivityLabel"] and d["Status"] == "begin":
# StartInterval = d["Timestamp"].timestamp()
# t = d["Timestamp"].timestamp()
# interval = ((t-StartInterval)//SECONDS_WINDOW)
# return interval
# data['Interval'] = data.apply(splitTimestamp, axis=1)
# print(data.columns)
# g = data.groupby(["GroupID", "Interval", "SensorID"])
vectors.to_csv(dataDir+fileName+".feat", index=False)
```
##### Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Linear Mixed Effects Models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Linear_Mixed_Effects_Models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
A linear mixed effects model is a simple approach for modeling structured linear relationships (Harville, 1977; Laird and Ware, 1982). Each data point consists of inputs of varying type—categorized into groups—and a real-valued output. A linear mixed effects model is a _hierarchical model_: it shares statistical strength across groups in order to improve inferences about any individual data point.
In this tutorial, we demonstrate linear mixed effects models with a real-world example in TensorFlow Probability. We'll use the JointDistributionCoroutine and Markov Chain Monte Carlo (`tfp.mcmc`) modules.
### Dependencies & Prerequisites
```
#@title Import and set ups{ display-mode: "form" }
import csv
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
dtype = tf.float64
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.style.use('ggplot')
```
### Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
```
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
```
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)
## Data
We use the `InstEval` data set from the popular [`lme4` package in R](https://CRAN.R-project.org/package=lme4) (Bates et al., 2015). It is a data set of courses and their evaluation ratings. Each course includes metadata such as `students`, `instructors`, and `departments`, and the response variable of interest is the evaluation rating.
```
def load_insteval():
"""Loads the InstEval data set.
It contains 73,421 university lecture evaluations by students at ETH
Zurich with a total of 2,972 students, 2,160 professors and
lecturers, and several student, lecture, and lecturer attributes.
Implementation is built from the `observations` Python package.
Returns:
Tuple of np.ndarray `x_train` with 73,421 rows and 7 columns and
dictionary `metadata` of column headers (feature names).
"""
url = ('https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/'
'lme4/InstEval.csv')
with requests.Session() as s:
download = s.get(url)
f = download.content.decode().splitlines()
iterator = csv.reader(f)
columns = next(iterator)[1:]
x_train = np.array([row[1:] for row in iterator], dtype=int)  # np.int was removed in NumPy 1.24
metadata = {'columns': columns}
return x_train, metadata
```
We load and preprocess the data set. We hold out 20% of the data so we can evaluate our fitted model on unseen data points. Below we visualize the first few rows.
```
data, metadata = load_insteval()
data = pd.DataFrame(data, columns=metadata['columns'])
data = data.rename(columns={'s': 'students',
'd': 'instructors',
'dept': 'departments',
'y': 'ratings'})
data['students'] -= 1 # start index by 0
# Remap categories to start from 0 and end at max(category).
data['instructors'] = data['instructors'].astype('category').cat.codes
data['departments'] = data['departments'].astype('category').cat.codes
train = data.sample(frac=0.8)
test = data.drop(train.index)
train.head()
```
We set up the data set in terms of a `features` dictionary of inputs and a `labels` output corresponding to the ratings. Each feature is encoded as an integer and each label (evaluation rating) is encoded as a floating point number.
```
get_value = lambda dataframe, key, dtype: dataframe[key].values.astype(dtype)
features_train = {
k: get_value(train, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_train = get_value(train, key='ratings', dtype=np.float32)
features_test = {k: get_value(test, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_test = get_value(test, key='ratings', dtype=np.float32)
num_students = max(features_train['students']) + 1
num_instructors = max(features_train['instructors']) + 1
num_departments = max(features_train['departments']) + 1
num_observations = train.shape[0]
print("Number of students:", num_students)
print("Number of instructors:", num_instructors)
print("Number of departments:", num_departments)
print("Number of observations:", num_observations)
```
## Model
A typical linear model assumes independence, where any pair of data points has a constant linear relationship. In the `InstEval` data set, observations arise in groups each of which may have varying slopes and intercepts. Linear mixed effects models, also known as hierarchical linear models or multilevel linear models, capture this phenomenon (Gelman & Hill, 2006).
Examples of this phenomenon include:
+ __Students__. Observations from a student are not independent: some students may systematically give low (or high) lecture ratings.
+ __Instructors__. Observations from an instructor are not independent: we expect good teachers to generally have good ratings and bad teachers to generally have bad ratings.
+ __Departments__. Observations from a department are not independent: certain departments may generally have dry material or stricter grading and thus be rated lower than others.
To capture this, recall that for a data set of $N\times D$ features $\mathbf{X}$ and $N$ labels $\mathbf{y}$, linear regression posits the model
$$
\begin{equation*}
\mathbf{y} = \mathbf{X}\beta + \alpha + \epsilon,
\end{equation*}
$$
where there is a slope vector $\beta\in\mathbb{R}^D$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. We say that $\beta$ and $\alpha$ are "fixed effects": they are effects held constant across the population of data points $(x, y)$. An equivalent formulation of the equation as a likelihood is $\mathbf{y} \sim \text{Normal}(\mathbf{X}\beta + \alpha, \mathbf{I})$. This likelihood is maximized during inference in order to find point estimates of $\beta$ and $\alpha$ that fit the data.
A linear mixed effects model extends linear regression as
$$
\begin{align*}
\eta &\sim \text{Normal}(\mathbf{0}, \sigma^2 \mathbf{I}), \\
\mathbf{y} &= \mathbf{X}\beta + \mathbf{Z}\eta + \alpha + \epsilon.
\end{align*}
$$
where there is still a slope vector $\beta\in\mathbb{R}^P$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. In addition, there is a term $\mathbf{Z}\eta$, where $\mathbf{Z}$ is a features matrix and $\eta\in\mathbb{R}^Q$ is a vector of random slopes; $\eta$ is normally distributed with variance component parameter $\sigma^2$. $\mathbf{Z}$ is formed by partitioning the original $N\times D$ features matrix in terms of a new $N\times P$ matrix $\mathbf{X}$ and $N\times Q$ matrix $\mathbf{Z}$, where $P + Q=D$: this partition allows us to model the features separately using the fixed effects $\beta$ and the latent variable $\eta$ respectively.
We say the latent variables $\eta$ are "random effects": they are effects that vary across the population (although they may be constant across subpopulations). In particular, because the random effects $\eta$ have mean 0, the data label's mean is captured by $\mathbf{X}\beta + \alpha$. The random effects component $\mathbf{Z}\eta$ captures variations in the data: for example, "Instructor \#54 is rated 1.4 points higher than the mean."
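To make the generative process concrete, here is a small pure-Python simulation of a random-intercept version of this model. The parameter values are made up for illustration; this is a sketch, not the tutorial's inference code.

```python
import random

random.seed(0)

# fixed effects: constant across the whole population
alpha, beta = 3.0, 0.5            # intercept and slope for one feature
sigma_eta, sigma_eps = 0.8, 1.0   # random-effect scale and noise scale

# random effects: one intercept per group, eta ~ Normal(0, sigma_eta^2)
num_groups = 4
eta = [random.gauss(0.0, sigma_eta) for _ in range(num_groups)]

# y = X*beta + Z*eta + alpha + eps, where Z one-hot-selects each row's group
data = []
for i in range(20):
    group = i % num_groups
    x = random.random()
    y = alpha + beta * x + eta[group] + random.gauss(0.0, sigma_eps)
    data.append((x, group, y))

print(data[:2])
```

Rows sharing a group share the same draw of `eta`, which is exactly the dependence-within-groups that a plain linear regression ignores.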
In this tutorial, we posit the following effects:
+ Fixed effects: `service`. `service` is a binary covariate corresponding to whether the course belongs to the instructor's main department. No matter how much additional data we collect, it can only take on values $0$ and $1$.
+ Random effects: `students`, `instructors`, and `departments`. Given more observations from the population of course evaluation ratings, we may be looking at new students, teachers, or departments.
In the syntax of R's lme4 package (Bates et al., 2015), the model can be summarized as
```
ratings ~ service + (1|students) + (1|instructors) + (1|departments) + 1
```
where `x` denotes a fixed effect, `(1|x)` denotes a random effect for `x`, and `1` denotes an intercept term.
We implement this model below as a JointDistribution. To have better support for parameter tracking (e.g., we want to track all the `tf.Variable` in `model.trainable_variables`), we implement the model template as `tf.Module`.
```
class LinearMixedEffectModel(tf.Module):
def __init__(self):
# Set up fixed effects and other parameters.
# These are free parameters to be optimized in E-steps
self._intercept = tf.Variable(0., name="intercept") # alpha in eq
self._effect_service = tf.Variable(0., name="effect_service") # beta in eq
self._stddev_students = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_students") # sigma in eq
self._stddev_instructors = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_instructors") # sigma in eq
self._stddev_departments = tfp.util.TransformedVariable(
1., bijector=tfb.Exp(), name="stddev_departments") # sigma in eq
def __call__(self, features):
model = tfd.JointDistributionSequential([
# Set up random effects.
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_students),
scale_identity_multiplier=self._stddev_students),
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_instructors),
scale_identity_multiplier=self._stddev_instructors),
tfd.MultivariateNormalDiag(
loc=tf.zeros(num_departments),
scale_identity_multiplier=self._stddev_departments),
# This is the likelihood for the observed.
lambda effect_departments, effect_instructors, effect_students: tfd.Independent(
tfd.Normal(
loc=(self._effect_service * features["service"] +
tf.gather(effect_students, features["students"], axis=-1) +
tf.gather(effect_instructors, features["instructors"], axis=-1) +
tf.gather(effect_departments, features["departments"], axis=-1) +
self._intercept),
scale=1.),
reinterpreted_batch_ndims=1)
])
# To enable tracking of the trainable variables via the created distribution,
# we attach a reference to `self`. Since all TFP objects sub-class
# `tf.Module`, this means that the following is possible:
# LinearMixedEffectModel()(features_train).trainable_variables
# ==> tuple of all tf.Variables created by LinearMixedEffectModel.
model._to_track = self
return model
lmm_jointdist = LinearMixedEffectModel()
# Conditioned on feature/predictors from the training data
lmm_train = lmm_jointdist(features_train)
lmm_train.trainable_variables
```
As a probabilistic graphical program, we can also visualize the model's structure in terms of its computational graph. This graph encodes dataflow across the random variables in the program, making explicit their relationships in terms of a graphical model (Jordan, 2003).
As a statistical tool, we might look at the graph in order to better see, for example, that `intercept` and `effect_service` are conditionally dependent given `ratings`; this may be harder to see from the source code if the program is written with classes, cross references across modules, and/or subroutines. As a computational tool, we might also notice latent variables flow into the `ratings` variable via `tf.gather` ops. This may be a bottleneck on certain hardware accelerators if indexing `Tensor`s is expensive; visualizing the graph makes this readily apparent.
```
lmm_train.resolve_graph()
```
## Parameter Estimation
Given data, the goal of inference is to fit the model's fixed effects slope $\beta$, intercept $\alpha$, and variance component parameter $\sigma^2$. The maximum likelihood principle formalizes this task as
$$
\max_{\beta, \alpha, \sigma}~\log p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}; \beta, \alpha, \sigma) = \max_{\beta, \alpha, \sigma}~\log \int p(\eta; \sigma) ~p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}, \eta; \beta, \alpha)~d\eta.
$$
In this tutorial, we use the Monte Carlo EM algorithm to maximize this marginal density (Dempster et al., 1977; Wei and Tanner, 1990).¹ We perform Markov chain Monte Carlo to compute the expectation of the conditional likelihood with respect to the random effects ("E-step"), and we perform gradient descent to maximize the expectation with respect to the parameters ("M-step"):
+ For the E-step, we set up Hamiltonian Monte Carlo (HMC). It takes a current state—the student, instructor, and department effects—and returns a new state. We assign the new state to TensorFlow variables, which will denote the state of the HMC chain.
+ For the M-step, we use the posterior sample from HMC to calculate an unbiased estimate of the marginal likelihood up to a constant. We then apply its gradient with respect to the parameters of interest. This produces an unbiased stochastic descent step on the marginal likelihood. We implement it with the Adam TensorFlow optimizer and minimize the negative of the marginal.
```
target_log_prob_fn = lambda *x: lmm_train.log_prob(x + (labels_train,))
trainable_variables = lmm_train.trainable_variables
current_state = lmm_train.sample()[:-1]
# For debugging
target_log_prob_fn(*current_state)
# Set up E-step (MCMC).
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.015,
num_leapfrog_steps=3)
kernel_results = hmc.bootstrap_results(current_state)
@tf.function(autograph=False, experimental_compile=True)
def one_e_step(current_state, kernel_results):
next_state, next_kernel_results = hmc.one_step(
current_state=current_state,
previous_kernel_results=kernel_results)
return next_state, next_kernel_results
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Set up M-step (gradient descent).
@tf.function(autograph=False, experimental_compile=True)
def one_m_step(current_state):
with tf.GradientTape() as tape:
loss = -target_log_prob_fn(*current_state)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
return loss
```
We perform a warm-up stage, which runs one MCMC chain for a number of iterations so that training may be initialized within the posterior's probability mass. We then run a training loop. It jointly runs the E and M-steps and records values during training.
```
num_warmup_iters = 1000
num_iters = 1500
num_accepted = 0
effect_students_samples = np.zeros([num_iters, num_students])
effect_instructors_samples = np.zeros([num_iters, num_instructors])
effect_departments_samples = np.zeros([num_iters, num_departments])
loss_history = np.zeros([num_iters])
# Run warm-up stage.
for t in range(num_warmup_iters):
current_state, kernel_results = one_e_step(current_state, kernel_results)
num_accepted += kernel_results.is_accepted.numpy()
if t % 500 == 0 or t == num_warmup_iters - 1:
print("Warm-Up Iteration: {:>3} Acceptance Rate: {:.3f}".format(
t, num_accepted / (t + 1)))
num_accepted = 0 # reset acceptance rate counter
# Run training.
for t in range(num_iters):
# run 5 MCMC iterations before every joint EM update
for _ in range(5):
current_state, kernel_results = one_e_step(current_state, kernel_results)
loss = one_m_step(current_state)
effect_students_samples[t, :] = current_state[0].numpy()
effect_instructors_samples[t, :] = current_state[1].numpy()
effect_departments_samples[t, :] = current_state[2].numpy()
num_accepted += kernel_results.is_accepted.numpy()
loss_history[t] = loss.numpy()
if t % 500 == 0 or t == num_iters - 1:
print("Iteration: {:>4} Acceptance Rate: {:.3f} Loss: {:.3f}".format(
t, num_accepted / (t + 1), loss_history[t]))
```
You can also write the warmup for-loop into a `tf.while_loop`, and the training step into a `tf.scan` or `tf.while_loop` for even faster inference. For example:
```
@tf.function(autograph=False, experimental_compile=True)
def run_k_e_steps(k, current_state, kernel_results):
_, next_state, next_kernel_results = tf.while_loop(
cond=lambda i, state, pkr: i < k,
body=lambda i, state, pkr: (i+1, *one_e_step(state, pkr)),
loop_vars=(tf.constant(0), current_state, kernel_results)
)
return next_state, next_kernel_results
```
Above, we did not run the algorithm until a convergence threshold was detected. To check whether training was sensible, we verify that the loss function indeed tends to converge over training iterations.
```
plt.plot(loss_history)
plt.ylabel(r'Loss $-\log$ $p(y\mid\mathbf{x})$')
plt.xlabel('Iteration')
plt.show()
```
We also use a trace plot, which shows the Markov chain Monte Carlo algorithm's trajectory across specific latent dimensions. Below we see that specific instructor effects indeed meaningfully transition away from their initial state and explore the state space. The trace plot also indicates that the effects differ across instructors but with similar mixing behavior.
```
for i in range(7):
plt.plot(effect_instructors_samples[:, i])
plt.legend([i for i in range(7)], loc='lower right')
plt.ylabel('Instructor Effects')
plt.xlabel('Iteration')
plt.show()
```
## Criticism
Above, we fitted the model. We now look into criticizing its fit using data, which lets us explore and better understand the model. One such technique is a residual plot, which plots the difference between the model's predictions and ground truth for each data point. If the model were correct, then their difference should be standard normally distributed; any deviations from this pattern in the plot indicate model misfit.
We build the residual plot by first forming the posterior predictive distribution over ratings, which replaces the prior distribution on the random effects with its posterior given training data. In particular, we run the model forward and intercept its dependence on prior random effects with their inferred posterior means.²
```
lmm_test = lmm_jointdist(features_test)
[
effect_students_mean,
effect_instructors_mean,
effect_departments_mean,
] = [
np.mean(x, axis=0).astype(np.float32) for x in [
effect_students_samples,
effect_instructors_samples,
effect_departments_samples
]
]
# Get the posterior predictive distribution
(*posterior_conditionals, ratings_posterior), _ = lmm_test.sample_distributions(
value=(
effect_students_mean,
effect_instructors_mean,
effect_departments_mean,
))
ratings_prediction = ratings_posterior.mean()
```
Upon visual inspection, the residuals look somewhat standard-normally distributed. However, the fit is not perfect: there is larger probability mass in the tails than a normal distribution, which indicates the model might improve its fit by relaxing its normality assumptions.
In particular, although it is most common to use a normal distribution to model ratings in the `InstEval` data set, a closer look at the data reveals that course evaluation ratings are in fact ordinal values from 1 to 5. This suggests that we should be using an ordinal distribution, or even Categorical if we have enough data to throw away the relative ordering. This is a one-line change to the model above; the same inference code is applicable.
```
plt.title("Residuals for Predicted Ratings on Test Set")
plt.xlim(-4, 4)
plt.ylim(0, 800)
plt.hist(ratings_prediction - labels_test, 75)
plt.show()
```
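To illustrate the ordinal alternative suggested above, here is a pure-Python sketch of an ordered-logistic likelihood, which maps a single latent location (playing the role of $\mathbf{X}\beta + \mathbf{Z}\eta + \alpha$) to probabilities over the ratings 1–5 via cutpoints. The cutpoint values are made up for the example; TFP also ships an `OrderedLogistic` distribution that could slot into the joint model in place of the `Normal` likelihood.

```python
import math

def ordered_logistic_probs(loc, cutpoints):
    """P(rating = k) under an ordered-logistic model with K-1 cutpoints.

    P(rating <= k) = sigmoid(cutpoint_k - loc); per-category probabilities
    are differences of consecutive cumulative probabilities.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    cdf = [sigmoid(c - loc) for c in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

# made-up cutpoints for ratings 1..5; loc would come from X*beta + Z*eta + alpha
probs = ordered_logistic_probs(loc=0.7, cutpoints=[-2.0, -0.5, 0.5, 2.0])
print(probs)       # five probabilities, one per rating
print(sum(probs))  # -> 1.0 (up to floating-point rounding)
```

Because the cumulative probabilities telescope, the five category probabilities always sum to one, and shifting `loc` upward shifts probability mass toward higher ratings.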
To explore how the model makes individual predictions, we look at the histogram of effects for students, instructors, and departments. This lets us understand how individual elements in a data point's feature vector tend to influence the outcome.
Not surprisingly, we see below that each student typically has little effect on an instructor's evaluation rating. Interestingly, we see that the department an instructor belongs to has a large effect.
```
plt.title("Histogram of Student Effects")
plt.hist(effect_students_mean, 75)
plt.show()
plt.title("Histogram of Instructor Effects")
plt.hist(effect_instructors_mean, 75)
plt.show()
plt.title("Histogram of Department Effects")
plt.hist(effect_departments_mean, 75)
plt.show()
```
## Footnotes
¹ Linear mixed effect models are a special case where we can analytically compute its marginal density. For the purposes of this tutorial, we demonstrate Monte Carlo EM, which more readily applies to non-analytic marginal densities such as if the likelihood were extended to be Categorical instead of Normal.
² For simplicity, we form the predictive distribution's mean using only one forward pass of the model. This is done by conditioning on the posterior mean and is valid for linear mixed effects models. However, this is not valid in general: the posterior predictive distribution's mean is typically intractable and requires taking the empirical mean across multiple forward passes of the model given posterior samples.
## Acknowledgments
This tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/linear_mixed_effects_models.ipynb)). We thank all contributors to writing and revising that version.
## References
1. Douglas Bates and Martin Machler and Ben Bolker and Steve Walker. Fitting Linear Mixed-Effects Models Using lme4. _Journal of Statistical Software_, 67(1):1-48, 2015.
2. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. _Journal of the Royal Statistical Society, Series B (Methodological)_, 1-38, 1977.
3. Andrew Gelman and Jennifer Hill. _Data analysis using regression and multilevel/hierarchical models._ Cambridge University Press, 2006.
4. David A. Harville. Maximum likelihood approaches to variance component estimation and to related problems. _Journal of the American Statistical Association_, 72(358):320-338, 1977.
5. Michael I. Jordan. An Introduction to Graphical Models. Technical Report, 2003.
6. Nan M. Laird and James Ware. Random-effects models for longitudinal data. _Biometrics_, 963-974, 1982.
7. Greg Wei and Martin A. Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. _Journal of the American Statistical Association_, 699-704, 1990.
| github_jupyter |
# Text
This notebook serves as supporting material for topics covered in **Chapter 22 - Natural Language Processing** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [text.py](https://github.com/aimacode/aima-python/blob/master/text.py).
```
from text import *
from utils import DataFile
```
## Contents
* Text Models
* Viterbi Text Segmentation
* Overview
* Implementation
* Example
* Decoders
* Introduction
* Shift Decoder
* Permutation Decoder
## Text Models
Before we run text processing algorithms, we need to build some word models. Those models serve as a look-up table for word probabilities. The text module implements two such models, both of which inherit from `CountingProbDist` in `learning.py`: `UnigramTextModel` and `NgramTextModel`. We supply them with a text file and they give us the frequencies of the different words.
The main difference between the two models is that the first returns the probability of a single word (e.g. the probability of the word 'the' appearing), while the second gives the probability of a *sequence* of words (e.g. the probability of the sequence 'of the' appearing).
Both models can also generate random words and sequences, respectively, sampled according to the model.
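The counting idea behind these models can be sketched in a few lines of plain Python (a simplified stand-in, not the actual `UnigramTextModel`/`NgramTextModel` implementations):

```python
from collections import Counter

def unigram_counts(words):
    # Count each word individually.
    return Counter(words)

def bigram_counts(words):
    # Count each pair of consecutive words.
    return Counter(zip(words, words[1:]))

words = "the cat sat on the mat the cat".split()
uni, bi = unigram_counts(words), bigram_counts(words)
print(uni.most_common(2))   # e.g. [('the', 3), ('cat', 2)]
print(bi.most_common(1))    # e.g. [(('the', 'cat'), 2)]
# A word's probability is its count divided by the total number of words.
print(uni['the'] / sum(uni.values()))  # 3/8 = 0.375
```

`CountingProbDist` adds smoothing and sampling on top of counts like these.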
Below we build the two models. The text we will use to build them is *Flatland*, by Edwin A. Abbott. We will load it from [here](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/EN-text/flatland.txt).
```
flatland = DataFile("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramTextModel(wordseq)
P2 = NgramTextModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
```
We see that the most used word in *Flatland* is 'the', with 2081 occurrences, while the most used sequence is 'of the', with 368 occurrences.
## Viterbi Text Segmentation
### Overview
We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the `Viterbi Segmentation` algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
The algorithm takes a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words.
### Implementation
```
%psource viterbi_segment
```
The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
The "window" is `w` and it includes the characters from *j* to *i*. We use it to "build" the following sequence: from the start to *j* and then `w`. We have previously calculated the probability from the start to *j*, so now we multiply that probability by `P[w]` to get the probability of the whole sequence. If that probability is greater than the probability we have calculated so far for the sequence from the start to *i* (`best[i]`), we update it.
### Example
The model the algorithm uses is the `UnigramTextModel`. First we will build the model using the *Flatland* text and then we will try to separate a space-devoid sentence.
```
flatland = DataFile("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramTextModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
```
The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
## Decoders
### Introduction
In this section we will try to decode ciphertext using probabilistic text models. A ciphertext is obtained by performing encryption on a text message. This encryption lets us communicate safely, as anyone who has access to the ciphertext but doesn't know how to decode it cannot read the message. We will restrict our study to <b>Monoalphabetic Substitution Ciphers</b>. These are primitive forms of cipher where each letter in the message text (also known as plaintext) is replaced by another letter of the alphabet.
### Shift Decoder
#### The Caesar cipher
The Caesar cipher, also known as the shift cipher, is a form of monoalphabetic substitution cipher where each letter is <i>shifted</i> by a fixed value. A shift by <b>`n`</b> in this context means that each letter in the plaintext is replaced with the letter `n` places down the alphabet. For example, the plaintext `"ABCDWXYZ"` shifted by `3` yields `"DEFGZABC"`. Note how `X` became `A`. This is because the alphabet is cyclic: the letter after the last letter in the alphabet, `Z`, is the first letter of the alphabet, `A`.
```
plaintext = "ABCDWXYZ"
ciphertext = shift_encode(plaintext, 3)
print(ciphertext)
```
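A shift encoder like the one used above can be sketched with a translation table (a hypothetical re-implementation for illustration, not the module's actual `shift_encode`):

```python
import string

def shift_encode_sketch(plaintext, n):
    """Shift each letter n places down the alphabet, wrapping cyclically."""
    upper, lower = string.ascii_uppercase, string.ascii_lowercase
    table = str.maketrans(upper + lower,
                          upper[n:] + upper[:n] + lower[n:] + lower[:n])
    return plaintext.translate(table)

print(shift_encode_sketch("ABCDWXYZ", 3))  # DEFGZABC
```

Slicing the alphabet at position `n` and re-concatenating implements the cyclic wrap-around directly.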
#### Decoding a Caesar cipher
To decode a Caesar cipher we exploit the fact that not all letters in the alphabet are used equally. Some letters are used more than others and some pairs of letters are more probable to occur together. We call a pair of consecutive letters a <b>bigram</b>.
```
print(bigrams('this is a sentence'))
```
We use `CountingProbDist` to get the probability distribution of bigrams. The Latin alphabet consists of only `26` letters, which limits the total number of possible substitutions to `26`. We reverse the shift encoding for a given `n` and check how probable the result is under the bigram distribution. We try all `26` values of `n`, i.e. from `n = 0` to `n = 25`, and use the value of `n` that gives the most probable plaintext.
```
%psource ShiftDecoder
```
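The decoding strategy can be sketched as a brute-force search over all 26 shifts, scoring each candidate with Laplace-smoothed bigram log-probabilities (a simplified stand-in for `ShiftDecoder`; the training text and smoothing scheme here are illustrative assumptions):

```python
import math
from collections import Counter

def shift_text(text, n):
    """Shift lowercase letters n places down the alphabet; leave the rest."""
    return ''.join(chr((ord(c) - 97 + n) % 26 + 97) if c.islower() else c
                   for c in text)

def decode_shift_sketch(ciphertext, training_text):
    """Try all 26 shifts; keep the candidate whose bigrams are most
    probable under Laplace-smoothed counts from training_text."""
    counts = Counter(zip(training_text, training_text[1:]))
    total = sum(counts.values())
    def score(text):
        return sum(math.log((counts[bg] + 1) / (total + 26 * 26))
                   for bg in zip(text, text[1:]))
    return max((shift_text(ciphertext, n) for n in range(26)), key=score)

training = "the quick brown fox jumps over the lazy dog " * 20
ciphertext = shift_text("the lazy fox", 13)   # ROT-13 the secret message
print(decode_shift_sketch(ciphertext, training))  # the lazy fox
```

The correct shift wins because its candidate is made entirely of bigrams that actually occur in the training text, while wrong shifts mostly produce unseen bigrams.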
#### Example
Let us encode a secret message using the Caesar cipher and then try decoding it with `ShiftDecoder`. We will again use `flatland.txt` to build the text model.
```
plaintext = "This is a secret message"
ciphertext = shift_encode(plaintext, 13)
print('The code is', '"' + ciphertext + '"')
flatland = DataFile("EN-text/flatland.txt").read()
decoder = ShiftDecoder(flatland)
decoded_message = decoder.decode(ciphertext)
print('The decoded message is', '"' + decoded_message + '"')
```
### Permutation Decoder
Now let us try to decode messages encrypted by a general monoalphabetic substitution cipher. The letters in the alphabet can be replaced by any permutation of letters. For example, if the alphabet consisted of `{A B C}`, then it could be replaced by `{A C B}`, `{B A C}`, `{B C A}`, `{C A B}`, `{C B A}`, or even `{A B C}` itself. Suppose we choose the permutation `{C B A}`; then the plaintext `"CAB BA AAC"` would become `"ACB BC CCA"`. We can see that the Caesar cipher is also a form of permutation cipher, where the permutation is cyclic. Unlike the Caesar cipher, it is infeasible to try all possible permutations: the number of permutations of the Latin alphabet is `26!`, which is of the order $10^{26}$. We use graph search algorithms to search for a 'good' permutation.
```
%psource PermutationDecoder
```
Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter, that letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation: not every dictionary represents a valid permutation, since a permutation cannot have repeating elements. For example, the dictionary `{'A': 'B', 'C': 'X'}` is invalid because `'A'` is replaced by `'B'`, but so is `'B'`, since the dictionary has no mapping for `'B'`. Two dictionaries can also represent the same permutation, e.g. `{'A': 'C', 'C': 'A'}` and `{'A': 'C', 'B': 'B', 'C': 'A'}` both represent the permutation where `'A'` and `'C'` are interchanged and all other letters remain unaltered. To ensure we get a valid permutation, a goal state must map all letters in the alphabet. We also prevent repetitions in the permutation by allowing only those actions that lead to a new state/node in which the newly added letter maps to a previously unmapped letter. These two rules together ensure that the dictionary of a goal state represents a valid permutation.
The score of a state is determined using word scores, unigram scores, and bigram scores. Experiment with different weights for the word, unigram, and bigram scores and see how they affect the decoding.
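The validity rule described above (unmapped letters stay unchanged, and no target letter may be reused) can be sketched as follows (hypothetical helper names, not part of `PermutationDecoder`):

```python
import string

def implied_permutation(mapping):
    """Letters with no explicit mapping stay unchanged."""
    return {c: mapping.get(c, c) for c in string.ascii_uppercase}

def is_valid(mapping):
    """Valid iff the implied full map has no repeated target letter."""
    perm = implied_permutation(mapping)
    return len(set(perm.values())) == len(perm)

print(is_valid({'A': 'B', 'C': 'X'}))            # False: both 'A' and 'B' -> 'B'
print(is_valid({'A': 'C', 'C': 'A'}))            # True: 'A' and 'C' swapped
print(is_valid({'A': 'C', 'B': 'B', 'C': 'A'}))  # True: same permutation
```

This check reproduces the examples above: the partial map reusing `'B'` is rejected, while the two dictionaries describing the A/C swap are both accepted.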
```
ciphertexts = ['ahed world', 'ahed woxld']
pd = PermutationDecoder(canonicalize(flatland))
for ctext in ciphertexts:
print('"{}" decodes to "{}"'.format(ctext, pd.decode(ctext)))
```
As evident from the above example, permutation decoding using best-first search is sensitive to the initial text. This is because not only the final dictionary, with substitutions for all letters, must have a good score, but so must the intermediate dictionaries. You could think of it as performing a local search, finding substitutions for each letter one by one. We could get very different results by changing even a single letter, because that letter could be a deciding factor for selecting a substitution in the early stages, which snowballs and affects the later stages. To make the search better we can use different definitions of the score in different stages and optimize the order in which letters are substituted.
| github_jupyter |
# Environment
In this notebook an `Environment` class with three different formulations is constructed for our problem. These three types of modeling give us complementary ways to tackle the issue.
To model the problem, we will examine three different scenarios below.
## First scenario:
In this scenario, the environment that is to produce the network's states is taken to be the input matrix itself.
State: ()
We define the input $D$ matrix as the observation, and for an action we pick a random pair of matrix indices.
```
import numpy as np
from sklearn.decomposition import PCA
a = [\
[0., 1., 0., 1., 0., 1., 1., 0., 0., 0., 0.],
[0., 1., 0., 1., 1., 0., 1., 0., 1., 0., 1.],
[0., 1., 0., 1., 0., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 1.],
[0., 1., 0., 1., 1., 0., 1., 0., 1., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 1., 1., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 1.],
[1., 1., 0., 1., 0., 1., 1., 1., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 1., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 1.]
]
m = np.array(a)
def measure(f):
import timeit
def wrapped(*args, **kwargs):
tic = timeit.default_timer()
ret = f(*args, **kwargs)
toc = timeit.default_timer()
print("TICTOC: {}".format((toc - tic)))
return ret
return wrapped
@measure
def conflict(B):
B = B.astype(np.uint8)
N, M = B.shape[:2]
conflicts = []
for m1 in range(M):
for m2 in range(m1+1, M):
for c1 in range(N):
if B[c1,m1]==0 and B[c1,m2]==0:
continue
for c2 in range(c1+1, N):
for c3 in range(c2+1, N):
if B[c1,m1]+B[c2,m1]+B[c3,m1]!=2 or B[c1,m2]+B[c2,m2]+B[c3,m2]!=2:
continue
# Y = np.array([
# [B[c1,m1], B[c1,m2]],
# [B[c2,m1], B[c2,m2]],
# [B[c3,m1], B[c3,m2]],
# ])
# cond1 = np.sum(Y, axis=1).tolist() in [[1,1,2], [1,2,1], [2,1,1]]
# cond2 = np.sum(Y, axis=0).tolist() == [2,2]
if [B[c1,m1]+B[c1,m2],B[c2,m1]+B[c2,m2],B[c3,m1]+B[c3,m2]] in [[1,1,2],[1,2,1],[2,1,1]]:
conflicts.append((c1,c2,c3,m1,m2))
return conflicts
len(conflict(m))
np.sum(m, axis=0).astype(int).tolist()
def sort_m(m):
sum0 = np.sum(m, axis=0)
sum1 = np.sum(m, axis=1)
arg0 = np.argsort(sum0).astype(int).tolist()
arg1 = np.argsort(sum1).astype(int).tolist()
sorted_m = m[arg1, :][:, arg0[::-1]]
print(sorted_m)
m
sort_m(m)
```
# Data
```
class Tree():
pass
class TreeGenerator():
pass
class ActionSpace(object):
"""Defines the observation and action spaces, so you can write generic
code that applies to any Env. For example, you can choose a random
action.
"""
def __init__(self, shape=None, dtype=None):
import numpy as np # takes about 300-400ms to import, so we load lazily
self.shape = None if shape is None else tuple(shape)
self.dtype = None if dtype is None else np.dtype(dtype)
self.low = 0
self.high = 1
self._np_random = None
def sample(self):
"""Randomly sample an element of this space. Can be
uniform or non-uniform sampling based on boundedness of space."""
raise NotImplementedError
def seed(self, seed=None):
"""Seed the PRNG of this space. """
self._np_random, seed = seeding.np_random(seed)
return [seed]
class ObservationSpace(object):
def __init__(self, shape=None, dtype=None):
import numpy as np # takes about 300-400ms to import, so we load lazily
self.shape = None if shape is None else tuple(shape)
self.dtype = None if dtype is None else np.dtype(dtype)
self.low = 0
self.high = 1
self._np_random = None
class Environment(object):
def __init__(self, **params):
self.__E = E
self.__D = D
self.__gamma = 0.9
self.action_space = ActionSpace()
self.observation_space = ObservationSpace()
self.max_episode_step = ""
N, M = params["N"], params["M"]
generator = TreeGenerator(
M, N,
ZETA=1,
Gamma=0.15,
alpha=0.2,
beta=0.08,
MR=0.05,
save_dir=False,
)
self.__tree = generator.generate()
return
def reset(self,):
return obs
def step(self, action):
return obs, reward, done, info
def seed(self, seed):
pass
def get_loss(self, B):
c = self.conflict(B)
l = self.likelihood(B)
return c + self.__gamma*l
def conflict(self, B):
B = B.astype(np.uint8)
N, M = B.shape[:2]
conflicts = []
for m1 in range(M):
for m2 in range(m1+1, M):
for c1 in range(N):
if B[c1,m1]==0 and B[c1,m2]==0:
continue
for c2 in range(c1+1, N):
for c3 in range(c2+1, N):
if B[c1,m1]+B[c2,m1]+B[c3,m1]!=2 or B[c1,m2]+B[c2,m2]+B[c3,m2]!=2:
continue
# Y = np.array([
# [B[c1,m1], B[c1,m2]],
# [B[c2,m1], B[c2,m2]],
# [B[c3,m1], B[c3,m2]],
# ])
# cond1 = np.sum(Y, axis=1).tolist() in [[1,1,2], [1,2,1], [2,1,1]]
# cond2 = np.sum(Y, axis=0).tolist() == [2,2]
if [B[c1,m1]+B[c1,m2],B[c2,m1]+B[c2,m2],B[c3,m1]+B[c3,m2]] in [[1,1,2],[1,2,1],[2,1,1]]:
conflicts.append((c1,c2,c3,m1,m2))
return conflicts
def likelihood(self, B):
E = self.__E
a = self.__tree.get_alpha()
b = self.__tree.get_beta()
# N00 -> count([b=0]^[e=0])
N00 = np.prod(B.shape[:2]) - np.count_nonzero(B+E)
# N11 -> count([b=1]^[e=1])
N11 = np.count_nonzero(B*E)
# N01 -> count([b=0]^[e=1])
N01 = np.count_nonzero( np.where((E-B)>0,1,0) )
# N10 -> count([b=1]^[e=0])
N10 = np.count_nonzero( np.where((B-E)>0,1,0) )
# alpha^N10 * beta^N01 * (1-alpha)^N00 * (1-beta)^N11
lh = a**N10 * b**N01 * (1-a)**N00 * (1-b)**N11
return lh
```
| github_jupyter |
```
'''Trains and evaluates a simple MLP
on the Reuters newswire topic classification task.
'''
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import reuters
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import mlflow
import mlflow.keras
```
# Setup Experiment Tracker
```
tracking_uri='file:///mnt/pipelineai/users/experiments'
mlflow.set_tracking_uri(tracking_uri)
experiment_name = 'reuters'
mlflow.set_experiment(experiment_name)
mlflow.keras.autolog()
```
# Load the Training Data
```
max_words = 1000
batch_size = 32
epochs = 1
#######
# BEGIN HACK: modify the default parameters of np.load
np_load_old = np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
#######
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words,
test_split=0.2)
#######
# END HACK
np.load = np_load_old
#######
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
num_classes = np.max(y_train) + 1
print(num_classes, 'classes')
print('Vectorizing sequence data...')
tokenizer = Tokenizer(num_words=max_words)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Convert class vector to binary class matrix '
'(for use with categorical_crossentropy)')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)
print('Building model...')
model = Sequential()
model.add(Dense(512, input_shape=(max_words,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
# Start Training Run
```
with mlflow.start_run():
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_split=0.1)
score = model.evaluate(x_test, y_test,
batch_size=batch_size, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
```
# Check the MLflow Pipelines Tab

| github_jupyter |
# Compute Hinge Loss (Empirical Risk)
The empirical risk $R_n$ is defined as

$$R_n(\theta) = \frac{1}{n} \sum_{t=1}^{n} \text{Loss}\left(y^{(t)} - \theta \cdot x^{(t)}\right)$$

where $(x^{(t)}, y^{(t)})$ is the $t$-th training example (and there are $n$ in total), and Loss is some loss function, such as hinge loss.
Recall from a previous lecture the definition of hinge loss:

$$\text{Loss}_h(z) = \begin{cases} 0 & \text{if } z \geq 1 \\ 1 - z & \text{otherwise.} \end{cases}$$

In this problem, we calculate the empirical risk with hinge loss when given a specific $\theta$ and $\{(x^{(t)}, y^{(t)})\}_{t=1,\dots,n}$. Assume we have 4 training examples (i.e. $n = 4$), where $x^{(t)} \in \mathbb{R}^3$ and $y^{(t)}$ is a scalar. The training examples $\{(x^{(t)}, y^{(t)})\}_{t=1,2,3,4}$ are given as follows:
```
def empirical_risk(feature_matrix, labels, theta):
total_loss = []
total_loss2 = []
for i, x in enumerate(feature_matrix):
## The argument z = y - theta.x before applying the hinge loss
loss = labels[i] - (np.dot(feature_matrix[i], theta))
total_loss.append(loss)
## Applying the hinge loss function
if (labels[i] - (np.dot(feature_matrix[i], theta))) >= 1:
loss2=0
else:
loss2 = 1- (labels[i] - (np.dot(feature_matrix[i], theta)))
total_loss2.append(loss2)
result= (sum(total_loss2)/len(feature_matrix))
return total_loss, total_loss2, result
import numpy as np
feature_matrix= np.array([[1,0,1],
[1,1,1],
[1,1,-1],
[-1,1,1]])
labels= np.array([2,
2.7,
-0.7,
2])
theta= np.array([0, 1,2])
empirical_risk(feature_matrix, labels, theta)
```
# Compute Squared Error Loss
Now, we will calculate the empirical risk with the squared error loss. Remember that the squared error loss is given by

$$\text{Loss}(z) = \frac{z^2}{2}$$
```
def empirical_risk(feature_matrix, labels, theta):
total_loss = []
total_loss2 = []
for i, x in enumerate(feature_matrix):
## The squared error loss for this training example
loss = np.square(labels[i] - (np.dot(feature_matrix[i], theta)))/2
total_loss.append(loss)
## Average the losses to get the empirical risk
result= (sum(total_loss)/len(feature_matrix))
return total_loss, result
import numpy as np
feature_matrix= np.array([[1,0,1],
[1,1,1],
[1,1,-1],
[-1,1,1]])
labels= np.array([2,
2.7,
-0.7,
2])
theta= np.array([0, 1,2])
empirical_risk(feature_matrix, labels, theta)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/krakowiakpawel9/neural-network-course/blob/master/03_keras/03_overfitting_underfitting.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
* @author: krakowiakpawel9@gmail.com
* @site: e-smartdata.org
### The main problems of machine learning: overfitting and underfitting
>The goal of this notebook is to show examples of a model fitting the training data too closely (overfitting) and fitting it too loosely (underfitting).
>
>We will use a Keras dataset of 50,000 movie reviews labeled with a positive/negative sentiment. The reviews are preprocessed, and each review is encoded as a sequence of word indices. Words are indexed by overall frequency in the dataset; for example, the number 5 denotes the fifth most frequent word in the data. The number 0 does not stand for any specific word.
### Table of contents
1. [Importing libraries](#a1)
2. [Loading and preparing the data](#a2)
3. [Building the baseline model](#a3)
4. [Building a 'smaller' model](#a4)
5. [Building a 'bigger' model](#a5)
6. [Comparing model performance](#a6)
7. [Regularization methods](#a7)
### <a name='a1'></a> 1. Importing libraries
```
# Prepare the environment to work with TensorFlow 2.0.
# If you get an error while installing TensorFlow, run this cell again.
!pip uninstall -y tensorflow
!pip install -q tensorflow==2.0.0
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.datasets.imdb import get_word_index
from tensorflow.keras.utils import get_file
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
sns.set()
tf.__version__
```
### <a name='a2'></a> 2. Loading and preparing the data
```
NUM_WORDS = 10000 # the 10,000 most frequently occurring words
INDEX_FROM = 3
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=NUM_WORDS, index_from=INDEX_FROM)
print(f'train_data shape: {train_data.shape}')
print(f'test_data shape: {test_data.shape}')
print(train_data[0])
word_to_idx = get_word_index()
word_to_idx = {k:(v + INDEX_FROM) for k, v in word_to_idx.items()}
word_to_idx['<PAD>'] = 0
word_to_idx['<START>'] = 1
word_to_idx['<UNK>'] = 2
word_to_idx['<UNUSED>'] = 3
idx_to_word = {v: k for k, v in word_to_idx.items()}
list(idx_to_word.items())[:10]
print(' '.join(idx_to_word[idx] for idx in train_data[0]))
train_labels[:10]
def multi_hot_sequences(sequences, dimension):
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
train_data.shape
test_data.shape
```
### <a name='a3'></a> 3. Building the baseline model
```
baseline_model = Sequential()
baseline_model.add(Dense(16, activation='relu', input_shape=(NUM_WORDS,)))
baseline_model.add(Dense(16, activation='relu'))
baseline_model.add(Dense(1, activation='sigmoid'))
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels))
```
### <a name='a4'></a> 4. Building a 'smaller' model
```
smaller_model = Sequential()
smaller_model.add(Dense(4, activation='relu', input_shape=(NUM_WORDS,)))
smaller_model.add(Dense(4, activation='relu'))
smaller_model.add(Dense(1, activation='sigmoid'))
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
smaller_history = smaller_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels))
```
### <a name='a5'></a> 5. Building a 'bigger' model
```
bigger_model = Sequential()
bigger_model.add(Dense(512, activation='relu', input_shape=(NUM_WORDS,)))
bigger_model.add(Dense(512, activation='relu'))
bigger_model.add(Dense(1, activation='sigmoid'))
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
bigger_model.summary()
bigger_history = bigger_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels))
hist = pd.DataFrame(baseline_history.history)
hist['epoch'] = baseline_history.epoch
hist.head()
```
### <a name='a6'></a> 6. Comparing model performance
```
import plotly.graph_objects as go
fig = go.Figure()
for name, history in zip(['smaller', 'baseline', 'bigger'], [smaller_history, baseline_history, bigger_history]):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['binary_crossentropy'], name=name + '_binary_crossentropy', mode='lines+markers'))
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['val_binary_crossentropy'], name=name + '_val_binary_crossentropy', mode='lines+markers'))
fig.update_layout(xaxis_title='Epochs', yaxis_title='binary_crossentropy')
fig.show()
```
### <a name='a7'></a> 7. Regularization methods
```
from tensorflow.keras.regularizers import l2
l2_model = Sequential()
l2_model.add(Dense(16, kernel_regularizer=l2(0.001), activation='relu', input_shape=(NUM_WORDS,)))
l2_model.add(Dense(16, kernel_regularizer=l2(0.01), activation='relu'))
l2_model.add(Dense(1, activation='sigmoid'))
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model.summary()
l2_model_history = l2_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels))
fig = go.Figure()
for name, history in zip(['baseline', 'l2'], [baseline_history, l2_model_history]):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['binary_crossentropy'], name=name + '_binary_crossentropy', mode='lines+markers'))
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['val_binary_crossentropy'], name=name + '_val_binary_crossentropy', mode='lines+markers'))
fig.update_layout(xaxis_title='Epochs', yaxis_title='binary_crossentropy')
fig.show()
from tensorflow.keras.layers import Dropout
dropout_model = Sequential()
dropout_model.add(Dense(16, activation='relu', input_shape=(NUM_WORDS,)))
dropout_model.add(Dropout(0.5))
dropout_model.add(Dense(16, activation='relu'))
dropout_model.add(Dropout(0.5))
dropout_model.add(Dense(1, activation='sigmoid'))
dropout_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
dropout_model.summary()
dropout_history = dropout_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels))
fig = go.Figure()
for name, history in zip(['baseline', 'dropout'], [baseline_history, dropout_history]):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['binary_crossentropy'], name=name + '_binary_crossentropy', mode='lines+markers'))
fig.add_trace(go.Scatter(x=hist['epoch'], y=hist['val_binary_crossentropy'], name=name + '_val_binary_crossentropy', mode='lines+markers'))
fig.update_layout(xaxis_title='Epochs', yaxis_title='binary_crossentropy')
fig.show()
```
| github_jupyter |
```
import glob
import xml.etree.ElementTree as ET
import re
class Argument(object):
def __init__(self, id_, start, end, role, text):
self.id_ = id_
self.start = start
self.end = end
self.role = role
self.text = text
def to_string(self):
return "Argument: {id_ = " + self.id_ + ", role = " + self.role + ", text = " + self.text + "}"
class Entity(object):
def __init__(self, phrase_type, end, text, entity_type, start, id_):
self.phrase_type = phrase_type
self.end = end
self.text = text
self.entity_type = entity_type
self.start = start
self.id_ = id_
def to_string(self):
return "Entity: {id_ = " + self.id_ + ", entity_type = " + self.entity_type + ", text = " + self.text + ", phrase_type=" + self.phrase_type +"}"
class Sentence(object):
def __init__(self, text, start, end):
self.text = text
self.start = start
self.end = end
def to_string(self):
return "Sentence: {text = " + self.text + ", start = " + self.start + ", end = " + self.end + "}"
def __str__(self):
return str(self.__dict__)
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __hash__(self):
return hash(self)
class Event(object):
def __init__(self, event_id, mention_id, type_, subtype, modality, polarity, genericity, tense, extent, extent_start, extent_end, scope, trig_text, trig_start, trig_end, arguments, entities):
self.event_id = event_id
self.mention_id = mention_id
self.type_ = type_
self.subtype = subtype
self.modality = modality
self.polarity = polarity
self.genericity = genericity
self.tense = tense
self.extent = extent
self.extent_start = extent_start
self.extent_end = extent_end
self.scope = scope
self.trig_text = trig_text
self.trig_start = trig_start
self.trig_end = trig_end
self.arguments = arguments
self.entities = entities
def to_string(self):
return "Event: {event_id = " + self.event_id + ", mention_id = " + self.mention_id + ", type = " + self.type_ + ", subtype = " + self.subtype + ", modality = " \
+ self.modality + ", polarity = " + self.polarity + ", genericity = " + self.genericity + ", tense = " + \
self.tense + ", extent = " + self.extent + ", scope = " + self.scope + ", trigger = " + self.trig_text + "}"
def extract_entity_info(entity, scope_start, scope_end, extent, words_start, words_end, extent_start):
entity_id = entity.attrib["ID"]
phrase_type = entity.attrib["TYPE"] + ":" + entity.attrib["SUBTYPE"]
entity_class = entity.attrib["CLASS"]
entities = []
for mention in entity.iter('entity_mention'):
entity_type = mention.attrib["LDCTYPE"]
for child in mention:
if child.tag == "extent":
for chil2 in child:
text = chil2.text
start = int(chil2.attrib["START"])
end = int(chil2.attrib["END"])
list_end = list(words_end.keys())
list_start = list(words_start.keys())
a_end = words_end[findClosest(list_end, len(list_end), end)]
a_start = words_start[findClosest(list_start, len(list_start), start)]
if scope_start <= start and scope_end >= end:
ent = Entity(phrase_type, a_end, text, entity_type, a_start, entity_id)
entities.append(ent)
return entities
class Word(object):
def __init__(self, start, end):
#self.str_ = str_
self.start = start
self.end = end
def find_positions(sent):
parse, = dep_parser.raw_parse(sent)
words_start = {}
words_end = {}
accu_sum = 0
i = 1
for part in parse.to_conll(4).split("\n"):
if part != "":
parts = part.split("\t")
word = parts[0]
word_start = accu_sum
word_end = accu_sum + len(word)
accu_sum += len(word) + 1
words_start.update({word_start: i})
words_end.update({word_end: i})
i+=1
return words_start, words_end
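# Illustrative cross-check of the offset bookkeeping above, using a plain
# whitespace split in place of the CoreNLP parse (assumes words separated
# by single spaces, exactly as the accumulator does):
def _demo_positions(sent):
    words_start, words_end = {}, {}
    accu_sum = 0
    for i, word in enumerate(sent.split(" "), start=1):
        words_start[accu_sum] = i
        words_end[accu_sum + len(word)] = i
        accu_sum += len(word) + 1
    return words_start, words_end
# _demo_positions("the cat sat") -> ({0: 1, 4: 2, 8: 3}, {3: 1, 7: 2, 11: 3})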
def extract_from_xml(root_path, language, domain):
events = {}
#print(root_path + language + "/" + domain + "/adj/*.apf.xml")
files_processed = 0
for file_name in sorted(glob.glob(root_path + language + "/" + domain + "/adj/*.apf.xml")):
# Get the event + argument annotation
print("file_name=", file_name)
files_processed += 1
tree = ET.parse(file_name, ET.XMLParser(encoding='utf-8'))
root = tree.getroot()
for event in root.iter('event'):
sent, ev = extract_event_info(root, event)
if sent.text not in events:
events.update({sent.text: [ev]})
else:
ev_list = events[sent.text]
ev_list.append(ev)
events.update({sent.text: ev_list})
return events, files_processed
import math
def findClosest(arr, n, target):
# Corner cases
if (target <= arr[0]):
return arr[0]
if (target >= arr[n - 1]):
return arr[n - 1]
# Doing binary search
i = 0; j = n; mid = 0
while (i < j):
mid = math.floor((i + j) / 2)
if (arr[mid] == target):
return arr[mid]
# If target is less than array
# element, then search in left
if (target < arr[mid]) :
# If target is greater than previous
# to mid, return closest of two
if (mid > 0 and target > arr[mid - 1]):
return getClosest(arr[mid - 1], arr[mid], target)
# Repeat for left half
j = mid
# If target is greater than mid
else :
if (mid < n - 1 and target < arr[mid + 1]):
return getClosest(arr[mid], arr[mid + 1], target)
# update i
i = mid + 1
# Only single element left after search
return arr[mid]
def getClosest(val1, val2, target):
if (target - val1 >= val2 - target):
return val2
else:
return val1
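# The same nearest-value lookup can be written with the standard-library
# bisect module (illustrative equivalent of findClosest; assumes a sorted
# list, and breaks ties toward the larger value like getClosest):
import bisect

def closest_with_bisect(sorted_vals, target):
    i = bisect.bisect_left(sorted_vals, target)
    if i == 0:
        return sorted_vals[0]
    if i == len(sorted_vals):
        return sorted_vals[-1]
    before, after = sorted_vals[i - 1], sorted_vals[i]
    return after if target - before >= after - target else before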
def extract_event_info(root, event):
event_id = event.attrib["ID"]
event_type = event.attrib["TYPE"]
subtype = event.attrib["SUBTYPE"]
modality = event.attrib["MODALITY"]
polarity = event.attrib["POLARITY"]
genericity = event.attrib["GENERICITY"]
tense = event.attrib["TENSE"]
## Looking at event mentions
for mention in event.iter('event_mention'):
mention_id = mention.attrib["ID"]
for child in mention:
if child.tag == "extent":
for chil2 in child:
extent = chil2.text
extent_start = int(chil2.attrib["START"])
extent_end = int(chil2.attrib["END"])
words_start, words_end = find_positions(extent)
elif child.tag == "ldc_scope":
for chil2 in child:
scope = chil2.text
scope_start = int(chil2.attrib["START"])
scope_end = int(chil2.attrib["END"])
sent = Sentence(scope, scope_start, scope_end)
elif child.tag == "anchor":
for chil2 in child:
trig_text = chil2.text
trig_start = int(chil2.attrib["START"]) - extent_start
trig_end = int(chil2.attrib["END"]) - extent_start
list_end = list(words_end.keys())
list_start = list(words_start.keys())
t_end = words_end[findClosest(list_end, len(list_end), trig_end)]
t_start = words_start[findClosest(list_start, len(list_start), trig_start)]
arguments = []
for argument in mention.iter('event_mention_argument'):
arg_id = argument.attrib["REFID"]
role = argument.attrib["ROLE"]
for child in argument:
for chil2 in child:
arg_text = chil2.text
arg_start = int(chil2.attrib["START"]) - extent_start
arg_end = int(chil2.attrib["END"]) - extent_start
list_end = list(words_end.keys())
list_start = list(words_start.keys())
a_end = words_end[findClosest(list_end, len(list_end), arg_end)]
a_start = words_start[findClosest(list_start, len(list_start), arg_start)]
arg = Argument(arg_id, a_start, a_end, role, arg_text)
arguments.append(arg)
## Looking at entity mentions with that same event
entities = []
for entity in root.iter('entity'):
entities.extend(extract_entity_info(entity, scope_start, scope_end, extent, words_start, words_end, extent_start))
ev = Event(event_id, mention_id, event_type, subtype, modality, polarity, genericity, tense, extent, extent_start, extent_end, scope, trig_text, t_start, t_end, arguments, entities)
return sent, ev
#data_path = "/Users/d22admin/USCGDrive/Spring19/ISI/EventExtraction/3Datasets/ACE/Raw/LDC2006T06/data/English/bc/adj/"
root_path = "/Users/d22admin/USCGDrive/Spring19/ISI/EventExtraction/3Datasets/EventsExtraction/ACE/Raw/LDC2006T06/data/"
languages = [file_.split("/")[-1] for file_ in glob.glob(root_path + "*") if "Icon\r" not in file_]
events_list_lang = {}
def merge_two_dicts(x, y):
z = x.copy() # start with x's keys and values
z.update(y) # modifies z with y's keys and values & returns None
return z
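# Since Python 3.5 the same merge can be written as a dict literal;
# later keys win, matching merge_two_dicts above (illustrative check):
x_demo = {"a": 1, "b": 2}
y_demo = {"b": 3, "c": 4}
z_demo = {**x_demo, **y_demo}  # y's value for "b" overwrites x's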
for language in languages:
files_num = 0
domains = [file_.split("/")[-1] for file_ in glob.glob(root_path + language + "/*" ) if "Icon\r" not in file_]
events_lang = {}
for domain in domains:
events, files_processed = extract_from_xml(root_path, language, domain)
files_num += files_processed
events_lang = merge_two_dicts(events_lang, events)
print("Number of files processed for language= ", language, " is= ", files_num)
events_list_lang.update({language: events_lang})
```
## Stanford CoreNLP Preprocessing:
Generating the needed attributes:
- words, lemmas, pos-tags
- stanford-colcc
```
from nltk.parse.corenlp import CoreNLPDependencyParser, CoreNLPParser
dep_parser = CoreNLPDependencyParser(url='http://localhost:9001')
#parse, = dep_parser.raw_parse('The quick brown fox jumps over the lazy dog.')
sent = "No, they don\'t the 'suspicious evidence' is simply that\nArafat did not look particularly near death when he was\nremoved from his HQ"
parse, = dep_parser.raw_parse(sent)
print(parse)
print(parse.tokenBeginIndex)
def find_stanford_colcc(sent):
parse, = dep_parser.raw_parse(sent)
i = 0
pos_tags = []
words = []
triples = []
for part in parse.to_conll(4).split("\n"):
if part != "":
parts = part.split("\t")
words.append(parts[0])
pos_tags.append(parts[1])
rel = parts[3].lower()
gov = int(parts[2])-1
dep = i
i += 1
triples.append(rel+"/dep="+str(dep)+"/gov="+str(gov))
return triples, words, pos_tags
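# Illustrative check of the CoNLL-4 -> stanford-colcc conversion on a
# hard-coded parse fragment, so no CoreNLP server is needed
# (columns: word, POS tag, head index, relation):
conll_demo = "Dogs\tNNS\t2\tnsubj\nbark\tVBP\t0\troot\n"
triples_demo = []
for dep_idx, part in enumerate(p for p in conll_demo.split("\n") if p != ""):
    word, tag, gov, rel = part.split("\t")
    triples_demo.append(rel.lower() + "/dep=" + str(dep_idx) + "/gov=" + str(int(gov) - 1))
# triples_demo == ['nsubj/dep=0/gov=1', 'root/dep=1/gov=-1']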
find_positions(sent)
find_stanford_colcc(sent)
print(parse.to_conll(4))
```
## Save to json:
```
import json
from tqdm import tqdm
data_json = []
sent_json = []
for sent in tqdm(events_list_lang["English"]):
sent_json.append(sent)
data_sub = {}
data_sub["golden-event-mentions"] = []
entities_unique = {}
for event in events_list_lang["English"][sent]:
event_info = {}
#(self, event_id, mention_id, type_, subtype, modality, polarity, genericity,
# tense, extent, scope, trig_text, trig_start, trig_end, arguments):
event_info["trigger"] = {"start": event.trig_start, "end": event.trig_end, "text": event.trig_text}
event_info["arguments"] = []
for arg in event.arguments:
arg_info = {"start": arg.start, "role": arg.role, "end": arg.end, "text": arg.text}
event_info["arguments"].append(arg_info)
event_info["id"] = event.event_id
event_info["event_type"] = event.type_
data_sub["golden-event-mentions"].append(event_info)
# Loading entities for that event and adding it to the list of entities
for entity in event.entities:
entities_unique.update({entity.id_:entity})
data_sub["golden-entity-mentions"] = []
for entity_id in entities_unique.keys():
entity_info = {"phrase-type": entities_unique[entity_id].phrase_type, "end": entities_unique[entity_id].end, "text": entities_unique[entity_id].text, "entity-type": entities_unique[entity_id].entity_type, "start": entities_unique[entity_id].start, "id": entity_id}
data_sub["golden-entity-mentions"].append(entity_info)
triples, words, pos_tags = find_stanford_colcc(sent)
data_json.append({"stanford-colcc": triples, "golden-entity-mentions": data_sub["golden-entity-mentions"], "words": words, "pos-tags": pos_tags, "golden-event-mentions": data_sub["golden-event-mentions"]})
len(data_json)
data_json[0]
```
### Train/Dev/Test Split:
```
import random
random.shuffle(data_json)
with open('/Users/d22admin/USCGDrive/Spring19/ISI/EventExtraction/3Datasets/EventsExtraction/ACE/ace-05-splits/english/train.json', 'w') as outfile:
json.dump(data_json[:2000], outfile)
with open('/Users/d22admin/USCGDrive/Spring19/ISI/EventExtraction/3Datasets/EventsExtraction/ACE/ace-05-splits/english/dev.json', 'w') as outfile:
json.dump(data_json[2000:2250], outfile)
with open('/Users/d22admin/USCGDrive/Spring19/ISI/EventExtraction/3Datasets/EventsExtraction/ACE/ace-05-splits/english/test.json', 'w') as outfile:
json.dump(data_json[2250:], outfile)
# Leftover scratch cells (they reference undefined names, so they are kept commented out):
# phrase_type, end, text, entity_type, start, id_
# ev_example[1].trig_text
# events_list_lang["English"][6].scope
```
| github_jupyter |
# bqplot
`bqplot` is a [Grammar of Graphics](https://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html)-based interactive plotting framework for the Jupyter notebook. The library offers a simple bridge between `Python` and `d3.js`, allowing users to quickly and easily build complex GUIs with layered interactions.
## Basic Plotting
To begin, start by investigating the introductory notebooks:
1. [Introduction](Introduction.ipynb) - If you're new to `bqplot`, get started with our Introduction notebook
2. [Basic Plotting](Basic%20Plotting/Basic%20Plotting.ipynb) - which demonstrates some basic `bqplot` plotting commands and how to use them
3. [Pyplot](Basic%20Plotting/Pyplot.ipynb) - which introduces the simpler `pyplot` API
4. [Tutorials](Tutorials.ipynb) - which provides tutorials for using pyplot, object model, linking with ipywidgets and selectors
## Marks
Move on to exploring the different `Marks` that you can use to represent your data. You have two options for rendering marks:
* Object Model, which is a verbose API but gives you full flexibility and customizability
* Pyplot, which is a simpler API (similar to matplotlib's pyplot) and sets meaningful defaults for the user
1. Bars: Bar mark ([Object Model](Marks/Object%20Model/Bars.ipynb), [Pyplot](Marks/Pyplot/Bars.ipynb))
* Bins: Backend histogram mark ([Object Model](Marks/Object%20Model/Bins.ipynb), [Pyplot](Marks/Pyplot/Bins.ipynb))
* Boxplot: Boxplot mark ([Object Model](Marks/Object%20Model/Boxplot.ipynb), [Pyplot](Marks/Pyplot/Boxplot.ipynb))
* Candles: OHLC mark ([Object Model](Marks/Object%20Model/Candles.ipynb), [Pyplot](Marks/Pyplot/Candles.ipynb))
* FlexLine: Flexible lines mark ([Object Model](Marks/Object%20Model/FlexLine.ipynb), Pyplot)
* Graph: Network mark ([Object Model](Marks/Object%20Model/Graph.ipynb), Pyplot)
* GridHeatMap: Grid heatmap mark ([Object Model](Marks/Object%20Model/GridHeatMap.ipynb), [Pyplot](Marks/Pyplot/GridHeatMap.ipynb))
* HeatMap: Heatmap mark ([Object Model](Marks/Object%20Model/HeatMap.ipynb), [Pyplot](Marks/Pyplot/HeatMap.ipynb))
* Hist: Histogram mark ([Object Model](Marks/Object%20Model/Hist.ipynb), [Pyplot](Marks/Pyplot/Hist.ipynb))
* Image: Image mark ([Object Model](Marks/Object%20Model/Image.ipynb), [Pyplot](Marks/Pyplot/Image.ipynb))
* Label: Label mark ([Object Model](Marks/Object%20Model/Label.ipynb), [Pyplot](Marks/Pyplot/Label.ipynb))
* Lines: Lines mark ([Object Model](Marks/Object%20Model/Lines.ipynb), [Pyplot](Marks/Pyplot/Lines.ipynb))
* Map: Geographical map mark ([Object Model](Marks/Object%20Model/Map.ipynb), [Pyplot](Marks/Pyplot/Map.ipynb))
* Market Map: Tile map mark ([Object Model](Marks/Object%20Model/Market%20Map.ipynb), Pyplot)
* Pie: Pie mark ([Object Model](Marks/Object%20Model/Pie.ipynb), [Pyplot](Marks/Pyplot/Pie.ipynb))
* Scatter: Scatter mark ([Object Model](Marks/Object%20Model/Scatter.ipynb), [Pyplot](Marks/Pyplot/Scatter.ipynb))
* Mega Scatter: webgl-based Scatter mark ([Scatter Mega](./ScatterMega.ipynb))
## Interactions
Learn how to use `bqplot` interactions to convert your plots into interactive applications:
13. [Mark Interactions](Interactions/Mark%20Interactions.ipynb) - which describes the mark specific interactions and how to use them
14. [Interaction Layer](Interactions/Interaction%20Layer.ipynb) - which describes the use of the interaction layers, including selectors and how they can be used for facilitating better interaction
## Advanced Plotting
Once you've mastered the basics of `bqplot`, you can use these notebooks to learn about some of its more advanced features.
15. [Plotting Dates](Advanced%20Plotting/Plotting%20Dates.ipynb)
16. [Advanced Plotting](Advanced%20Plotting/Advanced%20Plotting.ipynb)
17. [Animations](Advanced%20Plotting/Animations.ipynb)
18. [Axis Properties](Advanced%20Plotting/Axis%20Properties.ipynb)
## Applications
Finally, we have a collection of notebooks that demonstrate how to use `bqplot` and `ipywidgets` to create advanced interactive applications.
19. [Wealth of Nations](Applications/Wealth%20of%20Nations.ipynb) - a recreation of [Hans Rosling's famous TED Talk](http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen)
## Help
For more help,
- Reach out to us via the `ipywidgets` gitter chat
[Join the chat on Gitter](https://gitter.im/ipython/ipywidgets?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
- Or take a look at a talk given on Interactive visualizations in Jupyter at PyData
```
from IPython.display import YouTubeVideo
YouTubeVideo('eVET9IYgbao')
```
| github_jupyter |
# Broadcasts
This notebook explains the different types of broadcast available in PyBaMM.
Understanding of the [expression_tree](./expression-tree.ipynb) and [discretisation](../spatial_methods/finite-volumes.ipynb) notebooks is assumed.
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
```
We also explicitly set up the discretisation that is used for this notebook. We use a small number of points in each domain, in order to easily visualise the results.
```
var = pybamm.standard_spatial_vars
geometry = {
"negative electrode": {var.x_n: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}},
"negative particle": {var.r_n: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}},
}
submesh_types = {
"negative electrode": pybamm.Uniform1DSubMesh,
"negative particle": pybamm.Uniform1DSubMesh,
}
var_pts = {var.x_n: 5, var.r_n: 3}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {
"negative electrode": pybamm.FiniteVolume(),
"negative particle": pybamm.FiniteVolume(),
}
disc = pybamm.Discretisation(mesh, spatial_methods)
```
## Primary broadcasts
Primary broadcasts are used to broadcast from a "larger" scale to a "smaller" scale, for example broadcasting temperature T(x) from the electrode to the particles, or broadcasting current collector current i(y, z) from the current collector to the electrodes.
To demonstrate this, we first create a variable `T` on the negative electrode domain, discretise it, and evaluate it with a simple linear vector
```
T = pybamm.Variable("T", domain="negative electrode")
disc.set_variable_slices([T])
disc_T = disc.process_symbol(T)
disc_T.evaluate(y=np.linspace(0,1,5))
```
We then broadcast `T` onto the "negative particle" domain (using primary broadcast as we are going from the larger electrode scale to the smaller particle scale), and discretise and evaluate the resulting object.
```
primary_broad_T = pybamm.PrimaryBroadcast(T, "negative particle")
disc_T = disc.process_symbol(primary_broad_T)
disc_T.evaluate(y=np.linspace(0,1,5))
```
The broadcasted object makes 3 (since the r-grid has 3 points) copies of each element of `T` and stacks them all up to give an object with size 3x5=15. In the resulting vector, the first 3 entries correspond to the 3 points in the r-domain at the first x-grid point (where T=0 uniformly in r), the next 3 entries correspond to the next 3 points in the r-domain at the second x-grid point (where T=0.25 uniformly in r), etc
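The stacking just described can be mimicked with plain NumPy (a sketch of the ordering only, not of PyBaMM's internals): a primary broadcast onto a 3-point particle grid is an element-wise repeat.

```python
import numpy as np

T = np.linspace(0, 1, 5)    # electrode-scale variable on 5 x-points
broad_T = np.repeat(T, 3)   # 3 copies per x-point -> length 3x5 = 15
# entries 0-2 equal T[0], entries 3-5 equal T[1], and so on
```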
## Secondary broadcasts
Secondary broadcasts are used to broadcast from a "smaller" scale to a "larger" scale, for example broadcasting SPM particle concentrations c_s(r) from the particles to the electrodes. Note that this wouldn't be used to broadcast particle concentrations in the DFN, since these already depend on both x and r.
To demonstrate this, we first create a variable `c_s` on the negative particle domain, discretise it, and evaluate it with a simple linear vector
```
c_s = pybamm.Variable("c_s", domain="negative particle")
disc.set_variable_slices([c_s])
disc_c_s = disc.process_symbol(c_s)
disc_c_s.evaluate(y=np.linspace(0,1,3))
```
We then broadcast `c_s` onto the "negative electrode" domain (using secondary broadcast as we are going from the smaller particle scale to the large electrode scale), and discretise and evaluate the resulting object.
```
secondary_broad_c_s = pybamm.SecondaryBroadcast(c_s, "negative electrode")
disc_broad_c_s = disc.process_symbol(secondary_broad_c_s)
disc_broad_c_s.evaluate(y=np.linspace(0,1,3))
```
The broadcasted object makes 5 (since the x-grid has 5 points) identical copies of the whole variable `c_s` to give an object with size 5x3=15. In the resulting vector, the first 3 entries correspond to the 3 points in the r-domain at the first x-grid point (where c_s varies in r), the next 3 entries correspond to the next 3 points in the r-domain at the second x-grid point (where c_s varies in r), etc
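In the same NumPy sketch (again, only illustrating the ordering), a secondary broadcast is a tile rather than a repeat: whole copies of the particle-scale vector are stacked, one per x-point.

```python
import numpy as np

c_s = np.linspace(0, 1, 3)    # particle-scale variable on 3 r-points
broad_c_s = np.tile(c_s, 5)   # 5 whole copies -> length 5x3 = 15
# entries 0-2 are the full c_s profile at the first x-point,
# entries 3-5 repeat it at the second x-point, and so on
```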
| github_jupyter |
# MLP GenCode
Wen et al. (2019) used a DNN to distinguish GenCode mRNA from lncRNA.
Based on K-mer frequencies for K={1,2,3}, they reported 99% accuracy.
Their CNN used 2 Conv2D layers of 32 filters of size 3x3, max pool 2x2, 25% dropout, and a dense layer of 128 units.
Can we reproduce that with MLP layers instead of a CNN?
Extract features as a list of K-mer frequencies for K={1,2,3}.
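As a concrete sketch of that feature extraction (a hypothetical helper, not the KmerTools code used below): counting frequencies of every 1-, 2-, and 3-mer over the DNA alphabet gives 4 + 16 + 64 = 84 features per sequence.

```python
from itertools import product

def kmer_freqs(seq, max_k=3, alphabet="ACGT"):
    """Frequencies of all 1..max_k-mers, in a fixed lexicographic order."""
    feats = []
    for k in range(1, max_k + 1):
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        counts = {km: 0 for km in kmers}
        windows = max(len(seq) - k + 1, 1)
        for i in range(len(seq) - k + 1):
            window = seq[i:i + k]
            if window in counts:   # skip windows containing N's etc.
                counts[window] += 1
        feats.extend(counts[km] / windows for km in kmers)
    return feats
```

For example, `len(kmer_freqs("ACGTACGT"))` is 84, and the first four entries (the 1-mer frequencies) sum to 1.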
```
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=2000
NC_TESTS=2000 # Wen et al 2019 used 8000 and 2000 of each class
PC_LENS=(200,4000)
NC_LENS=(200,4000) # Wen et al 2019 used 250-3500 for lncRNA only
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=16
EPOCHS=50
SPLITS=5
FOLDS=1 # make this 5 for serious testing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
```
## Data Load
Restrict mRNA to those transcripts with a recognized ORF.
```
PC_FILENAME='gencode.v38.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
# Full GenCode ver 38 human is 106143 pc + 48752 nc and loads in 7 sec.
# Expect fewer transcripts if special filtering is used.
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(True)
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
```
## Data Prep
```
def dataframe_length_filter(df,low_high):
(low,high)=low_high
# The pandas query language is strange,
# but this is MUCH faster than loop & drop.
return df[ (df['seqlen']>=low) & (df['seqlen']<=high) ]
def dataframe_shuffle(df):
# The ignore_index option is new in Pandas 1.3.
# The default (False) replicates the old behavior: shuffle the index too.
# The new option seems more logical than the old behavior.
# After shuffling, df.iloc[0] has index == 0.
# return df.sample(frac=1,ignore_index=True)
return df.sample(frac=1) # Use this till CoLab upgrades Pandas
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(
dataframe_shuffle(
dataframe_length_filter(pcdf,PC_LENS)))
nc_all = dataframe_extract_sequence(
dataframe_shuffle(
dataframe_length_filter(ncdf,NC_LENS)))
show_time()
print("PC seqs pass filter:",len(pc_all))
print("NC seqs pass filter:",len(nc_all))
# Garbage collection to reduce RAM footprint
pcdf=None
ncdf=None
# Any portion of a shuffled list is a random selection
pc_train=pc_all[:PC_TRAINS]
nc_train=nc_all[:NC_TRAINS]
pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
nc_test=nc_all[NC_TRAINS:NC_TRAINS+NC_TESTS]
# Garbage collection
pc_all=None
nc_all=None
def prepare_x_and_y(seqs1,seqs0):
len1=len(seqs1)
len0=len(seqs0)
labels1=np.ones(len1,dtype=np.int8)
labels0=np.zeros(len0,dtype=np.int8)
all_labels = np.concatenate((labels1,labels0))
seqs1 = np.asarray(seqs1)
seqs0 = np.asarray(seqs0)
all_seqs = np.concatenate((seqs1,seqs0),axis=0)
# return all_seqs,all_labels # test unshuffled
X,y = shuffle(all_seqs,all_labels) # sklearn.utils.shuffle shuffles both arrays in unison
return X,y
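# Quick illustrative check that a paired shuffle keeps each label beside
# its sequence (plain-Python analogue of sklearn.utils.shuffle):
import random
pairs_demo = list(zip([10, 11, 12, 13], [0, 1, 2, 3]))
random.Random(42).shuffle(pairs_demo)
xs_demo, ys_demo = zip(*pairs_demo)
# every x is still exactly 10 more than its own y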
Xseq,y=prepare_x_and_y(pc_train,nc_train)
#print(pc_train[0])
#print(Xseq[0],y[0])
show_time()
def seqs_to_kmer_freqs(seqs,max_K):
tool = KmerTools() # from SimTools
collection = []
for seq in seqs:
counts = tool.make_dict_upto_K(max_K) # fresh counts per sequence; sharing one dict would accumulate counts across sequences
counts = tool.update_count_one_K(counts,max_K,seq,True)
counts = tool.harvest_counts_from_K(counts,max_K)
fdict = tool.count_to_frequency(counts,max_K)
freqs = list(fdict.values())
collection.append(freqs)
return np.asarray(collection)
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
show_time()
print("X shape",np.shape(Xfrq))
print(type(Xfrq),"of",type(Xfrq[0]),"of",type(Xfrq[0][0]))
print("y shape",np.shape(y))
```
## Neural network
```
def make_DNN():
dt=np.float32
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
#dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=dt))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
return dnn
model = make_DNN()
print(model.summary())
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(Xfrq,y)
# TO DO: run trained model on (pc_test,nc_test)
# and draw the AUC.
# Borrow code from other notebooks.
```
| github_jupyter |
```
data_path = '../../../data/3dObjects/sketchpad_repeated/feedback_pilot1_group_data.csv'
D = pd.read_csv(data_path)
# directory & file hierarchy
exp_path = '3dObjects/sketchpad_repeated'
analysis_dir = os.getcwd()
data_dir = os.path.abspath(os.path.join(os.getcwd(),'../../..','data',exp_path))
exp_dir = os.path.abspath(os.path.join(os.getcwd(),'../../..','experiments',exp_path))
sketch_dir = os.path.abspath(os.path.join(os.getcwd(),'../../..','analysis',exp_path,'sketches','pilot1'))
# set vars
auth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user
pswd = auth.values[0][0]
user = 'sketchloop'
host = 'rxdhawkins.me' ## cocolab ip address
# have to fix this to be able to analyze from local
import pymongo as pm
conn = pm.MongoClient('mongodb://sketchloop:' + pswd + '@127.0.0.1')
db = conn['3dObjects']
coll = db['sketchpad_repeated']
S = coll.find({ '$and': [{'iterationName':'pilot1'}, {'eventType': 'stroke'}]}).sort('time')
C = coll.find({ '$and': [{'iterationName':'pilot1'}, {'eventType': 'clickedObj'}]}).sort('time')
print(str(S.count()) + ' stroke records in the database.')
print(str(C.count()) + ' clickedObj records in the database.')
# print unique gameid's
unique_gameids = coll.find({ '$and': [{'iterationName':'pilot1'}, {'eventType': 'clickedObj'}]}).sort('time').distinct('gameid')
print(list(map(str,unique_gameids)))
# filter out records that match researcher ID's
jefan = ['A1MMCS8S8CTWKU','A1MMCS8S8CTWKV']
hawkrobe = ['A1BOIDKD33QSDK']
researchers = jefan + hawkrobe
workers = [i for i in coll.find({'iterationName':'pilot1'}).distinct('workerId') if i not in researchers]
valid_gameids = []
for i,g in enumerate(unique_gameids):
W = coll.find({ '$and': [{'gameid': g}]}).distinct('workerId')
for w in W:
if w in workers:
X = coll.find({ '$and': [{'workerId': w}, {'gameid': g}]}).distinct('trialNum')
eventType = coll.find({ '$and': [{'workerId': w}]}).distinct('eventType')
print(i, w[:4], len(X), str(eventType[0]))
if (str(eventType[0])=='clickedObj') & (len(X)==40):
valid_gameids.append(g)
print(' =========== ')
## filter if the pair clearly cheated
cheaty = ['0766-fcb90e7e-bf4a-4a46-b6d6-3165b6c12b88','7024-8ac78089-539a-428b-9d0e-b52c71a0a1b4']
valid_gameids = [i for i in valid_gameids if i not in cheaty]
print(str(len(valid_gameids)) + ' valid gameIDs (# complete games).')
from functools import reduce
TrialNum = []
GameID = []
Condition = []
Target = []
Distractor1 = []
Distractor2 = []
Distractor3 = []
Outcome = []
Response = []
Repetition = []
numStrokes = []
drawDuration = [] # in seconds
svgStringLength = [] # sum of svg string for whole sketch
svgStringLengthPerStroke = [] # svg string length per stroke
numCurvesPerSketch = [] # number of curve segments per sketch
numCurvesPerStroke = [] # mean number of curve segments per stroke
svgStringStd = [] # std of svg string length across strokes for this sketch
Outcome = []
for g in valid_gameids:
print('Analyzing game: ', g)
X = coll.find({ '$and': [{'gameid': g}, {'eventType': 'clickedObj'}]}).sort('time')
Y = coll.find({ '$and': [{'gameid': g}, {'eventType': 'stroke'}]}).sort('time')
for t in X:
targetname = t['intendedName']
Repetition.append(t['repetition'])
distractors = [t['object2Name'],t['object3Name'],t['object4Name']]
full_list = [t['intendedName'],t['object2Name'],t['object3Name'],t['object4Name']]
y = coll.find({ '$and': [{'gameid': g}, {'eventType': 'stroke'}, {'trialNum': t['trialNum']}]}).sort('time')
ns = y.count()
numStrokes.append(ns)
drawDuration.append((y.__getitem__(ns-1)['time'] - y.__getitem__(0)['time'])/1000) # in seconds
ls = [len(_y['svgData']) for _y in y]
svgStringLength.append(reduce(lambda x, y: x + y, ls))
y = coll.find({ '$and': [{'gameid': g}, {'eventType': 'stroke'}, {'trialNum': t['trialNum']}]}).sort('time')
num_curves = [len([m.start() for m in re.finditer('c', _y['svgData'])]) for _y in y]
numCurvesPerSketch.append(reduce(lambda x, y: x + y, num_curves))
numCurvesPerStroke.append(reduce(lambda x, y: x + y, num_curves)/ns)
svgStringLengthPerStroke.append(reduce(lambda x, y: x + y, ls)/ns)
svgStringStd.append(np.std(ls))
# ## aggregate game metadata
TrialNum.append(t['trialNum'])
GameID.append(t['gameid'])
Target.append(targetname)
Condition.append(t['condition'])
Response.append(t['clickedName'])
Outcome.append(t['correct'])
Distractor1.append(distractors[0])
Distractor2.append(distractors[1])
Distractor3.append(distractors[2])
# add png to D dataframe
png = []
for g in valid_gameids:
X = coll.find({ '$and': [{'gameid': g}, {'eventType': 'clickedObj'}]}).sort('time')
Y = coll.find({ '$and': [{'gameid': g}, {'eventType': 'stroke'}]}).sort('time')
# print out sketches from all trials from this game
for t in X:
png.append(t['pngString'])
D = D.assign(png=pd.Series(png).values)
# save D out as group_data.csv
D.to_csv(os.path.join(data_dir,'group_data.csv'))
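# Illustrative check of the SVG metric used above: each 'c' command in an
# SVG path string opens one cubic-curve segment, so counting them with
# re.finditer gives a curve count per stroke (toy strings, not real data):
import re
demo_strokes = ["M0,0c1,1 2,2 3,3", "M3,3c1,0 2,0 3,0c0,1 0,2 0,3"]
demo_curves = [len([m.start() for m in re.finditer('c', s)]) for s in demo_strokes]
# demo_curves == [1, 2]: 3 curve segments over 2 strokes, mean 1.5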
```
| github_jupyter |
```
import json
import re
def load_rhythm_list():
with open("平水韵表.txt", encoding="UTF-8") as file:  # the Pingshui rhyme table
rhythm_lines = file.readlines()
rhythm_dict = dict()
for rhythm_line in rhythm_lines:
rhythm_name = re.search(".*(?=[平上去入]声:)", rhythm_line).group()
rhythm_tune = re.search("[平上去入](?=声:)", rhythm_line).group()
rhythm_characters = re.sub(".*[平上去入]声:", "", rhythm_line)
for character in rhythm_characters:
if character not in rhythm_dict:
rhythm_dict[character] = list()
rhythm_dict[character].append([rhythm_name, rhythm_tune])
return rhythm_dict
RHYTHM_LIST = load_rhythm_list()
def get_rhythm(character):
rhythm_set = set()
if character in RHYTHM_LIST:
for rhythm_item in RHYTHM_LIST.get(character):
rhythm_set.add(rhythm_item[0])
if len(rhythm_set) == 1:
return list(rhythm_set)[0]
else:
return "/".join(list(rhythm_set))
else:
return "Special Char"
def get_tone(character):
"""
Map a character to "P" (level tone), "Z" (oblique tone), or "*" (ambiguous/unknown).
:param character: <str>
:return: <str>
"""
tone_set = set()
if character in RHYTHM_LIST:
for rhythm_item in RHYTHM_LIST.get(character):
tone_set.add(re.sub("[上去入]", "Z", rhythm_item[1]))
if len(tone_set) == 1:  # unambiguous tone
if (list(tone_set)[0] == "平"):
return "P"
return list(tone_set)[0]
else:
return "*"
else:
return "*"
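# Illustrative check of the tone-collapsing logic above on toy entries
# (hypothetical data; the real table comes from 平水韵表.txt). The three
# oblique tones 上/去/入 all collapse to "Z"; only a character whose every
# entry is the level tone 平 maps to "P"; mixed entries are ambiguous "*".
import re
def _demo_tone(entry_tones):
    tone_set = {re.sub("[上去入]", "Z", t) for t in entry_tones}
    if len(tone_set) == 1:
        return "P" if tone_set == {"平"} else "Z"
    return "*"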
def inspect_sentence_tone(sentence_tone):
"""
Match a line's tone string against the regulated tonal patterns.
:return: <bool> matched, <str> canonical pattern, <str> note
"""
if re.match("[PZ*]?[P*]?[PZ*][Z*][P*][P*][Z*]", sentence_tone): # (Z)ZPPZ
return True, "ZZPPZ", ""
elif re.match("[PZ*]?[Z*]?[PZ*][P*][P*][Z*][Z*]", sentence_tone): # (P)PPZZ
return True, "PPPZZ", ""
elif re.match("[PZ*]?[P*]?[PZ*][Z*][Z*][P*][P*]", sentence_tone): # (Z)ZZPP
return True, "ZZZPP", ""
elif re.match("[PZ*]?[Z*]?[P*][P*][Z*][Z*][P*]", sentence_tone): # PPZZP
return True, "PPZZP", ""
elif re.match("[PZ*]?[P*]?[PZ*][Z*][Z*][P*][Z*]", sentence_tone): # (Z)ZZPZ, variant of (Z)ZPPZ
return True, "ZZPPZ", ""
elif re.match("[PZ*]?[P*]?[PZ*][Z*][PZ*][Z*][Z*]", sentence_tone): # (Z)Z(P)ZZ, variant of (Z)ZPPZ
return True, "ZZPPZ", ""
elif re.match("[PZ*]?[Z*]?[P*][P*][Z*][PZ*][Z*]", sentence_tone): # PPZ(P)Z, variant of (P)PPZZ
return True, "PPPZZ", ""
elif re.match("[PZ*]?[Z*]?[PZ*][Z*][P*][P*][P*]", sentence_tone): # (Z)ZPPP, variant of (Z)ZZPP
return True, "ZZZPP", ""
elif re.match("[PZ*]?[Z*]?[Z*][P*][P*][Z*][P*]", sentence_tone): # ZPPZP, variant of PPZZP
return True, "PPZZP", ""
elif re.match("[PZ*]?[Z*]?[P*][P*][P*][Z*][P*]", sentence_tone): # PPPZP, variant of PPZZP
return True, "PPZZP", ""
else:
return False, "", "拗句"  # "ao ju": an irregular line that fits no regulated pattern
def is_tone_same(tone_1, tone_2):
"""
True if the two tones can be read as the same tone ("*" matches either).
"""
if (tone_1 == "Z" or tone_1 == "*") and (tone_2 == "Z" or tone_2 == "*"):
return True
elif (tone_1 == "P" or tone_1 == "*") and (tone_2 == "P" or tone_2 == "*"):
return True
else:
return False
def is_tone_differ(tone_1, tone_2):
"""
True if the two tones can be read as opposite tones ("*" matches either).
:param tone_1: <str>
:param tone_2: <str>
:return: <bool>
"""
if (tone_1 == "Z" or tone_1 == "*") and (tone_2 == "P" or tone_2 == "*"):
return True
elif (tone_1 == "P" or tone_1 == "*") and (tone_2 == "Z" or tone_2 == "*"):
return True
else:
return False
def inspect_corresponding(first_type, second_type):
"""
:param first_type: <str>
:param second_type: <str>
:return: <bool>
"""
if len(first_type) != len(second_type):
return False
return is_tone_differ(first_type[-2], second_type[-2]) and is_tone_differ(first_type[-1], second_type[-1])
def inspect_sticky(last_second_type, this_first_type):
"""
:param last_second_type: <str>
:param this_first_type: <str>
:return: <bool>
"""
if len(last_second_type) != len(this_first_type):
return False
return is_tone_same(last_second_type[-2], this_first_type[-2])
def poem_analyse(title, author, content):
print(content)
sentences = [sentence for sentence in re.split("[,。?!]", content) if sentence != ""]
punctuations = re.findall("[,。?!]", content)
# check that the poem has the expected number of lines (4 or 8).
if len(sentences) != 4 and len(sentences) != 8:
print("************** Bad Example ********************")
print("The poem does not follow number of lines.")
return False
# check that each sentence follows the length constraint (5 or 7 characters)
if not all([len(sentence) == 5 or len(sentence) == 7 for sentence in sentences]):
print("************** Bad Example ********************")
print("The poem does not follow number of chars.")
return False
# check the ping/ze (tonal) values.
sentence_tone_list = list()
for sentence in sentences:
sentence_tone_list.append("".join([get_tone(character) for character in sentence]))
# every even-numbered sentence (a rhyming line) must end on a level (P) tone
if not all([sentence_tone_list[i][-1] in ["P", "*"] for i in range(len(sentences)) if i % 2 == 1]):
print("************** Bad Example ********************")
print("The poem does not follow tonal pattern")
return False
print("《" + title + "》", author)
last_second_type = ""
for i in range(len(sentences) // 2):
first_sentence = sentences[2 * i + 0]
second_sentence = sentences[2 * i + 1]
first_tone = sentence_tone_list[2 * i + 0]
second_tone = sentence_tone_list[2 * i + 1]
second_rhythm = "(" + get_rhythm(second_sentence[-1]) + ")"
first_correct, first_type = inspect_sentence_tone(first_tone)
second_correct, second_type = inspect_sentence_tone(second_tone)
other_analysis = ""
if first_correct and second_correct:
if not inspect_corresponding(first_type, second_type):
other_analysis += "【失对】"
if last_second_type and not inspect_sticky(last_second_type, first_type):
other_analysis += "【失黏】"
last_second_type = second_type
output_sentence = first_sentence + punctuations[2 * i + 0] + second_sentence + punctuations[2 * i + 1]
output_analysis = first_tone + " " + second_tone + second_rhythm + other_analysis
print(output_sentence)
print(output_analysis)
# print("**********************************")
return True
if __name__ == "__main__":
with open("test.json", encoding="UTF-8") as file:
poem_json = json.loads(file.read())
for poem_item in poem_json["data"]:
if poem_analyse(poem_item["title"], poem_item["author"], poem_item["content"].replace("\n", "")):
print("press enter to continue...")
print("************** Good Example ********************")
input()
import pandas as pd
df = pd.read_fwf('../data/train.txt', header=None)
df
with open('../data/WY_train.txt', 'w') as w:
# for index, row in df.iloc[0:len(df) - validateNum].iterrows():
for index, row in df.iloc[:].iterrows():
content = row.values[0]
if (len(content) == 24):
w.write(content+'\n')
# print(len(content))
# w.write(content+'\n')
with open('../data/QY_train.txt', 'w') as w:
# for index, row in df.iloc[0:len(df) - validateNum].iterrows():
for index, row in df.iloc[:].iterrows():
content = row.values[0]
if (len(content) == 32):
w.write(content+'\n')
```
| github_jupyter |
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```
NAME = ""
COLLABORATORS = ""
```
---
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Pose Basics](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.01-Pose-Basics.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Accessing PyRosetta Documentation](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.02-Working-with-Pose-Residues.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Working with Pose residues
Keywords: total_residue(), chain(), number(), pdb2pose(), pose2pdb()
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
init()
```
**From previous section:**
Make sure you are in the directory with the pdb files:
`cd google_drive/My\ Drive/student-notebooks/`
```
pose = pose_from_pdb("inputs/5tj3.pdb")
pose_clean = pose_from_pdb("inputs/5tj3.clean.pdb")
```
We can use methods in `Pose` to count residues and pick out residues from the pose. Remember that `Pose` is a python class, and to access methods it implements, you need an instance of the class (here `pose` or `pose_clean`) and you then use a dot after the instance.
```
print(pose.total_residue())
print(pose_clean.total_residue())
# Did you catch all the missing residues before?
```
Store the `Residue` information for residue 20 of the pose by using the `pose.residue(20)` function.
```
# residue20 = type here
# YOUR CODE HERE
raise NotImplementedError()
print(residue20.name())
```
## Exercise 2: Residue objects
Use the `pose`'s `.residue()` object to get the 24th residue of the protein pose. What is the 24th residue in the PDB file (look in the PDB file)? Are they the same residue?
```
# store the 24th residue in the pose into a variable (see residue20 example above)
# YOUR CODE HERE
raise NotImplementedError()
# what other methods are attached to that Residue object? (type "residue24." and hit Tab to see a list of commands)
```
We can immediately see that the numbering PyRosetta internally uses for pose residues is different from the PDB file. The information corresponding to the PDB file can be accessed through the `pose.pdb_info()` object.
```
print(pose.pdb_info().chain(24))
print(pose.pdb_info().number(24))
```
By using the `pdb2pose` method in `pdb_info()`, we can turn PDB numbering (which requires a chain ID and a residue number) into Pose numbering
```
# PDB numbering to Pose numbering
print(pose.pdb_info().pdb2pose('A', 24))
```
Use the `pose2pdb` method in `pdb_info()` to see what is the corresponding PDB chain and residue ID for pose residue number 24
```
# Pose numbering to PDB numbering
# YOUR CODE HERE
raise NotImplementedError()
```
Now we can see how to examine the identity of a residue by PDB chain and residue number.
Once we get a residue, there are various methods in the `Residue` class that may be useful for analysis. We can get instances of the `Residue` class from `Pose`. For instance, we can do the following:
```
res_24 = pose.residue(24)
print(res_24.name())
print(res_24.is_charged())
```
| github_jupyter |
# Expected return
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
---
A common way of evaluating a portfolio is computing its expected return, which corresponds to the reward for investing in that portfolio, and the variance of the return, which corresponds to the risk. To compute the expected return, we use the linearity property of expected value:
$$ E(R_p) = E(w_1 R_1 + w_2 R_2 + \ldots + w_n R_n) = w_1 E(R_1) + w_2 E(R_2) + \ldots w_n E(R_n) $$
So the expected return of our portfolio $R_p$, which is a weighted average of some securities, is the weighted average of the expected returns on the individual securities $R_i$. As with the expected value of other variables, we can compute this from a known or estimated probability distribution, or empirically from historical data.
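As a quick numerical sketch of this linearity property (the weights and expected returns below are hypothetical, chosen just for illustration):

```python
import numpy as np

# Hypothetical weights and expected returns for a three-security portfolio
w = np.array([0.5, 0.3, 0.2])       # portfolio weights, summing to 1
mu = np.array([0.08, 0.12, 0.05])   # E[R_1], E[R_2], E[R_3]

# E(R_p) is the weighted average of the individual expected returns
expected_portfolio_return = w @ mu
print(expected_portfolio_return)    # 0.5*0.08 + 0.3*0.12 + 0.2*0.05 = 0.086
```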
# Portfolio variance
To compute the variance of the portfolio, we need to define the <i>covariance</i> between two random variables:
$$ Cov(R_i, R_j) = E[(R_i - E[R_i])(R_j - E[R_j])] $$
This is an extension of the idea of variance — notice that $Cov(R_i, R_i) = E[(R_i - E[R_i])^2] = Var(R_i)$. We can summarize the covariances in a covariance matrix, where the $ij$th entry is $Cov(R_i, R_j)$. For two variables, it looks like this:
$$ \left( \begin{array}{cc}
Var(R_1) & Cov(R_1, R_2) \\
Cov(R_1, R_2) & Var(R_2) \end{array} \right) $$
The covariance is useful here because when we expand the variance of the portfolio returns $Var(R_p)$, we find that
$$ Var(R_p) = E[(R_p - E[R_p])^2] = E\left[ \left(\sum_{i=1}^n w_i R_i - E\left[\sum_{i=1}^n w_i R_i \right] \right)^2 \right] = \ldots = \sum_{i=1}^n \sum_{j=1}^n w_i w_j Cov(R_i, R_j)$$
So, the variance of the portfolio returns is the weighted sum of the covariances of the individual securities' returns (with each term involving two different securities appearing twice). If our portfolio consists of two securities, then
$$Var(R_p) = w_1^2 Var(R_1) + 2w_1 w_2 Cov(R_1, R_2) + w_2^2 Var(R_2)$$
Notice that there are $n^2$ terms in this sum, of which only $n$ are variances of individual securities $Var(R_i)$. Therefore, the covariances of pairs of securities play a huge role. We will see in the next section that covariances correspond to correlations, which is why minimizing correlations between securities is vital for minimizing portfolio variance (i.e. risk).
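To make the double sum concrete, here is a small sketch with a made-up two-security covariance matrix, checking that the quadratic form $w^T \Sigma w$ matches the expanded two-security formula above:

```python
import numpy as np

# Hypothetical covariance matrix: Var(R_1)=0.04, Var(R_2)=0.09, Cov(R_1,R_2)=0.01
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w = np.array([0.6, 0.4])  # portfolio weights

# The double sum over w_i w_j Cov(R_i, R_j) is the quadratic form w' Sigma w
var_p = w @ Sigma @ w

# Expanded two-security formula
var_expanded = w[0]**2 * Sigma[0, 0] + 2 * w[0] * w[1] * Sigma[0, 1] + w[1]**2 * Sigma[1, 1]
print(var_p, var_expanded)  # both approximately 0.0336
```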
## Correlation
The covariance of two variables is negative if, on average, one is above its expected value when the other is below its expected value, and vice versa. The covariance is positive if the two variables tend to be on the same side of their expected values at the same time (in particular, variance is always positive). However, the magnitude of the covariance doesn't tell us much. Therefore we define <i>correlation</i> as follows:
$$ \rho(R_i, R_j) = \frac{Cov(R_i, R_j)}{\sigma(R_i)\sigma(R_j)} $$
The correlation is normalized and unitless, so its value is always between -1 and 1. Since $\sigma$ is always positive, the same sign rules apply to correlation as to covariance. Additionally, we can say that the smaller $\rho$ is in absolute value, the weaker the linear relationship ($R_1 = a + b R_2 + $ error) between the variables. A positive correlation means that $b>0$, and the variables are called correlated. A variable has a correlation of 1 with itself, which indicates a perfect linear relationship. A negative correlation indicates an inverse linear relationship ($b<0$), and we say that the variables are anticorrelated. If $\rho = 0$, then $b=0$ in this relationship, and the variables are uncorrelated.
Two independent variables are always uncorrelated, but the converse is not always true.
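As a small numerical check of this normalization (on synthetic data, not the assets used below): dividing the sample covariance by the product of the sample standard deviations reproduces what `np.corrcoef` computes, and the result lands in $[-1, 1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(size=10_000)  # constructed to be positively correlated with x

# rho = Cov(x, y) / (sigma_x * sigma_y), using sample (ddof=1) estimates throughout
cov_xy = np.cov(x, y)[0, 1]
rho = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))

assert -1.0 <= rho <= 1.0
assert abs(rho - np.corrcoef(x, y)[0, 1]) < 1e-12
print(rho)  # close to the theoretical value 2/sqrt(5) ≈ 0.894 for this model
```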
Below we compute the correlation matrix for some returns, and plot the most and the least correlated pairs.
```
# Imports for plotting and numerics
import matplotlib.pyplot as plt
import numpy as np
# Get returns data for 5 different assets
assets = ['XLK', 'MIG', 'KO', 'ATHN', 'XLY']
data = get_pricing(assets, fields='price', start_date='2014-01-01', end_date='2015-01-01').pct_change()[1:].T
# Print pairwise correlations
print('Correlation matrix:\n', np.corrcoef(data))
# Print the mean return of each
print('Means:\n', data.T.mean())
# Plot what we've identified as the most and the least correlated pairs from the matrix above
plt.scatter(data.iloc[4], data.iloc[0], alpha=0.5)
plt.scatter(data.iloc[2], data.iloc[1], color='r', alpha=0.4)
plt.legend(['Correlated', 'Uncorrelated']);
```
All of the means are approximately 0, so we can intuitively see the correlation from the plot. The blue data is mostly in the top right and bottom left quadrants, so the variables are generally either both positive (i.e. both above their means, since both have mean approximately 0) or both negative. This means that they are positively correlated. The red data is scattered fairly evenly across the quadrants, so there is no relationship between the data; x and y are both positive or both negative about as often as they have opposite signs.
| github_jupyter |
**<p style="font-size: 35px; text-align: center">Probability Distribution Exercises</p>**
***<center>Miguel Ángel Vélez Guerra</center>***
<hr/>

<hr/>
<hr/>
**<p id="tocheading">Tabla de contenido</p>**
<br/>
<div id="toc"></div>
<hr/>
<hr/>
We will start by importing the libraries that will help us work through these exercises.
- numpy for the necessary numerical computations
- scipy.stats for the statistical functions and distributions we need
- matplotlib.pyplot for plotting the functions
- **mstats**, my own library, created with some methods and functions for plotting and for more specific calculations
```
#-------Importing from other folder------#
import sys
sys.path.insert(0, "../resources/")
import mstats
#-----------Miguel's statistics----------#
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
%%javascript
// Script to generate the table of contents
$.getScript('../resources/table_of_contents.js')
```
<hr/>
<hr/>
## 1. Binomial distribution
A manufacturer in California supplies you with a prototype design for an aircraft part that your business requires. This new product, which is shipped in lots of n = 12, suffers a defect rate of 40%.
a. If you do not want a risk greater than 10% in the probability that 5 of the 12 are defective, should you buy from that supplier?
b. If you do not want to face a risk greater than 20% in the probability that more than 5 turn out defective, should you buy from that supplier?
```
n_lotes = 12 # Number of aircraft prototype lots taken as the sample
pi_lotes = 0.4 # Defect rate of the lots
```
From the data the exercise gives us, we can see that the values follow a *binomial* distribution, since we are given a sample size (n) and a success probability (pi) for the event occurring.
```
distr_bin_lotes = ss.binom(n_lotes,pi_lotes)
mstats.graficar_discreta(distr_bin_lotes, "Binomial Distribution")
```
Here we declare the full binomial distribution our exercise will follow, based on the sample size (12) and our probability that a prototype is defective (0.4). We do this using the *.binom (binomial distribution)* function from the *scipy.stats* library
**a) If you do not want a risk greater than 10% in the probability that 5 of the 12 are defective, should you buy from that supplier?**
```
x_lotes_a = 5 # Number of defective lots to evaluate
```
Now we will use the *.pmf (probability mass function)* from the *scipy.stats.binom* library, which determines the probability at a given point.
It gives us the probability that, out of the 12 lots with a 40% defect rate, there are **exactly** 5 defective lots.
```
prob_lotes_a = distr_bin_lotes.pmf(x_lotes_a)
mstats.graficar_discreta_cdf(distr_bin_lotes, x_lotes_a, x_lotes_a)
prob_lotes_a
```
As we can see from the result the function returns, there is a **22.7%** probability that 5 of the 12 lots are defective.
**R/ a)** Since we do not want a risk greater than **10%** for 5 of 12 units, we reject that supplier and will not buy their prototypes, because this supplier carries a **22.7%** risk
<hr/>
**b) If you do not want to face a risk greater than 20% in the probability that more than 5 turn out defective, should you buy from that supplier?**
```
x_lotes_b = 5 # Number of defective lots to evaluate
```
Now we will use the *.cdf (cumulative distribution function)* from the *scipy.stats.binom* library, which gives the accumulated value of the function up to the point passed as a parameter.
It returns the probability that, out of the 12 lots with a 40% defect rate, there are **5 or fewer** defective lots. Since we need to evaluate the probability that there are **more than 5** defective lots, we subtract the value obtained from 1 (1 - value)
```
prob_lotes_b = 1 - distr_bin_lotes.cdf(x_lotes_b)
mstats.graficar_discreta_cdf(distr_bin_lotes, 1, x_lotes_b)
prob_lotes_b
```
As we can see from the result the function returns, there is a **33.47%** probability of more than 5 defective lots in the sample of 12 lots.
**R/ b)** Since we do not want to face a risk greater than **20%** of more than 5 lots turning out defective, we must reject this supplier's offer, which carries a **33.47%** risk
<hr/>
<hr/>
## 2. Hypergeometric distribution
A sporting goods store has in stock N = 20 pairs of ski boots, of which r = 8 are your size. If you select n = 3 pairs that you want, what is the probability that X = 1 fits you well?
```
N_botas = 20 # Total population of pairs of ski boots
r_botas = 8 # Number of pairs in our size
n_botas = 3 # Number of pairs of boots selected for the sample
```
If we analyze the information and data the exercise provides, we notice that the values follow a *hypergeometric* distribution, since we are given the size of the population to analyze (N), a proportion of the population with a special condition (r), and a sample to analyze with respect to that special condition (n).
```
distr_hyper_botas = ss.hypergeom(N_botas, r_botas, n_botas)
mstats.graficar_discreta(distr_hyper_botas, "Hypergeometric Distribution")
```
We declare the hypergeometric distribution our exercise will follow, using the *.hypergeom (hypergeometric distribution)* function from the *scipy.stats* library.
This function takes as parameters the population size (N), the number of successes in the population (r), and the size of the sample taken (n)
**What is the probability that X = 1 fits you well?**
```
x_botas = 1 # Number of pairs of boots selected for evaluation
```
We are going to determine the probability that **x_botas = 1** selected pair is in our size **r_botas = 8**, out of a sample of **n_botas = 3** taken from the total of **N_botas = 20** pairs of boots available.
For that we will use the *.pmf (probability mass function)* from the *scipy.stats.hypergeom* library, which gives the probability at a given point.
```
prob_botas = distr_hyper_botas.pmf(x_botas)
mstats.graficar_discreta_cdf(distr_hyper_botas, x_botas, x_botas)
prob_botas
```
**R/** The probability that 1 of the pairs of boots fits us is **46.31%**
<hr/>
<hr/>
## 3. Poisson distribution
The cable used to secure bridge structures has an average of 3 defects per 100 yards. If you need 50 yards, what is the probability that there is one defect?
```
mu_yardas = (3/100) # Average number of defects per yard
n_yardas = 50 # Number of yards of cable to consider
```
We can state that the exercise's values follow a *Poisson* distribution, since we have a rate of occurrences of an event per unit of space that is always constant.
```
mu_yardas_cable = (mu_yardas * n_yardas)
```
We are told that, given that average of 3 defects per 100 yards, we have to determine the average number of defects in 50 yards, which is the length of cable we have
```
distr_poi_yardas = ss.poisson(mu_yardas_cable)
mstats.graficar_discreta(distr_poi_yardas, "Poisson Distribution")
```
So, with these data, we can now build the *Poisson* distribution the data follow.
For this we will use the *.poisson (Poisson distribution)* function from the *scipy.stats* library, which takes as a parameter the 'mu' of our exercise.
**What is the probability that there is one defect?**
```
n_yardas = 50 # Number of yards of cable to consider
x_cables = 1 # Number of defects to evaluate in the cable
```
Now we are going to determine the probability that there is **x_cables = 1** defect in the cable under the given conditions.
For this we will use the *.pmf (probability mass function)* from the *scipy.stats.poisson* library, which gives the probability at a given point
```
prob_yardas = distr_poi_yardas.pmf(x_cables)
mstats.graficar_discreta_cdf(distr_poi_yardas, x_cables, x_cables)
prob_yardas
```
**R/** The probability that there is one defect in the selected cable is **33.47%**
<hr/>
<hr/>
## 4. Exponential distribution
As the manager of Burguer Heaven, you observe that customers enter your establishment at a rate of 8 per hour. What is the probability that more than 15 minutes pass between the arrival of two customers?
```
mu_exp = (8/60) # Average rate of customers entering the establishment per minute
```
Given the shape that exponential distributions follow, we can treat this exercise's distribution as *exponential*, since we have an average rate of occurrences over a time interval.
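A side note on parameterization (an aside, not part of the original exercise): the exponential survival probability $P(T > t) = e^{-\lambda t}$ can be checked directly with the standard library, using the rate $\lambda = 8/60$ arrivals per minute:

```python
import math

rate = 8 / 60          # lambda: customer arrivals per minute
mean_wait = 1 / rate   # 7.5 minutes between arrivals on average

# P(more than 15 minutes between two arrivals) = exp(-lambda * t)
p_more_than_15 = math.exp(-rate * 15)
print(p_more_than_15)  # exp(-2), approximately 0.1353
```

With `scipy.stats`, the equivalent distribution is `ss.expon(scale=1/rate)`: the `scale` argument is the mean waiting time $1/\lambda$, while the first positional argument is a location shift, not the rate.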
```
distr_exp = ss.expon(scale=1/mu_exp) # scale = 1/lambda, the mean time between arrivals (7.5 minutes)
mstats.graficar_continua(distr_exp, "Exponential Distribution")
```
**What is the probability that more than 15 minutes pass between the arrival of two customers?**
```
x_exp = 15 # Minutes to evaluate between the arrival of 2 customers
```
So, with an average of 'mu' entries per minute into the establishment, we want to determine the probability that more than 15 minutes pass between the arrival of 2 customers.
For this we will use the *cdf_exponential (cumulative probability function)* declared at the beginning of the workshop, which lets us compute the accumulated probability up to the point passed as a parameter along with the distribution's 'mu'. After computing this probability, we have to subtract it from 1, since we are interested in the probability that **more** than 15 minutes pass between the arrival of 2 customers.
```
prob_exp = 1 - mstats.cdf_exponential(mu_exp, x_exp)
mstats.graficar_continua_cdf(distr_exp, x_exp, distr_exp.ppf(0.99))
prob_exp
```
**R/** The probability that more than 15 minutes pass between the arrival of 2 customers at the establishment is **13.53%**
<hr/>
<hr/>
## 5. Uniform distribution
Job completion times range from 10.2 minutes to 18.3 minutes and are thought to be uniformly distributed. What is the probability that between 12.7 and 14.5 minutes are required to perform this job?
```
a_unif = 10.2 # Minimum job completion time
b_unif = 18.3 # Maximum job completion time
rango_unif = b_unif - a_unif # Overall range of the job completion times
```
We can assert that the exercise follows a *uniform* distribution, since we have a range over which an event occurs, and we are asked for the probability that another range lies within the distribution's range.
```
distr_unif_trabajos = ss.uniform(a_unif, rango_unif)
mstats.graficar_continua(distr_unif_trabajos, "Uniform Distribution")
```
With this information we can build the *uniform* distribution the exercise follows, with the help of the *.uniform* function from the *scipy.stats* library, which takes as parameters the minimum point of the distribution and its range *(maximum time - minimum time)*
**What is the probability that between 12.7 and 14.5 minutes are required to perform this job?**
```
a_trabajo = 12.7 # Minimum job completion time to evaluate
b_trabajo = 14.5 # Maximum job completion time to evaluate
rango_trabajo = b_trabajo - a_trabajo # Range of the job completion times to evaluate
```
Now we determine the probability that the time needed to finish a job lies in the range **rango_trabajo = 14.5 - 12.7**.
We need to compute the probability accumulated up to the maximum value and subtract the probability accumulated up to the minimum value, to obtain the probability that this interval of values occurs in the uniform distribution.
For this we will use the *.cdf (cumulative probability function)* from the *scipy.stats.uniform* library.
```
prob_unif = distr_unif_trabajos.cdf(b_trabajo) - distr_unif_trabajos.cdf(a_trabajo)
mstats.graficar_continua_cdf(distr_unif_trabajos, a_trabajo, b_trabajo)
prob_unif
```
**R/** The probability that between 12.7 and 14.5 minutes are required is **22.22%**
<hr/>
<hr/>
## 6. Normal distribution
The United States Department of Agriculture, in a study on crops, has detected that daily rainfall in certain places in Hawaii appears to be normally distributed with a mean of 2.2 inches during the rainy season. The standard deviation was determined to be 0.8 inches.
a. What is the probability that it rains more than 3.3 inches in one day during the rainy season?
b. Find the probability that it rains more than 1.3 inches.
c. What is the probability that rainfall is between 2.7 and 3.0 inches?
d. How much rainfall must occur to exceed 10% of the daily rainfalls?
```
mu_precip = 2.2 # Average daily rainfall in inches
rho_precip = 0.8 # Standard deviation of the daily rainfall in inches
```
We can state that the distribution is *normal*, because we have a population mean and also a population standard deviation, assuming all the data are related to those values.
```
distr_norm_precip = ss.norm(mu_precip, rho_precip)
mstats.graficar_continua(distr_norm_precip, "Normal Distribution")
```
We have built the *normal* distribution with the help of the *.norm (normal distribution)* function from the *scipy.stats* library, which takes as parameters the population mean and the population standard deviation.
**a) What is the probability that it rains more than 3.3 inches in one day during the rainy season?**
```
x_precip_a = 3.3 # Inches of rainfall to evaluate
```
We are going to determine the probability that it rains more than 3.3 inches in a day, **x_precip > 3.3**; for this we take the probability accumulated up to that point and subtract that probability value from 1.
```
prob_precip_a = 1 - distr_norm_precip.cdf(x_precip_a)
mstats.graficar_continua_cdf(distr_norm_precip, x_precip_a, distr_norm_precip.ppf(0.99))
prob_precip_a
```
**R/ a)** The probability that it rains more than 3.3 inches in one day during the rainy season is **8.45%**
<hr/>
**b) Find the probability that it rains more than 1.3 inches.**
```
x_precip_b = 1.3 # Inches of rainfall to evaluate
```
We are going to determine the probability that it rains more than 1.3 inches in a day, **x_precip > 1.3**; for this we take the probability accumulated up to that point and subtract that probability value from 1.
```
prob_precip_b = 1 - distr_norm_precip.cdf(x_precip_b)
mstats.graficar_continua_cdf(distr_norm_precip, x_precip_b, distr_norm_precip.ppf(0.99))
prob_precip_b
```
**R/ b)** There is an **86.97%** probability that it rains more than 1.3 inches
<hr/>
**c) What is the probability that rainfall is between 2.7 and 3.0 inches?**
```
x1_precip_c = 2.7 # Minimum inches of rainfall to evaluate
x2_precip_c = 3 # Maximum inches of rainfall to evaluate
```
So, we have to determine the probability that rainfall is between 2.7 and 3 inches; for this we determine the probability accumulated up to the maximum point (3) and subtract the probability accumulated up to the minimum point (2.7).
```
prob_precip_c = distr_norm_precip.cdf(x2_precip_c) - distr_norm_precip.cdf(x1_precip_c)
mstats.graficar_continua_cdf(distr_norm_precip, x1_precip_c, x2_precip_c)
prob_precip_c
```
**R/ c)** The probability that rainfall is between 2.7 and 3 inches is **10.73%**
<hr/>
**d) How much rainfall must occur to exceed 10% of the daily rainfalls?**
```
prob_precip_d = 0.1 # Probability that 'x' amount of daily rainfall occurs
```
So, to find the rainfall value corresponding to the probability 0.1 in the normal distribution, we must treat it as a mathematical function, where the 'x' axis corresponds to the rainfall values the exercise takes, and the 'y' axis corresponds to the probabilities associated with those 'x' values, according to the function the normal distribution follows.
\begin{align}
p(x;\mu, \sigma^2) = \frac{1}{\sigma \sqrt{2 \pi}} e^{\frac{-1}{2}\left(\frac{x - \mu}{\sigma} \right)^2}
\end{align}
Taking this into account, we can find the value of the point on 'x' if we know the value of the point on 'y'
```
distr_norm_precip.ppf(prob_precip_d)
```
**R/ d)** More than **1.1747** inches of rainfall must occur to exceed 10% of the daily rainfalls.
<hr/>
<hr/>
## 7. Normal approximation to the binomial distribution
45% of all employees at the management training center at Condor Magnetics have college degrees. What is the probability that, of 150 randomly selected employees, 72 have a college degree?
```
pi_empleados = 0.45 # Percentage of employees with college degrees
n_empleados = 150 # Number of employees in the sample
```
From the statement and the data given, we can conclude that this information refers to a binomial distribution, but this sample is too large, and problems can arise when building its distribution.
So what we will do is verify whether we can approximate it with a normal distribution, checking that multiplying the sample by the success probability ***(n * pi)*** and by the failure probability ***(n * (1 - pi))*** each gives a value greater than or equal to 5, and that pi is close to 0.5.
```
n_empleados * pi_empleados
n_empleados * (1 - pi_empleados)
pi_empleados
```
The conditions are met, so we can approximate this binomial distribution with a normal one; for this, the population mean is given by the product ***(n * pi)*** and the standard deviation is ***√(n(π)(1-π))***
```
mu_empleados = n_empleados * pi_empleados
rho_empleados = ((n_empleados*pi_empleados) * (1 - pi_empleados))**0.5
```
With these values we can build the normal function.
```
distr_norm_empleados = ss.norm(mu_empleados, rho_empleados)
mstats.graficar_continua(distr_norm_empleados, "Distribución normal")
```
**What is the probability that, of the 150 randomly selected employees, exactly 72 hold a college degree?**
```
x_empleados = 72 # Number of employees to evaluate
```
Since we have approximated a discrete distribution with a normal one, we cannot compute the exact probability of a single point, so we apply the continuity correction factor: we compute the probability between **71.5 and 72.5**.
```
prob_empleados = distr_norm_empleados.cdf(72.5) - distr_norm_empleados.cdf(71.5)
mstats.graficar_continua_cdf(distr_norm_empleados, 71.5, 72.5)
prob_empleados
```
**Answer:** The probability that, of the 150 randomly selected employees, exactly 72 hold a college degree is **4.98%**.
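As a cross-check of the approximation, the exact binomial probability can be compared against the continuity-corrected normal value using only the standard library (`math.comb` and `math.erf`). This is an independent sketch, not part of the original exercise code:

```python
import math

n, p, k = 150, 0.45, 72

# Exact binomial probability P(X = 72)
exact = math.comb(n, k) * p**k * (1 - p)**(n - k)

# Normal approximation with continuity correction: P(71.5 < X < 72.5)
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
approx = cdf(72.5) - cdf(71.5)

print(round(exact, 4), round(approx, 4))  # both close to 0.0498
```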
<hr/>
<hr/>
## 8. Sampling distributions
Investment records show that the average rate of return for firms in the consumer goods industry is 30%, with a standard deviation of 12%. If a sample of 250 such firms is selected, what is the probability that the mean return of these firms is between 30% and 31%?
```
mu_firmas = 30 # Average rate of return for the firms
rho_firmas = 12 # Standard deviation of the return for the firms
n_firmas = 250 # Sample size
```
We can identify a normal distribution in the population, but we are told that a sample is drawn from that population and evaluated, so we must apply the concepts of sampling distributions. The standard deviation we need (the standard error of the mean) is the population standard deviation divided by the square root of the sample size:
***standard deviation / √(sample size)***
```
rho_error = rho_firmas / (n_firmas**0.5)
```
With this defined, we can build the normal distribution for this exercise, using the population mean and the standard error we just found.
```
distr_norm_firmas = ss.norm(mu_firmas, rho_error)
mstats.graficar_continua(distr_norm_firmas, "Distribución normal")
```
**What is the probability that the mean return of these firms is between 30% and 31%?**
```
x1_firmas = 30 # First mean return percentage to evaluate
x2_firmas = 31 # Second mean return percentage to evaluate
```
Now we compute the cumulative probability up to 31 and subtract the cumulative probability up to 30; this gives the probability that the mean return of the firms is between 30% and 31%.
```
prob_firmas = distr_norm_firmas.cdf(x2_firmas) - distr_norm_firmas.cdf(x1_firmas)
mstats.graficar_continua_cdf(distr_norm_firmas, x1_firmas, x2_firmas)
prob_firmas
```
**Answer:** The probability that the mean return of these firms is between 30% and 31% is **40.61%**.
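The ~40.6% figure can be reproduced without scipy, using only `math.erf`; a small sketch to verify the standard-error logic:

```python
import math

mu, sigma_pop, n = 30.0, 12.0, 250
se = sigma_pop / math.sqrt(n)  # standard error of the mean, ~0.7589

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

prob = norm_cdf(31, mu, se) - norm_cdf(30, mu, se)
print(round(prob, 3))  # -> 0.406
```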
<hr/>
<hr/>
## 9. Sampling proportions
Only 22% of all firms in the consumer goods industry market their products directly to the final consumer. If a sample of 250 firms reveals a proportion of more than 20% engaged in direct marketing, you plan to make your next purchase from firms in this industry. How likely is it that you will spend your hard-earned money elsewhere?
```
p_mercado = 0.22 # Proportion of all firms in the industry that market directly to the consumer
n_mercado = 250 # Size of the sample of firms in the industry
```
Note that we have a population with a known proportion meeting a condition (success), and a large sample drawn from it whose proportion meeting that condition is also known.
So this is a sampling-proportions problem.
```
p_muestra = 0.20 # Sample proportion of firms in the industry that market directly to the consumer
```
We will compute the probability that our investment goes to one of the firms that markets directly to the consumer; to do this, we approximate this sampling proportion with the parameters of a normal distribution.
Before that, we must check that the sample size times the proportions of success and failure are both greater than or equal to 5 *(np >= 5 and n(1-p) >= 5)*.
```
n_mercado * p_mercado
n_mercado * (1 - p_mercado)
```
Good — we can use the normal approximation, keeping in mind that the mean of the sampling distribution of proportions equals the population proportion, and its standard deviation is
***standard deviation = √((success proportion * failure proportion) / (sample size))***
```
mu_mercado = p_mercado # Mean of the sampling distribution of proportions
rho_mercado = ((p_mercado * (1 - p_mercado)) / (n_mercado))**0.5 # Standard deviation of the sampling distribution
```
With this defined we can build the normal distribution for the exercise, passing it mu and rho. Spending your money elsewhere corresponds to the sample proportion being at most 20%, i.e., the cumulative probability up to 0.20.
```
distr_norm_mercado = ss.norm(mu_mercado, rho_mercado)
mstats.graficar_continua(distr_norm_mercado, "Distribución Normal")
prob_mercado = distr_norm_mercado.cdf(0.20)
mstats.graficar_continua_cdf(distr_norm_mercado, distr_norm_mercado.ppf(0.01), 0.20)
prob_mercado
```
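A stdlib-only cross-check of this last probability (a sketch, independent of the `mstats` helpers used above): the chance that the sample proportion falls at or below 20% under the normal approximation.

```python
import math

p, n = 0.22, 250
sigma = math.sqrt(p * (1 - p) / n)  # standard deviation of the sample proportion

# P(p_hat <= 0.20) under the normal approximation
z = (0.20 - p) / sigma
prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(round(prob, 3))  # -> 0.223
```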
| github_jupyter |
```
import math
import time
import wikionly #script name is wikionly (no summary), class name is wiki
import re as re
import nltk
# nltk.download('wordnet')
from nltk.corpus import wordnet
import math
#Input two Wikipedia articles to compute similarity percentage
class similar:
def __init__(self,text1,text2,verbose=1):
"""To start, assign var = comparewiki.similar('arg1','arg2', verbose=1).
arg1 and arg2 are names of the wikipedia articles.
verbose=1 prints the probability score and mathematical calculation.
verbose=2 additionally prints array of words for each article
verbose=0 disables any logs.
To get values in a list for storage, use .ans(). To get the 40 common words for comparison, use .words()"""
time01 = time.time()
self.wn = nltk.corpus.wordnet #the corpus reader
self.verbose = verbose # Verbose/log level of detail
time02 = time.time()
print("Initialize NLTK similarity corpus reader: " + str(time02 - time01) + " seconds.")
#Error handling: check if both arguments input are string format
checkstr = False
if isinstance(text1, str) == True:
if isinstance(text2, str) == True:
self.text1 = text1
self.text2 = text2
time03 = time.time()
print("String check of arguments: " + str(time03 - time02) + " seconds.")
checkstr = True
else:
print('Error! The second argument is not a string format!')
else:
print('Error! The first argument is not a string format!')
#Run internal wikipedia python file for processing for both wiki titles
if checkstr == True:
self.wiki1 = wikionly.wiki(text1)
time04 = time.time()
print("Scrape wikipedia for article 1: " + str(round(time04 - time03, 4)) + " seconds.")
self.wiki2 = wikionly.wiki(text2)
time05 = time.time()
print("Scrape wikipedia for article 2: " + str(round(time05 - time04, 4)) + " seconds.")
#Call the function that calculates percentage
self.percent(self.wiki1,self.wiki2,self.verbose)
#call the function that shows list of words for both Wiki sites
#Only can be used if self.percent has been called and list/arrays for articles are created
if self.verbose == 2:
print(self.words())
#Retrieve top 40 common words from wiki page, slice up and append .n01 for NLTK usage
def percent(self,input1,input2,verbose):
time06 = time.time()
self.dotn01 = ('.','n','.','0','1')
self.wiki1list = []
for key in self.wiki1.commonwords(40):
self.wiki1slice = list(key)
for letter in self.dotn01:
self.wiki1slice.append(letter)
self.wiki1slice = ''.join(self.wiki1slice)
self.wiki1list.append(self.wiki1slice)
self.wiki2list = []
for key in self.wiki2.commonwords(40):
self.wiki2slice = list(key)
for letter in self.dotn01:
self.wiki2slice.append(letter)
self.wiki2slice = ''.join(self.wiki2slice)
self.wiki2list.append(self.wiki2slice)
time07 = time.time()
print("Get list of 40 words for both articles: " + str(round(time07 - time06, 4)) + " seconds.")
#count and sum for calculating similarity
self.count = 0
self.sum = 0
#A count for the ranking of the word (how often it appears in both wiki passages)
self.topten1 = 0
self.topten2 = 0
#For words that are 1-10th and 11-21st in popularity, if both wiki pages have the word, they get more points
for word1 in self.wiki1list:
#Reset self.topten2
self.topten2 = 0
self.topten1 += 1
for word2 in self.wiki2list:
self.topten2 += 1
#reinitialize to zero to prevent old sums from going into maxsum
self.sum1 = 0
self.sum2 = 0
self.sum3 = 0
self.sum4 = 0
self.maxsum = 0
if self.topten1 < 11 and self.topten2 < 11:
self.expvalue = 4.5
elif self.topten1 < 21 and self.topten2 < 21:
self.expvalue = 2.5
else:
self.expvalue = 1.5
#Main algorithm for calculating score of words
try:
if re.findall(r"\d+.n.01", word1) == [] and re.findall(r"\d+.n.01", word2) == []: #check both words not numbers
#since words have many meanings, for every pair of words, use top two meanings n.01 and n.02 for comparison
#two for loops will check every permutation pair of words between wiki pages, two meanings for each word,
#Take the max similarity value taken for computation of similarity index
#e.g. money.n.01 may have highest value with value.n.02 because value.n.01 has the obvious meaning of worth/significance and secondary for money
word11 = word1.replace('n.01','n.02')
word22 = word2.replace('n.01','n.02')
#print(word11,word22)
self.x = self.wn.synset(word1)
self.y = self.wn.synset(word2)
#get default similarity value of 1st definitions of word
self.sum1 = self.x.path_similarity(self.y) * math.exp(self.expvalue * self.x.path_similarity(self.y)) + 10 * math.log(0.885+self.x.path_similarity(self.y))
try: #get 2nd definitions of words and their similarity values, if they exist
self.xx = self.wn.synset(word11)
self.yy = self.wn.synset(word22)
self.sum2 = self.xx.path_similarity(self.y) * math.exp(self.expvalue * self.xx.path_similarity(self.y)) + 10 * math.log(0.89+self.xx.path_similarity(self.y))
self.sum3 = self.x.path_similarity(self.yy) * math.exp(self.expvalue * self.x.path_similarity(self.yy)) + 10 * math.log(0.89+self.x.path_similarity(self.yy))
self.sum4 = self.xx.path_similarity(self.yy) * math.exp(self.expvalue * self.xx.path_similarity(self.yy)) + 10 * math.log(0.89+self.xx.path_similarity(self.yy))
except:
continue
self.maxsum = max(self.sum1,self.sum2,self.sum3,self.sum4) #get the max similarity value between 2 words x 2 meanings = 4 comparisons
#print(word1, word2, self.maxsum)
self.sum += self.maxsum
self.count += 1
except:
if word1 == word2 and re.findall(r"\d+.n.01", word1) == []: #remove years/numbers being counted as match yyyy.n.01
self.sum += math.exp(self.expvalue) + 10 * math.log(1.89)
self.count += 1
else:
continue
time08 = time.time()
print("Calculate similarity for both articles: " + str(round(time08 - time07, 4)) + " seconds.")
#Print the results and implement ceiling if the percent exceeds 100% or drops below 0%
if self.count != 0:
self.pct = round(self.sum/self.count*100)
if self.pct > 100:
self.pct = 100
elif self.pct < 0:
self.pct = 0
if self.verbose >= 1:
print('Probability of topics being related is ' + str(self.pct) + '%')
print('Count is ' + str(self.count) + ' and sum is ' + str(self.sum))
else:
if self.verbose >= 1:
print('No relation index can be calculated as words are all foreign')
time09 = time.time()
print("Print output: " + str(time09 - time08) + " seconds.")
return self.pct
#Print out list of common words for both Wiki articles
def words(self):
print(self.wiki1list)
print('\n')
print(self.wiki2list)
#Outputs list of results [Article 1, Article 2, Percentage, Yes/No] that can be put into a dataframe
def ans(self):
self.listans = [self.text1,self.text2,self.pct]
if self.pct > 49:
self.listans.append('Yes')
else:
self.listans.append('No')
if self.verbose == 2:
self.listans.append(self.wiki1list)
self.listans.append(self.wiki2list)
return self.listans
def help(self):
print("To start, assign var = comparewiki.similar('arg1','arg2', verbose=1). arg1 and arg2 are names of the wikipedia articles, while verbose=1 prints the probability score and mathematical calculation. verbose=2 additionally prints array of words for each article, and verbose=0 disables any logs. To get values in a list for storage, use .ans(). To get the 40 common words for comparison, use .words()")
a = similar('Joe Biden','Donald Trump')
b = similar('John F Kennedy','Donald Trump')
b = similar('Tony Blair','Donald Trump')
x = similar('Tony Blair','HP')
z = similar('NP','HP')
trance = similar('Armin van Buuren','Tiesto')
country = similar('Singapore','Japan')
```
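The scoring curve above — `s·exp(k·s) + 10·log(c + s)` applied to each WordNet path similarity `s` — is easier to see in isolation. Below is a standalone sketch of just that weighting (no NLTK required; the constants mirror the ones in the class), illustrating why identical top-10 words dominate the score:

```python
import math

def pair_score(similarity, expvalue, offset=0.885):
    # Boost high similarities exponentially; the log term rewards any overlap
    return (similarity * math.exp(expvalue * similarity)
            + 10 * math.log(offset + similarity))

# Identical words (similarity 1.0) ranked in the top 10 outweigh both
# identical words ranked 11-20 and weak matches anywhere
top10_exact = pair_score(1.0, 4.5)
top20_exact = pair_score(1.0, 2.5)
weak_match = pair_score(0.1, 4.5)

print(top10_exact > top20_exact > weak_match)  # -> True
```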
```
!pip install gdown
!gdown https://drive.google.com/uc?id=1TQv6oGf3uySrXGkB4iT__4wgycVadH8F
!gdown https://drive.google.com/uc?id=12-zJnHZaRNlHweeBOk0t2yHbkyvFRsf1
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#some libraries raise FutureWarnings ahead of upcoming releases; this code prevents them from being shown
```
The original Amazon Musical Instruments dataset contains around 230K reviews and ratings. However, training the model with this amount takes a lot of time, so we previously prepared the data to be smaller and highly balanced. **Handling imbalanced data is an advanced topic; if you are interested, [you can follow this paper.](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5128907)** The script with the data preparation steps will be shared with participants after the workshop.
```
import pandas as pd
df_train = pd.read_csv(r"train.csv",index_col=[0])
# Report the number of sentences.
print('Number of training sentences: {:,}\n'.format(df_train.shape[0]))
print()
print(df_train["label"].value_counts())
df_test = pd.read_csv(r"test.csv",index_col=[0])
# Report the number of sentences.
print()
print('Number of test sentences: {:,}\n'.format(df_test.shape[0]))
print()
print(df_test["label"].value_counts())
```
**Classes:**
0. Negative
1. Neutral
2. Positive
```
df_train.head()
df_test.head()
```
## TEXT PREPROCESSING
```
df_train.review[100]
import numpy as np
import nltk
import string as s
import re
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
lemmatizer=nltk.stem.WordNetLemmatizer()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from wordcloud import WordCloud
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score,accuracy_score
from sklearn.metrics import confusion_matrix,classification_report
```
Lemmatization is the process of converting a word to its base form. The difference between stemming and lemmatization is, lemmatization considers the context and converts the word to its meaningful base form, whereas stemming just removes the last few characters, often leading to incorrect meanings and spelling errors.
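A toy illustration of the difference (pure Python; the notebook itself uses NLTK's `WordNetLemmatizer`, and this tiny lookup table only mimics the idea): a crude suffix-stripping stemmer can produce non-words, while a dictionary-backed lemmatizer maps words to real base forms.

```python
def crude_stem(word):
    # Naive suffix stripping: fast, but can produce non-words
    for suffix in ("ies", "ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# A miniature stand-in for a lemma dictionary (WordNet plays this role in NLTK)
TOY_LEMMAS = {"studies": "study", "better": "good", "strings": "string"}

def toy_lemmatize(word):
    return TOY_LEMMAS.get(word, word)

for w in ("studies", "strings"):
    print(w, "->", crude_stem(w), "vs", toy_lemmatize(w))
# studies -> stud vs study
# strings -> string vs string
```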
```
def preprocess(text,remove_stop_punc=False):
text=text.lower()
text=text.replace("\n"," ")
#removing URL
text = re.sub(r'https?:\/\/.*[\r\n]*', '', text)
text = re.sub(r'http?:\/\/.*[\r\n]*', '', text)
#Replace &amp;, &lt;, &gt; with "and", <, > respectively
text=text.replace(r'&amp;',r'and')
text=text.replace(r'&lt;',r'<')
text=text.replace(r'&gt;',r'>')
#remove hashtags
text=re.sub(r"#[A-Za-z0-9]+","",text)
#remove \
text=re.sub(r"\\ "," ",text)
#remove punctuations and stop words
stop_words=stopwords.words('english')
tokens=nltk.word_tokenize(text)
if remove_stop_punc:
tokens_new=[i for i in tokens if not i in stop_words and i.isalpha()] #isalpha() method returns True if all the characters are alphabet letters
else:
tokens_new=tokens
#tokens_new=[lemmatizer.lemmatize(i) for i in tokens_new]
#remove excess whitespace
text= ' '.join(tokens_new)
return text
df_train["review"]=df_train["review"].apply(preprocess,remove_stop_punc=False)
df_test["review"]=df_test["review"].apply(preprocess,remove_stop_punc=False)
#Remove reviews which have no word in them
df_train["Text_length"] = [len(text.split(' ')) for text in df_train.review]
df_train = df_train[df_train["Text_length"]>1]
#Remove reviews which have no word in them
df_test["Text_length"] = [len(text.split(' ')) for text in df_test.review]
df_test = df_test[df_test["Text_length"]>1]
df_train.review[100]
```
##### **BEFORE PREPROCESSING:**
> I ordered two of these cables in April. Today, on September 30, both are not operational. PVC for connectors is a dumb idea. Both cables have issues with the straight connector, if I move and hold the connector in an offset position, I can get a signal from the amp. Other than that, no connection at all. These cables were a total waste of money. The 30 day return policy is BS. I should be entitled to a refund. Amazon should take responsibility because I bought the cable because touted these "best seller,". Bestseller or not, these cables are cheap pieces of garbage.
# FEATURE EXTRACTION WITH TF-IDF AND MODELLING WITH MULTINOMIAL NB
```
texts = df_train.review
labels = df_train.label
from sklearn.model_selection import train_test_split
train_x, valid_x, train_y, valid_y = train_test_split(texts, labels, random_state=42, test_size=0.2)
test_x=df_test.review
test_y=df_test.label
stop_words=stopwords.words('english')
tfidf=TfidfVectorizer(max_df=0.9,min_df=10) #actually, we already discard the stop words in preprocessing
train_1=tfidf.fit_transform(train_x)
valid_1=tfidf.transform(valid_x)
test_1=tfidf.transform(test_x)
print("No. of features extracted")
print(len(tfidf.get_feature_names()))
print(tfidf.get_feature_names()[:20])
train_arr=train_1.toarray()
valid_arr=valid_1.toarray()
test_arr=test_1.toarray()
```
# MODELLING
The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work. https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html
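To make the underlying math concrete, here is a minimal from-scratch multinomial NB on toy token counts — log prior plus Laplace-smoothed log likelihoods. This is a sketch of the idea only, not a substitute for sklearn's implementation, and the toy documents below are invented for illustration:

```python
import math
from collections import Counter

# Toy training data: (tokens, label)
docs = [
    (["great", "sound", "great"], "pos"),
    (["love", "this", "cable"], "pos"),
    (["broke", "waste", "money"], "neg"),
    (["cheap", "broke", "bad"], "neg"),
]

vocab = {w for tokens, _ in docs for w in tokens}
labels = {label for _, label in docs}

# Per-class word counts and document counts
word_counts = {c: Counter() for c in labels}
doc_counts = Counter()
for tokens, label in docs:
    word_counts[label].update(tokens)
    doc_counts[label] += 1

def predict(tokens, alpha=1.0):
    best, best_lp = None, -math.inf
    for c in labels:
        total = sum(word_counts[c].values())
        lp = math.log(doc_counts[c] / len(docs))  # log prior
        for w in tokens:
            # Laplace-smoothed log likelihood
            lp += math.log((word_counts[c][w] + alpha) / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(predict(["great", "cable"]))  # -> pos
print(predict(["broke", "cheap"]))  # -> neg
```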
```
nb=MultinomialNB()
nb.fit(train_arr,train_y)
pred=nb.predict(valid_arr)
from sklearn.metrics import accuracy_score,confusion_matrix
print("\nAccuracy of TF-IDF and Multinomial NB is:",accuracy_score(valid_y, pred))
print(classification_report(valid_y, pred))
```
# TESTING THE MODEL
```
test_pred=nb.predict(test_arr)
from sklearn.metrics import accuracy_score,confusion_matrix
print("\nAccuracy of TF-IDF and Multinomial NB is:",accuracy_score(test_y, test_pred))
print(classification_report(test_y, test_pred))
cm = confusion_matrix(test_y, test_pred)
labels = ['Negative', 'Neutral', 'Positive']
from mlxtend.plotting import plot_confusion_matrix
plt.figure()
plot_confusion_matrix(cm, figsize=(8,8), hide_ticks=True, cmap=plt.cm.Blues)
plt.xticks(range(3), labels, fontsize=12)
plt.yticks(range(3), labels, fontsize=12)
plt.show()
```
# Deploy Detection Model
This notebook provides a basic introduction to deploying a trained model as either an ACI or AKS webservice with AML, leveraging the azure_utils and tfod_utils packages in this repo.
Before executing the code, please ensure you have a completed experiment with a trained model, using either the scripts in src or the train model notebooks.
Note that this notebook makes use of additional files in this repo:
- utils - contains mapping functions for the classes that are needed for the deployment image
- conda_env.yml - contains the environment and packages required
- score.py - the base scoring file that is used to create the model_score.py used in the deployment
```
import os
import sys
from azure_utils.azure import load_config
from azure_utils.deployment import AMLDeploy
```
## 1. Define Run Parameters
```
# Run params
ENV_CONFIG_FILE = "dev_config.json"
EXPERIMENT = "pothole"
RUN_ID = "pothole_1629819580_7ce6a2e8"
IMAGE_TYPE = "testpotholeservice"
COMPUTE_TARGET_NAME = "testdeployment"
MODEL_NAME = "testpotholeservice"
WEBSERVICE_NAME = MODEL_NAME.lower().replace("_", '')
```
## 2. Initialise Deployment Class
```
deployment = AMLDeploy(RUN_ID,
EXPERIMENT,
WEBSERVICE_NAME,
MODEL_NAME,
IMAGE_TYPE,
config_file=ENV_CONFIG_FILE)
```
## 3. Register Model from Experiment
```
model = deployment.register_run_model()
```
## 4. Set Scoring Script
The base score file is available in the src dir; variations can be created as needed. At deployment, the model name will be updated to create the final deploy script.
We also set the src dir to the deployment src folder so that at deployment we can access the utils.
```
src_dir = os.path.join('..', 'src', 'deployment')
score_file = os.path.join(src_dir, 'score_tf2.py')
env_file = './conda_env_tf2.yml'
```
## 5. Create Inference Config
```
inference_config = deployment.create_inference_config(score_file, src_dir, env_file)
```
## 6. Check if a webservice exists with the same name
Checks if there is a webservice with the same name. If it returns true, you can either skip the next two cells and update that service, or change the service name to deploy a new webservice.
```
deployment.webservice_exists(deployment.webservice_name)
```
## 7. Deploy ACI
Deploy the model to an ACI endpoint. This targets a CPU instance rather than GPU and is used for testing purposes only.
```
target, config = deployment.create_aci()
deployment.deploy_new_webservice(model,
inference_config,
config,
target)
```
## 8. Deploy AKS
```
deployment.webservice_name = deployment.webservice_name + "-aks"
target, config = deployment.create_aks(COMPUTE_TARGET_NAME, exists=False)
deployment.deploy_new_webservice(model,
inference_config,
config,
target)
```
## 9. Update Existing Webservice
```
deployment.update_existing_webservice(model, inference_config)
```
## ARK Fund Analysis
- <a href=#Stock/fund-breakdown>Stock/fund breakdown</a>
- <a href=#Current-fund-holdings>Current fund holdings</a>
- <a href=#Change-in-value-during-past-two-sessions>Change in value during past two sessions</a>
- <a href=#Change-in-holdings-during-past-two-sessions>Change in holdings during past two sessions</a>
- <a href=#Change-in-holdings-during-past-week>Change in holdings during past week</a>
- <a href=#Change-in-holdings-during-past-month>Change in holdings during past month</a>
- <a href=#Change-in-holdings-during-past-quarter>Change in holdings during past quarter</a>
- <a href=#Change-in-holdings-during-past-half-year>Change in holdings during past half year</a>
- <a href=#Change-in-holdings-during-past-year>Change in holdings during past year</a>
- <a href=#Share-price-and-estimated-capital-flows>Share price and estimated capital flows</a>
- <a href=#Share-change-graphs>Share change graphs</a>
- <a href=#License>License</a>
```
import IPython
from automation import download_fund_holdings_data, download_fund_daily_price_data
download_fund_daily_price_data()
download_fund_holdings_data()
IPython.display.clear_output()
from ark_fund_analysis import *
from IPython.display import display, HTML, Javascript
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
display(HTML(f'<p><strong>Last updated</strong></p><p>{datetime.now().astimezone(utc).strftime(time_format)}</p>'))
```
**Disclaimer**
This project was created solely for educational purposes. The information contained in or generated by it could be inaccurate or incomplete. The author is not affiliated with ARK Invest and assumes no responsibility for the financial decisions of others.
**Known issues**
- Adjusting for stock splits: Only the most recent stock split for each stock is taken into account, due to free tier limits on the IEX API. In the case of multiple splits of a stock during a given time period, this may lead to inaccurate results in the "Change in holdings" tables. However, such cases are rare.
- There may be slight inaccuracies in the capital flow calculations around the end of the year due to dividend payments.
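Share-count comparisons across a stock split only make sense after rescaling. A minimal sketch of the adjustment the split note above refers to, with hypothetical numbers (a 4-for-1 split multiplies pre-split share counts by 4 so they are comparable with post-split counts):

```python
def adjust_for_split(shares, split_ratio):
    # Rescale a pre-split share count into post-split units,
    # e.g. 100 shares before a 4-for-1 split correspond to 400 after
    return shares * split_ratio

pre_split_holding = 100   # hypothetical shares held before the split
post_split_holding = 410  # hypothetical shares held after the split

# Without adjustment the position looks like it quadrupled; adjusted,
# the fund only added 10 (post-split) shares
change = post_split_holding - adjust_for_split(pre_split_holding, 4)
print(change)  # -> 10
```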
To see how this page was generated, or to contribute to the project, check out the corresponding [repository](https://github.com/depasqualeorg/ark_fund_analysis). The repository does not include historical data, but some historical data is available [here](https://github.com/tigger0jk/ark-invest-scraper).
Contact [anthony@depasquale.org](mailto:anthony@depasquale.org) for more information.
### Stock/fund breakdown
```
display(fund_holdings)
```
### Current fund holdings
```
for fund in funds:
if 'holdings_by_weight' in funds[fund]:
date = str(funds[fund]['latest_date_from_data'])
display(HTML(f'<br><h5>{fund.upper()} ({date})</h5>'))
display(funds[fund]['holdings_by_weight'])
from random import random, shuffle
with open('colors.json') as file:
colors = json.load(file)
for fund in funds:
shuffle(colors)
colors_flat = [color for palette in colors for color in palette] # Flatten array
# shuffle(colors_flat)
map_csv_to_df(funds[fund]['csv_files'][-1]).set_index('company').plot.pie(title=f'{fund.upper()}', y='weight (%)', startangle=180, counterclock=False, rotatelabels=True, legend=False, figsize=(12, 12), colors=colors_flat, normalize=True).axis('off')
```
### Change in value during past two sessions
```
for fund in funds:
if 'change_in_value_past_two_sessions' in funds[fund]:
date = str(funds[fund]['latest_date_from_data'])
display(HTML(f'<br><h5>{fund.upper()} ({date})</h5>'))
display(funds[fund]['change_in_value_past_two_sessions'])
change_in_value_colors = list(funds[fund]['change_in_value_past_two_sessions_df']['color'].drop('Total'))
funds[fund]['change_in_value_past_two_sessions_df'].drop('Total').set_index('company_x').plot.pie(title=f'{fund.upper()}', y='contribution_abs', startangle=90, counterclock=False, rotatelabels=True, legend=False, figsize=(12, 12), colors=change_in_value_colors, normalize=True).axis('off')
```
### Change in holdings during past two sessions
```
for fund in funds:
if 'change_in_holdings_past_two_sessions' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_two_sessions'])
```
### Change in holdings during past week
```
for fund in funds:
if 'change_in_holdings_past_week' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_week'])
```
### Change in holdings during past month
```
for fund in funds:
if 'change_in_holdings_past_month' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_month'])
```
### Change in holdings during past quarter
```
for fund in funds:
if 'change_in_holdings_past_quarter' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_quarter'])
```
### Change in holdings during past half year
```
for fund in funds:
if 'change_in_holdings_past_half_year' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_half_year'])
```
### Change in holdings during past year
```
for fund in funds:
if 'change_in_holdings_past_year' in funds[fund]:
display(HTML(f'<br><h5>{fund.upper()}</h5>'))
display(funds[fund]['change_in_holdings_past_year'])
```
### Share price and estimated capital flows
```
for fund in funds:
if fund == 'arkx':
# The graph for ARKX looks better starting a couple weeks after inception
start_date = datetime(2021, 4, 14).date()
else:
start_date = None
plot_share_price_and_estimated_capital_flows(fund, start_date)
```
### Share change graphs
```
for fund in funds:
show_share_change_graph(fund)
# Text formatting for batch adding to Yahoo Finance portfolios
# def format_list(list):
# list_copy = list.copy()
# list_copy.reverse()
# return ', '.join(list_copy).upper()
# for fund in funds:
# print(format_list(list(funds[fund]['companies_df'].index)) + '\n')
```
### License
```
display(HTML('''<p>MIT License</p>
<p>Copyright (c) {year} Anthony DePasquale (<a href="mailto:anthony@depasquale.org">anthony@depasquale.org</a>)</p>
<p>Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:</p>
<p>The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.</p>
<p>THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.</p>'''.format(year=datetime.now().year)))
```
```
import test_module as test # import the module from test_module.py on the desktop
radius = test.num_input() # use num_input() defined in the test module (self-written)
print(test.get_circum(radius))
print(test.get_circle_area(radius))
__name__ # check whether this file is running as the main module
if __name__ == '__main__':
print("get_circum(10)")
print("get_circle_area(10)")
#seems not to be read correctly because this is an .ipynb
from urllib import *
target = request.urlopen("http://www.hanbit.co.kr/images/common/logo_hanbit.png")
output = target.read()
print(output)
file = open("outpng.png", 'wb')
file.write(output)
file.close()
#Classes
#declare the student list
create_student=( "윤인성", 87, 98, 88, 95)
students = [
{"name": "윤인성", "korean": 87, "math": 98, "english": 88, "science": 95},
{"name": "김가연", "korean": 92, "math": 98, "english": 96, "science": 98},
{"name": "구지연", "korean": 76, "math": 96, "english": 94, "science": 90},
{"name": "나선주", "korean": 98, "math": 92, "english": 96, "science": 92},
{"name": "윤아린", "korean": 95, "math": 98, "english": 98, "science": 98},
{"name": "윤명월", "korean": 64, "math": 64, "english": 92, "science": 92}
]
#loop over the students one at a time
print("이름", "총점", "평균", sep="\t")
for student in students:
#compute the sum and average of the scores
score_sum = student["korean"] + student["math"] +\
student["english"] + student["science"]
score_average = score_sum / 4
#print the results
print(student["name"], score_sum, score_average, sep="\t")
#Objects
#declare a function that returns a dictionary
def create_student(name,korean,math,english,science):
return {
"name": name,
"korean": korean,
"math": math,
"english": english,
"science": science
}
#declare functions that process a student
def student_get_sum(student):
return student["korean"] + student["math"] +\
student["english"] + student["science"]
def student_get_average(student):
return student_get_sum(student) / 4
#combine the sum and average into a single string
def student_to_string(student):
return "{}\t{}\t{}".format(
student["name"],
student_get_sum(student),
student_get_average(student))
#declare the student list
students = [
create_student( "윤인성", 87, 98, 88, 95),
create_student( "김가연", 92, 98, 96, 98),
create_student( "구지연", 76, 96, 94, 90),
create_student( "나선주", 98, 92, 96, 92),
create_student( "윤아린", 95, 98, 98, 98),
create_student( "윤명월", 64, 88, 92, 92)
]
#loop over the students one at a time
print("이름", "총점", "평균", sep="\t")
for student in students:
#print the results
print(student_to_string(student))
#Class syntax:
#class ClassName:
#    class body
#instance_name = ClassName()
class student:
#with __init__(self, ...), each argument can be stored one by one as self.<name> = <value>
def __init__(self, name, korean, math, english, science):
self.name = name
self.korean = korean
self.math = math
self.english = english
self.science = science
#put the students into a list
students = [
student( "윤인성", 87, 98, 88, 95),
student( "김가연", 92, 98, 96, 98),
student( "구지연", 76, 96, 94, 90),
student( "나선주", 98, 92, 96, 92),
student( "윤아린", 95, 98, 98, 98),
student( "윤명월", 64, 88, 92, 92)
]
# how to access the attributes of a student instance
print(students[0].name)
print(students[0].korean)
print(students[0].math)
print(students[0].english)
print(students[0].science)
# methods
class Student:
    def __init__(self, name, korean, math, english, science):
        self.name = name
        self.korean = korean
        self.math = math
        self.english = english
        self.science = science
    def get_sum(self):
        return self.korean + self.math +\
            self.english + self.science
    def get_average(self):
        return self.get_sum() / 4
    def to_string(self):
        return "{}\t{}\t{}".format(
            self.name,
            self.get_sum(),
            self.get_average())
students = [
    Student("윤인성", 87, 98, 88, 95),
    Student("김가연", 92, 98, 96, 98),
    Student("구지연", 76, 96, 94, 90),
    Student("나선주", 98, 92, 96, 92),
    Student("윤아린", 95, 98, 98, 98),
    Student("윤명월", 64, 88, 92, 92)
]
print("name", "total", "average", sep="\t")
for student in students:
    print(student.to_string())
```
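As a natural extension of the class above (a sketch, not part of the original lesson), Python's special `__str__` method lets `print` format an object directly, so a custom `to_string` method is no longer needed:

```python
# Sketch: defining __str__ makes print(student) work directly.
class Student:
    def __init__(self, name, korean, math, english, science):
        self.name = name
        self.scores = [korean, math, english, science]

    def get_sum(self):
        return sum(self.scores)

    def get_average(self):
        return self.get_sum() / 4

    def __str__(self):
        # print(obj) calls str(obj), which calls this method
        return "{}\t{}\t{}".format(self.name, self.get_sum(), self.get_average())

s = Student("윤인성", 87, 98, 88, 95)
print(s)
```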
---
# Use Case 8: Outliers
When looking at data, we often want to identify outliers, extremely high or low data points. In this use case we will show you how to use the Blacksheep package to find these in the CPTAC data. For more detailed information about the Blacksheep package see [this](https://github.com/ruggleslab/blackSheep/) repository.
In the CPTAC breast cancer study ([here](https://www.nature.com/articles/nature18003)) it was shown that tumors classified as HER-2 enriched are frequently outliers for high abundance of ERBB2 phosphorylation, protein and mRNA (see [figure 4](https://www.nature.com/articles/nature18003/figures/4) of the manuscript). In this use case we will show that same phenomena in an independent cohort of breast cancer tumors, whose data are included in the cptac package.
## Step 1: Importing packages and setting up your notebook
Before we begin performing the analysis, we must import the packages we will be using. In this first code block, we import the standard set of data science packages.
We will need an external package called blacksheep. To install it run the following on your command line:
```
pip install blksheep
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
In this next code block we import the blacksheep and cptac packages and grab our proteomic and clinical data.
```
import blacksheep
import cptac
brca = cptac.Brca()
clinical = brca.get_clinical()
proteomics = brca.get_proteomics()
```
## Step 2: Binarize Data
The Blacksheep package requires that annotations are a binary variable. Our cptac tumors are divided into 4 subtypes: LumA, LumB, Basal, and Her2. We will use the binarize_annotations function to create a binary table of these PAM50 tumor classifications. We will call this table 'annotations'.
```
annotations = clinical[['PAM50']].copy()
annotations = blacksheep.binarize_annotations(annotations)
annotations.head()
```
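Conceptually, binarization turns a single categorical column into one membership column per category. A toy sketch with plain pandas (the data here are made up, and `get_dummies` is only a stand-in for `blacksheep.binarize_annotations`, which produces its own column format):

```python
import pandas as pd

# Toy annotation column with one category per sample
toy = pd.DataFrame({'PAM50': ['LumA', 'Her2', 'Basal', 'Her2']})

# One column per category; each row marks membership in that category
binary = pd.get_dummies(toy['PAM50'])
print(binary)
```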
## Step 3: Perform Outlier Analysis
Now that our dataframes are correctly formatted, we will start looking for outliers.
We will start by using the deva function found in the blacksheep package. This function takes the proteomics data frame (which we transpose to fit the requirements of the function), and the annotations data frame that includes the binarized columns. We also indicate that we want to look for up regulated genes, and that we do not want to aggregate the data. The function returns two things:
1. A data object with a dataframe which states whether a sample is an outlier for a specific protein. In the code block below we named this 'outliers'
2. A data object with a dataframe with the Q Values showing if a gene shows an enrichment in outliers for a specific subset of tumors as defined in annotations. In the code block below, we named this 'qvalues'.
```
outliers, qvalues = blacksheep.deva(proteomics.transpose(),
annotations,
up_or_down='up',
aggregate=False,
frac_filter=0.3)
```
## Step 4: Inspect Results
Because these two tables that are returned are quite complex, we will now look at each of these individually.
The outliers table indicates whether each sample is an outlier for a particular gene. In this use case, we will focus on ERBB2. The first line below simplifies the index for each row by dropping the database id and leaving the gene name. We also only print off a portion of the table for brevity.
```
outliers.df.index = outliers.df.index.droplevel('Database_ID')
erbb2_outliers = outliers.df[outliers.df.index.str.match('ERBB2')]
erbb2_outliers.iloc[:, :8]
```
In the chart above you can see that most of the samples have a 0, indicating that the sample is not an outlier for ERBB2 protein abundance. X01BR017, however, has a 1, indicating that that particular sample is an outlier.
The outliers table contains boolean columns for both outliers and notOutliers. The notOutliers columns are redundant, so we will remove them.
```
erbb2_outliers = erbb2_outliers.loc[:,~erbb2_outliers.columns.str.endswith('_notOutliers')]
```
We can now compile a list of all the samples that were considered to be outliers.
```
outlier_list = erbb2_outliers.columns[erbb2_outliers.isin([1.0]).all()].tolist()
print(outlier_list)
```
## Step 5: Visualizing Outliers
To understand what this means, we will plot the proteomics data for the ERBB2 gene and label the outlier samples. Before we graph the result we will join the proteomics and clinical data, isolating the PAM50 subtype and ERBB2.
```
combined_data = brca.join_metadata_to_omics(metadata_df_name="clinical",
omics_df_name="proteomics",
metadata_cols=["PAM50"],
omics_genes=['ERBB2'])
combined_data.columns = combined_data.columns.droplevel("Database_ID")
```
We will now create the graph.
```
plt.figure(figsize=(8, 8))
sns.set_palette('colorblind')
ax = sns.boxplot(data=combined_data, showfliers=False, y='ERBB2_proteomics', color='lightgray')
left = False
# This for loop labels all the specific outlier data points.
for sample in outlier_list:
    if left:
        position = -0.08
        left = False
    else:
        position = 0.01
        left = True
    sample = sample.split("_")[0]
    ax.annotate(sample, (position, combined_data.transpose()[sample].values[1]))
ax = sns.swarmplot(data=combined_data, y='ERBB2_proteomics')
plt.show()
```
As you can see from this graph, the samples we labeled, which had a 1.0 in the outliers table, all sit at the top of the graph, indicating that ERBB2 is very highly expressed in them.
## Step 6: Looking at the Qvalue table
Let's now take a look at the qvalues table. Remember that the qvalues table indicates whether a gene shows a significant enrichment in outliers for the categories defined in our annotations dataframe.
```
qvalues.df.head()
```
This table includes all the q-values. Before really analyzing the table we will want to remove any insignificant q-values. For our purposes we will remove any q-values that are greater than 0.05.
```
for col in qvalues.df.columns:
    qvalues.df.loc[qvalues.df[col] > 0.05, col] = np.nan
```
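The same thresholding can also be written without an explicit loop; a small sketch on a toy dataframe using `DataFrame.where` (the toy q-values below are made up):

```python
import numpy as np
import pandas as pd

# Toy q-value table; where() keeps values satisfying the condition
# and replaces everything else with NaN in a single step.
toy_q = pd.DataFrame({'Her2': [0.001, 0.20], 'LumA': [0.60, 0.03]})
filtered = toy_q.where(toy_q <= 0.05)
print(filtered)
```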
We will now isolate the ERBB2 gene.
```
qvalues.df.index = qvalues.df.index.droplevel('Database_ID')
qvalues = qvalues.df[qvalues.df.index.str.match('ERBB2')]
erbb2_qvalues = qvalues.reset_index()['Name'] == 'ERBB2'
qvalues = qvalues.reset_index()[erbb2_qvalues]
qvalues.head()
```
Here we see that the only PAM50 subtype with a significant enrichment is Her2, which is exactly what we expected. To visualize this pattern, we will create a graph similar to the one above, but with each category in the PAM50 column differentially colored.
```
plt.figure(figsize=(8, 8))
sns.set_palette('colorblind')
cols = {'Basal': 0, 'Her2':1, 'LumA':2, 'LumB':3, 'Normal-like':4}
ax = sns.boxplot(data=combined_data, y='ERBB2_proteomics', x='PAM50', color='lightgray')
ax = sns.swarmplot(data=combined_data, y='ERBB2_proteomics',x='PAM50', hue='PAM50')
for sample in outlier_list:
    sample = sample.split("_")[0]
    ax.annotate(sample, (cols[combined_data.transpose()[sample].values[0]], combined_data.transpose()[sample].values[1]))
plt.show()
```
Looking at the graph, you can see that the distribution of the Her2 category is quite different from the distributions of the other categories. The median of the proteomic data in the Her2 category is much higher than in the other categories, with many more data points in the upper portion of the graph.
## Additional Applications
We have just walked through one example of how you might use the outlier analysis. Using this same approach, you can run the outlier analysis on a number of different clinical attributes, cohorts, and omics data. For example, you may look for outliers within the transcriptomics of the Endometrial cancer type using the clinical attribute Histological_type. You can also look at more than one clinical attribute at a time by appending more attributes to your annotations table, or you can look for downregulated omics by changing the 'up_or_down' argument of the deva function.
---
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import sys
# sample source
n = 3000
dim = 10
seed = 17
#ss = data.SSGaussMeanDiff(dim, my=1)
#ss = data.SSGaussVarDiff(dim)
#ss = data.SSSameGauss(dim)
ss = data.SSBlobs()
dim = ss.dim()
tst_data = ss.sample(n, seed=seed)
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=10)
```
## grid search for Gaussian width. Random test features
```
J = 5
alpha = 0.01
T = tst.MeanEmbeddingTest.init_locs_2randn(tr, J, seed=seed+1)
#T = np.random.randn(J, dim)
med = util.meddistance(tr.stack_xy(), 1000)
list_gwidth = np.hstack( ( (med**2) *(2.0**np.linspace(-5, 5, 40) ) ) )
list_gwidth.sort()
besti, powers = tst.MeanEmbeddingTest.grid_search_gwidth(tr, T, list_gwidth, alpha)
# plot
plt.plot(list_gwidth, powers, 'o-')
plt.xscale('log', basex=2)
plt.xlabel('Gaussian width')
plt.ylabel('Test power')
plt.title('Median dist: %.3g. Best gwidth2**0.5: %.3g'%(med, list_gwidth[besti]**0.5) )
# test with the best Gaussian width
best_width = list_gwidth[besti]
met_grid = tst.MeanEmbeddingTest(T, best_width, alpha)
met_grid.perform_test(te)
```
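`util.meddistance` above implements the standard median heuristic for choosing a Gaussian kernel width. A minimal NumPy version for reference (a sketch, not the package's exact implementation):

```python
import numpy as np

def median_distance(X):
    # Median of pairwise Euclidean distances between the rows of X.
    # Squared distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    upper = d2[np.triu_indices_from(d2, k=1)]  # each pair counted once
    # median is monotone-invariant, so we can take sqrt after the median
    return np.sqrt(np.median(upper))

X = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 0.0]])
print(median_distance(X))  # 5.0
```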
## optimize the test locations and Gaussian width
```
op = {'n_test_locs': J, 'seed': seed+5, 'max_iter': 200,
'batch_proportion': 1.0, 'locs_step_size': 1.0,
'gwidth_step_size': 0.1, 'tol_fun': 1e-4}
# optimize on the training set
test_locs, gwidth, info = tst.MeanEmbeddingTest.optimize_locs_width(tr, alpha, **op)
# Plot evolution of the test locations, Gaussian width
# trajectories of the Gaussian width
gwidths = info['gwidths']
fig, axs = plt.subplots(2, 2, figsize=(10, 9))
axs[0, 0].plot(gwidths)
axs[0, 0].set_xlabel('iteration')
axs[0, 0].set_ylabel('Gaussian width')
axs[0, 0].set_title('Gaussian width evolution')
# evolution of objective values
objs = info['obj_values']
axs[0, 1].plot(objs)
axs[0, 1].set_title(r'Objective $\lambda(T)$')
# trajectories of the test locations
# iters x J. X Coordinates of all test locations
locs = info['test_locs']
for coord in [0, 1]:
    locs_d0 = locs[:, :, coord]
    J = locs_d0.shape[1]
    axs[1, coord].plot(locs_d0)
    axs[1, coord].set_xlabel('iteration')
    axs[1, coord].set_ylabel('index %d of test_locs' % (coord))
    axs[1, coord].set_title('evolution of %d test locations' % J)
print('optimized width: %.3f'%gwidth)
# test with the best optimized test features, and optimized Gaussian width
met_opt = tst.MeanEmbeddingTest(test_locs, gwidth, alpha)
met_opt.perform_test(te)
```
---
```
import dense_correspondence_manipulation.utils.utils as utils
utils.add_dense_correspondence_to_python_path()
from dense_correspondence.training.training import *
import sys
import logging
# utils.set_default_cuda_visible_devices()
utils.set_cuda_visible_devices([0]) # use this to manually set CUDA_VISIBLE_DEVICES
from dense_correspondence.training.training import DenseCorrespondenceTraining
from dense_correspondence.dataset.spartan_dataset_masked import SpartanDataset
logging.basicConfig(level=logging.INFO)
config_filename = os.path.join(utils.getDenseCorrespondenceSourceDir(), 'config', 'dense_correspondence',
'dataset', 'composite', 'caterpillar_only.yaml')
config = utils.getDictFromYamlFilename(config_filename)
train_config_file = os.path.join(utils.getDenseCorrespondenceSourceDir(), 'config', 'dense_correspondence',
'training', 'training.yaml')
train_config = utils.getDictFromYamlFilename(train_config_file)
dataset = SpartanDataset(config=config)
logging_dir = "trained_models/test"
num_iterations_thousands = 15
num_iterations = int(num_iterations_thousands*1e3)
num_image_pairs = 100
TRAIN = True
EVALUATE = True
descriptor_dim = [3,]
M_background_list = [0.5]
for M_background in M_background_list:
    for d in descriptor_dim:
        print "d:", d
        print "M_background:", M_background
        print "training descriptor of dimension %d" % (d)
        train = DenseCorrespondenceTraining(dataset=dataset, config=train_config)
        train_config = utils.getDictFromYamlFilename(train_config_file)
        name = "caterpillar_M_background_%.3f_%s_Resnet101_%dK" % (M_background, d, num_iterations_thousands)
        train._config["training"]["logging_dir"] = logging_dir
        train._config["training"]["logging_dir_name"] = name
        train._config["dense_correspondence_network"]["descriptor_dimension"] = d
        train._config["loss_function"]["M_background"] = M_background
        train._config["training"]["num_iterations"] = num_iterations
        train._config['training']['learning_rate_decay'] = 0.5
        train._config['training']['steps_between_learning_rate_decay'] = 3000
        train._config['dense_correspondence_network']["backbone"] = "Resnet101_8s"
        train.build_network()
        # raise ValueError("test")  # debugging stop: uncomment to halt after build_network()
        if TRAIN:
            train.run()
            print "finished training descriptor of dimension %d" % (d)
        model_folder = os.path.join(logging_dir, name)
        model_folder = utils.convert_to_absolute_path(model_folder)
        if EVALUATE:
            DCE = DenseCorrespondenceEvaluation
            DCE.run_evaluation_on_network(model_folder, num_image_pairs=num_image_pairs)
```
## Evaluate network every 1K training steps
```
for i in xrange(1, num_iterations_thousands):
    iteration = int(i * 1e3)
    save_folder_name = "analysis_" + str(iteration)
    print "iteration", iteration
    DCE = DenseCorrespondenceEvaluation
    DCE.run_evaluation_on_network(model_folder, num_image_pairs=num_image_pairs,
                                  iteration=iteration,
                                  save_folder_name=save_folder_name)
```
---
```
import pandas as pd
import numpy as np
import tensorflow as tf
from tfrecorder import TFrecorder
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
%pylab inline
```
# data
```
# Load training and eval data
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
```
# how to write
```
# info of data
df = pd.DataFrame({'name':['image','label','image_shape'],
'type':['float32','int64','int64'],
'shape':[(784,),(),(3,)],
'isbyte':[True,False,False],
"length_type":['fixed','fixed','fixed'],
"default":[np.NaN,np.NaN,np.NaN]})
'''
Note:
1. if the shape of a particular feature varies across examples in your dataset, you must set "length_type" to 'var'
   example: the 'image' feature of the first example is 28x28, but that of the second example is 64x64
2. 'image_shape' decides the final shape of 'image'
   example: the shape of 'image' is (784,) when you write the tfrecord file, but the final shape after parsing is given by the value of 'image_shape'
3. by default, '_shape' info will not be returned after parsing;
   if you want to keep '_shape' info, please use: tfr = TFrecorder(retrieve_shape=True)
'''
df
```
# write
```
tfr = TFrecorder()
!rm -rf mnist_tfrecord/train mnist_tfrecord/test
!mkdir -p mnist_tfrecord/train mnist_tfrecord/test
dataset = mnist.train
path = 'mnist_tfrecord/train/train'
num_examples_per_file = 10000
num_so_far = 0
writer = tf.python_io.TFRecordWriter('%s%s_%s.tfrecord' %(path, num_so_far, num_examples_per_file))
print('saved %s%s_%s.tfrecord' %(path, num_so_far, num_examples_per_file))
# write multiple examples
for i in np.arange(dataset.num_examples):
    features = {}
    # shape: 28,28,1
    if i % 2 == 0:
        images = dataset.images[i].reshape((28, 28, 1))
    # shape: 14,56,1
    else:
        images = dataset.images[i].reshape((14, 56, 1))
    tfr.feature_writer(df.iloc[0], images, features)
    # store the label info of this example into the features dict
    tfr.feature_writer(df.iloc[1], dataset.labels[i], features)
    # ******************
    # write shape info *
    # ******************
    tfr.feature_writer(df.iloc[2], images.shape, features)
    tf_features = tf.train.Features(feature=features)
    tf_example = tf.train.Example(features=tf_features)
    tf_serialized = tf_example.SerializeToString()
    writer.write(tf_serialized)
    # start a new tfrecord file every num_examples_per_file examples
    if i % num_examples_per_file == 0 and i != 0:
        writer.close()
        num_so_far = i
        writer = tf.python_io.TFRecordWriter('%s%s_%s.tfrecord' % (path, num_so_far, i + num_examples_per_file))
        print('saved %s%s_%s.tfrecord' % (path, num_so_far, i + num_examples_per_file))
writer.close()
# write the tfrecord files for the test set in the same way
dataset = mnist.test
path = 'mnist_tfrecord/test/test'
# number of examples to write per tfrecord file
num_examples_per_file = 10000
# number of examples written so far
num_so_far = 0
# the file to write to
writer = tf.python_io.TFRecordWriter('%s%s_%s.tfrecord' % (path, num_so_far, num_examples_per_file))
print('saved %s%s_%s.tfrecord' % (path, num_so_far, num_examples_per_file))
# write multiple examples
for i in np.arange(dataset.num_examples):
    # the dict to be written to the tfrecord file
    features = {}
    # store the image info of this example into the features dict
    # shape: 28,28,1
    if i % 2 == 0:
        images = dataset.images[i].reshape((28, 28, 1))
    # shape: 14,56,1
    else:
        images = dataset.images[i].reshape((14, 56, 1))
    tfr.feature_writer(df.iloc[0], images, features)
    # store the label info of this example into the features dict
    tfr.feature_writer(df.iloc[1], dataset.labels[i], features)
    # ******************
    # write shape info *
    # ******************
    tfr.feature_writer(df.iloc[2], images.shape, features)
    tf_features = tf.train.Features(feature=features)
    tf_example = tf.train.Example(features=tf_features)
    tf_serialized = tf_example.SerializeToString()
    writer.write(tf_serialized)
    # start a new tfrecord file every num_examples_per_file examples
    if i % num_examples_per_file == 0 and i != 0:
        writer.close()
        num_so_far = i
        writer = tf.python_io.TFRecordWriter('%s%s_%s.tfrecord' % (path, num_so_far, i + num_examples_per_file))
        print('saved %s%s_%s.tfrecord' % (path, num_so_far, i + num_examples_per_file))
writer.close()
# save the info that specifies how the tfrecord files were written
data_info_path = 'mnist_tfrecord/data_info.csv'
df.to_csv(data_info_path,index=False)
```
# import function
```
tfr = TFrecorder(retrieve_shape=True)
def input_fn_maker(path, data_info_path, shuffle=False, batch_size=1, epoch=1, padding=None):
    def input_fn():
        filenames = tfr.get_filenames(path=path, shuffle=shuffle)
        dataset = tfr.get_dataset(paths=filenames, data_info=data_info_path, shuffle=shuffle,
                                  batch_size=batch_size, epoch=epoch, padding=padding)
        iterator = dataset.make_one_shot_iterator()
        return iterator.get_next()
    return input_fn
padding_info = ({'image':[None,None,1],'label':[],'image_shape':[None]})
test_input_fn = input_fn_maker('mnist_tfrecord/test/', 'mnist_tfrecord/data_info.csv',batch_size = 1,
padding = padding_info)
train_input_fn = input_fn_maker('mnist_tfrecord/train/', 'mnist_tfrecord/data_info.csv', shuffle=True, batch_size = 1,
padding = padding_info)
train_eval_fn = input_fn_maker('mnist_tfrecord/train/', 'mnist_tfrecord/data_info.csv', batch_size = 1,
padding = padding_info)
test_inputs = test_input_fn()
sess = tf.InteractiveSession()
example_image, example_label, example_image_shape = sess.run([test_inputs['image'], test_inputs['label'], test_inputs['image_shape']])
print("shape of image:", example_image.shape)
print("value of label:", example_label)
print("value of 'image_shape':\n", example_image_shape)
sess = tf.InteractiveSession()
# Because test_inputs contains a batch of examples and each example may have a
# different image shape, a single tf.reshape cannot handle the whole batch;
# here we reshape just the first example.
batch_size = 1
list_image_examples = tf.reshape(test_inputs['image'][0], test_inputs['image_shape'][0])
example= sess.run(list_image_examples)
plt.imshow(example.reshape((28,28)),cmap=plt.cm.gray)
print('image shape of example', example.shape)
conv = tf.layers.conv2d(
inputs=test_inputs['image'],
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu,
name = 'conv')
tf.global_variables_initializer().run()
sess.run(conv).shape
```
---
## Project: Visualizing the Orion Constellation
In this project you are Dr. Jillian Bellovary, a real-life astronomer for the Hayden Planetarium at the American Museum of Natural History. As an astronomer, part of your job is to study the stars. You've recently become interested in the constellation Orion, a collection of stars that appear in our night sky and form the shape of [Orion](https://en.wikipedia.org/wiki/Orion_(constellation)), a hunter from ancient Greek mythology.
As a researcher on the Hayden Planetarium team, you are in charge of visualizing the Orion constellation in 3D using the Matplotlib function `.scatter()`. To learn more about the `.scatter()` you can see the Matplotlib documentation [here](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html).
You will create a rotatable visualization of the positions of Orion's stars and get a better sense of their actual locations. To achieve this, you will be mapping real data from outer space that gives the position of the stars in the sky.
The goal of the project is to understand spatial perspective. Once you visualize Orion in both 2D and 3D, you will be able to see the difference in the constellation shape humans see from earth versus the actual position of the stars that make up this constellation.
<img src="https://upload.wikimedia.org/wikipedia/commons/9/91/Orion_constellation_with_star_labels.jpg" alt="Orion" style="width: 400px;"/>
## 1. Set-Up
The following set-up is new and specific to the project. It is very similar to the way you have imported Matplotlib in previous lessons.
+ Add `%matplotlib notebook` in the cell below. This is a new statement that you may not have seen before. It will allow you to be able to rotate your visualization in this jupyter notebook.
+ We will be using a subset of Matplotlib: `matplotlib.pyplot`. Import the subset as you have been importing it in previous lessons: `from matplotlib import pyplot as plt`
+ In order to see our 3D visualization, we also need to add this new line after we import Matplotlib:
`from mpl_toolkits.mplot3d import Axes3D`
```
%matplotlib notebook
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
## 2. Get familiar with real data
Astronomers describe a star's position in the sky by using a pair of angles: declination and right ascension. Declination is similar to latitude, but it is projected on the celestial sphere. Right ascension is known as the "hour angle" because it accounts for the time of day and Earth's rotation. Both angles are relative to the celestial equator. You can learn more about star position [here](https://en.wikipedia.org/wiki/Star_position).
The `x`, `y`, and `z` lists below are composed of the x, y, z coordinates for each star in the collection of stars that make up the Orion constellation, as documented in a Nottingham Trent University paper on "The Orion constellation as an installation" found [here](https://arxiv.org/ftp/arxiv/papers/1110/1110.3469.pdf).
Spend some time looking at `x`, `y`, and `z`, does each fall within a range?
```
# Orion
x = [-0.41, 0.57, 0.07, 0.00, -0.29, -0.32, -0.50, -0.23, -0.23]
y = [4.12, 7.71, 2.36, 9.10, 13.35, 8.13, 7.19, 13.25, 13.43]
z = [2.06, 0.84, 1.56, 2.07, 2.36, 1.72, 0.66, 1.25, 1.38]
```
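As a quick way to answer the range question above, we can print each coordinate's minimum and maximum (the lists are repeated here so the snippet stands on its own):

```python
x = [-0.41, 0.57, 0.07, 0.00, -0.29, -0.32, -0.50, -0.23, -0.23]
y = [4.12, 7.71, 2.36, 9.10, 13.35, 8.13, 7.19, 13.25, 13.43]
z = [2.06, 0.84, 1.56, 2.07, 2.36, 1.72, 0.66, 1.25, 1.38]

# print the range each coordinate falls within
for name, coords in [("x", x), ("y", y), ("z", z)]:
    print(name, "ranges from", min(coords), "to", max(coords))
```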
## 3. Create a 2D Visualization
Before we visualize the stars in 3D, let's get a sense of what they look like in 2D.
Create a figure for the 2d plot and save it to a variable name `fig`. (hint: `plt.figure()`)
Add your subplot with `.add_subplot()` as the single subplot, with `1,1,1`. (hint: `add_subplot(1,1,1)`)
Use the scatter [function](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html) to visualize your `x` and `y` coordinates. (hint: `.scatter(x,y)`)
Render your visualization. (hint: `plt.show()`)
Does the 2D visualization look like the Orion constellation we see in the night sky? Do you recognize its shape in 2D? There is a curve to the sky, and this is a flat visualization, but we will visualize it in 3D in the next step to get a better sense of the actual star positions.
```
fig = plt.figure()
constellation2d = fig.add_subplot(1,1,1)
constellation2d.scatter(x,y)
plt.show()
```
## 4. Create a 3D Visualization
Create a figure for the 3D plot and save it to a variable name `fig_3d`. (hint: `plt.figure()`)
Since this will be a 3D projection, we need to tell Matplotlib that this will be a 3D plot.
To add a 3D projection, you must include the projection argument, like this:
```py
projection="3d"
```
Add your subplot with `.add_subplot()` as the single subplot `1,1,1` and specify your `projection` as `3d`:
`fig_3d.add_subplot(1,1,1,projection="3d")`
Since this visualization will be in 3D, we will need our third dimension. In this case, our `z` coordinate.
Create a new variable `constellation3d` and call the scatter [function](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html) with your `x`, `y` and `z` coordinates.
Include `z` just as you have been including the other two axes. (hint: `.scatter(x,y,z)`)
Render your visualization. (hint `plt.show()`.)
```
fig_3d = plt.figure()
constellation3d = fig_3d.add_subplot(1,1,1,projection="3d")
constellation3d.scatter(x,y,z)
plt.show()
```
## 5. Rotate and explore
Use your mouse to click and drag the 3D visualization in the previous step. This will rotate the scatter plot. As you rotate, can you see Orion from different angles?
Note: The on and off button that appears above the 3D scatter plot allows you to toggle rotation of your 3D visualization in your notebook.
Take your time, rotate around! Remember, this will never look exactly like the Orion we see from Earth. The visualization does not curve as the night sky does.
There is beauty in the new understanding of Earthly perspective! We see the shape of the warrior Orion because of Earth's location in the universe and the location of the stars in that constellation.
Feel free to map more stars by looking up other celestial x, y, z coordinates [here](http://www.stellar-database.com/).
---
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
## Operators on Multiple Bits
[Watch Lecture](https://youtu.be/vd21d1KTC5c)
We explain how to construct the operator of a composition system when we apply an operator to one bit or to a few bits of the composite system.
*Here we have a simple rule, we assume that the identity operator is applied on the rest of the bits.*
### Single bit operators
When we have two bits, then our system has four states and any operator of the system can be defined as a $ (4 \times 4) $-dimensional matrix.
For example, if we apply the probabilistic operator $ M = \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } $ to the second bit, then how can we represent the corresponding $ (4 \times 4) $-dimensional matrix?
The answer is easy. By assuming that the identity operator is applied to the first bit, the matrix is
$$ I \otimes M = \I \otimes \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } = \mymatrix{cccc} { 0.3 & 0.6 & 0 & 0 \\ 0.7 & 0.4 & 0 & 0 \\ 0 & 0 & 0.3 & 0.6 \\ 0 & 0 & 0.7 & 0.4 }. $$
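A quick numerical check of this tensor product (a sketch, not part of the original notebook), using NumPy's Kronecker product:

```python
import numpy as np

# Applying M to the second bit of a two-bit system corresponds to I ⊗ M,
# which np.kron computes directly.
I = np.eye(2)
M = np.array([[0.3, 0.6],
              [0.7, 0.4]])
IM = np.kron(I, M)
print(IM)
```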
<h3> Task 1</h3>
We have two bits. What is the $ (4 \times 4) $-dimensional matrix representation of the probabilistic operator $ M = \mymatrix{cc}{ 0.2 & 0.7 \\ 0.8 & 0.3 } $ applied to the first bit?
<a href="B19_Operators_on_Multiple_Bits_Solutions.ipynb#task1">click for our solution</a>
<h3> Task 2</h3>
We have three bits. What is the $ (8 \times 8) $-dimensional matrix representation of the probabilistic operator $ M = \mymatrix{cc}{ 0.9 & 0.4 \\ 0.1 & 0.6 } $ applied to the second bit?
<a href="B19_Operators_on_Multiple_Bits_Solutions.ipynb#task2">click for our solution</a>
### Two bits operators
We start with an easy example.
We have three bits and we apply the probabilistic operator
$ M = \mymatrix{rrrr}{0.05 & 0 & 0.70 & 0.60 \\ 0.45 & 0.50 & 0.20 & 0.25 \\ 0.20 & 0.35 & 0.10 & 0 \\ 0.30 & 0.15 & 0 & 0.15 } $ to the first and second bits. Then, the corresponding $ (8 \times 8) $-dimensional matrix is $ M \otimes I $, where $I$ is the $(2 \times 2)$-dimensional Identity matrix.
If $ M $ is applied to the second and third bits, then the corresponding matrix is $ I \otimes M $.
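These two easy placements can be checked numerically (a sketch, not part of the original notebook), again with NumPy's Kronecker product:

```python
import numpy as np

# The 4x4 probabilistic operator M from the text, placed on a three-bit system.
I = np.eye(2)
M = np.array([
    [0.05, 0.00, 0.70, 0.60],
    [0.45, 0.50, 0.20, 0.25],
    [0.20, 0.35, 0.10, 0.00],
    [0.30, 0.15, 0.00, 0.15],
])
M_first_second = np.kron(M, I)  # M on the first and second bits
M_second_third = np.kron(I, M)  # M on the second and third bits
print(M_first_second.shape, M_second_third.shape)
```

Both results remain probabilistic operators: every column still sums to 1.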
**What if $ M $ is applied to the first and third bits?**
We pick an example transition: it is given in $ M $ that $ \greenbit{0} \brownbit{1} \xrightarrow{0.35} \greenbit{1} \brownbit{0} $.
- That is, when the first bit is 0 and the third bit is 1, the first bit is set to 1 and the third bit is set to 0 with probability 0.35:
$$ \myarray{ccccc}{\mbox{first-bit} & \mbox{third-bit} & probability & \mbox{first-bit} & \mbox{third-bit} \\ \greenbit{0} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \brownbit{0} } $$
- We put the second bit in the picture by assuming that the identity operator is applied to it:
$$
\myarray{ccccccc}{
\mbox{first-bit} & \mbox{second-bit} & \mbox{third-bit} & probability & \mbox{first-bit} & \mbox{second-bit} & \mbox{third-bit} \\
\greenbit{0} & \bluebit{0} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \bluebit{0} & \brownbit{0} \\
\greenbit{0} & \bluebit{1} & \brownbit{1} & \xrightarrow{0.35} & \greenbit{1} & \bluebit{1} & \brownbit{0} \\
\\ \hline \\
\greenbit{0} & \bluebit{0} & \brownbit{1} & \xrightarrow{0} & \greenbit{1} & \bluebit{1} & \brownbit{0} \\
\greenbit{0} & \bluebit{1} & \brownbit{1} & \xrightarrow{0} & \greenbit{1} & \bluebit{0} & \brownbit{0}
}
$$
<h3> Task 3</h3>
Why are the last two transition probabilities zero in the above table?
<h3> Task 4</h3>
We have three bits and the probabilistic operator
$ M = \mymatrix{rrrr}{0.05 & 0 & 0.70 & 0.60 \\ 0.45 & 0.50 & 0.20 & 0.25 \\ 0.20 & 0.35 & 0.10 & 0 \\ 0.30 & 0.15 & 0 & 0.15 } $ is applied to the first and third bits.
What is the corresponding $(8 \times 8)$-dimensional matrix applied to the whole system?
*You may solve this task by using python.*
```
# the given matrix
M = [
[0.05, 0, 0.70, 0.60],
[0.45, 0.50, 0.20, 0.25],
[0.20, 0.35, 0.10, 0],
[0.30, 0.15, 0, 0.15]
]
#
# you may enumerate the columns and rows by the strings '00', '01', '10', and '11'
# int('011',2) returns the decimal value of the binary string '011'
#
#
# your solution is here
#
```
<a href="B19_Operators_on_Multiple_Bits_Solutions.ipynb#task4">click for our solution</a>
### Controlled operators
The matrix form of the controlled-NOT operator is as follows:
$$ CNOT = \mymatrix{cc|cc}{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \hline 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 }
= \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & X},
$$
where $ X $ denotes the NOT operator.
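As a quick sketch (assuming the usual convention that the 4-dimensional state vector lists the probabilities of the states 00, 01, 10, 11 in that order), applying CNOT swaps the probabilities of the states 10 and 11, i.e., it flips the second bit exactly when the first bit is 1:

```python
import numpy as np

CNOT = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
])

# probabilities of the states 00, 01, 10, 11
v = np.array([0.1, 0.2, 0.3, 0.4])
print(CNOT @ v)  # [0.1 0.2 0.4 0.3]
```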
Similarly, for a given single bit operator $ M $, we can define the controlled-$M$ operator (where the first bit is the control bit and the second bit is target bit) as follows:
$$ CM = \mymatrix{c|c}{ I & \mathbf{0} \\ \hline \mathbf{0} & M } $$
By definition:
* when the first bit is 0, the identity is applied to the second bit, and
* when the first bit is 1, the operator $ M $ is applied to the second bit.
Here we observe that the matrix $ CM $ has a nice form because the first bit is the control bit. The matrix $ CM $ given above is divided into four sub-matrices based on the state of the first bit. From this we can see that
* the value of the first bit never changes, and so the off diagonal sub-matrices are zeros;
* when the first bit is 0, the identity is applied to the second bit, and so top-left matrix is $ I $; and,
* when the first bit is 1, the operator $ M $ is applied to the second bit, and so the bottom-right matrix is $ M $.
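This block structure can be assembled directly with NumPy's `np.block`; below is a sketch with an arbitrary single-bit stochastic operator $M$ (the matrix values are only an example):

```python
import numpy as np

M = np.array([[0.7, 0.4],
              [0.3, 0.6]])  # an example single-bit operator
I = np.eye(2)
Z = np.zeros((2, 2))

# controlled-M with the first bit as control: block-diagonal [[I, 0], [0, M]]
CM = np.block([[I, Z],
               [Z, M]])
print(CM)
```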
<h3> Task 5</h3>
Let $ M = \mymatrix{cc}{0.7 & 0.4 \\ 0.3 & 0.6} $ be a single bit operator. What is the matrix form of the controlled-$M$ operator where the first bit is the target bit and the second bit is the control bit?
<a href="B19_Operators_on_Multiple_Bits_Solutions.ipynb#task5">click for our solution</a>
### Controlled operator activated when in state 0
For a given single bit operator $ M $, **how can we obtain the following operator** by using the operator $ CM $?
$$ C_0M = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I } $$
Controlled operators are defined to be triggered when the control bit is in state 1. In this example, we expect the operator to be triggered when the control bit is in state 0.
Here we can use a simple trick. We first apply the NOT operator to the first bit, then the $CM$ operator, and then the NOT operator again. In this way, we guarantee that $ M $ is applied to the second bit if the first bit is in state 0, and that nothing happens if the first bit is in state 1. In short:
$$ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ). $$
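A quick numerical check of this identity with NumPy (using an arbitrary single-bit stochastic operator $M$ as an example):

```python
import numpy as np

M = np.array([[0.7, 0.4],
              [0.3, 0.6]])  # an example single-bit operator
I, Z = np.eye(2), np.zeros((2, 2))
X = np.array([[0, 1],
              [1, 0]])      # the NOT operator

CM = np.block([[I, Z], [Z, M]])
C0M = np.kron(X, I) @ CM @ np.kron(X, I)
print(np.allclose(C0M, np.block([[M, Z], [Z, I]])))  # True
```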
<h3> Task 6</h3>
Verify that $ C_0M = (X \otimes I) \cdot (CM) \cdot ( X \otimes I ) = \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & I } $.
<a href="B19_Operators_on_Multiple_Bits_Solutions.ipynb#task6">click for our solution</a>
<h3> Task 7</h3>
For the given two single bit operators $ M $ and $ N $, let $ CM $ and $ CN $ be the controlled-$M$ and controlled-$N$ operators. By using $ X $, $ CM $, and $ CN $ operators, how can we obtain the operator $ \mymatrix{c|c}{ M & \mathbf{0} \\ \hline \mathbf{0} & N} $?
<h2><b> GAME ENVIRONMENT CODE & BASIC FUNCTIONS</b></h2>
```
%matplotlib inline
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import time
from game import Game
from racing_env import RaceGameEnv
from PIL import Image
from io import BytesIO
from tf_agents.environments import utils
from tf_agents.networks import q_network
from tf_agents.agents.dqn import dqn_agent
from tf_agents.utils import common
from tf_agents.environments import tf_py_environment
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
#game = Game()
```
```
for i in range(1):
    img = game.takess()
    print(img.shape)
    im = Image.fromarray(img, 'RGB')
    imgplot = plt.imshow(im, cmap=plt.cm.binary)
    #plt.show()
    #im.save("screenshot.jpeg")

game.getSpead()
game.move('up')
game.resetGame()
```
<h2><b> TENSORFLOW ENVIRONMENT CODE </b></h2>
```
env = RaceGameEnv()
#env = tf_py_environment.TFPyEnvironment(env)
#utils.validate_py_environment(env, episodes=1)
env = tf_py_environment.TFPyEnvironment(env)
num_iterations = 1500 # @param {type:"integer"}
initial_collect_steps = 500 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_max_length = 1000 # @param {type:"integer"}
batch_size = 64 # @param {type:"integer"}
learning_rate = 1e-5 # @param {type:"number"}
log_interval = 10 # @param {type:"integer"}
num_eval_episodes = 2 # @param {type:"integer"}
eval_interval = 1000 # @param {type:"integer"}
#fc_layer_params = (200,)
#fc_layer_params = [100,50] #(150,75)
fc_layer_params = [100,10] #(150,75)
##q_net = q_network.QNetwork(
# env.observation_spec(),
# env.action_spec(),
# fc_layer_params=fc_layer_params)
#conv_layer_params = [( 16 , ( 3 , 3 ), 1 ), ( 16 , ( 3 , 3 ), 1 )]
conv_layer_params = [( 16 , ( 3 , 3 ), 1 )]
q_net = q_network.QNetwork(
env.observation_spec(),
env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
agent = dqn_agent.DqnAgent(
env.time_step_spec(),
env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=train_step_counter)
agent.initialize()
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
random_policy = random_tf_policy.RandomTFPolicy(env.time_step_spec(),
env.action_spec())
random_return = -999
#random_return = compute_avg_return(env, random_policy, num_eval_episodes)
env._reset()
print("Return from Random agent is " + str(random_return))
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=env.batch_size,
max_length=replay_buffer_max_length)
agent.collect_data_spec._fields
def collect_step(environment, policy, buffer):
#start_time = time.time()
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
#print("All collect_step took %s seconds" % (time.time() - start_time))
def collect_data(env, policy, buffer, steps):
for i in range(steps):
collect_step(env, policy, buffer)
collect_data(env, random_policy, replay_buffer, steps=100)
# This loop is so common in RL, that we provide standard implementations.
# For more details see the drivers module.
# https://www.tensorflow.org/agents/api_docs/python/tf_agents/drivers
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
dataset
iterator = iter(dataset)
print(iterator)
"""
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
env._reset()
avg_return = 0
#avg_return = compute_avg_return(env, agent.policy, num_eval_episodes)
returns = [avg_return]
for i in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for j in range(collect_steps_per_iteration):
#print("Step " + str(j))
collect_step(env, agent.collect_policy, replay_buffer)
env._reset()
# Sample a batch of data from the buffer and update the agent's network.
print("Agent training for %s th time" % str(i))
start_time = time.time()
for j in range(collect_steps_per_iteration):
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
print("Agent trained in %s seconds" % (time.time() - start_time))
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
env._reset()
avg_return = compute_avg_return(env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
"""
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = 0
#avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
train_loss_arr = []
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
start_time = time.time()
for _ in range(collect_steps_per_iteration):
collect_step(env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
#print("Agent trained in %s seconds" % (time.time() - start_time))
train_loss_arr.append(train_loss)
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
#time_step = environment.reset()
#avg_return = compute_avg_return(env, agent.collect_policy, num_eval_episodes)
#avg_return
avg_return = compute_avg_return(env, agent.policy, num_eval_episodes)
avg_return
import matplotlib.pyplot as plt
import numpy as np
plt.plot(train_loss_arr)
plt.grid(True)
plt.show()
```
```
import nltk
import numpy as np
import pprint
import utils as utl
from time import time
from gensim import corpora, models, utils
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.stem.snowball import EnglishStemmer
from tqdm import tqdm
from tqdm import tqdm_notebook as tqdm
authorID_to_titles = utl.load_pickle("../pmi_data/authorID_to_publications_clean.p")
```
# Preprocessing
For the topic extraction part we will use the dictionary of author->list_of_publications collected in the previous step. We first need to do some preprocessing:
1. We use the `utils.simple_preprocess` function from gensim to get a list of lowercased tokens.
2. We stem each word.
3. We filter out the stopwords.
```
#Uncomment this cell if you don't have the data on your computer
#nltk.download("stopwords")
#nltk.download("wordnet")
```
For the stop words we use the ones provided by NLTK. This set seems small, so we also include other common English stop words found online or in the titles
```
english_stop_words = ["a", "about", "above", "above", "across", "after", "afterwards", "again", "against", "all", "almost", "alone", "along", "already", "also","although","always","am","among", "amongst", "amoungst", "amount", "an", "and", "another", "any","anyhow","anyone","anything","anyway", "anywhere", "are", "around", "as", "at", "back","be","became", "because","become","becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides", "between", "beyond", "bill", "both", "bottom","but", "by", "call", "can", "cannot", "cant", "co", "con", "could", "couldnt", "cry", "de", "describe", "detail", "do", "done", "down", "due", "during", "each", "eg", "eight", "either", "eleven","else", "elsewhere", "empty", "enough", "etc", "even", "ever", "every", "everyone", "everything", "everywhere", "except", "few", "fifteen", "fify", "fill", "find", "fire", "first", "five", "for", "former", "formerly", "forty", "found", "four", "from", "front", "full", "further", "get", "give", "go", "had", "has", "hasnt", "have", "he", "hence", "her", "here", "hereafter", "hereby", "herein", "hereupon", "hers", "herself", "him", "himself", "his", "how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest", "into", "is", "it", "its", "itself", "keep", "last", "latter", "latterly", "least", "less", "ltd", "made", "many", "may", "me", "meanwhile", "might", "mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must", "my", "myself", "name", "namely", "neither", "never", "nevertheless", "next", "nine", "no", "nobody", "none", "noone", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps", "please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems", "serious", "several", "she", "should", "show", "side", "since", "sincere", "six", "sixty", "so", 
"some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere", "still", "such", "system", "take", "ten", "than", "that", "the", "their", "them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon", "these", "they", "thickv", "thin", "third", "this", "those", "though", "three", "through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards", "twelve", "twenty", "two", "un", "under", "until", "up", "upon", "us", "very", "via", "was", "we", "well", "were", "what", "whatever", "when", "whence", "whenever", "where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet", "you", "your", "yours", "yourself", "yourselves", "the",'like', 'think', 'know', 'want', 'sure', 'thing', 'send', 'sent', 'speech', 'print', 'time','want', 'said', 'maybe', 'today', 'tomorrow', 'thank', 'thanks']
specific_stop_words = ['base', 'use', 'model', 'process', 'network']
sw =stopwords.words('english') + english_stop_words + specific_stop_words
```
We decide to use a stemmer rather than a lemmatizer (both from NLTK). The reason is that we want to group together words with the same meaning: if one publication contains _algorithm_ and another contains _Algorithmic_, it helps to map both words to the same token. Even if our model should be able to capture the similarity between those two words on its own, merging them reduces the vocabulary and speeds up training. Let's compare the output of a stemmer and a lemmatizer.
```
lemmatizer = WordNetLemmatizer()
stemmer = EnglishStemmer()
print("Stemmer",stemmer.stem("Algorithm"), stemmer.stem("Algorithmic"))
print("Lemmatizer",lemmatizer.lemmatize("algorithm"), lemmatizer.lemmatize("Algorithmic"))
```
Indeed, the lemmatizer keeps two different words, so let's use the stemmer.
```
def pre_processing(titles):
list_of_tokens = []
for title in titles:
tokens = utils.simple_preprocess(title)
tokens = [stemmer.stem(x) for x in tokens]
tokens = list(filter(lambda t: t not in sw, tokens))
list_of_tokens.append(tokens)
return list_of_tokens
authorID_to_titles_stem = {id_: pre_processing(titles) for id_, titles in tqdm(authorID_to_titles.items())}
utl.pickle_data(authorID_to_titles_stem, "../pmi_data/authorID_to_titles_stem.p")
```
# Topic Extraction
We want to extract the k main topics across all publications, and then compute each author's score for each of those topics.
We use **Latent Dirichlet allocation** and the implementation provided by **Gensim**.
## Latent Dirichlet allocation (LDA)
The principle behind LDA is that, in a collection of documents, each document represents a **mixture of topics**. This means that a document contains words that belong to different categories. The goal of LDA is to recover the sets of words used to generate the documents.
## Extraction
We have a dictionary of ```authorID-> list(list(tokens))``` with the inner lists representing the titles.
The LDA implementation of gensim takes as parameters:
- a dictionary token -> id
- a list of lists of (token, token_count)
We use two functions provided by Gensim: `corpora.Dictionary` and `doc2bow`.
Since we are dealing with titles, most words occur only once in a given title. All words then carry the same weight, which makes it hard for the algorithm to infer the probability p(topic | title).
Since we want to find the set of topics that represents an author, we have already made the assumption that all the publications of one author fall into a subset of topics. So let's put all the publications of one author together, as if they formed one big document.
```
authorID_to_titles_stem = utl.load_pickle("../pmi_data/authorID_to_titles_stem.p")
authorID_to_document = dict()
for author, titles in tqdm(authorID_to_titles_stem.items()):
authorID_to_document[author] = []
for t in titles:
authorID_to_document[author].extend(t)
```
Now we have a mapping author->document. We can build the dictionary and transform each document into a list of (token, token_count) pairs.
```
dictionary = corpora.Dictionary([doc for doc in tqdm(authorID_to_document.values())])
corpus = [dictionary.doc2bow(doc) for doc in tqdm(authorID_to_document.values())]
```
Set up the parameters; we select 20 topics.
```
#parameters
num_topics = 20 # number of topics LDA has to select
passes = 1 # number of passes in the LDA training
num_words = 5 # number of most important words per topic to be printed
tmp = corpus
corpus = tmp
corpus = np.random.choice(corpus, int(len(corpus)/1000))
len(corpus)
c = [c for c in tqdm(tmp) if len(c)> 100]
len(c)
start = time()
pp = pprint.PrettyPrinter(depth=2)
lda = models.LdaModel(c, num_topics=num_topics, id2word = dictionary, passes=passes)
print("Training time:", round((time()-start)/60,2),"[min]")
pp.pprint(lda.print_topics(lda.num_topics, num_words=num_words))
lda.save('lda.model')
utl.pickle_data(lda, "../pmi_data/lda_model__20_100.p")
def compute_score(titles):
total_score = np.zeros(num_topics)
for title in titles:
#lda output : [(id1, score1), (id2, score2),... if id != 0]
for id_, value in lda[dictionary.doc2bow(title)]:
total_score[id_] += value
return total_score
score_by_author_by_document = [compute_score([doc]) for _, doc in tqdm(authorID_to_document.items())]
utl.pickle_data(score_by_author_by_document, "../pmi_data/score_by_author_by_document.p")
score_by_author_by_titles = [compute_score(titles) for _, titles in tqdm(authorID_to_titles_stem.items())]
utl.pickle_data(score_by_author_by_titles,"../pmi_data/score_by_author_by_titles.p")
```
```
from bids.grabbids import BIDSLayout
from nipype.interfaces.fsl import (BET, ExtractROI, FAST, FLIRT, ImageMaths,
MCFLIRT, SliceTimer, Threshold,Info, ConvertXFM,MotionOutliers)
import nipype.interfaces.fsl as fsl
from nipype.interfaces.afni import Resample
from nipype.interfaces.io import DataSink
from nipype.pipeline import Node, MapNode, Workflow, JoinNode
from nipype.interfaces.utility import IdentityInterface, Function
import os
from os.path import join as opj
from nipype.interfaces import afni
import nibabel as nib
import json
import numpy as np
# func = '../results_again_again/fc_motionRegress1filt1global1/_subject_id_0050002/pearcoff/0050002_fc_map.nii.gz'
# func
```
# func2std Code
```
os.chdir('/home1/varunk/Autism-Connectome-Analysis-bids-related/')
# Node for applying xformation matrix to functional data
func2std_xform = Node(FLIRT(output_type='NIFTI',
apply_xfm=True), name="func2std_xform")
in_file = '/home1/varunk/results_again_again/test/0050002_fc_map.nii.gz'
ref = '/home1/varunk/results_again_again/test/MNI152_T1_2mm_brain_resample.nii'
# ref = '../results_again_again/test/MNI152_T1_3mm.nii.gz'
# ref = '/home1/varunk/results_again_again/test/MNI152_T1_3mm.nii'
mat_file = '/home1/varunk/results_again_again/test/sub-0050002_task-rest_run-1_bold_roi_st_mcf_mean_bet_flirt_sub-0050002_T1w_resample_brain_flirt.mat'
# out_file = '/home1/varunk/results_again_again/test/sub-0050002_std.nii.gz'
# /home1/varunk/results_again_again/test/MNI152_T1_3mm.nii.gz
# import nibabel as nib
# brain_data = nib.load(ref)
# brain = brain_data.get_data()
# brain_with_header = nib.Nifti1Image(brain, affine=brain_data.affine,header = brain_data.header)
# nib.save(brain_with_header,'../results_again_again/test/MNI152_T1_3mm.nii.gz')
func2std_xform.inputs.in_file = in_file
func2std_xform.inputs.reference = ref
func2std_xform.inputs.in_matrix_file = mat_file
# func2std_xform.inputs.out_file = out_file
func2std_xform.inputs
result = func2std_xform.run()
# applyxfm = fsl.ApplyXfm()
# applyxfm.inputs.in_file = in_file
# applyxfm.inputs.in_matrix_file =mat_file
# # applyxfm.inputs.out_file = 'newfile.nii'
# applyxfm.inputs.reference = ref
# applyxfm.inputs.apply_xfm = True
# result = applyxfm.run()
# func2std_xform.inputs
# (maskfunc4mean, std2func_xform, [(('out_file','reference'))]),
# (resample_atlas, std2func_xform, [('out_file','in_file')] ),
# # Now, applying the inverse matrix
# (inv_mat, std2func_xform, [('out_file','in_matrix_file')]), # output: Atlas in func space
X = np.load('../results_again_again/fc_datasink/pearcoff_motionRegress0filt0global0/fc_map_brain_file_list.npy')
X
```
# Reading CSV using pandas
```
import pandas as pd
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
table = df.sort_values(['SUB_ID'])
table
sub_id = df.as_matrix(columns=['SUB_ID'])#table['SUB_ID']
type(sub_id)
sub_id.shape
table.SUB_ID.loc[lambda s: s == 50002]
table.loc[(table['SUB_ID'] > 50010) & (table['SUB_ID'] < 50050)]
# selecting Autistic males of age <= 20 years
table_males = table.loc[(table['SEX'] == 1) & (table['AGE_AT_SCAN'] <=20) & (table['DX_GROUP'] == 1) ]
table_males.shape
table_males
table_males_np = table_males.as_matrix(columns=['SUB_ID','DX_GROUP', 'DSM_IV_TR', 'AGE_AT_SCAN' ,'SEX' ,'EYE_STATUS_AT_SCAN'])
table_males_np
# selecting Autistic males(DSM IV) of age <= 18 years
df_aut_lt18_m = table.loc[(table['SEX'] == 1) & (table['AGE_AT_SCAN'] <=18) & (table['DSM_IV_TR'] == 1) ]
df_aut_lt18_m.shape
df_td_lt18_m = table.loc[(table['SEX'] == 1) & (table['AGE_AT_SCAN'] <=18) & (table['DSM_IV_TR'] == 0) ]
df_td_lt18_m.shape
```
## API
Uses http://dart.fss.or.kr/api/search.xml?auth=xxxxx
```
# you need an authentication key to use the DART Open API
import requests
auth = 'fa2804dc433cff0900e8107d9c6afd00382f6fd9'
url_temp = 'http://dart.fss.or.kr/api/search.xml?auth={auth}'
url = url_temp.format(auth = auth)
r = requests.get(url)
r.text[:100]
```
## Company Overview API
Uses http://dart.fss.or.kr/api/company.json?auth=xxx&crp_cd=xxx
```
import json
from pandas.io.json import json_normalize
url_temp = 'http://dart.fss.or.kr/api/company.json?auth={auth}&crp_cd={crp}'
url = url_temp.format(auth = auth , crp = '005930')
r = requests.get(url)
samsung = json_normalize(json.loads(r.content))
```
## Disclosure Search API
http://dart.fss.or.kr/api/search.json?auth={auth}&crp_cd={code}&start_dt=19990101&bsn_tp=A001&bsn_tp=A002&bsn_tp=A003
- crp_cd: stock code of the disclosing company (listed companies: 6 digits) or unique registration number (other entities: 8 digits)
- end_dt: end of the search window by receipt date (YYYYMMDD); defaults to today if omitted
- start_dt: start of the search window by receipt date (YYYYMMDD); defaults to end_dt if omitted. If crp_cd is missing, the search period is limited to 3 months
- fin_rpt: whether to return final reports only (Y or N); default: N (when corrections exist, only the latest correction is searched)
- dsp_tp: periodic disclosures (A), major-event reports (B), issuance disclosures (C), ownership disclosures (D), other disclosures (E), external-audit related (F), fund disclosures (G), asset securitization (H), exchange disclosures (I), Fair Trade Commission disclosures (J)
- bsn_tp: periodic disclosures (5), major-event reports (3), issuance disclosures (11), ownership disclosures (4), other disclosures (9), external-audit related (3), fund disclosures (3), asset securitization (6), exchange disclosures (6), Fair Trade Commission disclosures (5)
- sort: receipt date (date), company name (crp), or report name (rpt); default: date
- series: ascending (asc) or descending (desc); default: desc
- page_no: page number (1~n); default: 1
- page_set: results per page (1~100); default: 10, maximum: 100
- callback: callback function name (for JSONP)
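As an illustration (the auth value below is a placeholder, not a real key), these parameters can be assembled into a query string with the standard library's `urlencode` instead of hand-formatting the URL:

```python
from urllib.parse import urlencode

params = {
    "auth": "YOUR_AUTH_KEY",  # placeholder -- substitute your own key
    "crp_cd": "005930",
    "start_dt": "20180101",
    "bsn_tp": "A001",
    "page_set": 100,
}
url = "http://dart.fss.or.kr/api/search.json?" + urlencode(params)
print(url)
```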
```
# 100 filings received today
url_temp ="http://dart.fss.or.kr/api/search.json?auth={auth}&page_set=100"
url = url_temp.format(auth = auth)
r = requests.get(url)
news = json_normalize(json.loads(r.content) , 'list')
news = news[['crp_cd' , 'crp_cls' , 'crp_nm' , 'rmk' , 'rpt_nm']]
news.head(n=10)
# function that fetches companies filing '임원ㆍ주요주주특정증권등소유상황보고서'
# (report on ownership of specified securities by executives and major shareholders)
def ownership_delta() :
    # STEP 1: download the latest disclosures
    auth = 'fa2804dc433cff0900e8107d9c6afd00382f6fd9'
    url_temp ="http://dart.fss.or.kr/api/search.json?auth={auth}&page_set=100"
    url = url_temp.format(auth = auth)
    r = requests.get(url)
    news = json_normalize(json.loads(r.content) , 'list')
    news = news[['crp_cd' , 'crp_cls' , 'crp_nm' , 'rmk' , 'rpt_nm']]
    # STEP 2: keep only the rows whose rpt_nm is '임원ㆍ주요주주특정증권등소유상황보고서'
    import pandas as pd
    df = pd.DataFrame([news.loc[x] for x in range(len(news)) if news['rpt_nm'][x] == '임원ㆍ주요주주특정증권등소유상황보고서'])
    return df
ownership_delta().head(n=5)
# function that fetches companies filing '단일판매ㆍ공급계약체결' (single-sale / supply contract disclosures)
def bigsales() :
    # STEP 1: download the latest disclosures
    auth = 'fa2804dc433cff0900e8107d9c6afd00382f6fd9'
    url_temp ="http://dart.fss.or.kr/api/search.json?auth={auth}&page_set=100"
    url = url_temp.format(auth = auth)
    r = requests.get(url)
    news = json_normalize(json.loads(r.content) , 'list')
    news = news[['crp_cd' , 'crp_cls' , 'crp_nm' , 'rmk' , 'rpt_nm']]
    # STEP 2: keep only the rows whose rpt_nm is '단일판매ㆍ공급계약체결'
    import pandas as pd
    df = pd.DataFrame([news.loc[x] for x in range(len(news)) if news['rpt_nm'][x] == '단일판매ㆍ공급계약체결'])
    return df
bigsales().head(n=5)
```
## Downloading reports: crawling financial statements and other data
http://dart.fss.or.kr/dsaf001/main.do?rcpNo=20170515003806
```
# Example: Viatron (비아트론)
# annual + half-year + quarterly reports: annual A001, half-year A002, quarterly A003
code = '141000'
url_temp = 'http://dart.fss.or.kr/api/search.json?auth={auth}&crp_cd={code}&start_dt=19990101&bsn_tp=A001&bsn_tp=A002&bsn_tp=A003&page_set=100'
url = url_temp.format(auth = auth , code = code)
r = requests.get(url)
docs = json_normalize(json.loads(r.content), 'list')
docs
# extract the report viewer links
for x in range(len(docs)):
url_tmpl = "http://dart.fss.or.kr/dsaf001/main.do?rcpNo={}"
url = url_tmpl.format(docs['rcp_no'][x])
print (x, url)
```
### url 파일 포맷
- pdf_link_tmpl = "http://dart.fss.or.kr/pdf/download/pdf.do?rcp_no={rcp_no}&dcm_no={dcm_no}"
- excel_link_tmpl = "http://dart.fss.or.kr/pdf/download/excel.do?rcp_no={rcp_no}&dcm_no={dcm_no}&lang=ko"
- ifrs_link_tmpl = "http://dart.fss.or.kr/pdf/download/ifrs.do?rcp_no={rcp_no}&dcm_no={dcm_no}&lang=ko"
```
# once you know rcp_no and dcm_no you can download the report files
import re  # needed for the regular expressions below

def get_report_attach_urls(rcp_no):
pdf_link_tmpl = "http://dart.fss.or.kr/pdf/download/pdf.do?rcp_no={rcp_no}&dcm_no={dcm_no}"
excel_link_tmpl = "http://dart.fss.or.kr/pdf/download/excel.do?rcp_no={rcp_no}&dcm_no={dcm_no}&lang=ko"
ifrs_link_tmpl = "http://dart.fss.or.kr/pdf/download/ifrs.do?rcp_no={rcp_no}&dcm_no={dcm_no}&lang=ko"
attach_urls = []
url = 'http://dart.fss.or.kr/dsaf001/main.do?rcpNo=%s' %(rcp_no)
r = requests.get(url)
start_str = "javascript: viewDoc\('" + rcp_no + "', '"
end_str = "', null, null, null,"
reg = re.compile(start_str + '(.*)' + end_str)
m = reg.findall(r.text)
dcm_no = m[0]
attach_urls.append(pdf_link_tmpl.format(rcp_no=rcp_no, dcm_no= dcm_no))
attach_urls.append(excel_link_tmpl.format(rcp_no=rcp_no, dcm_no= dcm_no))
attach_urls.append(ifrs_link_tmpl.format(rcp_no=rcp_no, dcm_no= dcm_no))
return attach_urls
get_report_attach_urls('20180518000333')
# import os  # could be used to rename the downloaded files
def download_file(url) :
reg = re.compile('rcp_no=' + '(.*)' + '&dcm_no')
file = reg.findall(url)[0][7:]
local_filename = file + "_보고서"
r = requests.get(url, stream=True)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
return local_filename
# download Viatron's 2018 Q1 report
a = get_report_attach_urls('20180518000333')[1]
b = download_file(a)
b
import pandas as pd
import numpy as np
df = pd.read_excel(b , sheet_name= '연결 재무상태표', skiprows=5)
df
df['제 18 기 1분기말'] = pd.to_numeric(df['제 18 기 1분기말'], errors='coerce')
df['제 17 기말'] = pd.to_numeric(df['제 17 기말'], errors='coerce')
df['change'] = df['제 18 기 1분기말']/df['제 17 기말'] -1
df
```
# Q learner with fictitious play
```
import numpy as np
from engine import RMG
from agent import RandomAgent, IndQLearningAgent, FPLearningAgent, PHCLearningAgent, Level2QAgent, WoLFPHCLearningAgent
N_EXP = 10
r0ss = []
r1ss = []
for n in range(N_EXP):
batch_size = 1
max_steps = 20
gamma = 0.96
# Reward matrix for the Chicken game
ipd_rewards = np.array([[0., 1.], [-2., -4.]])
env = RMG(max_steps=max_steps, payouts=ipd_rewards, batch_size=batch_size)
env.reset()
possible_actions = [0, 1] # Cooperate or Defect
adversary, dm = WoLFPHCLearningAgent(possible_actions, n_states=1, learning_rate=0.1, delta_w=0.01, delta_l = 0.04, epsilon=0.05, gamma=gamma), \
FPLearningAgent(possible_actions, possible_actions, n_states=1, learning_rate=0.1, epsilon=0.05, gamma=gamma)
# Stateless interactions (agents do not have memory)
s = 0
n_iter = 1000
r0s = []
r1s = []
for i in range(n_iter):
# A full episode:
done = False
while not done:
# Agents decide
a0 = dm.act(s)
a1 = adversary.act(s)
# World changes
_, (r0, r1), done, _ = env.step(([a0], [a1]))
# Agents learn
dm.update(s, (a0, a1), (r0, r1), s )
adversary.update(s, (a1, a0), (r1, r0), s )
#s = new_s #stateless!
#print(r0, r1)
r0s.append(r0[0])
r1s.append(r1[0])
env.reset()
print(n)
r0ss.append(r0s)
r1ss.append(r1s)
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
```
We report the moving average of rewards, as is common in RL tasks.
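As a sanity check of the cumulative-sum trick used in `moving_average` (the function is repeated here so the snippet is self-contained):

```python
import numpy as np

def moving_average(a, n=3):
    # running sums: ret[i] = a[0] + ... + a[i]
    ret = np.cumsum(a, dtype=float)
    # subtract the sum that fell out of the window
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n

print(moving_average([1, 2, 3, 4, 5], n=3))  # [2. 3. 4.]
```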
```
%matplotlib inline
import matplotlib.pyplot as plt
# We set a fancy theme
plt.style.use('ggplot')
plt.axis([0, max_steps*n_iter, -4.5, 1.5])
for i in range(N_EXP):
plt.plot(moving_average(r0ss[i], 100), 'b', alpha=0.05)
plt.plot(moving_average(r1ss[i], 100), 'r', alpha=0.05)
plt.plot(moving_average(np.asarray(r0ss).mean(axis=0), 100), 'b', alpha=0.5)
plt.plot(moving_average(np.asarray(r1ss).mean(axis=0), 100), 'r', alpha=0.5)
plt.xlabel('t');
plt.ylabel('R');
from matplotlib.lines import Line2D
cmap = plt.cm.coolwarm
custom_lines = [Line2D([0], [0], color='b'),
Line2D([0], [0], color='r')]
plt.legend(custom_lines,['Agent A', 'Agent B']);
plt.savefig('img/L1vsWoLF_C.png')
social_utility = 0.5*(np.asarray(r0ss).mean(axis=0) + np.asarray(r1ss).mean(axis=0))
plt.axis([0, max_steps*n_iter, -4.5, 1.5])
plt.plot(moving_average(social_utility, max_steps), 'g')
plt.xlabel('t');
plt.ylabel('$ \dfrac{R_A + R_B}{2} $');
dm.Q
adversary.Q
```
```
import matplotlib.pyplot as plt
import pandas as pd
import datetime
import numpy as np
from sklearn.cluster import KMeans
from mpl_toolkits.mplot3d import Axes3D
from importnb import Notebook
with Notebook():
from RFM_model import RFM
from utility import Utility
from data_preprocessing import Data
transaction_data = Data('transaction_data.csv')
transaction_data.clean_data()
transaction_data.get_data_info(transaction_data.data_cleaned)
transaction_data.data_cleaned.Country.value_counts()[:5].plot(kind='bar',color='Yellow')
plt.title('Top 5 Countries')
plt.ylabel('Number of Users')
plt.show()
transaction_data.data_cleaned.ItemCode.value_counts()[:5].plot(kind='bar',color='Blue')
plt.title('Top 5 Items')
plt.ylabel('Number of times purchased')
plt.xlabel('Item Code')
plt.show()
top_users = transaction_data.data_cleaned.groupby('UserId').agg({'TransactionId': lambda num: len(np.unique(num))})
top_users = top_users.sort_values('TransactionId',ascending=False)
top_users[:10].plot(kind='bar', color='Green',legend=None)
plt.title('Top 10 Users based on number of transactions made')
plt.ylabel('Number of Transaction')
plt.xlabel('User ID')
plt.show()
rfm_model = RFM()
rfm_model.get_rfm_values(transaction_data.data_cleaned)
rfm_model.calculate_rfm_score()
plt.figure(figsize=(12,6))
rfm_model.RFM['RFM'].value_counts()[:len(np.unique(rfm_model.RFM['RFM']))].plot(kind='bar', color='Orange')
plt.title('Number of Users per RFM value')
plt.ylabel('Number of User')
plt.xlabel('RFM Score')
plt.show()
rfm_model.plot_boxplot(rfm_model.RFM,False)
rfm_model.remove_outliers()
rfm_model.plot_boxplot(rfm_model.RFM_outlier_free,True)
ssd = []
for k in range(1,20):
kmeans = KMeans(n_clusters=k, init="k-means++")
kmeans.fit(rfm_model.RFM_outlier_free[['Recency','Frequency','Monetary']])
ssd.append(kmeans.inertia_)
plt.figure(figsize=(12,6))
plt.grid()
plt.plot(range(1,20),ssd, linewidth=3, color="Green", marker ="*", markerfacecolor="Red", markerfacecoloralt="Red", markersize=12)
plt.xlabel("K Value")
plt.xticks(np.arange(1,20,1))
plt.ylabel("SSD")
plt.title('Elbow method')
plt.show()
k_m = KMeans(n_clusters=3)
clusters = k_m.fit_predict(rfm_model.RFM_outlier_free[['Recency','Frequency','Monetary']])
rfm_model.RFM_outlier_free["label"] = clusters
rfm_model.RFM_outlier_free.head(5)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(np.array(rfm_model.RFM_outlier_free.Recency[rfm_model.RFM_outlier_free.label == 0]), np.array(rfm_model.RFM_outlier_free.Frequency[rfm_model.RFM_outlier_free.label == 0]), np.array(rfm_model.RFM_outlier_free.Monetary[rfm_model.RFM_outlier_free.label == 0]), c='Blue')
ax.scatter(np.array(rfm_model.RFM_outlier_free.Recency[rfm_model.RFM_outlier_free.label == 1]), np.array(rfm_model.RFM_outlier_free.Frequency[rfm_model.RFM_outlier_free.label == 1]), np.array(rfm_model.RFM_outlier_free.Monetary[rfm_model.RFM_outlier_free.label == 1]), c='Orange')
ax.scatter(np.array(rfm_model.RFM_outlier_free.Recency[rfm_model.RFM_outlier_free.label == 2]), np.array(rfm_model.RFM_outlier_free.Frequency[rfm_model.RFM_outlier_free.label == 2]), np.array(rfm_model.RFM_outlier_free.Monetary[rfm_model.RFM_outlier_free.label == 2]), c='Green')
ax.set_xlabel('Recency')
ax.set_ylabel('Frequency')
ax.set_zlabel('Monetary')
plt.show()
output = pd.DataFrame()
output['UserId'] = list(rfm_model.RFM_outlier_free.index.values)
output['Cluster'] = list(rfm_model.RFM_outlier_free['label'])
# use .loc to avoid pandas chained-assignment issues
output.loc[output['Cluster'] == 0, 'Cluster'] = 'Blue'
output.loc[output['Cluster'] == 1, 'Cluster'] = 'Orange'
output.loc[output['Cluster'] == 2, 'Cluster'] = 'Green'
output.head(10)
```
| github_jupyter |
## Resample data into Healpix
1. Create HEALPix grid
2. Extract data if it is still in a zip file
3. Interpolate from initial points to Healpix grid. Uses linear interpolation
4. Save file
Interpolation possibilities:
1. Interpolate only Temperature, Geopotential and TOA
2. Interpolate all files and save chunked version
3. Interpolate all files and save without specifying chunks
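The size of the target grid follows directly from the HEALPix construction, which tiles the sphere with 12 base pixels, each subdivided nside**2 times; for the nside used below:

```python
def healpix_npix(nside):
    # 12 base pixels, each split into nside**2 sub-pixels
    return 12 * nside ** 2

healpix_npix(16)  # → 3072 pixels for the nside=16 grid used below
```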
```
import sys
sys.path.append('/'.join(sys.path[0].split('/')[:-1]))
import zipfile
import matplotlib.pyplot as plt
import xarray as xr
import numpy as np
import xesmf as xe
import os
from pathlib import Path
import healpy as hp
# use "nearest neighbors" for datapoints outside the convex hull
input_dir = '../data/equiangular/5.625deg/'
nearest = True
if nearest:
output_dir = '../data/healpix/5.625deg_nearest/'
else:
output_dir = '../data/healpix/5.625deg/'
#output_dir = '../data/healpix/5.625deg_chunks/'
nside = 16
n_pixels = 12*(nside**2)
# New HEALPix grid
n_pixels = 12*(nside**2)
hp_lon, hp_lat = hp.pix2ang(nside, np.arange(n_pixels), lonlat=True, nest=True)
lon_idx = xr.DataArray(hp_lon, dims=["lon"])
lat_idx = xr.DataArray(hp_lat, dims=["lat"])
# create the output directory that matches the chosen interpolation mode
if not os.path.isdir(output_dir):
    os.makedirs(output_dir)
all_files = os.listdir(input_dir)
all_files
```
### Option 1. Only Temperature, Geopotential and TOA. Save with new chunks
Geopotential
```
ds = xr.open_mfdataset(input_dir + 'geopotential_500' + '/*.nc', combine='by_coords')
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node', method='nearest')
aux = xr.Dataset({'z': (['time', 'node'], interp_ds.z.data.rechunk((500,-1)))}, \
coords={'time':interp_ds.time.values,
'node':interp_ds.node.values})
aux = aux.assign_coords(lat = ('node', interp_ds.lat.values), lon=('node', interp_ds.lon.values))
aux.to_netcdf(output_dir + 'geopotential_500' + '/geopotential_500_5.625deg.nc')
```
Temperature
```
ds = xr.open_mfdataset(input_dir + 'temperature_850' + '/*.nc', combine='by_coords')
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node', method='nearest')
aux = xr.Dataset({'t': (['time', 'node'], interp_ds.t.data.rechunk((500,-1)))}, \
coords={'time':interp_ds.time.values,
'node':interp_ds.node.values})
aux = aux.assign_coords(lat = ('node', interp_ds.lat.values), lon=('node', interp_ds.lon.values))
aux.to_netcdf(output_dir + 'temperature_850' + '/temperature_850_5.625deg.nc')
```
TOA
```
ds = xr.open_mfdataset(input_dir + 'toa_incident_solar_radiation' + '/*.nc', combine='by_coords')
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node', method='nearest')
aux = xr.Dataset({'tisr': (['time', 'node'], interp_ds.tisr.data.rechunk((500,-1)))}, \
coords={'time':interp_ds.time.values,
'node':interp_ds.node.values})
aux = aux.assign_coords(lat = ('node', interp_ds.lat.values), lon=('node', interp_ds.lon.values))
aux.to_netcdf(output_dir + 'toa_incident_solar_radiation' + '/toa_incident_solar_radiation_5.625deg.nc')
```
## Option 2.
### Function for interpolation; generates and saves one chunked file per year
```
def save_mf(dir_files):
ds = xr.open_mfdataset(input_dir + dir_files + '/*.nc', combine='by_coords')
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node', method='nearest')
interp_ds = interp_ds.assign_coords(node=np.arange(n_pixels))
aux = interp_ds.chunk({'time':720})
years, datasets = zip(*aux.groupby("time.year"))
out_path = output_dir + dir_files + "/"
out_filename = dir_files + '_5.625deg_'
paths = [out_filename + "%s.nc" % y for y in years]
xr.save_mfdataset(datasets, [out_path + p for p in paths])
for f in all_files:
print('Processing ', f)
save_mf(f)
```
## Option 3.
### Function for interpolation generation, NO chunking
```
for f in all_files:
print('Working on ', f)
# Check if file has been unzip
if not os.path.isfile(input_dir + f + '/' + f + '_1979_5.625deg.nc') and not os.path.isfile(input_dir + f + '/' + f + '_5.625deg.nc'):
print('Data Extraction...')
# extract data in the same folder
with zipfile.ZipFile(input_dir + f + '/' + f + '_5.625deg.zip',"r") as zip_ref:
zip_ref.extractall(input_dir + f + '/')
# Interpolate
print('Interpolation...')
ds = xr.open_mfdataset(input_dir + f + '/*.nc', combine='by_coords')
#linear interpolation
if nearest:
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node', method='nearest')
else:
interp_ds = ds.interp(lon=('node', lon_idx), lat=('node', lat_idx)).interpolate_na(dim='node')
interp_ds = interp_ds.assign_coords(node=np.arange(n_pixels))
# Create out folder
out_path = output_dir + f + "/"
Path(out_path).mkdir(parents=True, exist_ok=True)
# Save
out_filename = f + '_5.625deg.nc'
interp_ds = interp_ds.chunk({'time':720})
interp_ds.to_netcdf(out_path + out_filename)
```
Plot of values on the original grid. Dots in red represent the new grid.
```
fig, ax = plt.subplots(figsize=(10,5))
ds_aux.tisr.isel(time=0).plot(ax=ax)
ax.plot(lon_idx.values, lat_idx.values, 'o', alpha=0.5, c='red', markersize=1)
plt.show()
```
Interpolated with xESMF
```
ax = ds_out.tisr.isel(time=0).sortby(['lon','lat']).plot()
```
| github_jupyter |
# Loading Fake Timeseries Surface Data
This notebook is designed to explore some functionality with loading DataFiles and using Loaders.
This example requires some extra optional libraries, including nibabel and nilearn. Note: although nilearn is not imported directly here, importing SingleConnectivityMeasure will raise an ImportError if nilearn is not installed.
We will also use fake data for this example - so no special datasets required!
```
import BPt as bp
import nibabel as nib
import numpy as np
import pandas as pd
import os
def save_fake_timeseries_data():
'''Save fake timeseries and fake surface data.'''
X = np.random.random(size = (20, 100, 10242))
os.makedirs('fake_time_data', exist_ok=True)
for x in range(len(X)):
np.save('fake_time_data/' + str(x) + '_lh', X[x])
for x in range(len(X)):
np.save('fake_time_data/' + str(x) + '_rh', X[x])
save_fake_timeseries_data()
# Init a Dataset
data = bp.Dataset()
```
Next, we are interested in loading in the files to the dataset as data files. There are a few different ways to do this, but we will use the method add_data_files. We will try and load the timeseries data first.
First we need a dictionary mapping desired column name to location or a file glob (which is easier so let's use that).
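The wildcard patterns are ordinary file globs. A minimal standalone sketch (using throwaway empty files in a temp directory, with hypothetical names mimicking the scheme above) of how such a pattern selects the left-hemisphere files:

```python
import glob
import os
import tempfile

# create a few empty files that mimic the '<subject>_<hemisphere>' naming scheme
d = tempfile.mkdtemp()
for name in ['0_lh.npy', '0_rh.npy', '1_lh.npy']:
    open(os.path.join(d, name), 'w').close()

# '*_lh*' matches any filename containing '_lh'
matches = sorted(glob.glob(os.path.join(d, '*_lh*')))
[os.path.basename(m) for m in matches]  # → ['0_lh.npy', '1_lh.npy']
```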
```
# The *'s just mean wildcard
files = {'timeseries_lh': 'fake_time_data/*_lh*',
'timeseries_rh': 'fake_time_data/*_rh*'}
# Now let's try loading with 'auto' as the file to subject function
data.add_data_files(files, 'auto')
```
We can see 'auto' doesn't work for us, so we can try writing our own function instead.
```
def file_to_subj(loc):
return loc.split('/')[-1].split('_')[0]
# Actually load it this time
data = data.add_data_files(files, file_to_subj)
data
```
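For instance, the function above recovers the subject id from a path shaped like the ones saved earlier:

```python
def file_to_subj(loc):
    # take the filename, then everything before the first underscore
    return loc.split('/')[-1].split('_')[0]

file_to_subj('fake_time_data/3_lh.npy')  # → '3'
```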
What's this, though? Why are the files showing up as Loc(int)? What's going on is that the data files are really stored as just integers, see:
```
data['timeseries_lh']
```
They correspond to locations in a stored file mapping (note: you don't need to worry about any of this most of the time)
```
data.file_mapping[0], data.file_mapping[1], data.file_mapping[2]
```
Let's add a fake target to our dataset now
```
data['t'] = np.random.random(len(data))
data.set_target('t', inplace=True)
data
```
Next we will generate a Loader to apply a parcellation, then extract a measure of connectivity.
```
from BPt.extensions import SurfLabels
lh_parc = SurfLabels(labels='data/lh.aparc.annot', vectorize=False)
rh_parc = SurfLabels(labels='data/rh.aparc.annot', vectorize=False)
```
We can see how this object works on example data first.
```
ex_lh = data.file_mapping[0].load()
ex_lh.shape
trans = lh_parc.fit_transform(ex_lh)
trans.shape
```
We essentially get a reduction from 10242 features to 35.
Next, we want to transform the matrix into a correlation matrix.
```
from BPt.extensions import SingleConnectivityMeasure
scm = SingleConnectivityMeasure(kind='covariance', discard_diagonal=True, vectorize=True)
scm.fit_transform(trans).shape
```
The single connectivity measure is just a wrapper designed to let the ConnectivityMeasure from nilearn work with a single subject's data at a time.
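Conceptually, what the wrapper computes for one subject can be sketched in plain numpy (the shapes mirror the 100-timepoint, 35-region example above; the exact vectorization order used by nilearn may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.normal(size=(100, 35))      # one subject: timepoints x regions

cov = np.cov(ts, rowvar=False)       # 35 x 35 covariance matrix
iu = np.triu_indices_from(cov, k=1)  # indices strictly above the diagonal
vec = cov[iu]                        # vectorize=True, discard_diagonal=True

vec.shape  # → (595,), i.e. 35 * 34 / 2 features per subject
```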
Next, let's use the input special Pipe wrapper to compose these two objects into their own pipeline
```
lh_loader = bp.Loader(bp.Pipe([lh_parc, scm]), scope='_lh')
rh_loader = bp.Loader(bp.Pipe([rh_parc, scm]), scope='_rh')
```
Define a simple pipeline with just our loader steps, then evaluate with mostly default settings.
```
pipeline = bp.Pipeline([lh_loader, rh_loader, bp.Model('linear')])
results = bp.evaluate(pipeline, data)
results
```
Don't be discouraged that this didn't work; we are, after all, trying to predict random noise from random noise ...
```
# These are the steps of the pipeline
fold0_pipeline = results.estimators[0]
for step in fold0_pipeline.steps:
print(step[0])
```
We can investigate pieces, or use special functions like
```
results.get_X_transform_df(data, fold=0)
```
| github_jupyter |
Based on GM @aerdem4's Keras CNN (lofoCNN).
(This is Keras on the TensorFlow backend, so there is no need to change ~/.keras/keras.json.)
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('./'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
from keras.models import Model, load_model
from keras.layers import Input, Dropout, Dense, Embedding, SpatialDropout1D, concatenate, BatchNormalization, Flatten
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing import text, sequence
from keras.callbacks import Callback
from keras import backend as K
from keras.models import Model
from keras.losses import mean_squared_error as mse_loss
from keras import optimizers
from keras.optimizers import RMSprop, Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
train = pd.read_feather("./train_simple_cleanup.feather")
train.columns
from sklearn.preprocessing import LabelEncoder
# le = LabelEncoder()
# train["primary_use"] = le.fit_transform(train["primary_use"])
categoricals = ["site_id", "building_id", "primary_use", "tm_hour_of_day", "tm_day_of_week", "meter"]
drop_cols = ["sea_level_pressure", "wind_speed", "wind_direction"]
fonct_cols_list = ['meter_reading_0',
'meter_reading_1', 'meter_reading_2', 'meter_reading_3',
'cnt_building_per_site', 'cnt_building_per_site_prim',
'sqr_mean_per_site', 'sqr_mean_per_prim_site',
'RH', 'heat',
'windchill', 'feellike', 'air_temperature_mean_lag3',
'dew_temperature_mean_lag3', 'heat_mean_lag3', 'windchill_mean_lag3',]
numcols_3d = ["tm_hour_of_day", 'RH', 'heat', 'windchill', 'feellike',
"air_temperature", "cloud_coverage","dew_temperature", "precip_depth_1_hr"]
had_cols_list = ['had_air_temperature', 'had_cloud_coverage',
'had_dew_temperature', 'had_precip_depth_1_hr',
'had_sea_level_pressure', 'had_wind_direction', 'had_wind_speed',
'had_RH', 'had_heat', 'had_windchill', 'had_feellike',
'had_air_temperature_mean_lag3', 'had_dew_temperature_mean_lag3',
'had_heat_mean_lag3', 'had_windchill_mean_lag3',
'had_feellike_mean_lag3']
numericals = ["square_feet", "year_built", "air_temperature", "cloud_coverage",
"dew_temperature", "precip_depth_1_hr", "floor_count", ]+drop_cols
feat_cols = categoricals + numericals + fonct_cols_list + had_cols_list
len(numcols_3d)
train['square_feet'] = train['square_feet'].apply(lambda x: int(x/1000))
target = train["meter_reading"]#np.log1p(train["meter_reading"])
del train["meter_reading"]
# train = train.drop(drop_cols, axis = 1)
max_col = {}
for col in numericals+ fonct_cols_list:
if col != 'meter_reading':
max_ = max(-train[col].min(), train[col].max())
train[col] = train[col] / max_
max_col[col]=max_
max_col
train.columns
# https://www.kaggle.com/divrikwicky/lightweight-version-of-2-65-custom-nn
def nn_block(input_layer, size, dropout_rate, activation):
out_layer = Dense(size, activation=None)(input_layer)
out_layer = BatchNormalization()(out_layer)
out_layer = Activation(activation)(out_layer)
out_layer = Dropout(dropout_rate)(out_layer)
return out_layer
def cnn_block(input_layer, size, dropout_rate, activation):
out_layer = Conv1D(size, 1, activation=None)(input_layer)
out_layer = BatchNormalization()(out_layer)
out_layer = Activation(activation)(out_layer)
out_layer = Dropout(dropout_rate)(out_layer)
return out_layer
def model_lofo(dense_dim_1=64, dense_dim_2=32, dense_dim_3=32, dense_dim_4=16,
dropout1=0.2, dropout2=0.1, dropout3=0.1, dropout4=0.1, lr=0.001):
#Inputs
site_id = Input(shape=[1], name="site_id")
building_id = Input(shape=[1], name="building_id")
meter = Input(shape=[1], name="meter")
primary_use = Input(shape=[1], name="primary_use")
square_feet = Input(shape=[1], name="square_feet")
year_built = Input(shape=[1], name="year_built")
air_temperature = Input(shape=[1], name="air_temperature")
cloud_coverage = Input(shape=[1], name="cloud_coverage")
dew_temperature = Input(shape=[1], name="dew_temperature")
tm_hour_of_day = Input(shape=[1], name="tm_hour_of_day")
precip = Input(shape=[1], name="precip_depth_1_hr")
tm_day_of_week = Input(shape=[1], name="tm_day_of_week")
# beaufort_scale = Input(shape=[1], name="beaufort_scale")
orig_feature = Input(shape=[len(numericals)], name="orig_feature")
had_cols = Input(shape=[len(had_cols_list)], name="had_cols")
fonct_cols = Input(shape=[len(fonct_cols_list)], name="fonct_cols")
all_num3D = Input(shape=(train[numcols_3d].shape[1], 3), name="all_num3D")
#Embeddings layers
emb_site_id = Embedding(16, 2)(site_id)
emb_building_id = Embedding(1449, 3)(building_id)
emb_meter = Embedding(4, 2)(meter)
emb_primary_use = Embedding(16, 2)(primary_use)
# emb_hour = Embedding(24, 2)(tm_hour_of_day)
# emb_weekday = Embedding(7, 2)(tm_day_of_week)
concat_emb = concatenate([
Flatten() (emb_site_id)
, Flatten() (emb_building_id)
, Flatten() (emb_primary_use)
# , Flatten() (emb_hour)
# , Flatten() (emb_weekday)
])
# categ = Dropout(dropout1)(Dense(dense_dim_1,activation='relu') (concat_emb))
# categ = BatchNormalization()(categ)
# categ = Dropout(dropout2)(Dense(dense_dim_2,activation='relu') (categ))
#main layer
main_l = concatenate([
# categ
square_feet
, year_built
, air_temperature
, cloud_coverage
, dew_temperature
, tm_hour_of_day
, tm_day_of_week
, precip
# , beaufort_scale
])
dense_input = concatenate([orig_feature, concat_emb, had_cols, fonct_cols])
# type_input = Input(shape=(1,))
# type_emb = Flatten()(Embedding(dev_df["le_type"].max() + 1, 5)(type_input))
type_emb = Flatten()(emb_meter)
mol_input = all_num3D #RepeatVector(3)(dense_input) #Input(shape=(M_train.shape[1], M_train.shape[2]))
mol_layer = cnn_block(mol_input, 200, 0.05, "relu")
mol_layer = cnn_block(mol_layer, 100, 0.05, "relu")
merged_input = BatchNormalization()(concatenate([dense_input, type_emb,
GlobalMaxPooling1D()(mol_layer), GlobalAvgPool1D()(mol_layer)]))
x1 = Dense(200, activation="relu")(merged_input)
x2 = nn_block(x1, 20, 0.01, "sigmoid")
mol_layer = concatenate([RepeatVector(9)(x2), mol_layer])
mol_layer = cnn_block(mol_layer, 200, 0.05, "relu")
mol_layer = cnn_block(mol_layer, 100, 0.05, "relu")
merged_input = BatchNormalization()(concatenate([dense_input, type_emb,
GlobalMaxPooling1D()(mol_layer), GlobalAvgPool1D()(mol_layer)]))
hidden_layer = concatenate([Dense(600, activation="relu")(merged_input), x1])
hidden_layer = nn_block(hidden_layer, 400, 0.05, "relu")
hidden_layer = nn_block(hidden_layer, 200, 0.05, "relu")
hidden_layer = nn_block(hidden_layer, 100, 0.05, "relu")
hidden_layer = concatenate([Flatten()(emb_meter), hidden_layer])
output = Dense(1, activation="linear")(hidden_layer)
model = Model([ site_id,
building_id,
meter,
primary_use,
square_feet,
year_built,
air_temperature,
cloud_coverage,
dew_temperature,
tm_hour_of_day,
tm_day_of_week,
precip,
all_num3D,
orig_feature,
had_cols,
fonct_cols,
], output)
model.compile(optimizer = Nadam(lr=lr),
loss= mse_loss,
metrics=[root_mean_squared_error])
return model
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=0))
from pandas.api.types import is_datetime64_any_dtype as is_datetime
from pandas.api.types import is_categorical_dtype
def reduce_mem_usage(df, use_float16=False):
"""
Iterate through all the columns of a dataframe and modify the data type to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print("Memory usage of dataframe is {:.2f} MB".format(start_mem))
for col in df.columns:
if is_datetime(df[col]) or is_categorical_dtype(df[col]):
continue
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if use_float16 and c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype("category")
end_mem = df.memory_usage().sum() / 1024**2
print("Memory usage after optimization is: {:.2f} MB".format(end_mem))
print("Decreased by {:.1f}%".format(100 * (start_mem - end_mem) / start_mem))
return df
# Introduce lagged features (1-hour and 2-hour lags)
train = reduce_mem_usage(train)
train1 = train.groupby(['building_id', 'meter'])[numcols_3d].shift(1).fillna(0)
train1 = reduce_mem_usage(train1)
train2 = train.groupby(['building_id', 'meter'])[numcols_3d].shift(2).fillna(0)
train2 = reduce_mem_usage(train2)
# Function to return the data as a 3D table embedded in a dictionary:
def get_keras_data(df, df1, df2, num_cols, cat_cols):
cols = cat_cols + num_cols
X = {col: np.array(df[col]) for col in cols}
X['orig_feature'] = np.array(df[numericals])
X['had_cols'] = np.array(df[had_cols_list])
X['fonct_cols'] = np.array(df[fonct_cols_list])
X['all_num0'] = np.array(df[numcols_3d])
X['all_num1'] = np.array(df1[numcols_3d])
X['all_num2'] = np.array(df2[numcols_3d])
M = np.zeros((df[numcols_3d].shape[0], df[numcols_3d].shape[1], 3), dtype=np.float32)
M[:,:,0] = X['all_num0']
M[:,:,1] = X['all_num1']
M[:,:,2] = X['all_num2']
X['all_num3D'] = M
return X
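# A toy illustration of the 3D stacking performed above (hypothetical shapes):
# three (rows x features) frames are combined along a new last axis, so channel 0
# holds the current values and channels 1 and 2 hold the lagged copies.
import numpy as np  # already imported above; repeated so this snippet stands alone
_f0 = np.zeros((4, 2), dtype=np.float32)
_f1 = np.ones((4, 2), dtype=np.float32)
_f2 = 2 * np.ones((4, 2), dtype=np.float32)
_demo = np.stack([_f0, _f1, _f2], axis=-1)
_demo.shape  # (4, 2, 3) -- same layout as M: (rows, len(numcols_3d), 3)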
def train_model(keras_model, X_t, y_train, batch_size, epochs, X_v, y_valid, fold, patience=3):
early_stopping = EarlyStopping(patience=patience, verbose=1)
model_checkpoint = ModelCheckpoint("cnn_model_" + str(fold) + ".hdf5",
save_best_only=True, verbose=1, monitor='val_root_mean_squared_error', mode='min')
hist = keras_model.fit(X_t, y_train, batch_size=batch_size, epochs=epochs,
validation_data=(X_v, y_valid), verbose=1,
callbacks=[early_stopping, model_checkpoint])
keras_model = load_model("cnn_model_" + str(fold) + ".hdf5", custom_objects={'root_mean_squared_error': root_mean_squared_error})
return keras_model
from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold
import gc
# from sklearn.model_selection import GroupKFold
#
from keras.layers import *
from keras.models import Model
from keras.optimizers import Nadam
oof = np.zeros(len(train))
batch_size = 4096
epochs = 5
models = []
seed = 666
folds = 2
kf = KFold(n_splits=folds, shuffle=False)  # random_state is ignored (and rejected by newer sklearn) when shuffle=False
for fold_n, (train_index, valid_index) in enumerate(kf.split(train, target)):
# for fold_n, (train_index, valid_index) in enumerate(weirdfolds):
print('Fold:', fold_n)
X_train, X_valid = train.iloc[train_index], train.iloc[valid_index]
y_train, y_valid = target.iloc[train_index], target.iloc[valid_index]
X_train1, X_valid1 = train1.iloc[train_index], train1.iloc[valid_index]
X_train2, X_valid2 = train2.iloc[train_index], train2.iloc[valid_index]
# X_train, X_valid = train.iloc[valid_index], train.iloc[train_index]
# y_train, y_valid = target.iloc[valid_index], target.iloc[train_index]
X_t = get_keras_data(X_train, X_train1, X_train2, numericals, categoricals)
X_v = get_keras_data(X_valid, X_valid1, X_valid2, numericals, categoricals)
del X_train, X_valid, X_train1, X_valid1,X_train2, X_valid2
gc.collect()
keras_model = model_lofo(dense_dim_1=64, dense_dim_2=64, dense_dim_3=16, dense_dim_4=8,
dropout1=0.2, dropout2=0.1, dropout3=0.1, dropout4=0.1, lr=0.001)
mod = train_model(keras_model, X_t, y_train, batch_size, epochs, X_v, y_valid, fold_n, patience=3)
models.append(mod)
gc.collect()
print('*'* 50)
gc.collect()
models
numericals
import gc
test = pd.read_feather('test_simple_cleanup.feather')
test['square_feet'] = test['square_feet'].apply(lambda x: int(x/1000))
test = test[feat_cols]
len(feat_cols), len(['site_id',
'building_id',
'primary_use',
'tm_hour_of_day',
'tm_day_of_week',
'meter',
'square_feet',
'year_built',
'air_temperature',
'cloud_coverage',
'dew_temperature',
'precip_depth_1_hr',
'floor_count',
'meter_reading_0',
'meter_reading_1',
'meter_reading_2',
'meter_reading_3',
'cnt_building_per_site',
'cnt_building_per_site_prim',
'sqr_mean_per_site',
'sqr_mean_per_prim_site',
'RH',
'heat',
'windchill',
'feellike',
'air_temperature_mean_lag3',
'dew_temperature_mean_lag3',
'heat_mean_lag3',
'windchill_mean_lag3',
'had_air_temperature',
'had_cloud_coverage',
'had_dew_temperature',
'had_precip_depth_1_hr',
'had_sea_level_pressure',
'had_wind_direction',
'had_wind_speed',
'had_RH',
'had_heat',
'had_windchill',
'had_feellike',
'had_air_temperature_mean_lag3',
'had_dew_temperature_mean_lag3',
'had_heat_mean_lag3',
'had_windchill_mean_lag3',
'had_feellike_mean_lag3'])
feat_cols
max_col
for col in numericals+ fonct_cols_list:
if col != 'meter_reading':
# max_ = max(-train[col].min(), train[col].max())
test[col] = test[col] / max_col[col]
test = reduce_mem_usage(test)
test1 = test.groupby(['building_id', 'meter'])[numcols_3d].shift(1).fillna(0)
test1 = reduce_mem_usage(test1)
test2 = test.groupby(['building_id', 'meter'])[numcols_3d].shift(2).fillna(0)
test2 = reduce_mem_usage(test2)
len(numcols_3d),
from tqdm import tqdm
i=0
res = np.zeros((test.shape[0]),dtype=np.float32)
step_size = 50000
for j in tqdm(range(int(np.ceil(test.shape[0]/step_size)))):
# for_prediction = get_keras_data(test.iloc[i:i+step_size], numericals, categoricals)
for_prediction = get_keras_data(test.iloc[i:i+step_size], test1.iloc[i:i+step_size], test2.iloc[i:i+step_size], numericals, categoricals)
res[i:min(i+step_size,test.shape[0])] = \
np.expm1(sum([model.predict(for_prediction, batch_size=1024)[:,0] for model in models])/len(models))
i+=step_size
# for_prediction = get_keras_data(test, test1, test2, numericals, categoricals)
# res = np.expm1(sum([model.predict(for_prediction, batch_size=1024)[:,0] for model in models])/len(models))
res
submission = pd.read_csv('sample_submission.csv')
submission['meter_reading'] = res
submission.loc[submission['meter_reading']<0, 'meter_reading'] = 0
submission
submission.to_csv('submission_nn007lofo.csv.gz', index=False, compression='gzip', float_format='%.4f')
#!kaggle competitions submit -c ashrae-energy-prediction -f submission_nn007lofo.csv.gz -m "keras cnn"
1+1
```
| github_jupyter |
Import All Required Libraries
```
#Code : Imports
import json
import os
from langdetect import detect as detectlang, DetectorFactory
DetectorFactory.seed = 0
#from textblob import TextBlob
import zipfile
import pandas as pd
import seaborn as sns
sns.set(style="darkgrid")
import matplotlib.pyplot as plt
import re
%matplotlib inline
import warnings; warnings.simplefilter('ignore')
```
# Helper Functions
The cell below has helper functions for extracting information about hashtags
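The tf/df bookkeeping below counts, per lowercased hashtag, the total number of mentions (tf) and the number of tweets containing it (df). A toy run of the same logic on two hypothetical tweets:

```python
tweets = [['#Food', '#food', '#nyc'], ['#nyc']]  # hypothetical hashtag lists

hashtags = {}
for htlist in tweets:
    # per-tweet counts first, so df is incremented at most once per tweet
    curr = {}
    for t in htlist:
        curr[t.lower()] = curr.get(t.lower(), 0) + 1
    for ht, cnt in curr.items():
        if ht in hashtags:
            hashtags[ht]['tf'] += cnt
            hashtags[ht]['df'] += 1
        else:
            hashtags[ht] = {'tf': cnt, 'df': 1}

hashtags  # → {'#food': {'tf': 2, 'df': 1}, '#nyc': {'tf': 2, 'df': 2}}
```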
```
#Code Helper Functions Hashtag Extraction
#Updates HashTag Dictionary
def UpdateHasttagDict(htlist, hashtags):
curr_doc_counts = {}
for t in htlist:
tlower = t.lower()
curr_doc_counts[tlower] = curr_doc_counts.get(tlower,0)+1
for ht,doccnt in curr_doc_counts.items():
if ht in hashtags:
hashtags[ht]['tf'] +=doccnt
hashtags[ht]['df'] +=1
else:
hashtags[ht] = {'tf' :doccnt , 'df': 1}
return hashtags
#Extracts HashTags from given text passed as input
def ExtractHashTags(text):
htpattern = r"#\w+"
htlist = re.findall(htpattern,text)
return htlist
# Extract hashtags from JSON as well as text and compare for mismatches
def ExtractHTInfo(tjson, hashtags, regex_doesntmatch):
tweet_text = tjson['full_text'].replace('\n',' ').strip()
#Extract HTS from JSON
htjsonlist = ['#'+ht['text'] for ht in tjson['entities']['hashtags']]
#Extract HTS from Based on regex
htregexlist = ExtractHashTags(tweet_text)
#ValidateRegex
setdiffA = set(htjsonlist) - set(htregexlist)
setdiffB = set(htregexlist) - set(htjsonlist)
validate_regex = len(setdiffA) + len(setdiffB) + len(htjsonlist) - len(htregexlist)
if(validate_regex !=0):
link = 'https://twitter.com/' +tjson['user']['screen_name']+'/status/' + tjson['id_str']
regex_doesntmatch.append((setdiffA,setdiffB, link, tweet_text))
hashtags = UpdateHasttagDict(htregexlist, hashtags)
#hashtags = UpdateHasttagDict(htjsonlist, hashtags)
return hashtags,regex_doesntmatch
```
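As a quick check of the #\w+ pattern used in ExtractHashTags above, on a made-up tweet:

```python
import re

text = "Great #Service and fair #price! #service"
re.findall(r"#\w+", text)  # → ['#Service', '#price', '#service']
```

Note that the raw matches preserve case; the dictionary update above lowercases them so #Service and #service are counted as one tag.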
The cell below has a helper function for finding the language of the tweets
```
# Code Helper Function to Resolve Language
def ResolveLanguage(shortname, langconfig):
for l in langconfig:
if shortname==l['code']:
return l['name']
return shortname
```
The cell below has a helper function for converting a dictionary to a dataframe
```
# Code: Helper function to convert a dictionary to dataframe
def Dicttodataframe(dictin, idxname, columnsin = None):
df =pd.DataFrame.from_dict(dictin, orient='index')
if columnsin:
df.columns = columnsin
df.index.name = idxname
df= df.reset_index()
return df
```
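A small usage example of the helper above, restated here so it runs standalone (the counts are made up):

```python
import pandas as pd

def Dicttodataframe(dictin, idxname, columnsin=None):
    df = pd.DataFrame.from_dict(dictin, orient='index')
    if columnsin:
        df.columns = columnsin
    df.index.name = idxname
    return df.reset_index()

out = Dicttodataframe({'English': 10, 'French': 3}, 'Language', ['count'])
# out has columns ['Language', 'count'], one row per dictionary key
```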
# Extract Info from Tweets to Prepare Data
```
# Code: extract information and prepare data; also reports any issues
datafolder ='Data/'
jsonfolder='/JSON/'
zippedfilepath = 'JSON.zip'
orig_htlist = ['service', 'price', 'cost', 'quality', 'ambiente', 'reservation']
langconfig = None
with open('langconfig.json', 'r') as f:
langconfig = json.load(f)
hashtagsdict = {}
langdict = {}
langdictJSON = {}
regex_doesntmatch = []
if zippedfilepath:
zippedFolder = zipfile.ZipFile(datafolder + zippedfilepath, 'r')
tweetjsonfiles = zippedFolder.infolist()
else:
tweetjsonfiles = os.listdir(datafolder + jsonfolder)
for tweetfile in tweetjsonfiles:
if zippedfilepath:
currjson = json.loads(zippedFolder.open(tweetfile).read())
else:
currjson = json.loads(open(datafolder + jsonfolder + tweetfile).read())
hashtagsdict,regexfailed = ExtractHTInfo(currjson, hashtagsdict,regex_doesntmatch)
#lang= TextBlob(currjson['full_text']).detect_language()
lang = ResolveLanguage(detectlang(currjson['full_text']), langconfig)
langdict[lang] = langdict.get(lang,0) +1
langjson = ResolveLanguage(currjson['lang'],langconfig)
langdictJSON[langjson] = langdictJSON.get(langjson,0) +1
print("Total tweets processed " , len(tweetjsonfiles) )
print( "Tweets for which original hashtags are different from regex hashtags" , len(regex_doesntmatch))
# for ht, val in hashtags.items():
# print(ht, val['tf'], val['df'])
```
Prints information about tweets for which the regex-extracted hashtags do not match the JSON hashtags
```
# Code for printing info about tweets for which the regex doesn't match
print(len(regex_doesntmatch))
for rf in regex_doesntmatch:
print("HT in JSON but was not found in regex filtering:", rf[0])
print("HT not in JSON but was found in regex filtering:", rf[1])
print("Tweet Link", rf[2])
print()
```
# PLOTS
Distribution of the top 10 languages most represented in the tweets (based on the detection library)
```
#Code for plotting language frequency detected using the helper library
langdf = Dicttodataframe(langdict, 'Language',['count'])
top10languages = langdf.sort_values(by="count", ascending=False).iloc[0:10]
#NORMAL PLOT
title = "Top 10 detected languages based on their Tweet frequency"
ax = top10languages.plot(x='Language', y='count', kind ='barh', figsize=(15,8), grid = True, sort_columns= False, legend = False)
ax.yaxis.set_tick_params(labelsize=12)
ax.xaxis.set_tick_params(labelsize=12)
ax.set_xlabel("Number of Tweets", fontsize= '16')
ax.set_ylabel("Language", fontsize= '16')
ax.set_title(title, fontsize = '16')
for i in ax.patches:
    # get_width pulls left or right; get_y pushes up or down
    ax.text(i.get_width() + .2, i.get_y() + .3,
            str(round(i.get_width(), 3)), fontsize=12, color='black')
ax.invert_yaxis()
```
Distribution for Top 10 Languages most represented in the Tweets (Based on Language present in JSON)
```
#Code For Plotting Language Frequency Detected Using JSON Information
langdfJSON = Dicttodataframe(langdictJSON, 'Language',['count'])
top10languagesJSON = langdfJSON.sort_values(by="count", ascending=False).iloc[0:10]
#SNS-based plot
plt.figure(figsize=(15,8))
title = "Top 10 detected languages( from JSON) based on their Tweet frequency"
ax= sns.barplot("count", "Language", data=top10languagesJSON, orient="h")
ax.yaxis.set_tick_params(labelsize=12)
ax.xaxis.set_tick_params(labelsize=12)
ax.set_xlabel("Number of Tweets", fontsize= '14')
ax.set_ylabel("Language", fontsize= '14')
ax.set_title(title, fontsize = '16')
for i in ax.patches:
    # get_width pulls left or right; get_y pushes up or down
    ax.text(i.get_width() + .3, i.get_y() + .4,
            str(int(round(i.get_width(), 4))), fontsize=14, color='black')
```
Frequency Distribution for Top 50 Hashtags (other than search terms) most represented in the Tweets (based on their total occurrence)
```
#Code to get top 50 Hashtags other than search keywords
htdf = Dicttodataframe(hashtagsdict, 'HashTag')
ignore_filter = ~htdf['HashTag'].isin(['#'+w for w in orig_htlist])
top50 = htdf[ignore_filter].sort_values(by="df", ascending=False).iloc[0:50]
#Code to Plot top 50 Hashtag frequencies based on total frequency
plt.figure(figsize=(15,20))
title = "Top 50 hashtags based on their total term frequency"
ax = sns.barplot("tf", "HashTag", data=top50, orient="h")
ax.yaxis.set_tick_params(labelsize=12)
ax.xaxis.set_tick_params(labelsize=12)
ax.set_xlabel("Total occurrences", fontsize='14')
ax.set_ylabel("HashTag", fontsize='14')
ax.set_title(title, fontsize='16')
for i in ax.patches:
    # get_width pulls left or right; get_y pushes up or down
    ax.text(i.get_width() + .3, i.get_y() + .4,
            str(int(round(i.get_width(), 4))), fontsize=14, color='black')
```
Frequency Distribution for Top 50 Hashtags (other than search terms) most represented in the Tweets (based on their document frequency)
```
#Code to Plot top 50 frequencies based on document frequency
plt.figure(figsize=(15,20))
title = "Top 50 hashtags based on their document frequency"
ax = sns.barplot("df", "HashTag", data=top50, orient="h")
ax.yaxis.set_tick_params(labelsize=12)
ax.xaxis.set_tick_params(labelsize=12)
ax.set_xlabel("Number of Tweets", fontsize='14')
ax.set_ylabel("HashTag", fontsize='14')
ax.set_title(title, fontsize='16')
for i in ax.patches:
    # get_width pulls left or right; get_y pushes up or down
    ax.text(i.get_width() + .3, i.get_y() + .4,
            str(int(round(i.get_width(), 4))), fontsize=14, color='black')
```
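The tf/df bookkeeping that feeds the two plots above can be sketched as follows. This is a hypothetical reimplementation for illustration only; the project's actual `ExtractHTInfo` helper also cross-checks the regex matches against the hashtags listed in the tweet JSON.

```python
import re
from collections import defaultdict

def extract_hashtags(text, counts):
    """Update term frequency (tf: total occurrences) and document
    frequency (df: number of tweets containing the tag) for every
    hashtag found in one tweet's text."""
    tags = re.findall(r"#\w+", text.lower())
    for tag in tags:
        counts[tag]["tf"] += 1
    # A tweet contributes to df at most once per tag.
    for tag in set(tags):
        counts[tag]["df"] += 1
    return counts

counts = defaultdict(lambda: {"tf": 0, "df": 0})
extract_hashtags("loving #python #python", counts)
extract_hashtags("more #python today", counts)
# counts["#python"] → {"tf": 3, "df": 2}
```

Here `tf` counts every occurrence while `df` counts each tweet at most once, which is why the two rankings above can differ.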
# Optional
The code below loads all tweet JSONs into a dataframe and also sets the class to be predicted
```
# Code for JSONs to Dataframe
tweet_frames = []
for tweetfile in tweetjsonfiles:
    if zippedfilepath:
        currjson = json.loads(zippedFolder.open(tweetfile).read())
    else:
        currjson = json.loads(open(datafolder + jsonfolder + tweetfile).read())
    tweet_frames.append(pd.json_normalize(currjson))
tweetsDF = pd.concat(tweet_frames, ignore_index=True)
# Code to set Sentiment Class for each tweet
def getClass(fav):
    if fav <= 4:
        return "NEGATIVE"
    elif fav > 10:
        return "POSITIVE"
    else:
        return "NEUTRAL"
tweetsDF['sentiment'] = tweetsDF['favorite_count'].apply(getClass)
```
Distribution of the number of tweets by how many likes they received
```
#Code to Plot distribution of Likes
title = 'No of Tweets with given Number of Likes'
plt.figure(figsize=(35,20))
ax =sns.countplot(tweetsDF.favorite_count)
ax.yaxis.set_tick_params(labelsize=14)
ax.xaxis.set_tick_params(labelsize=14)
ax.set_xlabel("Number of Likes", fontsize= '14')
ax.set_ylabel("Number of Tweets", fontsize= '14')
ax.set_title(title, fontsize = '18')
plt.show()
```
Distribution of tweets per sentiment class
```
#Code to Plot Sentiment Class Distribution
plt.figure(figsize=(8,8))
title = 'Distribution of Tweets Per Sentiment Class'
sns.set(style="darkgrid")
ax =sns.countplot(tweetsDF['sentiment'])
ax.yaxis.set_tick_params(labelsize=12)
ax.xaxis.set_tick_params(labelsize=12)
ax.set_xlabel("Sentiment Class", fontsize= '16')
ax.set_ylabel("No of Tweets in Class", fontsize= '16')
ax.set_title(title, fontsize = '16')
for p in ax.patches:
    height = p.get_height()
    ax.text(p.get_x() + p.get_width() / 2.,
            height + 5,
            height,
            ha="center", fontsize=12, color='black')
```
# Running Azure Cosmos Gremlin
I've built a lot of my own helper functions to make queries and manipulate data; I'll document them here.
First, I'm only using `nest_asyncio` to run the queries in cells. This is a requirement of how gremlinpython manages requests.
```
import sys
import pandas as pd
sys.path.append('..')
import helpers.dbquery as db
import yaml, ssl, asyncio
ssl._create_default_https_context = ssl._create_unverified_context
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
import nest_asyncio
# this is required for running in a Jupyter Notebook.
nest_asyncio.apply()
db.run_query()
```
The method `run_query` takes a query string and returns the JSON response. It manages opening and closing the connection objects so you don't have to.
```
res = db.run_query("g.V().hasLabel('system').has('username','userbill').limit(4).in().inE('orbits').limit(2)")
res
```
This makes a great `dataframe`
```
df = pd.DataFrame(db.run_query("g.V().hasLabel('system').has('username','userbill').in().inE('orbits')"))
df.head()
```
## Adding Vertices and Edges
I've added some functions that create the `g.addV()` command with all of the properties I want. This also forces some properties like `username` and `objtype` so I don't have to think about them. The function `create_vertex` just returns the string.
**Note:** I'm using objtype as an extra label as I found the query to be slightly cheaper when querying that value (e.g. `valueMap()`) than returning the actual label maps (e.g. `valueMap(true)`).
In the game, I'm adding `username` for the user's account so that I can easily filter data for that user. However, in notebooks I've replaced the user account with `notebook` so that I can clean up notebook runs easily. This `username` is forced to `notebook`, so I can quickly clean up.
The node must include a label
```
nodes = [{"label":"example","objid":db.uuid(),"property1":"foo","property2":"bar"},
{"label":"example","objid":db.uuid(),"property1":"foo","property2":"bar"}]
[db.create_vertex(node) for node in nodes]
```
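For illustration, here is a minimal sketch of how such a query string could be assembled. The function name `create_vertex_query` and the lack of quote escaping are assumptions for this sketch; the real `helpers.dbquery.create_vertex` may differ.

```python
def create_vertex_query(node, username="notebook"):
    """Build a Gremlin addV() command string from a dict of properties.

    A hypothetical reimplementation for illustration; assumes the dict
    contains a 'label' key and that values need no quote escaping.
    """
    props = {k: v for k, v in node.items() if k != "label"}
    # Force the bookkeeping properties described above.
    props.setdefault("username", username)
    props.setdefault("objtype", node["label"])
    query = f"g.addV('{node['label']}')"
    for key, value in props.items():
        query += f".property('{key}', '{value}')"
    return query

q = create_vertex_query({"label": "example", "objid": "123", "property1": "foo"})
```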
Edges require a `node1` and a `node2` object.
```
edge = {"label":"isExample","node1":nodes[0]["objid"],"node2":nodes[1]["objid"]}
db.create_edge(edge)
```
The method `upload_data` requires a dict with both nodes and edges.
```
data = {'nodes':nodes,'edges':[edge]}
db.upload_data(data)
```
Once you have the values you can query them back to confirm they made it into the database.
```
res = db.run_query("g.V().has('username','notebook').valueMap()")
res
```
# Bahdanau Attention
:label:`sec_seq2seq_attention`
We studied the machine translation
problem in :numref:`sec_seq2seq`,
where we designed
an encoder-decoder architecture based on two RNNs
for sequence to sequence learning.
Specifically,
the RNN encoder
transforms
a variable-length sequence
into a fixed-shape context variable,
then
the RNN decoder
generates the output (target) sequence token by token
based on the generated tokens and the context variable.
However,
even though not all the input (source) tokens
are useful for decoding a certain token,
the *same* context variable
that encodes the entire input sequence
is still used at each decoding step.
In a separate but related
challenge of handwriting generation for a given text sequence,
Graves designed a differentiable attention model
to align text characters with the much longer pen trace,
where the alignment moves only in one direction :cite:`Graves.2013`.
Inspired by the idea of learning to align,
Bahdanau et al. proposed a differentiable attention model
without the severe unidirectional alignment limitation :cite:`Bahdanau.Cho.Bengio.2014`.
When predicting a token,
if not all the input tokens are relevant,
the model aligns (or attends)
only to parts of the input sequence that are relevant to the current prediction.
This is achieved
by treating the context variable as an output of attention pooling.
## Model
When describing
Bahdanau attention
for the RNN encoder-decoder below,
we will follow the same notation in
:numref:`sec_seq2seq`.
The new attention-based model
is the same as that
in :numref:`sec_seq2seq`
except that
the context variable
$\mathbf{c}$
in
:eqref:`eq_seq2seq_s_t`
is replaced by
$\mathbf{c}_{t'}$
at any decoding time step $t'$.
Suppose that
there are $T$ tokens in the input sequence;
the context variable at the decoding time step $t'$
is the output of attention pooling:
$$\mathbf{c}_{t'} = \sum_{t=1}^T \alpha(\mathbf{s}_{t' - 1}, \mathbf{h}_t) \mathbf{h}_t,$$
where the decoder hidden state
$\mathbf{s}_{t' - 1}$ at time step $t' - 1$
is the query,
and the encoder hidden states $\mathbf{h}_t$
are both the keys and values,
and the attention weight $\alpha$
is computed as in
:eqref:`eq_attn-scoring-alpha`
using the additive attention scoring function
defined by
:eqref:`eq_additive-attn`.
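Stripping away the RNN machinery, the pooling in the equation above is just a softmax-weighted sum of the encoder hidden states. Below is a minimal NumPy sketch; note that for brevity it uses a dot-product score in place of the additive scoring function used in this chapter.

```python
import numpy as np

def attention_context(query, keys, values):
    """Compute c_{t'} = sum_t alpha(s_{t'-1}, h_t) h_t.

    A sketch: the weights alpha come from a softmax over scores; the
    additive scoring function is replaced by a dot product here.
    """
    scores = keys @ query                 # one score per time step t
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()     # softmax -> attention weights
    return weights @ values               # weighted sum of the values

T, num_hiddens = 5, 4
h = np.random.randn(T, num_hiddens)    # encoder hidden states (keys, values)
s_prev = np.random.randn(num_hiddens)  # previous decoder state (query)
c = attention_context(s_prev, h, h)    # context c_{t'}, shape (num_hiddens,)
```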
Slightly different from
the vanilla RNN encoder-decoder architecture
in :numref:`fig_seq2seq_details`,
the same architecture
with Bahdanau attention is depicted in
:numref:`fig_s2s_attention_details`.

:label:`fig_s2s_attention_details`
```
import tensorflow as tf
from d2l import tensorflow as d2l
```
## Defining the Decoder with Attention
To implement the RNN encoder-decoder
with Bahdanau attention,
we only need to redefine the decoder.
To visualize the learned attention weights more conveniently,
the following `AttentionDecoder` class
defines [**the base interface for
decoders with attention mechanisms**].
```
#@save
class AttentionDecoder(d2l.Decoder):
    """The base attention-based decoder interface."""
    def __init__(self, **kwargs):
        super(AttentionDecoder, self).__init__(**kwargs)

    @property
    def attention_weights(self):
        raise NotImplementedError
```
Now let us [**implement
the RNN decoder with Bahdanau attention**]
in the following `Seq2SeqAttentionDecoder` class.
The state of the decoder
is initialized with
(i) the encoder final-layer hidden states at all the time steps (as keys and values of the attention);
(ii) the encoder all-layer hidden state at the final time step (to initialize the hidden state of the decoder);
and (iii) the encoder valid length (to exclude the padding tokens in attention pooling).
At each decoding time step,
the decoder final-layer hidden state at the previous time step is used as the query of the attention.
As a result, both the attention output
and the input embedding are concatenated
as the input of the RNN decoder.
```
class Seq2SeqAttentionDecoder(AttentionDecoder):
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0, **kwargs):
        super().__init__(**kwargs)
        self.attention = d2l.AdditiveAttention(num_hiddens, num_hiddens,
                                               num_hiddens, dropout)
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
        self.rnn = tf.keras.layers.RNN(tf.keras.layers.StackedRNNCells(
            [tf.keras.layers.GRUCell(num_hiddens, dropout=dropout)
             for _ in range(num_layers)]),
            return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def init_state(self, enc_outputs, enc_valid_lens, *args):
        # Shape of `outputs`: (`batch_size`, `num_steps`, `num_hiddens`).
        # Shape of `hidden_state[0]`: (`num_layers`, `batch_size`, `num_hiddens`)
        outputs, hidden_state = enc_outputs
        return (outputs, hidden_state, enc_valid_lens)

    def call(self, X, state, **kwargs):
        # Shape of `enc_outputs`: (`batch_size`, `num_steps`, `num_hiddens`).
        # Shape of `hidden_state[0]`: (`num_layers`, `batch_size`, `num_hiddens`)
        enc_outputs, hidden_state, enc_valid_lens = state
        # Shape of the output `X`: (`num_steps`, `batch_size`, `embed_size`)
        X = self.embedding(X)  # Input `X` has shape: (`batch_size`, `num_steps`)
        X = tf.transpose(X, perm=(1, 0, 2))
        outputs, self._attention_weights = [], []
        for x in X:
            # Shape of `query`: (`batch_size`, 1, `num_hiddens`)
            query = tf.expand_dims(hidden_state[-1], axis=1)
            # Shape of `context`: (`batch_size`, 1, `num_hiddens`)
            context = self.attention(query, enc_outputs, enc_outputs,
                                     enc_valid_lens, **kwargs)
            # Concatenate on the feature dimension
            x = tf.concat((context, tf.expand_dims(x, axis=1)), axis=-1)
            out = self.rnn(x, hidden_state, **kwargs)
            hidden_state = out[1:]
            outputs.append(out[0])
            self._attention_weights.append(self.attention.attention_weights)
        # After fully-connected layer transformation, shape of `outputs`:
        # (`batch_size`, `num_steps`, `vocab_size`)
        outputs = self.dense(tf.concat(outputs, axis=1))
        return outputs, [enc_outputs, hidden_state, enc_valid_lens]

    @property
    def attention_weights(self):
        return self._attention_weights
```
In the following, we [**test the implemented
decoder**] with Bahdanau attention
using a minibatch of 4 sequence inputs
of 7 time steps.
```
encoder = d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
decoder = Seq2SeqAttentionDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
X = tf.zeros((4, 7))
state = decoder.init_state(encoder(X, training=False), None)
output, state = decoder(X, state, training=False)
output.shape, len(state), state[0].shape, len(state[1]), state[1][0].shape
```
## [**Training**]
Similar to :numref:`sec_seq2seq_training`,
here we specify hyperparameters,
instantiate
an encoder and a decoder with Bahdanau attention,
and train this model for machine translation.
Due to the newly added attention mechanism,
this training is much slower than
that in :numref:`sec_seq2seq_training` without attention mechanisms.
```
embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 250, d2l.try_gpu()
train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = d2l.Seq2SeqEncoder(
len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqAttentionDecoder(
len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```
After the model is trained,
we use it to [**translate a few English sentences**]
into French and compute their BLEU scores.
```
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
    translation, dec_attention_weight_seq = d2l.predict_seq2seq(
        net, eng, src_vocab, tgt_vocab, num_steps, True)
    print(f'{eng} => {translation}, ',
          f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
attention_weights = tf.reshape(
    tf.concat([step[0][0][0] for step in dec_attention_weight_seq], 0),
    (1, 1, -1, num_steps))
```
By [**visualizing the attention weights**]
when translating the last English sentence,
we can see that each query assigns non-uniform weights
over key-value pairs.
It shows that at each decoding step,
different parts of the input sequences
are selectively aggregated in the attention pooling.
```
# Plus one to include the end-of-sequence token
d2l.show_heatmaps(attention_weights[:, :, :, :len(engs[-1].split()) + 1],
                  xlabel='Key positions', ylabel='Query positions')
```
## Summary
* When predicting a token, if not all the input tokens are relevant, the RNN encoder-decoder with Bahdanau attention selectively aggregates different parts of the input sequence. This is achieved by treating the context variable as an output of additive attention pooling.
* In the RNN encoder-decoder, Bahdanau attention treats the decoder hidden state at the previous time step as the query, and the encoder hidden states at all the time steps as both the keys and values.
## Exercises
1. Replace GRU with LSTM in the experiment.
1. Modify the experiment to replace the additive attention scoring function with the scaled dot-product. How does it influence the training efficiency?
<a id="title_ID"></a>
# JWST Pipeline Validation Notebook: calwebb_detector1, ramp_fitting unit tests
<span style="color:red"> **Instruments Affected**</span>: NIRCam, NIRISS, NIRSpec, MIRI, FGS
### Table of Contents
<div style="text-align: left">
<br> [Introduction](#intro)
<br> [JWST Unit Tests](#unit)
<br> [Defining Terms](#terms)
<br> [Test Description](#description)
<br> [Data Description](#data_descr)
<br> [Imports](#imports)
<br> [Convenience Functions](#functions)
<br> [Perform Tests](#testing)
<br> [About This Notebook](#about)
<br>
</div>
<a id="intro"></a>
# Introduction
This is the validation notebook that displays the unit tests for the Ramp Fitting step in calwebb_detector1. This notebook runs and displays the unit tests that are performed as a part of the normal software continuous integration process. For more information on the pipeline visit the links below.
* Pipeline description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/ramp_fitting/index.html
* Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/
[Top of Page](#title_ID)
<a id="unit"></a>
# JWST Unit Tests
JWST unit tests are located in the "tests" folder for each pipeline step within the [GitHub repository](https://github.com/spacetelescope/jwst/tree/master/jwst/), e.g., ```jwst/ramp_fitting/tests```.
* Unit test README: https://github.com/spacetelescope/jwst#unit-tests
[Top of Page](#title_ID)
<a id="terms"></a>
# Defining Terms
These are terms or acronyms used in this notebook that may not be known to a general audience.
* JWST: James Webb Space Telescope
* NIRCam: Near-Infrared Camera
[Top of Page](#title_ID)
<a id="description"></a>
# Test Description
Unit testing is a software testing method by which individual units of source code are tested to determine whether they are working sufficiently well. Unit tests do not require a separate data file; the test creates the necessary test data and parameters as a part of the test code.
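As a self-contained illustration of this pattern (not an actual test from the `jwst` package), a unit test might fabricate a tiny linear ramp in memory and check that the fitted slope is recovered:

```python
import numpy as np

def fit_ramp_slope(ramp, group_time=1.0):
    # Least-squares slope of counts vs. time for a single pixel's ramp.
    t = np.arange(len(ramp)) * group_time
    return np.polyfit(t, ramp, 1)[0]

def test_fit_ramp_slope():
    # The test creates its own data and parameters, as described above.
    ramp = np.array([0.0, 5.0, 10.0, 15.0])  # perfectly linear ramp
    assert np.isclose(fit_ramp_slope(ramp), 5.0)

test_fit_ramp_slope()
```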
[Top of Page](#title_ID)
<a id="data_descr"></a>
# Data Description
Data used for unit tests is created on the fly within the test itself, and is typically an array in the expected format of JWST data with added metadata needed to run through the pipeline.
[Top of Page](#title_ID)
<a id="imports"></a>
# Imports
* tempfile for creating temporary output products
* pytest for unit test functions
* jwst for the JWST Pipeline
* IPython.display for displaying pytest reports
[Top of Page](#title_ID)
```
import tempfile
import pytest
import jwst
from IPython.display import IFrame
```
<a id="functions"></a>
# Convenience Functions
Here we define any convenience functions to help with running the unit tests.
[Top of Page](#title_ID)
```
def display_report(fname):
    '''Convenience function to display pytest report.'''
    return IFrame(src=fname, width=700, height=600)
```
<a id="testing"></a>
# Perform Tests
Below we run the unit tests for the Ramp Fitting step.
[Top of Page](#title_ID)
```
with tempfile.TemporaryDirectory() as tmpdir:
    !pytest jwst/ramp_fitting -v --ignore=jwst/associations --ignore=jwst/datamodels --ignore=jwst/stpipe --ignore=jwst/regtest --html=tmpdir/unit_report.html --self-contained-html
    report = display_report('tmpdir/unit_report.html')
report
```
<a id="about"></a>
## About This Notebook
**Author:** Alicia Canipe, Staff Scientist, NIRCam
<br>**Updated On:** 01/07/2021
[Top of Page](#title_ID)
<img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/>
# Report Notes
## TODO:
* describe how the equal sign is handled in the segmentation part.
* in the introduction, add a note that we assume the reader has a background in the theory of neural networks and geometry
## Introduction
With computational resources and storage getting cheaper and cheaper, a window of possibilities opens for searching, reusing, and processing information. Thus everything around is getting digitized, including all kinds of paper documentation. Until now, most of that work has been done manually, which is extremely expensive. For mathematical papers specifically, it becomes even more time-consuming, since it's far harder to type a mathematical formula than a regular word.
For that reason, hundreds of years of math history are still stored exclusively in paper format and cannot be processed efficiently. While we already have comprehensive tools to parse scanned text [TESSERACT](), they are not applicable to math expressions, which are far more complex to parse. Although there are some decent tools [SCANNER LATEX](), they are proprietary and have limited functionality; in particular, they cannot parse multiline expressions or entire documents. For that reason, we are working on an open-source project to parse math expressions, so that it can be freely reused and help develop tools that convert an entire math paper at once. Only then will we see a dramatic effect from digitizing all math literature.
But since this is a project in the field of OCR, which is a good entry point for newcomers to Computer Vision, we decided to describe our project in a self-contained and educational manner. That will allow the reader to enter the field of CV smoothly by exploring and playing with our project. We also encourage readers to contribute to the project, which they will know by heart by the end of the paper.
## Problem Definition
Right now, Optical Character Recognition is a solved task for regular text. We have plenty of approaches and available solutions to parse entire files of either handwritten or scanned typed text.
But for some specific classes of documents, such as math papers, the problem still exists. The main complexity is that parsing a math expression is nontrivial since, depending on the context, some characters overlap others, appear on top of each other, or even span multiple lines. Thus we want to create an open-source project tackling this problem so that it can be reused by the community and eventually move us a little closer to a document-wise parser of math papers.
## Algorithm. Character recognition
For character recognition, the most frequently used tool is a neural network. (Add why! e.g. "since they are the best at recognizing patterns?") We will use a basic version of a popular neural network variation, the convolutional neural network. If you are not familiar with the basic theory of CNNs, we recommend watching this material [LINK]() and any other you find on the Internet. In any case, below we give a very high-level explanation of why it works better for computer vision than a classical neural network.
In the classical neural network, we have an input and an output vector. Our input will be the flattened image: if we have a 28x28 image represented as a matrix, we can also express it as a vector with 784 elements, where the first 28 items correspond to the first row of the matrix, the second 28 items to the second row, and so on. With this approach, we are not encouraging our model to work on the level of groups of related pixels but rather on an independent set of 784 input signals. In real life, though, a pixel has a very strong relationship with its immediate neighbors above and below, while its influence on pixels in another part of the image is far lower.
A convolutional neural network solves this issue by working one level higher than the raw pixel scale. It transforms the image by scanning it with a rectangular kernel, thus learning from patterns at the level of that rectangular window sliding over the image. It can also reduce the dimension of the input image following some rule, e.g., replacing each 2x2 patch of the image with a single pixel whose value equals the maximum of the 4 pixels in the original image. After that, we can scan the picture again with a rectangular window and learn more high-level features.
Convolutions allow us to learn patterns that depend on the spatial location within the image. And pooling lets us discard overly local and specific patterns and work with something more high-level, such as the corners and lines of an image.
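The 2x2 max-pooling rule described above can be sketched in a few lines of NumPy:

```python
import numpy as np

def max_pool_2x2(img):
    """Replace each 2x2 patch of the image with its maximum value.

    A sketch of the pooling rule described above; assumes the image
    has even height and width.
    """
    h, w = img.shape
    # Group pixels into 2x2 blocks, then take the max over each block.
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 0, 2],
              [5, 6, 3, 1]])
pooled = max_pool_2x2(x)  # → [[4, 8], [9, 3]]
```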
### Our implementation
In our implementation we used the same ideas. The first layer is convolutional, which acts like a set of learnable filters. There are 32 filters for the first two convolutional layers and 64 filters for the last two. Each filter transforms a part of the image (defined by the kernel size) using the kernel filter.
It is followed by a pooling layer, which decreases the dimension of the image. These are used to reduce the computational cost and, to some extent, also minimize overfitting. Moreover, pooling allows subsequent convolutional layers to learn more high-level features.
We also have dropout layers, which randomly ignore some fraction of output nodes from previous layers. This way we force the network to learn features in a distributed way, which, in turn, improves generalization and reduces overfitting.
At the end, we flatten our final feature maps into a one-dimensional vector, which combines all the discovered local and global features. Then, based on that feature vector, we output a vector whose i-th element is the probability that the i-th character is in the input picture.
During this process, we also apply standard improvements to the quality of the model and the training time. For example, we can reduce the latter by using a dynamically changing learning rate. In the beginning it is larger, and when the precision of the model stops increasing significantly, we decrease the learning rate by a factor of 2. As for quality enhancements, we incorporate data augmentation as well, although we only use slight zooming, horizontal and vertical shifts, and small-angle rotation so as not to corrupt the image. That is, if we applied vertical flipping to a "6" symbol, we'd get a "9" symbol with the old label; or if we rotated "l" too far to the left, we'd get something closer to the division sign "/". All those confusions would decrease the precision of the network.
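A toy version of that learning-rate rule might look like the following; in Keras this behavior is provided by the `ReduceLROnPlateau` callback, and the thresholds below are illustrative assumptions:

```python
def step_lr(lr, val_history, patience=3, factor=0.5, min_delta=1e-4):
    """Halve the learning rate when validation accuracy stops improving.

    A toy sketch of the schedule described above; real training code
    would track the best metric seen so far instead of recomputing maxima.
    """
    if len(val_history) > patience:
        recent_best = max(val_history[-patience:])
        earlier_best = max(val_history[:-patience])
        if recent_best < earlier_best + min_delta:
            return lr * factor  # no significant improvement: decay
    return lr

# Accuracy has plateaued at 0.8, so the rate is halved.
lr1 = step_lr(0.1, [0.8, 0.8, 0.8, 0.8, 0.8])
# Accuracy is still climbing, so the rate is kept.
lr2 = step_lr(0.1, [0.5, 0.6, 0.7, 0.8, 0.9])
```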
## Future Improvements
Since the topic has a lot of complications, there are multiple possibilities to improve the solution. Currently, it works in its basic form, that is, parsing a single-line formula with brackets, subtraction, addition, multiplication, and division.
The next improvement should be the ability to parse subscripts and superscripts properly. Like the feature that correctly identifies the "=" sign, this extension requires only pure Computer Science enhancements. That is, we already have a feature that determines the bounding box for each character. Now, if we draw a horizontal line through the middle of the total bounding box, we can identify sub- and superscripts: when a character's bounding box is crossed by that line, it's a regular character; otherwise, it's a superscript or subscript character when its bounding box is above or below the line, respectively.
Besides that, there are plenty of other, more complicated features that can be added, from something relatively simple like identifying a fraction, where we could also use a calculation based on bounding boxes, or parsing trigonometric operators, to something more complex such as $\sum_{i=1}^{n}$ or a multiline system of linear equations.
We could also improve the quality of the existing functionality. While formula segmentation works precisely, we could extend it to work with formulas in the wild (whiteboard, street art, etc.) as well. As for the character recognition module, there is huge room for improvement, since its precision is only 88%.
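Under the assumption that boxes are `(x, y, width, height)` tuples with `y` growing downward (a convention chosen for this sketch, not necessarily the project's), the midline heuristic could look like this:

```python
def classify_position(char_box, line_box):
    """Classify a character as normal, superscript, or subscript.

    Boxes are (x, y, width, height) with y growing downward; these
    conventions are assumptions made for illustration.
    """
    midline = line_box[1] + line_box[3] / 2.0
    top, bottom = char_box[1], char_box[1] + char_box[3]
    if top <= midline <= bottom:
        return "normal"  # the midline crosses the character's box
    # Entirely above the midline (smaller y) means superscript.
    return "superscript" if bottom < midline else "subscript"

line = (0, 0, 100, 20)                            # whole-formula bounding box
print(classify_position((10, 2, 5, 16), line))    # → normal
print(classify_position((20, 0, 5, 6), line))     # → superscript
print(classify_position((30, 14, 5, 6), line))    # → subscript
```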
## Conclusion
In this work we tackled the problem of optical character recognition in the math domain. To do so, we incorporated two fundamental techniques of that field: character recognition and string segmentation. The former gave us experience in applying classic machine learning techniques with a CNN, while in the latter we used a lot of graphical algorithms from the computer science field. But the most effort, improvement, and troubleshooting were dedicated to the data preprocessing part, since not only were we required to adjust the dataset to the specific needs of our project, but we also had to merge two different datasets, which opened up another huge task of adjusting one dataset's format to the other's.
For someone with no prior ML experience, this project was extremely valuable for learning basic but vital parts of most computer vision problems. It gave us broad experience with computer vision tools: preprocessing an image database, graphical algorithms, and neural networks. That is why we decided to format this report in a self-contained and educational manner: since we gained such broad and fundamental experience, we wanted to share it with other newcomers to ML, and to CV in particular. It is something that would have been useful to us at the beginning of the journey.
Besides that, we are going to continue working on this project, which will let us create a second part of the tutorial and thus produce something more applicable to real-life problems. In that way, our solution will be useful not only as a learning example or a repository where anybody can reuse some basic OCR functionality, but also as a complete solution in its own right.
## Creating Landsat Timelapse
**Steps to create a Landsat timelapse:**
1. Pan and zoom to your region of interest.
2. Use the drawing tool to draw a rectangle anywhere on the map.
3. Adjust the parameters (e.g., start year, end year, title) if needed.
4. Check `Upload to imgur.com` if you would like to download the GIF.
5. Click the Submit button to create a timelapse.
6. Deploy the app to [heroku](https://www.heroku.com/). See https://github.com/giswqs/earthengine-apps
```
import os
import ee
import geemap
import ipywidgets as widgets
Map = geemap.Map()
Map
style = {'description_width': 'initial'}
title = widgets.Text(
description='Title:',
value='Landsat Timelapse',
width=200,
style=style
)
bands = widgets.Dropdown(
description='Select RGB Combo:',
options=['Red/Green/Blue', 'NIR/Red/Green', 'SWIR2/SWIR1/NIR', 'NIR/SWIR1/Red','SWIR2/NIR/Red',
'SWIR2/SWIR1/Red', 'SWIR1/NIR/Blue', 'NIR/SWIR1/Blue', 'SWIR2/NIR/Green', 'SWIR1/NIR/Red'],
value='NIR/Red/Green',
style=style
)
hbox1 = widgets.HBox([title, bands])
hbox1
start_year = widgets.IntSlider(description='Start Year:', value=1984, min=1984, max=2019, style=style)
end_year = widgets.IntSlider(description='End Year:', value=2019, min=1984, max=2019, style=style)
hbox2 = widgets.HBox([start_year, end_year])
hbox2
speed = widgets.IntSlider(
description='Frames per second:',
tooltip='Frames per second:',
value=10,
min=1,
max = 30,
style=style
)
upload = widgets.Checkbox(
value=False,
description='Upload to imgur.com',
style=style
)
hbox3 = widgets.HBox([speed, upload])
hbox3
font_size = widgets.IntSlider(description='Font size:', value=30, min=10, max=50, style=style)
font_color = widgets.ColorPicker(
concise=False,
description='Font color:',
value='white',
style=style
)
progress_bar_color = widgets.ColorPicker(
concise=False,
description='Progress bar color:',
value='blue',
style=style
)
hbox4 = widgets.HBox([font_size, font_color, progress_bar_color])
hbox4
submit = widgets.Button(
description='Submit',
button_style='primary',
tooltip='Click the submit the request to create timelapse',
style=style
)
output = widgets.Output()
def submit_clicked(b):
    with output:
        output.clear_output()
        if start_year.value >= end_year.value:
            print('The end year must be greater than the start year.')
            return
        print('Computing...')
        Map.add_landsat_ts_gif(roi=Map.user_roi, label=title.value, start_year=start_year.value,
                               end_year=end_year.value, start_date='05-01', end_date='10-31',
                               bands=bands.value.split('/'), font_color=font_color.value,
                               frames_per_second=speed.value, font_size=font_size.value,
                               progress_bar_color=progress_bar_color.value, upload=upload.value)
submit.on_click(submit_clicked)
submit
output
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Image classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/images/image_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/images/image_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/images/image_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This tutorial shows how to classify images of cats and dogs. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:
* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
* _Overfitting_ —How to identify and prevent it.
* _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.
This tutorial follows a basic machine learning workflow:
1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process
# Import packages
Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
```
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import matplotlib.pyplot as plt
```
This tutorial uses a dataset distributed as a `.zip` archive. Use the `zipfile` module to extract its contents.
```
import zipfile
```
Import TensorFlow and the Keras classes needed to construct the model.
```
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```
# Load data
Begin by downloading the dataset. This tutorial uses a filtered version of the <a href="https://www.kaggle.com/c/dogs-vs-cats/data" target="_blank">Dogs vs Cats</a> dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
```
Extract the dataset contents:
```
local_zip = '/tmp/cats_and_dogs_filtered.zip' # local path of downloaded .zip file
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp') # contents are extracted to '/tmp' folder
zip_ref.close()
```
The dataset has the following directory structure:
<pre>
<b>cats_and_dogs_filtered</b>
|__ <b>train</b>
|______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
|______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ <b>validation</b>
|______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
|______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
</pre>
After extracting its contents, assign variables with the proper file path for the training and validation set.
```
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
```
### Understand the data
Let's look at how many cats and dogs images are in the training and validation directory:
```
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
```
# Set the model parameters
For convenience, set up variables to use while pre-processing the dataset and training the network.
```
batch_size = 100
epochs = 15
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
```
# Data preparation
Format the images into appropriately pre-processed floating point tensors before feeding to the network:
1. Read images from the disk.
2. Decode the contents of these images and convert them into a proper grid format according to their RGB content.
3. Convert them into floating point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.
Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
```
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
```
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes them to the required dimensions.
```
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
                                                           # It's usually best practice to shuffle the training data
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
```
### Visualize training images
Visualize the training images by extracting a batch of images from the training generator—which is 100 images in this example (set by `batch_size`)—then plot five of them with `matplotlib`.
```
sample_training_images, _ = next(train_data_gen)
```
The `next` function returns a batch from the dataset in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to visualize only the training images.
```
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
```
# Create the model
The model consists of three convolution blocks, each followed by a max pool layer. On top of them is a fully connected layer with 512 units that is activated by a `relu` activation function. The model outputs class probabilities for the binary classification using the `sigmoid` activation function.
```
model = Sequential()
model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_SHAPE,IMG_SHAPE, 3,)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
```
### Compile the model
For this tutorial, choose the *Adam* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
```
### Model summary
View all the layers of the network using the model's `summary` method:
```
model.summary()
```
### Train the model
Use the model's `fit_generator` method to train the network on the batches produced by the generators.
```
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(batch_size))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(batch_size)))
)
```
### Visualize training results
Now visualize the results after training the network.
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
As you can see from the plots, training accuracy and validation accuracy differ by a large margin, and the model has achieved only around **70%** accuracy on the validation set.
Let's look at what went wrong and try to increase overall performance of the model.
# Overfitting
In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of *overfitting*.
When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset.
There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model.
To begin, clear the previous Keras session and start a new one:
```
# Clear resources
tf.keras.backend.clear_session()
epochs = 80
```
# Data augmentation
Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.
Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying it during the training process.
## Augment and visualize data
Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.
### Apply horizontal flip
Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
```
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE)
)
```
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
```
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
```
### Randomly rotate the image
Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
```
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE)
)
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
```
### Apply zoom augmentation
Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
```
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE)
)
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
```
### Put it all together
Apply all the previous augmentations together. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
```
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE),
class_mode='binary'
)
```
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
```
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
```
### Create validation data generator
Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
```
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_SHAPE, IMG_SHAPE),
class_mode='binary')
```
# Dropout
Another technique to reduce overfitting is to introduce *dropout* to the network. Dropout is a form of *regularization* that randomly sets a fraction of a layer's output units to zero during training, which prevents the network from relying too heavily on any single unit and reduces overfitting on small training sets. Dropout is one of the regularization techniques used in this tutorial.
When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4. This means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer.
When applying 0.1 dropout to a layer, it randomly drops 10% of that layer's output units on each training pass.
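The effect can be sketched in plain NumPy (an illustration of the idea only; this is "inverted" dropout, which rescales the surviving units during training so nothing needs to change at inference time):

```python
import numpy as np

def dropout(x, rate, rng):
    # Inverted dropout: zero out roughly `rate` of the units and rescale
    # the survivors by 1 / (1 - rate) so the expected activation is unchanged.
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
activations = np.ones(10_000)
dropped = dropout(activations, rate=0.1, rng=rng)
print(float((dropped == 0).mean()))  # close to 0.1
```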
Create a network architecture with this new dropout feature and apply it to different convolutions and fully-connected layers.
# Creating a new network with Dropouts
Here, you apply dropout to the first and last max pool layers and to the fully connected layer that has 512 output units: 30% of the output units of the first and last max pool layers, and 10% of the fully connected layer's output units, are randomly set to zero during each training pass.
```
model = Sequential()
model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(150,150,3,)))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
```
### Compile the model
After introducing dropouts to the network, compile the model and view the layers summary.
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
model.summary()
```
### Train the model
After introducing data augmentation to the training examples and adding dropout to the network, train this new network:
```
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(batch_size))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(batch_size)))
)
```
### Visualize the model
Visualize the new model after training and see if there are signs of overfitting:
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
### Evaluating the model
As you can see, the model's learning curves are much better than before and there is much less overfitting. The model is able to achieve an accuracy of ~*75%*.
# MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on [MNIST](http://yann.lecun.com/exdb/mnist/) using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). It builds the foundation for this <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb">companion notebook</a>, which explores tackling the same problem with other types of models such as DNN and CNN.
## Learning Objectives
1. Know how to read and display image data
2. Know how to find incorrect predictions to analyze the model
3. Visually see how computers see images
```
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
```
## Exploring the data
The MNIST dataset is already included in TensorFlow through the Keras datasets module. Let's load it and get a sense of the data.
```
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
```
Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (black) to 255 (white). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
```
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH))
print("The label for image number", IMGNO, "is", y_test[IMGNO])
```
## Define the model
Let's start with a very simple linear classifier. This was the first method tried on MNIST in 1998, and it scored 88% accuracy. Quite groundbreaking at the time!
We can build our linear classifier using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras), so we don't have to define or initialize our weights and biases. This happens automatically for us in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high probability predictions that were incorrect.
When building more complex models such as DNNs and CNNs our code will be more readable by using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Let's get one working so we can test it and use it as a benchmark.
```
def linear_model():
# TODO: Build a sequential model and compile it.
return model
```
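To make the description above concrete, here is a plain-NumPy sketch of the forward pass such a model computes (the random weights, shapes, and names here are illustrative assumptions; the lab itself expects a Keras `Sequential` solution):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

HEIGHT, WIDTH, NCLASSES = 28, 28, 10
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(HEIGHT * WIDTH, NCLASSES))  # 784 weights per class
b = np.zeros(NCLASSES)                                       # plus 1 bias per class

image = rng.random((HEIGHT, WIDTH))   # a fake 28 x 28 input
logits = image.reshape(-1) @ W + b    # Flatten + Dense
probs = softmax(logits)               # Softmax turns logits into probabilities
print(probs.shape)  # (10,) -- one probability per digit class
```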
## Write Input Functions
As usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.
**TODO 1**: Define the scale function below and build the dataset
```
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
# TODO
def load_dataset(training=True):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = mnist
x = x_train if training else x_test
y = y_train if training else y_test
# TODO: a) one-hot encode labels, apply `scale` function, and create dataset.
# One-hot encode the classes
if training:
# TODO
return dataset
def create_shape_test(training):
dataset = load_dataset(training=training)
data_iter = dataset.__iter__()
(images, labels) = data_iter.get_next()
expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
expected_label_ndim = 2
assert images.shape == expected_image_shape
assert labels.numpy().ndim == expected_label_ndim
test_name = "training" if training else "eval"
print("Test for", test_name, "passed!")
create_shape_test(True)
create_shape_test(False)
```
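For reference, the two TODO transformations amount to the following in plain NumPy (a sketch assuming 10 classes; the lab itself expects the `tf.data` versions built with `tf.cast` and `tf.one_hot`):

```python
import numpy as np

def scale(image, label):
    # TODO 1 sketch: normalize pixel values from [0, 255] to [0, 1].
    return np.asarray(image, dtype="float32") / 255.0, label

def one_hot(labels, nclasses=10):
    # One-hot encoding sketch (tf.one_hot in the real pipeline).
    out = np.zeros((len(labels), nclasses), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

img, y = scale(np.array([[0, 128, 255]], dtype="uint8"), 7)
print(img.min(), img.max())   # 0.0 1.0
print(one_hot([7]).shape)     # (1, 10)
```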
Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
```
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)
history = model.fit(
# TODO: specify training/eval data, # epochs, steps per epoch.
verbose=2,
callbacks=[checkpoint_callback, tensorboard_callback],
)
BENCHMARK_ERROR = 0.12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
assert accuracy[-1] > BENCHMARK_ACCURACY
assert val_accuracy[-1] > BENCHMARK_ACCURACY
print("Test to beat benchmark accuracy passed!")
assert accuracy[0] < accuracy[1]
assert accuracy[1] < accuracy[-1]
assert val_accuracy[0] < val_accuracy[1]
assert val_accuracy[1] < val_accuracy[-1]
print("Test model accuracy is improving passed!")
assert loss[0] > loss[1]
assert loss[1] > loss[-1]
assert val_loss[0] > val_loss[1]
assert val_loss[1] > val_loss[-1]
print("Test loss is decreasing passed!")
```
## Evaluating Predictions
Were you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. What would you have guessed for these images?
**TODO 2**: Change the range below to find an incorrect prediction
```
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
dataset = (x_test[image_numbers], y_test[image_numbers])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
dataset = dataset.map(scale).batch(len(image_numbers))
return dataset
predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
predicted_value = np.argmax(prediction)
actual_value = y_test[image_numbers[index]]
if actual_value != predicted_value:
print("image number: " + str(image_numbers[index]))
print("the prediction was " + str(predicted_value))
print("the actual label is " + str(actual_value))
print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
```
It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
Each of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened feeding into the model, but we can reshape them back into the original image dimensions to see what the computer sees.
**TODO 3**: Reshape the layer weights to be the shape of an input image and plot.
```
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
```
Did you recognize the digit the computer was trying to learn? Pretty trippy, isn't it! Even with a simple "brain", the computer can form an idea of what a digit should be. The human brain, however, uses [layers and layers of calculations for image recognition](https://www.salk.edu/news-release/brain-recognizes-eye-sees/). Ready for the next challenge? <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/images/mnist_linear.ipynb">Click here</a> to super charge our models with human-like vision.
## Bonus Exercise
Want to push your understanding further? Instead of using Keras' built in layers, try repeating the above exercise with your own [custom layers](https://www.tensorflow.org/tutorials/eager/custom_layers).
Copyright 2021 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Indonesian VAT Numbers
## Introduction
The function `clean_id_npwp()` cleans a column containing Indonesian VAT Number (NPWP) strings and standardizes them in a given format. The function `validate_id_npwp()` validates either a single NPWP string, a column of NPWP strings, or a DataFrame of NPWP strings, returning `True` if the value is valid, and `False` otherwise.
NPWP strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "013000666091000"
* `standard`: NPWP strings with proper whitespace in the proper places, like "01.300.066.6-091.000"
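For illustration, the `standard` layout can be reproduced by a short, hypothetical helper (`npwp_standard` is not part of dataprep; `clean_id_npwp` itself additionally validates values and applies the `errors` policy):

```python
def npwp_standard(compact):
    # Hypothetical helper (not part of dataprep) showing the standard
    # NPWP layout: XX.XXX.XXX.X-XXX.XXX over the 15 digits.
    digits = "".join(ch for ch in compact if ch.isdigit())
    if len(digits) != 15:
        raise ValueError("an NPWP has 15 digits")
    return (f"{digits[0:2]}.{digits[2:5]}.{digits[5:8]}."
            f"{digits[8]}-{digits[9:12]}.{digits[12:15]}")

print(npwp_standard("013000666091000"))  # 01.300.066.6-091.000
```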
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_id_npwp()` and `validate_id_npwp()`.
### An example dataset containing NPWP strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"npwp": [
"013000666091000",
"123456789",
"51824753556",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_id_npwp`
By default, `clean_id_npwp` will clean NPWP strings and output them in the standard format with proper separators.
```
from dataprep.clean import clean_id_npwp
clean_id_npwp(df, column = "npwp")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_id_npwp(df, column = "npwp", output_format="standard")
```
### `compact`
```
clean_id_npwp(df, column = "npwp", output_format="compact")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned NPWP strings is added with a title in the format `"{original title}_clean"`.
```
clean_id_npwp(df, column="npwp", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_id_npwp(df, "npwp", errors="coerce")
```
### `ignore`
```
clean_id_npwp(df, "npwp", errors="ignore")
```
## 5. `validate_id_npwp()`
`validate_id_npwp()` returns `True` when the input is a valid NPWP. Otherwise it returns `False`.
The input of `validate_id_npwp()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame.
When the input is a string, a pandas Series, or a Dask Series, no column name needs to be specified.
When the input is a pandas DataFrame or a Dask DataFrame, a column name may optionally be specified. If a column name is given, `validate_id_npwp()` returns the validation result for that column only; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_id_npwp
print(validate_id_npwp("013000666091000"))
print(validate_id_npwp("123456789"))
print(validate_id_npwp("51824753556"))
print(validate_id_npwp("51 824 753 556"))
print(validate_id_npwp("hello"))
print(validate_id_npwp(np.nan))
print(validate_id_npwp("NULL"))
```
### Series
```
validate_id_npwp(df["npwp"])
```
### DataFrame + Specify Column
```
validate_id_npwp(df, column="npwp")
```
### Only DataFrame
```
validate_id_npwp(df)
```
# Predicting Whether a Planet Has a Shorter Year than Earth
Using the Open Exoplanet Catalogue database: https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue/
## Data License
Copyright (C) 2012 Hanno Rein
Permission is hereby granted, free of charge, to any person obtaining a copy of this database and associated scripts (the "Database"), to deal in the Database without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Database, and to permit persons to whom the Database is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Database. A reference to the Database shall be included in all scientific publications that make use of the Database.
THE DATABASE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATABASE OR THE USE OR OTHER DEALINGS IN THE DATABASE.
## Setup
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
planets = pd.read_csv('../../ch_09/data/planets.csv')
planets.head()
```
## Creating the `shorter_year_than_earth` column
```
planets['shorter_year_than_earth'] = planets.period < planets.query('name == "Earth"').period.iat[0]
planets.shorter_year_than_earth.value_counts()
```
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
data = planets[['shorter_year_than_earth', 'semimajoraxis', 'mass', 'eccentricity']].dropna()
y = data.pop('shorter_year_than_earth')
X = data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=0, stratify=y
)
lm = LogisticRegression(solver='lbfgs', random_state=0).fit(X_train, y_train)
lm.score(X_test, y_test)
```
## Evaluation
```
preds = lm.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, preds))
from ml_utils.classification import plot_roc
plot_roc(y_test, lm.predict_proba(X_test)[:,1])
from ml_utils.classification import confusion_matrix_visual
confusion_matrix_visual(y_test, preds, ['>=', 'shorter'])
```
| github_jupyter |
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell
# install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp]
# If you're not using Colab, you might need to upgrade jupyter notebook to avoid the following error:
# 'ImportError: IProgress not found. Please update jupyter and ipywidgets.'
! pip install ipywidgets
! jupyter nbextension enable --py widgetsnbextension
# Please restart the kernel after running this cell
from nemo.collections import nlp as nemo_nlp
from nemo.utils.exp_manager import exp_manager
import os
import wget
import torch
import pytorch_lightning as pl
from omegaconf import OmegaConf
```
# Task Description
**Sentiment Analysis** is the task of detecting the sentiment in text. We model this problem as a simple form of a text classification problem. For example `Gollum's performance is incredible!` has a positive sentiment while `It's neither as romantic nor as thrilling as it should be.` has a negative sentiment.
# Dataset
In this tutorial we are going to use [The Stanford Sentiment Treebank (SST-2)](https://nlp.stanford.edu/sentiment/index.html) corpus for sentiment analysis. This version of the dataset contains a collection of sentences with binary labels of positive and negative. It is a standard benchmark for sentence classification and is part of the GLUE Benchmark: https://gluebenchmark.com/tasks. Please download and unzip the SST-2 dataset from GLUE. It should contain three files, train.tsv, dev.tsv, and test.tsv, which can be used for training, validation, and testing respectively.
# NeMo Text Classification Data Format
[TextClassificationModel](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/text_classification/text_classification_model.py) in NeMo supports text classification problems such as sentiment analysis or domain/intent detection for dialogue systems, as long as the data follows the format specified below.
TextClassificationModel requires the data to be stored in TAB-separated files (.tsv) with two columns: sentence and label. Each line of a data file contains a text sequence, where words are separated by spaces, and a label, separated from the text by a [TAB] character, i.e.:
```
[WORD][SPACE][WORD][SPACE][WORD][TAB][LABEL]
```
For example:
```
hide new secretions from the parental units[TAB]0
that loves its characters and communicates something rather beautiful about human nature[TAB]1
...
```
If your dataset is stored in another format, you need to convert it to this format to use the TextClassificationModel.
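As a sketch of such a conversion, assuming a hypothetical input CSV with `sentence` and `label` columns (the column names and file paths here are illustrative, not part of NeMo), something like the following would do:

```python
import csv

def csv_to_nemo_tsv(csv_path, tsv_path, text_col="sentence", label_col="label"):
    """Convert a CSV with text/label columns to NeMo's [TEXT][TAB][LABEL] format."""
    with open(csv_path, newline="", encoding="utf-8") as fin, \
         open(tsv_path, "w", encoding="utf-8") as fout:
        for row in csv.DictReader(fin):
            # a literal tab inside the text would break the two-column format
            text = row[text_col].replace("\t", " ").strip()
            fout.write(f"{text}\t{row[label_col]}\n")
```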
## Download and Preprocess the Data
First, you need to download the zipped file of the SST-2 dataset from the GLUE Benchmark website: https://gluebenchmark.com/tasks, and put it in the current folder. Then the following script will extract it into the data folder specified by `DATA_DIR`:
```
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
os.environ['DATA_DIR'] = DATA_DIR
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
! unzip -o SST-2.zip -d {DATA_DIR}
```
Now, the data folder should contain the following files:
* train.tsv
* dev.tsv
* test.tsv
The format of `train.tsv` and `dev.tsv` is close to NeMo's format, except that each file has an extra header line at the beginning, which we will remove. `test.tsv` has a different format, and its labels are missing.
```
! sed 1d {DATA_DIR}/SST-2/train.tsv > {DATA_DIR}/SST-2/train_nemo_format.tsv
! sed 1d {DATA_DIR}/SST-2/dev.tsv > {DATA_DIR}/SST-2/dev_nemo_format.tsv
! ls -l {DATA_DIR}/SST-2
# let's take a look at the data
print('Contents (first 5 lines) of train.tsv:')
! head -n 5 {DATA_DIR}/SST-2/train_nemo_format.tsv
print('\nContents (first 5 lines) of test.tsv:')
! head -n 5 {DATA_DIR}/SST-2/test.tsv
```
# Model Configuration
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model.
Our text classification model uses a pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model (or other BERT-like models) followed by a classification layer on the output of the first token ([CLS]).
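As a rough sketch of that architecture (not NeMo's actual implementation; the hidden size and class count are illustrative), the classification head simply applies a linear layer to the encoder's hidden state at position 0, i.e. the [CLS] token:

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Toy classification head over the [CLS] token of a BERT-like encoder."""
    def __init__(self, hidden_size=768, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states):
        # hidden_states: [batch, seq_len, hidden]; position 0 is [CLS]
        cls_embedding = hidden_states[:, 0, :]
        return self.classifier(cls_embedding)  # [batch, num_classes] logits
```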
The model is defined in a config file which declares multiple important sections. The most important ones are:
- **model**: All arguments that are related to the Model - language model, tokenizer, head classifier, optimizer, schedulers, and datasets/data loaders.
- **trainer**: Any argument to be passed to PyTorch Lightning including number of epochs, number of GPUs, precision level, etc.
```
# download the model's configuration file
MODEL_CONFIG = "text_classification_config.yaml"
CONFIG_DIR = WORK_DIR + '/configs/'
os.makedirs(CONFIG_DIR, exist_ok=True)
if not os.path.exists(CONFIG_DIR + MODEL_CONFIG):
    print('Downloading config file...')
    wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/text_classification/conf/' + MODEL_CONFIG, CONFIG_DIR)
    print('Config file downloaded!')
else:
    print('config file already exists')
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
```
```
# this line prints the entire config of the model
print(OmegaConf.to_yaml(config))
```
# Model Training From Scratch
## Setting up data within the config
We first need to set num_classes in the config, which specifies the number of classes in the dataset. For SST-2 we have just two classes (0 for negative and 1 for positive), so we set num_classes to 2. The model supports more than 2 classes too.
```
config.model.dataset.num_classes=2
```
Among other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are the configurations used to set up the Dataset and DataLoaders.
Notice that some config lines, including `model.dataset.num_classes`, have `???` as their value; this means the user is required to specify values for these fields. We need to set `model.train_ds.file_path`, `model.validation_ds.file_path`, and `model.test_ds.file_path` in the config to the paths of the train, validation, and test files if they exist. We may do this by updating the config file or by setting them from the command line.
Let's now set the train and validation paths in the config.
```
config.model.train_ds.file_path = os.path.join(DATA_DIR, 'SST-2/train_nemo_format.tsv')
config.model.validation_ds.file_path = os.path.join(DATA_DIR, 'SST-2/dev_nemo_format.tsv')
# Name of the .nemo file where trained model will be saved.
config.save_to = 'trained-model.nemo'
config.export_to = 'trained-model.onnx'
print("Train dataloader's config: \n")
# OmegaConf.to_yaml() is used to create a proper format for printing the train dataloader's config
# You may change other params like batch size or the number of samples to be considered (-1 means all the samples)
print(OmegaConf.to_yaml(config.model.train_ds))
```
## Building the PyTorch Lightning Trainer
NeMo models are primarily PyTorch Lightning (PT) modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.
Let's first instantiate a PT Trainer object by using the trainer section of the config.
```
print("Trainer config - \n")
# OmegaConf.to_yaml() is used to create a proper format for printing the trainer config
print(OmegaConf.to_yaml(config.trainer))
```
First you need to create a PT trainer with the params stored in the trainer's config. You may set the number of training steps with max_steps or the number of epochs with max_epochs in the trainer's config.
```
# lets modify some trainer configs
# checks if we have GPU available and uses it
config.trainer.accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'
config.trainer.devices = 1
# for mixed precision training, uncomment the lines below (precision should be set to 16 and amp_level to O1):
# config.trainer.precision = 16
# config.trainer.amp_level = O1
# disable distributed training when using Colab to prevent the errors
config.trainer.strategy = None
# set the max number of epochs to reduce training time for demonstration purposes
# Training stops when max_steps or max_epochs is reached (whichever comes first)
config.trainer.max_epochs = 5
# instantiates a PT Trainer object by using the trainer section of the config
trainer = pl.Trainer(**config.trainer)
```
## Setting up the NeMo Experiment
NeMo has an experiment manager that handles logging and checkpoint saving for us, so let's set it up. We need the PT trainer and the exp_manager config:
```
# The experiment manager of a trainer object cannot be set twice. We repeat the trainer creation here to prevent an error when this cell is executed more than once.
trainer = pl.Trainer(**config.trainer)
# exp_dir specifies the path to store the checkpoints and logs; its default is "./nemo_experiments"
# You may set it by uncommenting the following line
# config.exp_manager.exp_dir = 'LOG_CHECKPOINT_DIR'
# OmegaConf.to_yaml() is used to create a proper format for printing the trainer config
print(OmegaConf.to_yaml(config.exp_manager))
exp_dir = exp_manager(trainer, config.exp_manager)
# the exp_dir provides a path to the current experiment for easy access
print(exp_dir)
```
Before initializing the model, we might want to modify some of the model configs. For example, we might want to swap the pretrained BERT model for another one. The default model is `bert-base-uncased`. We support a variety of models, including all BERT-like models available in HuggingFace, and Megatron.
```
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
# set the `model.language_model.pretrained_model_name` parameter in the config to the model you want to use
config.model.language_model.pretrained_model_name = "bert-base-uncased"
```
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will also be prepared for the training and validation.
Also, the pretrained BERT model will be downloaded automatically. Note that the first time you create the model this can take up to a few minutes, depending on the size of the chosen BERT model. If your dataset is large, reading and processing all the data may also take some time.
Now we can create the model with the model config and the trainer object like this:
```
model = nemo_nlp.models.TextClassificationModel(cfg=config.model, trainer=trainer)
```
## Monitoring Training Progress
Optionally, you can create a Tensorboard visualization to monitor training progress.
```
try:
    from google import colab
    COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
    COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
    %load_ext tensorboard
    %tensorboard --logdir {exp_dir}
else:
    print("To use tensorboard, please use this notebook in a Google Colab environment.")
```
## Training
You may start the training by using the trainer.fit() method. The number of steps/epochs is already specified in the trainer's config, and you may update it before creating the trainer.
```
# start model training
trainer.fit(model)
model.save_to(config.save_to)
```
# Evaluation
To see how the model performs, we can evaluate the trained model on a data file. Here we load the best checkpoint (the one with the lowest validation loss) and create a model (eval_model) from it. We also create a new trainer (eval_trainer) to show how evaluation is done after training, when you have just the checkpoints. If you perform the evaluation in the same script as the training, you may reuse the model and trainer you used for training.
```
# extract the path of the best checkpoint from the training, you may update it to any checkpoint
checkpoint_path = trainer.checkpoint_callback.best_model_path
# Create an evaluation model and load the checkpoint
eval_model = nemo_nlp.models.TextClassificationModel.load_from_checkpoint(checkpoint_path=checkpoint_path)
# create a dataloader config for evaluation, the same data file provided in validation_ds is used here
# file_path can get updated with any file
eval_config = OmegaConf.create({'file_path': config.model.validation_ds.file_path, 'batch_size': 64, 'shuffle': False, 'num_samples': -1})
eval_model.setup_test_data(test_data_config=eval_config)
#eval_dataloader = eval_model._create_dataloader_from_config(cfg=eval_config, mode='test')
# a new trainer is created to show how to evaluate a checkpoint from an already trained model
# create a copy of the trainer config and update it to be used for final evaluation
eval_trainer_cfg = config.trainer.copy()
eval_trainer_cfg.accelerator = 'gpu' if torch.cuda.is_available() else 'cpu' # it is safer to perform evaluation on single GPU as PT is buggy with the last batch on multi-GPUs
eval_trainer_cfg.strategy = None # 'ddp' is buggy with test process in the current PT, it looks like it has been fixed in the latest master
eval_trainer = pl.Trainer(**eval_trainer_cfg)
eval_trainer.test(model=eval_model, verbose=False) # test_dataloaders=eval_dataloader,
```
# Inference
You may create a model from a saved checkpoint and use its `classifytext()` method to perform inference on a list of queries. No trainer is needed for inference.
```
# extract the path of the best checkpoint from the training, you may update it to any other checkpoint file
checkpoint_path = trainer.checkpoint_callback.best_model_path
# Create an evaluation model and load the checkpoint
infer_model = nemo_nlp.models.TextClassificationModel.load_from_checkpoint(checkpoint_path=checkpoint_path)
```
To see how the model performs, let's get the model's predictions for a few examples:
```
# move the model to the desired device for inference
# we move the model to "cuda" if available otherwise "cpu" would be used
if torch.cuda.is_available():
    infer_model.to("cuda")
else:
    infer_model.to("cpu")
# define the list of queries for inference
queries = ['by the end of no such thing the audience , like beatrice , has a watchful affection for the monster .',
'director rob marshall went out gunning to make a great one .',
'uneasy mishmash of styles and genres .']
# max_seq_length=512 is the maximum length BERT supports.
results = infer_model.classifytext(queries=queries, batch_size=3, max_seq_length=512)
print('The prediction results of some sample queries with the trained model:')
for query, result in zip(queries, results):
    print(f'Query : {query}')
    print(f'Predicted label: {result}')
```
## Training Script
If you have NeMo installed locally (e.g. cloned from GitHub), you can also train the model with `examples/nlp/text_classification/text_classification_with_bert.py`. This script contains an example of how to train, evaluate and perform inference with the TextClassificationModel.
For example, the following would train a model for 50 epochs on 2 GPUs on a classification task with 2 classes:
```
# python text_classification_with_bert.py \
#     model.dataset.num_classes=2 \
#     model.train_ds.file_path=PATH_TO_TRAIN_FILE \
#     model.validation_ds.file_path=PATH_TO_VAL_FILE \
#     trainer.max_epochs=50 \
#     trainer.devices=2 \
#     trainer.accelerator='gpu'
```
This script also reloads the best checkpoint after training is done, evaluates it on the dev set, and then performs inference on some sample queries.
By default, this script uses the `examples/nlp/text_classification/conf/text_classification_config.yaml` config file, and you may update any of its params from the command line. You may also use another config file like this:
```
# python text_classification_with_bert.py --config-name=PATH_TO_CONFIG_FILE \
#     model.dataset.num_classes=2 \
#     model.train_ds.file_path=PATH_TO_TRAIN_FILE \
#     model.validation_ds.file_path=PATH_TO_VAL_FILE \
#     trainer.max_epochs=50 \
#     trainer.devices=2 \
#     trainer.accelerator='gpu'
```
## Deployment
You can also deploy a model to an inference engine (such as TensorRT or ONNX Runtime) using the ONNX exporter.
If you don't have ONNX Runtime installed, let's install it:
```
!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu
# !mkdir -p ort
# %cd ort
# !git clean -xfd
# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .
# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
# !pip uninstall -y onnxruntime
# !pip uninstall -y onnxruntime-gpu
# !pip install --upgrade --force-reinstall ./build/Linux/Release/dist/onnxruntime*.whl
# %cd ..
```
Then export the model:
```
model.export(config.export_to)
```
And run some queries:
```
import numpy as np
import torch
from nemo.utils import logging
from nemo.collections.nlp.parts.utils_funcs import tensor2list
from nemo.collections.nlp.models.text_classification import TextClassificationModel
from nemo.collections.nlp.data.text_classification import TextClassificationDataset
import onnxruntime
def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

def postprocessing(results, labels):
    return [labels[str(r)] for r in results]

def create_infer_dataloader(model, queries):
    batch_size = len(queries)
    dataset = TextClassificationDataset(tokenizer=model.tokenizer, queries=queries, max_seq_length=512)
    return torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=2,
        pin_memory=True,
        drop_last=False,
        collate_fn=dataset.collate_fn,
    )
queries = ["by the end of no such thing the audience , like beatrice , has a watchful affection for the monster .",
"director rob marshall went out gunning to make a great one .",
"uneasy mishmash of styles and genres .",
"I love exotic science fiction / fantasy movies but this one was very unpleasant to watch . Suggestions and images of child abuse , mutilated bodies (live or dead) , other gruesome scenes , plot holes , boring acting made this a regretable experience , The basic idea of entering another person's mind is not even new to the movies or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4 / 10 since some special effects were nice ."]
model.eval()
infer_datalayer = create_infer_dataloader(model, queries)
ort_session = onnxruntime.InferenceSession(config.export_to)
for batch in infer_datalayer:
    input_ids, input_type_ids, input_mask, subtokens_mask = batch
    ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_ids),
                  ort_session.get_inputs()[1].name: to_numpy(input_mask),
                  ort_session.get_inputs()[2].name: to_numpy(input_type_ids)}
    ologits = ort_session.run(None, ort_inputs)
    alogits = np.asarray(ologits)
    logits = torch.from_numpy(alogits[0])
    preds = tensor2list(torch.argmax(logits, dim=-1))
    processed_results = postprocessing(preds, {"0": "negative", "1": "positive"})
    logging.info('The prediction results of some sample queries with the trained model:')
    for query, result in zip(queries, processed_results):
        logging.info(f'Query : {query}')
        logging.info(f'Predicted label: {result}')
    break
```
| github_jupyter |
# Software Design for Scientific Computing
----
## Unit 1: Decorators in Python
### Unit 1 Agenda
---
- Object orientation.
- **Decorators**.
### Decorators in Python
- They let you change the behavior of a function (*without modifying it*)
- They make it easy to reuse code
<img align="right" width="1000" src="https://external-content.duckduckgo.com/iu/?u=http%3A%2F%2Fvignette4.wikia.nocookie.net%2Fpowerlisting%2Fimages%2Fd%2Fdf%2FIron-Man-Transform.gif%2Frevision%2Flatest%3Fcb%3D20150514015703&f=1&nofb=1" alt="Fancy Iron Man suitup">
### Everything in Python is an object!
Objects are defined by their **identity**, **type**, and **value**.
```
x = 1
print(id(1), type(x))
x.__add__(2)
# Other objects
[1, 2, 3] # lists
5.2 # floats
"hola" # strings
```
### Functions
Functions are objects too
```
def saludar():
    print('Hola!')
id(saludar)
saludar.__name__
hablar = saludar # assign the function to another variable
hablar() # now we can call it
```
## Decorator (loose definition)
A decorator is a *function* **d** that takes another *function* **a** as a parameter and returns a new *function* **r**.
- **d**: the decorator function
- **a**: the function to decorate
- **r**: the decorated function
## Code
```
def d(a):
    def r(*args, **kwargs):
        # behavior before a runs
        result = a(*args, **kwargs)
        # behavior after a runs
        return result
    return r
```
## Code
```
def d(a):
    def r(*args, **kwargs):
        print("Starting execution of", a.__name__)
        result = a(*args, **kwargs)
        print("Finished execution of", a.__name__)
        return result
    return r
```
## Manipulating functions
```
def suma(x, y):
    return x + y
suma(1, 2)
suma_decorada = d(suma)
suma_decorada(1, 2)
```
## Syntactic sugar
Python 2.4 introduced the **@** notation for decorating functions.
```
def suma(x, y):
    return x + y
suma = d(suma)
suma(1,2)

@d
def suma(x, y):
    return x + y
```
## Example: measuring execution time
```
import time
def timer(method):
    '''Decorator to time a function runtime'''
    def wrapper(*args, **kwargs):
        t0 = time.time()
        output = method(*args, **kwargs)
        dt = time.time() - t0
        print(f'<{method.__name__} finished in {dt} seconds>')
        return output
    return wrapper

@timer
def fib(n):
    values = [0, 1]
    while values[-2] < n:
        values.append(values[-2] + values[-1])
        time.sleep(1)
    return values
lista = fib(1000)
lista
```
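One caveat with `timer`: after decoration, `fib.__name__` reports `'wrapper'`, because `wrapper` replaces the original function. The standard fix is `functools.wraps`. Here is the same decorator with it (and without the `time.sleep` call, so this sketch runs quickly):

```python
import functools
import time

def timer(method):
    '''Decorator to time a function, preserving its metadata.'''
    @functools.wraps(method)  # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        t0 = time.time()
        output = method(*args, **kwargs)
        print(f'<{method.__name__} finished in {time.time() - t0} seconds>')
        return output
    return wrapper

@timer
def fib(n):
    '''Fibonacci numbers until the second-to-last one reaches n.'''
    values = [0, 1]
    while values[-2] < n:
        values.append(values[-2] + values[-1])
    return values

fib.__name__  # 'fib' instead of 'wrapper'
```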
## Chained decorators
Similar to the mathematical concept of function composition.
```
@registrar_uso
@medir_tiempo_ejecucion
def mi_funcion(algunos, argumentos):
    # function body
    ...

def mi_funcion(algunos, argumentos):
    # function body
    ...
mi_funcion = registrar_uso(medir_tiempo_ejecucion(mi_funcion))
```
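Since `registrar_uso` and `medir_tiempo_ejecucion` are only sketched above, here is a minimal runnable version of the same composition (the decorator bodies are illustrative):

```python
import time

def registrar_uso(f):
    def wrapper(*args, **kwargs):
        print("call registered")
        return f(*args, **kwargs)
    return wrapper

def medir_tiempo_ejecucion(f):
    def wrapper(*args, **kwargs):
        t0 = time.time()
        result = f(*args, **kwargs)
        print(f"finished in {time.time() - t0:.6f} seconds")
        return result
    return wrapper

@registrar_uso            # applied last (outermost)
@medir_tiempo_ejecucion   # applied first (closest to the function)
def mi_funcion(x, y):
    return x + y

mi_funcion(1, 2)  # prints "call registered", then the timing, and returns 3
```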
## Decorators with parameters?
- These are called *decorator factories*
- They allow for more flexible decorators.
- Example: a decorator that coerces a function's return type
```
def to_string(user_function):
    def inner(*args, **kwargs):
        r = user_function(*args, **kwargs)
        return str(r)
    return inner

@to_string
def count():
    return 42
count()
```
## Wrapping the decorator in an outer function
```
def typer(t):
    def decorator(user_function):
        def inner(*args, **kwargs):
            r = user_function(*args, **kwargs)
            return t(r)
        return inner
    return decorator

@typer(str)
def count():
    return 42
count()

@typer(round)
def edad():
    return 25.5
edad()
```
## Decorator classes
- Decorators with state
- Better-organized code
```
class Decorador:
    def __init__(self, a):
        self.a = a
        self.variable = None

    def __call__(self, *args, **kwargs):
        # behavior before a runs
        r = self.a(*args, **kwargs)
        # behavior after a runs
        return r

@Decorador
def edad():
    return 25.5
edad() # the __call__ method runs
```
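The unused `self.variable` above hints at the main point of decorator classes: they can carry state between calls. A minimal sketch (the `Contador` class is illustrative, not from the course material) that counts how many times the decorated function has been called:

```python
class Contador:
    """Decorator with state: counts calls to the decorated function."""
    def __init__(self, a):
        self.a = a
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1
        return self.a(*args, **kwargs)

@Contador
def saludar():
    return "Hola!"

saludar()
saludar()
saludar.count  # 2
```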
## Decorators (stricter definition)
A decorator is a *callable* **d** that takes an *object* **a** as a parameter and returns a new *object* **r** (usually of the same type as the original, or with the same interface).
- **d**: a class defining the <code>\_\_call\_\_</code> method
- **a**: any object
- **r**: the decorated object
## Decorating classes
Identity
```
def identidad(C):
    return C

@identidad
class A:
    pass
A()
```
## Decorating classes
Completely changing a class's behavior
```
def corromper(C):
    return "hola"

@corromper
class A:
    pass
A()
```
## Decorating classes
Replacing a class with a new one
```
def reemplazar_con_X(C):
    class X():
        pass
    return X

@reemplazar_con_X
class MiClase():
    pass
MiClase
```
## Some common decorators
Python's standard library includes <code>classmethod</code>, <code>staticmethod</code>, and <code>property</code>
```
class Student:
    def __init__(self, first_name, last_name):
        self.first_name = first_name.capitalize()
        self.last_name = last_name.capitalize()
        self.full_name = ' '.join([self.first_name, self.last_name])

    @property
    def name(self):
        return self.full_name

    @name.setter
    def name(self, new_name):
        self.full_name = new_name.title()

    @classmethod
    def from_string(cls, name_str):
        first_name, last_name = map(str, name_str.split())
        student = cls(first_name, last_name)
        return student

    @staticmethod
    def is_full_name(name_str):
        names = name_str.split(' ')
        return len(names) > 1

s = Student('maria', 'perez')
s.name = 'mariana perez'
s.name
Student.from_string('José Gonzalez')
Student.is_full_name('JoséGonzalez')
```
| github_jupyter |
# quant-econ Solutions: LQ Control Problems
Solutions for http://quant-econ.net/py/lqcontrol.html
```
%matplotlib inline
```
Common imports for the exercises
```
import numpy as np
import matplotlib.pyplot as plt
from quantecon import LQ
```
## Exercise 1
Here's one solution.
We use some fancy plot commands to get a certain style; feel free to use simpler ones.
The model is an LQ permanent income / life-cycle model with hump-shaped income
$$ y_t = m_1 t + m_2 t^2 + \sigma w_{t+1} $$
where $\{w_t\}$ is iid $N(0, 1)$ and the coefficients $m_1$ and $m_2$ are chosen so that
$p(t) = m_1 t + m_2 t^2$ has an inverted U shape with
* $p(0) = 0, p(T/2) = \mu$, and
* $p(T) = 0$.
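These conditions pin down $m_1$ and $m_2$; a quick numerical sanity check of the formulas used in the code below:

```python
# same parameter values and coefficient formulas as the solution code
T, mu = 50, 2
m1 = T * (mu / (T / 2) ** 2)
m2 = -(mu / (T / 2) ** 2)

p = lambda t: m1 * t + m2 * t ** 2
print(p(0), p(T / 2), p(T))  # 0.0 2.0 0.0 — matches p(0)=0, p(T/2)=mu, p(T)=0
```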
```
# == Model parameters == #
r = 0.05
beta = 1 / (1 + r)
T = 50
c_bar = 1.5
sigma = 0.15
mu = 2
q = 1e4
m1 = T * (mu / (T/2)**2)
m2 = - (mu / (T/2)**2)
# == Formulate as an LQ problem == #
Q = 1
R = np.zeros((4, 4))
Rf = np.zeros((4, 4))
Rf[0, 0] = q
A = [[1 + r, -c_bar, m1, m2],
[0, 1, 0, 0],
[0, 1, 1, 0],
[0, 1, 2, 1]]
B = [[-1],
[0],
[0],
[0]]
C = [[sigma],
[0],
[0],
[0]]
# == Compute solutions and simulate == #
lq = LQ(Q, R, A, B, C, beta=beta, T=T, Rf=Rf)
x0 = (0, 1, 0, 0)
xp, up, wp = lq.compute_sequence(x0)
# == Convert results back to assets, consumption and income == #
ap = xp[0, :] # Assets
c = up.flatten() + c_bar # Consumption
time = np.arange(1, T+1)
income = wp[0, 1:] + m1 * time + m2 * time**2 # Income
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for i in range(n_rows):
    axes[i].grid()
    axes[i].set_xlabel(r'Time')
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor' : bbox, 'loc' : 3, 'mode' : 'expand'}
p_args = {'lw' : 2, 'alpha' : 0.7}
axes[0].plot(range(1, T+1), income, 'g-', label="non-financial income", **p_args)
axes[0].plot(range(T), c, 'k-', label="consumption", **p_args)
axes[0].legend(ncol=2, **legend_args)
axes[1].plot(range(T+1), ap.flatten(), 'b-', label="assets", **p_args)
axes[1].plot(range(T+1), np.zeros(T+1), 'k-')
axes[1].legend(ncol=1, **legend_args)
plt.show()
```
## Exercise 2
This is a permanent income / life-cycle model with polynomial growth in income
over working life followed by a fixed retirement income. The model is solved
by combining two LQ programming problems as described in the lecture.
```
# == Model parameters == #
r = 0.05
beta = 1 / (1 + r)
T = 60
K = 40
c_bar = 4
sigma = 0.35
mu = 4
q = 1e4
s = 1
m1 = 2 * mu / K
m2 = - mu / K**2
# == Formulate LQ problem 1 (retirement) == #
Q = 1
R = np.zeros((4, 4))
Rf = np.zeros((4, 4))
Rf[0, 0] = q
A = [[1 + r, s - c_bar, 0, 0],
[0, 1, 0, 0],
[0, 1, 1, 0],
[0, 1, 2, 1]]
B = [[-1],
[0],
[0],
[0]]
C = [[0],
[0],
[0],
[0]]
# == Initialize LQ instance for retired agent == #
lq_retired = LQ(Q, R, A, B, C, beta=beta, T=T-K, Rf=Rf)
# == Iterate back to start of retirement, record final value function == #
for i in range(T-K):
    lq_retired.update_values()
Rf2 = lq_retired.P
# == Formulate LQ problem 2 (working life) == #
R = np.zeros((4, 4))
A = [[1 + r, -c_bar, m1, m2],
[0, 1, 0, 0],
[0, 1, 1, 0],
[0, 1, 2, 1]]
B = [[-1],
[0],
[0],
[0]]
C = [[sigma],
[0],
[0],
[0]]
# == Set up working life LQ instance with terminal Rf from lq_retired == #
lq_working = LQ(Q, R, A, B, C, beta=beta, T=K, Rf=Rf2)
# == Simulate working state / control paths == #
x0 = (0, 1, 0, 0)
xp_w, up_w, wp_w = lq_working.compute_sequence(x0)
# == Simulate retirement paths (note the initial condition) == #
xp_r, up_r, wp_r = lq_retired.compute_sequence(xp_w[:, K])
# == Convert results back to assets, consumption and income == #
xp = np.column_stack((xp_w, xp_r[:, 1:]))
assets = xp[0, :] # Assets
up = np.column_stack((up_w, up_r))
c = up.flatten() + c_bar # Consumption
time = np.arange(1, K+1)
income_w = wp_w[0, 1:K+1] + m1 * time + m2 * time**2 # Income
income_r = np.ones(T-K) * s
income = np.concatenate((income_w, income_r))
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for i in range(n_rows):
    axes[i].grid()
    axes[i].set_xlabel(r'Time')
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor' : bbox, 'loc' : 3, 'mode' : 'expand'}
p_args = {'lw' : 2, 'alpha' : 0.7}
axes[0].plot(range(1, T+1), income, 'g-', label="non-financial income", **p_args)
axes[0].plot(range(T), c, 'k-', label="consumption", **p_args)
axes[0].legend(ncol=2, **legend_args)
axes[1].plot(range(T+1), assets, 'b-', label="assets", **p_args)
axes[1].plot(range(T+1), np.zeros(T+1), 'k-')
axes[1].legend(ncol=1, **legend_args)
plt.show()
```
## Exercise 3
The first task is to find the matrices $A, B, C, Q, R$ that define the LQ problem.
Recall that $x_t = (\bar q_t \;\, q_t \;\, 1)'$, while $u_t = q_{t+1} - q_t$.
Letting $m_0 := (a_0 - c) / 2a_1$ and $m_1 := 1 / 2 a_1$, we can write $\bar q_t = m_0 + m_1 d_t$, and then, with some manipulation,
$$
\bar q_{t+1} = m_0 (1 - \rho) + \rho \bar q_t + m_1 \sigma w_{t+1}
$$
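To spell out the manipulation: assuming, as in the lecture, that the demand shock follows the AR(1) process $d_{t+1} = \rho d_t + \sigma w_{t+1}$, substitute $\bar q_t = m_0 + m_1 d_t$:

$$
\bar q_{t+1} = m_0 + m_1 d_{t+1}
= m_0 + m_1 (\rho d_t + \sigma w_{t+1})
= m_0 + \rho (\bar q_t - m_0) + m_1 \sigma w_{t+1}
= m_0 (1 - \rho) + \rho \bar q_t + m_1 \sigma w_{t+1}
$$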
By our definition of $u_t$, the dynamics of $q_t$ are $q_{t+1} = q_t + u_t$.
Using these facts you should be able to build the correct $A, B, C$ matrices (and then check them against those found in the solution code below).
Suitable $R, Q$ matrices can be found by inspecting the objective
function, which we repeat here for convenience:
$$
\min
\mathbb E \,
\left\{
\sum_{t=0}^{\infty} \beta^t
a_1 ( q_t - \bar q_t)^2 + \gamma u_t^2
\right\}
$$
Our solution code is
```
# == Model parameters == #
a0 = 5
a1 = 0.5
sigma = 0.15
rho = 0.9
gamma = 1
beta = 0.95
c = 2
T = 120
# == Useful constants == #
m0 = (a0 - c) / (2 * a1)
m1 = 1 / (2 * a1)
# == Formulate LQ problem == #
Q = gamma
R = [[a1, -a1, 0],
[-a1, a1, 0],
[0, 0, 0]]
A = [[rho, 0, m0 * (1 - rho)],
[0, 1, 0],
[0, 0, 1]]
B = [[0],
[1],
[0]]
C = [[m1 * sigma],
[0],
[0]]
lq = LQ(Q, R, A, B, C=C, beta=beta)
# == Simulate state / control paths == #
x0 = (m0, 2, 1)
xp, up, wp = lq.compute_sequence(x0, ts_length=150)
q_bar = xp[0, :]
q = xp[1, :]
# == Plot simulation results == #
fig, ax = plt.subplots(figsize=(10, 6.5))
ax.set_xlabel('Time')
# == Some fancy plotting stuff -- simplify if you prefer == #
bbox = (0., 1.01, 1., .101)
legend_args = {'bbox_to_anchor' : bbox, 'loc' : 3, 'mode' : 'expand'}
p_args = {'lw' : 2, 'alpha' : 0.6}
time = range(len(q))
ax.set_xlim(0, max(time))
ax.plot(time, q_bar, 'k-', lw=2, alpha=0.6, label=r'$\bar q_t$')
ax.plot(time, q, 'b-', lw=2, alpha=0.6, label=r'$q_t$')
ax.legend(ncol=2, **legend_args)
s = r'dynamics with $\gamma = {}$'.format(gamma)
ax.text(max(time) * 0.6, 1 * q_bar.max(), s, fontsize=14)
plt.show()
```
```
import pandas as pd
df = pd.read_csv('telco_churn.csv')
df.head()
pd.set_option('display.max_columns', df.shape[1])
df.info()
del df['customerID']
# pd.to_numeric(df['TotalCharges']) fails on the raw column: some entries are blank strings
df = df.replace(r'^\s+$', 0, regex=True)  # replace whitespace-only strings with 0
df['TotalCharges'] = pd.to_numeric(df['TotalCharges'])
df = pd.get_dummies(df)
del df['Churn_No']
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Split data into X and y
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
xgb = XGBClassifier(booster='gbtree', objective='binary:logistic',random_state=2, n_jobs=-1)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
score = accuracy_score(y_test, y_pred)
print('Score: ' + str(score))
xgb = XGBClassifier(random_state=2, n_jobs=-1)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
score = accuracy_score(y_test, y_pred)
print('Score: ' + str(score))
# Import GridSearchCV
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
def grid_search(params, random=False):
xgb = XGBClassifier(booster='gbtree', objective='binary:logistic', random_state=2)
if random:
grid = RandomizedSearchCV(xgb, params, cv=5, n_jobs=-1, random_state=2)
else:
# Instantiate GridSearchCV as grid_reg
grid = GridSearchCV(xgb, params, cv=5, n_jobs=-1)
# Fit grid_reg on X_train and y_train
grid.fit(X_train, y_train)
# Extract best params
best_params = grid.best_params_
# Print best params
print("Best params:", best_params)
# Compute best score
best_score = grid.best_score_
# Print best score
print("Training score: {:.5f}".format(best_score))
# Predict test set labels
y_pred = grid.predict(X_test)
# Compute rmse_test
acc = accuracy_score(y_test, y_pred)
# Print rmse_test
print('Test score: {:.5f}'.format(acc))
grid_search(params={'n_estimators':[100, 200, 400, 800]})
grid_search(params={'learning_rate':[0.01, 0.05, 0.1, 0.2, 0.3]})
grid_search(params={'max_depth':[2, 3, 5, 6, 8]})
grid_search(params={'gamma':[0, 0.01, 0.1, 0.5, 1, 2]})
grid_search(params={'min_child_weight':[0.5, 1, 2, 3, 5]})
grid_search(params={'subsample':[0.5, 0.7, 0.8, 0.9, 1]})
grid_search(params={'colsample_bytree':[0.5, 0.7, 0.8, 0.9, 1]})
model = XGBClassifier(random_state=2)
eval_set = [(X_test, y_test)]
eval_metric='error'
model.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set)
# make predictions for test data
y_pred = model.predict(X_test)
# evaluate predictions
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
model = XGBClassifier(random_state=2)
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_metric="error", eval_set=eval_set, early_stopping_rounds=10, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
# evaluate predictions
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
model = XGBClassifier(random_state=2, n_estimators=5000)
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_metric="error", eval_set=eval_set, early_stopping_rounds=50)
# make predictions for test data
y_pred = model.predict(X_test)
# evaluate predictions
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
grid_search(params={'max_depth':[2, 3, 5, 6, 8], 'gamma':[0, 0.1, 0.5, 1, 2, 5],
'learning_rate':[0.01, 0.05, 0.1, 0.2, 0.3], 'n_estimators':[48]}, random=True)
grid_search(params={'max_depth':[2, 3, 4, 5, 6, 7, 8], 'n_estimators':[48]})
grid_search(params={'learning_rate':[0.01, 0.05, 0.1, 0.2, 0.3], 'max_depth':[3], 'n_estimators':[48]})
grid_search(params={'learning_rate':[0.08, 0.09, 0.1, 0.11, 0.12], 'max_depth':[3], 'n_estimators':[48]})
grid_search(params={'min_child_weight':[0.5, 1, 2, 3, 4, 5], 'max_depth':[3], 'n_estimators':[48]})
grid_search(params={'subsample':[0.5, 0.7, 0.8, 0.9, 1], 'min_child_weight':[5], 'max_depth':[3], 'n_estimators':[48]})
grid_search(params={'colsample_bytree':[0.5, 0.7, 0.8, 0.9, 1], 'subsample':[0.7],
                    'min_child_weight':[5], 'max_depth':[3], 'n_estimators':[48]})
model = XGBClassifier(max_depth=3, subsample=0.7, min_child_weight=5, colsample_bytree=0.7,
random_state=2, n_estimators=5000)
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_metric="error", eval_set=eval_set, early_stopping_rounds=50)
# make predictions for test data
y_pred = model.predict(X_test)
# evaluate predictions
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
grid_search(params={'learning_rate':[0.01, 0.025, 0.05, 0.075, 0.1, 0.2], 'colsample_bytree':[0.7],
                    'subsample':[0.7], 'min_child_weight':[5], 'max_depth':[3], 'n_estimators':[81]})
grid_search(params={'colsample_bytree':[0.7],
'subsample':[0.7], 'min_child_weight':[5], 'max_depth':[3], 'n_estimators':[81]})
```
**Chapter 18 – Reinforcement Learning**
_This notebook contains all the sample code in chapter 18_.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/18_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!apt update && apt install -y libpq-dev libsdl2-dev swig xorg-dev xvfb
!pip install -q -U tf-agents-nightly pyvirtualdisplay gym[atari]
IS_COLAB = True
except Exception:
IS_COLAB = False
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
if not tf.config.list_physical_devices('GPU'):
print("No GPU was detected. CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# To get smooth animations
import matplotlib.animation as animation
mpl.rc('animation', html='jshtml')
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rl"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Introduction to OpenAI gym
In this notebook we will be using [OpenAI gym](https://gym.openai.com/), a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning *agents* to interact with. Let's start by importing `gym`:
```
import gym
```
Let's list all the available environments:
```
gym.envs.registry.all()
```
The Cart-Pole is a very simple environment composed of a cart that can move left or right, and a pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright.
```
env = gym.make('CartPole-v1')
```
Let's initialize the environment by calling its `reset()` method. This returns an observation:
```
env.seed(42)
obs = env.reset()
```
Observations vary depending on the environment. In this case it is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity.
```
obs
```
An environment can be visualized by calling its `render()` method, and you can pick the rendering mode (the rendering options depend on the environment).
**Warning**: some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify `mode="rgb_array"`. In general you can safely ignore that window. However, if Jupyter is running on a headless server (i.e., without a screen) it will raise an exception. One way to avoid this is to install a fake X server like [Xvfb](http://en.wikipedia.org/wiki/Xvfb). On Debian or Ubuntu:
```bash
$ apt update
$ apt install -y xvfb
```
You can then start Jupyter using the `xvfb-run` command:
```bash
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
```
Alternatively, you can install the [pyvirtualdisplay](https://github.com/ponty/pyvirtualdisplay) Python library which wraps Xvfb:
```bash
python3 -m pip install -U pyvirtualdisplay
```
And run the following code:
```
try:
import pyvirtualdisplay
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
except ImportError:
pass
env.render()
```
In this example we will set `mode="rgb_array"` to get an image of the environment as a NumPy array:
```
img = env.render(mode="rgb_array")
img.shape
def plot_environment(env, figsize=(5,4)):
plt.figure(figsize=figsize)
img = env.render(mode="rgb_array")
plt.imshow(img)
plt.axis("off")
return img
plot_environment(env)
plt.show()
```
Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like:
```
env.action_space
```
Yep, just two possible actions: accelerate towards the left or towards the right.
Since the pole is leaning toward the right (`obs[2] > 0`), let's accelerate the cart toward the right:
```
action = 1 # accelerate right
obs, reward, done, info = env.step(action)
obs
```
Notice that the cart is now moving toward the right (`obs[1] > 0`). The pole is still tilted toward the right (`obs[2] > 0`), but its angular velocity is now negative (`obs[3] < 0`), so it will likely be tilted toward the left after the next step.
```
plot_environment(env)
save_fig("cart_pole_plot")
```
Looks like it's doing what we're telling it to do!
The environment also tells the agent how much reward it got during the last step:
```
reward
```
When the game is over, the environment returns `done=True`:
```
done
```
Finally, `info` is an environment-specific dictionary that can provide some extra information that you may find useful for debugging or for training. For example, in some games it may indicate how many lives the agent has.
```
info
```
The sequence of steps between the moment the environment is reset until it is done is called an "episode". At the end of an episode (i.e., when `step()` returns `done=True`), you should reset the environment before you continue to use it.
```
if done:
obs = env.reset()
```
Now how can we make the pole remain upright? We will need to define a _policy_ for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do.
# A simple hard-coded policy
Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and _vice versa_. Let's see if that works:
```
env.seed(42)
def basic_policy(obs):
angle = obs[2]
return 0 if angle < 0 else 1
totals = []
for episode in range(500):
episode_rewards = 0
obs = env.reset()
for step in range(200):
action = basic_policy(obs)
obs, reward, done, info = env.step(action)
episode_rewards += reward
if done:
break
totals.append(episode_rewards)
np.mean(totals), np.std(totals), np.min(totals), np.max(totals)
```
Well, as expected, this strategy is a bit too basic: the best it did was to keep the pole up for only 68 steps. This environment is considered solved when the agent keeps the pole up for 200 steps.
Let's visualize one episode:
```
env.seed(42)
frames = []
obs = env.reset()
for step in range(200):
img = env.render(mode="rgb_array")
frames.append(img)
action = basic_policy(obs)
obs, reward, done, info = env.step(action)
if done:
break
```
Now show the animation:
```
def update_scene(num, frames, patch):
patch.set_data(frames[num])
return patch,
def plot_animation(frames, repeat=False, interval=40):
fig = plt.figure()
patch = plt.imshow(frames[0])
plt.axis('off')
anim = animation.FuncAnimation(
fig, update_scene, fargs=(frames, patch),
frames=len(frames), repeat=repeat, interval=interval)
plt.close()
return anim
plot_animation(frames)
```
Clearly the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that!
# Neural Network Policies
Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`.
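As a tiny illustration of this sampling step (the helper name below is ours, not part of the book's code), drawing a uniform random number and comparing it to `p` yields action 0 with probability `p`:

```python
import numpy as np

# Sample action 0 (left) with probability p_left, else action 1 (right).
# This mirrors the comparison used later in render_policy_net().
def sample_action(p_left, rng=np.random):
    return int(rng.rand() > p_left)

# Degenerate cases: p_left = 1.0 always goes left; p_left = 0.0 (almost surely) goes right.
```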
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
n_inputs = 4 # == env.observation_space.shape[0]
model = keras.models.Sequential([
keras.layers.Dense(5, activation="elu", input_shape=[n_inputs]),
keras.layers.Dense(1, activation="sigmoid"),
])
```
In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state.
You may wonder why we plan to pick a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between _exploring_ new actions and _exploiting_ the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried.
Let's write a small function that will run the model to play one episode, and return the frames so we can display an animation:
```
def render_policy_net(model, n_max_steps=200, seed=42):
frames = []
env = gym.make("CartPole-v1")
env.seed(seed)
np.random.seed(seed)
obs = env.reset()
for step in range(n_max_steps):
frames.append(env.render(mode="rgb_array"))
left_proba = model.predict(obs.reshape(1, -1))
action = int(np.random.rand() > left_proba)
obs, reward, done, info = env.step(action)
if done:
break
env.close()
return frames
```
Now let's look at how well this randomly initialized policy network performs:
```
frames = render_policy_net(model)
plot_animation(frames)
```
Yeah... pretty bad. The neural network will have to learn to do better. First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right.
We can make the same net play in 50 different environments in parallel (this will give us a diverse training batch at each step), and train for 5000 iterations. We also reset environments when they are done. We train the model using a custom training loop so we can easily use the predictions at each training step to advance the environments.
```
n_environments = 50
n_iterations = 5000
envs = [gym.make("CartPole-v1") for _ in range(n_environments)]
for index, env in enumerate(envs):
env.seed(index)
np.random.seed(42)
observations = [env.reset() for env in envs]
optimizer = keras.optimizers.RMSprop()
loss_fn = keras.losses.binary_crossentropy
for iteration in range(n_iterations):
# if angle < 0, we want proba(left) = 1., or else proba(left) = 0.
target_probas = np.array([([1.] if obs[2] < 0 else [0.])
for obs in observations])
with tf.GradientTape() as tape:
left_probas = model(np.array(observations))
loss = tf.reduce_mean(loss_fn(target_probas, left_probas))
print("\rIteration: {}, Loss: {:.3f}".format(iteration, loss.numpy()), end="")
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
actions = (np.random.rand(n_environments, 1) > left_probas.numpy()).astype(np.int32)
for env_index, env in enumerate(envs):
obs, reward, done, info = env.step(actions[env_index][0])
observations[env_index] = obs if not done else env.reset()
for env in envs:
env.close()
frames = render_policy_net(model)
plot_animation(frames)
```
Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own, one that does not wobble as much.
# Policy Gradients
To train this neural network we will need to define the target probabilities `y`. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in an episode, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the _credit assignment problem_.
The _Policy Gradients_ algorithm tackles this problem by first playing multiple episodes, then making the actions in good episodes slightly more likely, while actions in bad episodes are made slightly less likely. First we play, then we go back and think about what we did.
Let's start by creating a function to play a single step using the model. We will also pretend for now that whatever action it takes is the right one, so we can compute the loss and its gradients (we will just save these gradients for now, and modify them later depending on how good or bad the action turned out to be):
```
def play_one_step(env, obs, model, loss_fn):
with tf.GradientTape() as tape:
left_proba = model(obs[np.newaxis])
action = (tf.random.uniform([1, 1]) > left_proba)
y_target = tf.constant([[1.]]) - tf.cast(action, tf.float32)
loss = tf.reduce_mean(loss_fn(y_target, left_proba))
grads = tape.gradient(loss, model.trainable_variables)
obs, reward, done, info = env.step(int(action[0, 0].numpy()))
return obs, reward, done, grads
```
If `left_proba` is high, then `action` will most likely be `False` (since a random number uniformly sampled between 0 and 1 will probably not be greater than `left_proba`). And `False` means 0 when you cast it to a number, so `y_target` would be equal to 1 - 0 = 1. In other words, we set the target to 1, meaning we pretend that the probability of going left should have been 100% (so we took the right action).
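Spelled out in plain Python (the helper name is purely illustrative), the target arithmetic is simply:

```python
# y_target = 1 - cast(action), where action is a boolean meaning "went right".
def target_for(went_right):
    return 1.0 - float(went_right)

# went left  (False) -> target 1.0: pretend P(left) should have been 100%
# went right (True)  -> target 0.0: pretend P(left) should have been 0%
```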
Now let's create another function that will rely on the `play_one_step()` function to play multiple episodes, returning all the rewards and gradients, for each episode and each step:
```
def play_multiple_episodes(env, n_episodes, n_max_steps, model, loss_fn):
all_rewards = []
all_grads = []
for episode in range(n_episodes):
current_rewards = []
current_grads = []
obs = env.reset()
for step in range(n_max_steps):
obs, reward, done, grads = play_one_step(env, obs, model, loss_fn)
current_rewards.append(reward)
current_grads.append(grads)
if done:
break
all_rewards.append(current_rewards)
all_grads.append(current_grads)
return all_rewards, all_grads
```
The Policy Gradients algorithm uses the model to play the episode several times (e.g., 10 times), then it goes back and looks at all the rewards, discounts them and normalizes them. So let's create a couple of functions for that: the first will compute discounted rewards; the second will normalize the discounted rewards across many episodes.
```
def discount_rewards(rewards, discount_rate):
discounted = np.array(rewards)
for step in range(len(rewards) - 2, -1, -1):
discounted[step] += discounted[step + 1] * discount_rate
return discounted
def discount_and_normalize_rewards(all_rewards, discount_rate):
all_discounted_rewards = [discount_rewards(rewards, discount_rate)
for rewards in all_rewards]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean) / reward_std
for discounted_rewards in all_discounted_rewards]
```
Say there were 3 actions, and after each action there was a reward: first 10, then 0, then -50. If we use a discount factor of 80%, then the 3rd action will get -50 (full credit for the last reward), but the 2nd action will only get -40 (80% credit for the last reward), and the 1st action will get 80% of -40 (-32) plus full credit for the first reward (+10), which leads to a discounted reward of -22:
```
discount_rewards([10, 0, -50], discount_rate=0.8)
```
To normalize all discounted rewards across all episodes, we compute the mean and standard deviation of all the discounted rewards, and we subtract the mean from each discounted reward, and divide by the standard deviation:
```
discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8)
n_iterations = 150
n_episodes_per_update = 10
n_max_steps = 200
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=0.01)
loss_fn = keras.losses.binary_crossentropy
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(5, activation="elu", input_shape=[4]),
keras.layers.Dense(1, activation="sigmoid"),
])
env = gym.make("CartPole-v1")
env.seed(42);
for iteration in range(n_iterations):
all_rewards, all_grads = play_multiple_episodes(
env, n_episodes_per_update, n_max_steps, model, loss_fn)
total_rewards = sum(map(sum, all_rewards)) # Not shown in the book
print("\rIteration: {}, mean rewards: {:.1f}".format( # Not shown
iteration, total_rewards / n_episodes_per_update), end="") # Not shown
all_final_rewards = discount_and_normalize_rewards(all_rewards,
discount_rate)
all_mean_grads = []
for var_index in range(len(model.trainable_variables)):
mean_grads = tf.reduce_mean(
[final_reward * all_grads[episode_index][step][var_index]
for episode_index, final_rewards in enumerate(all_final_rewards)
for step, final_reward in enumerate(final_rewards)], axis=0)
all_mean_grads.append(mean_grads)
optimizer.apply_gradients(zip(all_mean_grads, model.trainable_variables))
env.close()
frames = render_policy_net(model)
plot_animation(frames)
```
# Markov Chains
```
np.random.seed(42)
transition_probabilities = [ # shape=[s, s']
[0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3
[0.0, 0.0, 0.9, 0.1], # from s1 to ...
[0.0, 1.0, 0.0, 0.0], # from s2 to ...
[0.0, 0.0, 0.0, 1.0]] # from s3 to ...
n_max_steps = 50
def print_sequence():
current_state = 0
print("States:", end=" ")
for step in range(n_max_steps):
print(current_state, end=" ")
if current_state == 3:
break
current_state = np.random.choice(range(4), p=transition_probabilities[current_state])
else:
print("...", end="")
print()
for _ in range(10):
print_sequence()
```
# Markov Decision Process
Let's define some transition probabilities, rewards and possible actions. For example, in state s0, if action a0 is chosen then with probability 0.7 we will go to state s0 with reward +10, with probability 0.3 we will go to state s1 with no reward, and we will never go to state s2 (so the transition probabilities are `[0.7, 0.3, 0.0]`, and the rewards are `[+10, 0, 0]`):
```
transition_probabilities = [ # shape=[s, a, s']
[[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],
[[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]],
[None, [0.8, 0.1, 0.1], None]]
rewards = [ # shape=[s, a, s']
[[+10, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, -50]],
[[0, 0, 0], [+40, 0, 0], [0, 0, 0]]]
possible_actions = [[0, 1, 2], [0, 2], [1]]
```
# Q-Value Iteration
```
Q_values = np.full((3, 3), -np.inf) # -np.inf for impossible actions
for state, actions in enumerate(possible_actions):
Q_values[state, actions] = 0.0 # for all possible actions
gamma = 0.90 # the discount factor
history1 = [] # Not shown in the book (for the figure below)
for iteration in range(50):
Q_prev = Q_values.copy()
history1.append(Q_prev) # Not shown
for s in range(3):
for a in possible_actions[s]:
Q_values[s, a] = np.sum([
transition_probabilities[s][a][sp]
* (rewards[s][a][sp] + gamma * np.max(Q_prev[sp]))
for sp in range(3)])
history1 = np.array(history1) # Not shown
Q_values
np.argmax(Q_values, axis=1)
```
The optimal policy for this MDP, when using a discount factor of 0.90, is to choose action a0 when in state s0, and choose action a0 when in state s1, and finally choose action a1 (the only possible action) when in state s2.
Let's try again with a discount factor of 0.95:
```
Q_values = np.full((3, 3), -np.inf) # -np.inf for impossible actions
for state, actions in enumerate(possible_actions):
Q_values[state, actions] = 0.0 # for all possible actions
gamma = 0.95 # the discount factor
for iteration in range(50):
Q_prev = Q_values.copy()
for s in range(3):
for a in possible_actions[s]:
Q_values[s, a] = np.sum([
transition_probabilities[s][a][sp]
* (rewards[s][a][sp] + gamma * np.max(Q_prev[sp]))
for sp in range(3)])
Q_values
np.argmax(Q_values, axis=1)
```
Now the policy has changed! In state s1, we now prefer to go through the fire (choose action a2). This is because the discount factor is larger so the agent values the future more, and it is therefore ready to pay an immediate penalty in order to get more future rewards.
# Q-Learning
Q-Learning works by watching an agent play (e.g., randomly) and gradually improving its estimates of the Q-Values. Once it has accurate Q-Value estimates (or close enough), then the optimal policy consists in choosing the action that has the highest Q-Value (i.e., the greedy policy).
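As a small illustration with made-up Q-value estimates (three states, three actions, `-inf` marking impossible actions), extracting the greedy policy is just an argmax over the action axis:

```python
import numpy as np

# Hypothetical Q-value estimates, shape [state, action]:
Q = np.array([[ 18.0,    17.0,    16.5],
              [  0.0, -np.inf,    -5.0],
              [-np.inf,  20.0, -np.inf]])
greedy_policy = np.argmax(Q, axis=1)  # best action in each state
# -> array([0, 0, 1])
```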
We will need to simulate an agent moving around in the environment, so let's define a function to perform some action and get the new state and a reward:
```
def step(state, action):
probas = transition_probabilities[state][action]
next_state = np.random.choice([0, 1, 2], p=probas)
reward = rewards[state][action][next_state]
return next_state, reward
```
We also need an exploration policy, which can be any policy, as long as it visits every possible state many times. We will just use a random policy, since the state space is very small:
```
def exploration_policy(state):
return np.random.choice(possible_actions[state])
```
Now let's initialize the Q-Values like earlier, and run the Q-Learning algorithm:
```
np.random.seed(42)
Q_values = np.full((3, 3), -np.inf)
for state, actions in enumerate(possible_actions):
Q_values[state][actions] = 0
alpha0 = 0.05 # initial learning rate
decay = 0.005 # learning rate decay
gamma = 0.90 # discount factor
state = 0 # initial state
history2 = [] # Not shown in the book
for iteration in range(10000):
history2.append(Q_values.copy()) # Not shown
action = exploration_policy(state)
next_state, reward = step(state, action)
next_value = np.max(Q_values[next_state]) # greedy policy at the next step
alpha = alpha0 / (1 + iteration * decay)
Q_values[state, action] *= 1 - alpha
Q_values[state, action] += alpha * (reward + gamma * next_value)
state = next_state
history2 = np.array(history2) # Not shown
Q_values
np.argmax(Q_values, axis=1) # optimal action for each state
true_Q_value = history1[-1, 0, 0]
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].set_ylabel("Q-Value$(s_0, a_0)$", fontsize=14)
axes[0].set_title("Q-Value Iteration", fontsize=14)
axes[1].set_title("Q-Learning", fontsize=14)
for ax, width, history in zip(axes, (50, 10000), (history1, history2)):
ax.plot([0, width], [true_Q_value, true_Q_value], "k--")
ax.plot(np.arange(width), history[:, 0, 0], "b-", linewidth=2)
ax.set_xlabel("Iterations", fontsize=14)
ax.axis([0, width, 0, 24])
save_fig("q_value_plot")
```
# Deep Q-Network
Let's build the DQN. Given a state, it will estimate, for each possible action, the sum of discounted future rewards it can expect after it plays that action (but before it sees its outcome):
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
env = gym.make("CartPole-v1")
input_shape = [4] # == env.observation_space.shape
n_outputs = 2 # == env.action_space.n
model = keras.models.Sequential([
keras.layers.Dense(32, activation="elu", input_shape=input_shape),
keras.layers.Dense(32, activation="elu"),
keras.layers.Dense(n_outputs)
])
```
To select an action using this DQN, we just pick the action with the largest predicted Q-value. However, to ensure that the agent explores the environment, we choose a random action with probability `epsilon`.
```
def epsilon_greedy_policy(state, epsilon=0):
if np.random.rand() < epsilon:
return np.random.randint(2)
else:
Q_values = model.predict(state[np.newaxis])
return np.argmax(Q_values[0])
```
We will also need a replay memory. It will contain the agent's experiences, in the form of tuples: `(obs, action, reward, next_obs, done)`. We can use the `deque` class for that:
```
from collections import deque
replay_memory = deque(maxlen=2000)
```
And let's create a function to sample experiences from the replay memory. It will return 5 NumPy arrays: `[obs, actions, rewards, next_obs, dones]`.
```
def sample_experiences(batch_size):
indices = np.random.randint(len(replay_memory), size=batch_size)
batch = [replay_memory[index] for index in indices]
states, actions, rewards, next_states, dones = [
np.array([experience[field_index] for experience in batch])
for field_index in range(5)]
return states, actions, rewards, next_states, dones
```
Now we can create a function that will use the DQN to play one step, and record its experience in the replay memory:
```
def play_one_step(env, state, epsilon):
    action = epsilon_greedy_policy(state, epsilon)
    next_state, reward, done, info = env.step(action)
    replay_memory.append((state, action, reward, next_state, done))
    return next_state, reward, done, info
```
Lastly, let's create a function that will sample some experiences from the replay memory and perform a training step:
**Note**: the first 3 releases of the 2nd edition were missing the `reshape()` operation which converts `target_Q_values` to a column vector (this is required by the `loss_fn()`).
```
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=1e-3)
loss_fn = keras.losses.mean_squared_error
def training_step(batch_size):
    experiences = sample_experiences(batch_size)
    states, actions, rewards, next_states, dones = experiences
    next_Q_values = model.predict(next_states)
    max_next_Q_values = np.max(next_Q_values, axis=1)
    target_Q_values = (rewards +
                       (1 - dones) * discount_rate * max_next_Q_values)
    target_Q_values = target_Q_values.reshape(-1, 1)
    mask = tf.one_hot(actions, n_outputs)
    with tf.GradientTape() as tape:
        all_Q_values = model(states)
        Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
        loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```
And now, let's train the model!
```
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
    obs = env.reset()
    for step in range(200):
        epsilon = max(1 - episode / 500, 0.01)
        obs, reward, done, info = play_one_step(env, obs, epsilon)
        if done:
            break
    rewards.append(step)                                 # Not shown in the book
    if step > best_score:                                # Not shown
        best_weights = model.get_weights()               # Not shown
        best_score = step                                # Not shown
    print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="") # Not shown
    if episode > 50:
        training_step(batch_size)
model.set_weights(best_weights)
plt.figure(figsize=(8, 4))
plt.plot(rewards)
plt.xlabel("Episode", fontsize=14)
plt.ylabel("Sum of rewards", fontsize=14)
save_fig("dqn_rewards_plot")
plt.show()
env.seed(42)
state = env.reset()
frames = []
for step in range(200):
    action = epsilon_greedy_policy(state)
    state, reward, done, info = env.step(action)
    if done:
        break
    img = env.render(mode="rgb_array")
    frames.append(img)
plot_animation(frames)
```
Not bad at all!
## Double DQN
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
    keras.layers.Dense(32, activation="elu", input_shape=[4]),
    keras.layers.Dense(32, activation="elu"),
    keras.layers.Dense(n_outputs)
])
target = keras.models.clone_model(model)
target.set_weights(model.get_weights())
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=1e-3)
loss_fn = keras.losses.Huber()
def training_step(batch_size):
    experiences = sample_experiences(batch_size)
    states, actions, rewards, next_states, dones = experiences
    next_Q_values = model.predict(next_states)
    best_next_actions = np.argmax(next_Q_values, axis=1)
    next_mask = tf.one_hot(best_next_actions, n_outputs).numpy()
    next_best_Q_values = (target.predict(next_states) * next_mask).sum(axis=1)
    target_Q_values = (rewards +
                       (1 - dones) * discount_rate * next_best_Q_values)
    target_Q_values = target_Q_values.reshape(-1, 1)
    mask = tf.one_hot(actions, n_outputs)
    with tf.GradientTape() as tape:
        all_Q_values = model(states)
        Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
        loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
replay_memory = deque(maxlen=2000)
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
    obs = env.reset()
    for step in range(200):
        epsilon = max(1 - episode / 500, 0.01)
        obs, reward, done, info = play_one_step(env, obs, epsilon)
        if done:
            break
    rewards.append(step)
    if step > best_score:
        best_weights = model.get_weights()
        best_score = step
    print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="")
    if episode > 50:
        training_step(batch_size)
        if episode % 50 == 0:
            target.set_weights(model.get_weights())
    # Alternatively, you can do soft updates at each step:
    #if episode > 50:
    #    target_weights = target.get_weights()
    #    online_weights = model.get_weights()
    #    for index in range(len(target_weights)):
    #        target_weights[index] = 0.99 * target_weights[index] + 0.01 * online_weights[index]
    #    target.set_weights(target_weights)
model.set_weights(best_weights)
plt.figure(figsize=(8, 4))
plt.plot(rewards)
plt.xlabel("Episode", fontsize=14)
plt.ylabel("Sum of rewards", fontsize=14)
save_fig("double_dqn_rewards_plot")
plt.show()
env.seed(42)
state = env.reset()
frames = []
for step in range(200):
    action = epsilon_greedy_policy(state)
    state, reward, done, info = env.step(action)
    if done:
        break
    img = env.render(mode="rgb_array")
    frames.append(img)
plot_animation(frames)
```
# Dueling Double DQN
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
K = keras.backend
input_states = keras.layers.Input(shape=[4])
hidden1 = keras.layers.Dense(32, activation="elu")(input_states)
hidden2 = keras.layers.Dense(32, activation="elu")(hidden1)
state_values = keras.layers.Dense(1)(hidden2)
raw_advantages = keras.layers.Dense(n_outputs)(hidden2)
advantages = raw_advantages - K.max(raw_advantages, axis=1, keepdims=True)
Q_values = state_values + advantages
model = keras.models.Model(inputs=[input_states], outputs=[Q_values])
target = keras.models.clone_model(model)
target.set_weights(model.get_weights())
batch_size = 32
discount_rate = 0.95
optimizer = keras.optimizers.Adam(lr=1e-2)
loss_fn = keras.losses.Huber()
def training_step(batch_size):
    experiences = sample_experiences(batch_size)
    states, actions, rewards, next_states, dones = experiences
    next_Q_values = model.predict(next_states)
    best_next_actions = np.argmax(next_Q_values, axis=1)
    next_mask = tf.one_hot(best_next_actions, n_outputs).numpy()
    next_best_Q_values = (target.predict(next_states) * next_mask).sum(axis=1)
    target_Q_values = (rewards +
                       (1 - dones) * discount_rate * next_best_Q_values)
    target_Q_values = target_Q_values.reshape(-1, 1)
    mask = tf.one_hot(actions, n_outputs)
    with tf.GradientTape() as tape:
        all_Q_values = model(states)
        Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
        loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
replay_memory = deque(maxlen=2000)
env.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
rewards = []
best_score = 0
for episode in range(600):
    obs = env.reset()
    for step in range(200):
        epsilon = max(1 - episode / 500, 0.01)
        obs, reward, done, info = play_one_step(env, obs, epsilon)
        if done:
            break
    rewards.append(step)
    if step > best_score:
        best_weights = model.get_weights()
        best_score = step
    print("\rEpisode: {}, Steps: {}, eps: {:.3f}".format(episode, step + 1, epsilon), end="")
    if episode > 50:
        training_step(batch_size)
        if episode % 200 == 0:
            target.set_weights(model.get_weights())
model.set_weights(best_weights)
plt.plot(rewards)
plt.xlabel("Episode")
plt.ylabel("Sum of rewards")
plt.show()
env.seed(42)
state = env.reset()
frames = []
for step in range(200):
    action = epsilon_greedy_policy(state)
    state, reward, done, info = env.step(action)
    if done:
        break
    img = env.render(mode="rgb_array")
    frames.append(img)
plot_animation(frames)
```
This looks like a pretty robust agent!
```
env.close()
```
# Using TF-Agents to Beat Breakout
Let's use TF-Agents to create an agent that will learn to play Breakout. We will use the Deep Q-Learning algorithm, so you can easily compare the components with the previous implementation, but TF-Agents implements many other (and more sophisticated) algorithms!
## TF-Agents Environments
```
tf.random.set_seed(42)
np.random.seed(42)
from tf_agents.environments import suite_gym
env = suite_gym.load("Breakout-v4")
env
env.gym
env.seed(42)
env.reset()
env.step(1) # Fire
img = env.render(mode="rgb_array")
plt.figure(figsize=(6, 8))
plt.imshow(img)
plt.axis("off")
save_fig("breakout_plot")
plt.show()
env.current_time_step()
```
## Environment Specifications
```
env.observation_spec()
env.action_spec()
env.time_step_spec()
```
## Environment Wrappers
You can wrap a TF-Agents environment in a TF-Agents wrapper:
```
from tf_agents.environments.wrappers import ActionRepeat
repeating_env = ActionRepeat(env, times=4)
repeating_env
repeating_env.unwrapped
```
Here is the list of available wrappers:
```
import tf_agents.environments.wrappers
for name in dir(tf_agents.environments.wrappers):
    obj = getattr(tf_agents.environments.wrappers, name)
    if hasattr(obj, "__base__") and issubclass(obj, tf_agents.environments.wrappers.PyEnvironmentBaseWrapper):
        print("{:27s} {}".format(name, obj.__doc__.split("\n")[0]))
```
The `suite_gym.load()` function can create an env and wrap it for you, both with TF-Agents environment wrappers and Gym environment wrappers (the latter are applied first).
```
from functools import partial
from gym.wrappers import TimeLimit
limited_repeating_env = suite_gym.load(
    "Breakout-v4",
    gym_env_wrappers=[partial(TimeLimit, max_episode_steps=10000)],
    env_wrappers=[partial(ActionRepeat, times=4)],
)
limited_repeating_env
```
Create an Atari Breakout environment, and wrap it to apply the default Atari preprocessing steps:
```
limited_repeating_env.unwrapped
from tf_agents.environments import suite_atari
from tf_agents.environments.atari_preprocessing import AtariPreprocessing
from tf_agents.environments.atari_wrappers import FrameStack4
max_episode_steps = 27000 # <=> 108k ALE frames since 1 step = 4 frames
environment_name = "BreakoutNoFrameskip-v4"
env = suite_atari.load(
    environment_name,
    max_episode_steps=max_episode_steps,
    gym_env_wrappers=[AtariPreprocessing, FrameStack4])
env
```
Play a few steps just to see what happens:
```
env.seed(42)
env.reset()
time_step = env.step(1) # FIRE
for _ in range(4):
    time_step = env.step(3) # LEFT
def plot_observation(obs):
    # Since there are only 3 color channels, you cannot display 4 frames
    # with one primary color per frame. So this code computes the delta between
    # the current frame and the mean of the other frames, and it adds this delta
    # to the red and blue channels to get a pink color for the current frame.
    obs = obs.astype(np.float32)
    img = obs[..., :3]
    current_frame_delta = np.maximum(obs[..., 3] - obs[..., :3].mean(axis=-1), 0.)
    img[..., 0] += current_frame_delta
    img[..., 2] += current_frame_delta
    img = np.clip(img / 150, 0, 1)
    plt.imshow(img)
    plt.axis("off")
plt.figure(figsize=(6, 6))
plot_observation(time_step.observation)
save_fig("preprocessed_breakout_plot")
plt.show()
```
Convert the Python environment to a TF environment:
```
from tf_agents.environments.tf_py_environment import TFPyEnvironment
tf_env = TFPyEnvironment(env)
```
## Creating the DQN
We need to normalize the observations: images are stored as bytes from 0 to 255 to use less RAM, but the neural network expects floats from 0.0 to 1.0. A small `Lambda` preprocessing layer takes care of this when we create the Q-Network:
```
from tf_agents.networks.q_network import QNetwork
preprocessing_layer = keras.layers.Lambda(
    lambda obs: tf.cast(obs, np.float32) / 255.)
conv_layer_params = [(32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)]
fc_layer_params = [512]
q_net = QNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    preprocessing_layers=preprocessing_layer,
    conv_layer_params=conv_layer_params,
    fc_layer_params=fc_layer_params)
```
Create the DQN Agent:
```
from tf_agents.agents.dqn.dqn_agent import DqnAgent
# see TF-agents issue #113
#optimizer = keras.optimizers.RMSprop(lr=2.5e-4, rho=0.95, momentum=0.0,
#                                     epsilon=0.00001, centered=True)
train_step = tf.Variable(0)
update_period = 4 # run a training step every 4 collect steps
optimizer = tf.compat.v1.train.RMSPropOptimizer(learning_rate=2.5e-4, decay=0.95,
                                                momentum=0.0, epsilon=0.00001,
                                                centered=True)
epsilon_fn = keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1.0, # initial ε
    decay_steps=250000 // update_period, # <=> 1,000,000 ALE frames
    end_learning_rate=0.01) # final ε
agent = DqnAgent(tf_env.time_step_spec(),
                 tf_env.action_spec(),
                 q_network=q_net,
                 optimizer=optimizer,
                 target_update_period=2000, # <=> 32,000 ALE frames
                 td_errors_loss_fn=keras.losses.Huber(reduction="none"),
                 gamma=0.99, # discount factor
                 train_step_counter=train_step,
                 epsilon_greedy=lambda: epsilon_fn(train_step))
agent.initialize()
```
Create the replay buffer:
```
from tf_agents.replay_buffers import tf_uniform_replay_buffer
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=tf_env.batch_size,
    max_length=1000000)
replay_buffer_observer = replay_buffer.add_batch
```
Create a simple custom observer that counts and displays the number of times it is called (except when it is passed a trajectory that represents the boundary between two episodes, as this does not count as a step):
```
class ShowProgress:
    def __init__(self, total):
        self.counter = 0
        self.total = total
    def __call__(self, trajectory):
        if not trajectory.is_boundary():
            self.counter += 1
        if self.counter % 100 == 0:
            print("\r{}/{}".format(self.counter, self.total), end="")
```
Let's add some training metrics:
```
from tf_agents.metrics import tf_metrics
train_metrics = [
    tf_metrics.NumberOfEpisodes(),
    tf_metrics.EnvironmentSteps(),
    tf_metrics.AverageReturnMetric(),
    tf_metrics.AverageEpisodeLengthMetric(),
]
train_metrics[0].result()
from tf_agents.eval.metric_utils import log_metrics
import logging
logging.getLogger().setLevel(logging.INFO)
log_metrics(train_metrics)
```
Create the collect driver:
```
from tf_agents.drivers.dynamic_step_driver import DynamicStepDriver
collect_driver = DynamicStepDriver(
    tf_env,
    agent.collect_policy,
    observers=[replay_buffer_observer] + train_metrics,
    num_steps=update_period) # collect 4 steps for each training iteration
```
Collect the initial experiences, before training:
```
from tf_agents.policies.random_tf_policy import RandomTFPolicy
initial_collect_policy = RandomTFPolicy(tf_env.time_step_spec(),
                                        tf_env.action_spec())
init_driver = DynamicStepDriver(
    tf_env,
    initial_collect_policy,
    observers=[replay_buffer.add_batch, ShowProgress(20000)],
    num_steps=20000) # <=> 80,000 ALE frames
final_time_step, final_policy_state = init_driver.run()
```
Let's sample 2 sub-episodes, with 3 time steps each and display them:
```
tf.random.set_seed(888) # chosen to show an example of trajectory at the end of an episode
trajectories, buffer_info = replay_buffer.get_next(
    sample_batch_size=2, num_steps=3)
trajectories._fields
trajectories.observation.shape
from tf_agents.trajectories.trajectory import to_transition
time_steps, action_steps, next_time_steps = to_transition(trajectories)
time_steps.observation.shape
trajectories.step_type.numpy()
plt.figure(figsize=(10, 6.8))
for row in range(2):
    for col in range(3):
        plt.subplot(2, 3, row * 3 + col + 1)
        plot_observation(trajectories.observation[row, col].numpy())
plt.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0.02)
save_fig("sub_episodes_plot")
plt.show()
```
Now let's create the dataset:
```
dataset = replay_buffer.as_dataset(
    sample_batch_size=64,
    num_steps=2,
    num_parallel_calls=3).prefetch(3)
```
Convert the main functions to TF Functions for better performance:
```
from tf_agents.utils.common import function
collect_driver.run = function(collect_driver.run)
agent.train = function(agent.train)
```
And now we are ready to run the main loop!
```
def train_agent(n_iterations):
    time_step = None
    policy_state = agent.collect_policy.get_initial_state(tf_env.batch_size)
    iterator = iter(dataset)
    for iteration in range(n_iterations):
        time_step, policy_state = collect_driver.run(time_step, policy_state)
        trajectories, buffer_info = next(iterator)
        train_loss = agent.train(trajectories)
        print("\r{} loss:{:.5f}".format(
            iteration, train_loss.loss.numpy()), end="")
        if iteration % 1000 == 0:
            log_metrics(train_metrics)
```
Run the next cell to train the agent for 10,000 steps. Then look at its behavior by running the following cell. You can run these two cells as many times as you wish. The agent will keep improving!
```
train_agent(n_iterations=10000)
frames = []
def save_frames(trajectory):
    global frames
    frames.append(tf_env.pyenv.envs[0].render(mode="rgb_array"))
prev_lives = tf_env.pyenv.envs[0].ale.lives()
def reset_and_fire_on_life_lost(trajectory):
    global prev_lives
    lives = tf_env.pyenv.envs[0].ale.lives()
    if prev_lives != lives:
        tf_env.reset()
        tf_env.pyenv.envs[0].step(1)
        prev_lives = lives
watch_driver = DynamicStepDriver(
    tf_env,
    agent.policy,
    observers=[save_frames, reset_and_fire_on_life_lost, ShowProgress(1000)],
    num_steps=1000)
final_time_step, final_policy_state = watch_driver.run()
plot_animation(frames)
```
If you want to save an animated GIF to show off your agent to your friends, here's one way to do it:
```
import PIL
image_path = os.path.join("images", "rl", "breakout.gif")
frame_images = [PIL.Image.fromarray(frame) for frame in frames[:150]]
frame_images[0].save(image_path, format='GIF',
                     append_images=frame_images[1:],
                     save_all=True,
                     duration=30,
                     loop=0)
%%html
<img src="images/rl/breakout.gif" />
```
# Extra material
## Deque vs Rotating List
The `deque` class offers fast append, but fairly slow random access (for large replay memories):
```
from collections import deque
np.random.seed(42)
mem = deque(maxlen=1000000)
for i in range(1000000):
    mem.append(i)
[mem[i] for i in np.random.randint(1000000, size=5)]
%timeit mem.append(1)
%timeit [mem[i] for i in np.random.randint(1000000, size=5)]
```
Alternatively, you could use a rotating list like this `ReplayMemory` class. This would make random access faster for large replay memories:
```
class ReplayMemory:
    def __init__(self, max_size):
        self.buffer = np.empty(max_size, dtype=object)
        self.max_size = max_size
        self.index = 0
        self.size = 0
    def append(self, obj):
        self.buffer[self.index] = obj
        self.size = min(self.size + 1, self.max_size)
        self.index = (self.index + 1) % self.max_size
    def sample(self, batch_size):
        indices = np.random.randint(self.size, size=batch_size)
        return self.buffer[indices]
mem = ReplayMemory(max_size=1000000)
for i in range(1000000):
    mem.append(i)
mem.sample(5)
%timeit mem.append(1)
%timeit mem.sample(5)
```
## Creating a Custom TF-Agents Environment
To create a custom TF-Agent environment, you just need to write a class that inherits from the `PyEnvironment` class and implements a few methods. For example, the following minimal environment represents a simple 4x4 grid. The agent starts in one corner (0,0) and must move to the opposite corner (3,3). The episode is done if the agent reaches the goal (it gets a +10 reward) or if the agent goes out of bounds (-1 reward). The actions are up (0), down (1), left (2) and right (3).
```
class MyEnvironment(tf_agents.environments.py_environment.PyEnvironment):
    def __init__(self, discount=1.0):
        super().__init__()
        self._action_spec = tf_agents.specs.BoundedArraySpec(
            shape=(), dtype=np.int32, name="action", minimum=0, maximum=3)
        self._observation_spec = tf_agents.specs.BoundedArraySpec(
            shape=(4, 4), dtype=np.int32, name="observation", minimum=0, maximum=1)
        self.discount = discount
    def action_spec(self):
        return self._action_spec
    def observation_spec(self):
        return self._observation_spec
    def _reset(self):
        self._state = np.zeros(2, dtype=np.int32)
        obs = np.zeros((4, 4), dtype=np.int32)
        obs[self._state[0], self._state[1]] = 1
        return tf_agents.trajectories.time_step.restart(obs)
    def _step(self, action):
        self._state += [(-1, 0), (+1, 0), (0, -1), (0, +1)][action]
        reward = 0
        obs = np.zeros((4, 4), dtype=np.int32)
        done = (self._state.min() < 0 or self._state.max() > 3)
        if not done:
            obs[self._state[0], self._state[1]] = 1
        if done or np.all(self._state == np.array([3, 3])):
            reward = -1 if done else +10
            return tf_agents.trajectories.time_step.termination(obs, reward)
        else:
            return tf_agents.trajectories.time_step.transition(obs, reward,
                                                               self.discount)
```
The action and observation specs will generally be instances of the `ArraySpec` or `BoundedArraySpec` classes from the `tf_agents.specs` package (check out the other specs in this package as well). Optionally, you can also define a `render()` method, a `close()` method to free resources, as well as a `time_step_spec()` method if you don't want the `reward` and `discount` to be 32-bit float scalars. Note that the base class takes care of keeping track of the current time step, which is why we must implement `_reset()` and `_step()` rather than `reset()` and `step()`.
```
my_env = MyEnvironment()
time_step = my_env.reset()
time_step
time_step = my_env.step(1)
time_step
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import cufflinks as cf
import plotly
import panel as pn
pn.extension()
cf.go_offline()
%matplotlib inline
plt.style.use('ggplot')
```
<img src="../img/logo_white_bkg_small.png" align="right" />
# Worksheet 4 - Data Visualization
This worksheet will walk you through the basic process of preparing a visualization using Python/Pandas/Matplotlib/Seaborn/Cufflinks.
For this exercise, we will be creating a line plot comparing the number of hosts infected by the Bedep and ConfickerAB Bot Families in the Government/Politics sector.
## Prepare the data
The data we will be using is in the `dailybots.csv` file which can be found in the `data` folder. As is common, we will have to do some data wrangling to get it into a format which we can use to visualize this data. To do that, we'll need to:
1. Read in the data
2. Filter the data by industry and botnet
The result should look something like this:
<table>
<tr>
<th></th>
<th>date</th>
<th>ConfickerAB</th>
<th>Bedep</th>
</tr>
<tr>
<td>0</td>
<td>2016-06-01</td>
<td>255</td>
<td>430</td>
</tr>
<tr>
<td>1</td>
<td>2016-06-02</td>
<td>431</td>
<td>453</td>
</tr>
</table>
The way I chose to do this might be a little more complex, but I wanted you to see all the steps involved.
### Step 1: Read in the data
Using the `pd.read_csv()` function, you can read in the data.
```
DATA_HOME = '../data/'
data = pd.read_csv(DATA_HOME + 'dailybots.csv')
data.head()
data['botfam'].value_counts()
```
### Step 2: Filter the Data
The next step is to filter both by industry and by botfam. In order to get the data into the format I wanted, I did this separately. First, I created a second dataframe called `filteredData` which only contains the information from the `Government/Politics` industry.
```
# Your code here...
```
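Not the official solution, but one possible sketch. It assumes `dailybots.csv` has an `industry` column; a tiny stand-in DataFrame is used here so the snippet is self-contained:

```python
import pandas as pd

# Tiny stand-in for dailybots.csv (assumed columns: date, industry, botfam, hosts)
data = pd.DataFrame({
    'date': ['2016-06-01', '2016-06-01', '2016-06-02'],
    'industry': ['Government/Politics', 'Retail', 'Government/Politics'],
    'botfam': ['ConfickerAB', 'Bedep', 'ConfickerAB'],
    'hosts': [255, 99, 431],
})

# Keep only the Government/Politics rows
filteredData = data[data['industry'] == 'Government/Politics']
```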
Next, I created a second DataFrame which only contains the information from the `ConfickerAB` botnet. I also reduced the columns to the date and host count. You'll need to rename the host count so that you can merge the other data set later.
```
# Your code here...
```
Repeat this process for the `Bedep` botfam in a separate DataFrame.
### Step 3: Merge the DataFrames.
Next, you'll need to merge the dataframes so that you end up with a dataframe with three columns: the date, the `ConfickerAB` count, and the `Bedep` count. Pandas has a `.merge()` function which is documented here: http://pandas.pydata.org/pandas-docs/stable/merging.html
```
# Your code here...
```
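One hedged sketch of the merge step, using two hypothetical per-botfam frames that have already been renamed as suggested above:

```python
import pandas as pd

conficker = pd.DataFrame({'date': ['2016-06-01', '2016-06-02'],
                          'ConfickerAB': [255, 431]})
bedep = pd.DataFrame({'date': ['2016-06-01', '2016-06-02'],
                      'Bedep': [430, 453]})

# Inner join on the shared date column
merged = conficker.merge(bedep, on='date')
```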
## Create the first chart
Using the `.plot()` method, plot your dataframe and see what you get.
```
#Your code here...
```
## Customizing your plot
The default plot doesn't look horrible, but there are certainly some improvements which can be made. Try the following:
1. Change the x-axis to a date by converting the date column to a date object.
2. Move the Legend to the upper center of the graph
3. Make the figure size larger.
4. Instead of rendering both lines on one graph, split them up into two plots
5. Add axis labels
There are many examples in the documentation which is available: http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html
```
#Your code here...
```
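As a hint for item 1, converting the date column with `pd.to_datetime()` is enough to get real dates on the x-axis (sketch with stand-in data):

```python
import pandas as pd

df = pd.DataFrame({'date': ['2016-06-01', '2016-06-02'],
                   'ConfickerAB': [255, 431],
                   'Bedep': [430, 453]})
df['date'] = pd.to_datetime(df['date'])  # now a datetime64 column
```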
### Move the Legend to the Upper Center of the Graph
For this, you'll have to assign the plot variable to a new variable and then call the formatting methods on it.
```
# Your code here...
```
### Make the Figure Size Larger:
```
# Your code here...
```
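One way to do both of these steps at once (hedged sketch; the `Agg` backend is forced so it runs headless, and the data is a stand-in):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'ConfickerAB': [255, 431], 'Bedep': [430, 453]},
                  index=pd.to_datetime(['2016-06-01', '2016-06-02']))
ax = df.plot(figsize=(12, 6))   # larger figure
ax.legend(loc='upper center')   # reposition the legend
```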
### Adding Subplots
The first thing you'll need to do is call the `.subplots( nrows=<rows>, ncols=<cols> )` to create a subplot.
Next, plot your charts using the `.plot()` method. To add the second plot to your figure, add the `ax=axes[n]` argument to the `.plot()` method.
```
# Your code here...
```
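A minimal subplot sketch, again with stand-in data and a headless backend:

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'ConfickerAB': [255, 431], 'Bedep': [430, 453]},
                  index=pd.to_datetime(['2016-06-01', '2016-06-02']))
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
df['ConfickerAB'].plot(ax=axes[0], title='ConfickerAB')  # first subplot
df['Bedep'].plot(ax=axes[1], title='Bedep')              # second subplot
```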
# Making it Interactive
Using `cufflinks`, plot an interactive time series chart of this data. Plot each series on a separate line. The documentation for `cufflinks` can be found here: https://plot.ly/ipython-notebooks/cufflinks/.
```
# Your code here...
```
## Building Dashboards with Interactive Widgets
In this last example, you are going to create a chart to visualize the breakdown of bots attacking each industry. In order to do this we will be using the `panel` module. The complete documentation for the `Panel` module is available here: http://panel.pyviz.org/index.html
The first thing you will need to do is define a function which takes an argument of an industry and returns a figure from a visualization. In order to do that, you will have to do a bit of data wrangling as well. Specifically, you will need to:
1. Filter your data by the user supplied industry
2. Remove extraneous columns
3. Aggregate the data by the `botfam` column
4. Calculate a `sum` of the `hosts` column
5. Set the index to the `botfam` column.
Ultimately your data will need to be formatted like this:
<table>
<tr>
<th>botfam</th>
<th>hosts</th>
</tr>
<tr>
<td>Bedep</td>
<td>52049</td>
</tr>
<tr>
<td>ConfickerAB</td>
<td>321373</td>
</tr>
<tr>
<td>Necurs</td>
<td>48037</td>
</tr>
<tr>
<td>Olmasco</td>
<td>1572</td>
</tr>
<tr>
<td>PushDo</td>
<td>62485</td>
</tr>
<tr>
<td>Ramnit</td>
<td>78753</td>
</tr>
<tr>
<td>Sality</td>
<td>56600</td>
</tr>
<tr>
<td>Zeus</td>
<td>16156</td>
</tr>
<tr>
<td>Zusy</td>
<td>45648</td>
</tr>
<tr>
<td>zeroaccess</td>
<td>24456</td>
</tr>
</table>
```
industry_list = ['Manufacturing', 'Retail', 'Education', 'Healthcare/Wellness',
'Government/Politics', 'Finance']
def plot_industry_bar_chart(selected_industry):
    #Your code here...
    return fig
```
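The wrangling inside the charting function might look roughly like this (hedged sketch; a tiny stand-in DataFrame replaces the real data and the figure-building part is left out):

```python
import pandas as pd

data = pd.DataFrame({
    'industry': ['Finance', 'Finance', 'Retail'],
    'botfam': ['Bedep', 'Zeus', 'Bedep'],
    'hosts': [10, 5, 7],
})

selected_industry = 'Finance'
subset = data[data['industry'] == selected_industry]
hosts_by_botfam = subset.groupby('botfam')['hosts'].sum()  # indexed by botfam
```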
Next, use the `pn.interact()` function to actually render the widget and graph. This function takes two arguments:
1. A function which renders the chart. (This is the function that you wrote in the previous step)
2. A list of inputs, in this case the industries, that you want to pass to the charting function
Documentation available here: http://panel.pyviz.org/user_guide/Interact.html
```
#Your code here...
```
# Random Clustering test `2018-08-28`
Updated (2018-08-14) Grammar Tester, server `94.130.238.118`
Each line is calculated 1x, parsing metrics tested 1x for each calculation.
The calculation table is shared as 'short_table.txt' in data folder
[http://langlearn.singularitynet.io/data/clustering_2018/Random-Clusters-CDS-2018-08-28/](http://langlearn.singularitynet.io/data/clustering_2018/Random-Clusters-CDS-2018-08-28/)
This notebook is shared as static html via
[http://langlearn.singularitynet.io/data/clustering_2018/html/Random-Clusters-CDS-2018-08-28.html](http://langlearn.singularitynet.io/data/clustering_2018/html/Random-Clusters-CDS-2018-08-28.html)
The constituency test (multi-run version of this notebook) is shared via
[http://langlearn.singularitynet.io/data/clustering_2018/html/Random-Clusters-CDS-2018-08-28-multi.html](http://langlearn.singularitynet.io/data/clustering_2018/html/Random-Clusters-CDS-2018-08-28-multi.html)
## Basic settings
```
import os, sys, time
from IPython.display import display
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path: sys.path.append(module_path)
grammar_learner_path = module_path + '/src/grammar_learner/'
if grammar_learner_path not in sys.path: sys.path.append(grammar_learner_path)
from utl import UTC
from read_files import check_dir
from widgets import html_table
from pqa_table import table_cds
tmpath = module_path + '/tmp/'
if check_dir(tmpath, True, 'none'):
    table = []
    long_table = []
    header = ['Line','Corpus','Parsing','LW','"."','Generalization','Space','Rules','PA','PQ']
    start = time.time()
    print(UTC(), ':: module_path =', module_path)
else: print(UTC(), ':: could not create temporary files directory', tmpath)
```
## Corpus test settings
```
out_dir = module_path + '/output/Random-Clusters-' + str(UTC())[:10]
runs = (1,1) # (attempts to learn grammar per line, grammar tests per attempt)
if runs != (1,1): out_dir += '-multi'
kwargs = {
    'left_wall': '',
    'period': False,
    'clustering': ('kmeans', 'kmeans++', 10),
    'cluster_range': (30, 120, 3, 3),  # min, max, step, repeat
    'cluster_criteria': 'silhouette',
    'cluster_level': 1,
    'tmpath': tmpath,
    'verbose': 'min',
    'template_path': 'poc-turtle',
    'linkage_limit': 1000,
    'categories_generalization': 'off'}
lines = [
    [58, 'CDS-caps-br-text+brent9mos', 'LG-English', 0, 0, 'none'],
    [60, 'CDS-caps-br-text+brent9mos', 'R=6-Weight=6:R-mst-weight=+1:R', 0, 0, 'none']]
rp = module_path + '/data/CDS-caps-br-text+brent9mos/LG-English'
cp = rp # corpus path = reference_path :: use 'gold' parses as test corpus
```
## Random clusters, interconnected -- RNDic
"Connector-based rules" style interconnection:
C01: {C01C01- or C02C01- or ... or CnC01-} and {C01C01+ or C01C02+ or ... or C01Cn+} ...
Cxx: {C01Cxx- or C02Cxx- or ... or CnCxx-} and {CxxC01+ or CxxC02+ or ... or CxxCn+} ...
where n -- number of clusters, Cn -- n-th cluster, Cx -- x-th cluster of {C01 ... Cn}
```
kwargs['context'] = 1
kwargs['word_space'] = 'none'
kwargs['clustering'] = 'random'
%%capture
kwargs['grammar_rules'] = -1
average21, long21 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average21)
long_table.extend(long21)
display(html_table([header]+average21))
print(UTC())
```
## Random clusters, connector-based rules
```
%%capture
kwargs['grammar_rules'] = 1
average22, long22 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average22)
long_table.extend(long22)
display(html_table([header]+average22))
print(UTC())
```
## Random clusters, disjunct-based rules
```
%%capture
kwargs['grammar_rules'] = 2
average23, long23 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average23)
long_table.extend(long23)
display(html_table([header]+average23))
print(UTC())
```
## Random clusters, linked -- RNDid
Every cluster is linked to all clusters with single-link disjuncts:
C01: (C01C01-) or (C02C01-) or ... (CnC01-) or (C01C01+) or (C01C02+) or ... (C01Cn+) ...
Cxx: (C01Cxx-) or (C02Cxx-) or ... (CnCxx-) or (CxxC01+) or (CxxC02+) or ... (CxxCn+) ...
where n -- number of clusters, Cn -- n-th cluster, Cx -- x-th cluster of {C01 ... Cn}
```
%%capture
kwargs['grammar_rules'] = -2
average24, long24 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average24)
long_table.extend(long24)
display(html_table([header]+average24))
print(UTC())
```
# Baseline: (c,d)DRK(c,d), fast optimal clustering search
```
kwargs['word_space'] = 'vectors'
kwargs['clustering'] = 'kmeans'
```
## Connectors-DRK-Connectors
```
%%capture
kwargs['context'] = 1
kwargs['grammar_rules'] = 1
average24, long24 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average24)
long_table.extend(long24)
display(html_table([header]+average24))
print(UTC())
```
## Connectors-DRK-Disjuncts
```
%%capture
kwargs['grammar_rules'] = 2
average24, long24 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average24)
long_table.extend(long24)
display(html_table([header]+average24))
print(UTC())
```
## Disjuncts-DRK-disjuncts
```
%%capture
kwargs['context'] = 2
average27, long27 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average27)
long_table.extend(long27)
display(html_table([header]+average27))
print(UTC())
```
## Disjuncts-DRK-Connectors
```
%%capture
kwargs['grammar_rules'] = 1
average28, long28 = table_cds(lines, out_dir, cp, rp, runs, **kwargs)
table.extend(average28)
long_table.extend(long28)
display(html_table([header]+average28))
print(UTC())
```
# All tests
```
display(html_table([header]+long_table))
from write_files import list2file
print(UTC(), ':: finished, elapsed', str(round((time.time()-start)/3600.0, 1)), 'hours')
table_str = list2file(table, out_dir+'/short_table.txt')
if runs == (1,1):
print('Results saved to', out_dir + '/short_table.txt')
else:
long_table_str = list2file(long_table, out_dir+'/long_table.txt')
print('Average results saved to', out_dir + '/short_table.txt\n'
'Detailed results for every run saved to', out_dir + '/long_table.txt')
```
| github_jupyter |
```
# just the CPU version of FAISS; will have to look deeper into how to get the GPU version, but this works fast enough for now
!wget https://anaconda.org/pytorch/faiss-cpu/1.2.1/download/linux-64/faiss-cpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
!tar xvjf faiss-cpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
!cp -r lib/python3.6/site-packages/* /usr/local/lib/python3.6/dist-packages/
!pip install mkl
import pandas as pd
import numpy as np
import os
import csv
from tqdm import tqdm
import argparse
from glob import glob
import faiss
from multiprocessing import Pool, cpu_count
from math import ceil
from google.colab import drive
drive.mount('/content/gdrive')
class namespace():
pass
args = namespace()
args.data_path = '/content/gdrive/My Drive/data/FFNNEmbeds'
args.output_path = '/content/gdrive/My Drive/data/gpt2_train_data/'
args.number_samples = 10
args.batch_size = 512
files = os.listdir(args.data_path)
print(files)
def fix_array(x):
    # Parse the string form of an embedding back into a (1, 768) numpy array.
    x = np.fromstring(
        x.replace('\n', ' ')
         .replace('[', '')
         .replace(']', ''), sep=' ')
    return x.reshape((1, 768))
pd.options.display.max_rows = 700
pd.set_option('expand_frame_repr', True)
pd.set_option('max_colwidth', 250)
pd.get_option("display.max_rows")
qa = pd.read_csv("/content/gdrive/My Drive/data/FFNNEmbeds/" + files[0])
for file in files[1:]:
print(file)
qa = pd.concat([qa, pd.read_csv("/content/gdrive/My Drive/data/FFNNEmbeds/" + file)], axis = 0)
del qa['Unnamed: 0']
qa["Q_FFNN_embeds"] = qa["Q_FFNN_embeds"].apply(fix_array)
qa["A_FFNN_embeds"] = qa["A_FFNN_embeds"].apply(fix_array)
qa = qa.reset_index(drop=True)
qa.head()
#qa = pd.read_hdf(args.data_path, key='qa_embedding')
'''
with Pool(cpu_count()) as p:
question_bert = p.map(eval, qa["Q_FFNN_embeds"].tolist())
answer_bert = p.map(eval, qa["A_FFNN_embeds"].tolist())
'''
question_bert = np.concatenate(qa["Q_FFNN_embeds"].values, axis=0)
answer_bert = np.concatenate(qa["A_FFNN_embeds"].values, axis=0)
question_bert = np.array(question_bert)
answer_bert = np.array(answer_bert)
question_bert = question_bert.astype('float32')
answer_bert = answer_bert.astype('float32')
answer_index = faiss.IndexFlatIP(answer_bert.shape[-1])
question_index = faiss.IndexFlatIP(question_bert.shape[-1])
faiss.normalize_L2(question_bert)
faiss.normalize_L2(answer_bert)
answer_index.add(answer_bert)
question_index.add(question_bert)
os.makedirs(args.output_path, exist_ok=True)
output_path = os.path.join(
args.output_path, 'GPT2_data_FFNN.csv')
output = open(output_path, "w")
writer = csv.writer(output)
firstrow = ['question', 'answer']
for ii in range(0, args.number_samples):
firstrow.append('question'+str(ii))
firstrow.append('answer'+str(ii))
writer.writerow(firstrow)
def topKforGPT2(start_ind, end_ind, topk):
D1, I1 = answer_index.search(
question_bert[start_ind:end_ind].astype('float32'), topk)
D2, I2 = question_index.search(
question_bert[start_ind:end_ind].astype('float32'), topk)
return I1, I2
steps = ceil(qa.shape[0] / args.batch_size)
# for k in tqdm(range(1000), mininterval=30, maxinterval=60):
for k in tqdm(range(0, qa.shape[0], args.batch_size), total=steps):
start_ind = k
end_ind = k+args.batch_size
a_batch_index, q_batch_index = topKforGPT2(
start_ind, end_ind, int(args.number_samples/2))
for j, (a_index, q_index) in enumerate(zip(a_batch_index, q_batch_index)):
    row = start_ind + j  # index of this query within the full dataframe
    rowfill = []
    rowfill.append(qa["question"][row])
    rowfill.append(qa["answer"][row])
    aaa = qa.loc[list(a_index), :]
    qqq = qa.loc[list(q_index), :]
    aaaa = [*sum(zip(list(aaa['question']), list(aaa['answer'])), ())]
    qqqq = [*sum(zip(list(qqq['question']), list(qqq['answer'])), ())]
    rowfill = rowfill + aaaa + qqqq
    writer.writerow(rowfill)
output.close()
```
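For reference, the `IndexFlatIP` + `normalize_L2` combination used above implements cosine-similarity search. A minimal NumPy-only sketch of the same top-k retrieval (useful for sanity-checking results without FAISS installed):

```python
import numpy as np

def cosine_topk(queries, corpus, k):
    """Return the indices of the k most cosine-similar corpus rows per query."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = q @ c.T  # inner product of unit vectors = cosine similarity
    # argsort descending, keep the top k columns
    return np.argsort(-sims, axis=1)[:, :k]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 8)).astype("float32")
# queries are slightly perturbed copies of the first three corpus rows
queries = corpus[:3] + 0.01 * rng.normal(size=(3, 8)).astype("float32")
idx = cosine_topk(queries, corpus, k=5)
print(idx[:, 0])  # each query's nearest neighbour should be its source row
```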
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow 2 quickstart for experts
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
1. In Colab, connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: Select *Runtime* > *Run all*.
Download and install TensorFlow 2, then import it into your program:
Note: Upgrade `pip` to install the TensorFlow 2 package. See the [install guide](https://www.tensorflow.org/install) for details.
```
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
```
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")
```
Use `tf.data` to batch and shuffle the dataset:
```
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
```
Build the `tf.keras` model using the Keras [model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):
```
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10)
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
```
Choose an optimizer and loss function for training:
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
```
Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result.
```
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
```
Use `tf.GradientTape` to train the model:
```
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(images, training=True)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
```
Test the model:
```
@tf.function
def test_step(images, labels):
# training=False is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
EPOCHS = 5
for epoch in range(EPOCHS):
# Reset the metrics at the start of the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
print(
f'Epoch {epoch + 1}, '
f'Loss: {train_loss.result()}, '
f'Accuracy: {train_accuracy.result() * 100}, '
f'Test Loss: {test_loss.result()}, '
f'Test Accuracy: {test_accuracy.result() * 100}'
)
```
The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/tutorials).
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job of capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u \cdot v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u \cdot v$ is the dot product (or inner product) of the two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(np.power(u,2)))
# Compute the L2 norm of v (≈1 line)
norm_v = np.sqrt(np.sum(np.power(v,2)))
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = np.divide(dot,norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer. For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
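A quick numerical check of the parenthetical point about L2 distance above: for unit-normalized vectors, squared L2 distance is a monotone function of cosine similarity, since $\|u-v\|_2^2 = 2 - 2\cos(\theta)$, so both measures induce the same ranking (assuming the vectors are normalized first):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=50)
v = rng.normal(size=50)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

cos_sim = u @ v
l2_sq = np.sum((u - v) ** 2)
# ||u - v||^2 = ||u||^2 + ||v||^2 - 2 u.v = 2 - 2*cos for unit vectors
print(l2_sq, 2 - 2 * cos_sim)
```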
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector for *woman* and $e_{man}$ represents the word vector for *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
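The averaging variant suggested above can be sketched as follows. This is only a sketch: `gender_direction` is a hypothetical helper, and the toy two-dimensional vectors below just exercise it (with the GloVe map loaded earlier you would pass pairs like `("woman", "man")`, `("mother", "father")`, `("girl", "boy")`):

```python
import numpy as np

def gender_direction(word_to_vec_map, pairs):
    """Average the difference vectors over several (female, male) word pairs."""
    diffs = [word_to_vec_map[f] - word_to_vec_map[m] for f, m in pairs]
    return np.mean(diffs, axis=0)

# Tiny synthetic vectors just to demonstrate the function.
toy_map = {"woman": np.array([1.0, 0.0]), "man": np.array([-1.0, 0.0]),
           "girl": np.array([0.8, 0.2]), "boy": np.array([-0.8, 0.2])}
g_avg = gender_direction(toy_map, [("woman", "man"), ("girl", "boy")])
print(g_avg)  # → [1.8 0. ]
```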
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender-specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50-dimensional space can be split into two parts: the bias direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49-dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49-dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1-dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = np.divide(np.dot(e,g),np.linalg.norm(g)**2) * g
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical roundoff (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B|} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B|} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2)/2.0
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.divide(np.dot(mu, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.divide(np.dot(e_w1, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
e_w2B = np.divide(np.dot(e_w2, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1-np.sum(mu_orth**2)))*np.divide(e_w1B-mu_B, np.abs(e_w1-mu_orth-mu_B))
corrected_e_w2B = np.sqrt(np.abs(1-np.sum(mu_orth**2)))*np.divide(e_w2B-mu_B, np.abs(e_w2-mu_orth-mu_B))
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
# 10. Object-Oriented Programming (OOP)
The real world (or the natural world) is made up of objects. Those objects (or entities) can be represented computationally in order to build software applications.
OOP is a technique (or technology) that makes it possible to model reality so as to solve problems more accurately and efficiently.
Members of a class:
- Instance fields: represent the object's state.
- Instance methods: represent the object's behavior.
## 10.1 Create an object class
```
class Persona:
def __init__(self, documento, nombre_completo, email, direccion):
self.documento = documento
self.nombre_completo = nombre_completo
self.email = email
self.direccion = direccion
def caminar(self):
print('La persona está caminando.')
def trabajar(self):
print('La persona está trabajando.')
```
## 10.2 Instantiate an object from a class
```
cristian = Persona(123456789, 'Cristián Javier Ocampo', 'cristian@mail.co', 'Carrera 10 134-93')
cristian
type(cristian)
```
## 10.3 Access an object's properties
```
cristian.documento
cristian.direccion
cristian.nombre_completo
cristian.email
```
## 10.4 Invoke (call) an object's functions
Functions attached to an object are known as methods.
```
cristian.caminar()
cristian.trabajar()
```
**Example 10.1**:
Create a class that represents a calculator entity. The implementation must include methods for the basic arithmetic operations (addition, subtraction, multiplication, and division).
```
class Calculadora:
def sumar(self, a, b):
"""
Suma dos valores numéricos.
a: Primer número a sumar.
b: Segundo número a sumar.
return: Suma de los dos valores.
"""
suma = a + b
return suma
def restar(self, a, b):
"""
Resta dos valores numéricos.
a: Primer número a restar.
b: Segundo número a restar.
return: Resta de los dos valores.
"""
resta = a - b
return resta
def multiplicar(self, a, b):
"""
Multiplica dos valores numéricos.
a: Primer número a multiplicar.
b: Segundo número a multiplicar.
return: Multiplicación de los dos valores.
"""
multiplicacion = a * b
return multiplicacion
def dividir(self, a, b):
"""
Divide dos valores numéricos.
a: Primer número a dividir.
b: Segundo número a dividir.
return: División de los dos valores.
"""
division = a / b
return division
calculadora_basica = Calculadora()
type(calculadora_basica)
id(calculadora_basica)
calculadora_aritmetica = Calculadora()
type(calculadora_aritmetica)
id(calculadora_aritmetica)
id(calculadora_basica) == id(calculadora_aritmetica)
```
**Note:** Each object/instance has its own computational resources. Two distinct objects can never occupy the same location in memory.
Every instance/object has the same methods and attributes, **BUT** with its own state.
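The point can be sketched with a minimal, hypothetical counter class (not part of the notebook's code): both instances share the same methods, yet each keeps its own state and occupies its own place in memory.

```python
class Contador:
    """Boceto mínimo: cada instancia mantiene su propio conteo."""
    def __init__(self):
        self.valor = 0

    def incrementar(self):
        self.valor += 1

a = Contador()
b = Contador()
a.incrementar()
a.incrementar()
b.incrementar()
print(a.valor, b.valor)   # 2 1 — same methods, independent state
print(id(a) == id(b))     # False — two distinct objects in memory
```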
```
isinstance(calculadora_basica, Calculadora)
isinstance(calculadora_aritmetica, Calculadora)
numeros = [2, 3, 5]
type(numeros)
isinstance(numeros, Calculadora)
```
Calling/invoking the methods of a class instance:
```
calculadora_basica.sumar(2, 3)
calculadora_basica.restar(2, 3)
calculadora_basica.multiplicar(2, 3)
calculadora_basica.dividir(2, 3)
calculadora_aritmetica.sumar(2, 3)
calculadora_aritmetica.restar(2, 3)
calculadora_aritmetica.multiplicar(2, 3)
calculadora_aritmetica.dividir(2, 3)
```
Looking up the documentation of functions defined in a class:
```
help(calculadora_aritmetica.sumar)
help(calculadora_aritmetica.multiplicar)
```
## 10.5 Changing an object's state through instance methods
Instance methods are special functions that belong to a class. Every object instantiated from the class has access to them.
These methods take one mandatory first parameter: `self`.
Functions (instance methods) define what the object (entity) can do: its behavior.
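A minimal sketch of how `self` works (hypothetical class, not one from this notebook): `objeto.metodo()` is equivalent to `Clase.metodo(objeto)`, so the method always receives the instance whose state it should read or modify.

```python
class Lampara:
    def __init__(self):
        self.encendida = False

    def alternar(self):
        # 'self' is the specific instance the method was called on
        self.encendida = not self.encendida

l1 = Lampara()
l1.alternar()          # normal call: Python passes l1 as 'self'
print(l1.encendida)    # True
Lampara.alternar(l1)   # equivalent explicit form
print(l1.encendida)    # False
```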
**Example 10.2:**
Create a class that represents a bank account.
The following operations can be performed on a bank account:
1. Open the account
2. Deposit money
3. Withdraw money
4. Check the balance
5. Close the account
It must also include attributes (properties or characteristics) such as:
1. Client name
2. Account number
3. Balance
4. Status (active or inactive)
```
class CuentaBancaria:
def __init__(self, numero, cliente, saldo=10000, estado=True):
self.numero = numero
self.cliente = cliente
self.saldo = saldo
self.estado = estado
def depositar(self, cantidad):
"""
Deposita cierta cantidad de dinero en la cuenta.
:cantidad: Cantidad de dinero a depositar.
"""
if self.estado and cantidad > 0:
self.saldo += cantidad
def retirar(self, cantidad):
"""
Retira cierta cantidad de dinero.
:cantidad: Cantidad de dinero a retirar.
"""
if self.estado and cantidad > 0 and cantidad <= self.saldo:
self.saldo -= cantidad
def cerrar_cuenta(self):
"""
Cierra la cuenta bancaria.
"""
self.estado = False
cuenta_ahorros = CuentaBancaria(123456789, 'Juan Urbano', 50000)
cuenta_ahorros
cuenta_ahorros.cliente
cuenta_ahorros.estado
cuenta_ahorros.numero
cuenta_ahorros.saldo
```
We can query the type of a variable with the `type()` function:
```
type(cuenta_ahorros)
isinstance(cuenta_ahorros, CuentaBancaria)
type(cuenta_ahorros) in [CuentaBancaria]
type('Python') in [CuentaBancaria]
```
Performing operations on an object of type `CuentaBancaria`:
```
dir(cuenta_ahorros)
cuenta_ahorros.saldo
cuenta_ahorros.depositar(10000)
cuenta_ahorros.saldo
cuenta_ahorros.depositar(-10000)
cuenta_ahorros.saldo
cuenta_ahorros.retirar(20000)
cuenta_ahorros.saldo
cuenta_ahorros.retirar(-20000)
cuenta_ahorros.saldo
type(cuenta_ahorros)
```
1. Create a new `CuentaBancaria` object.
2. Transfer money from one account to the other.
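The two steps can also be wrapped in a small helper function. The sketch below is hypothetical: `transferir` is not part of the notebook, and the class here is a minimal stand-in for the `CuentaBancaria` defined above.

```python
class CuentaBancaria:
    """Sustituto mínimo de la clase definida arriba."""
    def __init__(self, saldo):
        self.saldo = saldo

    def depositar(self, cantidad):
        if cantidad > 0:
            self.saldo += cantidad

    def retirar(self, cantidad):
        if 0 < cantidad <= self.saldo:
            self.saldo -= cantidad

def transferir(origen, destino, cantidad):
    # withdraw from one account, then deposit into the other
    if 0 < cantidad <= origen.saldo:
        origen.retirar(cantidad)
        destino.depositar(cantidad)

a = CuentaBancaria(100000)
b = CuentaBancaria(50000)
transferir(a, b, 20000)
print(a.saldo, b.saldo)   # 80000 70000
```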
```
cuenta_corriente = CuentaBancaria(95185123, 'Angela Burgos', 100000)
type(cuenta_corriente)
print(cuenta_corriente.numero, cuenta_corriente.cliente, cuenta_corriente.saldo, cuenta_corriente.estado)
dinero = 20000
cuenta_corriente.retirar(dinero)
cuenta_corriente.saldo
print(cuenta_ahorros.numero, cuenta_ahorros.cliente, cuenta_ahorros.saldo, cuenta_ahorros.estado)
cuenta_ahorros.depositar(dinero)
cuenta_ahorros.saldo
dinero = cuenta_corriente.saldo
dinero
cuenta_ahorros.depositar(dinero)
cuenta_ahorros.saldo
cuenta_corriente.cerrar_cuenta()
cuenta_corriente.estado
```
## 10.6 Overriding functionality inherited from another class
```
print(cuenta_ahorros)
class CuentaBancaria(object):
def __init__(self, numero, cliente, saldo=10000, estado=True):
self.numero = numero
self.cliente = cliente
self.saldo = saldo
self.estado = estado
def depositar(self, cantidad):
"""
Deposita cierta cantidad de dinero en la cuenta.
:cantidad: Cantidad de dinero a depositar.
"""
if self.estado and cantidad > 0:
self.saldo += cantidad
def retirar(self, cantidad):
"""
Retira cierta cantidad de dinero.
:cantidad: Cantidad de dinero a retirar.
"""
if self.estado and cantidad > 0 and cantidad <= self.saldo:
self.saldo -= cantidad
def cerrar_cuenta(self):
"""
Cierra la cuenta bancaria.
"""
self.estado = False
def __str__(self):
return f'{self.numero};{self.cliente};{self.saldo};{self.estado}'
cuenta_ahorros = CuentaBancaria(123456789, 'Juan Urbano', 50000)
print(cuenta_ahorros)
print(cuenta_ahorros.__str__())
cuenta_ahorros
cuenta_ahorros.__str__()
```
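Besides `__str__`, the related `__repr__` method controls what a bare expression such as `cuenta_ahorros` shows at the interactive prompt. A minimal sketch with a hypothetical class (not the one above):

```python
class Punto:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # used by the interactive prompt and by repr()
        return f'Punto(x={self.x}, y={self.y})'

    def __str__(self):
        # used by print() and str()
        return f'({self.x}, {self.y})'

p = Punto(1, 2)
print(repr(p))   # Punto(x=1, y=2)
print(str(p))    # (1, 2)
```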
## 10.7 Private instance variables
Private member: a variable that is visible **only** inside the body of the class that declares it.
This supports the concept of encapsulation.
**Encapsulation**: a design principle for defining members that are visible only inside an entity (class).
Recommended reading: look up the pillars of object-oriented programming.
1. Abstraction (A)
2. Polymorphism (P)
3. Inheritance (I)
4. Encapsulation (E)
A-PIE
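By convention, a single leading underscore marks a member as internal but does not actually hide it, as the cells below show. Python also offers name mangling via a double underscore; a minimal sketch (this stricter form is not used in the rest of the section, and the class here is hypothetical):

```python
class Cuenta:
    def __init__(self, saldo):
        self._saldo = saldo    # convention: "internal", but still accessible
        self.__clave = 1234    # name-mangled by Python to _Cuenta__clave

c = Cuenta(100)
print(c._saldo)                # 100 — the single underscore is only a convention
print(hasattr(c, '__clave'))   # False — the original name is not visible outside
print(c._Cuenta__clave)        # 1234 — still reachable under the mangled name
```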
```
class Perro(object):
"""
Representa la entidad Perro.
"""
def __init__(self, nombre, edad, amo):
self._nombre = nombre
self._edad = edad
self._amo = amo
tony = Perro('Tony', 35, 'Alexander')
tony._nombre
tony._edad
tony._amo
tony._edad = 7
tony._edad
class Perro(object):
"""
Representa la entidad Perro.
"""
def __init__(self, nombre, edad, amo):
"""
Inicializa (o instancia) un nuevo objeto de la clase Perro.
"""
self._nombre = nombre
self._edad = edad
self._amo = amo
def get_nombre(self):
"""
Obtiene el nombre del perro.
return: Nombre del perro.
"""
return self._nombre
def set_nombre(self, nombre):
"""
Establece un nuevo nombre para el perro.
:param nombre: Nuevo nombre para el perro.
"""
self._nombre = nombre
def get_edad(self):
"""
Obtiene la edad del perro.
return: Edad del perro.
"""
return self._edad
def set_edad(self, edad):
"""
Establece la nueva edad del perro.
:param edad: Nueva edad del perro.
"""
self._edad = edad
def get_amo(self):
"""
Obtiene el nombre del amo del perro.
:return: Nombre del amo del perro.
"""
return self._amo
def set_amo(self, amo):
"""
Establece el nuevo nombre del amo del perro.
:param amo: Nuevo nombre del amo del perro.
"""
self._amo = amo
tony = Perro('Tony', 3, 'Alexander')
tony
dir(tony)
tony.get_amo()
tony.get_edad()
tony.get_nombre()
tony.set_edad(4)
tony.get_edad()
tony.set_amo('Alexander Ordoñez')
tony.get_amo()
tony._edad = 5
tony._edad
help(Perro)
```
## 10.8 Using properties to control access to and modification of instance variables
`@property` and `@nombre_atributo.setter`
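A setter is also a natural place to validate new values before they reach the instance variable. The sketch below is hypothetical (a validation rule the `Perro` class in this section does not apply):

```python
class Producto:
    def __init__(self, precio):
        self._precio = precio

    @property
    def precio(self):
        return self._precio

    @precio.setter
    def precio(self, nuevo_precio):
        # reject invalid values instead of storing them silently
        if nuevo_precio < 0:
            raise ValueError('El precio no puede ser negativo.')
        self._precio = nuevo_precio

p = Producto(100)
p.precio = 150
print(p.precio)   # 150
```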
```
class Perro(object):
"""
Representa la entidad Perro.
"""
def __init__(self, nombre, edad, amo):
"""
Inicializa (o instancia) un nuevo objeto de la clase Perro.
"""
self._nombre = nombre
self._edad = edad
self._amo = amo
@property
def nombre(self):
"""
Obtiene el nombre del perro.
return: Nombre del perro.
"""
return self._nombre
@nombre.setter
def nombre(self, nombre):
"""
Establece un nuevo nombre para el perro.
:param nombre: Nuevo nombre para el perro.
"""
self._nombre = nombre
@property
def edad(self):
"""
Obtiene la edad del perro.
return: Edad del perro.
"""
return self._edad
@edad.setter
def edad(self, edad):
"""
Establece la nueva edad del perro.
:param edad: Nueva edad del perro.
"""
self._edad = edad
@property
def amo(self):
"""
Obtiene el nombre del amo del perro.
:return: Nombre del amo del perro.
"""
return self._amo
@amo.setter
def amo(self, amo):
"""
Establece el nuevo nombre del amo del perro.
:param amo: Nuevo nombre del amo del perro.
"""
self._amo = amo
help(Perro)
tony = Perro('Tony', 3, 'Alexander')
id(tony)
type(tony)
tony
tony.nombre
tony.edad
tony.amo
tony.edad = 4
tony.edad
tony.amo = 'Alexander Meneses'
tony.amo
class Carro(object):
"""
Representa la clase base para una jerarquía de herencia de vehículos.
"""
def __init__(self, placa, marca, modelo, pais_procedencia):
"""
Crear un nuevo carro.
:param self: este mismo objeto.
:param placa: placa del carro.
:param marca: marca del carro.
:param modelo: modelo del carro.
:param pais_procedencia: país de procedencia.
:return: None.
"""
self.placa = placa
self.marca = marca
self.modelo = modelo
self.pais_procedencia = pais_procedencia
self.estado = False
self.velocidad = 0
def encender(self):
"""
Enciende el carro.
"""
if not self.estado:
self.estado = True
def apagar(self):
"""
Apaga el carro.
"""
if self.estado:
self.estado = False
def acelerar(self):
"""
Acelera el carro.
"""
if self.estado:
self.velocidad += 2
def frenar(self):
"""
Frena el carro.
"""
if self.estado:
self.velocidad = 0
class Camion(Carro):
"""
Representa un camión en la jerarquía de herencia de carros.
"""
def __init__(self, placa, marca, modelo, pais_procedencia, capacidad_carga):
"""
Crear un nuevo camión.
"""
super().__init__(placa, marca, modelo, pais_procedencia)
self.capacidad_carga = capacidad_carga
self.carga_actual = 0
def cargar_mercancia(self, cantidad):
"""
Carga cierta cantidad de mercancía sin exceder la capacidad de carga.
:param cantidad:int: Cantidad de mercancía a cargar.
"""
if self.carga_actual + cantidad <= self.capacidad_carga:
self.carga_actual += cantidad
def descargar_mercancia(self):
"""
Descarga toda la mercancía que hay en el camión.
"""
self.carga_actual = 0
class Deportivo(Carro):
"""
Representa un carro deportivo.
"""
def __init__(self, placa, marca, modelo, pais_procedencia, marca_rines, tipo):
"""
Crea un nuevo carro deportivo.
:param placa: Placa del carro.
:param marca: Marca del carro.
:param modelo: modelo del carro.
:param pais_procedencia: país de procedencia.
:param marca_rines: Marca de los rines del carro deportivo.
:param tipo: Tipo del carro deportivo.
"""
super().__init__(placa, marca, modelo, pais_procedencia)
self.marca_rines = marca_rines
self.tipo = tipo
self.puertas_abiertas = False
def abrir_puertas(self):
"""
Abre las puertas del carro deportivo.
"""
if not self.puertas_abiertas:
self.puertas_abiertas = True
def cerrar_puertas(self):
"""
Cierra las puertas del carro deportivo.
"""
if self.puertas_abiertas:
self.puertas_abiertas = False
class Formula1(Carro):
"""
Representa un carro de Fórmula 1.
"""
def __init__(self, placa, marca, modelo, pais_procedencia, peso):
"""
Crea un nuevo carro de Fórmula 1.
:param placa: Placa del carro.
:param marca: Marca del carro.
:param modelo: modelo del carro.
:param pais_procedencia: país de procedencia.
:param peso: Peso del carro de Fórmula 1.
"""
super().__init__(placa, marca, modelo, pais_procedencia)
self.peso = peso
def competir(self):
"""
El carro de Fórmula 1 compite.
"""
print('El carro está compitiendo...')
class Volqueta(Carro):
"""
Representa la entidad volqueta.
"""
def __init__(self, placa, marca, modelo, pais_procedencia, capacidad_carga, costo_servicio):
"""
Crea un nuevo carro tipo volqueta.
:param placa: Placa del carro.
:param marca: Marca del carro.
:param modelo: modelo del carro.
:param pais_procedencia: país de procedencia.
:param capacidad_carga: Capacidad de carga de la volqueta.
:param costo_servicio: Costo del servicio de la volqueta.
"""
super().__init__(placa, marca, modelo, pais_procedencia)
self.capacidad_carga = capacidad_carga
self.costo_servicio = costo_servicio
self.carga_actual = 0
def cargar_material(self, cantidad):
"""
Carga material en la volqueta.
:param cantidad: Cantidad de material a cargar en la volqueta.
"""
if cantidad + self.carga_actual <= self.capacidad_carga:
self.carga_actual += cantidad
def descargar_material(self):
"""
Descarga el material actual de la volqueta.
"""
self.carga_actual = 0
carro_chevrolet = Carro('ABC-123', 'Chevrolet', 2010, 'Estados Unidos')
type(carro_chevrolet)
type(carro_chevrolet).__name__
carro_chevrolet.placa
carro_chevrolet.marca
carro_chevrolet.modelo
carro_chevrolet.pais_procedencia
dir(carro_chevrolet)
carro_chevrolet.estado
if carro_chevrolet.estado:
print('El carro Chevrolet está encendido.')
else:
print('El carro Chevrolet no está encendido.')
carro_chevrolet.encender()
carro_chevrolet.estado
if carro_chevrolet.estado:
print('El carro Chevrolet está encendido.')
else:
print('El carro Chevrolet no está encendido.')
carro_chevrolet.apagar()
if carro_chevrolet.estado:
print('El carro Chevrolet está encendido.')
else:
print('El carro Chevrolet no está encendido.')
carro_chevrolet.velocidad
carro_chevrolet.acelerar()
carro_chevrolet.velocidad
carro_chevrolet.encender()
carro_chevrolet.velocidad
carro_chevrolet.acelerar()
carro_chevrolet.velocidad
carro_chevrolet.acelerar()
carro_chevrolet.velocidad
carro_chevrolet.frenar()
carro_chevrolet.velocidad
carro_chevrolet.estado
carro_chevrolet.apagar()
```
Instantiating an object of the `Camion` class:
```
camion_carga = Camion('ABD-456', 'Scania', 2015, 'China', 2000)
type(camion_carga)
type(camion_carga).__name__
camion_carga
isinstance(camion_carga, Camion)
isinstance(camion_carga, Formula1)
isinstance(camion_carga, Carro)
camion_carga.placa
camion_carga.marca
camion_carga.modelo
camion_carga.pais_procedencia
camion_carga.estado
camion_carga.capacidad_carga
camion_carga.encender()
camion_carga.estado
if camion_carga.estado:
print('El camión está encendido.')
else:
print('El camión no está encendido.')
dir(camion_carga)
help(camion_carga.cargar_mercancia)
camion_carga.cargar_mercancia(1000)
camion_carga.carga_actual
camion_carga.cargar_mercancia(3000)
camion_carga.carga_actual
camion_carga.descargar_mercancia()
camion_carga.carga_actual
camion_carga.apagar()
if camion_carga.estado:
print('El camión está encendido.')
else:
print('El camión no está encendido.')
camion_carga.estado
deportivo_lujo = Deportivo('DEF-789', 'Audi', 2020, 'Alemania', 'Marca Rines', 'Lujo')
type(deportivo_lujo)
type(deportivo_lujo).__name__
isinstance(deportivo_lujo, Deportivo)
isinstance(deportivo_lujo, Formula1)
isinstance(deportivo_lujo, Carro)
deportivo_lujo.placa
deportivo_lujo.marca
deportivo_lujo.modelo
deportivo_lujo.pais_procedencia
deportivo_lujo.marca_rines
deportivo_lujo.tipo
```
Checking whether the sports car is turned on:
```
deportivo_lujo.estado
if deportivo_lujo.estado:
print('El carro deportivo está encendido.')
else:
print('El carro deportivo no está encendido.')
deportivo_lujo.encender()
if deportivo_lujo.estado:
print('El carro deportivo está encendido.')
else:
print('El carro deportivo no está encendido.')
dir(deportivo_lujo)
if deportivo_lujo.puertas_abiertas:
print('Las puertas del carro deportivo están abiertas.')
else:
print('Las puertas del carro deportivo NO están abiertas.')
deportivo_lujo.abrir_puertas()
if deportivo_lujo.puertas_abiertas:
print('Las puertas del carro deportivo están abiertas.')
else:
print('Las puertas del carro deportivo NO están abiertas.')
deportivo_lujo.cerrar_puertas()
if deportivo_lujo.puertas_abiertas:
print('Las puertas del carro deportivo están abiertas.')
else:
print('Las puertas del carro deportivo NO están abiertas.')
deportivo_lujo.velocidad
```
Creating/instantiating a `Volqueta` object:
```
volqueta_carga = Volqueta('FGH-951', 'Daewoo', 2019, 'Taiwan', 4000, 2000)
type(volqueta_carga)
type(volqueta_carga).__name__
isinstance(volqueta_carga, Volqueta)
isinstance(volqueta_carga, Camion)
isinstance(volqueta_carga, Carro)
dir(volqueta_carga)
```
State of the `volqueta_carga` object:
```
print('Placa:', volqueta_carga.placa)
print('Marca:', volqueta_carga.marca)
print('Modelo:', volqueta_carga.modelo)
print('País de procedencia:', volqueta_carga.pais_procedencia)
print('Capacidad de carga:', volqueta_carga.capacidad_carga)
print('Costo de servicio:', volqueta_carga.costo_servicio)
print('¿La volqueta está encendida?', 'Sí' if volqueta_carga.estado else 'No')
volqueta_carga.encender()
print('¿La volqueta está encendida?', 'Sí' if volqueta_carga.estado else 'No')
volqueta_carga.acelerar()
volqueta_carga.velocidad
volqueta_carga.frenar()
print('¿La volqueta está encendida?', 'Sí' if volqueta_carga.estado else 'No')
help(volqueta_carga.cargar_material)
volqueta_carga.carga_actual
volqueta_carga.cargar_material(5000)
volqueta_carga.carga_actual
volqueta_carga.cargar_material(3900)
volqueta_carga.carga_actual
volqueta_carga.cargar_material(500)
volqueta_carga.carga_actual
volqueta_carga.acelerar()
volqueta_carga.frenar()
volqueta_carga.descargar_material()
volqueta_carga.carga_actual
```
Now let's create an object of the `Formula1` class:
```
auto_formula1 = Formula1('F11-458', 'BMW', 2020, 'Alemania', 120)
type(auto_formula1)
type(auto_formula1).__name__
```
Let's check which types the variable `auto_formula1` is an instance of:
```
isinstance(auto_formula1, Carro)
isinstance(auto_formula1, Volqueta)
isinstance(auto_formula1, Formula1)
auto_formula1
```
Querying the state of a `Formula1` object:
```
auto_formula1.placa
auto_formula1.marca
auto_formula1.modelo
auto_formula1.pais_procedencia
auto_formula1.peso
'Encendido' if auto_formula1.estado else 'Apagado'
auto_formula1.encender()
'Encendido' if auto_formula1.estado else 'Apagado'
help(auto_formula1)
auto_formula1.competir()
auto_formula1.acelerar()
auto_formula1.velocidad
auto_formula1.frenar()
auto_formula1.velocidad
auto_formula1.apagar()
```
Store all the objects of the subclasses `Camion`, `Volqueta`, `Formula1`, and `Deportivo` in a list:
```
carros = [camion_carga, volqueta_carga, auto_formula1, deportivo_lujo]
len(carros)
for c in carros:
print(type(c))
for c in carros:
print(isinstance(c, Carro))
for c in carros:
print(isinstance(c, Camion))
for c in carros:
c.apagar()
for c in carros:
print('Tipo:', type(c).__name__)
print('Placa:', c.placa)
print('Marca:', c.marca)
print('Modelo:', c.modelo)
print('País de procedencia:', c.pais_procedencia)
print('¿Está encendido?:', 'Encendido' if c.estado else 'Apagado')
c.encender()
print('¿Está encendido?:', 'Encendido' if c.estado else 'Apagado')
c.acelerar()
c.frenar()
c.apagar()
print()
```
## 10.9 Shape Inheritance Hierarchy - Implementation with Polymorphism
```
import abc
class Figura(abc.ABC):
"""
Representa el concepto abstracto de una figura geométrica de dos dimensiones.
"""
def __init__(self, color_fondo, color_borde):
"""
Inicializa una figura a partir de un color de fondo y de borde.
:param color_fondo: Color del fondo de la figura.
:param color_borde: Color del borde de la figura.
"""
self.color_fondo = color_fondo
self.color_borde = color_borde
@abc.abstractmethod
def area(self):
"""
Definición abstracta del método que permite calcular el área de una figura geométrica.
"""
pass
@abc.abstractmethod
def dibujar(self):
"""
Definición abstracta del método que permite dibujar una figura geométrica.
"""
pass
def __str__(self):
"""
Redefinición del método para mostrar la representación en texto de un objeto.
"""
return f'Color fondo: {self.color_fondo} - Color borde: {self.color_borde}'
class Rectangulo(Figura):
"""
Representa una figura geométrica de tipo rectángulo.
"""
def __init__(self, color_fondo, color_borde, ancho, alto):
"""
Crea un nuevo objeto de la clase Rectangulo.
:param color_fondo: Color del fondo de la figura.
:param color_borde: Color del borde de la figura.
:param ancho: Ancho del rectángulo.
:param alto: Alto del rectángulo.
"""
super().__init__(color_fondo, color_borde)
self.ancho = ancho
self.alto = alto
def area(self):
"""
Calcula el área de un rectángulo.
"""
return self.ancho * self.alto
def dibujar(self):
"""
Dibuja un rectángulo.
**********
* *
* *
**********
"""
print('El rectángulo se está dibujando.')
from math import pi
class Circulo(Figura):
"""
Representa una figura geométrica de tipo círculo.
"""
def __init__(self, color_fondo, color_borde, radio):
"""
Crea una nueva figura geométrica de tipo círculo.
:param color_fondo: Color del fondo de la figura.
:param color_borde: Color del borde de la figura.
:param radio: Radio del círculo.
"""
super().__init__(color_fondo, color_borde)
self.radio = radio
def area(self):
"""
Calcula el área del círculo.
:return: El área del círculo.
"""
return pi * self.radio ** 2
def dibujar(self):
"""
Dibuja una figura geométrica de tipo círculo.
"""
print('El círculo se está dibujando.')
class Triangulo(Figura):
"""
Representa una figura geométrica de tipo triángulo.
"""
def __init__(self, color_fondo, color_borde, base, altura):
"""
Crea un nuevo objeto de la clase Triangulo.
:param color_fondo: Color del fondo de la figura.
:param color_borde: Color del borde de la figura.
:param base: Base del triángulo.
:param altura: Altura del triángulo.
"""
super().__init__(color_fondo, color_borde)
self.base = base
self.altura = altura
def area(self):
"""
Calcula el área de un triángulo.
:return: Área de un triángulo.
"""
return self.base * self.altura / 2
def dibujar(self):
"""
Dibuja un triángulo.
"""
print('El triángulo se está dibujando.')
class Romboide(Figura):
"""
Representa una figura geométrica de tipo romboide.
"""
def __init__(self, color_fondo, color_borde, base, altura):
"""
Crea un nuevo objeto de la clase Triangulo.
:param color_fondo: Color del fondo de la figura.
:param color_borde: Color del borde de la figura.
:param base: Base del romboide.
:param altura: Altura del romboide.
"""
super().__init__(color_fondo, color_borde)
self.base = base
self.altura = altura
def area(self):
"""
Calcula el área de una figura geométrica de tipo romboide.
:return: Área del romboide.
"""
return self.base * self.altura
def dibujar(self):
"""
Dibuja un romboide.
"""
print('El romboide se está dibujando.')
```
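When an abstract base class derives from `abc.ABC`, Python refuses to instantiate it while any abstract method remains unimplemented. A minimal sketch (the hypothetical `FiguraMinima` and `Cuadrado` are independent of the hierarchy above):

```python
import abc

class FiguraMinima(abc.ABC):
    @abc.abstractmethod
    def area(self):
        ...

class Cuadrado(FiguraMinima):
    def __init__(self, lado):
        self.lado = lado

    def area(self):
        return self.lado ** 2

print(Cuadrado(3).area())   # 9
try:
    FiguraMinima()          # abstract method 'area' is not implemented
except TypeError as e:
    print('No se puede instanciar:', e)
```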
Create instances of the classes defined in the previous cells:
```
rectangulo_rojo = Rectangulo('Rojo', 'Negro', 5, 10)
type(rectangulo_rojo)
circulo_verde = Circulo('Verde', 'Azul', 5)
type(circulo_verde)
triangulo_azul = Triangulo('Azul', 'Amarillo', 7, 13)
type(triangulo_azul)
romboide_negro = Romboide('Negro', 'Blanco', 5, 7)
type(romboide_negro)
figuras = [rectangulo_rojo, circulo_verde, triangulo_azul, romboide_negro]
len(figuras)
for f in figuras:
print(type(f))
```
Invoking the `area` and `dibujar` methods on each shape instance:
```
rectangulo_rojo.area()
rectangulo_rojo.dibujar()
circulo_verde.area()
circulo_verde.dibujar()
triangulo_azul.area()
triangulo_azul.dibujar()
romboide_negro.area()
romboide_negro.dibujar()
```
Applying polymorphism to the `Figura` objects:
```
for f in figuras:
print(f'Tipo: {type(f).__name__}')
print('Área:', f.area())
f.dibujar()
print()
```
## 10.10 Employee Inheritance Hierarchy
```
import abc
class Empleado(abc.ABC):
"""
Representa la entidad Empleado.
"""
SALARIO_BASE = 1000
def __init__(self, documento, nombre_completo, email, especialidad):
"""
Crea un nuevo empleado.
:param documento: Documento de identificación.
:param nombre_completo: Nombre completo.
:param email: Correo electrónico.
:param especialidad: Especialidad.
"""
self.documento = documento
self.nombre_completo = nombre_completo
self.email = email
self.especialidad = especialidad
def calcular_salario(self):
"""
Calcula el salario base de un empleado.
:return: Salario base de un empleado.
"""
total = Empleado.SALARIO_BASE * 1.10
return total
def __str__(self):
"""
Representación en texto de un objeto empleado.
"""
return f'Documento: {self.documento} - Nombre completo: {self.nombre_completo}' + f' - Email: {self.email} - Especialidad: {self.especialidad}'
class EmpleadoComision(Empleado):
"""
Representa una entidad de tipo empleado por comisión.
"""
def __init__(self, documento, nombre_completo, email, especialidad, porcentaje_comision, monto):
"""
Crea un nuevo empleado.
:param documento: Documento de identificación.
:param nombre_completo: Nombre completo.
:param email: Correo electrónico.
:param especialidad: Especialidad.
:param porcentaje_comision: Porcentaje comisión.
:param monto: Monto de las ventas.
"""
super().__init__(documento, nombre_completo, email, especialidad)
self.porcentaje_comision = porcentaje_comision
self.monto = monto
def calcular_salario(self):
"""
Calcula el salario base más el salario por comisión.
:return: Salario base más el salario por comisión.
"""
salario_base = super().calcular_salario()
total = salario_base + self.monto * self.porcentaje_comision
return total
def __str__(self):
datos_basicos = super().__str__()
return datos_basicos + f' - Monto: {self.monto} - Porcentaje comisión: {self.porcentaje_comision}'
class EmpleadoHoras(Empleado):
"""
Representa un empleado que trabaja por horas.
"""
def __init__(self, documento, nombre_completo, email, especialidad, numero_horas, valor_hora):
"""
Crea un nuevo empleado que trabaja por horas.
:param documento: Documento de identificación.
:param nombre_completo: Nombre completo.
:param email: Correo electrónico.
:param especialidad: Especialidad.
:param numero_horas: Cantidad de horas trabajadas.
:param valor_hora: Valor de cada hora trabajada.
"""
super().__init__(documento, nombre_completo, email, especialidad)
self.numero_horas = numero_horas
self.valor_hora = valor_hora
def calcular_salario(self):
"""
Calcula el salario total de un empleado por horas.
:return: Salario total de un empleado por horas.
"""
salario_base = super().calcular_salario()
total = salario_base + self.numero_horas * self.valor_hora
return total
def __str__(self):
"""
Obtiene la representación en texto de un objeto EmpleadoHoras.
:return: Representación en texto de un objeto EmpleadoHoras.
"""
resultado = super().__str__()
resultado += f' - Número horas: {self.numero_horas} - Valor hora: ${self.valor_hora}'
return resultado
class EmpleadoNomina(Empleado):
"""
Representa un empleado por nómina.
"""
SALARIO = 2000
def __init__(self, documento, nombre_completo, email, especialidad, porcentaje_prestaciones):
"""
Crea un empleado de tipo nómina.
:param documento: Documento de identificación.
:param nombre_completo: Nombre completo.
:param email: Correo electrónico.
:param especialidad: Especialidad.
:param porcentaje_prestaciones: Porcentaje de prestaciones.
"""
super().__init__(documento, nombre_completo, email, especialidad)
self.porcentaje_prestaciones = porcentaje_prestaciones
def calcular_salario(self):
"""
Calcula el salario de un empleado por nómina.
:return: El salario de un empleado por nómina.
"""
total = super().calcular_salario()
total += EmpleadoNomina.SALARIO * (1 - self.porcentaje_prestaciones)
return total
def __str__(self):
"""
Obtiene la representación en texto de un objeto de esta clase.
:return:
"""
resultado = super().__str__()
resultado += f' - Porcentaje prestaciones: {self.porcentaje_prestaciones}'
return resultado
```
Instantiating objects of the classes in the employee inheritance hierarchy:
```
julio = EmpleadoComision('123456', 'Julio Ordoñez', 'julio@mail.co', 'Ventas', 0.1, 1000000)
type(julio)
alexander = EmpleadoHoras('564987', 'Alexander Pérez', 'alex@mail.co', 'Diseño gráfico', 100, 200)
type(alexander)
edgar = EmpleadoNomina('951753', 'Edgar Ramírez', 'edgar@mail.co', 'Finanzas', 0.05)
type(edgar)
empleados = [julio, alexander, edgar]
len(empleados)
for e in empleados:
print(e)
print(f'Salario final: ${e.calcular_salario()}')
print()
```
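As a check on the `super()` chain, the salaries printed above can be reproduced by hand. The arithmetic below assumes the constants defined in the classes (`SALARIO_BASE = 1000`, `EmpleadoNomina.SALARIO = 2000`); it is a sketch, not part of the notebook.

```python
SALARIO_BASE = 1000

base = SALARIO_BASE * 1.10               # Empleado.calcular_salario()
julio = base + 1_000_000 * 0.1           # EmpleadoComision: base + monto * porcentaje_comision
alexander = base + 100 * 200             # EmpleadoHoras: base + numero_horas * valor_hora
edgar = base + 2000 * (1 - 0.05)         # EmpleadoNomina: base + SALARIO * (1 - prestaciones)
print(julio, alexander, edgar)           # ≈ 101100.0  21100.0  3000.0
```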