# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## With great power comes great responsibility!
#
# By this point, you've already built a near-perfect classifier of tumor/normal status from about 2,000 breast biopsies, and you've applied it to an independent validation set of approximately 500 samples from TCGA.
#
# **You could now build a biomarker panel to identify people with a disease, people who would benefit from a drug, or address many other use cases.** How's that for power? If we're going to use these algorithms, we should get a handle on how they can be applied, and what might go wrong.
#
# For this, we're going to take a time machine back to the fall of 2015. The top song on the Billboard Hot 100 chart was *The Hills* by *The Weeknd*, and <NAME> was sitting in a conference room at ASHG. On stage, <NAME> was sharing the results of his analysis to identify epigenetic marks associated with sexual orientation. This paper would erupt into a twitterstorm of epic ferocity. The back and forth is captured in these three blog posts:
#
# * http://www.theatlantic.com/science/archive/2015/10/no-scientists-have-not-found-the-gay-gene/410059/
# * http://vizbang.tumblr.com/post/130904633310/a-responserebuttal-to-ed-yongs-article-in-the
# * http://andrewgelman.com/2015/10/10/gay-gene-tabloid-hype-update/
#
# Before class, read and think about these blog posts in the context of what you've learned thus far. Be ready to discuss how these articles fit into what you've learned, as we'll spend the first part of class on this.
#
# [In case you can't get to these blog posts, but can get to SageMathCloud, we've uploaded PDFs of them to this folder.]
# _Q1: In a nutshell, who do you think is right and why?_
#
| 98_Archive/30_Prelab_ML-III/ML3-Prelab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Dependencies
# + _kg_hide-input=true
# # !pip install --quiet efficientnet
# !pip install --quiet image-classifiers
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import glob, json, re
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from tensorflow.keras import Model
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
# import efficientnet.tfkeras as efn
from classification_models.tfkeras import Classifiers
SEED = 0
seed_everything(SEED)
# -
# ## TPU configuration
# + _kg_hide-input=true
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
# -
# # Model parameters
# + _kg_hide-input=true
input_base_path = '/kaggle/input/6-melanoma-3fold-resnet18-backbone-lr2/'
dataset_path = 'melanoma-256x256'
with open(input_base_path + 'config.json') as json_file:
    config = json.load(json_file)
config
# -
# # Load data
# + _kg_hide-input=true
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
test = pd.read_csv(database_base_path + 'test.csv')
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = KaggleDatasets().get_gcs_path(dataset_path)
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
# + _kg_hide-input=true
model_path_list = glob.glob(input_base_path + '*.h5')
n_models = len(model_path_list)
model_path_list.sort()
print(f'{n_models} Models to predict:')
print(*model_path_list, sep='\n')
# -
# ## Auxiliary functions
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-input=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
# Datasets utility functions
UNLABELED_TFREC_FORMAT = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"image_name": tf.io.FixedLenFeature([], tf.string), # shape [] means single element
# meta features
"patient_id": tf.io.FixedLenFeature([], tf.int64),
"sex": tf.io.FixedLenFeature([], tf.int64),
"age_approx": tf.io.FixedLenFeature([], tf.int64),
"anatom_site_general_challenge": tf.io.FixedLenFeature([], tf.int64),
}
def decode_image(image_data, height, width, channels):
    image = tf.image.decode_jpeg(image_data, channels=channels)
    image = tf.cast(image, tf.float32) / 255.0
    image = tf.reshape(image, [height, width, channels])
    return image
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
    example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
    image = decode_image(example['image'], height, width, channels)
    image_name = example['image_name']
    # meta features
    data = {}
    data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
    data['sex'] = tf.cast(example['sex'], tf.int32)
    data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
    data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
    return {'input_image': image, 'input_tabular': data}, image_name  # returns ({image, tabular data}, image_name)
def load_dataset_test(filenames, buffer_size=-1):
    dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size)  # automatically interleaves reads from multiple files
    dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
    # returns a dataset of ({image, tabular data}, image_name) tuples
    return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1):
    dataset = load_dataset_test(filenames, buffer_size=buffer_size)
    dataset = dataset.batch(batch_size, drop_remainder=False)
    dataset = dataset.prefetch(buffer_size)
    return dataset
# -
# # Model
def model_fn(input_shape):
    input_image = L.Input(shape=input_shape, name='input_image')
    ResNet18, preprocess_input = Classifiers.get('resnet18')
    base_model = ResNet18(input_shape=input_shape,
                          weights=None,
                          include_top=False)
    x = base_model(input_image)
    x = L.GlobalAveragePooling2D()(x)
    x = 0.01 * x + 0.99 * K.stop_gradient(x)  # forward pass unchanged, backbone gradients scaled by 0.01
    output = L.Dense(1, activation='sigmoid')(x)
    model = Model(inputs=input_image, outputs=output)
    return model
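The line `x = 0.01*x + 0.99*K.stop_gradient(x)` deserves a note: it leaves the forward activations unchanged while damping the gradient flowing into the backbone by a factor of 0.01. A plain-Python sketch of the identity (our own illustration, not runnable TensorFlow code):

```python
def scaled_gradient_passthrough(x, scale=0.01):
    # stands in for K.stop_gradient(x): same value in the forward pass,
    # but contributes no gradient in the backward pass
    stopped = x
    # forward: scale*x + (1-scale)*x == x, so the value is unchanged;
    # backward: only the scale*x term is differentiated, so d/dx == scale
    return scale * x + (1 - scale) * stopped

# the forward pass is (numerically) the identity
print(abs(scaled_gradient_passthrough(3.0) - 3.0) < 1e-12)
```

This is a common trick for fine-tuning a pretrained backbone with a much smaller effective learning rate than the new head.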
# # Make predictions
# + _kg_hide-input=true _kg_hide-output=true
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
for model_path in model_path_list:
    print(model_path)
    with strategy.scope():
        model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
        model.load_weights(model_path)
    test_preds += model.predict(test_dataset) / n_models
image_names = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
name_preds = dict(zip(image_names, test_preds.reshape(len(test_preds))))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
# -
# # Visualize predictions
# + _kg_hide-input=true
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge','target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target >= .5').head(10))
# -
# # Test set predictions
# + _kg_hide-input=true
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission.to_csv('submission.csv', index=False)
submission.head(10)
| Model backlog/Inference/6-melanoma-inf-3fold-resnet18-backbone-lr2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_de:
# -
#
# ## Differential Evolution
#
# Differential evolution is a classical single-objective optimization
# algorithm <cite data-cite="de"></cite> in which different crossover variations
# and methods can be defined. It is known for its good performance in
# global optimization.
#
# The differential evolution crossover is simply defined by:
#
# $$
# v = x_{\pi_1} + F \cdot (x_{\pi_2} - x_{\pi_3})
# $$
#
# where $\pi$ is a random permutation with 3 entries. The difference between individuals 2 and 3 is added to the first one. This is shown below:
#
#
# <div style="display: block;margin-left: auto;margin-right: auto;width: 50%;">
# 
# </div>
#
#
# Then, a second crossover between an individual and the so-called donor vector $v$ is performed. This second crossover can be binomial/uniform or exponential.
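The mutation and binomial crossover described above can be sketched in a few lines of NumPy (an illustrative sketch of DE/rand/1/bin, not the pymoo implementation; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def de_rand_1_bin(pop, F=0.8, CR=0.9):
    """Produce one DE/rand/1/bin offspring per individual (illustrative sketch)."""
    n, d = pop.shape
    offspring = pop.copy()
    for i in range(n):
        # pick three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
        cross = rng.random(d) < CR                  # binomial crossover mask
        cross[rng.integers(d)] = True               # guarantee at least one donor gene
        offspring[i] = np.where(cross, donor, pop[i])
    return offspring

pop = rng.random((10, 5))
children = de_rand_1_bin(pop)
print(children.shape)
```

In a full algorithm the offspring would then be compared against their parents and the better of each pair kept (one-to-one survivor selection).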
#
#
#
# A great tutorial and more detailed information can be found [here](http://www1.icsi.berkeley.edu/~storn/code.html). The following guideline is copied from the description there (variable names are modified):
#
# If you are going to optimize your own objective function with DE, you may try the following classical settings for the input file first: Choose method e.g. DE/rand/1/bin, set the population size to 10 times the number of parameters, select weighting factor F=0.8, and crossover constant CR=0.9.
# It has been found recently that selecting F from the interval (0.5, 1.0) randomly for each generation or for each difference vector, a technique called dither, improves convergence behavior significantly, especially for noisy objective functions.
#
#
# It has also been found that setting CR to a low value, e.g. CR=0.2 helps optimizing separable functions since it fosters the search along the coordinate axes. On the contrary this choice is not effective if parameter dependence is encountered, something which is frequently occurring in real-world optimization problems rather than artificial test functions. So for parameter dependence the choice of CR=0.9 is more appropriate. Another interesting empirical finding is that raising NP above, say, 40 does not substantially improve the convergence, independent of the number of parameters. It is worthwhile to experiment with these suggestions. Make sure that you initialize your parameter vectors by exploiting their full numerical range, i.e. if a parameter is allowed to exhibit values in the range (-100, 100) it's a good idea to pick the initial values from this range instead of unnecessarily restricting diversity.
#
#
# Keep in mind that different problems often require different settings for NP, F and CR (have a look into the different papers to get a feeling for the settings). If you still get misconvergence you might want to try a different method. We mostly use 'DE/rand/1/...' or 'DE/best/1/...'. The crossover method is not so important although Ken Price claims that binomial is never worse than exponential. In case of misconvergence also check your choice of objective function. There might be a better one to describe your problem. Any knowledge that you have about the problem should be worked into the objective function. A good objective function can make all the difference.
#
# And this is how DE can be used:
# + code="algorithms/usage_de.py"
from pymoo.algorithms.so_de import DE
from pymoo.factory import get_problem
from pymoo.operators.sampling.latin_hypercube_sampling import LatinHypercubeSampling
from pymoo.optimize import minimize
problem = get_problem("ackley", n_var=10)
algorithm = DE(
    pop_size=100,
    sampling=LatinHypercubeSampling(iterations=100, criterion="maxmin"),
    variant="DE/rand/1/bin",
    CR=0.5,
    F=0.3,
    dither="vector",
    jitter=False
)
res = minimize(problem,
               algorithm,
               seed=1,
               verbose=False)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: pymoo.algorithms.so_de.DE
# :noindex:
| doc/source/algorithms/differential_evolution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 (stable)
# language: python
# name: stable
# ---
# # Some comments from homework 4
#
# * Don't make lines too long; it's considered bad style, as horizontal
# scrolling is awkward. Most projects demand lines < 80 or, more rarely, < 100 chars.
# * This also helps the case when you want to compare two codes next to
# each other
# * Include a space between arguments for increased readability:
# * `Good: myfunc(a, b, c, d, e)`
# * `Bad: myfunc(a,b,c,d,e)`
# * Only use `elif` if there's another differing case to check. Otherwise, just use `else`.
# * Help yourself by doing unit conversions before the actual equation.
# * Otherwise, already awkward looking equations become even harder to read.
# * imports at top of module, not inside functions!
# * Makes the reader immediately understand the dependencies of your code.
# * This paradigm is being softened for parallel processing, where it becomes easier to send a logically complete function (with imports at beginning of function) to the different processors.
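A toy illustration of that softened paradigm (the function name is made up): a local import keeps the function logically complete, which is what makes it easy to ship to worker processes in parallel code.

```python
def slow_square(x):
    # local import: the function carries its own dependency, so it can be
    # sent as a self-contained unit to another process (e.g. via a
    # multiprocessing Pool, which pickles the function for its workers)
    import math
    return math.pow(x, 2)

print(slow_square(3))  # 9.0
```

For ordinary sequential code, though, the top-of-module convention still wins: the reader sees every dependency at a glance.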
# ## Comments on HW 5
# ```python
# Narr = [N(i) for i in xArr] # list comprehension, **NOT** okay for vectorial
# ```
# is not the same as
# ```python
# Narr = N(xArr) # optimal
# ```
# is not the same as
# ```python
# Narr = np.exp(xArr**2/[....]) # kinda cheating...
# ```
# And when the instructions say, call f() on each element of a vector, it means that.
# So:
# ```python
# xList = [f(i) for i in vector]
# ```
# #### Q. Review: what is the rank of $A_{i,j,k,l}$?
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
# #### little matplotlib config trick
#
# As the colormap default is still the awful 'jet' colormap (it creates artificial visual boundaries that don't exist in the data, i.e. it fools you), I want to switch the default to 'viridis'.
#
# (exercise for the reader: this can also be done in a config file that is read every time matplotlib is loaded!)
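For that config-file exercise, matplotlib reads a plain-text `matplotlibrc` file at startup; a sketch of locating it (the commented lines show the kind of entries that would make the change permanent — treat the exact file location as install-dependent):

```python
import matplotlib

# path of the matplotlibrc file this installation would read at import time
print(matplotlib.matplotlib_fname())

# entries in that file use "key: value" syntax, e.g.:
#   image.cmap: viridis
#   image.interpolation: none
```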
from matplotlib import rcParams
# Now, this config dictionary is huge:
rcParams.keys()
[key for key in rcParams.keys() if 'map' in key]
rcParams['image.cmap']
rcParams['image.cmap'] = 'viridis'
rcParams['image.interpolation'] = 'none'
# ### Visualizing Multi-Dimensional Arrays
# + active=""
# Simple example:
# -
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10,11,12]])
x
# Q. What is the rank of x?
#
# Q. What is the shape of x?
x.shape
x.ndim
# + active=""
# Visualize it:
# -
print(x) # for reference
pl.imshow(x)
pl.colorbar();
# Notice that the first row of the array was plotted
# at the top of the image.
#
# This may be counterintuitive if, when you think of
# row #0, you think of y=0, which in a normal x-y coordinate
# system is at the bottom.
#
# This can be changed using the "origin" keyword argument.
#
# The reason for this is that this command was made for displaying
# CCD image data, and often the pixel (0,0) was considered to be the
# one in the upper left.
#
# But it also matches the standard print-out of arrays, so that's good as well.
print(x) # for reference
pl.imshow(x, origin='lower')
pl.colorbar();
# +
# Interpolation (by default) makes an image look
# smoother.
# Instead:
pl.imshow(x, origin='lower', interpolation='bilinear')
pl.colorbar()
# -
# To look up other interpolations, just use the help feature.
#
# And by the way, there shouldn't be any space after the question mark!
# +
# pl.imshow?
# -
x # for reference
# + active=""
# If the rows were to correspond to x and the columns to y, then
# we need to regroup the elements.
#
# Q. How could we do that?
#
#
#
#
#
#
#
#
#
#
# -
print(x)
print()
print(x.T)
xT = x.T
pl.imshow(xT)
pl.colorbar()
# #### Q. And what should this yield?
xT.shape
# #### Arrays can be indexed in one of two ways:
xT # Reminder
# #### Q. What should this be?
xT[2][1]
# + active=""
# Or, more simply:
# -
xT[2,1]
# #### Can access x and y index information using numpy.indices:
xT
# +
print(np.indices(xT.shape))
print("-" * 50)
for i in range(xT.shape[0]):
    for j in range(len(xT[0])):
        print(i, j)
# + active=""
# The result is clearer if we unpack it. The x and y indices are stored separately:
# -
i, j = np.indices(xT.shape)
i
j
# + active=""
# These two arrays, i and j, are the same shape as xT, and each element corresponds to the row number (i) and column number (j).
# -
# #### Q. How to isolate the element in xT corresponding to i = 1 and j = 2?
xT
xT[1,2]
# + active=""
# or, use boolean logic:
# +
print(xT[np.logical_and(i == 1, j == 2)])
# Q. How did this work?
print(np.logical_and(i == 1, j == 2))
i == 1
# -
# #### Q. How about the *indices* of all even elements in xT?
xT # for reference
np.argwhere(xT % 2 == 0)
# Note you only need this if you want to use these indices somewhere else, e.g. in another array of same shape.
#
# Because if you just wanted the values, you of course would do that:
xT[xT % 2 == 0]
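To make the "use these indices somewhere else" point concrete, here is a small sketch that reuses the `np.argwhere` output to mark positions in a second array of the same shape (the arrays are stand-ins, not `xT`):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
b = np.zeros_like(a)

# each row of idx is a (row, col) pair; split the columns to use them
# as fancy indices into another array of the same shape
idx = np.argwhere(a % 2 == 0)
b[idx[:, 0], idx[:, 1]] = 1   # mark where a is even
print(b)
```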
# #### How to find particular elements in a 2-D array?
xT # for reference
# + active=""
# Find indices of all elements with values > 5:
# -
np.argwhere(xT > 5)
xT
# ### Array Computing
# + active=""
# Can add, subtract, multiply, and divide 2 arrays **so long as they have the same shape.**
# -
xT
pl.imshow(xT)
pl.colorbar()
pl.clim(1, 12) # colorbar limits,
# analogous to xlim, ylim
print(xT + 5)
pl.imshow(xT+5)
pl.colorbar()
# pl.clim(1, 12)
# + active=""
# Notice how "clim" affects the output!
| lecture_15_ndarraysII.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample
# +
examMarks = [6,7,6,9,8,7,5,6,7,9,10,11,7,12,8]
# Print all the marks in the list
# Method 1
print(examMarks)
# Method 2
for mark in examMarks:
    print(mark)
print("--------------------------------------------")
# Compute the sum of the numbers in the list
# Method 1
print(sum(examMarks))
# Method 2
summa = 0
for mark in examMarks:
    summa = summa + mark
print(summa)
# If a mark equals 12, replace it with 10
counter = 0
for mark in examMarks:
    if mark == 12:
        examMarks[counter] = 10
    counter = counter + 1
print(examMarks)
# -
# # Homework
# +
# Given a list
games = ["Minecraft","LoL","Naruto Ninja Storm","Terraria"]
# Print all elements
# If the game is Minecraft, replace its name with Minecraft Deluxe Edition
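One possible solution, following the pattern from the sample above (a sketch, not an official answer key):

```python
games = ["Minecraft", "LoL", "Naruto Ninja Storm", "Terraria"]

# print every element
for game in games:
    print(game)

# if the game is Minecraft, replace its name with Minecraft Deluxe Edition
for i in range(len(games)):
    if games[i] == "Minecraft":
        games[i] = "Minecraft Deluxe Edition"
print(games)
```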
| Python practice/.ipynb_checkpoints/ExamPractice-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# We learned the usage of RDKit - a Python library for cheminformatics. <br>
# In this tutorial, we will use a support vector regression (SVR) to predict logP (the partition coefficient). <br>
# The input, a structural feature of each molecule, is the Morgan fingerprint, and the output is logP. <br>
#
# At first, import necessary libraries.
import numpy as np
from rdkit import Chem
from rdkit.Chem.Crippen import MolLogP
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score
from scipy import stats
import matplotlib.pyplot as plt
# I prepared the SMILES of molecules in the ZINC dataset.
# You can download more data from the ZINC database - http://zinc.docking.org/
#
# Obtain the molecular fingerprints and logP values from RDKit. <br>
# You can see more detailed usage of RDKit in the 'RDKit Cookbook' - https://www.rdkit.org/docs/Cookbook.html.
# +
num_mols = 20000
with open('smiles.txt', 'r') as f:
    contents = f.readlines()
fps_total = []
logP_total = []
for i in range(num_mols):
    smi = contents[i].split()[0]
    m = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(m, 2)
    arr = np.zeros((1,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    fps_total.append(arr)
    logP_total.append(MolLogP(m))
fps_total = np.asarray(fps_total)
logP_total = np.asarray(logP_total)
# -
# Then split the total dataset to a training and test set.
# +
num_total = fps_total.shape[0]
num_train = int(num_total*0.8)
num_total, num_train, (num_total-num_train)
# -
fps_train = fps_total[0:num_train]
logP_train = logP_total[0:num_train]
fps_test = fps_total[num_train:]
logP_test = logP_total[num_train:]
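The slice-based split above assumes the SMILES file is already in random order; a shuffled split with a fixed seed removes that assumption (a sketch with stand-in arrays, not the fingerprint data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(100).reshape(100, 1)   # stand-in for fps_total
y = np.arange(100, dtype=float)      # stand-in for logP_total

perm = rng.permutation(len(X))       # random ordering of sample indices
n_train = int(len(X) * 0.8)
X_train, X_test = X[perm[:n_train]], X[perm[n_train:]]
y_train, y_test = y[perm[:n_train]], y[perm[n_train:]]
print(len(X_train), len(X_test))
```

Indexing both `X` and `y` with the same permutation keeps features and targets aligned.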
# We will use a SVR model for a regression model. <br>
# Documentation is in here - http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html. <br>
# In this case, we will use a polynomial kernel with a kernel coefficient (gamma) of 5.0.
_gamma = 5.0
clf = SVR(kernel='poly', gamma=_gamma)
clf.fit(fps_train, logP_train)
# After training finishes, we should check the accuracy of our predictions. <br>
# For evaluation, we will use r2 and mean squared error for metrics.
logP_pred = clf.predict(fps_test)
r2 = r2_score(logP_test, logP_pred)
mse = mean_squared_error(logP_test, logP_pred)
r2, mse
# We can visualize the results from the model. <br>
# Plot (True values - Predicted values), and also linear regression between them.
slope, intercept, r_value, p_value, std_error = stats.linregress(logP_test, logP_pred)
yy = slope*logP_test+intercept
plt.scatter(logP_test, logP_pred, color='black', s=1)
plt.plot(logP_test, yy, label='Predicted logP = '+str(round(slope,2))+'*True logP + '+str(round(intercept,2)))
plt.xlabel('True logP')
plt.ylabel('Predicted logP')
plt.legend()
plt.show()
# In summary, we used an SVR model to predict logP. <br>
# With the prepared dataset, we can easily preprocess the data, construct the model and validate the results. <br>
# I hope that students have become more accustomed to using RDKit, machine learning models and their visualizations. <br>
# Don't be afraid of becoming familiar with data science. Just search for the functions you need and use them. I assure you that trial and error is the best teacher.
| Assignments/CH485---Artificial-Intelligence-and-Chemistry-master/Practice 02/.ipynb_checkpoints/prediction_logP-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
##################################################################
# Open-source code for "Python Machine Learning and Practice:
# The Road from Zero to Kaggle Competitions (2023 Edition)"
#-----------------------------------------------------------------
# @Section: 4.4.3 (Querying/Filtering Data)
# @Author: Fan Miao
# @Email: <EMAIL>
# @Weibo: https://weibo.com/fanmiaothu
# @Official QQ group: 561500762
##################################################################
# -
import pandas as pd
# +
# Create a DataFrame from a dict.
d = {'国家': ['中国', '美国', '日本', '俄罗斯', '英国'],
     '人口': [14.22, 3.18, 1.29, 1.4, 0.66]}
df = pd.DataFrame(d, index=['a', 'b', 'c', 'd', 'e'])
df
# -
# Select a single column by name.
df['国家']
# Select multiple columns.
df[['国家', '人口']]
# Select a single row by its index label.
df.loc['c']
# Select multiple rows by index labels.
df.loc[['c', 'd']]
# Another way to select multiple rows, similar to list slicing.
df[0:3]
df[0:3]
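Since this section is about querying/filtering, the boolean-mask idiom is worth showing too (a sketch with a small stand-in DataFrame, not the one above):

```python
import pandas as pd

df2 = pd.DataFrame({'country': ['China', 'USA', 'Japan'],
                    'population': [14.22, 3.18, 1.29]})

# keep only the rows whose population column exceeds 2
large = df2[df2['population'] > 2]
print(large)
```

The expression `df2['population'] > 2` produces a boolean Series, and indexing with it keeps exactly the rows where the value is True.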
| Chapter_4/.ipynb_checkpoints/Section_4.4.3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # ModelSeed universal model
# - 2022-04-14
# - <NAME>
import requests
import json
import os
import pandas as pd
try:
    os.mkdir("../output/ModelSeed/")
except FileExistsError:
    pass
try:
    os.mkdir("../input/ModelSeed/")
except FileExistsError:
    pass
# +
# r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/Pathways/HopeScenarios.txt?raw=True")
# with open("../input/ModelSeed/HopeScenarios.txt", "w") as f:
# f.write(r.text)
# r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/Pathways/KEGG.pathways?raw=True")
# with open("../input/ModelSeed/KEGG.pathways", "w") as f:
# f.write(r.text)
# r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/Pathways/plantdefault.pathways.tsv?raw=True")
# with open("../input/ModelSeed/plantdefault.pathways.tsv", "w") as f:
# f.write(r.text)
# -
# ## pathway request
r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/Pathways/ModelSEED_Subsystems.tsv?raw=True")
with open("../input/ModelSeed/ModelSEED_Subsystems.tsv", "w") as f:
    f.write(r.text)
# ## compounds request
r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/compounds.tsv?raw=True")
with open("../input/ModelSeed/compounds.tsv", "w") as f:
    f.write(r.text)
r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/compounds.json?raw=True")
with open("../input/ModelSeed/compounds.json", "w") as f:
    f.write(r.text)
# ## reactions request
r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/reactions.tsv?raw=True")
with open("../input/ModelSeed/reactions.tsv", "w") as f:
    f.write(r.text)
r = requests.get("https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/reactions.json?raw=True")
with open("../input/ModelSeed/reactions.json", "w") as f:
    f.write(r.text)
cpd_df = pd.read_csv("../input/ModelSeed/compounds.tsv", sep = "\t") # using json formatted compound table for parsing
rxn_df = pd.read_csv("../input/ModelSeed/reactions.tsv", sep = '\t')
pwy_df = pd.read_csv("../input/ModelSeed/ModelSEED_Subsystems.tsv", sep = '\t')
cpd_df.shape # matched with the json format! So using json is fine!
rxn_df.shape # matched with the json format! So using json is fine!
with open("../input/ModelSeed/compounds.json", "r") as f:
    cpd_list = json.load(f)
with open("../input/ModelSeed/reactions.json", "r") as f:
    rxn_list = json.load(f)
len(rxn_list)
# ----
import sys
from metDataModel.core import Compound, Reaction, Pathway, MetabolicModel
sys.path.append("/Users/gongm/Documents/projects/mass2chem/")
sys.path.append("/Users/gongm/Documents/projects/JMS/JMS/jms/")
import mass2chem
from jms import formula
# # Porting compounds
cpd = Compound()
cpd.__dict__
myCpdlist = []
for seedCpd in cpd_list:
    cpd = Compound()
    cpd.abbreviation = seedCpd['abbreviation']
    cpd.id = seedCpd['id']
    cpd.internal_id = cpd.id
    cpd.db_ids = []
    try:
        for item in seedCpd['aliases']:
            record = tuple(item.split(": "))
            if record[0] == "Name":
                cpd.name = record[1]
            else:
                cpd.db_ids.append(record)
    except:
        pass
    cpd.db_ids.append(('inchikey', seedCpd['inchikey']))
    cpd.charge = seedCpd['charge']
    cpd.charged_formula = seedCpd['formula']
    cpd.SMILES = seedCpd['smiles']
    try:
        cpd.neutral_formula = formula.adjust_charge_in_formula(cpd.charged_formula, cpd.charge)
    except:
        cpd.neutral_formula = ''
    try:
        cpd.neutral_mono_mass = round(mass2chem.formula.calculate_formula_mass(cpd.neutral_formula), 6)
    except:
        cpd.neutral_mono_mass = 0
    myCpdlist.append(cpd)
len(myCpdlist)
[x.__dict__ for x in myCpdlist[:2]]
# # Porting reactions
rxn = Reaction()
rxn.__dict__
rxn_list[0]
rxn_list[0].keys()
test = rxn_list[10]['code']
rxn_list[10]['code']
# +
import re

def extract_compounds(equation_code):
    reactant_str, product_str = equation_code.split("=")
    reactants = re.findall(r'\) (cpd\d+)\[', reactant_str)
    products = re.findall(r'\) (cpd\d+)\[', product_str)
    return reactants, products

reactants, products = extract_compounds(rxn_list[10]['code'])
# -
def port_reaction(R):
    new = Reaction()
    new.id = R['id']
    reactants, products = extract_compounds(R['code'])
    new.reactants = reactants
    new.products = products
    if R['ec_numbers']:
        if "|" in R['ec_numbers']:
            new.enzymes = R['ec_numbers'].split("|")
        else:
            new.enzymes = R['ec_numbers']  # this version may have it as a string
    return new
myRxnlist = []
for R in rxn_list:
    if R['is_transport'] == 0:  # remove the transport reactions
        myRxnlist.append(port_reaction(R))
print(f'There are {len(myRxnlist)} true reactions without considering transport in {len(rxn_list)} reactions')
# ----
# # Port pathways
pwy = Pathway()
pwy.__dict__
subsys2reactions_dict = pwy_df.groupby('Sub-class')['Reaction'].apply(list).to_dict()
def generateMSPwy(number):
    res = "MSPwy" + str(number).zfill(4)
    return res
path_test = Pathway();path_test.__dict__
mysubsyslist = []
i = 1
for k, v in subsys2reactions_dict.items():
    new = Pathway()
    new.id = generateMSPwy(i)
    new.name = k
    new.source = "ModelSeed"
    new.list_of_reactions = v
    new.status = "testing"
    i += 1
    mysubsyslist.append(new)
len(mysubsyslist)
# # Collect data and output
# +
note = """ModelSeed Universal Model. Add all reactions and compounds, """
## metabolicModel to export
MM = MetabolicModel()
MM.id = '' #
MM.meta_data = {
'species': 'universal',
'version': '',
'sources': ['https://github.com/ModelSEED/ModelSEEDDatabase/blob/master/Biochemistry/, retrieved 2022-04-13'], #
'status': '',
'last_update': '20220413', #
'note': note,
}
MM.list_of_pathways = [P.serialize() for P in mysubsyslist]
MM.list_of_reactions = [R.serialize() for R in myRxnlist]
MM.list_of_compounds = [C.serialize() for C in myCpdlist]
# -
# check output
[
MM.list_of_pathways[2],
MM.list_of_reactions[:2],
MM.list_of_compounds[100:102],
]
with open('../output/ModelSeed/Universal_ModelSeed.json', "w") as f:
    json.dump(MM.serialize(), f)
with open('../output/ModelSeed/Universal_ModelSeed.json', "r") as f:
    test = json.load(f)
| docs/ModelSeed_Universal_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sos
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SoS
# language: sos
# name: sos
# ---
# + [markdown] kernel="SoS"
# # SoS actions, targets, and functions
#
# + [markdown] kernel="SoS"
# ## SoS Action
# + [markdown] kernel="SoS"
# Although arbitrary Python functions can be used in an SoS step process, SoS defines many special functions called **`actions`** that accept some shared parameters and can behave differently in different modes of SoS.
#
# For example, function `time.sleep(5)` would be executed in run mode,
# + kernel="SoS"
# %run
[0]
import time
st = time.time()
time.sleep(1)
print('I just slept {:.2f} seconds'.format(time.time() - st))
# + [markdown] kernel="SoS"
# and also in dryrun mode (option `-n`),
# + kernel="SoS"
# %run -n
[0]
import time
st = time.time()
time.sleep(1)
print('I just slept {:.2f} seconds'.format(time.time() - st))
# + [markdown] kernel="SoS"
# because these statements are regular Python functions. However, if you put the statements in an action `python`, the statements would be executed in run mode,
# + kernel="SoS"
# %run
[0]
python:
    import time
    st = time.time()
    time.sleep(1)
    print('I just slept {:.2f} seconds'.format(time.time() - st))
# + [markdown] kernel="SoS"
# but will print out the script it would execute in dryrun mode (option `-n`)
# + kernel="SoS"
# %run -n
[0]
python:
    import time
    st = time.time()
    time.sleep(1)
    print('I just slept {:.2f} seconds'.format(time.time() - st))
# + [markdown] kernel="SoS"
# ## Action options
# + [markdown] kernel="SoS"
# Actions define their own parameters but their execution is controlled by a common set of options.
# + [markdown] kernel="SoS"
# ### `active`
# + [markdown] kernel="SoS"
# Action option `active` is used to activate or inactivate an action in an input loop. It accepts either a condition that returns a boolean value (`True` or `False`), or one or more integers or slices that correspond to the indexes of active substeps.
#
# The first usage allows you to execute an action only if certain condition is met, so
#
# ```
# if cond:
# action(param)
# ```
#
# is equivalent to
#
# ```
# action(param, active=cond)
# ```
# or
# ```
# action: active=cond
# param
# ```
# in script format. For example, the following action will only be executed if `a.txt` exists
# + kernel="SoS"
!echo "something" > a.txt
sh: active=path('a.txt').exists()
    echo `wc a.txt`
# + [markdown] kernel="SoS"
# For the second usage, when a loop is defined by the `for_each` or `group_by` options of an `input:` statement, an action after the input statement would be repeated for each substep. The `active` parameter accepts a non-negative integer, a negative integer (counting backward), a sequence of indexes, or a slice object, specifying the substeps for which the action is active.
#
# For example, for an input loop that iterates over a sequence of numbers, the first action `run` is executed for all groups, the second action is executed for even-numbered groups, and the last action is executed only for the last group.
# + kernel="SoS"
seq = range(5)
input: for_each='seq'
run: expand=True
echo I am active at all groups {_index}
run: active=slice(None, None, 2), expand=True
echo I am active at even groups {_index}
run: active=-1, expand=True
echo I am active at last group {_index}
# + [markdown] kernel="SoS" output_cache="[]"
# ### `allow_error`
# + [markdown] kernel="SoS" output_cache="[]"
# Option `allow_error` tells SoS that the action might fail but this should not stop the workflow from executing. This option essentially turns an error into a warning message and changes the return value of the action to `None`.
#
# For example, in the following example, the erroneous shell script stops the execution of the step, so the following statement is not executed.
# + kernel="sos" output_cache="[{\"output_type\":\"stream\",\"text\":\"/<KEY>: line 1: This: command not found\\nFailed to process statement run(r\\\"\\\"\\\"This is not shell\\\\n\\\"\\\"\\\")...fter run'): Failed to execute script (ret=127). \\nPlease use command\\n /bin/bash /<KEY>/.sos/interactive_0_0\\nunder /private/var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmpzn3zpjx3 to test it.\\n\",\"name\":\"stderr\"}]"
# %sandbox --expect-error
run:
This is not shell
print('Step after run')
# + [markdown] kernel="SoS" output_cache="[]"
# but in this example, the error of the `run` action is turned into a warning message and the following statement is still executed.
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"text\":\"/var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmpfemreue0: line 1: This: command not found\\n\\u001b[95mWARNING\\u001b[0m: \\u001b[95mFailed to execute script (ret=127). \\nPlease use command\\n /bin/bash /var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmp557fin4d/.sos/interactive_0_0\\nunder /Users/bpeng1/SOS/docs/src/documentation to test it.\\u001b[0m\\n\",\"name\":\"stderr\"}]"
run: allow_error=True
This is not shell
print('Step after run')
# + [markdown] kernel="SoS"
# ### `args`
# + [markdown] kernel="SoS"
# All script-executing actions accept an option `args`, which changes how the script is executed.
#
# By default, such an action has an `interpreter` (e.g. `bash`), a default `args='{filename:q}'`, and the script would be executed as `interpreter args`, which is
# ```
# bash {filename:q}
# ```
# where `{filename:q}` would be replaced by the script file created from the body of the action.
# + [markdown] kernel="SoS"
# If you would like to change the command line with additional parameters, or use a different format of the filename, you can specify an alternative `args`, with variables `filename` (filename of the temporary script) and `script` (actual content of the script).
#
# For example, option `-n` can be added to command `bash` to execute the script in dryrun mode
# + kernel="SoS"
bash: args='-n {filename:q}'
echo "-n means running in dryrun mode (only check syntax)"
# + [markdown] kernel="SoS"
# You can even execute a command without a script file, passing the content of the script directly on the command line
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"name\":\"stdout\",\"text\":\"10000 loops, best of 3: 31.2 usec per loop\\n\"}]"
python: args='-m timeit {script}'
'"-".join(str(n) for n in range(100))'
# + [markdown] kernel="SoS"
# ### `container` and `engine`
# + [markdown] kernel="SoS"
# Parameters `container` and `engine` specify the name or URL of the container and the engine used to execute the action. Parameter `engine` is usually derived from `container` but can be specified explicitly as one of
#
# * `engine='docker'`: Execute the script in the specified container using [docker](https://www.docker.com/)
# * `engine='singularity'`: Execute the script with [singularity](https://www.sylabs.io/)
# * `engine='local'`: Execute the script locally; this is the default mode.
#
# Parameters `container` and `engine` accept the following values:
#
# | `container` | `engine` | execute by | example | comment |
# | -- | -- | -- | -- | -- |
# | `tag` | ` ` | docker | `container='ubuntu'` | docker is the default container engine |
# | `name` | `docker` | docker | `container='ubuntu', engine='docker'` | treat `name` as docker tag |
# | `docker://tag` | ` ` | docker | `container='docker://ubuntu'` | |
# | `filename.simg` | ` ` | singularity | `container='ubuntu.simg'` | |
# | `shub://tag` | ` ` | singularity | `container='shub://GodloveD/lolcow'` | Image will be pulled to a local image |
# | `name` | `singularity` | singularity | `container='a_dir', engine='singularity'` | treat `name` as singularity image file or directory |
# | `docker://tag` | `singularity` | singularity | `container='docker://godlovdc/lolcow', engine='singularity'` | |
# | `file://filename` | ` ` | singularity | `container='file://ubuntu.simg'` | |
# | `local://name` | ` ` | local | `container='local://any_tag'` | `local://any_tag` is equivalent to `engine='local'` |
# | `name` | `local` | local | `engine=engine` with `parameter: engine='docker'` | Usually used to override parameter `container` |
#
# Basically,
# * `container='tag'` pulls and uses docker image `tag`
# * `container='filename.simg'` uses an existing singularity image
# * `container='shub://tag'` pulls and uses singularity image `shub://tag`, which generates a local `tag.simg` file
# + [markdown] kernel="SoS"
# If a docker image is specified, the action is assumed to be executed in the specified docker container. The image will be automatically downloaded (pulled) if it is not available locally.
#
# For example, executing the following script
# + [markdown] kernel="SoS"
# ```
# [10]
# python3: container='python'
# set = {'a', 'b'}
# print(set)
# ```
# + [markdown] kernel="SoS"
# under a docker terminal (that is connected to the docker daemon) will
#
# 1. Pull docker image `python`, which is the official docker image for Python 2 and 3.
# 2. Create a python script with the specified content
# 3. Run the docker container `python` and make the script available inside the container
# 4. Use the `python3` command inside the container to execute the script.
#
# Additional `docker_run` parameters can be passed to actions when the action
# is executed in a docker image. These options include
#
# * `name`: name of the container (option `--name`)
# * `tty`: if a tty is attached (default to `True`, option `-t`)
# * `stdin_open`: if stdin should be open (default to `False`, option `-i`)
# * `user`: username (default to `root`, option `-u`)
# * `environment`: Can be a string, a list of strings, or a dictionary of environment variables for docker (option `-e`)
# * `volumes`: shared volumes as a string or list of strings, in the format of `hostdir` (for `hostdir:hostdir`) or `hostdir:mnt_dir`, in addition to the current working directory, which is always shared.
# * `volumes_from`: container names or Ids to get volumes from
# * `port`: port opened (option `-p`)
# * `extra_args`: any extra arguments you would like to pass to the `docker run` process (check the actual `docker run` command executed by SoS before using this option)
#
# Because of the different configurations of docker images, use of docker in SoS can be complicated. Please refer to http://vatlab.github.io/SOS/doc/tutorials/SoS_Docker_Guide.html for details.
#
# + [markdown] kernel="SoS"
# ### `docker_image` (deprecated)
# + [markdown] kernel="SoS"
# Option `docker_image='tag'` is now replaced with `container='tag'` or `container='docker://tag'`.
# + [markdown] kernel="SoS"
# ### `default_env`
# + [markdown] kernel="SoS"
# Option `default_env` sets environment variables **if they do not exist in the system**. The value of this option should be a dictionary with string keys and values.
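#
# As a minimal sketch (the variable name `MY_OPTION` is purely illustrative, not a predefined variable), the following action would print `default` unless `MY_OPTION` is already defined in the system environment:
#
# ```
# bash: default_env={'MY_OPTION': 'default'}
#     echo ${MY_OPTION}
# ```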
# + [markdown] kernel="SoS"
# ### `docker_file` (deprecated)
#
# This option allows you to import a docker image from a specified `docker_file`, which can be an archive file (`.tar`, `.tar.gz`, `.tgz`, `.bzip`, `.tar.xz`, `.txz`) or a URL to an archive file (e.g. `http://example.com/exampleimage.tgz`). SoS will use command `docker import` to import the `docker_file`. However, because SoS does not know the repository and tag names of the imported docker file, you will still need to use option `docker_image` to specify the image to use.
# + [markdown] kernel="SoS"
# ### `env`
# + [markdown] kernel="SoS"
# Option `env` sets environment variables **that override system variables defined in `os.environ`**. This option can be used to define `PATH` and other environment variables for the action.
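#
# As an illustrative sketch (the directory `/opt/mytool/bin` and command `mytool` are hypothetical), `env` could be used to prepend a directory to `PATH` for a single action:
#
# ```
# bash: env={'PATH': '/opt/mytool/bin:' + os.environ['PATH']}
#     mytool --version
# ```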
# + [markdown] kernel="SoS"
# ### `input`
# + [markdown] kernel="SoS"
# Parameter `input` specifies the input files that an action needs before it can be executed. However, unlike targets in the `input:` statement of a step, where a missing input target would trigger the execution of an auxiliary step (if needed) to produce it, SoS would yield an error if the input file of an action does not exist.
#
# For example, in the following example, step `20` is executed after step `10` so its `report` action can report the content of `a.txt` produced by step `10`.
# + kernel="SoS"
# %sandbox
# %run
[10]
output: 'a.txt'
bash:
echo 'content of a.txt' > a.txt
[20]
report: input='a.txt'
# + [markdown] kernel="SoS"
# However, in the following example, step `20` is executed as the first step of workflow `default`. The `report` action requires input file `a.txt`, which does not exist, so the step yields an error.
# + kernel="SoS"
# %sandbox --expect-error
# %run
[a: provides='a.txt']
bash:
echo 'content of a.txt' > a.txt
[20]
report: input='a.txt'
# + [markdown] kernel="SoS"
# `a.txt` has to be put into the input statement of step `20` for the auxiliary step to be executed:
# + kernel="SoS"
# %sandbox
# %run
[a: provides='a.txt']
bash:
echo 'content of a.txt' > a.txt
[20]
input: 'a.txt'
report: input=_input[0]
# + [markdown] kernel="SoS"
# Although all actions accept parameter `input` and SoS will always check the existence of the specified input files, the actions themselves may or may not make use of this parameter. Roughly speaking, script-executing actions such as `run`, `bash` and `python` accept this parameter and prepend the content of all input files to the script; report-generation actions `report`, `pandoc` and `RMarkdown` append the content of input files after the specified script; and other actions usually ignore this parameter.
#
# For example, if you have a function that needs to be included in a Python script (more likely multiple scripts), you could define it in a separate file and include it with scripts defined in a `python` action:
# + kernel="SoS"
# %run
# define a function and save to file myfunc.inc
report: output="myfunc.inc"
def myfunc():
print("Hello")
[1]
python: input='myfunc.inc'
myfunc()
# + [markdown] kernel="SoS"
# ### `output`
# + [markdown] kernel="SoS"
# Similar to `input`, parameter `output` defines the output of an action, which can be a single name (or target) or a list of files or targets. SoS would check the existence of the output targets after the completion of the action. For example,
# + kernel="SoS"
# %sandbox --expect-error
# %run
[10]
bash: output='non_existing.txt'
# + [markdown] kernel="SoS"
# In addition to checking the existence of input and output files, specifying `input` and `output` of an action allows SoS to create a signature of the action so that it will not be executed again when it is called with the same input and output files. This is in addition to step-level signatures and can be useful for long-running actions.
#
# For example, suppose a time-consuming `sh` action produces output `test.txt`
# + kernel="SoS"
# %run -s default
[10]
import time, os
time.sleep(2)
sh: input=[], output='test.txt'
touch test.txt
print(os.path.getmtime('test.txt'))
# + [markdown] kernel="SoS"
# Because the action has parameters `input` and `output`, a signature will be created so it will not be re-executed even when the step itself is changed (from `sleep(2)` to `sleep(1)`).
# + kernel="SoS"
# %run -s default
[10]
import time, os
time.sleep(1)
sh: input=[], output='test.txt'
touch test.txt
print(os.path.getmtime('test.txt'))
# + [markdown] kernel="SoS"
# Note that we have to use option `-s default` for our examples because the default signature mode for SoS in Jupyter is `ignore`, so no signatures would be saved or used otherwise.
# + [markdown] kernel="SoS"
# ### `stdout`
# + [markdown] kernel="SoS"
# Option `stdout` is applicable to script-executing actions such as `bash` and `R` and redirects the standard output of the action to a specified file. The value of the option should be a path-like object (`str`, `path`, etc), or `False`. The file will be opened in `append` mode so you will have to remove or truncate the file if it already exists. If `stdout=False`, the output will be suppressed (redirected to `/dev/null` under Linux).
# + [markdown] kernel="SoS"
# For example,
# + kernel="SoS"
!rm -f test.log
sh: stdout='test.log'
ls *.ipynb
# + kernel="SoS"
!head -2 test.log
# + [markdown] kernel="SoS"
# ### `stderr`
# + [markdown] kernel="SoS"
# Option `stderr` is similar to `stdout` but redirects the standard error output of actions. `stderr=False` also suppresses stderr.
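#
# For example, a minimal sketch mirroring the `stdout` example above (the `|| true` keeps the step from failing on the command's non-zero exit code):
#
# ```
# sh: stderr='err.log'
#     ls non_existing_file || true
# ```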
# + [markdown] kernel="SoS"
# ### `tracked`
# + [markdown] kernel="SoS"
# If an action takes a long time to execute and the step in which it resides tends to change (for example, during the development of a workflow step), you might want to keep action-level signatures so that the action can be skipped if it has been executed before.
#
# Action-level signatures are controlled by parameter `tracked`, which can be `None` (no signature), `True` (record signature), `False` (do not record signature), a string (filename), or a list of filenames. When this parameter is `True` or one or more filenames, SoS will
#
# 1. if specified, collect targets specified by parameter `input`
# 2. if specified, collect targets specified by parameter `output`
# 3. if one or more files are specified, collect targets from parameter `tracked`
#
# These files, together with the content of the first parameter (usually a script), will be used to create a signature that allows actions with the same signature to be skipped.
# + [markdown] kernel="SoS"
# For example, suppose a time-consuming `sh` action produces output `test.txt`
# + kernel="SoS"
# %run -s force
[10]
import time, os
time.sleep(2)
sh: output='test.txt', tracked=True
touch test.txt
print(os.path.getmtime('test.txt'))
# + [markdown] kernel="SoS"
# Because of the `tracked=True` parameter, a signature will be created from the script and `output`, and the action will not be re-executed even when the step itself is changed (from `sleep(2)` to `sleep(1)`).
# + kernel="SoS"
# %run -s default
[10]
import time, os
time.sleep(1)
sh: output='test.txt', tracked=True
touch test.txt
print(os.path.getmtime('test.txt'))
# + [markdown] kernel="SoS"
# Note that the signature can only be saved and used with an appropriate signature mode (`force`, `default`, etc).
# + [markdown] kernel="SoS"
# ### `workdir`
# + [markdown] kernel="SoS"
# Option `workdir` changes the current working directory for the action, and changes it back once the action is executed. The directory will be created if it does not exist.
# + kernel="SoS"
bash: workdir='tmp'
touch a.txt
bash:
ls tmp
rm tmp/a.txt
rmdir tmp
# + [markdown] kernel="SoS"
# ## Core Actions
# + [markdown] kernel="SoS"
# Let us start by listing all options for action `run` and compare it with actions `script` and `bash` before we dive into the details:
#
# |action |condition | interpreter (configurable for `script`) | args (configurable) | command |comment|
# |:--|:--|:-|:-|:-|:-|
# |`run`| `windows`| | `{filename:q}` | `{filename:q}` |execute script directly as `.bat` file|
# | | non-windows | `/bin/bash` | `-ev {filename:q}`| `/bin/bash -ev {filename:q}` | execute script by bash, print command and exit with error |
# | | script with shebang line (`#!`) | | `{filename:q}`| `{filename:q}` | execute script directly |
# | | | `/bin/bash` | `{script}` | `/bin/bash` content of script | `script` as arguments of `/bin/bash` |
# |`bash` | | `/bin/bash` | `{filename:q}`| `/bin/bash {filename:q}` | execute script as a bash script |
# |`script`| | | `{filename:q}` | `{filename:q}` | execute script directly |
# | | | `any_interpreter` | `{filename:q}` | `any_interpreter {filename:q}` | execute with specified interpreter |
# | | | `any_interpreter` | `{script}` | `any_interpreter` content of script | execute content of script directly in command line|
#
# Note that
# 1. All actions except `script` have a fixed interpreter, although action `run` uses different interpreters in different situations.
# 2. All actions accept a configurable `args`, which can contain `{filename:q}` and `{script}`, with `filename` being the name of the temporary script file and `script` being the content of the script. In the latter case, the content of the script goes to the command line directly. `args` can of course contain other fixed options.
# 3. If no interpreter is specified, the command consists of only `args`, so either the script file (if `args='{filename:q}'`) or the content of the script (if `args='{script}'`) is executed. SoS will make the script file executable in this case.
# 4. All script-executing actions except `run` and `script` have fixed interpreters.
#
# + [markdown] kernel="SoS"
# ### Action `run`
# + [markdown] kernel="SoS"
# `run` is the most frequently used action in SoS. In most cases, it is similar to action `bash` and uses `bash` to execute the specified script. Under the hood, however, this action is quite different from `bash` because it does not have a default interpreter and behaves differently in different situations.
# + [markdown] kernel="SoS"
# In the simplest case when one or more commands are specified, action `run` assumes the script is a batch script under Windows, and a bash script otherwise.
# + kernel="SoS" output_cache="[{\"name\":\"stdout\",\"output_type\":\"stream\",\"text\":\"A\\n\"}]"
run:
echo "A"
# + [markdown] kernel="SoS" output_cache="[]"
# It is different from a `bash` action in that it exits with an error if any of the commands exits with a non-zero code. That is to say, whereas a `sh` action would print an error message but continue as follows
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"text\":\"B\\n\",\"name\":\"stdout\"},{\"output_type\":\"stream\",\"text\":\"/var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmpp8ur8ynw.sh: line 1: echoo: command not found\\n\",\"name\":\"stderr\"}]"
sh:
echoo "A"
echo "B"
# + [markdown] kernel="SoS" output_cache="[]"
# The `run` action would exit with error
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"text\":\"/<KEY>: line 1: echoo: command not found\\nFailed to process statement 'run(\\\"echoo \\\\\\\\\\\"A\\\\\\\\\\\"\\\\\\\\necho \\\\\\\\\\\"B\\\\\\\\\\\"\\\")\\\\n' (RuntimeError): Failed to execute script (ret=127).\\nPlease use command\\n\\t``/bin/bash \\\\\\n\\t -e \\\\\\n\\t /var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmppwtttdjc/.sos/interactive_0_0_9f062482``\\nunder \\\"/<KEY>\\\" to test it.\\n\",\"name\":\"stderr\"}]"
# %sandbox --expect-error
run:
echoo "A"
echo "B"
# + [markdown] kernel="SoS"
# In other words,
# ```
# run:
# command1
# command2
# command3
# ```
# is equivalent to
#
# ```
# bash:
# command1 && command2 && command3
# ```
# under Linux/MacOS systems.
# + [markdown] kernel="SoS"
# However, if the script starts with a shebang line, this action would execute the script directly. This allows you to execute a script in any language. For example, the following step executes a Python script using action `run`
# + kernel="SoS"
run:
#!/usr/bin/env python
print('This is python')
# + [markdown] kernel="SoS"
# and the following example runs a complete sos script using command `sos-runner`
# + kernel="SoS"
# %run
# the embedded script is not expanded, so its f-strings are evaluated by sos-runner rather than the outer workflow
[sos]
run:
#!/usr/bin/env sos-runner
[10]
print(f"This is {step_name}")
[20]
print(f"This is {step_name}")
# + [markdown] kernel="SoS"
# Note that action `run` would not analyze the shebang line of a script if it is executed in a docker container (with option `container`), in which case the script is always assumed to be a bash script.
# + [markdown] kernel="SoS"
# ### Action `script`
# + [markdown] kernel="SoS"
# Action `script` is the general form of all script-executing actions in SoS. It accepts a script, and parameters `interpreter` (required), `suffix` (if required by the interpreter) and optional `args` (command line arguments). It can be used to execute any script whose interpreter is not directly supported by SoS. For example, the action
#
# ```
# python:
# print('HERE')
# ```
#
# can be executed as
# + kernel="SoS"
script: interpreter='python'
print('HERE')
# + [markdown] kernel="SoS"
# ### Action `sos_run`
# + [markdown] kernel="SoS"
# Action `sos_run(workflow=None, targets=None, shared=[], source=None, args={}, **kwargs)` executes a specified workflow from the current (default) or specified SoS script (`source`). The workflow can be a single workflow, a subworkflow (e.g. `A_-10`), a combined workflow (e.g. `A + B`), or a workflow that is constructed to generate `targets`. The workflow
#
# * Takes `_input` of the parental step as the input of the first step of the subworkflow
# * Takes `args` (a dictionary) and `**kwargs` as parameters as if they are specified from command line
# * Copies variables specified in `shared` (a string or a list of string) to the subworkflow if they exist in the parental namespace
# * Returns variables defined in `shared` to the parental namespace after the completion of the workflow
#
# `sos_run` would be executed in a separate process in batch mode, and would be executed in the same process in interactive mode, so parameter `shared` is only needed for batch execution.
# + [markdown] kernel="SoS"
# The simplest use of action `sos_run` is for the execution of one or more workflows. For example,
# + kernel="SoS"
# %run
[A]
print(step_name)
[B]
print(step_name)
[default]
sos_run('A + B')
# + [markdown] kernel="SoS"
# The subworkflows are executed separately and only take the `_input` of the substep as the `input` of the workflow. For example,
# + kernel="SoS"
# %sandbox
# %run
!touch a.txt b.txt
[process]
print(f"Handling {_input}")
[default]
input: 'a.txt', 'b.txt', group_by=1
sos_run('process')
# + [markdown] kernel="SoS"
# If you would like to send one or more variables to the subworkflow, or return a variable from the execution of a subworkflow, you can specify them with the `shared` parameter. The return part is a bit tricky because you can only return workflow-level variables, which are usually `shared` from a step of the subworkflow. For example,
# + kernel="SoS"
# %sandbox
# %run
[process]
print(f"Working with seed {seed}")
[default]
for seed in range(5):
sos_run('process', seed=seed)
# + kernel="SoS"
# %sandbox
# %run
[process: shared='result']
result = 100
[default]
sos_run('process')
print(f"Result from subworkflow process is {result}")
# + [markdown] kernel="SoS"
# If the subworkflow accepts parameters, they can be specified using keyword arguments or as a dictionary for parameter `args` of the `sos_run` function. The subworkflow would take values from these parameters as if they were passed from the command line.
#
# For example, the following workflow defines parameter `cutoff` with default value 10. When it is executed without command line option, the default value is used.
# + kernel="SoS"
# %sandbox
# %run
[default]
parameter: cutoff=10
print(f"Process with cutoff={cutoff}")
[batch]
for value in range(2, 10, 2):
sos_run('default', cutoff=value)
# + [markdown] kernel="SoS"
# Command line argument could be used to specify a different `cutoff` value:
# + kernel="SoS"
# %sandbox
# %rerun --cutoff 4
# + [markdown] kernel="SoS"
# Now, if we run the `batch` workflow, which calls the `default` workflow with parameter `cutoff`, the `parameter: cutoff=10` statement takes the passed value as if it were specified from command line.
# + kernel="SoS"
# %sandbox
# %rerun batch
# + [markdown] kernel="SoS"
# Note that the parameters could also be specified with parameter `args`,
# + kernel="SoS"
# %sandbox
# %run batch
[default]
parameter: cutoff=10
print(f"Process with cutoff={cutoff}")
[batch]
for value in range(2, 10, 2):
sos_run('default', args={'cutoff': value})
# + [markdown] kernel="SoS"
# although the keyword arguments are usually easier to use.
# + [markdown] kernel="SoS"
# Action `sos_run` cannot be used in `task` (see [Remote Execution](Remote_Execution.html) for details) because tasks are designed to be executed independently of the workflow.
# + [markdown] kernel="SoS"
# ### Action `report`
# + [markdown] kernel="SoS"
# Action `report` writes some content to an output stream. The input can either be a string or the content of one or more files specified by option `input`. The output is determined by parameter `output` and command line option `-r`.
#
# * If `output='filename'`, the content will be written to a file.
# * If `output=obj` and `obj` has a `write` function (e.g. a file handle), the content will be passed to the `write` function
# * If output is unspecified and no filename is specified from option `-r`, the content will be written to standard output.
# * If output is unspecified and a filename is specified with option `-r`, the content will be appended to specified file.
#
# For example, the content of `report` actions is printed to standard output if no output is specified.
# + kernel="SoS"
# %run
[10]
report: expand=True
Running {step_name}
[20]
report: expand=True
Running {step_name}
# + [markdown] kernel="SoS"
# We can specify an output file with option `output`, but the output will be overwritten if multiple actions write to the same file
# + kernel="SoS"
# %sandbox
# %preview report.txt
# %run
[10]
report: output='report.txt', expand=True
Running {step_name}
[20]
report: output='report.txt', expand=True
Running {step_name}
# + [markdown] kernel="SoS"
# Action `report` can also take the content of one or more input files and write them to the output stream, after the script content (if specified). For example, the `report` action in the following example writes the content of `out.txt` to the default report stream (which is the standard output in this case).
# + kernel="SoS"
# %sandbox
# %run
[10]
output: 'out.txt'
run:
# run some command and generate a out.txt
echo "* some result " > out.txt
[20]
report: input='out.txt'
Summary Report:
# + [markdown] kernel="SoS"
# ### Action `bash`
#
# Action `bash(script)` accepts a shell script and executes it using `bash`. Actions `sh`, `csh`, `tcsh`, and `zsh` use the respective shells to execute the provided script.
#
# These actions, as well as all script-executing actions such as `python`, also accept an option `args` that allows you to pass additional arguments to the interpreter. For example
# + kernel="SoS"
run: args='-n {filename:q}'
echo "a"
# + [markdown] kernel="SoS"
# executes the script with command `bash -n` (syntax check only), so command `echo` is not actually executed.
# + [markdown] kernel="SoS"
# ### Action `sh`
# Execute script with a `sh` interpreter
# + [markdown] kernel="SoS"
# ### Action `csh`
# Execute script with a `csh` interpreter
# + [markdown] kernel="SoS"
# ### Action `tcsh`
# Execute script with a `tcsh` interpreter
# + [markdown] kernel="SoS"
# ### Action `zsh`
# Execute script with a `zsh` interpreter
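#
# Each of these shell actions can be thought of as shorthand for the generic `script` action with a fixed interpreter. As an illustrative sketch, the following two forms should be roughly equivalent:
#
# ```
# zsh:
#     echo "hello"
#
# script: interpreter='zsh'
#     echo "hello"
# ```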
# + [markdown] kernel="SoS"
# ### Action `perl`
#
# Action `perl(script)` executes the passed script using the `perl` interpreter.
# + kernel="SoS"
perl:
my $name = "Brian";
print "Hello, $name!\n";
# + [markdown] kernel="SoS"
# ### Action `ruby`
# + [markdown] kernel="SoS"
# Action `ruby(script)` executes the passed script using the `ruby` interpreter.
# + kernel="SoS"
ruby:
a = [ 45, 3, 19, 8 ]
b = [ 'sam', 'max', 56, 98.9, 3, 10, 'jill' ]
print (a + b).join(' '), "\n"
print a[2], " ", b[4], " ", b[-2], "\n"
print a.sort.join(' '), "\n"
a << 57 << 9 << 'phil'
print "A: ", a.join(' '), "\n"
# + [markdown] kernel="SoS"
# ### Action `node`
#
# Action `node(script)` executes the passed script using the `node` (JavaScript) interpreter.
# + kernel="SoS"
node:
var i, a, b, c, max;
max = 1000000000;
var d = Date.now();
for (i = 0; i < max; i++) {
a = 1234 + 5678 + i;
b = 1234 * 5678 + i;
c = 1234 / 2 + i;
}
console.log(Date.now() - d);
# + [markdown] kernel="SoS"
# ### Action `pandoc`
# + [markdown] kernel="SoS"
# Action `pandoc` uses command [pandoc](http://pandoc.org/) to convert specified input to output. The input to this action can be specified via option `script` (usually given in script format) or option `input`.
#
# First, if a script is specified, pandoc assumes it is in markdown format and converts it by default to HTML. For example,
# + kernel="SoS"
pandoc:
# this is header
This is some test, with **emphasis**.
# + [markdown] kernel="SoS"
# You can specify an output with option `output`
# + kernel="SoS"
# %sandbox
# %preview out.html
pandoc: output='out.html'
Itemize
* item 1
* item 2
# + [markdown] kernel="SoS"
# You can convert the input to another file type by using a different output file extension
# + kernel="SoS"
# %sandbox
# %preview out.tex
pandoc: output='out.tex'
Itemize
* item 1
* item 2
# + [markdown] kernel="SoS"
# Or you can add more options to the command line by modifying `args`,
# + kernel="SoS"
# %sandbox
# %preview out.html
pandoc: output='out.html', args='{input:q} --output {output:q} -s'
Itemize
* item 1
* item 2
# + [markdown] kernel="SoS"
# The second usage of the `pandoc` action is to specify one or more input filenames. You have to use the function form of this action as follows
# + kernel="SoS"
# %sandbox
# %preview out.html
[10]
report: output = 'out.md'
Itemize
* item 1
* item 2
[20]
pandoc(input='out.md', output='out.html')
# + [markdown] kernel="SoS"
# If multiple files are specified, the content of these input files will be concatenated. This is very useful for generating a single pandoc output with input from different steps. We will demonstrate this feature in the [Generating Reports](../tutorials/Generating_Reports.html) tutorial.
#
# If both `script` and `input` parameters are specified, the content of the input files would be appended to `script`. So
# + kernel="SoS"
# %sandbox
# %preview out.html
[10]
report: output = 'out10.md'
Itemize
* item 1
* item 2
[20]
report: output= 'out20.md'
enumerated
1. item 1
2. item 2
[30]
pandoc: input=['out10.md', 'out20.md'], output='out.html'
Markdown supports both itemized and enumerated
# + [markdown] kernel="SoS"
# ### Action `docker_build`
#
# Build a docker image from an inline Docker file. The inline version of the action currently does not support adding any file from the local machine because the docker file will be saved to a random directory. You can work around this problem by creating a `Dockerfile` and passing it to the action through option `path`. This action accepts all parameters specified in the [docker-py documentation](http://docker-py.readthedocs.io/en/latest/images.html) because SoS simply passes additional parameters to the `build` function.
#
# For example, the following step builds a docker container for [MISO](http://miso.readthedocs.org/en/fastmiso/) based on anaconda python 2.7.
#
# ```
# [build_1]
# # building miso from a Dockerfile
# docker_build: tag='mdabioinfo/miso:latest'
#
# ############################################################
# # Dockerfile to build MISO container images
# # Based on Anaconda python
# ############################################################
#
# # Set the base image to anaconda Python 2.7 (miso does not support python 3)
# FROM continuumio/anaconda
#
# # File Author / Maintainer
# MAINTAINER <NAME> <<EMAIL>>
#
# # Update the repository sources list
# RUN apt-get update
#
# # Install compiler and python stuff, samtools and git
# RUN apt-get install --yes \
# build-essential \
# gcc-multilib \
# gfortran \
# apt-utils \
# libblas3 \
# liblapack3 \
# libc6 \
# cython \
# samtools \
# libbam-dev \
# bedtools \
# wget \
# zlib1g-dev \
# tar \
# gzip
#
# WORKDIR /usr/local
# RUN pip install misopy
# ```
# + [markdown] kernel="SoS"
# ### Action `download`
# + [markdown] kernel="SoS"
# Action `download(URLs, dest_dir='.', dest_file=None, decompress=False, max_jobs=5)` downloads files from the specified URLs, which can be given as a list of URLs or as a string of tab-, space-, or newline-separated URLs.
#
# * If `dest_file` is specified, only one URL is allowed and the URL can have any form.
# * Otherwise all files will be downloaded to `dest_dir`. Filenames are determined from the URLs, so each URL must end with the filename under which it should be saved.
# * If `decompress` is `True`, `.zip` files, compressed or plain `tar` files (e.g. `.tar.gz`), and `.gz` files will be decompressed to the same directory as the downloaded file.
# * `max_jobs` controls the maximum number of concurrent connections to **each domain** across instances of the `download` action. That is to say, if multiple steps from multiple workflows download files from the same website, at most `max_jobs` connections will be made. This option can therefore be used to throttle downloads to a website.
#
# For example,
#
# ```
# [10]
# GATK_RESOURCE_DIR = '/path/to/resource'
# GATK_URL = 'ftp://gsapubftp-anonymous@ftp.broadinstitute.org/bundle/2.8/hg19/'
#
# download: dest_dir=GATK_RESOURCE_DIR, expand=True
# {GATK_URL}/1000G_omni2.5.hg19.sites.vcf.gz
# {GATK_URL}/1000G_omni2.5.hg19.sites.vcf.gz.md5
# {GATK_URL}/1000G_omni2.5.hg19.sites.vcf.idx.gz
# {GATK_URL}/1000G_omni2.5.hg19.sites.vcf.idx.gz.md5
# ```
#
# download the specified files to `GATK_RESOURCE_DIR`. The `.md5` files will be automatically used to validate the content of the associated files.
#
# SoS automatically saves signatures of downloaded and decompressed files, so the files will not be re-downloaded if the action is called multiple times. You can however still specify the input and output of the step to use step signatures:
#
#
# ```
# [10]
# GATK_RESOURCE_DIR = '/path/to/resource'
# GATK_URL = 'ftp://gsapubftp-anonymous@ftp.broadinstitute.org/bundle/2.8/hg19/'
# GATK_RESOURCE_FILES = '''1000G_omni2.5.hg19.sites.vcf.gz
#                      1000G_omni2.5.hg19.sites.vcf.gz.md5
#                      1000G_omni2.5.hg19.sites.vcf.idx.gz
#                      1000G_omni2.5.hg19.sites.vcf.idx.gz.md5'''.split()
# input: []
# output: [os.path.join(GATK_RESOURCE_DIR, x) for x in GATK_RESOURCE_FILES]
# download([f'{GATK_URL}/{x}' for x in GATK_RESOURCE_FILES], dest_dir=GATK_RESOURCE_DIR)
# ```
#
# Note that the `download` action uses up to 5 processes to download files. You can change this number by adjusting system configuration `sos_download_processes`.
# + [markdown] kernel="SoS"
# ### Action `fail_if`
#
# Action `fail_if(expr, msg='')` raises an exception with message `msg` (and terminates the execution of the workflow if the exception is not caught) if `expr` returns `True`.
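#
# For example, the following sketch (with a hypothetical input file `data.csv`) terminates the workflow when the input file is empty:
#
# ```
# [10]
# input: 'data.csv'
# # stop the whole workflow if the input file has no content
# fail_if(os.path.getsize(_input[0]) == 0, msg='data.csv is empty')
# ```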
# + [markdown] kernel="SoS"
# ### Action `warn_if`
#
# Action `warn_if(expr, msg)` yields a warning message `msg` if `expr` evaluates to true.
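#
# For example, a sketch with a hypothetical `cutoff` parameter:
#
# ```
# [10]
# parameter: cutoff = 0.05
# # emit a warning, but continue execution
# warn_if(cutoff > 0.1, msg='cutoff is unusually large')
# ```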
# + [markdown] kernel="SoS"
# ### Action `stop_if`
#
# Action `stop_if(expr, msg='')` stops the execution of the current step (or current iteration if within input loops specified by parameters `group_by` or `for_each`) and gives a warning message if `msg` is specified. For example,
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"text\":\"['b.txt']\\n\",\"name\":\"stdout\"}]"
# %sandbox
# %run
!touch a.txt
!echo 'something' > b.txt
[10]
input: '*.txt', group_by=1
stop_if(os.path.getsize(_input[0]) == 0)
print(_input)
# + [markdown] kernel="SoS"
# skips `a.txt` because it has size 0.
# + [markdown] kernel="SoS" output_cache="[]"
# A side effect of `stop_if` is that it will clear `_output` of the iteration so that the step `output` consists of only files from non-stopped iterations. For example,
# + kernel="SoS" output_cache="[{\"output_type\":\"stream\",\"text\":\"/<KEY>: line 1: unexpected EOF while looking for matching `\\\"'\\n/<KEY>: line 4: syntax error: unexpected end of file\\nFailed to process statement run(r\\\"\\\"\\\"echo \\\"Generating ${_ou...ut}\\\\n\\\\n\\\"\\\"\\\")\\\\n: Failed to execute script (ret=2). \\nPlease use command\\n /bin/bash /<KEY>/.sos/default_10_1\\nunder /private/var/folders/ys/gnzk0qbx5wbdgm531v82xxljv5yqy8/T/tmp7jyzqr2o to test it.\\n\",\"name\":\"stderr\"}]"
# %sandbox
# %run
[10]
input: for_each={'idx': range(10)}
output: f"{idx}.txt"
stop_if(idx % 2 == 0)
run: expand=True
echo "Generating {_output}"
touch {_output}
[20]
print(f"Output of last step is {input}")
# + [markdown] kernel="SoS"
# ## Core Targets
# + [markdown] kernel="SoS"
# Targets are objects that a SoS step can input, output, or depend on. They are usually files represented by filenames, but can also be other types of targets.
# + [markdown] kernel="SoS"
# ### Target `file_target`
# + [markdown] kernel="SoS"
# Targets of type `file_target` represent files on a file system. The type `file_target` rarely needs to be used explicitly because SoS treats string targets as `file_target`.
#
# SoS uses MD5 signatures to detect changes to the content of files. To reduce the time needed to generate signatures for large files, SoS extracts strips of data from large files to calculate a partial MD5. The resulting signatures are therefore different from the MD5 signatures of complete files calculated by other tools.
#
# A file can be **zapped** by command `sos remove --zap` and still be considered available by SoS. This command removes a file and generates `{file}.zapped` with essential information such as the signature and size of the original file. A step will not be rerun if its input, dependent, or output files are zapped instead of removed. This feature is useful for removing large intermediate files generated during the execution of workflows, while still keeping the complete runtime information of the workflow.
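#
# As a sketch, assuming a workflow produced a large intermediate file `aligned.bam`, it could be zapped with
#
# ```
# sos remove aligned.bam --zap
# ```
#
# which replaces `aligned.bam` with `aligned.bam.zapped` while keeping the signature information needed by the workflow.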
# + [markdown] kernel="SoS"
# ### Target `executable`
# + [markdown] kernel="SoS"
# `executable` targets are commands that should be accessible and executable by SoS. These targets are usually listed in the `depends` section of a SoS step. For example, the following step would stop if the command `some_command` is not found.
# + kernel="SoS"
# %sandbox --expect-error
[10]
input: 'a.txt'
depends: executable('some_command')
sh: expand=True
some_command {input}
# + [markdown] kernel="SoS"
# An `executable` target can also be the output of a step, but installing executables can be tricky because the commands should be installed to a directory on the existing `$PATH` so that they are immediately accessible to SoS. Because SoS automatically adds `~/.sos/bin` to `$PATH` (option `-b`), an environment-neutral way for on-the-fly installation is to install commands to this directory. For example, the following workflow creates an executable command `lls` under `~/.sos/bin` when this executable does not exist.
# + kernel="SoS"
!rm -f ~/.sos/bin/lls
[lls: provides=executable('lls')]
sh:
echo "#!/usr/bin/env bash" > ~/.sos/bin/lls
echo "echo I am lls" >> ~/.sos/bin/lls
chmod +x ~/.sos/bin/lls
[10]
depends: executable('lls')
sh:
lls
# + [markdown] kernel="SoS"
# You can also have finer control over which version of a command is eligible by checking the output of the command. The trick here is to provide a complete command along with one or more version strings that should appear in the output of that command. For example, command `perl --version` is executed in the following example to check whether its output contains the string `5.18`. The step would only be executed if the right version exists.
# + kernel="SoS"
[10]
depends: executable('perl --version', version='5.18')
print('ok')
# + [markdown] kernel="SoS"
# If no version string is provided, SoS will only check the existence of the command and will not actually execute it.
# + [markdown] kernel="SoS"
# ### Target `sos_variable`
#
# `sos_variable(name)` targets represent SoS variables that are created by a SoS step and shared with other steps. These targets can be used to provide information to other steps. For example,
# + kernel="SoS"
# %sandbox
[counts: shared='counts']
input: 'result.txt'
with open(input[0]) as ifile:
counts = int(ifile.read())
[10]
# perform some task and create a file with some statistics
output: 'result.txt'
run:
echo 100 >> result.txt
[100]
depends: sos_variable('counts')
report: expand=True
There are {counts} objects
# + [markdown] kernel="SoS"
# Step `100` needs some information extracted from the output of another step (step `10`). You can either parse the information in step `100` or use another step to provide it. The latter is recommended because the information could be requested by multiple steps. Note that `counts` is an auxiliary step that provides `sos_variable('counts')` through its `shared` section option.
# + [markdown] kernel="SoS"
# ### Target `env_variable`
#
# SoS keeps track of the runtime environment and creates signatures of executed steps so that they do not have to be executed again. Some commands, especially shell scripts, could however behave differently under different environment variables. To make sure a step is re-executed when the environment changes, you should list the variables that affect the output of these commands as dependencies of the step. For example,
#
# ```
# [10]
# depends: env_variable('DEBUG')
# sh:
#     echo DEBUG is set to $DEBUG
# ```
# + [markdown] kernel="SoS"
# ### Target `sos_step`
#
# The `sos_step` target represents, needless to say, a SoS step. This target provides a straightforward method to specify step dependencies. For example,
# + kernel="SoS"
# %run
[init]
print("Initialize")
[10]
depends: sos_step("init")
print(f"I am {step_name}")
# + [markdown] kernel="SoS"
# What is more interesting, however, is that `sos_step('a')` matches steps such as `a_1` and `a_2`, so the step will depend on the execution of the entire workflow `a`.
#
# For example, in the following workflow, step `default` depends on `sos_step('work')`, which triggers a process-oriented workflow `work` with steps `work_1` and `work_2`.
# + kernel="SoS"
# %preview -n test.dot
# %run -d test.dot
[work_1]
# generate result
output: 'result.txt'
sh: expand=True
echo some result > {_output}
[work_2]
# backup result
output: 'result.txt.bak'
sh: expand=True
cp {_input} {_output}
[default]
depends: sos_step("work")
# + [markdown] kernel="SoS"
# This example is similar to the following workflow that uses subworkflow (`sos_run('work')`) but as you can see from the generated DAG, the execution logics of the two are quite different. More specifically, the `sos_step()` target adds a subworkflow to the master DAG, while `sos_run` triggers a separate DAG.
# + kernel="SoS"
# %preview -n test.dot
# %run -d test.dot
[work_1]
# generate result
output: 'result.txt'
sh: expand=True
echo some result > {_output}
[work_2]
# backup result
output: 'result.txt.bak'
sh: expand=True
cp {_input} {_output}
[default]
sos_run("work")
# + [markdown] kernel="SoS"
# ### Target `dynamic`
#
# A `dynamic` target is a target that can only be determined when the step is actually executed.
#
# For example,
# + kernel="SoS"
# %sandbox --expect-error
[10]
output: '*.txt'
sh:
touch a.txt
[20]
print(f'Last output is {input}')
# + [markdown] kernel="SoS"
# The step above fails because the wildcard output `*.txt` cannot be resolved before the step is executed. To address this problem, you should let SoS expand the output files after the completion of the step, using a `dynamic` target.
# + kernel="SoS"
# %sandbox
[10]
output: dynamic('*.txt')
sh:
touch a.txt
[20]
print(f"Last output is {input}")
# + [markdown] kernel="SoS"
# Please refer to chapter [SoS Step](SoS_Step.html) for details of such targets.
# + [markdown] kernel="SoS"
# ### Target `remote`
# + [markdown] kernel="SoS"
# A target that is marked as `remote` and would be instantiated only when it is executed by a task. Please check section [Remote Execution](Remote_Execution.html) for details.
# + [markdown] kernel="SoS"
# ### Target `system_resource`
# + [markdown] kernel="SoS"
# Target `system_resource` checks the available system resources and is available only if the system has enough memory and/or disk space for the workflow step. For example, the following step would generate an error if the system does not have at least `16G` of RAM and `1T` of disk space on the volume of the current project directory.
# + kernel="SoS"
# %run
[10]
depends: system_resource(mem='16G', disk='1T')
run:
echo "some large job"
# + [markdown] kernel="SoS"
# ## Functions and objects
# + [markdown] kernel="SoS"
# ### Function `get_output`
# + [markdown] kernel="SoS"
# Function `get_output(cmd)` returns the output of command (decoded in `UTF-8`), which is a shortcut for `subprocess.check_output(cmd, shell=True).decode()`.
# + kernel="SoS"
get_output('which ls')
# + [markdown] kernel="SoS"
# This function also accepts two options `show_command=False`, and `prompt='$ '` that can be useful in case you would like to present the command that produce the output. For example,
# + kernel="SoS"
print(get_output('which ls', show_command=True))
# + [markdown] kernel="SoS"
# ### Function `expand_pattern`
#
# Function `expand_pattern` expands a string into multiple strings using items of variables enclosed in `{ }`. For example,
#
# ```python
# output: expand_pattern('{a}_{b}.txt')
# ```
#
# is equivalent to
#
# ```python
# output: ['{x}_{y}.txt' for x,y in zip(a, b)]
# ```
#
# if `a` and `b` are sequences of the same length. For example,
# + kernel="SoS"
name = ['Bob', 'John']
salary = [200, 300]
expand_pattern("{name}'s salary is {salary}")
# + [markdown] kernel="SoS"
# The sequences should have the same length; otherwise an error will be raised:
# + kernel="SoS"
# %sandbox --expect-error
salary = [200]
expand_pattern("{name}'s salary is {salary}")
# + [markdown] kernel="SoS"
# An exception is made for variables of simple non-sequence types, which are repeated in all expanded items:
# + kernel="SoS"
salary = 200
expand_pattern("{name}'s salary is {salary}")
# + [markdown] kernel="SoS"
# ### Object `logger`
# + [markdown] kernel="SoS"
# The SoS `logger` object is a `logging` object used by SoS to produce various outputs. You can use this object to write error, warning, info, debug, and trace messages to the terminal. For example,
# + kernel="SoS"
# %run -v2
[0]
logger.info(f"I am at {step_name}")
# + [markdown] kernel="SoS"
# The output of `logger` is controlled by the logging level; for example, the above message would not be printed at `-v1` (warning):
# + kernel="SoS"
# %run -v1
[0]
logger.info(f"I am at {step_name}")
# + [markdown] kernel="SoS"
# ## Language JavaScript
# + [markdown] kernel="SoS"
# ### Action `node`
# + [markdown] kernel="SoS"
# Action `node(script)` executes the specified JavaScript script with command `node`.
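#
# For example, a minimal sketch (assuming `node` is available on `$PATH`):
#
# ```
# node:
#     console.log('Hello from ' + process.version)
# ```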
# + [markdown] kernel="SoS"
# ## Language Julia
# + [markdown] kernel="SoS"
# ### Action `julia`
# + [markdown] kernel="SoS"
# Action `julia(script)` executes the specified Julia script with command `julia`.
# + [markdown] kernel="SoS"
# ## Language Matlab
# + [markdown] kernel="SoS"
# ### Action `matlab`
# + [markdown] kernel="SoS"
# Action `matlab(script)` executes the specified MATLAB script with default options `-nojvm -nodisplay -nosplash -r {filename:n};quit`. These options start MATLAB without the desktop, execute the script (passed by filename without the `.m` extension), and quit MATLAB when the script is completed.
# + [markdown] kernel="SoS"
# ## Language `Python`
# + [markdown] kernel="SoS"
# ### Action `python`
# + [markdown] kernel="SoS"
# Actions `python(script)` and `python3(script)` accept a Python script and execute it with `python` or `python3`, respectively.
#
# Because SoS can include Python statements directly in a SoS script, it is important to note that embedded Python
# statements are interpreted by SoS, whereas the `python` and `python3` actions are executed in separate processes without
# access to the SoS environment.
#
# For example, the following SoS step execute some python statements **within** SoS with direct access to SoS variables
# such as `input`, and with `result` writing directly to the SoS environment,
#
# ```python
# [10]
# for filename in input:
# with open(filename) as data:
# result = filename + '.res'
# ....
# ```
#
# while
#
# ```python
# [10]
# input: group_by=1
#
# python: expand=True
#     with open({_input:r}) as data:
#         result = {_input:r} + '.res'
#         ...
# ```
#
# composes a Python script for each input file and calls separate Python interpreters to execute them. Whereas
# the Python statement in the first example will always be executed, the statements in `python` will not be executed
# in `inspect` mode.
# + [markdown] kernel="SoS"
# ### Action `python2`
#
# Action `python2` is similar to `python` but it tries to use interpreter `python2` (or `python2.7` on some systems) before `python`, which could be python 3. Note that this action does not actually test the version of interpreter so it would use python 3 if this is the only available version.
# + [markdown] kernel="SoS"
# ### Action `python3`
#
# Action `python3` is similar to `python` but it tries to use interpreter `python3` (version 3 of python) before `python`, which could be python 2. Note that this action does not actually test the version of interpreter so it would use python 2 if this is the only available version.
# + [markdown] kernel="SoS"
# ### Target `Py_Module`
# + [markdown] kernel="SoS"
# This target is usually used in the `depends` statement of a SoS step to specify a required Python module. For example,
# + kernel="SoS"
depends: Py_Module('tabulate')
from tabulate import tabulate
table = [["Sun",696000,1989100000],["Earth",6371,5973.6],
["Moon",1737,73.5],["Mars",3390,641.85]]
print(tabulate(table))
# + [markdown] kernel="SoS"
# If a module is not available, with `autoinstall=True` SoS will try to execute command `pip install` to install it, which might or might not succeed depending on your system configuration. For example,
#
# ```
# Py_Module('numpy', autoinstall=True)
# ```
#
# To specify version,
#
# ```
# Py_Module('numpy', version=">=1.14.0")
# ```
#
# Or a shorthand syntax,
#
# ```
# Py_Module('numpy>1.14.0')
# ```
# + [markdown] kernel="SoS"
# ## Language `R`
# + [markdown] kernel="SoS"
# ### Action `R`
# + [markdown] kernel="SoS"
# Action `R(script)` executes the passed script using the `Rscript` command.
# + kernel="SoS"
R:
D <- data.frame(x=c(1,2,3,1), y=c(7,19,2,2))
# Sort on x
indexes <- order(D$x)
D[indexes,]
# + [markdown] kernel="SoS"
# ### Action `Rmarkdown`
# + [markdown] kernel="SoS"
# Action `Rmarkdown` shares the same user interface as action `pandoc`. The major difference is that it uses `R`'s `rmarkdown` package to render R-flavored Markdown.
#
# For example, the `Rmarkdown` action in the following example collects input files `A_10.md` and `A_20.md` and uses `R`'s `rmarkdown` package to convert them to `out.html`.
# + kernel="SoS"
# %sandbox
[A_10]
report: output="A_10.md"
step_10
[A_20]
report: output="A_20.md"
Itemize
* item 1
* item 2
[A_30]
Rmarkdown(input=['A_10.md', 'A_20.md'], output='out.html')
# + [markdown] kernel="SoS"
# ### Target `R_library`
# + [markdown] kernel="SoS"
# The `R_library` target represents an R library. If the library is not available, SoS will try to install it from [CRAN](https://cran.r-project.org/), [bioconductor](https://www.bioconductor.org/), or [github](https://github.com/). A github package name should be formatted as `pkg@path`. A typical usage of this target would be
# + kernel="SoS"
# %sandbox
[10]
output: 'test.jpg'
depends: R_library('ggplot2')
R: expand=True
library(ggplot2)
jpeg({output!r})
qplot(Sepal.Length, Petal.Length, data = iris, color = Species)
dev.off()
# + [markdown] kernel="sos"
# `R_library` can also be used to check for specific versions of packages. For example:
#
# ```
# R_library('edgeR', '3.12.0')
# ```
# will result in a warning if the installed edgeR version is not 3.12.0. You can specify multiple acceptable versions,
#
# ```
# R_library('edgeR', ['3.12.0', '3.12.1'])
# ```
#
# certain version or newer,
# ```
# R_library('edgeR', '>=3.12.0')
# ```
#
# certain version or older
# ```
# R_library('ggplot2', '<1.0.0')
# ```
#
# The default R library repository is `http://cran.us.r-project.org`. It is possible to customize the repository from which an R library is installed, for example:
#
# ```
# R_library('Rmosek', repos = "http://download.mosek.com/R/7")
# ```
#
# To install from a github repository:
#
# ```
# R_library('varbvs@pcarbo/varbvs/varbvs-R')
# ```
# where `varbvs` is the package name and `pcarbo/varbvs/varbvs-R` corresponds to sub-directory `varbvs-R` in repository `https://github.com/pcarbo/varbvs`.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.1
# language: julia
# name: julia-1.5
# ---
# ---
# title: Working with Data Files
# ---
# **Originally Contributed by**: <NAME>
# In many cases we might need to read data available in an external file rather than type it into Julia ourselves.
# This tutorial is concerned with reading tabular data into Julia and using it for a JuMP model.
# We'll be reading data using the DataFrames.jl package and some other packages specific to file types.
# Note: There are multiple ways to read the same kind of data into Julia.
# However, this tutorial only focuses on DataFrames.jl as
# it provides the ecosystem to work with most of the required file types in a straightforward manner.
# ### DataFrames.jl
# The DataFrames package provides a set of tools for working with tabular data.
# It is available through the Julia package system.
using Pkg
Pkg.add("DataFrames")
# ### What is a DataFrame?
# A DataFrame is a data structure like a table or spreadsheet. You can use it for storing and exploring a set of related data values.
# Think of it as a smarter array for holding tabular data.
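# As a minimal sketch (this toy data is not part of the tutorial's files), a DataFrame can also be constructed directly from named column vectors:
using DataFrames
# Two columns, given as keyword arguments
sketch_df = DataFrame(name = ["Alice", "Bob"], age = [25, 30])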
# ## Reading Tabular Data into a DataFrame
# We will begin by reading data from different file formats into a DataFrame object.
# The example files that we will be reading are present in the data folder.
# ### Excel Sheets
# Excel files can be read using the XLSX.jl package.
Pkg.add("XLSX")
# To read an Excel file into a DataFrame, we use the following Julia code.
# The first argument to the `XLSX.readtable` function is the file to be read and the second argument is the name of the sheet.
using DataFrames
using XLSX
data_dir = joinpath(@__DIR__, "data")
excel_df = DataFrame(XLSX.readtable(joinpath(data_dir, "SalesData.xlsx"), "SalesOrders")...)
# ### CSV Files
# CSV and other delimited text files can be read using the CSV.jl package.
Pkg.add("CSV")
# To read a CSV file into a DataFrame, we use the `CSV.read` function.
using CSV
csv_df = CSV.read(joinpath(data_dir, "StarWars.csv"), DataFrame)
# ### Other Delimited Files
# We can also use the CSV.jl package to read any other delimited text file format.
# By default, CSV.File will try to detect a file's delimiter from the first 10 lines of the file.
# Candidate delimiters include `','`, `'\t'`, `' '`, `'|'`, `';'`, and `':'`. If it can't auto-detect the delimiter, it will assume `','`.
# Let's take the example of space separated data.
ss_df = CSV.read(joinpath(data_dir, "Cereal.txt"), DataFrame)
# We can also specify the delimiter by passing the `delim` argument.
delim_df = CSV.read(joinpath(data_dir, "Soccer.txt"), DataFrame, delim = "::")
# Note that, by default, the columns of the DataFrame are read-only. If we wish to make changes to the data read, we pass the `copycols = true` argument in the function call.
ss_df = CSV.read(joinpath(data_dir, "Cereal.txt"), DataFrame, copycols = true)
# ## Working with DataFrames
# Now that we have read the required data into a DataFrame, let us look at some basic operations we can perform on it.
# ### Querying Basic Information
# The `size` function gets us the dimensions of the DataFrame.
size(ss_df)
# We can also use the `nrow` and `ncol` functions to get the number of rows and columns respectively.
nrow(ss_df), ncol(ss_df)
# The `describe` function gives basic summary statistics of data in a DataFrame.
describe(ss_df)
# Names of every column can be obtained by the `names` function.
names(ss_df)
# Corresponding data types are obtained using the broadcasted `eltype` function.
eltype.(eachcol(ss_df))
# ### Accessing the Data
# Similar to regular arrays, we use numerical indexing to access elements of a DataFrame.
csv_df[1,1]
# The following are different ways to access a column.
csv_df[!, 1]
csv_df[!, :Name]
csv_df.Name
csv_df[:, 1] # note that this creates a copy
# The following are different ways to access a row.
csv_df[1:1, :]
csv_df[1, :] # this produces a DataFrameRow
# We can change the values just as we normally assign values.
# Broadcast a scalar to a range of rows.
excel_df[1:3, 5] .= 1
# Assign a vector to an equal-length range.
excel_df[4:6, 5] = [4, 5, 6]
# Subset of the DataFrame to another data frame of matching size.
excel_df[1:2, 6:7] = DataFrame([-2 -2; -2 -2], [Symbol("Unit Cost"), :Total])
excel_df
# There are a lot more things which can be done with a DataFrame.
# See the [docs](https://juliadata.github.io/DataFrames.jl/stable/) for more information.
# ## A Complete Modelling Example - Passport Problem
# Let's now apply what we have learnt to solve a real modelling problem.
# The [Passport Index Dataset](https://github.com/ilyankou/passport-index-dataset)
# lists travel visa requirements for 199 countries, in .csv format.
# Our task is to find out the minimum number of passports required to visit all countries.
# In this dataset, the first column represents a passport (=from) and each remaining column represents a foreign country (=to).
# The values in each cell are as follows:
# * 3 = visa-free travel
# * 2 = eTA is required
# * 1 = visa can be obtained on arrival
# * 0 = visa is required
# * -1 is for all instances where passport and destination are the same
# Our task is to find out the minimum number of passports needed to visit every country without requiring a visa.
# Thus, the values we are interested in are -1 and 3. Let us modify the data in the following manner:
# +
passportdata = CSV.read(joinpath(data_dir, "passport-index-matrix.csv"), DataFrame, copycols = true)
for i in 1:nrow(passportdata)
    for j in 2:ncol(passportdata)
        if passportdata[i, j] == -1 || passportdata[i, j] == 3
            passportdata[i, j] = 1
        else
            passportdata[i, j] = 0
        end
    end
end
# -
# The values in the cells now represent:
# * 1 = no visa required for travel
# * 0 = visa required for travel
# Let us associate each passport with a binary decision variable $pass_{cntr}$ for each country.
# We want to minimize the sum $\sum pass_{cntr}$ over all countries.
# Since we wish to visit all the countries, for every country,
# we should own at least one passport that lets us travel to that country visa free.
# For one destination, this can be mathematically represented as $\sum_{cntr \in world} passportdata_{cntr,dest} \cdot pass_{cntr} \geq 1$.
# Thus, we can represent this problem using the following model:
# $$
# \begin{align*}
# \min && \sum_{cntr \in World} pass_{cntr} \\
# \text{s.t.} && \sum_{cntr \in World} passportdata_{cntr,dest} \cdot pass_{cntr} \geq 1 && \forall dest \in World \\
# && pass_{cntr} \in \{0,1\} && \forall cntr \in World
# \end{align*}
# $$
# We'll now solve the problem using JuMP.
# +
using JuMP, GLPK
# Finding number of countries
n = ncol(passportdata) - 1 # Subtract 1 for column representing country of passport
model = Model(GLPK.Optimizer)
@variable(model, pass[1:n], Bin)
@constraint(model, [j = 2:(n + 1)], sum(passportdata[i, j] * pass[i] for i in 1:n) >= 1)
@objective(model, Min, sum(pass))
optimize!(model)
println("Minimum number of passports needed: ", objective_value(model))
# +
countryindex = findall(value.(pass) .== 1 )
print("Countries: ")
for i in countryindex
    print(names(passportdata)[i + 1], " ")
end
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing Regression Uncertainty
#
# In this notebook we visualize regression uncertainties for subspace inference.
# +
import sys
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import torch
import torch.utils.data
from torch.nn import functional as F
from visualization import plot_predictive
import seaborn as sns
from subspace_inference.posteriors import SWAG
from subspace_inference import models, losses, utils
from subspace_inference.models import MLP
from tqdm import tqdm
import os
torch.backends.cudnn.benchmark = True
torch.manual_seed(1)
torch.cuda.manual_seed(1)
np.random.seed(1)
# %load_ext autoreload
# %autoreload 2
# -
# ## Data and Model
# Let's load the dataset we used in the paper.
# +
def features(x):
    return np.hstack([x[:, None] / 2.0, (x[:, None] / 2.0) ** 2])
data = np.load("ckpts/data.npy")
x, y = data[:, 0], data[:, 1]
y = y[:, None]
f = features(x)
dataset = torch.utils.data.TensorDataset(torch.from_numpy(f.astype(np.float32)),
torch.from_numpy(y.astype(np.float32)))
loader = torch.utils.data.DataLoader(dataset, batch_size=50, shuffle=True)
sns.set_style('darkgrid')
palette = sns.color_palette('colorblind')
blue = sns.color_palette()[0]
red = sns.color_palette()[3]
plt.figure(figsize=(9., 7.))
plt.plot(data[:, 0], data[:, 1], "o", color=red, alpha=0.7, markeredgewidth=1., markeredgecolor="k")
plt.title("Data", fontsize=16)
# -
# Model definition
model_cfg = models.ToyRegNet
criterion = losses.GaussianLikelihood(noise_var=1.)
model = model_cfg.base(*model_cfg.args, **model_cfg.kwargs)
def train(model, loader, optimizer, criterion, lr_init=1e-2, epochs=3000,
          swag_model=None, swag=False, swag_start=2000, swag_freq=50, swag_lr=1e-3,
          print_freq=100):
    for epoch in range(epochs):
        # Piecewise learning-rate schedule: constant, linear decay, then constant
        t = (epoch + 1) / swag_start if swag else (epoch + 1) / epochs
        lr_ratio = swag_lr / lr_init if swag else 0.05
        if t <= 0.5:
            factor = 1.0
        elif t <= 0.9:
            factor = 1.0 - (1.0 - lr_ratio) * (t - 0.5) / 0.4
        else:
            factor = lr_ratio
        lr = factor * lr_init
        utils.adjust_learning_rate(optimizer, lr)
        train_res = utils.train_epoch(loader, model, criterion, optimizer, cuda=False, regression=True)
        # Collect SWAG snapshots every `swag_freq` epochs after `swag_start`
        if swag and epoch > swag_start and (epoch - swag_start) % swag_freq == 0:
            swag_model.collect_model(model)
        if epoch % print_freq == 0 or epoch == epochs - 1:
            print('Epoch %d. LR: %g. Loss: %.4f' % (epoch, lr, train_res['loss']))
# ## Pre-Training SGD Solutions
# We train 5 SGD solutions and save checkpoints for each of them. You can skip this section and just load the provided checkpoints later.
# +
wd = 0.
lr_init = 1e-2
for i in range(5):
    print("Training Model", i)
    model = model_cfg.base(*model_cfg.args, **model_cfg.kwargs)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_init, momentum=0.95, weight_decay=wd)
    train(model, loader, optimizer, criterion, lr_init, 3000, print_freq=1000)
    # torch.save(model.state_dict(), "ckpts/sgd_checkpoint" + str(i + 1) + ".pt")
# -
# Let's look at the solutions we get: the five different trajectories are shown in lighter blue lines.
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(5):
    model.load_state_dict(torch.load("ckpts/sgd_checkpoint" + str(i + 1) + ".pt"))
    out = model(inp).detach().numpy().T
    trajectories.append(out)
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="SGD Solutions")
# -
# Independently-trained SGD solutions are diverse far away from the data. In the following we aim to obtain similar behavior with local Bayesian model averaging within low-dimensional subspaces.
# ## Pre-Training SWAG Solutions and PCA Subspaces
# We pre-train two different SWAG solutions. You can also skip this section and just load the provided checkpoints later.
# +
wd = 0.
lr_init = 1e-2
for i in range(2):
    print("Training Model", i)
    swag_model = SWAG(model_cfg.base, subspace_type="pca", *model_cfg.args, **model_cfg.kwargs,
                      subspace_kwargs={"max_rank": 10, "pca_rank": 10})
    model = model_cfg.base(*model_cfg.args, **model_cfg.kwargs)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_init, momentum=0.95, weight_decay=wd)
    train(model, loader, optimizer, criterion, lr_init, 3000, print_freq=1000,
          swag=True, swag_model=swag_model, swag_start=2000, swag_freq=10, swag_lr=1e-2)
    # utils.save_checkpoint(
    #     dir="ckpts",
    #     epoch=3000,
    #     name="swag_checkpoint"+str(i),
    #     state_dict=swag_model.state_dict(),
    # )
# -
# Let's visualize one of our solutions.
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
swag_model = SWAG(model_cfg.base, subspace_type="pca", *model_cfg.args, **model_cfg.kwargs,
subspace_kwargs={"max_rank": 10, "pca_rank": 10})
swag_model.load_state_dict(torch.load("ckpts/swag_checkpoint0-3000.pt")["state_dict"])
trajectories = []
for i in range(100):
    swag_model.sample(scale=10.)
    out = swag_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="SWAG Solution")
# -
# ## Pre-Training Curve Subspace
#
# Note that for this to work you need to add the repo https://github.com/timgaripov/dnn-mode-connectivity to your Python path. You can skip this section as well and just load the provided checkpoints later.
#
# We first load the two endpoints of the curve as SWA solutions.
# +
import curves
swag_model = SWAG(model_cfg.base, subspace_type="pca", *model_cfg.args, **model_cfg.kwargs,
subspace_kwargs={"max_rank": 10, "pca_rank": 10})
model1 = model_cfg.base(*model_cfg.args, **model_cfg.kwargs)
swag_model.load_state_dict(torch.load("ckpts/swag_checkpoint0-3000.pt")["state_dict"])
w_swa = swag_model.get_space()[0]
utils.set_weights(model1, w_swa)
model2 = model_cfg.base(*model_cfg.args, **model_cfg.kwargs)
swag_model.load_state_dict(torch.load("ckpts/swag_checkpoint1-3000.pt")["state_dict"])
w_swa = swag_model.get_space()[0]
utils.set_weights(model2, w_swa)
# -
# Initialize the curve model.
architecture = model_cfg.curve
curve = curves.Bezier
model = curves.CurveNet(curve, architecture, 3, fix_end=True, fix_start=True,
architecture_kwargs=model_cfg.kwargs)
model.import_base_parameters(model1, 0)
model.import_base_parameters(model2, 2)
model.init_linear()
# +
wd = 0.
lr_init = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=lr_init, momentum=0.95, weight_decay=wd)
train(model, loader, optimizer, criterion, lr_init, 6000, print_freq=1000)
# torch.save(model.state_dict(), "curve.pt")
# -
# Let's visualize the samples corresponding to different points on the curve.
# +
model.load_state_dict(torch.load("ckpts/curve.pt"))
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for t in np.linspace(0, 1, 20):
out = model(inp, t)
trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="Solutions along the Curve")
# -
# We now save the parameters of the curve as a numpy array.
# +
# W = []
# for i in range(3):
#     model.export_base_parameters(model1, i)
#     W.append(np.concatenate([param.data.numpy().reshape(-1) for param in model1.parameters()]))
# W = np.vstack(W)
# # Replace the middle control point with the Bezier midpoint B(0.5) = 0.25*w0 + 0.5*w1 + 0.25*w2.
# W[1, :] = 0.25*(W[0, :] + W[2, :]) + 0.5 * W[1, :]
# np.save("ckpts/curve_parameters.npy", W)
# -
# ## Subspace Inference
#
# Now we will showcase subspace inference with different inference methods and subspaces. We first define functions that load each of the 3 subspaces: _Random_, _PCA_ and _Curve_. The subspaces are constructed from the checkpoints we pre-trained above.
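# All three constructions below fit one template: the network weights are an affine function of a low-dimensional vector $z$, and Bayesian inference is then carried out over $z$ instead of the full weight space (a sketch of the idea; the symbols are ours, not from the code):

```latex
w(z) \;=\; \hat{w} + P^{\top} z, \qquad z \in \mathbb{R}^{k}, \quad k \ll \dim(w),
\qquad p\!\left(z \mid \mathcal{D}\right) \;\propto\; p\!\left(\mathcal{D} \mid w(z)\right)\, p(z).
```

# In this notation $\hat{w}$ corresponds to the `mean` and the rows of $P$ to the `cov_factor` passed to `SubspaceModel`.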
# +
from subspace_inference.posteriors.proj_model import SubspaceModel
from subspace_inference.posteriors.ess import EllipticalSliceSampling
from subspace_inference.posteriors.vi_model import VIModel, ELBO
import math
def get_rand_space(seed=1):
    swag_model = SWAG(model_cfg.base, subspace_type="pca", *model_cfg.args, **model_cfg.kwargs,
                      subspace_kwargs={"max_rank": 10, "pca_rank": 10})
    swag_model.load_state_dict(torch.load("ckpts/swag_checkpoint0-3000.pt")["state_dict"])
    mean, _, cov_factor = swag_model.get_space()
    # Replace the PCA directions with random unit-norm directions of the same shape.
    torch.manual_seed(seed)
    cov_factor = torch.randn_like(cov_factor)
    cov_factor /= torch.norm(cov_factor, dim=1)[:, None]
    subspace = SubspaceModel(mean, cov_factor)
    return subspace


def get_pca_space():
    swag_model = SWAG(model_cfg.base, subspace_type="pca", *model_cfg.args, **model_cfg.kwargs,
                      subspace_kwargs={"max_rank": 10, "pca_rank": 10})
    swag_model.load_state_dict(torch.load("ckpts/swag_checkpoint0-3000.pt")["state_dict"])
    mean, _, cov_factor = swag_model.get_space()
    subspace = SubspaceModel(mean, cov_factor)
    return subspace


def get_curve_space():
    w = np.load("ckpts/curve_parameters.npy")
    mean = np.mean(w, axis=0)
    # Build an orthogonal basis for the plane spanned by the three curve
    # control points with one Gram-Schmidt step.
    u = w[2] - w[0]
    du = np.linalg.norm(u)
    v = w[1] - w[0]
    v -= u / du * np.sum(u / du * v)  # remove the component of v along u
    dv = np.linalg.norm(v)
    print(du, dv)
    u /= math.sqrt(3.0)
    v /= 1.5
    cov_factor = np.vstack((u[None, :], v[None, :]))
    subspace = SubspaceModel(torch.FloatTensor(mean), torch.FloatTensor(cov_factor))
    return subspace
# -
# Now, let's run ESS in each of the subspaces.
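# Before running it, here is what a single elliptical slice sampling update looks like in isolation: a minimal, self-contained sketch of the algorithm of Murray et al. (2010) on a toy 2-D Gaussian target. The toy target and all names here are ours, not from the `subspace_inference` package.

```python
import numpy as np

def elliptical_slice_step(f, log_lik, chol_sigma, rng):
    # Auxiliary draw from the N(0, Sigma) prior.
    nu = chol_sigma @ rng.standard_normal(f.shape)
    # Log-likelihood threshold defining the slice.
    log_u = log_lik(f) + np.log(rng.uniform())
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        # Points on the ellipse through f and nu remain distributed
        # according to the Gaussian prior.
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_u:
            return f_new
        # Shrink the angle bracket toward theta = 0 (the current state).
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy target: N(0, I) prior times a Gaussian likelihood centred at (2, 2);
# the resulting posterior is N((1, 1), 0.5 * I).
rng = np.random.default_rng(0)
log_lik = lambda f: -0.5 * float(np.sum((f - 2.0) ** 2))
f = np.zeros(2)
samples = []
for _ in range(2000):
    f = elliptical_slice_step(f, log_lik, np.eye(2), rng)
    samples.append(f)
print(np.mean(samples, axis=0))
```

# The update never rejects: it keeps shrinking the bracket until an acceptable angle is found, which is why ESS needs no step-size tuning.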
#
# ### ESS, Random Subspace
# +
subspace = get_rand_space()
init_sigma = 1.
prior_sigma = 1.
criterion = losses.GaussianLikelihood(noise_var=.05)
temperature = .15
ess_model = EllipticalSliceSampling(
    base=model_cfg.base,
    subspace=subspace,
    var=None,
    loader=loader,
    criterion=criterion,
    num_samples=1000,
    use_cuda=False,
    *model_cfg.args,
    **model_cfg.kwargs
)
ess_model.fit(temperature=temperature, scale=prior_sigma**2);
# -
# We can now plot the predictive distribution.
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(100):
    ess_model.sample()
    out = ess_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="ESS, Random Subspace")
# -
# Notice how the uncertainty doesn't increase much between the data clusters.
# ### ESS, PCA Subspace
# +
subspace = get_pca_space()
init_sigma = 1.
prior_sigma = 5.
criterion = losses.GaussianLikelihood(noise_var=.05)
temperature = 1.5
ess_model = EllipticalSliceSampling(
    base=model_cfg.base,
    subspace=subspace,
    var=None,
    loader=loader,
    criterion=criterion,
    num_samples=1000,
    use_cuda=False,
    *model_cfg.args,
    **model_cfg.kwargs
)
ess_model.fit(temperature=temperature, scale=prior_sigma**2);
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(100):
    ess_model.sample()
    out = ess_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="ESS, PCA Subspace")
# -
# In the PCA subspace, on the other hand, the uncertainty increases substantially between the middle and right clusters of data.
# ### ESS, Curve Subspace
# +
subspace = get_curve_space()
prior_sigma = 1.
criterion = losses.GaussianLikelihood(noise_var=.05)
temperature = 0.75
ess_model = EllipticalSliceSampling(
    base=model_cfg.base,
    subspace=subspace,
    var=None,
    loader=loader,
    criterion=criterion,
    num_samples=1000,
    use_cuda=False,
    *model_cfg.args,
    **model_cfg.kwargs
)
ess_model.fit(temperature=temperature, scale=prior_sigma**2);
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(1000):
    ess_model.sample()
    out = ess_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="ESS, Curve Subspace")
# -
# Finally, in the richest curve subspace, uncertainty is low near the data, but gets high as we move away from the data.
# ## VI, PCA Subspace
#
# We can also run other inference methods in the subspaces. For example, here is variational inference in the PCA subspace.
# +
subspace = get_pca_space()
init_sigma = 1.
prior_sigma = 5.
criterion = losses.GaussianLikelihood(noise_var=.05)
temperature = 1.
vi_model = VIModel(
    subspace=subspace,
    init_inv_softplus_sigma=math.log(math.exp(init_sigma) - 1.0),
    prior_log_sigma=math.log(prior_sigma),
    base=model_cfg.base,
    *model_cfg.args,
    **model_cfg.kwargs
)
elbo = ELBO(criterion, len(loader.dataset), temperature=temperature)
optimizer = torch.optim.Adam([param for param in vi_model.parameters()], lr=.1)
for epoch in range(2000):
    train_res = utils.train_epoch(loader, vi_model, elbo, optimizer, regression=True, cuda=False)
    sigma = torch.nn.functional.softplus(vi_model.inv_softplus_sigma.detach().cpu())
    if epoch % 1000 == 0 or epoch == 1999:
        print(epoch, train_res)
    if epoch == 1000:
        utils.adjust_learning_rate(optimizer, 0.01)
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(100):
    out = vi_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="VI, PCA Subspace")
# -
# ## NUTS, Curve Subspace
#
# And here we apply the No-U-Turn Sampler (NUTS) in the curve subspace.
# +
import pyro
import pyro.distributions as dist
from pyro.infer.mcmc import HMC, MCMC, NUTS
from subspace_inference.posteriors.pyro import PyroModel
from subspace_inference.utils import extract_parameters, set_weights
# +
subspace = get_curve_space()
prior_sigma = 1.
# criterion = losses.GaussianLikelihood(noise_var=.05)
temperature = 0.5
pyro_model = PyroModel(
    base=model_cfg.base,
    subspace=subspace,
    prior_log_sigma=math.log(prior_sigma),
    likelihood_given_outputs=lambda x: dist.Normal(x, np.sqrt(.05 / temperature)),
    *model_cfg.args,
    **model_cfg.kwargs
)
# -
nuts_kernel = NUTS(pyro_model.model, step_size=1.)
x_, y_ = loader.dataset.tensors
mcmc_run = MCMC(nuts_kernel, num_samples=200, warmup_steps=100).run(x_, y_)
samples = torch.cat(list(mcmc_run.marginal(sites="t").support(flatten=True).values()), dim=-1)
# samples
# +
z = np.linspace(-10, 10, 100)
inp = torch.from_numpy(features(z).astype(np.float32))
trajectories = []
for i in range(200):
    pyro_model.t.set_(samples[i, :])
    out = pyro_model(inp)
    trajectories.append(out.detach().numpy().ravel())
trajectories = np.vstack(trajectories)
plot_predictive(data, trajectories, z, title="NUTS, Curve Subspace")
# -
# We can verify that samples indeed concentrate around a curve in the subspace by plotting them.
sns.set_style("whitegrid")
plt.plot(samples.numpy()[:, 0], samples.numpy()[:, 1], "mo")
plt.title("Samples in the subspace", fontsize=16)
| experiments/synthetic_regression/visualizing_uncertainty.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fictional Army - Filtering and Sorting
# ### Introduction:
#
# This exercise was inspired by this [page](http://chrisalbon.com/python/)
#
# Special thanks to: https://github.com/chrisalbon for sharing the dataset and materials.
#
# ### Step 1. Import the necessary libraries
# ### Step 2. This is the data given as a dictionary
# ### Step 3. Create a dataframe and assign it to a variable called army.
#
# #### Don't forget to include the column names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, older versions of pandas ordered the columns alphabetically (modern versions preserve the dictionary's insertion order).
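# As a quick illustration, passing `columns=` pins the column order explicitly. A minimal sketch with made-up values (not the exercise data):

```python
import pandas as pd

# Tiny stand-in dictionary (names and values are for illustration only).
raw = {"regiment": ["Nighthawks", "Dragoons"],
       "company": ["1st", "2nd"],
       "deaths": [523, 52]}

# The columns= argument fixes the column order instead of relying on
# the dictionary's key order.
army = pd.DataFrame(raw, columns=["regiment", "company", "deaths"])
print(list(army.columns))
```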
# ### Step 4. Set the 'origin' column as the index of the dataframe
# ### Step 5. Print only the column veterans
# ### Step 6. Print the columns 'veterans' and 'deaths'
# ### Step 7. Print the name of all the columns.
# ### Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska
# ### Step 9. Select the rows 3 to 7 and the columns 3 to 6
# ### Step 10. Select every row after the fourth row
# ### Step 11. Select every row up to the 4th row
# ### Step 12. Select the 3rd column up to the 7th column
# ### Step 13. Select rows where df.deaths is greater than 50
# ### Step 14. Select rows where df.deaths is greater than 500 or less than 50
# ### Step 15. Select all the regiments not named "Dragoons"
# ### Step 16. Select the rows called Texas and Arizona
# ### Step 17. Select the third cell in the row named Arizona
# ### Step 18. Select the third cell down in the column named deaths
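# As a quick illustration of the filtering and label-based selection steps above, here is a minimal sketch on a tiny stand-in frame (the column names and values are assumed, not the exercise data):

```python
import pandas as pd

# A miniature stand-in for `army` with a label index like 'origin'.
army = pd.DataFrame(
    {"regiment": ["Nighthawks", "Dragoons", "Scouts"],
     "deaths": [523, 52, 616],
     "veterans": [1, 5, 31]},
    index=["Arizona", "Texas", "Maine"])

print(army[army["deaths"] > 50])                              # boolean filter (Step 13)
print(army[(army["deaths"] > 500) | (army["deaths"] < 50)])   # combined condition (Step 14)
print(army.loc[["Texas", "Arizona"]])                         # rows by label (Step 16)
print(army.iloc[2:, 1:])                                      # rows/columns by position
```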
| 02_Filtering_&_Sorting/Fictional Army/Solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## DO PHOTOMETRY!
# +
# python 2/3 compatibility
from __future__ import print_function
# numerical python
import numpy as np
# file management tools
import glob
import os
# good module for timing tests
import time
# plotting stuff
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# ability to read/write fits files
from astropy.io import fits
# fancy image combination technique
from astropy.stats import sigma_clip
# median absolute deviation: for photometry
from astropy.stats import mad_std
# photometric utilities
from photutils import DAOStarFinder,aperture_photometry, CircularAperture, CircularAnnulus, Background2D, MedianBackground
# periodograms
from astropy.stats import LombScargle
from regions import read_ds9, write_ds9 ## download regions with: pip install regions
from astropy.wcs import WCS
import numpy.ma as ma
import warnings
import pandas as pd
warnings.filterwarnings("ignore") #ignores warnings
np.set_printoptions(suppress=True) #prints all numbers as floats and not exponentials
# -
from DifferentialPhotometry_functions import *
# ## Initial setup:
#
# You will need to define the data directory where your images exist, a region file that you have created for the target members in your field, and a fits image with WCS information.
#
# Also, define a signal-to-noise threshold for detrending the photometry. If detrending fails, lower the threshold.
# +
# For your own data, modify as needed
# name of field
target = "praesepe"
filt = 'V'
#location of data
datadir = '/Volumes/ARCTURUS/WIYN09/SmallStack_' + target + '/' + filt + '/'
#location to save final .npz save file
savedir = '/Volumes/ARCTURUS/WIYN09/SmallStack_' + target + '/' + filt + '/'
#grab data
im = glob.glob(datadir + '*N*.fits')
#image with WCS information
wcs_image = '/Volumes/ARCTURUS/WIYN09/RegionFiles/' + target+ '/' + target + '_WCS.fits'
#optional region file
wcs_region = '/Volumes/ARCTURUS/WIYN09/RegionFiles/' + target + '/' + target + '_VZ_sm.reg'
# -
#S/N threshold for detrending
sn_thresh=3
print(f"Number of images found: {len(im)}")
if len(im) == 0:
    print('No images found: check datadir and im')
else:
    print(np.array(im))
aprad=20. # aperture radius
skybuff=14. # sky annulus inner radius
skywidth=18. # sky annulus outer radius
# sensitivities for star finding
nsigma=4.5 # detection threshold in sigma
FWHM=5. # pixels
# ## Start Photometry
## do starfind on one of the images.
xpos, ypos, nstars = StarFind(im[2], FWHM, nsigma)
#find ra, dec coordinates of stars
hdr_wcs = fits.getheader(wcs_image)
w = construct_astrometry(hdr_wcs)
ra, dec = w.wcs_pix2world(xpos, ypos,1)
# +
# Note that you will need to update the "timekey" keyword to pick the right header keyword from AIJ!
times, Photometry_initial = doPhotometry(im, xpos, ypos,aprad, skybuff, skywidth)
# -
ePhotometry = doPhotometryError(im,xpos, ypos, aprad, skybuff, skywidth, Photometry_initial, manual=True, xboxcorner=2000, yboxcorner=2000, boxsize=200)
# +
#find ra, dec coordinates of stars
#using region file
memberlist = wcs_region
#manual inputting 1 star -- use x and y coordinates
# ra_in, dec_in = w.wcs_pix2world([2402],[2076],1)
# memberlist = (ra_in, dec_in)
#determine index, ra, dec of target stars from the DAOstarfinder catalogue.
idx, RA, DEC = target_list(memberlist, ra, dec)
# -
print('indices of targets in starfinder catalogue: {}'.format(idx) )
print('number of target stars found: {}'.format(len(idx)))
#detrend photometry and plot
Photometry, cPhotometry = detrend(idx, Photometry_initial, ePhotometry, nstars, sn_thresh)
plotPhotometry(times,cPhotometry)
# determine most accurate comparison stars
most_accurate = findComparisonStars(Photometry, cPhotometry)  # optionally pass comp_num= to set the number of comparison stars
#run differential photometry on all stars
dPhotometry, edPhotometry, tePhotometry = runDifferentialPhotometry(Photometry_initial, ePhotometry, nstars, most_accurate)
# +
#pull out differential photometry for target stars and save to .npz file
tar_ra, tar_dec, tar_xpos, tar_ypos, time, flux, fluxerr = diffPhot_IndividualStars(savedir, idx, ra, dec, xpos, ypos, dPhotometry, edPhotometry, tePhotometry,times, target, filt, wcs_image, most_accurate)
# +
#open data
data = np.load(savedir + 'differentialPhot_field' + target + filt + '.npz')
#read column headers
data.files
#access data with: data['flux']
# -
# # Plot the lightcurve here!
#
| DifferentialPhotometry_main_student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="9veUEV0CfmHX"
# ##### Copyright 2020 The TensorFlow Hub Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# + cellView="both" id="BlCInyRifxHS"
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# + [markdown] id="_LRMeRxCfzC4"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/boundless"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# <td>
# <a href="https://tfhub.dev/s?q=google%2Fboundless"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
# </td>
# </table>
# + [markdown] id="QOjczJJ4gWHS"
# # Boundless Colab
#
# Welcome to the Boundless model Colab! This notebook will take you through the steps of running the model on images and visualizing the results.
#
# ## Overview
#
# Boundless is a model for image extrapolation. This model takes an image, internally masks a portion of it ([1/2](https://tfhub.dev/google/boundless/half/1), [1/4](https://tfhub.dev/google/boundless/quarter/1), [3/4](https://tfhub.dev/google/boundless/three_quarter/1)) and completes the masked part. For more details refer to [Boundless: Generative Adversarial Networks for Image Extension](https://arxiv.org/pdf/1908.07007.pdf) or the model documentation on TensorFlow Hub.
# + [markdown] id="hDKbpAEZf8Lt"
# ## Imports and Setup
#
# Let's start with the base imports.
# + id="xJMFtTqPr7lf"
import tensorflow as tf
import tensorflow_hub as hub
from io import BytesIO
from PIL import Image as PilImage
import numpy as np
from matplotlib import pyplot as plt
from six.moves.urllib.request import urlopen
# + [markdown] id="pigUDIXtciQO"
# ## Reading image for input
#
# Let's create a utility method that loads an image and formats it for the model (257x257x3). This method also crops the image to a square to avoid distortion, and it works with both local images and images from the internet.
# + id="KTEVPgXH6rtV"
def read_image(filename):
    fd = None
    if filename.startswith('http'):
        fd = urlopen(filename)
    else:
        fd = tf.io.gfile.GFile(filename, 'rb')
    pil_image = PilImage.open(fd)
    width, height = pil_image.size
    # crop to make the image square
    pil_image = pil_image.crop((0, 0, height, height))
    pil_image = pil_image.resize((257, 257), PilImage.ANTIALIAS)
    image_unscaled = np.array(pil_image)
    image_np = np.expand_dims(
        image_unscaled.astype(np.float32) / 255., axis=0)
    return image_np
# + [markdown] id="lonrLxuKcsL0"
# ## Visualization method
#
# We will also create a visualization method to show the original image side by side with the masked version and the "filled" version, both generated by the model.
# + id="j7AkoMFG7r-O"
def visualize_output_comparison(img_original, img_masked, img_filled):
    plt.figure(figsize=(24, 12))
    plt.subplot(131)
    plt.imshow(np.squeeze(img_original))
    plt.title("Original", fontsize=24)
    plt.axis('off')
    plt.subplot(132)
    plt.imshow(np.squeeze(img_masked))
    plt.title("Masked", fontsize=24)
    plt.axis('off')
    plt.subplot(133)
    plt.imshow(np.squeeze(img_filled))
    plt.title("Generated", fontsize=24)
    plt.axis('off')
    plt.show()
# + [markdown] id="8rwaCWmxdJGH"
# ## Loading an Image
#
# We will load a sample image, but feel free to upload your own image to the Colab and try it. Remember that the model has some limitations regarding images of humans.
# + id="92w-Jfbm60XA"
wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Nusfjord_road%2C_2010_09.jpg/800px-Nusfjord_road%2C_2010_09.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/Beech_forest_M%C3%A1tra_in_winter.jpg/640px-Beech_forest_M%C3%A1tra_in_winter.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Marmolada_Sunset.jpg/640px-Marmolada_Sunset.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Aegina_sunset.jpg/640px-Aegina_sunset.jpg"
input_img = read_image(wikimedia)
# + [markdown] id="4lIkmZL_dtyX"
# ## Selecting a model from TensorFlow Hub
#
# On TensorFlow Hub we have 3 versions of the Boundless model: Half, Quarter and Three Quarters.
# In the following cell you can choose any of them and try it on your image. If you want to try another one, just choose it and execute the following cells.
# + id="B3myNctEQ5GE"
#@title Model Selection { display-mode: "form" }
model_name = 'Boundless Quarter' # @param ['Boundless Half', 'Boundless Quarter', 'Boundless Three Quarters']
model_handle_map = {
    'Boundless Half': 'https://tfhub.dev/google/boundless/half/1',
    'Boundless Quarter': 'https://tfhub.dev/google/boundless/quarter/1',
    'Boundless Three Quarters': 'https://tfhub.dev/google/boundless/three_quarter/1'
}
model_handle = model_handle_map[model_name]
# + [markdown] id="aSJFeNNSeOn8"
# Now that we've chosen the model we want, let's load it from TensorFlow Hub.
#
# **Note**: You can point your browser to the model handle to read the model's documentation.
# + id="0IDKMNyYSWsj"
print("Loading model {} ({})".format(model_name, model_handle))
model = hub.load(model_handle)
# + [markdown] id="L4G7CPOaeuQb"
# ## Doing Inference
#
# The Boundless model has two outputs:
#
# *   The input image with a mask applied
# *   The masked image with the extrapolation that completes it
#
# We can use these two images to show a comparison visualization.
# + id="W7uCAuKxSd-M"
result = model.signatures['default'](tf.constant(input_img))
generated_image = result['default']
masked_image = result['masked_image']
visualize_output_comparison(input_img, masked_image, generated_image)
| site/en-snapshot/hub/tutorials/boundless.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Track II
import pickle
from sklearn.model_selection import train_test_split
DX = pickle.load(open("data/DX.p", "rb"))
DY = pickle.load(open("data/DY.p", "rb"))
DZ = pickle.load(open("data/DZ.p", "rb"))
result = {}
# +
# random forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score
temp = []
# y_part
clf = RandomForestClassifier(n_estimators=200, max_depth=15, random_state=0)
y_pred = cross_val_predict(clf, DX, DY, cv = 10)
temp.append(accuracy_score(DY, y_pred))
# z_part
clf = RandomForestClassifier(n_estimators=200, max_depth=15, random_state=0)
z_pred = cross_val_predict(clf, DX, DZ, cv = 10)
temp.append(accuracy_score(DZ, z_pred))
result.update({'Random Forest': temp})
# +
# SVC
from sklearn.svm import SVC
temp = []
# y_part
clf = SVC(gamma='auto')
y_pred = cross_val_predict(clf, DX, DY, cv = 10)
temp.append(accuracy_score(DY, y_pred))
# z_part
clf = SVC(gamma = 'auto')
z_pred = cross_val_predict(clf, DX, DZ, cv = 10)
temp.append(accuracy_score(DZ, z_pred))
result.update({'SVC':temp})
# +
# K neighbor
from sklearn.neighbors import KNeighborsClassifier
neighbor_nums = [3, 5, 10, 15, 20]
for num in neighbor_nums:
    temp = []
    clf = KNeighborsClassifier(n_neighbors=num)
    y_pred = cross_val_predict(clf, DX, DY, cv=10)
    temp.append(accuracy_score(DY, y_pred))
    clf = KNeighborsClassifier(n_neighbors=num)
    z_pred = cross_val_predict(clf, DX, DZ, cv=10)
    temp.append(accuracy_score(DZ, z_pred))
    string = "{} Neighbor".format(num)
    result.update({string: temp})
print(result)
# -
import numpy as np
import pandas as pd
df = pd.DataFrame.from_dict(result)
idx = {0: 'Y', 1: 'Z'}
df_res = df.rename(index={0: "y", 1: "z"})
df_res
| code/classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from sklearn.datasets import load_diabetes
# Load the diabetes dataset.
dataset = load_diabetes()
# Inspect the number of samples and the feature dimensionality.
dataset.data.shape
# +
from sklearn.model_selection import train_test_split
# Split the data into training and test sets with a 6:4 ratio.
X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, train_size=0.6, random_state=2022)
# Inspect the training set's sample count and feature dimensionality.
X_train.shape
# +
from sklearn.preprocessing import StandardScaler
# Initialize the feature standardizer.
ss = StandardScaler()
# Standardize the training features.
X_train = ss.fit_transform(X_train)
# Standardize the test features.
X_test = ss.transform(X_test)
# +
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# Initialize a linear regression model without parameter regularization.
lr = LinearRegression()
# Fit the regressor on the training data.
lr.fit(X_train, y_train)
# Predict on the test set with the trained model.
y_predict = lr.predict(X_test)
print('MSE of the scikit-learn linear regressor (no regularization) on the diabetes test set: %.2f.' % (mean_squared_error(y_test, y_predict)))
# +
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
# Initialize a linear regression model with L2 parameter regularization (ridge).
lr = Ridge()
# Fit the regressor on the training data.
lr.fit(X_train, y_train)
# Predict on the test set with the trained model.
y_predict = lr.predict(X_test)
print('MSE of the scikit-learn linear regressor (L2 regularization) on the diabetes test set: %.2f.' % (mean_squared_error(y_test, y_predict)))
# +
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
# Initialize a linear regression model with L1 parameter regularization (lasso).
lr = Lasso()
# Fit the regressor on the training data.
lr.fit(X_train, y_train)
# Predict on the test set with the trained model.
y_predict = lr.predict(X_test)
print('MSE of the scikit-learn linear regressor (L1 regularization) on the diabetes test set: %.2f.' % (mean_squared_error(y_test, y_predict)))
# +
from sklearn.preprocessing import PolynomialFeatures
# Initialize the polynomial feature generator.
pf = PolynomialFeatures(degree=2)
# Expand the training features with degree-2 polynomial terms.
X_train = pf.fit_transform(X_train)
# Expand the test features the same way.
X_test = pf.transform(X_test)
# Inspect the expanded training set's sample count and feature dimensionality.
X_train.shape
# +
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
# Initialize a linear regression model with L1 parameter regularization (lasso).
lr = Lasso()
# Fit the regressor on the expanded training data.
lr.fit(X_train, y_train)
# Predict on the expanded test set with the trained model.
y_predict = lr.predict(X_test)
print('MSE of the scikit-learn linear regressor (L1 regularization) on the expanded diabetes test set: %.2f.' % (mean_squared_error(y_test, y_predict)))
# -
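# The jump in dimensionality after the degree-2 expansion can be checked analytically: `PolynomialFeatures(degree=d)` on n input features produces C(n + d, d) output columns (including the bias column), so the 10 diabetes features expand to 66. A minimal sketch of this count:

```python
from math import comb

# Number of monomials of total degree <= d in n variables,
# which is exactly the column count of PolynomialFeatures(degree=d)
# with its default bias column included.
def poly_feature_count(n, d):
    return comb(n + d, d)

print(poly_feature_count(10, 2))
```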
| Chapter_5/Section_5.5.3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# In this machine learning model we are going to predict a binary outcome.
# We use the cars hardware specifications dataset and predict each car's gear system - Automatic (1) or Manual (0), the AM column - from several independent variables.
# -
# # Importing Libraries
# +
# Importing libraries for exploratory Data Analysis & Data Visualization
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
# %matplotlib inline
# -
# importing Data
cars = pd.read_csv("cars.csv")
cars.head()
#checking data set shape
print(cars.shape)
cars.info()
# # Exploratory Data Analysis & Visualization
#checking for null values in data set
cars.isnull().sum()
sb.countplot(x='AM',data=cars,palette='RdBu_r')
# +
sb.countplot(x='AM',hue='Gear',data=cars,palette='rainbow')
# -
# # Training & Testing Split
# +
# The car has an Automatic (1) or Manual (0) gear system - AM
# Use the predictor variables / features / independent variables x1, x2, x3, ... => X
y = cars.AM
X = cars.loc[:,['MPG','HP','Wt']]
# -
# training and testing model selection
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y, test_size=0.2,random_state=0)
# Importing machine learning algorithm
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train,y_train)
y_predict = model.predict(X_test)
y_predict
y_test
# # Model Evaluation
# Model evaluation
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
accuracy_score(y_test,y_predict)
print(classification_report(y_test,y_predict))
confusion_matrix(y_test,y_predict)
5/7  # manual cross-check of the accuracy score above (5 correct predictions out of 7 test samples)
| 2. Logistics Regression/Logistic Regression Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sqlalchemy import create_engine
# + [markdown] slideshow={"slide_type": "-"}
# ### Store CSV into DataFrame
# -
csv_file = "../Resources/customer_data.csv"
customer_data_df = pd.read_csv(csv_file)
customer_data_df.head()
# ### Create new data with select columns
new_customer_data_df = customer_data_df[['id', 'first_name', 'last_name']].copy()
new_customer_data_df.head()
# ### Store JSON data into a DataFrame
json_file = "../Resources/customer_location.json"
customer_location_df = pd.read_json(json_file)
customer_location_df.head()
# ### Clean DataFrame
new_customer_location_df = customer_location_df[["id", "address", "us_state"]].copy()
new_customer_location_df.head()
# ### Connect to local database
rds_connection_string = "<insert username>:<insert password>@127.0.0.1/customer_db"
engine = create_engine(f'mysql://{rds_connection_string}')
# ### Check for tables
engine.table_names()
# ### Use pandas to load csv converted DataFrame into database
new_customer_data_df.to_sql(name='customer_name', con=engine, if_exists='append', index=False)
# ### Use pandas to load json converted DataFrame into database
new_customer_location_df.to_sql(name='customer_location', con=engine, if_exists='append', index=False)
# ### Confirm data has been added by querying the customer_name table
# * NOTE: can also check using a GUI client such as MySQL Workbench (pgAdmin is the PostgreSQL equivalent)
pd.read_sql_query('select * from customer_name', con=engine).head()
# ### Confirm data has been added by querying the customer_location table
pd.read_sql_query('select * from customer_location', con=engine).head()
| christine/Project 2 Examples/ETL-examples/01-example_ETL_Pandas/demo/.ipynb_checkpoints/data_etl-checkpoint.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// # Numerical Stability and Initialization
//
// :label:`sec_numerical_stability`
//
//
//
// Thus far, every model that we have implemented
// required that we initialize its parameters
// according to some pre-specified distribution.
// Until now, we took the initialization scheme for granted,
// glossing over the details of how these choices are made.
// You might have even gotten the impression that these choices
// are not especially important.
// To the contrary, the choice of initialization scheme
// plays a significant role in neural network learning,
// and it can be crucial for maintaining numerical stability.
// Moreover, these choices can be tied up in interesting ways
// with the choice of the nonlinear activation function.
// Which function we choose and how we initialize parameters
// can determine how quickly our optimization algorithm converges.
// Poor choices here can cause us to encounter
// exploding or vanishing gradients while training.
// In this section, we delve into these topics in greater detail
// and discuss heuristics that you will find useful
// throughout your career in deep learning.
//
//
// ## Vanishing and Exploding Gradients
//
// Consider a deep network with $L$ layers,
// input $\mathbf{x}$ and output $\mathbf{o}$.
// With each layer $l$ defined by a transformation $f_l$
// parameterized by weights $\mathbf{W}_l$
// our network can be expressed as:
//
// $$\mathbf{h}^{l+1} = f_l (\mathbf{h}^l) \text{ and thus } \mathbf{o} = f_L \circ \ldots \circ f_1(\mathbf{x}).$$
//
// If all activations and inputs are vectors,
// we can write the gradient of $\mathbf{o}$ with respect to
// any set of parameters $\mathbf{W}_l$ as follows:
//
// $$\partial_{\mathbf{W}_l} \mathbf{o} = \underbrace{\partial_{\mathbf{h}^{L-1}} \mathbf{h}^L}_{=: \mathbf{M}_L} \cdot \ldots \cdot \underbrace{\partial_{\mathbf{h}^{l}} \mathbf{h}^{l+1}}_{=: \mathbf{M}_l} \cdot \underbrace{\partial_{\mathbf{W}_l} \mathbf{h}^l}_{=: \mathbf{v}_l}.$$
//
// In other words, this gradient is
// the product of $L-l$ matrices
// $\mathbf{M}_L \cdot \ldots \cdot \mathbf{M}_l$
// and the gradient vector $\mathbf{v}_l$.
// Thus we are susceptible to the same
// problems of numerical underflow that often crop up
// when multiplying together too many probabilities.
// When dealing with probabilities, a common trick is to
// switch into log-space, i.e., shifting
// pressure from the mantissa to the exponent
// of the numerical representation.
// Unfortunately, our problem above is more serious:
// initially the matrices $\mathbf{M}_l$ may have a wide variety of eigenvalues.
// They might be small or large, and
// their product might be *very large* or *very small*.
//
// The risks posed by unstable gradients
// go beyond numerical representation.
// Gradients of unpredictable magnitude
// also threaten the stability of our optimization algorithms.
// We may be facing parameter updates that are either
// (i) excessively large, destroying our model
// (the *exploding* gradient problem);
// or (ii) excessively small
// (the *vanishing gradient problem*),
// rendering learning impossible as parameters
// hardly move on each update.
//
//
// ### Vanishing Gradients
//
// One frequent culprit causing the vanishing gradient problem
// is the choice of the activation function $\sigma$
// that is appended following each layer's linear operations.
// Historically, the sigmoid function
// $1/(1 + \exp(-x))$ (introduced in :numref:`sec_mlp`)
// was popular because it resembles a thresholding function.
// Since early artificial neural networks were inspired
// by biological neural networks,
// the idea of neurons that fire either *fully* or *not at all*
// (like biological neurons) seemed appealing.
// Let us take a closer look at the sigmoid
// to see why it can cause vanishing gradients.
// +
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
// %maven ai.djl:api:0.7.0-SNAPSHOT
// %maven ai.djl:model-zoo:0.7.0-SNAPSHOT
// %maven ai.djl:basicdataset:0.7.0-SNAPSHOT
// %maven org.slf4j:slf4j-api:1.7.26
// %maven org.slf4j:slf4j-simple:1.7.26
// %maven ai.djl.mxnet:mxnet-engine:0.7.0-SNAPSHOT
// %maven ai.djl.mxnet:mxnet-native-auto:1.7.0-a
// -
// %%loadFromPOM
<dependency>
<groupId>tech.tablesaw</groupId>
<artifactId>tablesaw-jsplot</artifactId>
<version>0.30.4</version>
</dependency>
// %load ../utils/plot-utils.ipynb
// %load ../utils/DataPoints.java
// %load ../utils/Training.java
// +
import java.nio.file.*;
import ai.djl.*;
import ai.djl.engine.Engine;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.nn.Activation;
import ai.djl.ndarray.types.Shape;
import ai.djl.training.GradientCollector;
import org.apache.commons.lang3.ArrayUtils;
import tech.tablesaw.api.*;
import tech.tablesaw.plotly.api.*;
import tech.tablesaw.plotly.components.*;
import tech.tablesaw.plotly.Plot;
import tech.tablesaw.plotly.components.Figure;
// +
NDManager manager = NDManager.newBaseManager();
NDArray x = manager.arange(-8.0f, 8.0f, 0.1f);
x.attachGradient();
NDArray y = null;
try(GradientCollector gc = Engine.getInstance().newGradientCollector()){
y = Activation.sigmoid(x);
gc.backward(y);
}
NDArray res = x.getGradient();
int xLength = (int) x.size();
int yLength = (int) y.size();
float[] X = new float[xLength];
float[] Y = new float[yLength];
float[] Z = new float[yLength];
X = x.toFloatArray();
Y = y.toFloatArray();
Z = res.toFloatArray();
String[] groups = new String[xLength*2];
Arrays.fill(groups, 0, xLength, "sigmoid");
Arrays.fill(groups, xLength, xLength * 2, "gradient");
// The second column holds both the sigmoid values and its gradient, stacked and keyed by "groups"
// (the original column name "grad of relu" was a leftover mislabel; this cell plots the sigmoid).
Table data = Table.create("Data").addColumns(
    FloatColumn.create("X", ArrayUtils.addAll(X, X)),
    FloatColumn.create("value", ArrayUtils.addAll(Y, Z)),
    StringColumn.create("groups", groups)
);
render(LinePlot.create("", data, "X", "value", "groups"), "text/html");
// -
// As you can see, the sigmoid's gradient vanishes
// both when its inputs are large and when they are small.
// Moreover, when backpropagating through many layers,
// unless we are in the Goldilocks zone, where
// the inputs to many of the sigmoids are close to zero,
// the gradients of the overall product may vanish.
// When our network boasts many layers,
// unless we are careful, the gradient
// will likely be cut off at *some* layer.
// Indeed, this problem used to plague deep network training.
// Consequently, ReLUs, which are more stable
// (but less neurally plausible),
// have emerged as the default choice for practitioners.
//
//
// ### Exploding Gradients
//
// The opposite problem, when gradients explode,
// can be similarly vexing.
// To illustrate this a bit better,
// we draw $100$ Gaussian random matrices
// and multiply them with some initial matrix.
// For the scale that we picked
// (the choice of the variance $\sigma^2=1$),
// the matrix product explodes.
// When this happens due to the initialization
// of a deep network, we have no chance of getting
// a gradient descent optimizer to converge.
// + attributes={"classes": [], "id": "", "n": "5"}
NDArray M = manager.randomNormal(new Shape(4,4));
System.out.println("A single matrix: " + M);
for(int i=0; i < 100; i++){
M = M.dot(manager.randomNormal(new Shape(4,4)));
}
System.out.println("after multiplying 100 matrices: " + M);
// -
// ### Symmetry
//
// Another problem in deep network design
// is the symmetry inherent in their parametrization.
// Assume that we have a deep network
// with one hidden layer and two units, say $h_1$ and $h_2$.
// In this case, we could permute the weights $\mathbf{W}_1$
// of the first layer and likewise permute
// the weights of the output layer
// to obtain the same function.
// There is nothing special differentiating
// the first hidden unit vs the second hidden unit.
// In other words, we have permutation symmetry
// among the hidden units of each layer.
//
// This is more than just a theoretical nuisance.
// Imagine what would happen if we initialized
// all of the parameters of some layer
// as $\mathbf{W}_l = c$ for some constant $c$.
// In this case, the gradients
// for all dimensions are identical:
// thus not only would each unit take the same value,
// but it would receive the same update.
// Stochastic gradient descent would
// never break the symmetry on its own
// and we might never be able to realize
// the network's expressive power.
// The hidden layer would behave
// as if it had only a single unit.
// Note that while SGD would not break this symmetry,
// dropout regularization would!
//
//
// ## Parameter Initialization
//
// One way of addressing---or at least mitigating---the
// issues raised above is through careful initialization.
// Additional care during optimization
// and suitable regularization can further enhance stability.
//
//
// ### Default Initialization
//
// In the previous sections,
// we used `manager.randomNormal()`
// to initialize the values of our weights.
// MXNet will use the default random initialization method,
// sampling each weight parameter from
// the uniform distribution $U[-0.07, 0.07]$
// and setting the bias parameters to $0$.
// Both choices tend to work well in practice
// for moderate problem sizes.
//
//
// ### Xavier Initialization
//
// Let us look at the scale distribution of
// the activations of the hidden units $h_{i}$ for some layer.
// They are given by
//
// $$h_{i} = \sum_{j=1}^{n_\mathrm{in}} W_{ij} x_j.$$
//
// The weights $W_{ij}$ are all drawn
// independently from the same distribution.
// Furthermore, let us assume that this distribution
// has zero mean and variance $\sigma^2$
// (this does not mean that the distribution has to be Gaussian,
// just that the mean and variance need to exist).
// For now, let us assume that the inputs to layer $x_j$
// also have zero mean and variance $\gamma^2$
// and that they are independent of $\mathbf{W}$.
// In this case, we can compute the mean and variance of $h_i$ as follows:
//
// $$
// \begin{aligned}
// E[h_i] & = \sum_{j=1}^{n_\mathrm{in}} E[W_{ij} x_j] = 0, \\
// E[h_i^2] & = \sum_{j=1}^{n_\mathrm{in}} E[W^2_{ij} x^2_j] \\
// & = \sum_{j=1}^{n_\mathrm{in}} E[W^2_{ij}] E[x^2_j] \\
// & = n_\mathrm{in} \sigma^2 \gamma^2.
// \end{aligned}
// $$
//
// One way to keep the variance fixed
// is to set $n_\mathrm{in} \sigma^2 = 1$.
// Now consider backpropagation.
// There we face a similar problem,
// albeit with gradients being propagated from the top layers.
// That is, instead of $\mathbf{W} \mathbf{x}$,
// we need to deal with $\mathbf{W}^\top \mathbf{g}$,
// where $\mathbf{g}$ is the incoming gradient from the layer above.
// Using the same reasoning as for forward propagation,
// we see that the gradients' variance can blow up
// unless $n_\mathrm{out} \sigma^2 = 1$.
// This leaves us in a dilemma:
// we cannot possibly satisfy both conditions simultaneously.
// Instead, we simply try to satisfy:
//
// $$
// \begin{aligned}
// \frac{1}{2} (n_\mathrm{in} + n_\mathrm{out}) \sigma^2 = 1 \text{ or equivalently }
// \sigma = \sqrt{\frac{2}{n_\mathrm{in} + n_\mathrm{out}}}.
// \end{aligned}
// $$
//
// This is the reasoning underlying the now-standard
// and practically beneficial *Xavier* initialization,
// named for its creator :cite:`Glorot.Bengio.2010`.
// Typically, the Xavier initialization
// samples weights from a Gaussian distribution
// with zero mean and variance
// $\sigma^2 = 2/(n_\mathrm{in} + n_\mathrm{out})$.
// We can also adapt Xavier's intuition to
// choose the variance when sampling weights
// from a uniform distribution.
// Note the distribution $U[-a, a]$ has variance $a^2/3$.
// Plugging $a^2/3$ into our condition on $\sigma^2$
// yields the suggestion to initialize according to
// $U\left[-\sqrt{6/(n_\mathrm{in} + n_\mathrm{out})}, \sqrt{6/(n_\mathrm{in} + n_\mathrm{out})}\right]$.
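//
// Spelling out that last step: since $U[-a, a]$ has variance $a^2/3$, matching the Xavier condition $\sigma^2 = 2/(n_\mathrm{in} + n_\mathrm{out})$ gives
//
// $$\frac{a^2}{3} = \frac{2}{n_\mathrm{in} + n_\mathrm{out}}
// \quad\Longrightarrow\quad
// a = \sqrt{\frac{6}{n_\mathrm{in} + n_\mathrm{out}}}.$$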
//
// ### Beyond
//
// The reasoning above barely scratches the surface
// of modern approaches to parameter initialization.
// In fact, MXNet has an entire `mxnet.initializer` module
// implementing over a dozen different heuristics.
// Moreover, parameter initialization continues to be
// a hot area of fundamental research in deep learning.
// Among these are heuristics specialized for
// tied (shared) parameters, super-resolution,
// sequence models, and other situations.
// If the topic interests you we suggest
// a deep dive into this module's offerings,
// reading the papers that proposed and analyzed each heuristic,
// and then exploring the latest publications on the topic.
// Perhaps you will stumble across (or even invent!)
// a clever idea and contribute an implementation to MXNet.
//
//
// ## Summary
//
// * Vanishing and exploding gradients are common issues in deep networks. Great care in parameter initialization is required to ensure that gradients and parameters remain well controlled.
// * Initialization heuristics are needed to ensure that the initial gradients are neither too large nor too small.
// * ReLU activation functions mitigate the vanishing gradient problem. This can accelerate convergence.
// * Random initialization is key to ensure that symmetry is broken before optimization.
//
// ## Exercises
//
// 1. Can you design other cases where a neural network might exhibit symmetry requiring breaking besides the permutation symmetry in a multilayer perceptron's layers?
// 1. Can we initialize all weight parameters in linear regression or in softmax regression to the same value?
// 1. Look up analytic bounds on the eigenvalues of the product of two matrices. What does this tell you about ensuring that gradients are well conditioned?
// 1. If we know that some terms diverge, can we fix this after the fact? Look at the paper on LARS for inspiration :cite:`You.Gitman.Ginsburg.2017`.
//
// ## [Discussions](https://discuss.mxnet.io/t/2345)
//
// 
| jupyter/d2l-java/chapter_multilayer-perceptrons/numerical-stability-and-init.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn import svm
import glob
import cv2
from sklearn import metrics
import seaborn as sns
# %matplotlib inline
# +
# created parent class for training of data
class train:
def animal(self, cat, dog):
#function name animal is used and path of the folder is passed in the parameters
cats_dogs = []
file=glob.glob(cat)
file2=glob.glob(dog)
for i in file:
img= cv2.imread(i,0)
img = cv2.resize(img, dsize=(250, 250))
cats_dogs.append(img)
for i in file2:
img=cv2.imread(i,0)
img = cv2.resize(img, dsize=(250, 250))
cats_dogs.append(img)
self.cats_dogs = np.asarray(cats_dogs)
# All cat and dog images are stored in the list cats_dogs.
def shape(self):
X = self.cats_dogs
zeros = np.zeros(402)
ones = np.ones(402)
Y = np.concatenate((zeros,ones),axis=0)
print("Shape of X =", X.shape)
print("Shape of Y =", Y.shape)
# The print statements above show the shapes of the input variable X and the output variable Y
self.X_train, self.X_test, self.Y_train, self.Y_test = train_test_split(X, Y, test_size=0.15, random_state=0)
self.num_train = self.X_train.shape[0]
self.num_test = self.X_test.shape[0]
print("Data supplied in training phase = ",self.num_train)
print("Data supplied in testing phase = ",self.num_test)
# The data is split into training and testing sets, with 85% for training and the rest for testing.
def merge_transpose(self):
self.X_train_dimen = self.X_train.reshape(self.num_train, self.X_train.shape[1]*self.X_train.shape[2])
self.X_test_dimen = self.X_test.reshape(self.num_test, self.X_test.shape[1]*self.X_test.shape[2])
print("X Train flat Shape: " ,self.X_train_dimen.shape)
print("X Test flat Shape: " ,self.X_test_dimen.shape)
# Flatten each 250x250 image into a single 62500-dimensional feature vector
self.x_train =self.X_train_dimen.T
self.x_test = self.X_test_dimen.T
self.y_train =self.Y_train.T
self.y_test = self.Y_test.T
print("x train shape = ",self.x_train.shape)
print("x test shape = ",self.x_test.shape)
print("y train shape = ",self.y_train.shape)
print("y test shape = ",self.y_test.shape)
# .T function is used to transpose the shape of the variable
#New class knn is defined which inherit train class.
class knn(train):
def classification_knn(self):
# The loop below checks the model's accuracy for different k values
for k in range(10):
k_value =k+1
model = KNeighborsClassifier(n_neighbors = k_value)
model.fit(self.x_train.T,self.y_train.T)
self.y_pred = model.predict(self.x_test.T)
print(k_value ,"=", metrics.accuracy_score(self.y_test.T,self.y_pred))
# k = 5 gave the best accuracy, so the final model is trained with n_neighbors = 5
model = KNeighborsClassifier(n_neighbors = 5)
model.fit(self.x_train.T,self.y_train.T)
self.pred = model.predict(self.x_test.T)
# Predict on x_test and report the resulting accuracy.
print(metrics.accuracy_score(self.y_test.T,self.pred)*100,"% accuracy")
cnf = metrics.confusion_matrix(self.y_test,self.pred)
class_names=[0,1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(cnf), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# A heatmap of the confusion matrix is drawn for better visualization of misclassifications
#New class SVM is defined which inherit train class.
class SVM(train):
def classification_SVM(self):
# Fit an SVM and report the model's accuracy
model = svm.SVC(kernel='linear')
model.fit(self.x_train.T,self.y_train.T)
self.pred = model.predict(self.x_test.T)
print(metrics.accuracy_score(self.y_test.T,self.pred)*100,"% accuracy")
cnf = metrics.confusion_matrix(self.y_test,self.pred)
class_names=[0,1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(cnf), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# A heatmap of the confusion matrix is drawn for better visualization of misclassifications
#New class NB(Naive Bayes) is defined which inherit train class.
class NB(train):
def classification_NB(self):
#NB fn called for accuracy of model
model = GaussianNB()
model.fit(self.x_train.T,self.y_train.T)
self.pred = model.predict(self.x_test.T)
print(metrics.accuracy_score(self.y_test.T,self.pred)*100,"% accuracy")
cnf = metrics.confusion_matrix(self.y_test,self.pred)
class_names=[0,1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(cnf), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# A heatmap of the confusion matrix is drawn for better visualization of misclassifications
#New class LR(Logistic Regression) is defined which inherit train class.
class LR(train):
def classification_LR(self):
# Fit a logistic regression with at most 15 iterations
model = LogisticRegression(random_state = 0,max_iter= 15)
model.fit(self.x_train.T,self.y_train.T)
self.pred = model.predict(self.x_test.T)
print(metrics.accuracy_score(self.y_test.T,self.pred)*100,"% accuracy")
cnf = metrics.confusion_matrix(self.y_test,self.pred)
class_names=[0,1]
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(cnf), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# A heatmap of the confusion matrix is drawn for better visualization of misclassifications
#Child class validate is class which inherit the properties of all parent class.
class validate(knn, SVM, NB, LR):
def validate_animal(self, link):
#In the above function the path of folder is given to access the images stored for testing.
cats_dogs_valid = []
file=glob.glob(link)
for i in file:
img= cv2.imread(i,0)
img = cv2.resize(img, dsize=(250, 250))
cats_dogs_valid.append(img)
self.X_valid = np.asarray(cats_dogs_valid)
def knn_predic(self):
# Predict whether each supplied image is a cat (0) or a dog (1).
self.num_X_valid = self.X_valid.shape[0]
#print("Total elemenmts in testing are = ",self.num_X_valid)
self.X_valid_dimen = self.X_valid.reshape(self.num_X_valid ,self.X_valid.shape[1]*self.X_valid.shape[2])
#print(self.X_valid_dimen.shape)
self.x_valid = self.X_valid_dimen.T
#print(self.x_valid.shape)
model = KNeighborsClassifier(n_neighbors = 5)
model.fit(self.x_train.T,self.y_train.T)
self.pred_valid = model.predict(self.x_valid.T)
print("Predicted values = ",self.pred_valid)
prediction = pd.DataFrame(self.pred_valid,index = None, columns=['predictions']).to_csv('D:/cerebroz_assignment/output/prediction_knn.csv')
def SVM_predic(self):
# Predict whether each supplied image is a cat (0) or a dog (1).
self.num_X_valid = self.X_valid.shape[0]
#print("Total elemenmts in testing are = ",self.num_X_valid)
self.X_valid_dimen = self.X_valid.reshape(self.num_X_valid ,self.X_valid.shape[1]*self.X_valid.shape[2])
#print(self.X_valid_dimen.shape)
self.x_valid = self.X_valid_dimen.T
#print(self.x_valid.shape)
model = svm.SVC(kernel='linear')
model.fit(self.x_train.T,self.y_train.T)
self.pred_valid = model.predict(self.x_valid.T)
print("Predicted values = ",self.pred_valid)
prediction = pd.DataFrame(self.pred_valid,index = None, columns=['predictions']).to_csv('D:/cerebroz_assignment/output/prediction_SVM.csv')
def NB_predic(self):
# Predict whether each supplied image is a cat (0) or a dog (1).
self.num_X_valid = self.X_valid.shape[0]
#print("Total elemenmts in testing are = ",self.num_X_valid)
self.X_valid_dimen = self.X_valid.reshape(self.num_X_valid ,self.X_valid.shape[1]*self.X_valid.shape[2])
#print(self.X_valid_dimen.shape)
self.x_valid = self.X_valid_dimen.T
#print(self.x_valid.shape)
model = GaussianNB()
model.fit(self.x_train.T,self.y_train.T)
self.pred_valid = model.predict(self.x_valid.T)
print("Predicted values = ",self.pred_valid)
prediction = pd.DataFrame(self.pred_valid,index = None, columns=['predictions']).to_csv('D:/cerebroz_assignment/output/prediction_NB.csv')
def LR_predic(self):
# Predict whether each supplied image is a cat (0) or a dog (1).
self.num_X_valid = self.X_valid.shape[0]
#print("Total elemenmts in testing are = ",self.num_X_valid)
self.X_valid_dimen = self.X_valid.reshape(self.num_X_valid ,self.X_valid.shape[1]*self.X_valid.shape[2])
#print(self.X_valid_dimen.shape)
self.x_valid = self.X_valid_dimen.T
#print(self.x_valid.shape)
model = LogisticRegression(random_state = 0,max_iter= 15)
model.fit(self.x_train.T,self.y_train.T)
self.pred_valid = model.predict(self.x_valid.T)
print("Predicted values = ",self.pred_valid)
prediction = pd.DataFrame(self.pred_valid,index = None, columns=['predictions']).to_csv('D:/cerebroz_assignment/output/prediction_LR.csv')
# -
# An object of the child class is created; it can also call methods of the parent classes.
obj = validate()
# Pass the paths of the cat and dog training-image folders to animal()
obj.animal("D:/cerebroz_assignment/train/cats/*","D:/cerebroz_assignment/train/dogs/*")
# shape() prints the shapes of X and Y and splits the data into train and test sets
obj.shape()
# merge_transpose flattens the training and testing arrays and prints their shapes.
obj.merge_transpose()
# classification_knn reports the model's accuracy and shows a confusion-matrix heatmap
obj.classification_knn()
# classification_SVM reports the model's accuracy and shows a confusion-matrix heatmap
obj.classification_SVM()
# classification_NB reports the model's accuracy and shows a confusion-matrix heatmap
obj.classification_NB()
# classification_LR reports the model's accuracy and shows a confusion-matrix heatmap
obj.classification_LR()
# Here path of testing data is supplied
obj.validate_animal("D:/cerebroz_assignment/test/*")
# Prediction of data is done for KNN algorithm.
obj.knn_predic()
# Prediction of data is done for SVM algorithm.
obj.SVM_predic()
# Prediction of data is done for NB(Naive Bayes) algorithm.
obj.NB_predic()
# Prediction of data is done for LR algorithm.
obj.LR_predic()
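The four `*_predic` methods above differ only in which estimator they construct; a sketch of how they could be consolidated into one helper (using a stand-in model class for illustration, since the real data and sklearn estimators are not reproduced here):

```python
class StubModel:
    """Stand-in for an sklearn-style estimator, for illustration only."""
    def fit(self, X, y):
        self.label = y[0]          # remember the first training label
        return self

    def predict(self, X):
        return [self.label] * len(X)  # predict that label for every sample

def predict_with(model, x_train, y_train, x_valid):
    """Fit any estimator with a fit/predict interface, then predict on validation data."""
    model.fit(x_train, y_train)
    return model.predict(x_valid)

# Any of KNeighborsClassifier(), svm.SVC(), GaussianNB(), LogisticRegression()
# could be passed in place of StubModel().
preds = predict_with(StubModel(), [[0], [1]], [1, 1], [[2], [3]])
print(preds)  # [1, 1]
```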
| submission3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tensorflow20
# language: python
# name: tensorflow20
# ---
import pandas as pd
import numpy as np
from nltk.tokenize import word_tokenize
from tensorflow import keras
from nltk.stem.snowball import RomanianStemmer
from unidecode import unidecode
import sklearn.metrics as metrics
import matplotlib.pyplot as plt
SENT_SIZE = 100
CHAR_FEAT_SIZE = 10
UNKNOWN_WORD = '<UNKNOWN_WORD>'
PADDING_WORD = '<PADDING_WORD>'
df = pd.read_csv("../data/ro_news.csv")
df.head()
df['y'] = df['source'].apply(lambda x: x in ('puterea.ro', 'b1.ro')).astype(int)
def clean(txt):
tokens = word_tokenize(txt)
stemmer = RomanianStemmer()
# remove all tokens that are not alpha
words = [stemmer.stem(unidecode(word.lower())) for word in tokens if word.isalpha()]
return words
# %%time
df['title_clean'] = df['title'].apply(clean)
df['text_clean'] = df['text'].apply(clean)
# +
#WORD FEAT MODEL HELPERS
def get_word2idx(X):
words = set()
word2idx = {}
words.add(UNKNOWN_WORD)
words.add(PADDING_WORD)
for sent in X:
for word in sent:
words.add(word)
words = sorted(list(words))
word2idx = { w : i for i, w in enumerate(words) }
return word2idx
def sentence2idxs(sentence, word2idx):
sentence_idxs = list(map(lambda w: word2idx.get(w, word2idx[UNKNOWN_WORD]), sentence))
return sentence_idxs
def convert_data(X, word2idx, sent_size=SENT_SIZE):
X = list(map(lambda s: sentence2idxs(s, word2idx), X))
X = keras.preprocessing.sequence.pad_sequences(maxlen=sent_size, sequences=X,
padding="post", value=word2idx[PADDING_WORD])
return X
# -
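What `convert_data` does to each sentence can be illustrated without Keras; a pure-Python sketch of padding index sequences to a fixed length (illustrative only — `pad_sequences` above handles this for real, including its default of truncating from the front):

```python
def pad_post(seq, maxlen, pad_value):
    """Mimic pad_sequences(padding='post'): keep the last maxlen items
    (Keras truncates from the front by default), then pad at the end."""
    seq = list(seq[-maxlen:])
    return seq + [pad_value] * (maxlen - len(seq))

print(pad_post([5, 9, 2], 5, 0))           # [5, 9, 2, 0, 0]
print(pad_post([5, 9, 2, 7, 1, 4], 5, 0))  # [9, 2, 7, 1, 4]
```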
# %%time
title_word2idx = get_word2idx(df['title_clean'])
text_word2idx = get_word2idx(df['text_clean'])
X_title_w = convert_data(df['title_clean'], title_word2idx)
X_text_w = convert_data(df['text_clean'], text_word2idx)
def word_feat_model(title_word2idx, text_word2idx, sent_size=SENT_SIZE, emb_out_dim=50, n_lstm_units=100):
# TITLE INPUT
title_input = keras.layers.Input(shape=(sent_size,))
title_char_emb = keras.layers.Embedding(input_dim=len(title_word2idx),
output_dim=emb_out_dim)(title_input)
# TEXT INPUT (the text embedding must be applied to text_input, not title_input)
text_input = keras.layers.Input(shape=(sent_size,))
text_char_emb = keras.layers.Embedding(input_dim=len(text_word2idx),
output_dim=emb_out_dim)(text_input)
all_feat = keras.layers.concatenate([title_char_emb, text_char_emb])
bi_lstm = keras.layers.Bidirectional(keras.layers.LSTM(units=n_lstm_units,
return_sequences=False))(all_feat)
out = keras.layers.Dense(1, activation="sigmoid")(bi_lstm)
# the model takes both inputs, matching the two arrays passed to fit()
model = keras.models.Model([title_input, text_input], out)
model.compile(optimizer='adam', loss=keras.losses.binary_crossentropy, metrics=['acc'])
return model
y = df['y'].values
word_model = word_feat_model(title_word2idx, text_word2idx)
word_model.summary()
word_model.fit(x=[X_title_w, X_text_w], y=y, batch_size=32, epochs=3, validation_split=.2)
y_hat = word_model.predict([X_title_w, X_text_w])
y_hat = np.round(y_hat)
metrics.classification_report(y, y_hat, output_dict=True)
# +
import itertools
def grid_search(df):
title_word2idx = get_word2idx(df['title_clean'])
text_word2idx = get_word2idx(df['text_clean'])
y = df['y']
best_config = None
grid = {
'sent_sizes' : [50, 75, 100],
'emb_out_dim' : [50, 100],
'n_lstm_units' : [100, 150],
'batch_s' : [32, 64],
}
lists_keys = ['sent_sizes', 'emb_out_dim', 'n_lstm_units', 'batch_s']
lists_vals = list(map(lambda k: grid[k], lists_keys))
prod = list(itertools.product(*lists_vals))
for (s_size, emb_dim, n_lstm, b_s) in itertools.product(*lists_vals):
X_title_w = convert_data(df['title_clean'], title_word2idx, sent_size=s_size)
X_text_w = convert_data(df['text_clean'], text_word2idx, sent_size=s_size)
model = word_feat_model(title_word2idx, text_word2idx, sent_size=s_size, emb_out_dim=emb_dim, n_lstm_units=n_lstm)
es = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)
model.fit(x=[X_title_w, X_text_w], y=y, batch_size=b_s, epochs=10, validation_split=.2, callbacks=[es])
y_hat = model.predict([X_title_w, X_text_w])
y_hat = np.round(y_hat)
metrics_ = metrics.classification_report(y, y_hat, output_dict=True)
f1_ = metrics_['1']['f1-score']
if best_config is None or best_config['f1'] < f1_:
best_config = {
'sent_sizes' : s_size,
'emb_out_dim' : emb_dim,
'n_lstm_units' : n_lstm,
'batch_s' : b_s,
'f1' : f1_,
'metrics' : metrics_
}
return best_config
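The search above relies on `itertools.product` to enumerate every combination of the grid values; a quick self-contained illustration with a smaller grid:

```python
import itertools

grid = {
    'sent_sizes': [50, 75],
    'batch_s': [32, 64],
}
keys = list(grid)  # dicts preserve insertion order in Python 3.7+
combos = list(itertools.product(*(grid[k] for k in keys)))
print(combos)       # [(50, 32), (50, 64), (75, 32), (75, 64)]
print(len(combos))  # 4
```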
# +
title_word2idx = get_word2idx(df['title_clean'])
text_word2idx = get_word2idx(df['text_clean'])
y = df['y']
(s_size, emb_dim, n_lstm, b_s) = (50, 50, 100, 32)
X_title_w = convert_data(df['title_clean'], title_word2idx, sent_size=s_size)
X_text_w = convert_data(df['text_clean'], text_word2idx, sent_size=s_size)
model = word_feat_model(title_word2idx, text_word2idx, sent_size=s_size, emb_out_dim=emb_dim, n_lstm_units=n_lstm)
es = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)
hist = model.fit(x=[X_title_w, X_text_w], y=y, batch_size=b_s, epochs=10, validation_split=.2, callbacks=[es])
y_hat = model.predict([X_title_w, X_text_w])
y_hat = np.round(y_hat)
metrics_ = metrics.classification_report(y, y_hat, output_dict=True)
print(model.summary())
# -
model.summary()
best_config = {
'sent_sizes' : s_size,
'emb_out_dim' : emb_dim,
'n_lstm_units' : n_lstm,
'batch_s' : b_s,
'metrics' : metrics_
}
best_config
grid_search(df)
plt.plot(hist.history['loss'], label='loss')
plt.plot(hist.history['val_loss'], label='val loss')
plt.title('Model training loss')
plt.legend()
plt.plot(hist.history['acc'], label='acc')
plt.plot(hist.history['val_acc'], label='val acc')
plt.title('Model training acc')
plt.legend()
| notebooks/fake_news.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Test your SQLite DB
import pandas as pd
import sqlite3
db = sqlite3.connect('./db/chinook.db')
# prepare a cursor object using cursor() method
cursor = db.cursor()
# get a list of ALL tables in chinook db
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
result = cursor.fetchall()
print(result)
mytablelist = [ tup[0] for tup in result] # fetchall gave us a list of tuples with a single element so we only need that first el
print(mytablelist)
# most popular type of query!
albums = pd.read_sql_query("SELECT * from albums", db)
albums.head()
# generate a dataframe of songs joined with the album each came from, plus its genre and artist, so three JOINs are needed
songs = pd.read_sql_query('''SELECT t.name AS Song,
g.name AS Genre,
t.composer,
a.title AS Album,
ar.name AS Artist
FROM tracks AS t
JOIN genres AS g
ON t.genreid = g.genreid
JOIN albums AS a
ON t.albumid = a.albumid
JOIN artists as ar
ON a.artistid = ar.artistid
ORDER BY t.name
''',db)
songs.head()
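# The same `read_sql_query` pattern also handles aggregate queries. Here is a self-contained sketch against an in-memory database; the toy `tracks` table below is made up for illustration (on chinook you would group the joined genre column instead):

```python
import sqlite3
import pandas as pd

# Build a throwaway in-memory table so the example is self-contained.
mem = sqlite3.connect(':memory:')
mem.executescript('''
    CREATE TABLE tracks (name TEXT, genre TEXT);
    INSERT INTO tracks VALUES
        ('Song A', 'Rock'), ('Song B', 'Rock'), ('Song C', 'Jazz');
''')

# GROUP BY through pandas, exactly like the JOIN query above.
counts = pd.read_sql_query(
    'SELECT genre, COUNT(*) AS n_tracks FROM tracks GROUP BY genre ORDER BY n_tracks DESC',
    mem)
print(counts)
mem.close()
```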
# # Test your SQLite db
# ## SQLite Online
# https://sqliteonline.com/
| SQL/SQLite Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Two types of search problems:
# * Planning: the path to the goal is important. Paths have costs and depths; heuristics give problem-specific guidance.
# * Identification: assignments to variables. The goal itself is important, not the path, and all paths have the same depth. CSPs are specialized for identification problems.
# CSP:
# * A special subset of search problems
# * State is defined by variables X<sub>i</sub>, with values from a domain D (sometimes D depends on i)
# * Goal test is a set of constraints specifying allowable combinations of values for subsets of variables
# * A solution is an assignment that satisfies all the constraints
# Backtracking search - the basic uninformed algorithm for solving CSPs
# * Idea 1: One variable at a time
# * Variable assignments are commutative, so fix ordering
# * i.e. [WA = red, then NT = green] same as [NT = green then WA = red]
# * Only need to consider assignments to a single variable at each step
# * Check constraints as you go
# * i.e. consider only values which do not conflict previous assignments
# * might have to do some computation to check the constraints
# * "Incremental goal test"
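# The bullet points above can be sketched as a tiny backtracking solver for map coloring. This is an illustrative sketch, not the textbook implementation; the three-region adjacency graph is a made-up fragment of the Australia example:

```python
# Minimal backtracking for a map-coloring CSP: assign one variable at a
# time and check constraints as we go (the "incremental goal test").
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
colors = ['red', 'green', 'blue']

def backtrack(assignment, variables):
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in colors:
        # consider only values that do not conflict with previous assignments
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables)
            if result is not None:
                return result
            del assignment[var]            # undo and try the next value
    return None                            # dead end, trigger backtracking

print(backtrack({}, list(neighbors)))
```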
| ai_constraint_satisfaction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="llx_cGpoAyAq" colab_type="text"
# ##### Copyright 2020 The TensorFlow Authors.
# + id="5MAYU_6KA0Kt" colab_type="code" cellView="both" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="35lZ8kr3UcsB" colab_type="text"
# # Data augmentation
# + [markdown] colab_type="text" id="MfBg1C5NB3X0"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="BZP72A6eFw74" colab_type="text"
# ## Overview
#
# This tutorial demonstrates data augmentation using the `tf.image` API. Augmentation can help a model generalize when the training dataset is small.
# + [markdown] id="8sZIVqk7HvnC" colab_type="text"
# ## Setup
# + id="r-vdtEaCeK4Z" colab_type="code" colab={}
from __future__ import absolute_import, division, print_function, unicode_literals
# + id="3TS4vBJBd8jY" colab_type="code" colab={}
# !pip install tensorflow_addons
# + id="rdP8EQbPsyRA" colab_type="code" colab={}
# %matplotlib inline
try:
# %tensorflow_version 2.x
except:
pass
import urllib
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
import PIL.Image
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)
import numpy as np
# + [markdown] id="__EuXwM-8uth" colab_type="text"
# Let's try the data augmentation features on a single image, then augment a whole dataset later to train a model.
# + [markdown] id="frBSdODBLOOI" colab_type="text"
# Download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg), by Von.grzanka, for augmentation.
# + id="s5ThIwG8KqzI" colab_type="code" colab={}
image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
# + [markdown] id="-Ec3bGonGDCF" colab_type="text"
# Read and decode the image to tensor format.
# + id="cdCoB8b-uZjf" colab_type="code" colab={}
image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)
# + [markdown] id="isGwyT0386yi" colab_type="text"
# A function to visualize and compare the original and augmented image side by side.
# + id="FKnRfw2dvyql" colab_type="code" colab={}
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
# + [markdown] id="jYLzpEOhGqWY" colab_type="text"
# ## Augment a single image
# + [markdown] id="8IiXghY99Bo6" colab_type="text"
# ### Flipping the image
# Flip the image either vertically or horizontally.
# + id="X14VjLlFxnvZ" colab_type="code" colab={}
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
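# Conceptually, a left-right flip just reverses the width axis of the `(height, width, channels)` array. A minimal NumPy sketch of the same operation, on a toy array for illustration:

```python
import numpy as np

# A tiny 2x3 "image" with 2 channels; tf.image.flip_left_right reverses
# the width axis exactly like this slice does.
img = np.arange(12).reshape(2, 3, 2)
flipped_np = img[:, ::-1, :]
print(flipped_np[0, 0])  # the former right-most pixel of row 0
```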
# + [markdown] id="ObsvSmu99MfC" colab_type="text"
# ### Grayscale the image
# Grayscale an image.
# + id="mnqQA2ubyo6O" colab_type="code" colab={}
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()
# + [markdown] id="juI4A4HF9gYc" colab_type="text"
# ### Saturate the image
# Saturate an image by providing a saturation factor.
# + id="tiTUhw-gzCJW" colab_type="code" colab={}
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
# + [markdown] id="E82CqomP9qcR" colab_type="text"
# ### Change image brightness
# Change the brightness of image by providing a brightness factor.
# + id="05dA6uEtzfyd" colab_type="code" colab={}
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
# + [markdown] id="5_0kMbmS91x6" colab_type="text"
# ### Rotate the image
# Rotate an image to your desired angles.
# + id="edNoQzhszxo8" colab_type="code" colab={}
rotated = tf.image.rot90(image)
visualize(image, rotated)
# + [markdown] id="9vdmcWJKeXYa" colab_type="text"
# Or rotate by any angle using `tfa.image.rotate`:
# + id="fKximR6DecYz" colab_type="code" colab={}
rotated = tfa.image.rotate(image, 20*np.pi/180)
visualize(image,rotated)
# + [markdown] id="bomBnFWp9895" colab_type="text"
# ### Center crop the image
# Crop the image around its center, keeping the central fraction you specify.
# + id="fvgz_6t21dq2" colab_type="code" colab={}
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image,cropped)
# + [markdown] id="8W5E_c7o-H96" colab_type="text"
# See the `tf.image` reference for the full list of available augmentation options.
# + [markdown] id="92lBGZSQ-1Tx" colab_type="text"
# ## Augment a dataset and train a model with it
# + [markdown] id="lrDez4xIX9Ss" colab_type="text"
# Train the model on the MNIST dataset.
# + id="mazlEonS_gTR" colab_type="code" colab={}
dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
num_train_examples = info.splits['train'].num_examples
# + [markdown] id="011caOa0YCz5" colab_type="text"
# Use an augment function to transform the images. Mapping it over the dataset returns an augmented dataset.
# + id="3oaSV5QcDS8p" colab_type="code" colab={}
def augment(image,label):
    image = tf.image.resize(image, (28, 28))/255.0 #normalizing the image
    image = tf.image.resize_with_crop_or_pad(image, 34, 34) #padding so the random crop below can actually shift the image
    image = tf.image.random_crop(image, size=[28,28,1]) #taking a random 28x28 crop
    image = tf.image.random_brightness(image, max_delta=0.5) #providing random brightness to image
    image = tf.image.random_flip_left_right(image) #providing random flip to image
    return image,label
BATCH_SIZE = 64
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_batches = train_dataset.shuffle(num_train_examples//4).map(augment, num_parallel_calls=AUTOTUNE).batch(BATCH_SIZE).prefetch(1) #A batch dataset that can be directly passed to model for training
# + [markdown] id="Yi9TIwR-ZIOi" colab_type="text"
# Create and compile the model. The model is a simple fully-connected network with two hidden layers and no convolutions.
# + id="hHhkA4Q0CsHx" colab_type="code" colab={}
model = tf.keras.Sequential([
layers.Flatten(input_shape=(28, 28,1)),
layers.Dense(256, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(10, activation='softmax')
])
# + id="17wqPAAoNe3N" colab_type="code" colab={}
model.compile(optimizer = 'adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + [markdown] id="P0rciou3ZWwy" colab_type="text"
# Train the model:
# + id="z8X8CpqvNhG9" colab_type="code" colab={}
model.fit(train_batches, epochs=5)
# + [markdown] id="UEqeeNsHZaC5" colab_type="text"
# ## Conclusion
# This model reaches roughly 95% accuracy on the training set, slightly higher than the same model trained without data augmentation. Augmentation makes little difference here because MNIST already has a large number of samples, but on a small dataset the gain can be substantial.
| site/en/tutorials/images/data_augmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# +
# To remove warnings for certain cells
pd.options.mode.chained_assignment = None
# -
# #### Read in Data Files
may_2021 = pd.read_csv('../data/may_2021.csv')
april_2021 = pd.read_csv('../data/april_2021.csv')
march_2021 = pd.read_csv('../data/march_2021.csv')
february_2021 = pd.read_csv('../data/february_2021.csv')
january_2021 = pd.read_csv('../data/january_2021.csv')
december_2020 = pd.read_csv('../data/december_2020.csv')
november_2020 = pd.read_csv('../data/november_2020.csv')
october_2020 = pd.read_csv('../data/october_2020.csv')
september_2020 = pd.read_csv('../data/september_2020.csv')
august_2020 = pd.read_csv('../data/august_2020.csv')
july_2020 = pd.read_csv('../data/july_2020.csv')
june_2020 = pd.read_csv('../data/june_2020.csv')
may_2020 = pd.read_csv('../data/may_2020.csv')
april_2020 = pd.read_csv('../data/april_2020.csv')
march_2020 = pd.read_csv('../data/march_2020.csv')
february_2020 = pd.read_csv('../data/february_2020.csv')
january_2020 = pd.read_csv('../data/january_2020.csv')
december_2019 = pd.read_csv('../data/december_2019.csv')
november_2019 = pd.read_csv('../data/november_2019.csv')
october_2019 = pd.read_csv('../data/october_2019.csv')
september_2019 = pd.read_csv('../data/september_2019.csv')
august_2019 = pd.read_csv('../data/august_2019.csv')
july_2019 = pd.read_csv('../data/july_2019.csv')
june_2019 = pd.read_csv('../data/june_2019.csv')
may_2019 = pd.read_csv('../data/may_2019.csv')
april_2019 = pd.read_csv('../data/april_2019.csv')
march_2019 = pd.read_csv('../data/march_2019.csv')
february_2019 = pd.read_csv('../data/february_2019.csv')
january_2019 = pd.read_csv('../data/january_2019.csv')
december_2018 = pd.read_csv('../data/december_2018.csv')
november_2018 = pd.read_csv('../data/november_2018.csv')
october_2018 = pd.read_csv('../data/october_2018.csv')
september_2018 = pd.read_csv('../data/september_2018.csv')
august_2018 = pd.read_csv('../data/august_2018.csv')
july_2018 = pd.read_csv('../data/july_2018.csv')
june_2018 = pd.read_csv('../data/june_2018.csv')
may_2018 = pd.read_csv('../data/may_2018.csv')
april_2018 = pd.read_csv('../data/april_2018.csv')
march_2018 = pd.read_csv('../data/march_2018.csv')
february_2018 = pd.read_csv('../data/february_2018.csv')
january_2018 = pd.read_csv('../data/january_2018.csv')
# #### Cleaning Function for Combined File
# +
# Create cleaning function to clean files:
# Remove the unnamed column
# Snake case the column names
# Remove state from the city names
# Update delay columns: populate null to zero and set to be true or false
# Change floats to ints where appropriate
def clean_files(df, pct):
    df.drop(columns = ['Unnamed: 29'], inplace = True)
    df.columns = df.columns.str.lower().str.replace(' ', '_')
    df['origin_city_name'] = df['origin_city_name'].str.split(',').str[0].str.strip()
    df['dest_city_name'] = df['dest_city_name'].str.split(',').str[0].str.strip()
    # nulls in the delay columns mean no delay; binarize delay minutes into a 0/1 flag
    delay_cols = ['carrier_delay', 'weather_delay', 'nas_delay', 'security_delay', 'late_aircraft_delay']
    for col in delay_cols:
        df[col] = (df[col].fillna(0) != 0).astype(int)
    df['cancelled'] = df['cancelled'].astype(int)
    df = df.sample(frac = pct)
    return df
# -
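# The delay columns go through two steps above: nulls become 0, then any non-zero delay minutes become the flag 1. A toy column showing the net effect (made-up values):

```python
import numpy as np
import pandas as pd

# Made-up delay minutes: a null, a zero, and two real delays.
delay = pd.Series([np.nan, 0.0, 15.0, 120.0])
flag = delay.fillna(0).apply(lambda x: 0 if x == 0 else 1).astype(int)
print(flag.tolist())  # [0, 0, 1, 1]
```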
may_2021 = clean_files(may_2021, 1)
april_2021 = clean_files(april_2021, 1)
march_2021 = clean_files(march_2021, 1)
february_2021 = clean_files(february_2021, 1)
january_2021 = clean_files(january_2021, 1)
december_2020 = clean_files(december_2020, 1)
november_2020 = clean_files(november_2020, 1)
october_2020 = clean_files(october_2020, 1)
september_2020 = clean_files(september_2020, 1)
august_2020 = clean_files(august_2020, 1)
july_2020 = clean_files(july_2020, 1)
june_2020 = clean_files(june_2020, 1)
may_2020 = clean_files(may_2020, 1)
april_2020 = clean_files(april_2020, 1)
march_2020 = clean_files(march_2020, 1)
february_2020 = clean_files(february_2020, 1)
january_2020 = clean_files(january_2020, 1)
december_2019 = clean_files(december_2019, 1)
november_2019 = clean_files(november_2019, 1)
october_2019 = clean_files(october_2019, 1)
september_2019 = clean_files(september_2019, 1)
august_2019 = clean_files(august_2019, 1)
july_2019 = clean_files(july_2019, 1)
june_2019 = clean_files(june_2019, 1)
may_2019 = clean_files(may_2019, 1)
april_2019 = clean_files(april_2019, 1)
march_2019 = clean_files(march_2019, 1)
february_2019 = clean_files(february_2019, 1)
january_2019 = clean_files(january_2019, 1)
december_2018 = clean_files(december_2018, 1)
november_2018 = clean_files(november_2018, 1)
october_2018 = clean_files(october_2018, 1)
september_2018 = clean_files(september_2018, 1)
august_2018 = clean_files(august_2018, 1)
july_2018 = clean_files(july_2018, 1)
june_2018 = clean_files(june_2018, 1)
may_2018 = clean_files(may_2018, 1)
april_2018 = clean_files(april_2018, 1)
march_2018 = clean_files(march_2018, 1)
february_2018 = clean_files(february_2018, 1)
january_2018 = clean_files(january_2018, 1)
# #### Split Files: Delays vs Cancels
# +
# Create a function to separate the files into delays versus cancels and print new shapes
def delay_cancel(df, name):
# Create cancel data frame and drop unnecessary rows
df_cancel = df[df['cancelled'] == 1]
df_cancel.reset_index(drop = True, inplace = True)
# Create delay data frame and drop unnecessary rows
df_delay = df[df['cancelled'] == 0]
df_delay.reset_index(drop = True, inplace = True)
print(f'The {name} delays data frame is {df_delay.shape[0]} rows')
print(f'The {name} cancels data frame is {df_cancel.shape[0]} rows')
print(f'The {name} full data frame was {df.shape[0]} rows')
return df_delay, df_cancel
# -
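# A toy illustration of the split above: boolean masks on the `cancelled` flag, with the index reset on each piece (the tiny frame is made up):

```python
import pandas as pd

# Three made-up flights, one cancelled.
toy = pd.DataFrame({'flight': ['A', 'B', 'C'], 'cancelled': [0, 1, 0]})
toy_cancels = toy[toy['cancelled'] == 1].reset_index(drop=True)
toy_delays = toy[toy['cancelled'] == 0].reset_index(drop=True)
print(len(toy_delays), len(toy_cancels))  # 2 1
```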
may_2021_delays, may_2021_cancels = delay_cancel(may_2021, 'may 2021')
april_2021_delays, april_2021_cancels = delay_cancel(april_2021, 'april 2021')
march_2021_delays, march_2021_cancels = delay_cancel(march_2021, 'march 2021')
february_2021_delays, february_2021_cancels = delay_cancel(february_2021, 'february 2021')
january_2021_delays, january_2021_cancels = delay_cancel(january_2021, 'january 2021')
december_2020_delays, december_2020_cancels = delay_cancel(december_2020, 'december 2020')
november_2020_delays, november_2020_cancels = delay_cancel(november_2020, 'november 2020')
october_2020_delays, october_2020_cancels = delay_cancel(october_2020, 'october 2020')
september_2020_delays, september_2020_cancels = delay_cancel(september_2020, 'september 2020')
august_2020_delays, august_2020_cancels = delay_cancel(august_2020, 'august 2020')
july_2020_delays, july_2020_cancels = delay_cancel(july_2020, 'july 2020')
june_2020_delays, june_2020_cancels = delay_cancel(june_2020, 'june 2020')
may_2020_delays, may_2020_cancels = delay_cancel(may_2020, 'may 2020')
april_2020_delays, april_2020_cancels = delay_cancel(april_2020, 'april 2020')
march_2020_delays, march_2020_cancels = delay_cancel(march_2020, 'march 2020')
february_2020_delays, february_2020_cancels = delay_cancel(february_2020, 'february 2020')
january_2020_delays, january_2020_cancels = delay_cancel(january_2020, 'january 2020')
december_2019_delays, december_2019_cancels = delay_cancel(december_2019, 'december 2019')
november_2019_delays, november_2019_cancels = delay_cancel(november_2019, 'november 2019')
october_2019_delays, october_2019_cancels = delay_cancel(october_2019, 'october 2019')
september_2019_delays, september_2019_cancels = delay_cancel(september_2019, 'september 2019')
august_2019_delays, august_2019_cancels = delay_cancel(august_2019, 'august 2019')
july_2019_delays, july_2019_cancels = delay_cancel(july_2019, 'july 2019')
june_2019_delays, june_2019_cancels = delay_cancel(june_2019, 'june 2019')
may_2019_delays, may_2019_cancels = delay_cancel(may_2019, 'may 2019')
april_2019_delays, april_2019_cancels = delay_cancel(april_2019, 'april 2019')
march_2019_delays, march_2019_cancels = delay_cancel(march_2019, 'march 2019')
february_2019_delays, february_2019_cancels = delay_cancel(february_2019, 'february 2019')
january_2019_delays, january_2019_cancels = delay_cancel(january_2019, 'january 2019')
december_2018_delays, december_2018_cancels = delay_cancel(december_2018, 'december 2018')
november_2018_delays, november_2018_cancels = delay_cancel(november_2018, 'november 2018')
october_2018_delays, october_2018_cancels = delay_cancel(october_2018, 'october 2018')
september_2018_delays, september_2018_cancels = delay_cancel(september_2018, 'september 2018')
august_2018_delays, august_2018_cancels = delay_cancel(august_2018, 'august 2018')
july_2018_delays, july_2018_cancels = delay_cancel(july_2018, 'july 2018')
june_2018_delays, june_2018_cancels = delay_cancel(june_2018, 'june 2018')
may_2018_delays, may_2018_cancels = delay_cancel(may_2018, 'may 2018')
april_2018_delays, april_2018_cancels = delay_cancel(april_2018, 'april 2018')
march_2018_delays, march_2018_cancels = delay_cancel(march_2018, 'march 2018')
february_2018_delays, february_2018_cancels = delay_cancel(february_2018, 'february 2018')
january_2018_delays, january_2018_cancels = delay_cancel(january_2018, 'january 2018')
# #### Cleaning Function on Delays File
# +
# Review null values in files before removing (expecting arrival time, arrival delay and actual elapsed time)
def review_nulls(df, name):
    print(f'The columns with null values in {name}:')
    nulls = df.isnull().sum()
    for col, count in nulls[nulls != 0].items():
        print(f'{col}: {count}')
# -
review_nulls(may_2021_delays, 'may 2021')
review_nulls(april_2021_delays, 'april 2021')
review_nulls(march_2021_delays, 'march 2021')
review_nulls(february_2021_delays, 'february 2021')
review_nulls(january_2021_delays, 'january 2021')
review_nulls(december_2020_delays, 'december 2020')
review_nulls(november_2020_delays, 'november 2020')
review_nulls(october_2020_delays, 'october 2020')
review_nulls(september_2020_delays, 'september 2020')
review_nulls(august_2020_delays, 'august 2020')
review_nulls(july_2020_delays, 'july 2020')
review_nulls(june_2020_delays, 'june 2020')
review_nulls(may_2020_delays, 'may 2020')
review_nulls(april_2020_delays, 'april 2020')
review_nulls(march_2020_delays, 'march 2020')
review_nulls(february_2020_delays, 'february 2020')
review_nulls(january_2020_delays, 'january 2020')
review_nulls(december_2019_delays, 'december 2019')
review_nulls(november_2019_delays, 'november 2019')
review_nulls(october_2019_delays, 'october 2019')
review_nulls(september_2019_delays, 'september 2019')
review_nulls(august_2019_delays, 'august 2019')
review_nulls(july_2019_delays, 'july 2019')
review_nulls(june_2019_delays, 'june 2019')
review_nulls(may_2019_delays, 'may 2019')
review_nulls(april_2019_delays, 'april 2019')
review_nulls(march_2019_delays, 'march 2019')
review_nulls(february_2019_delays, 'february 2019')
review_nulls(january_2019_delays, 'january 2019')
review_nulls(december_2018_delays, 'december 2018')
review_nulls(november_2018_delays, 'november 2018')
review_nulls(october_2018_delays, 'october 2018')
review_nulls(september_2018_delays, 'september 2018')
review_nulls(august_2018_delays, 'august 2018')
review_nulls(july_2018_delays, 'july 2018')
review_nulls(june_2018_delays, 'june 2018')
review_nulls(may_2018_delays, 'may 2018')
review_nulls(april_2018_delays, 'april 2018')
review_nulls(march_2018_delays, 'march 2018')
review_nulls(february_2018_delays, 'february 2018')
review_nulls(january_2018_delays, 'january 2018')
# +
# Create cleaning function on the delay file to:
# Drop rows with null values
# Fix departure and arrival times to be ints
# Drop extra columns that won't be needed in the model
def cleaning_delays(df):
df = df.dropna()
df.reset_index(drop = True, inplace = True)
df['dep_time'] = df['dep_time'].astype(int)
df['arr_time'] = df['arr_time'].astype(int)
df = df.drop(columns = ['op_carrier_fl_num', 'origin_airport_id', 'dest_airport_id', 'cancelled'])
return df
# -
may_2021_delays = cleaning_delays(may_2021_delays)
april_2021_delays = cleaning_delays(april_2021_delays)
march_2021_delays = cleaning_delays(march_2021_delays)
february_2021_delays = cleaning_delays(february_2021_delays)
january_2021_delays = cleaning_delays(january_2021_delays)
december_2020_delays = cleaning_delays(december_2020_delays)
november_2020_delays = cleaning_delays(november_2020_delays)
october_2020_delays = cleaning_delays(october_2020_delays)
september_2020_delays = cleaning_delays(september_2020_delays)
august_2020_delays = cleaning_delays(august_2020_delays)
july_2020_delays = cleaning_delays(july_2020_delays)
june_2020_delays = cleaning_delays(june_2020_delays)
may_2020_delays = cleaning_delays(may_2020_delays)
april_2020_delays = cleaning_delays(april_2020_delays)
march_2020_delays = cleaning_delays(march_2020_delays)
february_2020_delays = cleaning_delays(february_2020_delays)
january_2020_delays = cleaning_delays(january_2020_delays)
december_2019_delays = cleaning_delays(december_2019_delays)
november_2019_delays = cleaning_delays(november_2019_delays)
october_2019_delays = cleaning_delays(october_2019_delays)
september_2019_delays = cleaning_delays(september_2019_delays)
august_2019_delays = cleaning_delays(august_2019_delays)
july_2019_delays = cleaning_delays(july_2019_delays)
june_2019_delays = cleaning_delays(june_2019_delays)
may_2019_delays = cleaning_delays(may_2019_delays)
april_2019_delays = cleaning_delays(april_2019_delays)
march_2019_delays = cleaning_delays(march_2019_delays)
february_2019_delays = cleaning_delays(february_2019_delays)
january_2019_delays = cleaning_delays(january_2019_delays)
december_2018_delays = cleaning_delays(december_2018_delays)
november_2018_delays = cleaning_delays(november_2018_delays)
october_2018_delays = cleaning_delays(october_2018_delays)
september_2018_delays = cleaning_delays(september_2018_delays)
august_2018_delays = cleaning_delays(august_2018_delays)
july_2018_delays = cleaning_delays(july_2018_delays)
june_2018_delays = cleaning_delays(june_2018_delays)
may_2018_delays = cleaning_delays(may_2018_delays)
april_2018_delays = cleaning_delays(april_2018_delays)
march_2018_delays = cleaning_delays(march_2018_delays)
february_2018_delays = cleaning_delays(february_2018_delays)
january_2018_delays = cleaning_delays(january_2018_delays)
# #### Concatenate Delay Files Together
# +
# Create a function to concatenate the delays files together
def combine_delays(dfs):
delays = pd.concat(dfs, ignore_index = True)
return delays
# -
delays = combine_delays([may_2021_delays, april_2021_delays, march_2021_delays, february_2021_delays, january_2021_delays, december_2020_delays,
november_2020_delays, october_2020_delays, september_2020_delays, august_2020_delays, july_2020_delays, june_2020_delays,
may_2020_delays, april_2020_delays, march_2020_delays, february_2020_delays, january_2020_delays, december_2019_delays,
november_2019_delays, october_2019_delays, september_2019_delays, august_2019_delays, july_2019_delays, june_2019_delays,
may_2019_delays, april_2019_delays, march_2019_delays, february_2019_delays, january_2019_delays, december_2018_delays,
november_2018_delays, october_2018_delays, september_2018_delays, august_2018_delays, july_2018_delays, june_2018_delays,
may_2018_delays, april_2018_delays, march_2018_delays, february_2018_delays, january_2018_delays])
delays = combine_delays([december_2019_delays, november_2019_delays, october_2019_delays, september_2019_delays,
august_2019_delays, july_2019_delays, june_2019_delays, may_2019_delays, april_2019_delays, march_2019_delays,
february_2019_delays, january_2019_delays, december_2018_delays, november_2018_delays, october_2018_delays,
september_2018_delays, august_2018_delays, july_2018_delays, june_2018_delays, may_2018_delays, april_2018_delays,
march_2018_delays, february_2018_delays, january_2018_delays])
combined = combine_delays([may_2021, april_2021, march_2021, february_2021, january_2021, december_2020, november_2020, october_2020,
september_2020, august_2020, july_2020, june_2020, may_2020, april_2020, march_2020, february_2020, january_2020,
december_2019, november_2019, october_2019, september_2019, august_2019, july_2019, june_2019, may_2019,
april_2019, march_2019, february_2019, january_2019, december_2018, november_2018, october_2018, september_2018,
august_2018, july_2018, june_2018, may_2018, april_2018, march_2018, february_2018, january_2018])
# #### Save Cleaned Delay File
delays.to_parquet('../data/cleaned_delays_full.parquet')
combined.to_parquet('../data/cleaned_combined_full.parquet')
delays.to_csv('../data/cleaned_delays.csv', index = False)
# #### Cleaning Function on Cancels File
# +
# Create function to remove columns not relevant for the cancels file
def drop_cancel_rows(df):
df = df.drop(columns = ['origin_airport_id', 'dest_airport_id', 'dep_time', 'dep_delay_new', 'arr_time', 'arr_delay_new', 'actual_elapsed_time',
'carrier_delay', 'weather_delay', 'nas_delay', 'security_delay', 'late_aircraft_delay'])
return df
# -
may_2021_cancels = drop_cancel_rows(may_2021_cancels)
april_2021_cancels = drop_cancel_rows(april_2021_cancels)
march_2021_cancels = drop_cancel_rows(march_2021_cancels)
february_2021_cancels = drop_cancel_rows(february_2021_cancels)
january_2021_cancels = drop_cancel_rows(january_2021_cancels)
december_2020_cancels = drop_cancel_rows(december_2020_cancels)
november_2020_cancels = drop_cancel_rows(november_2020_cancels)
october_2020_cancels = drop_cancel_rows(october_2020_cancels)
september_2020_cancels = drop_cancel_rows(september_2020_cancels)
august_2020_cancels = drop_cancel_rows(august_2020_cancels)
july_2020_cancels = drop_cancel_rows(july_2020_cancels)
june_2020_cancels = drop_cancel_rows(june_2020_cancels)
may_2020_cancels = drop_cancel_rows(may_2020_cancels)
april_2020_cancels = drop_cancel_rows(april_2020_cancels)
march_2020_cancels = drop_cancel_rows(march_2020_cancels)
february_2020_cancels = drop_cancel_rows(february_2020_cancels)
january_2020_cancels = drop_cancel_rows(january_2020_cancels)
december_2019_cancels = drop_cancel_rows(december_2019_cancels)
november_2019_cancels = drop_cancel_rows(november_2019_cancels)
october_2019_cancels = drop_cancel_rows(october_2019_cancels)
september_2019_cancels = drop_cancel_rows(september_2019_cancels)
august_2019_cancels = drop_cancel_rows(august_2019_cancels)
july_2019_cancels = drop_cancel_rows(july_2019_cancels)
june_2019_cancels = drop_cancel_rows(june_2019_cancels)
may_2019_cancels = drop_cancel_rows(may_2019_cancels)
april_2019_cancels = drop_cancel_rows(april_2019_cancels)
march_2019_cancels = drop_cancel_rows(march_2019_cancels)
february_2019_cancels = drop_cancel_rows(february_2019_cancels)
january_2019_cancels = drop_cancel_rows(january_2019_cancels)
# #### Concatenate Cancel Files Together
# +
# Create a function to concatenate the cancels files together
def combine_cancels(dfs):
cancels = pd.concat(dfs, ignore_index = True)
return cancels
# -
cancels = combine_cancels([may_2021_cancels, april_2021_cancels, march_2021_cancels, february_2021_cancels, january_2021_cancels, december_2020_cancels,
november_2020_cancels, october_2020_cancels, september_2020_cancels, august_2020_cancels, july_2020_cancels, june_2020_cancels,
may_2020_cancels, april_2020_cancels, march_2020_cancels, february_2020_cancels, january_2020_cancels, december_2019_cancels,
november_2019_cancels, october_2019_cancels, september_2019_cancels, august_2019_cancels, july_2019_cancels, june_2019_cancels,
may_2019_cancels, april_2019_cancels, march_2019_cancels, february_2019_cancels, january_2019_cancels])
# #### Save Cleaned Cancel File
cancels.to_csv('../data/cleaned_cancels.csv', index = False)
| code/flights_cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Overview
#
# In this project, I will build an item-based collaborative filtering system using the [MovieLens Datasets](https://grouplens.org/datasets/movielens/latest/). Specifically, I will train a KNN model to cluster similar movies based on users' ratings and make movie recommendations based on the similarity scores of previously rated movies.
#
#
# ## [Recommender system](https://en.wikipedia.org/wiki/Recommender_system)
# A recommendation system is an information filtering system that seeks to predict the "rating" or "preference" a user would give to an item. It is widely used by internet businesses such as Amazon, Netflix, and Spotify, and by social media platforms like Facebook and YouTube. Recommender systems let these companies provide better-suited products, services, and content, personalized to each user based on their historical behavior.
#
# Recommender systems typically produce a list of recommendations through collaborative filtering or through content-based filtering
#
# This project will focus on collaborative filtering and use item-based collaborative filtering systems make movie recommendation
#
#
# ## [Item-based Collaborative Filtering](https://beckernick.github.io/music_recommender/)
# Collaborative filtering based systems use the actions of users to recommend other items. In general, they can be either user based or item based. User-based collaborative filtering uses the patterns of users similar to me to recommend a product (users like me also looked at these other items). Item-based collaborative filtering uses the patterns of users who browsed the same item as me to recommend a product (users who looked at my item also looked at these other items). The item-based approach is usually preferred over the user-based approach. The user-based approach is often harder to scale because of the dynamic nature of users, whereas items usually don't change much, so the item-based approach can often be computed offline.
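# As a toy sketch of the item-based idea (the ratings below are made up, not from MovieLens): each movie is represented by its vector of user ratings, and two movies are "similar" when the cosine of the angle between those vectors is high.

```python
import numpy as np

# Rows are movies, columns are users; 0 means unrated (made-up values).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],   # movie A
    [4.0, 5.0, 0.0, 2.0],   # movie B: rated by the same users as A
    [1.0, 0.0, 5.0, 4.0],   # movie C: rated by a different crowd
])

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(ratings[0], ratings[1]))  # high: A and B look alike
print(cosine_similarity(ratings[0], ratings[2]))  # low: A and C do not
```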
#
#
# ## Data Sets
# I use [MovieLens Datasets](https://grouplens.org/datasets/movielens/latest/).
# This dataset (ml-latest.zip) describes 5-star rating and free-text tagging activity from [MovieLens](http://movielens.org), a movie recommendation service. It contains 27,753,444 ratings and 1,108,997 tag applications across 58,098 movies. These data were created by 283,228 users between January 09, 1995 and September 26, 2018. This dataset was generated on September 26, 2018.
#
# Users were selected at random for inclusion. All selected users had rated at least 1 movie. No demographic information is included. Each user is represented by an id, and no other information is provided.
#
# The data are contained in the files `genome-scores.csv`, `genome-tags.csv`, `links.csv`, `movies.csv`, `ratings.csv` and `tags.csv`.
#
# ## Project Content
# 1. Load data
# 2. Exploratory data analysis
# 3. Train KNN model for item-based collaborative filtering
# 4. Use this trained model to make movie recommendations to myself
# 5. Deep dive into the bottleneck of item-based collaborative filtering.
# - cold start problem
# - data sparsity problem
# - popular bias (how to recommend products from the tail of product distribution)
# - scalability bottleneck
# 6. Further study
# +
import os
import time
# data science imports
import math
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors
# utils import
from fuzzywuzzy import fuzz
# visualization imports
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# %matplotlib inline
# +
# %env DATA_PATH=/home/aferik/Documents/Code-Stuff/Embedded2021-KNN-Recommendation
# path config
data_path = os.path.join(os.environ['DATA_PATH'], 'MovieLens')
movies_filename = 'movies.csv'
ratings_filename = 'ratings.csv'
# -
# ## 1. Load Data
# +
df_movies = pd.read_csv(
os.path.join(data_path, movies_filename),
usecols=['movieId', 'title'],
dtype={'movieId': 'int32', 'title': 'str'})
df_ratings = pd.read_csv(
os.path.join(data_path, ratings_filename),
usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'})
# -
df_movies.info()
df_ratings.info()
df_movies.head()
df_ratings.head()
num_users = len(df_ratings.userId.unique())
num_items = len(df_ratings.movieId.unique())
print('There are {} unique users and {} unique movies in this data set'.format(num_users, num_items))
# ## 2. Exploratory data analysis
# - Plot the counts of each rating
# - Plot rating frequency of each movie
# #### 1. Plot the counts of each rating
# we first need to get the counts of each rating from ratings data
# get count
df_ratings_cnt_tmp = pd.DataFrame(df_ratings.groupby('rating').size(), columns=['count'])
df_ratings_cnt_tmp
# We can see that the table above does not include the count for a rating of zero, so we need to add that to the rating count dataframe as well
# there are a lot more counts in rating of zero
total_cnt = num_users * num_items
rating_zero_cnt = total_cnt - df_ratings.shape[0]
# append the count of zero ratings to df_ratings_cnt
# (DataFrame.append was removed in pandas 2.0, so pd.concat is used instead)
df_ratings_cnt = pd.concat(
    [df_ratings_cnt_tmp, pd.DataFrame({'count': rating_zero_cnt}, index=[0.0])],
    verify_integrity=True,
).sort_index()
df_ratings_cnt
# The count for the zero rating is too large to compare with the others, so let's take the log transform of the count values and then plot them for comparison
# add log count
df_ratings_cnt['log_count'] = np.log(df_ratings_cnt['count'])
df_ratings_cnt
ax = df_ratings_cnt[['count']].reset_index().rename(columns={'index': 'rating score'}).plot(
x='rating score',
y='count',
kind='bar',
figsize=(12, 8),
title='Count for Each Rating Score (in Log Scale)',
logy=True,
fontsize=12,
)
ax.set_xlabel("movie rating score")
ax.set_ylabel("number of ratings")
# It's interesting that more people give rating scores of 3 and 4 than any other score
# #### 2. Plot rating frequency of all movies
df_ratings.head()
# get rating frequency
df_movies_cnt = pd.DataFrame(df_ratings.groupby('movieId').size(), columns=['count'])
df_movies_cnt.head()
# plot rating frequency of all movies
ax = df_movies_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Movies',
fontsize=12
)
ax.set_xlabel("movie Id")
ax.set_ylabel("number of ratings")
# The distribution of ratings among movies often satisfies a property in real-world settings,
# which is referred to as the long-tail property. According to this property, only a small
# fraction of the items are rated frequently. Such items are referred to as popular items. The
# vast majority of items are rated rarely. This results in a highly skewed distribution of the
# underlying ratings.
# Let's plot the same distribution but with log scale
# plot rating frequency of all movies in log scale
ax = df_movies_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Movies (in Log Scale)',
fontsize=12,
logy=True
)
ax.set_xlabel("movie Id")
ax.set_ylabel("number of ratings (log scale)")
# We can see that roughly 10,000 out of 53,889 movies are rated more than 100 times. More interestingly, roughly 20,000 out of 53,889 movies are rated fewer than 10 times. Let's look closer by displaying the top quantiles of rating counts
df_movies_cnt['count'].quantile(np.arange(1, 0.6, -0.05))
# So about 1% of movies have roughly 97,999 or more ratings, 5% have 1,855 or more, and 20% have 100 or more. Since we have so many movies, we'll limit it to the top 25%. This is an arbitrary popularity threshold, but it gives us about 13,500 different movies, which is still a good amount for modeling. There are two reasons why we want to filter down to roughly 13,500 movies in our dataset:
# - Memory issue: we don't want to run into a "MemoryError" during model training
# - Improve KNN performance: lesser-known movies have ratings from fewer viewers, making their patterns noisier. Dropping less-known movies can improve recommendation quality
# filter data
popularity_thres = 531.0 # Default value: 50 / I have changed this in order to work with a smaller dataset
popular_movies = list(set(df_movies_cnt.query('count >= @popularity_thres').index))
df_ratings_drop_movies = df_ratings[df_ratings.movieId.isin(popular_movies)]
print('shape of original ratings data: ', df_ratings.shape)
print('shape of ratings data after dropping unpopular movies: ', df_ratings_drop_movies.shape)
# After dropping 90% of the movies in our dataset (75% with the default threshold), we still have a very large dataset. So next we can filter users to further reduce the size of the data
# get number of ratings given by every user
df_users_cnt = pd.DataFrame(df_ratings_drop_movies.groupby('userId').size(), columns=['count'])
df_users_cnt.head()
# plot rating frequency of all movies
ax = df_users_cnt \
.sort_values('count', ascending=False) \
.reset_index(drop=True) \
.plot(
figsize=(12, 8),
title='Rating Frequency of All Users',
fontsize=12
)
ax.set_xlabel("user Id")
ax.set_ylabel("number of ratings")
df_users_cnt['count'].quantile(np.arange(1, 0.5, -0.05))
# We can see that the distribution of ratings by users is very similar to the distribution of ratings among movies: both have the long-tail property. Only a very small fraction of users actively rate the movies they watch; the vast majority of users aren't interested in rating movies. So we can limit users to the top 40%, which is about 113,291 users.
# filter data
ratings_thres = 232 # Default value: 50 / I have changed this in order to work with a smaller dataset
active_users = list(set(df_users_cnt.query('count >= @ratings_thres').index))
df_ratings_drop_users = df_ratings_drop_movies[df_ratings_drop_movies.userId.isin(active_users)]
print('shape of original ratings data: ', df_ratings.shape)
print('shape of ratings data after dropping both unpopular movies and inactive users: ', df_ratings_drop_users.shape)
# ## 3. Train KNN model for item-based collaborative filtering
# - Reshaping the Data
# - Fitting the Model
# #### 1. Reshaping the Data
# For K-Nearest Neighbors, we want the data to be in a (movie, user) array, where each row is a movie and each column is a different user. To reshape the dataframe, we'll pivot it to the wide format with movies as rows and users as columns. Then we'll fill the missing observations with 0s since we're going to be performing linear algebra operations (calculating distances between vectors). Finally, we transform the values of the dataframe into a scipy sparse matrix for more efficient calculation.
# +
# pivot and create movie-user matrix
movie_user_mat = df_ratings_drop_users.pivot(index='movieId', columns='userId', values='rating').fillna(0)
# create mapper from movie title to index
movie_to_idx = {
movie: i for i, movie in
enumerate(list(df_movies.set_index('movieId').loc[movie_user_mat.index].title))
}
# transform matrix to scipy sparse matrix
movie_user_mat_sparse = csr_matrix(movie_user_mat.values)
# -
# movie_user_mat array contains for each movie the rate of all the selected users
movie_user_mat.shape
# +
# %env DATASET_PATH=/home/aferik/Documents/Code-Stuff/Embedded2021-KNN-Recommendation/dataset.csv
movie_user_mat.to_csv(os.environ['DATASET_PATH'], sep=',', encoding='utf-8')
# +
# %env NAME_ID_MAPPING_FILE_PATH=/home/aferik/Documents/Code-Stuff/Embedded2021-KNN-Recommendation/nameIdMapping.csv
with open(os.environ['NAME_ID_MAPPING_FILE_PATH'], 'w') as f:
for movieName in movie_to_idx:
idx = movie_to_idx[movieName]
outputLine = str(idx) + ',' + str(movieName)
print(outputLine, file=f)
# -
# #### 2. Fitting the Model
# Time to implement the model. We'll initialize the NearestNeighbors class as model_knn and fit our sparse matrix to the instance. By specifying `metric='cosine'`, the model will measure similarity between movie vectors using cosine similarity.
# %env JOBLIB_TEMP_FOLDER=/tmp
# define model
model_knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=20, n_jobs=-1)
# fit
model_knn.fit(movie_user_mat_sparse)
# ## 4. Use this trained model to make movie recommendations to myself
# And we're finally ready to make some recommendations!
# +
def fuzzy_matching(mapper, fav_movie, verbose=True):
"""
return the closest match via fuzzy ratio. If no match found, return None
Parameters
----------
mapper: dict, map movie title name to index of the movie in data
fav_movie: str, name of user input movie
verbose: bool, print log if True
Return
------
index of the closest match
"""
match_tuple = []
# get match
for title, idx in mapper.items():
ratio = fuzz.ratio(title.lower(), fav_movie.lower())
if ratio >= 60:
match_tuple.append((title, idx, ratio))
# sort
match_tuple = sorted(match_tuple, key=lambda x: x[2])[::-1]
if not match_tuple:
print('Oops! No match is found')
return
if verbose:
print('Found possible matches in our database: {0}\n'.format([x[0] for x in match_tuple]))
return match_tuple[0][1]
def make_recommendation(model_knn, data, mapper, fav_movie, n_recommendations):
"""
return top n similar movie recommendations based on user's input movie
Parameters
----------
model_knn: sklearn model, knn model
data: movie-user matrix
mapper: dict, map movie title name to index of the movie in data
fav_movie: str, name of user input movie
n_recommendations: int, top n recommendations
Return
------
list of top n similar movie recommendations
"""
# fit
model_knn.fit(data)
# get input movie index
print('You have input movie:', fav_movie)
idx = fuzzy_matching(mapper, fav_movie, verbose=True)
if idx is None:  # fuzzy_matching returns None when no match is found
    return
# inference
print('Recommendation system starting to make inference')
print('......\n')
distances, indices = model_knn.kneighbors(data[idx], n_neighbors=n_recommendations+1)
# get list of raw idx of recommendations
raw_recommends = \
sorted(list(zip(indices.squeeze().tolist(), distances.squeeze().tolist())), key=lambda x: x[1])[:0:-1]
# get reverse mapper
reverse_mapper = {v: k for k, v in mapper.items()}
# print recommendations
print('Recommendations for {}:'.format(fav_movie))
for i, (idx, dist) in enumerate(raw_recommends):
print('{0}: {1}, with distance of {2}'.format(i+1, reverse_mapper[idx], dist))
# +
my_favorite = 'Iron Man'
make_recommendation(
model_knn=model_knn,
data=movie_user_mat_sparse,
fav_movie=my_favorite,
mapper=movie_to_idx,
n_recommendations=10)
# -
# It is very interesting that my **KNN** model recommends movies that were also produced in very similar years. However, the cosine distances of all those recommendations are actually quite small. This is probably because there are too many zero values in our movie-user matrix. With too many zeros, data sparsity becomes a real issue for the **KNN** model and the distances in the **KNN** model start to fall apart. So I'd like to dig deeper and look closer at our data.
# #### (extra inspection)
# Let's now look at how sparse the movie-user matrix is by calculating percentage of zero values in the data.
# calculate total number of entries in the movie-user matrix
num_entries = movie_user_mat.shape[0] * movie_user_mat.shape[1]
# calculate total number of entries with zero values
num_zeros = (movie_user_mat==0).sum(axis=1).sum()
# calculate ratio of number of zeros to number of entries
ratio_zeros = num_zeros / num_entries
print('About {:.2%} of the ratings in our data are missing'.format(ratio_zeros))
# This result confirms my hypothesis: the vast majority of entries in our data are zero. This explains why the distances between similar items and between opposite items are both pretty large.
# ## 5. Deep dive into the bottleneck of item-based collaborative filtering.
# - cold start problem
# - data sparsity problem
# - popular bias (how to recommend products from the tail of product distribution)
# - scalability bottleneck
# We saw that 98.35% of user-movie interactions are not yet recorded, even after I filtered out less-known movies and inactive users. Apparently, we don't have sufficient information for the system to make reliable inferences for users or items. This is called the **cold start** problem in recommender systems.
#
# There are three cases of cold start:
#
# 1. New community: refers to the start-up of the recommender when, although a catalogue of items might exist, almost no users are present, and the lack of user interactions makes it very hard to provide reliable recommendations
# 2. New item: a new item is added to the system, it might have some content information but no interactions are present
# 3. New user: a new user registers and has not provided any interaction yet, therefore it is not possible to provide personalized recommendations
#
# We are not concerned with the last case because we can use item-based filtering to make recommendations for new users. In our case, we are more concerned with the first two, especially the second.
#
# The item cold-start problem refers to items added to the catalogue that have either no interactions or very few. This is a problem mainly for collaborative filtering algorithms because they rely on an item's interactions to make recommendations. If no interactions are available, a pure collaborative algorithm cannot recommend the item. If only a few interactions are available, a collaborative algorithm can still recommend it, but the quality of those recommendations will be poor. This raises another issue, one no longer related to new items, but rather to unpopular items. In some cases (e.g. movie recommendations) a handful of items may receive an extremely high number of interactions, while most items receive only a fraction of them. This is also referred to as popularity bias. Recall the long-tail, skewed distribution in the earlier movie rating frequency plot.
#
# In addition, scalability is also a big issue for the KNN model. Its time complexity is O(nd + kn), where n is the cardinality of the training set and d is the dimension of each sample. KNN also takes more time making inferences than training, which increases prediction latency
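# The asymmetry between training and inference can be sketched with sklearn's brute-force `NearestNeighbors` on toy random data (the sizes below are chosen arbitrarily): `fit` essentially just stores the matrix, while every `kneighbors` call scans all n stored samples, so query cost grows linearly with the training set.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

n, d, k = 5000, 64, 5
X = np.random.rand(n, d)

nn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=k)
nn.fit(X)  # brute force: fitting only stores X

# Each query computes the cosine distance to all n samples: O(n*d) per query.
distances, indices = nn.kneighbors(X[:1], n_neighbors=k)
print(indices.shape)   # (1, 5)
print(indices[0][0])   # 0: a point's nearest neighbor is itself
```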
# ## 6. Further study
#
# Use Spark's ALS to address the problems above
| various/KNN-Movie-Recommendation-System.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCamHD
# language: python
# name: pycamhd
# ---
# # Time-Lapse Movie
# This notebook shows how to make a time-lapse animation from a set of CamHD videos. This notebook relies on the scene log compiled by <NAME> and <NAME>.
#
# #### Setup your environment
# %matplotlib inline
import pycamhd as camhd
import numpy as np
import matplotlib.pyplot as plt
# #### Ingest the Rutgers scene log into a nested list
import requests
import csv
scene_log_url = 'https://raw.githubusercontent.com/ooi-data-review/camhd_metadata/master/scene_timing/zoom0_scenes.csv'
scene_log_page = requests.get(scene_log_url)
scene_log_content = scene_log_page.content.decode('utf-8')
scene_log_csv = csv.reader(scene_log_content.splitlines(), delimiter=',')
scene_log = list(scene_log_csv)
# #### Get a list of local CamHD files to process
import glob
filenames = glob.glob('/data/*.mov')
# #### Get a list of frame numbers to process using the log file
# Here we set frame_number to 99999 whenever there is no entry in the metadata for a particular filename.
frame_numbers = []
for filename in filenames:
    file_exists = False  # initialize the flag for each file so it is always defined
    for row in scene_log:
        if filename[6:32] in row[0]:
            file_exists = True
            next_frame_time = row[7].split(':') # this is the seventh scene showing a bacterial mat at the base of Mushroom
            if len(next_frame_time) == 3:
                frame_time = next_frame_time
    if file_exists:
        frame_numbers.append(int(round((int(frame_time[1])*60 + int(frame_time[2]))*29.95)) + 60)
    else:
        frame_numbers.append(99999)
# #### Show the first image of the time-lapse and save it for the cover image
plt.rc('figure', figsize=(8, 8))
frame = camhd.get_frame(filenames[0], frame_numbers[0], 'rgb24')
imgplot = plt.imshow(frame)
from numpngw import write_png
from scipy.misc import imresize
write_png('time_lapse.png', imresize(frame, (270, 480)))
# #### Loop through the file list to generate an MP4 using an FFMPEG pipe
# +
# %%time
import subprocess as sp
command = [ 'ffmpeg',
'-y', #overwrite output file if it exists
'-f', 'rawvideo',
'-vcodec','rawvideo',
'-s', '1920x1080', # size of input frame
'-pix_fmt', 'rgb24',
'-r', '30', # output frame rate
'-i', '-', # input from pipe
'-an', # no audio
'-vf', 'scale=480x270',
'-c:v', 'h264',
'-preset', 'veryfast',
'-crf', '18',
'-pix_fmt', 'yuv420p',
'time_lapse.mp4' ]
pipe = sp.Popen(command, stdin=sp.PIPE, stderr=sp.PIPE)
for i, filename in enumerate(filenames):
if frame_numbers[i] != 99999:
frame = camhd.get_frame(filename, frame_numbers[i], 'rgb24')
pipe.stdin.write(frame.tobytes())  # tostring() is a deprecated alias of tobytes()
pipe.stdin.flush() # ensure nothing is left in the buffer
pipe.terminate()
# -
# #### Show the video using HTML5 magic
# %%HTML
<video width="480" height="270" controls poster="time_lapse.png">
<source src="time_lapse.mp4" type="video/mp4">
</video>
# [The HTML5 video will not render in GitHub, but will show in your notebook when working on the CamHD Compute Engine.]
# ### References
#
# PyCamHD: https://github.com/tjcrone/pycamhd<br>
# CamHDHub: https://github.com/tjcrone/camhdhub<br>
# Raw Data Archive: https://rawdata.oceanobservatories.org/files/RS03ASHS/PN03B/06-CAMHDA301/<br>
# CamHD Metadata: https://github.com/ooi-data-review/camhd_metadata
| examples/time_lapse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Standalone quad-precision LP solver interface
# +
from cobra.io import load_json_model
from qminospy.solver import QMINOS
import numpy as np
solver = QMINOS()
# -
# ### Usually, you can use the default solver options and just set
# `precision = 'quad'`
# (or 'dq' or 'dqq')
# for quad precision, and
# `precision = 'double'`
# for double precision.
# +
#==================================================
# Runtime options that stay constant for Double and Quad (or DQQ) are set here
#==================================================
# Double for all runs
solver.opt_intdict['lp_d']['Expand frequency'] = 100000
solver.opt_intdict['lp_d']['Scale option'] = 2
solver.opt_realdict['lp_d']['Feasibility tol'] = 1e-7
solver.opt_realdict['lp_d']['Optimality tol'] = 1e-7
solver.opt_realdict['lp_d']['LU factor tol'] = 1.9
solver.opt_realdict['lp_d']['LU update tol'] = 1.9
# Quad for DQQ
solver.opt_intdict['lp']['Expand frequency'] = 100000
solver.opt_intdict['lp']['Scale option'] = 2
solver.opt_realdict['lp']['Feasibility tol'] = 1e-15
solver.opt_realdict['lp']['Optimality tol'] = 1e-15
solver.opt_realdict['lp']['LU factor tol'] = 10.0
solver.opt_realdict['lp']['LU update tol'] = 10.0
# +
#==================================================
# Load model and extract matrices
#==================================================
import os
filei = os.path.join('models','iJO1366.json')
mdl = load_json_model(filei)
mdl = mdl.to_array_based_model()
A = mdl.S
b = mdl.b
c = mdl.objective_coefficients
xl = mdl.lower_bounds
xu = mdl.upper_bounds
csense= mdl.constraint_sense
#==================================================
# Solve
#==================================================
# Quad solution
print('Solving with Quad')
xq,statq,hsq = solver.solvelp(A,b,c,xl,xu,csense,precision='quad')
nRxns = len(c)
fq = np.dot(c, xq[0:nRxns])
# Double solution
print('Solving with Double')
xd,statd,hsd = solver.solvelp(A,b,c,xl,xu,csense,precision='double')
fd = np.dot(c, xd[0:nRxns])
# What was the objective?
obj_rxn = str(mdl.objective)
max_abs_c = max(abs(mdl.objective_coefficients))
# -
# # What was the status of the solver?
print('Quad status: %s' % statq)
print('Double status: %s' % statd)
# # By how much did objectives differ between Quad and Double?
df = fq - fd
print('Max objective difference between Quad and Double: %g'%df)
| examples/test_quadLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
#
# <a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>
# ___
# <center><em>Content Copyright by <NAME></em></center>
# # Decorators Homework (Optional)
#
# Since you won't run into decorators until further in your coding career, this homework is optional. Check out the Web Framework [Flask](http://flask.pocoo.org/). You can use Flask to create web pages with Python (as long as you know some HTML and CSS) and they use decorators a lot! Learn how they use [view decorators](http://flask.pocoo.org/docs/0.12/patterns/viewdecorators/). Don't worry if you don't completely understand everything about Flask, the main point of this optional homework is that you have an awareness of decorators in Web Frameworks. That way if you decide to become a "Full-Stack" Python Web Developer, you won't find yourself perplexed by decorators. You can also check out [Django](https://www.djangoproject.com/) another (and more popular) web framework for Python which is a bit more heavy duty.
#
# Also for some additional info:
#
# A framework is a type of software library that provides generic functionality which can be extended by the programmer to build applications. Flask and Django are good examples of frameworks intended for web development.
#
# A framework is distinguished from a simple library or API. An API is a piece of software that a developer can use in his or her application. A framework is more encompassing: your entire application is structured around the framework (i.e. it provides the framework around which you build your software).
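# To make the pattern concrete, here is a minimal sketch of what a "view decorator" like Flask's `login_required` does, written in plain Python so it runs anywhere (the names and the fake `current_user` dict are illustrative, not Flask's actual API):

```python
from functools import wraps

current_user = {"authenticated": False}  # stand-in for a real session

def login_required(view):
    # Wrap a view function so it only runs for authenticated users.
    @wraps(view)
    def wrapped(*args, **kwargs):
        if not current_user["authenticated"]:
            return "403: please log in"
        return view(*args, **kwargs)
    return wrapped

@login_required
def secret_page():
    return "the secret page"

print(secret_page())                   # 403: please log in
current_user["authenticated"] = True
print(secret_page())                   # the secret page
```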
#
# ## Great job!
| 4-assets/BOOKS/Jupyter-Notebooks/02-Decorators_Homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import requests
s = pd.Series(np.random.randn(4), name ='daily returns')
print(s)
s*100
np.max(s)
s.describe()
s.index = ['test','test1', 'test2', 'test3']
s
s['test']
s['test'] = 0
s
df = pd.read_csv('test_pwt.csv')
df
df = pd.read_csv('https://python-programming.quantecon.org/_static/lecture_specific/pandas/data/test_pwt.csv')
df
type(df)
df[1:2]
df[['country', 'country isocode']]
df.iloc[3:5, 0:4]
df.loc[:4, ['country isocode', 'POP']]
df = df[['country', 'POP', 'tcgdp']]
df
df = df.set_index('country')
df
df.columns = 'Population', 'Total GDP'
df
df['Population'] = df['Population'] * 1e3
df
df['GDP percap'] = df['Total GDP']*1e6/df['Population']
df
ax = df['GDP percap'].plot(kind='bar')
ax.set_xlabel('Country')
ax.set_ylabel('GDP per capita')
df = df.sort_values(by='GDP percap', ascending=True)
ax = df['GDP percap'].plot(kind='bar')
# +
url = 'http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv'
source = requests.get(url).content.decode().split("\n")
print(source[0], source[1], source[2])
# +
url = 'http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv'
data = pd.read_csv(url, index_col=0, parse_dates=True)
pd.set_option('display.precision', 1)  # the bare 'precision' option was removed in newer pandas
data.describe()
# -
ax = data['1990':'2020'].plot(title='US Unemployment Rate', legend=False)
ax.set_xlabel('Year', fontsize=12)
ax.set_ylabel('%', fontsize=12)
# +
from pandas_datareader import wb
govt_debt = wb.download(indicator='GC.DOD.TOTL.GD.ZS', country=['US', 'AU'], start=2005, end=2016).stack().unstack(0)
ind = govt_debt.index.droplevel(-1)
govt_debt.index = ind
ax = govt_debt.plot(lw=2)
ax.set_xlabel('year', fontsize=12)
plt.title("Government Debt to GDP (%)")
plt.show()
# +
import datetime as dt
import pandas as pd
import pandas_datareader as data
ticker_list = {'INTC': 'Intel',
'MSFT': 'Microsoft',
'IBM': 'IBM',
'BHP': 'BHP',
'TM': 'Toyota',
'AAPL': 'Apple',
'AMZN': 'Amazon',
'BA': 'Boeing',
'QCOM': 'Qualcomm',
'KO': 'Coca-Cola',
'GOOG': 'Google',
'SNE': 'Sony',
'PTR': 'PetroChina'}
def read_data(ticker_list, start=dt.datetime(2019,1,2), end=dt.datetime(2019,12,31)):
ticker = pd.DataFrame()
for tick in ticker_list:
prices = data.DataReader(tick, 'yahoo', start, end)
closing_prices = prices['Close']
ticker[tick] = closing_prices
return ticker
ticker = read_data(ticker_list)
first = ticker.iloc[0]
last = ticker.iloc[-1]
norm_change = 100*((last/first)-1)
norm_change.sort_values(ascending=True, inplace=True)
norm_change = norm_change.rename(index=ticker_list)
norm_change.plot(kind='bar')
# +
import matplotlib.pyplot as plt
import datetime as dt
import pandas as pd
import pandas_datareader as data
import numpy as np
indices_list = {'^GSPC': 'S&P 500',
'^IXIC': 'NASDAQ',
'^DJI': 'Dow Jones',
'^N225': 'Nikkei'}
def read_data(indices_list, start=dt.datetime(1970,1,2), end=dt.datetime(2019,12,31)):
indices = pd.DataFrame()
for index in indices_list:
prices = data.DataReader(index, 'yahoo', start, end)
closing_prices = prices['Close']
indices[index] = closing_prices
return indices
indices_data = read_data(indices_list)
yearly_returns = pd.DataFrame()
for index, name in indices_list.items():
p1 = indices_data.groupby(indices_data.index.year)[index].first()
p2 = indices_data.groupby(indices_data.index.year)[index].last()
yearly = 100*((p2/p1)-1)  # percent change from the year's first close to its last
yearly_returns[name] = yearly
fig, ax = plt.subplots(2,2, figsize=(16,7))
for i, ax in enumerate(ax.flatten()):
index_name = yearly_returns.columns[i]
ax.plot(yearly_returns[index_name])
ax.set_ylabel('Percent Change')
ax.set_title(index_name)
# -
| Exercises/QuantEcon/Packages/Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 9.1
# language: sage
# name: sagemath
# ---
a = 10
b = a
b = b+1
a, b
L = [1,2,3,4]
A = L
B = A
B[0] = 3 # [3,2,3,4]
A[-1] = 1 # [3,2,3,1]
sum(B[:3]) - B[-1]
B[:3] , B[-1]
[7^k%100 for k in [1..10]]
20210417 % 4
| 20210417/6 - Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7
# language: python
# name: py37
# ---
import numpy as np
import torch
# # Numpy
# +
# flatten
a = np.random.random((1, 2, 3))
b = a.reshape(-1)
print(a.shape)
print(b.shape)
# +
# Add dimension
a = np.random.random((6,))
b = a.reshape(1, -1)
print(a.shape)
print(b.shape)
# +
# -1: autofit dim
a = np.random.random((1, 2, 3))
b = a.reshape(-1, 3)
print(a.shape)
print(b.shape)
# -
# # Torch 1 tensor ops
# #### view (reshape)
# +
# flatten
a = torch.rand((1, 2, 3))
b = a.view(-1)
print(a.shape)
print(b.shape)
# +
# Add dimension
a = torch.rand(5)
print(a.size())
b = a.view(1, -1)
print(b.size())
# +
# -1: autofit dim
a = torch.rand(1,2,3)
b = a.view(-1, 2)
print(a.size())
print(b.size())
# -
# #### Permute, transpose
# +
# Permute: swap dimension
a = torch.rand(1,2,3)
b = a.permute(0, 2, 1)
print(a.size())
print(b.size())
# +
# transpose - For 2D tensors only
a = torch.rand(2, 3)
print(a.size())
b = a.t()
print(b.size())
# -
# #### squeeze, unsqueeze
# +
# squeeze
a = torch.rand(1,5) # (1,5)
print(a.squeeze().size())
# +
# unsqueeze
a = torch.rand(5) # (5)
print(a.unsqueeze(dim=0).size())
print(a.unsqueeze(dim=1).size())
# -
# #### repeat
# - Repeat the tensor along each dimension a given number of times
# +
a = torch.tensor([
[1,2,3],
[4,5,6]
]) # (2,3)
res = a.repeat(1,1)
print(res, res.size())
res = a.repeat(1,2)
print(res, res.size())
res = a.repeat(1,3)
print(res, res.size())
# +
res = a.repeat(2,1)
print(res, res.size())
res = a.repeat(3,1)
print(res, res.size())
# -
res = a.repeat(2,2)
print(res, res.size())
# # Torch 2+ tensor ops
# #### stack
# +
a = torch.tensor([
[1,2,3],
[4,5,6]
]) # (2,3)
b = torch.Tensor([
[11,12,13],
[14,15,16]
]) # (2,3)
res = torch.stack([a,b], dim=0)
print(res, res.size())
res = torch.stack([a,b], dim=1)
print(res, res.size())
res = torch.stack([a,b], dim=2)
print(res, res.size())
# -
# #### concat
# +
a = torch.tensor([
[1,2,3],
[4,5,6]
])
b = torch.Tensor([
[11,12,13],
[14,15,16]
])
res = torch.cat([a,b], dim=0)
print(res, res.size())
res = torch.cat([a,b], dim=1)
print(res, res.size())
# -
# ## Save/Load tensor to a file
a = torch.rand(10, 3, 3)
# Save
torch.save(a, 'a_tensor.pt')
# Load
a_load = torch.load('a_tensor.pt')
print(a_load.size())
# # Calculate mean, var of a big dataset
# #### Example 1: get mean, var along dim=2 (257)
# +
import torch
from torch.utils.data import TensorDataset, DataLoader
# Dataset
N = 1000
X = torch.randn(N, 1937, 257)
y = torch.randn(N, 1)
# Dataloader
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=8)
# Calc mean, var
mean = 0.0
variance = 0.0
N_ = 0
for Xb, yb in loader:
N_ += Xb.size(0)
mean += Xb.mean(1).sum(0)
variance += Xb.var(1).sum(0)
mean /= N_
variance /= N_
# -
print(mean.size(), variance.size())
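# Caveat: the loop above averages each sample's variance, which approximates
# but does not equal the variance over the whole dataset. An exact streaming
# computation accumulates sums and sums of squares instead; a sketch on toy
# data small enough to verify against a single pass:

```python
import torch

# Toy stand-in for the (N, 1937, 257) dataset, small enough to verify
X = torch.randn(100, 7, 5, dtype=torch.float64)

total = torch.zeros(5, dtype=torch.float64)
total_sq = torch.zeros(5, dtype=torch.float64)
count = 0
for Xb in X.split(8):                       # mimic iterating a DataLoader
    total += Xb.sum(dim=(0, 1))
    total_sq += (Xb ** 2).sum(dim=(0, 1))
    count += Xb.size(0) * Xb.size(1)

mean = total / count
var = total_sq / count - mean ** 2          # E[X^2] - E[X]^2 (population var)

ref = X.reshape(-1, 5)                      # single-pass reference
print(torch.allclose(mean, ref.mean(0)))
print(torch.allclose(var, ref.var(0, unbiased=False)))
```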
# #### Example 2: get mean, var along C-dim of Image dataset (N, C, W, H)
# +
import torch
from torch.utils.data import TensorDataset, DataLoader
# Dataset
N = 1000
X = torch.randn(N, 3, 128, 128)
y = torch.randn(N, 1)
# Dataloader
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=8)
# Calc mean, std
mean = 0.0
std = 0.0
N_ = 0
for Xb, yb in loader:
# Update total number of images
N_ += Xb.size(0)
# Rearrange batch to be the shape of [B, C, W * H]
Xb = Xb.view(Xb.size(0), Xb.size(1), -1)
# Compute mean and std here
mean += Xb.mean(2).sum(0)
std += Xb.std(2).sum(0)
mean /= N_
std /= N_
# -
print(mean.size(), std.size())
| Pytorch/Notes/Notes_on_Tensors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Final merge of all data from 2010 to 2019, using the cleaned datasets from the 4 sources, and a look at the characteristics of the final dataset
# +
import pandas as pd
# merge movie_alt (year 2019, which I used to test the merging validity) with movie_alt1, the cleaned data for 2010 to 2018
movie_complete = pd.read_csv('/Users/Amin/Documents/GitHub/Movie_boxoffice_reviews/data/processed/movie_alt1.csv')
movie_alt = pd.read_csv('/Users/Amin/Documents/GitHub/Movie_boxoffice_reviews/data/processed/movie_alt.csv')
#drop unwanted columns
movie_alt.drop(columns=['imdb_title_id','Runtime'],inplace=True,)
# rearrange the columns for better readability
new_col = ['Title', 'Alternative Title','director','Year' ,'date_published','Worldwide_Lifetime_Gross', 'Domestic_Lifetime_Gross', 'Foreign_Lifetime_Gross', 'tomatometer_rating', 'tomatometer_count', 'tomato_audience_rating', 'tomato_audience_vote_count', 'imdb_avg_vote', 'imdb_total_votes', 'imdb_us_voters_rating', 'imdb_us_voters_votes', 'imdb_non_us_voters_rating', 'imdb_non_us_voters_votes', 'imdb_females_avg_vote', 'imdb_females_votes', 'imdb_males_avg_vote', 'imdb_males_votes', 'Metascore']
movie_alt = movie_alt[new_col]
# concatenate the two dataframes
movie_final = pd.concat([movie_complete,movie_alt])
movie_final.drop(columns=['Unnamed: 0'],inplace=True,)
# -
print(movie_final.shape)
print(movie_final.head(5))
print(movie_final.describe())
movie_final.dtypes
movie_final.isna().sum()
movie_final.Title.value_counts()
### cleaning the repeated titles
movie_final.drop([1447,1448,1450,1451,1452,1453,1454,1455,1456,1457,1458,1459,1460,1461,1462,1463,1464],inplace=True)
movie_final[movie_final.Title == 'Home']
movie_final.drop([857,858,859,860,862,863],inplace=True)
movie_final[movie_final.Title == 'The Double']
movie_final.drop([418,419,420,421,423,424],inplace=True)
movie_final[movie_final.Title == 'Frozen']
movie_final.drop([1518,1520,1521,1523,1524,1525],inplace=True)
movie_final[movie_final.Title == 'Concussion']
movie_final.drop([841,843,844,846,847,848],inplace=True)
movie_final[movie_final.Title == 'Inside Out']
movie_final.Title.value_counts()
movie_final.drop([188,189,190,191,193,194],inplace=True)
movie_final[movie_final.Title == 'Killers']
movie_final.drop([2121,2122,2123,2125],inplace=True)
movie_final[movie_final.Title == 'Victoria']
movie_final.drop([1434,1435,1436],inplace=True)
movie_final[movie_final.Title == 'The Other Woman']
movie_final.drop([880,881,883],inplace=True)
movie_final[movie_final.Title == 'Stolen']
movie_final.Title.value_counts()
movie_final.drop([1007,1008,1009],inplace=True)
movie_final[movie_final.Title == 'Eden']
movie_final.drop([337,338],inplace=True)
movie_final[movie_final.Title == 'Paradise']
movie_final.drop([1824,1825,1826],inplace=True)
movie_final[movie_final.Title == 'Extraterrestrial']
movie_final.drop([1642,1643,1644],inplace=True)
movie_final[movie_final.Title == 'Gloria']
movie_final.drop([95,807],inplace=True)
movie_final[movie_final.Title == 'Little Women']
movie_final.drop([829,830,831],inplace=True)
movie_final[movie_final.Title == 'Intruders']
movie_final.Title.value_counts()
movie_final.drop([1651,1652,1654],inplace=True)
movie_final[movie_final.Title == 'The Boy']
movie_final.drop([986,987],inplace=True)
movie_final[movie_final.Title == 'Anna']
movie_final.drop([1413],inplace=True)
movie_final[movie_final.Title == 'Barbara']
movie_final.drop([438,349],inplace=True)
movie_final[movie_final.Title == 'Brotherhood']
movie_final.Title.value_counts()
movie_final.drop([1843,1844],inplace=True)
movie_final[movie_final.Title == 'The Guest']
movie_final.drop(605,inplace=True)
movie_final[movie_final.Title == 'Lucky']
movie_final.drop(1916,inplace=True)
movie_final[movie_final.Title == 'Blackbird']
movie_final.drop([979,980],inplace=True)
movie_final[movie_final.Title == 'Weekend']
movie_final.Title.value_counts()
movie_final.drop([956,957],inplace=True)
movie_final[movie_final.Title == 'The Hunter']
movie_final.drop([1850,1851],inplace=True)
movie_final[movie_final.Title == 'Boy Meets Girl']
movie_final.drop(727,inplace=True)
movie_final[movie_final.Title == 'Aftermath']
movie_final.drop([603,604],inplace=True)
movie_final[movie_final.Title == 'Escape Room']
movie_final.Title.value_counts()
movie_final.drop([1081,1082],inplace=True)
movie_final[movie_final.Title == 'The Reunion']
movie_final.drop([1130,1131],inplace=True)
movie_final[movie_final.Title == 'The King']
movie_final.drop(23,inplace=True)
movie_final[movie_final.Title == 'Coco']
movie_final.drop(710,inplace=True)
movie_final[movie_final.Title == 'Outcast']
movie_final.Title.value_counts()
movie_final.drop(1535,inplace=True)
movie_final[movie_final.Title == 'Wild']
movie_final.drop(996,inplace=True)
movie_final[movie_final.Title == 'Innocence']
movie_final.drop(930,inplace=True)
movie_final[movie_final.Title == 'Creature']
movie_final.drop(1023,inplace=True)
movie_final[movie_final.Title == 'Dog Days']
movie_final.Title.value_counts()
movie_final.drop(876,inplace=True)
movie_final[movie_final.Title == 'Night School']
movie_final.drop(521,inplace=True)
movie_final[movie_final.Title == 'Euphoria']
movie_final.drop(654,inplace=True)
movie_final[movie_final.Title == 'Anonymous']
# +
movie_final.Title.value_counts()
# -
movie_final.drop(1022,inplace=True)
movie_final[movie_final.Title == 'Blood Ties']
movie_final.drop(500,inplace=True)
movie_final[movie_final.Title == 'The Hero']
# +
movie_final.Title.value_counts()
# -
movie_final.drop(1941,inplace=True)
movie_final[movie_final.Title == 'Captive']
# +
movie_final.Title.value_counts()
# -
movie_final.drop(712,inplace=True)
movie_final[movie_final.Title == 'A Better Life']
movie_final.drop(735,inplace=True)
movie_final[movie_final.Title == 'War Horse']
movie_final.drop(576,inplace=True)
movie_final[movie_final.Title == 'The Kitchen']
movie_final.Title.value_counts()
movie_final.drop(165,inplace=True)
movie_final[movie_final.Title == 'Parental Guidance']
movie_final.drop(1500,inplace=True)
movie_final[movie_final.Title == 'A Perfect Man']
movie_final.Title.value_counts()
movie_final.drop(1002,inplace=True)
movie_final[movie_final.Title == 'The Silence']
movie_final.drop([554,555,556],inplace=True)
movie_final[movie_final.director=='<NAME>']
movie_final.drop([549,551,552],inplace=True)
movie_final[movie_final.director=='<NAME>']
movie_final.Title.value_counts()
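# The long chain of hand-picked `drop(...)` index calls above is fragile: the
# index labels shift if the upstream data changes at all. A hypothetical
# alternative is `drop_duplicates` on a key that identifies a film. Note that
# some of the "duplicates" above are genuinely different films sharing a
# title, so the subset key must include distinguishing columns; the
# (Title, Year, director) key below is an assumption to adapt to the real data.

```python
import pandas as pd

# Toy frame standing in for movie_final
df = pd.DataFrame({
    'Title':    ['Home', 'Home', 'Frozen', 'Frozen'],
    'Year':     [2015,   2015,   2013,     2013],
    'director': ['A',    'A',    'B',      'B'],
    'gross':    [10,     10,     20,       20],
})
deduped = df.drop_duplicates(subset=['Title', 'Year', 'director'], keep='first')
print(deduped.shape)   # (2, 4)
```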
# save the final dataset and print its characteristics
movie_final.to_csv('/Users/Amin/Documents/GitHub/Movie_boxoffice_reviews/data/processed/movie_db_fianl1.csv')
movie_final.columns
df = pd.read_csv('/Users/Amin/Desktop/mojo.csv')
df.columns
movie_2022 = movie_final.merge(df,left_on='Title',right_on='Title',how='left')
movie_2022.shape
movie_2022
movie_2022.to_csv('/Users/Amin/Desktop/temp.csv')
db_fix = pd.read_csv('/Users/Amin/Documents/GitHub/Movie_boxoffice_reviews/data/interim/movie_db_fix.csv',encoding = "ISO-8859-1", engine='python')
db_fix.drop(columns=['Unnamed: 0','Metascore','Unnamed: 0.1','index','Year_y'],inplace=True)
db_fix.dropna(inplace=True)
db_fix.isna().sum()
db_fix = db_fix.rename(columns={'Year_x':'Year','Worldwide_Lifetime_Gross':'All_time_gross','tomatometer_rating':'tom_cri_vote',
'tomatometer_count':'tom_cri_num','tomato_audience_rating':'tom_aud_vote','tomato_audience_vote_count':'tom_aud_num',
'imdb_avg_vote':'imdb_vote','imdb_total_votes':'imdb_num','imdb_us_voters_rating':'imdb_us_vote',
'imdb_us_voters_votes':'imdb_us_num','imdb_non_us_voters_rating':'imdb_nus_vote','imdb_non_us_voters_votes':'imdb_nus_num',
'imdb_females_avg_vote':'imdb_fem_vote','imdb_females_votes':'imdb_fem_num','imdb_males_avg_vote':'imdb_mal_vote',
'imdb_males_votes':'imdb_mal_num','mc_av':'met_cri_vote','mc_no':'met_cri_num','ma_av':'met_aud_vote','ma_no':'met_aud_num'})
db_fix.columns
new_col=['Title', 'Alternative Title', 'director', 'Year', 'date_published',
'All_time_gross', 'Domestic_Lifetime_Gross', 'Foreign_Lifetime_Gross',
'tom_cri_vote', 'tom_cri_num', 'tom_aud_vote', 'tom_aud_num','met_cri_vote', 'met_cri_num', 'met_aud_vote', 'met_aud_num', 'imdb_vote',
'imdb_num', 'imdb_us_vote', 'imdb_us_num', 'imdb_nus_vote', 'imdb_nus_num',
'imdb_fem_vote', 'imdb_fem_num', 'imdb_mal_vote', 'imdb_mal_num']
db_fix = db_fix[new_col]
db_fix.head()
db_fix['met_cri_vote'] = db_fix['met_cri_vote'].astype('int')
db_fix['met_cri_num'] = db_fix['met_cri_num'].astype('int')
db_fix['met_aud_vote'] = db_fix['met_aud_vote'].astype('int')
db_fix['met_aud_num'] = db_fix['met_aud_num'].astype('int')
db_fix.to_csv('/Users/Amin/Documents/GitHub/Movie_boxoffice_reviews/data/processed/movie_db_fianl1.csv')
| notebooks/final_merge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Apple Stock
# ### Introduction:
#
# We are going to use Apple's stock price.
#
#
# ### Step 1. Import the necessary libraries
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('apple').getOrCreate()
spark
from pyspark import SparkFiles
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv)
# ### Step 3. Assign it to a variable apple
# +
url = "https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv"
spark.sparkContext.addFile(url)
apple = spark.read.csv(SparkFiles.get("appl_1980_2014.csv"), header=True, inferSchema=True, sep=',')
# -
apple.show(5)
# ### Step 4. Check out the type of the columns
apple.printSchema()
# ### Step 5. Transform the Date column as a datetime type
from pyspark.sql.types import *
apple_date = apple.withColumn('Date', apple.Date.cast(DateType()))
apple_date.printSchema()
apple_date.show(5)
# ### Step 6. Set the date as the index
# Spark DataFrames have no row index, so `Date` simply stays a regular column.
# ### Step 7. Are there any duplicate dates?
apple_date.groupBy('Date').count().filter('count>1').show()
# ### Step 8. Oops... it seems the rows start from the most recent date. Make the first entry the oldest date.
apple_date.orderBy('Date').show(5)
# ### Step 9. Get the last business day of each month
# +
#trying to find an answer
# -
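# One possible answer for Step 9, sketched in pandas (the notebook falls back
# to pandas for plotting anyway): keep the last row present in each calendar
# month -- for market data that is the last business day that traded. The tiny
# frame below is a hypothetical stand-in for the Apple data.

```python
import pandas as pd

appl = pd.DataFrame({
    'Date':  pd.to_datetime(['2014-07-07', '2014-07-08', '2014-06-27', '2014-06-30']),
    'Close': [95.97, 95.35, 91.98, 92.93],
}).sort_values('Date').set_index('Date')

# Group by calendar month and take the last trading day in each group
last_bday = appl.groupby(appl.index.to_period('M')).tail(1)
print(last_bday)
```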
# ### Step 10. What is the difference in days between the oldest and the most recent date?
import pyspark.sql.functions as F
first = apple_date.orderBy('Date').head(1)[0][0]
print(first)
last = apple_date.orderBy('Date').tail(1)[0][0]
print(last)
df = spark.createDataFrame([(first, last)], ['first', 'last'])
df.show()
df.select(F.datediff(df["last"], df["first"]).alias('diff')).head(1)[0][0]
# ### Step 11. How many months of data do we have?
df.select(F.months_between(df["last"], df["first"]).alias('diff')).head(1)[0][0]
# ### Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
# +
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
# %matplotlib inline
# +
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple_pandas = pd.read_csv(url)
apple_pandas.head()
# -
apple_pandas.sort_values(by='Date',inplace=True)
apple_pandas.set_index('Date',inplace=True)
apple_pandas.head()
# +
# makes the plot and assign it to a variable
appl_open = apple_pandas['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
# -
# ### BONUS: Create your own question and answer it.
| 09_Time_Series/Apple_Stock/PySparkSolution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 16. Making Simple Decisions
# **16.1** \[almanac-game\](Adapted from David Heckerman.) This exercise concerns
# the **Almanac Game**, which is used by
# decision analysts to calibrate numeric estimation. For each of the
# questions that follow, give your best guess of the answer, that is, a
# number that you think is as likely to be too high as it is to be too
# low. Also give your guess at a 25th percentile estimate, that is, a
# number that you think has a 25% chance of being too high, and a 75%
# chance of being too low. Do the same for the 75th percentile. (Thus, you
# should give three estimates in all—low, median, and high—for each
# question.)
#
# 1. Number of passengers who flew between New York and Los Angeles
# in 1989.
#
# 2. Population of Warsaw in 1992.
#
# 3. Year in which Coronado discovered the Mississippi River.
#
# 4. Number of votes received by <NAME> in the 1976
# presidential election.
#
# 5. Age of the oldest living tree, as of 2002.
#
# 6. Height of the Hoover Dam in feet.
#
# 7. Number of eggs produced in Oregon in 1985.
#
# 8. Number of Buddhists in the world in 1992.
#
# 9. Number of deaths due to AIDS in the United States
# in 1981.
#
# 10. Number of U.S. patents granted in 1901.
#
# The correct answers appear after the last exercise of this chapter. From
# the point of view of decision analysis, the interesting thing is not how
# close your median guesses came to the real answers, but rather how often
# the real answer came within your 25% and 75% bounds. If it was about
# half the time, then your bounds are accurate. But if you’re like most
# people, you will be more sure of yourself than you should be, and fewer
# than half the answers will fall within the bounds. With practice, you
# can calibrate yourself to give realistic bounds, and thus be more useful
# in supplying information for decision making. Try this second set of
# questions and see if there is any improvement:
#
# 1. Year of birth of Zsa Zsa Gabor.
#
# 2. Maximum distance from Mars to the sun in miles.
#
# 3. Value in dollars of exports of wheat from the United States in 1992.
#
# 4. Tons handled by the port of Honolulu in 1991.
#
# 5. Annual salary in dollars of the governor of California in 1993.
#
# 6. Population of San Diego in 1990.
#
# 7. Year in which <NAME> founded Providence, Rhode Island.
#
# 8. Height of Mt. Kilimanjaro in feet.
#
# 9. Length of the Brooklyn Bridge in feet.
#
# 10. Number of deaths due to automobile accidents in the United States
# in 1992.
#
# **16.2** Chris considers four used cars before buying the one with maximum
# expected utility. Pat considers ten cars and does the same. All other
# things being equal, which one is more likely to have the better car?
# Which is more likely to be disappointed with their car’s quality? By how
# much (in terms of standard deviations of expected quality)?
#
# **16.3** Chris considers five used cars before buying the one with maximum
# expected utility. Pat considers eleven cars and does the same. All other
# things being equal, which one is more likely to have the better car?
# Which is more likely to be disappointed with their car’s quality? By how
# much (in terms of standard deviations of expected quality)?
#
# **16.4** \[St-Petersburg-exercise\] In 1713, <NAME> stated a puzzle,
# now called the St. Petersburg paradox, which works as follows. You have
# the opportunity to play a game in which a fair coin is tossed repeatedly
# until it comes up heads. If the first heads appears on the $n$th toss,
# you win $2^n$ dollars.
#
# 1. Show that the expected monetary value of this game is infinite.
#
# 2. How much would you, personally, pay to play the game?
#
# 3. Nicolas’s cousin <NAME> resolved the apparent paradox in
# 1738 by suggesting that the utility of money is measured on a
# logarithmic scale (i.e., $U(S_{n}) = a\log_2 n +b$, where $S_n$ is
# the state of having $\$n$). What is the expected utility of the game
# under this assumption?
#
# 4. What is the maximum amount that it would be rational to pay to play
# the game, assuming that one’s initial wealth is $\$k$?
#
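# For part 1 of 16.4, note that each term of the expectation is
# $(1/2)^n \cdot 2^n = 1$, so truncating the game at $N$ tosses gives an
# expected value of exactly $N$ dollars, which grows without bound. A tiny
# numeric check of that identity:

```python
from fractions import Fraction

def truncated_ev(N):
    # sum of P(first head on toss n) * payoff, exactly, for n = 1..N
    return sum(Fraction(1, 2 ** n) * 2 ** n for n in range(1, N + 1))

print(truncated_ev(10))   # 10
print(truncated_ev(100))  # 100
```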
# **16.5** Write a computer program to automate the process in
# Exercise [assessment-exercise](#/). Try your program out on
# several people of different net worth and political outlook. Comment on
# the consistency of your results, both for an individual and across
# individuals.
#
# **16.6** \[surprise-candy-exercise\] The Surprise Candy Company makes candy in
# two flavors: 75% are strawberry flavor and 25% are anchovy flavor. Each
# new piece of candy starts out with a round shape; as it moves along the
# production line, a machine randomly selects a certain percentage to be
# trimmed into a square; then, each piece is wrapped in a wrapper whose
# color is chosen randomly to be red or brown. 70% of the strawberry
# candies are round and 70% have a red wrapper, while 90% of the anchovy
# candies are square and 90% have a brown wrapper. All candies are sold
# individually in sealed, identical, black boxes.
#
# Now you, the customer, have just bought a Surprise candy at the store
# but have not yet opened the box. Consider the three Bayes nets in
# Figure [3candy-figure](#3candy-figure).
#
# 1. Which network(s) can correctly represent
# ${\textbf{P}}(Flavor,Wrapper,Shape)$?
#
# 2. Which network is the best representation for this problem?
#
# 3. Does network (i) assert that
# ${\textbf{P}}(Wrapper|Shape){{\,{=}\,}}{\textbf{P}}(Wrapper)$?
#
# 4. What is the probability that your candy has a red wrapper?
#
# 5. In the box is a round candy with a red wrapper. What is the
# probability that its flavor is strawberry?
#
# 6. An unwrapped strawberry candy is worth $s$ on the open market and an
# unwrapped anchovy candy is worth $a$. Write an expression for the
# value of an unopened candy box.
#
# 7. A new law prohibits trading of unwrapped candies, but it is still
# legal to trade wrapped candies (out of the box). Is an unopened
# candy box now worth more than, less than, or the same as before?
#
# <center>
# <b id="3candy-figure">Figure [3candy-figure]</b> Three proposed Bayes nets for the Surprise Candy
# problem
# </center>
#
# 
#
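# A quick numeric sketch for parts 4 and 5 of Exercise 16.6, assuming (as in
# network (i)) that Shape and Wrapper are conditionally independent given
# Flavor:

```python
# Priors and conditionals from Exercise 16.6 (s = strawberry, a = anchovy)
p_s, p_a = 0.75, 0.25
p_round_s, p_red_s = 0.70, 0.70
p_round_a, p_red_a = 0.10, 0.10   # anchovy: 90% square, 90% brown

# Part 4: P(red wrapper), marginalizing over flavor
p_red = p_s * p_red_s + p_a * p_red_a
print(round(p_red, 4))            # 0.55

# Part 5: P(strawberry | round, red) by Bayes' rule
num = p_s * p_round_s * p_red_s
den = num + p_a * p_round_a * p_red_a
print(round(num / den, 4))        # 0.9932
```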
# **16.7** \[surprise-candy-exercise\] The Surprise Candy Company makes candy in
# two flavors: 70% are strawberry flavor and 30% are anchovy flavor. Each
# new piece of candy starts out with a round shape; as it moves along the
# production line, a machine randomly selects a certain percentage to be
# trimmed into a square; then, each piece is wrapped in a wrapper whose
# color is chosen randomly to be red or brown. 80% of the strawberry
# candies are round and 80% have a red wrapper, while 90% of the anchovy
# candies are square and 90% have a brown wrapper. All candies are sold
# individually in sealed, identical, black boxes.
#
# Now you, the customer, have just bought a Surprise candy at the store
# but have not yet opened the box. Consider the three Bayes nets in
# Figure [3candy-figure](#3candy-figure).
#
# 1. Which network(s) can correctly represent
# ${\textbf{P}}(Flavor,Wrapper,Shape)$?
#
# 2. Which network is the best representation for this problem?
#
# 3. Does network (i) assert that
# ${\textbf{P}}(Wrapper|Shape){{\,{=}\,}}{\textbf{P}}(Wrapper)$?
#
# 4. What is the probability that your candy has a red wrapper?
#
# 5. In the box is a round candy with a red wrapper. What is the
# probability that its flavor is strawberry?
#
# 6. An unwrapped strawberry candy is worth $s$ on the open market and an
# unwrapped anchovy candy is worth $a$. Write an expression for the
# value of an unopened candy box.
#
# 7. A new law prohibits trading of unwrapped candies, but it is still
# legal to trade wrapped candies (out of the box). Is an unopened
# candy box now worth more than, less than, or the same as before?
#
# **16.8** Prove that the judgments $B \succ A$ and $C \succ D$ in the Allais
# paradox (page [allais-page](#/)) violate the axiom of substitutability.
#
# **16.9** Consider the Allais paradox described on page [allais-page](#/): an agent
# who prefers $B$ over $A$ (taking the sure thing), and $C$ over $D$
# (taking the higher EMV) is not acting rationally, according to utility
# theory. Do you think this indicates a problem for the agent, a problem
# for the theory, or no problem at all? Explain.
#
# **16.10** Tickets to a lottery cost 1. There are two possible prizes:
# a 10 payoff with probability 1/50, and a 1,000,000 payoff with
# probability 1/2,000,000. What is the expected monetary value of a
# lottery ticket? When (if ever) is it rational to buy a ticket? Be
# precise—show an equation involving utilities. You may assume current
# wealth of $k$ and that $U(S_k)=0$. You may also assume that
# $U(S_{k+{10}}) = {10}\times U(S_{k+1})$, but you may not make any
# assumptions about $U(S_{k+1,{000},{000}})$. Sociological studies show
# that people with lower income buy a disproportionate number of lottery
# tickets. Do you think this is because they are worse decision makers or
# because they have a different utility function? Consider the value of
# contemplating the possibility of winning the lottery versus the value of
# contemplating becoming an action hero while watching an adventure movie.
#
# **16.11** \[assessment-exercise\]Assess your own utility for different incremental
# amounts of money by running a series of preference tests between some
# definite amount $M_1$ and a lottery $[p,M_2; (1-p), 0]$. Choose
# different values of $M_1$ and $M_2$, and vary $p$ until you are
# indifferent between the two choices. Plot the resulting utility
# function.
#
# **16.12** How much is a micromort worth to you? Devise a protocol to determine
# this. Ask questions based both on paying to avoid risk and being paid to
# accept risk.
#
# **16.13** \[kmax-exercise\] Let continuous variables $X_1,\ldots,X_k$ be
# independently distributed according to the same probability density
# function $f(x)$. Prove that the density function for
# $\max\{X_1,\ldots,X_k\}$ is given by $kf(x)(F(x))^{k-1}$, where $F$ is
# the cumulative distribution for $f$.
#
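# A Monte Carlo sanity check of the claim in 16.13: the CDF of the maximum is
# $(F(x))^k$. For uniforms on $[0,1]$, $F(x) = x$, so $P(\max \le x) = x^k$:

```python
import random

random.seed(0)
k, trials, x = 3, 200_000, 0.5
hits = sum(max(random.random() for _ in range(k)) <= x for _ in range(trials))
print(hits / trials)   # close to x**k = 0.125
```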
# **16.14** Economists often make use of an exponential utility function for money:
# $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an
# individual’s risk tolerance. Risk tolerance reflects how likely an
# individual is to accept a lottery with a particular expected monetary
# value (EMV) versus some certain payoff. As $R$ (which is measured in the
# same units as $x$) becomes larger, the individual becomes less
# risk-averse.
#
# 1. Assume Mary has an exponential utility function with $R = \$500$.
# Mary is given the choice between receiving \$500 with certainty
# (probability 1) or participating in a lottery which has a 60%
# probability of winning \$5000 and a 40% probability of
# winning nothing. Assuming Mary acts rationally, which option would
# she choose? Show how you derived your answer.
#
# 2. Consider the choice between receiving \$100 with certainty
# (probability 1) or participating in a lottery which has a 50%
# probability of winning \$500 and a 50% probability of winning
# nothing. Approximate the value of R (to 3 significant digits) in an
# exponential utility function that would cause an individual to be
# indifferent to these two alternatives. (You might find it helpful to
# write a short program to help you solve this problem.)
#
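# A numeric sketch for both parts of 16.14 (the bisection bounds below are my
# own choice, not from the text): part 1 compares the two expected utilities
# directly; part 2 solves the indifference equation
# $U(100) = 0.5\,U(500) + 0.5\,U(0)$ for $R$.

```python
import math

def U(x, R):
    return -math.exp(-x / R)     # exponential utility

# Part 1: R = $500, sure $500 vs. lottery [0.6, $5000; 0.4, $0]
R = 500.0
eu_sure = U(500, R)
eu_lottery = 0.6 * U(5000, R) + 0.4 * U(0, R)
print(eu_sure > eu_lottery)      # True: Mary should take the sure $500

# Part 2: find R making sure $100 indifferent to [0.5, $500; 0.5, $0]
def diff(R):
    return U(100, R) - (0.5 * U(500, R) + 0.5 * U(0, R))

lo, hi = 1.0, 1000.0             # diff > 0 at lo, < 0 at hi
for _ in range(100):
    mid = (lo + hi) / 2
    if diff(lo) * diff(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(mid, 1))             # ~152.4
```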
# **16.1** Economists often make use of an exponential utility function for money:
# $U(x) = -e^{-x/R}$, where $R$ is a positive constant representing an
# individual’s risk tolerance. Risk tolerance reflects how likely an
# individual is to accept a lottery with a particular expected monetary
# value (EMV) versus some certain payoff. As $R$ (which is measured in the
# same units as $x$) becomes larger, the individual becomes less
# risk-averse.
#
# 1. Assume Mary has an exponential utility function with $R = \$400$.
# Mary is given the choice between receiving \$400 with certainty
# (probability 1) or participating in a lottery which has a 60%
# probability of winning \$5000 and a 40% probability of
# winning nothing. Assuming Mary acts rationally, which option would
# she choose? Show how you derived your answer.
#
# 2. Consider the choice between receiving \$100 with certainty
# (probability 1) or participating in a lottery which has a 50%
# probability of winning \$500 and a 50% probability of winning
# nothing. Approximate the value of R (to 3 significant digits) in an
# exponential utility function that would cause an individual to be
# indifferent to these two alternatives. (You might find it helpful to
# write a short program to help you solve this problem.)
#
# **16.15** Alex is given the choice between two games. In Game 1, a fair coin is
# flipped and if it comes up heads, Alex receives \$100. If the coin comes
# up tails, Alex receives nothing. In Game 2, a fair coin is flipped
# twice. Each time the coin comes up heads, Alex receives \$50, and Alex
# receives nothing for each coin flip that comes up tails. Assuming that
# Alex has a monotonically increasing utility function for money in the
# range \[\$0, \$100\], show mathematically that if Alex prefers Game 2 to
# Game 1, then Alex is risk averse (at least with respect to this range of
# monetary amounts).
#
# Show that if $X_1$ and $X_2$ are preferentially independent of $X_3$,
# and $X_2$ and $X_3$ are preferentially independent of $X_1$, then $X_3$
# and $X_1$ are preferentially independent of $X_2$.
#
# **16.16** \[airport-au-id-exercise\]Repeat
# Exercise [airport-id-exercise](#/), using the action-utility
# representation shown in Figure [airport-au-id-figure](#/).
#
# **16.17** For either of the airport-siting diagrams from Exercises
# [airport-id-exercise] and [airport-au-id-exercise], to which
# conditional probability table entry is the utility most sensitive, given
# the available evidence?
#
# **16.18** Modify and extend the Bayesian network code in the code repository to
# provide for creation and evaluation of decision networks and the
# calculation of information value.
#
# **16.19** Consider a student who has the choice to buy or not buy a textbook for a
# course. We’ll model this as a decision problem with one Boolean decision
# node, $B$, indicating whether the agent chooses to buy the book, and two
# Boolean chance nodes, $M$, indicating whether the student has mastered
# the material in the book, and $P$, indicating whether the student passes
# the course. Of course, there is also a utility node, $U$. A certain
# student, Sam, has an additive utility function: 0 for not buying the
# book and -\$100 for buying it; and \$2000 for passing the course and 0
# for not passing. Sam’s conditional probability estimates are as follows:
# $$\begin{array}{ll}
# P(p|b,m) = 0.9 & P(m|b) = 0.9 \\
# P(p|b, \lnot m) = 0.5 & P(m|\lnot b) = 0.7 \\
# P(p|\lnot b, m) = 0.8 & \\
# P(p|\lnot b, \lnot m) = 0.3 & \\
# \end{array}$$ You might think that $P$ would be independent of $B$ given
# $M$, But this course has an open-book final—so having the book helps.
#
# 1. Draw the decision network for this problem.
#
# 2. Compute the expected utility of buying the book and of not
# buying it.
#
# 3. What should Sam do?
#
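# Part 2 of 16.19 worked numerically, using only the probabilities and the
# additive utilities given in the exercise:

```python
# P(m | b) and P(m | not b) from the exercise
P_m_given_b, P_m_given_nb = 0.9, 0.7

P_pass_buy   = 0.9 * P_m_given_b  + 0.5 * (1 - P_m_given_b)    # P(p | b)
P_pass_nobuy = 0.8 * P_m_given_nb + 0.3 * (1 - P_m_given_nb)   # P(p | not b)

EU_buy   = -100 + 2000 * P_pass_buy     # additive utility: -$100 book, $2000 pass
EU_nobuy =    0 + 2000 * P_pass_nobuy
print(round(EU_buy, 2), round(EU_nobuy, 2))   # 1620.0 1300.0 -> Sam should buy the book
```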
# **16.20** \[airport-id-exercise\]This exercise completes the analysis of the
# airport-siting problem in Figure [airport-id-figure](#/).
#
# 1. Provide reasonable variable domains, probabilities, and utilities
# for the network, assuming that there are three possible sites.
#
# 2. Solve the decision problem.
#
# 3. What happens if changes in technology mean that each aircraft
# generates half the noise?
#
# 4. What if noise avoidance becomes three times more important?
#
# 5. Calculate the VPI for ${AirTraffic}$, ${Litigation}$, and
# ${Construction}$ in your model.
#
# **16.21** \[car-vpi-exercise\] (Adapted from Pearl [@Pearl:1988].) A used-car
# buyer can decide to carry out various tests with various costs (e.g.,
# kick the tires, take the car to a qualified mechanic) and then,
# depending on the outcome of the tests, decide which car to buy. We will
# assume that the buyer is deciding whether to buy car $c_1$, that there
# is time to carry out at most one test, and that $t_1$ is the test of
# $c_1$ and costs \$50.
#
# A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$),
# and the tests might help indicate what shape the car is in. Car $c_1$
# costs \$1,500, and its market value is \$2,000 if it is in good shape; if
# not, \$700 in repairs will be needed to make it in good shape. The buyer’s
# estimate is that $c_1$ has a 70% chance of being in good shape.
#
# 1. Draw the decision network that represents this problem.
#
# 2. Calculate the expected net gain from buying $c_1$, given no test.
#
# 3. Tests can be described by the probability that the car will pass or
# fail the test given that the car is in good or bad shape. We have
# the following information:
#
# $P({pass}(c_1,t_1) | q^+(c_1)) = {0.8}$
#
# $P({pass}(c_1,t_1) | q^-(c_1)) = {0.35}$
#
# Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
#
# 4. Calculate the optimal decisions given either a pass or a fail, and
# their expected utilities.
#
# 5. Calculate the value of information of the test, and derive an
# optimal conditional plan for the buyer.
#
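# Parts 2 and 3 of 16.21 worked numerically from the figures given in the
# exercise (purchase price $1,500, resale value $2,000, repairs $700):

```python
# Part 2: expected net gain from buying c1 with no test
p_good = 0.70
gain_good = 2000 - 1500         # resale value minus price
gain_bad  = 2000 - 1500 - 700   # $700 of repairs needed first
ev_buy = p_good * gain_good + (1 - p_good) * gain_bad
print(round(ev_buy, 2))         # 290.0

# Part 3: P(pass) and the posterior quality given each test outcome
p_pass = 0.8 * p_good + 0.35 * (1 - p_good)
p_good_pass = 0.8 * p_good / p_pass
p_good_fail = 0.2 * p_good / (1 - p_pass)
print(round(p_pass, 3), round(p_good_pass, 3), round(p_good_fail, 3))  # 0.665 0.842 0.418
```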
# **16.22** \[nonnegative-VPI-exercise\]Recall the definition of *value of
# information* in Section [VPI-section](#/).
#
# 1. Prove that the value of information is nonnegative and
# order independent.
#
# 2. Explain why it is that some people would prefer not to get some
# information—for example, not wanting to know the sex of their baby
# when an ultrasound is done.
#
# 3. A function $f$ on sets is **submodular** if, for any element $x$ and any sets $A$
# and $B$ such that $A\subseteq B$, adding $x$ to $A$ gives a greater
# increase in $f$ than adding $x$ to $B$:
# $$A\subseteq B {\:\;{\Rightarrow}\:\;}(f(A{{\,{\cup}\,}}\{x\}) - f(A)) \geq (f(B{{\,{\cup}\,}}\{x\}) - f(B))\ .$$
# Submodularity captures the intuitive notion of *diminishing
# returns*. Is the value of information, viewed as a function
# $f$ on sets of possible observations, submodular? Prove this or find
# a counterexample.
#
#
# The answers to Exercise [almanac-game](#/) (where M stands
# for million): First set: 3M, 1.6M, 1541, 41M, 4768, 221, 649M, 295M,
# 132, 25,546. Second set: 1917, 155M, 4,500M, 11M, 120,000, 1.1M, 1636,
# 19,340, 1,595, 41,710.
#
| notebooks/decision_theory_exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from qiskit import Aer
from qiskit.transpiler import PassManager
from qiskit_aqua import Operator, QuantumInstance
from qiskit_aqua.algorithms.adaptive import VQE
from qiskit_aqua.algorithms.classical import ExactEigensolver
from qiskit_aqua.components.optimizers import L_BFGS_B
from qiskit_aqua.components.variational_forms import RY
from qiskit_chemistry import FermionicOperator
from qiskit_chemistry.drivers import PyQuanteDriver, UnitsType, BasisType
# -
# using driver to get fermionic Hamiltonian
# PyQuante example
driver = PyQuanteDriver(atoms='H .0 .0 .0; H .0 .0 0.735', units=UnitsType.ANGSTROM,
charge=0, multiplicity=1, basis=BasisType.BSTO3G)
molecule = driver.run()
h1 = molecule.one_body_integrals
h2 = molecule.two_body_integrals
# convert from fermionic hamiltonian to qubit hamiltonian
ferOp = FermionicOperator(h1=h1, h2=h2)
qubitOp_jw = ferOp.mapping(map_type='JORDAN_WIGNER', threshold=0.00000001)
qubitOp_pa = ferOp.mapping(map_type='PARITY', threshold=0.00000001)
qubitOp_bi = ferOp.mapping(map_type='BRAVYI_KITAEV', threshold=0.00000001)
# +
# print out qubit hamiltonian in Pauli terms and exact solution
qubitOp_jw.to_matrix()
qubitOp_jw.chop(10**-10)
print(qubitOp_jw.print_operators())
# Using exact eigensolver to get the smallest eigenvalue
exact_eigensolver = ExactEigensolver(qubitOp_jw, k=1)
ret = exact_eigensolver.run()
print('The exact ground state energy is: {}'.format(ret['energy']))
# +
# setup VQE
# setup optimizer, use L_BFGS_B optimizer for example
lbfgs = L_BFGS_B(maxfun=1000, factr=10, iprint=10)
# setup variational form generator (generate trial circuits for VQE)
var_form = RY(qubitOp_jw.num_qubits, 5, entangler_map = {0: [1], 1:[2], 2:[3]})
# setup VQE with operator, variational form, and optimizer
vqe_algorithm = VQE(qubitOp_jw, var_form, lbfgs, 'matrix')
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, pass_manager=PassManager())
results = vqe_algorithm.run(quantum_instance)
print("Minimum value: {}".format(results['eigvals'][0]))
print("Parameters: {}".format(results['opt_params']))
| community/aqua/chemistry/Pyquante_end2end.ipynb |
# ---
# title: "Categorical Heatmap"
# author: "Charles"
# date: 2020-08-14
# description: "-"
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: kagglevil
# language: python
# name: kagglevil
# ---
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# +
vegetables = ["cucumber", "tomato", "lettuce", "asparagus",
"potato", "wheat", "barley"]
farmers = ["Far<NAME>", "Upland Bros.", "<NAME>",
"Agrifun", "Organiculture", "BioGoods Ltd.", "Cornylee Corp."]
harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
[2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
[1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
[0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
[0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
[1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1],
[0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]])
# +
fig, ax = plt.subplots()
im = ax.imshow(harvest)
ax.set_xticks(np.arange(len(farmers)))
ax.set_yticks(np.arange(len(vegetables)))
# ... and label them with the respective list entries
ax.set_xticklabels(farmers)
ax.set_yticklabels(vegetables)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(len(vegetables)):
for j in range(len(farmers)):
text = ax.text(j, i, harvest[i, j],
ha="center", va="center", color="w")
ax.set_title("Harvest of local farmers (in tons/year)")
fig.tight_layout()
plt.show()
# -
| docs/python/Plots/categorical-heatmap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %cd /home/jovyan/work
from chordifier.KeyboardRenderer import KeyboardRenderer
from chordifier.experiments.KeyboardFactory import make
# +
keyboard = make([5, 4, 4, 4, 3, 4, 2, 3, 4, 5])
KeyboardRenderer(keyboard, 'Two possible keyboards with many buttons').present()
keyboard = make([1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
KeyboardRenderer(keyboard, 'rGloves input device').present()
keyboard = make([0, 0, 0, 0, 2, 2, 0, 0, 0, 0])
KeyboardRenderer(keyboard, 'Handheld keyboard, operated with thumbs').present()
keyboard = make([3, 3, 3, 3, 2, 0, 0, 0, 0, 0])
KeyboardRenderer(keyboard, 'One-handed keyboard').present()
keyboard = make([2, 2, 2, 2, 2, 0, 0, 0, 0, 0])
KeyboardRenderer(keyboard, 'Minimalist one-handed keyboard').present()
# -
| lab/presenter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
import os
import matplotlib.pyplot as plt
import scipy.io as sio
import torch
import numpy as np
# + tags=[]
print(os.getcwd())
# + tags=[]
WD = '/local/meliao/projects/fourier_neural_operator/experiments/09_predict_residuals/'
os.chdir(WD)
print(os.getcwd())
# -
from train_models import SpectralConv1d, FNO1dComplexTime
# +
EMULATOR_FP = '/local/meliao/projects/fourier_neural_operator/experiments/08_FNO_pretraining/models/00_pretrain_ep_1000'
MODEL_FP = '/local/meliao/projects/fourier_neural_operator/experiments/09_predict_residuals/models/00_residual_ep_500'
DATA_FP = '/local/meliao/projects/fourier_neural_operator/data/2021-06-24_NLS_data_04_train.mat'
PLOTS_DIR = '/local/meliao/projects/fourier_neural_operator/experiments/09_predict_residuals/plots/'
# -
d = sio.loadmat(DATA_FP)
emulator = torch.load(EMULATOR_FP, map_location='cpu')
class TimeDataSetResiduals(torch.utils.data.Dataset):
def __init__(self, X, t_grid, x_grid, emulator):
super(TimeDataSetResiduals, self).__init__()
assert X.shape[1] == t_grid.shape[-1]
self.X = torch.tensor(X, dtype=torch.cfloat)
self.t = torch.tensor(t_grid.flatten(), dtype=torch.float)
self.x_grid = torch.tensor(x_grid, dtype=torch.float).view(-1, 1)
self.n_tsteps = self.t.shape[0] - 1
self.n_batches = self.X.shape[0]
self.dataset_len = self.n_tsteps * self.n_batches
self.emulator = emulator
self.make_composed_predictions()
def make_composed_predictions(self):
t_interval = self.t[1]
n_tsteps = self.X.shape[1]
t_tensor = torch.tensor(t_interval, dtype=torch.float).repeat([self.n_batches, 1,1])
preds = np.zeros(self.X.shape, dtype=np.cfloat)
# The IC is at time 0
preds[:,0] = self.X[:,0]
comp_input_i = self.make_x_train(self.X[:,0])
for i in range(1, n_tsteps):
comp_preds_i = self.emulator(comp_input_i, t_tensor).detach().numpy()
preds[:,i] = comp_preds_i
comp_input_i = self.make_x_train(comp_preds_i)
self.emulator_preds = preds
def make_x_train(self, X, single_batch=False):
# X has shape (nbatch, 1, grid_size)
n_batches = X.shape[0] if len(X.shape) > 1 else 1
# Convert to tensor
X_input = torch.view_as_real(torch.tensor(X, dtype=torch.cfloat))
if single_batch:
X_input = torch.cat((X_input, self.x_grid), dim=1)
else:
x_grid_i = self.x_grid.repeat(n_batches, 1, 1)
X_input = torch.cat((X_input.view((n_batches, -1, 2)), x_grid_i), axis=2)
return X_input
def __getitem__(self, idx):
idx_original = idx
t_idx = int(idx % self.n_tsteps) + 1
idx = int(idx // self.n_tsteps)
batch_idx = int(idx % self.n_batches)
x = self.make_x_train(self.X[batch_idx, 0], single_batch=True) #.reshape(self.output_shape)
y = self.X[batch_idx, t_idx] #.reshape(self.output_shape)
preds = self.emulator_preds[batch_idx, t_idx]
t = self.t[t_idx]
return x,y,t,preds
def __len__(self):
return self.dataset_len
def __repr__(self):
return "TimeDataSetResiduals with length {}, n_tsteps {}, n_batches {}".format(self.dataset_len,
self.n_tsteps,
self.n_batches)
# + tags=[]
t_dset = TimeDataSetResiduals(d['output'][:, :7], d['t'][:, :7], d['x'], emulator)
t_dloader = torch.utils.data.DataLoader(t_dset, batch_size=1, shuffle=False)
# -
def plot_check(x, y, t, preds, resid, fp=None):
# X has size (grid_size, 3) with the columns being (Re(u_0), Im(u_0), x)
fig, ax=plt.subplots(nrows=1, ncols=4)
fig.set_size_inches(15,10) #15, 20 works well
fig.patch.set_facecolor('white')
x_real = x[:, 0].flatten()
x_imag = x[:, 1].flatten()
# print("X_REAL:", x_real.shape, "X_IMAG:", x_imag.shape)
# print("PREDS_REAL:", np.real(preds).shape, "PREDS_IMAG:", np.imag(preds).shape)
# print("Y_REAL:", np.real(y).shape, "Y_IMAG:", np.imag(y).shape)
ax[0].set_title("$Re(u)$")
ax[0].plot(x_real, label='Input')
ax[0].plot(np.real(y), label='Soln')
ax[0].plot(np.real(preds), '--', label='Pred')
ax[0].legend()
ax[1].set_title("Residuals: $Re(u)$")
ax[1].plot(np.real(y) - np.real(preds), color='red', label='actual')
ax[1].plot(np.real(resid), color='green', label='predicted')
ax[1].legend()
ax[2].set_title("$Im(u)$")
ax[2].plot(x_imag, label='Input')
ax[2].plot(np.imag(y), label='Soln')
ax[2].plot(np.imag(preds), '--', label='Pred')
ax[2].legend()
ax[3].set_title("Residuals: $Im(u)$")
ax[3].plot(np.imag(y) - np.imag(preds), color='red', label='actual')
ax[3].plot(np.imag(resid), color='green', label='predicted')
ax[3].legend()
plt.tight_layout()
plt.title("T = {}".format(t))
if fp is not None:
plt.savefig(fp)
else:
plt.show()
plt.clf()
model = torch.load(MODEL_FP, map_location=torch.device('cpu'))
# + tags=[]
n = 0
for x_i, y_i, t_i, preds_i in t_dloader:
# x_i, y_i, t_i, preds_i = t_dset[i]
# print(x_i.shape)
model_resid = model(x_i, t_i)
fp_i = os.path.join(PLOTS_DIR, 'test_case_{}.png'.format(n))
plot_check(x_i[0], y_i[0], t_i.item(), preds_i[0], model_resid[0].detach().numpy(), fp=fp_i)
n += 1
if n >= 5:
break
# -
| experiments/09_predict_residuals/test_TimeDataSetResidual.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/usmandroid/ML_Comm_TUM/blob/main/tut04_to_fill.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="4479e7a1"
# # Tutorial 4: Bitwise Demapper
# November 11, 2021
# + id="17f53cb7"
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch import nn, optim
# + [markdown] id="24eb4cf7"
# ### Problem 4.2 - Optimality of Binary NN Demapper for AWGN
#
# Consider an additive white Gaussian noise (AWGN) channel
# $$Y = X + Z $$
# with binary phase shift keying (BPSK) input $X$ uniformly distributed on $\{-1, 1\}$ and Gaussian
# noise $Z\sim\mathcal{N}(0, \sigma^2)$. Assume the binary label $B = b(X)$ with $b(-1) = 0$ and $b(1) = 1$.
# 1. Show that
# $$L = \log\frac{P_{B|Y}(1|y)}{P_{B|Y}(0|y)}$$
# is given by
# $$L = \lambda y$$
#
# 2. For $\text{SNR} = [0, 1,\dots, 5]$ train the binary NN demapper in Figure 4.3. Compare
# the weight $w$ after training with the value of $\lambda$ that you calculated in 1.
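Before training, part 1 can be checked numerically: expanding the two Gaussian exponents in the log-likelihood ratio leaves a line through the origin with slope $\lambda = 2/\sigma^2$. The sketch below is an illustrative check, separate from the tutorial scaffold.

```python
import math

def llr_bpsk(y, sigma2):
    """Exact LLR log p(y|x=+1)/p(y|x=-1) for BPSK over AWGN.
    Expanding the Gaussian exponents gives L(y) = 2*y / sigma2."""
    p_plus = math.exp(-(y - 1) ** 2 / (2 * sigma2))
    p_minus = math.exp(-(y + 1) ** 2 / (2 * sigma2))
    return math.log(p_plus / p_minus)

sigma2 = 0.5  # noise variance, so lambda = 2 / sigma2 = 4
for y in (-1.5, -0.2, 0.0, 0.7, 2.0):
    assert abs(llr_bpsk(y, sigma2) - (2 / sigma2) * y) < 1e-9
print("LLR is linear with slope", 2 / sigma2)
```

This is the value the trained weight $w$ should approach at each SNR.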
# + id="cb1ddd56"
def mapper(msg, alphabet): # maps message a in [0,1,..,M-1] to alphabet symbols
return alphabet[msg]
def awgn_channel(x, snr, seed=None): # returns Y = X+Z
rng = np.random.Generator(np.random.PCG64(seed))
power_x = np.mean(np.abs(x) ** 2)
noise_power = power_x / snr
noise = np.sqrt(noise_power) * rng.normal(size=x.shape)
return x + noise
# + id="499b2389"
M = 2 # cardinality of the alphabet
a = np.random.choice(M, 10000) # messages
x = mapper(a, np.array([ -1., 1.])) # symbols
# + id="29f73e16"
class bin_demapper(nn.Module):
def __init__(self):
super().__init__()
self.h1 = nn.Linear(1,1, bias=False)
def forward(self, y):
return self.h1(y)
# + colab={"base_uri": "https://localhost:8080/"} id="e85a4e47" outputId="0cd959f6-d5af-4559-8c97-82d0f26b18a2"
SNRdBs = np.array([0,1,2,3,4,5])
SNRs = 10**(SNRdBs/10)
# Preallocate
weigths = np.zeros(SNRs.size)
logits = np.zeros((SNRs.size,10000))
ys = np.zeros((SNRs.size,10000))
# Convert labels to torch tensor
a_t = torch.Tensor(a.reshape((-1,1)))
# Loss function
# Your code here
loss_fn = nn.BCEWithLogitsLoss()
for idx,snr in enumerate(SNRs):
print(f'--- SNR is: {snr:.2f} ---')
# Generate y
y = awgn_channel(x, snr)
ys[idx,:] = y
# Convert data
y_t = torch.Tensor(y.reshape(-1,1))
# Initialize network
bin_demap = bin_demapper()
# Optimizer
optimizer = optim.Adam(bin_demap.parameters(), lr=0.1)
for j in range(1000):
l = bin_demap(y_t)
loss = loss_fn(l, a_t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Printout and visualization
if j % 50 == 0:
print(f'epoch {j}: Loss = {loss.detach().numpy() :.4f}')
weigths[idx]=bin_demap.h1.weight[0].detach().numpy()[0]
logits[idx,:] = l.detach().numpy().reshape((-1,))
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="95397d10" outputId="f4175e4a-5dce-4c9c-9bef-dbf967e4cfd1"
for j in range(SNRs.size):
plt.plot(ys[j,:], logits[j,:], label=f'SNR={SNRdBs[j]} dB')
plt.grid()
plt.legend()
print(f'Expected values for lambda : {2/(1/SNRs)}')
print(f'Value of w after the training : {weigths} ')
print(f'Slopes of the plots : {(logits[:,0]/ys[:,0])}')
# + [markdown] id="c4ad5b70"
# ### Problem 4.3 - Function composition
# Use the function compositions developed in Section 4.4 to plot Figure 4.7.
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="29774d55" outputId="ee765a18-c30d-4578-94eb-a7f8c2ab3326"
y = np.arange(-8,9,1)
# Your code here
l0 = y
l1 = -np.abs(l0)+4
l2 = -np.abs(l1)+2
plt.plot(y, l0, label = 'L0')
plt.plot(y,l1, label = 'L1')
plt.plot(y,l2, label = 'L2')
plt.grid()
plt.legend()
plt.show()
# + [markdown] id="748d47a5"
# ### Problem 4.4 - ReLU activation
# Design an NN using linear neurons and ReLU activation that for input $y$ outputs $a \cdot |y| + b$.
# + id="80c69d55"
class relu_nn(nn.Module):
def __init__(self, a, b):
super().__init__()
self.h1 = nn.Linear(1,2, bias=False)
self.act = nn.ReLU()
self.h1.weight = nn.Parameter(torch.Tensor([[a],[-a]]))
self.out = nn.Linear(2,1)
self.out.bias = nn.Parameter(torch.Tensor([b]))
self.out.weight = nn.Parameter(torch.Tensor([1., 1.]))
def forward(self, y):
y = self.out(self.act(self.h1(y)))
return y
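The construction above relies on the identity $\mathrm{ReLU}(a y) + \mathrm{ReLU}(-a y) = a\,|y|$ for $a \ge 0$, which the two hidden neurons implement before the output neuron adds the bias $b$. A plain-Python check of that identity, independent of the torch class above:

```python
def relu(v):
    return max(v, 0.0)

def nn_abs(y, a, b):
    # Two hidden ReLU neurons with weights +a and -a, summed, plus bias b.
    return relu(a * y) + relu(-a * y) + b

a, b = 0.5, -1.0
for y in (-5.0, -1.2, 0.0, 3.0):
    assert abs(nn_abs(y, a, b) - (a * abs(y) + b)) < 1e-12
print("ReLU pair reproduces a*|y| + b")
```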
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="832258b7" outputId="c2b42041-dbd2-4f13-dc56-0f507f07b81f"
# Plot of the expected output
y = np.arange(-5.,6.)
a= 0.5
b = -1.
plt.plot(y, a*np.abs(y)+b)
plt.plot(y,y)
plt.grid()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="cb6340b3" outputId="237a8b48-3c2e-425f-86c7-bc30e34aa7da"
# Plot of the NN output
y_t = torch.tensor(y.reshape(-1,1,1))
net = relu_nn(a,b)
net = net.float()
z = net(y_t.float()).detach().numpy()
plt.plot(y,z)
plt.plot(y,y)
plt.grid()
# + [markdown] id="b0b9d401"
# ### Problem 4.5 - Bitwise NN demapper
# Consider the Gray labeled channel input alphabet
# $$\begin{matrix}
# \text{symbol} & -3 &-1 &1 &3\\
# \text{label} &00 &01 &11 &10
# \end{matrix}$$
# 1. For $\text{SNR} = [5, 6,\dots, 10]$ train the bitwise NN demapper from Figure 4.6.
# + id="5a0804e7"
def get_gray_labeling(msg):
label = np.array([[0,0], [0,1], [1,1], [1,0]])
return label[msg, :]
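A defining property of a Gray labeling is that the labels of neighboring constellation points differ in exactly one bit, so the most likely symbol errors cause only single bit errors. A quick check of the table above:

```python
labels = [(0, 0), (0, 1), (1, 1), (1, 0)]  # labels of symbols -3, -1, 1, 3

def hamming(u, v):
    """Number of bit positions in which two labels differ."""
    return sum(a != b for a, b in zip(u, v))

# Adjacent constellation points must differ in exactly one bit position.
distances = [hamming(labels[i], labels[i + 1]) for i in range(len(labels) - 1)]
print(distances)  # → [1, 1, 1]
```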
# + colab={"base_uri": "https://localhost:8080/"} id="ed983248" outputId="92500e1e-7ae4-478e-c972-2a85fe15494f"
# Create data
M = 4 # cardinality of the alphabet
m = 2 # number of bits per symbol
a = np.random.choice(M,10000) # messages
x = mapper(a, np.array([ -3., -1., 1., 3.])) # symbols
bits = get_gray_labeling(a)
print(bits.shape)
# + id="d41f841e"
# Define the network
class BitwiseNNDemapper(torch.nn.Module):
def __init__(self, num_bitlevels):
super().__init__()
self.num_bitlevels = num_bitlevels
# Output neurons
self.param = []
self.lin_out = []
for j in range(self.num_bitlevels):
lin_out = nn.Linear(1, 1, bias=False)
lin_out.weight = nn.Parameter(torch.Tensor([[1.]])) # trainable output weight: adjust level of confidence via w
self.param = self.param + list(lin_out.parameters()) # prepare a list of parameters to be passed to an optimizer
self.lin_out.append(lin_out)
# Intermediate neurons
self.lin = []
for j in range(1, num_bitlevels):
linj = nn.Linear(1, 1)
linj.weight = nn.Parameter(torch.Tensor([[-1.]]), requires_grad=False)
linj.bias = nn.Parameter(torch.tensor([2.**(num_bitlevels - j)])) # trainable bias: adjust transitions through zero. May not be
# exactly in the middle of two neighboring nodes, in particular
# in the case of low SNR
self.param = self.param + list(linj.parameters()) # prepare a list of parameters to be passed to an optimizer
self.lin.append(linj)
def forward(self, y):
out = []
out.append(self.lin_out[0](y))
for j in range(1, self.num_bitlevels):
y = torch.abs(y)
y = self.lin[j - 1](y)
out.append(self.lin_out[j](y))
return torch.cat(out, 1)
def parameters(self):
return self.param
# + colab={"base_uri": "https://localhost:8080/"} id="6c0a8429" outputId="6c377d5a-ef16-4c40-d384-a38a41a8eacb"
SNRdBs = np.array([5,6,7,8,9,10])
SNRs = 10**(SNRdBs/10)
b_t = torch.Tensor(bits)
# Loss function
loss_fn = nn.BCEWithLogitsLoss()
# Preallocate
equivocations_bw = np.zeros(SNRs.size) #bit-wise equivocations
for idx,snr in enumerate(SNRs):
print(f'--- SNR is: {snr:.2f} ---')
y = awgn_channel(x, snr)
y_t = torch.Tensor(y.reshape(-1,1))
# Initialize network
demap = BitwiseNNDemapper(num_bitlevels=m)
# Optimizer
optimizer = optim.Adam(demap.parameters(), lr=0.005)
for j in range(1000):
l = demap(y_t)
loss = loss_fn(l, b_t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Printout and visualization
if j % 50 == 0:
print(f'epoch {j}: Loss = {loss.detach().numpy() :.4f}')
equivocations_bw[idx]= (2*loss)/np.log(2)
# + [markdown] id="b19c0b95"
# 2. Plot the log probabilities $\ell_0(y)$ and $\ell_1(y)$ output by the trained NN demapper for $y \in [-4,4]$.
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="1f9b7930" outputId="dd519abb-6158-45cd-d0be-b72e38175180"
y = np.arange(-4,4,0.1)
ll = demap(torch.tensor(y.reshape(-1,1)).float()).detach().numpy()
plt.plot(y,ll[:,0],label='L0')
plt.plot(y,ll[:,1],label='L1')
plt.plot(np.array([-3.,-1.,1.,3.]), np.zeros(4), linestyle='', marker='*', label='channel input alphabet')
plt.legend()
plt.grid()
# + [markdown] id="5a8bb974"
# 3. For the trained NN demapper, estimate the sum of the bitwise equivocations.
# $$H(B_0|Y) + H(B_1|Y) $$
# How does it compare to the equivocation achieved by the symbolwise NN demapper
# from Problem 3.2?
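The conversion used in the training loop above — the mean BCE loss in nats, times the $m=2$ bit levels, divided by $\ln 2$ — turns the loss into an estimate of the sum of bitwise equivocations in bits. A minimal sanity check of the nats-to-bits conversion (an illustrative helper, not part of the tutorial scaffold):

```python
import math

def bce_nats(p_true, p_pred):
    """Binary cross-entropy in nats for a single bit."""
    return -(p_true * math.log(p_pred) + (1 - p_true) * math.log(1 - p_pred))

# With an uninformative prediction (p = 0.5) the per-bit loss is ln 2 nats,
# i.e. exactly 1 bit of equivocation after dividing by ln 2.
loss = bce_nats(1.0, 0.5)
print(loss / math.log(2))  # → 1.0
```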
# + id="878c2187"
class awgn_demapper(nn.Module):
def __init__(self):
super().__init__()
self.out = nn.Linear(1, 4)
def forward(self, y):
y = self.out(y)
return y
# + colab={"base_uri": "https://localhost:8080/"} id="87a46dea" outputId="1fca7181-2ea7-4ec2-f663-ffb42d0243ea"
# Code from Tutorial 3
# Prepare data
a_t = torch.Tensor(a.reshape(-1,))
a_t = a_t.type(torch.LongTensor)
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Training loop
equivocations_sw = np.zeros(SNRs.size) #symbol-wise equivocation
for idx,snr in enumerate(SNRs):
print(f'---- SNR is: {snr:.2f}')
y = awgn_channel(x, snr)
y_t = torch.Tensor(y.reshape(-1, 1))
# Initialize network
demap = awgn_demapper()
# Optimizer
optimizer = optim.Adam(demap.parameters(), lr=0.05)
for j in range(1000):
logit = demap(y_t).reshape(-1, 4)
loss = loss_fn(logit, a_t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Printout and visualization
if j % 50 == 0:
print(f'epoch {j}: Loss = {loss.detach().numpy() :.4f}')
equivocations_sw[idx] = loss / np.log(2)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="9df767b6" outputId="51547f1d-699e-41a8-aca4-7912fdf3eb18"
plt.plot(SNRdBs, equivocations_sw, label='Eq symbolwise')
plt.plot(SNRdBs, equivocations_bw, label='Eq bitwise')
plt.legend()
plt.grid()
# + id="11c7823f"
| tut04_to_fill.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial: Learning on Tangent Data
# Lead author: <NAME>.
#
# In this notebook, we demonstrate how any standard machine learning algorithm can be used on data that live on a manifold while still respecting its geometry. In the previous notebooks we saw that linear operations (mean, linear weighting) don't work on manifolds. However, each point on a manifold has an associated tangent space, which is a vector space where all our off-the-shelf ML operations are well defined!
#
# We will use the [logarithm map](02_from_vector_spaces_to_manifolds.ipynb#From-substraction-to-logarithm-map) to go from points of the manifold to vectors in the tangent space at a reference point. This will enable us to use a simple logistic regression to classify our data.
# ## Set up
# Before starting this tutorial, we set the working directory to be the root of the geomstats repository. In order to have the code working on your machine, you need to change this path to the path of your geomstats repository.
# +
import os
import subprocess
geomstats_gitroot_path = subprocess.check_output(
['git', 'rev-parse', '--show-toplevel'],
universal_newlines=True)
os.chdir(geomstats_gitroot_path[:-1])
print('Working directory: ', os.getcwd())
# -
# We import the backend that will be used for geomstats computations and set a seed for reproducibility of the results.
# +
import geomstats.backend as gs
gs.random.seed(2020)
# -
# We import the visualization tools.
import matplotlib.pyplot as plt
# ## The Data
# We use data from the [MSLP 2014 Schizophrenia Challenge](https://www.kaggle.com/c/mlsp-2014-mri/data). The dataset corresponds to the Functional Connectivity Networks (FCN) extracted from resting-state fMRIs of 86 patients at 28 Regions Of Interest (ROIs). Roughly, an FCN corresponds to a correlation matrix and can be seen as a point on the manifold of Symmetric Positive-Definite (SPD) matrices. Patients are separated into two classes: schizophrenic and control. The goal will be to classify them.
#
# First we load the data (reshaped as matrices):
# +
import geomstats.datasets.utils as data_utils
data, patient_ids, labels = data_utils.load_connectomes()
# -
# We plot the first two connectomes from the MSLP dataset with their corresponding labels.
# + tags=["nbsphinx-thumbnail"]
labels_str = ['Healthy', 'Schizophrenic']
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(121)
imgplot = ax.imshow(data[0])
ax.set_title(labels_str[labels[0]])
ax = fig.add_subplot(122)
imgplot = ax.imshow(data[1])
ax.set_title(labels_str[labels[1]])
plt.show()
# -
# In order to compare with a standard Euclidean method, we also flatten the data:
flat_data, _, _ = data_utils.load_connectomes(as_vectors=True)
print(flat_data.shape)
# ## The Manifold
# As mentioned above, correlation matrices are SPD matrices. Because multiple metrics could be used on SPD matrices, we also import two of the most commonly used ones: the Log-Euclidean metric and the Affine-Invariant metric [[PFA2006]](#References). We can use the SPD module from `geomstats` to handle all the geometry, and check that our data indeed belong to the manifold of SPD matrices:
# +
import geomstats.geometry.spd_matrices as spd
manifold = spd.SPDMatrices(28)
ai_metric = spd.SPDMetricAffine(28)
le_metric = spd.SPDMetricLogEuclidean(28)
print(gs.all(manifold.belongs(data)))
# -
# ## The Transformer
# Great! Now, although the sum of two SPD matrices is an SPD matrix, their difference or their linear combination with non-positive weights are not necessarily! Therefore we need to work in a tangent space to perform simple machine learning. But worry not, all the geometry is handled by geomstats, thanks to the preprocessing module.
from geomstats.learning.preprocessing import ToTangentSpace
# What `ToTangentSpace` does is simple: it computes the Frechet Mean of the data set (covered in the previous tutorial), then takes the log of each data point from the mean. This results in a set of tangent vectors, and in the case of the SPD manifold, these are simply symmetric matrices. It then squeezes them to a 1d-vector of size `dim = 28 * (28 + 1) / 2`, and thus outputs an array of shape `[n_patients, dim]`, which can be fed to your favorite scikit-learn algorithm.
#
# Because the mean of the input data is computed, `ToTangentSpace` should be used in a pipeline (like, e.g., scikit-learn's `StandardScaler`) so as not to leak information from the test set at train time.
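To make the mechanics concrete, here is a pared-down, pure-NumPy version of the log-Euclidean variant of this idea (an illustration only, not geomstats' actual implementation, which also handles the Fréchet mean and the affine-invariant metric): take the matrix logarithm via an eigendecomposition and keep the upper triangle, giving the `n * (n + 1) / 2`-dimensional feature vector described above.

```python
import numpy as np

def spd_logm(m):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    eigvals, eigvecs = np.linalg.eigh(m)
    return eigvecs @ np.diag(np.log(eigvals)) @ eigvecs.T

def tangent_features(m):
    """Symmetric log-matrix squeezed to a vector of size n*(n+1)//2."""
    log_m = spd_logm(m)
    iu = np.triu_indices(m.shape[0])
    return log_m[iu]

# Small SPD example: a 2x2 correlation-like matrix.
m = np.array([[1.0, 0.3], [0.3, 1.0]])
vec = tangent_features(m)
print(vec.shape)  # → (3,)
```

For the 28 x 28 connectomes this vectorization yields 28 * 29 / 2 = 406 features per patient.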
# +
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
pipeline = Pipeline(
steps=[('feature_ext', ToTangentSpace(geometry=ai_metric)),
('classifier', LogisticRegression(C=2))])
# -
# We now have all the material to classify connectomes, and we evaluate the model with cross validation. With the affine-invariant metric we obtain:
result = cross_validate(pipeline, data, labels)
print(result['test_score'].mean())
# And with the log-Euclidean metric:
# +
pipeline = Pipeline(
steps=[('feature_ext', ToTangentSpace(geometry=le_metric)),
('classifier', LogisticRegression(C=2))])
result = cross_validate(pipeline, data, labels)
print(result['test_score'].mean())
# -
# But wait, why do the results depend on the metric used? You may remember from the previous notebooks that the Riemannian metric defines the notion of geodesics and distance on the manifold. Both notions are used to compute the Frechet Mean and the logarithms, so changing the metric changes the results, and some metrics may be more suitable than others for different applications.
#
# We can finally compare to a standard Euclidean logistic regression on the flattened data:
flat_result = cross_validate(LogisticRegression(), flat_data, labels)
print(flat_result['test_score'].mean())
# ## Conclusion
# In this example, using Riemannian geometry does not make a big difference compared to applying logistic regression in the ambient Euclidean space, but there are published results that show how useful geometry can be with this type of data (e.g. [[NDV2014]](#References), [[WAZ2918]](#References)). We saw how to use the representation of points on the manifold as tangent vectors at a reference point to fit any machine learning algorithm, and compared the effect of different metrics on the space of symmetric positive-definite matrices.
# ## References
# .. [PFA2006] <NAME>., <NAME>. & <NAME>. A Riemannian Framework for Tensor Computing. Int J Comput Vision 66, 41–66 (2006). https://doi.org/10.1007/s11263-005-3222-z
#
# .. [NDV2014] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, et al.. Transport on Riemannian Manifold for Functional Connectivity-based Classification. MICCAI - 17th International Conference on Medical Image Computing and Computer Assisted Intervention, Polina Golland, Sep 2014, Boston, United States. hal-01058521
#
# .. [WAZ2918] <NAME>., <NAME>., <NAME>., <NAME>. (2018) Riemannian Regression and Classification Models of Brain Networks Applied to Autism. In: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (eds) Connectomics in NeuroImaging. CNI 2018. Lecture Notes in Computer Science, vol 11083. Springer, Cham
| notebooks/03_simple_machine_learning_on_tangent_spaces.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="JkRk_jT1NqUC"
# ## Today's Assignment
# Working with the Titanic dataset, today we focus on the correlations between variables. Using Titanic_train.csv, we first drop the rows with missing values, take the age data, and apply the methods covered in class.
#
# * Q1: Create a new variable (Age_above65): 'Y' if Age>=65, otherwise 'N'.
# * Q2: Bring in sex as well, creating a new variable (Age_above65_female): 'Y' if female or Age>=65, otherwise 'N'.
# * Q3: Using yesterday's course material, verify which of the two new variables has the stronger association with the target variable (Survived_cate).
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="yz28_IgkYdBW" executionInfo={"elapsed": 1558, "status": "ok", "timestamp": 1578021044012, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mB40f7sDArbZ5_DYq02nNcnLD0Ryaf7AhsASSQeLQ=s64", "userId": "03171203089166907199"}, "user_tz": -480} outputId="a12f486c-18b3-4fb5-d06c-f162aebd9444"
# library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import math
import statistics
import seaborn as sns
from IPython.display import display
import sklearn
print(sklearn.__version__)
# If the version is only 0.19, remember to update to the latest
# %matplotlib inline
# Show all columns
pd.set_option('display.max_columns', None)
# Show all rows
pd.set_option('display.max_rows', None)
# Column width setting
pd.set_option('max_colwidth',100)
import pingouin as pg
import researchpy
# + [markdown] id="mGEZdv6JNqUG"
# ## Read in the Data
# + id="zGBwKpIINqUH" outputId="b8824392-0e91-4460-b1b5-a05de5f88fff"
df_train = pd.read_csv("Titanic_train.csv")
print(df_train.info())
# + id="i73ub82KNqUI" outputId="0576c875-d008-4449-8ad9-dd165004fc8e"
#1. Create a new variable Survived_cate and convert its dtype to categorical (object)
#2. Use Survived_cate in place of Survived from the questions for the analysis
df_train['Survived_cate']=df_train['Survived']
df_train['Survived_cate']=df_train['Survived_cate'].astype('object')
print(df_train.info())
# + id="HSsM_aCJNqUI" outputId="cd44479d-f7ae-40a9-faa8-92ff693c6d72"
# First, drop the missing values
## After selecting the columns, drop the rows with missing values
complete_data=df_train[['Age','Survived_cate','Sex']].dropna()
display(complete_data)
# + [markdown] id="vG3z4y9BNqUJ"
# ### Q1: Create a new variable (Age_above65): 'Y' if Age>=65, otherwise 'N'.
#
# + id="7VkqJzv2NqUJ" outputId="cdec26bd-3cbd-4e1a-a6d6-15fdf6ca7a29"
def age_map1(x):
if(x>=65):
return('Y')
else:
return('N')
complete_data['Age_above65']=complete_data['Age'].apply(age_map1)
display(complete_data)
# + [markdown] id="hlSZadKhNqUK"
# ### Q2: Bring in sex as well, creating a new variable (Age_above65_female): 'Y' if female or Age>=65, otherwise 'N'.
# * Hint: watch the following video to find the answer: https://www.youtube.com/watch?v=X2d-wUt5azY
# + id="CWFZaNfYNqUK" outputId="8ba72ede-f8a1-4ea1-e87b-e311dd0ca23e"
def age_map2(row):
if (row.Sex=='female'):
return('Y')
else:
if(row.Age>=65):
return('Y')
else:
return('N')
complete_data['Age_above65_female']=complete_data[['Age','Sex']].apply(age_map2,axis=1)
display(complete_data)
# + [markdown] id="y0if8VccNqUL"
# ### Q3: Using yesterday's course material, verify which of the two new variables has the stronger association with the target variable (Survived_cate).
# * Hint:
# First look at the data types of these variables, then decide which measure of pairwise association to use.
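The researchpy call used below reports Cramér's V directly; for reference, the statistic can also be computed by hand from a contingency table. This sketch uses pure NumPy and illustrative counts rather than the actual Titanic cross-tabulation: the chi-square statistic over the table, then $V = \sqrt{\chi^2 / (n \cdot \min(r-1, c-1))}$.

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for a 2-D contingency table of observed counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected counts under independence of rows and columns.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    df_min = min(table.shape[0], table.shape[1]) - 1
    return np.sqrt(chi2 / (n * df_min))

# Illustrative 2x2 table (NOT the actual Titanic counts).
toy = [[30, 10],
       [10, 30]]
print(round(float(cramers_v(toy)), 4))  # → 0.5
```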
# + id="Z2Ii9lk2NqUL" outputId="d925359f-f9c8-481b-938b-af3718a7c8eb"
#Age_above65
contTable = pd.crosstab(complete_data['Age_above65'], complete_data['Survived_cate'])
contTable
df = min(contTable.shape[0], contTable.shape[1]) - 1
df
crosstab, res = researchpy.crosstab(complete_data['Survived_cate'], complete_data['Age_above65'], test='chi-square')
#print(res)
print("Cramer's value is",res.loc[2,'results'])
def judgment_CramerV(df,V):
if df == 1:
if V < 0.10:
qual = 'negligible'
elif V < 0.30:
qual = 'small'
elif V < 0.50:
qual = 'medium'
else:
qual = 'large'
elif df == 2:
if V < 0.07:
qual = 'negligible'
elif V < 0.21:
qual = 'small'
elif V < 0.35:
qual = 'medium'
else:
qual = 'large'
elif df == 3:
if V < 0.06:
qual = 'negligible'
elif V < 0.17:
qual = 'small'
elif V < 0.29:
qual = 'medium'
else:
qual = 'large'
elif df == 4:
if V < 0.05:
qual = 'negligible'
elif V < 0.15:
qual = 'small'
elif V < 0.25:
qual = 'medium'
else:
qual = 'large'
else:
if V < 0.05:
qual = 'negligible'
elif V < 0.13:
qual = 'small'
elif V < 0.22:
qual = 'medium'
else:
qual = 'large'
return(qual)
judgment_CramerV(df,res.loc[2,'results'])
# + id="uEbsbSEANqUM" outputId="f3912b61-6806-4891-c89d-dcadfe97a4dd"
#Age_above65_female
contTable = pd.crosstab(complete_data['Age_above65_female'], complete_data['Survived_cate'])
contTable
df = min(contTable.shape[0], contTable.shape[1]) - 1
df
crosstab, res = researchpy.crosstab(complete_data['Survived_cate'], complete_data['Age_above65_female'], test='chi-square')
#print(res)
print("Cramer's value is",res.loc[2,'results'])
judgment_CramerV(df,res.loc[2,'results'])
# + [markdown] id="0PBewThbNqUM"
# ### Age_above65_female has the stronger association
| Solution/Day_39_Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction to Containers, or, _Units of Software_
#
# https://www.docker.com/what-container
#
# **<NAME> - Wednesday 23 Jan, DIAG UoK Workshop**
#
# <img src="docker.png" width="150"/>
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <p><a href="https://commons.wikimedia.org/wiki/File:Line3174_-_Shipping_Containers_at_the_terminal_at_Port_Elizabeth,_New_Jersey_-_NOAA.jpg#/media/File:Line3174_-_Shipping_Containers_at_the_terminal_at_Port_Elizabeth,_New_Jersey_-_NOAA.jpg"><img src="https://upload.wikimedia.org/wikipedia/commons/7/7a/Line3174_-_Shipping_Containers_at_the_terminal_at_Port_Elizabeth%2C_New_Jersey_-_NOAA.jpg" alt="Line3174 - Shipping Containers at the terminal at Port Elizabeth, New Jersey - NOAA.jpg" height="427" width="640"></a><br>By Captain <NAME>, NOAA Corps (ret.) - <a rel="nofollow" class="external free" href="http://www.photolib.noaa.gov/coastline/line3174.htm">http://www.photolib.noaa.gov/coastline/line3174.htm</a>, Public Domain, <a href="https://commons.wikimedia.org/w/index.php?curid=69709">Link</a></p>
# + [markdown] slideshow={"slide_type": "subslide"}
# <p><a href="https://commons.wikimedia.org/wiki/File:ZIM_New_York_(ship,_2002)_002.jpg#/media/File:ZIM_New_York_(ship,_2002)_002.jpg"><img src="https://upload.wikimedia.org/wikipedia/commons/2/23/ZIM_New_York_%28ship%2C_2002%29_002.jpg" alt="ZIM New York (ship, 2002) 002.jpg" height="431" width="640"></a><br>By <a rel="nofollow" class="external text" href="https://www.flickr.com/people/61993185@N00"><NAME></a> from Athens, GA - <a rel="nofollow" class="external text" href="https://www.flickr.com/photos/pdenker/6998948883/">DSC04878</a>, <a href="http://creativecommons.org/licenses/by/2.0" title="Creative Commons Attribution 2.0">CC BY 2.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=37301184">Link</a></p>
# + [markdown] slideshow={"slide_type": "subslide"}
# <p><a href="https://commons.wikimedia.org/wiki/File:BNSF_5216_West_Kingman_Canyon_AZ_(293094839).jpg#/media/File:BNSF_5216_West_Kingman_Canyon_AZ_(293094839).jpg"><img src="https://upload.wikimedia.org/wikipedia/commons/5/5a/BNSF_5216_West_Kingman_Canyon_AZ_%28293094839%29.jpg" alt="BNSF 5216 West Kingman Canyon AZ (293094839).jpg" height="426" width="640"></a><br>By <a rel="nofollow" class="external text" href="https://www.flickr.com/people/40563877@N00">BriYYZ</a> from Toronto, Canada - <a rel="nofollow" class="external text" href="https://www.flickr.com/photos/bribri/293094839/">BNSF 5216 West Kingman Canyon AZ</a>, <a href="https://creativecommons.org/licenses/by-sa/2.0" title="Creative Commons Attribution-Share Alike 2.0">CC BY-SA 2.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=25908599">Link</a></p>
# + [markdown] slideshow={"slide_type": "subslide"}
# <p><a href="https://commons.wikimedia.org/wiki/File:Flat-rack-small-vessel.jpeg#/media/File:Flat-rack-small-vessel.jpeg"><img src="https://upload.wikimedia.org/wikipedia/commons/5/55/Flat-rack-small-vessel.jpeg" alt="Flat-rack-small-vessel.jpeg" height="480" width="640"></a><br>By <NAME>, Denmark
# Uploaded by <a href="https://en.wikipedia.org/wiki/User:Chlor" class="extiw" title="en:User:Chlor">Chlor</a> at en.wikipedia - File handed over from Marc Cromme to <a href="https://en.wikipedia.org/wiki/User:Chlor" class="extiw" title="en:User:Chlor">User:Chlor</a> (Hans Schou)
# Transferred from <a class="external text" href="http://en.wikipedia.org">en.wikipedia</a> by <a href="//commons.wikimedia.org/wiki/User:SreeBot" title="User:SreeBot">SreeBot</a>, <a href="https://creativecommons.org/licenses/by-sa/3.0" title="Creative Commons Attribution-Share Alike 3.0">CC BY-SA 3.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=17382578">Link</a></p>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## What about containers for software?
#
# How do we run your deep learning model on:
# - a laptop,
# - the DL Cluster,
# - or in the cloud,
#
# all without affecting stuff that is running already?
# + [markdown] slideshow={"slide_type": "slide"}
# ## What do I need to put into a container?
#
# We want to run processes:
# - a process to train your model
# - a process to apply that model to new cases.
#
# To do this the process needs to be packaged with all its dependencies:
# - data (image data, model weights)
# - runtime environment (linux, windows)
# - external libraries (python, CUDA, ...)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How would we bundle a process and its dependencies?
#
# - Install an OS into a Virtual Machine image (e.g., VirtualBox)
# - Add all the executables and dependencies (by hand?)
# - Shutdown the machine, package, upload somewhere, document on a wiki
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Virtual Machines
#
# <img src="VM@2x.png" width="300">
#
# VMs abstract hardware, allowing many systems to run on the same physical infrastructure.
#
# Problems:
# - Size
# - Speed
# - Hypervisor Compatibility
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## What are containers?
#
# <img src="Container@2x.png" width="300">
#
# > Containers are a way of packaging a process
# > - self-contained
# > - portable
# > - lightweight
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How do containers compare to VMs?
#
# <div>
# <img src="VM@2x.png" width="300" style="float:left"><img src="Container@2x.png" width="300" style="float:right">
# </div>
#
# <div style="float:left">
# Containers abstract away the application level, virtual machines abstract the hardware.
#
# <ul>
# <li> share the same kernel (start instantly, use less memory) </li>
# <li> don't need to include an entire os </li>
# <li> are now a standard </li>
# </ul>
# </div>
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Containers are not a primitive
#
# Low level:
#
# > A combination of two linux kernel primitives:
# > - `namespaces`: controls what the process can **see** (pid, mount, network, ipc, user, ...)
# > - `cgroups`: controls what a process can **use** (memory, cpu, blkio, cpuset, devices, ...)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Why Containers?
#
# Containers help us solve 3 problems:
# - Encapsulation
# - All necessary files, isolation
# - Dependency Management
# - All necessary libraries
# - Immutability
# - We can easily keep many different versions of container images
# - Changes that you make on top of the image do not persist!
# + [markdown] slideshow={"slide_type": "slide"}
# ## What is Docker?
#
# Docker is a **containerization platform**. It provides:
# - a format for building and packaging containers: `container image`
# - a place to download, upload and persist container images: `docker registry`
# - tools for running containers on individual hosts
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Container Registry
#
# [Demo](https://hub.docker.com)
#
# Use the container registry to `pull` and `push` container images to/from your local machine.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Running images
#
# Use `docker run` to start images. Useful extras:
# - `-p`: expose a port
# - `-i`: interactive mode
# - `-t`: allocate a pseudo-TTY
# - `-d`: detach - run in the background
# - `-e`: environment variable
#
# Note that any changes won't be saved!
# - `-v`: attach a volume
#
# Useful commands:
# - `docker image list`
# - `docker volume list`
# - `docker ps`
# - `docker stats`
# - `docker stop`
# - `docker system prune`
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Container Images
#
# Use a `dockerfile` to create container images.
#
# Choose the `FROM` image carefully: ubuntu is 188MB, debian 125MB, alpine 5MB. Try `python:3.6-slim` or `python:3.6-alpine`.
#
# Build them with `docker build`.
# - Remember the layering, don't ever include secrets.
#
#
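A minimal dockerfile of the kind described above might look like this (the file names and application are placeholders, not from the workshop):

```dockerfile
# Small base image keeps the final image lean
FROM python:3.6-slim

# Copy and install dependencies first, so this layer is cached across code edits
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt

# Add the application code (a hypothetical app.py)
COPY app.py /app/
WORKDIR /app

CMD ["python", "app.py"]
```

Each instruction adds a layer, which is why secrets must never be copied in: they remain readable in earlier layers even if deleted by a later instruction.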
# + [markdown] slideshow={"slide_type": "slide"}
# ## Installation
#
# Follow the instructions carefully.
#
# On windows, the docker runtime will run inside a VM that is running the linux kernel.
#
# For CUDA/cuDNN and using your GPU: you'll also need `nvidia-docker`. This is included on the deep learning machines.
# - Portability: you can build container images on your laptop and run them on the cluster.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Conclusion
#
# A `container` is a _Unit of Software_
#
# They're like Virtual Machines, but they share the Operating System kernel and do not persist data (they are immutable).
#
# A `docker container` is a container that is built to use the `docker platform`.
#
| docker-intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <h1><center>CSEN1022:Assignment 3</center></h1>
# <h3><center>Winter 2021</center></h3>
# <hr style="border:2px solid black"> </hr>
# ## <u> Please don't forget to fill in this data </u>
# **Member 1**
#
# Name:<NAME>
#
# GUC-ID:43-15747
#
# Elective Tutorial No.:T07
#
# **Member 2**
#
# Name:<NAME>
#
# GUC-ID:43-16620
#
# Elective Tutorial No.:T03
#
# <hr style="border:2px solid black"> </hr>
# ## Imports (Don't Edit)
# ONLY USE THESE IMPORTS.
# PLEASE DON'T EDIT THIS CELL.
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Read Data
# +
train_plane = np.array([plt.imread('Data/train/airplane/'+str(i)+'.jpg').reshape(-1) for i in range (5000)],dtype = float)
test_plane = np.array([plt.imread('Data/test/airplane/'+str(i)+'.jpg').reshape(-1) for i in range (1000)],dtype = float)
train_bird = np.array([plt.imread('Data/train/bird/'+str(i)+'.jpg').reshape(-1) for i in range (5000)],dtype = float)
test_bird = np.array([plt.imread('Data/test/bird/'+str(i)+'.jpg').reshape(-1) for i in range (1000)],dtype = float)
train_truck = np.array([plt.imread('Data/train/truck/'+str(i)+'.jpg').reshape(-1) for i in range (5000)],dtype = float)
test_truck = np.array([plt.imread('Data/test/truck/'+str(i)+'.jpg').reshape(-1) for i in range (1000)],dtype = float)
# normalize pixel values to [0, 1] (vectorized; the arrays are already float)
for arr in (train_plane, train_bird, train_truck, test_plane, test_bird, test_truck):
    arr /= 255
X_train = np.append(train_plane,train_bird,axis=0)
X_train = np.append(X_train,train_truck,axis=0)
X_test= np.append(test_plane,test_bird,axis=0)
X_test = np.append(X_test,test_truck,axis=0)
X_train.shape
# The three image categories are stacked in order: plane, bird, truck.
# -
# <hr style="border:2px solid black"> </hr>
#
# # Perform K means clustering for all 3 classes (Training Data).
# ### Return (memberships, centroids, dbi) --> (vector, matrix, scalar value).
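The Davies–Bouldin index (dbi) asked for here is the ratio of within-cluster scatter to between-centroid distance, where lower is better. A minimal 1-D sketch for intuition, not the assignment solution — note this textbook form averages each cluster's worst-case ratio, whereas the code below takes the maximum:

```python
def davies_bouldin(clusters):
    """clusters: list of lists of 1-D points."""
    centroids = [sum(c) / len(c) for c in clusters]
    # s_i: mean distance of each cluster's points to its centroid
    scatter = [sum(abs(x - m) for x in c) / len(c) for c, m in zip(clusters, centroids)]
    k = len(clusters)
    worst = [max((scatter[i] + scatter[j]) / abs(centroids[i] - centroids[j])
                 for j in range(k) if j != i)
             for i in range(k)]
    return sum(worst) / k

# tight, well-separated clusters -> small DBI
print(davies_bouldin([[0.0, 1.0], [10.0, 11.0]]))  # -> 0.1
```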
# +
###
#CODE HERE#
###
def get_memberships(centroids,X_train):
centroid0 = centroids[0]
centroid1 = centroids[1]
centroid2 = centroids[2]
membership={"cluster0":np.array([]),"cluster1":np.array([]),"cluster2":np.array([])}
s0=0
s1=0
s2=0
for i in range(len(X_train)):
dist0 = np.linalg.norm(X_train[i]-centroid0)
s0+=dist0
dist1 = np.linalg.norm(X_train[i]-centroid1)
s1+=dist1
dist2 = np.linalg.norm(X_train[i]-centroid2)
s2+=dist2
if(dist0 <dist1 and dist0<dist2 ):
membership["cluster0"]=np.append(membership["cluster0"],[int(i)])
elif(dist1<dist2):
membership["cluster1"]=np.append(membership["cluster1"],[int(i)])
else:
membership["cluster2"]=np.append(membership["cluster2"],[int(i)])
s0 = s0/len(membership["cluster0"])
s1 = s1/len(membership["cluster1"])
s2 = s2/len(membership["cluster2"])
pixels0=np.zeros((len(membership["cluster0"]), len(X_train[0])))
for i in range(len(membership["cluster0"])):
pixels0[i]=np.array([X_train[int(membership["cluster0"][i])]])
print(pixels0.shape)
mean0 = pixels0.mean(axis=0)
del pixels0
pixels1=np.zeros((len(membership["cluster1"]), len(X_train[0])))
for i in range(len(membership["cluster1"])):
pixels1[i]=np.array([X_train[int(membership["cluster1"][i])]])
mean1 = pixels1.mean(axis=0)
print(pixels1.shape)
del pixels1
pixels2=np.zeros((len(membership["cluster2"]), len(X_train[0])))
for i in range(len(membership["cluster2"])):
pixels2[i]=np.array([X_train[int(membership["cluster2"][i])]])
mean2 = pixels2.mean(axis=0)
print(pixels2.shape)
del pixels2
centroids=np.array([mean0,mean1,mean2])
    if np.allclose(centroid0, mean0) and np.allclose(centroid1, mean1) and np.allclose(centroid2, mean2):
m01= np.linalg.norm(centroid0-centroid1)
m02= np.linalg.norm(centroid0-centroid2)
m12= np.linalg.norm(centroid1-centroid2)
r01 = (s0+s1)/m01
r02 = (s0+s2)/m02
r12 = (s1+s2)/m12
dbi = np.max(np.array([r01,r02,r12]))
return (membership,centroids,dbi)
else:
return get_memberships(centroids,X_train)
# return (memberships, centroids, dbi)
# -
def get_random_centroid(X_train):
i = np.random.randint(15000)
j=np.random.randint(15000)
k=np.random.randint(15000)
centroid0 = X_train[i]
centroid1 = X_train[j]
centroid2 = X_train[k]
centroids=np.array([centroid0,centroid1,centroid2])
return centroids
# +
# -
# <hr style="border:2px solid black"> </hr>
#
# # Repeat the previous process 10 times.
# ### Pick the membership vector and the centroids matrix corresponding to the best dbi.
# ##### Make sure you return max_counts and confusion_matrix.
# (keep history in whatever datastructure you like).
# +
def get_best(centroids,X_train):
alloutput = get_memberships(centroids,X_train)
best_membership_matrix =alloutput[0]
best_centroids = alloutput[1]
best_dbi = alloutput[2]
for i in range (9):
centroids = get_random_centroid(X_train)
alloutput = get_memberships(centroids,X_train)
dbi = alloutput[2]
if(dbi< best_dbi):
best_membership_matrix =alloutput[0]
best_centroids = alloutput[1]
best_dbi = dbi
return(best_membership_matrix,best_centroids,best_dbi)
# -
centroids= get_random_centroid(X_train)
best = get_best(centroids,X_train)
membership =best[0]
centroids=best[1]
dbi=best[2]
print(membership)
print(centroids)
print(dbi)
# +
predict={"plane":[],"bird":[],"truck":[]}
# to detect plane
c0=0
c1=0
c2=0
max_count_plane=0
for i in range(5000):
if(i in membership["cluster0"]):
c0+=1
elif(i in membership["cluster1"]):
c1+=1
elif(i in membership["cluster2"]):
c2+=1
if(c0>c1 and c0>c2):
max_count_plane=c0
predict["plane"] = centroids[0]
elif(c1>c2):
max_count_plane=c1
predict["plane"] = centroids[1]
else:
max_count_plane=c2
predict["plane"] = centroids[2]
# to detect bird
c0=0
c1=0
c2=0
max_count_bird=0
for i in range(5000,10000):
if(i in membership["cluster0"]):
c0+=1
elif(i in membership["cluster1"]):
c1+=1
elif(i in membership["cluster2"]):
c2+=1
if(c0>c1 and c0>c2):
max_count_bird=c0
predict["bird"] = centroids[0]
elif(c1>c2):
max_count_bird=c1
predict["bird"] = centroids[1]
else:
max_count_bird=c2
predict["bird"] = centroids[2]
# to detect truck
c0=0
c1=0
c2=0
max_count_truck=0
for i in range(10000,15000):
if(i in membership["cluster0"]):
c0+=1
elif(i in membership["cluster1"]):
c1+=1
elif(i in membership["cluster2"]):
c2+=1
if(c0>c1 and c0>c2):
max_count_truck=c0
predict["truck"] = centroids[0]
elif(c1>c2):
max_count_truck=c1
predict["truck"] = centroids[1]
else:
max_count_truck=c2
predict["truck"] = centroids[2]
max_counts = [max_count_plane,max_count_bird,max_count_truck]
print(max_counts)
# +
#classify plane
classified_plane=0
classified_bird=0
classified_truck=0
for i in range(1000):
dist0 = np.linalg.norm(X_test[i]-predict["plane"])
dist1 = np.linalg.norm(X_test[i]-predict["bird"])
dist2 = np.linalg.norm(X_test[i]-predict["truck"])
if(dist0<dist1 and dist0<dist2):
classified_plane+=1
elif(dist1<dist2 and dist1<dist0):
classified_bird+=1
elif(dist2<dist0 and dist2<dist1):
classified_truck+=1
planeclass = [classified_plane,classified_bird,classified_truck]
#classify bird
classified_plane=0
classified_bird=0
classified_truck=0
for i in range(1000,2000):
dist0 = np.linalg.norm(X_test[i]-predict["plane"])
dist1 = np.linalg.norm(X_test[i]-predict["bird"])
dist2 = np.linalg.norm(X_test[i]-predict["truck"])
if(dist0<dist1 and dist0<dist2):
classified_plane+=1
elif(dist1<dist2 and dist1<dist0):
classified_bird+=1
elif(dist2<dist0 and dist2<dist1):
classified_truck+=1
birdclass = [classified_plane,classified_bird,classified_truck]
#classify truck
classified_plane=0
classified_bird=0
classified_truck=0
for i in range(2000,3000):
dist0 = np.linalg.norm(X_test[i]-predict["plane"])
dist1 = np.linalg.norm(X_test[i]-predict["bird"])
dist2 = np.linalg.norm(X_test[i]-predict["truck"])
if(dist0<dist1 and dist0<dist2):
classified_plane+=1
elif(dist1<dist2 and dist1<dist0):
classified_bird+=1
elif(dist2<dist0 and dist2<dist1):
classified_truck+=1
truckclass = [classified_plane,classified_bird,classified_truck]
confusion_matrix=np.array([planeclass,birdclass,truckclass])
# -
# <hr style="border:2px solid black"> </hr>
#
# ## Don't Edit the Following Cells, Just Run & Save them.
plt.figure(figsize=(10,6))
plt.plot(['Airplane','Bird','Truck'],max_counts,'-o')
plt.title('Best Counts')
plt.rc('figure', figsize=[5,5])
plt.matshow(confusion_matrix,cmap="Blues")
for i in range(0,confusion_matrix.shape[0]):
for j in range(0,confusion_matrix.shape[1]):
plt.annotate(confusion_matrix[i,j],(j,i))
| Assignment_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dual CRISPR Screen Analysis
# # Construct Filter
# <NAME>, CCBB, UCSD (<EMAIL>)
# ## Instructions
#
# To run this notebook reproducibly, follow these steps:
# 1. Click **Kernel** > **Restart & Clear Output**
# 2. When prompted, click the red **Restart & clear all outputs** button
# 3. Fill in the values for your analysis for each of the variables in the [Input Parameters](#input-parameters) section
# 4. Click **Cell** > **Run All**
# <a name = "input-parameters"></a>
#
# ## Input Parameters
g_num_processors = 3
g_trimmed_fastqs_dir = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/processed/runTemp"
g_filtered_fastas_dir = ""
g_code_location = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python"
# ## CCBB Library Imports
import sys
sys.path.append(g_code_location)
# ## Automated Set-Up
# # %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
from ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp
g_filtered_fastas_dir = check_or_set(g_filtered_fastas_dir, g_trimmed_fastqs_dir)
print(describe_var_list(['g_filtered_fastas_dir']))
from ccbbucsd.utilities.files_and_paths import verify_or_make_dir
verify_or_make_dir(g_filtered_fastas_dir)
# ## Info Logging Pass-Through
from ccbbucsd.utilities.notebook_logging import set_stdout_info_logger
set_stdout_info_logger()
# ## Construct Filtering Functions
import enum
# +
# # %load -s TrimType,get_trimmed_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/scaffold_trim.py
class TrimType(enum.Enum):
FIVE = "5"
THREE = "3"
FIVE_THREE = "53"
def get_trimmed_suffix(trimtype):
return "_trimmed{0}.fastq".format(trimtype.value)
# +
# # %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_filterer.py
# standard libraries
import logging
# ccbb libraries
from ccbbucsd.utilities.basic_fastq import FastqHandler, paired_fastq_generator
from ccbbucsd.utilities.files_and_paths import transform_path
__author__ = "<NAME>"
__maintainer__ = "<NAME>"
__email__ = "<EMAIL>"
__status__ = "development"
def get_filtered_file_suffix():
return "_len_filtered.fastq"
def filter_pair_by_len(min_len, max_len, retain_len, output_dir, fw_fastq_fp, rv_fastq_fp):
fw_fastq_handler = FastqHandler(fw_fastq_fp)
rv_fastq_handler = FastqHandler(rv_fastq_fp)
fw_out_handle, rv_out_handle = _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir)
counters = {"num_pairs": 0, "num_pairs_passing": 0}
filtered_fastq_records = _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler, min_len, max_len, retain_len,
counters)
for fw_record, rv_record in filtered_fastq_records:
fw_out_handle.writelines(fw_record.lines)
rv_out_handle.writelines(rv_record.lines)
fw_out_handle.close()
rv_out_handle.close()
return _summarize_counts(counters)
def _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler, min_len, max_len, retain_len, counters):
paired_fastq_records = paired_fastq_generator(fw_fastq_handler, rv_fastq_handler, True)
for curr_pair_fastq_records in paired_fastq_records:
counters["num_pairs"] += 1
_report_progress(counters["num_pairs"])
fw_record = curr_pair_fastq_records[0]
fw_passing_seq = _check_and_trim_seq(_get_upper_seq(fw_record), min_len, max_len, retain_len, False)
if fw_passing_seq is not None:
rv_record = curr_pair_fastq_records[1]
rv_passing_seq = _check_and_trim_seq(_get_upper_seq(rv_record), min_len, max_len, retain_len, True)
if rv_passing_seq is not None:
counters["num_pairs_passing"] += 1
fw_record.sequence = fw_passing_seq
fw_record.quality = _trim_seq(fw_record.quality, retain_len, False)
rv_record.sequence = rv_passing_seq
rv_record.quality = _trim_seq(rv_record.quality, retain_len, True)
yield fw_record, rv_record
def _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir):
fw_fp = transform_path(fw_fastq_fp, output_dir, get_filtered_file_suffix())
rv_fp = transform_path(rv_fastq_fp, output_dir, get_filtered_file_suffix())
fw_handle = open(fw_fp, 'w')
rv_handle = open(rv_fp, 'w')
return fw_handle, rv_handle
def _report_progress(num_fastq_pairs):
if num_fastq_pairs % 100000 == 0:
logging.debug("On fastq pair number {0}".format(num_fastq_pairs))
def _get_upper_seq(fastq_record):
return fastq_record.sequence.upper()
def _check_and_trim_seq(input_seq, min_len, max_len, retain_len, retain_5p_end):
result = None
seq_len = len(input_seq)
if seq_len >= min_len and seq_len <= max_len:
result = _trim_seq(input_seq, retain_len, retain_5p_end)
return result
def _trim_seq(input_seq, retain_len, retain_5p_end):
if len(input_seq) < retain_len:
raise ValueError(
"input sequence {0} has length {1}, shorter than retain length {2}".format(input_seq, len(input_seq),
retain_len))
if retain_5p_end:
return input_seq[:retain_len]
else:
return input_seq[-retain_len:]
# def _write_fasta_record(file_handle, label, sequence):
# lines = [">" + label, sequence]
# file_handle.writelines(lines)
def _summarize_counts(counts_by_type):
summary_pieces = []
sorted_keys = sorted(counts_by_type.keys()) # sort to ensure deterministic output ordering
for curr_key in sorted_keys:
curr_value = counts_by_type[curr_key]
summary_pieces.append("{0}:{1}".format(curr_key, curr_value))
result = ",".join(summary_pieces)
return result
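As a standalone illustration of the trimming rule implemented by `_trim_seq` above (this re-implements it for a quick check; the 19/21 lengths mirror the parameters passed to `parallel_process_paired_reads` below):

```python
def trim_seq_demo(seq, retain_len, retain_5p_end):
    # same rule as _trim_seq: keep retain_len bases from the 5' or the 3' end
    if len(seq) < retain_len:
        raise ValueError("sequence shorter than retain length")
    return seq[:retain_len] if retain_5p_end else seq[-retain_len:]

seq = "ACGTACGTACGTACGTACGTA"         # 21 bases, inside the 19-21 window
print(trim_seq_demo(seq, 19, True))   # first 19 bases
print(trim_seq_demo(seq, 19, False))  # last 19 bases
```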
# +
from ccbbucsd.utilities.parallel_process_fastqs import parallel_process_paired_reads, concatenate_parallel_results
g_parallel_results = parallel_process_paired_reads(g_trimmed_fastqs_dir, get_trimmed_suffix(TrimType.FIVE_THREE),
g_num_processors, filter_pair_by_len, [19, 21, 19, g_filtered_fastas_dir])
# -
print(concatenate_parallel_results(g_parallel_results))
| notebooks/crispr/Dual CRISPR 2-Constuct Filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## VggNet in Keras
# #### We are going to classify Oxford Flowers
# Set Seed
import numpy as np
np.random.seed(42)
# #### Load Dependencies
import keras
from keras.models import Sequential
from keras.layers import Dense,Dropout,Flatten,Conv2D,MaxPooling2D
from keras.layers.normalization import BatchNormalization
# #### Load and Preprocess Data
import tflearn.datasets.oxflower17 as oxflower17
X,y = oxflower17.load_data(one_hot=True)
# #### Design Neural Network
# +
model = Sequential()
model.add(Conv2D(64,kernel_size=(3,3),activation='relu',input_shape=(224,224,3)))
model.add(Conv2D(64,kernel_size=(3,3),activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Conv2D(128,3,activation='relu'))
model.add(Conv2D(128,3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Conv2D(256,3,activation='relu'))
model.add(Conv2D(256,3,activation='relu'))
model.add(Conv2D(256,3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Conv2D(512,3,activation='relu'))
model.add(Conv2D(512,3,activation='relu'))
model.add(Conv2D(512,3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Conv2D(512,3,activation='relu'))
model.add(Conv2D(512,3,activation='relu'))
model.add(Conv2D(512,3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(4096,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(17,activation='softmax'))
# -
model.summary()
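As a quick sanity check of the stack above (assuming Keras's default `padding='valid'` for these Conv2D layers, so each 3x3 conv trims 2 pixels and each 2x2 max-pool halves the size with floor), the spatial size works out as:

```python
size = 224
for n_convs in [2, 2, 3, 3, 3]:  # convs per block, each block ends in a 2x2 pool
    size = (size - 2 * n_convs) // 2
print(size)  # -> 1, so Flatten sees a 1x1x512 feature map
```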
# #### Configure Model
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
# #### Train !!
model.fit(X,y,batch_size=64,epochs=1,verbose=1,validation_split=0.1,shuffle=True)
| TFLiveLessons/vggnet_in_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 64-bit (''r-py-test'': conda)'
# name: python3
# ---
# +
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
## Utils and Library for notebook
from notebook_utils.OpenKbcMSToolkit import ExtractionToolkit as exttoolkit
# Root data path
DATA_PATH = '../data/'
#Data loading
df = pd.read_csv(DATA_PATH+"second_level_data/msigdb_activation_vst_CD8.csv", engine='c', index_col=0).T
meta_data = pd.read_csv(DATA_PATH+'annotation_metadata/EPIC_HCvB_metadata_baseline_updated-share.csv')
longDD_samples, shortDD_samples = exttoolkit.get_sample_name_by_contValues(meta_data, 'HCVB_ID', 'DiseaseDuration', 50)
longDD_samples = list(set(longDD_samples.values.tolist()).intersection(df.columns.tolist())) # intersected with act score matrix
shortDD_samples = list(set(shortDD_samples.values.tolist()).intersection(df.columns.tolist())) # intersected with act score matrix
df = df[longDD_samples+shortDD_samples].dropna() # reform df with intersected samples
X = df.T.values # Training sample
y = [0]*len(longDD_samples)+[1]*len(shortDD_samples) # Training y
X.shape
# +
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.33, random_state=42) # Sample split
sel = SelectFromModel(RandomForestClassifier(n_estimators = 10000)) # Model
sel.fit(X_train, y_train) # fitting
# -
print(len(sel.get_support(indices=True)))
for feature_list_index in sel.get_support(indices=True):
print(df.index[feature_list_index])
sel.threshold_
| notebook/notebook_archive/Jun09182021/etc_RF_ActScore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# To install pylazybam we use system calls to create a virtual environment and then install pylazybam into it.
# Once pylazybam is on PyPI it will be possible to create the venv and then install from within reticulate.
system("python3 -m venv ./reticulate")
system("./reticulate/bin/pip3 install git+git://github.com/genomematt/pylazybam.git")
library(reticulate)
use_virtualenv("./reticulate")
bam <- import("pylazybam.bam")
gzip <- import("gzip")
infile <- gzip$open("pylazybam/tests/data/paired_end_testdata_human.bam")
my_bam = bam$FileReader(infile)
cat(my_bam$header)
# +
as <- list()
xs <- list()
count = 0
for (align in iterate(my_bam)){
count = count + 1
as[[count]] <- bam$get_AS(align)
xs[[count]] <- bam$get_XS(align)
}
# -
df <- data.frame(cbind(as,xs))
df[1:6,]
plot(df)
sessionInfo()
print(bam)
| reticulate_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ivanightingale/cirquits/blob/master/Bernstein_Vazirani_Problem.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="-i5Un_snY4MW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="5db8ce88-f396-4260-b4b2-a38234f365dc"
# !pip install -q qiskit
# + id="8qJkwynnZpGU" colab_type="code" colab={}
from qiskit import *
from qiskit.visualization import plot_histogram
# %matplotlib inline
# + id="nKdWeqnfZrni" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="27f1d3a7-d798-4df3-e803-6b30f3d15c4f"
a = input("""Please choose a coefficient (in bits) for the function f(x) = ax that maps to {0,1}.
It is assumed that any input has the same number of bits as a.
The quantum circuit will decide what it is in one query.
""")
num_bits = len(a)
qr0 = QuantumRegister(num_bits, "q0")
qr1 = QuantumRegister(1, "q1")
cr = ClassicalRegister(num_bits)
circuit = QuantumCircuit(qr0, qr1, cr)
circuit.x(qr1)
circuit.h(qr0)
circuit.h(qr1)
circuit.barrier()
for i in range(num_bits):
bit = int(a[i])
assert(bit == 1 or bit == 0)
if bit == 1:
circuit.cx(qr0[num_bits - i - 1], qr1)
circuit.barrier()
circuit.h(qr0)
circuit.measure(qr0, cr)
# + id="3J_Xwl1s-t-q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 413} outputId="0dadab06-0563-40fb-bee3-2c6e5a30113d"
circuit.draw(output='mpl')
# + id="cqBAzGIT-_PM" colab_type="code" colab={}
simulator = Aer.get_backend('qasm_simulator')
shots = 100
job = execute(circuit, simulator, shots=shots)
result = job.result()
counts = result.get_counts(circuit)
# + id="PmAIlVAXB1Ah" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 368} outputId="09520bbf-8ef3-424c-f9e9-65b30e492459"
plot_histogram(counts)
# + id="iFadrMjICfLI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="31f02b2d-ff9e-4975-a630-3eb1fddad0cf"
max_counts_bits = ""
max_counts = 0
for bits, c in counts.items():
if c > max_counts:
        max_counts = c
max_counts_bits = bits
print("Your chosen coefficient was " + max_counts_bits)
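For contrast with the single quantum query above, here is a pure-Python sketch of the classical approach, which needs one oracle query per bit (probing with each unit vector):

```python
def f(a, x):
    """The hidden function f(x) = a . x mod 2, on equal-length bit strings."""
    return sum(int(ai) & int(xi) for ai, xi in zip(a, x)) % 2

def recover_a(oracle, n):
    # n classical queries: the i-th unit vector reads off the i-th bit of a
    return "".join(str(oracle("0" * i + "1" + "0" * (n - i - 1))) for i in range(n))

print(recover_a(lambda x: f("1011", x), 4))  # -> 1011
```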
| Bernstein_Vazirani_Problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os
import shutil
import random
import torch
import sklearn.utils
from vae.model import VAE
# from gen_model.model import AutoDropoutNN
from loader.dataloader import MIDI_Loader,MIDI_Render
from loader.chordloader import Chord_Loader
from torch.nn import functional as F
new_outputs= np.load("model_v6_output.npy",allow_pickle=True)
test_data_real = np.load("test_data_no_split_8_crop.npy",allow_pickle=True)
render = MIDI_Render(datasetName = "Nottingham", minStep= 0.125)
vae_model = VAE(130, 2048, 3, 12, 128, 128, 32)
vae_model.eval()
dic = torch.load("vae/tr_chord.pt")
for name in list(dic.keys()):
dic[name.replace('module.', '')] = dic.pop(name)
vae_model.load_state_dict(dic)
if torch.cuda.is_available():
vae_model = vae_model.cuda()
outs = []
for kkk in range(len(new_outputs)):
print(kkk)
z1 = new_outputs[kkk][0][:,:128]
z2 = new_outputs[kkk][0][:,128:]
chord_cond = new_outputs[kkk][1]
# print(z1.shape)
# print(z2.shape)
# print(chord_cond.shape)
z1 = torch.from_numpy(z1).float()
z2 = torch.from_numpy(z2).float()
chord_cond = torch.from_numpy(chord_cond).float()
z1 = z1.cuda()
z2 = z2.cuda()
chord_cond = chord_cond.cuda()
res = vae_model.decoder(z1,z2,chord_cond)
    # print(res.detach().cpu().numpy().shape)
q = np.asarray((res.detach().cpu().numpy()))
temp = []
for i in q:
for j in i:
temp.append(np.argmax(j))
q = np.array(temp)
song = {"notes":q, "chords": test_data_real[kkk]["chords"]}
outs.append(song)
# render.data2midi(data = new_outputs[kkk],output = "temp/v6/v6_" + str(kkk) + ".mid")
# render.data2midi(data = test_data_real[kkk],output = "temp/real/real_" + str(kkk) + ".mid")
# -
np.save("model_v6_midi.npy",outs)
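# The nested loop above flattens the decoder's per-step output vectors into note indices via argmax; the same decoding can be done vectorised. A standalone sketch on synthetic data (not part of the original pipeline):

```python
import numpy as np

# Vectorised equivalent of the nested argmax loop above: collapse a batch of
# per-step probability/logit vectors of shape (batch, steps, 130) into note
# indices by taking argmax over the last axis, then flattening.
q = np.zeros((2, 3, 130))
q[0, 0, 60] = 1.0   # synthetic "one-hot" decoder outputs
q[0, 1, 62] = 1.0
q[0, 2, 128] = 1.0  # e.g. a hold/rest token
q[1, :, 64] = 1.0

notes = q.argmax(axis=-1).reshape(-1)
print(notes.tolist())  # -> [60, 62, 128, 64, 64, 64]
```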
| testfile/output_midi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from Bio import SeqIO
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from collections import Counter
# +
#Print total number of seqs before and after filtering (for seqs longer than 1000nt)
full_cov_data = '../all_hcovs/data/human_cov_full.fasta'
full_cov_filtered = '../all_hcovs/results/filtered_cov_full.fasta'
cov_total = 0
cov_seq_lengths = []
cov_filtered = 0
cov_filtered_lengths = []
with open(full_cov_data, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
cov_total+=1
cov_seq_lengths.append(len(record.seq))
with open(full_cov_filtered, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
cov_filtered+=1
cov_filtered_lengths.append(len(record.seq))
print(f'total seqs: {cov_total}\nafter filtering: {cov_filtered}')
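# The filtered FASTA read above was produced upstream; a minimal length filter could look like the following. This is a sketch over plain (id, sequence) pairs so it needs no input file; with Biopython the same predicate would be applied to `SeqIO.parse` records. The 1000 nt threshold matches the comment above:

```python
# Minimal sketch of the upstream length filter (assumed, not shown in this
# notebook): keep only sequences strictly longer than a threshold.

def filter_by_length(records, min_len=1000):
    """Return the (id, sequence) pairs whose sequence exceeds min_len."""
    return [(seq_id, seq) for seq_id, seq in records if len(seq) > min_len]

records = [("short", "ACGT" * 10), ("long", "ACGT" * 300)]
kept = filter_by_length(records)
print([seq_id for seq_id, _ in kept])  # -> ['long']
```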
# +
#Find how many total sequences for each virus
#And then how many of these sequences align to each gene
viruses = ['oc43', '229e', 'nl63', 'hku1']
full_virus_totals = {}
virus_filtered_totals = {}
for virus in viruses:
virus_full = 0
virus_filtered_total = 0
virus_filtered = '../'+str(virus)+'/results/filtered_'+str(virus)+'_full.fasta'
with open(virus_filtered, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
virus_filtered_total+=1
if len(record.seq) > 25000:
virus_full +=1
full_virus_totals[virus] = virus_full
virus_filtered_totals[virus] = virus_filtered_total
for virus in viruses:
print(f'{virus} seqs after filtering: {virus_filtered_totals[virus]} \nfull length {virus} seqs: {full_virus_totals[virus]}')
# -
def get_virus_genes(virus):
if virus == 'nl63':
genes_dict = {'replicase1ab':"replicase polyprotein 1ab", 'spike':"spike protein", 'protein3':"protein 3",
'envelope':"envelope protein", 'membrane':"membrane protein", 'nucleocapsid':"nucleocapsid protein",
's1':'spike_subdomain1', 's2':'spike_subdomain2', 'rdrp':'rna_depedent_rna_polymerase'}
elif virus == '229e':
genes_dict = {'replicase1ab':"replicase polyprotein 1ab", 'replicase1a': "replicase polyprotein 1a", 'spike':"surface glycoprotein",
'protein4a':"4a protein", 'protein4b':"4b protein",
'envelope':"envelope protein", 'membrane':"membrane protein", 'nucleocapsid':"nucleocapsid protein",
's1':'spike_subdomain1', 's2':'spike_subdomain2', 'rdrp':'rna_depedent_rna_polymerase'}
elif virus == 'hku1':
genes_dict = {'replicase1ab':"orf1ab polyprotein", 'he':"hemagglutinin-esterase glycoprotein",
'spike':"spike glycoprotein", 'nonstructural4':"non-structural protein",
'envelope':"small membrane protein", 'membrane':"membrane glycoprotein",
'nucleocapsid':"nucleocapsid phosphoprotein", 'nucleocapsid2':"nucleocapsid phosphoprotein 2",
's1':'spike_subdomain1', 's2':'spike_subdomain2', 'rdrp':'rna_depedent_rna_polymerase'}
elif virus == 'oc43':
genes_dict = {'replicase1ab':"replicase polyprotein", 'nonstructural2a':"NS2a protein",
'he':"HE protein", 'spike':"S protein", 'nonstructural2':"NS2 protein",
'envelope':"NS3 protein", 'membrane':"M protein",
'nucleocapsid':"N protein", 'n2protein':"N2 protein",
's1':'spike_subdomain1', 's2':'spike_subdomain2', 'rdrp':'rna_depedent_rna_polymerase'}
return genes_dict
def virus_seq_coverage(virus):
strain_counts = {}
full_datafile = '../'+str(virus)+'/data/'+str(virus)+'_full.fasta'
with open(full_datafile, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
strain_counts[record.id.split('|')[0]] = 0
gene_list = get_virus_genes(virus).keys()
virus_gene_coverage = {}
virus_year_coverage = {}
for gene in gene_list:
if gene!= 's1' and gene!= 's2':
gene_total = 0
min_year = 2020
max_year = 0
gene_seq_file = '../'+str(virus)+'/data/'+str(virus)+'_'+str(gene)+'.fasta'
with open(gene_seq_file, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
strain_counts[record.id.split('|')[0]]+=1
gene_total+=1
if record.id.split('|')[3][0:4] != 'NA':
if int(record.id.split('|')[3][0:4]) < min_year:
min_year = int(record.id.split('|')[3][0:4])
elif int(record.id.split('|')[3][0:4]) > max_year:
max_year = int(record.id.split('|')[3][0:4])
virus_gene_coverage[gene] = gene_total
virus_year_coverage[gene] = str(min_year) + '-' + str(max_year)
strain_coverage = pd.DataFrame(list(strain_counts.items()), index=[x for x in range(len(strain_counts))],
columns=['strain', 'gene_coverage'])
# plt.hist(strain_coverage['gene_coverage'], bins = [x for x in range(0,max(strain_coverage['gene_coverage']))])
gene_coverage = pd.DataFrame(list(virus_gene_coverage.items()), index=[x for x in range(len(virus_gene_coverage))],
columns=['gene', 'gene_coverage'])
fig, ax = plt.subplots(figsize=(15,8))
ax = sns.barplot(x='gene', y='gene_coverage', color='salmon', data=gene_coverage)
plt.ylim(0,400)
plt.title(str(virus), size=20)
plt.xlabel('gene', size=16)
plt.ylabel('sequences covering each gene', size=16)
no_coverage = len(strain_coverage[strain_coverage['gene_coverage']==0])
at_least_one_gene = len(strain_coverage[strain_coverage['gene_coverage']>=1])
    print(f"{virus} sequences that don't cover any genes: {no_coverage}\nsequences that cover at least one gene: {at_least_one_gene}")
viruses = ['oc43', '229e', 'nl63', 'hku1']
for virus in viruses:
virus_seq_coverage(virus)
# +
def virus_seq_coverage_year(viruses, genes=None):
to_plot = []
for virus in viruses:
        if genes is None:
            gene_list = get_virus_genes(virus).keys()
        else:
            gene_list = genes
year_counts = {}
for gene in gene_list:
if gene!= 's1' and gene!= 's2':
year_counts[gene] = []
gene_seq_file = '../'+str(virus)+'/data/'+str(virus)+'_'+str(gene)+'.fasta'
with open(gene_seq_file, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
if record.id.split('|')[3][0:4] != 'NA':
year_counts[gene]+=[int(record.id.split('|')[3][0:4])]
year_counts[gene] = Counter(year_counts[gene])
for year in year_counts[gene].keys():
to_plot.append({'virus': virus, 'gene': gene, 'year': str(year),
'year_count': year_counts[gene][year]})
# to_plot.append({'virus': virus, 'gene': gene, 'year_counts': year_counts[gene]})
plot_df = pd.DataFrame(to_plot)
color_map = {'oc43': '#CB4335', '229e': '#2E86C1', 'nl63': '#009888', 'hku1': '#7c5295'}
g = sns.catplot(x='year', y='year_count', col='gene', palette=color_map,
col_wrap=4, hue='virus', data = plot_df, aspect=2)
# -
virus_seq_coverage_year(['oc43', '229e', 'nl63', 'hku1'], genes=['spike', 'rdrp'])
virus_seq_coverage_year(['oc43', '229e', 'nl63', 'hku1'])
| data-wrangling/data_distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Moving Average High and Low
# https://www.incrediblecharts.com/indicators/ma_high_low.php
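# The indicator is simply a moving average applied to the High and Low series separately. As a quick sanity check on what `rolling(10).mean()` computes, the last value of the rolling mean equals the hand-computed average of the final 10 observations (synthetic data, purely illustrative):

```python
import numpy as np
import pandas as pd

# pandas' rolling(10).mean() equals the hand-computed mean of the last
# 10 values; the first 9 positions are NaN because the window is incomplete.
highs = pd.Series(np.arange(1.0, 21.0))  # 1, 2, ..., 20
ma_high = highs.rolling(10).mean()

manual = highs.iloc[-10:].mean()         # mean of 11..20 = 15.5
print(ma_high.iloc[-1], manual)          # -> 15.5 15.5
```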
# + outputHidden=false inputHidden=false
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# + outputHidden=false inputHidden=false
# input
symbol = 'AAPL'
start = '2018-08-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
# + outputHidden=false inputHidden=false
import talib as ta
# + outputHidden=false inputHidden=false
df['MA_High'] = df['High'].rolling(10).mean()
df['MA_Low'] = df['Low'].rolling(10).mean()
# + outputHidden=false inputHidden=false
df = df.dropna()
df.head()
# + outputHidden=false inputHidden=false
plt.figure(figsize=(16,10))
plt.plot(df['Adj Close'])
plt.plot(df['MA_High'])
plt.plot(df['MA_Low'])
plt.title('Moving Average of High and Low for Stock')
plt.legend(loc='best')
plt.xlabel('Date')
plt.ylabel('Price')
plt.show()
# -
# # Candlestick with Moving Averages High and Low
# + outputHidden=false inputHidden=false
from matplotlib import dates as mdates
import datetime as dt
df['VolumePositive'] = df['Open'] < df['Adj Close']
df = df.dropna()
df = df.reset_index()
df['Date'] = mdates.date2num(df['Date'].astype(dt.date))
df.head()
# + outputHidden=false inputHidden=false
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(20,16))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,df.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df.Date, df['MA_High'],label='MA High')
ax1.plot(df.Date, df['MA_Low'],label='MA Low')
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax1.axhline(y=dfc['Adj Close'].mean(),color='r')
ax1v = ax1.twinx()
colors = df.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(df.Date, df['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.set_xlabel('Date')
ax1.legend(loc='best')
| Python_Stock/Technical_Indicators/MA_High_Low.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
description = "The Norwegian Blue is a wonderful parrot. This parrot is notable for its exquisite plumage."
pattern = "(parrot)"
replacement = "ex-\\1"
print(re.sub(pattern, replacement, description))
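# The `\1` backreference substitutes the text captured by the first group. The same substitution written with raw strings (which avoid the double backslash) behaves identically:

```python
import re

description = "The Norwegian Blue is a wonderful parrot. This parrot is notable for its exquisite plumage."
# Raw strings let the backreference be written as \1 directly; this is
# equivalent to the escaped "ex-\\1" replacement above.
result = re.sub(r"(parrot)", r"ex-\1", description)
print(result)
# -> The Norwegian Blue is a wonderful ex-parrot. This ex-parrot is notable for its exquisite plumage.
```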
| Chapter07/Exercise111/Exercise111.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# **[LSE-01]** Import the required modules.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# **[LSE-02]** Define the placeholder x.
x = tf.placeholder(tf.float32, [None, 5])
# **[LSE-03]** Define the Variable w.
w = tf.Variable(tf.zeros([5, 1]))
# **[LSE-04]** Define the computation y.
y = tf.matmul(x, w)
# **[LSE-05]** Define the placeholder t.
t = tf.placeholder(tf.float32, [None, 1])
# **[LSE-06]** Define the loss function loss.
loss = tf.reduce_sum(tf.square(y-t))
# **[LSE-07]** Define the training algorithm train_step.
train_step = tf.train.AdamOptimizer().minimize(loss)
# **[LSE-08]** Prepare a session and initialize the Variables.
sess = tf.Session()
sess.run(tf.initialize_all_variables())
# **[LSE-09]** Prepare the training-set data.
# +
train_t = np.array([5.2, 5.7, 8.6, 14.9, 18.2, 20.4,
25.5, 26.4, 22.8, 17.5, 11.1, 6.6])
train_t = train_t.reshape([12,1])
train_x = np.zeros([12, 5])
for row, month in enumerate(range(1, 13)):
for col, n in enumerate(range(0, 5)):
train_x[row][col] = month**n
# -
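# Since the model y = Xw is linear in w, the squared-error loss also has a closed-form minimizer, which makes a useful sanity check for the iterative fit below. This cell is an added sketch using numpy's least-squares solver rather than the TensorFlow session above:

```python
import numpy as np

# Closed-form least squares for the same design matrix: rows are months,
# columns are the powers month**0 .. month**4.
train_t = np.array([5.2, 5.7, 8.6, 14.9, 18.2, 20.4,
                    25.5, 26.4, 22.8, 17.5, 11.1, 6.6]).reshape(12, 1)
train_x = np.array([[month**n for n in range(5)] for month in range(1, 13)],
                   dtype=float)

# lstsq returns the minimizer, the residual sum of squares, the matrix
# rank, and the singular values.
w_direct, residuals, rank, _ = np.linalg.lstsq(train_x, train_t, rcond=None)
print(int(rank))  # -> 5: the Vandermonde-style design matrix has full column rank
```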
# **[LSE-10]** Repeat the gradient-descent parameter optimization 100,000 times.
i = 0
for _ in range(100000):
i += 1
sess.run(train_step, feed_dict={x:train_x, t:train_t})
if i % 10000 == 0:
loss_val = sess.run(loss, feed_dict={x:train_x, t:train_t})
print ('Step: %d, Loss: %f' % (i, loss_val))
# **[LSE-11]** Repeat for another 100,000 iterations.
for _ in range(100000):
i += 1
sess.run(train_step, feed_dict={x:train_x, t:train_t})
if i % 10000 == 0:
loss_val = sess.run(loss, feed_dict={x:train_x, t:train_t})
print ('Step: %d, Loss: %f' % (i, loss_val))
# **[LSE-12]** Check the parameter values after training.
w_val = sess.run(w)
print w_val
# **[LSE-13]** Using the trained parameters, define a function that computes the predicted temperature.
def predict(x):
result = 0.0
for n in range(0, 5):
result += w_val[n][0] * x**n
return result
# **[LSE-14]** Plot the predicted temperatures.
fig = plt.figure()
subplot = fig.add_subplot(1,1,1)
subplot.set_xlim(1,12)
subplot.scatter(range(1,13), train_t)
linex = np.linspace(1,12,100)
liney = predict(linex)
subplot.plot(linex, liney)
| Chapter01/Least squares example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy
import pandas as pd
# Define here your stats and environment
# Stats :
intel = 356
crit_score = 336
hit_score = 141
spellpower = 835
haste = 0
is_csd = True # Chaotic Skyfire Diamond equipped
is_spellstrike = True
is_spellfire = False
is_arcane_brilliance = True #40 intellect +14 motw
is_totem_of_wrath = True
is_draenei_in_group = True
curse_of_elements = 1.1 # not included for wrath, but taken into account for Moonfire
include_moonfire_in_rotation = True
## Update stats
if is_arcane_brilliance:
intel = intel + 40 + 14 #AI + MOTW
if is_totem_of_wrath:
hit_score = hit_score + (3 * 12.6)
crit_score = crit_score + (3 * 22.1)
if is_draenei_in_group:
hit_score = hit_score + 12.6
# Functions
def logfunc(s):
is_log_on = False
if is_log_on:
print(s)
def compute_dps(intel, crit_score, hit_score, spellpower, haste, is_csd, is_spellstrike, is_spellfire):
# print("Provided values : " + " " + str(intel) + " " + str(crit_score) + " " + str(hit_score) + " " + str(spellpower))
# Talents
balance_of_power = 4 # +4% Hit
focused_starlight = 4 # +4% crit for SF and Wrath
moonkin_form = 5 # +5% Crit
improved_mf = 10 # +10% Moonfire crit
starlight_wrath = True # reduce cast time by 0.5s
vengeance = True # +100% Crit damange
lunar_guidance = True # Spellpower bonus = 24% of total intel
moonfury = 1.1 # +10% damage
wrath_of_cenarius = 1.1 # +20% Spellpower for SF | +10% SpellPower for Wrath
fight_length = 90 # in seconds
# Sets bonuses
spellfire = is_spellfire # SP bonus = +7% of total intellect
spellstrike = is_spellstrike # 5% chance to have +92sp for 10s - No ICD
windhawk = False # 8MP/5 KEK
# Meta GEM - Chaotic Skyfire Diamond
csd_equiped = is_csd
# Special Trinkets
eye_of_mag = False # Grants 170 increased spell damage for 10 sec when one of your spells is resisted.
silver_crescent = False # Use: Increases damage and healing done by magical spells and effects by up to 155 for 20 sec. (2 Min Cooldown)
scryer_gem = False # Use: Increases spell damage by up to 150 and healing by up to 280 for 15 sec. (1 Min, 30 Sec Cooldown)
quagmirran = False # Equip: Your harmful spells have a chance to increase your spell haste rating by 320 for 6 secs. (Proc chance: 10%, 45s cooldown)
essence_sapphi = False # Use: Increases damage and healing done by magical spells and effects by up to 130 for 20 sec. (2 Min Cooldown)
# Translating stats to %
# At level 70, 22.1 Spell Critical Strike Rating increases your chance to land a Critical Strike with a Spell by 1%
# At level 70, 12.6 Spell Hit Rating increases your chance to Hit with Spells by 1%. Hit cap is 202 FLAT (not including talents & buffs).
# Druids receive 1% Spell Critical Strike chance for every 79.4 points of intellect.
# Moonfire base damage : 305 to 357 Arcane damage and then an additional 600 Arcane damage over 12 sec.
MF_coeff = 0.15
MF_coeff_dot = 0.52
# Starfire base damage : 605 to 711 Arcane damage -> 658 on average
# SF_coeff = 1
# SF_average_damage = 658
wrath_coeff = 0.65
wrath_average_damage = 407.5
MF_average_damage = 331
MF_average_dot_damage = 600
partial_coeff = 0.5 # For the moment, let's say that in average, partials get 50% damage reduction
wrath_cast_time = 1.5
wrath_cast_time_ng = 1
# sf_cast_time = 3
# sf_cast_time_ng = 2.5
# Apply spell haste coefficients here
# Spell power calculation for fight SP + lunar guidance
if lunar_guidance:
spellpower = spellpower + 0.24 * intel
if spellfire:
spellpower = spellpower + 0.08 * intel
# Hit chance
# 12.6 Spell Hit Rating -> 1%
hit_chance = min(99, 83 + (hit_score/12.6) + balance_of_power )
logfunc("Hit chance is : " + str(hit_chance))
# Crit chance
# At level 70, 22.1 Spell Critical Strike Rating -> 1%
# Druids receive 1% Spell Critical Strike chance for every 79.4 points of intellect.
MF_crit_percent = crit_score/22.1 + intel/79.4 + improved_mf + moonkin_form + focused_starlight
logfunc("Moonfire crit chance is : " + str(MF_crit_percent))
    wrath_crit_percent = crit_score/22.1 + intel/79.4 + moonkin_form + focused_starlight
logfunc("Wrath crit chance is : " + str(wrath_crit_percent))
logfunc("Spellpower is : " + str(spellpower))
# Crit coeff
if csd_equiped:
crit_coeff = 2.09
else:
crit_coeff = 2
# Spellstrike bonus:
if spellstrike:
spellstrike_bonus = 92
else:
spellstrike_bonus = 0
# Prepare and launch the simulations
loop_size = 100 # number of fights simulated
average_dps = 0
n = 0
while n < loop_size:
n = n +1
# Initialization
total_damage_done = 0
damage = 0
fight_time = 0
spellstrike_uptime = 0
ff_uptime = 0
mf_uptime = 0
is_ff_up = False
is_mf_up = False
is_ng = False
spellstrike_proc = False
ng_proc = False
# Time to kick ass and chew bubble gum
while fight_time <= fight_length:
loop_duration = 1 #GCD - can't be less, it's the rule !
damage = 0
if spellstrike_proc:
fight_spell_power = spellpower + spellstrike_bonus
else:
fight_spell_power = spellpower
# if FF not up, cast FF
if not is_ff_up:
logfunc("Casting Faerie Fire !")
is_crit = False # can't crit on FF
damage = 0 # and no damage applied
if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
is_hit = True
ff_uptime = 40
is_ff_up = True
# Test if spellstrike is proc
spellstrike_proc = (numpy.random.randint(1, high = 101, size = 1) <= 10)
else:
is_hit = False
logfunc("Faerie Fire -> Resist !")
loop_duration = 1 #GCD
# if Moonfire not up, cast Moonfire
else:
if not is_mf_up and include_moonfire_in_rotation:
logfunc("Casting Moonfire !")
loop_duration = 1 #GCD because we cast a spell
# Is it a hit ?
if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
is_hit = True
# Is it a crit ?
is_crit = (numpy.random.randint(1, high = 101, size = 1) <= MF_crit_percent)
# Is it a partial ?
                        if(numpy.random.randint(1, high = 101, size = 1) > hit_chance):
                            damage = MF_average_damage + MF_coeff * fight_spell_power * partial_coeff
                        else:
                            damage = MF_average_damage + MF_coeff * fight_spell_power
# Apply damage
if is_crit:
damage = damage * crit_coeff
# DoT :
damage = curse_of_elements * (damage + MF_average_dot_damage + (MF_coeff_dot * fight_spell_power * min(12, (fight_length - fight_time - 1))/12))
# There is a Hit ! update model
is_mf_up = True
mf_uptime = 12
else:
is_hit = False
logfunc("Moonfire -> Resist ! ")
else:
# Cast Wrath
logfunc("Casting Wrath !")
# Is it a hit ?
if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
is_hit = True
# Is it a crit ?
is_crit = (numpy.random.randint(1, high = 101, size = 1) <= wrath_crit_percent)
# Is it a partial ?
if(numpy.random.randint(1, high = 101, size = 1) > hit_chance):
logfunc("Partial hit !")
damage = (wrath_average_damage + (wrath_coeff * fight_spell_power * wrath_of_cenarius * partial_coeff )) * moonfury
# logfunc("Damage done : " + str(damage))
else:
damage = (wrath_average_damage + (wrath_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury
logfunc("Damage done : " + str(damage))
if is_crit:
damage = damage * crit_coeff
else:
is_hit = False
logfunc("Wrath -> Resist ! ")
if is_ng:
loop_duration = wrath_cast_time_ng
else:
loop_duration = wrath_cast_time
is_ng = False # Consume NG once wrath is cast
# Update time and model
fight_time = fight_time + loop_duration
ff_uptime = ff_uptime - loop_duration
mf_uptime = mf_uptime - loop_duration
# Check the timer on buffs / debuffs
spellstrike_uptime = spellstrike_uptime - loop_duration
if spellstrike_uptime <= 0:
spellstrike_proc = False
if mf_uptime <= 0:
is_mf_up = False
if ff_uptime <= 0:
is_ff_up = False
# @TODO if trinket available, activate
# Update nature's grace
if is_crit:
is_ng = True
            total_damage_done = total_damage_done + damage
# If there is a Hit, Check if spellstrike is proc or refreshed :
if is_hit:
if numpy.random.randint(1, high = 11, size = 1) == 10:
spellstrike_proc = True
spellstrike_uptime = 10
logfunc("Spellstrike proc !!!")
# Print output
logfunc("Loop Duration : " + str(loop_duration))
logfunc("Loop Damage : " + str(damage))
logfunc("Overall damage done : " + str(total_damage_done))
logfunc("Overall DPS : " + str(total_damage_done/fight_time)) # We use fight_time here in case wrath lands after the fight_length mark
average_dps = average_dps + (total_damage_done/fight_time)
real_average_dps = average_dps / loop_size
# print("Average DPS : " + str(real_average_dps))
return(real_average_dps)
# -
# Run this part to calculate the average dps value of your configuration
n = 0
all_dps = 0
loop_size = 100
damage = 0
while n < loop_size :
n = n + 1
# all_dps = all_dps + compute_dps(intel = 381, crit_score = 243, hit_score = 135, spellpower = 1162, haste = 0)
all_dps = all_dps + compute_dps(intel, crit_score, hit_score, spellpower, haste,
is_csd, is_spellstrike, is_spellfire)
average_dps = all_dps / loop_size
print("Average DPS with current configuration is : ", average_dps)
# +
# This part is useful if you want to generate random values and find stats coefficients
#compute_dps(intel = 359, crit_score = 269, hit_score = 52, spellpower = 1102, haste = 0,
# is_csd = False, is_spellstrike = True, is_spellfire = True)
import timeit
print("Starting loop")
# Generate the dataset
dataset_size = 100 # number of rows
start = timeit.default_timer()
# Adapting the function to panda dataframes
v_compute_dps = numpy.vectorize(compute_dps)
##
# Creating the randomized dataframe for Chaotic Skyfire Diamond
##
df_csd = pd.DataFrame(dict(
intel = numpy.random.randint(300, high = 450, size = dataset_size),
crit = numpy.random.randint(150, high = 320, size = dataset_size),
hit = numpy.random.randint(110, high = 156, size = dataset_size),
sp = numpy.random.randint(650, high = 1400, size = dataset_size),
haste = numpy.random.randint(1, high = 11, size = dataset_size),
spellstrike = True,
spellfire = False,
CSD = True
))
print('Dataframe generated for CSD, adding dps for each row.')
df_csd['dps'] = v_compute_dps(df_csd.intel, df_csd.crit, df_csd.hit, df_csd.sp, df_csd.haste,
df_csd.CSD, df_csd.spellstrike, df_csd.spellfire)
stop = timeit.default_timer()
print("End of loop. Duration: ", stop - start )
df_csd.to_csv(path_or_buf='C:/Users/vincent-pc/Wow TC/Jupyter/output/dps_generated-CSD2.csv')
# -
##
# Spellstrike values
##
start = timeit.default_timer()
df_spellstrike = pd.DataFrame(dict(
intel = numpy.random.randint(300, high = 450, size = dataset_size),
crit = numpy.random.randint(150, high = 320, size = dataset_size),
hit = numpy.random.randint(110, high = 156, size = dataset_size),
sp = numpy.random.randint(650, high = 1400, size = dataset_size),
haste = numpy.random.randint(1, high = 11, size = dataset_size),
spellstrike = True,
spellfire = True,
CSD = False
))
print('Dataframe generated for Spellstrike, adding dps for each row.')
df_spellstrike['dps'] = v_compute_dps(df_spellstrike.intel, df_spellstrike.crit, df_spellstrike.hit, df_spellstrike.sp,
df_spellstrike.haste, df_spellstrike.CSD, df_spellstrike.spellstrike, df_spellstrike.spellfire)
stop = timeit.default_timer()
print("End of loop. Duration: ", stop - start )
df_spellstrike.to_csv(path_or_buf='C:/Users/vincent-pc/Wow TC/Jupyter/output/dps_generated-Spellstrike2.csv')
df.to_csv(path_or_buf='C:/Users/vincent-pc/Wow TC/Jupyter/output/dps_generated.csv')
# +
# Comparison between CSD and Spellstrike
is_csd = True # Chaotic Skyfire Diamond equipped
is_spellfire = False
n = 0
all_dps = 0
loop_size = 100
while n < loop_size :
n = n + 1
# all_dps = all_dps + compute_dps(intel = 381, crit_score = 243, hit_score = 135, spellpower = 1162, haste = 0)
all_dps = all_dps + compute_dps(intel = 379, crit_score = 324, hit_score = 111, spellpower = 1117, haste = 0,
is_csd = True, is_spellstrike = True, is_spellfire = False)
average_dps = all_dps / loop_size
print("Average DPS with CSD is : ", average_dps)
# Spellstrike
is_csd = False # Chaotic Skyfire Diamond not equipped
is_spellfire = True
n = 0
all_dps = 0
loop_size = 100
while n < loop_size :
n = n + 1
all_dps = all_dps + compute_dps(intel = 364, crit_score = 229, hit_score = 152, spellpower = 1182, haste = 0,
is_csd = False, is_spellstrike = True, is_spellfire = True)
average_dps = all_dps / loop_size
print("Average DPS with Spellstrike is : ", average_dps)
# -
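# The rating-to-percent conversions used throughout the simulator (22.1 crit rating per 1% crit, 12.6 hit rating per 1% hit, 1% spell crit per 79.4 intellect for druids, base 83% hit vs. a boss with a 99% cap) can be isolated into small helpers. This is an added sketch; the simulator above inlines the same formulas:

```python
# Level-70 rating-to-percent conversions, pulled out of compute_dps for
# clarity: 22.1 crit rating = 1% crit, 12.6 hit rating = 1% hit, and
# druids gain 1% spell crit per 79.4 intellect.

def spell_crit_percent(crit_rating, intellect, talent_bonus=0.0):
    return crit_rating / 22.1 + intellect / 79.4 + talent_bonus

def spell_hit_percent(hit_rating, talent_bonus=0.0, base=83.0, cap=99.0):
    # Base hit chance against a boss is 83%; total hit is capped at 99%.
    return min(cap, base + hit_rating / 12.6 + talent_bonus)

print(round(spell_crit_percent(221.0, 79.4), 2))   # -> 11.0
print(spell_hit_percent(252.0, talent_bonus=4.0))  # -> 99.0 (capped)
```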
| dps_generator-Wrath.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## [Objective]
# Keep up with machine-learning projects and the latest techniques.
# ## [Key Points]
# By reading machine-learning articles from top companies, understand how each company applies machine learning in real projects.
# ## [Homework]
# Today's homework is to look at the machine-learning projects run by the global ML giants. Taking Google as an example, the figure below shows the number of internal Google projects using machine learning; over time it has long since passed 2,000 projects.
# 
# Below are the blogs and machine-learning sites of several well-known companies (feel free to search for others). These sites collect the latest machine-learning projects and technical articles. Pick one article, read it, and try to answer:
# 1. What is the goal of the project? (What problem does it solve?)
# 2. What technique is used? (Just the name, e.g. image classification with a CNN)
# 3. What is the data source?
# - [Google AI blog](https://ai.googleblog.com/)
# - [Facebook Research blog](https://research.fb.com/blog/)
# - [Apple machine learning journal](https://machinelearning.apple.com/)
# - [機器之心](https://www.jiqizhixin.com/)
# - [雷鋒網](http://www.leiphone.com/category/ai)
| hw/Day_003_HW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example 6: Running non-Poissonian Scans with MultiNest
# In this example we show a simple example of how to run a scan using templates that follow non-Poisson statistics.
#
# To this end we perform a simple analysis of a small region around the north galactic pole, looking for isotropically distributed point sources.
#
# **NB:** This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
# +
# Import relevant modules
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import numpy as np
import corner
import matplotlib.pyplot as plt
from matplotlib import rcParams
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import psf_correction as pc # module for determining the PSF correction
from NPTFit import dnds_analysis # module for analysing the output
# -
# ## Step 1: Setup an instance of NPTFit, add data, mask and templates
# As in the Poissonian case (see Example 3), the first step is to create an instance of `nptfit.NPTF` and add to it data, a mask and templates. Here we choose an ROI that is a small ring around the galactic north pole. In this small region we will only use isotropically distributed templates.
n = nptfit.NPTF(tag='non-Poissonian_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy').astype(np.int32)
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
analysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0)
n.load_mask(analysis_mask)
# Now we add the required templates. The `iso_p` template is in units of photon counts and is exposure corrected; we will use it for the Poissonian model. For the non-Poissonian model the underlying PS distribution is truly isotropic, so we use a uniform template.
iso_p = np.load('fermi_data/template_iso.npy')
n.add_template(iso_p, 'iso_p')
iso_np = np.ones(len(iso_p))
n.add_template(iso_np, 'iso_np',units='PS')
# ## Step 2: Add Background Models to the Fit
# We now add background models. For the details of adding a Poissonian model, see Example 3.
#
# For the non-Poissonian model the format is similar, except that we must account for the additional parameters that describe a non-Poissonian (NP) template. In `nptfit`, NP templates are determined by source count functions, which we allow to be specified by a broken power law with an arbitrary number of breaks.
#
# The simplest possible example consistent with a finite number of sources is a singly broken power law, which is specified by four parameters: the template normalisation $A$, indices above and below the break $n_1$ and $n_2$, and the location of the break $S_b$.
#
# In general, for a source count function with $\ell$ breaks, the $2\ell+2$ parameters are specified as follows:
#
# $$\left[ A, n_1, \ldots, n_{\ell+1}, S_b^{(1)}, \ldots, S_b^{(\ell)} \right]\,,$$
#
# where $n_1$ is the highest index and $S_b^{(1)}$ the highest break.
#
# Priors must be entered as an array where each element is an array of the priors for each unfixed parameter. For multiply broken power laws it is possible to specify the breaks in terms of the highest break, in which case the option `dnds_model=specifiy_relative_breaks` should be used.
#
# Fixed parameters are similarly entered as an array, where the first element is the location of the parameter to be fixed (an integer), and the second element is the value to which it should be fixed.
#
# In the example below we add an isotropically distributed non-Poissonian template with a log-flat normalisation, linear-flat indices, and a fixed break.
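# As a concrete illustration of the parameterisation above, a singly broken power law dN/dS with parameters [A, n1, n2, Sb] can be written so the two pieces match at the break. This is a standalone numerical sketch, independent of the NPTFit internals:

```python
import numpy as np

def broken_power_law(s, A, n1, n2, sb):
    """Singly broken power law dN/dS: index n1 above the break, n2 below,
    normalised so that the two pieces agree at s = sb."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= sb,
                    A * (s / sb) ** (-n1),
                    A * (s / sb) ** (-n2))

# The two branches meet continuously at the break (Sb = 172.52, as fixed
# in the fit above; the indices here are arbitrary illustrative values).
A, n1, n2, sb = 1.0, 2.5, -0.5, 172.52
below = broken_power_law(sb * 0.999, A, n1, n2, sb)
above = broken_power_law(sb * 1.001, A, n1, n2, sb)
print(abs(below - above) < 1e-2)  # -> True
```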
n.add_poiss_model('iso_p','$A_\mathrm{iso}$', False, fixed=True, fixed_norm=1.51)
n.add_non_poiss_model('iso_np',
['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'],
[[-6,1],[2.05,30],[-2,1.95]],
[True,False,False],
fixed_params = [[3,172.52]])
# ## Step 3: Configure the Scan with the PSF correction
# For a non-Poissonian fit, we need to specify the PSF correction at the stage of configuring the scan. The details of this are described in Example 5. These are calculated using `psf_correction.py` and then passed to the `NPTF` via `configure_for_scan`.
#
# At this stage we also specify the number of exposure regions to be used. Here we take `nexp=1` for a simple example. Generally increasing `nexp` leads to more accurate results, but also increases the runtime of the code.
# +
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary = pc_inst.f_ary
df_rho_div_f_ary = pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1)
# -
# ## Step 4: Perform the Scan
# Next we perform the scan. The syntax is identical to the Poissonian case, described in Example 3. Note that even though we float fewer parameters than in the Poissonian example, the runtime is longer here: the NPTF likelihood is inherently more complicated and so takes longer to evaluate.
n.perform_scan(nlive=800)
# ## Step 5: Analyze the Output
# Here we analyze the output using the same commands as in the Poissonian example.
n.load_scan()
cs=dnds_analysis.Analysis(n)
cs.make_triangle()
# We also show a plot of the source count function, although a careful explanation of the details is deferred until Example 9.
# +
cs.plot_source_count_median('iso_np',smin=0.01,smax=10000,nsteps=1000,spow=2,color='forestgreen')
cs.plot_source_count_band('iso_np',smin=0.01,smax=10000,nsteps=1000,qs=[0.16,0.5,0.84],spow=2,color='forestgreen',alpha=0.3)
plt.yscale('log')
plt.xscale('log')
plt.xlim([1e-10,1e-7])
plt.ylim([1e-15,1e-9])
plt.tick_params(axis='x', length=5,width=2,labelsize=18)
plt.tick_params(axis='y',length=5,width=2,labelsize=18)
plt.ylabel('$F^2 dN/dF$ [counts cm$^{-2}$s$^{-1}$deg$^{-2}$]', fontsize=18)
plt.xlabel('$F$ [counts cm$^{-2}$ s$^{-1}$]', fontsize=18)
| examples/Example6_Running_nonPoissonian_Scans.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
# # Building a single neural network layer using `Flux.jl`
#
# In this notebook, we'll move beyond binary classification. We'll try to distinguish between three fruits now, instead of two. We'll do this using **multiple** neurons arranged in a **single layer**.
# ## Read in and process data
# We can start by loading the necessary packages and getting our data into working order with code similar to what we used at the beginning of the previous notebooks, except that now we will combine three different apple data sets and add in some grapes to the fruit salad!
# +
# Load apple data in with `readdlm` for each file
apples1, applecolnames1 = readdlm("data/Apple_Golden_1.dat", '\t', header = true)
apples2, applecolnames2 = readdlm("data/Apple_Golden_2.dat", '\t', header = true)
apples3, applecolnames3 = readdlm("data/Apple_Golden_3.dat", '\t', header = true)
# Check that the column names are the same for each apple file
println(applecolnames1 == applecolnames2 == applecolnames3)
# -
# Since each apple file has columns with the same headers, we know we can concatenate these columns from the different files together:
apples = vcat(apples1, apples2, apples3)
# And now let's build an array called `x_apples` that stores data from the `red` and `blue` columns of `apples`. From `applecolnames1`, we can see that these are the 3rd and 5th columns of `apples`:
applecolnames1[3], applecolnames1[5]
length(apples[:, 1])
x_apples = [ [apples[i, 3], apples[i, 5]] for i in 1:length(apples[:, 3]) ]
# Similarly, let's create arrays called `x_bananas` and `x_grapes`:
# +
# Load data from *.dat files
bananas, bananacolnames = readdlm("data/Banana.dat", '\t', header = true)
grapes1, grapecolnames1 = readdlm("data/Grape_White.dat", '\t', header = true)
grapes2, grapecolnames2 = readdlm("data/Grape_White_2.dat", '\t', header = true)
# Concatenate data from two grape files together
grapes = vcat(grapes1, grapes2)
# Check that column 3 and column 5 refer to the "red" and "blue" columns from each file
println("All column headers are the same: ", bananacolnames == grapecolnames1 == grapecolnames2 == applecolnames1)
# Build x_bananas and x_grapes from bananas and grapes
x_bananas = [ [bananas[i, 3], bananas[i, 5]] for i in 1:length(bananas[:, 3]) ]
x_grapes = [ [grapes[i, 3], grapes[i, 5]] for i in 1:length(grapes[:, 3]) ]
# -
# ## One-hot vectors
# Now we wish to classify *three* different types of fruit. It is not clear how to encode these three types using a single output variable; indeed, in general this is not possible.
#
# Instead, we have the idea of encoding $n$ output types from the classification into *vectors of length $n$*, called "one-hot vectors":
#
# $$
# \textrm{apple} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix};
# \quad
# \textrm{banana} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix};
# \quad
# \textrm{grape} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
# $$
#
# The term "one-hot" refers to the fact that each vector has a single $1$, and is $0$ otherwise.
#
# Effectively, the first neuron will learn whether or not (1 or 0) the data corresponds to an apple, the second whether or not (1 or 0) it corresponds to a banana, etc.
# `Flux.jl` provides an efficient representation for one-hot vectors, using advanced features of Julia so that it does not actually store these vectors, which would be a waste of memory; instead, `Flux` just records the position of the non-zero element. To us, however, it looks as if all the information is being stored:
# +
using Flux: onehot
onehot(1, 1:3)
# -
# #### Exercise 1
#
# Make an array `labels` that gives the labels (1, 2 or 3) of each data point. Then use `onehot` to encode the information about the labels as a vector of `OneHotVector`s.
# ## Single layer in Flux
# Let's suppose that there are two pieces of input data, as in the previous single neuron notebook. Then the network has 2 inputs and 3 outputs:
include("draw_neural_net.jl")
draw_network([2, 3])
# `Flux` allows us to express this again in a simple way:
using Flux
model = Dense(2, 3, σ)
# #### Exercise 2
#
# Now what do the weights inside `model` look like? How does this compare to the diagram of the network layer above?
# ## Training the model
# Despite the fact that the model is now more complicated than the single neuron from the previous notebook, the beauty of `Flux.jl` is that the rest of the training process **looks exactly the same**!
# #### Exercise 3
#
# Implement training for this model.
# #### Exercise 4
#
# Visualize the result of the learning for each neuron. Since each neuron is sigmoidal, we can get a good idea of the function by just plotting a single contour level where the function takes the value 0.5, using the `contour` function with keyword argument `levels=[0.5, 0.501]`.
# +
plot()
contour!(0:0.01:1, 0:0.01:1, (x,y)->model([x,y]).data[1], levels=[0.5, 0.501], color = cgrad([:blue, :blue]))
contour!(0:0.01:1, 0:0.01:1, (x,y)->model([x,y]).data[2], levels=[0.5,0.501], color = cgrad([:green, :green]))
contour!(0:0.01:1, 0:0.01:1, (x,y)->model([x,y]).data[3], levels=[0.5,0.501], color = cgrad([:red, :red]))
scatter!(first.(x_apples), last.(x_apples), m=:cross, label="apples")
scatter!(first.(x_bananas), last.(x_bananas), m=:circle, label="bananas")
scatter!(first.(x_grapes), last.(x_grapes), m=:square, label="grapes")
# -
# #### Exercise 5
#
# Interpret the results by checking which fruit each neuron was supposed to learn and what it managed to achieve.
| introductory-tutorials/broader-topics-and-ecosystem/intro-to-ml/16. Tools - Using Flux to build a single layer neural net.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="kgYWNPhf801A"
# # Setup
# + id="ftK7cgx-ra5X"
#Save Checkpoints after each round of active learning
store_checkpoint=True
#Mount persistent storage for logs and checkpoints (Drive)
persistent=False
#Load initial model.
'''
Since there is a need to compare all strategies with same initial model,
the base model only needs to be trained once.
True: Will load the model from the model directory configured in section Initial Training
and Parameter Definitions
False: Will train a base model and store it in model directory configured in section Initial Training
and Parameter Definitions
'''
load_model = False
'''
This notebook defaults to 1000 points per class; to change the number of samples per class, change the variable
class_count in section Initial Training and Parameter Definitions.
'''
# + [markdown] id="-7DmzUo2vZZ_"
# **Installations**
# + id="wKMPt_L5bNeu"
# !pip install apricot-select
# !git clone https://github.com/decile-team/distil.git
# !git clone https://github.com/circulosmeos/gdown.pl.git
# Move the cloned repo aside and pull the inner `distil` package into the working directory
# !mv distil asdf
# !mv asdf/distil .
# + [markdown] id="Maz6VJxS787x"
# **Imports, Training Class Definition, Experiment Procedure Definition**
# + id="V9-8qRo8KD3a"
import pandas as pd
import numpy as np
import copy
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torch.utils.data import Subset
import torch.nn.functional as F
from torch import nn
from torchvision import transforms
from torchvision import datasets
from PIL import Image
import torch
import torch.optim as optim
from torch.autograd import Variable
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import time
import math
import random
import os
import pickle
from numpy.linalg import cond
from numpy.linalg import inv
from numpy.linalg import norm
from scipy import sparse as sp
from scipy.linalg import lstsq
from scipy.linalg import solve
from scipy.optimize import nnls
from distil.active_learning_strategies.badge import BADGE
from distil.active_learning_strategies.glister import GLISTER
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.entropy_sampling import EntropySampling
from distil.active_learning_strategies.random_sampling import RandomSampling
from distil.active_learning_strategies.fass import FASS
from distil.active_learning_strategies.core_set import CoreSet
from distil.active_learning_strategies.least_confidence import LeastConfidence
# Needed by the adversarial_* experiment batches below (module paths assume distil's package layout)
from distil.active_learning_strategies.adversarial_bim import AdversarialBIM
from distil.active_learning_strategies.adversarial_deepfool import AdversarialDeepFool
from distil.utils.models.resnet import ResNet18
from distil.utils.data_handler import DataHandler_MNIST, DataHandler_CIFAR10, DataHandler_Points, DataHandler_FASHION_MNIST, DataHandler_SVHN
from distil.utils.dataset import get_dataset
from distil.utils.train_helper import data_train
from google.colab import drive
import warnings
warnings.filterwarnings("ignore")
class Checkpoint:
def __init__(self, acc_list=None, indices=None, state_dict=None, experiment_name=None, path=None):
# If a path is supplied, load a checkpoint from there.
if path is not None:
if experiment_name is not None:
self.load_checkpoint(path, experiment_name)
else:
raise ValueError("Checkpoint contains None value for experiment_name")
return
if acc_list is None:
raise ValueError("Checkpoint contains None value for acc_list")
if indices is None:
raise ValueError("Checkpoint contains None value for indices")
if state_dict is None:
raise ValueError("Checkpoint contains None value for state_dict")
if experiment_name is None:
raise ValueError("Checkpoint contains None value for experiment_name")
self.acc_list = acc_list
self.indices = indices
self.state_dict = state_dict
self.experiment_name = experiment_name
def __eq__(self, other):
# Check if the accuracy lists are equal
acc_lists_equal = self.acc_list == other.acc_list
# Check if the indices are equal
indices_equal = self.indices == other.indices
# Check if the experiment names are equal
experiment_names_equal = self.experiment_name == other.experiment_name
return acc_lists_equal and indices_equal and experiment_names_equal
def save_checkpoint(self, path):
# Get current time to use in file timestamp
timestamp = time.time_ns()
# Create the path supplied
os.makedirs(path, exist_ok=True)
# Name saved files using timestamp to add recency information
save_path = os.path.join(path, F"c{timestamp}1")
copy_save_path = os.path.join(path, F"c{timestamp}2")
# Write this checkpoint to the first save location
with open(save_path, 'wb') as save_file:
pickle.dump(self, save_file)
# Write this checkpoint to the second save location
with open(copy_save_path, 'wb') as copy_save_file:
pickle.dump(self, copy_save_file)
def load_checkpoint(self, path, experiment_name):
# Obtain a list of all files present at the path
timestamp_save_no = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
# If there are no such files, set values to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Sort the list of strings to get the most recent
timestamp_save_no.sort(reverse=True)
# Read in two files at a time, checking if they are equal to one another.
# If they are equal, then it means that the save operation finished correctly.
# If they are not, then it means that the save operation failed (could not be
# done atomically). Repeat this action until no possible pair can exist.
while len(timestamp_save_no) > 1:
# Pop a most recent checkpoint copy
first_file = timestamp_save_no.pop(0)
# Keep popping until two copies with equal timestamps are present
while True:
second_file = timestamp_save_no.pop(0)
# Timestamps match if the removal of the "1" or "2" results in equal numbers
if (second_file[:-1]) == (first_file[:-1]):
break
else:
first_file = second_file
# If there are no more checkpoints to examine, set to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Form the paths to the files
load_path = os.path.join(path, first_file)
copy_load_path = os.path.join(path, second_file)
# Load the two checkpoints
with open(load_path, 'rb') as load_file:
checkpoint = pickle.load(load_file)
with open(copy_load_path, 'rb') as copy_load_file:
checkpoint_copy = pickle.load(copy_load_file)
# Do not check this experiment if it is not the one we need to restore
if checkpoint.experiment_name != experiment_name:
continue
# Check if they are equal
if checkpoint == checkpoint_copy:
# This checkpoint will suffice. Populate this checkpoint's fields
# with the selected checkpoint's fields.
self.acc_list = checkpoint.acc_list
self.indices = checkpoint.indices
self.state_dict = checkpoint.state_dict
return
# Instantiate None values in acc_list, indices, and model
self.acc_list = None
self.indices = None
self.state_dict = None
def get_saved_values(self):
return (self.acc_list, self.indices, self.state_dict)
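# The double-write logic in save_checkpoint/load_checkpoint can be demonstrated in isolation. Below is a simplified standalone sketch of the pattern (hypothetical helpers, not the Checkpoint API): each payload is written twice under the same timestamp, and on load a checkpoint is accepted only if both copies agree, so a save that crashed partway through is skipped.

```python
import os, pickle, tempfile, time

def save_pair(path, payload):
    # Write the payload twice under the same timestamp, mirroring save_checkpoint.
    ts = time.time_ns()
    for suffix in ("1", "2"):
        with open(os.path.join(path, f"c{ts}{suffix}"), "wb") as f:
            pickle.dump(payload, f)

def load_latest_pair(path):
    # Accept a checkpoint only if both copies with the same timestamp agree,
    # i.e. the save completed; otherwise fall through to older pairs.
    files = sorted(os.listdir(path), reverse=True)
    while len(files) > 1:
        first = files.pop(0)
        for j, cand in enumerate(files):
            if cand[:-1] == first[:-1]:
                with open(os.path.join(path, first), "rb") as f1:
                    a = pickle.load(f1)
                with open(os.path.join(path, cand), "rb") as f2:
                    b = pickle.load(f2)
                if a == b:
                    return a
                files = files[j + 1:]
                break
        else:
            break
    return None

with tempfile.TemporaryDirectory() as d:
    save_pair(d, {"acc": [0.9]})
    print(load_latest_pair(d))  # {'acc': [0.9]}
```

# Note this scheme is not truly atomic (a crash between the two writes leaves an unpaired file), but an unpaired file is simply ignored on load, which is the property the experiments rely on.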
def delete_checkpoints(checkpoint_directory, experiment_name):
# Iteratively go through each checkpoint, deleting those whose experiment name matches.
timestamp_save_no = [f for f in os.listdir(checkpoint_directory) if os.path.isfile(os.path.join(checkpoint_directory, f))]
for file in timestamp_save_no:
delete_file = False
# Get file location
file_path = os.path.join(checkpoint_directory, file)
# Unpickle the checkpoint and see if its experiment name matches
with open(file_path, "rb") as load_file:
checkpoint_copy = pickle.load(load_file)
if checkpoint_copy.experiment_name == experiment_name:
delete_file = True
# Delete this file only if the experiment name matched
if delete_file:
os.remove(file_path)
#Logs
def write_logs(logs, save_directory, rd, run):
file_path = save_directory + 'run_'+str(run)+'.txt'
with open(file_path, 'a') as f:
f.write('---------------------\n')
f.write('Round '+str(rd)+'\n')
f.write('---------------------\n')
for key, val in logs.items():
if key == 'Training':
f.write(str(key)+ '\n')
for epoch in val:
f.write(str(epoch)+'\n')
else:
f.write(str(key) + ' - '+ str(val) +'\n')
def train_one(X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, strategy, save_directory, run, checkpoint_directory, experiment_name):
# Define acc initially
acc = np.zeros(n_rounds)
initial_unlabeled_size = X_unlabeled.shape[0]
initial_round = 1
# Define an index map
index_map = np.arange(initial_unlabeled_size)
# Attempt to load a checkpoint. If one exists, then the experiment crashed.
training_checkpoint = Checkpoint(experiment_name=experiment_name, path=checkpoint_directory)
rec_acc, rec_indices, rec_state_dict = training_checkpoint.get_saved_values()
# Check if there are values to recover
if rec_acc is not None:
# Restore the accuracy list
for i in range(len(rec_acc)):
acc[i] = rec_acc[i]
print('Loaded from checkpoint....')
print('Accuracy List:', acc)
# Restore the indices list and shift those unlabeled points to the labeled set.
index_map = np.delete(index_map, rec_indices)
# Record initial size of X_tr
initial_seed_size = X_tr.shape[0]
X_tr = np.concatenate((X_tr, X_unlabeled[rec_indices]), axis=0)
X_unlabeled = np.delete(X_unlabeled, rec_indices, axis = 0)
y_tr = np.concatenate((y_tr, y_unlabeled[rec_indices]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, rec_indices, axis = 0)
# Restore the model
net.load_state_dict(rec_state_dict)
# Fix the initial round
initial_round = (X_tr.shape[0] - initial_seed_size) // budget + 1
# Ensure loaded model is moved to GPU
if torch.cuda.is_available():
net = net.cuda()
strategy.update_model(net)
strategy.update_data(X_tr, y_tr, X_unlabeled)
else:
if torch.cuda.is_available():
net = net.cuda()
acc[0] = dt.get_acc_on_set(X_test, y_test)
print('Initial Testing accuracy:', round(acc[0]*100, 2), flush=True)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[0]*100, 2))
write_logs(logs, save_directory, 0, run)
#Updating the trained model in strategy class
strategy.update_model(net)
##User Controlled Loop
for rd in range(initial_round, n_rounds):
print('-------------------------------------------------')
print('Round', rd)
print('-------------------------------------------------')
sel_time = time.time()
idx = strategy.select(budget)
sel_time = time.time() - sel_time
print("Selection Time:", sel_time)
#Saving state of model, since labeling new points might take time
# strategy.save_state()
#Adding new points to training set
X_tr = np.concatenate((X_tr, X_unlabeled[idx]), axis=0)
X_unlabeled = np.delete(X_unlabeled, idx, axis = 0)
#Human in the loop: assuming the user adds new labels here
y_tr = np.concatenate((y_tr, y_unlabeled[idx]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, idx, axis = 0)
# Update the index map
index_map = np.delete(index_map, idx, axis = 0)
print('Number of training points -',X_tr.shape[0])
#Reload state and start training
# strategy.load_state()
strategy.update_data(X_tr, y_tr, X_unlabeled)
dt.update_data(X_tr, y_tr)
t1 = time.time()
clf, train_logs = dt.train(None)
t2 = time.time()
acc[rd] = dt.get_acc_on_set(X_test, y_test)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[rd]*100, 2))
logs['Selection Time'] = str(sel_time)
logs['Training Time'] = str(t2 - t1)
logs['Training'] = train_logs
write_logs(logs, save_directory, rd, run)
strategy.update_model(clf)
print('Testing accuracy:', round(acc[rd]*100, 2), flush=True)
# Create a checkpoint
used_indices = np.arange(initial_unlabeled_size)
used_indices = np.delete(used_indices, index_map).tolist()
if store_checkpoint:
round_checkpoint = Checkpoint(acc.tolist(), used_indices, clf.state_dict(), experiment_name=experiment_name)
round_checkpoint.save_checkpoint(checkpoint_directory)
print('Training Completed')
return acc
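# The index_map bookkeeping above (and the used_indices recovery at checkpoint time) can be illustrated in isolation. A minimal sketch with hypothetical sizes: deleting the selected positions from the map leaves the still-unlabeled positions, and deleting those from the full range recovers exactly the positions used so far.

```python
import numpy as np

index_map = np.arange(10)          # original unlabeled positions 0..9
selected = [2, 5]                  # indices chosen this round (into the current pool)
index_map = np.delete(index_map, selected)
# Recover the original positions that have been labeled so far:
used = np.delete(np.arange(10), index_map)
print(used.tolist())  # [2, 5]
```

# This is why a checkpoint only needs to store `used_indices`: the labeled/unlabeled split can be reconstructed from them on restart.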
# Define a function to perform experiments in bulk and return the mean accuracies
def BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = BADGE(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
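# The *_experiment_batch functions in this notebook differ only in the line that constructs the strategy. A possible refactoring sketch (hypothetical helper, not part of distil) factors the shared driver out behind a strategy factory:

```python
def run_experiments(make_strategy, run_one, n_exp):
    # make_strategy: zero-argument factory returning a fresh strategy instance.
    # run_one: callable mapping a strategy to a per-round accuracy list.
    accs = [run_one(make_strategy()) for _ in range(n_exp)]
    # Element-wise mean accuracy across the n_exp repetitions.
    return [sum(col) / n_exp for col in zip(*accs)]

print(run_experiments(lambda: None, lambda s: [0.5, 0.25], 4))  # [0.5, 0.25]
```

# Each batch function would then reduce to a one-line factory, e.g. `lambda: BADGE(...)`, passed together with a closure over train_one; the plotting and checkpoint-cleanup code would live in the shared driver.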
# Define a function to perform experiments in bulk and return the mean accuracies
def random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = RandomSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = EntropySampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def GLISTER_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'lr': args['lr']}
strategy = GLISTER(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args, valid=False, typeOf='rand', lam=0.1)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def FASS_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = FASS(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_bim_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = AdversarialBIM(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_deepfool_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = AdversarialDeepFool(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def coreset_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = CoreSet(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def least_confidence_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = LeastConfidence(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args) # use the per-experiment copies so the originals stay untouched
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def margin_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = MarginSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args) # use the per-experiment copies so the originals stay untouched
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def bald_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size']}
strategy = BALDDropout(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args) # use the per-experiment copies so the originals stay untouched
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# + [markdown] id="-rFh9y0M3ZVH"
# # CIFAR10
# + [markdown] id="O0WfH3eq3nv_"
# **Initial Training and Parameter Definitions**
# + id="K1522SUk3nwF"
data_set_name = 'CIFAR10'
download_path = '../downloaded_data/'
handler = DataHandler_CIFAR10
net = ResNet18()
# Mount drive containing possible saved model and define file path
if persistent:
drive.mount('/content/drive')
# Retrieve the model from link and save it to the drive
logs_directory = '/content/drive/MyDrive/experiments/cifar10_alrs/'
# initial_model = data_set_name
model_directory = "/content/drive/MyDrive/experiments/cifar10_alrs/"
os.makedirs(model_directory, exist_ok = True)
model_directory = "/content/drive/MyDrive/experiments/cifar10_alrs/base_model.pth"
X, y, X_test, y_test = get_dataset(data_set_name, download_path)
dim = np.shape(X)[1:]
initial_seed_size = 1000
initial_class_count = 100
class_count = 1000
training_size_cap = 10000
nclasses = 10
budget = 1000
y = y.numpy()
y_test = y_test.numpy()
unique_values = np.unique(y)
seed_indices = []
# initial_class_count = 100
for val in unique_values:
ind = np.where(y==val)
seed_indices.extend(ind[0].tolist()[0:initial_class_count])
X_tr = X[seed_indices]
y_tr = y[seed_indices]
unlabeled_indices = []
for i in range(X.shape[0]):
if i not in seed_indices:
unlabeled_indices.append(i)
X_unlabeled = X[unlabeled_indices]
y_unlabeled = y[unlabeled_indices]
unique_values, count = np.unique(y_unlabeled, return_counts=True)
print('****************')
print('DEBUG')
print('****************')
print('Size of original unlabeled set', X_unlabeled.shape, y_unlabeled.shape, y_unlabeled[0:5])
print('Count of unique values ', np.unique(y_unlabeled, return_counts = True))
print('Size of seed set', X_tr.shape, y_tr.shape, y_tr[0:5])
print('Count of unique values ', np.unique(y_tr, return_counts = True))
required_indices = []
# class_count = 1000
for val in unique_values:
ind = np.where(y_unlabeled==val)
required_indices.extend(ind[0].tolist()[0:class_count])
X_unlabeled = X_unlabeled[required_indices]
y_unlabeled = y_unlabeled[required_indices]
print('Size of unlabeled set', X_unlabeled.shape, y_unlabeled.shape, y_unlabeled[0:5])
print('Count of unique values ', np.unique(y_unlabeled, return_counts = True))
#Initial Training
args = {'n_epoch':300, 'lr':float(0.01), 'batch_size':20, 'max_accuracy':float(0.99), 'num_classes':nclasses, 'islogs':True, 'isreset':True, 'isverbose':True}
# Only train a new model if one does not exist.
if load_model:
net.load_state_dict(torch.load(model_directory))
dt = data_train(X_tr, y_tr, net, handler, args)
clf = net
else:
dt = data_train(X_tr, y_tr, net, handler, args)
clf, train_logs = dt.train(None)
torch.save(clf.state_dict(), model_directory)
# Train on approximately the full dataset given the budget constraints
n_rounds = math.floor(training_size_cap / budget)
n_exp = 1
print("Training for", n_rounds, "rounds with budget", budget, "on unlabeled set size", training_size_cap)
# + [markdown] id="B9N-4eTMPrZZ"
# **Random Sampling**
# + id="i4eKSOaiPruO"
strat_logs = logs_directory+'random_sampling/'
os.makedirs(strat_logs, exist_ok = True)
checkpoint_directory = '/content/drive/MyDrive/experiments/cifar10_alrs/random_sampling/check/'
os.makedirs(checkpoint_directory, exist_ok = True)
mean_test_acc_random = random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, "cf_random")
# + [markdown] id="bg1XH87hPsCe"
# **Entropy (Uncertainty) Sampling**
# + id="mRAKMe2RPsTp"
strat_logs = logs_directory+'entropy_sampling/'
os.makedirs(strat_logs, exist_ok = True)
checkpoint_directory = '/content/drive/MyDrive/experiments/cifar10_alrs/entropy_sampling/check/'
os.makedirs(checkpoint_directory, exist_ok = True)
mean_test_acc_entropy = entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, "cf_entropy")
# + [markdown] id="6ZSiRahu3nwK"
# **BADGE**
# + colab={"background_save": true} id="b5c8AckN3nwK"
strat_logs = logs_directory+'badge/'
os.makedirs(strat_logs, exist_ok = True)
checkpoint_directory = '/content/drive/MyDrive/experiments/cifar10_alrs/badge/check/'
os.makedirs(checkpoint_directory, exist_ok = True)
mean_test_acc_badge = BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, "cf_badge")
| benchmark_notebooks/RSvsAL/AL_CIFAR10_ALvsRS_Master.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.2 ('scientific_pyt')
# language: python
# name: python3
# ---
# <h1 style="font-family:Didot; text-align:center;">RMSE for the models</h1>
import pandas as pd
import numpy as np
import os
from Arima import Arima
from boosting import XGBoost
from copy import deepcopy
def rmse(y, y_hat):
return np.sqrt(np.sum((y - y_hat)**2) / len(y))
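# As a quick sanity check, the `rmse` helper can be verified on a toy example (the helper is repeated here so the snippet runs standalone; the values are chosen arbitrarily):

```python
import numpy as np

def rmse(y, y_hat):
    # Same helper as above: root of the mean squared error
    return np.sqrt(np.sum((y - y_hat)**2) / len(y))

# Every prediction is off by exactly 1, so the RMSE is 1
y = np.array([1., 2., 3., 4.])
y_hat = np.array([2., 1., 4., 3.])
print(rmse(y, y_hat))  # -> 1.0
```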
df = pd.read_csv('final_data.csv')
countries = df["Country"].unique()
continents = [df[df["Country"] == i]["Continent"].iloc[0] for i in countries]
new_df = pd.DataFrame({'Continent': continents, 'Country': countries})
def country_loss(model, country, years, *args):
data = df[df["Country"] == country]["AverageTemperature"].to_numpy()
training, test = data[:-years*12], data[-years*12:] # hold out the last years*12 months as the test set
model_ = model(training, *args)
model_.train()
predictions = model_.predict(len(test))
return rmse(test, predictions)
os.chdir(r"C:\\Users\\<NAME>\\time_series_prediction")
def loss_to_files(model, model_name, *args):
os.chdir(os.path.dirname(os.path.realpath(__name__)))
os.chdir(r"loss/test/country")
df_ = deepcopy(new_df)
vec = np.zeros(len(df_['Country']))
for i in range(len(countries)):
vec[i] = country_loss(model, countries[i], 10, *args)
df_['Loss'] = vec
df_.to_csv(f'{model_name}.csv', index=False)
loss_to_files(XGBoost, "XGBoost", 24)
os.chdir(r"C:\\Users\\<NAME>\\time_series_prediction")
loss_to_files(Arima, "Arima")
continents_ = df["Continent"].unique()
continents_
def continent_loss(model_name):
df = pd.read_csv(rf"loss/test/country/{model_name}.csv")
new_df = pd.DataFrame({'Continent': continents_})
func = lambda x: df[df["Continent"] == x]['Loss'].mean()
new_df['Loss'] = new_df['Continent'].apply(func)
new_df.to_csv(rf"loss/test/continent/{model_name}.csv", index=False)
continent_loss("XGBoost")
continent_loss("Arima")
a = pd.read_csv(r"loss/test/country/Arima.csv")
a.columns = ['Continent', 'Country', 'Loss']
a.to_csv(r'loss/test/country/Arima.csv', index=False)
b = pd.read_csv(r"loss/test/country/XGBoost.csv")
b.columns = ['Continent', 'Country', 'Loss']
b.to_csv(r'loss/test/country/XGBoost.csv', index=False)
| rmse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Solving linear equations with Gaussian elimination
#
# +
# import the numpy package in the np namespace
import numpy as np
# this line will load the plotting function into the namespace plt.
import matplotlib.pyplot as plt
# the following lines prevent Python from opening new windows for figures.
# %matplotlib inline
# -
# ## Part 1: Implement Gaussian elimination
# In this part you are asked to implement Gaussian elimination as presented in the lectures.
# It is recommended to implement separate functions for generating the reduced row echelon form, and for solving it.
#
# Below I have used the method of Gaussian elimination discussed in lectures to solve systems of linear equations.
#
# These could be of the form: $$Ax = B$$
#
# Where $A$ is a square matrix of coefficients, $B$ is a column matrix of constants and $x$ is a column matrix of unknown variables to be solved.
#
# To do this I have:
#
# - Made a function called "reduced_row_echelon" which calculates the reduced row echelon form as an augmented matrix $A|B$.
#
#
# - I have then made a function called "solve_echelon" which uses an augmented matrix in reduced row echelon form to solve and produce a column matrix of the unknown variables $x$ to be found.
#
#
# - Finally, I have made a function called "gaussian_elimination" which uses both of the functions above to implement the whole Gaussian elimination process, allowing users to simply input $A$ and $B$ and instantly obtain $x$.
# +
def reduced_row_echelon( A, B ):
"""
Calculates the reduced row echelon form using two matrices 'A' and 'B'.
The augmented matrix (A|B) in reduced row echelon form is returned.
Arguments:
'A': Coefficient matrix.
'B': Column matrix of constants.
Caveats:
'A' must be of square form.
'B' must have the same number of rows as 'A'.
"""
#Ensure 'A' is square.
rows_a, columns_a = np.shape( A )
assert rows_a == columns_a, 'A is not a square matrix.'
#Ensure 'B' has 1 column, and same number of rows as 'A'.
rows_b, columns_b = np.shape( B )
assert rows_b == rows_a, 'B does not have the same number of rows as A.'
#Form the augmented matrix.
augmented_matrix = np.c_[A, B]
pivot_row = 0
#Assign the pivot to the next diagonal location, until end of matrix is reached.
while pivot_row < rows_a:
pivot = augmented_matrix[pivot_row, pivot_row]
#Iterate down the pivot column, by accessing the rows.
for row in range( pivot_row, rows_a ):
if row == pivot_row: #If diagonal location, make this value 1.
augmented_matrix[row] /= pivot
normalised_row = augmented_matrix[row]
#Otherwise zero the rest of the elements below.
else:
element_to_zero = augmented_matrix[row, pivot_row]
#Exploit that the normalised row will always have 1 in this column.
augmented_matrix[row] -= normalised_row*element_to_zero
pivot_row += 1
return augmented_matrix
def solve_echelon( echelon_matrix ):
"""
Solves a matrix in reduced row echelon form, returning a column matrix of the
solved variables.
Arguments:
'echelon_matrix': The matrix to be solved in reduced row echelon form.
Caveats:
Only a matrix in reduced row echelon form will function properly.
(square matrix augmented with a column constant matrix).
"""
#Ensure matrix is of correct echelon shape.
number_rows, number_columns = np.shape( echelon_matrix )
assert number_columns == 1 + number_rows, 'The matrix is not in the correct echelon form.'
#Separate the constants matrix from the reduced row echelon.
constants = echelon_matrix[:,number_rows]
echelon_matrix = echelon_matrix[:,:number_rows]
variables = np.matrix( np.zeros( (number_rows, 1) )) #Initialise variables matrix.
#Iterate over rows, starting at bottom of matrix, finishing at the top.
for row in range( number_rows - 1, -1, -1 ):
#Immediate assignment for the last row in the matrix.
if row == number_rows - 1:
variables[row] = constants[row]
#General case, use matrix product to calculate the next corresponding unknown variable.
else:
variables[row] = constants[row] - echelon_matrix[row, row+1:]*variables[row+1:]
return variables
def gaussian_elimination( A, B ):
"""
Performs the full Gaussian elimination algorithm to solve the unknown variables matrix 'x'
in an equation of the form 'Ax = B'. A column matrix of the solved variables is returned.
Arguments:
Input the matrices 'A' and 'B' corresponding to this equation.
Caveats:
'A' must be of square form.
'B' must have only 1 column and the same number of rows as 'A'.
"""
#Calculate the solved variables column matrix.
solved_variables = solve_echelon( reduced_row_echelon( A, B ) )
return solved_variables
#Example used in lectures:
A = np.matrix([
[3., 4., 9.],
[6., 7., 9.],
[9., 10., 11.]
])
B = np.matrix([
[13.],
[7.],
[3.]
])
print("Variables [x1, x2, x3] are:\n", gaussian_elimination( A, B ))
# -
# ## Part 2: Finding solution to a linear problem
# In this part, you will be asked to use the functions you have implemented in part 1 to solve a simple problem.
#
# <b>Problem statement</b>
# You want to find a linear transformation by a set of points before and after the transformation.
#
# 2a) try with the following set of points
# +
def show_before_after(x,y):
plt.plot(x[:,0], x[:,1], 'rx', y[:,0], y[:,1], 'go')
plt.ylim([-5,5])
plt.xlim([-5,5])
plt.legend(('before', 'after'), loc=4)
plt.title('Points before and after transformation')
# matrix of points before transformation, one point per row
x = np.matrix(
[[ 1., 2. ],
[ 2., 1. ],
[ 0., 3. ],
[ 2., 0.5]])
# matrix of points after transformation, one point per row
y = np.matrix(
[[ 2.75, 3.25 ],
[ 3.25, 2.75 ],
[ 2.25, 3.75 ],
[ 2.875, 2.125]])
show_before_after(x,y)
plt.show()
# -
# We have here a problem of the form $Ax = B$, where $A$ is the transformation matrix to be found, $x$ is a matrix of points before transformation, and $B$ is a matrix of the points after the transformation.
# From looking at this problem I can see two immediate methods I could implement utilising the functions I have already made.
#
#
# Method 1) Using diagonalisation to calculate inverse.
#
# By taking the first two points, I could say that:
# $$x =
# \begin{pmatrix}
# 1 & 2 \\
# 2 & 1
# \end{pmatrix}$$
# and hence:
# $$B =
# \begin{pmatrix}
# 2.75 & 3.25 \\
# 3.25 & 2.75
# \end{pmatrix}$$
# +
#Initialise x
x = np.matrix([
[1., 2.],
[2., 1.]
])
#Initialise B
B = np.matrix([
[2.75, 3.25],
[3.25, 2.75]
])
# -
# To find $A$, I could rearrange to formulate $A = Bx^{-1}$. So I need to calculate $x^{-1}$. This can be achieved by formulating the augmented matrix between $x$ and its identity matrix:
# $$[x|I] = \begin{pmatrix}
# 1 & 2 \\
# 2 & 1
# \end{pmatrix}
# |
# \begin{pmatrix}
# 1 & 0 \\
# 0 & 1
# \end{pmatrix}$$
identity = np.matrix([
[1., 0.],
[0., 1.,]
])
# I then need to use operations to transform the left hand of the augmented matrix into the identity matrix, this will calculate $x^{-1}$ on the right hand side. I can use my "reduced_row_echelon" function to transform the left hand side into a reduced row echelon. I can then easily follow this up with a simple elementary row operation to produce the identity matrix as follows:
# +
#Convert to echelon form.
augmented = reduced_row_echelon( x, identity )
print("Echelon form: \n", augmented, "\n")
#Convert to Identity form with an elementary row operation.
augmented[0] -= augmented[1]*augmented[0, 1]
print("Identity form:\n", augmented, "\n")
# -
# Now the right hand side is $x^{-1}$. Therefore the final operation to find the transformation matrix $A$ is to perform a matrix product between $B$ and $x^{-1}$:
# +
#Discard the left hand side identity matrix.
x_inverse = augmented[:,2:4]
print("Inverse of x:\n", x_inverse, "\n")
#Perform the matrix product between x inverse, and B.
transformation = B*x_inverse
print("Transformation:\n", transformation)
# -
# And so from above, I can conclude the transformation matrix ($A$) is:
#
# $$A =
# \begin{pmatrix}
# 1.25 & 0.75 \\
# 0.75 & 1.25
# \end{pmatrix}$$
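# As a sanity check (a small standalone snippet, separate from the functions above), applying this transformation matrix to all four original points should recover the transformed points exactly:

```python
import numpy as np

A = np.array([[1.25, 0.75],
              [0.75, 1.25]])
points_before = np.array([[1., 2.], [2., 1.], [0., 3.], [2., 0.5]])
points_after  = np.array([[2.75, 3.25], [3.25, 2.75], [2.25, 3.75], [2.875, 2.125]])

# Each row p of points_before is transformed as (A @ p); stacking rows gives points_before @ A.T
transformed = points_before @ A.T
print(np.allclose(transformed, points_after))  # -> True
```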
# Method 2) Simultaneous equation solving, (utilising both my functions).
#
# By assuming that the linear transformation matrix is 2x2, this can be modelled as follows:
#
# $$\begin{pmatrix}
# a & b \\
# c & d
# \end{pmatrix}$$
#
# By performing matrix product on each of the first sets of points, and equalling to the corresponding transformed points I can reduce down a set of simultaneous equations that can be used to solve the variables in the matrix:
#
# $3a + 3b = 6$
#
# $2a + 3.5b = 5.125$
#
# $3c + 3d = 6$
#
# $2c + 3.5d = 5.875$
#
# I can formulate this problem using matrices:
#
# Solving for $a$ and $b$:
# $$\begin{pmatrix}
# 3 & 3 \\
# 2 & 3.5
# \end{pmatrix}
# \begin{pmatrix}
# a \\
# b
# \end{pmatrix}
# =
# \begin{pmatrix}
# 6 \\
# 5.125
# \end{pmatrix}$$
#
# Solving for $c$ and $d$:
# $$\begin{pmatrix}
# 3 & 3 \\
# 2 & 3.5
# \end{pmatrix}
# \begin{pmatrix}
# c \\
# d
# \end{pmatrix}
# =
# \begin{pmatrix}
# 6 \\
# 5.875
# \end{pmatrix}$$
#
# I implement this below:
# +
ab_coefficients = np.matrix([
[3., 3.],
[2., 3.5,]
])
ab_constants = np.matrix([
[6.],
[5.125]
])
cd_coefficients = np.matrix([
[3., 3.],
[2., 3.5,]
])
cd_constants = np.matrix([
[6.],
[5.875]
])
#Perform the full gaussian elimination.
ab = gaussian_elimination( ab_coefficients, ab_constants )
cd = gaussian_elimination( cd_coefficients, cd_constants )
print("a and b solutions: \n", ab, "\n")
print("c and d solutions: \n", cd, "\n")
#construct the transformation matrix from the answers.
transformation = np.vstack( [ab.T, cd.T] )
print("transformation:\n",
transformation)
# -
# I get the same result as method 1. This method is arguably simpler and utilises both functions.
#
# $$\begin{pmatrix}
# a & b \\
# c & d
# \end{pmatrix}
# =
# \begin{pmatrix}
# 1.25 & 0.75 \\
# 0.75 & 1.25
# \end{pmatrix}$$
#
# 2b) Test your approach on the following points. Explain what happens.
x1 = [
[ 1, 2],
[ 2, 4],
[-1, -2],
[ 0, 0]]
y1 = [
[ 5, 10],
[10, 20],
[-1, -2],
[-2, -4]]
# +
#Initialise x
x = np.matrix([
[1., 2.],
[2., 4.]
])
#Initialise B
B = np.matrix([
[5., 10.],
[10., 20.]
])
identity = np.matrix([
[1., 0.],
[0., 1.,]
])
print(reduced_row_echelon( x, identity ))
# -
# Here we can see that attempting to perform the same method on these points will not work, because the points are not linearly independent: each point is a scalar multiple of the others (apart from (0, 0)), so the coefficient matrix is singular and no transformation can be found using these methods.
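# This degeneracy can be confirmed numerically (a quick standalone check using NumPy's rank and determinant routines, not part of the assignment functions): the matrix built from these points has rank 1 and zero determinant, so it has no inverse.

```python
import numpy as np

x = np.array([[1., 2.],
              [2., 4.]])  # second row is twice the first -> linearly dependent

rank = np.linalg.matrix_rank(x)
print(rank)                   # -> 1 (full rank would be 2)
print(abs(np.linalg.det(x)))  # -> 0.0, so x is singular and has no inverse
```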
| Year 1- Computational Mathematics/Gaussian Elimination.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
img = load_img('data/train/normal/ia_500000023.jpg') # this is a PIL image
x = img_to_array(img) # this is a Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # this is a Numpy array with shape (1, 150, 150, 3)
# the .flow() command below generates batches of randomly transformed images
# and saves the results to the `preview/` directory
i = 0
for batch in datagen.flow(x, batch_size=1,
save_to_dir='preview', save_prefix='cat', save_format='jpeg'):
i += 1
if i > 20:
break # otherwise the generator would loop indefinitely
# +
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
batch_size = 16
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)
# this is a generator that will read pictures found in
# subfolders of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
'data/train', # this is the target directory
target_size=(150, 150), # all images will be resized to 150x150
batch_size=batch_size,
class_mode='binary') # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
'data/test',
target_size=(150, 150),
batch_size=batch_size,
class_mode='binary')
model.fit_generator(
train_generator,
steps_per_epoch=2000 // batch_size,
epochs=20,
validation_data=validation_generator,
validation_steps=800 // batch_size)
model.save('second_try.h5')
# +
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras.models import load_model
model = load_model("second_try.h5")
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
imagePath = "predict_dir/normal/ia_100003833.jpg"
test_image = image.load_img(imagePath, target_size = (150, 150))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = model.predict(test_image)
print(result[0])
# +
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras.models import load_model
model = load_model("second_try.h5")
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
imagePath = "predict_dir/normal/ia_100003834.jpeg"
img = image.load_img(imagePath, target_size = (150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict_classes(images, batch_size=4)
print (classes)
# predicting multiple images at once
img = image.load_img('predict_dir/normal/ia_100003836.jpg', target_size=(150, 150))
y = image.img_to_array(img)
y = np.expand_dims(y, axis=0)
# pass the list of multiple images np.vstack()
images = np.vstack([x, y])
classes = model.predict_classes(images, batch_size=4)
# print the classes, the images belong to
print (classes)
print (classes[0])
print (classes[0][0])
# -
| FunnyMoments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#default_exp synchro.processing
# %load_ext autoreload
# %autoreload 2
# # synchro.processing
# > Processing functions to align stimuli, detect frame timings and correct errors of the display.
#export
import numpy as np
import datetime
import glob
import os
from scipy import signal
#export
def get_thresholds(data):
"""Function that attempts to get the high and low thresholds. Not working very well"""
max_val = max(data[len(data)//2:len(data)//2 + 10000000]) #Looking for a max in a portion of the data, from the middle
high_thresh = max_val*3/4 # High threshold set at 3/4th of the max
low_thresh = max_val*1/4
return low_thresh, high_thresh
from theonerig.synchro.io import *
from theonerig.core import *
from theonerig.utils import *
import matplotlib.pyplot as plt
photodiode_data = load_adc_raw("./files/basic_synchro/photodiode_data", sampling_rate=30000)
# Supposedly, `get_thresholds` should provide the low and high thresholds for the data, but the low threshold is a sensitive value that should be checked manually on each recording
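# For instance, on a synthetic trace with clear peaks (a toy example, not real photodiode data), the same heuristic recovers sensible levels:

```python
import numpy as np

def get_thresholds(data):
    """Same heuristic as above: thresholds at 1/4 and 3/4 of a local maximum."""
    max_val = max(data[len(data)//2:len(data)//2 + 10000000])
    high_thresh = max_val*3/4
    low_thresh = max_val*1/4
    return low_thresh, high_thresh

# Square-wave-like toy signal peaking at 1000
toy = np.zeros(2000)
toy[::50] = 1000.
low, high = get_thresholds(toy)
print(low, high)  # -> 250.0 750.0
```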
#export
def get_first_high(data, threshold):
if np.any(data>threshold):
return np.argmax(data>threshold)
else:
return -1
# `get_first_high` finds the idx of the first frame higher than the threshold
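# A minimal standalone illustration on a toy array (indices are 0-based):

```python
import numpy as np

def get_first_high(data, threshold):
    # Returns the index of the first sample above threshold, or -1 if none
    if np.any(data > threshold):
        return np.argmax(data > threshold)
    else:
        return -1

trace = np.array([0., 10., 900., 1200., 50.])
print(get_first_high(trace, 800))   # -> 2 (first sample above 800)
print(get_first_high(trace, 5000))  # -> -1 (threshold never crossed)
```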
# +
#export
def detect_frames(data, low_threshold, high_threshold, increment, do_reverse=True, precision=.95):
"""Frame detection (or ON signal detection). Capable of finding frame times produced in a regular
fashion:
- data: raw data
- low_threshold: threshold used to detect the beginning of each frame.
- high_threshold: threshold used to assign a label to the frames, and to detect the beginning of the reading frame.
- increment: Number of timepoints separating frames. Eg, for 30KHz recording of 60hz stimulus: 30000/60
- do_reverse: boolean to indicate if the reverse detection should be done after detecting the first frame.
- precision: Value that indicates how precise are the events recorded to accelerate the detection.
DLP is very stable (.95) whereas some camera triggers have more jitter (.6). Too low a value (below 0.5) can
result in an overdetection of frames.
"""
assert (precision>0) and (precision<=1)
frame_timepoints, frame_signals = [], []
increment = int(increment)
safe_increment = int(increment*precision)
first_high = get_first_high(data, high_threshold)
if first_high == -1:
print("No high frame detected. Detection can't work.")
return
frame_timepoints.append(first_high)
frame_signals.append(1)
if do_reverse:
new_timepoints = reverse_detection(data, frame_timepoints, low_threshold, increment, precision)
if len(new_timepoints)>1:
new_extrapolated = extend_timepoints(new_timepoints)
else:
new_extrapolated = []
frame_timepoints = new_extrapolated + new_timepoints + frame_timepoints
frame_signals = [0]*(len(new_timepoints)+len(new_extrapolated)) + frame_signals
i = first_high + safe_increment
while i < len(data):
data_slice = data[i:i+increment//2+(increment-safe_increment)*2]
if np.any(data_slice>low_threshold):
i = i+np.argmax(data_slice>low_threshold)
else:
break #This frame sequence is over. Pass the next sequence through this function if there are frames left
frame_timepoints.append(i)
frame_signals.append(int(np.any(data_slice > high_threshold)))
i += safe_increment
frame_timepoints = np.array(frame_timepoints)
frame_signals = np.array(frame_signals)
frame_timepoints = frame_timepoints - 3 # A slight shift of the timepoints
# to include the beginning of the peaks.
error_check(frame_timepoints)
return frame_timepoints, frame_signals
def reverse_detection(data, frame_timepoints, low_threshold, increment, precision=.95):
"""Detect frames in the left direction."""
new_timepoints = []
new_signals = []
safe_increment = int(increment * (1+(1-precision)))
i = frame_timepoints[0]-safe_increment
while i>0:
data_slice = data[i:i+increment//2+(safe_increment-increment)*2]
if np.any(data_slice > low_threshold):
i = i+np.argmax(data_slice > low_threshold)
else:
break #No low threshold crossing found -> no more frames to detect
new_timepoints.append(i)
i-= safe_increment #We move backward of almost a frame
return new_timepoints[::-1]
def extend_timepoints(frame_timepoints, n=10):
"""Extrapolates points to the left. Not really needed now except for the signals idx that would change
otherwise (and some starting index were set manually)"""
frame_timepoints = np.array(frame_timepoints)
typical_distance = int(np.mean(np.diff(frame_timepoints)))
extended_tp = [frame_timepoints[0]-(i+1)*typical_distance for i in range(n) if (frame_timepoints[0]-(i+1)*typical_distance)>0]
return extended_tp[::-1]
def error_check(frame_tp):
"""Search error by looking at the time between each frame.
DLP is regular and odd time reveal misdetections."""
deriv_frame_tp = np.diff(frame_tp)
error_len_th = np.mean(deriv_frame_tp)+np.std(deriv_frame_tp)*6
error_frames = np.abs(deriv_frame_tp)>error_len_th
if np.any(error_frames):
print("Error in timepoints detected in frames", np.where(error_frames)[0],
"at timepoint", frame_tp[np.where(error_frames)[0]])
# -
# detect_frames performs frame detection. It works for camera pulses and for photodiode data emitted by a DLP. It proceeds by:
# * Finding the first frame higher than a threshold
# * Detecting the preceding frames if the do_reverse flag is set to True
# * Detecting the subsequent frames
# * Assigning each frame a binary value indicating whether it exceeds the high threshold
# * Doing a quick check on the frames to spot suspicious results
frame_timepoints, frame_signals = detect_frames(photodiode_data, 200, 1000, increment=500)
plt.figure()
plt.plot(photodiode_data)
plt.scatter(frame_timepoints, frame_signals*800+600, c="r")
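# The core test detect_frames applies at each step is a threshold crossing: find where the trace rises above low_threshold, then check whether the slice also exceeds high_threshold. A minimal sketch on a synthetic trace (toy data, not the photodiode record):

```python
import numpy as np

# Toy trace: three 10-sample frames over a flat baseline; only the
# second one exceeds the high threshold.
trace = np.zeros(100)
trace[10:20] = 300    # low frame
trace[40:50] = 1200   # high frame
trace[70:80] = 300    # low frame
low_threshold, high_threshold = 200, 1000

# Frame onsets: samples where the trace crosses the low threshold upward
onsets = np.where((trace[1:] > low_threshold) & (trace[:-1] <= low_threshold))[0] + 1
# Binary signal per frame: does the frame exceed the high threshold?
signals = [int(np.any(trace[i:i + 10] > high_threshold)) for i in onsets]
print(onsets.tolist(), signals)  # [10, 40, 70] [0, 1, 0]
```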
# +
#export
def cluster_frame_signals(data, frame_timepoints, n_cluster=5):
"""Cluster the `frame_timepoints` in `n_cluster` categories depending on the area under the curve.
- data: raw data used to compute the AUC
- frame_timepoints: timepoints delimitating each frame
- n_cluster: Number of cluster for the frame signals"""
frame_aucs = np.fromiter(map(np.trapz, np.split(data, frame_timepoints)), float)
if frame_timepoints[0] != 0: #We need to remove the first part if it wasn't a full frame
frame_aucs = frame_aucs[1:]
frame_auc_sorted = np.sort(frame_aucs)
deriv = np.array(frame_auc_sorted[1:]-frame_auc_sorted[:-1])
deriv[:5] = 0 #removing tail values that can show weird stuff
deriv[-5:] = 0
threshold_peak = np.std(deriv)*3
n = n_cluster - 1
idx_gaps = np.zeros(n+3, dtype="int")
tmp_deriv = deriv.copy()
zero_set_range = 10 #int(len(deriv)*0.05) #Values around each detected peak are zeroed out
for i in range(n+3): #Detecting more peaks than needed and then taking them starting on the right
if tmp_deriv[np.argmax(tmp_deriv)] < threshold_peak:
if i<n_cluster-1:
print("Less transition in AUC detected than needed, results will be weird")
break
idx_gaps[i] = np.argmax(tmp_deriv)
tmp_deriv[idx_gaps[i]-zero_set_range:idx_gaps[i]+zero_set_range] = 0
idx_gaps = np.sort(idx_gaps)
idx_gaps = idx_gaps[-(n_cluster-1):]
thresholds = np.zeros(n, dtype="float")
for i, idx in enumerate(idx_gaps):
thresholds[i] = (frame_auc_sorted[idx+1] + frame_auc_sorted[idx])/2
return np.array([np.sum(auc>thresholds) for auc in frame_aucs], dtype=int)
def cluster_by_epochs(data, frame_timepoints, frame_signals, epochs):
"""Does the same thing as `cluster_frame_signals`, but working on epochs around which the
number of clusters can differ. Useful when a record contains stimuli with different signal sizes."""
frame_aucs = np.fromiter(map(np.trapz, np.split(data, frame_timepoints)), float)
if frame_timepoints[0] != 0: #We need to remove the first part if it wasn't a full frame
frame_aucs = frame_aucs[1:]
max_cluster = max([nclust-1 for (_,_,nclust) in epochs])
for start,stop,n_cluster in epochs:
n = n_cluster - 1
norm_clust = max_cluster/n
frame_auc_sorted = np.sort(frame_aucs[start:stop])
deriv = np.array(frame_auc_sorted[1:]-frame_auc_sorted[:-1])
deriv[:5] = 0 #removing tail values that can show weird stuff
deriv[-5:] = 0
idx_gaps = np.zeros(n, dtype="int")
tmp_deriv = deriv.copy()
zero_set_range = 10
for i in range(n):
idx_gaps[i] = np.argmax(tmp_deriv)
tmp_deriv[idx_gaps[i]-zero_set_range:idx_gaps[i]+zero_set_range] = 0
idx_gaps = np.sort(idx_gaps)
thresholds = np.zeros(n, dtype="float")
for i, idx in enumerate(idx_gaps):
thresholds[i] = (frame_auc_sorted[idx+1] + frame_auc_sorted[idx])/2
frame_signals[start:stop] = np.array([np.sum(auc>thresholds)*norm_clust for auc in frame_aucs[start:stop]], dtype=int)
return frame_signals
# -
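# Both clustering functions above rely on the same trick: sort the per-frame AUCs, look for the largest gaps in the sorted values, and place thresholds in the middle of those gaps. A minimal sketch with made-up AUC values:

```python
import numpy as np

# Hypothetical AUCs drawn around three well-separated levels
aucs = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9, 9.8, 10.1, 10.0])
n_cluster = 3

auc_sorted = np.sort(aucs)
gaps = np.diff(auc_sorted)
# Take the (n_cluster-1) largest gaps as cluster boundaries
gap_idx = np.sort(np.argsort(gaps)[-(n_cluster - 1):])
thresholds = (auc_sorted[gap_idx] + auc_sorted[gap_idx + 1]) / 2
labels = np.array([np.sum(a > thresholds) for a in aucs])
print(thresholds.tolist(), labels.tolist())  # [3.0, 7.5] [0, 0, 0, 1, 1, 1, 2, 2, 2]
```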
#export
def cluster_by_list(data, frame_timepoints, frame_signals, stim_list):
"""Assign the stimulus identity values from stim_list to the frames in data. stim_list contains only the
sequence of stimuli; those need to be expanded. Unlike cluster_frame_signals and cluster_by_epochs, no
AUC operation is performed.
Input:
- data: raw data used to compute the stimulus times
- frame_timepoints: timepoints delimiting each frame
- frame_signals: binary 1-D numpy array indicating if high_threshold was passed in 'detect_frames'
- stim_list: 1-D numpy array containing the sequence of the stimuli presented
Output:
- frame_signals: [same size as frame_timepoints] stim_signals list containing the correct value from
stim_list at every entry"""
# Determine the stimulus on- and offsets and their location relative to data
stim_change = np.where(frame_signals[:-1] != frame_signals[1:])[0]
stim_change = stim_change + 1 # since I need to compare to [1:] all values are shifted by 1
#stim_idx = frame_timepoints[stim_change]
# QDSpy currently is set to emit a short peak at the end to indicate the end of the stimulus presentation
# This peak needs to be ignored
epoch_end = stim_change[-2:] # return it for future analysis
stim_change = stim_change[:-2] #add it to the no stimulus category
# Split into on times & values vs off times & values
stim_ons = stim_change[0::2]
#stim_ons_idx = stim_idx[0::2]
stim_offs = stim_change[1::2]
#stim_offs_idx = stim_idx[1::2]
# Replace the frame_signal entries with the stimulus codes
frame_signals[frame_signals == 0] = -1 # To avoid confusion with the '0' stimulus code
for i,stim_type in enumerate(stim_list):
frame_signals[stim_ons[i]:stim_offs[i]] = stim_type
return frame_signals, stim_ons, stim_offs, epoch_end
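# The on/off detection in cluster_by_list boils down to finding where consecutive signal values differ; even entries of the change indices are stimulus onsets, odd entries are offsets. A toy sketch (leaving aside the QDSpy end-peak handling):

```python
import numpy as np

frame_signals = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
# +1 because comparing [:-1] with [1:] shifts every index by one
change = np.where(frame_signals[:-1] != frame_signals[1:])[0] + 1
stim_ons, stim_offs = change[0::2], change[1::2]
print(stim_ons.tolist(), stim_offs.tolist())  # [2, 7] [5, 9]
```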
# Frame signals are then refined using cluster_frame_signals, which attributes each signal a value in a defined range
frame_signals = cluster_frame_signals(photodiode_data, frame_timepoints, n_cluster=5)
plt.figure()
plt.plot(photodiode_data[120000:131800])
plt.scatter(frame_timepoints[frame_timepoints>120000]-120000, frame_signals[frame_timepoints>120000]*200+200, c='r')
# With the frames detected, we can create our record master, often named reM
ref_timepoints, ref_signals = extend_sync_timepoints(frame_timepoints, frame_signals, up_bound=len(photodiode_data))
reM = RecordMaster([(ref_timepoints, ref_signals)])
print(len(reM[0]))
# The reM we just created is from only a tiny portion of real data. From now on we will use a premade reM generated from the same dataset, in full.
reM = import_record("./files/basic_synchro/reM_basic_synchro.h5")
# +
#export
def parse_time(time_str, pattern="%y%m%d_%H%M%S"):
"""Default parser of rhd timestamps. (serve as a template too)"""
return datetime.datetime.strptime(time_str, pattern)
def get_position_estimate(stim_time, record_time, sampling_rate):
"""Estimate where in the record should a stimulus start, in sample points"""
if stim_time < record_time:
return -1
else:
return (stim_time - record_time).seconds * sampling_rate
# -
record_time = parse_time("200331_170849") #Starting time of this example record, taken from its filename
print(record_time)
#export
def match_starting_position(frame_timepoints, frame_signals, stim_signals, estimate_start, search_size=1000):
"""
Search the best matching index between frame_signals and stim_signals.
params:
- frame_timepoints: Indexes of the frames in the record
- frame_signals: Signals of the detected frames
- stim_signals: Expected stimulus signals
- estimate_start: Estimated start index
- search_size: Stimulus is searched in frame_signals[idx_estimate-search_size: idx_estimate+search_size]
return:
- best match for the starting position of the stimulus
"""
stim_matching_len = min(600, np.where(np.diff(stim_signals)!=0)[0][50]) #Way of getting the 50th change in the signals
idx_estimate = np.argmax(frame_timepoints>estimate_start)
search_slice = slice(max(0, idx_estimate-search_size-frame_signals.idx), min(idx_estimate+search_size-frame_signals.idx, len(frame_signals)))
return search_slice.start + np.argmax(np.correlate(frame_signals[search_slice],
stim_signals[:stim_matching_len]))
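# The matching itself is a plain cross-correlation: slide the start of the stimulus template over the recorded signals and take the offset with the highest dot product. A toy sketch (illustrative arrays, not real signals):

```python
import numpy as np

# Hypothetical recorded binary signals with a known template buried at index 7
template = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
recorded = np.concatenate([np.zeros(7, dtype=int), template, np.zeros(5, dtype=int)])

# 'valid'-mode correlation peaks where the template lines up with the record
match = np.argmax(np.correlate(recorded, template, mode="valid"))
print(match)  # 7
```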
# match_starting_position seeks the first frame of a stimulus in the record. We can use functions from theonerig.synchro.extracting to find out which stimuli were used in that record, and get their values
from theonerig.synchro.extracting import get_QDSpy_logs, unpack_stim_npy
log = get_QDSpy_logs("./files/basic_synchro")[0]
print(log.stimuli[2])
#Unpacking the stimulus printed above
unpacked_checkerboard = unpack_stim_npy("./files/basic_synchro/stimulus_data", "eed21bda540934a428e93897908d049e")
print(unpacked_checkerboard[0].shape, unpacked_checkerboard[1].shape, unpacked_checkerboard[2])
# get_position_estimate can approximately tell us where the stimulus should be to reduce the search time
estimate_start = get_position_estimate(log.stimuli[2].start_time, record_time, sampling_rate=30000)
print("Estimate position in sample points", estimate_start)
stim_start_frame = match_starting_position(reM["main_tp"][0], reM["signals"][0], stim_signals=unpacked_checkerboard[1], estimate_start=estimate_start)
print(stim_start_frame)
#export
def display_match(match_position, reference=None, recorded=None, corrected=None, len_line=50):
start, mid, end = 0, len(reference)//2, len(reference)-len_line
for line in [start, mid, end]:
if reference is not None:
print("REF ["+str(line)+"] "," ".join(map(str,map(int, reference[line:line+len_line]))))
if recorded is not None:
print("REC ["+str(line)+"] "," ".join(map(str,map(int, recorded[line+match_position:line+len_line+match_position]))))
if corrected is not None:
print("COR ["+str(line)+"] "," ".join(map(str,map(int, corrected[line:line+len_line]))))
print()
# Let's see the match we obtain
display_match(stim_start_frame, reference=unpacked_checkerboard[1], recorded=reM["signals"][0])
# We have a match!! But be sure to check it every time, as mismatches occur. In that case, set stim_start_frame manually
# +
#export
def frame_error_correction(signals, unpacked, algo="nw", **kwargs):
"""Correcting the display stimulus frame values. Shifts are first detected with one of
`shift_detection_conv` or `shift_detection_NW` and applied to the stimulus template. Then single frame
mismatch are detected and corrected.
- signals: true signal values recorded
- unpacked: stimulus tuple (inten,marker,shader)
- algo: algorithm for shift detection among [nw, conv]
- **kwargs: extra parameter for shift detection functions
returns: stim_tuple_corrected, shift_log, (error_frames_idx, replacement_idx)"""
if algo=="no_shift":
intensity, marker, shader = unpacked[0].copy(), unpacked[1].copy(), unpacked[2]
if shader is not None:
shader = shader.copy()
error_frames, replacements = error_frame_matches(signals, marker, range_=5)
shift_log = []
else:
if algo=="nw":
shift_log = shift_detection_NW(signals.astype(int), unpacked[1].astype(int), **kwargs)
elif algo=="conv":
shift_log = shift_detection_conv(signals.astype(int), unpacked[1].astype(int), range_=5, **kwargs)
intensity, marker, shader = apply_shifts(unpacked, shift_log)
error_frames, replacements = error_frame_matches(signals, marker, range_=5)
if len(error_frames)>0:
intensity[error_frames] = intensity[replacements]
marker[error_frames] = marker[replacements]
if shader is not None:
shader[error_frames] = shader[replacements]
return (intensity, marker, shader), shift_log, list(zip(map(int,error_frames), map(int,replacements)))
def error_frame_matches(signals, marker, range_):
"""Find the frames mismatching and finds in the record the closest frame with an identical signal value"""
error_frames = np.nonzero(signals!=marker)[0]
if len(error_frames)>0:
where_equal = [((np.where(marker[err_id-range_:err_id+(range_+1)] == signals[err_id])[0]) - range_) for err_id in error_frames]
#filtering out the frames where no match was found
tmp = np.array([[wheq,err] for (wheq, err) in zip(where_equal, error_frames) if len(wheq)>0])
if len(tmp)==0:
replacements = np.empty(shape=(0,), dtype=int)
error_frames = np.empty(shape=(0,), dtype=int)
else:
where_equal = tmp[:,0]
error_frames = tmp[:,1]
#Choosing among the equal frame signals the one that is the closest
closest_equal = [wheq[(np.abs(wheq)).argmin()] for wheq in where_equal]
error_frames = np.array(error_frames, dtype=int)
replacements = error_frames + np.array(closest_equal, dtype=int)
else:
replacements = np.empty(shape=(0,), dtype=int)
error_frames = np.empty(shape=(0,), dtype=int)
return error_frames, replacements
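# For each mismatching frame, error_frame_matches looks in a small window of the template for the nearest frame carrying the same signal value. A simplified toy sketch of that search (no edge handling beyond the left border):

```python
import numpy as np

signals = np.array([0, 1, 2, 3, 4, 0, 1, 2])
marker  = np.array([0, 1, 2, 4, 3, 0, 1, 2])  # frames 3 and 4 are swapped
range_ = 2

err = np.nonzero(signals != marker)[0]
repl = []
for e in err:
    # offsets within the window where the template value equals the recorded one
    window = marker[max(e - range_, 0):e + range_ + 1]
    offs = np.where(window == signals[e])[0] - min(e, range_)
    repl.append(e + offs[np.abs(offs).argmin()])  # keep the closest match
print(err.tolist(), [int(r) for r in repl])  # [3, 4] [4, 3]
```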
def apply_shifts(unpacked, op_log):
"""Applies the shifts found by either shift_detection functions"""
inten, marker, shader = unpacked[0].copy(), unpacked[1].copy(), unpacked[2]
if shader is not None:
shader = shader.copy()
orig_len = len(marker)
for idx, op in op_log:
if op=="ins": #We insert a frame
marker = np.insert(marker, idx, marker[idx], axis=0)
inten = np.insert(inten , idx, inten[idx], axis=0)
if shader is not None:
shader = np.insert(shader, idx, shader[idx], axis=0)
elif op=="del": #We concatenate without the deleted frame
marker = np.concatenate((marker[:idx],marker[idx+1:]))
inten = np.concatenate((inten[:idx],inten[idx+1:]))
if shader is not None:
shader = np.concatenate((shader[:idx],shader[idx+1:]))
marker = marker[:orig_len]
inten = inten[:orig_len]
if shader is not None:
shader = shader[:orig_len]
return (inten, marker, shader)
def shift_detection_conv(signals, marker, range_):
"""Detect shifts with a convolution method. First look at how far the next closest frame are, and average
it over the record. When the average cross the -1 or 1 threshold, shift the reference accordingly."""
marker = marker.copy()
shift_detected = True
shift_log = []
while shift_detected:
error_frames, replacements = error_frame_matches(signals, marker, range_)
all_shifts = np.zeros(len(marker))
all_shifts[error_frames] = replacements-error_frames
all_shifts_conv = np.convolve(all_shifts, [1/20]*20, mode="same") #Averaging the shifts to find consistent shifts
shift_detected = np.any(np.abs(all_shifts_conv)>.5)
if shift_detected: #If the ±.5 threshold is crossed, we insert or delete a frame in the reference and repeat the operation
change_idx = np.argmax(np.abs(all_shifts_conv)>.5)
if all_shifts_conv[change_idx]>.5:#Need to delete frame in reference
#Need to refine index to make sure we delete a useless frame
start,stop = max(0,change_idx-2), min(len(marker),change_idx+2)
for i in range(start,stop):
if marker[i] not in signals[start:stop]:
change_idx = i
break
shift_log.append([int(change_idx), "del"])
marker = np.concatenate((marker[:change_idx], marker[change_idx+1:], [0]))
else:#Need to insert frame in reference
shift_log.append([int(change_idx), "ins"])
#inserting a frame and excluding the last frame to keep the references the same length
marker = np.insert(marker, change_idx, marker[change_idx], axis=0)[:-1]
return shift_log
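# The idea behind shift_detection_conv in isolation: if frames after some point consistently match the template one frame away, the boxcar-averaged offsets cross the .5 threshold near that point. A toy sketch:

```python
import numpy as np

# Per-frame offsets to the nearest matching template frame:
# frames 50 onward consistently match one frame later.
all_shifts = np.zeros(100)
all_shifts[50:] = 1

# Averaging over 20 frames smooths out isolated mismatches
smoothed = np.convolve(all_shifts, [1/20] * 20, mode="same")
change_idx = np.argmax(np.abs(smoothed) > 0.5)
print(change_idx)  # crossing detected right around frame 50
```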
def shift_detection_NW(signals, marker, simmat_basis=[1,-1,-3,-3,-1], insdel=-10, rowside=20):
"""Memory optimized Needleman-Wunsch algorithm.
Instead of an N*N matrix, it uses an N*(side*2+1) matrix. Indexing goes slightly differently but the
result is the same, with far less memory consumption, and execution speed scales better with the
size of the sequences to align."""
#Setting the similarity matrix
side = rowside
sim_mat = np.empty((len(marker), side*2+1), dtype="int32")
#Setting the errors
insertion_v = insdel #insertions are common, so the penalty is not too high
deletion_v = insdel #deletion detection happens during periods of confusion but is temporary, hence a high value
error_match = np.array(simmat_basis) #The value for a 0 matching with [0,1,2,3,4]
error_mat = np.empty((len(simmat_basis),len(simmat_basis)))
for i in range(len(simmat_basis)):
error_mat[i] = np.roll(error_match,i)
#Filling the similarity matrix
sim_mat[0, side] = error_mat[marker[0], signals[0]]
#Initialization: Setting the score of the first few row and first few column cells
for j in range(side+1, side*2+1):
sim_mat[0,j] = sim_mat[0,side] + insertion_v*j
for i in range(1, side+1):
sim_mat[i,side-i] = sim_mat[0,side] + deletion_v*i
#Corpus: if j is the first cell of the row, the insert score is set super low
# if j is the last cell of the row, the delete score is set super low
for i in range(1, sim_mat.shape[0]):
start = max(side-i+1, 0)
stop = min(side*2+1, side+sim_mat.shape[0]-i)
for j in range(start, stop):
if j==0:#j==start and i>side:
insert = -99999
delete = sim_mat[i-1, j+1] + deletion_v
elif j==side*2:
delete = -99999
insert = sim_mat[i, j-1] + insertion_v
else:
insert = sim_mat[i, j-1] + insertion_v
delete = sim_mat[i-1, j+1] + deletion_v
match = sim_mat[i-1, j] + error_mat[marker[i], signals[j+i-side]]
sim_mat[i,j] = max(insert,delete,match)
#Reading the similarity matrix
#Reading is mostly the same, except that when i is decremented, 1 must be added to j compared to the usual algorithm.
i = len(marker)-1
j = side
shift_log = []
while (i > 0 or j>side-i):
if (i > 0 and j>side-i and sim_mat[i,j]==(sim_mat[i-1,j]+error_mat[marker[i], signals[j+i-side]])):
i -= 1
elif(i > 0 and sim_mat[i,j] == sim_mat[i-1,j+1] + deletion_v):
shift_log.insert(0,(j+i-side+1, "del")) #Insert the j value for deletion too because all shifts
i-=1 #are relative to the signals recorded, unlike normal NW
j+=1
else:
shift_log.insert(0,(j+i-side, "ins"))
j-=1
return shift_log
# -
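# shift_detection_NW is a banded, memory-optimized variant of the classic Needleman-Wunsch alignment. A sketch of the full-matrix version it mimics, with illustrative scoring (not the similarity matrix built above): a dropped frame in the recorded signals surfaces as a single deletion op.

```python
import numpy as np

def nw_shifts(signals, marker, match=1, mismatch=-3, indel=-10):
    """Classic full-matrix Needleman-Wunsch; returns [(idx, 'ins'|'del')]
    with indices relative to `signals`, mirroring the banded version."""
    n, m = len(marker), len(signals)
    H = np.zeros((n + 1, m + 1))
    H[0, :] = np.arange(m + 1) * indel
    H[:, 0] = np.arange(n + 1) * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if marker[i - 1] == signals[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + indel, H[i, j - 1] + indel)
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        s = match if i and j and marker[i - 1] == signals[j - 1] else mismatch
        if i and j and H[i, j] == H[i - 1, j - 1] + s:
            i, j = i - 1, j - 1                   # (mis)match: move diagonally
        elif i and H[i, j] == H[i - 1, j] + indel:
            ops.insert(0, (j, "del")); i -= 1     # marker frame missing from signals
        else:
            ops.insert(0, (j - 1, "ins")); j -= 1 # extra frame in signals
    return ops

marker  = [0, 1, 2, 3, 4, 0, 1]
signals = [0, 1, 2, 4, 0, 1]  # frame '3' was dropped from the record
print(nw_shifts(signals, marker))  # [(3, 'del')]
```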
# We correct the stimulus values with frame_error_correction, which also returns the changes it made so we can keep track of the errors.
signals = reM["signals"][0][stim_start_frame:stim_start_frame+len(unpacked_checkerboard[0])]
corrected_checkerboard, shift_log, error_frames = frame_error_correction(signals, unpacked_checkerboard, algo="nw")
print(shift_log, len(error_frames))
#export
def chop_stim_edges(first_frame, last_frame, stim_tuple, shift_log, frame_replacement):
"""Cut out the stimulus parts not containing actual stimulus, and change the idx values of `shift_log`
and `frame_replacement` to match the new indexing."""
inten, marker, shader = stim_tuple
if last_frame<0: #Using negative indexing
last_frame = len(marker)+last_frame
inten = inten[first_frame:last_frame]
marker = marker[first_frame:last_frame]
if shader is not None:
shader = shader[first_frame:last_frame]
shift_log = [(shift[0]-first_frame, shift[1]) for shift in shift_log if shift[0]<last_frame]
frame_replacement = [(fr[0]-first_frame, fr[1]-first_frame) for fr in frame_replacement if fr[0]<last_frame]
return (inten, marker, shader), shift_log, frame_replacement
#export
def detect_calcium_frames(scanning_data, epoch_threshold=-8):
"""Detect the timing of the 2P frames, epoch by epoch over a record."""
#Finds the start of a stack recording
start_set = np.where((scanning_data[1:] > epoch_threshold) & (scanning_data[:-1] < epoch_threshold))[0]
#Finds the end of a stack recording
end_set = np.where((scanning_data[1:] < epoch_threshold) & (scanning_data[:-1] > epoch_threshold))[0]
#Splits the records into the epochs
list_epoch = np.array_split(scanning_data, np.ravel(list(zip(start_set, end_set))))[1::2]
def detect_peak_sync(epoch):
#Finds the peaks in an epoch. Peaks have strong SNR so this works fine
return signal.find_peaks(epoch, prominence=2)[0]
return [arr + start_set[i] for i, arr in enumerate(list(map(detect_peak_sync, list_epoch)))]
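# The epoch boundaries in detect_calcium_frames are plain threshold crossings of the scanning signal. A toy sketch of the start/end detection:

```python
import numpy as np

# Synthetic scanning trace: baseline at -10, two epochs sitting above the threshold
scan = np.full(100, -10.0)
scan[20:40] = 0.0
scan[60:90] = 0.0
epoch_threshold = -8

starts = np.where((scan[1:] > epoch_threshold) & (scan[:-1] < epoch_threshold))[0]
ends = np.where((scan[1:] < epoch_threshold) & (scan[:-1] > epoch_threshold))[0]
print(starts.tolist(), ends.tolist())  # [19, 59] [39, 89]
```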
#hide
from nbdev.export import *
notebook2script()
| 12_synchro.processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="SB-sLaVVQE2t"
# # WGAN-GP Training
# + [markdown] id="em7ZnXNxQE3M"
# ## Import libraries
# + id="gJny9s-wQE3M" outputId="86d20868-c05b-491a-ed0a-9b929118ca69"
# %matplotlib inline
import os
import matplotlib.pyplot as plt
from models.WGANGP import WGANGP
from utils.loaders import load_celeb
import pickle
# + id="3O-8rMj6QE3N"
# run params
SECTION = 'gan'
RUN_ID = '0003'
DATA_NAME = 'celeb'
RUN_FOLDER = 'run/{}/'.format(SECTION)
RUN_FOLDER += '_'.join([RUN_ID, DATA_NAME])
if not os.path.exists(RUN_FOLDER):
os.mkdir(RUN_FOLDER)
os.mkdir(os.path.join(RUN_FOLDER, 'viz'))
os.mkdir(os.path.join(RUN_FOLDER, 'images'))
os.mkdir(os.path.join(RUN_FOLDER, 'weights'))
mode = 'build' #'load' #
# + [markdown] id="KYd0HMvRQE3N"
# ## Load the data
# + id="TeSYXgwUQE3O"
BATCH_SIZE = 64
IMAGE_SIZE = 64
# + id="9yCmDrUTQE3O" outputId="5ffcff04-4c7e-41aa-9639-75ba467e248a"
x_train = load_celeb(DATA_NAME, IMAGE_SIZE, BATCH_SIZE)
# + id="rDri6SZFQE3O" outputId="d14eb56a-8931-4570-da45-d6b73a292616"
x_train[0][0][0]
# + id="UEMCsDgNQE3O" outputId="40ae3d5a-f018-4eb5-9f13-3dd160175f85"
plt.imshow((x_train[0][0][0]+1)/2)
# + [markdown] id="APdvhAvHQE3P"
# ## Build the model
# + id="E59Fjk4GQE3P" outputId="9ab9233c-1373-45ee-92e7-345f1e57a332"
gan = WGANGP(input_dim = (IMAGE_SIZE,IMAGE_SIZE,3)
, critic_conv_filters = [64,128,256,512]
, critic_conv_kernel_size = [5,5,5,5]
, critic_conv_strides = [2,2,2,2]
, critic_batch_norm_momentum = None
, critic_activation = 'leaky_relu'
, critic_dropout_rate = None
, critic_learning_rate = 0.0002
, generator_initial_dense_layer_size = (4, 4, 512)
, generator_upsample = [1,1,1,1]
, generator_conv_filters = [256,128,64,3]
, generator_conv_kernel_size = [5,5,5,5]
, generator_conv_strides = [2,2,2,2]
, generator_batch_norm_momentum = 0.9
, generator_activation = 'leaky_relu'
, generator_dropout_rate = None
, generator_learning_rate = 0.0002
, optimiser = 'adam'
, grad_weight = 10
, z_dim = 100
, batch_size = BATCH_SIZE
)
if mode == 'build':
gan.save(RUN_FOLDER)
else:
gan.load_weights(os.path.join(RUN_FOLDER, 'weights/weights.h5'))
# + id="lLQpoyU2QE3Q" outputId="c07662ca-f8b2-4e0d-fcc9-11ee949dbdc3"
gan.critic.summary()
# + id="zfehucfwQE3Q" outputId="863662ee-0098-435a-a939-f43db0be1f18"
gan.generator.summary()
# + id="nh_wH9-AQE3Q" outputId="ae99328d-aa1d-41d6-aec6-48c2bea9cc1d"
gan.critic_model.summary()
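# The grad_weight=10 passed to the model above is the λ of the WGAN-GP gradient penalty: the critic's gradient norm is pushed toward 1 at points interpolated between real and fake samples. A minimal numpy sketch of that term, using a toy linear critic (illustrative only, not the WGANGP class's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(size=(4, 8))   # toy "images", batch of 4
fake = rng.normal(size=(4, 8))
eps = rng.uniform(size=(4, 1))   # one mixing weight per sample

# Sample points on the straight lines between real and fake samples
interp = eps * real + (1 - eps) * fake

# Toy linear critic D(x) = x @ w, so grad_x D(x) = w for every sample
w = rng.normal(size=8)
grad = np.tile(w, (4, 1))
grad_norm = np.linalg.norm(grad, axis=1)
grad_weight = 10
penalty = grad_weight * np.mean((grad_norm - 1) ** 2)
print(interp.shape, round(float(penalty), 3))
```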
# + [markdown] id="n4zSKvuhQE3S"
# ## Train the model
# + id="dVYihzZkQE3S"
EPOCHS = 6000
PRINT_EVERY_N_BATCHES = 5
N_CRITIC = 5
BATCH_SIZE = 64
# + id="9TdDmvXYQE3S" outputId="8a3625e2-139a-42cb-fabc-4dd0409adee3"
gan.train(
x_train
, batch_size = BATCH_SIZE
, epochs = EPOCHS
, run_folder = RUN_FOLDER
, print_every_n_batches = PRINT_EVERY_N_BATCHES
, n_critic = N_CRITIC
, using_generator = True
)
# + id="DC88cExSQE3Z" outputId="de388a9b-278e-420d-9c34-ffc32201bef9"
fig = plt.figure()
plt.plot([x[0] for x in gan.d_losses], color='black', linewidth=0.25)
plt.plot([x[1] for x in gan.d_losses], color='green', linewidth=0.25)
plt.plot([x[2] for x in gan.d_losses], color='red', linewidth=0.25)
plt.plot(gan.g_losses, color='orange', linewidth=0.25)
plt.xlabel('batch', fontsize=18)
plt.ylabel('loss', fontsize=16)
plt.xlim(0, 2000)
# plt.ylim(0, 2)
plt.show()
# + [markdown] id="AXLCxB7eQE3Z"
# 
| wgan faces_train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pathlib import Path
import pandas as pd
import seaborn as sns
#reading freesurfer results
out_dir = Path("/output")
fig_dir = out_dir/"figs"
fsdata_file = out_dir/'freesurfer_out_preped.csv'
tab_data = pd.read_csv(fsdata_file, sep=',', header=0, index_col=0);
GROUPS = ['PD','ET','NC']
n_groups = len(GROUPS);
tab_data.shape
## basic functions
from matplotlib import pyplot as plt
# distribution of large brain parts ratio
def lr_ratio(data, items_basic, items_single, items_lr):
item_left = [ "Left_"+x for x in items_lr];
item_right = [ "Right_"+x for x in items_lr];
items_all = items_single + item_left + item_right + items_lr;
tmp_data = data[items_basic+items_single+item_left+item_right];
for x in items_lr:
tmp_data[x] = tmp_data["Left_"+x] + tmp_data["Right_"+x]
#for x in items_all:
# tmp_data[x+"_r"] = tmp_data[x]/tmp_data["eTIV"]
return tmp_data, items_basic+items_all
def rm_age_sex(data, var_list):
from sklearn import linear_model
import numpy as np
dat = data.copy()
nc_data = dat[dat["is_NC"] == 1]
x_nc = np.array([np.ones(nc_data.shape[0]), np.array((nc_data["is_Male"])), np.array((nc_data["age"]))]).T;
x_all= np.array([np.ones(dat.shape[0]), np.array((dat["is_Male"])), np.array((dat["age"]))]).T;
reg_list = []; new_col=[];
for x in var_list:
reg = linear_model.LinearRegression()
y_nc= np.array(nc_data[x]);
reg.fit(x_nc, y_nc);
tmp_col = x+"_AgeSexRemoved"
dat[tmp_col] = dat[x]-np.matmul(x_all[:,1:], reg.coef_[1:])
dat[tmp_col+"_resid"] = dat[x]-reg.predict(x_all)
dat[tmp_col+"_resid_per"] = (dat[x]-reg.predict(x_all))/dat[x]
reg_list.append(reg); new_col.append(tmp_col);
return dat, new_col, reg_list
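# The residualization idea in rm_age_sex: fit age and sex coefficients on the control (NC) group only, then subtract the age/sex contribution from every subject, so group differences are not confounded by demographics. A self-contained sketch on simulated data (toy variables, numpy lstsq in place of sklearn):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(50, 80, n)
is_male = rng.integers(0, 2, n)

# Hypothetical volume: depends on age and sex, plus noise
vol = 1000 - 2.0 * age + 30 * is_male + rng.normal(0, 1, n)

# Fit [1, sex, age] on the first 100 subjects ("controls") only
X_nc = np.column_stack([np.ones(100), is_male[:100], age[:100]])
coef, *_ = np.linalg.lstsq(X_nc, vol[:100], rcond=None)

# Subtract the sex/age part (not the intercept) from everyone
X_all = np.column_stack([np.ones(n), is_male, age])
vol_adj = vol - X_all[:, 1:] @ coef[1:]

# After adjustment, the correlation with age should be near zero
r = np.corrcoef(vol_adj, age)[0, 1]
print(round(r, 3))
```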
# plot distribution of brain tissues
x_focus = ['eTIV', 'TotalGrayVol', 'CortexVol',
'Brain-Stem', 'SubCortGrayVol', 'CSF', 'Left-Cerebellum-Cortex',
'Left-Cerebellum-White-Matter', 'Right-Cerebellum-Cortex',
'Right-Cerebellum-White-Matter', 'Cerebellum-Cortex',
'Cerebellum-White-Matter', 'CortexVol_r', 'Brain-Stem_r',
'SubCortGrayVol_r', 'CSF_r', 'Left-Cerebellum-Cortex_r',
'Left-Cerebellum-White-Matter_r', 'Right-Cerebellum-Cortex_r',
'Right-Cerebellum-White-Matter_r', 'Cerebellum-Cortex_r',
'Cerebellum-White-Matter_r'];
#dist_plot(tmp_data, x_focus, "age-distr")
# -
# QA checking: nan errors (eTIV)
tab_data[tab_data.isna().any(axis=1)]
# +
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.special import logit, expit
import scipy.stats as stats
def anova_table(aov):
aov['mean_sq'] = aov[:]['sum_sq']/aov[:]['df']
aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])
#aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*aov['mean_sq'][-1]))/(sum(aov['sum_sq'])+aov['mean_sq'][-1])
#, 'omega_sq']
cols = ['sum_sq', 'df', 'mean_sq', 'F', 'PR(>F)', 'eta_sq']
aov = aov[cols]
return aov
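# anova_table adds effect sizes: eta squared is simply each effect's sum of squares divided by the total sum of squares. A toy computation with made-up values:

```python
# Hypothetical sums of squares from a three-factor ANOVA
ss = {"diagnosis": 12.0, "sex": 3.0, "age": 5.0, "Residual": 80.0}
ss_total = sum(ss.values())
eta_sq = {k: v / ss_total for k, v in ss.items() if k != "Residual"}
print({k: round(v, 3) for k, v in eta_sq.items()})  # diagnosis explains 12% of variance
```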
def check_anova(dat, feature):
from statsmodels.formula.api import ols
from scipy.special import logit, expit
import scipy.stats as stats
y_str = 'Q(\"'+feature+'\") ~ '
yr_str = 'logit(Q(\"'+feature+'_r\")) ~ '
yas_str = 'Q(\"'+feature+'_AgeSexRemoved\") ~ '
yasr_str = 'logit(Q(\"'+feature+'_AgeSexRemoved_r\")) ~ '
model1 = ols(y_str + 'C(diagnosis) + C(sex) + age', data=dat).fit()
model2 = ols(yr_str + 'C(diagnosis) + C(sex) + age', data=dat).fit()
model3 = ols(yas_str + 'C(diagnosis) + C(sex) + age', data=dat).fit()
model4 = ols(yasr_str + 'C(diagnosis) + C(sex) + age', data=dat).fit()
aov_table1 = sm.stats.anova_lm(model1, typ=2)
aov_table2 = sm.stats.anova_lm(model2, typ=2)
aov_table3 = sm.stats.anova_lm(model3, typ=2)
aov_table4 = sm.stats.anova_lm(model4, typ=2)
print(feature," Shapiro-Wilk test:", stats.shapiro(model1.resid))
print(anova_table(aov_table1))
print(feature+" normalized ", "Shapiro-Wilk test:", stats.shapiro(model2.resid))
print(anova_table(aov_table2))
print(feature+" age sex controled ", "Shapiro-Wilk test:", stats.shapiro(model3.resid))
print(anova_table(aov_table3))
print(feature+" age sex controled and normalized ", "Shapiro-Wilk test:", stats.shapiro(model4.resid))
print(anova_table(aov_table4))
return anova_table(aov_table1), anova_table(aov_table2), anova_table(aov_table3), anova_table(aov_table4)
def check_ancova(dat, feature, is_logit):
from statsmodels.formula.api import ols
from scipy.special import logit, expit
import scipy.stats as stats
if is_logit:
y_str = 'logit(Q(\"'+feature+'_r\")) ~ '
else:
y_str = 'Q(\"'+feature+'\") ~ '
model1 = ols(y_str + 'C(diagnosis) + C(sex) + age', data=dat).fit()
model2 = ols(y_str + 'C(diagnosis) + C(sex) + age + C(diagnosis):C(sex)', data=dat).fit()
model3 = ols(y_str + 'C(diagnosis) + C(sex) + age + C(diagnosis):age', data=dat).fit()
model4 = ols(y_str + 'C(diagnosis) + C(sex) + age + C(sex):age', data=dat).fit()
model5 = ols(y_str + 'C(diagnosis) + C(sex) + age + C(diagnosis):C(sex):age', data=dat).fit()
aov_table1 = sm.stats.anova_lm(model1, typ=2)
aov_table2 = sm.stats.anova_lm(model2, typ=2)
aov_table3 = sm.stats.anova_lm(model3, typ=2)
aov_table4 = sm.stats.anova_lm(model4, typ=2)
aov_table5 = sm.stats.anova_lm(model5, typ=2)
print(feature," Shapiro-Wilk test:", stats.shapiro(model1.resid))
print(anova_table(aov_table1))
print(feature+" + diagnosis:sex ", "Shapiro-Wilk test:", stats.shapiro(model2.resid))
print(anova_table(aov_table2))
print(feature+" + diagnosis:age ", "Shapiro-Wilk test:", stats.shapiro(model3.resid))
print(anova_table(aov_table3))
print(feature+" + sex:age ", "Shapiro-Wilk test:", stats.shapiro(model4.resid))
print(anova_table(aov_table4))
print(feature+" + diagnosis:sex:age ", "Shapiro-Wilk test:", stats.shapiro(model5.resid))
print(anova_table(aov_table5))
return anova_table(aov_table1), anova_table(aov_table2), anova_table(aov_table3), anova_table(aov_table4), anova_table(aov_table5)
check_ancova(rm_asr_data, "Cerebellum-Cortex", 1)
# -
def screen_Tukeyhsd(data, test_list):
import statsmodels.stats.multicomp as mc
from functools import reduce
res_all=[]; reject_index=[];
for i in range(len(test_list)):
x = test_list[i];
tmp_comp = mc.MultiComparison(data[x], data['diagnosis'])
tmp_res = tmp_comp.tukeyhsd()
res_all.append(tmp_res.summary())
if sum(list(tmp_res.reject))>=2:
reject_index.append(i)
print(str(i)+"th Tukey HSD test positive -->> "+x)
print(res_all[i])
#print(res_all[i])
return res_all, reject_index
non_feature_list = ["diagnosis", "age", "sex", "is_PD", "is_ET","is_NC", "is_Male", "is_Female"];
all_feature_list = tab_data.columns.drop(non_feature_list)
res_all, reject_index = screen_Tukeyhsd(tab_data, all_feature_list)
print(len(reject_index))
# rh_G_temp_sup-G_T_transv_volume, rh_S_temporal_transverse_thickness
# select the data
from sklearn import linear_model
import numpy as np
items_basic = ["diagnosis", "age", "sex", "is_PD", "is_ET","is_NC",
"is_Male", "is_Female", "eTIV", "TotalGrayVol",];
items_single = ["CerebralWhiteMatterVol", "CortexVol", "Brain-Stem", "SubCortGrayVol", "CSF",
"3rd-Ventricle", "4th-Ventricle", "5th-Ventricle", "SupraTentorialVol",
"CC_Anterior", "CC_Central", "CC_Mid_Anterior", "CC_Mid_Posterior", "CC_Posterior"];
items_lr = ["Inf-Lat-Vent", "Lateral-Ventricle",
"Cerebellum-Cortex", "Cerebellum-White-Matter", "WM-hypointensities",
"Accumbens-area", "Amygdala", "Hippocampus",
"Pallidum", "Caudate", "Putamen", "Thalamus-Proper"];
tmp_data, items_all = lr_ratio(tab_data, items_basic, items_single, items_lr);
rm_AgeSex_list = items_all[8:];
rm_as_data, rm_as_col_list, rm_as_reg_list = rm_age_sex(tmp_data, rm_AgeSex_list)
# Check regression residuals
resid_list = [x+"_resid_per" for x in rm_as_col_list ];
#rm_as_data[resid_list].plot.box(vert=False,figsize=(10,20))
# +
# test correlation between brain areas and diagnosis
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.formula.api import glm
from scipy.special import logit, expit
m1_form = "Q(\"Cerebellum-Cortex\") ~ is_ET + is_PD + is_NC"
m1 = glm(formula=m1_form, data=rm_asr_data)
res1=m1.fit()
print(res1.summary2())
m2_form = "logit(Q(\"Cerebellum-Cortex_r\")) ~ is_ET + is_PD + is_NC"
m2 = glm(formula=m2_form, data=rm_asr_data)
res2=m2.fit()
print(res2.summary2())
m3_form = "Q(\"Cerebellum-Cortex_AgeSexRemoved\") ~ is_ET + is_PD + is_NC"
m3 = glm(formula=m3_form, data=rm_asr_data)
res3=m3.fit()
print(res3.summary2())
m4_form = "logit(Q(\"Cerebellum-Cortex_AgeSexRemoved_r\")) ~ is_ET + is_PD + is_NC"
m4 = glm(formula=m4_form, data=rm_asr_data)
res4=m4.fit()
print(res4.summary2())
# +
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.formula.api import glm
from scipy.special import logit, expit
m1_form = "is_ET ~ Q(\"Left-Cerebellum-Cortex_AgeSexRemoved\") + Q(\"Left-Cerebellum-White-Matter_AgeSexRemoved\") + \
Q(\"Right-Cerebellum-Cortex_AgeSexRemoved\") + Q(\"Right-Cerebellum-White-Matter_AgeSexRemoved\") "
m1 = glm(formula=m1_form, data=rm_asr_data[rm_asr_data["diagnosis"]!="PD"],)
res1=m1.fit()
print(res1.summary2())
m2_form = "is_PD ~ Q(\"Left-Cerebellum-Cortex_AgeSexRemoved\") + Q(\"Left-Cerebellum-White-Matter_AgeSexRemoved\") + \
Q(\"Right-Cerebellum-Cortex_AgeSexRemoved\") + Q(\"Right-Cerebellum-White-Matter_AgeSexRemoved\") "
m2 = glm(formula=m2_form, data=rm_asr_data[rm_asr_data["diagnosis"]!="ET"], )
res2=m2.fit()
print(res2.summary2())
# -
import scipy.stats as stats
import statsmodels.stats.multicomp as mc
comp = mc.MultiComparison(logit(rm_asr_data['Cerebellum-Cortex_AgeSexRemoved']), rm_asr_data['diagnosis'])
tbl, a1, a2 = comp.allpairtest(stats.ttest_ind, method= "bonf")
tbl
import seaborn as sns
items_focus = ['CortexVol', 'Brain-Stem', 'SubCortGrayVol',
               'Cerebellum-Cortex','Cerebellum-White-Matter'];
t2=sns.pairplot(rm_asr_data, vars=items_focus, hue="diagnosis", markers=["o", "s", "D"],
diag_kind="kde", height=5)
t2.map_lower(sns.kdeplot, levels=4, color=".2")
t2.add_legend(title="brain", adjust_subtitles=True)
#t1.savefig("brain_all.jpg", figsize=(12,6.5))
# +
features_list = ['CortexVol_AgeSexRemoved', 'Brain-Stem_AgeSexRemoved', 'SubCortGrayVol_AgeSexRemoved',
'Cerebellum-Cortex_AgeSexRemoved','Cerebellum-White-Matter_AgeSexRemoved']
# 'Accumbens-area_AgeSexRemoved', 'Amygdala_AgeSexRemoved', 'Hippocampus_AgeSexRemoved',
# 'Pallidum_AgeSexRemoved', 'Caudate_AgeSexRemoved', 'Putamen_AgeSexRemoved', 'Thalamus-Proper_AgeSexRemoved'
et_Xtrain = et_data[features_list]
et_ytrain = et_data[['is_ET']]
pd_Xtrain = pd_data[features_list]
pd_ytrain = pd_data[['is_PD']]
# building the model and fitting the data
et_log_reg = sm.Logit(et_ytrain, et_Xtrain).fit()
pd_log_reg = sm.Logit(pd_ytrain, pd_Xtrain).fit()
print("ET reg: ", et_log_reg.summary())
print("PD reg: ", pd_log_reg.summary())
# +
# one-way anova
import pandas as pd
import researchpy as rp
import scipy.stats as stats
def check_stats(data, item):
print(rp.summary_cont(data[item]))
print(rp.summary_cont(data[item].groupby(data['diagnosis'])))
anova_res = stats.f_oneway(data[item][data['diagnosis'] == 'PD'],
data[item][data['diagnosis'] == 'ET'],
data[item][data['diagnosis'] == 'NC'])
print("F=", anova_res.statistic, "p-val=", anova_res.pvalue)
return anova_res
anova_res=check_stats(rm_asr_data, 'Cerebellum-White-Matter_AgeSexRemoved')
# -
| codes/devel/cerebellum_confounder/screening_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
########################################################################################################################
# Filename: RNN_Models.ipynb
#
# Purpose: Multi-label Text-categorization via recurrent neural networks
# Author(s): Bobby (Robert) Lumpkin
#
# Library Dependencies: numpy, pandas, scikit-learn, skmultilearn, joblib, os, sys, threshold_learning
########################################################################################################################
# -
# # Multilabel Text Classification with Recurrent Neural Networks
import numpy as np
import pandas as pd
import math
import os
import json
import ast
import random
from joblib import dump, load
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from bpmll import bp_mll_loss
import sklearn_json as skljson
from sklearn.model_selection import train_test_split
from sklearn import metrics
import matplotlib.pyplot as plt
import sys
os.chdir('C:\\Users\\rober\\OneDrive\\Documents\\Multilabel-Text-Classification\\Deep Learning Models\\RNN Models') ## Set working directory
## to be 'ANN Results'
sys.path.append('../../ThresholdFunctionLearning') ## Append path to the ThresholdFunctionLearning directory to the interpreters
## search path
from threshold_learning import predict_test_labels_binary ## Import the 'predict_test_labels_binary()' function from the
from threshold_learning import predict_labels_binary ## threshold_learning library
sys.path.append('../FF Models/GridSearchAid_FFNetworks')
from FFNN_gridSearch_aid import SizeLayersPows2, createModel
## Set config values
learning_rates_list = [0.1, 0.01, 0.001]
path_to_models = 'Models'
path_to_CE_training_histories = 'Training Histories/CE_histories.npz'
## Load the sequence of integers training/test data
npzfile = np.load("../../Data/seq_trainTest_data.npz")
X_train_padded = npzfile['train_padded']
X_test_padded = npzfile['test_padded']
Y_train = npzfile['Y_train'].astype('float64')
Y_test = npzfile['Y_test'].astype('float64')
num_unique_words = npzfile['num_unique_words']
SizeLayersPows2(4, 32, 827)[0:3]
# # Cross Entropy Models -- Traditional ("Naive") Approach
# +
# %%capture
## Define the LSTM RNN architecture
num_labels = Y_train.shape[1]
embedding_size_list = SizeLayersPows2(4, 32, 827)[0:3]
input_length = X_train_padded.shape[1]
model_dict = {}
histories_dict = {}
histories_df_dict = {}
for learning_rate in learning_rates_list:
for embedding_size in embedding_size_list:
hidden_size = int(embedding_size / 2)
model_biLSTM = tf.keras.models.Sequential([
tf.keras.layers.Embedding(num_unique_words, embedding_size, input_length = input_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_size,
return_sequences = False,
return_state = False)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(num_labels, activation = 'sigmoid')
])
        optim = tf.keras.optimizers.Adam(learning_rate = learning_rate)
#optim = tf.keras.optimizers.Adagrad(
# learning_rate = 0.001, initial_accumulator_value = 0.1, epsilon = 1e-07,
# name = 'Adagrad')
#optim = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, momentum = 0.8, epsilon=1e-07,)
metric = tfa.metrics.HammingLoss(mode = 'multilabel', threshold = 0.5)
model_biLSTM.compile(loss = 'binary_crossentropy', optimizer = optim, metrics = metric)
tf.random.set_seed(123)
history = model_biLSTM.fit(X_train_padded, Y_train, epochs = 15, validation_data = (X_test_padded, Y_test), verbose = 0)
name = f"model_{embedding_size}_ce_{learning_rate}"
name = name.replace('.', '')
model_dict[name] = model_biLSTM
histories_dict[name] = history
histories_df_dict[name] = pd.DataFrame(history.history)
history_name = name + 'history_df'
exec(history_name + ' = pd.DataFrame(history.history)')
# -
## (NOTE: IF RUNNING PREVIOUS CELL, SKIP THIS CELL) -- Load training histories
histories = np.load(path_to_CE_training_histories, allow_pickle = True)
history_cols = ['loss', 'hamming_loss', 'val_loss', 'val_hamming_loss']
for learning_rate in learning_rates_list:
for embedding_size in embedding_size_list:
history_name = f"model_{embedding_size}_ce_{learning_rate}" + 'history_df'
history_name = history_name.replace('.', '')
exec_string = f"{history_name} = histories['{history_name}']"
exec(exec_string)
exec_string2 = f"{history_name} = pd.DataFrame({history_name}, columns = history_cols)"
exec(exec_string2)
## Visualize the 'val_hamming_loss' histories for each model by learning rate
fig = plt.figure(figsize = (14, 4))
i = 1
for learning_rate in learning_rates_list:
for embedding_size in embedding_size_list:
history_name = f"model_{embedding_size}_ce_{learning_rate}" + 'history_df'
history_name = history_name.replace('.', '')
val_loss_name = f"model_{embedding_size}_val_hamming_loss"
val_loss_name = val_loss_name.replace('.', '')
exec(val_loss_name + ' = ' + history_name + '.val_hamming_loss')
subplot_string = f"""ax{i} = fig.add_subplot(13{i})
ax{i}.plot(model_32_val_hamming_loss, label = 'Embedding' + str(32))
ax{i}.plot(model_128_val_hamming_loss, label = 'Embedding' + str(128))
ax{i}.plot(model_256_val_hamming_loss, label = 'Embedding' + str(256))
ax{i}.set_ylim(0, 0.015)
ax{i}.set_xlabel('Epochs')
ax{i}.set_ylabel('Validation Hamming Loss')
ax{i}.legend()
ax{i}.set_title('Learning Rate = ' + str({learning_rate}), fontsize = 14)"""
exec(subplot_string)
i += 1
fig.tight_layout()
# +
## Try applying a learned threshold function -- (Takes a few minutes to run)
#selected_model = model_dict['model_256_ce_0001'] ## If ran training cell
selected_model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(num_unique_words, 256, input_length = input_length),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128,  ## hidden size = embedding size / 2
                                                       return_sequences = False,
                                                       return_state = False)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_labels, activation = 'sigmoid')
])
optim = tf.keras.optimizers.Adam(learning_rate = 0.001) ## If loading histories and model
metric = tfa.metrics.HammingLoss(mode = 'multilabel', threshold = 0.5)
selected_model.compile(loss = 'binary_crossentropy', optimizer = optim, metrics = metric)
selected_model.load_weights(path_to_models + '/RNN_ce_best')
# Learn a threshold function
Y_train_pred_proba_array = selected_model.predict(X_train_padded)
Y_test_pred_proba_array = selected_model.predict(X_test_padded)
t_range = (0, 1)
#test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred_proba_array, Y_train, Y_test_pred_proba_array, t_range)
threshold_function = load(path_to_models + '/RNN_ce_best_thresholdFunction')
test_labels_binary = predict_labels_binary(Y_test_pred_proba_array, threshold_function)
ce_val_hamming_loss_withThreshold = metrics.hamming_loss(Y_test, test_labels_binary)
ce_val_hamming_loss_withThreshold
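# `predict_labels_binary` and the learned threshold function come from the project's
# `threshold_learning` module, whose internals are not shown here. As a minimal
# stand-in for the idea — choose one global probability cutoff that minimizes
# Hamming loss on scored examples — a sketch might look like this (illustrative
# only, not the module's actual algorithm, which learns a threshold *function*):

```python
import numpy as np

def best_global_threshold(y_true, y_proba, grid=None):
    """Return the cutoff in `grid` with the lowest Hamming loss."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    losses = [np.mean((y_proba >= t).astype(int) != y_true) for t in grid]
    return float(grid[int(np.argmin(losses))])

# Toy multilabel scores: 3 samples x 2 labels
y_true = np.array([[1, 0], [0, 1], [1, 1]])
y_proba = np.array([[0.9, 0.2], [0.3, 0.8], [0.7, 0.6]])
t = best_global_threshold(y_true, y_proba)
y_pred = (y_proba >= t).astype(int)
```

# On these toy scores any cutoff between 0.3 and 0.6 separates the labels perfectly.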
# +
## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Write training histories to .npz file and save best model
outfile = path_to_CE_training_histories
save_string = 'np.savez_compressed(outfile'
for learning_rate in learning_rates_list:
for embedding_size in embedding_size_list:
history_name = f"model_{embedding_size}_ce_{learning_rate}" + 'history_df'
history_name = history_name.replace('.', '')
save_string = save_string + f", \n {history_name} = {history_name}"
save_string = save_string + ')'
#exec(save_string)
#selected_model = model_dict['model_256_ce_0001']
#selected_model.save_weights(path_to_models + '/RNN_ce_best')
#dump(threshold_function, path_to_models + '/RNN_ce_best_thresholdFunction', compress = 3)
# -
# # BPMLL Models -- "Novel" Approach
| Deep Learning Models/RNN Models/RNN_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Credit Card Fraud Analysis and Detection
#
# ---
#
# ## Problem Identification
#
# ### Problem statement
#
# How can credit card companies detect up to 90% of fraudulent transactions?
#
# ### Key data sources
#
# - [Credit Card Fraud Detection Kaggle dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
# ---
#
# ## Environment setup
# +
import os
import pandas as pd
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
sns.set('notebook')
# -
# Read CSV to Dataframe
df = pd.read_csv('../data/raw/creditcard.csv')
# Inspect the data
df.head()
# +
from sklearn.metrics import confusion_matrix, plot_precision_recall_curve, plot_roc_curve
def metrics_plots(model, X_test, y_test):
'''Plot Recall Precision AUC'''
y_pred = model.predict(X_test)
fig = plt.figure(figsize=(15,5))
# Confusion matrix
ax = fig.add_subplot(121)
labels = ['Normal', 'Fraud']
conf_matrix = confusion_matrix(y_test, y_pred)
sns.heatmap(conf_matrix, xticklabels=labels, yticklabels=labels, annot=True, fmt="d", cmap="Blues", cbar=False, ax=ax);
    plt.title("Confusion matrix")
    plt.xlabel('Predicted class')
    plt.ylabel('True class')
# AUC graph
ax = fig.add_subplot(122)
plot_precision_recall_curve(model, X_test, y_test, ax=ax)
plt.show()
# +
from sklearn.metrics import accuracy_score, auc, f1_score, matthews_corrcoef, precision_recall_curve, precision_score, recall_score
def metrics(model, X_test, y_test):
'''Print out metrics'''
# Predict
y_pred = model.predict(X_test)
probabilities = model.predict_proba(X_test)[:,1]
# Calculate AUC
precision, recall, _ = precision_recall_curve(y_test, probabilities)
auc_score = auc(recall, precision)
print(f'Predicted labels: {(np.unique(y_pred))}')
print(pd.DataFrame(y_pred)[0].value_counts())
print(f'\nAccuracy: {accuracy_score(y_test, y_pred):.4}')
print(f'Recall: {recall_score(y_test, y_pred):.4}')
print(f'Precision: {precision_score(y_test, y_pred):.4}')
# This style will make it easier to copy and paste for a Markdown table
print(f'\nAUC | F1 Score | MCC Score')
print(f'{auc_score:.4} | {f1_score(y_test, y_pred):.4} | {matthews_corrcoef(y_test, y_pred):.4}')
# +
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(model, X_test, y_test, save_path='conf-matrix.png'):
'''Plot the confusion matrix for a model'''
y_pred = model.predict(X_test)
labels = ['Normal', 'Fraud']
conf_matrix = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(5, 5))
sns.heatmap(conf_matrix, xticklabels=labels, yticklabels=labels, annot=True, fmt="d", cmap="Blues", cbar=False);
    plt.title("Confusion matrix")
    plt.xlabel('Predicted class')
    plt.ylabel('True class')
plt.savefig(save_path, bbox_inches = 'tight')
plt.show()
# +
from sklearn.metrics import plot_precision_recall_curve
def plot_auc_results(models, X_test, y_test, save_path='prauc-results.png'):
'''Plot Recall Precision AUC results'''
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
for model in models:
plot_precision_recall_curve(model, X_test, y_test, ax=ax)
plt.savefig(save_path, bbox_inches = 'tight')
plt.show()
# -
# ## Hyperparameter Optimization
# +
from pprint import pprint
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
pprint(random_grid)
# +
# NOTE: this takes >100 minutes to complete
# from sklearn.ensemble import RandomForestClassifier
# rf = RandomForestClassifier()
# rf_random = RandomizedSearchCV(estimator = rf,
# param_distributions = random_grid,
# n_iter = 10,
# cv = 3,
# verbose=2,
# random_state=42,
# n_jobs = 2)
# rf_random.fit(X_train, y_train)
best_params = {
'bootstrap': False,
'max_depth': 80,
'max_features': 'auto',
'min_samples_leaf': 1,
'min_samples_split': 10,
'n_estimators': 94,
'random_state': 42,
'verbose': 1,
'n_jobs': -1
}
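# The tuned settings would presumably be refit via `RandomForestClassifier(**best_params)`.
# A quick sketch on synthetic data (the real `X_train`/`y_train` live in other
# notebooks; `n_estimators` is reduced and `max_features` set to `'sqrt'` so the
# example runs quickly on current scikit-learn, where `max_features='auto'` was removed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Small imbalanced synthetic dataset standing in for the fraud data
X_demo, y_demo = make_classification(n_samples=300, weights=[0.9], random_state=42)
rf_best = RandomForestClassifier(
    bootstrap=False, max_depth=80, max_features='sqrt',
    min_samples_leaf=1, min_samples_split=10,
    n_estimators=20, random_state=42, n_jobs=-1,
)
rf_best.fit(X_demo, y_demo)
pred = rf_best.predict(X_demo)
```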
| notebooks/3.2-cjm-tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: **Finding Lane Lines**
# ***
# The goal of this project is a pipeline that identifies lane lines on the road in a selection of video streams. To achieve this, the pipeline detects candidate line segments by transforming the region-masked edge image produced by the Canny edge detector into Hough space. The detected line segments are then averaged and extrapolated to map out the full extent of the lane lines, so that a single line is drawn for each side of the lane.
# ***
#
# **Example output**
#
# ---
#
# <figure>
# <img src="data/examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Example output of color and region masked hough transform line detection </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="data/examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Example output after averaging and extrapolating the detected line segments</p>
# </figcaption>
# </figure>
# +
import cv2
import math
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import os
# %matplotlib inline
# -
# ## Reading in single images
#
img_dir = os.path.join('data', 'test_images')
img = mpimg.imread(os.path.join(img_dir, 'solidWhiteRight.jpg'))
print('Image type:', type(img), ', dimensions:', img.shape)
plt.imshow(img)
grayscale_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
print('Image type:', type(grayscale_img), ', dimensions:', grayscale_img.shape)
plt.imshow(grayscale_img, cmap='gray')
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
# ## Utils
# +
def apply_roi_mask(img, vertices):
"""
Applies a region mask.
Only keeps the region of the image defined by the polygon
formed using `vertices`. The remaining part of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
mask = np.zeros_like(img)
if len(img.shape) > 2:
num_channels = img.shape[2] # 3 or 4
roi_color = (255,) * num_channels
else:
roi_color = 255
cv2.fillPoly(mask, vertices, roi_color)
masked_img = cv2.bitwise_and(img, mask)
return masked_img
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
"""
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
left_start_candidates = []
left_end_candidates = []
right_start_candidates = []
right_end_candidates = []
for line in lines:
for x1,y1,x2,y2 in line:
m = (y2 - y1) / (x2 - x1)
# Filter out outliers (i.e. close to horizontal segments)
m_threshold = 0.2
if abs(m) <= m_threshold:
continue
b = y1 - m * x1
# Start position
y_s = img.shape[0]
x_s = (y_s - b) / m
# End position
y_e = 0.61 * img.shape[0]
x_e = (y_e - b) / m
if m > 0:
left_start_candidates.append((x_s, y_s))
left_end_candidates.append((x_e, y_e))
continue
right_start_candidates.append((x_s, y_s))
right_end_candidates.append((x_e, y_e))
    left_start_mean = tuple(np.floor(np.mean(left_start_candidates, axis=0)).astype(int))
    left_end_mean = tuple(np.floor(np.mean(left_end_candidates, axis=0)).astype(int))
    right_start_mean = tuple(np.floor(np.mean(right_start_candidates, axis=0)).astype(int))
    right_end_mean = tuple(np.floor(np.mean(right_end_candidates, axis=0)).astype(int))
cv2.line(img, left_start_mean, left_end_mean, color, thickness)
cv2.line(img, right_start_mean, right_end_mean, color, thickness)
def run_hough_transform(edge_img, rho, theta, threshold, min_line_len, max_line_gap):
"""
Applies the Hough transform to the input edge image.
`img` is the edge image output by the Canny edge detector.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(
edge_img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((edge_img.shape[0], edge_img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
def overlay_images(img_a, img_b, α=0.8, β=1., γ=0.):
"""
Produces an overlaid image from the two input images according to:
img_a * α + img_b * β + γ
Note: Both images must be of identical shape!
"""
return cv2.addWeighted(img_a, α, img_b, β, γ)
# -
# ## Lane Line Detection Pipeline
#
#
# Several images in data/test_images are used for parameter tuning in order to make the pipeline work for these select cases.
def detect_lane_lines(img, dbg_name, save_dbg_imgs = False):
dbg_dir = os.path.join('data', 'debug_images')
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Apply Gaussian smoothing
kernel_size = 3
blurred_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
# Run the Canny edge detector
low_threshold = 50
high_threshold = 140
edge_img = cv2.Canny(blurred_gray, low_threshold, high_threshold)
if save_dbg_imgs:
mpimg.imsave(os.path.join(dbg_dir, dbg_name + '_canny_output.jpg'), edge_img)
# Apply region mask
vertices = np.array([[(115, img.shape[0]),(430, 0.61 * img.shape[0]), (530, 0.61 * img.shape[0]), (900, img.shape[0])]], dtype=np.int32)
edge_img = apply_roi_mask(edge_img, vertices)
if save_dbg_imgs:
mpimg.imsave(os.path.join(dbg_dir, dbg_name + '_masked_canny_output.jpg'), edge_img)
# Run the Hough transform to detect line segments within the ROI
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 10 # minimum number of pixels making up a line
max_line_gap = 4 # maximum gap in pixels between connectable line segments
line_img = run_hough_transform(edge_img, rho, theta, threshold, min_line_len, max_line_gap)
if save_dbg_imgs:
mpimg.imsave(os.path.join(dbg_dir, dbg_name + '_hough_output.jpg'), line_img)
result_img = overlay_images(img, line_img)
if save_dbg_imgs:
mpimg.imsave(os.path.join(dbg_dir, dbg_name + '_result.jpg'), result_img)
return result_img
# +
img_dir = os.path.join('data', 'test_images')
img_fns = os.listdir(img_dir)
for img_fn in img_fns:
name, _ = os.path.splitext(img_fn);
img = mpimg.imread(os.path.join(img_dir, img_fn))
detect_lane_lines(img, name, True)
# -
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output must be a color image (3 channels) for processing video below.
    # Run the lane detection pipeline defined above on each frame.
    result = detect_lane_lines(image, 'video_frame')
    return result
# Let's try the one with the solid white lane on the right first ...
white_output = 'data/test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("data/test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("data/test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
# Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'data/test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('data/test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('data/test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
# %time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# ## Optional Challenge
#
# Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'data/test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('data/test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('data/test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
# %time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
| lane_line_finder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
from keras.layers import Input, Dense, Activation
import matplotlib.pyplot as plt
from keras.models import Model
# +
# tag list
tag_file=pd.read_csv("tags.csv")
tag=tag_file["tagID"]
tag=list(tag)
# user_code
user_tag_file=pd.read_csv("user_tags.csv")
company_tag_file=pd.read_csv("job_tags.csv")
# build user_code (multi-hot tag vector per user)
list_of_user=list(set(user_tag_file["userID"]))
user_code={}
for user in list_of_user:
user_code[user]=[0 for i in range(887)]
sizeOfUserTag=len(user_tag_file)
for i in range(sizeOfUserTag):
tag_id_of_user=user_tag_file["tagID"][i]
index_of_tag_user=tag.index(tag_id_of_user)
user_code[user_tag_file["userID"][i]][index_of_tag_user]=1
# build company_code (multi-hot tag vector per job)
list_of_company=list(set(company_tag_file["jobID"]))
company_code={}
for company in list_of_company:
company_code[company]=[0 for i in range(887)]
sizeOfCompanyTag=len(company_tag_file)
for i in range(sizeOfCompanyTag):
tag_id_of_company=company_tag_file["tagID"][i]
index_of_tag_company=tag.index(tag_id_of_company)
company_code[company_tag_file["jobID"][i]][index_of_tag_company]=1
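# The loops above build the multi-hot tag vectors by hand. Assuming the same
# (ID, tagID) pair layout, an equivalent and typically faster construction uses
# `pd.crosstab` — sketched here on toy data rather than the real CSVs:

```python
import pandas as pd

# Toy stand-in for user_tags.csv rows: (userID, tagID) pairs
demo = pd.DataFrame({"userID": [1, 1, 2], "tagID": [10, 30, 20]})
all_tags = [10, 20, 30]  # stand-in for the full tag list

# One row per user, one column per tag, 1 where the pair exists
multi_hot = (pd.crosstab(demo["userID"], demo["tagID"])
               .reindex(columns=all_tags, fill_value=0)
               .clip(upper=1))
user_code_demo = {u: row.tolist() for u, row in multi_hot.iterrows()}
```

# `reindex` guarantees a column for every tag even when a tag never occurs, which
# matches the fixed-length 887-element vectors built above.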
# +
# build X_test
train_file=pd.read_csv("test_job.csv")
size_input=len(train_file)
X_test=[]
for i in range(size_input):
userL=user_code[train_file["userID"][i]]
companyL=company_code[train_file["jobID"][i]]
tempL=userL+companyL
X_test.append(tempL)
X_test=np.array(X_test)
# -
# load the trained model
from tensorflow.python.keras.models import load_model
model = load_model("donghyun_model.h5")
# +
# build X_test from the test set
test_file=pd.read_csv("test_job.csv")
size_input_test=len(test_file)
X_test=[]
for i in range(size_input_test):
    userL=user_code[test_file["userID"][i]]
    companyL=company_code[test_file["jobID"][i]]
    tempL=userL+companyL
X_test.append(tempL)
X_test=np.array(X_test)
# -
y_prob=model.predict(X_test)
y_class=[]
for i in range(len(y_prob)):
if y_prob[i][0]>=0.5:
y_class.append(1)
else:
y_class.append(0)
df=pd.DataFrame(data=y_class,columns=['applied'])
df.to_csv("result.csv",mode='w',index=False)
| .ipynb_checkpoints/test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# +
# page 1
# -
# * 0 # rule-based resolution failed
# * 1 # extracted and correctly geocoded
# * 2 # incorrectly geocoded
# * 3 # no gazetteer entry
# * 4 # spacy failed to extract
# * 5 # not relevant point filtered out
# * 6 # dbpedia timeout
# * 7 # duplicate gazetteer entry with different coordinates
page1_geolocations = ['Blueberry Hill',
'Will Rogers State Historic Park',
'Oatman',
'Meteor Crater',
'Chicago',
'Lincoln',
'Big Piney River',
'America',
'Tacoma',
'Palo Duro Canyon State Park',
'Amarillo',
'Los Angeles',
'Santa Monica',
'Kingman',
'Lebanon',
'El Morro National Monument',
'Acoma',
'Palisades Park',
'St. Louis',
'Stroud',
'San Bernardino',
'Rialto',
'Pacific',
'Needles',
'Springfield',
'Seligman',
'Colorado River',
'Route 66',
'Palo Duro Canyon',
'Loop',
'Tucumcari',
'Chain of Rocks Bridge',
'Route 66 Museum',
'Hackberry General Store',
'Ozarks',
'Shirley',
'Oklahoma City',
'Clinton',
'Grants',
'Funks Grove',
"Devil's Elbow",
'Tulsa',
'Jack Smith Park',
'Woody Guthrie Center',
'Cozy Dog Drive In',
'Delmar Loop',
'Lone Star State',
'Cadillac Ranch',
'Oklahoma Route 66 Museum',
'Munger Moss Motel',
'Blue Swallow Motel',
'Wigwam Motel',
'Rock Cafe',
'Oatman Hotel',
'<NAME>',
'El Garces',
'Amtrak',
"Cain's Ballroom"]
page1 = {
"Route 66": 1,
"Chicago": 1,
"Santa Monica": 1,
"America": 2, # incorrectly geocoded
"Tulsa": 1,
"Cozy Dogs": 3, # no gazetteer entry
"Springfield": 1,
"Illinois": 1,
"Cozy Dog Drive In": 1,
"Funks Grove Pure Maple Sirup": 4, # spacy failed to extract
"Shirley": 1,
"Illinois Funks": 3, # no gazetteer entry
"Chain of Rocks Bridge": 1,
"Mississippi": 2,
"Missouri": 1,
"St. Louis": 1,
"Delmar Loop": 1,
"Blueberry Hill":1,
"<NAME>": 1,
"Missouri": 1,
"Munger Moss Motel": 1,
"Lebanon": 1,
"Elbow Inn Bar": 3,
"Big Piney River": 1,
"Ozarks": 1,
"Tulsa": 1,
"Cain's Ballroom": 1,
"Woody Guthrie Center": 1,
"Rock Cafe": 1,
"Stroud": 1,
"Oklahoma": 1,
"Oklahoma City": 1,
"Route 66 Museum": 1,
"Clinton": 1,
"Jiggs Smokehouse": 3,
"Palo Duro Canyon State Park": 1,
"Texas": 1,
"Amarillo": 1,
"Cadillac Ranch": 1,
"Palo Duro Canyon": 1,
"Tucumcari": 1,
"New Mexico": 1,
"Blue Swallow Motel": 1,
"Odeon Theatre": 3,
"El Morro National Monument": 1,
"Acoma": 1,
"Meteor Crater": 1,
"Arizona": 1,
"Kingman": 1,
"Hackberry General Store": 1,
"Oatman": 1,
"Oatman Hotel": 1,
"Needles": 2, #
"California": 1,
"Colorado River": 0, # rule based failed
"Jack Smith Park": 1,
"El Garces": 1,
"Wigwam Motel": 23, # no gazetteer entry in the region mentioned
"Rialto": 1,
"San Bernardino": 1,
"Will Rogers State Historic Park": 1,
"Los Angeles": 1,
"Santa Monica": 1,
"Palisades Park": 7
}
np.unique(np.array(list(page1.values())), return_counts=True)
sum([1 for i in page1.values() if i == 1]) / len(page1.values())
sum([1 for i in page1.values() if i == 3]) / len(page1.values())
page2_geolocations = ['Santa Ynez',
'Santa Barbara',
'Southern California',
'Orange County',
'Long Beach',
'Crystal Cove State Park',
'Pismo',
'Los Trancos Creek',
'Huntington Beach',
'Newport Beach',
'Pismo Beach',
'Long Beach Convention and Entertainment Center',
'Pacific',
'Harbor House Inn',
'Ripley',
'Avila Beach',
'Mendenhall',
'El Paseo',
'Arroyo Grande',
'Hearst Castle',
'Trout Creek Trail',
'Lompoc',
'State Street',
'Buellton',
'Aquarium of the Pacific',
'Vandenberg Air Force Base',
'Pea Soup Andersen',
'Ed<NAME>',
'Dunes Center',
'Huntington Pier',
'La Purisima Mission',
'Guadalupe-<NAME>',
'SoCal',
'International Surfing Museum']
page2 = {
"Southern California": 1,
"Pismo Beach": 1,
"Pismo": 1,
"Guadalupe-Nipomo Dunes": 1,
"Dunes Center": 1,
"Guadalupe": 3,
"Avila Beach": 1,
"Trout Creek Trail": 1, # google doesn't know this place, :=))
"Ar<NAME>": 1,
"Edna Valley": 1, # google doens't know this place
"Vandenberg Air Force Base": 1,
"Lompoc": 1,
"La Purisima Mission": 1,
"California": 1,
"Buellton": 1,
"Pea Soup Andersen": 1,
"Hearst Castle": 1,
"Mendenhall’s Museum of Gas Pumps and Petroliana": 4, # spacy
"OstrichLand USA": 3,
"Santa Ynez": 1,
"Santa Barbara": 1,
"State Street": 1,
"El Paseo": 1,
"Harbor House Inn": 1,
"Huntington Beach": 1,
"Huntington Pier": 1,
"International Surfing Museum": 1,
"Aquarium of the Pacific": 1,
"Long Beach": 1,
"Long Beach Convention and Entertainment Center": 1,
"Crystal Cove State Park": 1,
"Orange County": 1,
"Crystal Cove Beach Cottages": 3,
"Los Trancos Creek": 1
}
np.unique(np.array(list(page2.values())), return_counts=True)
sum([1 for i in page2.values() if i == 1]) / len(page2.values())
sum([1 for i in page2.values() if i == 3]) / len(page2.values())
page3_geolocations = ['Crescent Rock',
'Piedmont',
'Baymont Inn & Suites',
'Big Meadows Lodge',
'Charlies Bunion',
'Clingmans Dome',
'Gateway Airport',
'Great Smoky Mountains National Park',
'Mammoth Cave',
'Gatlinburg',
'Calvary Rocks',
'Hawksbill Mountain',
'LeConte Lodge',
'Green River',
'Smoky Mountains',
'Rapidan Camp',
'George Washington Memorial Parkway',
'Cave City',
'Skyline Drive',
'Shenandoah Valley',
'Shenandoah River',
'Great Smoky Mountains',
'Cades Cove',
'Mount Le Conte',
'Blue Ridge Parkway',
'Grotto Falls',
'Chimney Tops',
'Skyland',
'Deep Creek Trail',
'Catoosa Wildlife Management Area',
'Union',
'Compton Peak',
'Arlington',
'Shenandoah National Park',
'Skyland Resort',
'Ronald Reagan Washington National Airport',
'Cataloochee',
'Shenandoah',
'Mammoth Cave National Park',
'Newfound Gap',
'Manassas National Battlefield Park',
'Cedar Sink',
'Buckhorn Inn',
'Trillium Gap Trail',
'Frozen Niagara',
'I-64W',
'Pollock Dining Room',
'Indian Creek Falls',
'Blackrock Summit',
'Tuscarora-Overall Run Trail',
'Cataloochee Valley',
'Cades Cove Campground Store',
'Mammoth Cave Hotel',
'Crystal Lake Café',
'Appalachian Trail']
page3 = {
"Ronald Reagan Washington National Airport": 1,
"Arlington": 1,
"Virginia": 1,
"Shenandoah": 2,
"Great Smoky Mountains": 1,
"Mammoth Cave": 1,
"Shenandoah National Park": 1,
"Skyline Drive": 1,
"Shenandoah National Park": 1,
"Piedmont": 1,
"Shenandoah Valley": 1,
"George Washington Memorial Parkway": 1,
"Manassas National Battlefield Park": 1,
"Skyland Resort": 1,
"Shenandoah River": 1,
"Shenandoah Valley": 1,
"Pollock Dining Room": 1,
"Big Meadows Lodge": 1,
"Spottswood Dining Room": 3,
"Crescent Rock": 1,
"Hawksbill Mountain": 2,
"Rapidan Camp": 1,
"Blackrock Summit": 1,
"Compton Peak": 1,
"Overall Run Falls": 3,
"Calvary Rocks": 1,
"Great Smoky Mountains National Park": 1,
"Smoky Mountains": 1,
"Blue Ridge Parkway": 1,
"LeConte Lodge": 1,
"Buckhorn Inn": 3,
"Baymont Inn & Suites": 3,
"Cades Cove Campground Store": 1,
"Cades Cove": 1,
"Clingmans Dome": 1,
"Grotto Falls": 1,
"Deep Creek Trail": 1,
"Newfound Gap": 1,
"Charlies Bunion": 1,
"Chimney Tops": 1,
"Mount Le Conte": 1,
"Cataloochee Valley": 1,
"Mammoth Cave National Park": 1,
"Catoosa Wildlife Management Area": 1,
"Mammoth Cave Hotel": 1,
"Crystal Lake Café": 3,
"Watermill Restaurant": 3,
"Green River": 1,
"Cedar Sink": 1,
"Frozen Niagara": 1
}
np.unique(np.array(list(page3.values())), return_counts=True)
sum([1 for i in page3.values() if i == 1]) / len(page3.values())
sum([1 for i in page3.values() if i == 3]) / len(page3.values())
page4_geolocations = ['Kings Canyon Scenic Byway',
'Wawona',
'Yosemite',
'Furnace Creek',
'Kings River',
'Scotty’s Castle',
'Glacier Point',
'Vernal',
'Kings Canyon National Park',
'Sequoia Park',
'Sierra',
'Taft Point',
'Stovepipe Wells',
'General Grant Tree',
'Yosemite National Park',
'Furnace Creek Ranch',
'Grizzly Giant',
'Death Valley National Park',
'Badwater',
'Death Valley',
'Lower Yosemite Fall',
'Badger Pass',
'Las Vegas',
'Ubehebe Crater',
'Dante’s View',
'Half Dome',
'Hume Lake',
'Manzanar National Historic Site',
'Alabama Hills',
'Mariposa Grove',
'Lodgepole',
'Olmsted Point',
'Wildrose Peak',
'Vernal Fall',
'Furnace Creek Inn',
'Mirror Lake',
'Ash Mountain',
'Sentinel Dome',
'General Sherman Tree',
'Merced River',
'Yosemite Valley',
'El Capitan',
'Coarsegold',
'Crystal Cave',
'Lone Pine',
'Mist Trail',
'Sequoia',
'Zabriskie Point',
'Yosemite Falls',
'Tunnel Log',
'Death Valley Junction',
'Mono Lake',
'Mount Whitney',
'Nevada Falls',
'Moro Rock',
'Sierra Nevada',
'Kings Canyon',
'Swinging Bridge',
'Giant Forest',
'Golden Canyon',
'Glacier Point Road',
'Artists Drive',
'The Ahwahnee Hotel',
'Wildrose Charcoal Kilns',
'Grants Grove',
'Underground Tour',
'Wuksachi Lodge',
'McCarran International',
'Furnace Creek Resort',
'Tioga Pass Road',
'Peaks Restaurant',
'North Grove Loop',
'Crescent Meadow Road']
page4 = {
"McCarran International": 1,
"Las Vegas": 1,
"Nevada": 1,
"California": 1,
"Sierra Nevada": 1,
"Death Valley": 7,
"Mount Whitney": 1,
"Sequoia": 2,
"Yosemite Falls": 1,
"Yosemite": 1,
"Death Valley National Park": 1,
"Death Valley Junction": 1,
"Amargosa Opera House and Hotel": 3,
"Furnace Creek Resort": 1,
"Furnace Creek Inn": 1,
"Badwater": 1,
"Scotty’s Castle": 1,
"Stovepipe Wells": 1,
"Zabriskie Point": 1,
"Furnace Creek": 1,
"Golden Canyon": 2, # several results in one region,
"Red Cathedral": 3, # hiking area not known to gazetteer
"Wildrose Charcoal Kilns": 1,
"Wildrose Peak": 1,
"Ubehebe Crater": 1,
"Yosemite National Park": 1,
"Merced River": 1,
"Alabama Hills": 1,
"Lone Pine": 1,
"Manzanar National Historic Site": 1,
"Mono Lake": 1,
"Ahwahnee Hotel": 4,
"Lower Yosemite Fall": 1,
"Yosemite Valley": 1,
"Half Dome": 1,
"Olmsted Point": 1,
"Mirror Lake": 3,
"Mariposa Grove": 1,
"Wawona": 1,
"Grizzly Giant": 1,
"Glacier Point": 1,
"Taft Point": 1,
"Sentinel Dome": 1,
"General Sherman Tree": 1,
"Coarsegold": 1,
"Wuksachi Lodge": 1,
"Wolverton Meadow": 2, # used different name than in gazetteer, str similarity failed
"Giant Forest": 1,
"Crystal Cave": 1,
"Congress Trail": 4,
"Kings Canyon": 1,
"North Grove Loop": 3,
"General Grant Tree": 1,
"Kings Canyon Scenic Byway": 1,
"Hume Lake": 1,
"Kings River": 1,
"Moro Rock": 1,
"Ash Mountain": 3,
"Tunnel Log": 1,
"Crescent Meadow Road": 1
}
np.unique(np.array(list(page4.values())), return_counts=True)
sum([1 for i in page4.values() if i == 1]) / len(page4.values())
sum([1 for i in page4.values() if i == 3]) / len(page4.values())
page5_geolocations = ['Westcliffe',
'Zapata Ranch',
'Paonia',
'Valley View Hot Springs',
'Grand Junction',
'La Veta',
'Zapata Falls',
'Florence',
'North Fork Valley',
'Black Canyon',
'Baca National Wildlife Refuge',
'Gunnison River',
'South Rim',
'Lizard',
'Ragged Mountains',
'Colorado Springs',
'Moab',
'Milky Way',
'Spanish Peaks',
'Paonia Reservoir',
'San Luis Valley',
'Wild West',
'Great Sand Dunes National Park',
'Crestone',
'Colorado National Monument',
'Midwest',
'Bluff Park',
'Willow Lake',
'Silver Cliff',
'Saddlehorn Campground',
'Book Cliffs View',
'South Rim Campground',
'Piñon Flats',
'Great Sand Dunes',
'The Crestone Peaks',
'Paint Mines Interpretive Park',
'Ziggurat',
'Ryus Avenue Bakery',
'Southwest Colorado',
'Colorado FIVE',
'Paonia State Park',
'Black Canyon of',
'Bross Hotel',
'Smokey Jack Observatory',
'American Viticulture Area',
'Gunnison National Park']
page5 = {
"Colorado": 1,
"Paint Mines Interpretive Park": 1,
"Colorado Springs": 1,
"Westcliffe": 1,
"Florence": 1,
"Turmeric Indian & Nepalese Restaurant": 3,
"Silver Cliff": 1,
"Smokey Jack Observatory": 1,
"La Veta": 1,
"Spanish Peaks": 1,
"Wahatoya State Wildlife Area": 3,
"Ryus Avenue Bakery": 1,
"Great Sand Dunes National Park": 1,
"Great Sand Dunes": 2,
"Piñon Flats": 3,
"Zapata Ranch": 1,
"Zapata Falls": 1,
"The Crestone Peaks": 1,
"Crestone": 1,
"Crestone Ziggurat": 3,
"San Luis Valley": 1,
"Valley View Hot Springs": 1,
"Crestone Brewing Company": 3,
"Baca National Wildlife Refuge": 1,
"Willow Lake": 3,
"Black Canyon of the Gunnison National Park": 4,
"Gunnison River": 1,
"Paonia": 1,
"North Fork Valley": 3,
"Bross Hotel": 1,
"Paonia State Park": 1,
"Ragged Mountains": 3,
"Paonia Reservoir": 1,
"Colorado National Monument": 1,
"Bin 707": 2, # might be that unify lookup results missed it place on some exception
"Wedding Canyons": 3,
"Book Cliffs View": 1,
"Saddlehorn Campground": 1
}
sum([1 for i in page5.values() if i == 1]) / len(page5.values())
sum([1 for i in page5.values() if i == 3]) / len(page5.values())
page6_geolocations = ['Ruby Falls',
'Craighead',
'Craighead Caverns',
'Notchy Creek',
'Chihuahuan Desert',
'Campground',
'Cave Spring Hollow',
'Chattanooga',
'Nickajack Lake',
'Dunbar Cave State Park',
'Caverns',
'Earth',
'St. Louis',
'Red River Campground',
'Spring Creek Campground',
'Cherokee National Forest',
'Mexico',
'Lookout Mountain',
'Raccoon Mountain Caverns',
'Cumberland Caverns’',
'Cumberland Caverns',
'National Register of Historic Places',
'The Caverns',
'Bell Witch Cave']
page6 = {
"Tennessee": 1,
"Chattanooga": 1,
"Craighead Caverns": 1,
"KOA": 3,
"Notchy Creek": 1,
"Cherokee National Forest": 1,
"Craighead": 2, # coreference resolution would help in this case
"<NAME>": 3,
"Dunbar Cave State Park": 1,
"Spring Creek Campground": 3,
"Red River Campground": 3,
"C<NAME>ollow": 1,
"<NAME> Distillery": 3,
"Cumberland Caverns": 1,
"The Caverns": 3,
"<NAME>": 1,
"Lookout Mountain": 1,
"<NAME>": 3, # gazetteer query with placetype=place, wikipedia page doesn't have any typename,
# therefore dbpedia returned nothing relevant
"<NAME>": 1,
"Raccoon Mountain Caverns and Campground": 4,
}
sum([1 for i in page6.values() if i == 1]) / len(page6.values())
sum([1 for i in page6.values() if i == 3]) / len(page6.values())
page7_geolocations = ['Cape Elizabeth',
'Main Street',
'Crescent Beach',
'New York City',
'Wolfe’s Neck Woods State Park',
'Topsham',
'Bath Iron Works',
'Freeport',
'Portland',
'Androscoggin River',
'Camden',
'Rockland',
'Paris',
'Kennebec River',
'Farnsworth Art Museum',
'Scarborough River',
'New England',
'Kennebunkport',
'Inn by the Sea',
'Brunswick',
'Maine Maritime Museum',
'Penobscot Bay',
'Damariscotta',
'Bath',
'Napa Valley',
'Damariscotta River',
'Urban Farm Fermentory',
'L.L. Bean',
'Sea Bags',
'Bath Freight Shed',
'Animal Refuge League of Portland',
'Pine Tree State',
'Oyster Trail',
'U.S. Navy',
'Androscoggin Swinging Bridge']
# * 1 # extracted and correctly geocoded
# * 2 # incorrectly geocoded
# * 3 # no gazetteer entry
# * 4 # spacy failed to extract
# * 5 # not relevant point filtered out
# * 6 # dbpedia timeout
# * 7 # duplicate gazetteer entry with different coordinates
page7 = {
"Maine": 1,
"Pine Tree State": 3,
"Kennebunkport": 1,
"Mabel's Lobster Claw Restaurant": 3,
"Sandy Pines Campground": 3,
"Scarborough River": 1,
"Inn by the Sea": 1,
"Cape Elizabeth": 1,
"Crescent Beach": 1,
"Portland": 1,
"Urban Farm Fermentory": 3,
"Freeport": 1,
"Wolfe’s Neck Woods State Park": 1,
"Brunswick’s Fort Andross Mill": 4,
"Androscoggin River": 1,
"Topsham": 1,
"Androscoggin Riverwalk": 2, # name by which gazetteer knows this place differed significantly
"Kennebec River": 1,
"Maine Maritime Museum": 1,
"Damariscotta River": 1,
"Rockland": 1,
"Farnsworth Art Museum": 1,
"Penobscot Bay": 1,
"Camden": 1,
"Sea Bags": 3 # 4 outlets, gazetteers knows only one, which is in 160 range, but from article it is clear
# that author meant other outlet
}
sum([1 for i in page7.values() if i == 1]) / len(page7.values())
sum([1 for i in page7.values() if i == 3]) / len(page7.values())
page8_geolocations = ['Bear Mountain',
'Hudson',
'Monroe',
'Tappan Zee Bridge',
'Sleepy Hollow Cemetery',
'Lyndhurst',
'Greenwich Village',
'Nyack',
'Bear Mountain State Park',
'New Paltz',
'Montgomery Place',
'Hudson River',
'Hudson River Maritime Museum',
'Clermont State Historic Site',
'Kingston',
'Albany',
'Stony Point',
'Rhinebeck',
'Home of Franklin D. Roosevelt National Historic Site',
'West Point',
'Eleanor Roosevelt National Historic Site',
'United States Military Academy',
'Sleepy Hollow',
'Rondout',
'Hudson Valley',
'Goshen',
'Mohonk Lake',
'Newburgh',
'Storm King Mountain',
'America',
'Philipsburg Manor',
'Old Dutch Church',
'Tarrytown',
'Kingston-Rhinecliff Bridge',
'Churchill Downs',
'Kykuit',
'Harness Racing Museum',
'Hyde Park',
'Vanderbilt Mansion National Historic Site',
'Mohonk Mountain House',
'Springwood',
'Visitor Center',
'Old Kingston',
'Old Rhinebeck Aerodrome',
'U.S. 6',
'Bear Mountain Inn',
'NY 9A',
'Trolley Museum of New York',
'Staatsburgh State Historic Site',
'Beekman Arms',
'Culinary Institute of America',
'The Visitor Center',
'Val-Kill',
'National Register of Historic Places',
'Gunks',
'Goshen Historic Track',
'Mohonk',
'Shawangunks Ridge',
'Edward Hopper House Art Center',
'Stockade District',
'Continental Army']
page8 = {
"Tappan Zee Bridge": 1,
"Hudson River": 1,
"Hyde Park": 2,
"Nyack": 1,
"Hudson": 1,
"Kingston": 1,
"Rhinebeck": 1,
"Tarrytown": 1,
"West Point": 1,
"United States Military Academy": 1,
"Greenwich Village": 1,
"Edward Hopper House Art Center": 1,
"Stony Point": 1,
"Bear Mountain State Park": 1,
"Bear Mountain": 2, # several points in one state
"Ge<NAME> Memorial Drive": 3,
"Bear Mountain Inn": 1,
"Monroe": 1,
"NY 9A": 1,
"Museum Village": 3,
"Goshen": 1,
"Goshen Historic Track": 1,
"Harness Racing Museum": 1,
"Hall of Fame": 4,
"New Paltz": 1,
"Huguenot Street National Historic Landmark District": 3,
# gazetteer query with placetype=place, wikipedia page doesn't have any typename,
# therefore dbpedia returned nothing relevant,
"Shawangunks Ridge": 1,
"Mohonk Mountain House": 1,
"Mohonk Lake": 1,
"Kingston Urban Cultural Park": 3,
"Rondout": 1,
"Trolley Museum of New York": 1,
"Hudson River Maritime Museum": 1,
"Kingston-Rhinecliff Bridge": 1,
"Beekman Arms": 1,
"Old Rhinebeck Aerodrome": 1,
"Hudson River National Historic Landmark District": 2,
# name by which gazetteer knows this place differed significantly
"Clermont State Historic Site": 1,
"Montgomery Place": 1,
"Staatsburgh State Historic Site": 1,
"Vanderbilt Mansion National Historic Site": 1,
"Home of Franklin D. Roosevelt National Historic Site": 1,
"Eleanor Roosevelt National Historic Site": 1,
"Culinary Institute of America": 1,
"Newburgh": 1,
"Storm King Mountain": 1,
"Sleepy Hollow": 1,
"Philipsburg Manor": 1,
"Kykuit": 3,
"Hudson Valley": 1,
"Sleepy Hollow Cemetery": 1,
"Lyndhurst": 2
}
sum([1 for i in page8.values() if i == 1]) / len(page8.values()) * 100
sum([1 for i in page8.values() if i == 3]) / len(page8.values()) * 100
page9_geolocations = ['Chicago River',
'North Branch',
'Sunda',
'Lake Michigan',
'Chicago',
'Museum of Science and Industry',
'Bucktown',
'<NAME>',
'Pilsen',
'Wrigley Field',
'Lincoln Park',
'Kingston Mines',
'Ragstock',
'L<NAME>ati',
'Bari Italian Deli',
'Mana Food Bar',
'<NAME>',
'Second Chance Thrift']
page9 = {
"Chicago": 1,
"Chicago River": 1,
"Wrigley Field": 1,
"Pilsen": 1,
"Bucktown": 3,
"Brown Elephant": 3,
"Ragstock": 1,
"Second Chance Thrift": 3,
"Lou Malnati's": 1,
"Neo Futurists": 3, # Performing arts group
"Sound-Bar": 3,
"Lake Michigan": 1,
"<NAME>": 3,
"Sunda": 1,
"Mana Food Bar": 3,
"Irazu": 4,
"Kingston Mines": 3,
"Bari Italian Deli": 3,
"<NAME>": 1,
"Museum of Science and Industry": 1,
"<NAME>": 1,
"<NAME>": 1
}
sum([1 for i in page9.values() if i == 1]) / len(page9.values())
sum([1 for i in page9.values() if i == 3]) / len(page9.values())
page10_geolocations = ['Los Angeles',
'San Francisco',
'Carmel',
'South-of-the-Border',
'Stanford',
'Malibu',
'Palo Alto',
'Napa',
'Big Sur',
'Coupa Cafe',
'Santa Cruz',
'Silicon Valley',
'Clock Tower',
'Earth',
'Stanford University',
'Mountain View',
'Pacific Coast Highway',
'Santa Barbara',
'Philz Coffee',
'Palo Alto Creamery',
'Reposado',
'Stanford Memorial Church',
'Garden Court Hotel',
'Computer History Museum',
'Google']
# * 1 # extracted and correctly geocoded
# * 2 # incorrectly geocoded
# * 3 # no gazetteer entry
# * 4 # spacy failed to extract
# * 5 # not relevant point filtered out
# * 6 # dbpedia timeout
# * 7 # duplicate gazetteer entry with different coordinates
page10 = {
"California": 1,
"Los Angeles": 1,
"Santa Cruz": 1,
"Carmel": 2,
"Santa Barbara": 1,
"Malibu": 1,
"Pacific Coast Highway": 1,
"Silicon Valley": 1,
"Palo Alto": 1,
"Stanford University": 1,
"Garden Court Hotel": 1,
"Philz Coffee": 1,
"Coupa Cafe": 2,
"Stanford Memorial Church": 1,
"Clock Tower": 2,
"Google’s Mountain View campus": 4,
"Computer History Museum": 1,
"Palo Alto Junior Museum and Zoo": 4,
"Palo Alto Farmers Market": 3,
"Chocolate Garage": 3,
"Reposado": 1,
"Palo Alto Creamery": 1
}
sum([1 for i in page10.values() if i == 1]) / len(page10.values())
sum([1 for i in page10.values() if i == 3]) / len(page10.values())
results = dict()
page = 1
for i in [page1, page2, page3, page4, page5, page6, page7, page8, page9, page10]:
    results[page] = [sum([1 for k in i.values() if k == 1]) / len(i.values()),
                     sum([1 for k in i.values() if k == 3]) / len(i.values())]
    page += 1
sorted(results.items(), key=lambda x: x[1])
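Beyond the per-page sort above, a macro-average over pages is a natural overall summary. A sketch with made-up numbers, in the same `{page: [correct_fraction, missing_fraction]}` shape as `results`:

```python
from statistics import mean

# Hypothetical per-page results: {page: [fraction_correct, fraction_no_gazetteer_entry]}
results_example = {1: [0.78, 0.10], 2: [0.85, 0.12], 3: [0.82, 0.14]}

overall_correct = mean(v[0] for v in results_example.values())
overall_missing = mean(v[1] for v in results_example.values())
print(round(overall_correct, 3), round(overall_missing, 3))
```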
| src/Results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.4 64-bit
# metadata:
# interpreter:
# hash: 5c4d2f1fdcd3716c7a5eea90ad07be30193490dd4e63617705244f5fd89ea793
# name: python3
# ---
# # Data Science Bootcamp - The Bridge
# ## Pre-course
# In this notebook we will go through the basic concepts of Python one by one, as practical exercises accompanied by a theoretical explanation from the instructor.
#
# The following links are recommended for students to deepen and reinforce these concepts through exercises and examples:
#
# - https://facundoq.github.io/courses/aa2018/res/02_python.html
#
# - https://www.w3resource.com/python-exercises/
#
# - https://www.practicepython.org/
#
# - https://es.slideshare.net/egutierrezru/python-paraprincipiantes
#
# - https://www.sololearn.com/Play/Python#
#
# - https://github.com/mhkmcp/Python-Bootcamp-from-Basic-to-Advanced
#
# Advanced exercises:
#
# - https://github.com/darkprinx/100-plus-Python-programming-exercises-extended/tree/master/Status (++)
#
# - https://github.com/mahtab04/Python-Programming-Practice (++)
#
# - https://github.com/whojayantkumar/Python_Programs (+++)
#
# - https://www.w3resource.com/python-exercises/ (++++)
#
# - https://github.com/fupus/notebooks-ejercicios (+++++)
#
# PythonTutor, a helpful visualization tool:
#
# - http://pythontutor.com/
#
# ## 1. Variables and types
#
# ### Strings
# +
# Integer - int
x = 7
# String - a sequence of characters
x = "lorena"
print(x)
x = 7
print(x)
# -
# built-in
type
# +
x = 5
y = 7
z = x + y
print(z)
# +
x = "'lorena'\" "
l = 'silvia----'
g = x + l
# Strings are concatenated
print(g)
# -
print(g)
# type shows the type of the variable
type(g)
type(3)
# +
# print is a function that takes several arguments, separated by commas. After each comma, 'print' adds a space.
# bad practice
print( g,z , 6, "cadena")
# good practice - PEP8
print(g, z, 6, "cadena")
# +
u = "g"
silvia = "<NAME> "
anos = " años"
suma = silvia + u + anos
print(suma)
# +
n = 2
m = "3"
print(n + m)  # TypeError: cannot add an int and a str
# +
# Convert from int to str
j = 2
print(j)
print(type(j))
j = str(j)
print(j)
print(type(j))
# +
# Convert from int to str
j = 2
print(j)
print(type(j))
j = str(j) + " - " + silvia
print(j)
print(type(j))
# -
k = 22
k = str(k)
print(k)
# +
# Convert from str to int
lor = "98"
lor = int(lor)
print(lor)
# +
# To get the length of a string (a sequence of characters)
mn = "lista de caracteres$%·$% "
# length
print(len(mn))
# +
h = len(mn)
print(h + 7)
# -
h = 8
print(h)
# +
x = 2
print(x)
gabriel_vazquez = "<NAME>"
# +
print("Hello Python world!")
print("Nombre de compañero")
companero_clase = "Compañero123123"
print(companero_clase)
# -
print(companero_clase)
x = (2 + 4) + 7
print(x)
# +
# String, Integer, Float, List, None (NaN)
# str, int, float, list,
string_ = "23"
numero = 23
print(type(string_))
# -
print(string_)
numero2 = 10
suma = numero + numero2
print(suma)
string2 = "33"
suma2 = string_ + string2
print(suma2)
m = (numero2 + int(string2))
print(m)
m = ((((65 + int("22")) * 2)))
m
print(type(int(string2)))
y = 22
y = str(y)
print(type(y))
string2 = int(string2)
print(type(string2))
# +
string3 = "10"
numero_a_partir_de_string = int(string3)
print(numero_a_partir_de_string)
print(string3)
print(type(numero_a_partir_de_string))
print(type(string3))
# -
h = "2"
int(h)
# +
# Decimals are floats. Python allows operations between int and float
x = 4
y = 4.2
print(x + y)
# -
# Regular division (/) always returns a float
# Floor division (//) can be:
# - float if one (or both) of the numbers is a float
# - int if both are ints
j = 15
k = 4
division = j // k
print(division)
print(type(division))
# +
num1 = 12
num2 = 3
suma = num1 + num2
resta = num1 - num2
multiplicacion = num1 * num2
division = num1 / num2
division_absoluta = num1 // num2
gabriel_vazquez = "<NAME>"
print("suma:", suma)
print("resta:", resta)
print("multiplicacion:", multiplicacion)
print("division:", division)
print("division_absoluta:", division_absoluta)
print(type(division))
print(type(division_absoluta))
# -
print(x)
j = "2"
j
print(j)
# +
x = 2
j = 6
g = 4
h = "popeye"
# Jupyter notebooks automatically display the value of the last line of a cell (the variable)
print(g)
print(j)
print(h)
x
# -
int(5.6/2)
float(2)
g = int(5.6/2)
print(g)
5
# +
x = int(5.6//2)
x
# -
# I am a comment
# print("Hello Python world!")
# Creating a variable whose value is 2
"""
This is another comment
"""
print(x)
x = 25
x = 76
x = "1"
message2 = "One of Python's strengths is its diverse community."
print(message2)
# ## Exercise:
# ### Create a new cell.
# ### Declare three variables:
# - One named "edad" holding your age
# - Another, "edad_companero_der", holding the age (an integer) of the classmate to your right
# - Another, "suma_anterior", holding the sum of the two previously declared variables
#
# Display the variable "suma_anterior" on screen
#
#
edad = 99
edad_companero_der = 30
suma_anterior = edad_companero_der + edad
print(suma_anterior)
h = 89 + suma_anterior
h
edad = 18
edad_companero_der = 29
suma_anterior = edad + edad_companero_der
suma_anterior
i = "hola"
o = i.upper()
o
o.lower()
name = "<NAME>"
x = 2
print(name.upper())
print(name.lower())
print(name.upper)
print(name.upper())
x = 2
x = x + 1
x
x += 1
x
# int
x = 1
# float
y = 2.
# str
s = "string"
# type --> shows the type of the variable or value
print(type(x))
type(y)
type(s)
5 + 2
x = 2
x = x + 1
x += 1
# +
x = 2
y = 4
print(x, y, "Pepito", "Hola")
# -
s = "<NAME> Soraya:"
s + "789"
print(s, 98, 29, sep="")
print(s, 98, 29)
type( x )
2 + 6
# ## 2. Numbers and operators
# +
### Integers ###
x = 3
print("- Type of x:")
print(type(x))  # Prints the type (or `class`) of x
print("- Value of x:")
print(x)  # Print a value
print("- x+1:")
print(x + 1)  # Addition: prints "4"
print("- x-1:")
print(x - 1)  # Subtraction; prints "2"
print("- x*2:")
print(x * 2)  # Multiplication; prints "6"
print("- x^2:")
print(x ** 2)  # Exponentiation; prints "9"
# Modifying x
x += 1
print("- x modified:")
print(x)  # Prints "4"
x *= 2
print("- x modified:")
print(x)  # Prints "8"
print("- 40 modulo x:")
print(40 % x)
print("- Several things on one line:")
print(1, 2, x, 5*2)  # prints several things at once
# +
# The modulo operator returns the remainder of dividing two numbers
2 % 2
# -
3 % 2
4 % 5
# +
numero = 99
numero % 2  # If the remainder is 0 the number is even; otherwise it is odd.
# -
numero % 2
99 % 100
y = 2.5
print("- Tipo de y:")
print(type(y)) # Imprime el tipo de y
print("- Varios valores en punto flotante:")
print(y, y + 1, y * 2.5, y ** 2) # Imprime varios números en punto flotante
# ## Heading
#
# Write anything here
#
# 1. one
# 2. two
# # INPUT
# +
edad = input("Enter your age")
print("Diego is", edad, "years old")
# +
# input reads a text entry of type String
num1 = int(input("Enter the first number"))
num2 = int(input("Enter the second number"))
print(num1 + num2)
# -
# ## 3. The None type
x = None
n = 5
s = "Cadena"
print(x + s)  # TypeError: None cannot be concatenated with a str
# ## 4. Lists and collections
# +
# A list of elements:
# Positions are counted starting from 0
s = "Cadena"
primer_elemento = s[0]
#ultimo_elemento = s[5]
ultimo_elemento = s[-1]
print(primer_elemento)
print(ultimo_elemento)
# -
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
bicycles[0]
tamano_lista = len(bicycles)
tamano_lista
ultimo_elemento_por_posicion = tamano_lista - 1
bicycles[ultimo_elemento_por_posicion]
# +
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
message = "My first bicycle was a " + bicycles[0]
print(bicycles)
print(message)
# -
print(type(bicycles))
s = "String"
s.lower()
print(s.lower())
print(s)
s = s.lower()
s
# +
# There are two kinds of methods:
# 1. Those that modify the value in place, without requiring a reassignment to the variable
# 2. Those that only return the result and do not modify the variable's value. We must reassign explicitly if we want to change the variable.
cars = ['bmw', 'audi', 'toyota', 'subaru']
print(cars)
cars.reverse()
print(cars)
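The distinction above — methods that mutate in place versus ones that return a new value — can be seen by contrasting `sorted()` (returns a new list) with `list.sort()` (modifies the list in place and returns None):

```python
nums = [3, 1, 2]

# sorted() returns a new list; nums is unchanged
print(sorted(nums))  # [1, 2, 3]
print(nums)          # [3, 1, 2]

# list.sort() modifies nums in place and returns None
print(nums.sort())   # None
print(nums)          # [1, 2, 3]
```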
# +
cars = ['bmw']
print(cars)
cars.reverse()
print(cars)
# +
s = "Hola soy Clara"
print(s[::-1])
# -
s
l = "hola"
len(l)
l[3]
# +
# To access several elements, use the "[N:M]" notation. N is the first element to take; M is one past the last element taken. Example:
# To show positions 3 through 7, specify: [3:8]
# If M is omitted, you get everything from N to the end.
# If N is omitted, you get everything from the start of the collection up to M
s[3:len(s)]
# -
s[:3]
s[3:10]
# +
motorcycles = ['honda', 'yamaha', 'suzuki', 'ducati']
print(motorcycles)
too_expensive = 'ducati'
motorcycles.remove(too_expensive)
print(motorcycles)
print(too_expensive + " is too expensive for me.")
# -
# Appends a value at the end of the list
motorcycles.append("ducati")
motorcycles
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'ducati']
lista[3]
lista.remove(8.9)
lista
lista
l = lista[1]
l
lista.remove(l)
lista
lista.remove(lista[2])
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista
# remove deletes the first element found that matches the argument's value
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista.remove("honda")
lista
# Access position 1 of the element located at position 2 of lista
lista[2][1]
lista[3][2]
p = lista.remove("honda")
print(p)
l = [2, 4, 6, 8]
l.reverse()
l
# ### Collections
#
# 1. Lists
# 2. Strings (collections of characters)
# 3. Tuples
# 4. Sets
# Lists --> mutable
lista = [2, 5, "caract", [9, "g", ["j"]]]
print(lista[-1][-1][-1])
lista[3][2][0]
lista.append("ultimo")
lista
# +
# Tuples --> immutable
tupla = (2, 5, "caract", [9, "g", ["j"]])
tupla
# -
s = "String"
s[2]
tupla[3].remove(9)
tupla
tupla[3][1].remove("j")
tupla
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove(["j"])
tupla2
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove("g")
tupla2
tupla2[-1].remove(["j"])
tupla2
if False == 0:
    print(0)
print(type(lista))
print(type(tupla))
# Updating lists
lista = [2, "6", ["k", "m"]]
lista[1] = 1
lista
lista = [2, "6", ["k", "m"]]
lista[2] = 0
lista
tupla = (2, "6", ["k", "m"])
tupla[2] = 0  # TypeError: 'tuple' object does not support item assignment
tupla
tupla = (2, "6", ["k", "m"])
tupla[2][1] = 0
tupla
# +
# Sets
conjunto = [2, 4, 6, "a", "z", "h", 2]
conjunto = set(conjunto)
conjunto
# -
conjunto = ["a", "z", "h", 2, 2, 4, 6, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto = ["a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto_tupla = ("a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False)
conjunto = set(conjunto_tupla)
conjunto
conjunto = {"a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False}
conjunto
s = "String"
lista_s = list(s)
lista_s
s = "String"
conj = {s}
conj
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = (((((tupla)))))
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
conjunto = {2, 5, "h"}
lista_con_conjunto = [conjunto]
lista_con_conjunto[0]
# We cannot index into the elements of a set
lista_con_conjunto[0][0]
lista = [1, 5, 6, True, 6]
set(lista)
lista = [True, 5, 6, 1, 6]
set(lista)
1 == True
# ## 5. Conditionals: if, elif, else
# ### Booleans
True
False
# Comparison operators
x = (1 == 1)
x
1 == 2
"a" == "a"
# Not equal
"a" != "a"
2 > 4
4 > 2
4 >= 4
4 > 4
4 <= 5
4 < 3
input()
1 == 1
"""
== -> Igualdad
!= -> Diferecia
< -> Menor que
> -> Mayor que
<= -> Menor o igual
>= -> Mayor o igual
"""
# and returns True only if ALL operands are True
True and False
# +
# or returns True if at least ONE operand is True
(1 == 1) or (1 == 2)
# -
(1 == 1) or (1 == 2) and (1 == 1)
(1 == 1) or ((1 == 2) and (1 == 1))
(1 == 1) and (1 == 2) and (1 == 1)
(1 == 1) and (1 == 2) and ((1 == 1) or (0 == 0))
# True and False and True
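The mixed expressions above follow operator precedence: `and` binds more tightly than `or`, so `a or b and c` is parsed as `a or (b and c)`. A small sketch:

```python
a, b, c = True, False, False

print(a or b and c)    # True: parsed as a or (b and c)
print((a or b) and c)  # False: explicit parentheses change the result
```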
print("Yo soy\n Gabriel")
if 1 != 1:
    print("They are equal")
else:
    print("Did not enter the if")
if 1 == 1:
    print("They are equal")
else:
    print("Did not enter the if")
if 1 > 3:
    print("It is greater")
elif 2 == 2:
    print("It is equal")
elif 3 == 3:
    print("It is equal 2")
else:
    print("None of the above")
if 2 > 3:
    print(1)
else:
    print("First else")
if 2 == 2:
    print(2)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
if 3 == 3:
    print(" 3 is 3 ")
if 2 == 2:
    print(2)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
# -------
if 3 == 4:
    print(" 3 is 3 ")
# --------
if 2 == 2:
    print(2)
    print(5)
    x = 6
    print(x)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
# -------
if 3 == 4:
    print(" 3 is 3 ")
# --------
if 2 == 2:
    print(2)
    print(5)
    x = 6
    print(x)
    # ------
    if x == 7:
        print("X is equal to 6")
    # ------
    y = 7
    print(y)
else:
    print("Second else")
if not (1 == 1):
    print("Hello")
if not None:
print(1)
if "a":
print(2)
if 0:
print(0)
# +
# Falsy values in conditions:
# None
# False
# 0 (int or float)
# Any empty collection --> [], "", (), {}
# None does not behave like a number when compared to another number
"""
True is treated as the numeric value 1
"""
# +
lista = []
if lista:
print(4)
# +
lista = ["1"]
if lista:
print(4)
# -
if 0.0:
print(2)
if [] or False or 0 or None:
print(4)
if [] and False or 0 or None:
print(4)
if not ([] or False or 0 or None):
print(4)
if [] or False or not 0 or None:
print(True)
else:
print(False)
# +
x = True
y = False
x + y
# +
x = True
y = False
str(x) + str(y)
# -
#
# +
# Note: the function must accept 'y' as a parameter, otherwise calling it with y=4 raises a TypeError
def funcion_condicion(y):
    if y > 4:
        print("It is greater than 4")
    else:
        print("It is not greater than 4")
funcion_condicion(y=4)
# +
def funcion_primera(x):
    if x == 4:
        print("It is equal to 4")
    else:
        funcion_condicion(y=x)
funcion_primera(x=5)
# +
def funcion_final(apellido):
    if len(apellido) > 5:
        print("Meets the condition")
    else:
        print("Does not meet the condition")
funcion_final(apellido="Vazquez")
# -
#
# # For and While Loops
# +
lista = [1, "dos", 3, "Pepito"]
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
# -
for p in lista:
print(p)
# +
altura1 = [1.78, 1.63, 1.75, 1.68]
altura2 = [2.00, 1.82]
altura3 = [1.65, 1.73, 1.75]
altura4 = [1.72, 1.71, 1.71, 1.62]
lista_alturas = [altura1, altura2, altura3, altura4]
print(lista_alturas[0][1])
for x in lista_alturas:
print(x[3])
# +
def mostrar_cada_elemento_de_lista(lista):
for x in lista:
print(x)
mostrar_cada_elemento_de_lista(lista=lista_alturas)
# -
mostrar_cada_elemento_de_lista(lista=lista)
for x in lista_alturas:
if len(x) > 2:
print(x[2])
else:
print(x[1])
l = ["Ana", "Dorado", ["m", 2], 7]
| week1/day5/theory/Python_III_Precurse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Given a binary tree, determine if it is a valid binary search tree (BST).
#
# Assume a BST is defined as follows:
#
# The left subtree of a node contains only nodes with keys less than the node's key.
# The right subtree of a node contains only nodes with keys greater than the node's key.
# Both the left and right subtrees must also be binary search trees.
# Example 1:
#
# Input:
# 2
# / \
# 1 3
# Output: true
# Example 2:
#
# 5
# / \
# 1 4
# / \
# 3 6
# Output: false
# Explanation: The input is: [5,1,4,null,null,3,6]. The root node's value
# is 5 but its right child's value is 4.
#
# +
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution1(object):  # post-order traversal
    # Bottom-up validation of a binary search tree.
    # The same idea can be used for 333. Largest BST Subtree
def isValidBST(self, root):
"""
:type root: TreeNode
:rtype: bool
"""
        return self.validBSTPostOrderHelper(root)[0]
# The return result stores [True/False, max_so_far, min_so_far]
def validBSTPostOrderHelper(self, root):
if not root:
return (True, float('-inf'), float('inf'))
left_res = self.validBSTPostOrderHelper(root.left)
right_res = self.validBSTPostOrderHelper(root.right)
if not left_res[0] or not right_res[0] or root.val <= left_res[1] or root.val >= right_res[2]:
return (False, 0, 0)
        return (True, max(root.val, right_res[1]), min(root.val, left_res[2]))
class Solution2(object): # inorder traversal resulting in a sorted array
def isValidBST(self, root):
"""
:type root: TreeNode
:rtype: bool
"""
stack = []
pre = None
while root or stack:
while root:
stack.append(root)
root = root.left
root = stack.pop()
            if pre is not None and pre.val >= root.val:
return False
pre = root
root = root.right
return True
class Solution(object):  # recursion
    def isValidBSTRecursion(self, root, low, high):
        if root is None:
            return True
        return (low < root.val < high
                and self.isValidBSTRecursion(root.left, low, root.val)
                and self.isValidBSTRecursion(root.right, root.val, high))
    def isValidBST(self, root):
        """
        :type root: TreeNode
        :rtype: bool
        """
        return self.isValidBSTRecursion(root, float("-inf"), float("inf"))
# -
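# A minimal self-contained sanity check of the recursive approach above, using a standalone `is_valid_bst` helper that mirrors the Solution class (a sketch, so it runs on its own):

```python
class TreeNode(object):
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

def is_valid_bst(root, low=float("-inf"), high=float("inf")):
    # Every node must fall strictly inside the (low, high) bounds
    if root is None:
        return True
    return (low < root.val < high
            and is_valid_bst(root.left, low, root.val)
            and is_valid_bst(root.right, root.val, high))

# Example 1: [2,1,3] is a valid BST
root1 = TreeNode(2)
root1.left, root1.right = TreeNode(1), TreeNode(3)

# Example 2: [5,1,4,null,null,3,6] is not -- 3 < 5 sits in the right subtree of 5
root2 = TreeNode(5)
root2.left, root2.right = TreeNode(1), TreeNode(4)
root2.right.left, root2.right.right = TreeNode(3), TreeNode(6)

print(is_valid_bst(root1))  # True
print(is_valid_bst(root2))  # False
```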
| Python/ValidateBinarySearchTree .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## SynShapes Edit Transfer
# +
# %matplotlib notebook
import os
import matplotlib
from data import PartNetDataset
from vis_utils import draw_partnet_objects
matplotlib.pyplot.ion()
# results directory
shape1 = 'chair'
shape2 = 'stool'
root1_dir = '../data/results/exp_vae_synshapes_consis_syn_%s_syn_%s' % (shape1, shape2)
root2_dir = '../data/results/exp_structurenet_vae_synshapes_consis_syn_%s_syn_%s' % (shape1, shape2)
obj_idx = 0
obj_A = PartNetDataset.load_object(os.path.join(root1_dir, '%06d'%obj_idx, 'obj_A.json'))
obj_B = PartNetDataset.load_object(os.path.join(root1_dir, '%06d'%obj_idx, 'obj_B.json'))
obj_C = PartNetDataset.load_object(os.path.join(root1_dir, '%06d'%obj_idx, 'obj_C.json'))
obj_D = PartNetDataset.load_object(os.path.join(root1_dir, '%06d'%obj_idx, 'obj_D.json'))
recon_D1 = PartNetDataset.load_object(os.path.join(root1_dir, '%06d'%obj_idx, 'recon_D.json'))
recon_D2 = PartNetDataset.load_object(os.path.join(root2_dir, '%06d'%obj_idx, 'recon_D.json'))
draw_partnet_objects(objects=[obj_A, obj_B, obj_C, obj_D, recon_D2, recon_D1],
object_names=['A', 'B', 'C', 'D', 'StructureNet', 'Ours'],
figsize=(10, 5), leafs_only=True, visu_edges=False,
sem_colors_filename='../stats/semantics_colors/SynChair.txt')
with open(os.path.join(root1_dir, '%06d'%obj_idx, 'stats.txt'), 'r') as fin:
print('ours: ', fin.readline().rstrip())
print('ours: ', fin.readline().rstrip())
with open(os.path.join(root2_dir, '%06d'%obj_idx, 'stats.txt'), 'r') as fin:
print('sn: ', fin.readline().rstrip())
print('sn: ', fin.readline().rstrip())
# -
| code/vis_edittrans_synshapes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pandas TA ([pandas_ta](https://github.com/twopirllc/pandas-ta)) Strategies for Custom Technical Analysis
#
# ## Topics
# - What is a Pandas TA Strategy?
# - Builtin Strategies: __AllStrategy__ and __CommonStrategy__
# - Creating Strategies
# - Watchlist Class
# - Strategy Management and Execution
# - Indicator Composition/Chaining for more Complex Strategies
# - Comprehensive Example: _MACD and RSI Momo with BBANDS and SMAs 50 & 200 and Cumulative Log Returns_
# +
# %matplotlib inline
import datetime as dt
import pandas as pd
import pandas_ta as ta
from alphaVantageAPI.alphavantage import AlphaVantage # pip install alphaVantage-api
from watchlist import Watchlist
# %pylab inline
# -
# # What is a Pandas TA Strategy?
# A _Strategy_ is a simple way to name and group your favorite TA indicators. Technically, a _Strategy_ is a simple data class containing a list of indicators and their parameters. __Note__: _Strategy_ is experimental and subject to change. Pandas TA comes with two basic Strategies: __AllStrategy__ and __CommonStrategy__.
#
# ## Strategy Requirements:
# - _name_: Some short memorable string. _Note_: Case-insensitive "All" is reserved.
# - _ta_: A list of dicts containing keyword arguments to identify the indicator and the indicator's arguments
#
# ## Optional Requirements:
# - _description_: A more detailed description of what the Strategy tries to capture. Default: None
# - _created_: A datetime string of when it was created. Default: Automatically generated.
#
# ### Things to note:
# - A Strategy will __fail__ when consumed by Pandas TA if any dict in its _ta_ list is missing the {"kind": "indicator name"} entry.
# # Builtin Examples
# ### All
AllStrategy = ta.AllStrategy
print("name =", AllStrategy.name)
print("description =", AllStrategy.description)
print("created =", AllStrategy.created)
print("ta =", AllStrategy.ta)
# ### Common
CommonStrategy = ta.CommonStrategy
print("name =", CommonStrategy.name)
print("description =", CommonStrategy.description)
print("created =", CommonStrategy.created)
print("ta =", CommonStrategy.ta)
# # Creating Strategies
# ### Simple Strategy A
custom_a = ta.Strategy(name="A", ta=[{"kind": "sma", "length": 50}, {"kind": "sma", "length": 200}])
custom_a
# ### Simple Strategy B
custom_b = ta.Strategy(name="B", ta=[{"kind": "ema", "length": 8}, {"kind": "ema", "length": 21}, {"kind": "log_return", "cumulative": True}, {"kind": "rsi"}, {"kind": "supertrend"}])
custom_b
# ### Bad Strategy. (Misspelled Indicator)
# Misspelled indicator; this will fail later when run by Pandas TA
custom_run_failure = ta.Strategy(name="Runtime Failure", ta=[{"kind": "percet_return"}])
custom_run_failure
# # Strategy Management and Execution with _Watchlist_
# ### Initialize AlphaVantage Data Source
AV = AlphaVantage(
api_key="YOUR API KEY", premium=False,
output_size='full', clean=True,
export_path=".", export=True
)
AV
# ### Create Watchlist and set its 'ds' to AlphaVantage
watch = Watchlist(["SPY", "IWM"])
# #### Info about the Watchlist. Note, the default Strategy is "All"
watch
# ### Help about Watchlist
help(Watchlist)
# ### Default Strategy is "Common"
# No arguments loads all the tickers and applies the Strategy to each ticker.
# The result can be accessed with Watchlist's 'data' property which returns a
# dictionary keyed by ticker and DataFrames as values
watch.load(verbose=True, timed=False)
watch.data
watch.data["SPY"]
# ## Easy to swap Strategies and run them
# ### Running Simple Strategy A
# Load custom_a into Watchlist and verify
watch.strategy = custom_a
# watch.debug = True
watch.strategy
watch.load("IWM")
# ### Running Simple Strategy B
# Load custom_b into Watchlist and verify
watch.strategy = custom_b
watch.strategy
watch.load("SPY")
# ### Running Bad Strategy. (Misspelled indicator)
# Load custom_run_failure into Watchlist and verify
watch.strategy = custom_run_failure
watch.strategy
try:
iwm = watch.load("IWM")
except AttributeError as error:
print(f"[X] Oops! {error}")
# # Indicator Composition/Chaining
# - When you need an indicator to depend on the value of a prior indicator
# - Utilize _prefix_ or _suffix_ to help identify unique columns or avoid column name clashes.
# ### Volume MAs and MA chains
# Set EMA's and SMA's 'close' to 'volume' to create Volume MAs; prefix the volume MAs with 'VOLUME' so the columns are easy to identify
# Take a price EMA and apply LINREG from EMA's output
volmas_price_ma_chain = [
{"kind":"ema", "close": "volume", "length": 10, "prefix": "VOLUME"},
{"kind":"sma", "close": "volume", "length": 20, "prefix": "VOLUME"},
{"kind":"ema", "length": 5},
{"kind":"linreg", "close": "EMA_5", "length": 8, "prefix": "EMA_5"},
]
vp_ma_chain_ta = ta.Strategy("Volume MAs and Price MA chain", volmas_price_ma_chain)
vp_ma_chain_ta
# Update the Watchlist
watch.strategy = vp_ma_chain_ta
watch.strategy.name
spy = watch.load("SPY")
spy
# ### MACD BBANDS
# MACD is the initial indicator that BBANDS depends on.
# Set BBANDS's 'close' to MACD's main signal, in this case 'MACD_12_26_9' and add a prefix (or suffix) so it's easier to identify
macd_bands_ta = [
{"kind":"macd"},
{"kind":"bbands", "close": "MACD_12_26_9", "length": 20, "prefix": "MACD"}
]
macd_bands_ta = ta.Strategy("MACD BBands", macd_bands_ta, f"BBANDS_{macd_bands_ta[1]['length']} applied to MACD")
macd_bands_ta
# Update the Watchlist
watch.strategy = macd_bands_ta
watch.strategy.name
spy = watch.load("SPY")
spy
# # Comprehensive Strategy
# ### MACD and RSI Momentum with BBANDS and SMAs and Cumulative Log Returns
momo_bands_sma_ta = [
{"kind":"sma", "length": 50},
{"kind":"sma", "length": 200},
{"kind":"bbands", "length": 20},
{"kind":"macd"},
{"kind":"rsi"},
{"kind":"log_return", "cumulative": True},
{"kind":"sma", "close": "CUMLOGRET_1", "length": 5, "suffix": "CUMLOGRET"},
]
momo_bands_sma_strategy = ta.Strategy(
"Momo, Bands and SMAs and Cumulative Log Returns", # name
momo_bands_sma_ta, # ta
"MACD and RSI Momo with BBANDS and SMAs 50 & 200 and Cumulative Log Returns" # description
)
momo_bands_sma_strategy
# Update the Watchlist
watch.strategy = momo_bands_sma_strategy
watch.strategy.name
spy = watch.load("SPY", timed=True)
# Apply constants to the DataFrame for indicators
spy.ta.constants(True, [0, 30, 70])
spy.head()
# # Additional Strategy Options
# The ```params``` keyword takes a _tuple_ as shorthand for the indicator's positional arguments, in order.
# * **Note**: If the indicator arguments change, so will results. Breaking Changes will **always** be posted on the README.
#
# The ```col_numbers``` keyword takes a _tuple_ specifying which column to return if the result is a DataFrame.
params_ta = [
{"kind":"ema", "params": (10,)},
# params sets MACD's keyword arguments: fast=9, slow=19, signal=10
# and returning the 2nd column: histogram
{"kind":"macd", "params": (9, 19, 10), "col_numbers": (1,)},
# Selects the Lower and Upper Bands and renames them LB and UB, ignoring the MB
{"kind":"bbands", "col_numbers": (0,2), "col_names": ("LB", "UB")},
{"kind":"log_return", "params": (5, False)},
]
params_ta_strategy = ta.Strategy(
"EMA, MACD History, Outter BBands, Log Returns", # name
params_ta, # ta
"EMA, MACD History, BBands(LB, UB), and Log Returns Strategy" # description
)
params_ta_strategy
# Update the Watchlist
watch.strategy = params_ta_strategy
watch.strategy.name
spy = watch.load("SPY", timed=True)
spy.tail()
| examples/PandasTA_Strategy_Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Proteus Applications Main Page
#
# Welcome to the Proteus Applications page!
# ## Wave Tank Examples
#
# ### [wavetank2d](wavetank2d/Flume.ipynb)
# #### This application models a computational wave flume with an embedded sandbed.
#
#
| notebooks/Applications/Proteus_Applications.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# Import pyNBS modules
from pyNBS import data_import_tools as dit
from pyNBS import network_propagation as prop
from pyNBS import pyNBS_core as core
from pyNBS import pyNBS_single
from pyNBS import consensus_clustering as cc
from pyNBS import pyNBS_plotting as plot
# Import other needed packages
import os
import time
import pandas as pd
import numpy as np
from IPython.display import Image
# -
# # Load Data
# First, we must load the somatic mutation and network data for running pyNBS. We will also set an output directory location to save our results.
# ### Load binary somatic mutation data
# The binary somatic mutation data file can be represented in two file formats:
# The default format for the binary somatic mutation data file is the ```list``` format. This file format is a 2-column csv or tsv list where the 1st column is a sample/patient and the 2nd column is a gene mutated in that sample/patient. There are no headers in this file format. Loading data in the list format is typically faster than loading data from the matrix format. The following text is the list representation of the matrix shown further below.
# ```
# TCGA-04-1638 A2M
# TCGA-23-1029 A1CF
# TCGA-23-2647 A2BP1
# TCGA-24-1847 A2M
# TCGA-42-2589 A1CF
# ```
#
# The ```matrix``` binary somatic mutation data format is the format that data for this example is currently represented. This file format is a binary csv or tsv matrix with rows represent samples/patients and columns represent genes. The following table is a small excerpt of a matrix somatic mutation data file:
#
# ||A1CF|A2BP1|A2M|
# |-|-|-|-|
# |TCGA-04-1638|0|0|1|
# |TCGA-23-1029|1|0|0|
# |TCGA-23-2647|0|1|0|
# |TCGA-24-1847|0|0|1|
# |TCGA-42-2589|1|0|0|
#
# __Note:__ The default file type is defined as ```'list'```, but if the user would like to specify the 'matrix' type, the user needs to simply pass the string ```'matrix'``` to the ```filetype``` optional parameter (as below). The delimiter for the file is passed similarly to the optional parameter ```delimiter```
#
# For more examples and definitions in the somatic mutation data file format, please see our Github Wiki page:
# https://github.com/huangger/pyNBS/wiki/Somatic-Mutation-Data-File-Format
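# To see how the two formats relate, here is a small pandas sketch (with hypothetical sample and gene names) converting a list-format table into the binary matrix format; this is an illustration, not the pyNBS loader:

```python
import pandas as pd

# List format: one (sample, gene) mutation pair per row, no header
pairs = pd.DataFrame(
    [["TCGA-04-1638", "A2M"],
     ["TCGA-23-1029", "A1CF"],
     ["TCGA-23-2647", "A2BP1"],
     ["TCGA-24-1847", "A2M"]],
    columns=["sample", "gene"])

# Matrix format: samples x genes binary indicator matrix
# crosstab counts occurrences; clip(upper=1) makes the matrix binary
sm_mat = pd.crosstab(pairs["sample"], pairs["gene"]).clip(upper=1)
print(sm_mat)
```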
# +
# The only required file here is the file path to the somatic mutation data
# However, in this example, the data is not formatted in the default 2-column tab-separated list, so we set the
# file loading parameters explicitly below
sm_data_filepath = './Example_Data/Mutation_Files/OV_sm_mat_Hofree.csv'
sm_mat = dit.load_binary_mutation_data(sm_data_filepath, filetype='matrix', delimiter=',')
# -
# ### Load molecular network
# The network file is a 2-column text file representing an unweighted network. Each row represents a single edge in the molecular network.
#
# Notes about the network file:
# - The default column delimiter is a tab character '\t' but a different delimiter can be defined by the user here or in the parameter file with the "net_filedelim" parameter.
# - The network must not contain duplicate edges (e.g. TP53\tMDM2 is equivalent to MDM2\tTP53)
# - The network must not contain self-edges (e.g. TP53\tTP53)
# - Only the first two columns of a network file are read as edges for the network, all other columns will be ignored.
# - The load_network function also includes options to read in edge- or label-shuffled versions of the network, but by default, these options are turned off.
#
# An excerpt of the first five rows of the PID network file is given below:
# ```
# A1BG A2M
# A1BG AKT1
# A1BG GRB2
# A1BG PIK3CA
# A1BG PIK3R1
# ```
#
# For more examples and definitions in the network file format, please see our Github Wiki page:
# https://github.com/huangger/pyNBS/wiki/Molecular-Network-File-Format
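# The duplicate-edge and self-edge constraints above can be checked with a few lines of plain Python (a sketch over toy edges, not the `load_network_file` implementation):

```python
# Toy edge list with a self-edge and a duplicate edge in reversed order
raw_edges = [("TP53", "MDM2"), ("MDM2", "TP53"), ("TP53", "TP53"), ("A1BG", "A2M")]

# frozenset makes (u, v) and (v, u) compare equal; drop self-edges where u == v
edges = {frozenset(e) for e in raw_edges if e[0] != e[1]}
print(sorted(tuple(sorted(e)) for e in edges))
```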
# +
# The only required parameter for this function is the network file path
network_filepath = './Example_Data/Network_Files/HM90.txt'
network = dit.load_network_file(network_filepath)
# -
# ### Setting result output options
# The following code is completely optional for the user. It allows users to pre-define a directory in which to save intermediate and final results, and it establishes a file name prefix for those files in the output directory. It also creates the output directory if it does not already exist. The result of this cell is a dictionary that can optionally be passed to functions to save their results.
#
# **Note:** The key assumption here is that if the user passes **save_args containing a valid directory path in ```outdir``` to a function, the result of that particular function call will be saved to the given ```outdir```
# +
# Optional: Setting the output directory for files to be saved in
outdir = './Results/via_notebook/Hofree_OV/'
# Optional: Creating above output directory if it doesn't already exist
if not os.path.exists(outdir):
os.makedirs(outdir)
# Optional: Setting a filename prefix for all files saved to outdir
job_name = 'Hofree_OV'
# Constructs dictionary to be passed as "save_args" to functions if output to be saved
save_args = {'outdir': outdir, 'job_name': job_name}
# -
# # Construct regularization graph for use in network-regularized NMF
#
# In this step, we will construct the graph used in the network-regularized non-negative matrix factorization (netNMF) step of pyNBS. This network is a K-nearest-neighbor (KNN) network constructed from the network influence matrix (Vandin et al. 2011) of the molecular network being used to stratify tumor samples. The graph laplacian of this KNN network (knnGlap) is used as the regularizer in the following netNMF steps. This step uses the ```network_inf_KNN_glap``` function in the pyNBS_core module.
#
# For additional notes on the graph laplacian construction method, please visit our GitHub wiki for this function:
# https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.network_inf_KNN_glap
#
# ---
# **Note:** This step is technically optional. No regularization network laplacian has to be constructed if the user would like to run the NMF step without a network regularizer. The user simply has to pass ```None``` into the optional parameter ```regNet_glap``` or remove the optional parameter in the ```pyNBS_single()``` function call below. This will cause pyNBS to run a non-network regularized NMF procedure. However, given the implementation of the multiplicative update steps, the results may not be exactly the same as some other NMF implementations (e.g. from scikit-learn).
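# As a rough illustration of the KNN graph laplacian construction (a simplified numpy sketch, not the actual `network_inf_KNN_glap` implementation): for each node, keep the k nodes with the highest influence on it, symmetrize the resulting adjacency matrix, and take L = D - A.

```python
import numpy as np

def knn_graph_laplacian(influence, k):
    """Simplified sketch: KNN adjacency from an influence matrix, then L = D - A."""
    n = influence.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        # indices of the k largest off-diagonal influence values for node i
        order = np.argsort(influence[i])[::-1]
        neighbors = [j for j in order if j != i][:k]
        A[i, neighbors] = 1
    A = np.maximum(A, A.T)          # symmetrize: keep an edge if either side picked it
    D = np.diag(A.sum(axis=1))      # degree matrix
    return D - A                    # unnormalized graph laplacian

influence = np.array([[1.0, 0.8, 0.1],
                      [0.8, 1.0, 0.3],
                      [0.1, 0.3, 1.0]])
L = knn_graph_laplacian(influence, k=1)
print(L)
```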
# +
# Constructing knnGlap
knnGlap = core.network_inf_KNN_glap(network)
##########################################################################################################
# The resulting matrix can be very large, so we choose not to save the intermediate result here
# To run this function and save the KNN graph laplacian to the output directory 'outdir' given above:
# Uncomment and run the following line instead:
# knnGlap = core.network_inf_KNN_glap(network, **save_args)
##########################################################################################################
# -
# # Construct network propagation kernel matrix
# Due to the multiple subsampling and propagation steps used in pyNBS, the algorithm can be sped up significantly for large numbers of subsampling and propagation iterations if we pre-compute a gene-by-gene matrix describing the influence of each gene on every other gene in the network under the random-walk propagation operation. We refer to this matrix as the "network propagation kernel". Here we compute this kernel by propagating each gene in the molecular network independently of the others. The propagation profile of each tumor is then simply the column-sum vector of the network propagation kernel restricted to the rows of genes marked as mutated in that tumor, rather than having to perform the full network propagation step again after each subsampling of the data.
#
# For additional notes on the propagation methods used, please visit our GitHub wiki for this function:
# https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_propagation
#
# ### Calibrating the network propagation coefficient
# The network propagation coefficient ($\alpha$) is set to 0.7 and must range between 0 and 1. This parameter can be tuned, and changing it may affect the final propagation results. Previous results from [Hofree et al 2013](https://www.nature.com/articles/nmeth.2651) suggest that values between 0.5 and 0.8 produce relatively robust results, but we suspect that the optimal value may depend on certain network properties such as edge density.
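# The random-walk propagation rule itself can be sketched in a few lines of numpy (an illustration of the update F_{t+1} = alpha * F_t * W + (1 - alpha) * F_0, not the pyNBS implementation or its normalization choices):

```python
import numpy as np

def propagate(F0, A, alpha=0.7, tol=1e-8, max_iter=1000):
    """Random-walk network propagation sketch:
    F_{t+1} = alpha * F_t @ W + (1 - alpha) * F0,
    where W is the row-degree-normalized adjacency matrix."""
    W = A / A.sum(axis=1, keepdims=True)   # row-normalize the adjacency matrix
    F = F0.copy().astype(float)
    for _ in range(max_iter):
        F_next = alpha * F @ W + (1 - alpha) * F0
        if np.abs(F_next - F).max() < tol:
            break
        F = F_next
    return F

# Tiny 3-gene path network 0 -- 1 -- 2, with gene 0 mutated in one sample
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
F0 = np.array([[1.0, 0.0, 0.0]])
print(propagate(F0, A, alpha=0.7))
```

At convergence the result satisfies F = alpha * F @ W + (1 - alpha) * F0, so the mutation signal is smoothed over the network while total signal mass is preserved.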
# Set or change network propagation coefficient if desired
alpha = 0.7
# Construct identity matrix of network
network_nodes = network.nodes()
network_I = pd.DataFrame(np.identity(len(network_nodes)), index=network_nodes, columns=network_nodes)
# **Note about the propagation method used here:** For the Hofree OV results, the symmetric normalization for the adjacency matrix was used in the original analysis, so we will perform the same normalization here. More details can be found at the documentation page given above.
# +
# Construct network propagation kernel
kernel = prop.network_propagation(network, network_I, alpha=alpha, symmetric_norm=False)
##########################################################################################################
# The resulting matrix can be very large, so we choose not to save the intermediate result here
# To run this function and save the propagation kernel to the output directory 'outdir' given above,
# Uncomment and run the following two lines instead of the above line:
# save_args['iteration_label']='kernel'
# kernel = prop.network_propagation(network, network_I, alpha=alpha, symmetric_norm=True, **save_args)
##########################################################################################################
# -
# # Subsampling, propagation, and netNMF
# After the pre-computation of the regularization graph laplacian and the network propagation kernel, we perform the following core steps of the NBS algorithm multiple times (default=100x) to produce multiple patient clusterings that will be used in the later consensus clustering step. Each patient clustering is performed with the following steps:
#
# 1. **Subsample binary somatic mutation data.** (See the documentation for the [```subsample_sm_mat```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.subsample_sm_mat) function for more details.)
# 2. **Propagate binary somatic mutation data over network.** (See the documentation for the [```network_propagation```](https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_propagation) or [```network_kernel_propagation```](https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_propagation) function for more details.)
# 3. **Quantile normalize the network-smoothed mutation data.** (See the documentation for the [```qnorm```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.qnorm) function for more details.)
# 4. **Use netNMF to decompose network data into k clusters.** (See the documentation for the [```mixed_netNMF```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.mixed_netNMF) function for more details.)
#
# These functions for each step here are wrapped by the [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single) function, which calls each step above in sequence to perform a single iteration of the pyNBS algorithm.
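# Step 3, quantile normalization, can be illustrated with a toy numpy sketch (not the pyNBS `qnorm` implementation, and ignoring ties): each sample's values are replaced by the across-sample mean of the corresponding ranks.

```python
import numpy as np

def quantile_normalize(X):
    """Toy quantile normalization sketch (rows = samples, columns = genes).
    Each row's values are replaced by the across-row mean of the sorted values,
    reassigned according to that row's ranks. Ties are not handled."""
    sorted_rows = np.sort(X, axis=1)
    ref = sorted_rows.mean(axis=0)                 # reference distribution
    ranks = X.argsort(axis=1).argsort(axis=1)      # double argsort gives ranks
    return ref[ranks]

X = np.array([[5.0, 2.0, 3.0],
              [4.0, 1.0, 6.0]])
Xq = quantile_normalize(X)
print(Xq)
```

After normalization every row has exactly the same distribution of values; only their order differs.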
# ### Number of pyNBS clusters
# The default number of clusters constructed by pyNBS is k=3. This can be changed explicitly below or in the parameters for [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single); in this example there are 4 clusters, so we set k=4 here. Other parameters such as the subsampling parameters and the propagation coefficient (when no kernel is pre-computed) can also be changed using \*\*kwargs. \*\*kwargs will also hold the values of \*\*save_args, as in previous functions, if the user would like to save the resulting dimension-reduced patient profiles. All \*\*kwargs definitions are documented on the Github wiki page for [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single)
clusters = 4
# ### Number of pyNBS iterations
# The consensus clustering step of the pyNBS algorithm improves when the data is subsampled and re-clustered multiple times. The default number of times we perform the aforementioned operation (```niter```) is 100. The number can be reduced for a faster run time but may produce less robust results; increasing ```niter``` will increase overall runtime but should produce more robust cluster assignments during consensus clustering.
# Set the number of times to perform pyNBS core steps
niter = 100
# +
# Optional: Saving the intermediate propagation step (from subsampled data) to file
# save_args['save_prop'] = True
# Run pyNBS 'niter' number of times
Hlist = []
for i in range(niter):
netNMF_time = time.time()
# Run pyNBS core steps and save resulting H matrix to Hlist
Hlist.append(pyNBS_single.NBS_single(sm_mat, knnGlap, propNet=network, propNet_kernel=kernel, k=clusters))
##########################################################################################################
# Optional: If the user is saving intermediate outputs (propagation results or H matrices),
# a different 'iteration_label' should be used for each call of pyNBS_single().
# Otherwise, the user will overwrite each H matrix at each call of pyNBS_single()
# Uncomment and run the two lines below to save intermediate steps instead of the previous line
# save_args['iteration_label']=str(i+1)
# Hlist.append(pyNBS_single.NBS_single(sm_mat, propNet=network, propNet_kernel=kernel, regNet_glap=knnGlap,
# k=clusters, **save_args))
##########################################################################################################
# Report run time of each pyNBS iteration
t = time.time()-netNMF_time
print 'NBS iteration:', i+1, 'complete:', t, 'seconds'
# -
# # Consensus Clustering
# In order to produce robust patient clusters, the sub-sampling and re-clustering steps above are needed. After the patient data is subsampled multiple times (default ```niter```=100), we run the [```consensus_hclust_hard```](https://github.com/huangger/pyNBS/wiki/pyNBS.consensus_clustering.consensus_hclust_hard) function in the consensus_clustering module. It accepts a list of pandas DataFrames as generated in the previous step. If the H matrices were generated separately and saved to a directory, the user will need to manually import those H matrices into a python list first before passing the list to the function below.
#
# For more information on how the consensus clustering is performed, please see our wiki page on this function:
# https://github.com/huangger/pyNBS/wiki/pyNBS.consensus_clustering.consensus_hclust_hard
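# The core idea behind consensus clustering can be sketched with plain numpy (an illustration, not the `consensus_hclust_hard` implementation): count how often each pair of samples lands in the same cluster across iterations, then hierarchically cluster that co-occurrence matrix.

```python
import numpy as np

def co_cluster_matrix(assignments):
    """Fraction of iterations in which each pair of samples co-clusters.
    `assignments` is a list of 1-D cluster-label arrays, one per iteration."""
    n = len(assignments[0])
    C = np.zeros((n, n))
    for labels in assignments:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(assignments)

# Three toy iterations over 4 samples: samples 0-1 and 2-3 usually co-cluster
runs = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 1, 1, 1]]
C = co_cluster_matrix(runs)
print(C)
```

In pyNBS the resulting co-clustering table is then cut with hierarchical clustering to obtain the final hard cluster assignments.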
NBS_cc_table, NBS_cc_linkage, NBS_cluster_assign = cc.consensus_hclust_hard(Hlist, k=clusters, **save_args)
# # Co-Clustering Map
# To visualize the clusters formed by the pyNBS algorithm, we can plot a similarity map using the objects created in the previous step. We will also load data from the original Hofree et al 2013 paper to compare the results of the pyNBS implementation of the algorithm to the results reported in the paper. This step uses the [`cluster_color_assign`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.cluster_color_assign) and [`plot_cc_map()`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.plot_cc_map) functions in the [`pyNBS_plotting`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting) module.
# +
# First load the cluster assignment data from Hofree 2013 for OV cancer patients
orig_Hofree_OV_clust = pd.read_table('./Example_Data/Hofree_Results/Hofree_OV_NBS_Results.csv',sep=',',index_col=0)
# Align the pyNBS and Hofree cluster assignments with one another using Pandas
cluster_align = pd.concat([orig_Hofree_OV_clust.iloc[:,0], NBS_cluster_assign], axis=1).dropna(axis=0, how='any').astype(int)
Hofree_OV_clust = cluster_align.iloc[:,0].astype(int)
pyNBS_OV_clust = cluster_align.iloc[:,1].astype(int)
# +
# Assign colors to clusters from Hofree and pyNBS
Hofree_OV_clust_cmap = plot.cluster_color_assign(Hofree_OV_clust, name='Hofree OV Cluster Assignments')
pyNBS_OV_clust_cmap = plot.cluster_color_assign(pyNBS_OV_clust, name='pyNBS OV Cluster Assignments')
# Plot and save co-cluster map figure
plot.plot_cc_map(NBS_cc_table, NBS_cc_linkage, row_color_map=Hofree_OV_clust_cmap, col_color_map=pyNBS_OV_clust_cmap, **save_args)
# -
Image(filename = save_args['outdir']+save_args['job_name']+'_cc_map.png', width=600, height=600)
# # Survival analysis
# To determine if the patient clusters are prognostically relevant, we perform a standard survival analysis using a multi-class logrank test to evaluate the significance of survival separation between patient clusters. This data is plotted using a Kaplan-Meier plot using the [`cluster_KMplot()`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.cluster_KMplot) in the [`pyNBS_plotting`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting) module.
#
#
# In order to plot the survival differences between clusters, we will need to load survival data for each patient. This data was extracted from TCGA clinical data. The survival data is given in a 5-column delimited table with the specific headings described below (the columns must be in the same order as shown below). The following is an example of a few lines of a survival table:
#
# ||vital_status|days_to_death|days_to_last_followup|overall_survival|
# |-|-|-|-|-|
# |TCGA-2E-A9G8|0|0|1065|1065|
# |TCGA-A5-A0GI|0|0|1750|1750|
# |TCGA-A5-A0GM|0|0|1448|1448|
# |TCGA-A5-A1OK|0|0|244|244|
# |TCGA-A5-AB3J|0|0|251|251|
#
# Additional details on the survival data file format are also described on our GitHub wiki:
# https://github.com/huangger/pyNBS/wiki/Patient-Survival-Data-File-Format
#
# Note: By default, pyNBS draws no survival curves, since survival data is not a required parameter. The path to a valid survival data file must be given explicitly.
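# As a sketch of parsing this format with pandas (the header names here are assumptions matching the example table above; the real file is comma-delimited with the patient ID in the first column):

```python
import io
import pandas as pd

# two rows in the 5-column layout described above
raw = """patient,vital_status,days_to_death,days_to_last_followup,overall_survival
TCGA-2E-A9G8,0,0,1065,1065
TCGA-A5-A0GI,0,0,1750,1750
"""
surv = pd.read_csv(io.StringIO(raw), index_col=0)
print(list(surv.columns))                            # the four survival columns
print(surv.loc['TCGA-2E-A9G8', 'overall_survival'])  # 1065
```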
# +
# Load survival Data
surv_data = './Example_Data/Clinical_Files/OV.clin.merged.Hofree.txt'
# Plot KM Plot for patient clusters
plot.cluster_KMplot(NBS_cluster_assign, surv_data, delimiter=',', **save_args)
Image(filename = save_args['outdir']+save_args['job_name']+'_KM_plot.png', width=600, height=600)
# -
# # pyNBS Result comparison to Hofree et al 2013
# We also compare the pyNBS clustering results against the original Hofree 2013 cluster assignments of the same patient data using two scores:
# [adjusted rand score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) and [adjusted mutual information score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_mutual_info_score.html).
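# As a quick toy illustration (independent of the pyNBS data): both scores are invariant to how cluster labels are numbered, so identical partitions under permuted labels score 1.0.

```python
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

# the same partition of six items under two different labelings
a = [1, 1, 2, 2, 3, 3]
b = [3, 3, 1, 1, 2, 2]
print(adjusted_rand_score(a, b))         # 1.0
print(adjusted_mutual_info_score(a, b))  # 1.0
```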
from sklearn.metrics.cluster import adjusted_mutual_info_score, adjusted_rand_score
adj_rand_index = adjusted_rand_score(Hofree_OV_clust, pyNBS_OV_clust)
adj_mutual_info_score = adjusted_mutual_info_score(Hofree_OV_clust, pyNBS_OV_clust)
print('Adjusted Rand Index is: ' + str(adj_rand_index))
print('Adjusted Mutual Info Score is: ' + str(adj_mutual_info_score))
# ### Chi-Squared Association Test
#
# We report in the Application Note the ability of pyNBS to recover the Hofree et al results by the significance of a multi-class Chi-squared statistic. We calculate that here:
import scipy.stats as stats
# +
# Construct contingency table for cluster assignments
intersect_OV_pats = list(cluster_align.index)
NBS_OV_cont_table_array = []
for i in range(1,clusters+1):
Hofree_cluster = set(Hofree_OV_clust.loc[intersect_OV_pats][Hofree_OV_clust.loc[intersect_OV_pats]==i].index)
Hofree_pyNBS_cluster_intersect = []
for j in range(1,clusters+1):
pyNBS_cluster = set(pyNBS_OV_clust.loc[intersect_OV_pats][pyNBS_OV_clust.loc[intersect_OV_pats]==j].index)
Hofree_pyNBS_cluster_intersect.append(len(Hofree_cluster.intersection(pyNBS_cluster)))
NBS_OV_cont_table_array.append(Hofree_pyNBS_cluster_intersect)
# Display contingency table
pd.DataFrame(NBS_OV_cont_table_array,
index=['Hofree Cluster '+repr(i) for i in range(1, clusters+1)],
columns=['pyNBS Cluster '+repr(i) for i in range(1, clusters+1)])
# -
# Calculate p-value and chi-squared statistic:
chi_sq_test = stats.chi2_contingency(NBS_OV_cont_table_array, correction=False)
print('Chi-Squared Statistic: ' + str(chi_sq_test[0]))
print('Chi-Squared P-Value: ' + str(chi_sq_test[1]))
| Examples/Hofree_OV_pyNBS_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stemming
# Often when searching text for a certain keyword, it helps if the search returns variations of the word. For instance, searching for "boat" might also return "boats" and "boating". Here, "boat" would be the **stem** for [boat, boater, boating, boats].
#
# Stemming is a somewhat crude method for cataloging related words; it essentially chops off letters from the end until the stem is reached. This works fairly well in most cases, but unfortunately English has many exceptions where a more sophisticated process is required. In fact, spaCy doesn't include a stemmer, opting instead to rely entirely on lemmatization. For those interested, there's some background on this decision [here](https://github.com/explosion/spaCy/issues/327). We discuss the virtues of *lemmatization* in the next section.
#
# Instead, we'll use another popular NLP tool called **nltk**, which stands for *Natural Language Toolkit*. For more information on nltk visit https://www.nltk.org/
# ## Porter Stemmer
#
# One of the most common - and effective - stemming tools is [*Porter's Algorithm*](https://tartarus.org/martin/PorterStemmer/) developed by <NAME> in [1980](https://tartarus.org/martin/PorterStemmer/def.txt). The algorithm employs five phases of word reduction, each with its own set of mapping rules. In the first phase, simple suffix mapping rules are defined, such as:
# 
# From a given set of stemming rules only one rule is applied, based on the longest suffix S1. Thus, `caresses` reduces to `caress` but not `cares`.
#
# More sophisticated phases consider the length/complexity of the word before applying a rule. For example:
# 
# Here `m>0` describes the "measure" of the stem, such that the rule is applied to all but the most basic stems.
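# These first-phase suffix rules can be verified directly (a quick self-contained check; the expected outputs follow Porter's published examples):

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
# SSES -> SS: only the longest matching suffix rule is applied
print(stemmer.stem('caresses'))  # caress
# IES -> I
print(stemmer.stem('ponies'))    # poni
```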
# +
# Import the toolkit and the full Porter Stemmer library
import nltk
from nltk.stem.porter import *
# -
p_stemmer = PorterStemmer()
words = ['run','runner','running','ran','runs','easily','fairly']
for word in words:
print(word+' --> '+p_stemmer.stem(word))
# <font color=green>Note how the stemmer recognizes "runner" as a noun, not a verb form or participle. Also, the adverbs "easily" and "fairly" are stemmed to the unusual roots "easili" and "fairli".</font>
# ___
# ## Snowball Stemmer
# This is somewhat of a misnomer, as Snowball is the name of a stemming language developed by <NAME>. The algorithm used here is more accurately called the "English Stemmer" or "Porter2 Stemmer". It offers a slight improvement over the original Porter stemmer, both in logic and speed. Since **nltk** uses the name SnowballStemmer, we'll use it here.
# +
from nltk.stem.snowball import SnowballStemmer
# The Snowball Stemmer requires that you pass a language parameter
s_stemmer = SnowballStemmer(language='english')
# -
words = ['run','runner','running','ran','runs','easily','fairly']
# words = ['generous','generation','generously','generate']
for word in words:
print(word+' --> '+s_stemmer.stem(word))
# <font color=green>In this case the stemmer performed the same as the Porter Stemmer, with the exception that it handled the stem of "fairly" more appropriately with "fair".</font>
# ___
# ## Try it yourself!
# #### Pass in some of your own words and test each stemmer on them. Remember to pass them as strings!
words = ['consolingly']
print('Porter Stemmer:')
for word in words:
print(word+' --> '+p_stemmer.stem(word))
print('Porter2 Stemmer:')
for word in words:
print(word+' --> '+s_stemmer.stem(word))
# ___
# Stemming has its drawbacks. If given the token `saw`, stemming might always return `saw`, whereas lemmatization would likely return either `see` or `saw` depending on whether the use of the token was as a verb or a noun. As an example, consider the following:
phrase = 'I am meeting him tomorrow at the meeting'
for word in phrase.split():
print(word+' --> '+p_stemmer.stem(word))
# Here the word "meeting" appears twice - once as a verb, and once as a noun, and yet the stemmer treats both equally.
# ### Next up: Lemmatization
| 08-deep-learning/labs/03_NLP/02-Stemming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Input Data
from __future__ import print_function
from salib import extend, import_notebooks
from Tables import Table, DataSet
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from MemberLoads import makeMemberLoad
from collections import OrderedDict, defaultdict
import numpy as np
import pandas as pd
from Frame2D_Base import Frame2D
@extend
class Frame2D:
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False,optional=False):
columns = getattr(self,'COLUMNS_'+tablename)
index = getattr(self,'INDEX_'+tablename,None)
validatefn = getattr(self,'validate_'+tablename,None)
processfn = getattr(self,'process_'+tablename,None)
t = DataSet.get_table(tablename,columns=columns,optional=optional)
if index is not None:
t.set_index(index,inplace=True)
if validatefn:
validatefn(t)
if processfn:
processfn(t)
return t
def check_duplicates(self,table,displayname):
if table.index.has_duplicates:
dups = table.index[table.index.duplicated()].unique().tolist()
raise ValueError("Duplicate {}{}: {}"
.format(displayname,'' if len(dups) == 1 else 's',', '.join(dups)))
##test:
f = Frame2D()
# ## Test Frame
# 
# ## Nodes
# %%Table nodes
NODEID,X,Y,Z
A,0.,0.,5000.
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@extend
class Frame2D:
COLUMNS_nodes = ['NODEID','X','Y']
INDEX_nodes = 'NODEID'
def validate_nodes(self,data):
self.check_duplicates(data,'node id')
nulls = data[data.isnull().any(axis=1)].index.values.tolist()
if nulls:
raise ValueError("X or Y Coordinate data missing for node{}: {}".format('' if len(nulls) == 1 else 's',', '.join(nulls)))
def process_nodes(self,data):
pass
def get_node(self,id):
try:
return self.nodes.loc[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f.nodes = f.get_table('nodes')
f.nodes
##test:
n = f.get_node('C')
n
n.name
# ## Supports
# %%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FY,FX
def isnan(x):
if x is None:
return True
try:
return np.isnan(x)
except TypeError:
return False
@extend
class Frame2D:
COLUMNS_supports = ('NODEID','C0','C1','C2')
def input_supports(self):
table = self.get_table('supports')
for ix,row in table.data.iterrows():
node = self.get_node(row.NODEID)
for c in [row.C0,row.C1,row.C2]:
if not isnan(c):
node.add_constraint(c)
self.rawdata.supports = table
def validate_supports(self,table):
col = table['NODEID']
invalid = col.apply(lambda x: x not in self.nodes.index)
if any(invalid):
bad = col[invalid].tolist()
raise ValueError("Invalid nodeid in supports table: {}".format(bad))
##test:
s = f.get_table('supports')
s
##test:
t = f.nodes
r = t.merge(s,left_index=True,right_on='NODEID',how='outer').set_index('NODEID')
r
v = s['NODEID'].apply(lambda x: x not in t.index)
v
any(v)
##test:
def makeset(*args):
return set([x for x in args if not pd.isnull(x)])
r['Constraints'] = np.vectorize(makeset)(r['C0'],r['C1'],r['C2'])
r[['X','Y','Constraints']]
# ## Members
# %%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
CD,C,D
@extend
class Frame2D:
COLUMNS_members = ('MEMBERID','NODEJ','NODEK')
def input_members(self):
table = self.get_table('members')
for ix,m in table.data.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.input_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
# ## Releases
# %%Table releases
MEMBERID,RELEASE
AB,MZK
CD,MZJ
@extend
class Frame2D:
COLUMNS_releases = ('MEMBERID','RELEASE')
def input_releases(self):
table = self.get_table('releases',optional=True)
for ix,r in table.data.iterrows():
memb = self.get_member(r.MEMBERID)
memb.add_release(r.RELEASE)
self.rawdata.releases = table
##test:
f.input_releases()
##test:
vars(f.get_member('AB'))
# ## Properties
# If the SST module is loadable, member properties may be specified by giving steel shape designations
# (such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and
# $I_x$ directly (it only tries to look up the properties if these two are not provided).
try:
from sst import SST
__SST = SST()
get_section = __SST.section
except ImportError:
def get_section(dsg,fields):
raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
##return [1.] * len(fields.split(',')) # in case you want to do it that way
# %%Table properties
MEMBERID,SIZE,IX,A
BC,W460x106,,
AB,W310x97,,
CD,,
@extend
class Frame2D:
COLUMNS_properties = ('MEMBERID','SIZE','IX','A')
def input_properties(self):
table = self.get_table('properties')
table = self.fill_properties(table)
for ix,row in table.data.iterrows():
memb = self.get_member(row.MEMBERID)
memb.size = row.SIZE
memb.Ix = row.IX
memb.A = row.A
self.rawdata.properties = table
def fill_properties(self,table):
data = table.data
prev = None
for ix,row in data.iterrows():
nf = 0
if type(row.SIZE) in [type(''),type(u'')]:
if isnan(row.IX) or isnan(row.A):
Ix,A = get_section(row.SIZE,'Ix,A')
if isnan(row.IX):
nf += 1
data.loc[ix,'IX'] = Ix
if isnan(row.A):
nf += 1
data.loc[ix,'A'] = A
elif isnan(row.SIZE):
data.loc[ix,'SIZE'] = '' if nf == 0 else prev
prev = data.loc[ix,'SIZE']
table.data = data.fillna(method='ffill')
return table
##test:
f.input_properties()
##test:
vars(f.get_member('CD'))
# ## Node Loads
# %%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@extend
class Frame2D:
COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')
def input_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
if row.DIRN in n.constraints:
raise ValueError("Constrained node {} {} must not have load applied."
.format(row.NODEID,row.DIRN))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.input_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
# ## Support Displacements
# %%Table support_displacements
LOAD,NODEID,DIRN,DELTA
Other,A,DY,-10
@extend
class Frame2D:
COLUMNS_support_displacements = ('LOAD','NODEID','DIRN','DELTA')
def input_support_displacements(self):
table = self.get_table('support_displacements',optional=True)
forns = {'DX':'FX','DY':'FY','RZ':'MZ'}
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in forns:
raise ValueError("Invalid support displacements direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(forns.keys())))
fd = forns[row.DIRN]
if fd not in n.constraints:
raise ValueError("Support displacement, load: '{}' node: '{}' dirn: '{}' must be for a constrained node."
.format(row.LOAD,row.NODEID,row.DIRN))
l = makeNodeLoad({fd:row.DELTA})
self.nodedeltas.append(row.LOAD,n,l)
self.rawdata.support_displacements = table
##test:
f.input_support_displacements()
##test:
list(f.nodedeltas)[0]
# ## Member Loads
# %%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000
@extend
class Frame2D:
COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')
def input_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.data.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.input_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
# ## Load Combinations
# %%Table load_combinations
CASE,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@extend
class Frame2D:
COLUMNS_load_combinations = ('CASE','LOAD','FACTOR')
def input_load_combinations(self):
table = self.get_table('load_combinations',optional=True)
if len(table) > 0:
for ix,row in table.data.iterrows():
self.loadcombinations.append(row.CASE,row.LOAD,row.FACTOR)
if 'all' not in self.loadcombinations:
all = self.nodeloads.names.union(self.memberloads.names)
all = self.nodedeltas.names.union(all)
for l in all:
self.loadcombinations.append('all',l,1.0)
self.rawdata.load_combinations = table
##test:
f.input_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
# ### Load Iterators
@extend
class Frame2D:
def iter_nodeloads(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.nodeloads):
yield o,l,f
def iter_nodedeltas(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.nodedeltas):
yield o,l,f
def iter_memberloads(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact)
# ## Number the DOFs
@extend
class Frame2D:
def number_dofs(self):
self.ndof = (3*len(self.nodes))
self.ncons = sum([len(node.constraints) for node in self.nodes.values()])
self.nfree = self.ndof - self.ncons
ifree = 0
icons = self.nfree
self.dofdesc = [None] * self.ndof
for node in self.nodes.values():
for dirn,ix in node.DIRECTIONS.items():
if dirn in node.constraints:
n = icons
icons += 1
else:
n = ifree
ifree += 1
node.dofnums[ix] = n
self.dofdesc[n] = (node,dirn)
##test:
f.number_dofs()
f.ndof, f.ncons, f.nfree
##test:
f.dofdesc
##test:
f.get_node('D').dofnums
# ## Input Everything
@extend
class Frame2D:
def input_all(self):
self.input_nodes()
self.input_supports()
self.input_members()
self.input_releases()
self.input_properties()
self.input_node_loads()
self.input_support_displacements()
self.input_member_loads()
self.input_load_combinations()
self.input_finish()
def input_finish(self):
self.number_dofs()
##test:
f.reset()
f.input_all()
# ## Accumulated Cell Data
##test:
Table.CELLDATA
# ## Input From Files
##test:
f.reset()
Table.set_source('frame-1')
f.input_all()
##test:
vars(f.rawdata)
##test:
f.rawdata.nodes.data
##test:
f.members
##test:
Table.CELLDATA
| Devel/V05/Frame2D_Input.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bootcamp
# language: python
# name: bootcamp
# ---
import pandas as pd
import numpy as np
pd.options.mode.chained_assignment = None
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
flights = pd.read_csv('data/flights.csv')
flights.head()
flights = flights[['fl_date','mkt_carrier','origin','dest'
,'taxi_out','taxi_in','cancelled','crs_elapsed_time','arr_delay']]
#features and target
X = flights.loc[:,flights.columns!='cancelled']
y = flights[['cancelled']]
#lets create year, month and day
X['fl_date'] = pd.to_datetime(X['fl_date'],format='%Y-%m-%d')
X['year'] = pd.DatetimeIndex(X['fl_date']).year
X['month'] = pd.DatetimeIndex(X['fl_date']).month
X['day'] = pd.DatetimeIndex(X['fl_date']).day
X = X[['month','day','origin','dest','crs_elapsed_time']]
X= pd.get_dummies(X,columns=['origin','dest'])
#calculating the weights of class 0 to 1
class_0 = y.loc[y['cancelled'] == 0].value_counts().values[0]
class_1 = y.loc[y['cancelled'] == 1].value_counts().values[0]
weight = class_0/class_1
print('ratio of zeros to one:',weight)
print(class_1)
#Splitting into train test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=123)
y_test.loc[y_test['cancelled'] == 1].value_counts().values[0]
#giving the computed ratio as a weight to XGBClassifier
model = xgb.XGBClassifier(scale_pos_weight=weight)
model.fit(X_train.values, y_train.values.ravel())
y_pred = model.predict(X_test.values)
print('Roc_Auc Score:',roc_auc_score(y_test,y_pred))
print('accuracy score: ',accuracy_score(y_test,y_pred))
confusion_matrix(y_test,y_pred)
| .ipynb_checkpoints/XGboost_binary-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from numba import njit
from ipywidgets import interactive
# Differential equation for the motion of a projectile in presence of air drag
# $m\vec{\dot{v}}=\vec{F_d}+\vec{F_g}$
# $F_d=-\beta D \vec{v}- \gamma D^2 |\vec{v}|^2 \hat{v}$
# $\vec{\dot{v}}=\vec{g}-\beta \frac{D}{m} \vec{v} - \gamma \frac{D^2}{m} |\vec{v}|^2 \hat{v} $
# $\dot{v_x}=-\beta \frac{D}{m} v_x - \gamma \frac{D^2}{m} (v_x^2+v_y^2)^{0.5}v_x$
# $\dot{v_y}=-g-\beta \frac{D}{m} v_y - \gamma \frac{D^2}{m} (v_x^2+v_y^2)^{0.5}v_y$
# $\dot{r_x}=v_x$
# $\dot{r_y}=v_y$
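# Before hand-rolling RK4, the same system can be cross-checked with a library integrator (a sketch using scipy's `solve_ivp`; the launch parameters below are illustrative, not from this notebook):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, D, m, beta, gamma, g):
    # state s = [v_x, v_y, r_x, r_y]; same drag model as the equations above
    vx, vy, rx, ry = s
    speed = np.hypot(vx, vy)
    f1 = -beta * D / m
    f2 = -gamma * D**2 / m
    return [f1 * vx + f2 * speed * vx,
            -g + f1 * vy + f2 * speed * vy,
            vx, vy]

def hit_ground(t, s, *args):
    return s[3]          # r_y crosses zero when the projectile lands
hit_ground.terminal = True
hit_ground.direction = -1

# 45 degree launch at 20 m/s from 1 m height; D=0.1 m, m=0.5 kg (illustrative)
v0 = 20 / np.sqrt(2)
sol = solve_ivp(rhs, (0, 60), [v0, v0, 0.0, 1.0],
                args=(0.1, 0.5, 1.6e-4, 0.25, 9.81),
                events=hit_ground, max_step=0.01)
t_flight = sol.t_events[0][0]
print(t_flight)  # a bit under the vacuum value of about 2.95 s
```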
# +
@njit
def dv_xdt(v_x,v_y,D,m,beta,gamma):
f1=-beta*D/m
f2=-gamma*D**2/m
dvdt=f1*v_x+f2*(v_x**2+v_y**2)**0.5*v_x
return(dvdt)
@njit
def rk4_v_x(v_x,v_y,D,m,beta,gamma,dt):
k0=dv_xdt(v_x,v_y,D,m,beta,gamma)
k1=dv_xdt(v_x+dt*k0/2,v_y,D,m,beta,gamma)
k2=dv_xdt(v_x+dt*k1/2,v_y,D,m,beta,gamma)
k3=dv_xdt(v_x+dt*k2,v_y,D,m,beta,gamma)
v_x+=dt/6*(k0+2*k1+2*k2+k3)
return(v_x)
@njit
def dv_ydt(v_x,v_y,D,m,beta,gamma,g):
f1=-beta*D/m
f2=-gamma*D**2/m
dvdt=-g+f1*v_y+f2*(v_x**2+v_y**2)**0.5*v_y
return(dvdt)
@njit
def rk4_v_y(v_x,v_y,D,m,beta,gamma,dt,g):
k0=dv_ydt(v_x,v_y,D,m,beta,gamma,g)
k1=dv_ydt(v_x,v_y+dt*k0/2,D,m,beta,gamma,g)
k2=dv_ydt(v_x,v_y+dt*k1/2,D,m,beta,gamma,g)
k3=dv_ydt(v_x,v_y+dt*k2,D,m,beta,gamma,g)
v_y+=dt/6*(k0+2*k1+2*k2+k3)
return(v_y)
@njit
def dr_xdt(v_x):
return(v_x)
@njit
def rk4_r_x(r_x,v_x,dt):
# the velocity is held constant over the step, so all four RK4 slopes coincide
r_x+=dt*dr_xdt(v_x)
return(r_x)
@njit
def dr_ydt(v_y):
return(v_y)
@njit
def rk4_r_y(r_y,v_y,dt):
# the velocity is held constant over the step, so all four RK4 slopes coincide
r_y+=dt*dr_ydt(v_y)
return(r_y)
# -
def trajectory(theta,v,y,D,m,beta,gamma,g,time_steps):
theta=theta*np.pi/180
v_i_x=v*np.cos(theta)
v_i_y=v*np.sin(theta)
D=D*0.01
m=m*0.001
beta=beta*10**(-4)
dt=1/time_steps
gamma*=10**(-2)
i=0
v_x_rk=[v_i_x]
v_y_rk=[v_i_y]
r_x_rk=[0]
r_y_rk=[y]
while r_y_rk[i]>=0:
v_x_rk.append(rk4_v_x(v_x_rk[i],v_y_rk[i],D,m,beta,gamma,dt))
v_y_rk.append(rk4_v_y(v_x_rk[i],v_y_rk[i],D,m,beta,gamma,dt,g))
r_x_rk.append(rk4_r_x(r_x_rk[i],v_x_rk[i],dt))
r_y_rk.append(rk4_r_y(r_y_rk[i],v_y_rk[i],dt))
i+=1
a=np.where(r_y_rk==np.max(r_y_rk))
y_max=np.max(r_y_rk)
x_max=r_x_rk[a[0][0]]
fig=plt.figure(figsize=(10,10),facecolor='white')
plt.plot(r_x_rk,r_y_rk,'--k',x_max,y_max,'ro')
plt.xlabel('Range [m]')
plt.ylabel('Height [m]')
plt.legend(["Trajectory","Max Height"])
plt.title("Time of flight=%.3fs, Range=%.3fm, Maximum height=%.3fm"%(i*dt,r_x_rk[-1],y_max),fontsize=16)
plt.grid()
plt.xticks(np.arange(0,r_x_rk[-1],step=1.0))
plt.yticks(np.arange(0,y_max+2,step=1.0))
plt.xlim((0,r_x_rk[-1]))
plt.ylim((0,y_max+1))
plt.show()
iplot=interactive(trajectory,
theta=(0,90,1),
v=(1,25,1),
y=(0,50,1),
D=(1,50,1),
m=(10,1000,10),
beta=(0,100,0.1),
gamma=(0,10,0.1),
g=(1,30,1),
time_steps=(1,10000,1))
iplot
| Projectile motion/With Drag/code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kareem1925/Ismailia-school-of-AI/blob/master/quantum%20machine%20learning%20session/quantum_kernels_svm.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="B-4tNSOW1rm4"
# # Integrating quantum kernels into scikit-learn
# + [markdown] id="VjuxMoLk1rnB"
# This notebook provides a didactic template to use *scikit-learn*'s **support vector machine** in combination with a **quantum kernel**. The template's placeholder kernel has been filled in below with a squeezed-state kernel.
# + [markdown] id="OEoz2TX11rnB"
# ## Important Disclaimer
# This notebook is adapted from the guest lectures of this course [QML](https://www.edx.org/course/quantum-machine-learning-2) by Prof <NAME>
#
# + [markdown] id="lIWNwALT1rnC"
# I tried to implement the implicit approach of this paper [paper](https://arxiv.org/abs/1803.07128) By <NAME> & <NAME>.
# + id="WvqcNnlg1rnD"
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from numba import jit # I used numba to accelerate nested for in the kernel function
from sklearn.datasets import make_moons, make_blobs, make_circles, make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
# + [markdown] id="mK0lrWoK1rnE"
# ### Preliminaries
# + [markdown] id="Wn317_dM1rnE"
# The quantum kernel -- as any other real-valued kernel -- is a function that takes two data inputs x1 and x2 and maps them to a real number. Here we use a squeezed-state overlap kernel (following the Schuld & Killoran paper above), where `amp` sets the squeezing strength.
# + id="HT7lCBLI1rnF"
@jit
def kernel(x1, x2,amp):
a = np.array((1/np.cosh(amp)) * (1/np.cosh(amp)))
# print('a= ', a)
b = 1 - (np.exp(1j*(x2-x1)) * np.tanh(amp) * np.tanh(amp))
# print('b= ', b)
# print("sqrt ",np.sqrt(a/b))
return abs(np.dot(np.sqrt(a/b).T.conj(),np.sqrt(a/b)))
# + [markdown] id="4z28WpFb1rnF"
# Scikit-learn's Support Vector Machine estimator takes kernel Gram matrices. We therefore create a function that, given two lists of data points A and B, computes the Gram matrix whose entries are the pairwise kernels
# + id="8MsyfO9b1rnF"
def gram(A, B,amp):
gram = np.zeros((len(A), len(B)))
for id1, x1 in enumerate(A):
# print(id1,x1)
for id2, x2 in enumerate(B):
temp = kernel(x1, x2,amp)
# print('value = ', temp)
gram[id1, id2] = temp
# print("index_1: ",id1," index_1: ",id2)
# print("vectors: ",x1," ",x2)
return gram
# + [markdown] id="xU5LCMun1rnG"
# Let's look at an example where we feed one list of data points into both slots; the diagonal entries are the kernel of each point with itself.
# + colab={"base_uri": "https://localhost:8080/"} id="0sSVWdE41rnG" outputId="612a0952-3d98-4509-de4c-7d075c66a496"
data = np.array([[1, 2], [3, 4]])
#print(data.shape)
gram(data, data,.5)
# + [markdown] id="4wcDov2O1rnI"
# Another example constructs rectangular gram matrices from two data lists of different length. This will be useful for new predictions.
# + colab={"base_uri": "https://localhost:8080/"} id="ITvvkCuD1rnI" outputId="cca161da-51c1-43ef-bf48-4875574920a0"
data1 = np.array([[1, 2], [3, 4]])
data2 = np.array([[2, 4]])
gram(data1, data2,1.5)
# + [markdown] id="OzNkFEnL1rnJ"
# ### Data preparation
# + [markdown] id="2dRMmSju1rnJ"
# Let's generate a toy dataset (concentric circles) and split it into training and test sets, or swap in any dataset you want to play with.
# + colab={"base_uri": "https://localhost:8080/"} id="VK_YVpDX1rnJ" outputId="e61002ea-3dde-4e18-e6be-99d52792b0dc"
X,y = make_circles(n_samples=400,random_state=1,noise=0.20,factor=0.05)
X_train, X_test, Y_train, Y_test = train_test_split(X, y,test_size=0.2,random_state=0)
X_train.shape,Y_train[2]
# + [markdown] id="4X41JZuM1rnK"
# If you have matplotlib installed, you can plot the two features of the training data.
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="wTstW6gG1rnK" outputId="0afb7cc5-bc70-44b5-924d-90d68a9a5e28"
# preprocessing the data
scale = StandardScaler().fit(X_train)
X_train = scale.transform(X_train)
X_test = scale.transform(X_test)
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure(2, figsize=(8, 6))
plt.scatter(X_train[:, 0], X_train[:, 1], c=Y_train)
plt.show()
# + [markdown] id="QRZgh2181rnL"
# To prepare the data for the SVM with a custom kernel, we have to compute two different Gram matrices. The "training Gram matrix" computes kernels on pairwise entries of the training set, while the "test Gram matrix" combines test and training points.
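# The shape contract for `kernel='precomputed'` can be seen with any stand-in kernel (a sketch using scikit-learn's RBF kernel in place of the quantum one): `fit` takes an (n_train, n_train) matrix, while `predict` takes (n_test, n_train).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
Xtr, Xte = rng.normal(size=(20, 2)), rng.normal(size=(5, 2))
ytr = (Xtr[:, 0] > 0).astype(int)

clf = SVC(kernel='precomputed')
clf.fit(rbf_kernel(Xtr, Xtr), ytr)        # square training Gram matrix
pred = clf.predict(rbf_kernel(Xte, Xtr))  # rectangular test Gram matrix
print(pred.shape)  # (5,)
```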
# + id="o0kGo0qI1rnL"
gram_train = gram(X_train, X_train,1.)
gram_test = gram(X_test, X_train,1.)
# + [markdown] id="FqF-DGPg1rnM"
# ### Training
# + [markdown] id="Pl5e4hDv1rnM"
# Now we can train a Support Vector Machine and, for example, compute the accuracy on the test set. We have to select the 'precomputed' option to feed custom kernels.
#
# The fitting function takes the "training gram matrix". To make predictions on the test set using the trained model, we have to feed it the "test Gram matrix".
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="_LXVB-2G1rnM" outputId="b0f31422-94ee-4474-c7ea-bc5169ee7e2b"
plt.imshow(gram_train,interpolation='nearest',origin='upper')
# + colab={"base_uri": "https://localhost:8080/"} id="J-mx18pn1rnN" outputId="bd7e5c47-6217-4027-977a-9314a0ea8d24"
svm = SVC(C=1.0,kernel='precomputed',random_state=0)
svm.fit(gram_train, Y_train)
print("training score = ", svm.score(gram_train,Y_train))
scores = cross_val_score(X=gram_train,y=Y_train,estimator=svm,cv=10,n_jobs=4)
print("validation accuracy of the estimator = ", scores.mean())
predictions_test = svm.predict(gram_test)
print('testing accuracy = ', accuracy_score(predictions_test, Y_test))
# + [markdown] id="EIMrPC8T77B5"
# Classical SVM with RBF kernel
# + id="GAhxL6gh7qwh" colab={"base_uri": "https://localhost:8080/"} outputId="cbbd684c-64fa-4902-b647-c2f370cd3540"
classical_svm = SVC(C=1.0,kernel='rbf',random_state=0)
classical_svm.fit(X_train, Y_train)
print("training score = ", classical_svm.score(X_train,Y_train))
scores = cross_val_score(X=X_train,y=Y_train,estimator=classical_svm,cv=10,n_jobs=4)
print("validation accuracy of the estimator = ", scores.mean())
predictions_test = classical_svm.predict(X_test)
print('testing accuracy = ', accuracy_score(predictions_test, Y_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="BgEy6mja1rnO" outputId="79520ac8-e0ba-43ef-bc8f-c91ff69ef712"
xx, yy = np.meshgrid(np.linspace(-3.5,3.5, 30), np.linspace(-3.5, 3.5, 30))
X_grid = [np.array([x, y]) for x, y in zip(xx.flatten(), yy.flatten())]
tt=gram(X_grid, X_train,1)
# start plot
plt.figure()
cm = plt.cm.RdYlBu
# plot decision regions
predictions_grid = [svm.predict(tt)]
Z = np.reshape(predictions_grid, xx.shape)
cnt = plt.contourf(xx, yy, Z, levels=np.arange(0, 1.1, 0.1), cmap=cm, alpha=.8)
plt.colorbar(cnt, ticks=[0, 0.2, .4,.6,.8,1])
# plot data
plt.scatter(X_train[:, 0][Y_train==0], X_train[:, 1][Y_train==0], c='r', marker='*', edgecolors='k',label='train0')
plt.scatter(X_train[:, 0][Y_train==1], X_train[:, 1][Y_train==1], c='b', marker='*', edgecolors='k',label='train1')
plt.scatter(X_test[:, 0][Y_test==0], X_test[:, 1][Y_test==0], c='r', marker='o', edgecolors='k',label='test0')
plt.scatter(X_test[:, 0][Y_test==1], X_test[:, 1][Y_test==1], c='b', marker='o', edgecolors='k',label='test1')
#plt.ylim(-0.4, 0.4)
#plt.xlim(-0.4, 0.4)
plt.legend()
plt.tight_layout()
plt.show()
# + id="sxzhLJy91rnP"
| quantum machine learning session/quantum_kernels_svm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # pandapower Interface
# This example (1) shows how to convert an ANDES system (`ssa`) to a pandapower network (`ssp`), (2) benchmarks the powerflow results, (3) shows how to alter `ssa` active power setpoints according to `ssp` results.
#
# This interface is mainly developed by [<NAME>](https://www.linkedin.com/in/jinning-wang-846490199/).
# +
import andes
from andes.interop.pandapower import to_pandapower, \
make_link_table, runopp_map, add_gencost
andes.config_logger(20)
import pandapower as pp
import numpy as np
import pandas as pd
from math import pi
# -
# ## Load case
# Here we load a second copy of the ANDES system as `ss0` for the pandapower conversion, which leaves `ssa` untouched.
#
# This is useful if you need to modify the `ssa` parameters or setpoints later.
# +
ssa = andes.load(andes.get_case('ieee14/ieee14_ieeet1.xlsx'),
setup=False,
no_output=True,
default_config=True)
ssa.Toggler.u.v = [0, 0]
ssa.setup()
ss0 = andes.load(andes.get_case('ieee14/ieee14_ieeet1.xlsx'),
setup=False,
no_output=True,
default_config=True)
ss0.Toggler.u.v = [0, 0]
ss0.setup()
# -
# ## Convert to pandapower net
# Convert the ANDES system `ss0` to the pandapower net `ssp`.
ssp = to_pandapower(ss0)
# Add generator cost data.
# +
gen_cost = np.array([
[2, 0, 0, 3, 0.0430293, 20, 0],
[2, 0, 0, 3, 0.25, 20, 0],
[2, 0, 0, 3, 0.01, 40, 0],
[2, 0, 0, 3, 0.01, 40, 0],
[2, 0, 0, 3, 0.01, 40, 0]
])
add_gencost(ssp, gen_cost)
# -
# Inspect the pandapower net `ssp`.
ssp
# ## Compare Power Flow Results
# Run power flow of `ssa`.
ssa.PFlow.run()
# +
# ssa
ssa_res_gen = pd.DataFrame(columns=['name', 'p_mw', 'q_mvar', 'va_degree', 'vm_pu'])
ssa_res_gen['name'] = ssa.PV.as_df()['name']
ssa_res_gen['p_mw'] = ssa.PV.p.v * ssa.config.mva
ssa_res_gen['q_mvar'] = ssa.PV.q.v * ssa.config.mva
ssa_res_gen['va_degree'] = ssa.PV.a.v * 180 / pi
ssa_res_gen['vm_pu'] = ssa.PV.v.v
ssa_res_slack = pd.DataFrame([[ssa.Slack.name.v[0], ssa.Slack.p.v[0] * ssa.config.mva,
ssa.Slack.q.v[0] * ssa.config.mva, ssa.Slack.a.v[0] * 180 / pi,
ssa.Slack.v.v[0]]],
columns=ssa_res_gen.columns,
)
ssa_res_gen = pd.concat([ssa_res_gen, ssa_res_slack]).reset_index(drop=True)
# ssp
pp.runpp(ssp)
ssp_res_gen = pd.concat([ssp.gen['name'], ssp.res_gen], axis=1)
res_gen_concat = pd.concat([ssa_res_gen, ssp_res_gen], axis=1)
# ssa
ssa_pf_bus = ssa.Bus.as_df()[["name"]].copy()
ssa_pf_bus['v_andes'] = ssa.Bus.v.v
ssa_pf_bus['a_andes'] = ssa.Bus.a.v * 180 / pi
# ssp
ssp_pf_bus = ssa.Bus.as_df()[["name"]].copy()
ssp_pf_bus['v_pp'] = ssp.res_bus['vm_pu']
ssp_pf_bus['a_pp'] = ssp.res_bus['va_degree']
pf_bus_concat = pd.concat([ssa_pf_bus, ssp_pf_bus], axis=1)
# -
# ### Generation
# In the table below, the left half are ANDES results, and the right half are from pandapower
res_gen_concat.round(4)
# ### Bus voltage and angle
# Likewise, the left half are ANDES results, and the right half are from pandapower
pf_bus_concat.round(4)
# ## Generator dispatch based on OPF from pandapower
# Prepare the link table.
# It assigns each StaticGen used in the OPF to its dynamic counterpart; in this case, all the SynGen are GENROU.
link_table = make_link_table(ssa)
link_table
# Run the TDS in ANDES to 2 s.
# +
ssa.TDS.config.tf = 2
ssa.TDS.run()
ssa.TDS.plt.plot(ssa.GENROU.Pe)
# -
# Get the OPF results from pandapower. The `ssp_res` has been converted to p.u.
ssp_res = runopp_map(ssp, link_table)
ssp_res
# Now dispatch the results into `ssa`, where the active power setpoints are updated via `TurbineGov.pref0`.
ssa_gov_idx = list(ssp_res['gov_idx'][~ssp_res['gov_idx'].isna()])
ssa.TurbineGov.set(src='pref0', idx=ssa_gov_idx, attr='v', value=ssp_res['p'][~ssp_res['gov_idx'].isna()])
ssa.TurbineGov.get(src='pref0', idx=ssa_gov_idx, attr='v')
# Now run the TDS to 50s.
ssa.TDS.config.tf = 50
ssa.TDS.run()
# We can see the outputs of `GENROU` are rearranged by the OPF results.
ssa.TDS.plt.plot(ssa.GENROU.Pe)
| docs/source/examples/pandapower.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import timeit
# assuming myfunction.py defines a function named myfunction()
from myfunction import myfunction
# wrap the call in a lambda so Timer times the call itself,
# not the value returned by calling it once up front
t = timeit.Timer(lambda: myfunction('Hello World'))
t.timeit()  # e.g. 3.32132323232 seconds for the default 1,000,000 runs
t.repeat(2, 2000000)
# hotshot is Python 2 only; it was removed in Python 3
import hotshot
prof = hotshot.Profile('my_hotshot_stats')
prof.run("myfunction('Hello World')").close()
import hotshot.stats
hotshot.stats.load('my_hotshot_stats').strip_dirs().sort_stats('time').print_stats()
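Since `hotshot` was removed in Python 3, a roughly equivalent profile can be collected with `cProfile` and `pstats`. A minimal sketch — the `myfunction` below is a stand-in for whatever function is being profiled:

```python
import cProfile
import io
import pstats

def myfunction(text):
    # stand-in for the function being profiled
    return "".join(reversed(text)) * 100

# collect profile data into a stats file, analogous to hotshot.Profile(...).run(...)
cProfile.runctx("myfunction('Hello World')", globals(), locals(), "my_cprofile_stats")

# load, sort, and print, analogous to hotshot.stats.load(...)
stream = io.StringIO()
stats = pstats.Stats("my_cprofile_stats", stream=stream)
stats.strip_dirs().sort_stats("tottime").print_stats()
print(stream.getvalue())
```

Unlike `hotshot`, `cProfile` is maintained and available in every current Python version, and `pstats` accepts the same `strip_dirs`/`sort_stats` chaining shown above.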
| Chapter10/1 profiling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import scipy.stats
import numpy as np
scipy.stats.norm.ppf(0.95)
def VaR(Position,sigma,Quantile):
return Position * sigma * scipy.stats.norm.ppf(Quantile)
VaR(1000,0.05,0.95)
w1 = 0.5
w2 = 0.5
sigma1 = 0.07
sigma2 = 0.03
corr = 0.4
portfoliovar = w1**2*sigma1**2+w2**2*sigma2**2+2*(w1*w2*sigma1*sigma2*corr)
portfoliovol = portfoliovar**(1/2)
portfoliovol
VaR(1000,portfoliovol,0.95)
VaR1 = VaR(1000*w1,sigma1,0.95)
VaR2 = VaR(1000*w2,sigma2,0.95)
vector = np.array([VaR1, VaR2])
vector
corrmatrix = np.array([[1,corr],[corr,1]])
corrmatrix
(np.dot(np.dot(vector,corrmatrix),vector))**(1/2)
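The two routes above — applying `VaR` to the closed-form two-asset portfolio volatility, versus aggregating the individual VaRs through the correlation matrix — should give the same number, because the quantile factor scales out of the quadratic form. A quick sanity check with the same figures:

```python
import numpy as np
import scipy.stats

def VaR(Position, sigma, Quantile):
    return Position * sigma * scipy.stats.norm.ppf(Quantile)

w1, w2 = 0.5, 0.5
sigma1, sigma2 = 0.07, 0.03
corr = 0.4

# route 1: closed-form two-asset portfolio volatility, then VaR
portfoliovar = w1**2*sigma1**2 + w2**2*sigma2**2 + 2*w1*w2*sigma1*sigma2*corr
var_direct = VaR(1000, portfoliovar**0.5, 0.95)

# route 2: individual VaRs aggregated through the correlation matrix
vector = np.array([VaR(1000*w1, sigma1, 0.95), VaR(1000*w2, sigma2, 0.95)])
corrmatrix = np.array([[1, corr], [corr, 1]])
var_matrix = float(np.sqrt(vector @ corrmatrix @ vector))

print(var_direct, var_matrix)  # the two agree
```

This equivalence holds whenever every component VaR uses the same quantile, which is why the matrix form is a convenient shortcut for larger portfolios.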
import pandas_datareader.data as reader
import datetime as dt
end = dt.datetime.now()
start = dt.datetime(end.year - 1, end.month,end.day)
df = reader.get_data_yahoo(['AAPL', 'MSFT','TSLA'],start,end)['Adj Close']
returns = np.log(1+ df.pct_change())
returns
returns.std()
Position = df.iloc[-1]
Position
VaRarray = []
for i in range(len(Position)):
VaRarray.append(VaR(Position[i],returns.std()[i],0.95))
VaRarray
vector = np.array(VaRarray)
returns.corr()
(np.dot(np.dot(vector,returns.corr()),vector))**(1/2)
| Value at Risk of a Portfolio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %cd section2
# %run 'sample_code_c2_1.py'
# %run 'sample_code_c2_2.py'
data
# %run 'sample_code_c2_3.py'
# %run 'sample_code_c2_4.py'
# %run 'sample_code_c2_5.py'
# %run 'sample_code_c2_7.py'
# %run 'sample_code_c2_8.py'
# %run 'sample_code_c2_9.py'
# %run 'sample_code_c2_11.py'
# %run 'sample_code_c2_12.py'
# %run 'sample_code_c2_13.py'
# %run 'sample_code_c2_14.py'
| notebooks/sec2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This cell is added by sphinx-gallery
# %matplotlib inline
import mrinversion
print(f'You are using mrinversion v{mrinversion.__version__}')
# -
#
# # 2D MAT data of KMg0.5O·4SiO2 glass
#
# The following example illustrates an application of the statistical learning method
# applied in determining the distribution of the nuclear shielding tensor parameters
# from a 2D magic-angle turning (MAT) spectrum. In this example, we use the 2D MAT
# spectrum [#f1]_ of $\text{KMg}_{0.5}\text{O}\cdot4\text{SiO}_2$ glass.
#
# Setup for the matplotlib figure.
#
#
# +
import csdmpy as cp
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm
from csdmpy import statistics as stats
from mrinversion.kernel.nmr import ShieldingPALineshape
from mrinversion.kernel.utils import x_y_to_zeta_eta
from mrinversion.linear_model import SmoothLassoCV, TSVDCompression
from mrinversion.utils import plot_3d, to_Haeberlen_grid
# -
# Setup for the matplotlib figures.
#
#
# function for plotting 2D dataset
def plot2D(csdm_object, **kwargs):
plt.figure(figsize=(4.5, 3.5))
ax = plt.subplot(projection="csdm")
ax.imshow(csdm_object, cmap="gist_ncar_r", aspect="auto", **kwargs)
ax.invert_xaxis()
ax.invert_yaxis()
plt.tight_layout()
plt.show()
# ## Dataset setup
#
# ### Import the dataset
#
# Load the dataset. Here, we import the dataset as the CSDM data-object.
#
#
# +
# The 2D MAT dataset in csdm format
filename = "https://zenodo.org/record/3964531/files/KMg0_5-4SiO2-MAT.csdf"
data_object = cp.load(filename)
# For inversion, we are only interested in the real part of the complex dataset.
data_object = data_object.real
# We will also convert the coordinates of both dimensions from Hz to ppm.
_ = [item.to("ppm", "nmr_frequency_ratio") for item in data_object.dimensions]
# -
# Here, the variable ``data_object`` is a
# `CSDM <https://csdmpy.readthedocs.io/en/latest/api/CSDM.html>`_
# object that holds the real part of the 2D MAT dataset. The plot of the MAT dataset
# is
#
#
plot2D(data_object)
# There are two dimensions in this dataset. The dimension at index 0 is the pure
# anisotropic spinning sideband dimension, while the dimension at index 1 is the
# isotropic chemical shift dimension.
#
# ### Prepping the data for inversion
# **Step-1: Data Alignment**
#
# When using the csdm objects with the ``mrinversion`` package, the dimension at index
# 0 must be the dimension undergoing the linear inversion. In this example, we plan to
# invert the pure anisotropic shielding line-shape. In the ``data_object``, the
# anisotropic dimension is already at index 0 and, therefore, no further action is
# required.
#
# **Step-2: Optimization**
#
# Also notice, the signal from the 2D MAT dataset occupies a small fraction of the
# two-dimensional frequency grid. For optimum performance, truncate the dataset to the
# relevant region before proceeding. Use the appropriate array indexing/slicing to
# select the signal region.
#
#
data_object_truncated = data_object[:, 75:105]
plot2D(data_object_truncated)
# ## Linear Inversion setup
#
# ### Dimension setup
#
# **Anisotropic-dimension:**
# The dimension of the dataset that holds the pure anisotropic frequency
# contributions. In ``mrinversion``, this must always be the dimension at index 0 of
# the data object.
#
#
anisotropic_dimension = data_object_truncated.dimensions[0]
# **x-y dimensions:**
# The two inverse dimensions corresponding to the `x` and `y`-axis of the `x`-`y` grid.
#
#
inverse_dimensions = [
cp.LinearDimension(count=25, increment="370 Hz", label="x"), # the `x`-dimension.
cp.LinearDimension(count=25, increment="370 Hz", label="y"), # the `y`-dimension.
]
# ### Generating the kernel
#
# For MAF/PASS datasets, the kernel corresponds to the pure nuclear shielding anisotropy
# sideband spectra. Use the
# :class:`~mrinversion.kernel.nmr.ShieldingPALineshape` class to generate a
# shielding spinning sidebands kernel.
#
#
sidebands = ShieldingPALineshape(
anisotropic_dimension=anisotropic_dimension,
inverse_dimension=inverse_dimensions,
channel="29Si",
magnetic_flux_density="9.4 T",
rotor_angle="54.735°",
rotor_frequency="790 Hz",
number_of_sidebands=anisotropic_dimension.count,
)
# Here, ``sidebands`` is an instance of the
# :class:`~mrinversion.kernel.nmr.ShieldingPALineshape` class. The required
# arguments of this class are the `anisotropic_dimension`, `inverse_dimension`, and
# `channel`. We have already defined the first two arguments in the previous
# sub-section. The value of the `channel` argument is the nucleus observed in the
# MAT/PASS experiment. In this example, this value is '29Si'.
# The remaining arguments, such as the `magnetic_flux_density`, `rotor_angle`,
# and `rotor_frequency`, are set to match the conditions under which the 2D MAT/PASS
# spectrum was acquired, which in this case corresponds to acquisition at
# the magic-angle and spinning at a rotor frequency of 790 Hz in a 9.4 T magnetic
# flux density.
#
# The value of the `rotor_frequency` argument is the effective anisotropic
# modulation frequency. In a MAT measurement, the anisotropic modulation frequency
# is the same as the physical rotor frequency. For other measurements like the extended
# chemical shift modulation sequences (XCS) [#f3]_, or its variants, the effective
# anisotropic modulation frequency is lower than the physical rotor frequency and
# should be set appropriately.
#
# The argument `number_of_sidebands` is the maximum number of computed
# sidebands in the kernel. For most two-dimensional isotropic vs. pure
# anisotropic spinning-sideband correlation measurements, the sampling along the
# sideband dimension is the rotor or the effective anisotropic modulation
# frequency. Therefore, the value of the `number_of_sidebands` argument is
# usually the number of points along the sideband dimension.
# In this example, this value is 32.
#
# Once the ShieldingPALineshape instance is created, use the kernel()
# method to generate the spinning sideband lineshape kernel.
#
#
K = sidebands.kernel(supersampling=2)
print(K.shape)
# The kernel ``K`` is a NumPy array of shape (32, 625), where the axes with 32 and
# 625 points are the spinning sidebands dimension and the features (x-y coordinates)
# corresponding to the $25\times 25$ `x`-`y` grid, respectively.
#
#
# ### Data Compression
#
# Data compression is optional but recommended. It may reduce the size of the
# inverse problem and, thus, the computation time.
#
#
# +
new_system = TSVDCompression(K, data_object_truncated)
compressed_K = new_system.compressed_K
compressed_s = new_system.compressed_s
print(f"truncation_index = {new_system.truncation_index}")
# -
# ## Solving the inverse problem
#
# ### Smooth LASSO cross-validation
#
# Solve the smooth-lasso problem. Use the statistical learning ``SmoothLassoCV``
# method to solve the inverse problem over a range of α and λ values and determine
# the best nuclear shielding tensor parameter distribution for the given 2D MAT
# dataset. Considering the limited build time for the documentation, we'll perform
# the cross-validation over a smaller $5 \times 5$ `x`-`y` grid. You may
# increase the grid resolution for your problem if desired.
#
#
# +
# setup the pre-defined range of alpha and lambda values
lambdas = 10 ** (-5.4 - 1 * (np.arange(5) / 4))
alphas = 10 ** (-4.5 - 1.5 * (np.arange(5) / 4))
# setup the smooth lasso cross-validation class
s_lasso = SmoothLassoCV(
alphas=alphas, # A numpy array of alpha values.
lambdas=lambdas, # A numpy array of lambda values.
sigma=0.00070, # The standard deviation of noise from the MAT dataset.
folds=10, # The number of folds in n-folds cross-validation.
inverse_dimension=inverse_dimensions, # previously defined inverse dimensions.
verbose=1, # If non-zero, prints the progress as the computation proceeds.
    max_iterations=20000,  # The maximum number of allowed iterations.
)
# run fit using the compressed kernel and compressed data.
s_lasso.fit(compressed_K, compressed_s)
# -
# ### The optimum hyper-parameters
#
# Use the :attr:`~mrinversion.linear_model.SmoothLassoCV.hyperparameters` attribute of
# the instance for the optimum hyper-parameters, $\alpha$ and $\lambda$,
# determined from the cross-validation.
#
#
print(s_lasso.hyperparameters)
# ### The cross-validation surface
#
# Optionally, you may want to visualize the cross-validation error curve/surface. Use
# the :attr:`~mrinversion.linear_model.SmoothLassoCV.cross_validation_curve` attribute
# of the instance, as follows
#
#
# +
CV_metric = s_lasso.cross_validation_curve # `CV_metric` is a CSDM object.
# plot of the cross validation surface
plt.figure(figsize=(5, 3.5))
ax = plt.subplot(projection="csdm")
ax.contour(np.log10(CV_metric), levels=25)
ax.scatter(
-np.log10(s_lasso.hyperparameters["alpha"]),
-np.log10(s_lasso.hyperparameters["lambda"]),
marker="x",
color="k",
)
plt.tight_layout(pad=0.5)
plt.show()
# -
# ### The optimum solution
#
# The :attr:`~mrinversion.linear_model.SmoothLassoCV.f` attribute of the instance holds
# the solution corresponding to the optimum hyper-parameters,
#
#
f_sol = s_lasso.f # f_sol is a CSDM object.
# where ``f_sol`` is the optimum solution.
#
# ### The fit residuals
#
# To calculate the residuals between the data and the predicted data (fit), use the
# :meth:`~mrinversion.linear_model.SmoothLasso.residuals` method, as follows,
#
#
# +
residuals = s_lasso.residuals(K=K, s=data_object_truncated)
# residuals is a CSDM object.
# The plot of the residuals.
plot2D(residuals, vmax=data_object_truncated.max(), vmin=data_object_truncated.min())
# -
# The standard deviation of the residuals is
#
#
residuals.std()
# ### Saving the solution
#
# To serialize the solution to a file, use the `save()` method of the CSDM object,
# for example,
#
#
f_sol.save("KMg_mixed_silicate_inverse.csdf") # save the solution
residuals.save("KMg_mixed_silicate_residue.csdf") # save the residuals
# ## Data Visualization
#
# At this point, we have solved the inverse problem and obtained an optimum
# distribution of the nuclear shielding tensor parameters from the 2D MAT dataset. You
# may use any data visualization and interpretation tool of choice for further
# analysis. In the following sections, we provide minimal visualization and analysis
# to complete the case study.
#
# ### Visualizing the 3D solution
#
#
# +
# Normalize the solution
f_sol /= f_sol.max()
# Convert the coordinates of the solution, `f_sol`, from Hz to ppm.
[item.to("ppm", "nmr_frequency_ratio") for item in f_sol.dimensions]
# The 3D plot of the solution
plt.figure(figsize=(5, 4.4))
ax = plt.subplot(projection="3d")
plot_3d(ax, f_sol, x_lim=[0, 120], y_lim=[0, 120], z_lim=[-50, -150])
plt.tight_layout()
plt.show()
# -
# From the 3D plot, we observe two distinct regions: one for the $\text{Q}^4$
# sites and another for the $\text{Q}^3$ sites.
# Select the respective regions by using the appropriate array indexing,
#
#
# +
Q4_region = f_sol[0:8, 0:8, 5:25]
Q4_region.description = "Q4 region"
Q3_region = f_sol[0:8, 7:24, 7:25]
Q3_region.description = "Q3 region"
# -
# The plot of the respective regions is shown below.
#
#
# +
# Calculate the normalization factor for the 2D contours and 1D projections from the
# original solution, `f_sol`. Use this normalization factor to scale the intensities
# from the sub-regions.
max_2d = [
f_sol.sum(axis=0).max().value,
f_sol.sum(axis=1).max().value,
f_sol.sum(axis=2).max().value,
]
max_1d = [
f_sol.sum(axis=(1, 2)).max().value,
f_sol.sum(axis=(0, 2)).max().value,
f_sol.sum(axis=(0, 1)).max().value,
]
plt.figure(figsize=(5, 4.4))
ax = plt.subplot(projection="3d")
# plot for the Q4 region
plot_3d(
ax,
Q4_region,
x_lim=[0, 120], # the x-limit
y_lim=[0, 120], # the y-limit
z_lim=[-50, -150], # the z-limit
max_2d=max_2d, # normalization factors for the 2D contours projections
max_1d=max_1d, # normalization factors for the 1D projections
cmap=cm.Reds_r, # colormap
box=True, # draw a box around the region
)
# plot for the Q3 region
plot_3d(
ax,
Q3_region,
x_lim=[0, 120], # the x-limit
y_lim=[0, 120], # the y-limit
z_lim=[-50, -150], # the z-limit
max_2d=max_2d, # normalization factors for the 2D contours projections
max_1d=max_1d, # normalization factors for the 1D projections
cmap=cm.Blues_r, # colormap
box=True, # draw a box around the region
)
ax.legend()
plt.tight_layout()
plt.show()
# -
# ### Visualizing the isotropic projections.
#
# Because the $\text{Q}^4$ and $\text{Q}^3$ regions are fully resolved
# after the inversion, evaluating the contributions from these regions is trivial.
# For example, the distributions of the isotropic chemical shifts for these regions are
#
#
# +
# Isotropic chemical shift projection of the 2D MAT dataset.
data_iso = data_object_truncated.sum(axis=0)
data_iso /= data_iso.max() # normalize the projection
# Isotropic chemical shift projection of the tensor distribution dataset.
f_sol_iso = f_sol.sum(axis=(0, 1))
# Isotropic chemical shift projection of the tensor distribution for the Q4 region.
Q4_region_iso = Q4_region.sum(axis=(0, 1))
# Isotropic chemical shift projection of the tensor distribution for the Q3 region.
Q3_region_iso = Q3_region.sum(axis=(0, 1))
# Normalize the three projections.
f_sol_iso_max = f_sol_iso.max()
f_sol_iso /= f_sol_iso_max
Q4_region_iso /= f_sol_iso_max
Q3_region_iso /= f_sol_iso_max
# The plot of the different projections.
plt.figure(figsize=(5.5, 3.5))
ax = plt.subplot(projection="csdm")
ax.plot(f_sol_iso, "--k", label="tensor")
ax.plot(Q4_region_iso, "r", label="Q4")
ax.plot(Q3_region_iso, "b", label="Q3")
ax.plot(data_iso, "-k", label="MAF")
ax.plot(data_iso - f_sol_iso - 0.1, "gray", label="residuals")
ax.set_title("Isotropic projection")
ax.invert_xaxis()
plt.legend()
plt.tight_layout()
plt.show()
# -
# Notice the shape of the isotropic chemical shift distribution for the
# $\text{Q}^4$ sites is skewed, which is expected.
#
#
# ## Analysis
#
# For the analysis, we use the
# `statistics <https://csdmpy.readthedocs.io/en/latest/api/statistics.html>`_
# module of the csdmpy package. Following is the moment analysis of the 3D volumes for
# both the $\text{Q}^4$ and $\text{Q}^3$ regions up to the second moment.
#
#
# +
int_Q4 = stats.integral(Q4_region) # volume of the Q4 distribution
mean_Q4 = stats.mean(Q4_region) # mean of the Q4 distribution
std_Q4 = stats.std(Q4_region) # standard deviation of the Q4 distribution
int_Q3 = stats.integral(Q3_region) # volume of the Q3 distribution
mean_Q3 = stats.mean(Q3_region) # mean of the Q3 distribution
std_Q3 = stats.std(Q3_region) # standard deviation of the Q3 distribution
print("Q4 statistics")
print(f"\tpopulation = {100 * int_Q4 / (int_Q4 + int_Q3)}%")
print("\tmean\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*mean_Q4))
print("\tstandard deviation\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*std_Q4))
print("Q3 statistics")
print(f"\tpopulation = {100 * int_Q3 / (int_Q4 + int_Q3)}%")
print("\tmean\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*mean_Q3))
print("\tstandard deviation\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*std_Q3))
# -
# The statistics shown above are according to the respective dimensions, that is, the
# `x`, `y`, and the isotropic chemical shifts. To convert the `x` and `y` statistics
# to commonly used $\zeta$ and $\eta$ statistics, use the
# :func:`~mrinversion.kernel.utils.x_y_to_zeta_eta` function.
#
#
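The standard deviations in the next cell follow first-order error propagation. Assuming polar-style relations behind the conversion (with $\zeta \propto \sqrt{x^2+y^2}$ and $\eta$ proportional to the polar angle scaled by $4/\pi$ — an interpretation of the code below, not stated explicitly in this example), the propagated variances are

```latex
\sigma_\zeta^2 = \frac{x^2\,\sigma_x^2 + y^2\,\sigma_y^2}{x^2 + y^2},
\qquad
\sigma_\eta^2 = \left(\frac{4}{\pi}\right)^{\!2}
\frac{y^2\,\sigma_x^2 + x^2\,\sigma_y^2}{\left(x^2 + y^2\right)^2}.
```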
# +
mean_ζη_Q3 = x_y_to_zeta_eta(*mean_Q3[0:2])
# error propagation for calculating the standard deviation
std_ζ = (std_Q3[0] * mean_Q3[0]) ** 2 + (std_Q3[1] * mean_Q3[1]) ** 2
std_ζ /= mean_Q3[0] ** 2 + mean_Q3[1] ** 2
std_ζ = np.sqrt(std_ζ)
std_η = (std_Q3[1] * mean_Q3[0]) ** 2 + (std_Q3[0] * mean_Q3[1]) ** 2
std_η /= (mean_Q3[0] ** 2 + mean_Q3[1] ** 2) ** 2
std_η = (4 / np.pi) * np.sqrt(std_η)
print("Q3 statistics")
print(f"\tpopulation = {100 * int_Q3 / (int_Q4 + int_Q3)}%")
print("\tmean\n\t\tζ:\t{0}\n\t\tη:\t{1}\n\t\tiso:\t{2}".format(*mean_ζη_Q3, mean_Q3[2]))
print(
"\tstandard deviation\n\t\tζ:\t{0}\n\t\tη:\t{1}\n\t\tiso:\t{2}".format(
std_ζ, std_η, std_Q3[2]
)
)
# -
# ## Convert the 3D tensor distribution in Haeberlen parameters
# You may re-bin the 3D tensor parameter distribution from a
# $\rho(\delta_\text{iso}, x, y)$ distribution to
# $\rho(\delta_\text{iso}, \zeta_\sigma, \eta_\sigma)$ distribution as follows.
#
#
# +
# Create the zeta and eta dimensions, as shown below.
zeta = cp.as_dimension(np.arange(40) * 4 - 40, unit="ppm", label="zeta")
eta = cp.as_dimension(np.arange(16) / 15, label="eta")
# Use the `to_Haeberlen_grid` function to convert the tensor parameter distribution.
fsol_Hae = to_Haeberlen_grid(f_sol, zeta, eta)
# -
# ### The 3D plot
#
#
plt.figure(figsize=(5, 4.4))
ax = plt.subplot(projection="3d")
plot_3d(ax, fsol_Hae, x_lim=[0, 1], y_lim=[-40, 120], z_lim=[-50, -150], alpha=0.4)
plt.tight_layout()
plt.show()
# ## References
#
# .. [#f1] <NAME>., <NAME>., <NAME>., <NAME>.,
# <NAME>. Sideband separation experiments in NMR with phase
# incremented echo train acquisition, J. Chem. Phys. 138, 4803142, (2013).
# `doi:10.1063/1.4803142. <https://doi.org/10.1063/1.4803142>`_
#
# .. [#f3] <NAME>., Extended chemical-shift modulation, <NAME>. Res., **85**, 3,
# (1989).
# `10.1016/0022-2364(89)90253-9 <https://doi.org/10.1016/0022-2364(89)90253-9>`_
#
#
| docs/notebooks/auto_examples/sideband/plot_2D_KMg0.5O4SiO2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Reading data line by line
file = "/Users/rakeshravi/Documents/Capstone Project/Secondary Dataset - Host Dataset/netflow_day-02.txt"
# reading data into a list
n = 1  # number of leading lines to skip (assumed)
with open(file, "r") as ins:
    data = []
    for i in range(1, n):
        ins.readline()  # skip leading lines on the file handle, not the path string
    for line in ins:
        data.append(line)
print(len(data))
#Reading data line by line
file = "/home/rk9cx/LANL/netflow_day-02.txt"
#reading data into a set
with open(file, "r") as ins:
data = []
prev_lis = []
i = 1
for line in ins:
lis = line.split(",")
        if lis[2:7] != prev_lis[2:7]:  # key fields changed, so keep the previous record
data.append(prev_lis)
prev_lis= lis
i = i +1
if i % 1000000 == 0:
print("Million iteration")
print(len(data))
with open("file.txt", "w") as output:
output.write(str(data))
# +
import numpy as np
import pandas as pd

# `header` is assumed to be defined earlier with the netflow column names
df = pd.DataFrame.from_records(data, columns=header)
# drop_duplicates returns a new DataFrame, so assign the result back
df = df.drop_duplicates(subset=['Time', 'SrcDevice', 'DstDevice', 'Protocol', 'SrcPort', 'DstPort'], keep='last')
dsts = df.loc[:,["Time","DstDevice"]]
np.savetxt('Time-DstDevice.txt', dsts.values, fmt='%d,%d,', delimiter=",", header="Time,Duration,")
# -
sample[1].split(",")[2:7]
def chunker(seq, size):
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
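A quick illustration of what `chunker` yields on a small list — consecutive slices of the requested size, with a shorter final chunk:

```python
def chunker(seq, size):
    # lazily yield consecutive slices of length `size`
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

chunks = list(chunker(list(range(7)), 3))
print(chunks)  # → [[0, 1, 2], [3, 4, 5], [6]]
```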
import os
import pandas as pd
os.chdir("/home/rk9cx/")
df = pd.read_csv("singular.csv", index_col=False)
df.tail()
| LANL - Host Logs/EDA-Netflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### <span style="color:yellow">Regular Expressions</span>
# - [Regular Expression online for Python] https://regex101.com
# - Machine learning involves data preprocessing, a tedious and time-consuming step.
# - For text preprocessing in particular, regular expressions can save a great deal of time.
# - The material below is summarized from the wikidoc.net book.
# - Fully mastering regular expressions takes a lot of practice.
# - This summary covers only the basics (Chapter 1, Section 1) for that practice, but it is very useful.
# - Regular expressions are simple, yet quickly forgotten as time passes.
# - Please come back whenever you have time, continuously or periodically, and review the basics.
# - Then at some point you will be able to handle regular expressions with complete ease.
# - Until that day comes, read this again and again and practice applying it whenever you can.
# ----
import re  # the regular expression module
# ----
# ##### 1) <span style="color:yellow">The . symbol</span>
# - . matches any single character
r = re.compile("a.c") # any single character may appear between a and c
r.search("kkk") # no match; nothing is returned
r.search("abc")
# ----
# ##### 2) <span style="color:yellow">The ? symbol</span>
# - ? means the character before it may or may not be present
r = re.compile("ab?c") # b may be treated as present or absent
r.search("abbc") # no match; nothing is returned
print(r.search("abc")) # b treated as present, so abc matches
print(r.search("ac")) # b treated as absent, so ac matches
re.search("ab?c", "abc") # match on the fly without pre-compiling (slower)
# ----
# ##### 3) <span style="color:yellow">The * symbol</span>
# - * matches zero or more of the preceding character
r = re.compile("ab*c")
r.search("a") # no match; nothing is returned
print(r.search("ac")) # 0 b's
print(r.search("abc")) # 1 b
print(r.search("abbbbc")) # 4 b's
# ##### 4) <span style="color:yellow">The + symbol</span>
# - + matches one or more of the preceding character
r = re.compile("ab+c")
r.search("ac") # no match; nothing is returned
print(r.search("abc")) # 1 b
print(r.search("abbbbc")) # 4 b's
# ----
# ##### 5) <span style="color:yellow">The ^ symbol</span>
# - ^ anchors the match to the start of the string
r = re.compile("^a")
r.search("bbc") # no match; nothing is returned
r.search("ab") # only matches strings that start with a
# ----
# ##### 6) <span style="color:yellow">The {number} symbol</span>
# - appending {number} to a character matches that character repeated exactly that many times
# +
# match when exactly 2 b's appear between a and c
r = re.compile("ab{2}c")
r.search("ac") # no match; nothing is returned
r.search("abc") # no match; nothing is returned
# -
print(r.search("abbc")) # matches when b appears exactly twice
print(r.search("abbbbbc")) # no match; prints None
# ----
# ##### 7) <span style="color:yellow"> The {number1, number2} symbol</span>
# - matches the preceding character repeated between number1 and number2 times
# +
r = re.compile("ab{2,8}c")
r.search("ac") # no match; nothing is returned
r.search("abc") # no match; nothing is returned
# -
print(r.search("abbc")) # 2 b's
print(r.search("abbbbbbbbc")) # 8 b's
r.search("abbbbbbbbbc") # no match; nothing is returned
# ----
# ##### 8) <span style="color:yellow"> The {number1,} symbol</span>
# - matches the preceding character repeated number1 or more times
# +
r=re.compile("a{2,}bc")
r.search("bc") # no match; nothing is returned
r.search("aa") # no match; nothing is returned
# -
print(r.search("aabc")) # 2 a's followed by bc match
print(r.search("aaaaaaaabc")) # 8 a's followed by bc match
# ----
# ##### 9) <span style="color:yellow"> The [ ] symbol</span>
# - characters placed inside [ ] mean: match any one of those characters
r = re.compile("[abc]") # [abc] is the same as [a-c]
r.search("zzz") # no match; nothing is returned
print(r.search("a"))
print(r.search("aaaaaaa"))
print(r.search("baac"))
print(r.search("cbaa"))
print(r.search("aBC"))
print(r.search("111")) # no match; prints None
# ----
# ##### 10) <span style="color:yellow"> The [^characters] symbol</span>
# - [^characters] is completely different from the ^ described in 5)
# - here, it matches any character other than those listed after the ^ symbol
# +
# matches any character that is not a, b, or c
r = re.compile("[^abc]")
r.search("a") # no match; nothing is returned
r.search("ab") # no match; nothing is returned
r.search("b") # no match; nothing is returned
# -
print(r.search("d"))
print(r.search("1"))
print(r.search("X") )
# ____
# #### <span style="color:yellow"> Regular expression module function examples</span>
# ----
# ##### (1) <span style="color:yellow"> The difference between re.match() and re.search()</span>
# - search() checks whether the pattern matches anywhere in the string, while match() checks whether the string matches the pattern from its very beginning.
# - Even if the pattern appears in the middle of the string, match() finds nothing unless the pattern matches at the start.
import re
r = re.compile("ab.")
r.search("kkkabc")
r.match("kkkabc") # no match; nothing is returned
r.match("abckkk")
# ----
# ##### (2) <span style="color:yellow"> re.split()</span>
# - split() splits a string on the given regular expression and returns the pieces as a list.
# - It is one of the most frequently used regex functions in natural language processing, since it is handy for tokenization.
# +
import re
text = "사과 딸기 수박 메론 바나나"
re.split(" ",text) # text.split()
# -
text.split()
# +
text = \
"""
사과
딸기
수박
메론
바나나
"""
re.split("\n",text)
# -
text.split('\n')
# +
text = "사과+딸기+수박+메론+바나나"
re.split("\+",text)
# -
text.split('+')
# ----
# ##### (3) <span style="color:yellow"> re.findall()</span>
# - findall() returns a list of all substrings that match the regular expression.
# - If there is no match, it returns an empty list.
# +
import re
text = """이름 : 김철수
전화번호 : 010 - 1234 - 1234
나이 : 30
성별 : 남"""
re.findall("\d+",text)
# -
re.findall("\d+", "문자열입니다.") # returns an empty list (no digits)
# ----
# ##### (4) <span style="color:yellow"> re.sub()</span>
# - sub() finds substrings matching the pattern and replaces them with another string.
# +
import re
text = "Regular expression : A regular expression, regex or regexp[1] (sometimes called a rational expression)[2][3] is, in theoretical computer science and formal language theory, a sequence of characters that define a search pattern."
re.sub('[^a-zA-Z]',' ',text)
# +
p = re.compile("(내|나의|네꺼)")
p.sub("그의", "나의 물건에 손대지 마시오.")
# a fun sub: 나의 (my) is replaced with 그의 (his)
# -
# ----
# ##### <span style="color:yellow"> Text preprocessing examples with regular expressions</span>
# +
import re
text = """100 John PROF
101 James STUD
102 Mac STUD"""
re.split('\s+', text)
# -
re.findall('\d+',text) # extract runs of one or more digits
re.findall('[A-Z]{4}',text) # match 4 consecutive uppercase letters
re.findall('[A-Z][a-z]+',text) # extract an uppercase letter followed by lowercase letters
# the code below replaces every non-alphabetic character with a space
letters_only = re.sub('[^a-zA-Z]', ' ', text)
letters_only
letters_only.split() # convert from str to list
# +
text = """신 의원은 “기상예보의 정확도는 담당자의 전문성에 비례한다”며 “기상관측 직원들의 교육과 훈련을 강화해야 한다”고 말했다.
사지원 기자 <EMAIL>"""
print(text)
# +
import re
# remove the reporter's email address from the article
re.sub("[0-9a-zA-Z_\+]+@[a-z]+\.com", '', text)
# -
# ----
# ##### <span style="color:yellow"> Tokenization with regular expressions</span>
# - NLTK provides RegexpTokenizer, which performs word tokenization using regular expressions.
# - Pass the desired regular expression inside RegexpTokenizer() to control how tokenization is performed.
# +
import nltk
from nltk.tokenize import RegexpTokenizer
text = "Don't be fooled by the dark sounding name, Mr. Jone's Orphanage is as cheery as cheery goes for a pastry shop"
print(text)
tokenizer=RegexpTokenizer("[\w]+")
print(tokenizer.tokenize(text))
# +
import nltk
from nltk.tokenize import RegexpTokenizer
tokenizer=RegexpTokenizer("[\s]+", gaps=True)
print(tokenizer.tokenize(text))
# -
tokenizer=RegexpTokenizer("[\s]+", gaps=False)
print(tokenizer.tokenize(text))
| Web_Crawling/regular-expression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import cx_Oracle
import pyora
import pandas as pd
# ## CX_Oracle
# +
# database configuration
pyora.db_setup("jupyter", "admin", "")
# add TNS_ADMIN to the environment
os.environ['TNS_ADMIN'] = '/home/jovyan/work/wallet/jupyter'
# -
connection = cx_Oracle.connect("admin", "", "jupyter_high")
cursor = connection.cursor()
cursor.arraysize = 5000  # tune array size
sql = """
SELECT CUST_ID, SUM(AMOUNT_SOLD) AS TOTAL_SALES
FROM SH.SALES
GROUP BY CUST_ID
ORDER BY SUM(AMOUNT_SOLD) DESC
"""
data = cursor.execute(sql).fetchall()
pd.DataFrame(data).head()
cursor.close()
connection.close()
# ## SQL Magic
# %load_ext sql
# %sql oracle+cx_oracle://admin:@jupyter_medium
# + magic_args="result <<" language="sql"
# SELECT CUST_ID, SUM(AMOUNT_SOLD) AS TOTAL_SALES FROM SH.SALES
# GROUP BY CUST_ID
# ORDER BY SUM(AMOUNT_SOLD) DESC
# -
df = result.DataFrame()
df.head()
| notebooks/00-setup-autonomous.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Metastream on Flights Data
# Predicting departure delay.
import sys
sys.path.append("..")
from metastream.metastream import MetaStream
from metastream.metrics import normalized_mse
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import rpy2.robjects as ro
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
# %load_ext rpy2.ipython
# -
from sklearn.model_selection import KFold, train_test_split
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
from sklearn.linear_model import LinearRegression, Lasso, Ridge, SGDClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import cohen_kappa_score, mean_squared_error, classification_report, accuracy_score
from imblearn.metrics import geometric_mean_score, classification_report_imbalanced
# # Get Data
# As specified on page 80 of Rossi (2013).
# +
# # !mkdir -p ../data/flight
# # !wget http://stat-computing.org/dataexpo/2009/2007.csv.bz2 -O ../data/flight/flight2007.csv.bz2
# # !wget http://stat-computing.org/dataexpo/2009/2008.csv.bz2 -O ../data/flight/flight2008.csv.bz2
# +
# flights = [pd.read_csv('../data/flight/flight2007.csv.bz2').query('Origin=="ORD" & Dest=="LGA"'),
# pd.read_csv('../data/flight/flight2008.csv.bz2').query('Origin=="ORD" & Dest=="LGA"')]
# df = pd.concat(flights, axis=0, ignore_index=True)[['Year', 'Month', 'DayofMonth', 'DayOfWeek',
# 'CRSDepTime', 'CRSArrTime', 'FlightNum',
# 'CRSElapsedTime', 'DepDelay']]
# df.to_csv("../data/flight/flights_f.csv", index=False)
# del flights
df = pd.read_csv("../data/flight/flights_f.csv")
df.head()
# -
df.tail()
df.dropna(inplace=True)
df.sort_values('CRSDepTime', inplace=True)
df.reset_index(drop=True, inplace=True)
df.shape[0]
# # Model
# Set metastream params and train base classifiers.
metadf = []
window_size = 300
gamma_sel = 10
initial_data = 200 # the paper says 40, but that is too few for estimation
target = 'DepDelay'
models = [
SVR(),
RandomForestRegressor(random_state=42),
GaussianProcessRegressor(random_state=42),
LinearRegression(),
Ridge(),
GradientBoostingRegressor(random_state=42)]
metas = MetaStream(SGDClassifier(), models)
# Generate metafeatures and best regressor.
# +
for idx in range(initial_data):
train = df.iloc[idx*gamma_sel:idx*gamma_sel+window_size]
sel = df.iloc[idx*gamma_sel+window_size:(idx+1)*gamma_sel+window_size]
xtrain, ytrain = train.drop(target, axis=1), train[target]
xsel, ysel = sel.drop(target, axis=1), sel[target]
mfe_feats = {}
ecol = importr("ECoL")
with localconverter(ro.default_converter + pandas2ri.converter):
rfeatures = ecol.complexity(xtrain, ytrain)
for i, value in enumerate(rfeatures):
mfe_feats["mfeats_"+str(i)] = value
# best_score = 1e5
metas.base_fit(xtrain, ytrain)
preds = metas.base_predict(xsel)
scores = [normalized_mse(ysel, pred) for pred in preds]
    max_score = np.argmax(scores)  # index (not a score) of the best base regressor
mfe_feats['reg'] = max_score
# for idx, model in enumerate(models):
# model.fit(xtrain, ytrain)
# preds = model.predict(xsel)
# score = mean_squared_error(ysel, preds) / mean_squared_error(np.ones_like(ysel)*ysel.mean(), ysel)# is it right?
# if score < best_score:
# best_score = score
# mfe_feats['reg'] = 'reg' + str(idx)
metadf.append(mfe_feats)
# -
metadf = pd.DataFrame(metadf)
metadf.head()
try:
metadf.to_csv('../data/flight/meta.csv', index=False)
import gc
gc.collect()
except:
print('Loading from file.')
metadf = pd.read_csv('../data/flight/meta.csv')
metadf.dropna(axis=1, inplace=True)
metadf.dropna(inplace=True)
metadf.head()
for i, rate in (metadf['reg'].value_counts()/metadf.shape[0]).items():
print(type(metas._learners[int(i)]), rate)
# Train metaclassifier.
mxtrain, mxtest, mytrain, mytest = train_test_split(metadf.drop('reg', axis=1), metadf.reg, random_state=42)
metas.initial_fit(mxtrain, mytrain)
myhattest = metas.predict(mxtest)
print("Kappa: ", cohen_kappa_score(mytest, myhattest))
print("GMean: ", geometric_mean_score(mytest, myhattest))
print("Accuracy: ", accuracy_score(mytest, myhattest))
print(classification_report(mytest, myhattest))
print(classification_report_imbalanced(mytest, myhattest))
m_recommended = []
m_best = []
for idx in range(initial_data, int((df.shape[0]-window_size)/gamma_sel)):
train = df.iloc[idx*gamma_sel:idx*gamma_sel+window_size]
sel = df.iloc[idx*gamma_sel+window_size:(idx+1)*gamma_sel+window_size]
xtrain, ytrain = train.drop(target, axis=1), train[target]
xsel, ysel = sel.drop(target, axis=1), sel[target]
mfe_feats = []
ecol = importr("ECoL")
with localconverter(ro.default_converter + pandas2ri.converter):
mfe_feats = ecol.complexity(xtrain, ytrain)
mfe_feats = np.delete(mfe_feats, 8).reshape(1, -1)
yhat_model_name = metas.predict(mfe_feats)[0]
m_recommended.append(yhat_model_name)
metas.base_fit(xtrain, ytrain)
preds = metas.base_predict(xsel)
scores = [normalized_mse(ysel, pred) for pred in preds]
    max_score = np.argmax(scores)  # index (not a score) of the best base regressor
metas._metalearner.partial_fit(mfe_feats, [max_score])
m_best.append(max_score)
print("Kappa: ", cohen_kappa_score(m_best, m_recommended))
print("GMean: ", geometric_mean_score(m_best, m_recommended))
print("Accuracy: ", accuracy_score(m_best, m_recommended))
print(classification_report(m_best, m_recommended))
print(classification_report_imbalanced(m_best, m_recommended))
# ## Last Prediction
lp_best, lp_recom = m_best[-1000:], m_recommended[-1000:]
print("Kappa: ", cohen_kappa_score(lp_best, lp_recom))
print("GMean: ", geometric_mean_score(lp_best, lp_recom))
print("Accuracy: ", accuracy_score(lp_best, lp_recom))
print(classification_report(lp_best, lp_recom))
print(classification_report_imbalanced(lp_best, lp_recom))
print(lp_best[-20:])
print(lp_recom[-20:])
| notebooks/metastream-flights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises - Chapter 5
# ## Exercise 5.1
#
# The time module provides a function, also named time, that returns the current
# Greenwich Mean Time in "the epoch", which is an arbitrary time used as a reference point. On UNIX systems, the epoch is 1 January 1970.<br>
# <br>
# `>>> import time`<br>
# `>>> time.time()`<br>
# `1437746094.5735958`<br>
# <br>
# Write a script that reads the current time and converts it to a time of day in hours, minutes, and seconds, plus the number of days since the epoch.
# +
import time
from datetime import datetime
epoch_start = datetime.strptime('01.01.70', '%d.%m.%y')
unix_timestamp = time.time()
struct_timestamp = time.gmtime(unix_timestamp)
formatted_timestamp = time.strftime("%H:%M:%S", struct_timestamp)
days_since_epoch = datetime.fromtimestamp(time.mktime(struct_timestamp)) - epoch_start
print(formatted_timestamp, days_since_epoch.days)
# -
# ***
# ## Exercise 5.2
#
# Fermat’s Last Theorem says that there are no positive integers $a$, $b$, and $c$ such that<br>
#
# $$
# a^n + b^n = c^n
# $$
#
# for any values of $n$ greater than $2$.
#
# 1. Write a function named `check_fermat` that takes four parameters — $a$, $b$, $c$ and $n$ — and checks whether Fermat's theorem holds. If $n$ is greater than $2$ and <br>
#
# $$
# a^n + b^n = c^n
# $$
#
# the program should print, "Holy smokes, Fermat was wrong!" Otherwise the program should print, “No, that doesn’t work.”
# 2. Write a function that prompts the user to input values for $a$, $b$, $c$ and $n$, <i>converts</i> them to integers, and uses check_fermat to check whether they violate Fermat’s theorem.
# +
def check_fermat(a, b, c, n):
    print('Now we will check if ' + str(a) + '^' + str(n) + ' + ' + str(b) + '^' + str(n) + ' = ' + str(c) + '^' + str(n))
    if n > 2 and a**n + b**n == c**n:
        print(str(a**n + b**n) + ' = ' + str(c**n) + ' Holy smokes, Fermat was wrong!')
    else:
        print(str(a**n + b**n) + ' = ' + str(c**n) + " No, that doesn't work.")

def check_numbers():
    prompt = "This program will check if Fermat's Last Theorem holds true. Please provide an integer number!\n"
    a = int(input(prompt))
    prompt = "Please provide another integer number!\n"
    b = int(input(prompt))
    prompt = "Please provide another integer number!\n"
    c = int(input(prompt))
    prompt = "Please provide an integer number greater than 2!\n"
    n = int(input(prompt))
    check_fermat(a, b, c, n)

check_numbers()
# -
# ***
# ## Exercise 5.3.
#
# If you are given three sticks, you may or may not be able to arrange them in a triangle.For example, if one of the sticks is 12 inches long and the other two are one inch long, you will not be able to get the short sticks to meet in the middle. For any three lengths, there is a simple test to see if it is possible to form a triangle:<br>
# If any of the three lengths is greater than the sum of the other two, then you cannot form a triangle. Otherwise, you can. (If the sum of two lengths equals the third, they form what is called a "degenerate" triangle.)<br>
#
# 1. Write a function named `is_triangle` that takes three integers as arguments, and that prints either "Yes" or "No", depending on whether you can or cannot form a triangle from sticks with the given lengths.
# 2. Write a function that prompts the user to input three stick lengths, converts them to integers, and uses `is_triangle` to check whether sticks with the given lengths can form a triangle
# +
def is_triangle(a, b, c):
if a + b < c:
print('No')
elif a + c < b:
print('No')
elif b + c < a:
print('No')
else:
print('Yes')
def check_triangle():
prompt = "This program will check if you can create a triangle with 3 given lengths. Please provide an integer number!\n"
a = int(input(prompt))
prompt = "Please provide another integer number!\n"
b = int(input(prompt))
prompt = "Please provide another integer number!\n"
c = int(input(prompt))
is_triangle(a, b, c)
check_triangle()
# -
# ***
# ## Exercise 5.4.
#
# What is the output of the following program? Draw a stack diagram that shows the state of the program when it prints the result.
# + tags=[]
def recurse(n, s):
if n == 0:
print(s)
else:
recurse(n-1, n+s)
recurse(3, 0)
# -
# 1. What would happen if you called this function like this: recurse(-1, 0)?
# 2. Write a docstring that explains everything someone would need to know in order to use this function (and nothing else).
recurse(-1, 0)  # n never reaches 0, so this recurses until Python raises RecursionError
# ***
# ## Exercise 5.5.
#
# Read the following function and see if you can figure out what it does (see the examples in Chapter 4). Then run it and see if you got it right
def draw(t, length, n):
if n == 0:
return
angle = 50
t.forward(length * n)
t.left(angle)
draw(t, length, n-1)
t.right(2 * angle)
draw(t, length, n-1)
t.left(angle)
t.back(length * n)
from ipyturtle import Turtle
t = Turtle()
t
'''
This code uses turtle graphics to draw "branches" recursively; the angle
between branches is fixed by the variable "angle". The first segment has
length length * n, which shrinks to (n-1), (n-2), (n-3) and so on until
n = 1 is reached and the final branch is drawn.
Once n is 0 the recursion for that branch ends and the turtle returns to
the previous branch, from which it draws the next branches.
'''
t.reset()
draw(t, 10, 5)
# ***
# ## Exercise 5.6.
#
# The Koch curve is a fractal that looks something like the figure shown below. To draw a Koch curve with length $x$, all you have to do is<br>
#
# 1. Draw a Koch curve with length $\frac{x}{3}$.
# 2. Turn left 60 degrees.
# 3. Draw a Koch curve with $\frac{x}{3}$.
# 4. Turn right 120 degrees.
# 5. Draw a Koch curve with $\frac{x}{3}$.
# 6. Turn left 60 degrees.
# 7. Draw a Koch curve with $\frac{x}{3}$.
#
# The exception is if $x$ is less than $3$: in that case, you can just draw a straight line with length $x$.<br>
#
# 
#
# 1. Write a function called `koch` that takes a turtle and a length as parameters, and that uses the turtle to draw a Koch curve with the given length.
# 2. Write a function called `snowflake` that draws three Koch curves to make the outline of a snowflake.
# Solution: http://thinkpython2.com/code/koch.py
# 3. The Koch curve can be generalized in several ways. See http://en.wikipedia.org/wiki/Koch_snowflake for examples and implement your favorite.
# +
def koch(t, length):
if length < 3:
t.forward(length)
else:
koch(t, length / 3)
t.left(60)
koch(t, length / 3)
t.right(120)
koch(t, length / 3)
t.left(60)
koch(t, length / 3)
def snowflake(t, length, n):
for i in range(n):
koch(t, length)
t.right(360/n)
def koch_line(t, length, n):
if n == 1:
t.forward(length)
else:
koch_line(t, length/3, n - 1)
t.left(60)
koch_line(t, length/3, n - 1)
t.right(120)
koch_line(t, length/3, n - 1)
t.left(60)
koch_line(t, length/3, n - 1)
def koch_snowflake(t, length, n):
    for i in range(3):
        koch_line(t, length, n)
        t.right(120)
t.reset()
snowflake(t, 50, 3)
| Exercises/Solutions/Think-Python_Exercises_Chapter-5-Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import os
import numpy as np
import pandas as pd
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.utils.estimator_checks import check_estimator
import joblib  # sklearn.externals.joblib was removed in recent scikit-learn versions
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
from src.prediction_models.scorer import moving_average_score
from src.prediction_models.moving_average import MovingAverage
# -
ts = joblib.load('../../bld/out/data_processed/BTC_POT.p.lzma')
# Restriction to chart data only
ts = ts[ts.CHART == True].dropna(axis=1).drop(['CHART', 'TRADE'], axis=1)
# # Moving averages
#
# Here, I will be demonstrating a moving average crossover strategy. We will use two moving averages, one we consider *fast*, and the other *slow*. The strategy is:
#
# Enter a trade when the fast moving average crosses above the slow moving average.
# Exit the trade when the fast moving average crosses back below the slow moving average.
#
# A classic choice is a 20-day fast average and a 50-day slow average; the loop below sweeps several fast/slow pairs.
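# As a minimal sketch of the crossover rule itself (independent of this project's `MovingAverage` class, whose interface is used below), the signals can be computed directly with pandas rolling means:

```python
import numpy as np
import pandas as pd

def crossover_signals(prices, fast=20, slow=50):
    """Return +1 on the bar where the fast MA crosses above the slow MA,
    -1 where it crosses back below, and 0 elsewhere."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    regime = (fast_ma > slow_ma).astype(int)  # 1 while fast is above slow
    return regime.diff().fillna(0)            # nonzero only at the crossings

# Toy series: a decline followed by a rally yields a single entry signal.
prices = pd.Series(np.concatenate([np.linspace(100, 80, 60),
                                   np.linspace(80, 120, 60)]))
signals = crossover_signals(prices)
```

This is the same rule the loop below evaluates for several fast/slow pairs, just without the backtesting scaffolding.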
for fast, slow in [[1, 50], [2, 5], [5, 10], [10, 20], [15, 30], [20, 50], [50, 100], [100, 200]]:
    # Convert days to observations (12 × 24 five-minute bars per day)
    fast *= 12 * 24
    slow *= 12 * 24
ma = MovingAverage()
ma = ma.fit(ts.BTC_POT_CLOSE, window_fast=fast, window_slow=slow)
plt.plot(ma.signals)
plt.show()
plt.close()
results = ma.predict()
print(moving_average_score(ts.BTC_POT_CLOSE, results))
# # Graphics
signals = ma.signals
regimes = ma.regimes
chart = ma.y
plot = chart.to_frame()
plot['SIGNALS'] = signals
plot['REGIMES'] = regimes
plot.SIGNALS.plot()
# +
fig, (ax1, ax2) = plt.subplots(2, 1)
fig.suptitle('Regimes and Signals')
ax1.plot(plot.SIGNALS)
ax1.set_title('Signals')
ax2.plot(plot.REGIMES)
ax2.set_title('Regimes')
plt.show()
# -
plt.plot(chart)
plt.plot(regimes)
plt.plot()
trades = plot.BTC_POT_CLOSE[(plot.SIGNALS == -1) | (plot.SIGNALS == 1)].reset_index()
trades.rename(columns={'date': 'FROM', 'BTC_POT_CLOSE': 'BTC_POT_FROM'}, inplace=True)
trades['RETURNS'] = (trades.BTC_POT_FROM - trades.BTC_POT_FROM.shift(1)) / trades.BTC_POT_FROM
trades['BTC_POT_TO'] = trades.BTC_POT_FROM.shift(1)
trades['TO'] = trades.FROM.shift(1)
trades = trades[trades.index % 2 != 0]
trades
from collections import OrderedDict
plt.plot(plot.BTC_POT_CLOSE)
for row in trades.iterrows():
color = 'green' if row[1]['RETURNS'] > 0 else 'red'
label = 'gain' if row[1]['RETURNS'] > 0 else 'loss'
    plt.plot([row[1]['FROM'], row[1]['TO']], [row[1]['BTC_POT_FROM'], row[1]['BTC_POT_TO']], lw=2, color=color, label=label)
handles, labels = plt.gca().get_legend_handles_labels()
by_label = OrderedDict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys())
# # Test MovingAverage
from src.prediction_models.moving_average import MovingAverage
ma = MovingAverage()
ma = ma.fit(np.arange(1000))
print(ma)
check_estimator(MovingAverage)
| src/exploration/prediction_moving_average.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/harnalashok/deeplearning/blob/main/classify_with_vgg16_softmax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="FEXKIYEf82qQ"
'''
Last amended: 23rd Feb, 2021
Ref: Page 143 (Section 5.3) of Book 'Deep Learning with python'
by <NAME>
https://www.kaggle.com/rajmehra03/a-comprehensive-guide-to-transfer-learning#data
Objective:
i) Transfer Learning: Building powerful image classification
   models from very little data using pre-trained applications
ii) Feature Engineering: Using engineered features with
   a Random Forest Classifier
iii) Learning Rate Annealing: Moderating the learning rate
   on/near a plateau
Steps:
1. Create higher level abstract features from train data
and save these to file
2. Use saved features as input to a FC model to make predictions
3. Save FC model
4. Use the complete model to make predictions
Data from Kaggle: https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a folder: Images/cats_dogs/ folder
- created train/ and validation/ subfolders inside cats_dogs/
- created cats/ and dogs/ subfolders inside train/ and validation/
In summary, this is our directory structure:
Images/
data/
train/
dogs/
dog001.jpg
dog002.jpg
...
cats/
cat001.jpg
cat002.jpg
...
validation/
dogs/
dog001.jpg
dog002.jpg
...
cats/
cat001.jpg
cat002.jpg
...
$ source activate tensorflow
$ ipython
'''
# + [markdown] id="GQL6YWWY9SP8"
# ## PART I: Extract features from train data
# + id="UfgXApdG9F6v"
####********************************************************************************
#### ***************** PART I: Tranform train data to abstract features and save**
####********************************************************************************
# + [markdown] id="OlB5imiZJK34"
# #### Call libraries
# + id="SCjy9SoP9cBb"
# %reset -f
## 1. Call libraries
import numpy as np
# 1.1 Classes for creating models
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dropout, Flatten, Dense
# 1.2 Class for accessing pre-built models
from tensorflow.keras import applications
# 1.3 Class for generating infinite images
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# 1.4 Miscelleneous
import matplotlib.pyplot as plt
import time, os
# + id="FfNA1baCcmWf"
# 1.5 Display outputs of multiple commands from a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# + [markdown] id="ovAfE-NFJRo8"
# #### Bringing file from Google drive to VM
# And unzip it there
# + id="G8KBaPl4EhKk"
#2.0 Copy file cats_dogs.tar.gz to our HOME folder
# ! cp /content/drive/MyDrive/Colab_data_files/cats_dogs.tar.gz $HOME
# + colab={"base_uri": "https://localhost:8080/"} id="qRtANTgyEqWT" outputId="5afff893-a865-4bf3-d447-af65c681fe91"
#2.0.1 Check if file copied. If so, unzip it
# !ls $HOME
# + id="RkVma-GtGn9k"
# 2.0.2 We will keep our unzipped files here
# !mkdir $HOME/data
# + id="bwNQpaRFFFGD"
# 2.0.3 Untar the gz file
# ! tar -xvzf $HOME/cats_dogs.tar.gz -C $HOME/data/
# + colab={"base_uri": "https://localhost:8080/"} id="P3mt8DfSJ1GW" outputId="a8040a13-2452-4e27-9387-427d7490cfb8"
# ! ls /root/data
# + colab={"base_uri": "https://localhost:8080/"} id="I1WKDYWXHHaD" outputId="55140878-22fd-4052-d598-2899ee9dcc4b"
# Check the folders/files
# !ls -la $HOME/data/cats_dogs
# And where is my HOME?
# ! echo $HOME # /root
# + [markdown] id="AKdtLJhd9q_x"
# ### AA. Constants & Hyperparameters
# + id="W02H8V9R9nGy"
############################# AA. Constants & Hyperparameters ###################3
## 3. Constants/hyperparameters
# 3.1 Where are cats and dogs?
#train_data_dir = "C:\\Users\\ashok\\Desktop\\chmod\\2. data_augmentation\\cats_dogs\\train"
#train_data_dir = '/home/ashok/Images/cats_dogs/train'
train_data_dir = "/root/data/cats_dogs/train"
#validation_data_dir = "C:\\Users\\ashok\\Desktop\\chmod\\2. data_augmentation\\cats_dogs\\validation"
#validation_data_dir = '/home/ashok/Images/cats_dogs/validation'
validation_data_dir = "/root/data/cats_dogs/validation"
# 3.2 Constrain dimensions of our
# images during image generation:
img_width, img_height = 75,75 # Large size images affect model-fitting speed
# 3.3 How many samples of each of them?
nb_train_samples, nb_validation_samples = 2000, 800
# 3.4 Predict in batches that fit RAM
# and by which the sample size is evenly divisible
batch_size = 50 # Maybe for 4GB machine, batch-size of 32 will be OK
# + id="hposIiuXUIg2"
# + [markdown] id="V1l1UQjrUJN8"
# #### numpy .npy format?
#
# A simple format for saving numpy arrays to disk with the full information about them.
#
# The .npy format is the standard binary file format in NumPy for persisting a single arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals.
#
# The .npz format is the standard format for persisting multiple NumPy arrays on disk. A .npz file is a zip file containing multiple .npy files, one for each array.
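# A minimal round trip with both formats (standard NumPy API; the file names are illustrative):

```python
import os
import tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()

# .npy: one array; shape and dtype are preserved exactly
features = np.arange(12, dtype=np.float32).reshape(3, 4)
npy_path = os.path.join(tmpdir, "features.npy")
np.save(npy_path, features)
restored = np.load(npy_path)

# .npz: several named arrays bundled in one zip archive
npz_path = os.path.join(tmpdir, "bundle.npz")
np.savez(npz_path, train=features, labels=np.array([0, 1, 1]))
with np.load(npz_path) as bundle:
    names = set(bundle.files)
    labels_back = bundle["labels"].copy()
```

The same `np.save`/`np.load` pair is what this notebook uses to persist the bottleneck features.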
# + id="XjFt1VVN96zS"
# 3.5 File to which transformed bottleneck features
# for train data will be stored
#bf_filename = 'C:\\Users\\ashok\\Desktop\\chmod\\3. veryDeepConvNets\\models\\bottleneck_features_train.npy'
#bf_filename = '/home/ashok/.keras/models/bottleneck_features_train.npy'
bf_filename = '/root/data/cats_dogs/bottleneck_features_train.npy'
# 3.6 File to which transformed bottleneck features
# for validation data will be stored
#val_filename = 'C:\\Users\\ashok\\Desktop\\chmod\\3. veryDeepConvNets\\models\\bottleneck_features_validation.npy'
val_filename = '/root/data/cats_dogs/bottleneck_features_validation.npy'
# + [markdown] id="grGEVbnj-Ixa"
# ### BB. Data Augmentation
# + colab={"base_uri": "https://localhost:8080/"} id="DTFb5I3D9_Fq" outputId="a9c9fcb9-fd58-497a-ffe9-a47a1f3f8529"
############################# BB. Data Generation ###################3
## 4. Data augmentation
# 4.1 Instanstiate an image data generator:
# Needed to feed into the model
datagen_train = ImageDataGenerator(
rescale=1. / 255,
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.2,# randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False # randomly flip images
)
# 4.2 Configure datagen_train further
# Datagenerator is configured twice.
# First configuration
# is about image manipulation features
# IInd configuration is regarding data source,
# data classes, batchsize etc
generator_tr = datagen_train.flow_from_directory(
directory = train_data_dir, # Path to target train directory.
target_size=(img_width, img_height),# Dimensions to which all images will be resized.
batch_size=batch_size, # At a time so many images will be output
class_mode=None, # Return NO labels along with image data
shuffle=False # Default shuffle = True
# Now images are picked up first from
# one folder then from another; no shuffling
# We will be using images NOT for
# learning any model but only for prediction
# so shuffle = False is OK as we now know
# that Ist 1000 images are of one kind
# and next 1000 images of another kind
# See: https://github.com/keras-team/keras/issues/3296
)
# + id="eTAg_Jqz-V2c"
"""
# 4.2 If data was not arranged in the directory,
# then iterator would be:
generator_t = datagen.flow(X_train, # Should have rank 4. In case of grayscale data,
# the channels axis should have value 1, and in
# case of RGB data, it should have value 3.
y_train, # X_train labels
shuffle=False,
batch_size=batch_size
)
There is, however, no 'target_size' parameter here
"""
# + [markdown] id="74qwS67u-cCD"
# #### Generation for validation data
# + colab={"base_uri": "https://localhost:8080/"} id="fpYwJpZy-fv7" outputId="15a77e24-8ee9-4d92-fc36-380815b881f0"
# 4.3 Generator for validation data.
# Initialize ImageDataGenerator object once more
# shuffle = False => Sort data in alphanumeric order
datagen_val = ImageDataGenerator(rescale=1. / 255)
generator_val = datagen_val.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False # Default shuffle = True
# Now images are picked up first from
# one folder then from another; no shuffling
# We will be using images NOT for
# learning any model but only for prediction
# so shuffle = False is OK as we now know
# that Ist 1000 images are of one kind
# and next 1000 images of another kind
# See: https://github.com/keras-team/keras/issues/3296
)
# + [markdown] id="Ej575ooU-vUR"
# ### CC. Modeling & Feature creation
# + id="ziJdkL1n-sej"
############################# CC. Modeling & Feature creation #####################
############################For both train and validation data ####################
####################Created features become our fresh train/validation data########
# + colab={"base_uri": "https://localhost:8080/"} id="kACp609V-zn6" outputId="5f545ac8-03c5-48de-eeb2-78aed6c2bb49"
# 5. Build VGG16 network model with 'imagenet' weights
# Do not include the top FC layer of VGG16 model
# Weights will be downloaded, if absent
model = applications.VGG16(
include_top=False,
weights='imagenet',
input_shape=(img_width, img_height,3)
)
# 5.1
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="VoiO6Izk-_9z" outputId="68cdef3e-b438-4d38-8552-858f8f00f2a8"
# 5.2 Feed images through VGG16 model in batches
# And make 'predictions' in 2000/50 = 40 steps.
# Following takes time 7 +3 = 10 minutes
# Note that there is no need for 'fit' method as weights are
# already learnt
start = time.time()
# 4.1 By feeding the input samples from generator,
# create vgg16 output/predictions uptil the last
# layer. We call it 'bottleneck features' as it
# is not the desired end result
# steps: How many batches of images to output or how many times to call image-generator
# predict_generator is deprecated in TF2; model.predict accepts generators directly
bottleneck_features_train = model.predict(
                                 generator_tr,
                                 steps = nb_train_samples // batch_size,
                                 verbose = 1
                                 )
end = time.time()
print("Time taken: ",(end - start)/60, "minutes")
# + colab={"base_uri": "https://localhost:8080/"} id="_4WJI7_8_Hr9" outputId="bc2c5428-66aa-4f01-ee88-e124d9cfd0dc"
# 5.3 Similarly, make predictions for
# validation data and extract features
# Takes 12 minutes
start = time.time()
# predict_generator is deprecated in TF2; model.predict accepts generators directly
bottleneck_features_validation = model.predict(
                                 generator_val,
                                 steps = nb_validation_samples // batch_size,
                                 verbose = 1
                                 )
end = time.time()
print("Time taken: ",(end - start)/60, "minutes")
# + [markdown] id="ke2c95BJ_XnT"
# ### Saving Features
# + id="LNhLQ_Fb_QWL"
############################# DD. Saving features ###################
# 6. Save the train features
# 6.1 First delete the file to which we will save
if os.path.exists(bf_filename):
os.system('rm ' + bf_filename)
# 6.2 Next save the train-features
np.save(open(bf_filename, 'wb'), bottleneck_features_train)
# 6.3 Save validation features from model
if os.path.exists(val_filename):
os.system('rm ' + val_filename)
np.save(open(val_filename, 'wb'), bottleneck_features_validation)
# 6.4 Quit python so that complete memory is reset
# Maybe reboot your lubuntu (NOT WINDOWS)
# + id="Akvdc2NIk_sZ"
# 7.0 Else, Let us delete some variables
del bottleneck_features_validation
del bottleneck_features_train
del model
del datagen_train
del datagen_val
# + [markdown] id="yrZpJLFc_fJS"
# ## PART-II Use Extracted features
# + id="qGFncvYO_iY9"
################### ########### ##################### #######
################### PART-II BEGIN AGAIN #####################
################### ########### ##################### #######
# + id="Id2cMR8PquqB"
# + [markdown] id="U3Vb9eM6qvVE"
# #### Call libraries
# + id="ON_KK40g_oMv"
## Part II: Load saved abstract features and proceed
# with modeling and prediction
# Start ipython #
# 8.0 Call libraries
# %reset -f
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dropout, Flatten, Dense, Softmax
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import applications
import matplotlib.pyplot as plt
import time, os
# + [markdown] id="1luUFXHtqzsH"
# #### Define constants
# + id="_DjQq1Nh_viF"
# 8.1 Hyperparameters/Constants
# 8.2 Dimensions of our images.
img_width, img_height = 75,75 # 150, 150
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 64
num_classes = 2
# + id="wEJznvUc_0Hz"
# 8.3 Where are saved bottleneck features for train data?
#bf_filename = '/home/ashok/.keras/models/bottleneck_features_train.npy'
#bf_filename = 'C:\\Users\\ashok\\Desktop\\chmod\\3. veryDeepConvNets\\models\\bottleneck_features_train.npy'
bf_filename = '/root/data/cats_dogs/bottleneck_features_train.npy'
# 8.4 Validation-bottleneck features filename
#val_filename = '/home/ashok/.keras/models/bottleneck_features_validation.npy'
#val_filename = 'C:\\Users\\ashok\\Desktop\\chmod\\3. veryDeepConvNets\\models\\bottleneck_features_validation.npy'
val_filename = '/root/data/cats_dogs/bottleneck_features_validation.npy'
# + [markdown] id="uXgeLAkSpuXY"
# #### HDF5 storage for Python?
#
# The h5py package is a Pythonic interface to the HDF5 binary data format.
#
# HDF5 lets you store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorized and tagged however you want.
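# A small sketch of that API (requires h5py; the group/dataset names are illustrative):

```python
import os
import tempfile
import numpy as np
import h5py

path = os.path.join(tempfile.mkdtemp(), "demo.h5")

# Write: datasets live in groups and can carry attribute tags
with h5py.File(path, "w") as f:
    dset = f.create_dataset("weights/fc1", data=np.arange(8.0).reshape(4, 2))
    dset.attrs["layer"] = "dense"

# Read: indexing reads only the requested slice from disk
with h5py.File(path, "r") as f:
    first_row = f["weights/fc1"][0]
    layer_tag = f["weights/fc1"].attrs["layer"]
```

Keras uses this same container format when it saves model weights to a `.h5` file, as the FC model below does.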
# + id="OIxhmYeUpxPV"
# 8.6 File to which FC model weights could be stored
#top_model_weights_path = "C:\\Users\\ashok\\Desktop\\chmod\\3. veryDeepConvNets\\models\\bottleneck_fc_model.h5"
#top_model_weights_path = '/home/ashok/.keras/models/bottleneck_fc_model.h5'
top_model_weights_path = '/root/data/cats_dogs/bottleneck_fc_model.h5'
# + [markdown] id="XEg0KtOmAEPL"
# #### Load Train data
# + id="IwKjS5yZ_5rb"
# 9. Get train features
train_data_features = np.load(open(bf_filename,'rb'))
# + colab={"base_uri": "https://localhost:8080/"} id="JY2LnIzh_oxc" outputId="ab16909c-b6f0-4250-ea3b-efdb923027f7"
# 9.1
train_data_features.shape # (2000, 2, 2, 512)
print("\n-----------\n")
# 9.2 Train labels. First half are of one kind and the next half of the other
# Remember we had put 'shuffle = False' in data generators
# 1000 labels of one kind. Another 1000 labels of another kind
train_labels = np.array([1] * 1000 + [2] * 1000) # Try [0] * 3 + [1] * 5
train_labels
print("\n-----------\n")
# 9.2.1
train_labels.shape # (2000,)
print("\n-----------\n")
# + [markdown] id="h4w1IkBDqYm3"
# ##### Shuffle train data
# + colab={"base_uri": "https://localhost:8080/"} id="l95Ieg0zAS5k" outputId="d3d0310c-470e-4159-f0ea-b6f4f2330763"
# 9.3 Shuffle train features as well as the corresponding labels
x = np.arange(2000)      # Generate 2000 indices for shuffling
np.random.shuffle(x)     # x is shuffled in place
x[:5]
# 9.4
train_data_features = train_data_features[x, :,:,:]
train_labels = train_labels[x]
train_labels.shape
# + [markdown] id="DGss4iZIqcZY"
# ##### OneHotEncode train labels
# + colab={"base_uri": "https://localhost:8080/"} id="H46xOhwkAZWZ" outputId="fbd18595-1740-40e7-e337-c0cc239b3b29"
# 9.5 One hot encode the labels
# For any classification problem in deep learning,
# a softmax layer rather than a sigmoid should be used.
# See the reasoning at the end of the code.
train_labels_cat = to_categorical(train_labels)
print("\n-----------\n")
train_labels_cat.shape # (2000, 3)
print("\n-----------\n")
train_labels_cat # 1st column is constant (all zeros)
# + colab={"base_uri": "https://localhost:8080/"} id="FdDI3C6-Aebp" outputId="dbfda617-7665-4dec-d783-a295a099706b"
# 9.5.1 The constant 1st column is of no use to us,
#       so drop it
train_labels_cat = train_labels_cat[:, 1:]
train_labels_cat
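# A plain-numpy sketch of what the two steps above do (the toy labels here are
# made up for illustration): labels coded 1/2 make to_categorical infer 3
# classes (0, 1, 2), so the first column is constant and gets sliced off.

```python
import numpy as np

labels = np.array([1, 2, 2, 1])            # same 1/2 coding as train_labels
onehot = np.eye(labels.max() + 1)[labels]  # shape (4, 3); column 0 all zeros
onehot = onehot[:, 1:]                     # drop the constant column -> (4, 2)
print(onehot)
```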
# + [markdown] id="NmXF0PgNqQdd"
# #### Load Validation data
# + colab={"base_uri": "https://localhost:8080/"} id="09aC-lB3As5T" outputId="e83dfd41-a9c5-4d8f-dc08-e662571727a1"
# 10. Read validation features
validation_data_features = np.load(open(val_filename,'rb'))
# 10.1
print("\n-----------\n")
validation_data_features.shape # (800, 2, 2, 512)
# 10.2 Validation labels: half-half
validation_labels = np.array([1] * 400 + [2] * 400)
# 10.3 Convert to OHE
validation_labels = to_categorical(validation_labels)
validation_labels = validation_labels[:,1:]
# 10.4
print("\n-----------\n")
train_data_features.shape[1:] # (2, 2, 512)
# + [markdown] id="FR321-P-Aw2p"
# #### Design our Model
# + [markdown] id="vYfuORwHNTFj"
# The gist of RMSprop is to:
# * Maintain a moving (discounted) average of the square of gradients
# * Divide the gradient by the root of this average
#
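# The two bullets above can be sketched in a few lines of numpy (a toy
# quadratic objective of our choosing; the hyperparameters mirror the Keras
# defaults used below, and the helper name is ours, not a Keras API):

```python
import numpy as np

def rmsprop_step(w, grad, avg_sq, lr=0.001, rho=0.9, eps=1e-7):
    # Moving (discounted) average of the squared gradient.
    avg_sq = rho * avg_sq + (1 - rho) * grad ** 2
    # Divide the gradient by the root of that average.
    return w - lr * grad / (np.sqrt(avg_sq) + eps), avg_sq

# Minimize f(w) = w**2 (gradient 2w) for a few steps.
w, avg_sq = 5.0, 0.0
for _ in range(200):
    w, avg_sq = rmsprop_step(w, 2 * w, avg_sq)
print(w)  # closer to the minimum at 0 than the starting point 5.0
```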
# + id="h_W6K5UqktcW"
# 11.0 Delete any existing model
# Perform model fitting with or without
# 'ReduceLROnPlateau'
#
try:
    del model          # delete any model left over from an earlier run
except NameError:
    pass
# + colab={"base_uri": "https://localhost:8080/"} id="bPGToOCOAzGU" outputId="dc851097-08f6-4950-9eaf-1fd54848ce2b"
# 12. Plan model with FC layers only
# We use transformed features as input to FC model
# instead of actual train data
model = Sequential()
model.add(Flatten(input_shape=train_data_features.shape[1:])) # (2, 2, 512)
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
#model.add(Dense(1, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
# 12.1 Declare optimizer to use
# https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/RMSprop
opt = tf.keras.optimizers.RMSprop(
learning_rate=0.001,
rho=0.9, # Discount factor for the moving average of squared gradients.
epsilon=1e-07, # Small constant for numerical stability.
name='RMSprop'
)
# 12.2
model.compile(
optimizer=opt,
loss='categorical_crossentropy',  # matches the softmax output with one-hot labels
metrics=['accuracy']
)
# + [markdown] id="I52p5lhRrDJ9"
# ##### Reduce learning rate on plateau
# + id="A4KRO4LNhgMb"
# 12.3 Reduce learning rate when a metric has stopped improving.
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau
reduce_lr=tf.keras.callbacks.ReduceLROnPlateau(
monitor='val_accuracy',
factor = 0.1, # factor by which the learning
# rate will be reduced.
# new_lr = lr * factor.
min_delta=0.0001, # threshold for measuring the
# new optimum, to only focus on
# significant changes.
patience=2, # number of epochs with no
# improvement after which
# learning rate will be reduced.
verbose=1
)
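# A simplified plain-Python sketch of the callback's plateau logic above
# (the helper name and the omission of cooldown/min_lr are ours, not Keras API):

```python
def simulate_plateau(val_metrics, lr=0.001, factor=0.1, patience=2, min_delta=0.0001):
    best, wait = float("-inf"), 0
    for m in val_metrics:
        if m > best + min_delta:   # significant improvement: reset the counter
            best, wait = m, 0
        else:
            wait += 1
            if wait >= patience:   # plateau: shrink the learning rate
                lr *= factor
                wait = 0
    return lr

# Accuracy improves for three epochs, then stalls -> one reduction to ~0.0001.
print(simulate_plateau([0.70, 0.78, 0.80, 0.80, 0.80, 0.80]))
```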
# + colab={"base_uri": "https://localhost:8080/"} id="OyrW0UsQCEPJ" outputId="19609228-0f5f-4db2-8cc6-8148d2948f19"
# 13.0 Fit model and make predictions on validation dataset
# Takes 2 minutes
# Watch Validation loss and Validation accuracy (around 81%)
#
start = time.time()
history = model.fit(
train_data_features, train_labels_cat,
epochs=epochs,   # use the 50 epochs configured above (the comments below assume 50)
batch_size=batch_size,
callbacks=[reduce_lr],
validation_data=(validation_data_features, validation_labels),
verbose =1
)
end = time.time()
print("Time taken: ",(end - start)/60, "minutes")
# + colab={"base_uri": "https://localhost:8080/"} id="h4NrT7PLCFqU" outputId="d31bf283-ef72-47e6-c681-92e941e65e7a"
# 13.1
print("\n---History keys-----------\n")
history.history.keys() # dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
# 13.1.1
print("\n---History tr accuracy length-----------\n")
len(history.history['accuracy']) # 50 Training accuracy:
# As many as number of epochs
#13.1.2
print("\n---History val accuracy length-----------\n")
len(history.history['val_accuracy']) # 50 Validation accuracy
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="R8d3rg4lCJqV" outputId="f4fc645c-0795-4267-d4ec-c5177ecf705f"
# 14.0 It is very wavy without ReduceLROnPlateau.
#      (plot_learning_curve is defined in a later cell; run that cell first.)
plot_learning_curve()
# + [markdown] id="sQcQSNrjCa85"
# #### Save model
# + id="zHgi9UG5Cd3Y"
# 15. Finally save model weights for later use
model.save_weights(top_model_weights_path)
# + colab={"base_uri": "https://localhost:8080/"} id="lFekxDAuoHDv" outputId="cd9b8199-b07a-48e5-a1aa-8903787df1f8"
# 15.1 Check if saved and size of file:
# ! ls -la /root/data/cats_dogs/
# + [markdown] id="BfmwfaLbCtSy"
# #### Learning curve
# + id="-edkQyxcCrro"
#######################################################
# How accuracy changes as epochs increase
# We will use this function again and again
# in subsequent examples
def plot_learning_curve():
val_acc = history.history['val_accuracy']
tr_acc=history.history['accuracy']
epochs = range(1, len(val_acc) +1)
plt.plot(epochs,val_acc, 'b', label = "Validation accu")
plt.plot(epochs, tr_acc, 'r', label = "Training accu")
plt.title("Training and validation accuracy")
plt.xlabel("epochs-->")
plt.ylabel("accuracy")
plt.legend()
plt.show()
#########################
# + [markdown] id="A0RFjRseZL7M"
# # Use Random Forest
# Making predictions from the extracted features using a Random Forest
# + id="xBrbwTv7ZQlx"
# 16.0 Using RandomForestClassifier for classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
X= train_data_features
print("\n----X.shape-------\n")
X.shape # (2000, 2, 2, 512)
X = X.reshape(2000, -1)
print("\n----X.shape-------\n")
X.shape # (2000, 2048)
y = train_labels
print("\n-----y.shape------\n")
y.shape # (2000, )
# 16.1
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size= 0.25, shuffle = True)
print("\n----y_train-------\n")
y_train[:5] # It is now shuffled
print("\n----y_test-------\n")
y_test[:5] # It is now shuffled
print("\n-----Model------\n")
# 16.2
model = RandomForestClassifier()
model.fit(X_train,y_train)
print("\n-----------\n")
# 16.3
y_pred = model.predict(X_test)
print("\n----y_pred-------\n")
y_pred[:10]
print("\n-----------\n")
np.sum(y_test==y_pred)/len(y_test) # 79.8%
#===============================================
# + id="gMcTZ9MgC6HZ"
#########################
"""
Sigmoid vs Softmax
===================
Suppose there are 5 classes. So we may have either 5-sigmoid
neurons at the output or just one neuron.
If we have just one neuron, than the neuron will emit
numbers as 4.3, 3.6 instead of 4 or 3. In a regression problem
we can take exactly the value outputted that is 4.6 and subtract
from expected output to calculate error. But in a classification
problem, 4.6 is meaning less and we are not sure whether to consider
it as 4th class or 5th class.
If we want to have 5 neurons with sigmoid at the end then
the better option is to have softmax.
Therefore, in a classification problem, even if there
are two classes, it is better to use softmax rather than use
one sigmoid neuron at the end.
"""
# + id="0GtQG2r4eeDe"
############ I am done #############
| classify_with_vgg16_softmax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="NriAkWY7oX5S"
# # How random is `r/random`?
#
# There's a limit of 0.5 req/s (1 request every 2 seconds)
#
#
# ## What a good response looks like (status code 302)
# ```
# $ curl https://www.reddit.com/r/random
#
# <html>
# <head>
# <title>302 Found</title>
# </head>
# <body>
# <h1>302 Found</h1>
# The resource was found at <a href="https://www.reddit.com/r/Amd/?utm_campaign=redirect&utm_medium=desktop&utm_source=reddit&utm_name=random_subreddit">https://www.reddit.com/r/Amd/?utm_campaign=redirect&utm_medium=desktop&utm_source=reddit&utm_name=random_subreddit</a>;
# you should be redirected automatically.
#
#
# </body>
# </html>
# ```
#
# ## What a bad response looks like (status code 429)
# ```
# $ curl https://www.reddit.com/r/random
#
# <!doctype html>
# <html>
# <head>
# <title>Too Many Requests</title>
# <style>
# body {
# font: small verdana, arial, helvetica, sans-serif;
# width: 600px;
# margin: 0 auto;
# }
#
# h1 {
# height: 40px;
# background: transparent url(//www.redditstatic.com/reddit.com.header.png) no-repeat scroll top right;
# }
# </style>
# </head>
# <body>
# <h1>whoa there, pardner!</h1>
#
#
#
# <p>we're sorry, but you appear to be a bot and we've seen too many requests
# from you lately. we enforce a hard speed limit on requests that appear to come
# from bots to prevent abuse.</p>
#
# <p>if you are not a bot but are spoofing one via your browser's user agent
# string: please change your user agent string to avoid seeing this message
# again.</p>
#
# <p>please wait 4 second(s) and try again.</p>
#
# <p>as a reminder to developers, we recommend that clients make no
# more than <a href="http://github.com/reddit/reddit/wiki/API">one
# request every two seconds</a> to avoid seeing this message.</p>
# </body>
# </html>
# ```
#
#
# # What happens
# GET --> 302 (redirect) --> 200 (subreddit)
#
# I only want the name of the subreddit, so I don't need to follow the redirect.
# + id="hdSpIAKUoU5b"
import pandas as pd
import requests
from time import sleep
from tqdm import tqdm
from random import random
# + id="qCVAf5wSr1Bq"
def parse_http(req):
"""
Returns the name of the subreddit from a request
If the status code isn't 302, returns "Error"
"""
if req.status_code != 302:
return "Error"
start_idx = req.text.index('/r/') + len('/r/')
end_idx = req.text.index('?utm_campaign=redirect') - 1
return req.text[start_idx:end_idx]
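# The extraction logic can be exercised without hitting reddit by stubbing a
# response object (the 302 body below is abridged from the example response
# above; parse_http is restated so the snippet is self-contained):

```python
from types import SimpleNamespace

def parse_http(req):
    """Return the subreddit name from a 302 response, else "Error"."""
    if req.status_code != 302:
        return "Error"
    start_idx = req.text.index('/r/') + len('/r/')
    end_idx = req.text.index('?utm_campaign=redirect') - 1
    return req.text[start_idx:end_idx]

fake = SimpleNamespace(
    status_code=302,
    text='found at <a href="https://www.reddit.com/r/Amd/?utm_campaign=redirect&utm_medium=desktop">link</a>',
)
print(parse_http(fake))  # Amd
```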
# + id="3-znuHNgpQOF" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1631930653292, "user_tz": 360, "elapsed": 2624566, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "16755363354285896480"}} outputId="3a969d74-76d4-40b7-b264-f2c440cd0bea"
sites = []
codes = []
headers = {
'User-Agent': 'Mozilla/5.0'
}
# Works for 10, 100 @ 3 seconds / request
# Works for 10 @ 2 seconds / request
for _ in tqdm(range(1000), ascii=True):
# Might have to mess with the User-Agent to look less like a bot
# https://evanhahn.com/python-requests-library-useragent
# Yeah the User-Agent says it's coming from python requests
# Changing it fixed everything
r = requests.get('https://www.reddit.com/r/random',
headers=headers,
allow_redirects=False)
if r.status_code == 429:
print("Got rate limit error")
sites.append(parse_http(r))
codes.append(r.status_code)
# Jitter the sleep a bit to throw off bot detection
sleep(2 + random())
# + colab={"base_uri": "https://localhost:8080/"} id="gDNsW8qMtkFH" executionInfo={"status": "ok", "timestamp": 1631930868282, "user_tz": 360, "elapsed": 160, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "16755363354285896480"}} outputId="4ae654fb-cc76-42bd-af88-a9deb79e395f"
#[print(code, site) for code, site in zip(codes, sites)];
for row in list(zip(codes, sites))[-10:]:
print(row[0], row[1])
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="MbrbD8KkQCLc" executionInfo={"status": "ok", "timestamp": 1631930875221, "user_tz": 360, "elapsed": 141, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "16755363354285896480"}} outputId="53593f4b-fc47-4fdf-bd93-ea9ea688ad1c"
df = pd.DataFrame(list(zip(sites, codes)), columns=['subreddit', 'response_code'])
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="FCO_KuJ0ecqo" executionInfo={"status": "ok", "timestamp": 1631930880345, "user_tz": 360, "elapsed": 925, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "16755363354285896480"}} outputId="72d7f758-6d8d-40e0-ac00-94db14475025"
df.info()
# + id="_v3TAuUzQ_hU" executionInfo={"status": "ok", "timestamp": 1631933320687, "user_tz": 360, "elapsed": 135, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "16755363354285896480"}}
from time import time
fname = 'reddit_randomness_' + str(int(time())) + '.csv'
df.to_csv(fname,index=False)
# + id="0ya5qkknRSRG"
| reddit_randomness_scrape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pretrained GPT2 Model Deployment Example
#
# In this notebook, we will run an example of text generation using a GPT2 model exported from HuggingFace and deployed with Seldon's prepackaged Triton server. The example also covers converting the model to ONNX format.
# The example below implements the greedy approach to next-token prediction.
# More info: https://huggingface.co/transformers/model_doc/gpt2.html?highlight=gpt2
#
# After we have the model deployed to Kubernetes, we will run a simple load test to evaluate the model's inference performance.
#
#
# ## Steps:
# 1. Download pretrained GPT2 model from hugging face
# 2. Convert the model to ONNX
# 3. Store it in MinIo bucket
# 4. Setup Seldon-Core in your kubernetes cluster
# 5. Deploy the ONNX model with Seldon’s prepackaged Triton server.
# 6. Interact with the model, run a greedy alg example (generate sentence completion)
# 7. Run load test using vegeta
# 8. Clean-up
#
# ## Basic requirements
# * Helm v3.0.0+
# * A Kubernetes cluster running v1.13 or above (minkube / docker-for-windows work well if enough RAM)
# * kubectl v1.14+
# * Python 3.6+
# %%writefile requirements.txt
transformers==4.5.1
torch==1.8.1
tokenizers<0.11,>=0.10.1
tensorflow==2.4.1
tf2onnx
# !pip install --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org -r requirements.txt
# ### Export HuggingFace TFGPT2LMHeadModel pre-trained model and save it locally
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2", from_pt=True, pad_token_id=tokenizer.eos_token_id)
model.save_pretrained("./tfgpt2model", saved_model=True)
# ### Convert the TensorFlow saved model to ONNX
# !python -m tf2onnx.convert --saved-model ./tfgpt2model/saved_model/1 --opset 11 --output model.onnx
# ### Copy your model to a local MinIo
# #### Setup MinIo
# Use the provided [notebook](https://docs.seldon.io/projects/seldon-core/en/latest/examples/minio_setup.html) to install MinIo in your cluster and configure `mc` CLI tool. Instructions also [online](https://docs.min.io/docs/minio-client-quickstart-guide.html).
#
# -- Note: You can use your preferred remote storage provider (Google Cloud Storage, AWS S3, etc.)
#
# #### Create a Bucket and store your model
# !mc mb minio-seldon/onnx-gpt2 -p
# !mc cp ./model.onnx minio-seldon/onnx-gpt2/gpt2/1/
# ### Run Seldon in your kubernetes cluster
#
# Follow the [Seldon-Core Setup notebook](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to Setup a cluster with Ambassador Ingress or Istio and install Seldon Core
# ### Deploy your model with Seldon pre-packaged Triton server
# +
# %%writefile secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-init-container-secret
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: minioadmin
AWS_SECRET_ACCESS_KEY: minioadmin
AWS_ENDPOINT_URL: http://minio.minio-system.svc.cluster.local:9000
USE_SSL: "false"
# -
# %%writefile gpt2-deploy.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: gpt2
spec:
predictors:
- graph:
implementation: TRITON_SERVER
logger:
mode: all
modelUri: s3://onnx-gpt2
envSecretRefName: seldon-init-container-secret
name: gpt2
type: MODEL
name: default
replicas: 1
protocol: kfserving
# !kubectl apply -f secret.yaml -n default
# !kubectl apply -f gpt2-deploy.yaml -n default
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=gpt2 -o jsonpath='{.items[0].metadata.name}')
# #### Interact with the model: get model metadata (a "test" request to make sure our model is available and loaded correctly)
# !curl -v http://localhost:80/seldon/default/gpt2/v2/models/gpt2
# ### Run prediction test: generate a sentence completion using GPT2 model - Greedy approach
#
# +
import requests
import json
import numpy as np
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
input_text = 'I enjoy working in Seldon'
count = 0
max_gen_len = 10
gen_sentence = input_text
while count < max_gen_len:
input_ids = tokenizer.encode(gen_sentence, return_tensors='tf')
shape = input_ids.shape.as_list()
payload = {
"inputs": [
{"name": "input_ids:0",
"datatype": "INT32",
"shape": shape,
"data": input_ids.numpy().tolist()
},
{"name": "attention_mask:0",
"datatype": "INT32",
"shape": shape,
"data": np.ones(shape, dtype=np.int32).tolist()
}
]
}
ret = requests.post('http://localhost:80/seldon/default/gpt2/v2/models/gpt2/infer', json=payload)
    try:
        res = ret.json()
    except ValueError:
        # The response body was not valid JSON; retry this step.
        continue
# extract logits
logits = np.array(res["outputs"][1]["data"])
logits = logits.reshape(res["outputs"][1]["shape"])
# take the best next token probability of the last token of input ( greedy approach)
next_token = logits.argmax(axis=2)[0]
next_token_str = tokenizer.decode(next_token[-1:], skip_special_tokens=True,
clean_up_tokenization_spaces=True).strip()
gen_sentence += ' ' + next_token_str
count += 1
print(f'Input: {input_text}\nOutput: {gen_sentence}')
# -
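# The greedy step inside the loop above can be sketched with plain numpy
# (toy logits of our choosing; real GPT-2 logits have vocabulary size 50257):

```python
import numpy as np

# logits has shape (batch, seq_len, vocab); greedy decoding takes the argmax
# over the vocab axis and keeps the token at the last position.
logits = np.array([[[0.1, 0.2, 0.3, 0.1, 0.3],
                    [0.0, 0.9, 0.1, 0.0, 0.0]]])  # shape (1, 2, 5)
next_token = logits.argmax(axis=2)[0]             # best token per position
print(next_token[-1])  # 1 -> the id appended to the growing sentence
```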
# ### Run Load Test / Performance Test using vegeta
# #### Install vegeta, for more details take a look in [vegeta](https://github.com/tsenart/vegeta#install) official documentation
# !wget https://github.com/tsenart/vegeta/releases/download/v12.8.3/vegeta-12.8.3-linux-amd64.tar.gz
# !tar -zxvf vegeta-12.8.3-linux-amd64.tar.gz
# !chmod +x vegeta
# #### Generate the vegeta [target file](https://github.com/tsenart/vegeta#-targets) containing a POST command with the payload in the required structure
# +
from subprocess import run, Popen, PIPE
import json
import numpy as np
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import base64
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
input_text = 'I enjoy working in Seldon'
input_ids = tokenizer.encode(input_text, return_tensors='tf')
shape = input_ids.shape.as_list()
payload = {
"inputs": [
{"name": "input_ids:0",
"datatype": "INT32",
"shape": shape,
"data": input_ids.numpy().tolist()
},
{"name": "attention_mask:0",
"datatype": "INT32",
"shape": shape,
"data": np.ones(shape, dtype=np.int32).tolist()
}
]
}
cmd= {"method": "POST",
"header": {"Content-Type": ["application/json"] },
"url": "http://localhost:80/seldon/default/gpt2/v2/models/gpt2/infer",
"body": base64.b64encode(bytes(json.dumps(payload), "utf-8")).decode("utf-8")}
with open("vegeta_target.json", mode="w") as file:
json.dump(cmd, file)
file.write('\n\n')
# -
# !vegeta attack -targets=vegeta_target.json -rate=1 -duration=60s -format=json | vegeta report -type=text
# ### Clean-up
# !kubectl delete -f gpt2-deploy.yaml -n default
| examples/triton_gpt2/README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
# __Universidad Tecnológica Nacional, Buenos Aires__\
# __Industrial Engineering__\
# __Operations Research Course__\
# __Author: <NAME>__, <EMAIL>
#
# ---
# # Complex discrete-event simulation example
# We want to estimate the metrics of a waiting-line model whose arrival and service distributions have parameters that change dynamically at each simulated time. This follows from a modeling requirement that comes from the qualitative analysis of queue psychology.
#
# The goal is to compare an M/M/1 model against Mvar/Mvar/1, where Mvar denotes an exponential distribution with a variable parameter.
# The parameters are a function of the queue length. Concretely, a sigmoid function was chosen arbitrarily.
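# A small sketch of that sigmoid modulation (the location 9 and scale 4 match
# the commented-out lines in `calculate_lambd` below; the base rate here is the
# one used later in the Monte Carlo cell): the effective arrival rate shrinks
# as the queue grows.

```python
from scipy.stats import logistic

base_lambda = 9  # base arrival rate
for q in (0, 9, 20):  # queue lengths
    eff = base_lambda * (1 - logistic.cdf(q, 9, 4))
    print(q, round(eff, 2))  # rate decreases as the queue grows
```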
# ## Solution
# First, we import the libraries needed for the computation:
import numpy as np
from numpy.random import exponential
import matplotlib.pyplot as plt
import pandas as pd
import heapq
from time import time
from scipy.stats import logistic
# ### Discrete-event simulator:
# We define the functions associated with the events:
# +
# Event-related functions:
def generar_llegada(t_global, generador_codigo_persona, lambd):
t_llegada = exponential(1/lambd) + t_global
nueva_n_persona = next(generador_codigo_persona)
return t_llegada, nueva_n_persona
def generar_salida_servidor(t_global, mu):
t_salida = exponential(1/mu) + t_global
return t_salida
# -
# #### Simulator results function:
# +
## Function to compute the mean time an item spends in the queue:
def calcular_q_media(estado_fila_array, t_array):
tiempo_total = t_array[-1]
t_entre_eventos = np.diff(np.insert(t_array, 0, 0)).transpose()
return np.dot(estado_fila_array, t_entre_eventos) / tiempo_total
def calculo_metricas(tabla_eventos, t_array, estado_fila, estado_sistema):
    # Compute Wq and Ws:
ws_acum = 0
wq_acum = 0
for key, value in tabla_eventos.items():
        # ws and wq are computed from the differences between each
        # person's time in the system and time in the queue.
if 'salida_servidor' in value:
ws_acum += value['salida_servidor'] - value['nueva_persona']
if 'salida_fila' in value:
wq_acum += value['salida_fila'] - value['nueva_persona']
wq = wq_acum / (key + 1)
ws = ws_acum / (key + 1)
    # Compute Lq and Ls:
lq = calcular_q_media(estado_fila, t_array)
ls = calcular_q_media(estado_sistema, t_array)
    # Number of people that left the system:
q_personas = sorted(tabla_eventos.keys())[-1]
return wq, ws, lq, ls, q_personas
# -
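# The time-weighted mean computed by `calcular_q_media` can be checked on a
# tiny example with made-up event times:

```python
import numpy as np

# Queue was 0 until t=2, then 3 until t=4, then 1 until t=10.
estado = np.array([0, 3, 1])
t = np.array([2.0, 4.0, 10.0])
dt = np.diff(np.insert(t, 0, 0))   # inter-event times: [2, 2, 6]
print(np.dot(estado, dt) / t[-1])  # (0*2 + 3*2 + 1*6) / 10 = 1.2
```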
# #### Estructura del simulador:
# Escribimos la estructura del simulador a través de un while loop. Esta vez el simulador será una función ya que lo usaremos bajo el método de Monte Carlo.
def simulador(t_corte, n_servidores, lambd, mu):
    # Initialize the simulator's event queue:
fila_simulador = []
def ingresar_a_fila_simulador(t, tipo_evento, n_persona, n_servidor):
heapq.heappush(fila_simulador, (t, tipo_evento, n_persona, n_servidor))
def recuperar_de_fila_simulador():
return heapq.heappop(fila_simulador)
    # Initialize the case queue:
fila = []
def entrada_fila(n_persona):
heapq.heappush(fila, n_persona)
def salida_fila():
return heapq.heappop(fila)
    # Variable lambda and mu:
def calculate_mu(mu, length_fila):
# return mu*(1 - logistic.cdf(length_fila, 9, 4))
return mu
def calculate_lambd(lambd, length_fila):
# return lambd*(1 - logistic.cdf(length_fila, 9, 4))
return lambd
    # Initialize variables:
t_array = []
tabla_eventos = {}
estado_fila = []
estado_sistema = []
gen_codigo_personas = iter(range(0, 20000))
    t_global = 0 # Current simulation time.
    n_persona = 0 # Item number at the current simulation time.
    estado_servidores = np.array([True]*n_servidores) # Availability of each server: True (free), False (busy)
    ################## SIMULATOR #############################
    # Start
t_llegada, n_nueva_persona = generar_llegada(t_global, gen_codigo_personas, calculate_lambd(lambd, len(fila)))
ingresar_a_fila_simulador(t_llegada, 'nueva_persona', n_nueva_persona, None)
while True:
        # Pop the next event from the event queue:
nuevo_evento = recuperar_de_fila_simulador()
t_global = nuevo_evento[0]
if t_global > t_corte:
break
tipo_evento = nuevo_evento[1]
n_persona = nuevo_evento[2]
n_servidor = nuevo_evento[3]
        # Add the event to the tracking table:
evento_a_tabla = {tipo_evento: t_global}
if n_persona in tabla_eventos:
tabla_eventos[n_persona].update(evento_a_tabla)
else:
tabla_eventos.update({n_persona: evento_a_tabla})
        ################ EVENT: a person arrives ################
if tipo_evento == 'nueva_persona':
servidores_libres = np.argwhere(estado_servidores)
if servidores_libres.size != 0:
                # Pick the first free server:
index_servidor = servidores_libres[0][0]
estado_servidores[index_servidor] = False
t_salida = generar_salida_servidor(t_global, calculate_mu(mu, len(fila)))
ingresar_a_fila_simulador(t_salida, 'salida_servidor', n_persona, index_servidor)
else:
entrada_fila(n_persona)
t_llegada, n_nueva_persona = generar_llegada(t_global, gen_codigo_personas, calculate_lambd(lambd, len(fila)))
ingresar_a_fila_simulador(t_llegada, 'nueva_persona', n_nueva_persona, None)
        ################ EVENT: departure from a server ################
if tipo_evento == 'salida_servidor':
            # bring in the next person:
if len(fila) > 0:
n_persona = salida_fila()
t_salida = generar_salida_servidor(t_global, calculate_mu(mu, len(fila)))
ingresar_a_fila_simulador(t_salida, 'salida_servidor', n_persona, n_servidor)
                # Add a virtual event to record the departure from the queue.
                # This event is not part of the simulator's events.
evento_a_tabla = {'salida_fila': t_global}
tabla_eventos[n_persona].update(evento_a_tabla)
else:
estado_servidores[n_servidor] = True
        ####### Collect data ##########
t_array.append(t_global)
q_fila = len(fila)
estado_fila.append(q_fila)
q_sistema = q_fila + np.sum(~estado_servidores)
estado_sistema.append(q_sistema)
    ###### Compute results ##########
return calculo_metricas(tabla_eventos, t_array, estado_fila, estado_sistema)
# ### Monte Carlo method applied to waiting lines:
# We iterate n times and then take the mean of the metrics Wq, Ws, Lq, and Ls.
# +
# Parameters:
iteraciones = 10
n_servidores = 1
t_corte = 240
lambd = n_servidores*9
mu = 12
# Initialization:
wq_array = []
ws_array = []
lq_array = []
ls_array = []
q_personas_array = []
t_inicio = time() # record the start time
# Iterate:
for i in range(0, iteraciones):
    # Run the simulator:
wq_i, ws_i, lq_i, ls_i, q_personas_i = simulador(t_corte, n_servidores, lambd, mu)
    # Append the results to the arrays:
wq_array.append(wq_i)
ws_array.append(ws_i)
lq_array.append(lq_i)
ls_array.append(ls_i)
q_personas_array.append(q_personas_i)
t_fin = time() # record the end time
wq = np.mean(wq_array)
ws = np.mean(ws_array)
lq = np.mean(lq_array)
ls = np.mean(ls_array)
q_personas = np.mean(q_personas_array)
# Results:
print('######## Results ##############')
print('## Monte Carlo ##')
print('Monte Carlo iterations: %i' % iteraciones,
      '\nComputation time: %0.2f' % (t_fin - t_inicio))
print('\n## Waiting lines ##')
print('Number of servers: %s' % n_servidores,
      '\nMean time in queue: %0.2f' % wq,
      '\nMean time in system: %0.2f' % ws,
      '\nMean queue length: %0.2f' % lq,
      '\nMean number of people in the system: %0.2f' % ls,
      '\nPeople that left the system during the simulation: %i' % q_personas)
| 05_filas/casos_codigo/caso_fila_compleja/simulacion_fila_compleja.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing BigQuery data in a Jupyter notebook
#
# [BigQuery](https://cloud.google.com/bigquery/docs/) is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime.
#
# Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table.
# ## Using Jupyter magics to query BigQuery data
#
# The BigQuery Python client library provides a magic command that allows you to run queries with minimal code.
#
# The BigQuery client library provides a cell magic, `%%bigquery`. The `%%bigquery` magic runs a SQL query and returns the results as a pandas `DataFrame`. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year.
# %%bigquery
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
# The following command runs the same query, but this time the results are saved to a variable. The variable name, `total_births`, is given as an argument to the `%%bigquery` magic. The results can then be used for further analysis and visualization.
# %%bigquery total_births
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
# The next cell uses the pandas `DataFrame.plot` method to visualize the query results as a bar chart. See the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/visualization.html) to learn more about data visualization with pandas.
total_births.plot(kind='bar', x='year', y='birth_count');
# Run the following query to retrieve the number of births by weekday. Because the `wday` (weekday) field allows null values, the query excludes records where `wday` is null.
# %%bigquery births_by_weekday
SELECT
wday,
SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births,
SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births
FROM `bigquery-public-data.samples.natality`
WHERE wday IS NOT NULL
GROUP BY wday
ORDER BY wday ASC
# Visualize the query results using a line chart.
births_by_weekday.plot(x='wday');
# ## Using Python to query BigQuery data
#
# Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, `%%bigquery` uses the BigQuery Python client library to run the given query, convert the results to a pandas `DataFrame`, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks.
#
# To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
# +
from google.cloud import bigquery
client = bigquery.Client()
# -
# Use the [`Client.query`](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html#google.cloud.bigquery.client.Client.query) method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.).
sql = """
SELECT
plurality,
COUNT(1) AS count,
year
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(plurality) AND plurality > 1
GROUP BY
plurality, year
ORDER BY
count DESC
"""
df = client.query(sql).to_dataframe()
df.head()
# To chart the query results in your `DataFrame`, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time.
pivot_table = df.pivot(index='year', columns='plurality', values='count')
pivot_table.plot(kind='bar', stacked=True, figsize=(15, 7));
# Run the following query to retrieve the count of births by the number of gestation weeks.
sql = """
SELECT
gestation_weeks,
COUNT(1) AS count
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99
GROUP BY
gestation_weeks
ORDER BY
gestation_weeks
"""
df = client.query(sql).to_dataframe()
# Finally, chart the query results in your `DataFrame`.
ax = df.plot(kind='bar', x='gestation_weeks', y='count', figsize=(15,7))
ax.set_title('Count of Births by Gestation Weeks')
ax.set_xlabel('Gestation Weeks')
ax.set_ylabel('Count');
# ## What's Next
#
# + __Learn more about writing queries for BigQuery__ — [Querying Data](https://cloud.google.com/bigquery/querying-data) in the BigQuery documentation explains how to run queries, create user-defined functions (UDFs), and more.
#
# + __Explore BigQuery syntax__ — The preferred dialect for SQL queries in BigQuery is standard SQL. Standard SQL syntax is described in the [SQL Reference](https://cloud.google.com/bigquery/docs/reference/standard-sql/). BigQuery's legacy SQL-like syntax is described in the [Query Reference (legacy SQL)](https://cloud.google.com/bigquery/query-reference).
| nb1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Working with Missing data in pandas
import pandas as pd
# We begin by defining a pandas dataframe that contains some cells with missing values. Note that pandas, in addition to allowing us to create dataframes from a variety of files, also supports explicit declaration.
import numpy as np
incomplete_df = pd.DataFrame({'id': [1,2,3,2,2,3,1,1,1,2,4],
'type': ['one', 'one', 'two', 'three', 'two', 'three', 'one', 'two', 'one', 'three','one'],
'amount': [345,928,np.NAN,645,113,942,np.NAN,539,np.NAN,814,np.NAN]
}, columns=['id','type','amount'])
# Column 'amount' is the only one with missing values.
incomplete_df
# Recall that pandas natively supports summary statistics and arithmetic with missing data. Let's define two series, both containing some missing values.
A = incomplete_df['amount']
B = pd.Series(data=[np.NAN,125,335,345,312,np.NAN,np.NAN,129,551,800,222])
print(A)
print(B)
# The mean is computed normally and missing values are ignored:
A.mean()
# Min, Max, STD and Variance all work even when data are missing:
print(B.mean())
print(B.max())
print(B.std())
print(B.var())
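# The NaN-skipping behaviour can be switched off with the `skipna` flag (standard pandas; a small self-contained example):

```python
import numpy as np
import pandas as pd

A = pd.Series([345.0, 928.0, np.nan, 645.0])

print(A.mean())              # NaNs are skipped by default
print(A.mean(skipna=False))  # nan -- opt out of the skipping behaviour
```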
# We can also perform element-wise arithmetic operations between series with missing data. Note that by definition the result of any operation that involves missing values is NaN.
A+B
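# If NaN propagation is not what you want, the pandas arithmetic methods accept a `fill_value` argument that treats a missing operand as that value (shown here on small toy series):

```python
import numpy as np
import pandas as pd

A = pd.Series([345.0, 928.0, np.nan, 645.0])
B = pd.Series([np.nan, 125.0, 335.0, 345.0])

# Plain `+` yields NaN wherever either operand is missing.
print(A + B)

# `Series.add` with fill_value=0 treats a missing operand as 0,
# so the result is NaN only where *both* operands are missing.
print(A.add(B, fill_value=0))
```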
# ###### Filling missing values
# Recall that pandas has a function that allows you to drop any rows in a dataframe (or elements in a series) that contain a missing value.
A
A.dropna()
# However, very often you may wish to fill in those missing values rather than simply dropping them. Of course, pandas also has that functionality. For example, we could fill missing values with a scalar number, as shown below.
A.fillna(-1)
# That actually works with any data type.
A.fillna('missing data')
# As such, we can use this functionality to fill in the gaps with the average value computed across the non-missing values.
A.fillna(A.mean())
# Even better, if we want to fill in the gaps with mean values of corresponding id's (recall our initial dataframe printed below), the following two lines of code perform that seemingly complex task.
incomplete_df
# Fill in gaps in the 'amount' column with means obtained from corresponding id's in the first column
incomplete_df["amount"].fillna(incomplete_df.groupby("id")["amount"].transform("mean"),inplace=True)
# If there is no corresponding id, simply use the overall mean
incomplete_df["amount"].fillna(incomplete_df["amount"].mean(), inplace=True)
incomplete_df
# You can fill values forwards and backwards with the methods `pad` / `ffill` and `bfill` / `backfill`.
print(B)
print(B.fillna(method='pad'))
# We can set a limit if we only want to fill a limited number of consecutive gaps.
B.fillna(method='bfill',limit=1)
# There is also a function that does linear interpolation. The `method` keyword gives you access to fancier interpolation methods, some of which require SciPy.
print(B)
print(B.interpolate())
B.interpolate(method='barycentric')
B.interpolate(method='pchip')
# Below we compare three different methods.
np.random.seed(2)
ser = pd.Series(np.arange(1, 10.1, .25)**2 + np.random.randn(37))
bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
ser[bad] = np.nan
methods = ['linear', 'quadratic', 'cubic']
df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})
df.plot()
| 2_2_Working with Missing Values.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rosiezou/ssl_3d_recon/blob/master/SSL_python2_proto.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tUtcNlCJvQLW"
# This notebook contains the entire environment setup needed to run
# https://github.com/rosiezou/ssl_3d_recon in Google Colab and/or AWS
# + id="JSiT1GxZcn3b"
# ls /usr/local/cuda-8.0/
# + id="vZAVZiPHlaEm"
# Check Python version
# ! python --version
# + id="g5_Uw5V5lgwJ"
# Sanity check GPU and cuda version
# !nvidia-smi
# !nvcc --version
# + id="FK5eJe5llzVj"
# Remove the version of CUDA installed on the machine
# !apt-get --purge remove cuda* nvidia* libnvidia-*
# The steps below in this cell may fail on AWS. That should be okay.
# !dpkg -l | grep cuda- | awk '{print $2}' | xargs -n1 dpkg --purge
# !apt autoremove
# !apt-get update
# + id="UZvk_KW0l8ot"
# Install CUDA 8
# !wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
# !dpkg -i --force-overwrite cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
# !apt-get update
# !apt-get install cuda-8-0
# + id="B9jyJHaZmA7P"
# install will fail, need to force dpkg to overwrite the configuration file
# Note: Skip this cell for AWS. Not needed.
# !wget http://archive.ubuntu.com/ubuntu/pool/main/m/mesa/libglx-mesa0_18.0.5-0ubuntu0~18.04.1_amd64.deb
# !dpkg -i --force-overwrite libglx-mesa0_18.0.5-0ubuntu0~18.04.1_amd64.deb
# !wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/nvidia-410_410.48-0ubuntu1_amd64.deb
# !dpkg -i --force-overwrite nvidia-410_410.48-0ubuntu1_amd64.deb
# !apt --fix-broken install
# !apt-get install cuda-8-0
# + id="1By45HOymKov"
# Sanity check if the correct version of cuda is installed now.
# !nvcc --version
# + id="xyc2ujYNmNQx"
# Experimental install of tensorflow 1.13
# ! pip install tensorflow==1.13.2
# Make sure that tensorflow is working
import tensorflow as tf
print(tf.sysconfig.get_lib())
# + id="o9GNDd_zxEDo"
# Missing module! :|
# ! pip install tflearn
# + id="LnH1TsO9mQEJ"
# Downgrade GNU to a CUDA compatible version
# Note: Skip this cell for AWS. Not needed.
# ! apt install g++-4.8
# ! update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 60 --slave /usr/bin/g++ g++ /usr/bin/g++-7
# ! update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
# List available compilers
# ! update-alternatives --list gcc
# Set gcc 4.x as default
# !update-alternatives --set gcc /usr/bin/gcc-4.8
# + id="eSn-zjMbmUcj"
# git repo clone
# Original author repository
# ! git clone https://github.com/klnavaneet/ssl_3d_recon.git
# OR Forked version of the repository with changes made to actually make the code work on collab.
# ! git clone https://github.com/rosiezou/ssl_3d_recon.git
# In order to run this on AWS, make sure to add the flag for -D_GLIBCXX_USE_CXX11_ABI=0 back in makefile as on https://github.com/klnavaneet/ssl_3d_recon/blob/master/makefile
# AWS runs with a GNU version > 4.9. This flag will be needed in that case. Still relies on the extra tensorflow and cuda library paths provided in the custom makefile from
# + id="9u7HBd4FmZ1E"
# Makefile
# Note: Makefile should be inside src. Place it over there if it's not there.
# Also: Makefile and the shell scripts within chamfer_utils have been changed. Use the ones uploaded
# on the drive (For collab only).
# Makefile used on AWS will be different.
% cd /content/ssl_3d_recon/src
# ! make clean
# ! make
# + id="eBUpSj24tnp8"
# Mount google drive to connect to the training data
# from google.colab import drive
# drive.mount('/content/drive')
# + [markdown] id="iWYmmQhdxWwU"
# The steps below should be used for training the model on Colab. This works as long as the training data is uploaded to my Google Drive. :)
# + id="KpbkwJVxmaJH"
# Following the steps in https://github.com/klnavaneet/ssl_3d_recon/blob/master/README.md after running "make"
# % cd /content/ssl_3d_recon/src/utils/
# # !pwd
# Note: The path to test data may point to my google drive folder for prototyping. Make sure the python script contains the right path.
# Create tfrecords file for OURS-CC model
# # ! python create_tf_records.py
# + id="6dmRSwzZoEGH"
# Create tfrecords file for OURS-NN model
# # ! python create_tf_records_knn.py
# + id="s1gUB-GZ6j69"
# Try training if possible
# % cd /content/ssl_3d_recon/
# # ! bash run.sh
| SSL_python2_proto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: fe_test
# language: python
# name: fe_test
# ---
# ## Random sample imputation
#
# Imputation is the act of replacing missing data with statistical estimates of the missing values. The goal of any imputation technique is to produce a **complete dataset** that can then be used for machine learning.
#
# Random sampling imputation is in principle similar to mean / median / mode imputation, in the sense that it aims to preserve the statistical parameters of the original variable, for which data is missing.
#
# Random sampling consists of taking a random observation from the pool of available observations of the variable, and using that randomly extracted value to fill the NA. In random sample imputation, one draws as many random observations as there are missing values in the variable.
#
# By randomly sampling observations of the variable for those instances where data is available, we preserve, in expectation, the mean and standard deviation of the variable.
#
# By randomly sampling from the present categories, for categorical variables, we preserve, in expectation, the frequency of the different categories / labels within the variable.
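# The mechanics described above can be sketched in a few lines of pandas (a toy series; the same pattern is applied to real data later in this notebook):

```python
import numpy as np
import pandas as pd

s = pd.Series([3.0, np.nan, 7.0, np.nan, 5.0, 4.0, np.nan, 6.0])

# Draw as many random donor values from the observed data
# as there are missing values (seed fixed for reproducibility).
donors = s.dropna().sample(s.isnull().sum(), random_state=42)

# Align the donors with the positions of the NAs, then fill.
donors.index = s[s.isnull()].index
imputed = s.fillna(donors)

print(imputed.isnull().sum())  # 0 -- the series is now complete
```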
#
#
# ### Which variables can I impute by Random Sample Imputation?
#
# Random Sample Imputation can be applied to both numerical and categorical variables.
#
#
# ### Assumptions
#
# Random sample imputation assumes that the data are missing completely at random (MCAR). If this is the case, it makes sense to substitute the missing values by values extracted from the original variable distribution.
#
# From a probabilistic point of view, values that are more frequent, like the mean or the median or the most frequent category, for categorical variables, will be selected more often (because there are more of them to select from), but other less frequent values will be selected as well. Thus, the variance and distribution of the variable are preserved.
#
# The idea is to replace the population of missing values with a population of values with the same distribution of the original variable.
#
#
# ### Advantages
#
# - Easy to implement
# - Fast way of obtaining complete datasets
# - Preserves the variance of the variable
#
# ### Limitations
#
# - Randomness
# - The relationship of imputed variables with other variables may be affected if there are a lot of NA
# - Memory heavy for deployment, as we need to store the original training set to extract values from and replace the NA in coming observations.
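# The deployment point above can be made concrete with a minimal fit/transform sketch (an illustrative class, not a library API): the fitted object must keep a copy of the training values to draw from.

```python
import numpy as np
import pandas as pd


class RandomSampleImputer:
    """Minimal sketch: store the training donor pool at fit time,
    sample from it at transform time (illustrative, not a library API)."""

    def __init__(self, random_state=0):
        self.random_state = random_state

    def fit(self, series):
        # This stored copy is the "memory heavy" part mentioned above.
        self.donors_ = series.dropna()
        return self

    def transform(self, series):
        out = series.copy()
        n_missing = out.isnull().sum()
        if n_missing:
            fill = self.donors_.sample(n_missing, replace=True,
                                       random_state=self.random_state)
            fill.index = out[out.isnull()].index
            out = out.fillna(fill)
        return out


train = pd.Series([20.0, 30.0, np.nan, 40.0, 25.0])
test = pd.Series([np.nan, 35.0, np.nan])

imp = RandomSampleImputer().fit(train)
print(imp.transform(test).isnull().sum())  # 0
```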
#
#
# ### When to use Random Sample Imputation?
#
# - Data is missing completely at random
# - No more than 5% of the variable contains missing data
# - Well suited for linear models as it does not distort the distribution, regardless of the % of NA
#
# If used in combination with a Missing Indicator, as we will see in the next lecture, then this method can be used when data is not missing at random as well, or when there are many missing observations.
#
#
# #### Randomness
#
# Randomness may not seem much of a concern when replacing missing values for data competitions, where the whole batch of missing values is replaced once and then the dataset is scored and that is the end of the problem. However, in business scenarios the situation is very different.
#
# Imagine, for example, that a car manufacturer is trying to predict how long a certain car will be in the garage before it passes all the security tests. Today, they receive a car with missing data in some of the variables. They run the machine learning model to predict how long this car will stay in the garage; the model replaces the missing values with a random sample of the variable and then produces an estimate of time. Tomorrow, when they run the same model on the same car, the model will again randomly assign values to the missing data, which may or may not be the same as the ones it selected today. Therefore, the final estimate of time in the garage may or may not be the same as the one obtained the day before.
#
# In addition, imagine the car manufacturer evaluates 2 different cars that have exactly the same values for all of the variables, and missing values in exactly the same subset of variables. They run the machine learning model for each car, and because the missing data is randomly filled with values, the 2 cars, that are exactly the same, may end up with different estimates of time in the garage.
#
# This may sound trivial and unimportant; however, businesses must follow a variety of regulations, and some of them require that the same treatment be provided in the same situation. So if instead of cars these were people applying for a loan, or people seeking disease treatment, the machine learning model would end up providing different solutions to candidates that are otherwise in the same conditions. This is not fair or acceptable, and this behaviour needs to be avoided.
#
# #### So, should we randomly replace NA or not?
#
# It is still possible to replace missing data by random sample, but this randomness needs to be controlled, so that individuals in the same situation end up with the same scores and are therefore offered the same solutions. How can we ensure this? By appropriately setting seeds during the random extraction of values.
#
# Finally, another potential limitation of random sampling, similarly to replacing with the mean and median, is that estimates of covariance and correlations with other variables in the dataset may also be washed off by the randomness, particularly if there are a lot of missing observations.
#
#
# ### Final note
#
# Replacement of missing values by random sample, although similar in concept to replacement by the median or mean, is not as widely used in the data science community as the mean / median imputation, presumably because of the element of randomness, or because the code implementation is not so straightforward.
#
# However, it is a valid approach, with clear advantages over mean / median imputation as it preserves the distribution of the variable. And if you are mindful of the element of randomness and account for it somehow, this may as well be your method of choice, particularly for linear models.
#
# ## In this demo:
#
# We will learn how to perform random sample imputation using pandas on the Ames House Price and Titanic Datasets.
#
# - To download the datasets please refer to the lecture **Datasets** in **Section 1** of this course.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# to split and standarise the datasets
from sklearn.model_selection import train_test_split
# -
# ## Random Sampling for Numerical Variables
# +
# load the Titanic Dataset with a few variables for demonstration
data = pd.read_csv('../titanic.csv', usecols=['age', 'fare', 'survived'])
data.head()
# +
# let's look at the percentage of NA
data.isnull().mean()
# -
# ### Important note on imputation
#
# Imputation should be done over the training set and then propagated to the test set. This means that the random sample used to fill missing values in both the train and test sets should be extracted from the train set.
# +
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data,
data.survived,
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# +
# let's impute Age by random sampling both in
# train and test sets
# create the new variable where NA will be imputed:
# make a copy from the original variable, with NA
X_train['Age_imputed'] = X_train['age'].copy()
X_test['Age_imputed'] = X_test['age'].copy()
# extract the random sample to fill the na:
# remember we do this always from the train set, and we use
# these to fill both train and test
random_sample_train = X_train['age'].dropna().sample(
X_train['age'].isnull().sum(), random_state=0)
random_sample_test = X_train['age'].dropna().sample(
X_test['age'].isnull().sum(), random_state=0)
# what is all of the above code doing?
# 1) dropna() removes the NA from the original variable, this
# means that I will randomly extract existing values and not NAs
# 2) sample() is the method that will do the random sampling
# 3) X_train['Age'].isnull().sum() is the number of random values to extract
# I want to extract as many values as NAs are present in the original variable
# 4) random_state sets the seed for reproducibility, so that I extract
# always the same random values, every time I run this notebook
# pandas needs to have the same index in order to merge datasets
random_sample_train.index = X_train[X_train['age'].isnull()].index
random_sample_test.index = X_test[X_test['age'].isnull()].index
# replace the NA in the newly created variable
X_train.loc[X_train['age'].isnull(), 'Age_imputed'] = random_sample_train
X_test.loc[X_test['age'].isnull(), 'Age_imputed'] = random_sample_test
# -
# check that NA were imputed
X_train['Age_imputed'].isnull().sum()
# check that NA were imputed
X_test['Age_imputed'].isnull().sum()
X_train.head(15)
# We can see how NAs are replaced by different values in the different rows! This is what we wanted.
# #### Random sampling preserves the original distribution of the variable
# +
# we can see that the distribution of the variable after
# random sample imputation is almost exactly the same as the original
fig = plt.figure()
ax = fig.add_subplot(111)
X_train['age'].plot(kind='kde', ax=ax)
X_train['Age_imputed'].plot(kind='kde', ax=ax, color='red')
lines, labels = ax.get_legend_handles_labels()
ax.legend(lines, labels, loc='best')
# -
# We can see that replacing missing values with a random sample from the training set preserves the original distribution of the variable. If you remember from previous notebooks, every other imputation technique altered the distribution of Age, because the percentage of NA in Age is high, ~20%. However, random sample imputation preserves the distribution, even in those cases. So this imputation technique is quite handy, if we are building linear models and we don't want to distort normal distributions.
# +
# there is some change in the variance of the variable.
# however this change is much smaller compared to mean / median
# imputation (check the previous notebook for comparison)
print('Original variable variance: ', X_train['age'].var())
print('Variance after random imputation: ', X_train['Age_imputed'].var())
# +
# the covariance of Age with Fare is also less affected by this
# imputation technique compared to mean / median imputation
X_train[['fare', 'age', 'Age_imputed']].cov()
# +
# Finally, the outliers are also less affected by this imputation
# technique
# Let's find out using a boxplot
X_train[['age', 'Age_imputed']].boxplot()
# -
# So random sample imputation offers all the advantages provided by the preservation of the original distribution. And that is a big plus, particularly, if we care about distribution and outliers for our machine learning models. This is particularly relevant for linear models. But not so important for tree based algorithms.
# ## Randomness can lead to different scores being assigned to the same observation
#
# Let's examine the effect of randomness on multiple scoring, and how we can mitigate this behaviour, as this is very important when putting our models in production / integrating our models with live systems.
# +
# let's pick one observation with NA in Age
# in this case we pick observation indexed 5
observation = data[data.age.isnull()].head(1)
observation
# +
# and now let's fill that NA with a random value
# extracted from the same variable where observations are available
# extract a random value, just 1
sampled_value = X_train['age'].dropna().sample(1)
# re-index to 15 (the observation's index)
sampled_value.index = [15] # pandas needs the same index to be able to merge
# replace the NA with the sampled value
observation['Age_random'] = sampled_value
observation
# +
# let's repeat the exercise again:
# we fill the NA with another random extracted value
# extract a random value, just 1
sampled_value = X_train['age'].dropna().sample(1)
# re-index to 15 (the observation's index)
sampled_value.index = [15] #pandas needs the same index to be able to merge
# replace the NA with the sampled value
observation['Age_random'] = sampled_value
observation
# +
# and again
# we fill the NA with another random extracted value
# extract a random value, just 1
sampled_value = X_train['age'].dropna().sample(1)
# re-index to 15 (the observation's index)
sampled_value.index = [15] #pandas needs the same index to be able to merge
# replace the NA with the sampled value
observation['Age_random'] = sampled_value
observation
# -
# We can see that every time we repeat the operation, we get a different value replacement for exactly the same observation. In fact, if we repeat the process 1000 times:
# +
# if we repeat the process 1000 times:
values_ls = []
# capture the non-Na values to speed
# the computation
tmp = X_train.age.dropna()
for i in range(1000):
# extract a random value, just 1
sampled_value = tmp.sample(1).values
# add the extracted value to the list
    values_ls.append(float(sampled_value[0]))
pd.Series(values_ls).hist(bins=50)
plt.xlabel('Randomly Extracted Values')
plt.ylabel('Number of times')
# -
# We obtain very different values for the same observation. Note how the distribution of extracted values is similar to the distribution of Age.
#
# If this were patients looking for treatment, every time we run a predictive model, which would operate on the differently randomly extracted values, we would assign patients with the same characteristics to different treatments, and this is not OK.
#
# ### How can we fix this behaviour?
#
# We can fix this randomness by assigning a seed:
# +
values_ls = []
for i in range(100):
# extract a random value, just 1, now with seed
sampled_value = X_train.age.dropna().sample(1, random_state=10)
# add random value to the list
    values_ls.append(float(sampled_value.iloc[0]))
# print the values
pd.Series(values_ls).unique()
# -
values_ls
# Now that we set the seed, every randomly extracted value for that observation is the same.
#
# However, if we set the same seed for every single observation, what would happen is that for every different observation, we would be filling the NA with exactly the same value (same seed == same random value extracted). This would be the equivalent to arbitrary value imputation!!!
#
# We don't want that behaviour either.
#
# Therefore, we want our seed to change observation per observation, but in a controlled manner, so that 2 observations that are exactly the same, receive the same imputed random values. But 2 observations that are different, receive different imputed random values.
# ### Controlling the element of randomness by varying the seed
#
# We can attribute a different seed to each observation, and in fact we can make this seed depend on another variable of the same observation. Thinking of the Titanic dataset, if 2 passengers paid exactly the same Fare, they would get exactly the same probability of survival (when Age is missing).
# +
# let's pick one observation with NA in Age
# in this case we pick the observation at index 15
observation = data[data.age.isnull()].head(1)
observation
# +
# the seed is now the Fare
int(observation.fare)
# +
# we assign the Fare as the seed in the random sample extraction
sampled_value = X_train.age.dropna().sample(1,
random_state=int(observation.fare))
sampled_value.index = [15]
observation['Age_random'] = sampled_value
observation
# +
# for a different observation with a different Fare,
# we would get a different randomly extracted value
observation = data[data.age.isnull()].tail(1)
observation
# -
# new seed
int(observation.fare)
# +
# we assign the Fare as the seed in the random sample extraction
sampled_value = X_train.age.dropna().sample(1,
random_state=int(observation.fare))
sampled_value.index = [1305]
observation['Age_random'] = sampled_value
observation
# -
# This is a way of controlling the randomness. Using the Fare to set the random state, you guarantee that for 2 passengers with equal Fare, the Age will be replaced with the same number, and therefore the 2 passengers will get the same probability of survival.
#
# ### Note!!
#
# In real life, you will build models that use tens of variables or more. In cases like those, you can pick the 3-5 most important variables (those that have the strongest impact on the output of the machine learning model) and combine them to create the random state. Therefore, customers that share the values of the 3-5 main variables will get the same scores.
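# One way to combine several variables into a reproducible seed (an illustrative helper, not part of the course material) is to hash their values, so that two observations with identical values in the chosen columns always receive the same seed:

```python
import hashlib

import pandas as pd


def observation_seed(row, cols):
    """Derive a deterministic seed from a few key feature values.

    Hypothetical helper: hashlib.md5 is used because Python's built-in
    hash() is salted per process and would not be reproducible.
    """
    key = '|'.join(str(row[c]) for c in cols)
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % (2 ** 32)  # a valid pandas random_state


row_a = pd.Series({'fare': 71.28, 'pclass': 1, 'sex': 'female'})
row_b = pd.Series({'fare': 71.28, 'pclass': 1, 'sex': 'female'})

# Identical observations -> identical seeds -> identical imputed values.
print(observation_seed(row_a, ['fare', 'pclass', 'sex']) ==
      observation_seed(row_b, ['fare', 'pclass', 'sex']))  # True
```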
# ## Random Sampling for Categorical Variables
# +
# let's load the dataset with a few columns for the demonstration
cols_to_use = ['BsmtQual', 'FireplaceQu', 'SalePrice']
data = pd.read_csv('../houseprice.csv', usecols=cols_to_use)
# let's inspect the percentage of missing values in each variable
data.isnull().mean().sort_values(ascending=True)
# +
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data,
data.SalePrice,
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# +
# let's impute BsmtQual by random sampling both in
# train and test sets
# create the new variable where NA will be imputed
# make a copy from the original variable, with NA
X_train['BsmtQual_imputed'] = X_train['BsmtQual'].copy()
X_test['BsmtQual_imputed'] = X_test['BsmtQual'].copy()
# extract the random sample to fill the na:
# remember we do this always from the train set, and we use
# these to fill both train and test
random_sample_train = X_train['BsmtQual'].dropna().sample(
X_train['BsmtQual'].isnull().sum(), random_state=0)
random_sample_test = X_train['BsmtQual'].dropna().sample(
X_test['BsmtQual'].isnull().sum(), random_state=0)
# what is all of the above code doing?
# 1) dropna() removes the NA from the original variable, this
# means that I will randomly extract existing values and not NAs
# 2) sample() is the method that will do the random sampling
# 3) X_train['BsmtQual'].isnull().sum() is the number of random values to extract
# I want to extract as many values as NAs are present in the original variable
# 4) random_state sets the seed for reproducibility, so that I extract
# always the same random values, every time I run this notebook
# pandas needs to have the same index in order to merge datasets
random_sample_train.index = X_train[X_train['BsmtQual'].isnull()].index
random_sample_test.index = X_test[X_test['BsmtQual'].isnull()].index
# replace the NA in the newly created variable
X_train.loc[X_train['BsmtQual'].isnull(), 'BsmtQual_imputed'] = random_sample_train
X_test.loc[X_test['BsmtQual'].isnull(), 'BsmtQual_imputed'] = random_sample_test
# +
# let's impute FireplaceQu by random sampling both in
# train and test sets
# create the new variable where NA will be imputed
# make a copy from the original variable, with NA
X_train['FireplaceQu_imputed'] = X_train['FireplaceQu'].copy()
X_test['FireplaceQu_imputed'] = X_test['FireplaceQu'].copy()
# extract the random sample to fill the na:
# remember we do this always from the train set, and we use
# these to fill both train and test
random_sample_train = X_train['FireplaceQu'].dropna().sample(
X_train['FireplaceQu'].isnull().sum(), random_state=0)
random_sample_test = X_train['FireplaceQu'].dropna().sample(
X_test['FireplaceQu'].isnull().sum(), random_state=0)
# what is all of the above code doing?
# 1) dropna() removes the NA from the original variable, this
# means that I will randomly extract existing values and not NAs
# 2) sample() is the method that will do the random sampling
# 3) X_train['FireplaceQu'].isnull().sum() is the number of random values to extract
# I want to extract as many values as NAs are present in the original variable
# 4) random_state sets the seed for reproducibility, so that I extract
# always the same random values, every time I run this notebook
# pandas needs to have the same index in order to merge datasets
random_sample_train.index = X_train[X_train['FireplaceQu'].isnull()].index
random_sample_test.index = X_test[X_test['FireplaceQu'].isnull()].index
# replace the NA in the newly created variable
X_train.loc[X_train['FireplaceQu'].isnull(), 'FireplaceQu_imputed'] = random_sample_train
X_test.loc[X_test['FireplaceQu'].isnull(), 'FireplaceQu_imputed'] = random_sample_test
# -
# check that nulls were removed
X_train['FireplaceQu_imputed'].isnull().sum()
# +
# and now let's evaluate the effect of the imputation on the distribution
# of the categories and the target within those categories
# we used a similar function in the notebook of arbitrary value imputation
# for categorical variables
def categorical_distribution(df, variable_original, variable_imputed):
tmp = pd.concat(
[
# percentage of observations per category, original variable
df[variable_original].value_counts() / len(df[variable_original].dropna()),
# percentage of observations per category, imputed variable
df[variable_imputed].value_counts() / len(df)
],
axis=1)
# add column names
tmp.columns = ['original', 'imputed']
return tmp
# -
# run the function in a categorical variable
categorical_distribution(X_train, 'BsmtQual', 'BsmtQual_imputed')
# run the function in a categorical variable
categorical_distribution(X_train, 'FireplaceQu', 'FireplaceQu_imputed')
# As expected, the percentage of observations within each category is very similar in the original and imputed variables, both for BsmtQual, where NA are low, and for FireplaceQu, where NA are high.
# +
# now let's look at the distribution of the target within each
# variable category
def automate_plot(df, variable, target):
fig = plt.figure()
ax = fig.add_subplot(111)
for category in df[variable].dropna().unique():
df[df[variable]==category][target].plot(kind='kde', ax=ax)
# add the legend
lines, labels = ax.get_legend_handles_labels()
labels = df[variable].dropna().unique()
ax.legend(lines, labels, loc='best')
plt.show()
# -
automate_plot(X_train, 'BsmtQual', 'SalePrice')
automate_plot(X_train, 'BsmtQual_imputed', 'SalePrice')
automate_plot(X_train, 'FireplaceQu', 'SalePrice')
automate_plot(X_train, 'FireplaceQu_imputed', 'SalePrice')
# For BsmtQual, where the NA are low, the distribution of the target is preserved for the categories in the original and imputed variable. However, for FireplaceQu, which contains more NAs, the distribution of the target per category is affected slightly.
#
# ## Note on Random Sample Imputation code
#
# The code provided in this notebook for random sampling is a bit complex. Don't worry! You can do random sample imputation with the Feature-engine package in just a couple of lines. I will show you how in a coming notebook.
#
| 04.07-Random-Sample-Imputation.ipynb |