# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Heroes Of Pymoli Data Analysis
# * Of the 1163 active players, the vast majority are male (84%). There is also a smaller but notable proportion of female players (14%).
#
# * Our peak age demographic falls between 20-24 (44.8%) with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%).
# -----
# ### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data
# -
# ## Player Count
# * Display the total number of players
#
# +
player_data = purchase_data.loc[:, ["SN", "Gender", "Age"]].drop_duplicates()
number_players = len(player_data)
number_players
player_data_df = pd.DataFrame({"Total Players": [number_players]})
player_data_df
# -
# * Run basic calculations to obtain number of unique items, average price, etc.
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
#
unique_items = len(purchase_data["Item Name"].unique())
mean_price = purchase_data["Price"].mean()
total_revenue = purchase_data["Price"].sum()
number_purchases = len(purchase_data["Purchase ID"])
summary_data_df = pd.DataFrame({
"Number of Unique Items": [unique_items],
"Average Price": [mean_price],
"Number of Purchases": [number_purchases],
"Total Revenue": [total_revenue]
})
summary_data_df["Average Price"] = summary_data_df["Average Price"].map("${:,.2f}".format)
summary_data_df["Total Revenue"] = summary_data_df["Total Revenue"].map("${:,.2f}".format)
summary_data_df
# ## Gender Demographics
# * Percentage and Count of Male Players
#
#
# * Percentage and Count of Female Players
#
#
# * Percentage and Count of Other / Non-Disclosed
#
#
#
gender_count = player_data["Gender"].value_counts()
gender_percentage = gender_count / number_players *100
gender_percentage
gender_demographics_df = pd.DataFrame({
"Total Count": gender_count,
"Percentage of Players": gender_percentage
})
gender_demographics_df
gender_demographics_df["Percentage of Players"] = gender_demographics_df["Percentage of Players"].map("{:.2f}%".format)
gender_demographics_df
gender_group = purchase_data.groupby("Gender")
purchase_count = gender_group["Purchase ID"].count()
average_purchase_price = gender_group["Price"].mean()
total_purchase_value = gender_group["Price"].sum()
purchase_per_person = total_purchase_value / gender_count
summary_table_df = pd.DataFrame({
"Purchase Count" : purchase_count,
"Average Purchase Price" : average_purchase_price,
"Total Purchase Value" : total_purchase_value,
"Avg Total Purchase per Person" :purchase_per_person
})
summary_table_df
summary_table_df["Average Purchase Price"] = summary_table_df["Average Purchase Price"].map("${:.2f}".format)
summary_table_df["Total Purchase Value"] = summary_table_df["Total Purchase Value"].map("${:,.2f}".format)
summary_table_df["Avg Total Purchase per Person"] = summary_table_df["Avg Total Purchase per Person"].map("${:.2f}".format)
summary_table_df
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#
#
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
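# The per-gender calculations above can also be collapsed into a single
# `groupby().agg()` call. A minimal sketch on hypothetical purchase data
# (the column names mirror the real dataset; the rows are made up):

```python
import pandas as pd

# Toy purchase data standing in for purchase_data above
purchases = pd.DataFrame({
    "SN": ["a", "a", "b", "c"],
    "Gender": ["Male", "Male", "Female", "Male"],
    "Price": [1.0, 2.0, 3.0, 4.0],
})

# One agg() call replaces the separate groupby chains above
summary = purchases.groupby("Gender").agg(
    purchase_count=("Price", "count"),
    avg_price=("Price", "mean"),
    total_value=("Price", "sum"),
)
# "Per person" divides by unique players, not purchases
summary["avg_per_person"] = (
    summary["total_value"] / purchases.groupby("Gender")["SN"].nunique()
)
print(summary)
```

# Named aggregation keeps the intermediate variables to a minimum and
# guarantees all statistics come from the same grouping.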
# ## Age Demographics
# * Establish bins for ages
#
#
# * Categorize the existing players using the age bins. Hint: use pd.cut()
#
#
# * Calculate the numbers and percentages by age group
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: round the percentage column to two decimal points
#
#
# * Display Age Demographics Table
#
# +
player_data["Age Ranges"] = pd.cut(player_data["Age"],
[0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 100],
labels=["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"])
total_count = player_data["Age Ranges"].value_counts()
percentage_count = total_count / number_players * 100
age_df = pd.DataFrame({ "Total Count" : total_count,
"Percentage of Players" : percentage_count
})
age_df["Percentage of Players"] = age_df["Percentage of Players"].map("{:.2f}%".format)
age_df.sort_index()
# -
# ## Purchasing Analysis (Age)
# * Bin the purchase_data data frame by age
#
#
# * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display the summary data frame
# +
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"],
[0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 100],
labels=["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"])
age_group = purchase_data.groupby("Age Ranges")
purchase_count = age_group["Price"].count()
avg_purchase_price = age_group["Price"].mean()
total_purchase = age_group["Price"].sum()
avg_per_person = total_purchase / age_df["Total Count"]
age_range_df =pd.DataFrame({
"Purchase Count" : purchase_count,
"Average Purchase Price" : avg_purchase_price,
"Total Purchase Value" : total_purchase,
"Average Price Per Person" : avg_per_person,
})
age_range_df["Average Purchase Price"] = age_range_df["Average Purchase Price"].map("${:.2f}".format)
age_range_df["Average Price Per Person"] = age_range_df["Average Price Per Person"].map("${:.2f}".format)
age_range_df
# -
# ## Top Spenders
# * Run basic calculations to obtain the results in the table below
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the total purchase value column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
# +
top_spenders = purchase_data.groupby("SN")["Price"].sum()
purchase_count = purchase_data.groupby("SN")["Purchase ID"].count()
average_purchase_price = purchase_data.groupby("SN")["Price"].mean()
top_spenders_df = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value": top_spenders})
top_spenders_table = top_spenders_df.sort_values(by=["Total Purchase Value"], ascending=False)
top_spenders_table["Average Purchase Price"] = top_spenders_table["Average Purchase Price"].map("${:.2f}".format)
top_spenders_table["Total Purchase Value"] = top_spenders_table["Total Purchase Value"].map("${:,.2f}".format)
top_spenders_table_head = top_spenders_table.head()
top_spenders_table_head
# -
# ## Most Popular Items
# * Retrieve the Item ID, Item Name, and Item Price columns
#
#
# * Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
#
#
# * Create a summary data frame to hold the results
#
#
# * Sort the purchase count column in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the summary data frame
#
#
item_group = purchase_data.groupby(["Item ID", "Item Name"])
total_purchase_value = item_group["Price"].sum()
purchase_count = item_group["Purchase ID"].count()
average_price = item_group["Price"].mean()
popular_item = pd.DataFrame({
"Purchase Count": purchase_count,
"Item Price" : average_price,
"Total Purchase Value" : total_purchase_value
})
# Format a display copy so the numeric columns stay sortable
popular_item_display = popular_item.copy()
popular_item_display["Item Price"] = popular_item_display["Item Price"].map("${:.2f}".format)
popular_item_display["Total Purchase Value"] = popular_item_display["Total Purchase Value"].map("${:,.2f}".format)
popular_item_display.sort_values(by=["Purchase Count"], ascending=False).head()
# +
item_total_purchase = popular_item.sort_values(by="Total Purchase Value", ascending=False)
item_total_purchase.head()
# -
# ## Most Profitable Items
# * Sort the above table by total purchase value in descending order
#
#
# * Optional: give the displayed data cleaner formatting
#
#
# * Display a preview of the data frame
#
#
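# The steps above can be sketched on hypothetical data (item names and
# values are made up for illustration). The key point: sort on the numeric
# column *before* applying currency formatting, since formatted strings
# like "$9.99" sort lexicographically rather than numerically.

```python
import pandas as pd

# Hypothetical item summary standing in for the grouped purchase data above
popular_items = pd.DataFrame({
    "Item Name": ["Oathbreaker", "Nirvana", "Fiery Glass Crusader"],
    "Purchase Count": [12, 9, 9],
    "Total Purchase Value": [50.76, 44.10, 41.22],
})

# Sort numerically first, then format for display
profitable = popular_items.sort_values("Total Purchase Value", ascending=False)
profitable["Total Purchase Value"] = profitable["Total Purchase Value"].map("${:,.2f}".format)
profitable.head()
```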
| HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_starter-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # %load solution.py
# Import important libraries
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial.distance import cdist
from itertools import chain
from itertools import repeat
from collections import OrderedDict
import xml.etree.ElementTree as ET
config = {}
# E.g. "1.256660 0.431805 -4.981400"
def parse_coords(text):
return [float(x) for x in text.split(' ')]
def iter_dataset(xml_tree):
for child in xml_tree.getroot():
name = int(child.tag.split('_')[1])
try:
energy = float(child.find('energy').text)
except AttributeError:
energy = np.nan
atoms = [parse_coords(element.text) for element in child.find('coordinates').findall('c')]
for i, coords in enumerate(atoms):
yield {'Entry':name, 'Energy':energy, 'Atom': i, 'X':coords[0], 'Y':coords[1], 'Z':coords[2]}
def parse_dataset(xml_file):
xml_tree = ET.parse(xml_file)
training_set = list(iter_dataset(xml_tree))
return pd.DataFrame(training_set, columns=('Entry', 'Energy', 'Atom', 'X', 'Y', 'Z'))
def get_pos(data, entry):
# Convert the X, Y, Z position for entry to a numpy array of size 60x3
# Get single entry
E = data[data['Entry'] == entry]
if E.empty:
print('Invalid Entry id!')
return None
# Get the position in format Nx3
E_ = E.apply(lambda row: [row['X'], row['Y'], row['Z']], axis=1).values
# Transform it to a numpy array
Epos = np.reshape(list(chain(*E_)), (60, 3))
return Epos
def get_distance(pos0, pos1, method='atom_pos'):
# Calculate a distance value between e0 and e1 based on
# method='atom_pos' ... their cumulative difference in atom positions
# method='mesh_size' ... the abs. diff in mean atom gap size (i.e. mesh size)
# method='mesh_size_variance' ... the abs. diff of the variance of the mean atom gap size (i.e. variance of the mesh size)
if method == 'atom_pos':
# Calculate the distance matrix
D = cdist(pos0, pos1, metric='euclidean')
# Find the closest match for each point
assignment = np.argsort(D, axis=1)[:, 0]
# Calculate distance between each point to its assigned point
distance = np.sum(np.sqrt(np.sum((pos0 - pos1[assignment, :])**2, axis=1)))
elif method == 'mesh_size':
# For each atom calculate the mean distance to its three closest neighbours
D0 = cdist(pos0, pos0, metric='euclidean')
D0.sort(axis=1)
D0_mesh_size = np.mean(D0[:, 1:4])
D1 = cdist(pos1, pos1, metric='euclidean')
D1.sort(axis=1)
D1_mesh_size = np.mean(D1[:, 1:4])
distance = np.abs(D0_mesh_size - D1_mesh_size)
elif method == 'mesh_size_variance':
# For each atom calculate the mean distance to its three closest neighbours
D0 = cdist(pos0, pos0, metric='euclidean')
D0.sort(axis=1)
D0_mesh_size_var = np.var(np.mean(D0[:, 1:4], axis=1))
D1 = cdist(pos1, pos1, metric='euclidean')
D1.sort(axis=1)
D1_mesh_size_var = np.var(np.mean(D1[:, 1:4], axis=1))
distance = np.abs(D0_mesh_size_var - D1_mesh_size_var)
return distance
def calculate_ranking(prediction_data, lookup_data, distance_method=''):
# For each entry in 'prediction_data', rank all entries in 'lookup_data'
#
# Return an ordered dictionary containing, for each prediction_data entry,
# a tuple describing the similarity/distance to each entry in the lookup table.
prediction_entries = prediction_data['Entry'].drop_duplicates()
lookup_entries = lookup_data['Entry'].drop_duplicates()
results = OrderedDict()
for pre in prediction_entries:
ranking = []
e0pos = get_pos(prediction_data, pre)
for (e0, e1) in zip(repeat(pre), lookup_entries):
e1pos = get_pos(lookup_data, e1)
d = get_distance(e1pos, e0pos, method=distance_method)
ranking.append((d, e1))
ranking.sort()
results[pre] = ranking
return results
def get_predictions(results, lookup_data):
# Based on the ranking, calculate an energy value for each entry by
# taking the mean energy value of its 3 closest matches.
entries = []
predictions = []
for entry_id in results.keys():
entries.append(entry_id)
closest_entries = [res[1] for res in results[entry_id][0:3]]
predictions.append(np.mean(get_energies(lookup_data, closest_entries)))
return entries, predictions
def single_stage_prediction(training, validation):
ranking = calculate_ranking(validation, training, distance_method='atom_pos')
entries, predictions = get_predictions(ranking, training)
return entries, predictions
def two_stage_prediction(training, validation, energy_sw=0.05, distance_methods=['atom_pos', 'mesh_size_variance']):
ranking = calculate_ranking(validation, training, distance_method=distance_methods[0])
entries, predictions = get_predictions(ranking, training)
# For each entry in the first prediction, generate a subset of the training data
# and apply another distance metric to the subset in order to calculate
# an improved prediction
new_predictions = []
for entry_id, predicted_energy in zip(entries, predictions):
# Calculate a subset of the data
training_subset = training[(training['Energy'] > (predicted_energy-energy_sw)) & (training['Energy'] < (predicted_energy+energy_sw))]
validation_subset = validation[validation['Entry'] == entry_id]
new_ranking = calculate_ranking(validation_subset, training_subset, distance_method=distance_methods[1])
_, new_prediction = get_predictions(new_ranking, training_subset)
new_predictions.append(new_prediction[0])
return entries, new_predictions
############### HELPER FUNCTIONS - NOT PART OF THE ALGORITHM ###############
def evaluate_prediction(entry_ids, predicted_energies, lookup_table):
# Calculate the prediction error
prediction_errors = []
for entry_id, predicted_energy in zip(entry_ids, predicted_energies):
real_energy = lookup_table[lookup_table['Entry'] == entry_id]['Energy'].values[0]
prediction_errors.append(predicted_energy - real_energy)
return np.array(prediction_errors)
def cross_validation(n_tests, n_entries, training_data, prediction_function, kwargs={}):
prediction_errors = np.zeros(shape=(n_tests, n_entries))
for n in range(0, n_tests):
# Split the training data into a new set of training and validation data in order to test the algorithm
validation_entries = set(np.random.choice(training_data['Entry'].unique(), n_entries, replace=False))
training_entries = set(training_data['Entry'].unique()) - validation_entries
print('Running Test (%d/%d) with validation entries %s ...' % (n+1, n_tests, validation_entries))
training = training_data[training_data['Entry'].isin(training_entries)]
validation = training_data[training_data['Entry'].isin(validation_entries)]
entries, predictions = prediction_function(training, validation, **kwargs)
prediction_errors[n, :] = evaluate_prediction(entries, predictions, training_data)
return prediction_errors
def get_energies(table, entries):
return [table[table['Entry'] == entry]['Energy'].values[0] for entry in entries]
def get_closest_entries(table, energy):
uT = table[['Entry', 'Energy']].drop_duplicates()
energies = uT['Energy'].values
entries = uT['Entry'].values
diff_energies = (energies - energy)**2
closest_energies = np.argsort(diff_energies)
closest_entries = entries[closest_energies]
return closest_entries, energies[closest_energies]
# -
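# As a standalone illustration of the 'atom_pos' metric from get_distance
# above, re-implemented here with plain NumPy so it runs in isolation: the
# distance between a point cloud and itself is zero, and it grows once the
# cloud is perturbed. The 60x3 shape mirrors the fullerene entries above.

```python
import numpy as np

def atom_pos_distance(pos0, pos1):
    # Mirrors get_distance(..., method='atom_pos'): match every atom in pos0
    # to its nearest atom in pos1 and sum the matched Euclidean distances.
    D = np.linalg.norm(pos0[:, None, :] - pos1[None, :, :], axis=2)
    assignment = np.argmin(D, axis=1)
    return np.sum(np.linalg.norm(pos0 - pos1[assignment, :], axis=1))

rng = np.random.default_rng(0)
pos = rng.normal(size=(60, 3))                  # hypothetical 60-atom cloud
print(atom_pos_distance(pos, pos))              # identical clouds -> 0.0
print(atom_pos_distance(pos, pos + 0.1) > 0)    # perturbed cloud -> positive
```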
# Load data
training = parse_dataset('data/new_training_set.xml')
validation = parse_dataset('data/new_validation_set.xml')
submission = pd.read_csv('data/return_file_template.csv', sep=';')
# Perform prediction
entries, energies = two_stage_prediction(training, validation, energy_sw=0.5, distance_methods=['atom_pos', 'mesh_size_variance'])
# Write submission file based on template
submission['energy'] = energies
submission.to_csv('final_submission.csv', index=False)
submission
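# The energy-window filtering inside two_stage_prediction can be shown on
# toy data (the entries and energies below are hypothetical): only training
# entries whose energy lies within +/- energy_sw of the first-stage
# prediction survive into the second ranking stage.

```python
import pandas as pd

# Hypothetical training energies
training = pd.DataFrame({"Entry": [1, 2, 3, 4],
                         "Energy": [0.10, 0.45, 0.55, 0.90]})
predicted_energy, energy_sw = 0.5, 0.1

# Same boolean mask as in two_stage_prediction above
subset = training[(training["Energy"] > predicted_energy - energy_sw)
                  & (training["Energy"] < predicted_energy + energy_sw)]
print(subset["Entry"].tolist())  # -> [2, 3]
```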
| src/final_submission.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2D Advection-Diffusion equation
# In this notebook we provide a simple example of the DeepMoD algorithm and apply it to the 2D advection-diffusion equation.
# +
# General imports
import numpy as np
import torch
# DeepMoD functions
from deepymod import DeepMoD
from deepymod.model.func_approx import NN
from deepymod.model.library import Library2D_third
from deepymod.model.constraint import LeastSquares
from deepymod.model.sparse_estimators import Threshold,PDEFIND
from deepymod.training import train
from deepymod.training.sparsity_scheduler import TrainTestPeriodic
from scipy.io import loadmat
# Settings for reproducibility
np.random.seed(1)
torch.manual_seed(1)
if torch.cuda.is_available():
device = 'cuda'
else:
device = 'cpu'
# -
# ## Prepare the data
# Next, we prepare the dataset.
data = loadmat('Diffusion_2D_space41.mat')
data = np.real(data['Expression1']).reshape((41,41,41,4))[:,:,:,3]
x_dim, y_dim, t_dim = data.shape
time_range = [1,2]
for i in time_range:
# Downsample data and prepare data without noise:
down_data = np.take(np.take(np.take(data, np.arange(0, x_dim, 6), axis=0), np.arange(0, y_dim, 6), axis=1), np.arange(0, t_dim, i), axis=2)
print("Downsampled shape:", down_data.shape, "Total number of data points:", np.prod(down_data.shape))
index = len(np.arange(0,t_dim,i))
width, width_2, steps = down_data.shape
x_arr, y_arr, t_arr = np.linspace(0,1,width), np.linspace(0,1,width_2), np.linspace(0,1,steps)
x_grid, y_grid, t_grid = np.meshgrid(x_arr, y_arr, t_arr, indexing='ij')
X, y = np.transpose((t_grid.flatten(), x_grid.flatten(), y_grid.flatten())), np.float32(down_data.reshape((down_data.size, 1)))
# Add noise
noise_level = 0.02
y_noisy = y + noise_level * np.std(y) * np.random.randn(y.size, 1)
# Randomize data
idx = np.random.permutation(y.shape[0])
X_train = torch.tensor(X[idx, :], dtype=torch.float32, requires_grad=True).to(device)
y_train = torch.tensor(y_noisy[idx, :], dtype=torch.float32).to(device)
# Configure DeepMoD
network = NN(3, [40, 40, 40, 40], 1)
library = Library2D_third(poly_order=0)
estimator = Threshold(0.05)
sparsity_scheduler = TrainTestPeriodic(periodicity=50, patience=200, delta=1e-5)
constraint = LeastSquares()
model = DeepMoD(network, library, estimator, constraint).to(device)
optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.99), amsgrad=True, lr=2e-3)
logdir='final_runs/2_noise_x07/'+str(index)+'/'
train(model, X_train, y_train, optimizer,sparsity_scheduler, log_dir=logdir, split=0.8, max_iterations=50000, delta=1e-6, patience=200)
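# The nested np.take calls above are equivalent to plain strided slicing;
# a minimal sketch on a synthetic 41x41x41 array (shapes match the data
# loaded above, values are arbitrary):

```python
import numpy as np

data = np.arange(41 * 41 * 41, dtype=float).reshape(41, 41, 41)

# Nested np.take along each axis ...
down_take = np.take(np.take(np.take(data, np.arange(0, 41, 6), axis=0),
                            np.arange(0, 41, 6), axis=1),
                    np.arange(0, 41, 2), axis=2)
# ... is the same as strided slicing:
down_slice = data[::6, ::6, ::2]
print(down_take.shape)                        # (7, 7, 21)
print(np.array_equal(down_take, down_slice))  # True
```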
| paper/Advection_diffusion/AD_artificial/Loop_noise_0_07_noise2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3-fastai (Python3.6.1)
# language: python
# name: py3-fastai
# ---
# +
import random
import numpy as np
import pandas as pd
import xarray
import matplotlib.pyplot as plt
import seaborn as sns
import skimage as skim
from tqdm import tqdm as tqdm
from src.plot import orig_vs_transformed as plot_ovt
from src.data_loader import Shifted_Data_Loader
# -
DL = Shifted_Data_Loader('fashion_mnist',
rotation=None,
translation=0.6,
bg_noise=0.1,
flatten=False,
noise_mode='gaussian',
bg_only=True,
)
# +
from src.data_loader import Shifted_Data_Loader
add_bg_noise = Shifted_Data_Loader.add_bg_noise
# noise_ims = DL.add_bg_noise(DL.sx_train.copy(),bg_only=False)
# +
from src.data_loader import norm_to_8bit
sxt = DL.fg_test
noise_ims = skim.util.random_noise(sxt,mode='s&p')
plt.imshow(np.squeeze(noise_ims[50]))
# -
DL.regen_bg_noise(mode='gaussian',var=0.6**2)
# +
idx = 25
DL.sx_test = DL.fg_test.copy()
DL.add_noise(DL.sx_test,DL.bg_test,bg_only=True)
# DL.add_noise(DL.sx_test,DL.bg_test, bg_only=True)
noise_ims = DL.sx_test
ims = [DL.fg_test[idx],noise_ims[idx],DL.bg_test[idx]]
fig,axs = plt.subplots(1,len(ims),figsize=(len(ims)*4,4))
for im,ax in zip(ims,axs):
ax.imshow(np.squeeze(im),vmin=0,vmax=1)
# plt.colorbar()
# +
modes = ['gaussian','localvar','poisson','salt','pepper','s&p','speckle']
kws = [{'var':0.1},{},{},{},{},{},{'var':0.1}]
orig_im = DL.fg_test[50]
noise_ims = [skim.util.random_noise(orig_im,mode=m,**kwargs) for m,kwargs in zip(modes,kws)]
ims = [orig_im]
ims.extend(noise_ims)
fig,axs = plt.subplots(1,len(ims),figsize=(len(ims)*4,4))
for ax,im in zip(axs,ims):
ax.imshow(np.squeeze(im))
# -
plt.imshow(np.squeeze(noise_ims[0]))
cls_masks = [DL.y_test==i for i in np.arange(10)]
def plot_fg(DL,index=None,cmap='gray',clean=True):
X = DL.x_test
sX = DL.fg_test
if index is None:
idxs = np.random.randint(0,len(X)/10,size=10)
im_scale = 3
n_rows=2
n_cols=10
figure,axs = plt.subplots(n_rows,n_cols,figsize=(n_cols*im_scale,n_rows*im_scale))
rand_class = 9
for ax_col,idx,m in zip(np.swapaxes(axs,0,1),idxs,reversed(cls_masks)):
ax_col[0].imshow(X[m][idx].squeeze(),cmap=cmap)
# ax_row[1].imshow(sX[m][idx].squeeze(),cmap=cmap)
ax_col[1].imshow(sX[cls_masks[rand_class]][idx].squeeze(),cmap=cmap)
if clean:
for ax in np.ravel(axs):
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# axs[1].get_xaxis().set_visible(False)
# axs[1].get_yaxis().set_visible(False)
plt.tight_layout()
return (idxs)
plot_fg(DL)
pt,idx = plot_ovt(DL,cmap='gray')
import brainscore
neural_data = brainscore.get_assembly(name="dicarlo.Majaj2015")
neural_data.load()
stimulus_set = neural_data.attrs['stimulus_set']
def process_dicarlo(assembly,avg_repetition=True,variation=3,tasks=['ty','tz','rxy']):
stimulus_set = assembly.attrs['stimulus_set']
stimulus_set['dy_deg'] = stimulus_set.tz*stimulus_set.degrees
stimulus_set['dx_deg'] = stimulus_set.ty*stimulus_set.degrees
stimulus_set['dy_px'] = stimulus_set.dy_deg*32
stimulus_set['dx_px'] = stimulus_set.dx_deg*32
assembly.attrs['stimulus_set'] = stimulus_set
data = assembly.sel(variation=variation)
groups = ['category_name', 'object_name', 'image_id']+tasks
if not avg_repetition:
groups.append('repetition')
data = data.multi_groupby(groups) # (2)
data = data.mean(dim='presentation')
data = data.squeeze('time_bin') # (3)
data.attrs['stimulus_set'] = stimulus_set.query('variation == {}'.format(variation))
data = data.T
return data
hi_data = process_dicarlo(neural_data,variation=6)
# +
from scipy.stats import pearsonr
# walk_coords and Score below come from the brain-score package
def SUCorrelation(da,neuroid_coord,correlation_vars,exclude_zeros=True):
if exclude_zeros:
nz_neuroids = da.groupby(neuroid_coord).sum('presentation').values!=0
da = da[:,nz_neuroids]
correlations = np.empty((len(da[neuroid_coord]),len(correlation_vars)))
for i,nid in tqdm(enumerate(da[neuroid_coord].values),total=len(da[neuroid_coord])):
for j,prop in enumerate(correlation_vars):
n_act = da.sel(**{neuroid_coord:nid}).squeeze()
r,p = pearsonr(n_act,prop)
correlations[i,j] = np.abs(r)
neuroid_dim = da[neuroid_coord].dims
c = {coord: (dims, values) for coord, dims, values in walk_coords(da) if dims == neuroid_dim}
c['task']=('task',[v.name for v in correlation_vars])
# print(neuroid_dim)
result = Score(correlations,
coords=c,
dims=('neuroid','task'))
return result
def result_to_df(SUC,corr_var_labels):
df = SUC.neuroid.to_dataframe().reset_index()
for label in corr_var_labels:
df[label]=SUC.sel(task=label).values
return df
# -
corr_vars_both = [pd.Series(lg_both[v].values,name=v) for v in ['tx','ty']]
corr_both = SUCorrelation(lg_both,neuroid_coord='neuroid_id',correlation_vars=corr_vars_both)
both_df = result_to_df(corr_both,['tx','ty'])
both_df['norm_ty'] = both_df.ty
corr_vars_xent = [pd.Series(lg_xent[v].values,name=v) for v in ['tx','ty']]
corr_xent = SUCorrelation(lg_xent,neuroid_coord='neuroid_id',correlation_vars=corr_vars_xent)
xent_df = result_to_df(corr_xent,['tx','ty'])
xent_df['norm_ty'] = xent_df.ty
# +
dicarlo_corr_vars = [
pd.Series(hi_data['ty'],name='tx'),
pd.Series(hi_data['tz'],name='ty'),
pd.Series(hi_data['rxy'],name='rxy'),
]
# corr_dicarlo_med = SUCorrelation(med_data,neuroid_coord='neuroid_id',correlation_vars=dicarlo_med_corr_vars,exclude_zeros=True)
# dicarlo_med_df = result_to_df(corr_dicarlo_med,['tx','ty','rxy'])
# dicarlo_med_df['variation']=3
corr_dicarlo_hi = SUCorrelation(hi_data,neuroid_coord='neuroid_id',correlation_vars=dicarlo_corr_vars,exclude_zeros=True)
dicarlo_df = result_to_df(corr_dicarlo_hi, ['tx','ty','rxy'])
layer_map = {
'V4':3,
'IT':4
}
dicarlo_df['layer'] = [layer_map[r] for r in dicarlo_df.region]
# +
def plot_bars(y,df,by='region',order=None):
if order is not None:
subsets = order
else:
subsets = df[by].drop_duplicates().values
plot_scale = 5
fig,axs = plt.subplots(1,len(subsets),figsize=(plot_scale*len(subsets),plot_scale),sharex=True,sharey=True,
subplot_kw={
# 'xlim':(0.0,0.8),
# 'ylim':(0.0,0.8)
})
for ax,sub in zip(axs,subsets):
sub_df = df.query('{} == "{}"'.format(by,sub))
sns.barplot(x=by,y=y,data=sub_df,ax=ax)
def plot_kde(x,y,df,by='region',order=None):
if order is not None:
subsets = order
else:
subsets = df[by].drop_duplicates().values
plot_scale = 5
fig,axs = plt.subplots(1,len(subsets),figsize=(plot_scale*len(subsets),plot_scale),sharex=True,sharey=True,
subplot_kw={
'xlim':(0.0,0.8),
'ylim':(0.0,0.8)
})
for ax,sub in zip(axs,subsets):
sub_df = df.query('{} == "{}"'.format(by,sub))
sns.kdeplot(sub_df[x],sub_df[y],ax=ax)
ax.set_title("{}: {}".format(by,sub))
# plot_bars(y='tx',df=both_df,by='layer',order=np.arange(5))
# +
fig,axs = plt.subplots(2,1,figsize=(5,10),sharex=True)
mod_order=np.arange(5)
for ax,df,order in zip(axs,[xent_df,both_df,],[mod_order,mod_order]):
sns.barplot(x='layer',y='tx',order=order,data=df,ax=ax)
# -
sns.barplot(x='layer',y='tx',data=xent_df)
sns.violinplot(x='region',y='ty',data=dicarlo_df,order=['V4','IT'])
plot_kde('tx','ty',dicarlo_df,by='region',order=['V4','IT'])
# +
from sklearn.linear_model import Ridge
class MURegressor(object):
def __init__(self,da,train_frac=0.8,n_splits=5,n_units=None,estimator=Ridge):
if n_units is not None:
self.neuroid_idxs = [np.array([random.randrange(len(da.neuroid_id)) for _ in range(n_units)]) for _ in range(n_splits)]
self.original_data = da
self.train_frac = train_frac
self.n_splits = n_splits
splits = [split_assembly(self.original_data[:,n_idxs]) for n_idxs in tqdm(self.neuroid_idxs,total=n_splits,desc='CV-splitting')]
self.train = [tr for tr,te in splits]
self.test = [te for tr,te in splits]
self.estimators = [estimator() for _ in range(n_splits)]
def fit(self,y_coord):
# Get Training data
for mod,train in tqdm(zip(self.estimators,self.train),total=len(self.train),desc='fitting'):
# print(train)
mod.fit(X=train.values,y=train[y_coord])
return self
def predict(self,X=None):
if X is not None:
return [e.predict(X) for e in self.estimators]
else:
return [e.predict(te.values) for e,te in zip(self.estimators,self.test)]
def score(self,y_coord):
return [e.score(te.values,te[y_coord].values) for e,te in zip(self.estimators,self.test)]
def stratified_regressors(data, filt='region',n_units=126,y_coords=['ty','tz'],task_names=None,estimator=Ridge):
subsets = np.unique(data[filt].values)
if task_names is None:
task_names = y_coords
dfs = []
for y,task in zip(y_coords,task_names):
print('regressing {}...'.format(y))
regressors = {k:MURegressor(data.sel(**{filt:k}),n_units=n_units,estimator=estimator).fit(y_coord=y) for k in subsets}
df = pd.DataFrame.from_records({k:v.score(y_coord=y) for k,v in regressors.items()})
df = df.melt(var_name='region',value_name='performance')
df['task']=task
dfs.append(df)
return pd.concat(dfs)
# -
properties = ['tx','ty']
both_df = stratified_regressors(lg_both,filt='layer',y_coords=properties,n_units=50)
sns.barplot(x='task',y='performance',hue='region',data=both_df)
xent_df = stratified_regressors(lg_xent,filt='layer',y_coords=properties,n_units=50)
sns.barplot(x='task',y='performance',hue='region',data=xent_df)
plot_kde(x='tx',y='ty',df=both_df,by='layer',order=np.arange(5))
plot_kde(x='tx',y='ty',df=xent_df,by='layer',order=np.arange(5))
| notebooks/Fig-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/reversible2/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %cd /home/schirrmr/
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import copy
import math
import itertools
from reversible.plot import create_bw_image
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible.revnet import ResidualBlock, invert, SubsampleSplitter, ViewAs, ReversibleBlockOld
from spectral_norm import spectral_norm
from conv_spectral_norm import conv_spectral_norm
def display_text(text, fontsize=18):
fig = plt.figure(figsize=(12,0.1))
plt.title(text, fontsize=fontsize)
plt.axis('off')
display(fig)
plt.close(fig)
# -
feature_model = nn.Sequential(
ResidualBlock(
nn.Sequential(
spectral_norm(nn.Linear(1,200), to_norm=0.92, n_power_iterations=3),
nn.ReLU(),
spectral_norm(nn.Linear(200,1), to_norm=0.92, n_power_iterations=3),
)),
ResidualBlock(
nn.Sequential(
spectral_norm(nn.Linear(1,200), to_norm=0.92, n_power_iterations=3),
nn.ReLU(),
spectral_norm(nn.Linear(200,1), to_norm=0.92, n_power_iterations=3),
)),)
optimizer = th.optim.Adam(feature_model.parameters(), lr=1e-3)
# ## check that gradient is correct
# +
from torch import autograd
examples = (th.randn(300, 1) * 3 + 2).requires_grad_(True)
outs = feature_model(examples)
grad_all = autograd.grad(outputs=outs, inputs=examples,
grad_outputs=th.ones(examples.size()),
create_graph=True, retain_graph=True)[0]
grad_per_part = th.cat([autograd.grad(outputs=outs[i_example], inputs=examples,
create_graph=True, retain_graph=True)[0][i_example] for i_example in range(300)])
assert np.all(var_to_np(grad_all.squeeze() == grad_per_part) == 1)
# -
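# The loss optimized in the training loops below is the 1D change-of-variables
# negative log-likelihood: if z = f(x) maps data to a (roughly) standard normal,
# then log p_x(x) = log p_z(f(x)) + log |f'(x)|, so the NLL is
# -log f'(x) + f(x)^2 up to constants and the scaling of the quadratic term.
# A NumPy sketch with a toy invertible map (tanh is an assumption here, chosen
# only because its derivative is known in closed form):

```python
import numpy as np

def f(x):
    return np.tanh(x)               # toy invertible, monotone map

def f_prime(x):
    return 1.0 - np.tanh(x) ** 2    # its derivative, always in (0, 1]

x = np.linspace(-2, 2, 5)
# Negative log-likelihood: Jacobian term plus quadratic (Gaussian) term
nll = -np.sum(np.log(f_prime(x))) + np.sum(f(x) ** 2)
print(nll > 0)
```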
n_epochs = 5001
for i_epoch in range(n_epochs):
examples = (th.randn(300,1) * 3 + 2).requires_grad_(True)
outs = feature_model(examples)
grad_all = autograd.grad(outputs=outs, inputs=examples,
grad_outputs=th.ones(examples.size()),
create_graph=True, retain_graph=True)[0]
loss = -th.sum(th.log(grad_all)) + th.sum(outs * outs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i_epoch % (n_epochs // 20) == 0:
print("Epoch {:d} of {:d}".format(i_epoch,n_epochs))
print("Loss: {:.2f}".format(loss.item()))
fig = plt.figure(figsize=(5,2))
plt.plot(var_to_np(outs)[:,0], var_to_np(outs[:,0] * 0), marker='o', ls='')
display(fig)
plt.close(fig)
seaborn.distplot(var_to_np(outs).squeeze())
# ## uniform dist
feature_model = nn.Sequential(
ResidualBlock(
nn.Sequential(
spectral_norm(nn.Linear(1,200), to_norm=0.92, n_power_iterations=3),
nn.ReLU(),
spectral_norm(nn.Linear(200,1), to_norm=0.92, n_power_iterations=3),
)),
ResidualBlock(
nn.Sequential(
spectral_norm(nn.Linear(1,200), to_norm=0.92, n_power_iterations=3),
nn.ReLU(),
spectral_norm(nn.Linear(200,1), to_norm=0.92, n_power_iterations=3),
)),)
optimizer = th.optim.Adam(feature_model.parameters(), lr=1e-4)
n_epochs = 5001
for i_epoch in range(n_epochs):
examples = (th.rand(300,1) * 3 + 1).requires_grad_(True)
outs = feature_model(examples)
grad_all = autograd.grad(outputs=outs, inputs=examples,
grad_outputs=th.ones(examples.size()),
create_graph=True, retain_graph=True)[0]
loss = -th.sum(th.log(grad_all)) + th.sum(outs * outs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i_epoch % (n_epochs // 20) == 0:
print("Epoch {:d} of {:d}".format(i_epoch,n_epochs))
print("Loss: {:.2f}".format(loss.item()))
fig = plt.figure(figsize=(5,2))
plt.plot(var_to_np(outs)[:,0], var_to_np(outs[:,0] * 0), marker='o', ls='')
display(fig)
plt.close(fig)
seaborn.distplot(var_to_np(outs).squeeze())
| notebooks/toy-1d-2d-examples/.ipynb_checkpoints/Change_Of_Variable_1D-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pydata_nyc20173.6
# language: python
# name: pydata_nyc20173_6
# ---
# + slideshow={"slide_type": "skip"}
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
from sampled import sampled
import seaborn as sns
import theano.tensor as tt
matplotlib.rcParams['figure.figsize'] = (0.89 * 12, 6)
matplotlib.rcParams['lines.linewidth'] = 5
# + [markdown] slideshow={"slide_type": "slide"}
# # An Introduction to Probabilistic Programming
#
#
# <center><img src="images/commute.png" style="height: 600px;"></img></center>
# + [markdown] slideshow={"slide_type": "slide"}
# # An Introduction to Probabilistic Programming
#
#
# <center><img src="images/PyMC3.png"></img></center>
# + [markdown] slideshow={"slide_type": "slide"}
# # An Introduction to Probabilistic Programming
#
#
# <center><img src="images/commute.png" style="height: 600px;"></img></center>
# + [markdown] slideshow={"slide_type": "slide"}
# # Probabilistic Programming
# + slideshow={"slide_type": "-"}
@sampled
def commute():
train_time = pm.SkewNormal('train_time', mu=40., sd=10., alpha=15.)
takes_bike = pm.Binomial('takes_bike', n=1, p=0.1)
bike_time = pm.Normal('bike_time', mu=20., sd=3.)
walk_time = pm.Normal('walk_time', mu=5., sd=1.)
t_time = pm.SkewNormal('t_time', mu=15., sd=5., alpha=4.)
total_time = pm.Normal('total_time',
mu=train_time + tt.switch(takes_bike, bike_time, walk_time+t_time),
sd=1)
# + [markdown] slideshow={"slide_type": "-"}
# <center><img src="images/commute.png" style="height: 300px;"></img></center>
# + [markdown] slideshow={"slide_type": "slide"}
# # Sampling from a generative model
# + slideshow={"slide_type": "-"}
with commute():
base_trace = pm.sample(2000)
# + slideshow={"slide_type": "slide"}
pm.traceplot(base_trace, combined=True, figsize=(15, 8));
# + [markdown] slideshow={"slide_type": "slide"}
# # Commute time
# -
base_trace['total_time'].mean()
sns.distplot(base_trace['total_time'], color='r');
# + [markdown] slideshow={"slide_type": "slide"}
# ## "My train is on time!"
# + slideshow={"slide_type": "-"}
with commute(train_time=40):
on_time_train = pm.sample(2000, njobs=4)
# -
# <center><img src="images/commuter_train.png" style="height: 150px;"></img></center>
on_time_train['bike_time'].shape
# + slideshow={"slide_type": "slide"}
sns.distplot(on_time_train['total_time'], color='b');
sns.distplot(base_trace['total_time'], color='r');
# + [markdown] slideshow={"slide_type": "slide"}
# ## "My train is 10 minutes late!"
# -
with commute(train_time=50):
late_train = pm.sample(2000)
# <center><img src="images/commuter_train.png" style="height: 150px;"></img></center>
# + slideshow={"slide_type": "slide"}
sns.kdeplot(on_time_train['total_time'], label='Train on time', color='b')
sns.kdeplot(base_trace['total_time'], label='Prior', color='r');
sns.kdeplot(late_train['total_time'], label='Train 10 minutes late', color='g');
# + [markdown] slideshow={"slide_type": "slide"}
# # "You're early -- how was the bike ride?"
# -
with commute(total_time=55):
early_trace = pm.sample(2000)
# + [markdown] slideshow={"slide_type": "slide"}
# # "You're early -- how was the bike ride?"
# -
# <center><img src="images/bike.png" style="height: 150px;"></img></center>
early_trace['takes_bike'].mean(), base_trace['takes_bike'].mean()
# + [markdown] slideshow={"slide_type": "slide"}
# <div style="display: inline-block;">
# <img src="images/commuter_train.png" style="float: left; width: 8em; height: auto; border: none;"></img>
# <h2 style="float: right;">What else changes when I am 15 minutes early?</h2>
# </div>
#
# + slideshow={"slide_type": "-"}
sns.kdeplot(early_trace['train_time'], label='Early Day', color='b');
sns.kdeplot(base_trace['train_time'], label='Prior', color='r');
# + [markdown] slideshow={"slide_type": "slide"}
# <div style="display: inline-block;">
# <img src="images/bike.png" style="float: left; height: 4em; width: auto; border: none;"></img>
# <h2 style="float: right;">What else changes when I am 15 minutes early?</h2>
# </div>
#
# + slideshow={"slide_type": "-"}
sns.kdeplot(early_trace['bike_time'], label='Early Day', color='b');
sns.kdeplot(base_trace['bike_time'], label='Prior', color='r');
# + [markdown] slideshow={"slide_type": "slide"}
# # What else changes when I am 15 minutes early?
# + slideshow={"slide_type": "-"}
sns.kdeplot(base_trace['walk_time'], label='Prior', color='r');
sns.kdeplot(early_trace['walk_time'], label='Early Day', color='b');
# + [markdown] slideshow={"slide_type": "slide"}
# # PyMC3 Supports LaTeX Representations in notebooks!
# -
commute()
# + [markdown] slideshow={"slide_type": "slide"}
# # Exercises 4 and 5
# + [markdown] slideshow={"slide_type": "slide"}
# # Extreme data!
# -
with commute(total_time=0.):
no_time = pm.sample(2000)
# + [markdown] slideshow={"slide_type": "slide"}
# # Extreme data!
# + slideshow={"slide_type": "-"}
pm.traceplot(no_time, combined=True, figsize=(15, 8));
# + [markdown] slideshow={"slide_type": "slide"}
# # Fine tuning distributions
# -
@sampled
def bounded_commute():
TrainSkewNormal = pm.Bound(pm.SkewNormal, lower=35.)
BikeNormal = pm.Bound(pm.Normal, lower=15.)
WalkNormal = pm.Bound(pm.Normal, lower=10.)
TSkewNormal = pm.Bound(pm.SkewNormal, lower=10.)
TotalNormal = pm.Bound(pm.Normal, lower=50.)
train_time = TrainSkewNormal('train_time', mu=40., sd=10., alpha=15.)
takes_bike = pm.Binomial('takes_bike', n=1, p=0.1)
bike_time = BikeNormal('bike_time', mu=20., sd=3.)
walk_time = WalkNormal('walk_time', mu=5., sd=1.)
t_time = TSkewNormal('t_time', mu=15., sd=5., alpha=4.)
total_time = TotalNormal('total_time',
mu=train_time + tt.switch(takes_bike, bike_time, walk_time+t_time),
sd=1)
# + [markdown] slideshow={"slide_type": "slide"}
# # Fine tuning distributions
# -
# ```python
# with bounded_commute(total_time=0.):
# bounded_no_time = pm.sample(2000)
#
# # ValueError: Bad initial energy: inf. The model might be misspecified.
# ```
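The failure is generic to bounded supports: conditioning on a value outside the support has log-probability negative infinity, which the sampler reports as infinite initial energy. A standard-library sketch of the same effect, using a hypothetical truncated-normal helper (not PyMC3 internals):

```python
import math

def truncated_normal_logpdf(x, mu, sd, lower):
    """Log-density of a Normal(mu, sd) truncated below at `lower`."""
    if x < lower:
        return float('-inf')  # outside the support: zero probability
    z = (x - mu) / sd
    log_norm = -0.5 * z * z - math.log(sd) - 0.5 * math.log(2 * math.pi)
    # renormalise by the probability mass above `lower`
    tail = 0.5 * (1 - math.erf((lower - mu) / (sd * math.sqrt(2))))
    return log_norm - math.log(tail)

# total_time is bounded below at 50, but we condition on total_time = 0
assert truncated_normal_logpdf(0.0, 55.0, 1.0, 50.0) == float('-inf')
# a value inside the support has a finite log-probability
assert math.isfinite(truncated_normal_logpdf(55.0, 55.0, 1.0, 50.0))
```

Any gradient-based sampler started at such a point sees infinite energy, hence the `Bad initial energy` error above.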
# + [markdown] slideshow={"slide_type": "slide"}
# # Exercise 5
| Probabilistic Programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv('Survey-1.csv')
data.shape
# 'ID', 'Gender', 'Age', 'Class', 'Major', 'Grad Intention', 'GPA',
# 'Employment', 'Salary', 'Social Networking', 'Satisfaction', 'Spending',
# 'Computer', 'Text Messages'
# #### 2.1.1. Gender and Major
data_211 = pd.crosstab(data['Gender'],data['Major'],margins = False)
print(data_211)
# #### 2.1.2. Gender and Grad Intention
data_212 = pd.crosstab(data['Gender'],data['Grad Intention'],margins = False)
print(data_212)
# #### 2.1.3. Gender and Employment
data_213 = pd.crosstab(data['Gender'],data['Employment'],margins = False)
print(data_213)
# #### 2.1.4. Gender and Computer
data_214 = pd.crosstab(data['Gender'],data['Computer'],margins = False)
print(data_214)
# #### 2.2.1. What is the probability that a randomly selected CMSU student will be male?
prob221 = round(data.groupby('Gender').count()['ID']['Male']/data.shape[0]*100,2)
print('Probability that a randomly selected CMSU student will be male is ',prob221,'%')
# #### 2.2.2. What is the probability that a randomly selected CMSU student will be female?
prob222 = round(data.groupby('Gender').count()['ID']['Female']/data.shape[0]*100,2)
print('Probability that a randomly selected CMSU student will be Female is ',prob222,'%')
# #### 2.3.1. Find the conditional probability of different majors among the male students in CMSU.
# P(Major | Male): normalise the Gender x Major crosstab by row (Gender) totals
major_given_gender = pd.crosstab(data['Gender'], data['Major'], normalize='index') * 100
print('The conditional probability of different majors among the male students in CMSU is as below')
print(major_given_gender.loc['Male'].round(2))
# #### 2.3.2 Find the conditional probability of different majors among the female students of CMSU.
# P(Major | Female): normalise the Gender x Major crosstab by row (Gender) totals
major_given_gender = pd.crosstab(data['Gender'], data['Major'], normalize='index') * 100
print('The conditional probability of different majors among the female students in CMSU is as below')
print(major_given_gender.loc['Female'].round(2))
# #### 2.4.1. Find the probability that a randomly chosen student is a male and intends to graduate.
total_students = data.shape[0]
males_intending_graduation = data.groupby(['Gender','Grad Intention'])['Gender'].count()['Male']['Yes']
# joint probability: P(Male and intends to graduate) = count(Male and Yes) / total students
prob241 = males_intending_graduation/total_students*100
print("Probability that a randomly chosen student is a male and intends to graduate is", round(prob241,2))
# #### 2.4.2 Find the probability that a randomly selected student is a female and does NOT have a laptop.
total_students = data.shape[0]
total_females = data.groupby('Gender').count()['ID']['Female']
total_females_withlaptop = data.groupby(['Gender','Computer'])['Gender'].count()['Female']['Laptop']
# joint probability: P(Female and no laptop) = count(Female without a laptop) / total students
prob242 = round((total_females - total_females_withlaptop)/total_students*100, 2)
print("The probability that a randomly selected student is a female and does NOT have a laptop is", prob242)
# #### 2.5.1. Find the probability that a randomly chosen student is either a male or has full-time employment?
# P(Male or Full-Time) = P(Male) + P(Full-Time) - P(Male and Full-Time)
total_students = data.shape[0]
total_males = data.groupby('Gender').count()['ID']['Male']
total_fulltime = data[data['Employment'] == 'Full-Time'].shape[0]
total_males_fulltimeemploy = data[(data['Gender'] == 'Male') & (data['Employment'] == 'Full-Time')].shape[0]
prob251 = round((total_males + total_fulltime - total_males_fulltimeemploy)/total_students*100, 2)
print('The probability that a randomly chosen student is either a male or has full-time employment is', prob251)
# #### 2.5.2. Find the conditional probability that given a female student is randomly chosen, she is majoring in international business or management.
total_females = data.groupby('Gender').count()['ID']['Female']
total_females_interBiz = data.groupby(['Gender','Major'])['Gender'].count()['Female']['International Business']
total_females_manage = data.groupby(['Gender','Major'])['Gender'].count()['Female']['Management']
print('The conditional probability that given a female student is randomly chosen, she is majoring in international business or management is', round((total_females_interBiz + total_females_manage)/total_females*100, 2))
# #### 2.6.1. If a student is chosen randomly, what is the probability that his/her GPA is less than 3?
total_students = data.shape[0]
total_students_less3gpa = data[data['GPA'] < 3]['ID'].count()
print('If a student is chosen randomly, the probability that his/her GPA is less than 3 is', round(total_students_less3gpa/total_students*100, 2))
# #### 2.6.2. Find the conditional probability that a randomly selected male earns 50 or more. Find the conditional probability that a randomly selected female earns 50 or more.
total_males = data.groupby('Gender').count()['ID']['Male']
total_males_salary_ge50 = data[(data['Salary'] >= 50) & (data['Gender'] == 'Male')].count()[0]
print("The conditional probability that a randomly selected male earns 50 or more is", round((total_males_salary_ge50/total_males)*100, 2))
total_females = data.groupby('Gender').count()['ID']['Female']
total_females_salary_ge50 = data[(data['Salary'] >= 50) & (data['Gender'] == 'Female')].count()[0]
print("The conditional probability that a randomly selected female earns 50 or more is", round((total_females_salary_ge50/total_females)*100, 2))
# #### 2.8. Note that there are four numerical (continuous) variables in the data set, GPA, Salary, Spending, and Text Messages. For each of them comment whether they follow a normal distribution. Write a note summarizing your conclusions.
data[['GPA','Salary','Spending','Text Messages']].hist(figsize=(15,12));
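The histograms give a visual check; a quick numeric complement is sample skewness and excess kurtosis, both of which are near zero for normal data. A self-contained sketch on synthetic stand-ins for the survey columns (for the real check you would pass e.g. `data['GPA']` to the helper):

```python
import numpy as np

def skew_kurtosis(x):
    """Sample skewness and excess kurtosis; both are ~0 for normal data."""
    x = np.asarray(x, dtype=float)
    x = x[~np.isnan(x)]
    z = (x - x.mean()) / x.std()
    return (z**3).mean(), (z**4).mean() - 3.0

rng = np.random.default_rng(42)
normal_like = rng.normal(3.1, 0.4, size=5000)   # GPA-like: roughly symmetric
skewed = rng.exponential(50.0, size=5000)       # Spending-like: right-skewed

sk_n, ku_n = skew_kurtosis(normal_like)
sk_s, ku_s = skew_kurtosis(skewed)
assert abs(sk_n) < 0.2 and abs(ku_n) < 0.3  # near zero: consistent with normality
assert sk_s > 1.0 and ku_s > 2.0            # strong right skew and heavy tail
```

Large positive skewness (as in Salary or Spending) is numeric evidence against normality, matching what the histograms show.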
| M2 Statistical Methods for Decision Making/Week_3_SMDM_Hypothesis_Testing/EDA Survey-Students.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cssp37
# language: python
# name: cssp37
# ---
#
# # <b>Tutorial 3: Basic data analysis</b>
#
#
# ## Learning Objectives:
#
# In this session we will learn:
# 1. to calculate and visualise annual and monthly means
# 2. to calculate and visualise seasonal means
# 3. to calculate mean differences (anomalies)
# ## Contents
#
# 1. [Calculate annual and monthly mean](#annual)
# 2. [Calculate seasonal means](#season)
# 3. [Calculating differences (anomalies)](#percent)
# 4. [Exercises](#exercise)
# <div class="alert alert-block alert-warning">
# <b>Prerequisites</b> <br>
# - Basic programming skills in python<br>
# - Familiarity with python libraries Iris, Numpy and Matplotlib<br>
# - Basic understanding of climate data<br>
# - Tutorials 1 and 2
# </div>
# ___
# ## 1. Calculating annual and monthly mean<a id='annual'></a>
#
# ## 1.1 Import libraries <a id='explore'></a>
# Import the necessary libraries. The datasets are stored in Zarr format, so we need the zarr and xarray libraries to access the data
import numpy as np
import xarray as xr
import zarr
import iris
import os
from catnip.preparation import extract_rot_cube, add_bounds
from scripts.xarray_iris_coord_system import XarrayIrisCoordSystem as xics
xi = xics()
xr.set_options(display_style='text') # Work around for AML bug that won't display HTML output.
# ### 1.2 Set up authentication for the Azure blob store
#
# The data for this course is held online in an Azure Blob Storage Service. To access this we use a SAS (shared access signature). You should have been given the credentials for this service before the course, but if not please ask your instructor. We use the getpass module here to avoid putting the token into the public domain. Run the cell below and in the box enter your SAS and press return. This will store the password in the variable SAS.
import getpass
# SAS WITHOUT leading '?'
SAS = getpass.getpass()
# We now use the Zarr library to connect to this storage. This is a little like opening a file on a local file system but works without downloading the data. This makes use of the Azure Blob Storage service. The zarr.ABSStore class returns a zarr.storage.ABSStore object which we can now use to access the Zarr data in the same way we would use a local file. If you have a Zarr file on a local file system you could skip this step and instead just use the path to the Zarr data below when opening the dataset.
store = zarr.ABSStore(container='metoffice-20cr-ds', prefix='monthly/', account_name="metdatasa", blob_service_kwargs={"sas_token":SAS})
type(store)
# ### 1.3 Read monthly data
# A Dataset consists of coordinates and data variables. Let's use xarray's **open_zarr()** method to read all our Zarr data into a dataset object and display its metadata.
# use the open_zarr() method to read in the whole dataset metadata
dataset = xr.open_zarr(store)
# print out the metadata
dataset
# Convert dataset into iris cubelist
# +
# create an empty list to hold the iris cubes
cubelist = iris.cube.CubeList([])
# use Dataset.apply() to convert the dataset to an Iris CubeList
dataset.apply(lambda da: cubelist.append(xi.to_iris(da)))
# print out the cubelist.
cubelist
# -
# ---
# ### 1.4 Calculating annual mean, maximum and minimum over an area
#
# Here we calculate annual mean, maximum and minimum air_temperature over the Shanghai region from 1981 to 2010.
#
# We will first need to extract the required variables, extract the Shanghai region and constrain by time period.
# +
# extract air_temperature
air_temp = cubelist.extract('air_temperature')
# extracting maximum air temperature
cons = iris.Constraint(cube_func=lambda c: ('ukmo__process_flags' in c.attributes) and (c.attributes['ukmo__process_flags'][0].split(' ')[0] == 'Maximum'))
air_temp_max = air_temp.extract_strict(cons)
# extracting minimum air temperature
cons = iris.Constraint(cube_func=lambda c: ('ukmo__process_flags' in c.attributes) and (c.attributes['ukmo__process_flags'][0].split(' ')[0] == 'Minimum'))
air_temp_min = air_temp.extract_strict(cons)
# extracting mean air temperature
cons = iris.Constraint(cube_func=lambda c: (len(c.cell_methods) > 0) and (c.cell_methods[0].method == 'mean') and c.cell_methods[0].intervals[0] == '1 hour')
air_temp_mean = air_temp.extract_strict(cons)
# +
# defining Shangai region coords
min_lat=29.0
max_lat=32.0
min_lon=118.0
max_lon=123.0
# extract data for the Shanghai region using the extract_rot_cube() function
max_cube = extract_rot_cube(air_temp_max, min_lat, min_lon, max_lat, max_lon)
min_cube = extract_rot_cube(air_temp_min, min_lat, min_lon, max_lat, max_lon)
mean_cube = extract_rot_cube(air_temp_mean, min_lat, min_lon, max_lat, max_lon)
# -
# print out the mean cube
mean_cube
# +
# define start and end year for our time constraint
start_time = 1981
end_time = 2010
# define the time constraint
cons = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)
# load the data into cubes applying the time constraint
max_cube = max_cube.extract(cons)
min_cube = min_cube.extract(cons)
mean_cube = mean_cube.extract(cons)
# printing out the mean cube to see the change in the shape of time dimension
mean_cube
# -
# <div class="alert alert-block alert-info">
# <b>Note:</b>The <b>CATNIP</b> library function <b>preparation.add_time_coord_cats</b> adds a range of numeric coordinate categorisations to the cube. For more details see the <a href="https://metoffice.github.io/CATNIP/tools.html#preparation-module">documentation</a>
# </div>
# We have got the required cubes. Now we can add categorical coordinates such as *year* to the time dimension of our cubes using the CATNIP **preparation.add_time_coord_cats** function.
# +
# load CATNIP's add_time_coord_cats method
from catnip.preparation import add_time_coord_cats
# Add other dimension coordinates
max_cube = add_time_coord_cats(max_cube)
min_cube = add_time_coord_cats(min_cube)
mean_cube = add_time_coord_cats(mean_cube)
# -
# Print the *max_cube* and inspect the categorical coordinates that have been added to the time coordinate of our cube.
# printing the max_cube. Note the additional coordinates under the Auxiliary coordinates
max_cube
# We can see that **add_time_coord_cats** has added a few auxiliary coordinates including the *year* coordinate to the *time* dimension.
#
# Now we can calculate maximum, minimum and mean values over the *year* coordinate using the **aggregated_by** method. This will give us a single value for each year at every grid cell (30 time points in total).
# +
# Calculate yearly max, min and mean values
yearly_max = max_cube.aggregated_by(['year'], iris.analysis.MAX)
yearly_min = min_cube.aggregated_by(['year'], iris.analysis.MIN)
yearly_mean = mean_cube.aggregated_by(['year'], iris.analysis.MEAN)
# printing the yearly_mean
yearly_mean
# -
# Now that we have the mean, maximum and minimum at every grid point, we can collapse over all grid boxes to get a timeseries which represents, for example, the average over our region of interest.
# +
# Collapse longitude and latitude to get a timeseries
yearly_max = yearly_max.collapsed(['grid_longitude', 'grid_latitude'], iris.analysis.MAX)
yearly_min = yearly_min.collapsed(['grid_longitude', 'grid_latitude'], iris.analysis.MIN)
yearly_mean = yearly_mean.collapsed(['grid_longitude', 'grid_latitude'], iris.analysis.MEAN)
# The resulting cube will be time series only
yearly_mean
# -
# Print the *year* coordinate of the max cube to see if we have the correct years for our constrained time period.
print(yearly_max.coord('year').points)
# ### 1.5 Calculating mean annual cycle
#
# We can calculate the mean annual cycle for precipitation_flux data over the Shanghai region for 1981-2010 (30 years) by averaging together each month (so we average all January data to get the mean for January).
# extract the precipitation_flux data into an iris cube from the cubelist
pflx = cubelist.extract_strict('precipitation_flux')
# +
min_lat=29.0
max_lat=32.0
min_lon=118.0
max_lon=123.0
# extract data for the Shanghai region using the extract_rot_cube() function
pflx_ext = extract_rot_cube(pflx, min_lat, min_lon, max_lat, max_lon)
pflx_ext
# -
# Next, we apply the time constraint
# Extracting time constraint
start_time = 1981
end_time = 2010
time_cons = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)
subcube = pflx_ext.extract(time_cons)
subcube
# remove auxiliary coordinates added by extract_rot_cube, which cause unnecessary warnings later
subcube.remove_coord("latitude")
subcube.remove_coord("longitude")
# We can use the **add_time_coord_cats** method to add categorical coordinates such as *month* to the *time* dimension of our cube.
# add the time categorical coordinate to cube
subcube = add_time_coord_cats(subcube)
# Create mean annual cycle
monthly_mean = subcube.aggregated_by(['month_number'], iris.analysis.MEAN)
monthly_mean = add_bounds(monthly_mean, ['grid_latitude','grid_longitude'])
# We can calculate the area weight using **iris.analysis.cartography.area_weights** so that we weight the average to account for the fact that the areas of grid cells are variable.
import iris.analysis.cartography
#calculate the area weight
grid_areas = iris.analysis.cartography.area_weights(monthly_mean)
# Calculate area averaged monthly mean rainfall
monthly_mean = monthly_mean.collapsed(['grid_longitude', 'grid_latitude'], iris.analysis.MEAN, weights=grid_areas)
# ### 1.6 Visualising yearly and monthly means
#
# Let's now visualise yearly mean, max, min data for the air temperature and monthly mean data for the precipitation_flux.
# we first need to load libraries for plotting
import iris.plot as iplt
import iris.quickplot as qplt
import matplotlib.pyplot as plt
# Visualise yearly max, min and mean data for *air_temperature*.
# plot the timeseries for yearly max, min and mean data
ax1 = qplt.plot(yearly_max, label = 'Max Temp')
ax2 = qplt.plot(yearly_min, label = 'Min Temp')
ax3 = qplt.plot(yearly_mean, label = 'Mean Temp')
plt.legend(bbox_to_anchor=(1.18, 0.78))
plt.grid()
plt.show()
# Let's visualise the monthly precipitation mean over the thirty years (1981-2010).
qplt.plot(monthly_mean.coord('month_number'), monthly_mean,color='seagreen')
plt.show()
# <div class="alert alert-block alert-success">
# <b>Task:</b><br><ul>
# <li>Calculate and visualise the monthly mean over Tibetan region from 1981 to 2010. Create the monthly mean with month names</li>
# <li>Coordinates of Tibetan region: Latitude = [26 36], Longitude = [77 104]</li>
# </ul>
# </div>
#
# +
# time series plot
# write your code here ..
# -
# ___
# ## 2. Calculating seasonal means<a id='season'></a>
# ### 2.1 Calculating seasonal means: djf, mam, jja and son
#
# Calculate mean wind speed and wind direction from 1981 to 2010 for different seasons over the entire domain.
#
# First we need to calculate wind speed and wind direction. In the previous tutorial, we calculated the wind speed using hard-coded arithmetic operations. In this tutorial, we will use CATNIP's **windspeed** and **wind_direction** methods.
#
# +
# define constraints for x_wind and y_wind data
xcons = iris.Constraint(cube_func=lambda c: c.standard_name == 'x_wind' and ('pressure' not in [coord.name() for coord in c.coords()]))
ycons = iris.Constraint(cube_func=lambda c: c.standard_name == 'y_wind' and ('pressure' not in [coord.name() for coord in c.coords()]))
# apply the constraint and load the x_wind and y_wind data
u = cubelist.extract_strict(xcons)
v = cubelist.extract_strict(ycons)
# -
# define time constraint
start_time = 1981
end_time = 2010
cons = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)
u = u.extract(cons)
v = v.extract(cons)
# We can now import and use catnip's **windspeed** and **wind_direction** methods
# import catnip methods
from catnip.analysis import windspeed
from catnip.analysis import wind_direction
# calculate windspeed and wind direction
wind_speed_cube = windspeed(u,v)
wind_direction_cube = wind_direction(u,v)
# Add coordinates and extract data for different seasons
from catnip.preparation import add_time_coord_cats
wind_speed_cube = add_time_coord_cats(wind_speed_cube)
wind_direction_cube = add_time_coord_cats(wind_direction_cube)
# +
# Extract the windspeed data for all four seasons
wndspd_djf = wind_speed_cube.extract(iris.Constraint(season='djf'))
wndspd_mam = wind_speed_cube.extract(iris.Constraint(season='mam'))
wndspd_jja = wind_speed_cube.extract(iris.Constraint(season='jja'))
wndspd_son = wind_speed_cube.extract(iris.Constraint(season='son'))
# Extract the wind direction data for the all four season
wnddir_djf = wind_direction_cube.extract(iris.Constraint(season='djf'))
wnddir_mam = wind_direction_cube.extract(iris.Constraint(season='mam'))
wnddir_jja = wind_direction_cube.extract(iris.Constraint(season='jja'))
wnddir_son = wind_direction_cube.extract(iris.Constraint(season='son'))
# -
# let's print out the wnddir_son cube to see its shape
wnddir_son
# Calculate seasonal means
# +
# calculate the windspeed mean over the seasons
wspd_djf_mean = wndspd_djf.aggregated_by(['season'], iris.analysis.MEAN)
wspd_mam_mean = wndspd_mam.aggregated_by(['season'], iris.analysis.MEAN)
wspd_jja_mean = wndspd_jja.aggregated_by(['season'], iris.analysis.MEAN)
wspd_son_mean = wndspd_son.aggregated_by(['season'], iris.analysis.MEAN)
# calculate the wind direction mean over the seasons
wndir_djf_mean = wnddir_djf.aggregated_by(['season'], iris.analysis.MEAN)
wndir_mam_mean = wnddir_mam.aggregated_by(['season'], iris.analysis.MEAN)
wndir_jja_mean = wnddir_jja.aggregated_by(['season'], iris.analysis.MEAN)
wndir_son_mean = wnddir_son.aggregated_by(['season'], iris.analysis.MEAN)
# -
# let's print out the wndir_son_mean cube to see its shape
wndir_son_mean
# We can now visualise seasonal means
# we first need to load libraries for plotting
import iris.plot as iplt
import iris.quickplot as qplt
import matplotlib.pyplot as plt
# +
# list of seasonal cubes to loop through
seasonal_cubes = [wspd_djf_mean, wspd_mam_mean, wspd_jja_mean, wspd_son_mean]
# here we create a list of mean cubes and run a for loop over it.
# set a figure big enough to hold the subplots
plt.figure(figsize=(10, 10))
# loop through the seasonal cube list and plot the data
# the len function returns the length of the list.
for i in range(len(seasonal_cubes)):
plt.subplot(2, 2, i+1)
# the line above creates 2 rows and 2 cols of subplots; i indicates the position of the subplot.
# i starts from 0, which is why we add 1 to it.
# plot the windspeed at the first timestep
qplt.contourf(seasonal_cubes[i][0,:,:])
# add some coastlines for context
plt.gca().coastlines()
# get the season name from the coordinate
season = seasonal_cubes[i].coord('season').points[0]
# add the name as plot's title
plt.title('Season: '+ season)
plt.tight_layout()
plt.show()
# -
# We can also plot in other projections. The Python cartopy library provides a range of options for plotting in different projections. See the list of options in the [cartopy documentation](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html). Let's try the above plot with the **PlateCarree** projection.
# +
import cartopy.crs as ccrs
# list of seasonal cubes to loop through
seasonal_cubes = [wspd_djf_mean, wspd_mam_mean, wspd_jja_mean, wspd_son_mean]
# set a figure big enough to hold the subplots
plt.figure(figsize=(10, 10))
# loop through the seasonal cube list and plot the data
for i in range(len(seasonal_cubes)):
plt.subplot(2, 2, i+1, projection=ccrs.PlateCarree())
# plot the windspeed at the first timestep
qplt.contourf(seasonal_cubes[i][0,:,:])
# add some coastlines for context
plt.gca().coastlines()
# get the season name from the coordinate
season = seasonal_cubes[i].coord('season').points[0]
# add the name as plot's title
plt.title('Season: '+ season)
plt.tight_layout()
plt.show()
# -
# <div class="alert alert-block alert-success">
# <b>Task:</b><br><ul>
# <li>Calculate and visualise the seasonal mean of <b>surface temperature</b> over the Tibetan region from 1981 to 2010.</li>
# <li>Coordinates of Tibetan region: Latitude = [26 36], Longitude = [77 104]</li>
# </ul>
# </div>
# +
# Calculate seasonal means
# write your code here ..
# +
# Visualising seasonal means
# write your code here ..
# -
# ___
# ## 3. Calculating differences<a id='percent'></a>
# ### 3.1 Mean surface temperature difference in winter season (dec, jan, feb)
# We can find the difference of mean surface temperature between the past (1851-1880) and recent (1981-2010) 30 year periods.
#
# First, we need to extract our desired data
# extract surface_temperature
sft = cubelist.extract_strict('surface_temperature')
# constraints: two 30 year periods - past and present
past_cons = iris.Constraint(time=lambda cell: 1851 <= cell.point.year <= 1880)
present_cons = iris.Constraint(time=lambda cell: 1981 <= cell.point.year <= 2010)
past = sft.extract(past_cons)
present = sft.extract(present_cons)
# +
# load catnip's add_time_coord_cats method
from catnip.preparation import add_time_coord_cats
# Add other dimension coordinates
past = add_time_coord_cats(past)
present = add_time_coord_cats(present)
# Extract the winter season
past_djf = past.extract(iris.Constraint(season='djf'))
present_djf = present.extract(iris.Constraint(season='djf'))
# extract data for Shanghai region
past_djf = extract_rot_cube(past_djf, min_lat, min_lon, max_lat, max_lon)
present_djf = extract_rot_cube(present_djf, min_lat, min_lon, max_lat, max_lon)
# calculate 30 year mean of winter season
past_djf = past_djf.aggregated_by(['season'], iris.analysis.MEAN)
present_djf = present_djf.aggregated_by(['season'], iris.analysis.MEAN)
# -
# We have now got our cubes for the two climatological periods. We now calculate the difference by subtracting the past data from the present using the **iris.analysis.maths.subtract** method.
djf_diff = iris.analysis.maths.subtract(present_djf, past_djf)
djf_diff.rename('surface temperature difference: Winter')
past_djf.rename('surface temperature past climate: Winter 1851-1880 ')
present_djf.rename('surface temperature present climate: Winter 1981-2010 ')
# add bounds to the cubes
past_djf = add_bounds(past_djf, ['grid_latitude','grid_longitude'])
present_djf = add_bounds(present_djf, ['grid_latitude','grid_longitude'])
djf_diff = add_bounds(djf_diff, ['grid_latitude','grid_longitude'])
# </pre>
# <div class="alert alert-block alert-info">
# <b>Note:</b> iris.analysis.maths provides a range of mathematical and statistical operations. See the <a href="https://scitools.org.uk/iris/docs/v1.9.1/iris/iris/analysis/maths.html">documentation for more information</a>.
#
# </div>
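# Conceptually, the cube subtraction above is just an elementwise difference of the two mean fields. A minimal numpy sketch with made-up temperature values (not the tutorial's data):

```python
import numpy as np

# hypothetical 2x2 grids of 30-year mean surface temperature (K)
present_mean = np.array([[285.1, 286.3],
                         [284.9, 285.7]])
past_mean = np.array([[284.0, 285.0],
                      [284.5, 285.0]])

# elementwise difference, as iris.analysis.maths.subtract computes for cube data
diff = present_mean - past_mean
print(diff)
```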
# We can now visualise the difference
# +
# list of our djf cubes and diff cube to loop through
seasonal_cubes = [present_djf, past_djf, djf_diff]
plt.figure(figsize=(15, 10))
# loop through the seasonal cube list and plot the data
for i in range(len(seasonal_cubes)):
plt.subplot(2, 2, i+1)
    # plot the surface temperature at the first timestep
if i==2:
qplt.pcolormesh(seasonal_cubes[i][0,:,:],cmap=plt.cm.get_cmap('Reds'),vmin=0, vmax=2)
else:
qplt.pcolormesh(seasonal_cubes[i][0,:,:],vmin=277.5, vmax=289)
# add some coastlines for context
plt.gca().coastlines()
# get the season name from the coordinate
season = seasonal_cubes[i].coord('season').points[0]
# add the name as plot's title
plt.title(seasonal_cubes[i].name())
plt.show()
# -
# ### 3.2 Percentage difference in winter precipitation
# We can also calculate the percentage difference.
#
# Let's calculate the change in mean precipitation from a past 30 year period (1851-1880) to the most recent 30 years (1981-2010)
#
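# The percentage change is just the difference divided by the past value, scaled by 100. With made-up mean precipitation values:

```python
# made-up 30-year mean precipitation values (mm/day), for illustration only
past_mean = 4.0
present_mean = 5.0

# percent change relative to the past period
pcent_change = (present_mean - past_mean) / past_mean * 100
print(pcent_change)  # 25.0
```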
# +
# extract precipitation flux
pflx = cubelist.extract_strict('precipitation_flux')
# extract the time constraints
past_cons = iris.Constraint(time=lambda cell: 1851 <= cell.point.year <= 1880)
present_cons = iris.Constraint(time=lambda cell: 1981 <= cell.point.year <= 2010)
past = pflx.extract(past_cons)
present = pflx.extract(present_cons)
# Add other dimension coordinates
past = add_time_coord_cats(past)
present = add_time_coord_cats(present)
# Extract the precipitation data for the winter season
past_djf = past.extract(iris.Constraint(season='djf'))
present_djf = present.extract(iris.Constraint(season='djf'))
# extract data for Shanghai region
past_djf = extract_rot_cube(past_djf, min_lat, min_lon, max_lat, max_lon)
present_djf = extract_rot_cube(present_djf, min_lat, min_lon, max_lat, max_lon)
# calculate the means
past_djf = past_djf.aggregated_by(['season'], iris.analysis.MEAN)
present_djf = present_djf.aggregated_by(['season'], iris.analysis.MEAN)
# -
# We can now calculate the difference using the **subtract** function
djf_diff = iris.analysis.maths.subtract(present_djf, past_djf)
djf_diff.rename('precipitation flux difference: Winter')
past_djf.rename('precipitation flux past climate: Winter 1851-1880 ')
present_djf.rename('precipitation flux present climate: Winter 1981-2010 ')
# add bounds to the cubes
past_djf = add_bounds(past_djf, ['grid_latitude','grid_longitude'])
present_djf = add_bounds(present_djf, ['grid_latitude','grid_longitude'])
djf_diff = add_bounds(djf_diff, ['grid_latitude','grid_longitude'])
# To calculate the percentage difference, we can combine **iris.analysis.maths.divide** and **iris.analysis.maths.multiply**
# +
# Find the percentage change
pcent_change = iris.analysis.maths.multiply(iris.analysis.maths.divide(djf_diff, past_djf), 100)
# remember to change the title and units to reflect the data processing
pcent_change.rename('precipitation flux percent difference')
pcent_change.units = '%'
# +
# using iris plot for better colour saturation!
import iris.plot as iplt
# list of our winter cubes and the diff cube to loop through for plotting
seasonal_cubes = [present_djf, past_djf, pcent_change]
plt.figure(figsize=(15, 10))
# loop through the seasonal cube list and plot the data
for i in range(len(seasonal_cubes)):
plt.subplot(2, 2, i+1)
    # plot the precipitation flux at the first timestep
if i==2:
qplt.pcolormesh(seasonal_cubes[i][0,:,:],cmap=plt.cm.get_cmap('RdBu'))
else:
qplt.pcolormesh(seasonal_cubes[i][0,:,:])
# add some coastlines for context
plt.gca().coastlines()
# get the season name from the coordinate
season = seasonal_cubes[i].coord('season').points[0]
# add the name as plot's title
plt.title(seasonal_cubes[i].name())
plt.show()
# -
# <div class="alert alert-block alert-success">
# <b>Task:</b><br><ul>
# <li>Calculate mean surface temperature difference over Tibetan region from past 30 years (1851-1880) to present 30 years (1981-2010).
# <li>Coordinates of Tibetan region: Latitude = [26 36], Longitude = [77 104]</li>
# </ul>
# </div>
#
# +
# Write your code here ...
# -
# ___
# ## 4. Exercises<a id='exercise'></a>
# In this exercise we will analyse the mean air temperature from past 30 years (1851-1880) to present 30 years (1981-2010), over the Shanghai region, in all four seasons. Visualize past, present and difference in a row.
# ### Exercise 1: Load monthly data and constrain time and region
# +
# Write your code here ...
# -
# ### Exercise 2: Calculate seasonal mean
# +
# Write your code here ...
# -
# ### Exercise 3: Visualise the results
# +
# Write your code here ...
# -
# ___
# <div class="alert alert-block alert-success">
# <b>Summary</b><br>
# In this session we learned how:<br>
# <ul>
# <li>to calculate yearly and monthly means</li>
# <li>to calculate seasonal means and differences</li>
# <li>to visualize the results</li>
# </ul>
#
# </div>
#
| notebooks/CSSP_20CRDS_Tutorials/tutorial_3_basic_analysis.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.1
# language: julia
# name: julia-1.6
# ---
# + [markdown] tags=[]
# # Diffusion Example
# -
# dependencies
using Pkg
Pkg.activate(".")
Pkg.instantiate()
Pkg.develop(path="../..")
using Plots
using LFAToolkit
using LinearAlgebra
# + code_folding=[5, 7]
# setup
p = 2
dimension = 1
mesh = []
if dimension == 1
mesh = Mesh1D(0.5)
elseif dimension == 2
mesh = Mesh2D(1.0, 1.0)
end
# operator
mass = GalleryOperator("mass", p + 1, p + 1, mesh)
diffusion = GalleryOperator("diffusion", p + 1, p + 1, mesh)
# + code_folding=[6, 15, 29, 40, 54]
# compute full operator symbols
numbersteps = 100
θ_min = -π
# compute and plot smoothing factor
# -- 1D --
if dimension == 1
# compute
(θ_range, eigenvalues, _) = computesymbolsoverrange(diffusion, numbersteps; mass=mass, θ_min=θ_min)
maxeigenvalues = maximum(real(eigenvalues); dims=2)
mineigenvalues = minimum(real(eigenvalues); dims=2)
# plot
println("max eigenvalue: ", maximum(maxeigenvalues))
xrange = θ_range/π
plot(
xrange,
xlabel="θ/π",
xtickfont=font(12, "Courier"),
[maxeigenvalues, mineigenvalues, π^2*xrange.^2],
ytickfont=font(12, "Courier"),
ylabel="spectral radius",
linewidth=3,
label=["Maximum λ" "Minimum λ" "θ^2"],
title="Diffusion Operator Symbol Maximal Eigenvalues"
)
ymin = minimum(mineigenvalues)
ylims!(minimum([0, ymin * 1.1]), maximum(maxeigenvalues) * 1.1)
# -- 2D --
elseif dimension == 2
# compute
(_, eigenvalues, _) = computesymbolsoverrange(diffusion, numbersteps; mass=mass, θ_min=θ_min)
maxeigenvalues = reshape(maximum(real(eigenvalues); dims=2), (numbersteps, numbersteps))
mineigenvalues = reshape(minimum(real(eigenvalues); dims=2), (numbersteps, numbersteps))
# plot
θ_max = θ_min + 2π
θ_range = LinRange(θ_min, θ_max, numbersteps)
println("max eigenvalue: ", maximum(maxeigenvalues))
xrange = θ_range/π
plot1 = heatmap(
xrange,
xlabel="θ/π",
xtickfont=font(12, "Courier"),
xrange,
ylabel="θ/π",
maxeigenvalues,
ytickfont=font(12, "Courier"),
title="Diffusion Operator Symbol Maximum Eigenvalues",
transpose=true,
aspect_ratio=:equal
)
xlims!(θ_min/π, θ_max/π)
ylims!(θ_min/π, θ_max/π)
plot2 = heatmap(
xrange,
xlabel="θ/π",
xtickfont=font(12, "Courier"),
xrange,
ylabel="θ/π",
mineigenvalues,
ytickfont=font(12, "Courier"),
title="Diffusion Operator Symbol Minimum Eigevalues",
transpose=true,
aspect_ratio=:equal
)
xlims!(θ_min/π, θ_max/π)
ylims!(θ_min/π, θ_max/π)
plot!(plot1, plot2, layout = (2, 1), size = (700, 1400))
end
| examples/jupyter/demo003_diffusion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
sys.executable
# [Optional]: If you're using a Mac/Linux, you can check your environment with these commands:
#
# ```
# # !which pip3
# # !which python3
# # !ls -lah /usr/local/bin/python3
# ```
# !pip3 install -U pip
# !pip3 install torch==1.3.0
# !pip3 install seaborn
import torch
torch.cuda.is_available()
# +
# IPython candies...
from IPython.display import Image
from IPython.core.display import HTML
from IPython.display import clear_output
# + language="html"
# <style> table {float:left} </style>
# -
# Perceptron
# =====
#
# **Perceptron** algorithm is a:
#
# > "*system that depends on **probabilistic** rather than deterministic principles for its operation, gains its reliability from the **properties of statistical measurements obtain from a large population of elements***"
# > \- <NAME> (1957)
#
# Then the news:
#
# > "*[Perceptron is an] **embryo of an electronic computer** that [the Navy] expects will be **able to walk, talk, see, write, reproduce itself and be conscious of its existence.***"
# > \- The New York Times (1958)
#
# News quote cite from Olazaran (1996)
# Perceptron in Bullets
# ----
#
# - Perceptron learns to classify any linearly separable set of inputs.
# - Some nice graphics for perceptron with Go https://appliedgo.net/perceptron/
#
# If you've got some spare time:
#
# - There's a whole book just on perceptron: https://mitpress.mit.edu/books/perceptrons
# - For watercooler gossips on perceptron in the early days, read [Olazaran (1996)](https://pdfs.semanticscholar.org/f3b6/e5ef511b471ff508959f660c94036b434277.pdf?_ga=2.57343906.929185581.1517539221-1505787125.1517539221)
#
#
# Perceptron in Math
# ----
#
# Given a set of inputs $x$, the perceptron
#
# - learns a weight vector $w$ that maps the inputs to a real-valued output between $[0,1]$
# - by passing the dot product $w·x$ through a transformation function
#
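# As a minimal numpy sketch of that forward pass (the weights and inputs here are made up; $x_1$ is fixed to 1 as the bias input):

```python
import numpy as np

w = np.array([0.5, -0.2, 0.1])   # hypothetical weight vector
x = np.array([1.0, 2.0, 3.0])    # inputs; x[0] fixed to 1 acts as the bias

s = np.dot(w, x)                  # weighted sum of inputs
y = 1.0 / (1.0 + np.exp(-s))      # sigmoid transformation, output in (0, 1)
print(y)
```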
#
# Perceptron in Picture
# ----
#
##Image(url="perceptron.png", width=500)
Image(url="https://ibin.co/4TyMU8AdpV4J.png", width=500)
# (**Note:** Usually, we use $x_1$ as the bias and fix the input to 1)
# Perceptron as a Workflow Diagram
# ----
#
# If you're familiar with [mermaid flowchart](https://mermaidjs.github.io)
#
# ```
# .. mermaid::
#
# graph LR
# subgraph Input
# x_1
# x_i
# x_n
# end
# subgraph Perceptron
# n1((s)) --> n2(("f(s)"))
# end
# x_1 --> |w_1| n1
# x_i --> |w_i| n1
# x_n --> |w_n| n1
# n2 --> y["[0,1]"]
#
# ```
##Image(url="perceptron-mermaid.svg", width=500)
Image(url="https://svgshare.com/i/AbJ.svg", width=500)
# Optimization Process
# ====
#
# To learn the weights, $w$, we use an **optimizer** to find the best-fit (optimal) values for $w$ such that the inputs correctly map to the outputs.
#
# Typically, the process performs the following 4 steps iteratively.
#
# ### **Initialization**
#
# - **Step 1**: Initialize weights vector
#
# ### **Forward Propagation**
#
#
# - **Step 2a**: Multiply the weights vector with the inputs, sum the products, i.e. `s`
# - **Step 2b**: Put the sum through the sigmoid, i.e. `f()`
#
# ### **Back Propagation**
#
#
# - **Step 3a**: Compute the errors, i.e. difference between expected output and predictions
# - **Step 3b**: Multiply the error with the **derivatives** to get the delta
# - **Step 3c**: Multiply the delta vector with the inputs, sum the product
#
# ### **Optimizer takes a step**
#
# - **Step 4**: Multiply the learning rate with the output of Step 3c.
#
#
#
#
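# Those four steps can be sketched end-to-end on a single toy example (all numbers below are made up; this is a sketch, not the notebook's training loop):

```python
import numpy as np

np.random.seed(0)
x = np.array([1.0, 0.5])               # toy input; first element acts as bias
truth = 1.0                            # desired output

w = np.random.random(2)                # Step 1: initialize the weights vector
lr = 0.5                               # learning rate

for _ in range(50):
    s = np.dot(w, x)                   # Step 2a: weighted sum of inputs
    pred = 1 / (1 + np.exp(-s))        # Step 2b: sigmoid transformation
    error = truth - pred               # Step 3a: error vs. expected output
    delta = error * pred * (1 - pred)  # Step 3b: scale error by sigmoid derivative
    w += lr * delta * x                # Steps 3c + 4: multiply by inputs, take a step
print(pred)
```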
import math
import numpy as np
np.random.seed(0)
# +
def sigmoid(x): # Squashes any real value into the range (0, 1).
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
# Hint: let sx = sigmoid(x)
return sx * (1 - sx)
# -
sigmoid(np.array([2.5, 0.32, -1.42])) # [out]: array([0.92414182, 0.57932425, 0.19466158])
sigmoid_derivative(np.array([2.5, 0.32, -1.42])) # [out]: array([0.07010372, 0.24370766, 0.15676845])
def cost(predicted, truth):
return np.abs(truth - predicted)
gold = np.array([0.5, 1.2, 9.8])
pred = np.array([0.6, 1.0, 10.0])
cost(pred, gold)
gold = np.array([0.5, 1.2, 9.8])
pred = np.array([9.3, 4.0, 99.0])
cost(pred, gold)
# Representing OR Boolean
# ---
#
# Let's consider the OR boolean function and apply the perceptron with simple gradient descent.
#
# | x2 | x3 | y |
# |:--:|:--:|:--:|
# | 0 | 0 | 0 |
# | 0 | 1 | 1 |
# | 1 | 0 | 1 |
# | 1 | 1 | 1 |
#
X = or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = or_output = np.array([[0,1,1,1]]).T
or_input
or_output
# Define the shape of the weight vector.
num_data, input_dim = or_input.shape
# Define the shape of the output vector.
output_dim = len(or_output.T)
print('Inputs\n======')
print('no. of rows =', num_data)
print('no. of cols =', input_dim)
print('\n')
print('Outputs\n=======')
print('no. of cols =', output_dim)
# Initialize weights between the input layers and the perceptron
W = np.random.random((input_dim, output_dim))
W
# Step 2a: Multiply the weights vector with the inputs, sum the products
# ====
#
# To get the output of step 2a,
#
# - Iterate through each row of the data, `X`
# - For each column in each row, find the product of the value and the respective weights
# - For each row, compute the sum of the products
# If we write it imperatively:
summation = []
for row in X:
sum_wx = 0
for feature, weight in zip(row, W):
sum_wx += feature * weight
summation.append(sum_wx)
print(np.array(summation))
# If we vectorize the process and use numpy.
np.dot(X, W)
# Train the Single-Layer Model
# ====
#
#
# +
num_epochs = 10000 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.
# Lets standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
for _ in range(num_epochs):
layer0 = X
# Step 2a: Multiply the weights vector with the inputs, sum the products, i.e. s
# Step 2b: Put the sum through the sigmoid, i.e. f()
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(X, W))
# Back propagation.
# Step 3a: Compute the errors, i.e. difference between expected output and predictions
# How much did we miss?
layer1_error = cost(layer1, Y)
# Step 3b: Multiply the error with the derivatives to get the delta
# multiply how much we missed by the slope of the sigmoid at the values in layer1
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# Step 3c: Multiply the delta vector with the inputs, sum the product (use np.dot)
# Step 4: Multiply the learning rate with the output of Step 3c.
W += learning_rate * np.dot(layer0.T, layer1_delta)
# -
layer1
# Expected output.
Y
# On the training data
[[int(prediction > 0.5)] for prediction in layer1]
# Let's try the XOR Boolean
# ---
#
# Let's consider the XOR boolean function and apply the perceptron with simple gradient descent.
#
# | x2 | x3 | y |
# |:--:|:--:|:--:|
# | 0 | 0 | 0 |
# | 0 | 1 | 1 |
# | 1 | 0 | 1 |
# | 1 | 1 | 0 |
#
X = xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = xor_output = np.array([[0,1,1,0]]).T
xor_input
xor_output
# +
num_epochs = 10000 # No. of times to iterate.
learning_rate = 0.003 # How large a step to take per iteration.
# Use the full XOR truth table as training data.
X = xor_input
Y = xor_output
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the input layers and the perceptron
W = np.random.random((input_dim, output_dim))
for _ in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(X, W))
# How much did we miss?
layer1_error = cost(layer1, Y)
# Back propagation.
# multiply how much we missed by the slope of the sigmoid at the values in layer1
layer1_delta = sigmoid_derivative(layer1) * layer1_error
# update weights
W += learning_rate * np.dot(layer0.T, layer1_delta)
# -
# Expected output.
Y
# On the training data
[int(prediction > 0.5) for prediction in layer1] # All correct.
# You can't represent XOR with a simple perceptron!
# ====
#
# No matter how you change the hyperparameters or data, the XOR function can't be represented by a single perceptron layer.
#
# There's no way to get the correct outputs for all four data points of the XOR boolean operation.
#
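# One way to convince yourself: brute-force search over a coarse grid of weights and biases finds no single-layer separator for XOR. (The grid is arbitrary, but XOR is in fact not linearly separable for *any* weights.)

```python
import numpy as np

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]

found = False
grid = np.linspace(-2, 2, 21)
for w1 in grid:
    for w2 in grid:
        for b in grid:
            # threshold the weighted sum, as a single perceptron would
            preds = [int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs]
            if preds == targets:
                found = True
print(found)  # False
```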
# Solving XOR (Add more layers)
# ====
# +
from itertools import chain
import numpy as np
np.random.seed(0)
def sigmoid(x): # Squashes any real value into the range (0, 1).
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
return sx * (1 - sx)
# Cost functions.
def cost(predicted, truth):
return truth - predicted
xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
xor_output = np.array([[0,1,1,0]]).T
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Lets set the dimensions for the intermediate layer.
hidden_dim = 5
# Initialize weights between the input layers and the hidden layer.
W1 = np.random.random((input_dim, hidden_dim))
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the hidden layers and the output layer.
W2 = np.random.random((hidden_dim, output_dim))
# Initialize weigh
num_epochs = 10000
learning_rate = 0.03
for epoch_n in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(layer0, W1))
layer2 = sigmoid(np.dot(layer1, W2))
# Back propagation (Y -> layer2)
# How much did we miss in the predictions?
layer2_error = cost(layer2, Y)
# In what direction is the target value?
# Were we really close? If so, don't change too much.
layer2_delta = layer2_error * sigmoid_derivative(layer2)
# Back propagation (layer2 -> layer1)
# How much did each layer1 value contribute to the layer2 error (according to the weights)?
layer1_error = np.dot(layer2_delta, W2.T)
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# update weights
W2 += learning_rate * np.dot(layer1.T, layer2_delta)
W1 += learning_rate * np.dot(layer0.T, layer1_delta)
##print(epoch_n, list((layer2)))
# -
# Training input.
X
# Expected output.
Y
layer2 # Our output layer
# On the training data
[int(prediction > 0.5) for prediction in layer2]
# Now try adding another layer
# ====
#
# Use the same process:
#
# 1. Initialize
# 2. Forward Propagate
# 3. Back Propagate
# 4. Update (aka step)
#
#
# +
from itertools import chain
import numpy as np
np.random.seed(0)
def sigmoid(x): # Squashes any real value into the range (0, 1).
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
return sx * (1 - sx)
# Cost functions.
def cost(predicted, truth):
return truth - predicted
xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
xor_output = np.array([[0,1,1,0]]).T
# -
X
Y
# +
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Lets set the dimensions for the intermediate layer.
layer0to1_hidden_dim = 5
layer1to2_hidden_dim = 5
# Initialize weights between the input layers 0 -> layer 1
W1 = np.random.random((input_dim, layer0to1_hidden_dim))
# Initialize weights between the layer 1 -> layer 2
W2 = np.random.random((layer0to1_hidden_dim, layer1to2_hidden_dim))
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the layer 2 -> layer 3
W3 = np.random.random((layer1to2_hidden_dim, output_dim))
# Initialize weigh
num_epochs = 10000
learning_rate = 1.0
for epoch_n in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(layer0, W1))
layer2 = sigmoid(np.dot(layer1, W2))
layer3 = sigmoid(np.dot(layer2, W3))
    # Back propagation (Y -> layer3)
# How much did we miss in the predictions?
layer3_error = cost(layer3, Y)
# In what direction is the target value?
# Were we really close? If so, don't change too much.
layer3_delta = layer3_error * sigmoid_derivative(layer3)
    # Back propagation (layer3 -> layer2)
    # How much did each layer2 value contribute to the layer3 error (according to the weights)?
    layer2_error = np.dot(layer3_delta, W3.T)
    layer2_delta = layer2_error * sigmoid_derivative(layer2)
# Back propagation (layer2 -> layer1)
# How much did each layer1 value contribute to the layer2 error (according to the weights)?
layer1_error = np.dot(layer2_delta, W2.T)
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# update weights
W3 += learning_rate * np.dot(layer2.T, layer3_delta)
W2 += learning_rate * np.dot(layer1.T, layer2_delta)
W1 += learning_rate * np.dot(layer0.T, layer1_delta)
# -
Y
layer3
# On the training data
[int(prediction > 0.5) for prediction in layer3]
# # Now, lets do it with PyTorch
#
# First let's try a single perceptron and see that we can't train a model that can represent XOR.
#
#
#
# +
from tqdm import tqdm
import torch
from torch import nn
from torch.autograd import Variable
from torch import FloatTensor
from torch import optim
use_cuda = torch.cuda.is_available()
# +
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_style("darkgrid")
sns.set(rc={'figure.figsize':(15, 10)})
# -
X # Original XOR X input in numpy array data structure.
Y # Original XOR Y output in numpy array data structure.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Converting the X to PyTorch-able data structure.
X_pt = torch.tensor(X).float()
X_pt = X_pt.to(device)
# Converting the Y to PyTorch-able data structure.
Y_pt = torch.tensor(Y, requires_grad=False).float()
Y_pt = Y_pt.to(device)
print(X_pt)
print(Y_pt)
# +
# Use tensor.shape to get the shape of the matrix/tensor.
num_data, input_dim = X_pt.shape
print('Inputs Dim:', input_dim)
num_data, output_dim = Y_pt.shape
print('Output Dim:', output_dim)
# -
# Use Sequential to define a simple feed-forward network.
model = nn.Sequential(
nn.Linear(input_dim, output_dim), # Use nn.Linear to get our simple perceptron
nn.Sigmoid() # Use nn.Sigmoid to get our sigmoid non-linearity
)
model
# Remember we define as: cost = truth - predicted
# If we take the absolute of cost, i.e.: cost = |truth - predicted|
# we get the L1 loss function.
criterion = nn.L1Loss()
learning_rate = 0.03
# The simple weights/parameters update processes we did before
# is call the gradient descent. SGD is the sochastic variant of
# gradient descent.
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
# (**Note**: Personally, I strongly encourage you to go through the [University of Washington course on machine learning regression](https://www.coursera.org/learn/ml-regression) to better understand the fundamentals of (i) ***gradient***, (ii) ***loss*** and (iii) ***optimizer***. But given that you already know how to code it, the more complex variants of gradient/loss computation and optimizer steps are easy to grasp.)
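# As a bare-bones reminder of what the optimizer's step does, here is plain gradient descent minimising a toy quadratic (nothing PyTorch-specific; the function and learning rate are made up):

```python
# minimise f(w) = (w - 3)^2; its gradient is 2 * (w - 3)
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad     # the "step": move against the gradient
print(round(w, 4))  # 3.0
```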
# # Training a PyTorch model
#
# To train a model using PyTorch, we simply iterate through the no. of epochs and imperatively state the computations we want to perform.
#
# ## Remember the steps?
#
# 1. Initialize
# 2. Forward Propagation
# 3. Backward Propagation
# 4. Update Optimizer
num_epochs = 1000
# +
# Step 1: Initialization.
# Note: When using PyTorch a lot of the manual weights
# initialization is done automatically when we define
# the model (aka architecture)
model = nn.Sequential(
nn.Linear(input_dim, output_dim),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 1.0
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 10000
losses = []
for i in tqdm(range(num_epochs)):
# Reset the gradient after every epoch.
optimizer.zero_grad()
    # Step 2: Forward Propagation
predictions = model(X_pt)
# Step 3: Back Propagation
# Calculate the cost between the predictions and the truth.
loss_this_epoch = criterion(predictions, Y_pt)
# Note: The neat thing about PyTorch is it does the
# auto-gradient computation, no more manually defining
# derivative of functions and manually propagating
# the errors layer by layer.
loss_this_epoch.backward()
# Step 4: Optimizer take a step.
# Note: Previously, we have to manually update the
# weights of each layer individually according to the
# learning rate and the layer delta.
# PyTorch does that automatically =)
optimizer.step()
# Log the loss value as we proceed through the epochs.
losses.append(loss_this_epoch.data.item())
# Visualize the losses
plt.plot(losses)
plt.show()
# -
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
    print('Pred:\t', int(prediction > 0.5))
    print('Output:\t', int(_y))
print('######')
# Now, try again with 2 layers using PyTorch
# ====
# +
# %%time
hidden_dim = 5
num_data, input_dim = X_pt.shape
num_data, output_dim = Y_pt.shape
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Sigmoid(),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 0.3
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 5000
losses = []
for _ in tqdm(range(num_epochs)):
optimizer.zero_grad()
predictions = model(X_pt)
loss_this_epoch = criterion(predictions, Y_pt)
loss_this_epoch.backward()
optimizer.step()
losses.append(loss_this_epoch.data.item())
##print([float(_pred) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
# Visualize the losses
plt.plot(losses)
plt.show()
# -
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction > 0.5))
    print('Output:\t', int(_y))
print('######')
# MNIST: The "Hello World" of Neural Nets
# ====
#
# Like any deep learning class, we ***must*** do the MNIST.
#
# The MNIST dataset:
#
# - is made up of handwritten digits
# - has a training set of 60,000 examples
# - has a test set of 10,000 examples
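# Each image is a 28x28 grid of pixels; before feeding it to a fully connected layer we flatten it into a 784-long vector, e.g.:

```python
import numpy as np

img = np.zeros((28, 28))      # one fake grayscale digit, for illustration
flat = img.reshape(-1)        # flatten to a 784-dimensional input vector
print(flat.shape)  # (784,)
```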
# We're going to install torchvision here because its dataset access is simpler =)
# !pip3 install torchvision
from torchvision import datasets, transforms
# +
mnist_train = datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
mnist_test = datasets.MNIST('../data', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# +
# Visualization Candies
import matplotlib.pyplot as plt
def show_image(mnist_x_vector, mnist_y_vector):
pixels = mnist_x_vector.reshape((28, 28))
    label = int(mnist_y_vector)
    plt.title('Label is {}'.format(label))
plt.imshow(pixels, cmap='gray')
plt.show()
# -
# Fifth image and label.
show_image(mnist_train.data[5], mnist_train.targets[5])
# # Let's apply what we learned about multi-layered perceptrons with PyTorch to the MNIST data.
# +
X_mnist = mnist_train.data.float()
Y_mnist = mnist_train.targets.float()
X_mnist_test = mnist_test.data.float()
Y_mnist_test = mnist_test.targets.float()
# -
Y_mnist.shape
# +
# Use FloatTensor.shape to get the shape of the matrix/tensor.
num_data, *input_dim = X_mnist.shape
print('No. of images:', num_data)
print('Inputs Dim:', input_dim)
num_data, *output_dim = Y_mnist.shape
num_test_data, *output_dim = Y_mnist_test.shape
print('Output Dim:', output_dim)
# +
# Flatten the dimensions of the images.
X_mnist = mnist_train.data.float().view(num_data, -1)
Y_mnist = mnist_train.targets.float().unsqueeze(1)
X_mnist_test = mnist_test.data.float().view(num_test_data, -1)
Y_mnist_test = mnist_test.targets.float().unsqueeze(1)
# +
# Use FloatTensor.shape to get the shape of the matrix/tensor.
num_data, *input_dim = X_mnist.shape
print('No. of images:', num_data)
print('Inputs Dim:', input_dim)
num_data, *output_dim = Y_mnist.shape
num_test_data, *output_dim = Y_mnist_test.shape
print('Output Dim:', output_dim)
# +
hidden_dim = 500
model = nn.Sequential(nn.Linear(784, 1),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 1.0
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 100
losses = []
plt.ion()
for _e in tqdm(range(num_epochs)):
optimizer.zero_grad()
predictions = model(X_mnist)
loss_this_epoch = criterion(predictions, Y_mnist)
loss_this_epoch.backward()
optimizer.step()
##print([float(_pred) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
losses.append(loss_this_epoch.data.item())
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
# -
predictions = model(X_mnist_test)
predictions
pred = np.array([np.argmax(_p) for _p in predictions.data.numpy()])
pred
truth = np.array([np.argmax(_p) for _p in Y_mnist_test.data.numpy()])
truth
(pred == truth).sum() / len(pred)
| completed/.ipynb_checkpoints/Session 1 - Hello NLP World-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# -*- coding: utf-8 -*-
from item_parser import *
sentence = "una muza y una caprese"
sent, tokens, parsed_items = parse(sentence)
print(sent)
for item in parsed_items:
print(item)
| FoodParser.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import moment

def convert_epoch_to_date(epoch):
start_of_epoch = moment.now().timezone("America/New_York").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch.add(days=epoch)
return start_of_epoch.format('YYYY-MM-DD')
def compute_intensity(domains_and_sessions):
#print(domains_and_sessions)
number_of_sessions_total = 0
number_of_sessions_where_intervention_was_seen = 0
for domain_and_sessions in domains_and_sessions:
domain = domain_and_sessions['domain']
is_goal_enabled = domain_and_sessions['is_goal_enabled']
is_goal_frequent = domain_and_sessions['is_goal_frequent']
if not is_goal_enabled:
continue
for session_info in domain_and_sessions['session_info_list_for_domain']:
number_of_sessions_total += 1
if session_info['intervention_active'] != None:
number_of_sessions_where_intervention_was_seen += 1
if number_of_sessions_total == 0:
return None
return number_of_sessions_where_intervention_was_seen / number_of_sessions_total
# OLD BROWSER ANALYSIS CODE, JUST IN CASE
# for stat in ext_db[user+ "_synced:seconds_on_domain_per_day"].find({"key2":{"$gt":930}}):
# date = epoch_to_date(int(stat['key2']))
# if date in acc_dev_day_inter[acc]:
# if "domain_time" not in acc_dev_day_inter[acc][date][BROWSER]:
# acc_dev_day_inter[acc][date][BROWSER]["domain_time"] = {}
# name = stat['key']
# if name not in acc_dev_day_inter[acc][date][BROWSER]["domain_time"] or stat['val'] > acc_dev_day_inter[acc][date][BROWSER]["domain_time"][name]:
# acc_dev_day_inter[acc][date][BROWSER]["domain_time"][name] = stat['val']
# for intervention_stat in ext_db[user + "_synced:interventions_active_for_domain_and_session"].find():
# if ("is_preview_mode" not in intervention_stat or not intervention_stat["is_preview_mode"]) and (
# "developer_mode" not in intervention_stat or not intervention_stat["developer_mode"]):
# moment_obj = moment.unix(intervention_stat["timestamp_local"])
# date = moment_obj.format("YYYY-MM-DD")
# if date in acc_dev_day_inter[acc] and len(intervention_stat["val"]) > 0:
# if "intervention_sessions" not in acc_dev_day_inter[acc][date][BROWSER]:
# acc_dev_day_inter[acc][date][BROWSER]["intervention_sessions"] = set([])
# acc_dev_day_inter[acc][date][BROWSER]["intervention_sessions"].add(intervention_stat["key"] +" " + str(intervention_stat["key2"]))
# acc_dev_day_inter[acc][date][BROWSER][GOALS].add(intervention_stat["key"])
# elif "is_preview_mode" in intervention_stat:
# print(intervention_stat)
# for ses_stat in ext_db[user + "_synced:seconds_on_domain_per_session"].find():
# moment_obj = moment.unix(ses_stat["timestamp_local"])
# date = moment_obj.format("YYYY-MM-DD")
# if date in acc_dev_day_inter[acc]:
# if "all_sessions" not in acc_dev_day_inter[acc][date][BROWSER]:
# acc_dev_day_inter[acc][date][BROWSER]["all_sessions"] = set([])
# acc_dev_day_inter[acc][date][BROWSER]["all_sessions"].add(ses_stat["key"] +" " + str(ses_stat["key2"]))
# Now let's validate that the data isn't funky.
# for date in acc_dev_day_inter[acc]:
# if "intervention_sessions" in acc_dev_day_inter[acc][date][BROWSER] and "domain_time" in acc_dev_day_inter[acc][date][BROWSER]:
# sessions_not_counted = acc_dev_day_inter[acc][date][BROWSER]["intervention_sessions"].difference(acc_dev_day_inter[acc][date][BROWSER]["all_sessions"])
# if (len(sessions_not_counted) < .1 * len(acc_dev_day_inter[acc][date][BROWSER]["all_sessions"])):
# obj = acc_dev_day_inter[acc][date][BROWSER]
#print("before addition:")
#print(obj[TOTAL_TIME])
#print(obj[GOAL_TIME])
#print(obj[OTHER_TIME])
# if len(obj["intervention_sessions"]) != 0:
# obj[NUM_INTER] += len(obj["intervention_sessions"])
# obj[NUM_SESSIONS] += len(obj["all_sessions"])
# if obj[GOAL_TIME] == 0:
# obj[GOAL_TIME] += sum([obj["domain_time"][n] for n in obj["domain_time"] if n in obj[GOALS]])
# obj[OTHER_TIME] += sum([obj["domain_time"][n] for n in obj["domain_time"] if n not in obj[GOALS]])
# obj[TOTAL_TIME] += obj[GOAL_TIME] + obj[OTHER_TIME]
# if obj[GOAL_TIME] + obj[OTHER_TIME] != obj[TOTAL_TIME]:
# print(acc + " "+ str(obj[TOTAL_TIME]) + " " + str(obj[OTHER_TIME]) + " " + str(obj[GOAL_TIME]))
# else:
# funky_days += 1
# print("data is too funky: " + str(len(sessions_not_counted)/len(acc_dev_day_inter[acc][date][BROWSER]["all_sessions"])))
import json
sessions_for_user_by_day_and_goal_for_all_users = json.load(open('browser_all_session_info_sept18_v4.json'))
# Turn this array into dictionary
user_sessions_for_user_by_day_and_goal = {}
for sessions_for_user_by_day_and_goal in sessions_for_user_by_day_and_goal_for_all_users:
user = sessions_for_user_by_day_and_goal['user']
print(user)
user_sessions_for_user_by_day_and_goal[user] = sessions_for_user_by_day_and_goal
for sessions_for_user_by_day_and_goal in sessions_for_user_by_day_and_goal_for_all_users[-6:-4]:
user = sessions_for_user_by_day_and_goal['user']
for key in sessions_for_user_by_day_and_goal["days_domains_and_sessions"][0]:
print(key)
print(sessions_for_user_by_day_and_goal["days_domains_and_sessions"][0]["domains_and_sessions"])
# +
import os
print(os.getcwd())
# Get Mongo database
from yaml import load
from pymongo import MongoClient
from getsecret import getsecret
client = MongoClient(getsecret("MONGODB_URI"))
db = client[getsecret("MOBILE_NAME")]
ext_client = MongoClient(getsecret("EXT_URI"))
ext_db = ext_client[getsecret("DB_NAME")]
# Get all synced accounts and their respective users.
import urllib.request as req
import json
accounts = json.loads(req.urlopen("http://localhost:5000/synced_emails").read().decode("utf-8"))
print(accounts)
# counter for figures
counter = 0
browser_freq = 0
browser_infreq = 0
### CONSTANTS ###
ANDROID_INTENSITY = "android_intensity"
ANDROID = "android"
BROWSER = "browser" # habitlab goal, i.e. facebook/spend_less_time or custom/spend_less_time_developers.slashdot.org
BROWSER_DOMAIN = "browser_domain"
SHARED = "shared"
PACKAGES = "packages"
OTHER_FREQUENCY = "other_frequency"
TIME = "time"
from datetime import date, datetime
SPEND_LESS_TIME_LENGTH = len("custom/spend_less_time_")
INTENSITY = "intensity"
TOTAL_TIME = "total_time"
ANDROID = "android"
BROWSER = "browser"
HASH = "email_hash"
GOALS = "goals"
DAY = "day"
FREQ_GOALS = "freq_goals"
INFREQ_GOALS = "infreq_goals"
OTHER_TIME = "other_time"
GOAL_TIME = "goal_time"
FREQ_TIME = "freq_time"
INFREQ_TIME = "infreq_time"
AVG_FREQ_TIME = "avg_freq_time"
AVG_INFREQ_TIME = "avg_infreq_time"
BROWSER_INTENSITY = "browser_intensity"
NUM_BROWSER_GOALS = "num_browser_goals"
NUM_INTER = "number_interventions"
NUM_SESSIONS = "number_sessions"
TARGET_TIME = "target_time"
CATEGORY = "category"
# Associate users with domain name which will function as our key.
# Top-level-domain-names that are not pertinent to the application.
TLDs = ['www', 'aaa', 'abb', 'abc', 'ac', 'aco', 'ad', 'ads', 'ae', 'aeg', 'af', 'afl', 'ag', 'ai', 'aig', 'al',
'am', 'anz', 'ao', 'aol', 'app', 'aq', 'ar', 'art', 'as', 'at', 'au', 'aw', 'aws', 'ax', 'axa', 'az', 'ba',
'bar', 'bb', 'bbc', 'bbt', 'bcg', 'bcn', 'bd', 'be', 'bet', 'bf', 'bg', 'bh', 'bi', 'bid', 'bio', 'biz', 'bj',
'bm', 'bms', 'bmw', 'bn', 'bnl', 'bo', 'bom', 'boo', 'bot', 'box', 'br', 'bs', 'bt', 'buy', 'bv', 'bw', 'by',
'bz', 'bzh', 'ca', 'cab', 'cal', 'cam', 'car', 'cat', 'cba', 'cbn', 'cbs', 'cc', 'cd', 'ceb', 'ceo', 'cf',
'cfa', 'cfd', 'cg', 'ch', 'ci', 'ck', 'cl', 'cm', 'cn', 'co', 'com', 'cr', 'crs', 'csc', 'cu', 'cv', 'cw',
'cx', 'cy', 'cz', 'dad', 'day', 'dds', 'de', 'dev', 'dhl', 'diy', 'dj', 'dk', 'dm', 'dnp', 'do', 'dog', 'dot',
'dtv', 'dvr', 'dz', 'eat', 'ec', 'eco', 'edu', 'ee', 'eg', 'er', 'es', 'esq', 'et', 'eu', 'eus', 'fan', 'fi',
'fit', 'fj', 'fk', 'fly', 'fm', 'fo', 'foo', 'fox', 'fr', 'frl', 'ftr', 'fun', 'fyi', 'ga', 'gal', 'gap', 'gb',
'gd', 'gdn', 'ge', 'gea', 'gf', 'gg', 'gh', 'gi', 'gl', 'gle', 'gm', 'gmo', 'gmx', 'gn', 'goo', 'gop', 'got',
'gov', 'gp', 'gq', 'gr', 'gs', 'gt', 'gu', 'gw', 'gy', 'hbo', 'hiv', 'hk', 'hkt', 'hm', 'hn', 'hot', 'how',
'hr', 'ht', 'hu', 'ibm', 'ice', 'icu', 'id', 'ie', 'ifm', 'il', 'im', 'in', 'inc', 'ing', 'ink', 'int', 'io',
'iq', 'ir', 'is', 'ist', 'it', 'itv', 'jcb', 'jcp', 'je', 'jio', 'jlc', 'jll', 'jm', 'jmp', 'jnj', 'jo', 'jot',
'joy', 'jp', 'ke', 'kfh', 'kg', 'kh', 'ki', 'kia', 'kim', 'km', 'kn', 'kp', 'kpn', 'kr', 'krd', 'kw', 'ky',
'kz', 'la', 'lat', 'law', 'lb', 'lc', 'lds', 'li', 'lk', 'llc', 'lol', 'lpl', 'lr', 'ls', 'lt', 'ltd', 'lu',
'lv', 'ly', 'ma', 'man', 'map', 'mba', 'mc', 'md', 'me', 'med', 'men', 'mg', 'mh', 'mil', 'mit', 'mk', 'ml',
'mlb', 'mls', 'mm', 'mma', 'mn', 'mo', 'moe', 'moi', 'mom', 'mov', 'mp', 'mq', 'mr', 'ms', 'msd', 'mt', 'mtn',
'mtr', 'mu', 'mv', 'mw', 'mx', 'my', 'mz', 'na', 'nab', 'nba', 'nc', 'ne', 'nec', 'net', 'new', 'nf', 'nfl',
'ng', 'ngo', 'nhk', 'ni', 'nl', 'no', 'now', 'np', 'nr', 'nra', 'nrw', 'ntt', 'nu', 'nyc', 'nz', 'obi', 'off',
'om', 'one', 'ong', 'onl', 'ooo', 'org', 'ott', 'ovh', 'pa', 'pay', 'pe', 'pet', 'pf', 'pg', 'ph', 'phd',
'pid', 'pin', 'pk', 'pl', 'pm', 'pn', 'pnc', 'pr', 'pro', 'pru', 'ps', 'pt', 'pub', 'pw', 'pwc', 'py', 'qa',
'qvc', 're', 'red', 'ren', 'ril', 'rio', 'rip', 'ro', 'rs', 'ru', 'run', 'rw', 'rwe', 'sa', 'sap', 'sas', 'sb',
'sbi', 'sbs', 'sc', 'sca', 'scb', 'sd', 'se', 'ses', 'sew', 'sex', 'sfr', 'sg', 'sh', 'si', 'sj', 'sk', 'ski',
'sky', 'sl', 'sm', 'sn', 'so', 'soy', 'sr', 'srl', 'srt', 'st', 'stc', 'su', 'sv', 'sx', 'sy', 'sz', 'tab',
'tax', 'tc', 'tci', 'td', 'tdk', 'tel', 'tf', 'tg', 'th', 'thd', 'tj', 'tjx', 'tk', 'tl', 'tm', 'tn', 'to',
'top', 'tr', 'trv', 'tt', 'tui', 'tv', 'tvs', 'tw', 'tz', 'ua', 'ubs', 'ug', 'uk', 'uno', 'uol', 'ups', 'us',
'uy', 'uz', 'va', 'vc', 've', 'vet', 'vg', 'vi', 'vig', 'vin', 'vip', 'vn', 'vu', 'wed', 'wf', 'win', 'wme',
'wow', 'ws', 'wtc', 'wtf', 'xin', 'xxx', 'xyz', 'ye', 'you', 'yt', 'yun', 'za', 'zip', 'zm', 'zw']
def get_name(name, device):
"""
@param name: goal name (package name for Android)
@param device: "android" or "browser" or "browser_domain"
@return name of goal with subdomains removed and goal annotation removed (i.e. spend_less_time)
"""
if device == ANDROID and name =="com.google.android.gm" or device == BROWSER and "gmail" in name:
return "gmail"
name = name.lower()
if "custom" in name and device == BROWSER:
# strip off the "custom/spend_less_time_"
name = name[SPEND_LESS_TIME_LENGTH:]
elif device == BROWSER:
return name.split('/spend')[0]
    # Now we have to get the distinctive part of the domain.
subs = list(filter(lambda x: x != "android" and x != "google" and x != "apps" and x not in TLDs, name.split('.')))
if device == ANDROID:
if len(subs) > 0:
return subs[0]
return name
else:
if len(subs) > 0:
return subs[len(subs) - 1]
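# The subdomain-stripping at the heart of get_name can be checked in isolation;
# a minimal sketch with a toy TLD set (the real code filters against the full
# TLDs list above and adds goal-name handling):

```python
def core_domain(name, tlds):
    # Drop TLDs and boilerplate subdomains; keep the last distinctive part.
    parts = [p for p in name.lower().split('.')
             if p not in tlds and p not in ('android', 'google', 'apps')]
    return parts[-1] if parts else None

TOY_TLDS = {'www', 'com', 'org', 'co', 'uk'}
print(core_domain('www.facebook.com', TOY_TLDS))          # facebook
print(core_domain('developers.slashdot.org', TOY_TLDS))   # slashdot
```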
def organize_stats(shared_goals, stats, device, counts, user_id):
"""
    Organizes the stats object into shared_goals for the given device.
@param shared_goals: dictionary
@param stats: stats object returned from freq_stats
@param device: ANDROID or BROWSER
"""
for iso in stats:
for freq in stats[iso]:
for goal in stats[iso][freq]:
name = get_name(goal, device)
if name not in shared_goals:
shared_goals[name] = {ANDROID: {PACKAGES:[]} , BROWSER: {PACKAGES:[]} }
if goal not in shared_goals[name][device][PACKAGES]:
shared_goals[name][device][PACKAGES].append(goal)
shared_goals[name][device][goal] = {}
if iso not in shared_goals[name][device][goal]:
shared_goals[name][device][goal][iso] = freq
if device == BROWSER:
counts[freq] += 1
# Before I submitted the update, some apps under the same name wouldn't have the same freq setting.
elif shared_goals[name][device][goal][iso] != freq:
shared_goals[name][device][goal][iso] = "both"
counts["both"] += 1
if len(shared_goals[name][ANDROID][PACKAGES]) > 0 and len(shared_goals[name][BROWSER][PACKAGES]) > 0:
shared_goals[SHARED].add(name)
# +
acc_dev_day_inter = {}
devices = [ANDROID, BROWSER]
import moment
from time_utils import epoch_to_date
counter = 0
funky_days = 0
print(len([account for account in accounts if len(account[ANDROID]) > 0 and len(account[BROWSER]) > 0]))
for account in accounts:
counter += 1
acc = account["_id"]
acc_dev_day_inter[acc] = {}
print(counter/len(accounts))
if len(account[ANDROID]) > 0 and len(account[BROWSER]) > 0:
# interventions
for user in account[ANDROID]:
for s in db[user + "_sessions"].find({"interventions": {"$exists" :True}, "enabled": {"$exists": True}, "duration": {"$lt": 86400}}):
time = moment.unix(s["timestamp"])
date = time.format("YYYY-MM-DD")
if date not in acc_dev_day_inter[acc]:
acc_dev_day_inter[acc][date] = {device: {NUM_INTER: 0, NUM_SESSIONS: 0, OTHER_TIME:0, GOAL_TIME: 0, TOTAL_TIME: 0, GOALS: set([])} for device in devices}
acc_dev_day_inter[acc][date][ANDROID][TOTAL_TIME] += s["duration"]
if s["enabled"]:
acc_dev_day_inter[acc][date][ANDROID][GOAL_TIME] += s["duration"]
acc_dev_day_inter[acc][date][ANDROID][GOALS].add(s["domain"])
acc_dev_day_inter[acc][date][ANDROID][NUM_SESSIONS] += 1
acc_dev_day_inter[acc][date][ANDROID][NUM_INTER] += len(s["interventions"])
else:
acc_dev_day_inter[acc][date][ANDROID][OTHER_TIME] += s["duration"]
for user in account[BROWSER]:
if user in user_sessions_for_user_by_day_and_goal:
sessions_for_user_by_day_and_goal = user_sessions_for_user_by_day_and_goal[user]
for day in sessions_for_user_by_day_and_goal["days_domains_and_sessions"]:
domains = day["domains_and_sessions"]
date = convert_epoch_to_date( day["epoch"])
if date in acc_dev_day_inter[acc]:
acc_dev_day_inter[acc][date][BROWSER][INTENSITY] = compute_intensity(domains) # really intensity
if len(domains) > 0:
if domains[0]["is_goal_enabled"]:
acc_dev_day_inter[acc][date][BROWSER][GOAL_TIME] = domains[0]["time_on_domain_today"] + domains[0]["time_on_other_goal_domains_today"]
else:
acc_dev_day_inter[acc][date][BROWSER][GOAL_TIME] = domains[0]["time_on_other_goal_domains_today"]
acc_dev_day_inter[acc][date][BROWSER][TOTAL_TIME] = domains[0]["time_on_domain_today"] + domains[0]["time_on_all_other_domains_today"]
acc_dev_day_inter[acc][date][BROWSER][OTHER_TIME] = acc_dev_day_inter[acc][date][BROWSER][TOTAL_TIME]-acc_dev_day_inter[acc][date][BROWSER][GOAL_TIME]
# +
import math
import pandas as pd
AVG_GOAL_TIME = "avg_goal_time"
PORP_INT = "proportion_interventions"
props = [NUM_INTER, GOAL_TIME, TOTAL_TIME, OTHER_TIME]
all_props = [HASH]
for device in devices:
for prop in props:
all_props.append(device + "_" + prop)
#all_props.append(device + "_" + AVG_GOAL_TIME)
all_props.append(device + "_" + PORP_INT)
data_frame_dict = {prop: [] for prop in all_props}
for acc in acc_dev_day_inter:
for date in acc_dev_day_inter[acc]:
#print(acc + " " + date)
# check if all time values > 0
gt0 = True
for device in devices:
for prop in props:
if "time" in prop and acc_dev_day_inter[acc][date][device][prop] == 0:
print(prop + " " + device)
gt0 = False
        if gt0:
data_frame_dict[HASH].append(acc)
for device in devices:
for prop in props:
if "time" in prop:
data_frame_dict[device + "_" + prop].append(math.log(acc_dev_day_inter[acc][date][device][prop]))
else:
data_frame_dict[device + "_" + prop].append((acc_dev_day_inter[acc][date][device][prop]))
obj = acc_dev_day_inter[acc][date][device]
if device == ANDROID:
data_frame_dict[device + "_" + PORP_INT].append(obj[NUM_INTER]/obj[NUM_SESSIONS])
elif device == BROWSER:
data_frame_dict[device + "_" + PORP_INT].append(obj[INTENSITY])
#data_frame_dict[device+"_"+AVG_GOAL_TIME].append(math.log(obj[GOAL_TIME]/len(obj[GOALS])))
df = pd.DataFrame(data_frame_dict)
print(df)
# -
df.to_csv("2018-09-20-cross_device_intervention_effects")
import matplotlib.pyplot as plt
plt.figure(12)
plt.hist(data_frame_dict[ANDROID + "_" + NUM_INTER], bins=20)  # note: NUM_SESSIONS was never added to data_frame_dict, so plot NUM_INTER
# %load_ext rpy2.ipython
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
#
# #install.packages('ez')
# #install.packages('lme4')
#
# library(lme4)
# library(sjPlot)
# library(lmerTest)
# unique(df$email_hash)
# #library(ez)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( android_other_time~ browser_proportion_interventions , data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# # ANDROID EFFECTS BROWSER - SORRY ANDROID_NUMBER_INTERVENTIONS IS ACTUALLY NEW DEF OF INTENSITY
# #df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( browser_goal_time~android_number_interventions , data = df)
# #summary(results)
# #class(results) <- "lmerMod"
# library(stargazer)
# show(summary(results))
# stargazer(results,
# dep.var.labels=c("Log time spent per day", "(Intercept)"),
# no.space=TRUE,
# title="Time spent on a given goal. Users spend less time on goals assigned to the \"frequent\" condition.",
# label="tab:mobile_frequency_effective",
# keep.stat="n",
# table.placement = "tb",
# star.cutoffs = c(0.05, 0.01, 0.001)
# )
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( android_total_time~ browser_number_interventions , data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( android_goal_time~ browser_number_interventions, data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( browser_goal_time~ android_number_interventions , data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( android_number_sessions~ browser_number_interventions , data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( browser_number_sessions~ android_number_interventions, data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( browser_avg_goal_time~ android_number_interventions, data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( android_avg_goal_time~ browser_number_interventions, data = df)
# summary(results)
# + magic_args="-i df -w 5 -h 5 --units in -r 200" language="R"
# df$email_hash <- factor(df$email_hash, ordered=FALSE)
# results <- lm( browser_avg_goal_time~ android_number_interventions, data = df)
# summary(results)
# -
| Cross Device Intervention Effects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Qiskit v0.31.0 (ipykernel)
# language: python
# name: python3
# ---
# ## IBM Quantum Challenge Fall 2021
# # Challenge 4: Battery revenue optimization
#
# <div class="alert alert-block alert-info">
#
# We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience.
# # Introduction to QAOA
#
# When it comes to optimization problems, a well-known algorithm for finding approximate solutions to combinatorial-optimization problems is **QAOA (Quantum Approximate Optimization Algorithm)**. You may have already used it in the finance exercise of Challenge 1 without knowing what it is. In this challenge we will learn more about QAOA: how does it work, and why do we need it?
#
# First off, what is QAOA? Simply put, QAOA is a classical-quantum hybrid algorithm, proposed by Farhi, Goldstone, and Gutmann (2014)[**[1]**](https://arxiv.org/abs/1411.4028), that combines a parametrized quantum circuit known as an ansatz with a classical routine that optimizes the circuit parameters.
#
# It is a variational algorithm that uses a unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ characterized by the parameters $(\boldsymbol{\beta}, \boldsymbol{\gamma})$ to prepare a quantum state $|\psi(\boldsymbol{\beta}, \boldsymbol{\gamma})\rangle$. The goal of the algorithm is to find optimal parameters $(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})$ such that the quantum state $|\psi(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})\rangle$ encodes the solution to the problem.
#
# The unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ has a specific form and is composed of two unitaries $U(\boldsymbol{\beta}) = e^{-i \boldsymbol{\beta} H_B}$ and $U(\boldsymbol{\gamma}) = e^{-i \boldsymbol{\gamma} H_P}$ where $H_{B}$ is the mixing Hamiltonian and $H_{P}$ is the problem Hamiltonian. Such a choice of unitary draws its inspiration from a related scheme called quantum annealing.
#
# The state is prepared by applying the two unitaries in alternating blocks, $p$ times each, such that
#
# $$\lvert \psi(\boldsymbol{\beta}, \boldsymbol{\gamma}) \rangle = \underbrace{U(\boldsymbol{\beta}) U(\boldsymbol{\gamma})
# \cdots U(\boldsymbol{\beta}) U(\boldsymbol{\gamma})}_{p \; \text{times}}
# \lvert \psi_0 \rangle$$
#
# where $|\psi_{0}\rangle$ is a suitable initial state.
#
# <center><img src="resources/qaoa_circuit.png" width="600"></center>
#
# The QAOA implementation of Qiskit directly extends VQE and inherits VQE’s general hybrid optimization structure.
# To learn more about QAOA, please refer to the [**QAOA chapter**](https://qiskit.org/textbook/ch-applications/qaoa.html) of Qiskit Textbook.
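# As a toy illustration of the alternating-unitary structure (an aside, not part of the challenge), take a single qubit with problem Hamiltonian $H_P = Z$ and mixer $H_B = X$. One QAOA layer applied to $|+\rangle$ gives $\langle Z \rangle = \sin(2\beta)\sin(2\gamma)$, so $(\beta, \gamma) = (\pi/4, -\pi/4)$ reaches the ground state of $Z$; a pure-Python sketch:

```python
import cmath
import math

def qaoa_expect_z(beta, gamma):
    """<Z> after one QAOA layer e^{-i beta X} e^{-i gamma Z} applied to |+>."""
    a = b = 1 / math.sqrt(2)                                      # |+> amplitudes
    a, b = cmath.exp(-1j * gamma) * a, cmath.exp(1j * gamma) * b  # e^{-i gamma Z}
    c, s = math.cos(beta), math.sin(beta)
    a, b = c * a - 1j * s * b, -1j * s * a + c * b                # e^{-i beta X}
    return abs(a) ** 2 - abs(b) ** 2                              # <psi|Z|psi>

print(qaoa_expect_z(math.pi / 4, -math.pi / 4))  # ~ -1.0: ground state of Z
```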
# <div class="alert alert-block alert-success">
#
# **Goal**
#
# Implement the quantum optimization code for the battery revenue problem. <br>
# <br>
# **Plan**
#
# First, you will learn about QAOA and knapsack problem.
# <br><br>
# **Challenge 4a** - Simple knapsack problem with QAOA: familiarize yourself with a typical knapsack problem and find the optimized solution with QAOA.
# <br><br>
# **Final Challenge 4b** - Battery revenue optimization with Qiskit knapsack class: learn the battery revenue optimization problem and find the optimized solution with QAOA. You can receive a badge for solving all the challenge exercises up to 4b.
# <br><br>
# **Final Challenge 4c** - Battery revenue optimization with your own quantum circuit: implement the battery revenue optimization problem to find the lowest circuit cost and circuit depth. Achieve better accuracy with smaller circuits. You can obtain a ranked score by solving this exercise.
# </div>
#
# <div class="alert alert-block alert-info">
#
# Before you begin, we recommend watching the [**Qiskit Optimization Demo Session with Atsushi Matsuo**](https://youtu.be/claoY57eVIc?t=104) and checking out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-optimization) to learn how to solve optimization problems with Qiskit.
#
# </div>
# As we just mentioned, QAOA is an algorithm which can be used to find approximate solutions to combinatorial optimization problems, which includes many specific problems, such as:
#
# - TSP (Traveling Salesman Problem) problem
# - Vehicle routing problem
# - Set cover problem
# - Knapsack problem
# - Scheduling problems, etc.
#
# Some of them are hard to solve (in other words, they are NP-hard problems), and it is impractical to find their exact solutions in a reasonable amount of time; that is why we need approximate algorithms. Next, we will introduce an instance of using QAOA to solve one of these combinatorial optimization problems: the **knapsack problem**.
# # Knapsack Problem #
#
# [**Knapsack Problem**](https://en.wikipedia.org/wiki/Knapsack_problem) is an optimization problem that goes like this: given a list of items, each with a weight and a value, and a knapsack that can hold a maximum weight, determine which items to put in the knapsack so as to maximize the total value taken without exceeding the maximum weight the knapsack can hold. A greedy approach would be the most efficient, but it is not guaranteed to give the best result.
#
#
# <center><img src="resources/Knapsack.png" width="400"></center>
#
# Image source: [Knapsack.svg.](https://commons.wikimedia.org/w/index.php?title=File:Knapsack.svg&oldid=457280382)
#
# <div id='problem'></div>
# <div class="alert alert-block alert-info">
# Note: The knapsack problem has many variations; here we will only discuss the 0-1 knapsack problem, which is NP-hard: each item is either taken or not (the 0-1 property). We cannot divide an item or take multiple copies of the same item.
# </div>
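# A brute-force baseline makes the 0-1 property concrete: each item is either in or out, so an $n$-item instance has $2^n$ candidate packings. A minimal sketch (using the item data of Challenge 4a):

```python
from itertools import product

def knapsack_brute_force(values, weights, capacity):
    """Exhaustively search all 2^n take/leave assignments (0-1 property)."""
    best_value, best_picks = 0, ()
    for picks in product((0, 1), repeat=len(values)):
        weight = sum(w for w, p in zip(weights, picks) if p)
        value = sum(v for v, p in zip(values, picks) if p)
        if weight <= capacity and value > best_value:
            best_value, best_picks = value, picks
    return best_value, best_picks

print(knapsack_brute_force([5, 6, 7, 8, 9], [4, 5, 6, 7, 8], 18))
```

This is only viable for small $n$; the dynamic programming and QAOA approaches below scale much better.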
# ## Challenge 4a: Simple knapsack problem with QAOA
#
# <div class="alert alert-block alert-success">
#
# **Challenge 4a** <br>
# You are given a knapsack with a capacity of 18 and 5 pieces of luggage. When the weights of each piece of luggage $W$ is $w_i = [4,5,6,7,8]$ and the value $V$ is $v_i = [5,6,7,8,9]$, find the packing method that maximizes the sum of the values of the luggage within the capacity limit of 18.
#
#
# </div>
from qiskit_optimization.algorithms import MinimumEigenOptimizer
from qiskit import Aer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
import numpy as np
# ## Dynamic Programming Approach ##
#
# A typical classical method for finding an exact solution is the dynamic programming approach:
val = [5,6,7,8,9]
wt = [4,5,6,7,8]
W = 18
# +
def dp(W, wt, val, n):
k = [[0 for x in range(W + 1)] for x in range(n + 1)]
for i in range(n + 1):
for w in range(W + 1):
if i == 0 or w == 0:
k[i][w] = 0
elif wt[i-1] <= w:
k[i][w] = max(val[i-1] + k[i-1][w-wt[i-1]], k[i-1][w])
else:
k[i][w] = k[i-1][w]
picks=[0 for x in range(n)]
volume=W
    for i in range(n, 0, -1):
if (k[i][volume]>k[i-1][volume]):
picks[i-1]=1
volume -= wt[i-1]
return k[n][W],picks
n = len(val)
print("optimal value:", dp(W, wt, val, n)[0])
print('\n index of the chosen items:')
for i in range(n):
if dp(W, wt, val, n)[1][i]:
print(i,end=' ')
# -
# The time complexity of this method is $O(NW)$, where $N$ is the number of items and $W$ is the maximum weight of the knapsack. We can solve this instance with an exact approach within a reasonable time since the number of combinations is limited, but when the number of items becomes huge, an exact approach becomes impractical.
# ## QAOA approach ##
#
# Qiskit provides application classes for various optimization problems, including the knapsack problem, so that users can easily try various optimization problems on quantum computers. In this exercise, we are going to use the application class for the `Knapsack` problem.
#
# There are application classes for other optimization problems available as well. See [**Application Classes for Optimization Problems**](https://qiskit.org/documentation/optimization/tutorials/09_application_classes.html#Knapsack-problem) for details.
# import packages necessary for application classes.
from qiskit_optimization.applications import Knapsack
# To represent Knapsack problem as an optimization problem that can be solved by QAOA, we need to formulate the cost function for this problem.
# +
def knapsack_quadratic_program():
# Put values, weights and max_weight parameter for the Knapsack()
##############################
# Provide your code here
prob = Knapsack(val, wt, W)
#
##############################
# to_quadratic_program generates a corresponding QuadraticProgram of the instance of the knapsack problem.
kqp = prob.to_quadratic_program()
return prob, kqp
prob,quadratic_program=knapsack_quadratic_program()
quadratic_program
# print(prob)
# -
# We can solve the problem using the classical `NumPyMinimumEigensolver` to find the minimum eigenvector, which gives a reference solution without resorting to dynamic programming; we can also apply QAOA.
# Numpy Eigensolver
meo = MinimumEigenOptimizer(min_eigen_solver=NumPyMinimumEigensolver())
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
# +
# QAOA
seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
# -
# You will submit the quadratic program created by your `knapsack_quadratic_program` function.
# Check your answer and submit using the following code
from qc_grader import grade_ex4a
grade_ex4a(quadratic_program)
# <div id='problem'></div>
# <div class="alert alert-block alert-info">
# Note: QAOA finds the approximate solutions, so the solution by QAOA is not always optimal.
# </div>
# # Battery Revenue Optimization Problem #
#
# <div id='problem'></div>
# <div class="alert alert-block alert-success">
# In this exercise we will use a quantum algorithm to solve a real-world instance of a combinatorial optimization problem: Battery revenue optimization problem.
# </div>
# Battery storage systems have provided a solution to flexibly integrate large-scale renewable energy (such as wind and solar) in a power system. The revenues from batteries come from different types of services sold to the grid. The process of energy trading of battery storage assets is as follows: a regulator asks each battery supplier to choose a market in advance for each time window. The battery operator will then charge the battery with renewable energy and release the energy to the grid depending on pre-agreed contracts. The supplier therefore makes forecasts of the return and the number of charge/discharge cycles for each time window to optimize its overall return.
#
# How to maximize the revenue of battery-based energy storage is a concern of all battery storage investors. Always letting the battery supply power to the market that pays the most in each time window might be a simple guess, but in reality we have to consider many other factors.
#
# What we cannot ignore is the aging of batteries, also known as **degradation**. As charge/discharge cycles progress, the battery capacity gradually degrades (the amount of energy the battery can store, or the amount of power it can deliver, permanently decreases). After a number of cycles, the battery reaches the end of its usefulness. Since a battery's performance decreases as it is used, choosing the best cash return for each time window one after the other, without considering degradation, does not lead to an optimal return over the lifetime of the battery, i.e. before the maximum number of charge/discharge cycles is reached.
#
# Therefore, in order to optimize the revenue of the battery, what we have to do is select the market for the battery in each time window, taking both **the returns on these markets (value)**, based on price forecasts, and the expected battery **degradation over time (cost)** into account. It sounds like solving a common optimization problem, right?
#
# We will investigate how quantum optimization algorithms could be adapted to tackle this problem.
#
#
# <div>
# <p></p>
# <center><img src="resources/renewable-g7ac5bd48e_640.jpg" width="600"></center>
#
# </div>
#
# Image source: [pixabay](https://pixabay.com/photos/renewable-energy-environment-wind-1989416/)
# ## Problem Setting
#
# Here, we have referred to the problem setting in de la Grand'rive and Hullo's paper [**[2]**](https://arxiv.org/abs/1908.02210).
#
# Consider two markets $M_{1}$, $M_{2}$. During every time window (typically a day), the battery operates on one market or the other, for a maximum of $n$ time windows. Every day is considered independent, and the intraday optimization is a standalone problem: every morning the battery starts with the same level of power, so we don't consider charging problems. With forecasts available on both markets for the $n$ time windows, we assume known for each time window $t$ (day) and for each market:
#
# - the daily returns $\lambda_{1}^{t}$ , $\lambda_{2}^{t}$
#
# - the daily degradation, or health cost (number of cycles), for the battery $c_{1}^{t}$, $c_{2}^{t}$
#
# We want to find the optimal schedule, i.e. optimize the lifetime return with a cost of less than $C_{max}$ cycles. We introduce $d = \max_{t}\left\{c_{1}^{t}, c_{2}^{t}\right\}$.
#
# We introduce the decision variable $z_{t}, \forall t \in [1, n]$ such that $z_{t} = 0$ if the supplier chooses $M_{1}$ and $z_{t} = 1$ if the supplier chooses $M_{2}$, with every possible vector $z = [z_{1}, ..., z_{n}]$ being a possible schedule. The previously formulated problem can then be expressed as:
#
#
# \begin{equation}
# \max_{z \in \left\{0,1\right\}^{n}} \displaystyle\sum_{t=1}^{n}\left[(1-z_{t})\lambda_{1}^{t}+z_{t}\lambda_{2}^{t}\right]
# \end{equation}
# <br>
# \begin{equation}
# \text{s.t.}\quad \sum_{t=1}^{n}\left[(1-z_{t})c_{1}^{t}+z_{t}c_{2}^{t}\right]\leq C_{max}
# \end{equation}
# This does not look like one of the well-known combinatorial optimization problems, but no worries! We are going to give hints on how to solve this problem with quantum computing, step by step.
# # Challenge 4b: Battery revenue optimization with Qiskit knapsack class #
#
# <div class="alert alert-block alert-success">
#
# **Challenge 4b** <br>
# We will optimize the battery schedule using the Qiskit optimization knapsack class with QAOA to maximize the total return with a cost within $C_{max}$, under the following conditions:
# <br>
# - the time window $t = 7$<br>
# - the daily return $\lambda_{1} = [5, 3, 3, 6, 9, 7, 1]$<br>
# - the daily return $\lambda_{2} = [8, 4, 5, 12, 10, 11, 2]$<br>
# - the daily degradation for the battery $c_{1} = [1, 1, 2, 1, 1, 1, 2]$<br>
# - the daily degradation for the battery $c_{2} = [3, 2, 3, 2, 4, 3, 3]$<br>
# - $C_{max} = 16$<br>
# <br>
#
# Your task is to find the arguments `values`, `weights`, and `max_weight` used for the Qiskit optimization Knapsack class, to get a solution in which "0" denotes the choice of market $M_{1}$ and "1" denotes the choice of market $M_{2}$. We will check your answer with another data set of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$.
# <br>
# You can receive a badge for solving all the challenge exercises up to 4b.
#
# </div>
L1 = [5,3,3,6,9,7,1]
L2 = [8,4,5,12,10,11,2]
C1 = [1,1,2,1,1,1,2]
C2 = [3,2,3,2,4,3,3]
C_max = 16
# +
def knapsack_argument(L1, L2, C1, C2, C_max):
    ##############################
    # One possible reduction (assuming, as in the data above, that
    # L2[i] >= L1[i] and C2[i] >= C1[i] for every i): relative to always
    # choosing M1, switching time window i to M2 gains L2[i] - L1[i] in
    # return and costs C2[i] - C1[i] extra cycles, so the problem becomes
    # a 0/1 knapsack with capacity C_max - sum(C1).
    values = [l2 - l1 for l1, l2 in zip(L1, L2)]
    weights = [c2 - c1 for c1, c2 in zip(C1, C2)]
    max_weight = C_max - sum(C1)
    ##############################
    return values, weights, max_weight
values, weights, max_weight = knapsack_argument(L1, L2, C1, C2, C_max)
print(values, weights, max_weight)
from qiskit_optimization.applications import Knapsack

prob = Knapsack(values=values, weights=weights, max_weight=max_weight)
qp = prob.to_quadratic_program()
qp
# -
# Check your answer and submit using the following code
from qc_grader import grade_ex4b
grade_ex4b(knapsack_argument)
# We can solve the problem using QAOA.
# +
# QAOA
import numpy as np
from qiskit import Aer
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit.algorithms import QAOA
from qiskit_optimization.algorithms import MinimumEigenOptimizer

seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(qp)
print('result:', result.x)

item = np.array(result.x)
revenue = 0
for i in range(len(item)):
    if item[i] == 0:
        revenue += L1[i]
    else:
        revenue += L2[i]
print('total revenue:', revenue)
# -
# ## Challenge 4c: Battery revenue optimization with adiabatic quantum computation
#
# <div class="alert alert-block alert-danger">
#
# Here we come to the final exercise! The final challenge is a ranked competition.
#
# </div>
#
# ## Background
#
# QAOA was developed with inspiration from adiabatic quantum computation. In adiabatic quantum computation, based on the quantum adiabatic theorem, the ground state of a given Hamiltonian can ideally be obtained. Therefore, by mapping the optimization problem to this Hamiltonian, it is possible to solve the optimization problem with adiabatic quantum computation.
#
# Although the computational equivalence of adiabatic quantum computation and quantum circuits has been shown, simulating adiabatic quantum computation on quantum circuits involves a large number of gate operations, which is difficult to achieve with current noisy devices. QAOA solves this problem by using a quantum-classical hybrid approach.
#
# In this extra challenge, you will be asked to implement a quantum circuit that solves an optimization problem without classical optimization, based on this adiabatic quantum computation framework. In other words, the circuit you build is expected to give a good approximate solution in a single run.
#
# Instead of using the Qiskit Optimization Module and the Knapsack class, let's try to implement a quantum circuit with as few gate operations as possible, i.e. as small a circuit as possible. By relaxing the constraints of the optimization problem, it is possible to find the optimal solution with a smaller circuit. We recommend that you follow the solution tips.
#
# <div class="alert alert-block alert-success">
#
# **Challenge 4c**<br>
# We will optimize the battery schedule using adiabatic quantum computation to maximize the total return with a cost within $C_{max}$, under the following conditions:
# <br>
# - the time window $t = 11$<br>
# - the daily return $\lambda_{1} = [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6]$<br>
# - the daily return $\lambda_{2} = [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7]$<br>
# - the daily degradation for the battery $c_{1} = [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2]$<br>
# - the daily degradation for the battery $c_{2} = [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4]$<br>
# - $C_{max} = 33$<br>
# - **Note:** $\lambda_{1}[i] < \lambda_{2}[i]$ and $c_{1}[i] < c_{2}[i]$ holds for $i \in \{1,2,...,t\}$
# <br>
#
#
# Let "0" denote the choice of market $M_{1}$ and "1" denote the choice of market $M_{2}$; the optimal solutions are "00111111000" and "10110111000", with return value $67$ and cost $33$.
# Your task is to implement an adiabatic quantum computation circuit that meets the accuracy requirement below.
# We will check your answer with other data sets of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$. We show examples of such inputs below; the inputs used for checking will be similar.
# </div>
instance_examples = [
{
'L1': [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6],
'L2': [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7],
'C1': [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2],
'C2': [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4],
'C_max': 33
},
{
'L1': [4, 2, 2, 3, 5, 3, 6, 3, 8, 3, 2],
'L2': [6, 5, 8, 5, 6, 6, 9, 7, 9, 5, 8],
'C1': [3, 3, 2, 3, 4, 2, 2, 3, 4, 2, 2],
'C2': [4, 4, 3, 5, 5, 3, 4, 5, 5, 3, 5],
'C_max': 38
},
{
'L1': [5, 4, 3, 3, 3, 7, 6, 4, 3, 5, 3],
'L2': [9, 7, 5, 5, 7, 8, 8, 7, 5, 7, 9],
'C1': [2, 2, 4, 2, 3, 4, 2, 2, 2, 2, 2],
'C2': [3, 4, 5, 4, 4, 5, 3, 3, 5, 3, 5],
'C_max': 35
}
]
# ### IMPORTANT: Final exercise submission rules
#
# <div class="alert alert-block alert-danger">
# <p style="font-size:20px">
# For solving this problem:
# </p>
#
# - Do not optimize with classical methods.
# - Create a quantum circuit by filling source code in the functions along the following steps.
# - As for the parameters $p$ and $\alpha$, please **do not change the values from $p=5$ and $\alpha=1$.**
# - Please implement the quantum circuit within 28 qubits.
# - You should submit a function that takes (L1, L2, C1, C2, C_max) as inputs and returns a QuantumCircuit. (You can change the name of the function in your way.)
# - Your circuit should be able to solve different input values. We will validate your circuit with several inputs.
# - Create a circuit that achieves a precision of 0.8 or better at the lowest cost you can. The precision is explained below; the lower the cost, the better.
#
# - Please **do not run jobs in succession** even if you are concerned that your job is not running properly. This can create a long queue and clog the backend. You can check whether your job is running properly at:
# [**https://quantum-computing.ibm.com/jobs**](https://quantum-computing.ibm.com/jobs)
#
# - Judges will check top 10 solutions manually to see if their solutions adhere to the rules. **Please note that your ranking is subject to change after the challenge period as a result of the judging process.**
#
# - Top 10 participants will be recognized and asked to submit a write up on how they solved the exercise.
#
# **Note: In this challenge, please be aware that you should solve the problem with a quantum circuit, otherwise you will not have a rank in the final ranking.**
#
# </div>
# ### Scoring Rule
#
#
# The score of the submitted function is computed in two steps.
#
# 1. In the first step, the precision of the output of your quantum circuit is checked.
# To pass this step, your circuit should output a probability distribution whose **average precision is more than 0.80** over eight instances; four of them are fixed data, while the remaining four are randomly selected from multiple datasets.
# If your circuit cannot satisfy the threshold of **0.8**, you will not obtain a score.
# We will explain how the precision of a probability distribution is calculated when the submitted quantum circuit solves one instance.
#
#     1. The precision evaluates how close the values of measured feasible solutions are to the value of the optimal solution.
#     2. First, if the number of measured feasible solutions is very low, the precision will be 0 (please check **"The number of feasible solutions"** below). <br>Before calculating the precision, the values of solutions are normalized by subtracting the lowest value, so that the solution with the lowest value always has precision 0.
#
# Let $N_s$, $N_f$, and $\lambda_{opt}$ be the total shots (the number of executions), the shots of measured feasible solutions, and the optimal solution value, respectively. Also let $R(x)$ and $C(x)$ be the value and cost of a solution $x\in\{0,1\}^n$, respectively. We normalize the values by subtracting the lowest value of the instance, which equals the sum of $\lambda_{1}$.
# Given a probability distribution, the precision is computed with the following formula:
#
# \begin{equation*}
# \text{precision} = \frac 1 {N_f\cdot (\lambda_{opt}-\mathrm{sum}(\lambda_{1}) )} \sum_{x, \text{$\mathrm{shots}_x$}\in \text{ prob.dist.}} (R(x)-\mathrm{sum}(\lambda_{1})) \cdot \text{$\mathrm{shots}_x$} \cdot 1_{C(x) \leq C_{max}}
# \end{equation*}
#
# Here, $\mathrm{shots}_x$ is the counts of measuring the solution $x$. For example, given a probability distribution {"1000101": 26, "1000110": 35, "1000111": 12, "1001000": 16, "1001001": 11} with shots $N_s = 100$,
# the value and the cost of each solution are listed below.
# | Solution | Value | Cost | Feasible or not | Shot counts |
# |:-------:|:-------:|:-------:|:-------:|:--------------:|
# | 1000101 | 46 | 16 | 1 | 26 |
# | 1000110 | 48 | 17 | 0 | 35 |
# | 1000111 | 45 | 15 | 1 | 12 |
# | 1001000 | 45 | 18 | 0 | 16 |
# | 1001001 | 42 | 16 | 1 | 11 |
#
# Since $C_{max}= 16$, the solutions "1000101", "1000111", and "1001001" are feasible, but the solutions "1000110" and "1001000" are infeasible. So the number of shots of measured feasible solutions is $N_f = 26+12+11=49$, and the lowest value is $ \mathrm{sum}(\lambda_{1}) = 5+3+3+6+9+7+1=34$.
# Therefore, the precision becomes
#
# $$((46-34) \cdot 26 \cdot 1 + (48-34) \cdot 35 \cdot 0 + (45-34) \cdot 12 \cdot 1 + (45-34) \cdot 16 \cdot 0 + (42-34) \cdot 11 \cdot 1) / (49\cdot (50-34)) = 0.68$$
#
# **The number of feasible solutions**: If $N_f$ is less than 20 ($ N_f < 20$), the precision will be calculated as 0.
#
# 2. In the second step, the score of your quantum circuit is evaluated only if your solution passes the first step.
# The score is the sum of the circuit costs of four instances, where the circuit cost is calculated as follows.
#
# 1. Transpile the quantum circuit without gate optimization and decompose the gates into the basis gates of "rz", "sx", "cx".
# 2. Then the score is calculated by
#
# \begin{equation*}
# \text{score} = 50 \cdot depth + 10 \cdot \#(cx) + \#(rz) + \#(sx)
# \end{equation*}
#
# where $\#(gate)$ denotes the number of $gate$ in the circuit.
#
# Your circuit will be executed <span style="color: deepskyblue; ">512 times</span>, which means <span style="color: deepskyblue; ">$N_s = 512$</span> here.<br>
# The smaller the score becomes, the higher you will be ranked.
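# The precision and circuit-cost formulas above can be reproduced in plain Python. The counts, values, and costs below are the worked example from the text; `circuit_score` simply evaluates $50 \cdot depth + 10 \cdot \#(cx) + \#(rz) + \#(sx)$ for given gate counts (the helper names are illustrative, not part of the grader):

```python
# Precision of a measured distribution, following the formula in the text.
def precision(counts, values, costs, c_max, lam1_sum, opt):
    feas = {x: s for x, s in counts.items() if costs[x] <= c_max}
    n_f = sum(feas.values())
    if n_f < 20:                       # too few feasible shots -> precision 0
        return 0.0
    num = sum((values[x] - lam1_sum) * s for x, s in feas.items())
    return num / (n_f * (opt - lam1_sum))

# Circuit cost: 50*depth + 10*#(cx) + #(rz) + #(sx).
def circuit_score(depth, n_cx, n_rz, n_sx):
    return 50 * depth + 10 * n_cx + n_rz + n_sx

counts = {"1000101": 26, "1000110": 35, "1000111": 12, "1001000": 16, "1001001": 11}
values = {"1000101": 46, "1000110": 48, "1000111": 45, "1001000": 45, "1001001": 42}
costs  = {"1000101": 16, "1000110": 17, "1000111": 15, "1001000": 18, "1001001": 16}
print(round(precision(counts, values, costs, 16, 34, 50), 2))  # -> 0.68
```

# This reproduces the worked example: only the three feasible bitstrings contribute, with $N_f = 49$ and normalization constant $34$.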
# ## General Approach
#
# Here we construct the answer following the approach shown in [**[2]**](https://arxiv.org/abs/1908.02210), which solves the "relaxed" formulation of the knapsack problem.
# The relaxed problem can be defined as follows:
# \begin{equation*}
# \text{maximize } f(z)=return(z)+penalty(z)
# \end{equation*}
#
# \begin{equation*}
# \text{where} \quad return(z)=\sum_{t=1}^{n} return_{t}(z) \quad \text{with} \quad return_{t}(z) \equiv\left(1-z_{t}\right) \lambda_{1}^{t}+z_{t} \lambda_{2}^{t}
# \end{equation*}
#
# \begin{equation*}
# \quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}
# 0 & \text{if}\quad cost(z)<C_{\max } \\
# -\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}
# \end{array}\right.
# \end{equation*}
#
# A non-Ising target function to compute a linear penalty is used here.
# This may reduce the depth of the circuit while still achieving high accuracy.
# The basic unit of the relaxed approach consists of the following items.
# 1. Phase Operator $U(C, \gamma_i)$
# 1. return part
# 2. penalty part
# 1. Cost calculation (data encoding)
# 2. Constraint testing (marking the indices whose data exceed $C_{max}$)
# 3. Penalty dephasing (adding penalty to the marked indices)
# 4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register)
# 2. Mixing Operator $U(B, \beta_i)$
#
# This procedure unit $U(B, \beta_i)U(C, \gamma_i)$ is repeated $p$ times in total in the whole relaxed QAOA procedure.
# <br>
# Let's take a look at each function one by one.
# The quantum circuit we are going to make consists of three types of registers: an index register, a data register, and a flag register.
# The index register and data register are used as a QRAM, which contains the cost data for every possible choice of battery.
# Here these registers appear in the function templates named as follows:
# - `qr_index`: a quantum register representing the index (the choice of 0 or 1 in each time window)
# - `qr_data`: a quantum register representing the total cost associated with each index
# - `qr_f`: a quantum register that stores the flag for penalty dephasing
# <br>
#
# We also use the following variables to represent the number of qubits in each register.
# - `index_qubits`: the number of qubits in `qr_index`
# - `data_qubits`: the number of qubits in `qr_data`
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 1**
#
# </div>
#
# ## Phase Operator $U(C, \gamma_i)$
# ### Return Part
# The return part $return (z)$ can be transformed as follows:
#
# \begin{equation*}
# \begin{aligned}
# e^{-i \gamma_i . return(z)}\left|z\right\rangle
# &=\prod_{t=1}^{n} e^{-i \gamma_i return_{t}(z)}\left|z\right\rangle \\
# &=e^{i \theta} \bigotimes_{t=1}^{n} e^{-i \gamma_i z_{t}\left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)}\left|z_{t}\right\rangle \\
# \text{with}\quad \theta &=\sum_{t=1}^{n} \lambda_{1}^{t}\quad \text{constant}
# \end{aligned}
# \end{equation*}
#
# Since the constant phase rotation can be ignored, the return part $return (z)$ can be realized by applying a phase rotation gate $U_1\left(-\gamma_i \left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)\right)$, i.e. $\left|z_{t}\right\rangle \mapsto e^{-i \gamma_i z_{t}\left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)}\left|z_{t}\right\rangle$, on each qubit.
# <br>
# <br>
#
#
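# The derivation above can be sanity-checked with NumPy alone (no Qiskit needed): the tensor product of per-qubit phases $\mathrm{diag}(1, e^{-i\gamma(\lambda_2^t-\lambda_1^t)})$ matches $e^{-i\gamma \cdot return(z)}$ up to the constant global phase $e^{-i\gamma\sum_t \lambda_1^t}$. The small instance here is illustrative.

```python
import numpy as np

gamma, l1, l2 = 0.7, [5, 3], [8, 4]
n = len(l1)

# Tensor product of per-qubit diagonal phases; qubit t is the most
# significant bit of the basis-state index.
m = np.array([[1.0]])
for t in range(n):
    m = np.kron(m, np.diag([1.0, np.exp(-1j * gamma * (l2[t] - l1[t]))]))

def ret(k):  # return(z) for the bitstring z encoded in index k
    z = [(k >> (n - 1 - t)) & 1 for t in range(n)]
    return sum(l2[t] if z[t] else l1[t] for t in range(n))

target = np.diag([np.exp(-1j * gamma * ret(k)) for k in range(2 ** n)])
# Equality holds after restoring the constant phase theta = gamma * sum(l1).
assert np.allclose(np.exp(-1j * gamma * sum(l1)) * m, target)
```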
# Fill in the blank in the following cell to complete the `phase_return` function.
from typing import List, Union
import math
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, assemble
from qiskit.compiler import transpile
from qiskit.circuit import Gate
from qiskit.circuit.library.standard_gates import *
from qiskit.circuit.library import QFT
def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_index = QuantumRegister(index_qubits, "index")
    qc = QuantumCircuit(qr_index)
    ##############################
    ### U_1(gamma * (lambda2 - lambda1)) for each qubit ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" phase return ") if to_gate else qc
# ## Phase Operator $U(C, \gamma_i)$
# ### Penalty Part
#
# In this part, we consider how to add a penalty to the quantum states in the index register whose total cost exceeds the constraint $C_{max}$.
#
# As shown above, this can be realized by the following four steps.
#
# 1. Cost calculation (data encoding)
# 2. Constraint testing (marking the indices whose data value exceed $C_{max}$)
# 3. Penalty dephasing (adding penalty to the marked indices)
# 4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register)
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 2**
#
# </div>
#
# #### Cost calculation (data encoding)
#
# To represent the sum of costs for every choice of answer, we can use a QRAM structure.
# In order to implement the QRAM as a quantum circuit, an addition function is helpful.
# Here we will first prepare a function for adding a constant value.
# <br>
# <br>
# To add a constant value to data we can use
# - Series of full adders
# - Plain adder network [**[3]**](https://arxiv.org/abs/quant-ph/9511018)
# - Ripple carry adder [**[4]**](https://arxiv.org/abs/quant-ph/0410184)
# - QFT adder **[[5](https://arxiv.org/abs/quant-ph/0008033), [6](https://arxiv.org/abs/1411.5949)]**
# - etc...
# <br>
#
# Each adder has its own characteristics. Here, as an example, we will briefly explain how to implement the QFT adder, which is less likely to increase the circuit cost as the number of additions increases.
# 1. QFT on the target quantum register
# 2. Local phase rotation on the target quantum register controlled by quantum register for the constant
# 3. IQFT on the target quantum register
# <br>
# <br>
#
#
#
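# The three QFT-adder steps above can be illustrated with plain NumPy: QFT the basis state $|b\rangle$, multiply the amplitude at index $k$ by $e^{2\pi i\, a k/N}$, then apply the inverse QFT. The result is $|(a+b) \bmod N\rangle$. (This is the Draper-adder idea of references [5, 6]; conventions and endianness differ from Qiskit's, and the dense-matrix simulation is for illustration only.)

```python
import numpy as np

def qft_add_const(b, a, n_qubits):
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    # DFT matrix acting as the QFT on computational basis states.
    F = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)
    state = np.zeros(N, dtype=complex)
    state[b] = 1.0
    state = F @ state                                        # 1. QFT
    state *= np.array([omega ** (a * k) for k in range(N)])  # 2. phase rotations
    state = F.conj().T @ state                               # 3. inverse QFT
    return int(np.argmax(np.abs(state)))

print(qft_add_const(3, 6, 3))  # -> (3 + 6) mod 8 = 1
```

# Because the addition happens purely through phases, several constant additions can share one QFT/IQFT pair, which is why this adder scales well with the number of additions.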
# Fill in the blank in the following cell to complete the `const_adder` and `subroutine_add_const` function.
def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qc = QuantumCircuit(data_qubits)
    ##############################
    ### Phase Rotation ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" [+" + str(const) + "] ") if to_gate else qc

def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_data = QuantumRegister(data_qubits, "data")
    qc = QuantumCircuit(qr_data)
    ##############################
    ### QFT ###
    # Provide your code here
    ##############################

    ##############################
    ### Phase Rotation ###
    # Use `subroutine_add_const`
    ##############################

    ##############################
    ### IQFT ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" [ +" + str(const) + "] ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 3**
#
# </div>
#
# Here we want to store the cost in a QRAM form:
#
# \begin{equation*}
# \sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle
# \end{equation*}
#
# where $t$ is the number of time windows (= the size of the index register), and $x$ is the pattern of battery choices through all the time windows.
#
# Given two lists $C^1 = \left[c_0^1, c_1^1, \cdots\right]$ and $C^2 = \left[c_0^2, c_1^2, \cdots\right]$,
# we can encode the total cost of each choice using gates controlled by each index qubit:
# we add $c_i^1$ to the data register when the $i$-th qubit in the index register is $0$,
# and $c_i^2$ to the data register when the $i$-th qubit in the index register is $1$.
# These operations can be realized by controlled gates.
# If you want to create controlled gate from gate with type `qiskit.circuit.Gate`, the `control()` method might be useful.
# <br>
# <br>
#
#
#
# Fill in the blank in the following cell to complete the `cost_calculation` function.
def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_index = QuantumRegister(index_qubits, "index")
    qr_data = QuantumRegister(data_qubits, "data")
    qc = QuantumCircuit(qr_index, qr_data)
    for i, (val1, val2) in enumerate(zip(list1, list2)):
        ##############################
        ### Add val2 using const_adder controlled by i-th index register (set to 1) ###
        # Provide your code here
        ##############################
        qc.x(qr_index[i])
        ##############################
        ### Add val1 using const_adder controlled by i-th index register (set to 0) ###
        # Provide your code here
        ##############################
        qc.x(qr_index[i])
    return qc.to_gate(label=" Cost Calculation ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 4**
#
# </div>
#
# #### Constraint Testing
#
# After the cost calculation process, we have gained the entangled QRAM with flag qubits set to zero for all indices:
#
# \begin{equation*}
# \sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|0\right\rangle
# \end{equation*}
#
# In order to selectively add penalty to those indices with cost values larger than $C_{max}$, we have to prepare the following state:
#
# \begin{equation*}
# \sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|cost(x)\geq C_{max}\right\rangle
# \end{equation*}
# <br>
# <br>
#
#
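# One common way to realise the comparison $cost(x) \geq C_{max}$ without a dedicated comparator (an assumption about the intended approach, not the only option): add the constant offset $2^c - C_{max}$ to the data register with the adder from the previous step; then bit $c$ of the register is $1$ exactly when $cost(x) \geq C_{max}$, so it can serve as (or be copied to) the flag qubit. The arithmetic behind the trick:

```python
# Classical model of the overflow trick: after adding 2^c - c_max, the
# top bit (bit c) flags costs that reach c_max. Requires c_max <= 2^c and
# cost + offset < 2^(c+1), i.e. a data register wide enough for the shift.
def overflow_flag(cost, c_max, c):
    offset = 2 ** c - c_max
    return ((cost + offset) >> c) & 1

c = 5  # 2^5 = 32 >= C_max = 16
print([overflow_flag(cost, 16, c) for cost in [14, 15, 16, 17, 20]])  # -> [0, 0, 1, 1, 1]
```

# In the circuit, the same shift is undone later by the reinitialization step so the data register returns to holding $cost(x)$.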
# Fill in the blank in the following cell to complete the `constraint_testing` function.
def constraint_testing(data_qubits: int, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_data, qr_f)
    ##############################
    ### Set the flag register for indices with costs larger than C_max ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" Constraint Testing ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 5**
#
# </div>
#
# #### Penalty Dephasing
#
# We also have to add penalty to the indices with total costs larger than $C_{max}$ in the following way.
#
# \begin{equation*}
# \quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}
# 0 & \text{if}\quad cost(z)<C_{\max } \\
# -\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}
# \end{array}\right.
# \end{equation*}
#
# This penalty can be described as quantum operator $e^{i \gamma \alpha\left(cost(z)-C_{\max }\right)}$.
# <br>
# To realize this unitary operator as quantum circuit, we focus on the following property.
#
# \begin{equation*}
# \alpha\left(cost(z)-C_{m a x}\right)=\sum_{j=0}^{k-1} 2^{j} \alpha A_{1}[j]-2^{c} \alpha
# \end{equation*}
#
# where $A_1$ is the quantum register for qram data, $A_1[j]$ is the $j$-th qubit of $A_1$, and $k$ and $c$ are appropriate constants.
#
# Using this property, the penalty rotation part can be realized as rotation gates on each digit of data register of QRAM controlled by the flag register.
# <br>
# <br>
#
#
# Fill in the blank in the following cell to complete the `penalty_dephasing` function.
def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_data, qr_f)
    ##############################
    ### Phase Rotation ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" Penalty Dephasing ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 6**
#
# </div>
#
# #### Reinitialization
#
# The ancillary qubits such as the data register and the flag register should be reinitialized to zero states when the operator $U(C, \gamma_i)$ finishes.
# <br>
# If you want to apply inverse unitary of a `qiskit.circuit.Gate`, the `inverse()` method might be useful.
# <br>
# <br>
#
#
# Fill in the blank in the following cell to complete the `reinitialization` function.
def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_index = QuantumRegister(index_qubits, "index")
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_index, qr_data, qr_f)
    ##############################
    ### Reinitialization Circuit ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" Reinitialization ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 7**
#
# </div>
#
# ### Mixing Operator $U(B, \beta_i)$
#
# Finally, we have to apply the mixing operator $U(B,\beta_i)$ after the phase operator $U(C,\gamma_i)$.
# The mixing operator can be represented as follows:
# \begin{equation*}
# U(B, \beta_i)=\exp (-i \beta_i B)=\prod_{j=1}^{n} \exp \left(-i \beta_i \sigma_{j}^{x}\right)
# \end{equation*}
#
# This operator can be realized by an $R_x(2\beta_i)$ gate on each qubit in the index register.
# <br>
# <br>
#
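# The identification $\exp(-i\beta\sigma^x) = R_x(2\beta)$ can be verified numerically with NumPy alone, by diagonalising $X$ and exponentiating its eigenvalues:

```python
import numpy as np

beta = 0.4
X = np.array([[0, 1], [1, 0]], dtype=complex)

# exp(-i*beta*X) via the eigendecomposition of X.
w, v = np.linalg.eigh(X)
exp_mix = v @ np.diag(np.exp(-1j * beta * w)) @ v.conj().T

# Standard Rx(theta) matrix with theta = 2*beta.
theta = 2 * beta
rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
assert np.allclose(exp_mix, rx)
```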
# Fill in the blank in the following cell to complete the `mixing_operator` function.
def mixing_operator(index_qubits: int, beta: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
    qr_index = QuantumRegister(index_qubits, "index")
    qc = QuantumCircuit(qr_index)
    ##############################
    ### Mixing Operator ###
    # Provide your code here
    ##############################
    return qc.to_gate(label=" Mixing Operator ") if to_gate else qc
# <div class="alert alert-block alert-success">
#
# **Challenge 4c - Step 8**
#
# </div>
#
# Finally, using the functions we have created above, we will make the submission function `solver_function` for the whole relaxed QAOA process.
#
#
# Fill in the TODO blanks in the following cell to complete the answer function.
# - You can copy and paste the functions you have made above.
# - You may also adjust the number of qubits and their arrangement if needed.
def solver_function(L1: list, L2: list, C1: list, C2: list, C_max: int) -> QuantumCircuit:
    # the number of qubits representing answers
    index_qubits = len(L1)

    # the maximum possible total cost
    max_c = sum([max(l0, l1) for l0, l1 in zip(C1, C2)])

    # the number of qubits representing data values can be defined using the maximum possible total cost as follows:
    data_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2
    ### Phase Operator ###
    # (signatures match the standalone cells above; `pass` placeholders keep the template syntactically valid)
    # return part
    def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def constraint_testing(data_qubits: int, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    # penalty part
    def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass

    ### Mixing Operator ###
    def mixing_operator(index_qubits: int, beta: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass
    qr_index = QuantumRegister(index_qubits, "index")  # index register
    qr_data = QuantumRegister(data_qubits, "data")  # data register
    qr_f = QuantumRegister(1, "flag")  # flag register
    cr_index = ClassicalRegister(index_qubits, "c_index")  # classical register storing the measurement result of index register
    qc = QuantumCircuit(qr_index, qr_data, qr_f, cr_index)

    ### initialize the index register with uniform superposition state ###
    qc.h(qr_index)

    ### DO NOT CHANGE THE CODE BELOW
    p = 5
    alpha = 1
    for i in range(p):
        ### set fixed parameters for each round ###
        beta = 1 - (i + 1) / p
        gamma = (i + 1) / p

        ### return part ###
        qc.append(phase_return(index_qubits, gamma, L1, L2), qr_index)

        ### step 1: cost calculation ###
        qc.append(cost_calculation(index_qubits, data_qubits, C1, C2), qr_index[:] + qr_data[:])

        ### step 2: Constraint testing ###
        qc.append(constraint_testing(data_qubits, C_max), qr_data[:] + qr_f[:])

        ### step 3: penalty dephasing ###
        qc.append(penalty_dephasing(data_qubits, alpha, gamma), qr_data[:] + qr_f[:])

        ### step 4: reinitialization ###
        qc.append(reinitialization(index_qubits, data_qubits, C1, C2, C_max), qr_index[:] + qr_data[:] + qr_f[:])

        ### mixing operator ###
        qc.append(mixing_operator(index_qubits, beta), qr_index)

    ### measure the index ###
    ### since the default measurement outcome is shown in big endian, it is necessary to reverse the classical bits in order to unify the endian ###
    qc.measure(qr_index, cr_index[::-1])

    return qc
# The validation uses eight input instances: four fixed and four randomly selected.
# Your circuit's output should pass the precision threshold of 0.80 on these instances before being scored.
# +
# Execute your circuit with following prepare_ex4c() function.
# The prepare_ex4c() function works like the execute() function with only QuantumCircuit as an argument.
from qc_grader import prepare_ex4c
job = prepare_ex4c(solver_function)
result = job.result()
# -
# Check your answer and submit using the following code
from qc_grader import grade_ex4c
grade_ex4c(job)
# ### References
# 1. <NAME> and <NAME> and <NAME> (2014). A Quantum Approximate Optimization Algorithm. (https://arxiv.org/abs/1411.4028)
# 2. <NAME> & <NAME> (2019). Knapsack Problem variants of QAOA for battery revenue optimisation. (https://arxiv.org/abs/1908.02210)
# 3. <NAME>, <NAME>, <NAME> (1995). Quantum Networks for Elementary Arithmetic Operations. (https://arxiv.org/abs/quant-ph/9511018)
# 4. <NAME>, <NAME>, <NAME>, <NAME> (2004). A new quantum ripple-carry addition circuit. (https://arxiv.org/abs/quant-ph/0410184)
# 5. <NAME> (2000). Addition on a Quantum Computer (https://arxiv.org/abs/quant-ph/0008033)
# 6. <NAME>, <NAME> (2014). Quantum arithmetic with the Quantum Fourier Transform. (https://arxiv.org/abs/1411.5949)
# ## Additional information
#
# **Created by:** <NAME>, <NAME>, <NAME>, <NAME>
#
# **Version:** 1.0.1
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
# -
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe.describe()
my_feature = california_housing_dataframe[["population"]]
feature_columns = [tf.feature_column.numeric_column("population")]
targets = california_housing_dataframe["median_house_value"]
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
# Pass the (clipped) optimizer, not the feature DataFrame, to the estimator.
linear_regressor = tf.estimator.LinearRegressor(feature_columns=feature_columns, optimizer=my_optimizer)
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Constructs an input function for training a linear regression model of one feature.

    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or False. Whether to shuffle the data.
      num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely

    Returns:
      Tuple of (features, labels) for next data batch
    """
    # Convert pandas data into a dict of np arrays.
    features = {key: np.array(value) for key, value in dict(features).items()}

    # Construct a dataset, and configure batching/repeating.
    ds = Dataset.from_tensor_slices((features, targets))  # warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)

    # Shuffle the data, if specified.
    if shuffle:
        ds = ds.shuffle(buffer_size=10000)

    # Return the next batch of data.
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels
linear_regressor.train(
input_fn = lambda:my_input_fn(my_feature, targets),
steps=100
)
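# The `tf.data` input pipeline above can feel opaque. As a rough, framework-free sketch of the same batching and shuffling idea (the five-example dataset and helper below are made up for illustration, not part of the exercise):

```python
import numpy as np

# Made-up miniature dataset: five examples of one feature and their labels.
features = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
labels = features * 2.0

def iterate_batches(features, labels, batch_size, shuffle=True, seed=0):
    """Yield (features, labels) mini-batches, optionally shuffled first."""
    idx = np.arange(len(features))
    if shuffle:
        np.random.default_rng(seed).shuffle(idx)
    for start in range(0, len(idx), batch_size):
        sel = idx[start:start + batch_size]
        yield features[sel], labels[sel]

batches = list(iterate_batches(features, labels, batch_size=2))
```

# Five examples at `batch_size=2` give three batches (sizes 2, 2, 1); `num_epochs=None` in the real input function corresponds to looping over such an iterator indefinitely.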
# +
prediction_input_fn =lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)
# Call predict() on the linear_regressor to make predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
# Format predictions as a NumPy array, so we can calculate error metrics.
predictions = np.array([item['predictions'][0] for item in predictions])
# -
def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"):
"""Trains a linear regression model of one feature.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]]
my_label = "median_house_value"
targets = california_housing_dataframe[my_label]
# Create feature columns
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create input functions
training_input_fn = lambda:my_input_fn(my_feature_data, targets, batch_size=batch_size)
prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
  print("Training model...")
  print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
    print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
  print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Output a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
  print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
train_model(
learning_rate=0.0002,
steps=500,
batch_size=20,
input_feature="population"
)
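# `train_model` above wraps the same loop that plain mini-batch gradient descent performs. As a framework-free sketch of that loop (the synthetic data and learning rate below are made up for illustration, not the California housing setup):

```python
import numpy as np

# Fit y = w*x + b by mini-batch gradient descent on synthetic data,
# tracking RMSE once per "period", as train_model does with the estimator.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.5, size=200)

w, b, lr = 0.0, 0.0, 0.01
for period in range(10):
    for _ in range(50):  # steps per period
        sel = rng.integers(0, len(x), size=20)  # draw one mini-batch
        err = (w * x[sel] + b) - y[sel]
        w -= lr * np.mean(err * x[sel])  # gradient of 0.5 * mean squared error
        b -= lr * np.mean(err)
    rmse = np.sqrt(np.mean(((w * x + b) - y) ** 2))
```

# The recovered slope approaches 3 and the RMSE approaches the noise floor — the same convergence behavior that the period-by-period RMSE printout tracks.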
| tensorflow_study/tensorflow_demo_population.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Large-area metalens
#
# <!--  -->
# <!-- <img src="./img/metalens_drawing.png" alt="drawing" width="1000"/> -->
# ## Introduction
#
# See on [github](https://github.com/flexcompute/tidy3d-notebooks/blob/main/Metalens.ipynb), run on [colab](https://colab.research.google.com/github/flexcompute/tidy3d-notebooks/blob/main/Metalens.ipynb), or just follow along with the output below.
#
# Here we use Tidy3D to simulate a very large dielectric metalens. We base this example on the paper by Khorasaninejad et al. titled [_Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging_](https://science.sciencemag.org/content/352/6290/1190), published in Science. In this paper, a 2-dimensional array of dielectric structures is used as a lens to focus transmitted light to a single position directly above the device.
#
# Typically, such devices are modeled by simulating each dielectric unit cell individually to compute a phase and amplitude transmittance for each cell. While this approach gives an approximation of the overall device performance, simulating the *entire* device as a whole captures the full physics. However, a simulation of this scale requires several hours or days with a conventional CPU-based FDTD solver. With Tidy3D, we are able to complete the entire simulation in about 1 minute!
#
# ## Setup
#
# We first perform basic imports of the packages needed.
# +
# get the most recent version of tidy3d
# !pip install -q --upgrade tidy3d
# make sure notebook plots inline
# %matplotlib inline
# standard python imports
import numpy as np
from numpy import random
import matplotlib.pyplot as plt
# tidy3D import
import tidy3d as td
from tidy3d import web
# -
# ## Define Simulation Parameters
#
# We now set the basic parameters that define the metalens. The following image (taken from the original paper) defines the variables describing the unit cell of the metalens. The angle of each cell (θ) is chosen for maximum focusing effect. Note that microns are the default spatial unit in Tidy3D.
#
# <!--  -->
# <img src="img/metalens_diagram.png" alt="diagram" width="700"/>
# +
# 1 nanometer in units of microns (for conversion)
nm = 1e-3
# free space central wavelength
wavelength = 600 * nm
# desired numerical aperture
NA = 0.5
# shape parameters of metalens unit cell (um) (refer to image above and see paper for details)
W = 85 * nm
L = 410 * nm
H = 600 * nm
S = 430 * nm
# space between bottom PML and substrate (-z)
space_below_sub = 1 * wavelength
# thickness of substrate
thickness_sub = 100 * nm
# side length of entire metalens (um)
side_length = 10
# Number of unit cells in each x and y direction (NxN grid)
N = int(side_length / S)
print(f'for diameter of {side_length:.1f} um, have {N} cells per side')
print(f'full metalens has area of {side_length**2:.1f} um^2 and {N*N} total cells')
# Define material properties at 600 nm
n_TiO2 = 2.40
n_SiO2 = 1.46
air = td.Medium(epsilon=1.0)
SiO2 = td.Medium(epsilon=n_SiO2**2)
TiO2 = td.Medium(epsilon=n_TiO2**2)
# resolution control
grids_per_wavelength = 25
# Number of PML layers to use along z direction
npml = 10
# -
# ## Process Geometry
#
# Next we need to do conversions to get the problem parameters ready to define the simulation.
# +
# grid size (um)
dl = wavelength / grids_per_wavelength
# using the wavelength in microns, one can use td.C_0 (um/s) to get frequency in Hz
# wavelength_meters = wavelength * meters
f0 = td.C_0 / wavelength
# Define PML layers, for this we have no PML in x, y but `npml` cells in z
pml_layers = [0, 0, npml]
# Compute the domain size in x, y (note: round down from side_length)
length_xy = N * S
# focal length given diameter and numerical aperture
f = length_xy / 2 / NA * np.sqrt(1 - NA**2)
# Function describing the theoretical best angle of each box at position (x,y). see paper for details
def theta(x, y):
return np.pi / wavelength * (f - np.sqrt(x ** 2 + y ** 2 + f ** 2))
# total domain size in z: (space -> substrate -> unit cell -> 1.7 focal lengths)
length_z = space_below_sub + thickness_sub + H + 1.7 * f
# construct simulation size array
sim_size = np.array([length_xy, length_xy, length_z])
# -
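# The focal-length formula above can be sanity-checked numerically: `f` is chosen so that the half-angle subtended by the lens edge, seen from the focus, satisfies sin(θ_max) = NA, and the target phase profile is zero at the lens center. The values below simply restate the parameters defined above:

```python
import numpy as np

wavelength = 0.6        # um
NA = 0.5
length_xy = 23 * 0.43   # um: N = 23 cells of period S = 0.43 um
f = length_xy / 2 / NA * np.sqrt(1 - NA**2)

def theta(x, y):
    return np.pi / wavelength * (f - np.sqrt(x**2 + y**2 + f**2))

# half-angle subtended by the lens edge, seen from the focal point
half_angle = np.arctan((length_xy / 2) / f)
```

# At the center (x, y) = (0, 0), no extra phase is needed, and sin(half_angle) recovers the chosen numerical aperture exactly.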
# ## Create Metalens Geometry
#
# Now we can automatically generate a large metalens structure using these parameters.
#
# We will first create the substrate as a [td.Box](../generated/tidy3d.Box.rst)
#
# Then, we will loop through the x and y coordinates of the lens and create each unit cell as a [td.PolySlab](../generated/tidy3d.PolySlab.rst)
# +
# define substrate
substrate = td.Box(
center=[0, 0, -length_z/2 + space_below_sub + thickness_sub / 2.0],
size=[length_xy, length_xy, thickness_sub],
material=SiO2)
# create a running list of structures
geometry = [substrate]
# define coordinates of each unit cell
centers_x = S * np.arange(N) - length_xy / 2.0 + S / 2.0
centers_y = S * np.arange(N) - length_xy / 2.0 + S / 2.0
center_z = -length_z/2 + space_below_sub + thickness_sub + H / 2.0
# convenience function to make an angled box at each x,y location using polyslab.
# For more details see, https://simulation.cloud/docs/html/generated/tidy3d.PolySlab.html
def angled_box(x, y, angle):
""" make a box of size (L, W, H) centered at `(x, y)` at `angle` from x axis"""
# x, y vertices of box of size (L, W) centered at the origin
vertices_origin = np.array([[+L/2, +W/2],
[-L/2, +W/2],
[-L/2, -W/2],
[+L/2, -W/2]])
# 2x2 rotation matrix angle `angle` with respect to x axis
rotation_matrix = np.array([[+np.cos(angle), -np.sin(angle)],
[+np.sin(angle), +np.cos(angle)]])
# rotate the origin vertices by this angle
vertices_rotated = vertices_origin @ rotation_matrix
# shift the rotated vertices to be centered at (x, y)
vertices = vertices_rotated + np.array([x, y])
# create a tidy3D PolySlab with these rotated and shifted vertices and thickness `H`
return td.PolySlab(
vertices=vertices,
z_cent=center_z,
z_size=H,
material=TiO2
)
# loop through the coordinates and add all unit cells to geometry list
for i, x in enumerate(centers_x):
for j, y in enumerate(centers_y):
angle = theta(x, y)
geometry.append(angled_box(x, y, angle))
# -
# ## Define Sources
#
# Now we define the incident fields. We simply use a normally incident plane wave with Gaussian time dependence centered at our central frequency. For more details, see the [plane wave source documentation](../generated/tidy3d.PlaneWave.rst) and the [Gaussian source documentation](../generated/tidy3d.GaussianPulse.rst)
# +
# Bandwidth in Hz
fwidth = f0 / 10.0
# time dependence of source
gaussian = td.GaussianPulse(f0, fwidth, phase=0)
source = td.PlaneWave(
source_time=gaussian,
injection_axis='+z',
position=-length_z/2 + space_below_sub / 10.0, # just next to PML
polarization='x')
# Simulation run time past the source decay (around t=2*offset/fwidth)
run_time = 40 / fwidth
# -
# ## Define Monitors
#
# Now we define the monitors that measure field output from the FDTD simulation. For simplicity, we measure the fields only at the central frequency. We'll get the fields at the following locations:
#
# - The y=0 cross section
# - The x=0 cross section
# - The z=f cross section at the focal plane
# - The central axis, along x=y=0
#
# For more details on defining monitors, see the [frequency-domain monitor documentation](../generated/tidy3d.FreqMonitor.rst).
# +
# get fields along x=y=0 axis
monitor_center = td.FreqMonitor(
center=[0., 0., 0],
size=[0, 0, length_z],
freqs=[f0],
store=['E'],
name='central_line')
# get the fields at a few cross-sectional planes
monitor_xz = td.FreqMonitor(
center=[0., 0., 0.],
size=[length_xy, 0., length_z],
freqs=[f0],
store=['E'],
name='xz_plane')
monitor_yz = td.FreqMonitor(
center=[0., 0., 0.],
size=[0., length_xy, length_z],
freqs=[f0],
store=['E'],
name='yz_plane')
monitor_xy = td.FreqMonitor(
center=[0., 0., center_z + H/2 + f],
size=[length_xy, length_xy, 0],
freqs=[f0],
store=['E'],
name='focal_plane')
# put them into a single list
monitors=[monitor_center, monitor_xz, monitor_yz, monitor_xy]
# -
# ## Create Simulation
#
# Now we can put everything together and define a simulation class to be run
#
#
sim = td.Simulation(
size=sim_size,
mesh_step=[dl, dl, dl],
structures=geometry,
sources=[source],
monitors=monitors,
run_time=run_time,
pml_layers=pml_layers)
# ## Visualize Geometry
#
# Let's take a look and make sure everything is defined properly
#
# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 6))
# Visualize the relative permittivity on three cross-sectional planes
sim.viz_eps_2D(normal='x', position=0.1, ax=ax1);
sim.viz_eps_2D(normal='y', position=0.1, ax=ax2);
sim.viz_eps_2D(normal='z', position=-length_z/2 + space_below_sub + thickness_sub + H / 2, ax=ax3);
# -
# ## Run Simulation
#
# Now we can run the simulation over time and measure the results to plot
#
# +
# Run simulation
project = web.new_project(sim.export(), task_name='metalens')
web.monitor_project(project['taskId'])
# download and load the results
print('Downloading results')
web.download_results(project['taskId'], target_folder='output')
sim.load_results('output/monitor_data.hdf5')
# print stats from the logs
with open("output/tidy3d.log") as file:
print(file.read())
# -
# ----
#
# As we can see from the logs, the **total time to solve this problem (not including data transfer and pre/post-processing) was about 1 minute!**
#
# For reference, the same problem run on a **single CPU core with a conventional FDTD solver is projected to take 11 hours!**
# ## Visualize Fields
#
# Let's see the results of the simulation as captured by our field monitors.
#
# First, we look at the field intensity along the axis of the lens
# +
# split monitors by those that have area and those that are on a line
line_monitor, *area_monitors = monitors
# get the data from the line monitor and plot it
data = sim.data(line_monitor)
E = data['E']
zs = data['zmesh']
I = np.squeeze(np.sum(np.square(np.abs(E)), axis=0))
plt.plot(zs / wavelength, I / np.max(np.abs(I)))
plt.plot(np.ones(2) * (center_z + H/2 + f) / wavelength, np.array([0, 0.9]), 'k--', alpha=0.5)  # expected focal plane
plt.xlabel('position along z axis ($\lambda_0$)')
plt.ylabel('field intensity')
plt.title('$|E(z)|^2$ along axis of metalens')
plt.show()
# -
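# The intensity computed above sums `|E|²` over the three field components. A made-up field array of shape `(3, nz)` shows the same reduction:

```python
import numpy as np

# Fake line-monitor data: components (Ex, Ey, Ez) at nz points along z.
nz = 5
E = np.stack([np.ones(nz), 1j * np.ones(nz), np.zeros(nz)])

# Same reduction as above: intensity = |Ex|^2 + |Ey|^2 + |Ez|^2 at each z.
I = np.squeeze(np.sum(np.square(np.abs(E)), axis=0))
```

# Here |Ex|² = |Ey|² = 1 and |Ez|² = 0, so the intensity is 2 at every point along the line.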
# We can now inspect the field patterns on the area monitors using Tidy3D's built-in field visualization methods. For more details see the documentation of [viz_field_2D](../generated/tidy3d.Simulation.viz_field_2D.rst)
fig, axes = plt.subplots(1, 3, figsize=(14, 4))
for ax, monitor, comp in zip(axes, area_monitors, ('x', 'y', 'x')):
im = sim.viz_field_2D(monitor, eps_alpha=0.99, comp=comp, val='abs', cbar=True, ax=ax)
# Or you can use your own plotting functions using the raw data, similar to how we dealt with the line monitor.
# +
fig, axes = plt.subplots(1, len(area_monitors), figsize=(14, 4))
for ax, monitor in zip(axes, area_monitors):
data = sim.data(monitor)
E = data['E']
I = np.squeeze(np.sum(np.square(np.abs(E)), axis=0))
I = np.flipud(I.T)
im = ax.imshow(I / np.max(np.abs(I)), cmap='BuPu')
plt.colorbar(im, ax=ax)
ax.set_title(monitor.name)
plt.show()
# -
| Metalens.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_latest_p36
# language: python
# name: conda_pytorch_latest_p36
# ---
# # Explaining Image Classification with SageMaker Clarify
# 1. [Overview](#Overview)
# 1. [Train and Deploy Image Classifier](#Train-and-Deploy-Image-Classifier)
# 1. [Permissions and environment variables](#Permissions-and-environment-variables)
# 2. [Fine-tuning the Image classification model](#Fine-tuning-the-Image-classification-model)
# 3. [Training](#Training)
# 4. [Input data specification](#Input-data-specification)
# 5. [Start the training](#Start-the-training)
# 6. [Deploy SageMaker model](#Deploy-SageMaker-model)
# 7. [List of object categories](#List-of-object-categories)
# 1. [Amazon SageMaker Clarify](#Amazon-SageMaker-Clarify)
# 1. [Test Images](#Test-Images)
# 2. [Set up config objects](#Set-up-config-objects)
# 3. [SageMaker Clarify Processor](#SageMaker-Clarify-Processor)
# 4. [Reading Results](#Reading-Results)
# 1. [Clean Up](#Clean-Up)
#
# ## Overview
# Amazon SageMaker Clarify provides you the ability to gain insight into your computer vision models. For each input image, Clarify generates a heat map that highlights feature importance and helps you understand the model's behavior. For computer vision, Clarify supports both Image Classification and Object Detection use cases.
# This notebook can be run inside SageMaker Studio with the **conda_pytorch_latest_py36** kernel and inside a SageMaker notebook instance with the **Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized)** kernel.
# This sample notebook walks you through:
# 1. Key terms and concepts needed to understand SageMaker Clarify.
# 1. Explaining the importance of the image features (super pixels) for Image Classification model.
# 1. Accessing the reports and output images.
#
# In doing so, the notebook will first train and deploy an [Image Classification](https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning-highlevel.ipynb) model with Sagemaker Estimator using [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) [1], then use SageMaker Clarify to run explainability on a subset of test images.
# >[1] <NAME>, <NAME>, P. The Caltech 256. Caltech Technical Report.
# Let's start by installing the latest version of the SageMaker Python SDK, boto, and AWS CLI.
# ! pip install sagemaker botocore boto3 awscli --upgrade
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Train and Deploy Image Classifier
# Let's first train and deploy an Image Classification model to SageMaker.
#
#
# ### Permissions and environment variables
# Here we set up the linkage and authentication to AWS services. There are three parts to this:
#
# * The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
# * The S3 bucket that you want to use for training and model data
# * The Amazon sagemaker image classification docker image which need not be changed
# + pycharm={"name": "#%%\n"}
# %%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
# download the files
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/caltech-256-60-train.rec > ./caltech-256-60-train.rec
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/caltech-256-60-val.rec > ./caltech-256-60-val.rec
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
# + pycharm={"name": "#%%\n"}
from sagemaker import image_uris
training_image = image_uris.retrieve(
"image-classification", sess.boto_region_name, version="latest"
)
print(training_image)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Fine-tuning the Image classification model
#
# The Caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category.
#
# The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.apache.org/versions/1.8.0/api/python/docs/api/mxnet/recordio/index.html) and the other is a [lst format](https://mxnet.apache.org/versions/1.6/api/r/docs/api/im2rec.html). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](https://mxnet.apache.org/versions/1.0.0/faq/finetune.html#prepare-data).
# + pycharm={"name": "#%%\n"}
# Four channels: train, validation, train_lst, and validation_lst
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Training
# Now that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. There are two kinds of parameters that need to be set for training: parameters of the training job itself and algorithm-specific hyperparameters. Following are the parameters for the training job:
# * **instance_count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings.
# * **instance_type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for such training jobs.
# * **output_path**: This is the S3 folder in which the training output is stored.
# + pycharm={"name": "#%%\n"}
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic_estimator = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
volume_size=50,
max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
# + [markdown] pycharm={"name": "#%% md\n"}
# Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
#
# * **num_layers**: The number of layers (depth) for the network. We use 18 for this training but other values such as 50, 152 can also be used.
# * **use_pretrained_model**: Set to 1 to use pretrained model for transfer learning.
# * **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.
# * **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
# * **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.
# * **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
# * **epochs**: Number of training epochs.
# * **learning_rate**: Learning rate for training.
# * **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode.
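# As a quick arithmetic check on `num_classes` and `num_training_samples` (assuming the standard 60-images-per-category training split of this dataset):

```python
# Caltech-256: 256 object categories plus 1 clutter class; the
# "caltech-256-60" split keeps 60 training images per category.
num_classes = 256 + 1
images_per_class = 60
num_training_samples = num_classes * images_per_class
```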
# + pycharm={"name": "#%%\n"}
ic_estimator.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Input data specification
# Set the data type and channels used for training.
# + pycharm={"name": "#%%\n"}
train_data = sagemaker.inputs.TrainingInput(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.inputs.TrainingInput(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Start the training
# Start training by calling the fit method in the estimator.
# + pycharm={"name": "#%%\n"}
ic_estimator.fit(inputs=data_channels, logs=True)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Deploy SageMaker model
# Once trained, we use the estimator to deploy a model to SageMaker. This model will be used by Clarify to deploy endpoints and run inference on images.
# + pycharm={"name": "#%%\n"}
from time import gmtime, strftime
timestamp_suffix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
model_name = "DEMO-clarify-image-classification-model-{}".format(timestamp_suffix)
model = ic_estimator.create_model(name=model_name)
container_def = model.prepare_container_def()
sess.create_model(model_name, role, container_def)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### List of object categories
# + pycharm={"name": "#%%\n"}
with open("caltech_256_object_categories.txt", "r+") as object_categories_file:
object_categories = [category.rstrip("\n") for category in object_categories_file.readlines()]
# Let's list top 10 entries from the object_categories list
object_categories[:10]
# -
# ## Amazon SageMaker Clarify
# Now that we have your image classification endpoint all set up, let's get started with SageMaker Clarify!
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Test Images
# We need some test images to explain predictions made by the Image Classification model using Clarify. Let's grab some test images from the Caltech-256 dataset and upload them to some S3 bucket.
# + pycharm={"name": "#%%\n"}
prefix = "sagemaker/DEMO-sagemaker-clarify-cv"
file_name_map = {
"167.pyramid/167_0002.jpg": "pyramid.jpg",
"038.chimp/038_0013.jpg": "chimp.jpg",
"124.killer-whale/124_0013.jpg": "killer-whale.jpg",
"170.rainbow/170_0001.jpg": "rainbow.jpg",
}
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/256_ObjectCategories/167.pyramid/167_0002.jpg > ./pyramid.jpg
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/256_ObjectCategories/038.chimp/038_0013.jpg > ./chimp.jpg
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/256_ObjectCategories/124.killer-whale/124_0013.jpg > ./killer-whale.jpg
# !curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/caltech-256/256_ObjectCategories/170.rainbow/170_0001.jpg > ./rainbow.jpg
for file_name in file_name_map:
s3_client.upload_file(
file_name_map[file_name], output_bucket, f"{prefix}/{file_name_map[file_name]}"
)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Set up config objects
# Now we set up the config objects required for running the Clarify job:
# * **explainability_data_config**: Config object related to configurations of the input and output dataset.
# * **model_config**: Config object related to a model and its endpoint to be created.
# * **content_type**: Specifies the type of input expected by the model.
# * **predictions_config**: Config object to extract a predicted label from the model output.
# * **label_headers**: This is the list of all the classes on which the model was trained.
# * **image_config**: Config object for image data type
# * **model_type**: Specifies the type of CV model (IMAGE_CLASSIFICATION | OBJECT_DETECTION)
# * **num_segments**: Clarify uses scikit-image's [SLIC](https://scikit-image.org/docs/dev/api/skimage.segmentation.html?highlight=slic#skimage.segmentation.slic) method for image segmentation to generate features/superpixels. num_segments specifies the approximate number of segments to be generated.
# * **segment_compactness**: Balances color proximity and space proximity. Higher values give more weight to space proximity, making superpixel shapes more square/cubic. We recommend exploring possible values on a log scale, e.g., 0.01, 0.1, 1, 10, 100, before refining around a chosen value.
# * **shap_config**: Config object for kernel SHAP parameters
# * **num_samples**: total number of feature coalitions to be tested by Kernel SHAP.
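# To make `num_segments` and `num_samples` concrete, here is a rough NumPy sketch (an illustration of the idea, not Clarify's implementation) of how Kernel SHAP perturbs an image: each sample is a random coalition of segments, and segments outside the coalition are replaced by a baseline value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "image" with a crude 4-segment map (quadrants stand in for
# the SLIC superpixels that Clarify would compute).
image = rng.random((8, 8))
segment_ids = np.zeros((8, 8), dtype=int)
segment_ids[:4, 4:] = 1
segment_ids[4:, :4] = 2
segment_ids[4:, 4:] = 3

num_samples = 10
baseline = image.mean()

perturbed = []
for _ in range(num_samples):
    keep = rng.integers(0, 2, size=4)  # random coalition over the 4 segments
    masked = np.where(keep[segment_ids] == 1, image, baseline)
    perturbed.append(masked)
```

# Each perturbed image is scored by the model endpoint; Kernel SHAP then regresses those scores against the coalitions to estimate per-segment importance.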
# + pycharm={"name": "#%%\n"}
from sagemaker import clarify
s3_data_input_path = "s3://{}/{}/".format(output_bucket, prefix)
clarify_output_prefix = f"{prefix}/cv_analysis_result"
analysis_result_path = "s3://{}/{}".format(output_bucket, clarify_output_prefix)
explainability_data_config = clarify.DataConfig(
s3_data_input_path=s3_data_input_path,
s3_output_path=analysis_result_path,
dataset_type="application/x-image",
)
model_config = clarify.ModelConfig(
model_name=model_name, instance_type="ml.m5.xlarge", instance_count=1, content_type="image/jpeg"
)
predictions_config = clarify.ModelPredictedLabelConfig(label_headers=object_categories)
image_config = clarify.ImageConfig(
model_type="IMAGE_CLASSIFICATION", num_segments=20, segment_compactness=5
)
shap_config = clarify.SHAPConfig(num_samples=500, image_config=image_config)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### SageMaker Clarify Processor
# Let's get the execution role for running SageMakerClarifyProcessor.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
import os
account_id = os.getenv("AWS_ACCOUNT_ID", "<your-account-id>")
sagemaker_iam_role = "<AmazonSageMaker-ExecutionRole>"
# Fetch the IAM role to initialize the sagemaker processing job
try:
role = sagemaker.get_execution_role()
except ValueError as e:
print(e)
role = f"arn:aws:iam::{account_id}:role/{sagemaker_iam_role}"
clarify_processor = clarify.SageMakerClarifyProcessor(
role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=sess
)
# + [markdown] pycharm={"name": "#%% md\n"}
# Finally, we run explainability on the clarify processor.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
clarify_processor.run_explainability(
data_config=explainability_data_config,
model_config=model_config,
explainability_config=shap_config,
model_scores=predictions_config,
)
# -
# ### Reading Results
# Let's download all the result images along with the PDF report.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# %%time
output_objects = s3_client.list_objects(Bucket=output_bucket, Prefix=clarify_output_prefix)
result_images = []
for file_obj in output_objects["Contents"]:
file_name = os.path.basename(file_obj["Key"])
if os.path.splitext(file_name)[1] == ".jpeg":
result_images.append(file_name)
print(f"Downloading s3://{output_bucket}/{file_obj['Key']} ...")
s3_client.download_file(output_bucket, file_obj["Key"], file_name)
# + [markdown] pycharm={"name": "#%% md\n"}
# Let's visualize and understand the results.
# Each result image shows the segmented input image and the corresponding heat map.
#
# * **Segments**: Highlights the image segments.
# * **Shades of Blue**: Represents positive Shapley values indicating that the corresponding feature increases the overall confidence score.
# * **Shades of Red**: Represents negative Shapley values indicating that the corresponding feature decreases the overall confidence score.
#
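# The sign convention can be verified on a toy model by computing exact Shapley values (illustrative only; Clarify estimates them with Kernel SHAP). In this hypothetical two-feature scorer, feature "a" raises the confidence and feature "b" lowers it:

```python
from itertools import permutations

def score(present):
    # Hypothetical model: feature "a" adds +3 to the confidence
    # score when present, feature "b" subtracts 1.
    return 3.0 * ("a" in present) - 1.0 * ("b" in present)

features = ["a", "b"]
orders = list(permutations(features))
phi = {f: 0.0 for f in features}
for order in orders:
    included = set()
    for f in order:
        before = score(included)
        included.add(f)
        # average marginal contribution over all feature orderings
        phi[f] += (score(included) - before) / len(orders)
```

# phi["a"] comes out positive (shaded blue in the heat map) and phi["b"] negative (shaded red).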
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
from IPython.display import Image
for img in result_images:
display(Image(img))
# -
# ## Clean Up
# Finally, don't forget to clean up the resources we set up and used for this demo!
# +
# %%time
# Delete the SageMaker model
model.delete_model()
| sagemaker-clarify/computer_vision/image_classification/explainability_image_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Rendering OpenAI Gym Envs on Binder and Google Colab
# > Notes on solving a mildly tedious (but important) problem
#
# - branch: 2020-04-16-remote-rendering-gym-envs
# - badges: true
# - image: images/gym-colab-binder.png
# - comments: true
# - author: <NAME>
# - categories: [openai, binder, google-colab]
# Getting [OpenAI](https://openai.com/) [Gym](https://gym.openai.com/docs/) environments to render properly in remote environments such as [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) and [Binder](https://mybinder.org/) turned out to be more challenging than I expected. In this post I lay out my solution in the hopes that I might save others time and effort to work it out independently.
# # Google Colab Preamble
#
# If you wish to use Google Colab, then this section is for you! Otherwise, you can skip to the next section for the Binder Preamble.
# ## Install X11 system dependencies
#
# Install necessary [X11](https://en.wikipedia.org/wiki/X_Window_System) dependencies, in particular [Xvfb](https://www.x.org/releases/X11R7.7/doc/man/man1/Xvfb.1.xhtml), which is an X server that can run on machines with no display hardware and no physical input devices.
# !apt-get install -y xvfb x11-utils
# ## Install additional Python dependencies
#
# Now that you have installed Xvfb, you need to install a Python wrapper
# [`pyvirtualdisplay`](https://github.com/ponty/PyVirtualDisplay) in order to interact with Xvfb
# virtual displays from within Python. Next you need to install the Python bindings for
# [OpenGL](https://www.opengl.org/): [PyOpenGL](http://pyopengl.sourceforge.net/) and
# [PyOpenGL-accelerate](https://pypi.org/project/PyOpenGL-accelerate/). The former provides the actual
# Python bindings; the latter is an optional set of C (Cython) extensions that accelerate
# common slow points in PyOpenGL 3.x.
# !pip install pyvirtualdisplay==0.2.* PyOpenGL==3.1.* PyOpenGL-accelerate==3.1.*
# ## Install OpenAI Gym
#
# Next you need to install the OpenAI Gym package. Note that depending on which Gym environment you are interested in working with you may need to add additional dependencies. Since I am going to simulate the LunarLander-v2 environment in my demo below I need to install the `box2d` extra which enables Gym environments that depend on the [Box2D](https://box2d.org/) physics simulator.
# !pip install gym[box2d]==0.17.*
# ## Create a virtual display in the background
#
# Next you need to create a virtual display in the background which the Gym Envs can connect to for rendering purposes. You can check that there is no display at present by confirming that the value of the [`DISPLAY`](https://askubuntu.com/questions/432255/what-is-the-display-environment-variable) environment variable has not yet been set.
# !echo $DISPLAY
# The code in the cell below creates a virtual display in the background that your Gym Envs can connect to for rendering. You can adjust the `size` of the virtual buffer as you like but you must set `visible=False` when working with Xvfb.
#
# **This code only needs to be run once per session to start the display.**
# +
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, # use False with Xvfb
size=(1400, 900))
_ = _display.start()
# -
# After running the cell above you can echo out the value of the `DISPLAY` environment variable again to confirm that you now have a display running.
# !echo $DISPLAY
# For convenience I have gathered the above steps into two cells that you can copy and paste into the top of your Google Colab notebooks.
# + language="bash"
#
# # install required system dependencies
# apt-get install -y xvfb x11-utils
#
# # install required python dependencies (might need to install additional gym extras depending)
# pip install gym[box2d]==0.17.* pyvirtualdisplay==0.2.* PyOpenGL==3.1.* PyOpenGL-accelerate==3.1.*
# +
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, # use False with Xvfb
size=(1400, 900))
_ = _display.start()
# -
# # Binder Preamble
#
# If you wish to use Binder, then this section is for you! Although there really isn't much of anything that needs doing.
# ## No additional installation required!
#
# Unlike Google Colab, with Binder you can bake all the required dependencies (including the X11 system dependencies!) into the Docker image on which the Binder instance is based using Binder config files. These config files can either live in the root directory of your Git repo or in a `binder` sub-directory as is the case here. If you are interested in learning more about Binder, then check out the documentation for [BinderHub](https://binderhub.readthedocs.io/en/latest/) which is the underlying technology behind the Binder project.
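As a rough illustration of what those Binder config files tend to contain for a setup like this one (the contents below are illustrative assumptions, not a copy of this repo's files — the actual contents are printed by the cells that follow):

```
# binder/apt.txt -- system dependencies installed with apt
xvfb
x11-utils

# binder/environment.yml -- conda environment definition
name: remote-rendering-gym-envs
channels:
  - conda-forge
dependencies:
  - python=3.7
  - pip
  - pip:
      - -r requirements.txt

# binder/requirements.txt -- python deps not available via conda channels
gym[box2d]==0.17.*
pyvirtualdisplay==0.2.*
PyOpenGL==3.1.*
PyOpenGL-accelerate==3.1.*
```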
# config file for system dependencies
# !cat ../binder/apt.txt
# config file describing the conda environment
# !cat ../binder/environment.yml
# config file containing python deps not available via conda channels
# !cat ../binder/requirements.txt
# ## Create a virtual display in the background
#
# Next you need to create a virtual display in the background which the Gym Envs can connect to for rendering purposes. You can check that there is no display at present by confirming that the value of the [`DISPLAY`](https://askubuntu.com/questions/432255/what-is-the-display-environment-variable) environment variable has not yet been set.
# !echo $DISPLAY
# The code in the cell below creates a virtual display in the background that your Gym Envs can connect to for rendering. You can adjust the `size` of the virtual buffer as you like but you must set `visible=False` when working with Xvfb.
#
# **This code only needs to be run once per session to start the display.**
# +
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, # use False with Xvfb
size=(1400, 900))
_display.start()
# -
# After running the cell above you can echo out the value of the `DISPLAY` environment variable again to confirm that you now have a display running.
# !echo $DISPLAY
# # Demo
#
# Just to prove that the above setup works as advertised I will run a short simulation. First I will define an `Agent` that chooses an action randomly from the set of possible actions and then define a function that can be used to create such agents.
# +
import typing
import numpy as np
# represent states as arrays and actions as ints
State = np.ndarray
Action = int
# agent is just a function!
Agent = typing.Callable[[State], Action]
def uniform_random_policy(state: State,
number_actions: int,
random_state: np.random.RandomState) -> Action:
"""Select an action at random from the set of feasible actions."""
feasible_actions = np.arange(number_actions)
probs = np.ones(number_actions) / number_actions
action = random_state.choice(feasible_actions, p=probs)
return action
def make_random_agent(number_actions: int,
random_state: np.random.RandomState = None) -> Agent:
"""Factory for creating an Agent."""
_random_state = np.random.RandomState() if random_state is None else random_state
return lambda state: uniform_random_policy(state, number_actions, _random_state)
# -
# In the cell below I wrap up the code to simulate a single episode of an OpenAI Gym environment. Note that the implementation assumes that the provided environment supports `rgb_array` rendering (which not all Gym environments support!).
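Before calling the simulation it can help to make that assumption explicit. Here is a minimal sketch of such a check, assuming the Gym 0.17 convention of listing render modes under `metadata['render.modes']` (later Gym releases renamed this key, so verify it against your installed version):

```python
def supports_rgb_array(env) -> bool:
    """Return True if the environment advertises 'rgb_array' rendering."""
    # Gym 0.17-era environments declare their render modes in
    # metadata['render.modes']; this key name is an assumption to check
    # against the Gym version you actually have installed.
    return 'rgb_array' in getattr(env, 'metadata', {}).get('render.modes', [])
```

Calling `supports_rgb_array(env)` before simulating turns a confusing render crash into an explicit, easy-to-debug check.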
# +
import gym
import matplotlib.pyplot as plt
from IPython import display
def simulate(agent: Agent, env: gym.Env) -> None:
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
done = False
while not done:
action = agent(state)
img.set_data(env.render(mode='rgb_array'))
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
state, reward, done, _ = env.step(action)
env.close()
# -
# Finally you can setup your desired environment...
lunar_lander_v2 = gym.make('LunarLander-v2')
_ = lunar_lander_v2.seed(42)
# ...and run a simulation!
random_agent = make_random_agent(lunar_lander_v2.action_space.n, random_state=None)
simulate(random_agent, lunar_lander_v2)
# Currently there appears to be a non-trivial amount of flickering during the simulation. Not entirely sure what is causing this undesirable behavior. If you have any idea how to improve this, please leave a comment below. I will be sure to update this post accordingly if I find a good fix.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="cb6d7af4"
# ## Scraping scrollers
#
# Infinite scroll sites are designed for the mobile age. Links are hard to tap with a finger on a small device, but a simple swipe easily scrolls the page down to reveal more data. That can make scraping an infinite scroll page difficult. We’ll learn to find the actual location of the data buried in the scrolls.
#
# Here are a couple of examples of scrolling sites:
#
# - <a href="https://www.difc.ae/public-register/">DIFC Public Register</a>
#
# - <a href="https://www.quintoandar.com.br/alugar/imovel/sao-paulo-sp-brasil">Rentals in São Paulo</a>
#
# Let's target the data source we'll need to scrape this <a href="https://quotes.toscrape.com/scroll">mockup site</a>.
#
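Once the browser's network tab reveals the JSON endpoint feeding the scroller, scraping reduces to parsing one page of that payload at a time. Here is a hedged sketch of the parsing step using a hardcoded sample payload — the field names (`quotes`, `author`, `text`, `has_next`) mirror what the mockup site appears to return, but verify them in dev tools before relying on them:

```python
import json

# Trimmed sample of the kind of payload an infinite-scroll endpoint returns.
# Field names here are assumptions to confirm against the real response.
sample_payload = json.loads("""
{
  "has_next": true,
  "page": 1,
  "quotes": [
    {"author": {"name": "Albert Einstein"}, "text": "Imagination is everything."},
    {"author": {"name": "Jane Austen"}, "text": "Obstinate, headstrong girl!"}
  ]
}
""")

def parse_quotes(payload):
    """Flatten one page of the JSON payload into simple dicts."""
    return [
        {"author": quote["author"]["name"], "text": quote["text"]}
        for quote in payload["quotes"]
    ]

rows = parse_quotes(sample_payload)
```

In a real scrape you would request successive page numbers with `requests`, sleeping a random interval between calls, until the payload reports `has_next` as false.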
# + [markdown] id="6tJdvD9FQWXf"
# ### we want ice cream!
# + id="e739b2ad"
# !pip install icecream
# + id="a17111ee"
## Let's import all the libraries we are likely to need
import requests ## to capture content from web pages
from bs4 import BeautifulSoup ## to parse our scraped data
import pandas as pd ## to easily export our data to dataframes/CSVs
from icecream import ic ## easily debug
from pprint import pprint as pp ## to prettify our printouts
import itertools ## to flatten lists
from random import randrange ## to pick random integers (e.g. for polite delays)
import time # for timer
import json ## to work with JSON data
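Two of the imports above are easy to forget the shape of, so here is a tiny hedged illustration (the lists and numbers are arbitrary): flattening per-page result lists with `itertools.chain.from_iterable`, and picking a random pause length with `randrange` to pass to `time.sleep` between requests.

```python
import itertools
from random import randrange

# One sub-list of scraped items per page
pages = [["q1", "q2"], ["q3"], ["q4", "q5"]]

# chain.from_iterable joins the per-page lists into one flat list
all_items = list(itertools.chain.from_iterable(pages))

# A random delay between 2 and 5 seconds (inclusive) keeps a scraper polite
delay = randrange(2, 6)
```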
# + [markdown] id="11ac7588"
# ### Figure out how to scrape a single page
# + id="98d1cc6c"
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/JauraSeerat/Wonder_Vision_Face_Detection/blob/master/Face_Detection_(Abhishek%20Tandon)/FaceDet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="1BvwjLZ9y_Uy" colab_type="text"
# A Colab notebook to test various face detection models.
#
# Models tested --
# 1. OpenCV HAAR Classifier
# 2. OpenCV LBP Classifier
# 3. dlib HOG Classifier
# 4. OpenCV DNN Classifier - A ResNet10 based model.
# 5. dlib DNN Classifier
# 6. cvlib -- A tensorflow based face detection library
# 7. Face Evolve
#
# The first row in the results shows each model's performance on the original images.
# The second row shows performance on the same images after first resizing them to (224 x 224) resolution.
#
# Two models perform notably well across the test images --
#
# a. OpenCV DNN Classifier -- Produces nearly accurate bounding boxes when images are first resized to (224 x 224)
#
# b. Face Evolve Classifier -- Based on the MTCNN network, this also produces nearly accurate bounding boxes and additionally returns face landmark information.
#
#
# Note --
# 1. Some of these images are taken from google for testing purposes. No copyright held by me.
# 2. Scroll on the output block of the cells to see the results on various images
#
# Resources --
# 1. Face Evolve -- https://github.com/ZhaoJ9014/face.evoLVe.PyTorch
# 2. OpenCV -- https://github.com/opencv/opencv
# 3. Blog for implementing OpenCV HAAR and LBP Classifiers https://www.superdatascience.com/blogs/opencv-face-detection
# 4. Blog for implementing OpenCV DNN, Dlib HOG and Dlib CNN Classifiers https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/
# 5. Cvlib documentation http://cvlib.net
# 6. MTCNN Paper: https://arxiv.org/pdf/1604.02878.pdf
#
# + id="zvLiy0Jofwqj" colab_type="code" colab={}
from google.colab import drive
# This will prompt for authorization.
drive.mount('/content/drive')
# + id="j40ATWZ7htIn" colab_type="code" colab={}
import warnings
warnings.filterwarnings('ignore')
# + id="jWPFK_yQg1cn" colab_type="code" colab={}
# Make directory to hold all the files required for various models
# !mkdir ./face_det
# !ls ./face_det/
# + id="dH7m3szIdymR" colab_type="code" colab={}
#Get all the files related to various models and store them in the face_det directory created in the last step
#getting OpenCV DNN model config file
# !wget https://raw.githubusercontent.com/opencv/opencv/master/samples/dnn/face_detector/opencv_face_detector.pbtxt -O ./face_det/opencv_face_detector.pbtxt
#getting OpenCV DNN model weights
# !wget https://raw.githubusercontent.com/opencv/opencv_3rdparty/dnn_samples_face_detector_20180220_uint8/opencv_face_detector_uint8.pb -O ./face_det/opencv_face_detector_uint8.pb
#getting dlib DNN face detector weights
# !wget https://github.com/davisking/dlib-models/raw/master/mmod_human_face_detector.dat.bz2 -O ./face_det/mmod_human_face_detector.dat.bz2
#unzipping the weight file
# !bzip2 -d ./face_det/mmod_human_face_detector.dat.bz2
#getting OpenCV HAAR Cascade Classifier model file
# !wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades_cuda/haarcascade_frontalface_alt.xml -O ./face_det/haarcascade_frontalface_alt.xml
#getting OpenCV LBP Cascade Classifier model file
# !wget https://raw.githubusercontent.com/opencv/opencv/master/data/lbpcascades/lbpcascade_frontalface_improved.xml -O ./face_det/lbpcascade_frontalface_improved.xml
#install cvlib -- a tensorflow based face detection library
# !pip install cvlib
# + id="lS-wO_PWfVl3" colab_type="code" colab={}
#Listing the face_det directory contents
# !ls ./face_det/
# + id="37hHrmoHRgiy" colab_type="code" colab={}
#OpenCV Haar Classifier
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder -- Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
#Loading the HAAR Face Cascade Classifier model file
haar_face_cascade = cv2.CascadeClassifier('./face_det/haarcascade_frontalface_alt.xml')
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
img_gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
faces = haar_face_cascade.detectMultiScale(img_gray)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[0] + face[2],face[1] + face[3]),(255,0,0),4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
img_gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
faces = haar_face_cascade.detectMultiScale(img_gray)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[0] + face[2],face[1] + face[3]),(255,0,0),4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="53caBmMuY3je" colab_type="code" colab={}
#OpenCV LBP Classifier
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder -- Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
#Loading OpenCV LBP Classifier Model file
lbp_face_cascade = cv2.CascadeClassifier('./face_det/lbpcascade_frontalface_improved.xml')
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
img_gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
faces = lbp_face_cascade.detectMultiScale(img_gray)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[0] + face[2],face[1] + face[3]),(255,0,0),4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
img_gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
faces = lbp_face_cascade.detectMultiScale(img_gray)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[0] + face[2],face[1] + face[3]),(255,0,0),4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="jJr9QYKovu-_" colab_type="code" colab={}
#dlib HOG detector
import dlib
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder -- Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
#Creating an instance of dlib HOG Face Detector
hogFaceDetector = dlib.get_frontal_face_detector()
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
detections = hogFaceDetector(image,1)
for face_rect in detections:
x1 = face_rect.left()
y1 = face_rect.top()
x2 = face_rect.right()
y2 = face_rect.bottom()
cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0),4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
detections = hogFaceDetector(image,1)
for face_rect in detections:
x1 = face_rect.left()
y1 = face_rect.top()
x2 = face_rect.right()
y2 = face_rect.bottom()
cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0),4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="vmaEB43KhPfs" colab_type="code" colab={}
#OpenCV DNN Face Detection
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder - Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
#Loading weights and config file of OpenCV DNN Classifier
model_weights = "./face_det/opencv_face_detector_uint8.pb"
model_config = "./face_det/opencv_face_detector.pbtxt"
net = cv2.dnn.readNetFromTensorflow(model_weights,model_config)
conf_thresh = 0.5
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
image_blob = cv2.dnn.blobFromImage(image,1.0,(300,300),(104.0, 177.0, 123.0))
net.setInput(image_blob)
detections = net.forward()
for i in range(detections.shape[2]):
confidence = float(detections[:,:,i,2])
if confidence > conf_thresh:
x1 = int(detections[:,:,i,3] * W)
y1 = int(detections[:,:,i,4] * H)
x2 = int(detections[:,:,i,5] * W)
y2 = int(detections[:,:,i,6] * H)
y = y1 - 10 if (y1 - 10) > 10 else y1 + 10
cv2.rectangle(image, (x1, y1), (x2,y2),(255, 0,0), 4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
image_blob = cv2.dnn.blobFromImage(image,1.0,(300,300),(104.0, 177.0, 123.0))
net.setInput(image_blob)
detections = net.forward()
for i in range(detections.shape[2]):
confidence = float(detections[:,:,i,2])
if confidence > conf_thresh:
x1 = int(detections[:,:,i,3] * W)
y1 = int(detections[:,:,i,4] * H)
x2 = int(detections[:,:,i,5] * W)
y2 = int(detections[:,:,i,6] * H)
cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0), 4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
#Performs really well on resized images
# + id="48pFRVXC0VG3" colab_type="code" colab={}
#dlib CNN face detector
import dlib
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder - Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
#Creating an instance of Dlib CNN Face Detector Model
cnnFaceDetector = dlib.cnn_face_detection_model_v1("./face_det/mmod_human_face_detector.dat")
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
detections = cnnFaceDetector(image,1)
for face_rect in detections:
x1 = face_rect.rect.left()
y1 = face_rect.rect.top()
x2 = face_rect.rect.right()
y2 = face_rect.rect.bottom()
cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0),4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
detections = cnnFaceDetector(image,1)
for face_rect in detections:
x1 = face_rect.rect.left()
y1 = face_rect.rect.top()
x2 = face_rect.rect.right()
y2 = face_rect.rect.bottom()
cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0),4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="OBsMyJ103TZn" colab_type="code" colab={}
#cvlib Face Detector -- a library based on tensorflow
import cvlib as cv
import cv2
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder - Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['day7.png','day9.png','day11.png','day13.png','face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
time_normal = 0
image_list_normal = list()
for img_name in img_list:
t1 = time.time()
image = cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB)
H,W = image.shape[:2]
faces, confidences = cv.detect_face(image)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[2],face[3]),(255,0,0),4)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(image)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = cv2.resize(cv2.cvtColor(cv2.imread(img_path + img_name),cv2.COLOR_BGR2RGB),(224,224))
H,W = image.shape[:2]
faces, confidences = cv.detect_face(image)
for face in faces:
cv2.rectangle(image, (face[0], face[1]), (face[2],face[3]),(255,0,0),4)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(image)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="uaje3f5jXps8" colab_type="code" colab={}
#Getting the files required to run the Face Evolve model based on MTCNN for Face Detection
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/align_trans.py -O ./align_trans.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/box_utils.py -O ./box_utils.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/detector.py -O ./detector.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/face_align.py -O ./face_align.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/face_resize.py -O ./face_resize.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/first_stage.py -O ./first_stage.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/get_nets.py -O ./get_nets.py
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/matlab_cp2tform.py -O ./matlab_cp2tform.py
#Getting the weights of three models Onet, Pnet and Rnet
# !wget https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/raw/master/align/onet.npy -O ./onet.npy
# !wget https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/raw/master/align/pnet.npy -O ./pnet.npy
# !wget https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/raw/master/align/rnet.npy -O ./rnet.npy
# !wget https://raw.githubusercontent.com/ZhaoJ9014/face.evoLVe.PyTorch/master/align/visualization_utils.py -O ./visualization_utils.py
# + id="gvcZl_CUXElZ" colab_type="code" colab={}
#Face Evolve -- Pytorch
from PIL import Image
from detector import detect_faces
from visualization_utils import show_results
import matplotlib.pyplot as plt
import time
#Change the img_path as per your folder - Parent folder containing all the images listed in img_list
img_path = "/content/drive/My Drive//Colab//privateAI//Face//data//"
#List of all images on which the classifier needs to be tested
img_list = ['face1.jpg','face2.jpg','face3.jpeg','face4.jpg','face5.jpeg','face6.jpg','face7.jpeg','face8.jpg','face9.jpeg','face10.jpeg','face11.jpg']
num_imgs = len(img_list)
#Creating a grid to visualize results of size 2 X num_imgs
gridsize = (2,num_imgs)
fig = plt.figure(figsize=(80,10))
plt.subplots_adjust(wspace=0.5,hspace=0.3)
time_normal = 0
image_list_normal = list()
for k,img_name in enumerate(img_list):
t1 = time.time()
image = Image.open(img_path + img_name)
#H,W = image.shape[:2]
bounding_boxes, landmarks = detect_faces(image)
img_res = show_results(image, bounding_boxes, landmarks)
t2 = time.time()
time_normal = time_normal + (t2-t1)
image_list_normal.append(img_res)
time_normal = round(time_normal/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (0, i))
ax.grid(False)
ax.imshow(image_list_normal[i])
time_resized = 0
image_list_resized = list()
for img_name in img_list:
t1 = time.time()
image = Image.open(img_path + img_name)
image = image.resize((224,224), Image.ANTIALIAS)
#H,W = image.shape[:2]
bounding_boxes, landmarks = detect_faces(image)
img_res = show_results(image, bounding_boxes, landmarks)
t2 = time.time()
time_resized = time_resized + (t2-t1)
image_list_resized.append(img_res)
time_resized = round(time_resized/num_imgs,3)
for i in range(num_imgs):
ax = plt.subplot2grid(gridsize, (1, i))
ax.grid(False)
ax.imshow(image_list_resized[i])
print ("avg. time normal = %r (secs) , avg. time resized = %r (secs) " %(time_normal,time_resized))
# + id="nUeMyo35E8Na" colab_type="code" colab={}
#Testing OpenCV DNN Classifier on a video -- Video available at https://www.videvo.net/video/woman-carrying-basket-on-head/7525/
import cv2
import imutils
import numpy as np
# Note: `net` (the OpenCV DNN face detector) and `conf_thresh` are assumed to be
# defined in an earlier cell of this notebook.
#Load the face video as per your directory path
vs = cv2.VideoCapture('/content/drive/My Drive//Colab//privateAI//Face//data//facevid.mp4')
writer = None
(W, H) = (None, None)
while True:
    (grabbed, frame) = vs.read()
    if not grabbed:
        break
    #image = cv2.resize(frame,(224,224))
    image = frame
    H,W = image.shape[:2]
    image_blob = cv2.dnn.blobFromImage(image,1.0,(300,300),(104.0, 177.0, 123.0))
    net.setInput(image_blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        confidence = float(detections[:,:,i,2])
        if confidence > conf_thresh:
            x1 = int(detections[:,:,i,3] * W)
            y1 = int(detections[:,:,i,4] * H)
            x2 = int(detections[:,:,i,5] * W)
            y2 = int(detections[:,:,i,6] * H)
            cv2.rectangle(image, (x1, y1), (x2,y2),(255,0,0), 4)
    if writer is None:
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        #saving the frame of the video at the desired path
        writer = cv2.VideoWriter('/content/drive/My Drive//Colab//privateAI//Face//data//outvid1.avi', fourcc, 30,(image.shape[1], image.shape[0]), True)
    writer.write(image)
# Release the writer only if at least one frame was written
if writer is not None:
    writer.release()
vs.release()
print ('done')
| Wonder_Vision_Face_Detection/Face_Detection_(Abhishek Tandon)/FaceDet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mgcpy.independence_tests.mgc.mgc import MGC
# +
'''
Linear data. Generated using simulations from R MGC:
data = mgc.sims.linear(50, 1, 0.1)  # (samples, features, noise)
'''
X = np.array([-0.51912931, -0.24699695, -0.27129802, 0.08988214, -0.93133797, -0.96727363, 0.28306118, 0.92460929, 0.44272519, 0.33997158, 0.07680145, 0.49784049, -0.83435064, -0.95344318, 0.18082179, 0.56857578, 0.11608616, 0.64271549, -0.66037328, -0.44755648, -0.54511420, 0.86948706, 0.67568800, -0.33662677, 0.66859276, -0.66739094, -0.58791119, -0.46675664, -0.91455421, 0.09667132, -0.77858571, -0.42093049, -0.84167713, 0.52854907, -0.47128527, -0.09418058, 0.80176046, 0.31252840, -0.31872562, -0.54070292, -0.94153721, -0.86128020, -0.80080271, -0.05168897, 0.44781059, 0.45950319, 0.20463332, 0.55464372, 0.93062876, 0.80908862]).reshape(-1, 1)
Y = np.array([-0.39896448, -0.19591009, -0.14459404, 0.07341945, -0.83465098, -1.03312750, 0.17985988, 0.84980555, 0.52840996, 0.26309697, 0.05037735, 0.49614189, -0.81993808, -0.79206330, 0.13625602, 0.58084302, 0.07114134, 0.51828978, -0.61492621, -0.60282539, -0.40258272, 0.93494555, 0.69088195, -0.46674890, 0.67400056, -0.63810416, -0.50916804, -0.47341975, -0.87257035, -0.02095436, -0.85325834, -0.52114691, -0.87077426, 0.48017861, -0.35084970, -0.13109888, 0.67058056, 0.31713241, -0.68035453, -0.47372769, -0.79708787, -0.66887376, -0.83039591, 0.09555898, 0.33861048, 0.48388096, 0.32049636, 0.54457146, 0.97025822, 0.76087465]).reshape(-1, 1)
print(X.shape)
print(Y.shape)
# -
plt.scatter(X, Y)
plt.show()
'''
Results from R:
> result = mgc.sample(data$X, data$Y)
> result$statMGC
[1] 0.9674665
> result$optimalScale
$x
[1] 50
$y
[1] 50
'''
mgc_statistic = 0.9674665
optimal_scale = [50, 50]
# +
mgc = MGC(X, Y, None)
mgc_statistic_actual, independence_test_metadata = mgc.test_statistic()
print("MGC stats from Python:")
print("mgc_test_statistic:", mgc_statistic_actual)
print("optimal_scale:", independence_test_metadata["optimal_scale"])
assert np.allclose(mgc_statistic, mgc_statistic_actual)
assert np.allclose(optimal_scale, independence_test_metadata["optimal_scale"])
# +
'''
Non-linear (spiral) data. Generated using simulations from R MGC:
data = mgc.sims.spiral(50, 1, 0.1)  # (samples, features, noise)
'''
X = np.array([0.01421334, -2.74656298, 0.77314408, 1.16537193, -2.90519144, -4.22264528, -0.89891050, 3.84718386, -1.56702656, 0.96299667, -4.04908889, 1.23211283, 0.16186615, -2.80339400, -4.52963579, 1.50649684, -2.19419173, -0.17158617, -4.99565629, 4.00691706, -2.84808717, 2.21568866, -0.42059251, -2.72486898, 1.93019558, 0.17617120, 3.81677009, -0.22529392, 1.26280821, 3.86391161, 2.02327480, 1.99630927, 1.82478940, -4.44260943, -0.98931338, -2.94800338, -3.01352134, -2.25180778, -0.92833990, -0.76114888, -0.62939441, -0.98264787, -3.82053926, 1.04258668, -2.15039133, -1.03893152, 1.93208834, 1.82361258, 0.99874067, 0.24155370]).reshape(-1, 1)
Y = np.array([0.038971270, 0.833857761, -1.521444608, 4.304165019, -1.020602336, 2.312007878, 0.155703456, 1.380484921, 4.254963435, -1.278597445, 2.701115622, 2.021871587, 0.456433211, 0.829949638, 1.885978521, -0.945712267, -2.407083726, -3.454198852, -0.014086007, 0.145404797, 0.670799013, 3.672285741, 0.602226433, 3.785518506, -0.350152540, -1.588788123, 1.520267990, -1.409208061, -1.398139886, 1.131541762, 0.356061760, 0.002080002, 1.017373457, 2.080078016, -0.042361065, -0.973710472, -0.228119833, 1.550782393, 0.203302174, -3.383666925, 0.576408069, 0.209023397, 2.955761299, -1.346304143, 4.215744704, -0.540418117, 0.931602249, -0.445806493, 2.179037240, -1.662197212]).reshape(-1, 1)
# X = np.array([-1.94991422, 2.01793503, 2.02135856, -4.76973254, -1.04045136, 3.62213298, 2.02344143, -0.22133186, 3.98299539, 0.66580652, -0.24892631, 2.01964689, -2.97581697, 2.82926344, -2.50334141, -2.11630230, 3.18571181, 3.91006534, -0.22079471, 0.16869683, -1.04671684, -2.54328145, 0.68913210, -4.15471419, 1.79381544, 3.79174064, 2.68077651, -0.89844733, -0.93375334, -4.87132871, 3.89156021, 0.98022341, 0.17447712, -2.61819021, 0.17833797, 3.99576158, 0.52171377, -2.77714987, -0.54302444, -1.56384122, 1.09424681, -0.39163951, -3.65442216, 3.15281371, 0.15473363, 2.01153606, -1.02473795, 3.00558863, 1.50315554, -0.05125508]).reshape(-1, 1)
# Y = np.array([-2.6017111, 0.5691600, 0.2136069, 1.1416266, -0.1335919, 2.0291697, 0.1815304, 0.3729499, -0.1515422, 2.3587602, 4.4952152, 0.1235281, 0.1723651, -2.5462003, -2.1091046, -2.2855511, 2.6731396, 1.1488033, -3.5944704, 2.3914336, -0.3166776, -1.9556538, -3.7036444, 2.4278484, 1.2072623, 1.5497797, -2.7300803, 0.2535017, 0.2489763, 0.8197172, -0.6682824, -1.3701825, 0.5100829, 1.1830588, 0.2070552, 0.6000385, 4.4385086, 0.8184786, 0.4215651, -2.9205822, 2.1344058, -1.3949349, 2.9661085, -2.0557557, 0.3908240, 0.6770227, -0.6539511, 3.0709319, 1.7056418, 0.4568098]).reshape(-1, 1)
print(X.shape)
print(Y.shape)
# -
plt.scatter(X, Y)
plt.show()
'''
Results from R:
> result = mgc.sample(data$X, data$Y)
> result$statMGC
[1] 0.1935397
> result$optimalScale
$x
[1] 2
$y
[1] 7
'''
mgc_statistic = 0.2313806
optimal_scale = [3, 4]
# +
mgc = MGC(X, Y, None)
mgc_statistic_actual, independence_test_metadata = mgc.test_statistic()
print("MGC stats from Python:")
print("mgc_test_statistic:", mgc_statistic_actual)
print("optimal_scale:", independence_test_metadata["optimal_scale"])
assert np.allclose(mgc_statistic, mgc_statistic_actual)
assert np.allclose(optimal_scale, independence_test_metadata["optimal_scale"])
# -
| week 7/compare_R.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href='http://www.holoviews.org'><img src="../../assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
# <div style="float:right;"><h2>Exercise 1: Making Data Visualizable</h2></div>
# +
import numpy as np
import pandas as pd
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
# -
# ## Example 1
macro = pd.read_csv('../../data/macro.csv')
us_macro = macro[macro.country=='United States']
# Have a look at the ``us_macro`` dataframe, using ``us_macro.head()``:
# Now plot the 'unem' column against the 'year' column using a ``Curve`` element:
#
# <b><a href="#hint1" data-toggle="collapse">Hint</a></b>
#
# <div id="hint1" class="collapse">
# When declaring an element list the independent dimensions (or <code>kdims</code>) first and the dependent dimensions (or <code>vdims</code>) second.
# </div>
# <b><a href="#solution1" data-toggle="collapse">Solution</a></b>
#
# <div id="solution1" class="collapse">
# <br>
# <code>hv.Curve(us_macro, 'year', 'unem')</code>
# </div>
#
# Finally overlay the <code>Curve</code> with a <code>Scatter</code> element plotting the same data. Then adjust the <code>width</code> of the plot and increase the <code>size</code> of the <code>Scatter</code>.
#
# <b><a href="#hint2" data-toggle="collapse">Hint</a></b>
#
# <div id="hint2" class="collapse">
# Plot options like <code>width</code> are specified using square brackets (<code>[]</code>), while style options like <code>size</code> are specified using parentheses (<code>()</code>)
#
# </div>
# <b><a href="#solution2" data-toggle="collapse">Solution</a></b>
#
# <div id="solution2" class="collapse">
# <br>
# <code>hv.Curve(us_macro,'year','unem') * hv.Scatter(us_macro,'year','unem').opts(width=600, size=10)</code>
# <br>
# <br>
# And you can make the ranges a bit nicer if you want by also applying <code>.redim.range(unem=(2, 10))</code>.
# </div>
# ## Example 2
dates = pd.date_range('2017-01-01', '2017-11-14')
values = np.cumsum(np.random.randn(len(dates)))
# Display the ``values`` for the given ``dates`` using a ``Curve`` element, and name the x-axis and y-axis.
#
# <b><a href="#hint3" data-toggle="collapse">Hint</a></b>
#
# <div id="hint3" class="collapse">
# When declaring an Element with individual columns as arrays, lists or Series wrap them in a tuple, e.g. (column1, column2). Or you can make a dataframe out of it first and pass that.
# </div>
# <b><a href="#solution3" data-toggle="collapse">Solution</a></b>
#
# <div id="solution3" class="collapse">
# <br>
# <code>hv.Curve((dates, values), 'Date', 'Value').relabel('Timeseries')</code>
# <br><br>
# or
# <br><br>
# <code>hv.Curve(pd.DataFrame(dict(Date=dates,Value=values)))</code>
# </div>
# ### Example 3
# Now display a ``Curve`` of the ``'trade'`` column by ``'year'`` next to a ``Histogram`` of the 'trade' values. Then change the ``color`` and ``width`` of the ``Curve`` and the ``fill_color`` of the Histogram.
#
# <b><a href="#hint4" data-toggle="collapse">Hint</a></b>
#
# <div id="hint4" class="collapse">
# To declare the Histogram use ``np.histogram`` to compute the histogram values and declare the ``kdims``.
# </div>
# <b><a href="#solution4" data-toggle="collapse">Solution</a></b>
#
# <div id="solution4" class="collapse">
# <br>
# <code>layout = hv.Curve(us_macro, 'year', 'trade') + hv.Histogram(np.histogram(us_macro.trade), kdims='Trade')
# layout.opts(
# opts.Curve(width=600, color='indianred'),
# opts.Histogram(fill_color='darkgreen'))</code>
# <br><br>
# or, for an adjoint histogram:
# <br><br>
# <code>hv.Curve(us_macro, 'year', 'trade').hist()</code>
# </div>
#
| examples/tutorial/exercises/Exercise-1-making-data-visualizable.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# language: python
# name: python397jvsc74a57bd0340e956ee656efd8fdfb480dc033c937d9b626f8b21073bd1b5aa2a469586ea6
# ---
# +
################################
# LIBRARIES
################################
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
from src.util import *
from src.frames import *
from src.stats import *
from src.plots import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
# Matplotlib settings
plt.style.use("seaborn")
params = {
"font.family": "STIXGeneral",
"mathtext.fontset": "stix",
"axes.labelsize": 20,
"legend.fontsize": 20,
"xtick.labelsize": 18,
"ytick.labelsize": 18,
"text.usetex": False,
"figure.figsize": [10, 5],
"axes.grid": True,
}
plt.rcParams.update(params)
plt.close("all")
# Apply the default theme
sns.set_theme()
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
pd.options.mode.chained_assignment = None # default='warn'
# -
# # Calibration A
# +
# Get calibration dataframe
c_df = get_calibrate_df('2021-12-14', '../data/calibration_A')
c_df = c_df[['Sensor', 'PM2.5']]
# figure folder
fig_folder = '../results/calibration_A/'
# -
# ## 1 Overall initial statistics
#
# General statistics for the whole dataset.
# +
param = 'PM2.5'
grand_mean = c_df[param].mean()
grand_median = c_df[param].median()
grand_mode = c_df[param].mode()
grand_std = c_df[param].std()
c_df[param].describe()
# -
# #### Standard deviations, etc.
# The "grand std" shows how much every sample varies from the total mean. The coefficient of variation is computed as follows:
#
# $$CV = \frac{\sigma}{\text{grand mean}}$$
CV = grand_std / grand_mean
CV
# *How much do the medians vary from the total median?* (same formula as standard deviation but with medians)
# +
median_diff = 0
for sensor, grp in c_df.groupby('Sensor'):
    median_diff += (grp[param].median() - grand_median) ** 2
median_diff = np.sqrt(median_diff / (len(c_df['Sensor'].unique()) - 1))
median_diff
# -
# ## 2 Central tendency and variability
# +
central = c_df.groupby('Sensor').agg(
{param:
['mean', 'median', mode, 'min', 'max', x_range, sample_std, standard_error, CI95]
}
)
central.head()
# -
# **Comment**
#
# Sensor 3 has the largest range and highest standard deviation. Sensor 1 has the lowest range. Sensor 4 has the lowest standard deviation.
# ## 3 Distribution
# How is the data distributed?
# ### 3.1 Box plots
# +
fig, ax = plt.subplots(figsize=[10,6], dpi=200)
sns.boxplot(x='Sensor', y=param, data=c_df, width=0.5)
plt.axhline(c_df[param].median(), c='k', linestyle='--', label='grand median')
plt.axhline(c_df[param].mean(), c='c', linestyle='--', label='grand mean')
plt.legend()
plt.title('Box Plots Calibration A')
plt.savefig(fig_folder + 'box_plots.pdf')
plt.show()
# -
# **Comment**
#
# From the boxplot, we can see that we have outliers, mostly from sensors 2, 3, and 5. As the environment was controlled during the calibration and assumed to be stable, these outliers are treated as errors in the sensor readings. Let's evaluate the outliers.
#
# Question: what should we do with outliers?
# - *Remove them*: Gives more accurate statistics later on. In this case, it is probably the most sensible thing to do, as we have so many records to compare with. If the number of outliers is small, they can probably be seen as minor measurement deviations.
# - *Keep them*: How much will the outliers affect later statistics? In this case, we can determine outliers based on a large number of samples. On the station records, however, we do not have as many reference values. When comparing statistics from this dataset to the other datasets, we want to keep the procedure as similar as possible.
#
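As a sketch of the first option (removal), outliers can be dropped per sensor with the standard 1.5×IQR rule. The quantiles below are plain pandas computations, independent of the `Q1`/`Q3`/`IQR` helpers imported from `src.stats`, and the small frame is synthetic, just for illustration:

```python
import pandas as pd

def drop_iqr_outliers(df, group_col="Sensor", value_col="PM2.5", k=1.5):
    """Keep only rows inside [Q1 - k*IQR, Q3 + k*IQR], computed per group."""
    def keep(grp):
        q1, q3 = grp[value_col].quantile([0.25, 0.75])
        iqr = q3 - q1
        lower, upper = q1 - k * iqr, q3 + k * iqr
        return grp[(grp[value_col] >= lower) & (grp[value_col] <= upper)]
    return df.groupby(group_col, group_keys=False).apply(keep)

# Small synthetic frame: 100.0 is an obvious outlier for sensor "1"
df = pd.DataFrame({"Sensor": ["1"] * 6,
                   "PM2.5": [5.0, 5.1, 4.9, 5.2, 5.0, 100.0]})
clean = drop_iqr_outliers(df)
```

This sketches only the "remove" option; whether to apply it before the statistics below is the question the notebook leaves open.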
# ### 3.2 Outliers
# +
# Compute median, lower quartile, upper quartile, IQR, lower limit and upper limit
quantiles = c_df.groupby('Sensor').agg(
{param:
[Q1, Q2, Q3, IQR, lowerLimit, upperLimit, outliers, prcnt_outliers, 'count']
}
)
# Display stats
quantiles.head()
# +
# Get exact values of outliers
outliers_dict = {}
for sensor, grp in c_df.groupby('Sensor'):
    lower = quantiles[param]['lowerLimit'][sensor]
    upper = quantiles[param]['upperLimit'][sensor]
    outliers_dict[sensor] = grp.loc[(grp[param] < lower) | (grp[param] > upper)][param].values
outliers_dict
# -
# **Comment**
#
# Sensor 3 has the most outliers, followed by Sensor 2, Sensor 5, Sensor 4, and lastly Sensor 1. These are not many outliers compared with the total number of samples; still, the outlying 0.07-0.36% of sample points shift the mean slightly.
# ### 3.3 Histograms
plot_sensor_distributions(c_df, 'Distributions Calibration A', fig_name=False, bins=20, param=param)
# **Comment**
#
# This gives us a nice general overview of the individual sensor distributions. They seem to roughly follow normal distributions, but to what extent? To get more exact values, let's use QQ-plots, skew, and kurtosis.
# ### 3.4 Normal distribution
# +
normal = c_df.groupby('Sensor').agg({param: [skew, kurtosis]})
normal.head(10)
# -
# Compare with values
#
# **Comment**
#
# Sensor 1 has the highest absolute skew and kurtosis. Sensor 6 has the lowest skew, while Sensor 3 has the lowest kurtosis. This is interesting, as Sensor 3 had the most outliers in absolute numbers.
# #### QQ Plots
plot_QQ_plots(c_df, title='QQ Plots Calibration A', param=param, fig_name=False)
# **Comment**
#
# Based on visuals from the above graphs, all sensors seem to follow a normal distribution quite well.
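The `plot_QQ_plots` helper comes from this project's `src.plots` module. As a self-contained sketch of the same idea, a single sensor's distribution can be checked against the normal with `scipy.stats.probplot`; the synthetic sample below stands in for one sensor's PM2.5 readings:

```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Synthetic stand-in for one sensor's PM2.5 readings
rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=0.5, size=500)

# probplot draws ordered sample values against theoretical normal quantiles
# and returns the least-squares fit; r close to 1 indicates near-normality.
fig, ax = plt.subplots(figsize=(5, 4))
(osm, osr), (slope, intercept, r) = stats.probplot(sample, dist="norm", plot=ax)
ax.set_title(f"QQ plot, fit correlation r = {r:.3f}")
plt.show()
```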
# ## 4 Comparison within and between sensors
# ### 4.1 Distribution
# +
fig, ax = plt.subplots(figsize=[7,4], dpi=200)
sns.histplot(c_df, x=param, hue='Sensor', multiple='stack', bins=94)
plt.axvline(grand_mean, c='k', linestyle='--', label='mean', linewidth=1.5)
plt.title('Histogram Calibration A')
plt.show()
# +
grand_skew = stats.skew(c_df[param], bias=False)
grand_kurtosis = stats.kurtosis(c_df[param], bias=False)
print(f'Skew: {grand_skew}')
print(f'Kurtosis: {grand_kurtosis}')
print(f'Std: {grand_std}')
# -
# **Comment**
#
# Slightly longer tail on the right side (positive skew) than a normal distribution. Low kurtosis.
# ## 5 Other
# ### 5.1 Pairplots
# +
pair_df = get_calibrate_df('2021-12-14', '../data/calibration_A')
pair_df = pair_df[['PM1.0', 'PM2.5', 'Temperature', 'Humidity', 'NC1.0', 'NC2.5', 'Sensor']]
sns.pairplot(pair_df, hue='Sensor')
#plt.savefig(fig_folder + 'pairplot.pdf')
plt.title('Pairplot Calibration A')
plt.show()
# -
# # Box plot in stations
# +
# Session df and raw session df
s_df = get_computed_sessions()
r_df = combine_raw_session_dfs()
r_df['Sensor'] = r_df['Sensor'].astype(str)
# Only keep green line
s_df = s_df[s_df['Station'].isin(get_green_line())]
r_df = r_df[r_df['Station'].isin(get_green_line())]
# Get session ids
session_ids = sorted(list(r_df["Session Id"].unique()))
# +
fig, ax = plt.subplots(figsize=[10,6], dpi=200)
sns.boxplot(x='Station', y='NC2.5', data=s_df, width=0.5, order=get_green_line())
plt.xticks(rotation=90)
plt.title('Box Plots Stations')
#plt.axhline(10, c='r', linestyle=(0, (3, 10, 2, 3, 30, 1, 2, 1))) # dash dash dot dash densly dash
#plt.savefig('figures/PaperV1/Exploration/CalibrationA/box_plot.pdf')
plt.show()
# -
# **Comment**
#
# Some stations have quite a few outliers.
# +
# Compute median, lower quartile, upper quartile, IQR, lower limit and upper limit
station_quantiles = s_df.groupby('Station').agg(
{'PM2.5':
[Q1, Q2, Q3, IQR, lowerLimit, upperLimit, outliers, prcnt_outliers, 'count']
}
)
station_quantiles['PM2.5'].sort_values(by='outliers', ascending=False)
# -
# **Comment**
#
# Some stations have outliers. What happened during these sessions?
outlier_ids = print_outliers(s_df, station_quantiles, 'PM2.5')
sns.histplot(outlier_ids)
plt.title('Session Outliers')
plt.xticks(rotation=90)
plt.show()
# **Comment**
#
# These sessions are worth examining and comparing with other sources. Especially session 20211004-2, as it contains 5 outliers within the same session!
# # Station Distributions
#
# - Get all raw data for a station and plot histograms etc. like calibration df
# # Drift in sensors
# ### Per station
# ### Per sensor per station
# +
station_data = {}
for sensor, grp in r_df.groupby('Sensor'):
    if sensor not in station_data:
        station_data[sensor] = {}
    for session_id, s_grp in grp.groupby('Session Id'):
        # get median value
        station_records = s_grp.loc[s_grp['Station'] == 'Rådmansgatan']
        if len(station_records) > 0:
            station_data[sensor][session_id] = station_records['PM2.5'].median()
# +
fig, axs = plt.subplots(figsize=[12,10], nrows=4, ncols=3, dpi=200)
for sensor, ax in zip(station_data.keys(), axs.flatten()):
    sorted_data = {k: v for k, v in sorted(station_data[sensor].items(), key=lambda item: item[0])}
    ax.scatter(sorted_data.keys(), sorted_data.values())
    #labels = r_df.loc[r_df['Sensor'] == sensor]['Session Id'].values
    #ax.set_xticklabels(labels, rotation=90)
plt.tight_layout()
plt.show()
# +
#1234BD
# -
# # Comparison DiSC Data
# +
# Session df and raw session df
s_df = get_computed_sessions()
r_df = combine_raw_session_dfs()
r_df['Sensor'] = r_df['Sensor'].astype(str)
# Only keep green line
s_df = s_df[s_df['Station'].isin(get_green_line())]
r_df = r_df[r_df['Station'].isin(get_green_line())]
# DiSC df and raw DiSC df
disc_df = get_computed_sessions('sessionsDiSC', disc=True)
raw_disc_df = combine_raw_session_dfs('sessionsDiSC')
disc_df = disc_df.loc[disc_df['Date'] != '2021-10-12']
raw_disc_df = raw_disc_df.loc[raw_disc_df['Date'] != '2021-10-12']
raw_disc_df['Sensor'] = raw_disc_df['Sensor'].astype(str)
# Only keep green line
disc_df = disc_df[disc_df['Station'].isin(get_green_line())]
raw_disc_df = raw_disc_df[raw_disc_df['Station'].isin(get_green_line())]
# Get session ids
session_ids = sorted(list(r_df["Session Id"].unique()))
# -
disc_df['Date'].unique()
# +
# DISC DF
fig, ax = plt.subplots(figsize=[10,6], dpi=200)
sns.boxplot(x='Station', y='Number', data=disc_df, width=0.5, order=get_green_line())
plt.xticks(rotation=90)
plt.title('Box Plots DiSC')
plt.show()
# +
# Compute median, lower quartile, upper quartile, IQR, lower limit and upper limit
d_station_quantiles = disc_df.groupby('Station').agg(
{'Number':
[Q1, Q2, Q3, IQR, lowerLimit, upperLimit, outliers, prcnt_outliers, 'count']
}
)
d_station_quantiles['Number'].sort_values(by='outliers', ascending=False)
# -
outlier_ids = print_outliers(disc_df, d_station_quantiles, 'Number')
sns.histplot(outlier_ids)
plt.title('Session Outliers')
plt.xticks(rotation=90)
plt.show()
# ### Session Graphs
# +
session = '20210930-1'
s1_df = r_df.loc[r_df['Session Id'] == session]
s2_df = raw_disc_df.loc[raw_disc_df['Session Id'] == session]
# -
s2_df
| scripts/Exploratory Data Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--NAVIGATION-->
# < [Dataset Metadata](DatasetMetaData.ipynb) | [Index](Index.ipynb) | [Dataset Columns](Columns.ipynb) >
#
# <a href="https://colab.research.google.com/github/simonscmap/pycmap/blob/master/docs/MetaData.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
#
# <a href="https://mybinder.org/v2/gh/simonscmap/pycmap/master?filepath=docs%2FMetaData.ipynb"><img align="right" src="https://mybinder.org/badge_logo.svg" alt="Open in Colab" title="Open and Execute in Binder"></a>
# ## *get_metadata(table, variable)*
#
# Returns a dataframe containing the associated metadata of a variable (such as data source, distributor, references, etc.).<br/>
# The inputs can be string literals (if only one table and variable are passed) or lists of string literals.
# > **Parameters:**
# >> **table: string or list of string**
# >> <br />The name of table where the variable is stored. A full list of table names can be found in the [catalog](Catalog.ipynb).
# >> <br />
# >> <br />**variable: string or list of string**
# >> <br />Variable short name. A full list of variable short names can be found in the [catalog](Catalog.ipynb).
#
# >**Returns:**
# >> Pandas dataframe.
# ### Example
# +
# #!pip install pycmap -q #uncomment to install pycmap, if necessary
import pycmap
api = pycmap.API(token='<YOUR_API_KEY>')
api.get_metadata(['tblsst_AVHRR_OI_NRT', 'tblArgoMerge_REP'], ['sst', 'argo_merge_salinity_adj'])
# -
# <img src="figures/sql.png" alt="SQL" align="left" width="40"/>
# <br/>
# ### SQL Statement
# Here is how to achieve the same results using a direct SQL statement. Please refer to [Query](Query.ipynb) for more information.
# <code>EXEC uspVariableMetaData 'table', 'variable'</code><br/><br/>
# **Example:**<br/>
# Metadata associated with Argo salinity measurements:<br/><br/>
# <code>EXEC uspVariableMetaData 'tblArgoMerge_REP', 'argo_merge_salinity_adj'</code><br/><br/>
| docs/MetaData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/obeshor/Plant-Diseases-Detector/blob/master/Plant_Diseases_Detection_with_TF2_V2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="68KM1BqVaL-K" colab_type="text"
# # TensorFlow Lite End-to-End Android Application
#
# By [<NAME>](https://www.linkedin.com/in/yannick-serge-obam/)
#
# For this project, we will create an end-to-end Android application with TFLite that will then be open-sourced as a template design pattern.
#
# We opted to develop an **Android application that detects plant diseases**.
#
# <img src='https://github.com/obeshor/Plant-Diseases-Detector/blob/master/assets/detect_crop_disease_in_africa.jpg?raw=1' width=500px>
#
# The project is broken down into multiple steps:
#
# * Building and creating a machine learning model using TensorFlow with Keras
# * Deploying the model to an Android application using TFLite
# * Documenting and open-sourcing the development process
#
#
#
#
#
# + [markdown] id="R_FLx9gt3Rb7" colab_type="text"
# ## **Machine Learning Model Using TensorFlow with Keras**
#
# We designed algorithms and models to recognize species and diseases in crop leaves using a convolutional neural network.
#
# + [markdown] id="jm459EZSk4zS" colab_type="text"
# ### **Importing the Libraries**
# + id="B9XR2lRUPtQG" colab_type="code" colab={}
# Install nightly package for some functionalities that aren't in alpha
# !pip install tf-nightly-gpu-2.0-preview
# Install TF Hub for TF2
# !pip install 'tensorflow-hub == 0.4'
# + id="f_hZVquWBx9b" colab_type="code" colab={}
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
#tf.logging.set_verbosity(tf.logging.ERROR)
#tf.enable_eager_execution()
import tensorflow_hub as hub
import os
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
#from keras import optimizers
# + id="eoKSWDje_6z0" colab_type="code" outputId="01c83133-5951-412f-a996-42dae67a2f1a" colab={"base_uri": "https://localhost:8080/", "height": 87}
# verify TensorFlow version
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
# + [markdown] id="l9qTqaSDaL-i" colab_type="text"
# ### Load the data
# We will download a public dataset of 54,305 images of diseased and healthy plant leaves collected under controlled conditions ( [PlantVillage Dataset](https://storage.googleapis.com/plantdata/PlantVillage.tar)). The images cover 14 species of crops, including: apple, blueberry, cherry, grape, orange, peach, pepper, potato, raspberry, soy, squash, strawberry and tomato. It contains images of 17 basic diseases, 4 bacterial diseases, 2 diseases caused by mold (oomycete), 2 viral diseases and 1 disease caused by a mite. 12 crop species also have healthy leaf images that are not visibly affected by disease. Then store the downloaded zip file to the "/tmp/" directory.
#
# We'll need to make sure the input data is resized to 224x224 or 299x299 pixels, as required by the networks.
#
#
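Resizing is handled later via `target_size` in the Keras generators; as a standalone illustration of the required input preprocessing (the dummy image below is illustrative, not part of the dataset):

```python
import numpy as np
from PIL import Image

# Dummy RGB image standing in for a PlantVillage leaf photo
img = Image.fromarray(np.random.randint(0, 256, size=(500, 375, 3), dtype=np.uint8))

# Resize to the network's expected input: 299x299 for InceptionV3, 224x224 for MobileNetV2
img = img.resize((299, 299))

# Scale pixel values from [0, 255] to [0, 1], as the generators do via rescale=1./255
arr = np.asarray(img, dtype=np.float32) / 255.0
```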
# + id="YG5Hzp83Ezb0" colab_type="code" colab={}
zip_file = tf.keras.utils.get_file(origin='https://storage.googleapis.com/plantdata/PlantVillage.zip',
fname='PlantVillage.zip', extract=True)
# + [markdown] id="HA69CPvROvk5" colab_type="text"
# ### Prepare training and validation dataset
# Create the training and validation directories
# + id="PhB3buj3OoKP" colab_type="code" colab={}
data_dir = os.path.join(os.path.dirname(zip_file), 'PlantVillage')
train_dir = os.path.join(data_dir, 'train')
validation_dir = os.path.join(data_dir, 'validation')
# + id="bDUxGnMdkmE8" colab_type="code" colab={}
import time
import os
from os.path import exists
def count(dir, counter=0):
    """Returns the number of files in dir and its subdirs."""
    for pack in os.walk(dir):
        for f in pack[2]:
            counter += 1
    return dir + " : " + str(counter) + " files"
# + id="lwFeDj4ughkE" colab_type="code" outputId="31a6b4e0-fd4c-4774-d4e8-154ed4f93013" colab={"base_uri": "https://localhost:8080/", "height": 50}
print('total images for training :', count(train_dir))
print('total images for validation :', count(validation_dir))
# + [markdown] id="sKbwKt0_aL_D" colab_type="text"
# ### Label mapping
#
# You'll also need to load in a mapping from category label to category name. You can find this in the file `categories.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the plants and diseases.
# + id="3-G_4n8ZF91f" colab_type="code" outputId="681ccb47-e685-4f98-88e0-0c5b3f9fec72" colab={"base_uri": "https://localhost:8080/", "height": 194}
# !wget https://github.com/obeshor/Plant-Diseases-Detector/archive/master.zip
# !unzip master.zip;
# + id="FCf_N7v3HBzB" colab_type="code" outputId="79cde527-4de7-40be-ddf0-5290d9ce65ad" colab={"base_uri": "https://localhost:8080/", "height": 54}
import json
with open('Plant-Diseases-Detector-master/categories.json', 'r') as f:
    cat_to_name = json.load(f)
classes = list(cat_to_name.values())
print (classes)
# + id="hzILXef8Um5v" colab_type="code" outputId="f974e268-f126-4ab5-af05-9db1076cacf0" colab={"base_uri": "https://localhost:8080/", "height": 34}
print('Number of classes:',len(classes))
# + [markdown] id="5VeSPT_J0otV" colab_type="text"
# ### Select the Hub/TF2 module to use
# + id="lyCAYjNT09ja" colab_type="code" outputId="a3a254aa-49af-479a-9ab1-d00deaca02a1" colab={"base_uri": "https://localhost:8080/", "height": 34}
module_selection = ("inception_v3", 299, 2048) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/2".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
BATCH_SIZE = 64 #@param {type:"integer"}
# + [markdown] id="ypePvgaXw5Lm" colab_type="text"
# ### Data Preprocessing
#
# Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network.
#
# As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).
#
#
# + id="JRrsFKez6fFf" colab_type="code" outputId="f1f96775-a07c-4b88-dda9-0932a8cfe061" colab={"base_uri": "https://localhost:8080/", "height": 52}
# Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, esp. when fine-tuning.
validation_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    shuffle=False,
    seed=42,
    color_mode="rgb",
    class_mode="categorical",
    target_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE)
do_data_augmentation = True #@param {type:"boolean"}
if do_data_augmentation:
    train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rescale=1./255,
        rotation_range=40,
        horizontal_flip=True,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        fill_mode='nearest')
else:
    train_datagen = validation_datagen
# NOTE: subset="training" was dropped here — it requires validation_split to be set
# on the generator, and validation images already come from a separate directory.
train_generator = train_datagen.flow_from_directory(
    train_dir,
    shuffle=True,
    seed=42,
    color_mode="rgb",
    class_mode="categorical",
    target_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE)
# + [markdown] id="3Afm5Jn42E4T" colab_type="text"
# ### Build the model
# All it takes is putting a linear classifier on top of the `feature_extractor_layer` obtained from the Hub module.
#
# For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
# + id="Iiu7viPNdEI9" colab_type="code" colab={}
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE+(3,),
output_shape=[FV_SIZE])
# + id="hMYytuAGB34w" colab_type="code" colab={}
do_fine_tuning = False #@param {type:"boolean"}
if do_fine_tuning:
    # Unfreeze the Hub feature extractor so its weights are fine-tuned as well
    feature_extractor.trainable = True
else:
    feature_extractor.trainable = False
# + id="A9iG69R72XUT" colab_type="code" outputId="90818696-a14c-493d-89dc-215469d2b995" colab={"base_uri": "https://localhost:8080/", "height": 354}
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes, activation='softmax',
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
#model.build((None,)+IMAGE_SIZE+(3,))
model.summary()
# + [markdown] id="PGGphhh8Vu50" colab_type="text"
# ### Specify Loss Function and Optimizer
# + id="cq5rzR-vV7tn" colab_type="code" colab={}
#Compile model specifying the optimizer learning rate
LEARNING_RATE = 0.001 #@param {type:"number"}
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss='categorical_crossentropy',
    metrics=['accuracy'])
# + [markdown] id="ia50ckJ6-rVr" colab_type="text"
# ### Train Model
# Train the model, evaluating on the validation dataset after each epoch.
# + id="Cf1FVwyCRI8o" colab_type="code" outputId="dfc3f955-13ea-481b-ab20-8235e1c7ba66" colab={"base_uri": "https://localhost:8080/", "height": 194}
EPOCHS=5 #@param {type:"integer"}
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=EPOCHS,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size)
# + [markdown] id="pLa4bHwbPNWD" colab_type="text"
# ### Check Performance
# Plot training and validation accuracy and loss.
# + id="QUPxHwHC3Gy_" colab_type="code" outputId="815d66eb-9f29-420e-ed77-c2219581bcde" colab={"base_uri": "https://localhost:8080/", "height": 512}
import matplotlib.pyplot as plt
import numpy as np
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Epochs")
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.ylabel("Loss (training and validation)")
plt.xlabel("Epochs")
plt.show()
# + [markdown] id="aNzdageWS6k5" colab_type="text"
# ### Random test
# Randomly sample images from the validation dataset and predict their classes.
# + id="l1jIf7rKTBLx" colab_type="code" colab={}
# Import OpenCV
import cv2
# Utility
import itertools
import random
from collections import Counter
from glob import iglob
def load_image(filename):
    img = cv2.imread(os.path.join(data_dir, validation_dir, filename))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; training used RGB
    img = cv2.resize(img, (IMAGE_SIZE[0], IMAGE_SIZE[1]))
    img = img / 255.0
    return img

def predict(image):
    probabilities = model.predict(np.asarray([image]))[0]  # use the argument, not a global
    class_idx = np.argmax(probabilities)
    return {classes[class_idx]: probabilities[class_idx]}
# + id="tKZduGZoTFSg" colab_type="code" outputId="95405360-16ea-49be-e967-f78c2e0a9ec9" colab={"base_uri": "https://localhost:8080/", "height": 1546}
for idx, filename in enumerate(random.sample(validation_generator.filenames, 5)):
    print("SOURCE: class: %s, file: %s" % (os.path.split(filename)[0], filename))
    img = load_image(filename)
    prediction = predict(img)
    print("PREDICTED: class: %s, confidence: %f" % (list(prediction.keys())[0], list(prediction.values())[0]))
    plt.figure(idx)   # create the figure before drawing into it
    plt.imshow(img)
    plt.show()
# + [markdown] id="31q93iM8ejMu" colab_type="text"
# ## Export as saved model and convert to TFLite
# Now that you've trained the model, export it as a saved model
# + id="OxZNx2DRTolS" colab_type="code" colab={}
import time
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
tf.keras.experimental.export_saved_model(model, export_path)
export_path
# + id="i1EkGUwbUIw8" colab_type="code" colab={}
# Now confirm that we can reload it, and it still gives the same results
reloaded = tf.keras.experimental.load_from_saved_model(export_path, custom_objects={'KerasLayer':hub.KerasLayer})
# + id="wo2RmcCdd-Oa" colab_type="code" colab={}
def predict_reload(image):
    probabilities = reloaded.predict(np.asarray([image]))[0]  # use the argument, not a global
    class_idx = np.argmax(probabilities)
    return {classes[class_idx]: probabilities[class_idx]}
# + id="Tr5tUCEpeNX5" colab_type="code" outputId="de8375d1-f0fe-4c19-8b6f-bf9dea12753f" colab={"base_uri": "https://localhost:8080/", "height": 610}
for idx, filename in enumerate(random.sample(validation_generator.filenames, 2)):
    print("SOURCE: class: %s, file: %s" % (os.path.split(filename)[0], filename))
    img = load_image(filename)
    prediction = predict_reload(img)
    print("PREDICTED: class: %s, confidence: %f" % (list(prediction.keys())[0], list(prediction.values())[0]))
    plt.figure(idx)
    plt.imshow(img)
    plt.show()
# + id="nJHOEnvxdN50" colab_type="code" colab={}
# convert the model to TFLite
# !mkdir "tflite_models"
TFLITE_MODEL = "tflite_models/plant_disease_model.tflite"
# Get the concrete function from the Keras model.
run_model = tf.function(lambda x: reloaded(x))
# Save the concrete function.
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)
# Convert the model to a standard TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converted_tflite_model = converter.convert()
with open(TFLITE_MODEL, "wb") as f:
    f.write(converted_tflite_model)
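To sanity-check a conversion like this end to end, a tiny stand-in model (not the plant-disease model above; the layer sizes here are arbitrary) can be converted in memory and executed with `tf.lite.Interpreter`:

```python
import numpy as np
import tensorflow as tf

# Arbitrary toy model standing in for the trained classifier
tiny = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='softmax', input_shape=(4,))
])
converter = tf.lite.TFLiteConverter.from_keras_model(tiny)
tflite_bytes = converter.convert()  # the same bytes you would write to disk

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp['index'], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out['index'])
print(probs.shape)  # (1, 3)
```

The softmax output should sum to 1, which is a cheap way to confirm the converted graph runs correctly.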
# + [markdown] id="IPepNCqPtfq-" colab_type="text"
# ## CONCLUSION
# The model can likely be improved further by tuning the hyperparameters or by trying a different pretrained backbone. It's up to you. Let me know if you manage to improve the accuracy! Next, let's develop an Android app that uses this model.
| Plant_Diseases_Detection_with_TF2_V2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''Thesis-S9-fXffy'': pipenv)'
# name: python3
# ---
# ## Human Activity Recognition System using Machine Learning techniques for Home Automation by <NAME>
#
# Dataset used: https://tev.fbk.eu/technologies/smartwatch-gestures-dataset
#
# Eight different users performed twenty repetitions of twenty different gestures, for a total of 3200 sequences. Each sequence contains acceleration data from the 3-axis accelerometer of a first generation Sony SmartWatch™, as well as timestamps from the different clock sources available on an Android device. The smartwatch was worn on the user's right wrist. The gestures have been manually segmented by the users performing them by tapping the smartwatch screen at the beginning and at the end of every repetition.
#
# <img src="https://i.imgur.com/U3Uqe9x.png">
#
#
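As a quick arithmetic check, the stated total follows directly from the collection protocol:

```python
# 8 users, each performing 20 repetitions of 20 gestures
users, gestures, repetitions = 8, 20, 20
total_sequences = users * gestures * repetitions
print(total_sequences)  # 3200
```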
# +
import yaml
from utils import dotdict, make_range
with open('settings.yaml', 'r') as f:
    settings = dotdict(yaml.load(f, Loader=yaml.Loader))
# -
# ### Load and preprocess dataset
# We use the first 8 Haar transform coefficients for each axis, creating a 24-element feature vector for each sequence.
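The actual preprocessing lives in `dataset.read_dataset`; as a rough, NumPy-only illustration of what one level of Haar coefficients looks like (the function name `haar_level` and the toy signal are mine, not the project's):

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: pairwise sums (approximation)
    and pairwise differences (detail), each scaled by 1/sqrt(2)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # toy single-axis sequence
approx, detail = haar_level(sig)
```

Applying this recursively to the approximation coefficients and keeping the first 8 values per accelerometer axis would yield the 24-element (3 × 8) vector described above.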
# +
from dataset import read_dataset
print("Loading and preprocessing dataset...")
dataset = read_dataset(settings.dataset)
print("In total", len(dataset.X_train) + len(dataset.X_test),
"sequences have been loaded and preprocessed")
# -
# ### Split training (6 users) / test set (2 users)
# ## Find best algorithm
# Among the following:
# - K-nearest neighbors (KNN)
# - Decision Tree (DT)
# - Random Forest (RF)
# - Support Vector Machines (SVM)
# - Multilayer perceptron (MLP)
# - Convolutional neural network (CNN)
# - Recurrent neural networks (RNN) using long-short term memory (LSTM)
# ### K-nearest neighbors (KNN)
# +
from sklearn.neighbors import KNeighborsClassifier
from eval import SklearnEvaluator
best_accuracy, best_accuracy6 = [0] * 5, [0] * 5
gen_neighbors = lambda x: KNeighborsClassifier(n_neighbors=x,
weights='distance')
evaluator = SklearnEvaluator(gen_neighbors, gen_neighbors, 'neighbors', 0,
dataset, best_accuracy, best_accuracy6)
evaluator.fit(make_range(**settings.knn.neighbors))
# -
# ### Decision Tree (DT)
# +
from sklearn.tree import DecisionTreeClassifier
gen_decision_tree = lambda x : DecisionTreeClassifier(max_depth=x, random_state=0)
evaluator = SklearnEvaluator(gen_decision_tree, gen_decision_tree, 'max depth',
1, dataset, best_accuracy, best_accuracy6)
evaluator.fit(make_range(**settings.decision_tree.depth))
# -
# ### Random Forest (RF)
# +
from sklearn.ensemble import RandomForestClassifier
from tqdm import tqdm
gen_random_forest = lambda x : RandomForestClassifier(n_estimators=x, random_state=0)
evaluator = SklearnEvaluator(gen_random_forest, gen_random_forest, 'number of trees',
2, dataset, best_accuracy, best_accuracy6)
evaluator.fit(make_range(**settings.random_forest.estimators))
# -
# ### Support Vector Machines (SVM)
# +
from sklearn.svm import SVC
gen_random_svm = lambda x: SVC(
kernel='poly', gamma='scale', degree=x, random_state=0)
evaluator = SklearnEvaluator(gen_random_svm, gen_random_svm, 'degree', 3,
dataset, best_accuracy, best_accuracy6)
evaluator.fit(make_range(**settings.svm.degree))
# -
# ## Deep Learning
# ### Multilayer perceptron (MLP)
#
# ### MLP Classifier using Sklearn
# +
from sklearn.neural_network import MLPClassifier
gen_classifier = lambda x: MLPClassifier(
hidden_layer_sizes=(x, x), random_state=settings.mlp.seed, max_iter=1000)
evaluator = SklearnEvaluator(gen_classifier, gen_classifier, 'neurons', 4,
dataset, best_accuracy, best_accuracy6)
evaluator.fit(make_range(**settings.mlp.neurons))
# -
# ### Print accuracies for every algorithm
# +
algorithms = ['KNN', 'DT', 'RF', 'SVM', 'MLP']
for algorithm, accuracy, accuracy6 in zip(algorithms, best_accuracy, best_accuracy6):
    accuracy = format(accuracy * 100, '.2f') + '%'
    accuracy6 = format(accuracy6 * 100, '.2f') + '%'
    print('%-16s%-16s%s' % (algorithm, accuracy, accuracy6))
# -
# ### Train the best model
# MLP for all the gestures
from sklearn import metrics
model = gen_classifier(256)
history = model.fit(dataset.X_train, dataset.y_train)
metrics.accuracy_score(dataset.y_test, model.predict(dataset.X_test))
| Thesis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="_1m_hR0wgSkr"
import pandas as pd
from typing import List
import matplotlib.pyplot as plt
import plotly.express as px
# + id="UI9Wy3yf8rHX"
# Helpers to convert Unix timestamps into readable dates
class io:
    @staticmethod
    def is_file_like(obj) -> bool:
        if not (hasattr(obj, "read") or hasattr(obj, "write")):
            return False
        if not hasattr(obj, "__iter__"):
            return False
        return True

def read_ibi_file(file) -> pd.DataFrame:
    close_file = False  # only close handles we opened ourselves
    if io.is_file_like(file):
        file_to_read = file
    elif isinstance(file, str):
        file_to_read = open(file, 'rb')
        close_file = True
    else:
        raise ValueError("Not a valid input: %s" % type(file))
    initial_pos = file_to_read.tell()
    timestamp = float(str(file_to_read.readline(), 'utf-8').strip().split(',')[0])
    file_to_read.seek(initial_pos)
    ibi = pd.read_csv(file_to_read, skiprows=1, names=['ibi'], index_col=0)
    ibi['ibi'] *= 1000
    ibi.index = pd.to_datetime((ibi.index * 1000 + timestamp * 1000).map(int), unit='ms', utc=True)
    ibi.index.name = 'datetime'
    if close_file:
        file_to_read.close()
    return ibi
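The timestamp arithmetic above hinges on `pd.to_datetime(..., unit='ms', utc=True)`; on a single known epoch value (the constant below is illustrative) it behaves like this:

```python
import pandas as pd

# 2021-01-01 00:00:00 UTC expressed as milliseconds since the Unix epoch
ts = pd.to_datetime(1609459200000, unit='ms', utc=True)
print(ts.isoformat())  # 2021-01-01T00:00:00+00:00
```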
# + colab={"base_uri": "https://localhost:8080/", "height": 449} id="reJOqFYA8t9N" outputId="22b9b823-612f-444a-a4cb-b75b1e8fb9d8"
df = read_ibi_file('IBI.csv')
df
# + id="sajd7ZaA8wdG"
import preprocessing as pp
# + colab={"base_uri": "https://localhost:8080/"} id="LHtAVaAb8zV3" outputId="4913a066-100f-4320-c6d2-44d267f3c1a4"
hrv_features = pp.get_hrv_features(df['ibi'], 180, 1, ["td"], 0.1, clean_data = True, num_cores= 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 638} id="2jngi-A29SHs" outputId="518cf558-1823-43f5-9f1c-e906294b2e1e"
hrv_features
# + colab={"base_uri": "https://localhost:8080/", "height": 828} id="PLcUdeWT9e-G" outputId="cb3e0f4d-deee-4822-d94f-1cf1391657c1"
hrv_features[['hrv_rmssd']].plot(subplots=True, figsize=(24,14 ))
# + colab={"base_uri": "https://localhost:8080/", "height": 902} id="HieCIe24Hh4g" outputId="f7b1c7c2-90bd-49ad-9e0a-db7981500542"
_ = hrv_features[['hrv_mean_hr', 'hrv_rmssd']].plot(subplots=True, figsize=(24, 18))
# + id="2HqAv4CjHGlU"
def lagoption(option, window_size, shift) -> pd.DataFrame:
    """
    option: whether we want a trailing, leading, or centered window
    window_size: the size of the window, specified w.r.t. the time interval
    shift: how far the window is shifted for the trailing/leading cases
    """
    if option == "trailing":
        shift = -shift
    elif option == "leading":
        shift = +shift
    elif option == "centered":
        return hrv_features['hrv_rmssd'].rolling(window_size, center=True).mean()
    return hrv_features['hrv_rmssd'].rolling(window_size).mean().shift(shift)
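On a toy series (independent of the HRV data), the three window alignments differ only in which neighbouring samples the rolling mean draws from:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

trailing = s.rolling(3).mean()               # current sample and the 2 before it
centered = s.rolling(3, center=True).mean()  # one sample on each side
leading  = s.rolling(3).mean().shift(-2)     # current sample and the 2 after it
```

All three produce the same means; only their alignment to the index differs, which is exactly what the `shift` argument of `lagoption` controls.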
# + colab={"base_uri": "https://localhost:8080/", "height": 863} id="ov0wIIkVR65Z" outputId="3d176adb-d42c-428e-a31a-863cb6473baa"
ibi_1 = lagoption("centered", 300, 0)
ibi_2 = lagoption('trailing', 300, 100)
ibi_3 = lagoption('leading', 300, 100)
plt.figure(figsize=(24, 14))
plt.plot(ibi_2, color = 'red')
plt.plot( ibi_1, color='green')
plt.plot( ibi_3)
plt.legend(['trailing', 'centered', 'leading'])
plt.title('window size of 300 samples with a shift of 100')
# + id="MlRAzuUJSSNY"
# + id="huH0QW4DSXQz"
# + id="kvoIIaLbWSZG"
| IBI_processing/flirt_package.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: track4
# language: python
# name: track4
# ---
# +
import os
import sys
os.chdir(os.path.join(os.getenv('HOME'), 'RPOSE'))
sys.path.insert(0, os.getcwd())
sys.path.append(os.path.join(os.getcwd() + '/src'))
sys.path.append(os.path.join(os.getcwd() + '/core'))
sys.path.append(os.path.join(os.getcwd() + '/segmentation'))
import pickle
import numpy as np
# p = "/home/jonfrey/RPOSE/notebooks/Mode.MUTIPLE_INIT_POSES_data_final.pkl" # experiment with predicted flow and valid
p = "/home/jonfrey/RPOSE/notebooks/Mode.MUTIPLE_INIT_POSES_gt_valid_data_final.pkl"
# more noise
p = "/home/jonfrey/RPOSE/notebooks/Mode.MUTIPLE_INIT_POSES_gt_valid_more_noise_trained_data_final.pkl"
p = "/media/scratch1/jonfrey/results/rpose/training_flow_dry_run/2021-07-02T10:40:27_no_synthetic_data/Mode.MUTIPLE_INIT_POSES_eval_data_final.pkl"
p = "/media/scratch1/jonfrey/results/rpose/training_flow_reloaded/2021-07-03T16:20:38_iterations_2/Mode.MUTIPLE_INIT_POSES_eval_data_final.pkl"
p = "/media/scratch1/jonfrey/results/rpose/pose_prediction/gt_valid/Mode.MUTIPLE_INIT_POSES_eval_data_final.pkl"
p = "docs/result_final.pkl"
p = "docs/result_inter_1000.pkl"
p = "docs/both_epnp_6iter.pkl"
p = "docs/tracking.pkl"
p = "docs/315.pkl"
p = "docs/336.pkl"
p = "docs/306.pkl"
p = "docs/223.pkl"
with open(p, 'rb') as f:
    data = pickle.load(f)
data_2 = {}
data_new = {}
print( data.keys() )
for k in data.keys():
    try:
        data_new[k] = data[k][:, 0]
        for l in range(data[k].shape[1]):
            data_2[k + f"_{l}"] = data[k][:, l]
    except Exception:
        data_new[k] = data[k]
import pandas as pd
d = { k: data_new[k] for k in ['add_s', 'adds', 'idx_arr', 'ratios_arr', 'valid_corrospondences', 'init_adds_arr', 'init_add_s_arr', 'epe','repro_errors'] }
df = pd.DataFrame.from_dict( d )
# +
dis = data['add_s'][:,0] - data['init_add_s_arr'][:,0]
m = dis != np.inf
idx = np.argmax((dis[m]))
data['add_s'][:,0][idx ]
data['init_add_s_arr'][:,0][idx ]
# +
from pose_estimation import compute_auc  # needed here; the import cell comes later
data['add_s'][data['add_s'] == np.inf] = data['init_add_s_arr'][data['add_s'] == np.inf]
data['adds'][data['adds'] == np.inf] = data['init_adds_arr'][data['adds'] == np.inf]
compute_auc(data['adds'][:, 0])
# -
# +
from pose_estimation import compute_auc
vio = np.array( [ d.value for d in data["violation_arr"].flatten()]).reshape( data["violation_arr"].shape)
val = np.linalg.norm( (data['h_init_all'] - data['h_pred_all'])[:,:,:3,3], axis=2)
val = data['r_repro_arr']
# Results Ablation: Consecutive Refinement
for tar in range(0, 3):
    # latest iteration with fallback
    num = np.zeros(data['epe'].shape)
    for i in range(m.shape[0]):
        val = np.where(vio[i] == 4)[0]
        for k in range(0, tar + 1):
            k = tar - k
            if k in val:
                val = k
                break
            if k == 0:
                val = 0
        num[i, val] = 1
    print(f"{tar}th Iteration: ", compute_auc_mix(data['add_s'][num == 1], data['adds'][num == 1], data['idx_arr']),
          compute_auc(data['adds'][num == 1]))

val = data['repro_errors']
m = np.argmin(val, axis=1)
num = np.zeros(data['epe'].shape)
for i in range(m.shape[0]):
    num[i, m[i]] = 1
print("Repro Error: ", compute_auc_mix(data['add_s'][num == 1], data['adds'][num == 1], data['idx_arr']),
      compute_auc(data['adds'][num == 1]))

val = data['add_s']
m = np.argmin(val, axis=1)
num = np.zeros(data['epe'].shape)
for i in range(m.shape[0]):
    num[i, m[i]] = 1
print("Optimal: ", compute_auc_mix(data['add_s'][num == 1], data['adds'][num == 1], data['idx_arr']),
      compute_auc(data['adds'][num == 1]))
# -
(data['h_init_all'][:,0,:3,3] - data['h_pred_all'][:,0,:3,3] ).mean(axis=1)
# +
# Experiment: when to fall back on the initial pose
dif = np.linalg.norm(data['h_init_all'][:, 0, :3, 3] - data['h_pred_all'][:, 0, :3, 3], axis=1)
m1 = data['repro_errors'][:, 0] < 100
tmp = np.zeros(data['adds'][:, 0].shape)
tmp[m1] = data['adds'][m1, 0]
use_pose_cnn = True
if use_pose_cnn:
    p = data_posecnn['adds'][:, None]
else:
    p = data['init_adds_arr']
tmp[m1 == False] = p[m1 == False, 0]
print(compute_auc(data['adds'][:, 0]), compute_auc(tmp), sum(m1), m1.shape)
def compute_auc_mix(add, adds, obj):
    sym = []
    for ind in obj.tolist():
        sym.append(
            int(ind) + 1 not in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 17, 18]
        )
    sym = np.array(sym)
    non_sym = sym == False
    mix = add[sym].tolist() + adds[non_sym].tolist()
    return compute_auc(np.array(mix))
idx = 0
print(compute_auc_mix(data['add_s'][:, idx], data['adds'][:, idx], data['idx_arr']),
      compute_auc(data['adds'][:, idx]),
      (vio[:, idx] != 4).sum())
# +
from pose_estimation import compute_auc
vio = np.array( [ d.value for d in data["violation_arr"].flatten()]).reshape( data["violation_arr"].shape)
val = np.linalg.norm( (data['h_init_all'] - data['h_pred_all'])[:,:,:3,3], axis=2)
val = data['repro_errors']
val = data['add_s']
val = data['epe']
val = data['repro_errors']
m = np.argmin(val, axis=1)
num = np.zeros(data['adds'].shape)
for i in range(m.shape[0]):
    num[i, m[i]] = 1
print('ADDS best repro', compute_auc(data['adds'][num == 1]), 'Default-Iter1:', compute_auc(data['adds'][:, 0]))
# +
#for i in range(data.shape[0]):
# for j in range(data.shape[0]):
vio = np.array( [ d.value for d in data["violation_arr"].flatten()]).reshape( data["violation_arr"].shape)
(vio != 4).sum()
num = np.zeros(data['adds'].shape)
for i in range(vio.shape[0]):
    if vio[i][0] == 4:
        num[i, 0] = 1
    if vio[i][1] == 4:
        num[i, 0] = 0
        num[i, 1] = 1
    if vio[i][2] == 4:
        num[i, 0] = 0
        num[i, 1] = 0
        num[i, 2] = 1
np.unique(data['idx_arr'][vio[:, 0] != 4], return_counts=True)
vio[:,0][ vio[:,0] != 4]
# -
np.unique( vio,return_counts=True )
# +
mask_pred = data['adds'] < data['init_adds_arr']
res = np.concatenate( [data['adds'][mask_pred], data['init_adds_arr'][mask_pred==False] ], axis = 0)
compute_auc( res )
# -
vio = np.array( [ d.value for d in data['violation_arr'][:,0]])
mask_pred
# +
vio = np.array( [ d.value for d in data['violation_arr'][:,0]])
best = 0
mask_pred = (data['ratios_arr'][:,0] > 0.1) * ( vio == 4 ) # * (data['valid_corrospondences'] < i)
# data['init_add_s_arr'][:,0][mask_pred==False]
selection = np.concatenate( [data['add_s'][:,0][mask_pred], data['init_add_s_arr'][:,1][mask_pred==False] ], axis = 0)
df['add_s'] = selection
selection = np.concatenate( [data['adds'][:,0][mask_pred], data['init_adds_arr'][:,0][mask_pred==False] ], axis = 0)
df['adds'] = selection
res = compute_auc( selection )
print( "RES WITH FILTERING", res, "rejection: ", (mask_pred==False).sum())
#if res > best:
# best = res
# print(i, res)
best_value_ransacinlier = 0.06
print( "Normal", compute_auc( data['adds'][:,0] ) )
# -
with open('/home/jonfrey/PoseCNN-PyTorch/data_posecnn.pickle', 'rb') as handle:
    posecnn = pickle.load(handle)
data_posecnn = {}
data_posecnn['add_s'] = np.array( [d['distances_non'] for d in posecnn])
data_posecnn['adds'] = np.array( [d['distances_sys'] for d in posecnn])
data_posecnn['idx_arr'] = np.array( [d['cls_index']-1 for d in posecnn])
df_posecnn = pd.DataFrame.from_dict( data_posecnn )
df_posecnn
# +
import seaborn as sns
import matplotlib.pyplot as plt
print ( "Experiment: ADDS to EPE")
epe = df["epe"].to_numpy()
adds = df["adds"].to_numpy()
m_fin = np.isfinite(epe)
epe = epe[m_fin]
adds = adds[m_fin]
sta = 2
sto = 20
b = np.digitize(epe, np.arange(sta,sto,1) , right=False)
y = []
for i in range(b.max()):
    ma = b == i
    # print(compute_auc(adds[ma]), epe[ma].mean(), ma.sum())
    y.append(compute_auc(adds[ma]))
import seaborn as sns
from matplotlib import pyplot
import matplotlib
font = {'family': 'sans-serif',  # 'normal' is not a valid matplotlib font family
        'weight': 'normal',
        'size': 12}
matplotlib.rc('font', **font)
fig, ax = pyplot.subplots(figsize=(6,3))
ax = sns.barplot(ax=ax, x=np.arange(sta,sto,1), y=y, color=sns.color_palette("bright")[-1])
ax.set_ylim([80, 99])
ax.yaxis.label.set_size(14)
ax.xaxis.label.set_size(14)
ax.set(xlabel='EPE in pixels', ylabel='AUC of ADD-S')
plt.gcf().subplots_adjust(bottom=+0.22)
fig.savefig("/home/jonfrey/RPOSE/docs/adds_mean_vs_epe.png", dpi = 600)
# +
print("Experiment: ADD-S to EPE")
adds = df["adds"].to_numpy()
epe = df["epe"].to_numpy()
obj = df['idx_arr']
sym = []
for ind in obj.tolist():
    sym.append(
        int(ind) + 1 not in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 17, 18]
    )
sym = np.array(sym)
epe = epe[sym == False]
adds = adds[sym == False]
m_fin = np.isfinite(epe)
epe = epe[m_fin]
adds = adds[m_fin]
m = (adds < np.quantile(adds[adds<0.1], 0.9) ) * (epe < np.quantile(epe, 0.8) )
epe = epe[m]
adds = adds[m]
for i in range(0, 12):
    ma = (epe < (i + 1)) * (epe > i)
# KDE 2D density plot
df_plot = pd.DataFrame.from_dict( {'EPE in pixels':epe ,'ADD-S in cm':adds*100} )
ax = sns.jointplot(data = df_plot, x='EPE in pixels', y='ADD-S in cm',kind='kde', space=0, fill=True, thresh=0.05, cmap=sns.color_palette("ch:start=.2,rot=-.3", as_cmap=True) )#')# kind = 'hex',height =4.5) #color=sns.color_palette("bright")[-1]) # color = 'royalblue', kind='kde'
# ax.fig.suptitle('ADD-(S) vs EPE', fontsize=12) #, verticalalignment='bottom');
ax.set_axis_labels("EPE in pixels","ADD-S in cm",fontsize=14)
plt.gcf().subplots_adjust(left=+0.2)
# ax.fig.yaxis.label.set_size(14)
ax.fig.savefig("/home/jonfrey/RPOSE/docs/adds_vs_epe.png", dpi = 600)
# -
m.sum() / (sym == False).sum()
# +
import seaborn as sns
add_s = df["add_s"].to_numpy()
ratio = df["ratios_arr"].to_numpy()
m = (add_s < np.quantile(add_s[add_s<0.1], 0.9) ) * (ratio > np.quantile(ratio, 0.3) )
ratio = ratio[m]
add_s = add_s[m]
# KDE 2D density plot
df_plot = pd.DataFrame.from_dict( {'Ratio':ratio ,'ADD-(S) in meters':add_s} )
ax = sns.jointplot(data = df_plot, x='Ratio', y='ADD-(S) in meters', kind = 'hex',height =4) # color = 'royalblue', kind='kde'
# +
import seaborn as sns
add_s = df["add_s"].to_numpy()
repro_errors = df["repro_errors"].to_numpy()
m = (add_s < np.quantile(add_s[add_s<0.1], 0.9) ) * (repro_errors < np.quantile(repro_errors, 0.8) )
repro_error = repro_errors[m]
add_s = add_s[m]
# KDE 2D density plot
df_plot = pd.DataFrame.from_dict( {'Repro Error':repro_error ,'ADD-(S) in meters':add_s} )
ax = sns.jointplot(data = df_plot, x='Repro Error', y='ADD-(S) in meters', kind = 'hex',height =4) # color = 'royalblue', kind='kde'
# +
import seaborn as sns
adds = df["adds"].to_numpy()
init_adds = df["init_adds_arr"].to_numpy()
import matplotlib.pyplot as plt
m = (adds < np.quantile(adds[adds<0.1], 0.9) ) * (init_adds < np.quantile(init_adds[init_adds<0.1], 0.9) )
adds = adds[m]
init_adds= init_adds[m]
print(m.sum())
# KDE 2D density plot
df_plot = pd.DataFrame.from_dict( {'Init': init_adds ,'ADD-(S) in meters': adds} )
ax = sns.jointplot(data = df_plot, x='Init', y='ADD-(S) in meters', kind = 'hex',height =4)
# +
from pose_estimation import compute_auc
sym = [20,19,18,15,12]
def auc(k, obj, df):
    return round(compute_auc(df[k][df['idx_arr'] == obj].to_numpy()), 2)

def bold(a, b):
    if a > b:
        return f'\033[1m {a} \033[0m vs {b}'
    return f' {a} vs \033[1m {b} \033[0m'
for obj in range(0, 21):
    s = ""
    if obj in sym:
        s += '\033[1m'
    s += f'ADDS-AUC {obj}: '
    if obj in sym:
        s += '\033[0m'
    a, b = auc('adds', obj, df), auc('adds', obj, df_posecnn)
    s += bold(a, b)
    if 'init_adds_arr' in df.keys():
        s += f" init: {auc('init_adds_arr', obj, df)}"
    print(s)
print(round(compute_auc(df['adds'].to_numpy()), 2))
print("")
print("")
print("")
for obj in range(0, 21):
    s = ""
    if obj in sym:
        s += '\033[1m'
    s += f'ADD-(S) AUC {obj}: '
    if obj in sym:
        s += '\033[0m'
    a, b = auc('add_s', obj, df), auc('add_s', obj, df_posecnn)
    s += bold(a, b)
    if 'init_add_s_arr' in df.keys():
        s += f" init: {auc('init_add_s_arr', obj, df)}"
    print(s)  # was printing only the init AUC, dropping the assembled line
print(round(compute_auc(df['add_s'].to_numpy()), 2))
# -
print("EPE")
for obj in range(0, 21):
    epe_obj = df['epe'][df['idx_arr'] == obj].to_numpy()
    m = (epe_obj != np.inf) & ~np.isnan(epe_obj)
    # print(obj, round(np.mean(epe_obj[m]), 2), "valids", m.sum(), "total", epe_obj.shape[0])
    print(round(np.mean(epe_obj[m]), 2))
# +
from pose_estimation import compute_auc
sym = [20,19,18,15,12]
def auc(k, obj, df):
    return round(compute_auc(df[k][df['idx_arr'] == obj].to_numpy()), 2)

def bold(a, b):
    if a > b:
        return f'\033[1m {a} \033[0m vs {b}'
    return f' {a} vs \033[1m {b} \033[0m'
print("POSECNN RESULTS only RGB")
for obj in range(0, 21):
    s = ""
    if obj in sym:
        s += '\033[1m'
    s += f'ADDS-AUC {obj}: '
    if obj in sym:
        s += '\033[0m'
    s += str(auc('adds', obj, df_posecnn))
    print(s)
print("all ", round(compute_auc(df_posecnn['adds'].to_numpy()), 2))
print("")
for obj in range(0, 21):
    s = ""
    if obj in sym:
        s += '\033[1m'
    s += f'ADD-S AUC {obj}: '
    if obj in sym:
        s += '\033[0m'
    s += str(auc('add_s', obj, df_posecnn))
    print(s)
print("all ", round(compute_auc(df_posecnn['add_s'].to_numpy()), 2))
# +
from src_utils import load_yaml
import datasets
env = load_yaml(os.path.join('cfg/env', os.environ['ENV_WORKSTATION_NAME']+ '.yml'))
exp = load_yaml("cfg/exp/final/0_training_flow_reload/standard/standard.yml")
test_dataloader = datasets.fetch_dataloader( exp['test_dataset'], env )
test_dataloader.dataset.deterministic_random_shuffel()
# -
import pickle
with open('/home/jonfrey/PoseCNN-PyTorch/data_posecnn.pickle', 'rb') as handle:
    posecnn = pickle.load(handle)
data_posecnn = {}
data_posecnn['add_s'] = np.array( [d['distances_non'] for d in posecnn])
data_posecnn['adds'] = np.array( [d['distances_sys'] for d in posecnn])
data_posecnn['idx_arr'] = np.array( [d['cls_index']-1 for d in posecnn])
df_posecnn = pd.DataFrame.from_dict( data_posecnn )
df_posecnn
# +
import torch
from ycb.rotations import *
def deg(a, b):
    a = torch.tensor(a)[:3, :3][None]
    b = torch.tensor(b)[:3, :3][None]
    return np.rad2deg(float(so3_relative_angle(a, b)))

def trans(a, b):
    return float(np.linalg.norm((a - b)[:3, 3]))
from loss import AddSLoss
adds = AddSLoss( sym_list = list( range(0,22)))
add = AddSLoss( sym_list = []) # bowl, wood_block, large clamp, extra_large clamp, foam_brick
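`so3_relative_angle` comes from the project's `ycb.rotations`; the same geodesic angle can be sketched with plain NumPy via the trace identity cos(theta) = (trace(Ra^T Rb) - 1) / 2 (a hedged stand-in for illustration, not the project's implementation):

```python
import numpy as np

def relative_angle_deg(R_a, R_b):
    """Geodesic angle between two rotation matrices, in degrees."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against round-off
    return np.degrees(np.arccos(cos_theta))

# 90-degree rotation about the z-axis, compared against the identity
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(relative_angle_deg(np.eye(3), Rz))  # 90.0
```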
# +
# POSECNN CREATION
mode = "posecnn"
# Use predicted poses
# mode = "pred"
import copy
import random
import scipy.io as scio
ds = test_dataloader.dataset
pose_cnn_data = ds._posecnn_data
ds.estimate_pose = True
ds.err = True
ds.valid_flow_minimum = 0
ds.fake_flow = True
translations = np.full(len(ds), np.inf)
rotations = np.full(len(ds), np.inf)
results_adds = np.full(len(ds), np.inf)
results_add = np.full(len(ds), np.inf)
import time
st = time.time()
for i in range(len(ds)):
    if i % 50 == 0:
        print(i, time.time() - st)
    obj_idx = ds._obj_idx_list[i]
    if obj_idx - 1 in [12, 15, 18, 19, 20]:
        continue
    p = ds._base_path_list[i]
    dellist = [j for j in range(0, len(ds._pcd_cad_list[obj_idx - 1]))]
    dellist = random.sample(dellist, len(ds._pcd_cad_list[obj_idx - 1]) - ds._num_pt_cad_model)
    model_points = np.delete(ds._pcd_cad_list[obj_idx - 1], dellist, axis=0).astype(np.float32)
    meta = scio.loadmat(p + "-meta.mat")
    obj = meta['cls_indexes'].flatten().astype(np.int32)
    obj_idx_in_list = int(np.argwhere(obj == obj_idx))
    h_gt = np.eye(4)
    h_gt[:3, :4] = meta['poses'][:, :, obj_idx_in_list]
    h_gt = h_gt.astype(np.float32)
    if mode == "pred":
        h_pred = (data["h_pred_all"][i, 0]).astype(np.float32)
    elif mode == "posecnn":
        h_pred = ds._get_init_pose_posecnn(obj_idx, p).astype(np.float32)
    translations[i] = trans(h_gt, h_pred)
    rotations[i] = deg(h_gt, h_pred)
    target = model_points @ h_gt[:3, :3].T \
        + h_gt[:3, 3][:, None].repeat(model_points.shape[0], 1).T
    _adds = adds(torch.from_numpy(target)[None].cuda(),
                 torch.from_numpy(model_points)[None].cuda(),
                 torch.tensor([[obj_idx]]).cuda(),
                 H=torch.from_numpy(h_pred)[None].cuda())
    _add = add(torch.from_numpy(target)[None].cuda(),
               torch.from_numpy(model_points)[None].cuda(),
               torch.tensor([[obj_idx]]).cuda(),
               H=torch.from_numpy(h_pred)[None].cuda())
    results_adds[i] = _adds.cpu().numpy()
    results_add[i] = _add.cpu().numpy()

if mode == "pred":
    pred_translations = copy.deepcopy(translations)
    pred_rotations = copy.deepcopy(rotations)
elif mode == "posecnn":
    posecnn_translations = copy.deepcopy(translations)
    posecnn_rotations = copy.deepcopy(rotations)
# -
import copy
pred_translations = copy.deepcopy(translations)
pred_rotations = copy.deepcopy(rotations)
sns.color_palette("hls", 8)[2]
# +
results_adds
import seaborn as sns
from matplotlib import pyplot
sns.reset_defaults()
rotations = pred_rotations
translations = pred_translations
fig, axs = pyplot.subplots(2, figsize=(6,4))
fig.tight_layout(pad=3.0)
r = rotations[rotations<np.inf]
t = translations[translations<np.inf]
m = (r<np.quantile(r, 0.95)) * (t<np.quantile(t, 0.95))
df_plot = pd.DataFrame.from_dict( {'Rotation Error': r[m] ,
'Translation Error': t[m] } )
with sns.plotting_context(font_scale=5):
    sns.histplot(ax=axs[0], data=df_plot['Rotation Error'], bins=50, stat="probability", color=sns.color_palette("hls", 8)[2])
    axs[0].set_xlabel('Rotation Error in Degrees', fontsize=12)
    axs[0].set_ylabel('Probability', fontsize=12)
    sns.histplot(ax=axs[1], data=df_plot['Translation Error'], bins=50, stat="probability", color=sns.color_palette("hls", 8)[2])
    axs[1].set_xlabel('Translation Error in Meters', fontsize=12)
    axs[1].set_ylabel('Probability', fontsize=12)
#
# color = 'royalblue', kind='kde'
# ax.fig.suptitle('ADD-(S) vs EPE', fontsize=12) #, verticalalignment='bottom');
fig.savefig("/home/jonfrey/RPOSE/docs/init_posecnn.png", dpi = 600)
# -
results_adds
import seaborn as sns
from matplotlib import pyplot
sns.reset_defaults()
fig, axs = pyplot.subplots(2, figsize=(6,4))
fig.tight_layout(pad=3.0)
with sns.plotting_context(font_scale=5):
rots = [ posecnn_rotations, pred_rotations ]
trans = [posecnn_translations , pred_translations]
cols = [sns.color_palette("bright")[2] ,sns.color_palette("bright")[-1]] #[sns.color_palette("Paired")[0],sns.color_palette("Paired")[2]]
for rotations, translations,col in zip(rots, trans, cols):
r = rotations[rotations<np.inf]
t = translations[translations<np.inf]
m = (r<np.quantile(r, 0.95)) * (t<np.quantile(t, 0.95))
df_plot = pd.DataFrame.from_dict( {'Rotation Error': r[m] ,
'Translation Error': t[m] } )
sns.histplot(ax=axs[0], data = df_plot['Rotation Error'],bins=50 , stat="probability",color = col )
axs[0].set_xlabel('Rotation Error in Degree',fontsize=12)
axs[0].set_ylabel('Probability',fontsize=12)
sns.histplot(ax=axs[1], data = df_plot['Translation Error'] ,bins=50 , stat="probability",color = col )
axs[1].set_xlabel('Translation Error in Meter',fontsize=12)
axs[1].set_ylabel('Probability',fontsize=12)
#
# color = 'royalblue', kind='kde'
# ax.fig.suptitle('ADD-(S) vs EPE', fontsize=12) #, verticalalignment='bottom');
fig.savefig("/home/jonfrey/RPOSE/docs/init_posecnn_vs_output.png", dpi = 600)
# +
obj = data["idx_arr"]
sym = []
for ind in obj.tolist():
sym.append(
not (int(ind) + 1 in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 17, 18])
)
sym = np.array(sym)
rotations, translations = posecnn_rotations[sym == False], posecnn_translations[sym == False]
r = rotations[rotations<np.inf]
t = translations[translations<np.inf]
m = (r<np.quantile(r, 0.95)) * (t<np.quantile(t, 0.99))
print( np.mean( r[m] ),np.mean( t[m] ) )
rotations, translations = pred_rotations[sym == False], pred_translations[sym == False]
r = rotations[rotations<np.inf]
t = translations[translations<np.inf]
m = (r<np.quantile(r, 0.95)) * (t<np.quantile(t, 0.99))
np.mean( r[m] ),np.mean( t[m] )
# np.mean( posecnn_translations[posecnn_translations<np.inf] ), np.mean( pred_translations [pred_translations <np.inf])
# -
| notebooks/Analyze Evaluator Results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
# Silhouette Highlight
# ====================
#
# Extract a subset of the edges of a polygonal mesh to generate an outline
# (silhouette) of a mesh.
#
import pyvista
from pyvista import examples
# Prepare a triangulated `PolyData`
#
bunny = examples.download_bunny()
# Now we can display the silhouette of the mesh and compare the result:
#
plotter = pyvista.Plotter(shape=(1, 2))
plotter.subplot(0, 0)
plotter.add_mesh(bunny, color='tan', silhouette=True)
plotter.add_text("Silhouette")
plotter.view_xy()
plotter.subplot(0, 1)
plotter.add_mesh(bunny, color='tan')
plotter.add_text("No silhouette")
plotter.view_xy()
plotter.show()
# Maybe the default parameters are not enough to really notice the
# silhouette. But by using a `dict`, it is possible to modify the
# properties of the outline. For example, color and width could be
# specified like so:
#
plotter = pyvista.Plotter()
silhouette = dict(
color='red',
line_width=8.0,
)
plotter.add_mesh(bunny, silhouette=silhouette)
plotter.view_xy()
plotter.show()
# By default, PyVista uses a pretty aggressive decimation level but we
# might want to disable it. It is also possible to display sharp edges:
#
# +
cylinder = pyvista.Cylinder(center=(0, 0.04, 0), direction=(0, 1, 0),
radius=0.15, height=0.03).triangulate()
plotter = pyvista.Plotter(shape=(1, 3))
plotter.subplot(0, 0)
plotter.add_mesh(cylinder, color='tan', smooth_shading=True,
silhouette=dict(
color='red',
line_width=8.0,
decimate=None,
feature_angle=True))
plotter.add_text("Silhouette with sharp edges")
plotter.view_isometric()
plotter.subplot(0, 1)
plotter.add_mesh(cylinder, color='tan', smooth_shading=True,
silhouette=dict(
color='red',
line_width=8.0,
decimate=None))
plotter.add_text("Silhouette without sharp edges")
plotter.view_isometric()
plotter.subplot(0, 2)
plotter.add_mesh(cylinder, color='tan', smooth_shading=True)
plotter.add_text("No silhouette")
plotter.view_isometric()
plotter.show()
# -
# Here is another example:
#
# +
dragon = examples.download_dragon()
plotter = pyvista.Plotter()
plotter.set_background('black', 'blue')
plotter.add_mesh(dragon, color="green", specular=1, smooth_shading=True,
silhouette=dict(line_width=8, color='white'))
plotter.add_mesh(cylinder, color='tan', smooth_shading=True,
silhouette=dict(decimate=None, feature_angle=True,
line_width=8, color='white'))
plotter.camera_position = [
(-0.2936731887752889, 0.2389060430625446, 0.35138839367034236),
(-0.005878899246454239, 0.12495124898850918, -0.004603400826454163),
(0.34348225747312017, 0.8567703221182346, -0.38466160965007384)
]
plotter.show()
| examples/02-plot/silhouette.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Validate chandra_aca.attitude.calc_att
#
# Use the OBC y/z centroids and the star catalog y/z values and
# star catalog reference attitude to compute the estimated attitude
# as a function of time. Then compare to the OBC estimated attitude.
#
# This is done for [obsid 18098](http://kadi.cfa.harvard.edu/mica/?obsid_or_date=18098).
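# Comparing the estimated attitude to the OBC attitude reduces to measuring the
# angular separation between two quaternions. A minimal numpy-only sketch
# (quaternion in (x, y, z, w) order, matching the `Quaternion` package; the
# helper name `quat_angle_deg` is ours, not from chandra_aca):

```python
import numpy as np

def quat_angle_deg(q1, q2):
    # Total angular separation between two unit quaternions.
    # abs() handles the double cover (q and -q represent the same rotation).
    d = abs(float(np.dot(q1, q2)))
    return np.degrees(2.0 * np.arccos(min(d, 1.0)))

q_identity = np.array([0.0, 0.0, 0.0, 1.0])
q_90z = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])  # 90 deg about z
print(quat_angle_deg(q_identity, q_90z))  # → 90.0
```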
# +
import numpy as np
from Quaternion import Quat
from Ska.quatutil import radec2yagzag
from Ska.engarchive import fetch
from kadi import events
from astropy.io import ascii
import matplotlib.pyplot as plt
import agasc
from chandra_aca.attitude import calc_roll, calc_roll_pitch_yaw, _calc_roll_pitch_yaw, calc_att
# %matplotlib inline
# -
fetch.data_source.set('maude') # If on a remote computer without CXC data access
dwell = events.dwells.filter(obsid=18098)[0]
# Get OBC estimated attitude
atts = fetch.MSIDset(['aoattqt*'], dwell.tstart + 500, dwell.tstart + 1500)
# Get observed yag/zag data
msids = ['aoacyan{}'.format(ii) for ii in (3,4,5,6,7)] + ['aoaczan{}'.format(ii) for ii in (3,4,5,6,7)]
dats = fetch.MSIDset(msids, dwell.tstart + 480, dwell.tstart + 1520)
atts.interpolate(times=atts['aoattqt1'].times, bad_union=True)
# Subtract time offset between MAUDE OBC centroids and aoattqt
for msid in dats:
dats[msid].times -= 2.4 # from plot in https://github.com/sot/chandra_aca/pull/25
dats.interpolate(times=atts.times)
q_atts = np.array([atts['aoattqt{}'.format(ii)].vals for ii in (1,2,3,4)]).transpose()
yags_obs = np.array([dats['aoacyan{}'.format(ii)].vals for ii in (3,4,5,6,7)]).transpose()
zags_obs = np.array([dats['aoaczan{}'.format(ii)].vals for ii in (3,4,5,6,7)]).transpose()
yags_obs
# Obsid 18098 guide stars
cat = """
3 327689032 BOT 6x6 0.985 6.353 7.859 -647 -1472 20 1 120
4 327691744 BOT 6x6 0.985 8.596 10.094 -95 -600 20 1 120
5 327691864 BOT 6x6 0.985 7.985 9.484 1563 -358 20 1 120
6 327681976 GUI 6x6 --- 8.026 9.531 -1701 -2331 1 1 25
7 327691984 GUI 6x6 --- 7.894 9.391 -2320 -1570 1 1 25"""
cat = ascii.read(cat, format='no_header')
q_att = Quat([ 142.668983, 36.033299, 176.121686])
yags = []
zags = []
for agasc_id in cat['col2']:
star = agasc.get_star(agasc_id, date='2016:039:09:23:10')
yag, zag = radec2yagzag(star['RA_PMCORR'], star['DEC_PMCORR'], q_att)
yags.append(yag * 3600)
zags.append(zag * 3600)
# Compute the attitudes from observed centroids
q_atts_obs = calc_att(q_att, yags, zags, yags_obs, zags_obs)
# Get attitude differences as delta roll, pitch, yaw values
dr = []
dp = []
dy = []
for q_att_obs, q_att in zip(q_atts_obs, q_atts):
q_att = Quat(q_att)
dq = q_att.dq(q_att_obs)
dr.append(dq.roll0 * 3600)
dp.append(dq.pitch * 3600)
dy.append(dq.yaw * 3600)
times = dats.times - dats.times[0]
plt.plot(times, dr)
plt.grid()
plt.title('Delta roll')
plt.ylabel('offset (arcsec)')
plt.xlabel('time (sec)');
plt.plot(times, dp)
plt.grid()
plt.title('Delta pitch')
plt.ylabel('offset (arcsec)')
plt.xlabel('time (sec)');
plt.plot(times, dy)
plt.grid()
plt.title('Delta yaw')
plt.ylabel('offset (arcsec)')
plt.xlabel('time (sec)');
| validate/calc_att_validate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle as pkl
import numpy as np

path = '\\'  # path to the directory containing the Planetoid files
dataset = "cora"
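# The cell below calls `parse_index_file`, which is not defined in this
# notebook. A minimal sketch matching the helper from the original
# Planetoid/GCN loading code (one integer index per line):

```python
def parse_index_file(filename):
    """Parse an index file containing one integer per line."""
    index = []
    for line in open(filename):
        index.append(int(line.strip()))
    return index
```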
# +
names = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']
objects = []
for i in range(len(names)):
with open("{}/ind.{}.{}".format(path, dataset, names[i]), 'rb') as f:
objects.append(pkl.load(f))
x, y, tx, ty, allx, ally, graph = tuple(objects)
test_idx_reorder = parse_index_file("{}/ind.{}.test.index".format(path, dataset))
test_idx_range = np.sort(test_idx_reorder)
# -
| data/Planetoid/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# <font size="+5">#00 | Introduction Motivation & Learning Philosophy</font>
#
# # The challenge
# -
# - [ ] What's your context right now?
# - Studies
# - Job
# - Life
#
# > Be honest with yourself and write down every single detail here ↓
# # The covered solution
#
# - [ ] What's your ultimate goal in learning Python?
# - Get a job?
# - Change career?
# - Promote in your job?
# - For the sake of learning?
#
# > Be honest with yourself and write down every single detail here ↓
# # What will we learn?
# - Fundamental Concept 1
# - Fundamental Concept 2
# # Requirements?
# - Mandatory Requirement 1
# - Mandatory Requirement 2
# # Let’s develop the solution
#
# ## The starting *thing*
# - Imagine that you already know everything to achieve your goal...
# - [ ] How can they trust
# - `error_1`
# ## Concept needed for `error_1`
# - Parent concept and its elements:
# 1. Element 1
# 2. Element 2
# 3. Element …
# - `solution_1`
# - Solve the `error_1`
# - Get an `error_2` from `solution_1`
# ## Concept needed for `error_2`
# - Parent concept and its elements:
# 1. Element 1
# 2. Element 2
# 3. Element …
# - `solution_2`
# - Solve the `error_2`
# - Get an `error_x` from `solution_2`
# ## Concept needed for `error_x`
# - Parent concept and its elements:
# 1. Element 1
# 2. Element 2
# 3. Element …
# - `solution_x`
# - Solve the `error_x`
# - Do you have a good learning habit?
# ## Python Resolver Discipline for Learning
#
# 1. **Humbleness**
# - ❌ Reach pyramid's top in one jump
# - ✅ Reach pyramid's top step by step
# 2. **Flexible Thinking**
# - ❌ There is a perfect solution and only 1 way to get there
# - ✅ There are many paths to get to Rome, be happy with one that gives you the result you want
# 3. **Curiosity on the Error**
# - ❌ Just solve the problem
# - ✅ Check the doubts with counter-examples
# - ✅ Ask yourself why does it work this way? Why doesn't work this other way?
# - ✅ Write code to try it out & leave it there
# 4. **Fundamental Knowledge**
# - ❌ You need to learn every single detail
# - ✅ You need to learn the common patterns in the syntaxis that always behave the same way
# 5. **Notes**
# - ❌ Assume you've learnt it in your head
# - ✅ Reflect it on a piece of paper that you can later refer to solve repetitive doubts/errors
# 6. **Efficient Learning**
# - ❌ Stuck for 20 minutes because you are trying to solve a problem way above your level
# - ✅ Sometimes, you need to pass 20 pages to understand a previous scene
# - ✅ Move on with other basic concept because you will take more advantage of the time
#
#
# ### Solution:
#
# - ✅ Read this every single time you are on your own & take notes about how you measured up
#
# ### Error:
#
# - ❌ Going over the exercises without these friendly reminders will cause you a lot of suffering
# # The uncovered solution
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Recap on the learning
# - Fundamental Concept 1
# - [ ] `solution`
# - Fundamental Concept 2
# - [ ] `solution`
# -
# # References
# - Link Reference 1
# - Link Reference 2
| 00_Introduction Motivation & Philosophy/00_Philosophy of the Course.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Installation process
# 1) Install the robotpkg binaries which are needed for this exercise:
#
# ```
# sudo apt-get install robotpkg-sot-talos robotpkg-talos-dev
# sudo apt-get install robotpkg-py27-qt4-gepetto-viewer-corba
# ```
# #### Alternatives for installing Jupyter
#
# 2a) Use ```pip install jupyter``` (but be careful: it may break your environment)
#
# 2b) Configure a dedicated environment with [virtualenv](https://virtualenv.pypa.io/en/stable/):
#
# - In the directory containing the jupyter notebook download the _virtualenv_ source, extract and install it:
#
# ```
# curl --location --output virtualenv-master.tar.gz https://github.com/pypa/virtualenv/tarball/master
# tar xvfz virtualenv-master.tar.gz
# cd pypa-virtualenv-master
# python virtualenv.py myVE
# ```
#
# - Then activate your environment:
#
# ```source myVE/bin/activate```
#
#
# - And install the needed python packages:
#
# ```pip install jupyter numpy matplotlib```
#
# - WARNING: for some obscure reasons, virtualenv removes /usr/lib/python2.7/dist-packages from PYTHONPATH. You may need to re-add it.
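# The removed directory can be restored manually after activating the
# virtualenv (the path below assumes a system Python 2.7; adjust to your
# installation):

```shell
# Re-add the system dist-packages directory that virtualenv dropped
export PYTHONPATH="${PYTHONPATH}:/usr/lib/python2.7/dist-packages"
```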
# 3) Source your terminal with the use of this [script](http://robotpkg.openrobots.org/install.html) to setup your environment variables to find _openrobots_ installation
#
# 4) Make sure you have placed the plot_utils.py in the parent directory of the jupyter notebooks
#
# 5) Start the notebook with the command:
#
# ```jupyter notebook```
#
# 6) Finally, if you chose to use _virtualenv_, deactivate your environment with the command:
#
# ```deactivate```
| exercizes/notebooks/Installation process.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Demo Z2 Self-Attention
#
# In this demo, we will analyze the equivariance properties of (conventional) z2 self-attention models.
#
# First, we will demonstrate the translation equivariance of the model and, subsequently, analyze its rotation and flipping equivariance properties.
# + [markdown] pycharm={"name": "#%%\n"}
# ## Importing Libraries
# Add the library to the system path via the relative folder structure:
# -
import os,sys
g_selfatt_source = os.path.join(os.getcwd(),'..')
if g_selfatt_source not in sys.path:
sys.path.append(g_selfatt_source)
# Import the necessary libraries:
# torch
import torch
import torch.nn as nn
# project
import g_selfatt
# other
from matplotlib import pyplot as plt
# ## Z2 Self-Attention Layers
#
# In what follows we take:
#
# * a random noise image f as input
# * apply a sequence of z2 self-attention layers to it f -> N(f)
# * translate the input (T(f)) via the action of the translation group on f and send it through the same sequence of layers (T(f) -> N(T(f)))
# * then we test the equivariance property T'(N(f)) = N(T(f)). Here, T denotes the translation operator on 2D images, and T' the translation operator on feature maps.
#
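# The equivariance property above can be illustrated with a toy numpy example:
# circular (FFT-based) convolution commutes with `np.roll`, which is exactly
# the property tested for the self-attention layers below (the function name
# `circular_conv2d` is ours, for illustration only):

```python
import numpy as np

def circular_conv2d(f, k):
    # Circular 2-D convolution via the FFT; translation-equivariant by construction.
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(k, s=f.shape)))

rng = np.random.default_rng(0)
f = rng.standard_normal((15, 15))  # "image"
k = rng.standard_normal((3, 3))    # "filter"

out_then_shift = np.roll(circular_conv2d(f, k), (2, 2), axis=(0, 1))
shift_then_out = circular_conv2d(np.roll(f, (2, 2), axis=(0, 1)), k)
print(np.allclose(out_then_shift, shift_then_out))  # → True
```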
# Now, let us create a net with two self-attention layers, each using 3 ** 2 = 9 attention heads:
# ### The input feature map
Nxy = 15 # Spatial dimension
N_in = 10 # Number of input feature channels
B = 4 # Batch size
# For now we work with a placeholder
inputs = torch.randn([B,N_in,Nxy,Nxy], dtype=torch.float32)
inputs[:,:, :4, :] = 0.0
inputs[:,:, :, :4] = 0.0
inputs[:,:, -4:,:] = 0.0
inputs[:,:, :, -4:] = 0.0
# ### Attention Layers
# +
# Layer parameters
num_heads = 3 ** 2
# construct layers
sa_1 = g_selfatt.nn.RdSelfAttention(N_in, N_in, N_in * 2, num_heads, Nxy, 0.0)
sa_2 = g_selfatt.nn.RdSelfAttention(N_in * 2, N_in, N_in * 4, num_heads, Nxy, 0.0)
# -
# ### Test the network - Translation Equivariance
#
# We create random noise input and translated noise:
# +
input_tensor = inputs
input_tensor_trans = torch.roll(inputs, (2,2), dims=(-2,-1))
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(input_tensor.numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(input_tensor_trans.numpy()[0,0,:,:,])
plt.show()
# -
# Pass the original random signal to the network and then its translated version
# +
out_1 = sa_1(input_tensor)
out_2 = sa_2(out_1)
out_1_trans = sa_1(input_tensor_trans)
out_2_trans = sa_2(out_1_trans)
# -
# Let's compare the results
# +
print('First layer:')
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(out_1.detach().numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(out_1_trans.detach().numpy()[0,0,:,:,])
plt.show()
print('Second layer:')
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(out_2.detach().numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(out_2_trans.detach().numpy()[0,0,:,:,])
plt.show()
# -
# ### Test the network - Rotation Equivariance
#
# Now, let's analyze what happens if the input image is rotated. Let's first look at the input:
# +
input_tensor = inputs
input_tensor_90 = inputs.rot90(k=1, dims=[-2,-1])
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(input_tensor.numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(input_tensor_90.numpy()[0,0,:,:,]);
plt.show()
# -
# Pass the original random signal to the network and then its rotated version
# +
out_1 = sa_1(input_tensor)
out_2 = sa_2(out_1)
out_1_90 = sa_1(input_tensor_90)
out_2_90 = sa_2(out_1_90)
# -
# Let's compare the results
# +
print('First layer:')
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(out_1.detach().numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(out_1_90.detach().numpy()[0,0,:,:,])
plt.show()
print('Second layer:')
f,axs = plt.subplots(1,2,figsize=(10,10))
plt.subplot(1,2,1);plt.imshow(out_2.detach().numpy()[0, 0,:, :])
plt.subplot(1,2,2);plt.imshow(out_2_90.detach().numpy()[0,0,:,:,])
plt.show()
# -
# As we can see, the responses are very different for rotated versions of the same image.
# ## Important!
# Please check the other notebooks in this folder to see how equivariant self-attention can be used.
| demo/0_z2_selfattention.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bbo
# language: python
# name: bbo
# ---
# +
from copy import deepcopy
import random
import numpy as np
import scipy.stats as ss
import math
import matplotlib
import matplotlib.pyplot as plt
import sys
sys.path.insert(0, "/home/sungnyun/custom_turbo/")
#sys.path.insert(0, "/home/nakyil/jupyter/git_repo/bbo/custom_turbo/")
from turbo import Turbo1
from turbo.utils import from_unit_cube, latin_hypercube, to_unit_cube
from bayesmark.abstract_optimizer import AbstractOptimizer
from bayesmark.experiment import experiment_main
from bayesmark.space import JointSpace
from sklearn import svm
from sklearn.cluster import KMeans
from sklearn.preprocessing import MaxAbsScaler, RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.cluster import SpectralClustering
import time
import torch
def order_stats(X):
_, idx, cnt = np.unique(X, return_inverse=True, return_counts=True)
obs = np.cumsum(cnt) # Need to do it this way due to ties
o_stats = obs[idx]
return o_stats
def copula_standardize(X):
    X = np.nan_to_num(np.asarray(X))  # replace NaN with 0 and ±inf with large finite values
assert X.ndim == 1 and np.all(np.isfinite(X))
o_stats = order_stats(X)
quantile = np.true_divide(o_stats, len(X) + 1)
X_ss = ss.norm.ppf(quantile)
return X_ss
def softmax(a) :
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
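# Note: the `softmax` above overflows for large inputs (`np.exp(1000)` is
# `inf`). A numerically stable variant, shown as a hedged alternative rather
# than a change to the code in use:

```python
import numpy as np

def stable_softmax(a):
    # Subtracting the max before exponentiating avoids overflow;
    # the result is mathematically identical to exp(a) / sum(exp(a)).
    a = np.asarray(a, dtype=float)
    exp_a = np.exp(a - a.max())
    return exp_a / exp_a.sum()

print(stable_softmax([1000.0, 1001.0]))  # finite; the naive version returns nan here
```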
def reject_outliers(data, m=.05):
up = np.quantile(data, q=1.-m)
up_list = data < up
return up_list
#data - np.mean(data) < m * np.std(data)
class AdvancedTurbo:
def __init__(self, api_config, **kwargs):
"""
Parameters
----------
api_config : dict-like of dict-like
Configuration of the optimization variables. See API description.
"""
self.space_x = JointSpace(api_config)
reference = {}
for i, orig_key in enumerate(self.space_x.param_list):
reference[i] = orig_key
self.api_config = {param_name:api_config[param_name] for _, param_name in sorted(reference.items())}
return_true_dim_dict = {}
previous = 0
for idx1, b in enumerate(self.space_x.blocks):
for val in range(previous, b):
return_true_dim_dict[val] = idx1
previous = b
self.return_true_dim = return_true_dim_dict
self.bounds = self.space_x.get_bounds()
self.lb, self.ub = self.bounds[:, 0], self.bounds[:, 1]
self.dim = len(self.bounds)
self.max_evals = np.iinfo(np.int32).max # NOTE: Largest possible int
self.batch_size = None
self.history = []
self.epoch = 0
self.int_dim, self.bool_dim, self.float_dim, self.cat_dim, self.cat_loc = self.find_loc()
self._mab_var = ['bool','cat'] #['int']
self.params = self.initialize_mab()
self._discount = 1.0
self.adapt_region = None
self.turbo = Turbo1(
f=None,
lb=self.bounds[:, 0],
ub=self.bounds[:, 1],
n_init= max(30,2 * self.dim + 1),
max_evals=self.max_evals,
batch_size=1, # We need to update this later
verbose=False,
)
self.span = 2 ** (1)
self.squeeze = 2 ** (-1)
self.classifier = make_pipeline(RobustScaler(),svm.SVC(kernel='rbf'))
self.selected_label = None
self.initiated = False
# values for UCB
self.c_p = 0 # percentage of max function value
def select_region_by_ucb(self,y,labels):
y_1 = y[np.where(labels==np.unique(labels)[0])]
y_2 = y[np.where(labels==np.unique(labels)[1])]
ucb1 = - y_1.mean()/len(y_1) - self.c_p*y[(-y).argmax().item()]*np.sqrt(2*np.log(len(y))/len(y_1))
ucb2 = - y_2.mean()/len(y_2) - self.c_p*y[(-y).argmax().item()]*np.sqrt(2*np.log(len(y))/len(y_2))
selected_label = np.argmax([ucb1,ucb2])
return selected_label
def train_classifier(self,X,fX):
        fX[np.isinf(fX)] = np.nanmax(fX[fX != np.inf])  # replace inf with the largest finite value
data_points = np.hstack((X,fX))
mas = RobustScaler()
rescaled_data = mas.fit_transform(data_points)
rescaled_data[:,-1] *= np.sqrt(self.dim) #rescaled for fX
# data_points = fX
# data_points = fX[np.where(reject_outliers(fX))]
# data_points = np.expand_dims(data_points, axis=-1)
# mas = RobustScaler()
# rescaled_data = mas.fit_transform(data_points)
KM = KMeans(n_clusters=2)
labels = KM.fit_predict(rescaled_data)
self.classifier.fit(X, labels) #, **kwargs)
return labels
def restart(self):
X_init = latin_hypercube(self.turbo.n_init, self.dim)
if self.initiated is False: # when initiating
self.X_init = from_unit_cube(X_init, self.lb, self.ub)
else: # if it is restarting, initiate turbo within selected region
X_init = from_unit_cube(X_init, self.lb, self.ub)
_ = self.train_classifier(deepcopy(self.turbo._X),deepcopy(self.turbo._fX))
self.selected_label = self.classifier.predict(self.turbo._X[self.turbo._fX.argmin().item()][None,:])
y_pred = self.classifier.predict(X_init)
self.X_init = X_init[np.where(y_pred==self.selected_label)]
self.turbo._restart() # reset succ&fail count, length
self.turbo._X = np.zeros((0, self.turbo.dim))
self.turbo._fX = np.zeros((0, 1))
self.epoch = 0
def make_type_list(self):
int_where_type = [False] * len(self.api_config)
bool_where_type = [False] * len(self.api_config)
float_where_type = [False] * len(self.api_config)
cat_where_type = [False] * len(self.api_config)
for ind, param in enumerate(self.api_config):
if self.api_config[param]['type'] == 'int':
int_where_type[ind] = True
elif self.api_config[param]['type'] == 'real':
float_where_type[ind] = True
elif self.api_config[param]['type'] == 'bool':
bool_where_type[ind] = True
elif self.api_config[param]['type'] == 'cat':
cat_where_type[ind] = True
return int_where_type, bool_where_type, float_where_type, cat_where_type
def find_loc(self):
# data_type = np.float64 # np.float64 # np.unicode_
# space: int - np.int_, float: np.float_, bool: np.bool_, cat: np.unicode_
int_where_type, bool_where_type, float_where_type, cat_where_type = self.make_type_list()
len_each_bounds = np.array([len(self.space_x.spaces[param].get_bounds()) for param in self.space_x.param_list])
blocks = self.space_x.blocks
int_where_end = blocks[np.where(int_where_type)]
bool_where_end = blocks[np.where(bool_where_type)]
float_where_end = blocks[np.where(float_where_type)]
cat_where_end = blocks[np.where(cat_where_type)]
int_where_bound = len_each_bounds[np.where(int_where_type)]
bool_where_bound = len_each_bounds[np.where(bool_where_type)]
float_where_bound = len_each_bounds[np.where(float_where_type)]
cat_where_bound = len_each_bounds[np.where(cat_where_type)]
int_intervals = [(int_where_end[idx]- int_where_bound[idx] ,int_where_end[idx]) for idx in range(len(int_where_end))]
bool_intervals = [(bool_where_end[idx]- bool_where_bound[idx] , bool_where_end[idx]) for idx in range(len(bool_where_end))]
float_intervals = [(float_where_end[idx]- float_where_bound[idx] , float_where_end[idx]) for idx in range(len(float_where_end))]
cat_intervals = [(cat_where_end[idx]- cat_where_bound[idx] ,cat_where_end[idx]) for idx in range(len(cat_where_end))]
int_dim = []
bool_dim = []
float_dim = []
cat_dim = []
# int interval
if int_intervals:
for (s, e) in int_intervals:
for idx in range(s,e):
int_dim.append(idx)
# bool_interval
if bool_intervals:
for (s, e) in bool_intervals:
for idx in range(s,e):
bool_dim.append(idx)
# float_interval
if float_intervals:
for (s, e) in float_intervals:
for idx in range(s,e):
float_dim.append(idx)
# cat_interval
if cat_intervals:
for (s, e) in cat_intervals:
for idx in range(s,e):
cat_dim.append(idx)
        cat_where_loc = [-1] * max(blocks)  # blocks is a cumsum, so its max equals the total warped dimension
for location in cat_dim:
cat_where_loc[location] = 0
return int_dim, bool_dim, float_dim, cat_dim, cat_where_loc
def initialize_mab(self):
params = {}
if 'int' in self._mab_var:
for dim in self.int_dim:
gt_lb = self.space_x.unwarp(self.lb[None,:])[0][self.space_x.param_list[self.return_true_dim[dim]]]
gt_ub = self.space_x.unwarp(self.ub[None,:])[0][self.space_x.param_list[self.return_true_dim[dim]]]
if gt_ub-gt_lb < 11:
params[dim] = {}
for num in range(int(gt_lb), int(gt_ub+1)):
params[dim][num] = {'alpha': 1., 'beta': 1.}
if 'bool' in self._mab_var:
for dim in self.bool_dim:
params[dim] = {}
for num in range(int(self.lb[dim]), int(self.ub[dim]+1)):
params[dim][num] = {'alpha': 1., 'beta': 1.}
if 'cat' in self._mab_var:
if 0 in np.unique(self.cat_loc):
params['cat'] = {}
for dim in np.unique(self.cat_loc):
if dim != -1:
params['cat'][dim]={}
for cor_dim in np.where(self.cat_loc==dim)[0]:
params['cat'][dim][cor_dim] = {'alpha': 1., 'beta': 1.}
return params
def sample_mab(self):
result = {}
if self.params:
for dim_key in self.params.keys():
if dim_key != 'cat':
best = - 1.
for can_key in self.params[dim_key].keys():
tmp = np.random.beta(self.params[dim_key][can_key]['alpha'], self.params[dim_key][can_key]['beta'])
if tmp > best:
best = tmp
best_cand = self.space_x.spaces[self.space_x.param_list[self.return_true_dim[dim_key]]].warp(can_key)
result[dim_key] = best_cand
else:
for cat_key in self.params['cat'].keys():
tmp_list = []
for can_key in self.params['cat'][cat_key].keys():
tmp = np.random.beta(self.params['cat'][cat_key][can_key]['alpha'], self.params['cat'][cat_key][can_key]['beta'])
tmp_list.append(tmp)
argmax_list = [1. if idx == np.argmax(tmp_list) else 0. for idx in range(len(tmp_list))]
for idx, can_key in enumerate(self.params['cat'][cat_key].keys()):
result[can_key] = argmax_list[idx]
return result
def update_mab(self, XX, fX_next, random=False):
for index in range(len(fX_next)):
if random:
if round(fX_next[index][0],4) < round(np.min(fX_next),4) + 1e-5:
if self.params:
for key in self.params.keys():
if key != 'cat':
if key in self.int_dim:
unwarped_X = self.space_x.unwarp(XX[index][None,:])[0][self.space_x.param_list[self.return_true_dim[key]]]
self.params[key][unwarped_X]['alpha'] += 3.
if unwarped_X + 1 in self.params[key].keys():
self.params[key][unwarped_X + 1]['alpha'] += 1.
if unwarped_X - 1 in self.params[key].keys():
self.params[key][unwarped_X - 1]['alpha'] += 1.
if unwarped_X + 2 in self.params[key].keys():
self.params[key][unwarped_X + 2]['alpha'] += 0.5
if unwarped_X - 2 in self.params[key].keys():
self.params[key][unwarped_X - 2]['alpha'] += 0.5
else:
self.params[key][XX[index][key]]['alpha'] += 1.5
else:
for cat_key in self.params['cat'].keys():
for can_key in self.params['cat'][cat_key].keys():
self.params['cat'][cat_key][can_key]['alpha'] += XX[index][can_key] * 1.5
elif fX_next[index][0] < np.min(self.turbo._fX) - 1e-5 * math.fabs(np.min(self.turbo._fX)):
if self.params:
for key in self.params.keys():
if key != 'cat':
if key in self.int_dim:
unwarped_X = self.space_x.unwarp(XX[index][None,:])[0][self.space_x.param_list[self.return_true_dim[key]]]
self.params[key][unwarped_X]['alpha'] += 2.5 #max(2.5, len(self.params[key].keys()) / self.batch_size)
if unwarped_X + 1 in self.params[key].keys():
self.params[key][unwarped_X + 1]['alpha'] += 1.5
if unwarped_X - 1 in self.params[key].keys():
self.params[key][unwarped_X - 1]['alpha'] += 1.
if unwarped_X + 2 in self.params[key].keys():
self.params[key][unwarped_X + 2]['alpha'] += 0.5
if unwarped_X - 2 in self.params[key].keys():
self.params[key][unwarped_X - 2]['alpha'] += 0.5
else:
self.params[key][XX[index][key]]['alpha'] += 1.5
else:
for cat_key in self.params['cat'].keys():
for can_key in self.params['cat'][cat_key].keys():
self.params['cat'][cat_key][can_key]['alpha'] += XX[index][can_key] * 1.5
# else:
# for key in self.params.keys():
# self.params[key][XX[index][key]]['beta'] += 1 / 8
def discount_mab(self):
# discount other params
_discount = self._discount
for dim_key in self.params.keys():
if dim_key != 'cat':
for can_key in self.params[dim_key].keys():
self.params[dim_key][can_key]['alpha'] *= _discount
self.params[dim_key][can_key]['beta'] *= _discount
else:
catparam = self.params['cat']
for can_key in catparam.keys():
for val_key in catparam[can_key].keys():
catparam[can_key][val_key]['alpha'] *= _discount
catparam[can_key][val_key]['beta'] *= _discount
def subsample_mab(self, X_cand):
X_cand_tmp = from_unit_cube(X_cand,self.lb,self.ub)
for index in range(len(X_cand)):
tmp = self.sample_mab()
if tmp:
for key in tmp.keys():
X_cand[index][key] = float((tmp[key]-self.lb[key])/(self.ub[key]-self.lb[key]))
return X_cand
def suggest(self, n_suggestions=10):
if self.batch_size is None: # Remember the batch size on the first call to suggest
self.batch_size = n_suggestions
self.turbo.batch_size = n_suggestions
self.turbo.failtol = np.ceil(np.max([4.0 / self.batch_size, self.dim / self.batch_size]))
self.turbo.n_init = max([self.turbo.n_init, self.batch_size])
self.restart()
X_next = np.zeros((n_suggestions, self.dim))
# Pick from the initial points
n_init = min(len(self.X_init), n_suggestions)
if n_init > 0:
X_next[:n_init] = deepcopy(self.X_init[:n_init, :])
self.X_init = self.X_init[n_init:, :] # Remove these pending points
self.initiated = True
# Get remaining points from TuRBO
n_adapt = n_suggestions - n_init
if n_adapt > 0 and self.initiated is True and len(self.turbo._X) > 0: ## n_suggestion 1
kmeans_labels = self.train_classifier(deepcopy(self.turbo._X),deepcopy(self.turbo._fX))
if n_adapt > 0:
if len(self.turbo._X) > 0: # Use random points if we can't fit a GP
if self.adapt_region == 'ucb':
self.selected_label = self.select_region_by_ucb(deepcopy(self.turbo.fX), kmeans_labels)  # labels produced by train_classifier above
else:
input_labels = self.classifier.predict(deepcopy(self.turbo._X))
self.selected_label = input_labels[self.turbo._fX.argmin().item()]
x_select = deepcopy(self.turbo._X)[np.where(input_labels==self.selected_label)]
y_select = deepcopy(self.turbo._fX)[np.where(input_labels==self.selected_label)]
# create TR with the center point inside the selected region
X = to_unit_cube(x_select, self.lb, self.ub)
fX = copula_standardize(y_select.ravel()) # Use Copula
## update on 10.04: below code does not solved the suggest exception error on leaderboard
## though the exception appears occasionally
## so i guess this is not the case.. but i'll just leave it as it is
sel_y_cand = np.array([])
timeout = 2
time_started = time.time()
while len(sel_y_cand) < n_adapt and time.time()<time_started+timeout:
X_cand, _ = self.turbo._create_candidates(
X, fX, length=self.turbo.length, n_training_steps=80, hypers={}, epoch = self.epoch, int_dim = self.int_dim, bool_dim = self.bool_dim, float_dim = self.float_dim, cat_dim = self.cat_dim
)
X_cand = self.subsample_mab(X_cand)
y_cand = self.turbo.generate_ycand(X_cand)
# reject that are out of range using classifier
label_X_cand = self.classifier.predict(from_unit_cube(X_cand,self.lb,self.ub))
sel_X_cand = X_cand[np.where(label_X_cand==self.selected_label)]
sel_y_cand = y_cand[np.where(label_X_cand==self.selected_label)]
# also select the candidates from the selected region
if len(sel_y_cand) >= n_adapt:
X_next[-n_adapt:, :] = self.turbo._select_candidates(sel_X_cand, sel_y_cand)[:n_adapt, :]
else:
X_next[-n_adapt:, :] = self.turbo._select_candidates(X_cand, y_cand)[:n_adapt, :]
X_next[-n_adapt:, :] = from_unit_cube(X_next[-n_adapt:, :], self.lb, self.ub)
else:
# code below is for the case
# when restarted, but num of X_init that satisfies the classifier is smaller than n_suggestion
# create more samples
# print("iterating for extra samples..")
timeout = 1
time_started = time.time()
while time.time() < time_started + timeout:
X_init = latin_hypercube(self.turbo.n_init, self.dim)
X_init = from_unit_cube(X_init, self.lb, self.ub)
y_pred = self.classifier.predict(X_init)
extra_X_init = X_init[np.where(y_pred==self.selected_label)]
if len(extra_X_init) < n_adapt:
continue
else:
X_next[-n_adapt:, :] = deepcopy(extra_X_init[:n_adapt, :])
break
else:
# create additional samples (random, do not fit in the satisfaction)
extra_X = latin_hypercube(self.turbo.n_init, self.dim)
extra_X = from_unit_cube(extra_X, self.lb, self.ub)
X_next[-n_adapt:, :] = extra_X[:n_adapt, :]
# Unwarp the suggestions
suggestions = self.space_x.unwarp(X_next)
self.epoch += 1
return suggestions
def observe(self, X, y):
"""Send an observation of a suggestion back to the optimizer.
Parameters
----------
X : list of dict-like
Places where the objective function has already been evaluated.
Each suggestion is a dictionary where each key corresponds to a
parameter being optimized.
y : array-like, shape (n,)
Corresponding values where objective has been evaluated
"""
assert len(X) == len(y)
XX, yy = self.space_x.warp(X), np.array(y)[:, None]
# print('int', self.int_dim, 'bool', self.bool_dim, 'float', self.float_dim, 'length', self.turbo.length)
if len(self.turbo._fX) < self.turbo.n_init:
self.update_mab(XX, yy, random=True)
if len(self.turbo._fX) >= self.turbo.n_init:
self.update_mab(XX, yy)
self.turbo._adjust_length(yy, self.span, self.squeeze)
self.turbo.n_evals += self.batch_size
self.turbo._X = np.vstack((self.turbo._X, deepcopy(XX)))
self.turbo._fX = np.vstack((self.turbo._fX, deepcopy(yy)))
self.turbo.X = np.vstack((self.turbo.X, deepcopy(XX)))
self.turbo.fX = np.vstack((self.turbo.fX, deepcopy(yy)))
# Check for a restart
# if self.turbo.volume < self.turbo.vol_min:# and self.flag:
# self.restart()
if self.turbo.length < self.turbo.length_min:
self.restart()
print("restart")
def get_fX(self, x, f):
"""
x : Unwarped suggestion
f : Function to optimize
"""
XX = self.space_x.warp(x)
y = np.array([[f(X)] for X in XX])
if y.ndim > 1 :
y = y.reshape(-1)
return y
def optimize(self, f, num_evals=10, n_suggestions=10): # Suggest + Observe
min_yy = float("inf")
for e in range(num_evals):
suggestions = self.suggest(n_suggestions=n_suggestions)
yy = self.get_fX(suggestions, f)
self.observe(suggestions, yy)
if yy.min() < min_yy :
min_yy = yy.min()
print("Evaluation iter : {}, yy minimum : {}".format(e, yy.min()))
# +
class Levy:
def __init__(self, dim=10):
self.dim = dim
self.lb = -5 * np.ones(dim)
self.ub = 10 * np.ones(dim)
def __call__(self, x):
assert len(x) == self.dim
assert x.ndim == 1
assert np.all(x <= self.ub) and np.all(x >= self.lb)
w = 1 + (x - 1.0) / 4.0
val = np.sin(np.pi * w[0]) ** 2 + \
np.sum((w[1:self.dim - 1] - 1) ** 2 * (1 + 10 * np.sin(np.pi * w[1:self.dim - 1] + 1) ** 2)) + \
(w[self.dim - 1] - 1) ** 2 * (1 + np.sin(2 * np.pi * w[self.dim - 1])**2)
return val
class Ackley:
def __init__(self, dims=10):
self.dims = dims
self.lb = -5 * np.ones(dims)
self.ub = 10 * np.ones(dims)
def __call__(self, x):
assert len(x) == self.dims
assert x.ndim == 1
assert np.all(x <= self.ub) and np.all(x >= self.lb)
result = (-20*np.exp(-0.2 * np.sqrt(np.inner(x,x) / x.size )) -np.exp(np.cos(2*np.pi*x).sum() /x.size) + 20 +np.e )
return result
def print_avg(results):
print("Albo Levy average :{}, Ackley : {}".format(np.average(results['levy']), np.average(results['ackley'])) )
print("Albo Levy STD :{}, Ackley : {}".format(np.std(results['levy']), np.std(results['ackley'])) )
def save_plot(fX, func):
matplotlib.rcParams.update({'font.size': 16})
plt.plot(fX, 'b.', ms=10) # Plot all evaluated points as blue dots
plt.plot(np.minimum.accumulate(fX), 'r', lw=3) # Plot cumulative minimum as a red line
plt.xlim([0, len(fX)])
plt.ylim([0, 30])
plt.title("10D {} function".format(func))
plt.tight_layout()
#plt.savefig("svm_local_0.45.png")
plt.show()
# -
# # 5 Runs for Levy, Ackley
# +
results = {}
plot_results = {}
dimension=10
for func in ["levy", "ackley"]:
if func == "levy":
f = Levy(dimension)
else :
f = Ackley(dimension)
api_config = {}
for i in range(dimension):
api_config["dim_"+str(i)] = {"type" : "real", "space" : "linear", "range" : (f.lb[0], f.ub[0])}
min_result = np.zeros(0)
for random_seed in range(5):
print("="*20)
print("Seed : ", random_seed)
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)
# torch.cuda.manual_seed(random_seed)
# torch.cuda.manual_seed_all(random_seed) # if use multi-GPU
# torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = False
# Optimize
AT = AdvancedTurbo(api_config)
AT.optimize(f, num_evals=100, n_suggestions=10)
# Collect Minimum value
fX = AT.turbo.fX
min_result = np.concatenate((min_result, min(fX)))
plot_results[func+'_'+str(random_seed)] = fX.tolist()
results[func] = min_result.tolist()
# -
print_avg(results)
print("Levy : {}, Ackley : {}".format(np.average(results['levy']) ,np.average(results['ackley'])))
print("Levy : {}, Ackley : {}".format(np.average(results['levy']) ,np.average(results['ackley'])))
# +
# import json
# fname = "../results/albo_result_batch10_eval100.json"
# with open(fname, 'w') as f:
# json.dump(results, f)
# +
# fname = "../results/albo_plot_result_batch10_eval100.json"
# with open(fname, 'w') as f:
# json.dump(plot_results, f)
# -
# ## Save Graph
# + jupyter={"outputs_hidden": true}
plot_results = {}
for func in ["levy", "ackley"]:
if func == "levy":
f = Levy(dimension)
else :
f = Ackley(dimension)
api_config = {}
for i in range(dimension):
api_config["dim_"+str(i)] = {"type" : "real", "space" : "linear", "range" : (f.lb[0], f.ub[0])}
min_result = np.zeros(0)
random_seed = 0
print("="*20)
print("Seed : ", random_seed)
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
torch.cuda.manual_seed_all(random_seed) # if use multi-GPU
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Optimize
AT = AdvancedTurbo(api_config)
AT.optimize(f, num_evals=100, n_suggestions=10)
# Collect Minimum value
fX = AT.turbo.fX
min_result = np.concatenate((min_result, min(fX)))
plot_results[func] = fX.tolist()
# -
for key in ["levy_0_turbo", "ackley_0_turbo"]:
fX = plot_results[key]
func = key.split('_')[0]
ylim = 30
if func == 'ackley':
ylim = 15
matplotlib.rcParams.update({'font.size': 16})
plt.plot(fX, 'b.', ms=10) # Plot all evaluated points as blue dots
plt.plot(np.minimum.accumulate(fX), 'r', lw=3) # Plot cumulative minimum as a red line
plt.xlim([0, len(fX)])
plt.ylim([0, ylim])
plt.title("10D {} function".format(func.capitalize()))
plt.xlabel("Number of evaluation")
plt.ylabel("Value")
plt.tight_layout()
plt.savefig("../results/3_albo_{}.pdf".format(func))
plt.show()
# ### For 200 Dimension
#
# + jupyter={"outputs_hidden": true}
results = {}
plot_results = {}
dimension=200
for func in ["ackley"]:
if func == "levy":
f = Levy(dimension)
else :
f = Ackley(dimension)
api_config = {}
for i in range(dimension):
api_config["dim_"+str(i)] = {"type" : "real", "space" : "linear", "range" : (f.lb[0], f.ub[0])}
min_result = np.zeros(0)
for random_seed in range(5):
print("="*20)
print("Seed : ", random_seed)
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
torch.cuda.manual_seed_all(random_seed) # if use multi-GPU
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Optimize
AT = AdvancedTurbo(api_config)
AT.optimize(f, num_evals=50, n_suggestions=10)
# Collect Minimum value
fX = AT.turbo.fX
min_result = np.concatenate((min_result, min(fX)))
plot_results[func+'_'+str(random_seed)] = fX.tolist()
results[func] = min_result.tolist()
| AI701/bbo_osi/Levy_Ackley_ALBO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Geometric Distribution
# ***
# ## Definition
# >The Geometric distribution is a discrete distribution that gives the probability that the first occurrence of success requires $k$ independent trials [a.k.a. Bernoulli trials], each with success probability $p$ $^{[1]}$.
#
# ## Formula
# The probability mass function of a Geometric distributed random variable is defined as:
# $$ Geom(k|p) = (1-p)^{k-1}p $$
# where $p$ denotes the probability of success in a Bernoulli trial.
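# As a quick sanity check of the formula above, the mass function can be evaluated by hand (a minimal sketch in plain Python, following the convention that the support starts at $k = 1$):

```python
def geom_pmf(k, p):
    """P(first success occurs on trial k) = (1 - p)^(k - 1) * p."""
    return (1 - p) ** (k - 1) * p

p = 0.3
# the first few probabilities decay geometrically...
probs = [geom_pmf(k, p) for k in range(1, 6)]
print([round(q, 4) for q in probs])
# ...and the total mass over a long horizon approaches 1
total = sum(geom_pmf(k, p) for k in range(1, 500))
print(round(total, 6))
```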
# +
# # %load ../src/geometric/01_general.py
# -
# ***
# ## Parameters
# +
# # %load ../src/geometric/02_p.py
# -
# ***
# ## Implementation in Python
# Multiple Python packages implement the Geometric distribution. One of those is the `stats.geom` module from the `scipy` package. The following methods are only an excerpt. For a full list of features the [official documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.geom.html) should be read.
# ### Random Variates
# To generate random samples from the distribution, the function `rvs` should be used.
# +
import numpy as np
from scipy.stats import geom
# draw a single sample
np.random.seed(42)
print(geom.rvs(p=0.3), end="\n\n")
# draw 10 samples
print(geom.rvs(p=0.3, size=10), end="\n\n")
# -
# ### Probability Mass Function
# The probability mass function can be accessed via the `pmf` function (mass instead of density since the Geometric distribution is discrete). Like the `rvs` method, `pmf` allows for adjusting the $p$ of the random variable:
# +
import numpy as np
from scipy.stats import geom
# additional imports for plotting purpose
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams["figure.figsize"] = (14,7)
# likelihood of x and y
x = 1
y = 7
print("pmf(X=1) = {}\npmf(X=7) = {}".format(geom.pmf(k=x, p=0.3), geom.pmf(k=y, p=0.3)))
# discrete pmf over the support k = 1..10 for the plot
x_s = np.arange(1, 11)
y_s = geom.pmf(k=x_s, p=0.3)
plt.scatter(x_s, y_s, s=100);
# -
# ### Cumulative Distribution Function
# The cumulative distribution function is useful when a probability over a range has to be calculated. It can be accessed via the `cdf` function:
# +
from scipy.stats import geom
# probability of X being less than or equal to 3
print("P(X <= 3) = {}".format(geom.cdf(k=3, p=0.3)))
# probability of X lying in (2, 8]
print("P(2 < X <= 8) = {}".format(geom.cdf(k=8, p=0.3) - geom.cdf(k=2, p=0.3)))
# -
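# The geometric CDF also has the closed form $1 - (1-p)^k$, so the `cdf` calls above can be verified without scipy (a small sketch):

```python
def geom_cdf(k, p):
    """P(X <= k) = 1 - (1 - p)^k for the first-success-on-trial-k convention."""
    return 1 - (1 - p) ** k

p = 0.3
print(geom_cdf(3, p))                   # P(X <= 3)
print(geom_cdf(8, p) - geom_cdf(2, p))  # P(2 < X <= 8)
```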
# ***
# ## Inferring $p$
# Given a sample of data points it is often necessary to estimate the "true" parameters of the distribution. In the case of the Geometric distribution this estimation is quite simple: $p$ can be derived by taking the reciprocal of the sample mean.
# +
# # %load ../src/geometric/03_estimation.py
# -
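# Since $E[X] = 1/p$, the estimate is simply the reciprocal of the sample mean. A small self-contained simulation (standard library only; the sample here is synthetic, not the notebook's data):

```python
import random

random.seed(42)
p_true = 0.3

def draw_geom(p):
    """Draw one geometric sample by counting Bernoulli trials until the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

sample = [draw_geom(p_true) for _ in range(20000)]
p_hat = 1 / (sum(sample) / len(sample))
print(round(p_hat, 3))  # close to p_true
```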
# ## Inferring $p$ - MCMC
# In addition to direct estimation from the sample, $p$ can also be estimated using Markov chain Monte Carlo simulation, implemented in Python's [PyMC3](https://github.com/pymc-devs/pymc3).
# +
# # %load ../src/geometric/04_MCMC_estimation.py
# -
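# The script loaded above relies on PyMC3; the underlying idea can also be illustrated with a minimal hand-rolled Metropolis chain over $p$ (a sketch with a made-up sample and a flat prior on $(0, 1)$, not the notebook's PyMC3 model):

```python
import math
import random

random.seed(0)
data = [2, 1, 4, 1, 3, 6, 1, 2, 5, 1]  # hypothetical geometric sample

def log_likelihood(p, xs):
    # log of prod (1 - p)^(x - 1) * p; the flat prior contributes nothing here
    return sum((x - 1) * math.log(1 - p) + math.log(p) for x in xs)

p_cur, trace = 0.5, []
for _ in range(5000):
    # propose a small Gaussian step, clipped into (0, 1)
    p_prop = min(max(p_cur + random.gauss(0, 0.05), 1e-6), 1 - 1e-6)
    # Metropolis acceptance: accept with probability min(1, likelihood ratio)
    if math.log(random.random()) < log_likelihood(p_prop, data) - log_likelihood(p_cur, data):
        p_cur = p_prop
    trace.append(p_cur)

# discard burn-in and summarize the posterior
posterior_mean = sum(trace[1000:]) / len(trace[1000:])
print(round(posterior_mean, 2))
```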
# ***
# [1] - [Wikipedia. Geometric Distribution](https://en.wikipedia.org/wiki/Geometric_distribution)
| notebooks/Geometric Distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create System Metadata
#
# An instance of System Metadata is created for arbitrary values to demonstrate programmatic construction of system metadata using the python libraries.
#
# First gather the values to be used for the system metadata. Normally these would be obtained by examining a file.
# +
import notebook_utils as nbu
from d1_common.types import dataoneTypes
import hashlib
def checksumOfObject( o ):
'''Compute the SHA-256 checksum of an arbitrary thing.
'''
m = hashlib.sha256()
if isinstance(o, bytes):
m.update(o)
elif isinstance(o, str):
m.update( bytes(o, "utf-8") )
else:
m.update( bytes( o ) )
return m.hexdigest()
# Create a dictionary of properties to use
source = {
"identifier":"test_identifier_01",
"seriesIdentifier":"test",
"formatId": "eml://ecoinformatics.org/eml-2.0.1",
"mediaType": "text/xml",
"fileName":"metadata.xml",
"size":17181,
"checksum": checksumOfObject("arbitrary thing to create checksum"),
"checksum_algorithm": "SHA256",
"submitter":"CN=urn:node:cnStageUNM1,DC=dataone,DC=org",
"rightsHolder":"CN=testRightsHolder,DC=dataone,DC=org",
"obsoletes":"test_identifier_00",
"obsoletedBy": None,
}
# -
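# As noted above, the size and checksum would normally be derived by examining a file rather than hard-coded; a hedged sketch of that step (the temporary XML file here is made up purely for illustration):

```python
import hashlib
import os
import tempfile

def file_properties(path, block_size=65536):
    """Return (size in bytes, SHA-256 hex digest) of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in blocks so large files do not need to fit in memory
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return os.path.getsize(path), digest.hexdigest()

# demonstrate on a throwaway file
with tempfile.NamedTemporaryFile(mode="w", suffix=".xml", delete=False) as tmp:
    tmp.write("<metadata>example</metadata>")
size, checksum = file_properties(tmp.name)
os.unlink(tmp.name)
print(size, checksum[:12])
```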
# Given the dictionary of values, create the system metadata and print the resulting XML.
# +
system_metadata = dataoneTypes.systemMetadata()
# Set with public read-only defaults
# enable public read by adding read permission for the "Public" user
access_policy = dataoneTypes.accessPolicy()
access_rule = dataoneTypes.accessRule()
access_rule.subject.append("Public")
access_rule.permission.append(dataoneTypes.Permission('read'))
access_policy.append(access_rule)
system_metadata.accessPolicy = access_policy
# Replication Policy
replication_policy = dataoneTypes.replicationPolicy()
replication_policy.replicationAllowed = True
replication_policy.numberReplicas = 1
system_metadata.replicationPolicy = replication_policy
# Obsolescence
system_metadata.obsoletes = source.get("obsoletes", None)
system_metadata.obsoletedBy = source.get("obsoletedBy", None)
# Add properties
system_metadata.identifier = source.get("identifier")
system_metadata.seriesIdentifier = source.get("seriesIdentifier")
system_metadata.formatId = source.get("formatId")
system_metadata.size = source.get("size")
checksum = dataoneTypes.checksum( source.get("checksum") )
checksum.algorithm = source.get("checksum_algorithm")
system_metadata.checksum = checksum
system_metadata.submitter = source.get("submitter")
system_metadata.rightsHolder = source.get("rightsHolder")
#Media type information
media_type = dataoneTypes.mediaType()
media_type.name = source.get("mediaType")
system_metadata.mediaType = media_type
system_metadata.fileName = source.get("fileName")
print(nbu.asXml(system_metadata))
# -
| python_examples/10_create_systemmetadata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.009956, "end_time": "2021-04-22T02:56:38.712295", "exception": false, "start_time": "2021-04-22T02:56:38.702339", "status": "completed"} tags=[]
# # Setting Up Environment
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 7.241664, "end_time": "2021-04-22T02:56:45.963171", "exception": false, "start_time": "2021-04-22T02:56:38.721507", "status": "completed"} tags=[]
import math, os, re, time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tqdm.notebook import tqdm
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, AlphaDropout
from tensorflow.keras.metrics import MeanSquaredError, MeanAbsolutePercentageError
from tensorflow.keras.callbacks import EarlyStopping
tqdm.pandas()
tf.random.set_seed(42)
# %matplotlib inline
print(tf.__version__)
# + [markdown] papermill={"duration": 0.0092, "end_time": "2021-04-22T02:56:45.982426", "exception": false, "start_time": "2021-04-22T02:56:45.973226", "status": "completed"} tags=[]
# # Data Preparation
# + papermill={"duration": 0.437033, "end_time": "2021-04-22T02:56:46.428904", "exception": false, "start_time": "2021-04-22T02:56:45.991871", "status": "completed"} tags=[]
raw_data = pd.read_csv('../input/bt4222-project/modelling_dataset.csv')
raw_data = pd.concat([raw_data.pop('Unit Price ($ PSM)'), raw_data], axis=1)
raw_data
# + [markdown] papermill={"duration": 0.009995, "end_time": "2021-04-22T02:56:46.449384", "exception": false, "start_time": "2021-04-22T02:56:46.439389", "status": "completed"} tags=[]
# # Train-Test Split
# + papermill={"duration": 0.110307, "end_time": "2021-04-22T02:56:46.569938", "exception": false, "start_time": "2021-04-22T02:56:46.459631", "status": "completed"} tags=[]
# Perform 70/30 train_test split
X = raw_data.iloc[:,1:].copy()
y = raw_data.iloc[:,0:1].copy()
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3,
shuffle=True,
random_state=42)
X_train
# + [markdown] papermill={"duration": 0.011021, "end_time": "2021-04-22T02:56:46.592293", "exception": false, "start_time": "2021-04-22T02:56:46.581272", "status": "completed"} tags=[]
# # Feature Scaling
# + papermill={"duration": 0.023426, "end_time": "2021-04-22T02:56:46.627333", "exception": false, "start_time": "2021-04-22T02:56:46.603907", "status": "completed"} tags=[]
def all_transform(X_train, X_test):
all_features = list(X_train.columns)
standardScale_vars = ['Area (SQM)',
'Floor Number',
'PPI',
'Average Cases Per Year',
'Nearest Primary School',
'nearest_station_distance']
minMax_vars = ['Remaining Lease']
remaining_features = [x for x in all_features if x not in standardScale_vars and x not in minMax_vars]
s_scaler = StandardScaler()
mm_scaler = MinMaxScaler()
s_scaled = pd.DataFrame(s_scaler.fit_transform(X_train.loc[:, standardScale_vars].copy()), columns=standardScale_vars, index=X_train.index)
mm_scaled = pd.DataFrame(mm_scaler.fit_transform(X_train.loc[:, minMax_vars].copy()), columns=minMax_vars, index=X_train.index)
X_train = pd.concat([s_scaled,
mm_scaled,
X_train.loc[:, remaining_features].copy()], axis=1)
s_scaled = pd.DataFrame(s_scaler.transform(X_test.loc[:, standardScale_vars].copy()), columns=standardScale_vars, index=X_test.index)
mm_scaled = pd.DataFrame(mm_scaler.transform(X_test.loc[:, minMax_vars].copy()), columns=minMax_vars, index=X_test.index)
X_test = pd.concat([s_scaled,
mm_scaled,
X_test.loc[:, remaining_features].copy()], axis=1)
return X_train, X_test
# + papermill={"duration": 0.083849, "end_time": "2021-04-22T02:56:46.722424", "exception": false, "start_time": "2021-04-22T02:56:46.638575", "status": "completed"} tags=[]
X_train, X_test = all_transform(X_train, X_test)
X_train
# -
# # Train-Validation Split
# + papermill={"duration": 0.066742, "end_time": "2021-04-22T02:56:46.801306", "exception": false, "start_time": "2021-04-22T02:56:46.734564", "status": "completed"} tags=[]
# Train-Validation Split
X_train, X_eval, y_train, y_eval = train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True,
random_state=42)
X_train
# + papermill={"duration": 0.0227, "end_time": "2021-04-22T02:56:46.837066", "exception": false, "start_time": "2021-04-22T02:56:46.814366", "status": "completed"} tags=[]
print(X_train.shape)
print(y_train.shape)
print(X_eval.shape)
print(y_eval.shape)
print(X_test.shape)
print(y_test.shape)
# + [markdown] papermill={"duration": 0.01284, "end_time": "2021-04-22T02:56:46.864719", "exception": false, "start_time": "2021-04-22T02:56:46.851879", "status": "completed"} tags=[]
# # Baseline Model (Unregularized)
# + papermill={"duration": 328.282076, "end_time": "2021-04-22T03:02:15.161351", "exception": false, "start_time": "2021-04-22T02:56:46.879275", "status": "completed"} tags=[]
# %%time
# Model Parameters
nodes = [512, 512]
batch_norm = True
drop_rate = 0.2
# Model building
n_features = X_train.shape[1]
model = Sequential()
model.add(Dense(nodes[0], activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
if batch_norm:
model.add(BatchNormalization())
else:
model.add(Dropout(drop_rate))
model.add(Dense(nodes[1], activation='relu', kernel_initializer='he_normal'))
model.add(Dropout(drop_rate))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
print(model.summary(), '\n')
early_stop = EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(X_train, y_train, epochs=500, batch_size=32, validation_split=0.2, callbacks=[early_stop])
# Model Errors
plt.rcParams['figure.figsize'] = 12,8
plt.title('Learning Curves')
plt.xlabel('Epoch')
plt.ylabel('Mean Squared Error')
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='val')
plt.legend()
plt.show()
plt.clf()
# Model Evaluation
y_train_pred = model.predict(X_train)
y_eval_pred = model.predict(X_eval)
y_test_pred = model.predict(X_test)
train_rmse = np.sqrt(MeanSquaredError()(y_train, y_train_pred).numpy())
train_mape = MeanAbsolutePercentageError()(y_train, y_train_pred).numpy()
eval_rmse = np.sqrt(MeanSquaredError()(y_eval, y_eval_pred).numpy())
eval_mape = MeanAbsolutePercentageError()(y_eval, y_eval_pred).numpy()
test_rmse = np.sqrt(MeanSquaredError()(y_test, y_test_pred).numpy())
test_mape = MeanAbsolutePercentageError()(y_test, y_test_pred).numpy()
print('Train RMSE: %.3f, Train MAPE: %.3f' % (train_rmse, train_mape), '\n')
print('Val RMSE: %.3f, Val MAPE: %.3f' % (eval_rmse, eval_mape), '\n')
print('Test RMSE: %.3f, Test MAPE: %.3f' % (test_rmse, test_mape), '\n')
# + [markdown] papermill={"duration": 1.359987, "end_time": "2021-04-22T03:02:17.836603", "exception": false, "start_time": "2021-04-22T03:02:16.476616", "status": "completed"} tags=[]
# # Model Hyperparameter Tuning (Regularization Type + Lambda)
# + papermill={"duration": 1.318877, "end_time": "2021-04-22T03:02:20.474652", "exception": false, "start_time": "2021-04-22T03:02:19.155775", "status": "completed"} tags=[]
def model_tuning_2nn(reg_type='L2', lambda_val=0.01):
# Model Parameters
nodes = [512, 512]
batch_norm = True
drop_rate = 0.2
# Model building
n_features = X_train.shape[1]
# Selecting regularization type
if reg_type == 'L1':
regularizer = tf.keras.regularizers.L1(l1=lambda_val)
elif reg_type == 'L2':
regularizer = tf.keras.regularizers.L2(l2=lambda_val)
model = Sequential()
model.add(Dense(nodes[0], activation='relu', kernel_initializer='he_normal', input_shape=(n_features,), kernel_regularizer=regularizer))
if batch_norm:
model.add(BatchNormalization())
else:
model.add(Dropout(drop_rate))
model.add(Dense(nodes[1], activation='relu', kernel_initializer='he_normal', kernel_regularizer=regularizer))
model.add(Dropout(drop_rate))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
print(model.summary(), '\n')
early_stop = EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(X_train, y_train, epochs=500, batch_size=32, validation_split=0.2, callbacks=[early_stop])
# Model Errors
plt.rcParams['figure.figsize'] = 12,8
plt.title('Learning Curves')
plt.xlabel('Epoch')
plt.ylabel('Mean Squared Error')
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='val')
plt.legend()
plt.show()
plt.clf()
# Model Evaluation
y_train_pred = model.predict(X_train)
y_eval_pred = model.predict(X_eval)
y_test_pred = model.predict(X_test)
train_rmse = np.sqrt(MeanSquaredError()(y_train, y_train_pred).numpy())
train_mape = MeanAbsolutePercentageError()(y_train, y_train_pred).numpy()
eval_rmse = np.sqrt(MeanSquaredError()(y_eval, y_eval_pred).numpy())
eval_mape = MeanAbsolutePercentageError()(y_eval, y_eval_pred).numpy()
test_rmse = np.sqrt(MeanSquaredError()(y_test, y_test_pred).numpy())
test_mape = MeanAbsolutePercentageError()(y_test, y_test_pred).numpy()
print('Train RMSE: %.3f, Train MAPE: %.3f' % (train_rmse, train_mape), '\n')
print('Val RMSE: %.3f, Val MAPE: %.3f' % (eval_rmse, eval_mape), '\n')
print('Test RMSE: %.3f, Test MAPE: %.3f' % (test_rmse, test_mape), '\n')
return pd.DataFrame([{'regularizer': reg_type,
'lambda': lambda_val,
'train_rmse': train_rmse,
'validation_rmse': eval_rmse,
'test_rmse': test_rmse,
'train_mape': train_mape,
'validation_mape': eval_mape,
'test_mape': test_mape}])
# -
# # Collecting Error Metrics (for comparison)
# + papermill={"duration": 2034.496254, "end_time": "2021-04-22T03:36:16.317118", "exception": false, "start_time": "2021-04-22T03:02:21.820864", "status": "completed"} tags=[]
# %%time
df_results = pd.DataFrame()
for reg in ['L1', 'L2']:
for lam_val in [0.001, 0.005, 0.01, 0.05, 0.1]:
output = model_tuning_2nn(reg_type=reg, lambda_val=lam_val)
df_results = pd.concat([df_results, output], axis=0)
df_results.reset_index(drop=True, inplace=True)
df_results
# + [markdown] papermill={"duration": 9.647928, "end_time": "2021-04-22T03:36:35.342445", "exception": false, "start_time": "2021-04-22T03:36:25.694517", "status": "completed"} tags=[]
# # Saving Results
# + papermill={"duration": 9.401758, "end_time": "2021-04-22T03:36:54.115622", "exception": false, "start_time": "2021-04-22T03:36:44.713864", "status": "completed"} tags=[]
df_results.to_csv('2nn_regularized.csv', index=False)
| Model Experimentation/Model Experimentation - Neural Networks (2 Layers with Regularization).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example Template
#
# > Templates are Python notebooks that guide users in consuming the facades of each component. Templates are not oriented toward running deep learning for software engineering projects end to end, but they encourage trying different techniques to explore data or to create tools for successful research in DL4SE.
#hide
from nbdev.showdoc import *
| nbs/deprecated/af_emp.eval.pp1.rq4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# <a href="https://colab.research.google.com/github/RecoHut-Projects/recohut/blob/master/tutorials/modeling/T936914_siren_ml1m_torch_gpu.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# # SiReN on ML-1m in PyTorch
# ## **Step 1 - Setup the environment**
# ### **1.1 Install libraries**
# torch geometric
try:
import torch_geometric
except ModuleNotFoundError:
# Installing torch geometric packages with specific CUDA+PyTorch version.
# See https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details
import torch
TORCH = torch.__version__.split('+')[0]
CUDA = 'cu' + torch.version.cuda.replace('.','')
# !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
# !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
# !pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
# !pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
# !pip install torch-geometric
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
# !pip install -q -U git+https://github.com/RecoHut-Projects/recohut.git -b v0.0.4
# ### **1.2 Download datasets**
# !git clone -q --branch v2 https://github.com/RecoHut-Datasets/movielens_1m.git
# ### **1.3 Import libraries**
# +
import torch
from torch import optim
from torch.utils.data import DataLoader
import pandas as pd
import numpy as np
import time
from tqdm.notebook import tqdm
import os
import pickle
import warnings
warnings.filterwarnings('ignore')
# +
# layers
from recohut.models.layers.message_passing import LightGConv, LRGCCF
# models
from recohut.models.siren import SiReN
# transforms
from recohut.transforms.bipartite import BipartiteDataset
# -
# ### **1.4 Set params**
class Args:
dataset = 'ML-1M' # Dataset
version = 1 # Dataset version
batch_size = 1024 # Batch size
dim = 64 # Dimension
lr = 5e-3 # Learning rate
offset = 3.5 # Criterion of likes/dislikes
K = 40 # The number of negative samples
num_layers = 4 # The number of layers of a GNN model for the graph with positive edges
MLP_layers = 2 # The number of layers of MLP for the graph with negative edges
epoch = 4 # The number of epochs
reg = 0.05 # Regularization coefficient
# ## **Step 2 - Data preparation**
class Data_loader():
def __init__(self,dataset,version):
self.dataset=dataset; self.version=version
self.sep='::'
self.names=['userId','movieId','rating','timestemp'];
self.path_for_whole='./movielens_1m/ratings.dat'
self.path_for_train='./movielens_1m/train_1m%s.dat'%(version)
self.path_for_test='./movielens_1m/test_1m%s.dat'%(version)
self.num_u=6040; self.num_v=3952;
def data_load(self):
self.whole_=pd.read_csv(self.path_for_whole, names = self.names, sep=self.sep, engine='python').drop('timestemp',axis=1).sample(frac=1,replace=False,random_state=self.version)
self.train_set = pd.read_csv(self.path_for_train,engine='python',names=self.names).drop('timestemp',axis=1)
self.test_set = pd.read_csv(self.path_for_test,engine='python',names=self.names).drop('timestemp',axis=1)
return self.train_set, self.test_set
def deg_dist(train, num_v):
uni, cou = np.unique(train['movieId'].values-1,return_counts=True)
cou = cou**(0.75)
deg = np.zeros(num_v)
deg[uni] = cou
return torch.tensor(deg)
def gen_top_K(data_class,emb,train,directory_):
no_items = np.array(list(set(np.arange(1,data_class.num_v+1))-set(train['movieId'])))
total_users = set(np.arange(1,data_class.num_u+1))
reco = dict()
pbar = tqdm(desc = 'top-k recommendation...',total=len(total_users),position=0)
for j in total_users:
pos = train[train['userId']==j]['movieId'].values-1
embedding_ = emb[j-1].view(1,len(emb[0])).mm(emb[data_class.num_u:].t()).detach();
embedding_[0][no_items-1]=-np.inf;
embedding_[0][pos]=-np.inf;
reco[j]=torch.topk(embedding_[0],300).indices.cpu().numpy()+1
pbar.update(1)
pbar.close()
return reco
# +
args = Args()
data_class=Data_loader(args.dataset,args.version)
threshold = round(args.offset) # To generate ground truth set
print('data loading...'); st=time.time()
train,test = data_class.data_load();
train = train.astype({'userId':'int64', 'movieId':'int64'})
print('loading complete! time :: %s'%(time.time()-st))
print('generate negative candidates...'); st=time.time()
neg_dist = deg_dist(train,data_class.num_v)
print('complete ! time : %s'%(time.time()-st))
# -
# ## **Step 3 - Training & Evaluation**
class evaluate():
def __init__(self,reco,train,test,threshold,num_u,num_v,N=[5,10,15,20,25],ratings=[20,50]):
'''
train : training set
test : test set
threshold : To generate ground truth set from test set
'''
self.reco = reco
self.num_u = num_u;
self.num_v = num_v;
self.N=N
self.p=[]
self.r=[]
self.NDCG=[]
self.p_c1=[]; self.p_c2=[]; self.p_c3=[]
self.r_c1=[]; self.r_c2=[]; self.r_c3=[]
self.NDCG_c1=[]; self.NDCG_c2=[]; self.NDCG_c3=[]
self.tr = train; self.te = test;
self.threshold = threshold;
self.gen_ground_truth_set()
self.ratings = ratings
self.partition_into_groups_(self.ratings)
print('\nevaluating recommendation accuracy....')
self.precision_and_recall_G(self.group1,1)
self.precision_and_recall_G(self.group2,2)
self.precision_and_recall_G(self.group3,3)
self.Normalized_DCG_G(self.group1,1)
self.Normalized_DCG_G(self.group2,2)
self.Normalized_DCG_G(self.group3,3)
self.metric_total()
def gen_ground_truth_set(self):
result = dict()
GT = self.te[self.te['rating']>=self.threshold];
U = set(GT['userId'])
for i in U:
result[i] = list(set([j for j in GT[GT['userId']==i]['movieId']]))#-set(self.TOP))
if len(result[i])==0:
del(result[i])
self.GT = result
def precision_and_recall(self):
user_in_GT=[j for j in self.GT];
for n in self.N:
p=0; r=0;
for i in user_in_GT:
topn=self.reco[i][:n]
num_hit=len(set(topn).intersection(set(self.GT[i])));
p+=num_hit/n; r+=num_hit/len(self.GT[i]);
self.p.append(p/len(user_in_GT)); self.r.append(r/len(user_in_GT));
def Normalized_DCG(self):
maxn=max(self.N);
user_in_GT=[j for j in self.GT];
ndcg=np.zeros(maxn);
for i in user_in_GT:
idcg_len = min(len(self.GT[i]), maxn)
temp_idcg = np.cumsum(1.0 / np.log2(np.arange(2, maxn + 2)))
temp_idcg[idcg_len:] = temp_idcg[idcg_len-1]
temp_dcg=np.cumsum([1.0/np.log2(idx+2) if item in self.GT[i] else 0.0 for idx, item in enumerate(self.reco[i][:maxn])])
ndcg+=temp_dcg/temp_idcg;
ndcg/=len(user_in_GT);
for n in self.N:
self.NDCG.append(ndcg[n-1])
def metric_total(self):
self.p = self.len1 * np.array(self.p_c1) + self.len2 * np.array(self.p_c2) + self.len3 * np.array(self.p_c3);
self.p/= self.len1 + self.len2 + self.len3
self.p = list(self.p)
self.r = self.len1 * np.array(self.r_c1) + self.len2 * np.array(self.r_c2) + self.len3 * np.array(self.r_c3);
self.r/= self.len1 + self.len2 + self.len3
self.r = list(self.r)
self.NDCG = self.len1 * np.array(self.NDCG_c1) + self.len2 * np.array(self.NDCG_c2) + self.len3 * np.array(self.NDCG_c3);
self.NDCG/= self.len1 + self.len2 + self.len3
self.NDCG = list(self.NDCG)
def partition_into_groups_(self,ratings=[20,50]):
unique_u, counts_u = np.unique(self.tr['userId'].values,return_counts=True)
self.group1 = unique_u[np.argwhere(counts_u<ratings[0])]
temp = unique_u[np.argwhere(counts_u<ratings[1])]
self.group2 = np.setdiff1d(temp,self.group1)
self.group3 = np.setdiff1d(unique_u,temp)
self.cold_groups = ratings
self.group1 = list(self.group1.reshape(-1))
self.group2 = list(self.group2.reshape(-1))
self.group3 = list(self.group3.reshape(-1))
def precision_and_recall_G(self,group,gn):
user_in_GT=[j for j in self.GT];
leng = 0 ; maxn = max(self.N) ; p = np.zeros(maxn); r = np.zeros(maxn);
for i in user_in_GT:
if i in group:
leng+=1
hit_ = np.cumsum([1.0 if item in self.GT[i] else 0.0 for idx, item in enumerate(self.reco[i][:maxn])])
p+=hit_ / np.arange(1,maxn+1); r+=hit_/len(self.GT[i])
p/= leng; r/=leng;
for n in self.N:
if gn == 1 :
self.p_c1.append(p[n-1])
self.r_c1.append(r[n-1])
self.len1 = leng;
elif gn == 2 :
self.p_c2.append(p[n-1])
self.r_c2.append(r[n-1])
self.len2 = leng;
elif gn == 3 :
self.p_c3.append(p[n-1])
self.r_c3.append(r[n-1])
self.len3 = leng;
def Normalized_DCG_G(self,group,gn):
maxn=max(self.N);
user_in_GT=[j for j in self.GT];
ndcg=np.zeros(maxn);
leng = 0
for i in user_in_GT:
if i in group:
leng+=1
idcg_len = min(len(self.GT[i]), maxn)
temp_idcg = np.cumsum(1.0 / np.log2(np.arange(2, maxn + 2)))
temp_idcg[idcg_len:] = temp_idcg[idcg_len-1]
temp_dcg=np.cumsum([1.0/np.log2(idx+2) if item in self.GT[i] else 0.0 for idx, item in enumerate(self.reco[i][:maxn])])
ndcg+=temp_dcg/temp_idcg;
ndcg/=leng
for n in self.N:
if gn == 1 :
self.NDCG_c1.append(ndcg[n-1])
elif gn == 2 :
self.NDCG_c2.append(ndcg[n-1])
elif gn == 3 :
self.NDCG_c3.append(ndcg[n-1])
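# The NDCG bookkeeping inside `evaluate` can be checked on a tiny hand-worked case. The ground-truth set and recommendation list below are made up; the cumulative-sum formulation mirrors `Normalized_DCG_G`.

```python
import numpy as np

GT = {1, 3}                     # hypothetical relevant items for one user
reco = [3, 2, 1, 4]             # hypothetical top-4 recommendation
maxn = 4

dcg = np.cumsum([1.0 / np.log2(i + 2) if item in GT else 0.0
                 for i, item in enumerate(reco)])
idcg_len = min(len(GT), maxn)
idcg = np.cumsum(1.0 / np.log2(np.arange(2, maxn + 2)))
idcg[idcg_len:] = idcg[idcg_len - 1]   # ideal DCG saturates once all hits fit
ndcg = dcg / idcg
```

# The first recommended item is a hit, so NDCG@1 is exactly 1.0; at n=4 the
# second hit arrives at rank 3, giving DCG 1 + 1/log2(5-1) = 1.5.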
# +
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
args.user_col = 'userId'
args.item_col = 'movieId'
args.feedback_col = 'rating'
model= SiReN(train,
data_class.num_u,
data_class.num_v,
offset=args.offset,
num_layers=args.num_layers,
MLP_layers=args.MLP_layers,
dim=args.dim,
device=device,
reg=args.reg,
graph_enc = 'lightgcn',
user_col = args.user_col,
item_col = args.item_col,
rating_col = args.feedback_col)
model.data_p.to(device)
model.to(device)
optimizer = optim.Adam(model.parameters(), lr = args.lr)
# +
print("\nTraining on {}...\n".format(device))
model.train()
training_dataset = BipartiteDataset(args, train, neg_dist, args.offset,
data_class.num_u, data_class.num_v, args.K)
for EPOCH in range(1,args.epoch+1):
    if EPOCH % 2 == 1: training_dataset.negs_gen_EP(2)  # regenerate the two negative-sample sets every other epoch
LOSS=0
training_dataset.edge_4 = training_dataset.edge_4_tot[:,:,EPOCH%2-1]
ds = DataLoader(training_dataset,batch_size=args.batch_size,shuffle=True)
q=0
pbar = tqdm(desc = 'Version : {} Epoch {}/{}'.format(args.version,EPOCH,args.epoch),total=len(ds),position=0)
for u,v,w,negs in ds:
q+=len(u)
st=time.time()
optimizer.zero_grad()
loss = model(u,v,w,negs,device) # original
loss.backward()
optimizer.step()
LOSS+=loss.item() * len(ds)
pbar.update(1);
pbar.set_postfix({'loss':loss.item()})
pbar.close()
if EPOCH%2==0 :
directory = os.getcwd() + '/results/%s/SiReN/epoch%s_batch%s_dim%s_lr%s_offset%s_K%s_num_layers%s_MLP_layers%s_threshold%s_reg%s/'%(args.dataset,EPOCH,args.batch_size,args.dim,args.lr,args.offset,args.K,args.num_layers,args.MLP_layers,threshold,args.reg)
if not os.path.exists(directory):
os.makedirs(directory)
model.eval()
emb = model.aggregate();
top_k_list = gen_top_K(data_class,emb,train,directory+'r%s_reco.pickle'%(args.version))
eval_ = evaluate(top_k_list,train,test,threshold,data_class.num_u,data_class.num_v,N=[10,15,20],ratings=[20,50])
print("\n***************************************************************************************")
print(" /* Recommendation Accuracy */")
print('Precision at [10, 15, 20] :: ',eval_.p)
print('Recall at [10, 15, 20] :: ',eval_.r)
print('NDCG at [10, 15, 20] :: ',eval_.NDCG)
print("***************************************************************************************")
directory_ = directory+'r%s_reco.pickle'%(args.version)
with open(directory_,'wb') as fw:
pickle.dump(eval_,fw)
model.train()
# -
# ## **Closure**
# For more details, you can refer to https://github.com/RecoHut-Stanzas/S138006.
# <a href="https://github.com/RecoHut-Stanzas/S138006/blob/main/reports/S138006_Report.ipynb" alt="S138006_Report"> <img src="https://img.shields.io/static/v1?label=report&message=active&color=green" /></a> <a href="https://github.com/RecoHut-Stanzas/S138006" alt="S138006"> <img src="https://img.shields.io/static/v1?label=code&message=github&color=blue" /></a>
# !pip install -q watermark
# %reload_ext watermark
# %watermark -a "Sparsh A." -m -iv -u -t -d -p recohut
# ---
# **END**
| tutorials/modeling/T936914_siren_ml1m_torch_gpu.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# +
import datetime
import numpy as np
import string
import pandas as pd
from SVM import myfunc
np.random.seed(123) # for reproducibility (a leading-zero literal like 0123 is a SyntaxError in Python 3)
import csv
authorlist = []
doc_id = []
with open('finalA6.csv') as csvfile:
myreader = csv.reader(csvfile, delimiter=',')
for row in myreader:
authorlist.append(int(row[1]))
doc_id.append(int(row[2]))
#authorlist1 = []
#for i in set(authorlist):
# authorlist1.append(i)
#
#authorlist = authorlist1
test_output = []
for cnt in range(len(doc_id)):
temp_id = doc_id[:]
del temp_id[cnt]
temp_authorlist = authorlist[:]
del temp_authorlist[cnt]
out = myfunc(temp_authorlist , temp_id , authorlist[cnt] , doc_id[cnt])
test_output.append(out)
print("leave-one-out evaluation complete")
avg_score = float(sum(test_output))/float(len(test_output))
print(avg_score)
# -
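# The loop above is leave-one-out cross-validation: each document is held out once and scored against a model trained on the rest. A generic sketch of the pattern, with a stand-in scorer (the real `myfunc` signature is only assumed from its call site):

```python
def loocv(labels, ids, score_fn):
    """Hold out each sample once; score_fn sees the rest as training data."""
    scores = []
    for i in range(len(ids)):
        train_labels = labels[:i] + labels[i + 1:]
        train_ids = ids[:i] + ids[i + 1:]
        scores.append(score_fn(train_labels, train_ids, labels[i], ids[i]))
    return sum(scores) / len(scores)

# Dummy scorer: 1.0 when the held-out id is absent from the training ids.
dummy = lambda tl, ti, hl, hi: 1.0 if hi not in ti else 0.0
avg = loocv([1, 1, 2], [10, 11, 12], dummy)
```

# With the dummy scorer every fold correctly excludes its held-out id, so the
# average score is 1.0.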
| SVM_AuthorshipID/myfile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Validating fsspec reference-based zarr versus the original HDF (netCDF4) for a DAS dataset
# ## Creation notes
# Original HDF dataset from: https://ds.iris.edu/pub/dropoff/buffer/PH5/das_example.h5
# Reference json file created via script: das_h5_to_ref.py (in this repo)
#
# ## Methodology
# Load both datasets via xarray: one from the JSON reference file using zarr and
# one from the original netCDF4 format. They are compared in the following ways:
# 1. High level, using the xarray equality operator '=='
# 2. Comparing the individual DataArray "Acoustic"
# 3. Comparing Dataset attrs (for this example file there are none)
# 4. Comparing the individual DataArray "Acoustic" attrs
#
# ## Future work
# 1. Add even more ways to compare files, like plotting an event from both
#    datasets to demonstrate how to extract data, units, and plot them.
# 2. In other notebooks replicate this for other formats like PH5, MTH, ...
# + pycharm={"name": "#%% Libs\n"}
import os
import numpy as np
import xarray as xr
import fsspec
# + pycharm={"name": "#%% File locations EDIT HERE for your local drive\n"}
FOLDER = '/mnt/hgfs/ssd_tmp/'
h5_filename = os.path.join(FOLDER, 'das_example.h5')
reference_file = 'results_20210809/das_h5/das_example_ref_fs.json'
# + pycharm={"name": "#%% Load dataset into xArray via zarr\n"}
uri = f'file://{reference_file}'
fs = fsspec.filesystem('reference', fo=uri, remote_protocol="file")
m = fs.get_mapper("")
# ds_zarr = xr.open_dataset(m, engine="zarr") # This caused data array to come in as float32
# Per https://stackoverflow.com/questions/68460507/xarray-loading-int-data-as-float
ds_zarr = xr.open_dataset(m, engine="zarr", mask_and_scale=False)
ds_zarr
# + pycharm={"name": "#%% Load dataset into xArray via netcdf\n"}
ds_hdf = xr.open_dataset(h5_filename, engine='netcdf4')
# The "mask_and_scale" option has no effect on hdf files
# ds_hdf = xr.open_dataset(h5_filename, engine='netcdf4', mask_and_scale=False)
ds_hdf
# + pycharm={"name": "#%% Use built-in equality operator\n"}
is_equal = ds_zarr == ds_hdf
is_equal_np = is_equal.to_array().to_numpy()
f'dataset_zarr == dataset_hdf {np.all(is_equal_np)}'
# + pycharm={"name": "#%% Double check the data array as well\n"}
is_da_equal = ds_zarr.Acoustic == ds_hdf.Acoustic
is_da_equal_np = is_da_equal.to_numpy()
is_da_equal_np = np.squeeze(is_da_equal_np)
f'All DataArray elements are the same: {np.all(is_da_equal_np)}'
# + pycharm={"name": "#%% NOT NEEDED since they now match with both being int16s How much are different?\n"}
n_true = np.count_nonzero(is_da_equal_np)
total = is_da_equal_np.size
f'{n_true} are the same out of {total} ({100*n_true/total:.5f}%)'
# + pycharm={"name": "#%% Double-check attributes, top level\n"}
attrs_zarr = ds_zarr.attrs
attrs_hdf = ds_hdf.attrs
f'No top level attributes. len zarr attrs: {len(attrs_zarr)}, \
len of hdf attrs: {len(attrs_hdf)}'
# + pycharm={"name": "#%% Double-check attributes, Acoustic DataArray\n"}
darray_attrs_zarr = ds_zarr.Acoustic.attrs
darray_attrs_hdf = ds_hdf.Acoustic.attrs
f'Acoustic attributes. len zarr attrs: {len(darray_attrs_zarr)}, \
len of hdf attrs: {len(darray_attrs_hdf)}'
# + pycharm={"name": "#%% Looks like zarr has an extra value\n"}
diff_keys = set(darray_attrs_zarr.keys()) - set(darray_attrs_hdf.keys())
f'The extra key in zarr set is: {diff_keys}'
# + pycharm={"name": "#%% Compare the common attrs\n"}
darray_attrs_zarr_fixed = darray_attrs_zarr.copy()
del darray_attrs_zarr_fixed['_FillValue']
f'After removing extra key from zarr attrs they are both equal: \
{darray_attrs_zarr_fixed == darray_attrs_hdf}'
# + pycharm={"name": "#%%\n"}
| das_h5_reference_load_cmp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import os
import copy
from math import log, pow
import subprocess
import matplotlib.pyplot as plt
# #### Fault Classes
# | Fault | Binary | Decimal |
# | --- | --- | --- |
# | A | 00 | 0 |
# | B | 01 | 1 |
# | C | 10 | 2 |
# #### Fault Code = (variable << 2) + fault_class
# | Fault | Binary | Decimal |
# | --- | --- | --- |
# | A1 | 100 | 4 |
# | B1 | 101 | 5 |
# | C1 | 110 | 6 |
# ### Sample (C1)
fault = 2
variable = 1
code = (variable << 2) + fault
code
# #### Reverse
fault = code & 3
fault
variable = code >> 2
variable
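# The shift-and-mask arithmetic above can be wrapped into a small encode/decode pair (the helper names are ours, not from the model code):

```python
def encode(variable, fault_class):
    """Pack a variable index and a 2-bit fault class into one code."""
    return (variable << 2) + fault_class

def decode(code):
    """Recover (variable, fault_class) from a packed code."""
    return code >> 2, code & 3
```

# encode(1, 2) reproduces the C1 code 6 shown above, and decode inverts it
# for any valid pair.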
# #### Current Fault Codes
# | Fault | Decimal |
# | --- | --- |
# | A1 | 4 |
# | B1 | 5 |
# | C1 | 6 |
# | A1B1 | 4-5 |
# | A1C1 | 4-6 |
# | B1C1 | 5-6 |
# #### Flow Chart
# +
from graphviz import Digraph
dot = Digraph(node_attr={'shape': 'box'}, format='png', filename='sensorfusion')
dot.edge_attr.update(arrowhead='vee', arrowsize='1')
dot.node('0', 'Fault Generator')
dot.node('1', 'Other Faults')
dot.node('2', 'Supervisor (Polling)')
dot.node('3', 'All faults\n processed\n?' ,shape='diamond')
dot.node('4', 'Send trigger signal \n &\n increment frequency')
dot.node('5', 'Append fault')
dot.node('6', 'Does fault\n exist?', shape='diamond')
dot.node('7', 'Remove and delay fault\nwith lower\npriority')
dot.node('8', 'Time\nOut?', shape='diamond')
dot.node('9', 'end', shape='oval')
dot.node('10', 'start', shape='oval')
dot.node('11', 'Delayed Faults')
dot.edge('2', '1')
dot.edge('0', '2', ' Possibility\nof fault')
dot.edge('1', '5', ' Possibility\nof fault')
dot.edge('2', '3')
dot.edge('3', '6', 'Yes')
dot.edge('3', '5', 'No')
dot.edge('4', '8')
dot.edge('5', '6')
dot.edge('6', '4', 'Yes')
dot.edge('6', '7', 'No')
dot.edge('7', '3')
dot.edge('8', '2', 'No')
dot.edge('8', '9', 'Yes')
dot.edge('10', '0')
dot.edge('11', '1')
#dot.save()
#dot.render(view=True)
# -
# ### Run
command = "D:/code/C++/RT-Cadmium-FDD-New-Code/top_model/mainwd.exe"
completed_process = subprocess.run(command, shell=False, capture_output=True, text=True)
#print(completed_process.stdout)
#
# ### Read from file
# +
fileName = "SensorFusion.txt"
fault_codes = {}
with open(fileName, "r") as f:
lines = f.readlines()
with open(fileName, "r") as f:
output = f.read()
for line in lines:
if (re.search("supervisor", line) != None):
        res = re.findall(r"\{\d+[, ]*\d*[, ]*\d*\}", line)
if len(res) > 0:
str_interest = res[0].replace('}', '').replace('{', '')
faults = str_interest.split(', ')
key = '-' + '-'.join(faults) + '-'
fault_codes[key] = fault_codes.get(key, 0) + 1
generators = {'A': 0, 'B': 0, 'C': 0, 'D': 0}
for key in generators.keys():
generators[key] = len(re.findall("faultGen" + key, output))
# -
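# The dash-wrapped key construction above can be exercised on a single made-up log line:

```python
import re

line = "supervisor out: {4, 6}"          # hypothetical supervisor log line
res = re.findall(r"\{\d+[, ]*\d*[, ]*\d*\}", line)
faults = res[0].strip('{}').split(', ')
key = '-' + '-'.join(faults) + '-'       # matches the fault_codes key format
```

# The braces group becomes the key '-4-6-', the same shape as the keys seen
# in fault_codes.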
fault_codes
# ### ANALYSIS / VERIFICATION
# #### Definitions
# **Pure Fault**: Faults from a single generator.
# **Compound Faults**: Faults formed from the combination of pure faults.
# ### Premise
# Fault $A1$: Should have no discarded entry, because it has the highest priority
# Fault $B1$: Should have some discarded value, for the case $BD$, which is not available
# Fault $C1$: Higher percentage of discarded cases than $C$, because of its lower priority
# Fault $D1$: Highest percentage of discarded cases, because it has the lowest priority
# Generator $output_{A1} = n({A1}) + n({A1} \cap {B1}) + n({A1} \cap {C1}) + n({A1} \cap {D1}) + discarded_{A1}$
# Generator $output_{B1} = n({B1}) + n({A1} \cap {B1}) + n({B1} \cap {C1}) + discarded_{B1}$
# Generator $output_{C1} = n({C1}) + n({A1} \cap {C1}) + n({B1} \cap {C1}) + n({C1} \cap {D1}) + discarded_{C1}$
# Generator $output_{D1} = n({D1}) + n({A1} \cap {D1}) + n({C1} \cap {D1}) + discarded_{D1}$
#
# Where $discarded_{A1} \equiv 0$, because A has the highest priority, and $discarded_{B1} = 0$ because B1 has a fault code combination with the others in the right order, using the priority system.
def sumFromSupervisor(code):
'''
Returns the number of times faults associated with a particular pure fault (the parameter) were output by the supervisor
@param code: int
@return int
'''
sum = 0
for key, value in fault_codes.items():
if '-' + str(code) + '-' in key:
sum += value;
return sum;
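# Wrapping each code in dashes before the substring test is what keeps, say, code 4 from accidentally matching a longer numeric token. A toy check with invented counts:

```python
toy_codes = {'-4-': 10, '-4-5-': 3, '-5-6-': 2}  # hypothetical fault counts

def sum_for(code, codes):
    """Total supervisor outputs involving the given pure-fault code."""
    return sum(v for k, v in codes.items() if '-%s-' % code in k)
```

# Code 4 appears alone and combined with 5, so its total is 10 + 3 = 13.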
a_discarded = generators['A'] - sumFromSupervisor(4)
a_discarded
b_discarded = generators['B'] - sumFromSupervisor(5)
b_discarded
c_discarded = generators['C'] - sumFromSupervisor(6)
c_discarded
d_discarded = generators['D'] - sumFromSupervisor(7)
d_discarded
total_discarded = a_discarded + b_discarded + c_discarded + d_discarded
total_discarded
total_generated = generators['A'] + generators['B'] + generators['C'] + generators['D']
total_generated
discarded = {'A': a_discarded, 'B': b_discarded, 'C': c_discarded, 'D': d_discarded}
discarded_percentage = {'A': a_discarded * 100 / total_generated, 'B': b_discarded * 100 / total_generated, 'C': c_discarded * 100 / total_generated, 'D': d_discarded * 100 / total_generated}
discarded
fault_codes
a_increment = generators['A'] - fault_codes['-4-5-'] - fault_codes['-4-6-'] - fault_codes['-4-7-'] - a_discarded
a_increment
b_increment = generators['B'] - fault_codes['-4-5-'] - fault_codes['-5-6-'] - b_discarded
b_increment
c_increment = generators['C'] - fault_codes['-4-6-'] - fault_codes['-5-6-'] - fault_codes['-6-7-'] - c_discarded
c_increment
d_increment = generators['D'] - fault_codes['-4-7-'] - fault_codes['-6-7-'] - d_discarded
d_increment
# ### Discard Charts
#plt.title('Discarded Bar')
plt.bar(discarded.keys(), discarded.values())
plt.show()
#plt.savefig('discarded bar.png', format='png')
# +
keys, values = list(discarded.keys()), list(discarded.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discarded Pie")
plt.show()
#plt.savefig('discard pie.png', format='png')
# -
# ### Discard Percentage Charts
#plt.title('Discard Percentage')
plt.bar(discarded_percentage.keys(), discarded_percentage.values())
plt.show()
#plt.savefig('sensorfusion.png', format='png')
# +
keys, values = list(discarded_percentage.keys()), list(discarded_percentage.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " (%) = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discard Percentage")
plt.show()
#plt.savefig('discard percentage pie.png')
# -
# ### Toggle Time vs Frequency of Generators
toggle_times = {'A': 620, 'B': 180, 'C': 490, 'D': 270}
# #### Premise
# $faults\,generated \propto \frac{1}{toggle\,time}$
#
# $\therefore B > D > C > A$
# ### Generator Output Charts (Possibilities of Faults)
generators['A']
#plt.title('Generator Output (Possibilities of Faults)')
plt.bar(generators.keys(), generators.values())
plt.show()
#plt.savefig('generator output bar.png')
# +
keys, values = list(generators.keys()), list(generators.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = "n (" + str(legend_keys[i]) + ") = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Generator Output Charts (Possibilities of Fault)")
#plt.show()
plt.savefig('generator output pie.png')
# -
# ### Single-Run Fault Charts
# +
chart_data = copy.copy(fault_codes)
values = list(chart_data.values())
keys = list(chart_data.keys())
plt.bar(keys, values)
#plt.title('Single-Run')
plt.show()
#plt.savefig('single-run bar.png')
# +
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values,
textprops=dict(color="w"),
wedgeprops=dict(width=0.5))
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " " + str(values[i]) + " " + "times"
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
plt.title("Single-Run")
plt.show()
plt.savefig('single-run pie.png')
# -
# ### Cumulative Faults Chart
# +
fileName = "D:/code/C++/RT-Cadmium-FDD-New-Code/knowledge_base/_fault_codes_dir/_fault_codes_list.txt"
with open(fileName, "r") as f:
lines = f.readlines()
total = {}
for line in lines:
    res = re.findall(r"\d+[-]*\d*", line)
if len(res) > 0:
        total[res[0]] = int(res[1])  # counts must be numeric, not strings, for the bar chart below
# +
values = list(total.values())
keys = list(total.keys())
plt.bar(keys, values)
plt.title('Cumulative')
plt.show()
#plt.savefig('single-run bar.png')
# +
values = list(total.values())
keys = list(total.keys())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
legend_keys[i] = str(legend_keys[i]) + " " + str(values[i]) + " " + 'times'
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
plt.title("Cumulative")
plt.show()
#plt.savefig('cumulative pie.png')
# -
| Sensor_Fusion_New_Fault_Codes_WD/.ipynb_checkpoints/Sensor_Fusion_New_Fault_Codes_WD-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Author: [@Lednik7](https://github.com/Lednik7)
#
# [](https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/RuCLIP_onnx_example.ipynb)
# + pycharm={"name": "#%%\n"}
#@title Allowed Resources
import multiprocessing
import torch
from psutil import virtual_memory
ram_gb = round(virtual_memory().total / 1024**3, 1)
print('CPU:', multiprocessing.cpu_count())
print('RAM GB:', ram_gb)
print("PyTorch version:", torch.__version__)
print("CUDA version:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:", device.type)
# !nvidia-smi
# + [markdown] id="whlsBiJgR8le"
# ## Restart colab session after installation
# Reload the session if something doesn't work
# + id="HnbpAkvuR73L"
# %%capture
# !pip install git+https://github.com/Lednik7/CLIP-ONNX.git
# !pip install ruclip==0.0.1rc7
# !pip install onnxruntime-gpu
# + id="tqy0zKM4R-7M"
# %%capture
# !wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true
# + colab={"base_uri": "https://localhost:8080/"} id="x8IN72OnSAIh" outputId="3174cf2c-ace3-4e1f-a550-e16c72302d51"
import onnxruntime
# priority device (if available)
print(onnxruntime.get_device())
# + [markdown] id="8_wSsSheT5mw"
# ## RuCLIP
# WARNING: RuCLIP uses the forward signature model(text, image), unlike the classic (OpenAI CLIP) model(image, text)
# + id="gZTxanR26knr"
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
# + id="FdTLuqsJUBFY"
import ruclip
# onnx cannot export with cuda
model, processor = ruclip.load("ruclip-vit-base-patch32-384", device="cpu")
# + id="rPwc6A2SSGyl"
from PIL import Image
import numpy as np
# simple input
pil_images = [Image.open("CLIP.png")]
labels = ['диаграмма', 'собака', 'кошка']
dummy_input = processor(text=labels, images=pil_images,
return_tensors='pt', padding=True)
# batch first
image = dummy_input["pixel_values"] # torch tensor [1, 3, 384, 384]
image_onnx = dummy_input["pixel_values"].cpu().detach().numpy().astype(np.float32)
# batch first
text = dummy_input["input_ids"] # torch tensor [3, 77]
text_onnx = dummy_input["input_ids"].cpu().detach().numpy()[::-1].astype(np.int64)
# + colab={"base_uri": "https://localhost:8080/"} id="pv0mH626SdzO" outputId="d563462f-b2a9-4d49-b491-17e88ffa81f0"
#RuCLIP output
logits_per_image, logits_per_text = model(text, image)
probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()
print("Label probs:", probs) # prints: [[0.9885839 0.00894288 0.0024732 ]]
# + [markdown] id="R_e5OjJeXRiF"
# ## Convert RuCLIP model to ONNX
# + colab={"base_uri": "https://localhost:8080/"} id="oYM5FDSGSJBW" outputId="c647dc2e-946d-4769-c66e-77edfa98237f"
from clip_onnx import clip_onnx
visual_path = "clip_visual.onnx"
textual_path = "clip_textual.onnx"
onnx_model = clip_onnx(model, visual_path=visual_path, textual_path=textual_path)
onnx_model.convert2onnx(image, text, verbose=True)
# + [markdown] id="U1Pr-YTtSEhs"
# ## [ONNX] CPU inference mode
# + id="aY9wRe5kT3wG"
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
onnx_model.start_sessions(providers=["CPUExecutionProvider"]) # cpu mode
# + colab={"base_uri": "https://localhost:8080/"} id="tYVuk72nSLw6" outputId="75bf3803-6ed7-4516-ccd0-42f9cf7f22e0"
image_features = onnx_model.encode_image(image_onnx)
text_features = onnx_model.encode_text(text_onnx)
logits_per_image, logits_per_text = onnx_model(image_onnx, text_onnx)
probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()
print("Label probs:", probs) # prints: Label probs: [[0.90831375 0.07174418 0.01994203]]
# + colab={"base_uri": "https://localhost:8080/"} id="Bpu4_HFRVeNk" outputId="e8f1681b-40dc-495f-d382-f0348d87c412"
# %timeit onnx_model.encode_text(text_onnx) # text representation
# + colab={"base_uri": "https://localhost:8080/"} id="JsOccP2gVmpo" outputId="adb33860-b000-461b-959f-95126e2ac049"
# %timeit onnx_model.encode_image(image_onnx) # image representation
# + [markdown] id="Zww0E-jIULug"
# ## [ONNX] GPU inference mode
# + id="PBakYeiQUOAm"
onnx_model.start_sessions(providers=["CUDAExecutionProvider"]) # cuda mode
# + colab={"base_uri": "https://localhost:8080/"} id="EjvRBvCaWJBL" outputId="07426652-1cc5-4713-c355-fb4f1bd138d4"
# %timeit onnx_model.encode_text(text_onnx) # text representation
# + colab={"base_uri": "https://localhost:8080/"} id="pmu4mQCsWJ8w" outputId="5cb45026-dfd3-419d-e5d3-f5d0d9681cd0"
# %timeit onnx_model.encode_image(image_onnx) # image representation
| jupyters/RuCLIP_onnx_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 8.6
# language: ''
# name: sagemath
# ---
# +
import time
class Node:
def __init__(self, prime, modulus, product, children):
self.prime = prime # prime that the modulus is based on
self.modulus = modulus # which mod to take at the node
self.product = product # what the product that the node "fills"
self.children = children # children in tree
self.parent = None # parent in tree
self.value = None # value of total matrix product currently at node
def setParent(self, parent):
self.parent = parent
return
def getParent(self):
return self.parent
def setValue(self, value):
self.value = value
return
def getValue(self):
return self.value
def getChildren(self):
return self.children
def getModulus(self):
return self.modulus
def getProduct(self):
return self.product
def getPrime(self):
return self.prime
N = 2*10^6
p = 1
P = Primes()
c = 1
# start time
startTime = time.time()
### INITIALIZE START VALUE ###
startValue = 1
### INITIALIZE A ###
AList = []
i = 0
while i < N:
AList.append(i)
i = i+1
### SET UP FACTOR TREE ###
nodetree = []
level = 0
nodetree.append([]) # bottom level
AIndex = 1
# determine the initial product that shows up in all products
while P.next(p) <= N:
p = P.next(p)
# if p doesn't satisfy some condition
if False:
continue
AProd = 1
for i in range(AIndex, p):
AProd = AProd * AList[i]
startValue = startValue * AProd
AIndex = p
break
# initialize all nodes in first (prime) layer except last node
while P.next(p) <= N:
p = P.next(p)
# if p doesn't satisfy some condition
if False:
continue
AProd = 1
for i in range(AIndex, p):
AProd = AProd * AList[i]
nodetree[0].append(Node(AIndex, AIndex^2, AProd, None)) # Note: AIndex = value of previous p
AIndex = p
# initialize final node in first (prime) layer
nodetree[0].append(Node(AIndex, AIndex^2, 1, None))
while len(nodetree[-1]) > 1:
level = level + 1
nodetree.append([]) # add another level
for i in range(len(nodetree[level-1])/2):
# get children
node1 = nodetree[level-1][2*i]
node2 = nodetree[level-1][2*i+1]
# creates next layer node
nodetree[level].append(Node(node1.getPrime()*node2.getPrime(), node1.getModulus()*node2.getModulus(), node1.getProduct()*node2.getProduct(), [node1, node2]))
# set parent of children
node1.setParent(nodetree[level][i])
node2.setParent(nodetree[level][i])
if len(nodetree[level-1]) % 2 == 1:
# moves last node to next layer if there exists one
nodetree[level].append(nodetree[level-1].pop())
#for list in nodetree:
# print([n.getModulus() for n in list])
#for list in nodetree:
# print([n.getProduct() for n in list])
### COMPUTE MODS ###
root = nodetree[-1][0]
root.setValue(mod(startValue, root.getModulus()))
nodequeue = []
child1 = root.getChildren()[0]
child2 = root.getChildren()[1]
child1.setValue(root.getValue().mod(child1.getModulus()))
child2.setValue(mod(root.getValue()*child1.getProduct(), child2.getModulus()))
nodequeue.append(child1)
nodequeue.append(child2)
while len(nodequeue) > 0:
parent = nodequeue.pop(0)
if parent.getChildren() == None:
if parent.getModulus() - parent.getValue() <= 3:
print(parent.getPrime(), parent.getValue(), parent.getModulus())
continue
child1 = parent.getChildren()[0]
child2 = parent.getChildren()[1]
child1.setValue(mod(parent.getValue(), child1.getModulus()))
child2.setValue(mod(parent.getValue()*child1.getProduct(), child2.getModulus()))
nodequeue.append(child1)
nodequeue.append(child2)
# end time
endTime = time.time()
print("elapsed time: " + str(endTime-startTime))
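# The tree's correctness rests on a simple fact: reducing the full product modulo the product of all node moduli once at the root, then reducing down to each leaf, agrees with reducing the product directly by each modulus. A plain-Python sanity check (the notebook uses prime-square moduli; plain primes are used here only to keep the example tiny), with Wilson's theorem at p = 13 as a bonus:

```python
from math import prod, factorial

moduli = [7, 11, 13]
big = factorial(12)                      # the product 1*2*...*12

# One expensive reduction at the "root", then cheap reductions at the "leaves".
root = big % prod(moduli)
via_tree = [root % m for m in moduli]
direct = [big % m for m in moduli]
```

# via_tree equals direct because each modulus divides the root modulus, and
# 12! % 13 == 12 illustrates Wilson's theorem, (p-1)! ≡ -1 (mod p).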
| archives/Remainder Tree via data structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: autumn310
# language: python
# name: autumn310
# ---
# +
from autumn.tools.project import get_project
from matplotlib import pyplot
from autumn.tools.plots.utils import REF_DATE
# from autumn.tools.calibration.targets import get_target_series
import pandas as pd
from autumn.tools.utils.display import pretty_print
from autumn.tools.plots.uncertainty.plots import _get_target_values, _plot_targets_to_axis
# -
project = get_project("sm_sir", "hanoi")
# +
update_params = {
"immunity_stratification.prop_high_among_immune": 0.0,
}
params = project.param_set.baseline.update(update_params, calibration_format=True)
# -
model = project.run_baseline_model(params)
derived_df = model.get_derived_outputs_df()
# +
outputs = [
# "notifications",
"hospital_occupancy",
# "icu_occupancy",
"infection_deaths",
"cdr"
]
for output in outputs:
fig = pyplot.figure(figsize=(12, 8))
pyplot.style.use("ggplot")
axis = fig.add_subplot()
axis = derived_df[output].plot()
axis.set_title(output)
if output in project.plots:
values, times = _get_target_values(project.plots, output)
date_times = pd.to_datetime(times, origin="01Jan2020", unit="D")
_plot_targets_to_axis(axis, values, date_times, on_uncertainty_plot=True)
fig.savefig(f"./Hanoi_manual_calibration_{output}.png")
| notebooks/user/hango/smsir_hanoi_manual_calibration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ox)
# language: python
# name: ox
# ---
# # OSMnx + igraph
#
# Author: [<NAME>](https://geoffboeing.com/)
#
# First install [igraph](https://igraph.org/python/) or run Jupyter from the [Docker container](https://github.com/gboeing/osmnx/tree/master/docker) (which already has it installed along with OSMnx and NetworkX).
#
# - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
# - [GitHub repo](https://github.com/gboeing/osmnx)
# - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
# - [Documentation](https://osmnx.readthedocs.io/en/stable/)
# - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
# +
import igraph as ig
import networkx as nx
import numpy as np
import operator
import osmnx as ox
# %matplotlib inline
ox.config(use_cache=True, log_console=True)
weight = 'length'
ox.__version__
# -
# ## Construct graphs
# create networkx graph
G_nx = ox.graph_from_place('Piedmont, CA, USA', network_type='drive')
G_nx = nx.relabel.convert_node_labels_to_integers(G_nx)
# %%time
# convert networkx graph to igraph
G_ig = ig.Graph(directed=True)
G_ig.add_vertices(list(G_nx.nodes()))
G_ig.add_edges(list(G_nx.edges()))
G_ig.vs['osmid'] = list(nx.get_node_attributes(G_nx, 'osmid').values())
G_ig.es[weight] = list(nx.get_edge_attributes(G_nx, weight).values())
assert len(G_nx.nodes()) == G_ig.vcount()
assert len(G_nx.edges()) == G_ig.ecount()
# ## Shortest paths
source = list(G_nx.nodes())[0]
target = list(G_nx.nodes())[-1]
# %%time
path1 = G_ig.get_shortest_paths(v=source, to=target, weights=weight)[0]
# %%time
path2 = nx.shortest_path(G_nx, source, target, weight=weight)
assert path1 == path2
# %%time
path_length1 = G_ig.shortest_paths(source=source, target=target, weights=weight)[0][0]
# %%time
path_length2 = nx.shortest_path_length(G_nx, source, target, weight)
assert path_length1 == path_length2
# ## Centrality analysis
# %%time
closeness1 = G_ig.closeness(vertices=None, mode='ALL', cutoff=None, weights=weight, normalized=True)
max_closeness1 = np.argmax(closeness1)
# %%time
closeness2 = nx.closeness_centrality(G_nx, distance=weight, wf_improved=True)
max_closeness2 = max(closeness2.items(), key=operator.itemgetter(1))[0]
assert G_nx.nodes[max_closeness2]['osmid'] == G_ig.vs[max_closeness1]['osmid']
| notebooks/14-osmnx-to-igraph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
print 'Make the "Get the Data" widget code.'
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
fileNames = gitSoup.select('.js-directory-link') #get tag with URL for each file
return fileNames
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href'))
print urls
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
return urls
#print urls
for u in urls:
v = u.replace("/InsideEnergy/Data-for-stories/blob/master", "")
return urls
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# +
import bs4
import requests
def stripUrls(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
#return urls
#print urls
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w)
return halfUrls
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
stripUrls(myFolder)
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
#return urls
#print urls
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w)
csvFile = halfUrls[0]
codeHasCsv = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % csvFile
print codeHasCsv
xlsFile = halfUrls[1]
codeHasXls = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % xlsFile
print codeHasXls
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
#return urls
#print urls
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w)
csvFile = halfUrls[0]
codeHasCsv = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % csvFile
#print codeHasCsv
xlsFile = halfUrls[1]
codeHasXls = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % xlsFile
#print codeHasXls
widgetCode = "<small><strong> Get the data: <a href='" + codeHasCsv + "'>CSV</a> | <a href='" + codeHasXls + "'>XLS</a> | <a href='GOOGLE SHEETS LINK YOU JUST MADE' target='_blank'>Google Sheets</a> | Source and notes: <a href='" + folder + "'>Github</a> </strong></small>"
print widgetCode
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# +
import bs4
import requests
def makeCode(folder, sheet):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w) #strip extra stuff off front of url
csvFile = halfUrls[0]
codeHasCsv = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % csvFile
xlsFile = halfUrls[1]
codeHasXls = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % xlsFile
#now concatenate the code together
widgetCode = "<small><strong> Get the data: <a href='" + codeHasCsv + "'>CSV</a> | <a href='" + codeHasXls + "'>XLS</a> | <a href='" + sheet + "' target='_blank'>Google Sheets</a> | Source and notes: <a href='" + folder + "'>Github</a> </strong></small>"
print widgetCode
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
print "Enter Google Sheets URL for public viewing:"
mySheet = raw_input()
makeCode(myFolder, mySheet)
# +
import bs4
import requests
def makeCode(folder, sheet):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w) #strip extra stuff off front of url
csvFile = halfUrls[0]
codeHasCsv = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % csvFile
xlsFile = halfUrls[1]
codeHasXls = "http://rawgit.com/insideenergy/Data-for-stories/master%s" % xlsFile
#now concatenate the code together
widgetCode = "<small><strong> Get the data: <a href='" + codeHasCsv + "'>CSV</a> | <a href='" + codeHasXls + "'>XLS</a> | <a href='" + sheet + "' target='_blank'>Google Sheets</a> | Source and notes: <a href='" + folder + "'>Github</a> </strong></small>"
print widgetCode
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
print "Enter Google Sheets URL for public viewing:"
mySheet = raw_input()
print "~~~~~~~~~~Widget Code - Paste this below your chart~~~~~~~~~~"
makeCode(myFolder, mySheet)
# +
#new function needs to strip off .csv and .xlsx
#needs to say, if two items match, get rid of duplicate
#then add each into its own widget code, enter new sheets input for each one
# +
import bs4
import requests
def stripUrls(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
#return urls
#print urls
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w) #strip extra stuff off front of url
return halfUrls
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
stripUrls(myFolder)
# +
import bs4
import requests
def stripUrls(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = []
for f in files:
urls.append(f.get('href')) #put urls into list
# print urls
for u in urls:
if "README.md" in u:
urls.remove(u) #get README out of list
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w) #strip extra stuff off front of url
print halfUrls
justFolders = []
for x in halfUrls:
if ".csv" in x:
y = x.replace(".csv", "")
justFolders.append(y)
if ".xlsx" in x:
z = x.replace(".xlsx", "")
justFolders.append(z)
print justFolders #gets file extensions off
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
stripUrls(myFolder)
# +
import bs4
import requests
def makeCode(folder):
x = requests.get(folder)
x.raise_for_status()
gitSoup = bs4.BeautifulSoup(x.text)
files = gitSoup.select('.js-directory-link') #get tag with URL for each file
urls = [f.get('href') for f in files if 'README.md' not in f.get('href')] #put urls in a list without the README file
halfUrls = []
for v in urls:
if "/InsideEnergy/Data-for-stories/blob/master" in v:
w = v.replace("/InsideEnergy/Data-for-stories/blob/master", "")
halfUrls.append(w) #strip extra stuff off front of url
justFolders = []
for x in halfUrls:
if ".csv" in x:
y = x.replace(".csv", "")
justFolders.append(y)
if ".xlsx" in x:
z = x.replace(".xlsx", "")
justFolders.append(z) #gets file extensions off
noDuplicates = []
for z in justFolders:
if z not in noDuplicates:
noDuplicates.append(z) #gets rid of duplicates
#now concatenate a code for each folder name, and ask for the corresponding Google Sheets URL
for i in noDuplicates:
print "Enter the Google Sheets URL for public viewing that corresponds with " + i
mySheet = raw_input()
print "~~~~~~~~~~Widget code for " + i + "~~~~~~~~~~"
print
print '<small><strong> Get the data: <a href="http://rawgit.com/insideenergy/Data-for-stories/master' + i + '.csv">CSV</a> | <a href="http://rawgit.com/insideenergy/Data-for-stories/master' + i + '.xlsx">XLS</a> | <a href="' + mySheet + '" target="_blank">Google Sheets</a> | Source and notes: <a href="' + folder + '">Github</a> </strong></small>'
print
print 'Make the "Get the Data" widget code.'
print "Enter GitHub URL of your new folder inside 'Data-for-stories':"
myFolder = raw_input()
makeCode(myFolder)
# -
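# The final `makeCode` above mixes scraping, cleaning, and output. The extension-stripping and de-duplication steps noted in the "new function" comments can be sketched in isolation (Python 3 syntax; the file names below are hypothetical examples):

```python
def strip_extensions(half_urls):
    """Drop .csv/.xlsx extensions and de-duplicate, preserving order."""
    just_folders = []
    for u in half_urls:
        if u.endswith(".csv"):
            just_folders.append(u[: -len(".csv")])
        elif u.endswith(".xlsx"):
            just_folders.append(u[: -len(".xlsx")])
    no_duplicates = []
    for f in just_folders:
        if f not in no_duplicates:
            no_duplicates.append(f)  # keep only the first occurrence
    return no_duplicates

# hypothetical file names for illustration
print(strip_extensions(["/coal/coal-data.csv", "/coal/coal-data.xlsx"]))
# → ['/coal/coal-data']
```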
| github automation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <table><tr>
# <td style="background-color:#ffffff;text-align:left;"><a href="http://qworld.lu.lv" target="_blank"><img src="images\qworld.jpg" width="30%" align="left"></a></td>
# <td style="background-color:#ffffff;"> </td>
# <td style="background-color:#ffffff;vertical-align:text-middle;text-align:right;">
# <table><tr style="background-color:white;">
# <td> Visit</td>
# <td><a href="http://qworld.lu.lv" target="_blank"><img src="images/web-logo.png" width="35px"></a></td>
# <td width="10pt"></td>
# <td> Join</td>
# <td><a href="https://qworldworkspace.slack.com/" target="_blank"><img src="images/slack-icon.png" width="80px"></a></td>
# <td width="10pt"></td>
# <td>Follow</td>
# <td><a href="https://www.facebook.com/qworld19/" target="_blank"><img src="images/facebook-icon.png" width="40px"></a></td>
# <td><a href="https://twitter.com/QWorld19" target="_blank"><img src="images/twitter-icon.png" width="40px"></a></td>
# </tr></table>
# </td>
# </tr></table>
# <h1 align="center" style="color: #cd7f32;"> Welcome to QWorld's Bronze </h1>
# <font style="color: #cd7f32;size:+1;"><b>Bronze</b></font> is our introductory tutorial to introduce the basics of quantum computation and quantum programming. It is a big collection of [Jupyter notebooks](https://jupyter.org).
#
# Bronze can be used to organize two- or three-day workshops or to design a one-semester course for second- or third-year university students. In Bronze, we focus on real numbers and skip complex numbers to keep the tutorial simpler. Here is a complete list of our workshops using Bronze: <a href="http://qworld.lu.lv/index.php/workshop-bronze/#list" target="_blank">QWBronze</a>.
#
# *If you are using Jupyter notebooks for the first time, you can check our very short <a href="python/Python02_Into_Notebooks.ipynb" target="_blank">Introduction for Notebooks</a>.*
#
# **The open-source toolkit we are using**
# - Programming language: <a href="https://www.python.org" target="_blank">python</a>
# - Quantum programming libraries:</u> <a href="https://qiskit.org" target="_blank">Qiskit</a> is the main library at the moment. We are extending Bronze to use <a href="https://projectq.ch" target="_blank">ProjectQ</a>, <a href="https://github.com/quantumlib/Cirq" target="_blank">Cirq</a>, and <a href="http://docs.rigetti.com/en/stable/" target="_blank">pyQuil</a>.
# - We use <a href="https://www.mathjax.org" target="_blank">MathJax</a> to display mathematical expressions on html files (e.g., exercises).
# - We use open source interactive tool <a href="http://play.quantumgame.io" target="_blank">quantumgame</a> for showing quantum coin flipping experiments.
#
# **Support:** Please use the _#general_ channel under <a href="https://qworldworkspace.slack.com/" target="_blank">QWorld's slack workspace</a> to ask your questions.
# ### Installation and Test
#
# _Python libraries, including quantum ones, are often updated, so problems may appear due to version differences. You can ask your questions in the #general channel under [QWorld's slack workspace](https://qworldworkspace.slack.com/) if you need help with the installations._
#
# Before starting to use Bronze, please test your system by using the following notebook(s)!
#
# Qiskit is the main quantum programming library, and you should install it to follow all of Bronze. Installation of the other programming libraries is optional at the moment.
#
# <ul>
# <li><a href="test/Qiskit_installation_and_test.ipynb" target="_blank">Qiskit installation and test</a></li>
# <li>ProjectQ installation and test (very soon)</li>
# <li>Cirq installation and test (very soon)</li>
# <li>pyQuil installation and test (very soon)</li>
# </ul>
# <h1 style="color: #cd7f32;"> Content </h1>
#
# [Credits](bronze/B00_Credits.ipynb)
#
# ### References
#
# [Python Reference](python/Python04_Quick_Reference.ipynb) |
# [Python: Drawing](python/Python06_Drawing.ipynb) |
# [Qiskit Reference](bronze/B01_Qiskit_Reference.ipynb)
#
# ### Python review
#
# [Variables](python/Python08_Basics_Variables.ipynb) |
# [Loops](python/Python12_Basics_Loops.ipynb) |
# [Conditionals](python/Python16_Basics_Conditionals.ipynb) |
# [Lists](python/Python16_Basics_Conditionals.ipynb)
#
# ### Basic math
#
# [Vectors](math/Math20_Vectors.ipynb) |
# [Dot Product](math/Math24_Dot_Product.ipynb) |
# [Matrices](math/Math28_Matrices.ipynb) |
# [Tensor Product](math/Math32_Tensor_Product.ipynb) |
# [Exercises](exercises/E05_Basic_Math.ipynb)
#
# ### Basics of classical systems
#
# [One Bit](bronze/B03_One_Bit.ipynb) |
# [Coin Flipping](bronze/B06_Coin_Flip.ipynb) |
# [Coin Flipping Game](bronze/B09_Coin_Flip_Game.ipynb) |
# [Probabilistic States](bronze/B12_Probabilistic_States.ipynb) |
# [Probabilistic Operators](bronze/B15_Probabilistic_Operators.ipynb)
#
# [Two Probabilistic Bits](bronze/B17_Two_Probabilistic_Bits.ipynb) |
# [Freivalds [optional]](bronze/B17-5_Freivalds.ipynb) |
# [Correlation](bronze/B18_Correlation.ipynb) |
# [Exercises](exercises/E09_Probabilistic_Systems.ipynb)
#
# ### Basics of a quantum program (circuit)
#
# [Qiskit](bronze/B20_First_Quantum_Programs_by_Qiskit.ipynb) |
# _ProjectQ (very soon)_ |
# _Cirq (very soon)_ |
# _pyQuil (very soon)_
#
# ### Basics of quantum systems
#
# [Quantum Coin Flipping](bronze/B20_Quantum_Coin_Flipping.ipynb) |
# [Hadamard Operator](bronze/B24_Hadamard.ipynb) |
# [One Qubit](bronze/B26_One_Qubit.ipynb) |
# [Quantum State](bronze/B28_Quantum_State.ipynb)
#
# [Visualization of a (Real-Valued) Qubit](bronze/B30_Visualization_of_a_Qubit.ipynb) |
# [Superposition and Measurement](bronze/B34_Superposition_and_Measurement.ipynb) |
# [Exercises](exercises/E13_Basics_of_Quantum_Systems.ipynb)
#
# ### Quantum operators on a (real-valued) qubit
#
# [Operations on the Unit Circle](bronze/B40_Operations_on_the_Unit_Circle.ipynb) |
# [Rotations](bronze/B42_Rotations.ipynb) |
# [Reflections](bronze/B44_Reflections.ipynb) |
# [Quantum Tomography](bronze/B48_Quantum_Tomography.ipynb) |
# [Exercises](exercises/E16_Quantum_Operators_on_a_Real-Valued_Qubit.ipynb)
#
# ### Quantum correlation (entanglement)
#
# [Two Qubits](bronze/B50_Two_Qubits.ipynb) |
# [Phase Kickback](bronze/B52_Phase_Kickback.ipynb) |
# [Entanglement and Superdense Coding](bronze/B54_Superdense_Coding.ipynb) |
# [Quantum Teleportation](bronze/B56_Quantum_Teleportation.ipynb) |
# [Exercises](exercises/E18_Quantum_Correlation.ipynb)
#
# ### Multi-qubit control operators
#
# [Rotation Automata (optional)](bronze/B72_Rotation_Automata.ipynb) |
# [Multiple Control Constructions](bronze/B74_Multiple_Control_Constructions.ipynb) |
# [Multiple Rotations (optional)](bronze/B76_Multiple_Rotations.ipynb)
#
# ### Grover's search algorithm
#
# [Inversion About the Mean](bronze/B80_Inversion_About_the_Mean.ipynb) |
# [Grover's Search: One Qubit Representation](bronze/B88_Grovers_Search_One_Qubit_Reprsentation.ipynb) |
# [Grover's Search: Implementation](bronze/B92_Grover_Search_Implementation.ipynb)
# <h1 style="color: #cd7f32;"> Projects </h1>
#
# *Difficulty levels:
# easy (<font size="+1" color="7777ee">★</font>),
# medium (<font size="+1" color="7777ee">★★</font>), and
# hard (<font size="+1" color="7777ee">★★★</font>).*
#
# <font size="+1" color="7777ee"> ★</font> |
# [Correlation Game](projects/Project_Correlation_Game.ipynb) *on classical bits*
# \
# <font size="+1" color="7777ee"> ★</font> |
# [Swapping Quantum States](projects/Project_Swapping_Quantum_States.ipynb) *on qubits*
# \
# <font size="+1" color="7777ee"> ☆★</font> |
# [Simulating a Real-Valued Qubit](projects/Project_Simulating_a_RealValued_Qubit.ipynb)
# \
# <font size="+1" color="7777ee"> ★★</font> |
# [Quantum Tomography with Many Qubits](projects/Project_Quantum_Tomography_with_Many_Qubits.ipynb)
# \
# <font size="+1" color="7777ee"> ★★</font> |
# [Implementing Quantum Teleportation](projects/Project_Implementing_Quantum_Teleportation.ipynb)
# \
# <font size="+1" color="7777ee">☆★★</font> |
# [Communication via Superdense Coding](projects/Project_Communication_via_Superdense_Coding.ipynb)
# \
# <font size="+1" color="7777ee">★★★</font> |
# [Your Quantum Simulator](projects/Project_Your_Quantum_Simulator.ipynb)
| index_bronze.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### CRC183 Summer School "Machine Learning in Condensed Matter Physics"
# # Hands-on session: Building deep learning models with JAX/Flax
#
# ## Installing JAX, NetKet & Co.
#
# In today's session you will learn the basics of JAX and the NetKet library. Installing NetKet is relatively straightforward, and it will automatically install JAX as a dependency.
#
# For this tutorial, **if you are running it locally on your machine**, we recommend that you create a clean virtual environment and install NetKet inside it:
#
# ```bash
# python3 -m venv netket
# source netket/bin/activate
# pip install --pre netket
# ```
#
# If you are wondering why we use the flag ```--pre```, it is because today we will be working with a pre-release (beta) of version 3.0.
#
# **If you are on Google Colab**, run the following cell to install the required packages.
# !pip install --pre -U netket
# You can check that the installation was successful by running
import jax
# ## JAX
#
# [JAX](https://github.com/google/jax) is a Python library that provides essential functionality for deep learning applications, namely
#
# * **automatic differentiation**
# * **vectorization**
# * **just-in-time compilation** to GPU/TPU accelerators
#
# It is being developed "for high-performance machine learning research" and as such is well suited for physics applications. The abovementioned features are implemented in a system of composable function transformations that "process" Python functions.
#
# ## `jax.numpy`
#
# The function transformations of JAX rely on functions being written in a "JAX-intelligible" form. For this purpose the `jax.numpy` sub-module emulates almost the whole NumPy functionality. When using JAX, any array operations should be written using `jax.numpy`.
import numpy as np
import jax.numpy as jnp
# The basic object is the `jnp.array`:
arr = jnp.array([1,2,3])
print(arr)
# NumPy arrays and `jax.numpy` arrays can be converted into each other:
np_array = np.array(arr)
type(np_array)
type(jnp.array(np_array))
# Most NumPy functions have an equivalent in `jax.numpy`, e.g. `sum()`:
print(jnp.sum(arr))
# ## JAX random numbers
#
# JAX implements a pseudo-random number generator (PRNG) in the `jax.random` submodule. The JAX PRNG handles the state of the PRNG explicitly in the form of *keys* that have to be passed around, and which can be split in order to fork the PRNG state into new PRNGs.
#
# Let's generate an initial key:
seed=1234
key = jax.random.PRNGKey(seed)
print(key)
# Any function in `jax.random` that generates random numbers takes a key as argument, e.g.
jax.random.normal(key, (3,3))
# Passing the same key results in the same set of random numbers
jax.random.normal(key, (3,3))
# To get a different sequence of random numbers, we need to generate a new key by splitting the original one:
# +
key, newkey = jax.random.split(key)
jax.random.normal(newkey, (3,3))
# -
# ## Warning of "sharp edges" in JAX
#
# The ability to perform the various function transformations imposes a number of **constraints on how those functions are written**. Before diving deeper into the JAX library, reading ["JAX - the sharp bits"](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html), which summarizes the main pitfalls when working with it, is highly recommended.
#
# The most important points are
#
# * **Pure functions**: All input data have to be passed to the function as arguments and all results given as return values. Don't work with global variables or side effects. Some workarounds are needed to reconcile JAX with object-oriented code.
# * **Control flow** needs some care. Replace Python ``for``-loops or ``if``-branching with primitives from the ``jax.lax`` sub-module, [see documentation](https://jax.readthedocs.io/en/latest/jax.lax.html#control-flow-operators).
# * **JAX arrays are immutable**: As a python programmer you are used to updating array elements with ``array[13,:]=3.1415``. Unfortunately, this does not work with JAX arrays. Instead you need to use JAX's indexed update functionality, e.g. in this case ``array.at[13, :].set(3.1415)``. [See the documentation for details](https://jax.readthedocs.io/en/latest/jax.ops.html#indexed-update-operators).
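# For instance, the indexed-update syntax from the last point can be tried directly (a minimal sketch):

```python
import jax.numpy as jnp

arr = jnp.zeros((3, 3))
# arr[1, 1] = 5.0 would raise an error: JAX arrays are immutable.
# .at[...].set(...) instead returns a *new* array with the update applied:
updated = arr.at[1, 1].set(5.0)
print(updated[1, 1])  # the new array carries the update
print(arr[1, 1])      # the original array is unchanged
```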
# ## Flax
#
# [Flax](https://github.com/google/flax) builds on JAX and provides a framework to easily define arbitrary deep learning models very similar to PyTorch. Moreover, typical building blocks of deep learning models are already implemented in the library.
#
# As an example, let's define a simple feed-forward network with a single layer. It consists of a dense layer, i.e., an affine-linear transformation of the input followed by a $\text{ReLU}(\cdot)$ non-linearity giving the *activations*. Additionally, we include a reduction operation to obtain a single number as output, namely the sum of the activations:
#
# $$
# f_{W,b}(x) = \sum_i \text{ReLU}(W_{i,j}x_j+b_i)
# $$
#
# In Flax this neural network can be implemented as follows:
# +
import flax.linen as nn
class MyFFN(nn.Module):
num_neurons: int
@nn.compact
def __call__(self, x):
z = nn.Dense(self.num_neurons)(x) # affine-linear transformation of the input
a = nn.relu(z) # non-linearity
return jnp.sum(a)
# -
# The code above shows the basic syntax to define a custom model in Flax. A Flax model is a class that inherits from the abstract `flax.linen.Module` class. As part of it we have to implement a `__call__` method that defines how the network is evaluated for a given input.
#
# In the example the `__call__` method is decorated with `@nn.compact`, because we are using the compact definition of a Flax model. This is possible for simple models where we do not have to specify an additional setup procedure for the initialization of the model. For more complex models a `setup` method can be defined for the initialization; in that case the `@nn.compact` decorator has to be dropped.
#
# In addition, we defined the data field `num_neurons`. The data fields of the model class are initialized by the constructor and they can contain the hyperparameters of the model.
#
# Finally, we use some of the pre-implemented building blocks, namely the `nn.Dense` layer for the affine-linear transformation and the activation function `nn.relu`.
#
# Through the `nn.Module` base class our `MyFFN` class will have further methods that allow us to deal with the model. Below we will learn about `init` for parameter initialization and `apply` for evaluation.
#
# Now we can create an instance of this class, passing a value for the `num_neurons` hyperparameter to the constructor:
net = MyFFN(num_neurons=123)
# Before we can work with this model, we also need to get an initial set of parameters. For this purpose we have to call the model's `init` method, which takes an `jax.random.PRNGKey` and an exemplary input datum as arguments:
example_input = jax.random.normal(jax.random.PRNGKey(4321), (28*28,))
params = net.init(jax.random.PRNGKey(1234), example_input)
# The set of parameters is returned in the form of a tree:
params
# Now we can evaluate our neural network on the input data:
net.apply(params, example_input)
# To define models that don't rely on Flax's standard building blocks, parameters can be declared explicitly with the ``self.param`` method:
class MyFFNfromScratch(nn.Module):
num_neurons: int
@nn.compact
def __call__(self, x):
W = self.param('W',
nn.initializers.lecun_normal(), # Initialization function
(self.num_neurons, x.shape[-1])) # shape of the matrix
b = self.param('b', nn.initializers.zeros, (self.num_neurons,))
z = jnp.dot(W,x) + b # affine-linear transformation of the input
a = nn.relu(z) # non-linearity
return jnp.sum(a)
# Initialization and evaluation is identical to the example above:
net1 = MyFFNfromScratch(num_neurons=123)
params1 = net1.init(jax.random.PRNGKey(1234), example_input)
net1.apply(params1, example_input)
# We can check that the ``params1`` dictionary contains our custom parameter names together with the random initial values.
params1
# # Function transformations in JAX
# <h3 style="color: red">This part of the JAX notebook is not required for getting started with NetKet.</h3>
# In the following part, we introduce some essential elements that make JAX so useful. In particular, we will look at just-in-time compilation with `jit`, automatic differentiation with `grad`, and vectorization with `vmap`. These are useful to know not only for working with neural networks, but more generally (see e.g. the neural_schrodinger and coulomb_gas notebooks).
# ## Just-in-time compilation with `jax.jit`
#
# With just-in-time compilation JAX can generate compiled versions of Python functions that run on GPUs/TPUs if available and otherwise on the CPU. This can yield substantial performance gains.
#
# Let's define an example function:
def f(x):
return x*x
# Now we can get a compiled version of the function with
f_jitd = jax.jit(f)
f(3)-f_jitd(3)
# #### Remarks
# * The function returned by `jax.jit` is only compiled at the point when it is first called.
# * The function is compiled for fixed shapes of the arguments that are given at the first call. Therefore, the first call can take substantially longer than subsequent calls.
# * Calling a compiled function with different argument shapes will cause re-compilation.
# ## Automatic differentiation with `jax.grad`
#
# `jax.grad` returns a function that computes the gradient of the input function with respect to its arguments:
df = jax.grad(f)
# Since JAX function transformations are composable, we can also easily compute higher order derivatives:
ddf = jax.grad(jax.grad(f))
# Let's convince ourselves, that JAX's gradients are correct:
# +
import matplotlib.pyplot as plt
x_list = jnp.arange(0, 1.1, .1)
f_values = jnp.array([f(x) for x in x_list])
df_values = jnp.array([df(x) for x in x_list])
ddf_values = jnp.array([ddf(x) for x in x_list])
plt.plot(x_list, f_values, label="f(x)")
plt.plot(x_list, df_values, label="f'(x)")
plt.plot(x_list, ddf_values, label="f''(x)")
plt.legend();
# -
# ## Vectorization with `jax.vmap`
# For a given function that operates on some input data, `jax.vmap` returns a vectorized version of that function, which applies the original function *element-wise* to vectors of input data.
#
# This is very natural, since one focus of JAX is ultimately the execution of Python code on GPUs. On the other hand, JAX inherits some of Python's issues with for-loops. Vectorizing an operation is often a good alternative to writing a for-loop.
#
# Let's vectorize our gradient function:
df_vmapd = jax.vmap(df)
# Now we can call `df_vmapd` directly on a list of input values, so we no longer need the construction `jnp.array([df(x) for x in x_list])` that we used above.
#
# Let's check that the output of both versions is identical:
# +
x_list = jnp.arange(0,1.1,.1)
# The vmap-ed function enables us to evaluate gradients directly on the list of x-values
df_values_with_vmap = df_vmapd(x_list)
df_values = jnp.array([df(x) for x in x_list])
df_values - df_values_with_vmap
| notebooks/01_jax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # More Variables and Printing
# **Now** we’ll do even more typing of variables and printing them out. This time we’ll use something called a “format string.” Every time you put " (double-quotes) around a piece of text, you have been making a string. A string is how you make something that your program might give to a human. You print them, save them to files, send them to web servers, all sorts of things.
#
# Strings are really handy, so in this exercise you will learn how to make strings that have variables embedded in them. You embed variables inside a string by using specialized format sequences and then putting the variables at the end with a special syntax that tells Python, “Hey, this is a format string, put these variables in there.”
#
# As usual, just type this in even if you do not understand it and make it exactly the same.
# ```python
# my_name = '<NAME>'
# my_age = 35 # not a lie
# my_height = 74 # inches
# my_weight = 180 # lbs
# my_eyes = 'Blue'
# my_teeth = 'White'
# my_hair = 'Brown'
#
# print ("Let's talk about %s." % my_name)
# print ("He's %d inches tall." % my_height)
# print ("He's %d pounds heavy." % my_weight)
# print ("Actually that's not too heavy.")
# print ("He's got %s eyes and %s hair." % (my_eyes, my_hair))
# print ("His teeth are usually %s depending on the coffee." % my_teeth)
#
# # this line is tricky, try to get it exactly right
# total = my_age + my_height + my_weight
# print ("If I add %d, %d, and %d I get %d."%(my_age, my_height, my_weight, my_age + my_height + my_weight))
# ```
# ### What You Should See
# <code>
# Let's talk about <NAME>.
# He's 74 inches tall.
# He's 180 pounds heavy.
# Actually that's not too heavy.
# He's got Blue eyes and Brown hair.
# His teeth are usually White depending on the coffee.
# If I add 35, 74, and 180 I get 289.
# </code>
# ### Study Drills
# 1. Change all the variables so there isn’t the my_ in front. Make sure you change the name everywhere, not just where you used = to set them.
# 2. Try more format characters. %r is a very useful one. It’s like saying “print this no matter what.”
# 3. Search online for all the Python format characters.
# 4. Try to write some variables that convert the inches and pounds to centimeters and kilos. Do not just type in the measurements. Work out the math in Python.
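# As an aside (not part of the original exercise), newer Python versions offer `str.format` and f-strings, which do the same job as the `%` formatters used above:

```python
my_height = 74  # inches, as in the exercise

print("He's %d inches tall." % my_height)        # old %-style format string
print("He's {} inches tall.".format(my_height))  # str.format
print(f"He's {my_height} inches tall.")          # f-string (Python 3.6+)
```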
# ### Common Student Questions
# **Can I make a variable like this: 1 = '<NAME>'?**
#
# No, the 1 is not a valid variable name. They need to start with a character, so a1 would work, but 1 will not.
#
# **What does %s, %r, and %d do again?**
#
# You’ll learn more about this as you continue, but they are “formatters.” They tell Python to take the variable on the right and put it in to replace the %s with its value.
#
# **I don’t get it, what is a “formatter”? Huh?**
#
# The problem with teaching you programming is that to understand many of my descriptions, you need to know how to do programming already. The way I solve this is I make you do something, and then I explain it later. When you run into these kinds of questions, write them down and see if I explain it later.
#
# **How can I round a floating point number?**
#
# You can use the round() function like this: round(1.7333).
#
# **I get this error TypeError: 'str' object is not callable.**
#
# You probably forgot the % between the string and the list of variables.
#
# **Why does this not make sense to me?**
#
# Try making the numbers in this script your measurements. It’s weird, but talking about yourself will make it seem more real.
| Exercise-5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib
from datascience import *
# %matplotlib inline
import matplotlib.pyplot as plots
import numpy as np
plots.style.use('fivethirtyeight')
# ## Standard Units
exams = Table.read_table('exams_fa18.csv')
exams.show(5)
exams.hist(overlay=False, bins=20)
def standard_units(x):
"""Convert array x to standard units."""
return (x - np.average(x)) / np.std(x)
# +
midterm_su = standard_units(exams.column('Midterm'))
exams = exams.with_column('Midterm in Standard Units', midterm_su)
final_su = standard_units(exams.column('Final'))
exams = exams.with_column('Final in Standard Units', final_su)
exams.show(10)
# -
exams.hist('Midterm in Standard Units', bins=20)
exams.hist('Final in Standard Units', bins=20)
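# By construction, an array converted to standard units has mean 0 and SD 1, which we can verify on a small made-up array (the scores below are hypothetical):

```python
import numpy as np

def standard_units(x):
    """Convert array x to standard units."""
    return (x - np.average(x)) / np.std(x)

scores = np.array([55., 70., 82., 91., 64.])  # hypothetical exam scores
su = standard_units(scores)
print(np.mean(su), np.std(su))  # mean ≈ 0, SD ≈ 1 (up to floating-point error)
```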
# ## Central Limit Theorem ##
united = Table.read_table('united_summer2015.csv')
united_bins = np.arange(-20, 300, 10)
united
united.hist('Delay', bins=united_bins)
delays = united.column('Delay')
delay_mean = np.mean(delays)
delay_sd = np.std(delays)
delay_mean, delay_sd
percentile(50, delays)
def one_sample_mean(sample_size):
""" Takes a sample from the population of flights and computes its mean"""
sampled_flights = united.sample(sample_size)
return np.mean(sampled_flights.column('Delay'))
one_sample_mean(100)
def ten_thousand_sample_means(sample_size):
means = make_array()
for i in np.arange(10000):
mean = one_sample_mean(sample_size)
means = np.append(means, mean)
return means
sample_means_100 = ten_thousand_sample_means(100)
sample_means_100
len(sample_means_100)
# +
Table().with_column('Mean of 100 flight delays', sample_means_100).hist(bins=20)
print('Population Average:', delay_mean)
# -
sample_means_400 = ten_thousand_sample_means(400)
Table().with_column('Mean of 400 flight delays', sample_means_400).hist(bins=20)
print('Population Average:', delay_mean)
sample_means_900 = ten_thousand_sample_means(900)
# ## Distribution of the Sample Average
means_tbl = Table().with_columns(
'400', sample_means_400,
'900', sample_means_900,
)
means_tbl.hist(bins = np.arange(5, 31, 0.5))
plots.title('Distribution of Sample Average');
united.num_rows
# How many possible sample means are there?
united.num_rows ** 400
delay_mean = np.mean(united.column('Delay'))
delay_sd = np.std(united.column('Delay'))
# +
"""Empirical distribution of random sample means"""
def plot_and_summarize_sample_means(sample_size):
sample_means = ten_thousand_sample_means(sample_size)
sample_means_tbl = Table().with_column('Sample Means', sample_means)
# Print some information about the distribution of the sample means
print("Sample size: ", sample_size)
print("Population mean:", delay_mean)
print("Average of sample means: ", np.mean(sample_means))
print("Population SD:", delay_sd)
print("SD of sample means:", np.std(sample_means))
# Plot a histogram of the sample means
sample_means_tbl.hist(bins=20)
plots.xlabel('Sample Means')
plots.title('Sample Size ' + str(sample_size))
# -
plot_and_summarize_sample_means(100)
39.48 / 3.932
plot_and_summarize_sample_means(400)
39.48 / 1.973
plot_and_summarize_sample_means(625)
39.48 / 1.577
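# The three ratios above (population SD divided by the SD of the sample means) come out close to √100, √400, and √625, illustrating that the SD of the sample mean shrinks like the square root of the sample size:

```python
import numpy as np

pop_sd = 39.48  # population SD of the flight delays, as printed above
for n in [100, 400, 625]:
    # Predicted SD of the sample mean: population SD / sqrt(sample size)
    print(n, round(pop_sd / np.sqrt(n), 3))
```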
| lec/lec27.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dmeoli/NeuroSAT/blob/master/GATQSAT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="md4JZ9dNr-cj" colab={"base_uri": "https://localhost:8080/"} outputId="2e115cb1-8c0b-4d1c-e090-2a1c2f5fcffa"
from google.colab import drive
drive.mount('/content/gdrive')
# + id="PSSjiEf1sjTf" colab={"base_uri": "https://localhost:8080/"} outputId="d098f9fb-171e-4596-f77f-c3c7e0505bfd"
# %cd gdrive/My Drive
# + id="wQDmH68WssAm" colab={"base_uri": "https://localhost:8080/"} outputId="b6ec07ff-83ed-4add-8e3e-0dd0128f5391"
import os
if not os.path.isdir('neuroSAT'):
# !git clone --recurse-submodules https://github.com/dmeoli/neuroSAT
# %cd neuroSAT
else:
# %cd neuroSAT
# !git pull
# + id="FDLLNcrEt_il" colab={"base_uri": "https://localhost:8080/"} outputId="f76b67a8-351c-4d9e-de72-b000fc4f37c8"
# !pip install -r requirements.txt
# + id="HMMK1om7bzWs"
datasets = {'uniform-random-3-sat': {'train': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'val': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'inner_test': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'test': ['uf250-1065', 'uuf250-1065']},
'graph-coloring': {'train': ['flat50-115'],
'val': ['flat50-115'],
'inner_test': ['flat50-115'],
'test': ['flat30-60',
'flat75-180',
'flat100-239',
'flat125-301',
'flat150-360',
'flat175-417',
'flat200-479']}}
# + id="Ypffdwwc60GL" colab={"base_uri": "https://localhost:8080/"} outputId="39325f89-e01f-470c-b611-92f29a8a6c24"
# %cd GQSAT
# + [markdown] id="9hMESDiv-pWG"
# ## Build C++
# + id="nZF8lgna7tGm" colab={"base_uri": "https://localhost:8080/"} outputId="4ff7a314-d43c-4c11-e7eb-b7261436a343"
# %cd minisat
# !sudo ln -s --force /usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy /usr/include/numpy # https://stackoverflow.com/a/44935933/5555994
# !make distclean && CXXFLAGS=-w make && make python-wrap PYTHON=python3.7
# !apt install swig
# !swig -fastdispatch -c++ -python minisat/gym/GymSolver.i
# %cd ..
# + [markdown] id="rH78QOrwiSVt"
# ## Uniform Random 3-SAT
#
# We split *(u)uf50-218* and *(u)uf100-430* into three subsets: 800 training problems, 100 validation, and 100 test problems.
#
# For generalization experiments, we use 100 problems from all the other benchmarks.
#
# To evaluate the knowledge transfer properties of the trained models across different task families, we use 100 problems from all the *graph coloring* benchmarks.
# + id="HGI9Dj-FRXN4"
PROBLEM_TYPE='uniform-random-3-sat'
# + id="7WecpwjxDQGm"
# !bash train_val_test_split.sh "$PROBLEM_TYPE"
# + [markdown] id="nqMVfDNR-_AV"
# ### Add metadata for evaluation (train and validation set)
# + id="bored-emphasis"
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME"
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME"
# + [markdown] id="MTTzlCOm_W4d"
# ### Train
# + id="rising-death"
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
# !python dqn.py \
# --logdir log \
# --env-name sat-v0 \
# --train-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME" \
# --lr 0.00002 \
# --bsize 64 \
# --buffer-size 20000 \
# --eps-init 1.0 \
# --eps-final 0.01 \
# --gamma 0.99 \
# --eps-decay-steps 30000 \
# --batch-updates 50000 \
# --history-len 1 \
# --init-exploration-steps 5000 \
# --step-freq 4 \
# --target-update-freq 10 \
# --loss mse \
# --opt adam \
# --save-freq 500 \
# --grad_clip 1 \
# --grad_clip_norm_type 2 \
# --eval-freq 1000 \
# --eval-time-limit 3600 \
# --core-steps 4 \
# --expert-exploration-prob 0.0 \
# --priority_alpha 0.5 \
# --priority_beta 0.5 \
# --e2v-aggregator sum \
# --n_hidden 1 \
# --hidden_size 64 \
# --decoder_v_out_size 32 \
# --decoder_e_out_size 1 \
# --decoder_g_out_size 1 \
# --encoder_v_out_size 32 \
# --encoder_e_out_size 32 \
# --encoder_g_out_size 32 \
# --core_v_out_size 64 \
# --core_e_out_size 64 \
# --core_g_out_size 32 \
# --activation relu \
# --penalty_size 0.1 \
# --train_time_max_decisions_allowed 500 \
# --test_time_max_decisions_allowed 500 \
# --no_max_cap_fill_buffer \
# --lr_scheduler_gamma 1 \
# --lr_scheduler_frequency 3000 \
# --independent_block_layers 0 \
# --use_attention \
# --heads 3
# + [markdown] id="UIFGn6JJRgdO"
# ### Add metadata for evaluation (test set)
# + id="BmGaDnimPYun"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets['graph-coloring']['inner_test']:
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets['graph-coloring']['test']:
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME"
# + [markdown] id="2H_5kfS8X3wm"
# ### Evaluate
# + id="ptB7-ZibX3yE"
models = {'uf50-218': ('Nov12_14-06-54_c42e8ad320d8',
'model_50003'),
'uuf50-218': ('Nov12_20-35-32_c42e8ad320d8',
'model_50006'),
'uf100-430': ('Nov13_03-55-51_c42e8ad320d8',
'model_50085'),
'uuf100-430': ('Nov14_03-26-36_54337a27a809',
'model_50003')}
# + [markdown] id="djZ1CBXbtzK8"
# We test these trained models on the inner test sets.
# + id="oFxGGZdPX62D"
for SAT_MODEL in models.keys():
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
# do not use models trained on unSAT problems to solve SAT ones
if not (SAT_MODEL.startswith('uuf') and PROBLEM_NAME.startswith('uf')):
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
# + [markdown] id="C1Ux737RuaoN"
# We test the trained models on the outer test sets.
# + id="huspm-fG4ruP"
for SAT_MODEL in models.keys():
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
# do not use models trained on unSAT problems to solve SAT ones
if not (SAT_MODEL.startswith('uuf') and PROBLEM_NAME.startswith('uf')):
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
# + [markdown] id="-H7bHgHblOit"
# We test the trained models on the *graph coloring* test sets, both inners and outers.
# + id="0WBwidOk9EIG"
for SAT_MODEL in models.keys():
# do not use models trained on unSAT problems to solve SAT ones
if not SAT_MODEL.startswith('uuf'):
for PROBLEM_NAME in datasets['graph-coloring']['inner_test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/graph-coloring/test/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
# + id="xfjCQpJzlNOS"
for SAT_MODEL in models.keys():
# do not use models trained on unSAT problems to solve SAT ones
if not SAT_MODEL.startswith('uuf'):
for PROBLEM_NAME in datasets['graph-coloring']['test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/graph-coloring/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
# + [markdown] id="VYw40NNfjCDL"
# ## Graph Coloring
#
# Graph coloring benchmarks have only 100 problems each, except for *flat50-115* which contains 1000, so we split it into three subsets: 800 training problems, 100 validation, and 100 test problems.
#
# For generalization experiments, we use 100 problems from all the other benchmarks.
# + id="Ods6xm8nRs1x"
PROBLEM_TYPE='graph-coloring'
# + id="crvN_WcWSFcb"
# !bash train_val_test_split.sh "$PROBLEM_TYPE"
# + [markdown] id="_-HPh7AVjCDM"
# ### Add metadata for evaluation (train and validation set)
# + id="n624I80gjCDM"
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME"
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME"
# + [markdown] id="sloVrX1zjCDN"
# ### Train
# + id="fMwwOBfcjCDN"
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
# !python dqn.py \
# --logdir log \
# --env-name sat-v0 \
# --train-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME" \
# --lr 0.00002 \
# --bsize 64 \
# --buffer-size 20000 \
# --eps-init 1.0 \
# --eps-final 0.01 \
# --gamma 0.99 \
# --eps-decay-steps 30000 \
# --batch-updates 50000 \
# --history-len 1 \
# --init-exploration-steps 5000 \
# --step-freq 4 \
# --target-update-freq 10 \
# --loss mse \
# --opt adam \
# --save-freq 500 \
# --grad_clip 0.1 \
# --grad_clip_norm_type 2 \
# --eval-freq 1000 \
# --eval-time-limit 3600 \
# --core-steps 4 \
# --expert-exploration-prob 0.0 \
# --priority_alpha 0.5 \
# --priority_beta 0.5 \
# --e2v-aggregator sum \
# --n_hidden 1 \
# --hidden_size 64 \
# --decoder_v_out_size 32 \
# --decoder_e_out_size 1 \
# --decoder_g_out_size 1 \
# --encoder_v_out_size 32 \
# --encoder_e_out_size 32 \
# --encoder_g_out_size 32 \
# --core_v_out_size 64 \
# --core_e_out_size 64 \
# --core_g_out_size 32 \
# --activation relu \
# --penalty_size 0.1 \
# --train_time_max_decisions_allowed 500 \
# --test_time_max_decisions_allowed 500 \
# --no_max_cap_fill_buffer \
# --lr_scheduler_gamma 1 \
# --lr_scheduler_frequency 3000 \
# --independent_block_layers 0 \
# --use_attention \
# --heads 3
# + [markdown] id="1MD6DHwwjCDN"
# ### Add metadata for evaluation (test set)
# + id="imKKT3snjCDO"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
# !python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME"
# + [markdown] id="am0iALxRjCDO"
# ### Evaluate
# + id="t1GmjNk3jCDO"
MODEL_DIR='Dec09_12-16-16_d4e65e7af705'
CHECKPOINT='model_50001'
# + [markdown] id="v0bYPPEdbAkZ"
# We test this trained model on the inner test set.
# + id="AdSllBM8bAlc"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
# + [markdown] id="zMW85WsMbAlf"
# We test the trained model on the outer test sets.
# + id="7YQTwDwvjCDO"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
# !python evaluate.py \
# --logdir log \
# --env-name sat-v0 \
# --core-steps -1 \
# --eps-final 0.0 \
# --eval-time-limit 100000000000000 \
# --no_restarts \
# --test_time_max_decisions_allowed "$MODEL_DECISION" \
# --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME" \
# --model-dir runs/"$MODEL_DIR" \
# --model-checkpoint "$CHECKPOINT".chkp \
# >> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
| GATQSAT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import configargparse
from pathlib import Path
from spark_utils.streaming_utils import event_hub_parse, preview_stream
p = configargparse.ArgParser(prog='streaming.py',
description='Streaming Job Sample',
default_config_files=[Path().joinpath('configuration/run_args_data_generator.conf').resolve().as_posix()],
formatter_class=configargparse.ArgumentDefaultsHelpFormatter)
p.add('--output-eh-connection-string', type=str, required=True,
help='Output Event Hub connection string', env_var='GENERATOR_OUTPUT_EH_CONNECTION_STRING')
args, unknown_args = p.parse_known_args()
if unknown_args:
print("Unknown args:")
_ = [print(arg) for arg in unknown_args]
# +
from pyspark import SparkConf
from pyspark.sql import SparkSession
spark_conf = SparkConf(loadDefaults=True)
spark = SparkSession\
.builder\
.config(conf=spark_conf)\
.getOrCreate()
sc = spark.sparkContext
print("Spark Configuration:")
_ = [print(k + '=' + v) for k, v in sc.getConf().getAll()]
# +
rateStream = spark \
.readStream \
.format("rate") \
.option("rowsPerSecond", 10) \
.load()
preview_stream(rateStream, await_seconds=3)
# +
from pyspark.sql.functions import col, lit, struct
generatedData = rateStream \
.withColumn("value", col("value") * 3019) \
.withColumnRenamed("timestamp", "ObservationTime") \
.withColumn("MeterId", col("value") % lit(127)) \
.withColumn("SupplierId", col("value") % lit(23)) \
.withColumn("Measurement", struct(
(col("value") % lit(59)).alias("Value"),
lit("kWH").alias("Unit")
)) \
.drop("value")
preview_stream(generatedData,await_seconds=3)
# +
from pyspark.sql.functions import to_json
jsonData = generatedData \
.select(to_json(struct(col("*"))).cast("string").alias("body"))
preview_stream(jsonData, await_seconds=3)
# +
from spark_utils.schemas import message_schema
from pyspark.sql.functions import col, from_json
fromjsondata = jsonData \
.select(from_json(col("body").cast("string"), message_schema).alias("message")) \
.select(col("message.*"))
preview_stream(fromjsondata, await_seconds=3)
# + tags=[]
from pyspark.sql.functions import to_json
eh_conf = {
'eventhubs.connectionString':
sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(args.output_eh_connection_string)
}
stream_query = jsonData \
    .writeStream \
    .format("eventhubs") \
    .options(**eh_conf) \
    .option("checkpointLocation", '.checkpoint/data-generator3') \
    .start()
stream_query.awaitTermination()
stream_query.stop()
# -
| src/python/src/data-generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Band tailing from 3D Cu-Zn disorder in CZTS:
#
# ## 1. Introduction
#
# ### Analysis of distribution of Cu and Sn on-site electrostatic potentials
#
# The distribution for Cu ions is used to infer band tailing of the VBM and the distribution for Sn ions is used to infer band tailing of the CBM due to the pDOS composition of the band extrema in CZTS, as shown below in the band structure.
#
# Later in the notebook we also produce visuals of spatial variation in the electrostatic potentials for Cu or Sn in 2D slices.
#
# 
# (Fig. from doi: 10.1002/adma.201203146)
#
# ### A note on Cu and Sn potential distributions
# Note that for the perfectly ordered lattice at T=0K there is only one crystallographically unique Sn, therefore there is only 1 value for the potential and hence the standard deviation of the electrostatic potential is zero. For Cu there are 2 distinct sites (one in the Cu-Zn plane and one in the Cu-Sn plane), therefore the standard deviation is non-zero even for the ordered lattice.
# ## 2a. Converting potentials from Eris internal units to V (!!need to review!!)
# V = $\frac{Q}{4 \pi \epsilon_{0} \epsilon_{CZTS} r}$
# - Q = bare formal charge of ion * e
# - e = $1.6\times10^{-19} C$
# - $\epsilon_{0} = 8.85 \times 10^{-12} C V^{-1} m^{-1}$
# - $\epsilon_{CZTS}$ for perfect CZTS (most similar case to our lattice model) = 9.9 (doi: 10.1063/1.5028186)
# - 1 Eris lattice unit = 2.72 Angstroms = 2.72 $\times10^{-10}$ m
# - In Eris, only consider (bare formal charge)/ d, where d is ion separation in lattice units
# - **To convert from Eris internal units to V, multiply result by conversion factor: $\frac{e}{4\pi \epsilon_{0} \epsilon_{CZTS} \times 2.72 \times10^{-10}}$ = 0.534**
#
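# The quoted factor can be checked directly from the constants in the list above:

```python
import math

e = 1.6e-19        # C
eps0 = 8.85e-12    # C V^-1 m^-1
eps_czts = 9.9     # relative permittivity of perfect CZTS
r = 2.72e-10       # one Eris lattice unit in m

factor = e / (4 * math.pi * eps0 * eps_czts * r)
print(round(factor, 3))  # → 0.534
```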
# ## 2b. Notes for reviewing unit scaling
# Exposed charge of a Cu-on-Zn and Zn-on-Cu antisite will first be screened by local electronic rearrangement, before being screened by (bulk dielectric constant)/ r, where only the latter is accounted for in our model.
#
# ### Next steps
# 1. Compare change in site potential before/ after generating a Cu/ Zn antisite pair in Eris and in Gulp
# 2. Look at effective charges on defect sites from year1 defect calcs with VASP
#
# #### p.o.a. for 1
# Run std Eris calc at T=0K and just 1 MC step for:
# 1. Make new function in eris-lattice.c to manually initialise 64 atom supercell (4x4x4 lattice) in Eris: output **ALL** potentials and gulp input file
# 2. Modify above initial lattice function to have one n.n. Cu/ Zn antisite pair: output **ALL** potentials and gulp input file
#
# Use gulp to compute on-site potentials for perfect 64 atom system and one with one Cu/ Zn antisite
#
# #### Eris development
# 1. Make sure there is a function the uses potential_at_site_cube to output all potentials
# 2. Code up new manual lattice initialisation for perfect 64 atom supercell (4x4x4 lattice) and to introduce one n.n Cu/ Zn antisite pair
#
# ***Make sure to update unit_conversion variable in scripts below and/ or set up here instead***
#
# ## 3. Visualising the potential distributions (optional)
# Choose a temperature and run the script below to produce a histogram and kernel density estimate for the potential distributions of Cu and Sn. This step is largely just to check that the data are approximately normally distributed. Feel free to vary the number of bins used to produce the histogram (i.e. the variable 'bin_num' below temp)
# +
# Script to generate a histogram and kernel density estimate of the Cu and Sn distributions
# Note: this will raise an error if the distribution is a delta function/singularity (i.e. no disorder yet!)
# It is useful to refer to the S vs T or PCF peak vs T plots when deciding which T to plot
# %matplotlib inline
import numpy as np
import glob
import os
import matplotlib.pyplot as plt
from scipy import stats
### Choose T to plot for
temp = 600 #in K
bin_num =15
unit_conversion = 0.534 # Defined in cell 2
T_formatted = str(temp).zfill(4)
Cu_file = "Cu_potentials_T_"+str(T_formatted)+"K.dat"
Sn_file = "Sn_potentials_T_"+str(T_formatted)+"K.dat"
Cu_potentials = np.genfromtxt(Cu_file)
Sn_potentials = np.genfromtxt(Sn_file)
# For Sn ---------------------------------------------------------------------------------------------
# Calculating kernel density estimate of the Sn potential distribution of the final lattice for the specified simulation T
Sn_potentials_V = Sn_potentials*unit_conversion
Sn_kernel_density_est = stats.gaussian_kde(Sn_potentials_V)
Sn_pot_range_eval = np.linspace(-5, 5, num=200)
plt.xlabel('Electrostatic Potentials of Sn Ions (V)')
plt.ylabel('Density')
plt.title('Sn potential distribution at temperature: '+ str(temp) +'K')
plt.hist(Sn_potentials_V, density=True, bins=bin_num)  # 'normed' was removed in matplotlib 3.x
plt.plot(Sn_pot_range_eval, Sn_kernel_density_est(Sn_pot_range_eval), label="Temperature: "+str(temp)+"K")
plt.xlim(-4,2)
#plt.ylim((0,6))
plt.show()
# For Cu ---------------------------------------------------------------------------------------------
# Calculating kernel density estimate of the Cu potential distribution of the final lattice for the specified simulation T
Cu_potentials_V = Cu_potentials*unit_conversion
Cu_kernel_density_est = stats.gaussian_kde(Cu_potentials_V)
Cu_pot_range_eval = np.linspace(-5, 5, num=200)
plt.xlabel('Electrostatic Potentials of Cu Ions (V)')
plt.ylabel('Density')
plt.title('Cu potential distribution at temperature: '+ str(temp) +'K')
#plt.ylim((0,0.3))
plt.xlim(-4,2)
plt.hist(Cu_potentials_V, density=True, bins=bin_num)  # 'normed' was removed in matplotlib 3.x
plt.plot(Cu_pot_range_eval, Cu_kernel_density_est(Cu_pot_range_eval), label="Temperature: "+str(temp)+"K")
plt.show()
# -
# ## 4. Calculate mean and variance of distributions at each T
# Run the script below to compute the mean and variance of the Cu and Sn potential distributions at each T. These will be written to the files 'Gaussian_params_Cu.dat' and 'Gaussian_params_Sn.dat'. Columns are: T, mean, variance, standard deviation.
#
# As the name implies, these parameters will be used to apply a Gaussian broadening to the electron pDOS of Cu and Sn at the VBM and CBM of CZTS.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
### USER INPUTS ###
# Temperature range and step size from Eris simulation (as defined in cx1 submission script)
TMIN = 0
TMAX = 1000
TSTEP = 50
###################
### ADD IN CONVERSION FROM INTERNAL ERIS UNITS TO V HERE ###
unit_conversion = 0.534 #Convert from internal Eris units to V (see cell 2)
Cu_Gauss_params = open("Gaussian_params_Cu.dat", "w")
Cu_Gauss_params.write("# T, mean, var, s.d\n")
Cu_mean_list = []
Cu_var_list = []
Sn_Gauss_params = open("Gaussian_params_Sn.dat", "w")
Sn_Gauss_params.write("# T, mean, var, s.d\n")
Sn_mean_list = []
Sn_var_list = []
T_list = np.arange(TMIN, TMAX+TSTEP, TSTEP)
for T in range(TMIN, TMAX+TSTEP, TSTEP):
T_formatted = str(T).zfill(4)
# Write to file for Cu potentials: T, distribution mean, variance
Cu_file = "Cu_potentials_T_"+str(T_formatted)+"K.dat"
Cu_potentials = np.genfromtxt(Cu_file)
Cu_mean = np.mean(Cu_potentials)
Cu_mean_list.append(Cu_mean*unit_conversion)
Cu_var = np.var(Cu_potentials)
Cu_var_list.append(Cu_var*unit_conversion**2)  # variance scales with the square of the conversion factor
Cu_Gauss_params.write(str(T)+" ")
Cu_Gauss_params.write(str(Cu_mean*unit_conversion)+" ")
Cu_Gauss_params.write(str(Cu_var*unit_conversion**2)+" ")
Cu_Gauss_params.write(str(np.sqrt(Cu_var)*unit_conversion)+"\n")
# Write to file for Sn potentials: T, distribution mean, variance
Sn_file = "Sn_potentials_T_"+str(T_formatted)+"K.dat"
Sn_potentials = np.genfromtxt(Sn_file)
Sn_mean = np.mean(Sn_potentials)
Sn_mean_list.append(Sn_mean*unit_conversion)
Sn_var = np.var(Sn_potentials)
Sn_var_list.append(Sn_var*unit_conversion)
Sn_Gauss_params.write(str(T)+" ")
Sn_Gauss_params.write(str(Sn_mean*unit_conversion)+" ")
Sn_Gauss_params.write(str(Sn_var*unit_conversion)+" ")
Sn_Gauss_params.write(str(np.sqrt(Sn_var)*unit_conversion)+"\n")
Cu_Gauss_params.close()
Sn_Gauss_params.close()
### Option to plot variance vs. T for Cu and Sn
fig = plt.figure(figsize = (10,7))
plt.plot(T_list, Cu_var_list, label="Cu")
plt.plot(T_list, Sn_var_list, label="Sn")
plt.xlabel("Simulation temperature (K)")
plt.ylabel("Variance of potential distribution (V)")
plt.legend()
plt.show()
# -
# ## 5. Plot the Gaussians (optional)
# (Make sure to run cell 4 first)
#
# Run the script below to plot the Gaussian functions generated from the mean and variance of the Cu and Sn potential distributions. It may be useful to do this to compare to the plots from cell 3.
#
# Lines in the plots are used to show the on-site potential for the perfectly ordered case at T=0K.
# +
# Make sure to add line for perfectly ordered data! (can literally just do a line plot for T=0K raw data?)
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm
### Choose T to plot for
temp = 600 #in K
unit_conversion = 0.534 # Defined in cell 2
# Reading in parameters for Gaussians to plot for T inputted by user
Cu_file = "Gaussian_params_Cu.dat"
Cu_params = np.genfromtxt(Cu_file)
for row in Cu_params:
    if row[0] == temp:
        Cu_mean = row[1]
        Cu_var = row[2]
        Cu_sd = row[3]
Sn_file = "Gaussian_params_Sn.dat"
Sn_params = np.genfromtxt(Sn_file)
for row in Sn_params:
    if row[0] == temp:
        Sn_mean = row[1]
        Sn_var = row[2]
        Sn_sd = row[3]
# Adding lines for ordered T=0K on-site potentials for Cu and Sn
Cu_ordered1 = 0.707362*unit_conversion # read in from 'Cu_potentials_T_0000K.dat'
Cu_ordered2 = 1.605296*unit_conversion
Sn_ordered = -2.761624*unit_conversion
# Set x_axis to be same for both plots as same as used in cell 3
x_axis = np.arange(-4, 2, 0.001)
# Plotting Gaussian and T=0K line for Sn
plt.plot(x_axis, norm.pdf(x_axis,Sn_mean,Sn_sd), label='Sn')
# Add 1 line for T=0K Sn
plt.axvline(x=Sn_ordered, color='black', linestyle='--', label='Sn at T=0K')
plt.legend()
plt.show()
# Plotting Gaussian and T=0K line for Cu
plt.plot(x_axis, norm.pdf(x_axis,Cu_mean,Cu_sd), label='Cu')
# Add 2 lines for T=0K Cu
plt.axvline(x=Cu_ordered1, color='black', linestyle='--', label='Cu at T=0K')
plt.axvline(x=Cu_ordered2, color='black', linestyle='--') # second line shares the legend entry above
plt.legend()
plt.show()
# -
# ## 6a. 2D spatial variation in electrostatic potential
# The script below can be used to generate plots showing the spatial variation of the Cu or Sn electrostatic potential in 2D slices of the lattice. In Eris, odd slice numbers correspond to Cu-Zn planes and even slice numbers to Cu-Sn planes.
#
# In each plot, the mean of the potential distribution is subtracted from each on-site potential to show regions of higher or lower potential.
#
# Please enter into the script below the simulation temperature and the slice number you wish to plot (where the total number of slices is the Z dimension of your Eris lattice). If you enter an even number for the slice, plots will be generated for both Cu and Sn; if you enter an odd number, only a plot for Cu will be generated.
# +
# NOTE: When using 'mean' method for histogram plot NaNs show up as white (i.e. sites not in data file in plots below)
# Also, Cu's move when plotting Cu-Zn plane slices but not when plotting Cu-Sn plane slices
# This is the only allowed disorder process in Eris currently (02.07.18) so is to be expected
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
### USER INPUTS ###
T = 950 # in K
slice_num = 12
X_dim = 48
Y_dim = 48
Z_dim = 24
# Set params for plots
cmap = 'RdBu' # Colormap (see matplotlib colormap docs for options)
pmin = -1.5 # Set limits for histogram plot of (onsite potential) - (mean potential)
pmax = 1.5
bins = X_dim-1 # Test bins in histogram plots
#bins = X_dim/2
# Bin choice a little arbitrary.
# For on-lattice data may be better to read in X, Y coords to 2D array and use plt.imshow instead?
# e.g. plt.imshow(eris_pots_as_2d_array, cmap=plt.cm.cmap) #cmap defined above
###################
unit_conversion = 0.534 #Convert from internal Eris units to V (see above for derivation)
T_formatted = str(T).zfill(4)
slice_formatted = str(slice_num).zfill(2)
# Generating plot for just Cu's in Cu-Zn slice
if (slice_num%2 == 1):
data_file = "Cu_potentials_T_"+str(T_formatted)+"K_slice_z="+str(slice_formatted)+".dat"
# Reading in data from eris output file
CuZnSlice = np.genfromtxt(data_file, delimiter = ' ')
x_vals = CuZnSlice[:,0]
y_vals = CuZnSlice[:,1]
pots = CuZnSlice[:,2]
pot_mean = np.mean(pots)
pot_fluc = CuZnSlice[:,2] - pot_mean
pot_fluc_in_V = pot_fluc * unit_conversion
# Generate 2D histogram of (on-site potential) - (mean potential) for Cu in Cu-Zn plane
H, xedges, yedges, binnumber = stats.binned_statistic_2d(x_vals, y_vals, values = pot_fluc, statistic='mean' , bins = [bins,bins])
XX, YY = np.meshgrid(xedges, yedges)
fig = plt.figure(figsize = (8,8))
plt.rcParams.update({'font.size': 16})
ax1=plt.subplot(111)
#plt.title("T = "+str(T)+"K, Cu in Cu-Zn plane, slice = "+ str(slice_num))
plot1 = ax1.pcolormesh(XX,YY,H.T, cmap=cmap, vmin=pmin, vmax=pmax)
cbar = plt.colorbar(plot1,ax=ax1, pad = .015, aspect=10)
# Generating separate plots for Cu's and Sn's in Cu-Sn slice
if (slice_num%2 == 0):
# Set up subplots
Cu_data_file = "Cu_potentials_T_"+str(T_formatted)+"K_slice_z="+str(slice_formatted)+".dat"
Sn_data_file = "Sn_potentials_T_"+str(T_formatted)+"K_slice_z="+str(slice_formatted)+".dat"
# Reading in data from eris output file for Cu
Cu_CuSnSlice = np.genfromtxt(Cu_data_file, delimiter = ' ')
Cu_x_vals = Cu_CuSnSlice[:,0]
Cu_y_vals = Cu_CuSnSlice[:,1]
Cu_pots = Cu_CuSnSlice[:,2]
Cu_pot_mean = np.mean(Cu_pots)
Cu_pot_fluc = Cu_CuSnSlice[:,2] - Cu_pot_mean
Cu_pot_fluc_in_V = Cu_pot_fluc * unit_conversion
# Generate 2D histogram of (on-site potential) - (mean potential) for Cu in Cu-Sn plane
H, xedges, yedges, binnumber = stats.binned_statistic_2d(Cu_x_vals, Cu_y_vals, values = Cu_pot_fluc, statistic='mean' , bins = [bins,bins])
XX, YY = np.meshgrid(xedges, yedges)
fig = plt.figure(figsize = (8,8))
plt.rcParams.update({'font.size': 16})
ax1=plt.subplot(111)
#plt.title("T = "+str(T)+"K, Cu in Cu-Sn plane, slice = "+ str(slice_num))
plot1 = ax1.pcolormesh(XX,YY,H.T, cmap=cmap, vmin=pmin, vmax=pmax)
cbar = plt.colorbar(plot1,ax=ax1, pad = .015, aspect=10)
plt.xlabel('X (lattice units)')
plt.ylabel('Y (lattice units)')
plt.savefig("spatial_pot_fluc_2D_Cu.png")
# Reading in data from eris output file for Sn
Sn_CuSnSlice = np.genfromtxt(Sn_data_file, delimiter = ' ')
Sn_x_vals = Sn_CuSnSlice[:,0]
Sn_y_vals = Sn_CuSnSlice[:,1]
Sn_pots = Sn_CuSnSlice[:,2]
Sn_pot_mean = np.mean(Sn_pots)
Sn_pot_fluc = Sn_CuSnSlice[:,2] - Sn_pot_mean
Sn_pot_fluc_in_V = Sn_pot_fluc * unit_conversion
# Generate 2D histogram of (on-site potential) - (mean potential) for Sn in Cu-Sn plane
H, xedges, yedges, binnumber = stats.binned_statistic_2d(Sn_x_vals, Sn_y_vals, values = Sn_pot_fluc, statistic='mean' , bins = [bins,bins])
XX, YY = np.meshgrid(xedges, yedges)
fig2 = plt.figure(figsize = (8,8))
plt.rcParams.update({'font.size': 16})
ax2=plt.subplot(111)
#plt.title("T = "+str(T)+"K, Sn in Cu-Sn plane, slice = "+ str(slice_num))
plot2 = ax2.pcolormesh(XX,YY,H.T, cmap=cmap, vmin=pmin, vmax=pmax)
cbar = plt.colorbar(plot2,ax=ax2, pad = .015, aspect=10)
    plt.xlabel('X (lattice units)')
    plt.ylabel('Y (lattice units)')
    plt.savefig("spatial_pot_fluc_2D_Sn.png") # save after labelling so the labels appear in the file
plt.show()
# -
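The comment at the top of the cell above suggests that, for on-lattice data, it may be better to read the (X, Y, potential) rows into a 2D array and use `plt.imshow` instead of `binned_statistic_2d`. A minimal sketch of that idea follows (`slice_to_grid` and the demo array are illustrative, not part of the notebook); sites absent from the data file stay NaN and render blank, matching the behaviour noted for the 'mean' statistic:

```python
import numpy as np

def slice_to_grid(rows, x_dim, y_dim):
    """Place (x, y, potential) rows on a 2D grid; unoccupied sites remain NaN."""
    grid = np.full((y_dim, x_dim), np.nan)
    mean_pot = rows[:, 2].mean()   # subtract the slice mean, as in the plots above
    for x, y, pot in rows:
        grid[int(y), int(x)] = pot - mean_pot
    return grid

# Toy slice with two occupied sites on a 4x4 lattice
demo = np.array([[0.0, 0.0, 1.0],
                 [2.0, 3.0, -1.0]])
grid = slice_to_grid(demo, x_dim=4, y_dim=4)
# plt.imshow(grid, cmap='RdBu', origin='lower', vmin=-1.5, vmax=1.5); plt.colorbar()
```

This avoids the somewhat arbitrary bin choice entirely, since each lattice site maps to exactly one pixel.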
# ## 6b. 1D plot of (on-site potential) - (mean potential) for Cu's and Sn's across y=x
# Make sure to run above cell first.
# +
Cu_1D_pot = []
Cu_1D_coord = []
Sn_1D_pot = []
Sn_1D_coord = []
# Write y=x potentials for Cu
for x,y,pot in zip(Cu_CuSnSlice[:,0], Cu_CuSnSlice[:,1], Cu_CuSnSlice[:,2]):
if (int(x) == int(y)):
Cu_1D_pot.append(pot*unit_conversion)
Cu_1D_coord.append(x)
# Write y=x potentials for Sn
for x,y,pot in zip(Sn_CuSnSlice[:,0], Sn_CuSnSlice[:,1], Sn_CuSnSlice[:,2]):
if (int(x) == int(y)):
Sn_1D_pot.append(pot*unit_conversion)
Sn_1D_coord.append(x)
fig = plt.figure(figsize = (10,7))
plt.plot(Cu_1D_coord, Cu_1D_pot, label='Cu potentials along y=x')
plt.plot(Sn_1D_coord, Sn_1D_pot, label='Sn potentials along y=x')
plt.xlabel("X,Y coordinate (lattice units)")
plt.ylabel("Potential (V)")
plt.legend()
plt.show()
# -
| AnalysisNotebooks/BandTailingAnalysis/BandTailingAnalysisNew.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mahfuz978/TECH-I.S.----DT-Ensemble/blob/main/Bagging_Boosting_Project/Mahfuzur_Bagging_Boosting_Project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="7GBoIyS6GyxU"
# ## Understanding the Business Problem
# TalkingData is a Chinese big data company, and one of their areas of expertise is mobile advertisements.
#
# In mobile advertisements, click fraud is a major source of losses. Click fraud is the practice of repeatedly clicking on an advertisement hosted on a website with the intention of generating revenue for the host website or draining revenue from the advertiser.
#
# In this case, TalkingData happens to be serving the advertisers (their clients). TalkingData covers roughly 70% of the active mobile devices in China, and about 90% of the clicks it sees are potentially fraudulent (i.e. the user is actually not going to download the app after clicking).
#
# You can imagine the amount of money they can help clients save if they are able to predict whether a given click is fraudulent (or equivalently, whether a given click will result in a download).
#
# Their current approach to this problem is a blacklist of IP addresses: IPs which produce lots of clicks, but never install any apps. Now, they want to try some advanced techniques to predict the probability of a click being genuine/fraud.
#
# In this problem, we will use the features associated with clicks, such as IP address, operating system, device type, time of click etc. to predict the probability of a click being fraud.
#
#
# + [markdown] id="npstTA7_Gyxc"
# # Project on Bagging and Boosting ensemble model:
#
#
# **The data contains observations of about 240 million clicks, and whether a given click resulted in a download or not (1/0):**
#
# The detailed data dictionary is mentioned here:
# - ```ip```: ip address of click.
# - ```app```: app id for marketing.
# - ```device```: device type id of user mobile phone (e.g., iphone 6 plus, iphone 7, huawei mate 7, etc.)
# - ```os```: os version id of user mobile phone
# - ```channel```: channel id of mobile ad publisher
# - ```click_time```: timestamp of click (UTC)
# - ```attributed_time```: if the user downloads the app after clicking an ad, this is the time of the app download
# - ```is_attributed```: the target to be predicted, indicating whether the app was downloaded
#
# Let's try finding some useful trends in the data.
#
# **1. Explore the dataset for anomalies and missing values and take corrective actions if necessary.**
#
# **2. Which column has maximum number of unique values present among all the available columns**
#
# **3. Use an appropriate technique to get rid of all the apps that are very rare (say, those comprising less than 20% of clicks) and plot the rest.**
#
# **4. By using Pandas derive new features such as - 'day_of_week' , 'day_of_year' , 'month' , and 'hour' as float/int datatypes using the 'click_time' column . Add the newly derived columns in original dataset.**
#
# **5. Divide the data into training and testing subsets into 80:20 ratio(Train_data = 80% , Testing_data = 20%) and
# check the average download rates('is_attributed') for train and test data, scores should be comparable.**
#
# **6. Apply XGBoostClassifier with default parameters on training data and make first 10 prediction for Test data. NOTE: Use y_pred = model.predict_proba(X_test) since we need probabilities to compute AUC.**
#
# **7. On evaluating the predictions made by the model what is the AUC/ROC score with default hyperparameters.**
#
# **8. Compute feature importance score and name the top 5 features/columns .**
#
# **9. Apply BaggingClassifier with base_estimator LogisticRegression and compute the AUC/ROC score.**
#
# **10. On the basis of the AUC/ROC score, which would you choose: BaggingClassifier or XGBoostClassifier, and why? What does the AUC/ROC score signify?**
#
# **11. What is the accuracy for BaggingClassifier and XGBoostClassifier?**
# ### All the Best!!!
# + id="K_qV-zzDGyxc"
url = 'https://raw.githubusercontent.com/Tech-i-s/techis-ds-wiki/master/Step%203-3%20DT%20and%20Ensemble/03_Project/talking_data.csv?token=<KEY>'
# + id="NPiYjZD0Gyxd"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="3JmbIZUnHtQq" outputId="08b70dae-481b-4ee9-f292-235d3d23e09b"
df = pd.read_csv(url)
df.head()
# + id="LIlo3nEOIED6" colab={"base_uri": "https://localhost:8080/"} outputId="1f1b1433-f7ac-439c-fbdd-6f17c55920fa"
df.isnull().sum()#attributed time has too many null values
# + id="t9yVfxi-QFOZ"
df = df.drop(columns='attributed_time') # let's drop this column
# + colab={"base_uri": "https://localhost:8080/"} id="45iqFH3e0LmR" outputId="58887199-4b4e-4b77-9ba8-51235816b1d7"
df.shape
# + id="b8n_dh_R0bLS" colab={"base_uri": "https://localhost:8080/"} outputId="f4a85574-ce02-47c1-a836-2c6c7156abf7"
df.dtypes
# + colab={"base_uri": "https://localhost:8080/"} id="ZfChsNU_dAgF" outputId="89b635d0-bb49-4f35-bf08-e5f8983f87d7"
df.nunique() # number of unique values in each column
# looks like click_time and ip have the most unique values
# + id="S8OVsfCF0y5N" colab={"base_uri": "https://localhost:8080/", "height": 203} outputId="643bdbfc-25c8-4383-bfd5-21d98d0b044f"
df.head()
# + id="huNyBNaP0z7g" colab={"base_uri": "https://localhost:8080/"} outputId="ee3554c5-0a0c-471b-b4ad-4ddaee558567"
df.is_attributed.value_counts()
# + id="vrudVKmSgM-Y" colab={"base_uri": "https://localhost:8080/", "height": 203} outputId="69bb251c-aa09-44d5-acc1-ae2a66714d71"
df.tail()
# + id="0cbb2WynHm5N"
df['click_time'] = pd.to_datetime(df['click_time'])
# + id="0roXBue7Nanh" colab={"base_uri": "https://localhost:8080/"} outputId="71da2e1b-9f2d-490f-cded-173517176667"
type(df['click_time'][0])
# + colab={"base_uri": "https://localhost:8080/"} id="ZnXZJWle7I-W" outputId="0b83c56d-c605-449f-c7ee-54da3b37d309"
df['app'].value_counts()[:31] # these are the top apps
# + id="xM4KPLDV7uxE"
# top_apps = [] # make an empty list
# x = df['app'].value_counts(ascending=False)[:16] # these are the top apps
# x = list(x.index.values) # make it into a list so we can filter it out
# top_apps.append(x)
# + id="RdZ7L6er4jHO"
# top_apps = top_apps[0]
# top_apps # these are the list of the top apps
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="Rtv1qbrN08rQ" outputId="a9c67878-1caa-458d-9567-72fe1537829e"
df.head()
# + [markdown] id="kvAmesDBqml2"
# ## First Approach to dropping useless apps
#
# + id="DJDgKtWq6hxe" colab={"base_uri": "https://localhost:8080/"} outputId="11cdb8c6-ac1f-4ab5-81df-17abdb9213cd"
app_counts = df['app'].value_counts()
app_counts # here we are trying to see which apps have the most counts
# + id="5SkFOnmVWI0k" colab={"base_uri": "https://localhost:8080/"} outputId="0bf45c6b-5884-42bf-fd1f-ba448c847cbc"
app_counts = app_counts[app_counts > 100]
app_counts# I am only selecting the popular apps
# + id="YhoFHjZ6WkZ0"
req_apps = app_counts.index # we are calling the index so we can grab the apps
# + colab={"base_uri": "https://localhost:8080/"} id="cQ14EDdzWs44" outputId="34b2712a-8829-4764-a585-e2d6a2872d2d"
req_apps
# + id="DV_5rhyjWuLB"
filtered_df = [] # empty list
for app_id in req_apps:
filtered_df.append(df.loc[df['app']==app_id])# only appends the columns with included apps
# after appending we need to attach all the dataframes together
df_final = pd.concat(filtered_df)
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="ihc3NJMuXVJN" outputId="efd0db64-fb2b-4ec0-8802-4b719473c31e"
df_final.head()# here is the final df
# + colab={"base_uri": "https://localhost:8080/"} id="afUJ70BEZOzq" outputId="0715b9aa-bae8-4330-ea52-f01deaa09c00"
df_final.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Dl0I-jt_ZVK7" outputId="03bd28c7-4c54-4cdd-a19b-fd995249c366"
df_final.info()
# + [markdown] id="aTEvJGcgsE9u"
# ## My second Approach
# + id="3W-nkSAHZaUd"
# function to filter the apps
def app_filter(app_id):
if app_id in req_apps:
return app_id
else:
return np.nan # if the app is not a popular app then use nan
df['app'] = df['app'].apply(app_filter)
# + colab={"base_uri": "https://localhost:8080/"} id="_Xgeg4rJabEa" outputId="6a649460-5ff6-47cb-a702-84c20fd491fc"
df.info()
# + id="bqdTv768ad_d"
df = df.dropna() # its easy to drop the nan
# + colab={"base_uri": "https://localhost:8080/"} id="LuJ_qky2a0mi" outputId="32a5e296-b298-4118-fb6f-f01d0fd71bc2"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="fgykxPefa4Cn" outputId="98538352-df06-49d9-fc37-9db1e14f3ad8"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="tpADBIwha6Qx" outputId="c18135ba-975b-4460-cf34-9171cc0b0e10"
# let's feature engineer the new features for date time
df['day_of_week'] = df['click_time'].dt.dayofweek # the .dt accessor is vectorized, unlike .apply(lambda ...)
df['day_of_year'] = df['click_time'].dt.dayofyear
df['month'] = df['click_time'].dt.month
df['hour'] = df['click_time'].dt.hour
df = df.drop('click_time', axis = 1) # dropping the click time column since we have everything
df['app'] = df['app'].astype('int64') # converting app to int since it turned into a float from our nan values
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="HSNwkfzC3swt" outputId="bd82beb2-a2ec-4944-fcdb-33d6c82a83ad"
import seaborn as sns
import matplotlib.pyplot as plt
sns.catplot(x = 'app',kind = 'count', data=df, aspect=1.9)
plt.title('Top Apps', fontsize = 20)
plt.xlabel('APP NUMBER');
# + colab={"base_uri": "https://localhost:8080/"} id="W1MyUP3vcemw" outputId="4a52c094-e217-4245-e554-dde0df8bca20"
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size = .2, random_state = 42)
print(train.shape)
print(test.shape)
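Question 5 also asks to confirm that the average download rate (`is_attributed`) is comparable in the two subsets. A minimal sketch on a hypothetical toy frame (the real `df` is not reloaded here; `stratify` is an extra option the notebook itself does not use):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for df: 10% positive class, like a rare-download scenario
toy = pd.DataFrame({'x': range(100), 'is_attributed': [0] * 90 + [1] * 10})
train, test = train_test_split(toy, test_size=0.2, random_state=42,
                               stratify=toy['is_attributed'])

# Comparable means should come out for the two subsets
train_rate = train['is_attributed'].mean()
test_rate = test['is_attributed'].mean()
```

With `stratify` the class proportions match exactly; without it (as in the notebook) they should still be close for a large enough sample.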
# + id="Pu5shx45qXf_"
x_train = train.drop(['is_attributed'], axis=1)
y_train = train['is_attributed']
x_test = test.drop(['is_attributed'], axis=1)
y_test = test['is_attributed']
# + colab={"base_uri": "https://localhost:8080/"} id="1q4yEg4pJey2" outputId="0b507675-7a76-447e-b509-e05c2ca23553"
x_train.shape, y_train.shape
# + colab={"base_uri": "https://localhost:8080/"} id="1-NoF1fnJrQH" outputId="2db798fb-c603-4fd4-f345-1c5b8fd61dce"
x_test.shape, y_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="xL0At8ISBKbs" outputId="e2d0148b-1186-4b48-b6c3-b0315f26d421"
from sklearn.model_selection import KFold
kf = KFold(n_splits = 3)
kf
# + id="oUV9fM4KtKld"
def get_score(model, x_train, x_test, y_train, y_test):
model.fit(x_train, y_train)
return model.score(x_test, y_test)
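The `kf` object above is created but never actually used in this notebook; one way it could feed `get_score` is a manual cross-validation loop. A sketch on synthetic data (`X` and `y` below are illustrative arrays, not the notebook's frame; `get_score` is repeated here only for self-containment):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def get_score(model, x_train, x_test, y_train, y_test):
    model.fit(x_train, y_train)
    return model.score(x_test, y_test)

# Illustrative separable toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

kf = KFold(n_splits=3, shuffle=True, random_state=42)
scores = [get_score(LogisticRegression(), X[tr], X[te], y[tr], y[te])
          for tr, te in kf.split(X)]
```

Averaging `scores` gives a more robust estimate than the single train/test split used below.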
# + colab={"base_uri": "https://localhost:8080/"} id="jY2f-9kUBx8l" outputId="db97c093-c734-4930-a86a-0abdcf1b0df6"
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier
rfcl = RandomForestClassifier(criterion = 'entropy', class_weight = {0:.5, 1:.5}, max_depth = 5, min_samples_leaf = 5)
rfcl
# + colab={"base_uri": "https://localhost:8080/"} id="wTqO9MMWM2P6" outputId="dc394730-e0e9-4d53-fdeb-d38f9c3732d0"
rfcl.fit(x_train, y_train)
rfcl.predict(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="RnzuqM3BBlc9" outputId="29aece13-2c99-47c4-ec7a-c5afb2e94cf2"
get_score(rfcl, x_train, x_test, y_train, y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="EJhSBB3lG-Cs" outputId="d598d55c-d916-4184-db84-8bd4b38602aa"
from xgboost import XGBClassifier
xgb = XGBClassifier()
xgb.fit(x_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="XDet6TfgPnyN" outputId="ab559cfa-8c2f-47c8-d1cb-6bf5db361be3"
get_score(xgb, x_train, x_test, y_train, y_test)
# + id="9hmlQEkUs1-t"
pred = xgb.predict(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="6wZsjrGzRO5f" outputId="f5f5f9c3-fbee-4afa-a296-07b52205d6e9"
xgb.predict_proba(x_test)
# + id="cfJE4twVQjSQ"
y_pred = xgb.predict_proba(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="_WroD6KrszDG" outputId="51901eb4-2a16-40a8-9725-2aa363dc124b"
y_pred = y_pred[:,1] # keep the positives
y_pred
# + colab={"base_uri": "https://localhost:8080/"} id="K3GswxVpqa4d" outputId="1d50d4dc-4cca-47d8-e649-73392411cf48"
from sklearn.metrics import roc_auc_score, roc_curve
# calculate scores
auc = roc_auc_score(y_test, y_pred)
# summarize scores
print('XGB: ROC AUC=%.3f' % (auc))
# + colab={"base_uri": "https://localhost:8080/"} id="R0ccwbnvZcht" outputId="352f50a6-4060-4081-c9a5-f49cba36cc28"
from sklearn.metrics import accuracy_score
xgb_acc_score = accuracy_score(y_test, pred)
xgb_acc_score
# + id="JZt1t6gK5m6l" colab={"base_uri": "https://localhost:8080/"} outputId="ea598737-6f8b-4d3e-d3be-2eacfd23d591"
# calculate roc curves
fpr, tpr , threshold = roc_curve(y_test, y_pred)
print(fpr,tpr,threshold)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="0UJBB-lz5BZT" outputId="6ac1c7a7-1ee6-44bf-a100-1700b64ad7de"
plt.plot(fpr,tpr, linestyle='--', label='XGB');
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="jNW8kHg97Mus" outputId="1c7cde33-fdf0-47fa-b34a-15a687fce6b4"
from sklearn.metrics import plot_roc_curve # removed in newer scikit-learn; use RocCurveDisplay.from_estimator instead
plot_roc_curve(xgb, x_test, y_test);
# + id="laHafVpl7WDy" colab={"base_uri": "https://localhost:8080/"} outputId="bdc9b757-4bc4-4c30-e2ca-2af17fce9796"
# get the feature importance
importance = xgb.feature_importances_
importance
# + colab={"base_uri": "https://localhost:8080/"} id="2cwcD_xYKiOL" outputId="8496b0e3-5680-499e-8c67-344ed986c41c"
help(pd.concat)
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="UHoe_wLQKP0t" outputId="e3f7e59e-5b73-45ce-b36b-877942b498e7"
feature_importance = pd.DataFrame(importance, columns=['feature_importance'])
feature_name = pd.DataFrame(x_train.columns, columns=['feature_name'])
feature_importances = feature_name.join(feature_importance)
feature_importances.sort_values(by='feature_importance', ascending=False)[:5]
# + colab={"base_uri": "https://localhost:8080/"} id="YmI-ZN4yQIUZ" outputId="698fee75-5cd5-4532-d9ce-0107d86b529d"
for i, v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i, v))
# + colab={"base_uri": "https://localhost:8080/"} id="T3WwynwQWP-Q" outputId="c78c2659-5c2a-43a6-b351-1c64b2de3c02"
df.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="Y-LJtoBsRQ3N" outputId="95df198b-bd36-41d1-f874-0465436c174b"
plt.bar([x for x in range(len(importance))], importance);
# + id="ClksAESrTrHh"
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
# + id="ygKfirMbUZf4"
import warnings
warnings.filterwarnings('ignore')
# + id="GEnrrDaySBBe"
# here I am implementing the bagging classifier model
bagcl = BaggingClassifier(base_estimator = logreg)
bagcl.fit(x_train, y_train)
pred = bagcl.predict(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="P8rNi1kHcJeD" outputId="9e4f28a4-44b4-47f5-9514-1f4c0ea2efbc"
bagcl_acc_score = accuracy_score(y_test, pred)
bagcl_acc_score
# + colab={"base_uri": "https://localhost:8080/"} id="-v2egZVNTAkc" outputId="112e3bec-d2ed-4a1f-daf8-609edc73649c"
y_proba = bagcl.predict_proba(x_test)
y_proba = y_proba[:, 1]
y_proba
# + colab={"base_uri": "https://localhost:8080/"} id="pA-hZsMQUuPS" outputId="0bf7d17e-9f9d-447f-b58f-5a6a59efb009"
bauc = roc_auc_score(y_test, y_proba)
print('bagcl: ROC AUC = %.3f'% bauc)
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="yASyJaWrIU_C" outputId="c0e3f5ad-d501-4408-8a35-0ca0e42e0404"
plot_roc_curve(bagcl, x_test, y_test);
# + colab={"base_uri": "https://localhost:8080/"} id="PJjELjs6Vhza" outputId="06ea5a4e-606a-4419-8d3b-b057ef1b285d"
print('On the basis of the AUC/ROC score I would choose XGBoostClassifier, \n\
because its score of %.3f means the area under its ROC curve is much larger \n\
than that of BaggingClassifier, which scored only %.3f. The AUC/ROC score \n\
measures how well the model ranks positives above negatives.'%(auc, bauc))
# + colab={"base_uri": "https://localhost:8080/"} id="iL21i1rycPcz" outputId="d53d2911-47fe-4819-c7f0-1f25365de544"
print(f'The accuracy score of the XGBoost model is : {xgb_acc_score}')
print(f'The accuracy score of the BaggingClassifier model is {bagcl_acc_score}')
| Bagging_Boosting_Project/Mahfuzur_Bagging_Boosting_Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import ipycytoscape
import json
with open("concentricData.json") as fi:
json_file = json.load(fi)
cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_json(json_file)
cytoscapeobj.set_layout(name='concentric')
cytoscapeobj.set_style([{
"selector":"node",
"style":{
"height":20,
"width":20,
"background-color":"#30c9bc"
}
},
{
"selector":"edge",
"style":{
"curve-style":"haystack",
"haystack-radius":0,
"width":5,
"opacity":0.5,
"line-color":"#a8eae5"
}
}])
cytoscapeobj
| examples/Concentric example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis and visualization of open data with Python
#
#
# The goal of this tutorial is to analyze two open datasets from Mexico City (Carpetas de investigación PGJ and Víctimas en carpetas de investigación PGJ) using the tools of the Python programming language. Participants will learn to use pandas and other libraries to load, structure, compute basic statistics on, and plot the data. In addition, they will apply data analysis and visualization methodologies and criteria that allow them to draw conclusions about the datasets analyzed.
#
#
# 1. [Introduction to Python](./CP-Introduccion.ipynb)
# 2. [Data exploration](./CP-Exploracion.ipynb)
# 3. __Data cleaning__
#     1. Steps of a data analysis
#     2. Common cleaning operations
#         * Removing columns (drop, select)
#         * Changing data types (datetime)
#         * Modifying text fields (replace, title, unidecode)
#         * Missing data (fillna)
#         * Removing out-of-range data (map, replace)
#     3. Saving data
#         * Excel and csv
#         * pickle
# 4. [Basic plotting](./CP-AnalisisGraficas.ipynb)
# 5. [Data analysis](./CP-AnalisisGraficas.ipynb)
# 6. Extras
#
# <a id='Obtencion'></a>
# ## 3. Data cleaning
#
# ### 3.1 Steps of a data analysis
#
# A data analysis is a process with several steps.
#
# First, it is important to understand the data: to know where it comes from and what it represents. This step is fundamental, since a complex analysis is of little use if it has no relevance to the data being analyzed.
#
# Once you understand what the data is about, you need to determine the characteristics of the dataset. This generally means knowing how many records there are, their types, the distribution of the data, basic statistical measures, etc.
#
# Next comes the cleaning. This process is usually the one that takes the most time. It is iterative: as we learn more about the data, new cleaning steps or modifications may become necessary. To avoid errors it is important to keep the original data and to document every transformation applied to it, along with the reasons for each change.
#
# Data analysis can involve several activities, such as transforming the data, summarizing it, joining it with other datasets, and finally visualizing it.
from os import chdir, getcwd
chdir(getcwd())
# +
import pandas as pd
from numpy import nan
df_vic = pd.read_csv('data-raw/denuncias-victimas-pgj.csv')
# -
# ### 3.4 Data cleaning
#
# Datasets usually contain errors or include data that is not useful for the analysis. During data cleaning, incorrect data is detected and corrected. In addition, data that will not be used in the analysis is removed at this step.
#
# In this case we will make a series of corrections:
# * Remove columns
# * Change data types
# * Modify text fields
# * Remove out-of-range data
# * Handle missing data
# * Save the data
# #### Removing columns
#
# We can remove columns using the _.drop()_ command. Here we drop the columns that contain redundant information (year, month, coordinates) or that we are not interested in right now (agencia, unidad_investigacion, colonia, calle). To drop columns, add the parameter _axis=1_. If instead we want to drop rows, we can do it in a similar way, putting the row labels in brackets and using the parameter _axis=0_.
#
# To keep this cleaning step, it is very important to overwrite the value of the variable.
df_vic = df_vic.drop(['idCarpeta', 'Año_hecho', 'Mes_hecho', 'ClasificacionDelito', 'lon', 'lat'], axis=1)
df_vic.columns = ['delito', 'categoria_delito', 'fecha_hechos',
'Sexo', 'Edad', 'TipoPersona', 'CalidadJuridica', 'Geopoint']
df_vic.head()
# #### Changing data types
#
# As we have already seen, there are several data types. In this dataset in particular we have date columns, and if we check their data type using the _dtype_ command, we can see that pandas identified them as text, which makes them harder to work with. If we convert these columns to the _datetime_ type, we can use the operations defined for that type. We will do this with the _to_datetime_ command. Since we want to keep the transformation, remember to make the assignment with _=_.
df_vic['fecha_hechos'] = pd.to_datetime(df_vic['fecha_hechos'])
# Using this data type it is now possible to select by year, month, day, hour, and minute with the commands:
#
# * column.dt.year
# * column.dt.month
# * column.dt.day
# * column.dt.hour
# * column.dt.minute
df_vic['fecha_hechos'].dt.hour
# This allows us to look at the crimes committed after 8 pm. To do so we will use _.dt.hour_ together with the subsetting we have already seen.
df_vic[df_vic['fecha_hechos'].dt.hour>=20]
# #### Modifying text fields
#
# When all text columns are in upper case, as in the dataset we are working with, readability can suffer. Here we will change the crime text out of upper case using the _.capitalize()_ command, which leaves the first letter in upper case and all the rest in lower case. For the alcaldías and fiscalías we would use _.title()_, so that the first letter of every word is capitalized.
df_vic['delito'] = df_vic['delito'].str.capitalize()
df_vic['categoria_delito'] = df_vic['categoria_delito'].str.capitalize()
df_vic['TipoPersona'] = df_vic['TipoPersona'].str.capitalize()
df_vic['CalidadJuridica'] = df_vic['CalidadJuridica'].str.capitalize()
df_vic.tail()
# #### Missing data
#
# If we inspect the data we can see several rows with missing values, especially in "Sexo" and "Edad".
df_vic.count()
# En el caso de "Sexo" podemos ver que en algunos casos se reporta que este "No se especifica" mientras que en otros se deja vacio, es decir, con NaN.
df_vic['Sexo'].value_counts(dropna=False)
# Here we will replace every "NaN" with "No se especifica" using the _.fillna()_ command.
df_vic['Sexo'] = df_vic['Sexo'].fillna("No se especifica")
df_vic['Sexo'].value_counts(dropna=False)
# #### Out-of-range data
#
# __Date__
#
# This dataset contains the case files opened between 2016 and 2019. However, that does not mean every crime was committed during that period. To visualize this, let's count how many case files there are per year using _.dt.year_ and _.value_counts()_.
df_vic['fecha_hechos'].dt.year.value_counts()
# We can see there are case files about crimes committed as far back as 1964. For our analysis we will drop every case file about events prior to 2016, i.e. we will keep only the case files whose year is greater than or equal to 2016.
df_vic = df_vic[df_vic['fecha_hechos'].dt.year>=2016]
# __Age__
#
# For age, recall that there is one person with a reported age of 258 years. This is probably a data-entry error, since it is the only person reported as older than one hundred.
# Let's set a rule to handle similar problems in the future: if a person is 100 years old or older, we replace their age with NaN.
# To do this we select, with _.loc[]_, every cell in the Edad column belonging to a person aged 100 or more, and replace those values with NaN.
# This should change the maximum age in our data.
import numpy as np  # needed for np.nan

df_vic.loc[df_vic['Edad'] >= 100, 'Edad'] = np.nan
df_vic['Edad'].max()
# Another point worth noting is that a large number of victims have an age of zero. This is a data-capture problem: when the victim's age is unknown, a zero is often entered, which can be confused with cases where the victim is an infant. It is impossible to correct this error here, so we will leave the data as is, but keep it in mind during the analysis.
df_vic[df_vic['Edad']==0]
# __Categories and crimes__
#
# In the pandas_profiling report we saw that the vast majority of the crimes were "de bajo impacto" (low impact) or "hecho no delictivo" (non-criminal event). Removing them would simplify the analysis and use less memory.
#
# First we should check which crimes fall under these classifications by inspecting the data.
#
# To do so:
# * Find every crime that belongs to the classifications of interest using _.isin()_
# * Select the rows of interest and the crime column with _.loc[]_
# * Get the unique values with _.unique()_
cat_interes = ["Delito de bajo impacto", "Hecho no delictivo"]
del_bajo_impacto = df_vic.loc[df_vic['categoria_delito'].isin(cat_interes), "delito"]
del_bajo_impacto.unique()
# To see the high-impact crimes we use exactly the same method, only adding a negation _~_ to the row selector.
cat_interes = ["Delito de bajo impacto", "Hecho no delictivo"]
del_alto_impacto = df_vic.loc[~df_vic['categoria_delito'].isin(cat_interes), "delito"]
del_alto_impacto.unique()
# Removing the low-impact crimes means focusing on the ones that affect the population the most, such as robbery, kidnapping and homicide. At the same time, it means ignoring the population's most common form of contact with violence. The decision to discard data should always be taken with its implications in mind.
#
# Here, to simplify the analyses, we will drop the low-impact crimes and the non-criminal events.
#
# To remove the crimes we are not interested in, we apply a variation of the _isin()_ command. Above we selected the crimes in the list; to select the crimes that are NOT in the list we prepend _~_ to the command, which represents a negation.
df_vic = df_vic.loc[~df_vic['categoria_delito'].isin(['Delito de bajo impacto','Hecho no delictivo'])]
# After these transformations our dataset is smaller.
df_vic.shape
# __Person type and legal status__
#
# One question we can ask is whether there is a relation between the person type and the legal status.
#
# To check this we will focus on just those two columns and use the _.drop_duplicates()_ command, which removes every repeated row. First we select the two columns of interest ('TipoPersona' and 'CalidadJuridica'), then drop all duplicates and compare whether they always coincide.
#
# This time we will not store the result; we only look at what happens, without modifying the data, and decide based on that.
df_vic[['TipoPersona', 'CalidadJuridica']].drop_duplicates()
# We can see there is no one-to-one relation between the person type and the legal status, so both columns provide relevant information and we will keep both.
# This is an example of why it is important to inspect and understand the data before modifying it: the transformations we make during cleaning are not always appropriate or necessary.
#
# However, CalidadJuridica has categories that are equivalent, such as "Cadaver" and "Fallecido", or that could be absorbed into larger categories, such as "Menor víctima", "Victima niño" and "Lesionado adolescente". We will try to simplify the categories as much as possible, largely because not every minor is categorized as "Menor víctima", which could cause errors in our later analyses.
df_vic['CalidadJuridica'].value_counts()
# The _replace_ command takes a Python dictionary as its replacement map. For example, to replace the categories where we detected problems, we use the following replacement dictionary. Remember that dictionaries use _:_ to join each key:value pair and _,_ to separate the pairs.
# +
remplazo = {"Fallecido":"Cadaver",
"Menor víctima":"Victima",
"Victima niño":"Victima",
"Lesionado adolescente":"Lesionado"}
df_vic['CalidadJuridica'] = df_vic['CalidadJuridica'].replace(remplazo)
df_vic['CalidadJuridica'].value_counts()
# -
# Now the legal-status categories we flagged have been substituted, and their victims have been added to the new categories. Every category that was not in the replacement dictionary stayed exactly as it was. This method can be costly when many replacements are needed, because we would have to build a dictionary with every value we want to replace.
#
# An alternative in those cases is the _.map()_ command, which replaces only the values that are in the dictionary and fills everything else with NaN.
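# The difference is easy to see on a toy series (the series and the mapping below are made up for illustration):

```python
import pandas as pd

s = pd.Series(["Fallecido", "Victima", "Testigo"])
mapping = {"Fallecido": "Cadaver"}

replaced = s.replace(mapping)  # values not in the dictionary are kept as-is
mapped = s.map(mapping)        # values not in the dictionary become NaN

print(replaced.tolist())  # ['Cadaver', 'Victima', 'Testigo']
print(mapped.tolist())    # ['Cadaver', nan, nan]
```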
# #### Saving the data
#
# We can save the cleaned data in several formats, for example as csv or Excel with the _.to_csv_ and _.to_excel_ commands. These formats have the advantage of being easy to share.
#
# We will save the cleaned data in the _data-clean_ folder as a csv. Because the dataset is large, it may be hard to work with in Excel, although csv files can still be opened there.
#
# It is very important not to overwrite the original data, and to keep a separate folder for each stage of the analysis, so that no information is lost and our analyses remain reliably reproducible.
df_vic.to_csv('./data-clean/VictimasPgjCdmx-AltoImpacto.csv', index=False)
# One drawback of saving as csv or Excel is that we can lose formatting and data types. This matters especially for types like _datetime_. An option is to save the data in a format that Python can read back directly, even though it cannot be opened in Excel.
# +
from joblib import dump
with open('./data-clean/VictimasPgjCdmx-AltoImpacto.pkl', 'wb') as f:
dump(df_vic, f)
# -
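# Joblib reads the file back the same way it was written. A self-contained round-trip sketch (it uses a temporary path instead of the real _data-clean_ file, and a tiny made-up frame):

```python
import os
import tempfile

import pandas as pd
from joblib import dump, load

df = pd.DataFrame({"delito": ["Robo"], "Edad": [30]})
path = os.path.join(tempfile.mkdtemp(), "victimas.pkl")

with open(path, "wb") as f:
    dump(df, f)          # same pattern as the cell above
with open(path, "rb") as f:
    df_back = load(f)    # values and dtypes survive the round trip

print(df_back.equals(df))  # True
```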
# #### Exercise 5
#
# Review the [CP-EjemploLimpiezaCarpetas](./extras/CP-EjemploLimpiezaCarpetas.ipynb) notebook in the extras section. It performs a similar cleaning for the case-file dataset. Write down what each step does and the reasons why it is done.
| CP3-Limpieza.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modifications Detected in the HEK293T Data
#
# This notebook creates a mass shift histogram figure.
#
# ## Setup
# + tags=[]
import sys
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Import some utility functions:
sys.path.append("bin")
import utils
# plot styling
sns.set(context="paper", style="ticks")
# The search result files:
mztab_files = snakemake.input
# Parameters to define mass shifts:
tol_mass = 0.1
tol_mode = "Da"
# -
# ## Parse the SSMs
ssms = utils.read_matches(mztab_files)
mass_shifts = utils.get_mass_groups(ssms, tol_mass, tol_mode)
# ## Create the plot
# +
fig, ax = plt.subplots(1, 1, sharey=True, figsize=(5, 2))
mod = mass_shifts.loc[mass_shifts["mass_diff_median"].abs() > tol_mass, :]
ax.vlines(mod["mass_diff_median"], 0, mod["num_psms"], linewidth=1)
ax.set_xlim(-50, 200)
ax.set_ylim(0)
ax.set_ylabel("SSMs")
sns.despine(ax=ax)
ax.set_xlabel("Mass Shift (Da)")
plt.tight_layout()
result_dir = Path("../results")
result_dir.mkdir(exist_ok=True)
plt.savefig(
result_dir / f"{snakemake.config['name']}-mass_shifts.png",
dpi=300,
bbox_inches="tight",
)
plt.show()
| workflow/notebooks/mass_shift_figure.py.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="Hb15qJcekavI"
# # **Text Classification with the IMDb-Reviews Dataset from Keras**
# @author: [vatsalya-gupta](https://github.com/vatsalya-gupta)
# + colab_type="code" id="z6EBWDDYEtSx" colab={}
import numpy as np
import tensorflow as tf
from tensorflow import keras
imdb = keras.datasets.imdb
# + [markdown] colab_type="text" id="7kOEthtfnkBu"
# We will do our analysis with the 88000 most frequent unique words in the dataset. Here we make the train-test split, 80 % and 20 % respectively. Afterwards, we will split train_data into training and validation sets, making the final train-test-validation split 70-20-10 %.
# + colab_type="code" id="iwPp1ourJAN5" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="35bc69d0-9782-42d0-870c-4fa7ea1b7793"
(train_data, train_targets), (test_data, test_targets) = imdb.load_data(num_words = 88000)
data = np.concatenate((test_data, train_data), axis=0)
targets = np.concatenate((test_targets, train_targets), axis=0)
test_data = data[:10000]
test_labels = targets[:10000]
train_data = data[10000:]
train_labels = targets[10000:]
print(train_data[0])
# + [markdown] colab_type="text" id="qu7omIyTlMq7"
# Our training and testing data is in the form of an array of reviews, where each review is a list of integers and each integer represents a unique word. So we need to make it human readable. For this, we will be adding the following tags to the data, map the values to their respective keys and implement a function which converts the integers to the respective words.
# + colab_type="code" id="oLm7rhdcJZPD" colab={}
word_index = imdb.get_word_index()
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
np.save("word_index.npy", word_index) # saving the word_index for future use
# + colab_type="code" id="a9Ck0V79ledx" colab={}
def decode_review(text):
return " ".join([reverse_word_index.get(i, "?") for i in text])
# + colab_type="code" id="TpL5VpjHL02l" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="341e9e74-2590-4004-dcc2-121cca442094"
print(decode_review(train_data[0]))
# + colab_type="code" id="AaChBIYGM3jI" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="56fd9520-d8e5-4501-e8c5-c298c82b7029"
print(len(train_data[0]), len(test_data[0]))
# + [markdown] colab_type="text" id="DuIMtysMndkN"
# In the following block of code, we will be finding the length of the longest review in our dataset.
# + colab_type="code" id="Q2fFYnOfNGQr" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b26e96a0-7f61-4f50-acff-638423cb2ce7"
longest_train = max(len(l) for l in train_data)
longest_test = max(len(l) for l in test_data)
max_words = max(longest_train, longest_test)
print(max_words)
# + [markdown] colab_type="text" id="6zlusI0RoQ5a"
# Even though the longest review is 2494 words long, we can safely limit the length of our reviews to 500 words as most of them are well below that. For the ones with length less than 500 words, we will add zero padding to their end.
# + colab_type="code" id="Mna5HgcdQLgj" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="941c14de-0bca-4ed9-c421-1d5d5adee728"
train_data = keras.preprocessing.sequence.pad_sequences(train_data, value = word_index["<PAD>"], padding = "post", maxlen = 500)
test_data = keras.preprocessing.sequence.pad_sequences(test_data, value = word_index["<PAD>"], padding = "post", maxlen = 500)
print(len(train_data[0]), len(test_data[0]))
# + [markdown] colab_type="text" id="ucvkCeKrqiJx"
# We are using a Sequential model. An Embedding layer attempts to determine the meaning of each word in the sentence by mapping each word to a position in vector space (helps in grouping words like "fantastic" and "awesome"). The GlobalAveragePooling1D layer scales down our data's dimensions to make it easier computationally. The last two layers in our network are dense fully connected layers. The output layer is one neuron that uses the sigmoid function to get a value between 0 and 1 which will represent the likelihood of the review being negative or positive respectively.
# + colab_type="code" id="NL8OsQrRQ94w" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="575b2f1b-693e-403e-ffeb-2eae6806bb29"
model = keras.Sequential()
model.add(keras.layers.Embedding(88000, 16)) # 88000 words as input
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation = "relu"))
model.add(keras.layers.Dense(1, activation = "sigmoid"))
model.summary()
# + [markdown] colab_type="text" id="8CZdpVPYsRTH"
# Compiling the data using the following parameters. We are using loss as "binary_crossentropy", as the expected output of our model is either 0 or 1, that is negative or positive.
# + colab_type="code" id="1HLinov6UMcA" colab={}
model.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
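# For intuition, binary cross-entropy for a single example is -[y*log(p) + (1-y)*log(1-p)], so confident wrong answers are penalized much more heavily than confident right ones. A quick hand computation with toy numbers:

```python
import math

def binary_crossentropy(y_true, y_pred):
    # loss for one example with true label y_true and predicted probability y_pred
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

low = binary_crossentropy(1, 0.9)   # confident and correct -> small loss (~0.105)
high = binary_crossentropy(1, 0.1)  # confident and wrong -> large loss (~2.303)
print(low, high)
```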
# + [markdown] colab_type="text" id="gFZ8IzeYs7m0"
# Here we split the training data into training and validation sets, then the training data is fit onto the model and the results are evaluated.
# + colab_type="code" id="wrvl9V3aSx1o" colab={"base_uri": "https://localhost:8080/", "height": 391} outputId="5e007f2d-3581-48d8-8123-1e91cac<PASSWORD>"
model.fit(train_data, train_labels, epochs = 10, batch_size = 256, validation_split = 0.125, verbose = 1)
model.evaluate(test_data, test_labels)
# + [markdown] colab_type="text" id="zXGei7y3pU_2"
# Sample prediction from testing data.
# + colab_type="code" id="NXDGSeVGYod8" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="3c777520-773a-49b5-e0ec-4bb76de3826d"
test_review = test_data[0]
predict = model.predict(np.expand_dims(test_review, axis=0))  # predict expects a batch dimension: shape (1, 500)
print("Review:\n", decode_review(test_review))
print("Prediction:", predict[0])
print("Actual:", test_labels[0])
# + [markdown] colab_type="text" id="QloUm6_ppYQf"
# Saving the model so that we don't have to train it again.
# + colab_type="code" id="tSR71k3jbKji" colab={}
model.save("imdb_model.h5") # any name ending with .h5
# model = keras.models.load_model("imdb_model.h5") # loading the model, use this in any other project for testing
# + [markdown] colab_type="text" id="iLkyZ73npn-A"
# Function to encode a text based review into a list of integers.
# + colab_type="code" id="IShn516EiWP-" colab={}
def review_encode(s):
encoded = [1] # 1 implies "<START>"
for word in s:
if word.lower() in word_index:
encoded.append(word_index[word.lower()] if (word_index[word.lower()] < 88001) else 2) # vocabulary size is 88000
else:
encoded.append(2) # 2 implies "<UNK>"
return encoded
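# To see the encoder's behavior without downloading the real word_index, the same logic can be run against a tiny hypothetical vocabulary (toy_index below is made up; the real indices come from imdb.get_word_index()):

```python
toy_index = {"great": 4, "movie": 5}  # hypothetical stand-in for the real word_index

def encode_with(s, word_index, vocab_size=88000):
    encoded = [1]  # 1 implies "<START>"
    for word in s:
        idx = word_index.get(word.lower())
        # out-of-vocabulary (or out-of-range) words map to 2, "<UNK>"
        encoded.append(idx if idx is not None and idx <= vocab_size else 2)
    return encoded

print(encode_with(["Great", "unseen", "movie"], toy_index))  # [1, 4, 2, 5]
```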
# + [markdown] colab_type="text" id="xoldbSdrpvuR"
# Evaluating our model on an [external review](https://www.imdb.com/review/rw2284594).
# + colab_type="code" id="FxVejEqpiLw0" colab={"base_uri": "https://localhost:8080/", "height": 802} outputId="04789103-426f-42e7-cb36-826c548773bb"
with open("sample_data/test.txt", encoding = "utf-8") as f:
for line in f.readlines():
nline = line.replace(",", "").replace(".", "").replace("(", "").replace(")", "").replace(":", "").replace("\"","").strip().split(" ")
encode = review_encode(nline)
encode = keras.preprocessing.sequence.pad_sequences([encode], value = word_index["<PAD>"], padding = "post", maxlen = 500) # make the review 500 words long
predict = model.predict(encode)
print(line, "\n", encode, "\n", predict[0])
sentiment = "Positive" if (predict[0] > 0.5) else "Negative"
print("Sentiment:", sentiment)
# + [markdown] colab_type="text" id="sFOWhgAQ1s6B"
# We are able to achieve a score of "highly positive" on the review rated 10/10 on IMDb. Hence, our model is fairly accurate.
| imdb_reviews.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
pd.options.display.max_columns = 999
pd.options.display.max_rows = 999
pd.options.display.max_colwidth = 100
# -
df = pd.read_csv('../../covidData/dados-nacional.csv', sep=';', on_bad_lines='skip')  # error_bad_lines was deprecated; on_bad_lines='skip' is the modern equivalent
print(pd.Series(df.columns))
df.head()
| notebooks/CoronaNacional.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/PacktPublishing/Hands-On-Computer-Vision-with-PyTorch/blob/master/Chapter03/Impact_of_regularization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/", "height": 437, "referenced_widgets": ["ccf30c6f445849b986c620f92fda5cd1", "ee6621d8a77c4967b9e319249157d939", "87637c7b73d3441d834c9225be819517", "2c896bf364434e6d9d5bc9702faf60a6", "<KEY>", "26962147ab0944c9a4184113004fd4e2", "<KEY>", "381f3d1f10644e54a281b0a2efb3cd08", "b8ace46c7ab345febda3ad9ab28487b6", "ebbef415392b43d6bb24e7aab76d3333", "<KEY>", "c9e247ecbe784eac96a3b8c956df69d5", "<KEY>", "<KEY>", "36a4beea1e874348b2da12cdf1627fac", "<KEY>", "<KEY>", "ad92d130357b4d3d9e30c7f46b3b00be", "a40544facee6431f8bcbad262f319c2e", "124242cdf8d34eafaca1de582de595f9", "e37e2009e9a94ef78cc89f8390f9db06", "<KEY>", "<KEY>", "f18d1a32aaf4455eb513d9438e658ca5", "999c0136999e4356ac8cf7879931e943", "f464f4e2d6644003a6787c201f35c101", "4ea2d14e43ba42eaab5925635076a8e7", "ed987686bb7741ad8c9e0e3e496f0a98", "250e9afd561544f2ac232d28909fa583", "daba0236fdae4183878d2b8a647765f1", "<KEY>", "5eb1eb89584443eb8c2666a720fbb79f"]} id="cw46kvPNeSCj" outputId="8e7a83bb-44ea-4699-b1bd-28f8400f7b2f"
from torchvision import datasets
import torch
data_folder = '../data/FMNIST' # This can be any directory you want to download FMNIST to
fmnist = datasets.FashionMNIST(data_folder, download=True, train=True)
# + id="0mVCuo84ef1b"
tr_images = fmnist.data
tr_targets = fmnist.targets
# + id="7mJmZMHSej1Z"
val_fmnist = datasets.FashionMNIST(data_folder, download=True, train=False)
val_images = val_fmnist.data
val_targets = val_fmnist.targets
# + id="guWrLqLUelZZ"
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
from torch.utils.data import Dataset, DataLoader
import torch
import torch.nn as nn
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# + [markdown] id="watgH162yyv_"
# ### No Regularization
# + id="zc309_JWem63"
class FMNISTDataset(Dataset):
def __init__(self, x, y):
x = x.float()/255
x = x.view(-1,28*28)
self.x, self.y = x, y
def __getitem__(self, ix):
x, y = self.x[ix], self.y[ix]
return x.to(device), y.to(device)
def __len__(self):
return len(self.x)
from torch.optim import SGD, Adam
def get_model():
model = nn.Sequential(
nn.Linear(28 * 28, 1000),
nn.ReLU(),
nn.Linear(1000, 10)
).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=1e-3)
return model, loss_fn, optimizer
def train_batch(x, y, model, opt, loss_fn):
model.train()
prediction = model(x)
batch_loss = loss_fn(prediction, y)
batch_loss.backward()
    opt.step()       # use the optimizer passed into the function, not the global
    opt.zero_grad()
return batch_loss.item()
def accuracy(x, y, model):
with torch.no_grad():
prediction = model(x)
max_values, argmaxes = prediction.max(-1)
is_correct = argmaxes == y
return is_correct.cpu().numpy().tolist()
# + id="wAO3L9MIeqVD"
def get_data():
train = FMNISTDataset(tr_images, tr_targets)
trn_dl = DataLoader(train, batch_size=32, shuffle=True)
val = FMNISTDataset(val_images, val_targets)
val_dl = DataLoader(val, batch_size=len(val_images), shuffle=True)
return trn_dl, val_dl
# + id="zeptO6C9ert-"
@torch.no_grad()
def val_loss(x, y, model):
prediction = model(x)
val_loss = loss_fn(prediction, y)
return val_loss.item()
# + id="8XgKcBFies94"
trn_dl, val_dl = get_data()
model, loss_fn, optimizer = get_model()
# + colab={"base_uri": "https://localhost:8080/", "height": 199} id="iS71jHNJeuPP" outputId="ef023469-41e2-495a-efde-b7619006d2d8"
train_losses, train_accuracies = [], []
val_losses, val_accuracies = [], []
for epoch in range(10):
print(epoch)
train_epoch_losses, train_epoch_accuracies = [], []
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
batch_loss = train_batch(x, y, model, optimizer, loss_fn)
train_epoch_losses.append(batch_loss)
train_epoch_loss = np.array(train_epoch_losses).mean()
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
is_correct = accuracy(x, y, model)
train_epoch_accuracies.extend(is_correct)
train_epoch_accuracy = np.mean(train_epoch_accuracies)
for ix, batch in enumerate(iter(val_dl)):
x, y = batch
val_is_correct = accuracy(x, y, model)
validation_loss = val_loss(x, y, model)
val_epoch_accuracy = np.mean(val_is_correct)
train_losses.append(train_epoch_loss)
train_accuracies.append(train_epoch_accuracy)
val_losses.append(validation_loss)
val_accuracies.append(val_epoch_accuracy)
# + colab={"base_uri": "https://localhost:8080/", "height": 337} id="-kvvJvJ8ew1i" outputId="87e7efb2-f288-44e6-8905-03cf1aa623bd"
epochs = np.arange(10)+1
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
# %matplotlib inline
plt.subplot(211)
plt.plot(epochs, train_losses, 'bo', label='Training loss')
plt.plot(epochs, val_losses, 'r', label='Validation loss')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.grid('off')
plt.show()
plt.subplot(212)
plt.plot(epochs, train_accuracies, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracies, 'r', label='Validation accuracy')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
#plt.ylim(0.8,1)
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.legend()
plt.grid('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="l5PhJV4RfHc6" outputId="e25a5fb9-f17a-4e09-c873-9c542a09a364"
for ix, par in enumerate(model.parameters()):
if(ix==0):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting input to hidden layer')
plt.show()
elif(ix ==1):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of hidden layer')
plt.show()
elif(ix==2):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting hidden to output layer')
plt.show()
elif(ix ==3):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of output layer')
plt.show()
# + id="5m8km-NPfM8T"
# + [markdown] id="rOc9DjsjfM-v"
# ### Regularization - 1e-4
# + id="3yucge9gfPe7"
class FMNISTDataset(Dataset):
def __init__(self, x, y):
x = x.float()/255
x = x.view(-1,28*28)
self.x, self.y = x, y
def __getitem__(self, ix):
x, y = self.x[ix], self.y[ix]
return x.to(device), y.to(device)
def __len__(self):
return len(self.x)
from torch.optim import SGD, Adam
def get_model():
model = nn.Sequential(
nn.Linear(28 * 28, 1000),
nn.ReLU(),
nn.Linear(1000, 10)
).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=1e-3)
return model, loss_fn, optimizer
def train_batch(x, y, model, opt, loss_fn):
prediction = model(x)
l1_regularization = 0
for param in model.parameters():
l1_regularization += torch.norm(param,1)
batch_loss = loss_fn(prediction, y) + 0.0001*l1_regularization
batch_loss.backward()
    opt.step()       # use the optimizer passed into the function, not the global
    opt.zero_grad()
return batch_loss.item()
def accuracy(x, y, model):
with torch.no_grad():
prediction = model(x)
max_values, argmaxes = prediction.max(-1)
is_correct = argmaxes == y
return is_correct.cpu().numpy().tolist()
# + id="1NTiKDPZgWs8"
trn_dl, val_dl = get_data()
model_l1, loss_fn, optimizer = get_model()
# + colab={"base_uri": "https://localhost:8080/", "height": 563} id="473oc4g0gaKq" outputId="bfce7740-f85d-4200-c0b3-6dc8a112f786"
train_losses, train_accuracies = [], []
val_losses, val_accuracies = [], []
for epoch in range(30):
print(epoch)
train_epoch_losses, train_epoch_accuracies = [], []
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
batch_loss = train_batch(x, y, model_l1, optimizer, loss_fn)
train_epoch_losses.append(batch_loss)
train_epoch_loss = np.array(train_epoch_losses).mean()
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
is_correct = accuracy(x, y, model_l1)
train_epoch_accuracies.extend(is_correct)
train_epoch_accuracy = np.mean(train_epoch_accuracies)
for ix, batch in enumerate(iter(val_dl)):
x, y = batch
val_is_correct = accuracy(x, y, model_l1)
validation_loss = val_loss(x, y, model_l1)
val_epoch_accuracy = np.mean(val_is_correct)
train_losses.append(train_epoch_loss)
train_accuracies.append(train_epoch_accuracy)
val_losses.append(validation_loss)
val_accuracies.append(val_epoch_accuracy)
# + colab={"base_uri": "https://localhost:8080/", "height": 337} id="7g6gqU7dgb89" outputId="a7ec522e-a117-4ad7-d5c9-24d0bf975793"
epochs = np.arange(30)+1
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
# %matplotlib inline
plt.subplot(211)
plt.plot(epochs, train_losses, 'bo', label='Training loss')
plt.plot(epochs, val_losses, 'r', label='Validation loss')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation loss with L1 regularization')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.grid('off')
plt.show()
plt.subplot(212)
plt.plot(epochs, train_accuracies, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracies, 'r', label='Validation accuracy')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation accuracy with L1 regularization')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
#plt.ylim(0.8,1)
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.legend()
plt.grid('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="nQpc_bIxgvo8" outputId="7acc9fb9-ef06-4f75-ccf7-63a2b832d44a"
for ix, par in enumerate(model_l1.parameters()):  # plot the L1-regularized model's parameters
if(ix==0):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting input to hidden layer')
plt.show()
elif(ix ==1):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of hidden layer')
plt.show()
elif(ix==2):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting hidden to output layer')
plt.show()
elif(ix ==3):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of output layer')
plt.show()
# + id="kK1KXCExyywl"
# + [markdown] id="6VSc2FkBhpU5"
# ### Regularization 1e-2
# + id="stPdysFphpXr"
class FMNISTDataset(Dataset):
def __init__(self, x, y):
x = x.float()/255
x = x.view(-1,28*28)
self.x, self.y = x, y
def __getitem__(self, ix):
x, y = self.x[ix], self.y[ix]
return x.to(device), y.to(device)
def __len__(self):
return len(self.x)
from torch.optim import SGD, Adam
def get_model():
model = nn.Sequential(
nn.Linear(28 * 28, 1000),
nn.ReLU(),
nn.Linear(1000, 10)
).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=1e-3)
return model, loss_fn, optimizer
def train_batch(x, y, model, opt, loss_fn):
prediction = model(x)
l2_regularization = 0
for param in model.parameters():
l2_regularization += torch.norm(param,2)
batch_loss = loss_fn(prediction, y) + 0.01*l2_regularization
batch_loss.backward()
    opt.step()       # use the optimizer passed into the function, not the global
    opt.zero_grad()
return batch_loss.item()
def accuracy(x, y, model):
with torch.no_grad():
prediction = model(x)
max_values, argmaxes = prediction.max(-1)
is_correct = argmaxes == y
return is_correct.cpu().numpy().tolist()
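# The penalty terms themselves are easy to verify by hand: for a weight vector w, the L1 norm is sum(|w_i|) and the L2 norm is sqrt(sum(w_i**2)), the same quantities that torch.norm(param, 1) and torch.norm(param, 2) compute. A NumPy sketch on a toy vector:

```python
import numpy as np

w = np.array([3.0, -4.0])
l1 = np.abs(w).sum()          # |3| + |-4| = 7
l2 = np.sqrt((w ** 2).sum())  # sqrt(9 + 16) = 5
print(l1, l2)
```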
# + id="poLmwOqohvJL"
trn_dl, val_dl = get_data()
model_l2, loss_fn, optimizer = get_model()
# + colab={"base_uri": "https://localhost:8080/", "height": 490} id="LWLYUYVMhxNT" outputId="eafaa657-0e23-4cfc-e45a-8269b3ef76a6"
train_losses, train_accuracies = [], []
val_losses, val_accuracies = [], []
for epoch in range(30):
print(epoch)
train_epoch_losses, train_epoch_accuracies = [], []
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
batch_loss = train_batch(x, y, model_l2, optimizer, loss_fn)
train_epoch_losses.append(batch_loss)
train_epoch_loss = np.array(train_epoch_losses).mean()
for ix, batch in enumerate(iter(trn_dl)):
x, y = batch
is_correct = accuracy(x, y, model_l2)
train_epoch_accuracies.extend(is_correct)
train_epoch_accuracy = np.mean(train_epoch_accuracies)
for ix, batch in enumerate(iter(val_dl)):
x, y = batch
val_is_correct = accuracy(x, y, model_l2)
validation_loss = val_loss(x, y, model_l2)
val_epoch_accuracy = np.mean(val_is_correct)
train_losses.append(train_epoch_loss)
train_accuracies.append(train_epoch_accuracy)
val_losses.append(validation_loss)
val_accuracies.append(val_epoch_accuracy)
# + id="ieeGRz04h2Re"
epochs = np.arange(30)+1
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
# %matplotlib inline
plt.subplot(211)
plt.plot(epochs, train_losses, 'bo', label='Training loss')
plt.plot(epochs, val_losses, 'r', label='Validation loss')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation loss with L2 regularization')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.grid(False)
plt.show()
plt.subplot(212)
plt.plot(epochs, train_accuracies, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracies, 'r', label='Validation accuracy')
#plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(1))
plt.title('Training and validation accuracy with L2 regularization')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
#plt.ylim(0.8,1)
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.legend()
plt.grid(False)
plt.show()
# + id="CLFTzhJbiNQY"
# + id="ndswNmBBr4qX"
# + id="yIIHtFdSr4tB"
for ix, par in enumerate(model.parameters()):  # model: the unregularized baseline trained earlier in the notebook
if(ix==0):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting input to hidden layer')
plt.show()
elif(ix ==1):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of hidden layer')
plt.show()
elif(ix==2):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
        plt.title('Distribution of weights connecting hidden to output layer')
plt.show()
elif(ix ==3):
plt.hist(par.cpu().detach().numpy().flatten())
plt.xlim(-2,2)
plt.title('Distribution of biases of output layer')
plt.show()
# + id="CYXpuDbGsfDv"
for ix, par in enumerate(model_l1.parameters()):
if(ix==0):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
        plt.title('Distribution of weights connecting input to hidden layer')
plt.show()
elif(ix ==1):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
plt.title('Distribution of biases of hidden layer')
plt.show()
elif(ix==2):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
        plt.title('Distribution of weights connecting hidden to output layer')
plt.show()
elif(ix ==3):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
plt.title('Distribution of biases of output layer')
plt.show()
# + id="hVLaPXe0slWV"
par.cpu().detach().numpy().flatten()
# + id="qM6Q5XRtsv61"
for ix, par in enumerate(model_l2.parameters()):
if(ix==0):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
        plt.title('Distribution of weights connecting input to hidden layer')
plt.show()
elif(ix ==1):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
plt.title('Distribution of biases of hidden layer')
plt.show()
elif(ix==2):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
        plt.title('Distribution of weights connecting hidden to output layer')
plt.show()
elif(ix ==3):
plt.hist(par.cpu().detach().numpy().flatten())
#plt.xlim(-2,2)
plt.title('Distribution of biases of output layer')
plt.show()
# + id="_9QBeqRws4ka"
| Chapter03/Impact_of_regularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <table>
# <tr><td>
# <a href="https://nbviewer.jupyter.org/github/panayiotiska/Jupyter-Sentiment-Analysis-Video-games-reviews/blob/master/Naive_Bayes_&_SVM_HashingVectorizer.ipynb">
# <img alt="start" src="figures/button_previous.jpg" width= 70% height= 70%>
# </td><td>
# <a href="https://nbviewer.jupyter.org/github/panayiotiska/Jupyter-Sentiment-Analysis-Video-games-reviews/blob/master/Index.ipynb">
# <img alt="start" src="figures/button_table-of-contents.jpg" width= 70% height= 70%>
# </td><td>
# <a href="https://nbviewer.jupyter.org/github/panayiotiska/Jupyter-Sentiment-Analysis-Video-games-reviews/blob/master/Naive_Bayes_tfidfVectorizer-SnowballStemmer.ipynb">
# <img alt="start" src="figures/button_next.jpg" width= 70% height= 70%>
# </td></tr>
# </table>
# ## Naive Bayes using TF-IDF vectorizer and Lancaster stemmer
# In this model, the Naive Bayes algorithm is tested while keeping the TF-IDF vectorizer (which performed better than the other vectorizers), this time using the Lancaster stemmer from the NLTK library in the third section of the code ([3] Cleaning the text). The Lancaster stemmer implements the Lancaster (Paice/Husk) stemming algorithm. More about this algorithm is discussed in the Vectorization notebook.
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import model_selection, naive_bayes, svm
from sklearn.metrics import accuracy_score
from collections import Counter
#[1] Importing dataset
dataset = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Video_Games_5.json", lines=True, encoding='latin-1')
dataset = dataset[['reviewText','overall']]
#[2] Reduce number of classes
ratings = []
for index,entry in enumerate(dataset['overall']):
if entry == 1.0 or entry == 2.0:
ratings.append(-1)
elif entry == 3.0:
ratings.append(0)
elif entry == 4.0 or entry == 5.0:
ratings.append(1)
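# The class-reduction loop above collapses 1-5 star ratings into three sentiment classes (1-2 stars negative, 3 neutral, 4-5 positive). The rule itself is just:

```python
def reduce_rating(stars):
    """Map a 1-5 star rating to -1 / 0 / +1 sentiment."""
    if stars <= 2.0:
        return -1
    if stars == 3.0:
        return 0
    return 1

sentiments = [reduce_rating(s) for s in [1.0, 3.0, 5.0]]
```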
# +
#[3] Cleaning the text
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import LancasterStemmer
lc = LancasterStemmer()
stop_set = set(stopwords.words('english'))  # build the stopword set once, outside the loop
corpus = []
for i in range(0, len(dataset)):
    review = re.sub('[^a-zA-Z]', ' ', dataset['reviewText'][i])
    review = review.lower()
    review = review.split()
    # a single stopword pass suffices; filtering again after stemming is redundant
    review = [lc.stem(word) for word in review if word not in stop_set]
    review = ' '.join(review)
    corpus.append(review)
# +
#[4] Prepare Train and Test Data sets
Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(corpus,ratings,test_size=0.3)
print(Counter(Train_Y).values()) # counts the elements' frequency
# +
#[5] Encoding
Encoder = LabelEncoder()
Train_Y = Encoder.fit_transform(Train_Y)
Test_Y = Encoder.transform(Test_Y)  # reuse the fitted encoder; refitting could reorder labels
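# LabelEncoder assigns consecutive integers to the *sorted* unique labels, so -1/0/1 become 0/1/2 (negative, neutral, positive). A minimal stand-in showing the mapping:

```python
def fit_label_encoder(labels):
    # mirror sklearn's LabelEncoder: sorted unique labels -> 0..n-1
    classes = sorted(set(labels))
    return {c: i for i, c in enumerate(classes)}

enc = fit_label_encoder([-1, 0, 1, 1, -1])
encoded = [enc[l] for l in [-1, 0, 1]]
```

# This ordering is why report row names must be listed negative-first when passed as `target_names`.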
# +
#[6] Word Vectorization
Tfidf_vect = TfidfVectorizer(max_features=10000)
Tfidf_vect.fit(corpus)
Train_X_Tfidf = Tfidf_vect.transform(Train_X)
Test_X_Tfidf = Tfidf_vect.transform(Test_X)
# the vocabulary that it has learned from the corpus
#print(Tfidf_vect.vocabulary_)
# the vectorized data
#print(Train_X_Tfidf)
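# TfidfVectorizer weights a term by its in-document frequency, discounted by how many documents contain it. A toy version of the raw tf-idf weight (sklearn additionally smooths the idf and L2-normalizes each row, so its exact numbers differ):

```python
import math

def tfidf(term, doc, corpus):
    tf = doc.count(term)                          # term frequency in this document
    df = sum(1 for d in corpus if term in d)      # document frequency
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

docs = [["good", "game"], ["bad", "game"], ["good", "fun"]]
w_game = tfidf("game", docs[0], docs)  # common term, low weight
w_fun = tfidf("fun", docs[2], docs)    # rare term, higher weight
```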
# +
#[7] Use the Naive Bayes Algorithms to Predict the outcome
# fit the training dataset on the NB classifier
Naive = naive_bayes.MultinomialNB()
Naive.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("-----------------------Naive Bayes------------------------\n")
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y)*100)
# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Test_Y, predictions_NB)
print("\n",cm,"\n")
# Printing a classification report of different metrics
from sklearn.metrics import classification_report
my_tags = ['Negative','Neutral','Positive']  # LabelEncoder sorts the labels, so -1/0/1 map to 0/1/2
print(classification_report(Test_Y, predictions_NB,target_names=my_tags))
# Export reports to files for later visualizations
report_NB = classification_report(Test_Y, predictions_NB,target_names=my_tags, output_dict=True)
report_NB_df = pd.DataFrame(report_NB).transpose()
report_NB_df.to_csv(r'NB_report_TFIDFVect_LancasterStemmer.csv', index = True, float_format="%.3f")
# -
# <a href="https://nbviewer.jupyter.org/github/panayiotiska/Jupyter-Sentiment-Analysis-Video-games-reviews/blob/master/Naive_Bayes_tfidfVectorizer-SnowballStemmer.ipynb">
# <img alt="start" src="figures/button_next.jpg">
| Naive_Bayes_tfidfVectorizer-LancasterStemmer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np  # needed for np.sqrt below
path = '../datasets/student.csv'
df = pd.read_csv(path)
df1 = df.drop('f101',axis=1)
df1.astype('category').head(2)
## encode the columns
labels = df1.columns
scale_mapper = {"A":1,"P":0}
df_e = df1.replace(scale_mapper)
df_r = pd.DataFrame(df_e,columns=labels)
df_r.head(3)
y = df_r['rs']
X = df_r.drop('rs',axis=1)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=42)
from sklearn.tree import DecisionTreeClassifier
tree_clf = DecisionTreeClassifier(max_depth=150)
tree_clf.fit(X_train, y_train)
from sklearn.metrics import mean_squared_error
y_pred = tree_clf.predict(X_test)
mse = mean_squared_error(y_pred, y_test)
rmse = np.sqrt(mse)
rmse
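# With 0/1 class labels, the mean squared error equals the misclassification rate, so the RMSE computed above is just the square root of the error rate:

```python
import math

def rmse(y_pred, y_true):
    n = len(y_true)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / n)

err = rmse([0, 1, 1, 0], [0, 1, 0, 0])  # one of four binary labels wrong
```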
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(X_train, y_train)
y_pred = forest_reg.predict(X_test)
mse = mean_squared_error(y_pred, y_test)
rmse = np.sqrt(mse)
rmse
# bagging
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
mse = mean_squared_error(y_pred, y_test)
rmse = np.sqrt(mse)
rmse
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
# adaboost
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5)
ada_clf.fit(X_train, y_train)
y_pred = ada_clf.predict(X_test)
accuracy_score(y_test, y_pred)
# voting classifiers
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression()
rnd_clf = RandomForestClassifier()
svm_clf = SVC(probability=True)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
y_pred = voting_clf.predict(X_test)
accuracy_score(y_test, y_pred)
# check each classifier's accuracy
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
mse = mean_squared_error(y_pred, y_test)
rmse = np.sqrt(mse)
rmse
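# With voting='soft', the ensemble averages each classifier's predicted class probabilities and picks the argmax; a confident classifier can outvote two lukewarm ones, unlike hard voting, which counts predicted labels. A minimal sketch:

```python
def soft_vote(prob_lists):
    # prob_lists: one probability vector per classifier, same class order
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n
           for i in range(len(prob_lists[0]))]
    return max(range(len(avg)), key=avg.__getitem__)

# two of three classifiers lean class 1, but the confident one wins:
winner = soft_vote([[0.9, 0.1], [0.4, 0.6], [0.45, 0.55]])
```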
| student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Flair Training Sequence Labeling Models
# **(C) 2019-2021 by [<NAME>](http://damir.cavar.me/)**
# **Version:** 0.4, January 2021
# **Download:** This and various other Jupyter notebooks are available from my [GitHub repo](https://github.com/dcavar/python-tutorial-for-ipython).
# For the Flair tutorial 7 license and copyright restrictions, see the website below. For all the parts that I added, consider the license to be [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)).
# Based on the [Flair Tutorial 7 Training a Model](https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_7_TRAINING_A_MODEL.md).
# This tutorial is using the CoNLL-03 Named Entity Recognition data set. See [this website](https://www.clips.uantwerpen.be/conll2003/ner/) for more details and to download an independent copy of the data set.
# ## Training a Sequence Labeling Model
# We will need the following modules:
from flair.data import Corpus
from flair.datasets import WNUT_17
from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings
from typing import List
# If you want to use the CoNLL-03 corpus, you need to download it and unpack it in your Flair data and model folder. This folder should be in your home-directory and it is named *.flair*. Once you have downloaded the corpus, unpack it into a folder *.flair/datasets/conll_03*. If you do not want to use the CoNLL-03 corpus, but rather [the free W-NUT 17 corpus](https://noisy-text.github.io/2017/emerging-rare-entities.html), you can use the Flair command: *WNUT_17()*
# If you decide to download the CoNLL-03 corpus, adapt the following code. We load the W-NUT17 corpus and down-sample it to 10% of its size:
corpus: Corpus = WNUT_17().downsample(0.1)
print(corpus)
# Declare the tag type to be predicted:
tag_type = 'ner'
# Create the tag-dictionary for the tag-type:
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
print(tag_dictionary)
# Load the embeddings:
embedding_types: List[TokenEmbeddings] = [
WordEmbeddings('glove'),
# comment in this line to use character embeddings
# CharacterEmbeddings(),
# comment in these lines to use flair embeddings
# FlairEmbeddings('news-forward'),
# FlairEmbeddings('news-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
# Load and initialize the sequence tagger:
# +
from flair.models import SequenceTagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=True)
# -
# Load and initialize the trainer:
# +
from flair.trainers import ModelTrainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# -
# If you have a GPU (otherwise maybe tweak the batch size, etc.), run the training with 150 epochs:
trainer.train('resources/taggers/example-ner',
learning_rate=0.1,
mini_batch_size=32,
max_epochs=150)
# Plot the training curves and results:
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_training_curves('resources/taggers/example-ner/loss.tsv')
plotter.plot_weights('resources/taggers/example-ner/weights.txt')
# Use the model via the *predict* method:
from flair.data import Sentence
model = SequenceTagger.load('resources/taggers/example-ner/final-model.pt')
sentence = Sentence('John lives in the Empire State Building .')
model.predict(sentence)
print(sentence.to_tagged_string())
#
| Data Science and Machine Learning/Thorough Python Data Science Topics/Flair Training Sequence Labeling Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Exporting With `nbinteract`
#
# `nbinteract` comes with two options for exporting Jupyter notebooks into standalone HTML pages: using the `nbinteract` command line tool and running `nbi.publish()` in a notebook cell.
# ### The `nbinteract` CLI tool
#
# Installing the `nbinteract` package installs a command-line tool for converting notebooks into HTML pages.
#
# For example, to convert a notebook called `Hello.ipynb` using the Binder spec
# `calebs11/nbinteract-image/main`:
#
# ```bash
# # `main` is optional since it is the default
# nbinteract Hello.ipynb -s calebs11/nbinteract-image
# ```
#
# After running `nbinteract init`, you may omit the `-s` flag and simply write:
#
# ```bash
# nbinteract Hello.ipynb
# ```
#
# One advantage of the command line tool is that it can convert notebooks in
# folders as well as individual notebooks:
#
# ```bash
# # Using the -r flag tells nbinteract to recursively search for .ipynb files
# # in nb_folder
# nbinteract -r nb_folder/
# ```
# For the complete set of options, run `nbinteract --help`.
#
# ```
# $ nbinteract --help
# `nbinteract NOTEBOOKS ...` converts notebooks into HTML pages. Note that
# running this command outside a GitHub project initialized with `nbinteract
# init` requires you to specify the --spec SPEC option.
#
# Arguments:
# NOTEBOOKS List of notebooks or folders to convert. If folders are passed in,
# all the notebooks in each folder are converted. The resulting HTML
# files are created adjacent to their originating notebooks and will
# clobber existing files of the same name.
#
# By default, notebooks in subfolders will not be converted; use the
# --recursive flag to recursively convert notebooks in subfolders.
#
# Options:
# -h --help Show this screen
# -s SPEC --spec SPEC BinderHub spec for Jupyter image. Must be in the
# format: `{username}/{repo}/{branch}`. For example:
# 'SamLau95/nbinteract-image/master'. This flag is
# **required** unless a .nbinteract.json file exists
# in the project root with the "spec" key. If branch
# is not specified, default to `main`.
# -t TYPE --template TYPE Specifies the type of HTML page to generate. Valid
# types: full (standalone page), partial (embeddable
# page with library), or plain (embeddable page
# without JS).
# [default: full]
# -B --no-top-button If set, doesn't generate button at top of page.
# -r --recursive Recursively convert notebooks in subdirectories.
# -o FOLDER --output=FOLDER Outputs HTML files into FOLDER instead of
# outputting files adjacent to their originating
# notebooks. All files will be direct descendants of
# the folder even if --recursive is set.
# -i FOLDER --images=FOLDER Extracts images from HTML and writes into FOLDER
# instead of encoding images in base64 in the HTML.
# Requires -o option to be set as well.
# -e --execute Executes the notebook before converting to HTML,
# functioning like the equivalent flag for
# nbconvert. Configure NbiExecutePreprocessor to
# change conversion instead of the base
# ExecutePreprocessor.
# ```
# ### `nbi.publish()`
#
# The `nbi.publish()` method can be run inside a Jupyter notebook cell. It has
# the following signature:
#
# ```python
# import nbinteract as nbi
#
# nbi.publish(spec, nb_name, template='full', save_first=True)
# '''
# Converts nb_name to an HTML file. Preserves widget functionality.
#
# Outputs a link to download HTML file after conversion if called in a
# notebook environment.
#
# Equivalent to running `nbinteract ${spec} ${nb_name}` on the command line.
#
# Args:
# spec (str): BinderHub spec for Jupyter image. Must be in the format:
# `${username}/${repo}/${branch}`.
#
# nb_name (str): Complete name of the notebook file to convert. Can be a
# relative path (eg. './foo/test.ipynb').
#
# template (str): Template to use for conversion. Valid templates:
#
# - 'full': Outputs a complete standalone HTML page with default
# styling. Automatically loads the nbinteract JS library.
# - 'partial': Outputs an HTML partial that can be embedded in
# another page. Automatically loads the nbinteract JS library.
# - 'gitbook': Outputs an HTML partial used to embed in a Gitbook or
# other environments where the nbinteract JS library is already
# loaded.
#
# save_first (bool): If True, saves the currently opened notebook before
# converting nb_name. Used to ensure notebook is written to
# filesystem before starting conversion. Does nothing if not in a
# notebook environment.
#
#
# Returns:
# None
# '''
# ```
#
# For example, to convert a notebook called `Hello.ipynb` using the Binder spec
# `calebs11/nbinteract-image/main`:
#
# ```python
# nbi.publish('calebs11/nbinteract-image/main', 'Hello.ipynb')
# ```
| docs/notebooks/recipes/recipes_exporting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# create entry points to spark
try:
sc.stop()
except:
pass
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc=SparkContext()
spark = SparkSession(sparkContext=sc)
# ## Boolean column expression
#
# Create a column expression that will return **boolean values**.
# ## Example data
mtcars = spark.read.csv('../../data/mtcars.csv', inferSchema=True, header=True)
mtcars = mtcars.withColumnRenamed('_c0', 'model')
mtcars.show()
# ## `between()`: true/false if the column value is between a given range
cyl_between = mtcars.cyl.between(4, 6)
cyl_between
mtcars.select(mtcars.cyl, cyl_between).show(5)
# ## `contains()`: true/false if the column value contains a string
model_contains = mtcars.model.contains('Ho')
model_contains
mtcars.select(mtcars.model, model_contains).show(5)
# ## `endswith()`: true/false if the column value ends with a string
model_endswith = mtcars.model.endswith('t')
model_endswith
mtcars.select(mtcars.model, model_endswith).show(6)
# ## `isNotNull()`: true/false if the column value is not Null
from pyspark.sql import Row
df = spark.createDataFrame([Row(name='Tom', height=80), Row(name='Alice', height=None)])
df.show()
height_isNotNull = df.height.isNotNull()
height_isNotNull
df.select(df.height, height_isNotNull).show()
# ## `isNull()`: true/false if the column value is Null
height_isNull = df.height.isNull()
height_isNull
df.select(df.height, height_isNull).show()
# ## `isin()`: true/false if the column value is contained by the evaluated argument
carb_isin = mtcars.carb.isin([2, 3])
carb_isin
mtcars.select(mtcars.carb, carb_isin).show(10)
# ## `like()`: true/false if the column value matches a pattern based on a _SQL LIKE_
model_like = mtcars.model.like('Ho%')
model_like
mtcars.select(mtcars.model, model_like).show(10)
# ## `rlike()`: true/false if the column value matches a pattern based on a _SQL RLIKE_ (LIKE with Regex)
model_rlike = mtcars.model.rlike('t$')
model_rlike
mtcars.select(mtcars.model, model_rlike).show()
# ## `startswith()`: true/false if the column value starts with a string
model_startswith = mtcars.model.startswith('Merc')
model_startswith
mtcars.select(mtcars.model, model_startswith).show()
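# Each column expression above is a plain predicate applied per row; pure-Python equivalents (assuming scalar inputs) make the semantics explicit. Note `between()` is inclusive on both ends, and LIKE's `%`/`_` wildcards correspond to the regex `.*`/`.`:

```python
import re

def between(v, lo, hi):
    """Column.between: inclusive on both ends."""
    return lo <= v <= hi

def like(s, pattern):
    """SQL LIKE: % matches any run of characters, _ matches one."""
    regex = ''.join('.*' if c == '%' else '.' if c == '_' else re.escape(c)
                    for c in pattern)
    return re.fullmatch(regex, s) is not None

def rlike(s, pattern):
    """SQL RLIKE: an unanchored regular-expression search."""
    return re.search(pattern, s) is not None
```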
| notebooks/02-data-manipulation/.ipynb_checkpoints/2.7.3-boolean-column-expression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Conservative remapping
import xgcm
import xarray as xr
import numpy as np
import xbasin
# We open the example data and create 2 grids: 1 for the dataset we have and 1 for the remapped one.
# Here '_fr' means *from* and '_to' *to* (i.e. remapped data).
# +
ds = xr.open_dataset('data/nemo_output_ex.nc')
metrics_fr = {
('X',): ['e1t', 'e1u', 'e1v', 'e1f'],
('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],
('Z',): ['e3t_0', 'e3u_0', 'e3v_0', 'e3w_0']
}
metrics_to = {
('X',): ['e1t', 'e1u', 'e1v', 'e1f'],
('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],
('Z',): ['e3t_1d', 'e3w_1d']
}
grid_fr = xgcm.Grid(ds, periodic=False, metrics=metrics_fr)
grid_to = xgcm.Grid(ds, periodic=False, metrics=metrics_to)
ds
# -
# We remap
theta_to = xbasin.remap_vertical(ds.thetao, grid_fr, grid_to, axis='Z', z_fr=ds.gdepw_0, z_to=ds.gdepw_1d)
theta_to
# The total heat content is conserved:
# +
hc_fr = grid_fr.integrate(ds.thetao, axis='Z')
hc_to = grid_to.integrate(theta_to, axis='Z')
np.allclose(hc_fr, hc_to)  # allclose tolerates floating-point round-off that exact `==` would not
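# Conservative remapping redistributes each source cell's content over the target cells it overlaps, so the vertical integral of θ·dz is preserved. A 1-D sketch on interface depths, assuming piecewise-constant cell values (an illustration of the idea, not xbasin's actual implementation):

```python
def remap_conservative_1d(z_fr, theta_fr, z_to):
    # z_*: cell-interface depths (length ncells+1); theta_fr: cell means
    out = []
    for k in range(len(z_to) - 1):
        top, bot = z_to[k], z_to[k + 1]
        total = 0.0
        for i in range(len(z_fr) - 1):
            # thickness of the overlap between source cell i and target cell k
            overlap = max(0.0, min(bot, z_fr[i + 1]) - max(top, z_fr[i]))
            total += theta_fr[i] * overlap
        out.append(total / (bot - top))
    return out

z_fr, theta_fr = [0.0, 10.0, 30.0], [20.0, 10.0]   # integral = 200 + 200 = 400
z_to = [0.0, 15.0, 30.0]
theta_to = remap_conservative_1d(z_fr, theta_fr, z_to)
```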
| docs/remapping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
from __future__ import print_function
from keras import backend as K
K.set_image_dim_ordering('th') # ensure our dimension notation matches
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Reshape
from keras.layers.core import Activation
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import UpSampling2D
from keras.layers.convolutional import Convolution2D, AveragePooling2D
from keras.layers.core import Flatten
from keras.optimizers import SGD, Adam
from keras.datasets import mnist
from keras import utils
import numpy as np
from PIL import Image, ImageOps
import argparse
import math
import os
import os.path
import glob
# + deletable=true editable=true
def generator_model():
model = Sequential()
model.add(Dense(input_dim=100, output_dim=1024))
model.add(Activation('tanh'))
model.add(Dense(128*8*8))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Reshape((128, 8, 8), input_shape=(128*8*8,)))
model.add(UpSampling2D(size=(4, 4)))
model.add(Convolution2D(64, 5, 5, border_mode='same'))
model.add(Activation('tanh'))
model.add(UpSampling2D(size=(4, 4)))
model.add(Convolution2D(1, 5, 5, border_mode='same'))
model.add(Activation('tanh'))
return model
def discriminator_model():
model = Sequential()
model.add(Convolution2D(
64, 5, 5,
border_mode='same',
input_shape=(1, 128, 128)))
model.add(Activation('tanh'))
model.add(AveragePooling2D(pool_size=(4, 4)))
model.add(Convolution2D(128, 5, 5))
model.add(Activation('tanh'))
model.add(AveragePooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
return model
def generator_containing_discriminator(generator, discriminator):
model = Sequential()
model.add(generator)
discriminator.trainable = False
model.add(discriminator)
return model
def combine_images(generated_images):
num = generated_images.shape[0]
width = int(math.sqrt(num))
height = int(math.ceil(float(num)/width))
shape = generated_images.shape[2:]
image = np.zeros((height*shape[0], width*shape[1]),
dtype=generated_images.dtype)
for index, img in enumerate(generated_images):
i = int(index/width)
j = index % width
image[i*shape[0]:(i+1)*shape[0], j*shape[1]:(j+1)*shape[1]] = \
img[0, :, :]
return image
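# combine_images tiles N images into a near-square grid: width = floor(√N), height = ceil(N/width), and image i lands at row i // width, column i % width. The index arithmetic checked in isolation:

```python
import math

def grid_shape(num):
    width = int(math.sqrt(num))
    height = int(math.ceil(float(num) / width))
    return height, width

def tile_position(index, width):
    return index // width, index % width  # (row, col) of image `index`
```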
# + deletable=true editable=true
model = generator_model()
print(model.summary())
# + deletable=true editable=true
def load_data(pixels=128, verbose=False):
print("Loading data")
X_train = []
paths = glob.glob(os.path.normpath(os.getcwd() + '/logos/*.jpg'))
for path in paths:
if verbose: print(path)
im = Image.open(path)
im = ImageOps.fit(im, (pixels, pixels), Image.ANTIALIAS)
im = ImageOps.grayscale(im)
#im.show()
im = np.asarray(im)
X_train.append(im)
print("Finished loading data")
return np.array(X_train)
def train(epochs, BATCH_SIZE, weights=False):
"""
:param epochs: Train for this many epochs
:param BATCH_SIZE: Size of minibatch
:param weights: If True, load weights from file, otherwise train the model from scratch.
Use this if you have already saved state of the network and want to train it further.
"""
X_train = load_data()
X_train = (X_train.astype(np.float32) - 127.5)/127.5
X_train = X_train.reshape((X_train.shape[0], 1) + X_train.shape[1:])
discriminator = discriminator_model()
generator = generator_model()
if weights:
generator.load_weights('goodgenerator.h5')
discriminator.load_weights('gooddiscriminator.h5')
discriminator_on_generator = \
generator_containing_discriminator(generator, discriminator)
d_optim = SGD(lr=0.0005, momentum=0.9, nesterov=True)
g_optim = SGD(lr=0.0005, momentum=0.9, nesterov=True)
generator.compile(loss='binary_crossentropy', optimizer="SGD")
discriminator_on_generator.compile(
loss='binary_crossentropy', optimizer=g_optim)
discriminator.trainable = True
discriminator.compile(loss='binary_crossentropy', optimizer=d_optim)
noise = np.zeros((BATCH_SIZE, 100))
for epoch in range(epochs):
print("Epoch is", epoch)
print("Number of batches", int(X_train.shape[0]/BATCH_SIZE))
for index in range(int(X_train.shape[0]/BATCH_SIZE)):
for i in range(BATCH_SIZE):
noise[i, :] = np.random.uniform(-1, 1, 100)
image_batch = X_train[index*BATCH_SIZE:(index+1)*BATCH_SIZE]
generated_images = generator.predict(noise, verbose=0)
#print(generated_images.shape)
if index % 20 == 0 and epoch % 10 == 0:
image = combine_images(generated_images)
image = image*127.5+127.5
destpath = os.path.normpath(os.getcwd()+ "/logo-generated-images/"+str(epoch)+"_"+str(index)+".png")
Image.fromarray(image.astype(np.uint8)).save(destpath)
X = np.concatenate((image_batch, generated_images))
y = [1] * BATCH_SIZE + [0] * BATCH_SIZE
d_loss = discriminator.train_on_batch(X, y)
print("batch %d d_loss : %f" % (index, d_loss))
for i in range(BATCH_SIZE):
noise[i, :] = np.random.uniform(-1, 1, 100)
discriminator.trainable = False
g_loss = discriminator_on_generator.train_on_batch(
noise, [1] * BATCH_SIZE)
discriminator.trainable = True
print("batch %d g_loss : %f" % (index, g_loss))
if epoch % 10 == 9:
generator.save_weights('goodgenerator.h5', True)
discriminator.save_weights('gooddiscriminator.h5', True)
def clean(image):
for i in range(1, image.shape[0] - 1):
for j in range(1, image.shape[1] - 1):
if image[i][j] + image[i+1][j] + image[i][j+1] + image[i-1][j] + image[i][j-1] > 127 * 5:
image[i][j] = 255
return image
def generate(BATCH_SIZE):
generator = generator_model()
generator.compile(loss='binary_crossentropy', optimizer="SGD")
generator.load_weights('goodgenerator.h5')
noise = np.zeros((BATCH_SIZE, 100))
    # interpolation endpoints for a latent-space walk (currently unused below)
    a = np.random.uniform(-1, 1, 100)
    b = np.random.uniform(-1, 1, 100)
    grad = (b - a) / BATCH_SIZE
for i in range(BATCH_SIZE):
noise[i, :] = np.random.uniform(-1, 1, 100)
generated_images = generator.predict(noise, verbose=1)
#image = combine_images(generated_images)
print(generated_images.shape)
for image in generated_images:
image = image[0]
image = image*127.5+127.5
Image.fromarray(image.astype(np.uint8)).save("dirty.png")
Image.fromarray(image.astype(np.uint8)).show()
clean(image)
image = Image.fromarray(image.astype(np.uint8))
image.show()
image.save("clean.png")
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--mode", type=str)
parser.add_argument("--batch_size", type=int, default=128)
parser.add_argument("--nice", dest="nice", action="store_true")
parser.set_defaults(nice=False)
args = parser.parse_args()
return args
# + deletable=true editable=true
train(400, 10, False)
# + deletable=true editable=true
generate(1)
# + deletable=true editable=true
| 4-generative-adversarial-networks/how-to-generate-video/logograms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="poiUReSDo1r6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="4cdc178e-e485-48e5-d6af-e6a6e679a881"
# Install nltk
# !pip install --user -U nltk
# + id="vOUgXHY6pAIh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 159} outputId="8f6a7b78-d3ad-49c5-e759-65a11648d158"
# Import all libraries
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import numpy as np
import pandas as pd
import pickle
import re
import seaborn as sns
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle
import string
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
import warnings
warnings.filterwarnings('ignore')
np.set_printoptions(precision=4)
nltk.download('stopwords')
nltk.download('punkt')
# + id="raKtqeIDpBf0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="78683602-627a-4043-dfca-42c7aab64b38"
# Load dataset
#data = pd.read_csv('/content/UpdatedResumeDataSet.csv', engine='python')
data = pd.read_csv('/content/drive/My Drive/CodeDay/UpdatedResumeDataSet.csv') # Comment this line and uncomment the above line if this does not work for you
data.head()
# + id="lQWr0RJRpIQ9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 469} outputId="27176f4a-2057-44c7-fd61-830ce8ad3e98"
# Print unique categories of resumes
print(data['Category'].value_counts())
# + id="4ml2WddhpNEl" colab_type="code" colab={}
# Drop rows where category is "Testing" and store new size of dataset
data = data[data.Category != 'Testing']
data_size = len(data)
# + id="MgLthsGHpN_6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 897} outputId="48dfa1ed-7b09-4a86-c64a-feebadfaa74e"
# Bar graph visualization
plt.figure(figsize=(15,15))
plt.xticks(rotation=90)
sns.countplot(y="Category", data=data)
# + id="xHxRTJ_QpS_o" colab_type="code" colab={}
# Get set of stopwords
from nltk.corpus import stopwords
stopwords_set = set(stopwords.words('english')+['``',"''"])
# + id="81wWgOKYpU1-" colab_type="code" colab={}
# Function to clean resume text
def clean_text(resume_text):
resume_text = re.sub('http\S+\s*', ' ', resume_text) # remove URLs
resume_text = re.sub('RT|cc', ' ', resume_text) # remove RT and cc
resume_text = re.sub('#\S+', '', resume_text) # remove hashtags
resume_text = re.sub('@\S+', ' ', resume_text) # remove mentions
    resume_text = re.sub('[%s]' % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), ' ', resume_text)  # remove punctuation
    resume_text = re.sub(r'[^\x00-\x7f]', r' ', resume_text)  # remove non-ASCII characters
resume_text = re.sub('\s+', ' ', resume_text) # remove extra whitespace
resume_text = resume_text.lower() # convert to lowercase
resume_text_tokens = word_tokenize(resume_text) # tokenize
filtered_text = [w for w in resume_text_tokens if not w in stopwords_set]
# remove stopwords
return ' '.join(filtered_text)
# + id="IKol74xcpWfR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 766} outputId="26c96630-95b6-40a0-ab4c-48f55a2ea2cf"
# Print a sample original resume
print('--- Original resume ---')
print(data['Resume'][0])
# + id="ixmhpVkfpYH8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="ff169ef9-38e2-4e7a-e7a4-dd391597b775"
# Clean the resume
data['cleaned_resume'] = data.Resume.apply(lambda x: clean_text(x))
print('--- Cleaned resume ---')
print(data['cleaned_resume'][0])
# + id="OcxDvdYoparS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="00bff151-8a02-4b36-e7fc-1244846df6e7"
# Get features and labels from data and shuffle
features = data['cleaned_resume'].values
original_labels = data['Category'].values
labels = original_labels.copy()  # use .copy(): basic slicing of an ndarray returns a view, which would also mutate original_labels
for i in range(data_size):
    labels[i] = str(labels[i].lower())  # convert to lowercase
    labels[i] = labels[i].replace(" ", "")  # remove spaces so multi-word labels become single tokens
import random
random.seed(20)
features, labels = shuffle(features, labels, random_state=20)
# Print example feature and label
print(features[0])
print(labels[0])
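One caveat worth noting when duplicating label arrays like this: basic slicing of a NumPy array (`arr[:]`) returns a *view*, not a copy, so in-place edits to the slice also change the source array. A minimal illustration (toy values, not taken from the dataset):

```python
import numpy as np

orig = np.array(["Data Science", "HR"], dtype=object)
view = orig[:]            # basic slicing returns a view, not a copy
view[0] = "datascience"
print(orig[0])            # -> datascience (the original changed too)

safe = orig.copy()        # an explicit copy is independent
safe[1] = "hr"
print(orig[1])            # -> HR (unchanged)
```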
# + id="ymbM9wkVpe7l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="918ca8af-c01c-4d4f-fe86-c9ea56f1984b"
# Split for train and test
train_split = 0.8
train_size = int(train_split * data_size)
train_features = features[:train_size]
train_labels = labels[:train_size]
test_features = features[train_size:]
test_labels = labels[train_size:]
# Print size of each split
print(len(train_labels))
print(len(test_labels))
# + id="siMQvlJapkmZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="c6fc6e2f-515d-4a57-de63-f77f3d9d5093"
# Tokenize feature data and print word dictionary
vocab_size = 6000
oov_tok = '<OOV>'
feature_tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
feature_tokenizer.fit_on_texts(features)
feature_index = feature_tokenizer.word_index
print(dict(list(feature_index.items())))
# Print example sequences from train and test datasets
train_feature_sequences = feature_tokenizer.texts_to_sequences(train_features)
print(train_feature_sequences[0])
test_feature_sequences = feature_tokenizer.texts_to_sequences(test_features)
print(test_feature_sequences[0])
# + id="10PldKNSprXW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="e58456fe-141b-4197-bd58-e5b6a015c3ca"
# Tokenize label data and print label dictionary
label_tokenizer = Tokenizer(lower=True)
label_tokenizer.fit_on_texts(labels)
label_index = label_tokenizer.word_index
print(dict(list(label_index.items())))
# Print example label encodings from train and test datasets
train_label_sequences = label_tokenizer.texts_to_sequences(train_labels)
print(train_label_sequences[0])
test_label_sequences = label_tokenizer.texts_to_sequences(test_labels)
print(test_label_sequences[0])
# + id="SvHlL7PXpzBX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 781} outputId="edf2b961-d95b-4a82-eedc-a95e23dcd58b"
# Pad sequences for feature data
max_length = 300
trunc_type = 'post'
pad_type = 'post'
train_feature_padded = pad_sequences(train_feature_sequences, maxlen=max_length, padding=pad_type, truncating=trunc_type)
test_feature_padded = pad_sequences(test_feature_sequences, maxlen=max_length, padding=pad_type, truncating=trunc_type)
# Print example padded sequences from train and test datasets
print(train_feature_padded[0])
print(test_feature_padded[0])
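With `padding='post'` and `truncating='post'`, `pad_sequences` right-pads short sequences with zeros and cuts long ones at the end. A dependency-free sketch of that behavior (a simplified stand-in for illustration, not the Keras implementation):

```python
def pad_post(seqs, maxlen):
    """Right-pad with 0 / right-truncate, mimicking
    pad_sequences(padding='post', truncating='post')."""
    out = []
    for s in seqs:
        s = list(s)[:maxlen]                      # truncate at the end
        out.append(s + [0] * (maxlen - len(s)))   # pad at the end
    return out

print(pad_post([[5, 2, 9], [7]], maxlen=4))   # [[5, 2, 9, 0], [7, 0, 0, 0]]
print(pad_post([[1, 2, 3, 4, 5]], maxlen=3))  # [[1, 2, 3]]
```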
# + id="89LGoDoqp39S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="4aeac712-8c24-4fc7-cf37-d4aecac17f9e"
# Define the neural network
embedding_dim = 64
model = tf.keras.Sequential([
# Add an Embedding layer expecting input vocab of size 6000, and output embedding dimension of size 64 we set at the top
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=300),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(embedding_dim)),
#tf.keras.layers.Dense(embedding_dim, activation='relu'),
# ReLU is used in place of tanh here; it is a common, well-performing alternative.
#tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(embedding_dim, activation='relu'),
# Add a Dense layer with 25 units and softmax activation for probability distribution
tf.keras.layers.Dense(25, activation='softmax')
])
model.summary()
# + id="N7lkMhVa2JCV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="be062a19-e3a0-42cb-e4c0-6183ba8dc5b9"
# Alternative model
embedding_dim = 64
num_categories = 25
model = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=300),
tf.keras.layers.GlobalMaxPooling1D(),
# ReLU is used in place of tanh here; it is a common, well-performing alternative.
tf.keras.layers.Dense(128, activation='relu'),
# Add a Dense layer with 25 units and softmax activation for probability distribution
tf.keras.layers.Dense(num_categories, activation='softmax'),])
model.summary()
# + id="BgCN2tjmp8a0" colab_type="code" colab={}
# Compile the model and convert train/test data into NumPy arrays
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Features
train_feature_padded = np.array(train_feature_padded)
test_feature_padded = np.array(test_feature_padded)
# Labels
train_label_sequences = np.array(train_label_sequences)
test_label_sequences = np.array(test_label_sequences)
# Print example values
#print(train_feature_padded[0])
#print(train_label_sequences[0])
#print(test_feature_padded[0])
#print(test_label_sequences[0])
# + id="HjkpyftMp_gj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 885} outputId="a0a0a6e3-07f4-4aa0-d423-229d14ba7d89"
# Train the neural network
num_epochs = 25
history = model.fit(train_feature_padded, train_label_sequences, epochs=num_epochs, shuffle = True, validation_data=(test_feature_padded, test_label_sequences), verbose=2)
# + id="ChtkGnwGyHXe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="daaddf67-0809-4086-bd8d-4a76983b8213"
# Plot the training and validation accuracy and loss
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + id="YfTvLMAKqpWV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="e59416db-e02c-44ad-d5bc-e521e229e42c"
# print example feature and its correct label
print(test_features[5])
print(test_labels[5])
# + id="DAqRBYLsqsS3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} outputId="ffa4392b-8040-4c02-9f02-dc179888ac55"
# Create padded sequence for example
resume_example = test_features[5]
example_sequence = feature_tokenizer.texts_to_sequences([resume_example])
example_padded = pad_sequences(example_sequence, maxlen=max_length, padding=pad_type, truncating=trunc_type)
example_padded = np.array(example_padded)
print(example_padded)
# + id="hIKf8crUq8wx" colab_type="code" colab={}
# Make a prediction
prediction = model.predict(example_padded)
# + id="OWYVziHMrCav" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 106} outputId="648ad0d1-bc90-4903-f9e9-4eb108966f2d"
# Verify that prediction has correct format
print(prediction[0])
print(len(prediction[0])) # should be 25
print(np.sum(prediction[0])) # should be 1
# + id="7yf4KnNjrGcV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="de5a87e5-11fb-458c-85f3-446a335bb745"
# Find maximum value in prediction and its index
print(max(prediction[0])) # confidence in prediction (as a fraction of 1)
print(np.argmax(prediction[0])) # should be 3 which corresponds to python developer
# + id="Y-DNi39XxqCN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d84a5f4d-c083-47f7-83f0-647939bf2262"
# Indices of top 5 most probable solutions
indices = np.argpartition(prediction[0], -5)[-5:]
indices = indices[np.argsort(prediction[0][indices])]
indices = list(reversed(indices))
print(indices)
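The `np.argpartition(a, -k)[-k:]` idiom returns the indices of the k largest entries in arbitrary order; the follow-up `argsort` then orders them. A small self-contained check of the same pattern on toy probabilities:

```python
import numpy as np

probs = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
k = 3
idx = np.argpartition(probs, -k)[-k:]   # indices of the 3 largest, unordered
idx = idx[np.argsort(probs[idx])]       # sort ascending by probability
top3 = list(reversed(idx))              # descending: most probable first
print(top3)                             # indices 1, 3, 4 (probs 0.40, 0.30, 0.15)
```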
# + id="pRxAe_QkrW_2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 108} outputId="66acea8b-8984-4bd6-b9fd-29510eb8f78b"
# Save model
model.save('model')
# + id="Xo584Smdr4ps" colab_type="code" colab={}
# Save feature tokenizer
with open('feature_tokenizer.pickle', 'wb') as handle:
pickle.dump(feature_tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# + id="Ul39OZ10sJCC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="ef21da9c-2632-4418-b0dc-919acd24faad"
# Save reverse dictionary of labels to encodings
label_to_encoding = dict(list(label_index.items()))
print(label_to_encoding)
encoding_to_label = {}
for k, v in label_to_encoding.items():
encoding_to_label[v] = k
print(encoding_to_label)
with open('dictionary.pickle', 'wb') as handle:
pickle.dump(encoding_to_label, handle, protocol=pickle.HIGHEST_PROTOCOL)
# + id="7rFzn7JcuOVz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3f1e86c2-8c33-4a61-9416-b8b87895b6ae"
print(encoding_to_label[np.argmax(prediction[0])])
| Neural_network_model_for_resume_classification/Neural_Network_CodeDay_Resume_Screening.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ctgan)
# language: python
# name: ctgan
# ---
# +
#tabular data
# -
#imports
from ctgan import load_demo
from ctgan import CTGANSynthesizer
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import torch
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
cbc=pd.read_csv("data/dataR2 (2).csv")
cbc.shape
cbc.dtypes
cbc.head(5)
cbc_discrete_columns = ['Classification']
ctgan_model = CTGANSynthesizer()
np.random.seed(42)
torch.manual_seed(42)
ctgan_model.fit(cbc, cbc_discrete_columns,epochs=200)
cbc_synth = ctgan_model.sample(cbc.shape[0])
cbc_concatenated = pd.concat([cbc.assign(dataset='original'), cbc_synth.assign(dataset='synthetic')])
f, axes = plt.subplots(2, 2, figsize=(7, 7))
sns.countplot(x="Classification", data=cbc_concatenated,hue='dataset',ax=axes[0, 0])
sns.histplot(data=cbc_concatenated, x="Age", hue="dataset",ax=axes[0, 1])
sns.histplot(data=cbc_concatenated, x="Leptin", hue='dataset',ax=axes[1, 0])
sns.histplot(data=cbc_concatenated, x="BMI", hue='dataset',ax=axes[1, 1])
plt.tight_layout()
##sns.pairplot(cbc_concatenated, hue="dataset",kind="kde")
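Beyond the visual comparison above, a two-sample Kolmogorov–Smirnov test gives a quick numeric check of how closely a synthetic numeric column tracks the original one. A sketch on stand-in arrays (the data here is simulated for illustration, not taken from the dataset):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
same = rng.normal(loc=50, scale=10, size=500)     # stand-in for an original column, e.g. Age
similar = rng.normal(loc=50, scale=10, size=500)  # stand-in for a good synthetic column
shifted = rng.normal(loc=80, scale=10, size=500)  # a clearly different distribution

# The KS statistic is small (and the p-value large) when distributions match,
# and large (p-value tiny) when they differ.
stat_similar, p_similar = ks_2samp(same, similar)
stat_shifted, p_shifted = ks_2samp(same, shifted)
print(stat_similar, stat_shifted)
```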
# +
#sdv evaluation metrics
from sdv.evaluation import evaluate
sdv_cbc_eval=evaluate(cbc_synth, cbc, aggregate=False)
sdv_cbc_eval
| generative/tabular.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# libraries
#
import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
#
#
# load data
#
weather_data = pd.read_csv('Datasets\\austin_weather.csv')
#
weather_data.head()
#
#
# drop the Date column as you will model just using the numeric features
#
weather_data.drop(columns = ['Date'], inplace = True)
#
# list the unique values of the target
#
weather_data['Events'].unique()
#
#
# replace the ' ' with 'None'
#
weather_data['Events'].replace(' ', 'None', inplace = True)
weather_data.head()
#
#
# split the data into train, val, and test splits at 0.7, 0.2, and 0.1
#
train_X, val_X, train_y, val_y = \
train_test_split(weather_data.drop(columns = 'Events'),
weather_data['Events'],
train_size = 0.7,
test_size = 0.2,
random_state = 42)
#
# get the indices of the two splits
#
train_val = list(train_X.index) + list(val_X.index)
#
# test set is therefore the indices not yet used
#
test_X = weather_data.drop(columns = 'Events').drop(train_val, axis = 0)
test_y = weather_data.loc[test_X.index, 'Events']  # .loc selects by index label, not position
#
# verify what we wanted
#
print(f'train set is {train_X.shape[0] / weather_data.shape[0]:.2%}\n',
f'val set is {val_X.shape[0] / weather_data.shape[0]:.2%}\n',
f'test set is {test_X.shape[0] / weather_data.shape[0]:.2%}')
#
print('train rows in val set: ',
list(train_X.index.intersection(val_X.index)))
print('train rows in test set: ',
list(train_X.index.intersection(test_X.index)))
print('val rows in test set: ',
list(val_X.index.intersection(test_X.index)))
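An alternative way to get the same 0.7/0.2/0.1 proportions without the index bookkeeping is two chained `train_test_split` calls; a sketch on dummy data (sizes and seed are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)
y = np.arange(1000)

# carve off the 10% test set first, then split the remaining 90% at a 7:2 ratio
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=2 / 9, random_state=42)

print(len(X_train), len(X_val), len(X_test))
```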
val_X.head()
#
# load stock data (time series)
#
stock_data = pd.read_csv('Datasets\\spx.csv')
#
stock_data.date = pd.to_datetime(stock_data.date)
stock_data.head()
#
#
# split the stock data into a train and val set
# with the val data being the last 9 months of the data
# and the training data being after 12-31-2009
#
# inspect the dates
#
stock_data['date'].describe()
#
#
train_data = stock_data[(stock_data['date'] < '2017-10-01') & (stock_data['date'] > '2009-12-31')]
val_data = stock_data[stock_data['date'] >= '2017-10-01']
#
#
# visualize the result
#
fig, ax = plt.subplots(figsize = (9, 7))
ax.plot(train_data.date, train_data.close,
color = 'black',
lw = 1,
label = 'SPX closing price (training data)')
ax.plot(val_data.date,
val_data.close,
color = 'red',
lw = 0.5,
label = 'SPX closing price (validation data)')
ax.set_title('SPX closing price performance', fontsize = 16)
ax.set_ylabel('closing price', fontsize = 14)
ax.tick_params(labelsize = 12)
ax.legend(fontsize = 12)
plt.show()
#
| Chapter09/Exercise9.01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
nome = input('What is your name? ')
print('Welcome,', nome)
n1 = input('Enter the first number: ')
n2 = input('Enter the second number: ')
print('The sum is:', n1 + n2)  # input() returns str, so + concatenates ('1' + '2' -> '12')
n1 = int(input('Enter a number: '))
n2 = int(input('Enter another number: '))
print('The sum of {} and {} is {}'.format(n1, n2, n1 + n2))
algo = input('Type something: ')
print('The primitive type of this value is ', type(algo))
print('Only spaces? ', algo.isspace())
print('Is it numeric? ', algo.isnumeric())
print('Is it alphabetic? ', algo.isalpha())
print('Is it alphanumeric? ', algo.isalnum())
print('Is it all uppercase? ', algo.isupper())
print('Is it all lowercase? ', algo.islower())
print('Is it title-cased? ', algo.istitle())
nome = input('What is your name? ')
print('Nice to meet you, {}!'.format(nome))
print('Nice to meet you, {:20}!'.format(nome))   # pad to width 20
print('Nice to meet you, {:>20}!'.format(nome))  # right-align in width 20
print('Nice to meet you, {:<20}!'.format(nome))  # left-align in width 20
print('Nice to meet you, {:^20}!'.format(nome))  # center in width 20
print('Nice to meet you, {:=^20}!'.format(nome)) # center, padded with '='
n1 = int(input('Enter a number: '))
n2 = int(input('Enter another number: '))
print('The sum of {} and {} is {}'.format(n1, n2, n1 + n2))
print('The sum of {} and {} is {:.3f}'.format(n1, n2, n1 + n2))
# +
print('Phrase 1')
print('Phrase 2')
print('Phrase 1', end=' ')  # end=' ' keeps the next print on the same line
print('Phrase 2')
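The `end` parameter controls what `print` appends after its arguments; the companion `sep` parameter controls what goes *between* them. A small self-checking complement (output is captured only so it can be inspected):

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print('one', 'two', sep=' | ')            # sep goes between the arguments
    print('a', 'b', 'c', sep='-', end='!\n')  # end replaces the default newline
output = buf.getvalue()
print(output)  # one | two\na-b-c!\n
```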
| 01-Lendo e Escrevendo dados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cognitive Neuroscience: Group Project
#
# ## Worksheet 1 - sinusoids
#
# <NAME>, Department of Cognitive Science and Artificial Intelligence – Tilburg University Academic Year 21-22
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22}) # we want stuff to be visible
# In this worksheet, we will start working with sinusoids. (Co)sine waves are the basic element of Fourier analysis: we will be looking at the *overlap* between a cosine wave of a particular frequency and a particular segment of data. The amount of overlap determines the presence of that cosine wave in the signal, and we call this the **spectral power** of the signal at that frequency.
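The *overlap* described here can be computed directly as a dot product between a signal and a complex sinusoid at the frequency of interest — essentially one coefficient of a discrete Fourier transform. A minimal sketch (a naive illustration, not the FFT used in practice):

```python
import numpy as np

srate = 1000
time = np.arange(0, 1, 1 / srate)
signal = np.sin(2 * np.pi * 5 * time)   # a pure 5 Hz sine, as in the cells below

def overlap_at(sig, freq, t):
    """Magnitude of the overlap of sig with a complex sinusoid at freq
    (one discrete Fourier coefficient, normalized by the number of samples)."""
    kernel = np.exp(-2j * np.pi * freq * t)
    return np.abs(np.dot(sig, kernel)) / len(t)

print(overlap_at(signal, 5, time))   # large (0.5): the signal contains a 5 Hz component
print(overlap_at(signal, 8, time))   # ~0: no 8 Hz component
```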
# ### Sinusoids
# Oscillations are periodic signals, and can be created from sinusoids with np.sin and np.cos. There are a couple of basic features that make up a sinusoid (see the lab intro presentation):
# - frequency
# - phase offset (theta)
# - amplitude
# these parameters come together in the following lines of code:
# +
## simulation parameters single sine
sin_freq = 1 # frequency of the oscillation, default is 1 Hz -> one cycle per second
theta = 0*np.pi/4 # phase offset (starting point of oscillation), set by default to 0, adjustable in 1/4 pi steps
amp = 1 # amplitude, set by default to 1, so the sine is bounded between [-1,1] on the y-axis
# one full cycle is completed in 2pi.
# We are putting 1000 steps in between
time = np.linspace(start = 0, num = 1000, stop = 2*np.pi)
# we create the signal by taking the 1000 timesteps, shifted by theta (0 for now) and taking the sine value
signal = amp*np.sin(time + theta)
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal,'b.-', label='Sinus with phase offset 0')
# we know that at 0, pi and 2pi the sine wave crosses 0. Let's mark these points
# a line plot uses plot([x1,x2], [y1,y2]). When we plot vertical, x1 == x2
# 'k' is the marker for plotting in black
ax.plot(0*np.array([np.pi, np.pi]), [-1,1], 'k')
ax.plot(1*np.array([np.pi, np.pi]), [-1,1], 'k')
ax.plot(2*np.array([np.pi, np.pi]), [-1,1],'k')
ax.set_xlabel('radians')
ax.legend()
plt.show()
# -
# We can take the example a bit further by making some adjustments. First, we normalize time in such a way that [0,1] on time always means [0,2pi] in the signal creation. We can then redefine time as going from [0,1] and define the _sampling rate_ as 1000, taking steps of 1/srate, so 1/1000, meaning we still have 1000 samples:
# 0, 0.001, 0.002 ... 0.999
# +
## simulation parameters
sin_freq = 5 # frequency of the oscillation
theta = 0*np.pi/4 # phase offset (starting point of oscillation)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
signal = amp*np.sin(2*np.pi*sin_freq*time + theta)
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal,'b.-', label='Sinus with freq = 5 and\n phase offset = 0')
ax.set_xlabel('Time (Sample index /1000)')
ax.set_ylim([-1.2,1.7]) # make some room for the legend
ax.legend()
plt.show()
# -
# In its most basic form with a frequency of 1 and no offset, this would plot a single sine wave over the time interval [0,1]. As time gets multiplied by 2\*pi, we are scaling the time axis to the interval [0,2pi], so one full sine cycle. Theta is a time offset (that also gets multiplied by 2pi), so this offset only makes sense in the [0:2pi] interval, as after that the sine is back where it started. In this example, we choose negative time offsets; you can think of these as time _lags_, so all signals follow the blue one with a certain delay in time:
# +
## simulation parameters
sin_freq = 1 # frequency of the oscillation
theta0 = 0*np.pi/4 # phase offset (starting point of oscillation)
theta1 = -1*np.pi/4 # phase offset (1/4 pi)
theta2 = -2*np.pi/4 # phase offset (2/4 pi)
theta3 = -3*np.pi/4 # phase offset (3/4 pi)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
signal0 = amp*np.sin(2*np.pi*sin_freq*time + theta0)
signal1 = amp*np.sin(2*np.pi*sin_freq*time + theta1)
signal2 = amp*np.sin(2*np.pi*sin_freq*time + theta2)
signal3 = amp*np.sin(2*np.pi*sin_freq*time + theta3)
# could we have done this in a loop? of course, but let's keep it simple for now
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal0,'b.-', label='Sinus with phase offset 0')
ax.plot(time,signal1,'r.-', label='Sinus with phase offset -1/4pi')
ax.plot(time,signal2,'g.-', label='Sinus with phase offset -2/4pi')
ax.plot(time,signal3,'m.-', label='Sinus with phase offset -3/4pi')
ax.set_xlabel('Time (Sample index /1000)')
ax.set_ylim([-1.2,2]) # make some room for the legend
ax.legend()
plt.show()
# -
# Building on this, let's add a few sines with different frequencies. We will do this by pre-generating a list of frequencies, and plotting them inside a 'for' loop.
# +
# pre-specify a list of frequencies
freqs = np.arange(start = 1, stop = 5)
theta = 0*np.pi/4 # phase offset (starting point of oscillation)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
#open a new plot
fig, ax = plt.subplots(figsize=(24,8))
for iFreq in freqs:
signal = np.sin(2*np.pi*iFreq*time + theta);
sin_label = "Freq: {} Hz"
ax.plot(time, signal, label = sin_label.format(iFreq))
ax.legend()
ax.set_xlabel('Time (Sample index /1000)')
plt.show()
# -
# ## Exercises
# Exercise 1:
# - Using a loop, create a plot for a 2 Hz oscillation, but now with 4 different phase offsets. Try 0, 0.5pi, 1pi, 1.5pi for example
# - set up "thetas" as a vector of theta values
# - loop over thetas
# - recalculate "signal" in each loop iteration
# - plot to the ax in each iteration
# - add a formatted label to each sine plot
# - prettify according to taste
# +
##
## Your answer here
##
thetas = -np.arange(start = 0, step = 0.5, stop = 2)*np.pi
freq = 2
fig, ax = plt.subplots(figsize=(24,8))
for iTheta in thetas:
signal = np.sin(2*np.pi*freq*time + iTheta);
sin_label = "Offset: {} radians"
ax.plot(time, signal, label = sin_label.format(iTheta))
plt.title('Exercise 1: a collection of sine waves with different phases')
ax.set_ylabel('amplitude')
ax.set_xlabel('time')
ax.legend()
plt.show()
# -
# Exercise 2:
# - pre-allocate an empty matrix with 3 rows and 1000 colums
# - per row, fill this matrix with the signal for an oscillation:
# - Plot a series of 3 oscillations that differ in frequency AND phase offset
# - find the intersection points for the pairs of oscillations (1,2), (1,3), (2,3)
# - define the intersection points as a boolean [ix12] that is true when the difference between e.g. sine1 and sine2 is smaller than 0.05 and false otherwise
# - use the intersection index boolean to extract the time points belonging to these points (time[ix12])
# - similarly, define ix13 and ix23
# - plot vertical lines on these intersection points (using ax.plot([time[ix12], time[ix12]],[-1, 1],'k')
# +
##
## Your answer here
##
freq_mat = np.zeros((3,1000))
freqs = np.array([2,5,7])
thetas = np.array([0.5, 1, 1.5])*np.pi
fig, ax = plt.subplots(figsize=(24,8))
plt.title('Exercise 2')
for iMat in range(freq_mat.shape[0]):
freq_mat[iMat,:] = np.sin(2*np.pi*freqs[iMat]*time + thetas[iMat])
ax.plot(time, np.transpose(freq_mat), linewidth=3)
ax.legend(['sine1', 'sine2', 'sine3'])
# it is necessary to transpose the matrix to line up the dimensions
# between time and the sine matrix
ix_one_two = np.abs(freq_mat[0,:] - freq_mat[1,:]) < 0.05
ax.plot([time[ix_one_two], time[ix_one_two]], [-1,1], 'k', alpha = 0.2)
# black lines: sine 1 & sine 2 intersect
ix_one_three = np.abs(freq_mat[0,:] - freq_mat[2,:]) < 0.05
ax.plot([time[ix_one_three], time[ix_one_three]], [-1,1], 'r', alpha = 0.2)
# red lines: sine 1 & sine 3 intersect
ix_two_three = np.abs(freq_mat[1,:] - freq_mat[2,:]) < 0.05
ax.plot([time[ix_two_three], time[ix_two_three]], [-1,1], 'm', alpha = 0.2)
# magenta lines: sine 2 & sine 3 intersect
plt.show()
# -
# This concludes the exercises for Lab 1
| CogNeuro GroupProject_2022 - Worksheet 1 - answers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ------
# # **Dementia Patients -- Analysis and Prediction**
# ### ***Author : <NAME>***
# ### ***Date : August 2019***
#
#
# # ***Result Plots***
# - <a href='#00'>0. Setup </a>
# - <a href='#00.1'>0.1. Load libraries </a>
# - <a href='#00.2'>0.2. Define paths </a>
#
# - <a href='#01'>1. Data Preparation </a>
# - <a href='#01.1'>1.1. Read Data </a>
# - <a href='#01.2'>1.2. Prepare data </a>
# - <a href='#01.3'>1.3. Prepare target </a>
# - <a href='#01.4'>1.4. Removing Unwanted Features </a>
#
# - <a href='#02'>2. Data Analysis</a>
# - <a href='#02.1'>2.1. Feature </a>
# - <a href='#02.2'>2.2. Target </a>
#
# - <a href='#03'>3. Data Preparation and Vector Transformation</a>
#
# - <a href='#04'>4. Analysis and Imputing Missing Values </a>
#
# - <a href='#05'>5. Feature Analysis</a>
# - <a href='#05.1'>5.1. Correlation Matrix</a>
# - <a href='#05.2'>5.2. Feature and target </a>
# - <a href='#05.3'>5.3. Feature Selection Models </a>
#
# - <a href='#06'>6.Machine Learning -Classification Model</a>
# # <a id='00'>0. Setup </a>
# # <a id='00.1'>0.1 Load libraries </a>
# Loading Libraries
# +
import sys
sys.path.insert(1, '../preprocessing/')
import numpy as np
import pickle
import scipy.stats as spstats
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from sklearn.utils import Bunch  # sklearn.datasets.base was removed in newer scikit-learn versions
#from data_transformation_cls import FeatureTransform
from ast import literal_eval
import plotly.figure_factory as ff
import plotly.offline as py
import plotly.graph_objects as go
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)  # None (not the deprecated -1) disables column-width truncation
from ordered_set import OrderedSet
from func_def import *
# %matplotlib inline
# -
# # <a id='00.2'>0.2 Define paths </a>
# +
# data_path
# # !cp -r ../../../datalcdem/data/optima/dementia_18July/data_notasked/ ../../../datalcdem/data/optima/dementia_18July/data_notasked_mmse_0_30/
#data_path = '../../../datalcdem/data/optima/dementia_03_2020/data_filled_wiiliam/'
#result_path = '../../../datalcdem/data/optima/dementia_03_2020/data_filled_wiiliam/results/'
#optima_path = '../../../datalcdem/data/optima/optima_excel/'
data_path = '../../data/'
# +
# Reading Data
#patients data
patient_df = pd.read_csv(data_path+'patients.csv')
print (patient_df.dtypes)
# change dataType if there is something
for col in patient_df.columns:
if 'Date' in col:
patient_df[col] = pd.to_datetime(patient_df[col])
patient_df = patient_df[['patient_id','gender', 'smoker', 'education', 'ageAtFirstEpisode', 'apoe']]
patient_df.rename(columns={'ageAtFirstEpisode':'age'}, inplace=True)
patient_df.head(5)
# -
# # <a id='1'>1. Data Preparation </a>
# ## <a id='01.1'>1.1. Read Data</a>
# + jupyter={"source_hidden": true}
#Preparation Features from Raw data
# Extracting selected features from Raw data
def rename_columns(col_list):
d = {}
for i in col_list:
if i=='GLOBAL_PATIENT_DB_ID':
d[i]='patient_id'
elif 'CAMDEX SCORES: ' in i:
d[i]=i.replace('CAMDEX SCORES: ', '').replace(' ', '_')
elif 'CAMDEX ADMINISTRATION 1-12: ' in i:
d[i]=i.replace('CAMDEX ADMINISTRATION 1-12: ', '').replace(' ', '_')
elif 'DIAGNOSIS 334-351: ' in i:
d[i]=i.replace('DIAGNOSIS 334-351: ', '').replace(' ', '_')
elif 'OPTIMA DIAGNOSES V 2010: ' in i:
d[i]=i.replace('OPTIMA DIAGNOSES V 2010: ', '').replace(' ', '_')
elif 'PM INFORMATION: ' in i:
d[i]=i.replace('PM INFORMATION: ', '').replace(' ', '_')
else:
d[i]=i.replace(' ', '_')
return d
columns_selected = ['GLOBAL_PATIENT_DB_ID', 'EPISODE_DATE', 'CAMDEX SCORES: MINI MENTAL SCORE', 'CLINICAL BACKGROUND: BODY MASS INDEX',
'DIAGNOSIS 334-351: ANXIETY/PHOBIC', 'OPTIMA DIAGNOSES V 2010: CERBRO-VASCULAR DISEASE PRESENT', 'DIAGNOSIS 334-351: DEPRESSIVE ILLNESS',
'OPTIMA DIAGNOSES V 2010: DIAGNOSTIC CODE', 'CAMDEX ADMINISTRATION 1-12: EST OF SEVERITY OF DEPRESSION',
'CAMDEX ADMINISTRATION 1-12: EST SEVERITY OF DEMENTIA', 'DIAGNOSIS 334-351: PRIMARY PSYCHIATRIC DIAGNOSES', 'OPTIMA DIAGNOSES V 2010: PETERSEN MCI']
columns_selected = list(OrderedSet(columns_selected).union(OrderedSet(features_all)))
# Need to think about other columns eg. dementia, social, sleeping habits,
df_datarequest = pd.read_excel(data_path+'Optima_Data_Report_Cases_6511_filled.xlsx')
display(df_datarequest.head(1))
df_datarequest_features = df_datarequest[columns_selected]
display(df_datarequest_features.columns)
columns_renamed = rename_columns(df_datarequest_features.columns.tolist())
df_datarequest_features.rename(columns=columns_renamed, inplace=True)
patient_com_treat_fea_raw_df = df_datarequest_features # Need to be changed ------------------------
display(patient_com_treat_fea_raw_df.head(5))
# merging
patient_df = patient_com_treat_fea_raw_df.merge(patient_df,how='inner', on=['patient_id'])
# age calculator
patient_df['age'] = patient_df['age'] + patient_df.groupby(by=['patient_id'])['EPISODE_DATE'].transform(lambda x: (x - x.iloc[0])/(np.timedelta64(1, 'D')*365.25))
# saving file
patient_df.to_csv(data_path + 'patient_com_treat_fea_filled_sel_col.csv', index=False)
# patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df.drop_duplicates(subset=['patient_id', 'EPISODE_DATE'])
patient_df.sort_values(by=['patient_id', 'EPISODE_DATE'], inplace=True)
display(patient_df.head(5))
# +
display(patient_df.describe(include='all'))
display(patient_df.info())
tmp_l = []
for i in range(len(patient_df.index)):
# print("Nan in row ", i , " : " , patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
tmp_l.append(patient_df.iloc[i].isnull().sum())
plt.hist(tmp_l)
plt.show()
# -
# find NAN and Notasked and replace them with suitable value
'''print (patient_df.columns.tolist())
notasked_columns = ['ANXIETY/PHOBIC', 'CERBRO-VASCULAR_DISEASE_PRESENT', 'DEPRESSIVE_ILLNESS','EST_OF_SEVERITY_OF_DEPRESSION', 'EST_SEVERITY_OF_DEMENTIA',
'PRIMARY_PSYCHIATRIC_DIAGNOSES']
print ('total nan values %: ', 100*patient_df.isna().sum().sum()/patient_df.size)
patient_df.loc[:, notasked_columns] = patient_df.loc[:, notasked_columns].replace([9], [np.nan])
print ('total nan values % after considering notasked: ', 100*patient_df.isna().sum().sum()/patient_df.size)
display(patient_df.isna().sum())
notasked_columns.append('DIAGNOSTIC_CODE')
notasked_columns.append('education')
patient_df.loc[:, notasked_columns] = patient_df.groupby(by=['patient_id'])[notasked_columns].transform(lambda x: x.fillna(method='pad'))
patient_df.loc[:, ['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']] = patient_df.groupby(by=['patient_id'])[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].transform(lambda x: x.interpolate())
patient_df.loc[:, ['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']] = patient_df.groupby(by=['patient_id'])[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].transform(lambda x: x.fillna(method='pad'))
print ('total nan values % after filling : ', 100*patient_df.isna().sum().sum()/patient_df.size)
display(patient_df.isna().sum())'''
# +
# Label of patients:
misdiagnosed_df = pd.read_csv(data_path+'misdiagnosed.csv')
display(misdiagnosed_df.head(5))
misdiagnosed_df['EPISODE_DATE'] = pd.to_datetime(misdiagnosed_df['EPISODE_DATE'])
#Merge Patient_df
patient_df = patient_df.merge(misdiagnosed_df[['patient_id', 'EPISODE_DATE', 'Misdiagnosed','Misdiagnosed1']], how='left', on=['patient_id', 'EPISODE_DATE'])
display(patient_df.head(5))
# -
patient_df.to_csv(data_path+'patient_df.csv', index=False)
patient_df = pd.read_csv(data_path+'patient_df.csv')
patient_df['EPISODE_DATE'] = pd.to_datetime(patient_df['EPISODE_DATE'])
# duration and previous mini mental score state
patient_df['durations(years)'] = patient_df.groupby(by='patient_id')['EPISODE_DATE'].transform(lambda x: (x - x.iloc[0])/(np.timedelta64(1, 'D')*365.25))
patient_df['MINI_MENTAL_SCORE_PRE'] = patient_df.groupby(by='patient_id')['MINI_MENTAL_SCORE'].transform(lambda x: x.shift(+1))
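# The two derived columns above follow one per-patient pattern: a groupby keeps each patient's visits together, and `transform` (or `shift`) returns a result aligned with the original rows. A small sketch with made-up visits, using `pd.Timedelta(days=365.25)` for the year conversion:

```python
import numpy as np
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "EPISODE_DATE": pd.to_datetime(["2010-01-01", "2011-01-01", "2012-07-01",
                                    "2010-06-01", "2011-06-01"]),
    "MINI_MENTAL_SCORE": [28, 26, 24, 30, 29],
}).sort_values(["patient_id", "EPISODE_DATE"])

# years elapsed since each patient's first visit
visits["durations(years)"] = visits.groupby("patient_id")["EPISODE_DATE"].transform(
    lambda x: (x - x.iloc[0]) / pd.Timedelta(days=365.25))

# previous visit's score; NaN at each patient's first visit
visits["MMSE_PRE"] = visits.groupby("patient_id")["MINI_MENTAL_SCORE"].shift(1)
print(visits[["patient_id", "durations(years)", "MMSE_PRE"]])
```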
patient_df[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].describe() # Out of Range values
bmi_col = 'CLINICAL_BACKGROUND:_BODY_MASS_INDEX'
patient_df.loc[(patient_df[bmi_col] > 54) | (patient_df[bmi_col] < 8), bmi_col] = np.nan
patient_df[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].describe()
# +
# drop unnecessary columns
# patient_df.drop(columns=['patient_id', 'EPISODE_DATE'], inplace=True)
# -
# drop rows with no previous mini mental score (the first visit of each patient)
patient_df = patient_df.dropna(subset=['MINI_MENTAL_SCORE_PRE'], axis=0 )
patient_df['gender'].unique(), patient_df['smoker'].unique(), patient_df['education'].unique(), patient_df['apoe'].unique(), patient_df['Misdiagnosed1'].unique(), patient_df['Misdiagnosed'].unique()
# encoding of categorical features
patient_df['smoker'] = patient_df['smoker'].replace(['smoker', 'no_smoker'],[1, 0])
patient_df['education'] = patient_df['education'].replace(['medium', 'higher','basic'],[1, 2, 0])
patient_df['Misdiagnosed1'] = patient_df['Misdiagnosed1'].replace(['NO', 'YES', 'UNKNOWN'],[0, 1, 2])
patient_df['Misdiagnosed'] = patient_df['Misdiagnosed'].replace(['NO', 'YES', 'UNKNOWN'],[0, 1, 2])
patient_df = pd.get_dummies(patient_df, columns=['gender', 'apoe'])
patient_df.replace(['mixed mitral & Aortic Valve disease', 'Bilateral knee replacements'],[np.nan, np.nan], inplace=True)
patient_df.dtypes
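# The encoding above mixes two strategies: `replace` maps ordered categories (education levels) to integer codes, while `get_dummies` one-hot encodes nominal ones (gender, apoe). A toy sketch of the same two steps:

```python
import pandas as pd

toy = pd.DataFrame({
    "education": ["basic", "medium", "higher"],
    "smoker": ["smoker", "no_smoker", "smoker"],
    "gender": ["Male", "Female", "Male"],
})

# ordered categories get integer codes
toy["education"] = toy["education"].replace(["medium", "higher", "basic"], [1, 2, 0])
toy["smoker"] = toy["smoker"].replace(["smoker", "no_smoker"], [1, 0])

# nominal categories become one-hot indicator columns
toy = pd.get_dummies(toy, columns=["gender"])
print(toy.columns.tolist())
# ['education', 'smoker', 'gender_Female', 'gender_Male']
```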
# +
for i, j in zip(patient_df, patient_df.dtypes):
if not (j == "float64" or j == "int64" or j == 'uint8' or j == 'datetime64[ns]'):
print(i)
patient_df[i] = pd.to_numeric(patient_df[i], errors='coerce')
patient_df = patient_df.fillna(-9)
# -
# Misdiagnosed Criteria
patient_df = patient_df[patient_df['Misdiagnosed']<2]
patient_df = patient_df.astype({col: 'float64' for col, dtype in zip(patient_df.columns, patient_df.dtypes)
                                if 'int' in str(dtype) or str(dtype) == 'object'})
patient_df.describe()
patient_df_X = patient_df.drop(columns=['patient_id', 'EPISODE_DATE', 'Misdiagnosed1', 'MINI_MENTAL_SCORE', 'PETERSEN_MCI', 'Misdiagnosed'])
patient_df_y_cat = patient_df['Misdiagnosed1']
patient_df_y_cat_s = patient_df['Misdiagnosed']
patient_df_y_real = patient_df['MINI_MENTAL_SCORE']
print (patient_df_X.shape, patient_df_y_cat.shape, patient_df_y_cat_s.shape, patient_df_y_real.shape)
# +
# training data
patient_df_X_fill_data = pd.DataFrame(data=patient_df_X.values, columns=patient_df_X.columns, index=patient_df_X.index)
patient_df_X_train, patient_df_y_train = patient_df_X_fill_data[patient_df_y_cat==0], patient_df_y_real[patient_df_y_cat==0]
patient_df_X_test, patient_df_y_test= patient_df_X_fill_data[patient_df_y_cat==1], patient_df_y_real[patient_df_y_cat==1]
patient_df_X_s_train, patient_df_y_s_train = patient_df_X_fill_data[patient_df_y_cat_s==0], patient_df_y_real[patient_df_y_cat_s==0]
patient_df_X_s_test, patient_df_y_s_test= patient_df_X_fill_data[patient_df_y_cat_s==1], patient_df_y_real[patient_df_y_cat_s==1]
# -
patient_df_X_train.to_csv(data_path+'X_train.csv', index=False)
patient_df_y_train.to_csv(data_path+'y_train.csv', index=False)
patient_df_X_test.to_csv(data_path+'X_test.csv', index=False)
patient_df_y_test.to_csv(data_path+'y_test.csv', index=False)
print(patient_df_X_train.shape, patient_df_y_train.shape, patient_df_X_test.shape, patient_df_y_test.shape)
print(patient_df_X_s_train.shape, patient_df_y_s_train.shape, patient_df_X_s_test.shape, patient_df_y_s_test.shape)
# +
X_train, y_train, X_test, y_test = patient_df_X_train.values, patient_df_y_train.values.reshape(-1, 1),patient_df_X_test.values, patient_df_y_test.values.reshape(-1,1)
X_s_train, y_s_train, X_s_test, y_s_test = patient_df_X_s_train.values, patient_df_y_s_train.values.reshape(-1, 1),patient_df_X_s_test.values, patient_df_y_s_test.values.reshape(-1,1)
# +
# Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm, datasets
from sklearn.model_selection import cross_val_score, cross_validate, cross_val_predict
from sklearn.metrics import classification_report
# patient_df_X_fill_data[patient_df_y_cat==0]
X, y = patient_df_X_fill_data, patient_df_y_cat
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from imblearn.over_sampling import SMOTE
smote = SMOTE(sampling_strategy='auto')
data_p_s, target_p_s = smote.fit_resample(patient_df_X_fill_data, patient_df_y_cat)  # fit_resample replaces the deprecated fit_sample
print (data_p_s.shape, target_p_s.shape)
# patient_df_X_fill_data[patient_df_y_cat==0]
X, y = data_p_s, target_p_s
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from collections import Counter
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=0)
X_resampled, y_resampled = cc.fit_resample(patient_df_X_fill_data, patient_df_y_cat)
print(sorted(Counter(y_resampled).items()))
X, y = X_resampled, y_resampled
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=0)
X, y = rus.fit_resample(patient_df_X_fill_data, patient_df_y_cat)
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
# -
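# The class-balancing idea behind the samplers above can be sketched without imblearn: random undersampling simply draws, from each class, as many rows as the rarest class has. A minimal version (an illustrative stand-in, assuming a binary 0/1 target; not the imbalanced-learn implementation):

```python
import numpy as np
import pandas as pd

def random_undersample(X, y, seed=0):
    """Keep min-class-count rows from each class, chosen at random."""
    rng = np.random.default_rng(seed)
    n_min = y.value_counts().min()
    keep = []
    for cls in y.unique():
        idx = y.index[y == cls].to_numpy()
        keep.extend(rng.choice(idx, size=n_min, replace=False))
    return X.loc[keep], y.loc[keep]

# imbalanced toy data: 8 negatives, 2 positives
X = pd.DataFrame({"f": range(10)})
y = pd.Series([0] * 8 + [1] * 2)
X_bal, y_bal = random_undersample(X, y)
print(sorted(y_bal.value_counts().to_dict().items()))  # [(0, 2), (1, 2)]
```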
X_positive, y_positive, X_negative, y_negative = X_train, y_train, X_test, y_test
X_positive
cr_score_list = []
y_true_5, y_pred_5 = np.array([]), np.array([])
y_true_5.shape, y_pred_5.shape
# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
for i in range(5):
X_train, X_test_pos, y_train, y_test_pos = train_test_split(X_positive, y_positive, test_size=0.136)
print (X_train.shape, X_test_pos.shape, y_train.shape, y_test_pos.shape)
X_test, y_test = np.append(X_negative, X_test_pos, axis=0), np.append(y_negative, y_test_pos, axis=0)
#X_test, y_test = X_negative, y_negative
print (X_test.shape, y_test.shape)
regr = RandomForestRegressor(max_depth=2, random_state=0)
regr.fit(X_train, y_train)
#print(regr.feature_importances_)
y_pred = regr.predict(X_test)
#print(regr.predict(X_test))
print (regr.score(X_test, y_test))
print (regr.score(X_train, y_train))
X_y_test = np.append(X_test, y_pred.reshape(-1,1), axis=1)
print (X_test.shape, y_test.shape, X_y_test.shape)
df_X_y_test = pd.DataFrame(data=X_y_test, columns=patient_df_X_fill_data.columns.tolist()+['MMSE_Predicted'])
df_X_y_test.head(5)
patient_df_tmp = patient_df[['patient_id', 'EPISODE_DATE', 'DIAGNOSTIC_CODE', 'smoker', 'gender_Male', 'age', 'durations(years)', 'MINI_MENTAL_SCORE_PRE', ]]
df_X_y_test_tmp = df_X_y_test[['smoker', 'gender_Male', 'DIAGNOSTIC_CODE', 'age', 'durations(years)', 'MINI_MENTAL_SCORE_PRE', 'MMSE_Predicted']]
p_tmp = patient_df_tmp.merge(df_X_y_test_tmp)
print (patient_df.shape, df_X_y_test_tmp.shape, p_tmp.shape)
print (p_tmp.head(5))
# Compare it with Predicted MMSE Scores and True MMSE values
patient_df_misdiag = pd.read_csv(data_path+'misdiagnosed.csv')
patient_df_misdiag['EPISODE_DATE'] = pd.to_datetime(patient_df_misdiag['EPISODE_DATE'])
patient_df_misdiag.head(5)
patient_df_misdiag_predmis = patient_df_misdiag.merge(p_tmp[['patient_id', 'EPISODE_DATE', 'MMSE_Predicted']], how='outer', on=['patient_id', 'EPISODE_DATE'])
patient_df_misdiag_predmis.head(5)
display(patient_df_misdiag_predmis.isna().sum())
index_MMSE_Predicted = patient_df_misdiag_predmis['MMSE_Predicted'].notnull()
patient_df_misdiag_predmis['MMSE_Predicted'] = patient_df_misdiag_predmis['MMSE_Predicted'].fillna(patient_df_misdiag_predmis['MINI_MENTAL_SCORE'])
print (sum(patient_df_misdiag_predmis['MMSE_Predicted']!=patient_df_misdiag_predmis['MINI_MENTAL_SCORE']))
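# The fill step above relies on `Series.fillna` accepting another Series: values align by index, so only the rows without a prediction fall back to the observed score, and the `notnull` mask saved beforehand records which rows really had one. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "MINI_MENTAL_SCORE": [28.0, 26.0, 24.0],
    "MMSE_Predicted": [27.5, np.nan, np.nan],
})

# remember which rows really had a prediction, then fill the rest
had_pred = df["MMSE_Predicted"].notnull()
df["MMSE_Predicted"] = df["MMSE_Predicted"].fillna(df["MINI_MENTAL_SCORE"])

print(df["MMSE_Predicted"].tolist())  # [27.5, 26.0, 24.0]
print(had_pred.tolist())              # [True, False, False]
```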
# find Misdiagnosed
def find_misdiagonsed1():
k = 0
l_misdiagno = []
for pat_id in patient_df_misdiag_predmis['patient_id'].unique():
tmp_df = patient_df_misdiag_predmis[['PETERSEN_MCI', 'AD_STATUS', 'MMSE_Predicted', 'durations(years)']][patient_df_misdiag_predmis['patient_id']==pat_id]
flag = False
mms_val = 0.0
dur_val = 0.0
for i, row in tmp_df.iterrows():
if (row[0]==1.0 or row[1]== 1.0) and flag==False:
l_misdiagno.append('UNKNOWN')
mms_val = row[2]
dur_val = row[3]
flag = True
elif (flag==True):
if (row[2]-mms_val>5.0) and (row[3]-dur_val<=1.0) or\
(row[2]-mms_val>3.0) and ((row[3]-dur_val<2.0 and row[3]-dur_val>1.0)) or\
(row[2]-mms_val>0.0) and (row[3]-dur_val>=2.0):
l_misdiagno.append('YES')
else:
l_misdiagno.append('NO')
else:
l_misdiagno.append('UNKNOWN')
return l_misdiagno
print (len(find_misdiagonsed1()))
patient_df_misdiag_predmis['Misdiagnosed_Predicted'] = find_misdiagonsed1()
c2=patient_df_misdiag_predmis['Misdiagnosed1']!=patient_df_misdiag_predmis['Misdiagnosed_Predicted']
misdiagnosed1_true_pred= patient_df_misdiag_predmis[index_MMSE_Predicted][['Misdiagnosed1', 'Misdiagnosed_Predicted']].replace(['NO', 'YES'], [0,1])
print(classification_report(misdiagnosed1_true_pred.Misdiagnosed1, misdiagnosed1_true_pred.Misdiagnosed_Predicted, target_names=['NO', 'YES']))
y_true_5, y_pred_5 = np.append(y_true_5, misdiagnosed1_true_pred.Misdiagnosed1, axis=0), np.append(y_pred_5, misdiagnosed1_true_pred.Misdiagnosed_Predicted, axis=0)
print(y_true_5.shape, y_pred_5.shape)
# +
df_all = pd.DataFrame(classification_report(y_true_5, y_pred_5, target_names=['NO', 'YES'], output_dict=True))
df_all = df_all.round(2)
n_range = int(y_true_5.shape[0]/X_test.shape[0])
y_shape = X_test.shape[0]
for cr in range(n_range):
d = classification_report(y_true_5.reshape(n_range,y_shape)[cr], y_pred_5.reshape(n_range,y_shape)[cr], target_names=['NO', 'YES'], output_dict=True)
cr_score_list.append(d)
print(cr_score_list)
df_tot = pd.DataFrame(cr_score_list[0])
for i in range(n_range-1):
df_tot = pd.concat([df_tot, pd.DataFrame(cr_score_list[i])], axis='rows')
df_avg = df_tot.groupby(level=0, sort=False).mean().round(2)
acc, sup, acc1, sup1 = df_avg.loc['precision', 'accuracy'], df_avg.loc['support', 'macro avg'],\
df_all.loc['precision', 'accuracy'], df_all.loc['support', 'macro avg']
pd.concat([df_avg.drop(columns='accuracy'), df_all.drop(columns='accuracy')], \
keys= ['Average classification metrics (accuracy:{}, support:{})'.format(acc, sup),\
'Classification metrics (accuracy:{}, support:{})'.format(acc1, sup1)], axis=1)
# +
cm_all = confusion_matrix(y_true_5, y_pred_5)
print(cm_all)
n_range = int(y_true_5.shape[0]/X_test.shape[0])
y_shape = X_test.shape[0]
cr_score_list = []
for cr in range(n_range):
d = confusion_matrix(y_true_5.reshape(n_range,y_shape)[cr], y_pred_5.reshape(n_range,y_shape)[cr])
cr_score_list.append(d)
print(cr_score_list)
cr_score_np = np.array(cr_score_list)
cm_avg = cr_score_np.sum(axis=0)/cr_score_np.shape[0]
print(cm_avg)
# -
| dementia_optima/models/misc/mmse_prediction_final_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# imports:
import numpy as np
import matplotlib.pyplot as plt
import re
import geopandas as gpd
import os
from IPython.display import Image
from IPython.core.display import HTML
import hvplot.xarray
import pandas as pd
import rioxarray
# %matplotlib inline
# N.B. This notebook is a lot more interesting if initialized with
# #%matplotlib widget
# # ICESat-2
#
# ICESat-2 is a laser altimeter designed to precisely measure the height of snow and ice surfaces using green lasers with small footprints. Although ICESat-2 doesn't measure surface heights with the same spatial density as airborne laser altimeters, its global spatial coverage makes it a tempting source of free data about snow surfaces. In this tutorial we will:
#
# 1. Give a brief overview of ICESat-2
#
# 2. Show how to find ICESat-2 granules using the IcePyx metadata search tool
#
# 3. Download some ATL03 photon data from the openAltimetry web service
#
# 4. Request custom processed height estimates from the SlideRule project.
#
# ## Measurements and coverage
#
# ICESat-2 measures surface heights with six laser beams, grouped into three pairs separated by 3 km, with a 90-m separation between the beams in each pair.
#
# Here's a sketch of how this looks (image credit: NSIDC)
#
Image('https://nsidc.org/sites/nsidc.org/files/images/atlas-beam-pattern.png', width=500)
# ICESat-2 flies a repeat orbit with 1387 ground tracks every 91 days, but over Grand Mesa, the collection strategy (up until now) has been designed to optimize spatial coverage, so the measurements are shifted to the left and right of the repeat tracks to help densify the dataset. We should expect to see tracks running (approximately) north-south over the Mesa, in triplets of pairs that are scattered from east to west. Because clouds often block the laser, not every track will return usable data.
#
Image('https://nsidc.org/sites/nsidc.org/files/images/icesat-2-spots-beams-fwd-rev.png', width=500)
# We describe ICESat-2's beam layout on the ground based on pairs (numbered 1, 2, and 3, from left to right) and the location of each beam in each pair (L, R). Thus GT2L is the left beam in the center pair. In each pair, one beam is always stronger than the other (to help penetrate thin clouds), but since the spacecraft sometimes reverses its orientation to keep the solar panels illuminated, the strong beam can be either left or right, depending on the phase of the mission.
#
# ## Basemap (Sentinel)
#
# To get a sense of where the data are, we're going to use a Sentinel SAR image of Grand Mesa. I've stolen this snippet of code from the SAR tutorial:
# +
# GDAL environment variables to efficiently read remote data
os.environ['GDAL_DISABLE_READDIR_ON_OPEN']='EMPTY_DIR'
os.environ['AWS_NO_SIGN_REQUEST']='YES'
# SAR Data are stored in a public S3 Bucket
url = 's3://sentinel-s1-rtc-indigo/tiles/RTC/1/IW/12/S/YJ/2016/S1B_20161121_12SYJ_ASC/Gamma0_VV.tif'
# These Cloud-Optimized-Geotiff (COG) files have 'overviews', low-resolution copies for quick visualization
XR=[725000.0, 767000.0]
YR=[4.30e6, 4.34e6]
# open the dataset
da = rioxarray.open_rasterio(url, overview_level=1).squeeze('band')#.clip_box([712410.0, 4295090.0, 797010.0, 4344370.0])
da=da.where((da.x>XR[0]) & (da.x < XR[1]), drop=True)
da=da.where((da.y>YR[0]) & (da.y < YR[1]), drop=True)
dx=da.x[1]-da.x[0]
SAR_extent=[da.x[0]-dx/2, da.x[-1]+dx/2, np.min(da.y)-dx/2, np.max(da.y)+dx/2]
# Prepare coordinate transformations into the basemap coordinate system
from pyproj import Transformer, CRS
crs=CRS.from_wkt(da['spatial_ref'].spatial_ref.crs_wkt)
to_image_crs=Transformer.from_crs(crs.geodetic_crs, crs)
to_geo_crs=Transformer.from_crs(crs, crs.geodetic_crs)
corners_lon, corners_lat=to_geo_crs.transform(np.array(XR)[[0, 1, 1, 0, 0]], np.array(YR)[[0, 0, 1, 1, 0]])
lonlims=[np.min(corners_lat), np.max(corners_lat)]
latlims=[np.min(corners_lon), np.max(corners_lon)]
# -
# ## Searching for ICESat-2 data using IcePyx
#
# The IcePyx library has functions for searching for ICEsat-2 data, as well as subsetting it and retrieving it from NSIDC. We're going to use the search functions today, because we don't need to retrieve the complete ICESat-2 products.
# +
import requests
import icepyx as ipx
region_a = ipx.Query('ATL03', [lonlims[0], latlims[0], lonlims[1], latlims[1]], ['2018-12-01','2021-06-01'], \
start_time='00:00:00', end_time='23:59:59')
# -
# To run this next section, you'll need to set up your netrc file to connect to NASA Earthdata. During the hackweek we will use machine credentials, but afterwards, you may need to use your own credentials. The login procedure is in the next cell, commented out.
# +
#earthdata_uid = 'your_name_here'
#email = '<EMAIL>'
#region_a.earthdata_login(earthdata_uid, email)
# -
# Once we're logged in, the avail_granules() fetches a list of available ATL03 granules:
region_a.avail_granules()
# The filename for each granule (which contains lots of handy information) is in the 'producer_granule_id' field:
region_a.granules.avail[0]['producer_granule_id']
# The filename contains ATL03_YYYYMMDDHHMMSS_TTTTCCRR_rrr_vv.h5 where:
#
# * YYYYMMDDHHMMSS gives the date (to the second) of the start of the granule
# * TTTT gives the ground-track number
# * CC gives the cycle number
# * RR gives the region (what part of the orbit this is)
# * rrr_vv give the release and version
#
# Let's strip out the date using a regular expression, and see when ICESat-2 flew over Grand Mesa:
# +
ATLAS_re=re.compile(r'ATL.._(?P<year>\d\d\d\d)(?P<month>\d\d)(?P<day>\d\d)\d+_(?P<track>\d\d\d\d)')
date_track=[]
for count, item in enumerate(region_a.granules.avail):
granule_info=ATLAS_re.search(item['producer_granule_id']).groupdict()
date_track += [ ('-'.join([granule_info[key] for key in ['year', 'month', 'day']]), granule_info['track'])]
# print the first ten dates and ground tracks, plus their indexes
[(count, dt) for count, dt in enumerate(date_track[0:10])]
# -
# From this point, the very capable icepyx interface allows you to order either full data granules or subsets of granules from NSIDC. Further details are available from https://icepyx.readthedocs.io/en/latest/, and their 'examples' pages are quite helpful. Note that ATL03 photon data granules are somewhat cumbersome, so downloading them without subsetting will be time consuming, and requesting subsetting from NSIDC may take a while.
#
# ## Ordering photon data from openAltimetry
# For ordering small numbers of points (up to one degree worth of data), the openAltimetry service provides very quick and efficient access to a simplified version of the ATL03 data. Their API (https://openaltimetry.org/data/swagger-ui/) allows us to build web queries for the data. We'll use that for a quick look at the data over Grand Mesa, initially reading just one central beam pair:
def get_OA(date_track, lonlims, latlims, beamnames=["gt1l","gt1r","gt2l","gt2r","gt3l","gt3r"]):
'''
retrieve ICESat2 ATL03 data from openAltimetry
Inputs:
date_track: a list of tuples. Each contains a date string "YYYY-MM-DD" and track number (4-character string)
lonlims: longitude limits for the search
latlims: latitude limits for the search
beamnames: list of strings for the beams
outputs:
a dict containing ATL03 data by beam name
Due credit:
    Much of this code was borrowed from <NAME>'s Pond Picker repo: https://github.com/fliphilipp/pondpicking
'''
IS2_data={}
for this_dt in date_track:
this_IS2_data={}
for beamname in beamnames:
oa_url = 'https://openaltimetry.org/data/api/icesat2/atl03?minx={minx}&miny={miny}&maxx={maxx}&maxy={maxy}&trackId={trackid}&beamName={beamname}&outputFormat=json&date={date}&client=jupyter'
oa_url = oa_url.format(minx=lonlims[0],miny=latlims[0],maxx=lonlims[1], maxy=latlims[1],
trackid=this_dt[1], beamname=beamname, date=this_dt[0], sampling='true')
#.conf_ph = ['Noise','Buffer', 'Low', 'Medium', 'High']
if True:
r = requests.get(oa_url)
data = r.json()
D={}
D['lat_ph'] = []
D['lon_ph'] = []
D['h_ph'] = []
D['conf_ph']=[]
conf_ph = {'Noise':0, 'Buffer':1, 'Low':2, 'Medium':3, 'High':4}
for beam in data:
for photons in beam['series']:
for conf, conf_num in conf_ph.items():
if conf in photons['name']:
for p in photons['data']:
D['lat_ph'].append(p[0])
D['lon_ph'].append(p[1])
D['h_ph'].append(p[2])
D['conf_ph'].append(conf_num)
D['x_ph'], D['y_ph']=to_image_crs.transform(D['lat_ph'], D['lon_ph'])
for key in D:
D[key]=np.array(D[key])
if len(D['lat_ph']) > 0:
this_IS2_data[beamname]=D
#except Exception as e:
# print(e)
# pass
if len(this_IS2_data.keys()) > 0:
IS2_data[this_dt] = this_IS2_data
return IS2_data
#submitting all of these requests should take about 1 minute
IS2_data=get_OA(date_track, lonlims, latlims, ['gt2l'])
# +
plt.figure()
plt.imshow(np.array(da)[::-1,:], origin='lower', extent=SAR_extent, cmap='gray', clim=[0, 0.5])#plt.figure();
for dt, day_data in IS2_data.items():
for beam, D in day_data.items():
plt.plot(D['x_ph'][::10], D['y_ph'][::10], '.', markersize=3, label=str(dt))
# -
# What we see in this plot is Grand Mesa, with lines showing data from the center beams of several tracks passing across it. A few of these tracks have been repeated, but most are offset from the others. Looking at these, it should be clear that the quality of the data is not consistent from track to track. Some are nearly continuous, others have gaps, and others still have no data at all and are not plotted here. Remember, though, that what we've plotted here are just the center beams. There are two more beam pairs, and a total of five more beams!
#
# To get an idea of what the data look like, we'll pick one of the tracks and plot its elevation profile. In interactive mode (%matplotlib widget) it's possible to zoom in on the plot, query the x and y limits, and use these to identify the data for the track that intersects an area of interest. I've done this to pick two good-looking tracks, but you can uncomment the first two lines here and zoom in yourself to look at other tracks:
XR=plt.gca().get_xlim()
YR=plt.gca().get_ylim()
print(XR)
print(YR)
# +
#XR=plt.gca().get_xlim()
#YR=plt.gca().get_ylim()
XR=(740773.7483556366, 741177.9430390946)
YR=(4325197.508090873, 4325728.013612912)
dts_in_axes=[]
for dt, day_data in IS2_data.items():
for beam, D in day_data.items():
if np.any(
(D['x_ph'] > XR[0]) & (D['x_ph'] < XR[1]) &
(D['y_ph'] > np.min(YR)) & (D['y_ph'] < np.max(YR))):
dts_in_axes += [dt]
dts_in_axes
# -
# Based on the axis limits I filled in, Track 295 has two repeats over the mesa that nearly coincide.
#
# Now we can get the full (six-beam) dataset for one of these repeats and plot it:
full_track_data=get_OA([dts_in_axes[0]], lonlims, latlims)
# +
fig=plt.figure();
hax=fig.subplots(1, 2)
plt.sca(hax[0])
plt.imshow(np.array(da)[::-1,:], origin='lower', extent=SAR_extent, cmap='gray', clim=[0, 0.5])#plt.figure();
for dt, day_data in full_track_data.items():
for beam, D in day_data.items():
plt.plot(D['x_ph'], D['y_ph'],'.', markersize=1)
plt.title(dts_in_axes[0])
plt.sca(hax[1])
D=day_data['gt2l']
colors_key={((0,1)):'k', (2,3,4):'r'}
for confs, color in colors_key.items():
for conf in confs:
these=np.flatnonzero(D['conf_ph']==conf)
plt.plot(D['y_ph'][these], D['h_ph'][these],'.', color=color, markersize=1)#label=','.join(list(confs)))
plt.ylabel('WGS-84 height, m');
plt.xlabel('UTM-12 northing, m');
plt.title('gt2l');
plt.tight_layout()
# -
# On the left we see a plot of all six beams crossing (or almost crossing) Grand Mesa, in April of 2020. If you zoom in on the plot, you can distinguish the beam pairs into separate beams. On the right, we see one of the central beams crossing the mesa from south to north. There is a broad band of noise photons that were close enough to the ground to be telemetered by the satellite, and a much narrower band (in red) of photons identified by the processing software as likely coming from the ground.
# These data give a maximum of detail about what the surface looks like to ICESat-2. To reduce this to elevation data, which tell us the surface height at specific locations, there are a few options:
#
# 1. Download higher-level products (i.e. ATL06, ATL08) from NSIDC
# 2. Calculate statistics of the photons (i.e. a running mean of the flagged photon heights)
# 3. Ask the SlideRule service to calculate along-track averages of the photon heights.
#
# We're going to try (3).
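# Before moving on, option (2) can be sketched in a few lines: bin the signal photons along track and average the heights in each bin. This is an illustrative stand-in, not the ATL06 or SlideRule algorithm; `y` and `h` below are made-up along-track coordinates and photon heights.

```python
import numpy as np

def binned_mean_height(y, h, bin_size=20.0):
    """Average photon heights h in bins of along-track coordinate y."""
    edges = np.arange(y.min(), y.max() + bin_size, bin_size)
    which = np.digitize(y, edges)
    centers, means = [], []
    for b in np.unique(which):
        in_bin = which == b
        centers.append(y[in_bin].mean())
        means.append(h[in_bin].mean())
    return np.array(centers), np.array(means)

# synthetic photons on a gentle slope with noise
rng = np.random.default_rng(1)
y = rng.uniform(0.0, 100.0, 500)
h = 0.01 * y + rng.normal(0.0, 0.1, 500)
centers, means = binned_mean_height(y, h)
print(centers.shape, means.shape)
```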
# ## Ordering surface-height segments from SlideRule
#
# SlideRule is a new and exciting (to me) system that does real-time processing of ICESat-2 data _in the cloud_ while also offering efficient web-based delivery of data products. It's new, and it's not available for all locations, but Grand Mesa is one of the test sites, so we should be able to get access to the full set of ATL03 data there.
# [MORE WORK TO GO HERE]
# You'll need to install the sliderule-python package, available from https://github.com/ICESat2-SlideRule/sliderule-python
# This package has been installed on the hub, but if you need it, these commands will install it:
# +
#! [ -d sliderule-python ] || git clone https://github.com/ICESat2-SlideRule/sliderule-python.git
# #! cd sliderule-python; python setup.py develop
# -
# We will submit a query to sliderule to process all of the data that CMR finds for our region, fitting 20-meter line-segments to all of the photons with medium-or-better signal confidence
# +
import pandas as pd
from sliderule import icesat2
# initialize
icesat2.init("icesat2sliderule.org", verbose=False)
# region of interest polygon
region = [ {"lon":lon_i, "lat":lat_i} for lon_i, lat_i in
zip(np.array(lonlims)[[0, -1, -1, 0, 0]], np.array(latlims)[[0, 0, -1, -1, 0]])]
# request parameters
params = {
"poly": region, # request the polygon defined by our lat-lon bounds
"srt": icesat2.SRT_LAND, # request classification based on the land algorithm
"cnf": icesat2.CNF_SURFACE_MEDIUM, # use all photons of low confidence or better
"len": 20.0, # fit data in overlapping 40-meter segments
"res": 10.0, # report one height every 20 m
"ats":5., #report a segment only if it contains at least 2 photons separated by 5 m
"maxi": 6, # allow up to six iterations in fitting each segment to the data
}
# make request
rsps = icesat2.atl06p(params, "atlas-s3")
# save the result in a dataframe
df = pd.DataFrame(rsps)
# calculate the projected (UTM) coordinates used by the basemap:
df['x'], df['y']=to_image_crs.transform(df['lat'], df['lon'])
# -
# SlideRule complains when it tries to calculate heights within our ROI for ground tracks that don't intersect the ROI. This happens quite a bit because the CMR service that IcePyx and SlideRule use to search for the data uses a generous buffer on each ICESat-2 track. It shouldn't bother us. In fact, we have quite a few tracks for our region.
#
# Let's find all the segments from rgt 295, cycle 7 and map their heights:
plt.figure();
plt.imshow(np.array(da)[::-1,:], origin='lower', extent=SAR_extent, cmap='gray', clim=[0, 0.5])#plt.figure();
ii=(df['rgt']==295) & (df['cycle']==7)
plt.scatter(df['x'][ii], df['y'][ii],4, c=df['h_mean'][ii], cmap='gist_earth')
plt.colorbar()
# As we saw a few cells up, for track 295 cycles 7 and 8 are nearly exact repeats. Cycle 7 was April 2020, cycle 8 was July 2020. Could it be that we can measure snow depth in April by comparing the two? Let's plot spot 3 for both!
# +
plt.figure();
ii=(df['rgt']==295) & (df['cycle']==7) & (df['spot']==3)
plt.plot(df['y'][ii], df['h_mean'][ii],'.', label='April')
ii=(df['rgt']==295) & (df['cycle']==8) & (df['spot']==3)
plt.plot(df['y'][ii], df['h_mean'][ii],'.', label='July')
plt.legend()
plt.xlabel('UTM northing, m')
plt.ylabel('height, m')
# -
# To try to get at snow depth, we can look for bare-earth DTMs here:
# 'https://prd-tnm.s3.amazonaws.com/LidarExplorer/index.html#'
# I've picked one of the 1-meter DTMs that covers part of track 295. We'll read it directly from s3 with the rasterio/xarray package, and downsample it to 3m (to save time later).
# +
import rioxarray as rxr
from rasterio.enums import Resampling
url='https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/1m/Projects/CO_MesaCo_QL2_UTM12_2016/TIFF/USGS_one_meter_x74y433_CO_MesaCo_QL2_UTM12_2016.tif'
lidar_ds=rxr.open_rasterio(url)
#resample the DTM to ~3m:
scale_factor = 1/3
new_width = int(lidar_ds.rio.width * scale_factor)
new_height = int(lidar_ds.rio.height * scale_factor)
#reproject the horizontal CRS to match ICESat-2
UTM_wgs84_crs=CRS.from_epsg(32612)
lidar_3m = lidar_ds.rio.reproject(
UTM_wgs84_crs,
shape=(new_height, new_width),
resampling=Resampling.bilinear,
)
# -
plt.figure();
lidar_3m.sel(band=1).plot.imshow()
# To compare the DTM directly with the ICESat-2 data, we'll need to sample it at the ICESat-2 points. There are probably ways to do this directly in xarray, but I'm not an expert. Here we'll use a scipy interpolator:
from scipy.interpolate import RectBivariateSpline
interpolator = RectBivariateSpline(np.array(lidar_3m.y)[::-1], np.array(lidar_3m.x),
np.array(lidar_3m.sel(band=1))[::-1,:], kx=1, ky=1)
# +
x0=np.array(lidar_3m.x)
y0=np.array(lidar_3m.y)
ii=(df['rgt']==295) & (df['cycle']==7) & (df['spot']==3)
ii &= (df['x'] > np.min(x0)) & (df['x'] < np.max(x0))
ii &= (df['y'] > np.min(y0)) & (df['y'] < np.max(y0))
zi=interpolator.ev(df['y'][ii], df['x'][ii])
# +
fig=plt.figure(figsize=[8, 5]);
hax=fig.subplots(1,2)
plt.sca(hax[0])
lidar_3m.sel(band=1).plot.imshow()
plt.plot(df['x'][ii], df['y'][ii],'.')
plt.axis('equal')
plt.sca(hax[1])
plt.plot(df['y'][ii], df['h_mean'][ii],'.', label='April')
plt.plot(df['y'][ii], zi,'.', label='DTM')
plt.legend()
plt.tight_layout()
# -
# The DTM is below the April ICESat-2 heights. That's probably not right, and it's because we don't have the vertical datums correct here (ICESat-2 heights are relative to the WGS84 ellipsoid, while the DEM uses NAD83). That's OK! Since we have multiple passes over the same DEM, we can use the DEM to correct for spatial offsets between the measurements. Let's use the DEM to correct for differences between the July and April data:
# +
plt.figure()
ii=(df['rgt']==295) & (df['cycle']==7) & (df['spot']==3)
ii &= (df['x'] > np.min(x0)) & (df['x'] < np.max(x0))
ii &= (df['y'] > np.min(y0)) & (df['y'] < np.max(y0))
zi=interpolator.ev(df['y'][ii], df['x'][ii])
plt.plot(df['y'][ii], df['h_mean'][ii]-zi,'.', label='April')
ii=(df['rgt']==295) & (df['cycle']==8) & (df['spot']==3)
ii &= (df['x'] > np.min(x0)) & (df['x'] < np.max(x0))
ii &= (df['y'] > np.min(y0)) & (df['y'] < np.max(y0))
zi=interpolator.ev(df['y'][ii], df['x'][ii])
plt.plot(df['y'][ii], df['h_mean'][ii]-zi,'.', label='July')
plt.gca().set_ylim([-20, -10])
plt.legend()
# -
# This looks good, if a little noisy. We could get a better comparison by (1) using multiple ICESat-2 tracks to extract a mean snow-off difference between the DTM and ICESat-2, or (2) finding adjacent pairs of measurements between the two tracks and comparing their heights directly. These are both good goals for projects!
#
# ## Further reading:
#
#
# There are lots of resources available for ICESat-2 data on the web. Two of the best are the NSIDC ICESat-2 pages:
#
# https://nsidc.org/data/icesat-2
#
# and NASA's ICESat-2 page:
# https://icesat-2.gsfc.nasa.gov
| book/tutorials/lidar/ICESat2_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import pandas as pd
# # Regression
#
# ## Relationships
#
# - deterministic (such as Celsius <-> Fahrenheit) - these are exact
# - statistical (height & weight) - displays scatter or trend in graph
#
# ## Notation
#
# > Greek letters express variables in the population
# > $Y_i = \alpha + \beta(x_i - \bar{x}) + \epsilon_i$
#
# > Latin characters express variables in the sample
# > $y_i = a + b(x_i - \bar{x}) + e_i$
#
# > bar expresses the calculated mean
# > $\bar{x}$
#
# > hat expresses the expected value (best fit, i.e. regression line)
# > $\hat{y}$
#
# ## Linear Regression Equation
# *line of best fit*
#
# $Y=a+bX$ <br/>
# where Y is the *dependent* (or response, outcome) variable, and X is the *independent* (or predictor/explanatory) variable.
#
# $\displaystyle a = \frac{ (\sum y)(\sum x^2) - (\sum x)(\sum xy) }{ n(\sum x^2) - (\sum x)^2}$<br/>
#
# $\displaystyle b = \frac{ n(\sum xy) - (\sum x)(\sum y) }{ n(\sum x^2) - (\sum x)^2}$
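# The two sum formulas above can be checked numerically against NumPy's least-squares fit (a sketch with made-up data):

```python
import numpy as np

# Made-up data for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = x.size

# Intercept and slope from the closed-form sum expressions above
denom = n * (x**2).sum() - x.sum()**2
a = (y.sum() * (x**2).sum() - x.sum() * (x * y).sum()) / denom
b = (n * (x * y).sum() - x.sum() * y.sum()) / denom

# np.polyfit returns [slope, intercept] for a degree-1 fit
slope, intercept = np.polyfit(x, y, 1)
print(np.allclose([a, b], [intercept, slope]))  # → True
```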
#
#
# ### Derivation
#
# - let $x_i$ be the ith predictor value (x axis)
# - let $y_i$ be the ith observed value (y axis)
# - let $\hat{y}_i$ be the ith predicted value using a regression line (y value of the line of best fit)
#
# #### Find the least error
# *The error between observation and prediction needs to be as low as possible*
# - $e_i = y_i - \hat{y}_i$
#
# Summing the squared errors gives us the quantity to minimise - the least squares criterion
#
# $\displaystyle\sum_{i=1}^{n} e_i^2$
#
# *This is better than summing the absolute values, as squaring not only yields positive values, it is also better behaved under calculus*
#
# *So, we need to find the best fitted line, i.e. the values a and b, where $\hat{y}_i = a + bx_i$. We can express the sum of squared errors in terms of this linear formula*
#
# $\displaystyle\begin{split}
# Q &= \sum_{i=1}^{n} e_i^2 \\
# &= \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \\
# &= \sum_{i=1}^{n} \big(y_i - (a + bx_i) \big)^2 \\
# \end{split}$
#
# *Instead of finding 'a', the intercept, which isn't useful for many statistical analyses (e.g. height and weight never reach 0), we use the arithmetic mean of x as the origin.*
#
# $\hat{y}_i = a + b(x_i - \bar{x})$
#
# #### Find the minimum intercept
# To find the minimum with respect to a, the intercept, set the derivative to 0
#
# $\displaystyle\begin{split}
# Q &= \sum_{i=1}^{n} \Big(y_i - \big(a + b(x_i - \bar{x}) \big) \Big)^2 \\
# & \text{ using the chain rule: } f = g \circ h \implies f^\prime(x) = g^\prime\big(h(x)\big) \cdot h^\prime(x) \\
# & \text{ where } g(u) = u^2 \text{, and } h(x) = y_i - \big(a + b(x_i - \bar{x}) \big) \\
# \frac{dQ}{da} &= 2\sum_{i=1}^{n} \Big(y_i - \big(a + b(x_i - \bar{x}) \big) \Big) (-1) \equiv 0 \\
# &= - \sum_{i=1}^n y_i + \sum_{i=1}^n a + b \sum_{i=1}^n (x_i - \bar{x}) \equiv 0 \\
# & \text{as } \sum_{i=1}^n a = na \text{, and } \sum_{i=1}^n (x_i - \bar{x}) = 0 \\
# &= - \sum_{i=1}^n y_i + na \equiv 0 \\
# a &= \frac{1}{n} \sum\limits_{i=1}^n y_i \\
# &= \bar{y} \\
# & \text{ at the best fit, a equals the mean of the y values when } \bar{x} \text{ is taken as the origin} \\
# Q &= \sum_{i=1}^{n} \Big(y_i - \big( \bar{y} + b(x_i - \bar{x}) \big) \Big)^2 \\
# \end{split}$
#
# #### Find the minimum slope
# To find the minimum with respect to b, the slope, set the derivative to 0
#
# $\displaystyle\begin{split}
# Q &= \sum_{i=1}^{n} \Big(y_i - \big( \bar{y} + b(x_i - \bar{x}) \big) \Big)^2 \\
# & \text{using the chain rule: } f = g \circ h \implies f^\prime(x) = g^\prime\big(h(x)\big) \cdot h^\prime(x) \\
# & \text{where } g(u) = u^2 \text{, and } h(x) = y_i - \big( \bar{y} + b(x_i - \bar{x}) \big) \\
# \frac{dQ}{db} &= 2\sum_{i=1}^{n} \Big(y_i - \big( \bar{y} + b(x_i - \bar{x}) \big) \Big) \cdot - (x_i - \bar{x}) \equiv 0 \\
# &= - \sum_{i=1}^{n} (y_i - \bar{y})(x_i - \bar{x}) + b \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x}) \equiv 0 \\
# & b \sum_{i=1}^{n} (x_i - \bar{x})^2 = \sum_{i=1}^{n} (y_i - \bar{y})(x_i - \bar{x}) \\
# b &= \frac{ \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) }{ \sum_{i=1}^{n} (x_i - \bar{x})^2 } \\
# \end{split}$
#
# +
# Regression Line
SIZE = 1200
SAMPLE_SIZE = 120
# Generate random variables for our population
M = 0.3
C = 3.0
x = np.random.uniform(size=SIZE) * 20
y = 10.0 + np.random.normal(scale=0.8, size=SIZE)
y = y+(x*M-C)
# get midpoint
x_mean_pop = x.sum() / x.size
y_mean_pop = y.sum() / y.size
# Take a random sample for analysis
sample = np.random.choice(SIZE, SAMPLE_SIZE, replace=False)  # sample indices from the full population
x_sample = x[sample]
y_sample = y[sample]
x_mean_sample = x_sample.sum() / x_sample.size
y_mean_sample = y_sample.sum() / y_sample.size
# Manual way to get the intercept and slope for our sample
numerator = (x_sample - x_mean_sample) * (y_sample - y_mean_sample)
denominator = (x_sample - x_mean_sample) ** 2
slope_sample = numerator.sum() / denominator.sum()
intercept_sample = y_mean_sample - slope_sample * x_mean_sample
# The Numpy way for our population
slope_pop, intercept_pop = np.polyfit(x, y, 1)
# build ab line
abline_x = np.linspace(0,20.0,20)
abline_values_pop = [slope_pop * i + intercept_pop for i in abline_x]
abline_values_sample = [slope_sample * i + intercept_sample for i in abline_x]
plt.figure(figsize=(14,6))
plt.margins(0,0)
plt.title('Linear regression: population vs. sample')
plt.plot(x_mean_pop, y_mean_pop, color='indigo', lw=1, alpha=0.5, marker='o', label='Arithmetic mean of population x={:0.2f}, y={:0.2f}'.format(x_mean_pop, y_mean_pop))
plt.scatter(x, y, color='slateblue', s=1, lw=1, alpha=0.3, label='Distribution of population')
plt.plot(abline_x, abline_values_pop, color='rebeccapurple', lw=1, alpha=1.0, dashes =(6,6), label=r'Regression Line (using np.polyfit) of population $\mu_Y = E(Y) = \alpha + \beta(x-\bar{x})$')
plt.plot(x_mean_sample, y_mean_sample, color='maroon', lw=1, alpha=0.5, marker='o', label='Arithmetic mean of sample x={:0.2f}, y={:0.2f}'.format(x_mean_sample, y_mean_sample))
plt.scatter(x_sample, y_sample, color='red', s=1, lw=1, alpha=0.7, label='Distribution of sample')
plt.plot(abline_x, abline_values_sample, color='orangered', lw=1, alpha=1.0, dashes =(6,6), label=r'Manual Regression Line of sample $\hat{y} = a + b(x-\bar{x})$')
plt.xlim((0,20.0))
plt.ylim((0,15.0))
plt.legend()
plt.show()
# -
# ----
#
# ## The Simple Linear Regression Model
#
# We can define this with $ \mu_Y = E(Y) = \alpha + \beta(x-\bar{x})$
#
# Also, considering that each data point deviates from the regression line by $\epsilon_i$ suggests $Y_i = \alpha + \beta(x - \bar{x}) + \epsilon_i$
#
# We don't have the luxury of calculating $\alpha$ and $\beta$ for the population, so we calculate a and b from the sample instead: $\hat{y}_i = a+b(x_i - \bar{x})$
#
# Use this model when you can make the following assumptions (LINE):
# - The mean of the responses $E(Y_i)$ is a **L**inear function of $x_i$
# - The errors, $\epsilon_i$, and hence the responses $Y_i$ are **I**ndependent
# - The errors, $\epsilon_i$, and hence the responses $Y_i$ are **N**ormally Distributed
# - The errors, $\epsilon_i$, and hence the responses $Y_i$ have **E**qual variances ($\sigma^2$) for all x values
#
# We can see the line defines the best fit, but it does not express the dispersion, i.e. how the values are distributed. We need to expand our model to describe the distribution.
#
# ![Simple Linear Regression Illustration][i1]
#
#
# ### References
# - [Pen State - Stats - The Model][r1]
#
#
# [r1]: https://newonlinecourses.science.psu.edu/stat414/node/279/
# [i1]: https://newonlinecourses.science.psu.edu/stat414/sites/onlinecourses.science.psu.edu.stat414/files/lesson35/Less35_Graph16/index.gif
# ----
#
# ### Simple Linear Regression Model Proof
#
# The Simple Linear Regression Model states that the errors $\epsilon_i$ are independent and normally distributed with mean 0 and variance $\sigma^2$: $\epsilon_i \sim N(0, \sigma^2)$ (where $N$ denotes the normal distribution)
#
# The linearity condition: $Y_i = \alpha + \beta(x - \bar{x}) + \epsilon_i$
#
# therefore implies that: $Y_i \sim N\Big( \alpha + \beta(x - \bar{x}) , \sigma^2 \Big)$
#
# Considering the normal distribution function $f(x| \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{- \frac{(x-\mu)^2}{2\sigma^2} }$, where $\sigma^2$ is the variance, and $\mu$ is the mean
#
# Therefore the likelihood function is: $L_{Y_i}(\alpha, \beta, \sigma^2) = \prod\limits_{i=1}^n \frac{1}{\sqrt{2\pi} \sigma} \exp \Bigg[ - \frac{ \big(Y_i - \alpha - \beta(x_i - \bar{x})\big)^2}{ 2\sigma^2} \Bigg]$
#
# Which can be rewritten as: $L = (2\pi)^{-\frac{n}{2}}(\sigma^2)^{-\frac{n}{2}} \exp \Bigg[ - \frac{1}{2\sigma^2} \sum\limits_{i=1}^{n} \big(Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2 \Bigg]$
#
# Take log on both sides, we get: $\log{(L)} = -\frac{n}{2}\log{(2\pi)} - \frac{n}{2}\log{(\sigma^2)} \;\;\;-\;\;\; \frac{1}{2\sigma^2} \sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2$
#
# The negative sign in front of the summation on the right-hand side tells us that the only way to maximize $\log L(\alpha, \beta, \sigma^2)$ with respect to $\alpha$ and $\beta$ is to minimize $\sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2$ with respect to $\alpha$ and $\beta$ - which is the least squares criterion. Therefore the maximum likelihood (ML) estimators of $\alpha$ and $\beta$ must be the same as the least squares estimators.
#
# ### References
# - [Pen State - Stats - The Model][r1]
#
#
# [r1]: https://newonlinecourses.science.psu.edu/stat414/node/279/
#
# ----
# ## Variance $\sigma^2$
#
# ![Probability Density][i2]
#
# We would estimate the population variance $\sigma^2$ using the sample variance $s^2$
#
# $s^2 = \frac{1}{n-1} \sum\limits_{i=1}^{n} (Y_i - \bar{Y})^2$
#
# If we had multiple populations, for example,
#
# ![PD for Multiple Populations][i3]
#
# To estimate the common variance among many populations, we can use either a biased or an unbiased estimator.
#
# Biased estimator: $\hat{\sigma}^2 = \frac{1}{n} \sum\limits_{i=1}^{n} (Y_i - \hat{Y_i})^2$
#
# Unbiased / mean squared error (MSE) estimator: $ MSE = \frac{1}{n-2} \sum\limits_{i=1}^{n} (Y_i - \hat{Y_i})^2$
#
#
# These are needed to derive confidence intervals for $\alpha$ and $\beta$.
#
# [i2]: https://newonlinecourses.science.psu.edu/stat414/sites/onlinecourses.science.psu.edu.stat414/files/lesson35/Less35_Graph19/index.gif
# [i3]: https://newonlinecourses.science.psu.edu/stat414/sites/onlinecourses.science.psu.edu.stat414/files/lesson35/Less35_Graph20b/index.gif
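# A small sketch contrasting the two estimators on synthetic data (the numbers are made up; np.polyfit provides the fitted line):

```python
import numpy as np

# Synthetic data: a known line plus unit-variance noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

# Fit the regression line and form the residuals Y_i - Yhat_i
b, a = np.polyfit(x, y, 1)          # slope, intercept
resid = y - (a + b * x)
n = x.size

sigma2_biased = (resid**2).sum() / n     # ML (biased) estimator
mse = (resid**2).sum() / (n - 2)         # unbiased estimator
print(sigma2_biased < mse)  # → True: dividing by n - 2 always gives the larger value
```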
# ### Proof: Variance Biased Estimator
#
# We have shown that $\log{(L)} = -\frac{n}{2}\log{(2\pi)} - \frac{n}{2}\log{(\sigma^2)} \;\;\;-\;\;\; \frac{1}{2\sigma^2} \sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2$
#
# To maximize this value, we take the derivative with respect to $\sigma^2$:
#
# $\displaystyle\begin{split}
# \frac{\partial \log{(L)}}{\partial \sigma^2} &= -\frac{n}{2\sigma^2} - \frac{1}{2} \sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2 \cdot \Bigg( - \frac{1}{(\sigma^2)^2} \Bigg) \equiv 0 \text{ at the maximum} \\
# & \text{ multiply by }2(\sigma^2)^2 \\
# & \therefore - n\sigma^2 + \sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2 = 0 \\
# \hat{\sigma}^2 &= \frac{1}{n} \sum\limits_{i=1}^n \big( Y_i - \alpha - \beta(x_i - \bar{x}) \big)^2 \\
# &= \frac{1}{n} \sum\limits_{i=1}^n ( Y_i - \hat{Y_i} )^2
# \end{split}$
#
#
# ## Confidence Intervals
#
| notebooks/math/statistics/linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## The list data structure
# initializing a list with a single element
lista_de_mercado = ['ovos, farinha, leite, maça']
# printing a list
print(lista_de_mercado)
# creating another list, with 4 elements
lista_de_mercado2 = ['ovos', 'farinha', 'leite', 'maça']
# printing the second list
print(lista_de_mercado2)
# showing sequence access by index
print(lista_de_mercado[0])
print(lista_de_mercado2[0])
# creating a dynamic (mixed-type) list
dinamica = [15, 1.75, 'Carlos']
# printing the list sequence
print(dinamica)
# retrieving each item of the list
item1 = dinamica[0]
item2 = dinamica[1]
item3 = dinamica[2]
# printing each retrieved item
print(item1, item2, item3)
# ## Updating a list item
print(lista_de_mercado2)
# changing the value of an item
lista_de_mercado2[2] = 'açucar'
# showing the changed value
print(lista_de_mercado2)
# ## Deleting an item
# deleting an item from the list
del lista_de_mercado2[3]
print(lista_de_mercado2)
lista_encadeada = [['1', '2', '3'], ['Carlos', 'Francine', 'Mindingo'], ['39', '39', '6']]
print(lista_encadeada)
# assigning one element of the list to a variable
nome = lista_encadeada[1]
# printing the item retrieved by list index
print(nome)
# assigning one element of one item of the list
f = nome[1]
# printing the element
print(f)
# retrieving one item of the list, storing it in another variable
nome = lista_encadeada[1]
# printing the list
print(nome)
# indexing into the list
c = nome[0]
f = nome[1]
m = nome[2]
print(f'1st {c}\n2nd {f}\n3rd {m}')
# ## Operations on lists
# creating the list
lista_encadeada = [['1', '2', '3'], ['Carlos', 'Francine', 'Mindingo'], ['39', '39', '6']]
# printing the list
print(lista_encadeada)
# row/column indexing, like a matrix
print(lista_encadeada[0][0])
# storing a given item in a variable
m = lista_encadeada[1][2]
# printing this element
print(m)
# summing an element taken from the nested list
numero = lista_encadeada[0]
print(numero)
valor = int(numero[2])
soma = valor + 3
print(f'The sum with element 3 is: {soma}')
# ## Concatenating lists
# creating the first list
lista01 = [1, 2, 3]
# creating the second list
lista02 = [4, 5, 6]
# concatenating
lista_total = lista01 + lista02
# printing the concatenation
print(lista_total)
# ## The in operator
# creating the list
operador = [10, 45, 100]
# using in to look for a value in the list; returns a boolean
print(45 in operador)
# using in to look for a value in the list; returns a boolean
print(25 in operador)
# ## Built-in functions
# creating a list
lista = [1, 'Carlos', 39, '<NAME>']
# the len function returns the length of the list
print(len(lista))
# creating a list of integers
lista02 = [60, 1, 57, 30, 70]
# printing the largest value in the list
print(max(lista02))
# returning the smallest value
print(min(lista02))
# list of strings
lista_de_mercado = ['carne', 'açucar', 'arroz']
# printing the list
print(lista_de_mercado)
# appending an item to the list
lista_de_mercado.append('suco')
# appending the same value again; the list is mutable and each item is a string
lista_de_mercado.append('suco')
# printing the list
print(lista_de_mercado)
# creating an empty list
vazia = []
# checking the type of the list
print(type(vazia))
# appending a value
vazia.append(1)
# printing the list
print(vazia)
# creating a list
old_lista = [0, 1, 2, 3, 4, 5]
# creating an empty list
new_lista = []
# filling the new list in a for loop
for item in old_lista:
    new_lista.append(item)
# printing the filled new list
for i in new_lista:
    print(i)
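# The copy loop above can also be written as a list comprehension, or with the list() constructor; both produce a new (shallow) copy:

```python
old_lista = [0, 1, 2, 3, 4, 5]

# list comprehension builds the new list in one expression
new_lista = [item for item in old_lista]

# list() makes a shallow copy as well
copia = list(old_lista)

print(new_lista == old_lista, copia == old_lista)  # → True True
```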
# looking things up by index
cidades = ['Recife', 'Manaus', 'Salvador']
# adding more items to the list with a built-in method
cidades.extend(['Fortaleza', 'Palmas'])
# printing the list
print(cidades)
# finding the index of an element in the list
print(cidades.index('Fortaleza'))
# inserting at a specific position in the list
cidades.insert(0, 'Bahia')
# printing the list
print(cidades)
# removing an element from the list
cidades.remove('Bahia')
# printing the list
print(cidades)
# reversing the list
cidades.reverse()
# printing the list
print(cidades)
# sorting the list
numeros = [90, 80, 70, 60, 50, 40, 30, 20, 10]
numeros.sort()
print(numeros)
# ## End
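# sort() orders the list in place; the built-in sorted() returns a new list and accepts key and reverse arguments:

```python
numeros = [90, 80, 70, 60, 50, 40, 30, 20, 10]

# sorted() returns a new list and leaves the original untouched
ascendente = sorted(numeros)
descendente = sorted(numeros, reverse=True)

# key customizes the ordering, e.g. by string length (stable sort)
palavras = ['Recife', 'Manaus', 'Salvador']
por_tamanho = sorted(palavras, key=len)

print(ascendente[0], descendente[0], por_tamanho[-1])  # → 10 90 Salvador
```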
| Cap02/estrutura-de-dados-lista.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="UIX8NfGVyqc1"
# # Introduction
#
# This is a Jupyter notebook. It runs Python in REPL (read-eval-print loop) mode.
#
#
# + [markdown] colab_type="text" id="3wW7keFr0j3c"
# ## Simple Python
# + colab={} colab_type="code" id="MSi6383hzKX2"
a=1
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="uytqJJOh0P_Y" outputId="2c1ec830-b647-491a-9a2c-58b6e0f31171"
print(a)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GRiec64O0TaM" outputId="797038a1-5b19-4a34-bde0-6c25f5b81752"
a
# + [markdown] colab_type="text" id="N8pIxnpI2b-P"
# ## Simple Terminal
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="fniQem6g0Zek" outputId="5404dcb3-1008-4c76-f186-4db2c4e92b0b"
# !ls -a ..
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3iGvZxqE1I44" outputId="2904d0f9-01f5-45af-f048-b5208bc08134"
# !pwd
# + colab={} colab_type="code" id="1hsCmSCh1qAD"
# !ls /home
# + colab={} colab_type="code" id="OY89c3ot1yud"
# !ls ../home
# + colab={} colab_type="code" id="H0858JMp12gA"
# !mkdir data
# + colab={} colab_type="code" id="F9v7hPig19k6"
# !cp -r ./data ./temp
# + colab={} colab_type="code" id="EHbRu8fM2EN2"
# !touch config.txt
# + colab={} colab_type="code" id="UfPnw6Wg2P3J"
# !mv config.txt config.json
# + colab={"base_uri": "https://localhost:8080/", "height": 357} colab_type="code" id="lZK82cpA2W_4" outputId="77ccd049-97ca-4698-f9f8-93bce2fdc158"
# !nvidia-smi
# + [markdown] colab_type="text" id="RylgozvU21D4"
# ## Python Packages
# + colab={} colab_type="code" id="jTPTuB8l2iKl"
import torch
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="igGCB7pl2-iI" outputId="f54443f8-1831-49b2-f839-5d81dd245063"
# !pip install sklearn
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Eix1yyt23HnW" outputId="980a4b72-6e95-4326-c9d2-67a247aced05"
# !python --version
# + [markdown] colab_type="text" id="1RtvsOC6lvLD"
# ## Python syntax
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8HaAck-c3Oma" outputId="49e8146c-20e8-4792-cb68-a5e022449b8f"
array=[1,2,3,8,0]
total=0
for x in array:
total = total + x
print(total)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="DgfH0XwamLnd" outputId="4d28c9d8-9d9f-4f75-ecd4-15a6d2114366"
array=[1,2,3,8,0]
total=0
for x in array:
total += x
print(total)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="YWxmEa4AnGT1" outputId="db4fd37b-daf8-40c9-f70a-a9e56714d53a"
total=0
for x in [1,2,3,8,0]:
total += x
print(total)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="l_bGKhB_nJ1I" outputId="63333913-ca51-40d6-cc65-4ed176390768"
total = sum([1,2,3,8,0])
print(total)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7qQebTt0ngYW" outputId="c214130f-dd8c-4360-a20a-0078a51c38d4"
print(sum([1,2,3,8,0]))
# + [markdown] colab_type="text" id="Y9O0KSO1n0YA"
# ## Python language
# + [markdown] colab_type="text" id="FdAFhpY2n6e2"
# ### Types
# + colab={} colab_type="code" id="nOAACPtKnlBK"
# Numbers
a=1
b=2.3
c=False # c=True
d=42j
e=None
# Strings
f="This is a string"
f='This is a string. Это строка. 🙎♂️'
# + colab={} colab_type="code" id="1FPt91EioYS5"
# Sequences
g=[1,2,3,4,5] # List
h=(1,"a",2,5) # Tuple (read only)
i=range(19) # Range [0,19)
i=range(5,19) # Range [5,19)
i=range(1,10,2)
# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="1dCFfDU5podh" outputId="dd41a5b5-fc18-4772-f847-4ac534cec15e"
for x in range(1,10,2):
print(x)
# + colab={} colab_type="code" id="dD2N4QtUppHu"
j = {"key": "value", "university": "polytech"} # dictionary
k = {1,2,3,4,5,6} # set
# + colab={"base_uri": "https://localhost:8080/", "height": 198} colab_type="code" id="aPKREA_GqCKJ" outputId="86371734-cdd8-4b92-aa69-6bd60ed03af1"
print(hash(f)) # hash of string
print(hash(k)) # hash of a set raises TypeError (sets are mutable, hence unhashable)
# + [markdown] colab_type="text" id="tMhydcOhru5C"
# ### Operations
# + colab={} colab_type="code" id="uV4G-2pKqZBc"
x=1
y,z=1,2 #(y,z) = (1,2)
b=x # It is not a copy operation; both names refer to the same object
# Example
z,y=y,z
# Math
z=x+y
z=x//y
z=x%y
z=x**y
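# A quick check of what the arithmetic operators above return (// is floor division, % the remainder, ** exponentiation):

```python
x, y = 7, 2

print(x + y)   # → 9
print(x // y)  # floor division → 3
print(x % y)   # remainder → 1
print(x ** y)  # exponentiation → 49

# tuple swap, as used above
y, x = x, y
print(x, y)    # → 2 7
```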
# + [markdown] colab_type="text" id="-BUdPFv_tPtw"
# ### Control flow
# + colab={} colab_type="code" id="6FJ1DItir5RC"
for x in range(10):
pass
while False:
pass
# + colab={} colab_type="code" id="wxbEmsratX3d"
# Comparison operators: >, <, >=, <=, ==, !=, is
if a>b:
pass
elif a>d: # switch-case
pass
else:
pass
# + [markdown] colab_type="text" id="Q-17W2X1uJWG"
# ### Functions
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="PrUxV1iruIYl" outputId="1e08cfae-3913-4596-dd98-67ffd4c59085"
def function_name(x):
return x+1
print(function_name(0))
print(function_name(x=5))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="oFDTEkrtuahF" outputId="74168d67-e752-4a0f-aa6d-7078515f6771"
def function_name2(x):
pass
print(function_name2(0))
# + [markdown] colab_type="text" id="xH1tOmVfu47Q"
# ### Extra
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="do82JWxvu0dz" outputId="7a9e8813-b212-44e3-9937-30b9cf5b3248"
import math
print(math.pi)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} colab_type="code" id="DKSNr8xxvFSw" outputId="958e1968-57b4-49d3-ff51-6aac8dcdf8f5"
import this
# + colab={} colab_type="code" id="HpcnUqkHvIKw"
# import antigravity (it opens https://xkcd.com/353/)
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="GK-22aRJvPwF" outputId="d3e209d8-9e5f-465d-cb40-adc1540c9717"
help(sum)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="I8HU1s2MvZBq" outputId="62aa8963-bc56-4f87-ad1c-c8f02fe5809d"
x=1
print(id(x))
print(type(x))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="JUaQHuXjvtIL" outputId="30e3876f-021c-43a5-d90b-317f2a72bab8"
print(type(type(x)))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="JAThlWzLv__m" outputId="63690db4-8b55-4044-fb8d-d4e049a7a269"
print(type(type(type(x))))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="IC1-pFWawHZ0" outputId="3e80f83f-152f-4494-af79-7d6b97767bc9"
print("xxx", "aaa", 1, 4, end=" ")
print("yyy")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="2JmsBbsowRys" outputId="714ba859-51ae-4c61-d3da-d1c3f7696ae5"
print(f"x={x+1} and y={y:02} and pi={math.pi:1.4}")
# + colab={} colab_type="code" id="5WBNG0FcwtoB"
| 01_python_introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="gAgYq0u1tKZQ"
# ## Welcome to my Notebook on Tide Times.
# With this notebook I hope to answer the following question:
#
# - Is there a formula/algorithm that can predict future tides?
#
# For this I have data from multiple locations, wrangled using an app I created in C#. Here I am going to use the Dover record (eleven years of hourly measurements) to hopefully answer my question.
# + colab={"base_uri": "https://localhost:8080/"} id="2BorJRfCtIst" outputId="654b2372-e4e2-4ade-f6b4-69dfb3b7b41b"
# !wget --no-check-certificate \
# https://raw.githubusercontent.com/xerscot/tidaltimesds/main/Files/CleanedData/Dover11Years.csv \
# -O /tmp/Dover11Years.csv
# + id="Bm0fL4tMuBYX"
import datetime
import csv
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import tensorflow.keras as keras
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="JaRRDui4tgY5"
df = pd.read_csv('/tmp/Dover11Years.csv', parse_dates = [['Date','Time']])
df['Date_Time'] = pd.to_datetime(df['Date_Time'])
df = df.set_index('Date_Time')
df.index.freq = "H"
date_time = df.index
timestamp_s = date_time.map(datetime.datetime.timestamp)
df['TimeStamp'] = timestamp_s
del df['Location']
del df['IsCorrupt']
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="VvfRrZMLT4Oi" outputId="31a5f2fb-bd4d-4c9f-b9b3-3a688eff8210"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="VL-YRl3aT3Zr" outputId="64d0fcc9-f6f3-4066-b6e4-2946b1eff880"
df.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="ngBSFenLTUMD" outputId="1f457a57-81aa-4494-ccde-610355fdf0da"
df.describe().transpose()
# + colab={"base_uri": "https://localhost:8080/"} id="aKwubI7uTh_i" outputId="6cefd58d-ab0f-4af7-f7d0-160ad4f18121"
df.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="sHVNETRvVN_D" outputId="1c750b02-a92f-458a-c591-9d06b771fcb8"
df.index.min(), df.index.max()
# + colab={"base_uri": "https://localhost:8080/"} id="q-HiB_KFWUIr" outputId="3961947e-2efc-4688-a1d3-5a447297ac4f"
print(df.index.freq)
# + id="_OoBIpFzZJAT"
def plot_data(time, series, format="-", start=0, end=None, title=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Millimeters")
plt.title(title)
plt.grid(True)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="jdzDEOTFZOAi" outputId="39465b77-8359-4f09-ce14-662629c0d93c"
# 2010, 2011, 2014, 2015, 2017 are the cleanest
anual_data = df["2017"]
meters = anual_data["Height"]
time_step = anual_data.index
series = np.array(meters)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_data(time, series, title="Height of sea water over 2017")
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="KsxmZuBBaTpb" outputId="d848f780-4523-459f-df0c-7fdd9cd064df"
day_data = df["2017-06-21"]
meters = day_data["Height"]
time_step = day_data.index
series = np.array(meters)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_data(time, series, title="Height of sea water for June 21 2017")
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="Z6kDpXTzWyF9" outputId="f9efb032-d3e3-49c7-d606-aba040fea0a5"
fft = tf.signal.rfft(anual_data['Height'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(anual_data['Height'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 9000000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
# + id="eojJmbH7XSme"
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
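# The sin/cos features map each timestamp onto a unit circle, so values exactly one day apart coincide. A self-contained check on synthetic hourly timestamps (not the Dover data):

```python
import numpy as np

day = 24 * 60 * 60  # seconds per day

# Hourly timestamps covering two days
ts = np.arange(0, 2 * day, 3600, dtype=float)
day_sin = np.sin(ts * (2 * np.pi / day))
day_cos = np.cos(ts * (2 * np.pi / day))

# Points exactly 24 hours apart land on the same spot of the circle
print(np.allclose(day_sin[:24], day_sin[24:48]))  # → True
# sin^2 + cos^2 = 1 everywhere, so the feature pair has constant magnitude
print(np.allclose(day_sin**2 + day_cos**2, 1.0))  # → True
```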
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="-xPiqhmvXcqv" outputId="f7c069ff-198e-40aa-d9ba-4641575426d9"
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
# + id="UbBJSBWlbCDt"
column_indices = {name: i for i, name in enumerate(df.columns)}
#n = len(df)
train_df = df["2010":"2011"]
train_df.reset_index(drop=True, inplace=True)
val_df = df["2017"]
val_df.reset_index(drop=True, inplace=True)
test_df = df["2014":"2015"]
test_df.reset_index(drop=True, inplace=True)
num_features = df.shape[1]
# + id="ZAZB00KKbf4d"
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
# + id="IdIdmb_LaoQl"
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
# + colab={"base_uri": "https://localhost:8080/"} id="tbd-3X29bqBN" outputId="88073a60-c027-4f8a-c7e2-2f616c268b08"
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['Height'])
w1
# + colab={"base_uri": "https://localhost:8080/"} id="toiGcYU8byes" outputId="227e9529-561e-4199-81d9-cbc9fc35b410"
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['Height'])
w2
# + id="OeuYq2wjb_C9"
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
# + colab={"base_uri": "https://localhost:8080/"} id="eNuaegXYcEN4" outputId="d1947225-22ba-4459-d183-e714d152e3f5"
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
# + id="RYy-da9ocR4F"
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
where_are_NaNs = np.isnan(data)
data[where_are_NaNs] = 0
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
# + id="lgHTd89acqCV"
class Baseline(tf.keras.Model):
    def __init__(self, label_index=None):
        super().__init__()
        self.label_index = label_index

    def call(self, inputs):
        if self.label_index is None:
            return inputs
        result = inputs[:, :, self.label_index]
        return result[:, :, tf.newaxis]
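# The `Baseline` model above is a "persistence" forecast: its prediction for each step is simply the observed value of the target column at that step. The same indexing in plain NumPy (illustrative data):

```python
import numpy as np

# a batch of shape (batch, time, features); the target is feature column 0
inputs = np.array([[[1.0, 9.0], [2.0, 8.0], [3.0, 7.0]]])

label_index = 0
# select the target column and restore a trailing feature axis,
# mirroring `result[:, :, tf.newaxis]` in the Keras model
prediction = inputs[:, :, label_index][:, :, np.newaxis]
# prediction has shape (1, 3, 1): one "forecast" per input step
```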
# + colab={"base_uri": "https://localhost:8080/"} id="3GSPQLmnc1l-" outputId="a86271c5-91c0-4997-b02d-dafd55780697"
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['Height'])
single_step_window
# + id="si_-Va5ezo5p"
@property
def train(self):
    return self.make_dataset(self.train_df)

@property
def val(self):
    return self.make_dataset(self.val_df)

@property
def test(self):
    return self.make_dataset(self.test_df)

@property
def example(self):
    """Get and cache an example batch of `inputs, labels` for plotting."""
    result = getattr(self, '_example', None)
    if result is None:
        # No example batch was found, so get one from the `.train` dataset
        result = next(iter(self.train))
        # And cache it for next time
        self._example = result
    return result

WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
# + colab={"base_uri": "https://localhost:8080/"} id="t3n1XIifcKI7" outputId="2495362a-40d3-4ff8-fe1b-9da92886a619"
for example_inputs, example_labels in single_step_window.train.take(1):
    print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
    print(f'Labels shape (batch, time, features): {example_labels.shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="b9uKfx1bcs1d" outputId="feae3820-b0bb-43b6-fca8-82e819f852b5"
baseline = Baseline(label_index=column_indices['Height'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
# + colab={"base_uri": "https://localhost:8080/"} id="dOiNXHalS9qq" outputId="91b43161-681e-45e3-f457-ae6a3464853e"
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['Height'])
wide_window
# + colab={"base_uri": "https://localhost:8080/"} id="iewYD3HiTMDB" outputId="59de43bf-cf41-47a8-96fe-ae6cfa610f3b"
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
# + id="sK07x-zeTaqL"
def plot(self, model=None, plot_col='Height', max_subplots=3):
    inputs, labels = self.example
    plt.figure(figsize=(12, 8))
    plot_col_index = self.column_indices[plot_col]
    max_n = min(max_subplots, len(inputs))
    for n in range(max_n):
        plt.subplot(max_n, 1, n+1)
        plt.ylabel(f'{plot_col} [normed]')
        plt.plot(self.input_indices, inputs[n, :, plot_col_index],
                 label='Inputs', marker='.', zorder=-10)
        if self.label_columns:
            label_col_index = self.label_columns_indices.get(plot_col, None)
        else:
            label_col_index = plot_col_index
        if label_col_index is None:
            continue
        plt.scatter(self.label_indices, labels[n, :, label_col_index],
                    edgecolors='k', label='Labels', c='#2ca02c', s=64)
        if model is not None:
            predictions = model(inputs)
            plt.scatter(self.label_indices, predictions[n, :, label_col_index],
                        marker='X', edgecolors='k', label='Predictions',
                        c='#ff7f0e', s=64)
        if n == 0:
            plt.legend()
    plt.xlabel('Time [h]')
WindowGenerator.plot = plot
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="edDbthboTOqx" outputId="4f1a56ee-3b5e-45dd-a4cb-4b0751febe1d"
wide_window.plot(baseline)
# + id="UTxV6Jr_T54j"
#linear = tf.keras.Sequential([tf.keras.layers.Flatten(),
# tf.keras.layers.Dense(units=1)
#])
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
# + colab={"base_uri": "https://localhost:8080/"} id="9ADQ1YCTT_HD" outputId="6310461e-8143-40be-cb25-0f1c2fef2c22"
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
# + id="pX1vj2SgUCT8"
MAX_EPOCHS = 20

def compile_and_fit(model, window, patience=2):
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=patience,
                                                      mode='min')
    optimizer = tf.optimizers.Adam()
    model.compile(loss=tf.losses.MeanSquaredError(),
                  optimizer=optimizer,
                  metrics=[tf.metrics.MeanAbsoluteError()])
    history = model.fit(window.train, epochs=MAX_EPOCHS,
                        validation_data=window.val,
                        callbacks=[early_stopping])
    return history
# + colab={"base_uri": "https://localhost:8080/"} id="q16h4cMFUHCq" outputId="5855df7a-cea8-457e-d669-755fd370f4d2"
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
# + colab={"base_uri": "https://localhost:8080/"} id="1Su9XFv5a6Yj" outputId="69099a4e-d516-466a-b219-455552ba8e17"
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="KycWix-sa9Dy" outputId="9a735ec4-85e9-4930-8df7-d279ea492a01"
wide_window.plot(linear)
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="e0eyN89tbMEU" outputId="82651c5a-e235-4c91-b86f-752ad7be8c0a"
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
# + colab={"base_uri": "https://localhost:8080/"} id="5Kq6j5jfrjWD" outputId="fe4c380a-5e58-4923-a5d8-4c07c40775f1"
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['Height'])
conv_window
# + id="R6VIbu9jrcZG"
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
# + colab={"base_uri": "https://localhost:8080/"} id="GHebposfrqlc" outputId="d6e8ab50-dd1a-4446-8fb0-82b25f3be819"
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
# + colab={"base_uri": "https://localhost:8080/"} id="II0qWP7qruay" outputId="854b9232-7248-4d5c-a15f-be3d41b29cf9"
history = compile_and_fit(conv_model, conv_window)
#IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
# + colab={"base_uri": "https://localhost:8080/"} id="lTwSPeaXsopb" outputId="80446b67-4df0-4e4c-c04d-cb7869f41a1b"
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['Height'])
wide_conv_window
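# The width arithmetic above reflects how a `Conv1D` layer without padding shortens a sequence: an input of length L convolved with a kernel of width K at stride 1 yields L - (K - 1) outputs. A quick check with the numbers used here:

```python
CONV_WIDTH = 3
LABEL_WIDTH = 24

def conv_output_length(input_length, kernel_size):
    # 'valid' (no-padding) 1-D convolution with stride 1
    return input_length - (kernel_size - 1)

INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
# 26 input steps shrink back to exactly 24 outputs after the width-3 convolution
assert conv_output_length(INPUT_WIDTH, CONV_WIDTH) == LABEL_WIDTH
```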
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="SbTW3KxssjsD" outputId="1b35236f-b268-46a3-ac46-9b3daca6415a"
wide_conv_window.plot(conv_model)
# + id="50w1aGMatLIe"
layer = tf.keras.layers.Dense(units=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
layer
])
# + colab={"base_uri": "https://localhost:8080/"} id="fKea1PyhtPpT" outputId="e61b5b0a-fa0b-43c1-951c-cd0e4378183c"
history = compile_and_fit(lstm_model, wide_window)
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="x62mh6w7te9a" outputId="570e5765-3a61-4df7-e86b-8a34fc3bde8e"
wide_window.plot(lstm_model)
# + colab={"base_uri": "https://localhost:8080/", "height": 565} id="8ttQuOcWtxJ7" outputId="e90c8d7c-4b31-4444-d4e4-3e3ec5fd97bd"
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window
# + colab={"base_uri": "https://localhost:8080/"} id="CNEHCuCkvzMm" outputId="4be9897a-90a7-46d1-e22a-3f92748f143e"
layer.get_weights()
# + colab={"base_uri": "https://localhost:8080/"} id="yT53IT_u7pWt" outputId="dfd78b8b-557f-4286-f255-4dee48ef505a"
right_now = datetime.datetime.now()
timestamp_now = datetime.datetime.timestamp(right_now)
print(timestamp_now)
# Notebooks/TideTimesQ2b.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('pytorch')
# language: python
# name: python3
# ---
# # BPTI analysis
# ## Setup
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
# ## Load Data
# +
from stateinterpreter import load_dataframe
kBT = 2.5
BPTI_data_path = '../../data/md_datasets/BPTI-unbiased/'
colvar_file = BPTI_data_path + 'deeptica/v2-lag100-nodes1024-256-64-3/COLVAR.csv'
traj_dict = {
'trajectory' : BPTI_data_path+'trajectory.trr',
'topology' : BPTI_data_path+'topology.pdb'
}
start,stop,stride=0,None,1
colvar = load_dataframe(colvar_file, start, stop, stride) #.drop(['idx', 'time'], axis=1)
colvar.head()
# -
# ## Compute descriptors
# +
from stateinterpreter.utils import load_trajectory
from stateinterpreter.descriptors import compute_descriptors
traj = load_trajectory(traj_dict,start,stop,stride)
descriptors, feats_info = compute_descriptors( traj, descriptors = ['dihedrals', 'disulfide', 'hbonds_distances', 'hbonds_contacts'] )
# -
# Compute SASA contributions to each residue with mdtraj
# +
#calculate descriptors
import mdtraj as md
mode = 'residue'
traj = md.load(traj_dict['trajectory'], top=traj_dict['topology'], stride=stride)
sasa = md.shrake_rupley(traj,mode=mode)
print(sasa.shape)
names = []
#feats_info = {}
table, bonds = traj.top.to_dataframe()
if mode == 'atom':
    for idx in range(traj.n_atoms):
        name = 'SASA '+str(traj.top.atom(idx))
        names.append(name)
        feats_info[name] = {
            'atoms': [idx],
            'group': table["resName"][idx] + table["resSeq"][idx].astype("str")
        }
elif mode == 'residue':
    for res in traj.top.residues:
        name = 'SASA '+str(res)
        names.append(name)
        feats_info[name] = {
            #'atoms': [idx],
            'group': str(res)
        }
sasa = pd.DataFrame(sasa, columns=names)
descriptors = pd.concat([descriptors,sasa],axis=1)
# -
descriptors
# ## Identify metastable states (hierarchical search)
# +
from stateinterpreter.utils.hierarchical import state_tree,hierarchy_pos,generate_state_labels_from_tree
cvs = colvar[['DeepTICA 1','DeepTICA 2','DeepTICA 3']].values
bandwidth = 0.1
logweights= np.ones(len(colvar))
# build tree of metastable states
T = state_tree(cvs, bandwidth , logweights=logweights)
# draw tree
pos = hierarchy_pos(T)
options = {
"with_labels": True,
"font_size": 16,
"node_size": 1000,
"node_color": "white",
"edgecolors": "white",
"linewidths": 1,
"width": 1,
}
nx.draw(T, pos, **options)
# generate state_labels for classification
labels_list = generate_state_labels_from_tree(T, root='MD', fes_threshold=1 )
# -
# ## Train classifier
# Training helper function
# +
from stateinterpreter.utils.plot import paletteFessa,plot_fes
from stateinterpreter.utils.metrics import get_basis_quality,get_best_reg
from itertools import cycle
def train(cv_list, descriptors, states_labels=None, mask=None, merge_states=None, states_subset=None, color=None):
    # apply mask if given
    if mask is not None:
        cv = colvar[mask].reset_index()
        desc = descriptors[mask].reset_index()
        logw = None
        if logweights is not None:
            logw = logweights[mask]
            if type(logw) == pd.DataFrame:
                logw = logw.reset_index()
    else:
        cv = colvar
        desc = descriptors
        logw = logweights
    ### identify states if states_labels is None
    if states_labels is None:
        states_labels = identify_metastable_states(cv, cv_list, kBT, bandwidth, logweights=logw, fes_cutoff=kBT, gradient_descent_iterates=gradient_descent_iterates)
    # merge states if requested
    if merge_states is not None:
        for pair in merge_states:
            states_labels.loc[states_labels['labels'] == pair[1], 'labels'] = pair[0]
    # classify
    sample_obj, features_names = prepare_training_dataset(desc, states_labels, num_samples, regex_filter=select_feat, states_subset=states_subset)
    classifier = Classifier(sample_obj, features_names)
    classifier.compute(regularizers, max_iter=10000)  #, groups=groups)
    # get best reg
    reg, acc, num = get_best_reg(classifier)
    print(f'log_10 (lambda) : {np.round(np.log10(reg),3)}')
    print(f'Accuracy : {np.round(acc*100,0)} %')
    print(f'No. features : {num}')
    # get basis quality
    quality = get_basis_quality(classifier)
    print(f'Basis quality : {quality} (lower the better)')
    # count states
    num_states = len(classifier.classes)
    num_histo = num_states if num_states > 2 else 1
    # classes names
    relevant_feat = classifier.feature_summary(reg)
    classes_names = classifier.classes
    fig = plt.figure(figsize=(13.5, 3), constrained_layout=True)
    gs = matplotlib.gridspec.GridSpec(num_histo, 3, figure=fig)
    ax1 = fig.add_subplot(gs[:, 0])
    ax2 = fig.add_subplot(gs[:, 1])
    axs_histo = []
    for i in range(num_histo):
        axs_histo.append(fig.add_subplot(gs[i, 2]))
    ax = ax1
    # COLOR
    if color is None:
        colors = cycle(iter(paletteFessa[::-1]))
        global global_states_no
        for i in range(global_states_no):
            next(colors)
        color = [next(colors) for i in range(5)]
    # PLOT FES
    cv_plot = cv[states_labels['labels'] != 'undefined']
    logw_plot = None
    if logw is not None:
        logw_plot = logw[states_labels['labels'] != 'undefined']
    plot_fes(cv_plot[cv_list], bandwidth, states_labels, logweights=logw_plot, num_samples=200, states_subset=states_subset, ax=ax, colors=color)
    # plot classifier
    ax = ax2
    _, ax_twin = plot_classifier_complexity_vs_accuracy(classifier, ax=ax)
    ax.axvline(np.log10(reg), linestyle='dotted', color='k')
    ax.text(np.log10(reg)+0.02, acc+.05, f'{np.round(acc*100,1)}%', fontsize=12, color='fessa0')
    ax_twin.text(np.log10(reg)+0.02, num+0.5, f'{num}', fontsize=12, color='fessa1')
    # print features
    classifier.print_selected(reg)
    plot_histogram_features(desc, states_labels, classes_names, relevant_feat, axs=axs_histo, height=1, colors=color)
    # INCREASE COUNTER
    global_states_no += num_states
    return (cv_list, states_labels, features_names, classifier, reg)
# -
# Choose descriptors
# +
import matplotlib
from stateinterpreter import identify_metastable_states,prepare_training_dataset,Classifier
from stateinterpreter.utils.plot import plot_classifier_complexity_vs_accuracy,plot_histogram_features
regularizers = np.geomspace(0.01, 1, 51)
num_samples = 10000 # per metastable state
# +
# HBONDS
feat_type = 'hbonds'
select_feat = 'HB_C'
filter_descriptors = True
# Select only contacts which are greater than 0.5 at least once for these states
if filter_descriptors:
    selected = ((descriptors.filter(regex=select_feat) > 0.5).sum() > 0)
    desc = descriptors[selected.index[selected]]
    print('Filtering H-bonds:', desc.shape)
else:
    desc = descriptors.filter(regex=select_feat)
    print('Descriptors:', desc.shape)
desc.head()
# +
# ANGLES
feat_type = 'angles'
select_feat = 'sin_|cos_'
filter_descriptors = False
desc = descriptors.filter(regex=select_feat)
print('Descriptors:',desc.shape)
desc.head()
# -
# Train a classifier for each set of labels in `labels_list`
# +
color_list = [[paletteFessa[6],paletteFessa[5],'grey'],[paletteFessa[0],paletteFessa[3],'grey'],[paletteFessa[4],paletteFessa[1],'grey']]
global_states_no = 0
results_list = []
for i, states_labels in enumerate(labels_list):
    states_subset = states_labels['labels'].unique()
    states_subset = states_subset[states_subset != 'undefined']
    cv_level = states_subset[0].count('.') + 1
    cv_list = [f'DeepTICA {cv_level}']
    print(cv_list)
    result = train(cv_list, desc, states_labels, color=color_list[i])
    results_list.append(result)
plt.show()
# -
# ## Visualize features
# Here we select the results of the first classification and plot samples of the protein along with the relevant features highlighted
(cv_list,states_labels,features_names,classifier,reg) = results_list[0]
relevant_feat = classifier.feature_summary(reg)
from importlib import reload
import stateinterpreter
reload(stateinterpreter)
reload(stateinterpreter.utils.visualize)
# +
import nglview
import matplotlib
from stateinterpreter.utils.visualize import compute_residue_score,visualize_residue_score
n_residues = traj.n_residues
residue_score = compute_residue_score(classifier,reg,feats_info,n_residues)
states_subset = states_labels['labels'].unique()
states_subset = states_subset[states_subset != 'undefined' ]
classes_names = classifier.classes
view = visualize_residue_score(traj, states_labels, classes_names, residue_score, representation='cartoon', relevant_features=relevant_feat, features_info=feats_info)#,state_frames=[82957,0])
view
# -
# ## Plot RMSD
# +
# compute RMSD
import mdtraj as md
# Compute RMSD
traj = load_trajectory(traj_dict,stride=stride)
traj_file = traj_dict["trajectory"]
topo_file = traj_dict["topology"] if "topology" in traj_dict else None
traj = md.load(traj_file, top=topo_file, stride=stride)
file_ref = BPTI_data_path+'RMSD-reference.pdb'
rmsd_ref = md.load(topo_file)
df, _ = traj.top.to_dataframe()
# CA RMSD
rmsd = md.rmsd(traj,rmsd_ref,atom_indices=df [df['name'] == 'CA'].index)
# +
# select all samples from states
labels_list = generate_state_labels_from_tree(T,root='MD',fes_threshold=10)
t = np.arange(len(rmsd))*0.010/1000
N = 10
centimeters = 1/2.54
fig,axs = plt.subplots(3,1,dpi=150,figsize=(8*centimeters,4*centimeters*len(labels_list)),sharex=True,sharey=True)
for i, states_labels in enumerate(labels_list):
    states_subset = sorted(states_labels['labels'].unique())
    ax = axs[i]
    ax.set_ylim(0.5, 4)
    ax.set_xlim(0, 1.005)
    fontsize = 'medium'
    if i == len(labels_list)-1:
        ax.set_xlabel(r'Time [ms]', labelpad=-0.5, fontsize=fontsize)
    ax.set_ylabel(r'RMSD [$\AA$]', fontsize=fontsize)
    ax.set_yticks([1, 2, 3])
    ax.set_title(f'DeepTICA {i+1}', fontsize=fontsize)
    colors = iter(color_list[i])
    for label in states_subset:
        mask = (states_labels['labels'] == label)
        t_i = t[mask]
        rmsd_i = rmsd[mask]*10
        if label == 'undefined':
            c = 'silver'
            a = 0.05
            l = None
        else:
            c = next(colors)
            a = 0.5
            l = label
        ax.scatter(t_i[1::N], rmsd_i[1::N], facecolors=c, s=1, marker='o', edgecolors='none', alpha=a, rasterized=True)
        ax.scatter(-1, 0, rasterized=True, s=10, label=l, marker='o', edgecolors='none', alpha=1, facecolors=c)
    ax.legend(frameon=False, loc='upper left', ncol=2)
plt.tight_layout()
plt.show()
# -
# analysis/2_bpti.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tabulation API Example: Nonadiabatic Flamelet Models
#
# _This demo is part of Spitfire, with [licensing and copyright info here.](https://github.com/sandialabs/Spitfire/blob/master/license.md)_
#
# _Highlights_
# - using `build_nonadiabatic*` methods in Spitfire to build nonadiabatic equilibrium, Burke-Schumann, and SLFM models
#
# This example builds nonadiabatic flamelet models and compares profiles of the temperature, mass fractions, and enthalpy defect of several nonadiabatic flamelet tabulation techniques for n-heptane chemistry.
# +
from spitfire import (ChemicalMechanismSpec,
FlameletSpec,
build_nonadiabatic_defect_eq_library,
build_nonadiabatic_defect_bs_library,
build_nonadiabatic_defect_transient_slfm_library,
build_nonadiabatic_defect_steady_slfm_library)
import matplotlib.pyplot as plt
import numpy as np
mech = ChemicalMechanismSpec(cantera_xml='heptane-liu-hewson-chen-pitsch-highT.xml', group_name='gas')
pressure = 101325.
air = mech.stream(stp_air=True)
air.TP = 300., pressure
fuel = mech.stream('TPX', (485., pressure, 'NXC7H16:1'))
flamelet_specs = {'mech_spec': mech, 'oxy_stream': air, 'fuel_stream': fuel, 'grid_points': 128}
# -
l_eq = build_nonadiabatic_defect_eq_library(FlameletSpec(**flamelet_specs), verbose=False)
l_bs = build_nonadiabatic_defect_bs_library(FlameletSpec(**flamelet_specs), verbose=False)
l_ts = build_nonadiabatic_defect_transient_slfm_library(FlameletSpec(**flamelet_specs),
verbose=True,
diss_rate_values=np.array([1e-2, 1e-1, 1e0, 1e1, 1e2]),
diss_rate_log_scaled=True)
l_ss = build_nonadiabatic_defect_steady_slfm_library(FlameletSpec(**flamelet_specs),
verbose=True,
diss_rate_values=np.array([1e-2, 1e-1, 1e0, 1e1, 1e2]),
diss_rate_log_scaled=True,
solver_verbose=False,
h_stoich_spacing=1.e-3)
# +
c_ts = 'SpringGreen'
c_ss = 'Indigo'
c_eq = 'DodgerBlue'
c_bs = 'DarkOrange'
ichi1 = 1
ichi2 = 4
fig, axarray = plt.subplots(1, 6, sharex=True, sharey=True)
axarray[0].plot(l_eq.mixture_fraction_values, l_eq['enthalpy_defect'][:, ::2] * 1e-6, '-.', color=c_eq)
axarray[1].plot(l_bs.mixture_fraction_values, l_bs['enthalpy_defect'][:, ::2] * 1e-6, ':', color=c_bs)
axarray[2].plot(l_ts.mixture_fraction_values, l_ts['enthalpy_defect'][:, ichi1, ::4] * 1e-6, '-', color=c_ts)
axarray[3].plot(l_ts.mixture_fraction_values, l_ts['enthalpy_defect'][:, ichi2, ::4] * 1e-6, '-', color=c_ts)
axarray[4].plot(l_ss.mixture_fraction_values, l_ss['enthalpy_defect'][:, ichi1, ::4] * 1e-6, '--', color=c_ss)
axarray[5].plot(l_ss.mixture_fraction_values, l_ss['enthalpy_defect'][:, ichi2, ::4] * 1e-6, '--', color=c_ss)
axarray[0].set_ylabel('enthalpy defect (MJ/kg)')
axarray[0].set_title('equilibrium')
axarray[1].set_title('Burke-Schumann')
axarray[2].set_title('transient SLFM chi_2')
axarray[3].set_title('transient SLFM chi_4')
axarray[4].set_title('steady SLFM chi_2')
axarray[5].set_title('steady SLFM chi_4')
for ax in axarray:
    ax.set_xlim([0, 1])
    ax.grid()
    ax.set_xlabel('$\\mathcal{Z}$')
fig.set_size_inches(16, 6)
plt.show()
fig, axarray = plt.subplots(1, 6, sharex=True, sharey=True)
axarray[0].plot(l_eq.mixture_fraction_values, l_eq['temperature'][:, ::2], '-.', color=c_eq)
axarray[1].plot(l_bs.mixture_fraction_values, l_bs['temperature'][:, ::2], ':', color=c_bs)
axarray[2].plot(l_ts.mixture_fraction_values, l_ts['temperature'][:, ichi1, ::4], '-', color=c_ts)
axarray[3].plot(l_ts.mixture_fraction_values, l_ts['temperature'][:, ichi2, ::4], '-', color=c_ts)
axarray[4].plot(l_ss.mixture_fraction_values, l_ss['temperature'][:, ichi1, ::4], '--', color=c_ss)
axarray[5].plot(l_ss.mixture_fraction_values, l_ss['temperature'][:, ichi2, ::4], '--', color=c_ss)
axarray[0].set_ylabel('temperature (K)')
axarray[0].set_title('equilibrium')
axarray[1].set_title('Burke-Schumann')
axarray[2].set_title('transient SLFM chi_2')
axarray[3].set_title('transient SLFM chi_4')
axarray[4].set_title('steady SLFM chi_2')
axarray[5].set_title('steady SLFM chi_4')
for ax in axarray:
    ax.set_xlim([0, 0.4])
    ax.grid()
    ax.set_xlabel('$\\mathcal{Z}$')
fig.set_size_inches(16, 6)
plt.show()
fig, axarray = plt.subplots(1, 6, sharex=True, sharey=True)
axarray[0].plot(l_eq.mixture_fraction_values, l_eq['mass fraction C2H2'][:, ::2], '-.', color=c_eq)
axarray[1].plot(l_bs.mixture_fraction_values, l_bs['mass fraction C2H2'][:, ::2], ':', color=c_bs)
axarray[2].plot(l_ts.mixture_fraction_values, l_ts['mass fraction C2H2'][:, ichi1, ::4], '-', color=c_ts)
axarray[3].plot(l_ts.mixture_fraction_values, l_ts['mass fraction C2H2'][:, ichi2, ::4], '-', color=c_ts)
axarray[4].plot(l_ss.mixture_fraction_values, l_ss['mass fraction C2H2'][:, ichi1, ::4], '--', color=c_ss)
axarray[5].plot(l_ss.mixture_fraction_values, l_ss['mass fraction C2H2'][:, ichi2, ::4], '--', color=c_ss)
axarray[0].set_title('equilibrium')
axarray[1].set_title('Burke-Schumann')
axarray[2].set_title('transient SLFM chi_2')
axarray[3].set_title('transient SLFM chi_4')
axarray[4].set_title('steady SLFM chi_2')
axarray[5].set_title('steady SLFM chi_4')
axarray[0].set_ylabel('mass fraction C2H2')
for ax in axarray:
    ax.set_xlim([0, 1])
    ax.grid()
    ax.set_xlabel('$\\mathcal{Z}$')
fig.set_size_inches(16, 6)
plt.show()
# +
from mpl_toolkits.mplot3d import axes3d
from matplotlib.colors import Normalize
fig = plt.figure()
ax = fig.gca(projection='3d')
z = l_ts.mixture_fraction_grid[:, :, 0]
x = np.log10(l_ts.dissipation_rate_stoich_grid[:, :, 0])
for ih in range(0, l_ts.enthalpy_defect_stoich_npts, 6):
    dh = l_ts.enthalpy_defect_stoich_values[ih]
    ax.contourf(z, x, l_ts['temperature'][:, :, ih], offset=dh / 1.e6,
                cmap='inferno', levels=30, norm=Normalize(vmin=300, vmax=2400))
ax.set_zlim([0, 0.7])
ax.set_xlabel('$\\mathcal{Z}$')
ax.set_ylabel('$\\log_{10}\\chi_{\\rm st}$ (Hz)')
ax.set_zlabel('$\\gamma$ (MJ/kg)')
ax.set_zticks([-2.0, -1.5, -1.0, -0.5, 0.0])
ax.set_title('gas temperature (K)')
fig.set_size_inches(8, 8)
plt.show()
fig = plt.figure()
ax = fig.gca(projection='3d')
for ih in range(0, l_ts.enthalpy_defect_stoich_npts, 6):
    dh = l_ts.enthalpy_defect_stoich_values[ih]
    ax.contourf(z, x, l_ts['mass fraction OH'][:, :, ih], offset=dh / 1.e6,
                cmap='Oranges', levels=30, norm=Normalize(vmin=0, vmax=5e-3), alpha=0.8)
ax.set_zlim([0, 0.7])
ax.set_xlabel('$\\mathcal{Z}$')
ax.set_ylabel('$\\log_{10}\\chi_{\\rm st}$ (Hz)')
ax.set_zlabel('$\\gamma$ (MJ/kg)')
ax.set_zticks([-2.0, -1.5, -1.0, -0.5, 0.0])
ax.set_xlim([0, 0.2])
ax.set_title('mass fraction OH')
fig.set_size_inches(8, 8)
plt.show()
# -
fig = plt.figure()
ax = fig.gca(projection='3d')
z = l_ts.mixture_fraction_grid[:, 0, :]
g = l_ts.enthalpy_defect_stoich_grid[:, 0, :] / 1.e6
for ichi in range(0, l_ts.dissipation_rate_stoich_npts):
    lchi = np.log10(l_ts.dissipation_rate_stoich_values[ichi])
    ax.contourf(z, g + lchi/2, l_ts['temperature'][:, ichi, :], offset=lchi,
                cmap='inferno', levels=30, norm=Normalize(vmin=300, vmax=2400), alpha=0.8)
ax.set_zlim([0, 0.7])
ax.set_xlabel('$\\mathcal{Z}$')
ax.set_ylabel('$\\gamma$ (MJ/kg) + $\\log_{10}\\chi_{\\rm st}/2$ (Hz)')
ax.set_zlabel('$\\log_{10}\\chi_{\\rm st}$ (Hz)')
ax.set_zticks([-2, -1, 0, 1, 2])
ax.set_title('gas temperature (K)')
fig.set_size_inches(8, 8)
plt.show()
# docs/source/demo/flamelet/example_nonadiabatic_flamelets.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# **Course website**: https://github.com/leomiquelutti/UFU-geofisica-1
#
# **Note**: This notebook is part of the course "Geofísica 1" of the Geology program of the
# [Universidade Federal de Uberlândia](http://www.ufu.br/).
# All content can be freely used and adapted under the terms of the
# [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
#
# 
#
# Special thanks to [<NAME>](www.leouieda.com) and the [Geosci group](http://geosci.xyz/)
# This document you are using is a [Jupyter notebook](http://jupyter.org/). It is an interactive document that mixes text (like this), code (like below), and the results of running the code (numbers, text, figures, videos, etc.).
# # 2D gravity forward modeling
#
# ## Objectives
#
# * Better understand how forward modeling (simulation) of data works
# * Understand how the geophysical response changes with the properties of the model
# The cell below _prepares_ the environment
import numpy
from fatiando.gravmag.interactive import Moulder
# The cell below runs the simulation
# %matplotlib qt
area = (0, 100000, 0, 5000)
xp = numpy.arange(0, 100000, 1000)
zp = numpy.zeros_like(xp)
app = Moulder(area, xp, zp)
app.run()
# notebooks/INTRO_GravMag_Interactive_2D_forward_modeling_with_polygons.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %pylab
# %matplotlib inline
# ### Part 1
fn = 'd9p1_test.txt'
def read_table(fn):
    table = []
    with open(fn) as data:
        for row in data:
            row_arr = [int(i) for i in row.strip()]
            table += [row_arr]
    return table
def assess_risk(table):
    nrow, ncol = np.shape(table)
    lowrisk = 0
    for i in range(nrow):
        for j in range(ncol):
            try:
                if table[i][j] >= table[i][j+1]:
                    continue
            except IndexError:
                pass
            if j != 0:
                try:
                    if table[i][j] >= table[i][j-1]:
                        continue
                except IndexError:
                    pass
            try:
                if table[i][j] >= table[i+1][j]:
                    continue
            except IndexError:
                pass
            if i != 0:
                try:
                    if table[i][j] >= table[i-1][j]:
                        continue
                except IndexError:
                    pass
            lowrisk += (table[i][j]+1)
    return lowrisk
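# The per-edge try/except checks above can also be avoided by padding the grid with a sentinel value higher than any height, so every cell has four neighbours — a compact alternative sketch of the same low-point computation:

```python
import numpy as np

def assess_risk_padded(table):
    """Sum of (height + 1) over all strict local minima, using a padded
    grid instead of special-casing the edges."""
    grid = np.array(table)
    padded = np.pad(grid, 1, constant_values=10)  # heights are digits 0-9
    low = ((grid < padded[:-2, 1:-1]) &   # neighbour above
           (grid < padded[2:, 1:-1]) &    # neighbour below
           (grid < padded[1:-1, :-2]) &   # neighbour to the left
           (grid < padded[1:-1, 2:]))     # neighbour to the right
    return int((grid[low] + 1).sum())

example = [[2, 1, 9],
           [3, 9, 8],
           [9, 8, 5]]
# low points are the 1 (risk 2) and the 5 (risk 6), total 8
```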
fn = 'd9p1_test.txt'
table = read_table(fn)
#display(table)
assert assess_risk(table)==15
fn = 'd9p1.txt'
table = read_table(fn)
assess_risk(table)
fn = 'd9p1_test.txt'
def assess_risk_2(fn):
    table = read_table(fn)
    sz = np.shape(table)
    nrow, ncol = np.shape(table)
    basins = np.zeros(sz)
    for row in range(sz[0]):
        for col in range(sz[1]):
            if (row == 0) & (col == 0):  # top left corner
                if ((table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
            elif (row == 0) & (col == sz[1]-1):  # top right corner
                if ((table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row][col-1])):
                    basins[row][col] = 1
            elif (row == sz[0]-1) & (col == 0):  # bottom left corner
                if ((table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
            elif (row == sz[0]-1) & (col == sz[1]-1):  # bottom right corner
                if ((table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row][col-1])):
                    basins[row][col] = 1
            elif row == 0:  # top row
                if ((table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row][col-1]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
            elif row == sz[0]-1:  # bottom row
                if ((table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row][col-1]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
            elif col == 0:  # left column
                if ((table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
            elif col == sz[1]-1:  # right column
                if ((table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row][col-1])):
                    basins[row][col] = 1
            else:  # middle point
                if ((table[row][col] < table[row-1][col]) &
                        (table[row][col] < table[row+1][col]) &
                        (table[row][col] < table[row][col-1]) &
                        (table[row][col] < table[row][col+1])):
                    basins[row][col] = 1
    #display(basins)
    ix = np.where(basins == 1)
    return (np.array(table)+1)[ix].sum()
assert assess_risk_2('d9p1_test.txt')==15
print(f"The answer is {assess_risk_2('d9p1.txt')}")
# ### Part 2
# Day 9 - Smoke Basin.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="KZ_3fVYWU322"
# # Import necessary libraries
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73} id="lbIJU1G9UioZ" outputId="9b2e0ea0-c40b-4726-d0a0-2f04233d3cf6"
import os, sys, pdb, tqdm
import numpy as np
import random
# Plotting
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
# NLP related
import pandas as pd
import unicodedata, re, string
import nltk
# PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
import torch.autograd as autograd
# Show the same warnings only once
import warnings
warnings.filterwarnings(action='once')
torch.manual_seed(1)
# Install Kaggle library
# !pip install -q kaggle
# Colab library to upload files to notebook
from google.colab import files
# Upload kaggle API key file
uploaded = files.upload()
# + [markdown] id="ZsPj_NCfVGye"
# # 0. Data Exploration and Pre-processing
# + [markdown] id="xxIui1_XVPWz"
# ## Download and Read Files
# + id="FYodvieNVI9E"
# !mkdir ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# + colab={"base_uri": "https://localhost:8080/"} id="PhzkN2IVVz-X" outputId="a201026a-2e1f-4878-ca0b-7a4796314a7f"
# !mkdir ~/sentiment-analysis-on-movie-reivews
# + colab={"base_uri": "https://localhost:8080/"} id="lf9ylK_kAlzd" outputId="5323d359-26ed-4a17-81a0-71b9c16da82e"
# !pip show kaggle
# + colab={"base_uri": "https://localhost:8080/"} id="p7Lg-HwDAdCN" outputId="f9d26655-1fd0-472d-e0db-9edba8d81101"
# !kaggle competitions download -c sentiment-analysis-on-movie-reviews -p sentiment-analysis-on-movie-reviews
# + colab={"base_uri": "https://localhost:8080/", "height": 224} id="3ylIwDPoWl4y" outputId="a1aa6570-edf5-4b19-ff69-8d5e34263e6e"
DATA_ROOT = 'sentiment-analysis-on-movie-reviews/'
df_train = pd.read_csv(os.path.join(DATA_ROOT, 'train.tsv.zip'), sep="\t")
df_test = pd.read_csv(os.path.join(DATA_ROOT,'test.tsv.zip'), sep="\t")
print(df_train.shape,df_test.shape)
df_train.head()
# + [markdown] id="ezwZjTCKaylQ"
# ## Data Visualization
# + colab={"base_uri": "https://localhost:8080/", "height": 438} id="5EK6G_BIWqeZ" outputId="1d02fb84-c3b3-4604-9bc9-119da3c2c5b7"
dist = df_train.groupby(["Sentiment"]).size()
fig, ax = plt.subplots(figsize=(8,5), dpi=80)
ax.set_title('Sentiment Distribution')
sns.barplot(x=dist.index, y=dist.values);
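# The class counts above can also be sanity-checked without pandas; a minimal
# sketch using collections.Counter on a hypothetical list of sentiment labels:

```python
from collections import Counter

# Hypothetical sentiment labels (0 = negative ... 4 = positive)
labels = [2, 3, 2, 1, 4, 2, 0, 3, 2, 1]

# Count occurrences per class, like df_train.groupby(["Sentiment"]).size()
dist = Counter(labels)
classes = sorted(dist)
counts = [dist[c] for c in classes]
print(classes, counts)  # [0, 1, 2, 3, 4] [1, 2, 4, 2, 1]
```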
# + [markdown] id="-YJMJp0La5CS"
# ## Data Pre-processing
# + colab={"base_uri": "https://localhost:8080/"} id="IVKcPb4iaZya" outputId="d42bc6ce-c43b-452c-a2d4-7e625a20fad4"
def remove_non_ascii(words):
"""Remove non-ASCII characters from list of tokenized words"""
new_words = []
for word in words:
new_word = unicodedata.normalize('NFKD', word).encode('ascii', 'ignore').decode('utf-8', 'ignore')
new_words.append(new_word)
return new_words
def to_lowercase(words):
"""Convert all characters to lowercase from list of tokenized words"""
new_words = []
for word in words:
new_word = word.lower()
new_words.append(new_word)
return new_words
def remove_punctuation(words):
"""Remove punctuation from list of tokenized words"""
new_words = []
for word in words:
new_word = re.sub(r'[^\w\s]', '', word)
if new_word != '':
new_words.append(new_word)
return new_words
def remove_numbers(words):
"""Remove all interger occurrences in list of tokenized words with
textual representation
"""
new_words = []
for word in words:
new_word = re.sub("\d+", "", word)
if new_word != '':
new_words.append(new_word)
return new_words
def remove_stopwords(words):
"""Remove stop words from list of tokenized words"""
new_words = []
for word in words:
if word not in stopwords.words('english'):
new_words.append(word)
return new_words
def stem_words(words):
"""Stem words in list of tokenized words"""
stemmer = LancasterStemmer()
stems = []
for word in words:
stem = stemmer.stem(word)
stems.append(stem)
return stems
def lemmatize_verbs(words):
"""Lemmatize verbs in list of tokenized words
(ate => eat, cars => car, ...)
"""
lemmatizer = WordNetLemmatizer()
lemmas = []
for word in words:
lemma = lemmatizer.lemmatize(word, pos='v')
lemmas.append(lemma)
return lemmas
def normalize(words):
"""Execute all the pre-processing steps"""
words = remove_non_ascii(words)
words = to_lowercase(words)
words = remove_punctuation(words)
words = remove_numbers(words)
# words = remove_stopwords(words)
return words
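# As a quick check of the pipeline above, here is a self-contained sketch of the
# same normalization steps on toy input, using only the standard library:

```python
import re
import unicodedata

def normalize_tokens(words):
    """ASCII-fold, lowercase, strip punctuation and digits (same steps as normalize)."""
    out = []
    for w in words:
        w = unicodedata.normalize('NFKD', w).encode('ascii', 'ignore').decode('ascii')
        w = w.lower()
        w = re.sub(r'[^\w\s]', '', w)  # drop punctuation
        w = re.sub(r'\d+', '', w)      # drop digits
        if w:
            out.append(w)
    return out

print(normalize_tokens(['Café', 'Movie!', '2021', "isn't"]))  # ['cafe', 'movie', 'isnt']
```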
# + [markdown] id="WBCdwWBAbPih"
# ### Step 1: Tokenizing phrases
# + colab={"base_uri": "https://localhost:8080/"} id="KmspLtYAa7gk" outputId="da944aca-f048-4034-d98c-fee46d8699cc"
# Step 0, we need to download the 'punkt' sentence tokenizer for nltk
nltk.download('punkt')
# Add another column 'Words' to store tokenized phrases
df_train['Words'] = df_train['Phrase'].apply(nltk.word_tokenize)
# + [markdown] id="6NnkLauHbdH7"
# ### Step 2: Passing through prep functions defined above
# + colab={"base_uri": "https://localhost:8080/"} id="h1R1rg9jbSCS" outputId="79b7b23d-b64e-4eba-fc91-1564a37df72b"
df_train['Words'] = df_train['Words'].apply(normalize)
df_train['Words'].head()
# + [markdown] id="hcAo9WhGbkNp"
# ### Step 3: Creating a list of unique words to be used as dictionary for encoding
# + colab={"base_uri": "https://localhost:8080/"} id="19UHlp7abfoq" outputId="3c586a05-9088-43e7-88ca-81eb1a6279fa"
word_set = set()
for l in df_train['Words']: # Loop through each phrase
for e in l: # Loop through each word in each phrase
word_set.add(e) # Add to the vocabulary
# Index of word starts from 1, 0 is reserved for padding
word_to_int = {word: ii for ii, word in enumerate(word_set, 1)}
# Check if they are still the same length
assert len(word_set) == len(word_to_int)
print("Size of vocabulary: {}".format(len(word_set)))
# + colab={"base_uri": "https://localhost:8080/"} id="trAwxFRBbmFp" outputId="38db6471-9763-481f-8ffa-c4769a91672a"
# Tokenize each phrase
# The phrases are represented as a list of numerical indices
df_train['Tokens'] = df_train['Words'].apply(lambda l: [word_to_int[word] for word in l])
df_train['Tokens'].head()
# + [markdown] id="caI73q3wbqkJ"
# ### Step 4: Pad each phrase representation with zeros at the end of the sequence
# + colab={"base_uri": "https://localhost:8080/"} id="otA1n7E0bnsJ" outputId="dd8f505c-1030-4408-c147-3df2912a0eff"
# Get the length of longest phrase for padding
max_len = df_train['Tokens'].str.len().max()
print("The length of longest phrase: {}".format(max_len))
# Collect the token lists and labels for further work. The fixed-shape
# `features` array built below is the format the PyTorch utils expect;
# a ragged np.array of the raw token lists would raise an error in recent
# NumPy versions, so a plain list is used here.
all_tokens = list(df_train['Tokens'])
encoded_labels = np.array(df_train['Sentiment'])
# Create blank rows
features = np.zeros((len(all_tokens), max_len), dtype=int)
# for each phrase, add zeros at the end
for i, row in enumerate(all_tokens):
features[i, :len(row)] = row
#print first 3 rows of the feature matrix
print('\n')
print(features[:3])
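# Right-padding on toy data: each token list fills the front of a fixed-length
# row and the remainder stays zero, mirroring the feature-matrix loop above.

```python
token_lists = [[5, 2, 9], [7], [3, 3, 3, 3]]
pad_len = max(len(t) for t in token_lists)
padded = [row + [0] * (pad_len - len(row)) for row in token_lists]
print(padded)  # [[5, 2, 9, 0], [7, 0, 0, 0], [3, 3, 3, 3]]
```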
# + [markdown] id="1Iq15e3IbutJ"
# ### Step 5: Split the data into training, validation, and test data
# + colab={"base_uri": "https://localhost:8080/"} id="bu_Yv23nbskf" outputId="b8c68350-72e6-4d8e-94de-cfe91ed2db5a"
# <split_frac> of the data as training set
# (1 - <split_frac>) / 2 of the data as validation set
# (1 - <split_frac>) / 2 of the data as test set
split_frac = 0.8
split_idx = int(len(features)*split_frac)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
valid_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
valid_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## print out the shapes of resultant feature data
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(valid_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
# + [markdown] id="AcRm-PN-bzEc"
# # 1. Defining PyTorch DataLoader and Using Word2Vec
# + id="24sTEiVEbwXZ"
# Embedders require Long (int64) inputs; np.long was removed from NumPy
train_data = TensorDataset(torch.tensor(train_x), \
torch.tensor(train_y.astype(np.int64)))
valid_data = TensorDataset(torch.tensor(valid_x), torch.tensor(valid_y))
test_data = TensorDataset(torch.tensor(test_x), torch.tensor(test_y))
batch_size = 64
train_loader = DataLoader(train_data, batch_size = batch_size, shuffle = True)
valid_loader = DataLoader(valid_data, batch_size = batch_size, shuffle = True)
test_loader = DataLoader(test_data, batch_size = batch_size, shuffle = True)
# + colab={"base_uri": "https://localhost:8080/"} id="KsEIFFJeb4E2" outputId="add676b5-6f55-4aa6-8db5-5787fd1a43bd"
# How the embedder works
# Convert tokenized words into dense vectors with a learned embedding layer
embeds = nn.Embedding(len(word_to_int) + 1, 5) # vocab size (+1 for the 0 padding), 5 dimensional
lookup_tensor = torch.tensor([word_to_int["good"]], dtype=torch.long)
sample_embed = embeds(lookup_tensor)
print(sample_embed)
# + [markdown] id="08kCzBIUb-EN"
# # 2. Defining LSTM Model
# + id="S8_cbhzYb7wv"
class SentimentLSTM(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""Initialize the model by setting up the layers."""
super(SentimentLSTM, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# linear
self.fc = nn.Linear(hidden_dim, output_size)
# Softmax
self.softmax = nn.LogSoftmax(dim=1)
# self.softmax = nn.Softmax()
# Train on GPU or CPU
self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
def forward(self, x, hidden):
"""Perform a forward pass of our model on some input and hidden state."""
batch_size = x.size(0)
# embeddings and lstm_out
embeds = self.embedding(x.to(self.device).long())
lstm_out, hidden = self.lstm(embeds, hidden)
# transform lstm output to input size of linear layers
lstm_out = lstm_out.transpose(0,1)
lstm_out = lstm_out[-1]
out = self.dropout(lstm_out)
out = self.fc(out)
out = self.softmax(out)
return out, hidden
def init_hidden(self, batch_size):
"""Initializes hidden state"""
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(self.device),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(self.device))
return hidden
# + [markdown] id="VbmobNXTcBfr"
# # 3. Instantiate The Model
# + id="C1-AX3VQb_3l"
# Instantiate the model w/ hyperparams
vocab_size = len(word_to_int) + 1 # +1 for the 0 padding
output_size = 5 # 5 sentiment classes
embedding_dim = 400 # Dimension of embedding vectors
hidden_dim = 256 # Number of hidden states
n_layers = 2 # Depth of LSTM
net = SentimentLSTM(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
# Defining the loss function and optimizer; the model already applies
# LogSoftmax, so NLLLoss is the matching criterion (CrossEntropyLoss
# would apply log-softmax a second time)
criterion = nn.NLLLoss()
optimizer = optim.Adam(net.parameters(), lr = 0.001)
# + [markdown] id="7Q16eIqtcE38"
# # 4. Train the Model
# + colab={"base_uri": "https://localhost:8080/"} id="1iDCCp5gcDXN" outputId="963e8f39-22a9-4fb7-87af-762c38bc9597"
print_every = 500
counter = 0
epochs = 4 # validation loss increases from ~ epoch 3 or 4
clip = 5 # for gradient clip to prevent exploding gradient problem in LSTM/RNN
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.is_available():
net.to(device)
net.train()
# Train for <epochs> iterations
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# Loop by batches
for inputs, labels in train_loader:
counter += 1
# LSTM is sensitive to the batch size as initializing hidden states
# depends on the batch size
if inputs.shape[0] != batch_size:
continue
inputs, labels = inputs.to(device), labels.to(device)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, labels)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
if inputs.shape[0] != batch_size:
continue
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
inputs, labels = inputs.to(device), labels.to(device)
output, val_h = net(inputs, val_h)
val_loss = criterion(output, labels)
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
# + [markdown] id="vQ8Bha35c0pC"
# # 5. Analysis on Test Set
# + colab={"base_uri": "https://localhost:8080/"} id="w8sYi0NHcIBm" outputId="1f58fda9-37e3-4047-8f43-b7de4201c135"
# Get test data loss and accuracy
test_losses = [] # track loss
test_preds = []
test_targets = []
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
if inputs.shape[0] != batch_size:
continue
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
inputs, labels = inputs.to(device), labels.to(device)
# get predicted outputs
output, h = net(inputs, h)
# calculate loss
test_loss = criterion(output, labels)
test_losses.append(test_loss.item())
# convert output probabilities to predicted class
_, pred = torch.max(output,1)
test_preds.append(pred.cpu().numpy())
test_targets.append(labels.cpu().numpy())
# compare predictions to true label
correct_tensor = pred.eq(labels.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not torch.cuda.is_available() else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# Stack test predictions and targets
test_preds = np.concatenate(test_preds)
test_targets = np.concatenate(test_targets)
# accuracy over the evaluated test data (incomplete final batches were skipped above)
test_acc = num_correct/len(test_preds)
print("Test accuracy: {:.3f}".format(test_acc))
# + [markdown] id="AsJFOwpkc44E"
# ## Random Guess as Baseline
# + colab={"base_uri": "https://localhost:8080/"} id="Qok_0LBzc2NP" outputId="b44c17e7-9d66-4e2e-91d6-5516f2b53076"
# Randomly guess the sentiment score
correct = 0
for i, target in enumerate(test_y):
if target == random.randint(0,4):
correct += 1
print("Test random-guess accuracy: {:.3f}".format(correct / float(test_y.shape[0])))
# + [markdown] id="O6xXvEOjc8bF"
# ## Confusion Matrix
# + id="m_t4VgJgc6cu"
# Define a confusion matrix plotting function
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 467} id="W_1YGNMfdBIf" outputId="f804c5b6-a1ce-4f03-ab3d-22562017d0b0"
cm = confusion_matrix(test_targets, test_preds, labels=[0, 1, 2, 3, 4])
plot_confusion_matrix(cm,
normalize = False,
target_names = ['0', '1', '2', '3', '4'],
title = "Confusion Matrix")
# + [markdown] id="FppmwW0ZdD30"
# # Reference
# Sentiment Analysis: Rotten Tomato Movie Reviews: https://www.kaggle.com/oragula/sentiment-analysis-rotten-tomato-movie-reviews
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="lAAfC7judCq_" outputId="9937f8c2-1c4a-459f-805b-ada24bc96ce6"
# + id="U5MpekvejyuS"
| nbs/CS271P_Lab_5_LSTM_Model_for_Sentiment_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.5
# language: python
# name: python3
# ---
# +
from test_on_image import style_image
from train import train
dataset_path = "train_dataset/Faces"
style = "images/styles/beef700.jpg"
# -
train(dataset_path, style, epochs=100, image_size=160, style_size=700, batch_size=8, checkpoint_interval=320, sample_interval=160)
#checkpoint_interval=600, sample_interval=300, checkpoint_model='checkpoints/potato_2000.pth'
| Train Potato.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# In Issue 73, Ben correctly pointed out that many of our reactions have the KEGG ID as their name, with the more descriptive name stored in the notes. Here I will write a script that converts this automatically for the metabolites and reactions.
import cameo
import pandas as pd
import cobra.io
from cobra import Reaction, Metabolite
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
for rct in model.reactions:
if rct.name.startswith('R'):
try:
name = rct.notes['NAME']
name_new = name.replace("’","") #get rid of any apostrophe as it can screw with the code
split = name_new.split(';') #if there are two names assigned, we can split that and only take the first
name_new = split[0]
try: #if the kegg is also stored in the annotation we can remove the name and remove the note
anno = rct.annotation['kegg.reaction']
rct.name = name_new
del rct.notes['NAME']
except KeyError:
print (rct.id) #here print if the metabolite doesn't have the kegg in the annotation but only in the name
except KeyError: #if the metabolite doesn't have a name, just print the ID
print (rct.id)
else:
continue
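# The name-cleaning step used in the loop above, isolated as a small function
# for illustration (the sample reaction name below is hypothetical):

```python
def clean_name(raw):
    """Strip right single quotes and keep only the first of several ';'-separated names."""
    return raw.replace('\u2019', '').split(';')[0]

print(clean_name('maltose glucohydrolase; alpha-glucosidase'))  # maltose glucohydrolase
```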
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# Next I check the names of the above printed by hand to make sure they are still fine.
model.reactions.MALHYDRO.annotation['kegg.reaction'] = 'R00028'
model.reactions.MALHYDRO.name ='maltose glucohydrolase'
model.reactions.AKGDEHY.name = 'oxoglutarate dehydrogenase (succinyl-transferring)'
model.reactions.OMCDC.name = '3-isopropylmalate dehydrogenase'
model.reactions.DHORD6.name = 'dihydroorotate dehydrogenase'
model.reactions.DRBK.name = 'ribokinase'
model.reactions.TREPT.name = 'protein-Npi-phosphohistidine-sugar phosphotransferase'
model.reactions.AHETHMPPR.name = 'pyruvate dehydrogenase (acetyl-transferring)'
model.reactions.HC01435R.name = 'oxoglutarate dehydrogenase (succinyl-transferring)'
model.reactions.G5SADs.name = 'L-glutamate 5-semialdehyde dehydratase (spontaneous)'
model.reactions.MMTSAOR.name = 'L-Methylmalonyl-CoA Dehydrogenase'
model.reactions.ALCD4.name = 'Butanal dehydrogenase (NADH)'
model.reactions.ALCD4y.name = 'Butanal dehydrogenase (NADPH)'
model.reactions.get_by_id('4OT').name = 'hydroxymuconate tautomerase'
model.reactions.FGFTh.name = 'phosphoribosylglycinamide formyltransferase'
model.reactions.APTA1i.name = 'N-Acetyl-L-2-amino-6-oxopimelate transaminase'
model.reactions.IG3PS.name = 'Imidazole-glycerol-3-phosphate synthase'
model.reactions.RZ5PP.name = 'Alpha-ribazole 5-phosphate phosphatase'
model.reactions.UPP1S.name = 'Hydroxymethylbilane breakdown (spontaneous) '
model.reactions.ADOCBIK.name = 'Adenosyl cobinamide kinase'
model.reactions.R05219.id = 'P6AS'
model.reactions.ACBIPGT.name = 'Adenosyl cobinamide phosphate guanyltransferase'
model.reactions.ADOCBLS.name = 'Adenosylcobalamin 5-phosphate synthase'
model.reactions.HGBYR.name = 'hydrogenobyrinic acid a,c-diamide synthase (glutamine-hydrolysing)'
model.reactions.ACDAH.name = 'adenosylcobyric acid synthase (glutamine-hydrolysing)'
model.reactions.P6AS.name = 'precorrin-6A synthase (deacetylating)'
model.reactions.HBCOAH.name = 'enoyl-CoA hydratase'
model.reactions.HEPDPP.name = 'Octaprenyl diphosphate synthase'
model.reactions.HEXTT.name = 'Heptaprenyl synthase'
model.reactions.PPTT.name = 'Hexaprenyl synthase'
model.reactions.MANNHY.name = 'Manninotriose hydrolysis'
model.reactions.STACHY2.name = 'Stachyose hydrolysis'
model.reactions.ADOCBIP.name = 'adenosylcobinamide kinase'
model.reactions.PMDPHT.name = 'Pyrimidine phosphatase'
model.reactions.ARAT.name = 'aromatic-amino-acid transaminase'
model.metabolites.stys_c.annotation['kegg.glycan'] = 'G00278'
model.metabolites.stys_c.name = 'Stachyose'
model.reactions.MOD.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHOPT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MOD_4mop.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHTPPT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MOD_3mop.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHOBT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.CBMKr.name = 'carbamoyl-phosphate synthase (ammonia)'
model.reactions.DMPPOR.name = '4-hydroxy-3-methylbut-2-en-1-yl diphosphate reductase'
model.reactions.CHOLD.name = 'Choline dehydrogenase'
model.reactions.AMACT.name = 'beta-ketoacyl-[acyl-carrier-protein] synthase III'
model.reactions.HDEACPT.name = 'beta-ketoacyl-[acyl-carrier-protein] synthase I'
model.reactions.OXSTACPOR.name = 'L-xylulose reductase'
model.reactions.RMK.name = 'Rhamnulokinase'
model.reactions.RMPA.name = 'Rhamnulose-1-phosphate aldolase'
model.reactions.RNDR4.name = 'Ribonucleoside-diphosphate reductase (UDP)'
model.reactions.get_by_id('3HAD180').name = '3-hydroxyacyl-[acyl-carrier-protein] dehydratase (n-C18:0)'
#SAVE&COMMIT
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
| notebooks/43. Fix rct and met names.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Boston Housing Classification Logistic Regression
import sys
sys.path.append("..")
from pyspark.sql.types import BooleanType
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.sql.session import SparkSession
from pyspark.sql.functions import expr
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import DenseVector
from pyspark.mllib.evaluation import MulticlassMetrics
from helpers.path_translation import translate_to_file_string
inputFile = translate_to_file_string("../data/Boston_Housing_Data.csv")
# Spark session creation
spark = (SparkSession
.builder
.appName("BostonHousingClass")
.getOrCreate())
# DataFrame creation using an inferred schema
df = spark.read.option("header", "true") \
.option("inferSchema", "true") \
.option("delimiter", ";") \
.csv(inputFile) \
.withColumn("CATBOOL", expr("CAT").cast(BooleanType()))
print(df.printSchema())
featureCols = df.columns.copy()
featureCols.remove("MEDV")
featureCols.remove("CAT")
featureCols.remove("CATBOOL")
print(featureCols)
assembler = VectorAssembler(outputCol="features", inputCols=featureCols)
featureSet = assembler.transform(df)
# Build Labeled Point Feature Set
featureSetLP = (featureSet.select(featureSet.CAT, featureSet.features)
.rdd
.map(lambda row: LabeledPoint(row.CAT, DenseVector(row.features))))
# Prepare training and test data.
splits = featureSetLP.randomSplit([0.9, 0.1 ], 12345)
training = splits[0]
test = splits[1]
# Logistic regression
# Train the model
modelLRLB = LogisticRegressionWithLBFGS.train(training, numClasses=2, regParam=0, regType="l2", iterations=1000, corrections=10)
# Test the model
predictionAndLabels = test.map(lambda x : [float(modelLRLB.predict(x.features )), float(x.label) ])
print (predictionAndLabels.take(20))
# evaluate the result
metrics = MulticlassMetrics(predictionAndLabels)
print("Test Error LRLBFG =" , (1.0 - metrics.accuracy))
spark.stop()
| solutions/boston_housing_classification_log_reg_lbfgs_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="ZAGS4j8u0gdN"
# !nvidia-smi -L
# + id="ZAGS4aH_BPGN"
from google.colab import drive
drive.mount('/content/gdrive')
# + id="LtK7KvgqCI-k"
# Stopped mid level = 2? use continue
# Stopped mid level = 1 or 0? use upsample
lemode = 'primed' # 'ancestral','primed','continue','cutcontinue','upsample'
lemodel = '5b_lyrics' #5b_lyrics or '5b' or '1b_lyrics'
lecount = 3
lesample_length_in_seconds = 202
lesampling_temperature = .98616616
lehop = [.5,.5,.125] #default [.5,.5,.125], optimized [1,1,0.0625]
lepath = '/content/gdrive/MyDrive/songsample'
leprompt_length_in_seconds=10
leaudio_file = '/content/gdrive/MyDrive/song.wav'
lecut = 70 # used only on cutcontinue
transpose = [0,1,2] # used only on cutcontinue [0,1,2] = default, ex [1,1,1] all samples are copied from item 1
leexportlyrics = True
leprogress = True
leautorename = True
leartist = "blondie"
legenre = "pop rock"
lelyrics = """<NAME> was the son
of <NAME> of <NAME>, Yorkshire.
He was a lieutenant colonel
in the parliamentary army
and distinguished himself at
the siege of Denbigh Castle.
He was one of General Mytton's commissioners
for receiving
the surrender of the castle on 14 October 1646.
He was made governor of Denbigh and ruled
with a firm hand until the Restoration.
"""
lechunk_size = 16
lemax_batch_size = lecount
lelower_batch_size = lechunk_size
lelower_level_chunk_size = lechunk_size * 2
# + [markdown] id="H-8cvPn3CO4s"
# # 1 Sample
#
# + id="ZAGS4k1WCC_C"
if lemode=='ancestral':
leprompt_length_in_seconds=None
leaudio_file = None
###############################################################################
###############################################################################
codes_file=None
# !pip install git+https://github.com/openai/jukebox.git
##$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$#### autosave start
import os
from glob import glob
filex = "/usr/local/lib/python3.7/dist-packages/jukebox/sample.py"
fin = open(filex, "rt")
data = fin.read()
fin.close()
newtext = '''import fire
import os
from glob import glob
from termcolor import colored
from datetime import datetime
newtosample = True'''
data = data.replace('import fire',newtext)
newtext = '''starts = get_starts(total_length, prior.n_ctx, hop_length)
counterr = 0
x = None
for start in starts:'''
data = data.replace('for start in get_starts(total_length, prior.n_ctx, hop_length):',newtext)
newtext = '''global newtosample
newtosample = (new_tokens > 0)
if new_tokens <= 0:'''
data = data.replace('if new_tokens <= 0:',newtext)
newtext = '''counterr += 1
datea = datetime.now()
zs = sample_single_window(zs, labels, sampling_kwargs, level, prior, start, hps)
if newtosample and counterr < len(starts):
del x; x = None; prior.cpu(); empty_cache()
x = prior.decode(zs[level:], start_level=level, bs_chunks=zs[level].shape[0])
logdir = f"{hps.name}/level_{level}"
if not os.path.exists(logdir):
os.makedirs(logdir)
t.save(dict(zs=zs, labels=labels, sampling_kwargs=sampling_kwargs, x=x), f"{logdir}/data.pth.tar")
save_wav(logdir, x, hps.sr)
del x; prior.cuda(); empty_cache(); x = None
dateb = datetime.now()
timex = ((dateb-datea).total_seconds()/60.0)*(len(starts)-counterr)
print(f"Step " + colored(counterr,'blue') + "/" + colored( len(starts),'red') + " ~ New to Sample: " + str(newtosample) + " ~ estimated remaining minutes: " + (colored('???','yellow'), colored(timex,'magenta'))[counterr > 1 and newtosample])'''
data = data.replace('zs = sample_single_window(zs, labels, sampling_kwargs, level, prior, start, hps)',newtext)
newtext = """lepath=hps.name
if level==2:
for filex in glob(os.path.join(lepath + '/level_2','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-'))
if level==1:
for filex in glob(os.path.join(lepath + '/level_1','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-L1-'))
if level==0:
for filex in glob(os.path.join(lepath + '/level_0','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-L0-'))
save_html("""
if leautorename:
data = data.replace('save_html(',newtext)
if leexportlyrics == False:
data = data.replace('if alignments is None','#if alignments is None')
data = data.replace('alignments = get_alignment','#alignments = get_alignment')
data = data.replace('save_html(','#save_html(')
if leprogress == False:
data = data.replace('print(f"Step " +','#print(f"Step " +')
fin = open(filex, "wt")
fin.write(data)
fin.close()
##$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$#### autosave end
import jukebox
import torch as t
import librosa
import os
from datetime import datetime
from IPython.display import Audio
from jukebox.make_models import make_vqvae, make_prior, MODELS, make_model
from jukebox.hparams import Hyperparams, setup_hparams
from jukebox.sample import sample_single_window, _sample, \
sample_partial_window, upsample, \
load_prompts
from jukebox.utils.dist_utils import setup_dist_from_mpi
from jukebox.utils.torch_utils import empty_cache
rank, local_rank, device = setup_dist_from_mpi()
print(datetime.now().strftime("%H:%M:%S"))
model = lemodel
hps = Hyperparams()
hps.sr = 44100
hps.n_samples = lecount
hps.name = lepath
chunk_size = lechunk_size
max_batch_size = lemax_batch_size
hps.levels = 3
hps.hop_fraction = lehop
vqvae, *priors = MODELS[model]
vqvae = make_vqvae(setup_hparams(vqvae, dict(sample_length = 1048576)), device)
top_prior = make_prior(setup_hparams(priors[-1], dict()), vqvae, device)
# Prime song creation using an arbitrary audio sample.
mode = lemode
codes_file=None
audio_file = leaudio_file
prompt_length_in_seconds=leprompt_length_in_seconds
if os.path.exists(hps.name):
# Identify the lowest level generated and continue from there.
for level in [0, 1, 2]:
data = f"{hps.name}/level_{level}/data.pth.tar"
if os.path.isfile(data):
mode = mode if 'continue' in mode else 'upsample'
codes_file = data
print(f'resuming via {mode} from level {level}')
break
print('mode is now '+mode)
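# The resume check above, isolated for illustration: scan levels 0..2 and
# return the lowest level that already has a saved data.pth.tar, or None on
# a fresh run. (find_resume_level is a hypothetical helper, not part of Jukebox.)

```python
import os
import tempfile

def find_resume_level(root):
    for level in (0, 1, 2):
        if os.path.isfile(os.path.join(root, f'level_{level}', 'data.pth.tar')):
            return level
    return None

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, 'level_1'))
    open(os.path.join(root, 'level_1', 'data.pth.tar'), 'w').close()
    print(find_resume_level(root))  # 1
```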
sample_hps = Hyperparams(dict(mode=mode, codes_file=codes_file, audio_file=audio_file, prompt_length_in_seconds=prompt_length_in_seconds))
sample_length_in_seconds = lesample_length_in_seconds
hps.sample_length = (int(sample_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
assert hps.sample_length >= top_prior.n_ctx*top_prior.raw_to_tokens, 'Please choose a longer sample length'
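# The sample-length arithmetic above floors the requested duration to a whole
# number of top-level tokens. A small sketch of the idiom; raw_to_tokens=128
# is assumed here purely for illustration:

```python
def round_to_tokens(seconds, sr, raw_to_tokens):
    # Floor (seconds * sr) samples down to a multiple of raw_to_tokens
    return (int(seconds * sr) // raw_to_tokens) * raw_to_tokens

print(round_to_tokens(10, 44100, 128))  # 440960 (441000 floored to a multiple of 128)
```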
metas = [dict(artist = leartist,
genre = legenre,
total_length = hps.sample_length,
offset = 0,
lyrics = lelyrics,
),
] * hps.n_samples
labels = [None, None, top_prior.labeller.get_batch_labels(metas, 'cuda')]
#----------------------------------------------------------2
sampling_temperature = lesampling_temperature
lower_batch_size = lelower_batch_size
max_batch_size = lemax_batch_size
lower_level_chunk_size = lelower_level_chunk_size
chunk_size = lechunk_size
sampling_kwargs = [dict(temp=.99, fp16=True, max_batch_size=lower_batch_size,
chunk_size=lower_level_chunk_size),
dict(temp=.99, fp16=True, max_batch_size=lower_batch_size,
chunk_size=lower_level_chunk_size),
dict(temp=sampling_temperature, fp16=True,
max_batch_size=max_batch_size, chunk_size=chunk_size)]
if sample_hps.mode == 'ancestral':
zs = [t.zeros(hps.n_samples,0,dtype=t.long, device='cuda') for _ in range(len(priors))]
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'upsample':
assert sample_hps.codes_file is not None
# Load codes.
data = t.load(sample_hps.codes_file, map_location='cpu')
zs = [z.cuda() for z in data['zs']]
assert zs[-1].shape[0] == hps.n_samples, f"Expected bs = {hps.n_samples}, got {zs[-1].shape[0]}"
del data
print('Falling through to the upsample step later in the notebook.')
elif sample_hps.mode == 'primed':
assert sample_hps.audio_file is not None
audio_files = sample_hps.audio_file.split(',')
duration = (int(sample_hps.prompt_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
x = load_prompts(audio_files, duration, hps)
zs = top_prior.encode(x, start_level=0, end_level=len(priors), bs_chunks=x.shape[0])
print('prompt_length_in_seconds:', sample_hps.prompt_length_in_seconds)
print('sample rate:', hps.sr)
print('raw_to_tokens:', top_prior.raw_to_tokens)
print('prompt duration (samples):', duration)
print('prompt audio files:', audio_files)
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'continue':
data = t.load(sample_hps.codes_file, map_location='cpu')
zs = [z.cuda() for z in data['zs']]
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'cutcontinue':
print('-------CUT INIT--------')
lecutlen = (int(lecut*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
print(lecutlen)
data = t.load(codes_file, map_location='cpu')
zabaca = [z.cuda() for z in data['zs']]
print(zabaca)
assert zabaca[-1].shape[0] == hps.n_samples, f"Expected bs = {hps.n_samples}, got {zabaca[-1].shape[0]}"
priorsz = [top_prior] * 3
top_raw_to_tokens = priorsz[-1].raw_to_tokens
assert lecutlen % top_raw_to_tokens == 0, f"Cut-off duration {lecutlen} not an exact multiple of top_raw_to_tokens"
assert lecutlen//top_raw_to_tokens <= zabaca[-1].shape[1], f"Cut-off tokens {lecutlen//top_raw_to_tokens} longer than tokens {zabaca[-1].shape[1]} in saved codes"
zabaca = [z[:,:lecutlen//prior.raw_to_tokens] for z, prior in zip(zabaca, priorsz)]
hps.sample_length = lecutlen
print(zabaca)
zs = _sample(zabaca, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
del data
print('-------CUT OK--------')
hps.sample_length = (int(sample_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
data = t.load(sample_hps.codes_file, map_location='cpu')
zibica = [z.cuda() for z in data['zs']]
zubu = zibica[:]
if transpose != [0,1,2]:
zubu[2][0] = zibica[2][transpose[0]]
zubu[2][1] = zibica[2][transpose[1]]
zubu[2][2] = zibica[2][transpose[2]]
zubu[1][0] = zibica[1][transpose[0]]
zubu[1][1] = zibica[1][transpose[1]]
zubu[1][2] = zibica[1][transpose[2]]
zubu[0][0] = zibica[0][transpose[0]]
zubu[0][1] = zibica[0][transpose[1]]
zubu[0][2] = zibica[0][transpose[2]]
zubu = _sample(zubu, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
print('-------CONTINUE AFTER CUT OK--------')
zs = zubu
else:
raise ValueError(f'Unknown sample mode {sample_hps.mode}.')
print(datetime.now().strftime("%H:%M:%S"))
# + [markdown] id="ZAGS4pnjbmI1"
# # 2 Upsample
#
# + id="ZAGS4LolpZ6w"
print(datetime.now().strftime("%H:%M:%S"))
del top_prior
empty_cache()
top_prior=None
upsamplers = [make_prior(setup_hparams(prior, dict()), vqvae, 'cpu') for prior in priors[:-1]]
labels[:2] = [prior.labeller.get_batch_labels(metas, 'cuda') for prior in upsamplers]
zs = upsample(zs, labels, sampling_kwargs, [*upsamplers, top_prior], hps)
print(datetime.now().strftime("%H:%M:%S"))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true language="html"
# <style>
# .output_wrapper, .output {
# height:auto !important;
# max-height:700px; /* your desired max-height here */
# }
# .output_scroll {
# box-shadow:none !important;
# webkit-box-shadow:none !important;
# }
# </style>
# + [markdown] deletable=true editable=true
# ### Import modules
# + deletable=true editable=true
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from sklearn import datasets
from sklearn import svm
from sklearn import linear_model
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
# + [markdown] deletable=true editable=true
# ### Load Dataset
# + deletable=true editable=true
digits = datasets.load_digits()
# + [markdown] deletable=true editable=true
# ### Splitting data into train and test sets
# + deletable=true editable=true
features_train, features_test, labels_train, labels_test = train_test_split(
digits.data, digits.target, test_size=0.4, random_state=0)
# + [markdown] deletable=true editable=true
# ### Grid of parameter values for Grid Search
# + deletable=true editable=true
parameter_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
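# GridSearchCV exhaustively tries every combination within each sub-grid above. A quick sketch of how many candidate models that yields (each dict expands to the Cartesian product of its value lists):

```python
from itertools import product

parameter_grid = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]

# Each dict contributes the Cartesian product of its value lists
n_candidates = sum(len(list(product(*grid.values()))) for grid in parameter_grid)
assert n_candidates == 4 + 4 * 2  # 12 parameter combinations in total
```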
# + [markdown] deletable=true editable=true
# ### Building a classifier using the parameter grid
# + deletable=true editable=true
gridsearchcv = GridSearchCV(estimator=svm.SVC(), param_grid=parameter_grid, n_jobs=-1)
# + [markdown] deletable=true editable=true
# ### Training the classifier on training data
# + deletable=true editable=true
gridsearchcv.fit(features_train, labels_train)
# + [markdown] deletable=true editable=true
# ### Best score on the training set
# + deletable=true editable=true
gridsearchcv.best_score_
# + [markdown] deletable=true editable=true
# ### Evaluating the model on the test data set
# + deletable=true editable=true
gridsearchcv.score(features_test, labels_test)
# + [markdown] deletable=true editable=true
# ### Get best estimator or parameter values
# + [markdown] deletable=true editable=true
# #### for `C`
# + deletable=true editable=true
gridsearchcv.best_estimator_.C
# + [markdown] deletable=true editable=true
# #### for `kernel`
# + deletable=true editable=true
gridsearchcv.best_estimator_.kernel
# + [markdown] deletable=true editable=true
# #### for `gamma`
# + deletable=true editable=true
gridsearchcv.best_estimator_.gamma
# + [markdown] deletable=true editable=true
# ### Using the best estimators to train and evaluate a new classifier
# + [markdown] deletable=true editable=true
# #### Creating the classifier
# + deletable=true editable=true
svm_clf = svm.SVC(C=1, kernel='rbf', gamma=0.001)  # avoid shadowing the svm module
# + [markdown] deletable=true editable=true
# #### Training the classifier on the training data
# + deletable=true editable=true
svm_clf.fit(features_train, labels_train)
# + [markdown] deletable=true editable=true
# #### Evaluating the classifier with the test data
# + deletable=true editable=true
svm_clf.score(features_test, labels_test)
# + [markdown] deletable=true editable=true
# ### Automatic parameter setting example
# + deletable=true editable=true
lasso = linear_model.LassoCV()
# + deletable=true editable=true
lasso.fit(digits.data, digits.target)
# + deletable=true editable=true
lasso.alpha_
# + [markdown] deletable=true editable=true
# ### An SVC classifier optimization example
# + deletable=true editable=true
parameter_grid = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
# + deletable=true editable=true
clf = GridSearchCV(SVC(C=1), parameter_grid, cv=5,
                   scoring='precision_macro')
clf.fit(features_train, labels_train)
# + deletable=true editable=true
clf.best_params_
# + deletable=true editable=true
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Observed Trend 1 : As latitude increases, temperature decreases
#
# Observed Trend 2 : Cloudiness does not appear to be influenced by latitude
#
# Observed Trend 3 : As latitude increases, wind speed tends to decrease
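# The trends above can be quantified with a simple correlation check. A minimal sketch on synthetic data (the real analysis would use the `lat` and `temp` columns of the dataframe built below):

```python
import numpy as np

# Synthetic stand-in: temperature falling as |latitude| rises, plus noise
rng = np.random.default_rng(0)
lat = rng.uniform(-60, 60, 200)
temp = 30 - 0.4 * np.abs(lat) + rng.normal(0, 2, 200)

# Pearson correlation between |latitude| and temperature
r = np.corrcoef(np.abs(lat), temp)[0, 1]
assert r < -0.9  # strong negative relationship, matching the scatter plots
```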
import json
from random import random
import math
import requests
from pprint import pprint
import time
import pandas as pd
import matplotlib.pyplot as plt
from citipy import citipy
import openweathermapy.core as owm
from Config_WeatherAPI import api_key
# +
# Function to produce random coordinates according to arguments (n: number of results, c: center coordinate, r: radius)
def rand_cluster(n,c,r):
"""returns n random points in disk of radius r centered at c"""
x,y = c
points = []
for i in range(n):
theta = 2*math.pi*random()
s = r*random()
points.append((x+s*math.cos(theta), y+s*math.sin(theta)))
return points
#rand_coord = rand_cluster(n = 500,c = (22.9068, 43.1729), r= (150))
def unique(list1):
    """Return the unique elements of list1 (order not preserved)."""
    return list(set(list1))
# -
# location_keys = {
# 'Quito' : (0.1807, 78.4678),
# 'Beijing' : (39.9042, 116.4074),
# 'Sydney' : (33.8688, 151.2093),
# 'New York' : (40.7128, 74.006),
# 'Miami': (25.7907, 80.1300),
# 'Rio' : (22.9068, 43.1729)}
#
# use those cities as centers to randomly select cities from the database
#
# create a list of target cities coordinates
locationCoords = [(0.1807, 78.4678), (39.9042, 116.4074), (33.8688, 151.2093),(40.7128, 74.006), (25.7907, 80.1300), (22.9068, 43.1729) ]
# +
def get_total_city_name():
total_list = []
cities = []
names = []
total_coord = []
    for coordIndex in range(len(locationCoords)):
rand_coord = rand_cluster(n = 500,c = locationCoords[coordIndex], r= (500))
for coordinate_pair in rand_coord:
lat, lon = coordinate_pair
cities.append(citipy.nearest_city(lat, lon))
    for city in cities:
        name = city.city_name
        names.append(name)
    total_list.append(names)
    return total_list
# -
# run get_total_city_name() 3 times to gather enough random cities
listt= []
for i in range(3):
listt.append(get_total_city_name())
# create a total list of cities and run unique one more time to get unique names of city
finalCityList =listt[0][0] +listt[1][0] + listt[2][0]
finalCityList = unique(finalCityList)
len(finalCityList)
# +
# put the list into a pandas DataFrame
City_Weather = pd.DataFrame(finalCityList)
City_Weather = City_Weather.rename(columns={0: 'city_name'})
# draw a random sample from the dataframe
random_city = City_Weather.sample(n=700)  # sample 700 in case some API lookups fail
random_city.to_csv('random_city.csv', index = False)
# -
data = pd.read_csv('random_city.csv')
data.head()
# +
base_url = "http://api.openweathermap.org/data/2.5/weather?"
# create new columns
data['lat']= " "
data['temp']= " "
data['humid']= " "
data['cloud']= " "
data['wind_speed']= " "
row_count = 0
for index,row in data.iterrows() : # lets trial first
query = row['city_name']
query_url = base_url + "appid=" + api_key + "&q=" + query
print("Now retrieving city # " + str(row_count))
row_count += 1
responses = requests.get(query_url)
print(responses.url)
responses_city = responses.json()
try:
citi_lat = responses_city['coord']['lat']
citi_temp = responses_city['main']['temp']
citi_humid = responses_city['main']['humidity']
citi_cloud = responses_city['clouds']['all']
citi_windspeed = responses_city['wind']['speed']
        data.at[index, "lat"] = citi_lat
        data.at[index, "temp"] = citi_temp
        data.at[index, 'humid'] = citi_humid
        data.at[index, 'cloud'] = citi_cloud
        data.at[index, 'wind_speed'] = citi_windspeed
except (KeyError, IndexError):
print("Error with city data. Skipping")
continue
# -
#save data into csv and clean the data with blank value
data = data.replace('NA', ' ').reset_index()
#data.to_csv('city_info.csv', index= False)
# import data for visualization
df = pd.read_csv('city_info.csv')
df.head()
min(df['temp'])
#max(df[''])
# +
import matplotlib.pyplot as plt
# Build a scatter plot for each data type
plt.figure(figsize = (10,10))
plt.scatter(x = df["lat"], y = df["temp"], edgecolor="g", linewidths=1, marker="o",
alpha=0.3, )
# Incorporate the other graph properties
plt.title("Latitude vs Temperature ", fontsize = 23)
plt.ylabel("Temperature", fontsize = 19)
plt.xlabel("Latitude", fontsize = 19)
plt.xlim([min(df['lat']) -20, max(df['lat']) +20])
plt.ylim([min(df['temp']) -10, max(df['temp']) +5])
# Save the figure
#plt.savefig("output_analysis/Population_BankCount.png")
# Show plot
plt.show()
# -
# +
plt.figure(figsize = (10,10))
plt.scatter(x = df["lat"], y = df["humid"], edgecolor="black", linewidths=1, marker="o",
alpha=0.3, )
# Incorporate the other graph properties
plt.title("Latitude vs Humidity ", fontsize = 23)
plt.ylabel("Humidity", fontsize = 19)
plt.xlabel("Latitude", fontsize = 19)
plt.xlim([min(df['lat']) -20, max(df['lat']) +20])
plt.ylim([min(df['humid']) -10, max(df['humid']) +4])
# Save the figure
#plt.savefig("output_analysis/Population_BankCount.png")
# Show plot
plt.show()
# +
plt.figure(figsize = (10,10))
plt.scatter(x = df["lat"], y = df["cloud"], edgecolor="purple", linewidths=1, marker="o",
alpha=0.3, )
# Incorporate the other graph properties
plt.title("Latitude vs Cloudiness ", fontsize = 23)
plt.ylabel("Cloudiness", fontsize = 19)
plt.xlabel("Latitude", fontsize = 19)
plt.xlim([min(df['lat']) -20, max(df['lat']) +20])
plt.ylim([min(df["cloud"]) -10, max(df["cloud"]) +20])
# Save the figure
#plt.savefig("output_analysis/Population_BankCount.png")
# Show plot
plt.show()
# +
# Build a scatter plot for each data type
plt.figure(figsize = (10,10))
plt.scatter(x = df["lat"], y = df["wind_speed"], edgecolor="red", linewidths=1, marker="o",
alpha=0.3, )
# Incorporate the other graph properties
plt.title("Latitude vs Wind Speed ", fontsize = 23)
plt.ylabel("Wind Speed", fontsize = 19)
plt.xlabel("Latitude", fontsize = 19)
plt.xlim([min(df['lat']) -20, max(df['lat']) +20])
plt.ylim([min(df["wind_speed"]) -2, max(df["wind_speed"]) +2])
# Save the figure
#plt.savefig("output_analysis/Population_BankCount.png")
# Show plot
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Traffic Sign Classifier with TensorFlow
# - Structure: Modified LeNet-5
# - Dataset Source:[German Traffic Sign Benchmarks](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset)
# # Load The Data
# %%capture
#import
import pickle
import cv2
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
import random
import numpy as np
from tensorflow.contrib.layers import flatten
from sklearn.utils import shuffle
# +
# Load pickled data
import pickle
training_file = "train.p"
validation_file= "valid.p"
testing_file = "test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# -
# ---
#
# ## Dataset Summary & Exploration
#
# The pickled data is a dictionary with 4 key/value pairs:
#
# - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
# - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
# - `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.
# - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image.
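# A quick sanity check of that layout, sketched here with a tiny synthetic batch standing in for the unpickled `train` dict (shapes and labels are illustrative):

```python
import numpy as np

# Synthetic stand-in for the unpickled training dict
train = {
    'features': np.zeros((5, 32, 32, 3), dtype=np.uint8),  # (num examples, width, height, channels)
    'labels': np.array([0, 1, 2, 1, 0]),
}

X, y = train['features'], train['labels']
assert X.ndim == 4 and X.shape[1:] == (32, 32, 3)
assert len(X) == len(y)  # one label per example
```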
# ### Basic Summary of the Data Set
# +
# Number of training examples
n_train = len(X_train)
# Number of validation examples
n_validation = len(X_valid)
# Number of testing examples.
n_test = len(X_test)
# What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# How many unique classes/labels there are in the dataset.
n_classes = len(pd.unique(y_train))
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# -
# ### Exploratory visualization of the dataset
# +
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
# %matplotlib inline
index = random.randint(0, len(X_train))
plt.figure(figsize=(1,1))
plt.imshow(X_train[index], cmap="gray")
print("Index: {}".format(index))
print(y_train[index])
# -
# ----
#
# ## Modified LeNet Architecture
# ### Pre-process the Data Set: normalization
x_min = X_train.min()
x_max = X_train.max()
X_train_std = (X_train - x_min)/(x_max - x_min)
X_valid_std = (X_valid - x_min)/(x_max - x_min)
X_test_std = (X_test - x_min)/(x_max - x_min)
f, axarr = plt.subplots(1,2)
index = random.randint(0, len(X_train))
axarr[0].imshow(X_train[index])
axarr[1].imshow(X_train_std[index])
print(np.max(X_train[index]))
print(np.max(X_train_std[index]))
# ### Model Architecture
# +
# Input variable
#Input image batch
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
#Input label batch
y = tf.placeholder(tf.int32, (None))
#Encoding
one_hot_y = tf.one_hot(y, 43)
# -
#Main structure: LeNet
def LeNet(x, w_mu = 0, b_mu = 0.05, sigma = 0.05, keep_prob = 1):
F_W = {
"c1": tf.Variable(tf.truncated_normal([5, 5, 3, 10], mean = w_mu, stddev = sigma)),
"c2": tf.Variable(tf.truncated_normal([5, 5, 10, 18], mean = w_mu, stddev = sigma)),
"c3": tf.Variable(tf.truncated_normal([5, 5, 18, 26], mean = w_mu, stddev = sigma)),
"f3": tf.Variable(tf.truncated_normal([650, 400], mean = w_mu, stddev = sigma)),
"f4": tf.Variable(tf.truncated_normal([400, 240], mean = w_mu, stddev = sigma)),
"out": tf.Variable(tf.truncated_normal([240, 43], mean = w_mu, stddev = sigma))
}
F_B = {
"c1": tf.Variable(tf.truncated_normal([10], mean = b_mu, stddev = sigma)),
"c2": tf.Variable(tf.truncated_normal([18], mean = b_mu, stddev = sigma)),
"c3": tf.Variable(tf.truncated_normal([26], mean = b_mu, stddev = sigma)),
"f3": tf.Variable(tf.truncated_normal([400], mean = b_mu, stddev = sigma)),
"f4": tf.Variable(tf.truncated_normal([240], mean = b_mu, stddev = sigma)),
"out": tf.Variable(tf.truncated_normal([43], mean = b_mu, stddev = sigma))
}
# L1: Convolutional
padding = "VALID"
strides = [1,1,1,1]
l1 = tf.nn.conv2d(x, F_W["c1"], strides, padding) + F_B["c1"]
l1_relu = tf.nn.relu(l1)
# L1 Pooling
strides = [1,2,2,1]
ksize = [1,2,2,1]
l1_p = tf.nn.max_pool(l1_relu, ksize, strides, padding)
# L2: Convolutional
strides = [1,1,1,1]
l2 = tf.nn.conv2d(l1_p, F_W["c2"], strides, padding) + F_B["c2"]
l2_relu = tf.nn.relu(l2)
# L2 Pooling
strides = [1,2,2,1]
l2_p = tf.nn.max_pool(l2_relu, ksize, strides, padding)
# L3: Convolutional
strides = [1,1,1,1]
padding = "SAME"
l3 = tf.nn.conv2d(l2_p, F_W["c3"], strides, padding) + F_B["c3"]
l3_relu = tf.nn.relu(l3)
# Flatten
flat = flatten(l3_relu)
# L3: Fully Connected
l4 = tf.add(tf.matmul(flat,F_W["f3"]),F_B["f3"])
l4_relu = tf.nn.relu(l4)
# L4: Fully Connected
l5 = tf.add(tf.matmul(l4_relu,F_W["f4"]),F_B["f4"])
l5_relu = tf.nn.relu(l5)
l5_relu = tf.nn.dropout(l5_relu, keep_prob)
# L5: Fully Connected
logits = tf.add(tf.matmul(l5_relu,F_W["out"]),F_B["out"])
return logits
# ### Train, Validate and Test the Model
# +
# %%capture
#Training pipeline
rate = 0.002
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#saver setup
saver = tf.train.Saver()
# -
#Hyperparameters
EPOCHS = 14
BATCH_SIZE = 128
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
# Training
# %%capture
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_std)
print("Training...")
print()
for i in range(EPOCHS):
X_train_std, y_train = shuffle(X_train_std, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_std[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
training_accuracy = evaluate(X_train_std,y_train)
validation_accuracy = evaluate(X_valid_std, y_valid)
print("EPOCH {} ...".format(i+1))
print("Training Accuracy = {:.3f}".format(training_accuracy))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
# ---
#
# ## Step 3: Test a Model on New Images
#
# To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
#
# You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
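# For reporting predictions, the id -> name mapping can be loaded into a dict. A minimal sketch using a hypothetical excerpt of `signnames.csv` (the column names `ClassId`/`SignName` are assumptions about the file's header):

```python
import csv
import io

# Hypothetical excerpt of signnames.csv (column names assumed)
sign_csv = """ClassId,SignName
0,Speed limit (20km/h)
1,Speed limit (30km/h)
2,Speed limit (50km/h)
"""

# Build an id -> name lookup for labelling predictions
reader = csv.DictReader(io.StringIO(sign_csv))
id_to_name = {int(row['ClassId']): row['SignName'] for row in reader}
assert id_to_name[2] == 'Speed limit (50km/h)'
```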
# ### Load and Output the Images
# +
#Load image fetched from internet
dim = (32, 32)
img_path = {2:"./newImage/2.png",3:"./newImage/3.jpg",4:"./newImage/4.jpeg",5:"./newImage/5.jpg",6:"./newImage/6.jpg"}
img_gallary = []
for i in img_path:
img = cv2.imread(img_path[i])
img = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
img_rgb = img[:,:,::-1]
img_gallary.append(img_rgb)
# Compress to 4D np array
input4d = np.stack(img_gallary, axis=0)
# data preprocess
x_min = input4d.min()
x_max = input4d.max()
input4d = (input4d - x_min)/(x_max - x_min)
# Correct label
labels = [40,3,26,25,31]
# -
# Preview of inputs
f, axarr = plt.subplots(1,5)
for i in range(len(img_gallary)):
axarr[i].imshow(img_gallary[i])
plt.show()
# Reset tf model
tf.reset_default_graph()
# +
# %%capture
# tf inputs
#Input image batch
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
#Input label batch
y = tf.placeholder(tf.int32, (None))
rate = 0
#Encoding
one_hot_y = tf.one_hot(y, 43)
#items below will be retrieved from the saved checkpoint
logits = LeNet(x)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
# -
# ### Predict and analyze the Sign Type
# +
#Testing
with tf.Session() as sess:
# Loading trained model
saver.restore(sess, "./lenet")
print("Load completed")
#Check accuracy
accuracy = sess.run(accuracy_operation, feed_dict={x: input4d, y: labels})
eva_accuracy = evaluate(input4d,labels)
print("Accuracy = {:.2f}".format(accuracy))
#Show top 5 predicted probabilities
predict = sess.run(logits, feed_dict={x: input4d, y: labels})
prob = tf.nn.softmax(predict).eval()
pre = sess.run(tf.nn.top_k(tf.constant(predict), k=5))
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Import stuff
import numpy as np
from scipy import fftpack as fft
from scipy.io import wavfile as wav
import os as system
import csv
import multiprocessing as mp
command_file = "input.csv"
#calculate IR for single sample
def sample_IR(sample_mem, IR_pos, IR):
'''calculate single sample of IR filter from memory of past samples and IR
sample mem and IR should be the same length
pos is the start position for the samples'''
sample = 0.0
for x in range(0, len(sample_mem)):
sample += sample_mem[(IR_pos+x)%len(sample_mem)]*IR[len(IR)-x-1]
return sample
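# As a quick check of the circular-buffer FIR above: with IR_pos = 0 and a buffer the same length as the IR, one step reduces to a dot product with the reversed IR. A standalone copy of sample_IR for illustration:

```python
import numpy as np

def sample_IR(sample_mem, IR_pos, IR):
    # Mirrors the definition above: convolve the circular buffer with the IR
    sample = 0.0
    for x in range(len(sample_mem)):
        sample += sample_mem[(IR_pos + x) % len(sample_mem)] * IR[len(IR) - x - 1]
    return sample

mem = np.array([1.0, 2.0, 3.0, 4.0])
ir = np.array([0.25, 0.25, 0.25, 0.25])  # 4-tap moving average
# With IR_pos = 0 this equals the dot product of the buffer with the reversed IR
assert abs(sample_IR(mem, 0, ir) - np.dot(mem, ir[::-1])) < 1e-12
```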
#string sim function
def step_string(string, pos, length, last_length, reset, pluck, IR_mem, IR_pos, filter_IR):
'''function for incrementing the string simulation by 1 step
returns the sample for that step of simulation
pos will be incremented after each step
IR_pos will also be incremented'''
if (length > last_length):
if (((pos)%len(string)) > ((pos+length)%len(string))):
string[int((pos+length)%len(string)):int((pos)%len(string))] = 0
else:
string[0:int((pos)%len(string))] = 0
string[int((pos+length)%len(string)):int(len(string))] = 0
if reset:#reset string
string = pluck
for x in range(0, len(string)):
string[int((pos+x)%len(string))] = pluck[x]
return 0, string, IR_mem, pos, IR_pos, length
else:
sample = string[pos%len(string)]
IR_mem[IR_pos%len(IR_mem)] = sample
string[int((pos+length-1)%len(string))] = sample_IR(IR_mem, IR_pos, filter_IR)
return sample, string, IR_mem, pos+1, IR_pos+1, length
#make string from given parameters
def make_string(sample_rate, min_f, oversampling, filter_IR):
'''create string'''
IR_mem = np.zeros(len(filter_IR))
string = np.zeros(sample_rate*min_f*oversampling)
return string, IR_mem, 0, 0
#make IR for lowpass filter
def make_lowpass_IR(sample_rate, oversampling, f_cutoff, gain):
'''create lowpass IR to be used for the string
gain is the gain for every cycle. around 1 should be sustained signal'''
filter_IR = np.zeros(int((sample_rate*oversampling)/(f_cutoff*2)))
filter_IR[0:len(filter_IR) - 1] = (gain)/(len(filter_IR))
return filter_IR
#get length of the string to use
def get_length(sample_rate, oversampling, frequency, lenIR):
'''returns length of string to use'''
return (sample_rate*oversampling)/(frequency) - lenIR/2
#make the pluck shape
def make_pluck(string_length, pluck_length, magnitude):
    '''create the pluck shape to be copied'''
pluck = np.zeros(string_length)
#pluck[0:int(pluck_length)+1] = np.arange(-1,1,(2/(pluck_length)))
pluck_1 = np.arange(0,1,(1/(pluck_length*32/100)))
pluck_2 = np.arange(1,-0.2,-(1.2/(pluck_length*16/100)))
pluck_3 = np.arange(-0.2,0.2,(0.4/(pluck_length*4/100)))
pluck_4 = np.arange(0.2,-1,-(1.2/(pluck_length*16/100)))
pluck_5 = np.arange(-1,0,(1/(pluck_length*32/100)))
pluck[0:len(pluck_1)] = pluck_1
pluck[len(pluck_1):len(pluck_1)+len(pluck_2)] = pluck_2
pluck[len(pluck_1)+len(pluck_2):len(pluck_1)+len(pluck_2)+len(pluck_3)] = pluck_3
pluck[len(pluck_1)+len(pluck_2)+len(pluck_3):len(pluck_1)+len(pluck_2)+len(pluck_3)+len(pluck_4)] = pluck_4
pluck[len(pluck_1)+len(pluck_2)+len(pluck_3)+len(pluck_4):len(pluck_1)+len(pluck_2)+len(pluck_3)+len(pluck_4)+len(pluck_5)] = pluck_5
pluck = (pluck + (np.random.rand(string_length)-0.5)*0.5)/1.25
return pluck*magnitude
#generate the whole string simulation for 1 string
def string_sim(pluck_time, pluck_amp, length_time, length_freq, damp_time, damp_fac, filter_IR_raw, length, sr, os, min_f, sn):
'''runs the string sim for the whole time'''
samples = np.zeros(int(length/os))
t = 0#current time in number of ticks/steps for the string sim
string, IR_mem, pos, IR_pos = make_string(sr, min_f, os, filter_IR_raw)
last_length = 0
length_pos = 0
f = 110
damp_pos = 0
damp = 1
filter_IR = filter_IR_raw*damp
pluck_pos = 0
reset = 0
pluck_strength = 1
pluck = make_pluck(len(string), get_length(sr, os, f, len(filter_IR)), pluck_strength)
lp = 0;
for x in range(0, len(samples)):
sample_sum = 0.0
for y in range(0, os):
if ((damp_pos < len(damp_time)) and (damp_time[damp_pos] <= t)):
damp = damp_fac[damp_pos]
filter_IR = filter_IR_raw*damp
damp_pos += 1
if ((length_pos < len(length_time)) and (length_time[length_pos] <= t)):
f = length_freq[length_pos]
length_pos += 1
if ((pluck_pos < len(pluck_time)) and (pluck_time[pluck_pos] <= t)):
reset = 1
pluck_strength = pluck_amp[pluck_pos]
pluck = make_pluck(len(string), get_length(sr, os, f, len(filter_IR)), pluck_strength)
pluck_pos += 1
else:
reset = 0
sample_a, string, IR_mem, pos, IR_pos, last_length = step_string(string, pos, get_length(sr, os, f, len(filter_IR)), last_length, reset, pluck, IR_mem, IR_pos, filter_IR)
sample_sum += sample_a
t += 1
samples[x] = (sample_sum)/os #oversample the string simulation
if(int(t*20/length) > lp):
print(str(int(t*20/length)*5) + '% done on string ' + str(sn)) #print progress
lp = int(t*20/length)
return samples
#string sim function wrapper made for multiprocessing
def string_sim_mp(pluck_time, pluck_amp, length_time, length_freq, damp_time, damp_fac, filter_IR_raw, length, sr, os, min_f, sn, queue):
    '''string sim function wrapper made for multiprocessing.
    just calls the standard string sim internally'''
queue.put(string_sim(pluck_time, pluck_amp, length_time, length_freq, damp_time, damp_fac, filter_IR_raw, length, sr, os, min_f, sn))
return
sr = 96000
os = 2
min_f = 20
length = 300000  # length of sound in samples
print("starting input parsing")
pluck_time0 = np.empty(0) #create empty arrays for generating each string
pluck_time1 = np.empty(0)
pluck_time2 = np.empty(0)
pluck_time3 = np.empty(0)
pluck_time4 = np.empty(0)
pluck_time5 = np.empty(0)
pluck_amp0 = np.empty(0)
pluck_amp1 = np.empty(0)
pluck_amp2 = np.empty(0)
pluck_amp3 = np.empty(0)
pluck_amp4 = np.empty(0)
pluck_amp5 = np.empty(0)
length_time0 = np.empty(0)
length_time1 = np.empty(0)
length_time2 = np.empty(0)
length_time3 = np.empty(0)
length_time4 = np.empty(0)
length_time5 = np.empty(0)
length_freq0 = np.empty(0)
length_freq1 = np.empty(0)
length_freq2 = np.empty(0)
length_freq3 = np.empty(0)
length_freq4 = np.empty(0)
length_freq5 = np.empty(0)
damp_time0 = np.empty(0)
damp_time1 = np.empty(0)
damp_time2 = np.empty(0)
damp_time3 = np.empty(0)
damp_time4 = np.empty(0)
damp_time5 = np.empty(0)
damp_fac0 = np.empty(0)
damp_fac1 = np.empty(0)
damp_fac2 = np.empty(0)
damp_fac3 = np.empty(0)
damp_fac4 = np.empty(0)
damp_fac5 = np.empty(0)
with open(command_file, 'r') as csvfile: #read and parse the csv file
read_csv = csv.reader(csvfile, delimiter=',')
for row in read_csv:
if (row[0] == "length"):
length = int(float(row[1])*sr)
elif (row[2] == "pluck"):
if (int(row[1]) == 0):
pluck_time0 = np.append(pluck_time0, [int(float(row[0])*sr*os)])
pluck_amp0 = np.append(pluck_amp0, [float(row[3])])
if (int(row[1]) == 1):
pluck_time1 = np.append(pluck_time1, [int(float(row[0])*sr*os)])
pluck_amp1 = np.append(pluck_amp1, [float(row[3])])
if (int(row[1]) == 2):
pluck_time2 = np.append(pluck_time2, [int(float(row[0])*sr*os)])
pluck_amp2 = np.append(pluck_amp2, [float(row[3])])
if (int(row[1]) == 3):
pluck_time3 = np.append(pluck_time3, [int(float(row[0])*sr*os)])
pluck_amp3 = np.append(pluck_amp3, [float(row[3])])
if (int(row[1]) == 4):
pluck_time4 = np.append(pluck_time4, [int(float(row[0])*sr*os)])
pluck_amp4 = np.append(pluck_amp4, [float(row[3])])
if (int(row[1]) == 5):
pluck_time5 = np.append(pluck_time5, [int(float(row[0])*sr*os)])
pluck_amp5 = np.append(pluck_amp5, [float(row[3])])
elif (row[2] == "note"):
if (int(row[1]) == 0):
length_time0 = np.append(length_time0, [int(float(row[0])*sr*os)])
length_freq0 = np.append(length_freq0, [float(row[3])])
if (int(row[1]) == 1):
length_time1 = np.append(length_time1, [int(float(row[0])*sr*os)])
length_freq1 = np.append(length_freq1, [float(row[3])])
if (int(row[1]) == 2):
length_time2= np.append(length_time2, [int(float(row[0])*sr*os)])
length_freq2 = np.append(length_freq2, [float(row[3])])
if (int(row[1]) == 3):
length_time3 = np.append(length_time3, [int(float(row[0])*sr*os)])
length_freq3 = np.append(length_freq3, [float(row[3])])
if (int(row[1]) == 4):
length_time4 = np.append(length_time4, [int(float(row[0])*sr*os)])
length_freq4 = np.append(length_freq4, [float(row[3])])
if (int(row[1]) == 5):
length_time5 = np.append(length_time5, [int(float(row[0])*sr*os)])
length_freq5 = np.append(length_freq5, [float(row[3])])
elif (row[2] == "damp"):
if (int(row[1]) == 0):
damp_time0 = np.append(damp_time0, [int(float(row[0])*sr*os)])
damp_fac0 = np.append(damp_fac0, [float(row[3])])
if (int(row[1]) == 1):
damp_time1 = np.append(damp_time1, [int(float(row[0])*sr*os)])
damp_fac1 = np.append(damp_fac1, [float(row[3])])
if (int(row[1]) == 2):
damp_time2 = np.append(damp_time2, [int(float(row[0])*sr*os)])
damp_fac2 = np.append(damp_fac2, [float(row[3])])
if (int(row[1]) == 3):
damp_time3 = np.append(damp_time3, [int(float(row[0])*sr*os)])
damp_fac3 = np.append(damp_fac3, [float(row[3])])
if (int(row[1]) == 4):
damp_time4 = np.append(damp_time4, [int(float(row[0])*sr*os)])
damp_fac4 = np.append(damp_fac4, [float(row[3])])
if (int(row[1]) == 5):
damp_time5 = np.append(damp_time5, [int(float(row[0])*sr*os)])
damp_fac5 = np.append(damp_fac5, [float(row[3])])
print("starting sample generation")
#single-threaded version of this code
'''
samples0 = string_sim(pluck_time0, pluck_amp0, length_time0, length_freq0, damp_time0, damp_fac0, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 0)
samples1 = string_sim(pluck_time1, pluck_amp1, length_time1, length_freq1, damp_time1, damp_fac1, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 1)
samples2 = string_sim(pluck_time2, pluck_amp2, length_time2, length_freq2, damp_time2, damp_fac2, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 2)
samples3 = string_sim(pluck_time3, pluck_amp3, length_time3, length_freq3, damp_time3, damp_fac3, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 3)
samples4 = string_sim(pluck_time4, pluck_amp4, length_time4, length_freq4, damp_time4, damp_fac4, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 4)
samples5 = string_sim(pluck_time5, pluck_amp5, length_time5, length_freq5, damp_time5, damp_fac5, make_lowpass_IR(sr, os, 3000, 1.03), length*os, sr, os, min_f, 5)
samples = (samples0 + samples1 + samples2 + samples3 + samples4 + samples5)/6
'''
queue = mp.Queue()
processes = []
#multiprocess this thing: one process per string
p = mp.Process(target = string_sim_mp, args=(pluck_time0, pluck_amp0, length_time0, length_freq0, damp_time0, damp_fac0, make_lowpass_IR(sr, os, 10000, 1.122), length*os, sr, os, min_f, 0, queue))
processes.append(p)
p.start()
p = mp.Process(target = string_sim_mp, args=(pluck_time1, pluck_amp1, length_time1, length_freq1, damp_time1, damp_fac1, make_lowpass_IR(sr, os, 10000, 1.122), length*os, sr, os, min_f, 1, queue))
processes.append(p)
p.start()
p = mp.Process(target = string_sim_mp, args=(pluck_time2, pluck_amp2, length_time2, length_freq2, damp_time2, damp_fac2, make_lowpass_IR(sr, os, 6000, 1.061), length*os, sr, os, min_f, 2, queue))
processes.append(p)
p.start()
p = mp.Process(target = string_sim_mp, args=(pluck_time3, pluck_amp3, length_time3, length_freq3, damp_time3, damp_fac3, make_lowpass_IR(sr, os, 6000, 1.061), length*os, sr, os, min_f, 3, queue))
processes.append(p)
p.start()
p = mp.Process(target = string_sim_mp, args=(pluck_time4, pluck_amp4, length_time4, length_freq4, damp_time4, damp_fac4, make_lowpass_IR(sr, os, 4000, 1.038), length*os, sr, os, min_f, 4, queue))
processes.append(p)
p.start()
p = mp.Process(target = string_sim_mp, args=(pluck_time5, pluck_amp5, length_time5, length_freq5, damp_time5, damp_fac5, make_lowpass_IR(sr, os, 4000, 1.038), length*os, sr, os, min_f, 5, queue))
processes.append(p)
p.start()
samples = queue.get()/6
samples += queue.get()/6
samples += queue.get()/6
samples += queue.get()/6
samples += queue.get()/6
samples += queue.get()/6
for p in processes:
p.join()
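# The six near-identical launch blocks above are one fan-out/fan-in pattern. As a hedged sketch (`worker` below is a stand-in for `string_sim_mp`, assumed to push exactly one array per process onto the queue), the same pattern can be written as a loop:

```python
import multiprocessing as mp

import numpy as np

def worker(string_no, queue):
    # stand-in for string_sim_mp: simulate one string, push its samples once
    queue.put(np.full(4, float(string_no)))

queue = mp.Queue()
processes = []
for string_no in range(6):              # fan out: one process per string
    p = mp.Process(target=worker, args=(string_no, queue))
    processes.append(p)
    p.start()
samples = sum(queue.get() for _ in range(6)) / 6   # fan in: average the results
for p in processes:                     # drain the queue before joining
    p.join()
print(samples[0])  # (0 + 1 + 2 + 3 + 4 + 5) / 6 = 2.5
```

Note the results are summed in whatever order they arrive, which is fine because addition is order-independent here.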
print("applying reverb")
fs, guitar_IR = wav.read('Guitar IR EQ Edited.wav')
fs, room_L_IR = wav.read('Room IR Left Very Edited.wav')
fs, room_R_IR = wav.read('Room IR Right Very Edited.wav')
samples_fft = fft.fft(np.concatenate([samples, np.zeros(length - len(samples))]))
guitar_IR_fft = fft.fft(np.concatenate([guitar_IR/sum(guitar_IR), np.zeros(length - len(guitar_IR))]))
room_L_IR_fft = fft.fft(np.concatenate([room_L_IR/sum(room_L_IR), np.zeros(length - len(room_L_IR))]))
room_R_IR_fft = fft.fft(np.concatenate([room_R_IR/sum(room_R_IR), np.zeros(length - len(room_R_IR))]))
result_L_fft = samples_fft*guitar_IR_fft*room_L_IR_fft
result_R_fft = samples_fft*guitar_IR_fft*room_R_IR_fft
result_L = np.real(fft.ifft(result_L_fft)) #discard the residual imaginary part
result_R = np.real(fft.ifft(result_R_fft))
result_L = result_L/np.amax(np.abs(result_L)) #normalise each channel by its peak
result_R = result_R/np.amax(np.abs(result_R))
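# The reverb stage relies on the convolution theorem: multiplying FFTs is circular convolution in time, which equals linear convolution once the signals are zero-padded to at least the combined length. A minimal sanity check with toy data (not the actual impulse-response files):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])          # toy dry signal
h = np.array([0.5, 0.5])               # toy impulse response
n = len(x) + len(h) - 1                # pad length needed for linear convolution
X = np.fft.fft(x, n)                   # fft(a, n) zero-pads a to length n
H = np.fft.fft(h, n)
y_fft = np.real(np.fft.ifft(X * H))    # product in frequency domain
y_direct = np.convolve(x, h)           # direct linear convolution
print(np.allclose(y_fft, y_direct))    # True: padding made them agree
```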
print("writing output")
wav.write("result.wav", sr, np.array([result_L.astype('float'), result_R.astype('float')], float).T)#synth output
wav.write("string.wav", sr, samples.astype('float'))#string sim output for testing
print("converting output wav to opus")
system.system("ffmpeg -i result.wav -c:a libopus -b:a 192k -y result.opus")#convert output to something more reasonable in size
system.system("ffmpeg -i string.wav -c:a libopus -b:a 192k -y string.opus")
print("converting output wav to mp3")
system.system("ffmpeg -i result.wav -c:a libmp3lame -q:a 2 -y result.mp3")#convert output to something easier to share
system.system("ffmpeg -i string.wav -c:a libmp3lame -q:a 2 -y string.mp3")
print("done")
# -
| Guitar Synth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Implement atoi which converts a string to an integer.
#
# The function first discards as many whitespace characters as necessary until the first non-whitespace character is found. Then, starting from this character, takes an optional initial plus or minus sign followed by as many numerical digits as possible, and interprets them as a numerical value.
#
# The string can contain additional characters after those that form the integral number, which are ignored and have no effect on the behavior of this function.
#
# If the first sequence of non-whitespace characters in str is not a valid integral number, or if no such sequence exists because either str is empty or it contains only whitespace characters, no conversion is performed.
#
# If no valid conversion could be performed, a zero value is returned.
#
# Note:
#
# Only the space character ' ' is considered a whitespace character.
# Assume we are dealing with an environment which could only store integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. If the numerical value is out of the range of representable values, INT_MAX (2^31 − 1) or INT_MIN (−2^31) is returned.
# Example 1:
# ```
# Input: "42"
# Output: 42
# ```
# Example 2:
# ```
# Input: "-42"
# Output: -42
# ```
# Explanation: The first non-whitespace character is '-', which is the minus sign.
# Then take as many numerical digits as possible, which gets 42.
# Example 3:
# ```
# Input: "4193 with words"
# Output: 4193
# ```
# Explanation: Conversion stops at digit '3' as the next character is not a numerical digit.
# Example 4:
# ```
# Input: "words and 987"
# Output: 0
# ```
# Explanation: The first non-whitespace character is 'w', which is not a numerical
# digit or a +/- sign. Therefore no valid conversion could be performed.
# Example 5:
# ```
# Input: "-91283472332"
# Output: -2147483648
# ```
# Explanation: The number "-91283472332" is out of the range of a 32-bit signed integer.
# Therefore INT_MIN (−2^31) is returned.
class Solution:
def myAtoi(self, s):
"""
:type s: str
:rtype: int
"""
s = s.strip()
n = len(s)
sign = True # positive true, negative False
if n == 0:
return 0
i = 0
ret = 0
if s[i] == '-':
sign = False
i = i + 1
elif s[i] == '+':
i = i + 1
while i < n:
if s[i] < '0' or s[i] > '9':
break
v = int(s[i])
ret = ret*10 + v
i = i + 1
if sign is False:
ret = -ret
if ret >= 2147483647:
ret = 2147483647
elif ret < -2147483648:
ret = -2147483648
return ret
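# A hedged, self-contained sketch of the same logic as a plain function (`my_atoi` is introduced here only for illustration; it mirrors, not replaces, the class above), checked against the problem's examples:

```python
INT_MAX, INT_MIN = 2**31 - 1, -2**31

def my_atoi(s):
    s = s.strip()                        # only leading/trailing spaces matter
    if not s:
        return 0
    i, sign = 0, 1
    if s[0] in '+-':                     # optional single sign character
        sign = -1 if s[0] == '-' else 1
        i = 1
    ret = 0
    while i < len(s) and '0' <= s[i] <= '9':
        ret = ret * 10 + int(s[i])       # accumulate digits, stop at first non-digit
        i += 1
    # clamp to the 32-bit signed range after applying the sign
    return max(INT_MIN, min(INT_MAX, sign * ret))

print([my_atoi(s) for s in ["42", "-42", "4193 with words",
                            "words and 987", "-91283472332"]])
# [42, -42, 4193, 0, -2147483648]
```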
| 8.StringtoInteger_atoi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Release Notes
# =============
#
# Version 0.6 (August 2021)
# -------------------------
#
# ### New Features ###
#
# - Rigid body tracking and reconstruction of virtual markers in module `kinematics`: functions `define_rigid_body()`, `define_virtual_marker()`, and `track_rigid_body()`.
# - Interactive method to edit events in a TimeSeries: method `TimeSeries.ui_edit_events()`.
#
# ### Improvements ###
#
# - All TimeSeries methods now work on copies, see breaking changes below.
#
# ### Breaking Changes ###
#
# - **WARNING - Important breaking change** - Every `TimeSeries` method now works on a copy of the TimeSeries. The original TimeSeries is never modified itself. It was already the case for methods such as `get_ts_between_events()` etc. which would not affect the original TimeSeries, but not for others such as `add_event()`, `rename_data()`, etc. This modification was made in an effort to standardize the TimeSeries behaviour and to allow safe method chaining. This choice is a bit drastic and was implemented quickly because we are still early in Kinetics Toolkit's development. It has been encouraged by Pandas, which seems to also go in a similar direction (https://github.com/pandas-dev/pandas/issues/16529). To migrate your code to the new behaviour, please change your method calls accordingly by adding an assignation before the call. For example: `ts.add_event(...)` becomes `ts = ts.add_event(...)`. The full list of methods that have been changed is:
# - `add_data_info()`
# - `add_event()`
# - `fill_missing_samples()`
# - `merge()`
# - `remove_data()`
# - `remove_data_info()`
# - `remove_event()`
# - `rename_data()`
# - `rename_event()`
# - `shift()`
# - `sort_events()`
# - `sync_event()`
# - `trim_events()`
# - `ui_add_event()` (now discontinued in favour of `ui_edit_events`)
#
# **Please do not update to 0.6 until you are ready to find and modify those statements in your current code**.
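# The new behaviour can be illustrated with a toy copy-on-write class (a mock for illustration, not Kinetics Toolkit's actual implementation): every mutating method returns a modified deep copy, which is what makes `ts = ts.add_event(...)` and method chaining safe:

```python
import copy

class ToyTimeSeries:
    def __init__(self):
        self.events = []

    def add_event(self, name):
        ts = copy.deepcopy(self)   # never modify the original in place
        ts.events.append(name)
        return ts

ts = ToyTimeSeries()
ts2 = ts.add_event("push").add_event("recovery")  # chaining is now safe
print(ts.events, ts2.events)  # [] ['push', 'recovery']
```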
#
# Version 0.5 (June 2021)
# -----------------------
#
# ### New Features ###
#
# - Coordinate system origins are now clickable in Player.
#
# ### Improvements ###
#
# - Most TimeSeries arguments can now be used either by position or keyword (removed superfluous slash operators in signatures).
# - TimeSeriesEvent class is now a proper data class. This has no implication on usability, but the API is cleaner and more robust.
# - Bug fixes.
#
# ### Breaking Changes ###
#
# - `cycle.detect_cycles` was changed back to experimental. Its argument pairs xxxx1, xxxx2 have been changed to sequences [xxxx1, xxxx2] in anticipation of possible cycle detection with more than two phases, or even only one phase. This method is now experimental because it may be split into separate functions (one to detect cycles, another to search cycles with given criteria, and another to remove found cycles).
#
#
# Version 0.4 (April 2021)
# ------------------------
#
# ### New Features ###
#
# - Added the `geometry` module to perform rigid body geometry operations such as creating frames, homogeneous transforms, multiplying series of matrices, converting from local to global coordinates and vice-versa, and extracting Euler angles from homogeneous transforms.
# - Added the `span` option to `cycles.time_normalize`, so that cycles can be normalized between other ranges than 0% to 100%. Both reducing span (e.g., 10% to 90%) and increasing span (e.g., -25 to 125%) work.
# - Added the `to_html5` method to `Player`, which allows visualizing 3d markers and bodies in Jupyter.
# - Added the `rename_event` method to `TimeSeries`.
#
# ### Improvements ###
#
# - Added test coverage measurement for continuous improvement of code robustness.
# - Added warnings when using private or unstable functions.
# - Changed the website to use the ReadTheDoc theme, and changed its structure to facilitate continuous improvements of the website without needing to wait for releases.
#
# ### Breaking Changes ###
#
# - The default behaviour for `TimeSeries.remove_event` changed when no occurrence is defined. Previously, only the first occurrence was removed. Now every occurrence is removed.
# - In `cycles.time_normalize`, the way to time-normalize between two events of the same name changed from `event_name, _` to `event_name, event_name`.
#
# Version 0.3 (October 2020)
# --------------------------
#
# ### New Features ###
#
# - Added the `cycles` module to detect, time-normalize and stack cycles (e.g., gait cycles, wheelchair propulsion cycles, etc.)
# - Added the `pushrimkinetics` module to read files from instrumented wheelchair wheels, reconstruct the pushrim kinetics,
# remove dynamic offsets in kinetic signals, and perform speed and power calculation for analysing spatiotemporal and kinetic
# parameters of wheelchair propulsion.
# - Added lab mode to allow importing ktk without changing defaults.
# - Added `ktk.filters.deriv()` to derivate TimeSeries.
# - Added `ktk.filters.median()`, which is a running median filter function.
#
# ### Improvements ###
#
# - `TimeSeries.plot()` now shows the event occurrences besides the event names.
# - Nicer tutorial for the `filters` module.
# - Improved unit tests for the `filters` module.
#
# ### Breaking Changes ###
#
# - The module name has been changed from `ktk` to `kineticstoolkit`. Importing using `import ktk` is now deprecated and the standard way to import is now either `import kineticstoolkit as ktk` or `import kineticstoolkit.lab as ktk`.
# - Now importing Kinetics Toolkit does not change IPython's representation of dicts or matplotlib's defaults. This allows using ktk's functions without modifying the current working environment. The old behaviour is now the lab mode and is the recommended way to import Kinetics Toolkit in an IPython-based environment: `import kineticstoolkit.lab as ktk`.
#
#
# Version 0.2 (August 2020)
# -------------------------
#
# ### New Features ###
#
# - Added the functions `ktk.load` and `ktk.save`.
# - Introduced the `ktk.zip` file format.
# - Added the `gui` module to show messages, input dialogs and file/folder pickers.
# - Added the `filters` module with TimeSeries wrappers for scipy's butterworth and savitsky-golay filters.
# - Added interactive methods to TimeSeries: `TimeSeries.ui_add_event()`, `TimeSeries.ui_get_ts_between_clicks()` and `TimeSeries.ui_sync()` (experimental).
# - Added `TimeSeries.remove_event()` method.
# - Added `TimeSeries.resample()` (experimental).
#
# ### Improvements ###
#
# - Updated the documentation system using sphinx and jupyter-lab.
# - Improved performance of `TimeSeries.from_dataframe()`
# - ktk is now typed.
#
# ### Breaking Changes ###
# - `TimeSeries.from_dataframe()` is now a class function and not an instance method anymore. Therefore we need to call `ktk.TimeSeries.from_dataframe(dataframe)` instead of `ktk.TimeSeries().from_dataframe(dataframe)`.
# - Now depends on python 3.8 instead of 3.7.
#
#
# Version 0.1 (May 2020)
# ----------------------
#
# ### New Features ###
#
# - Added the `TimeSeries` class.
# - Added the `kinematics` module, to read c3d and n3d files.
# - Added the `Player` class, to view markers in 3d.
| doc/release_notes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Nbconvert and Templates
#
# Nbconvert is the tool to export your notebooks to HTML, Markdown, restructuredtext or PDF (via LaTeX). You can use it at the command line:
#
# jupyter nbconvert --to html Index.ipynb
#
# It's also integrated into the notebook interface: look for **Download as** in the **File** menu.
#
# Nbconvert uses a set of templates that describe the structure of different kinds of files, as well as how to insert pieces of content from a notebook into them. IPython includes basic templates for different output formats, but for more specific needs, you can define your own templates. Here's an example of a template that highlights Markdown cells in HTML output:
with open("makeitpop.tpl") as f:
print(f.read())
# To use this template, add a `--template` argument on the command line:
#
# jupyter nbconvert --to html Index.ipynb --template makeitpop.tpl
#
# The template system Nbconvert uses is called *Jinja2*. There's much more information about the syntax of templates in the [Jinja2 documentation](http://jinja.pocoo.org/docs/dev/templates/).
#
# For LaTeX, we use a modified syntax, because the `{}` braces clash with LaTeX itself:
#
# Normal | LaTeX templates
# -------|----------------
# `{{ expression }}` | `((( expression )))`
# `{% logic/block definition %}` | `((* logic/block definition *))`
# `{# comment #}` | `((= comment =))`
#
# We gave our LaTeX templates a `.tplx` extension, instead of `.tpl`, to highlight this. The default template for LaTeX is called `article.tplx`.
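# As a standard-library-only toy (a simplification: real nbconvert templates are full Jinja2, with blocks and filters as well as expressions), substituting `((( name )))` placeholders can be sketched like this:

```python
import re

def render_latex_style(template, context):
    # replace each ((( name ))) with its value from the context dict
    return re.sub(r"\(\(\(\s*(\w+)\s*\)\)\)",
                  lambda m: str(context[m.group(1)]),
                  template)

print(render_latex_style(r"\title{((( title )))}", {"title": "My Report"}))
# \title{My Report}
```

The triple-parenthesis delimiters never clash with LaTeX's own braces, which is the whole point of the modified syntax.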
from IPython.display import HTML
HTML(filename='nbconvert_template_structure.html')
# ## Challenge
#
# People often want to use notebooks to generate reports, where some or all of the code is unimportant. There's an example in this folder: [Stock display.ipynb](Stock display.ipynb).
#
# 1. Using the information above, make a custom LaTeX template which will hide all the code cells. Use it at the command line, and run pdflatex to check the results.
# 2. Some of the code cells in that notebook have some special metadata: `"nbconvert": {"hide_code": true}`. You can access the metadata in the cell blocks of the template as `cell.metadata`. Make a new LaTeX template which will hide cells with this set, and show cells without it. Look for **if** in the [Jinja template docs](http://jinja.pocoo.org/docs/dev/templates/).
#
# Once you've tried this, compare your solutions with the ones in the *solutions* directory.
# ## Demo
#
# Templates can add complex features to exported notebooks. [foldcode.tpl](foldcode.tpl) is an HTML template that adds buttons to hide and reveal code cells using Javascript and CSS. It uses the same metadata you just used in a LaTeX template to determine whether each cell will be visible initially. Run it on `Stock display.ipynb` and look at the result in your browser.
| intro_python/python_tutorials/jupyter-notebook_intro/nbconvert_templates/Nbconvert templates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%% raw\n"} raw_mimetype="text/restructuredtext" active=""
# .. _example-filtering1:
#
# -
# Filter pipelines
# ================
#
# This example shows how to use the `pymia.filtering` package to set up an image filter pipeline and apply it to an image.
# The pipeline consists of a gradient anisotropic diffusion filter followed by a histogram matching. This pipeline will be applied to a T1-weighted MR image and a T2-weighted MR image will be used as a reference for the histogram matching.
#
# <div class="alert alert-info">
#
# Tip
#
# This example is available as Jupyter notebook at `./examples/filtering/basic.ipynb` and Python script at `./examples/filtering/basic.py`.
#
# </div>
#
# <div class="alert alert-info">
#
# Note
#
# To be able to run this example:
#
# - Get the example data by executing `./examples/example-data/pull_example_data.py`.
# - Install matplotlib (`pip install matplotlib`).
#
# </div>
# Import the required modules.
# + pycharm={"name": "#%%\n"}
import glob
import os
import matplotlib.pyplot as plt
import pymia.filtering.filter as flt
import pymia.filtering.preprocessing as prep
import SimpleITK as sitk
# -
# Define the path to the data.
# + pycharm={"name": "#%%\n"}
data_dir = '../example-data'
# -
# Let us create a list with the two filters, a gradient anisotropic diffusion filter followed by a histogram matching.
# + pycharm={"name": "#%%\n"}
filters = [
prep.GradientAnisotropicDiffusion(time_step=0.0625),
prep.HistogramMatcher()
]
histogram_matching_filter_idx = 1 # we need the index later to update the HistogramMatcher's parameters
# -
# Now, we can initialize the filter pipeline.
# + pycharm={"name": "#%%\n"}
pipeline = flt.FilterPipeline(filters)
# + [markdown] pycharm={"name": "#%% md\n"}
# We can now loop over the subjects of the example data. We will both load the T1-weighted and T2-weighted MR images and execute the pipeline on the T1-weighted MR image. Note that for each subject, we update the parameters for the histogram matching filter to be the corresponding T2-weighted image.
# + pycharm={"name": "#%%\n"}
# get subjects to evaluate
subject_dirs = [subject for subject in glob.glob(os.path.join(data_dir, '*')) if os.path.isdir(subject) and os.path.basename(subject).startswith('Subject')]
for subject_dir in subject_dirs:
subject_id = os.path.basename(subject_dir)
print(f'Filtering {subject_id}...')
# load the T1- and T2-weighted MR images
t1_image = sitk.ReadImage(os.path.join(subject_dir, f'{subject_id}_T1.mha'))
t2_image = sitk.ReadImage(os.path.join(subject_dir, f'{subject_id}_T2.mha'))
# set the T2-weighted MR image as reference for the histogram matching
pipeline.set_param(prep.HistogramMatcherParams(t2_image), histogram_matching_filter_idx)
# execute filtering pipeline on the T1-weighted image
filtered_t1_image = pipeline.execute(t1_image)
# plot filtering result
slice_no_for_plot = t1_image.GetSize()[2] // 2
fig, axs = plt.subplots(1, 2)
axs[0].imshow(sitk.GetArrayFromImage(t1_image[:, :, slice_no_for_plot]), cmap='gray')
axs[0].set_title('Original image')
axs[1].imshow(sitk.GetArrayFromImage(filtered_t1_image[:, :, slice_no_for_plot]), cmap='gray')
axs[1].set_title('Filtered image')
fig.suptitle(f'{subject_id}', fontsize=16)
plt.show()
# + [markdown] pycharm={"name": "#%% md\n"}
# Visually, we can clearly see the smoothing of the filtered image due to the anisotropic filtering. Also, the image intensities are brighter due to the histogram matching.
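# The pipeline pattern itself is easy to sketch in isolation (a toy version, not pymia's actual classes): each stage's output feeds the next stage's input, and one stage's parameters can be updated by index before execution:

```python
class AddOffset:
    def __init__(self, offset):
        self.offset = offset
    def execute(self, image):
        return image + self.offset

class Scale:
    def __init__(self, factor):
        self.factor = factor
    def execute(self, image):
        return image * self.factor

class ToyPipeline:
    def __init__(self, filters):
        self.filters = filters
    def execute(self, image):
        for f in self.filters:       # chain: each stage consumes the previous output
            image = f.execute(image)
        return image

pipeline = ToyPipeline([AddOffset(1.0), Scale(2.0)])
pipeline.filters[1].factor = 3.0     # update one stage's parameters by index
print(pipeline.execute(10.0))        # (10 + 1) * 3 = 33.0
```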
| examples/filtering/basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] heading_collapsed=true
# # Introduction
# + [markdown] heading_collapsed=true
# # Preparation
# + [markdown] heading_collapsed=true hidden=true
# ## Import packages
# + hidden=true
import numpy as np
import matplotlib.pyplot as plt
import mapp4py
from mapp4py import md
from lib.elasticity import rot, cubic, resize, displace, crack
# + [markdown] heading_collapsed=true hidden=true
# ## Block the output of all cores except for one
# + hidden=true
import os
import sys
from mapp4py import mpi
if mpi().rank != 0:
sys.stdout = open(os.devnull, 'w') # keep the stream open; a with-block would close it immediately
# + [markdown] heading_collapsed=true hidden=true
# ## Define an `md.export_cfg` object
# + [markdown] hidden=true
# `md.export_cfg` has a call method that we can use to create quick snapshots of our simulation box
# + hidden=true
xprt = md.export_cfg("");
# + [markdown] heading_collapsed=true
# # Asymptotic Displacement Field of Crack from Linear Elasticity
# + hidden=true
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
# + hidden=true
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
# + hidden=true
B = np.linalg.inv(
np.array([
[C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
[C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
[C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]
]
))
# + hidden=true
_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])
mu = np.array([_[0],0.0]);
if np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:
mu[1] = _[1];
else:
mu[1] = _[2]
alpha = np.real(mu);
beta = np.imag(mu);
p = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]
q = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu
K = np.stack([p, q]) * np.array([mu[1], mu[0]]) /(mu[1] - mu[0])
K_r = np.real(K)
K_i = np.imag(K)
# + hidden=true
Tr = np.stack([
np.array([[1.0, alpha[0]], [0.0, beta[0]]]),
np.array([[1.0, alpha[1]], [0.0, beta[1]]])
], axis=1)
def u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])
def u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1])
def disp(x):
_ = Tr @ x
return K_r @ u_f0(_) + K_i @ u_f1(_)
# + hidden=true
n = 300;
r = 10;
disp_scale = 0.3;
n0 = int(np.round(n / (1 + np.pi)))
n1 = n - n0
xs = np.concatenate((
np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),
r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]),
np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),
), axis =1)
xs_def = xs + disp_scale * disp(xs)
fig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)
ax[0].plot(xs[0], xs[1], "b-", label="non-deformed");
ax[1].plot(xs_def[0], xs_def[1], "r-.", label="deformed");
# + [markdown] heading_collapsed=true
# # Configuration
# + [markdown] heading_collapsed=true hidden=true
# ## Create a $[\bar{1}10]\times\frac{1}{2}[111]\times[11\bar{2}]$ cell
# + [markdown] heading_collapsed=true hidden=true
# ### start with a $[100]\times[010]\times[001]$ cell
# + hidden=true
sim = md.atoms.import_cfg("configs/Fe_300K.cfg");
a = sim.H[0][0]
# + [markdown] heading_collapsed=true hidden=true
# ### Create a $[\bar{1}10]\times[111]\times[11\bar{2}]$ cell
# + hidden=true
sim.cell_change([[-1,1,0],[1,1,1],[1,1,-2]])
# + [markdown] heading_collapsed=true hidden=true
# ### Remove half of the atoms and readjust the position of remaining
# + [markdown] hidden=true
# Now one needs to cut the cell in half in the $[111]$ direction. We can achieve this in three steps:
#
# 1. Remove the atoms that are located above $\frac{1}{2}[111]$
# 2. Double the position of the remaining atoms in the said direction
# 3. Shrink the box affinely to half in that direction
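# The three steps can be sketched on a toy 1-D set of coordinates (a simplification of what `sim.do` and `sim.strain` do to the real box):

```python
import numpy as np

L = 10.0                          # toy box length in the [111] direction
x = np.arange(10, dtype=float)    # toy atom coordinates 0..9
x = x[x <= 0.5 * L - 1e-8]        # 1. drop the atoms in the upper half
x = 2.0 * x                       # 2. double the remaining coordinates
L = 0.5 * L                       # 3. shrink the box to half ...
x = 0.5 * x                       # ... affinely, so coordinates scale with it
print(x, L)  # [0. 1. 2. 3. 4.] 5.0
```

The kept atoms end up at their original positions inside a box of half the original length, which is the intended net effect.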
# + hidden=true
H = np.array(sim.H);
def _(x):
if x[1] > 0.5*H[1, 1] - 1.0e-8:
return False;
else:
x[1] *= 2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[1, 1] = -0.5
sim.strain(_)
# + [markdown] heading_collapsed=true hidden=true
# ### Readjust the positions
# + hidden=true
H = np.array(sim.H);
displace(sim,np.array([sim.H[0][0]/6.0, sim.H[1][1]/6.0, sim.H[2][2]/6.0]))
# + [markdown] heading_collapsed=true hidden=true
# ## Replicating the unit cell
# + hidden=true
max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0,0] * H[1,1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0][0]),
np.around(_ / sim.H[1][1]),
1], dtype=np.int32)
# make sure in 1 direction it is an even number
if N0[1] % 2 == 1:
N0[1] += 1
sim *= N0;
# + [markdown] heading_collapsed=true hidden=true
# ## Add vacuum
# + hidden=true
vaccum = 100.0
# + hidden=true
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[0][0] += vaccum
H_new[1][1] += vaccum
resize(sim, H_new, H.sum(axis=0) * 0.5)
# + [markdown] heading_collapsed=true hidden=true
# ## Get the displacement field for this configuration
# + hidden=true
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
disp = crack(C)
# + [markdown] heading_collapsed=true hidden=true
# ## Impose the diplacement field and other boundary conditions
# + hidden=true
fixed_layer_thickness = 20.0
intensity = 0.5
rate = 0.001
# + hidden=true
H = np.array(sim.H);
ctr = H.sum(axis=0) * 0.5
lim = np.array([H[0, 0], H[1, 1]])
lim -= vaccum;
lim *= 0.5
lim -= fixed_layer_thickness
def _(x, x_d, x_dof):
x_rel = x[:2] - ctr[:2]
u = disp(x_rel)
x[:2] += intensity * u
if (np.abs(x_rel) < lim).sum() != 2:
x_d[:2] = rate * u
x_dof[0] = False;
x_dof[1] = False;
sim.do(_)
# + hidden=true
md.export_cfg("", extra_vecs=["x_dof"] )(sim, "dumps/crack.cfg")
# + [markdown] heading_collapsed=true hidden=true
# ## assign initial velocities
# + hidden=true
sim.kB = 8.617330350e-5
sim.hP = 4.13566766225 * 0.1 * np.sqrt(1.60217656535/1.66053904020)
sim.create_temp(300.0, 846244)
# + [markdown] heading_collapsed=true hidden=true
# ## add hydrogen to the system
# + hidden=true
sim.add_elem('H',1.007940)
# + [markdown] heading_collapsed=true
# # define ensemble
# + [markdown] heading_collapsed=true hidden=true
# ## muvt
# + hidden=true
# GPa and Kelvin
def mu(p,T):
return -2.37+0.0011237850013293155*T+0.00004308665175*T*np.log(p)-0.000193889932875*T*np.log(T);
# + hidden=true
muvt = md.muvt(mu(1.0e-3,300.0), 300.0, 0.1, 'H', 73108204);
muvt.nevery = 100;
muvt.nattempts=40000;
muvt.ntally=1000;
muvt.export=md.export_cfg('dumps/dump',10000)
# + [markdown] heading_collapsed=true
# # run gcmc
# + hidden=true
muvt.run(sim,100000);
| examples/fracture-gcmc-tutorial/fracture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cityplay
# language: python
# name: cityplay
# ---
# <h2>OK! So we have our 3 clean data sets in order... let's see if we can merge them to make one nice DataFrame.</h2>
# +
import pandas as pd
import numpy as np
from functools import reduce
import datetime
import seaborn as sns
from matplotlib import rcParams
import matplotlib.pyplot as plt
# figure size in inches
rcParams['figure.figsize'] = 30,15
pd.set_option('display.max_columns', None)
# -
df_sales = pd.read_csv('../data/raw_data/sales_clean.csv')
df_climate = pd.read_csv('../data/raw_data/climate_clean.csv')
df_holidays = pd.read_csv('../data/raw_data/holidays_clean.csv')
df_climate
df_holidays
# <h2>Merging Sales with climate data:</h2>
df1= pd.merge(df_sales, df_climate, how="right", on="date")
df1
# <h2>And we merge holidays data with the new table we just created above</h2>
df2 = pd.merge(df1, df_holidays, how="right", on="date")
df2
# <h2>Make all holidays upper case, please!</h2>
df2['holiday_type'] = df2['holiday_type'].str.upper()
df2['holiday_name'] = df2['holiday_name'].str.upper()
df2.sample(40)
# <h2>We need to add some new columns. These columns will specify the dates we were closed for maintenance, covid, etc. Also, specify which days there was a curfew ("toque de queda")!</h2>
df2['is_closed'] = df2['total_sales'].apply(lambda x: 1 if x == 0 else 0)
df2['is_lockdown'] = df2['date'].apply(lambda x: 1 if x >= '2020-03-13' and x <= '2020-06-26' else 0)
df2['is_curfew'] = df2['date'].apply(lambda x: 1 if x >= '2020-03-13' and x <= '2021-05-09' else 0)
df2.head()
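# The range checks above compare dates as plain strings; that works because ISO-8601 (`YYYY-MM-DD`) strings sort lexicographically in chronological order. A quick check with made-up dates:

```python
dates = ["2020-03-12", "2020-03-13", "2021-05-09", "2021-05-10"]
# lexicographic comparison of ISO date strings matches chronological order
flags = [1 if "2020-03-13" <= d <= "2021-05-09" else 0 for d in dates]
print(flags)  # [0, 1, 1, 0]: only the middle two fall inside the range
```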
# <h2>Time to check correlation, out of interest.</h2>
df2.corr()
df2.to_csv("../data/db_load_files/clean_data.csv", index=False)
df3 = df2.set_index("date")
df3
# <h2>Prepare table for get_dummies!</h2>
df3['year'] = df3.year.astype('category')
del df3['day']
del df3['holiday_type']
del df3['holiday_name' ]
del df3['did_snow']
df4 = pd.get_dummies(df3 ,dummy_na=True)
df4
del df4['day_type_nan']
del df4['month_name_nan']
del df4['day_of_week_nan']
del df4['year_nan']
df4
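# `dummy_na=True` is what produces the `*_nan` indicator columns deleted just above; a small illustration with a made-up column:

```python
import pandas as pd

toy = pd.DataFrame({"day_type": ["festivo", "laborable", None]})
dummies = pd.get_dummies(toy, dummy_na=True)  # one indicator column per category, plus NaN
print(sorted(dummies.columns))
# ['day_type_festivo', 'day_type_laborable', 'day_type_nan']
```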
# <h2>By adding a previous day's sales column, we can introduce a little bit of lag to our series... it will help our machine learning model down the road!</h2>
# +
#add total sales lag
number_lags=1
for lag in range(1, number_lags + 1):
df4['prev_sales'] = df4.total_sales.shift(lag)
df4.dropna(subset = ["prev_sales"], inplace=True)
# -
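# `shift(1)` is what introduces the one-day lag: each row receives the previous row's value, and the first row (which has no previous day) becomes `NaN` and is dropped. A small example with made-up numbers:

```python
import pandas as pd

toy = pd.DataFrame({"total_sales": [100.0, 150.0, 130.0]})
toy["prev_sales"] = toy["total_sales"].shift(1)   # move each value down one row
toy = toy.dropna(subset=["prev_sales"])           # the first row has no "yesterday"
print(toy["prev_sales"].tolist())  # [100.0, 150.0]
```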
# <h2>Days before and after holidays are also very important factors</h2>
# +
number_lags=1
for lag in range(1, number_lags + 1):
df4['is_post_holiday'] = df4.day_type_festivo.shift(lag)
df4.dropna(subset = ["is_post_holiday"], inplace=True)
# -
df4['is_pre_holiday'] = df4.day_type_festivo.shift(-1)
df4.dropna(subset = ["is_pre_holiday"], inplace=True)
df4
df4.corr()
# <h2>OK, let's save!</h2>
# + tags=[]
df4.to_excel("../data/test.xlsx", index=True)
# -
# <h2>Starting Machine Learning models...yay!</h2>
# +
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, KFold, cross_val_score, RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC, SVR
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.linear_model import LinearRegression, Ridge, Lasso, RidgeCV, ElasticNet, SGDRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import Normalizer, scale, MinMaxScaler, StandardScaler, LabelEncoder
from sklearn.feature_selection import RFECV
from sklearn.metrics import mean_squared_log_error, mean_squared_error, r2_score, mean_absolute_error
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# -
# <h2>I'll start by checking some scores for ridge, lasso, and the other usual suspects!</h2>
# +
X = df4.drop(['total_sales'], axis=1)
y = df4['total_sales']
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.20)
# -
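# Note: because df4 now contains lagged features, a purely random split lets training rows see information from test-set days. A chronological hold-out, sketched here on synthetic data, is often safer for time series:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X_demo = rng.normal(size=(n, 3))
y_demo = rng.normal(size=n)

# Reserve the most recent 20% of rows instead of sampling at random,
# so lagged features cannot leak future information into training.
split = int(n * 0.8)
X_tr, X_te = X_demo[:split], X_demo[split:]
y_tr, y_te = y_demo[:split], y_demo[split:]
print(X_tr.shape, X_te.shape)
```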
models = { "ridge": Ridge(),
"lasso": Lasso(),
"sgd": SGDRegressor(),
"knn": KNeighborsRegressor(),
"gradient": GradientBoostingRegressor()
}
for name, model in models.items():
    print(f"Training model ---> {name}")
    model.fit(X_train, y_train)
    print("Done :)")

for name, model in models.items():
    y_pred = model.predict(X_test)
    print(f"--------{name}--------")
    print("MAE: ", mean_absolute_error(y_test, y_pred))
    print("MSE: ", mean_squared_error(y_test, y_pred))
    print("RMSE: ", np.sqrt(mean_squared_error(y_test, y_pred)))
    print("R2: ", r2_score(y_test, y_pred))
    print("\n")
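# As a sanity check, the same metrics can be computed by hand on toy numbers (made-up values, just to show the formulas):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.0, 5.0, 9.0])

mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)             # mean squared error
rmse = np.sqrt(mse)                               # root mean squared error
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                          # coefficient of determination
print(round(mae, 3), round(rmse, 3), round(r2, 3))
```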
# <h2>The gradient boosting model was our best there, with an R2 of 0.79. Let's try a random forest.</h2>
# +
model = RandomForestRegressor()
params = {'n_estimators': [10,30,40,50,100],
'max_features': ["sqrt"],
'max_depth': [15,20,25],
'min_samples_leaf': [1,2,4,6,8,10]}
grid_search = GridSearchCV(model, param_grid=params, verbose=1, n_jobs=-1,cv=5)
grid_search.fit(X_train, y_train)
# -
bestscore = grid_search.best_score_
print("Best GridSearch Score: ", bestscore)
best_rf = grid_search.best_estimator_
print("Best Estimator: ", best_rf)
print("Best RF SCORE: ", best_rf.score(X, y))
# <h2>Random forest gave us 0.94, not too bad! Let's generate some predictions.</h2>
X["predicted_sales"] = best_rf.predict(X)
X["actual_total_sales"] = y
X
# <h2>Saving model to a pickle file</h2>
import pickle
# +
# save the model to disk
pickle.dump(best_rf, open("./models/best_rf.pkl", 'wb'))
'''
# load the model back from disk
loaded_model = pickle.load(open("./models/best_rf.pkl", 'rb'))
loaded_model.predict(X_test)
'''
# -
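# The save/load pattern above can be round-tripped on a plain object so it runs standalone (paths here are illustrative, not the notebook's real ones):

```python
import os
import pickle
import tempfile

# Stand-in object instead of the fitted model, so this cell runs on its own.
obj = {"model": "best_rf", "score": 0.94}
path = os.path.join(tempfile.gettempdir(), "best_rf_demo.pkl")

# Context managers make sure the file handles are closed.
with open(path, "wb") as fh:
    pickle.dump(obj, fh)
with open(path, "rb") as fh:
    loaded = pickle.load(fh)
print(loaded)
```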
# <h2>Time to bring H2O AutoML into the mix!</h2>
# +
import joblib
# automatic machine learning (AutoML)
import h2o
from h2o.automl import H2OAutoML
# -
h2o.init() #To start h2o
h2train = h2o.H2OFrame(df4)
# +
x = list(df4.columns)
x.remove('total_sales')
y = "total_sales"
print("X:", x)
print("y:", y)
# +
# Train all the H2O models
automl = H2OAutoML(max_models=40, max_runtime_secs=10000, sort_metric='RMSE')
automl.train(x=x, y=y, training_frame=h2train)
# +
#Showing the best performers
leader_board = automl.leaderboard
leader_board.head()
# -
# <h2>Saving the best stacked model</h2>
# save the model to disk
model_path = h2o.save_model(model=automl.leader, path="./models/autostacked", force=True)
print (model_path)
# +
#Loading the TEST dataset
stacked_test = X
h2test_stacked = h2o.H2OFrame(stacked_test) # conversion into an H2O frame
h2test_stacked.head() #preview
# -
predicted_price_h2_stacked = automl.leader.predict(h2test_stacked).as_data_frame() #PREDICTING the Sales on the TEST dataset
predicted_price_h2_stacked #Result
pred = X
pred
pred["total_sales"] = y
pred
pred = pred.set_index(predicted_price_h2_stacked.index)
pred['predict'] = predicted_price_h2_stacked['predict']
pred.to_csv("data/h20_stacked_pred.csv", index=True)
# <h2>Saving best unstacked model</h2>
single_model = h2o.get_model(automl.leaderboard.as_data_frame()['model_id'][2]) # selecting the best NON-STACKED model from the leaderboard
#Another way to save it:
model_path = h2o.save_model(model=single_model, path="./models/deeplearning", force=True)
saved_model = h2o.load_model(model_path)
print (saved_model)
df2.to_csv("data/final.csv", index=True)
X.to_csv("data/model_outputs/rndForrest.csv", index=True)
# ---
# jupyter:
# jupytext:
# formats: ipynb,md
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Including citations in your book
#
# ```{warning}
# ✨✨experimental✨✨
# ```
# Because `jupyter-book` is built on top of Sphinx, we can use the excellent
# [sphinxcontrib-bibtex](https://sphinxcontrib-bibtex.readthedocs.io/en/latest/)
# extension to include citations and a bibliography with your book.
#
# ## How to use citations
#
# Including citations with your markdown files or notebooks is done in the following
# way.
#
# 1. Modify the file in `_bibliography/references.bib`. This has a few sample citations
# in bibtex form. Update as you wish!
# 2. In your content, add the following text to include a citation
#
# ```
# {cite}`mybibtexcitation`
# ```
#
# For example, this text
#
# ```
# {cite}`holdgraf_rapid_2016`
# ```
#
# generates this citation: {cite}`holdgraf_rapid_2016`
#
# You can also include multiple citations in one go, like so:
#
# ```
# {cite}`holdgraf_evidence_2014,holdgraf_portable_2017`
# ```
#
# becomes {cite}`holdgraf_evidence_2014,holdgraf_portable_2017`.
#
# 3. Generate a bibliography on your page by using the following text:
#
# ````
# ```{bibliography} path/to/your/bibtexfile.bib
# ```
# ````
#
# This will generate a bibliography for your entire bibtex file, like so:
#
# ```{bibliography} ../references.bib
# ```
#
# When your book is built, the bibliography and citations will now be included.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 1 - Getting Started
# +
import numpy as np
print("Numpy:", np.__version__)
# -
# # Python Summary
#
# ## Further information
#
# More information is usually available with the `help` function. Using `?` brings up the same information in IPython.
#
# Using the `dir` function lists all the options available from a variable.
# help(np)
#
# np?
#
# dir(np)
# ## Variables
#
# A variable is simply a name for something. One of the simplest tasks is printing the value of a variable.
#
# Printing can be customized using the format method on strings.
# +
location = 'Bethesda'
zip_code = 20892
elevation = 71.9
print("We're in", location, "zip code", zip_code, ", ", elevation, "m above sea level")
print("We're in " + location + " zip code " + str(zip_code) + ", " + str(elevation) + "m above sea level")
print("We're in {0} zip code {1}, {2}m above sea level".format(location, zip_code, elevation))
print("We're in {0} zip code {1}, {2:.2e}m above sea level".format(location, zip_code, elevation))
# -
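# f-strings (Python 3.6+) are a more concise alternative to the `format` calls above:

```python
# Same message as before, rendered with an f-string; format specs like .2e
# go after a colon inside the braces.
location = 'Bethesda'
zip_code = 20892
elevation = 71.9
print(f"We're in {location} zip code {zip_code}, {elevation:.2e}m above sea level")
```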
# ## Types
#
# A number of different types are available as part of the standard library. The following links to the documentation provide a summary.
#
# * https://docs.python.org/3.5/library/stdtypes.html
# * https://docs.python.org/3.5/tutorial/datastructures.html
#
# Other types are available from other packages and can be created to support special situations.
#
# A variety of different methods are available depending on the type.
# +
# Sequences
# Lists
l = [1,2,3,4,4]
print("List:", l, len(l), 1 in l)
# Tuples
t = (1,2,3,4,4)
print("Tuple:", t, len(t), 1 in t)
# Sets
s = set([1,2,3,4,4])
print("Set:", s, len(s), 1 in s)
# Dictionaries
# Dictionaries map hashable values to arbitrary objects
d = {'a': 1, 'b': 2, 3: 's', 2.5: 't'}
print("Dictionary:", d, len(d), 'a' in d)
# -
# ## Conditionals
#
# https://docs.python.org/3.5/tutorial/controlflow.html
# +
import random

if random.random() < 0.5:
    print("Should be printed 50% of the time")
elif random.random() < 0.5:
    print("Should be printed 25% of the time")
else:
    print("Should be printed 25% of the time")
# -
# ## Loops
#
# https://docs.python.org/3.5/tutorial/controlflow.html
# +
for i in ['a', 'b', 'c', 'd']:
    print(i)
else:
    print('Else')

for i in ['a', 'b', 'c', 'd']:
    if i == 'b':
        continue
    elif i == 'd':
        break
    print(i)
else:
    print('Else')
# -
# ## Functions
#
# https://docs.python.org/3.5/tutorial/controlflow.html
# +
def is_even(n):
    return not n % 2

print(is_even(1), is_even(2))
# +
def first_n_squared_numbers(n=5):
    return [i**2 for i in range(1, n+1)]

print(first_n_squared_numbers())
# +
def next_fibonacci(status=[]):
    if len(status) < 2:
        status.append(1)
        return 1
    status.append(status[-2] + status[-1])
    return status[-1]
print(next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci())
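# The mutable default argument above is what lets `next_fibonacci` remember its state between calls, but it is a well-known gotcha. A generator expresses the same idea without hidden shared state:

```python
def fibonacci():
    # State lives in local variables, not in a default argument
    # shared across every call to the function.
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

fib = fibonacci()
print([next(fib) for _ in range(6)])  # → [1, 1, 2, 3, 5, 8]
```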
# +
def accepts_anything(*args, **kwargs):
    for a in args:
        print(a)
    for k in kwargs:
        print(k, kwargs[k])

accepts_anything(1, 2, 3, 4, a=1, b=2, c=3)
# +
# For quick and simple functions a lambda expression can be a useful approach.
# Standard functions are always a valid alternative and often make code clearer.
f = lambda x: x**2
print(f(5))
people = [{'name': 'Alice', 'age': 30},
{'name': 'Bob', 'age': 35},
{'name': 'Charlie', 'age': 35},
{'name': 'Dennis', 'age': 25}]
print(people)
people.sort(key=lambda x: x['age'])
print(people)
# -
# ## Numpy
#
# http://docs.scipy.org/doc/numpy/reference/
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
print(a)
print(a[1:,1:])
a = a + 2
print(a)
a = a + np.array([1,2,3])
print(a)
a = a + np.array([[10],[20],[30]])
print(a)
print(a.mean(), a.mean(axis=0), a.mean(axis=1))
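# A quick recap of the broadcasting rules used above, on a small self-contained example: a (3,3) array combines elementwise with a (3,) row vector or a (3,1) column vector without explicit loops.

```python
import numpy as np

m = np.zeros((3, 3), dtype=int)
row = np.array([1, 2, 3])           # shape (3,): broadcast down the rows
col = np.array([[10], [20], [30]])  # shape (3,1): broadcast across the columns
result = m + row + col
print(result)
```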
# +
import matplotlib.pyplot as plt
# %matplotlib inline
# +
x = np.linspace(0, 3*2*np.pi, 500)
plt.plot(x, np.sin(x))
plt.show()
# -
# # Exercises
a = "The quick brown fox jumps over the lazy dog"
b = 1234567890.0
# * Print the variable `a` in all uppercase
# * Print the variable `a` with every other letter in uppercase
# * Print the variable `a` in reverse, i.e. god yzal ...
# * Print the variable `a` with the words reversed, i.e. ehT kciuq ...
# * Print the variable `b` in scientific notation with 4 decimal places
people = [{'name': 'Bob', 'age': 35},
{'name': 'Alice', 'age': 30},
{'name': 'Eve', 'age': 20},
{'name': 'Gail', 'age': 30},
{'name': 'Dennis', 'age': 25},
{'name': 'Charlie', 'age': 35},
{'name': 'Fred', 'age': 25},]
# * Print the items in `people` as comma-separated values
# * Sort `people` so that they are ordered by age, and print
# * Sort `people` so that they are ordered by age first, and then by their names, i.e. Bob and Charlie should be next to each other due to their ages, with Bob first due to his name.
# * Write a function that returns the first n prime numbers
# * Given a list of coordinates calculate the distance using the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance)
# * Given a list of coordinates arrange them in such a way that the distance traveled is minimized (the itertools module may be useful).
#
# +
coords = [(0,0), (10,5), (10,10), (5,10), (3,3), (3,7), (12,3), (10,11)]
# -
# * Print the standard deviation of each row in a numpy array
# * Print only the values greater than 90 in a numpy array
# * From a numpy array display the values in each row in a separate plot (the subplots method may be useful)
np.random.seed(0)
a = np.random.randint(0, 100, size=(10,20))