# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Model evaluation & cross-validation
#
# [<NAME> (he/him)](https://github.com/JoseAlanis)
# Research Fellow - Child and Adolescent Psychology at [Uni Marburg](https://www.uni-marburg.de/de)
# Member - [RTG 2271 | Breaking Expectations](https://www.uni-marburg.de/en/fb04/rtg-2271), [Brainhack](https://brainhack.org/)
#
# <img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/Twitter%20social%20icons%20-%20circle%20-%20blue.png" alt="logo" title="Twitter" width="30" height="30" /> <img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/GitHub-Mark-120px-plus.png" alt="logo" title="Github" width="30" height="30" /> @JoiAlhaniz
#
#
# <img align="right" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/ml-dl_workshop.png" alt="logo" title="Github" width="400" height="280" />
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Aim(s) of this section
#
# As mentioned in the previous section, it is not sufficient to simply apply these methods to learn something about the nature of our data. It is always necessary to assess the quality of the implemented model. The goal of this section is to look at ways to estimate the generalization accuracy of a model on future (i.e., unseen, out-of-sample) data.
#
# In other words, at the end of this section you should:
# - 1) know different techniques to evaluate a given model
# - 2) understand the basic idea of cross-validation and its different variants
# - 3) have an idea of how to assess a model's significance (e.g., via permutation tests)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Prepare data for model
#
# Let's bring back our example data set (you know the song ...)
# + slideshow={"slide_type": "fragment"}
import numpy as np
import pandas as pd
# get the data set
data = np.load('MAIN2019_BASC064_subsamp_features.npz')['a']
# get the labels
info = pd.read_csv('participants.csv')
print('There are %s samples and %s features' % (data.shape[0], data.shape[1]))
# + [markdown] slideshow={"slide_type": "subslide"}
# Now let's look at the labels
# + slideshow={"slide_type": "fragment"}
info.head(n=5)
# + [markdown] slideshow={"slide_type": "subslide"}
# We'll set `Age` as the target
# - i.e., we'll approach this from a `regression` perspective
# + slideshow={"slide_type": "fragment"}
# set age as target
Y_con = info['Age']
Y_con.describe()
# + [markdown] slideshow={"slide_type": "subslide"}
# Next:
# - we need to divide our input data `X` into `training` and `test` sets
# + slideshow={"slide_type": "fragment"}
# import necessary python modules
from sklearn.model_selection import train_test_split
# split the data
X_train, X_test, y_train, y_test = train_test_split(data, Y_con, random_state=0)
# + [markdown] slideshow={"slide_type": "subslide"}
# Now let's look at the sizes of the data sets
# + slideshow={"slide_type": "fragment"}
# print the size of our training and test groups
print('N used for training:', len(X_train),
' | N used for testing:', len(X_test))
# + [markdown] slideshow={"slide_type": "subslide"}
# **Question:** Is that a good distribution? Does it look OK?
#
# - Why might this be problematic? (hint: what do you know about the groups (e.g., `Child_Adult`) in the data?)
# + slideshow={"slide_type": "fragment"}
import matplotlib.pyplot as plt
import seaborn as sns
sns.displot(y_train,label='train')
plt.legend()
sns.displot(y_test,label='test')
plt.legend()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Model fit
#
# Now let's go ahead and fit the model
# - we will use a fairly standard regression model called a Support Vector Regressor (SVR)
# - similar to the one we used in the previous section
# + slideshow={"slide_type": "fragment"}
from sklearn.svm import SVR
# define the model
lin_svr = SVR(kernel='linear')
# fit the model
lin_svr.fit(X_train, y_train)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Model diagnostics
# + [markdown] slideshow={"slide_type": "slide"}
# Now let's look at how the model performs in predicting the data
# - we can use the `score` method to calculate the coefficient of determination (or [R-squared](https://en.wikipedia.org/wiki/Coefficient_of_determination)) of the prediction.
# - for this we compare the observed data to the predicted data
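To see what `score` actually computes, here is a minimal sketch of the R-squared formula (the helper name `r_squared` is ours, not scikit-learn's):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect prediction -> 1.0
```

A model that always predicts the mean of `y_true` gets R-squared of exactly 0, which is why negative scores are possible for models worse than that baseline.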
# + slideshow={"slide_type": "subslide"}
# predict the training data based on the model
y_pred = lin_svr.predict(X_train)
# calculate the model accuracy
acc = lin_svr.score(X_train, y_train)
# + slideshow={"slide_type": "fragment"}
# print results
print('accuracy (R2)', acc)
# + [markdown] slideshow={"slide_type": "subslide"}
# Now let's plot the predicted values
# + slideshow={"slide_type": "fragment"}
sns.regplot(x=y_train, y=y_pred, scatter_kws=dict(color='k'))
plt.xlabel('Age')
plt.ylabel('Predicted Age')
# + [markdown] slideshow={"slide_type": "subslide"}
# Now that's really cool, eh? **Almost a perfect fit**
# + [markdown] slideshow={"slide_type": "subslide"}
# ... which means something is wrong
# - what are we missing here?
# + [markdown] slideshow={"slide_type": "subslide"}
# - **recall**: We predicted the *training* data with a model fit to that very same data - no unseen (test) data was involved.
# + [markdown] slideshow={"slide_type": "subslide"}
# <center><img src="https://raw.githubusercontent.com/neurodatascience/course-materials-2020/master/lectures/14-may/03-intro-to-machine-learning/Imgs/regr.jpg" alt="logo" title="Github" width="800" height="500" /><center>
#
# <br>
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Train/test stratification
#
# Now let's do this again, but we'll add some constraints to the prediction
# - We'll keep the 75/25 ratio between training and test data sets
# - But now we will try to keep the characteristics of the data set consistent across training and test data sets
# - For this we will use something called [stratification](https://en.wikipedia.org/wiki/Stratified_sampling)
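Before applying it to our data, here is a toy sketch of what stratification buys us; the labels ("child"/"adult") and the 25/75 balance are made up for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy data: 80 samples with a 25%/75% class balance (hypothetical labels)
X = np.arange(80).reshape(-1, 1)
y = np.array(["child"] * 20 + ["adult"] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# both splits keep the original 25/75 class ratio
print((y_tr == "child").mean(), (y_te == "child").mean())
```

Without `stratify=y`, a small split can easily end up with a class ratio quite different from the full sample, which biases any evaluation done on it.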
# + slideshow={"slide_type": "subslide"}
# use `AgeGroup` for stratification
age_class2 = info.loc[y_train.index,'AgeGroup']
# split the data
X_train2, X_test, y_train2, y_test = train_test_split(
X_train, # x
y_train, # y
test_size = 0.25, # 75%/25% split
shuffle = True, # shuffle dataset before splitting
stratify = age_class2, # keep distribution of age class consistent
# betw. train & test sets.
random_state = 0 # same shuffle each time
)
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's re-fit the model on the newly computed (and stratified) training data and evaluate its performance on the (also stratified) test data
# - We'll again compute the model accuracy (R-squared) to evaluate the model's performance,
# - but we'll also have a look at the [mean-absolute-error](https://en.wikipedia.org/wiki/Mean_absolute_error) (MAE): the average of the absolute differences between predictions and actual observations. Unlike other measures, MAE is more robust to outliers, since it doesn't square the deviations (cf. [mean-squared-error](https://en.wikipedia.org/wiki/Mean_squared_error))
# - it provides a way to assess "how far off" our predictions are from the actual data, while staying in the target's original units
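The outlier-robustness claim can be checked directly with a tiny made-up example: one large miss inflates MSE quadratically but MAE only linearly.

```python
import numpy as np

y_true = np.array([10., 12., 11., 13.])
y_clean = np.array([11., 11., 12., 12.])    # small errors everywhere
y_outlier = np.array([11., 11., 12., 25.])  # one large miss

mae = lambda t, p: np.mean(np.abs(t - p))
mse = lambda t, p: np.mean((t - p) ** 2)

print(mae(y_true, y_clean), mae(y_true, y_outlier))  # 1.0 vs 3.75
print(mse(y_true, y_clean), mse(y_true, y_outlier))  # 1.0 vs 36.75
```

The single 12-unit miss moves MAE from 1.0 to 3.75, but MSE from 1.0 to 36.75, which is why MSE-based metrics are dominated by outliers.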
# + slideshow={"slide_type": "fragment"}
from sklearn.metrics import mean_absolute_error
# fit model just to training data
lin_svr.fit(X_train2, y_train2)
# predict the *test* data based on the model trained on X_train2
y_pred = lin_svr.predict(X_test)
# calculate the model accuracy
acc = lin_svr.score(X_test, y_test)
mae = mean_absolute_error(y_true=y_test,y_pred=y_pred)
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's check the results
# + slideshow={"slide_type": "fragment"}
# print results
print('accuracy (R2) = ', acc)
print('MAE = ', mae)
# + slideshow={"slide_type": "subslide"}
# plot results
sns.regplot(x=y_pred,y=y_test, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Age')
# + [markdown] slideshow={"slide_type": "slide"}
# ### [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))
#
# Not perfect, but not bad as far as predicting with unseen data goes, especially with a training sample of "only" 69 subjects.
#
# - But can we do better?
# - One thing we could do is effectively increase the size of our training set, while simultaneously reducing bias, by using 10-fold **cross-validation**
# + [markdown] slideshow={"slide_type": "subslide"}
# <center><img src="https://raw.githubusercontent.com/neurodatascience/course-materials-2020/master/lectures/14-may/03-intro-to-machine-learning/Imgs/KCV2.png" alt="logo" title="Github" width="600" height="500" /></center>
#
# <br>
# + [markdown] slideshow={"slide_type": "subslide"}
# Cross-validation is a technique used to protect against biases in a predictive model
# - particularly useful in cases where the amount of data is limited
# - basic idea: partition the data into a fixed number of folds, run the analysis on each fold, and then average the resulting error estimates
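The fold-by-fold procedure described above can be written out by hand; this sketch uses synthetic data and a plain `LinearRegression` rather than our age data, just to show what `cross_val_score` does under the hood:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

scores = []
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    # fit on 9 folds, score (R-squared) on the held-out fold
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(len(scores), np.mean(scores))
```

Every sample is used for testing exactly once and for training nine times, so the averaged score is a less wasteful estimate than a single held-out split.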
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's look at the model's performance across 10 folds
# + slideshow={"slide_type": "subslide"}
# import modules needed for cross-validation
from sklearn.model_selection import cross_val_predict, cross_val_score
# predict
y_pred = cross_val_predict(lin_svr, X_train, y_train, cv=10)
# scores
acc = cross_val_score(lin_svr, X_train, y_train, cv=10)
mae = cross_val_score(lin_svr, X_train, y_train, cv=10,
scoring='neg_mean_absolute_error')
# negative MAE: scikit-learn's scoring convention is that
# higher scores are better; since MAE is an error metric
# (lower is better), scikit-learn reports it with the sign flipped
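A quick synthetic check of this sign convention (toy data, not our age set): the returned scores are all non-positive, and negating them recovers the usual MAE.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = X[:, 0] + rng.normal(scale=0.1, size=40)

neg_mae = cross_val_score(LinearRegression(), X, y, cv=5,
                          scoring='neg_mean_absolute_error')

# every score is <= 0; -neg_mae gives the ordinary (positive) MAE per fold
print((neg_mae <= 0).all(), (-neg_mae).mean())
```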
# + slideshow={"slide_type": "subslide"}
# print the results for each fold
for i in range(10):
    print(
        'Fold {} -- Acc = {}, MAE = {}'.format(i, np.round(acc[i], 3), np.round(-mae[i], 3))
    )
# + [markdown] slideshow={"slide_type": "subslide"}
# For the visually oriented among us
# + slideshow={"slide_type": "subslide"}
fig = plt.figure(figsize=(8, 6))
plt.plot(acc, label = 'R-squared')
plt.legend()
plt.plot(-mae, label = 'MAE')
plt.legend(prop={'size': 12}, loc=9)
plt.xlabel('Folds [1 to 10]')
plt.ylabel('Metric score')
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also look at the **overall accuracy** of the model
# + slideshow={"slide_type": "subslide"}
from sklearn.metrics import r2_score
overall_acc = r2_score(y_train, y_pred)
overall_mae = mean_absolute_error(y_train,y_pred)
print('R2:', overall_acc)
print('MAE:', overall_mae)
# + [markdown] slideshow={"slide_type": "subslide"}
# Now, let's look at the final overall model prediction
# + slideshow={"slide_type": "fragment"}
sns.regplot(x=y_train, y=y_pred, scatter_kws=dict(color='k'))
plt.ylabel('Predicted Age')
# + [markdown] slideshow={"slide_type": "slide"}
# ### Summary
#
# Not bad, not bad at all.
#
# But **most importantly**
# - this is a more **accurate estimation** of our model's predictive efficacy.
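The learning goals above also mention permutation tests; scikit-learn ships `permutation_test_score` for exactly this. A sketch on synthetic data (not our age set): the true cross-validated score is compared against scores obtained after shuffling the labels.

```python
import numpy as np
from sklearn.model_selection import permutation_test_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X[:, 0] * 2.0 + rng.normal(scale=0.2, size=60)

# score on real labels vs. the null distribution of permuted-label scores
score, perm_scores, pvalue = permutation_test_score(
    SVR(kernel='linear'), X, y, cv=5, n_permutations=100, random_state=0)

print(score > np.mean(perm_scores), pvalue)
```

A small p-value means the model's score is unlikely to arise when the relationship between `X` and `y` is destroyed, i.e., the model learned something real.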
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from puzzle16 import *
# +
def make_matrix(input_len):
    coef_mat = np.empty((input_len, input_len))
    for output_phase in range(1, input_len + 1):
        pattern = pattern_iter(gen_pattern(output_phase))  # avoid shadowing built-in `iter`
        row = take(input_len, pattern)
        coef_mat[output_phase - 1] = row
    return coef_mat

def plot_pattern(ax=None, input_len=100):
    if ax is None:  # create an axis if none was passed in
        _, ax = plt.subplots()
    coef_mat = make_matrix(input_len)
    ax.imshow(coef_mat)
# -
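The matrix construction above depends on helpers from `puzzle16`, which are not shown here. Assuming `gen_pattern`/`pattern_iter` implement the standard Advent of Code 2019 day-16 pattern (base `[0, 1, 0, -1]`, each element repeated `phase` times, cycled, with the first value dropped), a self-contained equivalent would look like:

```python
import numpy as np
from itertools import cycle, islice

BASE = [0, 1, 0, -1]  # assumed standard base pattern

def fft_pattern(phase):
    """Repeat each base element `phase` times, cycle forever, drop the first value."""
    repeated = [v for v in BASE for _ in range(phase)]
    return islice(cycle(repeated), 1, None)

def make_matrix_standalone(input_len):
    mat = np.empty((input_len, input_len), dtype=int)
    for phase in range(1, input_len + 1):
        mat[phase - 1] = list(islice(fft_pattern(phase), input_len))
    return mat

print(make_matrix_standalone(4))
# upper-triangular-ish structure: row k starts with k-1 zeros
```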
fig, ax = plt.subplots()
plot_pattern(ax, 2)
fig, ax = plt.subplots(ncols=3, figsize=(12, 4), sharex=True, sharey=True)
plot_pattern(ax[0], 2)
plot_pattern(ax[1], 4)
plot_pattern(ax[2], 8)
fig, ax = plt.subplots(ncols=3, figsize=(12, 4), sharex=True, sharey=True)
plot_pattern(ax[0], 10)
plot_pattern(ax[1], 20)
plot_pattern(ax[2], 30)
fig, ax = plt.subplots(ncols=3, figsize=(12, 4), sharex=True, sharey=True)
plot_pattern(ax[0], 100)
plot_pattern(ax[1], 200)
plot_pattern(ax[2], 300)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook for calculating the resultant length or MRL - Part 2 - Sham - bootstrapping procedure
# 7. Functions for computing histograms of ratios over sessions.
# 8. Compute sham (bootstrap-like) comparison.
#
# 9. Plot histograms with sham medians on top of the original data.
# 10. Plot sham vs. real condition regardless of visible or invisible - with CI from means of means of bootstrapped data.
# 11. Plot separately for visible and invisible sham / non-sham.
# 12. Compute sliding median and mean windows over time - plot differences with sham.
# 13. Compute the sham 1000 times to have a proper bootstrap.
# 14. Plot differences of the sham with the real data.
# 15. Plot it for individual sessions - one per session.
# 16. Troubleshooting - need to be wary of the sampling rate (50 or 100 Hz?),
# as well as division by zero and indexing problems.
#
# +
import math
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#import seaborn as sns
import matplotlib.lines as mlines
import matplotlib.patches as mpatches
from numpy import median
from scipy.stats import ranksums
real_root= 'C:/Users/Fabian/Desktop/Analysis/Multiple_trial_analysis/'
#C:/Users/Fabian/Desktop/Multiple_trial_analysis/Multiple_trial_analysis/'
root = real_root + 'Data/Raw/'
processed = real_root + 'Data/Processed/'
figures = real_root + 'Figures/'
vor = real_root + 'Data/Raw/'
# -
# ## 1. Import functions from previous notebooks
# Extracting trajectories before a beacon could be improved to cover the whole trajectory from the previous beacon up to when the next beacon is reached - this would give uneven lengths, which could be stored as a list of arrays, but numpy is not made for that.
from Data_analysis import *
# ## Import functions and load the whole data set as one - both rats and all beacons in the same list for testing.
# +
Day86_fs2 = pd.read_csv(root+'position 20200128-160013.txt',sep=" ", header=None)
Day86_fs1 = pd.read_csv(root+'position 20200128-151826.txt',sep=" ", header=None)
beacon_Day86_fs2 = pd.read_csv(root+'beacons 20200128-160013.txt',sep=" ", header=None)
beacon_Day86_fs1 = pd.read_csv(root+'beacons 20200128-151826.txt',sep=" ", header=None)
beacon_data = beacon_Day86_fs1
position_data = Day86_fs1
beacons = [beacon_Day86_fs1,beacon_Day87_fs1,beacon_Day88_fs1,beacon_Day89_fs1,beacon_Day90_fs1,beacon_Day91_fs1,beacon_Day92_fs1,beacon_Day93_fs1]
beacons2 = [beacon_Day86_fs2,beacon_Day87_fs2,beacon_Day88_fs2,beacon_Day89_fs2,beacon_Day90_fs2,beacon_Day91_fs2,beacon_Day92_fs2,beacon_Day93_fs2]
list_of_days = [Day86_fs1,Day87_fs1,Day88_fs1,Day89_fs1,Day90_fs1,Day91_fs1,Day92_fs1,Day93_fs1]
list_of_days2 = [Day86_fs2,Day87_fs2,Day88_fs2,Day89_fs2,Day90_fs2,Day91_fs2,Day92_fs2,Day93_fs2]
Day_list = list_of_days+list_of_days2
Beacon_list = beacons+beacons2
len(Day_list)== len(Beacon_list)
# -
# ### Convert to numpy
for index, (position, beaconz) in enumerate(zip(Day_list, Beacon_list)):
    beacon_d = beaconz.to_numpy()
    pos_data = position.to_numpy()
    beacon_d[:, 0] -= pos_data[0][0]
    pos_data[:, 0] -= pos_data[0][0]
    print(beacon_d.shape)
# ### For timing of the procedure, to get a progress bar
# %%capture
from tqdm import tqdm_notebook as tqdm
tqdm().pandas()
# # 7. Real Calculations
# +
beacon_d = beacon_data.to_numpy()
pos_data = position_data.to_numpy()
beacon_d[:, 0] -= pos_data[0][0]
pos_data[:, 0] -= pos_data[0][0]
def get_index_at_pos(beacon_data, position_data):
    """Get the indexes in the position data that correspond to the beacon data.

    Parameters
    --------------
    beacon_data : Data frame
        Beacon and time trigger
    position_data : Data frame
        All rat positions

    Returns
    --------------
    list : indexes of beacons in position data
    """
    indexes = []
    for beacon_t in beacon_data[:, 0]:
        # +10 to get trajectories 10 s after the beacon
        indexes.append(np.abs((beacon_t + 10) - position_data[:, 0]).argmin())
    return indexes
def get_positions_before(seconds_back, idxs, position_data):
    """Create arrays of positions before a beacon was reached.

    Parameters
    --------------
    seconds_back : int
        how many seconds back to collect positions
    idxs : list
        indexes where beacons are
    position_data : Data frame
        All rat positions

    Returns
    --------------
    array of lists of time-XYZ positions
    """
    beacon_periods = []
    for beacon_idx in idxs:
        beacon_t = position_data[beacon_idx][0]
        beacon_t_before = beacon_t - seconds_back
        before_idx = np.abs(beacon_t_before - position_data[:, 0]).argmin()
        beacon_periods.append(position_data[before_idx:beacon_idx])
    return beacon_periods
def ratios(list1, list2):
    """Compare resultant lengths of the trajectories (short and long) and divide.

    Parameters
    --------------
    list1, list2 : lst
        lengths of calculated trajectories (list1 short, list2 long)

    Returns
    --------------
    div :
        array of divided results serving as the ratio between the short
        and long trajectory at a given time
    """
    resultant = (np.asarray(list1), np.asarray(list2))
    div = []
    for i in range(len(resultant[1])):
        if resultant[1][i] == 0:  # in case the rat does not move in .1 s - loss of tracking etc.
            div.append(1)
        else:
            div.append(resultant[1][i] / resultant[0][i])
    return np.asarray(div)
def resultant_lenght_vis_invis_all(list_of_days, beacon, seconds_back):
    """Calculate the ratio over multiple sessions.

    First converts to numpy and aligns the time axes.

    Parameters
    --------------
    list_of_days : list
        which days to look into
    beacon : list of Data Frames
        lists of time and beacon triggers
    seconds_back : int
        how many seconds back to collect the trajectories

    Returns
    --------------
    div :
        array of divided results serving as the ratio between the short
        and long trajectory at a given time back, per beacon and session
    """
    div = []
    # use the arguments, not the globals, so other day/beacon lists can be passed in
    for index, (position, beaconz) in enumerate(zip(list_of_days, beacon)):
        beacon_d = beaconz.to_numpy()
        pos_data = position.to_numpy()
        beacon_d[:, 0] -= pos_data[0][0]
        pos_data[:, 0] -= pos_data[0][0]
        idxs = get_index_at_pos(beacon_d, pos_data)
        beacon_travel = get_positions_before(seconds_back, idxs, pos_data)
        straights = []
        longs = []
        for b in range(len(beacon_travel)):  # `b` avoids shadowing the `beacon` argument
            longs.append(calculate_Distance(beacon_travel[b][:, 1], beacon_travel[b][:, 3]))
            straights.append(math.sqrt((beacon_travel[b][0, 1] - beacon_travel[b][-1, 1]) ** 2
                                       + (beacon_travel[b][0, 3] - beacon_travel[b][-1, 3]) ** 2))
        div.append(np.asarray(ratios(longs, straights)))
    return np.asarray(div)

large_div = resultant_lenght_vis_invis_all(Day_list, Beacon_list, 4)
import numpy as np
import scipy.stats
def mean_confidence_interval(data, confidence=0.95):
    """Calculate a t-based confidence interval for the mean.

    Parameters
    --------------
    data : list
        list of means from dividing the trajectories
    confidence : float
        confidence level

    Returns
    --------------
    mean, and the lower and upper bound of the confidence interval
    """
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return m, m - h, m + h
def histogram_ratio_all_nums(list_of_days, beacon, seconds_back):
    """Compute statistics over multiple trials - runs the previous function first.

    TODO: integrate more parameters, such as invisibility, to be able to
    compare during learning.

    Parameters
    --------------
    list_of_days : list
        which days to look into
    beacon : list of Data Frames
        lists of time and beacon triggers
    seconds_back : int
        how many seconds back to collect the trajectories

    Returns
    --------------
    list :
        Means and medians of the calculated time back - for visible, invisible
        and all - as well as confidence intervals
    """
    large_div = resultant_lenght_vis_invis_all(list_of_days, beacon, seconds_back)
    large_mean = []
    large_median = []
    large_mean_vis = []
    large_median_vis = []
    large_mean_invis = []
    large_median_invis = []
    for div in range(len(large_div)):
        # within-group stats - not pooled
        large_median.append(np.median(large_div[div][:]))
        large_mean.append(large_div[div][:].mean())
        large_mean_vis.append(large_div[div][::2].mean())
        large_mean_invis.append(large_div[div][1::2].mean())
        large_median_vis.append(np.median(large_div[div][::2]))
        large_median_invis.append(np.median(large_div[div][1::2]))
    vis = [item for sublist in large_div for item in sublist[::2]]  # flatten the nested lists
    invis = [item for sublist in large_div for item in sublist[1::2]]
    #plt.hist(vis, alpha=.5, color='g', edgecolor='seagreen', label='visible')
    #plt.hist(invis, alpha=.5, color='lightgrey', edgecolor='silver', label='invisible')
    #plt.legend()
    CI, CILow, CIHigh = mean_confidence_interval(large_mean, 0.95)
    return [np.mean(np.asarray(large_mean_vis)), np.mean(np.asarray(large_mean_invis)),
            np.median(np.asarray(large_median_vis)), np.median(np.asarray(large_median_invis)),
            np.mean(np.asarray(large_mean)), np.median(np.asarray(large_median)),
            CI, CILow, CIHigh]
#ave_all = histogram_ratio_all_nums(Day_list, Beacon_list, 4)
np.seterr('warn')
run_ave = []
for i in tqdm(range(1, 100)):
    # run all of the above functions at 10 Hz resolution, up to ~10 seconds back
    run_ave.append(histogram_ratio_all_nums(Day_list, Beacon_list, i / 10))
# +
# Testing function
seconds_back = 2
idxs = get_index_at_pos(beacon_d, pos_data)
beacon_travel = get_positions_before(seconds_back, idxs ,pos_data)
#large_div = resultant_lenght_vis_invis_all(Day_list, Beacon_list,4)
ave_all = histogram_ratio_all_nums(Day_list, Beacon_list, 4)  # also runs the previous resultant-length function
ave_all
# -
# ### The values below describe the mean and median of the ratios across all sessions and across all beacons within sessions - each value is an averaged mean and median over the sessions at .1-s steps: [mean_vis, mean_invis, median_vis, median_invis, mean, median]
# +
secs_back=99
r1= np.arange(0.1,(secs_back/10)+.1,.1)
run_ave2 =np.array(run_ave).reshape(99,9,1)
mean=np.array(run_ave)[:,4].tolist()
median=np.array(run_ave)[:,5].tolist()
CILow=np.array(run_ave)[:,7].tolist()
CIHigh=np.array(run_ave)[:,8].tolist()
print ( len(run_ave[3]))
#print(np.array(run_ave)[:,8])
plt.plot(r1,mean,label='mean')
plt.plot(r1,median,label='median')
plt.fill_between(r1, CILow, CIHigh, color='b', alpha=.1)
#plt.fill_between(list(np.array(run_ave)[:,5]), list(np.array(run_ave)[:,7]), list(np.array(run_ave)[:,8]), color='b', alpha=.1)
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('resultant length ratio medians')
plt.title('running resultant length ratios forward')
plt.savefig('%sresultant_lenght_ratios_running_medians_no_sham_forward%s.png' % (figures, i), dpi=200)
# -
# # 8. Sham calculations
#
# 1. Create random numbers based on the length of the recording and the number of beacons.
# 2. Use those indexes to index into the data.
# 3. Generate the histograms and resultant lengths for that data.
#
#
#
import random as rn
# ### 1. Create random numbers - as many as there are beacons per session - in the range of the number of position samples.
# +
Day86_fs2 = pd.read_csv(root+'position 20200128-160013.txt',sep=" ", header=None)
Day86_fs1 = pd.read_csv(root+'position 20200128-151826.txt',sep=" ", header=None)
beacon_Day86_fs2 = pd.read_csv(root+'beacons 20200128-160013.txt',sep=" ", header=None)
beacon_Day86_fs1 = pd.read_csv(root+'beacons 20200128-151826.txt',sep=" ", header=None)
beacon_data = beacon_Day86_fs1
position_data = Day86_fs1
print(len(beacon_data))
print(len(position_data))
rn.randrange(0, len(position_data),len(beacon_data))
my_randoms = rn.sample(range(1, len(position_data)), len(beacon_data))
print(len(my_randoms))
print(max(my_randoms))
# -
# ### Perhaps we need to sort the random numbers...
# Get indexes of the random numbers - here `indexes` is used only for the number of beacons, not as a random index
# +
indexes = get_index_at_pos(beacon_d, pos_data)
def get_positions_before_sham(seconds_back, idxs, position_data):
    """Create SHAM arrays of positions before a beacon was reached.

    Picks the same number of beacons as in the session, but attaches
    random trajectories to them.

    Parameters
    --------------
    seconds_back : int
        how far back
    idxs : list
        indexes where beacons are (used only for their count)
    position_data : Data frame
        All rat positions

    Returns
    --------------
    array of lists of time-XYZ positions
    """
    beacon_periods = []
    randoms = rn.sample(range(1, len(position_data)), len(idxs))
    randoms.sort()
    for beacon_idx in randoms:
        beacon_t = position_data[beacon_idx][0]
        beacon_t_before = beacon_t - seconds_back
        before_idx = np.abs(beacon_t_before - position_data[:, 0]).argmin()
        beacon_periods.append(position_data[before_idx:beacon_idx])
    return beacon_periods
seconds_back = 4
l = get_positions_before_sham(seconds_back, indexes, pos_data)
k = get_positions_before(seconds_back, indexes, pos_data)
print(l[10].shape)
print(k[10].shape)
# -
# #### ↑↑↑↑↑ The random cues can have varying lengths due to sampling-rate variability ↑↑↑↑↑
# +
def resultant_lenght_vis_invis_all_sham(list_of_days, beacon, seconds_back):
    """Calculate the SHAM ratio over multiple sessions.

    First converts to numpy and aligns the time axes.

    Parameters
    --------------
    list_of_days : list
        which days to look into
    beacon : list of Data Frames
        lists of time and beacon triggers
    seconds_back : int
        how many seconds back to collect the trajectories

    Returns
    --------------
    div :
        array of divided results serving as the ratio between the short
        and long trajectory at a given time back, per beacon and session
    """
    div = []
    # use the arguments, not the globals, so other day/beacon lists can be passed in
    for index, (position, beaconz) in enumerate(zip(list_of_days, beacon)):
        beacon_d = beaconz.to_numpy()
        pos_data = position.to_numpy()
        beacon_d[:, 0] -= pos_data[0][0]
        pos_data[:, 0] -= pos_data[0][0]
        idxs = get_index_at_pos(beacon_d, pos_data)
        beacon_travel = get_positions_before_sham(seconds_back, idxs, pos_data)
        straights = []
        longs = []
        for b in range(len(beacon_travel)):  # `b` avoids shadowing the `beacon` argument
            longs.append(calculate_Distance(beacon_travel[b][:, 1], beacon_travel[b][:, 3]))
            straights.append(math.sqrt((beacon_travel[b][0, 1] - beacon_travel[b][-1, 1]) ** 2
                                       + (beacon_travel[b][0, 3] - beacon_travel[b][-1, 3]) ** 2))
        div.append(np.asarray(ratios(longs, straights)))
    return np.asarray(div)

large_div_sham = resultant_lenght_vis_invis_all_sham(Day_list, Beacon_list, .1)
#large_div_sham
# -
def histogram_ratio_all_sham(list_of_days, beacon, seconds_back):
    """Compute statistics over multiple SHAM trials - runs the previous function first.

    TODO: integrate more parameters, such as invisibility, to be able to
    compare during learning.

    Parameters
    --------------
    list_of_days : list
        which days to look into
    beacon : list of Data Frames
        lists of time and beacon triggers
    seconds_back : int
        how many seconds back to collect the trajectories

    Returns
    --------------
    list :
        Means and medians of the calculated time back - for visible and invisible
        and all - plotted directly
    """
    large_div_sham = resultant_lenght_vis_invis_all_sham(list_of_days, beacon, seconds_back)
    large_mean_vis = []
    large_median_vis = []
    large_mean_invis = []
    large_median_invis = []
    for div in range(len(large_div_sham)):
        # within-group stats - not pooled
        large_mean_vis.append(large_div_sham[div][::2].mean())
        large_mean_invis.append(large_div_sham[div][1::2].mean())
        large_median_vis.append(np.median(large_div_sham[div][::2]))
        large_median_invis.append(np.median(large_div_sham[div][1::2]))
    vis = [item for sublist in large_div_sham for item in sublist[::2]]  # flatten the nested lists
    invis = [item for sublist in large_div_sham for item in sublist[1::2]]
    plt.hist(vis, alpha=.5, color='g', edgecolor='seagreen', label='visible')
    plt.hist(invis, alpha=.5, color='lightgrey', edgecolor='silver', label='invisible')
    plt.axvline(np.mean(np.asarray(large_mean_vis)), color='g', linestyle='dashed', linewidth=1, label='mean_vis')
    plt.axvline(np.mean(np.asarray(large_mean_invis)), color='black', linestyle='dashed', linewidth=1, label='mean_invis')
    plt.axvline(np.median(np.asarray(large_median_vis)), color='g', linestyle='solid', linewidth=1, label='median_vis')
    plt.axvline(np.median(np.asarray(large_median_invis)), color='black', linestyle='solid', linewidth=1, label='median_invis')
    plt.xlabel("ratio short/long")
    plt.legend()
    print(seconds_back)
    plt.title('resultant length ratios of visible and invisible group, sham, %s sec' % seconds_back)
    plt.savefig('%sresultant_lenght_ratios_%s_visible_invisible_all_sham.png' % (figures, seconds_back), dpi=200)
    plt.show()

histogram_ratio_all_sham(Day_list, Beacon_list, 3)
# ### 9. Bootstrapping / permutation test...
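Before the session-specific machinery below, the core bootstrap idea can be stated in a few self-contained lines (function name and toy data are ours): resample with replacement, recompute the statistic, and take empirical quantiles as the confidence interval.

```python
import numpy as np

def bootstrap_ci(data, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # resample with replacement and recompute the mean each time
    boot_means = np.array([
        rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return data.mean(), lo, hi

m, lo, hi = bootstrap_ci([1.1, 0.9, 1.0, 1.2, 1.05, 0.95])
print(m, lo, hi)
```

The sham procedure below follows the same logic, except the "resampling" draws random beacon times instead of resampling ratio values.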
# +
def histogram_ratio_all_boot(list_of_days, beacon, seconds_back):
    """Compute statistics over multiple SHAM trials, so they can be bootstrapped.

    TODO: integrate more parameters, such as invisibility, to be able to
    compare during learning.

    Parameters
    --------------
    list_of_days : list
        which days to look into
    beacon : list of Data Frames
        lists of time and beacon triggers
    seconds_back : int
        how many seconds back to collect the trajectories

    Returns
    --------------
    list :
        Means and medians of the calculated time back - for visible, invisible
        and all
    """
    large_div_sham = resultant_lenght_vis_invis_all_sham(list_of_days, beacon, seconds_back)
    large_mean = []
    large_median = []
    large_mean_vis = []
    large_median_vis = []
    large_mean_invis = []
    large_median_invis = []
    for div in range(len(large_div_sham)):  # iterate over the sham data, not the global `large_div`
        # within-group stats - not pooled
        large_median.append(np.median(large_div_sham[div][:]))
        large_mean.append(large_div_sham[div][:].mean())
        large_mean_vis.append(large_div_sham[div][::2].mean())
        large_mean_invis.append(large_div_sham[div][1::2].mean())
        large_median_vis.append(np.median(large_div_sham[div][::2]))
        large_median_invis.append(np.median(large_div_sham[div][1::2]))
    vis = [item for sublist in large_div_sham for item in sublist[::2]]  # flatten the nested lists
    invis = [item for sublist in large_div_sham for item in sublist[1::2]]
    CI, CILow, CIHigh = mean_confidence_interval(large_mean, 0.95)
    return [np.mean(np.asarray(large_mean_vis)), np.mean(np.asarray(large_mean_invis)),
            np.median(np.asarray(large_median_vis)), np.median(np.asarray(large_median_invis)),
            np.mean(np.asarray(large_mean)), np.median(np.asarray(large_median)),
            CI, CILow, CIHigh]

histogram_ratio_all_boot(Day_list, Beacon_list, 3)
# -
# ## Bootstrap - calculate means over X resampled runs, for arbitrary time windows
# +
ave = []
for i in tqdm(range(1, 20)):
    # run all of the SHAM functions at 10 Hz resolution, X seconds back
    ave.append(histogram_ratio_all_boot(Day_list, Beacon_list, i / 10))
# -
# ## `strapped_means` calculates grand statistics over all generated bootstrapped data - i.e., over 1000 bootstrapped trials
# +
from scipy.stats import sem, t
# note: `from scipy import mean` was removed; it is deprecated and unused (np.mean is used instead)
confidence = 0.95

def strapped_means(ave):
    """Use the means of means to create the overall sham means."""
    grand_mean = []
    grand_median = []
    mean_vis_boot = []
    mean_invis_boot = []
    median_vis_boot = []
    median_invis_boot = []
    for i in range(len(ave)):
        grand_mean.append(ave[i][4])
        grand_median.append(ave[i][5])
        mean_vis_boot.append(ave[i][0])
        mean_invis_boot.append(ave[i][1])
        median_vis_boot.append(ave[i][2])
        median_invis_boot.append(ave[i][3])
    CI, CILow, CIHigh = mean_confidence_interval(grand_mean, 0.95)
    return [np.mean(mean_vis_boot), np.mean(mean_invis_boot),
            np.median(np.asarray(median_vis_boot)), np.median(median_invis_boot),
            np.mean(grand_mean), np.median(grand_median),
            CI, CILow, CIHigh]
# -
ave_all_boot= strapped_means(ave)
# +
ave_all_boot
# -
# ## Function to generate the boot repetitions...
#
# +
def get_boot_data(seconds_back,boot_reps):
"""run the actual repetitions. """
ave_grand=[]
for i in tqdm(range (boot_reps)):
ave_grand.append(histogram_ratio_all_boot (Day_list, Beacon_list , seconds_back ))
print(len(ave_grand))
ave_all_boot= strapped_means(ave_grand)
return ave_all_boot
get_boot_data(3,10)
# -
# ### statistics on ratios of the original correctly sampled data
# +
ave_all = histogram_ratio_all_nums (Day_list, Beacon_list , 3 )
# -
ave_all
# ## 10. Graph together with bootstrapped data - for a given time:
def histogram_ratio_with_sham (list_of_days,beacon,seconds_back,boot_reps):
""" rerun and graph """
large_div = resultant_lenght_vis_invis_all (list_of_days,beacon,seconds_back)
large_mean_vis=[]
large_median_vis=[]
large_mean_invis=[]
large_median_invis=[]
ave_all_boot = get_boot_data(seconds_back,boot_reps)
for div in range(len(large_div)):
#within group stats - not pooled
large_mean_vis.append(large_div[div][::2].mean())
large_mean_invis.append(large_div[div][1::2].mean())
large_median_vis.append(np.median(large_div[div][::2]))
large_median_invis.append(np.median(large_div[div][1::2]))
vis = [item for sublist in large_div for item in sublist[::2]] # flatten the nested lists
invis = [item for sublist in large_div for item in sublist[1::2]]
print(ranksums(vis, invis))
plt.hist(vis,alpha=.5,color='g', edgecolor='seagreen',label='visible')
plt.hist(invis,alpha=.5,color='lightgrey', edgecolor='silver',label='invisible')
plt.axvline((np.median(np.asarray(large_median_vis))-np.std(vis)), color='blue', linestyle='dashdot', linewidth=1,label='std_vis')
plt.axvline((np.median(np.asarray(large_median_invis))-np.std(invis)), color='orange', linestyle='dashdot', linewidth=1,label='std_invis')
plt.axvline(ave_all_boot[2], color='purple', linestyle='dashed', linewidth=1,label='sham_med_vis')
plt.axvline(ave_all_boot[3], color='pink', linestyle='dashed', linewidth=1,label='sham_med_invis')
#plt.axvline(np.mean(np.asarray(large_mean_vis)), color='g', linestyle='dashed', linewidth=1)
#plt.axvline(np.mean(np.asarray(large_mean_invis)), color='black', linestyle='dashed', linewidth=1)
plt.axvline(np.median(np.asarray(large_median_vis)), color='g', linestyle='solid', linewidth=1,label='med_vis')
plt.axvline(np.median(np.asarray(large_median_invis)), color='black', linestyle='solid', linewidth=1,label='med_invis')
plt.legend()
plt.xlabel("ratio short/long ")
print (seconds_back)
plt.title('resultant length ratios of visible and invisible Group_with_sham %s sec'% seconds_back)
plt.savefig('%sresultant_lenght_ratios_%s_visible_invisible_all_with_sham.png' %(figures,seconds_back), dpi = 200)
plt.show()
histogram_ratio_with_sham (Day_list, Beacon_list , 3,10 )
histogram_ratio_with_sham (Day_list, Beacon_list , 2,10 )
histogram_ratio_with_sham (Day_list, Beacon_list , 1,20 )
histogram_ratio_with_sham (Day_list, Beacon_list , 4,20 )
# ## Conclusion:
# Computing the ratio differences showed no significant differences between the resultant-length ratios of the visible and invisible beacon conditions. There was a slight preference at 2 and 3 seconds before the beacon, and the sham calculation showed that the sham ratios are on average much smaller than the ratios from the actual trials, likely significantly so.
# #### Note: need to always subtract the STD from the mean
#
# ## 11. Sliding median window.
# 1. calculate the median and mean in an array for every .1 sec step
# 2. calculate the same for the sham condition - 20 repetitions or so
# 3. plot with time points on the x axis and medians of ratios on the y axis - 4 lines: visible and invisible, for sham and normal
#
#
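# The running statistics in step 1 can be sketched generically; this helper is an illustration, not part of the original notebook, and it uses `numpy.lib.stride_tricks.sliding_window_view` (NumPy >= 1.20).

```python
import numpy as np

def sliding_stats(values, win=5):
    """Running mean and median over a sliding window (sketch).

    `values` is a 1-D array of ratios; returns two arrays of length
    len(values) - win + 1. The window size is an illustrative choice.
    """
    values = np.asarray(values, dtype=float)
    # Each row of `windows` is one window of `win` consecutive samples
    windows = np.lib.stride_tricks.sliding_window_view(values, win)
    return windows.mean(axis=1), np.median(windows, axis=1)
```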
# +
run_ave=[]
for i in tqdm(range(1,200,1)):
run_ave.append(histogram_ratio_all_nums (Day_list, Beacon_list ,i/10 ))
# -
# ## Calculate the following overnight for 1000 repetitions...
# +
run_ave_sham=[]
for i in tqdm(range(2,201,1)):
run_ave_sham.append(get_boot_data(i/10,1))
# + active=""
#
# -
long_calculation = run_ave_sham
np.asarray(long_calculation).shape
# ### [mean_vis,mean_invis, median_vis,median_invis,mean, median,CI,CILow, CIHigh]
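# Indexing these nine summary values by bare position (e.g. `[:secs_back,4]`) is error-prone; a small name-to-column mapping makes the slicing below easier to read. This mapping is an addition for illustration, not part of the original analysis.

```python
import numpy as np

# Column layout of each stats tuple produced above
STAT_COL = {
    "mean_vis": 0, "mean_invis": 1,
    "median_vis": 2, "median_invis": 3,
    "mean": 4, "median": 5,
    "CI": 6, "CILow": 7, "CIHigh": 8,
}

def stat_column(stats_array, name):
    """Select one named statistic across all time steps."""
    return np.asarray(stats_array)[:, STAT_COL[name]]
```

For example, `stat_column(np.array(run_ave), "mean")` would replace the magic index 4.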
# +
run_ave2 =np.array(run_ave).reshape(199,9,1)
run_ave_sham2 =np.array(run_ave_sham).reshape(199,9,1)
secs_back=110
r1= np.arange(0.1,(secs_back/10)+.1,.1)
mean=np.array(run_ave)[:secs_back,4].tolist()
median=np.array(run_ave)[:secs_back,5].tolist()
CILow=np.array(run_ave)[:secs_back,7].tolist()
CIHigh=np.array(run_ave)[:secs_back,8].tolist()
mean_sham=np.array(run_ave_sham)[:secs_back,4].tolist()
median_sham=np.array(run_ave_sham)[:secs_back,5].tolist()
CILow_sham=np.array(run_ave_sham)[:secs_back,7].tolist()
CIHigh_sham=np.array(run_ave_sham)[:secs_back,8].tolist()
plt.plot(r1,mean,label='mean')
#plt.plot(median,label='median')
plt.fill_between(r1, CILow, CIHigh, color='b', alpha=.1,label = 'CI')
plt.plot(r1,mean_sham,label='mean_sham')
#plt.plot(median_sham,label='median_sham')
plt.fill_between(r1, CILow_sham, CIHigh_sham, color='cyan', alpha=.1,label = 'CI_sham')
plt.xlabel('time(s)')
plt.ylabel('resultant length ratio medians')
plt.title('resultant length ratios running medians with sham')
plt.legend()
plt.savefig('%sresultant_lenght_ratios_running_means%s.png' %(figures,(secs_back+1)/10), dpi = 200)
# -
# ## Plotting visible - with sham
# +
run_ave2 =np.array(run_ave).reshape(199,9,1)
run_ave_sham2 =np.array(run_ave_sham).reshape(199,9,1)
secs_back=100
r1= np.arange(0.1,(secs_back/10)+.1,.1)
vis_invis_mean = "invisible"
if vis_invis_mean=='visible':
l1=0
l2=2
else:
l1=1
l2=3
mean=np.array(run_ave)[:secs_back,l1].tolist()
median=np.array(run_ave)[:secs_back,l2].tolist()
#CILow=np.array(run_ave)[:secs_back,7].tolist()
#CIHigh=np.array(run_ave)[:secs_back,8].tolist()
mean_sham=np.array(run_ave_sham)[:secs_back,l1].tolist()
median_sham=np.array(run_ave_sham)[:secs_back,l2].tolist()
#CILow_sham=np.array(run_ave_sham)[:secs_back,7].tolist()
#CIHigh_sham=np.array(run_ave_sham)[:secs_back,8].tolist()
plt.plot(r1,mean,label='mean_%s'%vis_invis_mean)
#plt.plot(median,label='median')
#plt.fill_between(r1, CILow, CIHigh, color='b', alpha=.1,label = 'CI')
plt.plot(r1,mean_sham,label='mean_%s_sham'%vis_invis_mean)
#plt.plot(median_sham,label='median_sham')
#plt.fill_between(r1, CILow_sham, CIHigh_sham, color='cyan', alpha=.1,label = 'CI_sham')
plt.xlabel('time(s)')
plt.ylabel('resultant length ratio medians')
plt.title('resultant length ratios running stats (%s) with sham'%vis_invis_mean)
plt.legend()
plt.savefig('%sresultant_lenght_ratios_running_%s_means%s.png' %(figures,vis_invis_mean,(secs_back+1)/10), dpi = 200)
# -
# ## Individual animals - 16 plots in one figure -8 per animal
# ## one session plot:
#
run_ave_sham=[]
for i in tqdm(range(1,200,1)):
run_ave_sham.append(get_boot_data(i/10,100 )) #(i/10,1000 ) for more precision
# ### Rerun some of the functions above, with plotting included, for the sliding window
# +
mega=[]
for index,(position,beaconz) in enumerate(zip (Day_list,Beacon_list)):
beacon_d = beaconz.to_numpy()
pos_data = position.to_numpy()
beacon_d[:, 0] -= pos_data[0][0]
pos_data[:, 0] -= pos_data[0][0]
idxs = get_index_at_pos(beacon_d, pos_data)
session_all= []
for i in tqdm(range(2,200,1)):
div = []
beacon_travel = get_positions_before(i/10, idxs ,pos_data)
straights=[]
longs=[]
for beacon in range(len(beacon_travel)):
longs.append(calculate_Distance(beacon_travel[beacon][:,1],beacon_travel[beacon][:,3]))
straights.append(math.sqrt((beacon_travel[beacon][0,1] - beacon_travel[beacon][-1,1]) ** 2 + (beacon_travel[beacon][0,3] - beacon_travel[beacon][-1,3]) ** 2))
div.append(np.asarray((ratios(longs,straights))))
large_div = div
large_mean=[]
large_median=[]
large_mean_vis=[]
large_median_vis=[]
large_mean_invis=[]
large_median_invis=[]
CI=[]
for div in range(len(large_div)):
#within group stats - not pooled
large_median.append(np.median(large_div[div][:]))
large_mean.append(large_div[div][:].mean())
large_mean_vis.append(large_div[div][::2].mean())
large_mean_invis.append(large_div[div][1::2].mean())
large_median_vis.append(np.median(large_div[div][::2]))
large_median_invis.append(np.median(large_div[div][1::2]))
vis = [item for sublist in large_div for item in sublist[::2]] # flatten the nested lists
invis = [item for sublist in large_div for item in sublist[1::2]]
#plt.hist(vis,alpha=.5,color='g', edgecolor='seagreen',label='visible')
#plt.hist(invis,alpha=.5,color='lightgrey', edgecolor='silver',label='invisible')
#plt.legend()
CI,CILow,CIHigh = mean_confidence_interval(large_mean,0.95)
#print (seconds_back)
session_all.append((np.mean(np.asarray(large_mean_vis)),np.mean(np.asarray(large_mean_invis)),
np.median(np.asarray(large_median_vis)),np.median(np.asarray(large_median_invis)),
np.mean(np.asarray(large_mean)), np.median(np.asarray(large_median)),
CI,CILow,CIHigh ))
mega.append(np.asarray(session_all))
mega
# -
np.array(mega).shape
# (16, 168, 9) is the expected shape for the array: 16 sessions, each with ~20 seconds of time steps and 9 summary values (mean etc.).
# # Sham_mega
# +
boot_reps=100
ave_grand=[]
for i in tqdm(range (boot_reps)):
mega_sham=[]
for index,(position,beaconz) in enumerate(zip (Day_list,Beacon_list)):
beacon_d = beaconz.to_numpy()
pos_data = position.to_numpy()
beacon_d[:, 0] -= pos_data[0][0]
pos_data[:, 0] -= pos_data[0][0]
idxs = get_index_at_pos(beacon_d, pos_data)
session_all= []
for i in tqdm(range(2,200,1)):
div = []
beacon_travel = get_positions_before_sham(i/10, idxs ,pos_data)
straights=[]
longs=[]
for beacon in range(len(beacon_travel)):
longs.append(calculate_Distance(beacon_travel[beacon][:,1],beacon_travel[beacon][:,3]))
straights.append(math.sqrt((beacon_travel[beacon][0,1] - beacon_travel[beacon][-1,1]) ** 2 + (beacon_travel[beacon][0,3] - beacon_travel[beacon][-1,3]) ** 2))
div.append(np.asarray((ratios(longs,straights))))
large_div = div
large_mean=[]
large_median=[]
large_mean_vis=[]
large_median_vis=[]
large_mean_invis=[]
large_median_invis=[]
for div in range(len(large_div)):
#within group stats - not pooled
large_median.append(np.median(large_div[div][:]))
large_mean.append(large_div[div][:].mean())
large_mean_vis.append(large_div[div][::2].mean())
large_mean_invis.append(large_div[div][1::2].mean())
large_median_vis.append(np.median(large_div[div][::2]))
large_median_invis.append(np.median(large_div[div][1::2]))
vis = [item for sublist in large_div for item in sublist[::2]] # flatten the nested lists
invis = [item for sublist in large_div for item in sublist[1::2]]
#plt.hist(vis,alpha=.5,color='g', edgecolor='seagreen',label='visible')
#plt.hist(invis,alpha=.5,color='lightgrey', edgecolor='silver',label='invisible')
#plt.legend()
#print (seconds_back)
session_all.append((np.mean(np.asarray(large_mean_vis)),np.mean(np.asarray(large_mean_invis)),
np.median(np.asarray(large_median_vis)),np.median(np.asarray(large_median_invis)),
np.mean(np.asarray(large_mean)), np.median(np.asarray(large_median)),
))
mega_sham.append(np.asarray(session_all))
ave_grand.append(mega_sham)
#need to compute means across all times... - strapped means does it only across one time point - hard for graphing
ave_all_boot= np.mean(np.asarray(ave_grand),axis = 0 )
# -
np.asarray(ave_grand).shape
np.save('%s100_bootstrapped_20 secs_back'%processed, np.asarray(ave_grand))
ave_grand=np.load('%s100_bootstrapped_20 secs_back.npy'%processed) # np.save appends the .npy extension automatically
# ## Now average over the bootstrapped trials - simply using np.mean along the repetition axis.
mega_sham = np.mean(np.asarray(ave_grand),axis = 0 )
mega_sham.shape
# ## Z-score operations - to calculate actual significance on the sliding window - still does not work!
from scipy.stats import zscore
z_score = zscore(mega_sham, axis=0)
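# Note that z-scoring `mega_sham` against itself cannot yield significance for the real data - which may be why this "still does not work". A conventional bootstrap z-score compares the observed statistic against the mean and spread of the sham distribution. The sketch below is a suggestion, not the notebook's method; the array names in the usage comment follow this notebook.

```python
import numpy as np

def bootstrap_z(observed, sham_reps):
    """Z-score observed values against a bootstrap (sham) distribution.

    observed  : array of shape (...,)          - the real statistics
    sham_reps : array of shape (n_reps, ...)   - one sham statistic per repetition
    """
    sham_reps = np.asarray(sham_reps, dtype=float)
    mu = sham_reps.mean(axis=0)          # sham mean per time point
    sd = sham_reps.std(axis=0, ddof=1)   # sham spread per time point
    return (np.asarray(observed, dtype=float) - mu) / sd

# e.g. z = bootstrap_z(np.asarray(mega), np.asarray(ave_grand))
```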
# +
fig,ax = plt.subplots(4,4,figsize=(20,20),dpi=200)
num=0
h=0
#print(mega.shape)
secs_back=198
r1= np.arange(0.1,(secs_back/10)+.1,.1)
vis_invis_mean = "all"
if vis_invis_mean=='visible':
l1=0
l2=2
if vis_invis_mean =='all':
l1=4
l2=5
if vis_invis_mean == "invisible":
l1=1
l2=3
for session in z_score:
ax[h][num].bar(r1,session[:secs_back,l1],label='Z_score_sham_%s'%vis_invis_mean)
#ax[h][num].plot(r1,session[:secs_back,l2],label='median_%s'%vis_invis_mean)
#ax[h][num].plot(r1,boot[:secs_back,l1],label='mean_%s'%vis_invis_mean)
#ax[h][num].plot(r1,boot[:secs_back,l2],label='median_%s'%vis_invis_mean)
ax[h][num].set_ylabel('ratios')
ax[h][num].set_xlabel('time(s)')
ax[h][num].legend()
l=0
s=0
h+=1
if h % 4==0:
num += 1
h=0
#ax.set_ylabel('ratios')
plt.savefig('%s16_ratios__%s_sec._before_beacons_z_score_%s_.png' %(figures,secs_back/10,vis_invis_mean), dpi = 100)
plt.show()
# +
fig,ax = plt.subplots(4,4,figsize=(20,20),dpi=200)
num=0
h=0
#print(mega.shape)
secs_back=100
r1= np.arange(0.1,(secs_back/10)+.1,.1)
vis_invis_mean = "visible"
if vis_invis_mean=='visible':
l1=0
l2=2
if vis_invis_mean =='all':
l1=4
l2=5
if vis_invis_mean == "invisible":
l1=1
l2=3
for session,boot,z in zip(mega,mega_sham,z_score):
#print (session)
#ax[h][num].bar(r1,z[:secs_back,l1],label='Z_score_sham_%s'%vis_invis_mean,alpha=.1)
ax[h][num].plot(r1,session[:secs_back,3],label='median_%s'%('invisible'))
ax[h][num].plot(r1,session[:secs_back,l2],label='median_%s'%vis_invis_mean,color='cyan')
#ax[h][num].plot(r1,boot[:secs_back,l1],label='mean_%s'%vis_invis_mean)
ax[h][num].plot(r1,boot[:secs_back,l2],label='median_sham',color='gold')
ax[h][num].set_ylabel('%s %s ratios'%(h,num))
ax[h][num].set_xlabel('time(s)')
ax[h][num].legend()
l=0
s=0
h+=1
if h % 4==0:
num += 1
h=0
#ax.set_ylabel('ratios')
plt.savefig('%s16_ratios_sec._before_beacons_medians%s_.png' %(figures,secs_back/10), dpi = 100)
plt.show()
# -
# ## Plotting invisible - with sham
# #### DEBUGGING - Problems
# 1. Trajectory check - maybe taking 5 seconds instead of 3; the Python acquisition program may run at a frame rate of 50 Hz, not always at 100
# 2. for some reason the indexing breaks at 2.2 due to the wrong shape (3,221), where it somehow rounds and takes an extra index - tried to fix below, but it still does not work - kind of fixed manually
# 3. getting NaN on the bootstrap data - due to division by zero? - may need to normalize the time values, since numpy division of very small numbers can fail
#
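# For the NaN problem in point 3, guarding the ratio computation against zero denominators is one option. This is a sketch, not the notebook's `ratios` helper, and it assumes the ratio is straight/long as the axis label "ratio short/long" suggests.

```python
import numpy as np

def safe_ratios(longs, straights):
    """Elementwise straight/long ratio with explicit NaN for bad denominators.

    Zero (or near-zero) path lengths yield np.nan instead of inf or runtime
    warnings, so callers can aggregate with np.nanmean / np.nanmedian.
    """
    longs = np.asarray(longs, dtype=float)
    straights = np.asarray(straights, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = np.where(np.abs(longs) > 1e-12, straights / longs, np.nan)
    return out
```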
| Code/.ipynb_checkpoints/20200708_FS_Beacon_trajectories_Bootstrapped_individual -checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Import our dependencies
import pandas as pd
from bs4 import BeautifulSoup
import pymongo
from splinter import Browser
# +
# Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text.
# Assign the text to variables that you can reference later.
mars_data = {}
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url1 = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
browser.visit(url1)
html1 = browser.html
soup1 = BeautifulSoup(html1, 'html.parser')
# +
# Collect latest news title and save it to a variable
results = soup1.find_all('div', class_="content_title")
# -
latest = results[0]
mars_data["latest_title"] = latest.a.text.strip()
# Collect latest news paragraph and save it to a variable
results2 = soup1.find_all('div', class_="rollover_description_inner")
# +
paragraphs = []
for row in results2:
paragraphs.append(row.text)
mars_data["latest_p"] = paragraphs[0].strip()
# +
# JPL Mars Space Images
url2 = "https://www.jpl.nasa.gov/spaceimages/?search=&category=featured#submit"
browser.visit(url2)
# +
featured_url = []
html2 = browser.html
soup2 = BeautifulSoup(html2, 'html.parser')
fancybox = soup2.find_all('a',class_="fancybox")
for fancy in fancybox:
featured_url.append(fancy['data-fancybox-href'])
href = featured_url[1]
# -
mars_data["featured_image_url"] = 'https://www.jpl.nasa.gov' + href
mars_data
# +
# Scrape 'Mars Weather' Twitter Account - Weather in latest tweet
url3 = 'https://twitter.com/marswxreport?lang=en'
browser.visit(url3)
html3 = browser.html
soup3 = BeautifulSoup(html3, 'html.parser')
# -
results3 = soup3.find_all('p', class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")
# +
weather = []
for row in results3:
weather.append(row.text)
mars_data["mars_weather"] = weather[0].replace('\n',', ')
# +
# Scrape Mars Facts website for table to pandas dataframe - convert pandas df to html
url4 = 'https://space-facts.com/mars/'
tables = pd.read_html(url4)
mars_table = tables[0]
mars_table2 = mars_table.rename(columns={0:"Description",1:"Value"}).set_index("Description")
# +
mars_html_table = mars_table2.to_html().replace("\n", "")
mars_data["mars_facts"] = mars_html_table
# +
# Mars Hemispheres
#'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
# +
url5 = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced'
browser.visit(url5)
html3 = browser.html
soup4 = BeautifulSoup(html3, 'html.parser')
# -
first_title = soup4.find_all('h2')[0].text
cerb_img = soup4.find_all('div',class_="wide-image-wrapper")
for images in cerb_img:
link = images.find('a')
first_url = link['href']
# +
url6 = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced'
browser.visit(url6)
html4 = browser.html
soup5 = BeautifulSoup(html4, 'html.parser')
# -
second_title = soup5.find_all('h2')[0].text
schi_img = soup5.find_all('div',class_="wide-image-wrapper")
for images in schi_img:
link = images.find('a')
second_url = link['href']
# +
url7 = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced'
browser.visit(url7)
html5 = browser.html
soup6 = BeautifulSoup(html5, 'html.parser')
# -
third_title = soup6.find_all('h2')[0].text
syr_img = soup6.find_all('div',class_="wide-image-wrapper")
for images in syr_img:
link = images.find('a')
third_url = link['href']
# +
url8 = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced'
browser.visit(url8)
html6 = browser.html
soup7 = BeautifulSoup(html6, 'html.parser')
# -
fourth_title = soup7.find_all('h2')[0].text
vall_img = soup7.find_all('div',class_="wide-image-wrapper")
for images in vall_img:
link = images.find('a')
fourth_url = link['href']
# +
hemisphere_image_urls = [
{'title': first_title, 'img_url': first_url},
{'title': second_title, 'img_url': second_url},
{'title': third_title, 'img_url': third_url},
{'title': fourth_title, 'img_url': fourth_url}
]
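# The list above pairs four separately scraped title/URL variables by hand; the same structure can be assembled from parallel lists. `build_hemispheres` is a hypothetical helper added for illustration, not part of the original scrape.

```python
def build_hemispheres(titles, urls):
    """Pair scraped hemisphere titles with their image URLs (sketch)."""
    return [{"title": t, "img_url": u} for t, u in zip(titles, urls)]

# e.g.:
# hemisphere_image_urls = build_hemispheres(
#     [first_title, second_title, third_title, fourth_title],
#     [first_url, second_url, third_url, fourth_url],
# )
```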
mars_data['hemisphere_image_urls'] = hemisphere_image_urls
mars_data
# -
mars_data["latest_p"]
mars_data['hemisphere_image_urls'][0]['title']
mars_data['hemisphere_image_urls'][0]['img_url']
mars_data['mars_facts']
mars_data['featured_image_url']
| Mission_to_Mars/mission_to_mars-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Processing data with pandas
#
# ```{attention}
# Finnish university students are encouraged to use the CSC Notebooks platform.<br/>
# <a href="https://notebooks.csc.fi/#/blueprint/1b4c5cbce4ab4acb8976e93a1f4de3dc"><img alt="CSC badge" src="https://img.shields.io/badge/launch-CSC%20notebook-blue.svg" style="vertical-align:text-bottom"></a>
#
# Others can follow the lesson and fill in their student notebooks using Binder.<br/>
# <a href="https://mybinder.org/v2/gh/geo-python/notebooks/master?urlpath=lab/tree/L5/processing-data-with-pandas.ipynb"><img alt="Binder badge" src="https://img.shields.io/badge/launch-binder-red.svg" style="vertical-align:text-bottom"></a>
# ```
#
# During the first part of this lesson you learned the basics of pandas data structures (*Series* and *DataFrame*) and got familiar with basic methods for loading and exploring data.
# Here, we will continue with basic data manipulation and analysis methods such as calculations and selections.
#
# We are now working in a new notebook file and we need to import pandas again.
import pandas as pd
# Let's work with the same input data `'Kumpula-June-2016-w-metadata.txt'` and load it using the `pd.read_csv()` method. Remember, that the first 8 lines contain metadata so we can skip those. This time, let's store the filepath into a separate variable in order to make the code more readable and easier to change afterwards:
# + jupyter={"outputs_hidden": false}
# Define file path:
fp = "Kumpula-June-2016-w-metadata.txt"
# Read in the data from the file (starting at row 9):
data = pd.read_csv(fp, skiprows=8)
# -
# Remember to always check the data after reading it in:
data.head()
# ````{admonition} Filepaths
# Note, that our input file `'Kumpula-June-2016-w-metadata.txt'` is located **in the same folder** as the notebook we are running. Furthermore, the same folder is the working directory for our Python session (you can check this by running the `%pwd` magic command).
# For these two reasons, we are able to pass only the filename to `.read_csv()` function and pandas is able to find the file and read it in. In fact, we are using a **relative filepath** when reading in the file.
#
# The **absolute filepath** to the input data file in the CSC cloud computing environment is `/home/jovyan/work/notebooks/L5/Kumpula-June-2016-w-metadata.txt`, and we could also use this as input when reading in the file. When working with absolute filepaths, it's good practice to pass the file paths as a [raw string](https://docs.python.org/3/reference/lexical_analysis.html#literals) using the prefix `r` in order to avoid problems with escape characters such as `"\n"`.
#
# ```
# # Define file path as a raw string:
# fp = r'/home/jovyan/work/notebooks/L5/Kumpula-June-2016-w-metadata.txt'
#
# # Read in the data from the file (starting at row 9):
# data = pd.read_csv(fp, skiprows=8)
# ```
# ````
# ## Basic calculations
#
# One of the most common things to do in pandas is to create new columns based on calculations between different variables (columns).
#
# We can create a new column `DIFF` in our DataFrame by specifying the name of the column and giving it some default value (in this case the decimal number `0.0`).
# + jupyter={"outputs_hidden": false}
# Define a new column "DIFF"
data["DIFF"] = 0.0
# Check what the dataframe looks like:
data
# -
# Let's check the datatype of our new column:
# + jupyter={"outputs_hidden": false}
data["DIFF"].dtypes
# -
# Okay, so we see that pandas created a new column and automatically recognized that the data type is float, as we passed the value 0.0 to it.
#
# Let's update the column `DIFF` by calculating the difference between `MAX` and `MIN` columns to get an idea how much the temperatures have been varying during different days:
# + jupyter={"outputs_hidden": false}
# Calculate max min difference
data["DIFF"] = data["MAX"] - data["MIN"]
# Check the result
data.head()
# -
# The calculations were stored into the `DIFF` column as planned.
#
# You can also create new columns on-the-fly at the same time when doing the calculation (the column does not have to exist before). Furthermore, it is possible to use any kind of math
# algebra (e.g. subtraction, addition, multiplication, division, exponentiation, etc.) when creating new columns.
#
# We can for example convert the Fahrenheit temperatures in the `TEMP` column into Celsius using the formula that we have seen already many times. Let's do that and store it in a new column called `TEMP_CELSIUS`.
# + jupyter={"outputs_hidden": false}
# Create a new column and convert temp fahrenheit to celsius:
data["TEMP_CELSIUS"] = (data["TEMP"] - 32) / (9 / 5)
# Check output
data.head()
# -
# #### Check your understanding
#
# Calculate the temperatures in Kelvins using the Celsius values **and store the result in a new column** called `TEMP_KELVIN` in our dataframe.
#
# 0 Kelvins is -273.15 degrees Celsius as we learned during [Lesson 4](https://geo-python-site.readthedocs.io/en/latest/notebooks/L4/functions.html#let-s-make-another-function).
# + tags=["hide-cell"]
# Solution
data["TEMP_KELVIN"] = data["TEMP_CELSIUS"] + 273.15
data.head()
# -
# ## Selecting rows and columns
#
# We often want to select only specific rows from a DataFrame for further analysis. There are multiple ways of selecting subsets of a pandas DataFrame. In this section we will go through the most useful tricks for selecting specific rows, columns and individual values.
#
# ### Selecting several rows
#
# One common way of selecting only specific rows from your DataFrame is done via **index slicing** to extract part of the DataFrame. Slicing in pandas can be done in a similar manner as with normal Python lists, i.e. you specify the index range you want to select inside the square brackets: ``dataframe[start_index:stop_index]``.
#
# Let's select the first five rows and assign them to a variable called `selection`:
# + jupyter={"outputs_hidden": false}
# Select first five rows of dataframe using row index values
selection = data[0:5]
selection
# -
# ```{note}
# Here we have selected the first five rows (index 0-4) using the integer index.
# ```
# ### Selecting several rows and columns
#
# It is also possible to control which columns are chosen when selecting a subset of rows. In this case we will use [pandas.DataFrame.loc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) which selects data based on axis labels (row labels and column labels).
#
# Let's select temperature values (column `TEMP`) from rows 0-5:
# + jupyter={"outputs_hidden": false}
# Select temp column values on rows 0-5
selection = data.loc[0:5, "TEMP"]
selection
# -
# ```{note}
# In this case, we get six rows of data (index 0-5)! We are now doing the selection based on axis labels instead of the integer index.
# ```
# It is also possible to select multiple columns when using `loc`. Here, we select the `TEMP` and `TEMP_CELSIUS` columns from a set of rows by passing them inside a list (`.loc[start_index:stop_index, list_of_columns]`):
# + jupyter={"outputs_hidden": false}
# Select columns temp and temp_celsius on rows 0-5
selection = data.loc[0:5, ["TEMP", "TEMP_CELSIUS"]]
selection
# -
# #### Check your understanding
#
# Find the mean temperatures (in Celsius) for the last seven days of June. Do the selection using the row index values.
# + tags=["hide-cell"]
# Here is the solution
data.loc[23:29, "TEMP_CELSIUS"].mean()
# -
# ### Selecting a single row
#
# You can also select an individual row from a specific position using the `.loc[]` indexing. Here we select all the data values using index 4 (the 5th row):
# + jupyter={"outputs_hidden": false}
# Select one row using index
row = data.loc[4]
row
# -
# ``.loc[]`` indexing returns the values from that position as a ``pd.Series`` where the indices are actually the column names of those variables. Hence, you can access the value of an individual column by referring to its index using the following format:
#
# + jupyter={"outputs_hidden": false}
# Print one attribute from the selected row
row["TEMP"]
# -
# ### Selecting a single value based on row and column
#
# Sometimes it is enough to access a single value in a DataFrame. In this case, we can use [DataFrame.at](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.at.html#pandas-dataframe-at) instead of `Data.Frame.loc`.
#
# Let's select the temperature (column `TEMP`) on the first row (index `0`) of our DataFrame.
data.at[0, "TEMP"]
# ### EXTRA: Selections by integer position
#
# ```{admonition} .iloc
# `.loc` and `.at` are based on the *axis labels* - the names of columns and rows. Axis labels can be also something else than "traditional" index values. For example, datetime is commonly used as the row index.
# `.iloc` is another indexing operator which is based on *integer value* indices. Using `.iloc`, it is possible to refer also to the columns based on their index value. For example, `data.iloc[0,0]` would return `20160601` in our example data frame.
#
# See the pandas documentation for more information about [indexing and selecting data](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-and-selecting-data).
# ```
#
# For example, we could select `TEMP` and the `TEMP_CELSIUS` columns from a set of rows based on their index.
data.iloc[0:5, 0:2]
# To access the value on the first row and second column (`TEMP`), the syntax for `iloc` would be:
#
data.iloc[0, 1]
# We can also access individual rows using `iloc`. Let's check out the last row of data:
data.iloc[-1]
# ## Filtering and updating data
#
# One really useful feature in pandas is the ability to easily filter and select rows based on a conditional statement.
# The following example shows how to select rows when the Celsius temperature has been higher than 15 degrees into variable `warm_temps` (warm temperatures). pandas checks if the condition is `True` or `False` for each row, and returns those rows where the condition is `True`:
# Check the condition
data["TEMP_CELSIUS"] > 15
# + jupyter={"outputs_hidden": false}
# Select rows with temp celsius higher than 15 degrees
warm_temps = data.loc[data["TEMP_CELSIUS"] > 15]
warm_temps
# -
# It is also possible to combine multiple criteria at the same time. Here, we select temperatures above 15 degrees that were recorded on the second half of June in 2016 (i.e. `YEARMODA >= 20160615`).
# Combining multiple criteria can be done with the `&` operator (AND) or the `|` operator (OR). Notice, that it is often useful to separate the different clauses inside the parentheses `()`.
# + jupyter={"outputs_hidden": false}
# Select rows with temp celsius higher than 15 degrees from late June 2016
warm_temps = data.loc[(data["TEMP_CELSIUS"] > 15) & (data["YEARMODA"] >= 20160615)]
warm_temps
# -
# Now we have a subset of our DataFrame with only rows where the `TEMP_CELSIUS` is above 15 and the dates in `YEARMODA` column start from 15th of June.
#
# Notice, that the index values (numbers on the left) are still showing the positions from the original DataFrame. It is possible to **reset** the index using `reset_index()` function that
# might be useful in some cases to be able to slice the data in a similar manner as above. By default the `reset_index()` would make a new column called `index` to keep track of the previous
# index which might be useful in some cases but not here, so we can omit that by passing parameter `drop=True`.
# + jupyter={"outputs_hidden": false}
# Reset index
warm_temps = warm_temps.reset_index(drop=True)
warm_temps
# -
# As can be seen, the index values now go from 0 to 12.
# #### Check your understanding
#
# Find the mean temperatures (in Celsius) for the last seven days of June again. This time you should select the rows based on a condition for the `YEARMODA` column!
# + tags=["hide-cell"]
# Here's the solution
data["TEMP_CELSIUS"].loc[data["YEARMODA"] >= 20160624].mean()
# -
# ```{admonition} Deep copy
# In this lesson, we have stored subsets of a DataFrame as a new variable. In some cases, we are still referring to the original data and any modifications made to the new variable might influence the original DataFrame.
#
# If you want to be extra careful to not modify the original DataFrame, then you should take a proper copy of the data before proceeding using the `.copy()` method. You can read more about indexing, selecting data and deep and shallow copies in [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html) and in [this excellent blog post](https://medium.com/dunder-data/selecting-subsets-of-data-in-pandas-part-4-c4216f84d388).
# ```
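# A minimal illustration of the difference, using a tiny synthetic dataframe rather than the weather file:

```python
import pandas as pd

df = pd.DataFrame({"TEMP": [65.5, 63.6], "MAX": [73.6, 71.0]})

view_like = df[0:1]           # may still reference the original data
independent = df[0:1].copy()  # a proper, independent copy

# Modifying the copy leaves the original DataFrame untouched:
independent.loc[0, "TEMP"] = -9999.0
print(df.loc[0, "TEMP"])  # unchanged: 65.5
```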
# ## Dealing with missing data
#
# As you may have noticed by now, we have several missing values for the temperature minimum, maximum, and difference columns (`MIN`, `MAX`, and `DIFF`). These missing values are indicated as `NaN` (not a number). Having missing data in your datafile is a really common situation and typically you want to deal with it somehow. Common procedures to deal with `NaN` values are to either **remove** them from the DataFrame or **fill** them with some value. In pandas both of these options are really easy to do.
#
# Let's first see how we can remove the NoData values (i.e. clean the data) using the [.dropna()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html) function. You can pass a list of the column(s) in which the `NaN` values should be searched for using the `subset` parameter.
# + jupyter={"outputs_hidden": false}
# Drop no data values based on the MIN column
warm_temps_clean = warm_temps.dropna(subset=["MIN"])
warm_temps_clean
# -
# As you can see by looking at the table above (and the change in index values), we now have a DataFrame without the NoData values.
#
# ````{note}
# Note that here we stored the cleaned data in a new variable, `warm_temps_clean`, instead of overwriting `warm_temps`. The `.dropna()` function, like many other pandas functions, can also be applied "inplace", which means that the function updates the DataFrame object directly and returns `None`:
#
# ```python
# warm_temps.dropna(subset=['MIN'], inplace=True)
# ```
# ````
#
# Another option is to fill the NoData values with some value using the `fillna()` function. Here we fill the missing values with the value -9999. Note that we are not giving the `subset` parameter this time.
# + jupyter={"outputs_hidden": false}
# Fill na values
warm_temps.fillna(-9999)
# -
# As a result we now have a DataFrame where NoData values are filled with the value -9999.
# ```{warning}
# In many cases filling the data with a specific value is dangerous because you end up modifying the actual data, which might affect the results of your analysis. For example, in the case above we would have dramatically changed the temperature difference columns because the value -9999 is not an actual temperature difference! Hence, use caution when filling missing values.
#
# You might have to fill in no data values for the purposes of saving the data to file in a specific format. For example, some GIS software does not accept missing values. Always pay attention to potential no data values when reading in data files and doing further analysis!
# ```
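If you do fill values, it is often safer to target specific columns only. `fillna()` also accepts a dictionary mapping column names to fill values, so the other columns keep their `NaN`s (a sketch with a toy DataFrame, not our weather data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"MIN": [10.0, np.nan], "DIFF": [5.0, np.nan]})

# Fill only the MIN column; DIFF keeps its NaN
filled = df.fillna({"MIN": -9999})
print(filled)
```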
# ## Data type conversions
# There are occasions where you'll need to convert data stored within a Series to another data type, for example, from floating point to integer.
# Remember, that we already did data type conversions using the [built-in Python functions](https://docs.python.org/3/library/functions.html#built-in-functions) such as `int()` or `str()`.
# For values in pandas DataFrames and Series, we can use the `astype()` method.
# ```{admonition} Truncating versus rounding up
# **Be careful with type conversions from floating point values to integers.** The conversion simply drops the stuff to the right of the decimal point, so all values are rounded down to the nearest whole number. For example, 99.99 will be truncated to 99 as an integer, when it should be rounded up to 100.
#
# Chaining the round and type conversion functions solves this issue as the `.round(0).astype(int)` command first rounds the values with zero decimals and then converts those values into integers.
# ```
print("Original values:")
data["TEMP"].head()
print("Truncated integer values:")
data["TEMP"].astype(int).head()
# + jupyter={"outputs_hidden": false}
print("Rounded integer values:")
data["TEMP"].round(0).astype(int).head()
# -
# Looks correct now.
# ## Unique values
# Sometimes it is useful to extract the unique values that you have in your column.
# We can do that by using `unique()` method:
# + jupyter={"outputs_hidden": false}
# Get unique celsius values
unique = data["TEMP"].unique()
unique
# -
# As a result we get an array of unique values in that column.
# ```{note}
# Sometimes if you have a long list of unique values, you don't necessarily see all the unique values directly as IPython/Jupyter may hide them with an ellipsis `...`. It is, however, possible to see all those values by printing them as a list
# ```
# + jupyter={"outputs_hidden": false}
# unique values as list
list(unique)
# -
# How many days with unique mean temperature did we have in June 2016? We can check that!
#
# + jupyter={"outputs_hidden": false}
# Number of unique values
unique_temps = len(unique)
print(f"There were {unique_temps} days with unique mean temperatures in June 2016.")
# -
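Pandas also has a shortcut for this count: `Series.nunique()` returns the number of distinct values directly, without building the intermediate array (shown here on a small example Series):

```python
import pandas as pd

temps = pd.Series([18.2, 19.5, 19.5, 21.0])

# nunique() counts distinct values (NaN excluded by default)
print(temps.nunique())  # 3
```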
# ## Sorting data
#
# Quite often it is useful to be able to sort your data (descending/ascending) based on values in some column.
# This can be easily done with pandas using the `sort_values(by='YourColumnName')` function.
#
# Let's first sort the values in ascending order based on the `TEMP` column:
# + jupyter={"outputs_hidden": false}
# Sort dataframe, ascending
data.sort_values(by="TEMP")
# -
# Of course, it is also possible to sort them in descending order with the ``ascending=False`` parameter:
#
# + jupyter={"outputs_hidden": false}
# Sort dataframe, descending
data.sort_values(by="TEMP", ascending=False)
# -
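`sort_values()` can also sort by several columns at once by passing lists to `by` and `ascending` (a sketch with a toy DataFrame):

```python
import pandas as pd

df = pd.DataFrame({"YEARMODA": [20160601, 20160601, 20160602],
                   "TEMP": [60.0, 65.0, 62.0]})

# Sort by date ascending, then by temperature descending within each date
df_sorted = df.sort_values(by=["YEARMODA", "TEMP"], ascending=[True, False])
print(df_sorted)
```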
# ## Writing data to a file
#
# Lastly, it is of course important to be able to write the data that you have analyzed into your computer. This is really handy in pandas as it [supports many different data formats by default](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html).
#
# **The most typical output format by far is a CSV file.** The function `to_csv()` can be used to easily save your data in CSV format. Let's first save the data from our `data` DataFrame into a file called `Kumpula_temps_June_2016.csv`.
# + jupyter={"outputs_hidden": false}
# define output filename
output_fp = "Kumpula_temps_June_2016.csv"
# Save dataframe to csv
data.to_csv(output_fp, sep=",")
# -
# Now we have the data from our DataFrame saved to a file:
# 
#
# As you can see, the first column in the datafile now contains the index values of the rows. There are also quite a lot of decimals present in the new columns
# that we created. Let's deal with these and save the temperature values from the `warm_temps` DataFrame without the index and with only 1 decimal in the floating point numbers.
# + jupyter={"outputs_hidden": false}
# define output filename
output_fp2 = "Kumpula_temps_above15_June_2016.csv"
# Save dataframe to csv
warm_temps.to_csv(output_fp2, sep=",", index=False, float_format="%.1f")
# -
# Omitting the index can be done with the `index=False` parameter. Specifying how many decimals should be written can be done with the `float_format` parameter where the text `%.1f` instructs pandas to use 1 decimal in all columns when writing the data to a file (changing the value 1 to 2 would write 2 decimals, etc.)
#
# 
#
# As a result you have a "cleaner" output file without the index column, and with only 1 decimal for floating point numbers.
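A quick sanity check after writing is to read the file back with `read_csv()` and confirm the round trip (a sketch with a small DataFrame and a temporary file name, `demo_temps.csv`, used purely for illustration):

```python
import pandas as pd

df = pd.DataFrame({"TEMP": [65.5, 73.6], "TEMP_CELSIUS": [18.6, 23.1]})
df.to_csv("demo_temps.csv", sep=",", index=False, float_format="%.1f")

# Read the file back and confirm the columns survived the round trip
check = pd.read_csv("demo_temps.csv")
print(check.columns.tolist())
```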
# That's it for this week. We will dive deeper into data analysis with pandas in the following Lesson.
| .ipynb_checkpoints/processing-data-with-pandas-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import adafdr.method as md
import adafdr.data_loader as dl
import matplotlib.pyplot as plt
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# # RNA-seq: pasilla data
# ## Load the pasilla data
p,x = dl.data_pasilla()
print('p:', p.shape)
print('x:', x.shape)
# ## Covariate visualization
md.adafdr_explore(p, x, output_folder=None)
# ## Hypothesis testing
# Baseline methods: BH, SBH
alpha = 0.1
n_rej, t_rej = md.bh_test(p, alpha=alpha, verbose=False)
print('# number of discoveries for BH: %d'%n_rej)
n_rej, t_rej, pi0_hat = md.sbh_test(p, alpha=alpha, verbose=False)
print('# number of discoveries for SBH: %d'%n_rej)
res = md.adafdr_test(p, x, alpha=alpha, fast_mode=False, single_core=False)
n_rej = res['n_rej']
t_rej = res['threshold']
print('# number of discoveries for adafdr: %d'%np.sum(p<=t_rej))
plt.figure()
plt.scatter(x, p, alpha=0.2, s=4)
plt.scatter(x, t_rej, s=16, label='threshold')
plt.xlabel('covariate x', fontsize=16)
plt.ylabel('p-value', fontsize=16)
plt.legend(fontsize=16)
plt.show()
| vignettes/passila.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from skimage import io as skio
from matplotlib import pyplot as plt
# +
img1 = skio.imread('1016-cat.jpg', as_gray=True)  # note: the 'as_grey' spelling was removed in recent scikit-image versions
img1 = (img1 * 256).astype(int)
img1[img1 == 256] = 255
with open('Pelvis_380_300_31_16bit_roi.raw', 'rb') as fin:
img2 = np.fromfile(fin, dtype=np.int16)
img2 = img2.reshape(31, 300, 380)
# -
def matrix(img, bins=16, bits=8):
    # Grey-level co-occurrence matrix over the four "forward" 2D neighbours
    # (right, down, down-right, up-right), so each pixel pair is counted once.
    in_bin = (2 ** bits) // bins  # grey levels per bin
    result = np.zeros((bins, bins))
    img = img.copy()
    img //= in_bin  # quantize intensities into `bins` levels
for i in range(img.shape[0]):
for j in range(img.shape[1]):
if i + 1 < img.shape[0]:
result[img[i, j], img[i + 1, j]] += 1
if j + 1 < img.shape[1]:
if i + 1 < img.shape[0]:
result[img[i, j], img[i + 1, j + 1]] += 1
result[img[i, j], img[i, j + 1]] += 1
if i > 0:
result[img[i, j], img[i - 1, j + 1]] += 1
return result
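The nested loops above are easy to follow but slow on large images. As a side note (not part of the original lab), the counts for a single neighbour offset can be computed in vectorized form with `numpy.add.at`; summing over the four forward offsets `(0, 1)`, `(1, 0)`, `(1, 1)` and `(-1, 1)` would reproduce the loop-based `matrix` above:

```python
import numpy as np

def matrix_offset(img, offset, bins=16, bits=8):
    # Vectorized co-occurrence counts for one (dy, dx) neighbour offset
    q = img // ((2 ** bits) // bins)  # quantize into `bins` levels
    dy, dx = offset
    h, w = q.shape
    # Overlapping windows: a[i, j] pairs with b[i, j] = q[i + dy, j + dx]
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    result = np.zeros((bins, bins))
    np.add.at(result, (a.ravel(), b.ravel()), 1)  # accumulate pair counts
    return result
```

`np.add.at` is used instead of plain fancy-indexed `+=` because it accumulates correctly when the same (row, column) pair occurs multiple times.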
matrix1 = matrix(img1)
matrix2 = matrix(img1, bins=256, bits=8)
def matrix3d(img, bins=16, bits=12):
    # 3D co-occurrence matrix over the 13 "forward" voxel neighbours
    # (half of the 26-connected neighbourhood), counting each pair once.
    in_bin = (2 ** bits) // bins  # grey levels per bin
    result = np.zeros((bins, bins))
    img = img.copy()
    img //= in_bin  # quantize intensities into `bins` levels
for i in range(img.shape[0]):
for j in range(img.shape[1]):
for k in range(img.shape[2]):
if i + 1 < img.shape[0]:
result[img[i, j, k], img[i + 1, j, k]] += 1
if k > 0:
result[img[i, j, k], img[i + 1, j, k - 1]] += 1
if k + 1 < img.shape[2]:
result[img[i, j, k], img[i + 1, j, k + 1]] += 1
if j > 0:
result[img[i, j, k], img[i + 1, j - 1, k]] += 1
if k > 0:
result[img[i, j, k], img[i + 1, j - 1, k - 1]] += 1
if k + 1 < img.shape[2]:
result[img[i, j, k], img[i + 1, j - 1, k + 1]] += 1
if j + 1 < img.shape[1]:
result[img[i, j, k], img[i + 1, j + 1, k]] += 1
if k > 0:
result[img[i, j, k], img[i + 1, j + 1, k - 1]] += 1
if k + 1 < img.shape[2]:
result[img[i, j, k], img[i + 1, j + 1, k + 1]] += 1
if j + 1 < img.shape[1]:
result[img[i, j, k], img[i, j + 1, k]] += 1
if k > 0:
result[img[i, j, k], img[i, j + 1, k - 1]] += 1
if k + 1 < img.shape[2]:
result[img[i, j, k], img[i, j + 1, k + 1]] += 1
                if k + 1 < img.shape[2]:
                    result[img[i, j, k], img[i, j, k + 1]] += 1  # count this pair once, like the other 12 directions
return result
matrix3 = matrix3d(img2)
fig, axes = plt.subplots(1, 3, figsize=(12, 6))
axes[0].imshow(matrix1)
axes[1].imshow(matrix2)
axes[2].imshow(matrix3)
| 6_sem/image_analysis/lab7/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# + papermill={"duration": 0.006495, "end_time": "2021-02-01T10:05:46.292022", "exception": false, "start_time": "2021-02-01T10:05:46.285527", "status": "completed"} tags=[]
# + _execution_state="idle" _uuid="051d70d956493feee0c6d64651c6a088724dca2a" papermill={"duration": 2.536288, "end_time": "2021-02-01T10:05:48.833889", "exception": false, "start_time": "2021-02-01T10:05:46.297601", "status": "completed"} tags=[]
library(tidyverse)
library(plyr)
library(dplyr)
library(ggplot2)
library(splitstackshape)
data <- read_csv('../input/marketcategory/marketcategory.csv')
data<- as.data.frame(data)
options(repr.plot.width = 14, repr.plot.height = 8)
# + papermill={"duration": 0.408306, "end_time": "2021-02-01T10:05:49.252522", "exception": false, "start_time": "2021-02-01T10:05:48.844216", "status": "completed"} tags=[]
data
# + papermill={"duration": 1.386819, "end_time": "2021-02-01T10:05:50.653886", "exception": false, "start_time": "2021-02-01T10:05:49.267067", "status": "completed"} tags=[]
all_categories=filter(data,categoryTags !="Income" & categoryTags !="Transfer" & categoryTags !="Taxes")
g1<-all_categories %>%select(categoryTags,currencyAmount)%>%
group_by(categoryTags)%>%dplyr::summarise(total=round(sum(currencyAmount),1))%>%
ggplot(aes(x = categoryTags,y=total,fill = categoryTags))+
geom_bar(stat='identity',alpha = 0.7)+
geom_label(aes(label = total),vjust = -0.1, show.legend = F)+theme_bw()+
labs(x='Category', y = "Total amount spent", title = "What are they ordering?")+
theme(plot.title = element_text(hjust = 0.5, face = "bold"),axis.text.x=element_text(face="bold", color="#993333", size=10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16),legend.position = "none")
g1
ggsave('g1.png')
# + papermill={"duration": 1.382109, "end_time": "2021-02-01T10:05:52.052906", "exception": false, "start_time": "2021-02-01T10:05:50.670797", "status": "completed"} tags=[]
g2<-data %>%select(AgeGroup,balance,gender)%>%group_by(AgeGroup,gender)%>%dplyr::summarise(total=round(sum(balance),1))%>%
ggplot(aes(x = AgeGroup, y = total, fill = gender))+geom_bar(stat='identity',position='stack')+
geom_label(aes(label=total),position='stack',size=4,vjust=-0.1)+ theme_bw() +
labs(x='Age Group', y = "Total Amount", title = "Total balance of different age groups",
subtitle = "40-50 age group has more money.")+
theme(plot.title = element_text(hjust = 0.5, face = "bold"),axis.text.x=element_text(face="bold", color="#993333", size=10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16))
g2
ggsave('g2.png')
# + papermill={"duration": 1.384068, "end_time": "2021-02-01T10:05:53.457417", "exception": false, "start_time": "2021-02-01T10:05:52.073349", "status": "completed"} tags=[]
categories=filter(data,categoryTags =="Shopping")
g3<-categories %>%select(AgeGroup,currencyAmount,gender)%>%group_by(AgeGroup,gender)%>%dplyr::summarise(total=round(sum(currencyAmount),1))%>%
ggplot(aes(x = AgeGroup, y = total, fill = gender))+geom_bar(stat='identity',position='stack')+
geom_label(aes(x = AgeGroup, y = total, fill = gender,label=total),stat='identity',position='stack',size=4,vjust=-0.1)+ theme_bw() +
labs(x='Age Group', y = "Spending",title = "Spending of different age groups",
subtitle = "Teenagers especially females are spending more.")+
theme(plot.title = element_text(hjust = 0.5, face = "bold"),axis.text.x=element_text(face="bold", color="#993333", size=10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16))
g3
ggsave('g3.png')
# + papermill={"duration": 1.634931, "end_time": "2021-02-01T10:05:55.116130", "exception": false, "start_time": "2021-02-01T10:05:53.481199", "status": "completed"} tags=[]
g4<-data %>% group_by(AgeGroup) %>%
ggplot(aes(x = AgeGroup, fill = AgeGroup))+
geom_bar(alpha = 0.7)+
geom_label(aes(label = after_stat(count)),stat='count', vjust = -0.1, show.legend = F)+theme_bw()+
labs(x='Age Group', y = "Number of Transactions", title = "Which Age Group are ordering the most?",
subtitle = "People below 30 years are doing more transactions overall.")+
theme(plot.title = element_text(hjust = 0.5, face = "bold"),axis.text.x=element_text(face="bold", color="#993333", size=10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16),legend.position = "none")
g4
ggsave('g4.png')
# + papermill={"duration": 0.936671, "end_time": "2021-02-01T10:05:56.079248", "exception": false, "start_time": "2021-02-01T10:05:55.142577", "status": "completed"} tags=[]
g5<-categories %>% group_by(AgeGroup) %>%
ggplot(aes(x = AgeGroup, fill = AgeGroup))+
geom_bar(alpha = 0.7)+
geom_label(aes(label = after_stat(count)),stat='count', vjust = -0.1, show.legend = F)+theme_bw()+
labs(x='Age Group', y = "Number of Transactions", title = "Which Age Group are ordering the most?",
subtitle = "Age Group 10-20 has highest no of transactions.")+
theme(plot.title = element_text(hjust = 0.5, face = "bold"),axis.text.x=element_text(face="bold", color="#993333", size=10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16),legend.position = "none")
g5
ggsave('g5.png')
| data-analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Bathtub
# With this practical, we start our foray into modelling.
#
# The Bathtub model is one of the simplest, most intuitive models that exist. It is often used as a friendly intro to the concepts behind modelling.
#
# Much of this exercise is based on the example designed by <NAME> for modelling using the STELLA software. His site is available here:
# http://www3.geosc.psu.edu/~dmb53/DaveSTELLA/modeling/ch2contents.html
#
# That site is full of excellent information about the whys and hows of modelling, and you are welcome (and encouraged) to peruse it.
#
# *Note: the site is built for modelling with STELLA, a software package that is more expensive, less intuitive, and much less versatile than python. If you ignore the details of the models being built, you can learn a lot from the general information provided.*
# Let's start with a very simple system: water in a bathtub. The bathtub has one input: the faucet. It has one output: the drain. Intuitively, we know that the amount of water in the tub will depend on:
# 1. The amount of water we start with
# 2. The rate that water flows in from the faucet (i.e. how far we turn the faucet)
# 3. The rate that water flows out through the drain (which itself depends on how much water is in the bathtub at any moment in time)
# We can draw this as a simple reservoir (box) with a single input (arrow in) and output (arrow out).
#
# 
# *Image adapted from Arnott et al., ACCESS 46, 2015: http://www.accessmagazine.org/articles/spring-2015/a-bathtub-model-of-downtown-traffic-congestion/*
#
# Now we can use Python to translate this conceptual model into a quantitative model. The goal for our model is that we can determine how much water is in the bathtub at any given time. Throughout this practical, we will slowly build up a function that represents the bathtub model.
# +
# We'll need this eventually
import numpy
def bathtub_model():
return
# -
# ## Variables
#
# First, we need to think about what the variables are involved in this system. In this case, we need variables that represent:
# - The amount of water in the bathtub at any given time
# - The *initial* amount of water in the bathtub
# - The rate at which water flows *into* the bathtub via the faucet
# - The rate at which water flows *out of* the bathtub via the drain
# - The amount of time since our bathtub filling/draining experiment began.
#
# As usual, it is important to give the variables logical names and use comments to explain what they are and what units they have. Because this function is going to get long and complicated, it's good practice to put all the variable information up at the top (in comments).
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
return
# Next we need to think about how to set up those variables. Let's start with the easy ones. *Total_Water* is going to depend on all the rest, so let's come back to this.
#
# *Initial_Water* is just a number, so we can define this one easily. Let's assume for now we start out with 10 litres of water in the tub.
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water
Initial_Water = 10.0
return
# *Flow_In* is also just a number - as humans, we can change it by opening or closing the faucet, but it is not dependent on any of the other variables. For now, we will assume we add 1/2 litre to the tub every second.
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
return
# What about *Flow_Out*? This one is trickier. If there is more water in the tub, there will be more pressure on the drain, so the flow will be faster. This means that *Flow_Out* depends on *Total_Water*. If we assume the drain lets out 10% of the tub's water every second, we can put this into an equation:
#
# <center>*Flow_Out = 0.1 $\times$ Total_Water*</center>
#
# But we can't add this to our function yet, because we haven't defined *Total_Water*.
# What about *Total_Water*? Well, this will depend on *Initial_Water*, *Flow_In*, and *Flow_Out* - and how much time has passed (because our flows are **rates**). So, after some *elapsed_time*:
#
# <center>*Total_Water = Initial_Water + Flow_In $\times$ elapsed_time - Flow_Out $\times$ elapsed_time*</center>
# So far so good, right? But we have a problem. *Total_Water* depends on *Flow_Out*, and *Flow_Out* depends on *Total_Water*!
# ## Solving the equations
# Luckily, this is a really common problem - and we can use pretty simple standard numerical methods to solve it.
#
#
# <span style="color:Purple"><b>Want to know more?</b> This is a differential equation. The method we'll describe to solve it is called the forward Euler method, and is the simplest way to do it. Other more accurate methods also exist and are used in complex models.</span>
# The problem isn't as hard as it seems. Let's think about what happens right at the start. We know the initial amount of *Total_Water* - this is *Initial_Water*. So we can also calculate the initial rate of *Flow_Out*:
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
    # define Flow_Out (L/s) at time=0 (the start of our experiment)
Flow_Out_0 = 0.1 * Initial_Water
# Print this value to clarify behaviour
print( "Flow_Out at time=0 is: ", Flow_Out_0 )
return
# Let's test out this model! **Remember:** with functions, after we *define* the function we need to *call* (use) it.
# Call the function
bathtub_model()
# Now that we have an initial value for *Flow_Out*, we can calculate the value of *Total_Water* a short time later, let's say after 1 second:
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
    # define Flow_Out (L/s) at time=0 (the start of our experiment)
Flow_Out_0 = 0.1 * Initial_Water
# Now let's see what happens 1 second later
elapsed_time = 1
Total_Water_1 = Initial_Water + Flow_In * elapsed_time - Flow_Out_0 * elapsed_time
# Print this value to clarify behaviour
print( "Flow_Out at time=0 is: ", Flow_Out_0 )
print( "Total_Water_1 at time=1 is: ", Total_Water_1 )
return
# Ok, let's try out this version:
# Call the function
bathtub_model()
# Because the amount of water (*Total_Water*) has changed, *Flow_Out* will also change:
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
    # define Flow_Out (L/s) at time=0 (the start of our experiment)
Flow_Out_0 = 0.1 * Initial_Water
# Now let's see what happens 1 second later
elapsed_time = 1
Total_Water_1 = Initial_Water + Flow_In * elapsed_time - Flow_Out_0 * elapsed_time
# We use the new value Total_Water_1 to update from Flow_Out_0 to Flow_Out_1
Flow_Out_1 = 0.1 * Total_Water_1
# Print this value to clarify behaviour
print( "Flow_Out at time=0 is: ", Flow_Out_0 )
print( "Total_Water_1 at time=1 is: ", Total_Water_1 )
print( "Flow_Out_1 at time=1 is: ", Flow_Out_1 )
return
# Call the function
bathtub_model()
# We could repeat the process 1 second further in the future (or 2 seconds since our experiment started) by now using the updated value of *Flow_Out*, *Flow_Out_1*:
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
    # define Flow_Out (L/s) at time=0 (the start of our experiment)
Flow_Out_0 = 0.1 * Initial_Water
# Now let's see what happens 1 second later
elapsed_time = 1
Total_Water_1 = Initial_Water + Flow_In * elapsed_time - Flow_Out_0 * elapsed_time
# We use the new value Total_Water_1 to update from Flow_Out_0 to Flow_Out_1
Flow_Out_1 = 0.1 * Total_Water_1
# 1 second has elapsed since we last updated Flow_Out and Total_Water,
# so we still use elapsed_time=1
Total_Water_2 = Total_Water_1 + Flow_In * elapsed_time - Flow_Out_1 * elapsed_time
# Print this value to clarify behaviour
print( "Flow_Out at time=0 is: ", Flow_Out_0 )
print( "Total_Water_1 at time=1 is: ", Total_Water_1 )
print( "Flow_Out_1 at time=1 is: ", Flow_Out_1 )
print( "Total_Water_2 at time=2 is: ", Total_Water_2 )
return
# Call the function
bathtub_model()
# There is another way to look at what we have just done. Every second, we are solving this equation:<br>
# <center>*Total_Water_NOW = Total_Water_1_SECOND_AGO + (Flow_In $\times$ 1 second) - (Flow_Out_1_SECOND_AGO $\times$ 1 second)*</center>
#
# Or to put it more mathematically, if the time 1 second ago is *time* then:<br>
# <center>*Total_Water[time + 1 second] = Total_Water[time] + (Flow_In $\times$ 1 second) - (Flow_Out[time] $\times$ 1 second)*</center>
# To determine *Total_Water* at some arbitrary time in the future, we need to:
# 1. Calculate *Flow_Out* at the current time
# 2. Calculate *Total_Water* 1 second later, using the previous values of *Total_Water* and *Flow_Out* and assuming 1 second has passed
# 3. Repeat until we get to our final time.
#
# This sounds like a good candidate for a loop! Let's assume we want to know what happens after one minute (i.e. 60 seconds) has passed. We will set up a new variable called *total_time* equal to 61 seconds and use that to set up our loop.
#
# (*Note*: We use 61 here, because we want to know what happens *after* 60 seconds of experiment.)
#
# We start our loop at time **1** because we already know the value of *Total_Water* at time 0 (before the experiment starts). We only need to calculate it after the seconds have started passing.
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
# At the beginning of the experiment (time=0), Initial_Water IS the total amount of water
Total_Water = Initial_Water
# Each time we go around the loop, 1 second has passed.
# So each time, elapsed_time=1
elapsed_time = 1
# Let's look at one minute (60 seconds) to begin with.
    # We set the time to 61 because we want to know what happens
# after 60 seconds.
total_time = 61
# Loop over the total number of seconds in our experiment.
for i in range(1,total_time,1):
# First we update the rate of Flow_Out, using the current value of Total_Water
Flow_Out = 0.1 * Total_Water
# Next we update Total_Water. Because we execute this command every second,
# the time that has elapsed each loop is elapsed_time (not the total time).
Total_Water = Total_Water + Flow_In * elapsed_time - Flow_Out * elapsed_time
# Print out the values at every time
print( "After ",i, "seconds" )
print( " Flow_Out is: ", Flow_Out )
print( " Total_Water is: ", Total_Water )
return
# Call the function
bathtub_model()
# You can scroll all the way down to the bottom of the output to see what the final volume of water is.
#
# Of course, this is a little tedious.
#
# Instead, we can use what we learned last time about ***return*** values of functions.
#
# **Remember**: to save the output from a function, we need to add something to the `return` line. Here, we will return the value `Total_Water` at the very end of our experiment (after the loop), so it represents the final volume of water.
#
# (We will also remove the print statements to make the output easier to read.)
# ### The Bathtub Model - version 1
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
# At the beginning of the experiment (time=0), Initial_Water IS the total amount of water
Total_Water = Initial_Water
# Each time we go around the loop, 1 second has passed.
# So each time, elapsed_time=1
elapsed_time = 1
# Let's look at one minute (60 seconds) to begin with.
total_time = 61
# Loop over the total number of seconds in our experiment.
for i in range(1,total_time,1):
# First we update the rate of Flow_Out, using the current value of Total_Water
Flow_Out = 0.1 * Total_Water
# Next we update Total_Water. Because we execute this command every second,
# the time that has elapsed each loop is elapsed_time (not the total time).
Total_Water = Total_Water + Flow_In * elapsed_time - Flow_Out * elapsed_time
# We'll return the value of Total_Water since this is what we want to know
return Total_Water
# **Remember**: when we *call* the function, we can save the output (here `Total_Water`) into a variable. Here, we are returning `Total_Water` at the very end of our experiment, so it represents the final volume of water and we will call our new variable `Final_Water` (but you could call it anything you want).
# +
# Call the function
Final_Water = bathtub_model()
# Print the final volume
print( "The volume of water in the bathtub after 60 seconds is: ",Final_Water )
# -
# You should find that after 1 minute, the amount of water in the bathtub has decreased from 10 litres to just over 5 litres.<br><br>
#
# <font color=purple>We could further refine the solution by recognising that *Flow_Out* is a function of *Total_Water*. Then we wouldn't even need a variable for *Flow_Out* (it is a local variable, so it only ever exists inside the function anyway), and our equation could be solved each time around with a single line:</font>
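# As a sketch of that idea (using the same variable names and constants as the function above), the whole update can indeed be written in a single line by substituting `0.1 * Total_Water` for `Flow_Out` directly:

```python
# Compact sketch of the bathtub model: Flow_Out = 0.1 * Total_Water is
# folded directly into the update, so no Flow_Out variable is needed.
def bathtub_model_compact():
    Initial_Water = 10.0   # litres
    Flow_In = 0.5          # litres per second
    elapsed_time = 1       # seconds
    Total_Water = Initial_Water
    for i in range(1, 61, 1):
        # Single-line update: inflow minus outflow (0.1 * current volume)
        Total_Water = Total_Water + (Flow_In - 0.1 * Total_Water) * elapsed_time
    return Total_Water

print(bathtub_model_compact())  # just over 5 litres, as before
```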
# ## <font color=blue>Complete Exercise 1 now</font>
# ## Saving values along the way
# Our current solution allows us to find the volume of water at the end of the experiment. But what if we want to know how this changes over time?
#
# Sometimes the most meaningful information is not the final value, but the shape and rate of the change. For this we need to **save** intermediate values, ideally each time we update the calculation. To do so requires adapting our function. (We printed them before, but we need to save them so that we can eventually plot them.)
#
# Instead of looping over a set number of seconds, we can use the numpy function `numpy.arange(start,stop,step)` to first define an array that represents every second during our experiment. Just like above, we use a `stop` value of 61 here, because we want to know what happens *after* 60 seconds of experiment.
time = numpy.arange(0,61,1)
print( time )
# How does this help us?
#
# We now have an array `time` representing every second of our experiment. If we make `Total_Water` an array with the same number of elements as `time`, then we can store the value of `Total_Water` at each second in our calculation.
#
# We can use **`numpy.zeros(number)`** to set up the `Total_Water` array before we know what values go into it. This might look a bit strange right now, but we will eventually "fill in" all of these zeros.
# +
# Use 1-second spacing
time = numpy.arange(0,61,1)
# Define an array of zeros to store the values of Total_Water at every time.
Total_Water = numpy.zeros(len(time))
print( 'time:',time )
print( 'Total_Water',Total_Water )
# -
# Let's now return to our function. Remember that we previously wrote our equation mathematically like this:
# <center>*Total_Water[time + 1 second] = Total_Water[time] + (Flow_In $\times$ 1 second) - (0.1 $\times$ Total_Water[time] $\times$ 1 second)*</center>
#
# This is equivalent to:
# <center>*Total_Water[time] = Total_Water[time - 1 second] + (Flow_In $\times$ 1 second) - (0.1 $\times$ Total_Water[time - 1 second] $\times$ 1 second)*</center>
#
# We now have a way of storing *Total_Water* at each time. This means we always know the value at *Total_Water[time - 1 second]*, and we can use this in our function:
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
# Use 1-second spacing
time = numpy.arange(0,61,1)
# Define an array of zeros to store the values of Total_Water at every time.
Total_Water = numpy.zeros(len(time))
# At the beginning of the experiment (time=0), Initial_Water IS the total amount of water
Total_Water[0] = Initial_Water
# Each time we go around the loop, 1 second has passed.
# So each time, elapsed_time=1
elapsed_time = 1
# Loop over the total number of seconds in our experiment.
for i in range(1,len(time),1):
# First we update the rate of Flow_Out, using the current value of Total_Water
Flow_Out = 0.1 * Total_Water[i-1]
# Next we update Total_Water. Because we execute this command every second,
# the time that has elapsed each loop is elapsed_time (not the total time).
Total_Water[i] = Total_Water[i-1] + Flow_In * elapsed_time - Flow_Out * elapsed_time
# We'll return the array Total_Water
return Total_Water
# You should notice a few very important modifications to our model in the cell above. First, we added these lines:
# ```
# time = numpy.arange(0,61,1)
# Total_Water = numpy.zeros(len(time))
# ```
# These are the lines we described before, that set up both the `time` and the `Total_Water` arrays.
#
# Next, you should see that we changed:
# ```
# Total_Water = Initial_Water
# ```
# to
# ```
# Total_Water[0] = Initial_Water
# ```
# Before, `Total_Water` was a single value (a scalar) that we replaced each time we went around the loop. Now, however, `Total_Water` is an array that represents the total volume of water at each second in our experiment. Only at second 0 is `Total_Water` equal to `Initial_Water`. This is something we call the ***initial condition***.
#
# Next, we changed
# ```
# for i in range(1,total_time,1):
# ```
# to
# ```
# for i in range(1,len(time),1):
# ```
# This is because we now know that our experiment ends when we get to the end of the `time` array. In other words, `total_time` is just the number of individual times we have in the `time` array. We know from previous weeks that `len(time)` will tell us exactly how many values are in `time`.
#
# Finally, our equations now index `Total_Water` with [i] and [i-1]:
# ```
# Flow_Out = 0.1 * Total_Water[i-1]
# Total_Water[i] = Total_Water[i-1] + Flow_In * elapsed_time - Flow_Out * elapsed_time
# ```
# Remember, `Total_Water[i-1]` means `Total_Water` "one moment ago." So we are using the value of `Total_Water` we just calculated to calculate the new flow, and then using that to calculate the value of `Total_Water` now (`Total_Water[i]`).
#
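# As a quick sanity check of this indexing, the very first pass through the loop (`i = 1`) can be worked out by hand using the starting values above:

```python
# Hand calculation of the first loop iteration (i = 1):
# Total_Water[0] is the initial condition (10.0 L) and Flow_In is 0.5 L/s.
previous_water = 10.0                                  # Total_Water[0]
flow_out = 0.1 * previous_water                        # 1.0 L/s
new_water = previous_water + 0.5 * 1 - flow_out * 1    # Total_Water[1]
print(new_water)  # 9.5 litres after the first second
```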
# Let's test our updates:
# +
# Call the function to test the model
All_Water = bathtub_model()
# Now final water is the last value of All_Water
Final_Water = All_Water[-1]
# Print the final volume
print( "The volume of water in the bathtub after the experiment is: ", Final_Water )
# -
# Good news -- we get the same answer as before!
#
# But the real value isn't the final answer itself; it's seeing what happened along the way. For this we need the water volume at every point in time (which we now have), as well as the corresponding values of time. This requires a small modification to our function: in addition to returning `Total_Water` at the end, we also return `time`.
# ### The Bathtub Model - version 2
def bathtub_model():
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Initial_Water (L)
Initial_Water = 10.0
# define Flow_In (L/s)
Flow_In = 0.5
# Use 1-second spacing
time = numpy.arange(0,61,1)
# Define an array of zeros to store the values of Total_Water at every time.
Total_Water = numpy.zeros(len(time))
# At the beginning of the experiment (time=0), Initial_Water IS the total amount of water
Total_Water[0] = Initial_Water
# Each time we go around the loop, 1 second has passed.
# So each time, elapsed_time=1
elapsed_time = 1
# Loop over the total number of seconds in our experiment.
for i in range(1,len(time),1):
# First we update the rate of Flow_Out, using the current value of Total_Water
Flow_Out = 0.1 * Total_Water[i-1]
# Next we update Total_Water. Because we execute this command every second,
# the time that has elapsed each loop is elapsed_time (not the total time).
Total_Water[i] = Total_Water[i-1] + Flow_In * elapsed_time - Flow_Out * elapsed_time
    # We'll return both the time array and the Total_Water array
return time, Total_Water
# Now we can use the function to give us time and water volume.
#
# **Remember**: If we have TWO outputs (two variables on the return line above), you need TWO variable names to the left of the equal
# sign when you call the function:
# Call the model and return both the time values and the volume values
All_Time, All_Water = bathtub_model()
print( All_Time, All_Water )
# Of course, printing the values really isn't very useful for understanding the behaviour. Instead, we want to plot them! For a brief reminder of how to make a plot in python:
# +
# Set-up: only needed once
import matplotlib.pyplot as pyplot
# %matplotlib inline
# Plot with time on x-axis and water on y-axis
pyplot.plot(All_Time,All_Water)
# Add axis labels and units
pyplot.xlabel('time (seconds)')
pyplot.ylabel('Volume of water in bathtub (L)')
pyplot.title('Change in water volume as a function of time')
# Show the plot
pyplot.show()
# -
# ## <font color=blue>Complete Exercise 2 now</font>
# ## Sensitivity analysis
#
# To answer Exercise 2c, you probably had to use some trial and error until you found a value for *Flow_In* that worked (unless you were very clever, in which case - well done!).
#
# A better way to do this is to perform what's called a ***sensitivity analysis***. This is a way to find out how sensitive the system is to the different parameters we are using.
#
# For example, in every experiment, we have assumed the initial amount of water in the bathtub is 10.0 L. What happens as we increase or decrease this value? By running our model for a range of different values, we can quickly answer this question. This is sensitivity analysis.
# If we want to use a different value for `Initial_Water` each time we call the function, we first need to make `Initial_Water` an argument in our function (so that we can easily vary it). Remember, this means we also need to remove it from the interior of the function, where we defined it before.
# ### The Bathtub Model - version 3
def bathtub_model(Initial_Water = 10.0):
# Variables
# Total_Water (litres) - the amount of water in the bathtub at any given time
# Initial_Water (litres) - the amount of water in the bathtub when we start
# Flow_In (litres per second) - the rate at which water flows in via the faucet
# Flow_Out (litres per second) - the rate at which water flows out via the drain
# elapsed_time (seconds) - the amount of time that has passed in our experiment
# define Flow_In (L/s)
Flow_In = 0.5
# Use 1-second spacing
time = numpy.arange(0,61,1)
# Define an array of zeros to store the values of Total_Water at every time.
Total_Water = numpy.zeros(len(time))
# At the beginning of the experiment (time=0), Initial_Water IS the total amount of water
Total_Water[0] = Initial_Water
# Each time we go around the loop, 1 second has passed.
# So each time, elapsed_time=1
elapsed_time = 1
# Loop over the total number of seconds in our experiment.
for i in range(1,len(time),1):
# First we update the rate of Flow_Out, using the current value of Total_Water
Flow_Out = 0.1 * Total_Water[i-1]
# Next we update Total_Water. Because we execute this command every second,
# the time that has elapsed each loop is elapsed_time (not the total time).
Total_Water[i] = Total_Water[i-1] + Flow_In * elapsed_time - Flow_Out * elapsed_time
    # We'll return both the time array and the Total_Water array
return time,Total_Water
# With our updated function, we can loop over our different values for `Initial_Water` (which we will define below as `Test_Values`) and then plot each one.
#
# Let's start by setting up an array of possible values for `Initial_Water`:
# Set up an array of possible values for Initial_Water
Test_Values = numpy.array([0.0, 5.0, 10.0, 15.0, 20.0])
# Now we will set up a loop. First a quick reminder how loops and indices work. If we just wanted to print each value of `Test_Values`, our loop would look like this:
# Set up a loop over Test_Values
for i in range(0,len(Test_Values),1):
print( Test_Values[i] )
# Remember using the index [i] means we are going to access a different value of `Test_Values` every time we go around the loop.
#
# This time, we are going to use `Test_Values[i]` to provide a different input value of `Initial_Water` each time.
#
# We are **also** going to add our plotting code to the loop, so that every time we calculate the output we also plot it.
# Set up a loop over Test_Values and
# call the model with each possible value
for i in range(0,len(Test_Values),1):
# Call the model with this value for Initial_Water and save output
All_Time, All_Water = bathtub_model(Initial_Water=Test_Values[i])
# Plot the results
pyplot.plot(All_Time,All_Water)
# Add axis labels and units
pyplot.xlabel('time (seconds)')
pyplot.ylabel('Volume of water in bathtub (L)')
pyplot.title('Change in water volume as a function of time')
# Show the plot
pyplot.show()
# ## Adding labels and legends
#
# Right now, it's not very obvious which plot is which. We can fix that by making two changes to our code:
# 1. In the `pyplot.plot` commands, we will add `label=XXXX`, where `XXXX` is whatever we want the line to be called. In this case, our label is the value of `Initial_Water`.
# 2. We will also add a new line, `pyplot.legend()` before calling `pyplot.show()`.
# Set up a loop over Test_Values and
# call the model with each possible value
for i in range(0,len(Test_Values),1):
# Call the model with this value for Initial_Water and save output
All_Time, All_Water = bathtub_model(Initial_Water=Test_Values[i])
# Plot the results
pyplot.plot(All_Time,All_Water,label=Test_Values[i])
# Add axis labels and units
pyplot.xlabel('time (seconds)')
pyplot.ylabel('Volume of water in bathtub (L)')
pyplot.title('Change in water volume as a function of time')
# Add legend
pyplot.legend(title='Initial Volume (L)')
# Show the plot
pyplot.show()
# ## Adding multiple curves to one plot
#
# It is even more informative to show all the curves in one plot. We can do this by calling the plot command in our loop, but not calling the show command until the loop is complete. We do that by **removing the indent** -- remember, anything that is **not** indented only happens once, after the loop is finished.
# +
# Set up a loop over these values to call the model with each possible value
# Each time, add a line to the plot
for i in range(0,len(Test_Values),1):
# Call the model with this value for Initial_Water and save output
All_Time, All_Water = bathtub_model(Initial_Water=Test_Values[i])
# Plot the results
pyplot.plot(All_Time,All_Water,label=Test_Values[i])
# Add axis labels and units
pyplot.xlabel('time (seconds)')
pyplot.ylabel('Volume of water in bathtub (L)')
pyplot.title('Change in water volume as a function of time')
# Add legend
pyplot.legend(title='Initial Volume (L)')
# Show the plot
pyplot.show()
# -
# That was easy! `pyplot` was even smart enough to use different colours, and to identify those colours in the legend!
# ## <font color=blue>Complete Exercise 3 now</font>
# ## <font color=purple>If you have extra time, complete Exercise 4</font>
| notebooks/Week7_TheBathtub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + dc={"key": "1d0b086e6c"} run_control={"frozen": true} tags=["context"] id="s9JLDE8NIquP"
# # Introduction
#
# Today we'll dive deep into a dataset all about LEGO. From the dataset we can ask a whole bunch of interesting questions about the history of the LEGO company, their product offering, and which LEGO set ultimately rules them all:
#
# <ul type="square">
# <li>What is the most enormous LEGO set ever created and how many parts did it have?</li>
#
# <li>How did the LEGO company start out? In which year were the first LEGO sets released and how many sets did the company sell when it first launched?</li>
#
# <li>Which LEGO theme has the most sets? Is it one of LEGO's own themes like Ninjago or a theme they licensed like Harry Potter or Marvel Superheroes?</li>
#
# <li>When did the LEGO company really expand its product offering? Can we spot a change in the company strategy based on how many themes and sets it released year-on-year?</li>
#
# <li>Did LEGO sets grow in size and complexity over time? Do older LEGO
# sets tend to have more or fewer parts than newer sets?</li>
# </ul>
#
# **Data Source**
#
# [Rebrickable](https://rebrickable.com/downloads/) has compiled data on all the LEGO pieces in existence. I recommend you download the .csv files provided in this lesson.
# + [markdown] id="NLy7WycU5xU5"
# <img src="https://i.imgur.com/49FNOHj.jpg">
#
# + [markdown] id="V0u2lGJuIquQ"
# # Import Statements
# + id="z5Wk7rs-IquQ"
import pandas as pd, matplotlib.pyplot as plt
# + [markdown] id="R5NQpJ_KIquT"
# # Data Exploration
# + dc={"key": "044b2cef41"} run_control={"frozen": true} tags=["context"] id="ffaG-UFYIquT"
# **Challenge**: How many different colours does the LEGO company produce? Read the colors.csv file in the data folder and find the total number of unique colours. Try using the [.nunique() method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nunique.html?highlight=nunique#pandas.DataFrame.nunique) to accomplish this.
# + id="yd4G9pK7IquU" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="feff575b-143b-4374-acb7-c0c6c2315148"
color_df = pd.read_csv('Data/colors.csv')
color_df
# + dc={"key": "044b2cef41"} tags=["sample_code"] id="QmbAXax7IquW" colab={"base_uri": "https://localhost:8080/"} outputId="e44a96c1-8fa2-4e15-e233-175a2f6ee3ef"
color_df['name'].nunique()
# + dc={"key": "<KEY>"} run_control={"frozen": true} tags=["context"] id="PItRbqgcIqua"
# **Challenge**: Find the number of transparent colours where <code>is_trans == 't'</code> versus the number of opaque colours where <code>is_trans == 'f'</code>. See if you can accomplish this in two different ways.
# + id="1UZrfq82Iqub" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="4c5aa8a7-ce16-438f-c583-a64244085a98"
color_df.groupby('is_trans').count()
# + id="KQFf-leCIqud" colab={"base_uri": "https://localhost:8080/"} outputId="ff31c488-eb24-421b-c37b-fb1a3c5bfe7b"
color_df['is_trans'].value_counts()
# + [markdown] id="TMqdhUYcusfy"
# ### Understanding LEGO Themes vs. LEGO Sets
# + [markdown] id="y0kxCh63uwOv"
# Walk into a LEGO store and you will see their products organised by theme. Their themes include Star Wars, Batman, Harry Potter and many more.
#
# <img src='https://i.imgur.com/aKcwkSx.png'>
# + dc={"key": "<KEY>"} run_control={"frozen": true} tags=["context"] id="u_xkZUF8Iqug"
# A LEGO set is a particular boxed LEGO product. Therefore, a single theme typically has many different sets.
#
# <img src='https://i.imgur.com/whB1olq.png'>
# + [markdown] id="jJTAROe5unkx"
# The <code>sets.csv</code> data contains a list of sets over the years and the number of parts that each of these sets contained.
#
# **Challenge**: Read the sets.csv data and take a look at the first and last couple of rows.
# + id="vGMOv-NRIquh" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="469aed31-6146-44e4-c17a-59f6b9033e6f"
set_df = pd.read_csv('Data/sets.csv')
set_df
# + id="T3lLFvyZIqui" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="2a0e138d-d5a3-4e8a-a62b-48910cd4b8a1"
set_df.head()
# + id="XprDBmzwIquk" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="c7be424f-a048-4bcc-e8af-0f4fd03590ed"
set_df.tail()
# + [markdown] id="ez-UXSMUIqum"
# **Challenge**: In which year were the first LEGO sets released and what were these sets called?
# + id="s2aL6qrGIqum" colab={"base_uri": "https://localhost:8080/"} outputId="922133a1-c16c-409a-96c8-70da183bac7f"
set_df = set_df.sort_values('year', ignore_index=True)
set_df.loc[0]
# + [markdown] id="JJoK3M8TBAVU"
# **Challenge**: How many different sets did LEGO sell in their first year? How many types of LEGO products were on offer in the year the company started?
# + id="h-Tf1w7IBBg9" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="68e78610-bb44-4edb-e891-8e357b84532a"
set_df[set_df['year']==1949]
# + [markdown] id="RJMMYQYqIquo"
# **Challenge**: Find the top 5 LEGO sets with the largest number of parts.
# + id="toJvjRuQIqup" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="71dbeaf7-cc6e-4520-b86b-1347dbb33079"
set_by_parts_df = set_df.sort_values('num_parts', ascending=False, ignore_index=True)
set_by_parts_df.head()
# + [markdown] id="uSyhOzAHIqur"
# **Challenge**: Use <code>.groupby()</code> and <code>.count()</code> to show the number of LEGO sets released year-on-year. How do the number of sets released in 1955 compare to the number of sets released in 2019?
# + id="qjdrktZAIqus" colab={"base_uri": "https://localhost:8080/"} outputId="738c0dcc-94dc-47f6-d297-521ff0c8e564"
sets_by_year = set_df.groupby('year').count()
sets_by_year['set_num'].head()
# + id="tFInsHOkIqut" colab={"base_uri": "https://localhost:8080/"} outputId="0ac446e6-53ce-4a08-c558-44ec76a90117"
num_sets_by_year = sets_by_year['set_num']
print(f'Number of sets released in 1955: {num_sets_by_year[1955]} set(s)')
print(f'Number of sets released in 2019: {num_sets_by_year[2019]} set(s)')
# + [markdown] id="xJrmIOULIquv"
# **Challenge**: Show the number of LEGO releases on a line chart using Matplotlib. <br>
# <br>
# Note that the .csv file is from late 2020, so to plot the full calendar years, you will have to exclude some data from your chart. Can you use the slicing techniques covered in Day 21 to avoid plotting the last two years? The same syntax will work on Pandas DataFrames.
# + id="Nckj4lSGIquw" colab={"base_uri": "https://localhost:8080/", "height": 954} outputId="a389974f-5eb7-4789-c9b9-51fc457f0c78"
plt.figure(figsize=(10,16))
plt.xlabel('Year', fontsize=14)
plt.ylabel('Number of Sets', fontsize=14)
plt.plot(sets_by_year[:-2].index, sets_by_year[:-2]['set_num'])
# + [markdown] id="xrDeNYYXIqu1"
# ### Aggregate Data with the Python .agg() Function
#
# Let's work out the number of different themes shipped by year. This means we have to count the number of unique theme_ids per calendar year.
# + dc={"key": "<KEY>"} tags=["sample_code"] id="qx8pTau4Iqu2" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="37728f2b-bb1a-4205-c15f-7d23b5b6fb08"
theme_by_year_df = set_df.groupby('year').agg({'theme_id': pd.Series.nunique})
theme_by_year_df.rename(columns={'theme_id':'num_of_theme'}, inplace=True)
theme_by_year_df
# + [markdown] id="immCqqw1Iqu5"
# **Challenge**: Plot the number of themes released by year on a line chart. Only include the full calendar years (i.e., exclude 2020 and 2021).
# + id="r2pamQEkIqu5" colab={"base_uri": "https://localhost:8080/", "height": 954} outputId="3f4cda61-74b9-4e7d-91d8-532ae53a6ec4"
plt.figure(figsize=(10,16))
plt.xlabel('Year', fontsize=14)
plt.ylabel('Number of Themes', fontsize=14)
plt.plot(theme_by_year_df[:-2].index, theme_by_year_df[:-2]['num_of_theme'])
# + [markdown] id="uBbt9-lJIqu7"
# ### Line Charts with Two Separate Axes
# + id="j7lQ_amFIqu7" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="8dac2e8b-7238-4a96-fdc5-670bb7102eeb"
ax1 = plt.gca()
ax2 = ax1.twinx()  # twinx() creates a second y-axis that shares the same x-axis as the first
# Set up labels
ax1.set_ylabel('Number of Sets', color='g')
ax2.set_ylabel('Number of Themes', color='b')
ax1.set_xlabel('Year', color='r')
# Print plot
ax1.plot(sets_by_year[:-2].index, sets_by_year[:-2]['set_num'],color='g')
ax2.plot(theme_by_year_df[:-2].index, theme_by_year_df[:-2]['num_of_theme'], color='b')
# + [markdown] id="7BHYaUf-Iqu9"
# **Challenge**: Use the <code>.groupby()</code> and <code>.agg()</code> functions together to figure out the average number of parts per set. How many parts did the average LEGO set have in 1954 compared to, say, 2017?
# + id="W7BcH9vuIqu9" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="6bfab91c-2e23-498c-d366-50999a7c5f39"
avg_parts_per_set = set_df.groupby('year').agg({'num_parts':'mean'})
avg_parts_per_set.head()
# + id="fjbb3tZcIqu_" colab={"base_uri": "https://localhost:8080/"} outputId="64ece574-8bb3-47c5-f990-80a982983c0f"
avg_parts = avg_parts_per_set['num_parts']
print(f'Average number of parts per set in 1954: {int(avg_parts[1954])} part(s)')
print(f'Average number of parts per set in 2017: {int(avg_parts[2017])} part(s)')
# + [markdown] id="bAeTe2XqIqvB"
# ### Scatter Plots in Matplotlib
# + [markdown] id="SAViZ_TYIqvB"
# **Challenge**: Has the size and complexity of LEGO sets increased over time based on the number of parts? Plot the average number of parts over time using a Matplotlib scatter plot. See if you can use the [scatter plot documentation](https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.scatter.html) before I show you the solution. Do you spot a trend in the chart?
# + id="EQNZ0D7JIqvB" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="f629dd5e-b7f6-42c7-ec3a-f603d7bef537"
plt.xlabel('Year')
plt.ylabel('Average parts per set')
plt.scatter(avg_parts_per_set.index[:-2], avg_parts_per_set['num_parts'][:-2], color='g')
# + [markdown] id="xK226Ip-IqvE"
# ### Number of Sets per LEGO Theme
# + [markdown] id="VKHa1FePIqvE"
# LEGO has licensed many hit franchises from Harry Potter to Marvel Super Heroes to many others. But which theme has the largest number of individual sets?
# + id="hOBcNrC9IqvE" colab={"base_uri": "https://localhost:8080/"} outputId="fe0a08c5-cdb9-45b4-804b-b6bfd70ba5f1"
set_theme_counts = set_df['theme_id'].value_counts()
set_theme_counts
# + [markdown] id="J-i6JULGIqvG"
# **Challenge**: Use what you know about HTML markup and tags to display the database schema: https://i.imgur.com/Sg4lcjx.png
# + [markdown] id="27oDwiPHIqvH"
# <img src='https://i.imgur.com/Sg4lcjx.png'>
# + [markdown] id="J_0iuerKIqvG"
# ### Database Schemas, Foreign Keys and Merging DataFrames
#
# The themes.csv file has the actual theme names. The sets.csv has <code>theme_ids</code> which link to the <code>id</code> column in the themes.csv.
# + [markdown] id="cp1tMW6oIqvH"
# **Challenge**: Explore the themes.csv. How is it structured? Search for the name 'Star Wars'. How many <code>id</code>s correspond to this name in the themes.csv? Now use these <code>id</code>s and find the corresponding sets in the sets.csv (Hint: you'll need to look for matches in the <code>theme_id</code> column)
# + id="3uN3wN5sIqvH" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="31b693d8-828e-43aa-b5e9-2fb4e323e523"
themes_df = pd.read_csv('Data/themes.csv')
themes_df
# + id="xAO2XlQGIqvJ"
theme_count_df = pd.DataFrame(columns=['Name', 'Number of Sets'])
for index in set_theme_counts.index:
theme_name = themes_df[index==themes_df['id']]['name']
theme_count_df = theme_count_df.append({'Name':theme_name.item(),'Number of Sets':set_theme_counts[index]}, ignore_index=True)
theme_count_df = theme_count_df.drop_duplicates()
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="E79chTSIskz8" outputId="1dfcfa5d-9938-40af-b7be-ca62967a5827"
theme_count_df.head()
# + [markdown] id="SmTCXWKKIqvQ"
# ### Merging (i.e., Combining) DataFrames based on a Key
#
# + id="esKQULhcIqvR"
set_theme_counts = pd.DataFrame({
'id':set_theme_counts.index,
'set_counts':set_theme_counts
})
# + id="i0LobgIvIqvT" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="3b252071-dbfe-4103-b8de-b7c1505b2285"
# Faster way to merge than my looping
merged_theme_count_df = pd.merge(set_theme_counts, themes_df, on='id')
merged_theme_count_df.head()
# + id="I7UMP7VXIqvU" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a777af8f-c586-4c12-8eba-2859d3428e00"
plt.figure(figsize = (10,16))
plt.xlabel('Name')
plt.ylabel('Set Counts')
plt.xticks(rotation=45)
plt.bar(merged_theme_count_df['name'][:10], merged_theme_count_df['set_counts'][:10])
| Advanced/73/Day_73_LEGO_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/adeirvan123/Adafruit_BusIO/blob/master/hemanorph.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="pMHXaFCepqMZ"
import numpy as np
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 248} id="U74xP3Jwp6wy" outputId="f468f644-4250-4dd0-ad29-8ec11a432423"
# Parameters used
n = 1000000
t = np.logspace(np.log10(10),np.log10(500),n)
# Feel free to change these parameters
A = [ 1, 6, 1.5, 1.5 ]
d = [ .004, .001, .002, .0015 ]
f = [ 3, 1, 4, 2.5 ]
# Create the x and y pairs
x = A[0]*np.sin(t*f[0])*np.exp(-d[0]*3*t) + A[1]*np.sin(t*f[1])*np.exp(-d[1]*t)
y = A[2]*np.sin(t*f[2])*np.exp(-d[2]*.5*t) + A[3]*np.sin(t*f[3])*np.exp(-d[3]*t)
# Display the plot
plt.plot(x,y,'c',linewidth=.1)
plt.axis('off')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 278} id="KlLicVq0p7wV" outputId="3ec41797-ce83-4318-fa09-2ab279d689e7"
# Take a single component to plot on its own
plt.plot(np.sin(t*f[0])*np.exp(-d[0]*3*t),linewidth=.5)
plt.show()
| hemanorph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
h1b_data = pd.read_excel('h1bdata.xlsx')
h1b_data.head()
h1b_data.info()
h1b_data['SUBMIT DATE'] = pd.to_datetime(h1b_data['SUBMIT DATE'])
h1b_data['START DATE'] = pd.to_datetime(h1b_data['START DATE'])
h1b_data.info()
h1b_data['DAYS FOR RESULT'] = (h1b_data['START DATE']-h1b_data['SUBMIT DATE'])
import datetime as dt
h1b_data.info()
h1b_data
h1b_data['SUBMISSION YEAR']=h1b_data['SUBMIT DATE'].dt.year
h1b_data['SUBMISSION MONTH']=h1b_data['SUBMIT DATE'].dt.month
h1b_data.head()
h1b_data['CITY'] = (h1b_data['LOCATION'].str.split(", ", expand=True))[0]
h1b_data['STATE'] = (h1b_data['LOCATION'].str.split(", ", expand=True))[1]
h1b_data.head()
h1b = h1b_data
h1b.drop(['LOCATION', 'Sr.No.'], axis = 1, inplace = True)
h1b['JOB TITLE'].unique()
h1b['JOB TITLE'].nunique()
# Categorise job titles with regex prefix matches; later rules only fill rows still uncategorised
h1b.loc[h1b['JOB TITLE'].str.contains(r"^DATA SCI"), 'CATEGORY']="DATA SCIENTIST"
h1b.loc[(h1b['JOB TITLE'].str.contains(r"^DATA ANA"))&(h1b['CATEGORY'].isnull()), 'CATEGORY']="DATA ANALYST"
h1b.loc[(h1b['JOB TITLE'].str.contains(r"^BUSINESS ANA"))&(h1b['CATEGORY'].isnull()), 'CATEGORY']="BUSINESS ANALYST"
h1b
h1b.to_excel("H1B_Clean.xlsx")
| Scraping Code/.ipynb_checkpoints/data_cleaning-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="VoU-jVG1otWk"
# ## Disaster Tweet Classification
#
# **Maintained by:** [<NAME>](https://github.com/manan-paneri-99)
#
# * NLP, Tensorflow, Tensorflow Hub
# + [markdown] id="ECSvyyoSoiYS"
# [](https://colab.research.google.com/drive/1YaJu7BC72DJk0RTq26FTaWIMVumVUuZv?usp=sharing)
# + [markdown] id="gzut7KyXqErm"
# ### 1. Setup
# + colab={"base_uri": "https://localhost:8080/"} id="1dVq8rHnE4W4" outputId="19469482-307f-43bd-e9fe-023011af2657"
# !nvidia-smi
# + id="ZX1ZOnyiQvlU"
# !pip install -q tensorflow-text
# + id="0B4nWhKbFaJn"
import pandas as pd
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import tensorflow_text as text
import matplotlib.pyplot as plt
import pathlib
import shutil
import tempfile
tf.get_logger().setLevel('ERROR')
# + colab={"base_uri": "https://localhost:8080/"} id="i0EtutENJBpo" outputId="57a6c73e-9687-45a9-e94d-e59e0abe31a7"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="SxFNzYiatMD2"
# ### 2. Loading the data
# + [markdown] id="WRw_QP8nqy0U"
# Link to data repository : [Data](https://drive.google.com/drive/folders/1-De_MJQftD33OK_ZY3orOHAnTh3LMfG4?usp=sharing)
#
# Change the data_path to: 'content/drive/Shareddrive/data'
# + id="LtRmC_THK6Kx"
data_path = '/content/drive/MyDrive/Kaggle/Natural Language Processing with Disaster Tweets/data'
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="oWt7XDHsLKHK" outputId="e51e810d-e185-4435-d2e8-240b8a37fc56"
df = pd.read_csv(data_path + '/train.csv', low_memory=False)
df.shape
df.head(8)
# + id="MsLvIJc1LXAj" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="a83984e2-64d9-4a99-96c8-4fb0ec52a102"
df['target'].plot(kind = 'hist', title= 'Target dist')
# + [markdown] id="e8IswWqitQ0_"
# ### 3. Preprocessing
# + colab={"base_uri": "https://localhost:8080/"} id="gX358vSmF_fV" outputId="4c897ac6-3b4d-4265-c51d-d337173a08d1"
from sklearn.model_selection import train_test_split
train_df, valid_df = train_test_split(df, random_state=43, train_size=0.8)
print(train_df.shape)
print(valid_df.shape)
# + [markdown] id="sSfs_nsKtUjv"
# ### 4. Building model parts
# + id="6mt-F5RuqM0G"
models = ['bert_en_uncased_L-12_H-768_A-12', 'bert_en_uncased_L-24_H-1024_A-16',
'bert_en_cased_L-12_H-768_A-12']
preprocessors = ['https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3']
modules = ['https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/3',
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3']
# + id="fI-gO5H-E9D5"
preprocessor_url = preprocessors[2]
module_url = modules[2]
# + id="u1NlB98wMuuf"
bert_preprocess_model = hub.KerasLayer(preprocessor_url)
# + [markdown] id="5DknO5PjtZNL"
# ### 5. Creating the model
# + id="Je9pJJ4fGCYM"
def train_and_evaluate_model():
text_input = tf.keras.Input(shape=(), dtype=tf.string, name='text')
preprocessing_layer = hub.KerasLayer(preprocessor_url, name='preprocessing')
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(module_url, trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
#net = tf.keras.layers.Dense(128, activation='relu')(net)
net = tf.keras.layers.Dense(64, activation='relu')(net)
net = tf.keras.layers.Dropout(0.1)(net)
net = tf.keras.layers.Dense(1, activation='sigmoid', name= 'classifier')(net)
return tf.keras.Model(text_input, net)
# + id="5Lb5sVXQJVvY"
loss = tf.keras.losses.BinaryCrossentropy()  # the classifier ends in a sigmoid, so it outputs probabilities, not logits
metrics = tf.metrics.BinaryAccuracy()
# + id="p-Ese0rZJg9C"
epochs = 5
#steps_per_epoch = tf.data.experimental.cardinality(train_df).numpy()
#num_train_steps = steps_per_epoch * epochs
#num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = tf.keras.optimizers.Adam(learning_rate=init_lr)
# + id="CmtqwPgbKlYv"
classifier_model = train_and_evaluate_model()
classifier_model.compile(optimizer=optimizer, loss =loss, metrics=metrics)
# + id="r6kv0iBEp8J2"
histories = {}
# + [markdown] id="EVaGeG4UtgAj"
# ### 6. Training the model
# + colab={"base_uri": "https://localhost:8080/"} id="5vUvXAZF3IFn" outputId="edf77b93-9f20-4d61-d698-2b22b4c99d8b"
print('Training model with', models[2])
histories[models[2]] = classifier_model.fit(train_df['text'], train_df['target'],
validation_data=(valid_df['text'], valid_df['target']),
epochs=epochs)
# + colab={"base_uri": "https://localhost:8080/", "height": 225} id="8eGvyRaf58fF" outputId="ee807152-787d-4ecb-d117-cb20e60c6bbe"
test_df = pd.read_csv(data_path + '/test.csv', low_memory=False)
test_df.shape
test_df.head(6)
# + colab={"base_uri": "https://localhost:8080/"} id="_YtDNcgRKM8f" outputId="5696d522-2bb5-4615-d519-b04b7efcf067"
test = test_df['text']
test
# + id="O_6wWXyy5kCr"
results = classifier_model.predict(test_df['text'])
# + [markdown] id="9dQYGWvktl50"
# ### 7. Test and evaluate
# + colab={"base_uri": "https://localhost:8080/"} id="jzYLoFVlmD4c" outputId="15e26bd3-f71d-4339-a247-1cfe1f765f77"
threshold =0.5
results = np.where(results > threshold, 1, 0)
results
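The thresholding step above is just an element-wise comparison. A minimal self-contained sketch, with made-up sigmoid outputs standing in for the model predictions:

```python
import numpy as np

probs = np.array([[0.91], [0.12], [0.50], [0.73]])  # hypothetical sigmoid outputs
labels = np.where(probs > 0.5, 1, 0)                # strict cut at the threshold
# Note that exactly 0.50 is NOT greater than 0.5, so it maps to class 0.
```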
# + colab={"base_uri": "https://localhost:8080/", "height": 438} id="tCL5hhvV6TYk" outputId="f3a76b59-bb52-4977-91f1-64d39ed58f5b"
history_dict = histories[models[2]].history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
plt.plot(epochs, loss, 'r', label='Training loss')        # 'r' = solid red line
plt.plot(epochs, val_loss, 'b', label='Validation loss')  # 'b' = solid blue line
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
# + colab={"base_uri": "https://localhost:8080/", "height": 431} id="TeJSJtXvmj93" outputId="e387de03-2486-427f-d622-b5343c129938"
Y_test = results
df_submission = pd.read_csv(data_path + '/sample_submission.csv', index_col=0).fillna(' ')
df_submission['target'] = Y_test
df_submission
# + id="HWQ8Qq8Cprge"
| demo.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,.md//md
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 5 RoboLab challenges
#
# In this notebook, you will be presented with a series of challenges to complete. These challenges are a bit different from the activities you have completed earlier in RoboLab sessions. We leave you to work out the details of the challenges for yourself, although you are encouraged to use the forums if you need help or if you want to share or discuss any of your ideas with other students. Essentially, the challenges define a task for you to complete and you must decide how to complete them.
#
# The purpose of the challenges is to allow you to try your hand at writing your own RoboLab environment programs. Don’t spend too much time on this work. If you get stuck, take a break: it’s surprising how often a solution to a programming problem comes to mind if you take a few minutes away from the screen and the keyboard and do something completely different instead.
#
# The challenges in the next notebook are more difficult and are completely optional. They generally require you to have had some computer programming experience before you started this module.
# ## 5.1 Configuring the Simulator
#
# To download code to the simulator, use one of the simulator magics.
#
# For example, prefix a code cell with the `%%sim_magic_preloaded` block magic as the first line to download the code in the cell to the simulator. Remember, you can use the `%sim_magic_preloaded --preview` command to show the code that is automatically imported and the `%sim_magic --help` command to show all available magic switches.
#
# You may find the following specific switches useful when configuring the simulator via the magic:
#
# - `-b BACKGROUND_NAME`: load in a particular background file;
# - `-x X_COORD`: specify the initial x-coordinate value for the simulated robot;
# - `-y Y_COORD`: specify the initial y-coordinate value for the simulated robot;
# - `-a ANGLE`: specify the initial rotation angle for the simulated robot;
# - `-p`: specify pen-down mode;
# - `-C`: clear pen-trace.
#
# Load in the simulator in the usual way:
# +
from nbev3devsim.load_nbev3devwidget import roboSim, eds
# %load_ext nbev3devsim
# -
# ### 5.1.1 Challenge – Moving the simulated robot forwards
#
# Write a program to make the simulated robot move forwards for two seconds. Download your program to the simulator and run it to check that it performs as required.
#
# Remember to prefix the code cell with a magic command that will download the code to the simulator.
# + [markdown] student=true
# *Make any notes / observations about how your program worked, or any issues associated with getting it to work, etc., to help you complete the task here.*
# + student=true
# + [markdown] tags=["eportfolio"]
# ### 5.1.2 Challenge – Fitness training
#
# [*You can complete and submit this activity as part of your ePortfolio.*](https://learn2.open.ac.uk/mod/oucontent/olink.php?id=1704241&targetdoc=TM129+ePortfolio)
#
# Write a program to make the simulated robot move forwards a short distance and then reverse to its starting point, repeating this action another four times (so five traverses in all). Download your program to the simulator and run it to check that it performs as required.
#
# Optionally, extend your program so that it speaks a count of how many traverses it has completed so far each time it gets back to the start.
# + [markdown] student=true
# *Make any notes to help you complete the task here.*
# + student=true
# Add your code here...
# Remember to start the cell with some appropriate magic to download code to the simulator
# + [markdown] student=true
# *Make any notes / observations about how your program worked, or any issues associated with getting it to work, etc., to help you complete the task here.*
# -
# ### 5.1.3 Challenge – Making a countdown program
#
# Write a program to make the simulated robot speak aloud a count down from 10 to 0, finishing by saying ‘OK’. Download the program to the simulator and run it to check that it performs as required.
# + [markdown] student=true
# *Make any notes to help you complete the task here.*
# + student=true
# Add your code here...
# Remember to start the cell with some appropriate magic to download code to the simulator
# + [markdown] student=true
# *Make any notes / observations about how your program worked, or any issues associated with getting it to work, etc., to help you complete the task here.*
# -
# ### 5.1.4 Challenge – Fitness training take 2
#
# In the first fitness training challenge, the robot had to cover the same distance backwards and forwards five times.
#
# In this challenge, the robot should only do three forwards and backwards traverses, but in a slightly different way. On the first, it should travel forward and backward a short distance; on the second, it should travel twice as far forward and backward as the first; on the third, it should travel three times as far forward and backward as the first.
#
# Download your program to the simulator and run it to check that it performs as required.
# + [markdown] student=true
# *Make any notes to help you complete the task here.*
# + student=true
# Add your code here...
# Remember to start the cell with some appropriate magic to download code to the simulator
# + [markdown] student=true
# *Make any notes / observations about how your program worked, or any issues associated with getting it to work, etc., to help you complete the task here.*
| content/03. Controlling program execution flow/03.5 Some RoboLab challenges.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-northeast-2:806072073708:image/datascience-1.0
# ---
# +
# Importing required libraries.
import pandas as pd
import numpy as np
import seaborn as sns #visualisation
import matplotlib.pyplot as plt #visualisation
# %matplotlib inline
sns.set(color_codes=True)
data_dir = '../data'
df_claims = pd.read_csv(f"{data_dir}/claims_preprocessed.csv", index_col=0)
df_customers = pd.read_csv(f"{data_dir}/customers_preprocessed.csv", index_col=0)
# -
from IPython.display import display as dp
dp(df_claims.head())
dp(df_customers.head())
| scratch/2.1.preprocess.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Render the Vega Charts as PNGs
#
# The vega charts are required as thumbnail PNGs for the simulation landing page. Currently there doesn't seem to be a way to do this without some manual input in Python.
import glob
from toolz.curried import pipe, map, do
import vega
import simulations
import json
def read_json(filepath):
with open(filepath, 'r') as stream:
data = json.load(stream)
return data
# +
#PYTEST_VALIDATE_IGNORE_OUTPUT
# !rm -rf ~/Downloads/vega*.png
# +
#PYTEST_VALIDATE_IGNORE_OUTPUT
_ = pipe(
"../data/charts/*_free_energy.json",
glob.glob,
sorted,
map(read_json),
map(vega.vega.Vega),
map(do(lambda obj: obj.display())),
list
)
# +
#PYTEST_VALIDATE_IGNORE_OUTPUT
# !ls ~/Downloads/vega*.png
# +
#PYTEST_VALIDATE_IGNORE_OUTPUT
from toolz.curried import itemmap
import shutil
mapping = {
'1a' : '',
'1b' : ' (1)',
'1c' : ' (2)',
'1d' : ' (3)',
'2a' : ' (4)',
'2b' : ' (5)',
'2c' : ' (6)',
'2d' : ' (7)',
}
pipe(
mapping,
itemmap(lambda kv: ('../images/' + kv[0] + '_free_energy.png',
'/home/wd15/Downloads/vega' + kv[1] + '.png')),
do(itemmap(lambda kv: (shutil.copy(kv[1], kv[0]), None)))
)
# +
#PYTEST_VALIDATE_IGNORE_OUTPUT
# !ls ../images/*.png
| _data/vega2png.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Render Vue from Template
# + tags=[]
import ipyvuetify as v
import traitlets
# + tags=[]
class Dashboard(v.VuetifyTemplate):
template_file = '../templates/renderVueTemplate.vue'
text = traitlets.Unicode('').tag(sync=True)
dashboard = Dashboard()
dashboard
| src/notebooks/render-vue-template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="hAcVCqunNHPq"
# 
# + [markdown] colab_type="text" id="xlIVUULwujKa"
#
# ## Data manipulation exercises - Part 3
# + [markdown] colab_type="text" id="RoztT_BzujKc"
# In this Jupyter notebook you will solve exercises using the Pandas library.\
# All the datasets used in the exercises are saved in the *datasets* folder.\
# All your code should be run in this Jupyter Notebook. Finally, if you wish, review your answers with your mentor.
# + [markdown] colab_type="text" id="VYyMlLtIujKe"
# #### Task 1. Import the dataset and save the data in a dataframe
#
# The data is saved in the file ***datasets/users_dataset.txt***.\
# Save the data in a variable named *users*.
# -
import pandas as pd
users = pd.read_csv('datasets/users_dataset.txt', sep='|')
# + [markdown] colab_type="text" id="GpGg63VvujKh"
# #### Task 2. Find the average age per occupation.
#
# *Hint: use the Pandas groupby function on the occupation column*
# + colab={} colab_type="code" id="xCzIIT60ujKi"
users.groupby('occupation').age.mean()
# + [markdown] colab_type="text" id="y1R0vw2XujKn"
# #### Task 3. For each occupation, find the minimum and maximum age.
# + colab={} colab_type="code" id="GhhaqSCtujKp"
users.groupby('occupation').age.agg([min,max])
# + [markdown] colab_type="text" id="DT-Lcsz2ujKz"
# #### Task 4. For each combination of occupation and gender, compute the average age.
#
# Example:
#
# ```
# occupation gender
# administrator F 40.638889
# M 37.162791
# artist F 30.307692
# M 32.333333
# ```
# + colab={} colab_type="code" id="aMIV75fTujKz"
users.groupby(['occupation', 'gender']).age.mean()
# + [markdown] colab_type="text" id="B2TxLDyoujK3"
# #### Task 5. For each occupation, compute the percentage of men and women.
#
# Example:
#
# ```
# occupation gender
# administrator F 45.569620
# M 54.430380
# artist F 46.428571
# M 53.571429
# doctor M 100.000000
# educator F 27.368421
# M 72.631579
# ```
# + colab={} colab_type="code" id="XnBH1LMuujK4"
round(users.groupby("occupation").gender.value_counts()/users.groupby("occupation").gender.count() * 100, 2)
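An alternative, arguably more readable, route to the same percentages is `pd.crosstab` with `normalize='index'`. The tiny frame below is made up so the example is self-contained:

```python
import pandas as pd

# Hypothetical mini-frame mirroring the users data.
mini = pd.DataFrame({
    'occupation': ['artist', 'artist', 'artist', 'doctor'],
    'gender':     ['F',      'M',      'M',      'M'],
})
# normalize='index' turns each occupation row into fractions summing to 1.
pct = pd.crosstab(mini['occupation'], mini['gender'], normalize='index') * 100
```

Here `pct.loc['doctor', 'M']` is 100.0, since the only doctor in the mini-frame is male.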
# + [markdown] colab_type="text" id="Wkw71CayujK-"
# **Congratulations! You have reached the end!**
| data-manipulation-exercises/Manipulacao_de_Dados_Ex_03 (2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="gkzPgvS2F-2C"
# # Enhancing Vision with Convolutional Neural Networks - Best Model
# + colab={"base_uri": "https://localhost:8080/"} id="bFM9w_3WFsgW" outputId="9b3a3fb4-6d2b-4a33-f444-8c5d806f175a"
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
| semana14-15-01-2021/4-introdução-ao-tensorflow/.ipynb_checkpoints/6_best_model_cnn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine Learning and Statistics for Physicists
# A significant part of this material is taken from <NAME>'s material for a [UC Irvine](https://uci.edu/) course offered by the [Department of Physics and Astronomy](https://www.physics.uci.edu/).
#
# Content is maintained on [github](https://github.com/dkirkby/MachineLearningStatistics) and distributed under a [BSD3 license](https://opensource.org/licenses/BSD-3-Clause).
#
# [Table of contents](Contents.ipynb)
# - Start from the OUTLINE OF THE ML PIPELINE
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns  # used for the pairplot visualizations below
# ## Load Data
a_data = pd.read_hdf('data/cluster_a_data.hf5')
b_data = pd.read_hdf('data/cluster_b_data.hf5')
c_data = pd.read_hdf('data/cluster_c_data.hf5')
d_data = pd.read_hdf('data/cluster_d_data.hf5')
cluster_3d = pd.read_hdf('data/cluster_3d_data.hf5')
cosmo_data = pd.read_hdf('data/cosmo_data.hf5')
# ## SciKit Learn
# This will be our first time using the [SciKit Learn package](http://scikit-learn.org/stable/). We don't include it in our standard preamble since it contains many modules (sub-packages). Instead, we import each module as we need it. The ones we need now are:
from sklearn import preprocessing, cluster
# ## Find Structure in Data
# The type of structure we can look for is "clusters" of "nearby" samples, but the definition of these terms requires some care.
#
# ### Distance between samples
#
# In the simplest case, all features $x_{ij}$ have the same (possibly dimensionless) units, and the natural (squared Euclidean) distance between samples (rows) $j$ and $k$ is:
# $$
# d(j, k) = \sum_{\text{features}\,i} (x_{ji} - x_{ki})^2 \; .
# $$
# However, what if some columns have different units? For example, what is the distance between:
# $$
# \left( 1.2, 0.4~\text{cm}, 5.2~\text{kPa}\right)
# $$
# and
# $$
# \left( 0.7, 0.5~\text{cm}, 4.9~\text{kPa}\right)
# $$
# ML algorithms are generally unit-agnostic, so will happily combine features with different units but that may not be what you really want.
#
# One reasonable solution is to normalize each feature with the [sphering transformation](https://en.wikipedia.org/wiki/Whitening_transformation):
# $$
# x \rightarrow (x - \mu) / \sigma
# $$
# where $\mu$, $\sigma$ are the mean and standard deviation of the original feature values.
#
# The [sklearn.preprocessing module](http://scikit-learn.org/stable/modules/preprocessing.html) automates this process with:
cosmo_data
cosmo_normed = cosmo_data.copy()
cosmo_normed[cosmo_data.columns] = preprocessing.scale(cosmo_data)
cosmo_normed.describe()
# However, this may discard useful information contained in the relative normalization between features. To normalize only certain columns use, for example:
cosmo_normed = cosmo_data.copy()
for colname in 'ln10^{10}A_s', 'H0':
cosmo_normed[colname] = preprocessing.scale(cosmo_data[colname])
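A minimal sketch of the sphering transformation itself, on a generic 1-D feature array, to show that `preprocessing.scale` is just $(x - \mu)/\sigma$ per feature:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
z = (x - x.mean()) / x.std()  # sphering: subtract the mean, divide by sigma
# After the transform the feature has zero mean and unit standard deviation.
# (np.std defaults to the population sigma, matching preprocessing.scale.)
```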
# ### What is a "cluster"?
# In the simplest case (a), clusters are well separated by a line (in 2D, or hyperplane in more dimensions) and can be unambiguously identified by looking only at the distance between pairs of samples.
#
# In practice, clusters might overlap leading to ambiguities (b), or the clusters we expect to find might require considering groups of more than two samples at a time (c), or might have a non-linear separation boundary (d).
#
# 
# ## K-means Clustering
# The [K-means algorithm](https://en.wikipedia.org/wiki/K-means_clustering) is fast and robust, but assumes that your data consists of roughly round clusters of the same size (where the meanings of "round" and "size" depend on how your data is scaled).
#
# Let's first look at the first of our example datasets:
# plot the data in a_data
plt.scatter(a_data["x1"], a_data["x2"])
# Now let's use K-Means on this data and see if it works well.
#
# Most sklearn algorithms use a similar calling pattern:
# ```
# result = module.ClassName(..args..).fit(data)
# ```
# For the [KMeans algorithm](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans):
a_fit = cluster.KMeans(n_clusters=2).fit(a_data)
# The output of the clustering algorithms is one label (the ID of the assigned cluster) for each data point:
print(a_fit.labels_)
def display(data, fit):
labels = fit.labels_
plt.scatter(
data["x1"], data["x2"],
s=5, # set marker size
c=labels, # the colour is given by the labels
cmap=plt.cm.Accent, # set the colour palette
)
# set plot boundaries
plt.xlim(-9, 9)
plt.ylim(-5, 5)
display(a_data, a_fit)
# + [markdown] solution2="hidden" solution2_first=true
# **EXERCISE:** Use KMeans to fit the three other (b,c,d) 2D datasets with `n_clusters=2` and generate similar plots. Which fits give the expected results?
# + solution2="hidden"
b_fit = cluster.KMeans(n_clusters=2).fit(b_data)
display(b_data, b_fit)
# + solution2="hidden"
c_fit = cluster.KMeans(n_clusters=2).fit(c_data)
display(c_data, c_fit)
# + solution2="hidden"
d_fit = cluster.KMeans(n_clusters=2).fit(d_data)
display(d_data, d_fit)
# + [markdown] solution2="hidden"
# The fit results look reasonable for (b), although the sharp dividing line between the two clusters looks artificial.
#
# The fit results for (c) and (d) do not match what we expect because KMeans only considers one pair at a time, so cannot identify larger scale patterns that are obvious by eye.
# +
# Add your solution here...
# -
# ### Hyperparameters
# Algorithms have many parameters that influence their results for a given dataset, but these fall into two categories:
# - Parameters whose values are determined by the data during the fitting process.
# - Parameters which must be externally set.
#
# We refer the second group as "hyperparameters" and set their values during the "model selection" process, which we will discuss later.
# + [markdown] solution2="hidden" solution2_first=true
# **DISCUSS:** Are all of the arguments of the [KMeans constructor](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) hyperparameters?
# + [markdown] solution2="hidden"
# In principle, yes, but in practice some of these arguments will have no (or minimal) impact on the algorithm result under normal conditions. The arguments that are most clearly hyperparameters are:
# - `n_clusters`, `algorithm`, `tol`
#
# The arguments that are most clearly not hyperparameters are:
# - `verbose`, `precompute_distances`, `n_jobs`
#
# The remaining arguments are in the gray area. In general, it is prudent to experiment with your actual data to identify which arguments affect your results significantly.
# + [markdown] solution2="hidden" solution2_first=true
# **EXERCISE:** Fit dataset (b) with the `n_clusters` hyperparameter set to 3 and display the results. Comparing with the 2-cluster fit above, by eye, what do think is the "true" number of clusters? How might you decide between 2 and 3 more objectively?
# + solution2="hidden"
b_fit_3 = cluster.KMeans(n_clusters=3).fit(b_data)
display(b_data, b_fit_3)
# + [markdown] solution2="hidden"
# The plot above makes a convincing case (to me, at least) that there are three clusters. However, the "truth" in this case is two clusters.
#
# This illustrates the dangers of superimposing a fit result on your data: it inevitably "draws your eye" and makes the fit more credible. Look out for examples of this when reading papers or listening to talks!
# +
# Add your solution here...
# -
# ### Clustering in many dimensions
# An algorithm to find clusters in 2D data is just automating what you could already do by eye. However, most clustering algorithms also work well with higher dimensional data, where the clusters might not be visible in any single 2D projection.
fit_3d = cluster.KMeans(n_clusters=3).fit(cluster_3d)
cluster_3d['label'] = fit_3d.labels_
# FutureWarning from scipy.stats is harmless
sns.pairplot(cluster_3d, vars=('x0', 'x1', 'x2'), hue='label');
# These clusters look quite arbitrary in each of the 2D scatter plots. However, they are actually very well separated, as we can see if we rotate the axes:
R = np.array(
[[ 0.5 , -0.5 , -0.707],
[ 0.707, 0.707, 0. ],
[ 0.5 , -0.5 , 0.707]])
rotated_3d = cluster_3d.copy()
rotated_3d[['x0', 'x1', 'x2']] = cluster_3d[['x0', 'x1', 'x2']].dot(R)
sns.pairplot(rotated_3d, vars=('x0', 'x1', 'x2'), hue='label');
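As a sanity check on the rotation used above: `R` is (up to its rounded entries) an orthogonal matrix, so it rotates the point cloud without distorting distances, which is why the clusters stay just as well separated after the transform.

```python
import numpy as np

R = np.array(
    [[0.5,   -0.5,   -0.707],
     [0.707,  0.707,  0.],
     [0.5,   -0.5,    0.707]])

# For an orthogonal matrix, R @ R.T equals the identity; the small residual
# here comes from rounding sqrt(2)/2 to 0.707 in the matrix entries.
residual = R @ R.T - np.eye(3)
```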
# This example is contrived, but the lesson is that clustering algorithms can discover higher-dimensional structure that you might miss with visualization.
# + [markdown] heading_collapsed=true
# ## Anatomy of a ML Algorithm
# + [markdown] hidden=true
# Now that we have introduced our first ML algorithm, this is a good time for some general comments.
# + [markdown] hidden=true
# Most ML algorithms have some common features:
# - They seek to maximize (or minimize) some goal function $f(\theta, D)$ of the (fixed) data $D$, for some (unknown) parameters $\theta$.
# - The goal function embodies some model (perhaps only implicitly) of what the data is expected to look like.
#
# Questions to ask about the goal function:
# - Is there a single global optimum by construction? (i.e., is $\pm f$ a [convex function](https://en.wikipedia.org/wiki/Convex_function)?)
# - If not, might there be multiple local optima?
#
# Questions to ask about how the algorithm optimizes its goal function:
# - Is it exact or approximate?
# - If it is approximate, is it also iterative? If so, what are the convergence criteria?
# - Does the optimization use derivatives of the goal function to speed its convergence? Are these known in advance or how are they calculated?
# - How does the algorithm's running time scale with:
# - the number of samples in the data?
# - the number of features in the data?
# - the number of parameters in the model?
# + [markdown] hidden=true
# The goal function of the KMeans algorithm is:
# $$
# \sum_{i=1}^n\, \sum_{c_j = i}\, \left| x_j - \mu_i\right|^2
# $$
# where $c_j \in \{1, \ldots, n\}$ records the cluster that sample $j$ is assigned to, and
# $$
# \mu_i = \frac{1}{N_i} \sum_{c_j = i}\, x_j
# $$
# is the mean of the $N_i$ samples assigned to cluster $i$. The outer sum is over the number of clusters $n$ and $j$ indexes samples. If we consider sample $x_j$ to be a vector, then its elements are the feature values.
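As a concrete check, the goal function can be recomputed by hand from the labels and compared with the `inertia_` attribute that sklearn's `KMeans` exposes after fitting. The toy blobs below are an assumption for illustration, not the course datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated synthetic blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

fit = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Recompute sum_i sum_{c_j = i} |x_j - mu_i|^2 directly from the labels.
inertia = sum(
    np.sum((X[fit.labels_ == i] - X[fit.labels_ == i].mean(axis=0)) ** 2)
    for i in range(2)
)
```

At convergence the cluster centers equal the per-cluster means, so the hand-computed value matches `fit.inertia_` up to floating-point error.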
# + [markdown] hidden=true solution2="hidden" solution2_first=true
# **DISCUSS:** What are the parameters of the KMeans goal function? How many parameters are there?
# + [markdown] hidden=true solution2="hidden"
# The parameters are the cluster assignments $c_j$ and there is one per sample (row). Note that the number of parameters is independent of the number of features (columns) in the data.
#
# The number of clusters $n$ is a hyperparameter since it is externally set and not adjusted by the algorithm in response to the data.
#
# The means $\mu_i$ are not independent parameters since their values are fixed by the $c_j$ (given the data).
# + [markdown] hidden=true
# ### Supervised vs Unsupervised
# + [markdown] hidden=true
# ML algorithms come in two flavors, depending on whether they require some training data where you already know the answer ("supervised") or not ("unsupervised"). Clustering algorithms are unsupervised.
#
# An advantage of unsupervised ML is that it works with any input data, and can discover patterns that you might not already know about (as in the 3D example above). Even when you have training data available, an unsupervised algorithm can still be useful.
#
# The disadvantage of unsupervised learning is that we cannot formulate objective measures for how well an algorithm is performing, so the results are always somewhat subjective.
# -
# ## Other Clustering Methods
# We have focused on KMeans as a prototypical clustering algorithm, but there are [many others to chose from](http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html).
#
# We will finish this section with some brief experimentation with two alternatives that use more global information than KMeans, so are better suited to examples (c) and (d) above:
# - Spectral clustering: [sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html), [wikipedia](https://en.wikipedia.org/wiki/Spectral_clustering).
# - Density-based spatial clustering of applications with noise (DBSCAN): [sklearn](http://scikit-learn.org/stable/modules/clustering.html#spectral-clustering), [wikipedia](https://en.wikipedia.org/wiki/DBSCAN).
#
# Here is a visualization (taken from the sklearn documentation) of the many clustering algorithms available in `sklearn.cluster` and their behaviour on some stereotyped datasets:
# 
# *Image from sklearn documentation*
# + [markdown] solution2="hidden" solution2_first=true
# **EXERCISE:** Use `cluster.SpectralClustering` to fit `c_data` and `d_data` and display the results. Adjust the default hyperparameters, if necessary, to obtain the expected results.
# + solution2="hidden"
c_fit = cluster.SpectralClustering(n_clusters=2).fit(c_data)
display(c_data, c_fit)
# + solution2="hidden"
d_fit = cluster.SpectralClustering(n_clusters=2, gamma=2.0).fit(d_data)
display(d_data, d_fit)
# +
# Add your solution here...
# + [markdown] solution2="hidden" solution2_first=true
# **EXERCISE:** Use `cluster.DBSCAN` to fit `c_data` and `d_data` and display the results. Adjust the default hyperparameters, if necessary, to obtain the expected results.
# + solution2="hidden"
c_fit = cluster.DBSCAN(eps=1.5).fit(c_data)
display(c_data, c_fit)
# + solution2="hidden"
d_fit = cluster.DBSCAN().fit(d_data)
display(d_data, d_fit)
| machine-learning-1/Clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#For a positive integer n, let σ2(n) be the sum of the squares of its divisors. For example, σ2(10) = 1 + 4 + 25 + 100 = 130.
#Find the sum of all n, 0 < n < 64,000,000 such that σ2(n) is a perfect square.
# +
import math
number = input("Enter your number ")
def get_divisors(n):
    # collect both members of each divisor pair (i, n // i);
    # a set avoids double-counting sqrt(n) when n is a perfect square
    divisors = set()
    for i in range(1, int(math.sqrt(n)) + 1):
        if n % i == 0:
            divisors.add(i)
            divisors.add(n // i)
    return sorted(divisors)
def div_sqr_sum_issqr(x):
    # True if the sum of squared divisors of x is itself a perfect square
    sqsum = sum(d * d for d in get_divisors(x))
    sr = math.isqrt(sqsum)
    return sr * sr == sqsum
found_special = []
for x in range(1,int(number)+1):
if (div_sqr_sum_issqr(x)):
found_special.append(x)
print(found_special)
# -
sum(found_special)
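# The per-number factorization above gets slow as the bound grows. A divisor-sieve sketch (the helper name `sum_special` is ours, not from the notebook) computes σ2(n) for every n in one pass, which scales much better toward the 64,000,000 bound:

```python
import math

def sum_special(limit):
    # sigma2[k] accumulates d*d for every divisor d of k
    sigma2 = [0] * (limit + 1)
    for d in range(1, limit + 1):
        dd = d * d
        for multiple in range(d, limit + 1, d):
            sigma2[multiple] += dd
    # sum the n whose sigma2 value is a perfect square
    total = 0
    for n in range(1, limit + 1):
        r = math.isqrt(sigma2[n])
        if r * r == sigma2[n]:
            total += n
    return total

print(sum_special(100))  # 43, from n = 1 and n = 42 (sigma2(42) = 2500 = 50**2)
```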
| PE 211.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing how PCA works
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
centro = (7,7)
n_puntos = 20000
angulo = np.pi/4
a = 3
b = 6
xs = np.random.uniform(0, 20, n_puntos)
ys = np.random.uniform(0, 20, n_puntos)
# +
x_elipse = np.array(centro[0])
y_elipse = np.array(centro[1])
for i in range(n_puntos):
termino1 = (((xs[i] - centro[0])*np.cos(angulo) + (ys[i] - centro[1])*np.sin(angulo))**2)/a**2
termino2 = (((xs[i] - centro[0])*np.sin(angulo) - (ys[i] - centro[1])*np.cos(angulo))**2)/b**2
if termino1 + termino2 <= 1:
x_elipse = np.append(x_elipse, xs[i])
y_elipse = np.append(y_elipse, ys[i])
# -
plt.plot(x_elipse, y_elipse, 'ro')
plt.axis([0,15,0,15])
plt.show()
elipse = np.array(list(zip(x_elipse, y_elipse)))
pca = PCA(n_components=2)
datos_nuevos = pca.fit_transform(elipse)
datos_nuevos.shape
plt.plot(datos_nuevos[:,0], datos_nuevos[:,1],'ro')
plt.axis([-7,7,-6,7])
plt.show()
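# Beyond eyeballing the rotated scatter, `explained_variance_ratio_` quantifies how much variance each component captures. A small self-contained sketch (synthetic data with known axis lengths, since `elipse` above is random):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# anisotropic cloud: std 6 along x, std 3 along y, then rotated 45 degrees
base = rng.normal(size=(2000, 2)) * np.array([6.0, 3.0])
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
cloud = base @ rot.T

pca = PCA(n_components=2).fit(cloud)
# component variances are near 36 and 9, so the first ratio should be near 36/45 = 0.8
print(pca.explained_variance_ratio_)
```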
# ## Dimensionality reduction using PCA
z_elipse = np.random.uniform(1,1.1, len(x_elipse))
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
ax.scatter(x_elipse, y_elipse, z_elipse)
elipse_3d = np.array(list(zip(x_elipse, y_elipse, z_elipse)))  # use the 3-D points, not the 2-D ellipse
pca2 = PCA(n_components=2)
datos_nuevos2 = pca2.fit_transform(elipse_3d)
datos_nuevos2.shape
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
ax.scatter(datos_nuevos2[:,0], datos_nuevos2[:,1])
| notebooks/ejemplo_pca.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy
# +
# Load of all .csv
df_ubereats_centro = pd.read_csv(r'csv/ubereats-pizza-centro.csv')
df_ubereats_alamos = pd.read_csv(r'csv/ubereats-pizza-alamos.csv')
df_ubereats_juriquilla = pd.read_csv(r'csv/ubereats-pizza-juriquilla.csv')
df_ubereats_milenio = pd.read_csv(r'csv/ubereats-pizza-milenio.csv')
df_ubereats_refugio = pd.read_csv(r'csv/ubereats-pizza-refugio.csv')
df_ubereats_balvanera = pd.read_csv(r'csv/ubereats-pizza-balvanera.csv')
df_ubereats_campanario = pd.read_csv(r'csv/ubereats-pizza-campanario.csv')
df_ubereats_cerritocolorado = pd.read_csv(r'csv/ubereats-pizza-cerritocolorado.csv')
df_ubereats_cimatario = pd.read_csv(r'csv/ubereats-pizza-cimatario.csv')
df_ubereats_el_pueblito = pd.read_csv(r'csv/ubereats-pizza-el-pueblito.csv')
df_rappi_centro = pd.read_csv(r'csv/rappi-pizza-centro.csv')
df_rappi_alamos = pd.read_csv(r'csv/rappi-pizza-alamos.csv')
df_rappi_juriquilla = pd.read_csv(r'csv/rappi-pizza-juriquilla.csv')
df_rappi_milenio = pd.read_csv(r'csv/rappi-pizza-milenio.csv')
df_rappi_refugio = pd.read_csv(r'csv/rappi-pizza-refugio.csv')
df_rappi_balvanera = pd.read_csv(r'csv/rappi-pizza-balvanera.csv')
df_rappi_campanario = pd.read_csv(r'csv/rappi-pizza-campanario.csv')
df_rappi_cerritocolorado = pd.read_csv(r'csv/rappi-pizza-cerritocolorado.csv')
df_rappi_cimatario = pd.read_csv(r'csv/rappi-pizza-cimatario.csv')
df_rappi_el_pueblito = pd.read_csv(r'csv/rappi-pizza-el-pueblito.csv')
frames = [
df_ubereats_centro,
df_ubereats_alamos,
df_ubereats_juriquilla,
df_ubereats_milenio,
df_ubereats_refugio,
df_ubereats_balvanera,
df_ubereats_campanario,
df_ubereats_cerritocolorado,
df_ubereats_cimatario,
df_ubereats_el_pueblito,
df_rappi_centro,
df_rappi_alamos,
df_rappi_juriquilla,
df_rappi_milenio,
df_rappi_refugio,
df_rappi_balvanera,
df_rappi_campanario,
df_rappi_cerritocolorado,
df_rappi_cimatario,
df_rappi_el_pueblito]
# We concat all frames and ignore the current index to generate a new one
df = pd.concat(frames, ignore_index=True)
# -
print('Size of the df: {} items'.format(df.count()['name']))
# Note that the price-food has MX$
df.loc[:,['name','rating','evals','name-food','price-food']]
# +
# Remove MX$ from price-food
def parse_price(x):
    # strip a leading "MX$" or "$" and cast to float; numbers pass through unchanged
    if isinstance(x, (int, float)):
        return float(x)
    return float(x[3:]) if x[0] == 'M' else float(x[1:])

df['price-food'] = df['price-food'].apply(parse_price)
# -
df.loc[:,['name','name-food','price-food']]
# Get all "pizza" dataframes
dfPizzas = df[df['name-food'].str.contains("pizza", na=False, case=False)]
dfPizzas.reset_index(drop=True, inplace=True)
dfPizzas.loc[:,['name','name-food','price-food']]
# +
meanPizzaVal = dfPizzas['price-food'].mean()
print(f'The mean price is ${meanPizzaVal:.2f}')
# +
stdPizza = dfPizzas['price-food'].std()
varPizza = dfPizzas['price-food'].var()
print(f'Variance {varPizza}')
print(f'Standard deviation ${stdPizza}')
# -
desMuestra = numpy.std(dfPizzas['price-food'])
print(f'Standard deviation (numpy, ddof=0) ${desMuestra}')
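# The pandas and numpy values differ because `Series.std()` uses the sample estimator (`ddof=1`, dividing by n-1) while `numpy.std` defaults to the population estimator (`ddof=0`, dividing by n). A minimal sketch of the distinction:

```python
import numpy as np
import pandas as pd

precios = pd.Series([100.0, 150.0, 200.0, 250.0])
print(precios.std())            # sample std (ddof=1): divides by n-1
print(np.std(precios))          # population std (ddof=0): divides by n
print(np.std(precios, ddof=1))  # matches the pandas value
```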
# +
import matplotlib.pyplot as plt
plt.plot(dfPizzas['price-food'],'o')
plt.title('Pizza prices')
plt.ylabel('Price')
plt.xlabel('id')
plt.show()
# -
| PizzaProject-NewVersion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing lib
# +
import nltk
import urllib
import bs4 as bs
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
import warnings
warnings.filterwarnings("ignore")
import random
import string
import os
from sklearn.metrics.pairwise import cosine_similarity
from wikipedia import page
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from pandas import DataFrame
import pyttsx3
import speech_recognition as sr
from nltk.stem import WordNetLemmatizer
nltk.download('popular', quiet=True)
# -
# # Text Gathering
# ## Getting Weather details for city
page1=requests.get('https://www.timeanddate.com/weather/india')
# +
def temp(topic):
page = page1
soup = BeautifulSoup(page.content,'html.parser')
data = soup.find(class_ = 'zebra fw tb-wt zebra va-m')
tags = data('a')
city = [tag.contents[0] for tag in tags]
tags2 = data.find_all(class_ = 'rbi')
temp = [tag.contents[0] for tag in tags2]
indian_weather = pd.DataFrame(
{
'City':city,
'Temperature':temp
}
)
df = indian_weather[indian_weather['City'].str.contains(topic.title())]
return (df['Temperature'])
# -
# ## Scrape city detail from Wiki
def wiki_data(topic):
topic=topic.title()
topic=topic.replace(' ', '_',1)
url1="https://en.wikipedia.org/wiki/"
url=url1+topic
source = urllib.request.urlopen(url).read()
# Parsing the data/ creating BeautifulSoup object
soup = bs.BeautifulSoup(source,'lxml')
# Fetching the data
text = ""
for paragraph in soup.find_all('p'):
text += paragraph.text
import re
# Preprocessing the data
text = re.sub(r'\[[0-9]*\]',' ',text)
text = re.sub(r'\s+',' ',text)
text = text.lower()
text = re.sub(r'\d',' ',text)
text = re.sub(r'\s+',' ',text)
return (text)
# # Text Cleaning
# ## Remove Special char
# +
def rem_special(text):
remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)
return(text.translate(remove_punct_dict))
sample_text="I am sorry! I don't understand you."
rem_special(sample_text)
# -
# ## Stemming
# +
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
def stemmer(text):
words = word_tokenize(text)
for w in words:
text=text.replace(w,PorterStemmer().stem(w))
return text
stemmer("He is Eating. He played yesterday. He will be going tomorrow.")
# -
# ## Lemmatization
# +
lemmer = WordNetLemmatizer()
def LemTokens(tokens):
return [lemmer.lemmatize(token) for token in tokens]
sample_text="rocks corpora better" #default noun
LemTokens(nltk.word_tokenize(sample_text))
# -
# ## Stop words
# +
from nltk.tokenize.toktok import ToktokTokenizer
tokenizer = ToktokTokenizer()
stopword_list = nltk.corpus.stopwords.words('english')
def remove_stopwords(text, is_lower_case=False):
tokens = tokenizer.tokenize(text)
tokens = [token.strip() for token in tokens]
if is_lower_case:
filtered_tokens = [token for token in tokens if token not in stopword_list]
else:
filtered_tokens = [token for token in tokens if token.lower() not in stopword_list]
filtered_text = ' '.join(filtered_tokens)
return filtered_text
remove_stopwords("This is a sample sentence, showing off the stop words filtration.")
# -
# ## Finding part of Speech (POS)
# +
import spacy
spacy_df=[]
spacy_df1=[]
df_spacy_nltk=pd.DataFrame()
nlp = spacy.load('en_core_web_sm')
# Process whole documents
sample_text = ("The heavens are above. The moral code of conduct is above the civil code of conduct")
doc = nlp(sample_text)
# Token and Tag
for token in doc:
spacy_df.append(token.pos_)
spacy_df1.append(token)
df_spacy_nltk['origional']=spacy_df1
df_spacy_nltk['spacy']=spacy_df
#df_spacy_nltk
# -
# ## Name Entity Recognition
# +
import spacy
nlp = spacy.load('en_core_web_sm')
def ner(sentence):
doc = nlp(sentence)
for ent in doc.ents:
print(ent.text, ent.label_)
sentence = "A gangster family epic set in 1919 Birmingham, England; centered on a gang who sew razor blades in the peaks of their caps, and their fierce boss <NAME>."
ner(sentence)
# -
# ## Sentiment analysis using TextBlob
# +
from textblob import TextBlob
def senti(text):
testimonial = TextBlob(text)
return(testimonial.polarity)
sample_text="This apple is good"
print("polarity",senti(sample_text))
sample_text="This apple is not good"
print("polarity",senti(sample_text))
# -
# ## Spelling check
# +
from spellchecker import SpellChecker
spell = SpellChecker()
def spelling(text):
    splits = text.split()  # use the argument, not the global sample_text
for split in splits:
text=text.replace(split,spell.correction(split))
return (text)
sample_text="hapenning elephnt texte luckno sweeto"
spelling(sample_text)
# -
# ## Tokenizer
#TOkenisation
print(nltk.sent_tokenize("Hey how are you? I am fine."))
print(nltk.word_tokenize("Hey how are you? I am fine."))
# # Word Embedding
# ## TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer
documentA = 'This is about Messi'
documentB = 'This is about TFIDF'
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([documentA, documentB])
feature_names = vectorizer.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
dense = vectors.todense()
denselist = dense.tolist()
df = pd.DataFrame(denselist, columns=feature_names)
df
# # Conversation
# ## Voice enabled
# ### Chatbot speak
def speak(message):
engine= pyttsx3.init()
engine.say('{}'.format(message))
engine.runAndWait()
# +
engine = pyttsx3.init()
engine.say("Hello hi")
engine.runAndWait()
# -
# ### User input
r = sr.Recognizer()
mic = sr.Microphone()
with mic as source:
r.adjust_for_ambient_noise(source)
audio = r.listen(source)
text_audio=(r.recognize_google(audio))
print(r.recognize_google(audio))
engine.say(text_audio)
engine.runAndWait()
# ## Creating dictionary for cities
city = {}
city["lucknow"] = ["lucknow", "lko"]
city["delhi"]=["new delhi",'ndls']
# +
def city_name(sentence):
for word in sentence.split():
for key, values in city.items():
if word.lower() in values:
return(key)
# -
# ## Pre-processing all
def LemNormalize(text):
text=rem_special(text) #remove special char
text=text.lower() # lower case
text=remove_stopwords(text) # remove stop words
return LemTokens(nltk.word_tokenize(text))
# ## Generating answer using Cosine Similarity
#Generating answer
def response(user_input):
ToGu_response=''
sent_tokens.append(user_input)
word_vectorizer = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english')
all_word_vectors = word_vectorizer.fit_transform(sent_tokens)
similar_vector_values = cosine_similarity(all_word_vectors[-1], all_word_vectors)
idx=similar_vector_values.argsort()[0][-2]
matched_vector = similar_vector_values.flatten()
matched_vector.sort()
vector_matched = matched_vector[-2]
if(vector_matched==0):
ToGu_response=ToGu_response+"I am sorry! I don't understand you."
return ToGu_response
else:
ToGu_response = ToGu_response+sent_tokens[idx]
return ToGu_response
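# The retrieval step can be seen in isolation. A minimal sketch (toy corpus of our own, not the scraped Wikipedia text): vectorize the corpus plus the query, then return the corpus sentence whose TF-IDF vector has the highest cosine similarity to the query:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Lucknow is known for its kebabs and gardens.",
    "The weather in Delhi is hot in summer.",
    "Delhi has many historical monuments.",
]
query = "historical monuments of delhi"

vec = TfidfVectorizer()
mat = vec.fit_transform(corpus + [query])           # query is the last row
sims = cosine_similarity(mat[-1], mat[:-1]).flatten()
print(corpus[sims.argmax()])  # "Delhi has many historical monuments."
```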
# ## Input city
# +
topic=str(input("Please enter the city name you want to ask queries for: "))
topic=city_name(topic)
text=wiki_data(topic)
sent_tokens = nltk.sent_tokenize(text)# converts to list of sentences
word_tokens = nltk.word_tokenize(text)# converts to list of words
weather_reading=(temp(topic)).iloc[0]
# -
# ## Greetings
# +
# greetings Keyword matching
GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey")
GREETING_RESPONSES = ["hi", "hey", "hi there", "hello", "I am glad! You are talking to me"]
def greeting(sentence):
for word in sentence.split():
if word.lower() in GREETING_INPUTS:
return random.choice(GREETING_RESPONSES)
# -
# ## Places
# +
PLACES_INPUTS = ("places", "monuments", "buildings","places", "monument", "building")
import spacy
nlp = spacy.load('en_core_web_sm')
def ner(sentence):
places_imp=""
doc = nlp(sentence)
for ent in doc.ents:
if (ent.label_=="FAC"):
#print(ent.text, ent.label_)
places_imp=places_imp+ent.text+","+" "
return(places_imp)
places_imp=ner(text)
# keep the first occurrence of each token in the NER output
k = []
for i in places_imp.split():
    if i not in k:
        k.append(i)
PLACES_RESPONSES = ' '.join(k)
def places(sentence):
for word in sentence.split():
if word.lower() in PLACES_INPUTS:
return (PLACES_RESPONSES)
# -
# ## Weather
# +
WEATHER_INPUTS = ("weather", "temp", "temperature")
WEATHER_RESPONSES =weather_reading
def weather(sentence):
for word in sentence.split():
if word.lower() in WEATHER_INPUTS:
return (WEATHER_RESPONSES)
# -
# ## Chat
# +
continue_dialogue=True
print("ToGu: Hello")
speak("Hello")
while(continue_dialogue==True):
user_input = input("User:")
user_input=user_input.lower()
user_input=spelling(user_input) #spelling check
print("Sentiment score=",senti(user_input)) #sentiment score
if(user_input!='bye'):
if(user_input=='thanks' or user_input=='thank you' ):
print("ToGu: You are welcome..")
speak(" You are welcome")
else:
if(greeting(user_input)!=None):
tmp=greeting(user_input)
print("ToGu: "+tmp)
speak(tmp)
elif(weather(user_input)!=None):
tmp=weather(user_input)
print("ToGu: "+tmp)
speak(tmp)
elif(places(user_input)!=None):
tmp=places(user_input)
print("ToGu: Important places are "+tmp)
speak("Important places are")
speak(tmp)
else:
                print("ToGu: ", end="")
                tmp = response(user_input)  # avoid shadowing the temp() weather helper
                print(tmp)
                speak(tmp)
sent_tokens.remove(user_input)
else:
continue_dialogue=False
print("ToGu: Goodbye.")
speak("goodbye")
| Chatbot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine Learning with Tensorflow (Keras)
#
# This notebook shows the same systems as the ["P12" notebook](P12-MachineLearning.ipynb), but using Tensorflow/Keras
#
# + colab={} colab_type="code" id="n6wuE9L6IdUv"
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
# + colab={} colab_type="code" id="gXK1BbXwI7Xz"
x = np.array([ # x is an Nx3 (x[0] is always 1, for the threshold; x[1] and x[2]
    [1,1,1],   # are the "real" inputs that correspond to actual data. For example
    [1,1,2],   # x[1] could be how much you like salt and x[2] how hungry you are.
[1,2,1],
[1,2,2],
[1,2,3],
[1,3,2],
[1,0.3,0.5],
[1,2.5,2.5],
])
y = np.array([-1,-1,-1,1,1,1,-1,1]) # did they order fries? +1 -> yes, and -1 -> no.
# + colab={} colab_type="code" id="JRH59C63I-_z"
#
# build a perceptron-like single layer
#
l0 = tf.keras.layers.Dense(units=1, input_shape=[3])
# + colab={} colab_type="code" id="HUw9nWzdchIF"
#
# create a single sequence of this single layer
#
model = tf.keras.Sequential([l0])
# + colab={} colab_type="code" id="jJ8q0Xv8cmnd"
#
# use a "mean square" loss/error function and a simple training optimizer
#
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.1))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Pi5Cx9-tczb7" outputId="92f3dd61-0fe4-4621-ac4a-d3ad117aff87"
#
# train for 100 iterations
#
history = model.fit(x, y, epochs=100, verbose=False)
print("Finished training the model")
# + colab={"base_uri": "https://localhost:8080/", "height": 378} colab_type="code" id="Ou8i9-9KdEw3" outputId="7cb62315-b4fe-464c-de94-56530b6849ed"
#
# plot the training history.
#
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="_6hC6CcYdUPM" outputId="7cd75e8f-67a9-46e8-fd6b-634c16ede8a5"
print("These are the layer variables: {}".format(l0.get_weights()))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="yNy-WcIKdw1P" outputId="7134bb6b-4a26-426f-9b91-ec3600d875b1"
model.predict(np.array([[1,1,1]]))
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="c-ST9FzReMez" outputId="36707d08-b2b4-4fce-b14d-0bef63946cdd"
l0 = tf.keras.layers.Dense(units=4, input_shape=[3])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(x, y, epochs=100, verbose=False)
print("Finished training the model")
print("These are the l0 variables: {}".format(l0.get_weights()))
print("These are the l1 variables: {}".format(l1.get_weights()))
print("These are the l2 variables: {}".format(l2.get_weights()))
# + colab={"base_uri": "https://localhost:8080/", "height": 378} colab_type="code" id="UAFQZlv5fWU0" outputId="20ce73c1-92ba-4e57-8065-925b0fdf12b1"
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# +
N=100
xx,yy = np.meshgrid(np.linspace(min(x[:,1])-1,max(x[:,1]+1),N),
np.linspace(min(x[:,2])-1,max(x[:,2]+1),N))
xs = []
for i in range(N):
for j in range(N):
xs.append([1,xx[i,j],yy[i,j]])
zs = model.predict(np.array(xs))
zz = zs.reshape(N,N)
# +
plt.contourf(xx,yy,zz,levels=[-10,0,+10])
ix_fries = (y==1) # boolean, do you like fries?
ix_no_fries = np.invert(ix_fries) # boolean, inverse of ix_fries?
plt.title("Fake data! Perceptron output with data")
plt.xlabel("Do you like Salt?")
plt.ylabel("How hungry are you?")
plt.plot(x[ix_fries,1],x[ix_fries,2],'b.',label="fries")
plt.plot(x[ix_no_fries,1],x[ix_no_fries,2],'r.',label="no fries")
plt.legend()
plt.grid()
# -
import pandas as pd
df = pd.read_csv('Iris.csv') # load the Iris dataset
df.head()
# +
sub_frame = df[(df['Species']=='Iris-versicolor')|(df['Species']=='Iris-virginica')] # only two species
train_ix = np.random.rand(len(sub_frame))<0.8 # an array of random booleans, 80% true
train_df = sub_frame[train_ix] # choose a random "training" set using the random index
test_df = sub_frame[np.invert(train_ix)] # use the rest for "testing".
#
# x is a 5xN (~80 samples with 4 traits + offset in column 1)
#
x=np.array([
np.ones(len(train_df)),
train_df.PetalLengthCm.values,
train_df.PetalWidthCm.values,
train_df.SepalLengthCm.values,
train_df.SepalWidthCm.values,]).T # transpose to get in the right shape
#
# y is an Nx1, +1 if species == versi and -1 otherwise
#
y=np.where(train_df.Species=='Iris-versicolor',1,-1)
#
# build testing set arrays, just like training set, but using the remaining indexes
#
xt=np.array([
np.ones(len(test_df)),
test_df.PetalLengthCm.values,
test_df.PetalWidthCm.values,
test_df.SepalLengthCm.values,
test_df.SepalWidthCm.values,]).T
yt=np.where(test_df.Species=='Iris-versicolor',1,-1)
#
# Now plot all the data to visualize what we're seeing
#
versi_df=sub_frame[sub_frame.Species=='Iris-versicolor'] # get all the versi
virg_df=sub_frame[sub_frame.Species=='Iris-virginica'] # get all the virg
plt.plot(versi_df.PetalLengthCm.values, versi_df.PetalWidthCm.values, 'r.', label="Versicolor")
plt.plot(virg_df.PetalLengthCm.values, virg_df.PetalWidthCm.values, 'b.', label="Virginica")
plt.xlabel("Petal Length")
plt.ylabel("Petal Width")
plt.title("Petal Length and Width for two Species of Iris: All Data")
plt.legend(loc=2)
plt.grid()
# -
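# The random boolean mask above yields only a roughly 80/20 split. scikit-learn's `train_test_split` (shown as an alternative sketch, not what this notebook uses) gives an exact split and can stratify by class:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([1, -1] * 5)  # two balanced classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```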
l0 = tf.keras.layers.Dense(units=4, input_shape=[5])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(x, y, epochs=125, verbose=False)
print("Finished training the model")
print("These are the l0 variables: {}".format(l0.get_weights()))
print("These are the l1 variables: {}".format(l1.get_weights()))
print("These are the l2 variables: {}".format(l2.get_weights()))
plt.title("Training Error History")
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
plt.grid()
# +
#
# build up an array of predictions
#
vals=[]
for i in range(len(xt)):
vals.append((xt[i,1],xt[i,2],model.predict(np.array([xt[i]])),yt[i]))
z=np.array(vals,).T
#
# get outputs
#
gr=z[2]>=0.0
lt=np.invert(gr)
#
# plot outputs
#
plt.plot(z[0,gr],z[1,gr],'bx',label="Predict Yes") # show predictions of model with red and blue 'x's
plt.plot(z[0,lt],z[1,lt],'rx',label="Predict No")
#
# get original data
#
gr=z[3]>=0.0
lt=np.invert(gr)
#
# plot original data
#
plt.xlim(min(xt[:,1])-.5,max(xt[:,1])+.5)
plt.ylim(min(xt[:,2])-.5,max(xt[:,2])+.5)
#
# decorate with labels etc.
#
plt.plot(z[0,gr],z[1,gr],'b.',label="Data Yes") # show actual data with red and blue dots
plt.plot(z[0,lt],z[1,lt],'r.',label="Data No")
plt.legend(loc=2)
plt.xlabel("Petal Length")
plt.ylabel("Petal Width")
plt.title("Species is Versi?, Prediction vs Data")
plt.grid()
#
# final success rate on test
#
print("Prediction: %f%% Correct." % (((z[2]>0)==(z[3]>0)).sum()*100/len(z[1])))
# -
| P12b-MLTensorFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="pPMwFwr86JK9" executionInfo={"status": "ok", "timestamp": 1630270091998, "user_tz": 240, "elapsed": 171, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13953621006807715822"}} outputId="515398ff-4835-4e7d-92b9-1ad6c15dc71d"
import pandas as pd
import numpy as np
import progressbar
import tensorflow
import matplotlib.pyplot as plt
import time
import math
import statistics
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import learning_curve
from sklearn.kernel_ridge import KernelRidge
from google.colab import drive
drive.mount('/content/gdrive')
# + id="bBa2d1qz_2Fg"
data_path = 'gdrive/My Drive/Summer Research/Simulated CGM Data/Extracted/'
d1namo_data_path = 'gdrive/My Drive/Summer Research/Glucose/Diabetes/Cleaned Data/'
# + [markdown] id="Mf4teluT8QNV"
# Import data
# + id="BIeDnqSz8Y08"
def getData(c, fn):
if c == 'Extracted':
data_total = 20
train_size = 15
t_train = np.array(list(range(512*train_size)))
t_test = np.array(list(range(512*train_size,512*data_total)))
y_train = np.zeros(512*train_size)
y_test = np.zeros(512*(data_total-train_size))
for i in range(train_size):
y_train[range(512*i,512*(i+1))] = np.loadtxt(data_path+'adult#001_'+f'{(i+1):03d}'+'.csv', delimiter=',')
for i in range(train_size,data_total):
y_test[range(512*(i-train_size),512*(i-train_size+1))] = np.loadtxt(data_path+'adult#001_'+f'{(i+1):03d}'+'.csv', delimiter=',')
X_train = np.stack((t_train,y_train),axis=1)
X_test = np.stack((t_test,y_test),axis=1)
elif c == 'D1NAMO':
y = np.loadtxt(d1namo_data_path+'glucose ('+str(fn)+').csv', delimiter=',', skiprows=1, usecols=[2])
length = len(y)
train_size = int(0.6*length)
y_train = y[range(train_size)]
y_test = y[range(train_size,length)]
t_train = np.array(list(range(train_size)))
t_test = np.array(list(range(train_size,length)))
X_train = np.stack((t_train,y_train),axis=1)
X_test = np.stack((t_test,y_test),axis=1)
return X_train, X_test
# + [markdown] id="Gact5xOapwzd"
# Normalization
# + id="91sj54BVpwK0"
def normalize(X_train, interval_length):
scaler = MinMaxScaler(feature_range = (0, 1))
X_train_scaled = scaler.fit_transform(X_train)
features_set = []
labels = []
for i in range(interval_length, len(X_train)):
features_set.append(X_train_scaled[i-interval_length:i, 1])
labels.append(X_train_scaled[i, 1])
features_set, labels = np.array(features_set), np.array(labels)
features_set = np.reshape(features_set, (features_set.shape[0], features_set.shape[1], 1))
return features_set, labels, scaler
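# A toy run of the same windowing logic (interval length 3 instead of 180, and a counting series instead of scaled CGM readings) makes the resulting shapes concrete:

```python
import numpy as np

series = np.arange(10, dtype=float)  # stand-in for a scaled glucose trace
interval_length = 3
windows, targets = [], []
for i in range(interval_length, len(series)):
    windows.append(series[i - interval_length:i])  # the last 3 readings...
    targets.append(series[i])                      # ...predict the next one
windows = np.array(windows).reshape(-1, interval_length, 1)
targets = np.array(targets)
print(windows.shape, targets.shape)  # (7, 3, 1) (7,)
```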
# + [markdown] id="0L2xDO7Jtpjk"
# Create and train the LSTM
# + id="Tmi-cRWnv1UF"
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
def LSTM_Model():
#Creating the model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(features_set.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(units = 1))
return model
# + [markdown] id="UsNAQyv48rPc"
# Test the LSTM
# + id="ogrTlvXp8yCt"
def predict_LSTM(X_test, model, scaler, timestep_to_predict, interval_length):
X_test_scaled = scaler.fit_transform(X_test)
test_features = []
for i in range(interval_length, len(X_test)):
test_features.append(X_test_scaled[i-interval_length:i, 1])
test_features = np.array(test_features)
test_features = np.reshape(test_features, (test_features.shape[0], test_features.shape[1], 1))
p = list()
predictions = np.zeros((len(test_features)-timestep_to_predict,2))
predictions[:,0] = X_test_scaled[-len(predictions):,0]
widgets = [' [',
progressbar.Timer(format= 'elapsed time: %(elapsed)s'),
'] ',
progressbar.Bar('#'),' (',
progressbar.ETA(), ') ',
]
bar = progressbar.ProgressBar(max_value=len(predictions), widgets=widgets).start()
count = 0
for j in range(len(predictions)):
count += 1
bar.update(count)
for i in range(timestep_to_predict):
inp = test_features[j+i:(j+i+1),:,:]
if i != 0:
inp[:,range((interval_length-i),(interval_length)),:] = np.asarray(p).reshape(1,i,1)
p.append(model.predict(inp)[0,0])
        predictions[j,1] = p[-1]  # final step-ahead prediction (p[9] only worked for timestep_to_predict == 10)
p.clear()
predictions = scaler.inverse_transform(predictions)
return predictions
# + [markdown] id="PQnDkg2UuaER"
# Performance
# + id="J_fw6J2jubsZ"
def performance(X_test, predictions, time_in_minutes, fname):
plt.figure(figsize=(16,9))
plt.plot(range(0,5*len(predictions),5), 18.016*X_test[-len(predictions):,1], color='blue', label='Actual CGM')
plt.plot(range(0,5*len(predictions),5), 18.016*predictions[:,1], color='red', label='Predicted CGM')
plt.title('CGM Prediction ('+str(time_in_minutes)+' minutes ahead)')
plt.xlabel('Time (minutes)')
plt.ylabel('CGM (mg/dL)')
plt.legend()
rmse = math.sqrt(mean_squared_error(X_test[-len(predictions):,1], predictions[:,1]))
std = statistics.stdev(X_test[-len(predictions):,1])
avg_diff = 0
for i in range(len(predictions)-1):
avg_diff += float(abs(X_test[-len(predictions)+i+1,1] - predictions[i,1]))
avg_diff = avg_diff / (len(predictions)-1)
plt.savefig('gdrive/My Drive/Summer Research/Figures/LSTM/D1NAMO/'+str(time_in_minutes)+' minutes ahead/'+fname)
return rmse, std, avg_diff
# + id="yNVp5NBI_8UJ" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "error", "timestamp": 1630272175342, "user_tz": 240, "elapsed": 2083176, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13953621006807715822"}} outputId="bf3757d9-3242-4db7-b1ba-6fbf0d73522d"
for fn in [1,2,4,5,6,7,8]:
interval_length = 180
X_train, X_test = getData('D1NAMO', fn)
features_set, labels, scaler = normalize(X_train, interval_length)
model = LSTM_Model()
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(features_set, labels, epochs = 100, batch_size = 32)
rmses = []
stds = []
maes = []
for i in [10,20,30]:
predictions = predict_LSTM(X_test, model, scaler, timestep_to_predict=i, interval_length=interval_length)
rmse, std, mae = performance(X_test, predictions, time_in_minutes=3*i, fname=str(fn)+' no wt.png')
rmses.append(rmse)
stds.append(std)
maes.append(mae)
    # 18.016 converts mmol/L to mg/dL; the lists must be arrays for elementwise scaling
    stats = {'RMSE': 18.016*np.array(rmses), 'Standard Deviation': 18.016*np.array(stds), 'MAE': 18.016*np.array(maes)}
df = pd.DataFrame(stats)
df.index = ['30 min', '60 min', '90 min']
df.to_csv('gdrive/My Drive/Summer Research/Figures/LSTM/D1NAMO/'+str(fn)+'.csv')
| Code/CGM Forecast/LSTM Multiple Time Steps Ahead/LSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gallery of examples
#
# 
#
# Here you can browse a gallery of examples using poliastro in the form of Jupyter notebooks.
# ## [Comparing Hohmann and bielliptic transfers](docs/source/examples/Comparing Hohmann and bielliptic transfers.ipynb)
#
# [](docs/source/examples/Comparing Hohmann and bielliptic transfers.ipynb)
#
# ## [Exploring the New Horizons launch](docs/source/examples/Exploring the New Horizons launch.ipynb)
#
# [](docs/source/examples/Exploring the New Horizons launch.ipynb)
#
# ## [Going to Jupiter with Python using Jupyter and poliastro](docs/source/examples/Going to Jupiter with Python using Jupyter and poliastro.ipynb)
#
# [](docs/source/examples/Going to Jupiter with Python using Jupyter and poliastro.ipynb)
#
# ## [Going to Mars with Python using poliastro](docs/source/examples/Going to Mars with Python using poliastro.ipynb)
#
# [](docs/source/examples/Going to Mars with Python using poliastro.ipynb)
#
# ## [Propagation using Cowell's formulation](docs/source/examples/Propagation using Cowell's formulation.ipynb)
#
# [](docs/source/examples/Propagation using Cowell's formulation.ipynb)
#
# ## [Revisiting Lambert's problem in Python](docs/source/examples/Revisiting Lambert's problem in Python.ipynb)
#
# [](docs/source/examples/Revisiting Lambert's problem in Python.ipynb)
#
# ## [Studying Hohmann transfers](docs/source/examples/Studying Hohmann transfers.ipynb)
#
# [](docs/source/examples/Studying Hohmann transfers.ipynb)
#
# ## [Catch that asteroid!](docs/source/examples/Catch that asteroid!.ipynb)
#
# [](docs/source/examples/Catch that asteroid!.ipynb)
#
# ## [Using NEOS package](docs/source/examples/Using NEOS package.ipynb)
#
# [](docs/source/examples/Using NEOS package.ipynb)
#
# ## [Plotting in 3D](docs/source/examples/Plotting in 3D.ipynb)
#
#
# ## [Visualizing the SpaceX Tesla Roadster trip to Mars](docs/source/examples/Visualizing the SpaceX Tesla Roadster trip to Mars.ipynb)
#
# ## Old notebooks
#
# These notebooks are based on old versions of poliastro and cannot be run with current ones. They are kept here for historical purposes.
#
# * https://github.com/poliastro/poliastro/blob/c361af5/examples/Solving%20Lambert's%20problem%20in%20Python.ipynb
# * https://github.com/poliastro/poliastro/blob/9e71402/examples/Quickly%20solving%20Kepler's%20Problem%20in%20Python%20using%20numba.ipynb
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenge Problem 10
# In this problem, you will use `arcgis` SeDF and concurrency to try to improve the performance of generating a matrix of neighbors for each data point. The returned result is a `dict` (dictionary) where each `KEY` is an **OBJECTID** and the `VALUE` is a list of the **OBJECTID**s of its neighbors in that dataset.
#
# 1. The function should allow the user to provide a distance in meters (hint: use search cursors and geometry methods).
# 2. The concurrency process should be launched within a `with` statement, and no more than `4` processes should run at once.
# 3. The output should be a dictionary.
#
# Example Result:
#
# ```python
# w = {
# 1 : [2,3,4],
# 2 : [1,5,7],
# ...
# n : [n1,n2,n3,..., n]
# }
# ```
#
#
# compressed data (located on github)
zip_data = r"./data/challenge_problem_10_data.zip"
def calculate_neighbors(data):
""""""
w = {}
return w
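# A runnable sketch of the intended pattern. The data here is a hypothetical stand-in — plain `(OBJECTID, x, y)` tuples with coordinates in metres rather than an `arcgis` SeDF — and threads are used so the snippet runs anywhere; the same `with` pattern applies to `concurrent.futures.ProcessPoolExecutor` for true multiprocessing:

```python
from concurrent.futures import ThreadPoolExecutor
from math import hypot

# Hypothetical stand-in data: (OBJECTID, x, y) tuples, coordinates in metres.
points = [(1, 0.0, 0.0), (2, 3.0, 4.0), (3, 100.0, 100.0)]

def neighbors_of(args):
    """Worker: find all OBJECTIDs within `distance` metres of one point."""
    (oid, x, y), data, distance = args
    return oid, [o for o, px, py in data
                 if o != oid and hypot(px - x, py - y) <= distance]

def calculate_neighbors(data, distance=10.0):
    """Return {OBJECTID: [neighboring OBJECTIDs]} using a pool of workers."""
    with ThreadPoolExecutor(max_workers=4) as pool:  # no more than 4 at once
        return dict(pool.map(neighbors_of, ((p, data, distance) for p in data)))

w = calculate_neighbors(points, distance=10.0)
print(w)  # {1: [2], 2: [1], 3: []}
```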
| challenge problems/Challenge Problem 10 Student Version.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext blackcellmagic
# +
import pandas as pd
import numpy as np
import dask.dataframe as dd
from dask.distributed import Client
t_water = pd.read_csv(
"https://raw.githubusercontent.com/jdills26/Tanzania-water-table/master/training_set_values.csv"
)
t_water_tgt = pd.read_csv(
"https://raw.githubusercontent.com/jdills26/Tanzania-water-table/master/training_set_labels.csv"
)
# -
# turning the pandas dataframe into a dask dataframe
t_water['target'] = t_water_tgt['status_group']
wd = dd.from_pandas(t_water, npartitions=3)
# +
region_dict = {
"Arusha": 2,
"Dar es Salaam": 7,
"Dodoma": 1,
"Iringa": 11,
"Kagera": 18,
"Kigoma": 16,
"Kilimanjaro": 3,
"Lindi": 80,
"Manyara": 21,
"Mara": 20,
"Mbeya": 12,
"Morogoro": 5,
"Mtwara": 90,
"Mwanza": 19,
"Pwani": 6,
"Rukwa": 15,
"Ruvuma": 10,
"Shinyanga": 17,
"Singida": 13,
"Tabora": 14,
"Tanga": 4,
}
def clean_region(frame):
frame["region_code"] = frame["region"].map(region_dict)
clean_region(wd)
# -
# make a dataframe to work out average longitude, latitude, gps_height by region
# wd['my_area_code']=100*wd['region_code']+wd['district_code']
averages = (
wd[wd["longitude"] != 0]
.groupby(["region_code"])[["longitude", "latitude"]]
.mean()
.compute()
)
longitude_map = averages["longitude"].to_dict()
latitude_map = averages["latitude"].to_dict()
wd["avg_longitude"] = wd["region_code"].map(longitude_map)
wd["avg_latitude"] = wd["region_code"].map(latitude_map)
wd["new_longitude"] = wd["longitude"].where(wd["longitude"] != 0, wd["avg_longitude"])
wd["new_latitude"] = wd["latitude"].where(wd["longitude"] != 0, wd["avg_latitude"])
# dates
wd["date_recorded"] = dd.to_datetime(wd["date_recorded"], format="%Y-%m-%d")
wd["month"] = wd["date_recorded"].map(lambda x: x.month)
wd["year"] = wd["date_recorded"].map(lambda x: x.year)
wd["date_recorded"] = wd["date_recorded"].map(lambda x: x.toordinal())
wd["rot45X"] = .707* wd["new_latitude"] - .707* wd["new_longitude"]
wd["rot30X"] = (1.732/2)* wd["new_latitude"] - (1./2)* wd["new_longitude"]
wd["rot60X"] = (1./2)* wd["new_latitude"] - (1.732/2)* wd["new_longitude"]
wd["radial_r"] = np.sqrt( np.power(wd["new_latitude"],2) + np.power(wd["new_longitude"],2) )
wd['radial_r'].isna().sum().compute()
features = [
"basin",
"scheme_management",
"extraction_type_group",
"extraction_type_class",
"month",
"payment",
"quantity",
"source",
"waterpoint_type",
"amount_tsh",
"gps_height",
"new_longitude",
"new_latitude",
"population",
"construction_year",
"district_code",
"region_code",
"date_recorded",
"permit",
"public_meeting",
"rot45X",
"radial_r",
]
# +
X = wd[features]
from sklearn.ensemble import RandomForestClassifier
from dask_ml.preprocessing import (
RobustScaler,
Categorizer,
DummyEncoder,
OrdinalEncoder,
)
from sklearn.pipeline import make_pipeline
preprocessor = make_pipeline(
Categorizer(), DummyEncoder(), RobustScaler()
) # ,SimpleImputer()#ce.OrdinalEncoder(),
X = preprocessor.fit_transform(X)
# -
len(X.columns),(len(X))
y_dict={'functional':1,'non functional':0,'functional needs repair':2}
y=wd['target'].map(y_dict)
#just to check it works on dask collection
rfc = RandomForestClassifier()
rfc.fit(X,y)
# +
# I had to use .values here to get this to run. I am not sure why, as the docs
# say it should work directly on the dask dataframe.
from dask_ml.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distributions_f = {
"n_estimators": randint(100, 140),
"max_depth": randint(16, 23),
}
search_f = RandomizedSearchCV(
estimator=RandomForestClassifier(
criterion="entropy", warm_start=True, oob_score=True, n_jobs=-1, random_state=42
),
param_distributions=param_distributions_f,
n_iter=10,
scoring="accuracy",
n_jobs=-1,
cv=3,
return_train_score=True,
)
search_f.fit(X.values, y.values)
# -
pd.DataFrame(search_f.cv_results_).sort_values(by='rank_test_score').head(5)
type(X),type(y)
type(y.values)
| dask with tanzania water.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''tf_mac'': conda)'
# name: python3
# ---
# +
import tensorflow as tf
import pandas as pd
import numpy as np
from utils.df_loader import load_adult_df, load_compas_df, load_german_df, load_diabetes_df, load_breast_cancer_df
from utils.preprocessing import preprocess_df
from sklearn.model_selection import train_test_split
from utils.dice import generate_dice_result, process_results
from utils.models import train_lp_three_models, save_lp_three_models, load_lp_three_models, evaluation_test
from utils.save import save_result_as_csv
pd.options.mode.chained_assignment = None
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False
seed = 123
# tf.random.set_seed(seed)
# np.random.seed(seed)
# +
#### Select dataset ####
dataset_name = 'diabetes' # [adult, german, compas, diabetes, breast_cancer]
if dataset_name == 'adult':
dataset_loading_fn = load_adult_df
elif dataset_name == 'german':
dataset_loading_fn = load_german_df
elif dataset_name == 'compas':
dataset_loading_fn = load_compas_df
elif dataset_name == 'diabetes':
dataset_loading_fn = load_diabetes_df
elif dataset_name == 'breast_cancer':
dataset_loading_fn = load_breast_cancer_df
else:
raise Exception("Unsupported dataset")
# -
#### Load dataframe info.
df_info = preprocess_df(dataset_loading_fn)
### Separate into train and test sets.
train_df, test_df = train_test_split(df_info.dummy_df, train_size=.8, random_state=seed, shuffle=True)
test_df
### Get training and testing arrays.
X_train = np.array(train_df[df_info.ohe_feature_names])
y_train = np.array(train_df[df_info.target_name])
X_test = np.array(test_df[df_info.ohe_feature_names])
y_test = np.array(test_df[df_info.target_name])
# +
### Train models.
# models = train_three_models(X_train, y_train)
### Save models.
# save_three_models(models, dataset_name)
# +
# models = train_three_models_lp(X_train, y_train)
# ### Save models.
# save_lp_three_models(models, dataset_name)
# -
### Load models.
models = load_lp_three_models(X_train.shape[-1], dataset_name)
### Print out accuracy on the test set.
evaluation_test(models, X_test, y_test)
# # DiCE
### Configure how many CFs to generate.
num_instances = 3
num_cf_per_instance = 1
# Generate CF
results = generate_dice_result(
df_info,
test_df,
models,
num_instances,
num_cf_per_instance,
sample_size=50,
models_to_run=['nn']
)
result_dfs = process_results(df_info, results)
from utils.dice import Recorder
i = 0
example_input = df_info.scaled_df.iloc[test_df[i:i+1].index].iloc[0:1]
example_input
print(Recorder.wrapped_models['nn'].predict(example_input))
print(Recorder.wrapped_models['dt'].predict(example_input))
print(Recorder.wrapped_models['rfc'].predict(example_input))
print(Recorder.wrapped_models['nn'].predict_proba(example_input))
print(Recorder.wrapped_models['dt'].predict_proba(example_input))
print(Recorder.wrapped_models['rfc'].predict_proba(example_input))
### Save result as file.
save_result_as_csv("dice", dataset_name, result_dfs)
| [LP]dice_generate_cf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Developing Custom PyTorch Dataloaders
# =====================================
#
# A significant amount of the effort applied to developing machine
# learning algorithms is related to data preparation. PyTorch provides
# many tools to make data loading easy and, hopefully, to make your code
# more readable. In this recipe, you will learn how to:
#
# 1. Create a custom dataset leveraging the PyTorch dataset APIs;
# 2. Create callable custom transforms that can be composed; and
# 3. Put these components together to create a custom dataloader.
#
# Please note, to run this tutorial, ensure the following packages are
# installed:
# - ``scikit-image``: For image io and transforms
# - ``pandas``: For easier csv parsing
#
# As a point of attribution, this recipe is based on the original tutorial
# from `<NAME> <https://chsasank.github.io>`__ and was later
# edited by `<NAME> <https://github.com/jspisak>`__.
#
#
# Setup
# ----------------------
# First let’s import all of the needed libraries for this recipe.
#
#
#
#
# +
from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion() # interactive mode
# -
# Part 1: The Dataset
# -------------------
#
#
#
# The dataset we are going to deal with is that of facial pose. Overall,
# 68 different landmark points are annotated for each face.
#
# As a next step, please download the dataset from
# `here <https://download.pytorch.org/tutorial/faces.zip>`_ so that the
# images are in a directory named ‘data/faces/’.
#
# **Note:** This dataset was actually generated by applying
# `dlib's pose estimation <https://blog.dlib.net/2014/08/real-time-face-pose-estimation.html>`_
# on images from the imagenet dataset containing the ‘face’ tag.
#
# ::
#
# # !wget https://download.pytorch.org/tutorial/faces.zip
# # !mkdir data/faces/
# import zipfile
# with zipfile.ZipFile("faces.zip","r") as zip_ref:
# zip_ref.extractall("/data/faces/")
# # %cd /data/faces/
#
#
# The dataset comes with a csv file with annotations which looks like
# this:
#
# ::
#
# image_name,part_0_x,part_0_y,part_1_x,part_1_y,part_2_x, ... ,part_67_x,part_67_y
# 0805personali01.jpg,27,83,27,98, ... 84,134
# 1084239450_e76e00b7e7.jpg,70,236,71,257, ... ,128,312
#
# Let’s quickly read the CSV and get the annotations in an (N, 2) array
# where N is the number of landmarks.
#
#
#
# +
landmarks_frame = pd.read_csv('faces/face_landmarks.csv')
n = 65
img_name = landmarks_frame.iloc[n, 0]
landmarks = landmarks_frame.iloc[n, 1:]
landmarks = np.asarray(landmarks)
landmarks = landmarks.astype('float').reshape(-1, 2)
print('Image name: {}'.format(img_name))
print('Landmarks shape: {}'.format(landmarks.shape))
print('First 4 Landmarks: {}'.format(landmarks[:4]))
# -
# 1.1 Write a simple helper function to show an image
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Next let’s write a simple helper function to show an image, its landmarks and use it to show a sample.
#
#
#
#
# +
def show_landmarks(image, landmarks):
"""Show image with landmarks"""
plt.imshow(image)
plt.scatter(landmarks[:, 0], landmarks[:, 1], s=10, marker='.', c='r')
plt.pause(0.001) # pause a bit so that plots are updated
plt.figure()
show_landmarks(io.imread(os.path.join('faces/', img_name)),
landmarks)
plt.show()
# -
# 1.2 Create a dataset class
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Now let's talk about the PyTorch dataset class.
#
#
#
#
# ``torch.utils.data.Dataset`` is an abstract class representing a
# dataset. Your custom dataset should inherit ``Dataset`` and override the
# following methods:
#
# - ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
# - ``__getitem__`` to support indexing such that ``dataset[i]`` can be
#   used to get the ``i``-th sample.
#
# Let’s create a dataset class for our face landmarks dataset. We will
# read the csv in ``__init__`` but leave the reading of images to
# ``__getitem__``. This is memory efficient because all the images are not
# stored in the memory at once but read as required.
#
# Here we show a sample of our dataset in the form of a dict
# ``{'image': image, 'landmarks': landmarks}``. Our dataset will take an
# optional argument ``transform`` so that any required processing can be
# applied on the sample. We will see the usefulness of ``transform`` in
# another recipe.
#
#
#
class FaceLandmarksDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.landmarks_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.landmarks_frame.iloc[idx, 1:]
landmarks = np.array([landmarks])
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
# 1.3 Iterate through data samples
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#
#
# Next let’s instantiate this class and iterate through the data samples.
# We will print the sizes of the first 4 samples and show their landmarks.
#
#
#
# +
face_dataset = FaceLandmarksDataset(csv_file='faces/face_landmarks.csv',
root_dir='faces/')
fig = plt.figure()
for i in range(len(face_dataset)):
sample = face_dataset[i]
print(i, sample['image'].shape, sample['landmarks'].shape)
ax = plt.subplot(1, 4, i + 1)
plt.tight_layout()
ax.set_title('Sample #{}'.format(i))
ax.axis('off')
show_landmarks(**sample)
if i == 3:
plt.show()
break
# -
# Part 2: Data Transformations
# ---------------------------
#
#
#
# Now that we have a dataset to work with and have done some level of
# customization, we can move to creating custom transformations. In
# computer vision, these come in handy to help generalize algorithms and
# improve accuracy. A suite of transformations used at training time is
# typically referred to as data augmentation and is a common practice for
# modern model development.
#
# One issue common in handling datasets is that the samples may not all be
# the same size. Most neural networks expect images of a fixed size.
# Therefore, we will need to write some preprocessing code. Let’s create
# three transforms:
#
# - ``Rescale``: to scale the image
# - ``RandomCrop``: to crop from image randomly. This is data
# augmentation.
# - ``ToTensor``: to convert the numpy images to torch images (we need to
# swap axes).
#
# We will write them as callable classes instead of simple functions so
# that parameters of the transform need not be passed every time it’s
# called. For this, we just need to implement the ``__call__`` method and,
# if required, the ``__init__`` method. We can then use a transform like this:
#
# ::
#
# tsfm = Transform(params)
# transformed_sample = tsfm(sample)
#
# Observe below how these transforms are applied to both the image
# and the landmarks.
#
#
#
# 2.1 Create callable classes
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Let’s start with creating callable classes for each transform
#
#
#
#
# +
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = transform.resize(image, (new_h, new_w))
# h and w are swapped for landmarks because for images,
# x and y axes are axis 1 and 0 respectively
landmarks = landmarks * [new_w / w, new_h / h]
return {'image': img, 'landmarks': landmarks}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
landmarks = landmarks - [left, top]
return {'image': image, 'landmarks': landmarks}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'landmarks': torch.from_numpy(landmarks)}
# -
# 2.2 Compose transforms and apply to a sample
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Next let’s compose these transforms and apply them to a sample.
#
#
# Let’s say we want to rescale the shorter side of the image to 256 and
# then randomly crop a square of size 224 from it. i.e, we want to compose
# ``Rescale`` and ``RandomCrop`` transforms.
# ``torchvision.transforms.Compose`` is a simple callable class which
# allows us to do this.
#
#
#
# +
scale = Rescale(256)
crop = RandomCrop(128)
composed = transforms.Compose([Rescale(256),
RandomCrop(224)])
# Apply each of the above transforms on sample.
fig = plt.figure()
sample = face_dataset[65]
for i, tsfrm in enumerate([scale, crop, composed]):
transformed_sample = tsfrm(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tsfrm).__name__)
show_landmarks(**transformed_sample)
plt.show()
# -
# 2.3 Iterate through the dataset
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Next we will iterate through the dataset
#
#
# Let’s put this all together to create a dataset with composed
# transforms. To summarize, every time this dataset is sampled:
#
# - An image is read from the file on the fly
# - Transforms are applied on the read image
# - Since one of the transforms is random, data is augmented on
#   sampling
#
# We can iterate over the created dataset with a ``for i in range`` loop
# as before.
#
#
#
# +
transformed_dataset = FaceLandmarksDataset(csv_file='faces/face_landmarks.csv',
root_dir='faces/',
transform=transforms.Compose([
Rescale(256),
RandomCrop(224),
ToTensor()
]))
for i in range(len(transformed_dataset)):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['landmarks'].size())
if i == 3:
break
# -
# Part 3: The Dataloader
# ----------------------
#
#
#
# By iterating over the dataset directly with a simple ``for`` loop, we
# are losing out on a lot of features. In particular, we are missing out
# on:
#
# - Batching the data
# - Shuffling the data
# - Loading the data in parallel using ``multiprocessing`` workers.
#
# ``torch.utils.data.DataLoader`` is an iterable which provides all these
# features. Parameters used below should be clear. One parameter of
# interest is ``collate_fn``. You can specify how exactly the samples need
# to be batched using ``collate_fn``. However, default collate should work
# fine for most use cases.
#
#
#
# +
dataloader = DataLoader(transformed_dataset, batch_size=4,
shuffle=True, num_workers=4)
# Helper function to show a batch
def show_landmarks_batch(sample_batched):
"""Show image with landmarks for a batch of samples."""
images_batch, landmarks_batch = \
sample_batched['image'], sample_batched['landmarks']
batch_size = len(images_batch)
im_size = images_batch.size(2)
grid = utils.make_grid(images_batch)
plt.imshow(grid.numpy().transpose((1, 2, 0)))
for i in range(batch_size):
plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size,
landmarks_batch[i, :, 1].numpy(),
s=10, marker='.', c='r')
plt.title('Batch from dataloader')
for i_batch, sample_batched in enumerate(dataloader):
print(i_batch, sample_batched['image'].size(),
sample_batched['landmarks'].size())
# observe 4th batch and stop.
if i_batch == 3:
plt.figure()
show_landmarks_batch(sample_batched)
plt.axis('off')
plt.ioff()
plt.show()
break
# -
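# As a quick illustration of ``collate_fn``, here is a minimal sketch with a hypothetical ragged dataset of 1-D tensors (not the face dataset above); the default collate would raise on mismatched shapes, so a custom collate pads each batch:

```python
import torch
from torch.utils.data import DataLoader

# Hypothetical ragged samples: 1-D tensors of different lengths.
samples = [torch.arange(n) for n in (2, 5, 3, 4)]

def pad_collate(batch):
    """Right-pad every tensor in the batch to the longest length."""
    longest = max(t.numel() for t in batch)
    out = torch.zeros(len(batch), longest, dtype=batch[0].dtype)
    for i, t in enumerate(batch):
        out[i, : t.numel()] = t
    return out

loader = DataLoader(samples, batch_size=4, collate_fn=pad_collate)
batch = next(iter(loader))
print(batch.shape)  # torch.Size([4, 5])
```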
# Now that you’ve learned how to create a custom dataloader with PyTorch,
# we recommend diving deeper into the docs and customizing your workflow
# even further. You can learn more in the ``torch.utils.data`` docs
# `here <https://pytorch.org/docs/stable/data.html>`__.
#
#
#
| docs/_downloads/813e9f58ce3b4f2283d6c6cee9fee5a7/custom_dataset_transforms_loader.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.4 64-bit
# name: python3
# ---
# String Methods
# --------------
#
name = 'Python'
print("using string methods")
print(name.lower())
print(name.capitalize())
print(name.upper())
# counting the number of o's in name
print(name.count('o'))
# Types and Casting
# -----------------
print(type('5'))
print('5'+ '7')
result = int('5') + int('7')
print(result)
# The input() Function
# --------------------
print("What's your name?")
name = input()
# input always returns string str
print(type(name))
print(name)
name = input('What\'s your name? ')
print(name)
rating = input('How was the movie on a scale from 1 to 10? ')
print(rating)
# String Indexing and Slicing
# ----------------------------
destination = 'Hyderabad'
print(destination[0])
print(destination[1])
print(destination[-1])
# Slicing
print(destination[::-1])
print(destination[::2])
print(destination[1::2])
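# The general slice form is `s[start:stop:step]`, where `stop` is excluded; a few more examples:

```python
s = 'Hyderabad'
print(s[0:5])    # Hyder  (indices 0 through 4)
print(s[4:])     # rabad  (from index 4 to the end)
print(s[:3])     # Hyd    (first three characters)
print(s[2:7:2])  # drb    (indices 2, 4, 6)
```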
| July21/EssentialPython/workshop/strings/exploringstring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
# # Classification End-to-End Notebook
# --------
# In this notebook, we will build an E2E project on the Titanic dataset, using the different methods we covered in the previous notebooks
# ## 1. Importing needed modules
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, cross_val_score, GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier as XGBoost  # sklearn's gradient boosting, aliased here; not the separate xgboost library
from sklearn.naive_bayes import GaussianNB as NB
from sklearn.metrics import mean_squared_error, confusion_matrix, accuracy_score
# -
# ## 2. Reading Dataset
df = pd.read_csv('Data/titanic.csv')
df
# ## 3. Data Analysis
df.describe(include=['O'])
df.isnull().any()
df.info()
df['Age'].isnull().sum()
df.Cabin.isnull().sum()
df.Embarked.isnull().sum()
unique_last_name = df.Name.apply(lambda x: x[0 : x.find(',')])
len(df) - len(np.unique(unique_last_name))  # number of duplicated last names (df has 891 rows)
# +
def find_prefix(x):
st = x.find(',')
if st == -1: # , doesn't exist
st = 0
else:
st += 2
en = x.find('.')
if en == -1:
en = len(x) # end of x
return x[st:en]
prefix = df.Name.apply(find_prefix)
len(np.unique(prefix))
# -
# ## 4. Data Cleaning
df = df.drop(['Cabin', 'Name', 'Ticket', 'PassengerId', 'Fare'], axis='columns', errors='ignore')
df
df = df[df.Embarked.notna()]
df
df.Embarked.isnull().sum()
df.isnull().any()
df.Gender = df.Gender.replace('male', 0)
df.Gender = df.Gender.replace('female', 1)
df.Gender = df.Gender.astype('uint8')  # uint8 holds 0-255, plenty for a 0/1 encoding
df.dtypes
mean_age = int(df.Age.mean())
df.Age = df.Age.fillna(mean_age)
df.Age.isnull().sum()
df.Age = df.Age.astype('int')
df.Age
df
df.dtypes
np.unique(df.Embarked)
df.Embarked = df.Embarked.replace('C', 0)
df.Embarked = df.Embarked.replace('Q', 1)
df.Embarked = df.Embarked.replace('S', 2)
df.Embarked = df.Embarked.astype('uint8')
df.Embarked
df.dtypes
df
# ## 5. Data Splitting & Normalization
# In case we are not using cross validation
# +
X = df.drop('Survived', axis='columns')
y = df.Survived
print(X.shape)
print(y.shape)
# y = np.expand_dims(y, 1)
print(y.shape)
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# +
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train) # mean, st. dev. for X_train
X_test = sc_X.transform(X_test)
X_train
# -
# ## (Extra) Visualization of features with label from all data
# +
# print(X.columns)
labels = np.unique(y)
print(labels)
for i in range(0, 6, 2):
col_name1 = X.columns[i]
col_name2 = X.columns[i+1]
# print(X.loc[:, col_name1].shape)
# print(X.loc[:, col_name2].shape)
X1_X2 = np.vstack((X.loc[:, col_name1], X.loc[:, col_name2])).T
# print(X1_X2.shape)
X1_X2_label0 = X1_X2[y == 0]
X1_X2_label1 = X1_X2[y == 1]
assert len(X1_X2_label0) + len(X1_X2_label1) == len(y)
plt.scatter(X1_X2_label0[:,0], X1_X2_label0[:,1], color='red', label='Not survived')
plt.scatter(X1_X2_label1[:,0], X1_X2_label1[:,1], color='blue', label='Survived')
plt.xlabel(col_name1)
plt.ylabel(col_name2)
plt.legend()
plt.show()
# -
# ## 6. Model Building
# ----
# Choices are:
# - Logistic Regression
# - KNN Model
# - SVM Model
# - Decision Tree Classifier
# - Random Forest Classifier
# - Naive Bayes
# +
model1 = LogisticRegression()
model1 = model1.fit(X_train, y_train)
model2 = KNN(n_neighbors=5)
model2 = model2.fit(X_train, y_train)
model3 = SVC(kernel='rbf')
model3 = model3.fit(X_train, y_train)
model4 = DecisionTreeClassifier(max_depth=5)
model4 = model4.fit(X_train, y_train)
model5 = RandomForestClassifier(max_depth=7, n_estimators=100)
model5 = model5.fit(X_train, y_train)
model6 = NB()
model6 = model6.fit(X_train, y_train)
# -
ypred1 = model1.predict(X_test)
ypred2 = model2.predict(X_test)
ypred3 = model3.predict(X_test)
ypred4 = model4.predict(X_test)
ypred5 = model5.predict(X_test)
ypred6 = model6.predict(X_test)
print('Logistic Regression RMSE = %.5f' % mean_squared_error(y_test, ypred1, squared=False))
print('KNN RMSE = %.5f' % mean_squared_error(y_test, ypred2, squared=False))
print('SVM RMSE = %.5f' % mean_squared_error(y_test, ypred3, squared=False))
print('Decision Tree RMSE = %.5f' % mean_squared_error(y_test, ypred4, squared=False))
print('Random Forest RMSE = %.5f' % mean_squared_error(y_test, ypred5, squared=False))
print('Naive Bayes RMSE = %.5f' % mean_squared_error(y_test, ypred6, squared=False))
print('Logistic Regression Accuracy = %.2f%%' % (100 * model1.score(X_test, y_test)))
print('KNN Accuracy = %.2f%%' % (100 * model2.score(X_test, y_test)))
print('SVM Accuracy = %.2f%%' % (100 * model3.score(X_test, y_test)))
print('Decision Tree Accuracy = %.2f%%' % (100 * model4.score(X_test, y_test)))
print('Random Forest Accuracy = %.2f%%' % (100 * model5.score(X_test, y_test)))
print('Naive Bayes Accuracy = %.2f%%' % (100 * model6.score(X_test, y_test)))
# ## 7. Optimization Methods:
# ------------
# Choices are:
# - Grid Search
# - Cross Validation
# - Ensembling (XGBoost)
# +
# TO DO Grid Search
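# A minimal `GridSearchCV` sketch to fill in this TODO. The grid values are illustrative, and a synthetic dataset is used so the cell is self-contained; in this project you would pass `X_train`/`y_train` instead of `X_demo`/`y_demo`:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in data so the snippet runs on its own
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=0)
param_grid = {"max_depth": [3, 5, 7], "n_estimators": [25, 50]}
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid, cv=3, scoring="accuracy")
grid = grid.fit(X_demo, y_demo)
print(grid.best_params_, grid.best_score_)
```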
# +
# TO DO Cross Validation
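# A minimal cross-validation sketch for this TODO, again on synthetic stand-in data (`X_demo`/`y_demo`) so it runs on its own; swap in `X_train`/`y_train` for the Titanic features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Stand-in data so the snippet runs on its own
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X_demo, y_demo, cv=cv)
print(scores.mean())
```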
# -
# XGBoost
model7 = XGBoost()
model7 = model7.fit(X_train, y_train)
# ## 8. Evaluating the Model
ypred7 = model7.predict(X_test)
print('XGBoost RMSE = %.5f' % mean_squared_error(y_test, ypred7, squared=False))
print('XGBoost Accuracy = %.2f%%' % (100 * model7.score(X_test, y_test)))
| 06. E2E Projects on Regression & Classification/ClassificationE2ENotebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import pprint as pp
import requests
import time
from citipy import citipy
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# +
# for x in range(len(indices)):
# print(f"Making request number: {x} for ID: {indices[x]}")
# # Get one of the posts
# post_response = requests.get(url + str(indices[x]))
# # Save post's JSON
# response_json.append(post_response.json())
city_name = []
city_data = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
print("Beginning Data Retrieval")
print("----------------------------")
record = 0
for city in cities:
try:
        query_url = f"http://api.openweathermap.org/data/2.5/weather?q={city}&APPID={weather_api_key}"
city_weather_info = requests.get(query_url).json()
# Append the City information into city_data list
city_data.append({"City": city,
"Lat": city_weather_info['coord']['lat'],
"Lng": city_weather_info['coord']['lon'],
"Max Temp": city_weather_info["main"]["temp_max"],
"Humidity": city_weather_info["main"]["humidity"],
"Cloudiness": city_weather_info["clouds"]["all"],
"Wind Speed": city_weather_info["wind"]["speed"],
"Country": city_weather_info["sys"]["country"],
                          "Date": city_weather_info["dt"]})
time.sleep(1)
        print(f"Processing Record {record} | {city}")
except (TypeError, KeyError):
print(f"Skipping {city}")
record = record + 1
# -
city_weather_info
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# +
city_data_df = pd.DataFrame(city_data)
# lats = city_data_df["Lat"]
# max_temps = city_data_df["Max Temp"]
# humidity = city_data_df["Humidity"]
# cloudiness = city_data_df["Cloudiness"]
# wind_speed = city_data_df["Wind Speed"]
# +
# Export the City_Data into a csv
city_data_df.to_csv(output_data_file, index_label="City_ID")
# -
city_data_df.count()
city_data_df.head()
# ### Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# #### Latitude vs. Temperature Plot
# +
# Build scatter plot for latitude vs. temperature
plt.scatter(city_data_df["Lat"],
            city_data_df["Max Temp"],
edgecolor="black", linewidths=1, marker="o",
alpha=0.8, label="Cities")
# Incorporate the other graph properties
plt.title("City Latitude vs. Max Temperature (%s)" % time.strftime("%x"))
plt.ylabel("Max Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Fig1.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Humidity Plot
# +
plt.scatter(city_data_df["Lat"],
            city_data_df["Humidity"],
edgecolor="black", linewidths=1, marker="o",
alpha=0.8, label="Cities")
# Incorporate the other graph properties
plt.title("City Latitude vs. Humidity (%s)" % time.strftime("%x"))
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Fig2.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Cloudiness Plot
# +
# Build the scatter plots for latitude vs. cloudiness
plt.scatter(city_data_df["Lat"],
            city_data_df["Cloudiness"],
edgecolor="black", linewidths=1, marker="o",
alpha=0.8, label="Cities")
# Incorporate the other graph properties
plt.title("City Latitude vs. Cloudiness (%s)" % time.strftime("%x"))
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Fig3.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Wind Speed Plot
# +
# Build the scatter plots for latitude vs. wind speed
plt.scatter(city_data_df["Lat"],
            city_data_df["Wind Speed"],
edgecolor="black", linewidths=1, marker="o",
alpha=0.8, label="Cities")
# Incorporate the other graph properties
plt.title("City Latitude vs. Wind Speed (%s)" % time.strftime("%x"))
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Fig4.png")
# Show plot
plt.show()
# -
# ## Linear Regression
# +
# OPTIONAL: Create a function to create Linear Regression plots
# Create a function to create Linear Regression plots
def plot_linear_regression(x_values, y_values, title, text_coordinates):
    # Run the linear regression
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# Plot
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,text_coordinates,fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel(title)
    print(f"The r-squared is: {rvalue**2}")
plt.show()
# +
# Create Northern and Southern Hemisphere DataFrames
northern_hemi_df = city_data_df.loc[(city_data_df["Lat"] >= 0)]
southern_hemi_df = city_data_df.loc[(city_data_df["Lat"] < 0)]
# -
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# Linear regression on Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Max Temp"]
plot_linear_regression(x_values, y_values, 'Max Temp',(6,30))
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Max Temp"]
plot_linear_regression(x_values, y_values, 'Max Temp', (-30,40))
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Humidity"]
plot_linear_regression(x_values, y_values, 'Humidity',(40,10))
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# Southern Hemisphere
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Humidity"]
plot_linear_regression(x_values, y_values, 'Humidity', (-30,150))
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Cloudiness"]
plot_linear_regression(x_values, y_values, 'Cloudiness', (40,10))
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# Southern Hemisphere
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Cloudiness"]
plot_linear_regression(x_values, y_values, 'Cloudiness', (-30,30))
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Wind Speed"]
plot_linear_regression(x_values, y_values, 'Wind Speed', (40,25))
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# Southern Hemisphere
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Wind Speed"]
plot_linear_regression(x_values, y_values, 'Wind Speed', (-30,30))
| WeatherPY/.ipynb_checkpoints/WeatherPY-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="AFhyoxCK2Bmf"
# !pip install looker_sdk
import looker_sdk
import os
import json
# + id="3jKM6GfzlgpS" cellView="form"
#@title Configuration
model_name = 'firebase' #@param {type:"string"}
instance_url = 'https://YOURS.looker.com:19999' #@param {type:"string"}
#Get the following values from your Users page in the Admin panel of your Looker instance > Users > Your user > Edit API keys. If you know your user id, you can visit https://your.looker.com/admin/users/<your_user_id>/edit.
client_id = 'yours' #@param {type:"string"}
client_secret = 'yours' #@param {type:"string"}
os.environ["LOOKERSDK_BASE_URL"] = instance_url
os.environ["LOOKERSDK_CLIENT_ID"] = client_id
os.environ["LOOKERSDK_CLIENT_SECRET"] = client_secret
print("All environment variables set.")
print(os.environ["LOOKERSDK_BASE_URL"])
# + id="sl7N69UL56vw" cellView="code"
sdk = looker_sdk.init40()  # use the 4.0 API to match the models40 query objects below
my_user = sdk.me()
print("Hi, " + my_user.first_name)
query = looker_sdk.models40.WriteQuery(
fields=['events.event_name','events__event_params.key','events__event_params__value.type'],
model=model_name,
view='events',
filters={'events.event_date':'7 days'}
)
# result_format can also be sql
event_properties = sdk.run_inline_query(body=query, result_format='json',cache=True)
query = looker_sdk.models40.WriteQuery(
fields=['events__user_properties.key','events__user_properties__value.type'],
model=model_name,
view='events',
filters={'events.event_date':'7 days'}
)
# result_format can also be sql
user_properties = sdk.run_inline_query(body=query, result_format='json',cache=True)
# + id="gAyc4dTnHaMP"
import ast
print("#### User Properties ####")
print("view: user_properties_generated {")
print("extension: required")
print()
for event in ast.literal_eval(user_properties)[1:]: #skip over the first one
key = event['events__user_properties.key']
type = event['events__user_properties__value.type']
print("dimension: user_properties."+key+" {")
print("type: "+ type)
print("sql:")
if type == "string":
print(" (SELECT value.string_value")
else:
print(" (SELECT value.int_value")
print(" FROM UNNEST(${user_properties})")
print( " WHERE key = '"+key+"') ;;")
print(" }")
print()
print ("}")
# + id="p3lEV0oSDWM3"
print("#### Event Properties ####")
print("view: events_generated {")
print("extension: required")
print()
for event in ast.literal_eval(event_properties)[1:]: #skip over the first one
event_name = event['events.event_name']
type = event['events__event_params__value.type']
key = event['events__event_params.key']
print("dimension: "+event_name+"."+key+" {")
print("type: "+ type)
print("sql: CASE WHEN ${event_name} = '"+event_name+"' THEN")
if type == "string":
print(" (SELECT value.string_value")
else:
print(" (SELECT value.int_value")
print(" FROM UNNEST(${event_params})")
print( " WHERE key = '"+key+"')")
print( " END ;;")
print(" }")
print()
print("}")
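# The print statements above can be collected into a helper that returns the
# LookML for one dimension as a string, which makes the output easier to test or
# write to a file. `make_dimension` is a hypothetical refactoring, not part of
# the original notebook:

```python
def make_dimension(name, key, value_type, params_field="event_params"):
    """Render one LookML dimension block as a string (a sketch)."""
    # int-typed parameters live in value.int_value, strings in value.string_value
    value_col = "string_value" if value_type == "string" else "int_value"
    return (
        f"dimension: {name}.{key} {{\n"
        f"  type: {value_type}\n"
        f"  sql: (SELECT value.{value_col}\n"
        f"        FROM UNNEST(${{{params_field}}})\n"
        f"        WHERE key = '{key}') ;;\n"
        f"}}"
    )

block = make_dimension("user_properties", "plan", "string",
                       params_field="user_properties")
print(block)
```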
| Firebase_Block_v3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# +
f = open("SMSSpamCollection","r")
Y = [x.split()[0] for x in f]
stopwords = ['a', 'about', 'above', 'across', 'after', 'afterwards']
stopwords += ['again', 'against', 'all', 'almost', 'alone', 'along']
stopwords += ['already', 'also', 'although', 'always', 'am', 'among']
stopwords += ['amongst', 'amoungst', 'amount', 'an', 'and', 'another']
stopwords += ['any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere']
stopwords += ['are', 'around', 'as', 'at', 'back', 'be', 'became']
stopwords += ['because', 'become', 'becomes', 'becoming', 'been']
stopwords += ['before', 'beforehand', 'behind', 'being', 'below']
stopwords += ['beside', 'besides', 'between', 'beyond', 'bill', 'both']
stopwords += ['bottom', 'but', 'by', 'call', 'can', 'cannot', 'cant']
stopwords += ['co', 'computer', 'con', 'could', 'couldnt', 'cry', 'de']
stopwords += ['describe', 'detail', 'did', 'do', 'done', 'down', 'due']
stopwords += ['during', 'each', 'eg', 'eight', 'either', 'eleven', 'else']
stopwords += ['elsewhere', 'empty', 'enough', 'etc', 'even', 'ever']
stopwords += ['every', 'everyone', 'everything', 'everywhere', 'except']
stopwords += ['few', 'fifteen', 'fifty', 'fill', 'find', 'fire', 'first']
stopwords += ['five', 'for', 'former', 'formerly', 'forty', 'found']
stopwords += ['four', 'from', 'front', 'full', 'further', 'get', 'give']
stopwords += ['go', 'had', 'has', 'hasnt', 'have', 'he', 'hence', 'her']
stopwords += ['here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers']
stopwords += ['herself', 'him', 'himself', 'his', 'how', 'however']
stopwords += ['hundred', 'i', 'ie', 'if', 'in', 'inc', 'indeed']
stopwords += ['interest', 'into', 'is', 'it', 'its', 'itself', 'keep']
stopwords += ['last', 'latter', 'latterly', 'least', 'less', 'ltd', 'made']
stopwords += ['many', 'may', 'me', 'meanwhile', 'might', 'mill', 'mine']
stopwords += ['more', 'moreover', 'most', 'mostly', 'move', 'much']
stopwords += ['must', 'my', 'myself', 'name', 'namely', 'neither', 'never']
stopwords += ['nevertheless', 'next', 'nine', 'no', 'nobody', 'none']
stopwords += ['noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of']
stopwords += ['off', 'often', 'on','once', 'one', 'only', 'onto', 'or']
stopwords += ['other', 'others', 'otherwise', 'our', 'ours', 'ourselves']
stopwords += ['out', 'over', 'own', 'part', 'per', 'perhaps', 'please']
stopwords += ['put', 'rather', 're', 's', 'same', 'see', 'seem', 'seemed']
stopwords += ['seeming', 'seems', 'serious', 'several', 'she', 'should']
stopwords += ['show', 'side', 'since', 'sincere', 'six', 'sixty', 'so']
stopwords += ['some', 'somehow', 'someone', 'something', 'sometime']
stopwords += ['sometimes', 'somewhere', 'still', 'such', 'system', 'take']
stopwords += ['ten', 'than', 'that', 'the', 'their', 'them', 'themselves']
stopwords += ['then', 'thence', 'there', 'thereafter', 'thereby']
stopwords += ['therefore', 'therein', 'thereupon', 'these', 'they']
stopwords += ['thick', 'thin', 'third', 'this', 'those', 'though', 'three']
stopwords += ['three', 'through', 'throughout', 'thru', 'thus', 'to']
stopwords += ['together', 'too', 'top', 'toward', 'towards', 'twelve']
stopwords += ['twenty', 'two', 'un', 'under', 'until', 'up', 'upon']
stopwords += ['us', 'very', 'via', 'was', 'we', 'well', 'were', 'what']
stopwords += ['whatever', 'when', 'whence', 'whenever', 'where']
stopwords += ['whereafter', 'whereas', 'whereby', 'wherein', 'whereupon']
stopwords += ['wherever', 'whether', 'which', 'while', 'whither', 'who']
stopwords += ['whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with']
stopwords += ['within', 'without', 'would', 'yet', 'you', 'your']
stopwords += ['yours', 'yourself', 'yourselves', 'ham', 'spam']
def removeStopwords(wordlist, stopwords):
return [w for w in wordlist if w not in stopwords]
punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
f = open("SMSSpamCollection","r").read()
fnopunct = ""
for char in f:
if char not in punctuations:
fnopunct = fnopunct + char
fnopunct = fnopunct.lower()
wordlist = fnopunct.split()
wordlist = removeStopwords(wordlist, stopwords)
uniquewords = set(wordlist)
wordfreq = []
wordfreq = [wordlist.count(w) for w in uniquewords]
# Note: list.count matches whole tokens here, but str.count (used on the raw
# text below) also matches substrings, e.g. "ok joking".count("ok") == 2
# -
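# The substring pitfall noted above can be avoided by counting whole tokens with
# `collections.Counter`; a small sketch (not part of the original pipeline):

```python
from collections import Counter

text = "ok joking ok fine"
freqs = Counter(text.split())   # counts whole tokens only

# str.count also matches "ok" inside "joking", so it over-counts
substring_hits = text.count("ok")
```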
f = open("SMSSpamCollection","r")
X = [x.lower().split()[1:] for x in f]
for c in range(0,len(X)):
nopunct = ""
for char in " ".join(X[c]):
if char not in punctuations:
nopunct = nopunct + char
X[c] = nopunct
# +
wordfreq = np.array(wordfreq)
uniquewords = np.array(list(uniquewords))
wordarray = np.vstack((uniquewords, wordfreq)).transpose()
idx = np.argsort(wordarray[:, 1].astype(int))
sortedarray = wordarray[idx]
nwords = 50
mostcommon = sortedarray[len(uniquewords)-(nwords):len(uniquewords)]
mostcommon
# +
X[1]
XArray = np.zeros((len(X),nwords))
for c in range(0,len(X)):
XArray[c] = [int(X[c].count(w)) for w in mostcommon[:,0]]
Y = np.array(Y)
# -
XArray
# +
sz1,sz2 = XArray.shape
maximums = [max(x) for x in XArray.transpose()]
biggest_idx = len(Y)
region = np.array([True]*len(Y))
splits = []
for depth in [1,2]:
if (depth == 1) or not(sum(region) == 0 or sum(region) == len(idx)):
S = "Inf"
V = "Inf"
Xsubset = XArray[region,:]
Ysubset = Y[region]
gini_best = 1
for var in range(0,sz2):
for s in range(0,int(maximums[var])):
idx = Xsubset[:,var] < s + 0.5
nspam_R1 = sum(Ysubset[idx] == "spam")
n_R1 = sum(idx)
# nham_R1 = n_R1 - nspam_R1
                nspam_R2 = sum(Ysubset[~idx] == "spam")
                n_R2 = sum(~idx)
# nham_R2 = n_R2 - nspam_R2
                if n_R1 == 0 or n_R2 == 0:
                    continue
                prop_spam_R1 = nspam_R1/n_R1
                prop_spam_R2 = nspam_R2/n_R2
gini_R1 = 2*prop_spam_R1*(1-prop_spam_R1)
gini_R2 = 2*prop_spam_R2*(1-prop_spam_R2)
gini = gini_R1 + gini_R2
if gini < gini_best:
S = s
V = var
gini_best = gini
region = idx
print(S)
print(V)
splits.append([(S,V)])
# -
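# The Gini computation inside the loop can be factored into a small helper,
# which makes the impurity formula easy to sanity-check. This is a sketch, not
# part of the original notebook:

```python
def gini_impurity(labels):
    """Binary Gini impurity 2*p*(1-p); empty regions count as pure."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(1 for y in labels if y == "spam") / n
    return 2 * p * (1 - p)

# A pure region has impurity 0; a 50/50 split hits the maximum of 0.5
assert gini_impurity(["spam", "spam"]) == 0.0
assert gini_impurity(["spam", "ham"]) == 0.5
```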
| MyCART.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2977, "status": "ok", "timestamp": 1642547030953, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="CSwcin58Rg4-" outputId="0336bb76-f42c-4726-c6bb-23a8e7adbc6e"
from google.colab import drive
drive.flush_and_unmount()
drive._mount('/content/gdrive/')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 7991, "status": "ok", "timestamp": 1642547038942, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="jFRsDG-kRy6i" outputId="10810402-7cac-442f-9fce-83a371466b4c"
# !pip uninstall folium
# !pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install -r /content/gdrive/MyDrive/SimCSE/requirements.txt
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3569, "status": "ok", "timestamp": 1642547042503, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="K1C45Q6oBNQC" outputId="9e71b6ee-37b2-4438-b710-5c3b3f36244b"
# !pip install simcse
# + executionInfo={"elapsed": 1853, "status": "ok", "timestamp": 1642547044349, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="odzohkQ_J4rr"
import os
os.environ['TRANSFORMERS_CACHE'] = '/content/gdrive/MyDrive/SimCSE/transformers'
from simcse import SimCSE
# + executionInfo={"elapsed": 4835, "status": "ok", "timestamp": 1642547049181, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="PCPIOB-SClV1"
import sys
PATH_TO_SENTEVAL = '/content/gdrive/MyDrive/SimCSE/SentEval'
sys.path.insert(0, PATH_TO_SENTEVAL)
import senteval
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 204238, "status": "error", "timestamp": 1642547253413, "user": {"displayName": "mehmet glr", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "03167899680245464519"}, "user_tz": -180} id="Sd64DW24RM1S" outputId="2c452368-1ea5-4da2-9733-a7c167c11b6e"
import time
timestamp = time.time()
# !python /content/gdrive/MyDrive/SimCSE/train.py --model_name_or_path roberta-large --train_file /content/gdrive/MyDrive/SimCSE/data/nli_for_simcse.csv --output_dir /content/gdrive/MyDrive/SimCSE/trained/trained-sup-simcse-roberta-large --num_train_epochs 3 --per_device_train_batch_size 128 --learning_rate 5e-5 --max_seq_length 32 --evaluation_strategy steps --metric_for_best_model stsb_spearman --load_best_model_at_end --eval_steps 125 --pooler_type cls --overwrite_output_dir --temp 0.05 --do_train --do_eval --fp16 "$@"
print('Elapsed train time: ' + str(time.time() - timestamp) + ' sec')
| Notebooks/trainings/Train sup-simcse-roberta-large.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
patients = pd.read_csv('internacoes_charlson_zero.csv.gz', compression='gzip', verbose=True)
patients.shape,patients.columns
patients['days'].mean(), patients['target'].mean()
patients['evolucoes'].sum()
# +
from scipy.stats import spearmanr, pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error
from math import sqrt
import numpy as np
target = patients['target'].values
pred_full = np.full((len(target), 1), np.mean(target))
err_mean = mean_absolute_error(pred_full , target)
err_mean
# -
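# The cell above scores a mean-value baseline with `mean_absolute_error`; the
# same idea in plain Python (with made-up numbers) shows what `err_mean`
# measures:

```python
target = [2.0, 4.0, 6.0]
baseline = sum(target) / len(target)   # predict the mean for every patient
mae = sum(abs(baseline - y) for y in target) / len(target)
# mean = 4.0, absolute errors are 2, 0, 2, so MAE = 4/3
```

# Any model worth keeping should beat this constant-prediction error.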
| charlson_stats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ProfessorPatrickSlatraigh/CST2312/blob/main/file_load_snippet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="2eMnlGWcbl8E"
# This snippet prompts the user with a clickable link to search local directories for a file to upload
# + id="O8jv1Bp-beMq"
from google.colab import files
try:
uploaded_handle = files.upload()
except Exception:
print("Well that did not work. What next?")
| file_load_snippet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Part 2: Checkouts, Branching, & Merging
#
# This section deals with navigating repository history, creating & merging
# branches, and understanding conflicts
# ### The Hangar Workflow
#
# The Hangar workflow is intended to mimic common ``git`` workflows in which small
# incremental changes are made and committed on dedicated ``topic`` branches.
# After the ``topic`` has been adequately developed, the ``topic`` branch is ``merged`` into
# a separate branch (commonly referred to as ``master``, though it need not be the
# actual branch named ``"master"``), where well-vetted and more permanent changes
# are kept.
#
# Create Branch -> Checkout Branch -> Make Changes -> Commit
#
# #### Making the Initial Commit
#
# Let's initialize a new repository and see how branching works in Hangar
#
# <!-- However, unlike GIT, remember that it is not possible to make changes in a DETACHED HEAD state. Hangar enforces the requirement that all work is performed at the tip of a branch. -->
from hangar import Repository
import numpy as np
repo = Repository(path='foo/pth')
repo_pth = repo.init(user_name='Test User', user_email='<EMAIL>')
# When a repository is first initialized, it has no history, no commits.
repo.log() # -> returns None
# Though the repository is essentially empty at this point in time, there is one
# thing which is present: A branch with the name: ``"master"``.
repo.list_branches()
# This ``"master"`` is the branch we make our first commit on; until we do, the
# repository is in a semi-unstable state; with no history or contents, most of the
# functionality of a repository (to store, retrieve, and work with versions of
# data across time) just isn't possible. A significant portion of otherwise
# standard operations will generally flat out refuse to execute (i.e. read-only
# checkouts, log, push, etc.) until the first commit is made.
#
# One of the only options available at this point in time is to create a
# write-enabled checkout on the ``"master"`` branch and begin to add data so we
# can make a commit. Let’s do that now:
co = repo.checkout(write=True)
# As expected, there are no arraysets or metadata samples recorded in the checkout.
print(f'number of metadata keys: {len(co.metadata)}')
print(f'number of arraysets: {len(co.arraysets)}')
# Let’s add a dummy array just to put something in the repository history to
# commit. We'll then close the checkout so we can explore some useful tools which
# depend on having at least one historical record (commit) in the repo.
dummy = np.arange(10, dtype=np.uint16)
aset = co.arraysets.init_arrayset(name='dummy_arrayset', prototype=dummy)
aset['0'] = dummy
initialCommitHash = co.commit('first commit with a single sample added to a dummy arrayset')
co.close()
# If we check the history now, we can see our first commit hash, and that it is labeled with the branch name `"master"`
repo.log()
# So now our repository contains:
# - A commit: a fully independent description of the entire repository state as
# it existed at some point in time. A commit is identified by a `commit_hash`
# - A branch: a label pointing to a particular `commit` / `commit_hash`
#
# Once committed, it is not possible to remove, modify, or otherwise tamper with
# the contents of a commit in any way. It is a permanent record, which Hangar has
# no method to change once written to disk.
#
# In addition, as a ``commit_hash`` is not only calculated from the ``commit``\ ’s
# contents, but from the ``commit_hash`` of its parents (more on this to follow),
# knowing a single top-level ``commit_hash`` allows us to verify the integrity of
# the entire repository history. This fundamental behavior holds even in cases of
# disk-corruption or malicious use.
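# The claim that one top-level hash verifies the whole history can be
# illustrated with a toy hash chain. This is only an illustration of the idea,
# not Hangar's actual hashing scheme:

```python
import hashlib

def commit_hash(content, parent_hash):
    """Toy chained hash: each commit's hash covers its parent's hash."""
    return hashlib.sha256((content + parent_hash).encode()).hexdigest()

h1 = commit_hash("first commit", "")
h2 = commit_hash("second commit", h1)

# Tampering with the first commit changes every descendant hash as well,
# so checking only h2 detects the corruption
h1_bad = commit_hash("first commit (tampered)", "")
assert commit_hash("second commit", h1_bad) != h2
```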
#
# ### Working with Checkouts & Branches
#
# As mentioned in the first tutorial, we work with the data in a repository though
# a ``checkout``. There are two types of checkouts (each of which have different
# uses and abilities):
#
# **Checking out a branch/commit for reading:** is the process of retrieving
# records describing repository state at some point in time, and setting up access
# to the referenced data.
#
# - Any number of read checkout processes can operate on a repository (on
# any number of commits) at the same time.
#
# **Checking out a branch for writing:** is the process of setting up a (mutable)
# ``staging area`` to temporarily gather record references / data before all
# changes have been made and staging area contents are ``committed`` in a new
# permanent record of history (a ``commit``)
#
# - Only one write-enabled checkout can ever be operating in a repository
# at a time
# - When initially creating the checkout, the ``staging area`` is not
# actually “empty”. Instead, it has the full contents of the last ``commit``
# referenced by a branch’s ``HEAD``. These records can be removed/mutated/added
# to in any way to form the next ``commit``. The new ``commit`` retains a
#   permanent reference identifying the previous ``HEAD`` ``commit`` which was used as
#   its base ``staging area``
# - On commit, the branch which was checked out has its ``HEAD`` pointer
# value updated to the new ``commit``\ ’s ``commit_hash``. A write-enabled
# checkout starting from the same branch will now use that ``commit``\ ’s
#   record content as the base for its ``staging area``.
#
# #### Creating a branch
#
# A branch is an individual series of changes/commits which diverge from the main
# history of the repository at some point in time. All changes made along a branch
# are completely isolated from those on other branches. After some point in time,
# changes made in a disparate branches can be unified through an automatic
# ``merge`` process (described in detail later in this tutorial). In general, the
# ``Hangar`` branching model is semantically identical to ``Git``'s; Hangar branches
# also have the same lightweight and performant properties which make working with
# ``Git`` branches so appealing.
#
# In Hangar, a branch must always have a ``name`` and a ``base_commit``. However, if
# no ``base_commit`` is specified, the current writer branch ``HEAD`` ``commit``
# is used as the ``base_commit`` hash for the branch automatically.
branch_1 = repo.create_branch(name='testbranch')
branch_1
# Viewing the log, we see that a new branch named ``testbranch`` is pointing to our initial commit
print(f'branch names: {repo.list_branches()} \n')
repo.log()
# If instead we do specify the base commit (with a different branch
# name), we get a third branch pointing to the same commit as
# ``"master"`` and ``"testbranch"``
branch_2 = repo.create_branch(name='new', base_commit=initialCommitHash)
branch_2
repo.log()
# #### Making changes on a branch
#
# Let’s make some changes on the ``"new"`` branch to see how things work. We can
# see that the data we added previously is still here (``dummy`` arrayset containing
# one sample labeled ``0``)
co = repo.checkout(write=True, branch='new')
co.arraysets
co.arraysets['dummy_arrayset']
co.arraysets['dummy_arrayset']['0']
# Let's add another sample to the `dummy_arrayset` called `1`
# +
arr = np.arange(10, dtype=np.uint16)
# let's increment values so that `0` and `1` aren't set to the same thing
arr += 1
co.arraysets['dummy_arrayset']['1'] = arr
# -
# We can see that in this checkout, there are indeed, two samples in the `dummy_arrayset`
len(co.arraysets['dummy_arrayset'])
# That's all, let's commit this and be done with this branch
co.commit('commit on `new` branch adding a sample to dummy_arrayset')
co.close()
# #### How do changes appear when made on a branch?
#
# If we look at the log, we see that the branch we were on (`new`) is a commit ahead of `master` and `testbranch`
repo.log()
# The meaning is exactly what one would intuit. We made some changes, they were
# reflected on the ``new`` branch, but the ``master`` and ``testbranch`` branches
# were not impacted at all, nor were any of the commits!
# ### Merging (Part 1) Fast-Forward Merges
#
# Say we like the changes we made on the ``new`` branch so much that we want them
# to be included into our ``master`` branch! How do we make this happen for this
# scenario??
#
# Well, the history between the ``HEAD`` of the ``"new"`` and the ``HEAD`` of the
# ``"master"`` branch is perfectly linear. In fact, when we began making changes
# on ``"new"``, our staging area was *identical* to what the ``"master"`` ``HEAD``
# commit references are right now!
#
# If you’ll remember that a branch is just a pointer which assigns some ``name``
# to a ``commit_hash``, it becomes apparent that a merge in this case really
# doesn’t involve any work at all. With a linear history between ``"master"`` and
# ``"new"``, any ``commits`` existing along the path between the ``HEAD`` of
# ``"new"`` and ``"master"`` are the only changes which are introduced, and we can
# be sure that this is the only view of the data records which can exist!
#
# What this means in practice is that for this type of merge, we can just update
# the ``HEAD`` of ``"master"`` to point to the ``"HEAD"`` of ``"new"``, and the
# merge is complete.
#
# This situation is referred to as a **Fast Forward (FF) Merge**. A FF merge is
# safe to perform any time a linear history lies between the ``"HEAD"`` of some
# ``topic`` and ``base`` branch, regardless of how many commits or changes which
# were introduced.
#
# For other situations, a more complicated **Three Way Merge** is required. This
# merge method will be explained a bit more later in this tutorial
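# Since a branch is only a name-to-hash pointer, a fast-forward merge can be
# sketched as nothing more than a pointer update. This is a toy model for
# intuition, not Hangar's internals:

```python
# branches map names to commit hashes; each commit records its parent
branches = {"master": "c1", "new": "c3"}
parents = {"c3": "c2", "c2": "c1", "c1": None}

def is_ancestor(old, new):
    """True if `old` lies on the linear history behind `new`."""
    while new is not None:
        if new == old:
            return True
        new = parents.get(new)
    return False

# When the base HEAD is an ancestor of the topic HEAD, the whole FF merge
# is a single pointer assignment
if is_ancestor(branches["master"], branches["new"]):
    branches["master"] = branches["new"]

assert branches["master"] == "c3"
```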
co = repo.checkout(write=True, branch='master')
# #### Performing the Merge
#
# In practice, you’ll never need to know the details of the merge theory explained
# above (or even remember it exists). Hangar automatically figures out which merge
# algorithm should be used and then performs whatever calculations are needed to
# compute the results.
#
# As a user, merging in Hangar is a one-liner!
co.merge(message='message for commit (not used for FF merge)', dev_branch='new')
# Let's check the log!
repo.log()
co.branch_name
co.commit_hash
co.arraysets['dummy_arrayset']
# As you can see, everything is as it should be!
co.close()
# #### Making changes to introduce diverged histories
#
# Let’s now go back to our ``"testbranch"`` branch and make some changes there so
# we can see what happens when changes don’t follow a linear history.
co = repo.checkout(write=True, branch='testbranch')
co.arraysets
co.arraysets['dummy_arrayset']
# We will start by mutating sample `0` in `dummy_arrayset` to a different value
dummy_aset = co.arraysets['dummy_arrayset']
old_arr = dummy_aset['0']
new_arr = old_arr + 50
new_arr
dummy_aset['0'] = new_arr
# let’s make a commit here, then add some metadata and make a new commit (all on
# the ``testbranch`` branch)
co.commit('mutated sample `0` of `dummy_arrayset` to new value')
repo.log()
co.metadata['hello'] = 'world'
co.commit('added hello world metadata')
co.close()
# Looking at our history now, we see that none of the original branches reference
# our first commit anymore
repo.log()
# We can check the history of the ``"master"`` branch by specifying it as
# an argument to the ``log()`` method
repo.log('master')
# ### Merging (Part 2) Three Way Merge
#
# If we now want to merge the changes on `"testbranch"` into `"master"`, we can't just follow a simple linear history; **the branches have diverged**.
#
# For this case, Hangar implements a **Three Way Merge** algorithm which does the following:
# - Find the most recent common ancestor `commit` present in both the `"testbranch"` and `"master"` branches
# - Compute what changed between the common ancestor and each branch's `HEAD` commit
# - Check if any of the changes conflict with each other (more on this in a later tutorial)
# - If no conflicts are present, compute the results of the merge between the two sets of changes
# - Create a new `commit` containing the merge results, reference both branch `HEAD`s as parents of the new `commit`, and update the `base` branch `HEAD` to that new `commit`'s `commit_hash`
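# The steps above can be sketched with flat ``{key: value}`` snapshots (a toy
# model, not Hangar's actual record structures):

```python
def three_way_merge(ancestor, master, dev):
    # what changed between the common ancestor and each branch HEAD
    m_changes = {k: master.get(k) for k in set(ancestor) | set(master)
                 if ancestor.get(k) != master.get(k)}
    d_changes = {k: dev.get(k) for k in set(ancestor) | set(dev)
                 if ancestor.get(k) != dev.get(k)}
    # a key changed on both sides to different values is a conflict
    conflicts = [k for k in m_changes.keys() & d_changes.keys()
                 if m_changes[k] != d_changes[k]]
    if conflicts:
        raise ValueError(f"conflicts on keys: {conflicts}")
    merged = dict(ancestor)
    for changes in (m_changes, d_changes):
        for k, v in changes.items():
            if v is None:
                merged.pop(k, None)   # key was removed on that branch
            else:
                merged[k] = v
    return merged

base   = {"0": "arr_v1"}
master = {"0": "arr_v1", "1": "arr_new"}   # added key "1"
dev    = {"0": "arr_v2"}                   # mutated key "0"
three_way_merge(base, master, dev)         # {'0': 'arr_v2', '1': 'arr_new'}
```

# Hangar performs the same bookkeeping over sample, arrayset and metadata
# records, comparing data hashes rather than raw values.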
co = repo.checkout(write=True, branch='master')
# Once again, as a user, the details are completely irrelevant, and the operation
# occurs from the same one-liner call we used before for the FF Merge.
co.merge(message='merge of testbranch into master', dev_branch='testbranch')
# If we now look at the log, we see that this has a much different look than
# before. The three way merge results in a history which references changes made
# in both diverged branches, and unifies them in a single ``commit``
repo.log()
# #### Manually inspecting the merge result to verify it matches our expectations
#
# ``dummy_arrayset`` should contain two arrays, key ``1`` was set in the previous
# commit originally made in ``"new"`` and merged into ``"master"``. Key ``0`` was
# mutated in ``"testbranch"`` and unchanged in ``"master"``, so the update from
# ``"testbranch"`` is kept.
#
# There should be one metadata sample with the key ``"hello"`` and the value
# ``"world"``
co.arraysets
co.arraysets['dummy_arrayset']
co.arraysets['dummy_arrayset']['0']
co.arraysets['dummy_arrayset']['1']
co.metadata
co.metadata['hello']
# **The Merge was a success!**
co.close()
# ### Conflicts
#
# Now that we've seen merging in action, the next step is to talk about conflicts.
#
# #### How Are Conflicts Detected?
#
# Any merge conflicts can be identified and addressed ahead of running a ``merge``
# command by using the built-in ``diff`` tools. When diffing commits, Hangar will
# provide a list of conflicts which it identifies. In general these fall into 4
# categories:
#
# 1. **Additions** in both branches which created new keys (samples /
# arraysets / metadata) with non-compatible values. For samples &
# metadata, the hash of the data is compared, for arraysets, the schema
# specification is checked for compatibility in a method custom to the
# internal workings of Hangar.
# 2. **Removal** in ``Master Commit/Branch`` **& Mutation** in ``Dev Commit /
# Branch``. Applies for samples, arraysets, and metadata
# identically.
# 3. **Mutation** in ``Dev Commit/Branch`` **& Removal** in ``Master Commit /
# Branch``. Applies for samples, arraysets, and metadata
# identically.
# 4. **Mutations** on keys in both branches to non-compatible values. For
# samples & metadata, the hash of the data is compared, for arraysets, the
# schema specification is checked for compatibility in a method custom to the
# internal workings of Hangar.
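#
# The four categories can be sketched over flat ``{key: value}`` snapshots
# (a toy classification; Hangar compares data hashes and schema compatibility
# rather than raw values). A key absent from a branch dict means it was removed:

```python
def classify_conflicts(ancestor, master, dev):
    t1, t21, t22, t3 = [], [], [], []
    for k in set(master) | set(dev):
        a, m, d = ancestor.get(k), master.get(k), dev.get(k)
        if a is None and m is not None and d is not None and m != d:
            t1.append(k)      # 1. added on both sides with different values
        elif a is not None:
            if m is None and d is not None and d != a:
                t21.append(k) # 2. removed in master, mutated in dev
            elif d is None and m is not None and m != a:
                t22.append(k) # 3. mutated in master, removed in dev
            elif m is not None and d is not None and m != a and d != a and m != d:
                t3.append(k)  # 4. mutated on both sides to different values
    return t1, t21, t22, t3

anc = {"x": 1, "y": 2}
mas = {"x": 1, "y": 3, "z": 9}   # mutated y, added z
dv  = {"x": 1, "y": 4, "z": 9}   # mutated y differently, added identical z
classify_conflicts(anc, mas, dv) # ([], [], [], ['y'])
```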
#
# #### Let's make a merge conflict
#
# To force a conflict, we are going to checkout the ``"new"`` branch and set the
# metadata key ``"hello"`` to the value ``"foo conflict... BOO!"``. If we then try
# to merge this into the ``"testbranch"`` branch (which set ``"hello"`` to a value
# of ``"world"``) we see how Hangar will identify the conflict and halt without
# making any changes.
#
# Automated conflict resolution will be introduced in a future version of Hangar;
# for now it is up to the user to manually resolve conflicts by making any
# necessary changes in each branch before reattempting a merge operation.
co = repo.checkout(write=True, branch='new')
co.metadata['hello'] = 'foo conflict... BOO!'
co.commit('commit on new branch to hello metadata key so we can demonstrate a conflict')
repo.log()
# **When we attempt the merge, an exception is thrown telling us there is a conflict!**
co.merge(message='this merge should not happen', dev_branch='testbranch')
# #### Checking for Conflicts
#
# Alternatively, use the diff methods on a checkout to test for conflicts before attempting a merge.
merge_results, conflicts_found = co.diff.branch('testbranch')
conflicts_found
conflicts_found['meta']
# The type codes for a `ConflictRecords` `namedtuple` such as the one we saw:
#
# ConflictRecords(t1=('hello',), t21=(), t22=(), t3=(), conflict=True)
#
# are as follows:
#
# - ``t1``: Addition of key in master AND dev with different values.
# - ``t21``: Removed key in master, mutated value in dev.
# - ``t22``: Removed key in dev, mutated value in master.
# - ``t3``: Mutated key in both master AND dev to different values.
# - ``conflict``: Bool indicating if any type of conflict is present.
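# Since ``ConflictRecords`` is a plain ``namedtuple``, the conflicting keys can
# be collected programmatically — a small sketch using the field names shown
# above:

```python
from collections import namedtuple

ConflictRecords = namedtuple("ConflictRecords", ["t1", "t21", "t22", "t3", "conflict"])
rec = ConflictRecords(t1=("hello",), t21=(), t22=(), t3=(), conflict=True)

# pair every conflicting key with its type code for reporting
issues = [(code, key)
          for code in ("t1", "t21", "t22", "t3")
          for key in getattr(rec, code)]
issues  # [('t1', 'hello')]
```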
# #### To resolve, remove the conflict
del co.metadata['hello']
co.metadata['resolved'] = 'conflict by removing hello key'
co.commit('commit which removes conflicting metadata key')
co.merge(message='this merge succeeds as it no longer has a conflict', dev_branch='testbranch')
# We can verify that history looks as we would expect via the log!
repo.log()
| docs/Hangar-Tutorial-002.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KDE demo, with histosys!
#
# > It works :)
# 
# ## This depends on a *very* experimental fork of pyhf, install it by running the cell below:
# !python -m pip install git+https://github.com/phinate/pyhf.git@diffable_json
# +
# import jax
import neos.makers as makers
import neos.cls as cls
import numpy as np
import jax.experimental.stax as stax
import jax.experimental.optimizers as optimizers
import jax.random
import time
import pyhf
pyhf.set_backend(pyhf.tensor.jax_backend())
# -
# regression net
init_random_params, predict = stax.serial(
stax.Dense(1024),
stax.Relu,
stax.Dense(1024),
stax.Relu,
stax.Dense(1),
stax.Sigmoid
)
# ## Compose differentiable workflow
# +
# choose hyperparams
bins = np.linspace(0,1,4)
centers = bins[:-1] + np.diff(bins)/2.
bandwidth = 0.8 * 1/(len(bins)-1)
# compose functions from neos to define workflow
hmaker = makers.kde_bins_from_nn_histosys(predict,bins=bins,bandwidth=bandwidth)
nnm = makers.nn_histosys(hmaker)
loss = cls.cls_maker(nnm, solver_kwargs=dict(pdf_transform=True))
bandwidth # print bw
# -
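# The binned KDE ("soft histogram") behind ``kde_bins_from_nn_histosys`` can be
# sketched with Gaussian CDFs: each event spreads a kernel of width
# ``bandwidth`` over the bins, so the bin counts stay differentiable with
# respect to the event positions. A minimal version of the idea (not neos'
# exact implementation):

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

def kde_hist(events, bins, bandwidth):
    # bin content = kernel probability mass between consecutive bin edges
    edges = jnp.asarray(bins)
    lo = norm.cdf(edges[:-1][None, :], loc=events[:, None], scale=bandwidth)
    hi = norm.cdf(edges[1:][None, :], loc=events[:, None], scale=bandwidth)
    return (hi - lo).sum(axis=0)  # one soft count per bin

events = jnp.array([0.1, 0.5, 0.9])
kde_hist(events, jnp.linspace(0, 1, 4), bandwidth=0.27)
```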
# ### Randomly initialise nn weights and check that we can get the gradient of the loss wrt nn params
_, network = init_random_params(jax.random.PRNGKey(13), (-1, 2))
jax.grad(loss)(network, 1.0)
# ### Define training loop!
# +
#jit_loss = jax.jit(loss)
opt_init, opt_update, opt_params = optimizers.adam(1e-3)
@jax.jit
def update_and_value(i, opt_state, mu):
net = opt_params(opt_state)
value, grad = jax.value_and_grad(loss)(net, mu)
    return opt_update(i, grad, opt_state), value, net
def train_network(N):
cls_vals = []
_, network = init_random_params(jax.random.PRNGKey(1), (-1, 2))
state = opt_init(network)
losses = []
# parameter update function
    # @jax.jit
def update_and_value(i, opt_state, mu):
net = opt_params(opt_state)
value, grad = jax.value_and_grad(loss)(net, mu)
        return opt_update(i, grad, opt_state), value, net
for i in range(N):
start_time = time.time()
state, value, network = update_and_value(i,state,1.0)
epoch_time = time.time() - start_time
losses.append(value)
metrics = {"loss": losses}
yield network, metrics, epoch_time
# -
# ### Plotting helper function for awesome animations :)
def plot(axarr, network, metrics, hm, maxN):
ax = axarr[0]
g = np.mgrid[-5:5:101j, -5:5:101j]
levels = bins
ax.contourf(
g[0],
g[1],
predict(network, np.moveaxis(g, 0, -1)).reshape(101, 101, 1)[:, :, 0],
levels=levels,
cmap="binary",
)
ax.contour(
g[0],
g[1],
predict(network, np.moveaxis(g, 0, -1)).reshape(101, 101, 1)[:, :, 0],
colors="w",
levels=levels,
)
ax.scatter(hm.sig[:, 0], hm.sig[:, 1], alpha=0.3, c="C9")
ax.scatter(hm.bkg_up[:, 0], hm.bkg_up[:, 1], alpha=0.1, c="C1", marker = 6)
ax.scatter(hm.bkg_down[:, 0], hm.bkg_down[:, 1], alpha=0.1, c="C1", marker = 7)
ax.scatter(hm.bkg_nom[:, 0], hm.bkg_nom[:, 1], alpha=0.3, c="C1")
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax = axarr[1]
ax.axhline(0.05, c="slategray", linestyle="--")
ax.plot(metrics["loss"], c="steelblue", linewidth=2.0)
ax.set_ylim(0, 0.15)
ax.set_xlim(0, maxN)
ax.set_xlabel("epoch")
ax.set_ylabel(r"$CL_s$")
ax = axarr[2]
s, b, bup, bdown = hm(network)
bin_width = 1/(len(bins)-1)
ax.bar(centers, b, color="C1", width=bin_width)
ax.bar(centers, s, bottom=b, color="C9", width=bin_width)
bunc = np.asarray([[x,y] if x>y else [y,x] for x,y in zip(bup,bdown)])
plot_unc = []
for unc, be in zip(bunc,b):
if all(unc > be):
plot_unc.append([max(unc),be])
elif all(unc < be):
plot_unc.append([be, min(unc)])
else:
plot_unc.append(unc)
plot_unc = np.asarray(plot_unc)
b_up, b_down = plot_unc[:,0], plot_unc[:,1]
ax.bar(centers, b_up-b, bottom=b, alpha=0.4, color="black",width=bin_width)
ax.bar(centers, b-b_down, bottom=b_down, alpha=0.4, color="black", width=bin_width)
ax.set_ylim(0, 60)
ax.set_ylabel("frequency")
ax.set_xlabel("nn output")
# ## Install celluloid to create animations if you haven't already by running this next cell:
# !python -m pip install celluloid
# ### Let's run it!!
# +
#slow
import numpy as np
from matplotlib import pyplot as plt
from IPython.display import HTML
plt.rcParams.update(
{
"axes.labelsize": 13,
"axes.linewidth": 1.2,
"xtick.labelsize": 13,
"ytick.labelsize": 13,
"figure.figsize": [13., 4.0],
"font.size": 13,
"xtick.major.size": 3,
"ytick.major.size": 3,
"legend.fontsize": 11,
}
)
fig, axarr = plt.subplots(1, 3, dpi=120)
maxN = 50 # make me bigger for better results!
animate = False # animations fail tests...
if animate:
from celluloid import Camera
camera = Camera(fig)
# Training
for i, (network, metrics, epoch_time) in enumerate(train_network(maxN)):
print(f"epoch {i}:", f'CLs = {metrics["loss"][-1]}, took {epoch_time}s')
if animate:
plot(axarr, network, metrics, nnm.hm, maxN=maxN)
plt.tight_layout()
camera.snap()
    if animate and i % 10 == 0:
camera.animate().save("animation.gif", writer="imagemagick", fps=8)
#HTML(camera.animate().to_html5_video())
if animate:
camera.animate().save("animation.gif", writer="imagemagick", fps=8)
# + active=""
#
# -
| nbs/demo_kde_pyhf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dhlunRqSsdCo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="2b64aadf-7ab6-4284-830d-1fa09a8f0d13"
# !pip install pyDOE
# + id="mu4h5EA3soFy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b3a4433e-4d70-40a9-9d37-465da123fbfc"
import sys
# Include the path that contains a number of files that have txt files containing solutions to the Burger's problem.
# sys.path.insert(0,'../../Utilities/')
import os
os.getcwd()
# Import required modules
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import scipy.io
from scipy.interpolate import griddata
from scipy.integrate import solve_ivp
from pyDOE import lhs
from mpl_toolkits.mplot3d import Axes3D
import time
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
np.random.seed(1234)
torch.manual_seed(1234)
# + [markdown] id="5Sw9C2NDsxyE" colab_type="text"
# Let us obtain the high fidelity solution for the Lorenz attractor
#
# $\frac{dx}{dt} = \sigma(y-x)$
#
# $\frac{dy}{dt} = x(\rho-z) - y$
#
# $\frac{dz}{dt} = xy - \beta z$
#
# The classic chaotic attractor is obtained when $\sigma = 10$, $\beta = \frac{8}{3}$, and $\rho = 28$.
#
# + id="7S-4vWlssvwZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="2513924d-a8c4-40a9-c0a8-5c784ce4f43c"
rho = 28.0
sigma = 10.0
beta = 8.0 / 3.0
def f(t, state):
x, y, z = state # Unpack the state vector
return sigma * (y - x), x * (rho - z) - y, x * y - beta * z # Derivatives
state0 = [1.0, 1.0, 1.0]
t_span = [0, 40.0]
states = solve_ivp(f, t_span, state0, t_eval = np.linspace(0,40.0,4001))
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(states.y[0,:], states.y[1,:], states.y[2,:])
plt.draw()
plt.show()
# + [markdown] id="538Hd5BYzRQz" colab_type="text"
# The above shows the solution for the Lorenz attractor. The question now is: can a PINN predict this behaviour given some sparse data points? It is clear here that this is strictly an IVP and not a mixed IVP + BVP. Thus, we may need additional points within the domain to ensure that the predictions are right.
#
# Within the PINN, we have a state ```u``` that is a function of space **X** and time **t**. ```u``` is governed by a physics equation (e.g. Burgers' equation). In the Lorenz attractor, we have three states to track: ```x```, ```y```, and ```z```. Each of these states is a function of **t**.
# + id="ryRIWCdQsqVY" colab_type="code" colab={}
class PhysicsInformedNN:
# Initialize the class
"""
    This class defines the Physics Informed Neural Network. The class is first initialized by the __init__ function. Additional functions related to the class are also defined subsequently.
"""
def __init__(self, t, X, t_f, layers, lb, ub, sigma, beta, rho, epochs):
"""
The initialisation takes:
t: Location of training points (contains t)
X: Data at the training point location (x, y, z)
X_u and u form the labelled, ordered training set pair for learning.
t_f: Locations for additional training based on physics (in this case the ODE).
sigma, beta, rho: Additional problem based parameters.
epoch: number of epochs to run the learning.
"""
# Defining the lower and upper bound of the domain.
self.lb = lb
self.ub = ub
self.epochs = epochs
# X_u = 2.0 * (X_u - self.lb)/(self.ub - self.lb) - 1.0
# X_f = 2.0 * (X_f - self.lb)/(self.ub - self.lb) - 1.0
        # Define the initial conditions for x, y, z and t for the MSE loss
self.t = torch.tensor(t).float()
self.t.requires_grad = True
        # Define the initial conditions for X and t for the field loss
self.t_f = torch.tensor(t_f).float()
self.t_f.requires_grad = True
# Declaring the field for the variable to be solved for
self.X = torch.tensor(X).float()
self.X_true = torch.tensor(X).float()
# Declaring the number of layers in the Neural Network
self.layers = layers
        # Defining the problem parameters (sigma, beta, rho)
self.sigma = sigma
self.beta = beta
self.rho = rho
# Create the structure of the neural network here, or build a function below to build the architecture and send the model here.
self.model = self.neural_net(layers)
# Define the initialize_NN function to obtain the initial weights and biases for the network.
self.model.apply(self.initialize_NN)
# Select the optimization method for the network. Currently, it is just a placeholder.
self.optimizer = torch.optim.SGD(self.model.parameters(), lr = 0.01)
self.losses = []
# train(model,epochs,self.x_u_tf,self.t_u_tf,self.x_f_tf,self.t_f_tf,self.u_tf)
def neural_net(self, layers):
"""
        A function to build the neural network of the required size using the weights and biases provided. Instead of doing this, can we use a simple constructor method and initialize them after construction? That would be sensible and faster.
"""
model = nn.Sequential()
for l in range(0, len(layers) - 1):
model.add_module("layer_"+str(l), nn.Linear(layers[l],layers[l+1], bias=True))
if l != len(layers) - 2:
model.add_module("tanh_"+str(l), nn.Tanh())
return model
def initialize_NN(self, m):
"""
        Initialize the neural network with the required layers, the weights and the biases. The input "layers" is an array that contains the number of nodes (neurons) in each layer.
"""
if type(m) == nn.Linear:
nn.init.xavier_uniform_(m.weight)
# print(m.weight)
def net_X(self, t_point):
"""
Forward pass through the network to obtain the U field.
"""
X = self.model(t_point)
return X
def net_f(self, t_point):
X = self.net_X(t_point)
dX = torch.autograd.grad(X[:,0], t_point, grad_outputs = torch.ones([len(t_point)], dtype = torch.float), create_graph = True)
dY = torch.autograd.grad(X[:,1], t_point, grad_outputs = torch.ones([len(t_point)], dtype = torch.float), create_graph = True)
dZ = torch.autograd.grad(X[:,2], t_point, grad_outputs = torch.ones([len(t_point)], dtype = torch.float), create_graph = True)
# This is the losses from the 3 ODEs
f1 = dX[0].squeeze() - self.sigma * (X[:,1] - X[:,0])
f2 = dY[0].squeeze() - X[:,0] * (self.rho - X[:,2]) + X[:,1]
f3 = dZ[0].squeeze() - X[:,0] * X[:,1] + self.beta * X[:,2]
return [f1,f2,f3]
def calc_loss(self, X_pred, X_true, f_pred):
X_error = X_pred - X_true
loss_u = torch.mean(torch.mul(X_error, X_error))
loss_f1 = torch.mean(torch.mul(f_pred[0], f_pred[0]))
loss_f2 = torch.mean(torch.mul(f_pred[1], f_pred[1]))
loss_f3 = torch.mean(torch.mul(f_pred[2], f_pred[2]))
loss_f = loss_f1 + loss_f2 + loss_f3
losses = loss_u + loss_f
# print('Loss: %.4e, U_loss: %.4e, F_loss: %.4e' %(losses, loss_u, loss_f))
return losses
def set_optimizer(self,optimizer):
self.optimizer = optimizer
def train(self):
for epoch in range(0,self.epochs):
# Now, one can perform a forward pass through the network to predict the value of u and f for various locations of x and at various times t. The function to call here is net_u and net_f.
# Here it is crucial to remember to provide x and t as columns and not as rows. Concatenation in the prediction step will fail otherwise.
X_pred = self.net_X(self.t)
f_pred = self.net_f(self.t_f)
# Now, we can define the loss of the network. The loss here is broken into two components: one is the loss due to miscalculating the predicted value of u, the other is for not satisfying the physical governing equation in f which must be equal to 0 at all times and all locations (strong form).
loss = self.calc_loss(X_pred, self.X_true, f_pred)
if epoch % 100 == 0:
print('Loss: %.4e' %(loss))
self.losses.append(loss.detach().numpy())
# Clear out the previous gradients
self.optimizer.zero_grad()
# Calculate the gradients using the backward() method.
loss.backward() # Here, a tensor may need to be passed so that the gradients can be calculated.
# Optimize the parameters through the optimization step and the learning rate.
self.optimizer.step()
# Repeat the prediction, calculation of losses, and optimization a number of times to optimize the network.
# def closure(self):
# self.optimizer.zero_grad()
# u_pred = self.net_u(self.x_u, self.t_u)
# f_pred = self.net_f(self.x_f, self.t_f)
# loss = self.calc_loss(u_pred, self.u_true, f_pred)
# loss.backward()
# return loss
# + id="d213_a8M8B8x" colab_type="code" colab={}
if __name__ == "__main__":
sigma = 10.0
beta = 8/3
rho = 28.0
n_epochs = 100
N_u = 100
N_f = 1000
# Layer Map
layers = [1, 20, 20, 20, 20, 20, 20, 20, 20, 3]
# data = scipy.io.loadmat('burgers_shock.mat')
# t = data['t'].flatten()[:,None]
# x = data['x'].flatten()[:,None]
# Exact = np.real(data['usol']).T
X = states.y.T
t = states.t
# X, T = np.meshgrid(x,t)
# X_star = np.hstack((X.flatten()[:,None],T.flatten()[:,None]))
# u_star = Exact.flatten()[:,None]
# Domain bounds
lb = t.min(0)
ub = t.max(0)
# T[0:1,:].T
t1 = t[0] # Initial Conditions (time)
X1 = X[0,:] # Initial Condition (state)
# xx2 = np.hstack((X[:,0:1], T[:,0:1])) # Boundary Condition 1
# uu2 = Exact[:,0:1]
# xx3 = np.hstack((X[:,-1:], T[:,-1:])) # Boundary Condition 2
# uu3 = Exact[:,-1:]
# X_u_train = np.vstack([xx1, xx2, xx3])
t_train = [t1]
t_f_train = lb + (ub-lb)*lhs(1,N_f)
t_f_train = np.vstack((t_f_train, t_train))
X_train = X1
idx = np.random.choice(t.shape[0], N_u, replace=False)
t_train = np.resize(np.append(t1,t[idx]),[101,1])
X_train = np.resize(np.append(X1,X[idx,:]),[101,3])
pinn = PhysicsInformedNN(t_train, X_train, t_f_train, layers, lb, ub,
sigma, beta, rho, n_epochs)
# + id="n8g3-0aXmWL9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 340} outputId="c2945f13-e645-42bf-9ff3-6981905a19cb"
pinn.model
# + id="GWi29U3x-hmr" colab_type="code" colab={}
pinn.set_optimizer(torch.optim.Adam(pinn.model.parameters(), lr = 1e-3))
# + id="FWoaZgrP-n5r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 452} outputId="500a3206-6a50-4604-d1b1-a992c6ab6b0d"
for _ in range(10):
pinn.train()
plt.plot(np.linspace(0,len(pinn.losses),num=len(pinn.losses)),np.log10(pinn.losses))
# + id="3I0x3RWEAOke" colab_type="code" colab={}
states_pred = pinn.model(torch.tensor(np.resize(np.linspace(0,1,4001),[4001,1]),dtype=torch.float)).detach().numpy()
# + id="nAVjmCgmEa0z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="5d1c8424-ca29-4d3e-df97-cc3e8b3851de"
t_max = 1000
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(states.y[0,:t_max], states.y[1,:t_max], states.y[2,:t_max])
ax.plot(states_pred[:t_max,0], states_pred[:t_max,1], states_pred[:t_max,2])
plt.draw()
plt.show()
# + id="F1AQBoIffs82" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="0cdf587d-a3dd-4b6d-c21a-9d7c78855c22"
states_pred[:,0]
# + id="DqozRLt8Sb5E" colab_type="code" colab={}
| Lorenz Attractor/Lorenz_Attractor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print('Hello World!')
import time
time.sleep(3)
# # This is a level 1 heading
# ## This is a level 2 heading
# This is some plain text that forms a paragraph.
# Add emphasis via **bold** and __bold__, or *italic* and _italic_.
#
# Paragraphs must be separated by an empty line.
#
# * Sometimes we want to include lists.
# * Which can be indented.
#
# Abc
# 1. Lists can also be numbered.
# 2. For ordered lists.
#
# [It is possible to include hyperlinks](https://www.example.com)
#
# Inline code uses single backticks:
# `foo()`
# and code blocks use triple backticks:
# ```
# foo()
# bar()
# ```
# Or can be indented by 4 spaces:
# foo()
# And finally, adding images is easy: 
| .ipynb_checkpoints/Untitled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import pandas as pd  # pd.DataFrame is used below
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from nltk.corpus import wordnet as wn
# %run scripts/helper.py
crowd_train = load_file('./data/train.csv/train.csv', None)
# +
# General text related features
# 1. Text length of the title
# -
crowd_train.columns
title_length = crowd_train[crowd_train.relevance_variance < 0.5].apply(lambda x: len(x['product_title']), axis=1)
# lets see how correlated this feature is with response variable
# crowd_train[['title_length', 'median_relevance']].corr()
pd.DataFrame({'title_length': title_length,
'median_relevance': crowd_train[crowd_train.relevance_variance < 0.5].median_relevance}).corr()
# +
# Not a high correlation
# -
# 2. Number of words in the title
# crowd_train['num_words_title'] = crowd_train.apply(lambda x: len(x['product_title'].split(' ')), axis=1)
num_words_title = crowd_train[crowd_train.relevance_variance < 0.5].apply(lambda x: len(x['product_title'].split(' ')), axis=1)
# crowd_train[['num_words_title', 'median_relevance']].corr()
pd.DataFrame({'num_words_title': num_words_title,
'median_relevance': crowd_train[crowd_train.relevance_variance < 0.5].median_relevance}).corr()
# 3. Number of words in the prodcut description
crowd_train['num_words_desc'] = crowd_train.apply(lambda x: len(x['product_description']), axis=1)
crowd_train[['num_words_desc', 'median_relevance']].corr()
# +
# 4. Ratio of words in query other than stopwords that match in the title and description
def f(x):
query = x['query'].lower()
title = x['product_title'].lower()
desc = x['product_description'].lower()
stop = stopwords.words('english')
total_words = len(title.split(' ')) + len(desc.split(' '))
count = 0
unique_query_terms = list(set(query.split(' ')))
for q in unique_query_terms:
if q not in stop:
if q in title or q in desc:
count += 1
return (count * 1.) / total_words
crowd_train['ratio_query_terms_in_res'] = crowd_train.apply(f, axis=1)
# -
crowd_train[['ratio_query_terms_in_res', 'median_relevance']].corr()
# +
# Jaccard similarity between query and title
def jaccard(x):
query = x['query'].lower()
title = x['product_title'].lower()
response = title
query_set = set(query.split(' '))
response_set = set(response.split(' '))
query_response_intersection_len = len(query_set & response_set)
query_response_union_len = len(query_set | response_set)
return (query_response_intersection_len * 1.) / (query_response_union_len)
crowd_train['jaccard_dist'] = crowd_train.apply(jaccard, axis=1)
# -
# lets how much this variable is correlated with distance
crowd_train[['jaccard_dist', 'median_relevance']].corr()
crowd_train.jaccard_dist.head()
# +
# Check if query term in response
def is_query_in_response(train):
query_terms = train['query'].split(' ')
response = train['product_title'] + ' ' + train['product_description']
stemmer = PorterStemmer()
query_terms_stemmed = [stemmer.stem(q) for q in query_terms]
    response_stemmed = ' '.join([stemmer.stem(r) for r in response.split()])  # stem words, not characters
stop = stopwords.words('english')
keyword = False
for q in query_terms_stemmed:
if q not in stop:
keyword = True
if response_stemmed.lower().find(q) == -1:
return 0
if keyword == False:
return 0
else:
return 1
# crowd_train['query_in_response'] = crowd_train.apply(is_query_in_response, axis=1)
query_in_response = crowd_train[crowd_train.relevance_variance < 0.5].apply(is_query_in_response, axis=1)
# -
# crowd_train[['query_in_response', 'median_relevance']].corr()
pd.DataFrame({'query_in_response': query_in_response,
'median_relevance': crowd_train[crowd_train.relevance_variance < 0.5].median_relevance}).corr()
# +
# lets find out how many query terms found in response
def count_query_terms_in_response(train):
query_terms = train['query'].split(' ')
unique_terms = list(set(query_terms))
response = train['product_title'].lower() + ' ' + train['product_description'].lower()
stemmer = PorterStemmer()
query_terms_stemmed = [stemmer.stem(q) for q in unique_terms]
    response_stemmed = ' '.join([stemmer.stem(r) for r in response.split()])  # stem words, not characters
stop = stopwords.words('english')
count = 0
for q in query_terms_stemmed:
if q not in stop:
if response_stemmed.find(q) != -1:
count += 1
return count
# crowd_train['num_terms_in_resp'] = crowd_train.apply(count_query_terms_in_response, axis=1)
num_terms_in_resp = crowd_train[crowd_train.relevance_variance < 0.5].apply(count_query_terms_in_response, axis=1)
# -
# crowd_train[['num_terms_in_resp', 'median_relevance']].corr()
pd.DataFrame({'num_terms_in_resp': num_terms_in_resp,
'median_relevance': crowd_train[crowd_train.relevance_variance < 0.5].median_relevance}).corr()
# +
def lch_similarity(x):
query = x['query'].lower()
response = x['product_title'].lower() + ' ' + x['product_description'].lower()
stop = stopwords.words('english')
total = 0
for q in query.split(' '):
if q not in stop:
query_noun = wn.synsets(q, pos=wn.NOUN)
if len(query_noun) > 0:
for r in response.split(' '):
if r not in stop:
synonyms = wn.synsets(r, pos=wn.NOUN)
if len(synonyms) > 0:
total += query_noun[0].lch_similarity(synonyms[0])
return total
crowd_train['lch_similarity'] = crowd_train.apply(lch_similarity, axis=1)
# -
crowd_train[['lch_similarity', 'median_relevance']].corr()
# ### Query Length
# +
def query_length(x):
return len(x['query'].split(' '))
crowd_train['query_length'] = crowd_train.apply(query_length, axis=1)
# -
crowd_train[['query_length', 'median_relevance']].corr()
# ### Response length
# +
def response_length(x):
return len(x['product_title']) + len(x['product_description'])
crowd_train['response_length'] = crowd_train.apply(response_length, axis=1)
# -
crowd_train[['response_length', 'median_relevance']].corr()
# +
def query_synonymns_check(x):
query = x['query'].lower()
query_terms = list(set(query.split()))
response = x['product_title'].lower() + ' ' + x['product_description'].lower()
query_synonymns = []
stop = stopwords.words('english')
for q in query_terms:
for i, j in enumerate(wn.synsets(q)):
query_synonymns.extend(j.lemma_names)
keyword = False
for qsynonym in query_synonymns:
if qsynonym not in stop:
keyword = True
if response.find(qsynonym) == -1:
return 0
if keyword == False:
return 0
else:
return 1
crowd_train['query_synonyms_match_count'] = crowd_train.apply(query_synonymns_check, axis=1)
# -
crowd_train[['query_synonyms_match_count', 'median_relevance']].corr()
c = crowd_train.groupby('median_relevance')
c.get_group(4).head().relevance_variance.describe()
c.get_group(3).head().relevance_variance.describe()
c.get_group(2).head().relevance_variance.describe()
c.get_group(1).head().relevance_variance.describe()
| Kaggle-Competitions/CrowdFlower/GenerateFeatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import random
import math
import itertools
import warnings
import pickle
import gc
import sys
import matplotlib.pyplot as plt
from os.path import join, isfile
from collections import Counter
from scipy.special import gamma
warnings.filterwarnings('ignore')
np.set_printoptions(suppress=True, formatter={'float': lambda x: "{0:0.2f}".format(x)})
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:75% !important; }</style>"))
# -
mainPath = "../../data"
beacons = join(mainPath, "beacon")
ceuPath = join(beacons, "CEU")
# ### Step 1: Load Beacon, MAF, Reference and other cached variables
# CEU
beacon = pd.read_csv(join(ceuPath, "Beacon_164.txt"), index_col=0, delim_whitespace=True)
maf = pd.read_csv(join(ceuPath, "MAF.txt"), index_col=0, delim_whitespace=True)
reference = pickle.load(open(join(ceuPath, "reference.pickle"),"rb"))
# +
binary = np.logical_and(beacon.values != reference, beacon.values != "NN").astype(int)
maf.rename(columns = {'referenceAllele':'major', 'referenceAlleleFrequency':'major_freq',
'otherAllele':'minor', 'otherAlleleFrequency':'minor_freq'}, inplace = True)
beacon_people = np.arange(65)
other_people = np.arange(99)+65
all_people = np.arange(164)
# +
# Construct beacons and the victim
shuffled = np.random.permutation(all_people)
victim_ind = shuffled[0]
a_cind = shuffled[1:21]
s_cind = shuffled[21:41]
s_ind = shuffled[41:101]
s_beacon = binary[:, np.concatenate([s_ind,np.array([victim_ind])])]
#s_beacon = binary[:, s_ind]
a_control = binary[:, a_cind]
s_control = binary[:, s_cind]
victim = binary[:, victim_ind]
# -
# ### Step 2: Function definitions
# ###### SB LRT
# +
# n: Num query
a=1.6483
b=2.2876
error=0.001
def calculate_sb_delta(num_people, response, n):
DN = gamma(a + b) / (gamma(b) * (2*num_people + a + b)**a)
DN_1 = gamma(a + b) / (gamma(b) * (2*(num_people-1) + a + b)**a)
B = np.log(DN / (DN_1 * error))
C = np.log((error * DN_1 * (1 - DN)) / (DN*(1-error*DN_1)))
return n*B + C*response
def sb_lrt(victim, control_people, beacon, A, S, num_query):
control_size = control_people.shape[1]
beacon_size = beacon.shape[1]
response = beacon[A].any(axis=1)*S
# Delta
delta = calculate_sb_delta(beacon_size, response, num_query)
# Victim delta
victim_delta = np.sum(np.dot(delta, victim[A]))
# Control delta
control_delta = np.dot(delta, control_people[A])
return victim_delta, control_delta
# -
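For reference, the per-query statistic implemented in `calculate_sb_delta` above can be written out as follows (transcribed directly from the code, with $\epsilon$ the `error` term, $N$ the beacon size, $n$ the number of queries and $x$ the query response):

```latex
\Delta = nB + Cx, \qquad
B = \log\frac{D_N}{\epsilon\, D_{N-1}}, \qquad
C = \log\frac{\epsilon\, D_{N-1}\,(1 - D_N)}{D_N\,(1 - \epsilon\, D_{N-1})},
\qquad\text{where } D_N = \frac{\Gamma(a+b)}{\Gamma(b)\,(2N + a + b)^{a}} .
```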
# ###### Optimal LRT
# +
# n: Num query
def calculate_optimal_delta(num_people, response, n, maf):
DN_i = np.power((1-maf), (2*num_people))
DN_i_1 = np.power((1-maf), (2*num_people-2))
log1 = np.log(DN_i/(error*DN_i_1))
log2 = np.log((error*DN_i_1 * (1-DN_i)) / (DN_i * (1-error*DN_i_1)))
return log1 + log2*response
def optimal_lrt(victim, control_people, beacon, A, S, num_query):
control_size = control_people.shape[1]
beacon_size = beacon.shape[1]
response = beacon[A].any(axis=1)*S
maf_i = maf.iloc[A]["maf"].values + 1e-6
# Delta
delta = calculate_optimal_delta(beacon_size, response, num_query, maf_i)
# Victim delta
victim_delta = np.sum(np.dot(delta, victim[A]))
# Control delta
control_delta = np.dot(delta, control_people[A])
return victim_delta, control_delta
# -
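Similarly, the per-SNP statistic in `calculate_optimal_delta` above reads (transcribed from the code, with $f_i$ the minor allele frequency of SNP $i$ and $x_i$ the beacon response):

```latex
\delta_i = \log\frac{D_N^{i}}{\epsilon\, D_{N-1}^{i}}
 + x_i \log\frac{\epsilon\, D_{N-1}^{i}\,(1 - D_N^{i})}{D_N^{i}\,(1 - \epsilon\, D_{N-1}^{i})},
\qquad D_N^{i} = (1 - f_i)^{2N}.
```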
# ###### p-value Function
def p_value(victim_delta, control_delta):
return np.sum(control_delta <= victim_delta) / control_delta.shape[0]
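As a quick sanity check of `p_value` (a standalone sketch with synthetic deltas, not part of the analysis above): a victim delta far below the control distribution yields a small p-value, flagging likely beacon membership.

```python
import numpy as np

def p_value(victim_delta, control_delta):
    # fraction of control deltas at or below the victim's delta
    return np.sum(control_delta <= victim_delta) / control_delta.shape[0]

rng = np.random.default_rng(0)
control = rng.normal(size=1000)   # stand-in null distribution from control people
print(p_value(-3.0, control))     # near 0: victim looks like a member
print(p_value(3.0, control))      # near 1: victim indistinguishable from controls
```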
# #### Attacker Utility
def utility_attacker(ai, si, p_prev, p_current, num_query, hps):
# Gain-Loss=Utility
a_gain = hps[0]*-np.log(maf.iloc[ai]["maf"]+1e-6)/abs(np.log(1e-6)) + hps[1]*(p_prev - p_current)
a_loss = hps[2]*(1-si) + hps[3]*num_query/100
return a_gain-a_loss
def utility_sharer(ai, si, p_prevs, p_currents, hps):
# Gain-Loss=Utility
s_gain = hps[2]*(1-si)
    s_loss = (hps[0]*-np.log(maf.iloc[ai]["maf"]+1e-6)/abs(np.log(1e-6))
              + hps[1]*np.sum(p_prevs - p_currents)/len(p_prevs)
              + hps[3]*np.sum(p_currents <= 0.05)/len(p_currents))
return s_gain-s_loss
# +
# Game scenario
num_query = 100
A = np.random.choice(beacon.shape[0], num_query)
S = np.ones(num_query)
hps = np.random.uniform(low=0.9, high=1, size=(6,))
### Attacker
victim_delta, control_delta = optimal_lrt(victim, a_control, s_beacon, A, S, num_query)
p_victim = p_value(victim_delta, control_delta)
print("Victim's p-value: ",p_victim)
print("Victim delta: ", victim_delta)
print("Control delta: ", control_delta)
### Sharer
p_donors = np.zeros(s_beacon.shape[1])
for i in range(s_beacon.shape[1]):
victim_delta, control_delta = optimal_lrt(s_beacon[:, i], s_control, s_beacon, A, S, num_query)
p_donors[i] = p_value(victim_delta, control_delta)
print("Donors' p-values:\n",p_donors)
# -
# ###### Random Sequence
# Game scenario
num_query = 20
A = np.random.choice(beacon.shape[0], num_query)
S = np.random.uniform(low=0.95, high=1, size=(num_query,))
S = np.ones(num_query)
hps = np.random.uniform(low=0.9, high=1, size=(6,))
print(victim[A])
print(a_control[A])
# ##### Victim SNP instances
# +
in_victim = maf.iloc[np.where(victim)].sort_values("maf")
out_victim = maf.iloc[np.where(1-victim)].sort_values("maf")
_rarest_yes = in_victim.iloc[0:100].index.values
_rarest_no = out_victim.iloc[0:100].index.values
_common_yes = in_victim.iloc[-100:].index.values
_common_no = out_victim.iloc[-100:].index.values
_mid_yes = in_victim.iloc[len(in_victim)//2:len(in_victim)//2+100].index.values
_mid_no = out_victim.iloc[len(out_victim)//2:len(out_victim)//2+100].index.values
_common_control = np.where(np.logical_and(np.any(a_control == 1,axis=1), victim == 1))[0]
# -
# ###### Rare-Mid-Common
num_query = 23
A = np.concatenate([_rarest_yes[:3], _mid_no[80:100]])
S = np.random.uniform(low=0.95, high=1, size=(num_query,))
hps = 100*np.random.uniform(low=0.9, high=1, size=(6,))
print(victim[A[6]])
print(a_control[A[6]])
# ###### Common
num_query = 15
A = np.concatenate([_common_control[:5], _common_no[:10]])
S = np.ones(num_query)
#S = np.random.uniform(low=0.95, high=1, size=(num_query,))
hps = 100*np.random.uniform(low=0.9, high=1, size=(6,))
print(victim[A])
print(a_control[A])
# #### Example
# +
# Attacker Utility
attacker_utility = np.zeros(num_query)
# Previous p-value
p_victim_prev = 1
for i in range(num_query):
print("QUERY ", i+1)
print("---------")
# Current p-value
victim_delta, control_delta = optimal_lrt(victim, a_control, s_beacon, A[:i+1], S[:i+1], i+1)
print("Victim delta: ", victim_delta)
print("Control delta: ", control_delta)
p_victim_current = p_value(victim_delta, control_delta)
# Gain-Loss=Utility
attacker_utility[i] = utility_attacker(A[i], S[i], p_victim_prev, p_victim_current, i+1, hps)
print("U_A(",i+1,"): ", round(attacker_utility[i], 3), "\tP-prev-P-cur: ",p_victim_prev,"-",p_victim_current, "\tMAF: ", maf.iloc[A[i]]["maf"])
print()
p_victim_prev = p_victim_current
# Sharer Utility
sharer_utility = np.zeros(num_query)
# Previous p-value
p_donors_prev = np.ones(s_beacon.shape[1])
for i in range(num_query):
print("QUERY ", i+1)
print("---------")
# Current p-value
p_donors_current = np.zeros(s_beacon.shape[1])
for j in range(s_beacon.shape[1]):
victim_delta, control_delta = optimal_lrt(s_beacon[:, j], s_control, s_beacon, A[:i+1], S[:i+1], i+1)
p_donors_current[j] = p_value(victim_delta, control_delta)
sharer_utility[i] = utility_sharer(A[i], S[i], p_donors_prev, p_donors_current, hps)
print("U_S(",i+1,"): ", round(sharer_utility[i], 3), "\tPrev-Cur: ",round(np.sum(p_donors_prev),2),"-",round(np.sum(p_donors_current),2), "\tMAF: ", maf.iloc[A[i]]["maf"])
print(p_donors_current)
p_donors_prev = p_donors_current
# -
plt.plot(attacker_utility, label="Attacker")
plt.plot(sharer_utility, label="Sharer")
plt.xlabel("Query")
plt.ylabel("Utility")
plt.legend()
plt.plot(np.cumsum(attacker_utility), label="Attacker")
plt.plot(np.cumsum(sharer_utility), label="Sharer")
plt.xlabel("Query")
plt.ylabel("Utility")
plt.legend()
# ###### Optimization Trials
# +
for i in range(num_query):
print("QUERY ", i+1)
print("---------")
# Current p-value
victim_delta, control_delta = optimal_lrt(victim, a_control, s_beacon, A[:i+1], S[:i+1], i+1)
print("Victim delta: ", victim_delta)
print("Control delta: ", control_delta)
p_victim_current = p_value(victim_delta, control_delta)
# Gain-Loss=Utility
    attacker_utility[i] = utility_attacker(A[i], S[i], p_victim_prev, p_victim_current, i+1, hps)
print("U_A(",i+1,"): ", round(attacker_utility[i], 3), "\tP-prev-P-cur: ",p_victim_prev,"-",p_victim_current, "\tMAF: ", maf.iloc[A[i]]["maf"])
print()
p_victim_prev = p_victim_current
# Sharer Utility
sharer_utility = np.zeros(num_query)
# Previous p-value
p_donors_prev = np.ones(s_beacon.shape[1])
for i in range(num_query):
print("QUERY ", i+1)
print("---------")
# Current p-value
p_donors_current = np.zeros(s_beacon.shape[1])
for j in range(s_beacon.shape[1]):
victim_delta, control_delta = optimal_lrt(s_beacon[:, j], s_control, s_beacon, A[:i+1], S[:i+1], i+1)
p_donors_current[j] = p_value(victim_delta, control_delta)
sharer_utility[i] = utility_sharer(A[i], S[i], p_donors_prev, p_donors_current, hps)
print("U_S(",i+1,"): ", round(sharer_utility[i], 3), "\tPrev-Cur: ",round(np.sum(p_donors_prev),2),"-",round(np.sum(p_donors_current),2), "\tMAF: ", maf.iloc[A[i]]["maf"])
print()
p_donors_prev = p_donors_current
# -
# Game scenario
num_query = 40
rares = maf.iloc[np.where(victim)].sort_values("maf").iloc[0:1].index.values
A = np.random.choice(beacon.shape[0], num_query)
A[:len(rares)] = rares
S = np.random.uniform(low=0.95, high=1, size=(num_query,))
S = np.ones(num_query)
hps = np.random.uniform(low=0.9, high=1, size=(6,))
print(victim[A])
print(a_control[A])
rares = maf.iloc[np.where(victim)].sort_values("maf").iloc[0:1].index.values
rares
# # STASH
'''
#ternary = binary.copy()
#ternary[beacon.values=="NN"] = -1
def lrt_calculate(victim, control_people, beacon, ai, si, num_query):
victim_delta = 0
control_size = control_people.shape[1]
beacon_size = beacon.shape[1]
control_delta = np.zeros(control_size)
for i in range(num_query):
# Query the beacon
response = beacon[ai[i]].any(axis=0)*si[i]
# Victim delta
victim_delta += calculate_sb_delta(beacon_size, response, 1) * victim[ai[i]]
# Control delta
control_delta += calculate_sb_delta(beacon_size, response, 1) * control_people[ai[i]]
return victim_delta, control_delta
victim_delta = 0
a_control_delta = np.zeros(60)
for i in range(num_query):
# Query the beacon
response = s_beacon[ai[i]].any(axis=0)#*si[i]
# Victim delta
victim_delta += calculate_sb_delta(60, response, 1) * victim[ai[i]]
# Control delta
a_control_delta += calculate_sb_delta(60, response, 1) * a_control[ai[i]]
#print(victim_delta, "-->", a_control_delta)
# p-value of the victim
p_victim = np.sum(a_control_delta <= victim_delta) / 60
print(p_victim)
'''
| src/stackelberg_defence.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bIAYbYajk1w9" colab_type="text"
# <div class="alert alert-block alert-info" style="margin-top: 20px">
#
#
# | Name | Description | Date
# | :- |-------------: | :-:
# |<NAME>| Training and evaluating machine learning models - 2nd PyTorch Datasets | On 23rd of August 2019 |
# </div>
#
# # Training and evaluating machine learning models
# - Train-test split
# - k-fold Cross-Validation
# + id="GVU5-yp3N89I" colab_type="code" outputId="f9786ac0-7174-4b9b-f48d-d3eaeb07f311" executionInfo={"status": "ok", "timestamp": 1566580835489, "user_tz": 240, "elapsed": 2940, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 119}
# !pip3 install torch torchvision
# + id="8yy37hEYOEiQ" colab_type="code" outputId="3e278965-b4bc-4b42-d693-8027b3f0a2ab" executionInfo={"status": "ok", "timestamp": 1566580838332, "user_tz": 240, "elapsed": 1379, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
import numpy as np
import pandas as pd
import torch, torchvision
torch.__version__
# + id="xz642SBhDGMb" colab_type="code" outputId="9fc68b46-16f5-49e9-fa34-16c1b4cb5910" executionInfo={"status": "ok", "timestamp": 1566580839485, "user_tz": 240, "elapsed": 175, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# use GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
# + [markdown] id="WVZTmJ4CY8fM" colab_type="text"
# ## 1. Train-test split
# - Splitting train and test data in PyTorch
# + [markdown] id="7K9iC17g-N2E" colab_type="text"
# ### Import data
# - Import [epileptic seizure data](https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) from UCI ML repository
# - Split train and test data using ```random_split()```
# - Train logistic regression model with training data and evaluate results with test data
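The split below does by hand, with NumPy only, what `random_split()` does internally (a minimal sketch; the 70/30 ratio mirrors the cells that follow):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
idx = rng.permutation(n)                      # shuffle all indices once
test_size = int(n * 0.3)
test_idx, train_idx = idx[:test_size], idx[test_size:]
print(len(train_idx), len(test_idx))          # 70 30
# the two index sets are disjoint and jointly cover the dataset
assert set(train_idx) | set(test_idx) == set(range(n))
```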
# + id="fiu8hP6P_19B" colab_type="code" colab={}
class SeizureDataset(torch.utils.data.Dataset):
def __init__(self):
# import and initialize dataset
df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00388/data.csv")
df = df[df.columns[1:]]
self.X = df[df.columns[:-1]].values
self.Y = df["y"].astype("category").cat.codes.values.astype(np.int32)
def __getitem__(self, idx):
# get item by index
return self.X[idx], self.Y[idx]
def __len__(self):
# returns length of data
return len(self.X)
# + id="o7TZBFXm_seR" colab_type="code" colab={}
seizuredataset = SeizureDataset()
# + id="qTJSFTAkBO-u" colab_type="code" outputId="01779d0d-f170-4555-ebb3-03505365ce3c" executionInfo={"status": "ok", "timestamp": 1566580851104, "user_tz": 240, "elapsed": 167, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
NUM_INSTANCES = len(seizuredataset)
TEST_RATIO = 0.3
TEST_SIZE = int(NUM_INSTANCES * TEST_RATIO)
TRAIN_SIZE = NUM_INSTANCES - TEST_SIZE
print(NUM_INSTANCES, TRAIN_SIZE, TEST_SIZE)
# + id="wJX5td0owoB-" colab_type="code" outputId="6573f362-78ba-4ef5-9ed6-4b5dcb52eeec" executionInfo={"status": "ok", "timestamp": 1566580852153, "user_tz": 240, "elapsed": 193, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
train_data, test_data = torch.utils.data.random_split(seizuredataset, (TRAIN_SIZE, TEST_SIZE))
print(len(train_data), len(test_data))
# + id="mh4YPbf2Cm41" colab_type="code" colab={}
# when splitting train and test sets, data loader for each dataset should be made separately
train_loader = torch.utils.data.DataLoader(train_data, batch_size = 64, shuffle = True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size = 64, shuffle = False)
# + id="x-ICPT4BB6Sd" colab_type="code" outputId="e26f8e92-1dae-4fa0-dd2f-6b4ad3a32505" executionInfo={"status": "ok", "timestamp": 1566580861432, "user_tz": 240, "elapsed": 7815, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# logistic regression model
model = torch.nn.Linear(178, 5).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
model
# + id="bpgjzV4HCX-F" colab_type="code" outputId="c2f9a0c1-a3af-405f-e258-b8e4ce4516f2" executionInfo={"status": "ok", "timestamp": 1566580879397, "user_tz": 240, "elapsed": 14829, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 187}
num_step = len(train_loader)
for epoch in range(100):
for i, (x, y) in enumerate(train_loader):
x, y = x.float().to(device), y.long().to(device)
outputs = model(x)
loss = criterion(outputs, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (epoch + 1) % 10 == 0:
print("Epoch: {}, Loss: {:.5f}".format(epoch + 1, loss.item()))
# + id="XC1vyxwLHE2y" colab_type="code" colab={}
y_true, y_pred, y_prob = [], [], []
with torch.no_grad():
for x, y in test_loader:
# ground truth
y = list(y.numpy())
y_true += y
x = x.float().to(device)
outputs = model(x)
# predicted label
_, predicted = torch.max(outputs.data, 1)
predicted = list(predicted.cpu().numpy())
y_pred += predicted
# probability for each label
prob = list(outputs.cpu().numpy())
y_prob += prob
# + id="4rvewoaPIfd2" colab_type="code" outputId="3bea3973-6555-4372-ae80-2d6c564e26a5" executionInfo={"status": "ok", "timestamp": 1566580882939, "user_tz": 240, "elapsed": 210, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# calculating overall accuracy
num_correct = 0
for i in range(len(y_true)):
if y_true[i] == y_pred[i]:
num_correct += 1
print("Accuracy: ", num_correct/len(y_true))
# + [markdown] id="Tgf96It4fhr9" colab_type="text"
# ## 2. k-fold Cross-Validation
# - Perform k-fold cross-validation in PyTorch
# - Cross-validation could be implemented with plain NumPy, but we rely on ```skorch``` and ```sklearn``` here for ease of implementation
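For comparison, here is what the same 5-fold loop looks like written by hand with sklearn's `KFold` (a standalone sketch on synthetic data, independent of skorch and of the seizure dataset):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for the features/labels used below
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)                 # label depends only on feature 0

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(np.mean(scores))                        # high accuracy on this easy task
```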
# + id="SFXadkSTeN4r" colab_type="code" outputId="95e68a69-183a-4b67-8dd0-36ebbfe99955" executionInfo={"status": "ok", "timestamp": 1566580887075, "user_tz": 240, "elapsed": 2452, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 204}
# !pip install -U skorch
# + id="Iv30GdmFg1om" colab_type="code" colab={}
from skorch import NeuralNetClassifier
from sklearn.model_selection import cross_val_score
# + id="znOfjXfFJKr7" colab_type="code" outputId="f5150d92-8b28-4377-e6e3-c064282783a9" executionInfo={"status": "ok", "timestamp": 1566580890697, "user_tz": 240, "elapsed": 992, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# import data
df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00388/data.csv")
df = df[df.columns[1:]]
X_data = df[df.columns[:-1]].values.astype(np.float32)
y_data = df["y"].astype("category").cat.codes.values.astype(np.int64)
print(X_data.shape, y_data.shape)
# + id="m56oNfIDg37o" colab_type="code" outputId="4b2f0d24-5f6a-4f7a-db15-5ef7de63a08a" executionInfo={"status": "ok", "timestamp": 1566580896578, "user_tz": 240, "elapsed": 5419, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
# generate skorch high-level classifier and perform 5-fold cross validation using cross_val_score()
# passing the module class (with module__ kwargs) lets each CV fold start from freshly initialized weights
logistic = NeuralNetClassifier(torch.nn.Linear, module__in_features=178,
                               module__out_features=5, max_epochs=10, lr=1e-2)
scores = cross_val_score(logistic, X_data, y_data, cv = 5, scoring = "accuracy")
# + id="VIfIswG4o6Mt" colab_type="code" outputId="8f1234f3-e704-44bc-ccd8-123983de69b9" executionInfo={"status": "ok", "timestamp": 1566580897729, "user_tz": 240, "elapsed": 160, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBrdPXZmzhldkZcu5l9nrnO7t-Ls96No7O8kRuZ=s64", "userId": "14585138350013583795"}} colab={"base_uri": "https://localhost:8080/", "height": 51}
# print out results
print(scores)
print(scores.mean(), scores.std())
| PyTorch Exercise/PyTorch Basics/pytorch-datasets-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Algorithms for intelligent processing of large data volumes
# ## Homework #4 - The k-means method, feature preprocessing
#
# ###### <hr\>
# **General information**
#
# **Deadline:** 28 November 2018, 06:00 <br\>
# **Late penalty:** -2 points after 06:00 on 28 November, -4 points after 06:00 on 5 December, -6 points after 06:00 on 12 December, -8 points after 19 December
#
# When submitting the homework, include your last name in the file name
# The homework must be submitted as a link to your github repository in slack to @alkhamush
#
# In slack, create a task in a private chat:
# /todo LastName FirstName link-to-github @alkhamush
# Example:
# /todo Ksenia Stroykova https://github.com/stroykova/spheremailru/stroykova_hw1.ipynb @alkhamush
# Additionally, just send the link in a private slack chat
#
# Use this Ipython Notebook as the template for your homework.
# # K-means implementation
#
# Using the groundwork above, implement the k-means method.
# At initialization, you must specify the number of clusters, the distance function between clusters (for the original k-means, the Euclidean distance) and the initial state of the random number generator.
#
# After training, the attributes of the `Kmeans` class should include
# * Cluster labels for the objects
# * Coordinates of the cluster centroids
#
# k-means is a **clustering** algorithm, not a classification one, so its `.predict()` method is not strictly necessary, but it can return the label of the nearest cluster for each object.
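For orientation, sklearn's `KMeans` exposes exactly these two post-fit attributes, plus a nearest-cluster `predict` (a minimal sketch on toy points):

```python
import numpy as np
from sklearn.cluster import KMeans

# two well-separated blobs of two points each
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                 # cluster label per object
print(km.cluster_centers_)        # centroid coordinates
print(km.predict([[4.8, 5.2]]))   # nearest-cluster label for a new point
```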
# ###### Task 1 (2 points)
# Implement the k-means method. The task counts as completed if your implementation runs faster than the implementation from sklearn.
#
# The theory needed for tasks 2 and 3 is left for self-study. There is little of it and it is quite simple.
#
# ###### Task 2 (2 points)
# Implement a MiniBatchKMeans class that is a subclass of Kmeans.
#
# ###### Task 3 (2 points)
# Turn k-means into k-means++. To do this, implement a method in the Kmeans class that initializes "better" centroid values. For the k-means++ method to be used, the string value 'k-means' must be passed in the init parameter (the default is 'random').
#
# ###### Task 4 (2 points)
# In the "Checking the method's correctness" section, draw plots that show how the algorithm's running time depends on the number of samples. The plots should be drawn for the different combinations of the algorithm's implementations (k-means, k-means++, k-means with MiniBatchKMeans, k-means++ with MiniBatchKMeans). 5-10 points per plot are sufficient.
#
# ###### Task 5 (2 points)
# In the "Applying K-means to real data" section, compare the different k-means implementations (k-means, k-means++, k-means with MiniBatchKMeans, k-means++ with MiniBatchKMeans). Then write a conclusion explaining why one algorithm turned out to be better than the others, or why no best algorithm was identified.
#
# **Penalty points:**
#
# 1. Not following PEP8: -1 point
# 2. Missing last name in the script name (the script should be named by analogy with stroykova_hw4.ipynb): -1 point
# 3. All cells must be executed, so that the output of each command is already visible in git. Otherwise -1 point
# 4. Use this file as the template when preparing the homework. Do not delete or modify the provided code and text. Otherwise -1 point
# <hr\>
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import random
import math
from sklearn.metrics import silhouette_score
import time
import sys
# %matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (12,5)
# %load_ext pycodestyle_magic
# +
# #%%pycodestyle
class MyKmeans:
def __init__(self, n_clusters=8, metric='euclidean', max_iter=300,
random_state=None, init='random', min_dist=1e-6):
"""
Инициализация метода
:k - количество кластеров
:metric - функция расстояния между объектами
:max_iter - максиальное количество итераций
:random_state - seed для инициализации генератора случайных чисел
"""
self.k = n_clusters
self.random_state = random_state
self.metric = metric
self.max_iter = max_iter
if init == 'random':
self.init = self.init_random
elif init == 'k-means':
self.init = self.init_k_means
        else:
            raise ValueError('{} is not a possible value for init'.format(init))
self.min_dist = min_dist
self.min_float = sys.float_info.min
def init_random(self, X):
        # initialization analogous to sklearn KMeans:
        # "choose k observations (rows) at random from data"
        # a plain np.random.choice is not enough, because
        # the centroids must not coincide
positions = np.arange(X.shape[0])
np.random.shuffle(positions)
        # note: these are not self.labels
labels = positions[:self.k]
return X[labels]
def init_k_means(self, X):
centroids = np.empty((self.k, X.shape[1]))
position = np.random.choice(X.shape[0])
centroids[0] = X[position]
for i in range(1, self.k):
            dist = self.euclidean_distance(X, centroids[:i, :])
            # euclidean_distance already returns squared distances,
            # i.e. exactly the D(x)^2 weighting that k-means++ uses
            min_dist = dist.min(axis=1)
            weight = min_dist / min_dist.sum()
            # no need to check for coincidence with the
            # old centroids - their weight p is 0
            centroids[i] = X[np.random.choice(X.shape[0], p=weight)]
return centroids
def move_centroids(self, X, prev_centroids):
dist = self.euclidean_distance(X, prev_centroids)
        # note: these are not self.labels
labels = np.argmin(dist, axis=1)
new_centroids = [np.mean(X[labels == i], axis=0)
for i in range(self.k)]
return np.array(new_centroids)
    def euclidean_distance(self, X, Y):
        # pairwise squared Euclidean distances via ||x-y||^2 = x^2 - 2xy + y^2
        # (squared distances suffice for argmin-based cluster assignment)
        x_sq = (X ** 2).sum(axis=1).reshape(-1, 1)
        y_sq = (Y ** 2).sum(axis=1)
        xy = X @ Y.T
        return x_sq - 2 * xy + y_sq
def dots_distance(self, a):
return math.sqrt((a ** 2).sum() + self.min_float)
def fit(self, X, y=None):
"""
Процедура обучения k-means
"""
# Инициализация генератора случайных чисел
np.random.seed(self.random_state)
# Массив с метками кластеров для каждого объекта из X
self.labels = np.empty(X.shape[0])
# Массив с центройдами кластеров
self.centroids = np.empty((self.k, X.shape[1]))
# Your Code Here
# ...
self.centroids = self.init(X)
for _ in range(self.max_iter):
new_centroids = self.move_centroids(X, self.centroids)
if self.dots_distance(new_centroids
- self.centroids) < self.min_dist:
break
else:
self.centroids = new_centroids
        # reset self.labels to make it clear whether predict
        # has been run for this fit or not
self.labels = None
return self
def predict(self, X, y=None):
"""
Процедура предсказания кластера
Возвращает метку ближайшего кластера для каждого объекта
"""
dist = self.euclidean_distance(X, self.centroids)
self.labels = np.argmin(dist, axis=1)
return self
class MiniBatchKMeans(MyKmeans):
def __init__(self, n_clusters=8, metric='euclidean', max_iter=300,
random_state=None, init='random', min_dist=1e-6,
batch_size=100):
MyKmeans.__init__(self, n_clusters, metric, max_iter,
random_state, init, min_dist)
self.batch_size = batch_size
def move_centroids(self, X, prev_centroids, labels, n_batch):
n_min = (n_batch - 1) * self.batch_size
n_max = n_batch * self.batch_size
if n_max >= X.shape[0]:
n_max = X.shape[0]
mb_positions = range(n_min, n_max)
mb = X[mb_positions]
dist = self.euclidean_distance(mb, prev_centroids)
labels[mb_positions] = np.argmin(dist, axis=1)
new_centroids = [np.mean(X[labels == i], axis=0)
for i in range(self.k)]
return np.array(new_centroids)
def fit(self, X, y=None):
np.random.seed(self.random_state)
self.labels = np.empty(X.shape[0])
self.centroids = self.init(X)
dist = self.euclidean_distance(X, self.centroids)
labels = np.argmin(dist, axis=1)
n_batch = 1
self.n_batch_max = X.shape[0] // self.batch_size + 1
        for _ in range(self.max_iter):
            new_centroids = self.move_centroids(X, self.centroids,
                                                labels, n_batch)
            # cycle through the mini-batches so successive
            # iterations see different parts of the data
            n_batch = n_batch + 1 if n_batch < self.n_batch_max else 1
            if self.dots_distance(new_centroids
                                  - self.centroids) < self.min_dist:
                break
            else:
                self.centroids = new_centroids
self.labels = None
return self
# -
# ### Checking the method's correctness
#
# Before applying the algorithm to real data, it should be tried out on simple "toy" data.
#
# If the algorithm is implemented correctly, the method should perfectly split the data below into 3 clusters. Verify this.
#
# ATTENTION! The check must be carried out on all implementations, otherwise an implementation will not be accepted!
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=100, n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
plt.scatter(X[:,0], X[:, 1], c=y)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
plt.scatter(X_sc[:,0], X_sc[:, 1], c=y)
# Check your implementation on simple data (without this step the homework does not count as completed).
# DO NOT DELETE THE COMMENTS!
# +
## MyKmeans functionality check
# +
random_states = range(100, 1100, 100)
best_score = 0
best_random_state = 0
best_centroids = []
best_labels = []
for random_state in random_states:
model = MyKmeans(n_clusters=3,
random_state=random_state)
model.fit(X_sc)
model.predict(X_sc)
score = silhouette_score(X_sc, model.labels)
if score > best_score:
best_score = score
best_random_state = random_state
best_centroids = model.centroids
best_labels = model.labels
print('best silhouette score {}'.format(best_score))
centroids = best_centroids
labels = best_labels
# -
plt.scatter(X_sc[:,0], X_sc[:, 1], c=labels)
plt.scatter(centroids[:,0], centroids[:,1], s=100)
# +
## MyKmeans++ functionality check
# +
random_states = range(100, 1100, 100)
best_score = 0
best_random_state = 0
best_centroids = []
best_labels = []
for random_state in random_states:
model = MyKmeans(n_clusters=3,
init='k-means',
random_state=random_state)
model.fit(X_sc)
model.predict(X_sc)
score = silhouette_score(X_sc, model.labels)
if score > best_score:
best_score = score
best_random_state = random_state
best_centroids = model.centroids
best_labels = model.labels
print('best silhouette score {}'.format(best_score))
centroids = best_centroids
labels = best_labels
# -
plt.scatter(X_sc[:,0], X_sc[:, 1], c=labels)
plt.scatter(centroids[:,0], centroids[:,1], s=100)
# +
## MyKmeans with MiniBatchMyKmeans functionality check
# +
random_states = range(100, 1100, 100)
best_score = 0
best_random_state = 0
best_centroids = []
best_labels = []
for random_state in random_states:
model = MiniBatchKMeans(n_clusters=3,
random_state=random_state,
batch_size=10)
model.fit(X_sc)
model.predict(X_sc)
score = silhouette_score(X_sc, model.labels)
if score > best_score:
best_score = score
best_random_state = random_state
best_centroids = model.centroids
best_labels = model.labels
print('best silhouette score {}'.format(best_score))
centroids = best_centroids
labels = best_labels
# -
plt.scatter(X_sc[:,0], X_sc[:, 1], c=labels)
plt.scatter(centroids[:,0], centroids[:,1], s=100)
# +
## MyKmeans++ with MiniBatchMyKmeans functionality check
# +
random_states = range(100, 1100, 100)
best_score = 0
best_random_state = 0
best_centroids = []
best_labels = []
for random_state in random_states:
model = MiniBatchKMeans(n_clusters=3,
init='k-means',
random_state=random_state,
batch_size=10)
model.fit(X_sc)
model.predict(X_sc)
score = silhouette_score(X_sc, model.labels)
if score > best_score:
best_score = score
best_random_state = random_state
best_centroids = model.centroids
best_labels = model.labels
print('best silhouette score {}'.format(best_score))
centroids = best_centroids
labels = best_labels
# -
plt.scatter(X_sc[:,0], X_sc[:, 1], c=labels)
plt.scatter(centroids[:,0], centroids[:,1], s=100)
# +
## Running time of the sklearn KMeans algorithm
# +
samples = np.logspace(2, 4, 10)
times = np.empty(samples.size)
nexp = 10
for i, samp in enumerate(samples):
X, y = make_blobs(n_samples=int(samp), n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
t_exp = []
for _ in range(nexp):
model = KMeans(n_clusters=3)
t_begin = time.time()
model.fit(X_sc)
t_exp.append(time.time() - t_begin)
times[i] = np.mean(t_exp)
print('mean calculation time is {}'.format(times.mean()))
plt.xscale('log')
plt.plot(samples, times)
# +
## Running time of the MyKmeans algorithm
# +
samples = np.logspace(2, 4, 10)
times = np.empty(samples.size)
nexp = 10
for i, samp in enumerate(samples):
X, y = make_blobs(n_samples=int(samp), n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
t_exp = []
for j in range(nexp):
model = MyKmeans(n_clusters=3)
t_begin = time.time()
model.fit(X_sc)
t_exp.append(time.time() - t_begin)
times[i] = np.mean(t_exp)
print('mean calculation time is {}'.format(times.mean()))
plt.xscale('log')
plt.plot(samples, times)
# +
## Running time of the MyKmeans++ algorithm
# +
samples = np.logspace(2, 4, 10)
times = np.empty(samples.size)
nexp = 10
for i, samp in enumerate(samples):
X, y = make_blobs(n_samples=int(samp), n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
t_exp = []
for _ in range(nexp):
model = MyKmeans(n_clusters=3, init='k-means')
t_begin = time.time()
model.fit(X_sc)
t_exp.append(time.time() - t_begin)
times[i] = np.mean(t_exp)
print('mean calculation time is {}'.format(times.mean()))
plt.xscale('log')
plt.plot(samples, times)
# +
## Running time of MyKmeans with MiniBatchMyKmeans
# +
samples = np.logspace(2, 4, 10)
times = np.empty(samples.size)
nexp = 10
for i, samp in enumerate(samples):
X, y = make_blobs(n_samples=int(samp), n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
t_exp = []
for _ in range(nexp):
model = MiniBatchKMeans(n_clusters=3,
batch_size=int(samp / 20))
t_begin = time.time()
model.fit(X_sc)
t_exp.append(time.time() - t_begin)
times[i] = np.mean(t_exp)
print('mean calculation time is {}'.format(times.mean()))
plt.xscale('log')
plt.plot(samples, times)
# +
## Running time of MyKmeans++ with MiniBatchMyKmeans
# +
samples = np.logspace(2, 4, 10)
times = np.empty(samples.size)
nexp = 10
for i, samp in enumerate(samples):
X, y = make_blobs(n_samples=int(samp), n_features=2, centers=3,
cluster_std=1, center_box=(-10.0, 10.0),
shuffle=False, random_state=1234)
scaler = StandardScaler()
X_sc = scaler.fit_transform(X)
t_exp = []
for _ in range(nexp):
model = MiniBatchKMeans(n_clusters=3, init='k-means++',
batch_size=int(samp / 20))
t_begin = time.time()
model.fit(X_sc)
t_exp.append(time.time() - t_begin)
times[i] = np.mean(t_exp)
print('mean calculation time is {}'.format(times.mean()))
plt.xscale('log')
plt.plot(samples, times)
# -
# # Applying K-means to real data
# Load the [data](https://github.com/brenden17/sklearnlab/blob/master/facebook/snsdata.csv), which contains a description of the interests listed in the profiles of US high-school students. (The assignment does not count as completed without this part.)
# NOTE! The check must be carried out on all implementations, otherwise the implementation will not be accepted!
df_sns = pd.read_csv('snsdata.csv', sep=',')
df_sns.head()
# The data are organized as follows:
# * Graduation year
# * Gender
# * Age
# * Number of friends
# * 36 keywords that occur in the facebook profile (interests, communities, meetups)
# * Drop all features except the 36 keywords.
# * Normalize the data: from each column, subtract its mean and divide by its standard deviation.
# * Use the k-means method to extract 9 clusters.
# * Try to interpret each cluster by analyzing the resulting centroids. (Some clusters may be very large or very small, and these interpret poorly.)
# DO NOT DELETE THE COMMENTS!
print('Table size: {}'.format(df_sns.shape))
df_sns = df_sns.drop(['gradyear', 'gender', 'age', 'friends'], axis=1)
print('Table size: {}'.format(df_sns.shape))
df_sns.head()
df_sns.describe().T
X = df_sns.values
X_sc = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
# +
## MyKMeans
# -
# %%time
model = MyKmeans(n_clusters=9, random_state=123)
model.fit(X_sc)
model.predict(X_sc)
labels = model.labels
centroids = model.centroids
df_sns.loc[:, 'label'] = labels
df_sns_words = df_sns.iloc[:, 5:]
clusters = df_sns_words.groupby('label').mean()
for k in range(9):
print('='*10)
print('cluster label {}'.format(k))
print(clusters.loc[k].sort_values(ascending=False).head(5))
df_sns = df_sns.drop('label', axis=1)
# +
## MyKMeans++
# -
# %%time
model = MyKmeans(n_clusters=9, init='k-means', random_state=123)
model.fit(X_sc)
model.predict(X_sc)
labels = model.labels
centroids = model.centroids
df_sns.loc[:, 'label'] = labels
df_sns_words = df_sns.iloc[:, 5:]
clusters = df_sns_words.groupby('label').mean()
for k in range(9):
print('='*10)
print('cluster label {}'.format(k))
print(clusters.loc[k].sort_values(ascending=False).head(5))
df_sns = df_sns.drop('label', axis=1)
# +
## MyKMeans with MiniBatchMyKMeans
# -
# %%time
model = MiniBatchKMeans(n_clusters=9, random_state=123)
model.fit(X_sc)
model.predict(X_sc)
labels = model.labels_
centroids = model.cluster_centers_
df_sns.loc[:, 'label'] = labels
df_sns_words = df_sns.iloc[:, 5:]
clusters = df_sns_words.groupby('label').mean()
for k in range(9):
print('='*10)
print('cluster label {}'.format(k))
print(clusters.loc[k].sort_values(ascending=False).head(5))
df_sns = df_sns.drop('label', axis=1)
# +
## MyKMeans++ with MiniBatchMyKMeans
# -
# %%time
model = MiniBatchKMeans(n_clusters=9, init='k-means++', random_state=123)
model.fit(X_sc)
model.predict(X_sc)
labels = model.labels_
centroids = model.cluster_centers_
df_sns.loc[:, 'label'] = labels
df_sns_words = df_sns.iloc[:, 5:]
clusters = df_sns_words.groupby('label').mean()
for k in range(9):
print('='*10)
print('cluster label {}'.format(k))
print(clusters.loc[k].sort_values(ascending=False).head(5))
df_sns = df_sns.drop('label', axis=1)
# +
## Conclusion
# -
# The results are quite similar across the different variants. Generally speaking, KMeans is a fairly random method, and a proper comparison would require gathering more statistics. It is not obvious which criteria to use here to decide what is better and what is worse. The silhouette score hung when I tried to apply it. As for speed, the mini-batch variants are clearly much faster than the full-batch ones.
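# A note on the silhouette score: on tens of thousands of samples the full pairwise computation is expensive, but `sklearn.metrics.silhouette_score` accepts a `sample_size` argument that evaluates the score on a random subsample. A sketch on synthetic data (the real interests matrix is replaced here by `make_blobs` output):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the scaled interests matrix (hypothetical data).
X, _ = make_blobs(n_samples=10000, n_features=10, centers=9, random_state=0)

model = KMeans(n_clusters=9, n_init=10, random_state=0).fit(X)

# sample_size evaluates the silhouette on a random subsample, which keeps it fast.
score = silhouette_score(X, model.labels_, sample_size=2000, random_state=0)
print('silhouette on a subsample: {:.3f}'.format(score))
```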
| AIPLAD/hw/4/dokukin_hw4.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# notebook_metadata_filter: all
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# language_info:
# codemirror_mode:
# name: ipython
# version: 3
# file_extension: .py
# mimetype: text/x-python
# name: python
# nbconvert_exporter: python
# pygments_lexer: ipython3
# version: 3.7.6
# ---
# # [Uninsured Idiosyncratic Risk and Aggregate Saving](https://www.jstor.org/stable/2118417?seq=1#metadata_info_tab_contents)
# * Author: <NAME>
# * Source: The Quarterly Journal of Economics, Vol. 109, No. 3 (Aug., 1994), pp. 659-684
# * Notebook by Zixuan Huang and Mingzuo Sun
# * 2019 Fall
# This notebook uses the [EconForge/Dolark](http://www.econforge.org/dolark/) toolkit to describe the results and reproduce the tables in the linked paper.
#
# __NOTES:__ This is a preliminary draft. This replication depends on the current master branches of $\texttt{Dolo}$ and $\texttt{Dolark}$.
# +
# Set up
# Before this setup, please check "README.md" and install what is needed to run the following code.
from dolo import *
import dolark
from dolark import HModel # The model is written in yaml file, and HModel reads the yaml file.
from dolark.equilibrium import find_steady_state
from dolark.perturbation import perturb #find variations
from dolo import time_iteration, improved_time_iteration #time iteration is backward induction
from matplotlib import pyplot as plt
import numpy as np
# -
# ## Abstract
# * The paper modifies the standard growth model to include precautionary saving motives and liquidity constraints.
# * The paper examines the impact of the introduction of a particular kind of uninsurable idiosyncratic risk on the aggregate saving rate; the importance of asset trading to individuals; and the relative inequality of the wealth and income distributions.
# ## Introduction
#
# ### Paper's Goals
# * To provide an exposition of models whose aggregate behavior is the result of market interaction among a large number of agents subject to idiosyncratic shocks.
# > * This class of models contrasts with representative agent models where individual dynamics and uncertainty coincide with aggregate dynamics and uncertainty. <br/>
# > * This exposition is built around the standard growth model of Brock and Mirman (1972), modified to include a role for uninsured idiosyncratic risk and a borrowing constraint.
# * To use such a model to study the quantitative importance of individual risk for aggregate saving. <br/>
#
#
# ### Key features
# * Endogenous heterogeneity
# * Aggregation
# * Infinite horizons
# * Exogenous borrowing constraint
# * General equilibrium. i.e. interest rate is endogenously determined since in a steady state equilibrium the capital per capita must equal the per capita asset holdings of consumers, and the interest rate must equal the net marginal product of capital.
# ## Related Literature
# * The Aiyagari model originates from the Bewley model and a subsequent literature (Zeldes (1989), Deaton (1991), Carroll (1992)), and puts these kinds of models into a general equilibrium context. These models all share the same key components mentioned in the previous part, and they are used to study the following topics:
# > * How much of observed wealth inequality does a particular choice of uninsurable idiosyncratic income uncertainty explain? <br/>
# > * In this model, what is the fraction of aggregate savings due to the precautionary motive? <br/>
# > * In this model, what are the redistributional implications of various policies?
#
# ## Model
# ### The Individual's Problem
#
# \begin{split}
# &\max E_0\left(\sum_{t=0}^\infty \beta^t U(c_t)\right)\\
# &\text{s.t.}\\
# &c_t+a_{t+1}=wl_{t}+(1+r)a_t \\
# &c_t\geq0\\
# &a_t\geq-\phi
# \end{split}
#
#
# where $\phi$ (if positive) is the limit on borrowing; $l_t$ is assumed to be i.i.d. with bounded support $[l_{min},l_{max}]$, where $l_{min}>0$; $w$ and $r$ denote the wage and the interest rate, respectively.
#
# * $\hat{a}_t\equiv a_t+\phi$
# * $z_t \equiv wl_t+(1+r)\hat{a}_t-r\phi$: the agent's total resources at date $t$.
# * Then the Bellman equation is as follows:
# $$
# \begin{split}
# V(z_t,\phi,w,r) \equiv \underset{\hat{a}_{t+1}}{\max}\left(U(z_t-\hat{a}_{t+1})+\beta \int V(z_{t+1},\phi,w,r)\ dF(l_{t+1}) \right)
# \end{split}
# $$
# * Euler equation:
#
# \begin{split}
# U^\prime (z_t-\hat{a}_{t+1})=\beta(1+r)\int U^\prime (z_{t+1}-\hat{a}_{t+2})\ dF(l_{t+1})
# \end{split}
# * Decision rule: $\hat{a}_{t+1}=A(z_t,\phi,w,r)$
# * Law of transition: $z_{t+1}=wl_{t+1}+(1+r)A(z_t,\phi,w,r)-r\phi$
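# The Bellman equation above can be illustrated with a minimal value-function-iteration sketch. This is illustrative only, not the paper's or Dolark's solver: it assumes log utility, $\phi=0$, a two-state i.i.d. labor shock, and hypothetical fixed prices `w` and `r`.

```python
import numpy as np

beta, w, r = 0.96, 1.0, 0.02
l_vals, l_prob = np.array([0.5, 1.5]), np.array([0.5, 0.5])  # E[l] = 1

a_grid = np.linspace(0.0, 20.0, 200)   # choices of a_{t+1} (a' >= -phi = 0)
z_grid = np.linspace(0.1, 25.0, 200)   # total resources z_t

# Flow utility U(z - a') for every (z, a') pair; -inf where c <= 0.
c = z_grid[:, None] - a_grid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

# z' = w*l' + (1+r)*a' for every (a', l') pair.
z_next = w * l_vals[None, :] + (1 + r) * a_grid[:, None]

V = np.zeros(z_grid.size)
for _ in range(2000):
    EV = np.interp(z_next, z_grid, V) @ l_prob   # E[V(z')] for each choice a'
    val = u + beta * EV[None, :]                 # right-hand side of the Bellman equation
    V_new = val.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# Decision rule a_{t+1} = A(z_t, phi, w, r) evaluated on the grid.
policy = a_grid[val.argmax(axis=1)]
```

# The resulting `policy` array is the savings rule $\hat{a}_{t+1}=A(z_t,\phi,w,r)$ on the grid; it is non-decreasing in total resources, as the concavity of the problem suggests.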
# ### Firm's problem
# \begin{split}
# \max F(K,L)-wL-rK
# \end{split}
#
#
#
# where $K$ is the aggregate capital, $L$ is the aggregate labor, $F(K,L)$ is the production function.
# ### General Equilibrium
# In the steady state, variables are time invariant and all markets clear, i.e.,
# * $F_K(K,L) = r+\delta $
# * $F_L(K,L) = w$
# * $\int l_i di = L$
# * $\int a_i di = K$
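# With the Cobb-Douglas technology $F(K,L) = K^\alpha L^{1-\alpha}$ used below, the first two conditions pin down factor prices as functions of $K/L$. A small sketch ($\alpha$ and $\delta$ taken from the calibration table; the value of $K$ is made up):

```python
alpha, delta = 0.36, 0.08  # capital share and depreciation rate

def prices(K, L=1.0):
    """Factor prices implied by the firm's first-order conditions."""
    r = alpha * (K / L) ** (alpha - 1) - delta  # F_K(K, L) = r + delta
    w = (1 - alpha) * (K / L) ** alpha          # F_L(K, L) = w
    return r, w

r0, w0 = prices(K=6.0)
```

# By Euler's theorem, factor payments exhaust output: $wL + (r+\delta)K = F(K,L)$, which is a quick check on the two formulas.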
# +
# model is written in .yaml file
# HModel reads the yaml file
aggmodel = HModel('Aiyagari.yaml')
# check features of the model
aggmodel.features
# -
# ## Model Specification, Parameterization, and Computation
#
# ### Model specification and parameters
# | Parameter | Description | Value |
# |:------:| ------ |:------:|
# |$\beta$ | Time Preference Factor | 0.96 |
# | $\delta$ | Depreciation Rate | 0.08 |
# | $\alpha$ | Capital Share | 0.36 |
# | $\phi$ | Borrowing Limit | 0 |
# | $\mu$ | Risk Aversion Coefficient | {1,3,5} |
# | $\rho$ | Serial Correlation of Labor Shocks | {0,0.3,0.6,0.9} |
# | $\sigma$ | Variance of Labor Shocks | {0.2,0.4} |
#
#
#
# * Production function: <NAME> with the capital share taken to be $\alpha$
# \begin{split}
# F(K,L) = K^\alpha L^{1-\alpha}
# \end{split}
# * Utility function: CRRA with the relative risk aversion coefficient $\mu$
# * Labor endowment shocks:
# $$
# \begin{split}
# \log(l_t)=\rho\log(l_{t-1})+\sigma(1-\rho^2)^{\frac{1}{2}}\epsilon_{t}, \ \epsilon_t \sim N(0,1)
# \end{split}
# $$
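# Internally, $\texttt{Dolo}$/$\texttt{Dolark}$ approximate this AR(1) process with a small discrete Markov chain (three states by default). The idea can be sketched with a Tauchen-style discretization; this is illustrative only, not Dolo's exact routine, and assumes scipy is available:

```python
import numpy as np
from scipy.stats import norm

def tauchen(rho, sigma, n=3, m=3.0):
    """Tauchen-style discretization of
    log(l_t) = rho*log(l_{t-1}) + sigma*sqrt(1-rho^2)*eps_t, eps_t ~ N(0,1).
    The stationary standard deviation of log(l) is sigma."""
    sigma_e = sigma * np.sqrt(1.0 - rho ** 2)     # innovation std
    grid = np.linspace(-m * sigma, m * sigma, n)  # grid for log(l)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        z = (grid - rho * grid[i]) / sigma_e
        h = step / (2.0 * sigma_e)
        P[i] = norm.cdf(z + h) - norm.cdf(z - h)
        P[i, 0] = norm.cdf(z[0] + h)              # mass below the first node
        P[i, -1] = 1.0 - norm.cdf(z[-1] - h)      # mass above the last node
    return np.exp(grid), P

l_grid, P = tauchen(rho=0.9, sigma=0.2)
```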
#
# ### Computation
# After importing the model written in .yaml, we can calculate the equilibrium by using $\texttt{Dolark}$
# baseline case: the coefficient of risk aversion is 1, the serial correlation is 0.9, and the variance of labor shocks is 0.04
# the other parameter values are the same as those in the parameter table
eq = find_steady_state(aggmodel)
# ## Key Results
# data frame of the steady state
df = eq.as_df()
# +
# plot the relationship between assets this period and next period
# altair plots a graph showing that even if people have the same level of wealth, their decisions still differ due to the idiosyncratic labor income shocks
import altair as alt
import pandas as pd
df = eq.as_df()
alt.Chart(df).mark_line().encode(
x = alt.X('a', axis = alt.Axis(title='Current Assets')),
y = alt.Y('i', axis=alt.Axis(title='Next Period Assets'))
)
# -
# ### Aggregate Saving
# We first calculate the aggregate saving rate under the baseline parameters, where $\rho = 0.9$, $\mu = 1$, and $\sigma = 0.2$.
# extract variables from the steady state solution
a = df['a']
r = df['r']
w = df['w']
e = df['e']
i = df['i']
μ = df['μ']
# +
# calculate consumption
c = [(1+r[j])*a[j] + w[j]*np.exp(e[j]) - i[j] for j in range(len(df))]
# calculate income
income = [(r[j]+0.08)*a[j] + w[j]*np.exp(e[j]) for j in range(len(df))] #depreciation is 8%
# -
# aggregate consumption and income
agg_c = np.inner(c,μ)
agg_inc = np.inner(income,μ)
# aggregate consumption and income as an output
print(agg_c)
print(agg_inc)
# saving rate
# risk aversion: 1; serial correlation: 0.9; variance of labor shocks: 0.04
saving = 1 - agg_c/agg_inc
saving
# So you can see the aggregate saving rate is 24.38% under the baseline parameters. Comparably, Aiyagari (1994) calculates this number as 24.14%.
#
# Now, we calculate all the other parameter combinations as Aiyagari (1994) does, and compare the two.
# +
# Be patient. This could take a while. The long wait will be forgotten once the meal is cooked.
rows = [] # create a place to save results
rho_values = np.linspace(0, 0.9, 4) #change the serial correlation coefficient "rho" in {0, 0.3, 0.6, 0.9}
sig_values = np.linspace(0.2, 0.4, 2) #change the variance of labor shocks "sig" in {0.2, 0.4}
epsilon_values = np.linspace(1, 5, 3) #change the coefficient of risk aversion {1,3,5}
# recall that in the yaml file, there are individual and the aggregate part of the model
# .model.set_calibration enables you to change calibration parameters in the individual part of the yaml model
# .set_calibration enables you to change the calibration parameters in the aggregate part of the model
# following Aiyagari, we only change parameters in the individual part. But you can definitely play with different calibrations in the aggregate part
for l in epsilon_values:
aggmodel.model.set_calibration( epsilon = l)
for n in sig_values:
aggmodel.model.set_calibration( sig = n )
for m in rho_values:
aggmodel.model.set_calibration( rho=m )
eq = find_steady_state(aggmodel)
df = eq.as_df()
a = df['a']
r = df['r']
w = df['w']
e = df['e']
μ = df['μ']
i = df['i']
# calculate consumption
c = [(1+r[j])*a[j] + w[j]*np.exp(e[j]) - i[j] for j in range(len(df))]
# calculate income
income = [(r[j]+0.08)*a[j] + w[j]*np.exp(e[j]) for j in range(len(df))] # depreciation is 8%
# aggregate consumption and income
agg_c = np.inner(c,μ)
agg_inc = np.inner(income,μ)
saving = (1 - agg_c/agg_inc)*100 #convert to %
saving_rate = float("%.2f" % saving) #with 2 decimals
rows.append((l, n, m, saving_rate)) # append the results in each round to the previous results
# +
# Now we want to create data frames both for the results we calculated and for the results calculated by Aiyagari (1994)
# Import modules
import pandas as pd
# Data frame of the results we calculated
df1 = pd.DataFrame(rows)
df1.columns = ['Risk Averse Coefficient', 'Variance of Labor Shocks', 'Serial Correlation', 'Saving Rate'] # add names for columns
# Now we want to create a data frame for results in Aiyagari(1994)
# First, import the excel file containing Aiyagari(1994)'s results and call it xls_file
xls_file = pd.ExcelFile('Data/Aiyagari_SavingRate.xlsx')
# Load the xls file's Sheet1 as a dataframe
df2 = xls_file.parse('Sheet1')
# Merge the two data frame, and let the two share the same first three columns
df3 = pd.merge(df1,df2, on=['Risk Averse Coefficient','Variance of Labor Shocks', 'Serial Correlation'])
# Then we are able to see the results
df3.head()
# +
# Tabulate the data frame, convert it to a markdown table, and write it in "Table_SavingRate.md"
# "Table_SavingRate.md" is loated in the same directory as this notebook.
from tabulate import tabulate
headers = ["Risk Averse Coefficient", "Variance of Labor Shocks", "Serial Correlation", "Saving Rate","Saving Rate_Aiyagari"]
# save it in a markdown table
md = tabulate(df3,headers, tablefmt="github")
#save the markdown table
# with open("Table_SavingRate.md", "w") as table:
# table.write(md)
# save it in a latex table
latex = tabulate(df3,headers, tablefmt="latex")
path = 'Tex/Tables/Table_SavingRate.tex'
#save the latex table
with open(path, 'w') as f:
f.write(latex)
# -
# You can also click [here](Archive/Table_SavingRate.md) for a markdown table.
# ### Inequality Measures
# +
# plot wealth distribution under the baseline calibration
s = eq.dr.endo_grid.nodes # grid for states (i.e. the state variable, wealth in this case)
# I drop the last 10 grid points when plotting, since the probabilities of those levels of wealth are very close to zero.
# I did not take the log of wealth, because the log of a number extremely close to zero is a very large negative number.
plt.plot(s[0:20], eq.μ.sum(axis=0)[0:20], color='black')
plt.grid()
plt.title("Wealth Distribution")
# plt.savefig('Figure_WealthDistribution.png') # save the figure in the current directory
# save the figure in the directory where TeX file is located.
save_results_to = 'Tex/Figures/'
plt.savefig(save_results_to + 'Figure_WealthDistribution.png', dpi = 300)
# -
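# Beyond the plotted density, inequality is often summarized by a Gini coefficient. A self-contained sketch for a discrete distribution given as grid values with probability weights (the numbers below are made up):

```python
import numpy as np

def gini(values, weights):
    """Gini coefficient of a discrete distribution of (value, weight) pairs."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(weights, dtype=float)[order]
    p = p / p.sum()
    pop = np.concatenate([[0.0], np.cumsum(p)])      # cumulative population share
    lorenz = np.concatenate([[0.0], np.cumsum(v * p)])
    lorenz = lorenz / lorenz[-1]                     # cumulative wealth share
    # Gini = 1 - 2 * (area under the Lorenz curve), by the trapezoid rule
    area = np.sum((lorenz[1:] + lorenz[:-1]) / 2.0 * np.diff(pop))
    return 1.0 - 2.0 * area

# Perfect equality gives a Gini of (approximately) zero.
print(gini([1.0, 1.0, 1.0], [0.3, 0.4, 0.3]))
```

# In principle, the same function could be applied to the steady-state wealth grid and its probability weights; how to extract those from `eq` is left as an exercise (an assumption, not shown here).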
# ## Conclusions and Comments
# * Our results are highly similar to, but slightly different from, those in Aiyagari (1994). This is because we use a different discrete-valued Markov chain (MC) to approximate the AR process of individuals' idiosyncratic income shocks. In our replication, three grid points (i.e. three MC states), the default in $\texttt{Dolo}$ and $\texttt{Dolark}$, are used to approximate the AR process, whereas the number is seven in Aiyagari (1994).
# * The Aiyagari model extends the Bewley model to a context with a production sector. Deviating from the representative agent model, in which a complete market is implicitly assumed, it studies the aggregate saving behavior of an economy whose agents face a particular kind of uninsured idiosyncratic risk. With an empirically plausible set of parameters, it finds that the aggregate saving rate does increase compared to the complete-market case; however, the change caused by the precautionary saving motive is mild. The model also qualitatively matches the real data in terms of the ranking of the fluctuations of some economic variables. However, in terms of reproducing the real inequalities of income and wealth shown in the data, the model does not perform very well. Also, in this model the joint distribution of income and wealth is not treated as a state variable, which neglects the distribution effects Krusell and Smith (1998) try to address.
| REMARKs/AiyagariIdiosyncratic/Aiyagari1994QJE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="CWPivw5Ss1Hk"
# # Chapter 7
# - The code executed below may involve stochastic processing, so its output can differ from the example output given in the book.
# + id="LvCX0ZnVJ1WD"
# 7-1
# !mkdir chap7
# %cd ./chap7
# + id="0iMot3XGIhtD"
# 7-2
# !pip install transformers==4.5.0 fugashi==1.1.0 ipadic==1.0.0 pytorch-lightning==1.2.10
# + id="87bW8wO5IhtF"
# 7-3
import random
import glob
import json
from tqdm import tqdm
import torch
from torch.utils.data import DataLoader
from transformers import BertJapaneseTokenizer, BertModel
import pytorch_lightning as pl
# Japanese pre-trained model
MODEL_NAME = 'cl-tohoku/bert-base-japanese-whole-word-masking'
# + id="5HFcRL7nnhbX"
# 7-4
class BertForSequenceClassificationMultiLabel(torch.nn.Module):
def __init__(self, model_name, num_labels):
super().__init__()
# Load BertModel
self.bert = BertModel.from_pretrained(model_name)
# Initialize the linear layer
self.linear = torch.nn.Linear(
self.bert.config.hidden_size, num_labels
)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
labels=None
):
# Feed the data in and get the output of BERT's final layer.
bert_output = self.bert(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids)
last_hidden_state = bert_output.last_hidden_state
# Average the hidden states over the non-[PAD] tokens
averaged_hidden_state = \
(last_hidden_state*attention_mask.unsqueeze(-1)).sum(1) \
/ attention_mask.sum(1, keepdim=True)
# Linear transformation
scores = self.linear(averaged_hidden_state)
# Arrange the output format.
output = {'logits': scores}
# If labels were included in the input, compute and output the loss as well.
if labels is not None:
loss = torch.nn.BCEWithLogitsLoss()(scores, labels.float())
output['loss'] = loss
# Make the fields accessible as attributes.
output = type('bert_output', (object,), output)
return output
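# The masked averaging in `forward` (the mean over non-[PAD] positions) can be checked with a tiny NumPy example (toy tensors with made-up values):

```python
import numpy as np

# Toy "last hidden state": batch of 1 sequence, 4 tokens, hidden size 2;
# the last token is padding (attention_mask = 0 there).
last_hidden_state = np.array([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [9.0, 9.0]]])
attention_mask = np.array([[1, 1, 1, 0]])

masked_sum = (last_hidden_state * attention_mask[..., None]).sum(axis=1)
averaged = masked_sum / attention_mask.sum(axis=1, keepdims=True)
print(averaged)  # mean over the three real tokens only: [[3. 4.]]
```

# The padded position contributes nothing to the sum, and the divisor counts only the real tokens.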
# + id="RbWDC5z4x_kP"
# 7-5
tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)
bert_scml = BertForSequenceClassificationMultiLabel(
MODEL_NAME, num_labels=2
)
bert_scml = bert_scml.cuda()
# + id="V_ep4ddFjz-O"
# 7-6
text_list = [
'今日の仕事はうまくいったが、体調があまり良くない。',
'昨日は楽しかった。'
]
labels_list = [
[1, 1],
[0, 1]
]
# Encode the data
encoding = tokenizer(
text_list,
padding='longest',
return_tensors='pt'
)
encoding = { k: v.cuda() for k, v in encoding.items() }
labels = torch.tensor(labels_list).cuda()
# Feed the data into BERT and get the classification scores.
with torch.no_grad():
output = bert_scml(**encoding)
scores = output.logits
# Select a category if its score is positive.
labels_predicted = ( scores > 0 ).int()
# Compute the accuracy
num_correct = ( labels_predicted == labels ).all(-1).sum().item()
accuracy = num_correct/labels.size(0)
# + id="QrXA5KgXmX-m"
# 7-7
# Encode the data
encoding = tokenizer(
text_list,
padding='longest',
return_tensors='pt'
)
encoding['labels'] = torch.tensor(labels_list) # Include labels in the input.
encoding = { k: v.cuda() for k, v in encoding.items() }
output = bert_scml(**encoding)
loss = output.loss # loss
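# The loss returned here is `BCEWithLogitsLoss`: an independent binary cross-entropy per label, averaged over all entries. The formula can be checked with a small NumPy sketch (toy scores with made-up values):

```python
import numpy as np

def bce_with_logits(scores, labels):
    """Mean of -[y*log(sigmoid(s)) + (1-y)*log(1-sigmoid(s))] over all entries."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    p = 1.0 / (1.0 + np.exp(-s))  # sigmoid turns each score into a probability
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# One sample, two labels: a confident correct score and a completely uncertain one.
loss_value = bce_with_logits([[4.0, 0.0]], [[1, 1]])
```

# The second label contributes exactly log(2), the loss of a 50/50 guess.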
# + id="HJ9Tbr6PIhtF"
# 7-8
# Download the data
# !wget https://s3-ap-northeast-1.amazonaws.com/dev.tech-sketch.jp/chakki/public/chABSA-dataset.zip
# Unzip the data
# !unzip chABSA-dataset.zip
# + id="zgXcOtz6fLge"
# 7-9
data = json.load(open('chABSA-dataset/e00030_ann.json'))
print( data['sentences'][0] )
# + id="l33ix4WDIhtG"
# 7-10
category_id = {'negative':0, 'neutral':1 , 'positive':2}
dataset = []
for file in glob.glob('chABSA-dataset/*.json'):
data = json.load(open(file))
# Extract the sentence (text) from each record and build its label list ('labels')
for sentence in data['sentences']:
text = sentence['sentence']
labels = [0,0,0]
for opinion in sentence['opinions']:
labels[category_id[opinion['polarity']]] = 1
sample = {'text': text, 'labels': labels}
dataset.append(sample)
# + id="k4Na8gOPHhya"
# 7-11
print(dataset[0])
# + id="igPtmux1IhtI"
# 7-12
# Load the tokenizer
tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME)
# Format each sample
max_length = 128
dataset_for_loader = []
for sample in dataset:
text = sample['text']
labels = sample['labels']
encoding = tokenizer(
text,
max_length=max_length,
padding='max_length',
truncation=True
)
encoding['labels'] = labels
encoding = { k: torch.tensor(v) for k, v in encoding.items() }
dataset_for_loader.append(encoding)
# Split the dataset
random.shuffle(dataset_for_loader)
n = len(dataset_for_loader)
n_train = int(0.6*n)
n_val = int(0.2*n)
dataset_train = dataset_for_loader[:n_train] # training data
dataset_val = dataset_for_loader[n_train:n_train+n_val] # validation data
dataset_test = dataset_for_loader[n_train+n_val:] # test data
# Create data loaders from the datasets
dataloader_train = DataLoader(
dataset_train, batch_size=32, shuffle=True
)
dataloader_val = DataLoader(dataset_val, batch_size=256)
dataloader_test = DataLoader(dataset_test, batch_size=256)
# + id="9y3dO-kBIhtI"
# 7-13
class BertForSequenceClassificationMultiLabel_pl(pl.LightningModule):
def __init__(self, model_name, num_labels, lr):
super().__init__()
self.save_hyperparameters()
self.bert_scml = BertForSequenceClassificationMultiLabel(
model_name, num_labels=num_labels
)
def training_step(self, batch, batch_idx):
output = self.bert_scml(**batch)
loss = output.loss
self.log('train_loss', loss)
return loss
def validation_step(self, batch, batch_idx):
output = self.bert_scml(**batch)
val_loss = output.loss
self.log('val_loss', val_loss)
def test_step(self, batch, batch_idx):
labels = batch.pop('labels')
output = self.bert_scml(**batch)
scores = output.logits
labels_predicted = ( scores > 0 ).int()
num_correct = ( labels_predicted == labels ).all(-1).sum().item()
accuracy = num_correct/scores.size(0)
self.log('accuracy', accuracy)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
checkpoint = pl.callbacks.ModelCheckpoint(
monitor='val_loss',
mode='min',
save_top_k=1,
save_weights_only=True,
dirpath='model/',
)
trainer = pl.Trainer(
gpus=1,
max_epochs=5,
callbacks = [checkpoint]
)
model = BertForSequenceClassificationMultiLabel_pl(
MODEL_NAME,
num_labels=3,
lr=1e-5
)
trainer.fit(model, dataloader_train, dataloader_val)
test = trainer.test(test_dataloaders=dataloader_test)
print(f'Accuracy: {test[0]["accuracy"]:.2f}')
# + id="My3WI8Qd7yVJ"
# 7-14
# Input sentences
text_list = [
"今期は売り上げが順調に推移したが、株価は低迷の一途を辿っている。",
"昨年から黒字が減少した。",
"今日の飲み会は楽しかった。"
]
# Load the model
best_model_path = checkpoint.best_model_path
model = BertForSequenceClassificationMultiLabel_pl.load_from_checkpoint(best_model_path)
bert_scml = model.bert_scml.cuda()
# Encode the data
encoding = tokenizer(
text_list,
padding = 'longest',
return_tensors='pt'
)
encoding = { k: v.cuda() for k, v in encoding.items() }
# Feed the data into BERT and get the classification scores.
with torch.no_grad():
output = bert_scml(**encoding)
scores = output.logits
labels_predicted = ( scores > 0 ).int().cpu().numpy().tolist()
# Display the results
for text, label in zip(text_list, labels_predicted):
print('--')
print(f'Input: {text}')
print(f'Output: {label}')
| Chapter7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="nThoqg72bjdC"
# # **State-of-the-art NLP Made Easy with [AdaptNLP](https://www.github.com/novetta/adaptnlp)**
# + [markdown] colab_type="text" id="ID0J_MsTbjdD"
# # 1. Today's Objective: Generate Enriched Data from Unstructured Text
# + [markdown] colab_type="text" id="uMVIO0yiQPF9"
# ### *Prerequisite: Install AdaptNLP*
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="rWsQAI7QQMYd" outputId="846ce58e-13aa-4c18-e195-cbdcf05a2ccb"
# !pip install adaptnlp
# + [markdown] colab_type="text" id="lTU-_49uQeiS"
# ### *Prerequisite: Show Unstructured Text Example*
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="nhTD_DggbjdD" jupyter={"source_hidden": true} outputId="331207f4-d864-4fda-f3e1-93033568f249"
from IPython.core.display import HTML, display
example_text = """
The history (and prehistory) of the United States, started with the arrival of Native Americans before 15,000 B.C. Numerous indigenous cultures formed, and many disappeared before 1500. The arrival of <NAME> in the year 1492 started the European colonization of the Americas. Most colonies were formed after 1600, and the early records and writings of <NAME> make the United States the first nation whose most distant origins are fully recorded.[1] By the 1760s, the thirteen British colonies contained 2.5 million people along the Atlantic Coast east of the Appalachian Mountains. After defeating France, the British government imposed a series of taxes, including the Stamp Act of 1765, rejecting the colonists' constitutional argument that new taxes needed their approval. Resistance to these taxes, especially the Boston Tea Party in 1773, led to Parliament issuing punitive laws designed to end self-government in Massachusetts.
Armed conflict began in 1775. In 1776, in Philadelphia, the Second Continental Congress declared the independence of the colonies as the United States. Led by General George Washington, it won the Revolutionary War with large support from France. The peace treaty of 1783 gave the land east of the Mississippi River (except Canada and Florida) to the new nation. The Articles of Confederation established a central government, but it was ineffectual at providing stability as it could not collect taxes and had no executive officer. A convention in 1787 wrote a new Constitution that was adopted in 1789. In 1791, a Bill of Rights was added to guarantee inalienable rights. With Washington as the first president and <NAME> his chief adviser, a strong central government was created. Purchase of the Louisiana Territory from France in 1803 doubled the size of the United States. A second and final war with Britain was fought in 1812, which solidified national pride.
Encouraged by the notion of manifest destiny, U.S. territory expanded all the way to the Pacific Coast. While the United States was large in terms of area, by 1790 its population was only 4 million. It grew rapidly, however, reaching 7.2 million in 1810, 32 million in 1860, 76 million in 1900, 132 million in 1940, and 321 million in 2015. Economic growth in terms of overall GDP was even greater. Compared to European powers, though, the nation's military strength was relatively limited in peacetime before 1940. Westward expansion was driven by a quest for inexpensive land for yeoman farmers and slave owners. The expansion of slavery was increasingly controversial and fueled political and constitutional battles, which were resolved by compromises. Slavery was abolished in all states north of the Mason–Dixon line by 1804, but the South continued to profit from the institution, mostly from the production of cotton. Republican <NAME> was elected president in 1860 on a platform of halting the expansion of slavery.
Seven Southern slave states rebelled and created the foundation of the Confederacy. Its attack of Fort Sumter against the Union forces there in 1861 started the Civil War. Defeat of the Confederates in 1865 led to the impoverishment of the South and the abolition of slavery. In the Reconstruction era following the war, legal and voting rights were extended to freed slaves. The national government emerged much stronger, and because of the Fourteenth Amendment in 1868, it gained explicit duty to protect individual rights. However, when white Democrats regained their power in the South in 1877, often by paramilitary suppression of voting, they passed Jim Crow laws to maintain white supremacy, as well as new disenfranchising state constitutions that prevented most African Americans and many poor whites from voting. This continued until the gains of the civil rights movement in the 1960s and the passage of federal legislation to enforce uniform constitutional rights for all citizens.
The United States became the world's leading industrial power at the turn of the 20th century, due to an outburst of entrepreneurship and industrialization in the Northeast and Midwest and the arrival of millions of immigrant workers and farmers from Europe. A national railroad network was completed and large-scale mines and factories were established. Mass dissatisfaction with corruption, inefficiency, and traditional politics stimulated the Progressive movement, from the 1890s to 1920s. This era led to many reforms, including the Sixteenth to Nineteenth constitutional amendments, which brought the federal income tax, direct election of Senators, prohibition, and women's suffrage. Initially neutral during World War I, the United States declared war on Germany in 1917 and funded the Allied victory the following year. Women obtained the right to vote in 1920, with Native Americans obtaining citizenship and the right to vote in 1924.
After a prosperous decade in the 1920s, the Wall Street Crash of 1929 marked the onset of the decade-long worldwide Great Depression. Democratic President <NAME> ended the Republican dominance of the White House and implemented his New Deal programs, which included relief for the unemployed, support for farmers, Social Security and a minimum wage. The New Deal defined modern American liberalism. After the Japanese attack on Pearl Harbor in 1941, the United States entered World War II and financed the Allied war effort and helped defeat Nazi Germany in the European theater. Its involvement culminated in using newly-invented nuclear weapons on two Japanese cities to defeat Imperial Japan in the Pacific theater.
The United States and the Soviet Union emerged as rival superpowers in the aftermath of World War II. During the Cold War, the two countries confronted each other indirectly in the arms race, the Space Race, proxy wars, and propaganda campaigns. The goal of the United States in this was to stop the spread of communism. In the 1960s, in large part due to the strength of the civil rights movement, another wave of social reforms was enacted which enforced the constitutional rights of voting and freedom of movement to African Americans and other racial minorities. The Cold War ended when the Soviet Union was officially dissolved in 1991, leaving the United States as the world's only superpower.
After the Cold War, the United States's foreign policy has focused on modern conflicts in the Middle East. The beginning of the 21st century saw the September 11 attacks carried out by Al-Qaeda in 2001, which was later followed by wars in Afghanistan and Iraq. In 2007, the United States entered its worst economic crisis since the Great Depression, which was followed by slower-than-usual rates of economic growth during the early 2010s. Economic growth and unemployment rates recovered by the late 2010s, however new economic disruption began in 2020 due to the 2019-20 coronavirus pandemic.
"""
example_text_html = f"""
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
.collapsible {{
background-color: #777;
color: white;
cursor: pointer;
padding: 18px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
}}
.active, .collapsible:hover {{
background-color: #555;
}}
.content {{
padding: 0 18px;
display: none;
overflow: hidden;
background-color: #f1f1f1;
}}
</style>
</head>
<body>
<button type="button" class="collapsible">Example Unstructured Text</button>
<div class="content">
<p>{example_text}</p>
</div>
<script>
var coll = document.getElementsByClassName("collapsible");
var i;
for (i = 0; i < coll.length; i++) {{
coll[i].addEventListener("click", function() {{
this.classList.toggle("active");
var content = this.nextElementSibling;
if (content.style.display === "block") {{
content.style.display = "none";
}} else {{
content.style.display = "block";
}}
}});
}}
</script>
</body>
</html>
"""
display(HTML(example_text_html))
# + [markdown] colab_type="text" id="sDsFKgqvQubU"
# ### *Prerequisite: Download Models and Generate Final Timeline*
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="n5KDs8JPbjdH" jupyter={"source_hidden": true} outputId="6c2dd366-b686-455e-cd1a-eca1b70d2bb1"
from adaptnlp import (
    EasyTokenTagger,
    EasySequenceClassifier,
    EasyQuestionAnswering,
    EasySummarizer,
    EasyTranslator,
    EasyDocumentEmbeddings,
)
from dateutil.parser import parse
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.dates as mdates
import pprint
# Summary
summarizer = EasySummarizer()
summary = summarizer.summarize(text=example_text, model_name_or_path="t5-base", mini_batch_size=1, num_beams=4, min_length=100, max_length=200)
summary = summary[0]
# Translation of Summary
translator = EasyTranslator()
translated_summary = translator.translate(text=summary.split(" . "), model_name_or_path="t5-base", t5_prefix="translate English to French", mini_batch_size=3, min_length=0, max_length=200)
translated_summary = " . ".join(translated_summary)
# NER
nl = "\n" # For f-string formatting
tagger = EasyTokenTagger()
sentences = tagger.tag_text(text=example_text, model_name_or_path="ner-ontonotes-fast", mini_batch_size=32)
ner_dict = sentences[0].to_dict("ner")
ner_dict = [f"<b>{i+1}.</b> {pprint.pformat(ent).replace(nl,'<br>')}" for i, ent in enumerate(ner_dict["entities"][:6])]
ner_html = "<br>" + "<br>".join(ner_dict)
# QA
qa = EasyQuestionAnswering()
_, top_n = qa.predict_qa(query="What happened in 1776?", context=example_text, model_name_or_path="bert-large-cased-whole-word-masking-finetuned-squad", n_best_size=5, mini_batch_size=1)
top_n = [f"<b>{i+1}.</b> {pprint.pformat(dict(ans)).replace(nl,'<br>')}" for i, ans in enumerate(top_n)]
top_n_html = "<br>" + "<br>".join(top_n)
# Timeline
dates = []
for span in sentences[0].get_spans("ner"):
    if span.tag == "DATE":
        dates.append(span.text)
dates = sorted(dates)
dates_map = {}
for d in dates:
    try:
        dates_map[d] = parse(d, fuzzy=True)
    except (ValueError, OverflowError):
        pass
answers_map = {}
answer, _ = qa.predict_qa(query=[f"What happened in {t}" for t in dates_map.keys()], context = [example_text]*len(dates_map.keys()), model_name_or_path="bert-large-cased-whole-word-masking-finetuned-squad", n_best_size=7, mini_batch_size=10)
def generate_timeline(names_mat: list, dates_mat: list):
    # Choose levels
    levels = np.tile([-30, 30, -20, 20, -12, 12, -7, 7, -1, 1], int(np.ceil(len(dates_mat)/10)))[:len(dates_mat)]
    # Create figure and plot a stem plot with the date
    fig, ax = plt.subplots(figsize=(20, 6), constrained_layout=True)
    ax.set_title("Timeline of Significant Events in U.S. History", fontsize=30, fontweight='bold')
    markerline, stemline, baseline = ax.stem(dates_mat, levels, linefmt="C3-", basefmt="k-", use_line_collection=True)
    plt.setp(markerline, mec="k", mfc="w", zorder=3)
    # Shift the markers to the baseline by replacing the y-data by zeros.
    markerline.set_ydata(np.zeros(len(dates_mat)))
    # Annotate lines
    vert = np.array(['top', 'bottom'])[(levels > 0).astype(int)]
    for d, l, r, va in zip(dates_mat, levels, names_mat, vert):
        ax.annotate(r, xy=(d, l), xytext=(-3, np.sign(l)*3), textcoords="offset points", va=va, ha="right")
    # Format xaxis with AutoDateLocator
    ax.get_xaxis().set_major_locator(mdates.AutoDateLocator())
    ax.get_xaxis().set_major_formatter(mdates.DateFormatter("%b %Y"))
    plt.setp(ax.get_xticklabels(), rotation=30, ha="right")
    # Remove y axis and spines
    ax.get_yaxis().set_visible(False)
    for spine in ["left", "top", "right"]:
        ax.spines[spine].set_visible(False)
    ax.margins(y=0.1)
    plt.show()
names_mat = list(answer.values())[:30]
dates_mat = list(dates_map.values())[:30]
generate_timeline(names_mat=names_mat, dates_mat=dates_mat)
html = f"""
<!DOCTYPE html>
<html>
<head>
<style>
.item0 {{ grid-area: timeline; }}
.item1 {{ grid-area: header; }}
.item2 {{ grid-area: menu; }}
.item3 {{ grid-area: main; }}
.item4 {{ grid-area: right; }}
.grid-container {{
display: grid;
grid-template:
'timeline timeline timeline timeline timeline timeline'
'header header main main right right'
'menu menu main main right right';
grid-gap: 5px;
background-color: #777;
padding: 5px;
}}
.grid-container > div {{
background-color: rgba(255, 255, 255, .9);
text-align: center;
padding: 20px;
font-size: 12px;
}}
</style>
</head>
<body>
<div class="grid-container">
<div class="item0">
<h2>Extracted Metadata using AdaptNLP</h2>
</div>
<div class="item1">
<h3>Summary: </h3>
<p style="text-align: center">{summary}</p>
</div>
<div class="item2">
<h3>Translated French Summary: </h3>
<p style="text-align: center">{translated_summary}</p>
</div>
<div class="item3">
<h3>Extracted Entities: </h3>
<p style="text-align: left">{ner_html}</p>
</div>
<div class="item4">
<h3>Top Answers to the Question: <br><em>"What happened in 1776?"</em></h3>
<p style="text-align: left">{top_n_html}</p>
</div>
</div>
</body>
</html>
"""
display(HTML(html))
# + [markdown] colab_type="text" id="G1z0m-16bjdK"
# # 2. Run NLP Tasks: Summarization, Translation, Named Entity Recognition (NER), and Question Answering (QA)
#
# ### [Documentation and Guides](http://novetta.github.io/adaptnlp)
# + [markdown] colab_type="text" id="tVQB_OZAbjdK"
# ### *Import "Easy" NLP Task Modules with AdaptNLP*
# + colab={} colab_type="code" id="ip3bd4gKbjdL"
from adaptnlp import EasySummarizer, EasyTranslator, EasyTokenTagger, EasyQuestionAnswering
# + [markdown] colab_type="text" id="uj3CCkvrbjdN"
# ### *Set Example Text*
# + colab={} colab_type="code" id="bBJ0Twb0bjdO"
text = """
The history (and prehistory) of the United States, started with the arrival of Native Americans before 15,000 B.C. Numerous indigenous cultures formed, and many disappeared before 1500. The arrival of <NAME> in the year 1492 started the European colonization of the Americas. Most colonies were formed after 1600, and the early records and writings of <NAME> make the United States the first nation whose most distant origins are fully recorded.[1] By the 1760s, the thirteen British colonies contained 2.5 million people along the Atlantic Coast east of the Appalachian Mountains. After defeating France, the British government imposed a series of taxes, including the Stamp Act of 1765, rejecting the colonists' constitutional argument that new taxes needed their approval. Resistance to these taxes, especially the Boston Tea Party in 1773, led to Parliament issuing punitive laws designed to end self-government in Massachusetts.
Armed conflict began in 1775. In 1776, in Philadelphia, the Second Continental Congress declared the independence of the colonies as the United States. Led by General George Washington, it won the Revolutionary War with large support from France. The peace treaty of 1783 gave the land east of the Mississippi River (except Canada and Florida) to the new nation. The Articles of Confederation established a central government, but it was ineffectual at providing stability as it could not collect taxes and had no executive officer. A convention in 1787 wrote a new Constitution that was adopted in 1789. In 1791, a Bill of Rights was added to guarantee inalienable rights. With Washington as the first president and <NAME> his chief adviser, a strong central government was created. Purchase of the Louisiana Territory from France in 1803 doubled the size of the United States. A second and final war with Britain was fought in 1812, which solidified national pride.
Encouraged by the notion of manifest destiny, U.S. territory expanded all the way to the Pacific Coast. While the United States was large in terms of area, by 1790 its population was only 4 million. It grew rapidly, however, reaching 7.2 million in 1810, 32 million in 1860, 76 million in 1900, 132 million in 1940, and 321 million in 2015. Economic growth in terms of overall GDP was even greater. Compared to European powers, though, the nation's military strength was relatively limited in peacetime before 1940. Westward expansion was driven by a quest for inexpensive land for yeoman farmers and slave owners. The expansion of slavery was increasingly controversial and fueled political and constitutional battles, which were resolved by compromises. Slavery was abolished in all states north of the Mason–Dixon line by 1804, but the South continued to profit from the institution, mostly from the production of cotton. Republican Abrah<NAME>coln was elected president in 1860 on a platform of halting the expansion of slavery.
Seven Southern slave states rebelled and created the foundation of the Confederacy. Its attack of Fort Sumter against the Union forces there in 1861 started the Civil War. Defeat of the Confederates in 1865 led to the impoverishment of the South and the abolition of slavery. In the Reconstruction era following the war, legal and voting rights were extended to freed slaves. The national government emerged much stronger, and because of the Fourteenth Amendment in 1868, it gained explicit duty to protect individual rights. However, when white Democrats regained their power in the South in 1877, often by paramilitary suppression of voting, they passed Jim Crow laws to maintain white supremacy, as well as new disenfranchising state constitutions that prevented most African Americans and many poor whites from voting. This continued until the gains of the civil rights movement in the 1960s and the passage of federal legislation to enforce uniform constitutional rights for all citizens.
The United States became the world's leading industrial power at the turn of the 20th century, due to an outburst of entrepreneurship and industrialization in the Northeast and Midwest and the arrival of millions of immigrant workers and farmers from Europe. A national railroad network was completed and large-scale mines and factories were established. Mass dissatisfaction with corruption, inefficiency, and traditional politics stimulated the Progressive movement, from the 1890s to 1920s. This era led to many reforms, including the Sixteenth to Nineteenth constitutional amendments, which brought the federal income tax, direct election of Senators, prohibition, and women's suffrage. Initially neutral during World War I, the United States declared war on Germany in 1917 and funded the Allied victory the following year. Women obtained the right to vote in 1920, with Native Americans obtaining citizenship and the right to vote in 1924.
After a prosperous decade in the 1920s, the Wall Street Crash of 1929 marked the onset of the decade-long worldwide Great Depression. Democratic President <NAME> ended the Republican dominance of the White House and implemented his New Deal programs, which included relief for the unemployed, support for farmers, Social Security and a minimum wage. The New Deal defined modern American liberalism. After the Japanese attack on Pearl Harbor in 1941, the United States entered World War II and financed the Allied war effort and helped defeat Nazi Germany in the European theater. Its involvement culminated in using newly-invented nuclear weapons on two Japanese cities to defeat Imperial Japan in the Pacific theater.
The United States and the Soviet Union emerged as rival superpowers in the aftermath of World War II. During the Cold War, the two countries confronted each other indirectly in the arms race, the Space Race, proxy wars, and propaganda campaigns. The goal of the United States in this was to stop the spread of communism. In the 1960s, in large part due to the strength of the civil rights movement, another wave of social reforms was enacted which enforced the constitutional rights of voting and freedom of movement to African Americans and other racial minorities. The Cold War ended when the Soviet Union was officially dissolved in 1991, leaving the United States as the world's only superpower.
After the Cold War, the United States's foreign policy has focused on modern conflicts in the Middle East. The beginning of the 21st century saw the September 11 attacks carried out by Al-Qaeda in 2001, which was later followed by wars in Afghanistan and Iraq. In 2007, the United States entered its worst economic crisis since the Great Depression, which was followed by slower-than-usual rates of economic growth during the early 2010s. Economic growth and unemployment rates recovered by the late 2010s, however new economic disruption began in 2020 due to the 2019-20 coronavirus pandemic.
"""
# + [markdown] colab_type="text" id="Txj2DaBybjdQ"
# ### *Summarize*
# + colab={} colab_type="code" id="ypP3eEpxbjdR"
summarizer = EasySummarizer()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="RdRof1QbbjdT" outputId="a9505366-18fe-4dfe-fa19-8e9e08652700"
summary = summarizer.summarize(
text = text,
model_name_or_path = "t5-base",
mini_batch_size = 1,
num_beams = 4,
min_length = 100,
max_length = 200,
)
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="L9BnjJt8T26l" outputId="7a7bc879-d242-4890-f9a7-3b41442c2520"
summary[0]
summary = summary[0].split(" . ")
summary
# + [markdown] colab_type="text" id="2CO1jx6YT4L-"
# ### *Translate*
# + colab={} colab_type="code" id="bKwyjCdmT3sZ"
translator = EasyTranslator()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="REqtz8zpT5gI" outputId="f407a45e-8cf9-4a71-80b9-123c91200de4"
translated_summary = translator.translate(
text = summary,
model_name_or_path = "t5-base",
t5_prefix = "translate English to French",
mini_batch_size = 3,
num_beams = 1,
min_length = 0,
max_length = 200,
)
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="YFS-5qDWT65N" outputId="e7fabd73-dc07-488f-a34a-558810644637"
translated_summary
# + [markdown] colab_type="text" id="y3HpUHWtT9Qf"
# ### *Named Entity Recognition (NER)*
# + colab={} colab_type="code" id="tLPsgfE2T_nf"
tagger = EasyTokenTagger()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="AynCfDSjUAJ5" outputId="551463a9-87a3-4803-dc62-e7a8855d2c63"
sentences = tagger.tag_text(
text = text,
model_name_or_path = "ner-ontonotes-fast",
mini_batch_size = 32,
)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="xqSrGIreUAgC" outputId="d15b9fe9-0d2b-4a3f-ba55-09bef15f19ab"
sentences[0].get_spans("ner")
# + [markdown] colab_type="text" id="09Pss3ZmUA4q"
# ### *Question Answering*
# + colab={} colab_type="code" id="om0x2dAsUBel"
qa = EasyQuestionAnswering()
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="RA0y7IDUUB3m" outputId="46aca9f1-758f-4135-9d65-5f729098975e"
answer, top_n = qa.predict_qa(
context = text,
query = "What happened in 1776?",
model_name_or_path = "bert-large-cased-whole-word-masking-finetuned-squad",
mini_batch_size = 1,
n_best_size = 5,
)
# + colab={"base_uri": "https://localhost:8080/", "height": 625} colab_type="code" id="fiNzsE8-UCSW" outputId="399a6d5b-f804-42ec-b574-cba6134046c7"
answer, top_n
# + [markdown] colab_type="text" id="L1tl2lFAbjdl"
# # 3. Generate the Timeline: NER and QA
# + [markdown] colab_type="text" id="rwj55himbjdm"
# ### *Run NER Task to Extract "Date" Tagged Entities*
# + colab={} colab_type="code" id="Fil8QESMbjdm"
sentences = tagger.tag_text(
text = text,
model_name_or_path = "ner-ontonotes-fast",
mini_batch_size = 1,
)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="5di_Vo_Bbjdo" outputId="8bab20ca-3dc3-43a4-946e-99acabd69ab0"
spans = sentences[0].get_spans("ner")
spans
# + colab={"base_uri": "https://localhost:8080/", "height": 851} colab_type="code" id="NCJgK9RCbjds" outputId="5d2ac80b-3506-450c-a0e2-b1df7c45c5ef"
dates = []
for s in spans:
    if s.tag == "DATE":
        dates.append(s.text)
dates = sorted(dates)
dates
# + colab={"base_uri": "https://localhost:8080/", "height": 607} colab_type="code" id="WmL6mroTbjdu" outputId="d80b783f-3711-4eda-f043-92605e1ed6c8"
from dateutil.parser import parse
dates_map = {}
for d in dates:
    try:
        dates_map[d] = parse(d, fuzzy=True)
    except (ValueError, OverflowError):
        pass
dates_map
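Note that `parse(d, fuzzy=True)` fills any fields missing from the span (month, day) with values taken from today's date, so a bare year such as "1776" does not land on a stable point on the time axis. A stdlib-only alternative (a hypothetical helper, not part of AdaptNLP or this notebook) pins each span to January 1 of the first four-digit year it contains:

```python
import re
from datetime import datetime

def year_to_datetime(text):
    """Pin a date-like NER span to Jan 1 of the first four-digit year it
    contains, so timeline ordering does not depend on today's date.
    Returns None for spans with no year (e.g. "the following year")."""
    match = re.search(r"\b(1[0-9]{3}|20[0-9]{2})", text)
    return datetime(int(match.group(1)), 1, 1) if match else None

print(year_to_datetime("the 1890s"))  # 1890-01-01 00:00:00
```

Spans with no parseable year (e.g. "the following year") return `None` and can simply be skipped when building `dates_map`.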
# + colab={} colab_type="code" id="eVfEzKm5bjdx"
# + [markdown] colab_type="text" id="FPpCee-kbjdy"
#
# ### *Run QA Task to Extract Information on "What happened in..." Extracted Dates*
# + colab={"base_uri": "https://localhost:8080/", "height": 607} colab_type="code" id="ett854eYbjdz" outputId="599cbc48-0205-4077-af65-fbac6293d8ea"
query_texts = [f"What happened in {d}?" for d in dates_map.keys()]
context_texts = [text]*len(query_texts)
query_texts
# + colab={"base_uri": "https://localhost:8080/", "height": 712} colab_type="code" id="Atu6XkFgbjd1" outputId="61655128-ae9f-4a8e-f8b6-9866e55dec04"
answers, _ = qa.predict_qa(
context = context_texts,
query = query_texts,
model_name_or_path = "bert-large-cased-whole-word-masking-finetuned-squad",
n_best_size = 7,
mini_batch_size = 10
)
answers
# + [markdown] colab_type="text" id="S89ZOXBHABjK"
# ### *Generate Text Timeline*
# + colab={"base_uri": "https://localhost:8080/", "height": 607} colab_type="code" id="OOO1JrdUbW8j" outputId="f7750f38-e877-43fe-96fb-aed448b5de98"
for d, a in zip(dates_map.keys(), answers.values()):
    print(d, a)
# + [markdown] colab_type="text" id="C-NpHdq5bjd3"
# ### *Generate Stem Timeline with Matplotlib*
# + colab={} colab_type="code" id="FiFdwFmUbjd3"
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.dates as mdates
from datetime import datetime
def generate_timeline(names_mat: list, dates_mat: list):
    # Choose levels
    levels = np.tile([-30, 30, -20, 20, -12, 12, -7, 7, -1, 1], int(np.ceil(len(dates_mat)/10)))[:len(dates_mat)]
    # Create figure and plot a stem plot with the date
    fig, ax = plt.subplots(figsize=(20, 6), constrained_layout=True)
    ax.set_title("Timeline of Significant Events in U.S. History", fontsize=30, fontweight='bold')
    markerline, stemline, baseline = ax.stem(dates_mat, levels, linefmt="C3-", basefmt="k-", use_line_collection=True)
    plt.setp(markerline, mec="k", mfc="w", zorder=3)
    # Shift the markers to the baseline by replacing the y-data by zeros.
    markerline.set_ydata(np.zeros(len(dates_mat)))
    # Annotate lines
    vert = np.array(['top', 'bottom'])[(levels > 0).astype(int)]
    for d, l, r, va in zip(dates_mat, levels, names_mat, vert):
        ax.annotate(r, xy=(d, l), xytext=(-3, np.sign(l)*3), textcoords="offset points", va=va, ha="right")
    # Format xaxis with AutoDateLocator
    ax.get_xaxis().set_major_locator(mdates.AutoDateLocator())
    ax.get_xaxis().set_major_formatter(mdates.DateFormatter("%b %Y"))
    plt.setp(ax.get_xticklabels(), rotation=30, ha="right")
    # Remove y axis and spines
    ax.get_yaxis().set_visible(False)
    for spine in ["left", "top", "right"]:
        ax.spines[spine].set_visible(False)
    ax.margins(y=0.1)
    plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 457} colab_type="code" id="FtL0s2GXbjeC" outputId="d20df009-e2cd-453d-dfbd-f1bd7c420566"
names_mat = list(answers.values())[:30]
dates_mat = list(dates_map.values())[:30]
generate_timeline(names_mat=names_mat, dates_mat=dates_mat)
# + [markdown] colab_type="text" id="tCIKLMuvO9L_"
# # Additional AdaptNLP Resources (All Open and Publicly Available)
# + [markdown] colab_type="text" id="mzrIAR4yUMYW"
# # *Tutorials for NLP Tasks with AdaptNLP*
#
# 1. Token Classification: NER, POS, Chunk, and Frame Tagging
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/1.%20Token%20Classification/token_tagging.ipynb)
# 2. Sequence Classification: Sentiment
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/2.%20Sequence%20Classification/Easy%20Sequence%20Classifier.ipynb)
# 3. Embeddings: Transformer Embeddings e.g. BERT, XLM, GPT2, XLNet, roBERTa, ALBERT
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/3.%20Embeddings/embeddings.ipynb)
# 4. Question Answering: Span-based Question Answering Model
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/4.%20Question%20Answering/question_answering.ipynb)
# 5. Summarization: Abstractive and Extractive
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/5.%20Summarization/summarization.ipynb)
# 6. Translation: Seq2Seq
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/6.%20Translation/translation.ipynb)
# + [markdown] colab_type="text" id="wqtzIcarVxFB"
# ### *Tutorial for Fine-tuning and Training Custom Models with AdaptNLP*
#
# 1. Training a Sequence Classifier
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/2.%20Sequence%20Classification/Easy%20Sequence%20Classifier.ipynb)
# 2. Fine-tuning a Transformers Language Model
# - [](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/Finetuning%20and%20Training%20(Advanced)/Fine-tuning%20Language%20Model.ipynb)
#
# Checkout the [documentation](https://novetta.github.io/adaptnlp) for more information.
#
# + [markdown] colab_type="text" id="M9CEAsioV61c"
# ## *NVIDIA Docker and Configurable AdaptNLP REST Microservices*
#
# 1. AdaptNLP official docker images are up on [Docker Hub](https://hub.docker.com/r/achangnovetta/adaptnlp).
# 2. REST Microservices with AdaptNLP and FastAPI are also up on [Docker Hub](https://hub.docker.com/r/achangnovetta)
#
# All images can be built with GPU support if NVIDIA-Docker is correctly installed.
# + colab={} colab_type="code" id="MSDLN4DoV0L4"
| tutorials/Workshops/ODSC_timeline_generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.13 64-bit (''anc'': conda)'
# name: python3
# ---
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# -
results = pd.read_csv("./London_Marathon_Big.csv")
results
sns.violinplot(data=results, x="Sex", y="Finish (Total Seconds)")
sns.violinplot(data=results, x="Category", y="Finish (Total Seconds)")
| notebooks/london_plotter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Custom Model Compilation and Inference using Onnx runtime
#
# In this example notebook, we describe how to take a pre-trained classification model and compile it using ***Onnx runtime*** to generate deployable artifacts that can be deployed on the target using the ***Onnx*** interface.
#
# - Pre-trained model: `resnet18v2` model trained on ***ImageNet*** dataset using ***Onnx***
#
# In particular, we will show how to
# - compile the model (during heterogeneous model compilation, supported layers are offloaded to the `TI-DSP` and artifacts needed for inference are generated)
# - use the generated artifacts for inference
# - perform input preprocessing and output postprocessing
#
# ## Onnx Runtime based work flow
#
# The diagram below describes the steps for Onnx Runtime based work flow.
#
# Note:
# - The user needs to compile models(sub-graph creation and quantization) on a PC to generate model artifacts.
# - The generated artifacts can then be used to run inference on the target.
#
# <img src=docs/images/onnx_work_flow_2.png width="400">
# + tags=["parameters"]
import os
import tqdm
import cv2
import numpy as np
import onnxruntime as rt
from scripts.utils import imagenet_class_to_name, download_model
import matplotlib.pyplot as plt
# -
# ## Define utility function to preprocess input images
# Below, we define a utility function to preprocess images for `resnet18v2`. This function takes a path as input, loads the image and preprocesses it for generic ***Onnx*** inference. The steps are as follows:
#
# 1. load image
# 2. convert BGR image to RGB
# 3. scale image so that the short edge is 256 pixels
# 4. center-crop image to 224x224 pixels
# 5. apply per-channel pixel scaling and mean subtraction
#
#
# - Note: If you are using a custom model or a model that was trained using a different framework, please remember to define your own utility function.
def preprocess_for_onnx_resent18v2(image_path):
    # read the image using openCV
    img = cv2.imread(image_path)
    # convert to RGB
    img = img[:,:,::-1]
    # Most of the onnx models are trained using
    # 224x224 images. The general rule of thumb
    # is to scale the input image while preserving
    # the original aspect ratio so that the
    # short edge is 256 pixels, and then
    # center-crop the scaled image to 224x224
    orig_height, orig_width, _ = img.shape
    short_edge = min(img.shape[:2])
    new_height = (orig_height * 256) // short_edge
    new_width = (orig_width * 256) // short_edge
    img = cv2.resize(img, (new_width, new_height), interpolation=cv2.INTER_CUBIC)
    startx = new_width//2 - (224//2)
    starty = new_height//2 - (224//2)
    img = img[starty:starty+224, startx:startx+224]
    # apply scaling and mean subtraction.
    # if your model is built with an input
    # normalization layer, then you might
    # need to skip this
    img = img.astype('float32')
    for mean, scale, ch in zip([128, 128, 128], [0.0078125, 0.0078125, 0.0078125], range(img.shape[2])):
        img[:,:,ch] = ((img.astype('float32')[:,:,ch] - mean) * scale)
    img = np.expand_dims(img, axis=0)
    img = np.transpose(img, (0, 3, 1, 2))
    return img
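As a sanity check on the scale-then-crop arithmetic, the standalone sketch below (a hypothetical helper, not used elsewhere in this notebook) reproduces the integer resize math in isolation:

```python
def scaled_size(orig_height, orig_width, short_edge_target=256):
    # Scale so the short edge becomes `short_edge_target`, preserving
    # the aspect ratio with the same integer arithmetic as above.
    short_edge = min(orig_height, orig_width)
    new_height = (orig_height * short_edge_target) // short_edge
    new_width = (orig_width * short_edge_target) // short_edge
    return new_height, new_width

# A 480x640 frame is resized to 256x341; the 224x224 center crop
# then starts at row 16 (256//2 - 112) and column 58 (341//2 - 112).
print(scaled_size(480, 640))  # (256, 341)
```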
# ## Compile the model
# In this step, we create an Onnx runtime session with the `tidl_model_import_onnx` library to generate artifacts that offload the supported portion of the DL model to the TI DSP.
# - `sess` is created with the options below to calibrate the model for 8-bit fixed point inference
#
# * **artifacts_folder** - folder where all the compilation artifacts needed for inference are stored
# * **tidl_tools_path** - os.getenv('TIDL_TOOLS_PATH'), path to `TIDL` compilation tools
# * **tensor_bits** - 8 or 16, is the number of bits to be used for quantization
# * **advanced_options:calibration_frames** - number of images to be used for calibration
#
# ```
# compile_options = {
# 'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
# 'artifacts_folder' : output_dir,
# 'tensor_bits' : 16,
# 'accuracy_level' : 0,
# 'advanced_options:calibration_frames' : len(calib_images),
# 'advanced_options:calibration_iterations' : 3 # used if accuracy_level = 1
# }
# ```
#
# - Note: The path to `TIDL` compilation tools and `aarch64` `GCC` compiler is required for model compilation, both of which are accessed by this notebook using predefined environment variables `TIDL_TOOLS_PATH` and `ARM64_GCC_PATH`. The example usage of both the variables is demonstrated in the cell below.
# - The snippet above uses `accuracy_level = 0` for faster compilation. The compilation cell below sets `accuracy_level = 1`, which takes more time to compile but gives better inference accuracy.
# Compilation status log for accuracy_level = 1 is currently not implemented in this notebook. This will be added in future versions.
# - Please refer to TIDL user guide for further advanced options.
output_dir = 'custom-artifacts/onnx/resnet18v2'
onnx_model_path = '../../models/public/onnx/resnet18_opset9.onnx'
download_model(onnx_model_path)
# +
calib_images = [
    'sample-images/elephant.bmp',
    'sample-images/bus.bmp',
    'sample-images/bicycle.bmp',
    'sample-images/zebra.bmp',
]
compile_options = {
    'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder' : output_dir,
    'tensor_bits' : 8,
    'accuracy_level' : 1,
    'advanced_options:calibration_frames' : len(calib_images),
    'advanced_options:calibration_iterations' : 3  # used if accuracy_level = 1
}
# create the output dir if not present
# clear the directory
os.makedirs(output_dir, exist_ok=True)
for root, dirs, files in os.walk(output_dir, topdown=False):
    [os.remove(os.path.join(root, f)) for f in files]
    [os.rmdir(os.path.join(root, d)) for d in dirs]
so = rt.SessionOptions()
EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
input_details = sess.get_inputs()
for num in tqdm.trange(len(calib_images)):
    output = list(sess.run(None, {input_details[0].name : preprocess_for_onnx_resent18v2(calib_images[num])}))[0]
# -
# ## Use compiled model for inference
# Then, using ***Onnx*** with the ***`libtidl_onnxrt_EP`*** inference library, we run the model and collect benchmark data.
# +
EP_list = ['TIDLExecutionProvider','CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
# Running inference several times to get a stable performance output
for i in range(5):
    output = list(sess.run(None, {input_details[0].name : preprocess_for_onnx_resent18v2('sample-images/elephant.bmp')}))
for idx, cls in enumerate(output[0].squeeze().argsort()[-5:][::-1]):
    print('[%d] %s' % (idx, '/'.join(imagenet_class_to_name(cls))))
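The `argsort()[-5:][::-1]` idiom above selects the indices of the five highest-scoring classes, best first; a minimal illustration with plain NumPy:

```python
import numpy as np

scores = np.array([0.10, 0.70, 0.05, 0.15])
# argsort orders indices by ascending score; take the last two
# and reverse them to get the best-first top-2.
top2 = scores.argsort()[-2:][::-1]
print(top2.tolist())  # [1, 3]
```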
from scripts.utils import plot_TI_performance_data, plot_TI_DDRBW_data, get_benchmark_output
stats = sess.get_TI_benchmark_data()
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,5))
plot_TI_performance_data(stats, axis=ax)
plt.show()
tt, st, rb, wb = get_benchmark_output(stats)
print(f'Statistics : \n Inferences Per Second : {1000.0/tt :7.2f} fps')
print(f' Inference Time Per Image : {tt :7.2f} ms \n DDR BW Per Image : {rb+ wb : 7.2f} MB')
| examples/jupyter_notebooks/custom-model-onnx.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// # JSON Validation Java Tutorial
// This tutorial will show you how to validate a JSON document and create a customized string format tag using the Java classes in the com.ndi package. We will start this tutorial with a brief introduction to JSON Schema and the format tag.
// ## A brief introduction to JSON Schema and com.ndi package
//
// For this project, JSON documents are used to represent instances of our ndi classes, including ndi_session, ndi_daqreader, ndi_daqsystem, and many other ndi classes. Every JSON file consists of a series of key-value pairs, each of which stores one of a class's fields and its value. Therefore, before storing a JSON file in the database, it is crucial to ensure that the JSON file has the desired format. To do so, we need to create a JSON Schema document for each class that we wish to validate.
//
// A JSON Schema is a special type of JSON file that is used to specify the expected data type for each value in a JSON document. We will provide a concrete example to further clarify our explanation. Say that we have an instance of ndi_subject which consists of two fields: **local_identifier** and **description**. We can represent this instance of ndi_subject with the following JSON file: *sample-ndi-subject.json*
// <pre>
// <code>
// <b>sample-ndi-subject.json</b>
//
// {
// "local_identifier" : "<EMAIL>",
// "description" : "this is a dummy subject"
// }
// </code>
// </pre>
// Clearly, both local_identifier and description have to be strings, so something like the following would be invalid:
// <pre>
// <code>
// <b>sample-ndi-subject-wrong-type.json</b>
//
// {
// "local_identifier" : 153,
// "description" : "this is a dummy subject"
// }
// </code>
// </pre>
// To enforce that both the **local_identifier** field and the **description** field are strings, we can create a JSON schema document. Let's call it *ndi_document_subject_schema.json*.
// <pre>
// <code>
// <b>ndi_document_subject_schema.json</b>
//
// {
// "$schema": "http://json-schema.org/draft/2019-09/schema#",
// "id": "$NDISCHEMAPATH\/ndi_document_subject_schema.json",
// "title": "ndi_document_subject",
// "type": "object",
// "properties": {
// "local_identifier": {
// "type": "string"
// },
// "description": {
// "type": "string"
// }
// }
// }
// </code>
// </pre>
// The **"$schema"** tag tells us the official JSON Schema specification we are using. You can read the specification document at this link: http://json-schema.org/draft/2019-09/schema#. The **"id"** tag represents the identifier of the JSON schema document. The **"title"** tag specifies the name of the associated JSON document. All three of the above tags are semantic tags (or annotations). That is, they don't have an impact on the validation outcome.
//
// The **"type"** tag specifies the expected data type of each value of the JSON document. Here ndi_document_subject represents a MATLAB object, so we set the type to "object". Next, within the properties tag, we need to specify the expected data type for each field of this object. Here we want both the fields "local_identifier" and "description" to be strings. You can read more about the vocabulary and the expected document structure of JSON Schema files at this link: https://json-schema.org/understanding-json-schema/
//
//
// The classes within the package com.ndi, which can be found in the ndi-validator-java.jar file, provide methods that can be called from MATLAB to validate a JSON instance (in fact, this is precisely what *ndi_validate.m* does). In particular, the com.ndi.Validator class is a wrapper around org.everit's implementation of JSON Schema. You can check out their source code here: https://github.com/everit-org/json-schema/tree/master/core/src/main/java/org/everit/json/schema. We will explain how to use those methods in the next section of this tutorial.
// ## Validating JSON Document
//
// Our next task is to use the Validator class within the com.ndi package to validate the JSON document. First we need to import the Validator class from the com.ndi package. If you are curious, you can check out its implementation at this link: https://github.com/VH-Lab/NDI-matlab/blob/document_validation_2/database/java/ndi-validator-java/src/main/java/com/ndi/Validator.java
import com.ndi.Validator;
// There are two ways to construct a new instance of the Validator class. Here are the method signatures of the Validator's constructors.
// <pre>
// <code>
// <b>public</b> Validator(<b>String</b> document, <b>String</b> schema)
//
// <b>public</b> Validator(<b>JSONObject</b> document, <b>JSONObject</b> schema)
// </code>
// </pre>
// **The first constructor takes two parameters**:
//
// <ul>
// <li> <b> document</b>: this represents the content of the JSON document we wish to validate</li>
// <li> <b> schema</b>: this represents the content of the JSON schema document we wish to validate the document against</li>
// </ul>
//
// **The second constructor also takes two parameters**:
// <ul>
// <li> <b> document</b>: same as what the first constructor takes, except document needs to be an instance of org.json.JSONObject</li>
// <li> <b> schema</b>: same as above, but the schema document needs to be wrapped inside an org.json.JSONObject</li>
// </ul>
//
// An example will hopefully make our explanation clear. Let's try to validate the *sample-ndi-subject.json* we have created above against *ndi_document_subject_schema.json*. We will first construct a Validator object using the first constructor:
// +
//the file content of the sample-ndi-subject.json
String document = "{\n" +
"    \"local_identifier\" : \"<EMAIL>\",\n" +
"    \"description\" : \"this is a dummy subject\"\n" +
"}";
// +
//the file content of the ndi_document_subject_schema.json
String schema = "{\n" +
" \"$schema\": \"http://json-schema.org/draft/2019-09/schema#\",\n" +
" \"id\": \"$NDISCHEMAPATH\\/ndi_document_subject_schema.json\",\n" +
" \"title\": \"ndi_document_subject\",\n" +
" \"type\": \"object\",\n" +
" \"properties\": {\n" +
" \"local_identifier\": {\n" +
" \"type\": \"string\"\n" +
" },\n" +
" \"description\": {\n" +
" \"type\": \"string\"\n" +
" }\n" +
" }\n" +
"}";
// -
// document and schema represent the actual JSON content as string
Validator ndi_subject_validator = new Validator(document, schema);
// Next we will call the Validator's instance method getReport() to get a detailed report of our validation. This method returns an instance of java.util.HashMap, which tells you the part of the JSON document that has a type error.
ndi_subject_validator.getReport()
// We get an empty HashMap, which means that our JSON document does not contain any type error. Both the local_identifier field and the description field were initialized with strings. This confirms that our JSON document is valid. Next, let's see what happens if we enter an invalid value for one of our fields:
// +
//the file content of the sample-ndi-subject-wrong-type.json
String document = "{\n" +
"    \"local_identifier\" : 153,\n" +
"    \"description\" : \"this is a dummy subject\"\n" +
"}";
// -
Validator ndi_subject = new Validator(document, schema);
ndi_subject.getReport();
// We get an instance of HashMap, which tells us that the field "local_identifier" is an Integer, whereas it is supposed to be a string based on the schema document we've passed in. Our validation fails just as we would have expected. We can also validate our JSON document by passing in an instance of org.json.JSONObject instead of a string. The reason we have a second constructor is that we can initialize a JSONObject by passing in the file path to the JSON document as opposed to its content:
//
// To create an instance of JSONObject from a file path, we need to wrap the file path inside a FileInputStream object, then wrap that FileInputStream object inside a JSONTokener object, and finally pass that JSONTokener object to the JSONObject constructor. Suppose we have created the files "sample-ndi-subject-wrong-type.json" and "ndi_document_subject_schema.json" (yes, they are the exact same files we have used in our earlier example) in our Java classpath. We use a try-with-resources block to safely read those files. To learn more about Java's try-with-resources block and IO syntax, check out this link: https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html.
// +
import org.json.JSONObject;
import org.json.JSONTokener;
import java.io.FileInputStream;
import java.io.InputStream;
JSONObject document;
JSONObject schema;
try(InputStream documentFile = new FileInputStream("/Users/yixin/Documents/ndi-validator-java/src/main/resources/sample-ndi-subject-wrong-type.json");
InputStream schemaFile = new FileInputStream("/Users/yixin/Documents/ndi-validator-java/src/main/resources/ndi_document_subject_schema.json")){
document = new JSONObject(new JSONTokener(documentFile));
schema = new JSONObject(new JSONTokener(schemaFile));
}
// -
// Verify that we have successfully loaded the JSON files from disk
document
schema
// Next, we just pass those instances of JSONObject into our constructor:
Validator validator = new Validator(document, schema);
validator.getReport();
// We get what we would have expected: an error message identifying the type mismatch. You've made it through the first part of the tutorial successfully. Next we will discuss how we can define our own JSON Schema vocabulary (the format keyword).
// ## Creating your own format keyword within the string type
// In the previous section, we saw how we could restrict the values of a JSON document to a data type. However, what if we want our string to be in a specific format? For instance, what if we want our value to be a valid email address? The official JSON Schema specification allows us to use the format tag to enforce that the string matches a particular pattern. In fact, the official JSON Schema specification offers the following built-in formats:
//
// <ul>
// <li>"date-time"</li>
// <li>"email"</li>
// <li>"hostname"</li>
// <li>"ipv4"</li>
// <li>"ipv6"</li>
// <li>"uri"</li>
// </ul>
//
// See the full list here: https://json-schema.org/understanding-json-schema/reference/string.html#format
// To make our explanation clearer, let's try to validate a JSON document, one of whose values is supposed to be an email address.
// <pre>
// <code>
// <b>studentA.json</b>
//
// {
// "name" : "studentA",
// "email-address" : "<EMAIL>"
// }
// </code>
// </pre>
// <pre>
// <code>
// <b>studentB.json</b>
//
// {
// "name" : "studentB",
// "email-address" : "badEmailAddress!%^@"
// }
// </code>
// </pre>
// <pre>
// <code>
// <b>student_schema.json</b>
//
// {
// "$schema": "http://json-schema.org/draft/2019-09/schema#",
// "id": "$NDISCHEMAPATH\/student_schema.json",
// "title": "student",
// "type": "object",
// "properties": {
// "name": {
// "type": "string"
// },
// "email-address": {
// "type": "string",
// "format" : "email"
// }
// }
// }
// </code>
// </pre>
// As you can see, we add a "format" tag in this student_schema.json for the field "email-address". Our validator will not only check that the value is of type string but also verify that it is indeed a valid email address. We will run both studentA.json and studentB.json against our student_schema.json.
//the file content of studentA.json
String studentA = "{\n" +
"\"name\" : \"studentA\",\n" +
" \"email-address\" : \"<EMAIL>\"\n" +
"}";
//the file content of studentB.json
String studentB = " {\n" +
" \"name\" : \"studentB\",\n" +
" \"email-address\" : \"badEmailAddress!%^@\"\n" +
" }";
//the file content of student_schema.json
String studentSchema = "{\n" +
" \"$schema\": \"http://json-schema.org/draft/2019-09/schema#\",\n" +
" \"id\": \"$NDISCHEMAPATH\\/student_schema.json\",\n" +
" \"title\": \"student\",\n" +
" \"type\": \"object\",\n" +
" \"properties\": {\n" +
" \"name\": {\n" +
" \"type\": \"string\"\n" +
" },\n" +
" \"email-address\": {\n" +
" \"type\": \"string\",\n" +
" \"format\" : \"email\"\n" +
" }\n" +
" }\n" +
" }";
// initialize our validators
Validator validatorForStudentA = new Validator(studentA, studentSchema);
Validator validatorForStudentB = new Validator(studentB, studentSchema);
validatorForStudentA.getReport()
validatorForStudentB.getReport()
// Just as we would have expected, our validator is capable of detecting an invalid email address. Next, what if we want our string to be in a particular format that the JSON Schema specification does not offer? We can implement this logic ourselves. This requires us to implement the org.everit.json.schema.FormatValidator interface. We will demonstrate how to achieve that through a concrete example. Suppose we want our validator to only accept email addresses that come from the brandeis.edu domain. That is, "@" must be followed by "brandeis.edu".
//
// Let's create a class called BrandeisEmailValidator that implements the org.everit.json.schema.FormatValidator interface:
// +
import org.everit.json.schema.FormatValidator;
import java.util.Optional;
public class BrandeisEmailValidator implements FormatValidator{
@Override
public Optional<String> validate(String subject){
int separator = subject.indexOf("@");
if (separator == -1 || !subject.substring(separator).equals("@brandeis.edu")){
return Optional.of("requires a brandeis.edu email");
}
return Optional.empty();
}
@Override
public String formatName(){
return "brandeis-email";
}
}
// -
// <p>The two methods we have to override are public Optional<String> validate(String subject), which takes a user input and checks whether it is a valid Brandeis email address, and public String formatName(), which simply returns the format keyword that we want the validator to recognize when it parses the JSON schema document. As the method signature suggests, the first method returns a string wrapped inside the Optional container object. If something goes wrong, the string will represent the error message; otherwise we return an empty Optional container object to indicate that no type error has been found. Let's test it out. We modify the email addresses in studentA.json and studentB.json. Only student B has a valid Brandeis email address.</p>
// <pre>
// <code>
// <b>studentA.json</b>
//
// {
// "name" : "studentA",
// "email-address" : "<EMAIL>"
// }
// </code>
// </pre>
// <pre>
// <code>
// <b>studentB.json</b>
//
// {
// "name" : "studentB",
// "email-address" : "<EMAIL>"
// }
// </code>
// </pre>
// <pre>
// <code>
// <b>student_schema.json</b>
//
// {
// "$schema": "http://json-schema.org/draft/2019-09/schema#",
// "id": "$NDISCHEMAPATH\/student_schema.json",
// "title": "student",
// "type": "object",
// "properties": {
// "name": {
// "type": "string"
// },
// "email-address": {
// "type": "string",
// "format" : "brandeis-email"
// }
// }
// }
// </code>
// </pre>
//the file content of studentA.json
String studentA = "{\n" +
"\"name\" : \"studentA\",\n" +
" \"email-address\" : \"<EMAIL>\"\n" +
"}";
//the file content of studentB.json
String studentB = " {\n" +
" \"name\" : \"studentB\",\n" +
" \"email-address\" : \"<EMAIL>\"\n" +
" }";
//the file content of student_schema.json
String studentSchema = "{\n" +
" \"$schema\": \"http://json-schema.org/draft/2019-09/schema#\",\n" +
" \"id\": \"$NDISCHEMAPATH\\/student_schema.json\",\n" +
" \"title\": \"student\",\n" +
" \"type\": \"object\",\n" +
" \"properties\": {\n" +
" \"name\": {\n" +
" \"type\": \"string\"\n" +
" },\n" +
" \"email-address\": {\n" +
" \"type\": \"string\",\n" +
" \"format\" : \"brandeis-email\"\n" +
" }\n" +
" }\n" +
" }";
// initialize our validators
Validator validatorForStudentA = new Validator(studentA, studentSchema);
Validator validatorForStudentB = new Validator(studentB, studentSchema);
// This time, we need to add the BrandeisEmailValidator class we've just written to the Validator object so that our Validator class knows which methods to call when it sees our pre-defined format tag "brandeis-email" while scanning through the schema document. This can be done by calling the Validator's <b>addValidator()</b> method, which returns a new instance of the Validator class with the FormatValidator added to it.
validatorForStudentA = validatorForStudentA.addValidator(new BrandeisEmailValidator());
validatorForStudentB = validatorForStudentB.addValidator(new BrandeisEmailValidator());
// Now let's check if our Validator is able to tell if our student has a Brandeis email address
validatorForStudentA.getReport()
validatorForStudentB.getReport()
// Our validator knows if a student has a valid Brandeis email address. This completes the tutorial. Now you know how to use com.ndi.Validator to validate a JSON document and how to write your own JSON format keyword. Remember that all the classes in the com.ndi package are automatically added to the MATLAB javapath after ndi_Init is run, so you have access to all the classes in the package and their methods. Simply import the package and call them from MATLAB if you ever need them.
| java/ndi-validator-java/ndi-validator-tutorials/JSON-Validation-Java-Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # T81-558: Applications of Deep Neural Networks
# **Class 9: Regularization: L1, L2 and Dropout**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Regularization
#
# Regularization is a technique that reduces overfitting, which occurs when neural networks attempt to memorize training data, rather than learn from it. Humans are capable of overfitting as well. Before we examine the ways that a machine accidentally overfits, we will first explore how humans can suffer from it.
#
# Human programmers often take certification exams to show their competence in a given programming language. To help prepare for these exams, the test makers often make practice exams available. Consider a programmer who enters a loop of taking the practice exam, studying more, and then taking the practice exam again. At some point, the programmer has memorized much of the practice exam, rather than learning the techniques necessary to figure out the individual questions. The programmer has now overfit to the practice exam. When this programmer takes the real exam, his actual score will likely be lower than what he earned on the practice exam.
#
# A computer can overfit as well. Although a neural network received a high score on its training data, this result does not mean that the same neural network will score high on data that was not inside the training set. Regularization is one of the techniques that can prevent overfitting. A number of different regularization techniques exist. Most work by analyzing and potentially modifying the weights of a neural network as it trains.
#
# # Helpful Functions
#
# These are exactly the same feature vector encoding functions from [Class 3](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class3_training.ipynb). They must be defined for this class as well. For more information, refer to class 3.
# +
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shutil
import os
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = "{}-{}".format(name, tv)
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32)
else:
# Regression
return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart.
def chart_regression(pred,y,sort=True):
t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()})
if sort:
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# +
# Simple function to evaluate the coefficients of a regression
# %matplotlib inline
from IPython.display import display, HTML
def report_coef(names,coef,intercept):
r = pd.DataFrame( { 'coef': coef, 'positive': coef>=0 }, index = names )
r = r.sort_values(by=['coef'])
display(r)
print("Intercept: {}".format(intercept))
r['coef'].plot(kind='barh', color=r['positive'].map({True: 'b', False: 'r'}))
# -
# # Setup Data
#
# We are going to look at linear regression to see how L1 and L2 regularization work. The following code sets up the auto-mpg data for this purpose.
# +
from sklearn.linear_model import LassoCV
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
# Do not need zscore standardization for linear regression
#encode_numeric_zscore(df, 'horsepower')
#encode_numeric_zscore(df, 'weight')
#encode_numeric_zscore(df, 'cylinders')
#encode_numeric_zscore(df, 'displacement')
#encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
# -
# # Linear Regression
#
# To understand L1/L2 regularization, it is good to start with linear regression. L1/L2 were first introduced for [linear regression](https://en.wikipedia.org/wiki/Linear_regression). They can also be used for neural networks. To fully understand L1/L2 we will begin with how they are used with linear regression.
#
# The following code uses linear regression to fit the auto-mpg data set. The RMSE reported will not be as good as a neural network.
# +
import sklearn
from sklearn.linear_model import LinearRegression
# Create linear regression
regressor = LinearRegression()
# Fit/train linear regression
regressor.fit(x_train,y_train)
# Predict
pred = regressor.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
names = list(df.columns.values)
names.remove("mpg")
report_coef(
names,
regressor.coef_[0,:],
regressor.intercept_)
# -
# # L1 (Lasso) Regularization
#
# L1 regularization, also called LASSO (Least Absolute Shrinkage and Selection Operator), should be used to create sparsity in the neural network. In other words, the L1 algorithm will push many weight connections to near 0. When a weight is near 0, the program drops it from the network. Dropping weighted connections will create a sparse neural network.
#
# Feature selection is a useful byproduct of sparse neural networks. Features are the values that the training set provides to the input neurons. Once all the weights of an input neuron reach 0, the neural network training determines that the feature is unnecessary. If your data set has a large number of input features that may not be needed, L1 regularization can help the neural network detect and ignore unnecessary features.
#
# L1 is implemented by adding the following error to the objective to minimize:
#
# $$ E_1 = \alpha \sum_w{ |w| } $$
#
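# The reason the L1 penalty produces exact zeros can be seen in the soft-thresholding
# (proximal) operator that coordinate-descent solvers, such as scikit-learn's Lasso, apply
# to each weight. The following sketch (plain NumPy, an illustration rather than the
# library's actual internals) shows that weights whose magnitude falls below the
# threshold are set exactly to 0, while larger weights are merely shrunk:

```python
import numpy as np

def soft_threshold(w, t):
    # proximal operator of t*|w|: shrink each weight toward zero, clipping at zero
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([0.05, -0.3, 2.0, -0.02])
shrunk = soft_threshold(w, 0.1)
print(shrunk)  # the small weights 0.05 and -0.02 become exactly 0.0
```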
# The following code demonstrates lasso regression. Notice the effect of the coefficients compared to the previous section that used linear regression.
# +
import sklearn
from sklearn.linear_model import Lasso
# Create linear regression
regressor = Lasso(random_state=0,alpha=0.1)
# Fit/train LASSO
regressor.fit(x_train,y_train)
# Predict
pred = regressor.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
names = list(df.columns.values)
names.remove("mpg")
report_coef(
names,
regressor.coef_,
regressor.intercept_)
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LassoCV
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
lasso = Lasso(random_state=42)
alphas = np.logspace(-8, 8, 10)
scores = list()
scores_std = list()
n_folds = 3
for alpha in alphas:
lasso.alpha = alpha
this_scores = cross_val_score(lasso, x, y, cv=n_folds, n_jobs=1)
scores.append(np.mean(this_scores))
scores_std.append(np.std(this_scores))
scores, scores_std = np.array(scores), np.array(scores_std)
plt.figure().set_size_inches(8, 6)
plt.semilogx(alphas, scores)
# plot error lines showing +/- std. errors of the scores
std_error = scores_std / np.sqrt(n_folds)
plt.semilogx(alphas, scores + std_error, 'b--')
plt.semilogx(alphas, scores - std_error, 'b--')
# alpha=0.2 controls the translucency of the fill color
plt.fill_between(alphas, scores + std_error, scores - std_error, alpha=0.2)
plt.ylabel('CV score +/- std error')
plt.xlabel('alpha')
plt.axhline(np.max(scores), linestyle='--', color='.5')
plt.xlim([alphas[0], alphas[-1]])
# -
# # L2 (Ridge) Regularization
#
# You should use Tikhonov/Ridge/L2 regularization when you are less concerned about creating a sparse network and are more concerned about low weight values. The lower weight values will typically lead to less overfitting.
#
# $$ E_2 = \alpha \sum_w{ w^2 } $$
#
# Like the L1 algorithm, the $\alpha$ value determines how important the L2 objective is compared to the neural network’s error. Typical L2 values are below 0.1 (10%). The main calculation performed by L2 is the summing of the squares of all of the weights. The bias values are not summed.
#
# The following code uses L2 with linear regression (Ridge regression):
# +
import sklearn
from sklearn.linear_model import Ridge
# Create linear regression
regressor = Ridge(alpha=1)
# Fit/train Ridge
regressor.fit(x_train,y_train)
# Predict
pred = regressor.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
names = list(df.columns.values)
names.remove("mpg")
report_coef(
names,
regressor.coef_[0,:],
regressor.intercept_)
# -
# # ElasticNet Regularization
#
# ElasticNet combines the L1 and L2 penalties in a single model; the l1_ratio parameter controls the mix between the two.
# +
import sklearn
from sklearn.linear_model import ElasticNet
# Create linear regression
regressor = ElasticNet(alpha=0.1, l1_ratio=0.1)
# Fit/train ElasticNet
regressor.fit(x_train,y_train)
# Predict
pred = regressor.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
names = list(df.columns.values)
names.remove("mpg")
report_coef(
names,
regressor.coef_,
regressor.intercept_)
# -
# # TensorFlow and L1/L2
#
# L1 and L2 regularization are two common regularization techniques that can reduce the effects of overfitting (Ng, 2004). Both of these algorithms can either work with an objective function or as a part of the backpropagation algorithm. In both cases the regularization algorithm is attached to the training algorithm by adding an additional objective.
#
# Both of these algorithms work by adding a weight penalty to the neural network training. This penalty encourages the neural network to keep the weights to small values. Both L1 and L2 calculate this penalty differently. For gradient-descent-based algorithms, such as backpropagation, you can add this penalty calculation to the calculated gradients. For objective-function-based training, such as simulated annealing, the penalty is negatively combined with the objective score.
#
# Both L1 and L2 work differently in the way that they penalize the size of a weight. L1 will force the weights into a pattern similar to a Laplace distribution; L2 will force the weights into a pattern similar to a Gaussian distribution, as demonstrated in the following:
#
# 
#
# As you can see, the L1 algorithm is more tolerant of weights further from 0, whereas the L2 algorithm is less tolerant. We will highlight other important differences between L1 and L2 in the following sections. You also need to note that both L1 and L2 count their penalties based only on weights; they do not count penalties on bias values.
#
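# A quick numeric illustration of this difference (a sketch with made-up weights, not
# part of the original example code): doubling one large weight increases the L1 penalty
# linearly but the L2 penalty quadratically, which is why L2 objects much more strongly
# to large weights.

```python
import numpy as np

w_small = np.array([0.5, -2.0, 0.1])
w_large = np.array([0.5, -4.0, 0.1])  # same vector with the large weight doubled

def l1_penalty(w):
    # the L1 summation term (alpha omitted)
    return np.sum(np.abs(w))

def l2_penalty(w):
    # the L2 summation term (alpha omitted)
    return np.sum(w ** 2)

print(l1_penalty(w_large) - l1_penalty(w_small))  # grows by 2.0
print(l2_penalty(w_large) - l2_penalty(w_small))  # grows by 12.0
```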
# TensorFlow allows [L1/L2 to be directly added to your network](http://tensorlayer.readthedocs.io/en/stable/modules/cost.html).
# +
########################################
# TensorFlow with L1/L2 for Regression
########################################
# %matplotlib inline
from matplotlib.pyplot import figure, show
import tensorflow as tf
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from keras.callbacks import EarlyStopping
from keras.layers import Dense, Dropout
from keras import regularizers
from keras.models import Sequential
path = "./data/"
# Set the desired TensorFlow output level for this example
tf.logging.set_verbosity(tf.logging.ERROR)
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
df.drop('name',1,inplace=True)
missing_median(df, 'horsepower')
x,y = to_xy(df,"mpg")
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(25, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, input_dim=64,
kernel_regularizer=regularizers.l2(0.01),
activity_regularizer=regularizers.l1(0.01),activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=0,epochs=1000)
pred = model.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
# -
# # Dropout Regularization
#
# <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2014). [Dropout: a simple way to prevent neural networks from overfitting.](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) *Journal of Machine Learning Research*, 15(1), 1929-1958.
#
# Most neural network frameworks implement dropout as a separate layer. Dropout layers function like regular, densely connected neural network layers; the only difference is that a dropout layer periodically drops some of its neurons during training. You can use dropout layers in regular feedforward neural networks, and they can also appear as layers in convolutional LeNet-5 networks like the ones we studied in class 8.
#
# The usual hyper-parameters for a dropout layer are the following:
# * Neuron Count
# * Activation Function
# * Dropout Probability
#
# The neuron count and activation function hyper-parameters work exactly the same way as their corresponding parameters in the dense layer type mentioned previously. The neuron count simply specifies the number of neurons in the dropout layer. The dropout probability indicates the likelihood of a neuron dropping out during the training iteration. Just as it does for a dense layer, the program specifies an activation function for the dropout layer.
#
# 
#
# A certain percentage of neurons will be masked during each training step. All neurons return after training is complete. To make use of dropout in TF Learn, use the **dropout** parameter of either **DNNClassifier** or **DNNRegressor**. This is the fraction of neurons to be dropped. Typically this is a low value, such as 0.1.
#
# Animation that shows how [dropout works](https://yusugomori.com/projects/deep-learning/dropout-relu)
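The masking described above can be sketched in a few lines of NumPy. This illustrates "inverted" dropout, the common variant in which surviving activations are scaled up by 1/(1-p) during training so that no rescaling is needed at inference time; it is background material, not code taken from this notebook.

```python
import numpy as np

def dropout_forward(activations, p=0.1, training=True, rng=None):
    """Inverted dropout: zero a fraction p of units, scale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return activations  # all neurons return once training is complete
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(activations.shape) >= p  # keep each unit with probability 1-p
    return activations * mask / (1.0 - p)

a = np.ones(10000)
out = dropout_forward(a, p=0.1)
print(out.mean())  # close to 1.0 on average, because survivors are scaled up
```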
# +
############################################
# TensorFlow with Dropout for Regression
############################################
# %matplotlib inline
from matplotlib.pyplot import figure, show
import tensorflow as tf
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from keras.callbacks import EarlyStopping
from keras.layers import Dense, Dropout
from keras import regularizers
from keras.models import Sequential
path = "./data/"
# Set the desired TensorFlow output level for this example
tf.logging.set_verbosity(tf.logging.ERROR)
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
df.drop('name', axis=1, inplace=True)
# missing_median and to_xy are helper functions from the course's utility code:
# missing_median fills a column's NaNs with its median, and to_xy splits the
# DataFrame into (features, target) NumPy arrays.
missing_median(df, 'horsepower')
x,y = to_xy(df,"mpg")
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(25, activation='relu'))  # input_dim is only needed on the first layer
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=0,epochs=1000)  # fit on the training split only, so the test set stays held out
pred = model.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
# -
| t81_558_class9_regularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Applying linear regression in practice: predicting car prices from the factors that drive automobile pricing.
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.formula.api as smf
import statsmodels.api as sm
cars = pd.read_csv('https://stepik.org/media/attachments/lesson/387691/cars.csv')
cars['company'] = cars.CarName.str.split(' ').apply(lambda x: x[0])
cars = cars.drop(['CarName', 'car_ID'], axis=1)
cars.head()
cars.company.unique()
cars.horsepower.hist()
cars.company.str.lower().replace({'porcshce': 'porsche', 'toyouta' : 'toyota', 'vokswagen' : 'volkswagen', 'vw' : 'volkswagen', 'maxda' : 'mazda'}).nunique()
cars.columns
new_df = cars.drop(['symboling', 'doornumber', 'enginelocation', 'fuelsystem', 'stroke', 'compressionratio', 'carheight', 'peakrpm', 'citympg', 'highwaympg'], axis=1)
np.round(new_df.corr(method='pearson'), 2)
new_df.dtypes
new_df.shape
df_dummy = pd.get_dummies(data = cars[['fueltype', 'aspiration', 'carbody', 'drivewheel', 'enginetype', 'cylindernumber', 'company']], drop_first= True)
df_dummy
# The dummy-encoded categorical columns are dropped from new_df inside the concat below
df_dummy.shape
data_frame = pd.concat([df_dummy, new_df.drop(['fueltype', 'aspiration', 'carbody', 'drivewheel', 'enginetype', 'cylindernumber', 'company'], axis=1)], axis=1)
data_frame.shape
model = smf.ols('price ~ horsepower', data=data_frame).fit()
print(model.summary())
np.round(0.653 * 100, 0)  # R-squared of the single-predictor model (from the summary above), as a percentage
data_frame.columns
X = data_frame.drop('price', axis=1)
X.head()
Y = data_frame['price']
X = sm.add_constant(X)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary()) # all predictors
X_wo = data_frame.drop(['price', 'company_alfa-romero', 'company_audi', 'company_bmw', 'company_buick',
'company_chevrolet', 'company_dodge', 'company_honda', 'company_isuzu',
'company_jaguar', 'company_maxda', 'company_mazda', 'company_mercury',
'company_mitsubishi', 'company_nissan', 'company_peugeot',
'company_plymouth', 'company_porcshce', 'company_porsche',
'company_renault', 'company_saab', 'company_subaru', 'company_toyota',
'company_toyouta', 'company_vokswagen', 'company_volkswagen',
'company_volvo', 'company_vw'], axis=1)
Y_wo = data_frame['price']
X_wo = sm.add_constant(X_wo)
model = sm.OLS(Y_wo, X_wo)
result = model.fit()
print(result.summary())
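The `sm.OLS` fits above solve the normal equations under the hood. As a sanity check, the closed-form solution β = (XᵀX)⁻¹Xᵀy can be written out directly; the sketch below uses tiny synthetic data rather than the `cars` dataset.

```python
import numpy as np

# Synthetic data: price = 3 + 2 * feature, with no noise
x = np.arange(10, dtype=float)
X = np.column_stack([np.ones_like(x), x])  # prepend a constant, like sm.add_constant
y = 3.0 + 2.0 * x

# Closed-form OLS; lstsq is the numerically stable way to solve (X'X) beta = X'y
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # -> approximately [3. 2.]
```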
| cars_forecasting_price.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Chopsticks!
#
# A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
#
# ### An investigation for determining the optimum length of chopsticks.
# [Link to Abstract and Paper](http://www.ncbi.nlm.nih.gov/pubmed/15676839)
# *the abstract below was adapted from the link*
#
# Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by [ergonomists](https://www.google.com/search?q=ergonomists). Two laboratory studies were conducted in this research, using a [randomised complete block design](http://dawg.utk.edu/glossary/whatis_rcbd.htm), to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
#
# ### For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
# Download the [data set for the adults](https://www.udacity.com/api/nodes/4576183932/supplemental_media/chopstick-effectivenesscsv/download), then answer the following questions based on the abstract and the data set.
#
# **If you double click on this cell**, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using [Markdown](http://daringfireball.net/projects/markdown/syntax), which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
# #### 1. What is the independent variable in the experiment?
# You can either double click on this cell to add your answer in this cell, or use the plus sign in the toolbar (Insert cell below) to add your answer in a new cell.
#
# #### 2. What is the dependent variable in the experiment?
#
#
# #### 3. How is the dependent variable operationally defined?
#
#
# #### 4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
# Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
#
#
# One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
# +
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
# +
path = r'~/Downloads/chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
# -
# Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
dataFrame['Food.Pinching.Efficiency'].mean()
# This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
# +
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
# -
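Behind the scenes, `groupby(...).mean()` is just "collect values per key, then average". A minimal pure-Python equivalent, using made-up lengths and efficiencies rather than the study data:

```python
from collections import defaultdict

# (chopstick length in mm, food pinching efficiency) -- illustrative values only
rows = [(180, 24.0), (180, 26.0), (240, 28.0), (240, 30.0), (330, 22.0)]

groups = defaultdict(list)
for length, efficiency in rows:
    groups[length].append(efficiency)  # collect every efficiency per length

means = {length: sum(vals) / len(vals) for length, vals in groups.items()}
print(means)  # -> {180: 25.0, 240: 29.0, 330: 22.0}
```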
# #### 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
#
#
# +
# Causes plots to display within the notebook rather than in a new window
# %matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
# -
# #### 6. Based on the scatterplot created from the code above, interpret the relationship you see. What do you notice?
#
#
# ### In the abstract the researchers stated that their results showed food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 mm long were optimal for adults.
#
# #### 7a. Based on the data you have analyzed, do you agree with the claim?
#
# #### 7b. Why?
#
| P1-Statistics/Data_Analyst_ND_Project0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Default Final Project
#
# Author: <NAME>
#
# The structure of the project is divided into six parts:
# 1. EDA to learn interesting structures about the data
# 2. Defining environments and how to interact with the environment
# 3. Defining policy evaluation methods
# 4. Defining baselines
# 5. Defining contextual UCB
# 6. Defining supervised methods (SVMs, neural networks)
#
# ## EDA
#
# Here we will first run some exploratory analysis on the data
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import math
import scipy
import scipy.stats as st
from multiprocessing import Pool
from tqdm import tqdm, trange
df = pd.read_csv('data/warfarin.csv', dtype=str)
df = df.iloc[:5700, :63]
df = df[~df['Therapeutic Dose of Warfarin'].isna()]
df['Height (cm)'] = df['Height (cm)'].astype(float)
df['Height (cm)'] = df['Height (cm)'].fillna(df['Height (cm)'].mean())
df['Weight (kg)'] = df['Weight (kg)'].astype(float)
df['Weight (kg)'] = df['Weight (kg)'].fillna(df['Weight (kg)'].mean())
for column in df.columns:
if column not in ['Height (cm)', 'Weight (kg)']:
df[column] = df[column].fillna('Unknown')
df['Therapeutic Dose of Warfarin'] = df['Therapeutic Dose of Warfarin'].astype(float)
df['Therapeutic Dose of Warfarin (categorized)'] = pd.cut(df['Therapeutic Dose of Warfarin'],
[0, 20.99999, 49, max(df['Therapeutic Dose of Warfarin'])],
labels=['low', 'medium', 'high'])
df = df.drop(columns=['INR on Reported Therapeutic Dose of Warfarin', 'Subject Reached Stable Dose of Warfarin'])
df['Age decades'] = 0
for cat in df['Age'].unique():
decade = 0
if cat == '10 - 19':
decade = 1
elif cat == '20 - 29':
decade = 2
elif cat == '30 - 39':
decade = 3
elif cat == '40 - 49':
decade = 4
elif cat == '50 - 59':
decade = 5
elif cat == '60 - 69':
decade = 6
elif cat == '70 - 79':
decade = 7
elif cat == '80 - 89':
decade = 8
elif cat == '90+':
decade = 9
else:
continue
df.loc[df['Age'] == cat, 'Age decades'] = decade
df.loc[df['Age'] == 'Unknown', 'Age decades'] = 7  # mode-imputed decade (NaN ages were filled with 'Unknown' above)
plt.figure(figsize=(6, 6))
means = df.groupby('Age')['Therapeutic Dose of Warfarin'].mean()
# NaN ages were filled with 'Unknown' above, so they already form their own group
x = list(means.index)
y = list(means.values.flatten())
plt.bar(x, y)
plt.title('Average Therapeutic Dose of Warfarin by Age')
plt.xlabel('Age group')
plt.ylabel('Average Therapeutic Dose of Warfarin')
plt.xticks(rotation=90)
plt.show()
# ## Define features and Encoding
#
# Here we select the features that we want and define a way of encoding them. These will be used throughout the rest of the notebook
uniques = {column : list(df[column].unique()) for column in df.columns}
def encode(state, lst_features):
state = state[lst_features]
vec = []
for index in state.index:
if index in ['Height (cm)', 'Weight (kg)', 'Age decades']:
vec += [state[index]]
else:
possible_values = uniques[index]
vec += [1 if possible_value == state[index] else 0 for possible_value in possible_values]
return vec
# +
# df.columns
# -
len(encode(df.iloc[0], ['Race', 'Age decades',
'Height (cm)', 'Weight (kg)', 'Carbamazepine (Tegretol)',
'Amiodarone (Cordarone)','Phenytoin (Dilantin)', 'Rifampin or Rifampicin']))
len(encode(df.iloc[0], ['Gender', 'Race', 'Ethnicity', 'Age decades',
'Height (cm)', 'Weight (kg)', 'Indication for Warfarin Treatment', 'Diabetes',
'Congestive Heart Failure and/or Cardiomyopathy', 'Valve Replacement',
'Aspirin', 'Acetaminophen or Paracetamol (Tylenol)',
'Was Dose of Acetaminophen or Paracetamol (Tylenol) >1300mg/day',
'Simvastatin (Zocor)', 'Atorvastatin (Lipitor)', 'Fluvastatin (Lescol)',
'Lovastatin (Mevacor)', 'Pravastatin (Pravachol)',
'Rosuvastatin (Crestor)', 'Cerivastatin (Baycol)',
'Amiodarone (Cordarone)', 'Carbamazepine (Tegretol)',
'Phenytoin (Dilantin)', 'Rifampin or Rifampicin',
'Sulfonamide Antibiotics', 'Macrolide Antibiotics',
'Anti-fungal Azoles', 'Herbal Medications, Vitamins, Supplements',
'Target INR', 'Estimated Target INR Range Based on Indication', 'Current Smoker',]))
# ## Calculate true parameters
#
# To find the "gold" parameters for each arm: since we constructed the problem as a linear bandit, we fit a linear regression predicting the reward (0 for a correct dosage, -1 for a wrong dosage). We fit a separate beta for each arm independently.
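To make the "one beta per arm" idea concrete, here is a hedged sketch on synthetic data: random features and continuous targets stand in for the Warfarin features and the 0/−1 rewards, so only the shapes mirror the real setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_arms = 200, 8, 3
X = rng.normal(size=(n, d))
true_betas = rng.normal(size=(n_arms, d))

# One independent least-squares fit per arm on that arm's (slightly noisy) rewards
gold_betas = []
for arm in range(n_arms):
    y_arm = X @ true_betas[arm] + 0.01 * rng.normal(size=n)
    beta, *_ = np.linalg.lstsq(X, y_arm, rcond=None)
    gold_betas.append(beta)
gold_betas = np.stack(gold_betas)  # shape (n_arms, d): one beta per arm
```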
feature_names = ['Race', 'Age decades',
'Height (cm)', 'Weight (kg)', 'Carbamazepine (Tegretol)',
'Amiodarone (Cordarone)','Phenytoin (Dilantin)', 'Rifampin or Rifampicin']
# feature_names = ['Gender', 'Race', 'Ethnicity', 'Age decades',
# 'Height (cm)', 'Weight (kg)', 'Indication for Warfarin Treatment', 'Diabetes',
# 'Congestive Heart Failure and/or Cardiomyopathy', 'Valve Replacement',
# 'Aspirin', 'Acetaminophen or Paracetamol (Tylenol)',
# 'Was Dose of Acetaminophen or Paracetamol (Tylenol) >1300mg/day',
# 'Simvastatin (Zocor)', 'Atorvastatin (Lipitor)', 'Fluvastatin (Lescol)',
# 'Lovastatin (Mevacor)', 'Pravastatin (Pravachol)',
# 'Rosuvastatin (Crestor)', 'Cerivastatin (Baycol)',
# 'Amiodarone (Cordarone)', 'Carbamazepine (Tegretol)',
# 'Phenytoin (Dilantin)', 'Rifampin or Rifampicin',
# 'Sulfonamide Antibiotics', 'Macrolide Antibiotics',
# 'Anti-fungal Azoles', 'Herbal Medications, Vitamins, Supplements',
# 'Target INR', 'Estimated Target INR Range Based on Indication', 'Current Smoker']
X = df.apply(lambda row: encode(row, feature_names), axis=1)
X = np.array(X.to_list())
y_low = (df['Therapeutic Dose of Warfarin (categorized)'] == 'low').to_numpy() - 1
y_medium = (df['Therapeutic Dose of Warfarin (categorized)'] == 'medium').to_numpy() - 1
y_high = (df['Therapeutic Dose of Warfarin (categorized)'] == 'high').to_numpy() - 1
from sklearn.linear_model import LinearRegression
linear_low = LinearRegression(fit_intercept=False).fit(X, y_low)
linear_medium = LinearRegression(fit_intercept=False).fit(X, y_medium)
linear_high = LinearRegression(fit_intercept=False).fit(X, y_high)
low_mse = np.mean(np.power(X.dot(linear_low.coef_.T) - y_low, 2))
medium_mse = np.mean(np.power(X.dot(linear_medium.coef_.T) - y_medium, 2))
high_mse = np.mean(np.power(X.dot(linear_high.coef_.T) - y_high, 2))
print(f'low MSE is {low_mse}, medium MSE is {medium_mse}, high MSE is {high_mse}')
# ## Regret
#
# Here we calculate regret. Since the regret computation is independent of the policy used, we can find the total empirical regret using the "gold" betas we found above together with the true and chosen arm for each round.
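As a tiny numeric check of the definition (made-up betas, not the fitted ones): the regret for a round is the best achievable linear reward over all arms minus the linear reward of the arm actually chosen.

```python
import numpy as np

x = np.array([1.0, 2.0])
betas = {'low': np.array([0.5, 0.0]),
         'medium': np.array([0.0, 0.5]),
         'high': np.array([-0.5, 0.0])}

linear_rewards = {arm: float(x @ beta) for arm, beta in betas.items()}
best = max(linear_rewards.values())   # best arm here is 'medium' with reward 1.0
chosen = linear_rewards['low']        # suppose the policy picked 'low' (reward 0.5)
print(best - chosen)  # -> 0.5 regret for this round
```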
# +
def get_coef(action, low_beta, medium_beta, high_beta):
if action == 'low':
return low_beta
if action == 'medium':
return medium_beta
if action == 'high':
return high_beta
def regret(state, action, low_beta, medium_beta, high_beta):
x = np.array(encode(state, feature_names)).reshape(1, -1)
best_linear_reward = np.max([x.dot(beta.T)[0] for beta in [low_beta, medium_beta, high_beta]])
coef = get_coef(action, low_beta, medium_beta, high_beta)
regret = best_linear_reward - x.dot(coef.T)
return regret[0]
print(regret(df.iloc[0], 'high', linear_low.coef_, linear_medium.coef_, linear_high.coef_))
def batch_regret(states, actions, low_beta, medium_beta, high_beta):
all_actions = ['low', 'medium', 'high']
    X = states.apply(lambda row: encode(row, feature_names), axis=1)
X = np.array(X.to_list())
betas = np.hstack([low_beta.reshape(-1, 1), medium_beta.reshape(-1, 1), high_beta.reshape(-1, 1)])
linear_rewards = X.dot(betas)
actions_numeric = [all_actions.index(action) for _, action in enumerate(actions)]
regrets = np.max(linear_rewards, 1) - linear_rewards[list(range(linear_rewards.shape[0])), actions_numeric]
return regrets
print(batch_regret(df.iloc[0:3], ['high', 'high', 'high'],
linear_low.coef_, linear_medium.coef_, linear_high.coef_ ))
# -
def simulate(df, policy, linear_low=linear_low, linear_medium=linear_medium, linear_high=linear_high, bar=True):
permuted_df = df.sample(frac=1)
states = permuted_df.drop(columns=['PharmGKB Subject ID', 'Therapeutic Dose of Warfarin', 'Therapeutic Dose of Warfarin (categorized)'])
labels = permuted_df['Therapeutic Dose of Warfarin (categorized)']
total_reward = 0
actions = []
rewards = []
if bar:
t = trange(len(states.index))
else:
t = range(len(states.index))
for i in t:
state = states.iloc[i]
label = labels.iloc[i]
action = policy.get_action(state)
reward = 0 if action == label else -1
policy.update_policy(state, action, reward, label)
total_reward += reward
actions += [action]
rewards += [reward]
if bar:
t.set_postfix(total_reward = total_reward)
regrets = batch_regret(states, actions, linear_low.coef_, linear_medium.coef_, linear_high.coef_)
return actions, rewards, regrets
# ## Baselines
#
# Here we define the baselines
# +
class FixedDosePolicy(object):
def __init__(self, dose):
self.dose = dose
def get_action(self, state):
return self.dose
def update_policy(self, state, action, reward, true_label):
return
fixed_actions, fixed_rewards, fixed_regrets = simulate(df, FixedDosePolicy('medium'))
# -
sum(fixed_regrets)
-np.mean(fixed_rewards)
class ClinicalDosingAlgorithm(object):
def get_action(self, state):
dose = 4.0376
dose += - 0.2546 * state['Age decades']
dose += 0.0118 * state['Height (cm)']
dose += 0.0134 * state['Weight (kg)']
if state['Race'] == 'Asian':
dose += - 0.6752
if state['Race'] == 'Black or African American':
dose += 0.4060
if state['Race'] == 'Unknown':
dose += 0.0443
if state['Carbamazepine (Tegretol)'] == '1' or state['Phenytoin (Dilantin)'] == '1'\
or state['Rifampin or Rifampicin'] == '1':
dose += 1.2799
if state['Amiodarone (Cordarone)'] == '1':
dose += -0.5695
dose = dose ** 2
if dose < 21:
return 'low'
if dose < 49:
return 'medium'
return 'high'
def update_policy(self, state, action, reward, true_label):
return
clinical_actions, clinical_rewards, clinical_regrets = simulate(df, ClinicalDosingAlgorithm())
np.sum(clinical_regrets)
-np.mean(clinical_rewards)
# ## Defining Linear UCB Policy
#
# Here we will use disjoint linear UCB, since its assumption fits this problem well: there is no additional context per action; rather, every arm shares the exact same context (namely the patient features).
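Concretely, each arm a keeps A_a = I + Σ x xᵀ and b_a = Σ r·x, and is scored as p_a = xᵀθ_a + α·sqrt(xᵀA_a⁻¹x) with θ_a = A_a⁻¹b_a. A minimal single-arm sketch of that score (the same formula the class below implements):

```python
import numpy as np

def ucb_score(x, A, b, alpha):
    """LinUCB score for one arm: point estimate plus exploration bonus."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b                       # ridge-style estimate for this arm
    bonus = alpha * np.sqrt(x @ A_inv @ x)  # wide when the arm is unexplored
    return float(x @ theta + bonus)

# A fresh arm (A = I, b = 0) has no estimate yet, only the exploration bonus
x = np.array([3.0, 4.0])
print(ucb_score(x, np.eye(2), np.zeros(2), alpha=1.0))  # -> 5.0 (= alpha * ||x||)
```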
# +
class LinUCBDisjoint(object):
def __init__(self, alpha, feature_names, actions, d):
self.alpha = alpha
self.feature_names = feature_names
self.actions = actions
self.As = [np.eye(d) for _ in range(len(actions))]
self.bs = [np.zeros((d, 1)) for _ in range(len(actions))]
def featurize(self, state):
return np.array(encode(state, self.feature_names)).reshape(-1, 1)
def get_action(self, state):
ps = []
x = self.featurize(state)
        for a in range(len(self.actions)):
A_inv = np.linalg.inv(self.As[a])
theta = A_inv.dot(self.bs[a])
ps += [x.T.dot(theta)[0, 0] + self.alpha * np.sqrt(x.T.dot(A_inv).dot(x))[0, 0]]
a_t = np.argmax(ps)
return self.actions[a_t]
def update_policy(self, state, action, reward, true_label):
x = self.featurize(state)
self.As[self.actions.index(action)] += x.dot(x.T)
self.bs[self.actions.index(action)] += reward * x
policy = LinUCBDisjoint(np.sqrt(np.log(2 / 0.1)/2),
feature_names, ['low', 'medium', 'high'], len(encode(df.iloc[0], feature_names)))
rl_actions, rl_rewards, rl_regret = simulate(df, policy)
# -
print(f'disjoint linear UCB is able to achieve {rl_regret.sum()} regret')
# ## Hyperparameter search
#
alphas = [np.sqrt(np.log(2 / 0.1)/2) * i * 0.25 for i in range(15)]
all_results = {}
for alpha in alphas:
d = len(encode(df.iloc[0], feature_names))
results = []
for i in range(20):
print(f'running experiment for {alpha} iteration {i}')
policy = LinUCBDisjoint(alpha, feature_names, ['low', 'medium', 'high'], d)
results += [simulate(df, policy, bar=False)]
all_results[alpha] = results
# +
# import pickle
# with open('all_results.pk', 'wb') as f:
# pickle.dump(all_results, f)
# -
import pickle
with open('all_results.pk', 'rb') as f:
all_results = pickle.load(f)
alphas = sorted(list(all_results.keys()))
incorrect_frac = [np.mean([-np.mean(rewards) for _, rewards, _ in all_results[alpha]]) for alpha in alphas]
incorrect_frac_std = [np.std([-np.mean(rewards) for _, rewards, _ in all_results[alpha]]) for alpha in alphas]
plt.figure(figsize=(6, 6))
plt.errorbar(alphas, incorrect_frac, yerr=[2*std for std in incorrect_frac_std], markersize=4, capsize=5)
plt.title('Average Fraction of Incorrect doses per alpha')
plt.xlabel('alpha')
plt.ylabel('Average Fraction of Incorrect doses')
plt.xticks(alphas, rotation=90)
plt.savefig('plots/Avg_frac_incorrect_alpha.png')
alphas = sorted(list(all_results.keys()))
regrets = [np.mean([np.sum(regrets) for _, _, regrets in all_results[alpha]]) for alpha in alphas]
regrets_std = [np.std([np.sum(regrets) for _, _, regrets in all_results[alpha]]) for alpha in alphas]
plt.figure(figsize=(6, 6))
plt.errorbar(alphas, regrets, yerr=[2*std for std in regrets_std], markersize=4, capsize=5)
plt.title('Average Total Regret per alpha')
plt.xlabel('alpha')
plt.ylabel('Average Total Regret')
plt.xticks(alphas, rotation=90)
plt.savefig('plots/avg_tot_regret_alpha.png')
alphas = sorted(list(all_results.keys()))[0:6]
plt.figure(figsize=(6, 6))
for alpha in alphas:
alpha_rewards = np.vstack([rewards for _, rewards, _ in all_results[alpha]])
alpha_fracs = (-np.cumsum(alpha_rewards, 1) / np.arange(1, df.shape[0] + 1))
alpha_means = np.mean(alpha_fracs, 0)
alpha_stds = np.std(alpha_fracs, 0)
plt.plot(range(df.shape[0]), alpha_means)
plt.legend(alphas)
plt.xlabel('Number of patients seen')
plt.ylabel('Average fraction of incorrect cases')
plt.title('Average fraction of incorrect cases vs patients seen')
plt.savefig('plots/avg_frac_alphas.png')
alpha_means
alphas[2]
regrets[2]
incorrect_frac[2]
# ## Plotting t-distribution
#
#
import scipy.stats as st
results = []
alpha = 0.6119367076702041
for i in range(20):
print(f'running experiment for {alpha} iteration {i}')
policy = LinUCBDisjoint(alpha, feature_names, ['low', 'medium', 'high'], d)
results += [simulate(df, policy, bar=False)]
reward_means = np.array([-np.mean(rewards) for actions, rewards, regrets in results])
reward_t_interval = np.array([st.t.std(len(rewards) - 1, loc=np.mean(rewards), scale=st.sem(rewards)) \
for actions, rewards, regrets in results])
plt.figure(figsize=(6, 6))
plt.errorbar(range(len(reward_means)), reward_means, yerr=reward_t_interval * 2, fmt='o', capsize=5)
plt.title('Fraction of incorrect dosage per round')
plt.xlabel('Trials')
plt.ylabel('Fraction of incorrect dosage')
plt.xticks(range(20))
plt.savefig('plots/frac_incorrect_dosage_t.png')
regret_means = np.array([np.mean(regrets) for actions, rewards, regrets in results])
regret_t_interval = np.array([st.t.std(len(regrets) - 1, loc=np.mean(regrets), scale=st.sem(regrets)) \
for actions, rewards, regrets in results])
plt.figure(figsize=(6, 6))
plt.errorbar(range(len(regret_means)), regret_means, yerr=regret_t_interval * 2, fmt='o', capsize=5)
plt.title('Average Regret per Patient')
plt.xticks(range(20))
plt.xlabel('Trials')
plt.ylabel('Average Regret per Patient')
plt.savefig('plots/avg_regret_per_t.png')
# ## Supervised
#
# A supervised approach allows us to establish an empirical upper bound on the performance of the reinforcement learning algorithms
# ### Theoretical limit of SVM
#
# Here we estimate the best possible performance of an SVM and of logistic regression, trained and evaluated on the full dataset
y = y_low * 0 + (1 + y_medium) * 1 + (1 + y_high) * 2
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(n_jobs=8, max_iter=5000).fit(X, y)
predicted_actions = classifier.predict(X)
reward = -np.sum(predicted_actions != y)
print(f'Logistic Regression could achieve maximum {reward} reward')
np.sum(batch_regret(df, np.array(['low', 'medium', 'high'])[predicted_actions], linear_low.coef_, linear_medium.coef_, linear_high.coef_))
from sklearn.svm import SVC
classifier = SVC().fit(X, y)
predicted_actions = classifier.predict(X)
reward = -np.sum(predicted_actions != y)
print(f'SVC could achieve maximum {reward} reward')
np.sum(batch_regret(df, np.array(['low', 'medium', 'high'])[predicted_actions], linear_low.coef_, linear_medium.coef_, linear_high.coef_))
from sklearn.linear_model import LogisticRegression
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings('ignore', category=ConvergenceWarning)
# +
class Supervised(object):
def __init__(self, batch_size, feature_names, actions):
self.classifier = sklearn.linear_model.LogisticRegression(n_jobs=8, max_iter=5000)
self.initialized = False
self.actions = actions
self.batch_size = batch_size
self.feature_names = feature_names
self.data = []
self.labels = []
def featurize(self, state):
return np.array(encode(state, self.feature_names))
def get_action(self, state):
s = self.featurize(state)
if self.initialized:
prediction = self.classifier.predict(s.reshape(1, -1))
return self.actions[prediction[0]]
else:
return self.actions[1]
def update_policy(self, state, action, reward, true_label):
s = self.featurize(state)
self.data += [s]
self.labels += [self.actions.index(true_label)]
if len(self.data) % self.batch_size == 0:
self.classifier.fit(np.vstack(self.data), self.labels)
self.initialized = True
supervised_actions, supervised_rewards, supervised_regrets = \
simulate(df, Supervised(50, feature_names, ['low', 'medium', 'high']))
# -
np.sum(supervised_regrets)
-np.mean(supervised_rewards)
# fixed_rewards, clinical_rewards
horizon = len(fixed_rewards)
best_rl_rewards = max([reward for _, runs in all_results.items() for _, reward, _ in runs], key=np.mean)
plt.figure(figsize=(6, 6))
plt.plot(range(horizon), np.cumsum(1 + np.array(fixed_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(clinical_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(best_rl_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(supervised_rewards)))
plt.legend(['Fixed Dose', 'Clinical Dosing', 'Best LinUCB', 'Supervised'])
plt.title('Cumulative Number of correct cases')
plt.xlabel('Number of patients seen')
plt.ylabel('Number of correct cases')
plt.savefig('plots/comparison.png')
d = 19
alpha = 0.6119367076702041
supervised_results = [simulate(df, Supervised(50, feature_names, ['low', 'medium', 'high'])) for _ in range(20)]
rl_results = [simulate(df, LinUCBDisjoint(alpha, feature_names, ['low', 'medium', 'high'], d)) for _ in range(20)]
fixed_results = [simulate(df, FixedDosePolicy('medium')) for _ in range(20)]
clinical_results = [simulate(df, ClinicalDosingAlgorithm()) for _ in range(20)]
plt.figure(figsize=(6, 6))
for result in [supervised_results, rl_results, fixed_results, clinical_results]:
rewards = np.vstack([rewards for _, rewards, _ in result])
fracs = (-np.cumsum(rewards, 1) / np.arange(1, df.shape[0] + 1))
means = np.mean(fracs, 0)
# stds = np.std(means, 0)
plt.plot(range(df.shape[0]), means)
plt.legend(['Supervised', 'LinUCB', 'Fixed Dosage', 'Clinical Dosage'])
plt.xlabel('Number of patients seen')
plt.ylabel('Average fraction of incorrect cases')
plt.title('Average fraction of incorrect cases vs patients seen')
plt.savefig('plots/avg_frac_methods.png')  # distinct filename so the per-alpha figure saved earlier is not overwritten
# fixed_rewards, clinical_rewards
horizon = len(fixed_rewards)
best_rl_rewards = max([reward for _, runs in all_results.items() for _, reward, _ in runs], key=np.mean)
plt.figure(figsize=(6, 6))
plt.plot(range(horizon), np.cumsum(1 + np.array(fixed_rewards)) / np.arange(1, 1+len(fixed_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(clinical_rewards))/ np.arange(1, 1+len(fixed_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(best_rl_rewards))/ np.arange(1, 1+len(fixed_rewards)))
plt.plot(range(horizon), np.cumsum(1 + np.array(supervised_rewards))/ np.arange(1, 1+len(fixed_rewards)))
plt.legend(['Fixed Dose', 'Clinical Dosing', 'Best LinUCB', 'Supervised'])
plt.title('Fraction of correct cases')
plt.xlabel('Number of patients seen')
plt.ylabel('Fraction of correct cases')
plt.savefig('plots/comparison_frac.png')
# source: .ipynb_checkpoints/Final Project-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_NG_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="dhydD96df06z"
# **PBMC_NG_2 dataset: Processes the BUG files into files prepared for use in R**
#
# This notebook processes the output from the fastq file processing for this dataset. The data produced here is pre-generated and downloaded by the figure generation code. The purpose of this processing step is to prepare the data for figure generation, by filtering the data and producing downsampled datasets in addition to the original one.
#
# Steps:
# 1. Clone the code repo and download data to process
# 2. Prepare the R environment
# 3. Process the data
# 4. Generate statistics for the dataset
#
# The data used in this processing step is produced by the following notebook:
#
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_NG_2.ipynb
#
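# The "downsampled datasets" mentioned above can be produced by binomially thinning the read
# count of every molecule. A minimal pandas sketch of that idea (illustrative only — the table
# layout and the `downsample_bug` function are assumptions; the actual downsampling is done by
# the R code further below):

```python
import numpy as np
import pandas as pd

def downsample_bug(bug: pd.DataFrame, fraction: float, rng) -> pd.DataFrame:
    """Binomially thin each molecule's read count and drop molecules
    that end up with zero reads."""
    out = bug.copy()
    out["count"] = rng.binomial(out["count"], fraction)
    return out[out["count"] > 0].reset_index(drop=True)

# hypothetical BUG-like table: one row per (barcode, UMI, gene) molecule
bug = pd.DataFrame({
    "barcode": ["AAAC", "AAAC", "TTTG"],
    "gene": ["FGF23", "RPS10", "RPS10"],
    "count": [4, 1, 7],  # number of reads supporting each molecule
})

rng = np.random.default_rng(0)
print(downsample_bug(bug, 0.2, rng))
```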
# + [markdown] id="h8RnKVMXgbzr"
# **1. Clone the code repo and download data to process**
# + id="doUAtCxIyOiI" colab={"base_uri": "https://localhost:8080/"} outputId="dc5a05f8-770f-4a87-9aa8-3f944607e527"
# ![ -d "GRNP_2020" ] && rm -r GRNP_2020
# !git clone https://github.com/pachterlab/GRNP_2020.git
# + id="dUNSQ1qBZb2g" colab={"base_uri": "https://localhost:8080/"} outputId="add9fbd7-1933-4c57-8ac7-e8417b090f3b"
#download BUG data from Zenodo
# !mkdir data
# !cd data && wget https://zenodo.org/record/4661569/files/PBMC_NG_2.zip?download=1 && unzip 'PBMC_NG_2.zip?download=1' && rm 'PBMC_NG_2.zip?download=1'
# + id="oesgTqLO0Qje" colab={"base_uri": "https://localhost:8080/"} outputId="aa884386-1bcb-4039-b191-824599f5ae98"
#Check that download worked
# !cd data && ls -l && cd PBMC_NG_2/bus_output && ls -l
# + [markdown] id="sCmhNVdYgkWH"
# **2. Prepare the R environment**
# + id="5Gt6rQkSXriM"
#switch to R mode
# %reload_ext rpy2.ipython
# + id="jJ3rQJCdgeJa" colab={"base_uri": "https://localhost:8080/"} outputId="3c4dee75-9dfe-4cad-9f6c-c1f9b82be141"
#install the R packages
# %%R
install.packages("qdapTools")
install.packages("dplyr")
install.packages("stringdist")
install.packages("stringr")
# + [markdown] id="x56fjfCSicrp"
# **3. Process the data**
#
# Here we discard multimapped UMIs and all UMIs belonging to cells with fewer than 200 UMIs. We also precalculate gene expression, fraction of single-copy molecules etc. and save as stats (statistics). These can later be used when generating figures. We also generate down-sampled BUGs.
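# The "fewer than 200 UMIs" filter described above can be expressed compactly. A hedged pandas
# sketch (column names are assumptions; the actual filtering happens inside
# `createStandardBugsData` below):

```python
import pandas as pd

# hypothetical per-molecule table: one row per UMI, with its cell barcode
umis = pd.DataFrame({
    "barcode": ["A"] * 250 + ["B"] * 120,
    "gene": ["g1"] * 370,
})

# count UMIs per cell, then keep only UMIs from cells with at least 200
umis_per_cell = umis.groupby("barcode").size()
kept = umis_per_cell[umis_per_cell >= 200].index
filtered = umis[umis["barcode"].isin(kept)]

print(filtered["barcode"].unique())  # only cell "A" survives
```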
# + id="sSlWuq94lvCO"
#create output directory
# !mkdir figureData
# + id="V37XLBAO68oR"
#First set some path variables
# %%R
source("GRNP_2020/RCode/pathsGoogleColab.R")
# + id="R6kuhOmzZL_X" colab={"base_uri": "https://localhost:8080/"} outputId="677a5f0e-0e4b-4ee7-b99d-e7dd35acc97e"
#Process and filter the BUG file
# %%R
source(paste0(sourcePath, "BUGProcessingHelpers.R"))
createStandardBugsData(paste0(dataPath,"PBMC_NG_2/"), "PBMC_NG_2", c(0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1))
# + [markdown] id="YsqeFNdtnLsc"
# **4. Generate statistics for the dataset**
#
# Here we create a file with various statistics for the dataset, which is used for generating table S3. It also contains some additional information about the dataset. Generation of this file may take several hours.
# + id="CffgQFeiW2tc" colab={"base_uri": "https://localhost:8080/"} outputId="a07166e0-f10f-4132-e23b-1304f41253ce" language="R"
# source(paste0(sourcePath, "GenBugSummary.R"))
# genBugSummary("PBMC_NG_2", "FGF23", "RPS10", 10)
# + id="Cb5DifYYcB6g" colab={"base_uri": "https://localhost:8080/"} outputId="4398e7c5-647a-435a-c439-0af83b0a918f"
# !cd figureData/PBMC_NG_2 && ls -l && cat ds_summary.txt
# source: notebooks/R_processing/ProcessR_PBMC_NG_2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Real-bogus classification for the Zwicky Transient Facility (ZTF) using deep learning
# Run this notebook in the browser in `Google Colab`:
#
# <a href="https://colab.research.google.com/github/dmitryduev/braai/blob/master/nb/braai_train.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
#
# <b>Tip:</b> if you are running this notebook in `Google Colab` and run out of disk space with the default runtime, try changing it to a GPU-accelerated one in `Runtime` -> `Change runtime type` -> `Hardware accelerator` -> `GPU`.
# Efficient automated detection of flux-transient, reoccurring flux-variable, and moving objects
# is increasingly important for large-scale astronomical surveys.
#
# The [Zwicky Transient Facility (ZTF)](https://ztf.caltech.edu) is a new robotic time-domain survey currently in operation at the Palomar Observatory in California, USA. ZTF is capable of visiting the entire visible sky north of $-30^\circ$ declination every night. ZTF observes the sky in the `g`, `r`, and `i` bands at different cadences depending on the scientific program and sky region. The new 576-megapixel camera with a 47 deg<sup>2</sup> field of view, installed on the Samuel Oschin 48-inch (1.2-m) Schmidt Telescope, can scan more than 3750 deg<sup>2</sup> per hour, to a $5\sigma$ detection limit of 20.7 mag in the $r$ band with a 30-second exposure during new moon.
#
# Events observed by ZTF may have been triggered by a flux-transient, a reoccurring flux-variable, or a moving object. The metadata and contextual information including the cutouts are put into "alert packets" that are distributed via the ZTF Alert Distribution System (ZADS). On a typical night, the number of detected events ranges from $10^5 - 10^6$.
#
# <table><tr><td><img src='img/fig-ztf.png'></td><td><img src='img/fig-ztf_alerts.png'></td></tr></table>
#
# The real/bogus ($rb$) machine learning (ML) classifiers are designed to separate genuine astrophysical events from bogus detections by scoring individual sources on a scale from 0.0 (bogus) to 1.0 (real). Currently, ZTF employs two $rb$ classifiers: a feature-based random forest classifier ($rfrb$), and `braai` a convolutional-neural-network, deep-learning classifier.
#
# In this tutorial, we will build a deep $rb$ classifier based on `braai`. We will use a data set consisting of $11.5k$ labeled alerts from the [ZTF public alert stream](https://ztf.uw.edu/alerts/public/).
#
# For further details on `braai`, please see [Duev+ 2019, MNRAS, 489, 3582](https://academic.oup.com/mnras/article/489/3/3582/5554758) or [arXiv:1907.11259](https://arxiv.org/pdf/1907.11259.pdf).
# ### Imports
# +
from IPython.display import HTML, display
import tqdm
import json
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import roc_curve, auc, confusion_matrix
from sklearn.model_selection import train_test_split
import datetime
from astropy.time import Time
import os
import io
import gzip
from astropy.io import fits
from bson.json_util import loads, dumps
import matplotlib.pyplot as plt
# plt.style.use(['dark_background'])
from pandas.plotting import register_matplotlib_converters, scatter_matrix
register_matplotlib_converters()
# %matplotlib inline
# -
# ### Data set
#
# Download from `GCS`:
# !wget -O candidates.csv https://storage.googleapis.com/ztf-braai/braai.candidates.programid1.csv
# !wget -O triplets.norm.npy https://storage.googleapis.com/ztf-braai/braai.triplets.norm.programid1.npy
# !ls
# #### Candidates
#
# First load the $csv$ file containing the `candidate` block of the alerts (cutouts and previous detections are excluded). All alerts are labeled (0=bogus, 1=real).
df = pd.read_csv('candidates.csv')
display(df)
df.info()
df.describe()
print(f'num_bogus: {np.sum(df.label == 0)}')
print(f'num_real: {np.sum(df.label == 1)}')
# #### Cutout images
def make_triplet(alert, normalize: bool = False, to_tpu: bool = False):
"""
Feed in alert packet
"""
cutout_dict = dict()
for cutout in ('science', 'template', 'difference'):
cutout_data = loads(dumps([alert[f'cutout{cutout.capitalize()}']['stampData']]))[0]
# unzip
with gzip.open(io.BytesIO(cutout_data), 'rb') as f:
with fits.open(io.BytesIO(f.read())) as hdu:
data = hdu[0].data
# replace nans with zeros
cutout_dict[cutout] = np.nan_to_num(data)
# normalize
if normalize:
cutout_dict[cutout] /= np.linalg.norm(cutout_dict[cutout])
# pad to 63x63 if smaller
shape = cutout_dict[cutout].shape
if shape != (63, 63):
# print(f'Shape of {candid}/{cutout}: {shape}, padding to (63, 63)')
cutout_dict[cutout] = np.pad(cutout_dict[cutout], [(0, 63 - shape[0]), (0, 63 - shape[1])],
mode='constant', constant_values=1e-9)
triplet = np.zeros((63, 63, 3))
triplet[:, :, 0] = cutout_dict['science']
triplet[:, :, 1] = cutout_dict['template']
triplet[:, :, 2] = cutout_dict['difference']
if to_tpu:
# Edge TPUs require additional processing
triplet = np.rint(triplet * 128 + 128).astype(np.uint8).flatten()
return triplet
# We will now load pre-processed image cutout triplets: [epochal science image, reference image, ZOGY difference image]. The ZTF cutout images are centered on the event candidate and are of size 63x63 pixels (or smaller, if the event is detected near the CCD edge) at a plate scale of 1$"$ per pixel. We perform independent $L^2$-normalization of the epochal science, reference, and difference cutouts and stack them to form 63x63x3 triplets that are input into the model. Smaller examples are accordingly padded using a constant pixel value of $10^{-9}$. See function `make_triplet` above.
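# The normalization and padding steps can be checked on a toy cutout — a small sketch mirroring
# the logic of `make_triplet` above on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)

# a hypothetical cutout from near the CCD edge, smaller than 63x63
cutout = rng.normal(size=(60, 63))

# independent L2 normalization, as in make_triplet(..., normalize=True)
cutout = cutout / np.linalg.norm(cutout)

# pad to 63x63 with the constant pixel value 1e-9
h, w = cutout.shape
padded = np.pad(cutout, [(0, 63 - h), (0, 63 - w)],
                mode='constant', constant_values=1e-9)

print(padded.shape)            # (63, 63)
print(np.linalg.norm(cutout))  # ~1.0
```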
# We will use memory mapping as the file is relatively large (1 GB)
triplets = np.load('triplets.norm.npy', mmap_mode='r')
# #### Visuals
#
# Plot a few triplet examples:
ind = np.random.randint(0, high=len(df), size=5)
for ii in ind:
print(f'candid: {df.loc[ii, "candid"]}, label: {df.loc[ii, "label"]}')
fig = plt.figure(figsize=(8, 2), dpi=100)
triplet = triplets[ii, :]
ax = fig.add_subplot(131)
ax.axis('off')
ax.imshow(triplet[:, :, 0], origin='lower', cmap=plt.cm.bone)
ax2 = fig.add_subplot(132)
ax2.axis('off')
ax2.imshow(triplet[:, :, 1], origin='lower', cmap=plt.cm.bone)
ax3 = fig.add_subplot(133)
ax3.axis('off')
ax3.imshow(triplet[:, :, 2], origin='lower', cmap=plt.cm.bone)
plt.show()
# Let's explore the dataset a little bit:
# +
fig = plt.figure(figsize=(8, 8), dpi=100)
ax = fig.add_subplot(111)
color_wheel = {0: "#dc3545",
1: "#28a745"}
colors = df["label"].map(lambda x: color_wheel.get(x))
columns = ['magpsf', 'fwhm', 'ndethist', 'scorr']
axx = scatter_matrix(df.loc[df.label >= 0, columns],
alpha=0.2, diagonal='hist', ax=ax, grid=True, color=colors,
hist_kwds={'color': 'darkblue', 'alpha': 0, 'bins': 50})
for rc in range(len(columns)):
rc_y_max = 0
for group in color_wheel.keys():
y = df[df.label == group][columns[rc]]
hh = axx[rc][rc].hist(y, bins=50, alpha=0.5, color=color_wheel[group], density=1)
# print(np.min(hh[0]), np.max(hh[0]))
rc_y_max = max(rc_y_max, np.max(hh[0]))
axx[rc][rc].set_ylim([0, 1.1*rc_y_max])
# scatter_matrix(df.loc[df.label == 0, ['magpsf', 'fwhm', 'ndethist']],
# alpha=0.2, diagonal='hist', ax=ax,
# hist_kwds={'color': '#dc3545', 'alpha': 0.5, 'bins': 100}, color='#dc3545')
# scatter_matrix(df.loc[df.label == 1, ['magpsf', 'fwhm', 'ndethist']],
# alpha=0.2, diagonal='hist', ax=ax,
# hist_kwds={'color': '#28a745', 'alpha': 0.5, 'bins': 100}, color='#28a745')
# -
df['date'] = df['jd'].map(lambda x: Time(x, format='jd').datetime)
fig = plt.figure(figsize=(7, 3), dpi=100)
ax = fig.add_subplot(111)
ax.hist(df.loc[df['label'] == 0, 'date'], bins=50, #linestyle='dashed',
color=color_wheel[0], histtype='step', label='bogus', linewidth=1.2)
ax.hist(df.loc[df['label'] == 1, 'date'], bins=50,
color=color_wheel[1], histtype='step', label='real', linewidth=1.2)
# ax.set_xlabel('Date')
ax.set_ylabel('Count')
ax.legend(loc='best')
ax.grid(True, linewidth=.3)
plt.tight_layout()
# We will use an 81\% / 9\% / 10\% training/validation/test data split:
# +
test_split = 0.1
# set random seed for reproducible results:
random_state = 42
x_train, x_test, y_train, y_test = train_test_split(triplets, df.label,
test_size=test_split, random_state=random_state)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
# -
# ### Deep Learning 101
#
# Highly recommended: [<NAME>'s (MIT) lecture](https://www.youtube.com/watch?v=O5xeyoRL95U).
# Brilliant, both the lecture and the lecturer. Slides below adapted from there.
# <img src='img/fig-dl.png'>
# <table><tr><td><img src='img/fig-dl2.png'></td><td><img src='img/fig-dl3.png'></td></tr></table>
# <table><tr><td><img src='img/dl4.png'></td><td><img src='img/dl5.png'></td></tr></table>
# <table><tr><td><img src='img/dl6.png'></td><td><img src='img/dl7.png'></td></tr></table>
# <table><tr><td><img src='img/dl8.png'></td><td><img src='img/dl9.png'></td></tr></table>
# <table><tr><td><img src='img/dl10.png'></td><td><img src='img/dl11.png'></td></tr></table>
# <table><tr><td><img src='img/dl12.png'></td><td><img src='img/dl13.png'></td></tr></table>
# ### `braai` architecture
#
# We will use a simple custom VGG-like sequential model ($VGG6$; this architecture was first proposed by the Visual Geometry Group of the Department of Engineering Science, University of Oxford, UK). The model has six layers with trainable parameters: four convolutional and two fully-connected. The first two convolutional layers use 16 3x3 pixel filters each while in the second pair, 32 3x3 pixel filters are used. To prevent over-fitting, a dropout rate of 0.25 is applied after each max-pooling layer and a dropout rate of 0.5 is applied after the second fully-connected layer. ReLU activation functions (Rectified Linear Unit -- a function defined as the positive part of its argument) are used for all five hidden trainable layers; a sigmoid activation function is used for the output layer.
#
# 
def vgg6(input_shape=(63, 63, 3), n_classes: int = 1):
"""
VGG6
:param input_shape:
:param n_classes:
:return:
"""
model = tf.keras.models.Sequential(name='VGG6')
# input: 63x63 images with 3 channels -> (63, 63, 3) tensors.
# this applies 16 convolution filters of size 3x3 each.
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape, name='conv1'))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', name='conv2'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', name='conv3'))
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', name='conv4'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(4, 4)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu', name='fc_1'))
model.add(tf.keras.layers.Dropout(0.5))
# output layer
activation = 'sigmoid' if n_classes == 1 else 'softmax'
model.add(tf.keras.layers.Dense(n_classes, activation=activation, name='fc_out'))
return model
# ### Model training
#
# `braai` is implemented using `TensorFlow` software and its high-level `Keras` API. We will use the binary cross-entropy loss function, the Adam optimizer, a batch size of 64, and an 81\%/9\%/10\% training/validation/test data split. The training image data are weighted per class to mitigate the real vs. bogus imbalance in the data sets. To augment the data, the images may be flipped horizontally and/or vertically at random. No random rotations or translations will be added.
# +
def save_report(path: str = './', stamp: str = None, report: dict = dict()):
f_name = os.path.join(path, f'report.{stamp}.json')
with open(f_name, 'w') as f:
json.dump(report, f, indent=2)
# make train and test masks:
_, _, mask_train, mask_test = train_test_split(df.label, list(range(len(df.label))),
test_size=test_split, random_state=random_state)
masks = {'training': mask_train, 'test': mask_test}
# +
tf.keras.backend.clear_session()
loss = 'binary_crossentropy'
optimizer = 'adam'
epochs = 100
patience = 50
# epochs = 10
# patience = 5
validation_split = 0.1
class_weight = True
batch_size = 64
# halt training if no gain in validation accuracy over patience epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
data_augmentation = {'horizontal_flip': True,
'vertical_flip': True,
'rotation_range': 0,
'fill_mode': 'constant',
'cval': 1e-9}
datagen = tf.keras.preprocessing.image.ImageDataGenerator(horizontal_flip=data_augmentation['horizontal_flip'],
vertical_flip=data_augmentation['vertical_flip'],
rotation_range=data_augmentation['rotation_range'],
fill_mode=data_augmentation['fill_mode'],
cval=data_augmentation['cval'],
validation_split=validation_split)
training_generator = datagen.flow(x_train, y_train, batch_size=batch_size, subset='training')
validation_generator = datagen.flow(x_train, y_train, batch_size=batch_size, subset='validation')
# +
binary_classification = True if loss == 'binary_crossentropy' else False
n_classes = 1 if binary_classification else 2
# training data weights
if class_weight:
# weight data class depending on number of examples?
if not binary_classification:
num_training_examples_per_class = np.sum(y_train, axis=0)
else:
num_training_examples_per_class = np.array([len(y_train) - np.sum(y_train), np.sum(y_train)])
assert 0 not in num_training_examples_per_class, 'found class without any examples!'
# fewer examples -- larger weight
weights = (1 / num_training_examples_per_class) / np.linalg.norm((1 / num_training_examples_per_class))
normalized_weight = weights / np.max(weights)
class_weight = {i: w for i, w in enumerate(normalized_weight)}
else:
class_weight = {i: 1 for i in range(2)}
# image shape:
image_shape = x_train.shape[1:]
print('Input image shape:', image_shape)
# -
# Build and compile the model:
# +
model = vgg6(input_shape=image_shape, n_classes=n_classes)
# set up optimizer:
if optimizer == 'adam':
optimzr = tf.keras.optimizers.Adam(lr=3e-4, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0.0, amsgrad=False)
elif optimizer == 'sgd':
optimzr = tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)
else:
print('Could not recognize optimizer, using Adam')
optimzr = tf.keras.optimizers.Adam(lr=3e-4, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0.0, amsgrad=False)
model.compile(optimizer=optimzr, loss=loss, metrics=['accuracy'])
print(model.summary())
# -
# Train!
# +
run_t_stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
model_name = f'braai_{model.name}_{run_t_stamp}'
h = model.fit_generator(training_generator,
steps_per_epoch=len(x_train) // batch_size,
validation_data=validation_generator,
validation_steps=(len(x_train)*validation_split) // batch_size,
class_weight=class_weight,
epochs=epochs,
verbose=1, callbacks=[early_stopping])
# -
# ### Model evaluation
# Let's now evaluate the resulting model:
# +
print('Evaluating on training set to check misclassified samples:')
labels_training_pred = model.predict(x_train, batch_size=batch_size, verbose=1)
# XOR will show misclassified samples
misclassified_train_mask = np.array(list(map(int, df.label[masks['training']]))).flatten() ^ \
np.array(list(map(int, np.rint(labels_training_pred)))).flatten()
misclassified_train_mask = [ii for ii, mi in enumerate(misclassified_train_mask) if mi == 1]
misclassifications_train = {int(c): [int(l), float(p)]
for c, l, p in zip(df.candid.values[masks['training']][misclassified_train_mask],
df.label.values[masks['training']][misclassified_train_mask],
labels_training_pred[misclassified_train_mask])}
# print(misclassifications_train)
print('Evaluating on test set for loss and accuracy:')
preds = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=1)
test_loss = float(preds[0])
test_accuracy = float(preds[1])
print("Loss = " + str(test_loss))
print("Test Accuracy = " + str(test_accuracy))
print('Evaluating on test set to check misclassified samples:')
preds = model.predict(x=x_test, batch_size=batch_size, verbose=1)
# XOR will show misclassified samples
misclassified_test_mask = np.array(list(map(int, df.label[masks['test']]))).flatten() ^ \
np.array(list(map(int, np.rint(preds)))).flatten()
misclassified_test_mask = [ii for ii, mi in enumerate(misclassified_test_mask) if mi == 1]
misclassifications_test = {int(c): [int(l), float(p)]
for c, l, p in zip(df.candid.values[masks['test']][misclassified_test_mask],
df.label.values[masks['test']][misclassified_test_mask],
preds[misclassified_test_mask])}
# round probs to nearest int (0 or 1)
labels_pred = np.rint(preds)
# -
# #### Training vs. validation accuracy
# +
if 'accuracy' in h.history:
train_acc = h.history['accuracy']
val_acc = h.history['val_accuracy']
else:
train_acc = h.history['acc']
val_acc = h.history['val_acc']
train_loss = h.history['loss']
val_loss = h.history['val_loss']
fig = plt.figure(figsize=(7, 3), dpi=100)
ax = fig.add_subplot(121)
ax.plot(train_acc, label='Training', linewidth=1.2)
ax.plot(val_acc, label='Validation', linewidth=1.2)
ax.set_xlabel('Epoch')
ax.set_ylabel('Accuracy')
ax.legend(loc='best')
ax.grid(True, linewidth=.3)
ax2 = fig.add_subplot(122)
ax2.plot(train_loss, label='Training', linewidth=1.2)
ax2.plot(val_loss, label='Validation', linewidth=1.2)
ax2.set_xlabel('Epoch')
ax2.set_ylabel('Loss')
ax2.legend(loc='best')
ax2.grid(True, linewidth=.3)
plt.tight_layout()
# -
# #### Confusion matrices with score threshold=0.5
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues,
colorbar=False,
savefig=False):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if title is None:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
# print(unique_labels(y_true, y_pred))
# classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] * 100
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots(dpi=70)
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
if colorbar:
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
# fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, f'{cm[i, j]:.1f}%' if normalize else f'{cm[i, j]:d}', #format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black",
size=20)
fig.tight_layout()
if savefig:
plt.tight_layout()
plt.savefig(savefig, dpi=300)
return ax
# +
confusion_matr = confusion_matrix(y_test, labels_pred)
confusion_matr_normalized = confusion_matr.astype('float') / confusion_matr.sum(axis=1)[:, np.newaxis]
print('Confusion matrix:')
print(confusion_matr)
print('Normalized confusion matrix:')
print(confusion_matr_normalized)
# plot:
t_set = 'test'
plot_confusion_matrix(y_true=df.label.values[masks[t_set]], y_pred=labels_pred,
classes=['bogus', 'real'],
normalize=False,
title=None,
cmap=plt.cm.Blues)
plot_confusion_matrix(y_true=df.label.values[masks[t_set]], y_pred=labels_pred,
classes=['bogus', 'real'],
normalize=True,
title=None,
cmap=plt.cm.Blues)
# -
# #### FNR vs FPR
# +
rbbins = np.arange(-0.0001, 1.0001, 0.0001)
h_b, e_b = np.histogram(preds[(df.label[masks['test']] == 0).values], bins=rbbins, density=True)
h_b_c = np.cumsum(h_b)
h_r, e_r = np.histogram(preds[(df.label[masks['test']] == 1).values], bins=rbbins, density=True)
h_r_c = np.cumsum(h_r)
# h_b, e_b
fig = plt.figure(figsize=(7, 4), dpi=100)
ax = fig.add_subplot(111)
rb_thres = np.array(list(range(len(h_b)))) / len(h_b)
ax.plot(rb_thres, h_r_c/np.max(h_r_c),
label='False Negative Rate (FNR)', linewidth=1.5)
ax.plot(rb_thres, 1 - h_b_c/np.max(h_b_c),
label='False Positive Rate (FPR)', linewidth=1.5)
mmce = (h_r_c/np.max(h_r_c) + 1 - h_b_c/np.max(h_b_c))/2
ax.plot(rb_thres, mmce, '--',
label='Mean misclassification error', color='gray', linewidth=1.5)
ax.set_xlim([-0.05, 1.05])
ax.set_xticks(np.arange(0, 1.1, 0.1))
ax.set_yticks(np.arange(0, 1.1, 0.1))
# vals = ax.get_yticks()
# ax.set_yticklabels(['{:,.0%}'.format(x) for x in vals])
ax.set_yscale('log')
ax.set_ylim([5e-4, 1])
vals = ax.get_yticks()
ax.set_yticklabels(['{:,.1%}'.format(x) if x < 0.01 else '{:,.0%}'.format(x) for x in vals])
# thresholds:
thrs = [0.5, ]
# thrs = [0.5, 0.74]
for t in thrs:
m_t = rb_thres < t
fnr = np.array(h_r_c/np.max(h_r_c))[m_t][-1]
fpr = np.array(1 - h_b_c/np.max(h_b_c))[m_t][-1]
print(t, fnr*100, fpr*100)
# ax.vlines(t_1, 0, 1.1)
ax.vlines(t, 0, max(fnr, fpr))
ax.text(t - .05, max(fnr, fpr) + 0.01, f' {fnr*100:.1f}% FNR\n {fpr*100:.1f}% FPR', fontsize=10)
ax.set_xlabel('RB score threshold')
ax.set_ylabel('Cumulative percentage')
ax.legend(loc='lower center')
ax.grid(True, which='major', linewidth=.5)
ax.grid(True, which='minor', linewidth=.3)
plt.tight_layout()
# -
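# The cumulative-histogram computation above is equivalent to evaluating both rates directly at
# a fixed threshold. A small sketch on toy scores (illustrative only — not the notebook's
# variables):

```python
import numpy as np

def fnr_fpr(scores, labels, t):
    """FNR: fraction of real (label 1) events scored below t.
    FPR: fraction of bogus (label 0) events scored at or above t."""
    fnr = float(np.mean(scores[labels == 1] < t))
    fpr = float(np.mean(scores[labels == 0] >= t))
    return fnr, fpr

scores = np.array([0.1, 0.4, 0.6, 0.9, 0.2, 0.8])
labels = np.array([0, 0, 1, 1, 1, 0])
# at t=0.5: one of three reals and one of three boguses are misclassified
print(fnr_fpr(scores, labels, 0.5))
```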
# #### ROC curve
# +
fig = plt.figure(figsize=(14, 5), dpi=100)
fig.subplots_adjust(bottom=0.09, left=0.05, right=0.70, top=0.98, wspace=0.2, hspace=0.2)
lw = 1.6
# ROCs
ax = fig.add_subplot(1, 2, 1)
# zoomed ROCs
ax2 = fig.add_subplot(1, 2, 2)
ax.plot([0, 1], [0, 1], color='#333333', lw=lw, linestyle='--')
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate (Contamination)')
ax.set_ylabel('True Positive Rate (Sensitivity)')
# ax.legend(loc="lower right")
# ax.legend(loc="best")
ax.grid(True, linewidth=.3)
# ax2.set_xlim([0.0, .2])
# ax2.set_ylim([0.8, 1.0])
ax2.set_xlim([0.0, .2])
ax2.set_ylim([0.8, 1.005])
ax2.set_xlabel('False Positive Rate (Contamination)')
ax2.set_ylabel('True Positive Rate (Sensitivity)')
# ax.legend(loc="lower right")
ax2.grid(True, linewidth=.3)
fpr, tpr, thresholds = roc_curve(df['label'][masks['test']], preds)
roc_auc = auc(fpr, tpr)
ax.plot(fpr, tpr, lw=lw)
ax2.plot(fpr, tpr, lw=lw, label=f'ROC (area = {roc_auc:.5f})')
ax2.legend(loc="lower right")
# -
# #### Generate report
# +
# generate training report in json format
print('Generating report...')
r = {'Run time stamp': run_t_stamp,
'Model name': model_name,
'Model trained': 'vgg6',
'Batch size': batch_size,
'Optimizer': optimizer,
'Requested number of train epochs': epochs,
'Early stopping after epochs': patience,
'Training+validation/test split': test_split,
'Training/validation split': validation_split,
'Weight training data by class': class_weight,
'Random state': random_state,
'Number of training examples': x_train.shape[0],
'Number of test examples': x_test.shape[0],
'X_train shape': x_train.shape,
'Y_train shape': y_train.shape,
'X_test shape': x_test.shape,
'Y_test shape': y_test.shape,
'Data augmentation': data_augmentation,
'Test loss': test_loss,
'Test accuracy': test_accuracy,
'Confusion matrix': confusion_matr.tolist(),
'Normalized confusion matrix': confusion_matr_normalized.tolist(),
'Misclassified test candids': list(misclassifications_test.keys()),
'Misclassified training candids': list(misclassifications_train.keys()),
'Test misclassifications': misclassifications_test,
'Training misclassifications': misclassifications_train,
'Training history': h.history
}
for k in r['Training history'].keys():
r['Training history'][k] = np.array(r['Training history'][k]).tolist()
# print(r)
save_report(path='./', stamp=run_t_stamp, report=r)
print('Done.')
# -
# ### Some real data
def plot_triplet(tr):
fig = plt.figure(figsize=(8, 2), dpi=100)
ax = fig.add_subplot(131)
ax.axis('off')
ax.imshow(tr[:, :, 0], origin='upper', cmap=plt.cm.bone)
ax2 = fig.add_subplot(132)
ax2.axis('off')
ax2.imshow(tr[:, :, 1], origin='upper', cmap=plt.cm.bone)
ax3 = fig.add_subplot(133)
ax3.axis('off')
ax3.imshow(tr[:, :, 2], origin='upper', cmap=plt.cm.bone)
plt.show()
# +
# #!wget -O 714287740515015072.json https://raw.githubusercontent.com/dmitryduev/braai/master/nb/714287740515015072.json
# #!wget -O 893215910715010007.json https://raw.githubusercontent.com/dmitryduev/braai/master/nb/893215910715010007.json
# -
with open('714287740515015072.json', 'r') as f:
al = json.load(f)
print(al['candidate']['rb'])
tr = make_triplet(al)
plot_triplet(tr)
model.predict(np.expand_dims(tr, axis=0))
with open('893215910715010007.json', 'r') as f:
al = json.load(f)
print(al['candidate']['rb'])
tr = make_triplet(al)
plot_triplet(tr)
model.predict(np.expand_dims(tr, axis=0))
# ## Practical advice from one of DL godfathers <NAME>
#
# Highly recommended: [<NAME>'s blog post](http://karpathy.github.io/2019/04/25/recipe/)
#
# - Neural net training is a leaky abstraction
#
# ```python
# >>> your_data = # plug your awesome dataset here
# >>> model = SuperCrossValidator(SuperDuper.fit, your_data, ResNet50, SGDOptimizer)
# # conquer world here
# ```
#
# - Neural net training fails silently
#
# Lots of ways to screw things up -> many paths to pain and suffering
#
# #### The recipe
#
# - Become one with the data
#     - probably the most important and time-consuming step
# - visualize as much as you can
# - check normalizations
#
#
# Since the neural net is effectively a compressed/compiled version of your dataset, you'll be able to look at your network's (mis)predictions and understand where they might be coming from. And if your network is giving you some prediction that doesn't seem consistent with what you've seen in the data, something is off.
#
#
# - Set up the end-to-end training/evaluation skeleton + get dumb baselines
# - fix random seed
# - simplify
# - add significant digits to your eval
# - init well
# - fancy loss func? verify at init
# - verify decreasing training loss
# - visualize just before the net
# - active learning: check model's misclassifications, both training and testing
#
#
# - Overfit: first get a model large enough that it can overfit (i.e. focus on training loss) and then regularize it appropriately (give up some training loss to improve the validation loss).
# - picking the model: don't be a hero
# - adam is safe
# - complexify only one at a time
#
#
# - Regularize
# - get more data
# - data augment
# - smaller model size
# - batchnorm
# - decrease the batch size
# - use dropout
# - early stopping
#
#
# - Tune
# - first prefer random over grid search
#     - hyper-parameter optimization. Check out [keras-tuner](https://github.com/keras-team/keras-tuner)
#
#
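# The "prefer random over grid search" point can be sketched in a few lines: sample each hyper-parameter independently (log-uniformly where the scale spans orders of magnitude) instead of walking a coarse grid. The parameter names and ranges below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_config() -> dict:
    """Draw one random hyper-parameter configuration (log-uniform learning rate)."""
    return {
        "learning_rate": float(10 ** rng.uniform(-5, -2)),
        "dropout": float(rng.uniform(0.0, 0.5)),
        "batch_size": int(2 ** rng.integers(4, 9)),  # 16 .. 256
    }

trials = [sample_config() for _ in range(20)]
```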
# - Squeeze out the juice
# - ensembles
# - leave it training
#
| nb/braai_train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# >v0.1 This code implements simple feature extraction and training using LightGBM.
#
# Feature extraction is very simple and can be improved.
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import librosa
import matplotlib.pyplot as plt
import gc
from tqdm import tqdm, tqdm_notebook
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.metrics import roc_auc_score
from joblib import Parallel, delayed
import lightgbm as lgb
from scipy import stats
from sklearn.model_selection import KFold
import warnings
warnings.filterwarnings('ignore')
tqdm.pandas()
# -
def split_and_label(rows_labels):
row_labels_list = []
for row in rows_labels:
row_labels = row.split(',')
labels_array = np.zeros((80))
for label in row_labels:
index = label_mapping[label]
labels_array[index] = 1
row_labels_list.append(labels_array)
return row_labels_list
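# To see what `split_and_label` produces, here is a self-contained toy run. It re-declares a tiny 3-class `label_mapping` (the real notebook builds the 80-class mapping from the sample submission below) and a matching version of the function:

```python
import numpy as np

label_mapping = {"Bark": 0, "Meow": 1, "Applause": 2}  # toy stand-in for the 80-class mapping

def split_and_label(rows_labels):
    """Turn comma-separated label strings into multi-hot vectors."""
    row_labels_list = []
    for row in rows_labels:
        labels_array = np.zeros(len(label_mapping))
        for label in row.split(','):
            labels_array[label_mapping[label]] = 1
        row_labels_list.append(labels_array)
    return row_labels_list

encoded = split_and_label(["Bark,Applause", "Meow"])
print(encoded[0])  # [1. 0. 1.]
```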
train_curated = pd.read_csv('../input/train_curated.csv')
train_noisy = pd.read_csv('../input/train_noisy.csv')
train_noisy = train_noisy[['fname','labels']]
test = pd.read_csv('../input/sample_submission.csv')
print(train_curated.shape, train_noisy.shape, test.shape)
label_columns = list( test.columns[1:] )
label_mapping = dict((label, index) for index, label in enumerate(label_columns))
label_mapping
train_curated_labels = split_and_label(train_curated['labels'])
train_noisy_labels = split_and_label(train_noisy ['labels'])
len(train_curated_labels), len(train_noisy_labels)
# +
for f in label_columns:
train_curated[f] = 0.0
train_noisy[f] = 0.0
train_curated[label_columns] = train_curated_labels
train_noisy[label_columns] = train_noisy_labels
train_curated['num_labels'] = train_curated[label_columns].sum(axis=1)
train_noisy['num_labels'] = train_noisy[label_columns].sum(axis=1)
train_curated['path'] = '../input/train_curated/'+train_curated['fname']
train_noisy ['path'] = '../input/train_noisy/'+train_noisy['fname']
train_curated.head()
# +
train = pd.concat([train_curated, train_noisy],axis=0)
del train_curated, train_noisy
gc.collect()
train.shape
# -
def create_features( pathname ):
var, sr = librosa.load( pathname, sr=44100)
# trim silence
if 0 < len(var): # workaround: 0 length causes error
var, _ = librosa.effects.trim(var)
xc = pd.Series(var)
X = []
X.append( xc.mean() )
X.append( xc.median() )
X.append( xc.std() )
X.append( xc.max() )
X.append( xc.min() )
X.append( xc.skew() )
X.append( xc.mad() )
X.append( xc.kurtosis() )
X.append( np.mean(np.diff(xc)) )
X.append( np.mean(np.nonzero((np.diff(xc) / xc[:-1]))[0]) )
X.append( np.abs(xc).max() )
X.append( np.abs(xc).min() )
X.append( xc[:4410].std() )
X.append( xc[-4410:].std() )
X.append( xc[:44100].std() )
X.append( xc[-44100:].std() )
X.append( xc[:4410].mean() )
X.append( xc[-4410:].mean() )
X.append( xc[:44100].mean() )
X.append( xc[-44100:].mean() )
X.append( xc[:4410].min() )
X.append( xc[-4410:].min() )
X.append( xc[:44100].min() )
X.append( xc[-44100:].min() )
X.append( xc[:4410].max() )
X.append( xc[-4410:].max() )
X.append( xc[:44100].max() )
X.append( xc[-44100:].max() )
X.append( xc[:4410].skew() )
X.append( xc[-4410:].skew() )
X.append( xc[:44100].skew() )
X.append( xc[-44100:].skew() )
X.append( xc.max() / np.abs(xc.min()) )
X.append( xc.max() - np.abs(xc.min()) )
X.append( xc.sum() )
X.append( np.mean(np.nonzero((np.diff(xc[:4410]) / xc[:4410][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[-4410:]) / xc[-4410:][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[:44100]) / xc[:44100][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[-44100:]) / xc[-44100:][:-1]))[0]) )
X.append( np.quantile(xc, 0.95) )
X.append( np.quantile(xc, 0.99) )
X.append( np.quantile(xc, 0.10) )
X.append( np.quantile(xc, 0.05) )
X.append( np.abs(xc).mean() )
X.append( np.abs(xc).std() )
return np.array( X )
# +
X = Parallel(n_jobs= 4)(delayed(create_features)(fn) for fn in tqdm(train['path'].values) )
X = np.array( X )
X.shape
# -
Xtest = Parallel(n_jobs= 4)(delayed(create_features)( '../input/test/'+fn) for fn in tqdm(test['fname'].values) )
Xtest = np.array( Xtest )
Xtest.shape
# +
n_fold = 5
folds = KFold(n_splits=n_fold, shuffle=True, random_state=69)
params = {'num_leaves': 15,
'min_data_in_leaf': 200,
'objective':'binary',
"metric": 'auc',
'max_depth': -1,
'learning_rate': 0.05,
"boosting": "gbdt",
"bagging_fraction": 0.85,
"bagging_freq": 1,
"feature_fraction": 0.20,
"bagging_seed": 42,
"verbosity": -1,
"nthread": -1,
"random_state": 69}
PREDTRAIN = np.zeros( (X.shape[0],80) )
PREDTEST = np.zeros( (Xtest.shape[0],80) )
for f in range(len(label_columns)):
y = train[ label_columns[f] ].values
oof = np.zeros( X.shape[0] )
oof_test = np.zeros( Xtest.shape[0] )
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X,y)):
model = lgb.LGBMClassifier(**params, n_estimators = 20000)
model.fit(X[trn_idx,:],
y[trn_idx],
eval_set=[(X[val_idx,:], y[val_idx])],
eval_metric='auc',
verbose=0,
early_stopping_rounds=25)
oof[val_idx] = model.predict_proba(X[val_idx,:], num_iteration=model.best_iteration_)[:,1]
oof_test += model.predict_proba(Xtest , num_iteration=model.best_iteration_)[:,1]/5.0
PREDTRAIN[:,f] = oof
PREDTEST [:,f] = oof_test
print( f, str(roc_auc_score( y, oof ))[:6], label_columns[f] )
# +
from sklearn.metrics import roc_auc_score
def calculate_overall_lwlrap_sklearn(truth, scores):
"""Calculate the overall lwlrap using sklearn.metrics.lrap."""
# sklearn doesn't correctly apply weighting to samples with no labels, so just skip them.
sample_weight = np.sum(truth > 0, axis=1)
nonzero_weight_sample_indices = np.flatnonzero(sample_weight > 0)
overall_lwlrap = label_ranking_average_precision_score(
truth[nonzero_weight_sample_indices, :] > 0,
scores[nonzero_weight_sample_indices, :],
sample_weight=sample_weight[nonzero_weight_sample_indices])
return overall_lwlrap
print( 'lwlrap cv:', calculate_overall_lwlrap_sklearn( train[label_columns].values, PREDTRAIN ) )
# -
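# As a quick sanity check of the metric: a perfect score matrix should yield an lwlrap of exactly 1.0, with the label-free sample skipped. The snippet below restates the function so it runs on its own:

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

def calculate_overall_lwlrap_sklearn(truth, scores):
    """Label-weighted label-ranking average precision, skipping label-free samples."""
    sample_weight = np.sum(truth > 0, axis=1)
    nonzero = np.flatnonzero(sample_weight > 0)
    return label_ranking_average_precision_score(
        truth[nonzero, :] > 0, scores[nonzero, :],
        sample_weight=sample_weight[nonzero])

truth = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])  # last sample has no labels
perfect = truth.astype(float)
print(calculate_overall_lwlrap_sklearn(truth, perfect))  # 1.0
```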
test[label_columns] = PREDTEST
test.to_csv('submission.csv', index=False)
test.head()
| notebooks/lightgbm-simple-solution-lb-0-203.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/waltechel/deep-learning-from-scratch-2/blob/master/ch05/chapter05.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BXv2lC1-pB4T"
# # Chapter05 Recurrent Neural Networks
#
# + [markdown] id="Dp7t22j3pE8b"
# ## 5.1 Probability and Language Models
# + [markdown] id="uYlSk_uWywMg"
# ### 5.1.1 Viewing word2vec from a Probabilistic Perspective
#
# + [markdown] id="wwZPAsmo64Tt"
# ### 5.1.2 Language Models
#
#
# + [markdown] id="kYt6tcDe9RDR"
# ### 5.1.3 Can the CBOW Model Serve as a Language Model?
#
# + [markdown] id="3G9oVkCh67AW"
# ## 5.2 What Is an RNN?
#
# + [markdown] id="TcLDj87n8J1X"
# ### 5.2.1 A Neural Network with a Loop
#
# + [markdown] id="NZ-sX0mz8NFj"
# ### 5.2.2 Unrolling the Recurrent Structure
#
# + [markdown] id="Rk5O2hBn8PYV"
# ### 5.2.3 BPTT
#
# + [markdown] id="tF49q-Bm8Szk"
# ### 5.2.4 Truncated BPTT
#
# + [markdown] id="cF8Qsz_r8VPp"
# ### 5.2.5 Mini-batch Learning with Truncated BPTT
#
# + [markdown] id="sC4CbZAC8bMD"
# ## 5.3 Implementing an RNN
#
# + [markdown] id="04e2MAVA8ivj"
# ### 5.3.1 Implementing the RNN Layer
#
# + [markdown] id="V8IxWqa78mOT"
# ### 5.3.2 Implementing the Time RNN Layer
#
# + [markdown] id="cvVTmhKw8rLs"
# ## 5.4 Implementing Layers for Time-Series Data
#
# + [markdown] id="-klEduko8toV"
# ### 5.4.1 The Big Picture of the RNNLM
#
# + [markdown] id="N91x7on_8woE"
# ### 5.4.2 Implementing the Time Layers
#
# + [markdown] id="BH9sdno18yrG"
# ## 5.5 Training and Evaluating the RNNLM
#
# + [markdown] id="mLJjM8ol986J"
# ### 5.5.1 Implementing the RNNLM
#
# + [markdown] id="s6OX3YSM-Aox"
# ### 5.5.2 Evaluating Language Models
#
# + [markdown] id="pBSjs-IZ-COQ"
# ### 5.5.3 RNNLM Training Code
#
# + [markdown] id="2kJgbDQb-FJ5"
# ### 5.5.4 The RNNLM Trainer Class
#
# + [markdown] id="_yECEctr-HlA"
# ## 5.6 Summary
#
| ch05/chapter05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="FJPv38giDj65" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} executionInfo={"status": "ok", "timestamp": 1597302978334, "user_tz": -480, "elapsed": 2282, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AO<KEY>uVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}} outputId="52ad4b9b-f0b8-42fd-fe3e-30039af0a2f4"
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.chdir('/content/gdrive/My Drive/finch/tensorflow1/free_chat/chinese_lccc/data')
# + id="H5uihnC7Dw65" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1597302979522, "user_tz": -480, "elapsed": 2362, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi3ItGjzEGzUOlXTUHjOgeuVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}}
from collections import Counter
from pathlib import Path
import json
import numpy as np
# + id="HpsB9ix7DyFE" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1597303049592, "user_tz": -480, "elapsed": 71293, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi3ItGjzEGzUOlXTUHjOgeuVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}}
with open('LCCC-base.json') as f:
data = json.load(f)
# + id="0LS6-KlKGIlW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} executionInfo={"status": "ok", "timestamp": 1597303073435, "user_tz": -480, "elapsed": 66970, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi3ItGjzEGzUOlXTUHjOgeuVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}} outputId="15df06f5-db4c-400f-b656-08fe16eac60d"
Path('../vocab').mkdir(exist_ok=True)
char_counter = Counter()
src_lens, tgt_lens = [], []
i = 0
with open('train.txt', 'w') as f_out:
for line in data['train']:
if i == 2000000:
break
if len(line) < 2:
continue
elif len(line) == 2:
src, tgt = line
src = src.lower().split()
tgt = tgt.lower().split()
char_counter.update(src)
char_counter.update(tgt)
src_lens.append(len(src))
tgt_lens.append(len(tgt))
f_out.write(''.join(src)+'<SEP>'+''.join(tgt)+'\n')
i += 1
else:
for src, tgt in zip (line, line[1:]):
src = src.lower().split()
tgt = tgt.lower().split()
char_counter.update(src)
char_counter.update(tgt)
src_lens.append(len(src))
tgt_lens.append(len(tgt))
f_out.write(''.join(src)+'<SEP>'+''.join(tgt)+'\n')
i += 1
print('Source Average Length', sum(src_lens)/len(src_lens))
print('Target Average Length', sum(tgt_lens)/len(tgt_lens))
chars = ['<pad>', '<start>', '<end>'] + [char for char, freq in char_counter.most_common() if freq >= 50]
print(len(chars), 'Chars')
with open('../vocab/char.txt', 'w') as f:
for c in chars:
f.write(c+'\n')
# + id="nbGn7XO1UGJ0" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1597303079248, "user_tz": -480, "elapsed": 5797, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi3ItGjzEGzUOlXTUHjOgeuVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}}
with open('LCCC-base_test.json') as f:
data = json.load(f)
with open('test.txt', 'w') as f_out:
for line in data:
if len(line) < 2:
continue
elif len(line) == 2:
src, tgt = line
src = src.lower().split()
tgt = tgt.lower().split()
f_out.write(''.join(src)+'<SEP>'+''.join(tgt)+'\n')
else:
            for src, tgt in zip(line, line[1:]):
                src = src.lower().split()
                tgt = tgt.lower().split()
f_out.write(''.join(src)+'<SEP>'+''.join(tgt)+'\n')
# + id="AnCOmSM6YLnP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 410} executionInfo={"status": "ok", "timestamp": 1597303163191, "user_tz": -480, "elapsed": 89725, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi3ItGjzEGzUOlXTUHjOgeuVA5TICdNcY-Q1TGicA=s64", "userId": "01997730851420384589"}} outputId="e5848021-7565-44e9-e166-b51abce53fe2"
char2idx = {}
with open('../vocab/char.txt') as f:
for i, line in enumerate(f):
line = line.rstrip('\n')
char2idx[line] = i
embedding = np.zeros((len(char2idx)+1, 300)) # + 1 for unknown word
with open('../vocab/cc.zh.300.vec') as f:
count = 0
for i, line in enumerate(f):
if i == 0:
continue
if i % 100000 == 0:
print('- At line {}'.format(i))
line = line.rstrip()
sp = line.split(' ')
word, vec = sp[0], sp[1:]
if word in char2idx:
count += 1
embedding[char2idx[word]] = np.asarray(vec, dtype='float32')
print("[%d / %d] characters have found pre-trained values"%(count, len(char2idx)))
np.save('../vocab/char.npy', embedding)
print('Saved ../vocab/char.npy')
| finch/tensorflow1/free_chat/chinese_lccc/data/make_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/junxu-ai/newspaper/blob/master/bokeh.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="rKefADHoXYjj"
try:
    import bokeh
except ImportError:
    # pip.main() was removed in pip 10+; install via the interpreter instead
    import subprocess
    import sys
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'bokeh'])
    import bokeh
# + colab={"base_uri": "https://localhost:8080/"} id="3XEuLf7EXRNT" outputId="c47756d6-b0f3-4860-ec06-2e1c4cbd2b84"
############ START BOILERPLATE ############
#### Interactivity -- BOKEH
import bokeh.plotting.figure as bk_figure
from bokeh.io import curdoc, show
from bokeh.layouts import row, widgetbox
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import Slider, TextInput
from bokeh.io import output_notebook # enables plot interface in J notebook
import numpy as np
# init bokeh
from bokeh.application import Application
from bokeh.application.handlers import FunctionHandler
output_notebook()
############ END BOILERPLATE ############
# Set up data
N = 200
x = np.linspace(0, 4*np.pi, N)
y = np.sin(x)
source = ColumnDataSource(data=dict(x=x, y=y))
# Set up plot
plot = bk_figure(plot_height=400, plot_width=400, title="my sine wave",
tools="crosshair,pan,reset,save,wheel_zoom",
x_range=[0, 4*np.pi], y_range=[-2.5, 2.5])
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
# Set up widgets
text = TextInput(title="title", value='my sine wave')
offset = Slider(title="offset", value=0.0, start=-5.0, end=5.0, step=0.1)
amplitude = Slider(title="amplitude", value=1.0, start=-5.0, end=5.0, step=0.1)
phase = Slider(title="phase", value=0.0, start=0.0, end=2*np.pi)
freq = Slider(title="frequency", value=1.0, start=0.1, end=5.1, step=0.1)
# Set up callbacks
def update_title(attrname, old, new):
plot.title.text = text.value
def update_data(attrname, old, new):
# Get the current slider values
a = amplitude.value
b = offset.value
w = phase.value
k = freq.value
# Generate the new curve
x = np.linspace(0, 4*np.pi, N)
y = a*np.sin(k*x + w) + b
source.data = dict(x=x, y=y)
### I thought I might need a show() here, but it doesn't make a difference if I add one
# show(layout)
for w in [offset, amplitude, phase, freq]:
w.on_change('value', update_data)
# Set up layouts and add to document
inputs = widgetbox(text, offset, amplitude, phase, freq)
layout = row(plot,
widgetbox(text, offset, amplitude, phase, freq))
def modify_doc(doc):
doc.add_root(row(layout, width=800))
doc.title = "Sliders"
text.on_change('value', update_title)
# + colab={"base_uri": "https://localhost:8080/"} id="DxtquFEOXYcv" outputId="d95974cc-9280-4a4c-e1b7-15f47351a79b"
handler = FunctionHandler(modify_doc)
app = Application(handler)
show(app)
# + [markdown] id="dnWufdMij7hD"
# # New Section
# + id="XrE38c1JkA93"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="6SYfnYXCj7-v"
# # New Section
| bokeh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.5 64-bit
# metadata:
# interpreter:
# hash: bc39a3b1e8761861ef1d3de10cfa50ed857ad7e667ebcf646fad08a8819ab6ed
# name: python3
# ---
# + [markdown] id="kMentdro2jRA"
# # Covid Escape Data Analysis
# + [markdown] id="jjaWsalGGP1y"
# ## Imports
# + id="0PUSMohBGKxV"
import pandas as pd
import plotly.express as px
# + [markdown] id="s85uzXVR_QFF"
# ## Reading data
# + id="rACywviB357u"
files = [
'data/AZ_cocktail_raw_data.txt',
'data/Ellebedy_invivo_raw_data.txt',
'data/MAP_paper_antibodies_raw_data.txt',
'data/REGN_and_LY-CoV016_raw_data.txt',
'data/human_sera_raw_data.txt',
]
dfs = dict()
for filepath in files:
dfs[filepath] = pd.read_csv(filepath)
# + [markdown] id="Uib87c4R_GnQ"
# ## Stats by file
# + [markdown] id="EYG3-0vZGyfo"
# `mut_escape` value stats:
# + colab={"base_uri": "https://localhost:8080/"} id="Jh0cZV5t4CuE" outputId="97a4816a-1d8d-4320-b963-7ec7b2922814"
for df_name, df in dfs.items():
print(df_name + '\n')
print(df['mut_escape'].describe())
print('\n')
# + [markdown] id="LIUjZoj0G3r_"
# and corresponding histograms:
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="kE2-9x7t72i4" outputId="abbf7101-7b3d-498e-83fd-c52cde636eaa"
for df in dfs.values():
fig = px.histogram(df, x="mut_escape", nbins=100)
fig.show()
# + [markdown] id="f0aQsWxi_4zU"
# ## Stats by condition
# + [markdown] id="KYgW2yXzFp7c"
# Obtaining a list of conditions:
# + colab={"base_uri": "https://localhost:8080/"} id="FAdNpBICAAJh" outputId="cc31fc5d-546e-45ab-ca41-e5eb1703ccc0"
conditions = []
for df in dfs.values():
conditions.extend(df.condition.unique())
conditions = list(set(conditions))
conditions = sorted(conditions)
print('Conditions:\n\t{}'.format('\n\t'.join(conditions)))
print()
print('Number of conditions: {}'.format(len(conditions)))
# + [markdown] id="HZ8TVDBOFvxr"
# Splitting all the data by the condition type:
# + id="HyI8zV1bAWt0"
df_by_condition = dict()
for condition in conditions:
current_df = pd.DataFrame()
for df in dfs.values():
current_df = current_df.append(df[df['condition'] == condition], ignore_index=True)
df_by_condition[condition] = current_df
# + [markdown] id="RyzIIVBcF9w_"
# Plot histogram of `mut_escape` for each condition type:
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="EIsh7TYKEZoG" outputId="94442afe-c91e-45c2-89d5-2ea05f84f859"
for condition, df in df_by_condition.items():
fig = px.histogram(df, x="mut_escape", nbins=100, title=condition)
fig.show()
| escape_mutations/covid_escape_data_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report,confusion_matrix
import pandas as pd
train = pd.read_excel('hackathon1.xlsx')
test = pd.read_excel('hackathon2.xlsx')
from array import array
available = train['Population [2011]'].isnull()
available_index = []
target_index = []
j = 0
for i in available:
if(not i):
available_index.append(j)
else:
target_index.append(j)
j+=1
print(target_index)
fpi = train['Female Population'].isnull()
fpa = []
fpp = []
j = 0
for i in fpi:
if(not i):
fpa.append(j)
else:
fpp.append(j)
j+=1
train['Population [2011]']
# +
from array import array
xs = array('i')
ys = array('i')
j = 0
for i in available_index:
if i in fpa:
xs.append(int(train['Female Population'][i]))
ys.append(int(train['Population [2011]'][i]))
xp = array('i')
yp = array('i')
for i in target_index:
if i in fpa:
print(i)
xp.append(int(train['Female Population'][i]))
xt = np.array(xs).reshape(-1, 1)
yt = np.array(ys).reshape(-1, 1)
xpt = np.array(xp).reshape(-1, 1)
print(xpt)
# -
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(xt, yt)
s = lr.predict(xpt)
# +
lst = []
mn = train['Population [2011]'].mean()
j = 0
for i in train['Population [2011]']:
if pd.isnull(i):
if j<41:
lst.append(int(s[j]))
j+=1
else:
lst.append(int(mn))
else:
lst.append(int(i))
ls = {'Population [2011]' : lst}
train['Population [2011]'] = pd.DataFrame(data = ls)
# -
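# The regression-based imputation above (fit `Population [2011]` on `Female Population` where both are present, predict where only the population is missing, and mean-fill the remainder) can be expressed more compactly with pandas masks. This is a sketch of the idea, not a drop-in replacement for the index bookkeeping used in this notebook:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def impute_population(df, x_col='Female Population', y_col='Population [2011]'):
    """Fill missing y_col via linear regression on x_col; mean-fill the rest."""
    known = df[x_col].notna() & df[y_col].notna()
    missing = df[y_col].isna() & df[x_col].notna()
    model = LinearRegression().fit(df.loc[known, [x_col]], df.loc[known, y_col])
    df.loc[missing, y_col] = model.predict(df.loc[missing, [x_col]])
    df[y_col] = df[y_col].fillna(df[y_col].mean())  # rows where x_col is also missing
    return df

demo = pd.DataFrame({'Female Population': [100, 200, 300, None],
                     'Population [2011]': [210, 390, None, None]})
demo = impute_population(demo)
```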
train['Population [2011]'].isnull().sum()
available = test['Population [2011]'].isnull()
available_index = []
target_index = []
j = 0
for i in available:
if(not i):
available_index.append(j)
else:
target_index.append(j)
j+=1
fpi = test['Female Population'].isnull()
fpa = []
fpp = []
j = 0
for i in fpi:
if(not i):
fpa.append(j)
else:
fpp.append(j)
j+=1
print(fpa)
# +
from array import array
xs = array('i')
ys = array('i')
j = 0
for i in available_index:
if i in fpa:
xs.append(int(test['Female Population'][i]))
ys.append(int(test['Population [2011]'][i]))
xp = array('i')
yp = array('i')
for i in target_index:
if i in fpa:
#print(i)
xp.append(int(test['Female Population'][i]))
xt = np.array(xs).reshape(-1, 1)
yt = np.array(ys).reshape(-1, 1)
xpt = np.array(xp).reshape(-1, 1)
print(xt)
# -
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(xt, yt)
s = lr.predict(xpt)
# +
lst = []
mn = test['Population [2011]'].mean()
j = 0
for i in test['Population [2011]']:
if pd.isnull(i):
if j<41:
lst.append(int(s[j]))
j+=1
else:
lst.append(int(mn))
else:
lst.append(int(i))
ls = {'Population [2011]' : lst}
test['Population [2011]'] = pd.DataFrame(data = ls)
# -
test['Population [2011]'].isnull().sum()
writer = pd.ExcelWriter('hackathon3.xlsx', engine='xlsxwriter')
train.to_excel(writer, sheet_name='Sheet1')
writer.save()
writer = pd.ExcelWriter('hackathon4.xlsx', engine='xlsxwriter')
test.to_excel(writer, sheet_name='Sheet1')
writer.save()
| HackathonPrediction1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
#
# # Resampling data
#
#
# When performing experiments where timing is critical, a signal with a high
# sampling rate is desired. However, having a signal with a much higher sampling
# rate than is necessary needlessly consumes memory and slows down computations
# operating on the data.
#
# This example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold
# reduction in data size, at the cost of an equal loss of temporal resolution.
#
#
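# The arithmetic of a 6-fold decimation can be illustrated with plain scipy. Note this is only a rough standalone sketch: MNE's `resample` applies its own filtering and padding, so results differ in detail.

```python
import numpy as np
from scipy.signal import decimate

sfreq = 600.0                        # original sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / sfreq)   # 2 s of data -> 1200 samples
sig = np.sin(2 * np.pi * 5.0 * t)    # 5 Hz sine, well below the new 50 Hz Nyquist
down = decimate(sig, 6)              # 600 Hz -> 100 Hz, with an anti-alias filter
print(len(sig), len(down))           # 1200 200
```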
# +
# Authors: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
#
from __future__ import print_function
from matplotlib import pyplot as plt
import mne
from mne.datasets import sample
# -
# Setting up data paths and loading raw data (skip some data for speed)
#
#
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()
# Since downsampling reduces the timing precision of events, we recommend
# first extracting epochs and downsampling the Epochs object:
#
#
# +
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)
# Downsample to 100 Hz
print('Original sampling rate:', epochs.info['sfreq'], 'Hz')
epochs_resampled = epochs.copy().resample(100, npad='auto')
print('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')
# Plot a piece of data to see the effects of downsampling
plt.figure(figsize=(7, 3))
n_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data
plt.plot(epochs.times[:n_samples_to_plot],
epochs.get_data()[0, 0, :n_samples_to_plot], color='black')
n_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])
plt.plot(epochs_resampled.times[:n_samples_to_plot],
epochs_resampled.get_data()[0, 0, :n_samples_to_plot],
'-o', color='red')
plt.xlabel('time (s)')
plt.legend(['original', 'downsampled'], loc='best')
plt.title('Effect of downsampling')
mne.viz.tight_layout()
# -
# When resampling epochs is unwanted or impossible, for example when the data
# doesn't fit into memory or your analysis pipeline doesn't involve epochs at
# all, the alternative approach is to resample the continuous data. This
# can also be done on non-preloaded data.
#
#
# Resample to 300 Hz
raw_resampled = raw.copy().resample(300, npad='auto')
# Because resampling also affects the stim channels, some trigger onsets might
# be lost in this case. While MNE attempts to downsample the stim channels in
# an intelligent manner to avoid this, the recommended approach is to find
# events on the original data before downsampling.
#
#
# +
print('Number of events before resampling:', len(mne.find_events(raw)))
# Resample to 100 Hz (generates warning)
raw_resampled = raw.copy().resample(100, npad='auto')
print('Number of events after resampling:',
len(mne.find_events(raw_resampled)))
# To avoid losing events, jointly resample the data and event matrix
events = mne.find_events(raw)
raw_resampled, events_resampled = raw.copy().resample(
100, npad='auto', events=events)
print('Number of events after resampling:', len(events_resampled))
| 0.13/_downloads/plot_resample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Load the data
import pandas as pd
import numpy as np
test_data = pd.read_csv('02.data/test.csv')
submission_data = pd.read_csv('01.original_data/titanic/gender_submission.csv')
test_data.head()
submission_data.head()
# ### Inference
import onnxruntime
# Load the model
onnx_session = onnxruntime.InferenceSession('04.model/model_20210410_163453.onnx')
# Inspect the input details
print(onnx_session.get_inputs()[0])
input_name = onnx_session.get_inputs()[0].name
output_name = onnx_session.get_outputs()[0].name
# Reshape to match the model's input shape
input_dataset = test_data.values.reshape([-1, 1, 7]).astype(np.float32)
# Run inference
for index, input_data in enumerate(input_dataset):
predict_result = onnx_session.run([output_name], {input_name: input_data})
submission_data.loc[index].Survived = np.argmax(predict_result)
# ### Write out the CSV
submission_data.to_csv('05.result/submission.csv', index=False)
| 04_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py37_orbit
# language: python
# name: py37_orbit
# ---
# # KTRLite Examples
# +
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.5f' % x)
import orbit
from orbit.models.ktrlite import KTRLiteMAP
from orbit.estimators.pyro_estimator import PyroEstimatorVI, PyroEstimatorMAP
from orbit.estimators.stan_estimator import StanEstimatorMCMC, StanEstimatorMAP
from orbit.utils.features import make_fourier_series_df, make_fourier_series
from orbit.diagnostics.plot import plot_predicted_data, plot_predicted_components
from orbit.diagnostics.metrics import smape
from orbit.utils.dataset import load_iclaims, load_electricity_demand
import matplotlib
import matplotlib.pyplot as plt
plt.style.use("fivethirtyeight")
# %matplotlib inline
# -
# %load_ext autoreload
# %autoreload 2
print(orbit.__version__)
print(matplotlib.__version__)
# ## Data
# +
# from 2000-01-01 to 2008-12-31
df = load_electricity_demand()
date_col = 'date'
response_col = 'electricity'
df[response_col] = np.log(df[response_col])
print(df.shape)
df.head()
# -
print(f'starts with {df[date_col].min()}\nends with {df[date_col].max()}\nshape: {df.shape}')
# ### Train / Test Split
test_size=365
train_df=df[:-test_size]
test_df=df[-test_size:]
# ## KTRLite
ktrlite = KTRLiteMAP(
response_col=response_col,
date_col=date_col,
# seasonality
seasonality=[7, 365.25],
seasonality_fs_order=[2, 5],
level_knot_scale=.1,
span_level=.05,
span_coefficients=.3,
estimator_type=StanEstimatorMAP,
n_bootstrap_draws=1e4,
)
ktrlite.fit(train_df)
predicted_df = ktrlite.predict(df=test_df, decompose=True)
predicted_df.head()
'{:.2%}'.format(smape(predicted_df['prediction'].values, test_df['electricity'].values))
_ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col=date_col, actual_col=response_col,
test_actual_df=test_df, markersize=20, lw=.5)
_ = plot_predicted_components(predicted_df=predicted_df, date_col=date_col,
plot_components=['trend', 'seasonality_7', 'seasonality_365.25'])
_ = ktrlite.plot_lev_knots()
lev_knots_df = ktrlite.get_level_knots()
lev_knots_df.head(5)
lev_df = ktrlite.get_levels()
lev_df.head(5)
# +
# stability check
ktrlite1 = KTRLiteMAP(
response_col=response_col,
date_col=date_col,
# seasonality
seasonality=[7, 365.25],
seasonality_fs_order=[2, 5],
span_coefficients=.3,
estimator_type=StanEstimatorMAP,
n_bootstrap_draws=-1,
seed=2020,
)
# stability check
ktrlite2 = KTRLiteMAP(
response_col=response_col,
date_col=date_col,
# seasonality
seasonality=[7, 365.25],
seasonality_fs_order=[2, 5],
span_coefficients=.3,
estimator_type=StanEstimatorMAP,
n_bootstrap_draws=-1,
seed=2021,
)
ktrlite1.fit(df)
ktrlite2.fit(df)
| examples/ktrlite.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id="title_ID"></a>
# # JWST Pipeline Validation Notebook: Calwebb_coron3 for MIRI coronagraphic imaging
# <span style="color:red"> **Instruments Affected**</span>: MIRI, NIRCam
#
# ### Table of Contents
#
# <div style="text-align: left">
#
# <br> [Introduction](#intro)
# <br> [JWST CalWG Algorithm](#algorithm)
# <br> [Defining Terms](#terms)
# <br> [Test Description](#test_descr)
# <br> [Data Description](#data_descr)
# <br> [Imports](#imports)
# <br> [Load Input Data](#data_load)
# <br> [Run the Pipeline](#run_pipeline)
# <br> [Examine Outputs](#testing)
# <br> [About This Notebook](#about)
# <br>
#
# </div>
# <a id="intro"></a>
# # Introduction
#
# This is the validation notebook for stage 3 coronagraphic processing of MIRI 4QPM exposures. The stage 3 coronagraphic pipeline ([`calwebb_coron3`](https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_coron3.html#calwebb-coron3)) is to be applied to associations of calibrated NIRCam and MIRI coronagraphic exposures, and is used to produce PSF-subtracted, resampled, combined images of the source object. For more information on `calwebb_coron3`, please visit the links below.
#
# > Module description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_coron3.html#calwebb-coron3
#
# > Pipeline code: https://github.com/spacetelescope/jwst/blob/master/jwst/coron/
#
#
# [Top of Page](#title_ID)
# <a id="algorithm"></a>
# # JWST CalWG Algorithm
#
#
# The algorithms for `CALWEBB_CORON3` are as follows:
#
# - **Assemble Reference PSFs**: <br>
# All the available reference PSFs are assembled into the appropriate association.
#
#
# - **Outlier detection**: <br>
# An iterative sigma clipping algorithm is used in pixel coordinates on the image stack. The presence of an outlier results in a pixel flag being set.
#
#
# - **Align reference PSFs**: <br>
# The reference PSFs are aligned with the target observation using the Fourier LSQ algorithm to measure the shifts and the Fourier Shift algorithm to apply the shifts to each reference PSF integration.
#
#
# - **Reference PSF subtraction**: <br>
# The reference PSF that is subtracted from each target integration is created using the list of reference PSFs and the KLIP algorithm.
#
#
# - **Image Combination**: <br>
# The target images (including those at different rotations) are combined into a single combined image using the AstroDrizzle code (with the output pixel size set to the input pixel size).
#
#
# - **Updated Exposure Level Products**: <br>
# The exposure level products are re-created to provide the highest quality products that include the results of the ensemble processing (updated WCS, matching backgrounds, and 2nd pass outlier detection).
#
# <BR>
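# The iterative sigma-clipping idea above can be sketched in plain NumPy: flag pixels that deviate from the per-pixel stack median by more than a few times a robust scatter estimate. This is only an illustration of the concept, not the pipeline's `outlier_detection` implementation:

```python
import numpy as np

def sigma_clip_stack(stack, sigma=3.0, iters=3):
    """Iteratively flag outlier pixels across an image stack.

    Returns a boolean mask with the same shape as `stack`,
    True where a pixel was flagged as an outlier.
    """
    mask = np.zeros(stack.shape, dtype=bool)
    for _ in range(iters):
        data = np.where(mask, np.nan, stack)
        med = np.nanmedian(data, axis=0)  # per-pixel median over the stack
        # robust scatter estimate: 1.4826 * median absolute deviation
        mad = 1.4826 * np.nanmedian(np.abs(data - med), axis=0)
        new = np.abs(stack - med) > sigma * mad
        if not np.any(new & ~mask):
            break
        mask |= new
    return mask

# toy stack: 5 copies of a flat image with one hot pixel in frame 2
stack = np.ones((5, 4, 4))
stack[2, 1, 1] = 100.0
flags = sigma_clip_stack(stack)
```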
#
# The current status of these algorithms is summarized in the link below:
#
# > https://outerspace.stsci.edu/display/JWSTCC/CALWEBB_CORON3
#
#
# [Top of Page](#title_ID)
# <a id="terms"></a>
# # Defining Terms
#
# - **JWST**: James Webb Space Telescope ([see documentation](https://jwst-docs.stsci.edu/))
# - **MIRI**: Mid-Infrared Instrument ([see documentation](https://jwst-docs.stsci.edu/mid-infrared-instrument))
# - **NIRCam**: Near-Infrared Camera ([see documentation](https://jwst-docs.stsci.edu/near-infrared-camera))
# - **4QPM**: 4 Quadrant Phase Mask ([see documentation](https://jwst-docs.stsci.edu/mid-infrared-instrument/miri-instrumentation/miri-coronagraphs#MIRICoronagraphs-4qpm))
# - **Lyot**: coronagraph design incorporating a classical Lyot spot ([see documentation](https://jwst-docs.stsci.edu/mid-infrared-instrument/miri-instrumentation/miri-coronagraphs#MIRICoronagraphs-lyotcoron))
# - **PanCAKE**: an in-house tool at STScI used to simulate coronagraphic PSFs
# ([see documentation](https://github.com/spacetelescope/pandeia-coronagraphy))
# - **SGD**: Small Grid Dither
# ([see documentation](https://jwst-docs.stsci.edu/methods-and-roadmaps/jwst-high-contrast-imaging/hci-proposal-planning/hci-small-grid-dithers))
#
#
#
#
# [Top of Page](#title_ID)
# <a id="test_descr"></a>
# # Test Description
#
# This notebook tests the following steps applied by `calwebb_coron3` for pipeline version == **'0.17.1'**.
#
# - [**stack_refs**](#stack_refs)
# - [**align_refs**](#align_refs)
# - [**klip**](#klip)
#
# These tests are performed using simulated MIRI 4QPM coronagraphic data (see [Data Description](#data_descr)).
#
# This notebook does not test the following steps applied by the `calwebb_coron3` pipeline:
#
# - **outlier_detection**
# - **resample**
#
# See [Required Future Testing](#future_tests) for details.
#
#
# [Top of Page](#title_ID)
# <a id="data_descr"></a>
# # Data Description
#
# ### Input Data:
#
# The set of data used in these tests was generated using `PanCAKE` and edited to enable processing through the `calwebb_coron3` pipeline. The simulated data was generated for the MIRI 1065C 4QPM coronagraph and consists of one science exposure and nine reference PSF exposures based on the following observation scenario: (1) a science observation of a target star with two faint companions, followed by (2) the execution of a 9-point small grid dither (SGD) pattern on a PSF calibrator of similar magnitude, to obtain a set of 9 slightly offset reference PSF observations.
#
#
# The data has the following naming format:
# - Science exposure:
#
# 'new_targ_0.fits'
#
# - Reference PSF exposures:
#
# 'new_ref_0.fits', 'new_ref_1.fits', 'new_ref_2.fits', 'new_ref_3.fits', 'new_ref_4.fits', 'new_ref_5.fits', 'new_ref_6.fits', 'new_ref_7.fits', 'new_ref_8.fits'
#
#
#
# ### Reference Files:
#
# The `align_refs` step requires a PSFMASK reference file containing a 2D mask that’s used as a weight function when computing shifts between images.
#
# > File description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/align_refs/description.html#psfmask-reffile
#
# Currently the PSFMASK reference files ingested into CRDS are incorrect (wrong shape and incorrectly centered around the coronagraphic obstructions), so an updated file is used for these tests:
#
# 'psfmask_MIRI_4QPM_1065.fits'
#
#
# ### Association File:
#
# Currently the individual stage 3 coronagraphic processing steps can only be run in a convenient way by running the `calwebb_coron3` pipeline on an association (ASN) file that lists the various science target and reference PSF exposures to be processed.
#
# > Level 3 Associations documentation: https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/level3_asn_rules.html
#
# We use the following ASN file for the purpose of these tests:
#
# 'test.yml'
#
#
#
#
# [Top of Page](#title_ID)
# <a id="imports"></a>
# # Imports
#
# * `astropy.io` for opening fits files
# * `jwst` is the JWST Calibration Pipeline
# * `jwst.Coron3Pipeline` is the pipeline being tested
# * `matplotlib.pyplot` (imported as `plt`) to generate plots
# * `numpy` for array calculations and manipulation
# * `download_file` for downloading and accessing files
# * `ipywidgets` and `IPython.display` (`display`, `clear_output`) to display images
# * `ci_watson.artifactory_helpers.get_bigdata` to download data
#
# [Top of Page](#title_ID)
import jwst
from jwst.pipeline import Coron3Pipeline
from astropy.io import fits
from ci_watson.artifactory_helpers import get_bigdata
import matplotlib.pyplot as plt
# %matplotlib inline
from astropy.utils.data import download_file
# %config InlineBackend.close_figures=False # To prevent automatic figure display when execution of the cell ends
import numpy as np
import ipywidgets as widgets
from IPython.display import display,clear_output
import os
jwst.__version__
# should output '0.17.1'
# <a id="data_load"></a>
# # Load Input Data
# +
# new PSFMASK file
psf_mask_dir = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'jwst_miri_psfmask_0001_new.fits')
# +
# download science image
target_psf_fn = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_targ.fits')
# +
# download PSF reference images
new_ref_0 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_0.fits')
new_ref_1 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_1.fits')
new_ref_2 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_2.fits')
new_ref_3 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_3.fits')
new_ref_4 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_4.fits')
new_ref_5 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_5.fits')
new_ref_6 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_6.fits')
new_ref_7 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_7.fits')
new_ref_8 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_coron3',
'coron3_miri_test',
'new_ref_8.fits')
# -
# Create array containing the input reference images
# (in reverse order, matching the ordering of the '_psfstack' product)
input_ref_images = [fits.getdata(new_ref_8)[0], fits.getdata(new_ref_7)[0], fits.getdata(new_ref_6)[0],
fits.getdata(new_ref_5)[0],fits.getdata(new_ref_4)[0], fits.getdata(new_ref_3)[0],
fits.getdata(new_ref_2)[0], fits.getdata(new_ref_1)[0], fits.getdata(new_ref_0)[0]]
# [Top of Page](#title_ID)
# <a id="run_pipeline"></a>
#
#
# ------------
#
# # Run the Pipeline
asn_dir = 'test.yml' # Define ASN file
myCoron3Pipeline = Coron3Pipeline()
myCoron3Pipeline.align_refs.override_psfmask = psf_mask_dir # Override PSFMASK file
myCoron3Pipeline.resample.skip = True # Skip resample step
myCoron3Pipeline.save_results = True
myCoron3Pipeline.output_dir = os.getcwd()
myCoron3Pipeline.run(asn_dir) # run pipeline
# [Top of Page](#title_ID)
# <a id="testing"></a>
# --------------
# # Examine Output Data
#
#
#
#
# <a id="stack_refs"></a>
# ### `stack_refs`: Stack PSF References (*'_psfstack' product*)
#
# The role of the `stack_refs` step is to stack all of the PSF reference exposures (specified in the input ASN file) into a single `CubeModel` for use by subsequent coronagraphic steps. The size of the stack should be equal to the sum of the number of integrations in each input PSF exposure. The image data are simply copied and reformatted and should not be modified in any way.
#
# *Output*: **3D PSF Image Stack** <br>
# *File suffix*: **'_psfstack'**
#
# > Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/stack_refs/index.html#stack-refs-step
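# Conceptually, `stack_refs` simply concatenates the integrations of all reference exposures along the first axis without modifying the pixel values. A hypothetical NumPy sketch (not the pipeline code):

```python
import numpy as np

def stack_refs(psf_cubes):
    """Concatenate per-exposure integration cubes into one 3D stack.

    Each input cube has shape (n_ints, ny, nx); the output stack has
    shape (sum of n_ints, ny, nx), with the pixel data untouched.
    """
    return np.concatenate(psf_cubes, axis=0)

# nine single-integration reference exposures of 16x16 pixels
cubes = [np.random.rand(1, 16, 16) for _ in range(9)]
stack = stack_refs(cubes)
```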
stacked_cube_hdu = fits.open('jw10005-miri-mask1065_psfstack.fits')
ref_images = stacked_cube_hdu[1].data
stacked_cube_hdu.info()  # .info() prints its summary directly (it returns None)
print("'_psfstack' data product dimensions: "+str(ref_images.shape))
out=widgets.Output()
button=widgets.Button(description='Next')
vbox=widgets.VBox(children=(out,button))
display(vbox)
index = 0
def click(b):
global index
index = index % len(ref_images)
im = plt.imshow(ref_images[index], vmin = 0, vmax = 20,interpolation ="none")
index += 1
with out:
clear_output(wait=True)
plt.title('Stacked PSF image #%i' % index)
plt.show()
button.on_click(click)
click(None)
# The `stack_refs` step has successfully stacked the reference PSF exposures into a single 3D '*_psfstack*' product, with size equal to the sum of the number of integrations in each input PSF exposure *(9)*. To confirm that the image data has not been modified, the input PSF images are subtracted from each image in the stack below:
out=widgets.Output()
button=widgets.Button(description='Next')
vbox=widgets.VBox(children=(out,button))
display(vbox)
index = 0
def click(b):
global index
index = index % len(ref_images)
im = plt.imshow(ref_images[index] - input_ref_images[index], vmin = 0, vmax = 20,interpolation ="none")
index += 1
with out:
clear_output(wait=True)
plt.title('Stacked PSF image - input PSF image %i' % index)
plt.show()
button.on_click(click)
click(None)
# -----------
#
# <a id="align_refs"></a>
#
#
# ### `align_refs`: Align PSF References (*'_psfalign' product*)
#
# The role of the `align_refs` step is to align the coronagraphic PSF images with science target images. It does so by computing the offsets between the science target and reference PSF images, and shifts the PSF images into alignment. The output of the `align_refs` step is a 4D data product, where the 3rd axis has length equal to the total number of reference PSF images in the input PSF stack and the 4th axis has length equal to the number of integrations in the input science target product.
#
# *Output*: **4D aligned PSF Images** <br>
# *File suffix*: **_psfalign**
#
#
# > Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/align_refs/index.html#align-refs-step
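# The essence of the step can be sketched with NumPy alone: estimate the offset between a reference PSF and the target by cross-correlation, then undo it with a Fourier shift. This toy version handles only integer-pixel shifts and is not the pipeline's `align_refs` implementation (which uses a weighted Fourier LSQ fit and supports sub-pixel shifts):

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels via the Fourier shift theorem."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def align_ref_to_target(ref, target):
    """Estimate the integer-pixel offset between ref and target by
    cross-correlation, then apply it to ref with a Fourier shift."""
    cc = np.fft.ifft2(np.fft.fft2(target) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    ny, nx = ref.shape
    dy = dy - ny if dy > ny // 2 else dy  # wrap offsets into [-n/2, n/2)
    dx = dx - nx if dx > nx // 2 else dx
    return fourier_shift(ref, dy, dx)

# demo: a circularly shifted Gaussian blob is recovered exactly
yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing='ij')
ref = np.exp(-((yy - 16.0) ** 2 + (xx - 12.0) ** 2) / 20.0)
target = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
aligned = align_ref_to_target(ref, target)
```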
aligned_cube_hdu = fits.open('new_targ_c1001_psfalign.fits')
aligned_cube_hdu.info()
aligned_cube_data = (aligned_cube_hdu[1].data)
print("'_psfalign' data product dimensions: " + str(aligned_cube_data.shape))
aligned_cube_data = aligned_cube_data[0]
out=widgets.Output()
button=widgets.Button(description='Next')
vbox=widgets.VBox(children=(out,button))
display(vbox)
index = 0
def click(b):
global index
index = index % len(aligned_cube_data)
im = plt.imshow(aligned_cube_data[index], vmin = 0, vmax = 20,interpolation ="none")
index += 1
with out:
clear_output(wait=True)
plt.title('Aligned PSF image #%i' % index)
plt.show()
button.on_click(click)
click(None)
# The `align_refs` step has successfully aligned the PSF images - note the stability when clicking through the images in the cube.
#
# The output is indeed a 4D '*_psfalign*' product, where the 3rd axis has length equal to the total number of reference images in the input PSF stack *(9)* and 4th axis equal to the number of integrations in the input science target image *(1)*.
# ------------
# <a id="klip"></a>
# ### `klip`: Reference PSF Subtraction
#
# The role of the `klip` step is to apply the Karhunen-Loeve Image Plane (KLIP) algorithm on the science target images, using an accompanying set of aligned reference PSF images (result of the `align_refs` step) in order to fit and subtract an optimal PSF from the science target image. The PSF fitting and subtraction is applied to each integration image independently. The output is a 3D stack of PSF-subtracted images of the science target, having the same dimensions as the input science target product.
#
# *Output*: **3D PSF-subtracted image** <br>
# *File suffix*: **_psfsub**
#
# > Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/klip/index.html#klip-step
#
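# The core of the KLIP algorithm can be sketched in a few lines: build an orthonormal (Karhunen-Loeve) basis from the mean-subtracted reference images via an SVD, project the target onto the leading modes, and subtract the projection. A simplified illustration of the idea, not the pipeline implementation:

```python
import numpy as np

def klip_subtract(target, refs, n_modes=5):
    """Fit and subtract an optimal PSF built from reference images.

    target : (ny, nx) science image
    refs   : (n_refs, ny, nx) stack of aligned reference PSF images
    """
    n_refs, ny, nx = refs.shape
    R = refs.reshape(n_refs, -1)
    R = R - R.mean(axis=1, keepdims=True)  # mean-subtract each reference
    # the Karhunen-Loeve basis is given by the right singular vectors
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:n_modes]                       # orthonormal KL modes
    t = target.ravel() - target.mean()
    psf_model = Z.T @ (Z @ t)              # projection onto the KL basis
    return (t - psf_model).reshape(ny, nx)

# demo: a target lying in the span of the references is subtracted to ~zero
rng = np.random.default_rng(0)
b1, b2 = rng.random((8, 8)), rng.random((8, 8))
refs = np.array([a * b1 + b * b2 for a, b in rng.random((6, 2))])
target = 0.3 * b1 + 0.7 * b2
residual = klip_subtract(target, refs, n_modes=5)
```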
sub_hdu = fits.open('new_targ_c1001_psfsub.fits')
sub_hdu.info()
subtracted_image = sub_hdu[1].data
print("Science target image dimensions: " + str(fits.getdata(target_psf_fn).shape))
print("PSF subtracted image dimensions: " + str(subtracted_image.shape))
# Note that the PSF subtracted image has the same dimensions as the input target image.
plt.figure()
plt.imshow(subtracted_image[0],vmin = 0, vmax = 10,interpolation ="none")
plt.title("PSF subtracted image")
# The `klip` step has successfully created a synthetic PSF reference image and subtracted it from the target image - indeed, the two companion PSFs that were injected into the target image are now visible. The output stack of PSF-subtracted images has the same dimensions as the input science target product.
# [Top of Page](#title_ID)
# <br>
# <a id="future_tests"></a>
# --------------
# # Required Future Testing
#
# - Testing of the `outlier_detection` step in conjunction with the three steps above.
# - Testing of the `resample` step using a data set containing a reference PSF target with astrophysical contamination (i.e. a companion) and target images at two different orientations (simulating referenced differential imaging) - whereby the `resample` step should correctly combine the two PSF-subtracted target images based on the WCS information.
# - Testing for LYOT, 1140C/FQPM and 1550C/FQPM datasets (using the updated PSFMASK Reference Files provided).
# - Testing of multiple integration images (current dataset treated as only single integration images).
# - Testing of data that has been processed through stage 1 and 2 pipeline modules.
#
# [Top of Page](#title_ID)
#
# --------------------
# <a id="about"></a>
# ## About this Notebook
# **Author:** <NAME> (Staff Scientist, *MIRI Branch*) & J. <NAME>.
# <br> **Updated On:** 12/26/2020
#
# [Top of Page](#title_ID)
| jwst_validation_notebooks/calwebb_coron3/jwst_calcoron3_miri_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from gensim.summarization import summarize
from newspaper import Article
url = "https://news.v.daum.net/v/20180206160003332"
news=Article(url,language='ko')
news.download()
news.parse()
summarize(news.text, word_count=50)  # summarize the parsed article text, not the URL
from eunjeon import Mecab
m = Mecab()
# "This is a Mecab test. This is before registering a user dictionary. Vita500"
m.pos("이것은 메캅 테스트입니다. 사용자 사전을 등록하기 전입니다. 비타500")
# +
from konlpy.tag import Komoran
komoran = Komoran()
# "Please tell me some good restaurants near Yeongdeungpo-gu Office Station."
komoran.nouns('영등포구청역에 있는 맛집 좀 알려주세요.')
# -
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Widgets are basic building blocks of an application. PyQt5 has a wide range of various widgets,
# including buttons, check boxes, sliders or list boxes. In this section we will cover several useful
# widgets: a QCheckBox, a QPushButton in toggle mode, a QSlider, a QProgressBar and a QCalendarWidget.
# -
# ## QCheckBox
# +
# A QCheckBox is a widget that has two states: on and off. It is a box with a label. Checkboxes
# are typically used to represent features in an application that can be enabled or disabled.
# -
import sys
from PyQt5.QtWidgets import QWidget, QCheckBox, QApplication
from PyQt5.QtCore import Qt
# +
class Example(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
cb = QCheckBox('Show title', self)
cb.move(20, 20)
cb.toggle()
cb.stateChanged.connect(self.changeTitle)
self.setGeometry(300, 300, 350, 250)
self.setWindowTitle('QCheckBox')
self.show()
def changeTitle(self, state):
if state == Qt.Checked:
self.setWindowTitle('QCheckBox')
else:
self.setWindowTitle(' ')
def main():
app = QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
# In this example we will create a checkbox that will toggle the window title.
# A QCheckBox constructor is cb = QCheckBox('Show title', self).
# We connect the user defined changeTitle() method to the stateChanged signal.
# The changeTitle() method will toggle the window title.
# The state of the widget is given to the changeTitle() method in the state variable.
# If the widget is checked, we set a title of the window. Otherwise, we set an empty string to the titlebar.
# -
# ## Toggle button
# +
# A toggle button is a QPushButton in a special mode. It is a button that has two states:
# pressed and not pressed. We toggle between those two states by clicking it.
# -
import sys
from PyQt5.QtWidgets import QApplication, QPushButton, QWidget, QFrame
from PyQt5.QtGui import QColor
# +
class Example2(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.col = QColor(0, 0, 0)
redb = QPushButton('Red', self)
redb.setCheckable(True)
redb.move(10, 10)
redb.clicked[bool].connect(self.setColor)
greenb = QPushButton('Green', self)
greenb.setCheckable(True)
greenb.move(10, 60)
greenb.clicked[bool].connect(self.setColor)
blueb = QPushButton('Blue', self)
blueb.setCheckable(True)
blueb.move(10, 110)
blueb.clicked[bool].connect(self.setColor)
self.square = QFrame(self)
self.square.setGeometry(150, 20, 100, 100)
self.square.setStyleSheet("QWidget {background-color: %s}" % self.col.name())
self.setGeometry(300, 300, 300, 250)
self.setWindowTitle('Toggle button')
self.show()
def setColor(self, pressed):
source = self.sender()
if pressed:
val = 255
else:
val = 0
if source.text() == 'Red':
self.col.setRed(val)
elif source.text() == 'Green':
self.col.setGreen(val)
else:
self.col.setBlue(val)
self.square.setStyleSheet("QFrame {background-color: %s}" % self.col.name())
def main():
app = QApplication(sys.argv)
ex = Example2()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
# In this example we create three toggle buttons and a QWidget.
# The toggle buttons will toggle the red, green and blue parts of the color value.
# The background colour depends on which toggle buttons are pressed.
# The initial black colour value is QColor(0, 0, 0).
# To create a toggle button we create a QPushButton and make it checkable by calling the setCheckable() method.
# We connect a clicked signal to our user defined method. We use the clicked signal that operates with Boolean value.
# We use style sheets to change the background colour. The stylesheet is updated with setStyleSheet() method.
# -
# ## Slider
# +
# A QSlider is a widget that has a simple handle. This handle can be pulled back and forth.
# This way we choose a value for a specific task. Sometimes using a slider is more
# natural than entering a number or using a spin box.
# -
import sys
from PyQt5.QtWidgets import QWidget, QApplication, QSlider, QLabel
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPixmap
# +
class Example4(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
sld = QSlider(Qt.Horizontal, self)
sld.setFocusPolicy(Qt.NoFocus)
sld.setGeometry(30, 40, 200, 30)
sld.valueChanged[int].connect(self.changeValue)
self.label = QLabel(self)
self.label.setPixmap(QPixmap('mute.png'))
self.label.setGeometry(300, 300, 300, 300)
self.setGeometry(100, 60, 1000, 800)
self.setWindowTitle('QSlider')
self.show()
def changeValue(self, value):
if value == 0:
self.label.setPixmap(QPixmap('mute.png'))
elif 0 < value <= 30:
self.label.setPixmap(QPixmap('min.png'))
elif 30 < value < 80:
self.label.setPixmap(QPixmap('med.png'))
else:
self.label.setPixmap(QPixmap('max.png'))
def main():
app = QApplication(sys.argv)
ex = Example4()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
# In this example we simulate a volume control. By dragging the handle of a slider, we change an image on the label.
# QSlider(Qt.Horizontal, self), here we create a horizontal QSlider.
# We create a QLabel widget and set an initial mute image to it.
# We connect the valueChanged signal to the user defined changeValue() method.
# Based on the value of the slider, we set an image to the label.
# -
# ## QProgressBar
# +
# A progress bar is a widget that is used when we process lengthy tasks. It is animated so
# that the user knows that the task is progressing. The QProgressBar widget provides
# a horizontal or a vertical progress bar in the PyQt5 toolkit. The programmer can set the minimum
# and maximum value for the progress bar. The default values are 0 and 99.
# -
import sys
from PyQt5.QtWidgets import QWidget, QApplication, QProgressBar, QPushButton
from PyQt5.QtCore import QBasicTimer
# +
class Example5(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.pbar = QProgressBar(self)
self.pbar.setGeometry(30, 40, 200, 25)
self.btn = QPushButton('Start', self)
self.btn.move(40, 80)
self.btn.clicked.connect(self.doAction)
self.timer = QBasicTimer()
self.step = 0
self.setGeometry(300, 300, 280, 170)
self.setWindowTitle('QProgressBar')
self.show()
def timerEvent(self, e):
if self.step >= 100:
self.timer.stop()
self.btn.setText('Finished')
return
self.step = self.step + 1
self.pbar.setValue(self.step)
def doAction(self):
if self.timer.isActive():
self.timer.stop()
self.btn.setText('Start')
else:
self.timer.start(100, self)
self.btn.setText('Stop')
def main():
app = QApplication(sys.argv)
ex = Example5()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
# In this example we have a horizontal progress bar and a push button. The push button starts and stops the progress bar.
# The QProgressBar constructor is QProgressBar(self).
# To activate the progress bar we use a timer object QBasicTimer().
# To launch a timer event we call its start() method.
# This method has two parameters: the timeout and the object which will receive the events.
# Each QObject and its descendants have a timerEvent() event handler.
# Inside the doAction() method, we start and stop the timer.
# -
# ## QCalendarWidget
# +
# A QCalendarWidget provides a monthly based calendar widget.
# It allows a user to select a date in a simple and intuitive way.
# -
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QCalendarWidget, QLabel, QVBoxLayout
from PyQt5.QtCore import QDate
# +
class Example6(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
vbox = QVBoxLayout()
cal = QCalendarWidget(self)
cal.setGridVisible(True)
cal.clicked[QDate].connect(self.showDate)
vbox.addWidget(cal)
self.lbl = QLabel(self)
date = cal.selectedDate()
self.lbl.setText(date.toString())
vbox.addWidget(self.lbl)
self.setLayout(vbox)
self.setGeometry(300, 300, 350, 300)
self.setWindowTitle('Calendar')
self.show()
def showDate(self, date):
self.lbl.setText(date.toString())
def main():
app = QApplication(sys.argv)
ex = Example6()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
# In this example we have created a calendar widget and a label widget.
# The currently selected date is displayed in the label widget.
# The QCalendarWidget is created by QCalendarWidget(self).
# If we select a date from the widget, a clicked[QDate] signal is emitted.
# We connect this signal to the user defined showDate() method.
# We retrieve the selected date by calling the selectedDate() method.
# Then we transform the date object into a string and set it to the label widget.
| 07 Widgets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align="center"> PCA to Speed-up Machine Learning Algorithms </h1>
# The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
# <br>
# It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
# Parameters | Number
# --- | ---
# Classes | 10
# Samples per class | ~7000 samples per class
# Samples total | 70000
# Dimensionality | 784
# Features | integers values from 0 to 255
# The MNIST database of handwritten digits is available on the following website: [MNIST Dataset](http://yann.lecun.com/exdb/mnist/)
# +
import pandas as pd
import numpy as np
# Suppress scientific notation
#np.set_printoptions(suppress=True)
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
# Used for downloading MNIST (fetch_mldata was removed in scikit-learn 0.22,
# so we use fetch_openml instead)
from sklearn.datasets import fetch_openml
# Used for Splitting Training and Test Sets
from sklearn.model_selection import train_test_split
# %matplotlib inline
# -
# ## Downloading MNIST Dataset
# Change data_home to wherever you want to download your data
mnist = fetch_openml('mnist_784', version=1, as_frame=False,
                     data_home='~/Desktop/alternativeData')
mnist
# These are the images
mnist.data.shape
# These are the labels
mnist.target.shape
# ## Standardizing the Data
# Since PCA yields a feature subspace that maximizes the variance along the axes, it makes sense to standardize the data, especially if it was measured on different scales.
# Notebook going over the importance of feature Scaling: http://scikit-learn.org/stable/auto_examples/preprocessing/plot_scaling_importance.html#sphx-glr-auto-examples-preprocessing-plot-scaling-importance-py
#
# Standardize features by removing the mean and scaling to unit variance
mnist.data = StandardScaler().fit_transform(mnist.data)
# ## Splitting Data into Training and Test Sets
# test_size: what proportion of original data is used for test set
train_img, test_img, train_lbl, test_lbl = train_test_split(
mnist.data, mnist.target, test_size=1/7.0, random_state=0)
print(train_img.shape)
print(train_lbl.shape)
print(test_img.shape)
print(test_lbl.shape)
# ## PCA to Speed up Machine Learning Algorithms (Logistic Regression)
# <b>Step 0:</b> Import and use PCA. After PCA we will go apply a machine learning algorithm of our choice to the transformed data
from sklearn.decomposition import PCA
# Make an instance of the Model
pca = PCA(.95)
# Fit PCA on training set. <b>Note: we are fitting PCA on the training set only</b>
pca.fit(train_img)
# Apply the mapping (transform) to <b>both</b> the training set and the test set.
train_img = pca.transform(train_img)
test_img = pca.transform(test_img)
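# Passing a float between 0 and 1 to `PCA` makes scikit-learn keep just enough components to explain that fraction of the variance. A small self-contained illustration (synthetic data, separate from MNIST; `pca_demo` is used to avoid clobbering the `pca` object above):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# data with ~3 informative directions embedded in 10 dimensions
latent = rng.randn(500, 3) * np.array([10.0, 5.0, 2.0])
X = latent @ rng.randn(3, 10) + 0.01 * rng.randn(500, 10)

pca_demo = PCA(0.95)  # keep enough components for >= 95% of the variance
X_reduced = pca_demo.fit_transform(X)
n_kept = pca_demo.n_components_                      # components actually kept
var_kept = pca_demo.explained_variance_ratio_.sum()  # variance fraction retained
```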
# <b>Step 1: </b> Import the model you want to use
# In sklearn, all machine learning models are implemented as Python classes
from sklearn.linear_model import LogisticRegression
# <b>Step 2:</b> Make an instance of the Model
# all parameters not specified are set to their defaults
# the default solver is incredibly slow, which is why we change it to
# solver = 'lbfgs'
logisticRegr = LogisticRegression(solver = 'lbfgs')
# <b>Step 3:</b> Training the model on the data, storing the information learned from the data
# Model is learning the relationship between x (digits) and y (labels)
logisticRegr.fit(train_img, train_lbl)
# <b>Step 4:</b> Predict the labels of new data (new images)
# Uses the information the model learned during the model training process
# Returns a NumPy Array
# Predict for One Observation (image)
logisticRegr.predict(test_img[0].reshape(1,-1))
# Predict for Multiple Observations (images) at Once
logisticRegr.predict(test_img[0:10])
# ## Measuring Model Performance
# accuracy (fraction of correct predictions): correct predictions / total number of data points
# Basically, how the model performs on new data (test set)
score = logisticRegr.score(test_img, test_lbl)
print(score)
# ## F1 Score
# The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.
# If you are curious about why accuracy is not always a great metric, see:
# https://github.com/mGalarnyk/datasciencecoursera/blob/master/Stanford_Machine_Learning/Week6/MachineLearningSystemDesign.md
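# A toy illustration (made-up labels, not MNIST) of how the weighted F1 averages the per-class F1 scores using the class supports as weights:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 0]

per_class_f1 = f1_score(y_true, y_pred, average=None)       # one score per class
weighted_f1 = f1_score(y_true, y_pred, average='weighted')  # support-weighted mean
```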
pred_label = logisticRegr.predict(test_img)
# consider changing the metric to precision_recall_fscore_support:
# http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
#
# TODO: build an example (similar to the Coursera one) showing the problems
# that arise from relying on plain accuracy alone.
metrics.f1_score(test_lbl, pred_label, average='weighted')
| Sklearn/PCA/CHECK_LATER_PCA_to_Speed-up_Machine_Learning_Algorithms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Negative Feedback Network
import numpy as np
class negative_feedback_neuron:
    """ Negative feedback network algorithm

    Parameters
    ----------
    x: np.array
        Input sample
    y: np.array
        Initial output activations
    W: np.array
        Initialised weights
    alpha: float, optional
        Learning rate hyperparameter
    threshold: float, optional
        Threshold under which to accept the reconstruction error
    iterations: int, optional
        Number of training iterations. -1 means train until convergence
        (default is -1)
    max_iterations: int, optional
        Maximum number of iterations when training until convergence
    """
def __init__(self, x, y, W, alpha=0.1, threshold=0.1, iterations=-1, max_iterations=500):
self.x = x
self.y = y
self.W = W
self.alpha = alpha
self.threshold = threshold
self.iterations = iterations
self.max_iterations = max_iterations
        self.e = np.inf  # np.Inf was removed in NumPy 2.0
def train(self):
"""
Performs the training
"""
if self.iterations == -1:
self.train_until_convergence()
else:
self.train_until_iteration()
def calc_e(self):
self.e = self.x - self.calc_Wty()
print('e:', str(self.e))
def calc_We(self):
We = np.dot(self.W, self.e)
print('We:', str(We))
return We
def calc_y(self):
self.y = self.y + self.alpha * self.calc_We()
print('y:', str(self.y))
def calc_Wty(self):
Wty = np.dot(self.W.T, self.y)
print('Wty:', str(Wty))
return Wty
    def train_until_convergence(self):
        iteration = 0
        while iteration < self.max_iterations:
            print('=====================')
            print('ITER. ', str(iteration + 1))
            print('=====================')
            self.calc_e()
            self.calc_y()
            # stop once the reconstruction error is small enough
            if np.linalg.norm(self.e) < self.threshold:
                break
            iteration += 1
        return
    def train_until_iteration(self):
        for iteration in range(self.iterations):
            print('=====================')
            print('ITER. ', str(iteration + 1))
            print('=====================')
            self.calc_e()
            self.calc_y()
        return
# ## Examples
x = np.array([
[1], [1], [0]
])
y = np.array([
[0], [0]
])
W = np.array([
[1, 1, 0],
[1, 1, 1]
])
nfn = negative_feedback_neuron(x, y, W, alpha=0.25, iterations=5)
nfn.train()
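# The same update rule can be written directly in NumPy, which makes it easy to check that the reconstruction error e = x - Wᵀy shrinks over the iterations. This is an illustrative sketch alongside the class above; `y_act` is used here to avoid clobbering the `y` defined earlier:

```python
import numpy as np

x = np.array([[1.0], [1.0], [0.0]])
y_act = np.array([[0.0], [0.0]])
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
alpha = 0.25

errors = []
for _ in range(20):
    e = x - W.T @ y_act           # reconstruction error
    y_act = y_act + alpha * W @ e  # activation update
    errors.append(float(np.linalg.norm(e)))
```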
| negative_feedback_network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston, load_diabetes
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from autofeat import AutoFeatRegressor
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# +
datasets = ["diabetes", "boston", "concrete", "airfoil", "wine_quality"]
# same interface for loading all datasets - adapt the datapath
# to where you've downloaded (and renamed) the datasets
def load_regression_dataset(name, datapath="../datasets/regression/"):
# load one of the datasets as X and y (and possibly units)
units = {}
if name == "boston":
# sklearn boston housing dataset
X, y = load_boston(return_X_y=True)
elif name == "diabetes":
# sklearn diabetes dataset
X, y = load_diabetes(return_X_y=True)
elif name == "concrete":
# https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength
# Cement (component 1) -- quantitative -- kg in a m3 mixture -- Input Variable
# Blast Furnace Slag (component 2) -- quantitative -- kg in a m3 mixture -- Input Variable
# Fly Ash (component 3) -- quantitative -- kg in a m3 mixture -- Input Variable
# Water (component 4) -- quantitative -- kg in a m3 mixture -- Input Variable
# Superplasticizer (component 5) -- quantitative -- kg in a m3 mixture -- Input Variable
# Coarse Aggregate (component 6) -- quantitative -- kg in a m3 mixture -- Input Variable
# Fine Aggregate (component 7) -- quantitative -- kg in a m3 mixture -- Input Variable
# Age -- quantitative -- Day (1~365) -- Input Variable
# Concrete compressive strength -- quantitative -- MPa -- Output Variable
df = pd.read_csv(os.path.join(datapath, "concrete.csv"))
X = df.iloc[:, :8].to_numpy()
y = df.iloc[:, 8].to_numpy()
elif name == "forest_fires":
# https://archive.ics.uci.edu/ml/datasets/Forest+Fires
# 1. X - x-axis spatial coordinate within the Montesinho park map: 1 to 9
# 2. Y - y-axis spatial coordinate within the Montesinho park map: 2 to 9
# 3. month - month of the year: 'jan' to 'dec'
# 4. day - day of the week: 'mon' to 'sun'
# 5. FFMC - FFMC index from the FWI system: 18.7 to 96.20
# 6. DMC - DMC index from the FWI system: 1.1 to 291.3
# 7. DC - DC index from the FWI system: 7.9 to 860.6
# 8. ISI - ISI index from the FWI system: 0.0 to 56.10
# 9. temp - temperature in Celsius degrees: 2.2 to 33.30
# 10. RH - relative humidity in %: 15.0 to 100
# 11. wind - wind speed in km/h: 0.40 to 9.40
# 12. rain - outside rain in mm/m2 : 0.0 to 6.4
# 13. area - the burned area of the forest (in ha): 0.00 to 1090.84
# (this output variable is very skewed towards 0.0, thus it may make sense to model with the logarithm transform).
# --> first 4 are ignored
df = pd.read_csv(os.path.join(datapath, "forest_fires.csv"))
X = df.iloc[:, 4:12].to_numpy()
y = df.iloc[:, 12].to_numpy()
# perform transformation as they suggested
y = np.log(y + 1)
elif name == "wine_quality":
# https://archive.ics.uci.edu/ml/datasets/Wine+Quality
# Input variables (based on physicochemical tests):
# 1 - fixed acidity
# 2 - volatile acidity
# 3 - citric acid
# 4 - residual sugar
# 5 - chlorides
# 6 - free sulfur dioxide
# 7 - total sulfur dioxide
# 8 - density
# 9 - pH
# 10 - sulphates
# 11 - alcohol
# Output variable (based on sensory data):
# 12 - quality (score between 0 and 10)
df_red = pd.read_csv(os.path.join(datapath, "winequality-red.csv"), sep=";")
df_white = pd.read_csv(os.path.join(datapath, "winequality-white.csv"), sep=";")
# add additional categorical feature for red or white
X = np.hstack([np.vstack([df_red.iloc[:, :-1].to_numpy(), df_white.iloc[:, :-1].to_numpy()]), np.array([[1]*len(df_red) + [0]*len(df_white)]).T])
y = np.hstack([df_red["quality"].to_numpy(), df_white["quality"].to_numpy()])
elif name == "airfoil":
# https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise
# This problem has the following inputs:
# 1. Frequency, in Hertz.
# 2. Angle of attack, in degrees.
# 3. Chord length, in meters.
# 4. Free-stream velocity, in meters per second.
# 5. Suction side displacement thickness, in meters.
# The only output is:
# 6. Scaled sound pressure level, in decibels.
units = {"x001": "Hz", "x003": "m", "x004": "m/sec", "x005": "m"}
df = pd.read_csv(os.path.join(datapath, "airfoil_self_noise.tsv"), header=None, names=["x1", "x2", "x3", "x4", "x5", "y"], sep="\t")
X = df.iloc[:, :5].to_numpy()
y = df["y"].to_numpy()
else:
raise RuntimeError("Unknown dataset %r" % name)
return np.array(X, dtype=float), np.array(y, dtype=float), units
def test_model(dataset, model, param_grid):
# load data
X, y, _ = load_regression_dataset(dataset)
# split in training and test parts
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)
if model.__class__.__name__ == "SVR":
sscaler = StandardScaler()
X_train = sscaler.fit_transform(X_train)
X_test = sscaler.transform(X_test)
# train model on train split incl cross-validation for parameter selection
gsmodel = GridSearchCV(model, param_grid, scoring='neg_mean_squared_error', cv=5)
gsmodel.fit(X_train, y_train)
print("best params:", gsmodel.best_params_)
print("best score:", gsmodel.best_score_)
print("MSE on training data:", mean_squared_error(y_train, gsmodel.predict(X_train)))
print("MSE on test data:", mean_squared_error(y_test, gsmodel.predict(X_test)))
print("R^2 on training data:", r2_score(y_train, gsmodel.predict(X_train)))
print("R^2 on test data:", r2_score(y_test, gsmodel.predict(X_test)))
return gsmodel.best_estimator_
def test_autofeat(dataset, feateng_steps=2):
# load data
X, y, units = load_regression_dataset(dataset)
# split in training and test parts
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)
# run autofeat
afreg = AutoFeatRegressor(verbose=1, feateng_steps=feateng_steps, units=units)
# fit autofeat on less data, otherwise ridge reg model with xval will overfit on new features
X_train_tr = afreg.fit_transform(X_train, y_train)
X_test_tr = afreg.transform(X_test)
print("autofeat new features:", len(afreg.new_feat_cols_))
print("autofeat MSE on training data:", mean_squared_error(y_train, afreg.predict(X_train_tr)))
print("autofeat MSE on test data:", mean_squared_error(y_test, afreg.predict(X_test_tr)))
print("autofeat R^2 on training data:", r2_score(y_train, afreg.predict(X_train_tr)))
print("autofeat R^2 on test data:", r2_score(y_test, afreg.predict(X_test_tr)))
# train rreg on transformed train split incl cross-validation for parameter selection
print("# Ridge Regression")
rreg = Ridge()
param_grid = {"alpha": [0.00001, 0.0001, 0.001, 0.01, 0.1, 1., 2.5, 5., 10., 25., 50., 100., 250., 500., 1000., 2500., 5000., 10000.]}
with warnings.catch_warnings():
warnings.simplefilter("ignore")
gsmodel = GridSearchCV(rreg, param_grid, scoring='neg_mean_squared_error', cv=5)
gsmodel.fit(X_train_tr, y_train)
print("best params:", gsmodel.best_params_)
print("best score:", gsmodel.best_score_)
print("MSE on training data:", mean_squared_error(y_train, gsmodel.predict(X_train_tr)))
print("MSE on test data:", mean_squared_error(y_test, gsmodel.predict(X_test_tr)))
print("R^2 on training data:", r2_score(y_train, gsmodel.predict(X_train_tr)))
print("R^2 on test data:", r2_score(y_test, gsmodel.predict(X_test_tr)))
print("# Random Forest")
rforest = RandomForestRegressor(n_estimators=100, random_state=13)
param_grid = {"min_samples_leaf": [0.0001, 0.001, 0.01, 0.05, 0.1, 0.2]}
gsmodel = GridSearchCV(rforest, param_grid, scoring='neg_mean_squared_error', cv=5)
gsmodel.fit(X_train_tr, y_train)
print("best params:", gsmodel.best_params_)
print("best score:", gsmodel.best_score_)
print("MSE on training data:", mean_squared_error(y_train, gsmodel.predict(X_train_tr)))
print("MSE on test data:", mean_squared_error(y_test, gsmodel.predict(X_test_tr)))
print("R^2 on training data:", r2_score(y_train, gsmodel.predict(X_train_tr)))
print("R^2 on test data:", r2_score(y_test, gsmodel.predict(X_test_tr)))
# -
for dsname in datasets:
print("####", dsname)
X, y, _ = load_regression_dataset(dsname)
print(X.shape)
for dsname in datasets:
print("####", dsname)
rreg = Ridge()
params = {"alpha": [0.00001, 0.0001, 0.001, 0.01, 0.1, 1., 2.5, 5., 10., 25., 50., 100., 250., 500., 1000., 2500., 5000., 10000., 25000., 50000., 100000.]}
rreg = test_model(dsname, rreg, params)
for dsname in datasets:
print("####", dsname)
svr = SVR(gamma="scale")
params = {"C": [1., 10., 25., 50., 100., 250.]}
svr = test_model(dsname, svr, params)
for dsname in datasets:
print("####", dsname)
rforest = RandomForestRegressor(n_estimators=100, random_state=13)
params = {"min_samples_leaf": [0.0001, 0.001, 0.01, 0.05, 0.1, 0.2]}
rforest = test_model(dsname, rforest, params)
for dsname in datasets:
print("####", dsname)
test_autofeat(dsname, feateng_steps=1)
for dsname in datasets:
print("####", dsname)
test_autofeat(dsname, feateng_steps=2)
for dsname in datasets:
print("####", dsname)
test_autofeat(dsname, feateng_steps=3)
| autofeat_benchmark_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Example usage of NRHybSur3dq8 surrogate model.
# +
import numpy as np
import matplotlib.pyplot as P
# %matplotlib inline
import gwsurrogate
# -
# ## Download surrogate data, this only needs to be done once
# This can take a few minutes
gwsurrogate.catalog.pull('NRHybSur3dq8')
# ## Load the surrogate, this only needs to be done once at the start of a script
sur = gwsurrogate.LoadSurrogate('NRHybSur3dq8')
# ## Read the documentation
help(sur)
# ## Evaluate the waveform
# ### Evaluate waveform modes in dimensionless units (default)
q = 7
chiA = [0, 0, 0.5]
chiB = [0, 0, -0.7]
dt = 0.1 # step size, Units of M
f_low = 5e-3 # initial frequency, Units of cycles/M
t, h, dyn = sur(q, chiA, chiB, dt=dt, f_low=f_low) # dyn stands for dynamics and is always None for this model
# Let's see all available modes (m<0 modes will be included automatically if inclination/phi_ref arguments are given)
print(sorted(h.keys()))
P.plot(t, h[(2,2)].real, label='l2m2 real')
P.plot(t, h[(3,3)].real, label='l3m3 real')
P.plot(t, h[(4,4)].real, label='l4m4 real')
P.ylabel('Re[$h_{lm}$]', fontsize=18)
P.xlabel('t [M]', fontsize=18)
P.legend()
# ### Evaluate waveform on a fixed time array
# +
q = 7
chiA = [0, 0, 0.5]
chiB = [0, 0, -0.7]
f_low = 0 # this will be ignored and the waveform will be returned on the times given below
times = np.arange(-10000,130,0.1)
# The returned times are the same as the input times
times, h, dyn = sur(q, chiA, chiB, times=times, f_low=f_low)
P.plot(times, h[(2,2)].real, label='l2m2 real')
P.plot(times, h[(3,3)].real, label='l3m3 real')
P.plot(times, h[(4,4)].real, label='l4m4 real')
P.ylabel('Re[$h_{lm}$]', fontsize=18)
P.xlabel('t [M]', fontsize=18)
P.legend()
# -
# ### Evaluate waveform modes in physical units
# +
q = 7
chiA = [0, 0, 0.5]
chiB = [0, 0, -0.7]
M = 20 # Total mass in solar masses
dist_mpc = 100 # distance in megaparsecs
dt = 1./4096 # step size in seconds
f_low = 20 # initial frequency in Hz
t, h, dyn = sur(q, chiA, chiB, dt=dt, f_low=f_low, mode_list=[(2,2), (2,1), (3, 3)], M=M, dist_mpc=dist_mpc, units='mks')
P.plot(t, h[(2,2)].real, label='l2m2 real')
P.plot(t, h[(3,3)].real, label='l3m3 real')
P.plot(t, h[(2,1)].real, label='l2m1 real')
P.ylabel('Re[$h_{lm}$]', fontsize=18)
P.xlabel('t [s]', fontsize=18)
P.legend()
# -
# ### Evaluate waveform at a point on the sky
# +
q = 7
chiA = [0, 0, 0.5]
chiB = [0, 0, -0.7]
M = 60 # Total mass in solar masses
dist_mpc = 100 # distance in megaparsecs
dt = 1./4096 # step size in seconds
f_low = 20 # initial frequency in Hz
inclination = np.pi/4
phi_ref = np.pi/5
# Will only include modes given in mode_list argument as well as the m<0 counterparts.
# If mode_list is not specified, uses all available modes.
# Returns h_+ -i h_x
t, h, dyn = sur(q, chiA, chiB, dt=dt, f_low=f_low, mode_list=[(2,2), (2,1), (3, 3)], M=M, dist_mpc=dist_mpc,
inclination=inclination, phi_ref=phi_ref, units='mks')
P.plot(t, h.real)
P.ylabel(r'$h_{+}$ $(\iota, \phi_{ref})$', fontsize=18)
P.xlabel('t [s]', fontsize=18)
# -
| tutorial/website/NRHybSur3dq8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
# </center>
#
# # Analyzing Where Do People Drink?
#
# Estimated time needed: **30** minutes
#
# ## Objectives
#
# After completing this lab you will be able to:
#
# - Be confident about your data analysis skills
#
# + active=""
# This dataset is from the story <a href=https://fivethirtyeight.com/features/dear-mona-followup-where-do-people-drink-the-most-beer-wine-and-spirits/> Dear Mona Followup: Where Do People Drink The Most Beer, Wine And Spirits? </a> The dataset contains average serving sizes per person, such as average wine, spirit, and beer servings, as well as several other metrics. You will be asked to analyze the data and predict the total liters served given the servings. See how to share your lab at the end.
#
# -
# You will need the following libraries:
#
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
# -
# <b>1.0 Importing the Data</b>
#
# Load the csv:
#
df= pd.read_csv('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/edx/project/drinks.csv')
df
# We use the method <code>head()</code> to display the first 5 rows of the dataframe:
#
df.head()
# <b>Question 1</b>: Display the data types of each column using the attribute <code>dtypes</code>.
#
df.dtypes
# <b>Question 2</b> use the method <code>groupby</code> to get the number of wine servings per continent:
#
servings = df[["wine_servings"]]
servings = servings.groupby(df["continent"]).sum()
servings
# <b>Question 3:</b> Perform a statistical summary and analysis of beer servings for each continent:
#
df.groupby("continent")["beer_servings"].describe()
# <b>Question 4:</b> Use the function boxplot in the seaborn library to produce a plot that can be used to show the number of beer servings on each continent.
#
import seaborn as sns
sns.boxplot(x="continent", y="beer_servings", data=df)
# The quartiles below correspond to the box edges and median shown in the plot
Q1 = df.quantile(0.25)
Q2 = df.quantile(0.50)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
IQR
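As a side note (not in the original lab), the 1.5·IQR fences that a boxplot draws can be computed directly; the toy series below is an assumption standing in for a numeric column such as `beer_servings`:

```python
import pandas as pd

# Toy data standing in for a numeric servings column
s = pd.Series([0, 10, 20, 30, 40, 50, 500])

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = s[(s < lower) | (s > upper)]
print(outliers.tolist())  # 500 lies beyond the upper fence
```

Values outside these fences are what seaborn renders as the individual points beyond the whiskers.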
# <b>Question 5</b>: Use the function <code> regplot</code> in the seaborn library to determine if the number of wine servings is
# negatively or positively correlated with the number of beer servings.
#
import seaborn as sns
sns.regplot(x = "beer_servings", y = "wine_servings", data = df)
# <b> Question 6:</b> Fit a linear regression model to predict the <code>'total_litres_of_pure_alcohol'</code> using the number of <code>'wine_servings'</code> then calculate $R^{2}$:
#
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
x = df[["wine_servings"]]
y = df["total_litres_of_pure_alcohol"]
lr.fit(x, y)
lr.score(x, y)
yhat = lr.predict(x)
yhat
# <br>
# <b>Note:</b> Please use <code>test_size = 0.10</code> in the following questions.
#
# ### Question 7
#
# Use the list of features to predict the <code>'total_litres_of_pure_alcohol'</code>, split the data into training and testing and determine the $R^2$ on the test data, using the provided code:
#
from sklearn.linear_model import LinearRegression
df.columns
# +
x = df[["beer_servings", "spirit_servings", "wine_servings"]]
y = df["total_litres_of_pure_alcohol"]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.10, random_state = 0)
lr.fit(x_train, y_train)
lr.score(x_test, y_test)
yhat = lr.predict(x_test)
yhat[0:4]
# -
# <b>Question 8 :</b> Create a pipeline object that scales the data, performs a polynomial transform and fits a linear regression model. Fit the object using the training data in the question above, then calculate the R^2 using. the test data. Take a screenshot of your code and the $R^{2}$. There are some hints in the notebook:
#
# <code>'scale'</code>
#
# <code>'polynomial'</code>
#
# <code>'model'</code>
#
# The second element in the tuple contains the model constructor
#
# <code>StandardScaler()</code>
#
# <code>PolynomialFeatures(include_bias=False)</code>
#
# <code>LinearRegression()</code>
#
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
Input = [("scale", StandardScaler()), ("polynomial", PolynomialFeatures(include_bias = False)), ("model", LinearRegression())]
pipe = Pipeline(Input)
pipe.fit(x_train, y_train)
pipe.score(x_test, y_test)
yhat = pipe.predict(x_test)
yhat
# -
# <b>Question 9</b>: Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1 and calculate the $R^{2}$ using the test data. Take a screenshot of your code and the $R^{2}$
#
from sklearn.linear_model import Ridge
RidgeModel = Ridge(alpha = 0.1)
RidgeModel.fit(x_train, y_train)
RidgeModel.score(x_test, y_test)
yhat = RidgeModel.predict(x_test)
yhat
# <b>Question 10 </b>: Perform a 2nd order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1. Calculate the $R^{2}$ utilizing the test data provided. Take a screen-shot of your code and the $R^{2}$.
#
# +
from sklearn.preprocessing import PolynomialFeatures
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.1, random_state = 0)
pr = PolynomialFeatures(degree = 2)
x_train_pf = pr.fit_transform(x_train)
x_test_pf = pr.transform(x_test)  # reuse the transform fitted on the training data
from sklearn.linear_model import Ridge
RidgeModel = Ridge(alpha = 0.1)
RidgeModel.fit(x_train_pf, y_train)
yhat = RidgeModel.predict(x_test_pf)
print("Predicted:", yhat[0:4])
print("test set:", y_test[0:4].values)
print("R^2 on test data:", RidgeModel.score(x_test_pf, y_test))
# -
# <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html"> CLICK HERE </a> to see how to share your notebook
#
# <b>Sources</b>
#
# <a href=https://fivethirtyeight.com/features/dear-mona-followup-where-do-people-drink-the-most-beer-wine-and-spirits/> Dear Mona Followup: Where Do People Drink The Most Beer, Wine And Spirits?</a> by <NAME>; you can download the dataset <a href=https://github.com/fivethirtyeight/data/tree/master/alcohol-consumption>here</a>.
#
| Jupiter Book/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
from pyquery import PyQuery as pq
import pandas as pd
import requests
# -
host_path = "https://www.crosslink.tw"
profile_href = "https://www.crosslink.tw/users/5444"
req_s = requests.Session()  # create a persistent HTTP session
req_s.encoding = 'utf-8'
page_text = req_s.get(profile_href).text  # fetch the profile page once
q = pq(page_text)
page_text
# +
courses = q(".col-md-9").find(".content").children().attr("data-react-props")
# print(courses)
import json
info = json.loads(courses)
info
# -
info_data = ""
for data in info["current_courses"]:
info_data += data["name"] + " - " + data["lecture"] + '\n\t'
info_data += str(data["time_pairs"]) + '\n'
print(info_data)
| NTUST_API/crosslink_analyze__.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: HousingEnv
# language: python
# name: housingenv
# ---
# ---
# ## Data Prep
# ### Dataset Cleaning
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from time import time
from src.features import build_features as bf
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import fbeta_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer, MissingIndicator
from sklearn.pipeline import FeatureUnion, make_pipeline
sns.set()
# -
# ---
# ## Data Preprocessing
# +
features_raw = pd.read_csv('../data/interim/features_raw.csv', index_col='Id')
test_raw = pd.read_csv('../data/raw/test.csv', index_col='Id')
target = pd.read_csv('../data/interim/target_raw.csv', index_col='Id', squeeze=True)
test = test_raw.copy()
# -
features_raw.head()
features_raw.shape
df_zscore = pd.read_csv('../data/interim/df_zscore.csv', index_col='Id')
outlier_idx = df_zscore[(df_zscore >= 5).any(axis=1)].index
outlier_idx
# ### Handle Outliers
# +
df = features_raw.drop(index=outlier_idx)
target = target.drop(index=outlier_idx)
index = df.index
# Uncomment this line to save the DataFrames
# target.to_csv('../data/interim/target_no_outliers.csv')
# -
df.shape
# ### Assess Missing Data
# #### Assess Missing Data in Each Column
df.isna().sum().sort_values(ascending=False).head(10)
# #### Assess Missing Data in Each Column
nan_count = df.isna().sum()
nan_count = nan_count[nan_count > 0]
nan_cols = df[nan_count.index].columns
(nan_count / df.shape[0]).sort_values(ascending=False)
# +
# Investigate patterns in the amount of missing data in each column.
# plt.rcParams.update({'figure.dpi':100})
plt.figure(figsize=(9, 8))
ax = sns.histplot(nan_count, kde=False)
ax.set_title('Histogram of Missing Data by Column')
ax.set(xlabel='Total Missing or Unknown', ylabel='Total Occurrences')
plt.show()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/MissingDatabyCol_Histogram.svg')
# +
nan_rows = df.isna().sum(axis=1)
plt.figure(figsize=(9, 8))
ax = sns.histplot(nan_rows, kde=False)
ax.set_title('Histogram of Missing Data by Row')
ax.set(xlabel='Total Missing or Unknown', ylabel='Total Occurrences')
plt.show()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/MissingDatabyRow_Histogram.svg')
# -
# #### Assessment Summary
#
# There is a fair amount of missing data in this dataset. Four features in particular (PoolQC, MiscFeature, Alley, Fence) contain >50% missing or unknown values. For PoolQC, we may be able to infer whether or not the home has a pool by performing some feature engineering on the PoolArea feature. In addition, several features such as 'GarageArea' indicate the total square footage of a garage (if any), but our dataset does not seem to indicate whether or not a particular home has a garage. We'll create features for these. Let's investigate these in turn.
# +
def fill_categorical(val):
return 1 if val != 'NA' else 0
def fill_numerical(val):
return 1 if val > 0 else 0
na_cols = ['PoolArea', 'Fence', 'Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure',
'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu']
df[na_cols] = df[na_cols].fillna('NA')
# Most homes don't have pools. Let's use the value of 'PoolArea' to assign 'NA' to 'PoolQC' if the area == 0
df.loc[df['PoolArea'] == 0, 'PoolQC'] = 'NA' # If no 'PoolArea', (0), then there is no pool
# Similarly, let's apply the same logic to other features where necessary:
df['HasFence'] = df['Fence'].apply(lambda x: fill_categorical(x))
df['HasAlley'] = df['Alley'].apply(lambda x: fill_categorical(x))
df['HasFireplace'] = df['FireplaceQu'].apply(lambda x: fill_categorical(x))
df['HasPool'] = df['PoolArea'].apply(lambda x: fill_numerical(x))
df['HasGarage'] = df['GarageArea'].apply(lambda x: fill_numerical(x))
df['HasBasement'] = df['TotalBsmtSF'].apply(lambda x: fill_numerical(x))
# -------------------------------------------------
# Apply above feature engineering steps on test set
# -------------------------------------------------
test[na_cols] = test[na_cols].fillna('NA')
test.loc[test['PoolArea'] == 0, 'PoolQC'] = 'NA' # If no 'PoolArea', (0), then there is no pool
test['HasFence'] = test['Fence'].apply(lambda x: fill_categorical(x))
test['HasAlley'] = test['Alley'].apply(lambda x: fill_categorical(x))
test['HasFireplace'] = test['FireplaceQu'].apply(lambda x: fill_categorical(x))
test['HasPool'] = test['PoolArea'].apply(lambda x: fill_numerical(x))
test['HasGarage'] = test['GarageArea'].apply(lambda x: fill_numerical(x))
test['HasBasement'] = test['TotalBsmtSF'].apply(lambda x: fill_numerical(x))
# -
categorical_cols = df.select_dtypes(include=object).columns
numerical_cols = df.select_dtypes(include=np.number).columns
# +
# Perform One-Hot Encoding on our Categorical Data
features_enc = df.copy()
features_onehot_enc = pd.get_dummies(data=features_enc, columns=categorical_cols, dummy_na=True)
# Uncomment this line to export DataFrame
# features_onehot_enc.to_csv('../data/interim/features_onehot_enc.csv')
# Print the number of features after one-hot encoding
encoded = list(features_onehot_enc.columns)
print(f'{len(encoded)} total features after one-hot encoding.')
# Uncomment the following line to see the encoded feature names
# print(encoded)
# -------------------------------------------------
# Apply above feature engineering steps on test set
# -------------------------------------------------
test_enc = test.copy()
test_onehot_enc = pd.get_dummies(data=test_enc, columns=categorical_cols, dummy_na=True)
# Uncomment this line to export DataFrame
# test_onehot_enc.to_csv('../data/interim/test_onehot_enc.csv')
# +
imp = IterativeImputer(missing_values=np.nan, random_state=5, max_iter=20)
imputed_arr = imp.fit_transform(features_onehot_enc)
features_imputed = pd.DataFrame(imputed_arr, columns=features_onehot_enc.columns)
features_imputed.index = features_onehot_enc.index
# Uncomment this line to export DataFrame
# features_imputed.to_csv('../data/interim/features_imputed.csv')
# -------------------------------------------------
# Apply above imputation steps on test set
# -------------------------------------------------
imputed_arr_test = imp.fit_transform(test_onehot_enc)
test_imputed = pd.DataFrame(imputed_arr_test, columns=test_onehot_enc.columns)
test_imputed.index = test_onehot_enc.index
# +
def align_dataframes(train_set, test_set):
if train_set.shape[1] > test_set.shape[1]:
cols = train_set.columns.difference(test_set.columns)
df = pd.DataFrame(0, index=test_set.index, columns=cols)  # new columns go into the test set, so index on it
test_set[df.columns] = df
elif train_set.shape[1] < test_set.shape[1]:
cols = test_set.columns.difference(train_set.columns)
df = pd.DataFrame(0, index=train_set.index, columns=cols)  # new columns go into the train set, so index on it
train_set[df.columns] = df
align_dataframes(features_imputed, test_imputed)
# +
test_imputed = test_imputed.fillna(value=0)
# Uncomment this line to export DataFrame
# test_imputed.to_csv('../data/interim/test_imputed.csv')
# -
# ### Feature Transformation
# #### Transforming Skewed Continuous Features
#
# A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. We'll need to check the following continuous data features for 'skew'.
#
# - LotFrontage
# - LotArea
# - MasVnrArea
# - BsmtFinSF1
# - BsmtFinSF2
# - TotalBsmtSF
# - 1stFlrSF
# - 2ndFlrSF
# - LowQualFinSF
# - GrLivArea
# - GarageArea
# - WoodDeckSF
# - OpenPorchSF
# - EnclosedPorch
# - 3SsnPorch
# - ScreenPorch
# - PoolArea
# - MiscVal
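As an aside (not part of the original notebook), the effect of a log transform on skew can be checked numerically; the lognormal toy data below is an assumption standing in for an area-like feature with a long right tail:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Lognormal toy data: heavily right-skewed, like many area features
values = pd.Series(rng.lognormal(mean=3.0, sigma=1.0, size=1000))

raw_skew = values.skew()
log_skew = np.log1p(values).skew()
print(raw_skew, log_skew)
```

`np.log1p` (i.e. `log(x + 1)`) is the same shifted logarithm applied to the continuous columns below; it handles zero-valued entries while compressing the tail.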
continuous_cols = ['LotFrontage', 'LotArea', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'TotalBsmtSF', '1stFlrSF',
'2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF',
'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal']
# +
skewed = ['ScreenPorch', 'PoolArea', 'LotFrontage', '3SsnPorch', 'LowQualFinSF']
fig = plt.figure(figsize = (16,10));
# Skewed feature plotting
for i, feature in enumerate(skewed):
ax = fig.add_subplot(2, 3, i+1)
sns.histplot(features_imputed[feature], bins=20, color='#00A0A0')
ax.set_title("'%s' Feature Distribution"%(feature), fontsize = 14)
ax.set_xlabel("Value")
ax.set_ylabel("Number of Records")
ax.set_ylim((0, 1600))
ax.set_yticks([0, 400, 800, 1200, 1600])
ax.set_yticklabels([0, 400, 800, 1200, ">1600"])
# Plot aesthetics
fig.suptitle("Skewed Distributions of Continuous Data Features", \
fontsize = 16, y = 1.03)
fig.tight_layout()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/Skewed_Distributions.svg')
# +
features_log_xformed = pd.DataFrame(data = features_imputed)
# Since the logarithm of 0 is undefined, translate values a small amount to apply the logarithm successfully
features_log_xformed[continuous_cols] = features_imputed[continuous_cols].apply(lambda x: np.log(x + 1))
fig = plt.figure(figsize = (15,10));
# Skewed feature plotting
for i, feature in enumerate(skewed):
ax = fig.add_subplot(2, 3, i+1)
sns.histplot(features_log_xformed[feature], bins=20, color='#00A0A0')
ax.set_title("'%s' Feature Distribution"%(feature), fontsize = 14)
ax.set_xlabel("Value")
ax.set_ylabel("Number of Records")
ax.set_ylim((0, 1600))
ax.set_yticks([0, 400, 800, 1200, 1600])
ax.set_yticklabels([0, 400, 800, 1200, ">1600"])
# Plot aesthetics
fig.suptitle("Log-Transformed Distributions of Continuous Data Features", \
fontsize = 16, y = 1.03)
fig.tight_layout()
# Uncomment this line to save the figure.
# plt.savefig('../reports/figures/Log_Xformed_Distributions.svg')
# -------------------------------------------------
# Apply above log transformation steps on test set
# -------------------------------------------------
test_log_xformed = pd.DataFrame(data = test_imputed)
# Since the logarithm of 0 is undefined, translate values a small amount to apply the logarithm successfully
test_log_xformed[continuous_cols] = test_imputed[continuous_cols].apply(lambda x: np.log(x + 1))
# -
# We also need to perform a log-transformation on our target variable 'SalePrice' to remove skew.
# +
target_log_xformed = target.transform(np.log)
# Uncomment this line to export DataFrame
# target_log_xformed.to_csv('../data/interim/target_log_xformed.csv')
# -
# ### Feature Scaling
# #### Normalizing Numerical Features
#
# In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution. Normalization does, however, ensure that each feature is treated equally when applying supervised learners.
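One caveat worth illustrating (with synthetic data, not the housing set): the scaler should be fit on the training split only and then reused to transform the test split, so both splits share the same statistics and no information leaks from the test set:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X_train = rng.normal(loc=50.0, scale=10.0, size=(100, 3))
X_test = rng.normal(loc=50.0, scale=10.0, size=(20, 3))

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on train only
X_test_scaled = scaler.transform(X_test)        # reuse the training statistics

# Train columns are exactly standardized; test columns only approximately so
print(X_train_scaled.mean(axis=0), X_train_scaled.std(axis=0))
```

Calling `fit_transform` a second time on the test set would silently re-estimate the mean and standard deviation from the test data, putting the two splits on different scales.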
# +
# Initialize scaler, then apply it to the features
scaler = StandardScaler()
numerical = features_log_xformed.select_dtypes(include=np.number).columns
features_scaled = pd.DataFrame(data = features_log_xformed)
features_scaled[numerical] = scaler.fit_transform(features_log_xformed[numerical])
features_final = features_scaled.copy()
# Uncomment this line to export DataFrame
# features_scaled.to_csv('../data/interim/features_scaled.csv')
# Uncomment this line to export DataFrame
# features_final.to_csv('../data/processed/features_final.csv')
# Show an example of a record with scaling applied
features_final.head()
# -------------------------------------------------
# Apply above feature scaling steps on test set
# -------------------------------------------------
numerical = features_log_xformed.select_dtypes(include=np.number).columns  # use the training columns
test_scaled = pd.DataFrame(data = test_log_xformed)
test_scaled[numerical] = scaler.transform(test_log_xformed[numerical])  # reuse the scaler fitted on the training data
test_final = test_scaled.copy()
# Uncomment this line to export DataFrame
# test_scaled.to_csv('../data/interim/test_scaled.csv')
# Uncomment this line to export DataFrame
# test_final.to_csv('../data/processed/test_final.csv')
| notebooks/0.2.2-saw-data-preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Molecular system
# ## With OpenMM
#
# ```python
# import openmm as mm
# import openmm.app as app
# import openmm.unit as unit
# import numpy as np
#
# # Argon LJ parameters
# mass_1 = 39.948*unit.amu
# sigma_1 = 3.404*unit.angstroms
# epsilon_1 = 0.238*unit.kilocalories_per_mole
#
# # Xenon LJ parameters
# mass_2 = 131.293*unit.amu
# sigma_2 = 3.961*unit.angstroms
# epsilon_2 = 0.459*unit.kilocalories_per_mole
#
# # Reduced LJ parameters
# reduced_sigma = 0.5*(sigma_1+sigma_2)
# reduced_epsilon = np.sqrt(epsilon_1*epsilon_2)
#
# # Box and initial coordinates
# coordinates=[[0.0, 0.0, 0.0], [1.25, 0.0, 0.0]]*unit.nanometers
# box=[[2.5, 0.0, 0.0], [0.0, 2.5, 0.0], [0.0, 0.0, 2.5]]*unit.nanometers
#
# # Molecular Mechanics parameters
# cutoff_distance = 3.0*reduced_sigma
# switching_distance = 2.0*reduced_sigma
#
# # OpenMM topology
# topology = app.Topology()
# Ar_element = app.Element.getBySymbol('Ar')
# Xe_element = app.Element.getBySymbol('Xe')
# chain = topology.addChain('A')
# residue = topology.addResidue('Ar', chain)
# atom = topology.addAtom(name='Ar', element= Ar_element, residue=residue)
# residue = topology.addResidue('Xe', chain)
# atom = topology.addAtom(name='Xe', element= Xe_element, residue=residue)
#
# topology.setPeriodicBoxVectors(box[0], box[1], box[2])
#
# # OpenMM system
# system = mm.System()
#
# non_bonded_force = mm.NonbondedForce()
# non_bonded_force.setNonbondedMethod(mm.NonbondedForce.CutoffPeriodic)
# non_bonded_force.setUseSwitchingFunction(True)
# non_bonded_force.setCutoffDistance(cutoff_distance)
# non_bonded_force.setSwitchingDistance(switching_distance)
#
# system.addParticle(mass_1)
# charge_1 = 0.0 * unit.elementary_charge
# non_bonded_force.addParticle(charge_1, sigma_1, epsilon_1)
#
# system.addParticle(mass_2)
# charge_2 = 0.0 * unit.elementary_charge
# non_bonded_force.addParticle(charge_2, sigma_2, epsilon_2)
#
# system.setDefaultPeriodicBoxVectors(box[0], box[1], box[2])
#
# _ = system.addForce(non_bonded_force)
#
# ```
# ## With this library
#
# This test system is fully documented in [TwoLJParticles class API](../api/_autosummary/uibcdf_test_systems.TwoLJParticles.html). Let's see an example of how to interact with it:
import numpy as np
import matplotlib.pyplot as plt
from openmm import unit
# +
from uibcdf_systems import TwoLJParticles
coordinates=[[0.0, 0.0, 0.0], [1.25, 0.0, 0.0]]*unit.nanometers
box=[[2.5, 0.0, 0.0], [0.0, 2.5, 0.0], [0.0, 0.0, 2.5]]*unit.nanometers
# Particle 1 with Ar atom values
mass_1 = 39.948 * unit.amu
sigma_1 = 3.404 * unit.angstroms
epsilon_1 = 0.238 * unit.kilocalories_per_mole
# Particle 2 with Xe atom values
mass_2 = 131.293 * unit.amu
sigma_2 = 3.961 * unit.angstroms
epsilon_2 = 0.459 * unit.kilocalories_per_mole
molecular_system = TwoLJParticles(mass_1=mass_1, sigma_1=sigma_1, epsilon_1=epsilon_1,
mass_2=mass_2, sigma_2=sigma_2, epsilon_2=epsilon_2,
coordinates=coordinates, box=box)
# -
molecular_system.parameters
molecular_system.coordinates
molecular_system.box
molecular_system.topology
molecular_system.system
# Let's check that the molecular system behaves as predicted above with the reduced mass, sigma, and epsilon constants.
from uibcdf_systems.tools import get_potential_energy
get_potential_energy(molecular_system)
# +
coordinates = np.zeros([2,3], float) * unit.angstroms
xlim_figure = [2.0, 12.0]
ylim_figure = [-1.0, 2.0]
x = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstrom
V = [] * unit.kilocalories_per_mole
for xi in x:
coordinates[1,0] = xi
potential_energy = get_potential_energy(molecular_system, coordinates=coordinates)
V.append(potential_energy)
# +
def LJ (x, sigma, epsilon):
t = sigma/x
t6 = t**6
t12 = t6**2
return 4.0*epsilon*(t12-t6)
reduced_sigma = molecular_system.get_reduced_sigma()
reduced_epsilon = molecular_system.get_reduced_epsilon()
plt.plot(x, LJ(x, reduced_sigma, reduced_epsilon))
V._value = np.array(V._value)
plt.plot(x, V, linewidth=2, linestyle='--', color='red')
x_min = 2**(1/6)*reduced_sigma
plt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')
coff = molecular_system.parameters['cutoff_distance']
plt.vlines(coff._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')
sdist = molecular_system.parameters['switching_distance']
plt.vlines(sdist._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')
plt.xlim(xlim_figure)
plt.ylim(ylim_figure)
plt.xlabel('x [{}]'.format(x.unit.get_symbol()))
plt.ylabel('V [{}]'.format(reduced_epsilon.unit.get_symbol()))
plt.show()
# -
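For reference, the analytic form the dashed comparison curve uses is the standard 12-6 Lennard-Jones potential with Lorentz-Berthelot combining rules (a textbook result, stated here for convenience):

```latex
V(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
\sigma = \tfrac{1}{2}\left(\sigma_1 + \sigma_2\right),
\qquad
\epsilon = \sqrt{\epsilon_1\,\epsilon_2}
```

Its minimum sits at $r_\mathrm{min} = 2^{1/6}\,\sigma$, which is the first vertical dashed line drawn in the plot.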
molecular_system.get_coordinates_minimum()
molecular_system.get_small_oscillations_time_period_around_minimum()
# As a final tip, there is a shortcut if the particles are real atoms such as argon and xenon: you don't need to remember or look up their sigma and epsilon values:
molecular_system = TwoLJParticles(atom_1='Ar', atom_2='Xe', coordinates=coordinates, box=box)
molecular_system.parameters
# -------------
#
# **Sources**
#
# http://docs.openmm.org/6.3.0/userguide/theory.html#lennard-jones-interaction
# https://openmmtools.readthedocs.io/en/0.18.1/api/generated/openmmtools.testsystems.LennardJonesPair.html
# https://openmmtools.readthedocs.io/en/latest/api/generated/openmmtools.testsystems.LennardJonesFluid.html
# https://gpantel.github.io/computational-method/LJsimulation/
| docs/contents/molecular_systems/LJ_particles/two_particles/molecular_system.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
url = 'https://music.bugs.co.kr/chart'
req = requests.get(url)
from bs4 import BeautifulSoup as bs
soup = bs(req.content,'html.parser')
soup
li = soup.select('p.title > a')
len(li)
titles = []  # avoid shadowing the built-in `list`
for i in li:
    titles.append(i.text)
titles
import pandas as pd
pd.DataFrame(titles, columns=['title'])
| scraping_bs4_genie_top200.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
import panel as pn
pn.extension()
# The ``Row`` layout allows arranging multiple panel objects in a horizontal container. It has a list-like API with methods to ``append``, ``extend``, ``clear``, ``insert``, ``pop``, ``remove`` and ``__setitem__``, which make it possible to interactively update and modify the layout.
#
# #### Parameters:
#
# For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
#
# * **``objects``** (list): The list of objects to display in the Row; it should not generally be modified directly except when replaced in its entirety.
#
# ___
# A ``Row`` layout can either be instantiated as empty and populated after the fact or using a list of objects provided as positional arguments. If the objects are not already panel components they will each be converted to one using the ``pn.panel`` conversion method.
# +
w1 = pn.widgets.TextInput(name='Text:')
w2 = pn.widgets.FloatSlider(name='Slider')
row = pn.Row('# Row', w1, w2, background='WhiteSmoke')
row
# -
# In general it is preferred to modify layouts only through the provided methods and to avoid modifying the ``objects`` parameter directly. The one exception is when replacing the list of ``objects`` entirely; otherwise it is recommended to use the methods on the ``Row`` itself to ensure that its rendered views are rerendered in response to the change. As a simple example we might add an additional widget to the ``row`` using the append method:
w3 = pn.widgets.Select(options=['A', 'B', 'C'], name='Select')
row.append(w3)
# On a live server or in a notebook the `row` displayed above will dynamically expand in size to accommodate all three widgets and the title. To see the effect in a statically rendered page, we will display the row a second time:
row
# In general a ``Row`` does not have to be given an explicit ``width``, ``height`` or ``sizing_mode``, allowing it to adapt to the size of its contents. However in certain cases it can be useful to declare a fixed-size layout, which its responsively sized contents will then fill, making it possible to achieve equal spacing between multiple objects:
pn.Row(
pn.Spacer(background='red', sizing_mode='stretch_both'),
pn.Spacer(background='green', sizing_mode='stretch_both'),
pn.Spacer(background='blue', sizing_mode='stretch_both'),
height=200, width=600
)
# When no fixed size is specified the row will expand to accommodate the sizing behavior of its contents:
# +
from bokeh.plotting import figure
p1 = figure(height=200, sizing_mode='stretch_width')
p2 = figure(height=200, sizing_mode='stretch_width')
p1.line([1, 2, 3], [1, 2, 3])
p2.circle([1, 2, 3], [1, 2, 3])
pn.Row(p1, p2)
| examples/reference/layouts/Row.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="HDm4_x_SS7vM"
# # Pandas - DataFrames
#
# ---
# -
import pandas as pd
import re
import numpy as np
# + [markdown] colab_type="text" id="1aDBgkUvgeTL"
# ## EX01
#
# * Load the list of Brazilian border municipalities into a DataFrame using the file "https://github.com/fernandosola/analisedados-python/raw/master/dados/arq_municipios_fronteiricos.tsv".
# +
# Complete the code below to read the file
# the file uses tabs as the separator
# inside strings, a tab is written as \t
df = pd.read_csv(
"dados/arq_municipios_fronteiricos.tsv",
sep='\t',
encoding='utf8',
)
# View the first 5 rows of the DataFrame
df.head()
# -
# * Clean up the Município column to remove the numbers and hyphens.
# * first create a function that receives a string with the municipality name, removes the numbers and hyphens, and returns the cleaned string
# * test your function until the result is satisfactory
# * pass your function as a parameter to the DataFrame's apply method
# +
# function that will clean a single municipality name
def tratar_nome_municipio(nome_municipio):
nome_municipio_tratado = re.sub(r'([\d]* [–-] )(.*)', r'\2', nome_municipio)
return nome_municipio_tratado
# test your function to verify that the transformation is correct
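A quick self-contained sanity check of the regex (the sample names below are illustrative, not taken from the dataset):

```python
import re

def tratar_nome_municipio(nome_municipio):
    # drop the leading "NNN – " prefix, keeping only the municipality name
    return re.sub(r'([\d]* [–-] )(.*)', r'\2', nome_municipio)

print(tratar_nome_municipio('001 – Uruguaiana'))  # Uruguaiana
print(tratar_nome_municipio('Uruguaiana'))        # unchanged when no prefix
```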
# +
# apply the function using the apply method
# -
# display all the data and check that the result is correct
# +
# with all municipalities properly cleaned,
# overwrite the Município column with the new values
# display the information
# -
# ## EX02
#
# Still using the DataFrame loaded in the previous exercise, check the data types of the dataframe's columns. All numeric data should be converted to numeric types.
#
# * use the DataFrame's info() method to see details of the column data types
# * check whether the numbers use decimal and thousands separators compatible with the locale. If necessary, make the appropriate substitutions.
# check the column types of the DataFrame: df
# +
# convert the 'Área territorial' field
def converter_para_float(texto):
try:
if type(texto) == str:
t = texto.replace(' ', '')
t = t.replace('.', '')
t = t.replace(',', '.')
n = float(t)
return n
elif type(texto) == int or type(texto) == float:
return texto
except:
pass
return np.NaN
# test your function
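A self-contained check of the conversion function (the sample values are illustrative):

```python
import numpy as np

def converter_para_float(texto):
    # convert Brazilian-formatted numbers such as '1.234,56' to float
    try:
        if type(texto) == str:
            t = texto.replace(' ', '').replace('.', '').replace(',', '.')
            return float(t)
        elif type(texto) == int or type(texto) == float:
            return texto
    except Exception:
        pass
    return np.nan

print(converter_para_float('1.234,56'))  # 1234.56
print(converter_para_float(10))          # 10
print(converter_para_float('abc'))       # nan
```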
# +
# replace the column with the converted values
# do the same for all numeric columns
# -
# print the column information again and check the types
# print the first rows of the DataFrame again
# ## EX03
#
#
# Create a column with the state abbreviations.
#
# * create a set from the Estado column. Assign it to a variable called __nomes_estados__.
# create the set and check its contents
nomes_estados = set(df['Estado'])
nomes_estados
# Did you spot a problem in the state names?
# We will fix it at the end.
#
# * create a dictionary where the full state name is the key and the abbreviation is the value: __dic_nomes_siglas__
# create the dictionary
dic_nomes_siglas = {'Acre': 'AC',
'Amapá': 'AP',
'Amazonas': 'AM',
'Mato Grosso': 'MT',
'Mato Grosso do Sul': 'MS',
'Paraná': 'PR',
'Pará': 'PA',
'Rio Grande do Sul': 'RS',
'Rondônia': 'RO',
'Roraima': 'RR',
'Santa Cataria': 'SC',
'Santa Catarina': 'SC',
}
# * use the DataFrame's map function to create a new column of abbreviations from the state names. Pass the dictionary you created to the map function. Assign the resulting Series to the variable __coluna_siglas_uf__.
# +
# map the values and assign them to: coluna_siglas_uf
# check the first 10 items created
# +
# create the sigla column
# check the dataframe info
# -
# check how many records have the state name of Santa Catarina misspelled
# fix the records with the misspelled Santa Catarina state name
# check again how many records still have the misspelled name
# ## EX04
#
# Identify which municipalities are 2 or more standard deviations away in the PIB column.
# normalize the PIB column in numbers of standard deviations
# which cities are more than 2 standard deviations away
# ## EX05
#
# Quick questions.
# how many records have NaN in the IDH/2000 column?
# how many cities per state?
# +
# sort the DataFrame by the municipality name
# is the ordering correct?
from libs.texto import TratamentoTexto
# -
# ___
# __Material produced for the course__:
# * Introdução à Análise de Dados com Python
#
# __Author__:
# * <NAME>
#
# __Revision__:
# * 1.1
| 07_pandas_exercicios_dataframe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Now You Code 1: Word Count
#
# Let's write a program which takes a file name as input, counts the number of words in the file, then outputs the word count.
#
# The program should handle `FileNotFoundError` as an exception in case you enter a file that does not exist.
#
# Sample Run #1:
#
# ```
# Word Count!
# Enter Filename: NYC1-romeo.txt
# There are 33 words in NYC1-romeo.txt
# ```
#
# Sample Run #2:
#
# ```
# Word Count!
# Enter Filename: testing
# File Not Found: testing
# ```
# Try these files:
#
# - `NYC1-cant-stop-the-feeling.txt`
# - `NYC1-preamble.txt`
# - `NYC1-romeo.txt` or
# - `NYC1-zork.txt`
#
#
# **HINT**: Read in the file a line at a time, then split the line into words and count each word... and remember your algorithm does not need to account for exceptions... those are programming concerns!
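One possible sketch of the hinted algorithm (shown with an in-memory file so the cell runs anywhere; this is a hedged illustration, not the official solution, and the sample text and count are illustrative):

```python
import io

def word_count(file_obj):
    # read a line at a time, split each line into words, and count them
    count = 0
    for line in file_obj:
        count += len(line.split())
    return count

sample = io.StringIO("but soft what light\nthrough yonder window breaks\n")
print(word_count(sample))  # 8
```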
#
# ## Step 1: Problem Analysis
#
# Inputs:
#
# Outputs:
#
# Algorithm (Steps in Program):
#
#
#
# +
## Step 2: write code here
# -
# ## Step 3: Questions
#
# 1. Where are these files located? Make your own file, add words to it then run with the word count program.
# 2. Explain a strategy for making this program smarter so that it does not include punctuation in the word counts.
# 3. Is it possible to have a file with a word count of 0? Explain.
#
# ## Reminder of Evaluation Criteria
#
# 1. Was the problem attempted (analysis, code, and answered questions)?
# 2. Was the problem analysis thought out? (does the program match the plan?)
# 3. Does the code execute without syntax error?
# 4. Does the code solve the intended problem?
# 5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
#
| content/lessons/08/Now-You-Code/NYC1-Word-Count.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# This notebook compares the outputs of two static LAMMPS jobs, one of which is interactive. The outputs of the interactive and non-interactive jobs are then checked for consistency.
from pyiron_atomistics import Project
import numpy as np
import pandas
pr = Project("water_interactive")
pr.remove_jobs_silently()
dx = 0.7
cell = np.eye(3) * 10
r_O = [0, 0, 0]
r_H1 = [dx, dx, 0]
r_H2 = [-dx, dx, 0]
water = pr.create_atoms(elements=['H', 'H', 'O'],
positions=[r_H1, r_H2, r_O],
cell=cell, pbc=True)
water_potential = pandas.DataFrame({
'Name': ['H2O_tip3p'],
'Filename': [[]],
'Model': ["TIP3P"],
'Species': [['H', 'O']],
'Config': [['# @potential_species H_O ### species in potential\n', '# W.<NAME> et.al., The Journal of Chemical Physics 79, 926 (1983); https://doi.org/10.1063/1.445869\n', '#\n', '\n', 'units real\n', 'dimension 3\n', 'atom_style full\n', '\n', '# create groups ###\n', 'group O type 2\n', 'group H type 1\n', '\n', '## set charges - beside manually ###\n', 'set group O charge -0.830\n', 'set group H charge 0.415\n', '\n', '### TIP3P Potential Parameters ###\n', 'pair_style lj/cut/coul/long 10.0\n', 'pair_coeff * * 0.0 0.0 \n', 'pair_coeff 2 2 0.102 3.188 \n', 'bond_style harmonic\n', 'bond_coeff 1 450 0.9572\n', 'angle_style harmonic\n', 'angle_coeff 1 55 104.52\n', 'kspace_style pppm 1.0e-5\n', '\n']]
})
# Interactive job
job_int = pr.create.job.Lammps("test", delete_existing_job=True)
job_int.structure = water
job_int.potential = water_potential
job_int.interactive_open()
job_int.interactive_water_bonds = True
job_int.calc_static()
job_int.run()
job_int.interactive_close()
# Non-interactive job
job = pr.create.job.Lammps("test_ni", delete_existing_job=True)
job.structure = water
job.potential = water_potential
job.calc_static()
job.run()
# +
# Assert that the unit conversions work even in the interactive mode
int_nodes = job_int["output/generic"].list_nodes()
usual_nodes = job["output/generic"].list_nodes()
for node in int_nodes:
if node in usual_nodes:
print(node)
assert np.allclose(job_int["output/generic/" + node], job["output/generic/" + node])
# -
| notebooks/integration/interactive_units.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import os
import sys
import numpy as np
import glob
import matplotlib.pyplot as plt
files = sorted(glob.glob( "/home/hiten/Desktop/Project 4/train/data/0b599d0670fcbafcaa8ed5567c0f4b10b959e6e49eed157be700bc62cffd1876" + "/frame*.png" ))
images = [cv2.imread(x,0) for x in files]
len(images)
mask = cv2.imread( "/home/hiten/Desktop/Project 4/train/data/0b599d0670fcbafcaa8ed5567c0f4b10b959e6e49eed157be700bc62cffd1876" + "/mask.png" ,0 )
plt.figure()
plt.subplot(1,2,1)
plt.imshow( mask*127, cmap='gray' )
plt.subplot(1,2,2)
plt.imshow(images[0], cmap='gray')
prvs = images[0]
next = images[1]
# the frames are grayscale, so build a separate 3-channel array for the HSV visualization
hsv = np.zeros((prvs.shape[0], prvs.shape[1], 3), dtype=np.uint8)
hsv[..., 1] = 255
# We get a 2-channel array with optical flow vectors, (u, v).
# We find their magnitude and direction and color-code the result for better visualization:
# direction corresponds to the Hue value of the image,
# magnitude corresponds to the Value plane.
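The magnitude/direction step can be illustrated without OpenCV; `cv2.cartToPolar` computes the same quantities per pixel (a single vector is shown here for clarity):

```python
import numpy as np

# (u, v) flow vector -> magnitude and angle, as cv2.cartToPolar does per pixel
u, v = 3.0, 4.0
mag = np.hypot(u, v)     # 5.0
ang = np.arctan2(v, u)   # ~0.927 rad (cartToPolar also returns radians by default)
print(mag, ang)
```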
img2 = (images[0])
for ind, img_path in enumerate(images[:-1]):
img1 = img2
img2 = (images[ind+1])
flow = cv2.calcOpticalFlowFarneback(img1,img2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1])
print(mag)
# +
frame1 = images[0]
prvs = frame1  # frames were loaded as grayscale already, so no cvtColor is needed
hsv = np.zeros((frame1.shape[0], frame1.shape[1], 3), dtype=np.uint8)
hsv[..., 1] = 255
index =0
while(1):
index += 1
if index == 100: break
frame2 = images[index]
    next = frame2  # frames are already grayscale
flow = cv2.calcOpticalFlowFarneback(prvs,next, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1])
hsv[...,0] = ang*180/np.pi/2
hsv[...,2] = cv2.normalize(mag,None,0,255,cv2.NORM_MINMAX)
bgr = cv2.cvtColor(hsv,cv2.COLOR_HSV2BGR)
cv2.imshow('frame2',bgr)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
elif k == ord('s'):
cv2.imwrite('opticalfb.png',frame2)
cv2.imwrite('opticalhsv.png',bgr)
prvs = next
cv2.destroyAllWindows()
# -
| analysis/OpticalFLowVersion1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: artemis
# language: python
# name: artemis
# ---
# ## Given a collection of artworks (e.g., images of ArtEmis) for which humans have indicated an emotion, extract for each artwork a _histogram_ of the humans' emotional preferences.
# - you will use this to train an image2emotion classifier
import pandas as pd
import numpy as np
from artemis.emotions import emotion_to_int
# +
#
# SET YOUR PATHS.
#
#
# I use the ArtEmis dataset with _minimal_ preprocessing
# as prepared by the script preprocess_artemis_data.py --preprocess-for-deep-nets **False** (see STEP.1 at top-README)
# Note, that here you can also use the directly downloaded "artemis_dataset_release_v0.csv'" since the
# preprocess_artemis_data.py does not change the emotions of the "raw" data.
#
artemis_csv = '/home/optas/DATA/OUT/artemis/preprocessed_data/for_analysis/artemis_preprocessed.csv'
# or
# artemis_csv = '/home/optas/DATA/OUT/artemis/official_data/artemis_dataset_release_v0.csv'
save_file = '../../data/image-emotion-histogram.csv' # where to save the result.
# +
df = pd.read_csv(artemis_csv)
print(len(df))
print(df.columns)
u_emo = df.emotion.unique()
print('\nUnique Emotions:', u_emo)
n_emotions = len(u_emo)
# -
df['emotion_label'] = df.emotion.apply(emotion_to_int)
def collect_image_distribution(g):
    """ Apply to each pandas group `g` (artwork) to extract an *unnormalized* distribution of the emotions indicated.
    """
image_distribution = np.zeros(n_emotions, dtype=np.float32)
for l in g.emotion_label:
image_distribution[l] += 1
return image_distribution
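The per-group counting loop can equivalently be written with `np.bincount`, shown here as a hedged alternative (the labels below are illustrative, not from the dataset):

```python
import numpy as np

# emotion labels indicated by annotators for one artwork (illustrative)
labels = [0, 2, 2, 7, 4]
n_emotions = 9
hist = np.bincount(labels, minlength=n_emotions).astype(np.float32)
print(hist)  # [1. 0. 2. 0. 1. 0. 0. 1. 0.]
```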
image_groups = df.groupby(['art_style', 'painting']) # each group is now a unique artwork
image_distributions = image_groups.apply(collect_image_distribution)
# assert each image has at least 5 (human) votes!
x = image_distributions.apply(sum)
assert all(x.values >= 5)
data = []
for row in image_distributions.items():
style = row[0][0]
name = row[0][1]
dist = row[1]
data.append([style, name, dist.tolist()])
data = pd.DataFrame(data, columns=['art_style', 'painting', 'emotion_histogram'])
data.head()
# Quick check of third row above.
mask = (df.art_style == 'Abstract_Expressionism') & (df.painting == 'aaron-siskind_feet-102-1957')
df[mask]['emotion_label']
data.to_csv(save_file, index=False)
# +
## OK, now go and run the next notebook to use these histograms to train an Image-2-Emotion classifier!
| artemis/notebooks/analysis/extract_emotion_histogram_per_image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# Importing the pandas library
# + slideshow={"slide_type": "-"}
import pandas as pd # import the pandas library under the name pd
# + [markdown] slideshow={"slide_type": "slide"}
# # Series
# + slideshow={"slide_type": "-"}
prices = [1000, 1010, 1020] # create a list holding stock prices
# + slideshow={"slide_type": "-"}
dates = pd.date_range('2018-12-01', periods=3) # generate dates with the date_range function
dates
# -
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
# + slideshow={"slide_type": "slide"}
help(pd.date_range)
# + slideshow={"slide_type": "slide"}
pd.date_range('2018-12-01', '2018-12-31', freq='W-FRI')
# -
# https://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases
# + slideshow={"slide_type": "slide"}
# create a Series with the prices as data and the dates as index
s = pd.Series(prices, index=dates) # pd.Series(data)
s
# -
s2 = pd.Series(prices) # a Series without an explicit index
s2
# + slideshow={"slide_type": "slide"}
s2[3] = 1030 # add data to the Series
s2
# -
s[pd.to_datetime('2018-12-04')] = 1030 # add data using an index
s
# + [markdown] slideshow={"slide_type": "slide"}
# Looking up Series data
# -
s[2] # extract data array-style
s['2018-12-03'] # extract data by index
# + [markdown] slideshow={"slide_type": "slide"}
# # DataFrame
# -
# Building a DataFrame from a dictionary
prices = {'A전자' : [1000, 1010, 1020],
'B화학' : [2000, 2010, 2020],
'C금융' : [3000, 3010, 3020]}
df1 = pd.DataFrame(prices) # pd.DataFrame(data)
df1
# + slideshow={"slide_type": "slide"}
df2 = pd.DataFrame(prices, index=dates) # DataFrame with an index
df2
# + [markdown] slideshow={"slide_type": "slide"}
# Selecting data - by index position
# -
df2.iloc[0] # select a row - [row 0, columns omitted]
df2.iloc[:, 0] # select a column - [all rows, column 0]
df2.iloc[0, 0] # specify row and column - [row 0, column 0]
# + [markdown] slideshow={"slide_type": "slide"}
# Selecting data - by label
# + slideshow={"slide_type": "-"}
df2.loc['2018-12-01'] # [row '2018-12-01', columns omitted]
# -
df2.loc[:, 'A전자'] # [all rows, column 'A전자']
df2.loc['2018-12-01', 'A전자'] # [row '2018-12-01', column 'A전자']
# + slideshow={"slide_type": "slide"} active=""
# Other approaches - just note that these also work
# + slideshow={"slide_type": "-"}
df2['A전자'] # select a column
# -
df2.A전자 # same as df2['A전자']
df2['A전자']['2018-12-01'] # select row '2018-12-01' of df2['A전자']
df2.loc[:, 'A전자']['2018-12-01']
# + [markdown] slideshow={"slide_type": "slide"}
# Adding data
# -
# add a column to the DataFrame
df2['D엔터'] = [4000, 4010, 4020] # dataframe[column_name] = [data]
df2
# add a DataFrame column from a Series
df2['E텔레콤'] = s # dataframe[column_name] = series
df2
# + [markdown] slideshow={"slide_type": "slide"}
# Extending the DataFrame
# -
# give the Series a name
s.name = 'F소프트' # series.name = 'name'
s
# concatenate DataFrames
df2 = pd.concat([df2, s], axis=1) # pd.concat([dataframe_a, dataframe_b], axis=direction)
# axis=1 extends column-wise, axis=0 extends row-wise
df2
# + slideshow={"slide_type": "slide"}
pd.concat([df2, s], axis=0) # axis=1 was not set properly, so the frame grows row-wise instead
# + slideshow={"slide_type": "slide"}
df3 = df2.iloc[0]
df3 = df3 + 60
df3.name = pd.to_datetime('2018-12-07')
df3
# + slideshow={"slide_type": "slide"}
# extend the DataFrame with a row
df2 = df2.append(df3) # dataframe_a.append(dataframe_b)
df2
# + slideshow={"slide_type": "slide"}
df3 = df2.iloc[0] + 50
df3.name = pd.to_datetime('20181206')
df2 = df2.append(df3)
df2
# + slideshow={"slide_type": "slide"}
df2 = df2.sort_index(axis=0) # re-sort the index in date order
df2
# + [markdown] slideshow={"slide_type": "slide"}
# Deleting data
# -
help(pd.DataFrame.drop) # if you don't know a function and have no internet -> check it with help(function)
# + slideshow={"slide_type": "slide"}
'''
Note: the result of drop is not assigned back,
so the output below looks as if the row was deleted,
but df2 itself still keeps the row
'''
# delete a row
df2.drop(pd.to_datetime('2018-12-06')) # dataframe.drop(label)
# + slideshow={"slide_type": "slide"}
df2.drop([pd.to_datetime('2018-12-02'), pd.to_datetime('2018-12-06')]) # delete multiple rows
# + slideshow={"slide_type": "slide"}
df2.drop('D엔터', axis=1) # delete a column
# + slideshow={"slide_type": "slide"}
df2.drop(['C금융', 'E텔레콤'], axis=1)
# + [markdown] slideshow={"slide_type": "slide"}
# Querying and slicing a DataFrame
# + slideshow={"slide_type": "-"}
df2.head(3) # view the first 3 rows of the DataFrame
# + slideshow={"slide_type": "slide"}
df2.tail(3) # view the last 3 rows of the DataFrame
# + slideshow={"slide_type": "slide"}
df2.iloc[2] # slicing by index position (iloc: Index Location)
# + slideshow={"slide_type": "slide"}
df2.loc['2018-12-03'] # use loc instead of iloc when slicing by index label
# + slideshow={"slide_type": "slide"}
df2.iloc[1:3] # select multiple rows
# -
df2.loc['2018-12-02':'2018-12-03'] # select multiple rows by label
# + slideshow={"slide_type": "slide"}
df2[1:3] # select multiple rows
# -
df2['C금융'] # column slicing works with the column name
# + slideshow={"slide_type": "slide"}
df2.iloc[1:3, 2] # select rows and a column by position
# + slideshow={"slide_type": "-"}
df2.loc['2018-12-02':'2018-12-03', 'C금융'] # select rows and a column by label
# + slideshow={"slide_type": "slide"}
# Scalar operations on a DataFrame
df2['E텔레콤'] * 10 # apply the same operation to every selected value
# + slideshow={"slide_type": "slide"}
df2.sum(axis=0) # operate across rows, i.e., column-wise sums
# + slideshow={"slide_type": "slide"}
df2.median(axis=1) # operate across columns, i.e., row-wise medians
# + slideshow={"slide_type": "slide"}
df2.describe() # summary statistics
# + [markdown] slideshow={"slide_type": "slide"}
# Filling gaps - handling missing values
# + slideshow={"slide_type": "-"}
df2
# + slideshow={"slide_type": "slide"}
df2.dropna() # drop rows containing NaN
# + slideshow={"slide_type": "slide"}
df2.fillna(0) # replace NaN with 0
# + slideshow={"slide_type": "slide"}
df2.fillna(method='ffill') # forward-fill NaN with the preceding value
# + slideshow={"slide_type": "slide"}
df2.fillna(method='bfill') # back-fill NaN with the following value
# -
| w2/w2-03 pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.5
# language: julia
# name: julia-0.4
# ---
using Laplacians
include("../src/akpw.jl")
# this should produce 22.9374025974026
srand(1)
a = grid2(1100)
t = akpwU(a)
sum(comp_stretches(t,a))/nnz(a)
# this should produce 32.47723881214327
#a = grid2(1100)
t = akpw(a)
sum(comp_stretches(t,a))/nnz(a)
# +
#= this should produce
0.835222 seconds (3.65 M allocations: 272.523 MB, 17.63% gc time)
0.900887 seconds (5.59 M allocations: 365.142 MB, 22.07% gc time)
Out[7]:
1x2 Array{Float64,2}:
6.76587 6.80419
=#
a = wtedChimera(60100,22)
f(t) = sum(comp_stretches(t,a))/nnz(a)
@time t5 = akpw(a);
@time t2 = akpw(a, ver=2);
[f(t5) f(t2)]
# -
a = wtedChimera(100200,96)
#a=a*a
(ai,aj,av) = findnz(a)
ijva = IJVindList(ai,aj,av);
ijva = ijva[randperm(length(ijva))];
@time l2 = sortIJVind2(ijva);
@time l1 = sortIJVind(ijva);
@time l3 = sortIJVind3(ijva);
mean(l3 .== l1)
byi(x::IJVind) = x.v
o = ord(isless, byi, false, Forward)
sort!(l3, 1, 6, InsertionSort, o)
l3[1:10]
include("testTrees.jl")
out = testTrees3(500,100);
for i in 1:10000
a = wtedChimera(10,i)
tr = akpwish(a,ver=5)
if !isConnected(tr)
println(i)
break
end
end
vstat = function(x)
x = sort(x)
n = length(x)
n5 = div(n,5)
[mean(x) minimum(x) x[n5:n5:(4*n5)]' maximum(x)]
end
vstat(x)
include("testTrees.jl")
out = testTrees2(401,400)
# +
@time a = wtedChimera(601000,20)
f(t) = sum(comp_stretches(t,a))/nnz(a)
@time t6 = akpwish(a,ver=6)
@time t5 = akpwish(a,ver=5)
@time t2 = akpwish(a,ver=2)
println([f(t6) f(t5) f(t2)])
# -
@time a = wtedChimera(601000,20)
f(t) = sum(comp_stretches(t,a))/nnz(a)
@time t5 = akpwish(a,ver=5)
@time t4 = akpwish(a,ver=4)
@time t2 = akpwish(a,ver=2)
@time t3 = akpwish(a,ver=3)
@time tp = randishPrim(a)
@time tk = randishKruskal(a)
@time told = akpw(a)
println([f(t5) f(t4) f(t2) f(t3) f(tp) f(tk) f(told)])
# +
n = 300
st0 = Array(Float64,0)
st2 = Array(Float64,0)
st4 = Array(Float64,0)
st5 = Array(Float64,0)
stp = Array(Float64,0)
stk = Array(Float64,0)
stold = Array(Float64,0)
t0 = Array(Float64,0)
t2 = Array(Float64,0)
t4 = Array(Float64,0)
t5 = Array(Float64,0)
tp = Array(Float64,0)
tk = Array(Float64,0)
told = Array(Float64,0)
for i in 1:10
a = wtedChimera(n,i)
f(t) = sum(comp_stretches(t,a))/nnz(a)
try
tic()
tr = akpwish(a,ver=0)
at0 = toq()
ast0 = f(tr)
tic()
tr = akpwish(a,ver=2)
at2 = toq()
ast2 = f(tr)
tic()
tr = randishKruskal(a)
atk = toq()
astk = f(tr)
print(n, " ", i, " : ")
print(ast0, " ")
push!(t0,at0)
push!(st0,ast0)
print(ast2, " ")
push!(t2,at2)
push!(st2,ast2)
print(astk, " ")
push!(tk,atk)
push!(stk,astk)
catch
println("bad on wtedChimera ", n, " ", i)
end
end
mist = minimum([st0 st2 stk],2)
f = 1.3
stats(v) = [mean(v./mist) median(v./mist) maximum(v./mist) mean(v .== mist) mean(v .> f*mist)]
println(stats(st0))
println(stats(st2))
println(stats(stk))
# +
st0 = Array(Float64,0)
st2 = Array(Float64,0)
st4 = Array(Float64,0)
st5 = Array(Float64,0)
stp = Array(Float64,0)
stk = Array(Float64,0)
stold = Array(Float64,0)
t0 = Array(Float64,0)
t2 = Array(Float64,0)
t4 = Array(Float64,0)
t5 = Array(Float64,0)
tp = Array(Float64,0)
tk = Array(Float64,0)
told = Array(Float64,0)
# +
a = wtedChimera(301,8)
f(t) = sum(comp_stretches(t,a))/nnz(a)
tic()
tr = akpwish(a,ver=0)
at0 = toq()
ast0 = f(tr)
tic()
tr = akpwish(a,ver=2)
at2 = toq()
ast2 = f(tr)
tic()
tr = akpwish(a,ver=4)
at4 = toq()
ast4 = f(tr)
tic()
tr = akpwish(a,ver=5)
at5 = toq()
ast5 = f(tr)
tic()
tr = randishKruskal(a)
atk = toq()
astk = f(tr)
tic()
tr = randishPrim(a)
atp = toq()
astp = f(tr)
tic()
tr = akpw(a)
atold = toq()
astold = f(tr)
# -
# ## Parameter optimization
# +
st1 = Array(Float64,0)
st2 = Array(Float64,0)
st3 = Array(Float64,0)
stold = Array(Float64,0)
tic()
for i in 1:100
a = chimera(9003,i)
t1 = akpwU(a)
push!(st1, sum(comp_stretches(t1,a)) / nnz(a) )
t2 = akpwU(a,x->(1/(log(x+1))))
push!(st2, sum(comp_stretches(t2,a)) / nnz(a) )
t3 = akpwU(a,x->(1/(2*exp(sqrt(log(x))))))
push!(st3, sum(comp_stretches(t3,a)) / nnz(a) )
told = akpw(a)
push!(stold, sum(comp_stretches(told,a)) / nnz(a) )
end
toc()
mi = minimum([st1 st2 st3 stold],2)
f = 1.3
stats(v) = [mean(v./mi) median(v./mi) maximum(v./mi) mean(v .== mi) mean(v .> f*mi)]
println(stats(st1))
println(stats(st2))
println(stats(st3))
println(stats(stold))
# +
st5 = Array(Float64,0)
tic()
for i in 1:1000
a = chimera(90001,i)
t5 = akpwU(a,x->(1/(2*log(x))))
push!(st5, sum(comp_stretches(t5,a)) / nnz(a) )
end
toc()
mi = minimum([st1 st2 st3 st5 stold],2)
f = 1.3
stats(v) = [mean(v./mi) median(v./mi) maximum(v./mi) sum(v .== mi) sum(v .> f*mi)]
println(stats(st5))
# +
stnew = Array(Float64,0)
stp = Array(Float64,0)
stold = Array(Float64,0)
timenew = 0
timep = 0
timeold = 0
tic()
for i in 1:10000
a = wtedChimera(100,i)
tic()
tnew = akpwish(a)
timenew += toq()
push!(stnew, sum(comp_stretches(tnew,a)) / nnz(a) )
tic()
tp = randishPrim(a)
timep += toq()
push!(stp, sum(comp_stretches(tp,a)) / nnz(a) )
tic()
told = akpw(a)
timeold += toq()
push!(stold, sum(comp_stretches(told,a)) / nnz(a) )
end
toc()
mi = minimum([stnew stp stold],2)
f = 1.3
stats(v) = [mean(v./mi) median(v./mi) maximum(v./mi) mean(v .== mi) mean(v .> f*mi)]
println(stats(stnew))
println(stats(stp))
println(stats(stold))
@show timenew
#@show timep
@show timeold
# -
findmax(stnew./stold)
# +
function complsst(sz::Int, nruns::Int)
stnew = Array(Float64,0)
stp = Array(Float64,0)
stold = Array(Float64,0)
timenew = 0
timep = 0
timeold = 0
tic()
for i in 1:nruns
a = wtedChimera(sz,i)
tic()
tnew = akpwish(a)
timenew += toq()
push!(stnew, sum(comp_stretches(tnew,a)) / nnz(a) )
tic()
tp = randishPrim(a)
timep += toq()
push!(stp, sum(comp_stretches(tp,a)) / nnz(a) )
tic()
told = akpw(a)
timeold += toq()
push!(stold, sum(comp_stretches(told,a)) / nnz(a) )
end
toc()
mi = minimum([stnew stp stold],2)
f = 1.3
stats(v) = [mean(v./mi) median(v./mi) maximum(v./mi) mean(v .== mi) mean(v .> f*mi)]
println(stats(stnew))
println(stats(stp))
println(stats(stold))
@show timenew
@show timep
@show timeold
end
# -
complsst(1000,1001)
complsst(10010,101)
Profile.clear()
a = chimera(1000000,2);
include("../src/akpw.jl")
tnew = akpwish(chimera(50,1))
@time tnew = akpwish(a)
sum(comp_stretches(tnew,a)) / nnz(a)
include("../src/akpw.jl")
tnew = akpwish(chimera(50,1))
@time tnew = akpwish(a)
sum(comp_stretches(tnew,a)) / nnz(a)
told = akpw(chimera(50,1))
@time told = akpw(a)
sum(comp_stretches(told,a)) / nnz(a)
tp = randishPrim(chimera(50,1))
@time tp = randishPrim(a)
sum(comp_stretches(tp,a)) / nnz(a)
| devel/develAKPW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week One - Machine Learning & Statistics
# ## Computing and Coin Flipping
# ### Libraries and Formatting
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import seaborn as sns
import math
import itertools
plt.rcParams['figure.figsize'] = (12,8)
# ### What is the difference between Data and Information?
# - [Information Theory, Inference, and Learning Algorithms](https://www.inference.org.uk/itprnn/book.pdf) by [<NAME>](https://en.wikipedia.org/wiki/David_J._C._MacKay)
# ### Binomial Distribution
# - [Website: Star Trek - Binomial Distribution](https://stattrek.com/probability-distributions/binomial.aspx)
# - [Website: NIST Engineering Statistics Handbook - Binomial Distribution](https://www.itl.nist.gov/div898/handbook/eda/section3/eda366i.htm)
# ## Coin Flipping Simulation
# Flipping an unbiased coin, we'd expect the probability of heads to be 0.5, and the probability of tails to also be 0.5.
#
# We can use the binomial distribution to simulate flipping a coin. I looked at these simulations last semester, while completing a project on numpy.random.
#
# See [that project here](https://nbviewer.jupyter.org/github/MarionMcG/numpy.random/blob/master/numpy-random.ipynb).
# Flip a coin once, how many heads did you get?
x = int(np.random.binomial(1, 0.5, 1))
x
# Flip a coin 1000 times, twice, how many heads did you get?
x = np.random.binomial(1000, 0.5, 2)
x
# How likely are we to see a certain number of heads when flipping a coin n times?
# Find the probability of getting exactly 521 heads in 1000 flips
ss.binom.pmf(521, 1000, 0.5)
# How often do we see each number of heads when flipping a coin 1000 times? Simulate 1000 such experiments and plot the distribution:
x = np.random.binomial(1000, 0.5, 1000)
sns.distplot(x);
# As the number of trials increases, the distribution tends toward a normal distribution. Notice how the mean appears to be 500, and roughly 99.7% of the data lies within three standard deviations (about 450 to 550). The summary statistics back this up, but despite these results the data is not perfectly normally distributed.
np.mean(x)
y = np.std(x)
y*3
# What about an unfair or biased coin?
ax = np.random.binomial(10, 0.2, 10)
ax
# So let's say I flip my unfair coin, with p = 0.3, and my results are as follows:
#
# #### H H T T H H H T T T
# We're assuming that one flip has no effect on the next flip. The events are independent.
# The probability of getting these 5 heads in 10 flips, in this exact order
(0.3)*(0.3)*(0.7)*(0.7)*(0.3)*(0.3)*(0.3)*(0.7)*(0.7)*(0.7)
# In general, there's more than one way to get 5 heads in 10 flips of a coin. We can use the Binomial Distribution Formula to calculate the probability of getting 5 heads in any order.
# 
# Formula: $P(5 \text{ heads}) = \binom{10}{5} \, p^5 \, (1-p)^5$
nflips = 10
p = 0.3
# Probability of each possible number of heads (0 to 10) in 10 flips
d = [ss.binom.pmf(i, nflips, p) for i in range (nflips+1)]
d
# Notice that the probability of getting 0 heads is about 0.028, the probability of getting 1 head is about 0.121, etc.
# With the probability of heads equal to 0.3, the single most likely exact sequence of 10 flips is ten tails:
(1-0.3)**10
# However, because there are many ways to get three heads, the probability of that happening is greater than the probability of getting ten tails.
# ### n CHOOSE r
n = 10  # 10 flips
r = 6   # 6 heads
choose = lambda x, y: math.factorial(x)/(math.factorial(y)*math.factorial(x-y))
#Number of ways to get 6 heads from 10 flips
choose(n,r)
# Remember there's only one way to get ten heads..
choose(10, 10)
# or tails
choose(10, 0)
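# As a quick cross-check (a standalone sketch using only the standard library, so it is independent of scipy), we can evaluate the binomial formula directly and confirm it behaves like a probability distribution:

```python
import math

# Binomial pmf straight from the formula: C(n, r) * p**r * (1 - p)**(n - r)
def binom_pmf(r, n, p):
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

# Probability of exactly 5 heads in 10 flips of a coin with p = 0.3
print(binom_pmf(5, 10, 0.3))  # ≈ 0.1029

# The probabilities over all outcomes 0..10 heads sum to 1
print(sum(binom_pmf(r, 10, 0.3) for r in range(11)))  # ≈ 1.0
```

# `math.comb` requires Python 3.8+; on older versions the `choose` lambda above does the same job.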
# ### What's this got to do with Computing?
# Bits and Bytes
["".join(seq) for seq in itertools.product("01", repeat =8)]
# MacKay says there's more information in the flip of a fair coin than of a biased one, as there's more randomness in its outcome. Sequences generated by a heavily biased coin are close to deterministic, and they carry less information as a result.
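# MacKay's point can be made quantitative with the Shannon entropy of a single flip, $H(p) = -p\log_2 p - (1-p)\log_2(1-p)$. A small standalone sketch:

```python
import math

# Entropy (in bits) of one flip of a coin with heads-probability p
def entropy(p):
    if p in (0.0, 1.0):
        return 0.0  # outcome is certain: no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy(0.5))  # fair coin: 1.0 bit per flip
print(entropy(0.3))  # biased coin: ≈ 0.881 bits
print(entropy(1.0))  # deterministic "coin": 0.0 bits
```

# A fair coin maximizes the entropy; any bias makes the outcome more predictable and so less informative.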
| 1_coin_flipping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from datetime import datetime as dt
import uuid
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import pickle
import random
with open("./cleaned_dataset.p", "rb") as f:
    df = pickle.load(f)
len(df.index)
#total sessions
s = df.groupby(['UUID'])
# +
# total sessions
print('total length',len(s))
# total average
print('mean',s.size().mean())
# total median
print('median',s.size().median())
# -
short = s.filter(lambda x: len(x) < 6)
short_sessions = short.groupby(['UUID'])
# +
print(len(short_sessions))
print(len(short_sessions)/len(s))
# short average
print('mean',short_sessions.size().mean())
# short median
print('median',short_sessions.size().median())
# -
short_sessions.head(10)
long = s.filter(lambda x: len(x) >= 6)
long_sessions = long.groupby(['UUID'])
# +
print(len(long_sessions))
print(len(long_sessions)/len(s))
# long average
print('mean',long_sessions.size().mean())
# long median
print('median',long_sessions.size().median())
# -
long_sessions.head(10)
with open('s.p', 'wb') as f:
    pickle.dump(short, f)
# +
import torch
def integer_encode(vocabulary, data):
    # mapping from urls to int
    string_to_int = dict((s, i) for i, s in enumerate(vocabulary))
    # integer encode
    integer_encoded = [string_to_int[s] for s in data]
    return integer_encoded
def split_dataset(dataset):
    random.shuffle(dataset)
    train_size = int(0.5 * len(dataset))
    train = dataset[0:train_size]
    test = dataset[train_size:]  # take the full remainder ([train_size:-1] would drop the last session)
    return train, test
def create_dataset(df, input_column, target_column):
    vocab = list(set(df[input_column]))
    input_indices = integer_encode(vocab, df[input_column])
    df = df.assign(input_index=input_indices)
    x = []
    for uuid, row in df.groupby('UUID'):
        x.append(torch.LongTensor(row['input_index'].values))
    return x, vocab
# -
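# A minimal standalone sketch of the same encoding and splitting logic, using a made-up toy vocabulary (no torch or pandas needed, so it can be checked in isolation):

```python
import random

# Toy stand-ins for the real URL/action data (illustrative values only)
vocabulary = ['home', 'search', 'cart', 'checkout']
data = ['home', 'cart', 'home', 'checkout']

# Same mapping as integer_encode above
string_to_int = {s: i for i, s in enumerate(vocabulary)}
encoded = [string_to_int[s] for s in data]
print(encoded)  # → [0, 2, 0, 3]

# Same idea as split_dataset: shuffle, then take a 50/50 split
sessions = list(range(10))
random.seed(0)  # seed only so the example is reproducible
random.shuffle(sessions)
train_size = int(0.5 * len(sessions))
train, test = sessions[:train_size], sessions[train_size:]
print(len(train), len(test))  # → 5 5
```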
x, vocab = create_dataset(long,'action_cleaned','action_cleaned')
random.shuffle(x)
long_sessions_subset = x[0:len(short_sessions)]
print(len(long_sessions_subset))
# +
# split data into training and testing
x_train, x_test = split_dataset(long_sessions_subset)
d = {}
d['x_train'] = x_train
d['x_test'] = x_test
d['vocab'] = vocab
with open('long_sessions_3.p', 'wb') as f:
pickle.dump(d, f)
# -
with open('l.p', 'wb') as f:
pickle.dump(long, f)
| data_analysis/long_and_short_sessions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Backpropagation Again
#
# We run through the backpropagation procedure again.
#
# This time, we assume that the features are $\begin{bmatrix}8 & -2\end{bmatrix}$ and the correct category is $0$ which is one-hot encoded as $\begin{bmatrix}1 & 0\end{bmatrix}$.
#
# In this exercise, we compute those partial derivatives manually, so we can learn the nuts and bolts of the backpropagation algorithm.
# +
import torch
import torch.nn.functional as F
weights = torch.nn.Parameter(torch.Tensor([[1,2],[2,1]]))
print("Parameters (weights): "+str(weights.data))
bias = torch.nn.Parameter(torch.Tensor([[0,0]]))
print("Parameters (bias): "+str(bias.data))
features = torch.autograd.Variable(torch.Tensor([[8, -2]]))
print("Features: "+str(features.data))
target = torch.autograd.Variable(torch.LongTensor([0]))
#print("The correct class: "+str(target.data))
one_hot_target = torch.autograd.Variable(torch.Tensor([[1, 0]]))
print("One hot encoding of the correct class: "+str(one_hot_target.data))
# -
# We assume the weights are $\begin{bmatrix}1 & 2 \\ 2 & 1\end{bmatrix}$ at start.
#
# The training is assumed to proceed one data point at a time (batch size of 1).
#
# For this iteration of training, the features are $\begin{bmatrix}8 & -2\end{bmatrix}$ and the correct category is $0$, which is one-hot encoded as $\begin{bmatrix}1 & 0\end{bmatrix}$.
# +
if weights.grad is not None:
    weights.grad.data.zero_()
# Forward pass
result = torch.mm(features, weights) + bias
print("c: "+str(result.data))
softmax_result = F.softmax(result, dim=1)
print("Softmax of c: "+str(softmax_result.data))
log_softmax_result = F.log_softmax(result, dim=1)
print("Log softmax of c: "+str(log_softmax_result.data))
loss_nll_softmax = F.nll_loss(log_softmax_result, target)
print("NLL + log_softmax loss: "+str(loss_nll_softmax.data))
loss = F.cross_entropy(result, target)
print("Cross entropy loss: "+str(loss.data))
# Backward pass
print("transpose of features: "+str(features.data.t()))
grad_c = (softmax_result.data - one_hot_target.data)
print("grad of loss wrt to c: "+str(grad_c))
grad_weights = features.data.t().mm(grad_c)
print("grad of loss wrt to weights: "+str(grad_weights))
grad_bias = grad_c
print("grad of loss wrt to bias: "+str(grad_bias))
print("\tThe manually computed gradient of the loss with respect to weights is "+str(grad_weights))
print("\tThe manually computed gradient of the loss with respect to bias is "+str(grad_bias))
# You can now update the weights and bias
learning_rate = 0.01
weights.data = weights.data - learning_rate * grad_weights
bias.data = bias.data - learning_rate * grad_bias
print("\tThe weights are now "+str(weights.data))
print("\tThe bias is now "+str(bias.data))
# -
# ## Parameters
#
# At the end of the first pass, the weights (after being nudged around) should look something like this
#
# $$\begin{bmatrix}1.08 & 1.92 \\ 1.98 & 1.02\end{bmatrix}$$
#
# and the bias should look like this
#
# $$\begin{bmatrix}0.01 & -0.01\end{bmatrix}$$
#
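# The arithmetic above can be reproduced with nothing but the standard library. Below is a standalone sketch of the same forward pass, gradient, and update (no PyTorch), using the numbers from this notebook:

```python
import math

# Worked example from this notebook, recomputed in pure Python
W = [[1.0, 2.0], [2.0, 1.0]]
b = [0.0, 0.0]
x = [8.0, -2.0]
one_hot = [1.0, 0.0]

# forward: c = x @ W + b
c = [x[0] * W[0][j] + x[1] * W[1][j] + b[j] for j in range(2)]
# numerically stable softmax
m = max(c)
e = [math.exp(v - m) for v in c]
s = [v / sum(e) for v in e]
# gradient of cross-entropy loss wrt c is softmax - one_hot
gc = [s[j] - one_hot[j] for j in range(2)]
# grad_W = outer(x, gc); SGD step with lr = 0.01
lr = 0.01
W_new = [[W[i][j] - lr * x[i] * gc[j] for j in range(2)] for i in range(2)]
b_new = [b[j] - lr * gc[j] for j in range(2)]
print([[round(v, 2) for v in row] for row in W_new])  # → [[1.08, 1.92], [1.98, 1.02]]
print([round(v, 2) for v in b_new])                   # → [0.01, -0.01]
```

# The rounded results match the weights and bias quoted above.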
# ## Cross-Check Results
#
# You can check that our computations are correct by verifying the same with Pytorch's automatic backward pass.
# +
# Check our manual gradients against PyTorch's automatically computed gradients
loss.backward(retain_graph=True) # Setting retain_graph=True allows us to call loss.backward repeatedly in a local scope
grad_weights = weights.grad.data
print("grad of loss wrt to weights: "+str(grad_weights))
grad_bias = bias.grad.data
print("grad of loss wrt to bias: "+str(grad_bias))
if weights.grad is not None:
    weights.grad.data.zero_()
if bias.grad is not None:
    bias.grad.data.zero_()
# -
| exercise_720.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# ### Playing around with Iris ###
#
# We will use Iris in class to practice some attribute transformations and computing similarities.
#
# +
import matplotlib.pyplot as plt
from sklearn import datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, 2:4] # we only take petal length and petal width.
Y = iris.target
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=Y)
plt.xlabel('Petal length')
plt.ylabel('Petal width')
plt.show()
# -
import numpy as np
A = iris.data
a = A[0,:]
b = A[-1,:]
print a,b
c = np.log(a)
d = np.abs(c)
print d
for i in xrange(A.shape[1]):
    print np.min(A[:,i]), np.max(A[:,i])
c = A[:,0]
c_mean = np.mean(c)
c_std = np.std(c)
d = (c-c_mean)/c_std
print c_mean, c_std
print np.min(d), np.max(d), np.mean(d), np.std(d)
| Activity-data-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ENV/ATM 415: Climate Laboratory
#
# [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany
#
# # Lecture 10: Climate sensitivity and feedback
# ____________
# <a id='section1'></a>
#
# ## 1. Radiative forcing
# ____________
#
# We've seen the concept of Radiative Forcing before. It is the short-term change in the TOA energy budget when we add a forcing agent to the climate system, **before the surface has a chance to warm up**.
#
# The standard reference forcing is a **doubling of atmospheric CO$_2$**.
# The **radiative forcing** is a number in W m$^{-2}$, defined so that it is **positive if the system is gaining energy**:
#
# $$ \Delta R = \left(\text{ASR}_{2xCO2} - \text{OLR}_{2xCO2}\right) - \left(\text{ASR}_{ref} - \text{OLR}_{ref}\right)$$
#
# $\Delta R$ is a measure of the rate at which energy begins to accumulate in the climate system after an abrupt increase in greenhouse gases, but *before any change in climate* (i.e. temperature).
# ### Radiative forcing in a single-column model
# Let's set up a single-column Radiative-Convective model and look carefully at what happens when we add extra CO$_2$ to the column.
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
from metpy.plots import SkewT
# Get the observed air temperature
# The NOAA ESRL server was shut down in January 2019
#temperature_filename = 'air.mon.1981-2010.ltm.nc' # temperature
#ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/pressure/"
#ncep_air = xr.open_dataset(ncep_url + temperature_filename, decode_times=False)
url = 'http://apdrc.soest.hawaii.edu:80/dods/public_data/Reanalysis_Data/NCEP/NCEP/clima/pressure/air'
air = xr.open_dataset(url)
# The name of the vertical axis is different than the NOAA ESRL version..
ncep_air = air.rename({'lev': 'level'})
# Take global, annual average
weight = np.cos(np.deg2rad(ncep_air.lat)) / np.cos(np.deg2rad(ncep_air.lat)).mean(dim='lat')
Tglobal = (ncep_air.air * weight).mean(dim=('lat','lon','time'))
# Get the water vapor data
datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/BrianRose/CESM_runs/"
endstr = "/entry.das"
atm_control = xr.open_dataset( datapath + 'som_1850_f19/som_1850_f19.cam.h0.clim.nc' + endstr, decode_times=False)
Qglobal = ((atm_control.Q * atm_control.gw)/atm_control.gw.mean(dim='lat')).mean(dim=('lat','lon','time'))
# Make a model on same vertical domain as the GCM
state = climlab.column_state(lev=Qglobal.lev, water_depth=2.5)
rad = climlab.radiation.RRTMG(name='Radiation',
state=state,
specific_humidity=Qglobal.values,
timestep = climlab.constants.seconds_per_day,
albedo = 0.25, # tuned to give reasonable ASR for reference cloud-free model
)
conv = climlab.convection.ConvectiveAdjustment(name='Convection',
state=state,
adj_lapse_rate=6.5,
timestep=rad.timestep,)
rcm = rad + conv
rcm.name = 'Radiative-Convective Model'
print(rcm)
# First let's take a look at the default CO$_2$ amount in our reference model:
rcm.subprocess['Radiation'].absorber_vmr['CO2']
# That's 348 parts per million (ppm). Our atmosphere was at this level around the late 1980s.
# Before we can look at the effects of a CO2 perturbation we need to integrate our reference model out to equilibrium:
rcm.integrate_years(5)
# Are we close to energy balance?
rcm.ASR - rcm.OLR
# ### Clone the model and perturb CO2
# Make an exact clone with same temperatures
rcm_2xCO2 = climlab.process_like(rcm)
rcm_2xCO2.name = 'Radiative-Convective Model (2xCO2 initial)'
# Check to see that we indeed have the same CO2 amount
rcm_2xCO2.subprocess['Radiation'].absorber_vmr['CO2']
# Now double it!
rcm_2xCO2.subprocess['Radiation'].absorber_vmr['CO2'] *= 2
# and verify
rcm_2xCO2.subprocess['Radiation'].absorber_vmr['CO2']
# ### Instantaneous radiative forcing
# The simplest measure of radiative forcing is **the instantaneous change** in the energy budget **before the temperatures have a chance to adjust**.
#
# To get this we need to call the `compute_diagnostics` method, but not any forward timestep.
rcm_2xCO2.compute_diagnostics()
# Now take a look at the changes in the SW and LW budgets:
rcm_2xCO2.ASR - rcm.ASR
rcm_2xCO2.OLR - rcm.OLR
# So what is instantaneous radiative forcing for the doubling of CO2?
DeltaR_instant = (rcm_2xCO2.ASR - rcm_2xCO2.OLR) - (rcm.ASR - rcm.OLR)
DeltaR_instant
# The radiative forcing for a doubling of CO2 in this model is 2.18 W m$^{-2}$.
# As we can see above, almost all of the radiative forcing appears in the longwave. We have made the atmosphere **more optically thick** by adding CO$_2$.
#
# Think about this the same way we increased the absorptivity / emissivity parameter $\epsilon$ in the simple grey-gas model.
# ### Stratosphere-adjusted radiative forcing
# The point of measuring radiative forcing is that it should give us some information about how much global warming we should expect from a particular forcing agent.
#
# We will need to use our model to quantify the **eventual** temperature change associated with this forcing.
#
# It turns out, for reasons we won't get into here, that a more useful measure of the global warming impact of a forcing agent comes from thinking about **changes in radiative flux at the tropopause** rather than the Top of Atmosphere.
#
# The idea here is that we will **let the stratosphere adjust to the extra CO$_2$** while **holding the troposphere and surface temperatures fixed**.
# In this model, levels 0 through 12 are in the stratosphere; levels 13 and higher are in the troposphere:
rcm.lev[13:]
# So to compute stratosphere-adjusted forcing, we'll timestep the model, but continually reset the temperatures to their reference values below 226 hPa:
rcm_2xCO2_strat = climlab.process_like(rcm_2xCO2)
rcm_2xCO2_strat.name = 'Radiative-Convective Model (2xCO2 stratosphere-adjusted)'
for n in range(1000):
    rcm_2xCO2_strat.step_forward()
    # hold tropospheric and surface temperatures fixed
    rcm_2xCO2_strat.Tatm[13:] = rcm.Tatm[13:]
    rcm_2xCO2_strat.Ts[:] = rcm.Ts[:]
# Now we can compute the stratosphere-adjusted radiative forcing for the doubling of CO2:
DeltaR = (rcm_2xCO2_strat.ASR - rcm_2xCO2_strat.OLR) - (rcm.ASR - rcm.OLR)
DeltaR
# The result is about 4.3 W m$^{-2}$.
# ____________
#
# <a id='section2'></a>
#
# ## 2. Equilibrium climate sensitivity (without feedback)
# ____________
#
#
# We now ask the question: How much warming will we get (eventually) in response to this positive radiative forcing?
# We define the **Equilibrium Climate Sensitivity** (denoted **ECS** or $\Delta T_{2xCO2}$):
#
# *The global mean surface warming necessary to balance the planetary energy budget after a doubling of atmospheric CO$_2$.*
# We can go ahead and calculate ECS in our single-column model:
rcm_2xCO2_eq = climlab.process_like(rcm_2xCO2_strat)
rcm_2xCO2_eq.name = 'Radiative-Convective Model (2xCO2 equilibrium)'
rcm_2xCO2_eq.integrate_years(5)
# are we close to equilibrium?
rcm_2xCO2_eq.ASR - rcm_2xCO2_eq.OLR
# Let's follow what we have done before and plot the results on a nice Skew-T:
# +
def make_skewT():
    fig = plt.figure(figsize=(9, 9))
    skew = SkewT(fig, rotation=30)
    skew.plot(Tglobal.level, Tglobal, color='black', linestyle='-', linewidth=2, label='Observations')
    skew.ax.set_ylim(1050, 10)
    skew.ax.set_xlim(-90, 45)
    # Add the relevant special lines
    skew.plot_dry_adiabats(linewidth=0.5)
    skew.plot_moist_adiabats(linewidth=0.5)
    #skew.plot_mixing_lines()
    skew.ax.legend()
    skew.ax.set_xlabel('Temperature (degC)', fontsize=14)
    skew.ax.set_ylabel('Pressure (hPa)', fontsize=14)
    return skew

def add_profile(skew, model, linestyle='-', color=None):
    line = skew.plot(model.lev, model.Tatm - climlab.constants.tempCtoK,
                     label=model.name, linewidth=2)[0]
    skew.plot(1000, model.Ts - climlab.constants.tempCtoK, 'o',
              markersize=8, color=line.get_color())
    skew.ax.legend()
# -
skew = make_skewT()
add_profile(skew, rcm)
add_profile(skew, rcm_2xCO2_strat)
add_profile(skew, rcm_2xCO2_eq)
# What do you see here? What has changed?
# ### Calculate the ECS
#
# It is just the difference in surface temperature:
ECS_nofeedback = rcm_2xCO2_eq.Ts - rcm.Ts
ECS_nofeedback
# Doubling CO$_2$ in this model causes about 1.3 K of warming at equilibrium.
# What about the energy budget?
#
# Remember that we have let the model warm up to its new equilibrium!
#
# If we look at the differences between the two equilibrium states (before and after doubling CO$_2$), we find only very small and offsetting changes in SW and LW:
rcm_2xCO2_eq.OLR - rcm.OLR
rcm_2xCO2_eq.ASR - rcm.ASR
# The 1.3 K sensitivity we've just calculated is the warming that we would have *if there were no other changes (feedbacks) in response to the CO$_2$ induced warming!*
# ### The no-feedback response
# We have just calculated an equilibrium warming $\Delta T_0$ (in K) resulting from a radiative forcing $\Delta R$ (in W m$^{-2}$) for a model without feedback.
#
# Let's define the **no-feedback climate response parameter** $\lambda_0$ as
#
# $$ \lambda_0 = \frac{\Delta R}{\Delta T_0 } $$
# With the numbers we came up with above,
lambda0 = DeltaR / ECS_nofeedback
lambda0
# The no-feedback climate response parameter is $\lambda_0 = 3.3$ W m$^{-2}$ K$^{-1}$.
# What are some important processes that our model has neglected?
# ____________
# <a id='section3'></a>
#
# ## 3. The feedback concept
# ____________
#
# A concept borrowed from electrical engineering. You have all heard or used the term before, but we’ll try to take a more precise approach today.
#
# A feedback occurs when a portion of the output from the action of a system is added to the input and subsequently alters the output:
# <img src="http://www.atmos.albany.edu/facstaff/brose/classes/ENV415_Spring2018/images/feedback_sketch.png" width="500">
#
# The result of a loop system can either be **amplification** or **dampening** of the process, depending on the sign of the gain in the loop, which we will denote $f$.
#
# We will call amplifying feedbacks **positive** ($f>0$) and damping feedbacks **negative** ($f<0$).
#
# We can think of the “process” here as the entire climate system, which contains many examples of both positive and negative feedback.
# ### Example: the water vapor feedback
#
# The capacity of the atmosphere to hold water vapor (saturation specific humidity) increases exponentially with temperature. Warming is thus accompanied by moistening (more water vapor), which leads to more warming due to the enhanced water vapor greenhouse effect.
#
# **Positive or negative feedback?**
# ### Example: the ice-albedo feedback
#
# Colder temperatures lead to expansion of the areas covered by ice and snow, which tend to be more reflective than water and vegetation. This causes a reduction in the absorbed solar radiation, which leads to more cooling.
#
# **Positive or negative feedback?**
#
# *Make sure it’s clear that the sign of the feedback is the same whether we are talking about warming or cooling.*
# _____________
# <a id='section4'></a>
# ## 4. Climate feedback: some definitions
# ____________
#
# We start with an initial radiative forcing $\Delta R$, and get a response
# $$ \Delta T_0 = \frac{\Delta R}{\lambda_0} $$
#
#
# Now consider what happens in the presence of a feedback process. For a concrete example, let’s take the **water vapor feedback**. For every degree of warming, there is an additional increase in the greenhouse effect, and thus additional energy added to the system.
#
# Let’s denote this extra energy as
# $$ f \lambda_0 \Delta T_0 $$
#
# where $f$ is the **feedback amount**, a number that represents what fraction of the output gets added back to the input. $f$ must be between $-\infty$ and +1.
#
# For the example of the water vapor feedback, $f$ is positive (between 0 and +1) – the process adds extra energy to the original radiative forcing.
# The amount of energy in the full "input" is now
#
# $$ \Delta R + f \lambda_0 \Delta T_0 $$
#
# or
#
# $$ (1+f) \lambda_0 \Delta T_0 $$
# But now we need to consider the next loop. A fraction $f$ of the additional energy is also added to the input, giving us
#
# $$ (1+f+f^2) \lambda_0 \Delta T_0 $$
# and we can go round and round, leading to the infinite series
#
# $$ (1+f+f^2+f^3+ ...) \lambda_0 \Delta T_0 = \lambda_0 \Delta T_0 \sum_{n=0}^{\infty} f^n $$
#
# Question: what happens if $f=1$?
# It so happens that this infinite series has an exact solution
#
# $$ \sum_{n=0}^{\infty} f^n = \frac{1}{1-f} $$
# So the full response including all the effects of the feedback is actually
#
# $$ \Delta T = \frac{1}{1-f} \Delta T_0 $$
# This is also sometimes written as
# $$ \Delta T = g \Delta T_0 $$
#
# where
#
# $$ g = \frac{1}{1-f} = \frac{\Delta T}{\Delta T_0} $$
#
# is called the **system gain** -- the ratio of the actual warming (including all feedbacks) to the warming we would have in the absence of feedbacks.
# So if the overall feedback is positive, then $f>0$ and $g>1$.
#
# And if the overall feedback is negative?
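# A quick numeric check of the series sum (a standalone sketch): for $|f| < 1$ the partial sums of $f^n$ approach $1/(1-f)$, so positive feedback gives $g > 1$ and negative feedback gives $g < 1$.

```python
# System gain computed by summing the feedback loop directly
def gain(f, terms=60):
    return sum(f**n for n in range(terms))

print(gain(0.4), 1 / (1 - 0.4))   # positive feedback: g > 1 (amplification)
print(gain(-0.5), 1 / (1 + 0.5))  # negative feedback: g < 1 (dampening)
```

# As $f \rightarrow 1$ the sum diverges: a runaway feedback.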
# ____________
# <a id='section6'></a>
# ## 6. Contribution of individual feedback processes to Equilibrium Climate Sensitivity
# ____________
#
#
# Now what if we have several individual feedback processes occurring simultaneously?
#
# We can think of individual feedback amounts $f_1, f_2, f_3, ...$, with each representing a physically distinct mechanism, e.g. water vapor, surface snow and ice, cloud changes, etc.
# Each individual process takes a fraction $f_i$ of the output and adds to the input. So the feedback amounts are additive,
#
# $$ f = f_1 + f_2 + f_3 + ... = \sum_{i=1}^N f_i $$
# This gives us a way to compare the importance of individual feedback processes!
#
# The climate sensitivity is now
#
# $$ \Delta T_{2xCO2} = \frac{1}{1- \sum_{i=1}^N f_i } \Delta T_0 $$
#
# The climate sensitivity is thus **increased by positive feedback processes**, and **decreased by negative feedback processes**.
# ### Climate feedback parameters
#
# We can also write this in terms of the original radiative forcing as
#
# $$ \Delta T_{2xCO2} = \frac{\Delta R}{\lambda_0 - \sum_{i=1}^{N} \lambda_i} $$
#
# where
#
# $$ \lambda_i = \lambda_0 f_i $$
#
# known as **climate feedback parameters**, in units of W m$^{-2}$ K$^{-1}$.
#
# With this choice of sign conventions, $\lambda_i > 0$ for a positive feedback process.
# Individual feedback parameters $\lambda_i$ are then additive, and can be compared to the no-feedback parameter $\lambda_0$.
# We might decompose the net climate feedback into, for example
#
# - longwave and shortwave processes
# - cloud and non-cloud processes
#
# These individual feedback processes may be positive or negative. This is very powerful, because we can **measure the relative importance of different feedback processes** simply by comparing their $\lambda_i$ values.
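# As an illustration (the individual feedback parameters below are made-up placeholder values, not diagnosed ones; $\Delta R \approx 4.3$ W m$^{-2}$ and $\lambda_0 \approx 3.3$ W m$^{-2}$ K$^{-1}$ are the numbers derived above):

```python
DeltaR = 4.3      # W m-2, stratosphere-adjusted 2xCO2 forcing from our column model
lambda_0 = 3.3    # W m-2 K-1, no-feedback (Planck) response parameter

ECS_nofeedback = DeltaR / lambda_0
print(ECS_nofeedback)  # ≈ 1.3 K, as computed above

# Hypothetical individual feedback parameters, W m-2 K-1 (placeholder values only)
feedbacks = {'water_vapor': 1.6, 'lapse_rate': -0.6, 'albedo': 0.3}
ECS = DeltaR / (lambda_0 - sum(feedbacks.values()))
print(ECS)  # a positive net feedback amplifies the warming
```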
# ### Every climate model has a Planck feedback
#
# Our no-feedback response parameter $\lambda_0$ is often called the **Planck feedback**.
#
# It is not really a feedback at all. It is the most basic and universal climate process, and is present in every climate model. It is simply an expression of the fact that a warm planet radiates more to space than a cold planet.
#
# As we will see, our estimate of $\lambda_0 = 3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $ is essentially the same as the Planck feedback diagnosed from complex GCMs. Most climate models (and the real climate system) have other radiative feedback processes, such that
#
# $$\lambda = \lambda_0 - \sum_{i=1}^{N} \lambda_i \ne \lambda_0 $$
# ____________
# <a id='section7'></a>
# ## 7. Feedbacks diagnosed from complex climate models
# ____________
#
# ### Data from the IPCC AR5
#
# This figure is reproduced from the recent IPCC AR5 report. It shows the feedbacks diagnosed from the various models that contributed to the assessment.
#
# (Later in the term we will discuss how the feedback diagnosis is actually done)
#
# See below for complete citation information.
# 
# **Figure 9.43** | (a) Strengths of individual feedbacks for CMIP3 and CMIP5 models (left and right columns of symbols) for Planck (P), water vapour (WV), clouds (C), albedo (A), lapse rate (LR), combination of water vapour and lapse rate (WV+LR) and sum of all feedbacks except Planck (ALL), from Soden and Held (2006) and Vial et al. (2013), following Soden et al. (2008). CMIP5 feedbacks are derived from CMIP5 simulations for abrupt fourfold increases in CO2 concentrations (4 × CO2). (b) ECS obtained using regression techniques by Andrews et al. (2012) against ECS estimated from the ratio of CO2 ERF to the sum of all feedbacks. The CO2 ERF is one-half the 4 × CO2 forcings from Andrews et al. (2012), and the total feedback (ALL + Planck) is from Vial et al. (2013).
#
# *Figure caption reproduced from the AR5 WG1 report*
# Legend:
#
# - P: Planck feedback
# - WV: Water vapor feedback
# - LR: Lapse rate feedback
# - WV+LR: combined water vapor plus lapse rate feedback
# - C: cloud feedback
# - A: surface albedo feedback
# - ALL: sum of all feedbacks except Planck, i.e. ALL = WV+LR+C+A
# Things to note:
#
# - The models all agree strongly on the Planck feedback.
# - The Planck feedback is about $-3.3 ~\text{W} ~\text{m}^{-2} ~\text{K}^{-1} $ just like our above estimate of $\lambda_0$ (but with opposite sign convention -- watch carefully for that in the literature)
# - The water vapor feedback is strongly positive in every model.
# - The lapse rate feedback is something we will study later. It is slightly negative.
# - For reasons we will discuss later, the best way to measure the water vapor feedback is to combine it with lapse rate feedback.
# - Models agree strongly on the combined water vapor plus lapse rate feedback.
# - The albedo feedback is slightly positive but rather small globally.
# - By far the largest spread across the models occurs in the cloud feedback.
# - Global cloud feedback ranges from slightly negative to strongly positive across the models.
# - Most of the spread in the total feedback is due to the spread in the cloud feedback.
# - Therefore, most of the spread in the ECS across the models is due to the spread in the cloud feedback.
# ____________
# <a id='section8'></a>
# ## 8. Water vapor feedback in the radiative-convective model
# ____________
# In nature, as in a complex GCM, water vapor tends to increase as the air temperature warms.
#
# The main reason for this is that **the saturation specific humidity** (i.e. how much water vapor the air can hold) **increases strongly with temperature**.
#
# We can **parameterize** this effect in the column model by insisting that the **relative humidity remain fixed** as the column warms.
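# Before using climlab's `qsat`, here is a rough standalone illustration of how strongly saturation specific humidity grows with temperature. This uses a Tetens-style approximation, not climlab's exact formula:

```python
import numpy as np

def qsat_approx(T, p):
    """Approximate saturation specific humidity (kg/kg) at temperature T (K)
    and pressure p (hPa), using the Tetens formula for saturation vapor
    pressure over liquid water. An approximation for illustration only."""
    es = 6.112 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # hPa
    return 0.622 * es / (p - 0.378 * es)

# Near the surface, qsat grows by roughly 6-7% per degree of warming
print(qsat_approx(288.0, 1000.0))                               # roughly 0.01 kg/kg
print(qsat_approx(289.0, 1000.0) / qsat_approx(288.0, 1000.0))  # roughly 1.07
```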
# actual specific humidity
q = rcm.subprocess['Radiation'].specific_humidity
# saturation specific humidity (a function of temperature and pressure)
qsat = climlab.utils.thermo.qsat(rcm.Tatm, rcm.lev)
# Relative humidity
rh = q/qsat
# Plot specific humidity (blue) and relative humidity (red)
fig,ax = plt.subplots()
ax.plot(q*1000, rcm.lev, 'b-')
ax.invert_yaxis()
ax.grid()
ax.set_ylabel('Pressure (hPa)')
ax.set_xlabel('Specific humidity (g/kg)', color='b')
ax.tick_params('x', colors='b')
ax2 = ax.twiny()
ax2.plot(rh*100., rcm.lev, 'r-')
ax2.set_xlabel('Relative humidity (%)', color='r')
ax2.tick_params('x', colors='r')
# ### A radiative-convective model with fixed relative humidity
rcm_2xCO2_h2o = climlab.process_like(rcm_2xCO2)
for n in range(2000):
# At every timestep
# we calculate the new saturation specific humidity for the new temperature
# and change the water vapor in the radiation model
# so that relative humidity is always the same
qsat = climlab.utils.thermo.qsat(rcm_2xCO2_h2o.Tatm, rcm_2xCO2_h2o.lev)
rcm_2xCO2_h2o.subprocess['Radiation'].specific_humidity[:] = rh * qsat
rcm_2xCO2_h2o.step_forward()
# Check for energy balance
rcm_2xCO2_h2o.ASR - rcm_2xCO2_h2o.OLR
# What is the Equilibrium Climate Sensitivity of this new model?
ECS = rcm_2xCO2_h2o.Ts - rcm.Ts
ECS
# With the water vapor feedback, doubling CO$_2$ causes 3 K of warming at equilibrium, rather than the no-feedback sensitivity of 1.3 K.
#
# A pretty big difference!
# The system gain is the ratio
g = ECS / ECS_nofeedback
g
# So the moistening of the atmosphere more than doubles the global warming effect.
# What about the feedback parameter?
#
# This is just the ratio of the radiative forcing $\Delta R$ to the climate sensitivity:
lambda_net = DeltaR / ECS
lambda_net
# The net feedback is $\lambda = 1.43$ W m$^{-2}$ K$^{-1}$.
# Because of the additive nature of the feedbacks, this means that the individual contribution of the water vapor feedback is
#
# $$ \lambda_{h2o} = \lambda_0 - \lambda $$
lambda_h2o = lambda0 - lambda_net
lambda_h2o
# So we measure a water vapor feedback of $\lambda_{h2o} = 1.86$ W m$^{-2}$ K$^{-1}$.
#
# This is a **positive number** consistent with the **amplifying effect** of the water vapor feedback.
#
# The physical meaning of this number:
#
# **For every 1 degree of surface warming, the increased water vapor greenhouse effect provides an additional 1.86 W m$^{-2}$ of radiative forcing.**
# Compare back to the feedback figure from IPCC AR5. How does this analysis compare to the comprehensive GCMs? What is missing here?
| notes/L10_Sensitivity_Feedback.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import sys
import wandb
from omegaconf import OmegaConf as omg
import torch
import numpy as np
from agent import Agent
from trainer import Trainer, reward_fn
from data_generator_v2 import DataGenerator, SimulatedAnnealing
import time
import os
import pandas as pd
# +
def load_conf():
"""Quick method to load configuration (using OmegaConf). By default,
configuration is loaded from the default config file (config.yaml).
    Another config file can be specified through the command line.
    Also, configuration values can be overridden from the command line.
Returns:
OmegaConf.DictConfig: OmegaConf object representing the configuration.
"""
default_conf = omg.create({"config" : "config.yaml"})
sys.argv = [a.strip("-") for a in sys.argv]
cli_conf = omg.from_cli()
yaml_file = omg.merge(default_conf, cli_conf).config
yaml_conf = omg.load(yaml_file)
return omg.merge(default_conf, yaml_conf, cli_conf)
# -
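# The precedence implemented by `load_conf` (defaults, then the YAML file, then the command line) can be mimicked with plain dicts. This stand-in uses hypothetical keys, does not depend on OmegaConf, and unlike `omg.merge` does not merge nested keys recursively:

```python
def merge(*configs):
    # Later configs win, mirroring omg.merge's left-to-right precedence
    out = {}
    for c in configs:
        out.update(c)
    return out

default_conf = {"config": "config.yaml", "lr": 1e-3}
yaml_conf = {"lr": 1e-4, "batch_size": 64}   # values from the YAML file
cli_conf = {"batch_size": 128}               # command-line overrides
merged = merge(default_conf, yaml_conf, cli_conf)
print(merged)  # {'config': 'config.yaml', 'lr': 0.0001, 'batch_size': 128}
```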
def main():
conf = load_conf()
wandb.init(project=conf.proj_name, config=dict(conf))
agent = Agent(space_dim= conf.dimension, embed_hidden=conf.embed_hidden, enc_stacks=conf.enc_stacks, ff_hidden=conf.ff_hidden, enc_heads=conf.enc_heads, query_hidden=conf.query_hidden, att_hidden=conf.att_hidden, crit_hidden=conf.crit_hidden, n_history=conf.n_history, p_dropout=conf.p_dropout)
wandb.watch(agent)
dataset = DataGenerator()
trainer = Trainer(conf, agent, dataset)
trainer.run()
# Save trained agent
    dir_ = (f"{conf.dimension}D_MESH{conf.max_len}_b{conf.batch_size}"
            f"_e{conf.embed_hidden}_n{conf.ff_hidden}_s{conf.enc_stacks}"
            f"_h{conf.enc_heads}_q{conf.query_hidden}_a{conf.att_hidden}"
            f"_c{conf.crit_hidden}_lr{conf.lr}_d{conf.lr_decay_steps}"
            f"_{conf.lr_decay_rate}_steps{conf.steps}")
path = "save/"+dir_
if not os.path.exists(path):
os.makedirs(path)
save_path= str(path)+'/'+str(conf.model_path)
torch.save(agent.state_dict(), save_path)
input_test = []
for _ in range (conf.batch_size):
input_test1 = np.loadtxt("mpeg_4x3.txt", delimiter=",")
input_test.append(input_test1)
if conf.test:
device = torch.device(conf.device)
# Load trained agent
agent.load_state_dict(torch.load(save_path))
agent.eval()
agent = agent.to(device)
start_time = time.time()
running_reward = 0
for _ in range(conf.test_steps):
#input_batch = dataset.test_batch(conf.batch_size, conf.max_len, conf.dimension, shuffle=False)
input_batch = torch.Tensor(input_test).to(device)
tour, *_ = agent(input_batch)
reward = reward_fn(input_batch, tour, conf.x_dim, conf.batch_size)
# Find best solution
j = reward.argmin()
best_tour = tour[j][:].tolist()
# Log
running_reward += reward[j]
# Display
print('Reward (before 2 opt)', reward[j],'tour', best_tour)
opt_tour, opt_length = dataset.loop2opt(input_batch.cpu()[j], best_tour, conf.x_dim)
print('Reward (with 2 opt)', opt_length, 'opt_tour',opt_tour)
#dataset.visualize_2D_trip(opt_tour)
wandb.run.summary["test_reward"] = running_reward / conf.test_steps
print("--- %s seconds ---" % (time.time() - start_time))
# + jupyter={"outputs_hidden": true}
if __name__ == "__main__":
main()
# -
import torch.nn as nn
dataset = DataGenerator()
input_test = dataset.train_batch(10, 12, 12, 4, 3)
input_batch = torch.Tensor(input_test).to('cpu')
m = nn.Conv1d(12, 128, 1, stride=1)
n = nn.Linear(12, 128)
input = torch.randn(20, 16, 50)
output = m(input_batch)#.transpose(1, 2)
output1 = n(input_batch).transpose(1, 2)
np.shape(output1)
temp = 1000
stopping_temp = 0.00001
alpha = 0.995
stopping_iter = 10000
conf = load_conf()
dataset = DataGenerator()
agent = Agent(space_dim= conf.dimension, embed_hidden=conf.embed_hidden, enc_stacks=conf.enc_stacks, ff_hidden=conf.ff_hidden, enc_heads=conf.enc_heads, query_hidden=conf.query_hidden, att_hidden=conf.att_hidden, crit_hidden=conf.crit_hidden, n_history=conf.n_history, p_dropout=conf.p_dropout)
dir_ = (f"{conf.dimension}D_MESH{conf.max_len}_b{conf.batch_size}"
        f"_e{conf.embed_hidden}_n{conf.ff_hidden}_s{conf.enc_stacks}"
        f"_h{conf.enc_heads}_q{conf.query_hidden}_a{conf.att_hidden}"
        f"_c{conf.crit_hidden}_lr{conf.lr}_d{conf.lr_decay_steps}"
        f"_{conf.lr_decay_rate}_steps{conf.steps}")
path = "save/"+dir_
save_path= str(path)+'/'+str(conf.model_path)
#input_test = dataset.train_batch(conf.batch_size-1, conf.max_len, conf.dimension, conf.x_dim, conf.y_dim)
input_test = []
for _ in range (128):#conf.batch_size):
A=np.zeros((conf.max_len,conf.max_len),dtype='float')
app = pd.read_csv('mpeg.csv')
for i in range ( len(app)):
A[app.source[i]][app.target[i]]= app.weight[i]
input_test.append(A)
if conf.test:
device = torch.device(conf.device)
# Load trained agent
agent.load_state_dict(torch.load(save_path))
agent.eval()
agent = agent.to(device)
start_time = time.time()
running_reward = 0
for _ in range(1):#conf.test_steps):
#input_batch = dataset.test_batch(conf.batch_size, conf.max_len, conf.dimension, shuffle=False)
input_batch = torch.Tensor(input_test).to(device)
tour, *_ = agent(input_batch)
reward = reward_fn(input_batch, tour, conf.x_dim, conf.batch_size)
# Find best solution
j = reward.argmin()
#j=127
best_tour = tour[j][:].tolist()
# Log
running_reward += reward[j]
# Display
print('Reward (before 2 opt)', reward[j],'tour', best_tour)
print("--- %s seconds ---" % (time.time() - start_time))
opt_tour, opt_length = dataset.loop2opt(input_batch.cpu()[j], best_tour, conf.x_dim)
print('Reward (with 2 opt)', opt_length, 'opt_tour',opt_tour)
print("--- %s seconds ---" % (time.time() - start_time))
#sa = SimulatedAnnealing(input_batch.cpu()[j], best_tour, conf.x_dim, temp, alpha, stopping_temp, stopping_iter)
#SA_length, SA_tour = sa.anneal()
#print('Reward (with SA)', SA_length, 'SA_tour',SA_tour)
#print("--- %s seconds ---" % (time.time() - start_time))
#dataset.visualize_2D_trip(opt_tour)
print("---total time %s seconds ---" % (time.time() - start_time))
reward
conf = load_conf()
dir_ = (f"{conf.dimension}D_MESH{conf.max_len}_b{conf.batch_size}"
        f"_e{conf.embed_hidden}_n{conf.ff_hidden}_s{conf.enc_stacks}"
        f"_h{conf.enc_heads}_q{conf.query_hidden}_a{conf.att_hidden}"
        f"_c{conf.crit_hidden}_lr{conf.lr}_d{conf.lr_decay_steps}"
        f"_{conf.lr_decay_rate}_steps{conf.steps}")
print(dir_)
path = "save/"+dir_
if not os.path.exists(path):
os.makedirs(path)
#save_path= str(path)+'/'+str(conf.model_path)
#print (path)
from shutil import copy
copy('config.yaml',path)
import pandas as pd
A=np.zeros((12,12),dtype='float')
app = pd.read_csv('mwd.csv')
for i in range ( len(app)):
A[app.source[i]][app.target[i]]= app.weight[i]
A[app.target[i]][app.source[i]]= app.weight[i]
import random
import networkx as nx
input_batch = []
size1 = 12 #mesh size 4x3
for _ in range(1):
size = random.randint((size1 - 4),size1)
G=nx.connected_watts_strogatz_graph(size, 5, 0.5, tries=100, seed=None)
for (u, v) in G.edges():
G.edges[u,v]['weight'] = random.randint(1,1000)
#C=G.to_undirected()
#B=nx.to_numpy_array(C)
app= nx.to_pandas_edgelist(G)
A=np.zeros((size1,size1),dtype='float')
for i in range (len(app)):
A[app.source[i]][app.target[i]]= app.weight[i]
#A[:B.shape[0], :B.shape[1]] = B
#A=A.flatten()
input_batch.append(A)
gg=nx.from_numpy_array(A)
app1 = nx.to_pandas_edgelist(gg)
dataset = DataGenerator()
input_test = dataset.train_batch(10, 12, 12, 4, 3)
print(input_test[9])
| mapping_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import scipy.stats as st
from scipy.stats import linregress
from sklearn.neighbors import KDTree
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# +
#Check the structure of the data retrieving and if the link is working
url = "http://api.openweathermap.org/data/2.5/weather?"
city = cities[0]
#Build query URL
query_url = url + "appid=" + weather_api_key + "&q=" + city
weather_response = requests.get(query_url)
weather_json = weather_response.json()
print(cities[0])
print(f"The weather API responded with: {weather_json}.")
# +
#Create the query url for the API call
url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid="+weather_api_key
# create the varients to store data
city_name = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
record=1
print('''
---------------------------
Beginning Data Retrieval
---------------------------''')
for city in cities:
# use "try and except" to skip the missing values etc
try:
response = requests.get(url+"&q=" + city).json()
city_name.append(response["name"])
cloudiness.append(response["clouds"]["all"])
country.append(response["sys"]["country"])
date.append(response["dt"])
humidity.append(response["main"]["humidity"])
max_temp.append(response["main"]["temp_max"])
lat.append(response["coord"]["lat"])
lng.append(response["coord"]["lon"])
wind_speed.append(response["wind"]["speed"])
city_record = response["name"]
print(f"Processing Record {record} | {city_record}")
print(f"{url}&q={city}")
# Increase counter by one
record= record + 1
# Wait a second in loop to not over exceed rate limit of API
time.sleep(1.01)
# If no record found "skip" to next call
except(KeyError,IndexError):
print("City not found. Skipping...")
continue
print('''
-----------------------------
  Data Retrieval Complete
-----------------------------''')
# -
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# +
#DataFrame to store the weather data
weather_data = pd.DataFrame({"City": city_name,
"Country":country,
"Lat":lat,
"Lng":lng,
"Date":date,
"Cloudiness":cloudiness,
"Humidity": humidity,
"Max Temp": max_temp,
"Wind Speed":wind_speed})
#Preview the dataframe
weather_data.count()
# -
weather_data.describe()
# Save data frame to CSV
weather_data.to_csv(output_data_file,index=False, header=True)
weather_data.head()
# ## Inspect the data and remove the cities where the humidity > 100%.
# ----
# Skip this step if there are no cities that have humidity > 100%.
# Get the indices of cities that have humidity over 100%.
humid_cities=weather_data.loc[weather_data["Humidity"]>100]
humid_cities.head()
# +
# There are no cities with humidity level > 100%
# -
# ## Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# ## Latitude vs. Temperature Plot
# +
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Max Temp"], marker="o", s=10)
# graph properties
plt.title("City Latitude vs. Max Temperature 03.20.21")
plt.ylabel("Max. Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Max_Temp_vs_Latitude.png")
# Show plot
plt.show()
# -
# This plot shows that temperatures are highest near the equator. It is also evident that, at the time of this analysis, the southern hemisphere is warmer than the northern hemisphere.
# ## Latitude vs. Humidity Plot
# +
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Humidity"], marker="o", s=10)
#graph properties
plt.title("City Latitude vs. Humidity 03.20.21")
plt.ylabel("Humidity %")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Latitude_vs_Humidity.png")
# Show plot
plt.show()
# -
# It is harder to see a direct correlation between latitude and humidity. However, it looks like there is more variation in southern-hemisphere humidity levels than in northern-hemisphere humidity levels.
# ## Latitude vs. Cloudiness Plot
# +
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Cloudiness"], marker="o", s=10)
#graph properties
plt.title("City Latitude vs. Cloudiness 03.20.21")
plt.ylabel("Cloudiness %")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Latitude_vs_Cloudiness.png")
# Show plot
plt.show()
# -
# There is no correlation between latitude and cloudiness.
# ## Latitude vs. Wind Speed Plot
# +
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Wind Speed"], marker="o", s=10)
#graph properties
plt.title("City Latitude vs. Wind Speed 03.20.21")
plt.ylabel("Wind Speed")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/Latitude_vs_Wind Speed.png")
# Show plot
plt.show()
# -
# Though there is no direct correlation between latitude and wind speed, there are some outliers at the extreme latitudes.
# ## Linear Regression
# +
# create two data frames for Northern Hemisphere and Southern Hemisphere
Nothern_Weather=weather_data.loc[weather_data['Lat']>0]
Southern_Weather=weather_data.loc[weather_data['Lat']<=0]
Southern_Weather.dropna()
# -
def line_regres(x, y,yaxis):
(slope, intercept, rvalue, pvalue, stderr) = linregress(x, y)
y_pred = intercept + slope*x
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# Plot
plt.scatter(x,y)
plt.plot(x,y_pred,"r-")
plt.xlabel('Latitude')
plt.ylabel(yaxis)
    print(f"r-squared: {rvalue**2}")
plt.show()
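# A quick sanity check of the regression logic on toy data, using numpy.polyfit instead of scipy's linregress (both recover the same slope and intercept on perfectly linear input):

```python
import numpy as np

x_demo = np.array([0.0, 1.0, 2.0, 3.0])
y_demo = 2.0 * x_demo + 1.0               # perfectly linear toy data
slope, intercept = np.polyfit(x_demo, y_demo, 1)
print(slope, intercept)                   # slope ~ 2.0, intercept ~ 1.0 (r-squared = 1)
```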
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x = Nothern_Weather['Lat']
y = Nothern_Weather['Max Temp']
line_regres(x,y,'Max Temp')
plt.savefig("output_data/Northern Hemisphere - Max Temp vs. Latitude.png")
# -
# There is a negative correlation between the latitude and the max temperature in the Northern Hemisphere.
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x = Southern_Weather['Lat']
y = Southern_Weather['Max Temp']
line_regres(x,y,'Max Temp')
plt.savefig("output_data/Southern Hemisphere - Max Temp vs. Latitude.png")
# -
# There is a positive correlation between the latitude and the max temperature in the Southern Hemisphere.
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
x = Nothern_Weather['Lat']
y = Nothern_Weather['Humidity']
line_regres(x,y,'Humidity')
plt.savefig("output_data/Northern Hemisphere - Humidity (%) vs. Latitude.png")
# -
# There is a very slight positive correlation between latitude and humidity in the Northern Hemisphere. Variation decreases as latitude increases.
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
x = Southern_Weather['Lat']
y = Southern_Weather['Humidity']
line_regres(x,y,'Humidity')
plt.savefig("output_data/Southern Hemisphere - Humidity (%) vs. Latitude.png")
# -
# There is a very slight positive correlation between latitude and humidity in the Southern Hemisphere. Variation decreases as latitude increases.
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
x = Nothern_Weather['Lat']
y = Nothern_Weather['Cloudiness']
line_regres(x,y,'Cloudiness')
plt.savefig("output_data/Northern Hemisphere-Cloudiness(%) vs. Latitude.png")
# -
# There is no identifiable correlation between cloudiness and latitude. However, cloud density appears to increase slightly with latitude, which makes it appear to have a slight positive correlation.
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
x = Southern_Weather['Lat']
y = Southern_Weather['Cloudiness']
line_regres(x,y,'Cloudiness')
plt.savefig("output_data/Southern Hemisphere - Cloudiness (%) vs. Latitude.png")
# -
# There is no identifiable correlation between cloudiness and latitude. However, cloud density appears to increase slightly with latitude, which makes it appear to have a slight positive correlation.
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
x = Nothern_Weather['Lat']
y = Nothern_Weather['Wind Speed']
line_regres(x,y,'Wind Speed')
plt.savefig("output_data/Northern Hemisphere - Wind Speed (mph) vs. Latitude.png")
# -
# There is a slight positive correlation between wind speed and latitude in the Northern Hemisphere. There appear to be extreme outliers at the highest latitudes.
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
x = Southern_Weather['Lat']
y = Southern_Weather['Wind Speed']
line_regres(x,y,'Wind Speed')
plt.savefig("output_data/Southern Hemisphere - Wind Speed (mph)vs. Latitude.png")
# -
# There is a slight negative correlation between wind speed and latitude in the Southern Hemisphere. There appear to be extreme outliers at the lowest latitudes.
| WeatherPy/WeatherPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/liang42hao/TttT/blob/master/medium_rare/200_number_of_islands.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="CBywqAntbbcQ" colab_type="text"
# ## 200. Number of Islands
#
#
# Given a 2d grid map of `'1'`s (land) and `'0'`s (water), count the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water.
#
#
# + [markdown] id="H8i3MJKbdX6u" colab_type="text"
# **Example 1:**
# ```
# Input:
# 11110
# 11010
# 11000
# 00000
#
# Output: 1
#
# ```
# + [markdown] id="ELbCe5t3duts" colab_type="text"
# **Example 2:**
# ```
# Input:
# 11000
# 11000
# 00100
# 00011
#
# Output: 3
# ```
# + id="WLqIY0Z4d3FT" colab_type="code" colab={}
class Solution(object):
def numIslands(self, grid):
"""
:type grid: List[List[str]]
:rtype: int
"""
| medium_rare/200_number_of_islands.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2020 <NAME>. All rights reserved.
# This file is licensed to you under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under
# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
# OF ANY KIND, either express or implied. See the License for the specific language
# governing permissions and limitations under the License.
# Figure: BigGAN edit transferability between classes
# %matplotlib inline
from notebook_init import *
rand = lambda : np.random.randint(np.iinfo(np.int32).max)
outdir = Path('out/figures/edit_transferability')
makedirs(outdir, exist_ok=True)
# +
inst = get_instrumented_model('BigGAN-512', 'husky', 'generator.gen_z', device, inst=inst)
model = inst.model
model.truncation = 0.7
pc_config = Config(components=80, n=1_000_000,
layer='generator.gen_z', model='BigGAN-512', output_class='husky')
dump_name = get_or_compute(pc_config, inst)
with np.load(dump_name) as data:
lat_comp = data['lat_comp']
lat_mean = data['lat_mean']
lat_std = data['lat_stdev']
# name: component_idx, layer_start, layer_end, strength
edits = {
'translate_x': ( 0, 0, 15, -3.0),
'zoom': ( 6, 0, 15, 2.0),
'clouds': (54, 7, 10, 15.0),
#'dark_fg': (51, 7, 10, 20.0),
'sunlight': (33, 7, 10, 25.0),
#'silouette': (13, 7, 10, -20.0),
#'grass_bg': (69, 3, 7, -20.0),
}
def apply_offset(z, idx, start, end, sigma):
lat = z if isinstance(z, list) else [z]*model.get_max_latents()
for i in range(start, end):
lat[i] = lat[i] + lat_comp[idx]*lat_std[idx]*sigma
return lat
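# The per-layer offset logic can be checked in isolation with toy numpy arrays (the dimensions and values below are made up for illustration, not BigGAN's):

```python
import numpy as np

toy_comp = [np.ones((1, 4))]   # one principal direction in a toy 4-dim latent space
toy_std = [2.0]                # its standard deviation along that direction

def apply_offset_toy(z, idx, start, end, sigma, n_layers=3):
    # Same idea as apply_offset above: shift layers [start, end) along component idx
    lat = list(z) if isinstance(z, list) else [z] * n_layers
    for i in range(start, end):
        lat[i] = lat[i] + toy_comp[idx] * toy_std[idx] * sigma
    return lat

z_toy = np.zeros((1, 4))
lat_toy = apply_offset_toy(z_toy, idx=0, start=1, end=3, sigma=1.5)
# Layers 1 and 2 are shifted by comp * std * sigma = 3.0; layer 0 is untouched
```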
show = True
# good geom seeds: 2145371585
# good style seeds: 337336281, 2075156369, 311784160
for _ in range(1):
# Type 1: geometric edit - transfers well
seed1_geom = 2145371585
seed2_geom = 2046317118
print('Seeds geom:', [seed1_geom, seed2_geom])
z1 = model.sample_latent(1, seed=seed1_geom).cpu().numpy()
z2 = model.sample_latent(1, seed=seed2_geom).cpu().numpy()
model.set_output_class('husky')
base_husky = model.sample_np(z1)
zoom_husky = model.sample_np(apply_offset(z1, *edits['zoom']))
transl_husky = model.sample_np(apply_offset(z1, *edits['translate_x']))
img_geom1 = np.hstack([base_husky, zoom_husky, transl_husky])
model.set_output_class('castle')
base_castle = model.sample_np(z2)
zoom_castle = model.sample_np(apply_offset(z2, *edits['zoom']))
transl_castle = model.sample_np(apply_offset(z2, *edits['translate_x']))
img_geom2 = np.hstack([base_castle, zoom_castle, transl_castle])
# Type 2: style edit - often transfers
seed1_style = 417482011 #rand()
seed2_style = 1026291813
print('Seeds style:', [seed1_style, seed2_style])
z1 = model.sample_latent(1, seed=seed1_style).cpu().numpy()
z2 = model.sample_latent(1, seed=seed2_style).cpu().numpy()
model.set_output_class('lighthouse')
base_lighthouse = model.sample_np(z2)
edit1_lighthouse = model.sample_np(apply_offset(z2, *edits['clouds']))
edit2_lighthouse = model.sample_np(apply_offset(z2, *edits['sunlight']))
img_style2 = np.hstack([base_lighthouse, edit1_lighthouse, edit2_lighthouse])
model.set_output_class('barn')
base_barn = model.sample_np(z1)
edit1_barn = model.sample_np(apply_offset(z1, *edits['clouds']))
edit2_barn = model.sample_np(apply_offset(z1, *edits['sunlight']))
img_style1 = np.hstack([base_barn, edit1_barn, edit2_barn])
grid = np.vstack([img_geom1, img_geom2, img_style1, img_style2])
if show:
plt.figure(figsize=(12,12))
plt.imshow(grid)
plt.axis('off')
plt.show()
else:
Image.fromarray((255*grid).astype(np.uint8)).save(outdir / f'{seed1_geom}_{seed2_geom}_{seed1_style}_{seed2_style}_transf.jpg')
# Save individual frames
Image.fromarray((255*base_husky).astype(np.uint8)).save(outdir / 'geom_husky_1.png')
Image.fromarray((255*zoom_husky).astype(np.uint8)).save(outdir / 'geom_husky_2.png')
Image.fromarray((255*transl_husky).astype(np.uint8)).save(outdir / 'geom_husky_3.png')
Image.fromarray((255*base_castle).astype(np.uint8)).save(outdir / 'geom_castle_1.png')
Image.fromarray((255*zoom_castle).astype(np.uint8)).save(outdir / 'geom_castle_2.png')
Image.fromarray((255*transl_castle).astype(np.uint8)).save(outdir / 'geom_castle_3.png')
Image.fromarray((255*base_lighthouse).astype(np.uint8)).save(outdir / 'style_lighthouse_1.png')
Image.fromarray((255*edit1_lighthouse).astype(np.uint8)).save(outdir / 'style_lighthouse_2.png')
Image.fromarray((255*edit2_lighthouse).astype(np.uint8)).save(outdir / 'style_lighthouse_3.png')
Image.fromarray((255*base_barn).astype(np.uint8)).save(outdir / 'style_barn_1.png')
Image.fromarray((255*edit1_barn).astype(np.uint8)).save(outdir / 'style_barn_2.png')
Image.fromarray((255*edit2_barn).astype(np.uint8)).save(outdir / 'style_barn_3.png')
# -
| notebooks/figure_biggan_edit_transferability.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Saving and Restoring `tf.Session`
# + deletable=true editable=true
import os.path
import tensorflow as tf
import prettytensor as pt
from tqdm import tqdm
# + [markdown] deletable=true editable=true
# ## Load datasets
# + deletable=true editable=true
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('../datasets/MNIST/', one_hot=True)
# + deletable=true editable=true
print('Training: = {:,}'.format(data.train.num_examples))
print('Testing: = {:,}'.format(data.test.num_examples))
print('Validation: = {:,}'.format(data.validation.num_examples))
# + [markdown] deletable=true editable=true
# ## Hyperparameters
# + deletable=true editable=true
# Network
image_size = 28
num_channels = 1
image_shape = image_size * image_size * num_channels
kernel_size = 5
conv1_depth = 8
conv2_depth = 16
conv3_depth = 32
conv4_depth = 64
conv5_depth = 128
conv6_depth = 256
fc_size = 1024
num_classes = 10
# Training
learning_rate = 1e-2
batch_size = 24
iterations = 0
save_step = 1000
save_path = '../logs/save-restore-convnet/'
best_val_acc = 0.0
last_improvement = 0
improvement_requirement = 1000
# + [markdown] deletable=true editable=true
# ## Create Log dir
# + deletable=true editable=true
if not os.path.exists(save_path):
os.makedirs(save_path)
# + [markdown] deletable=true editable=true
# ## Define Model's placeholder variables
# + deletable=true editable=true
X = tf.placeholder(tf.float32, [None, image_shape])
y = tf.placeholder(tf.float32, [None, num_classes])
y_true = tf.argmax(y, axis=1)
# + [markdown] deletable=true editable=true
# ## Constructing the Network
# + deletable=true editable=true
X_image = tf.reshape(X, shape=[-1, image_size, image_size, num_channels])
X_pretty = pt.wrap(X_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = X_pretty.\
conv2d(kernel=kernel_size, depth=conv1_depth, name='conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=kernel_size, depth=conv2_depth, name='conv2').\
max_pool(kernel=2, stride=2).\
        conv2d(kernel=kernel_size, depth=conv3_depth, name='conv3').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=kernel_size, depth=conv4_depth, name='conv4').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=kernel_size, depth=conv5_depth, name='conv5').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=kernel_size, depth=conv6_depth, name='conv6').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=fc_size, name='fully_connected').\
softmax_classifier(num_classes=num_classes, labels=y)
# + [markdown] deletable=true editable=true
# ## Optimize the `loss` from the Network
# + deletable=true editable=true
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_step = optimizer.minimize(loss)
# + [markdown] deletable=true editable=true
# ## Define a `tf.train.Saver` object
# + deletable=true editable=true
saver = tf.train.Saver()
# + [markdown] deletable=true editable=true
# ## Evaluate Network's accuracy
# + deletable=true editable=true
y_pred_true = tf.argmax(y_pred, axis=1)
correct = tf.equal(y_pred_true, y_true)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
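The three ops above compute plain argmax accuracy. The same computation can be sketched in NumPy; the score and label arrays below are made up for illustration and are not from the dataset:

```python
import numpy as np

# One-hot labels and softmax-style scores, invented for illustration.
y_scores = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
y_onehot = np.array([[0, 1], [1, 0], [0, 1], [0, 1]])

# Mirrors tf.argmax / tf.equal / tf.reduce_mean above.
correct = np.argmax(y_scores, axis=1) == np.argmax(y_onehot, axis=1)
accuracy = correct.mean()
print(accuracy)  # 3 of 4 predictions match -> 0.75
```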
# + [markdown] deletable=true editable=true
# ## Running the network
# ### Define `tf.Session` as the default graph
# + deletable=true editable=true
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# + deletable=true editable=true
# Run accuracy for both test and validation sets
def accuracy_eval(validation=False, test=True):
test_acc = 0.0
val_acc = 0.0
if test:
feed_dict_test = {X:data.test.images, y:data.test.labels}
test_acc = sess.run(accuracy, feed_dict=feed_dict_test)
if validation:
feed_dict_val = {X:data.validation.images, y:data.validation.labels}
val_acc = sess.run(accuracy, feed_dict=feed_dict_val)
return test_acc, val_acc
# Display Accuracy
def print_accuracy(validation=False, test=True):
test_acc, val_acc = accuracy_eval(validation=validation, test=test)
msg = 'After {:,} iterations:\n'.format(iterations)
if test:
msg += '\tTest Accuracy\t\t= {:.2%}\n'.format(test_acc)
if validation:
msg += '\tValidation Accuracy\t= {:.2%}\n'.format(val_acc)
print(msg)
# Run the optimizer
def optimize(num_iter=100):
global iterations
global last_improvement
global best_val_acc
for i in tqdm(range(0, num_iter)):
# Early stopping
if iterations - last_improvement > improvement_requirement:
print('\nStopping optimization @ {:,} iterations: no improvement in validation accuracy!'.format(iterations))
break
# Update iterations
iterations += 1
# Get training batch
X_batch, y_batch = data.train.next_batch(batch_size=batch_size)
feed_dict = {X: X_batch, y: y_batch}
# Train the network
sess.run(train_step, feed_dict=feed_dict)
# Log after every `save_step`
if i != 0 and ((i%save_step) == 0 or i == num_iter - 1):
_, val_acc = accuracy_eval(validation=True, test=False)
if val_acc > best_val_acc:
# Save the session into the saver object
saver.save(sess=sess, save_path=save_path)
print('Iteration: {:,}'.format(iterations))
print('Last validation = {:.02%}\tNew validation: {:.02%}'.format(best_val_acc, val_acc))
# Update the best_val_acc and last improvement
last_improvement = i
best_val_acc = val_acc
# Log optimization info
print('Optimization details:')
print_accuracy(validation=True, test=True)
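The early-stopping bookkeeping in `optimize` (track `best_val_acc`, record `last_improvement`, bail out after `improvement_requirement` stale iterations) can be sketched framework-free. The validation curve below is invented for illustration:

```python
# Returns the iteration at which training stops, given a sequence of
# validation scores and a patience threshold (mirrors
# `improvement_requirement` above). Scores are hypothetical.
def early_stop_iteration(val_scores, patience):
    best = float('-inf')
    last_improvement = 0
    for i, score in enumerate(val_scores):
        if i - last_improvement > patience:
            return i                   # no improvement for `patience` steps
        if score > best:
            best, last_improvement = score, i
    return len(val_scores)             # ran to completion

scores = [0.10, 0.50, 0.60, 0.60, 0.60, 0.60, 0.60]
print(early_stop_iteration(scores, patience=3))  # stops at iteration 6
```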
# + deletable=true editable=true
print_accuracy()
# + deletable=true editable=true
optimize(num_iter=100)
# + deletable=true editable=true
optimize(num_iter=900)
# + deletable=true editable=true
optimize(num_iter=10000)
# + [markdown] deletable=true editable=true
# ## Restoring the `tf.Session`
# + deletable=true editable=true
# reset global variables
sess.run(init)
# print the accuracy
print_accuracy(validation=True, test=True)
# + deletable=true editable=true
# Restore the session
saver.restore(sess=sess, save_path=save_path)
print_accuracy(validation=True, test=True)
# + [markdown] deletable=true editable=true
# ## Close the `tf.Session`
# + deletable=true editable=true
sess.close()
# + deletable=true editable=true
| save_restore/saving-and-restoring-convnet-weights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Advanced Lane Finding Project
#
# The goals / steps of this project are the following:
#
# * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
# * Apply a distortion correction to raw images.
# * Use color transforms, gradients, etc., to create a thresholded binary image.
# * Apply a perspective transform to rectify binary image ("birds-eye view").
# * Detect lane pixels and fit to find the lane boundary.
# * Determine the curvature of the lane and vehicle position with respect to center.
# * Warp the detected lane boundaries back onto the original image.
# * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
#
# ---
# ## First, I'll compute the camera calibration using chessboard images
# Get imports
import pickle
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib qt
# ## And so on and so forth...
# import helper functions
import camera_calibrator
import lane_smoothing
import binary_gradient
import perspective_transform
import measure_curvature
import original_perspective
import warp_lines
# Files to undistort
calibration1 = './camera_cal/calibration1.jpg'
straight_lines1 = './test_images/straight_lines1.jpg'
straight_lines2 = './test_images/straight_lines2.jpg'
test1 = './test_images/test1.jpg'
test2 = './test_images/test2.jpg'
test3 = './test_images/test3.jpg'
test4 = './test_images/test4.jpg'
test5 = './test_images/test5.jpg'
test6 = './test_images/test6.jpg'
def pipeline(image):
# Directory of images for calibration
image_directory = glob.glob('camera_cal/calibration*.jpg')
# Get matrix from calibration
mtx, dist = camera_calibrator.calibration(image_directory)
# smooth the lanes
smoothened_image = lane_smoothing.apply_clahe(image)
# Undistorted image
undistorted_image = cv2.undistort(smoothened_image, mtx, dist, None, mtx)
# set thresholds for edge detection
sobel_threshold = (170, 255)
sobelx_threshold = (20, 100)
# Binary image
color_binary, binary_image = binary_gradient.binary_image(undistorted_image, sobel_threshold, sobelx_threshold)
# set top and bottom margins
top_margin = 93
bottom_margin = 450
# Perspective Transform
perspective_transformed_image = perspective_transform.get_transformed_perspective(binary_image, top_margin, bottom_margin)
# set margins, sliding windows, pixels to recenter window
margin = 100
nwindows = 9
minpixels = 50
# convert back to original image space
color_warp = measure_curvature.get_colored_area(perspective_transformed_image, margin, nwindows, minpixels)
warp_image = original_perspective.get_original_perspective(color_warp, top_margin, bottom_margin)
result = cv2.addWeighted(smoothened_image, 1, warp_image, 0.3, 0)
return result
result = pipeline(mpimg.imread(test4))
plt.imshow(result)
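The curvature step delegated to the `measure_curvature` helper above is assumed to use the standard radius-of-curvature formula for a second-order lane fit x = A·y² + B·y + C, namely R = (1 + (2Ay + B)²)^{3/2} / |2A|. A standalone sketch of that formula, with synthetic circle data in pixel units:

```python
import numpy as np

# Radius of curvature of x = A*y**2 + B*y + C, evaluated at y.
def radius_of_curvature(fit, y):
    A, B, _ = fit
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

# Sample the apex of a circle of known radius and check that the
# fitted parabola recovers roughly that radius (synthetic data).
true_radius = 500.0
y = np.linspace(-50, 50, 101)
x = true_radius - np.sqrt(true_radius ** 2 - y ** 2)
fit = np.polyfit(y, x, 2)
print(radius_of_curvature(fit, 0.0))  # close to 500
```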
| examples/example-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import asyncio
import itertools
import collections
import random
import pulp
import json
import functools
import time
def timeit(method):
def timed(*args, **kw):
ts = time.time()
result = method(*args, **kw)
te = time.time()
print('%r %f sec' %(method.__name__, te-ts))
return result
return timed
graph = {'A': {'children':['D'],'parents':[], 'node_w':4, 'edge_w':10},
'B': {'children':['E'],'parents':[], 'node_w':4, 'edge_w':10},
'C': {'children':['F'],'parents':[], 'node_w':4, 'edge_w':10},
'D': {'children':['G'],'parents':['A'], 'node_w':20, 'edge_w':2},
'E': {'children':['G'],'parents':['B'], 'node_w':20, 'edge_w':2},
'F': {'children':['G'],'parents':['C'], 'node_w':20, 'edge_w':2},
'G': {'children':['H'],'parents':['D','E','F'], 'node_w':30, 'edge_w':1},
'H': {'children':[],'parents':['G'], 'node_w':0, 'edge_w':0}}
def create_processors(total_num):
def processor_power(num):
if num==0:
return 10**5
else:
return 2*num
return {i:processor_power(i) for i in range(total_num)}
def create_rssi(total_num):
def to_others(total,i):
def rssi(i,j):
if i==j:
return 10**6
else:
return 10
return{j:rssi(i,j) for j in range(total)}
other_gen = functools.partial(to_others,total_num)
return{i:other_gen(i) for i in range(total_num)}
processors = create_processors(30)
rssi = create_rssi(30)
constraints = {'A':range(0,30),
'B':range(0,30),
'C':range(0,30),
'D':range(0,30),
'E':range(0,30),
'F':range(0,30),
'G':range(30),
'H':[0]}
def find_communication_power(a, b, rssi):
"""a and b are the names of processor"""
return rssi[a][b]
def find_computation_power(a, processors):
return processors[a]
def find_node_cost(node_k,node_v, assignment):
"""node is a key value pair, assignments is a named tuple"""
origin = node_k
destinations = node_v['children']
assigned_origin = getattr(assignment, origin)
assigned_destinations = [getattr(assignment, i) for i in destinations]
comp_cost = node_v['node_w']/processors[assigned_origin]
comm_cost_next = sum([node_v['edge_w']/rssi[assigned_origin][i] for i in assigned_destinations])
return comp_cost+comm_cost_next
def find_total_cost(graph, assignment):
"""iterate through graph finding the assignment for each node
and summing the total cost of the graph for that assignment"""
total_cost = sum([find_node_cost(k,v, assignment) for k,v in graph.items()])
return total_cost
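As a worked check of the cost model in `find_node_cost` (compute cost = node_w / processor power, communication cost = edge_w / RSSI), here is a two-node toy instance with made-up weights, independent of the `graph`, `processors`, and `rssi` defined above:

```python
import collections

# Toy instance: A feeds B; all weights are invented for illustration.
toy_graph = {'A': {'children': ['B'], 'node_w': 20, 'edge_w': 10},
             'B': {'children': [],    'node_w': 30, 'edge_w': 0}}
toy_processors = {0: 2, 1: 4}
toy_rssi = {0: {0: 10**6, 1: 10}, 1: {0: 10, 1: 10**6}}

Assignment = collections.namedtuple('Assignment', 'A B')
assignment = Assignment(A=0, B=1)

# A on processor 0: 20/2 = 10 compute, plus 10/10 = 1 to reach B.
# B on processor 1: 30/4 = 7.5 compute, no children.
cost_A = (toy_graph['A']['node_w'] / toy_processors[assignment.A]
          + toy_graph['A']['edge_w'] / toy_rssi[assignment.A][assignment.B])
cost_B = toy_graph['B']['node_w'] / toy_processors[assignment.B]
print(cost_A + cost_B)  # 10 + 1 + 7.5 = 18.5
```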
def all_possible_assignments(graph, processors, constraints):
assignment_template = collections.namedtuple('Assignment', ' '.join(sorted(graph)))
keylist = graph.keys()
possible_values_bykey = [constraints[i] for i in sorted(keylist)]
all_combination_generator = itertools.product(*possible_values_bykey)
assignment_generator = (assignment_template(*i) for i in all_combination_generator)
return assignment_generator
@timeit
def optimise(graph, constraints):
q = asyncio.PriorityQueue(maxsize = 10)
gen = all_possible_assignments(graph, processors, constraints)
for a in gen:
cost = -1*find_total_cost(graph, a)
try:
q.put_nowait((cost, a))
except asyncio.QueueFull:
# queue full: evict the current worst entry before inserting
q.get_nowait()
q.put_nowait((cost, a))
return sorted(q._queue)
# +
#print('problem size: ', len(list(all_possible_assignments(graph, processors,constraints))))
#q = optimise(graph, constraints)
#print(q)
# -
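`optimise` keeps the best assignments by using `asyncio.PriorityQueue` as a bounded heap, including reading its private `_queue` attribute. The same keep-the-k-best pattern can be sketched with `heapq` directly; this is a possible alternative, not a drop-in replacement:

```python
import heapq

# Keep the k lowest-cost candidates. Negated costs make the heap
# root the worst candidate currently kept, so it is the one evicted.
def k_best(candidates, cost_fn, k=10):
    heap = []                          # (-cost, candidate) pairs
    for c in candidates:
        item = (-cost_fn(c), c)
        if len(heap) < k:
            heapq.heappush(heap, item)
        elif item > heap[0]:           # lower cost than the worst kept
            heapq.heapreplace(heap, item)
    return sorted(heap)                # worst-to-best, like sorted(q._queue)

print(k_best(range(20), lambda x: x, k=3))  # [(-2, 2), (-1, 1), (0, 0)]
```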
def dummy_namer(tupl):
template = collections.namedtuple('Assignment', 'edge parent child')
return template(edge=tupl[0], parent=tupl[1], child=tupl[2])
def flattener(listoflists):
return list(itertools.chain(*listoflists))
def lazy_flattener(listoflists):
return itertools.chain(*listoflists)
def find_node_edges(nodekey, nodevalue):
# wrap the key in a list so multi-character node names are not split
return itertools.product([nodekey], nodevalue['children'])
def find_graph_edges(graph):
"""given a dictionary representation of a graph
generate a list of the graph edges as tuples"""
nested_combinations = (find_node_edges(k,v) for k,v in graph.items())
return lazy_flattener(nested_combinations)
def find_edgeparents(edge,constraints):
"""given an edge from graph, find valid parent processors given constraints"""
parent_node = edge[0]
return constraints[parent_node]
def find_edgechildren(edge, constraints):
"""given an edge from graph, find valid child processors given constraints"""
child_node = edge[1]
return constraints[child_node]
def combination_finder(edge, constraints):
return ([edge], find_edgeparents(edge,constraints), find_edgechildren(edge,constraints))
@timeit
def timed_product(*args):
return itertools.product(*args)
@timeit
def unroll(combination_generator):
return [dummy_namer(comb) for product in combination_generator
for comb in product]
@timeit
def generate_dummies(graph, constraints):
"""generate a dummy variable named tuple for each
valid combination of edge, assigned parent, assigned child"""
edges = find_graph_edges(graph)
edge_possibilities = (combination_finder(edge,constraints) for edge in edges)
combination_generator = (timed_product(*edge) for edge in edge_possibilities)
all_dummies = unroll(combination_generator)
return all_dummies
# +
@timeit
def add_cost_function(problem, dummy,dummy_vars, cost_calculator):
cost_function = (cost_calculator(i)*dummy_vars[i] for i in dummy)
problem += pulp.lpSum(cost_function), "Sum of DAG edge weights"
return problem
def find_cost(graph, processors, rssi, a_dummy):
"""node is a key value pair, assignments is a named tuple"""
parent_node = graph[a_dummy.edge[0]]
child_node = graph[a_dummy.edge[1]]
parent = a_dummy.parent
child = a_dummy.child
comp_cost = parent_node['node_w']/processors[parent]
comm_cost_next = child_node['edge_w']/rssi[parent][child]
return comp_cost+comm_cost_next
@timeit
def edge_uniqueness(problem, dummies, dummy_vars):
"""given all possible dummy variable assignments
and the ILP problem, create the constraints that
guarantee only one dummy variable is turned on
for each edge"""
def get_edge(tupl):
return tupl.edge
grouped = (g for k,g in itertools.groupby(dummies, get_edge))
for group in grouped:
problem = constrain_one_edge(problem, group, dummy_vars)
return problem
def constrain_one_edge(problem, grouped_by_edge, dummy_vars):
"""given a list of dummy variables corresponding to an edge
generate constraint statement for each edge e.g x+y+z <=1, -x-y-z <= 1
"""
edge_vars = (dummy_vars[i] for i in grouped_by_edge)
problem += (pulp.lpSum(edge_vars)==1,
"sum of dummies eq 1 for: "+str(random.random()))
return problem
###=======make sure edge assignments match at nodes======###
def match_parentchild(edge, edges):
"""find any neighbouring edges of node"""
return ((edge,i) for i in edges if i[0] == edge[1])
def find_neighboring_edges(graph):
"""find all pairs of edges where child edge_i = parent edge_j"""
edges = find_graph_edges(graph)
return lazy_flattener((match_parentchild(edge, edges) for edge in edges))
def inconsistent_with_one(in_edge_assignment, all_out_edges):
"""given an assignment corresponding to the edge into a given
node, find all assignments corresponding to the outward edge
where inward_child!=outward_parent"""
return ((in_edge_assignment,i) for i in all_out_edges
if i.parent!=in_edge_assignment.child)
def edgepair_inconsistents(dummies, in_edge, out_edge):
"""given an in edge, e.g. ('A','D'), and an out edge, e.g.
('D', 'G') find all dummy assignments pairs where
in_edge assignment child does not equals out_edge assignment parent
this is basically just the inconsistent_with_one function, but applied
to each possible assignment for the inward edge and then combined back
into a single list
"""
matching_in_edge = (i for i in dummies if i.edge == in_edge)
matching_out_edge = [i for i in dummies if i.edge == out_edge]
return lazy_flattener((inconsistent_with_one(i, matching_out_edge)
for i in matching_in_edge))
def all_inconsistents(graph, dummies):
"""this function applies the find_inconsistent_assignments function
over the whole graph: first by finding all pairs of in_edge, out_edge
and then simply applying the function to each of these pairs in turn"""
edge_pairs = find_neighboring_edges(graph)
catcher = functools.partial(edgepair_inconsistents, dummies)
wrong_nodes = lazy_flattener((catcher(*i) for i in edge_pairs))
return wrong_nodes
@timeit
def inout_consistency(graph, dummies, problem, dummy_vars):
all_matchers = all_inconsistents(graph, dummies)
for inconsistent_pair in all_matchers:
description = json.dumps(inconsistent_pair)
added_dummy_vars = [dummy_vars[i] for i in inconsistent_pair]
problem += (pulp.lpSum(added_dummy_vars)<=1,
"pick one of mutex pair: "+description)
return problem
# -
@timeit
def create_list_from_gen(gen):
return list(gen)
@timeit
def formulate_LP(graph, constraints, processors, rssi):
d_gen = functools.partial(generate_dummies, graph, constraints)
d = create_list_from_gen(d_gen())
print('len d: ', len(d))
problem = pulp.LpProblem("DAG",pulp.LpMinimize)
dummy_vars = pulp.LpVariable.dicts("Sensor",d,0, 1, 'Binary')
cost_calculator = functools.partial(find_cost, graph, processors, rssi)
problem = add_cost_function(problem, d, dummy_vars, cost_calculator)
problem = edge_uniqueness(problem, d, dummy_vars)
problem = inout_consistency(graph, d_gen(), problem, dummy_vars)
return problem
@timeit
def solver(p):
return p.solve()
p = formulate_LP(graph, constraints,processors, rssi)
print(p is None)
solved = solver(p)
cost_calculator = functools.partial(find_cost, graph, processors, rssi)
print(pulp.LpStatus[p.status])
all_nonzero = [(v.name,v.varValue) for v in p.variables() if v.varValue >0]
def keyfunct(tupl):
return tupl[0].split(',_parent')[0]
grouped = [list(g) for k,g in itertools.groupby(all_nonzero, keyfunct)]
def get_val(tupl):
return tupl[1]
chosen = [max(i, key = get_val) for i in grouped]
print(grouped)
print(pulp.value(p.objective))
| batch/.ipynb_checkpoints/dag-checkpoint.ipynb |