# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Applying a Data Science process to AirBnB dataset (Seattle)
# #### ( Initial project for Udacity Data Scientist Nanodegree )
# ## Introduction
# This post presents an example of analyzing an AirBnB dataset (Seattle in this case) in a way that could help a home-owner
# focus on the important topics / features and so offer a great experience to their guests while increasing revenue.
# <br>
# <br>
# <br>
# ### My 3 questions for my analysis are as follows:
#
# #### Question 1: Can we train a model which could predict review_scores_rating with mae < 10?
# #### Question 2: Can we identify a useful set of features with meaningful impact on target?
# #### Question 3: How do prices of apartments vary by number of beds available?
#
# <br>
# <br>
# <br>
# ## 1. Can we train a model which could predict review_scores_rating with mae < 10? <br>
# <tr>
# <td> <img src="crisp-dm.png" alt="Drawing" style="width: 550px;"/> </td>
# <td> <strong><em>In order to answer the first question we have a good opportunity to follow the CRISP-DM process</em></strong> </td>
# </tr>
# ### 1.1 Business Understanding
# Business Understanding is a very important part of the process, and skipping it can lead to a lot of wasted time or completely ruined projects.
# This step usually includes talking to SMEs and business owners / experts and reading articles about the business; it often overlaps with the Data Understanding phase, as the dataset fields can give us a clue what to ask.
# I was a bit lucky in this case because some time ago I earned an AirBnB Superhost badge, so I knew how the business runs and what to expect. However, for those not familiar with AirBnB hosting, I would recommend starting here: https://www.airbnb.com/help/home
# ### 1.2 Data Understanding
# In the Data Understanding part we will:
# - check some basic data characteristics like shape and data types
# - have a look at the dataset itself
# - take a closer look at the target and some other interesting and/or suspicious columns
# - prepare a list of columns / features to be removed from the training dataset
# import libraries and set display options
import pandas as pd
import numpy as np
import os
import sys
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.ensemble import GradientBoostingRegressor
import seaborn as sn
from matplotlib import pyplot as plt
from IPython.display import display
# %matplotlib inline
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
import platform
print(platform.python_version())
# load data and print shape
raw_df = pd.read_csv('{}/listings.csv'.format(os.getcwd()))
print (raw_df.shape)
# print dtype counts
raw_df.dtypes.value_counts()
# +
# preview data
# raw_df.head()
# -
# In the first preview I realized that we need to get rid of some text fields to be able to preview the data easily.
# However, in a real-life project I would recommend taking a closer look at these text fields too, as they can be turned into key features if the right preprocessing & feature engineering is applied.
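# As a hedged illustration (not part of this analysis): a free-text column such as `description` could be turned into numeric features with TF-IDF, assuming scikit-learn is available. The toy strings below are hypothetical stand-ins for listing text:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for a listings text column (hypothetical values)
docs = pd.Series(["Cozy loft near downtown", "Quiet room with garden view", None])

# Empty strings for missing text, then a small TF-IDF vocabulary
vec = TfidfVectorizer(max_features=50)
tfidf = vec.fit_transform(docs.fillna(""))
print(tfidf.shape)  # one row per listing, one column per token
```

# The resulting sparse matrix can be concatenated with the numeric features before training.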
# +
# Identify text & url fields that are not useful for research & model training
cols_to_remove = []
cols_to_check = []
for col in raw_df.select_dtypes(include=['object']).columns:
if raw_df[col].str.len().dropna().median() > 40:
cols_to_remove.append(col)
elif raw_df[col].str.len().dropna().median() > 20:
cols_to_check.append(col)
print ('cols_to_remove: ', len(cols_to_remove))
print (cols_to_remove, '\n')
print ('cols_to_check: ', len(cols_to_check))
print (cols_to_check)
# -
# Can we also remove the cols_to_check fields? Let's see what the values look like:
raw_df.name.value_counts().head(10)
# Preview the dataset
cols_to_remove += cols_to_check
preview_cols = [col for col in raw_df.columns if col not in cols_to_remove]
raw_df[preview_cols].head(5)
# Identify columns that have more than 90% missing values:
raw_df = raw_df.fillna(value=np.nan).replace('None', np.nan).replace('none', np.nan)
miss_val = list(raw_df[preview_cols].columns[raw_df[preview_cols].isnull().mean() > 0.90])
miss_val
# +
# Identify high cardinality fields, natural-key like fields & id's
preview_cols = [col for col in preview_cols if col not in miss_val]
unique_val_cols = []
cols_to_check = []
for col in raw_df[preview_cols].columns:
if raw_df[col].dropna().value_counts().shape == raw_df[col].dropna().shape:
unique_val_cols.append(col)
elif raw_df[col].dropna().value_counts().shape[0] > float(raw_df[col].dropna().shape[0])*0.9:
cols_to_check.append(col)
print (unique_val_cols)
print (cols_to_check)
# +
# Identify single value fields
single_val_cols = []
for col in raw_df[preview_cols].columns:
if raw_df[col].dropna().value_counts().shape[0] == 1:
single_val_cols.append(col)
print (single_val_cols)
# +
# Identify columns with perfect direct or perfect inverse linear correlation and/or high correlation
df_corr = raw_df.corr().stack().reset_index()
df_corr.columns = ['FEATURE_1', 'FEATURE_2', 'CORRELATION']
df_corr[((df_corr.CORRELATION>0.85) | ((df_corr.CORRELATION<-0.85)))
& (df_corr.FEATURE_1!=df_corr.FEATURE_2)]
# +
# Review other suspicious columns:
def review_susp_col(df, col_name):
""" Input params: df: pandas df, col_name: string
Output: Prints out df shape and first 10 values to preview
"""
print ('{}: {}'.format(col_name, df[col_name].value_counts().shape))
print (df[col_name].value_counts().head(10))
print ('\n')
review_susp_col(raw_df, 'city')
review_susp_col(raw_df,'state')
# -
# Finally I define a custom list of columns to be removed before model training takes place. I add to this list:
# - fields from the last "Review other suspicious columns" check,
# - host_total_listings_count, which correlates with another field,
# - host_is_superhost, as this one is directly related to a high review_scores_rating,
# - and some other fields which don't seem to be important for our model
custom_decision_cols_to_remove = ['host_name', 'host_total_listings_count', 'city', 'smart_location', 'state', 'host_id',
'host_is_superhost', 'first_review', 'last_review']
# ### 1.3 Data Preparation
# In Data Preparation part I will apply some simple feature engineering and basic preprocessing where I drop unwanted columns, missing target rows and fill na's and/or replace missing values.
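# A minimal sketch of that fill strategy on a toy frame (the column names below are just illustrative):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"beds": [1.0, np.nan, 3.0],
                    "city": ["Seattle", None, "Seattle"]})
# Numeric NAs get the column median, categorical NAs an empty string
toy["beds"] = toy["beds"].fillna(toy["beds"].median())
toy["city"] = toy["city"].fillna("")
print(toy)
```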
# +
def custom_feature_engineering(raw_df, date_col_list, currency_cols):
"""
Implement basic feature engineering which replaces
the date object fields with number of days & convert
currency fields to float type.
:param raw_df: type Pandas dataframe
:param date_col_list :type list
:param currency_cols: type list
:return: type pandas dataframe
"""
df = raw_df.copy()
# Date fields
for col, _format in date_col_list.items():
bf_col = df[col].dtype
df[col] = pd.to_datetime(df[col], format=_format)
        df['temp_col'] = pd.Timestamp.now().date()  # pd.datetime is deprecated in newer pandas
df['temp_col'] = pd.to_datetime(df['temp_col'], format=_format)
df[col] = df['temp_col'] - df[col]
df.drop('temp_col', axis='columns', inplace=True)
df[col] = df[col].dt.days
print ('{}: {} --> {}'.format(col, bf_col, df[col].dtype))
# Currency fields
def clean_currency(x):
""" If the value is a string, then remove currency
symbol and delimiters otherwise, the value is numeric
and can be converted
"""
if isinstance(x, str):
return(x.replace('$', '').replace(',', ''))
else:
return(x)
for col in currency_cols:
bf_col = df[col].dtype
df[col] = df[col].apply(clean_currency).astype('float')
print ('{}: {} --> {}'.format(col, bf_col, df[col].dtype))
return df
date_col_dict = {'host_since': '%Y-%m-%d',
'first_review': '%Y-%m-%d',
'last_review': '%Y-%m-%d'
}
currency_cols = []
for col in raw_df.columns:
if 'price' in col.lower(): currency_cols.append(col)
raw_fe_df = custom_feature_engineering(raw_df, date_col_dict, currency_cols)
# +
def prepare_data(df, target, cols_to_remove):
"""
Preprocess data for model training
:param df: type Pandas dataframe
:param target: type string
:param cols_to_remove :type list
:return: type pandas dataframe
"""
init_shape = df.shape
# Drop rows with missing target values
df = df.dropna(subset=[target], axis=0)
#Drop unwanted columns
df = df.drop(cols_to_remove, axis=1)
    # Fill numeric columns' NAs with the median
num_vars = df.select_dtypes(include=['float', 'int']).columns
for col in num_vars:
df[col].fillna((df[col].median()), inplace=True)
# Fill categorical columns with empty string
ctg_cols = df.select_dtypes(include=['object']).columns
for col in ctg_cols:
df[col].fillna('', inplace=True)
print ('Dataframe re-shaped: {} --> {}'.format(init_shape, df.shape))
return df
target = 'review_scores_rating'
target_related_cols = [col for col in raw_df.columns if 'review_scores'
in col and col != target]
custom_decision_cols_to_remove += target_related_cols
cols_to_remove_fnl = list(set(cols_to_remove + miss_val + unique_val_cols + single_val_cols
+ custom_decision_cols_to_remove))
target = 'review_scores_rating'
df = prepare_data(raw_fe_df, target, cols_to_remove_fnl)
# -
# ### 1.4 Modeling / Evaluation
# This might be the final step needed to answer question 1. We train the model and calculate the score, which in our case is the mean absolute error.
# So, can we train a model which predicts review_scores_rating with mae < 10?
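# As a reminder, mean absolute error is just the average absolute deviation between predictions and truth; a tiny worked example with made-up numbers:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([90.0, 80.0, 100.0])
y_pred = np.array([85.0, 82.0, 97.0])
# mean(|5|, |2|, |3|) = 10 / 3
mae = mean_absolute_error(y_true, y_pred)
print(mae)
```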
# +
def split_train_test_and_evaluate(df, target):
"""
Splits data into train & test, prepares dummies, trains the model
and returns feature importance df.
:param df: type Pandas dataframe
:param target: type string
:return: feature importance dataframe, type: pandas dataframe
"""
# Split dataframe into label & features
y = df[target]
X = df.drop(target, axis=1)
# Dummy the categorical variables
cat_vars = X.select_dtypes(include=['object']).copy().columns
for var in cat_vars:
# for each cat add dummy var, drop original column
X = pd.concat([X.drop(var, axis=1), pd.get_dummies(X[var], prefix=var, prefix_sep='#',
drop_first=True)], axis=1)
print ('dummies_df shape: {}'.format(X.shape))
# Split data into train & test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print ('Train: {}, Test: {}'.format(X_train.shape[0], X_test.shape[0]))
# Define and train the model
    # note: newer scikit-learn versions rename these to loss='absolute_error' / criterion='absolute_error'
    reg = GradientBoostingRegressor(loss='lad', n_estimators=75, criterion='mae',
                                    max_depth=7, learning_rate=0.06, max_features=15)
reg.fit(X_train, y_train)
# Prepare feature importance dataframe
feats = {}
for feature, importance in zip(X.columns, reg.feature_importances_):
feats[feature] = importance
    feature_imp_df = (pd.DataFrame.from_dict(feats, orient='index')
                      .rename(columns={0: 'Feature_importance'}))
# Validate the model on test data and print out the requested score (mae)
y_pred = reg.predict(X_test)
mae = mean_absolute_error(y_test, y_pred)
print ('Mean absolute error: {}'.format(mae))
return feature_imp_df
fi_df = split_train_test_and_evaluate(df, target)
# -
# <br>
# Since our MAE is only slightly above 4, the answer is clearly <b>"Yes, we can train a model which predicts review_scores_rating with mae < 10."</b>
# However, I am fairly sure that with better feature selection and more advanced feature engineering we could get even better results.
# I also have to mention that the "Evaluation" we use here is very simple - basically just the test MAE score. The last part of the CRISP-DM process is "Deployment", which applies more to a real-life project, so let's consider the next question's solution our Deployment.
# <br>
# ## Question 2: Can we identify a useful set of features with meaningful impact on target?
# There are various data-science approaches for identifying important features. But whichever we choose, the best idea is to combine the approach with common sense and hands-on experience (if possible).
# <br>
# I decided to use the impurity-based 'feature_importances_' attribute of GradientBoosting Regressor.
#
# The pros are fast calculation and easy retrieval; the con is that this approach is biased - it tends to inflate the importance of continuous features and high-cardinality categorical variables. This is why we should use our common sense too, and not just extract the top-rated features and focus on them.
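# A hedged alternative worth knowing about (shown here on synthetic data, not used in this analysis): permutation importance, which is less biased toward high-cardinality features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 5 * X[:, 0] + rng.rand(200) * 0.1  # only the first feature matters

model = GradientBoostingRegressor(random_state=0).fit(X, y)
perm = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(perm.importances_mean)  # first feature should dominate
```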
# The output of the split_train_test_and_evaluate function is a dataframe with feature names and importances.
# For better review let's visualize it:
# +
fi_df['feature_name'] = fi_df.index
fi_df['feature_name'] = fi_df['feature_name'].apply(lambda x: x.split('#')[0])
fi_df.reset_index(drop=True, inplace=True)
fi_norm = fi_df.groupby(['feature_name'])['Feature_importance'].agg('sum')
plot_df = fi_norm.to_frame().reset_index().sort_values(by='Feature_importance')
plot_df.plot(x='feature_name', y='Feature_importance', figsize=(11, 16), kind='barh', fontsize=14)
# -
# Getting the right information from the feature importance chart is not just about selecting the top n features.
# Now is the time to use common sense (and let's support it with a heatmap too):
# Seaborn heatmap
fig, ax = plt.subplots(figsize=(15,15))
ax.tick_params(axis="x", labelsize=14)
ax.tick_params(axis="y", labelsize=14)
sn.heatmap(df.corr(), annot=True, linewidths=.5, ax=ax)
plt.show()
# The first feature in the chart is "number_of_reviews". It's obvious that this feature correlates with "host_since" and that it is important. More reviews mean more experience and also stabilized rating.
# Let's have a look at the second feature - "neighbourhood_cleansed"
# This is a categorical variable, so I decided to use a boxplot for the top 20 areas. Even though it is not clear why review_scores_rating differs so much across areas, the plot can help us decide whether to buy a property in a given area (if we have that possibility), or whether we should be more careful (and do some deeper research) to earn good ratings if we already offer a property in an area such as the University District.
# +
catg_ftr = "neighbourhood_cleansed"
# Extract the top neighbourhood_cleansed values to list
nc_20 = df.neighbourhood_cleansed.value_counts().head(20).index.tolist()
# Remove some outliers
dff = df[df[target].between(df[target].quantile(.01), df[target].quantile(1))]
dff = dff[dff[catg_ftr].isin(nc_20)]
# Prepare a helping dataframe to order the boxes in the plot
grouped = dff[[target, catg_ftr]].groupby(catg_ftr)
order_df = pd.DataFrame({col:vals[target] for col,vals in grouped}).mean().sort_values(ascending=True)
# Set some basic seaborn parameters
plt.figure(figsize=(40, 10))
# sn.set_context("paper", rc={"font.size":14,"axes.titlesize":14,"axes.labelsize":15})
# Create a plot
ax = sn.boxplot(x=catg_ftr, y=target, data=dff, palette="Set3", order=order_df.index)
# Set additional parameters
ax.set_xlabel(catg_ftr,fontsize=35)
ax.set_ylabel(target,fontsize=35)
ax.set_ylim(dff[target].min()-2, dff[target].max()+2)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(40)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(40)
plt.xticks(rotation=90, horizontalalignment='right')
plt.show()
# -
# Now we can skip a couple of features and have a look at the prices. Let's say we have a 2-bed apartment and we need to set a price. Someone told us that a lower price will help us gain better ratings.
# However, from a simple scatterplot (where x=target and y=price) we cannot really tell whether that person was right or wrong:
# +
def plot_scatter_basic(df, y_col, target):
"""Creates a basic scatterplot with trendline.
Input params:
df: pandas dataframe,
y_col: numerical feature we compare with target (y axis)
target: target col name (x axis)
"""
# Create a plot
ax = df.plot.scatter(x=target, y=y_col,figsize=(10, 10), fontsize=14)
# Plot params config
ax.set_xlim(df[target].min() - 2, df[target].max() + 2)
ax.set_ylim(df[y_col].min(), df[y_col].max())
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
# Trendline
x = df[target]
y = df[y_col]
m, b = np.polyfit(x, y, 1)
ax.plot(x, m*x + b)
# For the input dataframe we filter the 2-bed Apartments only:
y_col = 'price'
plot_scatter_basic(df[(df.beds==2) & (df.property_type=='Apartment')], y_col, target)
# -
# <br>
# The following function creates a scatterplot from a dataframe grouped by target, with a median aggregate applied to price.
# An upgrade (compared with the previous plot function) is that outliers are removed as well:
# +
def plot_scatter_median(df, y_col, target):
"""Creates a basic scatterplot with trendline.
Input params:
df: pandas dataframe,
y_col: numerical feature we compare with target (y axis)
target: target col name (x axis)
"""
# Prepares a groupby df
plt_df = df.groupby([target])[y_col].agg('median').to_frame().reset_index()
# remove outliers
plt_df = plt_df[plt_df[y_col].between(plt_df[y_col].quantile(.10), plt_df[y_col].quantile(.85))]
# Create a plot
ax = plt_df.plot.scatter(x=target, y=y_col, figsize=(10, 10), fontsize=14)
# Config plot parameters
ax.set_xlim(plt_df[target].min()-2, plt_df[target].max()+2)
ax.set_ylim(plt_df[y_col].min()-5, plt_df[y_col].max()+5)
    # Trendline (fitted and drawn on the grouped, outlier-free data)
    m, b = np.polyfit(plt_df[target], plt_df[y_col], 1)
    ax.plot(plt_df[target], m * plt_df[target] + b)
plot_scatter_median(df[(df.beds==2) & (df.property_type=='Apartment')], 'price', target)
# -
# So the previous chart shows that lower prices have no positive impact on ratings. On the contrary, higher score ratings come with higher price medians.
# We don't need to review all the features from feature importance list to be able to answer the question 2:
# <b>Yes, we can identify a useful set of features with meaningful impact on target however a deeper investigation of how the feature impacts the target is necessary.</b>
# ## Question 3: How do prices of apartments vary by number of beds available?
# First let's have a look at the beds value counts and series dtype:
print (df.beds.dtype)
df.beds.value_counts()
# We can do some simple clipping to reduce the number of categories and check the value counts again, this time with the Apartment filter:
# +
catg_ftr = 'catg_beds'
df[catg_ftr] = df.beds.apply(lambda x: str(int(x)) if x < 4 else '4+')
dff = df[df.property_type=='Apartment']
print (dff[catg_ftr].dtype)
dff[catg_ftr].value_counts()
# +
y_col = 'price'
# Prepares a groupby df
grouped = dff[[y_col, catg_ftr]].groupby(catg_ftr)
order_df = pd.DataFrame({col:vals[y_col] for col,vals in grouped}).mean().sort_values(ascending=True)
# remove outliers
dff = dff[dff[y_col].between(dff[y_col].quantile(.05), dff[y_col].quantile(.95))]
# Create a plot
plt.figure(figsize=(40, 10))
ax = sn.boxplot(x=catg_ftr, y=y_col, data=dff, palette="Set2", order=order_df.index)
# Config plot parameters
ax.set_xlabel(catg_ftr,fontsize=35)
ax.set_ylabel(y_col,fontsize=35)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(40)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(40)
plt.show()
# -
# The boxplot above answers question 3. We can see some price overlap (even between the 1-bed & 4+ bed apartments). On the other hand, each category has at least a slightly greater median than the previous one. Categories 3 and 4+ have nearly identical medians, but the minimum of 4+ is somewhere around 100 while the minimum of category 3 is close to 60. Interestingly, the 1st quartile of the 4+ category is actually lower than that of category 3.
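# The medians and quartiles behind such a boxplot can also be read off directly with a groupby (toy numbers below, not the Seattle data):

```python
import pandas as pd

toy = pd.DataFrame({"catg_beds": ["1", "1", "3", "3", "4+", "4+"],
                    "price": [80, 100, 120, 140, 100, 200]})
# Quartiles per bed category, as drawn by the boxplot
stats = toy.groupby("catg_beds")["price"].quantile([0.25, 0.5, 0.75])
print(stats)
```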
# File: [Udacity]_DS_init_project.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# imports libraries
import os
import sys
import glob
#import scipy.io.wavfile
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
#import importlib
import math
# %matplotlib inline
# Grabs the preprocessing and automatic_sync files
sys.path.append(os.path.join(os.pardir,'pythonCode'))
import preprocessing as pp
import automatic_sync as autoS
# +
rawDataPath = os.path.join(os.pardir,'rawData')
files = glob.glob(os.path.join(rawDataPath,'*.wav'))
names = []
for name in files:
fileName = os.path.basename(name).split(".")[0]
names.append(fileName)
# Determines which cameras will be selected (['Bents'],['Camera Location'],['Motion #'])
filt = (None,None,['18']) # Selects the bent 1 cameras during motion 18
# Applies filter to camera names and returns only selected subset names
audioFiles = pp.getKeys(names,filt);
# Reads the .wav files from the list generated by getKeys
(names,cDataset) = pp.readWAV(rawDataPath,audioFiles);
# -
def highpass_filter(origSignal,Fs,F1,F2,method='ellip',show=False):
    '''Design an IIR highpass filter (passband edge F1 Hz, stopband edge F2 Hz,
    sampling rate Fs Hz). With show=True, plot the frequency response and return
    (None, None); otherwise return (time, filtered signal).
    '''
Nf = Fs/2; # Nyquist freqency in Hz
b,a = signal.iirdesign(F1/Nf,F2/Nf,0.2,80,ftype=method)
w, h = signal.freqz(b, a)
if show is True:
fig = plt.figure()
plt.title('Digital filter frequency response')
plt.plot(Nf*w/math.pi, 20 * np.log10(abs(h)), 'b')
plt.ylabel('Amplitude [dB]', color='b')
plt.xlabel('Frequency [Hz]')
return None, None
elif show is False:
filteredSignal = signal.filtfilt(b,a,origSignal,padlen=150)
time = np.linspace(0,(1/Fs)*len(filteredSignal),len(filteredSignal))
return time, filteredSignal
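# A quick sanity check of the design idea on a synthetic two-tone signal (my own addition, assuming scipy is installed): the 1 kHz component should be suppressed while 17 kHz passes.

```python
import numpy as np
from scipy import signal

fs = 48000
t = np.arange(0, 0.05, 1 / fs)
low = np.sin(2 * np.pi * 1000 * t)    # below the stopband edge: should be removed
high = np.sin(2 * np.pi * 17000 * t)  # above the passband edge: should survive
b, a = signal.iirdesign(16000 / (fs / 2), 15500 / (fs / 2), 0.2, 80, ftype='ellip')
filtered = signal.filtfilt(b, a, low + high, padlen=150)
ratio = np.std(filtered) / np.std(low + high)
print(ratio)  # well below 1: only one of the two equal-power tones remains
```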
# Displays highpass filter design
time, newSig = highpass_filter(None,48000,16000,15500,show=True)
# +
# Applies a high pass filter over each of the channels to reduce noise levels
plt.figure(figsize=(6,3*len(names)-1))
count = 1
means = {}
for name in names[1:]:
chan = cDataset[name][:,0]
time, newSig = highpass_filter(chan,48000,15100,15000)
plt.subplot(len(names)-1,1,count)
plt.plot(time,newSig,'r')
plt.xlabel('Time[sec]')
plt.ylabel('Signal')
plt.title(names[0] + " : " + name)
plt.grid()
plt.ylim((-20,20))
plt.xlim((35,50))
count = count+1
plt.draw()
# +
# Adds all of the channels together. Does not work well because some channels have more noise than others.
# Note: this cell uses sDataset, which is created in the normalization cell below, so run that cell first.
Fs = 48000
sumSig = np.zeros_like(sDataset[names[0]][:,0])
for name in names:
sumSig = sumSig + np.array(sDataset[name][:,0],dtype = 'float64')
time = np.linspace(0,(1/Fs)*len(sumSig),len(sumSig))
# +
# Normalizes the functions to have the same mean power
scales = {}
sDataset = {} # new Scaled Dataset (hopefully makes the noise levels roughly the same)
Fs = 48000; #sampling frequency in Hz
for name in names:
csignal = cDataset[name][:,0]
time = (1/Fs)*np.linspace(0,len(csignal),len(csignal))
integral = np.trapz(abs(csignal),time)
scales[name]= integral
sDataset[name] = csignal/integral
print(scales)
# File: notebooks/highpass_filter.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Madrigal Example
# An example of reading in ISR data from Madrigal and making some basic plots.
# %matplotlib inline
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import FormatStrFormatter
# ## Read in Data
# This block of code reads in the Madrigal hdf5 file and prunes data to desired antenna, pulse mode and altitude window.
filename = 'mlh181109g.001.hdf5'  # Name of file
imode = 'LP'
m_num = 115  # Mode number: 115 is long pulse, 97 is alternating code.
a_lim = [150, 750]  # Altitude window in km
kinst = 32  # Zenith antenna is 32, MISA is 31
# Parameter to plot
i_param = 'ne'  # Parameter name can be ne, te, ti or vo
i_lim = [1e10, 5e11]  # Color-scale limits
i_str = '$N_e$ in m$^{-3}$'
# Read in using Pandas
d_f = pd.read_hdf(filename,'Data/Table Layout')
# Add datetime to data frame.
d_f['utc_time'] = pd.to_datetime(d_f['ut1_unix'], unit='s')
# Pruning mode, instrument(antenna at Millstone) and altitude.
d_f = d_f[d_f['mdtyp']==m_num]
d_f = d_f[d_f['kinst']==kinst]
alt_window = np.logical_and(d_f['gdalt']>=a_lim[0], d_f['gdalt']<a_lim[1])
d_f = d_f[alt_window]
# Pivot for plotting
ddata = d_f.pivot(index='gdalt', columns='utc_time', values=i_param)
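# The pivot reshapes the long table into an altitude-by-time grid; a toy illustration with made-up values:

```python
import pandas as pd

toy = pd.DataFrame({"gdalt": [200, 400, 200, 400],
                    "utc_time": ["t0", "t0", "t1", "t1"],
                    "ne": [1e10, 2e10, 3e10, 4e10]})
grid = toy.pivot(index="gdalt", columns="utc_time", values="ne")
print(grid)  # rows: altitude, columns: time
```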
# ## Plotting
# This portion plots the desired data in a range-time-intensity figure, which shows how the parameter changes as a function of time and altitude (or range).
# +
grid_kws = {"width_ratios": (.85, .05), "hspace":0.45, 'top':0.95}
f_data, axmat = plt.subplots(1, 2, figsize=(8, 6), gridspec_kw=grid_kws)
curax = axmat[0]
cbar_ax = axmat[1]
ax1 = sns.heatmap(data=ddata, vmin=i_lim[0], vmax=i_lim[1], cmap='viridis',
xticklabels=30, yticklabels=7, ax=curax, cbar_ax=cbar_ax, )
# curax.set_ylim(r_lim[0],r_lim[1])
ax1.invert_yaxis()
#curax.set_ylim(r_lim[0],r_lim[1])
ax1.set_xticklabels(ddata.columns[::30].strftime('%H:%M'))
ax1.set_yticklabels(ddata.index.values[::7].astype(int))
ax1.set_xlabel('UTC Time')
ax1.set_ylabel('Altitude in km')
for label in ax1.get_xmajorticklabels():
label.set_rotation(30)
label.set_horizontalalignment("right")
cbar_ax.set_ylabel(i_str)
#ax1.yaxis.set_major_formatter(majorFormatter)
t1 = curax.set_title('RTI Plot: {0}'.format(i_str))
# -
# File: Madrigal_Example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Uniformity and Randomness Tests
# ### Kolmogorov-Smirnov Test
# +
# !pip install numpy
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import random
def get_random_numbers(size):
"""
"""
result = np.zeros(size)
for i in range(size):
result[i] = random.random()
return result
def get_numpy_random_numbers(size):
"""
"""
return np.random.uniform(0, 1, size)
# +
size = 500
own_random_values = get_random_numbers(size)
numpy_random_values = get_numpy_random_numbers(size)
x_own_random = np.sort(own_random_values)
x_numpy_random = np.sort(numpy_random_values)
# The empirical CDF of a sample takes the value i/n at the i-th sorted point;
# for Uniform(0, 1) the theoretical CDF is F(x) = x, so the KS statistic
# compares i/n with the sorted sample values themselves.
y_uniform = np.arange(1, size + 1) / size
y_own_random = x_own_random
y_numpy_random = x_numpy_random
# +
De_own_random = np.max(np.absolute(y_own_random - y_uniform))
De_numpy_random = np.max(np.absolute(y_numpy_random - y_uniform))
print(De_own_random)
print(De_numpy_random)
# -
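# To judge these D values, we can compare them with the asymptotic critical value at the 5% significance level, D_crit ≈ 1.36 / sqrt(n) (the 1.36 coefficient is the standard large-sample approximation):

```python
import math

n = 500
d_critical = 1.36 / math.sqrt(n)  # reject uniformity if D exceeds this
print(d_critical)  # ≈ 0.0608
```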
# ### $\chi^2$ Test
def chi_square(random_values, classes):
"""
"""
frequency_values, bins, patches = plt.hist(random_values, classes)
plt.show()
expected_observations_per_class = size / classes
summation = 0
for value in frequency_values:
squared = (value - expected_observations_per_class) ** 2
divided = squared / expected_observations_per_class
summation += divided
return summation
# +
classes = 20
own_chi_value = chi_square(x_own_random, classes)
print("El chi cuadrado del generador propio es: {}".format(own_chi_value))
numpy_chi_value = chi_square(x_numpy_random, classes)
print("El chi cuadrado del generador numpy es: {}".format(numpy_chi_value))
# -
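# With 20 classes there are 19 degrees of freedom; a hedged check against the 95% critical value (assuming scipy is available):

```python
from scipy.stats import chi2

classes = 20
crit = chi2.ppf(0.95, classes - 1)
print(crit)  # ≈ 30.14; statistics below this are consistent with uniformity
```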
# ### Runs Test
# +
def streak_transformer(observations):
"""
"""
size = len(observations)
streak_string = ""
for i in range(size):
if(i == size - 1):
break
if(observations[i] < observations[i + 1]):
streak_string += "1"
else:
streak_string += "0"
return streak_string
def streak_counter(streak_string):
"""
"""
size = len(streak_string)
one_streaks = 0
zero_streaks = 0
for i in range(size):
if(i == size - 1):
break
if(streak_string[i] == "1" and streak_string[i + 1] == "0"):
one_streaks += 1
elif(streak_string[i] == "0" and streak_string[i + 1] == "1"):
zero_streaks += 1
total_streaks = one_streaks + zero_streaks
result = (one_streaks, zero_streaks, total_streaks)
return result
# +
streak_own_string = streak_transformer(own_random_values)
streak_numpy_string = streak_transformer(numpy_random_values)
streak_own_values = streak_counter(streak_own_string)
streak_numpy_values = streak_counter(streak_numpy_string)
print(streak_own_values, streak_numpy_values)
# -
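# For an up-and-down runs test, the expected number of runs and its variance under randomness are known in closed form; for n = 500 observations:

```python
n = 500
expected_runs = (2 * n - 1) / 3     # E[R] = (2n - 1) / 3
variance_runs = (16 * n - 29) / 90  # Var[R] = (16n - 29) / 90
print(expected_runs, variance_runs)
```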
# ### Permutation Test
# ### Gap Test
# File: Taller 3 - Test de uniformidad y aleatoriedad.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Template Matching
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from mpl_toolkits.mplot3d import Axes3D
import cv2
from scipy.signal import correlate
# +
img = cv2.imread("images/4.2.01.tiff", 0)
print (img.shape)
plt.imshow(img,cmap='gray'),plt.title('Source Image'),plt.show()
template = img[243:260,277:292]
plt.imshow(template,cmap='gray'),plt.title('Template'),plt.show()
print (template.shape)
# +
methods = [cv2.TM_CCORR, cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]
w, h = template.shape[1],template.shape[0]
for method in methods:
res = cv2.matchTemplate(img,template,method)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
top_left = min_loc
else:
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(img,top_left, bottom_right, (0,0,255), 2)
plt.subplot(121),plt.imshow(res,cmap = 'gray')
plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(img,cmap = 'gray')
plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
plt.show()
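# The normalized scores above can be illustrated by hand: a mean-subtracted, variance-normalized correlation (the idea behind cv2.TM_CCOEFF_NORMED) equals 1 when a patch is matched against itself:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(patch, patch))  # ≈ 1.0 for a perfect match
```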
# File: 3_Template_Matching.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="007a6ef8-de5a-4619-a4a8-485025ab1727"
# # Predicting Loan Status with Python
# This notebook uses Python, NumPy, and Matplotlib to explore the relationship between several data fields in the Lending Club Loan Data SQLite database. SQL queries are used to obtain the loan data records that contain specific strings in the **title** field, which is the loan title provided by the borrower. The search strings investigated are:
#
# * "credit card"
# * "medical"
# * "debt"
#
# Finally, a decision tree classifier (scikit-learn) is used to predict the **loan_status**, which is the current status of the loan. A binary classification system is used, in which the values for the **loan_status** field are classified into two categories:
#
# * 0: "Fully Paid" or "Current"
# * 1: "Late" (for any time period) or "Charged Off"
#
# The following features are used to predict the loan status category (descriptions are from the "LCDataDictionary.xlsx" file):
#
# * **loan_amnt**: The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.
# * **int_rate**: Interest Rate on the loan.
# * **annual_inc**: The self-reported annual income provided by the borrower during registration.
# * **delinq_2yrs**: The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years.
# * **open_acc**: The number of open credit lines in the borrower's credit file.
# * **dti**: A ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income.
# * **emp_length**: Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years.
# * **funded_amnt**: The total amount committed to that loan at that point in time.
# * **tot_cur_bal**: Total current balance of all accounts.
# * **home_ownership**: The home ownership status provided by the borrower during registration. Our values are: RENT, OWN, MORTGAGE, OTHER.
#
# A loan status category of 0 is considered to be **good** because the loan status is either "Fully Paid" or "Current". A loan status category of 1 is considered to be **poor** because the loan status is either "Late" (for any time period) or "Charged Off".
#
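# The classification rule above can be sketched as a small helper (a hypothetical standalone function for illustration; the notebook applies the same rule inline when it builds its feature/target lists):

```python
def status_to_category(loan_status):
    """Map a raw loan_status string to a binary category, or None if neither."""
    if loan_status in ("Fully Paid", "Current"):
        return 0  # good outcome
    if "Late" in loan_status or loan_status == "Charged Off":
        return 1  # poor outcome
    return None  # statuses such as "In Grace Period" fall in neither category

print(status_to_category("Fully Paid"))          # 0
print(status_to_category("Late (31-120 days)"))  # 1
```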
# + _cell_guid="13213df4-d2a1-4e2c-a7d1-9e311537d406"
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
from sklearn import tree
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
#from subprocess import check_output
#print(check_output(["ls", "../input"]).decode("utf8"))
def sql_query(s):
"""Return results for a SQL query.
Arguments:
s (str) -- SQL query string
Returns:
(list) -- SQL query results
"""
conn = sqlite3.connect("../input/database.sqlite")
c = conn.cursor()
c.execute(s)
result = c.fetchall()
conn.close()
return result
def print_details():
"""Print database details including table names and the number of rows.
"""
table_names = sql_query("SELECT name FROM sqlite_master " +
"WHERE type='table' " +
"ORDER BY name;")[0][0]
print("Names of tables in SQLite database: {0}".format(table_names))
num_rows = sql_query("SELECT COUNT(*) FROM loan;")[0][0]
print("Number of records in table: {0}".format(num_rows))
def print_column_names():
"""Print the column names in the 'loan' table.
Note that the "index" column name is specific to Python and is not part of
the original SQLite database.
"""
conn = sqlite3.connect("../input/database.sqlite")
conn.row_factory = sqlite3.Row
c = conn.cursor()
c.execute("SELECT * FROM loan LIMIT 2;")
r = c.fetchone()
i = 1
print("Column names:")
for k in r.keys():
print("{0:d}\t{1}".format(i, k))
i += 1
conn.close()
print_details()
print_column_names()
# + [markdown] _cell_guid="86ffcf4a-7fb3-43aa-8132-9061f2d7d3e4"
# # Data exploration
# Explore loan data records that contain specific strings in the **title** field. The search strings investigated are:
#
# * "credit card"
# * "medical"
# * "debt"
# + _cell_guid="5c214cb4-2e68-4d94-ad7a-f5f3e5cfb22d"
emp_length_dict = {'n/a':0,
'< 1 year':0,
'1 year':1,
'2 years':2,
'3 years':3,
'4 years':4,
'5 years':5,
'6 years':6,
'7 years':7,
'8 years':8,
'9 years':9,
'10+ years':10}
home_ownership_dict = {'MORTGAGE':0,
'OWN':1,
'RENT':2,
'OTHER':3,
'NONE':4,
'ANY':5}
features_dict = {'loan_amnt':0,
'int_rate':1,
'annual_inc':2,
'delinq_2yrs':3,
'open_acc':4,
'dti':5,
'emp_length':6,
'funded_amnt':7,
'tot_cur_bal':8,
'home_ownership':9}
def get_data(s):
"""Return features and targets for a specific search term.
Arguments:
s (str) -- string to search for in loan "title" field
Returns:
(list of lists) -- [list of feature tuples, list of targets]
(features) -- [(sample1 features), (sample2 features),...]
(target) -- [sample1 target, sample2 target,...]
"""
data = sql_query("SELECT " +
"loan_amnt,int_rate,annual_inc," +
"loan_status,title,delinq_2yrs," +
"open_acc,dti,emp_length," +
"funded_amnt,tot_cur_bal,home_ownership " +
"FROM loan " +
"WHERE application_type='INDIVIDUAL';")
features_list = []
target_list = []
n = 0 # counter, number of total samples
n0 = 0 # counter, number of samples with target=0
n1 = 0 # counter, number of samples with target=1
for d in data:
# d[0] (loan_amnt) -- must have type 'float'
# d[1] (int_rate) -- must have type 'str'
# d[2] (annual_inc) -- must have type 'float'
# d[3] (loan_status) -- must have type 'str'
# d[4] (title) -- must have type 'str'
# d[5] (delinq_2yrs) -- must have type 'float'
# d[6] (open_acc) -- must have type 'float'
# d[7] (dti) -- must have type 'float'
# d[8] (emp_length) -- must have type 'str'
# d[9] (funded_amnt) -- must have type 'float'
# d[10] (tot_cur_bal) -- must have type 'float'
# d[11] (home_ownership) -- must have type 'str'
        # Keep only records where every field has the expected type
        # (the original checks omitted d[11], home_ownership)
        expected_types = (float, str, float, str, str, float,
                          float, float, str, float, float, str)
        if all(isinstance(v, t) for v, t in zip(d, expected_types)):
# Ensure that "int_rate" string value can be converted to float
try:
d1_float = float(d[1].replace("%", ""))
            except ValueError:
                continue
# Ensure that "emp_length" string value is in dict
try:
e = emp_length_dict[d[8]]
            except KeyError:
                print("Unrecognized emp_length value: {0}".format(d[8]))
                continue
# Ensure that "home_ownership" string value is in dict
try:
h = home_ownership_dict[d[11]]
            except KeyError:
                print("Unrecognized home_ownership value: {0}".format(d[11]))
                continue
            # Search the "title" field case-insensitively
if s.lower() in d[4].lower():
if d[3] == 'Fully Paid' or d[3] == 'Current':
target = 0 # Define target value as 0
n += 1
n0 += 1
elif 'Late' in d[3] or d[3] == 'Charged Off':
target = 1 # Define target value as 1
n += 1
n1 += 1
else:
continue
                # Define features tuple (order matches features_dict):
                # (loan_amnt, int_rate, annual_inc, delinq_2yrs, open_acc,
                #  dti, emp_length, funded_amnt, tot_cur_bal, home_ownership)
features = (d[0],
float(d[1].replace("%", "")),
d[2],
d[5],
d[6],
d[7],
emp_length_dict[d[8]],
d[9],
d[10],
home_ownership_dict[d[11]])
features_list.append(features)
target_list.append(target)
else:
pass
print("----------------------------------------")
print(s)
print("----------------------------------------")
print("Total number of samples: {0}".format(n))
print("% of all samples with target=0: {0:3.4f}%".format(100*n0/(n0+n1)))
print("% of all samples with target=1: {0:3.4f}%".format(100*n1/(n0+n1)))
print("")
result = [features_list, target_list]
return result
def create_scatter_plot(x0_data, y0_data,
                        x1_data, y1_data,
                        pt, pa,
                        x_label, y_label,
                        axis_type):
    """Create a scatter plot comparing the two loan status categories.
    Arguments:
    x0_data, y0_data (lists) -- samples with target=0 ("Fully Paid" or "Current")
    x1_data, y1_data (lists) -- samples with target=1 ("Late" or "Charged Off")
    pt (str) -- plot title
    pa (list) -- plot axis limits [xmin, xmax, ymin, ymax]
    x_label, y_label (str) -- axis labels
    axis_type (str) -- 'semilogx', 'semilogy', 'loglog', or standard axes
    """
    plt.figure(num=2, figsize=(8, 8))
    ax = plt.gca()
    ax.set_facecolor("#BBBBBB")  # set_axis_bgcolor was removed in Matplotlib 2.2
ax.set_axisbelow(True)
plt.subplots_adjust(bottom=0.1, left=0.15, right=0.95, top=0.95)
plt.title(pt, fontsize=16)
plt.axis(pa)
plt.xlabel(x_label, fontsize=16)
plt.ylabel(y_label, fontsize=16)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
if axis_type == 'semilogx':
plt.semilogx(x0_data, y0_data, label='0: "Fully Paid" or "Current"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='b')
plt.semilogx(x1_data, y1_data, label='1: "Late" or "Charged Off"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='r')
elif axis_type == 'semilogy':
plt.semilogy(x0_data, y0_data, label='0: "Fully Paid" or "Current"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='b')
plt.semilogy(x1_data, y1_data, label='1: "Late" or "Charged Off"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='r')
elif axis_type == "loglog":
plt.loglog(x0_data, y0_data, label='0: "Fully Paid" or "Current"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='b')
plt.loglog(x1_data, y1_data, label='1: "Late" or "Charged Off"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='r')
else:
plt.plot(x0_data, y0_data, label='0: "Fully Paid" or "Current"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='b')
plt.plot(x1_data, y1_data, label='1: "Late" or "Charged Off"',
linestyle='None', marker='.', markersize=8,
alpha=0.5, color='r')
    plt.grid(visible=True, which='major', axis='both',
             linestyle="-", color="white")  # "b" was renamed "visible" in Matplotlib 3.5
plt.legend(loc='upper right', numpoints=1, fontsize=12)
plt.show()
plt.clf()
def plot_two_fields(data, s, f1, f2,
pa, x_label, y_label,
axis_type):
    # data (list of lists) -- data from "get_data" function
    # s (string) -- search string
    # f1 (string) -- database field 1
    # f2 (string) -- database field 2
    # pa (list) -- plot axis limits
    # x_label (string) -- x-axis label
    # y_label (string) -- y-axis label
    # axis_type (string) -- axis scaling passed to "create_scatter_plot"
x0_list = [] # Fully Paid or Current
y0_list = [] # Fully Paid or Current
x1_list = [] # Late or Charged Off
y1_list = [] # Late or Charged Off
features_list = data[0]
target_list = data[1]
for i in range(len(features_list)):
x = features_list[i][features_dict[f1]]
y = features_list[i][features_dict[f2]]
if target_list[i] == 0:
x0_list.append(x)
y0_list.append(y)
elif target_list[i] == 1:
x1_list.append(x)
y1_list.append(y)
else:
pass
create_scatter_plot(
x0_list, y0_list,
x1_list, y1_list,
"Loan title search term: " + s, pa,
x_label, y_label,
axis_type)
# + [markdown] _cell_guid="872f9c61-2654-4353-9d47-8d86486fb4c0"
# ### Search string: "credit card"
# + _cell_guid="64edef10-3442-4a9c-a8b1-a5f4147234f1"
cc_data = get_data('credit card')
# + _cell_guid="3a289600-ce3d-4cbf-84e4-dc7ec615a8f1"
plot_two_fields(cc_data, 'credit card', 'loan_amnt', 'int_rate',
[1e2, 1e5, 5.0, 30.0], 'loan amount', 'interest rate',
'semilogx')
# + _cell_guid="8ee15a48-d752-4496-87f5-11478b6fbb7e"
plot_two_fields(cc_data, 'credit card', 'annual_inc', 'int_rate',
[1e3, 1e7, 5.0, 30.0], 'annual income', 'interest rate',
'semilogx')
# + _cell_guid="5df827c1-f955-4d6d-a1ee-c61813544bea"
plot_two_fields(cc_data, 'credit card', 'annual_inc', 'loan_amnt',
[1e3, 1e7, 0.0, 35000.0], 'annual income', 'loan amount',
'semilogx')
# + _cell_guid="1e82df4c-30e7-4d3d-8e79-457ffc267fcd"
plot_two_fields(cc_data, 'credit card', 'loan_amnt', 'funded_amnt',
[0.0, 35000.0, 0.0, 35000.0], 'loan amount', 'funded amount',
'standard')
# + _cell_guid="e3100bd5-6527-473e-9156-96a017e69fd6"
plot_two_fields(cc_data, 'credit card', 'home_ownership', 'funded_amnt',
[-1, 6, 0.0, 35000.0], 'home ownership', 'funded amount',
'standard')
# + [markdown] _cell_guid="8ba8d5d3-f566-446b-9c06-4c7db18f69ee"
# ### Search string: "medical"
# + _cell_guid="723b89c0-a0ff-4bf2-9a60-7001262ca572"
medical_data = get_data('medical')
# + _cell_guid="60e2a8af-00b7-4802-9440-7cad0af70719"
plot_two_fields(medical_data, 'medical', 'loan_amnt', 'int_rate',
[1e2, 1e5, 5.0, 30.0], 'loan amount', 'interest rate',
'semilogx')
# + _cell_guid="0881c1d6-b874-4d1e-b286-703c461bde1e"
plot_two_fields(medical_data, 'medical', 'annual_inc', 'int_rate',
[1e3, 1e7, 5.0, 30.0], 'annual income', 'interest rate',
'semilogx')
# + _cell_guid="980f6a8e-bb81-4cd6-848b-af025c730c53"
plot_two_fields(medical_data, 'medical', 'annual_inc', 'loan_amnt',
[1e3, 1e7, 0.0, 35000.0], 'annual income', 'loan amount',
'semilogx')
# + _cell_guid="2901b99c-d5e2-4bad-9db3-16f3fd30896b"
plot_two_fields(medical_data, 'medical', 'loan_amnt', 'funded_amnt',
[0.0, 35000.0, 0.0, 35000.0], 'loan amount', 'funded amount',
'standard')
# + _cell_guid="956aa427-cb84-43aa-9c6f-5c647f2b7451"
plot_two_fields(medical_data, 'medical', 'home_ownership', 'funded_amnt',
[-1, 6, 0.0, 35000.0], 'home ownership', 'funded amount',
'standard')
# + [markdown] _cell_guid="e3f77ce0-9aaf-4a60-8571-5c1be2cc3b20"
# ### Search string: "debt"
# + _cell_guid="a9daebdd-7cdd-4e45-8c07-14d551a5e529"
debt_data = get_data('debt')
# + _cell_guid="6380a294-3f85-40a4-9892-e7e16ba6505f"
plot_two_fields(debt_data, 'debt', 'loan_amnt', 'int_rate',
[1e2, 1e5, 5.0, 30.0], 'loan amount', 'interest rate',
'semilogx')
# + _cell_guid="59eae0af-bd8b-4ec6-8582-c026f4e67859"
plot_two_fields(debt_data, 'debt', 'annual_inc', 'int_rate',
[1e3, 1e7, 5.0, 30.0], 'annual income', 'interest rate',
'semilogx')
# + _cell_guid="9804c800-3e58-40cf-9e24-388398c440ab"
plot_two_fields(debt_data, 'debt', 'annual_inc', 'loan_amnt',
[1e3, 1e7, 0.0, 35000.0], 'annual income', 'loan amount',
'semilogx')
# + _cell_guid="80af9c05-ee0f-466e-b7a6-cb40c1615405"
plot_two_fields(debt_data, 'debt', 'loan_amnt', 'funded_amnt',
[0.0, 35000.0, 0.0, 35000.0], 'loan amount', 'funded amount',
'standard')
# + _cell_guid="4eb3a83e-47ee-49f5-8048-89bc5fee5e87"
plot_two_fields(debt_data, 'debt', 'home_ownership', 'funded_amnt',
[-1, 6, 0.0, 35000.0], 'home ownership', 'funded amount',
'standard')
# + [markdown] _cell_guid="fbf1cb5b-a2fd-4ada-85f2-af6dadfa0e24"
# # Decision tree classifer for predicting the loan status
# A decision tree classifier (scikit-learn) is used to predict the **loan_status**. A binary classification system is used, in which the values for the **loan_status** field are classified as follows:
#
# * 0: "Fully Paid" or "Current"
# * 1: "Late" (for any time period) or "Charged Off"
#
# The loan status category (0 or 1) is hereafter referred to as the "target".
# + _cell_guid="b405b49c-e982-4f38-80be-f2d2b54988a1"
def create_classifier(f, t, nt):
"""Create classifier for predicting loan status. Print accuracy.
Arguments:
f (list of tuples) -- [(sample 1 features), (sample 2 features),...]
t (list) -- [sample 1 target, sample 2 target,...]
nt (int) -- number of samples to use in training set
"""
training_set_features = []
training_set_target = []
testing_set_features = []
testing_set_target = []
print("Number of training set samples:\t{0}".format(nt))
print("Number of testing set samples:\t{0}".format(len(f)-nt))
print("")
# Build training set
for i in np.arange(0, nt, 1):
training_set_features.append(f[i])
training_set_target.append(t[i])
# Build testing set
for i in np.arange(nt, len(f), 1):
testing_set_features.append(f[i])
testing_set_target.append(t[i])
clf = tree.DecisionTreeClassifier()
clf = clf.fit(training_set_features, training_set_target)
n = 0
n_correct = 0
n0 = 0
n0_correct = 0
n1 = 0
n1_correct = 0
# Compare predictions to testing data
for i in range(len(testing_set_features)):
t = testing_set_target[i]
p = clf.predict(np.asarray(testing_set_features[i]).reshape(1, -1))
# Category 0
if t == 0:
if t == p[0]:
equal = "yes"
n_correct += 1
n0_correct += 1
else:
equal = "no"
n += 1
n0 += 1
# Category 1
elif t == 1:
if t == p[0]:
equal = "yes"
n_correct += 1
n1_correct += 1
else:
equal = "no"
n += 1
n1 += 1
else:
pass
n_accuracy = 100.0 * n_correct / n
n0_accuracy = 100.0 * n0_correct / n0
n1_accuracy = 100.0 * n1_correct / n1
print("Accuracy of predicting testing set target values:")
# Accuracy - manual calculation:
print(" All samples (method 1): {0:3.4f}%".format(n_accuracy))
# Accuracy - scikit-learn built-in method:
print(" All samples (method 2): {0:3.4f}%".format(
100.0 * clf.score(testing_set_features, testing_set_target)))
print(" Samples with target=0: {0:3.4f}%".format(n0_accuracy))
print(" Samples with target=1: {0:3.4f}%\n".format(n1_accuracy))
# + [markdown] _cell_guid="8328d771-ae2a-4e35-8cb0-4f843eafbb18"
# ### Search string: "credit card"
# + _cell_guid="15300381-1253-4c4b-9891-4e2f7d968314"
create_classifier(cc_data[0], cc_data[1], 2000)
# + [markdown] _cell_guid="b788eb22-7d52-4faf-a009-d80ecea87d5d"
# ### Search string: "medical"
# + _cell_guid="dd80fbb1-e01f-4cc6-bfff-f5fc3aeadc84"
create_classifier(medical_data[0], medical_data[1], 2000)
# + [markdown] _cell_guid="1d33b7a2-ee1c-41c4-aa18-db5ba6293b0d"
# ### Search string: "debt"
# + _cell_guid="bec3df7f-88a3-4c94-8d30-de7881eda1c9"
create_classifier(debt_data[0], debt_data[1], 2000)
# + [markdown] _cell_guid="b2bdd08e-676e-47d2-a1c1-ba7903e90dea"
# # Conclusions
# A decision tree classifier was used to predict the loan status category (0 or 1) for loan data associated with specific search strings. Loans with a **poor** loan status category (target=1) were predicted with an accuracy in the range of 16-18% for the three search strings investigated.
#
# The ability to accurately predict loans that are likely to end up with a **poor** outcome is valuable for lenders since this reduces the chance of funding a loan that results in a net financial loss.
#
# # Limitations
#
# * The **poor** loan data was plotted after the **good** loan data. Consequently, many of the **good** loan data points are hidden underneath the **poor** loan data points, resulting in an over-representation of the **poor** data points in the plots.
# * The decision tree classifier was tested with only a single training set for each of the three search strings.
# * The date/time features of the data have not been taken into account.
#
# # Future work
#
# * Improve data visualization so that fewer **good** loan data points are hidden under the **poor** loan data points.
# * Test the decision tree classifier with multiple training sets for each of the three search strings.
# * Improve the prediction accuracy.
# * Consider the date/time features of the data.
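# As a sketch of the "multiple training sets" item above, k-fold cross-validation trains and scores the classifier on several disjoint splits instead of a single one. Synthetic arrays stand in for the loan features and targets here:

```python
import numpy as np
from sklearn import tree
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the feature tuples and targets built by get_data()
rng = np.random.RandomState(0)
X = rng.rand(500, 10)
y = rng.randint(0, 2, size=500)

clf = tree.DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # accuracy on 5 disjoint held-out folds
print(scores.mean(), scores.std())
```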
#
# ***Comments/critiques are welcomed, thanks!***
# Source notebook: downloaded_kernels/loan_data/kernel_87.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KDD Cup 1999 Data
# http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
import pandas as pd
import matplotlib.pyplot as pyplot
from sklearn import datasets
import sklearn.preprocessing as sp
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23
# %matplotlib inline
#
# |File name|Contents|
# |---|---|
# |kddcup.data|Full data set|
# |kddcup.data_10_percent|10% subset of the full data, for training|
# |corrected|Evaluation data labeled as normal or attack|
# |kddcup.testdata.unlabeled|Data without normal/attack labels|
# |kddcup.testdata.unlabeled_10_percent|10% subset of the unlabeled data|
# |kddcup.newtestdata_10_percent_unlabeled|10% subset of the unlabeled data|
col_names = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count",
"srv_count","serror_rate","srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate","label"]
# +
#kdd_data = pd.read_csv("kddcup.data", header=None, names = col_names)
# -
kdd_data_10percent = pd.read_csv("kddcup.data_10_percent", header=None, names = col_names)
kdd_data_10percent.head()
# # Transform Objects Into Categories
kdd_data_10percent.dtypes
kdd_data_10percent['label'].value_counts()
kdd_data_10percent.protocol_type = kdd_data_10percent.protocol_type.astype("category")
kdd_data_10percent.service = kdd_data_10percent.service.astype("category")
kdd_data_10percent.flag = kdd_data_10percent.flag.astype("category")
kdd_data_10percent.label = kdd_data_10percent.label.astype("category")
kdd_data_10percent.dtypes
kdd_data_10percent['protocol_type'].value_counts()
kdd_data_10percent['service'].value_counts()
kdd_data_10percent['flag'].value_counts()
# # Transform Categories Into Integers
le_protocol_type=sp.LabelEncoder()
le_protocol_type.fit(kdd_data_10percent['protocol_type'])
joblib.dump(le_protocol_type, 'dump/le_protocol_type.pkl')
kdd_data_10percent.protocol_type=le_protocol_type.transform(kdd_data_10percent['protocol_type'])
kdd_data_10percent.protocol_type.value_counts()
le_service=sp.LabelEncoder()
le_service.fit(kdd_data_10percent['service'])
joblib.dump(le_service, 'dump/le_service.pkl')
kdd_data_10percent.service=le_service.transform(kdd_data_10percent['service'])
kdd_data_10percent.service.value_counts()
le_flag=sp.LabelEncoder()
le_flag.fit(kdd_data_10percent['flag'])
joblib.dump(le_flag, 'dump/le_flag.pkl')
kdd_data_10percent.flag=le_flag.transform(kdd_data_10percent['flag'])
kdd_data_10percent['flag'].value_counts()
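# A fitted LabelEncoder assigns integer codes in sorted order of the class labels, and `inverse_transform` maps the codes back to the original strings — a small self-contained illustration with protocol-type values:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(["tcp", "udp", "icmp", "tcp"])  # classes sorted: icmp, tcp, udp
print(list(codes))                        # integer codes
print(list(le.inverse_transform(codes)))  # back to the original strings
```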
train_labels = kdd_data_10percent['label'].copy()
train_features = kdd_data_10percent.drop('label',axis=1)
train_labels.head()
train_features.head()
# # SVM training
# +
from sklearn import svm
clf = svm.SVC()
clf.fit(train_features, train_labels)
# +
#joblib.dump(clf, 'dump/clf.pkl')
# +
#clf = joblib.load('dump/clf.pkl')
# -
test_pred = clf.predict(train_features)
test_pred
# +
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(train_labels, test_pred))
print(accuracy_score(train_labels, test_pred))
# -
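# Note that the classification report above is computed on the same data the SVM was trained on, which overstates accuracy. A minimal sketch of a fairer held-out evaluation (synthetic arrays stand in for the KDD features):

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)
X = rng.rand(300, 5)
y = rng.randint(0, 2, size=300)

# Hold out 25% of the rows for testing, fit only on the rest
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = svm.SVC().fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))  # held-out accuracy
```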
# Source notebook: KDDCUP99.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:climatebench]
# language: python
# name: conda-env-climatebench-py
# ---
# +
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import xarray as xr
from glob import glob
import seaborn as sns
import numpy as np
import pandas as pd
import warnings
from xskillscore import crps_gaussian, rmse
# +
SECONDS_IN_YEAR = 60*60*24*365 #s
convert = lambda x: x * SECONDS_IN_YEAR * 1e-12  # kg/s -> Gt/yr
data_path = "F:\\Local Data\\ClimateBench\\"
# Only future scenarios for now
inputs = glob(data_path + "inputs_s*.nc")
def get_rmse(truth, pred):
    # Root-mean-square error (the original omitted the mean, giving per-point absolute error)
    return np.sqrt(((truth - pred) ** 2).mean())
def global_mean(ds):
weights = np.cos(np.deg2rad(ds.latitude))
return ds.weighted(weights).mean(['latitude', 'longitude'])
# +
variables = ['tas', 'diurnal_temperature_range', 'pr', 'pr90']
Y = xr.open_dataset(data_path + 'outputs_ssp245.nc').sel(time=slice(2050, 2100))
# Convert the precip values to mm/day
Y["pr"] *= 86400
Y["pr90"] *= 86400
gp_predictions = xr.merge([{v: xr.open_dataarray(data_path + "outputs_ssp245_predict_gp_{}.nc".format(v))} for v in variables]).sel(time=slice(2050, 2100))
gp_predictions_std = xr.merge([{v: xr.open_dataarray(data_path + "outputs_ssp245_predict_gp_{}_std.nc".format(v))} for v in variables]).sel(time=slice(2050, 2100))
# -
for v in variables:
print(crps_gaussian(Y[v], gp_predictions[v], gp_predictions_std[v], weights=np.cos(np.deg2rad(Y.lat))))
# Source notebook: GP_CRPS.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print("hello world")
# <h1>Housing Price Prediction Model</h1><hr>
# <p>Objective is to create a model which predicts the median house price in a district given other attributes like the
# number of rooms, income, etc.</p>
# +
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROUTE = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROUTE + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url = HOUSING_URL, housing_path = HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
# +
import pandas as pd
def load_housing_data(housing_path = HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
# -
fetch_housing_data()
housing = load_housing_data()
housing.head()
housing.info()
# > Note that the total_bedrooms attribute has only 20,433 non-null values compared to 20,640 total entries. This means 207 districts are missing this attribute.
housing['ocean_proximity'].value_counts()
housing.describe()
# +
# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20, 15))
plt.show()
# -
# Create test and training data
# +
import numpy as np
def split_train_data(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data)*test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
# -
train_set, test_set = split_train_data(housing, 0.2)
len(train_set)
len(test_set)
# This is an inconvenient way of splitting the data, so we use a better method
# +
from zlib import crc32
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio*2**32
def split_train_data_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
# -
housing_with_id = housing.reset_index()
train_set, test_set = split_train_data_by_id(housing_with_id, 0.2, "index")
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_data_by_id(housing_with_id, 0.2, "id")
# <h1>Suppose we need to stratify the split according to the median income of the district</h1>
housing["income_cat"] = pd.cut(housing["median_income"], bins=[0.,1.5,3.0,4.5,6.,np.inf],
labels=[1,2,3,4,5])
housing["income_cat"].hist()
# +
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
# -
strat_test_set["income_cat"].value_counts()/len(strat_test_set)
for set_ in (strat_test_set, strat_train_set):
set_.drop("income_cat", axis=1, inplace=True)
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="population", figsize=(10,7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True
)
plt.legend()
# <h2>Finding Correlation among values</h2>
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
# +
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12,8))
# -
housing.plot(kind="scatter", x="median_income", y="median_house_value",
alpha=0.1)
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
# +
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
# -
imputer.statistics_
housing_num.median().values
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_cat = housing[["ocean_proximity"]]
housing_cat.head()
from sklearn.preprocessing import OrdinalEncoder
ordinal_encode = OrdinalEncoder()
housing_cat_encoded = ordinal_encode.fit_transform(housing_cat)
housing_cat_encoded[:10]
from sklearn.preprocessing import OneHotEncoder
cat_encoder = OneHotEncoder()
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
# +
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self # nothing else to do
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household,
bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
# -
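# The ratio features computed by CombinedAttributesAdder can be checked on a tiny array. A standalone sketch of the same arithmetic as a plain function (hypothetical name `add_ratios`; column indices follow the rooms_ix/bedrooms_ix/... assignments above):

```python
import numpy as np

rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6

def add_ratios(X, add_bedrooms_per_room=True):
    # Same arithmetic as CombinedAttributesAdder.transform
    rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
    population_per_household = X[:, population_ix] / X[:, households_ix]
    if add_bedrooms_per_room:
        bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
        return np.c_[X, rooms_per_household, population_per_household,
                     bedrooms_per_room]
    return np.c_[X, rooms_per_household, population_per_household]

X = np.arange(1.0, 15.0).reshape(2, 7)  # two rows, seven numeric columns
print(add_ratios(X).shape)              # seven originals plus three ratio columns
```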
# Source notebook: .ipynb_checkpoints/Housing-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc, ndimage
import keras
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
# %matplotlib inline
# plots images with labels within jupyter notebook
def plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):
if type(ims[0]) is np.ndarray:
ims = np.array(ims).astype(np.uint8)
if (ims.shape[-1] != 3):
ims = ims.transpose((0,2,3,1))
f = plt.figure(figsize=figsize)
    cols = len(ims)//rows if len(ims) % rows == 0 else len(ims)//rows + 1
for i in range(len(ims)):
sp = f.add_subplot(rows, cols, i+1)
sp.axis('Off')
if titles is not None:
sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i], interpolation=None if interp else 'none')
gen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
height_shift_range=0.1, shear_range=0.15,
zoom_range=0.1, channel_shift_range=10.,
horizontal_flip=True)
image_path = 'cats-and-dogs/cats-and-dogs/train/dog/dog.12.jpg'
# obtain image (scipy.ndimage.imread was removed in SciPy 1.2, so use imageio instead)
import imageio
image = np.expand_dims(imageio.imread(image_path), 0)
plt.imshow(image[0])
# generate batches of augmented images using original
aug_iter = gen.flow(image)
aug_images = [next(aug_iter)[0].astype(np.uint8) for i in range(10)]
plots(aug_images, figsize=(10,10), rows=2)
# Source notebook: notebooks/DataAugmentationKeras.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (data)
# language: python
# name: data
# ---
# https://leetcode.com/problems/find-winner-on-a-tic-tac-toe-game/
class Solution:
def tictactoe(self, moves):
def draw_board(moves):
grid = [[None for _ in range(3)] for _ in range(3)]
for i,m in enumerate(moves):
if i%2==0:
grid[m[0]][m[1]] = "A"
else:
grid[m[0]][m[1]] = "B"
return grid
        def check_line(r):
            # Return the player symbol if all three cells match and are non-empty
            u = list(set(r))
            v = u[0]
            if len(u) == 1 and v:
                return v
def check_victory(grid):
# check all rows
cols = [[] for _ in range(len(grid[0]))]
diags = [[] for _ in range(2)]
for i,r in enumerate(grid):
win = check_line(r)
if win: return win
for j,val in enumerate(r):
cols[j].append(val)
if i==j:
diags[0].append(grid[i][j])
diags[1].append(grid[i][-j-1])
for c in cols:
win = check_line(c)
if win: return win
# check all diags
for d in diags:
win = check_line(d)
if win: return win
num_mvs = len(moves)
grid = draw_board(moves)
res = check_victory(grid)
if res:
return res
elif num_mvs < 9:
return "Pending"
else:
return "Draw"
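# A quick sanity check of the winner logic on two sample games, using a compact standalone re-implementation of the same rules (same move format; hypothetical function name):

```python
def tictactoe(moves):
    grid = [[""] * 3 for _ in range(3)]
    for i, (r, c) in enumerate(moves):
        grid[r][c] = "AB"[i % 2]  # A moves on even turns, B on odd turns
    lines = grid + [list(col) for col in zip(*grid)]  # rows and columns
    lines += [[grid[i][i] for i in range(3)], [grid[i][2 - i] for i in range(3)]]
    for line in lines:
        if line[0] and line.count(line[0]) == 3:
            return line[0]
    return "Pending" if len(moves) < 9 else "Draw"

print(tictactoe([[0, 0], [2, 0], [1, 1], [2, 1], [2, 2]]))  # A wins on the diagonal
print(tictactoe([[0, 0], [1, 1]]))                          # game still in progress
```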
# Source notebook: lt1275_tictactoe_winner.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %run CAR_creation.ipynb
# %run ../../main.py
import pyarc.qcba as qcba
from pyarc import CBA
from pyarc.qcba.data_structures import *
from pyarc.qcba import QuantitativeClassifier
import pyarc.utils.plotting as plotils
import matplotlib.pyplot as plt
from pyarc.algorithms import M1Algorithm, M2Algorithm, top_rules, createCARs
from pyarc.data_structures import TransactionDB
# -
QuantitativeDataFrame
from pyarc.qcba import QCBA
# +
import pandas as pd
from pyarc.qcba.data_structures import (
IntervalReader,
Interval,
QuantitativeDataFrame,
QuantitativeCAR
)
interval_reader = IntervalReader()
interval_reader.closed_bracket = "", "NULL"
interval_reader.open_bracket = "NULL", ""
interval_reader.infinity_symbol = "inf", "inf"
interval_reader.members_separator = "_to_"
interval_reader.compile_reader()
i = interval_reader.read("82.9815_to_inf")
QuantitativeCAR.interval_reader = interval_reader
# -
rules
ds = movies_train_undiscr
ds = ds.reset_index()
quant_dataset = QuantitativeDataFrame(ds)
Y = ds["class"]
# +
dataset_name = "iris"
dataset_index = 1
train_path_undiscr = [ "C:/code/python/machine_learning/assoc_rules/folds_undiscr/train/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
test_path_undiscr = [ "C:/code/python/machine_learning/assoc_rules/folds_undiscr/test/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
train_path_discr = [ "C:/code/python/machine_learning/assoc_rules/train/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
test_path_discr = [ "C:/code/python/machine_learning/assoc_rules/test/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
dataset_train_undiscr = pd.concat([ pd.read_csv(ds) for ds in train_path_undiscr ])
dataset_test_undiscr = pd.concat([ pd.read_csv(ds) for ds in test_path_undiscr ])
dataset_test_undiscr_Y = dataset_test_undiscr.iloc[:,-1]
quant_dataset_train = QuantitativeDataFrame(dataset_train_undiscr)
quant_dataset_test = QuantitativeDataFrame(dataset_test_undiscr)
txns_train_discr = TransactionDB.from_DataFrame(pd.concat([pd.read_csv(ds) for ds in train_path_discr]))
txns_test_discr = TransactionDB.from_DataFrame(pd.concat([pd.read_csv(ds) for ds in test_path_discr]))
rm_cba = CBA(algorithm="m1", confidence=0.1, support=0.01).fit(txns_train_discr, top_rules_args={"target_rule_count":1000})
rm_qcba = QCBA(rm_cba, quant_dataset_train)
rm_qcba.fit(
refitting=True,
literal_pruning=True,
trimming=True,
extension=True,
overlap_pruning=True,
transaction_based_drop=True
)
rm_qcba.clf.rule_model_accuracy(quant_dataset_test, dataset_test_undiscr_Y), rm_cba.rule_model_accuracy(txns_test_discr)
# +
interval_reader = IntervalReader()
interval_reader.closed_bracket = "", "NULL"
interval_reader.open_bracket = "NULL", ""
interval_reader.infinity_symbol = "inf", "inf"
interval_reader.members_separator = "_to_"
interval_reader.compile_reader()
i = interval_reader.read("82.9815_to_inf")
# +
from pyarc.qcba.transformation import QCBATransformation
dataset_name = "iris"
dataset_index = 1
train_path_undiscr = [ "C:/code/python/machine_learning/assoc_rules/folds_undiscr/train/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
test_path_undiscr = [ "C:/code/python/machine_learning/assoc_rules/folds_undiscr/test/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
train_path_discr = [ "C:/code/python/machine_learning/assoc_rules/train/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
test_path_discr = [ "C:/code/python/machine_learning/assoc_rules/test/{}{}.csv".format(dataset_name, dataset_index) for dataset_index in range(0, 9)]
dataset_train_undiscr = pd.concat([ pd.read_csv(ds) for ds in train_path_undiscr ])
dataset_test_undiscr = pd.concat([ pd.read_csv(ds) for ds in test_path_undiscr ])
dataset_test_undiscr_Y = dataset_test_undiscr.iloc[:,-1]
quant_dataset_train = QuantitativeDataFrame(dataset_train_undiscr)
quant_dataset_test = QuantitativeDataFrame(dataset_test_undiscr)
txns_train_discr = TransactionDB.from_DataFrame(pd.concat([pd.read_csv(ds) for ds in train_path_discr]))
txns_test_discr = TransactionDB.from_DataFrame(pd.concat([pd.read_csv(ds) for ds in test_path_discr]))
rm_cba = CBA(algorithm="m1", confidence=0.1, support=0.01).fit(txns_train_discr)
cba_rule_model = rm_cba
quantitative_dataset = quant_dataset
__quant_rules = [ QuantitativeCAR(r) for r in cba_rule_model.clf.rules ]
qcba_transformation = QCBATransformation(quant_dataset_train)
refitting=True,
literal_pruning=True,
trimming=True,
extension=True,
overlap_pruning=True,
transaction_based_drop=True
transformation_dict = {
"refitting": refitting,
"literal_pruning": literal_pruning,
"trimming": trimming,
"extension": extension,
"overlap_pruning": overlap_pruning,
"transaction_based_drop": transaction_based_drop
}
qcba_transformation.transform(__quant_rules)
# +
import pandas
import numpy as np
import math
class RuleExtender1:
def __init__(self, dataframe, min_conditional_improvement=-0.01, min_improvement=0):
if type(dataframe) != QuantitativeDataFrame:
raise Exception(
"type of dataframe must be a QuantitativeDataFrame"
)
self.__dataframe = dataframe
self.min_conditional_improvement = min_conditional_improvement
self.min_improvement = min_improvement
def transform_greedy(self, rules, skip_ahead=1):
copied_rules = [ rule.copy() for rule in rules ]
progress_bar_len = 50
copied_rules_len = len(copied_rules)
progress_bar = "#" * progress_bar_len
progress_bar_empty = " " * progress_bar_len
last_progress_bar_idx = -1
extended_rules = []
#print("len: ", copied_rules_len)
for i, rule in enumerate(copied_rules):
current_progress_bar_idx = math.floor(i / copied_rules_len * progress_bar_len)
if last_progress_bar_idx != current_progress_bar_idx:
last_progress_bar_idx = current_progress_bar_idx
progress_string = "[" + progress_bar[:last_progress_bar_idx] + progress_bar_empty[last_progress_bar_idx:] + "]"
print(*progress_string, sep="")
extended_rules.append(self.__extend_greedy(rule, skip_ahead=skip_ahead))
return extended_rules
def __extend_greedy(self, rule, skip_ahead=1):
ext = self.__extend_rule_greedy(rule, skip_ahead=skip_ahead)
return ext
def __extend_rule_greedy(self, rule, skip_ahead=1):
min_improvement=self.min_improvement
min_conditional_improvement=self.min_conditional_improvement
# check improvement argument ranges
step = 0
current_best = rule
direct_extensions = self.__get_extensions_greedy(rule)
current_best.update_properties(self.__dataframe)
while True:
extension_succesful = False
direct_extensions = self.__get_extensions_greedy(current_best)
#print("extending - new cycle")
for candidate in direct_extensions:
#print("\tcandidate - direct extensions")
#print("\t", direct_extensions)
candidate.update_properties(self.__dataframe)
delta_confidence = candidate.confidence - current_best.confidence
delta_support = candidate.support - current_best.support
if self.__crisp_accept(delta_confidence, delta_support, min_improvement):
current_best = candidate
extension_succesful = True
break
if self.__conditional_accept(delta_confidence, min_conditional_improvement):
enlargement = candidate
while True:
enlargement = self.get_beam_extensions_greedy(enlargement, skip_ahead=skip_ahead)
if not enlargement:
break
candidate.update_properties(self.__dataframe)
enlargement.update_properties(self.__dataframe)
delta_confidence = enlargement.confidence - current_best.confidence
delta_support = enlargement.support - current_best.support
if self.__crisp_accept(delta_confidence, delta_support, min_improvement):
current_best = enlargement
#plotils.plot_quant_rules([current_best])
#plt.show()
#print(step)
step += 1
extension_succesful = True
elif self.__conditional_accept(delta_confidence, min_conditional_improvement):
#plotils.plot_quant_rules([enlargement])
#plt.show()
#print(step)
step += 1
continue
else:
break
if extension_succesful == True:
break
else:
# continue to next candidate
continue
if extension_succesful == False:
break
return current_best
def __get_extensions_greedy(self, rule, skip_ahead=1):
extended_rules = []
for literal in rule.antecedent:
attribute, interval = literal
neighborhood = self.__get_direct_extensions_greedy(literal, skip_ahead=skip_ahead)
for extended_literal in neighborhood:
# copy the rule so the extended literal
# can replace the default literal
copied_rule = rule.copy()
# find the index of the literal
# so that it can be replaced
current_literal_index = copied_rule.antecedent.index(literal)
copied_rule.antecedent[current_literal_index] = extended_literal
copied_rule.was_extended = True
copied_rule.extended_literal = extended_literal
extended_rules.append(copied_rule)
extended_rules.sort(reverse=True)
return extended_rules
def __get_direct_extensions_greedy(self, literal, skip_ahead=1):
"""
ensure sort and unique
before calling functions
"""
attribute, interval = literal
# if nominal
# needs correction to return null and skip when extending
if type(interval) == str:
return [literal]
vals = self.__dataframe.column(attribute)
vals_len = vals.size
mask = interval.test_membership(vals)
# indices of interval members
# we want to extend them
# once to the left
# and once to the right
# but we have to check if resulting
# indices are not larger than value size
member_indexes = np.where(mask)[0]
first_index = member_indexes[0]
last_index = member_indexes[-1]
first_index_modified = first_index - skip_ahead
last_index_modified = last_index + skip_ahead
no_left_extension = False
no_right_extension = False
if first_index_modified < 0:
no_left_extension = True
# if last_index_modified is larger than
# available indices
if last_index_modified > vals_len - 1:
no_right_extension = True
new_left_bound = interval.minval
new_right_bound = interval.maxval
if not no_left_extension:
new_left_bound = vals[first_index_modified]
if not no_right_extension:
new_right_bound = vals[last_index_modified]
# prepare return values
extensions = []
if not no_left_extension:
# values are assumed sorted and unique here: when values are [1, 2, 3, 3, 4, 5]
# and the corresponding interval is (2, 4), shifting raw indices could land on
# the duplicate 3 instead of producing the intended interval (1, 4)
temp_interval = Interval(
new_left_bound,
interval.maxval,
True,
interval.right_inclusive
)
extensions.append((attribute, temp_interval))
if not no_right_extension:
temp_interval = Interval(
interval.minval,
new_right_bound,
interval.left_inclusive,
True
)
extensions.append((attribute, temp_interval))
return extensions
# make private
def get_beam_extensions_greedy(self, rule, skip_ahead=1):
if not rule.was_extended:
return None
# literal which extended the rule
literal = rule.extended_literal
extended_literal = self.__get_direct_extensions_greedy(literal, skip_ahead=skip_ahead)
if not extended_literal and skip_ahead > 1:
return self.get_beam_extensions_greedy(rule, skip_ahead=1)
elif not extended_literal:
return None
copied_rule = rule.copy()
literal_index = copied_rule.antecedent.index(literal)
# so that literal is not an array
copied_rule.antecedent[literal_index] = extended_literal[0]
copied_rule.was_extended = True
copied_rule.extended_literal = extended_literal[0]
return copied_rule
def __crisp_accept(self, delta_confidence, delta_support, min_improvement):
if delta_confidence >= min_improvement and delta_support > 0:
return True
else:
return False
def __conditional_accept(self, delta_conf, min_improvement):
# return an explicit boolean instead of implicitly returning None
return delta_conf >= min_improvement
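# The index-shifting extension performed by __get_direct_extensions_greedy can
# be illustrated on plain sorted, de-duplicated values. A standalone sketch
# that ignores open/closed bounds and the pyarc Interval type:

```python
def direct_extensions(values, lo, hi, skip_ahead=1):
    """Extend a closed interval [lo, hi] over sorted unique values by moving
    its boundary indices skip_ahead positions to the left and to the right."""
    first = values.index(lo)
    last = values.index(hi)
    exts = []
    if first - skip_ahead >= 0:               # room to extend left
        exts.append((values[first - skip_ahead], hi))
    if last + skip_ahead <= len(values) - 1:  # room to extend right
        exts.append((lo, values[last + skip_ahead]))
    return exts

print(direct_extensions([1, 2, 3, 4, 5], 2, 4))  # [(1, 4), (2, 5)]
```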
# +
from pyarc.qcba.transformation import *
ir = IntervalReader()
QuantitativeCAR.interval_reader = ir
movies_train_undiscr = pd.read_csv("../data/movies.csv", sep=";", index_col=0)
movies_train_discr = pd.read_csv("../data/movies_discr.csv", sep=";", index_col=0)
movies_undiscr_txns = movies_train_undiscr.reset_index()
movies_discr_txns = TransactionDB.from_DataFrame(movies_train_discr)
rm = CBA(algorithm="m1", confidence=0.2, support=0.02).fit(movies_discr_txns)
rules = rm.clf.rules
quant_dataset = QuantitativeDataFrame(ds)
quant_rules = [ QuantitativeCAR(r) for r in rules ]
qcba_transformation = QCBATransformation(quant_dataset)
#extended_rules = qcba_transformation.extender.transform(quant_rules)
quant_rules
#rule_extender.transform(quant_rules)
# +
rule_extender = RuleExtender1(quant_dataset, min_conditional_improvement=-0.01)
rule_extender_original = RuleExtender(quant_dataset)
qrule_to_extend = quant_rules[0].copy()
qrule_to_extend.antecedent = [
("a-list-celebrities", Interval(2.5, 3.5, True, True)),
("estimated-budget", Interval(60, 140, True, True))
]
qrules_extended = rule_extender.transform_greedy([qrule_to_extend], skip_ahead=1)
#qrules_extended = rule_extender_original.transform([qrule_to_extend])
plotils.plot_quant_rules(qrules_extended)
# -
quant_rules
# +
refitted = qcba_transformation.refitter.transform(quant_rules)
literal_pruned = qcba_transformation.literal_pruner.transform(refitted)
trimmed = qcba_transformation.trimmer.transform(literal_pruned)
extended = qcba_transformation.extender.transform(trimmed)
#extended = rule_extender.transform_greedy(trimmed, skip_ahead=2)
post_pruned, default_class = qcba_transformation.post_pruner.transform(extended)
overlap_pruned = qcba_transformation.overlap_pruner.transform(post_pruned, default_class)
clf = QuantitativeClassifier(overlap_pruned, default_class)
clf.rule_model_accuracy(QuantitativeDataFrame(movies_train_undiscr), movies_train_undiscr.iloc[:, -1])
# -
np.__version__
# +
string_list = np.array(["Ahoj", "ne", "ahoj", "Ahoj", "nnn", "jjjje"])
string_list == "Ahoj"
# +
quant_rules_iris = [ QuantitativeCAR(r) for r in rm_cba.clf.rules ]
quant_dataset_train
qcba_transformation_iris = QCBATransformation(quant_dataset_train)
refitted = qcba_transformation_iris.refitter.transform(quant_rules_iris)
literal_pruned = qcba_transformation_iris.literal_pruner.transform(refitted)
trimmed = qcba_transformation_iris.trimmer.transform(literal_pruned)
extended = qcba_transformation_iris.extender.transform(trimmed)
post_pruned, default_class = qcba_transformation_iris.post_pruner.transform(extended)
overlap_pruned = qcba_transformation_iris.overlap_pruner.transform(post_pruned, default_class)
# +
datasetname = "iris0"
pd_ds = pd.read_csv("c:/code/python/machine_learning/assoc_rules/train/{}.csv".format(datasetname))
pd_ds_undiscr = pd.read_csv("c:/code/python/machine_learning/assoc_rules/folds_undiscr/train/{}.csv".format(datasetname))
pd_ds_undiscr_test = pd.read_csv("c:/code/python/machine_learning/assoc_rules/folds_undiscr/test/{}.csv".format(datasetname))
txns = TransactionDB.from_DataFrame(pd_ds)
txns_test = TransactionDB.from_DataFrame(pd.read_csv("c:/code/python/machine_learning/assoc_rules/test/{}.csv".format(datasetname)))
rm_cba = CBA()
rm_cba.fit(txns)
rm_qcba = QCBA(rm_cba, QuantitativeDataFrame(pd_ds_undiscr))
qcba_clf = rm_qcba.fit()
| notebooks/qcba_extension/QCBA_benchmark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/FilippoAiraldi/RL-demos/blob/main/easy21.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="d-cNHrnbRBAC"
# # Easy21
#
# Assignment from David Silver's RL course (https://www.davidsilver.uk/wp-content/uploads/2020/03/Easy21-Johannes.pdf).
# + [markdown] id="ZMvpIKNXRmWj"
# ## 0. Definitions
#
# The game is played with a deck where
# * the deck is infinite (sampled with replacement)
# * cards are numbered 1 to 10 (no aces or pictures), uniformly distributed
# * cards are either red (probability 1/3) or black (2/3)
#
# At the start of the game, both player and dealer draw a black card. From that point onward, the player can either
# * request additional cards (hit=1)
# * stop drawing and end the episode (stick=0)
#
# The values of the player’s cards are added (black cards) or subtracted (red
# cards). The final outcome is
# * if the player’s sum exceeds 21, or becomes less than 1, then the player “goes
# bust” and loses the game (reward -1)
# * if the player sticks, the dealer starts taking turns. The dealer always
# sticks on any sum of 17 or greater, and hits otherwise. If the dealer goes
# bust, the player wins; otherwise the outcome – win (reward +1),
# lose (reward -1), or draw (reward 0) – goes to whichever side has the larger sum.
#
# ### Quantities
# Action
# * stick=0
# * hit=1
#
# Reward
# * win +1
# * draw 0
# * loss -1
#
# Observation
# * player's current sum
# * dealer's showing card (i.e., the dealer's first, face-up card)
#
#
# + id="y1PajnATRLdP"
import gym
import numpy as np
import itertools
from tqdm.notebook import tqdm
from typing import List, Tuple, Dict, Callable
# + [markdown] id="Q0ubSvebU1YB"
# ## 1. Implementation
# + id="MGukBAsIbo8U"
class BoxedMultiDiscrete(gym.spaces.MultiDiscrete):
"""
Adds a low and high parameter to the multidiscrete space to
specify the range of each dimension.
"""
# def __init__(self, nvec, low):
def __init__(self, low, high = None, nvec = None):
assert high or nvec, 'either high or nvec must be specified'
if high and not nvec: nvec = np.asarray(high) - np.asarray(low) + 1
elif nvec and not high: high = np.asarray(nvec) + np.asarray(low) - 1
super(BoxedMultiDiscrete, self).__init__(nvec)
self.high = np.asarray(high, dtype = np.int64)
self.low = np.asarray(low, dtype = np.int64)
assert self.nvec.shape == self.low.shape == self.high.shape, 'nvec, low and high must have the same number of elements'
assert (self.high - self.low + 1 - self.nvec == 0).all(), 'ranges are incompatible'
def sample(self):
return (self.np_random.random_sample(self.nvec.shape) * self.nvec).astype(self.dtype) + self.low
def contains(self, x):
if isinstance(x, (list, tuple)): x = np.array(x)
x -= self.low
return x.shape == self.shape and (0 <= x).all() and (x < self.nvec).all()
def unbias(self, value):
u = np.asarray(value) - self.low
return tuple(u) if isinstance(value, (list, tuple)) else u
def __repr__(self):
return f'BoxedMultiDiscrete(nvec = {self.nvec}, low = {self.low}, high = {self.high})'
def draw_card(np_random) -> int:
card = np_random.randint(1, 11)
color = np_random.choice([-1, 1], p = [1 / 3, 2 / 3])
return card * color
def is_bust(hand: List[int]) -> bool:
return not 1 <= sum(hand) <= 21
def score(hand: List[int]) -> int:
return 0 if is_bust(hand) else sum(hand)
def cmp(a: int, b: int) -> int:
return int(a > b) - int(a < b)
class Easy21(gym.Env):
def __init__(self) -> None:
self.action_space = gym.spaces.Discrete(2)
self.observation_space = BoxedMultiDiscrete(low = [1, 1], high = [21, 10])
self.seed()
self.reset()
def seed(self, seed = None) -> List[int]:
self.np_random, seed = gym.utils.seeding.np_random(seed)
return [seed]
def reset(self, state: Tuple[int,int] = None) -> Tuple[int, int]:
if state:
self.player = [state[0]]
self.dealer = [state[1]]
else:
self.player = [np.abs(draw_card(self.np_random))]
self.dealer = [np.abs(draw_card(self.np_random))]
return self.state
@property
def state(self) -> Tuple[int, int]:
return sum(self.player), self.dealer[0]
def step(self, action: int) -> Tuple[Tuple[int, int], int, bool, Dict]:
assert self.action_space.contains(action), f'invalid action {action}'
if action: # hit
self.player.append(draw_card(self.np_random))
done, reward = (True, -1) if is_bust(self.player) else (False, 0)
else: # stick
done = True
while sum(self.dealer) < 17:
self.dealer.append(draw_card(self.np_random))
if is_bust(self.dealer): break
reward = cmp(score(self.player), score(self.dealer))
return self.state, reward, done, {}
# + [markdown] id="IaU6vaqAistX"
# ### Utils
# + [markdown] id="bZJtrlubWf6_"
# #### Lookup
# + id="4-T02IPPiuEV"
state_action_dims = lambda env: (*env.observation_space.nvec, env.action_space.n)
def generate_episode(env, policy, a0_random = False):
st = env.reset()
at = env.action_space.sample() if a0_random else policy(st)
S, A, R = [], [], []
while True:
st_1, rt_1, done, _ = env.step(at)
S.append(st)
A.append(at)
R.append(rt_1)
if done: break
st = st_1
at = policy(st)
return S, A, R
def step_episode(env, policy, a0_random = False):
st = env.reset()
at = env.action_space.sample() if a0_random else policy(st)
while True:
st_1, rt_1, done, _ = env.step(at)
at_1 = None if done else policy(st_1)
yield st, at, rt_1, st_1, at_1, done
if done: break
st = st_1
at = at_1
def glie_eps_greedy_policy(env, state, Q, Nsa, N0):
eps = N0 / (N0 + Nsa[env.observation_space.unbias(state)].sum())
return (Q[env.observation_space.unbias(state)].argmax()
if env.np_random.random() > eps else
env.action_space.sample())
def compute_Q_rmse(env, Q1, Q2) -> float:
Nsa = np.prod(env.observation_space.nvec) * env.action_space.n
return np.sqrt(1 / Nsa * np.square(Q1 - Q2).sum())
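# The GLIE schedule inside glie_eps_greedy_policy decays exploration as
# eps = N0 / (N0 + N(s)). A quick standalone check with N0 = 100:

```python
def glie_eps(n_visits, n0=100):
    # eps = N0 / (N0 + N(s)): fully exploratory at first, decaying towards 0
    return n0 / (n0 + n_visits)

for n in (0, 100, 900):
    print(n, glie_eps(n))  # 1.0, 0.5, 0.1
```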
# + [markdown] id="2FpWgJbwWj0-"
# #### Function Approximation
# + id="a_7bZysIWjUw"
player_lower = np.asarray([1, 4, 7, 10, 13, 16])
player_upper = np.asarray([6, 9, 12, 15, 18, 21])
dealer_lower = np.asarray([1, 4, 7])
dealer_upper = np.asarray([4, 7, 10])
def get_features(state, action) -> np.ndarray:
phi = np.zeros((6, 3, 2))
player_idx = np.logical_and(player_lower <= state[0], player_upper >= state[0])
dealer_idx = np.logical_and(dealer_lower <= state[1], dealer_upper >= state[1])
# np.ix_ marks the full cross product of overlapping cuboids; plain paired
# fancy indexing pairs the active indices elementwise and misses combinations
# when both the player and the dealer dimension have two active intervals
phi[np.ix_(player_idx, dealer_idx, np.asarray([action]))] = 1
return phi
def eval_q(state, action, w) -> float:
x = get_features(state, action)
return (x * w).sum()
def eps_greedy_linfnc_policy(env, state, w, eps) -> int:
return (np.asarray([eval_q(state, a, w) for a in range(env.action_space.n)]).argmax()
if env.np_random.random() > eps else
env.action_space.sample())
def linfnc2tableQ(env, w) -> np.ndarray:
dims = state_action_dims(env)
Q = np.zeros(dims, dtype = np.float64)
for player_sum, dealer_card, action in itertools.product(*(range(d) for d in dims)):
idx = (player_sum, dealer_card, action)
state = (player_sum + 1, dealer_card + 1)
Q[idx] = eval_q(state, action, w)
return Q
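# Because the player and dealer intervals overlap at their boundaries, a single
# state-action pair can activate several cuboid features at once. A
# self-contained restatement of the coarse coding (so the snippet runs on its
# own), using np.ix_ to mark the full cross product of active intervals:

```python
import numpy as np

player_lower = np.asarray([1, 4, 7, 10, 13, 16])
player_upper = np.asarray([6, 9, 12, 15, 18, 21])
dealer_lower = np.asarray([1, 4, 7])
dealer_upper = np.asarray([4, 7, 10])

def features(state, action):
    phi = np.zeros((6, 3, 2))
    p = (player_lower <= state[0]) & (player_upper >= state[0])
    d = (dealer_lower <= state[1]) & (dealer_upper >= state[1])
    # activate every combination of overlapping player/dealer intervals
    phi[np.ix_(p, d, np.asarray([action]))] = 1
    return phi

# state (4, 4) lies in two player intervals and two dealer intervals,
# so 2 * 2 = 4 features are active
print(int(features((4, 4), 1).sum()))  # 4
```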
# + [markdown] id="OdKMI50zaOqA"
# ### Visualization
#
# + id="w7AE1wUeaQS-"
import matplotlib.pyplot as plt
def plot_values(data: np.ndarray, title: str) -> None:
fig, ax = plt.subplots(figsize = (17, 7), subplot_kw = { 'projection': '3d' })
X, Y = np.meshgrid(np.arange(data.shape[1]), np.arange(data.shape[0]))
surf = ax.plot_surface(X, Y, data, cmap = 'magma', vmin = -1, vmax = 1)
fig.colorbar(surf)
ax.view_init(60, -60)
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
ax.set_zticks(np.linspace(-1, 1, 5))
ax.set_xticklabels(np.arange(data.shape[1]) + 1)
ax.set_yticklabels(np.arange(data.shape[0]) + 1)
ax.set_xlabel('Dealer\'s card')
ax.set_ylabel('Player\'s sum')
ax.set_zlabel('Reward')
ax.set_zlim(-1, 1)
ax.set_title(title, pad = 50)
def plot_policy(data: np.ndarray, title: str) -> None:
fig, ax = plt.subplots(figsize = (17, 7))
cmap = plt.cm.get_cmap('plasma', 2)
heatmap = ax.pcolor(data, cmap = cmap)
fig.colorbar(heatmap, ticks = [0, 1])
ax.set_xticks(np.arange(data.shape[1]) + 0.5)
ax.set_yticks(np.arange(data.shape[0]) + 0.5)
ax.set_xticklabels(np.arange(data.shape[1]) + 1)
ax.set_yticklabels(np.arange(data.shape[0]) + 1)
ax.set_xlabel('Dealer\'s card')
ax.set_ylabel('Player\'s sum')
ax.set_aspect('equal', adjustable = 'box')
ax.set_title(title)
def plot_rmses(lambdas: List[int], rmses: List[List[float]], title: str) -> None:
fig, (ax1, ax2) = plt.subplots(1, 2, sharey = True, figsize = (15, 5))
fig.suptitle(title)
for lam, rmse in list(zip(lambdas, rmses)):
ax1.plot(np.arange(len(rmse)) + 1, rmse, label = r'$\lambda$ = {:.1f}'.format(lam))
ax1.set_xlabel('episode')
ax1.set_ylabel('RMSE')
ax1.legend()
ax2.plot(lambdas, [rmse[-1] for rmse in rmses], '-o')
ax2.set_xticks(lambdas)
ax2.set_xlabel(r'$\lambda$')
ax2.set_ylabel('final RMSE')
# + [markdown] id="qTq3CS45piUn"
# ## 2. Monte-Carlo Control
# + id="PzM0eTfZpoJV"
def monte_carlo_control(env: Easy21, gamma: float = 1.0, episodes: int = int(1e3)) -> np.ndarray:
N = np.zeros(state_action_dims(env), dtype = np.uint64) # state-action visit counter
Q = np.zeros_like(N, dtype = np.float64) # state-action values
policy = lambda s: glie_eps_greedy_policy(env, s, Q, N, N0 = 100) # GLIE policy
for _ in tqdm(range(episodes)):
# simulate whole episode
S, A, R = generate_episode(env, policy)
# replay backwards and update values
Gt = 0
for st, at, rt_1 in reversed(list(zip(S, A, R))):
Gt = rt_1 + gamma * Gt
sa_pair = (*env.observation_space.unbias(st), at)
N[sa_pair] += 1
alpha = 1 / N[sa_pair] # stepsize
Q[sa_pair] += alpha * (Gt - Q[sa_pair])
return Q
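# The backward pass over the episode accumulates returns as
# Gt = r_{t+1} + gamma * Gt. Checked in isolation on a toy reward sequence:

```python
def returns_backward(rewards, gamma=0.9):
    gt, out = 0.0, []
    for r in reversed(rewards):  # replay the episode backwards
        gt = r + gamma * gt
        out.append(gt)
    return list(reversed(out))   # G_t for t = 0, 1, ...

print(returns_backward([0, 0, 1]))  # approximately [0.81, 0.9, 1.0]
```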
# + id="TTn_2SmfYqTd" colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["781b84d16f6948979c5e7a2c23485502", "eedd28d66b7f49008300fd7426a0bb2d", "b12ae4d2601a45538cad8b438f94dd8b", "c9f6d721d541413586a1fc101a881afa", "788452c2d5bd4a85b1e386c58e699926", "8b67317122934d588ffe3c77854d8025", "9943ab0c6f9645e29971e7d99211cc7e", "260b24e31e2b409ea1c9575373042f90", "<KEY>", "<KEY>", "e6aa1f965cff442a8d3f7523c3316e93"]} outputId="1668f086-3fe9-4c26-9714-301e462d4783"
env = Easy21()
Qmc = monte_carlo_control(env, episodes = int(1e6)) # 7min for 1e6 episodes
# + colab={"base_uri": "https://localhost:8080/", "height": 888} id="L51xxIJ4i-1d" outputId="6448844f-bf1a-498a-9ca6-eaeadaca09d2"
plot_values(Qmc.max(axis = 2), 'Monte Carlo Control - State Value Function')
plot_policy(Qmc.argmax(axis = 2), 'Monte Carlo Control - Policy')
# + [markdown] id="09oQTnNJHqnW"
# ## 3. Sarsa($\lambda$)
# + id="9dXdyeblHqFf"
def sarsa_lambda(env: Easy21, lambdaa: float, gamma: float = 1.0, episodes: int = int(1e3), Qopt: np.ndarray = None) -> Tuple[np.ndarray, List[float]]:
N = np.zeros(state_action_dims(env), dtype = np.uint64) # state-action visit counter
Q = np.zeros_like(N, dtype = np.float64) # state-action values
policy = lambda s: glie_eps_greedy_policy(env, s, Q, N, N0 = 100) # GLIE policy
rmse = []
for _ in tqdm(range(episodes)):
E = np.zeros_like(N, dtype = np.float64) # eligibility traces
# simulate step-by-step
for st, at, rt_1, st_1, at_1, done in step_episode(env, policy):
sa_pair = (*env.observation_space.unbias(st), at)
E[sa_pair] += 1
N[sa_pair] += 1
alpha = 1 / N[sa_pair] # stepsize
if done:
delta = rt_1 - Q[sa_pair]
else:
sa_pair_1 = (*env.observation_space.unbias(st_1), at_1)
delta = rt_1 + gamma * Q[sa_pair_1] - Q[sa_pair] # TD error
# update state-actions values and decay traces
Q += alpha * delta * E
E *= gamma * lambdaa
if Qopt is not None:
rmse.append(compute_Q_rmse(env, Q, Qopt))
return Q, rmse
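# The accumulating eligibility trace (E[s,a] += 1 on a visit, E *= gamma*lambda
# after every step) weights recent visits exponentially more. In isolation:

```python
def trace_weight(visit_steps, total_steps, gamma=1.0, lam=0.5):
    """Final trace value for one state-action pair visited at visit_steps."""
    e = 0.0
    for t in range(total_steps):
        if t in visit_steps:
            e += 1.0
        e *= gamma * lam  # decay applied after every step, as above
    return e

# a single visit at step 0 of a 3-step episode decays to (gamma*lam)**3
print(trace_weight({0}, 3))  # 0.125
```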
# + id="7yVSvuZlMs6-"
env = Easy21()
lambdas = np.arange(0, 1.1, 0.2)
Qs, rmses = list(zip(*[sarsa_lambda(env, lam, episodes = int(1e3), Qopt = Qmc) for lam in lambdas]))
# + colab={"base_uri": "https://localhost:8080/", "height": 372} id="LDG0Gd6YjCwM" outputId="2b04c25e-5259-46da-c945-41e80a4e68b0"
plot_rmses(lambdas, rmses, r'MC vs Sarsa($\lambda$) RMSE')
# + [markdown] id="5oMeQWL8kUYI"
# ## 4. Sarsa($\lambda$) with Linear Function Approximation
# Algorithm @ Sutton & Barto 12.7
# + id="o_Zs1CuykTyU"
def sarsa_lambda_linfnc_approx(env: Easy21, lambdaa: float, gamma: float = 1.0, episodes: int = int(1e3), Qopt: np.ndarray = None) -> Tuple[np.ndarray, np.ndarray, List[float]]:
eps = 0.05 # exploration rate
alpha = 0.01 # stepsize
w = np.zeros((6, 3, 2), dtype = np.float64) # linear function weights
policy = lambda s: eps_greedy_linfnc_policy(env, s, w, eps) # eps-greedy policy
rmse = []
for _ in tqdm(range(episodes)):
E = np.zeros_like(w) # eligibility traces
# simulate step-by-step
for st, at, rt_1, st_1, at_1, done in step_episode(env, policy):
E += get_features(st, at)
delta = rt_1 - eval_q(st, at, w) # TD error
if not done:
delta += gamma * eval_q(st_1, at_1, w)
# update weights and decay traces
w += alpha * delta * E
E *= gamma * lambdaa
if Qopt is not None:
rmse.append(compute_Q_rmse(env, linfnc2tableQ(env, w), Qopt)) # not working
return w, linfnc2tableQ(env, w), rmse
# + id="xjXSqe47e7Iy"
env = Easy21()
lambdas = np.arange(0, 1.1, 0.2)
ws, Qs, rmses = list(zip(*[sarsa_lambda_linfnc_approx(env, lam, episodes = int(1e3), Qopt = Qmc) for lam in lambdas]))
# + colab={"base_uri": "https://localhost:8080/", "height": 372} id="qYFbCLZCXVUV" outputId="6d9edc8d-570f-4798-b3da-e9b94e554022"
plot_rmses(lambdas, rmses, r'MC vs Sarsa($\lambda$) (linfnc) RMSE')
| easy21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Theory
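# In brief: k-nearest neighbours predicts the label of a query point by
# majority vote among the k training points closest to it (Euclidean distance
# here). A tiny worked sketch, independent of the classes defined below:

```python
from collections import Counter

def knn_predict(points, labels, query, k=3):
    # indices of training points sorted by squared Euclidean distance to query
    order = sorted(
        range(len(points)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], query)),
    )
    # majority vote among the k closest
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

print(knn_predict([(0, 0), (0, 1), (5, 5), (6, 5)], ["a", "a", "b", "b"], (1, 0)))  # a
```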
# # Implementation
# ## Initialization Cell
# +
from sklearn import datasets
iris_ds = datasets.load_iris()
def groupby(func,items):
d = {}
for itm in items:
k = func(itm)
if k in d:
d[k].append(itm)
else:
d[k] = [itm]
return d
# A nifty little function, mirrors Clojure's frequencies function
def freqs(items):
"Returns a dictionary of the form {item : items_frequency}"
d = {}
for x in items:
d[x] = (d[x] + 1 if x in d else 1)
return d
# -
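# A quick check of the two helpers above (restated here so the snippet runs on
# its own):

```python
def groupby(func, items):
    d = {}
    for itm in items:
        d.setdefault(func(itm), []).append(itm)
    return d

def freqs(items):
    d = {}
    for x in items:
        d[x] = d.get(x, 0) + 1
    return d

print(groupby(len, ["a", "bb", "cc"]))  # {1: ['a'], 2: ['bb', 'cc']}
print(freqs("abracadabra"))             # {'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1}
```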
# ## Utils
# +
import numpy as np
def euclidean_dist(a,b):
t = a-b
return np.dot(t,t.T)**0.5
class fix_size_list:
#this class maintains a list of objects, represented as dicts, which
# contain a distance attribute. The list will never grow beyond a certain
# size, replacing higher distance members with lower distance additions
def __init__(self,max_size):
self._n = max_size
self._list = []
self._max_dist = float("inf")
def add(self, item):
self._list.append(item)
if len(self._list) > self._n:
#I don't care about efficiency, just get it done
self._list = sorted(self._list, key=lambda x : x['distance'])
self._list.pop(-1)
self._max_dist = self._list[-1]['distance']
def threshold(self):
return self._max_dist
def as_list(self):
return self._list
# -
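# The fix_size_list above re-sorts the whole list on every overflow; the same
# streaming "keep the k smallest distances" behaviour can be had in O(n log k)
# with a max-heap. An alternative sketch, not a drop-in replacement:

```python
import heapq

def k_smallest(distances, k):
    heap = []  # max-heap of the k smallest values, via negation
    for d in distances:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif d < -heap[0]:
            heapq.heapreplace(heap, -d)
    return sorted(-x for x in heap)

print(k_smallest([5.0, 1.0, 4.0, 2.0, 3.0], 3))  # [1.0, 2.0, 3.0]
```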
# ## Naive KNN
# +
class naive_knn:
def __init__(self, k, dataset, weighted_dist=None):
self._k = k
self._data = dataset
# fall back to Euclidean distance when no metric is supplied
self._dist_metric = weighted_dist or euclidean_dist
def search(self, item):
results = fix_size_list(self._k)
for x in self._data:
d = self._dist_metric(x,item)
if d < results.threshold():
results.add({'item' : x, 'distance' : d})
return [x['item'] for x in results.as_list()]
| DataSci/from_scratch/KNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (machine-learning-pytorch)
# language: python
# name: pycharm-cf2a9c60
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Text Classification with the TorchText Library
#
# * Access to the raw data as an iterator
# * Build data processing pipeline to convert the raw text strings into torch.Tensor that can be used to train the model
# * Shuffle and iterate the data with torch.utils.data.DataLoader
#
# We use the AG_NEWS news dataset for topic classification.
# + pycharm={"name": "#%%\n"}
import torch
from torch.utils.data import DataLoader
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = get_tokenizer('basic_english')
print('torch:', torch.__version__)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### 1 Preparing the Dataset
# + pycharm={"name": "#%%\n"}
# Preprocess with build_vocab_from_iterator
from torchtext.vocab import build_vocab_from_iterator
train_iter = AG_NEWS(split='train')
def yield_tokens(data_iter):
for _, text in data_iter:
yield tokenizer(text)
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
VOCAB_SIZE = len(vocab)
print(f'VOCAB_SIZE: {VOCAB_SIZE}')
# + pycharm={"name": "#%%\n"}
# Prepare data processing pipelines
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
def collate_batch(batch):
label_list, text_list, offsets = [], [], [0]
for (_label, _text) in batch:
label_list.append(label_pipeline(_label))
processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
text_list.append(processed_text)
offsets.append(processed_text.size(0))
label_list = torch.tensor(label_list, dtype=torch.int64)
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text_list = torch.cat(text_list)
return label_list.to(device), text_list.to(device), offsets.to(device)
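# nn.EmbeddingBag takes one flat tensor of token ids plus offsets marking
# where each sequence starts -- i.e. the cumulative lengths shifted by one,
# exactly what collate_batch builds. The same computation without torch:

```python
def make_offsets(token_lists):
    # offsets[i] = index where sequence i starts in the concatenated list
    flat, offsets = [], [0]
    for toks in token_lists:
        flat.extend(toks)
        offsets.append(offsets[-1] + len(toks))
    return flat, offsets[:-1]

flat, offs = make_offsets([[4, 7, 9], [2, 2], [8, 1, 3, 5]])
print(offs)  # [0, 3, 5]
```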
# + pycharm={"name": "#%%\n"}
# Split into training and validation sets
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset
# Hyperparameters
EPOCHS = 10 # epoch
LR = 5 # learning rate
BATCH_SIZE = 64 # batch size for training
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE,shuffle=True, collate_fn=collate_batch)
# + [markdown] pycharm={"name": "#%% md\n"}
# ### 2 Training
# 1. Define the model: an nn.EmbeddingBag layer plus a linear layer for the classification purpose.
# 2. Write the training and evaluation code
# 3. Start training
# + pycharm={"name": "#%%\n"}
from torch import nn
class TextClassificationModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super(TextClassificationModel, self).__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
# + pycharm={"name": "#%%\n"}
EMBEDDING_SIZE = 64
NUM_CLASS = 4
model = TextClassificationModel(VOCAB_SIZE, EMBEDDING_SIZE, NUM_CLASS).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
print(f'NUM_CLASS:{NUM_CLASS}, VOCAB_SIZE:{VOCAB_SIZE}, EMBEDDING_SIZE:{EMBEDDING_SIZE}')
# + pycharm={"name": "#%%\n"}
import time
def train(dataloader):
model.train()
total_acc, total_count = 0, 0
log_interval = 500
start_time = time.time()
for idx, (label, text, offsets) in enumerate(dataloader):
optimizer.zero_grad()
predicted_label = model(text, offsets)
loss = criterion(predicted_label, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
optimizer.step()
total_acc += (predicted_label.argmax(1) == label).sum().item()
total_count += label.size(0)
if idx % log_interval == 0 and idx > 0:
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches '
'| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
total_acc/total_count))
total_acc, total_count = 0, 0
start_time = time.time()
def evaluate(dataloader):
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for idx, (label, text, offsets) in enumerate(dataloader):
predicted_label = model(text, offsets)
loss = criterion(predicted_label, label)
total_acc += (predicted_label.argmax(1) == label).sum().item()
total_count += label.size(0)
return total_acc/total_count
# + pycharm={"name": "#%%\n"}
# Run the training loop
for epoch in range(1, EPOCHS + 1):
epoch_start_time = time.time()
train(train_dataloader)
accu_val = evaluate(valid_dataloader)
if total_accu is not None and total_accu > accu_val:
scheduler.step()
else:
total_accu = accu_val
print('-' * 59)
print('| end of epoch {:3d} | time: {:5.2f}s | '
'valid accuracy {:8.3f} '.format(epoch,time.time() - epoch_start_time,accu_val))
# + pycharm={"name": "#%%\n"}
# Evaluate on the test set
print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))
# + pycharm={"name": "#%%\n"}
# Try the model on an arbitrary piece of text
ag_news_label = {1: "World",
2: "Sports",
3: "Business",
4: "Sci/Tec"}
def predict(text, text_pipeline):
with torch.no_grad():
text = torch.tensor(text_pipeline(text))
output = model(text, torch.tensor([0]))
return output.argmax(1).item() + 1
ex_text_str = "Exports of mechanical and electrical products jumped 23.8 percent year-on-year to 7.98 trillion yuan in the first eight months of this year, " \
"accounting for 58.8 percent of the nation's total export value," \
" according to the General Administration of Customs."
model = model.to("cpu")
print("This is a %s news" %ag_news_label[predict(ex_text_str, text_pipeline)])
# Source: part05-natural-language-processing/5_2_text_classification.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Augment rare fine-grained classes
import os
import pandas as pd
import numpy as np
from PIL import Image
import cv2
import scipy.stats as scs
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from glob import glob
# +
pad = 5
resize_tile_shape = (128,128)
bb_X_lndex = [2, 4, 6, 8]
bb_y_lndex = [3, 5, 7, 9]
# -
train = pd.read_csv('../data/train.csv')
test = pd.read_csv('../data/test.csv')
print (train.columns)
def prepere_one_id(im_file, labels_csv, index):
im = Image.open(im_file)
bb_x = labels_csv.iloc[index, bb_X_lndex].values.astype(np.int32)
bb_y = labels_csv.iloc[index, bb_y_lndex].values.astype(np.int32)
x_min = np.min(bb_x) - pad
y_min = np.min(bb_y) - pad
x_max = np.max(bb_x) + pad
y_max = np.max(bb_y) + pad
tile = im.crop([x_min, y_min, x_max, y_max])
tile_resized = tile.resize(resize_tile_shape)
im = np.array(tile_resized)[:,:,0:3]
return im
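The crop-window arithmetic in the function above boils down to a padded axis-aligned bounding box around the four annotated corners; a plain-Python sketch with made-up corner coordinates:

```python
pad = 5
corner_x = [100, 140, 138, 102]   # hypothetical corner x coordinates
corner_y = [50, 52, 90, 88]       # hypothetical corner y coordinates

# Axis-aligned bounding box of the corners, grown by `pad` pixels on each side
crop_box = (min(corner_x) - pad, min(corner_y) - pad,
            max(corner_x) + pad, max(corner_y) + pad)
print(crop_box)  # (95, 45, 145, 95)
```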
os.makedirs('../data/images/train', exist_ok=True)
os.makedirs('../data/images/test', exist_ok=True)
for i in range(len(train)):
try:
im = prepere_one_id('../data/training imagery/'+str(train.image_id.iloc[i])+'.tiff',train,i)
except:
try:
im = prepere_one_id('../data/training imagery/'+str(train.image_id.iloc[i])+'.tif',train,i)
except:
im = prepere_one_id('../data/training imagery/'+str(train.image_id.iloc[i])+'.jpg',train,i)
for col in ['enclosed_cab', 'spare_wheel','wrecked', 'flatbed', 'ladder', 'soft_shell_box','harnessed_to_a_cart']:
imdir = '../data/images/train/{}'.format(col)
os.makedirs(os.path.join(imdir, '0'), exist_ok=True)
os.makedirs(os.path.join(imdir, '1'), exist_ok=True)
if train[col].iloc[i] == 1:
plt.imsave(os.path.join(imdir,'1',str(train.tag_id.iloc[i])+'.jpg'),im)
else:
plt.imsave(os.path.join(imdir,'0',str(train.tag_id.iloc[i])+'.jpg'),im)
if np.mod(i,1000) == 0 :
print (i)
datagen = ImageDataGenerator(rescale=1./255,featurewise_center=True,
rotation_range=180,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# ?datagen.flow_from_directory
for col in ['enclosed_cab']:
for im in glob('../data/images/train/{}/1/*'.format(col)):
img = load_img(im) # this is a PIL image
x = img_to_array(img) # a NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape) # add a batch axis: (1, height, width, 3)
i = 0
for batch in datagen.flow(x,save_to_dir='../data/images/train/{}/1/'.format(col),save_format='jpeg',batch_size=1):
i += 1
if i > 20:
break # otherwise the generator would loop indefinitely
for col in ['ladder', 'soft_shell_box','harnessed_to_a_cart']:
for im in glob('../data/images/train/{}/1/*'.format(col)):
img = load_img(im) # this is a PIL image
x = img_to_array(img) # a NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape) # add a batch axis: (1, height, width, 3)
i = 0
for batch in datagen.flow(x,save_to_dir='../data/images/train/{}/1/'.format(col),save_format='jpeg',batch_size=1):
i += 1
if i > 100:
break # otherwise the generator would loop indefinitely
for col in ['wrecked']:
for im in glob('../data/images/train/{}/1/*'.format(col)):
img = load_img(im) # this is a PIL image
x = img_to_array(img) # a NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape) # add a batch axis: (1, height, width, 3)
i = 0
for batch in datagen.flow(x,save_to_dir='../data/images/train/{}/1/'.format(col),save_format='jpeg',batch_size=1):
i += 1
if i > 5:
break
for col in ['spare_wheel','flatbed']:
for im in glob('../data/images/train/{}/1/*'.format(col)):
img = load_img(im) # this is a PIL image
x = img_to_array(img) # a NumPy array with shape (height, width, 3)
x = x.reshape((1,) + x.shape) # add a batch axis: (1, height, width, 3)
i = 0
for batch in datagen.flow(x,save_to_dir='../data/images/train/{}/1/'.format(col),save_format='jpeg',batch_size=1):
i += 1
if i > 10:
break
for i in range(len(test)):
imdir = '../data/images/test/'
try:
im = prepere_one_id('../data/test imagery/'+str(test.image_id.iloc[i])+'.tiff',test,i)
except:
try:
im = prepere_one_id('../data/test imagery/'+str(test.image_id.iloc[i])+'.tif',test,i)
except:
im = prepere_one_id('../data/test imagery/'+str(test.image_id.iloc[i])+'.jpg',test,i)
plt.imsave(os.path.join(imdir,str(test.tag_id.iloc[i])+'.jpg'),im)
if np.mod(i,1000) == 0 :
print (i)
# Source: MAFAT Image Classification/notebooks/.ipynb_checkpoints/data_generator-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
nz = 10000
cell_size = 10.
sat = 0.25
bot = 10.
h = bot + sat * cell_size
top = bot + cell_size
deltaz = top - bot
z = np.linspace(bot, h, nz+1)
ss = 1.
ss_fd = 0.
for idx in range(nz):
z0, z1 = z[idx], z[idx + 1]
dz = z1 - z0
rho = ss * dz
psi = h - z0 - 0.5 * dz
ss_fd += rho * psi
print(ss_fd)
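The loop above is a midpoint-rule approximation of the integral of ss * (h - z) from bot to h, which has the closed form ss * (h - bot)**2 / 2. A quick stand-alone check with the same parameters:

```python
# Midpoint-rule sum versus the analytic value ss * (h - bot)**2 / 2
ss, bot, h = 1.0, 10.0, 12.5
nz = 10000
dz = (h - bot) / nz

ss_fd = sum(ss * dz * (h - (bot + (i + 0.5) * dz)) for i in range(nz))
analytic = ss * (h - bot) ** 2 / 2

print(ss_fd, analytic)  # both ≈ 3.125
```

The midpoint rule is exact for a linear integrand, so the two values agree to rounding error.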
psi_avg = (h - bot - deltaz * sat / 2)
psi_avg
ss * deltaz * sat * (h - bot - sat * (top - bot) / 2)
ss * deltaz * sat * (h - bot)
def quad_sat(x, tp, bt, omega=0.1):
if isinstance(x, float):
x = np.array([x], dtype=float)
elif isinstance(x, (list, tuple)):
x = np.array(x, dtype=float)
y = np.zeros(x.shape, dtype=x.dtype)
dz = tp - bt
br = (x - bt) / dz
br[x < bt] = 0.
br[x > tp] = 1.
if omega == 0:
y[:] = br
else:
av = 1. / (1. - omega)
idx = br < omega
y[idx] = 0.5 * av * br[idx]**2. / omega
idx = (br >= omega) & (br < 1. - omega)
y[idx] = av * br[idx] + 0.5 * (1. - av)
idx = (br >= 1. - omega)
y[idx] = 1. - ((0.5 * av * (1. - br[idx])**2) / omega)
return y
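A scalar sketch of the same quadratic smoothing (assuming omega > 0), handy for checking that the three branches join continuously at br = omega and br = 1 - omega:

```python
def quad_sat_scalar(x, tp, bt, omega=0.1):
    # Scalar version of quad_sat above; assumes omega > 0
    br = min(max((x - bt) / (tp - bt), 0.0), 1.0)
    av = 1.0 / (1.0 - omega)
    if br < omega:
        return 0.5 * av * br ** 2 / omega
    if br < 1.0 - omega:
        return av * br + 0.5 * (1.0 - av)
    return 1.0 - 0.5 * av * (1.0 - br) ** 2 / omega

tp, bt = 20.0, 10.0
lo = bt + 0.1 * (tp - bt)   # lower smoothing breakpoint (br = omega)
hi = bt + 0.9 * (tp - bt)   # upper smoothing breakpoint (br = 1 - omega)
# The two values in each printed pair agree, so the branches join continuously
print(quad_sat_scalar(lo - 1e-9, tp, bt), quad_sat_scalar(lo + 1e-9, tp, bt))
print(quad_sat_scalar(hi - 1e-9, tp, bt), quad_sat_scalar(hi + 1e-9, tp, bt))
```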
def quad_sat_derv_fd(x, tp, bt, omega=0.1, power=1):
dx = 1e-6
derv = np.zeros(x.shape, dtype=x.dtype)
for idx, xx in enumerate(x):
xx0, xx1 = xx - dx, xx + dx
y0 = quad_sat(xx0, tp, bt, omega=omega)
y1 = quad_sat(xx1, tp, bt, omega=omega)
derv[idx] = (y1**power - y0**power) / (2 * dx)
return derv
harr = np.linspace(bot - 1., top + 1., nz)
omega = 1e-6
sat_lin = quad_sat(harr, top, bot, omega=0.)
sat_lin.mean()
sat = quad_sat(harr, top, bot, omega=omega)
sat
plt.plot(harr, sat_lin)
plt.plot(harr, sat)
sat_derv = quad_sat_derv_fd(harr, top, bot, power=1, omega=omega)
plt.plot(harr, sat_derv)
plt.plot(harr, quad_sat_derv_fd(harr, top, bot, power=2, omega=omega))
plt.plot(harr, 2 * sat * sat_derv)
# Source: doc/SuppTechInfo/python/STO-SpecificStorage-Derivatives.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] _cell_guid="e6335977-1dcb-a5bc-4856-184dd7bce3f9" _uuid="5443ea8bf683530dd1938aa664487227c097d291"
# ## Trying out a linear model:
#
# Author: <NAME> ([@apapiu](https://twitter.com/apapiu), [GitHub](https://github.com/apapiu))
#
# If you use parts of this notebook in your own scripts, please give some sort of credit (for example link back to this). Thanks!
#
#
# There have been a few [great](https://www.kaggle.com/comartel/house-prices-advanced-regression-techniques/house-price-xgboost-starter/run/348739) [scripts](https://www.kaggle.com/zoupet/house-prices-advanced-regression-techniques/xgboost-10-kfolds-with-scikit-learn/run/357561) on [xgboost](https://www.kaggle.com/tadepalli/house-prices-advanced-regression-techniques/xgboost-with-n-trees-autostop-0-12638/run/353049) already so I figured I'd try something simpler: a regularized linear regression model. Surprisingly it does really well with very little feature engineering. The key point is to log-transform the numeric variables since most of them are skewed.
# + _cell_guid="0d706811-b70c-aeab-a78b-3c7abd9978d3" _uuid="ec0222f152b2d4a7aaa7d5ce4c2738a4f79d3f6a"
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from scipy.stats import skew
from scipy.stats.stats import pearsonr
# #%config InlineBackend.figure_format = 'retina'
#set 'png' here when working on notebook
# %matplotlib inline
# + _cell_guid="603292c1-44b7-d72a-5468-e6782f311603" _uuid="78c3afef68efacd36980e176e2d6db9c0c2a49fc"
train = pd.read_csv("input/train.csv")
test = pd.read_csv("input/test.csv")
# + _cell_guid="d646bb1b-56c4-9b45-d5d4-27095f61b1c0" _uuid="4b693d4ad8964e3e3f2902fce3b735d5582e2eba"
train.head()
# -
# + _cell_guid="cb2d88d7-7f76-4b04-d28b-d2c315ae4346" _uuid="6b547a33a0ff87a49952d2d3c20335b585a78d27"
all_data = pd.concat((train.loc[:,'MSSubClass':'SaleCondition'],
test.loc[:,'MSSubClass':'SaleCondition']))
# -
all_data.tail()
# + [markdown] _cell_guid="29fa13df-61e8-b0c2-b3a7-ea92bffd4396" _uuid="a6ac58d0983ce97d07e7fc2474c3c57fd7e3b77e"
# ### Data preprocessing:
# We're not going to do anything fancy here:
#
# - First I'll transform the skewed numeric features by taking log(feature + 1) - this will make the features more normal
# - Create Dummy variables for the categorical features
# - Replace the numeric missing values (NaN's) with the mean of their respective columns
# + _cell_guid="9b5a3e5b-f683-3fd2-7269-4068975bbe42" _uuid="38136de276cfba51bea4be60e4ae9744865941f5"
matplotlib.rcParams['figure.figsize'] = (12.0, 6.0)
prices = pd.DataFrame({"price":train["SalePrice"],
"log(price + 1)":np.log1p(train["SalePrice"])})
prices.hist()
# + _cell_guid="4ed54771-95c4-00e7-b2cd-569d17862878" _uuid="cd318038367a042ce514ba2a21416e47391258a5"
#log transform the target:
train["SalePrice"] = np.log1p(train["SalePrice"])
#log transform skewed numeric features:
numeric_feats = all_data.dtypes[all_data.dtypes != "object"].index
skewed_feats = train[numeric_feats].apply(lambda x: skew(x.dropna()))
#compute skewness
skewed_feats = skewed_feats[skewed_feats > 0.75]
skewed_feats = skewed_feats.index
all_data[skewed_feats] = np.log1p(all_data[skewed_feats])
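`log1p` and `expm1` form an exact inverse pair, which is why predictions made on the log scale are back-transformed with `expm1` later in the notebook. A stand-alone sketch with the stdlib equivalents and a made-up price:

```python
import math

# log(x + 1) compresses the right tail of a skewed feature;
# expm1 inverts it exactly, recovering the original scale.
price = 180_000.0                 # hypothetical sale price
log_price = math.log1p(price)
recovered = math.expm1(log_price)

print(log_price)   # ~12.1
print(recovered)   # ~180000.0
```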
# + _cell_guid="3854ab12-a4f3-4c88-fe6e-1fee08e18af2" _uuid="04ba1d59633c4cc1b06319722a6136e1f33d8803"
all_data = pd.get_dummies(all_data)
# + _cell_guid="5d417300-0deb-3353-cabf-95f75af62678" _uuid="a72038eb6c676f794020bc495950201f8aa984d6"
#filling NA's with the mean of the column:
all_data = all_data.fillna(all_data.mean())
# + _cell_guid="fe687685-cdac-0a89-4d71-af2d11d87a81" _uuid="187093f23f6ed080e4ea6c6761b80900960a2d41"
#creating matrices for sklearn:
X_train = all_data[:train.shape[0]]
X_test = all_data[train.shape[0]:]
y = train.SalePrice
# + [markdown] _cell_guid="cc4e3014-23b7-2971-ddb0-f67b03f83558" _uuid="96c5c3b2a2187380b9974bc3b53e9a6481753e86"
# ### Models
#
# Now we are going to use regularized linear regression models from the scikit-learn module. I'm going to try both l_1 (Lasso) and l_2 (Ridge) regularization. I'll also define a function that returns the cross-validation rmse error so we can evaluate our models and pick the best tuning parameters.
# + _cell_guid="82886739-eee6-5d7a-4be9-e1fe6ac059f1" _uuid="7452d3a3a205f44fc6b2efbbed98544d09ea1b4a"
from sklearn.linear_model import Ridge, RidgeCV, ElasticNet, LassoCV, LassoLarsCV
from sklearn.model_selection import cross_val_score
def rmse_cv(model):
rmse= np.sqrt(-cross_val_score(model, X_train, y,
scoring="neg_mean_squared_error", cv = 5))
return(rmse)
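`rmse_cv` delegates the metric to scikit-learn, but the quantity it reports per fold is just the root of the mean squared error; a minimal stand-alone version:

```python
def rmse(y_true, y_pred):
    # Root mean squared error: sqrt of the average squared residual
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

print(rmse([3.0, 5.0, 2.0], [2.0, 5.0, 4.0]))  # sqrt((1 + 0 + 4) / 3) ≈ 1.29
```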
# + _cell_guid="436ce6e8-917f-8c88-3d7b-245e82a1619f" _uuid="8a827cd737730c53f9bb9fd6f08b89a2f19a7ea0"
model_ridge = Ridge()
# + [markdown] _cell_guid="69ff958c-dbbb-4750-3fb0-d0ac17ff6363" _uuid="a819fa9643b9bef742d99f178bbaf043eec7885e"
# The main tuning parameter for the Ridge model is alpha - a regularization parameter that measures how flexible our model is. The higher the regularization the less prone our model will be to overfit. However it will also lose flexibility and might not capture all of the signal in the data.
# + _cell_guid="f6b86166-f581-6e05-5274-d3d3516ebaf3" _uuid="a6ce3827adb41281f4f0e7471469b427c0eb7e1c"
alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75]
cv_ridge = [rmse_cv(Ridge(alpha = alpha)).mean()
for alpha in alphas]
# + _cell_guid="f8cf53ba-8441-9233-b7f5-a851d270b770" _uuid="2cfd76364b8aa6f750cbb84b7e40635d0a1ad5b8"
cv_ridge = pd.Series(cv_ridge, index = alphas)
cv_ridge.plot(title = "Validation - Just Do It")
plt.xlabel("alpha")
plt.ylabel("rmse")
# + [markdown] _cell_guid="37486402-4a48-912f-84ee-a3611334b133" _uuid="b77fd4340d879e12545f601f99283b061f33ad53"
# Note the U-ish shaped curve above. When alpha is too large the regularization is too strong and the model cannot capture all the complexities in the data. If however we let the model be too flexible (alpha small) the model begins to overfit. A value of alpha = 10 is about right based on the plot above.
# + _cell_guid="d42c18c9-ee70-929f-ce63-aac7f77796cc" _uuid="08c0a407c35313689e2a9fac1864babd57b0ffe3"
cv_ridge.min()
# + [markdown] _cell_guid="863fb699-7bcd-3748-3dbb-1c9b18afee9b" _uuid="bd090c121b2c20ce86c8d2cd63af7f8f76aedcc5"
# So for the Ridge regression we get a rmsle of about 0.127
#
# Let's try out the Lasso model. We will do a slightly different approach here and use the built-in Lasso CV to figure out the best alpha for us. For some reason the alphas in Lasso CV are really the inverse of the alphas in Ridge.
# + _cell_guid="8204520c-a595-2ad2-4685-0b84cc662b84" _uuid="7c00e7ef0f94f75308db8ca8427b4a74edaab307"
model_lasso = LassoCV(alphas = [1, 0.1, 0.001, 0.0005]).fit(X_train, y)
# + _cell_guid="e78e6126-4de0-08ad-250b-46a3f0f48de0" _uuid="4c3a1ee5ffe2c74a66064e2737a61db2de8b0c6e"
rmse_cv(model_lasso).mean()
# + [markdown] _cell_guid="abc5f43e-1c38-4c1e-cb70-a95c8d9be8de" _uuid="8003045eeeeceae07fc66761b81bea8d89a4d41f"
# Nice! The lasso performs even better so we'll just use this one to predict on the test set. Another neat thing about the Lasso is that it does feature selection for you - setting coefficients of features it deems unimportant to zero. Let's take a look at the coefficients:
# + _cell_guid="c7be87ca-412a-cb19-1524-cd94cf698d44" _uuid="da0f6e03d46fff57e3027cf5e2e61de7a0511ec9"
coef = pd.Series(model_lasso.coef_, index = X_train.columns)
# + _cell_guid="14be641e-bbe0-824d-d90f-f47698c8b5c5" _uuid="aca8c90d71343c1c9429c97152a35d02fb88723f"
print("Lasso picked " + str(sum(coef != 0)) +
" variables and eliminated the other " +
str(sum(coef == 0)) + " variables")
# + [markdown] _cell_guid="ca153134-b109-1afc-e066-44273f65d44c" _uuid="c24910e0a4bb31c21f4d34c30edd06a01a553e72"
# Good job Lasso. One thing to note here however is that the features selected are not necessarily the "correct" ones - especially since there are a lot of collinear features in this dataset. One idea to try here is to run Lasso a few times on bootstrapped samples and see how stable the feature selection is.
# + [markdown] _cell_guid="632e23a8-948f-c692-f5e9-1aa8a75f21d5" _uuid="a73c9efce95ffe8e12dbee6c29dfbc5ef950fa23"
# We can also take a look directly at what the most important coefficients are:
# + _cell_guid="3efc02df-c877-b1fe-1807-5dd93c896c63" _uuid="ddef41a217ea6fd1c548854c62b20fae297530d1"
imp_coef = pd.concat([coef.sort_values().head(10),
coef.sort_values().tail(10)])
# + _cell_guid="87317789-6e7e-d57f-0b54-d8ba0ee26abf" _uuid="6e57be73ed03439c5211529c161300b2e8ba938d"
matplotlib.rcParams['figure.figsize'] = (8.0, 10.0)
imp_coef.plot(kind = "barh")
plt.title("Coefficients in the Lasso Model")
# + [markdown] _cell_guid="f6e6f820-1ec6-4a69-c309-9992b80652da" _uuid="177cb743aa1c1f2f874af578285f6e97b887762d"
# The most important positive feature is `GrLivArea` - the above-ground living area in square feet. This definitely makes sense. Then a few other location and quality features contributed positively. Some of the negative features make less sense and would be worth looking into more - it seems like they might come from unbalanced categorical variables.
#
# Also note that unlike the feature importance you'd get from a random forest these are _actual_ coefficients in your model - so you can say precisely why the predicted price is what it is. The only issue here is that we log_transformed both the target and the numeric features so the actual magnitudes are a bit hard to interpret.
# + _cell_guid="cdeaa3d3-f9ad-2e06-1339-61b4425a43f8" _uuid="1a48e85cf2d1e37fdfdaecc1ed22c8869927965c"
#let's look at the residuals as well:
matplotlib.rcParams['figure.figsize'] = (6.0, 6.0)
preds = pd.DataFrame({"preds":model_lasso.predict(X_train), "true":y})
preds["residuals"] = preds["true"] - preds["preds"]
preds.plot(x = "preds", y = "residuals",kind = "scatter")
# + [markdown] _cell_guid="4780532e-2815-e355-9a96-fb1c598f6984" _uuid="253e6176c334296732fdefa4506686f9899bb6d2"
# The residual plot looks pretty good. To wrap it up let's predict on the test set and submit on the leaderboard:
# + [markdown] _cell_guid="f8da43e0-fd51-a4c9-d9b2-364d9911699a" _uuid="be0a8f61420a0aa8c0ee89e61aa1563982575207"
# ### Adding an xgboost model:
# + [markdown] _cell_guid="ae9bcc1a-5106-0909-ecf7-0abc2d2ca386" _uuid="69dc56f6ffd1b84b768702d4c1f52de837509f01"
# Let's add an xgboost model to our linear model to see if we can improve our score:
# + _cell_guid="654e4fcf-a049-921a-4783-3c6d6dcca673" _uuid="db84bc0911e5a4f2d802a023d401e3c56bde884e"
import xgboost as xgb
# + _cell_guid="be53a9f8-d88b-05fb-734d-3a1388d39864" _uuid="ec40e0ac2c20e74ad9ad37c94b5af16d8f6d131f"
dtrain = xgb.DMatrix(X_train, label = y)
dtest = xgb.DMatrix(X_test)
params = {"max_depth":2, "eta":0.1}
model = xgb.cv(params, dtrain, num_boost_round=500, early_stopping_rounds=100)
# + _cell_guid="c9d5bfe5-a0a8-0b10-d54a-1dfbb6123bd0" _uuid="befefcd37a445b643bd7660a333261e22633a82a"
model.loc[30:,["test-rmse-mean", "train-rmse-mean"]].plot()
# + _cell_guid="00b8a271-0f93-c757-7e33-516c3a297628" _uuid="507afe095d0984d8d288eecc38012953ed1f59ac"
model_xgb = xgb.XGBRegressor(n_estimators=360, max_depth=2, learning_rate=0.1)
#the params were tuned using xgb.cv
model_xgb.fit(X_train, y)
# + _cell_guid="2b87a004-3a9a-77cc-4b5b-6540f870c028" _uuid="552e45dcdeae2f51c7a38d5e0483771c571f0d85"
xgb_preds = np.expm1(model_xgb.predict(X_test))
lasso_preds = np.expm1(model_lasso.predict(X_test))
# + _cell_guid="1c9c640b-9e6c-a350-0691-7f6a7dc19c41" _uuid="0fef7b6ca7c39064d8494ecacf79e438996ad7f3"
predictions = pd.DataFrame({"xgb":xgb_preds, "lasso":lasso_preds})
predictions.plot(x = "xgb", y = "lasso", kind = "scatter")
# + [markdown] _cell_guid="74c9bdd2-afbb-5fdf-a776-a8eebfa30d12" _uuid="9622270f05e47810cf773794da6077d76b27a0b9"
# Many times it makes sense to take a weighted average of uncorrelated results - this usually improves the score although in this case it doesn't help that much.
# + _cell_guid="623ed0fe-0150-5226-db27-90a321061d52" _uuid="f826941d032765a55f742a8ad058c82412f01a4c"
preds = 0.7*lasso_preds + 0.3*xgb_preds
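The blend above is a plain convex combination applied elementwise; a sketch with made-up predictions (the 0.7 / 0.3 weights mirror the line above and must sum to 1):

```python
# Hypothetical per-house predictions from two models
lasso_like = [200_000.0, 150_000.0]
xgb_like = [210_000.0, 140_000.0]

# Elementwise weighted average of the two prediction vectors
blended = [0.7 * a + 0.3 * b for a, b in zip(lasso_like, xgb_like)]
print(blended)  # ≈ [203000.0, 147000.0]
```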
# + _cell_guid="569d7154-e3b5-84ab-1d28-57bdc02955d9" _uuid="6a6a8e3676de98a40a7f31291d349ff2a5a2e0a0"
solution = pd.DataFrame({"id":test.Id, "SalePrice":preds})
solution.to_csv("ridge_sol.csv", index = False)
# + [markdown] _cell_guid="fe4ec3c9-ae45-e01e-d881-32da250d44ba" _uuid="bb88f14d0e5497453c15770226e9463a3bcd8a39"
# ### Trying out keras?
#
# Feedforward neural nets don't seem to work well at all... I wonder why.
# + _cell_guid="12121592-5b16-5957-6c54-3fe84bc6708a" _uuid="145f1aafec10c4df7d7042cad11fdaa0fd776dbd"
from keras.layers import Dense
from keras.models import Sequential
from keras.regularizers import l1
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# + _cell_guid="246a88ac-3963-a603-cf33-eb2976737c98" _uuid="96361fbaf89183643918b2d47bd97323cd66c12f"
X_train = StandardScaler().fit_transform(X_train)
# + _cell_guid="04936965-5441-3989-1f07-97138b331dbc" _uuid="076864514e59feb7dd0f9adbdcfbf8f83fcd8d2c"
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y, random_state = 3)
# + _cell_guid="5223b976-c02e-062e-5c73-60516bf70fa5" _uuid="b93105c03a739c06b59c8546e46a6f41b4067cb7"
X_tr.shape
# + _cell_guid="7b7e0df1-ea9c-5dcb-41cd-f79509218a20" _uuid="4327eba6cf67c78afb9beb87aa80c9c750e59254"
X_tr
# + _cell_guid="14ef62de-56e3-03cc-00c6-a5e2307d1b6a" _uuid="9e668e430a8c77296b3d4401a2a562c3c761979f"
model = Sequential()
#model.add(Dense(256, activation="relu", input_dim = X_train.shape[1]))
model.add(Dense(1, input_dim = X_train.shape[1], W_regularizer=l1(0.001)))
model.compile(loss = "mse", optimizer = "adam")
# + _cell_guid="082332bc-b36b-30db-1e0e-c212fba98b58" _uuid="d672a540072602be30f46ae6f721dae73b2d7e77"
model.summary()
# + _cell_guid="ad155a35-1d0b-c42f-9bdf-77ff389ddfd4" _uuid="e2e80b917f03be51afc6ac9d09de79c723618371"
hist = model.fit(X_tr, y_tr, validation_data = (X_val, y_val))
# + _cell_guid="d6c6354f-047b-1d8e-c024-15bb5d570f15" _uuid="6c224691e0c0f771326199fa1ecc185c85ff2dfc"
pd.Series(model.predict(X_val)[:,0]).hist()
# -
# Source: 0-GettingStart/house-prices-advanced-regression-techniques/Trying-out-a-linear-model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
using BayesNets
bn = DiscreteBayesNet()
push!(bn, DiscreteCPD(:a, [1/3, 1/3, 1/3]))
push!(bn, DiscreteCPD(:b, [1/3, 1/3, 1/3]))
push!(bn, DiscreteCPD(:c, [:a, :b], [3,3],
[Categorical([0,1/2,1/2]), #A=0, B=0
Categorical([0,0,1]), #A=0, B=1
Categorical([0,1,0]), #A=0, B=2
Categorical([0,0,1]), #A=1, B=0
Categorical([1/2,0,1/2]), #A=1, B=1
Categorical([1,0,0]), #A=1, B=2
Categorical([0,1,0]), #A=2, B=0
Categorical([1,0,0]), #A=2, B=1
Categorical([1/2,1/2,0]) #A=2, B=2
]))
# +
assignment = Assignment(:b => 1, :c => 2)
infer(bn, :a, evidence=assignment)
# Source: doc/classic_monty_hall.ipynb
# ---
# jupyter:
# jupytext:
# formats: ipynb,md:myst
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <p><font size="6"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p>
#
# > *© 2021, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# **AirBase** is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The [air quality database](https://www.eea.europa.eu/data-and-maps/data/aqereporting-8/air-quality-zone-geometries) consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants.
# Some of the data files that are available from AirBase were included in the data folder: the **hourly concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:
#
# - FR04037 (PARIS 13eme): urban background site at Square de Choisy
# - FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia
# - BETR802: urban traffic site in Antwerp, Belgium
# - BETN029: rural background site in Houtem, Belgium
#
# See http://www.eea.europa.eu/themes/air/interactive/no2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# # Processing a single file
#
# We will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file:
with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f:
print(f.readline())
# So we will need to do some manual processing.
# Just reading the tab-delimited data:
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None)
data.head()
# The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names.
# <div class="alert alert-success">
#
# <b>EXERCISE 1</b>: <br><br> Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html))
#
# <ul>
# <li>specify the correct delimiter</li>
# <li>specify that the values of -999 and -9999 should be regarded as NaN</li>
# <li>specify our own column names (for how the column names are made up, see <a href="http://stackoverflow.com/questions/6356041/python-intertwining-two-lists">http://stackoverflow.com/questions/6356041/python-intertwining-two-lists</a>)
# </ul>
# </div>
# Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag'
hours = ["{:02d}".format(i) for i in range(24)]
column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair]
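The intertwining trick is easier to see on a cut-down version with just three hours:

```python
# Interleave hour labels with their matching flag labels
hours = ["{:02d}".format(i) for i in range(3)]
flags = ["flag" + str(i) for i in range(3)]

column_names = ["date"] + [item for pair in zip(hours, flags) for item in pair]
print(column_names)  # ['date', '00', 'flag0', '01', 'flag1', '02', 'flag2']
```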
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing1.py
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing2.py
# -
# For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data).
# <div class="alert alert-success">
#
# **EXERCISE 2**:
#
# Drop all 'flag' columns ('flag1', 'flag2', ...)
#
# </div>
flag_columns = [col for col in data.columns if 'flag' in col]
# we can now use this list to drop these columns
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing3.py
# -
data.head()
# Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries.
# <div class="alert alert-info">
#
# <b>REMEMBER</b>:
#
#
# Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb)
#
#
#
# <img src="../img/pandas/schema-stack.svg" width=70%>
#
# </div>
# <div class="alert alert-success">
#
# <b>EXERCISE 3</b>:
#
# <br><br>
#
# Reshape the dataframe to a timeseries.
# The end result should look like:<br><br>
#
#
# <div class='center'>
# <table border="1" class="dataframe">
# <thead>
# <tr style="text-align: right;">
# <th></th>
# <th>BETR801</th>
# </tr>
# </thead>
# <tbody>
# <tr>
# <th>1990-01-02 09:00:00</th>
# <td>48.0</td>
# </tr>
# <tr>
# <th>1990-01-02 12:00:00</th>
# <td>48.0</td>
# </tr>
# <tr>
# <th>1990-01-02 13:00:00</th>
# <td>50.0</td>
# </tr>
# <tr>
# <th>1990-01-02 14:00:00</th>
# <td>55.0</td>
# </tr>
# <tr>
# <th>...</th>
# <td>...</td>
# </tr>
# <tr>
# <th>2012-12-31 20:00:00</th>
# <td>16.5</td>
# </tr>
# <tr>
# <th>2012-12-31 21:00:00</th>
# <td>14.5</td>
# </tr>
# <tr>
# <th>2012-12-31 22:00:00</th>
# <td>16.5</td>
# </tr>
# <tr>
# <th>2012-12-31 23:00:00</th>
# <td>15.0</td>
# </tr>
# </tbody>
# </table>
# <p style="text-align:center">170794 rows × 1 columns</p>
# </div>
#
# <ul>
# <li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li>
# <li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li>
# <li>Set the new datetime values as the index, and remove the original columns with date and hour values</li>
#
# </ul>
#
#
# **NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions.
#
# </div>
# Reshaping using `melt`:
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing4.py
# -
# Reshaping using `stack`:
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing5.py
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing6.py
# -
# Combine date and hour:
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing7.py
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing8.py
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing9.py
# -
data_stacked.head()
# Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`:
data_stacked.index
data_stacked.plot()
# # Processing a collection of files
# We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above.
# <div class="alert alert-success">
#
# <b>EXERCISE 4</b>:
#
# <ul>
# <li>Write a function <code>read_airbase_file(filename, station)</code>, using the above steps to read in and process the data, and that returns a processed timeseries.</li>
# </ul>
# </div>
def read_airbase_file(filename, station):
"""
Read hourly AirBase data files.
Parameters
----------
filename : string
Path to the data file.
station : string
Name of the station.
Returns
-------
DataFrame
Processed dataframe.
"""
...
return ...
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing10.py
# -
# Test the function on the data file from above:
import os
filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012"
station = os.path.split(filename)[-1][:7]
station
test = read_airbase_file(filename, station)
test.head()
# We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe.
# <div class="alert alert-success">
#
# **EXERCISE 5**:
#
# Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.
#
# <details><summary>Hints</summary>
#
# - The pathlib module provides an object-oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path("./data")`. Next, apply the `glob` method to extract all the files containing `*0008001*` (use wildcard * to say "any characters"). The output is a Python generator, which you can collect as a `list()`.
#
# </details>
#
#
# </div>
from pathlib import Path
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing11.py
# -
# <div class="alert alert-success">
#
# **EXERCISE 6**:
#
# * Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.
# * Combine the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.
#
# <details><summary>Hints</summary>
#
# - The `data_files` list contains `Path` objects (from the pathlib module). To get the actual file name as a string, use the `.name` attribute.
# - The station name is always first 7 characters of the file name.
#
# </details>
#
#
# </div>
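# The combining step can be sketched with toy stand-ins for two processed station series (station names follow the AirBase pattern; values are illustrative):

```python
import pandas as pd

idx = pd.to_datetime(["1990-01-01 00:00", "1990-01-01 01:00"])
dfs = [pd.DataFrame({"BETR801": [48.0, 45.0]}, index=idx),
       pd.DataFrame({"BETN029": [12.0, 11.5]}, index=idx)]
combined = pd.concat(dfs, axis=1)  # one column per station, aligned on the index
```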
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing12.py
# + tags=["nbtutor-solution"]
# # %load _solutions/case4_air_quality_processing13.py
# -
combined_data.head()
# Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file.
# let's first give the index a descriptive name
combined_data.index.name = 'datetime'
combined_data.to_csv("airbase_data_processed.csv")
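# When reading the file back later, the saved index has to be parsed into a `DatetimeIndex` again — a round-trip sketch on a toy frame:

```python
import io
import pandas as pd

df = pd.DataFrame({"BETR801": [48.0, 45.0]},
                  index=pd.to_datetime(["1990-01-01 00:00", "1990-01-01 01:00"]))
df.index.name = "datetime"
buf = io.StringIO()
df.to_csv(buf)          # stand-in for the csv file on disk
buf.seek(0)
back = pd.read_csv(buf, index_col="datetime", parse_dates=True)
```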
# Source notebook: notebooks/case4_air_quality_processing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="T44dSXv5Rctp" executionInfo={"status": "ok", "timestamp": 1634649632500, "user_tz": -480, "elapsed": 300, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}} outputId="4d0914ad-a371-445d-a572-753417e7bf11"
import os
from google.colab import drive
drive.mount('/content/gdrive')
# + id="055hWXd7SYri" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1634649776146, "user_tz": -480, "elapsed": 143343, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}} outputId="ff84d6d7-3d62-4c29-fff5-df25871fdcf8"
# !mkdir train_local
# !unzip /content/gdrive/MyDrive/NFL_Helmet/nfl-health-and-safety-helmet-assignment.zip -d train_local/
# + id="Mm88C49tY1BN" executionInfo={"status": "ok", "timestamp": 1634649776147, "user_tz": -480, "elapsed": 9, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
import random
import numpy as np
from pathlib import Path
import datetime
import pandas as pd
from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
import cv2
import json
import matplotlib.pyplot as plt
from IPython.core.display import Video, display
import subprocess
import gc
import shutil
# + id="oReHPPUWuHas" executionInfo={"status": "ok", "timestamp": 1634649815101, "user_tz": -480, "elapsed": 496, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}} outputId="108d1490-e00d-489a-a1dc-65438825af81" colab={"base_uri": "https://localhost:8080/"}
os.listdir("./train_local")
# + [markdown] id="FpgkTR8aRncD"
# # Prepare data in COCO format (the standard object detection format)
# Reference: https://www.kaggle.com/eneszvo/mmdet-cascadercnn-helmet-detection-for-beginners
# + colab={"base_uri": "https://localhost:8080/", "height": 442} id="hN_tXj-gRqnL" executionInfo={"status": "error", "timestamp": 1634649776149, "user_tz": -480, "elapsed": 10, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}} outputId="cd7845fd-17fb-467a-a315-dfd0609df755"
import pandas as pd
# Load image level csv file
extra_df = pd.read_csv('./train_local/image_labels.csv')
print('Number of ground truth bounding boxes: ', len(extra_df))
# Number of unique labels
label_to_id = {label: i for i, label in enumerate(extra_df.label.unique())}
print('Unique labels: ', label_to_id)
# + id="TR99TkbsXmjs" executionInfo={"status": "aborted", "timestamp": 1634649776147, "user_tz": -480, "elapsed": 6, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
extra_df.head()
# + id="kImf1JW5XlKW" executionInfo={"status": "aborted", "timestamp": 1634649776148, "user_tz": -480, "elapsed": 7, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
def create_ann_file(df, category_id):
now = datetime.datetime.now()
data = dict(
info=dict(
description='NFL-Helmet-Assignment',
url=None,
version=None,
year=now.year,
contributor=None,
date_created=now.strftime('%Y-%m-%d %H:%M:%S.%f'),
),
licenses=[dict(
url=None,
id=0,
name=None,
)],
images=[
# license, url, file_name, height, width, date_captured, id
],
type='instances',
annotations=[
# segmentation, area, iscrowd, image_id, bbox, category_id, id
],
categories=[
# supercategory, id, name
],
)
class_name_to_id = {}
labels = ["__ignore__",
'Helmet',
'Helmet-Blurred',
'Helmet-Difficult',
'Helmet-Sideline',
'Helmet-Partial']
for i, each_label in enumerate(labels):
class_id = i - 1 # starts with -1
class_name = each_label
if class_id == -1:
assert class_name == '__ignore__'
continue
class_name_to_id[class_name] = class_id
data['categories'].append(dict(
supercategory=None,
id=class_id,
name=class_name,
))
box_id = 0
for i, image in tqdm(enumerate(os.listdir(TRAIN_PATH))):
img = cv2.imread(TRAIN_PATH+'/'+image)
height, width, _ = img.shape
data['images'].append({
'license':0,
'url': None,
'file_name': image,
'height': height,
'width': width,
'date_captured': None,
'id': i
})
df_temp = df[df.image == image]
for index, row in df_temp.iterrows():
area = round(row.width*row.height, 1)
bbox =[row.left, row.top, row.width, row.height]
data['annotations'].append({
'id': box_id,
'image_id': i,
'category_id': category_id[row.label],
'area': area,
'bbox':bbox,
'iscrowd':0
})
box_id+=1
return data
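# For reference, this is the minimal shape of the COCO dict the function builds, for one image with one box (all values are illustrative):

```python
coco = {
    "images": [{"license": 0, "file_name": "frame0.jpg",
                "height": 720, "width": 1280, "id": 0}],
    "annotations": [{"id": 0, "image_id": 0, "category_id": 0,
                     "bbox": [100, 50, 20, 20],  # [left, top, width, height]
                     "area": 400.0, "iscrowd": 0}],
    "categories": [{"supercategory": None, "id": 0, "name": "Helmet"}],
}
```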
# + id="D13-Kjd4XugV" executionInfo={"status": "aborted", "timestamp": 1634649776148, "user_tz": -480, "elapsed": 7, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
from sklearn.model_selection import train_test_split
TRAIN_PATH = './train_local/images'
extra_df = pd.read_csv('./train_local/image_labels.csv')
category_id = {'Helmet':0, 'Helmet-Blurred':1,
'Helmet-Difficult':2, 'Helmet-Sideline':3,
'Helmet-Partial':4}
df_train, df_val = train_test_split(extra_df, test_size=0.2, random_state=36)
ann_file_train = create_ann_file(df_train, category_id)
ann_file_val = create_ann_file(df_val, category_id)
# + id="XwmGyzRIdhYo" executionInfo={"status": "aborted", "timestamp": 1634649776148, "user_tz": -480, "elapsed": 7, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
print('train:',df_train.shape,'val:',df_val.shape)
# + id="YRdKYpldZ_qY" executionInfo={"status": "aborted", "timestamp": 1634649776149, "user_tz": -480, "elapsed": 8, "user": {"displayName": "zz w", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01928120792238499571"}}
# save the annotation files as JSON to Google Drive
with open('/content/gdrive/MyDrive/NFL_Helmet/ann_file_train.json', 'w') as f:
json.dump(ann_file_train, f, indent=4)
with open('/content/gdrive/MyDrive/NFL_Helmet/ann_file_val.json', 'w') as f:
json.dump(ann_file_val, f, indent=4)
# Source notebook: miscellaneous/Helmet_recognition_coco(data prepare).ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.1.3
# language: ruby
# name: ruby
# ---
# +
module CodaGem
  # Create nested modules from a path string such as 'A::B::C',
  # reusing any constants that are already defined along the way.
  def self.create(path)
    root = Object
    path.split('::').each do |name|
      if root.const_defined?(name, false)
        new_module = root.const_get(name)
      else
        new_module = Module.new
        root.const_set(name, new_module)
      end
      root = new_module
    end
    root
  end
end

CodaGem.create('A::B::C')
# -
# Source notebook: Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Performance tests on Streaming Inference
# Java 1.8 must be installed:
# + language="bash"
# java -version
# -
# The bin folder of Spark 2.2.1 for Hadoop 2.7 or later must also be added to the PATH ([download](https://spark.apache.org/downloads.html)).
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import seaborn as sns
from matplotlib import pylab
import numpy as np
pylab.rcParams['figure.figsize'] = (16.0, 8.0)
sns.set(style="whitegrid")
# ## Creating the test collections
# This function creates, in the input folder, a list of JSON files containing collections of elements:
def createTestFileCollection(elements=120, entities=2, versions=2, depth=2, fields=2, batch=12):
# !rm -rf input
# !mkdir -p input
# out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --elements $elements \
# --entities $entities \
# --versions $versions \
# --depth $depth \
# --fields $fields \
# --mode file \
# --flow stream \
# --batch $batch \
# --output input/collection.json \
# --delay 10
# This function fills the *benchmark* database with test entities:
def createTestMongoCollection(elements=120, entities=2, versions=2, depth=2, fields=2):
# out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --elements $elements \
# --entities $entities \
# --versions $versions \
# --depth $depth \
# --fields $fields \
# --mode mongo \
# --host localhost \
# --port 27017 \
# --database benchmark
# This function creates a single JSON file named "collection" in the input folder, holding one collection of elements:
def createTestSingleCollection(elements=120, entities=2, versions=2, depth=2, fields=2):
# !rm -rf input
# !mkdir -p input
# out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --elements $elements \
# --entities $entities \
# --versions $versions \
# --depth $depth \
# --fields $fields \
# --mode file \
# --output input/collection.json
# This function selects the command to run depending on the operating mode:
def createTestCollection(mode="file", elements=120, entities=2, versions=2, depth=2, fields=2, batch=12):
# !mkdir -p output
if (mode == "file"):
createTestFileCollection(elements, entities, versions, depth, fields, batch)
elif (mode == "mongo"):
createTestMongoCollection(elements, entities, versions, depth, fields)
elif (mode == "single"):
createTestSingleCollection(elements, entities, versions, depth, fields)
# ## Benchmarking the applications
# This function runs the inference application on a previously created series of collections and dumps the results to *stats.csv*:
def benchmarkFile(interval=1000, kryo="true"):
# out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --mode file \
# --input input \
# --benchmark true \
# --interval $interval \
# --kryo $kryo
# This function runs the inference application on the previously created database and generates the *stats.csv* file:
def benchmarkMongo(interval=1000, block=200, kryo="true"):
# out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --mode mongo \
# --database benchmark \
# --host localhost \
# --port 27017 \
# --benchmark true \
# --interval $interval \
# --block-interval $block \
# --kryo $kryo
# This function runs the inference application on the created collection and generates the *stats.csv* file; in this case only the processing time is reported:
def benchmarkSingle():
# out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
# --mode single \
# --input input/collection.json \
# --benchmark true
# This function selects the command to run depending on the operating mode:
def benchmarkSparkApp(mode="file", interval=1000, block=200, kryo="true"):
if (mode == "file"):
benchmarkFile(interval, kryo)
elif (mode == "mongo"):
benchmarkMongo(interval, block, kryo)
elif (mode== "single"):
benchmarkSingle()
# ## Putting it all together
# The following function composes the previous functions to run a test with the given parameters:
def benchmark(mode="file", interval=1000, block=200, elements=120, entities=2, versions=2, depth=2, fields=2, batch=12, kryo="true"):
global benchmarked
# !rm -f output/stats.csv
createTestCollection(mode, elements, entities, versions, depth, fields, batch)
for x in range(0, 10):
benchmarkSparkApp(mode, interval, block, kryo)
benchmarked = pd.read_csv("output/stats.csv")
return benchmarked
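# The *stats.csv* produced by the runs can be summarised with pandas — a sketch with illustrative numbers (the `TOTAL_PROCESSING` column name matches the plots below):

```python
import io
import pandas as pd

# stand-in for output/stats.csv (values illustrative, in milliseconds)
csv = io.StringIO("TOTAL_PROCESSING\n1200\n1100\n1300\n")
stats = pd.read_csv(csv)
mean_ms = stats["TOTAL_PROCESSING"].mean()
```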
# ## Tests
# Creating a collection of 60000 elements split into 5 files:
createTestCollection(mode="file", elements=60000, batch=12000)
# Creating a single file with 60000 elements:
createTestCollection(mode="single", elements=60000)
# Inserting 60000 elements into the MongoDB "benchmark" database:
createTestCollection(mode="mongo", elements=60000)
# Test run with 60000 elements in file mode, in batches of 12000 elements:
benchmark(mode="file",elements=60000, batch=12000)
# Test run with 60000 elements in single mode:
benchmark(mode="single",elements=60000)
# Test run with 60000 elements in mongo mode:
benchmark(mode="mongo", elements=60000)
# ## Measuring parameters
# Study of the effect of Kryo serialization on the application:
results = pd.DataFrame()
df = benchmark(mode="file", elements=2400000, batch=80000, entities=30, versions=30, depth=5, fields=4, kryo="true")
df.to_csv("kryo-enabled.csv")
results["kryo enabled"] = df["TOTAL_PROCESSING"]
df = benchmark(mode="file", elements=2400000, batch=80000, entities=30, versions=30, depth=5, fields=4, kryo="false")
df.to_csv("kryo-disabled.csv")
results["kryo disabled"] = df["TOTAL_PROCESSING"]
ax = sns.barplot(data=results)
ax.set_ylabel("Milisegundos de procesamiento")
# Study of the effect of the number of entities on processing time:
# +
ents = np.array([])
mode = np.array([])
millis = np.array([])
for entities in [1, 50, 100, 200, 400]:
df = benchmark(mode="file", elements=2400000, batch=80000, entities=entities, versions=1, depth=2, fields=2, kryo="true")
df.to_csv("file-entities-"+str(entities)+".csv")
length = df["TOTAL_PROCESSING"].size
ents = np.append(ents, np.repeat(entities, length))
mode = np.append(mode, np.repeat("Paralelo", length))
millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix())
df = benchmark(mode="single", elements=2400000, entities=entities, versions=1, depth=2, fields=2)
df.to_csv("original-file-entities-"+str(entities)+".csv")
length = df["TOTAL_PROCESSING"].size
ents = np.append(ents, np.repeat(entities, length))
mode = np.append(mode, np.repeat("Original", length))
millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix())
results = pd.DataFrame({"Entidades":ents, "Modo": mode, "Milisegundos de procesamiento": millis})
sns.factorplot(x="Entidades", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=7)
# -
# Study of the effect of the number of versions on processing time:
# +
vers = np.array([])
mode = np.array([])
millis = np.array([])
for versions in [1, 50, 100, 200, 400]:
df = benchmark(mode="file", elements=2400000, batch=80000, entities=1, versions=versions, depth=2, fields=2, kryo="true")
df.to_csv("file-versions-"+str(versions)+".csv")
length = df["TOTAL_PROCESSING"].size
vers = np.append(vers, np.repeat(versions, length))
mode = np.append(mode, np.repeat("Paralelo", length))
millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix())
df = benchmark(mode="single", elements=2400000, entities=1, versions=versions, depth=2, fields=2)
df.to_csv("original-file-versions-"+str(versions)+".csv")
vers = np.append(vers, np.repeat(versions, length))
mode = np.append(mode, np.repeat("Original", length))
millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix())
results = pd.DataFrame({"Versiones":vers, "Modo": mode, "Milisegundos de procesamiento": millis})
sns.factorplot(x="Versiones", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=7)
# -
# Study of the effect of the number of elements on processing time:
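# The plots below normalise the total processing time (in milliseconds) to microseconds per element, i.e. `ms * 1000 / n`. A quick check with illustrative numbers:

```python
total_ms = 1200.0
n_elements = 60000
micros_per_element = total_ms * 1000 / n_elements
```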
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]:
df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=1, versions=1, depth=2, fields=2, kryo="true")
df.to_csv("light-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Paralelo", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = benchmark(mode="single", elements=elements, entities=1, versions=1, depth=2, fields=2)
df.to_csv("light-original-file-elements-"+str(elements)+".csv")
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Original", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros})
sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]:
df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=20, versions=20, depth=2, fields=2, kryo="true")
df.to_csv("medium-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Paralelo", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = benchmark(mode="single", elements=elements, entities=20, versions=20, depth=2, fields=2)
df.to_csv("medium-original-file-elements-"+str(elements)+".csv")
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Original", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros})
sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]:
df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=50, versions=50, depth=2, fields=2, kryo="true")
df.to_csv("hard-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Paralelo", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = benchmark(mode="single", elements=elements, entities=50, versions=50, depth=2, fields=2)
df.to_csv("hard-original-file-elements-"+str(elements)+".csv")
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("Original", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros})
sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7)
# -
# Study of the effect of the number of partitions on processing time:
# +
parts = np.array([])
millis = np.array([])
for partitions in [1, 2, 4, 8, 16]:
    df = benchmark(mode="file", elements=2400000, batch=(2400000/partitions), entities=1, versions=1, depth=2, fields=2, kryo="true")
df.to_csv("file-partitions-"+str(partitions)+".csv")
length = df["TOTAL_PROCESSING"].size
parts = np.append(parts, np.repeat(partitions, length))
millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix())
results = pd.DataFrame({"Particiones":parts, "Milisegundos de procesamiento": millis})
sns.factorplot(x="Particiones", y="Milisegundos de procesamiento", data=results, kind="bar", size=7)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [480000, 1200000, 2400000, 3600000]:
for executors in [4, 16]:
df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-"+str(executors)+"-1.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA-1-"+str(executors), length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros})
sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", col_wrap=3, data=results, kind="bar", size=5)
# -
# ## Reading the obtained results
# ### Heat map of runs on the local machine and on CESGA
# +
import matplotlib.pyplot as plt
import os.path
f, ax = plt.subplots(1,3, figsize=(11, 7))
f.tight_layout()
cmap = sns.color_palette("Blues", n_colors=1000)
row = 0
for version in [1, 20, 50]:
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [480000, 1200000, 2400000, 3600000]:
if version == 1:
strVersion = "light"
if version == 20:
strVersion = "medium"
elif version == 50:
strVersion = "hard"
df = pd.read_csv("local/"+strVersion+"-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("PARALELO", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = pd.read_csv("local/"+strVersion+"-original-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("ORIGINAL", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
for executors in [4, 16]:
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-"+str(executors)+"-1.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA-"+str(executors).zfill(2)+"-1", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-2-8.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA-02-8", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-8-2.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA-08-2", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros})
grouped = results.groupby(['Documentos', 'Modo'], as_index=False).mean()
grouped.sort_values("Modo")
pivoted = grouped.pivot("Modo", "Documentos", "Microsegundos por documento")
#display(pivoted)
sns.heatmap(pivoted, annot=True, linewidths=.5, fmt="1.2f", ax=ax[row], cmap=cmap, cbar=False, annot_kws={"size": 14})
#ax[row].yticks(np.arange(0, 1, step=0.2))
row += 1
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.3)
plt.show()
# -
# ### Heat map with the best alternative and speedup relative to the original inference process
# +
import matplotlib.pyplot as plt
import os.path
cmap = sns.color_palette("Blues", n_colors=1000)
f, ax = plt.subplots(1,1, figsize=(12.95, 4.5))
elems = np.array([])
mode = np.array([])
micros = np.array([])
bestMode = ""
bestMicros = 9999999
originalMicros = 0
labels = pd.DataFrame(columns=["Modo", "Documentos", "Candidato"])
results = pd.DataFrame(columns=["Modo", "Documentos", "Speedup"])
for version in [1, 20, 50]:
for elements in [480000, 1200000, 2400000]:
if version == 1:
strVersion = "light"
labelVersion = u"1 entidad\n1 versión"
if version == 20:
strVersion = "medium"
labelVersion = "20 entidades\n20 versiones"
elif version == 50:
strVersion = "hard"
labelVersion = "50 entidades\n50 versiones"
df = pd.read_csv("local/"+strVersion+"-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
bestMode = "Local"
bestMicros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean()
df = pd.read_csv("local/"+strVersion+"-original-file-elements-"+str(elements)+".csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
originalMicros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean()
if (originalMicros < bestMicros):
bestMicros = originalMicros
bestMode = "Original"
for executors in [4, 16]:
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-"+str(executors)+"-1.csv")
length = df["TOTAL_PROCESSING"].size
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean()
if (micros < bestMicros):
bestMicros = micros
bestMode = "CESGA\n" + str(executors) + " executors 1 core"
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-2-8.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean()
if (micros < bestMicros):
bestMicros = micros
bestMode = "CESGA\n2 executors 8 cores"
df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-8-2.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean()
if (micros < bestMicros):
bestMicros = micros
bestMode = "CESGA\n8 executors 2 cores"
speedup = originalMicros/bestMicros
bestMode += "\nSpeedup: " + "{0:.2f}".format(speedup)
results = results.append({"Modo": labelVersion, "Documentos": elements, "Speedup": speedup}, ignore_index=True)
labels = labels.append({"Modo": labelVersion, "Documentos": elements, "Candidato": bestMode}, ignore_index=True)
#results["Tipo"] = results["Tipo"].astype(int)
results["Documentos"] = results["Documentos"].astype(int)
results = results.pivot("Modo", "Documentos", "Speedup")
labels = labels.pivot("Modo", "Documentos", "Candidato")
sns.heatmap(results, annot=labels, linewidths=.5, fmt="", cmap=cmap, cbar=False, annot_kws={"size": 16}, ax=ax)
ax.set_ylabel('')
ax.set_xlabel("Documentos",fontsize=14)
ax.tick_params(labelsize="large")
plt.yticks(rotation=0)
plt.show()
# -
# ### Execution time as a function of the number of executors (1 entity, 1 version)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [480000, 1200000, 2400000, 3600000]:
for executors in [4, 16]:
df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-"+str(executors)+"-1.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA-"+str(executors).zfill(2)+"-1", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros})
sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", size=3)
# -
# ### Execution time as a function of the number of executors (50 entities, 50 versions)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [480000, 1200000, 2400000, 3600000]:
for executors in [4, 16]:
df = pd.read_csv("cesga/results-"+str(elements)+"-50-50-"+str(elements/30)+"-"+str(executors)+"-1.csv")
length = df["TOTAL_PROCESSING"].size
elems = np.append(elems, np.repeat(elements, length))
mode = np.append(mode, np.repeat("CESGA "+str(executors)+" Executors", length))
micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix())
results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros})
sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", size=3.5)
# -
# ### Execution time as a function of the number of cores (1 entity, 1 version)
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [480000, 1200000, 2400000, 3600000]:
    df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements//30)+"-16-1.csv")
    length = df["TOTAL_PROCESSING"].size
    elems = np.append(elems, np.repeat(elements, length))
    mode = np.append(mode, np.repeat("CESGA-16-1", length))
    micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).to_numpy())
    df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements//30)+"-8-2.csv")
    length = df["TOTAL_PROCESSING"].size
    elems = np.append(elems, np.repeat(elements, length))
    mode = np.append(mode, np.repeat("CESGA-08-2", length))
    micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).to_numpy())
    df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements//30)+"-2-8.csv")
    length = df["TOTAL_PROCESSING"].size
    elems = np.append(elems, np.repeat(elements, length))
    mode = np.append(mode, np.repeat("CESGA-02-8", length))
    micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).to_numpy())
results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros})
sns.catplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", height=4)
# -
# ### Evolution of the execution time as a function of the number of entities
# +
ents = np.array([])
mode = np.array([])
millis = np.array([])
for entities in [1, 50, 100, 200, 400]:
    df = pd.read_csv("local/file-entities-"+str(entities)+".csv")
    length = df["TOTAL_PROCESSING"].size
    ents = np.append(ents, np.repeat(entities, length))
    mode = np.append(mode, np.repeat("Paralelo", length))
    millis = np.append(millis, df["TOTAL_PROCESSING"].to_numpy())
    df = pd.read_csv("local/original-file-entities-"+str(entities)+".csv")
    length = df["TOTAL_PROCESSING"].size
    ents = np.append(ents, np.repeat(entities, length))
    mode = np.append(mode, np.repeat("Original", length))
    millis = np.append(millis, df["TOTAL_PROCESSING"].to_numpy())
results = pd.DataFrame({"Entidades":ents.astype(int), "Modo": mode, "Milisegundos de procesamiento": millis})
sns.catplot(x="Entidades", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", height=3.5)
# -
# ### Evolution of the execution time as a function of the number of versions
# +
vers = np.array([])
mode = np.array([])
millis = np.array([])
for versions in [1, 50, 100, 200, 400]:
    df = pd.read_csv("local/file-versions-"+str(versions)+".csv")
    length = df["TOTAL_PROCESSING"].size
    vers = np.append(vers, np.repeat(versions, length))
    mode = np.append(mode, np.repeat("Paralelo", length))
    millis = np.append(millis, df["TOTAL_PROCESSING"].to_numpy())
    df = pd.read_csv("local/original-file-versions-"+str(versions)+".csv")
    length = df["TOTAL_PROCESSING"].size  # recompute: row counts may differ between files
    vers = np.append(vers, np.repeat(versions, length))
    mode = np.append(mode, np.repeat("Original", length))
    millis = np.append(millis, df["TOTAL_PROCESSING"].to_numpy())
results = pd.DataFrame({"Versiones":vers.astype(int), "Modo": mode, "Milisegundos de procesamiento": millis})
sns.catplot(x="Versiones", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", height=3.5)
# -
# ### Evolution of the execution time as a function of the number of documents
# +
elems = np.array([])
mode = np.array([])
micros = np.array([])
for elements in [60000, 120000, 480000, 1200000, 2400000]:
    df = pd.read_csv("local/light-file-elements-"+str(elements)+".csv")
    length = df["TOTAL_PROCESSING"].size
    elems = np.append(elems, np.repeat(elements, length))
    mode = np.append(mode, np.repeat("Paralelo", length))
    micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).to_numpy())
    df = pd.read_csv("local/light-original-file-elements-"+str(elements)+".csv")
    length = df["TOTAL_PROCESSING"].size  # recompute: row counts may differ between files
    elems = np.append(elems, np.repeat(elements, length))
    mode = np.append(mode, np.repeat("Original", length))
    micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).to_numpy())
results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros})
sns.catplot(x="Documentos", y="Microsegundos por documento", col="Modo", data=results, kind="bar", height=3.5)
# -
# ### Evolution of the execution time as a function of the number of input files
# +
parts = np.array([])
millis = np.array([])
for partitions in [1, 2, 4, 8, 16]:
    df = pd.read_csv("local/file-partitions-"+str(partitions)+".csv")
    length = df["TOTAL_PROCESSING"].size
    parts = np.append(parts, np.repeat(partitions, length))
    millis = np.append(millis, df["TOTAL_PROCESSING"].to_numpy())
results = pd.DataFrame({"Ficheros de entrada":parts.astype(int), "Milisegundos de procesamiento": millis})
sns.catplot(x="Ficheros de entrada", y="Milisegundos de procesamiento", data=results, kind="bar", height=3.5)
| projects/es.um.nosql.streaminginference.json2dbschema/benchmark/Benchmark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.0 64-bit
# language: python
# name: python38064bit1116672182664a7bac758519cdc3fc68
# ---
# +
# Starting code for HW 2. Add to this!
import numpy as np
# Function definitions:
f = lambda x: np.sin(np.e**x)
# Statics
x0 = 0
xn = 2
def go(n):
    xs = np.linspace(0,2,n+1)
    coeffs = np.full_like(xs,2)  # [2,2,...,2,2]
    coeffs[0] = 1                # [1,2,...,2,2]
    coeffs[-1] = 1               # [1,2,...,2,1]
    h2 = (xn-x0)/(2.0*n)         # h/2
    return h2 * np.dot(f(xs), coeffs)  # = (h/2) * (f_0 + 2f_1 + 2f_2 + ... + 2f_(n-1) + f_n)
# -
h10 = go(20); print(h10)
h20 = go(40); print(h20)
h40 = go(80); print(h40)
(h10 - h20)/(h20 - h40) # = R
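# Sanity check (a sketch, not part of the original homework): the composite trapezoidal rule coded in `go(n)` has $O(h^2)$ error, so the Richardson ratio computed above should be close to 4 when the number of subintervals doubles. A self-contained version:

```python
import numpy as np

f = lambda x: np.sin(np.e ** x)

def trap(n, a=0.0, b=2.0):
    # composite trapezoidal rule with n subintervals on [a, b]
    xs = np.linspace(a, b, n + 1)
    w = np.full(n + 1, 2.0)
    w[0] = w[-1] = 1.0
    return (b - a) / (2.0 * n) * np.dot(f(xs), w)

# error ~ C*h^2, so (T_n - T_2n) / (T_2n - T_4n) should approach 4
ratio = (trap(20) - trap(40)) / (trap(40) - trap(80))
print(ratio)
```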
| 514/hwk4/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/ClaudeCoulombe/VIARENA/blob/master/Labos/Mon_premier_reseau_de_neurones_avec_Keras.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#
# ### Reminder - How an iPython notebook works
#
# * To run the code in a cell of an iPython notebook, click in the cell and press shift-enter (⇧↵)
# * The code of an iPython notebook runs sequentially from the top of the page to the bottom. Importing a Python library or initializing a variable is often a prerequisite for executing a cell further down, so it is recommended to run the cells in order. Finally, beware of jumping back up the page, as that can reinitialize some variables.
# # My first neural network with Keras
# ## Fahrenheit to Celsius conversion - compact version
#
# You are going to create your very first, very simple neural network to do pattern matching.
#
# It is a small application that converts temperatures from degrees Fahrenheit to degrees Celsius.
#
# One may legitimately wonder about the usefulness of such an exercise, since an exact answer can be obtained with the conversion equation $\;\;celsius = \large{\frac{5}{9}}\small(fahrenheit - 32)$. What is actually "interesting" is to observe that the neural network learns, "on its own" and directly from the data, to approximate this equation.
#
# **Note**: It is not important to understand the details of the code for now. Don't worry, detailed explanations will follow soon.
# # Fewer than 10 lines of code...
# +
import tensorflow as tf
import numpy as np
farenheit_np = np.array([-100.0, -50.0, -25.0, 0.0, 25.0, 50.0, 100])
celsius_np = np.array([-73.33, -45.56, -31.67, -17.78, -3.89, 10.00, 37.78])
reseau_de_neurones = tf.keras.models.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
reseau_de_neurones.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),loss='mean_squared_error')
reseau_de_neurones.fit(farenheit_np,celsius_np,epochs=400,verbose=2)
farenheit = 32
print("\n\nFarenheit:",farenheit,", prédiction =>","Celsius:",reseau_de_neurones.predict([farenheit])[0][0],", formule exacte:",5/9*(farenheit-32))
# -
# Now try 212 degrees Fahrenheit. Of course, you can assign any value you like to the `farenheit` variable.
farenheit = 212
print("Farenheit:",farenheit,", prédiction =>","Celsius:",reseau_de_neurones.predict([farenheit])[0][0],", formule exacte:",5/9*(farenheit-32))
# You can see that the neural network returns values very close to the true values given by the formula $\;\;celsius = \large{\frac{5}{9}}\small(fahrenheit - 32)$. It is important to understand that the neural network does not learn the exact formula; rather, it iteratively computes an approximation of it.
#
# Things to remember:
#
# * A neural network can learn to approximate a function directly from data
# * The learning process is iterative
#
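# As a quick cross-check (a sketch, not part of the original notebook): the single `Dense(1)` neuron above is just a line $c = w \cdot f + b$, so an ordinary least-squares fit on the same seven points should recover $w \approx 5/9$ and $b \approx -17.78$, the coefficients of the exact conversion formula.

```python
import numpy as np

# same seven (Fahrenheit, Celsius) pairs as in the notebook
farenheit_np = np.array([-100.0, -50.0, -25.0, 0.0, 25.0, 50.0, 100.0])
celsius_np = np.array([-73.33, -45.56, -31.67, -17.78, -3.89, 10.00, 37.78])

# design matrix [f, 1] for the line c = w*f + b
A = np.vstack([farenheit_np, np.ones_like(farenheit_np)]).T
(w, b), *_ = np.linalg.lstsq(A, celsius_np, rcond=None)
print(w, b)  # w close to 5/9 = 0.555..., b close to -17.78
```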
| Labos/Premier_reseau_neurones-FaC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extraction of SDM
# ### Collect spin-density matrix elements from the `ROOT`-file and save them to a text file
import ROOT
name = '0.100000-0.112853'
# +
############################################
def get_cve(myfile,name,i) :
    hist = myfile.Get(name)
    return [hist.GetBinCenter(i), hist.GetBinContent(i), hist.GetBinError(i)]
############################################
def get_SDM(myfile,bin,nbins=88) :
    SDM = [[0.0 for x in range(nbins)] for y in range(nbins)]
    eSDM = [[0.0 for x in range(nbins)] for y in range(nbins)]
    # intensities
    for i in range(nbins) :
        hname = 'h'+str(i+1)
        if (myfile.GetListOfKeys().Contains(hname)) :
            v = get_cve(myfile,hname, bin)
            SDM[i][i] = v[1]
            eSDM[i][i] = v[2]
        else :
            print("Error: ",hname," not found!")
    # interferences
    for i in range(nbins) :
        for j in range(nbins) :
            if (i >= j) : continue
            rname = 'h1'+'{:03d}'.format(i+1)+'{:03d}'.format(j+1)
            iname = 'h2'+'{:03d}'.format(i+1)+'{:03d}'.format(j+1)
            if (myfile.GetListOfKeys().Contains(rname)) and (myfile.GetListOfKeys().Contains(iname)):
                vr = get_cve(myfile,rname, bin)
                vi = get_cve(myfile,iname, bin)
                SDM[i][j] = vr[1]+vi[1]*1j
                eSDM[i][j] = vr[2]+vi[2]*1j
                SDM[j][i] = vr[1]-vi[1]*1j
                eSDM[j][i] = vr[2]-vi[2]*1j
            # else :
            #     print("Error: ",rname," or ",iname," not found!")
    return SDM, eSDM
############################################
import numpy as np
def save2file(output, inmat) :
    mat = np.matrix([np.array(v).real for v in inmat])
    np.savetxt(output+'.re',mat)
    mat = np.matrix([np.array(v).imag for v in inmat])
    np.savetxt(output+'.im',mat)
def save_all_sdm(rootfilename, outfilename, Nb=100, Nw=88) :
    myfile = ROOT.TFile(rootfilename)
    for b in range(1,Nb+1) :
        SDM_b, eSDM_b = get_SDM(myfile,b,Nw)
        save2file(outfilename+'/'+'sdm'+str(b), SDM_b)
        save2file(outfilename+'/'+'sdm'+str(b)+'-err', eSDM_b)
    myfile.Close()
# -
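# The `.re`/`.im` pair written by `save2file` can be read back into a complex matrix with `np.loadtxt`. A minimal round-trip sketch (the file prefix below is hypothetical, and the ROOT machinery is not needed for it):

```python
import os
import tempfile

import numpy as np

def load_sdm(prefix):
    # inverse of save2file: recombine the real and imaginary text files
    return np.loadtxt(prefix + '.re') + 1j * np.loadtxt(prefix + '.im')

# round-trip a toy 2x2 "SDM"
toy = np.array([[1.0, 0.5 + 0.2j], [0.5 - 0.2j, 2.0]])
with tempfile.TemporaryDirectory() as d:
    prefix = os.path.join(d, 'sdm_toy')
    np.savetxt(prefix + '.re', toy.real)
    np.savetxt(prefix + '.im', toy.imag)
    back = load_sdm(prefix)
print(np.allclose(back, toy))  # True
```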
# ## Check the functionality
names = ["0.100000-0.112853", "0.112853-0.127471", "0.127471-0.144385", "0.144385-0.164401",
"0.164401-0.188816", "0.188816-0.219907", "0.219907-0.262177", "0.262177-0.326380",
"0.326380-0.448588", "0.448588-0.724294", "0.724294-1.000000"];
# Save a single SDM
# +
myfile = ROOT.TFile('/mnt/data/compass/2008/flo_fit_results/flo_88waves_rank2/hfit_'+names[2]+'.root')
SDM91, eSDM91 = get_SDM(myfile,91)
print(SDM91[1][1]," (+-) ",eSDM91[1][1])
myfile.Close()
# -
# Save all SDMs for the given t' in the file
# +
myfile = ROOT.TFile('/home/mikhasenko/cernbox/tmp/pwa_results/hfit_0.100000-0.112853.root')
for b in range(1,101) :
    SDM_b, eSDM_b = get_SDM(myfile,b)
    save2file('SDMs/'+name+'/'+'sdm'+str(b), SDM_b)
    save2file('SDMs/'+name+'/'+'sdm'+str(b)+'-err', eSDM_b)
myfile.Close()
# -
# Save all SDMs from all t'-slices in files
for name in names :
    print(name)
    save_all_sdm('/home/mikhasenko/cernbox/tmp/pwa_results/hfit_'+name+'.root',"SDMs/"+name)
# ### Systematic uncertainties
# Meanwhile, I did
# ```bash
# # cd /mnt/data/compass/2008/
# # create folders
# for i in $(ls flo_fit_results); do echo $i; mkdir flo_fit_results_txt/$i ; done
# # create subfolders with t-slices
# for i in $(ls flo_fit_results); do echo "------> $i"; for l in $(ls flo_fit_results/$i); do N=$(echo $l | sed "s/hfit_//" | sed "s/.root//"); echo $N; mkdir flo_fit_results_txt/$i/$N; done; done
# # # copy folder names
# for i in $(ls flo_fit_results_txt/); do echo \"$i\",; done
# # # copy 22 t'-slice names
# for i in $(ls flo_fit_results/flo_88waves_22tbins); do N=$(echo $i | sed "s/hfit_//" | sed "s/.root//"); echo \"$N\",; done
# exit
# ```
fold = "/mnt/data/compass/2008";
#
files=[("flo_88waves_best_11tbins",100,88),
       ("flo_53waves_best_11tbins",100,53),
       # ("flo_88waves_22tbins",100,88), # that is special (see below)
       ("flo_88waves_coarse_ES",100,88),
       ("flo_88waves_f0_980_BW",100,88),
       ("flo_88waves_K1",100,88),
       ("flo_88waves_no_neg_waves",100,81),
       ("flo_88waves_rank2",100,88)];
for ff in files :
    fl = ff[0]
    print("--->",fl)
    root_folder = fold+"/flo_fit_results/" +fl+"/"
    text_folder = fold+"/flo_fit_results_txt/"+fl+"/"
    for name in names :
        print(name)
        save_all_sdm(root_folder+'hfit_'+name+'.root',
                     text_folder+name, ff[1], ff[2])
# ### Special file with 22 $t'$ bins
for ff in [("flo_88waves_22tbins",100,88)] :
    fl = ff[0]
    print("--->",fl)
    root_folder = fold+"/flo_fit_results/" +fl+"/"
    text_folder = fold+"/flo_fit_results_txt/"+fl+"/"
    for name in ["0.100000-0.106427",
                 "0.106427-0.112853",
                 "0.112853-0.120162",
                 "0.120162-0.127471",
                 "0.127471-0.135928",
                 "0.135928-0.144385",
                 "0.144385-0.154399",
                 "0.154399-0.164401",
                 "0.164401-0.176609",
                 "0.176609-0.188816",
                 "0.188816-0.204362",
                 "0.204362-0.219907",
                 "0.219907-0.241042",
                 "0.241042-0.262177",
                 "0.262177-0.294228",
                 "0.294228-0.326380",
                 "0.326380-0.387484",
                 "0.387484-0.448588",
                 "0.448588-0.586441",
                 "0.586441-0.724294",
                 "0.724294-0.862147",
                 "0.862147-1.000000"]:
        print(name)
        save_all_sdm(root_folder+'hfit_'+name+'.root',
                     text_folder+name, ff[1], ff[2])
| saveSDM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Reads IBTrACS data file and creates a timeseries of monthly aggregated TC genesis count within a lat-lon box specified by the user
#
#
# ## NCSU Tropical and Large Scale Dynamics
# ## <NAME>
#
# +
'''
Python For Atmospheric Science By Example
<NAME>
North Carolina State University
'''
from matplotlib.dates import num2date,date2num
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# +
#--------------------------------------------------
yearStart = 1980
yearEnd = 2019
monthStart = 10
monthEnd = 10
#--------------------------------------------------
dataDir = "~/data100/data/ibtracs/"
filename = "IBTrACS.since1980.v04r00.nc"
file = dataDir+filename
try:
    ds = xr.open_dataset(file)
except:
    print("file not found. quitting code")
    quit()
print("Ibtracs file found and opened")
latS = 0.
latN = 20.
lonW = -90.
lonE = -20.
# NOW just extract the first track point info for each TC track
time = ds.time[:,0]
# subset tc genesis times for the lat-lon box defined
tsub = time.where((time.lon>=lonW) & (time.lon<=lonE) & (time.lat>=latS) & (time.lat<=latN), drop=True)
# -
# # A count of number of storms for each year (counted over all months. i.e. one total count per year)
# +
# count the number of storms for each year (count over all months)
yearlyCount = tsub.groupby("time.year").count()
# PLOT A BAR CHART TO SHOW THE TC NUMBERS PER YEAR
fig, ax = plt.subplots(figsize=(18,6))
yearlyCount.to_series().plot.bar(ax=ax)
# -
# # Now we count the number of TCs each month of each year
# +
year_month_idx= pd.MultiIndex.from_arrays([tsub['time.year'].values,tsub['time.month'].values])
tsub.coords['year_month'] = ('storm', year_month_idx)
monthlyCount = tsub.groupby('year_month').count()
#
#
#
# PLOT A BAR CHART TO SHOW THE TC NUMBERS PER YEAR
fig, ax = plt.subplots(figsize=(18,6))
monthlyCount.to_series().plot.bar(ax=ax)
# -
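# The (year, month) MultiIndex trick above is easy to check on synthetic data (the toy dates below are illustrative, not IBTrACS):

```python
import pandas as pd

# five synthetic "genesis" times spanning two years
times = pd.Series(pd.to_datetime([
    '2000-07-01', '2000-07-15', '2000-09-03',
    '2001-07-20', '2001-08-11',
]))
idx = pd.MultiIndex.from_arrays([times.dt.year, times.dt.month],
                                names=['year', 'month'])
monthly = pd.Series(1, index=idx).groupby(level=[0, 1]).count()
print(monthly)  # (2000, 7) -> 2, the other year-months -> 1
```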
# # Now we count the number of TCs within a range of months for each year
# +
# choose the range of months now
mon1 = 7
mon2 = 9
tseas = tsub.where( (tsub['time.month']>=mon1) & (tsub['time.month']<=mon2), drop=True)
# count the number of storms for each year (count over all months)
yearlyCount = tseas.groupby("time.year").count()
# PLOT A BAR CHART TO SHOW THE TC NUMBERS PER YEAR
fig, ax = plt.subplots(figsize=(18,6))
yearlyCount.to_series().plot.bar(ax=ax)
# -
| examples/ibtracs/genesis_timeseries_by_region.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import os
np.random.seed(42)
# Loads dataset & processes it:
# - fills NA data
# - processes categorical data so that categories from both train&test are known
def load_dataset(dataset, drop_columns=None):
    df_train = pd.read_csv("./2019-npfl104-shared/data/"+dataset+"/train.txt.gz", header=None)
    df_test = pd.read_csv("./2019-npfl104-shared/data/"+dataset+"/test.txt.gz", header=None)
    train_size = len(df_train)
    df_tog = pd.concat([df_train, df_test])
    # Convert to categorical
    for col in df_tog.columns[np.where(df_tog.dtypes == 'object')]:
        df_tog[col] = pd.Categorical(df_tog[col])
    # Explicitly drop specified columns
    if drop_columns:
        df_tog = df_tog.drop(drop_columns, axis=1)
    df_train, df_test = df_tog[:train_size], df_tog[train_size:]
    df_train = df_train.fillna(df_train.mode().iloc[0])
    df_test = df_test.fillna(df_test.mode().iloc[0])
    return df_train, df_test
# Used to split dataframe to features & target (last column)
def get_X(df):
    return pd.get_dummies(df[df.columns[:-1]], dummy_na=True)
def get_Y(df):
    dfc = df[df.columns[-1]]
    return dfc.cat.codes if dfc.dtype.name == "category" else dfc
# -
dftr, dfte = load_dataset("pamap-easy")
# +
k = 8
x = get_X(dftr).values
def init_centers():
    centers = np.zeros((k, ) + x[0].shape)
    for i in range(k):
        random_i = np.random.choice(len(x) - 1, 1)
        centers[i] = x[random_i]
    return centers
print(init_centers())
# +
def get_closest_center(centers, data, j):
    closest = -1
    closest_dist = 2 ** 31
    for i in range(len(centers)):
        dist = np.linalg.norm(data[j] - centers[i])
        if (dist < closest_dist):
            closest_dist = dist
            closest = i
    return closest
def iteration(data, centers):
    aggr = np.zeros((k, ) + data[0].shape)
    aggr_n = np.zeros(k)
    for i in range(len(data)):
        clst_cluster = get_closest_center(centers, data, i)
        aggr[clst_cluster] += data[i]
        aggr_n[clst_cluster] += 1
    for i in range(len(aggr)):
        aggr[i] /= aggr_n[i]
    return aggr
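# The per-point Python loops above can be vectorized with numpy broadcasting. A sketch of one assignment + update step (an illustration, not part of the homework; it assumes points of shape (n, d) and centers of shape (k, d)):

```python
import numpy as np

def assign_and_update(pts, ctrs):
    # pairwise squared distances via broadcasting -> shape (n, k)
    d2 = ((pts[:, None, :] - ctrs[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # each new center is the mean of its assigned points
    # (an empty cluster keeps its old center)
    new_ctrs = np.array([
        pts[labels == i].mean(axis=0) if np.any(labels == i) else ctrs[i]
        for i in range(len(ctrs))
    ])
    return labels, new_ctrs

# tiny example with two obvious clusters
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lbls, ctrs = assign_and_update(pts, np.array([[0.0, 0.0], [5.0, 5.0]]))
print(lbls)  # [0 0 1 1]
```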
# +
centers = init_centers()
for i in range(10):
    centers = iteration(x, centers)
centers
# +
def predict(centers, data):
    result = np.zeros(len(data))
    for i in range(len(data)):
        result[i] = get_closest_center(centers, data, i)
    return result
labels_myknn = predict(centers, x)
# -
from sklearn import metrics
from time import time
# +
class Object(object):
    pass
# +
t0 = time()
estimator = Object()
estimator.inertia_ = 6
estimator.labels_ = labels_myknn
name="asdf"
data = get_X(dftr)
labels = get_Y(dftr)
sample_size=len(dftr)
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
# +
import numpy as np
from models.model import Model
class KMeansMy(Model):
    '''
    KMeans clustering model.
    '''
    def __init__(self, k, iters):
        self.k = k
        self.iters = iters
        self.inertia_ = 25
    def Build(self, inputs):
        centers = self.__init_centers(inputs)
        for _ in range(self.iters):
            centers = self.__iteration(inputs, centers)
        self.centers = centers
    def __init_centers(self, inputs):
        x = inputs
        k = self.k
        centers = np.zeros((k, ) + x[0].shape)
        for i in range(k):
            random_i = np.random.choice(len(x) - 1, 1)[0]
            centers[i] = x[random_i]
        return centers
    def __get_closest_center(self, centers, data, j):
        closest = -1
        closest_dist = 2 ** 31
        for i in range(len(centers)):
            dist = np.linalg.norm(data[j] - centers[i])
            if (dist < closest_dist):
                closest_dist = dist
                closest = i
        return closest
    def __iteration(self, data, centers):
        k = self.k
        aggr = np.zeros((k, ) + data[0].shape)
        aggr_n = np.zeros(k)
        for i in range(len(data)):
            clst_cluster = self.__get_closest_center(centers, data, i)
            aggr[clst_cluster] += data[i]
            aggr_n[clst_cluster] += 1
        for i in range(len(aggr)):
            aggr[i] /= aggr_n[i]
        return aggr
    def fit(self, data):
        self.Build(data)
        self.labels_ = self.Predict(data)
    def Predict(self, input):
        centers = self.centers
        data = input
        result = np.zeros(len(data))
        for i in range(len(data)):
            result[i] = self.__get_closest_center(centers, data, i)
        return result
def bench_k_means(estimator, name, data):
    t0 = time()
    estimator.fit(data)
    print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
          % (name, (time() - t0), estimator.inertia_,
             metrics.homogeneity_score(labels, estimator.labels_),
             metrics.completeness_score(labels, estimator.labels_),
             metrics.v_measure_score(labels, estimator.labels_),
             metrics.adjusted_rand_score(labels, estimator.labels_),
             metrics.adjusted_mutual_info_score(labels, estimator.labels_),
             metrics.silhouette_score(data, estimator.labels_,
                                      metric='euclidean',
                                      sample_size=sample_size)))
# +
from sklearn import metrics
from time import time
from sklearn.cluster import KMeans
k = 10
model = KMeansMy(k, 5)
labels = get_Y(dftr).values
sample_size = len(dftr)
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
bench_k_means(model, "asdf", data = get_X(dftr).values)
bench_k_means(KMeans(init='k-means++', n_clusters=k, n_init=2),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=k, n_init=2),
name="random", data=data)
# -
| hw/my-clustering/hw08.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [LEGALST-190] Lab 4/17: Feature Selection
#
# This lab covers feature selection for training a machine learning model with `scikit-learn`. With complex datasets, using every feature can lead to overfitting and long run times. Feature selection is used in machine learning to avoid those types of issues.
#
# Estimated time: 35 minutes
#
# ### Table of Contents
# [The Data](#section data)<br>
# 1 - [Second Model](#section 1)<br>
# 2 - [Intro to Feature Removal Intuition](#section 2)<br>
# 3 - [Checking Results](#section 3)<br>
#
# +
# load all libraries
import numpy as np
from datascience import *
import datetime as dt
import pandas as pd
import seaborn as sns
#matplotlib
# %matplotlib inline
import matplotlib.pyplot as plt
#scikit-learn
from sklearn.feature_selection import RFE
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import KFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn import preprocessing
from sklearn import metrics
# -
# ## The Data: Bike Sharing<a id='section data'></a>
#
# By now, you have been exposed to the bike sharing dataset several times. This lab's data describes one such bike sharing system in Washington D.C., from UC Irvine's Machine Learning Repository.
#
# Information about the dataset: http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset
# +
bike = pd.read_csv('data/day.csv', index_col=0)
# reformat the date column to integers that represent the day of the year, 001-366
bike['dteday'] = pd.to_datetime(bike['dteday'].unique()).strftime('%j')
# drop casual and registered riders because we want to predict the number of total riders
bike = bike.drop(['casual', 'registered'], axis = 1)
bike.head()
# -
# If you need to become familiar with this data set again, feel free to refer back to lab 2-22.
# To see how feature selection can change the accuracy for the better or worse, let's start by making a Linear Regression classifer that uses all features. This will act as our baseline.
#
# This cell also splits the data into train, validation, and test sets. Just like for model selection, we'll train our data on the training set, use the validation data to compare accuracy for different possible models (in this case, different combinations of features), and only use the test data once after we've finalized our model.
# +
# the features used to predict riders
X = bike.drop(['cnt'], axis = 1)
# the number of riders (target)
y = bike['cnt']
# set the random seed
np.random.seed(10)
# split the data with 0.20 proportion for test size
# train_test_split returns 4 values: X_train, X_test, y_train, y_test
X, X_test, y, y_test = train_test_split(X, y,
train_size=0.80, test_size=0.20)
# split the remaining data with 0.75 proportion for train size and 0.25 for validation size
X_train, X_val, y_train, y_val = train_test_split(X, y,
train_size=0.75, test_size=0.25)
# create a linear regression model
first_model_reg = LinearRegression()
#fit your model
first_model = first_model_reg.fit(X_train, y_train)
#predict X_train using your model
first_pred = first_model.predict(X_train)
#predict X_val using your model
val_pred = first_model.predict(X_val)
# -
# In order to check the error between the predicted values and the actual values, we have defined the root mean squared error (RMSE) for you. Recall that it is the square root of the mean of the squared differences between the predicted and actual values.
def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))
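# A hand-checkable example of the formula (an illustration, not part of the lab): with errors of 3 and 4, the RMSE is $\sqrt{(3^2 + 4^2)/2} = \sqrt{12.5} \approx 3.54$.

```python
import numpy as np

def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))

# errors are 3 and -4, so rmse = sqrt((9 + 16) / 2) = sqrt(12.5)
print(rmse(np.array([4.0, 1.0]), np.array([1.0, 5.0])))
```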
# +
# check the rmse of your models
first_train_error = rmse(first_pred, y_train)
first_val_error = rmse(val_pred, y_val)
print("Training RMSE:", first_train_error)
print("Validation RMSE:", first_val_error)
# -
# ## Section 1: Second Model
#
# Our training and test errors seem to be pretty high. Let's see how we can improve our model by using feature selection. This process is often accompanied by lots of Exploratory Data Analysis (EDA). First we will look at which features correlate to our target feature (`cnt`).
#
# **Question 1.1:** Do some EDA yourself to become familiar with the correlation values between certain features and the number of riders.
#
# **hint:** the `.corr()` method can be called on a dataframe to get the correlation matrix, and heat maps are helpful to visualize correlation (use `sns.heatmap(<data>)`)
corrmat = bike.corr()
plt.figure(figsize=(10,10))
g = sns.heatmap(corrmat, annot=True)
corrmat
# **Question 1.2:** Looking at your EDA, how will that help you select which features to use?
# *Answer:* sample answer
# - Looking at the correlations between all features and cnt, I would pick the ones with correlation values higher than 0.50
# **Question 1.3:** List out features that would probably be important to select for your model. Make sure to not include the rider count in your features list.
features = ["yr", "temp", "atemp"]
features
# **Question 1.4:** Now create a `linear regression` model with the features that you have selected to predict the number of riders(`cnt`).
#
# First, create new subsets of the training and validation data that only contain your chosen features. Then, initialize and fit your model to this new training data and use the model to predict the number of riders for the training set.
#
# Remember: any transformations you do on the training data also need to be done for the validation data (and the test data, if you end up using this model).
#
# *Note that Lasso and Ridge models would use the same steps below.*
# the features used to predict riders
X_train_my_feats = X_train[features]
X_val_my_feats = X_val[features]
# +
lin_reg = LinearRegression()
# fit the model
lin_model = lin_reg.fit(X_train_my_feats, y_train)
# +
# predict the number of riders
lin_pred = lin_model.predict(X_train_my_feats)
# plot the residuals on a scatter plot
plt.scatter(y_train, lin_pred)
plt.title('Linear Model (OLS)')
plt.xlabel('actual value')
plt.ylabel('predicted value')
plt.show()
# -
# **Question 1.5:** What is the rmse for both the prediction of X_train and X_val?
# +
#predict your X_test here
lin_val_pred = lin_model.predict(X_val_my_feats)
second_train_error = rmse(lin_pred, y_train)
second_val_error = rmse(lin_val_pred, y_val)
print("Training RMSE:", second_train_error)
print("Test RMSE:", second_val_error)
# -
# Hmm... our selected features did not improve the error much. Let's see how else we can improve our model.
# ## Section 2: Introduction to Feature Removal Intuition<a id = 'section 2'></a>
#
# As a good rule of thumb, we typically wish to pick features that are highly correlated with the target column. Also, even though not relevant to the bike sharing dataset, it is often best to remove columns that contain a high ratio of null values. However, sometimes null values represent 0 instead of data actually missing! So always be on the look out when you have to clean data.
#
# Of course, with any tedious and error prone process there is always a short cut that reduces time and human error. In part 1, you used your own intuition to pick out features that correlate the highest with the target feature. However, we can use `scikit-learn` to help pick the important features for us.
#
# Feature selection methods can give you useful information on the relative importance or relevance of features for a given problem. You can use this information to create filtered versions of your dataset and increase the accuracy of your model.
# ### Remove Features with Low Variance
#
# As you might remember from Data-8 (or other introductory statistics classes), the **variance** of data can give an idea of how spread out it is. This can be useful for feature selection. In removing features with low variance, all features whose variance does not meet some threshold are removed- the idea being that if a feature has low variability, it will be less helpful in prediction.
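# The idea can be sketched in a few lines of plain numpy (an illustration of the principle, not scikit-learn's implementation): compute the per-column variance and keep only the columns above the threshold.

```python
import numpy as np

def variance_filter(M, threshold=0.1):
    # keep only the columns whose variance exceeds the threshold
    variances = M.var(axis=0)
    return M[:, variances > threshold], variances

M = np.array([[0.0, 1.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 3.0, 30.0]])
M_kept, variances = variance_filter(M)
print(variances)     # column 0 has zero variance and gets dropped
print(M_kept.shape)  # (3, 2)
```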
# **Question 2.1:** What is the current shape of X_train?
# code answer here
X_train.shape
# Use `VarianceThreshold` to filter out features that are below a 0.1 threshold.
#
# Then you can use `transform` on the X_train. This will select features that match the threshold.
# +
#use VarianceThreshold
sel = VarianceThreshold(threshold = 0.1)
sel.fit(X_train)
# Subset features with transform
X_new_train = sel.transform(X_train)
#make sure to also transform X_val so it will match dimensions of X_new_train
X_new_val = sel.transform(X_val)
# -
# Check the shape of the transformed training data. How many features were removed?
#notice how many features are then selected compared to X_train's original features
X_new_train.shape
# Now, create and fit a new Linear Regression model on these features and make predictions for `X_new_train` and `X_new_val`
# +
#Create a new Linear Regression model for your X_new. Recall that X_new is the X_train with selected features.
new_lin_reg = LinearRegression()
# fit the model
new_lin_model = new_lin_reg.fit(X_new_train, y_train)
#predict X_new
new_lin_pred = new_lin_model.predict(X_new_train)
#predict X_new_validation
new_val_pred = new_lin_model.predict(X_new_val)
# -
# <div class="alert alert-danger">
#
# **STOP!**
# There is something very sketchy about using this method for feature removal on this particular data set. Can you say why? Hint: look again at the intuition behind the method, the kinds of features in the data set, and the variance of each feature (easily gotten using `sel.variances_`).
# </div>
# check the variances for each feature
sel.variances_
# #### The Issues
# 1. Our bike sharing data includes variables with many *different units*: Celsius temperature, windspeed, and humidity. Comparing variances for features with different scales is like comparing apples to oranges: if they have different variances, that might just be due to the fact that they're measured on different scales.
# 2. Some of our features are *categorical*: things like 'holiday' or 'year' have integer values that represent categories rather than vary on a continuous scale. For example, although we have holiday values of 0 and 1, we can't have a holiday of anything in between (like 0.5) because a day is either a holiday (1) or it isn't (0). This means the variance for these features isn't meaningful in the same way as for the numerical features.
#
# So why did we have you do the previous section if it isn't valid? Variance thresholds may be a valuable feature removal tool if you're working with data that has many features with comparable units; for example, data for an athletic competition where event results for many different events are measured in milliseconds. Knowing the process may come in handy for other data sets or projects. Just be sure to take these results with a grain of salt, since we know the features would have been filtered out in a somewhat arbitrary way.
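To see the unit-sensitivity issue concretely, here is a small self-contained sketch (on synthetic data, not the bike-share set) where the same signal expressed in two different units passes or fails the same variance threshold:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Two copies of the same signal, one in metres and one in millimetres.
rng = np.random.default_rng(0)
metres = rng.normal(loc=0.0, scale=1.0, size=(100, 1))
millimetres = metres * 1000.0  # identical information, variance 1e6 times larger
X = np.hstack([metres, millimetres])

sel = VarianceThreshold(threshold=10.0)
X_kept = sel.fit_transform(X)

# Only the millimetre column survives, even though both carry the same signal.
print(sel.get_support())  # -> [False  True]
print(X_kept.shape)       # -> (100, 1)
```

The mask shows the metre column was dropped purely because of its unit, which is exactly the arbitrariness discussed above.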
# **Question 2.2:** How does your root mean square error change compared to your model in section 1? Try changing the threshold to different values to see how it affects the accuracy.
# *Answer:*
# +
third_train_error = rmse(new_lin_pred, y_train)
third_val_error = rmse(new_val_pred, y_val)
print("Training RMSE:", third_train_error)
print("Validation RMSE:", third_val_error)
# -
# ### Recursive Feature Elimination with scikit-learn
#
# According to [Feature Selection in Python with Scikit-Learn](https://machinelearningmastery.com/feature-selection-in-python-with-scikit-learn/), recursive feature elimination works by “recursively removing attributes and building a model on those attributes that remain. It uses the model accuracy to identify which attributes (and combination of attributes) contribute the most to predicting the target attribute.”
#
# So, recursive feature elimination takes care of much of the trial-and-error feature removal process for you. `RFE` takes a model (in this case, a `LinearRegression()` object) and the desired number of features to keep. It is then fit on the training X and y.
# +
# create a base classifier used to evaluate a subset of attributes
model = LinearRegression()
# create the RFE model and select 10 attributes
rfe = RFE(model, n_features_to_select=10)
rfe.fit(X_train, y_train)
# -
# To check which features have been selected, we can use `rfe.support_`, which shows the boolean mask of selected features.
print(rfe.support_)
# The attribute `rfe.ranking_` gives the feature ranking: `ranking_[i]` is the ranking position of the i-th feature, and selected (i.e., estimated best) features are assigned rank 1.
print(rfe.ranking_)
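The boolean mask and ranks are easier to read when paired with column names. A self-contained sketch on synthetic data (the names `feat_0` through `feat_7` are illustrative, not the bike-share columns):

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the feature matrix: 8 columns, 3 of them informative.
X, y = make_regression(n_samples=200, n_features=8, n_informative=3, random_state=0)
cols = [f"feat_{i}" for i in range(8)]
X = pd.DataFrame(X, columns=cols)

rfe = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)

# Pair each column name with its mask entry and rank (rank 1 = selected).
summary = pd.DataFrame({"selected": rfe.support_, "rank": rfe.ranking_}, index=cols)
print(summary.sort_values("rank"))
```

Sorting by rank puts the three selected features first, making the elimination order explicit.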
# +
# using rfe, predict your training set
new_pred = rfe.predict(X_train)
# now predict your validation set
new_val_pred = rfe.predict(X_val)
# +
# time for errors
fourth_train_error = rmse(new_pred, y_train)
fourth_val_error = rmse(new_val_pred, y_val)
print("Training RMSE:", fourth_train_error)
print("Validation RMSE:", fourth_val_error)
# -
# **Question 2.3:** How does recursive feature elimination change your error? Does the error get better or worse when you tell RFE to use less than 10 or more than 10 attributes?
# *Answer:* Use the rmse equation to explain.
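The `rmse` helper used throughout is defined earlier in the notebook (outside this excerpt); for reference when answering, it presumably computes the standard root mean square error:

```python
import numpy as np

def rmse(pred, actual):
    """Root mean square error, as presumably defined earlier in the lab."""
    pred, actual = np.asarray(pred, dtype=float), np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((pred - actual) ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3), about 1.1547
```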
# ### Feature Importance
#
# Feature importance-based selection uses the importance scores from a previously fitted model to keep the most relevant features, for example selecting the most important features found by a number of randomized decision trees. "A decision tree can be used to visually and explicitly represent decisions and decision making. As the name goes, it uses a tree-like model of decisions." If you would like to read more, feel free to [click here](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052). The main idea behind using randomized trees is to use many of them to perform the prediction, which helps the model be more robust.
#
# Methods that use ensembles of decision trees (like `Random Forest` or `Extra Trees`) can also compute the relative importance of each attribute. These importance values can be used to inform a feature selection process. In this lab, we will be using `Extra Trees`; Random Forest will be introduced in the next lab.
#
# Below, we construct an Extra Trees ensemble on the bike share dataset and display the relative feature importances.
#
# Once you use `ExtraTreesClassifier` to create a new model, fit the model. Afterwards, you can use `SelectFromModel` to select features using the classifier. Make sure to `transform` your X_train to obtain the selected important features.
# +
# Fits a number of randomized decision trees. Use 15 estimators (this value was arbitrarily chosen)
# this allows us to select features
model = ExtraTreesClassifier(n_estimators = 15)
#fit your model
model.fit(X_train, y_train)
# Select the important features of previous model
sel = SelectFromModel(model, prefit=True)
# Subset features by calling transform on your training X
select_X_train = sel.transform(X_train)
# We want to create a train model
sel_model = ExtraTreesClassifier(n_estimators = 15)
sel_model.fit(select_X_train, y_train)
# +
#predict X_train
y_train_pred = sel_model.predict(select_X_train)
# we must also select features from X_val so the number of features matches the model
select_X_val = sel.transform(X_val)
y_pred = sel_model.predict(select_X_val)
fifth_train_error = rmse(y_train_pred, y_train)
fifth_val_error = rmse(y_pred, y_val)
print("Training RMSE:", fifth_train_error)
print("Validation RMSE:", fifth_val_error)
# -
# **Question 2.4:** How does using the Extra Trees change your error? Does the error get better or worse when you change the number of estimators?
# *Answer:* Depends on the errors students received.
# ## Section 3: Checking Results<a id = 'section 3'></a>
#
# Note that since Linear Regression is not the only model option, you can use the above methods to select features for many models, including `Lasso` or `Ridge`.
# **Question 3.1:** Within the scope of this class, what other methods besides feature selection can be used to improve estimation?
# *Answer:* possible answers
# - testing multiple models
# - averaging models
# - stacking models
# - change number of attributes in Recursive Feature Elimination method
# - increase number of estimators in Feature Importance method
# **Question 3.2:** Now that we have gone through different methods of feature selection, let's see how the error changes with each method. We have created the dataframe for you, now graph it!
# +
labels = ['all_features', 'own_selection', 'variance_threshold', 'rfe', 'important']
methods = pd.DataFrame(columns = labels)
methods['all_features'] = [first_train_error, first_val_error]
methods['own_selection'] = [second_train_error, second_val_error]
methods['variance_threshold'] = [third_train_error, third_val_error]
methods['rfe'] = [fourth_train_error, fourth_val_error]
methods['important'] = [fifth_train_error, fifth_val_error]
methods = methods.rename(index={0: 'train'})
methods = methods.rename(index={1: 'validation'})
methods
# +
#sample plot
methods.plot.bar()
# -
# **Question**: How do the different error rates compare for the different methods of feature selection? Which ones change significantly for training and validation data?
# ## Bibliography
#
# - <NAME>, An Introduction to Feature Selection. https://machinelearningmastery.com/an-introduction-to-feature-selection/
# - <NAME>, Feature Selection in Python with Scikit-Learn. https://machinelearningmastery.com/feature-selection-in-python-with-scikit-learn/
# - <NAME>, Why, How and When to apply Feature Selection. https://towardsdatascience.com/why-how-and-when-to-apply-feature-selection-e9c69adfabf2
# - Use of `Bike Share` data set adapted from UC Irvine's Machine Learning Repository. http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset
# - Some code adapted from <NAME>: https://csiu.github.io/blog/update/2017/03/06/day10.html
#
# ----
# Notebook developed by: <NAME>
#
# Data Science Modules: http://data.berkeley.edu/education/modules
| labs/4-17/4-17_Feature_Selection_SOLUTIONS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # MUTRAFF DISPLAY EXPERIMENT CATALOG
# Compares the data analysis of two traffic scenarios, based on the BASTRA simulator.
#
# Author: <NAME>. June 2019
#
# ## References
#
# -
# ## Imports
# + slideshow={"slide_type": "fragment"}
# %matplotlib inline
import os
import fileinput
import re
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.mlab as mlab
import matplotlib.lines as mlines
import matplotlib as mpl
from IPython.display import display, HTML
# from scipy.stats import ttest_1samp, wilcoxon, ttest_ind, mannwhitneyu
from scipy import stats as st
import sklearn as sk
import pandas as pd
# -
# # HTML formatting
# The toggle button allows code hiding.
# +
from IPython.display import display
from IPython.display import HTML
import IPython.core.display as di
# This line will hide code by default when the notebook is exported as HTML
di.display_html('<script>jQuery(function() {if (jQuery("body.notebook_app").length == 0) { jQuery(".input_area").toggle(); jQuery(".prompt").toggle();}});</script>', raw=True)
# This line will add a button to toggle visibility of code blocks, for use with the HTML export version
di.display_html('''<button onclick="jQuery('.input_area').toggle(); jQuery('.prompt').toggle();">Toggle code</button>''', raw=True)
# -
# ### Experiments catalog
# Load experiments catalog
# +
from MutraffExperiments.ExperimentCatalog import ExperimentCatalog
theExps = ExperimentCatalog('default')
theExps.loadExperimentsFromCSV( 'CATALOGO DE EXPERIMENTOS.csv' )
# +
# print( theExps.getExperiment(2600) )
# -
data = pd.DataFrame.from_dict(theExps.experiments, orient='index')
#data.groupby('GROUP/OBJECTIVE')
data
# STEP 1: Create groups
groups = {}
for key, exp in theExps.experiments.items():
gr = exp['GROUP/OBJECTIVE']
if not gr in groups:
groups[gr] = {}
groups[gr][exp['ID']]=exp
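The STEP 1 loop can be written more compactly with `collections.defaultdict`, which removes the explicit membership check. A sketch with hypothetical experiment records (the real ones come from `theExps.experiments`):

```python
from collections import defaultdict

# Hypothetical stand-in for theExps.experiments: ID -> experiment record.
experiments = {
    1: {"ID": 1, "GROUP/OBJECTIVE": "baseline", "LABEL": "run A"},
    2: {"ID": 2, "GROUP/OBJECTIVE": "baseline", "LABEL": "run B"},
    3: {"ID": 3, "GROUP/OBJECTIVE": "reroute",  "LABEL": "run C"},
}

groups = defaultdict(dict)
for exp in experiments.values():
    # Missing group keys are created automatically with an empty dict.
    groups[exp["GROUP/OBJECTIVE"]][exp["ID"]] = exp

print({gr: len(members) for gr, members in sorted(groups.items())})
# -> {'baseline': 2, 'reroute': 1}
```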
# STEP 2: Display groups
for gr in sorted(groups.keys()):
print("'{}' : {}".format(gr, len(groups[gr]) ))
for idx in groups[gr]:
print(" [{:04d}] '{}'".format(idx,theExps.experiments[idx]['LABEL']))
print(" '{}'".format(theExps.experiments[idx]['FILE']))
print()
# STEP 2: Display groups
for gr in sorted(groups.keys()):
display(HTML("<h1>'{}' : {}</h1>".format(gr, len(groups[gr]) )))
data = pd.DataFrame.from_dict(groups[gr], orient='index')
display(HTML(data.to_html()))
#for idx in groups[gr]:
# print(" [{:04d}] '{}'".format(idx,theExps.experiments[idx]['FILE']))
| notebooks/mutraff_display_experiment_catalog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Demo: Knowledge Graph Completion
#
# In this tutorial demo, we will use the Graph4NLP library to build a GNN-based knowledge graph completion model. The model consists of
#
# + graph embedding module (e.g., GGNN)
# + prediction module (e.g., DistMult decoder)
#
# We will use the built-in module APIs to build the model, and evaluate it on the Kinship dataset.
#
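For intuition, the DistMult decoder scores a triple $(h, r, t)$ with a trilinear product of the head, relation, and tail embeddings. A minimal numpy sketch of the scoring function (an illustration only, not the Graph4NLP implementation):

```python
import numpy as np

def distmult_score(e_h, w_r, e_t):
    """DistMult triple score: the trilinear product <e_h, w_r, e_t>."""
    return float(np.sum(e_h * w_r * e_t))

rng = np.random.default_rng(0)
dim = 4
e_h, e_t = rng.normal(size=dim), rng.normal(size=dim)
w_r = np.ones(dim)  # an identity-like relation: the score reduces to <e_h, e_t>

assert np.isclose(distmult_score(e_h, w_r, e_t), np.dot(e_h, e_t))
```

Because the relation vector only rescales each embedding dimension, DistMult is symmetric in head and tail, which is one reason it is often paired with a graph encoder such as GGNN.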
# ## Environment setup
#
# Please follow the instructions [here](https://github.com/graph4ai/graph4nlp_demo#environment-setup) to set up the environment. Please also run the following commands to install extra packages used in this demo.
# ```
# pip install spacy
# python -m spacy download en_core_web_sm
# pip install h5py
# pip install future
# ```
#
# This notebook was tested on :
#
# ```
# torch == 1.9.0
# torchtext == 0.10.0
# spacy == 3.0.8
# ```
# ## Data preprocessing for KGC
# + Run the preprocessing script for WN18RR and Kinship: ```sh kg_completion/preprocess.sh```
# + You can now run the model
# +
import torch
import numpy as np
import torch.backends.cudnn as cudnn
from evaluation import ranking_and_hits
from model import ConvE, Distmult, Complex, GGNNDistMult, GCNDistMult, GCNComplex
from spodernet.preprocessing.pipeline import DatasetStreamer
from spodernet.preprocessing.processors import JsonLoaderProcessors, Tokenizer, AddToVocab, SaveLengthsToState, StreamToHDF5, SaveMaxLengthsToState, CustomTokenizer
from spodernet.preprocessing.processors import ConvertTokenToIdx, ApplyFunction, ToLower, DictKey2ListMapper, ApplyFunction, StreamToBatch
from spodernet.utils.global_config import Config, Backends
from spodernet.utils.logger import Logger, LogLevel
from spodernet.preprocessing.batching import StreamBatcher
from spodernet.preprocessing.pipeline import Pipeline
from spodernet.preprocessing.processors import TargetIdx2MultiTarget
from spodernet.hooks import LossHook, ETAHook
from spodernet.utils.util import Timer
from spodernet.preprocessing.processors import TargetIdx2MultiTarget
import argparse
np.set_printoptions(precision=3)
cudnn.benchmark = True
# -
''' Preprocess knowledge graph using spodernet. '''
def preprocess(dataset_name, delete_data=False):
full_path = 'data/{0}/e1rel_to_e2_full.json'.format(dataset_name)
train_path = 'data/{0}/e1rel_to_e2_train.json'.format(dataset_name)
dev_ranking_path = 'data/{0}/e1rel_to_e2_ranking_dev.json'.format(dataset_name)
test_ranking_path = 'data/{0}/e1rel_to_e2_ranking_test.json'.format(dataset_name)
keys2keys = {}
keys2keys['e1'] = 'e1' # entities
keys2keys['rel'] = 'rel' # relations
keys2keys['rel_eval'] = 'rel' # relations
keys2keys['e2'] = 'e1' # entities
keys2keys['e2_multi1'] = 'e1' # entity
keys2keys['e2_multi2'] = 'e1' # entity
input_keys = ['e1', 'rel', 'rel_eval', 'e2', 'e2_multi1', 'e2_multi2']
d = DatasetStreamer(input_keys)
d.add_stream_processor(JsonLoaderProcessors())
d.add_stream_processor(DictKey2ListMapper(input_keys))
# process full vocabulary and save it to disk
d.set_path(full_path)
p = Pipeline(args.data, delete_data, keys=input_keys, skip_transformation=True)
p.add_sent_processor(ToLower())
p.add_sent_processor(CustomTokenizer(lambda x: x.split(' ')),keys=['e2_multi1', 'e2_multi2'])
p.add_token_processor(AddToVocab())
p.execute(d)
p.save_vocabs()
# process train, dev and test sets and save them to hdf5
p.skip_transformation = False
for path, name in zip([train_path, dev_ranking_path, test_ranking_path], ['train', 'dev_ranking', 'test_ranking']):
d.set_path(path)
p.clear_processors()
p.add_sent_processor(ToLower())
p.add_sent_processor(CustomTokenizer(lambda x: x.split(' ')),keys=['e2_multi1', 'e2_multi2'])
p.add_post_processor(ConvertTokenToIdx(keys2keys=keys2keys), keys=['e1', 'rel', 'rel_eval', 'e2', 'e2_multi1', 'e2_multi2'])
p.add_post_processor(StreamToHDF5(name, samples_per_file=1000, keys=input_keys))
p.execute(d)
def main(args, model_path):
if args.preprocess:
preprocess(args.data, delete_data=True)
input_keys = ['e1', 'rel', 'rel_eval', 'e2', 'e2_multi1', 'e2_multi2']
p = Pipeline(args.data, keys=input_keys)
p.load_vocabs()
vocab = p.state['vocab']
train_batcher = StreamBatcher(args.data, 'train', args.batch_size, randomize=True, keys=input_keys, loader_threads=args.loader_threads)
dev_rank_batcher = StreamBatcher(args.data, 'dev_ranking', args.test_batch_size, randomize=False, loader_threads=args.loader_threads, keys=input_keys)
test_rank_batcher = StreamBatcher(args.data, 'test_ranking', args.test_batch_size, randomize=False, loader_threads=args.loader_threads, keys=input_keys)
data = []
rows = []
columns = []
num_entities = vocab['e1'].num_token
num_relations = vocab['rel'].num_token
if args.preprocess:
for i, str2var in enumerate(train_batcher):
print("batch number:", i)
for j in range(str2var['e1'].shape[0]):
for k in range(str2var['e2_multi1'][j].shape[0]):
if str2var['e2_multi1'][j][k] != 0:
data.append(str2var['rel'][j].cpu().tolist()[0])
rows.append(str2var['e1'][j].cpu().tolist()[0])
columns.append(str2var['e2_multi1'][j][k].cpu().tolist())
else:
break
from graph4nlp.pytorch.data.data import GraphData, to_batch
KG_graph = GraphData()
KG_graph.add_nodes(num_entities)
for e1, rel, e2 in zip(rows, data, columns):
KG_graph.add_edge(e1, e2)
eid = KG_graph.edge_ids(e1, e2)[0]
KG_graph.edge_attributes[eid]['token'] = rel
torch.save(KG_graph, '{}/processed/KG_graph.pt'.format(args.data))
return
if args.model is None:
model = ConvE(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'conve':
model = ConvE(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'distmult':
model = Distmult(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'complex':
model = Complex(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'ggnn_distmult':
model = GGNNDistMult(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'gcn_distmult':
model = GCNDistMult(args, vocab['e1'].num_token, vocab['rel'].num_token)
elif args.model == 'gcn_complex':
model = GCNComplex(args, vocab['e1'].num_token, vocab['rel'].num_token)
else:
raise Exception("Unknown model!")
if args.model in ['ggnn_distmult', 'gcn_distmult', 'gcn_complex']:
graph_path = '{}/processed/KG_graph.pt'.format(args.data)
KG_graph = torch.load(graph_path)
if Config.cuda:
KG_graph = KG_graph.to('cuda')
else:
KG_graph = None
train_batcher.at_batch_prepared_observers.insert(1,TargetIdx2MultiTarget(num_entities, 'e2_multi1', 'e2_multi1_binary'))
eta = ETAHook('train', print_every_x_batches=args.log_interval)
train_batcher.subscribe_to_events(eta)
train_batcher.subscribe_to_start_of_epoch_event(eta)
train_batcher.subscribe_to_events(LossHook('train', print_every_x_batches=args.log_interval))
if Config.cuda:
model.cuda()
if args.resume:
model_params = torch.load(model_path)
print(model)
total_param_size = []
params = [(key, value.size(), value.numel()) for key, value in model_params.items()]
for key, size, count in params:
total_param_size.append(count)
print(key, size, count)
print(np.array(total_param_size).sum())
model.load_state_dict(model_params)
model.eval()
ranking_and_hits(model, test_rank_batcher, vocab, 'test_evaluation', kg_graph=KG_graph)
ranking_and_hits(model, dev_rank_batcher, vocab, 'dev_evaluation', kg_graph=KG_graph)
else:
model.init()
total_param_size = []
params = [value.numel() for value in model.parameters()]
print(params)
print(np.sum(params))
best_mrr = 0
opt = torch.optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.l2)
for epoch in range(args.epochs):
model.train()
for i, str2var in enumerate(train_batcher):
opt.zero_grad()
e1 = str2var['e1']
rel = str2var['rel']
e2_multi = str2var['e2_multi1_binary'].float()
# label smoothing
e2_multi = ((1.0-args.label_smoothing)*e2_multi) + (1.0/e2_multi.size(1))
pred = model.forward(e1, rel, KG_graph)
loss = model.loss(pred, e2_multi)
loss.backward()
opt.step()
train_batcher.state.loss = loss.cpu()
model.eval()
with torch.no_grad():
if epoch % 2 == 0 and epoch > 0:
dev_mrr = ranking_and_hits(model, dev_rank_batcher, vocab, 'dev_evaluation', kg_graph=KG_graph)
if dev_mrr > best_mrr:
best_mrr = dev_mrr
print('saving best model to {0}'.format(model_path))
torch.save(model.state_dict(), model_path)
if epoch % 2 == 0:
if epoch > 0:
ranking_and_hits(model, test_rank_batcher, vocab, 'test_evaluation', kg_graph=KG_graph)
# ### Config Setup
# +
parser = argparse.ArgumentParser(description='Link prediction for knowledge graphs')
parser.add_argument('--batch-size', type=int, default=128, help='input batch size for training (default: 128)')
parser.add_argument('--test-batch-size', type=int, default=128, help='input batch size for testing/validation (default: 128)')
parser.add_argument('--epochs', type=int, default=1000, help='number of epochs to train (default: 1000)')
parser.add_argument('--lr', type=float, default=0.003, help='learning rate (default: 0.003)')
parser.add_argument('--seed', type=int, default=1234, metavar='S', help='random seed (default: 1234)')
parser.add_argument('--log-interval', type=int, default=100, help='how many batches to wait before logging training status')
parser.add_argument('--data', type=str, default='kinship', help='Dataset to use: {FB15k-237, YAGO3-10, WN18RR, umls, nations, kinship}, default: FB15k-237')
parser.add_argument('--l2', type=float, default=0.0, help='Weight decay value to use in the optimizer. Default: 0.0')
parser.add_argument('--model', type=str, default='ggnn_distmult', help='Choose from: {conve, distmult, complex, ggnn_distmult, gcn_distmult, gcn_complex}')
parser.add_argument('--direction_option', type=str, default='undirected', help='Choose from: {undirected, bi_sep, bi_fuse}')
parser.add_argument('--embedding-dim', type=int, default=200, help='The embedding dimension (1D). Default: 200')
parser.add_argument('--embedding-shape1', type=int, default=20, help='The first dimension of the reshaped 2D embedding. The second dimension is infered. Default: 20')
parser.add_argument('--hidden-drop', type=float, default=0.25, help='Dropout for the hidden layer. Default: 0.25.')
parser.add_argument('--input-drop', type=float, default=0.2, help='Dropout for the input embeddings. Default: 0.2.')
parser.add_argument('--feat-drop', type=float, default=0.2, help='Dropout for the convolutional features. Default: 0.2.')
parser.add_argument('--lr-decay', type=float, default=0.995, help='Decay the learning rate by this factor every epoch. Default: 0.995')
parser.add_argument('--loader-threads', type=int, default=4, help='How many loader threads to use for the batch loaders. Default: 4')
parser.add_argument('--preprocess', action='store_true', help='Preprocess the dataset. Needs to be executed only once.')
parser.add_argument('--resume', action='store_true', help='Resume a model.')
parser.add_argument('--use-bias', action='store_true', help='Use a bias in the convolutional layer. Default: True')
parser.add_argument('--label-smoothing', type=float, default=0.1, help='Label smoothing value to use. Default: 0.1')
parser.add_argument('--hidden-size', type=int, default=9728, help='The side of the hidden layer. The required size changes with the size of the embeddings. Default: 9728 (embedding size 200).')
parser.add_argument('--channels', type=int, default=200, help='Number of channels for the convolutional layer. Default: 200.')
parser.add_argument('--kernel_size', type=int, default=5, help='Kernel size for the convolutional layer. Default: 5.')
# -
# ### If you are running the task for the first time, run it with the `--preprocess` flag:
args = parser.parse_args(args=['--data', 'kinship', '--model', 'ggnn_distmult', '--preprocess'])
# +
# parse console parameters and set global variables
Config.backend = 'pytorch'
Config.cuda = False
Config.embedding_dim = args.embedding_dim
model_name = '{2}_{0}_{1}'.format(args.input_drop, args.hidden_drop, args.model)
model_path = 'saved_models/{0}_{1}.model'.format(args.data, model_name)
torch.manual_seed(args.seed)
# -
args.preprocess
import warnings
warnings.filterwarnings('ignore')
# ### After preprocessing the kinship data, run:
main(args, model_path)
args = parser.parse_args(args=['--data', 'kinship', '--model', 'ggnn_distmult'])
main(args, model_path)
| KDD2021_demo/kg_completion/kgc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="wrszJmSoNUKG"
import numpy as np
from keras.models import Sequential
from keras.models import Model
from keras.layers import Input, Dense, LSTM, Dropout, GRU, Conv2D, MaxPooling2D, Flatten, Activation
from keras.layers.wrappers import TimeDistributed
from keras.optimizers import SGD
# + id="cbNYUUNfNaam"
training_data = np.load('data_with_label.npy', allow_pickle=True)
x_train=np.array([i[0] for i in training_data]).reshape(-1,224,672)
y_train=np.array([i[1] for i in training_data])
x_train=x_train/255
testing_data = np.load('data_with_label.npy', allow_pickle=True)
x_test=np.array([i[0] for i in testing_data]).reshape(-1,224,672)
y_test=np.array([i[1] for i in testing_data])
x_test=x_test/255
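The cell above reshapes each sample to a 224×672 array and rescales pixel values to [0, 1]. The same preprocessing on synthetic frames, since `data_with_label.npy` is not available here:

```python
import numpy as np

# Toy stand-in for the loaded samples: 3 greyscale frames of 224x672 uint8 pixels.
frames = np.random.randint(0, 256, size=(3, 224, 672), dtype=np.uint8)
labels = np.eye(10)[[0, 3, 7]]  # hypothetical one-hot labels over 10 classes

# Same steps as the cell above: reshape to (samples, 224, 672) and scale to [0, 1].
x = frames.reshape(-1, 224, 672).astype(np.float32) / 255.0

assert x.shape == (3, 224, 672)
assert 0.0 <= float(x.min()) and float(x.max()) <= 1.0
```

Scaling to [0, 1] keeps the inputs in a range where the `tanh` activations of the GRU layers below behave well.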
# + id="hX8s7mwGNadt"
# The GRU architecture
model = Sequential()
# First GRU layer with Dropout regularisation
model.add(GRU(units=50, return_sequences=True, input_shape=(224,672), activation='tanh'))
model.add(Dropout(0.2))
# Second GRU layer
model.add(GRU(units=50, return_sequences=True, activation='tanh'))
model.add(Dropout(0.2))
# Third GRU layer
model.add(GRU(units=50, return_sequences=True, activation='tanh'))
model.add(Dropout(0.2))
# Fourth GRU layer
model.add(GRU(units=50, activation='tanh'))
model.add(Dropout(0.2))
# The output layer
model.add(Dense(units=10))
# Compiling the RNN
model.compile(optimizer=SGD(lr=0.01, decay=1e-7, momentum=0.9, nesterov=False),loss='mean_squared_error',metrics=['accuracy'])
# Fitting to the training set
model.fit(x_train,y_train,epochs=500,batch_size=150, validation_data=(x_test, y_test))
| GRU/GRU_with_Keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mathematical and Computational Geophysics
#
# ## Exam
#
# ### November 23, 2021
#
# Before submitting this *notebook*, make sure it runs as expected.
# 1. Restart the kernel.
# - To do so, select from the main menu: Kernel$\rightarrow$Restart.
# 2. Fill in all the cells that say:
# - `YOUR CODE HERE` or
# - "YOUR ANSWER HERE"
# 3. Put your name in the following cell (and your collaborators' names, if applicable).
# 4. Once you have finished the exercise, click the Validate button and make sure there are no errors in the execution.
NAME = ""
COLLABORATORS = ""
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "fcbc180aa52d63e8f7c826e1c843d676", "grade": false, "grade_id": "cell-81b3f7692918ebba", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Unsteady heat convection-diffusion
# Consider the following problem:
#
# $$
# \begin{eqnarray*}
# \frac{\partial T}{\partial t} +
# u \frac{\partial T}{\partial x} -
# \kappa \frac{\partial^2 T}{\partial x^2} & = & 0 \\
# T(0, t) & = & 1 \qquad \text{for} \qquad 0 < t < T_{max} \\
# T(L, t) & = & 0 \qquad \text{for} \qquad 0 < t < T_{max} \\
# T(x, 0) & = & 0 \qquad \text{for} \qquad 0 < x \leq L
# \end{eqnarray*}
# $$
#
# <img src="conv03.png" width="300" align="middle">
#
# The analytical solution is the following:
#
# $$
# \displaystyle
# T(x,t) = 0.5 \left[\text{erfc}\left(\frac{x - ut}{2 \sqrt{\kappa t}}\right) +
# \exp\left(\frac{u x}{\kappa}\right)
# \text{erfc}\left(\frac{x + ut}{2 \sqrt{\kappa t}}\right) \right]
# $$
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "581f55e69b47bde67f1de29779399cfa", "grade": false, "grade_id": "cell-96f8d3f992674ea0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Implement the numerical solution with finite differences in Python.
#
# Use the following data:
#
# - $L = 2.5$ [m],
# - $T_{max} = 1$ [s]
# - $h_t = 0.002$ [s]
#
# The values of $u$ and $\kappa$ are defined further below.
#
# - $\kappa = 0.001$ [kg/m s],
#
# -
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-paper')
params = {'figure.figsize' : (10,7),
# 'text.usetex' : True,
'xtick.labelsize': 20,
'ytick.labelsize': 20,
'axes.labelsize' : 24,
'axes.titlesize' : 24,
'legend.fontsize': 12,
'lines.linewidth': 3,
'lines.markersize': 10,
'grid.color' : 'darkgray',
'grid.linewidth' : 0.5,
'grid.linestyle' : '--',
'font.family': 'DejaVu Serif',
}
plt.rcParams.update(params)
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "590b4b75ec930ff1795d4b2f972a2178", "grade": false, "grade_id": "cell-165dc1c9ecefae40", "locked": true, "schema_version": 3, "solution": false, "task": false}
def mesh(L,N):
x = np.linspace(0,L,N+2)
return (L / (N+1), x)
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "c516496aaf1840641cb6f79241625a5f", "grade": false, "grade_id": "cell-b0264611b455563d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Computing the matrix
# In the following function you must implement the computation of the system matrix so that the `scheme` parameter selects between the **Centered Differences** scheme and the **Upwind** scheme. You will therefore need to write code similar to the following:
#
# ```python
# if scheme == 'C': # Case: Centered Differences
# b = ...
# c = ...
# elif scheme == 'U': # Case: Upwind
# ...
# ```
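For orientation only (this is a generic illustration of the two schemes, not the exam solution, and it omits the time-step scaling handled inside `Laplaciano1D`), the off-diagonal weights of an implicit 1D advection-diffusion step for $u \geq 0$ typically look like this, with `b` multiplying the east neighbour $T_{i+1}$ and `c` the west neighbour $T_{i-1}$:

```python
def coefficients(u, kappa, h, scheme):
    """Off-diagonal weights for an implicit 1D advection-diffusion step, u >= 0."""
    if scheme == 'C':    # centered differences
        b = kappa / h**2 - u / (2 * h)
        c = kappa / h**2 + u / (2 * h)
    elif scheme == 'U':  # first-order upwind (flow in +x)
        b = kappa / h**2
        c = kappa / h**2 + u / h
    else:
        raise ValueError(scheme)
    return b, c

bC, cC = coefficients(1.0, 0.001, 0.025, 'C')
bU, cU = coefficients(1.0, 0.001, 0.025, 'U')
assert bC < bU  # upwinding keeps the east weight non-negative (extra numerical diffusion)
```

Note how, for a convection-dominated case, the centered east weight `b` can become negative, which is the source of the oscillations that the upwind scheme avoids.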
# + deletable=false nbgrader={"cell_type": "code", "checksum": "3ff64d41355b1b815b39b5e01132addc", "grade": false, "grade_id": "cell-124ebaaf18ef867e", "locked": false, "schema_version": 3, "solution": true, "task": false}
def Laplaciano1D(par, scheme):#N, h, ht, Gamma, rho, v):
u = par['u']
kappa = par['kappa']
N = par['N']
h = par['h']
ht = par['ht']
# YOUR CODE HERE
raise NotImplementedError()
a = b + c
A = np.zeros((N,N))
A[0,0] = a + 1
A[0,1] = -b
for i in range(1,N-1):
A[i,i] = a + 1
A[i,i+1] = -b
A[i,i-1] = -c
A[N-1,N-2] = -c
A[N-1,N-1] = a + 1
return A
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "13db4c3f448a3e1880213600260fc0b7", "grade": false, "grade_id": "cell-ed49ce764344aaee", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Computing the RHS
# Just as with the matrix, in the following function you must implement the cases to compute the RHS of the system using **Centered Differences** or **Upwind**, depending on the value of `scheme`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "dc68385ab2639b57c187fdef2b0d89a3", "grade": false, "grade_id": "cell-74858191b1e99842", "locked": false, "schema_version": 3, "solution": true, "task": false}
def RHS(par, T, scheme):
u = par['u']
kappa = par['kappa']
N = par['N']
h = par['h']
ht = par['ht']
T0 = par['BC'][0]
TL = par['BC'][1]
f = np.copy(T[1:N+1])
# YOUR CODE HERE
raise NotImplementedError()
f[0] += ht * c * T0
f[N-1] += ht * b * TL
return f
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "98c44e75ea92e4459d766878ccd09e81", "grade": false, "grade_id": "cell-02de29b43915f024", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Analytical solution
#
# Note that the analytical solution contains the special function $erfc(x)$, which is the <a href='https://mathworld.wolfram.com/Erfc.html'>complementary error function</a>. This function is available through the `special` library of `scipy` (include `from scipy import special`):
#
# ```python
# special.erfc( ... )
# ```
#
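Before using it, a quick sanity check of `special.erfc` (these identities follow from $erfc(x) = 1 - erf(x)$):

```python
from scipy import special

# erfc(0) = 1, erfc(x) -> 0 as x grows, and erfc(-x) = 2 - erfc(x).
assert special.erfc(0.0) == 1.0
assert abs(special.erfc(1.0) + special.erfc(-1.0) - 2.0) < 1e-12
assert special.erfc(10.0) < 1e-12
```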
# + deletable=false nbgrader={"cell_type": "code", "checksum": "95d94f63e549112a8e7dce9fb9b3f6c8", "grade": false, "grade_id": "cell-8fc577de434a8c95", "locked": false, "schema_version": 3, "solution": true, "task": false}
from scipy import special
def analyticSol(par, i, NP = 100):
L = par['L']
u = par['u']
kappa = par['kappa']
t = par['ht'] * i
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "8ff3a4d9c8c7ba3f67d4cfbc4384cd3a", "grade": false, "grade_id": "cell-04233116254155f6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Computing the numerical solution
#
# Since we are using the implicit method, in the following code you must include the calls to the functions that compute the matrix, the RHS, and the solution of the system.
#
# + deletable=false nbgrader={"cell_type": "code", "checksum": "4a3fa42febb58c0bce0e5dbff72acaf2", "grade": false, "grade_id": "cell-d98640b2b08aa0e7", "locked": false, "schema_version": 3, "solution": true, "task": false}
def numSol(par, T, scheme):
L = par['L']
N = par['N']
ht = par['ht']
Nt = par['Nt']
freq = par['freq']
error = []
x = np.linspace(0,L,N+2)
for i in range(1, Nt+1):
# YOUR CODE HERE
raise NotImplementedError()
if (i % freq == 0):
xa, Ta = analyticSol(par, i, N+2)
E = np.linalg.norm(Ta - T)
error.append(E)
etiqueta = 'Step = {:2.1f}, $||E||_2$ = {:5.6f}'.format(i*ht, E)
plt.plot(x, T, '-', lw = 1.5, label=etiqueta)
plt.plot(xa, Ta, '--', lw = 1.0, color='gray')
return error
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "aee31ad50fe282d53a95d864cf2bf9f7", "grade": false, "grade_id": "cell-b0a42611b8937a35", "locked": true, "schema_version": 3, "solution": false, "task": false}
def casos(u, kappa, N, scheme):
par = {}
par['L'] = 2.5 # m
par['kappa'] = kappa # kg / m.s
par['u'] = u # m/s
par['BC'] = (1.0, 0.0)
par['N'] = N # Number of unknowns
par['Tmax'] = 1.0
par['ht'] = 0.001
par['Nt'] = int(par['Tmax'] / par['ht'])
par['freq'] = 100
h, x = mesh(par['L'], par['N'])
par['h'] = h
N = par['N']
T0 = par['BC'][0]
TL = par['BC'][1]
T = np.zeros(N+2)
T[0] = T0
T[-1] = TL
xa, Ta = analyticSol(par, par['Nt'])
plt.figure(figsize=(10,5))
error = numSol(par, T, scheme)
plt.plot(xa,Ta, '--', lw=1.0, color='gray', label='Analytical')
plt.xlabel('x [m]')
plt.ylabel('T [$^o$C]')
plt.grid()
plt.legend(loc='upper right')
plt.show()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "af4a930889b9e13ae0e081041bc583c7", "grade": false, "grade_id": "cell-896c7ac8cedb1452", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Case 1.
#
# Compute the solution using the following data:
#
# - $u = 1.0$
# - $\alpha = 0.01$
# - $N = 100$
# - Scheme: Centered Differences
# + deletable=false nbgrader={"cell_type": "code", "checksum": "f0e5278eddb197718e88dc30fa967761", "grade": false, "grade_id": "cell-d1027f9daaa170b5", "locked": false, "schema_version": 3, "solution": true, "task": false}
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "cb09adee08c4ad664b6778da8a52f3c0", "grade": false, "grade_id": "cell-c7340d954614911f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Case 2.
#
# Compute the solution using the following data:
#
# - $u = 1.0$
# - $\alpha = 0.01$
# - $N = 100$
# - Scheme: Upwind
# + deletable=false nbgrader={"cell_type": "code", "checksum": "ed3ba4aecd5d3d5663242cbb059b4ca9", "grade": false, "grade_id": "cell-a4b9608165ab2b22", "locked": false, "schema_version": 3, "solution": true, "task": false}
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "0b7adc45b7ef9cc29c6060dee59f4002", "grade": false, "grade_id": "cell-4ded47513efe9d04", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Question 1:
# 1.1 Which of cases 1 and 2 provides a better solution? <br>
# 1.2 Why does this happen?
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "fe1b8bca02a56c46bbed441efba884ff", "grade": true, "grade_id": "cell-4763c56c0b979211", "locked": false, "points": 2, "schema_version": 3, "solution": true, "task": false}
# YOUR ANSWER HERE
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "8f6a1d5b56194a1c7c837dbb12b449d5", "grade": false, "grade_id": "cell-7611df817a616cc4", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Case 3.
#
# Compute the solution using the following data:
#
# - $u = 1.0$
# - $\alpha = 0.001$
# - $N = 100$
# - Scheme: Centered Differences
# + deletable=false nbgrader={"cell_type": "code", "checksum": "a88e55dc8490dafdd78e1276698b2ae2", "grade": false, "grade_id": "cell-d6081cc742754775", "locked": false, "schema_version": 3, "solution": true, "task": false}
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "96b99733a26272618cd1b8697b4edc85", "grade": false, "grade_id": "cell-16fe083aa3693b4d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Question 2:
# How could you improve the solution of case 3?
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "3e248e14d67189984499fd2c64a80a46", "grade": true, "grade_id": "cell-c114acd40810dc0e", "locked": false, "points": 1, "schema_version": 3, "solution": true, "task": false}
# YOUR ANSWER HERE
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "42b42c4b0ed82c2045b09512204ba374", "grade": false, "grade_id": "cell-5702c156bdceeab1", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Case 4.
#
# Compute the solution using the following data:
#
# - $u = 1.0$
# - $\alpha = 0.001$
# - $N = 100$
# - Scheme: Upwind
# + deletable=false nbgrader={"cell_type": "code", "checksum": "89b2105d65bb8c5b7b9959b30fd560aa", "grade": false, "grade_id": "cell-6bc6b8fcc95ba96b", "locked": false, "schema_version": 3, "solution": true, "task": false}
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "9700bb512ed1cd722b9bc8f2592f50d3", "grade": false, "grade_id": "cell-3cccee4773c8876f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Question 3:
# How could you improve the solution of case 4?
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "531da73f8aace16efb36f53d56574af5", "grade": true, "grade_id": "cell-1c81a620061b7836", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false}
# YOUR ANSWER HERE
| 1D_AdvDiff_NSta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#pip install spotipy --upgrade # Uncomment this and run it if you haven't installed spotipy before
# +
# Dependencies
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import json
import re, glob
import os, sys
from scipy import stats
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Import keys: set up a config file with ckey and skey. These are available if you go
# to https://developer.spotify.com/, click on Dashboard from the horizontal black menu,
# and log in with your normal user info. Click on "create an app" if you haven't yet
# (it doesn't matter what you call it). Then click into your project and you should
# see Client ID and Client Secret. Those are your ckey and skey.
from config import clientID, clientSEC
# +
# Setting up Spotify API info
client_credentials_manager = SpotifyClientCredentials(client_id=clientID, client_secret=clientSEC)
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
# -
# TIP: This next section assumes that you have already downloaded the csv files with the Top200 charts for the country you are working on:
#
# 1. Create a subfolder in the folder this notebook is located in called "input_files". Add the owid-covid-data.csv file there, you'll need that later. Then make another subfolder inside input_files called "spotify_top200_charts". Save the csv files you download there.
# 2. Go to https://spotifycharts.com
# 3. Choose the country you want to work on.
# 4. Download Weekly Top200 charts for 2019 and 2020, 1 chart per month. We agreed as a group to download the chart from the last week of each month, to keep things consistent. Save them in the "spotify_top200_charts" folder you set up.
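# As a quick sanity check of the date-extraction regex used below (the file name here is hypothetical, but follows the weekly-chart naming the pattern expects):

```python
import re

fname = "regional-us-weekly-2019-01-25.csv"  # hypothetical chart file name
# Grab the year and month that follow "...ly-" in the file name
(year, month), = re.findall(r"ly-(\d\d\d\d)-(\d\d)-\d\d", fname)
print(year, month)
```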
# +
# Create dataframe from weekly chart data
path = r"input_files/spotify_top200_charts/*.csv" # The path requires "*.csv" at the end of the pattern.
# This is to prevent the program from blowing up
# when it hits some kind of hidden file.
country_tracks_df = pd.read_csv(glob.glob(path)[0], header=1) # Sets up **main dataframe** with data from **FIRST** file
string = str(glob.glob(path)[0]) # in the folder.
year_month, = re.findall(r"ly-(\d\d\d\d)-(\d\d)-\d\d", string) # This line extracts the year and month from the
# **file name**
country_tracks_df['Year'] = year_month[0]
country_tracks_df['Month'] = year_month[1]
country_tracks_df['yyyy-mm'] = year_month[0] + "-" + year_month[1]
Tot_Streams1 = country_tracks_df['Streams'].sum() # Find out total streams in FIRST file in folder.
country_tracks_df['Stream %'] = country_tracks_df['Streams'] / Tot_Streams1 # New column with % of streams
for file in glob.glob(path)[1:]: # Now that you have the dataframe set up from the
temp_df = pd.read_csv(file, header=1) # first file in the folder, this iterates through
string = str(file) # remaining files
year_month, = re.findall(r"ly-(\d\d\d\d)-(\d\d)-\d\d", string)
#print (year_month)
Tot_Streams2 = temp_df['Streams'].sum()
    temp_df['Year'] = year_month[0]
    temp_df['Month'] = year_month[1]
    temp_df['yyyy-mm'] = year_month[0] + "-" + year_month[1]
    temp_df['Stream %'] = temp_df['Streams'] / Tot_Streams2
    country_tracks_df = pd.concat([country_tracks_df, # Appends the temporary dataframe to the end of the
                                   temp_df]) # main dataframe as new rows.
country_tracks_df = country_tracks_df.sort_values(['Year','Month']) # Sort the new dataframe by year and month
# You should get 4,800 rows (24 months x 200
# tracks per month)
country_tracks_df
# +
# Get Track IDs
track_names = country_tracks_df['Track Name'].to_list() # Set up list of tracks to iterate through
track_ids = [] # Empty list to record track IDs into
for track in track_names: # Heads up: with 4800 tracks to process, this takes
    song_results = sp.search(q=track, type='track', limit=1) # a while
try:
track_ids.append(song_results['tracks']['items'][0]['id']) # Prevents program from blowing up - few tracks
print (f"{track} song ID : {song_results['tracks']['items'][0]['id']}") # Just to let you know it's working
except IndexError: # lack track ids
track_ids.append(np.nan) # nan if nothing
# -
# TIP: for this next section, add an "output_files" subfolder to export into.
# +
# Add Track IDs to dataframe
country_tracks_df['Track ID'] = track_ids # Add new column with track IDs
# +
# Drop empty songs and export dataframe to csv to back it up
clean_country_tracks_df = country_tracks_df.dropna(how='any') # Use .dropna() to remove rows with missing data
clean_country_tracks_df.to_csv("output_files/1_tracks_with_track_ids.csv", index = False) # Back up to .csv
# +
# Continue from the backup csv file in case there is some kind of interruption to the notebook and you lose the
# data from the API calls.
country_track_ids_df = pd.read_csv("output_files/1_tracks_with_track_ids.csv")
country_track_ids_df
# +
# Use API again to get audio features
danceability = [] # Set up empty lists to store data in
energy = []
valence = []
tempo = []
for track in country_track_ids_df['Track ID']: # Heads up: this takes a long time
try:
feat_results = sp.audio_features([track])
danceability.append(feat_results[0]['danceability'])
energy.append(feat_results[0]['energy'])
valence.append(feat_results[0]['valence'])
tempo.append(feat_results[0]['tempo'])
print (f"{track} Valence Score: {feat_results[0]['valence']}") # Just to let you see it working
except TypeError: # Covers you in case there is missing data
danceability.append(np.nan)
energy.append(np.nan)
valence.append(np.nan)
tempo.append(np.nan)
# +
# Add audio features to dataframe
country_track_ids_df['Danceability'] = danceability # Add new columns with audio features
country_track_ids_df['Valence'] = valence
country_track_ids_df['Energy'] = energy
country_track_ids_df['Tempo'] = tempo
# Add new columns with product of % and each feature
country_track_ids_df['Danceability_Stream%'] = country_track_ids_df['Danceability'] * country_track_ids_df['Stream %']
country_track_ids_df['Valence_Stream%'] = country_track_ids_df['Valence'] * country_track_ids_df['Stream %']
country_track_ids_df['Energy_Stream%'] = country_track_ids_df['Energy'] * country_track_ids_df['Stream %']
country_track_ids_df['Tempo_Stream%'] = country_track_ids_df['Tempo'] * country_track_ids_df['Stream %']
# +
# Back up dataframe again to .csv
clean_country_track_ids_df = country_track_ids_df.dropna(how='any') # Use .dropna() to remove rows with missing data
clean_country_track_ids_df.to_csv("output_files/2_tracks_with_audio_features.csv", index=False) #Back up the dataframe to csv again
clean_country_track_ids_df
# +
# Continue from the backup csv file in case there is some kind of interruption to the notebook and you lose the
# data from the API calls.
country_tracks_data_df = pd.read_csv("output_files/2_tracks_with_audio_features.csv")
country_tracks_data_df.head()
# +
# Use groupby to get the stream-weighted average valence of the 200 songs in each month
# (summing the Stream%-weighted valence gives the weighted mean)
country_tracks_data_groupby = country_tracks_data_df.groupby(["Year", 'Month'], as_index=False)['Valence_Stream%'].sum()
country_tracks_data_groupby
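# To see why summing the Stream%-weighted valence yields a weighted average, here is a tiny synthetic example (made-up numbers, not chart data):

```python
import pandas as pd

# Two tracks in one hypothetical month: the stream shares (0.75 and 0.25)
# sum to 1, so summing valence * share gives the stream-weighted average.
df = pd.DataFrame({'Year': [2019, 2019], 'Month': ['01', '01'],
                   'Valence_Stream%': [0.8 * 0.75, 0.4 * 0.25]})
out = df.groupby(['Year', 'Month'], as_index=False)['Valence_Stream%'].sum()
print(out)
```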
# +
# Set up some basic plt formatting configurations
plt.rc('font', size=12)
plt.rc('axes', labelsize=15)
plt.rc('axes', titlesize=20)
# +
# Plot a comparison of 2019 2020 valence scores
# Set up lists to plot
valence_2019 = country_tracks_data_groupby[country_tracks_data_groupby['Year'] == 2019]
valence_2020 = country_tracks_data_groupby[country_tracks_data_groupby['Year'] == 2020].copy() # .copy() avoids a SettingWithCopyWarning on the next line
valence_2020.drop('Year', inplace=True, axis=1)
fig= plt.figure(figsize=(12,8)) # Set up figure size
fig.suptitle('SPOTIFY LISTENER VALENCE PREFERENCE BY MONTH (USA)') # Set up main title
y_axis = valence_2019['Valence_Stream%']
x_axis = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', # Set up x axis
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
plt.plot(x_axis, valence_2019['Valence_Stream%'], label="2019 Weighted Avg Valence", marker='o', color='darkred') # Plot 2019
plt.plot(x_axis, valence_2020['Valence_Stream%'], label="2020 Weighted Avg Valence", marker='o', color='steelblue') # Plot 2020
plt.xlabel('Months') # Set up axis titles
plt.ylabel('Valence Score')
plt.xlim(-0.75, len(x_axis)-0.25) # Set up axis limits
plt.ylim(0, 1)
plt.legend() # Include the legend
plt.show()
# +
# Compare valence scores with covid infection rate
covid_df = pd.read_csv("input_files/owid-covid-data.csv") # read the covid data file
country_covid_df = covid_df.loc[covid_df['location'] == 'United States'] # Filter for country of your choice
country_covid_df.head()
# +
# Filter data for 2020, and add a 'month' column
country_covid__2020_df = country_covid_df[country_covid_df.date.str.contains(r'2020.*')].copy() # .copy() avoids SettingWithCopyWarning when adding columns
country_covid__2020_df['Month'] = ''
country_covid__2020_df.head()
# +
# Extract the month from the 'date' column and add it to the new 'month' column, for sorting later
for index, row in country_covid__2020_df.iterrows():
    month, = re.findall(r"2020-(\d\d)-", row['date'])
country_covid__2020_df.at[index, 'Month'] = int(month)
country_covid__2020_df.head()
# +
# Create a groupby to get the sum of new cases in each month
country_covid__2020_groupby = country_covid__2020_df.groupby(['Month'], as_index=False)['new_cases'].sum()
country_covid__2020_groupby
# +
# TIP: This next section is to add missing months. In the case of New Zealand, there was no data for January
# For other countries, it might vary. Here's how I added January to the dataframe:
# No need to add january in Italy
# country_covid__2020_groupby.loc[-1] = [1, 0] # This adds a 1 (month of January) in the first column
# (index 0), in the last row of the dataframe.
# country_covid__2020_groupby.index = country_covid__2020_groupby.index + 1 # shifts the index
country_covid__2020_groupby = country_covid__2020_groupby.sort_index() # sorts by index
country_covid__2020_groupby = country_covid__2020_groupby.rename(columns={"new_cases": "New Cases"})
country_covid__2020_groupby
# +
# Merge the dataframes into one nice comparison dataframe to scatter plot
country_covid_valence_df = pd.merge(valence_2020, country_covid__2020_groupby, on="Month")
country_covid_valence_df
# +
# Add a new cases per million column
country_population = 331449281 # TIP: This is the population of the USA. Adjust for your country
country_covid_valence_df['New Cases Per Million'] = country_covid_valence_df['New Cases'] / country_population * 1000000
country_covid_valence_df
# +
# Line plot relationship between Valence scores and New Cases per Million with shared x axis and dual y axes
fig, ax1 = plt.subplots(figsize=(12,8)) # Set up subplot figure and size
fig.suptitle('USA: SPOTIFY LISTENER VALENCE PREFERENCE BY MONTH COMPARED TO NEW COVID CASES')
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', # Set up shared x axis
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
ax1.set_xlabel('Months')
ax1.set_ylabel('Average Valence Scores') # Set up first plot
ax1.set_ylim([.2, .8])
ax1_ydata = country_covid_valence_df['Valence_Stream%']
ax1.plot(months, ax1_ydata, label="Weighted Avg Valence Scores", marker='o', color='darkred')
ax2 = ax1.twinx() # Set up second plot
ax2.set_ylabel('New Cases Per Month')
ax2_ydata = country_covid_valence_df['New Cases Per Million']
ax2.set_ylim([0, ax2_ydata.max()+20])
ax2.plot(months, ax2_ydata, label="New Covid Cases Per Million", marker='o', color='steelblue')
lines, labels = ax1.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
plt.show()
# +
# This is a mini function that adds a qualitative label to the correlation r score in the scatter plot
def r_label(r):
abs_r = abs(r)
if abs_r >= .8 : return "Very Strong"
elif abs_r >= .6 : return "Strong"
elif abs_r >= .4: return "Moderate"
elif abs_r >= .2: return "Low"
else: return "Negligible"
# +
# Scatter plot relationship between Valence scores and New Cases per Million with a regression line
x_axis = country_covid_valence_df['Valence_Stream%'] # Set up axes
y_axis = country_covid_valence_df['New Cases Per Million']
slope, intercept, rvalue, pvalue, stderr = stats.linregress(x_axis, y_axis) # Get elements of regression equation
regress_values = x_axis * slope + intercept # Calculate regression values
plt.figure(figsize=(12, 8))
plt.title('USA: SPOTIFY LISTENER VALENCE PREFERENCE VS. NEW COVID CASES') # CHANGE TITLE TO REFLECT YOUR COUNTRY
plt.xlabel("Valence_Stream%") # Set x axis label (valence is on the x axis)
plt.ylabel("New Covid Cases Per Million") # Set y axis label
r = round(stats.pearsonr(x_axis, y_axis)[0],2) # Calculate correlation coefficient
rlabel = r_label(r) # Call function to create a label for the r number
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) # Regression formula
plt.scatter(x_axis, y_axis, facecolors="darkred", alpha=.5, # Plot the scatter chart
label=f"r = {r}\nCorrelation is {rlabel}\n{line_eq}" )
plt.plot(x_axis, regress_values, color="steelblue") # Plot the regression line
plt.legend() # Add the legend
plt.savefig("output_files/valence_vs_newcases.png") # Save the png file
plt.show()
# -
| usa_ken's_code/usa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # StationSim parametric study
# authors: <NAME>
# created: 2019-06-24
# version: 0.2 (jupyter)
# Here we'll show how to change parameters. And try a little profiling.
# ## Import
# _For more information about this part read the basic experiment._
from sys import path
path.append('..')
from stationsim.stationsim_model import Model
from time import strftime, time
id = strftime('%y%m%d_%H%M%S')
from numpy.random import seed
seed(1)
# `dict_to_csv()` converts a two-layer dictionary into a comma-separated string, useful for analysing several variables. `roundp()` is a significant-figure rounding method; use `sig_fig` to set the accuracy.
# _There is no need to understand this method._
def dict_to_csv(mydict, sig_fig=16):
roundp = lambda x,p: float(f'%.{p-1}e'%x)
csv_str, lines = '', []
for i,row in enumerate(mydict):
if i==0:
header = ', '.join(k for k,_ in mydict[row].items()) + ',\n'
line = ', '.join(f'{roundp(v, sig_fig)}' for _,v in mydict[row].items()) + f', {row}'
lines.append(line)
csv_str = header + '\n'.join(lines)
return csv_str
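# A quick look at the significant-figure trick inside `dict_to_csv`: format the number in scientific notation with `p-1` decimals, then convert back to float.

```python
# Significant-figure rounding: '%.2e' % 3.14159 -> '3.14e+00' -> 3.14
roundp = lambda x, p: float(f'%.{p-1}e' % x)
print(roundp(3.14159, 3))
print(roundp(12345, 2))
```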
# ## Pop,Sep Study
# Since runtime scales mainly with population, increasing separation can mitigate the interaction loss from a population reduction, as if studying a smaller corridor.
analytics = {}
for pop, sep in [(100, 5), (300, 3), (700, 2)]:
t = time()
model = Model(pop_total=pop, separation=sep, do_print=False)
for _ in range(model.step_limit):
model.step()
analytics[str(model.params_changed)] = {'Compute Time': time()-t, **model.get_analytics()}
# ### Results
print(dict_to_csv(analytics, 4))
# print(csv_str, file=open(f'{id}_pop_sep_study.csv', 'w'))
# ## Speeds Study
# The default number of speed steps is 3. This is the number of times an agent will slow down before wiggling. Increasing it increases the number of interactions/collisions but decreases the number of wiggles; it also increases compute time linearly.
# analytics = {}
for s in (9,5,2,1):
t = time()
model = Model(speed_steps=s, do_print=False)
for _ in range(model.step_limit):
model.step()
analytics[str(model.params_changed)] = {'Compute Time': time()-t, **model.get_analytics()}
# ### Results
# By not resetting the analytics dictionary we append the new tests and can compare a larger parameter space.
# Although we'd expect more speed checks to increase runtime, it doesn't seem to add much.
print(dict_to_csv(analytics, 4))
# print(csv_str, file=open(f'{id}_speeds_study.csv', 'w'))
# ## _Default Model_
# _The default model is designed to run quickly and efficiently. Hence setting a low population and high separation._
#
# _For more information about the speed of this model, check the profiling experiments._
| Projects/ABM_DA/experiments/StationSim parametric study.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## mapclassify Change Log Statistics
#
# This notebook generates the summary statistics for a package.
#
# It assumes you are running this under the `tools` directory at the toplevel of the package
#
#
# get date of last tag
from subprocess import Popen, PIPE
x, err = Popen('git log -1 --tags --simplify-by-decoration --pretty="%ai"| cat', stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True).communicate()
start_date = x.split()[0].decode('utf-8')
# today's date
import datetime
release_date = str(datetime.datetime.today()).split()[0]
package_name = 'mapclassify'
# This notebook will generate a file in the current directory with the name "changelog_VERSION.md". You can edit and append this on front of the CHANGELOG file for the package release.
# +
from __future__ import print_function
import os
import json
import re
import sys
import pandas
from datetime import datetime, timedelta
from time import sleep
from subprocess import check_output
try:
    from urllib import urlopen
except ImportError:
    from urllib.request import urlopen
import ssl
import yaml
context = ssl._create_unverified_context()
# -
CWD = os.path.abspath(os.path.curdir)
CWD
since_date = '--since="{start}"'.format(start=start_date)
since_date
since = datetime.strptime(start_date+" 0:0:0", "%Y-%m-%d %H:%M:%S")
since
# +
# get __version__
f = "../{package}/__init__.py".format(package=package_name)
with open(f, 'r') as initfile:
exec(initfile.readline())
# -
# ## Total commits by subpackage
cmd = ['git', 'log', '--oneline', since_date]
ncommits = len(check_output(cmd).splitlines())
ncommits
# ## List Contributors
# Some of our contributors have many aliases for the same identity. So, we've added a mapping to make sure that individuals are listed once (and only once).
# +
identities = {'<NAME>': ('ljwolf', '<NAME>'),
'<NAME>': ('<NAME>', '<NAME>', 'sjsrey', 'serge'),
'<NAME>': ('<NAME>', 'weikang9009'),
'<NAME>-Bel': ('<NAME>-Bel', 'darribas')
}
def regularize_identity(string):
string = string.decode()
for name, aliases in identities.items():
for alias in aliases:
if alias in string:
string = string.replace(alias, name)
if len(string.split(' '))>1:
string = string.title()
return string.lstrip('* ')
# -
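# A self-contained sketch of the alias-folding idea (hypothetical author lines and aliases, simplified from `regularize_identity` above):

```python
from collections import Counter

# Hypothetical author lines, as produced by `git log --format=* %aN`
authors = [b'* sjsrey', b'* serge', b'* ljwolf']
aliases = {'Serge': ('sjsrey', 'serge'), 'Levi': ('ljwolf',)}

def canonical(line):
    # Map any known alias to a single canonical name
    s = line.decode().lstrip('* ')
    for name, alts in aliases.items():
        if s in alts:
            return name
    return s

counts = Counter(canonical(a) for a in authors)
print(counts)
```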
author_cmd = ['git', 'log', '--format=* %aN', since_date]
from collections import Counter
# +
ncommits = len(check_output(cmd).splitlines())
all_authors = check_output(author_cmd).splitlines()
counter = Counter([regularize_identity(author) for author in all_authors])
# global_counter += counter
# counters.update({'.'.join((package,subpackage)): counter})
unique_authors = sorted(set(all_authors))
# -
unique_authors = counter.keys()
unique_authors
# ## Disaggregate by PR, Issue
from datetime import datetime, timedelta
ISO8601 = "%Y-%m-%dT%H:%M:%SZ"
PER_PAGE = 100
element_pat = re.compile(r'<(.+?)>')
rel_pat = re.compile(r'rel=[\'"](\w+)[\'"]')
# +
def parse_link_header(headers):
link_s = headers.get('link', '')
urls = element_pat.findall(link_s)
rels = rel_pat.findall(link_s)
d = {}
for rel,url in zip(rels, urls):
d[rel] = url
return d
def get_paged_request(url):
"""get a full list, handling APIv3's paging"""
results = []
while url:
#print("fetching %s" % url, file=sys.stderr)
f = urlopen(url)
results.extend(json.load(f))
links = parse_link_header(f.headers)
url = links.get('next')
return results
def get_issues(project="pysal/pysal", state="closed", pulls=False):
"""Get a list of the issues from the Github API."""
which = 'pulls' if pulls else 'issues'
url = "https://api.github.com/repos/%s/%s?state=%s&per_page=%i" % (project, which, state, PER_PAGE)
return get_paged_request(url)
def _parse_datetime(s):
"""Parse dates in the format returned by the Github API."""
if s:
return datetime.strptime(s, ISO8601)
else:
return datetime.fromtimestamp(0)
def issues2dict(issues):
"""Convert a list of issues to a dict, keyed by issue number."""
idict = {}
for i in issues:
idict[i['number']] = i
return idict
def is_pull_request(issue):
"""Return True if the given issue is a pull request."""
return 'pull_request_url' in issue
def issues_closed_since(period=timedelta(days=365), project="pysal/pysal", pulls=False):
"""Get all issues closed since a particular point in time. period
can either be a datetime object, or a timedelta object. In the
latter case, it is used as a time before the present."""
which = 'pulls' if pulls else 'issues'
if isinstance(period, timedelta):
period = datetime.now() - period
url = "https://api.github.com/repos/%s/%s?state=closed&sort=updated&since=%s&per_page=%i" % (project, which, period.strftime(ISO8601), PER_PAGE)
allclosed = get_paged_request(url)
# allclosed = get_issues(project=project, state='closed', pulls=pulls, since=period)
filtered = [i for i in allclosed if _parse_datetime(i['closed_at']) > period]
# exclude rejected PRs
if pulls:
filtered = [ pr for pr in filtered if pr['merged_at'] ]
return filtered
def sorted_by_field(issues, field='closed_at', reverse=False):
"""Return a list of issues sorted by closing date date."""
return sorted(issues, key = lambda i:i[field], reverse=reverse)
def report(issues, show_urls=False):
"""Summary report about a list of issues, printing number and title.
"""
# titles may have unicode in them, so we must encode everything below
if show_urls:
for i in issues:
role = 'ghpull' if 'merged_at' in i else 'ghissue'
print('* :%s:`%d`: %s' % (role, i['number'],
i['title'].encode('utf-8')))
else:
for i in issues:
print('* %d: %s' % (i['number'], i['title'].encode('utf-8')))
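# To see what `parse_link_header` extracts, here is a standalone run of the same two regexes on a hypothetical GitHub-style `Link` header:

```python
import re

element_pat = re.compile(r'<(.+?)>')
rel_pat = re.compile(r'rel=[\'"](\w+)[\'"]')

# Hypothetical paged-API Link header
link = ('<https://api.github.com/repositories/1/issues?page=2>; rel="next", '
        '<https://api.github.com/repositories/1/issues?page=5>; rel="last"')
# Pair each rel keyword with its URL, as parse_link_header does
d = dict(zip(rel_pat.findall(link), element_pat.findall(link)))
print(d['next'])
```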
# +
all_issues = {}
all_pulls = {}
total_commits = 0
#prj='pysal/libpysal'
prj = 'pysal/{package}'.format(package=package_name)
issues = issues_closed_since(since, project=prj,pulls=False)
pulls = issues_closed_since(since, project=prj,pulls=True)
issues = sorted_by_field(issues, reverse=True)
pulls = sorted_by_field(pulls, reverse=True)
n_issues, n_pulls = map(len, (issues, pulls))
n_total = n_issues + n_pulls
# -
issue_listing = []
for issue in issues:
entry = "{title} (#{number})".format(title=issue['title'],number=issue['number'])
issue_listing.append(entry)
pull_listing = []
for pull in pulls:
entry = "{title} (#{number})".format(title=pull['title'],number=pull['number'])
pull_listing.append(entry)
pull_listing
message = "We closed a total of {total} issues (enhancements and bug fixes) through {pr} pull requests".format(total=n_total, pr=n_pulls)
message = "{msg}, since our last release on {previous}.".format(msg=message, previous=str(start_date))
message
message += "\n\n## Issues Closed\n"
print(message)
issues = "\n".join([" - "+issue for issue in issue_listing])
message += issues
message += "\n\n## Pull Requests\n"
pulls = "\n".join([" - "+pull for pull in pull_listing])
message += pulls
print(message)
people = "\n".join([" - "+person for person in unique_authors])
print(people)
message +="\n\nThe following individuals contributed to this release:\n\n{people}".format(people=people)
print(message)
head = "# Changes\n\nVersion {version} ({release_date})\n\n".format(version=__version__, release_date=release_date)
print(head+message)
outfile = 'changelog.md'
with open(outfile, 'w') as of:
of.write(head+message)
| tools/gitcount.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="http://landlab.github.io"><img style="float: left" src="landlab_header.png"></a>
# # Welcome to the Landlab Notebooks
#
# This page provides an index to the Landlab Jupyter Notebooks.
#
# If you are not certain where to start, consider either:
# - [the Landlab Syllabus](tutorials/syllabus.ipynb) if you are interested in teaching yourself Landlab
# - [the Landlab teaching notebooks](teaching/welcome_teaching.ipynb) if you are an educator looking for tutorials to use in the classroom.
#
# ## Other useful links
# - [The Landlab Documentation](https://landlab.readthedocs.io/en/latest/)
# - [The Landlab code base](https://github.com/landlab/landlab)
#
# ## Notebooks by topic
#
# ### Introduction to python
# - [Introduction to Python and NumPy](tutorials/intro/Python_intro.ipynb) *Learn about:* The very basics of Python.
#
# ### Introduction to Landlab
# - [Introduction to Landlab: example model of fault-scarp degradation](tutorials/fault_scarp/landlab-fault-scarp.ipynb) A short overview of some of the things Landlab can do.
# - [Where to get info about Landlab](tutorials/where_to_get_info.ipynb)
# - [Introduction to Landlab: diffusion three ways](tutorials/diffusion_three_ways.ipynb)
# - [Introduction to the model grid object](tutorials/grid_object_demo/grid_object_demo.ipynb) Grid topology; how landlab represents data; connectivity of grid elements.
# - [Introduction to Landlab data fields](tutorials/fields/working_with_fields.ipynb) How Landlab stores spatial data on the grid; a little on naming conventions.
# - [Introduction to plotting output with Landlab](tutorials/plotting/landlab-plotting.ipynb) The basics of plotting with Landlab; combining matplotlib and out plots; the all-powerful ``imshow_grid()`` function.
# - [Introduction to using the Landlab component library](tutorials/component_tutorial/component_tutorial.ipynb) The basics of working with and coupling components, using *diffusion*, *stream power*, and a *storm generator* as examples.
# - [Using the gradient and flux-divergence functions](tutorials/gradient_and_divergence/gradient_and_divergence.ipynb) Landlab as solving environment for staggered grid finite difference differential approximations; functions available to help you do this.
# - [Mapping values from nodes to links](tutorials/mappers/mappers.ipynb) Options for getting data on links to nodes, nodes to links, etc.; min, max, and mean; upwinding and downwinding schemes; one-to-one, one-to-many, and many-to-one mappings.
# - Setting boundary conditions on Landlab grids (several tutorials): How Landlab conceptualises boundary conditions; various ways to interact and work with them.
# - [Raster perimeter](tutorials/boundary_conds/set_BCs_on_raster_perimeter.ipynb)
# - [Based on X-Y values](tutorials/boundary_conds/set_BCs_from_xy.ipynb)
# - [Watersheds](tutorials/boundary_conds/set_watershed_BCs_raster.ipynb)
# - [Voronoi](tutorials/boundary_conds/set_BCs_on_voronoi.ipynb)
# - [Reading DEMs into Landlab](tutorials/reading_dem_into_landlab/reading_dem_into_landlab.ipynb) Getting an ARC ESRI ASCII into Landlab; getting the boundary conditions set right.
# - [How to write a Landlab component](tutorials/making_components/making_components.ipynb) What makes up a Landlab Component Standard Interface; how to make one for your process model.
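# As a taste of the gradient and flux-divergence tutorial listed above, the staggered-grid idea can be sketched with plain NumPy — this illustrates only the concept, not Landlab's actual API:

```python
import numpy as np

# node values on a tiny 1-D "grid"; gradients live on the links between nodes
z = np.array([0.0, 1.0, 3.0, 6.0])
dx = 1.0
grad_at_links = np.diff(z) / dx          # one value per link: [1, 2, 3]
flux = -0.1 * grad_at_links              # e.g. linear diffusion, q = -D dz/dx
div_at_core_nodes = np.diff(flux) / dx   # flux divergence back on interior nodes
print(grad_at_links, div_at_core_nodes)
```

# Landlab's `calc_grad_at_link` and `calc_flux_div_at_node` generalize this pattern to 2-D raster and Voronoi grids.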
#
# ### Notebooks about components, models, or utilities
#
# - Flow Direction and Accumulation
# - [Introduction to the FlowDirector Components](tutorials/flow_direction_and_accumulation/the_FlowDirectors.ipynb)
# - [Introduction to the FlowAccumulator Component](tutorials/flow_direction_and_accumulation/the_FlowAccumulator.ipynb)
# - [Comparison of FlowDirector Components](tutorials/flow_direction_and_accumulation/compare_FlowDirectors.ipynb)
# - Flexure
# - [Introduction](tutorials/flexure/flexure_1d.ipynb)
# - [Multiple loads](tutorials/flexure/lots_of_loads.ipynb)
# - [OverlandFlow](tutorials/overland_flow/overland_flow_driver.ipynb)
# - [Coupled rainfall runoff model with OverlandFlow](tutorials/overland_flow/coupled_rainfall_runoff.ipynb)
# - [Diffusion, stream power, and the storm generator](tutorials/component_tutorial/component_tutorial.ipynb)
# - Ecohydrology (these components not yet updated for v2.0)
# - [Ecohydrology Model on Flat Domain](tutorials/ecohydrology/cellular_automaton_vegetation_flat_surface/cellular_automaton_vegetation_flat_domain.ipynb)
# - [Ecohydrology Model on Actual Landscape](tutorials/ecohydrology/cellular_automaton_vegetation_DEM/cellular_automaton_vegetation_DEM.ipynb)
# - [Spatially variable lithology: Lithology and Litholayers](tutorials/lithology/lithology_and_litholayers.ipynb)
# - [NormalFault](tutorials/normal_fault/normal_fault_component_tutorial.ipynb)
# - [Flow distance utility](tutorials/flow__distance_utility/application_of_flow__distance_utility.ipynb)
# - [TransportLengthHillslopeDiffuser](tutorials/transport-length_hillslope_diffuser/TLHDiff_tutorial.ipynb)
# - [Groundwater Hydrology](tutorials/groundwater/groundwater_flow.ipynb)
#
# ### Teaching Notebooks
#
# - [Quantifying river channel evolution with Landlab](teaching/geomorphology_exercises/channels_streampower_notebooks/stream_power_channels_class_notebook.ipynb)
# - [Modeling Hillslopes and Channels with Landlab](teaching/geomorphology_exercises/drainage_density_notebooks/drainage_density_class_notebook.ipynb)
# - [Linear diffusion exercise with Landlab](teaching/geomorphology_exercises/hillslope_notebooks/hillslope_diffusion_class_notebook.ipynb)
# - [Using Landlab to explore a diffusive hillslope in the piedmont of North Carolina](teaching/geomorphology_exercises/hillslope_notebooks/north_carolina_piedmont_hillslope_class_notebook.ipynb)
# - [Exploring rainfall driven hydrographs with Landlab](teaching/surface_water_hydrology_exercises/overland_flow_notebooks/hydrograph_class_notebook.ipynb)
| notebooks/welcome.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="qjRnB7H8r7hj" colab_type="code" colab={}
import pandas as pd
import numpy as np
from google.colab import files
uploaded = files.upload()
# + id="veuijyKbsQoL" colab_type="code" colab={}
data_pd = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn.csv', index_col=False)
df = data_pd
for col_name in df.columns:
if(df[col_name].dtype == 'object'):
df[col_name]= df[col_name].astype('category')
df[col_name] = df[col_name].cat.codes
# np.printoptions does not affect how a pandas Index prints; list() shows every column
print(list(df.columns))
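# The loop above relies on pandas category codes; a toy illustration of how string levels map to integers (codes follow the sorted category order):

```python
import pandas as pd

s = pd.Series(["Yes", "No", "Yes", "No phone service"]).astype("category")
# categories are sorted: ['No', 'No phone service', 'Yes'] -> codes 0, 1, 2
print(list(s.cat.codes))  # [2, 0, 2, 1]
```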
# + id="L0BmS0F8wbe7" colab_type="code" colab={}
# customerID is a unique identifier with no predictive value, so it is excluded
features_cols = ['gender', 'SeniorCitizen', 'Partner', 'Dependents',
'tenure', 'PhoneService', 'MultipleLines', 'InternetService',
'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport',
'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling',
'PaymentMethod', 'MonthlyCharges', 'TotalCharges']
X = df[features_cols]
Y = df.Churn
# + id="gW99Abb2_4nb" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=1)
# + id="0TO-ZTC2ALDa" colab_type="code" colab={}
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
model = DecisionTreeClassifier()
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)
pd.set_option('display.max_columns', None)
print(Y_pred[0:5])
from sklearn import metrics
print("Accuracy:",(round(metrics.accuracy_score(Y_test,Y_pred) * 100)), "%")
print("Confusion Matrix For UNPRUNED TREE:")
metrics.confusion_matrix(Y_test, Y_pred)
# + id="F4NhAbmcEeD6" colab_type="code" colab={}
#Visualizing DecisionTree
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in modern scikit-learn
from IPython.display import Image
import pydotplus
dot_data = StringIO()
export_graphviz(model, out_file=dot_data,
filled=True, rounded=True,
special_characters=True,feature_names = features_cols,class_names=['0','1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('telco.png')
Image(graph.create_png())
# + id="7LJP_FjeFPdH" colab_type="code" colab={}
#Pruning the tree
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
model = DecisionTreeClassifier(criterion='entropy', max_depth=3)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)
pd.set_option('display.max_columns', None)
#print(X_test[0:19])
print(Y_pred[0:5])
from sklearn import metrics
print("Accuracy:",(round(metrics.accuracy_score(Y_test,Y_pred) * 100)), "%")
print("Confusion Matrix For PRUNED TREE:")
metrics.confusion_matrix(Y_test, Y_pred)
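# Capping `max_depth` is one way to prune; scikit-learn also offers minimal cost-complexity pruning via `ccp_alpha`. A self-contained sketch on synthetic data (not the Telco set):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
full = DecisionTreeClassifier(random_state=1).fit(X, y)
# the pruning path lists the effective alphas at which subtrees collapse
path = full.cost_complexity_pruning_path(X, y)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]  # pick a mid-range alpha
pruned = DecisionTreeClassifier(random_state=1, ccp_alpha=alpha).fit(X, y)
print(full.tree_.node_count, pruned.tree_.node_count)  # pruned tree is smaller
```

# In practice you would choose `ccp_alpha` by cross-validation rather than picking the midpoint.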
# + id="DI3_ysMPHFUZ" colab_type="code" colab={}
#Visualizing DecisionTree-Prunned
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in modern scikit-learn
from IPython.display import Image
import pydotplus
dot_data = StringIO()
export_graphviz(model, out_file=dot_data,
filled=True, rounded=True,
special_characters=True,feature_names = features_cols,class_names=['0','1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('telco.png')
Image(graph.create_png())
| Individual_Algorithms/DecisionTreeClassifier(Telco_Dataset).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Drift and Model Performance Dashboards for Iris dataset
# +
import pandas as pd
import numpy as np
from sklearn import datasets, model_selection, neighbors
from evidently.dashboard import Dashboard
from evidently.tabs import DriftTab, CatTargetDriftTab, ClassificationPerformanceTab
# -
# ## Iris data
iris = datasets.load_iris()
iris_frame = pd.DataFrame(iris.data, columns = iris.feature_names)
# ## Data drift report
iris_data_drift_report = Dashboard(iris_frame, iris_frame, column_mapping = None, tabs=[DriftTab])
iris_data_drift_report.show()
iris_data_drift_report.save('reports/iris_data_drift.html')
# ## Data and Target drift reports
iris_frame['target'] = iris.target
iris_data_and_target_drift_report = Dashboard(iris_frame[:75], iris_frame[75:],
column_mapping = None, tabs=[DriftTab, CatTargetDriftTab])
iris_data_and_target_drift_report.show()
iris_data_and_target_drift_report.save('reports/iris_data_and_target_drift.html')
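# The `[:75]`/`[75:]` split above deliberately induces target drift: `load_iris` returns rows ordered by class, so the two halves see different labels. A quick check:

```python
import numpy as np
from sklearn import datasets

target = datasets.load_iris().target
print(np.unique(target[:75]), np.unique(target[75:]))  # [0 1] and [1 2]
```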
# ## Model Performance report
iris_frame = pd.DataFrame(iris.data, columns = iris.feature_names)
reference, production, y_train, y_test = model_selection.train_test_split(iris_frame,
iris.target,
random_state=0)
model = neighbors.KNeighborsClassifier(n_neighbors=1)
model.fit(reference, y_train)
train_predictions = model.predict(reference)
test_predictions = model.predict(production)
# +
reference['target'] = y_train
reference['prediction'] = train_predictions
production['target'] = y_test
production['prediction'] = test_predictions
# +
reference.target = reference.target.apply(lambda x: iris.target_names[x])
reference.prediction = reference.prediction.apply(lambda x: iris.target_names[x])
production.target = production.target.apply(lambda x: iris.target_names[x])
production.prediction = production.prediction.apply(lambda x: iris.target_names[x])
# +
iris_column_mapping = {}
iris_column_mapping['target'] = 'target'
iris_column_mapping['prediction'] = 'prediction'
iris_column_mapping['numerical_features'] = iris.feature_names
# -
iris_drift = Dashboard(reference, production, column_mapping = iris_column_mapping,
tabs=[ClassificationPerformanceTab])
iris_drift.show()
iris_drift.save('reports/iris_classification_performance_test.html')
| evidently/examples/iris_data_drift.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
"""
Format Dataset: (list of lists)
[
[context_0, context_1, ..., context_N, response], # sample_0
[context_0, context_1, ..., context_N, response], # sample_1
...,
[context_0, context_1, ..., context_N, response] # sample_L
]
"""
# +
import numpy as np
import pandas as pd
import torch
from sklearn.model_selection import train_test_split
# tokenizer
from transformers import (
MODEL_WITH_LM_HEAD_MAPPING,
WEIGHTS_NAME,
AdamW,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
PreTrainedModel,
PreTrainedTokenizer,
get_linear_schedule_with_warmup,
)
# +
PATH_DATASET = "0.0. prepare_dataset.csv"
# Model
DialogGPT_MODEL = "microsoft/DialoGPT-small"
# -
# Instance Model & Tokenizer
tokenizer = AutoTokenizer.from_pretrained(DialogGPT_MODEL)
df = pd.read_csv(PATH_DATASET)
df.head(5)
TEST_SIZE = 0.1
# Split Dataset in train & test
df_train, df_val = train_test_split(df,test_size=TEST_SIZE, random_state=0)
# +
# transform dataset
def construct_conv(row, tokenizer, eos = True):
""" Add EOS token to each sentecenes from row --> tokenized ---> flatten tokenized row
Args:
row (list of sentences): the first N elements is the context and the last element is the response or target
tokenizer (object): tokenizer from respective model
eos (bool, optional): end of sentence. Defaults to True.
Returns:
(list of ints): [len(flatten(row))] (shape is variable because not have padding) flatten row tokenized
>>>> [651,5513,86,24905,287,...,640,284,651,5513,86,24905] --> 2495=eos_token
"""
flatten = lambda l: [item for sublist in l for item in sublist]
conv = [tokenizer.encode(x) + [tokenizer.eos_token_id] for x in row]
conv = flatten(conv)
return conv
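# `construct_conv` can be exercised without downloading DialoGPT by substituting a stub tokenizer (the token ids below are toy values, purely to show the flattening):

```python
class StubTokenizer:
    eos_token_id = 99
    def encode(self, text):
        return [len(word) for word in text.split()]  # toy "token ids"

flatten = lambda l: [item for sublist in l for item in sublist]
row = ["hi there", "ok"]  # context turn + response
tok = StubTokenizer()
conv = flatten([tok.encode(x) + [tok.eos_token_id] for x in row])
print(conv)  # [2, 5, 99, 2, 99] -- each sentence followed by the EOS id
```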
import os
import pickle
from torch.utils.data import Dataset
import logging
logger = logging.getLogger(__name__)
class ConversationDataset(Dataset):
def __init__(self, tokenizer, args, df, block_size=512):
block_size = block_size - (tokenizer.max_len - tokenizer.max_len_single_sentence)
directory = "cached"
cached_features_file = os.path.join(
directory, args.model_type + "_cached_lm_" + str(block_size)
)
if os.path.exists(cached_features_file) and not args.overwrite_cache:
logger.info("Loading features from cached file %s", cached_features_file)
with open(cached_features_file, "rb") as handle:
self.examples = pickle.load(handle)
else:
logger.info("Creating features from dataset file at %s", directory)
self.examples = []
for _, row in df.iterrows():
conv = construct_conv(row, tokenizer)
self.examples.append(conv)
logger.info("Saving features into cached file %s", cached_features_file)
with open(cached_features_file, "wb") as handle:
pickle.dump(self.examples, handle, protocol=pickle.HIGHEST_PROTOCOL)
def __len__(self):
return len(self.examples)
def __getitem__(self, item):
return torch.tensor(self.examples[item], dtype=torch.long)
# +
model_type = 'gpt2'
block_size=512
block_size = block_size - (tokenizer.max_len - tokenizer.max_len_single_sentence)
directory = "cached"
cached_features_file = os.path.join(
directory, model_type + "_cached_lm_" + str(block_size)
)
# -
tokenizer.max_len
| 0.1. Transform_dataset_to_train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import this app's dependencies.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.sql import Row, SparkSession
from IPython import display
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# From the PySpark Streaming Programming Guide at https://spark.apache.org/docs/latest/streaming-programming-guide.html#dataframe-and-sql-operations. This is the recommended way for each cluster node to get the SparkSession.
def getSparkSessionInstance(sparkConf):
"""Spark Streaming Programming Guide's recommended method
for getting an existing SparkSession or creating a new one."""
if ("sparkSessionSingletonInstance" not in globals()):
globals()["sparkSessionSingletonInstance"] = SparkSession \
.builder \
.config(conf=sparkConf) \
.getOrCreate()
return globals()["sparkSessionSingletonInstance"]
# Function to display a Seaborn barplot based on the Spark DataFrame it receives.
def display_barplot(spark_df, x, y, time, scale=2.0, size=(16, 9)):
"""Displays a Spark DataFrame's contents as a bar plot."""
df = spark_df.toPandas()
# remove prior graph when new one is ready to display
display.clear_output(wait=True)
print(f'TIME: {time}')
# create and configure a Figure containing a Seaborn barplot
plt.figure(figsize=size)
sns.set(font_scale=scale)
barplot = sns.barplot(data=df, x=x, y=y,
palette=sns.color_palette('cool', 20))
# rotate the x-axis labels 90 degrees for readability
for item in barplot.get_xticklabels():
item.set_rotation(90)
plt.tight_layout()
plt.show()
# Function count_tags is called for every RDD to summarize the hashtag counts in that RDD, add them to the existing totals, then display an updated top-20 barplot.
def count_tags(time, rdd):
"""Count hashtags and display top-20 in descending order."""
try:
# get SparkSession
spark = getSparkSessionInstance(rdd.context.getConf())
# map hashtag string-count tuples to Rows
rows = rdd.map(
lambda tag: Row(hashtag=tag[0], total=tag[1]))
# create a DataFrame from the Row objects
hashtags_df = spark.createDataFrame(rows)
# create a temporary table view for use with Spark SQL
hashtags_df.createOrReplaceTempView('hashtags')
# use Spark SQL to get the top 20 hashtags in descending order
top20_df = spark.sql(
"""select hashtag, total
from hashtags
order by total desc, hashtag asc
limit 20""")
display_barplot(top20_df, x='hashtag', y='total', time=time)
except Exception as e:
print(f'Exception: {e}')
# The main application code sets up Spark streaming to read text from the `starttweetstream.py` script on localhost port 9876 and specifies how to process the tweets.
sc = SparkContext()
ssc = StreamingContext(sc, 10)
ssc.checkpoint('hashtagsummarizer_checkpoint')
stream = ssc.socketTextStream('localhost', 9876)
tokenized = stream.flatMap(lambda line: line.split())
mapped = tokenized.map(lambda hashtag: (hashtag, 1))
hashtag_counts = mapped.updateStateByKey(
lambda counts, prior_total: sum(counts) + (prior_total or 0))
hashtag_counts.foreachRDD(count_tags)
ssc.start() # start the Spark streaming
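# The `updateStateByKey` lambda can be sanity-checked in isolation: it adds a batch's counts to the running total, where the prior total is `None` on a key's first batch:

```python
update = lambda counts, prior_total: sum(counts) + (prior_total or 0)
print(update([1, 1, 1], None))  # first batch for a hashtag -> 3
print(update([1, 1], 3))        # later batch adds to the total -> 5
```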
# +
#ssc.awaitTermination() # wait for the streaming to finish
# +
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
| examples/ch16/SparkHashtagSummarizer/hashtagsummarizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + active=""
# This IPython Notebook contains a simple example of the multi_line function.
#
# To clear all previously rendered cell outputs, select from the menu:
#
# Cell -> All Output -> Clear
# -
import numpy as np
from scipy.integrate import odeint
from bokeh.plotting import figure, show, output_notebook
sigma = 10
rho = 28
beta = 8.0/3
theta = 3 * np.pi / 4
def lorenz(xyz, t):
x, y, z = xyz
x_dot = sigma * (y - x)
y_dot = x * rho - x * z - y
z_dot = x * y - beta* z
return [x_dot, y_dot, z_dot]
initial = (-10, -7, 35)
t = np.arange(0, 100, 0.001)
solution = odeint(lorenz, initial, t)
x = solution[:, 0]
y = solution[:, 1]
z = solution[:, 2]
xprime = np.cos(theta) * x - np.sin(theta) * y
colors = ["#C6DBEF", "#9ECAE1", "#6BAED6", "#4292C6", "#2171B5", "#08519C", "#08306B",]
output_notebook()
p = figure(title="lorenz example")
p.multi_line(np.array_split(xprime, 7), np.array_split(z, 7),
line_color=colors, line_alpha=0.8, line_width=1.5)
show(p)
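# A quick sanity check of the ODE right-hand side: at the classic fixed point C+ = (sqrt(beta*(rho-1)), sqrt(beta*(rho-1)), rho-1), all three derivatives vanish:

```python
import math

sigma, rho, beta = 10, 28, 8.0 / 3

def lorenz(xyz, t):
    x, y, z = xyz
    return [sigma * (y - x), x * rho - x * z - y, x * y - beta * z]

c = math.sqrt(beta * (rho - 1))
print(lorenz((c, c, rho - 1), 0))  # ~[0.0, 0.0, 0.0]
```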
| examples/plotting/notebook/lorenz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# _This is devoted to detecting non-human users in our database_
# From http://www.erinshellman.com/bot-or-not/
#
# Follower distributions
#
# Fast-forward to clean, well-formatted data and it doesn’t take long to find fishiness. On average, bots follow 1400 people whereas humans follow 500. Bots are similarly strange in their distribution of followers. Humans have a fairly uniform distribution of followers. Some people are popular, some not so much, and many in between. Conversely, these bots are extremely unpopular with an average of a measly 28 followers.
#
# Lexical diversity
#
# Again these bots look strange. Humans have a beautiful, almost textbook normal distribution of diversities centered at 0.70. Bots on the other hand have more mass at the extremes, especially towards one. A lexical diversity of one means that every word in the document is unique, implying that bots are either not tweeting much, or are tweeting random strings of text.
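# Lexical diversity here means unique words divided by total words; a minimal sketch of the metric:

```python
def lexical_diversity(text):
    """Fraction of words in the text that are unique (1.0 = no repeats)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

print(lexical_diversity("the cat sat on the mat"))  # 5 unique / 6 total ~= 0.83
```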
# +
import pandas as pd
#Plotting
# %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
# %cd twitterproject
# inject config value (on command line would've been --config=data-analysis)
import sys
args = ['--config', 'laptop-mining']
old_sys_argv = sys.argv
sys.argv = [old_sys_argv[0]] + args
import environment
# +
from TwitterDatabase.Repositories import DataRepositories as DR
from TwitterDatabase.DatabaseAccessObjects import DataConnections as DC
from TwitterDatabase.Models.WordORM import Word
from TwitterDatabase.Models.TweetORM import Users as User
from TwitterDatabase.Models.TweetORM import Tweet
from DataAnalysis.SearchTools.WordMaps import get_adjacent_word_counts, get_adjacent_words, get_user_ids_for_word
EXP_TERMS_FILEPATH = '%s/experimental-terms.xlsx' % environment.EXPERIMENTS_FOLDER
IDS_FILEPATH = "%s/temp_output/user-ids.xlsx" % environment.LOG_FOLDER_PATH
# -
# # Bot detection on users
# ## Load data
dao = DC.MySqlConnection(environment.CREDENTIAL_FILE)
data= pd.read_sql_query("select userID, friends_count, followers_count from users", dao.engine, index_col='userID')
# Because someone set this field to string
data.followers_count = data.followers_count.astype(int)
data.describe()
# ## Prune the data
# +
MIN_FRIENDS = 1
MIN_FOLLOWERS =0
# cutoff the top 1%
MAX_PERCENTILE = 0.99
start_count = len(data)
# Trim by number of friends
friends_cutoff = data.friends_count.quantile(MAX_PERCENTILE)
data = data[data.friends_count.between(MIN_FRIENDS, friends_cutoff, inclusive=True)]
# Trim by number of followers
followers_cutoff = data.followers_count.quantile(MAX_PERCENTILE)
data = data[data.followers_count.between(MIN_FOLLOWERS, followers_cutoff, inclusive=True)]
print("Cutoff for friend count: %s \nCutoff for follower count: %s \nRemoved %s users" % (friends_cutoff, followers_cutoff, start_count - len(data)))
# -
# ### Number of people the user follows (friends)
sns.distplot(data['friends_count'])
sns.violinplot(data.friends_count)
# ### How many people follow the user
sns.distplot(data.followers_count)
sns.violinplot(data.followers_count)
data.sort_values('followers_count', ascending=True)[:10]
# ## Distribution of status updates
#
# +
query = "SELECT userID, screen_name, statuses_count FROM users ORDER BY statuses_count DESC"
freq = pd.read_sql_query(query, dao.engine, index_col='userID')
# -
sns.distplot(freq.statuses_count)
freq.statuses_count.describe()
for i in range(80, 100):
j = round(i * 0.01,2)
q = freq.statuses_count.quantile(j)
print("%s th : %s" % (i, int(q)))
# ## Calculate relationships
# +
# Subtract followers from friends
data['friends_less_followers'] = data.friends_count - data.followers_count
# Ratio of friends to followers (0 where the user has no followers)
data['ratio_friends_followers'] = (
    data.friends_count / data.followers_count
).where(data.followers_count != 0, 0)
# -
data.describe()
data[:10]
# ### Difference between friends and followers
data.sort_values('friends_less_followers', ascending=True)[:10]
sns.violinplot(data.friends_less_followers)
sns.distplot(data.friends_less_followers)
# ### Ratio of friends to followers
sns.violinplot(data.ratio_friends_followers)
data.sort_values('ratio_friends_followers', ascending=False)[:10]
# # Bot detection at tweet level
#
# Finding bots based only on the data in tweets
# ## New mining (where tweet stores user data)
| DataAnalysis/Notebooks/.ipynb_checkpoints/Bot removal-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install git+https://github.com/microsoft/coax.git@main
# %load_ext tensorboard
# %tensorboard --logdir ./data/tensorboard
# +
import gym
import jax
import coax
import haiku as hk
import jax.numpy as jnp
from numpy import prod
import optax
# the name of this script
name = 'td3'
# the Pendulum MDP
env = gym.make('Pendulum-v0')
env = coax.wrappers.TrainMonitor(env, name=name, tensorboard_dir=f"./data/tensorboard/{name}")
def func_pi(S, is_training):
seq = hk.Sequential((
hk.Linear(8), jax.nn.relu,
hk.Linear(8), jax.nn.relu,
hk.Linear(8), jax.nn.relu,
hk.Linear(prod(env.action_space.shape), w_init=jnp.zeros),
hk.Reshape(env.action_space.shape),
))
mu = seq(S)
return {'mu': mu, 'logvar': jnp.full_like(mu, jnp.log(0.05))} # (almost) deterministic
def func_q(S, A, is_training):
seq = hk.Sequential((
hk.Linear(8), jax.nn.relu,
hk.Linear(8), jax.nn.relu,
hk.Linear(8), jax.nn.relu,
hk.Linear(1, w_init=jnp.zeros), jnp.ravel
))
X = jnp.concatenate((S, A), axis=-1)
return seq(X)
# main function approximators
pi = coax.Policy(func_pi, env)
q1 = coax.Q(func_q, env, action_preprocessor=pi.proba_dist.preprocess_variate)
q2 = coax.Q(func_q, env, action_preprocessor=pi.proba_dist.preprocess_variate)
# target network
q1_targ = q1.copy()
q2_targ = q2.copy()
pi_targ = pi.copy()
# experience tracer
tracer = coax.reward_tracing.NStep(n=5, gamma=0.9)
buffer = coax.experience_replay.SimpleReplayBuffer(capacity=25000)
# updaters
qlearning1 = coax.td_learning.ClippedDoubleQLearning(
q1, pi_targ_list=[pi_targ], q_targ_list=[q1_targ, q2_targ],
loss_function=coax.value_losses.mse, optimizer=optax.adam(1e-3))
qlearning2 = coax.td_learning.ClippedDoubleQLearning(
q2, pi_targ_list=[pi_targ], q_targ_list=[q1_targ, q2_targ],
loss_function=coax.value_losses.mse, optimizer=optax.adam(1e-3))
determ_pg = coax.policy_objectives.DeterministicPG(pi, q1_targ, optimizer=optax.adam(1e-3))
# action noise
noise = coax.utils.OrnsteinUhlenbeckNoise(mu=0., sigma=0.2, theta=0.15)
# train
while env.T < 1000000:
s = env.reset()
noise.reset()
noise.sigma *= 0.99 # slowly decrease noise scale
for t in range(env.spec.max_episode_steps):
a = noise(pi.mode(s))
s_next, r, done, info = env.step(a)
# trace rewards and add transition to replay buffer
tracer.add(s, a, r, done)
while tracer:
buffer.add(tracer.pop())
# learn
if len(buffer) >= 5000:
transition_batch = buffer.sample(batch_size=128)
# init metrics dict
metrics = {'OrnsteinUhlenbeckNoise/sigma': noise.sigma}
# flip a coin to decide which of the q-functions to update
qlearning = qlearning1 if jax.random.bernoulli(q1.rng) else qlearning2
metrics.update(qlearning.update(transition_batch))
# delayed policy updates
if env.T >= 7500 and env.T % 4 == 0:
metrics.update(determ_pg.update(transition_batch))
env.record_metrics(metrics)
# sync target networks
q1_targ.soft_update(q1, tau=0.001)
q2_targ.soft_update(q2, tau=0.001)
pi_targ.soft_update(pi, tau=0.001)
if done:
break
s = s_next
# generate an animated GIF to see what's going on
if env.period(name='generate_gif', T_period=10000) and env.T > 5000:
T = env.T - env.T % 10000 # round to 10000s
coax.utils.generate_gif(
env=env, policy=pi, filepath=f"./data/gifs/{name}/T{T:08d}.gif")
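# The `soft_update` calls above perform Polyak averaging of the target parameters; conceptually (NumPy sketch, not the coax API):

```python
import numpy as np

def polyak(target, source, tau=0.001):
    # move the target a small step tau toward the online parameters
    return (1 - tau) * target + tau * source

target, source = np.zeros(3), np.ones(3)
print(polyak(target, source, tau=0.5))  # [0.5 0.5 0.5]
```

# With the small tau=0.001 used above, the target networks trail the online networks slowly, which stabilizes the TD targets.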
| doc/_notebooks/pendulum/td3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # AutoML 06: Train Test Split and Handling Sparse Data
#
# In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML for handling sparse data and how to specify custom cross validations splits.
#
# Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.
#
# In this notebook you will learn how to:
# 1. Create an `Experiment` in an existing `Workspace`.
# 2. Configure AutoML using `AutoMLConfig`.
# 3. Train the model.
# 4. Explore the results.
# 5. Test the best fitted model.
#
# In addition this notebook showcases the following features
# - Explicit train test splits
# - Handling **sparse data** in the input
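# A quick check that `HashingVectorizer` really produces a SciPy sparse matrix — the reason `preprocess` must be left `False` in this notebook:

```python
from scipy import sparse
from sklearn.feature_extraction.text import HashingVectorizer

X = HashingVectorizer(n_features=2**10).transform(["hello world", "sparse rows"])
print(sparse.issparse(X), X.shape)  # True (2, 1024)
```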
# ## Create an Experiment
#
# As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
# +
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
# +
ws = Workspace.from_config()
# choose a name for the experiment
experiment_name = 'automl-local-missing-data'
# project folder
project_folder = './sample_projects/automl-local-missing-data'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data=output, index=['']).T
# -
# ## Diagnostics
#
# Opt-in diagnostics for better experience, quality, and security of future releases.
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
# ## Creating Sparse Data
# +
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.model_selection import train_test_split
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train, X_valid, y_train, y_valid = train_test_split(data_train.data, data_train.target, test_size = 0.33, random_state = 42)
vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False,
n_features = 2**16)
X_train = vectorizer.transform(X_train)
X_valid = vectorizer.transform(X_valid)
summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features'])
summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]]
summary_df['Validation Set'] = [X_valid.shape[0], X_valid.shape[1]]
summary_df
# -
# ## Configure AutoML
#
# Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.
#
# |Property|Description|
# |-|-|
# |**task**|classification or regression|
# |**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|
# |**max_time_sec**|Time limit in seconds for each iteration.|
# |**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
# |**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.|
# |**X**|(sparse) array-like, shape = [n_samples, n_features]|
# |**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|
# |**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.|
# |**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification for the custom validation set.|
# |**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
max_time_sec = 3600,
iterations = 5,
preprocess = False,
verbosity = logging.INFO,
X = X_train,
y = y_train,
X_valid = X_valid,
y_valid = y_valid,
path = project_folder)
# ## Train the Models
#
# Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
# In this example, we specify `show_output = True` to print currently running iterations to the console.
local_run = experiment.submit(automl_config, show_output=True)
# ## Explore the Results
# #### Widget for Monitoring Runs
#
# The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
#
# **Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
from azureml.train.widgets import RunDetails
RunDetails(local_run).show()
#
# #### Retrieve All Child Runs
# You can also use SDK methods to fetch all the child runs and see individual metrics that we log.
# +
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
# -
# ### Retrieve the Best Model
#
# Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
best_run, fitted_model = local_run.get_output()
# #### Best Model Based on Any Other Metric
# Show the run and the model with the best `accuracy` value:
# +
# lookup_metric = "accuracy"
# best_run, fitted_model = local_run.get_output(metric = lookup_metric)
# -
# #### Model from a Specific Iteration
# Show the run and the model from the third iteration:
# +
# iteration = 3
# best_run, fitted_model = local_run.get_output(iteration = iteration)
# -
# ### Testing the Best Fitted Model
# +
# Load test data. (`categories`, `remove` and `vectorizer` were defined in the training cells above.)
from sklearn.datasets import fetch_20newsgroups
from pandas_ml import ConfusionMatrix
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = vectorizer.transform(data_test.data)
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
| automl/06.auto-ml-sparse-data-train-test-split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import mdptoolbox
from itertools import product
# Initialize warehouse features
features = {
'number_fields': 4,
'number_fillings': 4,
'number_next_color': 3,
'number_actions': 2,
'structure': (2, 2)
}
# -
# Declare global variables to reuse in functions
number_fields = features['number_fields']
number_fillings = features['number_fillings']
number_next_color = features['number_next_color']
number_actions = features['number_actions']
# Represent each state in a multiple array
def create_states(features):
global number_fields
global number_fillings
global number_next_color
global number_actions
# Create multidimensional array with 1536 states in total
all_states = np.ndarray(shape=((number_fillings ** number_fields) * number_next_color * number_actions, number_fields + 2))
field_states = list(product(np.arange(number_fillings), repeat=number_fields))
# Generate all possible states
for counter_fields, field_state in enumerate(field_states):
for counter_actions in range(number_actions):
for counter_next_color in range(number_next_color):
index = number_next_color * number_actions * counter_fields + counter_actions * number_next_color + counter_next_color
# Shift next color counter for better encoding of colors
all_states[index, :] = *field_state, counter_actions, counter_next_color
return all_states
states = create_states(features)
states
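# A quick arithmetic check of the 1536 figure mentioned in the comment above — 4 fields with 4 filling levels each, 3 possible next colors, and 2 actions:

```python
# Sanity check on the state-space size: filling combinations times
# next-color options times action options.
number_of_states = 4 ** 4 * 3 * 2
print(number_of_states)  # 1536
```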
def transition_matrix_generator(states, features):
global number_fields
global number_fillings
global number_next_color
global number_actions
number_states = states.shape[0]
# Define the state space transition matrix
transition_matrix = np.zeros((number_fields + 1, number_states, number_states))
for first_state in range(number_states):
for field in range(number_fields):
for second_state in range(number_states):
state_1 = states[first_state]
state_2 = states[second_state]
number_actions = state_1[-2]
number_next_color = state_1[-1]
if number_actions == 0:
if state_1[field] == 3 and state_2[field] == number_next_color:
transition_matrix[field, first_state, second_state] = 1
else:
transition_matrix[4, first_state, second_state] = 1
else:
if state_1[field] == number_next_color and state_2[field] == 3:
transition_matrix[field, first_state, second_state] = 1
else:
transition_matrix[4, first_state, second_state] = 1
row_sum = np.sum(transition_matrix[field, first_state, :])
# Now, normalize the row accordingly
if row_sum > 1:
transition_matrix[field, first_state, :] /= row_sum
else:
transition_matrix[field, first_state, first_state] = 1
row_sum = np.sum(transition_matrix[4, first_state, :])
if row_sum > 1:
transition_matrix[4, first_state, :] /= row_sum
else:
transition_matrix[4, first_state, first_state] = 1
return transition_matrix
transition = transition_matrix_generator(states, features)
transition
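# The `row_sum` handling above is a normalization step: every row of each action's matrix should end up as a valid probability distribution, with a self-loop added when a row would otherwise be empty. A standalone sketch of that idea on a toy 3-state matrix (values purely illustrative):

```python
import numpy as np

# Toy 3-state, single-action transition matrix with unnormalized rows
# (row 1 is empty, mimicking a state with no outgoing transition).
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [2.0, 1.0, 1.0]])

for i in range(T.shape[0]):
    row_sum = T[i].sum()
    if row_sum > 1:
        T[i] /= row_sum   # normalize to a probability distribution
    elif row_sum == 0:
        T[i, i] = 1.0     # self-loop so the row is still stochastic

print(T.sum(axis=1))  # [1. 1. 1.]
```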
def reward_generator(states, transition, features):
global number_fields
global number_fillings
global number_next_color
global number_actions
structure = features['structure']
# Return rows and columns structure
rows = np.arange(structure[0])
columns = np.arange(structure[1])
# Return new array and fill with ones
distances = np.ones(structure)
distances += columns
distances = (distances.T + rows).T
distances = np.append(distances, 7)
# Create empty array to store reward matrix
reward = []
# Define reward matrix
for action_nr, spacetimes in enumerate(transition):
indices = np.where(np.logical_and(spacetimes > 0, spacetimes < 1))
output = np.zeros(spacetimes.shape)
output[indices] = 10 - distances[action_nr]
reward.append(output)
return reward
rewards = reward_generator(states, transition, features)
rewards
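# The broadcasting inside `reward_generator` builds a per-field distance grid from the warehouse structure. Here is the same computation traced step by step on the `(2, 2)` structure (the appended `7` is presumably a large distance reserved for the fifth, catch-all action):

```python
import numpy as np

structure = (2, 2)
rows = np.arange(structure[0])       # [0, 1]
columns = np.arange(structure[1])    # [0, 1]

distances = np.ones(structure)       # [[1., 1.], [1., 1.]]
distances += columns                 # [[1., 2.], [1., 2.]]  (column index added row-wise)
distances = (distances.T + rows).T   # [[1., 2.], [2., 3.]]  (row index added column-wise)
distances = np.append(distances, 7)  # flattened: [1., 2., 2., 3., 7.]
print(distances)
```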
#
pi = mdptoolbox.mdp.ValueIteration(transitions=transition, reward=rewards, discount=0.8)
pi.setVerbose()
results = pi.run()
print(pi.policy)
states[11:19,:]
pi.policy[11:19]
# +
color_to_index = {
"white": 0,
"blue": 1,
"red": 2
}
index_to_color = ["white", "blue", "red"]
actions_to_index = {
"store": 0,
"restore": 1
}
index_to_action = ["store", "restore"]
| homework4/saki4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#generate data for users
import numpy as np
from random import randint
# %matplotlib inline
flows = []
def create_flow(user, number_of_flows, time_limits, byte_limits):
for _ in range(number_of_flows):
time = randint(time_limits[0], time_limits[1])
byte_count = randint(byte_limits[0], byte_limits[1])
flows.append([time, user, byte_count])
create_flow('C' + str(0), 15, [1,100], [100, 500])
create_flow('C' + str(1), 15, [1,100], [150, 300])
create_flow('C' + str(2), 20, [1,100], [100, 500])
create_flow('C' + str(3), 20, [1,100], [260, 800])
create_flow('C' + str(4), 20, [1,100], [100, 500])
flows.append([87, 'C' + str(4), 5500])
flows.append([87, 'C' + str(4), 1500])
# +
import pandas as pd
df = pd.DataFrame(flows)
# -
# df = df.sample(frac=1)
df.head()
# +
# Save data for future work
df.to_csv('../../../diploma/generated_data/flows_test.txt', header=False, index=False)
# -
| diploma/generateData.ipynb |
# # Linear model for classification
# In regression, we saw that the target to be predicted was a continuous
# variable. In classification, this target will be discrete (e.g. categorical).
#
# We will go back to our penguin dataset. However, this time we will try to
# predict the penguin species using the culmen information. We will also
# simplify our classification problem by selecting only 2 of the penguin
# species to solve a binary classification problem.
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">If you want a deeper overview regarding this dataset, you can refer to the
# Appendix - Datasets description section at the end of this MOOC.</p>
# </div>
# +
import pandas as pd
penguins = pd.read_csv("../datasets/penguins_classification.csv")
# only keep the Adelie and Chinstrap classes
penguins = penguins.set_index("Species").loc[
["Adelie", "Chinstrap"]].reset_index()
culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"]
target_column = "Species"
# -
# We can quickly start by visualizing the feature distribution by class:
# +
import matplotlib.pyplot as plt
for feature_name in culmen_columns:
plt.figure()
# plot the histogram for each specie
penguins.groupby("Species")[feature_name].plot.hist(
alpha=0.5, density=True, legend=True)
plt.xlabel(feature_name)
# -
# We can observe that we have quite a simple problem. As the culmen
# length increases, the probability that the penguin is a Chinstrap gets
# closer to 1. However, the culmen depth is not helpful for predicting the
# penguin species.
#
# For model fitting, we will separate the target from the data and
# we will create a training and a testing set.
# +
from sklearn.model_selection import train_test_split
penguins_train, penguins_test = train_test_split(penguins, random_state=0)
data_train = penguins_train[culmen_columns]
data_test = penguins_test[culmen_columns]
target_train = penguins_train[target_column]
target_test = penguins_test[target_column]
range_features = {
feature_name: (penguins[feature_name].min() - 1,
penguins[feature_name].max() + 1)
for feature_name in culmen_columns
}
# -
# To visualize the separation found by our classifier, we will define a helper
# function `plot_decision_function`. In short, this function will plot the
# boundary of the decision function, where the probability of being an Adelie
# or a Chinstrap is equal (p=0.5).
# +
import numpy as np
def plot_decision_function(fitted_classifier, range_features, ax=None):
"""Plot the boundary of the decision function of a classifier."""
from sklearn.preprocessing import LabelEncoder
feature_names = list(range_features.keys())
# create a grid to evaluate all possible samples
plot_step = 0.02
xx, yy = np.meshgrid(
np.arange(*range_features[feature_names[0]], plot_step),
np.arange(*range_features[feature_names[1]], plot_step),
)
# compute the associated prediction
Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()])
Z = LabelEncoder().fit_transform(Z)
Z = Z.reshape(xx.shape)
# make the plot of the boundary and the data samples
if ax is None:
_, ax = plt.subplots()
ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu_r")
return ax
# -
# The linear regression that we previously saw will predict a continuous
# output. When the target is a binary outcome, one can use the logistic
# function to model the probability. This model is known as logistic
# regression.
#
# Scikit-learn provides the class `LogisticRegression` which implements this
# algorithm.
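# As a quick aside, the logistic (sigmoid) function maps any real-valued score into the interval (0, 1), which is what lets us interpret the model output as a probability — a minimal sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Scores far below 0 map near 0, scores far above 0 map near 1,
# and a score of exactly 0 maps to probability 0.5.
print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # ~[0.0067, 0.5, 0.9933]
```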
# +
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
logistic_regression = make_pipeline(
StandardScaler(), LogisticRegression(penalty="none")
)
logistic_regression.fit(data_train, target_train)
# +
import seaborn as sns
ax = sns.scatterplot(
data=penguins_test, x=culmen_columns[0], y=culmen_columns[1],
hue=target_column, palette=["tab:red", "tab:blue"])
_ = plot_decision_function(logistic_regression, range_features, ax=ax)
# -
# Thus, we see that our decision function is represented by a line separating
# the 2 classes. We should also note that we did not impose any regularization
# by setting the parameter `penalty` to `'none'`.
#
# Since the line is oblique, it means that we used a combination of both
# features:
coefs = logistic_regression[-1].coef_[0]  # the coefficients are stored in a 2D array
weights = pd.Series(coefs, index=culmen_columns)
weights.plot.barh()
plt.title("Weights of the logistic regression")
# Indeed, both coefficients are non-null.
| notebooks/logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sleepypioneer/coffee-leaf-rust-predictor/blob/main/analysis/notebooks/leaf_rust_detection_exploration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="URqT67PyDadW"
# # Coffee Leaf Rust detection ☕🌿
#
# [Hemileia vastatrix](https://en.wikipedia.org/wiki/Hemileia_vastatrix) is a fungus which causes coffee leaf rust disease. This disease, which reduces a plant's ability to derive energy through photosynthesis, is extremely damaging to economies built on coffee cultivation, as it can wipe out whole crops. The disease can be identified by the spores which cover the plant's leaves.
# + id="g0D41xbsRiS6"
from datetime import datetime
import pathlib
import random
import shutil
from typing import List
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img
# %matplotlib inline
# + [markdown] id="8tsSt-LwKE13"
# ## Getting our Data 💾
# + id="h7SlfWV2BHqd"
# ! pip install -q kaggle
# + [markdown] id="0DN2ttnYTBLP"
# In order to use the Kaggle’s public API, you must first authenticate using an API token. From the site header, click on your user profile picture, then on “My Account” from the dropdown menu. This will take you to your account settings at https://www.kaggle.com/account. Scroll down to the section of the page labelled API:
#
# To create a new token, click on the “Create New API Token” button. This will download a fresh authentication token onto your machine in the form of a `kaggle.json`
#
# The cell below once run will require you to select this file for upload.
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 90} id="VQXnHPY9C0tT" outputId="823d0424-db9f-4101-8c0e-2c6ff1140be9"
from google.colab import files
files.upload()
# + id="PlSQ9wdEC0oF"
# ! mkdir ~/.kaggle
# ! cp kaggle.json ~/.kaggle/
# + id="MPTZYwBdC0DY"
# ! chmod 600 ~/.kaggle/kaggle.json
# + colab={"base_uri": "https://localhost:8080/"} id="5pmEauIUCQpn" outputId="ed5c87be-6981-4455-e6cd-b1982bf1fcfd"
# ! kaggle datasets download -d 'badasstechie/coffee-leaf-diseases'
# + id="hihO_MHVE2Ow"
# ! mkdir coffee-leaf-diseases
# + id="AEo_PcJhHs2J"
# ! unzip coffee-leaf-diseases.zip -d coffee-leaf-diseases &> /dev/null
# + id="oaSbCFaHH3c3"
# clean up downloaded zip
# !rm -r coffee-leaf-diseases.zip
# + [markdown] id="Xmha0iwUKLwX"
# ## Creating Train and Test data sets 🗂️
#
# The first task is to split up our data so we have images for both our two classes: `rust` and `other` (disease that is not identified as rust).
# + id="14GEtmznQY2I"
train_data = pd.read_csv("coffee-leaf-diseases/train_classes.csv")
# + id="w6BB0JYJMXHN"
test_data = pd.read_csv("coffee-leaf-diseases/test_classes.csv")
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="X-L5ukNkSApP" outputId="646dc8bd-2252-419a-de84-58d5ccdff1d1"
train_data[train_data["rust"] == 1]
# + [markdown] id="UOim40YwSFsJ"
# We see here that images can be labelled both rust and miner. We exclude the images with multiple labels.
# + id="kzbpcnpFQY5g"
train_rust = train_data[(train_data["rust"] == 1) & (train_data["miner"] != 1) & (train_data["phoma"] != 1)]
train_rust_ids = train_rust["id"].to_list()
# + id="1q_GR_lNplaV"
other_train_ids = train_data[(train_data["rust"] == 0)]["id"].to_list()
# + id="vaYFv4AaMuoA"
test_rust = test_data[(test_data["rust"] == 1) & (test_data["miner"] != 1) & (test_data["phoma"] != 1)]
test_rust_ids = test_rust["id"].to_list()
# + id="OjFGOieWptYB"
other_test_ids = test_data[(test_data["rust"] == 0)]["id"].to_list()
# + colab={"base_uri": "https://localhost:8080/"} id="9OoqWgWcQGCf" outputId="2967953a-568a-4553-884b-c7670fb741ab"
print(f"Number of train rust images: {len(train_rust_ids)}")
print(f"Number of test rust images: {len(test_rust_ids)}")
# + id="ivMQASC7RZ7p"
os.mkdir("coffee-leaf-diseases/coffee-leaf-diseases/train/images/rust")
os.mkdir("coffee-leaf-diseases/coffee-leaf-diseases/train/images/other")
os.mkdir("coffee-leaf-diseases/coffee-leaf-diseases/test/images/rust")
os.mkdir("coffee-leaf-diseases/coffee-leaf-diseases/test/images/other")
# + id="W_yjMGo8QNZu"
for n in train_rust_ids:
shutil.move(f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/{n}.jpg", f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/rust/")
# + id="jM7ml7FdYhLl"
for n in test_rust_ids:
shutil.move(f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/{n}.jpg", f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/rust/")
# + id="wn0hsqoJp2Ec"
for n in other_train_ids:
shutil.move(f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/{n}.jpg", f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/other/")
# + id="qeFUWyZop2PB"
for n in other_test_ids:
shutil.move(f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/{n}.jpg", f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/other/")
# + [markdown] id="we_pbU8JT--2"
# Let's take a look at the kind of image we have.
# + id="JaDI6MJ_YkwR" colab={"base_uri": "https://localhost:8080/", "height": 333} outputId="ee8dc092-5ddf-45c4-941f-8425f423dbfc"
img_path = f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/rust/{train_rust_ids[0]}.jpg"
rust_leaf = load_img(img_path, target_size=(299, 299))
print(rust_leaf)
rust_leaf
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="ovcooaqrqFtv" outputId="92468812-038a-445f-bb24-35c2cf88d449"
img_path = f"./coffee-leaf-diseases/coffee-leaf-diseases/train/images/other/{other_train_ids[0]}.jpg"
non_rust_leaf = load_img(img_path, target_size=(299, 299))
print(non_rust_leaf)
non_rust_leaf
# + [markdown] id="6a4UN8v6JfMF"
# When creating our data sets we had images that were marked as rust but also as a different disease. We want to ignore these multiply-diseased images, so we remove any remaining files that were not sorted into the `rust` or `other` folders of the train and test sets.
# + id="t9qWIvtoH-CS"
# !cd coffee-leaf-diseases/coffee-leaf-diseases/test/images && find -type f -not -path '*/other*' -not -path '*/rust*' -delete
# + id="_hKfhYH5JNyV"
# !cd coffee-leaf-diseases/coffee-leaf-diseases/train/images && find -type f -not -path '*/other*' -not -path '*/rust*' -delete
# + [markdown] id="Hi38FhiDKjgW"
# ## Training our model ⚙️
# + id="KUWaVdY8hgiN"
train_data_path = "./coffee-leaf-diseases/coffee-leaf-diseases/train/images/"
# + id="CxeajZRXM-7i"
batch_size = 32
img_height = 150
img_width = 150
input_shape=(150, 150, 3)
data_dir = pathlib.Path(train_data_path)
# + colab={"base_uri": "https://localhost:8080/"} id="rDeGuen_qyUi" outputId="b31d9dd1-ca4b-4dba-ce2a-34ba08a95494"
# allows us to read images from storage and use them for training
train_gen = ImageDataGenerator(preprocessing_function=preprocess_input, validation_split=0.2)
train_ds = train_gen.flow_from_directory(
data_dir,
subset="training",
target_size=(img_height, img_width),
batch_size=batch_size
)
# + colab={"base_uri": "https://localhost:8080/"} id="RWUZe4Rpq2NM" outputId="4efdb24d-78de-4d73-db4b-e2f60e7fc3b0"
# view the classes we have available in our training data
train_ds.class_indices
# + colab={"base_uri": "https://localhost:8080/"} id="KJyvfpt_Yji6" outputId="2ae2810f-2133-4b06-eacd-790454ac7a01"
val_ds = train_gen.flow_from_directory(
data_dir,
subset="validation",
target_size=(img_height, img_width),
batch_size=batch_size
)
# + colab={"base_uri": "https://localhost:8080/"} outputId="d09d5b48-308a-4157-e8a0-a995835aab72" id="PETZ2lXQdICf"
val_ds.class_indices
# + id="m4ATjzuew1Ae"
def make_model(input_size=150, learning_rate=0.01, size_inner=100,
droprate=0.5):
base_model = Xception(
weights='imagenet',
include_top=False,
input_shape=(input_size, input_size, 3)
)
base_model.trainable = False
#########################################
inputs = keras.Input(shape=(input_size, input_size, 3))
base = base_model(inputs, training=False)
vectors = keras.layers.GlobalAveragePooling2D()(base)
inner = keras.layers.Dense(size_inner, activation='relu')(vectors)
drop = keras.layers.Dropout(droprate)(inner)
outputs = keras.layers.Dense(2)(drop)
model = keras.Model(inputs, outputs)
#########################################
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
loss = keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(
optimizer=optimizer,
loss=loss,
metrics=['accuracy']
)
return model
# + id="2Sp4_X17YI2h"
checkpoint = keras.callbacks.ModelCheckpoint(
'xception_v4_1_{epoch:02d}_{val_accuracy:.3f}.h5',
save_best_only=True,
monitor='val_accuracy',
mode='max'
)
# + [markdown] id="rEn-QIVCVYaJ"
# ### Tuning the learning rate 🔧
# The learning rate is the most important parameter to tune. It controls how fast a model learns:
# - FAST: a high learning rate consumes a lot of learning material quickly, but not thoroughly
# - SLOW: a low learning rate focuses on less material but learns it much more thoroughly
# - a learning rate that is too high may perform poorly (overfit) on validation, whereas one that is too low may underfit, so we have to find the right balance
# - we try different values to see which gives us the best result
# + colab={"base_uri": "https://localhost:8080/"} id="_-G8mLxD4pLn" outputId="5b15acf4-48b3-4a5b-ef51-c3ac1143f4d7"
# %%time
scores = {}
for lr in [0.0001, 0.001, 0.01, 0.1]:
print(lr)
model = make_model(learning_rate=lr)
history = model.fit(train_ds, epochs=10, validation_data=val_ds, callbacks=[checkpoint])
scores[lr] = history.history
print()
print()
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="7TOcFQYk4v6C" outputId="c0003d56-e13c-41e5-8d08-f5288048d726"
# %%time
for lr in [0.0001, 0.001, 0.01, 0.1]:
hist = scores[lr]
plt.plot(hist['val_accuracy'], label=lr)
plt.legend()
# + id="VE42jhM95R9M"
learning_rate = 0.001
# + [markdown] id="9FlbYfiTVUJj"
# Adding more layers
# + colab={"base_uri": "https://localhost:8080/"} id="Nn0JFKJhVUp0" outputId="302e5788-ce0e-4b49-cf1d-d10cc0f697b5"
# %%time
scores = {}
for size in [10, 100, 1000]:
print(size)
model = make_model(learning_rate=learning_rate, size_inner=size)
history = model.fit(train_ds, epochs=10, validation_data=val_ds)
scores[size] = history.history
print()
print()
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="nfYlCo8IVU30" outputId="e878a1b9-4331-42bf-8849-a1c464ed3500"
# %%time
for size in [10, 100, 1000]:
hist = scores[size]
plt.plot(hist['val_accuracy'], label=size)
plt.legend()
# + id="Mo2d1TXDVVAn"
size = 100
# + [markdown] id="nYCsFRScZc-1"
# ### Regularisation and dropout
# + colab={"base_uri": "https://localhost:8080/"} id="mW6_irRaaUaN" outputId="e5e0c299-dfe0-4d2b-fad4-7ea598b6785f"
scores = {}
for droprate in [0.0, 0.2, 0.5, 0.8]:
print(droprate)
model = make_model(
learning_rate=learning_rate,
size_inner=size,
droprate=droprate
)
history = model.fit(train_ds, epochs=10, validation_data=val_ds)
scores[droprate] = history.history
print()
print()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="iTSM0Y9gaUza" outputId="1538c248-6405-4dfa-96ce-e9bc5f53ab9c"
for droprate, hist in scores.items():
plt.plot(hist['val_accuracy'], label=('val=%s' % droprate))
plt.ylim(0.78, 0.925)
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="KSNUzQTaaVBM" outputId="6b001b66-4351-4a77-b4f3-090f2aec6270"
hist = scores[0.0]
plt.plot(hist['val_accuracy'], label=0.0)
hist = scores[0.5]
plt.plot(hist['val_accuracy'], label=0.5)
plt.legend()
# + id="rTXaZRaXacsi"
droprate = 0.2
# + [markdown] id="W_2BBaljZdg5"
# ### Data Augmentation
#
# + [markdown] id="xNR3B1geZdo0"
# ## Training a larger model
# + id="4HY0poAtVVHi"
input_size = 299
# + colab={"base_uri": "https://localhost:8080/"} id="VcTVttiDZyht" outputId="d10c3d2f-e59a-4054-ca60-d0c84cc500ae"
train_gen = ImageDataGenerator(
preprocessing_function=preprocess_input,
# shear_range=10,
zoom_range=0.1,
horizontal_flip=True,
validation_split=0.2
)
train_ds = train_gen.flow_from_directory(
data_dir,
subset="training",
target_size=(input_size, input_size),
batch_size=32
)
val_ds = train_gen.flow_from_directory(
data_dir,
subset="validation",
target_size=(input_size, input_size),
batch_size=32,
shuffle=False
)
# + [markdown] id="M2tpPvcBjsLt"
# ## Training the final model
# + colab={"base_uri": "https://localhost:8080/"} id="YCe84KEzjsnM" outputId="2a9ec4e8-3f5d-4d60-d6f2-0532e040aa9f"
print(f"""Final parameters: \n
input_size={input_size}, \n
learning_rate={learning_rate}, \n
size_inner={size}, \n
droprate={droprate} \n
""")
# + colab={"base_uri": "https://localhost:8080/"} id="xQ7DJeZEKoI7" outputId="c368be39-027c-4d04-cfd4-<KEY>"
final_model = make_model(
input_size=input_size,
learning_rate=learning_rate,
size_inner=size,
droprate=droprate
)
history = final_model.fit(train_ds, epochs=10, validation_data=val_ds,
                          callbacks=[checkpoint])
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="HtrfRvZIaBTn" outputId="f871198c-9066-4e2f-9f64-43312a86a8ca"
plt.plot(history.history['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="VwUbqZB0U-tf" outputId="4db74fa0-6ec4-4ec8-89af-9f6d3b2e5db8"
final_model.summary()
# + id="zKgTulxTLA6D"
version = "1"
timestamp = datetime.utcnow().strftime("%Y-%m-%d_%H:%M")
model_name = f"xception_v{version}_{timestamp}.h5"
# + colab={"base_uri": "https://localhost:8080/"} id="FFBv4DAxLBNn" outputId="94768e18-3ce0-47fe-fed0-2545dc376f9a"
final_model.save(model_name)
print(f"Latest saved model: {model_name}")
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="E3TV_CAPs5xS" outputId="641769af-9a3b-47df-b474-28fe491c82f5"
files.download(model_name)
# + [markdown] id="NdqfqHU8Np33"
# ## Using our model
# + id="foo8XGvRKp6G"
model = keras.models.load_model(model_name)
# + id="8jE8H9DRNo-K"
test_data_path = "./coffee-leaf-diseases/coffee-leaf-diseases/test/images/"
batch_size = 32
img_height = 150
img_width = 150
input_shape=(150, 150, 3)
test_data_dir = pathlib.Path(test_data_path)
# + colab={"base_uri": "https://localhost:8080/"} id="vrsKsxZPKqAD" outputId="d9e20659-e884-4561-9f25-53e1a8826f86"
test_gen = ImageDataGenerator(preprocessing_function=preprocess_input)
test_ds = test_gen.flow_from_directory(
test_data_dir,
target_size=(img_height, img_width),
batch_size=32,
shuffle=False
)
# + colab={"base_uri": "https://localhost:8080/"} id="Pd_tTKrpKqJH" outputId="d7c482a8-5eb6-489d-aef9-906f1a389433"
final_model.evaluate(test_ds)
# + colab={"base_uri": "https://localhost:8080/"} id="b9RLEuLjU9sV" outputId="bc7c2ab8-25ad-4965-d4c0-40ed1df21481"
classes = list(test_ds.class_indices.keys())
classes
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="VvraivujON5p" outputId="62d3f650-9391-41c8-ba23-bd1fb1f5d164"
random_idx = random.randint(0, len(test_rust_ids) - 1)
img_path = f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/rust/{test_rust_ids[random_idx]}.jpg"
example_rust_leaf = load_img(img_path, target_size=(input_size, input_size))
print(example_rust_leaf)
example_rust_leaf
# + colab={"base_uri": "https://localhost:8080/"} id="kcJ6oQ_AQWxO" outputId="0e3288f7-8fac-481b-d8f0-7bb2664eb7f0"
x = np.array(example_rust_leaf)
X = np.array([x])
X = preprocess_input(X)
pred = final_model.predict(X)
dict(zip(classes, pred[0]))
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="3LPp8X-5RPFI" outputId="33e7036d-9b15-4857-90ea-7a150122d78e"
random_idx = random.randint(0, len(other_test_ids) - 1)
img_path = f"./coffee-leaf-diseases/coffee-leaf-diseases/test/images/other/{other_test_ids[random_idx]}.jpg"
example_other_leaf = load_img(img_path, target_size=(input_size, input_size))
print(example_other_leaf)
example_other_leaf
# + colab={"base_uri": "https://localhost:8080/"} id="j8R75Is9SEMj" outputId="60d64292-9af6-4b8c-be20-a5e1e23ae562"
x = np.array(example_other_leaf)
X = np.array([x])
X = preprocess_input(X)
pred = final_model.predict(X)
dict(zip(classes, pred[0]))
| analysis/notebooks/leaf_rust_detection_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Q-Network (DQN)
# ---
# In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment.
#
# ### 1. Import the Necessary Packages
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
# ### 2. Instantiate the Environment and Agent
#
# Initialize the environment in the code cell below.
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
# Before running the next code cell, familiarize yourself with the code `dqn_agent.py` and `model.py`.
# Define a neural network architecture in `model.py` that maps states to action values.
# +
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# -
# ### 3. Train the Agent with DQN
#
# Run the code cell below to train the agent from scratch.
# +
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'F-checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# -
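# The epsilon-greedy schedule used by `dqn()` decays epsilon multiplicatively each episode until it hits the `eps_end` floor. A standalone trace of that schedule with the default parameters:

```python
# Decay parameters matching the defaults of dqn() above.
eps, eps_end, eps_decay = 1.0, 0.01, 0.995
trace = []
for episode in range(1000):
    trace.append(eps)
    eps = max(eps_end, eps_decay * eps)

# Epsilon shrinks monotonically from 1.0 and is clamped at the 0.01 floor.
print(trace[0], round(trace[99], 3), trace[-1])
```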
torch.save(agent.qnetwork_local.state_dict(), 'F-checkpoint.pth')
# ### Rendering
#
# +
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('F-checkpoint.pth'))
for i in range(200):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
# -
| Deep_Q_Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Recommender Systems 2018/19
#
# ### Practice 8 - Hybrid recommenders
#
#
# ### The way to go to achieve the best recommendation quality
# ## Some background on hybrids
#
#
# #### There are many different types of hybrids; in this practice we will see the following:
# * Linear combination of item-based models
# * Linear combination of heterogeneous models
# * User-wise discrimination
#
# ### Prerequisite: parameter tuning!
#
# #### Let's see an example. In the course repo you will find a BayesianSearch object in the ParameterTuning folder. It is a simple wrapper around another library whose purpose is to provide a very simple way to tune some of the most common parameters. To run heavy tuning on more complex problems, or with more sophisticated constraints, you may refer to other libraries.
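# #### Stripped of the library, the tuning loop is just: sample hyperparameters from the search space, fit on train, score on validation, keep the best. A library-free sketch — the response surface below is invented purely for illustration:

```python
import random

random.seed(0)

def validation_map(topK, shrink):
    # Hypothetical smooth response surface peaked around topK=300, shrink=100.
    return 1.0 / (1.0 + abs(topK - 300) / 300 + abs(shrink - 100) / 100)

best_score, best_params = -1.0, None
for _ in range(50):  # 50 random configurations
    params = {"topK": random.randint(5, 1000), "shrink": random.randint(0, 1000)}
    score = validation_map(**params)
    if score > best_score:
        best_score, best_params = score, params
```

# A Bayesian search replaces the blind random sampling with a model of the response surface, spending later evaluations where improvement looks most likely.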
# +
from urllib.request import urlretrieve
import zipfile, os
# If file exists, skip the download
data_file_path = "data/Movielens_10M/"
data_file_name = data_file_path + "movielens_10m.zip"
# If directory does not exist, create
if not os.path.exists(data_file_path):
os.makedirs(data_file_path)
if not os.path.exists(data_file_name):
urlretrieve ("http://files.grouplens.org/datasets/movielens/ml-10m.zip", data_file_name)
dataFile = zipfile.ZipFile(data_file_name)
URM_path = dataFile.extract("ml-10M100K/ratings.dat", path="data/Movielens_10M")
URM_file = open(URM_path, 'r')
def rowSplit (rowString):
split = rowString.split("::")
split[3] = split[3].replace("\n","")
split[0] = int(split[0])
split[1] = int(split[1])
split[2] = float(split[2])
split[3] = int(split[3])
result = tuple(split)
return result
URM_file.seek(0)
URM_tuples = []
for line in URM_file:
URM_tuples.append(rowSplit (line))
userList, itemList, ratingList, timestampList = zip(*URM_tuples)
userList = list(userList)
itemList = list(itemList)
ratingList = list(ratingList)
timestampList = list(timestampList)
import scipy.sparse as sps
URM_all = sps.coo_matrix((ratingList, (userList, itemList)))
URM_all = URM_all.tocsr()
# +
ICM_path = dataFile.extract("ml-10M100K/tags.dat", path = "data/Movielens_10M")
ICM_file = open(ICM_path, 'r')
def rowSplit (rowString):
split = rowString.split("::")
split[3] = split[3].replace("\n","")
split[0] = int(split[0])
split[1] = int(split[1])
split[2] = str(split[2]) # tag is a string, not a float like the rating
split[3] = int(split[3])
result = tuple(split)
return result
ICM_file.seek(0)
ICM_tuples = []
for line in ICM_file:
ICM_tuples.append(rowSplit(line))
userList_icm, itemList_icm, tagList_icm, timestampList_icm = zip(*ICM_tuples)
userList_icm = list(userList_icm)
itemList_icm = list(itemList_icm)
tagList_icm = list(tagList_icm)
timestampList_icm = list(timestampList_icm)
userList_unique = list(set(userList_icm))
itemList_unique = list(set(itemList_icm))
tagList_unique = list(set(tagList_icm))
numUsers = len(userList_unique)
numItems = len(itemList_unique)
numTags = len(tagList_unique)
print ("Number of items\t {}, Number of users\t {}".format(numItems, numUsers))
print ("Number of tags\t {}, Number of item-tag tuples {}".format(numTags, len(tagList_icm)))
print("\nData example:")
print(userList_icm[0:10])
print(itemList_icm[0:10])
print(tagList_icm[0:10])
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(tagList_icm)
tagList_icm = le.transform(tagList_icm)
import numpy as np
ones = np.ones(len(tagList_icm))
ICM_all = sps.coo_matrix((ones, (itemList_icm, tagList_icm)), shape=(URM_all.shape[1], tagList_icm.max()+1))
ICM_all = ICM_all.tocsr()
# +
from Notebooks_utils.data_splitter import train_test_holdout
URM_train, URM_test = train_test_holdout(URM_all, train_perc = 0.8)
URM_train, URM_validation = train_test_holdout(URM_train, train_perc = 0.9)
# -
# ### Step 1: Import the evaluator objects
# +
from Base.Evaluation.Evaluator import EvaluatorHoldout
evaluator_validation = EvaluatorHoldout(URM_validation, cutoff_list=[5])
evaluator_test = EvaluatorHoldout(URM_test, cutoff_list=[5, 10])
# -
# ### Step 2: Create BayesianSearch object
# +
from KNN.ItemKNNCFRecommender import ItemKNNCFRecommender
from ParameterTuning.SearchBayesianSkopt import SearchBayesianSkopt
recommender_class = ItemKNNCFRecommender
parameterSearch = SearchBayesianSkopt(recommender_class,
evaluator_validation=evaluator_validation,
evaluator_test=evaluator_test)
# -
# ### Step 3: Define parameters range
# +
from ParameterTuning.SearchAbstractClass import SearchInputRecommenderArgs
from skopt.space import Real, Integer, Categorical
hyperparameters_range_dictionary = {}
hyperparameters_range_dictionary["topK"] = Integer(5, 1000)
hyperparameters_range_dictionary["shrink"] = Integer(0, 1000)
hyperparameters_range_dictionary["similarity"] = Categorical(["cosine"])
hyperparameters_range_dictionary["normalize"] = Categorical([True, False])
recommender_input_args = SearchInputRecommenderArgs(
CONSTRUCTOR_POSITIONAL_ARGS = [URM_train],
CONSTRUCTOR_KEYWORD_ARGS = {},
FIT_POSITIONAL_ARGS = [],
FIT_KEYWORD_ARGS = {}
)
output_folder_path = "result_experiments/"
import os
# If directory does not exist, create
if not os.path.exists(output_folder_path):
os.makedirs(output_folder_path)
# -
# ### Step 4: Run!
# +
n_cases = 2
metric_to_optimize = "MAP"
parameterSearch.search(recommender_input_args,
parameter_search_space = hyperparameters_range_dictionary,
n_cases = n_cases,
n_random_starts = 1,
save_model = "no",
output_folder_path = output_folder_path,
output_file_name_root = recommender_class.RECOMMENDER_NAME,
metric_to_optimize = metric_to_optimize
)
# +
from Base.DataIO import DataIO
data_loader = DataIO(folder_path = output_folder_path)
search_metadata = data_loader.load_data(recommender_class.RECOMMENDER_NAME + "_metadata.zip")
# -
search_metadata
best_parameters = search_metadata["hyperparameters_best"]
best_parameters
# # Linear combination of item-based models
#
# #### Let's use an ItemKNNCF with the parameters we just learned, together with a graph-based model
# +
itemKNNCF = ItemKNNCFRecommender(URM_train)
itemKNNCF.fit(**best_parameters)
from GraphBased.P3alphaRecommender import P3alphaRecommender
P3alpha = P3alphaRecommender(URM_train)
P3alpha.fit()
# -
itemKNNCF.W_sparse
P3alpha.W_sparse
# ### We may define another Recommender which takes the two matrices as input as well as the weights
# +
from Base.Recommender_utils import check_matrix, similarityMatrixTopK
from Base.BaseSimilarityMatrixRecommender import BaseItemSimilarityMatrixRecommender
class ItemKNNSimilarityHybridRecommender(BaseItemSimilarityMatrixRecommender):
""" ItemKNNSimilarityHybridRecommender
Hybrid of two similarities S = S1*alpha + S2*(1-alpha)
"""
RECOMMENDER_NAME = "ItemKNNSimilarityHybridRecommender"
def __init__(self, URM_train, Similarity_1, Similarity_2, sparse_weights=True):
super(ItemKNNSimilarityHybridRecommender, self).__init__(URM_train)
if Similarity_1.shape != Similarity_2.shape:
raise ValueError("ItemKNNSimilarityHybridRecommender: similarities have different size, S1 is {}, S2 is {}".format(
Similarity_1.shape, Similarity_2.shape
))
# CSR is faster during evaluation
self.Similarity_1 = check_matrix(Similarity_1.copy(), 'csr')
self.Similarity_2 = check_matrix(Similarity_2.copy(), 'csr')
def fit(self, topK=100, alpha = 0.5):
self.topK = topK
self.alpha = alpha
W = self.Similarity_1*self.alpha + self.Similarity_2*(1-self.alpha)
self.W_sparse = similarityMatrixTopK(W, k=self.topK).tocsr()
# -
hybridrecommender = ItemKNNSimilarityHybridRecommender(URM_train, itemKNNCF.W_sparse, P3alpha.W_sparse)
hybridrecommender.fit(alpha = 0.5)
evaluator_validation.evaluateRecommender(hybridrecommender)
# ### In this case the alpha coefficient is itself a parameter to be tuned
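# #### A toy numpy sketch of such a sweep (the matrices and the scoring function are invented for illustration; in the notebook the score would come from the validation evaluator): blend the two similarities for several values of alpha and keep the best-scoring one.

```python
import numpy as np

rng = np.random.default_rng(42)
S1 = rng.random((5, 5))  # stand-in for one similarity matrix
S2 = rng.random((5, 5))  # stand-in for the other

def toy_score(W):
    # Stand-in for a real evaluation metric of the blended model.
    return float(W.mean())

results = {alpha: toy_score(alpha * S1 + (1 - alpha) * S2)
           for alpha in (0.0, 0.25, 0.5, 0.75, 1.0)}
best_alpha = max(results, key=results.get)
```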
# # Linear combination of predictions
#
# #### In case of models with incompatible structure (e.g., ItemKNN with UserKNN or MF) you may ensemble the prediction values
# +
from MatrixFactorization.PureSVDRecommender import PureSVDRecommender
pureSVD = PureSVDRecommender(URM_train)
pureSVD.fit()
user_id = 42
# -
item_scores = itemKNNCF._compute_item_score(user_id)
item_scores
item_scores = pureSVD._compute_item_score(user_id)
item_scores
# +
class ItemKNNScoresHybridRecommender(BaseItemSimilarityMatrixRecommender):
""" ItemKNNScoresHybridRecommender
Hybrid of two prediction scores R = R1*alpha + R2*(1-alpha)
"""
RECOMMENDER_NAME = "ItemKNNScoresHybridRecommender"
def __init__(self, URM_train, Recommender_1, Recommender_2):
super(ItemKNNScoresHybridRecommender, self).__init__(URM_train)
self.URM_train = check_matrix(URM_train.copy(), 'csr')
self.Recommender_1 = Recommender_1
self.Recommender_2 = Recommender_2
def fit(self, alpha = 0.5):
self.alpha = alpha
def _compute_item_score(self, user_id_array, items_to_compute = None):  # default added: callers may omit items_to_compute
item_weights_1 = self.Recommender_1._compute_item_score(user_id_array)
item_weights_2 = self.Recommender_2._compute_item_score(user_id_array)
item_weights = item_weights_1*self.alpha + item_weights_2*(1-self.alpha)
return item_weights
# +
hybridrecommender = ItemKNNScoresHybridRecommender(URM_train, itemKNNCF, pureSVD)
hybridrecommender.fit(alpha = 0.5)
evaluator_validation.evaluateRecommender(hybridrecommender)
# -
# # User-wise hybrid
#
# ### Models do not have the same accuracy for all user types. Let's divide the users according to their profile length and then compare the recommendation quality we get from different models
#
#
# +
URM_train = sps.csr_matrix(URM_train)
profile_length = np.ediff1d(URM_train.indptr)
# -
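# #### `np.ediff1d(URM_train.indptr)` gives, for each user, the number of stored interactions: in a CSR matrix, `indptr[i+1] - indptr[i]` is the number of non-zeros in row `i`. A small self-contained check:

```python
import numpy as np
import scipy.sparse as sps

URM_toy = sps.csr_matrix(np.array([[1, 0, 1],
                                   [0, 0, 0],
                                   [1, 1, 1]]))
profile_length_toy = np.ediff1d(URM_toy.indptr)  # non-zeros per row
```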
# ### Let's select a few groups, each containing 5% of the users, starting from those with the fewest interactions
block_size = int(len(profile_length)*0.05)
block_size
sorted_users = np.argsort(profile_length)
for group_id in range(0, 10):
start_pos = group_id*block_size
end_pos = min((group_id+1)*block_size, len(profile_length))
users_in_group = sorted_users[start_pos:end_pos]
users_in_group_p_len = profile_length[users_in_group]
print("Group {}, average p.len {:.2f}, min {}, max {}".format(group_id,
users_in_group_p_len.mean(), users_in_group_p_len.min(), users_in_group_p_len.max()))
# ### Now we plot the recommendation quality of TopPop, ItemKNNCF, ItemKNNCBF and PureSVD on each group
# +
from Base.NonPersonalizedRecommender import TopPop
topPop = TopPop(URM_train)
topPop.fit()
# +
from KNN.ItemKNNCBFRecommender import ItemKNNCBFRecommender
recommender_class = ItemKNNCBFRecommender
parameterSearch = SearchBayesianSkopt(recommender_class,
evaluator_validation=evaluator_validation,
evaluator_test=evaluator_test)
# +
hyperparameters_range_dictionary = {}
hyperparameters_range_dictionary["topK"] = Integer(5, 1000)
hyperparameters_range_dictionary["shrink"] = Integer(0, 1000)
hyperparameters_range_dictionary["similarity"] = Categorical(["cosine"])
hyperparameters_range_dictionary["normalize"] = Categorical([True, False])
recommender_input_args = SearchInputRecommenderArgs(
CONSTRUCTOR_POSITIONAL_ARGS = [URM_train, ICM_all],
CONSTRUCTOR_KEYWORD_ARGS = {},
FIT_POSITIONAL_ARGS = [],
FIT_KEYWORD_ARGS = {}
)
output_folder_path = "result_experiments/"
import os
# If directory does not exist, create
if not os.path.exists(output_folder_path):
os.makedirs(output_folder_path)
n_cases = 2
metric_to_optimize = "MAP"
parameterSearch.search(recommender_input_args,
parameter_search_space = hyperparameters_range_dictionary,
n_cases = n_cases,
n_random_starts = 1,
save_model = "no",
output_folder_path = output_folder_path,
output_file_name_root = recommender_class.RECOMMENDER_NAME,
metric_to_optimize = metric_to_optimize
)
# +
data_loader = DataIO(folder_path = output_folder_path)
search_metadata = data_loader.load_data(recommender_class.RECOMMENDER_NAME + "_metadata.zip")
best_parameters_ItemKNNCBF = search_metadata["hyperparameters_best"]
best_parameters_ItemKNNCBF
# -
itemKNNCBF = ItemKNNCBFRecommender(URM_train, ICM_all)
itemKNNCBF.fit(**best_parameters_ItemKNNCBF)
URM_train
# +
MAP_itemKNNCF_per_group = []
MAP_itemKNNCBF_per_group = []
MAP_pureSVD_per_group = []
MAP_topPop_per_group = []
cutoff = 10
for group_id in range(0, 10):
start_pos = group_id*block_size
end_pos = min((group_id+1)*block_size, len(profile_length))
users_in_group = sorted_users[start_pos:end_pos]
users_in_group_p_len = profile_length[users_in_group]
print("Group {}, average p.len {:.2f}, min {}, max {}".format(group_id,
users_in_group_p_len.mean(), users_in_group_p_len.min(), users_in_group_p_len.max()))
users_not_in_group_flag = np.isin(sorted_users, users_in_group, invert = True)
users_not_in_group = sorted_users[users_not_in_group_flag]
evaluator_test = EvaluatorHoldout(URM_test, cutoff_list=[cutoff], ignore_users = users_not_in_group)
results, _ = evaluator_test.evaluateRecommender(itemKNNCF)
MAP_itemKNNCF_per_group.append(results[cutoff]["MAP"])
results, _ = evaluator_test.evaluateRecommender(pureSVD)
MAP_pureSVD_per_group.append(results[cutoff]["MAP"])
results, _ = evaluator_test.evaluateRecommender(itemKNNCBF)
MAP_itemKNNCBF_per_group.append(results[cutoff]["MAP"])
results, _ = evaluator_test.evaluateRecommender(topPop)
MAP_topPop_per_group.append(results[cutoff]["MAP"])
# +
import matplotlib.pyplot as pyplot
# %matplotlib inline
pyplot.plot(MAP_itemKNNCF_per_group, label="itemKNNCF")
pyplot.plot(MAP_itemKNNCBF_per_group, label="itemKNNCBF")
pyplot.plot(MAP_pureSVD_per_group, label="pureSVD")
pyplot.plot(MAP_topPop_per_group, label="topPop")
pyplot.ylabel('MAP')
pyplot.xlabel('User Group')
pyplot.legend()
pyplot.show()
# -
# ### The recommendation quality of the different algorithms changes depending on the user profile length
#
# ## Tip:
# ### If an algorithm works best on average, it does not imply it will work best for ALL user types
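# ### A user-wise switching hybrid follows directly from the plot above: route each user group to the model that scored best on it. A minimal sketch — the per-group MAP values below are invented for illustration:

```python
# Per-group MAP for two models (invented numbers, 3 groups).
map_per_group = {
    "itemKNNCF": [0.01, 0.03, 0.05],
    "topPop":    [0.02, 0.01, 0.01],
}
best_model_per_group = [
    max(map_per_group, key=lambda m: map_per_group[m][g]) for g in range(3)
]

def model_for_user(user_group):
    # Route the user to the model that performed best on their group.
    return best_model_per_group[user_group]
```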
| Practice 8 - Hybrid recommenders.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practical Data Visualization with Python (Homework - Participant)
#
# ## Homework Overview
#
# Thanks for checking out the hands-on reinforcement exercises for this seminar. The goal of this homework is to provide you with a handful of visualization questions that you might conceivably face on the job. There is not one "right" answer for the questions below, but some answers are *more* right than others. For example, if you were asked to visualize the trends in [LTV](https://www.investopedia.com/terms/l/loantovalue.asp) over the course of a year, would plotting average LTV over time be a better visualization than building twelve violin plots of LTV--one for each month? Not necessarily. But would both of those be better than a single box-and-whisker plot of LTV across all originations in that year? Absolutely. It all depends on the context of the question, and the information you intend to convey with your visualization.
#
# When in doubt, ask yourself: **am I clearly and powerfully communicating the relevant information with this visualization?**
#
# - With each of these questions below, you will be asked to do two things:
# 1. Construct a visualization to answer the question.
# - You'll be pre-allotted one cell in the notebook for this, but feel free to use as many as you'd like. As was shown in the lecture materials, a good visualization almost always requires iteration. Feel free to keep the remnants of your iterative creative procees in your notebooks; just ensure your final viz. for each question is clearly marked.
# 2. Briefly explain (in no more than a paragraph) why you chose to visualize the data as you did.
# - If you are struggling to think of what to write, fall back on the lecture materials, particularly Section 1: Why We Visualize. Imagine that each of your visualizations was going to be presented to your team at a Process Confirm / Code Review; your paragraph should read like the explanation you would give in that context, detailing why your choices made for the most effective viz. Be sure to focus on how your visualization answers the question at hand, the crux of which is **in bold**, although the entire question provides relevant information as to what is expected.
#
# We'll be using the [same data](https://nbviewer.jupyter.org/github/pmaji/practical-python-data-viz-guide/blob/master/notebooks/data_prep_nb.ipynb) we've been dealing with throughout the seminar: January and December 2017 FNMA originations. Remember, if you don't understand what some of the variables mean, all the information you need is in the `data_prep_nb.ipynb`, including links to relevant glossaries and data dictionaries.
# ## Setup
# basic packages
import numpy as np
import pandas as pd
import datetime
# store the datetime of the most recent running of this notebook as a form of a log
most_recent_run_datetime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
f"This notebook was last executed on {most_recent_run_datetime}"
# +
# pulling in our main data; for more info on the data, see the "data_prep_nb.ipynb" file
main_df = pd.read_csv(filepath_or_buffer='../data/jan_and_dec_17_acqs.csv')
# taking a peek at our data
main_df.head()
# -
# ## Question 1
#
# A business partner of yours came to you to ask about how occupancy status relates to risk. They were wondering, **what occupancy status appears riskier in our data: principal homes (i.e. someone's primary residence), second homes, or investor-owned homes?** There are obviously many ways of measuring risk. Here it's safe to assume your business partner means credit risk, so some variables you may want to consider would be the borrower's credit score, DTI, or LTV. You can use one or more of these variables in your analysis, or something else altogether if you see fit; just ensure that in the end you arrive at a single visualization to share with your business partner.
# +
# code for visualization goes here
# -
# Explanation for why you chose this particular visualization goes here...
# ## Question 2
#
# Imagine that a recent news event broke that had to do with [mortgage insurance (MI)](https://en.wikipedia.org/wiki/Mortgage_insurance), and even though we don't yet know exactly how that news will impact <NAME>'s business, you've been asked to produce a visualization that communicates **to what extent our December 2017 acquisitions were covered by MI**.
# +
# code for visualization goes here
# -
# Explanation for why you chose this particular visualization goes here...
# ## Question 3
#
# One of your business partners is trying to **learn more about the areas of the country where we are providing the highest value loans in terms of origination amount**. You've also been told that an interactive map of the United States would be optimal here, and they'd like you to add whatever data you might think are relevant to the tooltip.
# +
# code for visualization goes here
# -
# Explanation for why you chose this particular visualization goes here...
# ## Question 4
#
# You've received a very open-ended question from an account manager hoping to learn more about how the seller with whom they work most closely compares to all sellers. Pick any seller (aside from "Other") and any two variables in our data (e.g. origination amount and origination value, but don't use that combo), and put together a visualization that communicates **whether or not that seller is unique in any way as it pertains to the two variables you selected**. The answer can be yes, no, or maybe... just justify your answer with your visualization.
# +
# code for visualization goes here
# -
# Explanation for why you chose this particular visualization goes here...
# ## Bonus Question
#
# Maybe a bonus question... depending on what Xi thinks of this existing level of difficulty.
| notebooks/participant_hw_nb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %%time
from __future__ import division
from pyomo.environ import *
import random as ra
for l in range(10):
Model = ConcreteModel()
customers = 8
depots = 2
nodes = customers + depots
Model.n = RangeSet(1,nodes)
Model.n0 = set (i for i in Model.n if i <= depots)
Model.nc = set (i for i in Model.n if i > depots )
#def x1 (Model,i):
# return ra.randint(50,150)
#Model.x1 = Param( Model.n , initialize = x1)
x1 = {}
for i in Model.n:
x1[i] = ra.randint(50,100)
Model.x1 = Param(Model.n,initialize = x1)
#def y1 (Model,i):
# return ra.randint(50,150)
#Model.y1 = Param(Model.n,initialize = y1)
y1 = {}
for i in Model.n:
y1[i] = ra.randint(50,100)
Model.y1 = Param(Model.n,initialize = y1)
#def d (Model,i):
# return ra.randint(50,150)
#Model.d = Param(Model.nc,initialize = d)
d = {}
for i in Model.nc :
# if i > depots :
d[i] = ra.randint(50,100)
Model.d = Param(Model.nc,initialize = d)
#def p (Model,i):
# return ra.randint(50,150)
#Model.p = Param(Model.nc,initialize = p)
p = {}
for i in Model.nc :
# if i > depots :
p[i] = ra.randint(50,100)
Model.p = Param(Model.nc,initialize = p)
#def fd (Model,i):
# return ra.randint(50,100)
#Model.p = Param(Model.n0,initialize = fd)
fd ={}
for i in Model.n0 :
# if i <depots+1:
fd[i] = ra.randint(10,50)
Model.fd = Param(Model.n0,initialize = fd)
cd = 500
cv = 200
fv =20
def c (Model,i,j):
return ((x1[i]-x1[j])**2+ (y1[i]-y1[j])**2)**0.5
Model.c = Param( Model.n,Model.n , initialize = c)
#c ={}
#for i in Model.n:
# for j in Model.n:
# c[i,j] = ((x1[i]-x1[j])**2+ (y1[i]-y1[j])**2)**0.5
#
#Model.c = Param(Model.n,Model.n ,initialize = c)
#Model.zz = Var(within = Reals)
#Model.z3 =set( Model.nc , Model.n0)
Model.x = Var(Model.n , Model.n , within = Binary)
Model.y = Var(Model.n0 , within = Binary)
Model.z = Var(Model.nc , Model.n0 , within = Binary)
# NonNegativeReals, not PositiveReals: constraints _8 and _10 force some flows to be exactly zero
Model.v = Var(Model.n,Model.n , within = NonNegativeReals)
Model.u = Var(Model.n,Model.n , within = NonNegativeReals)
#for i in Model.n:
# for k in Model.n:
# Model.z =
def _2 (Model,i):
xij=0
for j in Model.n:
xij+=Model.x[i,j]
return xij ==1
Model._2 = Constraint(Model.nc,rule =_2)
#def _2 (Model):
#
# for i in Model.nc:
# xij=0
# for j in Model.n:
# xij+=Model.x[i,j]
# return xij ==1
#Model._2 = Constraint(rule =_2)
def _3 (Model,i):
xij = 0
xji = 0
for j in Model.n:
xij+=Model.x[i,j]
xji+=Model.x[j,i]
return xij == xji
Model._3 = Constraint(Model.n,rule =_3)
#def _3 (Model):
# for i in Model.n:
# xij=0
# xji=0
# for j in Model.n:
# xij+=Model.x[i,j]
# xji+=Model.x[j,i]
# return xij ==xji
#Model._3 = Constraint(rule =_3)
def _4 (Model,i):
uij = 0
uji = 0
for j in Model.n:
uji +=Model.u[j,i]
uij +=Model.u[i,j]
return uji - uij == Model.d[i]
Model._4 = Constraint(Model.nc,rule =_4)
#def _4 (Model):
# for i in Model.nc:
# uij = 0
# uji = 0
# for j in Model.n:
# uji +=Model.u[j,i]
# uij +=Model.u[i,j]
# return uji - uij == Model.d[i]
#Model._4 = Constraint(rule =_4)
def _5 (Model,i):
vij = 0
vji = 0
for j in Model.n:
vji +=Model.v[j,i]
vij +=Model.v[i,j]
return vji - vij == Model.p[i]
Model._5 = Constraint(Model.nc,rule =_5)
#def _5 (Model):
# for i in Model.nc:
# vij = 0
# vji = 0
# for j in Model.n:
# vji +=Model.v[j,i]
# vij +=Model.v[i,j]
# return vji - vij == Model.p[i]
#Model._5 = Constraint(rule =_5)
# Index over both i and j: the original loop returned after generating only one constraint.
def _6 (Model,i,j):
    if i != j:
        return Model.u[i,j] + Model.v[i,j] <= cv * Model.x[i,j]
    return Constraint.Skip
Model._6 = Constraint(Model.n,Model.n,rule =_6)
#def _6 (Model):
# for i in Model.n:
# for j in Model.n:
# if i!= j:
# return Model.u[i,j] + Model.v[i,j] <=cv * Model.x[i,j]
#Model._6 = Constraint(rule =_6)
def _7 (Model,k):
ukj = 0
zjkdj = 0
for j in Model.nc:
ukj += Model.u[k,j]
zjkdj +=Model.z[j,k]*Model.d[j]
return ukj == zjkdj
Model._7 = Constraint(Model.n0,rule =_7)
#def _7 (Model):
# for k in Model.nc:
# ukj = 0
# zjkdj = 0
# for j in Model.nc:
# ukj += Model.u[k,j]
# zjkdj +=Model.z[j,k]*Model.d[j]
# return ukj == zjkdj
#Model._7 = Constraint(rule =_7)
def _8 (Model,k):
ujk=0
for j in Model.nc:
ujk+=Model.u[j,k]
return ujk == 0
Model._8 = Constraint(Model.n0,rule =_8)
#def _8 (Model):
#
# for k in Model.n0:
# ujk=0
# for j in Model.nc:
# ujk+=Model.u[j,k]
# return ujk == 0
#Model._8 = Constraint(rule =_8)
#def _9 (Model,k):
# vjk = 0
# zjkpj = 0
# for j in Model.nc:
# vjk += Model.u[k,j]
# zjkpj += Model.z[j,k]*Model.p[j]
# return vjk == zjkpj
#Model._9 = Constraint(Model.n0,rule =_9)
def _9 (Model,k):
    vkj = 0
    zjkpj = 0
    for j in Model.nc:
        vkj += Model.v[k,j]  # v (pick-up flow), mirroring constraint _7 which uses u for deliveries
        zjkpj += Model.z[j,k]*Model.p[j]
    return vkj == zjkpj
Model._9 = Constraint(Model.n0,rule =_9)
#def _10 (Model,k):
#
# vjk=0
# for j in Model.nc:
# vjk+=Model.v[j,k]
# return vjk == 0
#Model._10 = Constraint(Model.n0,rule =_10)
def _10 (Model,k):
vjk=0
for j in Model.nc:
vjk+=Model.v[j,k]
return vjk == 0
Model._10 = Constraint(Model.n0,rule =_10)
#def _10 (Model):
#
# for k in Model.n0:
# vjk=0
# for j in Model.nc:
# vjk+=Model.v[j,k]
# return vjk == 0
#Model._10 = Constraint(rule =_10)
def _11 (Model,i,j):
return Model.u[i,j] <= (cv - Model.d[i])* Model.x[i,j]
Model._11 = Constraint(Model.nc,Model.n,rule =_11)
#def _11 (Model):
# for j in Model.n:
# for i in Model.nc:
# return Model.u[i,j] <= (cv - Model.d[i])* Model.x[i,j]
#Model._11 = Constraint(rule =_11)
def _12 (Model,i,j):
return Model.v[i,j] <= (cv - Model.p[i])* Model.x[i,j]
Model._12 = Constraint(Model.nc,Model.n,rule =_12)
#def _12 (Model):
# for i in Model.nc:
# for j in Model.n:
# return Model.v[i,j] <= (cv - Model.p[i])* Model.x[i,j]
#Model._12 = Constraint(rule =_12)
#def _13 (Model,i):
# for j in Model.nc:
# return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
#Model._13 = Constraint(Model.n,rule =_13)
def _13 (Model,i,j):
    return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
Model._13 = Constraint(Model.n,Model.nc,rule =_13)
#def _14 (Model,i,j):
#
# return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
#Model._14 = Constraint(Model.nc,Model.n,rule =_14)
def _14 (Model,i,j):
    return Model.v[i,j] >= Model.p[j]*Model.x[i,j]
Model._14 = Constraint(Model.n,Model.nc,rule =_14)
def _15 (Model,i):
    zik = 0
    for k in Model.n0:
        zik += Model.z[i,k]
    return zik == 1
Model._15 = Constraint(Model.nc,rule =_15)
def _16 (Model,k):
    dizik = 0
    for i in Model.nc:
        dizik += Model.d[i] * Model.z[i,k]
    return dizik <= cd * Model.y[k]
Model._16 = Constraint(Model.n0,rule =_16)
def _17 (Model,k):
    pizik = 0
    for i in Model.nc:
        pizik += Model.p[i] * Model.z[i,k]  # p (pick-ups); the original summed d by mistake
    return pizik <= cd * Model.y[k]
Model._17 = Constraint(Model.n0,rule =_17)
def _18 (Model,i,k):
    return Model.x[i,k] <= Model.z[i,k]
Model._18 = Constraint(Model.nc,Model.n0,rule =_18)
def _19 (Model,i,k):
    return Model.x[k,i] <= Model.z[i,k]
Model._19 = Constraint(Model.nc,Model.n0,rule =_19)
def _20 (Model,i,j,k):
    if i == j:
        return Constraint.Skip
    zjm = sum(Model.z[j,m] for m in Model.n0 if m != k)
    return Model.x[i,j] + Model.z[i,k] + zjm <= 2
Model._20 = Constraint(Model.nc,Model.nc,Model.n0,rule =_20)
def obj (Model):
cx = 0
fdy = 0
fvx = 0
for i in Model.n :
for j in Model.n :
cx += Model.c[i,j]*Model.x[i,j]
for k in Model.n0 :
fdy += Model.fd[k] * Model.y[k]
for k in Model.n0 :
for i in Model.nc :
fvx += fv * Model.x[k,i]
return cx + fdy + fvx
Model.objective = Objective(rule=obj)
print('!'*60)
def pyomo_postprocess(options=None, instance=None, results=None):
Model.objective.display()
#This is an optional code path that allows the script to be run outside of
#pyomo command-line. For example: python transport.py
if __name__ == '__main__':
# This emulates what the pyomo command-line tools does
from pyomo.opt import SolverFactory
# import pyomo.environ
opt = SolverFactory("gurobi")
results = opt.solve(Model)
#sends results to stdout
results.write()
print("\nDisplaying Solution\n" + '!'*60)
pyomo_postprocess(None, Model, results)
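# #### The travel-cost parameter `c` above is the Euclidean distance between the randomly generated node coordinates; a minimal library-free check of that construction:

```python
import math
import random

random.seed(1)
# Coordinates drawn the same way as x1/y1 above (10 nodes).
coords = {i: (random.randint(50, 100), random.randint(50, 100)) for i in range(1, 11)}

def cost(i, j):
    (xi, yi), (xj, yj) = coords[i], coords[j]
    return math.hypot(xi - xj, yi - yj)  # same as ((dx)**2 + (dy)**2)**0.5
```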
# +
# %%time
from __future__ import division
from pyomo.environ import *
import random as ra
for l in range(10):
Model = ConcreteModel()
customers = 8
depots = 2
nodes = customers + depots
Model.n = RangeSet(1,nodes)
Model.n0 = set (i for i in Model.n if i <= depots)
Model.nc = set (i for i in Model.n if i > depots )
#def x1 (Model,i):
# return ra.randint(50,150)
#Model.x1 = Param( Model.n , initialize = x1)
x1 = {}
for i in Model.n:
x1[i] = ra.randint(50,100)
Model.x1 = Param(Model.n,initialize = x1)
#def y1 (Model,i):
# return ra.randint(50,150)
#Model.y1 = Param(Model.n,initialize = y1)
y1 = {}
for i in Model.n:
y1[i] = ra.randint(50,100)
Model.y1 = Param(Model.n,initialize = y1)
#def d (Model,i):
# return ra.randint(50,150)
#Model.d = Param(Model.nc,initialize = d)
d = {}
for i in Model.nc :
# if i > depots :
d[i] = ra.randint(50,100)
Model.d = Param(Model.nc,initialize = d)
#def p (Model,i):
# return ra.randint(50,150)
#Model.p = Param(Model.nc,initialize = p)
p = {}
for i in Model.nc :
# if i > depots :
p[i] = ra.randint(50,100)
Model.p = Param(Model.nc,initialize = p)
#def fd (Model,i):
# return ra.randint(50,100)
#Model.p = Param(Model.n0,initialize = fd)
fd ={}
for i in Model.n0 :
# if i <depots+1:
fd[i] = ra.randint(10,50)
Model.fd = Param(Model.n0,initialize = fd)
cd = 500
cv = 200
fv =20
def c (Model,i,j):
return ((x1[i]-x1[j])**2+ (y1[i]-y1[j])**2)**0.5
Model.c = Param( Model.n,Model.n , initialize = c)
#c ={}
#for i in Model.n:
# for j in Model.n:
# c[i,j] = ((x1[i]-x1[j])**2+ (y1[i]-y1[j])**2)**0.5
#
#Model.c = Param(Model.n,Model.n ,initialize = c)
#Model.zz = Var(within = Reals)
#Model.z3 =set( Model.nc , Model.n0)
Model.x = Var(Model.n , Model.n , within = Binary)
Model.y = Var(Model.n0 , within = Binary)
Model.z = Var(Model.nc , Model.n0 , within = Binary)
# NonNegativeReals, not PositiveReals: constraints _8 and _10 force some flows to be exactly zero
Model.v = Var(Model.n,Model.n , within = NonNegativeReals)
Model.u = Var(Model.n,Model.n , within = NonNegativeReals)
#for i in Model.n:
# for k in Model.n:
# Model.z =
def _2 (Model,i):
xij=0
for j in Model.n:
xij+=Model.x[i,j]
return xij ==1
Model._2 = Constraint(Model.nc,rule =_2)
#def _2 (Model):
#
# for i in Model.nc:
# xij=0
# for j in Model.n:
# xij+=Model.x[i,j]
# return xij ==1
#Model._2 = Constraint(rule =_2)
def _3 (Model,i):
xij = 0
xji = 0
for j in Model.n:
xij+=Model.x[i,j]
xji+=Model.x[j,i]
return xij == xji
Model._3 = Constraint(Model.n,rule =_3)
#def _3 (Model):
# for i in Model.n:
# xij=0
# xji=0
# for j in Model.n:
# xij+=Model.x[i,j]
# xji+=Model.x[j,i]
# return xij ==xji
#Model._3 = Constraint(rule =_3)
def _4 (Model,i):
uij = 0
uji = 0
for j in Model.n:
uji +=Model.u[j,i]
uij +=Model.u[i,j]
return uji - uij == Model.d[i]
Model._4 = Constraint(Model.nc,rule =_4)
#def _4 (Model):
# for i in Model.nc:
# uij = 0
# uji = 0
# for j in Model.n:
# uji +=Model.u[j,i]
# uij +=Model.u[i,j]
# return uji - uij == Model.d[i]
#Model._4 = Constraint(rule =_4)
def _5 (Model,i):
vij = 0
vji = 0
for j in Model.n:
vji +=Model.v[j,i]
vij +=Model.v[i,j]
return vji - vij == Model.p[i]
Model._5 = Constraint(Model.nc,rule =_5)
#def _5 (Model):
# for i in Model.nc:
# vij = 0
# vji = 0
# for j in Model.n:
# vji +=Model.v[j,i]
# vij +=Model.v[i,j]
# return vji - vij == Model.p[i]
#Model._5 = Constraint(rule =_5)
# Index over both i and j: the original loop returned after generating only one constraint.
def _6 (Model,i,j):
    if i != j:
        return Model.u[i,j] + Model.v[i,j] <= cv * Model.x[i,j]
    return Constraint.Skip
Model._6 = Constraint(Model.n,Model.n,rule =_6)
#def _6 (Model):
# for i in Model.n:
# for j in Model.n:
# if i!= j:
# return Model.u[i,j] + Model.v[i,j] <=cv * Model.x[i,j]
#Model._6 = Constraint(rule =_6)
def _7 (Model,k):
ukj = 0
zjkdj = 0
for j in Model.nc:
ukj += Model.u[k,j]
zjkdj +=Model.z[j,k]*Model.d[j]
return ukj == zjkdj
Model._7 = Constraint(Model.n0,rule =_7)
#def _7 (Model):
# for k in Model.nc:
# ukj = 0
# zjkdj = 0
# for j in Model.nc:
# ukj += Model.u[k,j]
# zjkdj +=Model.z[j,k]*Model.d[j]
# return ukj == zjkdj
#Model._7 = Constraint(rule =_7)
def _8 (Model,k):
ujk=0
for j in Model.nc:
ujk+=Model.u[j,k]
return ujk == 0
Model._8 = Constraint(Model.n0,rule =_8)
#def _8 (Model):
#
# for k in Model.n0:
# ujk=0
# for j in Model.nc:
# ujk+=Model.u[j,k]
# return ujk == 0
#Model._8 = Constraint(rule =_8)
#def _9 (Model,k):
# vjk = 0
# zjkpj = 0
# for j in Model.nc:
# vjk += Model.u[k,j]
# zjkpj += Model.z[j,k]*Model.p[j]
# return vjk == zjkpj
#Model._9 = Constraint(Model.n0,rule =_9)
def _9 (Model,k):
    vkj = 0
    zjkpj = 0
    for j in Model.nc:
        vkj += Model.v[k,j]
        zjkpj += Model.z[j,k]*Model.p[j]
    return vkj == zjkpj
Model._9 = Constraint(Model.n0,rule =_9)
#def _10 (Model,k):
#
# vjk=0
# for j in Model.nc:
# vjk+=Model.v[j,k]
# return vjk == 0
#Model._10 = Constraint(Model.n0,rule =_10)
def _10 (Model,k):
vjk=0
for j in Model.nc:
vjk+=Model.v[j,k]
return vjk == 0
Model._10 = Constraint(Model.n0,rule =_10)
#def _10 (Model):
#
# for k in Model.n0:
# vjk=0
# for j in Model.nc:
# vjk+=Model.v[j,k]
# return vjk == 0
#Model._10 = Constraint(rule =_10)
def _11 (Model,i,j):
return Model.u[i,j] <= (cv - Model.d[i])* Model.x[i,j]
Model._11 = Constraint(Model.nc,Model.n,rule =_11)
#def _11 (Model):
# for j in Model.n:
# for i in Model.nc:
# return Model.u[i,j] <= (cv - Model.d[i])* Model.x[i,j]
#Model._11 = Constraint(rule =_11)
def _12 (Model,i,j):
return Model.v[i,j] <= (cv - Model.p[i])* Model.x[i,j]
Model._12 = Constraint(Model.nc,Model.n,rule =_12)
#def _12 (Model):
# for i in Model.nc:
# for j in Model.n:
# return Model.v[i,j] <= (cv - Model.p[i])* Model.x[i,j]
#Model._12 = Constraint(rule =_12)
#def _13 (Model,i):
# for j in Model.nc:
# return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
#Model._13 = Constraint(Model.n,rule =_13)
def _13 (Model,i,j):
    return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
Model._13 = Constraint(Model.n,Model.nc,rule =_13)
#def _14 (Model,i,j):
#
# return Model.u[i,j] >= Model.d[j]*Model.x[i,j]
#Model._14 = Constraint(Model.nc,Model.n,rule =_14)
def _14 (Model,i,j):
    return Model.v[i,j] >= Model.p[j] * Model.x[i,j]
Model._14 = Constraint(Model.n,Model.nc,rule =_14)
def _15 (Model,i):
    zik = 0
    for k in Model.n0:
        zik += Model.z[i,k]
    return zik == 1
Model._15 = Constraint(Model.nc,rule =_15)
def _16 (Model,k):
    dizik = 0
    for i in Model.nc:
        dizik += Model.d[i] * Model.z[i,k]
    return dizik <= cd * Model.y[k]
Model._16 = Constraint(Model.n0,rule =_16)
def _17 (Model,k):
    pizik = 0
    for i in Model.nc:
        pizik += Model.p[i] * Model.z[i,k]
    return pizik <= cd * Model.y[k]
Model._17 = Constraint(Model.n0,rule =_17)
def _18 (Model,k,i):
    return Model.x[i,k] <= Model.z[i,k]
Model._18 = Constraint(Model.n0,Model.nc,rule =_18)
def _19 (Model,k,i):
    return Model.x[k,i] <= Model.z[i,k]
Model._19 = Constraint(Model.n0,Model.nc,rule =_19)
def _20 (Model,k,i,j):
    # one constraint per (k, i, j) triple; skip the diagonal i == j
    if i != j:
        zjm = 0
        for m in Model.n0:
            if m != k:
                zjm += Model.z[j,m]
        return Model.x[i,j] + Model.z[i,k] + zjm <= 2
    return Constraint.Skip
Model._20 = Constraint(Model.n0,Model.nc,Model.nc,rule =_20)
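The commented-out rule variants above loop over a set and `return` inside the loop. A Python function exits on its first `return`, so a rule written that way hands the framework only one constraint expression instead of one per index; the indexed-rule pattern avoids this. A minimal language-level sketch of the difference, in plain Python with hypothetical names (no Pyomo required):

```python
# A rule that loops and returns inside the loop exits on the first iteration,
# so only a single expression is ever produced.
def looping_rule(indices):
    for i in indices:
        return f"constraint_{i}"

# The indexed-rule pattern: the rule takes an index argument and the
# framework calls it once per member of the indexing set.
def indexed_rule(i):
    return f"constraint_{i}"

single = looping_rule(range(3))
per_index = [indexed_rule(i) for i in range(3)]
print(single)     # constraint_0
print(per_index)  # ['constraint_0', 'constraint_1', 'constraint_2']
```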
def obj (Model):
cx = 0
fdy = 0
fvx = 0
for i in Model.n :
for j in Model.n :
cx += Model.c[i,j]*Model.x[i,j]
for k in Model.n0 :
fdy += Model.fd[k] * Model.y[k]
for k in Model.n0 :
for i in Model.nc :
fvx += fv * Model.x[k,i]
return cx + fdy + fvx
Model.objective = Objective(rule=obj)
print('!'*60)
def pyomo_postprocess(options=None, instance=None, results=None):
Model.objective.display()
#This is an optional code path that allows the script to be run outside of
#pyomo command-line. For example: python transport.py
if __name__ == '__main__':
# This emulates what the pyomo command-line tools does
from pyomo.opt import SolverFactory
# import pyomo.environ
opt = SolverFactory("gurobi")
results = opt.solve(Model)
#sends results to stdout
results.write()
print("\nDisplaying Solution\n" + '!'*60)
pyomo_postprocess(None, Model, results)
# -
| pyomo_8_2_gurobi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Analysis") \
.getOrCreate()
# -
# # Read all 2016 Data and Cache
# - [wiki](https://en.wikipedia.org/wiki/January_2016_United_States_blizzard)
df = spark.read.parquet("/taxi/dataset.parquet").filter("year = 2016")
import pyspark.sql.functions as f
data_2016 = (
df.groupBy("month").count().orderBy("month")
).toPandas()
data_2016.plot(
x='month', y='count', figsize=(12, 6),
title='Rides in 2016',
legend=False,
kind='bar',
xlabel='Month',
ylabel='Rides'
)
# ### January 2016
jan = df.filter("month = 01").withColumn('day', f.dayofmonth("pickup_datetime"))
jan.show(3)
data_jan = (
    jan.groupBy("day").count().orderBy("day")
).toPandas()
data_jan.plot(
x='day', y='count', figsize=(12, 6),
title='Rides in Jan/2016',
legend=False,
kind='bar',
xlabel='Days',
ylabel='Rides'
)
# ### Stopping Spark
spark.stop()
| V10/Jupyter_Notebooks_For_Taxi/2_Taxi_Analysis/Prepared/2_Analysis_rides_2016.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Imports
import os
import xarray as xr
from affine import Affine
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.ticker as mticker
# from matplotlib_scalebar.scalebar import ScaleBar
# from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
import matplotlib.font_manager as fm
import pandas as pd
import georasters as gr
import cmocean
# +
# Read the data
da = xr.open_rasterio('../Week 6/Elevation.tif')
transform = Affine.from_gdal(*da.attrs['transform'])
# Define the projection
crs=ccrs.PlateCarree()
# +
data=da.variable.data[0]
data = data.astype('float')
data[data == 32767] = np.nan
y=da.y
x=da.x
# Define extents
lat_min = min(da.y)
lat_max = max(da.y)
lon_min = min(da.x)
lon_max = max(da.x)
# -
x.values.shape, y.values.shape
# +
# Plot!
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection=crs)
im = ax.contourf(x, y, data, cmap=cmocean.cm.delta, extend='both', linestyles="solid")
ax.set_extent([lon_min, lon_max, lat_min, lat_max])
ax.set_facecolor('#f5f5f5')
fontprops = fm.FontProperties(size=18)
# Grid and Labels
gl = ax.gridlines(crs=crs, draw_labels=True, alpha=0.5)
gl.top_labels = None
gl.right_labels = None
xgrid = np.arange(lon_min-0.5, lon_max+.5, 1.)
ygrid = np.arange(lat_min, lat_max+1, 1.)
gl.xlabel_style = {'size': 14, 'color': 'black'}
gl.ylabel_style = {'size': 14, 'color': 'black'}
fig.colorbar(im, extend="both")
#Add North arrow
x1, y1, arrow_length = 0.05, 0.95, 0.1
ax.annotate('N', xy=(x1, y1), xytext=(x1, y1-arrow_length),
arrowprops=dict(facecolor='black', width=5, headwidth=15),
ha='center', va='center', fontsize=20,
xycoords=ax.transAxes)
# Add title and axis names
plt.title('Elevation (m)', fontproperties=fontprops)
# scalebar = AnchoredSizeBar(ax.transData,
# 20, '20 m', 'lower center',
# pad=0.1,
# color='white',
# frameon=False,
# size_vertical=1,
# fontproperties=fontprops)
# ax.add_artist(scalebar)
#scalebar = ScaleBar(1, "m", length_fraction=0.25,box_color=None,box_alpha=0, location='lower left')
#ax.add_artist(scalebar)
plt.show()
# -
data.shape
vlm = np.genfromtxt('new_vlm.csv', delimiter=',')
vlm_df = pd.read_excel('../Week 5/data.xls')
elevation_file = gr.from_file('../Week 6/Elevation.tif')
elevation_df = elevation_file.to_geopandas()
vlm_df.head()
plt.scatter(vlm_df.Longitude, vlm_df.Latitude, c=vlm_df.VLM, cmap='summer', edgecolors='black', s=100)
plt.plot()
min_x, max_x = min(elevation_df.x), max(elevation_df.x)
min_y, max_y = min(elevation_df.y), max(elevation_df.y)
min_x, max_x, min_y, max_y
vlm_points = vlm_df[(vlm_df['Longitude'] >= min_x) & (vlm_df['Longitude'] <= max_x)
& (vlm_df['Latitude'] >= min_y) & (vlm_df['Latitude'] <= max_y)]
vlm_points.reset_index(inplace=True)
vlm_points.drop(columns=['index'], axis=1, inplace=True)
# +
# Plot!
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection=crs)
im = ax.contour(x, y, vlm, colors='black', extend='both', linestyles='solid')
ax.clabel(im, inline=True, fontsize=10)
ax.imshow(vlm, extent=[min(x), max(x), min(y), max(y)], cmap=plt.cm.Spectral)
ax.scatter(vlm_df.Longitude, vlm_df.Latitude, c=vlm_df.VLM, cmap=plt.cm.Spectral, edgecolors='black', s=100)
for i, txt in enumerate(vlm_points.VLM):
ax.annotate(txt, (vlm_points.Longitude[i], vlm_points.Latitude[i]-0.02), ha='center', va='bottom')
ax.set_extent([lon_min, lon_max, lat_min, lat_max])
ax.set_facecolor('#d3d3d3')
fontprops = fm.FontProperties(size=18)
# Grid and Labels
gl = ax.gridlines(crs=crs, draw_labels=True, alpha=0.5)
gl.top_labels = None
gl.right_labels = None
xgrid = np.arange(lon_min-0.5, lon_max+.5, 1.)
ygrid = np.arange(lat_min, lat_max+1, 1.)
gl.xlabel_style = {'size': 14, 'color': 'black'}
gl.ylabel_style = {'size': 14, 'color': 'black'}
fig.colorbar(im, extend="both")
#Add North arrow
x1, y1, arrow_length = 0.05, 0.95, 0.1
ax.annotate('N', xy=(x1, y1), xytext=(x1, y1-arrow_length),
arrowprops=dict(facecolor='black', width=5, headwidth=15),
ha='center', va='center', fontsize=20,
xycoords=ax.transAxes)
# Add title and axis names
plt.title('Vertical land movement (mm)', fontproperties=fontprops)
plt.show()
# +
# Plot!
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection=crs)
im = ax.contour(x, y, vlm, cmap=plt.cm.Spectral, extend='both', linestyles='solid')
ax.clabel(im, inline=True, fontsize=10)
ax.imshow(vlm, extent=[min(x), max(x), min(y), max(y)], cmap=plt.cm.Spectral)
ax.scatter(vlm_df.Longitude, vlm_df.Latitude, c=vlm_df.VLM, cmap=plt.cm.Spectral, edgecolors='black', s=100)
for i, txt in enumerate(vlm_points.VLM):
ax.annotate(txt, (vlm_points.Longitude[i], vlm_points.Latitude[i]-0.02), ha='center', va='bottom')
ax.set_extent([lon_min, lon_max, lat_min, lat_max])
ax.set_facecolor('#d3d3d3')
fontprops = fm.FontProperties(size=18)
# Grid and Labels
gl = ax.gridlines(crs=crs, draw_labels=True, alpha=0.5)
gl.top_labels = None
gl.right_labels = None
xgrid = np.arange(lon_min-0.5, lon_max+.5, 1.)
ygrid = np.arange(lat_min, lat_max+1, 1.)
gl.xlabel_style = {'size': 14, 'color': 'black'}
gl.ylabel_style = {'size': 14, 'color': 'black'}
fig.colorbar(im, extend="both")
#Add North arrow
x1, y1, arrow_length = 0.05, 0.95, 0.1
ax.annotate('N', xy=(x1, y1), xytext=(x1, y1-arrow_length),
arrowprops=dict(facecolor='black', width=5, headwidth=15),
ha='center', va='center', fontsize=20,
xycoords=ax.transAxes)
# Add title and axis names
plt.title('Vertical land movement (mm)', fontproperties=fontprops)
plt.show()
# -
| Ngoc/Graphs/Visualizations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
mu, sigma = 0, 0.1 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
import matplotlib.pyplot as plt
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2, color='r')
import numpy as np
mu, sigma = 0, 20 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
import matplotlib.pyplot as plt
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2, color='r')
import numpy as np
mu, sigma = 0, 20 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)*0.1
import matplotlib.pyplot as plt
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(2 * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * 2**2) ),
linewidth=2, color='r')
import numpy as np
mu, sigma = 0, 2 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
import matplotlib.pyplot as plt
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2, color='r')
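The cells above compare samples from N(0, 20) scaled by 0.1 against samples drawn directly from N(0, 2). They should overlap because scaling a zero-mean normal by c scales its standard deviation by c. A stdlib-only check of that property (seed and sample size chosen arbitrarily for the sketch):

```python
import random
import statistics

random.seed(42)  # fixed seed so the check is repeatable
scaled = [0.1 * random.gauss(0, 20) for _ in range(100_000)]
direct = [random.gauss(0, 2) for _ in range(100_000)]

# Both samples should have a standard deviation close to 2.
print(round(statistics.stdev(scaled), 1), round(statistics.stdev(direct), 1))
```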
| DP_test/s20*0.1vss2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## TODO
# * Maybe they return cached answers and don't count our requests? Then we won't exceed the request limit
# ### Experiment #1: separating block of account from block of ip
# #### At the moment, when an access error occurs we don't know whether the account, the IP, or both were blocked. So we reach the limit, then try to use them separately. This way we can find out that one of the resources was not blocked and keep using it.
# +
import json
with open("../../resources/checkpoints/data_checkpoint.json") as f:
user_data = json.load(f)
# -
# will just retrieve groups walls
groups_to_get = user_data["68076353"]["groups"]
proxy = input("Proxy address\n")
login = input("Login\n")
password = input("<PASSWORD>")
import sys
sys.path.insert(0, "../../")
# +
from suvec.vk_api_impl.session.session_manager import SessionManager
from suvec.vk_api_impl.errors_handler import VkApiErrorsHandler
from suvec.vk_api_impl.session.records_managing import ProxyManager, CredsManager
from suvec.vk_api_impl.session.records_managing.terminal_out_of_records import TerminalOutOfProxy, TerminalOutOfCreds
import utils
from suvec.vk_api_impl.crawl_runner_with_checkpoints import VkCrawlRunnerWithCheckpoints
from suvec.common.events_tracking.terminal_events_tracker import TerminalEventsTracker
from suvec.vk_api_impl.session.records_managing.records_storing import ProxyStorage, CredsStorage
from suvec.vk_api_impl.session.records_managing.records_storing.serializers import ProxyRecordsSerializer, \
CredsRecordsSerializer
# +
settings_path = "../../settings.json"
proxies_save_pth, creds_save_pth = utils.get_proxy_and_creds_paths(settings_path)
checkp_data, checkp_requester = utils.get_data_requester_checkpoint_paths(settings_path)
result_file = utils.get_result_path(settings_path)
backups_path = utils.get_backups_path(settings_path)
proxy_storage = ProxyStorage(proxies_save_pth, ProxyRecordsSerializer())
creds_storage = CredsStorage(creds_save_pth, CredsRecordsSerializer())
# +
errors_handler = VkApiErrorsHandler(None)
out_of_proxy_handler = TerminalOutOfProxy()
out_of_creds_handler = TerminalOutOfCreds()
proxy_manager = ProxyManager(proxy_storage, None, out_of_proxy_handler,
hours_for_resource_reload=24)
creds_manager = CredsManager(creds_storage, None, out_of_creds_handler,
hours_for_resource_reload=24)
session_manager = SessionManager(errors_handler, proxy_manager, creds_manager)
# -
session = session_manager._create_session(login, password, proxy)
# + tags=[]
from time import sleep
from random import choice
cnt_requests = 0
def _make_request(group_id):
request = session.vk_session.method("groups.getById", values={"group_ids": group_id})
return request
while True:
if cnt_requests % 100 == 0:
print("cnt", cnt_requests)
for _ in range(3):
random_group_id = choice(groups_to_get)
res = _make_request(random_group_id)
cnt_requests += 1
sleep(0)
# -
# # Blocked ~ at 15:30 11.06.21
print(res)
# ### Changed proxy: didn't help, so creds were blocked
other_proxy = "192.168.127.12:7580"
session_other_proxy = session_manager._create_session(login, password, other_proxy)
# +
cnt_requests = 0
def _make_request(group_id):
request = session_other_proxy.vk_session.method("groups.getById", values={"group_ids": group_id})
return request
while True:
if cnt_requests % 100 == 0:
print("cnt", cnt_requests)
for _ in range(3):
random_group_id = choice(groups_to_get)
res = _make_request(random_group_id)
cnt_requests += 1
sleep(0)
# -
# ### Changed creds: helped, so proxy were not blocked
# ### Update: creds were blocked one more time, I reset them and everything works. So the conclusion is: accounts are blocked far earlier than proxies
#
# ### Note: every cell worked ~ 2 hours before access error, so maybe situation changes in case of hour limits
# ### Update: works one more time
# ### Update: at 01:30 reset creds with ones that exceeded the limit at 15:30. Maybe they aren't blocked for 24 hours, but blocked for 1 hour or till the end of the day
# ## TODO: exceed limit and retry with same creds after hour, after 2 hours etc.
other_creds = "<PASSWORD>", "<PASSWORD>"
session_other_creds = session_manager._create_session(*other_creds, proxy)
# + tags=[]
cnt_requests = 0
def _make_request(group_id):
request = session_other_creds.vk_session.method("groups.getById", values={"group_ids": group_id})
return request
while True:
if cnt_requests % 100 == 0:
print("cnt", cnt_requests)
for _ in range(3):
random_group_id = choice(groups_to_get)
res = _make_request(random_group_id)
cnt_requests += 1
sleep(0)
# -
| notebooks/vk_experiments/requests_proxy_creds_block.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import os # operating system
import pandas as pd
import matplotlib.pyplot as plt
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
random_state : int
Random number generator seed for random weight
initialization.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications (updates) in each epoch.
"""
def __init__(self, eta=0.01, n_iter=50, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_examples, n_features]
Training vectors, where n_examples is the number of
examples and n_features is the number of features.
y : array-like, shape = [n_examples]
Target values.
Returns
-------
self : object
"""
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01,
size=1 + X.shape[1]) # number of columns (features) + 1
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
s = os.path.join('https://archive.ics.uci.edu', 'ml',
'machine-learning-databases',
'iris','iris.data')
print('URL:', s)
df = pd.read_csv('iris.data', header=None, encoding='utf-8')
df
# +
# Next, we extract the first 100 class labels that correspond to the 50 Iris-setosa and
# 50 Iris-versicolor flowers and convert the class labels into the two integer class labels,
# 1 (versicolor) and -1 (setosa)
# select setosa and versicolor
y = df.iloc[0:100, 4].values
# y
# -
y = np.where(y == 'Iris-setosa', -1, 1)
# y
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# X
# plot data
plt.scatter(X[:50, 0], X[:50, 1],
color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.show()
# Training
ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1),
ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.show()
# Decision regions
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 5, X[:, 0].max() + 5
x2_min, x2_max = X[:, 1].min() - 5, X[:, 1].max() + 5
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class examples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
for n in range(10):
ppn = Perceptron(eta=0.01, n_iter=n)
ppn.fit(X, y)
plot_decision_regions(X, y, classifier=ppn)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.show()
| adaline/adaline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Tests on PDA
# + run_control={"frozen": false, "read_only": false}
from jove.SystemImports import *
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_PDA import *
# + run_control={"frozen": false, "read_only": false}
repda = md2mc('''PDA
!!R -> R R | R + R | R* | ( R ) | 0 | 1 | e
I : '', # ; R# -> M
M : '', R ; RR -> M
M : '', R ; R+R -> M
M : '', R ; R* -> M
M : '', R ; (R) -> M
M : '', R ; 0 -> M
M : '', R ; 1 -> M
M : '', R ; e -> M
M : 0, 0 ; '' -> M
M : 1, 1 ; '' -> M
M : (, ( ; '' -> M
M : ), ) ; '' -> M
M : +, + ; '' -> M
M : e, e ; '' -> M
M : '', # ; # -> F
'''
)
# + run_control={"frozen": false, "read_only": false}
repda
# + run_control={"frozen": false, "read_only": false}
DO_repda = dotObj_pda(repda, FuseEdges = True)
# + run_control={"frozen": false, "read_only": false}
DO_repda
# + run_control={"frozen": false, "read_only": false}
DO_repda.source
# + run_control={"frozen": false, "read_only": false}
fn_dom(repda["Delta"])
# + run_control={"frozen": false, "read_only": false}
fn_range(repda["Delta"])
# + run_control={"frozen": false, "read_only": false}
# + run_control={"frozen": false, "read_only": false}
| notebooks/driver/Drive_Fuse_Multi_Edge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from keras.models import load_model
model = load_model('SH_NNmodel.h5')
import pandas as pd
tr_data1=pd.read_csv('avg.csv',index_col=0)
tr=(tr_data1-tr_data1.min())/(tr_data1.max()-tr_data1.min())
import numpy as np
Y=np.array(tr.iloc[:,0:1])
X=np.array(tr.iloc[:,1:15])
model.predict(X)
Y
(model.predict(X)-Y)/Y
| SHDKY/.ipynb_checkpoints/model-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # These are a series of notebooks presented at the 2018 User Conference
# They highlight configuring and adding users, assignment types, integrations, and assignments in various ways. There are 6 notebooks in the series (excluding this one). They should be followed in order from 1 to 6. By substituting your own credentials, layers, and project, these workflows should be easy to replicate.
#
# These notebooks require the ArcGIS API for Python version 1.4.1 or higher as well as shapely or arcpy installed.
#
# In the blocks below, there is some code that can be used to throughout the notebooks (if necessary).
# ### Build SQLite Database and Table
# For notebook number 4, assignments are read from a SQLite database. The following block creates the database using sqlite3 and pandas.
# +
import sqlite3
from datetime import datetime
import pandas as pd
df = pd.DataFrame(
[
[1, datetime(2018, 7, 12), "Colombia St & Broadway, San Diego, CA", 1, "Sidewalk Repair", "Completed", "The sidewalk needs to be fixed.", "Done"],
[2, datetime(2018, 7, 13), "1800 Fifth Ave, San Diego, CA", 1, "Sidewalk Repair", "Completed", "The sidewalk is uneven due to tree roots.", "Finished"],
[3, datetime(2018, 7, 14), "2115 Imperial Ave, San Diego, CA", 2, "Sidewalk Repair", "Backlog", "The sidewalk is very uneven.", None],
[4, datetime(2018, 7, 15), "South Evans St & Franklin Ave, San Diego, CA", 2, "Sidewalk Repair", "Backlog", "Please fix the sidewalk near the intersection", None],
[5, datetime(2018, 7, 16), "Market St & 31st St, San Diego, CA", 3, "Sidewalk Repair", "Backlog", "Fix my side walk", None],
[6, datetime(2018, 7, 12), "Ivy St & Fern St, San Diego, CA", 3, "Sidewalk Repair", "Backlog", "Fix the side walk in front of my shop", None],
],
columns=["id", "due_date", "address", "priority", "type", "status", "description", "notes"])
connection = sqlite3.connect("work_orders")
df.to_sql("work_orders", connection, if_exists="replace")
df
# -
# ### Reset Project
# The following block can be used to reset the Workforce Project to the original state.
# +
from arcgis.gis import GIS
from arcgis.apps import workforce
gis = GIS("https://arcgis.com", "workforce_scripts")
item = gis.content.get("1f7b42024da544f6b1e557889e858ac6")
project = workforce.Project(item)
project.assignments_item.layers[0].delete_features(where="1=1")
project.workers.batch_delete(project.workers.search())
project.dispatchers.batch_delete(project.dispatchers.search("userId <> 'workforce_scripts'"))
project.assignment_types.batch_delete(project.assignment_types.search())
project.integrations.batch_delete([project.integrations.get("default-explorer"), project.integrations.get("waze-navigation")])
| notebooks/UC_2018/integrating_workforce_demo_theatre/UC 2018 - 0 - Demo Overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Editor : Sahil
#
# # Task 1 : Prediction using Supervised Machine Learning
#
# ## Under The Sparks Foundation
# ## Importing Required Libraries-- Numpy Array, Pandas, Matplotlib
#
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# ## Step 1 -- Reading the Data
data= pd.read_csv("http://bit.ly/w-data")
print("Data imported successfully")
data
data.head()
# ## Step 2 -- Input data Visualization
#
data.plot(x="Hours" , y="Scores" , style ='*')
plt.title(" Hours Vs Scores of Students")
plt.xlabel("Hours studied by Student")
plt.ylabel("Scores obtained")
plt.show()
# ### As we see in the above graph, there is an almost linear relation between our variables; therefore a first-order linear regression with positive slope can be fitted here
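Before reaching for scikit-learn, the first-order fit the plot suggests can be sketched with the closed-form least-squares formulas: slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x). The toy data below is illustrative only, not the notebook's hours/scores:

```python
# Closed-form simple linear regression on a small made-up sample.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.2, 5.9, 8.1, 10.0]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = sum((x - x̄)(y - ȳ)) / sum((x - x̄)²)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 1.97 0.15
```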
import seaborn as sns
sns.distplot(data['Scores'], bins=5)
plt.show()
# This is correlation Matrix
sns.heatmap(data.corr(),annot=True)
# ## Step 3 -- Data Preprocessing
x = data.iloc[:, :-1].values
y = data.iloc[:, 1].values
x
y
# ## Step 4 -- Model Training
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,train_size=0.1,random_state=0)
# +
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train.reshape(-1,1), y_train)
print("Training complete.")
# +
# Plotting the regression line
line = regressor.coef_*x+regressor.intercept_
# Plotting for the test data
plt.scatter(x, y)
plt.plot(x, line,color='red');
plt.show()
# -
# ## Step 5 -- Testing the Model
#
print(x_test)
y_pred = regressor.predict(x_test)
y_pred
print(pd.DataFrame({'Actual':y_test,'Predicted':y_pred}))
# ### Predicting the some of the data
hours = [[10]]
own_pred = regressor.predict(hours)
print("Number of hours = {}".format(hours))
print("Prediction Score = {}".format(own_pred[0]))
# Comparing Actual vs Predicted
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
df.plot(kind='bar',figsize=(5,5))
plt.grid(which='major', linewidth='0.5', color='red')
plt.grid(which='minor', linewidth='0.5', color='blue')
plt.show()
# ## Step 6 -- Evaluating the Model
from sklearn import metrics
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test,y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
print('R-2:', metrics.r2_score(y_test, y_pred))
# ### An R-2 value near 1 indicates a good score for the model
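The scikit-learn metrics above can be cross-checked against their definitions: MAE is the mean absolute error, and R² = 1 - SS_res / SS_tot. A stdlib-only sketch with made-up values (not the notebook's actual predictions):

```python
# MAE = mean(|y - ŷ|); R² = 1 - SS_res / SS_tot.
y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.5, 7.0, 8.0]
n = len(y_true)

mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
mean_y = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
ss_tot = sum((t - mean_y) ** 2 for t in y_true)
r2 = 1 - ss_res / ss_tot
print(mae, round(r2, 3))  # 0.5 0.925
```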
| Task 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import glob
import matplotlib.pyplot as plt
import torchvision
import librosa
import random
import json
import pickle
import numpy as np
import torch
import os
torch.multiprocessing.set_sharing_strategy('file_system')
import tqdm
import gc
import sys
sys.path.append("/home/user/Research/sonifications-paper1517/")
from src.utilities import fourier_analysis
from src.utilities import interpretability_utils
from src.utilities import feature_evolution_helper
import IPython.display as ipd
import soundfile as sf
import matplotlib.pyplot as plt
# plt.rcParams["figure.figsize"] = (30,40)
import random
import json
import tqdm
import numpy as np
exp_dir = "/media/user/nvme/contrastive_experiments/experiments_audioset_v5_full/cnn12_1x_full_tr_8x128_Adam_1e-3_warmupcosine_wd0._fixed_lr_scaling_randomgain_gaussiannoise_timemask_bgnoise_nolineareval_full_ft_fullmodel_r2"
model, net, deconv, hparams = interpretability_utils.model_helper(exp_dir, False)
# +
# with open("/media/user/nvme/contrastive_experiments/select_feature_maps.pkl", "wb") as fd:
# pickle.dump(selected_maps, fd)
# -
selected_maps = feature_evolution_helper.get_selected_maps()
selected_inputs = feature_evolution_helper.get_selected_inputs()
val_loader, val_set, lbl_map, inv_lbl_map = interpretability_utils.make_dataloader("/media/user/nvme/datasets/audioset/meta_8000/", hparams.cfg['audio_config'], csv_name="eval.csv")
ckpt_idxs = feature_evolution_helper.get_ckpt_indices(is_contrastive=False)
ckpt_idxs
from tqdm import notebook
output_dir = os.path.join(exp_dir, "feature_evolution")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
for layer_idx in notebook.tqdm(range(1, 12)):
results = feature_evolution_helper.process_features_over_training(exp_dir, ckpt_idxs, layer_idx, selected_inputs, val_set, inv_lbl_map)
feature_evolution_helper.plot_evo_spectrograms(results, layer_idx, ckpt_idxs)
| nbs_2/Feature Evolution-Pretrained-Supervised-Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from dask.distributed import Client
client = Client('scheduler:8786')
client.ncores()
def square(x):
return x ** 2
def neg(x):
return -x
A = client.map(square, range(10))
B = client.map(neg, A)
total = client.submit(sum, B)
total.result()
| dask_crosscheck.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="94e442bb-e3e6-4afb-9b37-5bc478db5f4f" _uuid="51898d40216d04490c572d5ac7598091b73d3615"
# # Analysis and Prediction-Indian Liver Patients.
# + [markdown] _cell_guid="77c7f1f3-9467-49d9-a113-b3b345b08110" _uuid="c8034ad4f2e2bb5d1c932c32232753c82a6b0e33"
# The dataset was downloaded from the UCI ML Repository.
#
# Here are the steps I'm going to perform:
# 1) Data Analysis: This is, in general, looking at the data to figure out what's going on. Inspect the data: check whether there is any missing or irrelevant data and do a cleanup.
# 2) Data Visualization:
# 3) Feature selection.
# 4) Search for any trends, relations & correlations.
# 5) Draw an inference and predict whether the patient can be identified to be having liver disease or not
# + _cell_guid="ead33c46-262f-428c-a300-5372b4f15a7a" _uuid="2cfb47f6cfc64578a9bff30d0e5ec6ab95a8e06b"
#Import all required libraries for reading data, analysing and visualizing data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from sklearn.preprocessing import LabelEncoder
# + [markdown] _cell_guid="d58aeb11-2795-4ba6-8dd1-f48160dec59c" _uuid="3b99a454b4a233394a3fa3c8a5190b198cbc84ba"
# # Data Analysis
# + _cell_guid="518b7cef-cafc-433e-94e5-ed91f9b70ba6" _uuid="104d9fe12aae79b3eff8c1ad92e432a77a2b92bc"
#Read the training & test data
liver_df = pd.read_csv('indian_liver_patient.csv')
# + [markdown] _cell_guid="0e641f71-2703-4ec3-84f8-fdf5a6f27e64" _uuid="02b022e1a75c30d06c5b88723711d0589775a1ed"
# This data set contains 416 liver-patient records and 167 non-liver-patient records collected from the north east of Andhra Pradesh, India. The "Dataset" column is a class label used to divide patients into two groups: liver disease or no disease.
# + _cell_guid="04286350-1e1c-46ac-9099-d29585d30287" _uuid="897e4e62d506ee354a42d13005f604da99f12e59"
liver_df.head()
# + _cell_guid="9a45eba7-cb42-4ac8-bd56-e75fee941808" _uuid="3e60087458159fe381e86febe8f9bca1bd097ff4"
liver_df.info()
# + [markdown] _cell_guid="aa1480e6-27f0-4073-83c9-9901d677b615" _uuid="3bc3e850db0d915e15a149433a4a3600ecc59783"
# Here are the observations from the dataset:
# 1) Only Gender is a non-numeric variable; all others are numeric.
# 2) There are 10 features and 1 label - Dataset. Value 1 indicates that the patient has liver disease and 2 indicates that the patient does not.
# + _cell_guid="7336699b-7ef0-4ee9-9e3d-1f7741ef3692" _uuid="f98b0529723f1b862688a5738c7afce79365e352"
#Describe gives statistical information about NUMERICAL columns in the dataset
liver_df.describe(include='all')
#We can see that there are missing values for Albumin_and_Globulin_Ratio: only 579 of 583 entries are valid, i.e. 4 values are missing.
#Gender has only 2 values - Male/Female
# + _cell_guid="e379fc6e-e467-47bd-978c-c8eae32f3193" _uuid="a371d92f66a0641259d6053f2aaf33e3cfb64994"
#Which features are available in the dataset?
liver_df.columns
# + _cell_guid="b6d201e0-a049-462a-a4c3-e66fdd3c1ff1" _uuid="<KEY>"
#Check for any null values
liver_df.isnull().sum()
# + [markdown] _cell_guid="1d6972b0-3916-4ec4-9c4e-050b4be86f0d" _uuid="5d2737231e0a088725b3403778a48e9c0ba2c629"
# The only column with null values is Albumin_and_Globulin_Ratio - only 4 rows are null. Let's see whether this is an important feature.
# + [markdown] _cell_guid="888444b8-b4a9-45b6-b2c2-11b826ff9da7" _uuid="2d518d8a9f73f3ef58ebef6cd7c918a15178a656"
# # Data Visualization
# + _cell_guid="544cd4dc-5cb5-45ac-ba26-c6c2be5d3db1" _uuid="0f20a9162e6c41a4f602f2bbcdfd21c012e035da"
sns.countplot(data=liver_df, x = 'Dataset', label='Count')
LD, NLD = liver_df['Dataset'].value_counts()
print('Number of patients diagnosed with liver disease: ',LD)
print('Number of patients not diagnosed with liver disease: ',NLD)
# + _cell_guid="3e8bf602-3b81-4a53-b190-8ba363295904" _uuid="d0e41e3d9a88577572fe5fea081de4de19f6ad6a"
sns.countplot(data=liver_df, x = 'Gender', label='Count')
M, F = liver_df['Gender'].value_counts()
print('Number of patients that are male: ',M)
print('Number of patients that are female: ',F)
# + _cell_guid="dff35f12-ee87-43d2-a4f3-67ad7a12de76" _uuid="8b1e4c43e56c9eb7f571709bc896f3a9806dc1cd"
sns.catplot(x="Age", y="Gender", hue="Dataset", data=liver_df);  # factorplot was renamed to catplot in newer seaborn
# + [markdown] _cell_guid="d1f71ba8-caa8-47e2-b675-e234abfab05c" _uuid="059e213bb14b66c18d67a022679d485e00003e25"
# Age seems to be a factor for liver disease for both male and female genders
# + _cell_guid="8201c43b-3d29-4076-a385-d46f4d016670" _uuid="e39a78707778302672d3e058a2b386b53630c062"
liver_df[['Gender', 'Dataset','Age']].groupby(['Dataset','Gender'], as_index=False).count().sort_values(by='Dataset', ascending=False)
# + _cell_guid="b0a4a24b-f24f-4e5e-9511-ef3a99965192" _uuid="258fc14a5fcf4a33f7560dba890ddcf6b7eab0e7"
liver_df[['Gender', 'Dataset','Age']].groupby(['Dataset','Gender'], as_index=False).mean().sort_values(by='Dataset', ascending=False)
# + _cell_guid="d5bd9ed4-6d96-4893-bbf4-795ab25d2c3a" _uuid="6c3f4fefb376a5f538b90e1cf47ee6bc6abb5e70"
g = sns.FacetGrid(liver_df, col="Dataset", row="Gender", margin_titles=True)
g.map(plt.hist, "Age", color="red")
plt.subplots_adjust(top=0.9)
g.fig.suptitle('Disease by Gender and Age');
# + _cell_guid="67d473ed-ea9c-41f2-a3f0-204ef4f0ec97" _uuid="c49b4d8c56d507f819b413686e38560c26310d9f"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Direct_Bilirubin", "Total_Bilirubin", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + [markdown] _cell_guid="bed3776c-37d8-46a5-a0e2-c9a3aca3b873" _uuid="79b749d653ba9554007543a8d72f9f74ed5e3332"
# There seems to be a direct relationship between Total_Bilirubin and Direct_Bilirubin, so we have the possibility of removing one of these features.
# + _cell_guid="698b86f0-9f90-44e4-a455-4e544665e0de" _uuid="46347f3bba50466b2398940b3c975188474723d6"
sns.jointplot("Total_Bilirubin", "Direct_Bilirubin", data=liver_df, kind="reg")
# + _cell_guid="0e313f91-b5e0-437c-8496-583dc84b86df" _uuid="2f0eef12382e1bdfb8f25881a8e91e6017e13e94"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Aspartate_Aminotransferase", "Alamine_Aminotransferase", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + [markdown] _cell_guid="ff7fe013-f0a1-4489-ba53-576807d6feed" _uuid="e501b5d2dd39ff930376992e9d97c3b991a38d2a"
# There is a linear relationship between Aspartate_Aminotransferase and Alamine_Aminotransferase for both genders, so we have the possibility of removing one of these features.
# + _cell_guid="7d71e81c-f7e3-48f6-aba7-fce8c071df3b" _uuid="065ebcad6c036771378419b1d4a4e92767eaeaa8"
sns.jointplot("Aspartate_Aminotransferase", "Alamine_Aminotransferase", data=liver_df, kind="reg")
# + _cell_guid="dc59414a-bbea-4dfd-b160-9c4b9bc441b5" _uuid="913a7c6227545ad9f1802df2d74ab2dac51574c0"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Alkaline_Phosphotase", "Alamine_Aminotransferase", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + _cell_guid="da7eef26-8d4a-4c05-9c80-570b11499bc9" _uuid="21739cb6eaf4ac688108a1806ac71cb940071493"
sns.jointplot("Alkaline_Phosphotase", "Alamine_Aminotransferase", data=liver_df, kind="reg")
# + [markdown] _cell_guid="8269092e-f173-446f-aaaa-257b75d15633" _uuid="2b63345e2bb199ede233e177754ee5347d0350db"
# No linear correlation between Alkaline_Phosphotase and Alamine_Aminotransferase
# + _cell_guid="9d767027-3dda-43aa-8cc1-f908d631d992" _uuid="716befce65e13a87415987a9885eb36f977be73e"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Total_Protiens", "Albumin", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + [markdown] _cell_guid="cd3415c5-e60a-45dc-a9ce-6e33561715ea" _uuid="2d09fb0a3ec7978315959d87667cd068b2e10e02"
# There is a linear relationship between Total_Protiens and Albumin for both genders, so we have the possibility of removing one of these features.
# + _cell_guid="b18f4c6f-7742-4edd-a0d3-be517b953d35" _uuid="58a74e764ac2bb213ef0701846ca66123a89b946"
sns.jointplot("Total_Protiens", "Albumin", data=liver_df, kind="reg")
# + _cell_guid="c2f9376c-218b-458c-9ef9-0246f7f30c6f" _uuid="d64847416d1a611e041aed536b5391565a1e247b"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Albumin", "Albumin_and_Globulin_Ratio", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + [markdown] _cell_guid="b82e75f5-f87c-4788-bd96-3608b9066cb0" _uuid="e1acbd8a2f18bad8f794a2a08b300a6c1aae935a"
# There is a linear relationship between Albumin_and_Globulin_Ratio and Albumin, so we have the possibility of removing one of these features.
# + _cell_guid="7d80b0d8-1fa0-47ef-9667-9e1621752d9b" _uuid="84126004350eca1fad4180dd154d301bdc2c1c9e"
sns.jointplot("Albumin_and_Globulin_Ratio", "Albumin", data=liver_df, kind="reg")
# + _cell_guid="820ae8a1-03b8-4cc5-8135-84d631f49fad" _uuid="2f564e383e82994d7710f13b5cf94023486af391"
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Albumin_and_Globulin_Ratio", "Total_Protiens", edgecolor="w")
plt.subplots_adjust(top=0.9)
# + [markdown] _cell_guid="79d7d365-9eab-453e-adc1-97b02461c8a1" _uuid="730a2d0a33cdb3db4ec8e783dbc9e03470af1f4f"
# # Observation:
# + [markdown] _cell_guid="824ec774-42ef-4fad-849f-82893dd03849" _uuid="5b9a21eded807cc989db8eab46435ee77bc74e50"
# From the above jointplots and scatterplots, we find a direct relationship between the following feature pairs:
# Direct_Bilirubin & Total_Bilirubin
# Aspartate_Aminotransferase & Alamine_Aminotransferase
# Total_Protiens & Albumin
# Albumin_and_Globulin_Ratio & Albumin
#
# Hence, we can omit one feature from each pair. I'm going to keep the following features:
# Total_Bilirubin
# Alamine_Aminotransferase
# Total_Protiens
# Albumin_and_Globulin_Ratio
# Albumin
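# The cells below keep all columns; a minimal sketch of actually dropping the redundant half of each correlated pair (column names as in this dataset, toy values for illustration only):

```python
import pandas as pd

# Toy frame with two of the correlated pairs (values are illustrative only)
df = pd.DataFrame({
    "Total_Bilirubin": [0.7, 10.9],
    "Direct_Bilirubin": [0.1, 5.5],
    "Alamine_Aminotransferase": [16, 64],
    "Aspartate_Aminotransferase": [18, 100],
})

# Keep one feature from each highly correlated pair
redundant = ["Direct_Bilirubin", "Aspartate_Aminotransferase"]
reduced = df.drop(columns=redundant)
print(list(reduced.columns))  # ['Total_Bilirubin', 'Alamine_Aminotransferase']
```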
# + _cell_guid="af447fa3-486a-4c9f-a0bb-143d240a8b41" _uuid="6b2c57a7dc86c5a48f4749fb0ff55f76bf1ce2c1"
liver_df.head(3)
# + [markdown] _cell_guid="e1bd4a7c-ee90-480d-89d4-981ab14ad59f" _uuid="de6437c17e7114936ceaafcf2c292cd8660391f0"
# Convert categorical variable "Gender" to indicator variables
# + _cell_guid="21a8680c-ee95-4943-97a4-699a6936683a" _uuid="e3eda36cb69d136d9f05502abae8760acf45349d"
pd.get_dummies(liver_df['Gender'], prefix = 'Gender').head()
# + _cell_guid="20bc6e16-444b-4319-8ad0-bc3fee31baac" _uuid="c0072980444041499d7be75dd70863794e2d6a6a"
liver_df = pd.concat([liver_df,pd.get_dummies(liver_df['Gender'], prefix = 'Gender')], axis=1)
# + _cell_guid="bd025bf6-cad5-4d92-a4bc-d8924b46abe5" _uuid="6755149ea5a286853b00e64cedd297a6a1564d72"
liver_df.head()
# + _cell_guid="c6011e12-8e58-4e38-aa82-33bd336723ce" _uuid="6b834a47790a238c390fc7b552587c6fc6bcb24d"
liver_df.describe()
# + _cell_guid="f8b7af4e-9b7c-4943-bc0c-d3de640b4544" _uuid="047946d891f4734a97a0fcc5c60192b4b4b7d74e"
liver_df[liver_df['Albumin_and_Globulin_Ratio'].isnull()]
# + _cell_guid="a22f10fd-a319-401f-9a4a-1c02a8357975" _uuid="a00f9b9d4f6b3c086b4bf8b328955a6b49642057"
liver_df["Albumin_and_Globulin_Ratio"] = liver_df.Albumin_and_Globulin_Ratio.fillna(liver_df['Albumin_and_Globulin_Ratio'].mean())
# + _cell_guid="ab697d55-f4e6-4e43-87a8-e853f893349c" _uuid="73b9eacd147f702a0cb8bdd00bb0e6d52d8b8cb1"
#liver_df[liver_df['Albumin_and_Globulin_Ratio'] == 0.9470639032815201]
# + _cell_guid="f2cc0e7b-0e62-4cd4-99a7-e213b983b9e4" _uuid="cdab8936d73dfe0533c10d723739df09b73c23ad"
# The input variables/features are all the columns except Dataset. The label is 'Dataset', which indicates whether the patient has liver disease or not.
X = liver_df.drop(['Gender','Dataset'], axis=1)
X.head(3)
# + _cell_guid="134edf24-b704-483f-91f6-a000aa7525a5" _uuid="13e14244f1f54357aba978576a2d1a9515dece6e"
y = liver_df['Dataset'] # 1 for liver disease; 2 for no liver disease
# + _cell_guid="551ce97e-ab7e-4ffc-a9fd-f10ee1d9332a" _uuid="3ca13b008594f58b54943bd1c02f2984a10032e1"
# Correlation
liver_corr = X.corr()
# + _cell_guid="8678807f-c33a-4efd-86de-23dc5b0acf80" _uuid="ab8a278a98ba9b59e8763198652c88233e6fa6d6"
liver_corr
# + _cell_guid="7f17a7ba-96e2-4d91-899c-2378dc63f8fa" _uuid="44e9f67a5c8d33edaa35d750616a5ef8ae292412"
plt.figure(figsize=(30, 30))
sns.heatmap(liver_corr, cbar = True, square = True, annot=True, fmt= '.2f',annot_kws={'size': 15},
cmap= 'coolwarm')
plt.title('Correlation between features');
# + _cell_guid="e5ddc363-f1eb-4225-979b-64d86c2aa578" _uuid="7879565f25f9f5c349c8e39c506f57243d597636"
#The above correlation matrix also indicates the following correlations:
# Total_Protiens & Albumin
# Alamine_Aminotransferase & Aspartate_Aminotransferase
# Direct_Bilirubin & Total_Bilirubin
# There is some correlation between Albumin_and_Globulin_Ratio and Albumin, but it's not as high as Total_Protiens & Albumin.
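# Rather than reading pairs off the heatmap by eye, strongly correlated pairs can be extracted programmatically; a sketch on synthetic data (the 0.7 threshold is an arbitrary assumption):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = rng.normal(size=100)
df = pd.DataFrame({
    "a": a,
    "b": a * 2 + rng.normal(scale=0.1, size=100),  # strongly tied to a
    "c": rng.normal(size=100),                     # independent noise
})

corr = df.corr().abs()
# Keep the upper triangle only, so each pair appears once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
pairs = [(r, c) for r in upper.index for c in upper.columns if upper.loc[r, c] > 0.7]
print(pairs)  # [('a', 'b')]
```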
# + [markdown] _cell_guid="d452f2ac-60ed-4330-98a6-70db275af530" _uuid="458d0033ebbc367888af6082573eef6b7e27e2c9"
# # Machine Learning
# + _cell_guid="10d22778-b70f-4190-a277-82b4d020f281" _uuid="6bc5635a45d3950a0ea94cf088c8571ec8029cbb"
# Importing modules
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
# -
# Recode the label so that 0 means "no liver disease" (it is 2 in the raw data)
y = y.replace(2, 0)
sns.countplot(x=y)
# X = StandardScaler().fit_transform(X)
# + _cell_guid="ade84f4a-7907-4ff1-a20d-7493ac06c16b" _uuid="bc317a65e95970b06de98f20e046e1845cb72f75"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
print (X_train.shape)
print (y_train.shape)
print (X_test.shape)
print (y_test.shape)
# + _cell_guid="3ebf85d5-2867-49a8-821d-4566f7d066a0" _uuid="1613eb76e677ff43f9253c41775f0064a6162b1d"
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, y_train)
#Predict Output
rf_predicted = random_forest.predict(X_test)
random_forest_score = round(random_forest.score(X_train, y_train) * 100, 2)
random_forest_score_test = round(random_forest.score(X_test, y_test) * 100, 2)
print('Random Forest Score: \n', random_forest_score)
print('Random Forest Test Score: \n', random_forest_score_test)
print('Accuracy: \n', accuracy_score(y_test,rf_predicted))
print(confusion_matrix(y_test,rf_predicted))
print(classification_report(y_test,rf_predicted))
plt.show()
# +
logreg = LogisticRegression()
# Train the model using the training sets and check score
logreg.fit(X_train, y_train)
#Predict Output
log_predicted= logreg.predict(X_test)
logreg_score = round(logreg.score(X_train, y_train) * 100, 2)
logreg_score_test = round(logreg.score(X_test, y_test) * 100, 2)
#Equation coefficient and Intercept
print('Logistic Regression Training Score: \n', logreg_score)
print('Logistic Regression Test Score: \n', logreg_score_test)
print('Coefficient: \n', logreg.coef_)
print('Intercept: \n', logreg.intercept_)
print('Accuracy: \n', accuracy_score(y_test,log_predicted))
print('Confusion Matrix: \n', confusion_matrix(y_test,log_predicted))
print('Classification Report: \n', classification_report(y_test,log_predicted))
# -
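# The per-model boilerplate above can be collapsed into a small comparison loop; a sketch on synthetic data, assuming the same train/test split is reused for every model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X_demo, y_demo = make_classification(n_samples=500, random_state=101)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.30, random_state=101)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
# Fit each model on the same split and collect test accuracies
scores = {name: accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
          for name, model in models.items()}
print(scores)
```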
sns.heatmap(confusion_matrix(y_test,log_predicted),annot=True,fmt="d")
import pickle as pk
pk.dump(random_forest, open('liver.sav', 'wb'))
random_forest.predict([[63,0.5,0.1,170,21,28,5.5,2.5,0.8,0,1]])
sns.heatmap(confusion_matrix(y_test,rf_predicted), annot = True)
sns.countplot(x=rf_predicted)
| ML/Liver_disease/Liver_Disease.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from bs4 import BeautifulSoup
import pandas as pd
from multiprocessing import Pool
import datetime
import urllib.request
import scrapers
import requests
# +
import itertools
import tqdm.notebook as tqdm
def p_imap_unordered_list(pool, func, iterable):
iterable = list(iterable)
temp = tqdm.tqdm(pool.imap_unordered(func, iterable), total=len(iterable))
return list(temp)
the_time = lambda: datetime.datetime.now().time().strftime("%H:%M:%S")
# -
data = pd.read_csv("scraped.csv").query("team != 'team/'")
riders = set(data["rider"])
teams = set(data["team"])
print(the_time())
with Pool(50) as p:
pretty_teams = p_imap_unordered_list(p, scrapers.get_pretty_team, teams)
print(the_time())
with Pool(50) as p:
pretty_riders = p_imap_unordered_list(p, scrapers.get_pretty_rider, riders)
print(the_time())
out = pd.DataFrame(pretty_riders+pretty_teams)[["name", "url", "year"]]
out["name"] = out["name"].str.strip()
out = out.sort_values(["name", "year"])
out = out[out["name"].astype(bool)] # empties are errors on firstcycling's part
scraped = pd.read_csv("scraped.csv")
with Pool(50) as p:
teams_to_fix = p_imap_unordered_list(
p,
scrapers.team_year_from_rider,
(
scraped[scraped["team"].isin(-out.query("year == 1")["url"])]
.drop_duplicates("team")
.itertuples(index=False, name=None)
),
)
big_teams_to_fix = {-team:year for d in teams_to_fix for team,year in d.items()}
to_replace = out["url"].map(big_teams_to_fix).dropna().convert_dtypes()
out.loc[to_replace.index, "year"] = to_replace
out.to_csv("names.csv", index=False)
| scraping_and_analysis/02_scrape_names_years.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kundyyy/100-Days-Of-ML-Code/blob/master/AfterWork_Data_Science_Hyperparameter_Tuning_with_Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="VxYyRzuYN6u7" colab_type="text"
# <font color="blue">To use this notebook on Google Colaboratory, you will need to make a copy of it. Go to **File** > **Save a Copy in Drive**. You can then use the new copy that will appear in the new tab.</font>
# + [markdown] id="6ABs5Pr5OOdM" colab_type="text"
# # AfterWork Data Science: Hyperparameter Tuning with Python
# + [markdown] id="I4acj-OTOP82" colab_type="text"
# ### Pre-requisites
# + id="rpvueFt9N2Wr" colab_type="code" colab={}
# We will start by running this cell which will import the necessary libraries
# ---
#
import pandas as pd # Pandas for data manipulation
import numpy as np # Numpy for scientific computations
import matplotlib.pyplot as plt # Matplotlib for visualisation - we might not use it, but just in case you decide to
# %matplotlib inline
# + [markdown] id="2jTFOxfaOd14" colab_type="text"
# ## 1. Manual Search
# + [markdown] id="0CrlFuI-VjsD" colab_type="text"
# ### Example
# + id="UQtTRKfhiMyR" colab_type="code" colab={}
# Example
# ---
# Question: Will John, 40 years old with a salary of 2500, buy a car?
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
# ---
#
# + id="Rr4jsDQ7UgDk" colab_type="code" colab={}
# Steps 1
# ---
# Loading our dataset
social_df = pd.read_csv('http://bit.ly/SocialNetworkAdsDataset')
# Data preparation: Encoding - map Gender to 1 (Male) / 0 (Female)
social_df["Gender"] = np.where(social_df["Gender"] == "Male", 1, 0)
# Defining our predictor and label variable
X = social_df.iloc[:, [1, 2 ,3]].values # Independent/predictor variables
y = social_df.iloc[:, 4].values # Dependent/label variable
# Splitting our dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 42)
# Performing scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
# + id="jAjFNtLnOgJI" colab_type="code" colab={}
# Steps 2
# ---
# Defining our classifier
from sklearn.tree import DecisionTreeClassifier
# We will get to see the values of the Decision Tree classifier's hyperparameters in the output below.
# The decision tree has quite a number of hyperparameters that require fine-tuning in order
# to get the best possible model, i.e. one that reduces the generalization error.
# To explore other decision tree hyperparameters, see the scikit-learn documentation
# by following this link: https://bit.ly/3eu3XIh
# ---
# We will focus on two specific hyperparameters:
# 1. Max depth: This is the maximum number of children nodes that can grow out from
# the decision tree until the tree is cut off.
# For example, if this is set to 3, then the tree will use three children nodes
# and cut the tree off before it can grow any more.
# 2. Min samples leaf: This is the minimum number of samples, or data points,
# that are required to be present in the leaf node.
# ---
#
decision_classifier = DecisionTreeClassifier()
# Fitting our data
decision_classifier.fit(X_train, y_train)
# + id="jSqqtM59QxMw" colab_type="code" colab={}
# Steps 3
# ---
# Making our predictions
decision_y_prediction = decision_classifier.predict(X_test)
# Determining the Accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(decision_y_prediction, y_test))
# + id="KE20z-jvT69F" colab_type="code" colab={}
# Repeating Steps 2
# ---
# Let's now perform hyperparameter tuning by setting
# max_depth = 2 and min_samples_leaf = 50
# and compare the output.
# ---
#
decision_classifier = DecisionTreeClassifier(max_depth = 2, min_samples_leaf = 50)
# Fitting our data
decision_classifier.fit(X_train, y_train)
# + id="rygoolwDUzJk" colab_type="code" colab={}
# Repeating Steps 3
# ---
# Steps 3
# ---
# Making our predictions
decision_y_prediction = decision_classifier.predict(X_test)
# Determining the Accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(decision_y_prediction, y_test))
# + [markdown] id="7KQIlwybVZtn" colab_type="text"
# Can you get a better accuracy by tuning the same hyperparameters, or other ones?
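# A manual search is just a loop over candidate values; a sketch on synthetic data (the candidate depths are an arbitrary choice):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_demo, y_demo = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.25, random_state=42)

# Try each candidate max_depth and keep the best test accuracy
best_depth, best_acc = None, 0.0
for depth in [2, 3, 5, 10]:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    if acc > best_acc:
        best_depth, best_acc = depth, acc
print(best_depth, best_acc)
```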
# + [markdown] id="WJD2CgDZXVNj" colab_type="text"
# To read more about hyper parameter tuning for decision trees, you can refer to this reading: [Link](https://towardsdatascience.com/how-to-tune-a-decision-tree-f03721801680)
# + [markdown] id="QaKj_EYnVnJa" colab_type="text"
# ### <font color="green">Challenge</font>
# + id="EsWOOvFmVvHc" colab_type="code" colab={}
# Challenge 1
# ---
# Using the given dataset above, create a logistic regression classifier
# then tune its hyperparameters to get the best possible accuracy.
# Compare your results with other fellows in your breakout rooms.
# Hint: Use the following documentation to tune the hyperparameters.
# Scikit-learn documentation: https://bit.ly/2YZR4iP
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
#
# + [markdown] colab_type="text" id="m4f-HCCzcsFn"
# ## 2. Grid Search
# + [markdown] colab_type="text" id="S6xAx-PccsFq"
# ### Example
# + id="VFnMcdWliR-E" colab_type="code" colab={}
# Example
# ---
# Question: Will John, 40 years old with a salary of 2500, buy a car?
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
# ---
#
# + id="jotViGxKb8Yp" colab_type="code" colab={}
# Steps 2
# ---
# Defining our classifier
# We will get to see the values of the Decision Tree classifier's hyperparameters in the output below.
# The decision tree has quite a number of hyperparameters that require fine-tuning in order
# to get the best possible model, i.e. one that reduces the generalization error.
# To explore other decision tree hyperparameters, see the scikit-learn documentation
# by following this link: https://bit.ly/3eu3XIh
# ---
# Again we will focus on the same two specific hyperparameters:
# 1. Max depth: This is the maximum number of children nodes that can grow out from
# the decision tree until the tree is cut off.
# For example, if this is set to 3, then the tree will use three children nodes
# and cut the tree off before it can grow any more.
# 2. Min samples leaf: This is the minimum number of samples, or data points,
# that are required to be present in the leaf node.
# ---
#
decision_classifier = DecisionTreeClassifier()
# + id="Q03AxVzIZprI" colab_type="code" colab={}
# Step 1: Hyperparameters: Getting Started with Grid Search
# ---
# We will continue from where we left off in the previous example.
# We create a dictionary of all the parameters and their corresponding
# sets of values that we want to test for best performance.
# The name of each dictionary item corresponds to a parameter name
# and the value corresponds to the list of values for that parameter.
# As shown, the grid_param dictionary has two parameters: max_depth and min_samples_leaf.
# The parameter values that we want to try out are passed in a list.
# For instance, we want to find which max_depth value
# (out of 2, 3, 4, 10, and 15) provides the highest accuracy.
# Similarly, we want to find which min_samples_leaf value
# (out of 10, 20, 30, 40, and 50) results in the highest performance.
# The Grid Search algorithm tries all possible combinations
# of parameter values and returns the combination with the highest accuracy.
# In this case the algorithm will check all 5 x 5 = 25 combinations.
# ---
#
grid_param = {
'max_depth': [2, 3, 4, 10, 15],
'min_samples_leaf': [10, 20, 30, 40, 50]
}
# + id="FgzqZY71Z2Vv" colab_type="code" colab={}
# Step 2: Instantiating GridSearchCV object
# ---
# Once the parameter dictionary is created, the next step
# is to create an instance of the GridSearchCV class.
# We need to pass a value for the estimator parameter,
# which is the algorithm that we want to execute.
# The param_grid parameter takes the parameter dictionary
# that we just created, the scoring parameter
# takes the performance metric, the cv parameter corresponds
# to the number of folds, which we set to 5 in our case, and finally
# the n_jobs parameter refers to the number of CPUs that we want to use for execution.
# A value of -1 for the n_jobs parameter means: use all available computing power.
# You can refer to the GridSearchCV documentation
# if you want to find out more: https://bit.ly/2Yr0qVC
# ---
#
from sklearn.model_selection import GridSearchCV
gd_sr_cl = GridSearchCV(estimator = decision_classifier,
param_grid = grid_param,
scoring = 'accuracy',
cv = 5,
n_jobs =-1)
# + id="xurXa_ovZ5JE" colab_type="code" colab={}
# Step 3: Calling the fit method
# ---
# Once the GridSearchCV class is initialized, we call the fit method of the class
# and pass it the training set, as shown in the following code.
# The method may take a bit of time to execute.
# This is the drawback - GridSearchCV goes through all
# combinations of hyperparameters, which makes grid search computationally very expensive.
# ---
#
gd_sr_cl.fit(X_train, y_train)
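# The cost warning above can be made concrete: the number of model fits is the product of the grid sizes times the number of CV folds (numbers taken from the grid defined earlier):

```python
from itertools import product

grid_param = {
    "max_depth": [2, 3, 4, 10, 15],
    "min_samples_leaf": [10, 20, 30, 40, 50],
}
cv = 5
# Every combination of parameter values is fitted once per fold
n_candidates = len(list(product(*grid_param.values())))
n_fits = n_candidates * cv
print(n_candidates, n_fits)  # 25 candidate settings, 125 fits
```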
# + id="gSjIyP6iZ7gM" colab_type="code" colab={}
# Step 4: Checking the parameters that return the highest accuracy
# ---
# To do so, we print the gd_sr_cl.best_params_ attribute of the GridSearchCV object, as shown below:
# ---
#
best_parameters = gd_sr_cl.best_params_
print(best_parameters)
# The result shows which max_depth and min_samples_leaf values
# achieved the highest cross-validated accuracy.
# If a value at the edge of the grid is chosen, it is a good idea
# to extend the grid in that direction and see if performance further increases.
# + id="DY8IpIK1Z9gs" colab_type="code" colab={}
# Step 5: Finding the obtained accuracy
# ---
# The final step of the Grid Search algorithm is
# to find the accuracy obtained using the best parameters.
# To find the best accuracy achieved, we execute the following code:
# ---
#
best_result = gd_sr_cl.best_score_
print(best_result)
# best_score_ reports the mean cross-validated accuracy of the best parameter combination.
# To improve this further, it would be good to test values for other parameters
# of the decision tree, such as max_features, max_leaf_nodes, etc.,
# to see whether the accuracy improves further.
# + [markdown] id="8NZOSpZBc7CU" colab_type="text"
# Can you get a better accuracy? Refer to the decision tree documentation, choose additional appropriate hyperparameters, and add their values to the grid search space in an effort to get a better accuracy.
# + [markdown] id="mIk0U5Aqfhbz" colab_type="text"
# ### <font color="green">Challenge</font>
# + id="MEMGtQlpfk8c" colab_type="code" colab={}
# Challenge
# ---
# In this challenge, you will again use grid search with
# the logistic regression classifier we created earlier to get the best possible accuracy.
# Hint: Use the following documentation to tune the hyperparameters.
# Scikit-learn documentation: https://bit.ly/2YZR4iP
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
#
# + [markdown] id="X-RiKkKFOrVb" colab_type="text"
# ## 3. Random Search
# + [markdown] colab_type="text" id="A9y1H556gVW_"
# ### Example
# + colab_type="code" id="fPGxiGQFgVXB" colab={}
# Example
# ---
# Question: Will John, 40 years old with a salary of 2500, buy a car?
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
# ---
#
# + id="vUTlhWCWglW_" colab_type="code" colab={}
# Step 1: Hyperparameters: Getting Started with Random Search
# ---
# Random search differs from grid search in that we no longer
# provide a discrete set of values to explore for each hyperparameter; rather,
# we provide a statistical distribution for each hyperparameter
# from which values are randomly sampled.
# We'll define a sampling distribution for each hyperparameter.
# ---
#
# specify parameters and distributions to sample from
from scipy.stats import randint as sp_randint
param_dist = {"max_depth": [3, None],
"min_samples_leaf": sp_randint(1, 50)}
# + id="lyDyEIalgmiN" colab_type="code" colab={}
# Step 2: Instantiating RandomizedSearchCV object
# ---
# Documentation: https://bit.ly/2V9Xhri
#
from sklearn.model_selection import RandomizedSearchCV
random_sr = RandomizedSearchCV(decision_classifier, param_dist, cv = 5)
# + id="1mVVunKJgrMT" colab_type="code" colab={}
# Step 3: Calling the fit method
# ---
#
random_sr.fit(X_train, y_train)
# + id="D4v1u9vVguNP" colab_type="code" colab={}
# Step 4: Checking the parameters that return the highest accuracy
# ---
#
best_parameters = random_sr.best_params_
print(best_parameters)
# + id="AzKzEqLxgvx9" colab_type="code" colab={}
# Finding the obtained accuracy
# --
#
best_result = random_sr.best_score_
print(best_result)
# + [markdown] id="qx4Qs-8rjUQu" colab_type="text"
# Can you get a better accuracy? Refer to the decision tree documentation, choose additional appropriate hyperparameters, and add their values to the random search space in an effort to get a better accuracy.
# + [markdown] colab_type="text" id="W6ndQUsSizcy"
# ### <font color="green">Challenge</font>
# + colab_type="code" id="taJUBjUJizc0" colab={}
# Challenge
# ---
# Again, use random search with
# the logistic regression classifier we created earlier to get the best possible accuracy.
# Hint: Use the following documentation to tune the hyperparameters.
# Scikit-learn documentation: https://bit.ly/2YZR4iP
# ---
# Dataset url = http://bit.ly/SocialNetworkAdsDataset
#
| AfterWork_Data_Science_Hyperparameter_Tuning_with_Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# <center><img src="../static/images/logoDocker.png" width=500></center>
#
# # Docker
#
# [Docker](https://www.docker.com) is an open-source project that automates the deployment of applications inside software containers. Those containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, system tools, software libraries, such as Python, FSL, AFNI, SPM, FreeSurfer, ANTs, etc. This guarantees that it will always run the same, regardless of the environment it is running in.
#
# <font color='red'>Important:</font> **You don't need Docker to run Nipype on your system**. For Mac and Linux users, it is probably much simpler to install Nipype directly on your system. For more information on how to do this, see the [Nipype website](resources_installation.ipynb). But for Windows users, or users who don't want to set up all the dependencies themselves, Docker is the way to go.
# # Docker Image for the interactive Nipype Tutorial
#
# If you want to run this Nipype Tutorial with the example dataset locally on your own system, you need to use the docker image provided under [miykael/nipype_tutorial](https://hub.docker.com/r/miykael/nipype_tutorial/). This docker image sets up a Linux environment on your system, with a functioning Python environment, the Nipype, FSL, ANTs and SPM12 software packages, some example data, and all the tutorial notebooks to learn Nipype. Alternatively, you can also build your own docker image from the Dockerfile or create a different Dockerfile using [Neurodocker](https://github.com/kaczmarj/neurodocker).
# # Install Docker
#
# Before you can do anything, you first need to install [Docker](https://www.docker.com) on your system. The installation process differs per system. Luckily, the docker homepage has nice instructions for...
#
# - [Ubuntu](https://docs.docker.com/engine/installation/linux/ubuntu/) or [Debian](https://docs.docker.com/engine/installation/linux/docker-ce/debian/)
# - [Windows 7/8/10](https://docs.docker.com/toolbox/toolbox_install_windows/) or [Windows 10 Pro](https://docs.docker.com/docker-for-windows/install/)
# - [OS X (from El Capitan 10.11 on)](https://docs.docker.com/docker-for-mac/install/) or [OS X (before El Capitan 10.11)](https://docs.docker.com/toolbox/toolbox_install_mac/).
#
# Once Docker is installed, open up the docker terminal and test that it works with the command:
#
# docker run hello-world
#
# **Note:** Linux users might need to use ``sudo`` to run ``docker`` commands or follow [post-installation steps](https://docs.docker.com/engine/installation/linux/linux-postinstall/).
# # Pulling the Docker image
#
# You can download various Docker images, but for this tutorial, we suggest ``miykael/nipype_tutorial``:
#
# docker pull miykael/nipype_tutorial:latest
#
# Once it's done, you can check the available images on your system:
#
# docker images
# # How to run the Docker image
#
# After installing docker on your system and making sure that the ``hello-world`` example runs, we are ready to start the Nipype Tutorial image. The exact invocation is a bit different for Windows users, but the general commands look similar.
#
# The suggested Docker image, ``miykael/nipype_tutorial``, already contains all tutorial notebooks and data used in the tutorial, so the simplest way to run the container is:
#
# docker run -it --rm -p 8888:8888 miykael/nipype_tutorial jupyter notebook
#
# However, if you want to use your own version of the notebooks, save notebook outputs locally, or use your local data, you can also mount your local directories, e.g.:
#
# docker run -it --rm -v /path/to/nipype_tutorial/:/home/neuro/nipype_tutorial -v /path/to/data/:/data -v /path/to/output/:/output -p 8888:8888 miykael/nipype_tutorial jupyter notebook
#
# But what do those flags mean?
#
# - The ``-it`` flag tells docker that it should open an interactive container instance.
# - The ``--rm`` flag tells docker that the container should automatically be removed after we close docker.
# - The ``-p`` flag specifies which port we want to make available for docker.
# - The ``-v`` flag tells docker which folders should be mounted to make them accessible inside the container. Here: ``/path/to/nipype_tutorial`` is your local directory where you downloaded the [Nipype Tutorial repository](https://github.com/miykael/nipype_tutorial/). ``/path/to/data/`` is a directory where you have the dataset [``ds000114``](https://openfmri.org/dataset/ds000114/), and ``/path/to/output`` can be an empty directory that will be used for output. The second part of the ``-v`` flag (here: ``/home/neuro/nipype_tutorial``, ``/data`` or ``/output``) specifies under which path the mounted folders can be found inside the container. **Important**: To use the ``tutorial``, ``data`` and ``output`` folders, you first need to create them on your system!
# - ``miykael/nipype_tutorial`` tells docker which image you want to run.
# - ``jupyter notebook`` tells docker that you want to run the jupyter notebook command directly within the container. Alternatively, you can also use ``jupyter-lab``, ``bash`` or ``ipython``.
#
# **Note** that when you run this docker image without any further specification, it will print a URL in your terminal that you need to copy-paste into your browser to get to the notebooks.
# ## Run a docker image on Linux or Mac
#
# Running a docker image on a Linux or Mac OS is very simple. Make sure that the folders ``tutorial``, ``data``, and ``output`` exist. Then just open a new terminal and use the command from above. Once the docker image is downloaded, open the shown URL link in your browser and you are good to go. The URL will look something like:
#
# http://localhost:8888/?token=<PASSWORD>
# ## Run a docker image on Windows
#
# Running a docker image on Windows is a bit trickier than on Ubuntu. Assuming you've installed the DockerToolbox, open the Docker Quickstart Terminal. Once the docker terminal is ready (when you see the whale), execute the following steps (see also figure):
#
# 1. We need to check the IP address of your docker machine. For this, use the command:
#
# ``docker-machine ip``
#
# In my case, this returned ``192.168.99.100``
#
# 2. If you haven't already created a new folder to store your container output, do so. You can create the folder either in the explorer as usual or with the command ``mkdir -p`` in the docker console. For example:
#
# ``mkdir -p /c/Users/username/output``
#
#     Please replace ``username`` with the name of the current user on your system. **Pay attention**: folder paths in the docker terminal use forward slashes (``/``), not the backslashes (``\``) we usually have in Windows. Also, ``C:\`` needs to be specified as ``/c/``.
#
# 3. Now, we can run the container with the command from above:
#
# `` docker run -it --rm -v /c/Users/username/path/to/nipype_tutorial/:/home/neuro/nipype_tutorial -v /c/Users/username/path/to/data/:/data -v /c/Users/username/path/to/output/:/output -p 8888:8888 miykael/nipype_tutorial``
#
# 4. Once the docker image is downloaded, it will show you a URL that looks something like this:
#
# ``http://localhost:8888/?token=<PASSWORD>``
#
#     This URL will not work on a Windows system. To make it work, you need to replace the string ``localhost`` with the IP address of your docker machine, which we acquired in step 1. Afterward, your URL should look something like this:
#
# ``http://192.168.99.100:8888/?token=<PASSWORD>``
#
# Copy this link into your web browser and you're good to go!
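The ``C:\`` to ``/c/`` path conversion described in step 2 can be sketched as a small helper function (a hypothetical illustration for checking paths before pasting them into the docker terminal; it is not part of Docker itself):

```python
import re

def to_docker_toolbox_path(win_path: str) -> str:
    """Convert a Windows path like 'C:\\Users\\me\\output' to the
    '/c/Users/me/output' form expected by the Docker Quickstart Terminal."""
    # Match a drive-letter prefix such as 'C:\' or 'c:/'
    m = re.match(r"^([A-Za-z]):[\\/]", win_path)
    if not m:
        # No drive letter: just normalize the separators
        return win_path.replace("\\", "/")
    drive = m.group(1).lower()
    rest = win_path[m.end():].replace("\\", "/")
    return f"/{drive}/{rest}"

print(to_docker_toolbox_path(r"C:\Users\username\output"))  # /c/Users/username/output
```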
# # Docker tips and tricks
#
#
# ## Access Docker Container with ``bash`` or ``ipython``
#
# You don't have to open a jupyter notebook when you run ``miykael/nipype_tutorial``. You can also access the docker container directly with ``bash`` or ``ipython`` by adding it to the end of your command, i.e.:
#
# docker run -it --rm -v /path/to/nipype_tutorial/:/home/neuro/nipype_tutorial -v /path/to/data/:/data -v /path/to/output/:/output -p 8888:8888 miykael/nipype_tutorial bash
#
# This also works with other software commands, such as ``bet``.
# ## Stop Docker Container
#
# To stop a running docker container, either close the docker terminal or select the terminal and use the ``Ctrl-C`` shortcut multiple times.
# ## List all installed docker images
#
# To see a list of all installed docker images use:
#
# docker images
# ## Delete a specific docker image
#
# To delete a specific docker image, first use the ``docker images`` command to list all installed images, then use the ``IMAGE ID`` with the ``rmi`` instruction to delete the image:
#
# docker rmi -f 7d9495d03763
# ## Export and Import a docker image
#
# If you don't want to depend on an internet connection, you can also export an already downloaded docker image and then later on import it on another PC. To do so, use the following two commands:
#
#
# # Export docker image miykael/nipype_tutorial
# docker save -o nipype_tutorial.tar miykael/nipype_tutorial
#
# # Import docker image on another PC
# docker load --input nipype_tutorial.tar
#
# You might run into permission issues if you ran your docker command with ``sudo``: other users then don't have access rights to ``nipype_tutorial.tar``. To avoid this, just change the permissions of ``nipype_tutorial.tar`` with the command:
#
# sudo chmod 777 nipype_tutorial.tar
| notebooks/introduction_docker.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_json('detections.json')
df.gt_id.nunique()
df.info()
df.iou.hist(bins=25)
from evaldets.results import load_gt
anns = load_gt()
gt_df = pd.DataFrame(anns.dataset['annotations'])
gt_df.id.nunique()
gt_df.id.nunique() - gt_df.iscrowd.sum()
900100002299 in set(gt_df.id)
900100002299 in set(df.gt_id.dropna().astype(int))
import numpy as np
dti = np.array([[False, False, False, True, False, False, True, False, True,
False, False, False, False, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, False, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, True, False, False, True, False, True,
False, False, False, False, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, False, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, False, True, False, True,
False, False, False, False, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, False, True, False, True,
False, False, False, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, False, True, False, True,
False, False, False, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, False, True, False, True,
False, False, False, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, False, True, False, True,
False, False, False, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, False, True, True, True, True, False, True,
True, False, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, True, True, True, False, True, False, True,
False, False, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
False, True, True],
[False, False, True, True, True, False, True, True, True,
False, False, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, False, True, True, True, True, True,
False, False, False]])
(~dti[0]).sum()
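Each row of the ``dti`` matrix above marks which ground-truth boxes were detected under one setting, so ``(~dti[0]).sum()`` counts the misses in the first row. The same pattern on a small standalone toy array (hypothetical values, just to show the computation):

```python
import numpy as np

# Toy stand-in for the dti matrix: rows = settings (e.g. IoU thresholds),
# columns = ground-truth objects; True means the GT object was detected.
dti_toy = np.array([
    [True,  False, True,  True],
    [True,  False, False, True],
])

recall_per_row = dti_toy.mean(axis=1)    # fraction of GTs detected per row
missed_per_row = (~dti_toy).sum(axis=1)  # count of undetected GTs per row

print(recall_per_row)   # [0.75 0.5 ]
print(missed_per_row)   # [1 2]
```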
| 2021-06-25-gt-counts-and-recall.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Setup:
#
# This code, as well as the version of NeuronC that was current at the time of publication, was tested under Ubuntu 16.04 LTS with Python 3. Rendering of 3D morphologies (figure A and the second part of figure F) requires POV-Ray.
#
# 1. The file ``hc_model.py`` has to be in the same directory as this jupyter notebook file.
# 2. Download and unpack NeuronC (http://retina.anatomy.upenn.edu/~rob/neuronc.html)
# 3. Adjust the compiler directive ``CFLAGS`` in ``/nc/models/retsim/make`` according to your
# system.
# 4. Copy the experiment file ``expt_hc_local_final.cc`` into the retsim directory
# (``/nc/models/retsim``)
# 5. Add the experiment to ``makefile``: add the line ``expt_hc_local_final.$(DLSUFFIX)``
# to the list of experiments.
# 6. Copy the files ``dens_hb_final.n``, ``nval_hb_final.n``, and ``morph_HC_dendrite`` into the
# directory ``/nc/models/retsim/runconf``.
# 7. Run ``make`` **both** in ``/nc/models/retsim/`` and ``/nc/``.
# 8. Add the path to your retsim installation below.
#
import numpy as np
import pandas as pd
from PIL import Image
import matplotlib
from matplotlib import pyplot as plt
import seaborn as sns
from scipy.optimize import brent
import hc_model
# %matplotlib inline
sns.set_style('white')
matplotlib.rcParams.update({'mathtext.default': 'regular'})
matplotlib.rcParams.update({'font.size': 14})
#Add full path to the retsim installation
retsim_dir='/home/tom/nc/models/retsim/'
hc_model.set_dir(retsim_dir)
# load morphology file and adjust coordinates
morph=pd.read_csv(retsim_dir+'runconf/morph_hc_dendrite',delim_whitespace=True,header=None,\
comment='#',names=['node','parent','dia','x','y','z','region','dend'])
morph['x_new']=((morph['x']-np.min(morph['x']))*100)
morph['y_new']=((morph['y']-np.min(morph['y']))*100)
morph['z_new']=((morph['z']-np.min(morph['z']))*100)
morph['dia_new']=(morph['dia']*100)
# ### Figure A: Rendering of the morphology
image=hc_model.run(d=1,R=1,stimtype=2,recolor=True)
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.show()
# ### Figure B&C: Voltage and calcium decay as a function of distance
# We determine an injection current for every tip such that the resulting voltage pulse reaches -25mV
# +
def opt_function(current,tip):
data_opt=hc_model.run(stimtype=2,stimdur=0.1,poststimdur=0.3,rectype=20,stimtip=tip,\
istart=current/1e12,istop=current/1e12)
return np.abs(np.max(data_opt[1])+0.025)
def eval_function(current,tip):
data_opt=hc_model.run(stimtype=2,stimdur=0.1,poststimdur=0.3,rectype=20,stimtip=tip,\
istart=current/1e12,istop=current/1e12)
return np.max(data_opt[1])
# -
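The optimization above can be illustrated on a toy problem: minimize the absolute distance of a hypothetical, linear peak-voltage model from the -25 mV target, standing in for the retsim simulation calls (the slope and resting voltage here are made-up values):

```python
from scipy.optimize import brent

V_REST, TARGET = -0.080, -0.025  # resting and target peak voltage [V]

def toy_peak_voltage(current_pA):
    # Hypothetical linear stand-in for the retsim simulation:
    # peak voltage rises 5 mV per injected pA
    return V_REST + 0.005 * current_pA

def toy_opt_function(current_pA):
    # Same structure as opt_function above: absolute deviation from target
    return abs(toy_peak_voltage(current_pA) - TARGET)

# brack is a bracketing triple around the expected minimum, as in the calls below
I_opt = brent(toy_opt_function, brack=(1, 5, 20), tol=1e-6)
print(I_opt)  # ~11.0 pA, since -0.080 + 0.005 * 11 = -0.025
```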
I_tip1=brent(opt_function,args=(1,),brack=(1,15,20),tol=0.001)
I_tip2=brent(opt_function,args=(2,),brack=(10,20,25),tol=0.001)
I_tip3=brent(opt_function,args=(3,),brack=(5,10,15),tol=0.001)
I_tip4=brent(opt_function,args=(4,),brack=(1,5,10),tol=0.001)
I_tip5=brent(opt_function,args=(5,),brack=(1,15,20),tol=0.001)
I_tip6=brent(opt_function,args=(6,),brack=(1,10,15),tol=0.001)
I_tip7=brent(opt_function,args=(7,),brack=(25,30,40),tol=0.001)
I_tip8=brent(opt_function,args=(8,),brack=(10,12,15),tol=0.001)
I_tip9=brent(opt_function,args=(9,),brack=(12,14,16),tol=0.001)
I_tip10=brent(opt_function,args=(10,),brack=(8,9,10),tol=0.001)
print('Injection currents')
print('tip 1',I_tip1,'pA')
print('tip 2',I_tip2,'pA')
print('tip 3',I_tip3,'pA')
print('tip 4',I_tip4,'pA')
print('tip 5',I_tip5,'pA')
print('tip 6',I_tip6,'pA')
print('tip 7',I_tip7,'pA')
print('tip 8',I_tip8,'pA')
print('tip 9',I_tip9,'pA')
print('tip 10',I_tip10,'pA')
I_inj=np.array([I_tip1,I_tip2,I_tip3,I_tip4,I_tip5,I_tip6,I_tip7,I_tip8,I_tip9,I_tip10])*1e-12
# +
# np.savetxt('data/HC_injection_current',I_inj)
# +
# Load saved injection currents to save time
# I_inj=np.loadtxt('data/HC_injection_current')
# -
data_cc=[]
for i in range(10):
data_cc.append(hc_model.run(stimtype=2,rectype=30,stimtip=i+1,stimdur=0.2,poststimdur=0.5,\
istart=I_inj[i],istop=I_inj[i]))
# Calculate distances between cone tips along the dendrite
cone_tips = np.array([0,94,682,276,353,401,457,511,651,764,788])
cone_tip_dist = hc_model.tip_distances(morph,cone_tips)
signal_distance=[]
for i in range(len(data_cc)):
for j in range(10):
signal_distance.append([cone_tip_dist[i+1,j+1],np.max(data_cc[i].iloc[150:450,j+2]),\
np.max(data_cc[i].iloc[150:450,j+13]),i+1])
signal_distance=pd.DataFrame(signal_distance,columns=['dist','v','ca','HC tip'])
signal_distance['mv']=signal_distance['v']*1000
signal_distance['mM']=signal_distance['ca']*1000
pal=sns.color_palette('Paired',10)
contact_cones=[1,2,3,4,5,6,6,7,8,2,2,7,9,10,1,1]
sns.set(context='paper',style='white',rc={"xtick.major.size": 4, "ytick.major.size": 4})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
# plt.figure(figsize=(8/2.54,4/2.54))
    ax=sns.lmplot(data=signal_distance,x='dist',y='mv',hue='HC tip',palette=pal,fit_reg=False,aspect=1.5,height=5/2.54,scatter_kws={'s':7},legend=False)
ax.set(ylim=(-84,-20),xlabel='Distance [$\mu m$]',ylabel='Voltage [$mV$]')
legend=plt.legend(ncol=2,title='HC tip',fontsize=7,bbox_to_anchor=(1, 1.1))
legend.get_title().set_fontsize(8)
sns.despine(offset=3)
# plt.savefig('figures/HC_v_vs_tip_distance.svg',bbox_inches='tight',dpi=300)
plt.show()
sns.set(context='paper',style='white',rc={"xtick.major.size": 4, "ytick.major.size": 4})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
    ax=sns.lmplot(data=signal_distance,x='dist',y='mM',hue='HC tip',palette=pal,fit_reg=False,aspect=1.5,height=5/2.54,scatter_kws={'s':7},legend=False)
ax.set(ylim=(5e-6,1e-2),yscale='log',xlabel='Distance [$\mu m$]',ylabel='Ca concentration [$mM$]')
sns.despine(offset=3)
# plt.savefig('figures/HC_ca_vs_tip_distance.svg',bbox_inches='tight',dpi=300)
plt.show()
# ### Figure D: Heatmap of voltage along the morphology
# measuring signals in every compartment, done in batches of 100
data_hm0=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=100)
data_hm1=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=101)
data_hm2=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=102)
data_hm3=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=103)
data_hm4=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=104)
data_hm5=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=105)
data_hm6=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=106)
data_hm7=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=107)
data_hm8=hc_model.run(stimtype=2,stimtip=3,istart=1.01155854e-11,istop=1.01155854e-11,poststimdur=0.3,rectype=108)
# +
data_hm_v=np.hstack((data_hm0.to_numpy()[:,:101],data_hm1.to_numpy()[:,1:101],data_hm2.to_numpy()[:,1:101],\
                     data_hm3.to_numpy()[:,1:101],data_hm4.to_numpy()[:,1:101],data_hm5.to_numpy()[:,1:101],\
                     data_hm6.to_numpy()[:,1:101],data_hm7.to_numpy()[:,1:101],data_hm8.to_numpy()[:,1:25]))
data_hm_v_peak=np.max(data_hm_v[150:450,1:],axis=0)
minima = min(data_hm_v_peak)
maxima = max(data_hm_v_peak)
# -
im_size=np.array([int((np.max(morph['x'])-np.min(morph['x']))*100),\
int((np.max(morph['y'])-np.min(morph['y']))*100)])
# specifying the color map
norm = matplotlib.colors.Normalize(vmin=minima, vmax=maxima, clip=True)
mapper = matplotlib.cm.ScalarMappable(norm=norm, cmap=sns.diverging_palette(255,10,center='light',as_cmap=True))
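The ``Normalize``/``ScalarMappable`` pair maps each scalar peak voltage to an RGBA color; a minimal standalone sketch (with a built-in colormap instead of the seaborn diverging palette, and made-up voltage limits):

```python
from matplotlib.cm import ScalarMappable
from matplotlib.colors import Normalize

# Clip voltages to the [-80 mV, -20 mV] range and map them onto [0, 1]
norm = Normalize(vmin=-0.08, vmax=-0.02, clip=True)
mapper = ScalarMappable(norm=norm, cmap="coolwarm")

# A mid-range voltage maps to a mid-range color of the colormap
rgba = mapper.to_rgba(-0.05)
print(rgba)
```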
# +
# drawing the actual image
im_heatmap=Image.new('RGBA',tuple(im_size+1),(255,255,255,0))
for i in range(1,morph.shape[0]):
color=mapper.to_rgba(data_hm_v_peak[i])
im_temp=hc_model.drcable(im_size,morph.loc[i,'x_new'],morph.loc[i,'y_new'],morph.loc[i,'z_new'],morph.loc[i,'dia_new'],\
morph.loc[morph.loc[i,'parent'],'x_new'],morph.loc[morph.loc[i,'parent'],'y_new'],\
morph.loc[morph.loc[i,'parent'],'z_new'],morph.loc[morph.loc[i,'parent'],'dia_new'],\
color=(int(color[0]*255),int(color[1]*255),int(color[2]*255)))
im_heatmap=Image.alpha_composite(im_heatmap,im_temp)
# -
plt.figure(figsize=(10,10))
plt.imshow(im_heatmap)
plt.show()
# get a scalebar
sns.set(context='paper',style='white',rc={"xtick.major.size": 0, "ytick.major.size": 0})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
plt.figure(figsize=(5/2.54,5/2.54))
test_color_bar=data_hm_v_peak.reshape(-1,8)*1000
plt.imshow(test_color_bar,cmap=sns.diverging_palette(255,10,center='light',as_cmap=True))
cb=plt.colorbar()
cb.set_label('Voltage [$mV$]')
# plt.savefig('figures/HC_cc_color_map_tip2_scale.svg',bbox_inches='tight',dpi=300)
# ### Figure E: Voltage decay along the dendrite
data_cc0=hc_model.run(stimtype=2,stimtip=3,istart=2e-12,istep=2e-12,istop=10e-12,rectype=100)
data_cc1=hc_model.run(stimtype=2,stimtip=3,istart=2e-12,istep=2e-12,istop=10e-12,rectype=101)
data_cc2=hc_model.run(stimtype=2,stimtip=3,istart=2e-12,istep=2e-12,istop=10e-12,rectype=102)
data_cc_v=np.hstack((data_cc0.to_numpy()[:,:101],data_cc1.to_numpy()[:,1:101],data_cc2.to_numpy()[:,1:101]))
peak_data_cc_v=[]
for i in range(5):
peak_data_cc_v.append(np.max(data_cc_v[150+i*750:450+i*750,:],axis=0))
peak_data_cc_v=np.array(peak_data_cc_v)
# get nodes along the dendrite between tip3 and the soma
tip3 = 276
nodes_to_tip3 = hc_model.nodes_to_tip(morph,tip3)
sns.set(context='paper',style='white',rc={"xtick.major.size": 4, "ytick.major.size": 4})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
plt.figure(figsize=(7/2.54,4/2.54))
for i in range(5):
ax=plt.plot(nodes_to_tip3[:,1],peak_data_cc_v[4-i,nodes_to_tip3[:,0].astype(int)+1]*1000,label=str(10-2*i)+' pA',c=sns.dark_palette("white",n_colors=7)[i])
plt.legend(loc='upper right')
plt.xticks([0,10,20,30,40,nodes_to_tip3[0,1]],['tip 3',10,20,30,40,'soma'])
plt.xlabel('Distance [$\mu m$]')
plt.ylabel('Voltage [$mV$]')
plt.xlim(-1,55)
plt.ylim(-85,-20)
sns.despine(offset=3)
# plt.savefig('figures/HC_cc_v_decay_tip3.svg',bbox_inches='tight',dpi=300)
plt.show()
# ### Figure F: Light response and morphology with cones
data_light=hc_model.run(scone_id=2,nrepeats=2)
blue_stim=np.concatenate((np.zeros(5500),np.ones(1000),np.zeros(4000),np.ones(1000),np.zeros(3502)))*0.002-0.035
green_stim=np.concatenate((np.zeros(500),np.ones(1000),np.zeros(9000),np.ones(1000),np.zeros(3502)))*0.002-0.035
sns.set(context='paper',style='white',rc={"xtick.major.size": 4, "ytick.major.size": 4})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
plt.figure(figsize=(7/2.54,4/2.54))
plt.plot(data_light.iloc[20000:,0],blue_stim[5000:],c='black')
plt.plot(data_light.iloc[15000:20000,0],green_stim[:5000],c='black')
plt.fill_between(data_light.iloc[20000:25000,0],-0.035,blue_stim[5000:10000])
plt.fill_between(data_light.iloc[15000:20000,0],-0.035,green_stim[:5000])
plt.plot(data_light.iloc[15000:,0],data_light.iloc[15000:,15]-0.004,c='blue')
plt.plot(data_light.iloc[15000:,0],data_light.iloc[15000:,5],c='green')
plt.plot(data_light.iloc[15000:,0],data_light.iloc[15000:,21]+0.01,c='black')
plt.ylim(-0.05,-0.03)
plt.xticks([15,20,25,30],([0,5,10,15]))
plt.yticks([])
sns.despine()
# plt.savefig('figures/HC_light_stim2.svg')
image=hc_model.run(d=1,R=1,stimtype=0,scone_id=2,recolor=True)
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.show()
| Chapot_et_al_2017/hc_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""A beam deforming under its own weight."""
from dolfin import *
# Scaled variables
l, w = 1, 0.1
mu_, lambda_ = 1, 1
rho = 10
gamma = (w/l)**2
wind = (0, 0.0, 0)
# Create mesh and define function space
mesh = BoxMesh(Point(0, 0, 0), Point(l, w, w), 50, 5, 5)
V = VectorFunctionSpace(mesh, "P", 1)
# Define boundary condition
def clamped_boundary(x, on_boundary):
return on_boundary and (near(x[0], 0) or near(x[0], l))
bc = DirichletBC(V, Constant((0, 0, 0)), clamped_boundary)
# Define strain and stress
def epsilon(u):
return 0.5 * (nabla_grad(u) + nabla_grad(u).T)
def sigma(u):
return lambda_ * nabla_grad(u) * Identity(3) + 2 * mu_ * epsilon(u)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant((0, 0, -rho * gamma))
T = Constant(wind)
a = inner(sigma(u), epsilon(v)) * dx
L = dot(f, v) * dx + dot(T, v) * ds
# Compute solution
u = Function(V)
solve(a == L, u, bc)
################################ Plot solution
from vedo.dolfin import *
plot(u, mode="displaced mesh", shading='flat')
# -
| examples/notebooks/dolfin/elasticbeam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
Original Google Colab link: https://colab.research.google.com/drive/1WIDhjHemJ8VCzd2C7iQjcNo8w_yi3ayz?usp=sharing
# + [markdown] id="xhhqqOps7qw4" colab_type="text"
# **Data Mining - Topic Extraction from StackOverflow Data in the Context of Software Architecture.**
# + [markdown] id="sli2FLHSBWNX" colab_type="text"
# # Import statements
# + id="90du86iigMrq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 901} outputId="72569415-52dd-49ea-8ef9-b0321cb8c0c1"
# !pip install stackapi
# !pip install s3fs
# !pip install pandas
# !pip install numpy
# !pip install nltk
# !pip install sklearn
# !pip install keras
# !pip install regex
# !pip install matplotlib
# !pip install bs4 lxml
# + id="ToSqFyGugx87" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 799} outputId="ede71161-fc22-46e0-c083-348569dd29b4"
# !pip install lda
# !pip install tensorflow
# + id="tOjbKO9tABAR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="e22e8552-0807-46ad-c28a-<PASSWORD>"
import pandas as pd
import numpy as np
import s3fs
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
import regex as re
import matplotlib.pyplot as plt
from keras.preprocessing.text import Tokenizer
from stackapi import StackAPI
import seaborn as sns
sns.set(style="whitegrid")
from scipy import stats
import lda
from bokeh.palettes import Set3, Blues8, brewer
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
pd.options.mode.chained_assignment = None
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
# + [markdown] id="YRCw7g9lAtHA" colab_type="text"
# # Data Extraction using StackAPI
# + id="AUKyHlYpAJS0" colab_type="code" colab={}
#key is required for extracting more than 500 posts from the stackoverflow api.
#maximum posts retrieved using this query will be max_pages * page_size = 100,000
SITE = StackAPI('stackoverflow', max_pages=1000, page_size=100, key='kGCevKwTYZ)K3MXyECOpmg((')
#basically we are collecting ten years' worth of data
#1262304000 date refers to 01-01-2010
questions = SITE.fetch('posts', tagged='software-design', filter='withbody', fromdate=1262304000)
# + id="JYv_PraAALOS" colab_type="code" colab={}
#store the indexed content of the posts along with the score
import csv
stackoverflow_data = []
i = 1
for item in questions['items']:
stackoverflow_data.append({'id': i, 'contents': item['body'], 'score':item['score']})
i = i + 1
csv_file = "stackoverflow_data.csv"
with open(csv_file, 'w') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=['id','contents','score'])
writer.writeheader()
for data in stackoverflow_data:
writer.writerow(data)
# + id="LtsDDhsRXdpQ" colab_type="code" colab={}
#Verify that stackoverflow data is accessible
# # !pip install google-colab
import os
from google.colab import drive
drive.mount('/content/drive/')
# !ls "/content/drive/My Drive/stackoverflow_data"
# + id="1x_yt_Lb2M6A" colab_type="code" colab={}
# READ DATASET INTO DATAFRAME
df = pd.read_csv("/content/drive/My Drive/stackoverflow_data.csv")
# + id="k2i6BlvJhpD5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="db1e14a5-40bd-451c-f168-e793e374f3dc"
from google.colab import drive
drive.mount('/content/drive')
# + id="OqAz9_-dFUSJ" colab_type="code" colab={}
# Sampling the dataset
df = df.sample(n=25000, random_state=1)
# + id="olKEisBs_CXF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} outputId="30f8035b-e25e-4fda-b652-6872430f786a"
df.head
# + [markdown] id="EynUxWhJ6gdZ" colab_type="text"
# # Dataset view
# + id="XlC8L5nk2S3s" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0ffe88ae-e291-4365-8504-8a009d21bc22"
# SHAPE of DATAFRAME
df.shape
# + id="SBCTkRfR2Vx7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="4e0e5f90-5a6f-4b05-ada3-d11c3a5e2444"
# VIEW OF A DATAFRAME
df.head(5)
# + id="Sur6Raom2X6H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="a85bf633-9dcd-40f0-8417-a249ef446a9d"
# VIEW OF A RAW contents
df.iloc[0, 1]
# + [markdown] id="o7no1D1W6TAq" colab_type="text"
# ## Upvote-Score Analysis
# + id="hMd5jxUA8fDL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="655480ee-3c2f-498b-e1f5-125a5d43ca8b"
(df["score"].describe())
# + [markdown] id="jtoyFs5L6qoK" colab_type="text"
# # Data Preprocessing
# + [markdown] id="6idYapUQ3yAI" colab_type="text"
# ### Remove code from Contents column
# + id="4hFGk9svh64I" colab_type="code" colab={}
#Remove all the code details from the posts
from bs4 import BeautifulSoup as bs
import lxml
new_data = []
for post in df['contents']:
data = bs(post, 'lxml')
for tag in data.find_all('code'):
tag.decompose()
new_data.append(data)
df['contents'] = [ str(item) for item in new_data]
# + id="tjXImu172lMa" colab_type="code" colab={}
# DROP ROWS WHICH HAS SAME contents
df.drop_duplicates(subset=['contents'],inplace=True) #dropping duplicates
# DROP ROWS WITH NA VALUES
df.dropna(axis=0,inplace=True)
# + id="7fOcasiT3WlM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="05445b0e-32b8-494b-bebb-3771946ab5d1"
# NEW/CURRENT SHAPE of DATAFRAME
df.shape
# + id="gQOEVGR-4Sat" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 49} outputId="bf255bfb-d2c1-4eb9-fbc3-1b37c9f8a0cb"
# ALTHOUGH IT SEEMS THAT WE DON'T HAVE ANY CONTRACTIONS, IT IS GOOD PRACTICE TO CHECK FOR THEM AND REMOVE THEM IF THEY ARE THERE.
df[df['contents'].str.match('\'')]
# + [markdown] id="qQShYN6g37_a" colab_type="text"
# ### Remove URLs
#
# + id="HzAUHRYNtT2O" colab_type="code" colab={}
# Identify records with URL and Drop records with URL in summary
# https://www.geeksforgeeks.org/python-check-url-string/
def remove_urls(df):
"""
This method removes the records containing URLs in the contents section
"""
url_regex = r"(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'\".,<>?«»“”‘’]))"
print("Total records:", len(df))
df['hasURL'] = df['contents'].apply(lambda contents: bool(re.search(url_regex, contents)))
df = df[~df['hasURL']].drop(columns=['hasURL'])  # keep only URL-free rows; avoids chained-assignment warnings
return df
# + id="opX2rIhb2hFP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6796a476-ff32-42b5-cf43-47af96f2f653"
df = remove_urls(df)
# + id="01UR0NQIxrgQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="16c742be-e532-4e2e-8f2b-8e6e99e84ca6"
df.shape
# + [markdown] id="wCKZQoqj4El_" colab_type="text"
# ### Lower-case conversion, Remove HTML tags and stop-words
# + id="OG1T1v5H5WLB" colab_type="code" colab={}
stop_words = set(stopwords.words('english')) # Prepare a set of STOPWORDS
def summary_cleaner(text):
newString = text.lower() # CONVERT INTO LOWER CASE
newString = re.sub(re.compile('<.*?>'), " ", newString) # REMOVE HTML TAGS
newString = re.sub(r'\([^)]*\)', '', newString) # REMOVE PARENTHESIZED TEXT i.e. "(xxxx)" => ""
newString = re.sub('"','', newString) # REMOVE DOUBLE QUOTES
newString = re.sub(r"'s\b","",newString) # REMOVE 's FROM THE LAST OF ANY TOKEN
newString = re.sub("[^a-zA-Z]", " ", newString) # KEEP ONLY ALPHABETIC CHARACTERS (drops digits and punctuation)
tokens = [w for w in newString.split() if not w in stop_words] # REMOVE STOP WORDS
return (" ".join(tokens)).strip()
cleaned_summary = []
for text in df['contents']:
cleaned_summary.append(summary_cleaner(text))
# + id="jaxXqSN_DQVd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="6f51b1aa-6016-4e97-95ee-f2b106fc0291"
# VIEW OF A CLEAN SUMMARY
cleaned_summary[0]
# + id="Q-h_v5BX3SSM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fe73e9cd-4fda-4881-9355-4b8483decac4"
len(cleaned_summary)
# + id="tK9eDuN1PtG8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c244ca5b-de51-434c-a131-46b54887a49e"
# VIEW OF RAW Contents
df.iloc[0, 1]
# + id="hT7clTVCQqvr" colab_type="code" colab={}
# ADD THE CLEANED CONTENTS TO THE DATAFRAME IN A NEW COLUMN
df['cleaned_contents']= cleaned_summary
# + [markdown] id="ttechoIu4Spf" colab_type="text"
# ### Cleaned Dataset view
# + id="8a835YMG3-J-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="19fca5b8-f3e5-4a1f-e724-26ccc5b2aa44"
pd.set_option('display.width', 1000)
df.iloc[0:5, 3]
# + id="1V7L2NWFyAVp" colab_type="code" colab={}
# Persist the cleaned dataset to Excel (ExcelWriter.save() is deprecated in recent pandas)
df.to_excel('Cleaned_Data.xlsx')
# + id="LyeBzDmASyZk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 358} outputId="01371b05-a40e-4cbd-c488-afbc6345132f"
# populate the lists with sentence lengths
summary_word_count = [len(i.split()) for i in df['contents']]
length_df = pd.DataFrame({'Contents':summary_word_count})
length_df.hist(bins=80, figsize=(10,5) )
plt.suptitle("Distribution of Contents", size=8)
plt.show()
# + [markdown] id="Ka-BYmC4j5Jj" colab_type="text"
# #### Effective Lemmatization - Tokenization, POS Tagging, POS Tagging - Wordnet, Lemmatization
# + id="XzpunKBMtJuC" colab_type="code" colab={}
# Word Tokenization of Sequences
def tokenize_dataset(data):
"""
This method is used to convert input data sequence into tokenized word sequences.
"""
return([nltk.word_tokenize(samples) for samples in data])
# POS Tagging of Tokens
def pos_tagging_tokens(data):
return([nltk.pos_tag(samples) for samples in data])
# + id="mquaUoyXk9Ve" colab_type="code" colab={}
# X Values
# Calling Tokenization method on Data sequences
tokens_X_train = tokenize_dataset(df["cleaned_contents"])
# Calling POS tagging method on Tokenized sequences
tagged_tokens_X_train = pos_tagging_tokens(tokens_X_train)
# + id="Lo1ruOoYfe8y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="e944827f-66d7-4e08-a483-b36a651e56f0"
print(tokens_X_train[10])
# + id="m6p19ZuBgFfc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="da841f22-afcc-4f08-f9b9-367c3a73c59b"
print(tagged_tokens_X_train[1])
# + [markdown] id="9vWXVsslgy3V" colab_type="text"
# #### Converting POS Tags to WordNet Tags
# + id="jwjyQ8Nfgxmt" colab_type="code" colab={}
wnl = WordNetLemmatizer()
def pos_tag_wordnet(data):
"""
This method converts the POS tags of the input sequences to WordNet tags for lemmatization.
"""
new_tagged_tokens = []
tag_map = {'j': wordnet.ADJ, 'v': wordnet.VERB, 'n': wordnet.NOUN, 'r': wordnet.ADV}
for tagged_tokens in data:
new_tagged_tokens.append([(word, tag_map.get(tag[0].lower(), wordnet.NOUN)) for word, tag in tagged_tokens])
return new_tagged_tokens
# + id="9xPOvR4QhAdM" colab_type="code" colab={}
# X Values
new_tagged_tokens_X_train = pos_tag_wordnet(tagged_tokens_X_train)
# + id="VydHlVyMhSlb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ac0d23e6-6dda-4c63-bd6c-387dedc24615"
len(new_tagged_tokens_X_train)
# + id="2BK4AlNYhkcu" colab_type="code" colab={}
# Create lemmatization
def lemmatized_text(wordnet_tokens_data):
"""
This method converts input tokenized sequence into lemmatized text sequence.
"""
return([ ' '.join(wnl.lemmatize(word, tag) for word, tag in wordnet_tokens) for wordnet_tokens in wordnet_tokens_data])
# + id="34K1x4tGhlE6" colab_type="code" colab={}
# X Values
lemmatized_text_X_train = lemmatized_text(new_tagged_tokens_X_train)
# + id="I_1EVHHLhlRD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9b5de7c0-2586-480b-93df-5a2bd08df119"
print(lemmatized_text_X_train[0])
# + id="5wWwo8Bgx9b-" colab_type="code" colab={}
X_tokenizer = Tokenizer()
X_tokenizer.fit_on_texts(list(lemmatized_text_X_train))
# + id="Ngcvs3vGx-Hc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ffa40e4c-ef19-481b-e25b-e61b94d0de34"
# Summary vocab size
X_vocabulary_size = len(X_tokenizer.word_index) +1
print("Size of Vocabulary for Cleaned Contents:", X_vocabulary_size)
# + [markdown] id="AgyMTnVOTXV3" colab_type="text"
# # Performing LDA - Topic Modelling
# + [markdown] id="LH9qFaRUBBnj" colab_type="text"
# ## Converting data into Term-Frequency Matrix (tf, unigrams)
#
# + id="vXODRPw8KOzp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="afdc35d5-10c5-46bb-98d2-0bd3c3d52945"
cv = CountVectorizer(min_df=0., max_df=1.)
cv_matrix = cv.fit_transform(lemmatized_text_X_train)
cv_matrix = cv_matrix.toarray()
cv_matrix
# + id="O0weibfe8JtH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="5e405176-c650-42ac-cb22-d769bdcbdb87"
# get all unique words in the corpus
vocab = cv.get_feature_names()  # on scikit-learn >= 1.0, use cv.get_feature_names_out()
# show document feature vectors
pd.DataFrame(cv_matrix, columns=vocab)
# + [markdown] id="ROjrPXr9BMhc" colab_type="text"
# ## Train the Model
# + id="wD40j7tL8f4s" colab_type="code" colab={}
model = lda.LDA(n_topics=30, n_iter=20000, random_state=100)
# The topic assignments are sensitive to these hyperparameters (n_topics, n_iter, random_state): changing them changes the end output
# + id="WmJTS9I387KK" colab_type="code" colab={}
X = cv_matrix
# + id="f9GIyWZN8swR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="b0678a33-dc69-44e6-922c-d417ef5c7507"
model.fit(X)
# + id="qRr-h6yw89OZ" colab_type="code" colab={}
topic_word = model.topic_word_
# + [markdown] id="E5-yyZMiZ5mb" colab_type="text"
# # Results
# + [markdown] id="2ceADhvNShCV" colab_type="text"
# ## Extracting "K" topics
# + id="R8ikCZYL0Vcw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f680dff3-cde2-46c5-a83a-381acbef1373"
import joblib
joblib.dump(model, '/content/drive/My Drive/lda_model.jl')
# + id="wkkssQUFFaaL" colab_type="code" colab={}
# Reload the model object
import joblib
reload_model = joblib.load('/content/drive/My Drive/lda_model.jl')
doc_topic = reload_model.doc_topic_
topic_word = reload_model.topic_word_
# + id="gqdb8ERy9FOX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 547} outputId="d3dbd90c-24d8-4858-b1a5-e90aad61e605"
n_top_words = 100
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
# + [markdown] id="bFx1Q55tO4-Y" colab_type="text"
# ## Document-topic distributions/ Percentage share
# + id="3qRd2ZzHPAQc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="e2e061b8-e508-4c9b-c944-bd0ced14ae98"
for i in range(20):
print("Stackoverflow Question index: {} (top topic: {})".format(df.iloc[i,0], doc_topic[i].argmax()))
# + id="585N_q9iDzFo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="cc8be3c3-b065-4470-cabb-a627a279ba26"
df
# + id="ijbjkOqQT1kQ" colab_type="code" colab={}
df["topic_number"] = list(map(lambda x: x.argmax(), doc_topic))
# + id="boFlla3JWxJF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 544} outputId="e470a63b-c4e6-4bfa-ebee-0ec7f2633c83"
data = df["topic_number"].value_counts().sort_index()
data
# + id="0d-ufQ2QNuez" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 502} outputId="72e1942a-e746-4c21-dacc-948744c73bc1"
plt.figure(figsize = (15, 8))
plt.bar(np.arange(data.shape[0]), data, color = 'orange')
plt.xticks(np.arange(data.shape[0]))
plt.grid(False)
plt.xlabel("Topic")
plt.ylabel("Frequency")
plt.savefig("topic.pdf")
# + id="054BwoQdJXOl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="7160d222-cd1a-489f-cece-4b73d8f47e74"
sorted_data = data.sort_values(ascending=False)
sorted_data.values
# + id="pjGSlcphtKe9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 575} outputId="feed7dd5-5b56-46c8-bf50-0a2379807928"
sorted_data.plot.pie(figsize=(10, 10), autopct= '%1.2f%%', pctdistance=1.15,
labeldistance=0.7, radius= 0.8, shadow=True, fontsize=11,
colors = brewer['Dark2'][8] +brewer["Paired"][10]+ Blues8[2:7]+brewer["Accent"][8])
plt.ylabel("Topic Numbers with Percentage share")
plt.savefig("pie_chart.png")
| Data_Mining.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 029
#
# There is a row of W cells arranged from left to right.
#
# Initially, the height of every cell is 0.
#
# We stack N bricks of height 1 onto this row, one at a time.
#
# The i-th brick is placed so that it exactly covers the cells from the L[i]-th through the R[i]-th from the left.
#
# The brick comes to rest on the highest horizontal surface within the range it covers.
#
# For each brick, report the height of its top face.
#
#
# [Constraints]
#
# - 2 ≦ W ≦ 500000
#
# - 1 ≦ N ≦ 250000
#
# - 1 ≦ L[i] ≦ R[i] ≦ W
#
# - All inputs are integers
#
# [Subtasks]
#
# 1. W ≦ 9000, N ≦ 9000
#
# 2. N ≦ 9000
#
# 3. No additional constraints
#
# ### Input format
# W N
#
# l[1] r[1]
#
# l[2] r[2]
#
# :
#
# l[N] r[N]
# +
# Input example 1
100 4
27 100
8 39
83 97
24 75
# Output example 1
1
2
2
3
# +
# Input example 2
3 5
1 2
2 2
2 3
3 3
1 2
# Output example 2
1
2
3
4
4
# +
# Input example 3
10 10
1 3
3 5
5 7
7 9
2 4
4 6
6 8
3 5
5 7
4 6
# Output example 3
1
2
3
4
3
4
5
5
6
7
# +
# Input example 4
500000 7
1 500000
500000 500000
1 500000
1 1
1 500000
500000 500000
1 500000
# Output example 4
1
2
3
4
5
6
7
# Note: this input example only satisfies the constraints of subtasks 2 and 3.
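A minimal brute-force simulation is enough for subtask 1 (W, N ≦ 9000); the function and variable names below are illustrative:

```python
def stack_bricks(W, segments):
    """Naive O(N*W) simulation: for each brick, rest it on the tallest
    cell it covers, then raise all covered cells to the new height."""
    height = [0] * W
    tops = []
    for l, r in segments:  # 1-indexed, inclusive
        h = max(height[l - 1:r]) + 1
        for i in range(l - 1, r):
            height[i] = h
        tops.append(h)
    return tops

# Input example 1 from above
print(stack_bricks(100, [(27, 100), (8, 39), (83, 97), (24, 75)]))
# → [1, 2, 2, 3]
```

Subtasks 2 and 3 need the per-brick work reduced below O(W), e.g. with a segment tree supporting range-maximum queries and range assignment.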
| 029_problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project: **Finding Lane Lines on the Road**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# %matplotlib inline
# ## Read in an Image
# +
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# -
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# +
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
# -
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
import os
os.listdir("test_images/")
# ## Build a Lane Finding Pipeline
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# + pycharm={"name": "#%%\n"}
# + pycharm={"name": "#%%\n"}
import statistics
class Filter():
class __Filter():
def __init__(self):
print("create LaneFinder")
def __str__(self):
return repr(self)
l_slope= []
l_intercept = []
r_slope = []
r_intercept = []
depth = 4
def set_filter_depth(self, depth):
self.depth = depth
def update(self,current_slope, current_intercept,slope, intercept):
if len(current_slope) > self.depth :
current_slope.pop(0)
current_intercept.pop(0)
current_intercept.append(intercept)
current_slope.append(slope)
return self.balance_data(current_slope, current_intercept)
def get_l_balanced_data(self):
return self.balance_data(self.l_slope , self.l_intercept)
def get_r_balanced_data(self):
return self.balance_data(self.r_slope , self.r_intercept)
def balance_data(self,current_slope, current_intercept):
balanced_slope , balanced_intercept, total_div= 0.0, 0.0, 0
for i in range(len(current_slope)):
balanced_slope = balanced_slope + current_slope[i] * (i + 1)
balanced_intercept = balanced_intercept + current_intercept[i] * (i + 1)
total_div = total_div + (i + 1)
return balanced_slope/total_div, balanced_intercept/total_div
def update_l(self, slope, intercept):
return self.update(self.l_slope, self.l_intercept, slope, intercept)
def update_r(self, slope, intercept):
return self.update(self.r_slope, self.r_intercept, slope, intercept)
def resset(self):
self.l_slope= []
self.l_intercept = []
self.r_slope = []
self.r_intercept = []
self.depth = 4
def __init__(self):
if not Filter.instance:
Filter.instance = Filter.__Filter()
instance = None
ignore_mask_color = 255
# kernel size for gaussian blur
kernel_size = 7
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 90
# define parameters for HoughLinesP
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 20 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 20 # minimum number of pixels making up a line
max_line_gap = 300 # maximum gap in pixels between connectable line segments
def get_vertices(image):
return np.array([[(0, image.shape[0]+50),
(image.shape[1] / 2 - 50, image.shape[0] / 1.65),
(image.shape[1] / 2 + 50, image.shape[0] / 1.65),
(image.shape[1] + 50, image.shape[0]+20),
(image.shape[1]-250, image.shape[0]+20),
(image.shape[1] / 2, image.shape[0] / 1.4),
(image.shape[1] / 2, image.shape[0] / 1.4),
(300, image.shape[0]+50)
]], dtype=np.int32)
def process_lines(lines, canva):
l_slope_lst, r_slope_lst, l_intercept_lst, r_intercept_lst = [],[],[],[]
total_l_len , total_r_len = 0, 0
for line in lines:
(x1, y1, x2, y2) = line[0]
# skip division by zero (vertical or horizontal segments)
if x2 - x1 == 0 or y2 - y1 == 0:
continue
slope = (x2-x1)/(y2-y1) # note: dx/dy (inverse slope), used below as x = slope*(y - intercept)
intercept = y2 - (x2 / slope)
leng = np.sqrt((x2-x1)**2 + (y2-y1)**2)
if -2 > slope or slope > 2 :
continue
if slope > 0 :
l_slope_lst.append(slope * leng)
l_intercept_lst.append(intercept * leng)
total_l_len = total_l_len + leng
else:
r_slope_lst.append(slope * leng)
r_intercept_lst.append(intercept * leng)
total_r_len = total_r_len + leng
if total_l_len == 0:
l_slope,l_intercept = Filter.instance.get_l_balanced_data()
else:
l_slope,l_intercept = Filter.instance.update_l(sum(l_slope_lst)/total_l_len , sum(l_intercept_lst)/total_l_len)
if total_r_len == 0:
r_slope,r_intercept = Filter.instance.get_r_balanced_data()
else:
r_slope,r_intercept= Filter.instance.update_r(sum(r_slope_lst)/total_r_len , sum(r_intercept_lst)/total_r_len)
l_y = [canva.shape[0]/1.5 , canva.shape[0]]
r_y = [canva.shape[0]/1.5 , canva.shape[0]]
l_x = [l_slope*(l_y[0]-l_intercept),l_slope*(l_y[1]-l_intercept)]
r_x = [r_slope*(r_y[0]-r_intercept),r_slope*(r_y[1]-r_intercept)]
cv2.line(canva, (int(l_x[0]), int(l_y[0])), (int(l_x[1]), int(l_y[1])), (255, 0, 0), 25)
cv2.line(canva, (int(r_x[0]), int(r_y[0])), (int(r_x[1]), int(r_y[1])), (255, 0, 0), 25)
return canva
def process_image(original_image):
hls_image = cv2.cvtColor(original_image, cv2.COLOR_RGB2HLS) # lets convert to HLS
yellow_mask = cv2.inRange(hls_image, (10, 0, 100), (40,255,255)) # take yellow mask from HLS
white_mask = cv2.inRange(original_image, (180, 180, 180), (255,255,255)) #take white mask from RGB
white_mask = white_mask + cv2.inRange(hls_image, (0, 100, 100), (40,255,255)) # update white mask from HLS
mask_image = white_mask + yellow_mask # lets take two masks (from hls - yellow , from rgb - white)
hls_image[mask_image>0] = [0,0,0]
gray = cv2.cvtColor(original_image, cv2.COLOR_RGB2GRAY)
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, get_vertices(original_image), ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
canvas_for_lines = np.copy(original_image) * 0 # creating a blank to draw lines
return weighted_img(original_image , process_lines(lines, canvas_for_lines) ,0.5, 1)
fig_num = 0
Filter().instance.resset()
Filter().instance.set_filter_depth(1)
for path in os.listdir("test_images/"):
# read file name, check is it needed format
name = path.split('.')
if name[-1] != "jpg" and name[-1] != "jpeg":
continue
image = mpimg.imread("test_images/"+path)
#process image and get lanes (if it possible)
line = process_image(image)
#plot the image
plt.figure(fig_num)
fig_num = fig_num +1
plt.imshow(line)
# -
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Let's try the one with the solid white lane on the right first ...
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
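One way to realize this averaging/extrapolation (a sketch, independent of the `Filter`-based approach used earlier in this notebook; all names are illustrative) is to group segments by slope sign, average each group weighted by segment length, and extrapolate each side between two y-coordinates:

```python
import math

def average_lane_lines(lines, y_bottom, y_top):
    """Split Hough segments by slope sign, average slope/intercept per side
    (weighted by segment length), and extrapolate each side to y_bottom and
    y_top. `lines` mimics cv2.HoughLinesP output: [[[x1, y1, x2, y2]], ...].
    Returns ((x_bottom, x_top) or None) for the left and right lanes."""
    groups = {"left": [], "right": []}
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if x2 == x1:
            continue  # vertical segment: slope undefined
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        length = math.hypot(x2 - x1, y2 - y1)
        # Image y grows downward, so the left lane line has negative slope
        side = "left" if slope < 0 else "right"
        groups[side].append((slope, intercept, length))
    result = {}
    for side, segs in groups.items():
        if not segs:
            result[side] = None
            continue
        total = sum(length for _, _, length in segs)
        slope = sum(s * length for s, _, length in segs) / total
        intercept = sum(i * length for _, i, length in segs) / total
        result[side] = (round((y_bottom - intercept) / slope),
                        round((y_top - intercept) / slope))
    return result["left"], result["right"]
```

The length weighting makes long, confident segments dominate short noisy ones; the two endpoints per side can then be drawn with a single `cv2.line` call each.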
# Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
# %time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# ## Optional Challenge
#
# Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'test_videos_output/challenge.mp4'
Filter().instance.resset()
Filter().instance.set_filter_depth(7)
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
# %time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
# + pycharm={"name": "#%%\n"}
# + pycharm={"name": "#%%\n"}
| P1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Common Workflow Language with BioExcel Building Blocks
# ### Based on the Protein MD Setup tutorial using BioExcel Building Blocks (biobb)
# ***
# This tutorial aims to illustrate the process of **building up a CWL workflow** using the **BioExcel Building Blocks library (biobb)**. The tutorial is based on the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup).
# ***
# **Biobb modules** used:
#
# - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases.
# - [biobb_model](https://github.com/bioexcel/biobb_model): Tools to model macromolecular structures.
# - [biobb_md](https://github.com/bioexcel/biobb_md): Tools to setup and run Molecular Dynamics simulations.
# - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories.
#
# **Software requirements**:
#
# - [cwltool](https://github.com/common-workflow-language/cwltool): Common Workflow Language tool description reference implementation.
# - [docker](https://www.docker.com/): Docker container platform.
#
# ***
# ### Tutorial Sections:
# 1. [CWL workflows: Brief Introduction](#intro)
#
#
# 2. [BioExcel building blocks TOOLS CWL Descriptions](#tools)
#
# * [Tool Building Block CWL Sections](#toolcwl)
# * [Complete Pdb Building Block CWL description](#pdbcwl)
#
#
# 3. [BioExcel building blocks WORKFLOWS CWL Descriptions](#workflows)
#
# * [Header](#cwlheader)
# * [Inputs](#inputs)
# * [Outputs](#outputs)
# * [Steps](#steps)
# * [Input of a Run](#run)
# * [Complete Workflow](#wf)
# * [Running the CWL workflow](#runwf)
# * [Cwltool workflow output](#wfoutput)
#
#
# 4. [Protein MD-Setup CWL workflow with BioExcel building blocks](#mdsetup)
#
# * [Steps](#mdsteps)
# * [Inputs](#mdinputs)
# * [Outputs](#mdoutputs)
# * [Complete Workflow](#mdworkflow)
# * [Input of a Run](#mdrun)
# * [Running the CWL workflow](#mdcwlrun)
#
#
# 5. [Questions & Comments](#questions)
# ***
#
# <img src="logo.png" />
#
# ***
# <a id="intro"></a>
# ## CWL workflows: Brief Introduction
#
# The **Common Workflow Language (CWL)** is an open standard for describing analysis **workflows and tools** in a way that makes them **portable and scalable** across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing (HPC) environments.
#
# **CWL** is a community-led specification to express **portable workflow and tool descriptions**, which can be executed by **multiple leading workflow engine implementations**. Unlike previous standardisation attempts, CWL has taken a pragmatic approach and focused on what most workflow systems are able to do: Execute command line tools and pass files around in a top-to-bottom pipeline. At the heart of CWL workflows are the **tool descriptions**. A command line is described, with parameters, input and output files, in a **YAML format** so they can be shared across workflows and linked to from registries like **ELIXIR’s bio.tools**. These are then combined and wired together in a **second YAML file** to form a workflow template, which can be **executed on any of the supported implementations**, repeatedly and **on different platforms** by specifying input files and workflow parameters. The [CWL User Guide](https://www.commonwl.org/user_guide/index.html) gives a gentle introduction to the language, while the more detailed [CWL specifications](https://www.commonwl.org/v1.1/) formalize CWL concepts so they can be implemented by the different workflow systems. A couple of **BioExcel webinars** were focused on **CWL**, an [introduction to CWL](https://www.youtube.com/watch?v=jfQb1HJWRac) and a [new open source tool to run CWL workflows on LSF (CWLEXEC)](https://www.youtube.com/watch?v=_jSTZMWtPAY).
#
# **BioExcel building blocks** are all **described in CWL**. A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided in the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)).
#
# In this tutorial, we are going to use these **BioExcel building blocks CWL descriptions** to build a **CWL** biomolecular workflow. In particular, the assembled workflow will perform a complete **Molecular Dynamics setup** (MD Setup) using **GROMACS MD package**, taking as a base the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup).
#
# No additional installation is required apart from the **Docker platform** and the **CWL tool reference executor**, as the **building blocks** will be launched using their associated **Docker containers**.
# ***
# <a id="tools"></a>
#
# ## BioExcel building blocks TOOLS CWL Descriptions
#
# Writing a workflow in CWL using the **BioExcel building blocks** is possible thanks to the already generated **CWL descriptions** for all the **building blocks** (wrappers). A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided in the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)).
#
# ***
# <a id="toolcwl"></a>
# ### Tool Building Block CWL sections:
#
# **Example**: Step 1 of the workflow downloads a **protein structure** from the **PDB database**. The building block used for this is the [Pdb](https://github.com/bioexcel/biobb_io/blob/master/biobb_io/api/pdb.py) building block, from the [biobb_io](https://github.com/bioexcel/biobb_io) package, which includes tools to **fetch biomolecular data from public databases**. The **CWL description** for this building block can be found in the [adapters github repo](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl), and is shown in the following notebook cell. Description files like this one are needed for every step of the workflow in order to build and run a **CWL workflow**. To build a **CWL workflow** with **BioExcel building blocks**, one just needs to download all the required description files from the [biobb_adapters github](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl).
#
# This particular example of a **Pdb building block** is useful to illustrate the most important points of the **CWL description**:
# * **hints**: The **CWL hints** section describes the **process requirements** that should (but do not have to) be satisfied to run the wrapped command. The implementation may report a **warning** if a hint cannot be satisfied. In the **BioExcel building blocks**, a **DockerRequirement** subsection is always present in the **hints** section, pointing to the associated **Docker container**. The **dockerPull: parameter** takes the same value that you would pass to a **docker pull** command, that is, the name of the **container image**. In this case we have used the container called **biobb_io:latest**, which can be found in the **quay.io repository** and contains the **Pdb** building block.
hints:
DockerRequirement:
dockerPull: quay.io/biocontainers/biobb_io:latest
# * **namespaces and schemas**: Input and output **metadata** may be represented within a tool or workflow. Such **metadata** must use a **namespace prefix** listed in the **$namespaces and $schemas sections** of the document. All **BioExcel building blocks CWL specifications** use the **EDAM ontology** (http://edamontology.org/) as **namespace**, with all terms included in its **Web Ontology Language** (owl) of knowledge representation (http://edamontology.org/EDAM_1.22.owl). **BioExcel** is contributing to the expansion of the **EDAM ontology** with the addition of new structural terms such as [GROMACS XTC format](http://edamontology.org/format_3875) or the [trajectory visualization operation](http://edamontology.org/operation_3890).
$namespaces:
edam: http://edamontology.org/
$schemas:
- http://edamontology.org/EDAM_1.22.owl
# * **inputs**: The **inputs section** of a **tool** contains a list of input parameters that **control how to run the tool**. Each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. Available primitive types are *string, int, long, float, double, and null*; complex types are *array and record*; in addition there are special types *File, Directory and Any*. The field **inputBinding** is optional and indicates whether and how the input parameter should appear on the tool’s command line, in which **position** (position), and with which **name** (prefix). The **default field** stores the **default value** for the particular **input parameter**. <br>In this particular example, the **Pdb building block** has two different **input parameters**: *output_pdb_path* and *config*. The *output_pdb_path* input parameter defines the name of the **output file** that will contain the downloaded **PDB structure**. The *config* parameter is common to all **BioExcel building blocks**, and gathers all the **properties** of the building block in a **json format**. The **question mark** after the string type (*string?*) denotes that this input is **optional**.
inputs:
output_pdb_path:
type: string
inputBinding:
position: 1
prefix: --output_pdb_path
default: 'downloaded_structure.pdb'
config:
type: string?
inputBinding:
position: 2
prefix: --config
default: '{"pdb_code" : "1aki"}'
# * **outputs**: The **outputs section** of a **tool** contains a list of output parameters that should be returned after running the **tool**. Similarly to the inputs section, each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. The **outputBinding** field describes how to set the value of each output parameter. The **glob field** consists of the name of a file in the **output directory**. In the **BioExcel building blocks**, every **output** has an associated **input parameter** defined in the previous input section, defining the name of the file to be generated. <br>In the particular **Pdb building block** example, the *output_pdb_file* parameter of type *File* is coupled to the *output_pdb_path* input parameter, using the **outputBinding** and the **glob** fields. The standard **PDB** format of the output file is also specified using the **EDAM ontology** format id 1476 ([edam:format_1476](http://edamontology.org/format_1476)).
outputs:
output_pdb_file:
type: File
format: edam:format_1476
outputBinding:
glob: $(inputs.output_pdb_path)
# For more information on CWL tools description, please refer to the [CWL User Guide](https://www.commonwl.org/user_guide/index.html) or the [CWL specifications](https://www.commonwl.org/v1.1/).
# ***
# <a id="pdbcwl"></a>
# ### Complete Pdb Building Block CWL description:
#
# Example of a **BioExcel building block CWL description** (pdb from biobb_io package)
# +
# Example of a BioExcel building block CWL description (pdb from biobb_io package)
# #!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
baseCommand: pdb
hints:
DockerRequirement:
dockerPull: quay.io/biocontainers/biobb_io:latest
inputs:
output_pdb_path:
type: string
inputBinding:
position: 1
prefix: --output_pdb_path
default: 'downloaded_structure.pdb'
config:
type: string?
inputBinding:
position: 2
prefix: --config
default: '{"pdb_code" : "1aki"}'
outputs:
output_pdb_file:
type: File
format: edam:format_1476
outputBinding:
glob: $(inputs.output_pdb_path)
$namespaces:
edam: http://edamontology.org/
$schemas:
- http://edamontology.org/EDAM_1.22.owl
# -
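# To make the anatomy of such a description concrete, the sketch below mirrors the parsed Pdb tool description as a plain Python dict (a CWL file is just YAML/JSON) and assembles the command line an engine would run from it, honouring **position**, **prefix**, and **default** values. This is an illustrative toy, not cwltool or biobb code; `build_command` is a hypothetical helper.

```python
# Illustrative sketch: the Pdb tool description as a parsed mapping
# (mirrors the YAML cell above; not a cwltool API).
pdb_tool = {
    "cwlVersion": "v1.0",
    "class": "CommandLineTool",
    "baseCommand": "pdb",
    "hints": {"DockerRequirement": {"dockerPull": "quay.io/biocontainers/biobb_io:latest"}},
    "inputs": {
        "output_pdb_path": {
            "type": "string",
            "inputBinding": {"position": 1, "prefix": "--output_pdb_path"},
            "default": "downloaded_structure.pdb",
        },
        "config": {
            "type": "string?",
            "inputBinding": {"position": 2, "prefix": "--config"},
            "default": '{"pdb_code" : "1aki"}',
        },
    },
    "outputs": {
        "output_pdb_file": {
            "type": "File",
            "outputBinding": {"glob": "$(inputs.output_pdb_path)"},
        },
    },
}

def build_command(tool, job):
    """Assemble the command line an engine would run: job values override
    defaults, optional inputs with no value are skipped, and bound arguments
    are ordered by their inputBinding position."""
    args = [tool["baseCommand"]]
    bound = []
    for name, spec in tool["inputs"].items():
        value = job.get(name, spec.get("default"))
        if value is None:
            continue  # optional input ('?') with no value supplied
        binding = spec["inputBinding"]
        bound.append((binding["position"], binding["prefix"], value))
    for _, prefix, value in sorted(bound):
        args += [prefix, value]
    return args

print(build_command(pdb_tool, {"output_pdb_path": "tutorial.pdb"}))
# ['pdb', '--output_pdb_path', 'tutorial.pdb', '--config', '{"pdb_code" : "1aki"}']
```

# Note how the result matches the docker invocation shown later in the cwltool log: the engine only adds the container plumbing around this command line.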
# ***
# <a id="workflows"></a>
# ## BioExcel building blocks WORKFLOWS CWL Descriptions
#
# Now that we have seen the **BioExcel building blocks CWL descriptions**, we can use them to build our first **biomolecular workflow** as a demonstrator. All **CWL workflows** are divided into **two files**: the **CWL description** and the **YAML** or **JSON** file containing **all workflow inputs**. Starting with the **CWL workflow description**, let's explore our first example **section by section**.
# <a id="cwlheader"></a>
# ### Header:
#
# * **cwlVersion** field indicates the version of the **CWL spec** used by the document.
# * **class** field indicates this document describes a **workflow**.
# +
# # !/usr/bin/env cwl-runner
cwlVersion: v1.0
class: Workflow
# -
# <a id="inputs"></a>
# ### Inputs:
#
# The **inputs section** describes the inputs for **each of the steps** of the workflow. The **BioExcel building blocks (biobb)** have three types of **input parameters**: **input**, **output**, and **properties**. The **properties** parameter, which contains all the input parameters that are neither **input** nor **output files**, is defined in **JSON format** (see examples in the **Protein MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup)).
#
# **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. Two different **inputs** are needed for this step: the **name of the file** that will contain the downloaded PDB structure (*step1_output_name*), and the **properties** of the building block (*step1_properties*), that in this case will indicate the PDB code to look for (see **Input of a run** section). Both input parameters have type *string* in this **building block**.
# CWL workflow inputs section example
inputs:
step1_output_name: string
step1_properties: string
# <a id="outputs"></a>
# ### Outputs:
#
# The **outputs section** describes the set of **final outputs** from the **workflow**. These outputs can be a collection of outputs from **different steps of the workflow**. Each parameter consists of an **identifier**, a **data type**, and an **outputSource**, which connects the output parameter of a **particular step** to the **workflow final output parameter**.
#
# **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. The *pdb* **output** is a **file** containing the **protein structure** in **PDB format**, which is connected to the output parameter *output_pdb_file* of the **step1 of the workflow** (*step1_pdb*).
# CWL workflow outputs section example
outputs:
pdb:
type: File
outputSource: step1_pdb/output_pdb_file
# <a id="steps"></a>
# ### Steps:
#
# The **steps section** describes the actual steps of the workflow. Steps are **connected** one to the other through the **input parameters**.
#
# **Workflow steps** are not necessarily run in the order they are listed, instead **the order is determined by the dependencies between steps**. In addition, workflow steps which do not depend on one another may run **in parallel**.
#
# **Example**: Step 1 and 2 of the workflow, download a **protein structure** from the **PDB database**, and **fix the side chains**, adding any side chain atoms missing in the original structure. Note how **step1 and step2** are **connected** through the **output** of one and the **input** of the other: **Step2** (*step2_fixsidechain*) receives as **input** (*input_pdb_path*) the **output of the step1** (*step1_pdb*), identified as *step1_pdb/output_pdb_file*.
# CWL workflow steps section example
steps:
step1_pdb:
run: biobb_adapters/pdb.cwl
in:
config: step1_properties
out: [output_pdb_file]
step2_fixsidechain:
run: biobb_adapters/fix_side_chain.cwl
in:
input_pdb_path: step1_pdb/output_pdb_file
out: [output_pdb_file]
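# The engine derives this execution order from the data links alone. As an illustration (not cwltool code), the sketch below reads a `steps` mapping like the one above, treats every `in` value of the form `stepN/output` as a dependency edge, and topologically sorts the steps; steps with no edge between them could run in parallel. The step names are taken from the example above.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# The steps section above, as a parsed mapping (illustrative sketch).
steps = {
    "step1_pdb": {"in": {"config": "step1_properties"}},
    "step2_fixsidechain": {"in": {"input_pdb_path": "step1_pdb/output_pdb_file"}},
}

def dependencies(steps):
    """Map each step to the set of steps whose outputs it consumes.
    A value like 'step1_pdb/output_pdb_file' links step to step;
    plain values such as 'step1_properties' are workflow inputs."""
    return {
        name: {
            src.split("/")[0]
            for src in spec["in"].values()
            if "/" in src and src.split("/")[0] in steps
        }
        for name, spec in steps.items()
    }

order = list(TopologicalSorter(dependencies(steps)).static_order())
print(order)  # ['step1_pdb', 'step2_fixsidechain']
```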
# <a id="run"></a>
# ### Input of a run:
#
# As previously stated, all **CWL workflows** are divided into **two files**: the **CWL description** and the **YAML** or **JSON** file containing **all workflow inputs**. In this example, we are going to produce a **YAML** formatted object in a separate file describing the **inputs of our run**.
#
# **Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. The **step1_output_name** contains the name of the file that is going to be produced by the **building block**, whereas the **JSON-formatted properties** (**step1_properties**) contain the **pdb code** of the structure to be downloaded:
#
# * step1_output_name: **"tutorial_1aki.pdb"**
# * step1_properties: **{"pdb_code" : "1aki"}**
step1_output_name: 'tutorial_1aki.pdb'
step1_properties: '{"pdb_code" : "1aki"}'
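# cwltool also accepts the input object as **JSON**, so a job file can be generated from Python with the standard library alone, which is convenient for parameterising many runs. The sketch below (a hypothetical example using the input names from above) writes such a JSON job file; note that the **properties** still travel as a JSON-formatted *string*, as required by the building block.

```python
import json

# Inputs of the run, matching the workflow's inputs section (illustrative).
job = {
    "step1_output_name": "tutorial_1aki.pdb",
    # The building-block properties are passed as a JSON string, not a nested object.
    "step1_properties": json.dumps({"pdb_code": "1aki"}),
}

with open("BioExcel-CWL-firstWorkflow-job.json", "w") as fh:
    json.dump(job, fh, indent=2)

print(open("BioExcel-CWL-firstWorkflow-job.json").read())
```

# The resulting file can be passed to cwltool in place of the YAML job file.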
# <a id="wf"></a>
# ### Complete workflow:
#
# Example of a short **CWL workflow** with **BioExcel building blocks**, which retrieves a **PDB file** for the **Lysozyme protein structure** from the RCSB PDB database (**step1: pdb.cwl**), and fixes the possible problems in the structure, adding **missing side chain atoms** if needed (**step2: fix_side_chain.cwl**).
# +
# Example of a short CWL workflow with BioExcel building blocks
# # !/usr/bin/env cwl-runner
cwlVersion: v1.0
class: Workflow
inputs:
step1_properties: '{"pdb_code" : "1aki"}'
step1_output_name: 'tutorial_1aki.pdb'
outputs:
pdb:
type: File
outputSource: step2_fixsidechain/output_pdb_file
steps:
step1_pdb:
run: biobb_adapters/pdb.cwl
in:
output_pdb_path: step1_output_name
config: step1_properties
out: [output_pdb_file]
step2_fixsidechain:
run: biobb_adapters/fix_side_chain.cwl
in:
input_pdb_path: step1_pdb/output_pdb_file
out: [output_pdb_file]
# -
# <a id="runwf"></a>
# ### Running the CWL workflow:
#
# The final step of the process is **running the workflow described in CWL**. For that, the description presented in the previous cell should be written to a file (e.g. BioExcel-CWL-firstWorkflow.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-firstWorkflow-job.yml), and finally both files should be passed to the **CWL tool description reference implementation executor** (cwltool).
#
# It is important to note that in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed were downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**.
#
# The **command line** is shown in the cell below:
# Run CWL workflow with CWL tool description reference implementation (cwltool).
cwltool BioExcel-CWL-firstWorkflow.cwl BioExcel-CWL-firstWorkflow-job.yml
# <a id="wfoutput"></a>
# ### Cwltool workflow output
#
# The **execution of the workflow** will write information to the standard output such as the **step being performed**, the **way it is run** (command line, docker container, etc.), **inputs and outputs** used, and **state of each step** (success, failed). The next cell contains a **real output** for the **execution of our first example**:
Resolved 'BioExcel-CWL-firstWorkflow.cwl' to 'file:///PATH/biobb_wf_md_setup/cwl/BioExcel-CWL-firstWorkflow.cwl'
[workflow BioExcel-CWL-firstWorkflow.cwl] start
[step step1_pdb] start
[job step1_pdb] /private/tmp/docker_tmp1g8y0wu0$ docker \
run \
-i \
--volume=/private/tmp/docker_tmp1g8y0wu0:/private/var/spool/cwl:rw \
--volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmps4_pw5tj:/tmp:rw \
--workdir=/private/var/spool/cwl \
--read-only=true \
--user=501:20 \
--rm \
--env=TMPDIR=/tmp \
--env=HOME=/private/var/spool/cwl \
quay.io/biocontainers/biobb_io:0.1.3--py_0 \
pdb \
--config \
'{"pdb_code" : "1aki"}' \
--output_pdb_path \
tutorial.pdb
2019-10-24 08:42:06,235 [MainThread ] [INFO ] Downloading: 1aki from: https://files.rcsb.org/download/1aki.pdb
2019-10-24 08:42:07,594 [MainThread ] [INFO ] Writting pdb to: /private/var/spool/cwl/tutorial.pdb
2019-10-24 08:42:07,607 [MainThread ] [INFO ] Filtering lines NOT starting with one of these words: ['ATOM', 'MODEL', 'ENDMDL']
[job step1_pdb] completed success
[step step1_pdb] completed success
[step step2_fixsidechain] start
[job step2_fixsidechain] /private/tmp/docker_tmpuaecttdd$ docker \
run \
-i \
--volume=/private/tmp/docker_tmpuaecttdd:/private/var/spool/cwl:rw \
--volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmp9t_nks8r:/tmp:rw \
--volume=/private/tmp/docker_tmp1g8y0wu0/tutorial.pdb:/private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb:ro \
--workdir=/private/var/spool/cwl \
--read-only=true \
--user=501:20 \
--rm \
--env=TMPDIR=/tmp \
--env=HOME=/private/var/spool/cwl \
quay.io/biocontainers/biobb_model:0.1.3--py_0 \
fix_side_chain \
--input_pdb_path \
/private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb \
--output_pdb_path \
fixed.pdb
[job step2_fixsidechain] completed success
[step step2_fixsidechain] completed success
[workflow BioExcel-CWL-firstWorkflow.cwl] completed success
{
"pdb": {
"location": "file:///PATH/biobb_wf_md_setup/cwl/fixed.pdb",
"basename": "fixed.pdb",
"class": "File",
"checksum": "sha1$3ef7a955f93f25af5e59b85bcf4cb1d0bbf69a40",
"size": 81167,
"format": "http://edamontology.org/format_1476",
"path": "/PATH/biobb_wf_md_setup/cwl/fixed.pdb"
}
}
Final process status is success
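# The JSON object printed just before the final status line is machine-readable: cwltool emits the workflow's **output object** on standard output, so the produced files can be located programmatically. A small stdlib-only sketch (the JSON literal below is copied from the log above, with the `/PATH/` placeholder kept as-is):

```python
import json

# Output object printed by cwltool at the end of the run (copied from the log above).
cwltool_stdout = '''
{
  "pdb": {
    "location": "file:///PATH/biobb_wf_md_setup/cwl/fixed.pdb",
    "basename": "fixed.pdb",
    "class": "File",
    "checksum": "sha1$3ef7a955f93f25af5e59b85bcf4cb1d0bbf69a40",
    "size": 81167,
    "format": "http://edamontology.org/format_1476",
    "path": "/PATH/biobb_wf_md_setup/cwl/fixed.pdb"
  }
}
'''

outputs = json.loads(cwltool_stdout)
for name, obj in outputs.items():
    if obj.get("class") == "File":
        print(f"{name}: {obj['path']} ({obj['size']} bytes)")
# pdb: /PATH/biobb_wf_md_setup/cwl/fixed.pdb (81167 bytes)
```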
# ***
# <a id="mdsetup"></a>
# ## Protein MD-Setup CWL workflow with BioExcel building blocks
#
# The last step of this **tutorial** illustrates the building of a **complex CWL workflow**. The example used is the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). It is strongly recommended to take a look at this **notebook** before moving on to the next sections of this **tutorial**, as it contains information for all the **building blocks** used. The aim of this **tutorial** is to illustrate how to build **CWL workflows** using the **BioExcel building blocks**. For information about the science behind every step of the workflow, please refer to the **Protein Gromacs MD Setup** Jupyter Notebook tutorial. The **workflow** presented in the next cells is a translation of the very same workflow to **CWL language**, including the same **number of steps** (23) and **building blocks**.
# <a id="mdsteps"></a>
# ### Steps:
#
# First of all, let's define the **steps of the workflow**.
#
# * **Fetching PDB Structure**: step 1
# * **Fix Protein Structure**: step 2
# * **Create Protein System Topology**: step 3
# * **Create Solvent Box**: step 4
# * **Fill the Box with Water Molecules**: step 5
# * **Adding Ions**: steps 6 and 7
# * **Energetically Minimize the System**: steps 8, 9 and 10
# * **Equilibrate the System (NVT)**: steps 11, 12 and 13
# * **Equilibrate the System (NPT)**: steps 14, 15 and 16
# * **Free Molecular Dynamics Simulation**: steps 17 and 18
# * **Post-processing Resulting 3D Trajectory**: steps 19 to 23
#
# Mandatory and optional **inputs** and **outputs** of every **building block** can be consulted in the appropriate **documentation** pages from the corresponding **BioExcel building block** category (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)).
steps:
step1_pdb:
run: biobb_adapters/pdb.cwl
in:
output_pdb_path: step1_pdb_name
config: step1_pdb_config
out: [output_pdb_file]
step2_fixsidechain:
run: biobb_adapters/fix_side_chain.cwl
in:
input_pdb_path: step1_pdb/output_pdb_file
out: [output_pdb_file]
step3_pdb2gmx:
run: biobb_adapters/pdb2gmx.cwl
in:
input_pdb_path: step2_fixsidechain/output_pdb_file
out: [output_gro_file, output_top_zip_file]
step4_editconf:
run: biobb_adapters/editconf.cwl
in:
input_gro_path: step3_pdb2gmx/output_gro_file
out: [output_gro_file]
step5_solvate:
run: biobb_adapters/solvate.cwl
in:
input_solute_gro_path: step4_editconf/output_gro_file
input_top_zip_path: step3_pdb2gmx/output_top_zip_file
out: [output_gro_file, output_top_zip_file]
step6_grompp_genion:
run: biobb_adapters/grompp.cwl
in:
config: step6_gppion_config
input_gro_path: step5_solvate/output_gro_file
input_top_zip_path: step5_solvate/output_top_zip_file
out: [output_tpr_file]
step7_genion:
run: biobb_adapters/genion.cwl
in:
config: step7_genion_config
input_tpr_path: step6_grompp_genion/output_tpr_file
input_top_zip_path: step5_solvate/output_top_zip_file
out: [output_gro_file, output_top_zip_file]
step8_grompp_min:
run: biobb_adapters/grompp.cwl
in:
config: step8_gppmin_config
input_gro_path: step7_genion/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
out: [output_tpr_file]
step9_mdrun_min:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step8_grompp_min/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file]
step10_energy_min:
run: biobb_adapters/energy.cwl
in:
config: step10_energy_min_config
input_energy_path: step9_mdrun_min/output_edr_file
out: [output_xvg_file]
step11_grompp_nvt:
run: biobb_adapters/grompp.cwl
in:
config: step11_gppnvt_config
input_gro_path: step9_mdrun_min/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
out: [output_tpr_file]
step12_mdrun_nvt:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step11_grompp_nvt/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step13_energy_nvt:
run: biobb_adapters/energy.cwl
in:
config: step13_energy_nvt_config
input_energy_path: step12_mdrun_nvt/output_edr_file
out: [output_xvg_file]
step14_grompp_npt:
run: biobb_adapters/grompp.cwl
in:
config: step14_gppnpt_config
input_gro_path: step12_mdrun_nvt/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
input_cpt_path: step12_mdrun_nvt/output_cpt_file
out: [output_tpr_file]
step15_mdrun_npt:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step14_grompp_npt/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step16_energy_npt:
run: biobb_adapters/energy.cwl
in:
config: step16_energy_npt_config
input_energy_path: step15_mdrun_npt/output_edr_file
out: [output_xvg_file]
step17_grompp_md:
run: biobb_adapters/grompp.cwl
in:
config: step17_gppmd_config
input_gro_path: step15_mdrun_npt/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
input_cpt_path: step15_mdrun_npt/output_cpt_file
out: [output_tpr_file]
step18_mdrun_md:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step17_grompp_md/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step19_rmsfirst:
run: biobb_adapters/rms.cwl
in:
config: step19_rmsfirst_config
input_structure_path: step17_grompp_md/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step20_rmsexp:
run: biobb_adapters/rms.cwl
in:
config: step20_rmsexp_config
input_structure_path: step8_grompp_min/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step21_rgyr:
run: biobb_adapters/rgyr.cwl
in:
config: step21_rgyr_config
input_structure_path: step8_grompp_min/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step22_image:
run: biobb_adapters/gmximage.cwl
in:
config: step22_image_config
input_top_path: step17_grompp_md/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_traj_file]
step23_dry:
run: biobb_adapters/gmxtrjconvstr.cwl
in:
config: step23_dry_config
input_structure_path: step18_mdrun_md/output_gro_file
input_top_path: step17_grompp_md/output_tpr_file
out: [output_str_file]
# <a id="mdinputs"></a>
# ### Inputs:
#
# All inputs for the **BioExcel building blocks** are defined as *strings*. Not all the steps in this particular example need **external inputs**; some of them simply work using as input an output (or outputs) from **previous steps** (e.g. step2_fixsidechain). All the steps that do need input receive a **JSON**-formatted input (of type string) with the **properties parameters** of the **building block** (config). Apart from that, some of the **building blocks** in this example receive two different input parameters: the **properties** (e.g. *step1_pdb_config*) and the **name of the output file** to be written (e.g. *step1_pdb_name*). This is particularly useful to identify the files generated by different steps of the **workflow**. Besides, in cases where the same **building block** is used more than once, keeping the **default value** for the **output files** would cause the **overwriting** of the results generated by previous steps (e.g. the energy calculation steps).
#
# All these inputs will be filled with values from the **separate YAML input file**.
inputs:
step1_pdb_name: string
step1_pdb_config: string
step4_editconf_config: string
step6_gppion_config: string
step7_genion_config: string
step8_gppmin_config: string
step10_energy_min_config: string
step10_energy_min_name: string
step11_gppnvt_config: string
step13_energy_nvt_config: string
step13_energy_nvt_name: string
step14_gppnpt_config: string
step16_energy_npt_config: string
step16_energy_npt_name: string
step17_gppmd_config: string
step19_rmsfirst_config: string
step19_rmsfirst_name: string
step20_rmsexp_config: string
step20_rmsexp_name: string
step21_rgyr_config: string
step22_image_config: string
step23_dry_config: string
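# With 22 string inputs, writing the accompanying YAML job file by hand is error-prone; a skeleton can be generated from the inputs list itself. The sketch below is a hypothetical stdlib-only helper (not part of biobb) that emits one `name: value` line per input, ready to be filled in:

```python
# Input names copied from the workflow's inputs section above.
input_names = [
    "step1_pdb_name", "step1_pdb_config", "step4_editconf_config",
    "step6_gppion_config", "step7_genion_config", "step8_gppmin_config",
    "step10_energy_min_config", "step10_energy_min_name",
    "step11_gppnvt_config", "step13_energy_nvt_config", "step13_energy_nvt_name",
    "step14_gppnpt_config", "step16_energy_npt_config", "step16_energy_npt_name",
    "step17_gppmd_config", "step19_rmsfirst_config", "step19_rmsfirst_name",
    "step20_rmsexp_config", "step20_rmsexp_name", "step21_rgyr_config",
    "step22_image_config", "step23_dry_config",
]

def job_skeleton(names, placeholder="'{}'"):
    """Render a YAML job-file skeleton: one `name: value` line per input,
    with a placeholder value to be replaced by the real configuration."""
    return "\n".join(f"{name}: {placeholder}" for name in names)

skeleton = job_skeleton(input_names)
print(skeleton.splitlines()[0])  # step1_pdb_name: '{}'
```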
# <a id="mdoutputs"></a>
# ### Outputs:
#
# The **outputs section** contains the set of **final outputs** from the **workflow**. In this case, **outputs** from **different steps** of the **workflow** are considered **final outputs**:
#
# * **Trajectories**:
# * **trr**: Raw trajectory from the *free* simulation step.
# * **trr_imaged_dry**: Post-processed trajectory, dehydrated, imaged (rotations and translations removed) and centered.
# * **Structures**:
# * **gro**: Raw structure from the *free* simulation step.
# * **gro_dry**: Resulting protein structure taken from the post-processed trajectory, to be used as a topology, usually for visualization purposes.
# * **Topologies**:
# * **tpr**: GROMACS portable binary run input file, containing the starting structure of the simulation, the molecular topology and all the simulation parameters.
# * **top**: GROMACS topology file, containing the molecular topology in an ASCII readable format.
# * **System Setup Observables**:
# * **xvg_min**: Potential energy of the system during the minimization step.
# * **xvg_nvt**: Temperature of the system during the NVT equilibration step.
# * **xvg_npt**: Pressure and density of the system (box) during the NPT equilibration step.
# * **Simulation Analysis**:
# * **xvg_rmsfirst**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the first snapshot of the trajectory (equilibrated system).
# * **xvg_rmsexp**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the experimental structure (minimized system).
# * **xvg_rgyr**: Radius of Gyration (RGyr) of the molecule throughout the whole *free* simulation step.
# * **Checkpoint file**:
# * **cpt**: GROMACS portable checkpoint file, allowing the simulation to be restored (continued) from the last step of the setup process.
#
# Please note that the names of the **output files** are sometimes fixed by a **specific input** (e.g. step10_energy_min_name), whereas when no specific name is given as input, the **default value** is used (e.g. system.tpr). **Default values** can be found in the **CWL description** files for each **building block** (biobb_adapters).
outputs:
trr:
type: File
outputSource: step18_mdrun_md/output_trr_file
trr_imaged_dry:
type: File
outputSource: step22_image/output_traj_file
gro_dry:
type: File
outputSource: step23_dry/output_str_file
gro:
type: File
outputSource: step18_mdrun_md/output_gro_file
cpt:
type: File
outputSource: step18_mdrun_md/output_cpt_file
tpr:
type: File
outputSource: step17_grompp_md/output_tpr_file
top:
type: File
outputSource: step7_genion/output_top_zip_file
xvg_min:
type: File
outputSource: step10_energy_min/output_xvg_file
xvg_nvt:
type: File
outputSource: step13_energy_nvt/output_xvg_file
xvg_npt:
type: File
outputSource: step16_energy_npt/output_xvg_file
xvg_rmsfirst:
type: File
outputSource: step19_rmsfirst/output_xvg_file
xvg_rmsexp:
type: File
outputSource: step20_rmsexp/output_xvg_file
xvg_rgyr:
type: File
outputSource: step21_rgyr/output_xvg_file
# <a id="mdworkflow"></a>
# ### Complete workflow:
#
# The complete **CWL described workflow** to run a **Molecular Dynamics Setup** on a protein structure can be found in the next cell. The **representation of the workflow** using the **CWL Viewer** web service can be found here: XXXXXX. The **full workflow** is a combination of the **inputs**, **outputs** and **steps** revised in the previous cells.
# +
# Protein MD-Setup CWL workflow with BioExcel building blocks
# https://github.com/bioexcel/biobb_wf_md_setup
# #!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: Workflow
inputs:
step1_pdb_name: string
step1_pdb_config: string
step4_editconf_config: string
step6_gppion_config: string
step7_genion_config: string
step8_gppmin_config: string
step10_energy_min_config: string
step10_energy_min_name: string
step11_gppnvt_config: string
step13_energy_nvt_config: string
step13_energy_nvt_name: string
step14_gppnpt_config: string
step16_energy_npt_config: string
step16_energy_npt_name: string
step17_gppmd_config: string
step19_rmsfirst_config: string
step19_rmsfirst_name: string
step20_rmsexp_config: string
step20_rmsexp_name: string
step21_rgyr_config: string
step22_image_config: string
step23_dry_config: string
outputs:
trr:
type: File
outputSource: step18_mdrun_md/output_trr_file
trr_imaged_dry:
type: File
outputSource: step22_image/output_traj_file
gro_dry:
type: File
outputSource: step23_dry/output_str_file
gro:
type: File
outputSource: step18_mdrun_md/output_gro_file
cpt:
type: File
outputSource: step18_mdrun_md/output_cpt_file
tpr:
type: File
outputSource: step17_grompp_md/output_tpr_file
top:
type: File
outputSource: step7_genion/output_top_zip_file
xvg_min:
type: File
outputSource: step10_energy_min/output_xvg_file
xvg_nvt:
type: File
outputSource: step13_energy_nvt/output_xvg_file
xvg_npt:
type: File
outputSource: step16_energy_npt/output_xvg_file
xvg_rmsfirst:
type: File
outputSource: step19_rmsfirst/output_xvg_file
xvg_rmsexp:
type: File
outputSource: step20_rmsexp/output_xvg_file
xvg_rgyr:
type: File
outputSource: step21_rgyr/output_xvg_file
steps:
step1_pdb:
run: biobb_adapters/pdb.cwl
in:
output_pdb_path: step1_pdb_name
config: step1_pdb_config
out: [output_pdb_file]
step2_fixsidechain:
run: biobb_adapters/fix_side_chain.cwl
in:
input_pdb_path: step1_pdb/output_pdb_file
out: [output_pdb_file]
step3_pdb2gmx:
run: biobb_adapters/pdb2gmx.cwl
in:
input_pdb_path: step2_fixsidechain/output_pdb_file
out: [output_gro_file, output_top_zip_file]
step4_editconf:
run: biobb_adapters/editconf.cwl
in:
input_gro_path: step3_pdb2gmx/output_gro_file
out: [output_gro_file]
step5_solvate:
run: biobb_adapters/solvate.cwl
in:
input_solute_gro_path: step4_editconf/output_gro_file
input_top_zip_path: step3_pdb2gmx/output_top_zip_file
out: [output_gro_file, output_top_zip_file]
step6_grompp_genion:
run: biobb_adapters/grompp.cwl
in:
config: step6_gppion_config
input_gro_path: step5_solvate/output_gro_file
input_top_zip_path: step5_solvate/output_top_zip_file
out: [output_tpr_file]
step7_genion:
run: biobb_adapters/genion.cwl
in:
config: step7_genion_config
input_tpr_path: step6_grompp_genion/output_tpr_file
input_top_zip_path: step5_solvate/output_top_zip_file
out: [output_gro_file, output_top_zip_file]
step8_grompp_min:
run: biobb_adapters/grompp.cwl
in:
config: step8_gppmin_config
input_gro_path: step7_genion/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
out: [output_tpr_file]
step9_mdrun_min:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step8_grompp_min/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file]
step10_energy_min:
run: biobb_adapters/energy.cwl
in:
config: step10_energy_min_config
output_xvg_path: step10_energy_min_name
input_energy_path: step9_mdrun_min/output_edr_file
out: [output_xvg_file]
step11_grompp_nvt:
run: biobb_adapters/grompp.cwl
in:
config: step11_gppnvt_config
input_gro_path: step9_mdrun_min/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
out: [output_tpr_file]
step12_mdrun_nvt:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step11_grompp_nvt/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step13_energy_nvt:
run: biobb_adapters/energy.cwl
in:
config: step13_energy_nvt_config
output_xvg_path: step13_energy_nvt_name
input_energy_path: step12_mdrun_nvt/output_edr_file
out: [output_xvg_file]
step14_grompp_npt:
run: biobb_adapters/grompp.cwl
in:
config: step14_gppnpt_config
input_gro_path: step12_mdrun_nvt/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
input_cpt_path: step12_mdrun_nvt/output_cpt_file
out: [output_tpr_file]
step15_mdrun_npt:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step14_grompp_npt/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step16_energy_npt:
run: biobb_adapters/energy.cwl
in:
config: step16_energy_npt_config
output_xvg_path: step16_energy_npt_name
input_energy_path: step15_mdrun_npt/output_edr_file
out: [output_xvg_file]
step17_grompp_md:
run: biobb_adapters/grompp.cwl
in:
config: step17_gppmd_config
input_gro_path: step15_mdrun_npt/output_gro_file
input_top_zip_path: step7_genion/output_top_zip_file
input_cpt_path: step15_mdrun_npt/output_cpt_file
out: [output_tpr_file]
step18_mdrun_md:
run: biobb_adapters/mdrun.cwl
in:
input_tpr_path: step17_grompp_md/output_tpr_file
out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]
step19_rmsfirst:
run: biobb_adapters/rms.cwl
in:
config: step19_rmsfirst_config
output_xvg_path: step19_rmsfirst_name
input_structure_path: step17_grompp_md/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step20_rmsexp:
run: biobb_adapters/rms.cwl
in:
config: step20_rmsexp_config
output_xvg_path: step20_rmsexp_name
input_structure_path: step8_grompp_min/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step21_rgyr:
run: biobb_adapters/rgyr.cwl
in:
config: step21_rgyr_config
input_structure_path: step8_grompp_min/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_xvg_file]
step22_image:
run: biobb_adapters/gmximage.cwl
in:
config: step22_image_config
input_top_path: step17_grompp_md/output_tpr_file
input_traj_path: step18_mdrun_md/output_trr_file
out: [output_traj_file]
step23_dry:
run: biobb_adapters/gmxtrjconvstr.cwl
in:
config: step23_dry_config
input_structure_path: step18_mdrun_md/output_gro_file
input_top_path: step17_grompp_md/output_tpr_file
out: [output_str_file]
# -
# <a id="mdrun"></a>
# ### Input of the run:
#
# As previously stated, all **CWL workflows** are divided into **two files**: the **CWL description** and the **YAML** or **JSON** file containing **all workflow inputs**. The following cell presents the **YAML** file describing the **inputs of the run** for the **Protein Gromacs MD Setup** workflow.
#
# All the inputs were defined as *strings* in the **CWL workflow**. **Building block** inputs ending in "*_name*" contain a simple *string* with the desired file name, while inputs ending in "*_config*" contain the **properties parameters** as a *string* in **JSON format**. Please note that all double quotes in the **JSON format** must be escaped. The **properties parameters** were taken from the original **Protein Gromacs MD Setup** workflow [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup); please refer to it for information about the values used.
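# One way to build these escaped JSON property strings programmatically, rather than writing them by hand, is Python's `json` module (a small sketch; the property values below are taken from the step8_gppmin_config example in this tutorial):

```python
import json

# Properties for a minimization grompp step, as a plain Python dict:
props = {"mdp": {"type": "minimization", "nsteps": "5000", "emtol": "500"}}

# json.dumps produces a valid JSON string with double quotes:
config_string = json.dumps(props)

# When embedding that string inside a double-quoted context, the inner
# double quotes must be escaped, as noted above:
escaped = config_string.replace('"', '\\"')

print(config_string)
print(escaped)
```

In a YAML job file the single-quoted form (as used in the next cell) avoids the need for escaping altogether.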
# +
# Protein MD-Setup CWL workflow with BioExcel building blocks - Input YAML configuration file
# https://github.com/bioexcel/biobb_wf_md_setup
step1_pdb_name: 'tutorial.pdb'
step1_pdb_config: '{"pdb_code" : "1aki"}'
step4_editconf_config: '{"box_type": "cubic","distance_to_molecule": 1.0}'
step6_gppion_config: '{"mdp": {"type":"minimization"}}'
step7_genion_config: '{"neutral": "True"}'
step8_gppmin_config: '{"mdp": {"type":"minimization", "nsteps":"5000", "emtol":"500"}}'
step10_energy_min_config: '{"terms": ["Potential"]}'
step10_energy_min_name: 'energy_min.xvg'
step11_gppnvt_config: '{"mdp": {"type":"nvt", "nsteps":"5000", "dt":0.002, "define":"-DPOSRES"}}'
step13_energy_nvt_config: '{"terms": ["Temperature"]}'
step13_energy_nvt_name: 'energy_nvt.xvg'
step14_gppnpt_config: '{"mdp": {"type":"npt", "nsteps":"5000"}}'
step16_energy_npt_config: '{"terms": ["Pressure","Density"]}'
step16_energy_npt_name: 'energy_npt.xvg'
step17_gppmd_config: '{"mdp": {"type":"free", "nsteps":"50000"}}'
step19_rmsfirst_config: '{"selection": "Backbone"}'
step19_rmsfirst_name: 'rmsd_first.xvg'
step20_rmsexp_config: '{"selection": "Backbone"}'
step20_rmsexp_name: 'rmsd_exp.xvg'
step21_rgyr_config: '{"selection": "Backbone"}'
step22_image_config: '{"center_selection":"Protein","output_selection":"Protein","pbc":"mol"}'
step23_dry_config: '{"selection": "Protein"}'
# -
# <a id="mdcwlrun"></a>
# ### Running the CWL workflow:
#
# The final step of the process is **running the workflow described in CWL**. For that, the complete **workflow description** should be written to a file (e.g. BioExcel-CWL-MDSetup.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-MDSetup-job.yml) and finally both files should be passed to the **CWL reference implementation executor** (cwltool).
#
# As in the previous example, it is important to note that in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed were downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**.
#
# It is worth noting that, since this workflow uses several **BioExcel building block modules** (biobb_io, biobb_model, biobb_md and biobb_analysis), the **Docker container** for each module will be downloaded the first time it is launched. This process **could take some time** (and **disk space**). Once all the **Docker containers** are downloaded and integrated in the system, the **workflow** should take around 1 hour (depending on the machine used).
#
# The **command line** is shown in the cell below:
# Run CWL workflow with CWL tool description reference implementation (cwltool).
cwltool BioExcel-CWL-MDSetup.cwl BioExcel-CWL-MDSetup-job.yml
# ***
# <a id="questions"></a>
#
# ## Questions & Comments
#
# Questions, issues, suggestions and comments are really welcome!
#
# * GitHub issues:
# * [https://github.com/bioexcel/biobb](https://github.com/bioexcel/biobb)
#
# * BioExcel forum:
# * [https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library](https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library)
#
| biobb_wf_cwl_tutorial/notebooks/biobb_CWL_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Consolidated imports (duplicates and unused modules removed); only the
# packages actually used by the graph-construction code below are kept.
import glob
import os
import pickle

import numpy as np
import pandas as pd
import scipy.sparse as sps
import scipy.sparse.csgraph  # ensure the csgraph submodule is loaded for sps.csgraph
import torch
import torch_geometric
from sklearn.preprocessing import LabelEncoder
from sklearn.utils.class_weight import compute_class_weight
from torch_cluster import knn_graph, radius_graph
from torch_geometric.data import Data, InMemoryDataset
from torch_geometric.utils import subgraph

os.environ['CUDA_VISIBLE_DEVICES'] = "0"
# +
class MyOwnDataset(InMemoryDataset):
def __init__(self, root=None, transform=None, pre_transform=None):
super(MyOwnDataset, self).__init__(root, transform, pre_transform)
        self.data, self.slices = None, None  # torch.load(self.processed_paths[0])
@property
def raw_file_names(self):
pass
@property
def processed_file_names(self):
pass
def download(self):
# Download to `self.raw_dir`.
pass
def process(self):
# Read data into huge `Data` list.
data_list = extract_graphs()
if self.pre_filter is not None:
data_list = [data for data in data_list if self.pre_filter(data)]
if self.pre_transform is not None:
data_list = [self.pre_transform(data) for data in data_list]
data, slices = self.collate(data_list)
torch.save((data, slices), self.processed_paths[0])
def get_graph_datasets(embedding_dir,k=8,radius=0,build_connected_components=False):
embeddings={os.path.basename(f).split('.')[0]: torch.load(f) for f in glob.glob("{}/*.pkl".format(embedding_dir))}
embeddings=dict(embeddings=np.vstack([embeddings[k]['embeddings'] for k in embeddings]),
patch_info=pd.concat([embeddings[k]['patch_info'] for k in embeddings]))
df=embeddings['patch_info'].iloc[:,2:].reset_index()
z=pd.DataFrame(embeddings['embeddings']).loc[df.index]
embeddings['patch_info']=df
le=LabelEncoder()
cols=df['annotation'].value_counts().index.tolist()
cols=np.array(cols)
le=le.fit(cols)
df['y_true']=le.transform(cols[df[cols].values.argmax(1)])
    weights=compute_class_weight(class_weight='balanced',classes=np.array(sorted(df['y_true'].unique())),y=df['y_true'].values)
def get_dataset(slide,k=8,radius=0,build_connected_components=False):
xy=embeddings['patch_info'][embeddings['patch_info']['ID']==slide][['x','y']]
xy=torch.tensor(xy.values).float().cuda()
X=z[embeddings['patch_info']['ID'].values==slide]
X=torch.tensor(X.values)
y=torch.tensor(df.loc[embeddings['patch_info']['ID'].values==slide,'y_true'].values)
if not radius:
G=knn_graph(xy,k=k)
else:
G=radius_graph(xy, r=radius*np.sqrt(2), batch=None, loop=True)
G=G.detach().cpu()
G=torch_geometric.utils.add_remaining_self_loops(G)[0]
xy=xy.detach().cpu()
datasets=[]
if build_connected_components:
edges=G.detach().cpu().numpy().astype(int)
            n_components,components=sps.csgraph.connected_components(sps.coo_matrix((np.ones_like(edges[0]),(edges[0],edges[1]))))
components=torch.LongTensor(components)
for i in range(n_components):
G_new=subgraph(components==i,G,relabel_nodes=True)[0]
xy_new=xy[components==i]
X_new=X[components==i]
y_new=y[components==i]
np.random.seed(42)
idx=np.arange(X_new.shape[0])
idx2=np.arange(X_new.shape[0])
np.random.shuffle(idx)
train_idx,val_idx,test_idx=torch.tensor(np.isin(idx2,idx[:int(0.8*len(idx))])),torch.tensor(np.isin(idx2,idx[int(0.8*len(idx)):int(0.9*len(idx))])),torch.tensor(np.isin(idx2,idx[int(0.9*len(idx)):]))
dataset=Data(x=X_new, edge_index=G_new, edge_attr=None, y=y_new, pos=xy_new)
dataset.train_mask=train_idx
dataset.val_mask=val_idx
dataset.test_mask=test_idx
datasets.append(dataset)
components=components.numpy()
else:
components=np.ones(X.shape[0])
np.random.seed(42)
idx=np.arange(X.shape[0])
idx2=np.arange(X.shape[0])
np.random.shuffle(idx)
train_idx,val_idx,test_idx=torch.tensor(np.isin(idx2,idx[:int(0.8*len(idx))])),torch.tensor(np.isin(idx2,idx[int(0.8*len(idx)):int(0.9*len(idx))])),torch.tensor(np.isin(idx2,idx[int(0.9*len(idx)):]))
dataset=Data(x=X, edge_index=G, edge_attr=None, y=y, pos=xy)
dataset.train_mask=train_idx
dataset.val_mask=val_idx
dataset.test_mask=test_idx
datasets.append(dataset)
return datasets,components
def extract_graphs(df,k=8,radius=0,build_connected_components=False):
graphs=[]
if build_connected_components: df['component']=-1
for slide in df['ID'].unique():
if df.loc[df['ID']==slide,'y_true'].sum():
G,components=get_dataset(slide,k,radius,build_connected_components)
graphs.extend(G)
if build_connected_components: df.loc[df['ID']==slide,"component"]=components
return graphs,df
graph_dataset,df=extract_graphs(df,k,radius,build_connected_components)
return dict(df=df,weight=weights,graph_dataset=graph_dataset)
def graph_extraction(embedding_dir,save_file='graph_dataset_test.pkl',k=8,radius=0,build_connected_components=False):
graph_datasets=get_graph_datasets(embedding_dir,k,radius,build_connected_components)
pickle.dump(graph_datasets,open(save_file,'wb'))
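# The 80/10/10 boolean-mask split used inside `get_dataset` above can be sketched in isolation with plain NumPy (fixed seed as in the code; `n` is an arbitrary illustrative node count):

```python
import numpy as np

np.random.seed(42)
n = 10  # number of nodes in the (sub)graph
idx = np.arange(n)
idx2 = np.arange(n)
np.random.shuffle(idx)  # shuffle a copy of the node indices

# First 80% of the shuffled order -> train, next 10% -> val, last 10% -> test
train_mask = np.isin(idx2, idx[:int(0.8 * n)])
val_mask = np.isin(idx2, idx[int(0.8 * n):int(0.9 * n)])
test_mask = np.isin(idx2, idx[int(0.9 * n):])

# Every node lands in exactly one split
assert train_mask.sum() == 8 and val_mask.sum() == 1 and test_mask.sum() == 1
assert int((train_mask | val_mask | test_mask).sum()) == n
```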
# +
# use pathflowai or https://github.com/jlevy44/PathPretrain to pretrain / extract image features first
# -
graph_datasets={}
for k in ['your_data_set']:
embedding_dir=f"{k}/imagenet_embeddings"
out_dir=f"{k}/graph_datasets"
os.makedirs(out_dir,exist_ok=True)
graph_extraction(embedding_dir,save_file=f'{out_dir}/imagenet_graph_data.pkl')
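# The saved file can later be reloaded with `pickle` for training. A minimal round-trip sketch is below; the dict here is a stand-in mirroring the keys returned by `get_graph_datasets` (the real file holds a DataFrame, class weights and torch_geometric `Data` objects), and the file name is illustrative:

```python
import os
import pickle
import tempfile

# Stand-in payload with the same keys as get_graph_datasets' return value
payload = dict(df=None, weight=[1.0, 2.0], graph_dataset=[])

path = os.path.join(tempfile.mkdtemp(), 'imagenet_graph_data.pkl')
with open(path, 'wb') as f:
    pickle.dump(payload, f)

with open(path, 'rb') as f:
    restored = pickle.load(f)

assert restored['weight'] == [1.0, 2.0]
assert restored['graph_dataset'] == []
```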
| notebooks/1_create_graph_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="sIQx9WNCX2uu"
import os
# Find the latest version of spark 3.0 from http://www.apache.org/dist/spark/ and enter as the spark version
# For example:
# spark_version = 'spark-3.0.3'
spark_version = 'spark-3.<enter version>'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
# !apt-get update
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q http://www.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
# !tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
# !pip install -q findspark
# Set Environment Variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
# + colab={} colab_type="code" id="5ZDEtnSDX40y"
# Start Spark session
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("tokenizing").getOrCreate()
# + colab={} colab_type="code" id="S69zYZBOX4tY"
from pyspark.ml.feature import RegexTokenizer, Tokenizer
from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" id="gLervTVLX4NN" outputId="d91d30de-be15-426c-9d8a-49af1001a443"
# Read in data from S3 Buckets
from pyspark import SparkFiles
url ="https://s3.amazonaws.com/dataviz-curriculum/day_2/data.csv"
spark.sparkContext.addFile(url)
df = spark.read.csv(SparkFiles.get("data.csv"), sep=",", header=True)
# Show DataFrame
df.show()
# + colab={} colab_type="code" id="2HyFAR7MX4Gt"
# Tokenize DataFrame
tokened = Tokenizer(inputCol="Poem", outputCol="words")
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" id="gkfj1Or3X4BP" outputId="1d7447ed-4cd7-4e31-e614-5964cb43a90a"
# Transform DataFrame
tokenized = tokened.transform(df)
tokenized.show()
# + colab={} colab_type="code" id="pSAxlypmX36z"
# Create a Function to count vowels
def vowel_counter(words):
vowel_count = 0
for word in words:
for letter in word:
if letter in ('a', 'e', 'i', 'o', 'u'):
vowel_count += 1
return vowel_count
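# Before registering the function as a Spark UDF, the counting logic can be sanity-checked locally. An equivalent set-based sketch (the local helper name is illustrative):

```python
def count_vowels_local(words):
    # Set membership gives O(1) lookups per letter
    vowels = set("aeiou")
    return sum(1 for word in words for letter in word if letter in vowels)

assert count_vowels_local(["hello", "world"]) == 3  # e, o in "hello"; o in "world"
assert count_vowels_local([]) == 0
```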
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="sKttbJoRX3zx" outputId="8fe8bb02-e087-499d-a8a7-ad6ba3c32b89"
# Store a user defined function
count_vowels = udf(vowel_counter, IntegerType())
count_vowels
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" id="VB0UEk_KX3mw" outputId="05810289-f5a6-4699-eed2-0f1903cd662e"
# Create new DataFrame with the udf
tokenized.select("Poem", "words")\
.withColumn("vowels", count_vowels(col("words"))).show(truncate=False)
# + colab={} colab_type="code" id="ngmtFiUMYaMU"
| 01-Lesson-Plans/22-Big-Data/2/Activities/03-Stu_Pyspark_NLP_Tokens/Solved/tokenizing_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from predictiveness_curve import plot_predictiveness_curve
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# +
data = load_breast_cancer()
y = data.target
X = data.data
training_X, test_X, training_y, test_y = train_test_split(
X, y, test_size=0.5, random_state=42)
# -
clsf = RandomForestClassifier(n_estimators=100, random_state=42)
clsf.fit(training_X, training_y)
probabilities = clsf.predict_proba(test_X)[:, 1]
fig = plot_predictiveness_curve(risks=probabilities, labels=test_y)
fig = plot_predictiveness_curve(risks=probabilities, labels=test_y, kind='EF',
top_ylabel='Probability', bottom_ylabel='Enrichment Factor')
| notebooks/example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
# ## Summarizing Documents With *ktrain*
#
# *ktrain* includes the ability to summarize text based on a pretrained [BART](https://arxiv.org/abs/1910.13461) model from the `transformers` library.
#
# To perform summarization, first create a `TransformerSummarizer` instance as follows. (Note that this feature requires PyTorch to be installed on your system.)
from ktrain.text.summarization import TransformerSummarizer
ts = TransformerSummarizer()
# Next, let's create a long document about planetary probes. The text is taken from a post in the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
sample_doc = """Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for <NAME> (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by <NAME> in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada (<EMAIL>). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator <NAME>
(<EMAIL>) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016."""
# Now, let's use our `TransformerSummarizer` instance to summarize the long document.
ts.summarize(sample_doc)
| examples/text/text_summarization_with_bart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import shutil
import re
import os
import numpy as np
import python_speech_features
import scipy.io.wavfile as wav
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D, Input, AveragePooling2D
import datetime
# -
def placeUtteranceToFolder(wavPath, category, savePath):
catePath = savePath+"/"+category
if (os.path.exists(wavPath)!=True):
raise ValueError("wavPath doesn't exist")
if (os.path.exists(savePath)!=True):
print("Creating dir: " + savePath)
os.makedirs(savePath)
if (os.path.exists(catePath)!=True):
print("Creating dir: " + catePath)
os.makedirs(catePath)
filename = os.path.basename(wavPath)
shutil.copy2(wavPath, catePath) # complete target filename given
print("{} is put into path: {}".format(filename, catePath))
# +
def readFileAndAggregateUtterance(filePath, wavDir, relativeSavePath):
categories = ['Neutral', 'Anger', 'Frustration', 'Sadness', 'Happiness']
    wavDirPath = wavDir  # use the directory argument rather than a hardcoded absolute path
with open(filePath) as f:
wav_basename = ""
count = 0
cateStats = {'Neutral':0, 'Anger':0, 'Frustration':0, 'Sadness':0, 'Happiness':0}
for line in f:
if (line[0]=="A"):
if (wav_basename != ""):
cateStats['Anger'] += cateStats['Frustration']
cateStats['Frustration'] = 0
# print("count", count)
# determine if estimators have a common estimation
for category in categories:
# print("category", category, "cateStats[category]", cateStats[category])
# print("cateStats[category] / count", cateStats[category] / count)
if (cateStats[category] / count > 0.5):
wavFolder = re.search('(.*)_[^_]*', wav_basename).group(1)
wavFilePath = "{}/{}/{}.wav".format(wavDirPath, wavFolder, wav_basename)
placeUtteranceToFolder(wavFilePath, category, relativeSavePath)
# re-initialize
wav_basename = ""
count = 0
cateStats = {'Neutral':0, 'Anger':0, 'Frustration':0, 'Sadness':0, 'Happiness':0}
continue
if (wav_basename == ""):
regexp = re.compile(r'\[.*\].*(Ses[\d\w]*).*\[.*\]')
result = regexp.search(line)
if result:
wav_basename = result.group(1)
# print(wav_basename)
# print(line)
else:
continue
else: # line with categories
count += 1
for category in categories:
if (re.search(category, line)):
cateStats[category]+=1
# print("category {} is counted as {}".format(category, cateStats[category]))
# print("category: ", category, line)
# +
wavDir = "/Users/Chen/百度云同步盘/Startup/Clevo/数据/IEMOCAP_full_release/EmotionRecognization/wav"
emoTagsDir = "/Users/Chen/百度云同步盘/Startup/Clevo/数据/IEMOCAP_full_release/EmotionRecognization/emotionTags"
for emoFile in os.listdir(emoTagsDir):
emoFilePath = "{}/{}".format(emoTagsDir, emoFile)
readFileAndAggregateUtterance(emoFilePath, wavDir, "Preproc")
# +
wavPath = "/Users/Chen/Developer/Repository/Clevo-Emotion-Detection-Service/serverless/dev/CNNClassifier2/Preproc/Anger/Ses01F_impro01_M002.wav"
(rate,sig) = wav.read(wavPath)
print("rate", rate)
# - Mel Frequency Cepstral Coefficients
mfcc_feat = python_speech_features.mfcc(sig,rate)
d_mfcc_feat = python_speech_features.delta(mfcc_feat, 2)
# - Filterbank Energies
fbank_feat = python_speech_features.fbank(sig,rate)[0]
# - Log Filterbank Energies
logfbank_feat = python_speech_features.logfbank(sig,rate)
# - Spectral Subband Centroids
ssc_feat = python_speech_features.ssc(sig,rate)
print("mfcc_feat shape: ", mfcc_feat.shape)
print("fbank_feat shape: ", fbank_feat.shape)
print("logfbank_feat shape: ", logfbank_feat.shape)
print("ssc_feat shape: ", ssc_feat.shape)
# -
logfbank_feat[0]
# +
a = [[1,2,3], [4,5,6]]
b = np.array(a)
print(b.shape)
c = np.concatenate((b,b),axis=1)
print(c)
# c[0,4] = b[0,0]
# -
def getFeature(wavPath):
(rate,sig) = wav.read(wavPath)
# features = []
logfbank_feat = python_speech_features.logfbank(sig,rate)
delta_feat = python_speech_features.delta(logfbank_feat, 2)
features = np.concatenate((logfbank_feat, delta_feat), axis = 1)
# print(features.shape)
return features
getFeature(wavPath)
# +
x = np.array([[1,2,3], [4,5,6]])
np.savetxt("IEMOCAP_X", x)
y = np.loadtxt("IEMOCAP_X")
print(y)
# -
x = np.array([1,2,3])
y = np.array([4,5,6])
np.append(x, y)
x
y
def conv2D_AvePool(wavPath, kernalSize):
input = getFeature(wavPath)
input_tensor = input.reshape((1,input.shape[0], input.shape[1] ,1))
# build model
inputs = Input(shape=(input.shape[0], input.shape[1], 1))
x = Conv2D(kernalSize, (25, 52))(inputs)
output = AveragePooling2D((input.shape[0]-24, 1))(x)
model = Model(inputs=inputs, outputs=output)
result = model.predict(input_tensor)[0,0,0,:]
return result
# +
def calculate_XY(wavDirBase, categories, kernalSize):
counter = 0
#waveArr = list(os.walk(wavDirBase))
    waveArr0 = [os.listdir(os.path.join(wavDirBase, x)) for x in os.listdir(wavDirBase) if os.path.isdir(os.path.join(wavDirBase, x))]
fileCount = sum([len(list1) for list1 in waveArr0])
# waveArr = [item for sublist in waveArr0 for item in sublist]
x_all_list = []
y_all_list = []
print("Start processing at {}".format(datetime.datetime.utcnow()))
for category in categories:
waveArr = os.listdir(os.path.join(wavDirBase, category))
print("len(waveArr)", len(waveArr))
for wavFile in waveArr:
wavPath = "{}/{}/{}".format(wavDirBase, category, wavFile)
result = conv2D_AvePool(wavPath, kernalSize)
# input = getFeature(wavPath)
# input_tensor = input.reshape((1,input.shape[0], input.shape[1] ,1))
# inputs = Input(shape=(input.shape[0], input.shape[1], 1))
# x = Conv2D(kernalSize, (25, 52))(inputs)
# x = AveragePooling2D((input.shape[0]-24, 1))(x)
# model = Model(inputs=inputs, outputs=x)
# result = model.predict(input_tensor)
x_all_list.append(result)
y_all_list.append(categories.index(category))
counter += 1
if (counter % 100 == 0):
K.clear_session()
print("{} files have been processed at {}".format(counter, datetime.datetime.utcnow()))
# if (counter>=200):
# break;
# break
x_all = np.array(x_all_list)
y_all = np.array(y_all_list)
return x_all, y_all
# +
wavDirBase = "/Users/Chen/Developer/Repository/Clevo-Emotion-Detection-Service/serverless/dev/CNNClassifier2/Preproc/"
categories = ["Anger", "Happiness", "Neutral", "Sadness"]
x_all, y_all = calculate_XY(wavDirBase, categories, 32)
print(x_all.shape)
print(y_all.shape)
# +
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D, Input, AveragePooling2D
from keras import backend as K
import os
import numpy as np
import matplotlib.pyplot as plt
import scipy.io.wavfile as wav
# from python_speech_features import mfcc
# from python_speech_features import delta
# from python_speech_features import logfbank
from api import readFileAndAggregateUtterance, getFeature, calculate_XY, conv2D_AvePool
from keras.utils.np_utils import to_categorical
import python_speech_features
import datetime
import config
# model params
batch_size = config.arc1Config['batch_size']
categories = config.arc1Config['categories']
epochs = config.arc1Config['epochs']
kernalSize = config.arc1Config['kernalSize']
num_classes = config.arc1Config['num_classes']
def build_index_label(pred, label_list):
a = max([(v,i) for i,v in enumerate(pred)])
idx = a[1]
return label_list[idx]
lastModel = load_model('emotion_model.h5')
lastModel.load_weights('emotion_model_weights.h5')
newWavPath = "Preproc/Sadness/Ses03F_impro02_M017.wav"
result = conv2D_AvePool(newWavPath, kernalSize)
# print(result)
pred_result = lastModel.predict(np.reshape(result, (1,kernalSize)), batch_size)[0]
print(pred_result)
print("Prediction result: {}".format(build_index_label(pred_result, categories)))
# -
newWavPath = "Preproc/Sadness/Ses03F_impro02_M017.wav"
result = conv2D_AvePool(newWavPath, kernalSize)
print(result)
| serverless/dev/CNNClassifier/PreprocessingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="OyWWIHOnDIyJ" colab_type="code" colab={}
from zipfile import ZipFile
import numpy as np
import pickle
import pandas as pd
# + id="DyICX22fDLme" colab_type="code" outputId="165e7a93-a895-4641-c3d7-492a0be63a93" colab={"base_uri": "https://localhost:8080/", "height": 118}
# ! wget --user rijulganguly --password <PASSWORD> http://www.eecs.qmul.ac.uk/mmv/datasets/deap/data/data_preprocessed_python.zip
# + id="kxv4zESuDIyO" colab_type="code" colab={}
filename = "data_preprocessed_python.zip"
m = "data_preprocessed_python/s"
num = 1
# + id="70ObhMU7DIyS" colab_type="code" colab={}
dest_dir = "/content"
with ZipFile(filename) as zf:
zf.extractall(dest_dir)
# + id="BrSFXQhTDIyV" colab_type="code" outputId="04977cea-2f58-4dd6-ed99-b723d92ffb89" colab={"base_uri": "https://localhost:8080/", "height": 151}
# + id="hhXWyQ6NDIyY" colab_type="code" outputId="4aa0717b-54e1-4f98-ff6a-09aca08633ac" colab={"base_uri": "https://localhost:8080/", "height": 34}
# + id="6kPpS__6DIyc" colab_type="code" colab={}
# + id="idpFfiugDIye" colab_type="code" colab={}
x = pickle.load(open('data_preprocessed_python/s04.dat', 'rb'), encoding='iso-8859-1')
# + id="JhZvGrfmDIyg" colab_type="code" outputId="c595c58d-e609-4daf-9993-8e4a4781f4fb" colab={"base_uri": "https://localhost:8080/", "height": 34}
x['labels'][0][0]
# + id="gX1DcOb6DIyk" colab_type="code" colab={}
datas = {}
# + id="5D_vGeChDIym" colab_type="code" colab={}
for i in range(1,33):
if(i<10):
eID = "0" + str(i)
else:
eID = str(i)
fLoad = 'data_preprocessed_python/s' + eID + '.dat'
dat = pickle.load(open(fLoad, 'rb'), encoding='iso-8859-1')
datas[i] = dat
# + id="HxIm1pZ7IwCo" colab_type="code" colab={}
from scipy.stats import kurtosis
from scipy.stats import skew
# + id="WBnhAtF4NxHc" colab_type="code" colab={}
changed_data = {}
m_in = np.zeros((40,40,101))
cnt = 0
mj = 0
for i in range(1,33):
for j in range(40):
for k in range(40):
mj = 0
cnt = 0
for l in range(10):
m_in[j][k][cnt] = np.mean(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+1] = np.std(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+2] = np.min(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+3] = np.max(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+4] = np.median(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+5] = np.var(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+6] = np.ptp(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+7] = skew(datas[i]['data'][j][k][mj:mj+807])
m_in[j][k][cnt+8] = kurtosis(datas[i]['data'][j][k][mj:mj+807])
cnt += 9
mj += 807
if(mj > 8064):
mj = 8064
m_in[j][k][cnt] = np.mean(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+1] = np.std(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+2] = np.min(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+3] = np.max(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+4] = np.median(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+5] = np.var(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+6] = np.ptp(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+7] = skew(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+8] = kurtosis(datas[i]['data'][j][k][:8064])
m_in[j][k][cnt+9] = i
m_in[j][k][cnt+10] = j+1
changed_data[i] = m_in
m_in = np.zeros((40,40,101))
# + id="GMzujTtxU3Bb" colab_type="code" outputId="85cb7cfb-d8c2-4045-ef3c-86a0a72be0ae" colab={"base_uri": "https://localhost:8080/", "height": 34}
changed_data[32].transpose().shape
# + id="kBr7vrzbIX1n" colab_type="code" colab={}
import torch
import torch.nn as nn
import torch.functional as F
# + id="E6wHcWQYIcDh" colab_type="code" colab={}
class CNetValence(nn.Module):
def __init__(self):
super(CNetValence, self).__init__()
self.layer1 = nn.Sequential(nn.Conv2d(1,100, kernel_size=(3,3)),
nn.Tanh()
)
self.layer2 = nn.Sequential(nn.Conv2d(100,100, kernel_size=(3,3)),
nn.Tanh(),
nn.MaxPool2d(kernel_size=(2,2)),
nn.Dropout(0.25))
self.layer3 = nn.Sequential(nn.Linear(100*18*48, 128),
nn.Tanh(),
nn.Dropout(0.5))
self.final_layer = nn.Sequential(nn.Linear(128,3),
nn.Softplus())
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.view(40,-1)
out = self.layer3(out)
out = self.final_layer(out)
return out
# + id="UxgYNr6oJ7hz" colab_type="code" colab={}
model = CNetValence()
t = torch.from_numpy(changed_data[32].reshape((40,1,40,101))).float()
# + id="IbimUbYaDIyo" colab_type="code" colab={}
otpt = model(t)
# + id="odznt1u4DIys" colab_type="code" outputId="1a850da4-7f49-4df9-c7e1-be688e1d61a1" colab={"base_uri": "https://localhost:8080/", "height": 34}
otpt.shape
# + id="AskVT4uFDIyu" colab_type="code" colab={}
#v,ind = torch.max(otpt,1)
# + id="k-wQikr0DIyw" colab_type="code" outputId="6aca2142-f211-4845-b2a4-14d49bc3cf44" colab={"base_uri": "https://localhost:8080/", "height": 689}
otpt
# + id="DfJACGw0DIyy" colab_type="code" colab={}
changed_labels = {}
m_lab = np.zeros((40,))
for i in range(1,33):
for j in range(40):
k = datas[i]['labels'][j][0]
if(k>6):
m_lab[j] = 2
elif(k>4):
m_lab[j] = 1
else:
m_lab[j] = 0
changed_labels[i] = m_lab
m_lab = np.zeros((40,))
# + id="dGkda2DiAmdE" colab_type="code" outputId="c88bb692-dc5c-4048-8ee7-51328f19b042" colab={"base_uri": "https://localhost:8080/", "height": 34}
t_lab = torch.from_numpy(changed_labels[32])
t_lab.shape
# + id="uqIm0fIWAo_d" colab_type="code" colab={}
criterion = nn.CrossEntropyLoss()
#ind_c = oneHot(ind)
#ind_c.shape
# + id="b0_A4dHpTzaK" colab_type="code" colab={}
#l = criterion(otpt, t_lab.type(torch.LongTensor))
#l
# + id="F1kfTYwtT61-" colab_type="code" colab={}
def oneHot(a):
mt = np.zeros((40,3))
for i in range(40):
if(a[i] == 0):
mt[i][0] = 1
elif(a[i] == 1):
mt[i][1] = 1
else:
mt[i][2] = 1
return torch.from_numpy(mt)
# + id="SP5vPu1JU5gr" colab_type="code" colab={}
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.00001, momentum=0.9)
l_arr = []
# + id="K7mDWx0Kdl-m" colab_type="code" outputId="80c1a943-274e-43e8-86e7-fcd7d2a59120" colab={"base_uri": "https://localhost:8080/", "height": 67}
for epoch in range(250):
epoch_acc = 0
num_correct = 0
for i in range(1,31):
for j in range(1,31):
if(j==i):
continue
input = torch.from_numpy(changed_data[j].reshape((40,1,40,101))).float()
labels = torch.from_numpy(changed_labels[j])
labels.requires_grad=True
optimizer.zero_grad()
output = model(input)
v,ind = torch.max(output,1)
#ind_n = oneHot(ind)
#ind_n.requires_grad=True
loss = criterion(output, labels.type(torch.LongTensor))
#l_arr.append(loss)
#num_correct += torch.sum(labels.type(torch.LongTensor) == ind)
loss.backward()
optimizer.step()
input = torch.from_numpy(changed_data[i].reshape((40,1,40,101))).float()
labels = torch.from_numpy(changed_labels[i])
labels.requires_grad=False
output = model(input)
v,ind = torch.max(output,1)
loss = criterion(output, labels.type(torch.LongTensor))
num_correct += torch.sum(labels.type(torch.LongTensor) == ind)
l_arr.append(loss)
#print("WORKING")
epoch_acc = num_correct.float()/(30*40)
if(epoch%10 == 0):
print("EPOCH ", epoch)
print("ACCURACY ", epoch_acc)
print("LOSS", loss)
# + id="Sp9xlanxe8gX" colab_type="code" colab={}
#class DNNValence(nn.Module):
# + id="xMxowxLck7vX" colab_type="code" colab={}
from matplotlib import pyplot as plt
# %matplotlib inline
plt.plot(l_arr)
plt.show()
# + id="L48IfW7Wm0YJ" colab_type="code" outputId="bac51fec-1270-4ee5-b6de-e88a89b8f33c" colab={"base_uri": "https://localhost:8080/", "height": 34}
| Valence_Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# 
# + [markdown] nteract={"transient": {"deleting": false}}
# # Quickstart: Train and deploy a model in Azure Machine Learning in 10 minutes
#
# In this quickstart, learn how to get started with Azure Machine Learning. You'll train an image classification model using the [MNIST](https://docs.microsoft.com/azure/open-datasets/dataset-mnist) dataset.
#
# You'll learn how to:
#
# * Download a dataset and look at the data
# * Train an image classification model and log metrics using MLflow
# * Deploy the model to do real-time inference
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Import Data
#
# Before you train a model, you need to understand the data you're using to train it. In this section, learn how to:
#
# * Download the MNIST dataset
# * Display some sample images
#
# You'll use Azure Open Datasets to get the raw MNIST data files. [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/overview-what-are-open-datasets) are curated public datasets that you can use to add scenario-specific features to machine learning solutions for better models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
# +
import os
from azureml.opendatasets import MNIST
data_folder = os.path.join(os.getcwd(), "/tmp/qs_data")
os.makedirs(data_folder, exist_ok=True)
mnist_file_dataset = MNIST.get_file_dataset()
mnist_file_dataset.download(data_folder, overwrite=True)
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Take a look at the data
#
# Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them.
#
# Note this step requires a `load_data` function that's included in a `utils.py` file placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
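# A minimal sketch (an assumption — the shipped `utils.py` is authoritative) of how an IDX-format parser for these gzipped MNIST files might look:

```python
import gzip
import struct
import numpy as np

def load_data(filename, label=False):
    # MNIST IDX files start with a big-endian header, then the raw uint8 payload
    with gzip.open(filename, "rb") as gz:
        if label:
            _magic, n = struct.unpack(">II", gz.read(8))
            return np.frombuffer(gz.read(), dtype=np.uint8).reshape(n, 1)
        _magic, n, rows, cols = struct.unpack(">IIII", gz.read(16))
        return np.frombuffer(gz.read(), dtype=np.uint8).reshape(n, rows * cols)
```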
# +
from utils import load_data
import matplotlib.pyplot as plt
import numpy as np
import glob
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
X_train = (
load_data(
glob.glob(
os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True
)[0],
False,
)
/ 255.0
)
X_test = (
load_data(
glob.glob(
os.path.join(data_folder, "**/t10k-images-idx3-ubyte.gz"), recursive=True
)[0],
False,
)
/ 255.0
)
y_train = load_data(
glob.glob(
os.path.join(data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True
)[0],
True,
).reshape(-1)
y_test = load_data(
glob.glob(
os.path.join(data_folder, "**/t10k-labels-idx1-ubyte.gz"), recursive=True
)[0],
True,
).reshape(-1)
# now let's show some randomly chosen images from the training set.
count = 0
sample_size = 30
plt.figure(figsize=(16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
count = count + 1
plt.subplot(1, sample_size, count)
plt.axhline("")
plt.axvline("")
plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
plt.show()
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Train model and log metrics with MLflow
#
# You'll train the model using the code below. Note that you are using MLflow autologging to track metrics and log model artefacts.
#
# You'll be using the [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) classifier from the [SciKit Learn framework](https://scikit-learn.org/) to classify the data.
#
# **Note: The model training takes approximately 2 minutes to complete.**
# + gather={"logged": 1612966046970} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# create the model
import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression
from azureml.core import Workspace
# connect to your workspace
ws = Workspace.from_config()
# create experiment and start logging to a new run in the experiment
experiment_name = "azure-ml-in10-mins-tutorial"
# set up MLflow to track the metrics
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment(experiment_name)
mlflow.autolog()
# set up the Logistic regression model
reg = 0.5
clf = LogisticRegression(
C=1.0 / reg, solver="liblinear", multi_class="auto", random_state=42
)
# train the model
with mlflow.start_run() as run:
clf.fit(X_train, y_train)
# -
# ## View Experiment
# In the left-hand menu in Azure Machine Learning Studio, select __Experiments__ and then select your experiment (azure-ml-in10-mins-tutorial). An experiment is a grouping of many runs from a specified script or piece of code. If the name doesn't exist when you submit an experiment, a new experiment is created automatically. If you select your run, you will see various tabs containing metrics, logs, explanations, etc.
#
# ## Version control your models with the model registry
#
# You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. The code below registers and versions the model you trained above. Once you have executed the code cell below you will be able to see the model in the registry by selecting __Models__ in the left-hand menu in Azure Machine Learning Studio.
# + gather={"logged": 1612881042710} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# register the model
model_uri = "runs:/{}/model".format(run.info.run_id)
model = mlflow.register_model(model_uri, "sklearn_mnist_model")
# -
# ## Deploy the model for real-time inference
# In this section you learn how to deploy a model so that an application can consume (inference) the model over REST.
#
# ### Create deployment configuration
# The next code cell gets a _curated environment_, which specifies all the dependencies required to host the model (for example, packages like scikit-learn). Also, you create a _deployment configuration_, which specifies the amount of compute required to host the model. In this case, the compute will have 1 CPU core and 1 GB of memory.
# + gather={"logged": 1612881061728} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# create environment for the deploy
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.webservice import AciWebservice
# get a curated environment
env = Environment.get(
workspace=ws,
name="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu",
version=1
)
env.inferencing_stack_version='latest'
# create deployment config i.e. compute resources
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=1,
memory_gb=1,
tags={"data": "MNIST", "method": "sklearn"},
description="Predict MNIST with sklearn",
)
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Deploy model
#
# This next code cell deploys the model to Azure Container Instance (ACI).
#
# **Note: The deployment takes approximately 3 minutes to complete.**
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# %%time
import uuid
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core.model import Model
# get the registered model
model = Model(ws, "sklearn_mnist_model")
# create an inference config i.e. the scoring script and environment
inference_config = InferenceConfig(entry_script="score.py", environment=env)
# deploy the service
service_name = "sklearn-mnist-svc-" + str(uuid.uuid4())[:4]
service = Model.deploy(
workspace=ws,
name=service_name,
models=[model],
inference_config=inference_config,
deployment_config=aciconfig,
)
service.wait_for_deployment(show_output=True)
# -
# The [*scoring script*](score.py) file referenced in the code above can be found in the same folder as this notebook, and has two functions:
#
# 1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables
# 1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally format the input data, run a prediction, and output the predicted result.
#
# ### View Endpoint
# Once the model has been successfully deployed, you can view the endpoint by navigating to __Endpoints__ in the left-hand menu in Azure Machine Learning Studio. You will be able to see the state of the endpoint (healthy/unhealthy), logs, and consume (how applications can consume the model).
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Test the model service
#
# You can test the model by sending a raw HTTP request to test the web service.
# + gather={"logged": 1612881538381} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# send raw HTTP request to test the web service.
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test) - 1)
input_data = '{"data": [' + str(list(X_test[random_index])) + "]}"
headers = {"Content-Type": "application/json"}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
print("label:", y_test[random_index])
print("prediction:", resp.text)
# -
# ## Clean up resources
#
# If you're not going to continue to use this model, delete the Model service using:
# + gather={"logged": 1612881556520} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# if you want to keep workspace and only delete endpoint (it will incur cost while running)
service.delete()
# -
# If you want to control cost further, stop the compute instance by selecting the "Stop compute" button next to the **Compute** dropdown. Then start the compute instance again the next time you need it.
#
# ## Next Steps
#
# In this quickstart, you learned how to run machine learning code in Azure Machine Learning.
#
# Now that you have working code in a development environment, learn how to submit a **_job_** - ideally on a schedule or trigger (for example, arrival of new data).
#
# [**Learn how to get started with Azure ML Job Submission**](../quickstart-azureml-python-sdk/quickstart-azureml-python-sdk.ipynb)
| tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import math
import operator
import sklearn
# +
# Importing data
feature_names=['id',
'age',
'job',
'marital',
'education',
'default',
'balance',
'housing',
'loan',
'contact',
'day',
'month',
'duration',
'campaign',
'pdays',
'previous',
'poutcome',
'y']
data = pd.read_csv('trainingset.txt', delimiter=",", names=feature_names)
data = data[data.pdays != -1] # pdays == -1 means the client was never contacted before, so those rows fall outside the "how likely will the client subscribe *after* being contacted" prediction task
data = data.drop('id', axis='columns') # id is not needed to predict the value of y
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
data = data.apply(le.fit_transform)
data.head()
# -
# outlier check
firstQ_cont = data.quantile(0.25)
thirdQ_cont = data.quantile(0.75)
cont_result = thirdQ_cont - firstQ_cont
print(cont_result)
print(data.shape)
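# The quartiles above are printed but never applied; a sketch (hypothetical helper, using the common 1.5×IQR rule) of how they could be used to drop outlier rows:

```python
import pandas as pd

def remove_iqr_outliers(df, k=1.5):
    # Keep rows where every numeric column lies within [Q1 - k*IQR, Q3 + k*IQR]
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    mask = ~((df < q1 - k * iqr) | (df > q3 + k * iqr)).any(axis=1)
    return df[mask]
```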
# shows how uneven our dataset is
data['y'].value_counts()
# +
# Upsampling TYPE B: duplicate minority-class rows (resampling with replacement repeats existing rows rather than creating synthetic data)
from sklearn.utils import resample
data_majority = data[data.y==0]
data_minority = data[data.y==1]
data_minority_upsampled = resample(data_minority,
replace=True, # sample with replacement
n_samples=2261, # to match majority class
random_state=123) # reproducible results
data_upsampled = pd.concat([data_majority, data_minority_upsampled])
data_upsampled.y.value_counts()
# +
# Downsampling TYPE A: delete majority-class rows so the class counts become equal
data_majority = data_upsampled[data_upsampled.y==0]
data_minority = data_upsampled[data_upsampled.y==1]
data_majority_downsampled = resample(data_majority,
                                     replace=False,    # sample without replacement when downsampling
                                     n_samples=2262,   # to match the minority class
                                     random_state=123) # reproducible results
# Combine minority class with downsampled majority class
data_balanced = pd.concat([data_majority_downsampled, data_minority])
# Display new class counts
data_balanced.y.value_counts()
# +
dataset = data_balanced[[
'age',
'job',
'marital',
'education',
'default',
'balance',
'housing',
'loan',
'contact',
'day',
'month',
'duration',
'campaign',
'pdays',
'previous',
'poutcome']].copy()
target_name = data_balanced[['y']].copy()
# -
from sklearn import preprocessing
names = dataset.columns
# creating scaler object
scaler = preprocessing.StandardScaler()
scaled_dataset = scaler.fit_transform(dataset)
scaled_dataset = pd.DataFrame(scaled_dataset, columns=names)
from sklearn.model_selection import train_test_split
# Splitting Data into 80/20
X_train, X_test, y_train, y_test = train_test_split(scaled_dataset, target_name, test_size=0.2,random_state=42)
# +
from sklearn.neighbors import KNeighborsClassifier
#Create KNN Classifier
knn = KNeighborsClassifier(n_neighbors=5)
#Train the model using the training sets
knn.fit(X_train, y_train.values.ravel())
#Predict the response for test dataset
y_pred = knn.predict(X_test)
# -
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test,y_pred))
# +
testData = pd.read_csv('queries.txt', delimiter=',', header=None, names=feature_names)
testData = testData[testData.pdays != -1]
testData = testData.drop('id', axis='columns')
testData.head()
testData = testData.apply(le.fit_transform)  # caution: re-fitting the LabelEncoder on the test set can produce encodings inconsistent with the training data
testData.head()
# +
testDataset = testData[[
'age',
'job',
'marital',
'education',
'default',
'balance',
'housing',
'loan',
'contact',
'day',
'month',
'duration',
'campaign',
'pdays',
'previous',
'poutcome']].copy()
test_target_name = data[['y']].copy()
testDataset.head()
# -
a_test = testDataset
a_pred = knn.predict(a_test)
print("\n".join(("Type A" if a == 0 else "Type B") for a in a_pred))
np_array = np.asarray(["Type A" if a == 0 else "Type B" for a in a_pred])
# +
myfile = open('KNNfixes.txt', 'w')
count = 0
for index, item in enumerate(np_array):
count += 1
myfile.writelines("Test" + str(count) + "," + str(item)+ '\n')
myfile.close()
# -
| Classfiers/KNNwDatafixes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="VNfvc6aIWudA" colab_type="code" outputId="5145ce18-d5cd-4b08-e8e4-2b4a6c419bc1" executionInfo={"status": "ok", "timestamp": 1586392653485, "user_tz": 180, "elapsed": 4863, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhPi1sLE-8-tWMGTFE1pVyN5cAzhF-H3JdWk4o14Q=s64", "userId": "15553628256580200178"}} colab={"base_uri": "https://localhost:8080/", "height": 73}
from tensorflow.compat.v1.keras.applications.resnet50 import ResNet50
from tensorflow.compat.v1.keras.preprocessing import image
from tensorflow.compat.v1.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
model = ResNet50(weights='imagenet')
# + id="nJx6PjlZWudF" colab_type="code" outputId="24002b2b-b511-4463-e893-897faa482e1f" colab={"base_uri": "https://localhost:8080/", "height": 55} executionInfo={"status": "ok", "timestamp": 1586393040617, "user_tz": 180, "elapsed": 1028, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhPi1sLE-8-tWMGTFE1pVyN5cAzhF-H3JdWk4o14Q=s64", "userId": "15553628256580200178"}}
from io import BytesIO
import urllib
def loadImage(URL):
with urllib.request.urlopen(URL) as url:
img = image.load_img(BytesIO(url.read()), target_size=(224, 224))
return image.img_to_array(img)
img_path = 'https://a-static.mlcdn.com.br/618x463/violao-eletroacustico-flat-cutaway-aco-preto-sf-14-ceq-giannini/estrela10/119320/127e36123fdd664ddc678ee05ebd55d9.jpg'
x = loadImage(img_path)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# + id="1s_Uwe1EWudH" colab_type="code" outputId="589ba926-1128-422c-babf-6aeb95c24887" colab={"base_uri": "https://localhost:8080/", "height": 305} executionInfo={"status": "ok", "timestamp": 1586393013307, "user_tz": 180, "elapsed": 1099, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AO<KEY>", "userId": "15553628256580200178"}}
import matplotlib.pyplot as plt
# %matplotlib inline
plt.imshow(loadImage(img_path), cmap=plt.cm.binary)
# + id="9QUkZQp4WudJ" colab_type="code" outputId="8e04787c-7dad-4121-d76c-395801aee8c4" colab={}
model.summary()
# + id="ebF45q4kWudM" colab_type="code" colab={}
| Keras_app_ex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This Notebook is an adaptation of the notebook https://github.com/fastai/fastai/blob/master/courses/dl1/lesson1.ipynb to the Belgium Traffic Sign Classification Dataset (~5000 images): http://btsd.ethz.ch/shareddata/
# Put these at the top of every notebook, to get automatic reloading and inline plotting
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# Here we import the libraries we need.
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
# It's important that you have a working NVidia GPU set up. The programming framework used behind the scenes to work with NVidia GPUs is called CUDA. Therefore, you need to ensure the following line returns `True` before you proceed. If you have problems with this, please check the FAQ and ask for help on [the forums](http://forums.fast.ai).
torch.cuda.is_available()
# In addition, NVidia provides special accelerated functions for deep learning in a package called CuDNN. Although not strictly necessary, it will improve training performance significantly, and is included by default in all supported fastai configurations. Therefore, if the following does not return `True`, you may want to look into why.
torch.backends.cudnn.enabled
# # Classes visualization
# `PATH` is the path to your data - if you use the recommended setup approaches from the lesson, you won't need to change this. `sz` is the size that the images will be resized to in order to ensure that the training runs quickly.
PATH = "D:/Datasets/TrafficSigns/BelgiumTSC_Fastai/"
sz=299
# The loading and visualization code below are adapted from <NAME>'s blog post on [Traffic Sign Recognition with Tensorflow](https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6)
# +
import skimage.transform
import skimage.data
def load_data(data_dir):
# Get all subdirectories of data_dir. Each represents a label.
directories = [d for d in os.listdir(data_dir)
if os.path.isdir(os.path.join(data_dir, d))]
# Loop through the label directories and collect the data in
# two lists, labels and images.
labels = []
images = []
for d in directories:
label_dir = os.path.join(data_dir, d)
file_names = [os.path.join(label_dir, f)
for f in os.listdir(label_dir)
if f.endswith(".ppm")]
for f in file_names:
current_img = skimage.data.imread(f)
resized_img = skimage.transform.resize(current_img, (32, 32))
images.append(resized_img)
labels.append(int(d))
return images, labels
train_data_dir = PATH + 'train/'
test_data_dir = PATH + 'valid/'
images_test, labels_test = load_data(test_data_dir)
# +
def display_images_and_labels(images, labels):
"""Display the first image of each label."""
unique_labels = set(labels)
plt.figure(figsize=(15, 15))
i = 1
for label in unique_labels:
# Pick the first image for each label.
image = images[labels.index(label)]
plt.subplot(8, 8, i) # A grid of 8 rows x 8 columns
plt.axis('off')
plt.title("Label {0} ({1})".format(label, labels.count(label)))
i += 1
_ = plt.imshow(image)
plt.show()
display_images_and_labels(images_test, labels_test)
# -
# ## Define the model
# We're going to use a <b>pre-trained</b> model, that is, a model created by someone else to solve a different problem. Instead of building a model from scratch to solve a similar problem, we'll use a model trained on ImageNet (1.2 million images and 1000 classes) as a starting point. The model is a Convolutional Neural Network (CNN), a type of Neural Network that builds state-of-the-art models for computer vision. We'll be learning all about CNNs during this course.
#
# We will be using the <b>resnet50</b> model. resnet50 is a version of the model that won the 2015 ImageNet competition. Here is more info on [resnet models](https://github.com/KaimingHe/deep-residual-networks).
# +
# Uncomment the below if you need to reset your precomputed activations
# shutil.rmtree(f'{PATH}tmp', ignore_errors=True)
# -
arch=resnet50
bs=32
# ## Data augmentation
# If you try training for more epochs, you'll notice that we start to *overfit*, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through *data augmentation*. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.
#
# We can do this by passing `aug_tfms` (*augmentation transforms*) to `tfms_from_model`, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions `transforms_side_on`. We can also specify random zooming of images up to specified scale by adding the `max_zoom` parameter. Here, however, we pass `transforms_basic`, since traffic signs should not be flipped horizontally.
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_basic, max_zoom=1.3)
def get_augs():
data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=4)
x,_ = next(iter(data.aug_dl))
return data.trn_ds.denorm(x)[1]
ims = np.stack([get_augs() for i in range(6)])
print(ims.shape)
plots(ims, rows=2)
# Let's create a new `data` object that includes this augmentation in the transforms, as well as a ConvLearner object.
# Please note that we use a high dropout value, because we're fitting a large model into a small dataset
#
# IMPORTANT NOTE: In this work, the test set is used directly as validation set. In general, one should rather use a validation set different than the test set, so that hyper parameter tuning is done on the validation set, and only the final model is tested on the test set. However, in this case, since the code is mostly taken "out-of-the-box" from fast.ai lesson, and no hyper-parameter tuning has been performed, the results are still legitimate.
data = ImageClassifierData.from_paths(PATH, tfms=tfms,bs=bs)
# For now, we set precompute = True, which prevents the data augmentation from having an effect, but allows for faster training of the last layers
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.7)
# ## Choosing a learning rate
# The *learning rate* determines how quickly or how slowly you want to update the *weights* (or *parameters*). Learning rate is one of the most difficult parameters to set, because it significantly affects model performance.
#
# The method `learn.lr_find()` helps you find an optimal learning rate. It uses the technique developed in the 2015 paper [Cyclical Learning Rates for Training Neural Networks](http://arxiv.org/abs/1506.01186), where we simply keep increasing the learning rate from a very small value, until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
#
# We first create a new learner, since we want to know how to set the learning rate for a new (untrained) model.
lrf=learn.lr_find(start_lr=1e-5)
# Our `learn` object contains an attribute `sched` that contains our learning rate scheduler, and has some convenient plotting functionality including this one:
learn.sched.plot_lr()
# Note that in the previous plot an *iteration* is one iteration (or *minibatch*) of SGD. In one epoch there are
# (num_train_samples/batch_size) iterations of SGD.
#
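# As a quick sanity check of that relationship (the training-set size below is hypothetical; `bs=32` is the batch size set above):

```python
import math

num_train_samples = 4575   # hypothetical training-set size
bs = 32                    # batch size used above
iters_per_epoch = math.ceil(num_train_samples / bs)  # 4575/32 rounded up
print(iters_per_epoch)
```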
# We can see the plot of loss versus learning rate to see where our loss stops decreasing:
learn.sched.plot()
lr = 2e-2
# The loss is still clearly improving at lr=2e-2 (0.02), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time.
# ## Learning
# ### Without Data Augmentation (precompute=false)
# The [1cycle training policy](https://arxiv.org/abs/1803.09820) is used here, with parameters set according to the experiment 3 provided in Sylvain Gugger's notebook: https://github.com/sgugger/Deep-Learning/blob/master/Cyclical%20LR%20and%20momentums.ipynb
learn.fit(lr, 1, cycle_len=3, use_clr_beta=(20,10,0.95,0.85),wds=1e-4)
learn.sched.plot_lr()
learn.sched.plot_loss()
# ### Unfreeze all layers
# Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) `unfreeze()`.
learn.precompute=False
learn.unfreeze()
# Note that the other layers have *already* been trained to recognize imagenet photos (whereas our final layers were randomly initialized), so we want to be careful of not destroying the carefully tuned weights that are already there.
#
# Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers: the first few layers will be at lr/9, the middle layers at lr/3, and our FC layers we'll leave at lr as before. We refer to this as *differential learning rates*.
lrs=np.array([lr/9,lr/3,lr])
# ### Use Learning Rate Finder again
learn.lr_find(lrs/1000)
learn.sched.plot()
lr = 5e-2
lrs=np.array([lr/9,lr/3,lr])
# ### Train again
# The [1cycle training policy](https://arxiv.org/abs/1803.09820) is used here again, with parameters
learn.fit(lrs, 1, cycle_len=15, use_clr_beta=(20,10,0.95,0.85), wds=1e-4)
learn.sched.plot_lr()
# Note that what's being plotted above is the learning rate of the *final layers*. The learning rates of the earlier layers are fixed at the same multiples of the final layer rate as we initially requested (i.e. the first layers have 9x smaller, and the middle layers 3x smaller, learning rates, since we set `lrs=np.array([lr/9,lr/3,lr])`).
learn.sched.plot_loss()
learn.save('299_all_BTSC_994')
learn.load('299_all_BTSC_994')
# There is something else we can do with data augmentation: use it at *inference* time (also known as *test* time). Not surprisingly, this is known as *test time augmentation*, or just *TTA*.
#
# TTA simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them too (by default, it uses the original image along with 4 randomly augmented versions). It then takes the average prediction from these images, and uses that. To use TTA on the validation set, we can use the learner's `TTA()` method.
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy_np(probs, y)
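# The averaging step above can be illustrated with a numpy-only sketch (the log-probabilities below are fabricated, but shaped `(n_aug, n_samples, n_classes)` like the TTA output):

```python
import numpy as np

# Fabricated log-probabilities: the original image plus 2 augmented
# versions, for 2 samples and 3 classes.
toy_log_preds = np.log(np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],
    [[0.8, 0.1, 0.1], [0.1, 0.6, 0.3]],
]))
toy_probs = np.mean(np.exp(toy_log_preds), axis=0)  # average over augmentations
print(np.argmax(toy_probs, axis=1))
```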
# I generally see about a 10-20% reduction in error on this dataset when using TTA at this point, which is an amazing result for such a quick and easy technique!
# ## Analyzing results
# ### Confusion matrix
preds = np.argmax(probs, axis=1)
probs = probs[:,1]
# A common way to analyze the result of a classification model is to use a [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/). Scikit-learn has a convenient function we can use for this purpose:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
# We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependent variables with a larger number of categories).
plot_confusion_matrix(cm, data.classes)
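# For reference, a minimal example of the layout `confusion_matrix` returns (toy labels, not the traffic-sign results): rows are true classes, columns are predicted classes.

```python
from sklearn.metrics import confusion_matrix

y_true_toy = [0, 0, 1, 1, 1]
y_pred_toy = [0, 1, 1, 1, 0]
cm_toy = confusion_matrix(y_true_toy, y_pred_toy)
print(cm_toy)
```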
| fastai_BTSC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Bayesian Network Structure Learning in Pomegranate
#
# author: <NAME> <br>
# contact: <EMAIL>
#
# Learning the structure of Bayesian networks can be complicated for two main reasons: (1) difficulties in inferring causality and (2) the super-exponential number of directed edges that could exist in a dataset. The first issue presents itself when the structure learning algorithm considers only correlation or another measure of co-occurrence to determine if an edge should exist. It presents challenges which deserve a far more in-depth treatment unrelated to implementations in pomegranate, so instead this tutorial will focus on how pomegranate implements fast Bayesian network structure learning. It will also cover a new concept called the "constraint graph" which can be used to massively speed up structure search while also making causality assignment a bit more reasonable.
#
# ## Introduction to Bayesian Network Structure Learning
#
# Most methods for Bayesian network structure learning (BNSL) can be put into one of the following three categories:
#
# (1) Search and Score: The most intuitive method is that of 'search and score,' where one searches over the space of all possible directed acyclic graphs (DAGs) and identifies the one that minimizes some objective function. Typical objective functions attempt to balance the log probability of the data given the model (the likelihood) with the complexity of the model to encourage sparser models. A naive implementation of this search is super-exponential in time with the number of variables, and becomes infeasible when considering even less than a dozen variables. However, dynamic programming can efficiently remove the many repeated calculations and reduce this to be simply exponential in time. This allows exact BNSL to scale to ~25-30 variables. In addition, the A\* algorithm can be used to smartly search the space and reduce computational time even further by not even considering all possible networks.
#
# (2) Constraint learning: These methods typically involve calculating some measure of correlation or co-occurrence to identify an undirected backbone of edges that could exist, and then prune these edges systematically until a DAG is reached. A common method is to iterate over all triplets of variables to identify conditional independencies that specify both presence and direction of the edges. This algorithm is asymptotically faster (quadratic in time) than search-and-score, but it does not have a simple probabilistic interpretation.
#
# (3) Approximate algorithms: In many real world examples, one wishes to merge the interpretability of the search and score method with the attractiveness of the task finishing before the universe ends. To this end, several heuristics have been developed with different properties to yield good structures in a reasonable amount of time. These methods include the Chow-Liu tree building algorithm, the hill-climbing algorithm, and optimal reinsertion, though there are others.
#
# pomegranate currently implements a search-and-score method based on the minimum description length score which utilizes the dynamic programming and A\* algorithm (DP/A\*), a greedy algorithm based off of DP/A\*, and the Chow-Liu tree building algorithm, though there are plans to soon add other algorithms.
#
# ## Structure Learning in pomegranate
# ### Exact Learning
#
# Structure learning in pomegranate is done using the `from_samples` method. All you pass in is the samples, their associated weights (if not uniform), and the algorithm which you'd like to use, and it will learn the network for you using the dynamic programming implementation. Lets see a quick synthetic example to make sure that appropriate connections are found. Lets add connections between variables 1, 3, 6, and variables 0 and 2, and variables 4 and 5.
# +
# %pylab inline
# %load_ext memory_profiler
from pomegranate import BayesianNetwork
import seaborn, time
seaborn.set_style('whitegrid')
X = numpy.random.randint(2, size=(2000, 7))
X[:,3] = X[:,1]
X[:,6] = X[:,1]
X[:,0] = X[:,2]
X[:,4] = X[:,5]
model = BayesianNetwork.from_samples(X, algorithm='exact')
print model.structure
model.plot()
# -
# The structure attribute returns a tuple of tuples, where each inner tuple corresponds to that node in the graph (and the column of data learned on). The numbers in that inner tuple correspond to the parents of that node. The results from this structure are that node 3 has node 1 as a parent, that node 2 has node 0 as a parent, and so forth. It seems to faithfully recapture the underlying dependencies in the data.
#
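# The structure tuple is easy to unpack by hand. A small sketch (the tuple below is hypothetical, shaped like the one printed above):

```python
# Hypothetical structure: node 2 has parent 0, nodes 3 and 6 have
# parent 1, and node 4 has parent 5.
structure = ((), (), (0,), (1,), (5,), (), (1,))
edges = []
for child, parents in enumerate(structure):
    for parent in parents:
        edges.append((parent, child))
print(edges)
```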
# Now, two algorithms for performing search-and-score were mentioned, the traditional shortest path algorithm and the A\* algorithm. These both work by essentially turning the Bayesian network structure learning problem into a shortest path problem over an 'order graph.' This order graph is a lattice made up of layers of variable sets from the BNSL problem, with the root node having no variables, the leaf node having all variables, and layer `i` in the lattice having all subsets of variables of size `i`. Each path from the root to the leaf represents a certain topological sort of the variables, with the shortest path corresponding to the optimal topological sort and Bayesian network. Details can be found <a href="http://url.cs.qc.cuny.edu/publications/Yuan11learning.pdf">here</a>. The traditional shortest path algorithm calculates the values of all edges in the order lattice before finding the shortest path, while the A\* algorithm searches only a subset of the order lattice and begins searching immediately. Both methods yield optimal Bayesian networks.
#
# A major problem that arises in the traditional shortest path algorithm is that the size of the order graph grows exponentially with the number of variables, and can make tasks infeasible that have otherwise-reasonable computational times. While the A\* algorithm is faster computationally, another advantage is that it uses a much smaller amount of memory since it doesn't explore the full order graph, and so can be applied to larger problems.
#
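# To see why memory becomes the bottleneck, note that the order graph has one node per subset of the variables, i.e. 2^n nodes for n variables:

```python
# Number of nodes in the order graph for n variables is 2**n.
order_graph_sizes = [(n, 2 ** n) for n in (10, 20, 30)]
print(order_graph_sizes)
```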
# In order to see the differences between these algorithms in practice, let's turn to the task of learning a Bayesian network over the digits dataset. The digits dataset is comprised of over a thousand 8x8 pictures of handwritten digits. We binarize the values into 'on' or 'off' for simplicity, and try to learn dependencies between the pixels.
# +
from sklearn.datasets import load_digits
X, y = load_digits(10, True)
X = X > numpy.mean(X)
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.imshow(X[0].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(132)
plt.imshow(X[1].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(133)
plt.imshow(X[2].reshape(8, 8), interpolation='nearest')
plt.grid(False)
# +
X = X[:,:18]
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact-dp') # << BNSL done here!
t1 = time.time() - tic
p1 = model.log_probability(X).sum()
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
t2 = time.time() - tic
p2 = model.log_probability(X).sum()
print "Shortest Path"
print "Time (s): ", t1
print "P(D|M): ", p1
# %memit BayesianNetwork.from_samples(X, algorithm='exact-dp')
print
print "A* Search"
print "Time (s): ", t2
print "P(D|M): ", p2
# %memit BayesianNetwork.from_samples(X, algorithm='exact')
# -
# These results show that the A\* algorithm is both computationally faster and requires far less memory than the traditional algorithm, making it a better default for the 'exact' algorithm. The amount of memory used by the BNSL process is under 'increment', not 'peak memory', as 'peak memory' returns the total memory used by everything, while increment shows the difference in peak memory before and after the function has run.
#
# ### Approximate Learning: Greedy Search (pomegranate default)
#
# A natural heuristic when a non-greedy algorithm is too slow is to consider the greedy version. This simple implementation iteratively finds the best variable to add to the growing topological sort, allowing the new variable to draw only from variables already in the topological sort. This is the default in pomegranate because it has a nice balance between producing good (often optimal) graphs and having a small computational cost and memory footprint. However, there is no guarantee that this produces the globally optimal graph.
#
# Let's see how it performs on the same dataset as above.
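# The greedy idea itself fits in a few lines. A toy sketch (not pomegranate's implementation; the score function here is made up purely for illustration):

```python
def greedy_order(variables, score):
    # Repeatedly add the unplaced variable whose score, given only the
    # already-placed variables as candidate parents, is highest.
    order = []
    remaining = list(variables)
    while remaining:
        best = max(remaining, key=lambda v: score(v, tuple(order)))
        order.append(best)
        remaining.remove(best)
    return order

def toy_score(v, placed):
    # A variable scores well once its "natural parent" (v - 1) is
    # already placed; variable 0 has no parent.
    return (10 if v == 0 or v - 1 in placed else 0) - v

print(greedy_order([2, 0, 3, 1], toy_score))
```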
# +
tic = time.time()
model = BayesianNetwork.from_samples(X) # << Default BNSL setting
t = time.time() - tic
p = model.log_probability(X).sum()
print "Greedy"
print "Time (s): ", t
print "P(D|M): ", p
# %memit BayesianNetwork.from_samples(X)
# -
# ### Approximate Learning: Chow-Liu Trees
#
# However, there are even cases where the greedy heuristic is too slow, for example hundreds of variables. One of the first heuristics for BNSL is that of Chow-Liu trees, which learns the optimal tree from data. Essentially it calculates the mutual information between all pairs of variables and then finds the maximum spanning tree. A root node has to be input to turn the undirected edges based on mutual information into directed edges for the Bayesian network. The algorithm is $O(d^{2})$ and practically is extremely fast and memory efficient, though it produces structures with a worse $P(D|M)$.
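# The mutual-information step can be sketched in plain numpy (an illustration, not pomegranate's implementation): dependent variable pairs score much higher than independent ones.

```python
import numpy as np

def mutual_information(x, y):
    # Empirical mutual information between two binary variables (in nats)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_xy = np.mean((x == a) & (y == b))
            p_x = np.mean(x == a)
            p_y = np.mean(y == b)
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

rng = np.random.RandomState(0)
x = rng.randint(2, size=1000)
y = x.copy()                    # perfectly dependent on x
z = rng.randint(2, size=1000)   # independent of x
print(mutual_information(x, y) > mutual_information(x, z))
```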
# +
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='chow-liu') # << Chow-Liu BNSL
t = time.time() - tic
p = model.log_probability(X).sum()
print "Chow-Liu"
print "Time (s): ", t
print "P(D|M): ", p
# %memit BayesianNetwork.from_samples(X, algorithm='chow-liu')
# -
# ### Comparison
#
# We can then compare the algorithms directly to each other on the digits dataset as we expand the number of pixels to consider.
# +
X, _ = load_digits(10, True)
X = X > numpy.mean(X)
t1, t2, t3, t4 = [], [], [], []
p1, p2, p3, p4 = [], [], [], []
n_vars = range(8, 19)
for i in n_vars:
X_ = X[:,:i]
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact-dp') # << BNSL done here!
t1.append(time.time() - tic)
p1.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact')
t2.append(time.time() - tic)
p2.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='greedy')
t3.append(time.time() - tic)
p3.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='chow-liu')
t4.append(time.time() - tic)
p4.append(model.log_probability(X_).sum())
# +
plt.figure(figsize=(14, 4))
plt.subplot(121)
plt.title("Time to Learn Structure", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.plot(n_vars, t1, c='c', label="Exact Shortest")
plt.plot(n_vars, t2, c='m', label="Exact A*")
plt.plot(n_vars, t3, c='g', label="Greedy")
plt.plot(n_vars, t4, c='r', label="Chow-Liu")
plt.legend(fontsize=14, loc=2)
plt.subplot(122)
plt.title("$P(D|M)$ with Resulting Model", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.ylabel("logp", fontsize=14)
plt.plot(n_vars, p1, c='c', label="Exact Shortest")
plt.plot(n_vars, p2, c='m', label="Exact A*")
plt.plot(n_vars, p3, c='g', label="Greedy")
plt.plot(n_vars, p4, c='r', label="Chow-Liu")
plt.legend(fontsize=14)
# -
# We can see the expected results: the A\* algorithm works faster than the traditional shortest path algorithm, the greedy one faster than that, and Chow-Liu the fastest. The purple and cyan lines superimpose on the right plot as they produce graphs with the same score, followed closely by the greedy algorithm and then Chow-Liu performing the worst.
# ## Constraint Graphs
#
# Now, sometimes you have prior information about how groups of nodes are connected to each other and want to exploit that. This can take the form of a global ordering, where variables can be ordered in such a manner that edges only go from left to right, for example. Alternately, your network may have layers, where each variable belongs to a layer and can only have parents in another layer.
#
# Lets consider a diagnostics Bayesian network like the following (no need to read code, the picture is all that is important for now):
# +
from pomegranate import DiscreteDistribution, ConditionalProbabilityTable, Node
BRCA1 = DiscreteDistribution({0: 0.999, 1: 0.001})
BRCA2 = DiscreteDistribution({0: 0.985, 1: 0.015})
LCT = DiscreteDistribution({0: 0.950, 1: 0.050})
OC = ConditionalProbabilityTable([[0, 0, 0, 0.999],
[0, 0, 1, 0.001],
[0, 1, 0, 0.750],
[0, 1, 1, 0.250],
[1, 0, 0, 0.700],
[1, 0, 1, 0.300],
[1, 1, 0, 0.050],
[1, 1, 1, 0.950]], [BRCA1, BRCA2])
LI = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.20],
[1, 1, 0.80]], [LCT])
PREG = DiscreteDistribution({0: 0.90, 1: 0.10})
LE = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.25],
[1, 1, 0.75]], [OC])
BLOAT = ConditionalProbabilityTable([[0, 0, 0, 0.85],
[0, 0, 1, 0.15],
[0, 1, 0, 0.70],
[0, 1, 1, 0.30],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.10],
[1, 1, 1, 0.90]], [OC, LI])
LOA = ConditionalProbabilityTable([[0, 0, 0, 0.99],
[0, 0, 1, 0.01],
[0, 1, 0, 0.30],
[0, 1, 1, 0.70],
[1, 0, 0, 0.95],
[1, 0, 1, 0.05],
[1, 1, 0, 0.95],
[1, 1, 1, 0.05]], [PREG, OC])
VOM = ConditionalProbabilityTable([[0, 0, 0, 0, 0.99],
[0, 0, 0, 1, 0.01],
[0, 0, 1, 0, 0.80],
[0, 0, 1, 1, 0.20],
[0, 1, 0, 0, 0.40],
[0, 1, 0, 1, 0.60],
[0, 1, 1, 0, 0.30],
[0, 1, 1, 1, 0.70],
[1, 0, 0, 0, 0.30],
[1, 0, 0, 1, 0.70],
[1, 0, 1, 0, 0.20],
[1, 0, 1, 1, 0.80],
[1, 1, 0, 0, 0.05],
[1, 1, 0, 1, 0.95],
[1, 1, 1, 0, 0.01],
[1, 1, 1, 1, 0.99]], [PREG, OC, LI])
AC = ConditionalProbabilityTable([[0, 0, 0, 0.95],
[0, 0, 1, 0.05],
[0, 1, 0, 0.01],
[0, 1, 1, 0.99],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.20],
[1, 1, 1, 0.80]], [PREG, LI])
s1 = Node(BRCA1, name="BRCA1")
s2 = Node(BRCA2, name="BRCA2")
s3 = Node(LCT, name="LCT")
s4 = Node(OC, name="OC")
s5 = Node(LI, name="LI")
s6 = Node(PREG, name="PREG")
s7 = Node(LE, name="LE")
s8 = Node(BLOAT, name="BLOAT")
s9 = Node(LOA, name="LOA")
s10 = Node(VOM, name="VOM")
s11 = Node(AC, name="AC")
model = BayesianNetwork("Hut")
model.add_nodes(s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11)
model.add_edge(s1, s4)
model.add_edge(s2, s4)
model.add_edge(s3, s5)
model.add_edge(s4, s7)
model.add_edge(s4, s8)
model.add_edge(s4, s9)
model.add_edge(s4, s10)
model.add_edge(s5, s8)
model.add_edge(s5, s10)
model.add_edge(s5, s11)
model.add_edge(s6, s9)
model.add_edge(s6, s10)
model.add_edge(s6, s11)
model.bake()
plt.figure(figsize=(14, 10))
model.plot()
plt.show()
# -
# This network contains three layers, with symptoms on the bottom (low energy, bloating, loss of appetite, vomiting, and abdominal cramps), diseases in the middle (ovarian cancer, lactose intolerance, and pregnancy), and genetic tests on the top for three different genetic mutations. The edges in this graph are constrained such that symptoms are explained by diseases, and diseases can be partially explained by genetic mutations. There are no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms. If we were going to design a more efficient search algorithm, we would want to exploit this fact to drastically reduce the search space of graphs.
#
# Before presenting a solution, lets also consider another situation. In some cases you can define a global ordering of the variables, meaning you can order them from left to right and ensure that edges only go from the left to the right. This can represent some temporal separation (things on the left happen before things on the right), physical separation, or anything else. This would also dramatically reduce the search space.
#
# In addition to reducing the search space, an efficient algorithm can exploit this layered structure. A key property of most scoring functions is the idea of "global parameter independence", meaning that the parents of node A are independent of the parents of node B assuming that they do not form a cycle in the graph. If you have a layered structure, either like in the diagnostics network or through a global ordering, it is impossible to form a cycle in the graph through any valid assignment of parent values. This means that the parents for each node can be identified independently, drastically reducing the runtime of the algorithm.
#
# Now, sometimes we know *some things* about the structure of the variables, but nothing about the others. For example, we might have a partial ordering on some variables but not know anything about the others. We could enforce an arbitrary ordering on the others, but this may not be well justified. In essence, we'd like to exploit whatever information we have.
#
# Abstractly, we can think about this in terms of constraint graphs. Lets say you have some symptoms, diseases, and genetic tests, and don't a priori know the connection between all of these pieces, but you do know the previous layer structure. You can define a "constraint graph" which is made up of three nodes, "symptoms", "diseases", and "genetic mutations". There is a directed edge from genetic mutations to diseases, and a directed edge from diseases to symptoms. This specifies that genetic mutations can be parents to diseases, and diseases to symptoms. It would look like the following:
# +
import networkx
from pomegranate.utils import plot_networkx
constraints = networkx.DiGraph()
constraints.add_edge('genetic conditions', 'diseases')
constraints.add_edge('diseases', 'symptoms')
plot_networkx(constraints)
# -
# All variables corresponding to these categories would be put in their appropriate node. This would define a scaffold for structure learning.
#
# Now, we can do the same thing for a global ordering. Lets say we have 3 variables in an order from 0-2.
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(1, 2)
constraints.add_edge(0, 2)
plot_networkx(constraints)
# In this graph, we're saying that variable 0 can be a parent for 1 or 2, and that variable 1 can be a parent for variable 2. In the same way that putting multiple variables in a node of the constraint graph allowed us to define layers, putting a single variable in the nodes of a constraint graph can allow us to define an ordering.
#
# To be specific, lets say we want to find the parents of the variables in node 1 given that those variables parents can only come from the variables in node 0. We can independently find the best parents for each variable in node 1 from the set of those in node 0. This is significantly faster than trying to find the best Bayesian network of all variables in nodes 0 and 1. We can also do the same thing for the variables in node 2 by going through the variables in both nodes 0 and 1 to find the best parent set for the variables in node 2.
#
# However, there are some cases where we know nothing about the parent structure of some variables. This can be solved by including self-loops in the graph, where a node is its own parent. This means that we know nothing about the parent structure of the variables in that node and that the full exponential time algorithm will have to be run. The naive structure learning algorithm can be thought of as putting all variables in a single node in the constraint graph and putting a self-loop on that node.
#
# We are thus left with two procedures; one for solving edges which are self edges, and one for solving edges which are not. Even though we have to use the exponential time procedure on variables in nodes with self loops, it will still be significantly faster because we will be using less variables (except in the naive case).
#
# Frequently though we will have some information about some of the nodes of the graph even if we don't have information about all of the nodes. Lets take the case where we know some variables have no children but can have parents, and know nothing about the other variables.
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(0, 0)
plot_networkx(constraints)
# In this situation we would have to run the exponential time algorithm on the variables in node 0 to find the optimal parents, and then run the independent parents algorithm on the variables in node 1 drawing only from the variables in node 0. To be specific:
#
# (1) Use exponential time procedure to find optimal structure amongst variables in node 0
# (2) Use independent-parents procedure to find the best parents of variables in node 1, restricting the parents to be in node 0
# (3) Concatenate these parent sets together to get the optimal structure of the network given the constraints.
#
# We can generalize this to any arbitrary constraint graph:
#
# (1) Use exponential time procedure to find optimal structure amongst variables in nodes with self loops (including parents from other nodes if needed)
# (2) Use independent-parents procedure to find the best parents of variables in a node, given the constraint that the parents must come from variables in that node's parent
# (3) Concatenate these parent sets together to get the optimal structure of the network given the constraints.
#
# According to the global parameter independence property of Bayesian networks, this procedure will give the globally optimal Bayesian network while exploring a significantly smaller part of the network.
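# The independent-parents step described above can be sketched as a brute-force search over candidate parent subsets, scoring each child variable separately. The following is an illustrative sketch with an MDL-style score, not pomegranate's actual implementation:

```python
from itertools import chain, combinations

import numpy as np

def best_parents(X, child, candidates, max_parents=2):
    """Brute-force the highest-scoring parent set for one variable."""
    n = X.shape[0]

    def score(parents):
        # Log-likelihood of the child's conditional distribution,
        # grouping samples by the joint configuration of the parents
        keys = [tuple(row) for row in X[:, list(parents)]] if parents else [()] * n
        ll = 0.0
        for key in set(keys):
            idx = [i for i, k in enumerate(keys) if k == key]
            _, counts = np.unique(X[idx, child], return_counts=True)
            ll += np.sum(counts * np.log(counts / len(idx)))
        # MDL-style penalty: 0.5*log(n) per free parameter (2**|parents| here)
        return ll - 0.5 * np.log(n) * (2 ** len(parents))

    subsets = chain.from_iterable(combinations(candidates, r)
                                  for r in range(max_parents + 1))
    return max(subsets, key=score)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = X[:, 0] ^ X[:, 1]  # variable 2 is a deterministic function of 0 and 1
print(best_parents(X, child=2, candidates=(0, 1)))  # selects (0, 1) for this data
```

# Because each child is scored on its own, the cost is exponential only in the size of its candidate set, not in the total number of variables.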
#
# pomegranate supports constraint graphs in an extremely easy-to-use manner. Let's say that we have a graph with three layers like the diagnostic model, and five variables in each layer. We can define the constraint graph as a networkx DiGraph, with the nodes being tuples containing the column ids of the variables belonging to that node.
#
# In this case, we're saying that (0, 1, 2, 3, 4) is the first node, (5, 6, 7, 8, 9) is the second node, and (10, 11, 12, 13, 14) is the final node. Let's make variables 1, 7, and 12 related, 11, 13, and 14 related, and 3 and 5 related. In this case, there should be an edge from 1 to 7 and from 7 to 12. 11, 13, and 14 are all part of the same layer, so that connection should be ignored, and there should be a connection from 3 to 5.
# +
numpy.random.seed(6)
X = numpy.random.randint(2, size=(200, 15))
X[:,1] = X[:,7]
X[:,12] = 1 - X[:,7]
X[:,5] = X[:,3]
X[:,13] = X[:,11]
X[:,14] = X[:,11]
a = networkx.DiGraph()
b = tuple((0, 1, 2, 3, 4))
c = tuple((5, 6, 7, 8, 9))
d = tuple((10, 11, 12, 13, 14))
a.add_edge(b, c)
a.add_edge(c, d)
print("Constraint Graph")
plot_networkx(a)
plt.show()
print("Learned Bayesian Network")
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=a)
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print("pomegranate time:", time.time() - tic, model.structure)
# -
# We see that the structure was reconstructed perfectly here. Let's see what would happen if we didn't use the constraint graph.
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print("pomegranate time:", time.time() - tic, model.structure)
# It looks like we got three desirable attributes by using a constraint graph. The first is that there was over an order of magnitude speed improvement in finding the optimal graph. The second is that we were able to remove some edges we didn't want in the final Bayesian network, such as those between 11, 13, and 14. We also removed the edges between 1 and 12 and between 1 and 3, which are spurious given the model that we originally defined. The third desirable attribute is that we can specify the direction of some of the edges and so get a better causal model.
#
# Let's take a look at how big a model we can learn given a three-layer constraint graph like before.
# +
constraint_times, times = [], []
x = numpy.arange(1, 7)
for i in x:
symptoms = tuple(range(i))
diseases = tuple(range(i, i*2))
genetic = tuple(range(i*2, i*3))
constraints = networkx.DiGraph()
constraints.add_edge(genetic, diseases)
constraints.add_edge(diseases, symptoms)
X = numpy.random.randint(2, size=(2000, i*3))
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=constraints)
constraint_times.append( time.time() - tic )
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
times.append( time.time() - tic )
plt.figure(figsize=(14, 6))
plt.title('Time To Learn Bayesian Network', fontsize=18)
plt.xlabel("Number of Variables", fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.plot( x*3, times, linewidth=3, color='c', label='Exact')
plt.plot( x*3, constraint_times, linewidth=3, color='m', label='Constrained')
plt.legend(loc=2, fontsize=16)
plt.yscale('log')
# -
# Looks like by including a constraint graph, we can far more quickly find the optimal Bayesian network. This can allow us to create Bayesian networks on dozens of variables as long as we can order them into layers.
#
# ## Other Parameters
#
# There are three other parameters that are worth mentioning. The first is that a vector of weights can be passed in alongside a matrix of samples, with each entry in the vector corresponding to the weight of that sample. By default all weights are set to 1, but they can be any non-negative number.
#
# ```
# model = BayesianNetwork.from_samples(X, weights=weights)
# ```
#
# Next is the pseudocounts parameter that indicates the number added to the observations of each observed variable combination. This acts as a regularizer by smoothing over the counts and is typically used to prevent the model from saying it is impossible for a combination of variables to occur together just because they weren't observed in the data. Currently all learning algorithms support pseudocounts. By default this is set to 0.
#
# ```
# model = BayesianNetwork.from_samples(X, pseudocount=1.7)
# ```
#
# Lastly, the maximum number of parents that each variable can have can be limited. This is sometimes referred to as the k-learn problem in the literature. In practice this can dramatically speed up the BNSL task, as the majority of time is spent determining the best parent sets. Currently this only affects the exact and greedy algorithms, as in the Chow-Liu algorithm each variable only has a single parent.
#
# ```
# model = BayesianNetwork.from_samples(X, algorithm='exact', max_parents=2)
# ```
#
# Since exact inference uses minimum description length as a score function, the maximum number of parents is by default set to $\log \left(\frac{n}{\log(n)} \right)$ with $n$ being the number of samples in the dataset.
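# A quick sanity check of how that default scales with dataset size. This is a sketch assuming base-2 logarithms and integer truncation; the library's exact rounding may differ:

```python
import math

def default_max_parents(n_samples):
    # log2(n / log2(n)), truncated to an integer parent limit
    return int(math.log2(n_samples / math.log2(n_samples)))

for n in (100, 1_000, 10_000, 100_000):
    print(n, default_max_parents(n))  # the limit grows very slowly with n
```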
#
# # Conclusions
#
# pomegranate currently supports exact BNSL through a shortest-path dynamic programming algorithm, an A\* algorithm, a greedy algorithm, the Chow-Liu tree-building algorithm, and a constraint-graph based algorithm which can significantly speed up structure learning if you have any prior knowledge of the interactions between variables.
#
# If you have any suggestions for how to improve pomegranate or ideas for algorithms to implement, feel free to open an issue on the issue tracker! I'd love to hear feedback.
| tutorials/Tutorial_4b_Bayesian_Network_Structure_Learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from utility_based_gaussian_noise_sampler import UtilityBasedGaussianNoiseSampler
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('display.max_rows',100)
data = pd.read_csv('test/ImbR.csv', index_col=0)
data
# # Test c_perc= list of percentage
gn_perc = UtilityBasedGaussianNoiseSampler(data, thr_rel=0.8, c_perc=[0.5,3])
method = gn_perc.getMethod()
extrType = gn_perc.getExtrType()
thr_rel = gn_perc.getThrRel()
controlPtr = gn_perc.getControlPtr()
c_perc_undersampling, c_perc_oversampling = gn_perc.getCPerc()
pert = gn_perc.getPert()
method, extrType, thr_rel, controlPtr, c_perc_undersampling, c_perc_oversampling, pert
resampled = gn_perc.resample()
len(resampled), resampled
x = resampled['X1'].to_numpy()
y = resampled['X2'].to_numpy()
color = resampled['Tgt'].to_numpy()
plt.scatter(x, y, s=6.0, c=color)
cb = plt.colorbar()
cb.set_label('Tgt')
plt.xlabel('X1')
plt.ylabel('X2')
plt.title('method=extremes,thr_rel=0.8, c_perc=[0.5,3]')
plt.show()
# # Test c_perc= 'balance'
data = pd.read_csv('test/ImbR.csv', index_col=0)
gn_balance = UtilityBasedGaussianNoiseSampler(data, thr_rel=0.8, c_perc='balance')
method = gn_balance.getMethod()
extrType = gn_balance.getExtrType()
thr_rel = gn_balance.getThrRel()
controlPtr = gn_balance.getControlPtr()
c_perc = gn_balance.getCPerc()
pert = gn_balance.getPert()
method, extrType, thr_rel, controlPtr, c_perc, pert
resampled = gn_balance.resample()
len(resampled), resampled
x = resampled['X1'].to_numpy()
y = resampled['X2'].to_numpy()
color = resampled['Tgt'].to_numpy()
plt.scatter(x, y, s=6.0, c=color)
cb = plt.colorbar()
cb.set_label('Tgt')
plt.xlabel('X1')
plt.ylabel('X2')
plt.title('method=extremes,thr_rel=0.8, c_perc=balance')
plt.show()
# # Test c_perc= 'extreme'
data = pd.read_csv('test/ImbR.csv', index_col=0)
gn_extreme = UtilityBasedGaussianNoiseSampler(data, thr_rel=0.8, c_perc='extreme')
method = gn_extreme.getMethod()
extrType = gn_extreme.getExtrType()
thr_rel = gn_extreme.getThrRel()
controlPtr = gn_extreme.getControlPtr()
c_perc = gn_extreme.getCPerc()
pert = gn_extreme.getPert()
method, extrType, thr_rel, controlPtr, c_perc, pert
resampled = gn_extreme.resample()
len(resampled), resampled
x = resampled['X1'].to_numpy()
y = resampled['X2'].to_numpy()
color = resampled['Tgt'].to_numpy()
plt.scatter(x, y, s=6.0, c=color)
cb = plt.colorbar()
cb.set_label('Tgt')
plt.xlabel('X1')
plt.ylabel('X2')
plt.title('method=extremes,thr_rel=0.8, c_perc=extreme')
plt.show()
| test_gn_sampler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Metadata
#
# ```
# Course: DS 5001
# Module: 03 Lab
# Topic: Inferring Language Models
# Author: <NAME>
#
# Purpose: We create word-level language models using simple smoothing from a set of novels. We then evaluate them as predictors of various sentences and as generators of text (as in the Shannon Game).
# ```
# + [markdown] colab_type="text" id="-5x8B8RODykY"
# # Set Up
# -
# ## Import libraries
# + colab={} colab_type="code" id="MwrVU8kZDykb"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.display import HTML
# + colab={} colab_type="code" id="MwrVU8kZDykb"
# This prevents matplotlib from throwing irrelevant errors
from matplotlib.axes._axes import _log as matplotlib_axes_logger
matplotlib_axes_logger.setLevel('ERROR')
# + colab={} colab_type="code" id="MwrVU8kZDykb"
sns.set()
# -
# ## Configure
data_home = '../data'
# + colab={} colab_type="code" id="Fb3ZsuIsDykn"
OHCO = ['book_id', 'chap_num', 'para_num', 'sent_num', 'token_num']
text_file = f'{data_home}/output/austen-combo.csv' # Generated in HW 02
# + tags=[]
ngram_size = 3
k = .5 # Add-k Smoothing parameter
# -
# # Get Training Data
# ## Import TOKENS
#
# We use our standard way of representing a text as input. Normally, `term_str` would already be included in the dataframe.
TOKENS = pd.read_csv(text_file).set_index(OHCO).dropna()
TOKENS['term_str'] = TOKENS.token_str.str.lower().str.replace(r'[\W_]+', '', regex=True)
TOKENS = TOKENS[TOKENS.term_str != '']
TOKENS.head()
# # Handle OOV Terms
# ## Extract VOCAB
VOCAB = TOKENS.term_str.value_counts().to_frame('n').sort_index()
VOCAB.index.name = 'term_str'
VOCAB['n_chars'] = VOCAB.index.str.len()
VOCAB.head()
# ## Compute HAPAX info
#
# We use hapax info to account for out-of-vocabulary (OOV) terms in our test data. Hapax refers to *hapax legomenon*, meaning a word that appears only once in a corpus. Normally, with a large training corpus, we might just replace all hapax terms with `<UNK>`. But our set is small, so we try something else. We use the information content of terms based on their length to locate candidates for removal. This prevents us from getting rid of "good" words that merely appear at low frequency.
HAPAX = VOCAB.query("n == 1").n_chars.value_counts().to_frame('n').sort_index()
HAPAX.index.name = 'n_chars'
HAPAX['p'] = HAPAX.n / HAPAX.n.sum()
HAPAX['i'] = np.log2(1/HAPAX.p)
HAPAX.style.background_gradient()
hapax_mean = HAPAX.i.mean() # Use mean as threshold
HAPAX.i.plot(title="Information by Word Length");
plt.axhline(y=hapax_mean, color='orange')
plt.axvline(x=3.45, color='red')
plt.axvline(x=13, color='red');
unk_list_i = HAPAX[HAPAX.i > hapax_mean].index.to_list()
UNK = VOCAB[VOCAB.n_chars.isin(unk_list_i) & (VOCAB.n == 1)].index.to_list()
unk_list_i
HTML(' '.join(UNK))
# ## Create Training VOCAB $V_{train}$
V_TRAIN = sorted(list(set(VOCAB.index) - set(UNK)) + ['<UNK>'])
V_TRAIN[100:105]
# ## Generate Training Sentences
S_TRAIN = list(TOKENS.groupby(OHCO[:-1]).term_str.apply(lambda x: ' '.join(x)).values)
S_TRAIN[:5]
# # Interlude: Demonstrate Logic
# ## Pad Sentences and Generate New Tokens
S = pd.DataFrame(dict(sent_str=S_TRAIN))
pad = '<s> ' * ngram_size
S['padded'] = pad + S.sent_str + ' </s>'
I = S.padded.str.split(expand=True).stack().to_frame('w0')
I.index.names = ['sent_num', 'token_num']
S['len'] = I.groupby('sent_num').w0.count()
S.head()
I.head()
# ## Remove OOV terms from Tokens
I.loc[~I.w0.isin(V_TRAIN + ['<s>','</s>']), 'w0'] = '<UNK>'
I.query("w0 == '<UNK>'")
# ## Generate Ngram Index
#
# We expand our padded token table with itself, offset by 1 each time.
#
# Note: there are many ways to do this.
for i in range(1, ngram_size):
I[f'w{i}'] = I[f"w{i-1}"].shift(-1)
I
I = I.dropna()
I.loc[0]
# ## Generate Lower Order Ngrams
#
# We get column slices from I based on ngram level.
NG = []
for i in range(ngram_size):
NG.append(I.iloc[:, :i+1].copy())
NG[2].loc[0]
# ## Extract Ngram Types and Frequencies
LM = []
for i in range(ngram_size):
LM.append(NG[i].value_counts().to_frame('n'))
LM[i] = LM[i].sort_index()
# Hack to recast single value tuple in unigram table index ...
LM[0].index = [i[0] for i in LM[0].index]
LM[0].index.name = 'w0'
LM[2]
# ## Apply Smoothing
# + [markdown] tags=[]
# * $c$: Individual ngram count.
# * $k$: Lidstone smoothing value.
# * $N$: Total ngram token count.
# * $B$: Ngram type count; $ = V^{n}$ for ngram size $n$ and unigram vector length $V$.
# * Seen ngram: $ \frac{c + k}{N + Bk} $
# * Unseen ngram with seen context: $ \frac{k}{N + Bk} $
# * Unseen ngram with unseen context: $ \frac{k}{Bk} \rightarrow \frac{1}{B} $
# * Unknown Unigrams: `<UNK>`
# +
# Z1 and Z2 will hold info about unseen ngrams
Z1 = [None for _ in range(ngram_size)] # Unseen N grams, but seen N-1 grams
Z2 = [None for _ in range(ngram_size)] # Unseen N-1 grams too
# The number of unigram types
V = len(LM[0]) # Includes <s> and </s>
# +
# Sample space of possible ngram terms
B = [V**(i+1) for i in range(ngram_size)]
# Unigram case
LM[0]['p'] = LM[0].n / LM[0].n.sum()
LM[0]['log_p'] = np.log2(LM[0].p)
# Higher-order cases
for i in range(1, ngram_size):
# We take advantage of Pandas' implicit join of the lower order
# ngram table with the higher order one
LM[i]['mle'] = LM[i].n / LM[i-1].n
LM[i]['p'] = (LM[i].n + k) / (LM[i-1].n + B[i-1] * k)
LM[i]['log_p'] = np.log2(LM[i].p)
# Handle unseen data
Z1[i] = np.log2(k / (LM[i-1].n + B[i-1] * k))
Z2[i] = np.log2(k / (B[i-1] * k))
# Just in case
LM[i].sort_index(inplace=True)
# -
LM[2].loc[('he','had')].sort_values('log_p', ascending=False).head(10)
# # Train Models
#
# We create a class to do the work of the preceding.
# ## Generate and Count Ngrams
class NgramCounter():
"""A class to generate tables of ngram tokens and types from a list of sentences."""
unk_sign = '<UNK>'
sent_pad_signs = ['<s>','</s>']
def __init__(self, sents: list, vocab: list, n: int = 3):
self.sents = sents # Expected to be normalized
self.vocab = vocab # Can be extracted from another corpus
self.n = n
self.widx = [f'w{i}' for i in range(self.n)] # Used for cols and index names
def generate(self):
# Convert sentence list to dataframe
self.S = pd.DataFrame(dict(sent_str=self.sents))
# Pad sentences and create a new token table
# Wrap sentences in <s> and </s> tags. Multiply <s> tags
# as a function of ngram size, e.g. '<s> <s>' for trigrams.
pad = (self.sent_pad_signs[0] + ' ') * (self.n - 1)
self.I = (pad + self.S.sent_str + ' ' + self.sent_pad_signs[1])\
.str.split(expand=True).stack().to_frame('w0')
# Set index names to resulting token table
self.I.index.names = ['sent_num', 'token_num']
# Remove OOV terms
# When processing test data, we use the training vocab
# Anything that is not in the vocab (incl. s tags) is called <UNK>.
self.I.loc[~self.I.w0.isin(self.vocab + self.sent_pad_signs), 'w0'] = self.unk_sign
# Get sentence lengths (these will include pads)
# May want to use this info when computing perplexity
self.S['len'] = self.I.groupby('sent_num').w0.count()
# Add w columns
# We progressively bind columns of the same token sequence,
# but offset by one each time
for i in range(1, self.n):
self.I[f'w{i}'] = self.I[f"w{i-1}"].shift(-1)
# Generate ngrams
# We create ngram tables of all orders <= n by
# slicing off columns from the main table
self.NG = []
for i in range(self.n):
self.NG.append(self.I.iloc[:, :i+1].copy())
self.NG[i] = self.NG[i].dropna()
# Generate raw ngram counts and MLEs
self.LM = []
for i in range(self.n):
self.LM.append(self.NG[i].value_counts().to_frame('n'))
self.LM[i]['mle'] = self.LM[i].n / self.LM[i].n.sum()
self.LM[i] = self.LM[i].sort_index()
# Hack to remove single value tuple from unigram table ...
self.LM[0].index = [i[0] for i in self.LM[0].index]
self.LM[0].index.name = 'w0'
# + tags=[]
train = NgramCounter(S_TRAIN, V_TRAIN)
train.generate()
# -
train.LM[1].sort_values('n', ascending=False).head()
# ## Inspect how a sentence is represented
train.NG[0].loc[0]
train.NG[1].loc[0]
# + tags=[]
train.NG[2].loc[0]
# -
# ## Estimate Model
class NgramLanguageModel():
"""A class to create ngram language models."""
# Set the Lidstone smoothing value; Laplace = 1
k:float = .5
def __init__(self, ngc:NgramCounter):
self.S = ngc.S
self.LM = ngc.LM
self.NG = ngc.NG
self.n = ngc.n
self.widx = ngc.widx
def apply_smoothing(self):
"""Applies simple smoothing to ngram type counts to estimate the models."""
# Z1 and Z2 will hold info about unseen ngrams
self.Z1 = [None for _ in range(self.n)] # Unseen N grams, but seen N-1 grams
self.Z2 = [None for _ in range(self.n)] # Unseen N-1 grams too
# The base vocab size (same as number of unigram types)
V = len(self.LM[0]) # Includes <s> and </s>
# The number of ngram types
B = [V**(i+1) for i in range(self.n)]
# Handle unigram case (no need for smoothing)
self.LM[0]['p'] = self.LM[0].n / self.LM[0].n.sum()
self.LM[0]['log_p'] = np.log2(self.LM[0].p)
# Handle higher order ngrams
for i in range(1, self.n):
self.LM[i]['mle2'] = self.LM[i].n / self.LM[i-1].n
self.LM[i]['p'] = (self.LM[i].n + self.k) / (self.LM[i-1].n + B[i-1] * self.k)
self.LM[i]['log_p'] = np.log2(self.LM[i].p)
# Unseen N grams, but seen N-1 grams
self.Z1[i] = np.log2(self.k / (self.LM[i-1].n + B[i-1] * self.k))
# Unseen N-1 grams too
self.Z2[i] = np.log2(self.k / (B[i-1] * self.k))
self.LM[i].sort_index(inplace=True)
def predict(self, test:NgramCounter):
"""Predicts test sentences with estimated models."""
self.T = test
self.PP = []
p_key = 'log_p'
for i in range(self.n):
ng = i + 1
if i == 0:
self.T.S[f'ng_{ng}_ll'] = self.T.NG[0].join(self.LM[0].log_p, on=self.widx[:ng])\
.groupby('sent_num').log_p.sum()
else:
self.T.S[f'ng_{ng}_ll'] = self.T.NG[i].join(self.LM[i][p_key], on=self.widx[:ng])\
.fillna(self.Z1[i]).fillna(self.Z2[i])\
.groupby('sent_num')[p_key].sum()
self.T.S[f'pp{ng}'] = 2**( -self.T.S[f'ng_{ng}_ll'] / self.T.S['len'])
model = NgramLanguageModel(train)
model.k = 1
model.apply_smoothing()
NG = model.NG
LM = model.LM
Z1 = model.Z1
Z2 = model.Z2
LM[2].loc[('anne', 'had')].sort_values('p', ascending=False)[['log_p']]
LM[2].loc[('wentworth', 'had')].sort_values('p', ascending=False)[['n']]
# # Test Models
# ## Choose Test Sentences
# + tags=[]
# Some paragraphs from Austen's _Emma_ and other stuff (first two)
S_TEST = """
The car was brand new
Computer programs are full of bugs
The event had every promise of happiness for her friend
Mr Weston was a man of unexceptionable character easy fortune suitable age and pleasant manners
and there was some satisfaction in considering with what self-denying generous friendship she had always wished and promoted the match
but it was a black morning's work for her
The want of <NAME> would be felt every hour of every day
She recalled her past kindness the kindness the affection of sixteen years
how she had taught and how she had played with her from five years old
how she had devoted all her powers to attach and amuse her in health
and how nursed her through the various illnesses of childhood
A large debt of gratitude was owing here
but the intercourse of the last seven years
the equal footing and perfect unreserve which had soon followed Isabella's marriage
on their being left to each other was yet a dearer tenderer recollection
She had been a friend and companion such as few possessed intelligent well-informed useful gentle
knowing all the ways of the family
interested in all its concerns
and peculiarly interested in herself in every pleasure every scheme of hers
one to whom she could speak every thought as it arose
and who had such an affection for her as could never find fault
How was she to bear the change
It was true that her friend was going only half a mile from them
but Emma was aware that great must be the difference between a Mrs Weston
only half a mile from them
and a Miss Taylor in the house
and with all her advantages natural and domestic
she was now in great danger of suffering from intellectual solitude
She dearly loved her father
but he was no companion for her
He could not meet her in conversation rational or playful
The evil of the actual disparity in their ages
and Mr Woodhouse had not married early
was much increased by his constitution and habits
for having been a valetudinarian all his life
without activity of mind or body
he was a much older man in ways than in years
and though everywhere beloved for the friendliness of his heart and his amiable temper
his talents could not have recommended him at any time
Her sister though comparatively but little removed by matrimony
being settled in London only sixteen miles off was much beyond her daily reach
and many a long October and November evening must be struggled through at Hartfield
before Christmas brought the next visit from Isabella and her husband
and their little children to fill the house and give her pleasant society again
""".split('\n')[1:-1]
# -
# ## Run Models
test = NgramCounter(S_TEST, V_TRAIN) # Note that we use the training vocab
test.generate()
model.predict(test)
train.LM[2]
test.NG[2]
# ## OOV Percent
round((test.NG[0].w0 == '<UNK>').sum() / len(test.NG[0]), 2)
# ## Compare Models
# ### Unigrams
comp_cols = ['len','pp1','pp2','pp3','sent_str']
test.S.sort_values('pp1')[comp_cols].head()
test.S.sort_values('pp1')[comp_cols].tail()
# ### Bigrams
test.S.sort_values('pp2')[comp_cols].head()
test.S.sort_values('pp2')[comp_cols].tail()
# ### Trigrams
test.S.sort_values('pp3')[comp_cols].head()
test.S.sort_values('pp3')[comp_cols].tail()
sns.lmplot(data=test.S, x='len', y='pp2', aspect=1.5);
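# The `pp1`/`pp2`/`pp3` columns above are per-sentence perplexities: two raised to the negative average per-token log2 probability, just as computed in `predict`. A toy check of the arithmetic, with made-up log probabilities:

```python
import numpy as np

log_probs = np.array([-3.0, -5.5, -2.0, -7.5, -4.0])  # hypothetical per-token log2 p
ll = log_probs.sum()                  # sentence log-likelihood = -22.0
pp = 2 ** (-ll / len(log_probs))      # 2**(22/5) = 2**4.4
print(round(pp, 2))                   # about 21.11
```

# Lower perplexity means the model found the sentence less surprising; dividing by length keeps long and short sentences comparable.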
# + [markdown] colab_type="text" id="cPY7ekfXgbE_" tags=[]
# # Generate Text
# -
def generate_text(LM, n=20):
# Start with beginning sentence marker
words = ['<s>', '<s>']
# Sentence counter
sent_count = 0
# Generate words stochastically
while sent_count < n:
# Get trigram context
bg = tuple(words[-2:])
# Get next word
words.append(LM[2].loc[bg].sample(weights='mle2').index.values[0])
# Terminate when end-of-sentence marker found
if words[-1] == '</s>':
sent_count += 1
if sent_count < n:
words.append('<s>')
# Create text from words
text = ' '.join(words)
# Format text for printing
sents = pd.DataFrame(dict(sent_str=text.split('<s> <s>')))
sents['len'] = sents.sent_str.str.len()
sents = sents[sents.len > 0]
sents.sent_str = sents.sent_str.str.replace('<s> ', '')
sents.sent_str = sents.sent_str.str.replace(' </s>', '')
sents.sent_str = sents.sent_str.str.strip()
sents.sent_str = sents.sent_str.str.replace(r" s ", "'s ", regex=True)
sents['html'] = "<li>" + sents.sent_str.str.upper() + ".</li>"
output = sents.html.str.cat(sep='\n')
# Print text
display(HTML(f"<ol style='font-family:monospace;margin-left:1rem;width:4in;'>{output}</ol>"))
# Return sentences with prediction data
return sents
sents = generate_text(LM, n=20)
# # Save
# + tags=[]
path_prefix = f"{data_home}/output/austen-combo"
HAPAX.to_csv(f"{path_prefix}-HAPAX.csv")
pd.Series(UNK).to_csv(f"{path_prefix}-UNK.txt")
for n in range(1, ngram_size):
NG[n].to_csv(f"{path_prefix}-NG{n}.csv")
LM[n].to_csv(f"{path_prefix}-LM{n}.csv")
# -
# # Fun
#
# How OHCO helps us learn about language: we use the `token_num` level to explore which words tend to begin and end sentences.
# All words that begin sentences
VOCAB['start_freq'] = TOKENS.query("token_num == 0").term_str.value_counts()
# All words that end sentences
VOCAB['end_freq'] = TOKENS.shift(1).query("token_num == 0").term_str.value_counts()
VOCAB.describe().T
VOCAB[(VOCAB.start_freq > 5) & (VOCAB.end_freq > 4)].plot.scatter('start_freq','end_freq');
VOCAB.sort_values('end_freq', ascending=False).head(10)
VOCAB.sort_values('start_freq', ascending=False).head(10)
| M03_LanguageModels/M03_02_LanguageModels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
sc
docs = sc.textFile('/Users/adrianopagano/Desktop/Big_Dive/BigDive5/Data/reddit_2014-12-10k.json')
docs.take(1)
import json
docs.map(json.loads).map(lambda x: x['body']).take(2)
# +
import re
pattern = '(?u)\\b[A-Za-z]{3,}'
# -
docs.map(json.loads)\
.map(lambda x: x['body']) \
.flatMap(lambda x: re.findall(pattern, x))\
.map(lambda x: (x.lower(), 1))\
.reduceByKey(lambda x,y: x+y)\
.map(lambda (a, b): (b, a))\
.sortByKey(ascending=False)\
.take(10)
total_subreddit = set(docs.map(json.loads).map(lambda x: x['subreddit']).collect())
topics_serious = ['philosophy', 'history', 'politics', 'science']
topics_funny = ['funny', 'memes', 'humour', 'comic']
subreddit_serious= []
subreddit_funny= []
for sub in total_subreddit:
for i, topic in enumerate(topics_serious):
if topic in sub or topic.upper() in sub.upper():
subreddit_serious.append(sub)
for i, topic in enumerate(topics_funny):
if topic in sub or topic.upper() in sub.upper():
subreddit_funny.append(sub)
print subreddit_serious, subreddit_funny
print 'Number of serious subreddits: ' + str(len(subreddit_serious))
print 'Number of funny subreddits :' +str(len(subreddit_funny))
# %pylab inline
# +
import matplotlib.pyplot as plt
import seaborn
num_serious = docs.map(json.loads)\
.filter(lambda x: x['subreddit'] in subreddit_serious) \
.map(lambda x: (x['subreddit'], 1)) \
.groupByKey() \
.map(lambda pair: len(pair[1]))\
.collect()
num_funny = docs.map(json.loads)\
.filter(lambda x: x['subreddit'] in subreddit_funny) \
.map(lambda x: (x['subreddit'], 1)) \
.groupByKey() \
.map(lambda pair: len(pair[1]))\
.collect()
fig, ax = plt.subplots(1,2)
ax[0].bar(range(len(num_serious)), sorted(num_serious))
ax[1].bar(range(len(num_funny)), sorted(num_funny))
ax[0].set_xlabel('Serious subreddit')
ax[1].set_xlabel('Funny subreddit')
ax[0].set_ylabel('Comments')
ax[0].set_ylim([0,25])
# +
import re
re_punct = re.compile('[,\';:]')
re_word = re.compile('([^\w\s]|\d+)')
def punct(x):
try:
return len(re.findall(re_punct, x))/(float(len(re.findall(re_word, x))))
except ZeroDivisionError:
return 0
punct_serious = docs.map(json.loads)\
.filter(lambda x: x['subreddit'] in subreddit_serious) \
.map(lambda x: x['body']) \
.map(punct) \
.collect()
punct_funny = docs.map(json.loads)\
.filter(lambda x: x['subreddit'] in subreddit_funny) \
.map(lambda x: x['body']) \
.map(punct) \
.collect()
fig, ax = plt.subplots(1,2)
ax[0].hist(punct_serious)
ax[1].hist(punct_funny)
ax[0].set_xlim([0,1.0])
ax[0].set_ylim([0,120])
# -
| DataScience/Reddit_comment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import re
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
# ## Load Data
# load data
data = pd.read_csv("./all/train_2.csv")
whole_period = len(data.columns) -1
print("dates are amount to {}, from {} to {}".format(whole_period, data.columns[1],data.columns[-1]))
# ## Train Test Split
# 1. whether need data augmentation or cross validation by overlapping train data ?
# 2. whether avoid wasting testing encoder data points by overlapping train and test data ?
# +
train_window = 365
predict_shift = 30
predict_window = 365
test_start = whole_period - predict_window - predict_shift # start point for test training and predicting
overlap = 0 # number of columns shared between train and test
split_point = test_start + overlap # data before split_point could be used for train and validation
def train_test_split(aug_shift = []):
test_X = data[list(data.columns[1+test_start:test_start + train_window])]
test_Y = data[list(data.columns[1+test_start + predict_shift:test_start + predict_shift + predict_window])]
train_X = data[list(data.columns[1:train_window])]
train_Y = data[list(data.columns[1+predict_shift:predict_shift + predict_window])]
if(len(aug_shift)): #data_augmentation
for aug in aug_shift:
aug_X,aug_Y = augmentation(aug)
train_X = pd.concat([train_X,aug_X])
train_Y = pd.concat([train_Y,aug_Y])
val_X, val_Y = augmentation(split_point - predict_window - predict_shift)
return train_X, train_Y, val_X, val_Y, test_X, test_Y
def augmentation(aug_shift):
X = data[list(data.columns[aug_shift:aug_shift + train_window])]
Y = data[list(data.columns[aug_shift + predict_shift:aug_shift + predict_shift + predict_window])]
return X,Y
# -
train_X, train_Y, val_X, val_Y, test_X, test_Y = train_test_split()
# ## Data Preprocessing
# 1. normalize(values - means), regard means as additional feature
# 2. map to bins(std), regard std as additional feature
# 3. missing values: set to -1
# 4. batchify
df = train_X[:200]
df = df.dropna(how="all")
mean_df = df.mean(axis= 1,skipna=True)
std_df = df.std(axis = 1,skipna=True)
index = df.divide(0.05*std_df,axis=0).fillna(-1).astype(int).values
idx = 20
val = df.iloc[idx].values
plt.plot([mean_df.iloc[idx]]*len(val),label = "mean")
#plt.plot([0.2*std_df.iloc[0]]*len(val),label = "std")
for d in np.arange(0,1,0.05):
plt.plot([d*std_df.iloc[idx]]*len(val))
plt.plot(val[:250])
plt.legend()
plt.show()
mean_df.values.shape
len(np.arange(0,20,0.05))
def X_loader(df,std_ratio = 0.05,batchsize=10):
df = df.dropna(how="all").reset_index()
mean_df = df.mean(axis= 1,skipna=True)
std_df = df.std(axis = 1,skipna=True)
x = df.divide(std_ratio*std_df,axis=0).fillna(-1).astype(int).values
batch=0
while batch<(len(x) // batchsize):
data=x[batch*batchsize:(batch+1)*batchsize,:]
mean=mean_df.values[batch*batchsize:(batch+1)*batchsize]
std = std_df.values[batch*batchsize:(batch+1)*batchsize]
#yield(torch.LongTensor(data),torch.Tensor(mean),torch.Tensor(std).cuda())
yield(torch.LongTensor(data),torch.Tensor(mean),torch.Tensor(std))
batch+=1
def Y_loader(df, batchsize=10):
    df = df.dropna(how="all").reset_index()
    y = df.fillna(-1).values
    batch = 0
    while batch < (len(y) // batchsize):
        data = y[batch*batchsize:(batch+1)*batchsize, :]
        #yield(torch.FloatTensor(data).cuda())
        yield (torch.Tensor(data))
        batch += 1
xloader = X_loader(df)
for data, mean, std in xloader:
    print(data)
    print(mean)
    break
yloader = Y_loader(df)
for data in yloader:
    print(data)
    break
train_Xloader = X_loader(train_X)
train_Yloader = Y_loader(train_Y)
# ## Build Model
class AutoEncoder(nn.Module):
    def __init__(self, emb_dim, encoder_dim=150, out_dim=63, vocab_size=1200):
        super(AutoEncoder, self).__init__()  # was super(NN, ...), which raises NameError
        self.emb_dim = emb_dim
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, encoder_dim, batch_first=True, bidirectional=False)
        self.decoder = nn.Linear(encoder_dim, out_dim)

    def forward(self, x):
        out = self.embed(x)
        # print('embedding', out)
        output, hidden = self.encoder(out)
        # print('hidden', hidden)
        output = self.decoder(hidden)
        return output.squeeze()
# ## Train
| analysis/WebTraffic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# ## _Debugging with Assertions_
#
#
# The latest version of this notebook is available on:
#
# ***
# ### Contributors
# <NAME>, <NAME>
#
# ### Qiskit Package Versions
import qiskit
qiskit.__qiskit_version__
# ## Introduction
# Once a quantum program has been written, it is important to check if it is correct. However, debugging quantum programs can be difficult because classical debugging approaches, such as using print statements, will not work: measurement disturbs the quantum state.
#
# Using quantum assertions is a useful method for debugging quantum programs. This method allows us to put breakpoints at points in a program where we expect certain qubits to be in a certain state (for example, a classical state, a uniform superposition state, or a product state). Next, we can use statistical tests to check if the qubits are actually in the expected states.
#
# Currently, we support assertions of classical states, uniform superposition states, product states, and the negation of each of these.
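# The statistical test behind a uniform-superposition assertion can be sketched as a chi-squared goodness-of-fit check on the measured counts (a simplified illustration of the idea, not Qiskit's actual implementation; comparing the statistic against a critical value corresponds to the p-value threshold):

```python
def chisq_uniform(counts, n_qubits):
    """Chi-squared statistic of measured bitstring counts against the
    uniform distribution over all 2**n_qubits outcomes."""
    shots = sum(counts.values())
    expected = shots / 2 ** n_qubits
    stat = 0.0
    for outcome in range(2 ** n_qubits):
        key = format(outcome, '0{}b'.format(n_qubits))
        observed = counts.get(key, 0)
        stat += (observed - expected) ** 2 / expected
    return stat
```

# Perfectly uniform counts give a statistic of 0; heavily skewed counts give a large one, and the assertion would fail.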
# +
# useful additional packages
import math
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
# importing Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
from qiskit import BasicAer, IBMQ
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
# +
backend = BasicAer.get_backend('qasm_simulator') # run on local simulator by default
# Uncomment the following lines to run on a real device
#IBMQ.load_accounts()
#from qiskit.providers.ibmq import least_busy
#backend = least_busy(IBMQ.backends(operational=True, simulator=False))
#print("the best backend is " + backend.name())
# -
# ## Bell State<a id='section1'></a>
#
# Suppose we want to make a circuit that creates a Bell state. We can use assertions to check if our program does what we expect.
#
# First, we make an empty list to store all our breakpoints throughout the program. (This is useful if you later have multiple breakpoints and want to comment some out easily.)
breakpoints = []
# We now initialize a circuit and apply a Hadamard gate to qubit 0:
#making circuit: bell state
qc = QuantumCircuit(2, 2)
qc.h(0)
# Note that at this point in the program we expect qubit 1 and 0 to be in a product state. (Not entangled.)
#
#
breakpoints.append(qc.get_breakpoint_product(0, 0, 1, 1, 0.05))
# Here, the first four inputs are qubit 0, cbit 0, qubit 1, cbit 1, indicating that we are asserting that qubit 0 and qubit 1 are in a product state, and we are storing our measurements of qubit 0 on cbit 0 and qubit 1 on cbit 1. The last argument is optional and represents the critical p-value that determines whether the statistical tests underlying this quantum assertion will pass or fail (default 0.05).
#
# The breakpoint creates a copy of the quantum circuit up to this point with measurement operations at the end as visualized below. This will later be run on the backend to get an intermediate measurement distribution used for statistical tests to evaluate our assertion.
breakpoints[0].draw(output="mpl")
# Our new breakpoint has now been added to the list of breakpoints and we can finish writing our program:
qc.cx(0, 1)
qc.measure([0,1], [0,1])
# When executing the circuit, we add in the list of breakpoints to be executed as well:
# running the job, printing results
job_sim = execute(breakpoints + [qc], backend) #execute all breakpoints as well as the main quantum circuit
sim_result = job_sim.result() #obtain results object as before
# Now, using a classical Python assertion, we can check if our breakpoint passed the assertion of a product state.
assert(sim_result.get_assertion_passed(breakpoints[0]))
# If the assertion passes, then the program will continue running. If the assertion fails, the program will stop with an assertion error.
#
# Since our assertion passes, we conclude that we did not make an error in creating the state at the breakpoint.
#
# You can access the p-value, chi squared statistic and whether or not a test passed for the statistical test within each assertion as well:
# +
chisq, pval, passed = sim_result.get_assertion(breakpoints[0])
print("breakpoint0 pval =", pval)
print("\n" + "breakpoint0 chisq =", chisq)
print("\n" + "breakpoint0 passed?", passed)
# -
# As before, we can still draw our original circuit and view its results. (Breakpoints not shown.)
qc.draw(output="mpl")
plot_histogram(sim_result.get_counts(qc))
# We see that we have created a Bell state.
#
# In a longer program, we could place breakpoints in the middle of a program after multiple instructions in a more useful manner to check if we have correctly implemented the gates in our circuit.
#
# ## QFT<a id='section1'></a>
#
# Here is an example of using assertions in a quantum fourier transform circuit:
# Initialize breakpoints list:
breakpoints = []
# Continue writing the program:
# +
#make the qft
def input_state(circ, n):
    """n-qubit input state for QFT that produces output 1."""
    for j in range(n):
        circ.h(j)
        circ.u1(-math.pi/float(2**(j)), j)

def qft(circ, n):
    """n-qubit QFT on q in circ."""
    for j in range(n):
        for k in range(j):
            circ.cu1(math.pi/float(2**(j-k)), j, k)
        circ.h(j)
qft3 = QuantumCircuit(5, 5, name="qft3")
qft4 = QuantumCircuit(5, 5, name="qft4")
qft5 = QuantumCircuit(5, 5, name="qft5")
# Below, qft3 is a 3-qubit quantum circuit.
input_state(qft3, 3) # Initializes the state so that post-QFT, the state should be 1.
# -
# Insert a breakpoint to the qft3 circuit after initializing the input state:
breakpoint1 = qft3.get_breakpoint_uniform(range(3), range(3), 0.05)
# This asserts that the 3 qubits are in uniform superposition, with critical p-value 0.05.
#
# Continue the program:
qft3.barrier()
qft(qft3, 3)
qft3.barrier()
# Insert a breakpoint after the quantum Fourier Transform has been performed:
breakpoint2 = qft3.get_breakpoint_classical(range(3), range(3), 0.05, 1)
# This asserts that the 3 qubits are a classical value of 1, with critical p-value 0.05.
#
# Continuing, we measure necessary qubits before calling execute:
# +
for j in range(3):
    qft3.measure(j, j)
input_state(qft4, 4)
qft4.barrier()
qft(qft4, 4)
qft4.barrier()
for j in range(4):
    qft4.measure(j, j)
input_state(qft5, 5)
qft5.barrier()
qft(qft5, 5)
qft5.barrier()
for j in range(5):
    qft5.measure(j, j)
# -
# Add the list of breakpoints to the circuits being run in execute:
# setting up the backend, running the breakpoint and the job
sim_backend = BasicAer.get_backend('qasm_simulator')
job = execute(breakpoints + [qft3, qft4, qft5], sim_backend, shots=1024)
result = job.result()
# We can now see the results of our assertions:
# Show the assertion
breakpoints = [breakpoint1, breakpoint2]
print()
for breakpoint in breakpoints:
    print("Results of our " + result.get_assertion_type(breakpoint) + " Assertion:")
    tup = result.get_assertion(breakpoint)
    print('chisq = %s\npval = %s\npassed = %s\n' % tuple(map(str, tup)))
    assert (result.get_assertion_passed(breakpoint))
# As well as the outputs of our original quantum circuits:
# Show the results
print(result.get_counts(qft3))
print(result.get_counts(qft4))
print(result.get_counts(qft5))
| terra/qis_intro/debugging_with_assertions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Write a program that makes the computer "think" of an integer between 0 and 5 and asks the user to guess which number the computer picked.
# # The program should print whether the user won or lost.
#
# +
import random
sorteio = random.randint(0, 5)  # randint is inclusive at both ends, so (0, 5) covers 0..5
n = int(input('Enter a number between 0 and 5: '))
if sorteio == n:
    print('Congratulations!! You guessed the drawn number')
    print('Drawn number: {}  Guess: {}'.format(sorteio, n))
else:
    print('Try again')
    print('Drawn number: {}  Guess: {}'.format(sorteio, n))
# -
# # Write a program that reads a car's speed. If it exceeds 80 km/h, show a message saying the driver was fined. The fine costs R$7.00 for each km over the limit.
velocidade = float(input("Enter the car's speed: "))
if velocidade > 80:
    diferenca = velocidade - 80
    multa = diferenca * 7
    print('You exceeded the speed limit. You will be fined R${:.2f}'.format(multa))
else:
    print('Speed within the limit')
# # Write a program that reads an integer and shows on screen whether it is EVEN or ODD
n = int(input('Enter a number: '))
if n % 2 == 0:
    print('The number you entered is EVEN')
else:
    print('The number you entered is ODD')
# # Develop a program that asks for the distance of a trip in km. Calculate the ticket price, charging R$0.50 per km for trips of up to 200 km and R$0.45 per km for longer trips
km = float(input('Enter the trip distance in km: '))
if km <= 200:
    print('The ticket price is: R${:.2f}'.format(km*0.5))
else:
    print('The ticket price is: R${:.2f}'.format(km*0.45))
# # Write a program that reads any year and shows whether it is a leap year
ano = int(input('Enter a year: '))
# Full Gregorian rule: divisible by 4, except centuries not divisible by 400
if ano % 4 == 0 and (ano % 100 != 0 or ano % 400 == 0):
    print('The year {} is a leap year'.format(ano))
else:
    print('The year {} is not a leap year'.format(ano))
# # Write a program that reads three numbers and shows which is the largest and which is the smallest
# +
n1 = int(input('Enter a number: '))
n2 = int(input('Enter another number: '))
n3 = int(input('Enter another number: '))
menor = n1
maior = n1
if n2 < menor:
    menor = n2
if n3 < menor:
    menor = n3
if n2 > maior:
    maior = n2
if n3 > maior:
    maior = n3
print('The smallest number is: {}'.format(menor))
print('The largest number is: {}'.format(maior))
# -
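# Python's built-in `min()` and `max()` accept multiple arguments, so the chain of comparisons above can also be written directly (same result, shown here with example values instead of `input()`):

```python
n1, n2, n3 = 7, 2, 9  # example values
print('The smallest number is: {}'.format(min(n1, n2, n3)))
print('The largest number is: {}'.format(max(n1, n2, n3)))
```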
# # Write a program that asks for an employee's salary and calculates the raise. For salaries above R$1,250.00, apply a 10% raise. For salaries at or below that, the raise is 15%.
salario = float(input("Enter the employee's salary: R$"))
if salario > 1250:
    print('The salary adjusted by 10% is R${:.2f}'.format(salario*1.1))
else:
    print('The salary adjusted by 15% is R${:.2f}'.format(salario*1.15))
# # Develop a program that reads the lengths of three line segments and tells the user whether they can form a triangle
# +
r1 = float(input('Enter the length of the first segment: '))
r2 = float(input('Enter the length of the second segment: '))
r3 = float(input('Enter the length of the third segment: '))
# Triangle inequality: every pair of sides must sum to more than the third
if r1 + r2 > r3 and r1 + r3 > r2 and r2 + r3 > r1:
    print('These segments can form a triangle')
else:
    print('These segments cannot form a triangle')
| 04-IF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mask R-CNN Demo
#
# A quick intro to using the pre-trained model to detect and segment objects.
# +
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
# Root directory of the project
ROOT_DIR = os.path.abspath("C:/Users/Administrator/Desktop/Mask_RCNN-master")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
# %matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.abspath("C:/Users/Administrator/logs/coco20201209T0149")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco_0024.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "dataset/train/train2014")
# -
# ## Configurations
#
# We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.
#
# For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
# +
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()
# -
# ## Create Model and Load Trained Weights
# +
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# -
# ## Class Names
#
# The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.
#
# To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.
#
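# The remapping idea can be sketched like this (a hypothetical illustration of the concept, not the actual `Dataset` implementation):

```python
def build_class_maps(source_ids):
    """Map a dataset's (possibly non-sequential) class IDs to sequential
    internal IDs, reserving 0 for the background class."""
    to_internal = {src: i + 1 for i, src in enumerate(sorted(source_ids))}
    to_source = {v: k for k, v in to_internal.items()}
    return to_internal, to_source
```

# With COCO-style IDs `[1, 70, 72]`, class 70 becomes internal ID 2 and class 72 becomes 3, so the gap at 71 disappears.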
# To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.
# ```
# # Load COCO dataset
# dataset = coco.CocoDataset()
# dataset.load_coco(COCO_DIR, "train")
# dataset.prepare()
#
# # Print class names
# print(dataset.class_names)
# ```
#
# We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
# ## Run Object Detection
# +
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
# + jupyter={"outputs_hidden": true}
| samples/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/easleydp/AIAE-to-SPX-total-returns/blob/main/AIAE_to_SPX_total_returns.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="sQYOpy4nYXNU"
# # Aggregate (or Average) Investor Allocation to Equities (AIAE)
#
# This notebook uses the most recently available data to plot the AIAE valuation measure introduced by <NAME> in his Dec 2013 essay ["The Single Greatest Predictor of Future Stock Market Returns"](https://www.philosophicaleconomics.com/2013/12/the-single-greatest-predictor-of-future-stock-market-returns/). AIAE is calculated using freely available data from [FRED (Federal Reserve Economic Data program)](https://fred.stlouisfed.org/docs/api/fred/fred.html). Livermore showed how the AIAE ratio on any given date is predictive of [SPX](https://en.wikipedia.org/wiki/S%26P_500) total returns over the subsequent 10 years.
#
# <NAME>'s research note "[Market Timing Using Aggregate Equity Allocation Signals](https://alphaarchitect.com/2021/04/29/market-timing-using-aggregate-equity-allocation-signals/)" re-visited the analysis in Apr 2021. He found that AIAE continues to trump all the well-known indicators (such as CAPE ratio) and concluded: "While rarely mentioned in discussions of equity forecasting indicators, the AIAE does appear to be the single greatest predictor of long-term equity returns currently in the public domain."
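# The AIAE ratio itself is simple arithmetic over the FRED aggregates: the market value of equities divided by equities plus total borrower liabilities. A minimal sketch (the intercept and slope below mirror the linear fit used in the plotting cell of this notebook; the input values are illustrative only):

```python
def aiae_ratio(equities, borrower_liabilities):
    # AIAE = aggregate investor allocation to equities
    return equities / (equities + borrower_liabilities)

def predicted_10yr_return(aiae, intercept=0.366, slope=-0.775):
    # Linear mapping from AIAE to predicted SPX 10-year annualised total return
    return intercept + slope * aiae
```

# For example, an AIAE of 0.40 maps to a predicted annualised 10-year return of about 5.6%.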
# + [markdown] id="dbbdK9-GuMfd"
# ---
# NOTE: Delete the `data` folder to have the FRED CSV files downloaded afresh, otherwise the existing files will be used (the source data is only updated quarterly). Alternatively (since manually deleting a folder in Google Colab is a bit of a chore), temporarily set `force_refresh` to `True` in the next code block.
# + id="D6m6NtaJ0EuH"
# Configuration
# As an alternative to deleting the `data` folder, temporarily set this
# to True to force a refresh of the FRED data.
force_refresh = False
# Min supported start year is 1952. Used when downloading the FRED data.
start_year = 1989
# + colab={"base_uri": "https://localhost:8080/"} id="ZRwdQVrJYY_z" outputId="04a91dbb-4a09-48a5-8c48-51ec03c14321"
# Download the FRED data
# Each of these data frames has a DATE series and $ value series. The latter is
# in billions unless `'millions':True`, in which case we'll transform the values
# to billions.
fred_frames = [
# Nonfinancial Corporate Business; Corporate Equities; Liability, Market Value Levels
{'id':"NCBCEL"},
# Domestic Financial Sectors; Corporate Equities; Liability, Level
{'id':"FBCELLQ027S", 'millions':True},
# State and Local Governments; Debt Securities and Loans; Liability, Level
{'id':"SLGTCMDODNS"},
# Households and Nonprofit Organizations; Debt Securities and Loans; Liability, Level
{'id':"TCMILBSHNO"},
# Nonfinancial Corporate Business; Debt Securities and Loans; Liability, Level
{'id':"TCMILBSNNCB"},
# Federal Government; Debt Securities and Loans; Liability, Level
{'id':"FGTCMDODNS"},
# Rest of the World; Debt Securities and Loans; Liability, Level
{'id':"WCMITCMFODNS"}
]
# We'll also fetch recession data so we can paint grey bars in the background
# of the plot. This frame has a DATE series and a flag series (0 and 1).
recession_id = 'JHDUSRGDPBR' # Dates of U.S. recessions as inferred by GDP-based recession indicator
from pathlib import Path
import urllib.request
data_dir = Path('data')
if force_refresh or not data_dir.exists():
    print('Refreshing CSV files...')
    data_dir.mkdir(exist_ok=True)

    def request_csv(id, start_year):
        url = f'https://fred.stlouisfed.org/graph/fredgraph.csv?cosd={start_year}-01-01&id={id}'
        urllib.request.urlretrieve(url, f'data/{id}.csv')

    for ff in fred_frames:
        request_csv(ff['id'], start_year)
    request_csv(recession_id, max(start_year, 1968))  # FRED recession data is only available since 1968
    print('\tCSV files refreshed.')
else:
    print('Using existing CSV files.')
# + colab={"base_uri": "https://localhost:8080/"} id="ctvW1ujtYvAi" outputId="ffd6273a-6d91-4152-8e3a-a7477815adfd"
# Process the FRED data and calculate AIAE
import sys  # used below when date series disagree
import pandas as pd
from dateutil.relativedelta import relativedelta
# The timestamps of the FRED data are at the beginning of the quarter to
# which they correspond, but the values are as of the end of the quarter.
# So we shift the FRED timestamps to the end of the quarter.
def shift_timestamps(frame):
    values = frame['DATE']
    values.update(values.transform(lambda d: d + relativedelta(months=3)))
def read_csv(id):
    return pd.read_csv(f'data/{id}.csv', header=0, parse_dates=['DATE'])
last_fred_frame = None
for ff in fred_frames:
    id = ff['id']
    dataframe = read_csv(id)
    ff['dataframe'] = dataframe
    shift_timestamps(dataframe)
    # Convert Value to billions if necessary
    if ff.get('millions', False):
        values = dataframe[ff['id']]
        values.update(values.transform(lambda v: v / 1000))
    # Confirm all dataframes have the same Date series
    if not last_fred_frame is None:
        if not last_fred_frame['dataframe']['DATE'].equals(dataframe['DATE']):
            sys.exit(f'Date series differ between {last_fred_frame["id"]} and {id}!')
    last_fred_frame = ff
# Likewise, the recession data (odd one out because it has its own length and values are flags)
recession_frame = read_csv(recession_id)
shift_timestamps(recession_frame)
# Transform frames so that DATE is the index. (This is primarily for the
# sake of the $ value frames, as a preliminary step before merging.)
def make_date_the_index(frame):
    frame.set_index('DATE', inplace=True, verify_integrity=True)

for ff in fred_frames:
    make_date_the_index(ff['dataframe'])
make_date_the_index(recession_frame)
# Merge the $ value frames into a new frame that includes a computed "AIAE" series.
# (We then drop the other series from the new frame.)
def compute_aiae_frame():
    frame = pd.concat([ff['dataframe'] for ff in fred_frames], axis=1)
    # Compute a new "AIAE" series
    frame['_equities'] = frame['NCBCEL'] + frame['FBCELLQ027S']
    frame['_borrowerLiabilities'] = frame['SLGTCMDODNS'] + frame['TCMILBSHNO'] + \
        frame['TCMILBSNNCB'] + frame['FGTCMDODNS'] + frame['WCMITCMFODNS']
    frame['AIAE'] = frame['_equities'] / (frame['_equities'] + frame['_borrowerLiabilities'])
    frame.drop(columns=[ff['id'] for ff in fred_frames], inplace=True)
    frame.drop(columns=['_equities', '_borrowerLiabilities'], inplace=True)
    return frame
aiae_frame = compute_aiae_frame()
aiae_series = aiae_frame["AIAE"]
first_date = aiae_series.index[0]
last_date = aiae_series.index[-1]
date_format = "%Y-%m-%d"
print(f'{aiae_series.size} quarters ({first_date.strftime(date_format)} to {last_date.strftime(date_format)})')
# + colab={"base_uri": "https://localhost:8080/", "height": 528} id="NafXokjwYzJc" outputId="b5df0d60-8517-43f0-8e43-2490aa37d303"
# Plot AIAE, correlated with SPX 10yr returns
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as mtick
fig, ax_aiae = plt.subplots(figsize=(12, 8)) # size in inches
ax_spx = ax_aiae.twinx() # Create a twin y-axis sharing the x-axis
# The approach used for keeping the two y-axes properly correlated is taken
# from Matplotlib's tech note "Different scales on the same axes":
# <https://matplotlib.org/stable/gallery/subplots_axes_and_figures/fahrenheit_celsius_scales.html>
# Formula for predicted SPX 10 year total return given an AIAE value.
# The equation derived from this scatter chart
# https://i2.wp.com/www.philosophicaleconomics.com/wp-content/uploads/2013/12/linearavg1.jpg
# is: spx returns = 0.335 - 0.678 * aiae
# Oddly, the AIAE / SPX correlation depicted in the essay's main chart is noticeably different:
# https://i0.wp.com/www.philosophicaleconomics.com/wp-content/uploads/2013/12/avginv11.jpg
spx_from_aiae = lambda aiae: 0.366 - 0.775 * aiae
# Closure function to be used as a callback
def update_spx_axis_according_to_aiae_axis(ax_aiae):
    y1, y2 = ax_aiae.get_ylim()
    ax_spx.set_ylim(spx_from_aiae(y1), spx_from_aiae(y2))
    ax_spx.figure.canvas.draw()
# Automatically update ylim of ax_spx when ylim of ax_aiae changes
ax_aiae.callbacks.connect("ylim_changed", update_spx_axis_according_to_aiae_axis)
ax_aiae.plot(aiae_frame)
# x-axis ticks for each year
ax_aiae.xaxis.set_major_locator(mdates.YearLocator())
ax_aiae.xaxis.set_major_formatter(mdates.DateFormatter("%y"))
# Format y-axes as percent
ax_aiae.yaxis.set_major_formatter(mtick.PercentFormatter(1.0, 0))
ax_spx.yaxis.set_major_formatter(mtick.PercentFormatter(1.0, 0))
# Control frequency of y-axis ticks (every 2% for AIAE and 1% for SPX)
ax_aiae.yaxis.set_major_locator(mtick.MultipleLocator(0.02))
ax_spx.yaxis.set_major_locator(mtick.MultipleLocator(0.01))
# Ensure plot line is tight against the y-axes
ax_aiae.margins(x=0)
# ... and the recession bars are tight against the top and bottom
ax_aiae.axis('tight')
# Include grid lines
ax_aiae.grid(color='black', alpha=0.3, linestyle='dashed', linewidth=0.5)
# Paint grey bars in the background of the plot to indicate recessions
def determine_recession_spans():
    # Returns list of list[dt, dt], each inner list being the span of a recession.
    spans = []
    current_span = None
    for index, row in recession_frame.iterrows():
        if row[recession_id] == 1:
            if current_span is None:
                current_span = [index, index]
                spans.append(current_span)
            else:
                current_span[-1] = index
        else:
            current_span = None
    return spans
interpolate_date = lambda dt: (dt - first_date) / (last_date - first_date)
# Maps list of `[date, date]` to list of `(interpolated date, interpolated date)`,
# where `interpolated date` is a value between 0.0 and 1.0 covering the x-axis.
interpolate_dates = lambda date_spans: \
    map(lambda span:
        (interpolate_date(span[0]), interpolate_date(span[1])), date_spans)
ymin, ymax = ax_aiae.get_ylim()
for span in interpolate_dates(determine_recession_spans()):
    ax_aiae.axhspan(ymin, ymax, span[0], span[1], facecolor='lightgrey')
# Title and axis labels
ax_aiae.set_title('AIAE and corresponding predicted SPX 10 year returns (annualised).\nGrey bars mark recessions.')
ax_aiae.set_ylabel('Average Investor Equity Allocation')
ax_spx.set_ylabel('Predicted Subsequent 10 Yr SPX Total Return')
ax_aiae.set_xlabel('Year')
plt.show()
| AIAE_to_SPX_total_returns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import lux
import pandas as pd
# Collecting basic usage statistics for Lux (For more information, see: https://tinyurl.com/logging-consent)
lux.logger = True # Remove this line if you do not want your interactions recorded
# -
# Now that we've covered the basics of Lux, you can explore a dataset that you're interested in with Lux. We've provided some starter datasets as possible starting points. Pick any of the following dataset to explore:
# [Census Income](https://archive.ics.uci.edu/ml/datasets/census+income): Dataset with 32561 records of adult census information, including education, marital status, income, etc.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/census.csv")
# ```
# [Airbnb](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data): Dataset containing information regarding 48895 Airbnb rental listings in New York City.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/airbnb_nyc.csv")
# ```
#
# [Instacart Purchase Orders](https://www.kaggle.com/c/instacart-market-basket-analysis/data): Sample dataset of 1000 product purchase orders from Instacart.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/instacart_sample.csv")
# ```
#
# [Customer Churn](https://www.kaggle.com/blastchar/telco-customer-churn): 7043 rows of customer information at a telephone company. The dataset can be used to predict customer Churn behavior.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/churn.csv")
# ```
#
# [Employee attrition](https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset): A fictional dataset of 1470 HR records of employees for studying employee attrition.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/employee.csv")
# ```
# [Spotify Songs](https://www.kaggle.com/yamaerenay/spotify-dataset-19212020-160k-tracks?select=data.csv): A dataset containing more than 160k soundtracks from Spotify and their descriptions.
#
# ```python
# df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/spotify.csv")
# ```
# +
# Start exploring your data with Lux!
# -
# Here are some ideas of things that you could try out:
# - printing and visualizing your dataframe
# - cleaning and transforming your dataframe with Pandas
# - inspecting recommendations provided by Lux
# - steering recommendations by setting your analysis intent for the dataframe
# - creating visualizations on-demand
# - generating and browsing through lists of visualizations
# - __Bonus:__ check out what the three buttons on the top right corner of the Lux widget does
#
# ... and more!
# ### Conclusion
#
# We hope that this tutorial has demonstrated how Lux supports fast and easy experimentation with data through visualizations seamlessly in Jupyter notebooks. If you are interested in using Lux, we would love to hear from you. Any feedback, suggestions, and contributions for improving Lux are welcome. Please help us improve Lux and these tutorials by filling out this quick survey [here](https://tinyurl.com/lux-rc20survey).
#
# Here are some additional resources as next-steps to continue exploring:
#
# - Visit [ReadTheDoc](https://lux-api.readthedocs.io/en/latest/) for more detailed documentation.
# - Check out this longer [notebook tutorial series](https://github.com/lux-org/lux-binder/tree/master/tutorial) on how to use Lux.
# - Join the [Lux Slack channel](http://lux-project.slack.com/) for support and discussion.
# - Report any bugs, issues, or requests through [Github Issues](https://github.com/lux-org/lux/issues).
| exercise/4-Data-Playground.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Day 2 Assignment
# ##### Lists and its default functions:
lst=[1,2,3,4]
lst.append(5) #append method
lst
lst.append(2)
lst
lst.count(2) # count: counts the number of occurrences of an object
lst1=[2,3,4,5,6]
lst.extend(lst1) # extend: used to extend the list by appending another list
lst
lst.count(2)
len(lst) # len: used to determine the length of the list
max(lst) # max: used to determine the maximum in the list
min(lst) # min: used to determine the minimum in the list
lst.index(5) # index: used to determine the lowest index of the element in the list
lst.remove(3) # remove: used to remove the element
lst
lst.pop() # pop: used to pop the last element out of the list
lst.sort() # sort: used to sort the list elements in ascending order
lst
lst.reverse() # reverse: used to reverse the list elements
lst
# #### Tuple and its default functions:
tup=('a','b',1,2,3)
tup
tup1=(2,4,3,6,8,1,9)
max(tup1) # max: used to determine the maximum in the tuple
min(tup1) # min: used to determine the minimum in the tuple
# #### Dictionary and its default functions
dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First',"python":1,"numpy":2} # note: this name shadows the built-in dict
dict
len(dict) #len: used to determine the length of the dictionary
str(dict) # converts the dictionary into a printable string form
print(type(dict)) #print the type of the object
dict.keys() # keys(): returns the keys of the dictionary
dict.items() #items(): returns the (key, value) pairs of the dictionary
dict.values() #values(): returns the values of the dictionary
dict["django"]=3 #adds a new key/value pair to the dictionary
dict
dict.pop('numpy') #pop: used to pop the key/value pair and return the value
dict.copy() #returns a shallow copy of the dictionary
dict.clear() #used to clear all the key/value pairs from the dictionary
dict
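# `get()` is a safer alternative to `[]` lookup because it returns a default value instead of raising `KeyError` for a missing key. A quick sketch (the `config` dictionary here is just an illustration):

```python
config = {"python": 1, "numpy": 2}
print(config.get("python"))      # existing key -> its value
print(config.get("django"))      # missing key -> None instead of KeyError
print(config.get("django", 0))   # missing key with an explicit default
```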
# #### Strings and their default functions
str="<NAME>" # note: this name shadows the built-in str
str
str.split(" ") #split: used to split the string into a list of substrings
str[1] #indexing: used to access the character at the given index
str[1:] # slicing: used to access all the characters from the given index onward
len(str) # len: used to calculate the length of the string
str[:15]
str[:15]+' is student' # it is a method to add new part in string
# +
str1="abhishek"
str1.capitalize() # used to capitalize the first character of the string
# -
str.count("h",1,15) # returns the number of occurrences of the character within the given index range
str2="1234hkh"
str2.isalpha() #returns True if the string has at least 1 character and all characters are alphabetic, False otherwise
str.islower() # checks whether all characters of the string are lowercase
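# `split()` and `join()` are inverses of each other; a common pattern is to split a string, transform the pieces, and re-join them:

```python
sentence = "data science with python"
words = sentence.split(" ")          # break the string into a list of words
print(words)
print("-".join(w.capitalize() for w in words))  # transform and re-join
```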
# #### Sets and their default functions
set={1,2,3,4,5} # note: this name shadows the built-in set
set
set.add(6) # used to add new element in the set
set
fruits = {"apple", "banana", "cherry"}
fruits.clear() #used to clear all the elements of the set
fruits
fruits = {"apple", "banana", "cherry"}
x = fruits.copy() # returns a shallow copy of the set
x
x = {"apple", "banana", "cherry"}
y = {"google", "microsoft", "apple"}
z = x.difference(y) #Returns a set containing the difference between two or more sets
z
k = x.intersection(y) #Returns a set, that is the intersection of two other sets
k
x = {"a", "b", "c"}
y = {"f", "e", "d", "c", "b", "a"}
z = x.issubset(y) # Returns whether another set contains this set or not
z
# +
x = {"f", "e", "d", "c", "b", "a"}
y = {"a", "b", "c"}
z = x.issuperset(y) #Returns whether this set contains another set or not
print(z)
# +
fruits = {"apple", "banana", "cherry"}
fruits.pop() #removes and returns an arbitrary element from the set
print(fruits)
# -
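# The set methods above also have operator shorthands: `-` for difference, `&` for intersection, `|` for union and `<=` for the subset test. A quick sketch:

```python
a = {"a", "b", "c"}
b = {"b", "c", "d"}
print(a - b)        # difference, same as a.difference(b)
print(a & b)        # intersection, same as a.intersection(b)
print(a | b)        # union of both sets
print({"b"} <= a)   # subset test, same as {"b"}.issubset(a)
```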
| Assginment-1 Day-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
cities = ['Shanghai','Beijing','Istanbul']
population = [24183300, 20794100,15030000]
citypop = pd.Series(population, cities)
citypop
citypop = pd.Series(data = population, index =cities)
citypop
citypop.sum()
citypop.mean()
citypop.index
citypop.keys()
citypop.values
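# A Series also supports element-wise arithmetic and label-based lookup; a short sketch (the Series is re-created here so the cell stands alone):

```python
import pandas as pd

cities = ['Shanghai', 'Beijing', 'Istanbul']
population = [24183300, 20794100, 15030000]
citypop = pd.Series(population, index=cities)

millions = citypop / 1_000_000   # element-wise arithmetic keeps the index
print(millions['Beijing'])       # label-based lookup
print(citypop.idxmax())          # index label of the largest value
```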
| Data_Series/DataSeries3_lab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.5
# language: python
# name: python3
# ---
# + [markdown] id="-i4AojK1pwYZ" colab_type="text"
# # Modern Data Science
# **(Module 11: Data Analytics (IV))**
#
# ---
# - Materials in this module include resources collected from various open-source online repositories.
# - You are free to use, change and distribute this package.
#
# Prepared by and for
# **Student Members** |
# 2006-2018 [TULIP Lab](http://www.tulip.org.au), Australia
#
# ---
#
#
# ## Session 11A - Case Study: Prediction
#
#
# The purpose of this session is to demonstrate linear regression and several classification models.
#
#
# ### Content
#
# ### Part 1 Linear Regression
#
# 1.1 [Linear Regression Package](#lrp)
#
# 1.2 [Evaluation](#eva)
#
#
# ### Part 2 Classification
#
# 2.1 [Skulls Dataset](#data)
#
# 2.2 [Data Preprocessing](#datapre)
#
# 2.3 [KNN](#knn)
#
# 2.4 [Decision Tree](#dt)
#
# 2.5 [Random Forest](#rf)
#
#
# ---
#
#
# + [markdown] id="z0iFk3hNpwYc" colab_type="text"
#
# ## <span style="color:#0b486b">1. Linear Regression</span>
#
# <a id = "lrp"></a>
# ### <span style="color:#0b486b">1.1 Linear Regression Package</span>
#
# + [markdown] id="BIqKUQSSpwYe" colab_type="text"
# We will learn how to use sklearn package to do linear regression
# + id="UQjMiLiepwYg" colab_type="code" colab={}
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="4qBtmk-KpwYl" colab_type="text"
# Now create an instance of the diabetes data set by using the <b>load_diabetes</b> function as a variable called <b>diabetes</b>.
# + id="k3KSfZLCpwYn" colab_type="code" colab={}
diabetes = load_diabetes()
# + [markdown] id="BYF8SNszpwYt" colab_type="text"
# We will work with one feature only.
# + id="nnrvHNqwpwYu" colab_type="code" colab={}
diabetes_X = diabetes.data[:, None, 2]
# + [markdown] id="a0TMlGDFpwYx" colab_type="text"
# Now create an instance of the LinearRegression called LinReg.
# + id="XLFqLTmDpwYy" colab_type="code" colab={}
LinReg = LinearRegression()
# + [markdown] id="6lpmBpYEpwY1" colab_type="text"
# Now to perform <b>train/test split</b> we have to split the <b>X</b> and <b>y</b> into two different sets: The <b>training</b> and <b>testing</b> set. Luckily there is a sklearn function for just that!
#
# Import the <b>train_test_split</b> function from <b>sklearn.model_selection</b>
# + id="rGEt8QKbpwY1" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
# + [markdown] id="IhqKH_5XpwY4" colab_type="text"
# Now <b>train_test_split</b> will return <b>4</b> different parameters. We will name this <b>X_trainset</b>, <b>X_testset</b>, <b>y_trainset</b>, <b>y_testset</b>.
#
# Now let's use <b>diabetes_X</b> as the <b>Feature Matrix</b> and <b>diabetes.target</b> as the <b>response vector</b> and split it up using <b>train_test_split</b> function we imported earlier (<i>If you haven't, please import it</i>). The <b>train_test_split</b> function should have <b>test_size = 0.3</b> and a <b>random state = 7</b>.
#
#
# The <b>train_test_split</b> will need the parameters <b>X</b>, <b>y</b>, <b>test_size=0.3</b>, and <b>random_state=7</b>. The <b>X</b> and <b>y</b> are the arrays required before the split, the <b>test_size</b> represents the ratio of the testing dataset, and the <b>random_state</b> ensures we obtain the same splits.
# + id="ZNTQICAPpwY5" colab_type="code" colab={}
X_trainset, X_testset, y_trainset, y_testset = train_test_split(diabetes_X, diabetes.target, test_size=0.3, random_state=7)
# + [markdown] id="fm-ghNAapwY9" colab_type="text"
# Train the <b>LinReg</b> model using <b>X_trainset</b> and <b>y_trainset</b>
# + id="ACWvRh7XpwY-" colab_type="code" colab={}
LinReg.fit(X_trainset, y_trainset)
# + [markdown] id="gHttUR9dpwZA" colab_type="text"
# Now let's <i>plot</i> the graph.
# <p> Use plt's <b>scatter</b> function to plot all the datapoints of <b>X_testset</b> and <b>y_testset</b> and color it <b>black</b> </p>
# <p> Use plt's <b>plot</b> function to plot the line of best fit with <b>X_testset</b> and <b>LinReg.predict(X_testset)</b>. Color it <b>blue</b> with a <b>linewidth</b> of <b>3</b>. </p> <br>
# <b>Note</b>: Please ignore the FutureWarning.
# + id="hxecorBLpwZA" colab_type="code" colab={}
plt.scatter(X_testset, y_testset, color='black')
plt.plot(X_testset, LinReg.predict(X_testset), color='blue', linewidth=3)
# + [markdown] id="pqeV1JuqpwZD" colab_type="text"
#
# <a id = "eva"></a>
#
#
# ### <span style="color:#0b486b">1.2 Evaluation</span>
#
#
#
# + [markdown] id="CnAn21iGpwZD" colab_type="text"
# In this part, you will learn about the different evaluation models and metrics. You will be able to identify the strengths and weaknesses of each model and how underfitting or overfitting arises — the Bias-Variance trade-off.
# + id="0Cj2ch2upwZE" colab_type="code" colab={}
import numpy as np
# + [markdown] id="4CAJ1WhIpwZF" colab_type="text"
# <br><b> Here's a list of useful functions: </b><br>
# mean -> np.mean()<br>
# exponent -> **<br>
# absolute value -> abs()
#
#
# We use three evaluation metrics:
#
# $$ MAE = \frac{\sum_{i=1}^n|y_i-\hat y_i|}{n} $$
#
# $$ MSE = \frac{\sum_{i=1}^n (y_i-\hat y_i)^2}{n} $$
#
# $$ RMSE = \sqrt{\frac{\sum_{i=1}^n (y_i-\hat y_i)^2}{n}} $$
#
#
#
#
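# As a sketch, the three metrics above can also be wrapped in small reusable functions (the names `mae`, `mse` and `rmse` are our own, not part of sklearn):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the errors
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def mse(y_true, y_pred):
    # Mean Squared Error: average of the squared errors
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def rmse(y_true, y_pred):
    # Root Mean Squared Error: square root of the MSE
    return mse(y_true, y_pred) ** 0.5
```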
# + id="zePNX0JQpwZG" colab_type="code" colab={}
print(np.mean(abs(LinReg.predict(X_testset) - y_testset)))
# + id="lq-KlU0spwZJ" colab_type="code" colab={}
print(np.mean((LinReg.predict(X_testset) - y_testset) ** 2) )
# + id="SFfcNV0IpwZK" colab_type="code" colab={}
print(np.mean((LinReg.predict(X_testset) - y_testset) ** 2) ** (0.5) )
# + [markdown] id="FdoFJxB2pwZM" colab_type="text"
# ---
# ## <span style="color:#0b486b">2. Classification</span>
#
# <a id = "cls"></a>
#
#
# + [markdown] id="jtVF6uQGpwZO" colab_type="text"
# <a id = "data"></a>
#
# ### <span style="color:#0b486b">2.1 Skulls dataset</span>
#
# In this section, we will take a closer look at a data set.
#
# Everything starts off with how the data is stored. We will be working with .csv files, or comma separated value files. As the name implies, each attribute (or column) in the data is separated by commas.
#
# Next, a little information about the dataset. We are using a dataset called skulls.csv, which contains the measurements made on Egyptian skulls from five epochs.
#
# #### The attributes of the data are as follows:
#
#
# <b>epoch</b> - The epoch the skull was assigned to, a factor with levels c4000BC, c3300BC, c1850BC, c200BC, and cAD150, where the years are only given approximately.
#
# <b>mb</b> - Maximal Breadth of the skull.
#
# <b>bh</b> - Basibregmatic Height of the skull.
#
# <b>bl</b> - Basialveolar Length of the skull.
#
# <b>nh</b> - Nasal Heights of the skull.
#
# #### Importing Libraries
#
# Before we begin, we need to import some libraries, as they have useful functions that will be used later on.<br>
#
# If you look at the imports below, you will notice the return of **numpy**! Remember that numpy is homogeneous multidimensional array (ndarray).
# + id="mvyJcJMOpwZO" colab_type="code" colab={}
import numpy as np
import pandas
# + [markdown] id="IiEjESv9pwZQ" colab_type="text"
# ---
# We need the **pandas** library for a function to read .csv files
# <ul>
# <li> <b>pandas.read_csv</b> - Reads data into DataFrame </li>
# <li> The read_csv function takes in <i>2 parameters</i>: </li>
# <ul>
# <li> The .csv file as the first parameter </li>
# <li> The delimiter as the second parameter </li>
# </ul>
# </ul>
#
# -----------------------------
# Save the "<b> skulls.csv </b>" data file into a variable called <b> my_data </b>
# + id="h4PJ22qjqQdn" colab_type="code" colab={}
# !pip install wget
# + id="PnpsjhRPpwZQ" colab_type="code" colab={}
import wget
link_to_data = 'https://github.com/tuliplab/mds/raw/master/Jupyter/data/skulls.csv'
DataSet = wget.download(link_to_data)
# + id="y6Yk5s3QpwZT" colab_type="code" colab={}
my_data = pandas.read_csv("skulls.csv", delimiter=",")
my_data.describe()
# + [markdown] id="PtaXUcUTpwZU" colab_type="text"
# Print out the data in <b> my_data </b>
# + id="yl2l8yWbpwZV" colab_type="code" colab={}
print(my_data)
# + [markdown] id="7DYhaFnzpwZW" colab_type="text"
# Check the type of <b> my_data </b>
# + id="oF7XZrZPpwZX" colab_type="code" colab={}
print (type(my_data))
# + [markdown] id="R-rVWMGwpwZZ" colab_type="text"
# -----------
# There are various functions that the **pandas** library has to look at the data
# <ul>
# <li> <font color = "red"> [DataFrame Data].columns </font> - Displays the Header of the Data </li>
# <ul>
# <li> Type: pandas.indexes.base.Index </li>
# </ul>
# </ul>
#
# <ul>
# <li> <font color = "red"> [DataFrame Data].values </font> - Displays the values of the data (without headers); the older <font color = "red">.as_matrix()</font> is deprecated in recent pandas </li>
# <ul>
# <li> Type: numpy.ndarray </li>
# </ul>
# </ul>
#
# <ul>
# <li> <font color = "red"> [DataFrame Data].shape </font> - Displays the dimensions of the data (rows x columns) </li>
# <ul>
# <li> Type: tuple </li>
# </ul>
# </ul>
#
# ----------
# Using the <b> my_data </b> variable containing the DataFrame data, retrieve the <b> header </b> data, data <b> values </b>, and <b> shape </b> of the data.
# + id="e7Wmx7wCpwZa" colab_type="code" colab={}
print( my_data.columns)
# + id="u86qD8VIpwZb" colab_type="code" colab={}
print (my_data.values)
# + id="U42JYSqypwZd" colab_type="code" colab={}
print (my_data.shape)
# + [markdown] id="cCGeHJoPpwZf" colab_type="text"
# <a id = "datapre"></a>
#
# ### <span style="color:#0b486b">2.2 Data Preprocessing</span>
#
# When we train a model, the model requires two inputs, X and y
# <ul>
# <li> X: Feature Matrix, or array that contains the data. </li>
# <li> y: Response Vector, or 1-D array that contains the classification categories </li>
# </ul>
#
#
#
# ------------
# There are some problems with the data in my_data:
# <ul>
# <li> There is a header on the data (Unnamed: 0 epoch mb bh bl nh) </li>
# <li> The data needs to be in numpy.ndarray format in order to use it in the machine learning model </li>
# <li> There is non-numeric data within the dataset </li>
# <li> There are row numbers associated with each row that affect the model </li>
# </ul>
#
# To resolve these problems, I have created a function that fixes these for us:
# <b> removeColumns(pandasArray, column) </b>
#
# This function produces one output and requires two inputs.
# <ul>
# <li> 1st Input: A pandas array. The pandas array we have been using is my_data </li>
# <li> 2nd Input: Any number of integer values (order doesn't matter) that represent the columns that we want to remove. (Look at the data again and find which column contains the non-numeric values). We also want to remove the first column because that only contains the row number, which is irrelevant to our analysis.</li>
# <ul>
# <li> Note: Remember that Python is zero-indexed, therefore the first column would be 0. </li>
# </ul>
# </ul>
#
#
# + id="KagMLcl_pwZg" colab_type="code" colab={}
# Remove the column containing the target name since it doesn't contain numeric values.
# Also remove the column that contains the row number
# axis=1 means we are removing columns instead of rows.
# Function takes in a pandas array and column numbers and returns a numpy array without
# the stated columns
def removeColumns(pandasArray, *column):
    return pandasArray.drop(pandasArray.columns[list(column)], axis=1).values
# + [markdown] id="X2SXWwMVpwZh" colab_type="text"
# ---------
# Using the function, store the values from the DataFrame data into a variable called new_data.
# + id="2ceENKvWpwZh" colab_type="code" colab={}
new_data = removeColumns(my_data, 0, 1)
# + [markdown] id="38-9PpU3pwZj" colab_type="text"
# Print out the data in <b> new_data </b>
# + id="uCppgx4CpwZk" colab_type="code" colab={}
print(new_data)
# + [markdown] id="8kObOrRapwZl" colab_type="text"
# -------
# Now, we have one half of the required data to fit a model, which is X or new_data
#
# Next, we need to get the response vector y. Since we cannot use .target and .target_names, I have created a function that will do this for us.
#
# <b> targetAndtargetNames(numpyArray, targetColumnIndex) </b>
#
# This function produces two outputs, and requires two inputs.
# <ul>
# <li> <font size = 3.5><b><i>1st Input</i></b></font>: A numpy array. The numpy array you will use is my_data.values</li>
# <ul>
# <li> Note: DO NOT USE <b> new_data </b> here. We need the original .csv data file without the headers </li>
# </ul>
# </ul>
# <ul>
# <li> <font size = 3.5><b><i>2nd Input</i></b></font>: An integer value that represents the target column . (Look at the data again and find which column contains the non-numeric values. This is the target column)</li>
# <ul>
# <li> Note: Remember that Python is zero-indexed, therefore the first column would be 0. </li>
# </ul>
# </ul>
#
# <ul>
# <li> <font size = 3.5><b><i>1st Output</i></b></font>: The response vector (target) </li>
# <li> <font size = 3.5><b><i>2nd Output</i></b></font>: The target names (target_names) </li>
# </ul>
#
#
#
# + id="V7TbMorkpwZm" colab_type="code" colab={}
def targetAndtargetNames(numpyArray, targetColumnIndex):
    # Note: this helper reads the global my_data rather than its numpyArray parameter
    target_dict = dict()
target = list()
target_names = list()
count = -1
for i in range(len(my_data.values)):
if my_data.values[i][targetColumnIndex] not in target_dict:
count += 1
target_dict[my_data.values[i][targetColumnIndex]] = count
target.append(target_dict[my_data.values[i][targetColumnIndex]])
# Since a dictionary is not ordered, we need to order it and output it to a list so the
# target names will match the target.
for targetName in sorted(target_dict, key=target_dict.get):
target_names.append(targetName)
return np.asarray(target), target_names
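# As an aside, the same integer encoding can be obtained in one line with pandas' `factorize`, which assigns codes in order of first appearance (a sketch with made-up epoch labels):

```python
import pandas as pd

labels = pd.Series(["c4000BC", "c4000BC", "c3300BC", "c1850BC", "c3300BC"])
codes, names = pd.factorize(labels)  # integer codes + unique names in order seen
print(list(codes))
print(list(names))
```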
# + [markdown] id="34YrMEkFpwZn" colab_type="text"
# Using the targetAndtargetNames function, create two variables called <b>y</b> and <b>targetNames</b>
# + id="qw85enV6pwZo" colab_type="code" colab={}
y, targetNames = targetAndtargetNames(my_data, 1)
# + [markdown] id="qRzZ2-h5pwZp" colab_type="text"
# Print out the <b>y</b> and <b>targetNames</b> variables you created.
# + id="rkPyVhsXpwZq" colab_type="code" colab={}
print(y)
print(targetNames)
# + [markdown] id="LlNngm_EpwZr" colab_type="text"
# Now that we have the two required variables to fit the data, a sneak peak at how to fit data will be shown in the cell below.
#
#
# + [markdown] id="v36tF3mPpwZs" colab_type="text"
# <a id = "knn"></a>
#
# ### <span style="color:#0b486b">2.3 KNN</span>
#
# **K-Nearest Neighbors** is an algorithm for supervised learning, where the model is 'trained' on data points together with their classifications. Once a point is to be predicted, the algorithm takes into account the 'K' nearest points to it to determine its classification.
#
# #### Here's a visualization of the K-Nearest Neighbors algorithm.
#
# <img src = "https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/KNN.png">
#
# In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A.
#
# In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point.
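# To make the idea concrete, here is a minimal from-scratch sketch of the K-Nearest Neighbors prediction step (`knn_predict` is our own helper, not sklearn's API):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    # Euclidean distance from the query point to every training point
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    # Labels of the k nearest training points
    nearest = np.asarray(y_train)[np.argsort(dists)[:k]]
    # Majority vote among the k neighbours
    return Counter(nearest).most_common(1)[0][0]

# Two small clusters: the query near the first cluster gets its label
print(knn_predict([[0, 0], [0, 1], [5, 5], [5, 6]], [0, 0, 1, 1], [0.2, 0.2], 3))  # predicts class 0
```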
# + id="IYW20mUIpwZs" colab_type="code" colab={}
# X = removeColumns(my_data, 0, 1)
# y = target(my_data, 1)
X = new_data
print( X.shape)
print (y.shape)
# + [markdown] id="xBbGlmRGpwZt" colab_type="text"
# Now to perform <b>train/test split</b> we have to split the <b>X</b> and <b>y</b> into two different sets: The <b>training</b> and <b>testing</b> set. Luckily there is a sklearn function for just that!
#
# Import the <b>train_test_split</b> from <b>sklearn.model_selection</b>
# + [markdown] id="flJCseaLpwZv" colab_type="text"
# Now <b>train_test_split</b> will return <b>4</b> different parameters. We will name this <b>X_trainset</b>, <b>X_testset</b>, <b>y_trainset</b>, <b>y_testset</b>. The <b>train_test_split</b> will need the parameters <b>X</b>, <b>y</b>, <b>test_size=0.3</b>, and <b>random_state=7</b>. The <b>X</b> and <b>y</b> are the arrays required before the split, the <b>test_size</b> represents the ratio of the testing dataset, and the <b>random_state</b> ensures we obtain the same splits.
# + id="bApOmENjpwZw" colab_type="code" colab={}
X_trainset, X_testset, y_trainset, y_testset = train_test_split(X, y, test_size=0.3, random_state=7)
# + [markdown] id="TlmgP4NOpwZx" colab_type="text"
# Now let's print the shape of the training sets to see if they match.
# + id="Xy0keZp7pwZx" colab_type="code" colab={}
print (X_trainset.shape)
print (y_trainset.shape)
# + [markdown] id="kElVgrBJpwZz" colab_type="text"
# Let's check the same with the testing sets! They should both match up!
# + id="7lf6WJWqpwZ0" colab_type="code" colab={}
print (X_testset.shape)
print (y_testset.shape)
# + [markdown] id="nA3KaK6JpwZ3" colab_type="text"
# Now similarly with the last lab, let's create declarations of KNeighborsClassifier. Except we will create 3 different ones:
#
# - neigh -> n_neighbors = 1
# - neigh23 -> n_neighbors = 23
# - neigh90 -> n_neighbors = 90
# + id="0bK9cmNDpwZ3" colab_type="code" colab={}
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors = 1)
neigh23 = KNeighborsClassifier(n_neighbors = 23)
neigh90 = KNeighborsClassifier(n_neighbors = 90)
# + [markdown] id="78jLDZiSpwZ5" colab_type="text"
# Now we will fit each instance of <b>KNeighborsClassifier</b> with the <b>X_trainset</b> and <b>y_trainset</b>
# + id="O7024BPRpwZ5" colab_type="code" colab={}
neigh.fit(X_trainset, y_trainset)
neigh23.fit(X_trainset, y_trainset)
neigh90.fit(X_trainset, y_trainset)
# + [markdown] id="o1W6I9IzpwZ7" colab_type="text"
# Now you are able to predict with <b>multiple</b> datapoints. We can do this by passing the <b>X_testset</b>, which contains multiple test points, into the <b>predict</b> function of <b>KNeighborsClassifier</b>.
#
# Let's pass <b>X_testset</b> into the <b>predict</b> function of each instance of <b>KNeighborsClassifier</b>, storing the returned values in <b>pred</b>, <b>pred23</b>, and <b>pred90</b> (corresponding to each of their names)
#
#
# + id="TCpl_RdVpwZ7" colab_type="code" colab={}
pred = neigh.predict(X_testset)
pred23 = neigh23.predict(X_testset)
pred90 = neigh90.predict(X_testset)
# + [markdown] id="H-48wO51pwZ8" colab_type="text"
# Awesome! Now let's compute neigh's <b>prediction accuracy</b>. We can do this by using the <b>metrics.accuracy_score</b> function
# + id="5jDOiBFcpwZ9" colab_type="code" colab={}
from sklearn import metrics
print("Neigh's Accuracy: ", metrics.accuracy_score(y_testset, pred))
# + [markdown] id="1fuWfOMzpwZ-" colab_type="text"
# Interesting! Let's do the same for the other instances of KNeighborsClassifier.
# + id="7DTxvsFJpwZ_" colab_type="code" colab={}
print("Neigh23's Accuracy: ", metrics.accuracy_score(y_testset, pred23))
print("Neigh90's Accuracy: ", metrics.accuracy_score(y_testset, pred90))
# + [markdown] id="NCHX3-aUpwaA" colab_type="text"
# As shown, the accuracy of <b>neigh23</b> is the highest. When <b>n_neighbors = 1</b>, the model was <b>overfit</b> to the training data (<i>too specific</i>) and when <b>n_neighbors = 90</b>, the model was <b>underfit</b> (<i>too generalized</i>). In comparison, <b>n_neighbors = 23</b> had a <b>good balance</b> between <b>Bias</b> and <b>Variance</b>, creating a generalized model that neither <b>underfit</b> the data nor <b>overfit</b> it.
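# Rather than guessing k, a common pattern is to sweep over candidate values and keep the one with the best test accuracy. A sketch on synthetic data (not the skulls dataset):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

rng = np.random.RandomState(7)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 3])  # two overlapping blobs
y = np.array([0] * 50 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

best_k, best_acc = None, 0.0
for k in range(1, 31, 2):  # odd k avoids ties in a binary problem
    acc = metrics.accuracy_score(
        y_te, KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).predict(X_te))
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, best_acc)
```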
# + [markdown] id="0dNJ8QynpwaA" colab_type="text"
# <a id = "dt"></a>
#
# ### <span style="color:#0b486b">2.4 Decision Tree</span>
#
# In this section, you will learn <b>decision trees</b> and <b>random forests</b>.
# + [markdown] id="kbe5NjkPpwaB" colab_type="text"
# The <b> getFeatureNames </b> is a function made to get the attribute names for specific columns
#
# This function produces one output and requires two inputs:
# <ul>
# <li> <b>1st Input</b>: A pandas array. The pandas array we have been using is <b>my_data</b>. </li>
# <li> <b>2nd Input</b>: Any number of integer values (order doesn't matter) that represent the columns that we want to include. In our case we want <b>columns 2-5</b>. </li>
# <ul> <li> Note: Remember that Python is zero-indexed, therefore the first column would be 0. </li> </ul>
# + id="UcYuPFz1pwaB" colab_type="code" colab={}
def getFeatureNames(pandasArray, *column):
actualColumns = list()
allColumns = list(pandasArray.columns.values)
for i in sorted(column):
actualColumns.append(allColumns[i])
return actualColumns
# + [markdown] id="SP_R1KwkpwaD" colab_type="text"
# Now we prepare the data for decision tree construction.
#
# + id="aTXMkA_MpwaD" colab_type="code" colab={}
#X = removeColumns(my_data, 0, 1)
#y, targetNames = targetAndtargetNames(my_data, 1)
featureNames = getFeatureNames(my_data, 2,3,4,5)
# + [markdown] id="JhLtmvxxpwaE" colab_type="text"
# Print out <b>y</b>, <b>targetNames</b>, and <b>featureNames</b> to use in the next example. Remember that the numbers correspond to the names, 0 being the first name,1 being the second name, and so on.
# + id="9vlC1s9opwaE" colab_type="code" colab={}
print( y )
# + id="P_IpmY83pwaG" colab_type="code" colab={}
print (targetNames )
# + id="nyWa1-WCpwaH" colab_type="code" colab={}
print (featureNames)
# + [markdown] id="CJ2qKZ8wpwaI" colab_type="text"
# We will first create an instance of the <b>DecisionTreeClassifier</b> called <b>skullsTree</b>.<br>
# Inside of the classifier, specify <i> criterion="entropy" </i> so we can see the information gain of each node.
# + id="tLNvx-fHpwaI" colab_type="code" colab={}
from sklearn.tree import DecisionTreeClassifier
# + id="rmSkmhO3pwaJ" colab_type="code" colab={}
skullsTree = DecisionTreeClassifier(criterion="entropy")
# + id="xYP21CVHpwaK" colab_type="code" colab={}
skullsTree.fit(X_trainset,y_trainset)
# + [markdown] id="ORB1JXP9pwaL" colab_type="text"
# Let's make some <b>predictions</b> on the testing dataset and store it into a variable called <b>predTree</b>.
# + id="OR0mZc32pwaL" colab_type="code" colab={}
predTree = skullsTree.predict(X_testset)
# + [markdown] id="-6p-YRqWpwaN" colab_type="text"
# You can print out <b>predTree</b> and <b>y_testset</b> if you want to visually compare the prediction to the actual values.
# + id="rB1gNzZdpwaN" colab_type="code" colab={}
print (predTree)
print (y_testset)
# + [markdown] id="wJOlvj4TpwaO" colab_type="text"
# Next, let's import metrics from sklearn and check the accuracy of our model.
# + id="mYfTbriBpwaO" colab_type="code" colab={}
from sklearn import metrics
print("Decision Tree's Accuracy: ", metrics.accuracy_score(y_testset, predTree))
# + [markdown] id="K3HNra-_pwaQ" colab_type="text"
# Now we can visualize the tree constructed.
#
# However, it should be noted that the following code may not work in all Python environments. You can try packages like <b>pydot</b>, <b>pydot2</b>, <b>pydotplus</b>, etc., and see which one works on your platform.
# + id="Pd9cajumpwaR" colab_type="code" colab={}
# !pip install pydotplus
# #!pip install pydot2
# #!pip install pyparsing==2.2.0
# !pip install pydot
# + id="8IkZz2HApwaR" colab_type="code" colab={}
# !conda install scikit-learn
# + id="r_dyLINUpwaS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 183} outputId="9ecd2194-69ca-4af2-c6d2-33502da0be8f" executionInfo={"status": "ok", "timestamp": 1552205595309, "user_tz": -660, "elapsed": 15789, "user": {"displayName": "Yale", "photoUrl": "https://lh4.googleusercontent.com/-5_QILC8qmus/AAAAAAAAAAI/AAAAAAAAAmw/SHwGbrUo4gU/s64/photo.jpg", "userId": "04458048662130211781"}}
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn
from sklearn import tree
import pydot
import pydotplus
import pandas as pd
dot_data = StringIO()
tree.export_graphviz(skullsTree, out_file=dot_data,
feature_names=featureNames,
class_names=targetNames,
filled=True, rounded=True,
special_characters=True,
leaves_parallel=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# + [markdown] id="GNsW8mChpwaT" colab_type="text"
# <a id = "rf"></a>
#
# ### <span style="color:#0b486b">2.5 Random Forest</span>
#
# Import the <b>RandomForestClassifier</b> class from <b>sklearn.ensemble</b>
# + id="eSRf2S5_pwaU" colab_type="code" colab={}
from sklearn.ensemble import RandomForestClassifier
# + [markdown] id="vTStwdE2pwaV" colab_type="text"
# Create an instance of the <b>RandomForestClassifier()</b> called <b>skullsForest</b>, where the forest has <b>10 decision tree estimators</b> (<i>n_estimators=10</i>) and the <b>criterion is entropy</b> (<i>criterion="entropy"</i>)
# + id="48Trpi_vpwaV" colab_type="code" colab={}
skullsForest=RandomForestClassifier(n_estimators=10, criterion="entropy")
# + [markdown] id="QxTt33FUpwaa" colab_type="text"
# Let's use the same <b>X_trainset</b>, <b>y_trainset</b> datasets that we made when dealing with the <b>Decision Trees</b> above to fit <b>skullsForest</b>.
# <br> <br>
# <b>Note</b>: Make sure you have ran through the Decision Trees section.
# + id="RuKjrhOlpwab" colab_type="code" colab={}
skullsForest.fit(X_trainset, y_trainset)
# + [markdown] id="Y80fIr4npwac" colab_type="text"
# Let's now create a variable called <b>predForest</b> using a predict on <b>X_testset</b> with <b>skullsForest</b>.
# + id="zej456jcpwac" colab_type="code" colab={}
predForest = skullsForest.predict(X_testset)
# + [markdown] id="nU46RlL2pwad" colab_type="text"
# You can print out <b>predForest</b> and <b>y_testset</b> if you want to visually compare the prediction to the actual values.
# + id="GMTpeFFopwad" colab_type="code" colab={}
print (predForest )
print (y_testset)
# + [markdown] id="VlrBeSGMpwaf" colab_type="text"
# Let's check the accuracy of our model. <br>
#
# Note: Make sure you have metrics imported from sklearn
# + id="uHsNgp8ppwaf" colab_type="code" colab={}
print("Random Forest's Accuracy: ", metrics.accuracy_score(y_testset, predForest))
# + [markdown] id="DCuY384Qpwah" colab_type="text"
# We can also see what trees are in our <b> skullsForest </b> variable by using the <b> .estimators_ </b> attribute. This attribute is indexable, so we can look at any individual tree we want.
# + id="Q9XKXv4apwah" colab_type="code" colab={}
print(skullsForest.estimators_)
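# Since `.estimators_` is a plain list, we can, for example, score each individual tree against a test set and compare it with the whole ensemble. A sketch on synthetic data (not the skulls dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

rng = np.random.RandomState(7)
X = np.vstack([rng.randn(60, 4), rng.randn(60, 4) + 2])
y = np.array([0] * 60 + [1] * 60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

forest = RandomForestClassifier(n_estimators=10, criterion="entropy",
                                random_state=7).fit(X_tr, y_tr)
# Accuracy of each individual tree versus the ensemble as a whole
tree_scores = [metrics.accuracy_score(y_te, t.predict(X_te))
               for t in forest.estimators_]
print(min(tree_scores), max(tree_scores))
print(metrics.accuracy_score(y_te, forest.predict(X_te)))
```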
# + [markdown] id="RcTYfREhpwai" colab_type="text"
# You can choose to view any tree by using the code below. Replace the <i>"&"</i> in <b>skullsForest[&]</b> with the tree you want to see.
#
# The following block may not work in your Python environment.
# + id="Q34I9zl5pwai" colab_type="code" colab={}
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn
import pydot
dot_data = StringIO()
#Replace the '&' below with the tree number
tree.export_graphviz(skullsForest[&], out_file=dot_data,
feature_names=featureNames,
class_names=targetNames,
filled=True, rounded=True,
special_characters=True,
leaves_parallel=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
| Jupyter/SIT742P11A-CS-Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 1
# Add the specified code for each code cell, running the cells _in order_.
# Create a variable `my_name` that contains your name.
my_name = "<NAME>"
# Create a variable `name_length` that holds how many letters are in your name. Print the number of letters.
name_length = len(my_name)
print(name_length)
# Print out your name with the uppercase letters made lowercase, and the lowercase letters made uppercase. **Hint:** look for a [string method](https://docs.python.org/3/library/stdtypes.html#string-methods) that will modify the _case_ of the string.
# - Try to do this without creating a separate variable!
print(my_name.swapcase())
# Pick two of your favorite numbers (between 1 and 100) and assign them to `favorite_1` and `favorite_2`
favorite_1 = 7
favorite_2 = 65
# Divide each number by the length of your name raised to the power of `.598` (use the built-in `pow()` function for practice), and save it in the same variable.
favorite_1 = favorite_1/pow(name_length, .598)
favorite_2 = favorite_2/pow(name_length, .598)
favorite_1, favorite_2
# Create a variable `raw_sum` that is the sum of those two variables. Note you _cannot_ use the `sum()` function for this, so just use a normal operator!
raw_sum = favorite_1 + favorite_2
raw_sum
# Create a variable `round_sum` that is the `raw_sum` rounded to 1 decimal place. Use the `round()` function.
round_sum = round(raw_sum, 1)
round_sum
# Create two new variables `rounded_1` and `rounded_2` that are your `favorite_1` and `favorite_2` variables rounded to 1 decimal place. Print them out on a single line (hint: pass them as two different arguments).
rounded_1 = round(favorite_1, 1)
rounded_2 = round(favorite_2, 1)
print(rounded_1, rounded_2)
# Create a variable `sum_round` that is the sum of the rounded values (use a normal math operator).
sum_round = rounded_1 + rounded_2
sum_round
# Which is bigger, `round_sum` or `sum_round`? (You can use the `max()` function!)
max(round_sum, sum_round)
# Create a variable `fruits` that contains the string `"apples and bananas"`
fruits = "apples and bananas"
# Use the `replace()` function to substitute all the "a"s in `fruits` with "ee". Store the result in a variable called `fruits_e`.
fruits_e = fruits.replace("a", "ee")
# Use the `replace()` function to substitute all the "a"s in `fruits` with "o". Store the result in a variable called `fruits_o`.
fruits_o = fruits.replace("a", "o")
# Print out the string "I like to eat " followed by each of `fruits`, `fruits_e` and `fruits_o` (three sentences).
print("I like to eat " + fruits)
print("I like to eat " + fruits_e)
print("I like to eat " + fruits_o)
| exercise-1/exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project 1: Driving Licenses, Traffic Accidents and Casualties Analysis
# # Introduction
# Traffic accidents are among the most common causes of death and injury in Saudi Arabia.
# They are a serious public health issue that requires concerted efforts for effective and sustainable prevention.
# ## Problem
# Traffic accidents are increasing in many regions of Saudi Arabia.
# ### Dataset Description
#
#
# Two datasets are given:
#
# Driving Licenses: This dataset contains Saudi Arabia driving licenses issued by administrative area for 1993 - 2016.
#
# Traffic Accidents and Casualties: This dataset contains Saudi Arabia traffic accidents and casualties by region for 2016 and 2017.
#
# ## Data dictionary
#
# |Feature|Type|Dataset|Description|
# |---|---|---|---|
# |year|integer|Driving_Licenses/Traffic_Accidents|the year the driving license was issued or the accident happened|
# |region|object|Driving_Licenses/Traffic_Accidents|the region where the driving license was issued or the accident happened|
# |driving_Licenses|integer|Driving_Licenses|the number of driving licenses issued|
# |indicator|object|Traffic_Accidents|the type of count the row reports (e.g. accidents, injured, dead)|
# |geo_point_2d|object|Traffic_Accidents|region location coordinates|
# |x|float|Driving_Licenses|x coordinate of the region|
# |y|float|Driving_Licenses|y coordinate of the region|
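# The schema above can be enforced when loading the data with pandas. The sample rows below are hypothetical (the real data would come from `pd.read_csv` on the two files); they only illustrate casting the columns to the types listed in the data dictionary.

```python
import pandas as pd

# Hypothetical rows mirroring the Driving_Licenses data dictionary
licenses = pd.DataFrame({
    "year": ["2015", "2016", "2016"],          # often read as strings from CSV
    "region": ["Riyadh", "Riyadh", "Makkah"],
    "driving_licenses": [180000, 195000, 120000],
    "x": [46.7, 46.7, 39.8],
    "y": [24.7, 24.7, 21.4],
})

# Cast to the types from the data dictionary
licenses = licenses.astype({"year": "int64", "driving_licenses": "int64",
                            "x": "float64", "y": "float64"})
print(licenses.dtypes)
```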
# ## Executive summary
#
# Initially I assumed that the number of driving licenses issued would influence the number of traffic accidents, and hence injuries and deaths. Examining the data for 2016-2017, however, I found that the highest number of traffic accidents occurred in the Makkah region, while the highest number of driving licenses was issued in the Riyadh region. From this we cannot conclude that driving licenses influence traffic accidents.
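# The comparison described above can be sketched with a merge on region. The figures below are toy numbers chosen only to illustrate the pattern found in the analysis (most accidents in Makkah, most licenses in Riyadh); they are not the real dataset values.

```python
import pandas as pd

# Toy per-region totals (illustrative, not the real data)
accidents = pd.DataFrame({
    "region": ["Makkah", "Riyadh", "Eastern"],
    "accidents_2016": [130000, 90000, 70000],
})
licenses = pd.DataFrame({
    "region": ["Makkah", "Riyadh", "Eastern"],
    "licenses_2016": [120000, 200000, 80000],
})

# Align the two measures by region and look at the top of each ranking
merged = accidents.merge(licenses, on="region")
print(merged.sort_values("accidents_2016", ascending=False))
```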
# ## Conclusions and recommendation
# I recommend strict enforcement of traffic laws, which can significantly reduce the total number of accidents and hence the numbers of injured and dead.
# I also recommend social awareness campaigns that showcase the injuries and deaths caused by traffic offenses. It is necessary to remind individuals that they should comply with the rules and respect the road and the other drivers to avoid any loss.
#
# https://medium.com/@khowla739/traffic-accident-analysis-and-driving-licenses-in-saudi-arabia-36dd2a0f39d5
| Read.ME.kh.ipynb |