markdown | code | path | repo_name | license
|---|---|---|---|---|
Detect 1 larger region
There is about 1 peak per 200 voxels (based on simulations).
Previously, we assumed the activated region is so small that it contains only one local maximum. This seems reasonable as long as the activated region is smaller than 200 voxels (based on simulations).
What if the activated region is larger and we expect 3 peaks to appear in the field? We treat the peaks as independent, which is reasonable; see this notebook for the simulated results.
|
PowerLarge = 1-(1-Power)**(1/3)
print("If we expect 3 peaks, then the power per peak should be: "+str(PowerLarge))
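As a sanity check (a small sketch, not part of the original notebook): under the interpretation that detecting any one of the 3 independent peaks counts as detecting the region, the per-peak power computed above recovers the overall power:

```python
Power = 0.80                            # overall power for the region, as in the notebook
PowerLarge = 1 - (1 - Power)**(1/3)     # required power per peak

# probability of detecting at least one of 3 independent peaks
recovered = 1 - (1 - PowerLarge)**3
assert abs(recovered - Power) < 1e-12
print(PowerLarge)   # roughly 0.415
```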
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
3. How large must the statistic in a field be to exceed the threshold with power 0.80?
We quantify this by computing the expected local maximum in the field (a null field elevated by the value D).
We use the distribution of local maxima from Cheng & Schwartzman to compute the power/effect size.
|
muRange = np.arange(1.8,5,0.01)
muLarge = []
muSingle = []
for muMax in muRange:
    # what is the power to detect a maximum with this elevation?
    power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0]
    if power>PowerLarge:
        muLarge.append(muMax)
    if power>Power:
        muSingle.append(muMax)
    if power>PowerPerRegion:
        muUnion = muMax
        break
print("The power is sufficient in larger fields if mu equals: "+str(muLarge[0]))
print("The power is sufficient for one region if mu equals: "+str(muSingle[0]))
print("The power is sufficient for all regions if mu equals: "+str(muUnion))
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
5. From the required voxel statistic to Cohen's D for a given sample size
|
# Read in data
Data = pd.read_csv("tal_data.txt",sep=" ",header=None,names=['year','n'])
Data['source']='Tal'
David = pd.read_csv("david_data.txt",sep=" ",header=None,names=['year','n'])
David['source']='David'
Data = pd.concat([Data, David], ignore_index=True)  # DataFrame.append is deprecated
# add detectable effect
Data['deltaUnion']=muUnion/np.sqrt(Data['n'])
Data['deltaSingle']=muSingle[0]/np.sqrt(Data['n'])
Data['deltaLarge']=muLarge[0]/np.sqrt(Data['n'])
# add jitter for figure
stdev = 0.01*(max(Data.year)-min(Data.year))
Data['year_jitter'] = Data.year+np.random.randn(len(Data))*stdev
# Compute medians per year (for smoother)
Medians = pd.DataFrame({
    'year': np.arange(start=np.min(Data.year), stop=np.max(Data.year)+1),
    'TalMdSS': np.nan, 'DavidMdSS': np.nan,
    'TalMdDSingle': np.nan, 'TalMdDUnion': np.nan, 'TalMdDLarge': np.nan,
    'DavidMdDSingle': np.nan, 'DavidMdDUnion': np.nan, 'DavidMdDLarge': np.nan,
    'MdSS': np.nan, 'DSingle': np.nan, 'DUnion': np.nan, 'DLarge': np.nan
})
for yearInd in range(len(Medians)):
    # Compute medians for Tal's data
    yearBoolTal = np.array((Data.source=="Tal") & (Data.year==Medians.year[yearInd]))
    Medians.loc[yearInd,'TalMdSS'] = np.median(Data.n[yearBoolTal])
    Medians.loc[yearInd,'TalMdDSingle'] = np.median(Data.deltaSingle[yearBoolTal])
    Medians.loc[yearInd,'TalMdDUnion'] = np.median(Data.deltaUnion[yearBoolTal])
    Medians.loc[yearInd,'TalMdDLarge'] = np.median(Data.deltaLarge[yearBoolTal])
    # Compute medians for David's data
    yearBoolDavid = np.array((Data.source=="David") & (Data.year==Medians.year[yearInd]))
    Medians.loc[yearInd,'DavidMdSS'] = np.median(Data.n[yearBoolDavid])
    Medians.loc[yearInd,'DavidMdDSingle'] = np.median(Data.deltaSingle[yearBoolDavid])
    Medians.loc[yearInd,'DavidMdDUnion'] = np.median(Data.deltaUnion[yearBoolDavid])
    Medians.loc[yearInd,'DavidMdDLarge'] = np.median(Data.deltaLarge[yearBoolDavid])
    # Compute medians for all data
    yearBool = np.array(Data.year==Medians.year[yearInd])
    Medians.loc[yearInd,'MdSS'] = np.median(Data.n[yearBool])
    Medians.loc[yearInd,'DSingle'] = np.median(Data.deltaSingle[yearBool])
    Medians.loc[yearInd,'DUnion'] = np.median(Data.deltaUnion[yearBool])
    Medians.loc[yearInd,'DLarge'] = np.median(Data.deltaLarge[yearBool])
Medians
# add logscale columns
Medians['MdSSLog'] = np.log(Medians.MdSS.astype(float))
Data['nLog'] = np.log(Data.n)
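The per-year median loop above can also be written with a pandas groupby; a minimal sketch on toy data (the column names match the notebook, the values here are made up):

```python
import pandas as pd

# toy stand-in for the notebook's Data frame
Data = pd.DataFrame({'year':   [1995, 1995, 1995, 1996],
                     'source': ['Tal', 'Tal', 'David', 'Tal'],
                     'n':      [10, 20, 12, 30]})

# medians per (source, year), equivalent to the explicit loop over years
per_source = Data.groupby(['source', 'year'])['n'].median().unstack(level=0)
# medians over all data per year
overall = Data.groupby('year')['n'].median()

print(per_source)
print(overall)   # 1995 -> 12.0, 1996 -> 30.0
```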
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
The figure per List (Tal or David)
|
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data.n[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Data.year_jitter[Data.source=="David"],Data.n[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.TalMdSS,color=twocol[1],lw=3,label="Neurosynth")
axs[0].plot(Medians.year,Medians.DavidMdSS,color=twocol[3],lw=3,label="David et al.")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,200])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Median Sample Size")
axs[0].legend(loc="upper left",frameon=False)
axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaUnion[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaUnion[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.TalMdDUnion,color=twocol[1],lw=3,label="Neurosynth")
axs[1].plot(Medians.year,Medians.DavidMdDUnion,color=twocol[3],lw=3,label="David et al.")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.show()
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
The figure for different Alternatives
|
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter,Data.n,"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.MdSS,color=twocol[1],lw=3,label="Median Sample Size")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,200])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Sample Size")
axs[0].legend(loc="upper left",frameon=False)
axs[1].plot(Data.year_jitter,Data.deltaUnion,"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter,Data.deltaSingle,"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Data.year_jitter,Data.deltaLarge,"r.",color=twocol[4],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.DUnion,color=twocol[1],lw=3,label="Alternative: detect 3 regions")
axs[1].plot(Medians.year,Medians.DSingle,color=twocol[3],lw=3,label="Alternative: detect 1 region")
axs[1].plot(Medians.year,Medians.DLarge,color=twocol[5],lw=3,label="Alternative: detect 1 (larger) region")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size detectable with 80% power")
axs[1].legend(loc="upper right",frameon=False)
#plt.savefig("ReqEffSize.pdf")
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
Figure with logscale and only 1 condition
|
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter,Data.n,"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.MdSS,color=twocol[1],lw=3,label="log(Median Sample Size)")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,200])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("log(Sample Size)")
axs[0].legend(loc="upper left",frameon=False)
#axs[1].plot(Data.year_jitter,Data.deltaUnion,"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter,Data.deltaSingle,"r.",color=twocol[2],alpha=0.5,label="")
#axs[1].plot(Data.year_jitter,Data.deltaLarge,"r.",color=twocol[4],alpha=0.5,label="")
#axs[1].plot(Medians.year,Medians.DUnion,color=twocol[1],lw=3,label="Alternative: detect 3 regions")
axs[1].plot(Medians.year,Medians.DSingle,color=twocol[3],lw=3,label="Median detectable effect size")
#axs[1].plot(Medians.year,Medians.DLarge,color=twocol[5],lw=3,label="Alternative: detect 1 (larger) region")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size detectable with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.savefig("ReqEffSize.jpg")
|
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
|
jokedurnez/RequiredEffectSize
|
mit
|
Plot Functions
|
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'font.size': 16})
FIG_SIZE = (12, 8)
import numpy as np
np.random.seed(12345)
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array
n_plot_points = 100
x_plot = np.linspace(0, 1, 500)[:, None]
y_plot_low = forrester_fcn_low(x_plot)
y_plot_high = forrester_fcn_high(x_plot)
plt.figure(figsize=FIG_SIZE)
plt.plot(x_plot, y_plot_low, 'b')
plt.plot(x_plot, y_plot_high, 'r')
plt.legend(['Low fidelity', 'High fidelity'])
plt.xlim(0, 1)
plt.title('High and low fidelity Forrester functions')
plt.xlabel('x')
plt.ylabel('y');
plt.show()
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Bayesian optimization
Define Parameter Space
The parameter space now contains two parameters: the first is a ContinuousParameter that is the $x$ input to the Forrester function. The second is an InformationSourceParameter that tells Emukit whether a given function evaluation is to be performed by the high or low fidelity function.
|
from emukit.core import ParameterSpace, ContinuousParameter, InformationSourceParameter
n_fidelities = 2
parameter_space = ParameterSpace([ContinuousParameter('x', 0, 1), InformationSourceParameter(n_fidelities)])
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Generate Initial Data
We shall randomly choose 12 low fidelity points and then choose 6 of these points at which to evaluate the high fidelity function.
|
x_low = np.random.rand(12)[:, None]
x_high = x_low[:6, :]
y_low = forrester_fcn_low(x_low)
y_high = forrester_fcn_high(x_high)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Define Model
We will use the linear multi-fidelity model defined in Emukit. In this model, the high-fidelity function is modelled as a scaled sum of the low-fidelity function plus an error term:
$$
f_{high}(x) = f_{err}(x) + \rho \,f_{low}(x)
$$
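To illustrate the structure of this model with a toy numpy sketch (not Emukit code; the functions and $\rho$ below are made up): given samples of $f_{low}$ and $f_{high}$, the scale $\rho$ can be recovered by least squares when the error term is small:

```python
import numpy as np

x = np.linspace(0, 1, 50)
f_low = np.sin(8 * x)                 # toy low-fidelity function
rho_true = 1.5
f_err = 0.01 * (x - 0.5)              # small additive error term
f_high = rho_true * f_low + f_err     # the linear multi-fidelity relation

# estimate rho by regressing f_high on f_low
rho_hat = np.linalg.lstsq(f_low[:, None], f_high, rcond=None)[0][0]
print(rho_hat)   # close to 1.5
```

In the real model, $f_{err}$ and $f_{low}$ are Gaussian processes rather than fixed functions, and $\rho$ is learned jointly with the kernel hyperparameters.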
|
from emukit.multi_fidelity.models.linear_model import GPyLinearMultiFidelityModel
import GPy
from emukit.multi_fidelity.kernels.linear_multi_fidelity_kernel import LinearMultiFidelityKernel
from emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays
from emukit.model_wrappers import GPyMultiOutputWrapper
from GPy.models.gp_regression import GPRegression
x_array, y_array = convert_xy_lists_to_arrays([x_low, x_high], [y_low, y_high])
kern_low = GPy.kern.RBF(1)
kern_low.lengthscale.constrain_bounded(0.01, 0.5)
kern_err = GPy.kern.RBF(1)
kern_err.lengthscale.constrain_bounded(0.01, 0.5)
multi_fidelity_kernel = LinearMultiFidelityKernel([kern_low, kern_err])
gpy_model = GPyLinearMultiFidelityModel(x_array, y_array, multi_fidelity_kernel, n_fidelities)
gpy_model.likelihood.Gaussian_noise.fix(0.1)
gpy_model.likelihood.Gaussian_noise_1.fix(0.1)
model = GPyMultiOutputWrapper(gpy_model, 2, 5, verbose_optimization=False)
model.optimize()
x_plot_low = np.concatenate([np.atleast_2d(x_plot), np.zeros((x_plot.shape[0], 1))], axis=1)
x_plot_high = np.concatenate([np.atleast_2d(x_plot), np.ones((x_plot.shape[0], 1))], axis=1)
def plot_model(x_low, y_low, x_high, y_high):
    mean_low, var_low = model.predict(x_plot_low)
    mean_high, var_high = model.predict(x_plot_high)
    plt.figure(figsize=FIG_SIZE)

    def plot_with_error_bars(x, mean, var, color):
        plt.plot(x, mean, color=color)
        plt.fill_between(x.flatten(), mean.flatten() - 1.96*var.flatten(), mean.flatten() + 1.96*var.flatten(),
                         alpha=0.2, color=color)

    plot_with_error_bars(x_plot_high[:, 0], mean_low, var_low, 'b')
    plot_with_error_bars(x_plot_high[:, 0], mean_high, var_high, 'r')
    plt.plot(x_plot, forrester_fcn_high(x_plot), 'k--')
    plt.scatter(x_low, y_low, color='b')
    plt.scatter(x_high, y_high, color='r')
    plt.legend(['Low fidelity model', 'High fidelity model', 'True high fidelity'])
    plt.title('Low and High Fidelity Models')
    plt.xlim(0, 1)
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()

plot_model(x_low, y_low, x_high, y_high)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Define Acquisition Function
As in [1] & [2] we shall use the entropy search acquisition function, scaled by the cost of evaluating either the high or low fidelity function.
|
from emukit.bayesian_optimization.acquisitions.entropy_search import MultiInformationSourceEntropySearch
from emukit.core.acquisition import Acquisition
# Define cost of different fidelities as acquisition function
class Cost(Acquisition):
    def __init__(self, costs):
        self.costs = costs

    def evaluate(self, x):
        fidelity_index = x[:, -1].astype(int)
        x_cost = np.array([self.costs[i] for i in fidelity_index])
        return x_cost[:, None]

    @property
    def has_gradients(self):
        return True

    def evaluate_with_gradients(self, x):
        return self.evaluate(x), np.zeros(x.shape)
cost_acquisition = Cost([low_fidelity_cost, high_fidelity_cost])
acquisition = MultiInformationSourceEntropySearch(model, parameter_space) / cost_acquisition
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Create OuterLoop
|
from emukit.core.loop import FixedIntervalUpdater, OuterLoop, SequentialPointCalculator
from emukit.core.loop.loop_state import create_loop_state
from emukit.core.optimization.multi_source_acquisition_optimizer import MultiSourceAcquisitionOptimizer
from emukit.core.optimization import GradientAcquisitionOptimizer
initial_loop_state = create_loop_state(x_array, y_array)
acquisition_optimizer = MultiSourceAcquisitionOptimizer(GradientAcquisitionOptimizer(parameter_space), parameter_space)
candidate_point_calculator = SequentialPointCalculator(acquisition, acquisition_optimizer)
model_updater = FixedIntervalUpdater(model)
loop = OuterLoop(candidate_point_calculator, model_updater, initial_loop_state)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Add Plotting of Acquisition Function
To see how the acquisition function evolves, we plot it after every iteration. This is done using the iteration_end_event on the OuterLoop: a list of functions, each with the signature function(loop, loop_state), that are all called after each iteration of the optimization loop.
|
def plot_acquisition(loop, loop_state):
    colours = ['b', 'r']
    plt.plot(x_plot_low[:, 0], loop.candidate_point_calculator.acquisition.evaluate(x_plot_low), 'b')
    plt.plot(x_plot_high[:, 0], loop.candidate_point_calculator.acquisition.evaluate(x_plot_high), 'r')
    previous_x_collected = loop_state.X[[-1], :]
    fidelity_idx = int(previous_x_collected[0, -1])
    plt.scatter(previous_x_collected[0, 0],
                loop.candidate_point_calculator.acquisition.evaluate(previous_x_collected),
                color=colours[fidelity_idx])
    plt.legend(['Low fidelity', 'High fidelity'], fontsize=12)
    plt.title('Acquisition Function at Iteration ' + str(loop_state.iteration))
    plt.xlabel('x')
    plt.xlim(0, 1)
    plt.ylabel('Acquisition Value')
    plt.tight_layout()
    plt.show()
loop.iteration_end_event.append(plot_acquisition)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Find Estimated Minimum at Every Iteration
On each iteration of the optimization loop, find the minimum value of the high fidelity model.
|
x_search = np.stack([np.linspace(0, 1, 1000), np.ones(1000)], axis=1)
model_min_mean = []
model_min_loc = []
def calculate_metrics(loop, loop_state):
    mean, var = loop.model_updaters[0].model.predict(x_search)
    model_min_mean.append(np.min(mean))
    model_min_loc.append(x_search[np.argmin(mean), 0])
# subscribe to event
loop.iteration_end_event.append(calculate_metrics)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Run Optimization
|
loop.run_loop(forrester_fcn, 10)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Plot Final Model
|
is_high_fidelity = loop.loop_state.X[:, -1] == 1
plot_model(x_low=loop.loop_state.X[~is_high_fidelity, 0], y_low=loop.loop_state.Y[~is_high_fidelity],
x_high=loop.loop_state.X[is_high_fidelity, 0], y_high=loop.loop_state.Y[is_high_fidelity])
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Comparison to High Fidelity Only Bayesian Optimization
This section compares the multi-fidelity optimization to Bayesian optimization using high fidelity observations only.
|
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.bayesian_optimization.acquisitions.entropy_search import EntropySearch
from emukit.model_wrappers import GPyModelWrapper
import GPy
# Make model
gpy_model = GPy.models.GPRegression(x_high, y_high)
gpy_model.Gaussian_noise.variance.fix(0.1)
hf_only_model = GPyModelWrapper(gpy_model)
# Create loop
hf_only_space = ParameterSpace([ContinuousParameter('x', 0, 1)])
hf_only_acquisition = EntropySearch(hf_only_model, hf_only_space)
hf_only_loop = BayesianOptimizationLoop(hf_only_space, hf_only_model, hf_only_acquisition)
# Calculate best guess at minimum at each iteration of loop
hf_only_model_min_mean = []
x_search = np.linspace(0, 1, 1000)[:, None]
hf_only_model_min_loc = []
def calculate_metrics(loop, loop_state):
    mean, var = loop.model_updaters[0].model.predict(x_search)
    hf_only_model_min_mean.append(np.min(mean))
    hf_only_model_min_loc.append(x_search[np.argmin(mean)])
# subscribe to event
hf_only_loop.iteration_end_event.append(calculate_metrics)
# Run optimization
hf_only_loop.run_loop(forrester_fcn_high, 10)
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Plot Estimated Minimum Location
|
# Plot comparison
plt.figure(figsize=FIG_SIZE)
x = np.array(range(len(model_min_mean))) + 1
# Calculate cumulative cost of evaluating high fidelity only observations
n_hf_points = hf_only_loop.loop_state.X.shape[0]
cumulative_cost_hf = high_fidelity_cost * (np.array(range(n_hf_points)) + 1)
cumulative_cost_hf = cumulative_cost_hf[x_high.shape[0]:]
# Calculate cumulative cost of evaluating multi-fidelity observations
cost_mf = cost_acquisition.evaluate(loop.loop_state.X)
cumulative_cost_mf = np.cumsum(cost_mf)
cumulative_cost_mf = cumulative_cost_mf[x_array.shape[0]:]
x_min = min(cumulative_cost_hf.min(), cumulative_cost_mf.min())
x_max = max(cumulative_cost_hf.max(), cumulative_cost_mf.max())
plt.plot(cumulative_cost_hf, hf_only_model_min_loc, 'm', marker='x', markersize=16)
plt.plot(cumulative_cost_mf, model_min_loc, 'c', marker='.', markersize=16)
plt.hlines(x_search[np.argmin(forrester_fcn_high(x_search))], x_min, x_max, color='k', linestyle='--')
plt.legend(['High fidelity only optimization', 'Multi-fidelity optimization', 'True minimum'])
plt.title('Comparison of Multi-Fidelity and High Fidelity Only Optimizations')
plt.ylabel('Estimated Location of Minimum')
plt.xlabel('Cumulative Cost of Evaluating Objective');
plt.show()
|
notebooks/Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb
|
EmuKit/emukit
|
apache-2.0
|
How many tweets are about the 'wall'?
|
# Lowercase the hashtags and tweet body
df['hashtags'] = df['hashtags'].str.lower()
df['text'] = df['text'].str.lower()
print("Total number of tweets containing hashtag 'wall' = {}".format(len(df[df['hashtags'].str.contains('wall')])))
print("Total number of tweets whose body contains 'wall' = {}".format(len(df[df['text'].str.contains('wall')])))
wall_tweets = df[(df['hashtags'].str.contains('wall')) | (df['text'].str.contains('wall'))].copy()
print("Total number of tweets about the 'wall' = {}".format(len(wall_tweets)))
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
What is the average Twitter tenure of people who tweeted about the wall?
|
def months_between(end, start):
    return (end.year - start.year)*12 + end.month - start.month
wall_tweets['created'] = pd.to_datetime(wall_tweets['created'])
wall_tweets['user_created'] = pd.to_datetime(wall_tweets['user_created'])
wall_tweets['user_tenure'] = wall_tweets[['created', \
'user_created']].apply(lambda row: months_between(row[0], row[1]), axis=1)
tenure_grouping = wall_tweets.groupby('user_tenure').size() / len(wall_tweets) * 100
fig, ax = plt.subplots()
ax.plot(tenure_grouping.index, tenure_grouping.values)
ax.set_ylabel("% of tweets")
ax.set_xlabel("Acct tenure in months")
plt.show()
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
There are a couple of users tweeting multiple times, but most tweets come from distinct Twitter handles.
|
tweets_per_user = wall_tweets.groupby('user_name').size().sort_values(ascending=False)
fig, ax = plt.subplots()
ax.plot(tweets_per_user.values)
plt.show()
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
Who are the 'top tweeters' + descriptions?
|
wall_tweets.groupby(['user_name', 'user_description']).size().sort_values(ascending=False).head(20).to_frame()
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
What is the reach of these tweets in terms of followers?
|
plt.boxplot(wall_tweets['friends_count'].values, vert=False)
plt.show()
wall_tweets['friends_count'].describe()
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
Location of the tweets?
|
wall_tweets.groupby('user_location').size().sort_values(ascending=False)
|
exploratory_notebooks/mexico_wall_part_1.ipynb
|
jss367/assemble
|
mit
|
INTRO:
The highly modular nature of FIDDLE, exemplified by the depiction below, entails similarly modular input files.
RELEVANT JSON FILES:
There are two .json files that dictate how FIDDLE is run:
1. configurations.json
2. architecture.json
configurations.json
'configurations.json' parametrizes the sequencing file input types and their characteristics. In the example case, the Genome sub-field is "sacCer3", the Tracks sub-field consists of TSS-seq data and others, and the Options sub-field consists of the "Inputs", "Outputs", and other traits FIDDLE takes into consideration. Note that the caveat to the hyper-modularity of this input file is that each of the modified variables must exactly mirror what lies within the input hdf5 files - more on that down the page.
! cat configurations.json
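Since this is ordinary JSON, the configuration can also be loaded and inspected programmatically; a minimal sketch (the file content below is a made-up stand-in -- the real field names must be taken from the repository's own configurations.json):

```python
import json
import os
import tempfile

# hypothetical miniature stand-in for configurations.json
example = {"Genome": "sacCer3",
           "Options": {"Inputs": ["dnaseq"], "Outputs": ["tssseq"]}}

path = os.path.join(tempfile.mkdtemp(), "configurations.json")
with open(path, "w") as f:
    json.dump(example, f, indent=2)

# load it back, as FIDDLE would read its configuration
with open(path) as f:
    config = json.load(f)
print(config["Genome"])             # sacCer3
print(config["Options"]["Inputs"])  # ['dnaseq']
```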
|
! cat configurations.json
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
architecture.json
'architecture.json' parametrizes the hyper-parameters and other neural network specific variables that FIDDLE will employ. The Encoder and Decoder will utilize the same hyper-parameters.
! cat architecture.json
|
! cat architecture.json
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
RELEVANT HDF5 FILES:
Using the quick start hdf5 datasets as examples, one can see that the dimensions of the tracks within the hdf5 datasets reflect the characteristics of the sequencing inputs. The train, validation, and test hdf5 datasets are simply partitions of an original hdf5 dataset that was compiled from scripts found in the 'fiddle/data_prep/' directory. A guide on how this is carried out can be found by starting up 'fiddle/data_prep/data_guide.ipynb'.
train = h5py.File('../data/hdf5datasets/train.h5', 'r')
train.items()
|
train = h5py.File('../data/hdf5datasets/NSMSDSRSCSTSRI_500bp/train.h5', 'r')
train.items()
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
validation = h5py.File('../data/hdf5datasets/validation.h5', 'r')
validation.items()
|
validation = h5py.File('../data/hdf5datasets/NSMSDSRSCSTSRI_500bp/validation.h5', 'r')
validation.items()
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
test = h5py.File('../data/hdf5datasets/test.h5', 'r')
test.items()
|
test = h5py.File('../data/hdf5datasets/NSMSDSRSCSTSRI_500bp/test.h5', 'r')
test.items()
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
Examining the 'info' track:
The 'info' track holds index information relevant to the sequencing datasets. Its dimensions correspond to the following:
1. Chromosome number (e.g. 1-16)
2. Strandedness (e.g. -1, 1)
3. Gene index (parsed from the original GFF file input)
4. Base Pair index (e.g. up to ~10^6)
infoRef_test = test.get('info')[:]
stats.describe(infoRef_test[:, X])
|
infoRef_test = test.get('info')[:]
stats.describe(infoRef_test[:, 0])
stats.describe(infoRef_test[:, 1])
stats.describe(infoRef_test[:, 2])
stats.describe(infoRef_test[:, 3])
print()
infoRef_validation = validation.get('info')[:]
stats.describe(infoRef_validation[:, 0])
stats.describe(infoRef_validation[:, 1])
stats.describe(infoRef_validation[:, 2])
stats.describe(infoRef_validation[:, 3])
print()
infoRef_train = train.get('info')[:]
stats.describe(infoRef_train[:, 0])
stats.describe(infoRef_train[:, 1])
stats.describe(infoRef_train[:, 2])
stats.describe(infoRef_train[:, 3])
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
Jupyter Notebook as a Documentation Resource:
An advantage of this medium of Python interaction is the ability to quickly examine Python scripts. Using '?' as a prefix, FIDDLE's documentation is easily accessed. First, we will import several FIDDLE scripts and quickly check out their internals.
import main, models, visualization
|
import main, models, visualization
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
Note: the following applies to any Python method/class/ADT/etc. as well as all imported Python packages:
The '?' prefix gives direct access to a Python script's docstrings, and the '??' prefix to the whole script source. Jupyter Notebook's autocomplete feature makes it easy to discover available methods. Press <Esc> to dismiss the pop-up that results from the following commands:
?main
??models
?visualization
|
?main
??models
?visualization
|
fiddle/guide.ipynb
|
DylanM-Marshall/FIDDLE
|
gpl-3.0
|
Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers. However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric: the first layer has to have the same size as the last layer, the second layer the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work; more details on that can be found in the following section.
We'll create a 5-layer deep autoencoder with the following layer sizes: 256->512->128->512->256. We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer.
|
ae = sg.create_machine("DeepAutoencoder", seed=10)
ae.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256))
ae.add("layers", sg.create_layer("NeuralRectifiedLinearLayer", num_neurons=512))
ae.add("layers", sg.create_layer("NeuralRectifiedLinearLayer", num_neurons=128))
ae.add("layers", sg.create_layer("NeuralRectifiedLinearLayer", num_neurons=512))
ae.add("layers", sg.create_layer("NeuralLinearLayer", num_neurons=256))
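The symmetry restriction described above can be checked mechanically; a tiny illustrative sketch (this helper is hypothetical, not part of Shogun):

```python
def is_symmetric(layer_sizes):
    """True if the layer sizes read the same forwards and backwards."""
    return layer_sizes == layer_sizes[::-1]

print(is_symmetric([256, 512, 128, 512, 256]))  # True: valid deep autoencoder
print(is_symmetric([256, 512, 128, 512, 128]))  # False: pre-training would not work
```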
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Pre-training
Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels: L1 for the input layer, L2 for the first hidden layer, and so on up to L5 for the output layer.
In pre-training, an autoencoder will be formed for each encoding layer (the layers up to the middle layer of the network). So here we'll have two autoencoders: L1->L2->L5 and L2->L3->L4. The first autoencoder will be trained on the raw data and used to initialize the weights and biases of layers L2 and L5 in the deep autoencoder. After the first autoencoder is trained, we use it to transform the raw data into the states of L2. These states will then be used to train the second autoencoder, which will be used to initialize the weights and biases of layers L3 and L4 in the deep autoencoder.
The operations described above are performed by the pre_train() function. Pre-training parameters for each autoencoder can be controlled using the pt_* public attributes of DeepAutoencoder. Each of those attributes is an SGVector whose length is the number of autoencoders in the deep autoencoder (2 in our case). It can be used to set the parameters for each autoencoder individually. SGVector's set_const() method can also be used to assign the same parameter value to all autoencoders.
Different noise types can be used to corrupt the inputs in a denoising autoencoder. Shogun currently supports 2 noise types: dropout noise, where a random portion of the inputs is set to zero at each iteration in training, and gaussian noise, where the inputs are corrupted with random gaussian noise. The noise type and strength can be controlled using pt_noise_type and pt_noise_parameter. Here, we'll use dropout noise.
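The layer pairing used in pre-training can be sketched explicitly: for our 5-layer network, autoencoder i couples encoding layer i+1 with decoding layer n-i (labels L1..L5 as above). This helper is purely illustrative, not Shogun API:

```python
def pretraining_pairs(n_layers):
    """For each pre-training autoencoder, the (encoding, decoding) layer labels."""
    n_autoencoders = (n_layers - 1) // 2   # layers up to the middle layer
    return [("L%d" % (i + 2), "L%d" % (n_layers - i))
            for i in range(n_autoencoders)]

print(pretraining_pairs(5))  # [('L2', 'L5'), ('L3', 'L4')]
```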
|
ae.put("noise_type", "AENT_DROPOUT") # use dropout noise
ae.put("noise_parameter", 0.5) # each input has a 50% chance of being set to zero
ae.put("optimization_method", "NNOM_GRADIENT_DESCENT") # train using gradient descent
ae.put("gd_learning_rate", 0.01)
ae.put("gd_mini_batch_size", 128)
ae.put("max_num_epochs", 50)
ae.put("epsilon", 0.0) # disable automatic convergence testing
# uncomment this line to allow the training progress to be printed on the console
# from shogun import MSG_INFO; sg.env().set_loglevel(MSG_INFO)
# tell ae to do pre-training.
ae.put("do_pretrain", True)
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Fine-tuning
After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network.
|
ae.put('noise_type', "AENT_DROPOUT") # same noise type we used for pre-training
ae.put('noise_parameter', 0.5)
ae.put('max_num_epochs', 50)
ae.put('optimization_method', "NNOM_GRADIENT_DESCENT")
ae.put('gd_mini_batch_size', 128)
ae.put('gd_learning_rate', 0.0001)
ae.put('epsilon', 0.0)
# start fine-tuning. this might take some time
_ = ae.train(Xtrain)
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Evaluation
Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions:
|
# get a 50-example subset of the test set
subset = Xtest.get("feature_matrix")[:,0:50]
# corrupt the first 25 examples with multiplicative noise
subset[:,0:25] *= (np.random.random((256,25))>0.5)
# corrupt the other 25 examples with additive noise
subset[:,25:50] += np.random.random((256,25))
# obtain the reconstructions
reconstructed_subset = sg.reconstruct(ae, sg.create_features(subset))
# plot the corrupted data and the reconstructions
plt.figure(figsize=(10,10))
for i in range(50):
    ax1 = plt.subplot(10,10,i*2+1)
    ax1.imshow(subset[:,i].reshape((16,16)), interpolation='nearest', cmap=matplotlib.cm.Greys_r)
    ax1.set_xticks([])
    ax1.set_yticks([])
    ax2 = plt.subplot(10,10,i*2+2)
    ax2.imshow(reconstructed_subset.get("feature_matrix")[:,i].reshape((16,16)),
               interpolation='nearest', cmap=matplotlib.cm.Greys_r)
    ax2.set_xticks([])
    ax2.set_yticks([])
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise.
Next we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format.
|
# obtain the weights matrix of the first hidden layer
# the 512 is the number of biases in the layer (512 neurons)
# the transpose is because numpy stores matrices in row-major format, and Shogun stores
# them in column major format
w1 = ae.get("layer_parameters")[0][512:].reshape(256,512).T
# visualize the weights between the first 100 neurons in the hidden layer
# and the neurons in the input layer
plt.figure(figsize=(10,10))
for i in range(100):
ax1=plt.subplot(10,10,i+1)
ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = matplotlib.cm.Greys_r)
ax1.set_xticks([])
ax1.set_yticks([])
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layers of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer, so the network will look like: L1->L2->L3->Softmax. The network is obtained by calling convert_to_neural_network():
|
nn = sg.convert_to_neural_network(ae, sg.create_layer("NeuralSoftmaxLayer", num_neurons=10), 0.01)
nn.put('max_num_epochs', 50)
nn.put('labels', Ytrain)
_ = nn.train(Xtrain)
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Next, we'll evaluate the accuracy on the test set:
|
predictions = nn.apply(Xtest)
accuracy = sg.create_evaluation("MulticlassAccuracy").evaluate(predictions, Ytest) * 100
print("Classification accuracy on the test set =", accuracy, "%")
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Convolutional Autoencoders
Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification.
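As a concrete illustration of the max-pooling step described above, here is a plain NumPy sketch (an illustration only, not Shogun's implementation). Pooling over non-overlapping 2x2 regions halves each spatial dimension, which is how a 16x16 feature map becomes 8x8:

```python
import numpy as np

def max_pool_2x2(fmap):
    """Max-pool a 2D feature map over non-overlapping 2x2 regions."""
    h, w = fmap.shape
    # group each 2x2 block together, then take the maximum within each block
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)
pooled = max_pool_2x2(fmap)
print(pooled.shape)  # (2, 2)
print(pooled)        # [[ 5.  7.] [13. 15.]]
```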
In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using NeuralConvolutionalLayer objects:
|
conv_ae = sg.create_machine("DeepAutoencoder", seed=10)
# 16x16 single channel images
conv_ae.add("layers", sg.create_layer("NeuralInputLayer", width=16, height=16, num_neurons=256))
# the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 5 8x8 feature maps
conv_ae.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=5,
radius_x=2,
radius_y=2,
pooling_width=2,
pooling_height=2))
# the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
conv_ae.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=15,
radius_x=2,
radius_y=2,
pooling_width=2,
pooling_height=2))
# the first decoding layer: same structure as the first encoding layer
conv_ae.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=15,
radius_x=2,
radius_y=2))
# the second decoding layer: same structure as the input layer
conv_ae.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=1,
radius_x=2,
radius_y=2))
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Now we'll pre-train the autoencoder:
|
conv_ae.put("noise_type", "AENT_DROPOUT") # use dropout noise
conv_ae.put("noise_parameter", 0.3) # each input has a 30% chance of being set to zero
conv_ae.put("optimization_method", "NNOM_GRADIENT_DESCENT") # train using gradient descent
conv_ae.put("gd_learning_rate", 0.002)
conv_ae.put("gd_mini_batch_size", 100)
conv_ae.put("max_num_epochs", 30)
# tell conv_ae to do pre-training.
conv_ae.put("do_pretrain", True)
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
And then convert the autoencoder to a regular neural network for classification:
|
conv_nn = sg.convert_to_neural_network(conv_ae, sg.create_layer("NeuralSoftmaxLayer", num_neurons=10), 0.01)
# train the network
conv_nn.put('epsilon', 0.0)
conv_nn.put('max_num_epochs', 50)
conv_nn.put('labels', Ytrain)
# start training. this might take some time
_ = conv_nn.train(Xtrain)
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
And evaluate it on the test set:
|
predictions = conv_nn.apply_multiclass(Xtest)
accuracy = sg.create_evaluation("MulticlassAccuracy").evaluate(predictions, Ytest) * 100
print("Classification accuracy on the test set =", accuracy, "%")
|
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
|
geektoni/shogun
|
bsd-3-clause
|
Not bad performance for such a simple model and featurization. More sophisticated models do slightly better on this dataset, but not enormously better.
Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
Star DeepChem on GitHub
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
Join the DeepChem Gitter
The DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Citing This Tutorial
If you found this tutorial useful please consider citing it using the provided BibTeX.
|
@manual{Intro4,
title={Molecular Fingerprints},
organization={DeepChem},
author={Ramsundar, Bharath},
howpublished = {\url{https://github.com/deepchem/deepchem/blob/master/examples/tutorials/Molecular_Fingerprints.ipynb}},
year={2021},
}
|
examples/tutorials/Molecular_Fingerprints.ipynb
|
deepchem/deepchem
|
mit
|
In the notebook we can execute cells with more than one line of code, in the style of Matlab or Mathematica
|
a = 5
b = 10
print("a + b = %d" % (a + b))
print("a * b = %d" % (a * b))
|
Intro_IPython.ipynb
|
OriolAbril/Statistics-Rocks-MasterCosmosUAB
|
mit
|
In the menus we can find useful drop-down entries for simple notebook-editing tasks. The best way to learn them is to experiment.
Most important is the Kernel menu, where we can kill the execution if we run into trouble.
The notebook allows us to develop code faster in an exploratory way, writing small pieces successively and concentrating on the implementation of the algorithm rather than on the details of the programming language.
Running long programs from the notebook is, however, not advisable.
We should therefore never forget to refactor our code into a standalone Python program, encapsulating the different pieces of the algorithm in functions or modules to allow for reusability.
Markdown essentials for documenting an Ipython notebook
Here are some examples of formatting with markdown (click in the cell to see the markdown source):
Headers:
Header 1
Header 2
Header 3
Header 6
Lists:
Red
Green
Blue
Nested lists:
Red
Strawberry, cherry
Green
Apple, pear
Blue
None
As far as I know
Ordered lists:
Bird
McHale
Parish
Code blocks:
for i in range(1000):
    print(i)
Inline code:
Use the printf() function.
Horizontal rules:
HTML links:
This is an example inline link.
Emphasis:
italics
italics
bold face
bold face
Inserting $\LaTeX$ symbols
One of the great improvements of IPython over plain markdown is that it allows us to include LaTeX expressions. This is done by using MathJax to render LaTeX inside markdown.
MathJax is installed by default in the Anaconda distribution. It can handle reasonably complex expressions, but keep in mind that it is not a full LaTeX interpreter.
Let us see some examples:
$$c = \sqrt{a^2 + b^2}$$
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx$$
\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}
Magic commands
Ipython includes a number of "magic" commands, that offer increased functionality beyond python.
Magic commands come in two flavors:
Line magic commands: prefixed with the character '%'. They work as functions, with the rest of the line treated as arguments
Cell magic commands: prefixed with the character '%%'. They affect the behavior of the whole cell
Line magic commands
%timeit: Estimate the performance of a function, running it in a loop several times
%edit: Start the default editor to edit a file
%automagic: Make magic commands callable without the '%' character
%magic: Information on magic commands
|
%lsmagic
|
Intro_IPython.ipynb
|
OriolAbril/Statistics-Rocks-MasterCosmosUAB
|
mit
|
We classify each name according to the person's title
|
# Filter the name
def get_title(x):
y = x[x.find(',')+1:].replace('.', '').replace(',', '').strip().split(' ')
if y[0] == 'the': # Search for the countess
title = y[1]
else:
title = y[0]
return title
def filter_title(title, sex):
if title in ['Countess', 'Dona', 'Lady', 'Jonkheer', 'Mme', 'Mlle', 'Ms', 'Capt', 'Col', 'Don', 'Sir', 'Major', 'Rev', 'Dr']:
if sex:
return 'Rare_male'
else:
return 'Rare_female'
else:
return title
for df in [train_df, test_df]:
df['NameLength'] = df['Name'].apply(lambda x : len(x))
df['Title'] = df['Name'].apply(get_title)
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in [train_df, test_df]:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
|
ejercicio 3/titanic.ipynb
|
cristhro/Machine-Learning
|
gpl-3.0
|
We remove the special titles and group them into more concrete categories
|
for df in [train_df, test_df]:
df['Title'] = df.apply(lambda x: filter_title(x['Title'], x['Sex']), axis=1)
sns.countplot(y=train_df['Title'])
train_df.groupby('Title')['PassengerId'].count().sort_values(ascending=False)
# Borramos la columna Name
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
train_df.head()
|
ejercicio 3/titanic.ipynb
|
cristhro/Machine-Learning
|
gpl-3.0
|
Model Selection
We tried 4 different models to see which gave the best result; in this case, Random Forest achieved the highest score on Kaggle among the 3 models that tied in score (RF, DT, and SVM)
|
X_train = train_df.drop(["Survived"], axis=1).copy()
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
|
ejercicio 3/titanic.ipynb
|
cristhro/Machine-Learning
|
gpl-3.0
|
Create network using networkx
This example uses the small and simple network found here: network-example-undirected.txt. <br>
|
# Step 2: Import/Create the network that PathLinker will run on
network_file = 'network-example-undirected.txt'
# create a new network by importing the data from a sample using pandas
df = pd.read_csv(network_file, sep='\t', lineterminator='\n')
# and create the networkx Graph from the pandas dataframe
G = nx.from_pandas_edgelist(df, "source", "target")
# create the CyNetwork object from the networkx in CytoScape
cy_network = cy.network.create_from_networkx(G, name = 'network-example-undirected', collection = 'F1000 PathLinker Use Case')
# obtain the CyNetwork object SUID
cy_network_suid = cy_network.get_id()
# give the network some style and a layout
my_style = cy.style.create('default')
# copied from here: https://github.com/cytoscape/cytoscape-automation/blob/master/for-scripters/Python/basic-fundamentals.ipynb
basic_settings = {
'NODE_FILL_COLOR': '#6AACB8',
'NODE_SIZE': 55,
'NODE_BORDER_WIDTH': 0,
'NODE_LABEL_COLOR': '#555555',
'EDGE_WIDTH': 2,
'EDGE_TRANSPARENCY': 100,
'EDGE_STROKE_UNSELECTED_PAINT': '#333333',
'NETWORK_BACKGROUND_PAINT': '#FFFFEA'
}
my_style.update_defaults(basic_settings)
# Create some mappings
my_style.create_passthrough_mapping(column='name', vp='NODE_LABEL', col_type='String')
cy.layout.apply(name="force-directed", network=cy_network)
cy.style.apply(my_style, cy_network)
#cy.layout.fit(network=cy_network)
|
cytoscape-automation-example/simple_use_case.ipynb
|
Murali-group/PathLinker-Cytoscape
|
gpl-3.0
|
The network shown below will be generated in Cytoscape with the above code.
Run PathLinker using the API function
Run PathLinker
The function takes user-specified sources, targets, and a set of parameters, and computes the k shortest paths. The function returns the paths in JSON format. Based on the user input, the function can also generate a subnetwork (and view) containing those paths, and return the computed paths and subnetwork/view SUIDs.
Additional descriptions of the parameters are available in the PathLinker app documentation.
|
# Step 3: Construct input data to pass to PathLinker API function
# construct PathLinker input data for API request
# For a description of all of the parameters, please see below
params = {
'sources': 'a',
'targets': 'e h',
    'k': 2, # the number of shortest paths to compute
'treatNetworkAsUndirected': True, # Our graph is undirected, so use this option
'includeTiedPaths': True, # This option is not necessary. I'm including it here just to show what it does
}
# construct REST API request url
url = "http://localhost:1234/pathlinker/v1/" + str(cy_network_suid) + "/run"
# to just run on the network currently in view on cytoscape, use the following:
# url = "http://localhost:1234/pathlinker/v1/currentView/run"
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
# perform the REST API call
result_json = requests.request("POST",
url,
data = json.dumps(params),
params = None,
headers = headers)
# ------------ Description of all parameters ------------------
# the node names for the sources and targets are space separated
# and must match the "name" column in the Node Table in Cytoscape
params["sources"] = "a"
params["targets"] = "e h"
# the number of shortest paths to compute, must be greater than 0
# Default: 50
params["k"] = 2
# Edge weight type, must be one of the three: [UNWEIGHTED, ADDITIVE, PROBABILITIES]
params["edgeWeightType"] = "UNWEIGHTED"
# Edge penalty. Not needed for UNWEIGHTED
# Must be 0 or greater for ADDITIVE, and 1 or greater for PROBABILITIES
params["edgePenalty"] = 0
# The column name in the Edge Table in Cytoscape containing edge weight property,
# column type must be numerical type
params["edgeWeightColumnName"] = "weight"
# The option to ignore directionality of edges when computing paths
# Default: False
params["treatNetworkAsUndirected"] = True
# Allow source/target nodes to appear as intermediate nodes in computed paths
# Default: False
params["allowSourcesTargetsInPaths"] = False
# Include more than k paths if the path length/score is equal to kth path length/score
# Default: False
params["includeTiedPaths"] = False
# Option to disable the generation of the subnetwork/view, path rank column, and result panel
# and only return the path result in JSON format
# Default: False
params["skipSubnetworkGeneration"] = False
|
cytoscape-automation-example/simple_use_case.ipynb
|
Murali-group/PathLinker-Cytoscape
|
gpl-3.0
|
Output
The app will generate the following (shown below):
- a subnetwork containing the paths (with the hierarchical layout applied)
- a path rank column in the Edge Table (shows for each edge, the rank of the first path in which it appears)
- a Result Panel within Cytoscape.
The API will return:
- the computed paths
- the SUIDs of the generated subnetwork and subnetwork view
- the path rank column name in JSON format.
|
# Step 4: Store result, parse, and print
results = json.loads(result_json.content)
print("Output:\n")
# access the suid, references, and path rank column name
subnetwork_suid = results["subnetworkSUID"]
subnetwork_view_suid = results["subnetworkViewSUID"]
# The path rank column shows for each edge, the rank of the first path in which it appears
path_rank_column_name = results["pathRankColumnName"]
print("subnetwork SUID: %s" % (subnetwork_suid))
print("subnetwork view SUID: %s" % (subnetwork_view_suid))
print("Path rank column name: %s" % (path_rank_column_name))
print("")
# access the paths generated by PathLinker
paths = results["paths"]
# print the paths found
for path in paths:
print("path rank: %d" % (path['rank']))
print("path score: %s" % (str(path['score'])))
print("path: %s" % ("|".join(path['nodeList'])))
# write them to a file
paths_file = "use-case-images/paths.txt"
print("Writing paths to %s" % (paths_file))
with open(paths_file, 'w') as out:
out.write("path rank\tpath score\tpath\n")
for path in paths:
out.write('%d\t%s\t%s\n' % (path['rank'], str(path['score']), "|".join(path['nodeList'])))
# access network and network view references
subnetwork = cy.network.create(suid=subnetwork_suid)
#subnetwork_view = subnetwork.get_first_view()
# TODO copy the layout of the original graph to this graph to better visualize the results.
# The copycat layout doesn't seem to be working
# for now, just apply the cose layout to get a little better layout (see image below)
cy.layout.apply(name="cose", network=subnetwork)
|
cytoscape-automation-example/simple_use_case.ipynb
|
Murali-group/PathLinker-Cytoscape
|
gpl-3.0
|
The subnetwork with "cose" layout will look something like this:
Visualization using cytoscape.js and py2cytoscape
|
# *** Currently the function does not work therefore is commented out. ***
# import py2cytoscape.cytoscapejs as renderer
# # visualize the subnetwork view using CytoScape.js
# renderer.render(subnetwork_view, 'Directed', background='radial-gradient(#FFFFFF 15%, #DDDDDD 105%)')
|
cytoscape-automation-example/simple_use_case.ipynb
|
Murali-group/PathLinker-Cytoscape
|
gpl-3.0
|
View the subnetwork and store the image
|
# png
subnetwork_image_png = subnetwork.get_png()
subnetwork_image_file = 'use-case-images/subnetwork-image.png'
print("Writing PNG to %s" % (subnetwork_image_file))
with open(subnetwork_image_file, 'wb') as f:
f.write(subnetwork_image_png)
from IPython.display import Image
Image(subnetwork_image_png)
# # pdf
# subnetwork_image_pdf = subnetwork.get_pdf()
# subnetwork_image_file = subnetwork_image_file.replace('.png', '.pdf')
# print("Writing PDF to %s" % (subnetwork_image_file))
# with open(subnetwork_image_file, 'wb') as f:
# f.write(subnetwork_image_pdf)
# # display the pdf in frame
# from IPython.display import IFrame
# IFrame('use_case_images/subnetwork_image.pdf', width=600, height=300)
# # svg
# subnetwork_image_svg = subnetwork.get_svg()
# from IPython.display import SVG
# SVG(subnetwork_image_svg)
|
cytoscape-automation-example/simple_use_case.ipynb
|
Murali-group/PathLinker-Cytoscape
|
gpl-3.0
|
20 daisies had an even number of petals and 40 had an odd number. But this is a small sample from a large daisy field, and the true probability of getting an odd number of petals may differ. Consider a field of 6,000 daisies in which 4,000 have an odd petal count, and let's see what conclusions can be drawn by picking 60 of them:
|
universe = np.concatenate([np.zeros(2000), np.ones(4000)])
sample = np.random.choice(universe, 60)
float(sample.sum()) / sample.size
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
Although it should have come out as:
|
float(universe.sum()) / universe.size
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
Point estimates are wrong (at least slightly) very often; interval estimates are wrong only with a certain probability. Nature is complex, but not malicious. Having examined part of the field, we can, with high probability, state an interval for the true probability of getting an odd number of petals.
Bootstrap
Let's make a quick estimate using the bootstrap: https://ru.wikipedia.org/wiki/Статистический_бутстрэп
|
sample = np.concatenate([np.zeros(20), np.ones(40)])
def point_est(sample):
return float(sample.sum()) / sample.size
mean = point_est(sample)
mean
def Choose(sample):
return np.random.choice(sample, sample.size)
B_size = 10000
boot = np.vectorize(lambda it: point_est(Choose(sample)))(np.arange(B_size))
sns.distplot(boot)
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
From the available sample I generated 10,000 resamples of the same size, drawn with replacement. This resembles the following process: suppose the field consists of a very large number of daisies with the petal counts present in Matvey's data, distributed as in his sample, and we simulate the selection of 60 daisies 10,000 times.
|
boot.mean()
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
The mean came out almost as it should. But the value is not in the mean itself; it is in the variance, which the point estimate lacked and which we now have thanks to the bootstrap method.
|
boot_mean = float(sample.sum()) / sample.size
boot_std = boot.std()
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
Consider everyone's favorite 95% confidence interval (the probability that the true value of the probability falls outside the interval is 5%). A bit of CLT magic:
|
alpha = 0.05
level = 1. - alpha / 2
deviation = stats.norm.ppf(level) * boot_std
boot_mean - deviation, boot_mean + deviation
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
The spread is actually quite wide. In any case, we are now 95% confident that the two parities of the petal count are not equally likely. Let's see how much this confidence can be raised.
|
stats.norm.cdf((boot_mean - .5) / boot_std)
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
The odds of a fair parity distribution are less than 1%
Yes, it can be done much more simply
Bootstrap is good, but it knows nothing about what we were estimating. On the one hand, that's a plus: we can estimate complicated statistics without much effort.
On the other hand, if we go into the details, we can get a better estimate. It is natural, for example, to assume that the probability of finding an odd number of petals on a daisy is constant. Then this is a sequence of independent Bernoulli trials, which is very close to the truth. But doing that right away would have left us without the fun picture.
|
p = float(sample.sum()) / sample.size
se = np.sqrt(p * (1. - p) / sample.size)
alpha = 0.05
level = 1. - alpha / 2
deviation = stats.norm.ppf(level) * se
p - deviation, p + deviation
|
petal.ipynb
|
dmittov/misc
|
apache-2.0
|
Matlab Basics
Matlab has very similar syntax to Python. The main difference in general arithmetic is that Matlab includes all the math/array creation functions by default. Let's compare some examples
|
#Python
import numpy as np
b = 4
x = np.linspace(0,10, 5)
y = x * b
print(y)
%%matlab
b = 4;
x = linspace(0,10,5);
y = b .* x
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
There are three main differences we can spot:
Matlab doesn't require any imports or prefixes for linspace
Matlab requires a ; at the end of a line to suppress output. If you don't end with a ;, Matlab will echo the result of that line.
In numpy, all arithmetic operations are element-by-element by default. In Matlab, arithmetic operations default to their matrix versions, so you put a . in front of an operator to make it element-by-element
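The third difference can be demonstrated directly in NumPy, where `*` is element-wise and `@` is the matrix product (the Matlab equivalents are shown in the comments):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

elementwise = a * b  # Matlab: a .* b  (element-by-element)
matrix = a @ b       # Matlab: a * b   (matrix product)

print(elementwise)  # [[ 5 12] [21 32]]
print(matrix)       # [[19 22] [43 50]]
```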
Working with matrices
$$
\left[\begin{array}{lr}
4 & 3\
-2 & 1\
\end{array}\right]
\left[\begin{array}{c}
2\
6\
\end{array}\right]
=
\left[\begin{array}{c}
26\
2\
\end{array}\right]
$$
|
x = np.array([[4,3], [-2, 1]])
y = np.array([2,6]).transpose()
print(x.dot(y))
%%matlab
x = [4, 3; -2, 1];
y = [2,6]';
x * y
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
You can see here that Matlab doesn't distinguish between lists, which can grow/shrink, and arrays, which are fixed size
|
x = [2,5]
x.append(3)
x
%%matlab
x = [5,2];
x = [x 3]
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Since Matlab arrays are always fixed length, you must create a new array to change the size
Many of the commands we used have the same name in Matlab
|
import scipy.linalg as lin
example = np.random.random( (3,3) )
lin.eig(example)
%%matlab
example = rand(3,3);
eig(example)
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Slicing
Slicing is nearly the same, except Matlab starts at 1 and includes both the start and end of the slice. Matlab uses parenthesis instead of brackets
|
%%matlab
x = 1:10;
%this is how you make comments BTW, the % sign
x(1)
x(1:2)
x(1:2:5)
x = list(range(1,11))
print(x[0])
print(x[0:2])
print(x[0:6:2])
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Program flow control
All the same flow statements from Python exist
|
for i in range(3):
print(i)
print('now with a list')
x = [2,5]
for j in x:
print(j)
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Matlab can only iterate in for loops over integers. Thus, to iterate over the elements of an array, you need to use this syntax:
|
%%matlab
for i = 0:2
i
end
'now with a list'
x = [2, 5]
n = length(x)
for j = 1:n
    x(j)
end
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
If statements are similar. and is replaced by &, or by | and not by ~
|
%%matlab
a = -3;
if a < 0 & abs(a) > 2
a * 2
end
if ~(a == 3 | a ~= 3)
'foo'
end
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Creating functions in Matlab
In Matlab, you always define functions in a separate file and then read them in. You can use the %%writefile cell magic to do this
|
%%writefile compute_pow.m
function[result] = compute_pow(x, p)
%this function computes x^p
result = x ^ p;
%%matlab
compute_pow(4,2)
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
If you modify the file, you have to force matlab to reload your function:
|
%%matlab
clear compute_pow
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Plotting
The matplotlib was inspired by Matlab's plotting, so many of the functions are similar.
|
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0,10,100)
y = np.cos(x)
plt.figure(figsize=(7,5))
plt.plot(x,y)
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
plt.title('cosine wave')
plt.show()
%%matlab
x = linspace(0,10,100);
y = cos(x);
plot(x,y)
grid on
xlabel('x')
ylabel('y')
title('cosine wave')
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Example - Solving and plotting an Optimization Problem
Minimize
$$f(x) = (x - 4)^2 $$
|
from scipy.optimize import *
def fxn(x):
return (x - 4)**2
x_min = newton(fxn,x0=0)
plt.figure(figsize=(7,5))
x_grid = np.linspace(2,6,100)
plt.plot(x_grid, fxn(x_grid))
plt.axvline(x_min, color='red')
plt.show()
%%writefile my_obj.m
function[y] = my_obj(x)
%be careful to use .^ here so that we can pass matrices to this method
y = (x - 4).^2;
%%matlab
[x_min, fval] = fminsearch(@my_obj, 0);
x_grid = linspace(2,6,100);
plot(x_grid, my_obj(x_grid));
hold on
ylimits=get(gca,'ylim');
plot([x_min, x_min], ylimits)
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
There are a few key differences to note:
When referring to a function in Matlab without calling it, you have to prefix it with an @
There is no easy way to plot a vertical line in Matlab, so you have to plot a line from the lowest y-value to the highest y-value
Be careful to make sure your function can handle matrices
Learning Matlab
You already know many numerical methods. The key to learning Matlab is using its online documentation and judicious web searches. For example, if you want to solve two equations, you know you could use a root-finding method; a web search would bring you to the fsolve method.
Excel
Excel is a pretty self-explanatory program. I'm just going to show you some advanced things which you may not already know.
References
Dragging equations
Auto-fill
Optimization -> Solver add-in
Statistics -> Data Analysis add-in for rn
Matrix functions -> ctrl-shift-enter
Text parsing -> paste-special; to delete blanks go to Find & Select and then delete
Importing data from Excel
Pandas
Pandas is a library that can read data from many formats, including excel. It also has some built in graphing/analysis tools.
|
import pandas as pd
data = pd.read_excel('fuel_cell.xlsx')
data.info()
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
You can access data in two ways:
|
data.Resistance[0:10]
data.iloc[0:10, 3]
%system jupyter nbconvert unit_10_lecture_1.ipynb --to slides --post serve
|
unit_16/lectures/lecture_1.ipynb
|
whitead/numerical_stats
|
gpl-3.0
|
Shape Constraints for Ethics with TensorFlow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints_for_ethics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial demonstrates how to use the TensorFlow Lattice (TFL) library to train models that behave responsibly and do not violate certain ethical or fairness assumptions. In particular, we focus on using monotonicity constraints to avoid unfair penalization of certain attributes. This tutorial includes demonstrations of the experiments from the paper "Deontological Ethics By Monotonicity Shape Constraints" by Serena Wang and Maya Gupta, published at AISTATS 2020.
We will use TFL canned estimators on public datasets, but everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Installing the TF Lattice package:
|
#@test {"skip": true}
!pip install tensorflow-lattice seaborn
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Importing required packages:
|
import tensorflow as tf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Default values used in this tutorial:
|
# List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master'
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Case Study 1: Law School Admissions
In the first part of this tutorial, we consider a case study using the Law School Admissions dataset from the Law School Admissions Council (LSAC). We train a classifier to predict whether a student will pass the bar exam using two features: the student's LSAT score and undergraduate GPA.
Suppose the classifier's score is used to guide law school admissions or scholarships. According to merit-based social norms, we would expect students with higher GPA and higher LSAT scores to receive higher scores from the classifier. However, we will observe that it is easy for models to violate these intuitive norms and sometimes penalize students for having a higher GPA or LSAT score.
To address this unfair penalization problem, we can impose monotonicity constraints so that the model never penalizes a higher GPA or a higher LSAT score, all else equal. In this tutorial, we show how to impose those monotonicity constraints with TFL.
Load Law School Data
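Concretely, the monotonicity requirement says that with all other features held fixed, a higher GPA or LSAT score must never lower the model's score. As a minimal numpy sketch (with made-up scores, independent of TFL), here is how one could audit a model's output for violations along one feature:

```python
import numpy as np

def count_monotonicity_violations(scores):
    """Count adjacent pairs where the score drops as the feature increases.

    `scores` are model outputs evaluated at increasing values of one
    feature, with all other features held fixed.
    """
    return int(np.sum(np.diff(scores) < 0))

# Hypothetical scores at increasing GPA values: the dip at the third
# entry is one monotonicity violation.
scores = np.array([0.30, 0.42, 0.40, 0.55, 0.60])
print(count_monotonicity_violations(scores))  # 1
```

A monotonic model passes this audit with zero violations on every such one-feature slice.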
|
# Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',')
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Preprocess the dataset:
|
# Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
# Drop rows with where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Split the data into train/validation/test sets
|
def split_dataset(input_df, random_state=888):
"""Splits an input dataset into train, val, and test sets."""
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Visualize the data distribution
First, let's visualize the distribution of the data. We plot the GPA and LSAT scores for all students who passed the bar and for all students who did not.
|
def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
  # Adjust plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar')
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train a calibrated linear model to predict bar exam passage
Next, we train a calibrated linear model from TFL to predict whether a student will pass the bar. The two input features are LSAT score and undergraduate GPA, and the training label is whether the student passed the bar.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in the model's output and accuracy.
Helper functions for training a TFL calibrated linear estimator
These functions are used for both this law school case study and the credit default case study below.
|
def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
"""Trains a TFL calibrated linear estimator.
Args:
train_df: pandas dataframe containing training data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rate: learning rate of Adam optimizer for gradient descent.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
estimator: a trained TFL calibrated linear estimator.
"""
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
"""Optimizes learning rates for TFL estimators.
Args:
train_df: pandas dataframe containing training data.
val_df: pandas dataframe containing validation data.
test_df: pandas dataframe containing test data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rates: list of learning rates to try.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
A single TFL estimator that achieved the best validation accuracy.
"""
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index]
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Helper functions for configuring law school dataset features
These helper functions are specific to the law school case study.
|
def get_input_fn_law(input_df, num_epochs, batch_size=None):
"""Gets TF input_fn for law school models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
"""Gets TFL feature configs for law school models."""
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Helper functions for visualizing trained model outputs
|
def get_predicted_probabilities(estimator, input_df, get_input_fn):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train an unconstrained (non-monotonic) calibrated linear model
|
nomon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(nomon_linear_estimator, input_df=law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train a monotonic calibrated linear model
|
mon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(mon_linear_estimator, input_df=law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train other unconstrained models
We have demonstrated that TFL calibrated linear models can be trained to be monotonic in both LSAT score and GPA without too large a sacrifice in accuracy.
But how does the calibrated linear model compare to other types of models, such as deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs appear to have reasonably fair outputs? To answer this question, we next train an unconstrained DNN and GBT. In fact, we will observe that both the DNN and GBT easily violate monotonicity in LSAT score and undergraduate GPA.
Train an unconstrained deep neural network (DNN) model
The architecture was optimized to achieve high validation accuracy.
|
feature_names = ['ugpa', 'lsat']
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
hidden_units=[100, 100],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.008),
activation_fn=tf.nn.relu)
dnn_estimator.train(
input_fn=get_input_fn_law(
law_train_df, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS))
dnn_train_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
dnn_val_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
dnn_test_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for DNN: train: %f, val: %f, test: %f' %
(dnn_train_acc, dnn_val_acc, dnn_test_acc))
plot_model_contour(dnn_estimator, input_df=law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train an unconstrained gradient boosted trees (GBT) model
The tree structure was optimized to achieve high validation accuracy.
|
tree_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
n_batches_per_layer=2,
n_trees=20,
max_depth=4)
tree_estimator.train(
input_fn=get_input_fn_law(
law_train_df, num_epochs=NUM_EPOCHS, batch_size=BATCH_SIZE))
tree_train_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
tree_val_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
tree_test_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for GBT: train: %f, val: %f, test: %f' %
(tree_train_acc, tree_val_acc, tree_test_acc))
plot_model_contour(tree_estimator, input_df=law_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Case Study 2: Credit Default
The second case study in this tutorial is predicting an individual's credit default probability. We use the Default of Credit Card Clients dataset from the UCI repository. This data was collected from 30,000 Taiwanese credit card users and contains a binary label of whether a user defaulted on a payment in a time window. Features include marital status, gender, education, and how long a user was behind on existing bill payments in each month from April to September 2005.
As with the first case study, we again illustrate using monotonicity constraints to avoid unfair penalization: if the model were used to determine a user's credit score, many could feel it is unfair to be penalized for paying their bills earlier, all else equal. Thus, we apply a monotonicity constraint that keeps the model from penalizing early payments.
Load Credit Default Data
|
# Load data file.
credit_file_name = 'credit_default.csv'
credit_file_path = os.path.join(DATA_DIR, credit_file_name)
credit_df = pd.read_csv(credit_file_path, delimiter=',')
# Define label column name.
CREDIT_LABEL = 'default'
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Split the data into train/validation/test sets
|
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Visualize the data distribution
First, let's visualize the distribution of the data. We plot the mean and standard error of the observed default rate for users with different marital statuses and repayment statuses. The repayment status represents the number of months a user is behind on repaying their loan (as of April 2005).
|
def get_agg_data(df, x_col, y_col, bins=11):
xbins = pd.cut(df[x_col], bins=bins)
data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem'])
return data
def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label):
plt.rcParams['font.family'] = ['serif']
_, ax = plt.subplots(nrows=1, ncols=1)
plt.setp(ax.spines.values(), color='black', linewidth=1)
ax.tick_params(
direction='in', length=6, width=1, top=False, right=False, labelsize=18)
df_single = get_agg_data(input_df[input_df['MARRIAGE'] == 1], x_col, y_col)
df_married = get_agg_data(input_df[input_df['MARRIAGE'] == 2], x_col, y_col)
ax.errorbar(
df_single[(x_col, 'mean')],
df_single[(y_col, 'mean')],
xerr=df_single[(x_col, 'sem')],
yerr=df_single[(y_col, 'sem')],
color='orange',
marker='s',
capsize=3,
capthick=1,
label='Single',
markersize=10,
linestyle='')
ax.errorbar(
df_married[(x_col, 'mean')],
df_married[(y_col, 'mean')],
xerr=df_married[(x_col, 'sem')],
yerr=df_married[(y_col, 'sem')],
color='b',
marker='^',
capsize=3,
capthick=1,
label='Married',
markersize=10,
linestyle='')
leg = ax.legend(loc='upper left', fontsize=18, frameon=True, numpoints=1)
ax.set_xlabel(x_label, fontsize=18)
ax.set_ylabel(y_label, fontsize=18)
ax.set_ylim(0, 1.1)
ax.set_xlim(-2, 8.5)
ax.patch.set_facecolor('white')
leg.get_frame().set_edgecolor('black')
leg.get_frame().set_facecolor('white')
leg.get_frame().set_linewidth(1)
plt.show()
plot_2d_means_credit(credit_train_df, 'PAY_0', 'default',
'Repayment Status (April)', 'Observed default rate')
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train a calibrated linear model to predict credit default rate
Next, we train a calibrated linear model from TFL to predict whether a user will default on a loan. The two input features are the user's marital status and how many months the user is behind on paying back their loans in April (repayment status). The training label is whether the user defaulted on a loan.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in the model's output and accuracy.
Helper functions for configuring credit default dataset features
These helper functions are specific to the credit default case study.
|
def get_input_fn_credit(input_df, num_epochs, batch_size=None):
"""Gets TF input_fn for credit default models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['MARRIAGE', 'PAY_0']],
y=input_df['default'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_credit(monotonicity):
"""Gets TFL feature configs for credit default models."""
feature_columns = [
tf.feature_column.numeric_column('MARRIAGE'),
tf.feature_column.numeric_column('PAY_0'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='MARRIAGE',
lattice_size=2,
pwl_calibration_num_keypoints=3,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='PAY_0',
lattice_size=2,
pwl_calibration_num_keypoints=10,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Helper functions for visualizing trained model outputs
|
def plot_predictions_credit(input_df,
estimator,
x_col,
x_label='Repayment Status (April)',
y_label='Predicted default probability'):
predictions = get_predicted_probabilities(
estimator=estimator, input_df=input_df, get_input_fn=get_input_fn_credit)
new_df = input_df.copy()
new_df.loc[:, 'predictions'] = predictions
plot_2d_means_credit(new_df, x_col, 'predictions', x_label, y_label)
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train an unconstrained (non-monotonic) calibrated linear model
|
nomon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, nomon_linear_estimator, 'PAY_0')
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Train a monotonic calibrated linear model
|
mon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, mon_linear_estimator, 'PAY_0')
|
site/ja/lattice/tutorials/shape_constraints_for_ethics.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Evaluate model fit using a cost (loss) function
The cost function for simple linear regression is the residual sum of squares (RSS): the sum of the squared differences between the observed and predicted values.
|
import numpy as np
# RSS is the sum (not the mean) of the squared residuals.
print("Residual sum of squares: %.2f" % np.sum((model.predict(X) - Y)**2))
|
Chapters/Two/Simple Linear Regression.ipynb
|
fadeetch/Mastering-ML-Python
|
mit
|
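Note the distinction between RSS and the mean squared error (MSE): RSS sums the squared residuals while MSE averages them, so RSS = n · MSE. A quick check on toy numbers:

```python
import numpy as np

# Toy observations and predictions.
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 1.5, 3.5, 3.5])

rss = np.sum((y - y_hat) ** 2)   # sum of squared residuals
mse = np.mean((y - y_hat) ** 2)  # average of squared residuals
print(rss, mse)                  # rss equals len(y) * mse
```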
Solving OLS for simple regression
The goal is to calculate the vector of coefficients beta that minimizes the cost function.
|
from __future__ import division
xbar = np.mean(X)
ybar = np.mean(Y)
print("Mean of X is:", xbar)
print("Mean of Y is:", ybar)
#Make our own functions for variance and covariance to better understand how they work
def variance(X):
    return np.sum((X - np.mean(X))**2) / (len(X) - 1)
def covariance(X, Y):
    return np.sum((X - np.mean(X)) * (Y - np.mean(Y))) / (len(X) - 1)
print("Variance of X: ", variance(X))
print("Covariance of X, Y is: ", covariance(X,Y))
#For simple linear regression, beta is cov/var.
#From beta we can also get the intercept: alpha = ybar - beta * xbar
beta = covariance(X,Y) / variance(X)
alpha = ybar - beta * xbar
beta
|
Chapters/Two/Simple Linear Regression.ipynb
|
fadeetch/Mastering-ML-Python
|
mit
|
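The closed-form result above (beta = cov/var, alpha = ybar − beta·xbar) can be cross-checked against numpy's least-squares fit. The data below are made up for illustration:

```python
import numpy as np

# Made-up sample data.
X = np.array([6.0, 8.0, 10.0, 14.0, 18.0])
Y = np.array([7.0, 9.0, 13.0, 17.5, 18.0])

beta = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)  # slope = cov / var
alpha = Y.mean() - beta * X.mean()                      # intercept = ybar - beta*xbar

# np.polyfit solves the same least-squares problem directly.
slope, intercept = np.polyfit(X, Y, 1)
print(np.isclose(beta, slope) and np.isclose(alpha, intercept))  # True
```

The `ddof` terms cancel in the ratio, so sample or population versions of cov/var give the same slope.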
Evaluate fit using R squared
|
#Load another set
X_test = np.array([8,9,11,16,12])
Y_test = np.array([11,8.5,15,18,11])
model.fit(X_test.reshape(-1,1),Y_test)
model.predict(X_test.reshape(-1,1))
def total_sum_squares(Y):
    return np.sum((Y - np.mean(Y))**2)
#Residual sum of squares
def residual_sum_squares(X, Y):
    return np.sum((Y - model.predict(X.reshape(-1,1)))**2)
#Get R squared manually
1 - residual_sum_squares(X_test, Y_test) / total_sum_squares(Y_test)
#From sklearn
model.score(X_test.reshape(-1,1),Y_test)
|
Chapters/Two/Simple Linear Regression.ipynb
|
fadeetch/Mastering-ML-Python
|
mit
|
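The R² computation above can also be packaged as a small standalone function, R² = 1 − RSS/TSS, checked here on toy values:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - RSS/TSS."""
    rss = np.sum((y_true - y_pred) ** 2)
    tss = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - rss / tss

# Toy data: predictions close to the truth give R^2 near 1.
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.5, 7.0, 9.5])
print(r_squared(y_true, y_pred))
```

A perfect fit gives R² = 1, and predicting the mean of `y_true` everywhere gives R² = 0.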