Adding Datasets
b.add_dataset('mesh', compute_times=[0], dataset='mesh01')
b.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01')
b.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01')
development/examples/minimal_contact_binary.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Synthetics To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separately, instead of to the contact envelope.
print(b['mesh01@model'].components)
Plotting Meshes
afig, mplfig = b['mesh01@model'].plot(x='ws', show=True)
Orbits
afig, mplfig = b['orb01@model'].plot(x='ws', show=True)
Light Curves
afig, mplfig = b['lc01@model'].plot(show=True)
RVs
afig, mplfig = b['rv01@model'].plot(show=True)
The LMFIT package: https://lmfit.github.io/lmfit-py/index.html Example:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

x = np.array([0., 1., 2., 3.])
data = np.array([1.3, 1.8, 5., 10.7])
Other_files/LMFIT_tutorial.ipynb
aliojjati/aliojjati.github.io
mit
Let's visualize how a quadratic curve fits it:
plt.scatter(x, data)
xarray = np.arange(-1, 4, 0.1)
plt.plot(xarray, xarray**2, 'r-')      # Not the best fit
plt.plot(xarray, xarray**2 + 1, 'g-')
Let's build a general quadratic model:
def get_residual(vars, x, data):
    a = vars[0]
    b = vars[1]
    model = a * x**2 + b
    return data - model

vars = [1., 0.]
print(get_residual(vars, x, data))
print(sum(get_residual(vars, x, data)))

vars = [1., 1.]
print(sum(get_residual(vars, x, data)))

vars = [2., 0.]
print(sum(get_residual(vars, x, data)))
Questions? The leastsq function from scipy:
from scipy.optimize import leastsq

vars = [0., 0.]
out = leastsq(get_residual, vars, args=(x, data))
print(out)

vars = [1.06734694, 0.96428571]
print(sum(get_residual(vars, x, data)**2))

plt.scatter(x, data)
xarray = np.arange(-1, 4, 0.1)
plt.plot(xarray, xarray**2, 'r-')
plt.plot(xarray, xarray**2 + 1, 'g-')
fitted = vars[0] * xarray**2 + vars[1]
plt.plot(xarray, fitted, 'b-')
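As a cross-check on the numbers above: for the model a*x**2 + b, least squares has a closed-form solution, so the optimum that leastsq reports can be recovered directly with numpy's linear-algebra routines. A small sketch, independent of scipy:

```python
import numpy as np

x = np.array([0., 1., 2., 3.])
data = np.array([1.3, 1.8, 5., 10.7])

# Design matrix for the model a*x**2 + b: columns are x**2 and 1.
A = np.column_stack([x**2, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
print(coeffs)  # approximately [1.06734694, 0.96428571], as found by leastsq
```

This is the same minimization, just solved via the normal equations instead of iteratively.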
LMFIT: using Parameter objects instead of plain floats as variables. A Parameter value can:
- be varied in the fit
- have a fixed value
- have upper and/or lower bounds
- be constrained by an algebraic expression of other Parameter values

Ease of changing fitting algorithms. Once a fitting model is set up, one can change the fitting algorithm used to find the optimal solution without changing the objective function.

Improved estimation of confidence intervals. While scipy.optimize.leastsq() will automatically calculate uncertainties and correlations from the covariance matrix, the accuracy of these estimates is often questionable. To help address this, lmfit has functions to explicitly explore parameter space and determine confidence levels even for the most difficult cases.

Improved curve-fitting with the Model class. This extends the capabilities of scipy.optimize.curve_fit(), allowing you to turn a function that models your data into a Python class that helps you parametrize and fit data with that model. Many pre-built models for common lineshapes are included and ready to use.

minimize & Parameters:
from lmfit import minimize, Parameters

params = Parameters()
params.add('amp', value=0.)
params.add('offset', value=0.)

def get_residual(params, x, data):
    amp = params['amp'].value
    offset = params['offset'].value
    model = amp * x**2 + offset
    return data - model

out = minimize(get_residual, params, args=(x, data))
dir(out)
out.params
dir(out.params)
out.params.values
Fit values are the same as before!
out.__dict__
out.params['amp'].__dict__
Questions? Manipulating parameters: the Parameter class gives a lot of flexibility in manipulating the model parameters!
params['amp'].vary = False
out = minimize(get_residual, params, args=(x, data))
print(out.params)
print(out.chisqr)

params['amp'].value = 1.0673469387778385
out = minimize(get_residual, params, args=(x, data))
print(out.chisqr)
Another way of defining the parameters:
def get_residual(params, x, data):
    # amp = params['amp'].value
    # offset = params['offset'].value
    # xoffset = params['xoffset'].value
    parvals = params.valuesdict()
    amp = parvals['amp']
    offset = parvals['offset']
    model = amp * x**2 + offset
    return data - model
Other manipulations:
params = Parameters()
params.add('amp', value=0.)
# params['amp'] = Parameter(value=..., min=...)
params.add('offset', value=0.)
params.add('xoffset', value=0.0, vary=False)
out = minimize(get_residual, params, args=(x, data))
print(out.params)

Image(filename='output.png', width=500, height=500)
Challenge: set parameter bounds for 'amp' using "min" and "max"
params['offset'].min = -10.
params['offset'].max = 10.
out = minimize(get_residual, params, args=(x, data))
print(out.params)
stderr:
print(out.params['amp'].stderr)
correl:
print(out.params['amp'].correl)
report_fit: for a better report:
from lmfit import minimize, Parameters, Parameter, report_fit

result = minimize(get_residual, params, args=(x, data))
help(report_fit)

# write error report
report_fit(result.params)
Choosing different fitting methods:
Image(filename='fitting.png', width=500, height=500)
Challenge: run with two other methods, e.g. 'tnc' and 'powell', and compare the results:
result2 = minimize(get_residual, params, args=(x, data), method='tnc')
report_fit(result2.params)

result3 = minimize(get_residual, params, args=(x, data), method='powell')
report_fit(result3.params)
Complete report:
report_fit(result3)  # report_fit prints the report itself and returns None
Using expressions:
params.add('amp2', expr='(amp-offset)**2')

def get_residual(params, x, data):
    parvals = params.valuesdict()
    amp = parvals['amp']
    offset = parvals['offset']
    amp2 = parvals['amp2']
    model = amp * x**2 + amp2 * x**4 + offset
    return data - model

result4 = minimize(get_residual, params, args=(x, data))
report_fit(result4.params)
If you are in a terminal, you will see something like this:

Processing Data Dictionary
Processing Input File
Initializing Simulation
Reporting Surfaces
Beginning Primary Simulation
Initializing New Environment Parameters
Warming up {1}
Warming up {2}
Warming up {3}
Warming up {4}
Warming up {5}
Warming up {6}
Starting Simulation at 07/21 for CHICAGO_IL_USA COOLING .4% CONDITIONS DB=>MWB
Initializing New Environment Parameters
Warming up {1}
Warming up {2}
Warming up {3}
Warming up {4}
Warming up {5}
Warming up {6}
Starting Simulation at 01/21 for CHICAGO_IL_USA HEATING 99.6% CONDITIONS
Writing final SQL reports
EnergyPlus Run Time=00hr 00min 0.24sec

It's as simple as that to run using the EnergyPlus defaults, but all the EnergyPlus command line interface options are also supported. To get a description of the options available, as well as the defaults, you can call the Python built-in help function on the IDF.run method and it will print a full description of the options to the console.
help(idf.run)
docs/runningeplus.ipynb
santoshphilip/eppy
mit
Note: idf.run() works for E+ version >= 8.3. Running in parallel processes. If you have a computer with multiple cores, you may want to use all the cores. EnergyPlus allows you to run simulations on multiple cores. Here is an example script showing how to use eppy to run on multiple cores.
"""multiprocessing runs""" import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): """Make options for run, so that it runs like EPLaunch on Windows""" idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { 'ep_version':idfversionstr, # runIDFs needs the version number 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" # change this for your operating system IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" runs = [] # File is from the Examples Folder idfname = "HVACTemplate-5ZoneBaseboardHeat.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) runs.append([idf, theoptions]) # copy of previous file idfname = "HVACTemplate-5ZoneBaseboardHeat1.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) runs.append([idf, theoptions]) num_CPUs = 2 runIDFs(runs, num_CPUs) if __name__ == '__main__': main()
Running in parallel processes using generators. Maybe you want to run 100 or 1000 simulations. The code above will not let you do that, since it tries to load all 1000 files into memory. Instead you need to use generators (Python's secret sauce; if you don't know them, you should look into them). Here is the code using generators, so you can now simulate 1000 files. gmulti.py
"""multiprocessing runs using generators instead of a list when you are running a 100 files you have to use generators""" import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): """Make options for run, so that it runs like EPLaunch on Windows""" idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { 'ep_version':idfversionstr, # runIDFs needs the version number 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" # File is from the Examples Folder idfname1 = "HVACTemplate-5ZoneBaseboardHeat.idf" # copy of previous file idfname2 = "HVACTemplate-5ZoneBaseboardHeat1.idf" fnames = [idfname1, idfname1] idfs = (IDF(fname, epwfile) for fname in fnames) runs = ((idf, make_eplaunch_options(idf) ) for idf in idfs) num_CPUs = 2 runIDFs(runs, num_CPUs) if __name__ == '__main__': main()
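The difference between the list and generator versions is easy to see without EnergyPlus at all: a list comprehension materializes every item up front, while a generator expression yields items one at a time as runIDFs consumes them. A minimal, library-free sketch (`load` here is a hypothetical stand-in for constructing an IDF object):

```python
def load(name):
    """Hypothetical stand-in for an expensive IDF load."""
    return {"name": name}

names = [f"model_{i}.idf" for i in range(1000)]

# List version: all 1000 objects exist in memory at once.
eager = [load(n) for n in names]

# Generator version: each object is created only when it is consumed.
lazy = (load(n) for n in names)
first = next(lazy)    # only one object has been materialized so far
print(first["name"])  # model_0.idf
```

Passing `runs` as a generator of (idf, options) pairs means only the files currently being simulated are held in memory.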
True multi-processing. What if you want to run your simulations on multiple computers? What if those computers are on other networks (some at home, others in your office, others in your server room) and some on the cloud? There is an experimental repository where you can do this. Keep an eye on it: https://github.com/pyenergyplus/zeppy Make idf.run() work like EPLaunch. I like the function make_eplaunch_options. Can I use it to do a single run? Yes, you can. An explanation: EPLaunch is an application that comes with EnergyPlus on the Windows platform. It has default functionality that people become familiar with and come to expect. make_eplaunch_options sets up the idf.run() arguments so that it behaves the same way as EPLaunch. Here is the sample code; modify and use it for your needs.
"""single run EPLaunch style""" import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): """Make options for run, so that it runs like EPLaunch on Windows""" idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { # 'ep_version':idfversionstr, # runIDFs needs the version number # idf.run does not need the above arg # you can leave it there and it will be fine :-) 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" # change this for your operating system and E+ version IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" # File is from the Examples Folder idfname = "HVACTemplate-5ZoneBaseboardHeat.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) idf.run(**theoptions) if __name__ == '__main__': main()
Debugging and reporting problems Debugging issues with IDF.run() used to be difficult, since you needed to go and hunt for the eplusout.err file, and the error message returned was not at all helpful. Now the output from EnergyPlus is returned in the error message, as well as the location and contents of eplusout.err. For example, this is the error message produced when running an IDF which contains an “HVACTemplate:Thermostat” object without passing expand_objects=True to idf.run():
E   eppy.runner.run_functions.EnergyPlusRunError:
E   Program terminated: EnergyPlus Terminated--Error(s) Detected.
E
E   Contents of EnergyPlus error file at C:\Users\jamiebull1\git\eppy\eppy\tests\test_dir\eplusout.err
E   Program Version,EnergyPlus, Version 8.9.0-40101eaafd, YMD=2018.10.14 20:49,
E   ** Severe ** Line: 107 You must run the ExpandObjects program for "HVACTemplate:Thermostat"
E   ** Fatal ** Errors occurred on processing input file. Preceding condition(s) cause termination.
E   ...Summary of Errors that led to program termination:
E   ..... Reference severe error count=1
E   ..... Last severe error=Line: 107 You must run the ExpandObjects program for "HVACTemplate:Thermostat"
E   ************* Warning: Node connection errors not checked - most system input has not been read (see previous warning).
E   ************* Fatal error -- final processing. Program exited before simulations began. See previous error messages.
E   ************* EnergyPlus Warmup Error Summary. During Warmup: 0 Warning; 0 Severe Errors.
E   ************* EnergyPlus Sizing Error Summary. During Sizing: 0 Warning; 0 Severe Errors.
E   ************* EnergyPlus Terminated--Fatal Error Detected. 0 Warning; 1 Severe Errors; Elapsed Time=00hr 00min 0.16sec
Define formulae
def peakdens1D(x, k):
    f1 = (3-k**2)**0.5/(6*math.pi)**0.5*np.exp(-3*x**2/(2*(3-k**2)))
    f2 = 2*k*x*math.pi**0.5/6**0.5*stats.norm.pdf(x)*stats.norm.cdf(k*x/(3-k**2)**0.5)
    out = f1 + f2
    return out

def peakdens2D(x, k):
    f1 = 3**0.5*k**2*(x**2-1)*stats.norm.pdf(x)*stats.norm.cdf(k*x/(2-k**2)**0.5)
    f2 = k*x*(3*(2-k**2))**0.5/(2*math.pi) * np.exp(-x**2/(2-k**2))
    f31 = 6**0.5/(math.pi*(3-k**2))**0.5*np.exp(-3*x**2/(2*(3-k**2)))
    f32 = stats.norm.cdf(k*x/((3-k**2)*(2-k**2))**0.5)
    out = f1 + f2 + f31*f32
    return out

def peakdens3D(x, k):
    fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36)
    fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.)
    fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.))
    fd213 = 3./2.
    fd21 = (fd211 + fd212 + fd213)
    fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5)
    fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5))
    fd2 = fd21*fd22*fd23
    fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1.
    fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5)
    fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5))
    fd3 = fd31 * fd32 * fd33
    fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.))
    fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5)
    fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.)))
    fd4 = fd41*fd42*fd43
    fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.)
    f521low = np.array([-10., -10.])
    f521up = np.array([0., k*x/2.**(0.5)])
    f521mu = np.array([0., 0.])
    f521sigma = np.array([[3./2., -1.], [-1., (3.-k**2.)/2.]])
    fd521, i = stats.mvn.mvnun(f521low, f521up, f521mu, f521sigma)
    f522low = np.array([-10., -10.])
    f522up = np.array([0., k*x/2.**(0.5)])
    f522mu = np.array([0., 0.])
    f522sigma = np.array([[3./2., -1./2.], [-1./2., (2.-k**2.)/2.]])
    fd522, i = stats.mvn.mvnun(f522low, f522up, f522mu, f522sigma)
    fd5 = fd51*(fd521+fd522)
    out = fd1*(fd2+fd3+fd4+fd5)
    return out
peakdistribution/chengschwartzman_thresholdfree_distribution_simulation.ipynb
jokedurnez/neuropower_extended
mit
Apply formulae to a range of x-values
xs = np.arange(-4, 10, 0.01).tolist()
ys_3d_k01 = []
ys_3d_k05 = []
ys_3d_k1 = []
ys_2d_k01 = []
ys_2d_k05 = []
ys_2d_k1 = []
ys_1d_k01 = []
ys_1d_k05 = []
ys_1d_k1 = []
for x in xs:
    ys_1d_k01.append(peakdens1D(x, 0.1))
    ys_1d_k05.append(peakdens1D(x, 0.5))
    ys_1d_k1.append(peakdens1D(x, 1))
    ys_2d_k01.append(peakdens2D(x, 0.1))
    ys_2d_k05.append(peakdens2D(x, 0.5))
    ys_2d_k1.append(peakdens2D(x, 1))
    ys_3d_k01.append(peakdens3D(x, 0.1))
    ys_3d_k05.append(peakdens3D(x, 0.5))
    ys_3d_k1.append(peakdens3D(x, 1))
Figure 1 from paper
plt.figure(figsize=(7,5))
plt.plot(xs, ys_1d_k01, color="black", ls=":", lw=2)
plt.plot(xs, ys_1d_k05, color="black", ls="--", lw=2)
plt.plot(xs, ys_1d_k1, color="black", ls="-", lw=2)
plt.plot(xs, ys_2d_k01, color="blue", ls=":", lw=2)
plt.plot(xs, ys_2d_k05, color="blue", ls="--", lw=2)
plt.plot(xs, ys_2d_k1, color="blue", ls="-", lw=2)
plt.plot(xs, ys_3d_k01, color="red", ls=":", lw=2)
plt.plot(xs, ys_3d_k05, color="red", ls="--", lw=2)
plt.plot(xs, ys_3d_k1, color="red", ls="-", lw=2)
plt.ylim([-0.1, 0.55])
plt.xlim([-4, 4])
plt.show()
Apply the distribution to simulated data, with peaks extracted using FSL. I now simulate a random field, extract peaks with FSL, and compare these simulated peaks with the theoretical distribution.
os.chdir("/Users/Joke/Documents/Onderzoek/ProjectsOngoing/Power/WORKDIR/")
sm = 1
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1, sk=smooth_sd, shape=(500,500,500), noise_level=1)
minimum = data.min()
newdata = data - minimum  # little trick because fsl.model.Cluster ignores negative values
img = nib.Nifti1Image(newdata, np.eye(4))
img.to_filename(os.path.join("RF_"+str(sm)+".nii.gz"))

cl = fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file = os.path.join("RF_"+str(sm)+".nii.gz")
cl.inputs.out_localmax_txt_file = os.path.join("locmax_"+str(sm)+".txt")
cl.inputs.num_maxima = 10000000
cl.inputs.connectivity = 26
cl.inputs.terminal_output = 'none'
cl.run()

plt.figure(figsize=(6,4))
plt.imshow(data[1:20,1:20,1])
plt.colorbar()
plt.show()

peaks = pd.read_csv("locmax_"+str(1)+".txt", sep="\t").drop('Unnamed: 5', 1)
peaks.Value = peaks.Value + minimum
500.**3/len(peaks)

twocol = cb.qualitative.Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.hist(peaks.Value, lw=0, facecolor=twocol[0], normed=True,
         bins=np.arange(-5,5,0.1), label="observed distribution")
plt.xlim([-2,5])
plt.ylim([0,0.6])
plt.plot(xs, ys_3d_k1, color=twocol[1], lw=3, label="theoretical distribution")
plt.title("histogram")
plt.xlabel("peak height")
plt.ylabel("density")
plt.legend(loc="upper left", frameon=False)
plt.show()

peaks[1:5]
Are the peaks independent? Below, I take a random sample of peaks to compute distances, for computational ease. With 10K peaks, it already takes 15 minutes to compute all distances.
ss = 10000
smpl = np.random.choice(len(peaks), ss, replace=False)
peaksmpl = peaks.loc[smpl].reset_index()
Compute distances between peaks and the difference in their height.
dist = []
diff = []
for p in range(ss):
    for q in range(p+1, ss):
        xd = peaksmpl.x[q] - peaksmpl.x[p]
        yd = peaksmpl.y[q] - peaksmpl.y[p]
        zd = peaksmpl.z[q] - peaksmpl.z[p]
        if not any(x > 20 or x < -20 for x in [xd, yd, zd]):
            dist.append(np.sqrt(xd**2 + yd**2 + zd**2))
            diff.append(abs(peaksmpl.Value[p] - peaksmpl.Value[q]))
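The double loop above does O(ss^2) work in pure Python, which is why 10K peaks take so long. If memory allows, the same condensed pairwise distances and height differences can be computed in one shot with scipy.spatial.distance.pdist, and the ±20-voxel filter applied to the resulting arrays afterwards. A sketch on a tiny made-up set of peaks (the column layout mirrors the peaks table; this is an alternative, not the notebook's original code):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Tiny stand-in for the sampled peak coordinates and heights.
coords = np.array([[0., 0., 0.],
                   [3., 4., 0.],
                   [0., 0., 12.]])
values = np.array([1.0, 2.5, 0.5])

dist = pdist(coords)           # condensed Euclidean distance per pair
diff = pdist(values[:, None])  # |height difference| per pair
print(dist)  # [ 5. 12. 13.]
print(diff)  # [1.5 0.5 2. ]
```

Both arrays list the pairs in the same condensed order, so they can be filtered and binned together exactly as in the loop version.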
Take the mean of the height differences in bins of width 1.
mn = []
ds = np.arange(start=2, stop=100)
for d in ds:
    mn.append(np.mean(np.array(diff)[np.round(np.array(dist)) == d]))

twocol = cb.qualitative.Paired_12.mpl_colors
plt.figure(figsize=(7,5))
plt.plot(dist, diff, "r.", color=twocol[0], linewidth=0, label="combination of 2 points")
plt.xlim([2,20])
plt.plot(ds, mn, color=twocol[1], lw=4, label="average over all points in bins with width 1")
plt.title("Are peaks independent?")
plt.xlabel("Distance between peaks")
plt.ylabel("Difference between peak heights")
plt.legend(loc="upper left", frameon=False)
plt.show()

np.min(dist)

def nulprobdensEC(exc, peaks):
    f0 = exc*np.exp(-exc*(peaks-exc))
    return f0

def peakp(x):
    y = []
    iterator = (x,) if not isinstance(x, (tuple, list)) else x
    for i in iterator:
        y.append(integrate.quad(lambda x: peakdens3D(x, 1), -20, i)[0])
    return y

fig, axs = plt.subplots(1, 5, figsize=(13,3))
fig.subplots_adjust(hspace=.5, wspace=0.3)
axs = axs.ravel()
thresholds = [2, 2.5, 3, 3.5, 4]
bins = np.arange(2, 5, 0.5)
x = np.arange(2, 10, 0.1)
twocol = cb.qualitative.Paired_10.mpl_colors
for i in range(5):
    thr = thresholds[i]
    axs[i].hist(peaks.Value[peaks.Value > thr], lw=0, facecolor=twocol[i*2-2],
                normed=True, bins=np.arange(thr, 5, 0.1))
    axs[i].set_xlim([thr, 5])
    axs[i].set_ylim([0, 3])
    xn = x[x > thr]
    ynb = nulprobdensEC(thr, xn)
    ycs = []
    for n in xn:
        ycs.append(peakdens3D(n, 1)/(1-peakp(thr)[0]))
    axs[i].plot(xn, ycs, color=twocol[i*2-1], lw=3, label="C&S")
    axs[i].plot(xn, ynb, color=twocol[i*2-1], lw=3, linestyle="--", label="EC")
    axs[i].set_title("threshold:"+str(thr))
    axs[i].set_xticks(np.arange(thr, 5, 0.5))
    axs[i].set_yticks([1, 2])
    axs[i].legend(loc="upper right", frameon=False)
    axs[i].set_xlabel("peak height")
    axs[i].set_ylabel("density")
plt.show()
RLDS: Examples This colab provides some examples of RLDS usage based on real use cases. If you are looking for an introduction to RLDS, see the RLDS tutorial in Google Colab. <table class="tfo-notebook-buttons" align="left"> <td> <a href="https://colab.research.google.com/github/google-research/rlds/blob/main/rlds/examples/rlds_examples.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Run In Google Colab"/></a> </td> </table> Install Modules
!pip install rlds[tensorflow]
!pip install tfds-nightly --upgrade
!pip install envlogger
!apt-get install libgmp-dev
rlds/examples/rlds_examples.ipynb
google-research/rlds
apache-2.0
Import Modules
import functools

import rlds
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
Load dataset We can load the human dataset from the Panda Pick Place Can task of the Robosuite collection in TFDS. In these examples, we are assuming that certain fields are present in the steps, so datasets from different tasks will not be compatible.
dataset_config = 'human_dc29b40a'  # @param { isTemplate: true}
dataset_name = f'robosuite_panda_pick_place_can/{dataset_config}'
num_episodes_to_load = 30  # @param { isTemplate: true}
Learning from Demonstrations or Offline RL We consider the setup where an agent needs to solve a task specified by a reward $r$. We assume a dataset of episodes with the corresponding rewards is available for training. This includes: * The ORL setup [[1], 2] where the agent is trained solely from a dataset of episodes collected in the environment. * The LfD setup [[4], [5], [6], [7]] where the agent can also interact with the environment. Using one of the two provided datasets on the Robosuite PickPlaceCan environment, a typical RLDS pipeline would include the following steps: 1. Sample $K$ episodes from the dataset, so the performance of the trained agent can be expressed as a function of the number of available episodes. 2. Combine the observations used as input to the agent. The Robosuite datasets include many fields in the observations, and one could try to train the agent from the state or from the visual observations, for example. 3. Finally, convert the dataset of episodes into a dataset of transitions that can be consumed by algorithms such as SAC or TD3.
K = 5  # @param { isTemplate: true}
buffer_size = 30  # @param { isTemplate: true}

dataset = tfds.load(dataset_name, split=f'train[:{num_episodes_to_load}]')
dataset = dataset.shuffle(buffer_size, seed=42, reshuffle_each_iteration=False)
dataset = dataset.take(K)

def prepare_observation(step):
    """Filters the observation to only keep the state and flattens it."""
    observation_names = ['robot0_proprio-state', 'object-state']
    step[rlds.OBSERVATION] = tf.concat(
        [step[rlds.OBSERVATION][key] for key in observation_names], axis=-1)
    return step

dataset = rlds.transformations.map_nested_steps(dataset, prepare_observation)

def batch_to_transition(batch):
    """Converts a pair of consecutive steps to a custom transition format."""
    return {'s_cur': batch[rlds.OBSERVATION][0],
            'a': batch[rlds.ACTION][0],
            'r': batch[rlds.REWARD][0],
            's_next': batch[rlds.OBSERVATION][1]}

def make_transition_dataset(episode):
    """Converts an episode of steps to a dataset of custom transitions."""
    # Create a dataset of 2-step sequences with overlap of 1.
    batched_steps = rlds.transformations.batch(episode[rlds.STEPS], size=2, shift=1)
    return batched_steps.map(batch_to_transition)

transitions_ds = dataset.flat_map(make_transition_dataset)
Absorbing Terminal States in Imitation Learning Imitation learning is the setup where an agent tries to imitate a behavior, as defined by some sample episodes of that behavior. In particular, the reward is not specified. The dataset processing pipeline requires all the different pieces seen in the learning-from-demonstrations setup (create a train split, assemble the observation, ...), but it also has some specifics. One such specific is related to the particular role of the terminal state in imitation learning. While in standard RL tasks looping over a terminal state only adds zero reward, in imitation learning the assumption of zero reward for transitions from a terminal state back to itself induces a bias in algorithms like GAIL. One way to counter this bias was proposed in 1: learn the reward value of the transition from the absorbing state to itself. Implementation-wise, to tell a terminal state apart from other states, an absorbing bit is added to the observation (1 for a terminal state, 0 for a regular state). The dataset is also augmented with terminal-state-to-terminal-state transitions so the agent can learn from those transitions.
def duplicate_terminal_step(episode):
    """Duplicates the terminal step if the episode ends in one. Noop otherwise."""
    return rlds.transformations.concat_if_terminal(
        episode, make_extra_steps=tf.data.Dataset.from_tensors)

def convert_to_absorbing_state(step):
    padding = step[rlds.IS_TERMINAL]
    if step[rlds.IS_TERMINAL]:
        step[rlds.OBSERVATION] = tf.zeros_like(step[rlds.OBSERVATION])
        step[rlds.ACTION] = tf.zeros_like(step[rlds.ACTION])
        # This is no longer a terminal state as the episode loops indefinitely.
        step[rlds.IS_TERMINAL] = False
        step[rlds.IS_LAST] = False
    # Add the absorbing bit to the observation.
    step[rlds.OBSERVATION] = tf.concat([step[rlds.OBSERVATION], [padding]], 0)
    return step

absorbing_state_ds = rlds.transformations.apply_nested_steps(
    dataset, duplicate_terminal_step)
absorbing_state_ds = rlds.transformations.map_nested_steps(
    absorbing_state_ds, convert_to_absorbing_state)
Offline Analysis One significant use case we envision for RLDS is the offline analysis of collected datasets. There is no standard offline analysis procedure, as what is possible is limited only by the imagination of the user. This section presents a fictitious use case to illustrate how custom tags stored in an RL dataset can be processed as part of an RLDS pipeline. Let's assume we want to generate a histogram of the returns of the episodes present in the provided dataset of human episodes on the Robosuite PickPlaceCan environment. This dataset holds fixed-length episodes of 400 steps, but it also has a tag to indicate the actual end of the task. We consider here the histogram of returns of the variable-length episodes ending at the completion tag.
def placed_tag_is_set(step):
    return tf.not_equal(tf.math.count_nonzero(step['tag:placed']), 0)

def compute_return(steps):
    """Computes the return of the episode up to the 'placed' tag."""
    # Truncate the episode after the placed tag.
    steps = rlds.transformations.truncate_after_condition(
        steps, truncate_condition=placed_tag_is_set)
    return rlds.transformations.sum_dataset(steps, lambda step: step[rlds.REWARD])

returns_ds = dataset.map(lambda episode: compute_return(episode[rlds.STEPS]))
Initial set-up Load experiments for unified dataset: - Steady-state activation [Li1997] - Activation time constant [Li1997] - Steady-state inactivation [Li1997] - Inactivation time constant [Sun1997] - Recovery time constant [Li1997]
from experiments.ical_li import (li_act_and_tau,
                                 li_inact_1000,
                                 li_inact_kin_80,
                                 li_recov)
modelfile = 'models/courtemanche_ical.mmt'
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot steady-state and tau functions
from ionchannelABC.visualization import plot_variables

sns.set_context('talk')
V = np.arange(-80, 40, 0.01)
cou_par_map = {'di': 'ical.d_inf',
               'fi': 'ical.f_inf',
               'dt': 'ical.tau_d',
               'ft': 'ical.tau_f'}
f, ax = plot_variables(V, cou_par_map, 'models/courtemanche_ical.mmt', figshape=(2,2))
Activation gate ($d$) calibration Combine model and experiments to produce: - observations dataframe - model function to run experiments and return traces - summary statistics function to accept traces
observations, model, summary_statistics = setup(modelfile, li_act_and_tau)
assert len(observations) == len(summary_statistics(model({})))
g = plot_sim_results(modelfile, li_act_and_tau)
Set up prior ranges for each parameter in the model. See the modelfile for further information on specific parameters. Prepending `log_` has the effect of setting the parameter in log space.
limits = {'ical.p1': (-100, 100), 'ical.p2': (0, 50), 'log_ical.p3': (-7, 3), 'ical.p4': (-100, 100), 'ical.p5': (0, 50)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs())))
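The setup helpers presumably map any `log_`-prefixed sample back to linear space before running the model; a minimal sketch of that convention (assuming base-10 logs, matching the `np.log10` used for the reference values in the analysis cells below):

```python
import math

def decode_parameters(params):
    """Map 'log_'-prefixed parameters back to linear space.

    Assumed convention: base-10 logarithm, consistent with the np.log10
    used when plotting the original model values against the posterior.
    """
    decoded = {}
    for name, value in params.items():
        if name.startswith("log_"):
            decoded[name[4:]] = 10 ** value
        else:
            decoded[name] = value
    return decoded

sample = {"log_ical.p3": -2.0, "ical.p1": 15.0}
decoded = decode_parameters(sample)
print(decoded)  # → {'ical.p3': 0.01, 'ical.p1': 15.0}
```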
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Run ABC calibration
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "courtemanche_ical_dgate_unified.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG) pop_size = theoretical_population_size(2, len(limits)) print("Theoretical minimum population size is {} particles".format(pop_size)) abc = ABCSMC(models=model, parameter_priors=prior, distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(1000), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=8), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Analysis of results
df, w = history.get_distribution(m=0) df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, li_act_and_tau, df=df, w=w) plt.tight_layout() m,_,_ = myokit.load(modelfile) originals = {} for name in limits.keys(): if name.startswith("log"): name_ = name[4:] else: name_ = name val = m.value(name_) if name.startswith("log"): val_ = np.log10(val) else: val_ = val originals[name] = val_ sns.set_context('paper') g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals) plt.tight_layout() import pandas as pd N = 100 cou_par_samples = df.sample(n=N, weights=w, replace=True) cou_par_samples = cou_par_samples.set_index([pd.Index(range(N))]) cou_par_samples = cou_par_samples.to_dict(orient='records') sns.set_context('talk') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 V = np.arange(-80, 40, 0.01) f, ax = plot_variables(V, cou_par_map, 'models/courtemanche_ical.mmt', [cou_par_samples], figshape=(2,2))
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Voltage-dependent inactivation gate ($f$) calibration
observations, model, summary_statistics = setup(modelfile, li_inact_1000, li_inact_kin_80, li_recov) assert len(observations)==len(summary_statistics(model({}))) g = plot_sim_results(modelfile, li_inact_1000, li_inact_kin_80, li_recov) limits = {'log_ical.q1': (0, 3), 'log_ical.q2': (-2, 3), 'log_ical.q3': (-4, 0), 'ical.q4': (-100, 100), 'log_ical.q5': (-4, 0), 'ical.q6': (-100, 100), 'ical.q7': (0, 50)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs())))
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Run ABC calibration
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "courtemanche_ical_fgate_unified.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG) pop_size = theoretical_population_size(2, len(limits)) print("Theoretical minimum population size is {} particles".format(pop_size)) abc = ABCSMC(models=model, parameter_priors=prior, distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(2000), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=8), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Analysis of results
df, w = history.get_distribution() df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, li_inact_1000, li_inact_kin_80, li_recov, df=df, w=w) plt.tight_layout() m,_,_ = myokit.load(modelfile) originals = {} for name in limits.keys(): if name.startswith("log"): name_ = name[4:] else: name_ = name val = m.value(name_) if name.startswith("log"): val_ = np.log10(val) else: val_ = val originals[name] = val_ sns.set_context('paper') g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals) plt.tight_layout() import pandas as pd N = 100 cou_par_samples = df.sample(n=N, weights=w, replace=True) cou_par_samples = cou_par_samples.set_index([pd.Index(range(N))]) cou_par_samples = cou_par_samples.to_dict(orient='records') sns.set_context('talk') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 V = np.arange(-80, 40, 0.01) f, ax = plot_variables(V, cou_par_map, 'models/courtemanche_ical.mmt', [cou_par_samples], figshape=(2,2))
docs/examples/human-atrial/courtemanche_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Construct the model m
# Impedance, imp VP RHO imp = np.ones(50) * 2550 * 2650 imp[10:15] = 2700 * 2750 imp[15:27] = 2400 * 2450 imp[27:35] = 2800 * 3000 plt.plot(imp)
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
But I really want to use the reflectivity, so let's compute that:
D = convmtx([-1, 1], imp.size)[:, :-1] D r = D @ imp plt.plot(r[:-1])
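`convmtx` isn't defined in this excerpt — it's presumably a MATLAB-style convolution-matrix helper defined earlier in the notebook. One NumPy sketch of the idea, using the column convention where `convmtx(h, n) @ x` equals `np.convolve(h, x)`; the exact orientation and slicing used above depend on the helper's own convention, so treat this as a guess at its definition:

```python
import numpy as np

def convmtx(h, n):
    """Convolution matrix: convmtx(h, n) @ x == np.convolve(h, x) for len(x) == n.

    A sketch of a MATLAB-style convmtx helper, not necessarily identical
    to the one used in this notebook.
    """
    h = np.asarray(h, dtype=float)
    M = np.zeros((len(h) + n - 1, n))
    for i in range(n):
        M[i:i + len(h), i] = h  # column i holds h shifted down by i rows
    return M

x = np.array([1.0, 2.0, -1.0, 3.0])
# matrix-vector product reproduces ordinary convolution
print(convmtx([-1, 1], x.size) @ x)       # → [-1. -1.  3. -4.  3.]
print(np.convolve([-1, 1], x))            # → same result
```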
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
I don't know how best to control the magnitude of the coefficients or how to combine this matrix with G, so for now we'll stick to the model m being the reflectivity, calculated the normal way.
m = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1]) plt.plot(m)
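A quick numerical check on this formula: for small impedance contrasts, the reflectivity r = (Z2 − Z1)/(Z2 + Z1) is well approximated by half the difference of log-impedance, which is one reason log(Z) is a convenient alternative model parameterization. Using the impedance values from the model above:

```python
import numpy as np

# The four distinct impedance values used in the model above.
Z = np.array([2550 * 2650.0, 2700 * 2750.0, 2400 * 2450.0, 2800 * 3000.0])

r_exact = (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])   # normal-incidence reflectivity
r_approx = 0.5 * np.diff(np.log(Z))             # small-contrast approximation

# The two agree to a few parts in a thousand for contrasts this mild.
print(np.max(np.abs(r_exact - r_approx)))
```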
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
Forward operator: convolution with wavelet Now we make the kernel matrix G, which represents convolution.
from scipy.signal import ricker wavelet = ricker(40, 2) plt.plot(wavelet) # Downsampling: set to 1 to use every sample. s = 2 # Make G. G = convmtx(wavelet, m.size)[::s, 20:70] plt.imshow(G, cmap='viridis', interpolation='none') # Or we can use bruges (pip install bruges) # from bruges.filters import ricker # wavelet = ricker(duration=0.04, dt=0.001, f=100) # G = convmtx(wavelet, m.size)[::s, 21:71] # f, (ax0, ax1) = plt.subplots(1, 2) # ax0.plot(wavelet) # ax1.imshow(G, cmap='viridis', interpolation='none', aspect='auto')
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
Forward model the data d Now we can perform the forward problem: computing the data.
d = G @ m
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
Let's visualize these components for fun...
def add_subplot_axes(ax, rect, axisbg='w'): """ Facilitates the addition of a small subplot within another plot. From: http://stackoverflow.com/questions/17458580/ embedding-small-plots-inside-subplots-in-matplotlib License: CC-BY-SA Args: ax (axis): A matplotlib axis. rect (list): A rect specifying [left pos, bot pos, width, height] Returns: axis: The sub-axis in the specified position. """ def axis_to_fig(axis): fig = axis.figure def transform(coord): a = axis.transAxes.transform(coord) return fig.transFigure.inverted().transform(a) return transform fig = plt.gcf() left, bottom, width, height = rect trans = axis_to_fig(ax) x1, y1 = trans((left, bottom)) x2, y2 = trans((left + width, bottom + height)) subax = fig.add_axes([x1, y1, x2 - x1, y2 - y1]) x_labelsize = subax.get_xticklabels()[0].get_size() y_labelsize = subax.get_yticklabels()[0].get_size() x_labelsize *= rect[2] ** 0.5 y_labelsize *= rect[3] ** 0.5 subax.xaxis.set_tick_params(labelsize=x_labelsize) subax.yaxis.set_tick_params(labelsize=y_labelsize) return subax from matplotlib import gridspec, spines fig = plt.figure(figsize=(12, 6)) gs = gridspec.GridSpec(5, 8) # Set up axes. axw = plt.subplot(gs[0, :5]) # Wavelet. 
axg = plt.subplot(gs[1:4, :5]) # G axm = plt.subplot(gs[:, 5]) # m axe = plt.subplot(gs[:, 6]) # = axd = plt.subplot(gs[1:4, 7]) # d cax = add_subplot_axes(axg, [-0.08, 0.05, 0.03, 0.5]) params = {'ha': 'center', 'va': 'bottom', 'size': 40, 'weight': 'bold', } axw.plot(G[5], 'o', c='r', mew=0) axw.plot(G[5], 'r', alpha=0.4) axw.locator_params(axis='y', nbins=3) axw.text(1, 0.6, "wavelet", color='k') im = axg.imshow(G, cmap='viridis', aspect='1', interpolation='none') axg.text(45, G.shape[0]//2, "G", color='w', **params) axg.axhline(5, color='r') plt.colorbar(im, cax=cax) y = np.arange(m.size) axm.plot(m, y, 'o', c='r', mew=0) axm.plot(m, y, c='r', alpha=0.4) axm.text(0, m.size//2, "m", color='k', **params) axm.invert_yaxis() axm.locator_params(axis='x', nbins=3) axe.set_frame_on(False) axe.set_xticks([]) axe.set_yticks([]) axe.text(0.5, 0.5, "=", color='k', **params) y = np.arange(d.size) axd.plot(d, y, 'o', c='b', mew=0) axd.plot(d, y, c='b', alpha=0.4) axd.plot(d[5], y[5], 'o', c='r', mew=0, ms=10) axd.text(0, d.size//2, "d", color='k', **params) axd.invert_yaxis() axd.locator_params(axis='x', nbins=3) for ax in fig.axes: ax.xaxis.label.set_color('#888888') ax.tick_params(axis='y', colors='#888888') ax.tick_params(axis='x', colors='#888888') for child in ax.get_children(): if isinstance(child, spines.Spine): child.set_color('#aaaaaa') # For some reason this doesn't work... for _, sp in cax.spines.items(): sp.set_color('w') # But this does... cax.xaxis.label.set_color('#ffffff') cax.tick_params(axis='y', colors='#ffffff') cax.tick_params(axis='x', colors='#ffffff') fig.tight_layout() plt.show()
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
Note that G @ m gives us exactly the same result as np.convolve(wavelet, m) (downsampled the same way). This is just another way of implementing convolution that lets us use linear algebra to perform the operation, and its inverse.
plt.plot(np.convolve(wavelet, m, mode='same')[::s], 'blue', lw=3) plt.plot(G @ m, 'red')
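To illustrate the "and its inverse" part: once convolution is written as a matrix, a noise-free model can be recovered from the data by least squares. This is a toy sketch with a stand-in wavelet and reflectivity, not the notebook's arrays — with noisy or band-limited real data you would need regularization:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.25, 0.5, 1.0, 0.5, 0.25])   # stand-in wavelet
m_true = rng.standard_normal(30)            # stand-in reflectivity

# Full convolution matrix: column i is the wavelet shifted by i samples.
G = np.array([np.convolve(w, row) for row in np.eye(30)]).T

d = G @ m_true                                   # forward model
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)    # inverse: least squares
print(np.allclose(m_est, m_true))                # → True (noise-free data)
```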
NumPy_reflectivity.ipynb
kwinkunks/axb
apache-2.0
Plot something with Matlab:
%%matlab %%%%%%%%%%%% %%% Plot Test to check that Matlab is loaded properly %%%%%%%%%%%% a = linspace(0.01,6*pi,100); plot(sin(a)) grid on hold on plot(cos(a),'r')
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
First Part (Hilbert for a random signal) Hilbert Transform Here we look at how padding affects the phase when taking the Hilbert transform. This code is from Javier for a simple signal.
%%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Checks if padding a signal makes a difference when taking the %%% Hilbert transform to find the phase. %%% %%% code from Javier (email, 3/07/2014) %%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% signal = randn(1000,1); hsig = hilbert(signal); %% padding with the first and last values; ideally we would like to keep the trend too, %% but this is just a quick example signalPadded = [repmat(signal(1),100,1);signal;repmat(signal(end),100,1)]; hsigpad = hilbert(signalPadded); %% the code above does something like this: for "signal" --> "ssssssssignallllllllll" %plot(angle(signal)) % meaningless because this is not the instantaneous phase hold on title('Phase with angle() for simple and padded signal') plot(angle(hsig),'b') %% this is the phase :) plot(angle(hsigpad(101:end-100)),'r') %% the phase of the central part of the padded signal %%% Get analytical phase (as per http://www.scholarpedia.org/article/Hilbert_transform_for_brain_waves) %% plot phase from atan for the initial signal and the padded one %figure; hold on; phase = atan(imag(hsig)./signal); phasepad = atan(imag(hsigpad(101:end-100))./signalPadded(101:end-100)); %plot(unwrap(phase), 'b') %plot(unwrap(phasepad), 'r') figure; hold on; title('Analytical phase (atan) for simple and padded signal'); plot(phase, 'b') plot(phasepad, 'r') figure; hold on; title('Unwrapped analytical phase atan2 for simple and padded signal'); plot(unwrap(atan2(imag(hsig), signal)), 'b') plot(unwrap(atan2(imag(hsigpad(101:end-100)), signalPadded(101:end-100))), 'r') %% these do the same as the two lines above %plot(unwrap(angle(hsig)), 'b') %plot(unwrap(angle(hsigpad(101:end-100))), 'r')
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
For a complex number z, angle(z) is equal to theta = atan2(imag(z), real(z)). Quick check below, with a comparison to the unwrapped phase:
%%matlab signal = randn(10,1); hilbert(signal); %phase = atan(signal./imag(hilbert(signal))) phase = atan2(imag(hilbert(signal)), signal); x = imag(hilbert(signal)); hold on % the first 2 below are exactly the same, so they are indeed the same plot(angle(hilbert(signal)), 'g') plot(phase, 'b') plot(unwrap(phase), 'r')
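The same check can be done in NumPy. The FFT construction below is the standard one-sided-spectrum definition of the analytic signal (which is what MATLAB's hilbert() computes); angle() and atan2() agree by definition:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (one-sided spectrum construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

rng = np.random.default_rng(0)
sig = rng.standard_normal(16)
z = analytic_signal(sig)

# angle(z) and atan2(imag(z), real(z)) are the same quantity,
# as the MATLAB cell above observes.
print(np.allclose(np.angle(z), np.arctan2(z.imag, z.real)))  # → True
# The real part of the analytic signal is the original signal.
print(np.allclose(z.real, sig))                              # → True
```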
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Second Part (Hilbert for padded/unpadded MEG) Looking at how padding affects the phase of cleaned MEG signals when taking the Hilbert transform:
%%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Loads cleaned MEG epoch. %%% Padds the signal with 200 columns at the beginning with the first column of the epoch %%% and 200 columns at the end with the last column of the epoch. %%% Computes Hilbert transform of unpadded and padded signals. %%% %%% OBS: In this cell, the code takes the Hilbert transform of %%% the "channels (rows) X time samples (columns)" matrix %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% add data folder to path addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/07'); epoch = load('MEGnoECG_07_segm9.mat'); signal = epoch.meg_no_ecg; % size(signal) %% this is 148 (channels) x 1695 (samples) %%% subtract mean before doing anything else? %% plot the MEG signals %plot(signal) hsignal = hilbert(signal); %% pad signal signalPadded = [repmat(signal(:,1),1,200) signal repmat(signal(:,end),1,200)]; % size(signalPadded) %% this is 148 x 1895 --> correct as we padded with 200 columns in total %signalPadded(:,98:102) %% extra check --> OK %signalPadded(:, 1793:1799) %% extra check --> OK hsignalPadded = hilbert(signalPadded); %%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Plots the phase for the unpadded MEG signal %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% figure; hold on; title('Phase for unpadded signal') plot(angle(hsignal),'b') %%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Plots the phase for the padded MEG signal %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% figure; hold on; title('Phase for padded signal') %plot(angle(hsignalPadded(:,201:end-200)),'r') plot(angle(hsignalPadded),'r') %%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Plots the unwrapped phase for the unpadded and padded signal %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% figure; hold on; title('Unwrapped Phase for unpadded and padded signal') plot(unwrap(angle(hsignal)),'b') %% these two lines below generate the same thing w.r.t the plot 
above?? plot(unwrap(angle(hsignalPadded(:,201:end-200))),'r') %plot(unwrap(angle(hsignalPadded)),'r')
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Third Part (Hilbert for padded/unpadded MEG) Here we take the Hilbert transform of the transposed MEG matrix which yields a time samples x channels matrix. In Matlab, the hilbert() function operates columnwise. I guess this is the correct way to compute the phase. Padding is done by: - appending 200 columns identical to the first column of the epoch at the beginning of the signal - appending 200 columns identical to the last column of the epoch at the end of the signal
%%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Load MEG signals for epochs specified in the "data" array. %%% Computes Hilbert transform of unpadded and padded version. %%% Plots the previous graphs. %%% %%% OBS: Code takes Hilbert transform of %%% the "time samples (rows) X channels (columns)" matrix (using transpose()). %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% add data folders to path addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/07'); addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/AL41D'); data = {'MEGnoECG_07_segm9.mat', ... 'MEGnoECG_07_segm20.mat', ... 'MEGnoECG_AL41D_segm6.mat', ... 'MEGnoECG_AL41D_segm29'}; for i=1:length(data) epoch = load( data{i} ); signal = epoch.meg_no_ecg; hsignal = hilbert( transpose(signal) ); %% pad signal (200 columns at beginning and end) and take Hilbert signalPadded = [repmat(signal(:,1),1,200) signal repmat(signal(:,end),1,200)]; hsignalPadded = hilbert( transpose(signalPadded) ); figure; hold on; title(strcat('Phase for unpadded signal ', data{i})) plot(angle(hsignal), 'b') figure; hold on; title(strcat('Phase for padded signal', data{i})) plot(angle(hsignalPadded(201:end-200,:)),'r') %% need to get rid of rows as I've transposed figure; hold on; title(strcat('Unwrapped Phase for unpadded and padded signal (all channels) ', data{i})) plot(unwrap(angle(hsignal)),'b') plot(unwrap(angle(hsignalPadded(201:end-200, :))),'r') figure; hold on; title(strcat('Phase for unpadded and padded signal (only last 5 channels)', data{i})) plot(angle(hsignal(:, end-5:end)),'b') plot(angle(hsignalPadded(201:end-200, end-5:end)),'r') figure; hold on; title(strcat('Unwrapped Phase for unpadded and padded signal (only last 5 channels)', data{i})) plot(unwrap(angle(hsignal(:, end-5:end))),'b') plot(unwrap(angle(hsignalPadded(201:end-200, end-5:end))),'r') end
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Fourth Part Here we again compare the phases of the unpadded and padded signals as in the Third Part, except that the padding is done by symmetric extension (also referred to previously as reflective padding). Padding is done using wextend().
%%matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Load MEG signals for epochs specified in the "data" array. %%% Computes Hilbert transform of unpadded and padded version. %%% Padding is done using symmetric extension. %%% Plots the previous graphs. %%% %%% OBS: Code takes Hilbert transform of %%% the "time samples (rows) X channels (columns)" matrix (using transpose()). %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% add data folders to path addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/07'); addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/AL41D'); data = {'MEGnoECG_07_segm9.mat', ... 'MEGnoECG_07_segm20.mat', ... 'MEGnoECG_AL41D_segm6.mat', ... 'MEGnoECG_AL41D_segm29'}; for i=1:length(data) epoch = load( data{i} ); signal = epoch.meg_no_ecg; hsignal = hilbert( transpose(signal) ); %% pad signal symmetrically (200 columns at beginning and end) and take Hilbert L = [0, 200]; %% add 0 rows, 200 columns (on each side) %% wextend(TYPE,MODE,X,L,LOC) signalPadded = wextend('2D', 'sym', signal, L); %% symmetric pading %size(signalPadded) % OK! 
hsignalPadded = hilbert( transpose(signalPadded) ); figure; hold on; title(strcat('Phase for unpadded signal ', data{i})) plot(angle(hsignal), 'b') figure; hold on; title(strcat('Phase for padded signal', data{i})) plot(angle(hsignalPadded(201:end-200,:)),'r') %% need to get rid of rows as I've transposed figure; hold on; title(strcat('Unwrapped Phase for unpadded and padded signal (all channels) ', data{i})) plot(unwrap(angle(hsignal)),'b') plot(unwrap(angle(hsignalPadded(201:end-200, :))),'r') figure; hold on; title(strcat('Phase for unpadded and padded signal (only last 5 channels)', data{i})) plot(angle(hsignal(:, end-5:end)),'b') plot(angle(hsignalPadded(201:end-200, end-5:end)),'r') figure; hold on; title(strcat('Unwrapped Phase for unpadded and padded signal (only last 5 channels)', data{i})) plot(unwrap(angle(hsignal(:, end-5:end))),'b') plot(unwrap(angle(hsignalPadded(201:end-200, end-5:end))),'r') end
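For reference, the NumPy analogue of wextend('2D', 'sym', signal, [0, 200]) is np.pad with mode='symmetric' along the time axis only (this assumes wextend's default half-point symmetric mode, where the edge sample itself is mirrored):

```python
import numpy as np

signal = np.arange(12.0).reshape(3, 4)   # 3 channels x 4 time samples
# Pad 2 columns on each side of the time axis, no padding on channels.
padded = np.pad(signal, ((0, 0), (2, 2)), mode='symmetric')

print(padded.shape)       # → (3, 8)
# First channel was [0, 1, 2, 3]; half-point mirroring gives
# [1, 0 | 0, 1, 2, 3 | 3, 2].
print(padded[0])          # → [1. 0. 0. 1. 2. 3. 3. 2.]
```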
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Fifth Part (Hilbert on padded/unpadded filtered MEG) This section looks at estimating instantaneous phase with FieldTrip. We filter our data and then apply the Hilbert transform. This can be done in ft_preprocessing() like so:
%%matlab % load header file load('/home/dragos/DTC/MSc/SummerProject/4D_header_adapted.mat'); filterOrder = 750; Fs = 169.54; %% add data folders to path addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/07'); addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/MEG_50863_noECG_10s/AL41D'); megFiles = {'MEGnoECG_07_segm9.mat', ... 'MEGnoECG_07_segm20.mat', ... 'MEGnoECG_AL41D_segm6.mat', ... 'MEGnoECG_AL41D_segm29'}; for i=1:length(megFiles) epoch = load( megFiles{i} ); % configuration structure cfg = []; cfg.channel = header.label; %% channels that will be read and/or preprocessed cfg.bpfilter = 'yes'; %% bandpass filter cfg.bpfreq = [0.5 4]; %% bandpass frequency range as [low high] in Hz (delta here) cfg.bpfiltord = filterOrder; %% bandpass filter order cfg.bpfilttype = 'fir'; %% band pass filter type - FIR cfg.bpfiltdir = 'twopass'; %% filter direction - two pass, like filtfilt cfg.demean = 'yes'; %% apply baseline correction (remove DC offset) cfg.hilbert = 'angle'; % this gives you just the phase, you can %specify 'complex' to get both phase and amplitude % data structure data = []; data.label = header.label; data.fsample = Fs; data.trial = {epoch.meg_no_ecg}; load('/home/dragos/DTC/MSc/SummerProject/src/timeVectorFor10s.mat'); data.time = {timeFor10s'}; startsample = 1; endsample = size(epoch.meg_no_ecg, 2); data.sampleinfo = [startsample endsample]; [processedData] = ft_preprocessing(cfg, data); %cfg.padding = cfg.padtype = 'mirror'; [processedDataWithPadding] = ft_preprocessing(cfg, data); figure; plot(1:1695,unwrap(processedData.trial{1})); end
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Looking at how padding affects the phase of RAW MEG signals when taking the Hilbert transform. Have to merge raw data first...
%%matlab %% add RAW data folder to path addpath('/home/dragos/DTC/MSc/SummerProject/data/MEG_AD_Thesis/50863') %unload_ext pymatbridge
sources/notebooks/testing_connectivity.ipynb
dnstanciu/masters-project
gpl-3.0
Fun with the BBBC021 dataset This uses the BBBC021 dataset, a screen of MCF-7 breast cancer cells treated with a library of 113 compounds. A subset (103 wells) has been annotated such that the compounds they were treated with have been placed in 12 categories. I ran the feature extraction using Microscopium's object features, and some additional features of my choosing. These are Haralick texture features and threshold adjacency statistics. Even though Microscopium is designed with unsupervised clustering and dimensionality reduction in mind, these annotated data-sets are useful because we can quantify how useful features are and whether or not the clustering scheme preserves the ground truth labels. The boring bit First we need to wrangle the metadata into a usable format.
# first load in the screen metadata bbbc021_metadata = pd.read_csv("./BBBC021_v1_image.csv") # now load in the mechanism of action metadata, these map # compouds to a class of compounds bbbc021_moa = pd.read_csv("./BBBC021_v1_moa.csv") # wrangle the metadata into a form that maps screen-plate-well format IDs # to the compound it was treated with. this dataset contains no controls or # empty wells, so we don't need to worry about those!! # first only keep the colums we want -- # Image_FileName_DAPI, Image_PathName_DAPI, Image_Metadata_Compound, Image_Metadata_Concentration bbbc021_metadata = bbbc021_metadata[["Image_FileName_DAPI", "Image_PathName_DAPI", "Image_Metadata_Compound", "Image_Metadata_Concentration"]] def fn_to_id(fn): sem = image_xpress.ix_semantic_filename(fn) return "{0}-{1}-{2}".format("BBBC021", sem["plate"], sem["well"]) # merge the Image_PathName_DAPI and Image_FileName_DAPI column with os.path.join fn_cols = zip(bbbc021_metadata["Image_PathName_DAPI"], bbbc021_metadata["Image_FileName_DAPI"]) bbbc021_metadata.index = list(map(fn_to_id, [os.path.join(i, j) for (i, j) in fn_cols])) bbbc021_metadata = bbbc021_metadata[["Image_Metadata_Compound", "Image_Metadata_Concentration"]] bbbc021_metadata.head() # good idea to check that different concentrations don't # change the expected mechanism of action in the annotations bbbc021_moa.groupby(['compound', 'moa']).count() # now merge the dataframes! right_cols = ["compound", "concentration"] bbbc021_merged = bbbc021_metadata.reset_index().merge( bbbc021_moa, how="outer", left_on=["Image_Metadata_Compound", "Image_Metadata_Concentration"], right_on=right_cols).set_index("index").dropna().drop_duplicates() # only a subset of the data was annotated -- 103 # how are the classes distributed? bbbc021_merged.head() bbbc021_merged.groupby("moa").count() # only one example for the "DMSO" class. remove this. 
bbbc021_merged = bbbc021_merged[bbbc021_merged["compound"] != "DMSO"] # now load the feature data frame bbbc021_complete = pd.read_csv("./BBBC021_feature.csv", index_col=0) # we only want the feature vectors for samples that were annotated bbbc021_feature = bbbc021_complete.ix[bbbc021_merged.index] bbbc021_feature.head() # Now scale the dataframe and we're good to go! std = StandardScaler().fit_transform(bbbc021_feature.values) bbbc021_feature = pd.DataFrame(std, columns=bbbc021_feature.columns, index=bbbc021_feature.index)
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Supervised learning Now we have our training data. Let's try it with a simple linear SVM using the full set of features. Why SVM? A linear SVM is simple, performs well and the weights can be used to quantify feature importance. First a quick 5-fold cross validation to check it can discriminate between classes.
classifier = svm.SVC(kernel='linear', C=1) scores = cross_validation.cross_val_score(classifier, bbbc021_feature.values, bbbc021_merged["moa"].values, cv=5) sum(scores / 5)
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Hey, that's not bad!! Previous studies have ~90% accuracy but they've done lots more fine-tuning of the features and segmentation pipeline. The features show the data is somewhat linearly separable. Let's see how object features, texture features and threshold adjacency statistics perform on their own.
object_cols = [col for col in bbbc021_feature.columns if "pftas" not in col and "haralick" not in col] haralick_cols = [col for col in bbbc021_feature.columns if "haralick" in col] pftas_cols = [col for col in bbbc021_feature.columns if "pftas" in col] scores = cross_validation.cross_val_score(classifier, bbbc021_feature[object_cols].values, bbbc021_merged["moa"].values, cv=5) sum(scores / 5) scores = cross_validation.cross_val_score(classifier, bbbc021_feature[haralick_cols].values, bbbc021_merged["moa"].values, cv=5) sum(scores / 5) scores = cross_validation.cross_val_score(classifier, bbbc021_feature[pftas_cols].values, bbbc021_merged["moa"].values, cv=5) sum(scores / 5)
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Wow! Object and PFTAS features do great on their own. What if we tried both?
object_pftas_cols = [col for col in bbbc021_feature.columns if "haralick" not in col] scores = cross_validation.cross_val_score(classifier, bbbc021_feature[object_pftas_cols].values, bbbc021_merged["moa"].values, cv=5) sum(scores / 5)
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
85%! That's the best performance yet. Let's look at some feature importance scores using ExtraTrees. We can use this model to get Gini coefficients for each feature.
et_classifier = ExtraTreesClassifier() et_classifier.fit(bbbc021_feature[object_pftas_cols].values, bbbc021_merged["moa"].values) feature_scores = pd.DataFrame(data={"feature": bbbc021_feature[object_pftas_cols].columns, "gini": et_classifier.feature_importances_}) feature_scores = feature_scores.sort_values(by="gini", ascending=False) top_k = 30 plt.barh(np.arange(top_k), feature_scores.head(top_k)["gini"], align="center", alpha=0.4) plt.ylim([top_k, -1]) plt.yticks(np.arange(top_k), feature_scores.head(30)["feature"]) plt.tight_layout() plt.title("Feature Importance for Annotated BBBC021 Data Subset") plt.xlabel("Gini Coefficient") plt.show()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Features across all three channels, and a mixture of both object and threshold adjacency statistics, contribute as the most important features. There's no evidence to suggest that one particular feature dominates here. Supervised Learning - Independence of Features How accurate is the classification if we take random subsets of features? We've shown pretty clearly that no feature dominates by means of the Gini coefficients, but we can demonstrate this further by training classifiers with random subsets of features and seeing how they perform. Let's train them with, say, 65% of the features.
n_features = bbbc021_feature[object_pftas_cols].shape[1] sample_size = int(np.round(n_features * 0.65)) all_scores = [] for i in range(10000): random_index = np.random.choice(n_features, sample_size, replace=False) scores = cross_validation.cross_val_score(classifier, bbbc021_feature[object_pftas_cols].iloc[:, random_index].values, bbbc021_merged["moa"].values, cv=5) cv_score = sum(scores / 5) all_scores.append(cv_score) pd.DataFrame(all_scores).describe()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
The accuracy of the classifier is robust against random subsetting of the features. The maximum accuracy is as high as ~90% on some subsets of the data. Unsupervised Learning The next task is to cluster the feature vectors and see how well the original labels are represented. I'll use agglomerative clustering with the cosine distance.
ag_clustering = AgglomerativeClustering(n_clusters=12, affinity="cosine", linkage="complete") ag_predict = ag_clustering.fit_predict(X=bbbc021_feature[object_pftas_cols].values) metrics.adjusted_rand_score(bbbc021_merged["moa"].values, ag_predict)
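The adjusted Rand score used above can be computed directly from pair counts; a stdlib sketch of the same quantity (useful for seeing why a score near 0 means "no better than chance", and why it can go negative):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index from the contingency table of the two labelings.

    The same quantity as sklearn's metrics.adjusted_rand_score: the Rand
    index corrected for the value expected under random labelings.
    """
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:       # degenerate case: everything identical
        return 1.0
    return (sum_comb - expected) / (max_index - expected)

# A pure label permutation still scores 1 ...
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
# ... while an uninformative split scores at or below 0.
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # → -0.5
```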
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Not great, but they're not completely random either. Dimensionality Reduction Now we look at the PCA and TSNE embeddings. This is the real test, seeing as the PCA and TSNE embeddings of the data drive the Microscopium interface.
rand_seed = 42 bbbc021_pca = PCA(n_components=2).fit_transform(bbbc021_feature[object_pftas_cols].values) bbbc021_pca_50 = PCA(n_components=50).fit_transform(bbbc021_feature[object_pftas_cols].values) bbbc021_tsne = TSNE(n_components=2, learning_rate=100, random_state=42).fit_transform(bbbc021_pca_50) labels = list(set(bbbc021_merged["moa"])) bmap = brewer2mpl.get_map("Paired", "Qualitative", 12) color_scale = dict(zip(labels, bmap.mpl_colors)) bbbc021_pca_df = pd.DataFrame(dict(x=bbbc021_pca[:, 0], y=bbbc021_pca[:, 1], label=bbbc021_merged["moa"].values), index=bbbc021_feature.index) groups = bbbc021_pca_df.groupby('label') fig, ax = plt.subplots() ax.margins(0.05) for name, group in groups: ax.scatter(group.x, group.y, s=45, label=name, c=color_scale[name]) ax.legend(scatterpoints=1, loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=4) plt.title("PCA") fig = plt.gcf() fig.subplots_adjust(bottom=0.2) plt.show()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
The PCA embedding isn't particularly useful.
bbbc021_tsne_df = pd.DataFrame(dict(x=bbbc021_tsne[:, 0],
                                    y=bbbc021_tsne[:, 1],
                                    label=bbbc021_merged["moa"].values),
                               index=bbbc021_feature.index)

groups = bbbc021_tsne_df.groupby('label')

fig, ax = plt.subplots()
ax.margins(0.05)
for name, group in groups:
    ax.scatter(group.x, group.y, s=45, label=name, c=color_scale[name])
ax.legend(scatterpoints=1, loc='upper center', bbox_to_anchor=(0.5, -0.05),
          fancybox=True, shadow=True, ncol=4)
plt.title("TSNE")
fig = plt.gcf()
fig.subplots_adjust(bottom=0.2)
plt.show()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
TSNE's embedding is much better. We get some tight clusters, and individual categories tend to stay close together. Can TSNE embeddings classify examples? Finally, we plot 10 randomly chosen unannotated examples together with the annotated ones. The idea here is to find samples that group together. We can then look up the compound and determine whether it's in the same category of compound. Conclusions Based on the performance of the supervised classifier, object features in tandem with threshold adjacency statistics perform the best. There's evidence to suggest the texture features are useless, but it'd be good to try this against another dataset with ground-truth labels.
np.random.seed(13)

# get the set difference of indices in the whole dataset, and the annotated indices
unannot_index = np.setdiff1d(bbbc021_complete.index, bbbc021_feature.index)

# get 10 random examples from the data frame
unannot_sample = np.random.choice(unannot_index, 10)

# combine these samples with the annotated ones, rescale the data-frame
bbbc021_new = bbbc021_complete.loc[bbbc021_feature.index.union(unannot_sample)]
bbbc021_new_std = StandardScaler().fit_transform(bbbc021_new.values)
bbbc021_new = pd.DataFrame(bbbc021_new_std,
                           columns=bbbc021_new.columns,
                           index=bbbc021_new.index)

# embed to tsne
bbbc021_new_pca_50 = PCA(n_components=50).fit_transform(bbbc021_new.values)
bbbc021_new_tsne = TSNE(n_components=2, learning_rate=45, random_state=rand_seed).fit_transform(bbbc021_new_pca_50)
bbbc021_new_tsne_df = pd.DataFrame(dict(x=bbbc021_new_tsne[:, 0],
                                        y=bbbc021_new_tsne[:, 1]),
                                   index=bbbc021_new.index)

# add moa labels
bbbc021_new_tsne_df = bbbc021_new_tsne_df.merge(bbbc021_merged, how="outer",
                                                left_index=True, right_index=True)

groups = bbbc021_new_tsne_df.fillna("No Annotation").groupby('moa')

fig, ax = plt.subplots()
ax.margins(0.05)
for name, group in groups:
    color = color_scale.get(name)
    if color is None:
        color = (1, 1, 1)
    ax.scatter(group.x, group.y, s=45, label=name, c=color)
ax.legend(scatterpoints=1, loc='upper center', bbox_to_anchor=(0.5, -0.05),
          fancybox=True, shadow=True, ncol=4)
plt.title("TSNE")

# annotate the unannotated samples with their IDs
for idx in unannot_sample:
    row = bbbc021_new_tsne_df.loc[idx]
    plt.annotate(
        idx,
        xy=(row[0], row[1]), xytext=(0, 20),
        textcoords='offset points', ha='right', va='bottom',
        bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
        arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=0'))

fig = plt.gcf()
fig.subplots_adjust(bottom=0.2)
plt.show()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
Woo, I've never matplotlibbed that hard before. Of the unannotated samples, I make the following observations:

- BBBC021-22141-D08 clusters together tightly with the Microtubule stabilizers
- BBBC021-25701-C07 and BBBC021-25681-C09 group together with Protein synthesis
- BBBC021-22161-F07 clusters together with the Aurora kinase inhibitors
- BBBC021-27821-C05 clusters loosely with DNA damagers
- BBBC021-34641-C10 groups with Kinase inhibitors

Are these groupings at all relevant? If we look up their corresponding compounds, does the mechanism of action of the samples they were clustered with agree? Let's get their compounds.
selected_indices = ["BBBC021-22141-D08", "BBBC021-25701-C07", "BBBC021-22161-F07",
                    "BBBC021-27821-C05", "BBBC021-25681-C09", "BBBC021-34641-C10"]
bbbc021_metadata.loc[selected_indices].drop_duplicates()
bbbc021_analysis.ipynb
microscopium/microscopium-scripts
bsd-3-clause
MinDiff Data Preparation <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/responsible_ai/model_remediation/min_diff/guide/min_diff_data_preparation"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-remediation/blob/master/docs/min_diff/guide/min_diff_data_preparation.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/model-remediation/blob/master/docs/min_diff/guide/min_diff_data_preparation.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td> <td> <a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/model-remediation/docs/min_diff/guide/min_diff_data_preparation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table></div> Introduction When implementing MinDiff, you will need to make complex decisions as you choose and shape your input before passing it on to the model. These decisions will largely determine the behavior of MinDiff within your model. This guide will cover the technical aspects of this process, but will not discuss how to evaluate a model for fairness, or how to identify particular slices and metrics for evaluation. Please see the Fairness Indicators guidance for details on this. To demonstrate MinDiff, this guide uses the UCI income dataset. The model task is to predict whether an individual has an income exceeding $50k, based on various personal attributes. This guide assumes there is a problematic gap in the FNR (false negative rate) between "Male" and "Female" slices and the model owner (you) has decided to apply MinDiff to address the issue. 
For more information on the scenarios in which one might choose to apply MinDiff, see the requirements page. Note: We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender. Further, we acknowledge that this task doesn’t represent a real-world use case, and is used only to demonstrate the technical details of the MinDiff library. MinDiff works by penalizing the difference in distribution scores between examples in two sets of data. This guide will demonstrate how to choose and construct these additional MinDiff sets as well as how to package everything together so that it can be passed to a model for training. Setup
!pip install --upgrade tensorflow-model-remediation

import tensorflow as tf
from tensorflow_model_remediation import min_diff
from tensorflow_model_remediation.tools.tutorials_utils import uci as tutorials_utils
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
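Before preparing data, it may help to see what kind of penalty MinDiff applies. The sketch below is a rough NumPy illustration of an MMD-style distribution-matching penalty between two groups' prediction scores — it is not the library's implementation, and the kernel choice, bandwidth, and function names are my own assumptions:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    # Pairwise Gaussian kernel between two 1-D arrays of prediction scores.
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def mmd_penalty(scores_a, scores_b, sigma=0.5):
    """Squared maximum mean discrepancy between two score distributions."""
    k_aa = gaussian_kernel(scores_a, scores_a, sigma).mean()
    k_bb = gaussian_kernel(scores_b, scores_b, sigma).mean()
    k_ab = gaussian_kernel(scores_a, scores_b, sigma).mean()
    return k_aa + k_bb - 2 * k_ab

rng = np.random.default_rng(0)
similar = mmd_penalty(rng.normal(0.7, 0.1, 100), rng.normal(0.7, 0.1, 100))
shifted = mmd_penalty(rng.normal(0.7, 0.1, 100), rng.normal(0.4, 0.1, 100))
print(similar < shifted)  # the penalty grows as the two score distributions diverge
```

Minimizing a term like this during training pushes the model's score distributions for the two MinDiff sets closer together, which is the intuition behind the data preparation in the rest of this guide.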
Original Data For demonstration purposes and to reduce runtimes, this guide uses only a sample fraction of the UCI Income dataset. In a real production setting, the full dataset would be utilized.
# Sampled at 0.3 for reduced runtimes.
train = tutorials_utils.get_uci_data(split='train', sample=0.3)

print(len(train), 'train examples')
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Converting to tf.data.Dataset MinDiffModel requires that the input be a tf.data.Dataset. If you were using a different format of input prior to integrating MinDiff, you will have to convert your input data. Use tf.data.Dataset.from_tensor_slices to convert to tf.data.Dataset. dataset = tf.data.Dataset.from_tensor_slices((x, y, weights)) dataset.shuffle(...) # Optional. dataset.batch(batch_size) See Model.fit documentation for details on equivalences between the two methods of input. In this guide, the input is downloaded as a Pandas DataFrame and therefore, needs this conversion.
# Function to convert a DataFrame into a tf.data.Dataset.
def df_to_dataset(dataframe, shuffle=True):
  dataframe = dataframe.copy()
  labels = dataframe.pop('target')
  ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
  if shuffle:
    ds = ds.shuffle(buffer_size=5000)  # Reasonable but arbitrary buffer_size.
  return ds

# Convert the train DataFrame into a Dataset.
original_train_ds = df_to_dataset(train)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Note: The training dataset has not been batched yet but it will be later. Creating MinDiff data During training, MinDiff will encourage the model to reduce differences in predictions between two additional datasets (which may include examples from the original dataset). The selection of these two datasets is the key decision which will determine the effect MinDiff has on the model. The two datasets should be picked such that the disparity in performance that you are trying to remediate is evident and well-represented. Since the goal is to reduce a gap in FNR between "Male" and "Female" slices, this means creating one dataset with only positively labeled "Male" examples and another with only positively labeled "Female" examples; these will be the MinDiff datasets. Note: The choice of using only positively labeled examples is directly tied to the target metric. This guide is concerned with false negatives which, by definition, are positively labeled examples that were incorrectly classified. First, examine the data present.
female_pos = train[(train['sex'] == ' Female') & (train['target'] == 1)]
male_pos = train[(train['sex'] == ' Male') & (train['target'] == 1)]

print(len(female_pos), 'positively labeled female examples')
print(len(male_pos), 'positively labeled male examples')
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
It is perfectly acceptable to create MinDiff datasets from subsets of the original dataset. While there aren't 5,000 or more positive "Male" examples as recommended in the requirements guidance, there are over 2,000 and it is reasonable to try with that many before collecting more data.
min_diff_male_ds = df_to_dataset(male_pos)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Positive "Female" examples, however, are much scarcer at 385. This is probably too small for good performance and so will require pulling in additional examples. Note: Since this guide began by reducing the dataset via sampling, this problem (and the corresponding solution) may seem contrived. However, it serves as a good example of how to approach concerns about the size of your MinDiff datasets.
full_uci_train = tutorials_utils.get_uci_data(split='train')
augmented_female_pos = full_uci_train[(full_uci_train['sex'] == ' Female') &
                                      (full_uci_train['target'] == 1)]

print(len(augmented_female_pos), 'positively labeled female examples')
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Using the full dataset has more than tripled the number of examples that can be used for MinDiff. It’s still low but it is enough to try as a first pass.
min_diff_female_ds = df_to_dataset(augmented_female_pos)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Both the MinDiff datasets are significantly smaller than the recommended 5,000 or more examples. While it is reasonable to attempt to apply MinDiff with the current data, you may need to consider collecting additional data if you observe poor performance or overfitting during training. Using tf.data.Dataset.filter Alternatively, you can create the two MinDiff datasets directly from the converted original Dataset. Note: When using .filter it is recommended to use .cache() if the dataset can easily fit in memory for runtime performance. If it is too large to do so, consider storing your filtered datasets in your file system and reading them in.
# Male
def male_predicate(x, y):
  # Positively labeled (target == 1) "Male" examples, matching min_diff_male_ds.
  return tf.logical_and(tf.equal(x['sex'], b' Male'), tf.equal(y, 1))

alternate_min_diff_male_ds = original_train_ds.filter(male_predicate).cache()

# Female
def female_predicate(x, y):
  return tf.logical_and(tf.equal(x['sex'], b' Female'), tf.equal(y, 1))

full_uci_train_ds = df_to_dataset(full_uci_train)
alternate_min_diff_female_ds = full_uci_train_ds.filter(female_predicate).cache()
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
The resulting alternate_min_diff_male_ds and alternate_min_diff_female_ds will be equivalent in output to min_diff_male_ds and min_diff_female_ds respectively. Constructing your Training Dataset As a final step, the three datasets (the two newly created ones and the original) need to be merged into a single dataset that can be passed to the model. Batching the datasets Before merging, the datasets need to be batched. The original dataset can use the same batching that was used before integrating MinDiff. The MinDiff datasets do not need to have the same batch size as the original dataset; in all likelihood, a smaller one will perform just as well. They don't even need to have the same batch size as each other, although it is recommended to do so for best performance. While not strictly necessary, it is recommended to use drop_remainder=True for the two MinDiff datasets as this will ensure that they have consistent batch sizes. Warning: The 3 datasets must be batched before they are merged together. Failing to do so will likely result in unintended input shapes that will cause errors downstream.
original_train_ds = original_train_ds.batch(128)  # Same as before MinDiff.

# The MinDiff datasets can have a different batch_size from original_train_ds.
min_diff_female_ds = min_diff_female_ds.batch(32, drop_remainder=True)
# Ideally we use the same batch size for both MinDiff datasets.
min_diff_male_ds = min_diff_male_ds.batch(32, drop_remainder=True)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
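To see concretely why drop_remainder=True gives consistent batch sizes, here is a plain-Python sketch of the batching semantics. It mimics, rather than uses, tf.data.Dataset.batch, and the 100-example list is an arbitrary stand-in:

```python
def batch(items, batch_size, drop_remainder=False):
    """Mimic tf.data.Dataset.batch on a plain list."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches = batches[:-1]  # discard the ragged final batch
    return batches

examples = list(range(100))  # pretend these are 100 MinDiff examples
full = batch(examples, 32, drop_remainder=False)
kept = batch(examples, 32, drop_remainder=True)
print([len(b) for b in full])  # [32, 32, 32, 4] -- ragged final batch
print([len(b) for b in kept])  # [32, 32, 32]    -- consistent batch sizes
```

With drop_remainder=True, every batch the model sees during training has the same shape, at the cost of discarding the few leftover examples each epoch.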
Packing the Datasets with pack_min_diff_data Once the datasets are prepared, pack them into a single dataset which will then be passed along to the model. A single batch from the resulting dataset will contain one batch from each of the three datasets you prepared previously. You can do this by using the provided utils function in the tensorflow_model_remediation package:
train_with_min_diff_ds = min_diff.keras.utils.pack_min_diff_data(
    original_dataset=original_train_ds,
    sensitive_group_dataset=min_diff_female_ds,
    nonsensitive_group_dataset=min_diff_male_ds)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
And that's it! You will be able to use other util functions in the package to unpack individual batches if needed.
for inputs, original_labels in train_with_min_diff_ds.take(1):
  # Unpacking min_diff_data
  min_diff_data = min_diff.keras.utils.unpack_min_diff_data(inputs)
  min_diff_examples, min_diff_membership = min_diff_data
  # Unpacking original data
  original_inputs = min_diff.keras.utils.unpack_original_inputs(inputs)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
With your newly formed data, you are now ready to apply MinDiff in your model! To learn how this is done, please take a look at the other guides starting with Integrating MinDiff with MinDiffModel. Using a Custom Packing Format (optional) You may decide to pack the three datasets together in whatever way you choose. The only requirement is that you will need to ensure the model knows how to interpret the data. The default implementation of MinDiffModel assumes that the data was packed using min_diff.keras.utils.pack_min_diff_data. One easy way to format your input as you want is to transform the data as a final step after you have used min_diff.keras.utils.pack_min_diff_data.
# Reformat input to be a dict.
def _reformat_input(inputs, original_labels):
  unpacked_min_diff_data = min_diff.keras.utils.unpack_min_diff_data(inputs)
  unpacked_original_inputs = min_diff.keras.utils.unpack_original_inputs(inputs)
  return {
      'min_diff_data': unpacked_min_diff_data,
      'original_data': (unpacked_original_inputs, original_labels)}

customized_train_with_min_diff_ds = train_with_min_diff_ds.map(_reformat_input)
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Your model will need to know how to read this customized input as detailed in the Customizing MinDiffModel guide.
for batch in customized_train_with_min_diff_ds.take(1):
  # Customized unpacking of min_diff_data
  min_diff_data = batch['min_diff_data']
  # Customized unpacking of original_data
  original_data = batch['original_data']
docs/min_diff/guide/min_diff_data_preparation.ipynb
tensorflow/model-remediation
apache-2.0
Loading the data Telecommunications repair times Verizon is the primary regional telecommunications company (Incumbent Local Exchange Carrier, ILEC) in the western United States. As such, the company is required to provide repair service for telecommunications equipment not only to its own customers, but also to the customers of other local telecommunications companies (Competing Local Exchange Carriers, CLEC). In cases where the equipment repair times for other companies' customers are substantially longer than for its own, Verizon can be fined.
data = pd.read_csv('verizon.txt', sep='\t')
data.shape
data.head()
data.Group.value_counts()

pylab.figure(figsize=(12, 5))
pylab.subplot(1, 2, 1)
pylab.hist(data[data.Group == 'ILEC'].Time, bins=20, color='b', range=(0, 100), label='ILEC')
pylab.legend()
pylab.subplot(1, 2, 2)
pylab.hist(data[data.Group == 'CLEC'].Time, bins=20, color='r', range=(0, 100), label='CLEC')
pylab.legend()
pylab.show()
course4/week1 - Доверительные интервалы на основе bootstrap - demo.ipynb
astarostin/MachineLearningSpecializationCoursera
apache-2.0
Bootstrap
def get_bootstrap_samples(data, n_samples):
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    samples = data[indices]
    return samples

def stat_intervals(stat, alpha):
    boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
    return boundaries
course4/week1 - Доверительные интервалы на основе bootstrap - demo.ipynb
astarostin/MachineLearningSpecializationCoursera
apache-2.0
Interval estimate of the median
ilec_time = data[data.Group == 'ILEC'].Time.values
clec_time = data[data.Group == 'CLEC'].Time.values

np.random.seed(0)

ilec_median_scores = list(map(np.median, get_bootstrap_samples(ilec_time, 1000)))
clec_median_scores = list(map(np.median, get_bootstrap_samples(clec_time, 1000)))

print("95% confidence interval for the ILEC median repair time:", stat_intervals(ilec_median_scores, 0.05))
print("95% confidence interval for the CLEC median repair time:", stat_intervals(clec_median_scores, 0.05))
course4/week1 - Доверительные интервалы на основе bootstrap - demo.ipynb
astarostin/MachineLearningSpecializationCoursera
apache-2.0
Point estimate of the difference between medians
print "difference between medians:", np.median(clec_time) - np.median(ilec_time)
course4/week1 - Доверительные интервалы на основе bootstrap - demo.ipynb
astarostin/MachineLearningSpecializationCoursera
apache-2.0
Interval estimate of the difference between medians
delta_median_scores = list(map(lambda x: x[1] - x[0], zip(ilec_median_scores, clec_median_scores)))
print("95% confidence interval for the difference between medians:", stat_intervals(delta_median_scores, 0.05))
course4/week1 - Доверительные интервалы на основе bootstrap - demo.ipynb
astarostin/MachineLearningSpecializationCoursera
apache-2.0
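The same bootstrap pattern can be exercised end-to-end on synthetic data. This is an illustrative, self-contained sketch (the exponential "repair times" and the sample size are assumptions of mine, not the Verizon data):

```python
import numpy as np

def get_bootstrap_samples(data, n_samples):
    # Resample rows with replacement: n_samples pseudo-samples of the same size.
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    return data[indices]

def stat_intervals(stat, alpha):
    # Percentile confidence interval for a bootstrapped statistic.
    return np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])

np.random.seed(0)
data = np.random.exponential(scale=5.0, size=500)   # synthetic "repair times"
medians = np.array([np.median(s) for s in get_bootstrap_samples(data, 1000)])
lo, hi = stat_intervals(medians, 0.05)
print(lo < np.median(data) < hi)
```

As expected for a percentile interval of bootstrapped medians, the interval brackets the sample median of the original data.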
For now, neglect rotational inertia. Interpolation functions
xi, l, rho = symbols('xi, l, rho')

# Shape functions
S = Matrix(np.zeros((4, 12)))
x2 = (1 - xi)

S[0, 0 ] = x2                        # extension
S[0, 6 ] = xi
S[1, 1 ] = x2**2 * (3 - 2*x2)        # y-deflection
S[1, 7 ] = xi**2 * (3 - 2*xi)
S[1, 5 ] = -x2**2 * (x2 - 1) * l
S[1, 11] = xi**2 * (xi - 1) * l
S[2, 2 ] = x2**2 * (3 - 2*x2)        # z-deflection
S[2, 8 ] = xi**2 * (3 - 2*xi)
S[2, 4 ] = x2**2 * (x2 - 1) * l
S[2, 10] = -xi**2 * (xi - 1) * l
S[3, 3 ] = x2                        # torsion
S[3, 9 ] = xi

#S[4, 2 ] = 6 * x2 * (x2 - 1) / l    # y-rotation
#S[4, 8 ] = 6 * xi * (xi - 1) / l
#S[4, 4 ] = -x2 * (3*x2 - 2)
#S[4, 10] = xi * (3*xi - 2)
#S[5, 1 ] = -6 * x2 * (x2 - 1) / l   # z-rotation
#S[5, 7 ] = -6 * xi * (xi - 1) / l
#S[5, 5 ] = x2 * (3*x2 - 2)
#S[5, 11] = xi * (3*xi - 2)

S[:3, :].T

titles = ['x-defl', 'y-defl', 'z-defl', 'torsion']
for i in range(4):
    sympy.plot(*([xx.subs(l, 2) for xx in S[i, :] if xx != 0] + [(xi, 0, 1)]),
               title=titles[i])
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
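As a quick numeric sanity check on the deflection shape functions defined symbolically above (a sketch with my own variable names; l = 2.0 matches the value substituted in the plots), we can verify the nodal conditions and that the two deflection shapes form a partition of unity, so rigid-body translation is reproduced exactly:

```python
import numpy as np

def hermite_shapes(xi, l):
    """Cubic Hermite shape functions for transverse deflection (matching S[1, :] above)."""
    x2 = 1 - xi
    n_d1 = x2**2 * (3 - 2*x2)          # deflection at node 1
    n_d2 = xi**2 * (3 - 2*xi)          # deflection at node 2
    n_r1 = -x2**2 * (x2 - 1) * l       # rotation at node 1
    n_r2 = xi**2 * (xi - 1) * l        # rotation at node 2
    return n_d1, n_d2, n_r1, n_r2

xi = np.linspace(0, 1, 11)
n_d1, n_d2, n_r1, n_r2 = hermite_shapes(xi, l=2.0)

# Rigid-body translation is reproduced exactly: the two deflection shapes sum to 1.
assert np.allclose(n_d1 + n_d2, 1.0)
# Nodal conditions: each deflection shape is 1 at its own node and 0 at the other,
# and the rotation shapes vanish at both nodes.
assert n_d1[0] == 1 and n_d1[-1] == 0 and n_d2[0] == 0 and n_d2[-1] == 1
assert n_r1[0] == 0 and n_r1[-1] == 0 and n_r2[0] == 0 and n_r2[-1] == 0
print("shape function checks passed")
```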
Mass matrix Define the density distribution (linear):
rho1, rho2 = symbols('rho_1, rho_2')
rho = (1 - xi)*rho1 + xi*rho2
rho
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
Integrate the density distribution with the shape functions.
def sym_me():
    m = Matrix(np.diag([rho, rho, rho, 0]))
    integrand = S.T * m * S
    me = integrand.applyfunc(
        lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).expand().factor()
    )
    return me

me = sym_me()
me.shape
me[0, :]
me[6, :]
me[1, :]
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
Special case: rho1 == rho2
me.subs({rho2: rho1})/rho1
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
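For the uniform-density special case above, the bending block of the element mass matrix should reduce to the classic consistent mass matrix for Euler–Bernoulli bending, (ρAl/420)·[156, 22l, 54, −13l; …]. The numeric sketch below checks this by integrating the Hermite shape functions directly (the values of l and ρA are arbitrary choices of mine):

```python
import numpy as np

l, rho_A = 2.0, 3.0                      # assumed length and mass per unit length
xi = np.linspace(0, 1, 10001)

# Hermite shape functions for DOFs (v1, theta1, v2, theta2), with x = l*xi.
N = np.stack([
    1 - 3*xi**2 + 2*xi**3,
    l * (xi - 2*xi**2 + xi**3),
    3*xi**2 - 2*xi**3,
    l * (-xi**2 + xi**3),
])

def trapz(y, x):
    # Composite trapezoid rule (avoids the removed np.trapz alias).
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Consistent mass matrix M_ij = rho*A*l * integral_0^1 N_i N_j dxi
M = np.array([[rho_A * l * trapz(N[i] * N[j], xi) for j in range(4)]
              for i in range(4)])

# Textbook consistent mass matrix for a uniform bending element.
M_ref = rho_A * l / 420 * np.array([
    [156,    22*l,    54,    -13*l  ],
    [22*l,   4*l**2,  13*l,  -3*l**2],
    [54,     13*l,    156,   -22*l  ],
    [-13*l, -3*l**2, -22*l,   4*l**2],
])
print(np.allclose(M, M_ref))
```

The same integration, with the linear density of the general case, would reproduce the asymmetric-in-ρ entries of the symbolic matrix `me` above.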
Shape integrals As well as the actual mass matrix, the shape integrals are needed for the multibody dynamics equations: \begin{align} m &= \int \mathrm{d}m \ \boldsymbol{S} &= \int \boldsymbol{S} \mathrm{d}m \ \boldsymbol{S}_{kl} &= \int \boldsymbol{S}_k^T \boldsymbol{S}_l \mathrm{d}m \end{align} where $\boldsymbol{S}_k$ is the $k$th row of the element shape function. The mass is the average density times the length:
mass = l * sympy.integrate(rho, (xi, 0, 1)).factor()
mass
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
First shape integral:
shape_integral_1 = S[:3, :].applyfunc(
    lambda xxx: l * sympy.integrate(rho * xxx, (xi, 0, 1)).expand().simplify()
)
shape_integral_1.T

shape_integral_2 = [
    [l * (S[i, :].T * S[j, :]).applyfunc(
        lambda xxx: sympy.integrate(rho * xxx, (xi, 0, 1)).expand().simplify())
     for j in range(3)]
    for i in range(3)
]
theory/FE element matrices.ipynb
ricklupton/beamfe
mit