INSTRUCTION | RESPONSE |
|---|---|
Method to be used by all launchers that prepares the root directory and generates basic launch information for command templates to use (including a registered timestamp). | def _setup_launch(self):
"""
Method to be used by all launchers that prepares the root
directory and generate basic launch information for command
templates to use (including a registered timestamp).
"""
self.root_directory = self.get_root_directory()
if not os.pa... |
Launches processes defined by process_commands but only executes max_concurrency processes at a time; if a process completes and there are still outstanding processes to be executed, the next processes are run until max_concurrency is reached again. | def _launch_process_group(self, process_commands, streams_path):
"""
Launches processes defined by process_commands, but only
executes max_concurrency processes at a time; if a process
completes and there are still outstanding processes to be
executed, the next processes are run ... |
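The throttling loop described above (run at most max_concurrency subprocesses, starting a new one as each finishes) can be sketched with plain subprocess calls. The helper name, polling interval, and return value are illustrative assumptions, not Lancet's actual implementation:

```python
import subprocess
import sys
import time

def launch_process_group(process_commands, max_concurrency=2, poll_interval=0.05):
    """Run every command, keeping at most max_concurrency processes alive."""
    pending = list(process_commands)
    running, finished = [], []
    while pending or running:
        still_running = []
        for proc in running:  # reap completed processes
            (finished if proc.poll() is not None else still_running).append(proc)
        running = still_running
        while pending and len(running) < max_concurrency:
            running.append(subprocess.Popen(pending.pop(0)))  # top up to the limit
        if running:
            time.sleep(poll_interval)
    return [proc.returncode for proc in finished]

# Three trivial jobs, at most two alive at any moment.
codes = launch_process_group([[sys.executable, '-c', 'pass']] * 3)
```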
A succinct summary of the Launcher configuration. Unlike the repr, a summary does not have to be complete but must supply key information relevant to the user. | def summary(self):
"""
A succinct summary of the Launcher configuration. Unlike the
repr, a summary does not have to be complete but must supply
key information relevant to the user.
"""
print("Type: %s" % self.__class__.__name__)
print("Batch Name: %r" % self.ba... |
Method to generate Popen style argument list for qsub using the qsub_switches and qsub_flag_options parameters. Switches are returned first. The qsub_flag_options follow in keys() ordered if not a vanilla Python dictionary (i.e. a Python 2.7+ or param.external OrderedDict). Otherwise the keys are sorted alphanumer... | def _qsub_args(self, override_options, cmd_args, append_options=[]):
"""
Method to generate Popen style argument list for qsub using
the qsub_switches and qsub_flag_options parameters. Switches
are returned first. The qsub_flag_options follow in keys()
ordered if not a vanilla Py... |
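The switches-first-then-flags ordering described above can be sketched generically. The helper name and the single-dash/double-dash prefix rule are assumptions for illustration, not Lancet's actual _qsub_args logic:

```python
def qsub_args(switches, flag_options):
    """Build a Popen-style qsub argument list: switches first, then
    sorted flag/value pairs (single-letter flags get '-', longer '--')."""
    args = ['qsub'] + list(switches)
    for flag, value in sorted(flag_options.items()):
        prefix = '-' if len(flag) == 1 else '--'
        args += [prefix + flag, str(value)]
    return args

cmd = qsub_args(['-V', '-cwd'], {'q': 'batch', 'N': 'job1'})
```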
Method that collates the previous jobs and launches the next block of concurrent jobs when using DynamicArgs. This method is invoked on initial launch and then subsequently via a commandline call (to Python via qsub) to collate the previously run jobs and launch the next block of jobs. | def collate_and_launch(self):
"""
Method that collates the previous jobs and launches the next
block of concurrent jobs when using DynamicArgs. This method
is invoked on initial launch and then subsequently via a
commandline call (to Python via qsub) to collate the
previo... |
The method that actually runs qsub to invoke the python process with the necessary commands to trigger the next collation step and next block of jobs. | def _qsub_collate_and_launch(self, output_dir, error_dir, job_names):
"""
The method that actually runs qsub to invoke the python
process with the necessary commands to trigger the next
collation step and next block of jobs.
"""
job_name = "%s_%s_collate_%d" % (self.batc... |
This method handles static argument specifiers and cases where the dynamic specifiers cannot be queued before the arguments are known. | def _qsub_block(self, output_dir, error_dir, tid_specs):
"""
This method handles static argument specifiers and cases where
the dynamic specifiers cannot be queued before the arguments
are known.
"""
processes = []
job_names = []
for (tid, spec) in tid_sp... |
Runs qdel command to remove all remaining queued jobs using the <batch_name>* pattern. Necessary when StopIteration is raised with scheduled jobs left on the queue. Returns exit-code of qdel. | def qdel_batch(self):
"""
Runs qdel command to remove all remaining queued jobs using
        the <batch_name>* pattern. Necessary when StopIteration is
raised with scheduled jobs left on the queue.
Returns exit-code of qdel.
"""
p = subprocess.Popen(['qdel', '%s_%s*' % ... |
Aggregates all process_commands and the designated output files into a list and outputs it as JSON after which the wrapper script is called. | def _launch_process_group(self, process_commands, streams_path):
"""
Aggregates all process_commands and the designated output files into a
list, and outputs it as JSON, after which the wrapper script is called.
"""
processes = []
for cmd, tid in process_commands:
... |
Performs consistency checks across all the launchers. | def cross_check_launchers(self, launchers):
"""
Performs consistency checks across all the launchers.
"""
if len(launchers) == 0: raise Exception('Empty launcher list')
timestamps = [launcher.timestamp for launcher in launchers]
if not all(timestamps[0] == tstamp for tst... |
Launches all available launchers. | def _launch_all(self, launchers):
"""
Launches all available launchers.
"""
for launcher in launchers:
print("== Launching %s ==" % launcher.batch_name)
launcher()
return True |
Runs the review process for all the launchers. | def _review_all(self, launchers):
"""
Runs the review process for all the launchers.
"""
# Run review of launch args if necessary
if self.launch_args is not None:
proceed = self.review_args(self.launch_args,
show_repr=True,
... |
Reviews the given argument specification. Can review the meta-arguments (launch_args) or the arguments themselves. | def review_args(self, obj, show_repr=False, heading='Arguments'):
"""
Reviews the given argument specification. Can review the
meta-arguments (launch_args) or the arguments themselves.
"""
args = obj.args if isinstance(obj, Launcher) else obj
print('\n%s\n' % self.summary... |
Helper to prompt the user for input on the commandline. | def input_options(self, options, prompt='Select option', default=None):
"""
Helper to prompt the user for input on the commandline.
"""
check_options = [x.lower() for x in options]
while True:
response = input('%s [%s]: ' % (prompt, ', '.join(options))).lower()
... |
The implementation in the base class simply checks there is no clash between the metadata and data keys. | def save(self, filename, metadata={}, **data):
"""
The implementation in the base class simply checks there is no
clash between the metadata and data keys.
"""
intersection = set(metadata.keys()) & set(data.keys())
if intersection:
msg = 'Key(s) overlap betwe... |
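The clash check described above is easy to sketch on its own; the function name and the choice of ValueError are assumptions for illustration:

```python
def check_no_clash(metadata, **data):
    """Raise if any key appears in both the metadata and the data itself."""
    overlap = set(metadata) & set(data)
    if overlap:
        raise ValueError("Key(s) overlap between data and metadata: %s"
                         % ', '.join(sorted(overlap)))

check_no_clash({'run': 1}, result=[1, 2, 3])        # disjoint keys: fine
try:
    check_no_clash({'result': 'meta'}, result=[1, 2, 3])
    error_message = None
except ValueError as exc:
    error_message = str(exc)
```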
Returns the full path for saving the file, adding an extension and making the filename unique as necessary. | def _savepath(self, filename):
"""
Returns the full path for saving the file, adding an extension
and making the filename unique as necessary.
"""
(basename, ext) = os.path.splitext(filename)
basename = basename if (ext in self.extensions) else filename
ext = ext ... |
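The two steps above (default the extension, then append a counter until the name is unused) can be sketched as a standalone helper; the counter naming scheme and the extensions/default arguments are illustrative assumptions:

```python
import os
import tempfile

def savepath(filename, extensions=('.json',), default_ext='.json', directory='.'):
    """Add a default extension if needed, then suffix a counter until unique."""
    basename, ext = os.path.splitext(filename)
    if ext not in extensions:
        basename, ext = filename, default_ext   # unknown extension: keep whole name
    path = os.path.join(directory, basename + ext)
    counter = 1
    while os.path.exists(path):                 # make the filename unique
        path = os.path.join(directory, '%s_%d%s' % (basename, counter, ext))
        counter += 1
    return path

tmpdir = tempfile.mkdtemp()
first = savepath('results', directory=tmpdir)
open(first, 'w').close()                        # occupy the first name
second = savepath('results', directory=tmpdir)
```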
Returns a boolean indicating whether the filename has an appropriate extension for this class. | def file_supported(cls, filename):
"""
Returns a boolean indicating whether the filename has an
appropriate extension for this class.
"""
if not isinstance(filename, str):
return False
(_, ext) = os.path.splitext(filename)
if ext not in cls.extensions:... |
Data may be either a PIL Image object or a Numpy array. | def save(self, filename, imdata, **data):
"""
Data may be either a PIL Image object or a Numpy array.
"""
if isinstance(imdata, numpy.ndarray):
imdata = Image.fromarray(numpy.uint8(imdata))
elif isinstance(imdata, Image.Image):
imdata.save(self._savepath(f... |
return "YYYY-MM-DD" when the file was modified. | def fileModifiedTimestamp(fname):
"""return "YYYY-MM-DD" when the file was modified."""
modifiedTime=os.path.getmtime(fname)
stamp=time.strftime('%Y-%m-%d', time.localtime(modifiedTime))
return stamp |
returns a dict of active folders with days as keys. | def loadResults(resultsFile):
"""returns a dict of active folders with days as keys."""
with open(resultsFile) as f:
raw=f.read().split("\n")
foldersByDay={}
for line in raw:
folder=line.split('"')[1]+"\\"
line=[]+line.split('"')[2].split(", ")
for day in line[1:]... |
generates HTML report of active folders/days. | def HTML_results(resultsFile):
"""generates HTML report of active folders/days."""
foldersByDay=loadResults(resultsFile)
# optionally skip dates before a certain date
# for day in sorted(list(foldersByDay.keys())):
# if time.strptime(day,"%Y-%m-%d")<time.strptime("2016-05-01","%Y-%m-%d"):
... |
Given some data (Y) break it into chunks and return just the quiet ones. Returns data where the variance for its chunk size is below the given percentile. CHUNK_POINTS should be adjusted so it's about 10ms of data. | def quietParts(data,percentile=10):
"""
Given some data (Y) break it into chunks and return just the quiet ones.
Returns data where the variance for its chunk size is below the given percentile.
CHUNK_POINTS should be adjusted so it's about 10ms of data.
"""
nChunks=int(len(Y)/CHUNK_POINTS)
... |
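The chunk-variance idea can be sketched with NumPy. Note the truncated body above refers to globals Y and CHUNK_POINTS even though the function takes data, so this sketch makes both explicit parameters (an assumption):

```python
import numpy as np

def quiet_parts(data, percentile=10, chunk_points=100):
    """Keep only the chunks whose variance is below the given percentile."""
    data = np.asarray(data)
    n_chunks = len(data) // chunk_points
    chunks = data[:n_chunks * chunk_points].reshape(n_chunks, chunk_points)
    variances = chunks.var(axis=1)
    cutoff = np.percentile(variances, percentile)
    return chunks[variances <= cutoff].flatten()

rng = np.random.default_rng(0)
signal = rng.normal(0, 1, 1000)
signal[:500] *= 10                  # make the first half 'loud'
quiet = quiet_parts(signal, percentile=50)
```

With percentile=50 the loud chunks exceed the variance cutoff, so only the quiet half survives.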
given some data and a list of X positions, return the normal distribution curve as a Y point at each of those Xs. | def ndist(data,Xs):
    """
    given some data and a list of X positions, return the normal
distribution curve as a Y point at each of those Xs.
"""
sigma=np.sqrt(np.var(data))
center=np.average(data)
curve=mlab.normpdf(Xs,center,sigma)
curve*=len(data)*HIST_RESOLUTION
return curve |
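matplotlib.mlab.normpdf has since been removed from matplotlib, so a sketch using the explicit Gaussian formula avoids the dependency; the hist_resolution parameter stands in for the HIST_RESOLUTION global (an assumption):

```python
import math

def ndist(data, xs, hist_resolution=1.0):
    """Normal curve (scaled to overlay a histogram of data) at each x."""
    n = len(data)
    center = sum(data) / n
    sigma = math.sqrt(sum((d - center) ** 2 for d in data) / n)
    scale = n * hist_resolution / (sigma * math.sqrt(2 * math.pi))
    return [scale * math.exp(-0.5 * ((x - center) / sigma) ** 2) for x in xs]

# Symmetric data centered on 0: the curve peaks at x = 0.
ys = ndist([-1.0, 0.0, 1.0] * 100, [-1.0, 0.0, 1.0])
```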
show basic info about ABF class variables. | def abfinfo(self,printToo=False,returnDict=False):
"""show basic info about ABF class variables."""
info="\n### ABF INFO ###\n"
d={}
for thingName in sorted(dir(self)):
if thingName in ['cm','evIs','colormap','dataX','dataY',
'protoX','protoY']:
... |
read the ABF header and save it HTML formatted. | def headerHTML(self,fname=None):
"""read the ABF header and save it HTML formatted."""
if fname is None:
fname = self.fname.replace(".abf","_header.html")
html="<html><body><code>"
html+="<h2>abfinfo() for %s.abf</h2>"%self.ID
html+=self.abfinfo().replace("<","<").... |
use 1 colormap for the whole abf. You can change it! | def generate_colormap(self,colormap=None,reverse=False):
        """use 1 colormap for the whole abf. You can change it!"""
if colormap is None:
colormap = pylab.cm.Dark2
self.cm=colormap
self.colormap=[]
for i in range(self.sweeps): #TODO: make this the only colormap
... |
Load X/Y data for a particular sweep. determines if forced reload is needed, updates currentSweep, regenerates dataX (if not None), decimates, returns X/Y. Note that setSweep() takes 0.17ms to complete, so go for it! | def setSweep(self,sweep=0,force=False):
"""Load X/Y data for a particular sweep.
determines if forced reload is needed, updates currentSweep,
regenerates dataX (if not None),decimates,returns X/Y.
Note that setSweep() takes 0.17ms to complete, so go for it!
"""
if sweep i... |
generate sweepX ( in seconds ) to match sweepY | def sweep_genXs(self):
"""generate sweepX (in seconds) to match sweepY"""
if self.decimateMethod:
self.dataX=np.arange(len(self.dataY))/self.rate
self.dataX*=self.decimateBy
return
if self.dataX is None or len(self.dataX)!=len(self.dataY):
self.dat... |
decimate data using one of the following methods: 'avg','max','min','fast'. They're self-explanatory. 'fast' just plucks the n'th data point. | def sweep_decimate(self):
"""
decimate data using one of the following methods:
'avg','max','min','fast'
        They're self-explanatory. 'fast' just plucks the n'th data point.
"""
if len(self.dataY)<self.decimateBy:
return
if self.decimateMethod:
... |
return self.dataY around a time point. All units are seconds. if thisSweep==False the time point is considered to be experiment time and an appropriate sweep may be selected. i.e. with 10 second sweeps and timePoint=35 will select the 5s mark of the third sweep | def get_data_around(self,timePoints,thisSweep=False,padding=0.02,msDeriv=0):
"""
return self.dataY around a time point. All units are seconds.
if thisSweep==False, the time point is considered to be experiment time
and an appropriate sweep may be selected. i.e., with 10 second
... |
Create (x,y) points necessary to graph protocol for the current sweep. | def generate_protocol(self,sweep=None):
"""
Create (x,y) points necessary to graph protocol for the current sweep.
"""
#TODO: make a line protocol that's plottable
if sweep is None:
sweep = self.currentSweep
if sweep is None:
sweep = 0
if n... |
return an array of command values at a time point (in sec). Useful for things like generating I/V curves. | def clampValues(self,timePoint=0):
"""
return an array of command values at a time point (in sec).
Useful for things like generating I/V curves.
"""
Cs=np.zeros(self.sweeps)
for i in range(self.sweeps):
self.setSweep(i) #TODO: protocol only = True
... |
This just generates a string to define the nature of the ABF. The ultimate goal is to use info about the abf to guess what to do with it. [vc/ic]-[steps/fixed]-[notag/drugs]-[2ch/1ch] This represents 2^4 (16) combinations but is easily expanded. | def guess_protocol(self):
        """
        This just generates a string to define the nature of the ABF.
        The ultimate goal is to use info about the abf to guess what to do with it.
        [vc/ic]-[steps/fixed]-[notag/drugs]-[2ch/1ch]
        This represents 2^4 (16) combinations, but is easily expan... |
given an array of sweeps, return X, Y, Err average. This returns *SWEEPS* of data, not just 1 data point. | def average_sweep(self,T1=0,T2=None,sweeps=None,stdErr=False):
"""
given an array of sweeps, return X,Y,Err average.
This returns *SWEEPS* of data, not just 1 data point.
"""
T1=T1*self.rate
if T2 is None:
T2 = self.sweepSize-1
else:
T2 = T... |
given a list of ranges return single point averages for every sweep. Units are in seconds. Expects something like: ranges=[[1,2],[4,5],[7,7.5]] None values will be replaced with maximum/minimum bounds. For baseline subtraction make a range baseline then sub it yourself. returns datas[iSweep][iRange][A... | def average_data(self,ranges=[[None,None]],percentile=None):
"""
given a list of ranges, return single point averages for every sweep.
Units are in seconds. Expects something like:
ranges=[[1,2],[4,5],[7,7.5]]
None values will be replaced with maximum/minimum bounds.
... |
RETURNS filtered trace. Doesn't filter it in place. | def filter_gaussian(self,sigmaMs=100,applyFiltered=False,applyBaseline=False):
        """RETURNS filtered trace. Doesn't filter it in place."""
if sigmaMs==0:
return self.dataY
filtered=cm.filter_gaussian(self.dataY,sigmaMs)
if applyBaseline:
self.dataY=self.dataY-filtere... |
save any object as /swhlab4/ID_[fname].pkl | def saveThing(self,thing,fname,overwrite=True,ext=".pkl"):
"""save any object as /swhlab4/ID_[fname].pkl"""
if not os.path.exists(os.path.dirname(self.outpre)):
os.mkdir(os.path.dirname(self.outpre))
if ext and not ext in fname:
fname+=ext
fname=self.outpre+fname
... |
load any object from /swhlab4/ID_[fname].pkl | def loadThing(self,fname,ext=".pkl"):
        """load any object from /swhlab4/ID_[fname].pkl"""
if ext and not ext in fname:
fname+=ext
fname=self.outpre+fname
time1=cm.timethis()
thing = pickle.load(open(fname,"rb"))
print(" -> loading [%s] (%.01f kB) took %.02f ms"... |
delete /swhlab4/ID_* | def deleteStuff(self,ext="*",spareInfo=True,spare=["_info.pkl"]):
"""delete /swhlab4/ID_*"""
print(" -- deleting /swhlab4/"+ext)
for fname in sorted(glob.glob(self.outpre+ext)):
reallyDelete=True
for item in spare:
if item in fname:
rea... |
Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will cause cascading deletions to occur. This function also raises a ValidationError if the activatable model has not defined a Boolean field with the field name defined by the ACTIVATABLE_FIELD_NAME variable on the model. | def validate_activatable_models():
"""
Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will
cause cascading deletions to occur. This function also raises a ValidationError if the activatable
model has not defined a Boolean field with the field name defined b... |
Helper function to convert an Args object to a HoloViews Table | def to_table(args, vdims=[]):
    "Helper function to convert an Args object to a HoloViews Table"
if not Table:
return "HoloViews Table not available"
kdims = [dim for dim in args.constant_keys + args.varying_keys
if dim not in vdims]
items = [tuple([spec[k] for k in kdims+vdims])
... |
Method to define the positional arguments and keyword order for pretty printing. | def pprint_args(self, pos_args, keyword_args, infix_operator=None, extra_params={}):
"""
Method to define the positional arguments and keyword order
for pretty printing.
"""
if infix_operator and not (len(pos_args)==2 and keyword_args==[]):
raise Exception('Infix form... |
Pretty printer that prints only the modified keywords and generates flat representations (for repr) and optionally annotates the top of the repr with a comment. | def _pprint(self, cycle=False, flat=False, annotate=False, onlychanged=True, level=1, tab = '  '):
"""
Pretty printer that prints only the modified keywords and
generates flat representations (for repr) and optionally
annotates the top of the repr with a comment.
"""
(kw... |
Formats the elements of an argument set appropriately | def spec_formatter(cls, spec):
" Formats the elements of an argument set appropriately"
return type(spec)((k, str(v)) for (k,v) in spec.items()) |
Returns a dictionary like object with the lists of values collapsed by their respective key. Useful to find varying vs constant keys and to find how fast keys vary. | def _collect_by_key(self,specs):
"""
Returns a dictionary like object with the lists of values
collapsed by their respective key. Useful to find varying vs
constant keys and to find how fast keys vary.
"""
# Collect (key, value) tuples as list of lists, flatten with chain... |
Takes the Cartesian product of the specifications. Result will contain N specifications where N = len(first_specs) * len(second_specs) and keys are merged. Example: [{'a':1},{'b':2}] * [{'c':3},{'d':4}] = [{'a':1,'c':3},{'a':1,'d':4},{'b':2,'c':3},{'b':2,'d':4}] | def _cartesian_product(self, first_specs, second_specs):
"""
Takes the Cartesian product of the specifications. Result will
contain N specifications where N = len(first_specs) *
len(second_specs) and keys are merged.
Example: [{'a':1},{'b':2}] * [{'c':3},{'d':4}] =
[{'a':... |
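The docstring's example can be reproduced with a two-level comprehension; a minimal sketch, not Lancet's internal merge logic:

```python
def cartesian_product(first_specs, second_specs):
    """Merge every dict in first_specs with every dict in second_specs."""
    return [dict(first, **second)
            for first in first_specs
            for second in second_specs]

result = cartesian_product([{'a': 1}, {'b': 2}], [{'c': 3}, {'d': 4}])
# → [{'a': 1, 'c': 3}, {'a': 1, 'd': 4}, {'b': 2, 'c': 3}, {'b': 2, 'd': 4}]
```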
A succinct summary of the argument specifier. Unlike the repr, a summary does not have to be complete but must supply the most relevant information about the object to the user. | def summary(self):
"""
A succinct summary of the argument specifier. Unlike the repr,
a summary does not have to be complete but must supply the
most relevant information about the object to the user.
"""
print("Items: %s" % len(self))
varying_keys = ', '.join('%r... |
Returns the specs, the remaining kwargs and whether or not the constructor was called with kwarg or explicit specs. | def _build_specs(self, specs, kwargs, fp_precision):
"""
Returns the specs, the remaining kwargs and whether or not the
constructor was called with kwarg or explicit specs.
"""
if specs is None:
overrides = param.ParamOverrides(self, kwargs,
... |
Note: repr() must be implemented properly on all objects. This is implicitly assumed by Lancet when Python objects need to be formatted to string representation. | def _unique(self, sequence, idfun=repr):
"""
Note: repr() must be implemented properly on all objects. This
is implicitly assumed by Lancet when Python objects need to be
formatted to string representation.
"""
seen = {}
return [seen.setdefault(idfun(e),e) for e i... |
Convenience method to inspect the available argument values in human-readable format. The ordering of keys is determined by how quickly they vary. | def show(self, exclude=[]):
"""
Convenience method to inspect the available argument values in
human-readable format. The ordering of keys is determined by
how quickly they vary.
The exclude list allows specific keys to be excluded for
readability (e.g. to hide long, abs... |
The lexical sort order is specified by a list of string arguments. Each string is a key name prefixed by '+' or '-' for ascending and descending sort respectively. If the key is not found in the operand's set of varying keys, it is ignored. | def lexsort(self, *order):
"""
The lexical sort order is specified by a list of string
arguments. Each string is a key name prefixed by '+' or '-'
for ascending and descending sort respectively. If the key is
not found in the operand's set of varying keys, it is ignored.
... |
A lexsort is specified using normal key string prefixed by '+' (for ascending) or '-' (for descending). | def _lexsorted_specs(self, order):
        """
        A lexsort is specified using normal key string prefixed by '+'
        (for ascending) or '-' (for descending).
Note that in Python 2, if a key is missing, None is returned
(smallest Python value). In Python 3, an Exception will be
rais... |
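The '+key'/'-key' lexsort can be sketched by applying Python's stable sort once per key in reverse order, so earlier arguments become the major sort keys; the names here are illustrative:

```python
def lexsorted_specs(specs, *order):
    """Sort dicts by '+key' (ascending) / '-key' (descending) arguments.
    Python's sort is stable, so sorting by the minor keys first and the
    major key last yields a full lexicographic order."""
    result = list(specs)
    for token in reversed(order):
        key = token.lstrip('+-')
        result.sort(key=lambda spec: spec[key], reverse=token.startswith('-'))
    return result

specs = [{'a': 2, 'b': 1}, {'a': 1, 'b': 2}, {'a': 1, 'b': 1}]
ordered = lexsorted_specs(specs, '+a', '-b')   # a ascending, then b descending
```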
Simple replacement for numpy linspace | def linspace(self, start, stop, n):
""" Simple replacement for numpy linspace"""
if n == 1: return [start]
L = [0.0] * n
nm1 = n - 1
nm1inv = 1.0 / nm1
for i in range(n):
L[i] = nm1inv * (start*(nm1 - i) + stop*i)
return L |
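Since this linspace is given in full, here is a compacted but equivalent restatement with a quick check of its endpoint-inclusive behaviour:

```python
def linspace(start, stop, n):
    """Pure-Python replacement for numpy.linspace (endpoints included)."""
    if n == 1:
        return [start]
    nm1inv = 1.0 / (n - 1)
    return [nm1inv * (start * (n - 1 - i) + stop * i) for i in range(n)]

points = linspace(0.0, 1.0, 5)   # → [0.0, 0.25, 0.5, 0.75, 1.0]
```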
Parses the log file generated by a launcher and returns dictionary with tid keys and specification values. | def extract_log(log_path, dict_type=dict):
"""
Parses the log file generated by a launcher and returns
dictionary with tid keys and specification values.
Ordering can be maintained by setting dict_type to the
appropriate constructor (i.e. OrderedDict). Keys are converted
... |
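A round-trip sketch of a tid-keyed log as described above; the on-disk format used here (a tid followed by a JSON spec per line) is an assumption for illustration, not Lancet's actual log format:

```python
import json
import os
import tempfile

def write_log(log_path, specs):
    """Append one 'tid json-spec' line per specification (assumed format)."""
    with open(log_path, 'a') as f:
        for tid, spec in enumerate(specs):
            f.write('%d %s\n' % (tid, json.dumps(spec)))

def extract_log(log_path, dict_type=dict):
    """Parse the log back into a {tid: spec} mapping."""
    with open(log_path) as f:
        lines = [line.split(' ', 1) for line in f if line.strip()]
    return dict_type((int(tid), json.loads(spec)) for tid, spec in lines)

log_path = os.path.join(tempfile.mkdtemp(), 'demo.log')
write_log(log_path, [{'a': 1}, {'a': 2}])
entries = extract_log(log_path)
```

Passing `dict_type=OrderedDict` would preserve insertion order, mirroring the docstring's note.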
Writes the supplied specifications to the log path. The data may be supplied either as an Args object or as a list of dictionaries. | def write_log(log_path, data, allow_append=True):
    """
    Writes the supplied specifications to the log path. The data
    may be supplied either as an Args object or as a list of
    dictionaries.
By default, specifications will be appropriately appended to
an existing log file. This... |
Load all the files in a given directory selecting only files with the given extension if specified. The given kwargs are passed through to the normal constructor. | def directory(cls, directory, root=None, extension=None, **kwargs):
"""
Load all the files in a given directory selecting only files
with the given extension if specified. The given kwargs are
passed through to the normal constructor.
"""
root = os.getcwd() if root is Non... |
Return the fields specified in the pattern using Python's formatting mini-language. | def fields(self):
"""
Return the fields specified in the pattern using Python's
formatting mini-language.
"""
parse = list(string.Formatter().parse(self.pattern))
        return [f for f in list(zip(*parse))[1] if f is not None] |
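Note that subscripting the zip result only works on Python 2 (zip returns an iterator on Python 3); unpacking the parse tuples directly sidesteps that entirely. A self-contained sketch:

```python
import string

def fields(pattern):
    """Return the named fields in a format pattern, in order of appearance."""
    # Formatter.parse yields (literal_text, field_name, format_spec, conversion).
    return [field for (_, field, _, _) in string.Formatter().parse(pattern)
            if field is not None]

names = fields('{root}/{name}_{channel}.npz')   # → ['root', 'name', 'channel']
```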
Loads the files that match the given pattern. | def _load_expansion(self, key, root, pattern):
"""
Loads the files that match the given pattern.
"""
path_pattern = os.path.join(root, pattern)
expanded_paths = self._expand_pattern(path_pattern)
specs=[]
for (path, tags) in expanded_paths:
filelist =... |
From the pattern decomposition, finds the absolute paths matching the pattern. | def _expand_pattern(self, pattern):
"""
From the pattern decomposition, finds the absolute paths
matching the pattern.
"""
(globpattern, regexp, fields, types) = self._decompose_pattern(pattern)
filelist = glob.glob(globpattern)
expansion = []
for fname i... |
Given a path pattern with format declaration, generates a four-tuple (glob_pattern, regexp pattern, fields, type map) | def _decompose_pattern(self, pattern):
"""
Given a path pattern with format declaration, generates a
four-tuple (glob_pattern, regexp pattern, fields, type map)
"""
sep = '~lancet~sep~'
float_codes = ['e','E','f', 'F','g', 'G', 'n']
typecodes = dict([(k,float) for... |
Convenience method to directly chain a pattern processed by FilePattern into a FileInfo instance. | def from_pattern(cls, pattern, filetype=None, key='filename', root=None, ignore=[]):
"""
Convenience method to directly chain a pattern processed by
FilePattern into a FileInfo instance.
Note that if a default filetype has been set on FileInfo, the
filetype argument may be omitt... |
Load the file contents into the supplied pandas dataframe or HoloViews Table. This allows a selection to be made over the metadata before loading the file contents ( may be slow ). | def load(self, val, **kwargs):
"""
Load the file contents into the supplied pandas dataframe or
HoloViews Table. This allows a selection to be made over the
metadata before loading the file contents (may be slow).
"""
if Table and isinstance(val, Table):
retur... |
Load the file contents into the supplied Table using the specified key and filetype. The input table should have the filenames as values which will be replaced by the loaded data. If data_key is specified, this key will be used to index the loaded data to retrieve the specified item. | def load_table(self, table):
"""
Load the file contents into the supplied Table using the
specified key and filetype. The input table should have the
filenames as values which will be replaced by the loaded
data. If data_key is specified, this key will be used to index
th... |
Load the file contents into the supplied dataframe using the specified key and filetype. | def load_dframe(self, dframe):
"""
Load the file contents into the supplied dataframe using the
specified key and filetype.
"""
filename_series = dframe[self.key]
loaded_data = filename_series.map(self.filetype.data)
keys = [list(el.keys()) for el in loaded_data.v... |
Generates the union of the source.specs and the metadata dictionary loaded by the filetype object. | def _info(self, source, key, filetype, ignore):
"""
Generates the union of the source.specs and the metadata
dictionary loaded by the filetype object.
"""
specs, mdata = [], {}
mdata_clashes = set()
for spec in source.specs:
if key not in spec:
... |
Push new data into the buffer. Resume looping if paused. | async def _push(self, *args, **kwargs):
"""Push new data into the buffer. Resume looping if paused."""
self._data.append((args, kwargs))
if self._future is not None:
future, self._future = self._future, None
future.set_result(True) |
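The pause/resume future pattern above can be exercised end to end by embedding it in a minimal self-contained buffer; the class and method names here are illustrative, not the original API:

```python
import asyncio

class Buffer:
    """Minimal push/pop buffer: pop() pauses until push() supplies data."""
    def __init__(self):
        self._data = []
        self._future = None

    def push(self, item):
        self._data.append(item)
        if self._future is not None:           # resume a paused consumer
            future, self._future = self._future, None
            future.set_result(True)

    async def pop(self):
        while not self._data:                  # pause until push() wakes us
            self._future = asyncio.get_running_loop().create_future()
            await self._future
        return self._data.pop(0)

async def demo():
    buf = Buffer()
    # Schedule a push for the next loop iteration, then wait for it.
    asyncio.get_running_loop().call_soon(buf.push, 'hello')
    return await buf.pop()

result = asyncio.run(demo())   # → 'hello'
```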
increments version counter in swhlab/version.py | def newVersion():
"""increments version counter in swhlab/version.py"""
version=None
fname='../swhlab/version.py'
with open(fname) as f:
raw=f.read().split("\n")
for i,line in enumerate(raw):
if line.startswith("__counter__"):
if version is None:
... |
Create a plot of one area of interest of a single sweep. | def figureStimulus(abf,sweeps=[0]):
"""
Create a plot of one area of interest of a single sweep.
"""
stimuli=[2.31250, 2.35270]
for sweep in sweeps:
abf.setsweep(sweep)
for stimulus in stimuli:
S1=int(abf.pointsPerSec*stimulus)
S2=int(abf.pointsPerSec*(stimul... |
chunkMs should be ~50 ms or greater. bin sizes must be equal to or multiples of the data resolution. transients smaller than the expected RMS will be silenced. | def phasicTonic(self,m1=None,m2=None,chunkMs=50,
quietPercentile=10,histResolution=1):
"""
chunkMs should be ~50 ms or greater.
bin sizes must be equal to or multiples of the data resolution.
transients smaller than the expected RMS will be silenced.
"""
... |
Inelegant for now, but lets you manually analyze every ABF in a folder. | def doStuff(ABFfolder,analyze=False,convert=False,index=True,overwrite=True,
launch=True):
"""Inelegant for now, but lets you manually analyze every ABF in a folder."""
IN=INDEX(ABFfolder)
if analyze:
IN.analyzeAll()
if convert:
IN.convertImages() |
Reanalyze data for a single ABF. Also remakes child and parent html. | def analyzeSingle(abfFname):
"""Reanalyze data for a single ABF. Also remakes child and parent html."""
assert os.path.exists(abfFname) and abfFname.endswith(".abf")
ABFfolder,ABFfname=os.path.split(abfFname)
abfID=os.path.splitext(ABFfname)[0]
IN=INDEX(ABFfolder)
IN.analyzeABF(abfID)
IN.sca... |
scan folder1 and folder2 into files1 and files2. since we are on windows simplify things by making them all lowercase. this WILL cause problems on 'nix operating systems. If this is the case just run a script to rename every file to all lowercase. | def scan(self):
"""
scan folder1 and folder2 into files1 and files2.
since we are on windows, simplify things by making them all lowercase.
    this WILL cause problems on 'nix operating systems. If this is the case,
just run a script to rename every file to all lowercase.
"""... |
run this to turn all folder1 TIFs and JPGs into folder2 data. TIFs will be treated as micrographs and converted to JPG with enhanced contrast. JPGs will simply be copied over. | def convertImages(self):
"""
run this to turn all folder1 TIFs and JPGs into folder2 data.
TIFs will be treated as micrographs and converted to JPG with enhanced
contrast. JPGs will simply be copied over.
"""
# copy over JPGs (and such)
exts=['.jpg','.png']
... |
analyze every unanalyzed ABF in the folder. | def analyzeAll(self):
"""analyze every unanalyzed ABF in the folder."""
searchableData=str(self.files2)
self.log.debug("considering analysis for %d ABFs",len(self.IDs))
for ID in self.IDs:
if not ID+"_" in searchableData:
self.log.debug("%s needs analysis",ID)... |
Analyze a single ABF: make data, index it. If called directly, will delete all ID_data_ and recreate it. | def analyzeABF(self,ID):
    """
    Analyze a single ABF: make data, index it.
If called directly, will delete all ID_data_ and recreate it.
"""
for fname in self.files2:
if fname.startswith(ID+"_data_"):
self.log.debug("deleting [%s]",fname)
o... |
return appropriate HTML determined by file extension. | def htmlFor(self,fname):
"""return appropriate HTML determined by file extension."""
if os.path.splitext(fname)[1].lower() in ['.jpg','.png']:
html='<a href="%s"><img src="%s"></a>'%(fname,fname)
if "_tif_" in fname:
html=html.replace('<img ','<img class="datapic ... |
generate a generic flat file html for an ABF parent. You could give this a single ABF ID, its parent ID, or a list of ABF IDs. If a child ABF is given, the parent will automatically be used. | def html_single_basic(self,abfID,launch=False,overwrite=False):
"""
generate a generic flat file html for an ABF parent. You could give
this a single ABF ID, its parent ID, or a list of ABF IDs.
If a child ABF is given, the parent will automatically be used.
"""
if type(a... |
create ID_plot. html of just intrinsic properties. | def html_single_plot(self,abfID,launch=False,overwrite=False):
"""create ID_plot.html of just intrinsic properties."""
if type(abfID) is str:
abfID=[abfID]
for thisABFid in cm.abfSort(abfID):
parentID=cm.parent(self.groups,thisABFid)
saveAs=os.path.abspath("%s... |
minimal complexity low-pass filtering. Filter size is how "wide" the filter will be. Sigma will be 1/10 of this filter width. If filter size isn't given, it will be 1/10 of the data size. | def lowpass(data,filterSize=None):
"""
minimal complexity low-pass filtering.
Filter size is how "wide" the filter will be.
Sigma will be 1/10 of this filter width.
If filter size isn't given, it will be 1/10 of the data size.
"""
if filterSize is None:
filterSize=len(data)/10
ke... |
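The sizing rules above (filter width defaults to a tenth of the data, sigma is a tenth of the width) can be sketched with an explicit Gaussian kernel; the kernel construction details are assumptions, since the original body is truncated:

```python
import numpy as np

def lowpass(data, filter_size=None):
    """Gaussian low-pass filter; sigma is 1/10 of the filter width."""
    if filter_size is None:
        filter_size = len(data) // 10
    sigma = filter_size / 10.0
    xs = np.arange(filter_size) - (filter_size - 1) / 2.0
    kernel = np.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                      # unit gain at DC
    return np.convolve(data, kernel, mode='same')

smoothed = lowpass(np.ones(100))
```

A constant signal passes through unchanged away from the edges, since the kernel sums to one.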
This applies a kernel to a signal through convolution and returns the result. | def convolve(signal,kernel):
"""
This applies a kernel to a signal through convolution and returns the result.
Some magic is done at the edges so the result doesn't apprach zero:
1. extend the signal's edges with len(kernel)/2 duplicated values
2. perform the convolution ('same' mode)
... |
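The three edge-handling steps listed in the docstring can be sketched with NumPy; the normalization by the kernel sum is an assumption added so the example is self-checking:

```python
import numpy as np

def convolve(signal, kernel):
    """Convolve after duplicating edge values so the result doesn't sag to zero."""
    signal = np.asarray(signal, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    pad = len(kernel) // 2
    padded = np.concatenate([np.full(pad, signal[0]),     # 1. extend the edges
                             signal,
                             np.full(pad, signal[-1])])
    smoothed = np.convolve(padded, kernel / kernel.sum(), mode='same')  # 2.
    return smoothed[pad:pad + len(signal)]                # 3. trim padding off

out = convolve(np.ones(50), np.ones(11))
```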
simple timer. returns a time object or a string. | def timeit(timer=None):
"""simple timer. returns a time object, or a string."""
if timer is None:
return time.time()
else:
took=time.time()-timer
if took<1:
return "%.02f ms"%(took*1000.0)
elif took<60:
return "%.02f s"%(took)
else:
... |
if the value is in the list, move it to the front and return it. | def list_move_to_front(l,value='other'):
"""if the value is in the list, move it to the front and return it."""
l=list(l)
if value in l:
l.remove(value)
l.insert(0,value)
return l |
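Since list_move_to_front is given in full, a restated copy with a usage check:

```python
def list_move_to_front(l, value='other'):
    """If value is in the list, move it to the front; otherwise leave it alone."""
    l = list(l)
    if value in l:
        l.remove(value)
        l.insert(0, value)
    return l

order = list_move_to_front(['a', 'other', 'b'], 'other')   # → ['other', 'a', 'b']
```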
if the value is in the list, move it to the back and return it. | def list_move_to_back(l,value='other'):
"""if the value is in the list, move it to the back and return it."""
l=list(l)
if value in l:
l.remove(value)
l.append(value)
return l |
given a list and a list of items to be first, return the list in the same order except that it begins with each of the first items. | def list_order_by(l,firstItems):
"""given a list and a list of items to be first, return the list in the
same order except that it begins with each of the first items."""
l=list(l)
for item in firstItems[::-1]: #backwards
if item in l:
l.remove(item)
l.insert(0,item)
... |
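The row's tail is truncated, but the visible loop carries the whole idea: walking `firstItems` backwards means the last item prepended ends up first, so the requested order is preserved. A sketch with the assumed `return l` tail:

```python
def list_order_by(l, first_items):
    """Sketch of the helper above; the elided tail is assumed to return l."""
    l = list(l)
    for item in first_items[::-1]:  # backwards, so first_items order survives
        if item in l:
            l.remove(item)
            l.insert(0, item)
    return l

print(list_order_by(['a', 'b', 'c', 'd'], ['c', 'a']))  # ['c', 'a', 'b', 'd']
```

Tracing the example: 'a' is moved to the front first, then 'c' is moved in front of it, yielding `['c', 'a', ...]` exactly as `first_items` requests.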
given a list of goofy ABF names, return it sorted intelligently. This places things like 16o01001 after 16901001. | def abfSort(IDs):
"""
given a list of goofy ABF names, return it sorted intelligently.
This places things like 16o01001 after 16901001.
"""
IDs=list(IDs)
monO=[]
monN=[]
monD=[]
good=[]
for ID in IDs:
if ID is None:
continue
if 'o' in ID:
m... |
Given a folder path or list of files, return groups (dict) by cell. | def abfGroups(abfFolder):
"""
Given a folder path or list of files, return groups (dict) by cell.
Rules which define parents (cells):
* assume each cell has one or several ABFs
* that cell can be labeled by its "ID" or "parent" ABF (first abf)
* the ID is just the filename of the fi... |
when given a dictionary where every key contains a list of IDs, replace the keys with the list of files matching those IDs. This is how you get a list of files belonging to each child for each parent. | def abfGroupFiles(groups,folder):
"""
when given a dictionary where every key contains a list of IDs, replace
the keys with the list of files matching those IDs. This is how you get a
list of files belonging to each child for each parent.
"""
assert os.path.exists(folder)
files=os.listdir(fo... |
given a groups dictionary and an ID, return its actual parent ID. | def parent(groups,ID):
"""given a groups dictionary and an ID, return its actual parent ID."""
if ID in groups.keys():
return ID # already a parent
if not ID in groups.keys():
for actualParent in groups.keys():
if ID in groups[actualParent]:
return actualParent # ... |
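The lookup logic above is simple enough to demonstrate end to end. A sketch (the fallback for an ID found nowhere is elided in the source, so this version returns None in that case; `parent_sketch` and the sample IDs are illustrative):

```python
def parent_sketch(groups, ID):
    """Given a groups dict {parentID: [childIDs]}, return the parent of ID."""
    if ID in groups:
        return ID  # already a parent
    for actual_parent in groups:
        if ID in groups[actual_parent]:
            return actual_parent
    return None  # assumed fallback; the original's tail is truncated

groups = {'16901001': ['16901001', '16901002', '16901003']}
print(parent_sketch(groups, '16901002'))  # 16901001
print(parent_sketch(groups, '16901001'))  # 16901001 (already a parent)
```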
given a list of files, return them as a dict sorted by type: * plot, tif, data, other | def filesByType(fileList):
"""
given a list of files, return them as a dict sorted by type:
* plot, tif, data, other
"""
features=["plot","tif","data","other","experiment"]
files={}
for feature in features:
files[feature]=[]
for fname in fileList:
other=True
f... |
return the semi - temporary user folder | def userFolder():
"""return the semi-temporary user folder"""
#path=os.path.abspath(tempfile.gettempdir()+"/swhlab/")
#don't use tempdir! it will get deleted easily.
path=os.path.expanduser("~")+"/.swhlab/" # works on windows or linux
# for me, path=r"C:\Users\swharden\.swhlab"
if not os.path.ex... |
return the path of the last loaded ABF. | def abfFname_Load():
"""return the path of the last loaded ABF."""
fname=userFolder()+"/abfFname.ini"
if os.path.exists(fname):
abfFname=open(fname).read().strip()
if os.path.exists(abfFname) or abfFname.endswith("_._"):
return abfFname
return os.path.abspath(os.sep) |
save the path of the last loaded ABF. | def abfFname_Save(abfFname):
    """save the path of the last loaded ABF."""
fname=userFolder()+"/abfFname.ini"
with open(fname,'w') as f:
f.write(os.path.abspath(abfFname))
return |
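The load/save pair above persists one path in an ini file under the user folder. A round-trip sketch of that pattern, pointed at a temporary directory instead of `~/.swhlab` so it is safe to run anywhere (names here are illustrative):

```python
import os
import tempfile

folder = tempfile.mkdtemp()
ini = os.path.join(folder, "abfFname.ini")

def save_last_abf(path):
    """Persist the absolute path of the last-loaded file."""
    with open(ini, 'w') as f:
        f.write(os.path.abspath(path))

def load_last_abf():
    """Read the persisted path back; fall back to the filesystem root."""
    if os.path.exists(ini):
        return open(ini).read().strip()
    return os.path.abspath(os.sep)

save_last_abf("demo.abf")
print(load_last_abf())  # absolute path ending in demo.abf
```

The fallback to `os.path.abspath(os.sep)` mirrors the original loader: a file dialog seeded with the root directory still opens cleanly when no history exists yet.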
Launch an ABF file selection file dialog. This is smart, and remembers (through reboots) where you last were. | def gui_getFile():
"""
Launch an ABF file selection file dialog.
This is smart, and remembers (through reboots) where you last were.
"""
import tkinter as tk
from tkinter import filedialog
root = tk.Tk() # this is natively supported by python
root.withdraw() # hide main window
root.w... |
Launch a folder selection dialog. This is smart, and remembers (through reboots) where you last were. | def gui_getFolder():
"""
Launch a folder selection dialog.
This is smart, and remembers (through reboots) where you last were.
"""
import tkinter as tk
from tkinter import filedialog
root = tk.Tk() # this is natively supported by python
root.withdraw() # hide main window
root.wm_attr... |
Coroutine wrapper to catch errors after async scheduling. | async def _try_catch_coro(emitter, event, listener, coro):
"""Coroutine wrapper to catch errors after async scheduling.
Args:
emitter (EventEmitter): The event emitter that is attempting to
call a listener.
event (str): The event that triggered the emitter.
listener (async d... |
Check if the listener limit is hit and warn if needed. | def _check_limit(self, event):
"""Check if the listener limit is hit and warn if needed."""
if self.count(event) > self.max_listeners:
warnings.warn(
'Too many listeners for event {}'.format(event),
ResourceWarning,
) |
Bind a listener to a particular event. | def add_listener(self, event, listener):
"""Bind a listener to a particular event.
Args:
event (str): The name of the event to listen for. This may be any
string value.
listener (def or async def): The callback to execute when the event
fires. Thi... |
Add a listener that is only called once. | def once(self, event, listener):
"""Add a listener that is only called once."""
self.emit('new_listener', event, listener)
self._once[event].append(listener)
self._check_limit(event)
return self |
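`once` only appends to a separate `self._once` list; the consuming side is not shown in these rows. A minimal hypothetical sketch of that contract (a stand-in class, not the emitter's real implementation): once-listeners are drained before being invoked, so a second emit finds the list empty:

```python
from collections import defaultdict

class OnceSketch:
    """Illustrative emitter showing only the once/emit interplay."""
    def __init__(self):
        self._once = defaultdict(list)

    def once(self, event, listener):
        self._once[event].append(listener)
        return self  # chainable, like the method above

    def emit(self, event, *args):
        # swap the list out first, so re-registration inside a
        # listener is kept for the *next* emit
        listeners, self._once[event] = self._once[event], []
        for listener in listeners:
            listener(*args)

calls = []
em = OnceSketch()
em.once('ready', calls.append)
em.emit('ready', 1)
em.emit('ready', 2)  # no effect: the listener already fired
print(calls)  # [1]
```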
Remove a listener from the emitter. | def remove_listener(self, event, listener):
"""Remove a listener from the emitter.
Args:
event (str): The event name on which the listener is bound.
listener: A reference to the same object given to add_listener.
Returns:
bool: True if a listener was removed... |
Remove all listeners, or those of the specified *event*. | def remove_all_listeners(self, event=None):
"""Remove all listeners, or those of the specified *event*.
It's not a good idea to remove listeners that were added elsewhere in
the code, especially when it's on an emitter that you didn't create
(e.g. sockets or file streams).
"""
... |