<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selecttrue(table, field, complement=False):
"""Select rows where the given field evaluates `True`.""" |
return select(table, field, lambda v: bool(v), complement=complement) |
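The `select` helper these snippets call comes from the surrounding library and is not shown. As a minimal sketch, assuming a table shaped as a header row followed by data rows, the selection semantics (including `complement`) can be reproduced like this:

```python
# Minimal stand-in for the library's select(); assumes a list-of-rows table
# with a header row. The real implementation is lazy and richer than this.
def select(table, field, predicate, complement=False):
    header = table[0]
    idx = header.index(field)
    rows = [row for row in table[1:] if bool(predicate(row[idx])) != complement]
    return [header] + rows

table = [['id', 'active'], [1, True], [2, False], [3, None]]
truthy = select(table, 'active', bool)                    # rows where field is truthy
falsy = select(table, 'active', bool, complement=True)    # the complement
```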
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectfalse(table, field, complement=False):
"""Select rows where the given field evaluates `False`.""" |
return select(table, field, lambda v: not bool(v),
complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectnone(table, field, complement=False):
"""Select rows where the given field is `None`.""" |
return select(table, field, lambda v: v is None, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selectnotnone(table, field, complement=False):
"""Select rows where the given field is not `None`.""" |
return select(table, field, lambda v: v is not None,
complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stdchannel_redirected(stdchannel):
""" Redirects stdout or stderr to a StringIO object. As of python 3.4, there is a standard library contextmanager for this, but backwards compatibility! """ |
try:
s = io.StringIO()
old = getattr(sys, stdchannel)
setattr(sys, stdchannel, s)
yield s
finally:
setattr(sys, stdchannel, old) |
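A standalone, runnable version of the helper above (the `@contextmanager` decorator is implied by the `yield` in the snippet), with `old` captured before the swap so the `finally` clause can always restore the channel:

```python
import contextlib
import io
import sys

@contextlib.contextmanager
def stdchannel_redirected(stdchannel):
    s = io.StringIO()
    old = getattr(sys, stdchannel)  # capture before swapping so finally always restores
    try:
        setattr(sys, stdchannel, s)
        yield s
    finally:
        setattr(sys, stdchannel, old)

with stdchannel_redirected('stdout') as captured:
    print('hello')
```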
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(param):
""" If the supplied parameter is a string, assum it's a simple pattern. """ |
return (
Pattern(param) if isinstance(param, str)
else param if param is not None
else Null()
) |
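`Pattern` and `Null` are the library's own classes and are not shown in this snippet; with minimal stand-ins for them, the dispatch can be exercised directly:

```python
# Pattern and Null below are stand-ins for the library's classes, used only
# to make the dispatch in load() runnable in isolation.
class Pattern:
    def __init__(self, spec):
        self.spec = spec

class Null:
    pass

def load(param):
    return (
        Pattern(param) if isinstance(param, str)
        else param if param is not None
        else Null()
    )
```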
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _multi_permission_mask(mode):
""" Support multiple, comma-separated Unix chmod symbolic modes. True """ |
def compose(f, g):
return lambda *args, **kwargs: g(f(*args, **kwargs))
return functools.reduce(compose, map(_permission_mask, mode.split(','))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _permission_mask(mode):
""" Convert a Unix chmod symbolic mode like ``'ugo+rwx'`` to a function suitable for applying to a mask to affect that change. True True True True True True True """ |
# parse the symbolic mode
parsed = re.match('(?P<who>[ugoa]+)(?P<op>[-+=])(?P<what>[rwx]*)$', mode)
if not parsed:
raise ValueError("Unrecognized symbolic mode", mode)
# generate a mask representing the specified permission
spec_map = dict(r=4, w=2, x=1)
specs = (spec_map[perm] for perm in parsed.group('what'))
spec = functools.reduce(operator.or_, specs, 0)
# now apply spec to each subject in who
shift_map = dict(u=6, g=3, o=0)
who = parsed.group('who').replace('a', 'ugo')
masks = (spec << shift_map[subj] for subj in who)
mask = functools.reduce(operator.or_, masks)
op = parsed.group('op')
# if op is -, invert the mask
if op == '-':
mask ^= 0o777
# if op is =, retain extant values for unreferenced subjects
if op == '=':
masks = (0o7 << shift_map[subj] for subj in who)
retain = functools.reduce(operator.or_, masks) ^ 0o777
op_map = {
'+': operator.or_,
'-': operator.and_,
'=': lambda mask, target: target & retain ^ mask,
}
return functools.partial(op_map[op], mask) |
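The bit arithmetic above is easiest to check on concrete masks. Here is a condensed, standalone copy of `_permission_mask` so the three operators can be exercised directly:

```python
import functools
import operator
import re

def permission_mask(mode):
    """Standalone copy of _permission_mask above, for exercising the bit math."""
    parsed = re.match(r'(?P<who>[ugoa]+)(?P<op>[-+=])(?P<what>[rwx]*)$', mode)
    if not parsed:
        raise ValueError("Unrecognized symbolic mode", mode)
    spec_map = dict(r=4, w=2, x=1)
    spec = functools.reduce(
        operator.or_, (spec_map[p] for p in parsed.group('what')), 0)
    shift_map = dict(u=6, g=3, o=0)
    who = parsed.group('who').replace('a', 'ugo')
    mask = functools.reduce(operator.or_, (spec << shift_map[s] for s in who))
    op = parsed.group('op')
    if op == '-':
        mask ^= 0o777  # '-' clears bits, so AND with the inverted mask
    if op == '=':
        # retain permissions of subjects not referenced in `who`
        retain = functools.reduce(
            operator.or_, (0o7 << shift_map[s] for s in who)) ^ 0o777
    op_map = {
        '+': operator.or_,
        '-': operator.and_,
        '=': lambda mask, target: target & retain ^ mask,
    }
    return functools.partial(op_map[op], mask)
```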
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uncshare(self):
""" The UNC mount point for this path. This is empty for paths on local drives. """ |
unc, r = self.module.splitunc(self)
return self._next_class(unc) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def splitall(self):
r""" Return a list of the path components in this path. The first item in the list will be a Path. Its value will be either :data:`os.curdir`, :data:`os.pardir`, empty, or the root directory of this path (for example, ``'/'`` or ``'C:\\'``). The other items in the list will be strings. ``path.Path.joinpath(*result)`` will yield the original path. """ |
parts = []
loc = self
while loc != os.curdir and loc != os.pardir:
prev = loc
loc, child = prev.splitpath()
if loc == prev:
break
parts.append(child)
parts.append(loc)
parts.reverse()
return parts |
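The same loop can be sketched on plain strings, pinned to POSIX semantics via `posixpath` so the result is platform-independent (the method above uses `self.module` instead):

```python
import posixpath

def splitall(path):
    # plain-string sketch of the method above, using posixpath.split
    parts = []
    loc = path
    while loc not in (posixpath.curdir, posixpath.pardir):
        prev = loc
        loc, child = posixpath.split(prev)
        if loc == prev:
            break
        parts.append(child)
    parts.append(loc)
    parts.reverse()
    return parts
```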
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def relpath(self, start='.'):
""" Return this path as a relative path, based from `start`, which defaults to the current working directory. """ |
cwd = self._next_class(start)
return cwd.relpathto(self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fnmatch(self, pattern, normcase=None):
""" Return ``True`` if `self.name` matches the given `pattern`. `pattern` - A filename pattern with wildcards, for example ``'*.py'``. If the pattern contains a `normcase` attribute, it is applied to the name and path prior to comparison. `normcase` - (optional) A function used to normalize the pattern and filename before matching. Defaults to :meth:`self.module`, which defaults to :meth:`os.path.normcase`. .. seealso:: :func:`fnmatch.fnmatch` """ |
default_normcase = getattr(pattern, 'normcase', self.module.normcase)
normcase = normcase or default_normcase
name = normcase(self.name)
pattern = normcase(pattern)
return fnmatch.fnmatchcase(name, pattern) |
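The matching step is plain `fnmatch.fnmatchcase` after both sides are normalized. Shown here with the Windows-style `ntpath.normcase` (lowercasing), which is where normalization matters most:

```python
import fnmatch
import ntpath

# Normalize case the way the method above does, then match case-sensitively.
name = ntpath.normcase('Report.TXT')    # lowercased by the Windows normcase
pattern = ntpath.normcase('*.txt')
matched = fnmatch.fnmatchcase(name, pattern)
```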
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def glob(self, pattern):
""" Return a list of Path objects that match the pattern. `pattern` - a path relative to this directory, with wildcards. For example, ``Path('/users').glob('*/bin/*')`` returns a list of all the files users have in their :file:`bin` directories. .. seealso:: :func:`glob.glob` .. note:: Glob is **not** recursive, even when using ``**``. To do recursive globbing see :func:`walk`, :func:`walkdirs` or :func:`walkfiles`. """ |
cls = self._next_class
return [cls(s) for s in glob.glob(self / pattern)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chunks(self, size, *args, **kwargs):
""" Returns a generator yielding chunks of the file, so it can be read piece by piece with a simple for loop. Any argument you pass after `size` will be passed to :meth:`open`. :example: This will read the file by chunks of 8192 bytes. """ |
with self.open(*args, **kwargs) as f:
for chunk in iter(lambda: f.read(size) or None, None):
yield chunk |
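The loop above relies on the two-argument form of `iter()`: `read()` returns `b''` (or `''`) at EOF, and `or None` converts that into the sentinel. The same trick works on any open file-like object:

```python
import io

def read_chunks(fileobj, size):
    # read() returns b'' (or '') at EOF; `or None` turns that into the sentinel
    return iter(lambda: fileobj.read(size) or None, None)

chunks = list(read_chunks(io.BytesIO(b'abcdefghij'), 4))
```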
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lines(self, encoding=None, errors='strict', retain=True):
r""" Open this file, read all lines, return them in a list. Optional arguments: `encoding` - The Unicode encoding (or character set) of the file. The default is ``None``, meaning the content of the file is read as 8-bit characters and returned as a list of (non-Unicode) str objects. `errors` - How to handle Unicode errors; see help(str.decode) for the options. Default is ``'strict'``. `retain` - If ``True``, retain newline characters; but all newline character combinations (``'\r'``, ``'\n'``, ``'\r\n'``) are translated to ``'\n'``. If ``False``, newline characters are stripped off. Default is ``True``. .. seealso:: :meth:`text` """ |
return self.text(encoding, errors).splitlines(retain) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _hash(self, hash_name):
""" Returns a hash object for the file at the current path. `hash_name` should be a hash algo name (such as ``'md5'`` or ``'sha1'``) that's available in the :mod:`hashlib` module. """ |
m = hashlib.new(hash_name)
for chunk in self.chunks(8192, mode="rb"):
m.update(chunk)
return m |
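A standalone sketch of the same incremental hashing, reading from a file-like object in 8 KiB chunks instead of a path:

```python
import hashlib
import io

def hash_stream(fileobj, hash_name='md5', chunk_size=8192):
    # feed the hash object chunk by chunk, as _hash() above does via chunks()
    m = hashlib.new(hash_name)
    for chunk in iter(lambda: fileobj.read(chunk_size), b''):
        m.update(chunk)
    return m

digest = hash_stream(io.BytesIO(b'hello')).hexdigest()
```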
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chown(self, uid=-1, gid=-1):
""" Change the owner and group by names rather than the uid or gid numbers. .. seealso:: :func:`os.chown` """ |
if hasattr(os, 'chown'):
if 'pwd' in globals() and isinstance(uid, str):
uid = pwd.getpwnam(uid).pw_uid
if 'grp' in globals() and isinstance(gid, str):
gid = grp.getgrnam(gid).gr_gid
os.chown(self, uid, gid)
else:
msg = "Ownership not available on this platform."
raise NotImplementedError(msg)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def in_place( self, mode='r', buffering=-1, encoding=None, errors=None, newline=None, backup_extension=None, ):
""" A context in which a file may be re-written in-place with new content. Yields a tuple of :samp:`({readable}, {writable})` file objects, where `writable` replaces `readable`. If an exception occurs, the old file is restored, removing the written data. Mode *must not* use ``'w'``, ``'a'``, or ``'+'``; only read-only-modes are allowed. A :exc:`ValueError` is raised on invalid modes. For example, to add line numbers to a file:: p = Path(filename) assert p.isfile() with p.in_place() as (reader, writer):
for number, line in enumerate(reader, 1):
writer.write('{0:3}: '.format(number))) writer.write(line) Thereafter, the file at `filename` will have line numbers in it. """ |
import io
if set(mode).intersection('wa+'):
raise ValueError('Only read-only file modes can be used')
# move existing file to backup, create new file with same permissions
# borrowed extensively from the fileinput module
backup_fn = self + (backup_extension or os.extsep + 'bak')
try:
os.unlink(backup_fn)
except os.error:
pass
os.rename(self, backup_fn)
readable = io.open(
backup_fn, mode, buffering=buffering,
encoding=encoding, errors=errors, newline=newline,
)
try:
perm = os.fstat(readable.fileno()).st_mode
except OSError:
writable = open(
self, 'w' + mode.replace('r', ''),
buffering=buffering, encoding=encoding, errors=errors,
newline=newline,
)
else:
os_mode = os.O_CREAT | os.O_WRONLY | os.O_TRUNC
if hasattr(os, 'O_BINARY'):
os_mode |= os.O_BINARY
fd = os.open(self, os_mode, perm)
writable = io.open(
fd, "w" + mode.replace('r', ''),
buffering=buffering, encoding=encoding, errors=errors,
newline=newline,
)
try:
if hasattr(os, 'chmod'):
os.chmod(self, perm)
except OSError:
pass
try:
yield readable, writable
except Exception:
# move backup back
readable.close()
writable.close()
try:
os.unlink(self)
except os.error:
pass
os.rename(backup_fn, self)
raise
else:
readable.close()
writable.close()
finally:
try:
os.unlink(backup_fn)
except os.error:
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_dir(self, scope, class_):
""" Return the callable function from appdirs, but with the result wrapped in self.path_class """ |
prop_name = '{scope}_{class_}_dir'.format(**locals())
value = getattr(self.wrapper, prop_name)
MultiPath = Multi.for_class(self.path_class)
return MultiPath.detect(value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _next_class(cls):
""" Multi-subclasses should use the parent class """ |
return next(
class_
for class_ in cls.__mro__
if not issubclass(class_, Multi)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render_math(self, token):
""" Ensure Math tokens are all enclosed in two dollar signs. """ |
if token.content.startswith('$$'):
return self.render_raw_text(token)
return '${}$'.format(self.render_raw_text(token)) |
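The rule is simply: leave already-delimited display math (`$$...$$`) alone, wrap everything else in single dollars. A sketch on plain strings (the real method renders the token's raw text first):

```python
def wrap_math(content):
    # display math is already delimited; inline math gets single dollars
    if content.startswith('$$'):
        return content
    return '${}$'.format(content)
```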
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def markdown(iterable, renderer=HTMLRenderer):
""" Output HTML with default settings. Enables inline and block-level HTML tags. """ |
with renderer() as renderer:
return renderer.render(Document(iterable)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_file(filename, renderer):
""" Parse a Markdown file and dump the output to stdout. """ |
try:
with open(filename, 'r') as fin:
rendered = mistletoe.markdown(fin, renderer)
print(rendered, end='')
except OSError:
sys.exit('Cannot open file "{}".'.format(filename)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interactive(renderer):
""" Parse user input, dump to stdout, rinse and repeat. Python REPL style. """ |
_import_readline()
_print_heading(renderer)
contents = []
more = False
while True:
try:
prompt, more = ('... ', True) if more else ('>>> ', True)
contents.append(input(prompt) + '\n')
except EOFError:
print('\n' + mistletoe.markdown(contents, renderer), end='')
more = False
contents = []
except KeyboardInterrupt:
print('\nExiting.')
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toc(self):
""" Returns table of contents as a block_token.List instance. """ |
from mistletoe.block_token import List
def get_indent(level):
if self.omit_title:
level -= 1
return ' ' * 4 * (level - 1)
def build_list_item(heading):
level, content = heading
template = '{indent}- {content}\n'
return template.format(indent=get_indent(level), content=content)
return List([build_list_item(heading) for heading in self._headings]) |
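The indentation arithmetic above can be sketched on plain `(level, content)` pairs, without the `block_token.List` dependency:

```python
def build_toc_lines(headings, omit_title=False):
    # sketch of the indentation logic in toc() above
    def get_indent(level):
        if omit_title:
            level -= 1
        return ' ' * 4 * (level - 1)
    return ['{indent}- {content}\n'.format(indent=get_indent(level), content=content)
            for level, content in headings]

lines = build_toc_lines([(1, 'Intro'), (2, 'Setup'), (2, 'Usage')])
```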
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render_inner(self, token):
""" Recursively renders child tokens. Joins the rendered strings with no space in between. If newlines / spaces are needed between tokens, add them in their respective templates, or override this function in the renderer subclass, so that whitespace won't seem to appear magically for anyone reading your program. Arguments: token: a branch node who has children attribute. """ |
rendered = [self.render(child) for child in token.children]
return ''.join(rendered) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def complexity_entropy_multiscale(signal, max_scale_factor=20, m=2, r="default"):
""" Computes the Multiscale Entropy. Uses sample entropy with 'chebychev' distance. Parameters signal : list or array List or array of values. max_scale_factor: int Max scale factor (*tau*). The max length of coarse-grained time series analyzed. Will analyze scales for all integers from 1:max_scale_factor. See Costa (2005). m : int The embedding dimension (*m*, the length of vectors to compare). r : float Similarity factor *r*. Distance threshold for two template vectors to be considered equal. Default is 0.15*std(signal). Returns mse: dict A dict containing "MSE_Parameters" (a dict with the actual max_scale_factor, m and r), "MSE_Values" (an array with the sample entropy for each scale_factor up to the max_scale_factor), "MSE_AUC" (A float: The area under the MSE_Values curve. A point-estimate of mse) and "MSE_Sum" (A float: The sum of MSE_Values curve. Another point-estimate of mse; Norris, 2008). Example Notes *Details* - **multiscale entropy**: Entropy is a measure of unpredictability of the state, or equivalently, of its average information content. Multiscale entropy (MSE) analysis is a new method of measuring the complexity of coarse grained versions of the original data, where coarse graining is at all scale factors from 1:max_scale_factor. *Authors* - tjugo (https://github.com/nikdon) - Dominique Makowski (https://github.com/DominiqueMakowski) - Anthony Gatti (https://github.com/gattia) *Dependencies* - numpy - nolds *See Also* - pyEntropy package: https://github.com/nikdon/pyEntropy References - Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049. - Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical review E, 71(2), 021906. - Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). 
Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947. - Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22. """ |
if r == "default":
r = 0.15*np.std(signal)
n = len(signal)
per_scale_entropy_values = np.zeros(max_scale_factor)
# Compute SampEn for all scale factors
for i in range(max_scale_factor):
b = int(np.fix(n / (i + 1)))
temp_ts = [0] * int(b)
for j in range(b):
num = sum(signal[j * (i + 1): (j + 1) * (i + 1)])
den = i + 1
temp_ts[j] = float(num) / float(den)
se = nolds.sampen(temp_ts, m, r, nolds.measures.rowwise_chebyshev, debug_plot=False, plot_file=None)
if np.isinf(se):
print("NeuroKit warning: complexity_entropy_multiscale(): Signal might be to short to compute SampEn for scale factors > " + str(i) + ". Setting max_scale_factor to " + str(i) + ".")
max_scale_factor = i
break
else:
per_scale_entropy_values[i] = se
all_entropy_values = per_scale_entropy_values[0:max_scale_factor]
# Compute final indices
parameters = {"max_scale_factor": max_scale_factor,
"r": r,
"m": m}
mse = {"MSE_Parameters": parameters,
"MSE_Values" : all_entropy_values,
"MSE_AUC": np.trapz(all_entropy_values),
"MSE_Sum": np.sum(all_entropy_values)}
return (mse) |
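The inner loop above is the coarse-graining step of MSE: average non-overlapping windows of length *scale*, dropping any leftover samples. In pure Python:

```python
def coarse_grain(signal, scale):
    # average non-overlapping windows of length `scale`; leftovers are dropped
    b = len(signal) // scale
    return [sum(signal[j * scale:(j + 1) * scale]) / scale for j in range(b)]
```

Sample entropy is then computed on each coarse-grained series, one per scale factor.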
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_gfp(raws, gflp_method="GFPL1", scale=True, normalize=True, smoothing=None):
""" Run the GFP analysis. """ |
# Load data if necessary
# if isinstance(raws, str):
# raws = load_object(filename=raws)
# Initialize empty dict
gfp = {}
for participant in raws:
gfp[participant] = {}
for run in raws[participant]:
# Generate empty dic
gfp[participant][run] = {}
# Assign raw object to raw
raw = raws[participant][run].copy()
# Check if MEG or EEG data
if any("MEG" in ch for ch in raw.info["ch_names"]):
meg = True
eeg = False
else:
meg = False
eeg = True
# Save ECG channel
try:
gfp[participant][run]["ecg"] = np.array(raw.copy().pick_types(meg=False, eeg=False, ecg=True).to_data_frame())
except ValueError:
gfp[participant][run]["ecg"] = np.nan
# Select appropriate channels
data = raw.copy().pick_types(meg=meg, eeg=eeg)
gfp[participant][run]["data_info"] = data.info
gfp[participant][run]["data_freq"] = data.info["sfreq"]
gfp[participant][run]["run_duration"] = len(data) / data.info["sfreq"]
# Convert to numpy array
data = np.array(data.to_data_frame())
# find GFP peaks
data, gfp_curve, gfp_peaks = eeg_gfp_peaks(data,
gflp_method=gflp_method,
smoothing=smoothing,
smoothing_window=100,
peak_method="wavelet",
normalize=normalize)
# Store them
gfp[participant][run]["microstates_times"] = gfp_peaks
# Select brain state at peaks
data_peaks = data[gfp_peaks]
# Store the data and scale parameters
if scale is True:
gfp[participant][run]["data"] = z_score(data_peaks)
else:
gfp[participant][run]["data"] = data_peaks
gfp[participant][run]["data_scale"] = scale
gfp[participant][run]["data_normalize"] = normalize
gfp[participant][run]["data_smoothing"] = smoothing
return(gfp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_microstates_clustering(data, n_microstates=4, clustering_method="kmeans", n_jobs=1, n_init=25, occurence_rejection_treshold=0.05, max_refitting=5, verbose=True):
""" Fit the clustering algorithm. """ |
# Create training set
training_set = data.copy()
if verbose is True:
print("- Initializing the clustering algorithm...")
if clustering_method == "kmeans":
algorithm = sklearn.cluster.KMeans(init='k-means++', n_clusters=n_microstates, n_init=n_init, n_jobs=n_jobs)
elif clustering_method == "spectral":
algorithm = sklearn.cluster.SpectralClustering(n_clusters=n_microstates, n_init=n_init, n_jobs=n_jobs)
elif clustering_method == "agglom":
algorithm = sklearn.cluster.AgglomerativeClustering(n_clusters=n_microstates, linkage="complete")
elif clustering_method == "dbscan":
algorithm = sklearn.cluster.DBSCAN(min_samples=100)
elif clustering_method == "affinity":
algorithm = sklearn.cluster.AffinityPropagation(damping=0.5)
else:
raise ValueError("NeuroKit Error: eeg_microstates(): clustering_method must be 'kmeans', 'spectral', 'dbscan', 'affinity' or 'agglom'")
refitting = 0 # Initialize the number of refittings
good_fit_achieved = False
while good_fit_achieved is False:
good_fit_achieved = True
if verbose is True:
print("- Fitting the classifier...")
# Fit the algorithm
algorithm.fit(training_set)
if verbose is True:
print("- Clustering back the initial data...")
# Predict the more likely cluster for each observation
predicted = algorithm.fit_predict(training_set)
if verbose is True:
print("- Check for abnormalities...")
# Check for abnormalities and prune the training set until none found
occurences = dict(collections.Counter(predicted))
masks = [np.array([True]*len(training_set))]
for microstate in occurences:
# is the frequency of one microstate inferior to a treshold
if occurences[microstate] < len(data)*occurence_rejection_treshold:
good_fit_achieved = False
refitting += 1 # Increment the refitting
print("NeuroKit Warning: eeg_microstates(): detected some outliers: refitting the classifier (n=" + str(refitting) + ").")
masks.append(predicted!=microstate)
mask = np.all(masks, axis=0)
training_set = training_set[mask]
return(algorithm) |
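The outlier check inside the refitting loop above reduces to: flag any cluster whose share of observations falls below the rejection threshold. Isolated as a sketch:

```python
import collections

def low_occurrence_labels(predicted, rejection_threshold=0.05):
    # flag clusters occurring less often than the rejection threshold
    counts = collections.Counter(predicted)
    n = len(predicted)
    return {label for label, c in counts.items() if c < n * rejection_threshold}

flagged = low_occurrence_labels([0] * 95 + [1] * 5, rejection_threshold=0.06)
```

Observations in flagged clusters are masked out of the training set and the classifier is refit.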
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_microstates_plot(method, path="", extension=".png", show_sensors_position=False, show_sensors_name=False, plot=True, save=True, dpi=150, contours=0, colorbar=False, separate=False):
""" Plot the microstates. """ |
# Generate and store figures
figures = []
names = []
# Check if microstates metrics available
try:
microstates = method["microstates_good_fit"]
except KeyError:
microstates = method["microstates"]
# Create individual plot for each microstate
for microstate in set(microstates):
if microstate != "Bad":
values = np.mean(method["data"][np.where(microstates == microstate)], axis=0)
values = np.array(values, ndmin=2).T
evoked = mne.EvokedArray(values, method["raw.info_example"], 0)
fig = evoked.plot_topomap(times=0, title=microstate, size=6, contours=contours, time_format="", show=plot, colorbar=colorbar, show_names=show_sensors_name, sensors=show_sensors_position)
figures.append(fig)
# Save separate figures
name = path + "microstate_%s_%s%s%s_%s%i_%s%s" %(microstate, method["data_scale"], method["data_normalize"], method["data_smoothing"], method["feature_reduction_method"], method["n_features"], method["clustering_method"], extension)
fig.savefig(name, dpi=dpi)
names.append(name)
# Save Combined plot
if save is True:
# Combine all plots
image_template = PIL.Image.open(names[0])
X, Y = image_template.size
image_template.close()
combined = PIL.Image.new('RGB', (int(X*len(set(microstates))/2), int( Y*len(set(microstates))/2)))
fig = 0
for x in np.arange(0, len(set(microstates))/2*int(X), int(X)):
for y in np.arange(0, len(set(microstates))/2*int(Y), int(Y)):
try:
newfig = PIL.Image.open(names[fig])
combined.paste(newfig, (int(x), int(y)))
newfig.close()
except (IndexError, OSError):
pass
fig += 1
#combined.show()
combined_name = path + "microstates_%s%s%s_%s%i_%s%s" %(method["data_scale"], method["data_normalize"], method["data_smoothing"], method["feature_reduction_method"], method["n_features"], method["clustering_method"], extension)
combined.save(combined_name)
# Delete separate plots if needed
if separate is False or save is False:
for name in names:
os.remove(name)
return(figures) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_microstates_relabel(method, results, microstates_labels, reverse_microstates=None):
""" Relabel the microstates. """ |
microstates = list(method['microstates'])
for index, microstate in enumerate(method['microstates']):
if microstate in list(reverse_microstates.keys()):
microstates[index] = reverse_microstates[microstate]
method["data"][index] = -1*method["data"][index]
if microstate in list(microstates_labels.keys()):
microstates[index] = microstates_labels[microstate]
method['microstates'] = np.array(microstates)
return(results, method) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bio_process(ecg=None, rsp=None, eda=None, emg=None, add=None, sampling_rate=1000, age=None, sex=None, position=None, ecg_filter_type="FIR", ecg_filter_band="bandpass", ecg_filter_frequency=[3, 45], ecg_segmenter="hamilton", ecg_quality_model="default", ecg_hrv_features=["time", "frequency"], eda_alpha=8e-4, eda_gamma=1e-2, scr_method="makowski", scr_treshold=0.1, emg_names=None, emg_envelope_freqs=[10, 400], emg_envelope_lfreq=4, emg_activation_treshold="default", emg_activation_n_above=0.25, emg_activation_n_below=1):
""" Automated processing of bio signals. Wrapper for other bio processing functions. Parameters ecg : list or array ECG signal array. rsp : list or array Respiratory signal array. eda : list or array EDA signal array. emg : list, array or DataFrame EMG signal array. Can include multiple channels. add : pandas.DataFrame Dataframe or channels to add by concatenation to the processed dataframe. sampling_rate : int Sampling rate (samples/second). age : float Subject's age. sex : str Subject's gender ("m" or "f"). position : str Recording position. To compare with data from Voss et al. (2015), use "supine". ecg_filter_type : str Can be Finite Impulse Response filter ("FIR"), Butterworth filter ("butter"), Chebyshev filters ("cheby1" and "cheby2"), Elliptic filter ("ellip") or Bessel filter ("bessel"). ecg_filter_band : str Band type, can be Low-pass filter ("lowpass"), High-pass filter ("highpass"), Band-pass filter ("bandpass"), Band-stop filter ("bandstop"). ecg_filter_frequency : int or list Cutoff frequencies, format depends on type of band: "lowpass" or "bandpass": single frequency (int), "bandpass" or "bandstop": pair of frequencies (list). ecg_quality_model : str Path to model used to check signal quality. "default" uses the builtin model. None to skip this function. ecg_hrv_features : list What HRV indices to compute. Any or all of 'time', 'frequency' or 'nonlinear'. None to skip this function. ecg_segmenter : str The cardiac phase segmenter. Can be "hamilton", "gamboa", "engzee", "christov" or "ssf". See :func:`neurokit.ecg_preprocess()` for details. eda_alpha : float cvxEDA penalization for the sparse SMNA driver. eda_gamma : float cvxEDA penalization for the tonic spline coefficients. scr_method : str SCR extraction algorithm. "makowski" (default), "kim" (biosPPy's default; See Kim et al., 2004) or "gamboa" (Gamboa, 2004). scr_treshold : float SCR minimum treshold (in terms of signal standart deviation). emg_names : list List of EMG channel names. 
Returns processed_bio : dict Dict containing processed bio features. Contains the ECG raw signal, the filtered signal, the R peaks indexes, HRV characteristics, all the heartbeats, the Heart Rate, and the RSP filtered signal (if respiration provided), respiratory sinus arrhythmia (RSA) features, the EDA raw signal, the filtered signal, the phasic component (if cvxEDA is True), the SCR onsets, peak indexes and amplitudes, the EMG raw signal, the filtered signal and pulse onsets. Example Notes *Details* - **ECG Features**: See :func:`neurokit.ecg_process()`. - **EDA Features**: See :func:`neurokit.eda_process()`. - **RSP Features**: See :func:`neurokit.rsp_process()`. - **EMG Features**: See :func:`neurokit.emg_process()`. *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - pandas *See Also* - BioSPPY: https://github.com/PIA-Group/BioSPPy - hrv: https://github.com/rhenanbartels/hrv - cvxEDA: https://github.com/lciti/cvxEDA References - Heart rate variability. (1996). Standards of measurement, physiological interpretation, and clinical use. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Eur Heart J, 17, 354-381. - Voss, A., Schroeder, R., Heitmann, A., Peters, A., & Perz, S. (2015). Short-term heart rate variability—influence of gender and age in healthy subjects. PloS one, 10(3), e0118308. - Greco, A., Valenza, G., & Scilingo, E. P. (2016). Evaluation of CDA and CvxEDA Models. In Advances in Electrodermal Activity Processing with Applications for Mental Health (pp. 35-43). Springer International Publishing. - Greco, A., Valenza, G., Lanata, A., Scilingo, E. P., & Citi, L. (2016). cvxEDA: A convex optimization approach to electrodermal activity processing. IEEE Transactions on Biomedical Engineering, 63(4), 797-804. - Zohar, A. H., Cloninger, C. R., & McCraty, R. (2013). 
Personality and heart rate variability: exploring pathways from personality to cardiac coherence and health. Open Journal of Social Sciences, 1(06), 32. - Smith, A. L., Owen, H., & Reynolds, K. J. (2013). Heart rate variability indices for very short-term (30 beat) analysis. Part 2: validation. Journal of clinical monitoring and computing, 27(5), 577-585. - Azevedo, R. T., Garfinkel, S. N., Critchley, H. D., & Tsakiris, M. (2017). Cardiac afferent activity modulates the expression of racial stereotypes. Nature communications, 8. - Edwards, L., Ring, C., McIntyre, D., & Carroll, D. (2001). Modulation of the human nociceptive flexion reflex across the cardiac cycle. Psychophysiology, 38(4), 712-718. - Gray, M. A., Rylander, K., Harrison, N. A., Wallin, B. G., & Critchley, H. D. (2009). Following one's heart: cardiac rhythms gate central initiation of sympathetic reflexes. Journal of Neuroscience, 29(6), 1817-1825. - Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, 42(3), 419-427. - Gamboa, H. (2008). Multi-Modal Behavioral Biometrics Based on HCI and Electrophysiology (Doctoral dissertation, PhD thesis, Universidade Técnica de Lisboa, Instituto Superior Técnico). """ |
processed_bio = {}
bio_df = pd.DataFrame({})
# ECG & RSP
if ecg is not None:
ecg = ecg_process(ecg=ecg, rsp=rsp, sampling_rate=sampling_rate, filter_type=ecg_filter_type, filter_band=ecg_filter_band, filter_frequency=ecg_filter_frequency, segmenter=ecg_segmenter, quality_model=ecg_quality_model, hrv_features=ecg_hrv_features, age=age, sex=sex, position=position)
processed_bio["ECG"] = ecg["ECG"]
if rsp is not None:
processed_bio["RSP"] = ecg["RSP"]
bio_df = pd.concat([bio_df, ecg["df"]], axis=1)
if rsp is not None and ecg is None:
rsp = rsp_process(rsp=rsp, sampling_rate=sampling_rate)
processed_bio["RSP"] = rsp["RSP"]
bio_df = pd.concat([bio_df, rsp["df"]], axis=1)
# EDA
if eda is not None:
eda = eda_process(eda=eda, sampling_rate=sampling_rate, alpha=eda_alpha, gamma=eda_gamma, scr_method=scr_method, scr_treshold=scr_treshold)
processed_bio["EDA"] = eda["EDA"]
bio_df = pd.concat([bio_df, eda["df"]], axis=1)
# EMG
if emg is not None:
emg = emg_process(emg=emg, sampling_rate=sampling_rate, emg_names=emg_names, envelope_freqs=emg_envelope_freqs, envelope_lfreq=emg_envelope_lfreq, activation_treshold=emg_activation_treshold, activation_n_above=emg_activation_n_above, activation_n_below=emg_activation_n_below)
bio_df = pd.concat([bio_df, emg.pop("df")], axis=1)
for i in emg:
processed_bio[i] = emg[i]
if add is not None:
add = add.reset_index(drop=True)
bio_df = pd.concat([bio_df, add], axis=1)
processed_bio["df"] = bio_df
return(processed_bio) |
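The merging logic above reduces to column-wise concatenation of each sub-module's `df`; a minimal sketch with toy data (column names are illustrative):

```python
import pandas as pd

# Each processing step returns a "df" that is joined side-by-side;
# pd.concat with axis=1 aligns the frames on their (shared) row index.
ecg_df = pd.DataFrame({"ECG_Filtered": [0.10, 0.12, 0.11]})
eda_df = pd.DataFrame({"EDA_Filtered": [1.00, 1.02, 1.01]})
bio_df = pd.concat([ecg_df, eda_df], axis=1)

print(list(bio_df.columns))  # ['ECG_Filtered', 'EDA_Filtered']
```

This is why `add` is reset with `reset_index(drop=True)` first: concatenation aligns on the index, not on position.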
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_process(ecg, rsp=None, sampling_rate=1000, filter_type="FIR", filter_band="bandpass", filter_frequency=[3, 45], segmenter="hamilton", quality_model="default", hrv_features=["time", "frequency"], age=None, sex=None, position=None):
""" Automated processing of ECG and RSP signals. Parameters ecg : list or ndarray ECG signal array. rsp : list or ndarray Respiratory (RSP) signal array. sampling_rate : int Sampling rate (samples/second). filter_type : str Can be Finite Impulse Response filter ("FIR"), Butterworth filter ("butter"), Chebyshev filters ("cheby1" and "cheby2"), Elliptic filter ("ellip") or Bessel filter ("bessel"). filter_band : str Band type, can be Low-pass filter ("lowpass"), High-pass filter ("highpass"), Band-pass filter ("bandpass"), Band-stop filter ("bandstop"). filter_frequency : int or list Cutoff frequencies, format depends on type of band: "lowpass" or "bandpass": single frequency (int), "bandpass" or "bandstop": pair of frequencies (list). segmenter : str The cardiac phase segmenter. Can be "hamilton", "gamboa", "engzee", "christov" or "ssf". See :func:`neurokit.ecg_preprocess()` for details. quality_model : str Path to model used to check signal quality. "default" uses the builtin model. None to skip this function. hrv_features : list What HRV indices to compute. Any or all of 'time', 'frequency' or 'nonlinear'. None to skip this function. age : float Subject's age for adjusted HRV. sex : str Subject's gender ("m" or "f") for adjusted HRV. position : str Recording position. To compare with data from Voss et al. (2015), use "supine". Returns processed_ecg : dict Dict containing processed ECG features. Contains the ECG raw signal, the filtered signal, the R peaks indexes, HRV features, all the heartbeats, the Heart Rate, the RSP filtered signal (if respiration provided) and the respiratory sinus arrhythmia (RSA). Example Notes *Details* - **Cardiac Cycle**: A typical ECG showing a heartbeat consists of a P wave, a QRS complex and a T wave.The P wave represents the wave of depolarization that spreads from the SA-node throughout the atria. The QRS complex reflects the rapid depolarization of the right and left ventricles. 
Since the ventricles are the largest part of the heart, in terms of mass, the QRS complex usually has a much larger amplitude than the P-wave. The T wave represents the repolarization of the ventricles. On rare occasions, a U wave can be seen following the T wave. The U wave is believed to be related to the last remnants of ventricular repolarization. - **RSA**: Respiratory sinus arrhythmia (RSA) is a naturally occurring variation in heart rate that occurs during the breathing cycle, serving as a measure of parasympathetic nervous system activity. See :func:`neurokit.ecg_rsa()` for details. - **HRV**: Heart-Rate Variability (HRV) is a finely tuned measure of heart-brain communication, as well as a strong predictor of morbidity and death (Zohar et al., 2013). It describes the complex variation of beat-to-beat intervals mainly controlled by the autonomic nervous system (ANS) through the interplay of sympathetic and parasympathetic neural activity at the sinus node. In healthy subjects, the dynamic cardiovascular control system is characterized by its ability to adapt to physiologic perturbations and changing conditions, maintaining cardiovascular homeostasis (Voss, 2015). In general, HRV is influenced by several factors like chemical, hormonal and neural modulations, circadian changes, exercise, emotions, posture and preload. There are several procedures to perform HRV analysis, usually classified into three categories: time domain methods, frequency domain methods and non-linear methods. See :func:`neurokit.ecg_hrv()` for a description of indices. - **Adjusted HRV**: The raw HRV features are normalized :math:`(raw - Mcluster) / sd` according to the participant's age and gender. In data from Voss et al. (2015), HRV analysis was performed on 5-min ECG recordings (lead II and lead V2 simultaneously, 500 Hz sample rate) obtained in supine position after a 5–10 minute resting phase. 
The cohort of healthy subjects consisted of 782 women and 1124 men between the ages of 25 and 74 years, clustered into 4 groups: YF (Female, Age = [25-49], n=571), YM (Male, Age = [25-49], n=744), EF (Female, Age = [50-74], n=211) and EM (Male, Age = [50-74], n=571). - **Systole/Diastole**: One prominent channel of body and brain communication is that conveyed by baroreceptors, pressure and stretch-sensitive receptors within the heart and surrounding arteries. Within each cardiac cycle, bursts of baroreceptor afferent activity encoding the strength and timing of each heartbeat are carried via the vagus and glossopharyngeal nerve afferents to the nucleus of the solitary tract. This is the principal route that communicates to the brain the dynamic state of the heart, enabling the representation of cardiovascular arousal within viscerosensory brain regions, and influencing ascending neuromodulator systems implicated in emotional and motivational behaviour. Because arterial baroreceptors are activated by the arterial pulse pressure wave, their phasic discharge is maximal during and immediately after the cardiac systole, that is, when the blood is ejected from the heart, and minimal during cardiac diastole, that is, between heartbeats (Azevedo, 2017). - **ECG Signal Quality**: Using the PTB-Diagnostic dataset available from PhysioNet, we extracted all the ECG signals from the healthy participants, that contained 15 recording leads/subject. We extracted all cardiac cycles, for each lead, and downsampled them from 600 to 200 datapoints. Note that we dropped the first 8 values that were NaNs. Then, we fitted a neural network model on 2/3 of the dataset (that contains 134392 cardiac cycles) to predict the lead. Model evaluation was done on the remaining 1/3. The model shows good performance in predicting the correct recording lead (accuracy=0.91, precision=0.91). In this function, this model is fitted on each cardiac cycle of the provided ECG signal. 
It returns the probable recording lead (the most common predicted lead), the signal quality of each cardiac cycle (the probability of belonging to the probable recording lead) and the overall signal quality (the mean of signal quality). See creation `scripts <https://github.com/neuropsychology/NeuroKit.py/tree/master/utils/ecg_signal_quality_model_creation>`_. *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ - Rhenan Bartels (https://github.com/rhenanbartels) *Dependencies* - biosppy - numpy - pandas *See Also* - BioSPPY: https://github.com/PIA-Group/BioSPPy - hrv: https://github.com/rhenanbartels/hrv - RHRV: http://rhrv.r-forge.r-project.org/ References - Heart rate variability. (1996). Standards of measurement, physiological interpretation, and clinical use. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Eur Heart J, 17, 354-381. - Voss, A., Schroeder, R., Heitmann, A., Peters, A., & Perz, S. (2015). Short-term heart rate variability—influence of gender and age in healthy subjects. PloS one, 10(3), e0118308. - Zohar, A. H., Cloninger, C. R., & McCraty, R. (2013). Personality and heart rate variability: exploring pathways from personality to cardiac coherence and health. Open Journal of Social Sciences, 1(06), 32. - Smith, A. L., Owen, H., & Reynolds, K. J. (2013). Heart rate variability indices for very short-term (30 beat) analysis. Part 2: validation. Journal of clinical monitoring and computing, 27(5), 577-585. - Azevedo, R. T., Garfinkel, S. N., Critchley, H. D., & Tsakiris, M. (2017). Cardiac afferent activity modulates the expression of racial stereotypes. Nature communications, 8. - Edwards, L., Ring, C., McIntyre, D., & Carroll, D. (2001). Modulation of the human nociceptive flexion reflex across the cardiac cycle. Psychophysiology, 38(4), 712-718. - Gray, M. A., Rylander, K., Harrison, N. A., Wallin, B. G., & Critchley, H. D. (2009). 
Following one's heart: cardiac rhythms gate central initiation of sympathetic reflexes. Journal of Neuroscience, 29(6), 1817-1825. """ |
# Preprocessing
# =============
processed_ecg = ecg_preprocess(ecg,
sampling_rate=sampling_rate,
filter_type=filter_type,
filter_band=filter_band,
filter_frequency=filter_frequency,
segmenter=segmenter)
# Signal quality
# ===============
if quality_model is not None:
quality = ecg_signal_quality(cardiac_cycles=processed_ecg["ECG"]["Cardiac_Cycles"], sampling_rate=sampling_rate, rpeaks=processed_ecg["ECG"]["R_Peaks"], quality_model=quality_model)
processed_ecg["ECG"].update(quality)
processed_ecg["df"] = pd.concat([processed_ecg["df"], quality["ECG_Signal_Quality"]], axis=1)
# HRV
# =============
if hrv_features is not None:
hrv = ecg_hrv(rpeaks=processed_ecg["ECG"]["R_Peaks"], sampling_rate=sampling_rate, hrv_features=hrv_features)
try:
processed_ecg["df"] = pd.concat([processed_ecg["df"], hrv.pop("df")], axis=1)
except KeyError:
pass
processed_ecg["ECG"]["HRV"] = hrv
if age is not None and sex is not None and position is not None:
processed_ecg["ECG"]["HRV_Adjusted"] = ecg_hrv_assessment(hrv, age, sex, position)
# RSP
# =============
if rsp is not None:
rsp = rsp_process(rsp=rsp, sampling_rate=sampling_rate)
processed_ecg["RSP"] = rsp["RSP"]
processed_ecg["df"] = pd.concat([processed_ecg["df"], rsp["df"]], axis=1)
# RSA
# =============
rsa = ecg_rsa(processed_ecg["ECG"]["R_Peaks"], rsp["df"]["RSP_Filtered"], sampling_rate=sampling_rate)
processed_ecg["ECG"]["RSA"] = rsa
processed_ecg["df"] = pd.concat([processed_ecg["df"], rsa.pop("df")], axis=1)
return(processed_ecg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_signal_quality(cardiac_cycles, sampling_rate, rpeaks=None, quality_model="default"):
""" Attempt to find the recording lead and the overall and individual quality of heartbeats signal. Although used as a routine, this feature is experimental. Parameters cardiac_cycles : pd.DataFrame DataFrame containing heartbeats. Computed by :function:`neurokit.ecg_process`. sampling_rate : int Sampling rate (samples/second). rpeaks : None or ndarray R-peak location indices. Used for computing an interpolated signal of quality. quality_model : str Path to model used to check signal quality. "default" uses the builtin model. Returns classification : dict Contains classification features. Example Notes *Details* - **ECG Signal Quality**: Using the PTB-Diagnostic dataset available from PhysioNet, we extracted all the ECG signals from the healthy participants, that contained 15 recording leads/subject. We extracted all cardiac cycles, for each lead, and downsampled them from 600 to 200 datapoints. Note that we dropped the 8 first values that were NaNs. Then, we fitted a neural network model on 2/3 of the dataset (that contains 134392 cardiac cycles) to predict the lead. Model evaluation was done on the remaining 1/3. The model show good performances in predicting the correct recording lead (accuracy=0.91, precision=0.91). In this function, this model is fitted on each cardiac cycle of the provided ECG signal. It returns the probable recording lead (the most common predicted lead), the signal quality of each cardiac cycle (the probability of belonging to the probable recording lead) and the overall signal quality (the mean of signal quality). See creation `scripts <https://github.com/neuropsychology/NeuroKit.py/tree/master/utils/ecg_signal_quality_model_creation>`_. *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - numpy - pandas """ |
if len(cardiac_cycles) > 200:
cardiac_cycles = cardiac_cycles.rolling(20).mean().resample("3L").pad()
if len(cardiac_cycles) < 200:
cardiac_cycles = cardiac_cycles.resample("1L").pad()
cardiac_cycles = cardiac_cycles.rolling(20).mean().resample("3L").pad()
if len(cardiac_cycles) < 200:
fill_dict = {}
for i in cardiac_cycles.columns:
fill_dict[i] = [np.nan] * (200-len(cardiac_cycles))
cardiac_cycles = pd.concat([pd.DataFrame(fill_dict), cardiac_cycles], ignore_index=True)
cardiac_cycles = cardiac_cycles.fillna(method="bfill")
cardiac_cycles = cardiac_cycles.reset_index(drop=True)[8:200]
cardiac_cycles = z_score(cardiac_cycles).T
cardiac_cycles = np.array(cardiac_cycles)
if quality_model == "default":
model = sklearn.externals.joblib.load(Path.materials() + 'heartbeat_classification.model')
else:
model = sklearn.externals.joblib.load(quality_model)
# Initialize empty dict
quality = {}
# Find dominant class
lead = model.predict(cardiac_cycles)
lead = pd.Series(lead).value_counts().index[0]
quality["Probable_Lead"] = lead
predict = pd.DataFrame(model.predict_proba(cardiac_cycles))
predict.columns = model.classes_
quality["Cardiac_Cycles_Signal_Quality"] = predict[lead].values
quality["Average_Signal_Quality"] = predict[lead].mean()
# Interpolate to get a continuous signal
if rpeaks is not None:
signal = quality["Cardiac_Cycles_Signal_Quality"]
signal = interpolate(signal, rpeaks, sampling_rate) # Interpolation using 3rd order spline
signal.name = "ECG_Signal_Quality"
quality["ECG_Signal_Quality"] = signal
return(quality) |
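The "probable lead" above is simply the mode of the per-cycle predictions; the pandas `value_counts().index[0]` step is equivalent to this stdlib sketch (toy labels):

```python
from collections import Counter

# Per-cycle lead predictions (illustrative data); the probable lead is the
# most frequent label, i.e. the mode of the prediction list.
predictions = ["lead_II", "lead_V2", "lead_II", "lead_II", "lead_V2"]
probable_lead = Counter(predictions).most_common(1)[0][0]

print(probable_lead)  # lead_II
```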
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_simulate(duration=10, sampling_rate=1000, bpm=60, noise=0.01):
""" Simulates an ECG signal. Parameters duration : int Desired recording length. sampling_rate : int Desired sampling rate. bpm : int Desired simulated heart rate. noise : float Desired noise level. Returns ECG_Response : dict Event-related ECG response features. Example Notes *Authors* - `Diarmaid O Cualain <https://github.com/diarmaidocualain>`_ - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - numpy - scipy.signal References """ |
# The "Daubechies" wavelet is a rough approximation to a real, single, cardiac cycle
cardiac = scipy.signal.wavelets.daub(10)
# Add the gap after the pqrst when the heart is resting.
cardiac = np.concatenate([cardiac, np.zeros(10)])
# Calculate the number of beats in the capture time period
num_heart_beats = int(duration * bpm / 60)
# Concatenate together the number of heart beats needed
ecg = np.tile(cardiac, num_heart_beats)
# Add random (gaussian distributed) noise
noise = np.random.normal(0, noise, len(ecg))
ecg = noise + ecg
# Resample
ecg = scipy.signal.resample(ecg, sampling_rate*duration)
return(ecg) |
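Since `scipy.signal.wavelets.daub` has been removed from recent SciPy releases, here is a library-free sketch of the same idea; it replaces the wavelet with a narrow Gaussian pulse standing in for the QRS spike (all names and constants are illustrative, not the function's actual implementation):

```python
import numpy as np

def simulate_ecg_sketch(duration=10, sampling_rate=1000, bpm=60, noise_sd=0.01, seed=0):
    """Toy ECG-like trace: one narrow Gaussian 'R peak' per beat plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    n_samples = duration * sampling_rate
    beat_len = int(sampling_rate * 60 / bpm)      # samples per heartbeat
    t = np.linspace(-0.5, 0.5, beat_len)
    beat = np.exp(-(t / 0.04) ** 2)               # spike mimicking the QRS complex
    n_beats = int(np.ceil(n_samples / beat_len))
    ecg = np.tile(beat, n_beats)[:n_samples]      # repeat one beat, as np.tile does above
    return ecg + rng.normal(0, noise_sd, n_samples)

sig = simulate_ecg_sketch(duration=2, sampling_rate=500, bpm=75)
```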
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rsp_process(rsp, sampling_rate=1000):
""" Automated processing of RSP signals. Parameters rsp : list or array Respiratory (RSP) signal array. sampling_rate : int Sampling rate (samples/second). Returns processed_rsp : dict Dict containing processed RSP features. Contains the RSP raw signal, the filtered signal, the respiratory cycles onsets, and respiratory phases (inspirations and expirations). Example Notes *Authors* - Dominique Makowski (https://github.com/DominiqueMakowski) *Dependencies* - biosppy - numpy - pandas *See Also* - BioSPPY: https://github.com/PIA-Group/BioSPPy """ |
processed_rsp = {"df": pd.DataFrame({"RSP_Raw": np.array(rsp)})}
biosppy_rsp = dict(biosppy.signals.resp.resp(rsp, sampling_rate=sampling_rate, show=False))
processed_rsp["df"]["RSP_Filtered"] = biosppy_rsp["filtered"]
# RSP Rate
# ============
rsp_rate = biosppy_rsp["resp_rate"]*60 # Get RSP rate value (in cycles per minute)
rsp_times = biosppy_rsp["resp_rate_ts"] # the time (in sec) of each rsp rate value
rsp_times = np.round(rsp_times*sampling_rate).astype(int) # Convert to timepoints
try:
rsp_rate = interpolate(rsp_rate, rsp_times, sampling_rate) # Interpolation using 3rd order spline
processed_rsp["df"]["RSP_Rate"] = rsp_rate
except TypeError:
print("NeuroKit Warning: rsp_process(): Sequence too short to compute respiratory rate.")
processed_rsp["df"]["RSP_Rate"] = np.nan
# RSP Cycles
# ===========================
rsp_cycles = rsp_find_cycles(biosppy_rsp["filtered"])
processed_rsp["df"]["RSP_Inspiration"] = rsp_cycles["RSP_Inspiration"]
processed_rsp["RSP"] = {}
processed_rsp["RSP"]["Cycles_Onsets"] = rsp_cycles["RSP_Cycles_Onsets"]
processed_rsp["RSP"]["Expiration_Onsets"] = rsp_cycles["RSP_Expiration_Onsets"]
processed_rsp["RSP"]["Cycles_Length"] = rsp_cycles["RSP_Cycles_Length"]/sampling_rate
# RSP Variability
# ===========================
rsp_diff = processed_rsp["RSP"]["Cycles_Length"]
processed_rsp["RSP"]["Respiratory_Variability"] = {}
processed_rsp["RSP"]["Respiratory_Variability"]["RSPV_SD"] = np.std(rsp_diff)
processed_rsp["RSP"]["Respiratory_Variability"]["RSPV_RMSSD"] = np.sqrt(np.mean(rsp_diff ** 2))
processed_rsp["RSP"]["Respiratory_Variability"]["RSPV_RMSSD_Log"] = np.log(processed_rsp["RSP"]["Respiratory_Variability"]["RSPV_RMSSD"])
return(processed_rsp) |
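For reference, the textbook RMSSD is the root mean square of *successive differences* between intervals; note that `RSPV_RMSSD` above applies the formula directly to the cycle lengths themselves. A sketch of the standard definition:

```python
import numpy as np

def rmssd(intervals):
    """Root mean square of successive differences between consecutive intervals."""
    diffs = np.diff(np.asarray(intervals, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

value = rmssd([1.0, 1.2, 0.9, 1.1])  # diffs: 0.2, -0.3, 0.2
```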
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rsp_find_cycles(signal):
""" Find Respiratory cycles onsets, durations and phases. Parameters signal : list or array Respiratory (RSP) signal (preferably filtered). Returns rsp_cycles : dict RSP cycles features. Example Notes *Authors* - Dominique Makowski (https://github.com/DominiqueMakowski) *Dependencies* - biosppy *See Also* - BioSPPY: https://github.com/PIA-Group/BioSPPy """ |
# Compute gradient (sort of derivative)
gradient = np.gradient(signal)
# Find zero-crossings
zeros, = biosppy.tools.zero_cross(signal=gradient, detrend=True)
# Find respiratory phases
phases_indices = []
for i in zeros:
if gradient[i+1] > gradient[i-1]:
phases_indices.append("Inspiration")
else:
phases_indices.append("Expiration")
# Select cycles (inspiration) and expiration onsets
inspiration_onsets = []
expiration_onsets = []
for index, onset in enumerate(zeros):
if phases_indices[index] == "Inspiration":
inspiration_onsets.append(onset)
if phases_indices[index] == "Expiration":
expiration_onsets.append(onset)
# Create a continuous inspiration signal
# ---------------------------------------
# Find initial phase
if phases_indices[0] == "Inspiration":
phase = "Expiration"
else:
phase = "Inspiration"
inspiration = []
phase_counter = 0
for i, value in enumerate(signal):
if i == zeros[phase_counter]:
phase = phases_indices[phase_counter]
if phase_counter < len(zeros)-1:
phase_counter += 1
inspiration.append(phase)
# Find last phase
if phases_indices[len(phases_indices)-1] == "Inspiration":
last_phase = "Expiration"
else:
last_phase = "Inspiration"
inspiration = np.array(inspiration)
inspiration[max(zeros):] = last_phase
# Convert to binary
inspiration[inspiration == "Inspiration"] = 1
inspiration[inspiration == "Expiration"] = 0
inspiration = pd.to_numeric(inspiration)
cycles_length = np.diff(inspiration_onsets)
rsp_cycles = {"RSP_Inspiration": inspiration,
"RSP_Expiration_Onsets": expiration_onsets,
"RSP_Cycles_Onsets": inspiration_onsets,
"RSP_Cycles_Length": cycles_length}
return(rsp_cycles) |
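The core trick above is that zero-crossings of the signal's gradient mark the turning points between inspiration and expiration. A minimal, self-contained sketch of that step (without the biosppy helper):

```python
import numpy as np

def find_turning_points(signal):
    """Indices where the derivative changes sign (peaks/troughs of the breathing trace)."""
    gradient = np.gradient(np.asarray(signal, dtype=float))
    signs = np.sign(gradient)
    # strict sign flips between consecutive samples mark the phase transitions
    return np.where(signs[:-1] * signs[1:] < 0)[0] + 1

t = np.linspace(0, 4 * np.pi, 400)
turns = find_turning_points(np.sin(t))  # turning points of two full sine cycles
```

Whether the gradient goes from negative to positive (trough, inspiration onset) or positive to negative (peak, expiration onset) is then decided by comparing `gradient[i-1]` and `gradient[i+1]`, as in the loop above.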
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_select_channels(raw, channel_names):
""" Select one or several channels by name and returns them in a dataframe. Parameters raw : mne.io.Raw Raw EEG data. channel_names : str or list Channel's name(s). Returns channels : pd.DataFrame Channel. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - mne *See Also* - mne package: http://martinos.org/mne/dev/index.html """ |
if not isinstance(channel_names, list):
channel_names = [channel_names]
channels, time_index = raw.copy().pick_channels(channel_names)[:]
if len(channel_names) > 1:
channels = pd.DataFrame(channels.T, columns=channel_names)
else:
channels = pd.Series(channels[0])
channels.name = channel_names[0]
return(channels) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_create_mne_events(onsets, conditions=None):
""" Create MNE compatible events. Parameters onsets : list or array Events onsets. conditions : list A list of equal length containing the stimuli types/conditions. Returns (events, event_id) : tuple MNE-formated events and a dictionary with event's names. Example Authors - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ """ |
event_id = {}
if conditions is None:
conditions = ["Event"] * len(onsets)
# Sanity check
if len(conditions) != len(onsets):
print("NeuroKit Warning: eeg_create_events(): conditions parameter of different length than onsets. Aborting.")
return()
event_names = list(set(conditions))
# event_index = [1, 2, 3, 4, 5, 32, 64, 128]
event_index = list(range(len(event_names)))
for index, name in enumerate(event_names):
conditions = [event_index[index] if x == name else x for x in conditions]
event_id[name] = event_index[index]
events = np.array([onsets, [0]*len(onsets), conditions]).T
return(events, event_id) |
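Note that `list(set(conditions))` above yields a platform-dependent ordering, so the integer codes can differ between runs. A deterministic sketch of the same mapping (illustrative names), sorting the labels before assigning codes:

```python
import numpy as np

def create_events_sketch(onsets, conditions):
    """Build the (n_events, 3) MNE-style array: [onset, 0, condition code]."""
    names = sorted(set(conditions))                 # sorted -> reproducible codes
    event_id = {name: code for code, name in enumerate(names)}
    codes = [event_id[c] for c in conditions]
    return np.array([onsets, [0] * len(onsets), codes]).T, event_id

events, event_id = create_events_sketch([10, 50, 90], ["A", "B", "A"])
```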
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_add_events(raw, events_channel, conditions=None, treshold="auto", cut="higher", time_index=None, number="all", after=0, before=None, min_duration=1):
""" Find events on a channel, convert them into an MNE compatible format, and add them to the raw data. Parameters raw : mne.io.Raw Raw EEG data. events_channel : str or array Name of the trigger channel if in the raw, or array of equal length if externally supplied. conditions : list List containing the stimuli types/conditions. treshold : float The treshold value by which to select the events. If "auto", takes the value between the max and the min. cut : str "higher" or "lower", define the events as above or under the treshold. For photosensors, a white screen corresponds usually to higher values. Therefore, if your events were signalled by a black colour, events values would be the lower ones, and you should set the cut to "lower". Add a corresponding datetime index, will return an addional array with the onsets as datetimes. number : str or int How many events should it select. after : int If number different than "all", then at what time should it start selecting the events. before : int If number different than "all", before what time should it select the events. min_duration : int The minimum duration of an event (in timepoints). Returns (raw, events, event_id) : tuple The raw file with events, the mne-formatted events and event_id. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - pandas *See Also* - mne: http://martinos.org/mne/dev/index.html References - None """ |
# Extract the events_channel from raw if needed
if isinstance(events_channel, str):
try:
events_channel = eeg_select_channels(raw, events_channel)
except:
print("NeuroKit error: eeg_add_events(): Wrong events_channel name provided.")
# Find event onsets
events = find_events(events_channel, treshold=treshold, cut=cut, time_index=time_index, number=number, after=after, before=before, min_duration=min_duration)
# Create mne compatible events
events, event_id = eeg_create_mne_events(events["onsets"], conditions)
# Add them
raw.add_events(events)
return(raw, events, event_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_to_all_evokeds(all_epochs, conditions=None):
""" Convert all_epochs to all_evokeds. DOCS INCOMPLETE :( """ |
if conditions is None:
# Get event_id
conditions = {}
for participant, epochs in all_epochs.items():
conditions.update(epochs.event_id)
all_evokeds = {}
for participant, epochs in all_epochs.items():
evokeds = {}
for cond in conditions:
try:
evokeds[cond] = epochs[cond].average()
except KeyError:
pass
all_evokeds[participant] = evokeds
return(all_evokeds) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_to_df(eeg, index=None, include="all", exclude=None, hemisphere="both", central=True):
""" Convert mne Raw or Epochs object to dataframe or dict of dataframes. DOCS INCOMPLETE :( """ |
if isinstance(eeg, mne.Epochs):
data = {}
if index is None:
index = range(len(eeg))
for epoch_index, epoch in zip(index, eeg.get_data()):
epoch = pd.DataFrame(epoch.T)
epoch.columns = eeg.ch_names
epoch.index = eeg.times
selection = eeg_select_electrodes(eeg, include=include, exclude=exclude, hemisphere=hemisphere, central=central)
data[epoch_index] = epoch[selection]
else: # it might be a Raw object
data = eeg.get_data().T
data = pd.DataFrame(data)
data.columns = eeg.ch_names
data.index = eeg.times
return(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_polarbar(scores, labels=None, labels_size=15, colors="default", distribution_means=None, distribution_sds=None, treshold=1.28, fig_size=(15, 15)):
""" Polar bar chart. Parameters scores : list or dict Scores to plot. labels : list List of labels to be used for ticks. labels_size : int Label's size. colors : list or str List of colors or "default". distribution_means : int or list List of means to add a range ribbon. distribution_sds : int or list List of SDs to add a range ribbon. treshold : float Limits of the range ribbon (in terms of standart deviation from mean). fig_size : tuple Figure size. Returns plot : matplotlig figure The figure. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - matplotlib - numpy """ |
# Sanity check
if isinstance(scores, dict):
if labels is None:
labels = list(scores.keys())
try:
scores = [scores[key] for key in labels]
except KeyError:
print("NeuroKit Error: plot_polarbar(): labels and scores keys not matching. Recheck them.")
# Parameters
if colors == "default":
if len(scores) < 9:
colors = ["#f44336", "#9C27B0", "#3F51B5","#03A9F4", "#009688", "#8BC34A", "#FFEB3B", "#FF9800", "#795548"]
else:
colors = None
if labels is None:
labels = range(len(scores))
N = len(scores)
theta = np.linspace(0.0, -2 * np.pi, N, endpoint=False)
width = 2 * np.pi / N
# Main
plot = plt.figure(figsize=fig_size)
layer1 = plot.add_subplot(111, projection="polar")
bars1 = layer1.bar(theta+np.pi/len(scores), scores, width=width, bottom=0.0)
layer1.yaxis.set_ticks(range(11))
layer1.yaxis.set_ticklabels([])
layer1.xaxis.set_ticks(theta+np.pi/len(scores))
layer1.xaxis.set_ticklabels(labels, fontsize=labels_size)
for index, bar in enumerate(bars1):
if colors is not None:
bar.set_facecolor(colors[index])
bar.set_alpha(1)
# Layer 2
if distribution_means is not None and distribution_sds is not None:
# Sanity check
if isinstance(distribution_means, int):
distribution_means = [distribution_means]*N
if isinstance(distribution_sds, int):
distribution_sds = [distribution_sds]*N
# TODO: add conversion if these parameters are dicts
bottoms, tops = normal_range(np.array(distribution_means), np.array(distribution_sds), treshold=treshold)
tops = tops - bottoms
layer2 = plot.add_subplot(111, polar=True)
bars2 = layer2.bar(theta, tops, width=width, bottom=bottoms, linewidth=0)
layer2.xaxis.set_ticks(theta+np.pi/len(scores))
layer2.xaxis.set_ticklabels(labels, fontsize=labels_size)
for index, bar in enumerate(bars2):
bar.set_facecolor("#607D8B")
bar.set_alpha(0.3)
return(plot) |
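The `normal_range` helper used for the ribbon is not shown in this excerpt; a plausible implementation, assuming it returns mean ± treshold·SD bounds (the name and exact formula are an assumption here):

```python
import numpy as np

def normal_range(mean, sd, treshold=1.28):
    """Assumed ribbon bounds: mean +/- treshold*sd (1.28 SD covers ~80% of a normal dist.)."""
    bottoms = mean - treshold * sd
    tops = mean + treshold * sd
    return bottoms, tops

bottoms, tops = normal_range(np.array([5.0, 6.0]), np.array([1.0, 2.0]))
```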
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def feature_reduction(data, method, n_features):
""" Feature reduction. Parameters NA Returns NA Example NA Authors Dominique Makowski Dependencies - sklearn """ |
if method == "PCA":
feature_red_method = sklearn.decomposition.PCA(n_components=n_features)
data_processed = feature_red_method.fit_transform(data)
elif method == "agglom":
feature_red_method = sklearn.cluster.FeatureAgglomeration(n_clusters=n_features)
data_processed = feature_red_method.fit_transform(data)
elif method == "ica":
feature_red_method = sklearn.decomposition.FastICA(n_components=n_features)
data_processed = feature_red_method.fit_transform(data)
elif method == "kernelPCA":
feature_red_method = sklearn.decomposition.KernelPCA(n_components=n_features, kernel='linear')
data_processed = feature_red_method.fit_transform(data)
elif method == "kernelPCA":
feature_red_method = sklearn.decomposition.KernelPCA(n_components=n_features, kernel='linear')
data_processed = feature_red_method.fit_transform(data)
elif method == "sparsePCA":
feature_red_method = sklearn.decomposition.SparsePCA(n_components=n_features)
data_processed = feature_red_method.fit_transform(data)
elif method == "incrementalPCA":
feature_red_method = sklearn.decomposition.IncrementalPCA(n_components=n_features)
data_processed = feature_red_method.fit_transform(data)
elif method == "nmf":
if np.min(data) < 0:
data -= np.min(data)
feature_red_method = sklearn.decomposition.NMF(n_components=n_features)
data_processed = feature_red_method.fit_transform(data)
else:
feature_red_method = None
data_processed = data.copy()
return(data_processed) |
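For intuition, the "PCA" branch boils down to centering the data and projecting it onto its top singular vectors; a minimal numpy-only sketch of what `sklearn.decomposition.PCA.fit_transform` computes:

```python
import numpy as np

def pca_reduce(data, n_features):
    """Minimal PCA via SVD: center columns, project onto the top right singular vectors."""
    X = np.asarray(data, dtype=float)
    X = X - X.mean(axis=0)                        # center each feature column
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_features].T                  # scores on the top components

rng = np.random.default_rng(0)
reduced = pca_reduce(rng.normal(size=(50, 5)), 2)
```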
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def binarize_signal(signal, treshold="auto", cut="higher"):
""" Binarize a channel based on a continuous channel. Parameters signal = array or list The signal channel. treshold = float The treshold value by which to select the events. If "auto", takes the value between the max and the min. cut = str "higher" or "lower", define the events as above or under the treshold. For photosensors, a white screen corresponds usually to higher values. Therefore, if your events were signalled by a black colour, events values would be the lower ones, and you should set the cut to "lower". Returns list binary_signal Example Authors - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ Dependencies None """ |
if treshold == "auto":
treshold = np.min(np.array(signal)) + (np.max(np.array(signal)) - np.min(np.array(signal)))/2
signal = list(signal)
binary_signal = []
for i in range(len(signal)):
if cut == "higher":
if signal[i] > treshold:
binary_signal.append(1)
else:
binary_signal.append(0)
else:
if signal[i] < treshold:
binary_signal.append(1)
else:
binary_signal.append(0)
return(binary_signal) |
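The element-wise loop above can be vectorized; a minimal numpy sketch, with the "auto" threshold taken as the midpoint between the signal's min and max as the docstring describes:

```python
import numpy as np

def binarize(signal, treshold="auto", cut="higher"):
    signal = np.asarray(signal, dtype=float)
    if treshold == "auto":
        # midpoint between the min and the max of the signal
        treshold = signal.min() + (signal.max() - signal.min()) / 2
    if cut == "higher":
        return (signal > treshold).astype(int).tolist()
    return (signal < treshold).astype(int).tolist()

print(binarize([0, 0, 10, 10, 0]))               # [0, 0, 1, 1, 0]
print(binarize([0, 0, 10, 10, 0], cut="lower"))  # [1, 1, 0, 0, 1]
```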
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def localize_events(events_channel, treshold="auto", cut="higher", time_index=None):
""" Find the onsets of all events based on a continuous signal. Parameters events_channel = array or list The trigger channel. treshold = float The treshold value by which to select the events. If "auto", takes the value between the max and the min. cut = str "higher" or "lower", define the events as above or under the treshold. For photosensors, a white screen corresponds usually to higher values. Therefore, if your events were signalled by a black colour, events values would be the lower ones, and you should set the cut to "lower". time_index = array or list Add a corresponding datetime index, will return an additional array with the onsets as datetimes. Returns dict dict containing the onsets, the duration and the time index if provided. Example Authors - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ Dependencies None """
events_channel = binarize_signal(events_channel, treshold=treshold, cut=cut)
events = {"onsets":[], "durations":[]}
if time_index is not None:
events["onsets_time"] = []
index = 0
for key, g in (groupby(events_channel)):
duration = len(list(g))
if key == 1:
events["onsets"].append(index)
events["durations"].append(duration)
if time_index is not None:
events["onsets_time"].append(time_index[index])
index += duration
return(events) |
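A self-contained sketch of the run-length logic: `itertools.groupby` collapses the binary channel into consecutive runs, and only the runs of 1s are recorded as events (onset index and duration in samples):

```python
from itertools import groupby

def localize(binary_channel):
    events = {"onsets": [], "durations": []}
    index = 0
    for key, group in groupby(binary_channel):
        duration = len(list(group))
        if key == 1:                       # a run of 1s is one event
            events["onsets"].append(index)
            events["durations"].append(duration)
        index += duration
    return events

print(localize([0, 0, 1, 1, 1, 0, 1, 0]))
```

For this toy channel the two events start at samples 2 and 6, lasting 3 and 1 samples.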
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_events(events_channel, treshold="auto", cut="higher", time_index=None, number="all", after=0, before=None, min_duration=1):
""" Find and select events based on a continuous signal. Parameters events_channel : array or list The trigger channel. treshold : float The treshold value by which to select the events. If "auto", takes the value between the max and the min. cut : str "higher" or "lower", define the events as above or under the treshold. For photosensors, a white screen corresponds usually to higher values. Therefore, if your events were signalled by a black colour, events values would be the lower ones, and you should set the cut to "lower". time_index : array or list Add a corresponding datetime index, will return an additional array with the onsets as datetimes. number : str or int How many events should it select. after : int If number different than "all", then at what time should it start selecting the events. before : int If number different than "all", before what time should it select the events. min_duration : int The minimum duration of an event (in timepoints). Returns events : dict Dict containing events onsets and durations. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - numpy """
events = localize_events(events_channel, treshold=treshold, cut=cut, time_index=time_index)
# Warning when no events detected
if len(events["onsets"]) == 0:
print("NeuroKit warning: find_events(): No events found. Check your events_channel or adjust treshold.")
return()
# Remove less than duration
toremove = []
for event in range(len(events["onsets"])):
if events["durations"][event] < min_duration:
toremove.append(False)
else:
toremove.append(True)
events["onsets"] = np.array(events["onsets"])[np.array(toremove)]
events["durations"] = np.array(events["durations"])[np.array(toremove)]
if time_index is not None:
events["onsets_time"] = np.array(events["onsets_time"])[np.array(toremove)]
# Before and after
if isinstance(number, int):
after_times = []
after_onsets = []
after_length = []
before_times = []
before_onsets = []
before_length = []
if after is not None:
if "onsets_time" not in events or len(events["onsets_time"]) == 0:
events["onsets_time"] = np.array(events["onsets"])
else:
events["onsets_time"] = np.array(events["onsets_time"])
after_onsets = list(np.array(events["onsets"])[events["onsets_time"]>after])[:number]
after_times = list(np.array(events["onsets_time"])[events["onsets_time"]>after])[:number]
after_length = list(np.array(events["durations"])[events["onsets_time"]>after])[:number]
if before is not None:
if "onsets_time" not in events or len(events["onsets_time"]) == 0:
events["onsets_time"] = np.array(events["onsets"])
else:
events["onsets_time"] = np.array(events["onsets_time"])
before_onsets = list(np.array(events["onsets"])[events["onsets_time"]<before])[:number]
before_times = list(np.array(events["onsets_time"])[events["onsets_time"]<before])[:number]
before_length = list(np.array(events["durations"])[events["onsets_time"]<before])[:number]
events["onsets"] = before_onsets + after_onsets
events["onsets_time"] = before_times + after_times
events["durations"] = before_length + after_length
return(events) |
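The `min_duration` pruning above builds a boolean list element by element; with made-up onsets and durations, a single numpy mask expresses the same filter:

```python
import numpy as np

onsets = np.array([5, 40, 90])      # hypothetical event onsets (samples)
durations = np.array([1, 10, 3])    # corresponding durations (samples)
min_duration = 2

keep = durations >= min_duration    # same criterion as the loop above
onsets = onsets[keep]
durations = durations[keep]

print(onsets.tolist())   # [40, 90]
```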
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_events_in_signal(signal, events_onsets, color="red", marker=None):
""" Plot events in signal. Parameters signal : array or DataFrame Signal array (can be a dataframe with many signals). events_onsets : list or ndarray Events location. color : int or list Marker color. marker : marker or list of markers (for possible marker values, see: https://matplotlib.org/api/markers_api.html) Marker type. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ - `Renatosc <https://github.com/renatosc/>`_ *Dependencies* - matplotlib - pandas """ |
df = pd.DataFrame(signal)
ax = df.plot()
def plotOnSignal(x, color, marker=None):
if (marker is None):
plt.axvline(x=x, color=color)
else:
plt.plot(x, signal[x], marker, color=color)
events_onsets = np.array(events_onsets)
try:
len(events_onsets[0])
for index, dim in enumerate(events_onsets):
for event in dim:
plotOnSignal(x=event,
color=color[index] if isinstance(color, list) else color,
marker=marker[index] if isinstance(marker, list) else marker)
except TypeError:
for event in events_onsets:
plotOnSignal(x=event,
color=color[0] if isinstance(color, list) else color,
marker=marker[0] if isinstance(marker, list) else marker)
return ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eda_scr(signal, sampling_rate=1000, treshold=0.1, method="fast"):
""" Skin-Conductance Responses extraction algorithm. Parameters signal : list or array EDA signal array. sampling_rate : int Sampling rate (samples/second). treshold : float SCR minimum treshold (in terms of signal standart deviation). method : str "fast" or "slow". Either use a gradient-based approach or a local extrema one. Returns onsets, peaks, amplitudes, recoveries : lists SCRs features. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - biosppy - numpy - pandas *See Also* - BioSPPy: https://github.com/PIA-Group/BioSPPy References - Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, 42(3), 419-427. - Gamboa, H. (2008). Multi-Modal Behavioral Biometrics Based on HCI and Electrophysiology (Doctoral dissertation, PhD thesis, Universidade Técnica de Lisboa, Instituto Superior Técnico). """ |
# Processing
# ===========
if method == "slow":
# Compute gradient (sort of derivative)
gradient = np.gradient(signal)
# Smoothing
size = int(0.1 * sampling_rate)
smooth, _ = biosppy.tools.smoother(signal=gradient, kernel='bartlett', size=size, mirror=True)
# Find zero-crossings
zeros, = biosppy.tools.zero_cross(signal=smooth, detrend=True)
# Separate onsets and peaks
onsets = []
peaks = []
for i in zeros:
if smooth[i+1] > smooth[i-1]:
onsets.append(i)
else:
peaks.append(i)
peaks = np.array(peaks)
onsets = np.array(onsets)
else:
# find extrema
peaks, _ = biosppy.tools.find_extrema(signal=signal, mode='max')
onsets, _ = biosppy.tools.find_extrema(signal=signal, mode='min')
# Keep only pairs
peaks = peaks[peaks > onsets[0]]
onsets = onsets[onsets < peaks[-1]]
# Artifact Treatment
# ====================
# Compute rising times
risingtimes = peaks-onsets
risingtimes = risingtimes/sampling_rate*1000
peaks = peaks[risingtimes > 100]
onsets = onsets[risingtimes > 100]
# Compute amplitudes
amplitudes = signal[peaks]-signal[onsets]
# Remove low amplitude variations
mask = amplitudes > np.std(signal)*treshold
peaks = peaks[mask]
onsets = onsets[mask]
amplitudes = amplitudes[mask]
# Recovery moments
recoveries = []
for x, peak in enumerate(peaks):
try:
window = signal[peak:onsets[x+1]]
except IndexError:
window = signal[peak:]
recovery_amp = signal[peak]-amplitudes[x]/2
try:
smaller = find_closest_in_list(recovery_amp, window, "smaller")
recovery_pos = peak + list(window).index(smaller)
recoveries.append(recovery_pos)
except ValueError:
recoveries.append(np.nan)
recoveries = np.array(recoveries)
return(onsets, peaks, amplitudes, recoveries) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_response(self, response, value):
""" Add response to staircase. Parameters response : int or bool 0 or 1. value : int or float Signal corresponding to response. """ |
if value != "stop":
self.X = pd.concat([self.X, pd.DataFrame({"Signal":[value]})])
self.y = np.array(list(self.y) + [response])
if len(set(list(self.y))) > 1:
self.model = self.fit_model(self.X , self.y) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_epochs(data, events_onsets, sampling_rate=1000, duration=1, onset=0, index=None):
""" Epoching a dataframe. Parameters data : pandas.DataFrame Data*time. events_onsets : list A list of event onsets indices. sampling_rate : int Sampling rate (samples/second). duration : int or list Duration(s) of each epoch(s) (in seconds). onset : int Epoch onset(s) relative to events_onsets (in seconds). index : list Events names in order that will be used as index. Must contains uniques names. If not provided, will be replaced by event number. Returns epochs : dict dict containing all epochs. Example Notes *Authors* - Dominique Makowski (https://github.com/DominiqueMakowski) *Dependencies* - numpy """ |
# Convert ints to arrays if needed
if isinstance(duration, list) or isinstance(duration, np.ndarray):
duration = np.array(duration)
else:
duration = np.array([duration]*len(events_onsets))
if isinstance(onset, list) or isinstance(onset, np.ndarray):
onset = np.array(onset)
else:
onset = np.array([onset]*len(events_onsets))
if isinstance(data, list) or isinstance(data, np.ndarray) or isinstance(data, pd.Series):
data = pd.DataFrame({"Signal": list(data)})
# Store durations
duration_in_s = duration.copy()
onset_in_s = onset.copy()
# Convert to timepoints
duration = duration*sampling_rate
onset = onset*sampling_rate
# Create the index
if index is None:
index = list(range(len(events_onsets)))
else:
if len(list(set(index))) != len(index):
print("NeuroKit Warning: create_epochs(): events_names does not contain uniques names, replacing them by numbers.")
index = list(range(len(events_onsets)))
else:
index = list(index)
# Create epochs
epochs = {}
for event, event_onset in enumerate(events_onsets):
epoch_onset = int(event_onset + onset[event])
epoch_end = int(event_onset+duration[event]+1)
epoch = data[epoch_onset:epoch_end].copy()
epoch.index = np.linspace(start=onset_in_s[event], stop=duration_in_s[event], num=len(epoch), endpoint=True)
relative_time = np.linspace(start=onset[event], stop=duration[event], num=len(epoch), endpoint=True).astype(int).tolist()
absolute_time = np.linspace(start=epoch_onset, stop=epoch_end, num=len(epoch), endpoint=True).astype(int).tolist()
epoch["Epoch_Relative_Time"] = relative_time
epoch["Epoch_Absolute_Time"] = absolute_time
epochs[index[event]] = epoch
return(epochs) |
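The slicing arithmetic can be illustrated with a toy signal (all numbers below are made up): each epoch spans from `event_onset + onset*sampling_rate` to `event_onset + duration*sampling_rate + 1`, and its index is rescaled to seconds relative to the event.

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"Signal": np.arange(100)})
sampling_rate = 10             # samples/second
events_onsets = [10, 50]
onset, duration = -1, 2        # seconds relative to each event

epochs = {}
for i, event_onset in enumerate(events_onsets):
    start = int(event_onset + onset * sampling_rate)
    end = int(event_onset + duration * sampling_rate + 1)
    epoch = data.iloc[start:end].copy()
    # index in seconds, from onset to duration, as in the function above
    epoch.index = np.linspace(onset, duration, num=len(epoch), endpoint=True)
    epochs[i] = epoch

print(len(epochs[0]))   # 31 samples: 1 s before + 2 s after + 1
```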
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interpolate(values, value_times, sampling_rate=1000):
""" 3rd order spline interpolation. Parameters values : dataframe Values. value_times : list Time indices of values. sampling_rate : int Sampling rate (samples/second). Returns signal : pd.Series An array containing the values indexed by time. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - scipy - pandas """ |
# Preprocessing
initial_index = value_times[0]
value_times = np.array(value_times) - initial_index
# fit a 3rd degree spline on the data.
spline = scipy.interpolate.splrep(x=value_times, y=values, k=3, s=0) # s=0 guarantees that it will pass through ALL the given points
x = np.arange(0, value_times[-1], 1)
# Get the values indexed per time
signal = scipy.interpolate.splev(x=x, tck=spline, der=0)
# Transform to series
signal = pd.Series(signal)
signal.index = np.array(np.arange(initial_index, initial_index+len(signal), 1))
return(signal) |
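Because `s=0` forces an interpolating spline, the resampled series reproduces the original values exactly at their time indices. A sketch with toy values:

```python
import numpy as np
import scipy.interpolate

value_times = np.array([0, 10, 20, 30, 40])           # hypothetical sample times
values = np.array([800., 850., 820., 790., 810.])     # hypothetical values

spline = scipy.interpolate.splrep(x=value_times, y=values, k=3, s=0)
x = np.arange(0, value_times[-1], 1)
signal = scipy.interpolate.splev(x=x, tck=spline)

print(signal[10])  # the original value 850 at t=10
```

Note that a cubic spline (`k=3`) needs at least four data points.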
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_peaks(signal):
""" Locate peaks based on derivative. Parameters signal : list or array Signal. Returns peaks : array An array containing the peak indices. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - scipy - pandas """ |
derivative = np.gradient(signal, 2)
peaks = np.where(np.diff(np.sign(derivative)))[0]
return(peaks) |
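Note that the sign-change test flags every extremum, minima as well as maxima. On a clean sine over [0, 4π] the four extrema (two peaks, two troughs) are all returned:

```python
import numpy as np

signal = np.sin(np.linspace(0, 4 * np.pi, 200))
derivative = np.gradient(signal)
# indices where the derivative changes sign -> maxima AND minima
extrema = np.where(np.diff(np.sign(derivative)))[0]

print(len(extrema))  # 4
```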
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eeg_name_frequencies(freqs):
""" Name frequencies according to standart classifications. Parameters freqs : list or numpy.array list of floats containing frequencies to classify. Returns freqs_names : list Named frequencies Example Notes *Details* - Delta: 1-3Hz - Theta: 4-7Hz - Alpha1: 8-9Hz - Alpha2: 10-12Hz - Beta1: 13-17Hz - Beta2: 18-30Hz - Gamma1: 31-40Hz - Gamma2: 41-50Hz - Mu: 8-13Hz *Authors* - Dominique Makowski (https://github.com/DominiqueMakowski) References - None """ |
freqs = list(freqs)
freqs_names = []
for freq in freqs:
if freq < 1:
freqs_names.append("UltraLow")
elif freq <= 3:
freqs_names.append("Delta")
elif freq <= 7:
freqs_names.append("Theta")
elif freq <= 9:
freqs_names.append("Alpha1/Mu")
elif freq <= 12:
freqs_names.append("Alpha2/Mu")
elif freq <= 13:
freqs_names.append("Beta1/Mu")
elif freq <= 17:
freqs_names.append("Beta1")
elif freq <= 30:
freqs_names.append("Beta2")
elif freq <= 40:
freqs_names.append("Gamma1")
elif freq <= 50:
freqs_names.append("Gamma2")
else:
freqs_names.append("UltraHigh")
return(freqs_names) |
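The elif chain above is equivalent to a lookup over inclusive upper bounds; a compact table-driven sketch of the same classification:

```python
def name_frequency(freq):
    if freq < 1:
        return "UltraLow"
    # (inclusive upper bound in Hz, band name), in ascending order
    bands = [(3, "Delta"), (7, "Theta"), (9, "Alpha1/Mu"), (12, "Alpha2/Mu"),
             (13, "Beta1/Mu"), (17, "Beta1"), (30, "Beta2"),
             (40, "Gamma1"), (50, "Gamma2")]
    for upper, name in bands:
        if freq <= upper:
            return name
    return "UltraHigh"

print([name_frequency(f) for f in [0.5, 2, 10, 25, 60]])
```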
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normal_range(mean, sd, treshold=1.28):
""" Returns a bottom and a top limit on a normal distribution portion based on a treshold. Parameters treshold : float maximum deviation (in terms of standard deviation). Rule of thumb of a gaussian distribution: 2.58 = keeping 99%, 2.33 = keeping 98%, 1.96 = 95% and 1.28 = keeping 90%. Returns (bottom, top) : tuple Lower and higher range. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ """
bottom = mean - sd*treshold
top = mean + sd*treshold
return(bottom, top) |
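For a normal distribution, ±1.28 standard deviations keeps roughly the central 90%. For instance, with IQ-like scores (mean 100, SD 15, values chosen only for illustration):

```python
def normal_range(mean, sd, treshold=1.28):
    # symmetric interval: mean +/- treshold standard deviations
    return (mean - sd * treshold, mean + sd * treshold)

bottom, top = normal_range(100, 15)   # ~90% of a normal population
print(bottom, top)  # 80.8 119.2
```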
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_following_duplicates(array):
""" Find the duplicates that are following themselves. Parameters array : list or ndarray A list containing duplicates. Returns uniques : list A list containing True for each unique and False for following duplicates. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - numpy """ |
array = array[:]
uniques = []
for i in range(len(array)):
if i == 0:
uniques.append(True)
else:
if array[i] == array[i-1]:
uniques.append(False)
else:
uniques.append(True)
return(uniques) |
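The loop above can be written as a single comparison of each element against its predecessor; a self-contained sketch:

```python
def following_duplicates(array):
    # True for the first element of each run, False for immediate repeats
    if len(array) == 0:
        return []
    return [True] + [array[i] != array[i - 1] for i in range(1, len(array))]

print(following_duplicates(["a", "a", "b", "b", "b", "a"]))
```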
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_closest_in_list(number, array, direction="both", strictly=False):
""" Find the closest number in the array from x. Parameters number : float The number. array : list The list to look in. direction : str "both" for smaller or greater, "greater" for only greater numbers and "smaller" for the closest smaller. strictly : bool False for stricly superior or inferior or True for including equal. Returns closest : int The closest number in the array. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ """ |
if direction == "both":
closest = min(array, key=lambda x:abs(x-number))
if direction == "smaller":
if strictly is True:
closest = max(x for x in array if x < number)
else:
closest = max(x for x in array if x <= number)
if direction == "greater":
if strictly is True:
closest = min(filter(lambda x: x > number, array))
else:
closest = min(filter(lambda x: x >= number, array))
return(closest) |
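The three directions condense to one expression each; a compact sketch of the same lookup:

```python
def closest(number, array, direction="both", strictly=False):
    if direction == "both":
        return min(array, key=lambda x: abs(x - number))
    if direction == "smaller":
        return max(x for x in array if (x < number if strictly else x <= number))
    return min(x for x in array if (x > number if strictly else x >= number))

print(closest(5, [1, 3, 6, 9]))                        # 6
print(closest(5, [1, 3, 6, 9], direction="smaller"))   # 3
print(closest(6, [1, 3, 6, 9], direction="greater"))   # 6
```

As in the original, an empty candidate set (e.g. no value strictly greater) raises a `ValueError`.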
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def emg_process(emg, sampling_rate=1000, emg_names=None, envelope_freqs=[10, 400], envelope_lfreq=4, activation_treshold="default", activation_n_above=0.25, activation_n_below=1):
""" Automated processing of EMG signal. Parameters emg : list, array or DataFrame EMG signal array. Can include multiple channels. sampling_rate : int Sampling rate (samples/second). emg_names : list List of EMG channel names. envelope_freqs : list [fc_h, fc_l], optional cutoff frequencies for the band-pass filter (in Hz). envelope_lfreq : number, optional cutoff frequency for the low-pass filter (in Hz). activation_treshold : float minimum amplitude of `x` to detect. activation_n_above : float minimum continuous time (in s) greater than or equal to `threshold` to detect (but see the parameter `n_below`). activation_n_below : float minimum time (in s) below `threshold` that will be ignored in the detection of `x` >= `threshold`. Returns processed_emg : dict Dict containing processed EMG features. Contains the EMG raw signal, the filtered signal and pulse onsets. This function is mainly a wrapper for the biosppy.emg.emg() function. Credits go to its authors. Example Notes *Authors* - Dominique Makowski (https://github.com/DominiqueMakowski) *Dependencies* - biosppy - numpy - pandas *See Also* - BioSPPy: https://github.com/PIA-Group/BioSPPy References - None """ |
if emg_names is None:
if isinstance(emg, pd.DataFrame):
emg_names = emg.columns.values
emg = np.array(emg)
if len(np.shape(emg)) == 1:
emg = np.array(pd.DataFrame(emg))
if emg_names is None:
if np.shape(emg)[1]>1:
emg_names = []
for index in range(np.shape(emg)[1]):
emg_names.append("EMG_" + str(index))
else:
emg_names = ["EMG"]
processed_emg = {"df": pd.DataFrame()}
for index, emg_chan in enumerate(emg.T):
# Store Raw signal
processed_emg["df"][emg_names[index] + "_Raw"] = emg_chan
# Compute several features using biosppy
biosppy_emg = dict(biosppy.emg.emg(emg_chan, sampling_rate=sampling_rate, show=False))
# Store EMG pulse onsets
pulse_onsets = np.array([np.nan]*len(emg))
if len(biosppy_emg['onsets']) > 0:
pulse_onsets[biosppy_emg['onsets']] = 1
processed_emg["df"][emg_names[index] + "_Pulse_Onsets"] = pulse_onsets
processed_emg["df"][emg_names[index] + "_Filtered"] = biosppy_emg["filtered"]
processed_emg[emg_names[index]] = {}
processed_emg[emg_names[index]]["EMG_Pulse_Onsets"] = biosppy_emg['onsets']
# Envelope
envelope = emg_linear_envelope(biosppy_emg["filtered"], sampling_rate=sampling_rate, freqs=envelope_freqs, lfreq=envelope_lfreq)
processed_emg["df"][emg_names[index] + "_Envelope"] = envelope
# Activation
if activation_treshold == "default":
treshold_value = np.std(envelope)
else:
treshold_value = activation_treshold
processed_emg["df"][emg_names[index] + "_Activation"] = emg_find_activation(envelope, sampling_rate=sampling_rate, threshold=treshold_value, n_above=activation_n_above, n_below=activation_n_below)
return(processed_emg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def emg_linear_envelope(emg, sampling_rate=1000, freqs=[10, 400], lfreq=4):
r"""Calculate the linear envelope of a signal. Parameters emg : array raw EMG signal. sampling_rate : int Sampling rate (samples/second). freqs : list [fc_h, fc_l], optional cutoff frequencies for the band-pass filter (in Hz). lfreq : number, optional cutoff frequency for the low-pass filter (in Hz). Returns ------- envelope : array linear envelope of the signal. Notes ----- *Authors* - Marcos Duarte *See Also* See this notebook [1]_. References .. [1] https://github.com/demotu/BMC/blob/master/notebooks/Electromyography.ipynb """ |
emg = emg_tkeo(emg)
if np.size(freqs) == 2:
# band-pass filter
b, a = scipy.signal.butter(2, np.array(freqs)/(sampling_rate/2.), btype = 'bandpass')
emg = scipy.signal.filtfilt(b, a, emg)
if np.size(lfreq) == 1:
# full-wave rectification
envelope = abs(emg)
# low-pass Butterworth filter
b, a = scipy.signal.butter(2, np.array(lfreq)/(sampling_rate/2.), btype = 'low')
envelope = scipy.signal.filtfilt(b, a, envelope)
return (envelope) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def emg_find_activation(envelope, sampling_rate=1000, threshold=0, n_above=0.25, n_below=1):
"""Detects onset in data based on amplitude threshold. Parameters envelope : array Linear envelope of EMG signal. sampling_rate : int Sampling rate (samples/second). threshold : float minimum amplitude of `x` to detect. n_above : float minimum continuous time (in s) greater than or equal to `threshold` to detect (but see the parameter `n_below`). n_below : float minimum time (in s) below `threshold` that will be ignored in the detection of `x` >= `threshold`. Returns ------- activation : array With 1 when muscle activated and 0 when not. Notes ----- You might have to tune the parameters according to the signal-to-noise characteristic of the data. See this IPython Notebook [1]_. References .. [1] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/DetectOnset.ipynb """ |
n_above = n_above*sampling_rate
n_below = n_below*sampling_rate
envelope = np.atleast_1d(envelope).astype('float64')
# deal with NaN's (by definition, NaN's are not greater than threshold)
envelope[np.isnan(envelope)] = -np.inf
# indices of data greater than or equal to threshold
inds = np.nonzero(envelope >= threshold)[0]
if inds.size:
# initial and final indexes of continuous data
inds = np.vstack((inds[np.diff(np.hstack((-np.inf, inds))) > n_below+1], \
inds[np.diff(np.hstack((inds, np.inf))) > n_below+1])).T
# indexes of continuous data longer than or equal to n_above
inds = inds[inds[:, 1]-inds[:, 0] >= n_above-1, :]
if not inds.size:
inds = np.array([]) # standardize inds shape
inds = np.array(inds)
activation = np.array([0]*len(envelope))
for i in inds:
activation[i[0]:i[1]] = 1
return (activation) |
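The run-merging logic above can be traced on a tiny made-up envelope (here `n_below` is already in samples, not seconds): the short dip at index 5 is merged into one activation run, and the run end is made inclusive in this sketch.

```python
import numpy as np

envelope = np.array([0., 0., .5, .6, .7, .1, .8, .9, 0., 0.])
threshold, n_below = 0.4, 1

inds = np.nonzero(envelope >= threshold)[0]            # [2 3 4 6 7]
# a run starts where the gap to the previous supra-threshold sample exceeds n_below+1
starts = inds[np.diff(np.hstack((-np.inf, inds))) > n_below + 1]
# a run ends where the gap to the next supra-threshold sample exceeds n_below+1
ends = inds[np.diff(np.hstack((inds, np.inf))) > n_below + 1]

activation = np.zeros(len(envelope), dtype=int)
for start, end in zip(starts, ends):
    activation[start:end + 1] = 1

print(activation.tolist())  # [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
```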
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_find_peaks(signal, sampling_rate=1000):
""" Find R peaks indices on the ECG channel. Parameters signal : list or ndarray ECG signal (preferably filtered). sampling_rate : int Sampling rate (samples/second). Returns rpeaks : list List of R-peaks location indices. Example Notes *Authors* - the bioSSPy dev team (https://github.com/PIA-Group/BioSPPy) *Dependencies* - biosppy *See Also* - BioSPPY: https://github.com/PIA-Group/BioSPPy """ |
rpeaks, = biosppy.ecg.hamilton_segmenter(np.array(signal), sampling_rate=sampling_rate)
rpeaks, = biosppy.ecg.correct_rpeaks(signal=np.array(signal), rpeaks=rpeaks, sampling_rate=sampling_rate, tol=0.05)
return(rpeaks) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_wave_detector(ecg, rpeaks):
""" Returns the localization of the P, Q, T waves. This function needs massive help! Parameters ecg : list or ndarray ECG signal (preferably filtered). rpeaks : list or ndarray R peaks localization. Returns ecg_waves : dict Contains wave peaks location indices. Example Notes *Details* - **Cardiac Cycle**: A typical ECG showing a heartbeat consists of a P wave, a QRS complex and a T wave.The P wave represents the wave of depolarization that spreads from the SA-node throughout the atria. The QRS complex reflects the rapid depolarization of the right and left ventricles. Since the ventricles are the largest part of the heart, in terms of mass, the QRS complex usually has a much larger amplitude than the P-wave. The T wave represents the ventricular repolarization of the ventricles. On rare occasions, a U wave can be seen following the T wave. The U wave is believed to be related to the last remnants of ventricular repolarization. *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ """ |
q_waves = []
p_waves = []
q_waves_starts = []
s_waves = []
t_waves = []
t_waves_starts = []
t_waves_ends = []
for index, rpeak in enumerate(rpeaks[:-3]):
try:
epoch_before = np.array(ecg)[int(rpeaks[index-1]):int(rpeak)]
epoch_before = epoch_before[int(len(epoch_before)/2):len(epoch_before)]
epoch_before = list(reversed(epoch_before))
q_wave_index = np.min(find_peaks(epoch_before))
q_wave = rpeak - q_wave_index
p_wave_index = q_wave_index + np.argmax(epoch_before[q_wave_index:])
p_wave = rpeak - p_wave_index
inter_pq = epoch_before[q_wave_index:p_wave_index]
inter_pq_derivative = np.gradient(inter_pq, 2)
q_start_index = find_closest_in_list(len(inter_pq_derivative)/2, find_peaks(inter_pq_derivative))
q_start = q_wave - q_start_index
q_waves.append(q_wave)
p_waves.append(p_wave)
q_waves_starts.append(q_start)
except ValueError:
pass
except IndexError:
pass
try:
epoch_after = np.array(ecg)[int(rpeak):int(rpeaks[index+1])]
epoch_after = epoch_after[0:int(len(epoch_after)/2)]
s_wave_index = np.min(find_peaks(epoch_after))
s_wave = rpeak + s_wave_index
t_wave_index = s_wave_index + np.argmax(epoch_after[s_wave_index:])
t_wave = rpeak + t_wave_index
inter_st = epoch_after[s_wave_index:t_wave_index]
inter_st_derivative = np.gradient(inter_st, 2)
t_start_index = find_closest_in_list(len(inter_st_derivative)/2, find_peaks(inter_st_derivative))
t_start = s_wave + t_start_index
t_end = np.min(find_peaks(epoch_after[t_wave_index:]))
t_end = t_wave + t_end
s_waves.append(s_wave)
t_waves.append(t_wave)
t_waves_starts.append(t_start)
t_waves_ends.append(t_end)
except ValueError:
pass
except IndexError:
pass
# TODO: find the beginning of the Q wave and the end of the T wave so we can extract the QT interval
ecg_waves = {"T_Waves": t_waves,
"P_Waves": p_waves,
"Q_Waves": q_waves,
"S_Waves": s_waves,
"Q_Waves_Onsets": q_waves_starts,
"T_Waves_Onsets": t_waves_starts,
"T_Waves_Ends": t_waves_ends}
return(ecg_waves) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ecg_systole(ecg, rpeaks, t_waves_ends):
""" Returns the localization of systoles and diastoles. Parameters ecg : list or ndarray ECG signal (preferably filtered). rpeaks : list or ndarray R peaks localization. t_waves_ends : list or ndarray T waves localization. Returns systole : ndarray Array indicating where systole (1) and diastole (0). Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Details* - **Systole/Diastole**: One prominent channel of body and brain communication is that conveyed by baroreceptors, pressure and stretch-sensitive receptors within the heart and surrounding arteries. Within each cardiac cycle, bursts of baroreceptor afferent activity encoding the strength and timing of each heartbeat are carried via the vagus and glossopharyngeal nerve afferents to the nucleus of the solitary tract. This is the principal route that communicates to the brain the dynamic state of the heart, enabling the representation of cardiovascular arousal within viscerosensory brain regions, and influence ascending neuromodulator systems implicated in emotional and motivational behaviour. Because arterial baroreceptors are activated by the arterial pulse pressure wave, their phasic discharge is maximal during and immediately after the cardiac systole, that is, when the blood is ejected from the heart, and minimal during cardiac diastole, that is, between heartbeats (Azevedo, 2017). References - Azevedo, R. T., Garfinkel, S. N., Critchley, H. D., & Tsakiris, M. (2017). Cardiac afferent activity modulates the expression of racial stereotypes. Nature communications, 8. - Edwards, L., Ring, C., McIntyre, D., & Carroll, D. (2001). Modulation of the human nociceptive flexion reflex across the cardiac cycle. Psychophysiology, 38(4), 712-718. - Gray, M. A., Rylander, K., Harrison, N. A., Wallin, B. G., & Critchley, H. D. (2009). Following one's heart: cardiac rhythms gate central initiation of sympathetic reflexes. Journal of Neuroscience, 29(6), 1817-1825. """ |
waves = np.array([""]*len(ecg))
waves[rpeaks] = "R"
waves[t_waves_ends] = "T"
systole = [0]
current = 0
for value in waves[1:]:
if value == "R":
current = 1
elif value == "T":
current = 0
systole.append(current)
return(systole) |
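The labelling is a tiny state machine: systole switches on at each R peak and off at each T-wave end. A sketch of the intended behaviour on a hypothetical annotation sequence (in the source, `waves` comes from marking `rpeaks` and `t_waves_ends` on the signal):

```python
waves = ["", "R", "", "", "T", "", "R", ""]   # toy annotations

systole = [0]
current = 0
for value in waves[1:]:
    if value == "R":      # blood ejection starts at the R peak
        current = 1
    elif value == "T":    # diastole resumes once the T wave ends
        current = 0
    systole.append(current)

print(systole)  # [0, 1, 1, 1, 0, 0, 1, 1]
```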
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_eeg_erp_topo(all_epochs, colors=None):
""" Plot butterfly plot. DOCS INCOMPLETE :( """ |
all_evokeds = eeg_to_all_evokeds(all_epochs)
data = {}
for participant, epochs in all_evokeds.items():
for cond, epoch in epochs.items():
data.setdefault(cond, []).append(epoch)
if colors is not None:
color_list = []
else:
color_list = None
evokeds = []
for condition, evoked in data.items():
grand_average = mne.grand_average(evoked)
grand_average.comment = condition
evokeds += [grand_average]
if colors is not None:
color_list.append(colors[condition])
plot = mne.viz.plot_evoked_topo(evokeds, background_color="w", color=color_list)
return(plot) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_nk_object(obj, filename="file", path="", extension="nk", compress=False, compatibility=-1):
""" Save whatever python object to a pickled file. Parameters file : object filename : str File's name. path : str File's path. extension : str File's extension. Default "nk" but can be whatever. compress: bool Enable compression using gzip. compatibility : int See :func:`pickle.dump`. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - pickle - gzip """ |
if compress is True:
with gzip.open(path + filename + "." + extension, 'wb') as name:
pickle.dump(obj, name, protocol=compatibility)
else:
with open(path + filename + "." + extension, 'wb') as name:
pickle.dump(obj, name, protocol=compatibility) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_nk_object(filename, path=""):
""" Read a pickled file. Parameters filename : str Full file's name (with extension). path : str File's path. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - pickle - gzip """ |
filename = path + filename
try:
with open(filename, 'rb') as name:
file = pickle.load(name)
except pickle.UnpicklingError:
with gzip.open(filename, 'rb') as name:
file = pickle.load(name)
    except ModuleNotFoundError:  # In case you're trying to unpickle a dataframe made with pandas < 0.17
        try:
            file = pd.read_pickle(filename)
        except Exception:
            file = None  # avoid a NameError on return if the fallback also fails
return(file) |
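The save/read pair above can be exercised end to end with a small round trip; the sketch below mirrors `save_nk_object(compress=True)` and `read_nk_object`'s plain-then-gzip fallback directly (file names are arbitrary):

```python
import gzip
import os
import pickle
import tempfile

obj = {"signal": [1, 2, 3], "sampling_rate": 100}
path = tempfile.mkdtemp() + os.sep
filename = path + "data.nk"

# Save compressed, as save_nk_object does with compress=True
with gzip.open(filename, "wb") as f:
    pickle.dump(obj, f, protocol=-1)

# Read back: plain pickle.load fails on the gzip magic bytes with
# UnpicklingError, so the gzip branch takes over
try:
    with open(filename, "rb") as f:
        restored = pickle.load(f)
except pickle.UnpicklingError:
    with gzip.open(filename, "rb") as f:
        restored = pickle.load(f)

print(restored == obj)
```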
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_creation_date(path):
""" Try to get the date that a file was created, falling back to when it was last modified if that's not possible. Parameters path : str File's path. Returns creation_date : str Time of file creation. Example Notes *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ - Mark Amery *Dependencies* - platform - os *See Also* - http://stackoverflow.com/a/39501288/1709587 """ |
if platform.system() == 'Windows':
return(os.path.getctime(path))
else:
stat = os.stat(path)
try:
return(stat.st_birthtime)
except AttributeError:
print("Neuropsydia error: get_creation_date(): We're probably on Linux. No easy way to get creation dates here, so we'll settle for when its content was last modified.")
return(stat.st_mtime) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _register(self, obj):
"""Creates a random but unique session handle for a session object, register it in the sessions dictionary and return the value :param obj: a session object. :return: session handle :rtype: int """ |
session = None
while session is None or session in self.sessions:
session = random.randint(1000000, 9999999)
self.sessions[session] = obj
return session |
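The same re-draw-until-unique pattern can be sketched standalone (the `SessionRegistry` class is a hypothetical stand-in for the object that owns `self.sessions`):

```python
import random

class SessionRegistry:
    """Minimal sketch of the handle-allocation pattern above."""
    def __init__(self):
        self.sessions = {}

    def register(self, obj):
        session = None
        # Re-draw until the random handle is not already taken
        while session is None or session in self.sessions:
            session = random.randint(1000000, 9999999)
        self.sessions[session] = obj
        return session

reg = SessionRegistry()
handles = {reg.register(object()) for _ in range(100)}
print(len(handles))
```

Because the loop re-draws on collision, 100 registrations always yield 100 distinct seven-digit handles.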
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _return_handler(self, ret_value, func, arguments):
"""Check return values for errors and warnings. TODO: THIS IS JUST COPIED PASTED FROM NIVisaLibrary. Needs to be adapted. """ |
logger.debug('%s%s -> %r',
func.__name__, _args_to_str(arguments), ret_value,
extra=self._logging_extra)
try:
ret_value = StatusCode(ret_value)
except ValueError:
pass
self._last_status = ret_value
# The first argument of almost all registered visa functions is a session.
# We store the error code per session
session = None
if func.__name__ not in ('viFindNext', ):
try:
session = arguments[0]
except KeyError:
raise Exception('Function %r does not seem to be a valid '
'visa function (len args %d)' % (func, len(arguments)))
# Functions that use the first parameter to get a session value.
if func.__name__ in ('viOpenDefaultRM', ):
# noinspection PyProtectedMember
session = session._obj.value
if isinstance(session, integer_types):
self._last_status_in_session[session] = ret_value
else:
# Functions that might or might have a session in the first argument.
if func.__name__ not in ('viClose', 'viGetAttribute', 'viSetAttribute', 'viStatusDesc'):
raise Exception('Function %r does not seem to be a valid '
'visa function (type args[0] %r)' % (func, type(session)))
if ret_value < 0:
raise errors.VisaIOError(ret_value)
if ret_value in self.issue_warning_on:
if session and ret_value not in self._ignore_warning_in_session[session]:
warnings.warn(errors.VisaIOWarning(ret_value), stacklevel=2)
return ret_value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear(self, session):
"""Clears a device. Corresponds to viClear function of the VISA library. :param session: Unique logical identifier to a session. :return: return value of the library call. :rtype: :class:`pyvisa.constants.StatusCode` """ |
try:
sess = self.sessions[session]
except KeyError:
return constants.StatusCode.error_invalid_object
return sess.clear() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gpib_command(self, session, command_byte):
"""Write GPIB command byte on the bus. Corresponds to viGpibCommand function of the VISA library. See: https://linux-gpib.sourceforge.io/doc_html/gpib-protocol.html#REFERENCE-COMMAND-BYTES :param command_byte: command byte to send :type command_byte: int, must be [0 255] :return: return value of the library call :rtype: :class:`pyvisa.constants.StatusCode` """ |
try:
return self.sessions[session].gpib_command(command_byte)
except KeyError:
return constants.StatusCode.error_invalid_object |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assert_trigger(self, session, protocol):
"""Asserts software or hardware trigger. Corresponds to viAssertTrigger function of the VISA library. :param session: Unique logical identifier to a session. :param protocol: Trigger protocol to use during assertion. (Constants.PROT*) :return: return value of the library call. :rtype: :class:`pyvisa.constants.StatusCode` """ |
try:
return self.sessions[session].assert_trigger(protocol)
except KeyError:
return constants.StatusCode.error_invalid_object |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unlock(self, session):
"""Relinquishes a lock for the specified resource. Corresponds to viUnlock function of the VISA library. :param session: Unique logical identifier to a session. :return: return value of the library call. :rtype: :class:`pyvisa.constants.StatusCode` """ |
try:
sess = self.sessions[session]
except KeyError:
return StatusCode.error_invalid_object
return sess.unlock() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_raw_devices(vendor=None, product=None, serial_number=None, custom_match=None, **kwargs):
"""Find connected USB RAW devices. See usbutil.find_devices for more info. """ |
def is_usbraw(dev):
if custom_match and not custom_match(dev):
return False
return bool(find_interfaces(dev, bInterfaceClass=0xFF,
bInterfaceSubClass=0xFF))
return find_devices(vendor, product, serial_number, is_usbraw, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, data):
"""Send raw bytes to the instrument. :param data: bytes to be sent to the instrument :type data: bytes """ |
begin, end, size = 0, 0, len(data)
bytes_sent = 0
raw_write = super(USBRawDevice, self).write
    while end < size:
begin = end
end = begin + self.RECV_CHUNK
bytes_sent += raw_write(data[begin:end])
return bytes_sent |
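The chunking loop can be demonstrated without a USB device by substituting a recording function for the endpoint write (`raw_write` and the 4-byte chunk size below are stand-ins for illustration):

```python
RECV_CHUNK = 4  # toy chunk size; the real device class defines its own

sent_chunks = []

def raw_write(chunk):
    # Stand-in for the USB endpoint write; returns the bytes accepted
    sent_chunks.append(bytes(chunk))
    return len(chunk)

data = b"0123456789"
begin, end, size = 0, 0, len(data)
bytes_sent = 0
while end < size:
    begin = end
    end = begin + RECV_CHUNK
    bytes_sent += raw_write(data[begin:end])

print(bytes_sent, sent_chunks)
```

The final slice is simply shorter than `RECV_CHUNK`, so no padding or special-casing is needed.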
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(self, size):
"""Read raw bytes from the instrument. :param size: amount of bytes to be sent to the instrument :type size: integer :return: received bytes :return type: bytes """ |
raw_read = super(USBRawDevice, self).read
received = bytearray()
        while len(received) < size:
resp = raw_read(self.RECV_CHUNK)
received.extend(resp)
return bytes(received) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_listeners():
"""Find GPIB listeners. """ |
for i in range(31):
try:
if gpib.listener(BOARD, i) and gpib.ask(BOARD, 1) != i:
yield i
except gpib.GpibError as e:
logger.debug("GPIB error in _find_listeners(): %s", repr(e)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_devices(vendor=None, product=None, serial_number=None, custom_match=None, **kwargs):
"""Find connected USB devices matching certain keywords. Wildcards can be used for vendor, product and serial_number. :param vendor: name or id of the vendor (manufacturer) :param product: name or id of the product :param serial_number: serial number. :param custom_match: callable returning True or False that takes a device as only input. :param kwargs: other properties to match. See usb.core.find :return: """ |
kwargs = kwargs or {}
attrs = {}
if isinstance(vendor, str):
attrs['manufacturer'] = vendor
elif vendor is not None:
kwargs['idVendor'] = vendor
if isinstance(product, str):
attrs['product'] = product
elif product is not None:
kwargs['idProduct'] = product
if serial_number:
attrs['serial_number'] = str(serial_number)
if attrs:
def cm(dev):
if custom_match is not None and not custom_match(dev):
return False
for attr, pattern in attrs.items():
if not fnmatch(getattr(dev, attr).lower(), pattern.lower()):
return False
return True
else:
cm = custom_match
return usb.core.find(find_all=True, custom_match=cm, **kwargs) |
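The attribute-matching closure built above can be tested without pyusb by faking the device objects (`FakeDevice` and the descriptor strings below are hypothetical; real `usb.core.Device` objects expose the same `manufacturer`/`product`/`serial_number` attributes):

```python
from fnmatch import fnmatch

class FakeDevice:
    """Hypothetical stand-in for a usb.core.Device's string descriptors."""
    def __init__(self, manufacturer, product, serial_number):
        self.manufacturer = manufacturer
        self.product = product
        self.serial_number = serial_number

devices = [
    FakeDevice("Keysight Technologies", "34465A", "MY123"),
    FakeDevice("Tektronix", "TDS2002", "C0456"),
]

# Wildcard patterns, matched case-insensitively as in find_devices
attrs = {"manufacturer": "keysight*", "serial_number": "my*"}

def cm(dev):
    for attr, pattern in attrs.items():
        if not fnmatch(getattr(dev, attr).lower(), pattern.lower()):
            return False
    return True

matched = [d.product for d in devices if cm(d)]
print(matched)
```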
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_referenced_object(prev_obj, obj, dot_separated_name, desired_type=None):
""" get objects based on a path Args: prev_obj: the object containing obj (req. if obj is a list) obj: the current object dot_separated_name: the attribute name "a.b.c.d" starting from obj Note: the attribute "parent(TYPE)" is a shortcut to jump to the parent of type "TYPE" (exact match of type name). desired_type: (optional) Returns: the object if found, None if not found or Postponed() if some postponed refs are found on the path """ |
from textx.scoping import Postponed
assert prev_obj or not type(obj) is list
names = dot_separated_name.split(".")
match = re.match(r'parent\((\w+)\)', names[0])
if match:
        desired_parent_typename = match.group(1)
        next_obj = get_recursive_parent_with_typename(
            obj, desired_parent_typename)
if next_obj:
return get_referenced_object(None, next_obj, ".".join(names[1:]),
desired_type)
else:
return None
elif type(obj) is list:
next_obj = None
for res in obj:
if hasattr(res, "name") and res.name == names[0]:
if desired_type is None or textx_isinstance(res, desired_type):
next_obj = res
else:
raise TypeError(
"{} has type {} instead of {}.".format(
names[0], type(res).__name__,
desired_type.__name__))
if not next_obj:
# if prev_obj needs to be resolved: return Postponed.
if needs_to_be_resolved(prev_obj, names[0]):
return Postponed()
else:
return None
elif type(obj) is Postponed:
return Postponed()
else:
next_obj = getattr(obj, names[0])
if not next_obj:
# if obj in in crossref return Postponed, else None
if needs_to_be_resolved(obj, names[0]):
return Postponed()
else:
return None
if len(names) > 1:
return get_referenced_object(obj, next_obj, ".".join(
names[1:]), desired_type)
if type(next_obj) is list and needs_to_be_resolved(obj, names[0]):
return Postponed()
return next_obj |
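The core traversal idea — walking a dot-separated path where a segment is resolved by `getattr` on objects and by name lookup in lists — can be sketched in isolation (this simplified `follow` omits the `Postponed` and `parent(TYPE)` handling of the full function; `Node` is a toy class):

```python
class Node:
    def __init__(self, name, **kw):
        self.name = name
        for k, v in kw.items():
            setattr(self, k, v)

def follow(obj, dotted_name):
    """Simplified getattr/list-lookup walk over a dot-separated path."""
    for part in dotted_name.split("."):
        if isinstance(obj, list):
            # Resolve list segments by matching the `name` attribute
            obj = next((x for x in obj if getattr(x, "name", None) == part),
                       None)
        else:
            obj = getattr(obj, part, None)
        if obj is None:
            return None
    return obj

leaf = Node("leaf", value=42)
mid = Node("mid", children=[leaf])
root = Node("root", mid=mid)

print(follow(root, "mid.children.leaf").value)
```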
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_referenced_object_as_list( prev_obj, obj, dot_separated_name, desired_type=None):
""" Same as get_referenced_object, but always returns a list. Args: prev_obj: see get_referenced_object obj: see get_referenced_object dot_separated_name: see get_referenced_object desired_type: see get_referenced_object Returns: same as get_referenced_object, but always returns a list """ |
res = get_referenced_object(prev_obj, obj, dot_separated_name,
desired_type)
if res is None:
return []
elif type(res) is list:
return res
else:
return [res] |
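The normalization is small enough to show exhaustively — None becomes an empty list, a list passes through, and any scalar is wrapped:

```python
def as_list(res):
    # Normalize a None / scalar / list result to a list
    if res is None:
        return []
    elif type(res) is list:
        return res
    return [res]

print(as_list(None), as_list(7), as_list([1, 2]))
```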
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_model( self, the_metamodel, filename, is_main_model, encoding='utf-8', add_to_local_models=True):
""" load a single model Args: the_metamodel: the metamodel used to load the model filename: the model to be loaded (if not cached) Returns: the loaded/cached model """ |
if not self.local_models.has_model(filename):
if self.all_models.has_model(filename):
new_model = self.all_models.filename_to_model[filename]
else:
# print("LOADING {}".format(filename))
# all models loaded here get their references resolved from the
# root model
new_model = the_metamodel.internal_model_from_file(
filename, pre_ref_resolution_callback=lambda
other_model: self.pre_ref_resolution_callback(other_model),
is_main_model=is_main_model, encoding=encoding)
self.all_models.filename_to_model[filename] = new_model
# print("ADDING {}".format(filename))
if add_to_local_models:
self.local_models.filename_to_model[filename] = new_model
assert self.all_models.has_model(filename) # to be sure...
return self.all_models.filename_to_model[filename] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check(ctx, meta_model_file, model_file, ignore_case):
""" Check validity of meta-model and optionally model. """ |
debug = ctx.obj['debug']
check_model(meta_model_file, model_file, debug, ignore_case) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_entity_mm():
""" Builds and returns a meta-model for Entity language. """ |
type_builtins = {
'integer': SimpleType(None, 'integer'),
'string': SimpleType(None, 'string')
}
entity_mm = metamodel_from_file(join(this_folder, 'entity.tx'),
classes=[SimpleType],
builtins=type_builtins)
return entity_mm |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sm_to_dot(model):
""" Transforms given state machine model to dot str. """ |
dot_str = HEADER
# Render states
first = True
for state in model.states:
dot_str += '{}[label="{{{}{}|{}}}"]\n'.format(
id(state), r"-\> " if first else "", state.name,
"\\n".join(action.name for action in state.actions))
first = False
# Render transitions
for transition in state.transitions:
dot_str += '{} -> {} [label="{}"]\n'\
.format(id(state), id(transition.to_state),
transition.event.name)
# If there are reset events declared render them.
if model.resetEvents:
dot_str += 'reset_events [label="{{Reset Events|{}}}", style=""]\n'\
.format("\\n".join(event.name for event in model.resetEvents))
dot_str += '\n}\n'
return dot_str |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_language(language_name):
""" Returns a callable that instantiates meta-model for the given language. """ |
langs = list(pkg_resources.iter_entry_points(group=LANG_EP,
name=language_name))
if not langs:
raise TextXError('Language "{}" is not registered.'
.format(language_name))
if len(langs) > 1:
# Multiple languages defined with the same name
        raise TextXError('Language "{}" registered multiple times:\n{}'
                         .format(language_name,
                                 "\n".join(str(l.dist) for l in langs)))
return langs[0].load()() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_model(obj):
""" Finds model root element for the given object. """ |
p = obj
while hasattr(p, 'parent'):
p = p.parent
return p |
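The parent-chain walk relies only on the convention that every non-root model object carries a `parent` attribute and the root does not; a toy object graph makes that concrete (`Obj` is a stand-in for textX model classes):

```python
class Obj:
    pass

model = Obj()                       # root: no `parent` attribute
child = Obj(); child.parent = model
grandchild = Obj(); grandchild.parent = child

def get_model(obj):
    p = obj
    while hasattr(p, "parent"):
        p = p.parent
    return p

print(get_model(grandchild) is model)
```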
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_parent_of_type(typ, obj):
""" Finds first object up the parent chain of the given type. If no parent of the given type exists None is returned. Args: typ(str or python class):
The type of the model object we are looking for. obj (model object):
Python model object which is the start of the search process. """ |
if type(typ) is not text:
typ = typ.__name__
while hasattr(obj, 'parent'):
obj = obj.parent
if obj.__class__.__name__ == typ:
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_model_parser(top_rule, comments_model, **kwargs):
""" Creates model parser for the given language. """ |
class TextXModelParser(Parser):
"""
Parser created from textual textX language description.
Semantic actions for this parser will construct object
graph representing model on the given language.
"""
def __init__(self, *args, **kwargs):
super(TextXModelParser, self).__init__(*args, **kwargs)
# By default first rule is starting rule
# and must be followed by the EOF
self.parser_model = Sequence(
nodes=[top_rule, EOF()], rule_name='Model', root=True)
self.comments_model = comments_model
# Stack for metaclass instances
self._inst_stack = []
# Dict for cross-ref resolving
# { id(class): { obj.name: obj}}
self._instances = {}
# List to keep track of all cross-ref that need to be resolved
# Contained elements are tuples: (instance, metaattr, cross-ref)
self._crossrefs = []
def clone(self):
"""
Responsibility: create a clone in order to parse a separate file.
It must be possible that more than one clone exist in parallel,
without being influenced by other parser clones.
Returns:
A clone of this parser
"""
import copy
the_clone = copy.copy(self) # shallow copy
# create new objects for parse-dependent data
the_clone._inst_stack = []
the_clone._instances = {}
the_clone._crossrefs = []
# TODO self.memoization = memoization
the_clone.comments = []
the_clone.comment_positions = {}
the_clone.sem_actions = {}
return the_clone
def _parse(self):
try:
return self.parser_model.parse(self)
except NoMatch as e:
line, col = e.parser.pos_to_linecol(e.position)
raise TextXSyntaxError(message=text(e),
line=line,
col=col,
expected_rules=e.rules)
def get_model_from_file(self, file_name, encoding, debug,
pre_ref_resolution_callback=None,
is_main_model=True):
"""
Creates model from the parse tree from the previous parse call.
If file_name is given file will be parsed before model
construction.
"""
with codecs.open(file_name, 'r', encoding) as f:
model_str = f.read()
model = self.get_model_from_str(
model_str, file_name=file_name, debug=debug,
pre_ref_resolution_callback=pre_ref_resolution_callback,
is_main_model=is_main_model, encoding=encoding)
return model
def get_model_from_str(self, model_str, file_name=None, debug=None,
pre_ref_resolution_callback=None,
is_main_model=True, encoding='utf-8'):
"""
Parses given string and creates model object graph.
"""
old_debug_state = self.debug
try:
if debug is not None:
self.debug = debug
if self.debug:
self.dprint("*** PARSING MODEL ***")
self.parse(model_str, file_name=file_name)
# Transform parse tree to model. Skip root node which
# represents the whole file ending in EOF.
model = parse_tree_to_objgraph(
self, self.parse_tree[0], file_name=file_name,
pre_ref_resolution_callback=pre_ref_resolution_callback,
is_main_model=is_main_model, encoding=encoding)
finally:
if debug is not None:
self.debug = old_debug_state
try:
model._tx_metamodel = self.metamodel
except AttributeError:
# model is some primitive python type (e.g. str)
pass
return model
return TextXModelParser(**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resolve_one_step(self):
""" Resolves model references. """ |
metamodel = self.parser.metamodel
current_crossrefs = self.parser._crossrefs
# print("DEBUG: Current crossrefs #: {}".
# format(len(current_crossrefs)))
new_crossrefs = []
self.delayed_crossrefs = []
resolved_crossref_count = 0
# -------------------------
# start of resolve-loop
# -------------------------
default_scope = DefaultScopeProvider()
for obj, attr, crossref in current_crossrefs:
if (get_model(obj) == self.model):
attr_value = getattr(obj, attr.name)
attr_refs = [obj.__class__.__name__ + "." + attr.name,
"*." + attr.name, obj.__class__.__name__ + ".*",
"*.*"]
for attr_ref in attr_refs:
if attr_ref in metamodel.scope_providers:
if self.parser.debug:
self.parser.dprint(" FOUND {}".format(attr_ref))
resolved = metamodel.scope_providers[attr_ref](
obj, attr, crossref)
break
else:
resolved = default_scope(obj, attr, crossref)
# Collect cross-references for textx-tools
if resolved and not type(resolved) is Postponed:
if metamodel.textx_tools_support:
self.pos_crossref_list.append(
RefRulePosition(
name=crossref.obj_name,
ref_pos_start=crossref.position,
ref_pos_end=crossref.position + len(
resolved.name),
def_pos_start=resolved._tx_position,
def_pos_end=resolved._tx_position_end))
if not resolved:
# As a fall-back search builtins if given
if metamodel.builtins:
if crossref.obj_name in metamodel.builtins:
# TODO: Classes must match
resolved = metamodel.builtins[crossref.obj_name]
if not resolved:
line, col = self.parser.pos_to_linecol(crossref.position)
raise TextXSemanticError(
message='Unknown object "{}" of class "{}"'.format(
crossref.obj_name, crossref.cls.__name__),
line=line, col=col, err_type=UNKNOWN_OBJ_ERROR,
expected_obj_cls=crossref.cls,
filename=self.model._tx_filename)
if type(resolved) is Postponed:
self.delayed_crossrefs.append((obj, attr, crossref))
new_crossrefs.append((obj, attr, crossref))
else:
resolved_crossref_count += 1
if attr.mult in [MULT_ONEORMORE, MULT_ZEROORMORE]:
attr_value.append(resolved)
else:
setattr(obj, attr.name, resolved)
else: # crossref not in model
new_crossrefs.append((obj, attr, crossref))
# -------------------------
# end of resolve-loop
# -------------------------
# store cross-refs from other models in the parser list (for later
# processing)
self.parser._crossrefs = new_crossrefs
# print("DEBUG: Next crossrefs #: {}".format(len(new_crossrefs)))
return (resolved_crossref_count, self.delayed_crossrefs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def python_type(textx_type_name):
"""Return Python type from the name of base textx type.""" |
return {
'ID': text,
'BOOL': bool,
'INT': int,
'FLOAT': float,
'STRICTFLOAT': float,
'STRING': text,
'NUMBER': float,
'BASETYPE': text,
}.get(textx_type_name, textx_type_name) |
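The `.get(name, name)` idiom means unknown rule names fall through unchanged; a trimmed version of the mapping shows both paths (on Python 3, `text` is simply `str`):

```python
def python_type(textx_type_name):
    return {
        "ID": str,
        "BOOL": bool,
        "INT": int,
        "FLOAT": float,
        "STRING": str,
        "NUMBER": float,
    }.get(textx_type_name, textx_type_name)

print(python_type("INT"), python_type("MyRule"))
```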
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def language_from_str(language_def, metamodel):
""" Constructs parser and initializes metamodel from language description given in textX language. Args: language_def (str):
A language description in textX. metamodel (TextXMetaModel):
A metamodel to initialize. Returns: Parser for the new language. """ |
if type(language_def) is not text:
raise TextXError("textX accepts only unicode strings.")
if metamodel.debug:
metamodel.dprint("*** PARSING LANGUAGE DEFINITION ***")
# Check the cache for already conctructed textX parser
if metamodel.debug in textX_parsers:
parser = textX_parsers[metamodel.debug]
else:
# Create parser for TextX grammars using
# the arpeggio grammar specified in this module
parser = ParserPython(textx_model, comment_def=comment,
ignore_case=False,
reduce_tree=False,
memoization=metamodel.memoization,
debug=metamodel.debug,
file=metamodel.file)
# Cache it for subsequent calls
textX_parsers[metamodel.debug] = parser
# Parse language description with textX parser
try:
parse_tree = parser.parse(language_def)
except NoMatch as e:
line, col = parser.pos_to_linecol(e.position)
raise TextXSyntaxError(text(e), line, col)
# Construct new parser and meta-model based on the given language
# description.
lang_parser = visit_parse_tree(parse_tree,
TextXVisitor(parser, metamodel))
# Meta-model is constructed. Validate its semantics.
metamodel.validate()
# Here we connect meta-model and language parser for convenience.
lang_parser.metamodel = metamodel
metamodel._parser_blueprint = lang_parser
if metamodel.debug:
# Create dot file for debuging purposes
PMDOTExporter().exportFile(
lang_parser.parser_model,
"{}_parser_model.dot".format(metamodel.rootcls.__name__))
return lang_parser |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def second_textx_model(self, model_parser):
"""Cross reference resolving for parser model.""" |
if self.grammar_parser.debug:
self.grammar_parser.dprint("RESOLVING MODEL PARSER: second_pass")
self._resolve_rule_refs(self.grammar_parser, model_parser)
self._determine_rule_types(model_parser.metamodel)
self._resolve_cls_refs(self.grammar_parser, model_parser)
return model_parser |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _resolve_rule_refs(self, grammar_parser, model_parser):
"""Resolves parser ParsingExpression crossrefs.""" |
def _resolve_rule(rule):
"""
Recursively resolve peg rule references.
Args:
rule(ParsingExpression or RuleCrossRef)
"""
if not isinstance(rule, RuleCrossRef) and rule in resolved_rules:
return rule
resolved_rules.add(rule)
if grammar_parser.debug:
grammar_parser.dprint("Resolving rule: {}".format(rule))
if type(rule) is RuleCrossRef:
rule_name = rule.rule_name
suppress = rule.suppress
if rule_name in model_parser.metamodel:
rule = model_parser.metamodel[rule_name]._tx_peg_rule
if type(rule) is RuleCrossRef:
rule = _resolve_rule(rule)
model_parser.metamodel[rule_name]._tx_peg_rule = rule
if suppress:
# Special case. Suppression on rule reference.
_tx_class = rule._tx_class
rule = Sequence(nodes=[rule],
rule_name=rule_name,
suppress=suppress)
rule._tx_class = _tx_class
else:
line, col = grammar_parser.pos_to_linecol(rule.position)
raise TextXSemanticError(
'Unexisting rule "{}" at position {}.'
.format(rule.rule_name,
(line, col)), line, col)
assert isinstance(rule, ParsingExpression),\
"{}:{}".format(type(rule), text(rule))
# Recurse into subrules, and resolve rules.
for idx, child in enumerate(rule.nodes):
if child not in resolved_rules:
child = _resolve_rule(child)
rule.nodes[idx] = child
return rule
# Two pass resolving
for i in range(2):
if grammar_parser.debug:
grammar_parser.dprint("RESOLVING RULE CROSS-REFS - PASS {}"
.format(i + 1))
resolved_rules = set()
_resolve_rule(model_parser.parser_model)
# Resolve rules of all meta-classes to handle unreferenced
# rules also.
for cls in model_parser.metamodel:
cls._tx_peg_rule = _resolve_rule(cls._tx_peg_rule) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def match_abstract_str(cls):
""" For a given abstract or match rule meta-class returns a nice string representation for the body. """ |
def r(s):
if s.root:
if s in visited or s.rule_name in ALL_TYPE_NAMES or \
(hasattr(s, '_tx_class') and
s._tx_class._tx_type is not RULE_MATCH):
return s.rule_name
visited.add(s)
if isinstance(s, Match):
result = text(s)
elif isinstance(s, OrderedChoice):
result = "|".join([r(x) for x in s.nodes])
elif isinstance(s, Sequence):
result = " ".join([r(x) for x in s.nodes])
elif isinstance(s, ZeroOrMore):
result = "({})*".format(r(s.nodes[0]))
elif isinstance(s, OneOrMore):
result = "({})+".format(r(s.nodes[0]))
elif isinstance(s, Optional):
result = "{}?".format(r(s.nodes[0]))
elif isinstance(s, SyntaxPredicate):
result = ""
return "{}{}".format(result, "-" if s.suppress else "")
mstr = ""
if cls.__name__ not in ALL_TYPE_NAMES and \
not (cls._tx_type is RULE_ABSTRACT and
cls.__name__ != cls._tx_peg_rule.rule_name):
e = cls._tx_peg_rule
visited = set()
if not isinstance(e, Match):
visited.add(e)
if isinstance(e, OrderedChoice):
mstr = "|".join([r(x) for x in e.nodes
if x.rule_name in BASE_TYPE_NAMES or not x.root])
elif isinstance(e, Sequence):
mstr = " ".join([r(x) for x in e.nodes])
else:
mstr = r(e)
mstr = dot_escape(mstr)
return mstr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metamodel_from_str(lang_desc, metamodel=None, **kwargs):
""" Creates a new metamodel from the textX description given as a string. Args: lang_desc(str):
A textX language description. metamodel(TextXMetaModel):
A metamodel that should be used. other params: See TextXMetaModel. """ |
if not metamodel:
metamodel = TextXMetaModel(**kwargs)
language_from_str(lang_desc, metamodel)
return metamodel |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metamodel_from_file(file_name, **kwargs):
""" Creates new metamodel from the given file. Args: file_name(str):
The name of the file with textX language description. other params: See metamodel_from_str. """ |
with codecs.open(file_name, 'r', 'utf-8') as f:
lang_desc = f.read()
metamodel = metamodel_from_str(lang_desc=lang_desc,
file_name=file_name,
**kwargs)
return metamodel |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_class(self, cls, peg_rule, position, position_end=None, inherits=None, root=False, rule_type=RULE_MATCH):
""" Setup meta-class special attributes, namespaces etc. This is called both for textX created classes as well as user classes. """ |
cls._tx_metamodel = self
# Attribute information (MetaAttr instances) keyed by name.
cls._tx_attrs = OrderedDict()
# A list of inheriting classes
cls._tx_inh_by = inherits if inherits else []
cls._tx_position = position
cls._tx_position_end = \
position if position_end is None else position_end
# The type of the rule this meta-class results from.
# There are three rule types: common, abstract and match
# Base types are match rules.
cls._tx_type = rule_type
cls._tx_peg_rule = peg_rule
if peg_rule:
peg_rule._tx_class = cls
# Push this class and PEG rule in the current namespace
current_namespace = self.namespaces[self._namespace_stack[-1]]
cls._tx_fqn = self._cls_fqn(cls)
current_namespace[cls.__name__] = cls
if root:
self.rootcls = cls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cls_fqn(self, cls):
""" Returns fully qualified name for the class based on current namespace and the class name. """ |
ns = self._namespace_stack[-1]
if ns in ['__base__', None]:
return cls.__name__
else:
return ns + '.' + cls.__name__ |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _new_cls_attr(self, clazz, name, cls=None, mult=MULT_ONE, cont=True, ref=False, bool_assignment=False, position=0):
"""Creates new meta attribute of this class.""" |
attr = MetaAttr(name, cls, mult, cont, ref, bool_assignment,
position)
clazz._tx_attrs[name] = attr
return attr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert(self, value, _type):
""" Convert instances of textx types and match rules to python types. """ |
return self.type_convertors.get(_type, lambda x: x)(value) |
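The same dispatch-with-identity-fallback pattern works standalone; the convertor table below is a hypothetical example, not the metamodel's actual registry:

```python
type_convertors = {
    "INT": lambda x: int(x),
    "BOOL": lambda x: x == "true",
    "FLOAT": lambda x: float(x),
}

def convert(value, _type):
    # Fall back to identity for types without a registered convertor
    return type_convertors.get(_type, lambda x: x)(value)

print(convert("42", "INT"), convert("true", "BOOL"), convert("abc", "ID"))
```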