MIT-LCP/wfdb-python | wfdb/io/annotation.py (Python)
def wrann(record_name, extension, sample, symbol=None, subtype=None, chan=None,
          num=None, aux_note=None, label_store=None, fs=None,
          custom_labels=None, write_dir=''):
    """
    Write a WFDB annotation file.

    Specify at least the following:

    - The record name of the WFDB record (record_name)
    - The annotation file extension (extension)
    - The annotation locations in samples relative to the beginning of
      the record (sample)
    - Either the numerical values used to store the labels
      (`label_store`), or more commonly, the display symbols of each
      label (`symbol`).

    Parameters
    ----------
    record_name : str
        The string name of the WFDB record to be written (without any file
        extensions).
    extension : str
        The string annotation file extension.
    sample : numpy array
        A numpy array containing the annotation locations in samples relative
        to the beginning of the record.
    symbol : list, or numpy array, optional
        The symbols used to display the annotation labels. If this field is
        present, `label_store` must not be present.
    subtype : numpy array, optional
        A numpy array containing the marked class/category of each annotation.
    chan : numpy array, optional
        A numpy array containing the signal channel associated with each
        annotation.
    num : numpy array, optional
        A numpy array containing the labelled annotation number for each
        annotation.
    aux_note : list, optional
        A list containing the auxiliary information string (or None for
        annotations without notes) for each annotation.
    label_store : numpy array, optional
        A numpy array containing the integer values used to store the
        annotation labels. If this field is present, `symbol` must not be
        present.
    fs : int, or float, optional
        The numerical sampling frequency of the record to be written to the
        file.
    custom_labels : pandas dataframe, optional
        The map of custom defined annotation labels used for this annotation,
        in addition to the standard WFDB annotation labels. Custom labels are
        defined by two or three fields:

        - The integer values used to store custom annotation labels in the
          file (optional)
        - Their short display symbols
        - Their long descriptions.

        This input argument may come in four formats:

        1. A pandas.DataFrame object with columns:
           ['label_store', 'symbol', 'description']
        2. A pandas.DataFrame object with columns: ['symbol', 'description'].
           If this option is chosen, label_store values are automatically
           chosen.
        3. A list or tuple of tuple triplets, with triplet elements
           representing: (label_store, symbol, description).
        4. A list or tuple of tuple pairs, with pair elements representing:
           (symbol, description). If this option is chosen, label_store
           values are automatically chosen.

        If the `label_store` field is given for this function, and
        `custom_labels` is defined, `custom_labels` must contain `label_store`
        in its mapping, i.e. it must come in format 1 or 3 above.
    write_dir : str, optional
        The directory in which to write the annotation file.

    Notes
    -----
    This is a gateway function, written as a simple way to write WFDB
    annotation files without needing to explicitly create an Annotation
    object. You may also create an Annotation object, manually set its
    attributes, and call its `wrann` instance method.

    Each annotation stored in a WFDB annotation file contains a sample field
    and a label field. All other fields may or may not be present.

    Examples
    --------
    >>> # Read an annotation as an Annotation object
    >>> annotation = wfdb.rdann('b001', 'atr', pb_dir='cebsdb')
    >>> # Write a copy of the annotation file
    >>> wfdb.wrann('b001', 'cpy', annotation.sample, annotation.symbol)

    """
    # Create the Annotation object
    annotation = Annotation(record_name=record_name, extension=extension,
                            sample=sample, symbol=symbol, subtype=subtype,
                            chan=chan, num=num, aux_note=aux_note,
                            label_store=label_store, fs=fs,
                            custom_labels=custom_labels)

    # Find out which input field describes the labels
    if symbol is None:
        if label_store is None:
            raise Exception("Either the 'symbol' field or the 'label_store' "
                            "field must be set")
    else:
        if label_store is None:
            annotation.sym_to_aux()
        else:
            raise Exception("Only one of the 'symbol' and 'label_store' "
                            "fields may be input, for describing annotation "
                            "labels")

    # Perform field checks and write the annotation file
    annotation.wrann(write_fs=True, write_dir=write_dir)
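The mutual exclusion between `symbol` and `label_store` can be sketched as a standalone check. The helper below is hypothetical (it is not part of the wfdb API) and only mirrors the validation logic at the end of `wrann`:

```python
# Hypothetical helper (not part of the wfdb API) mirroring wrann's check that
# exactly one of `symbol` or `label_store` describes the labels.
def check_label_inputs(symbol=None, label_store=None):
    if symbol is None and label_store is None:
        raise ValueError("Either the 'symbol' field or the 'label_store' "
                         "field must be set")
    if symbol is not None and label_store is not None:
        raise ValueError("Only one of the 'symbol' and 'label_store' fields "
                         "may be input")
    # Report which field was provided.
    return 'symbol' if symbol is not None else 'label_store'
```

Calling it with neither or both arguments raises, otherwise it reports which field carries the labels.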
def rdann(record_name, extension, sampfrom=0, sampto=None, shift_samps=False,
          pb_dir=None, return_label_elements=['symbol'],
          summarize_labels=False):
    """
    Read a WFDB annotation file record_name.extension and return an
    Annotation object.

    Parameters
    ----------
    record_name : str
        The record name of the WFDB annotation file, i.e. for file '100.atr',
        record_name='100'.
    extension : str
        The annotator extension of the annotation file, i.e. for file
        '100.atr', extension='atr'.
    sampfrom : int, optional
        The minimum sample number for annotations to be returned.
    sampto : int, optional
        The maximum sample number for annotations to be returned.
    shift_samps : bool, optional
        Specifies whether to return the sample indices relative to `sampfrom`
        (True), or sample 0 (False).
    pb_dir : str, optional
        Option used to stream data from Physiobank. The Physiobank database
        directory from which to find the required annotation file, e.g. for
        record '100' in 'http://physionet.org/physiobank/database/mitdb':
        pb_dir='mitdb'.
    return_label_elements : list, optional
        The label elements that are to be returned from reading the
        annotation file. A list with at least one of the following options:
        'symbol', 'label_store', 'description'.
    summarize_labels : bool, optional
        If True, assign a summary table of the set of annotation labels
        contained in the file to the 'contained_labels' attribute of the
        returned object. This table will contain the columns:
        ['label_store', 'symbol', 'description', 'n_occurrences']

    Returns
    -------
    annotation : Annotation
        The Annotation object. Call help(wfdb.Annotation) for the attribute
        descriptions.

    Notes
    -----
    For every annotation sample, the annotation file explicitly stores the
    'sample' and 'symbol' fields, but not necessarily the others. When
    reading annotation files using this function, fields which are not
    stored in the file will either take their default values of 0 or None,
    or will be carried over from their previous values if any.

    Examples
    --------
    >>> ann = wfdb.rdann('sample-data/100', 'atr', sampto=300000)

    """
    return_label_elements = check_read_inputs(sampfrom, sampto,
                                              return_label_elements)

    # Read the file in byte pairs
    filebytes = load_byte_pairs(record_name, extension, pb_dir)

    # Get wfdb annotation fields from the file bytes
    (sample, label_store, subtype,
     chan, num, aux_note) = proc_ann_bytes(filebytes, sampto)

    # Get the indices of annotations that hold definition information about
    # the entire annotation file, and other empty annotations to be removed.
    potential_definition_inds, rm_inds = get_special_inds(sample, label_store,
                                                          aux_note)

    # Try to extract information describing the annotation file
    (fs,
     custom_labels) = interpret_defintion_annotations(potential_definition_inds,
                                                      aux_note)

    # Remove annotations that do not store actual sample and label information
    (sample, label_store, subtype,
     chan, num, aux_note) = rm_empty_indices(rm_inds, sample, label_store,
                                             subtype, chan, num, aux_note)

    # Convert lists to numpy arrays of dtype 'int'
    (sample, label_store, subtype,
     chan, num) = lists_to_int_arrays(sample, label_store, subtype, chan, num)

    # Try to get fs from the header file if it is not contained in the
    # annotation file
    if fs is None:
        try:
            rec = record.rdheader(record_name, pb_dir)
            fs = rec.fs
        except Exception:
            # The header may be missing or unreadable; fs stays None.
            pass

    # Create the annotation object
    annotation = Annotation(record_name=os.path.split(record_name)[1],
                            extension=extension, sample=sample,
                            label_store=label_store, subtype=subtype,
                            chan=chan, num=num, aux_note=aux_note, fs=fs,
                            custom_labels=custom_labels)

    # Apply the desired index range
    if sampfrom > 0 and sampto is not None:
        annotation.apply_range(sampfrom=sampfrom, sampto=sampto)

    # If specified, obtain annotation samples relative to the starting index
    if shift_samps and len(sample) > 0 and sampfrom:
        annotation.sample = annotation.sample - sampfrom

    # Get the set of unique label definitions contained in this annotation
    if summarize_labels:
        annotation.get_contained_labels(inplace=True)

    # Set/unset the desired label values
    annotation.set_label_elements(return_label_elements)

    return annotation
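The range filtering and `shift_samps` behaviour can be illustrated on a bare numpy array rather than an Annotation object. The helper name below is assumed for illustration only:

```python
import numpy as np

# Illustrative sketch (helper name assumed) of rdann's post-processing:
# keep annotations inside [sampfrom, sampto], then optionally shift them
# so indices are relative to sampfrom.
def filter_and_shift(sample, sampfrom=0, sampto=None, shift_samps=False):
    sample = np.asarray(sample)
    if sampto is None:
        sampto = sample[-1]
    kept = sample[(sample >= sampfrom) & (sample <= sampto)]
    if shift_samps and sampfrom:
        kept = kept - sampfrom
    return kept
```

With `shift_samps=True` an annotation at absolute sample 150 read with `sampfrom=100` is reported at index 50.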
def update_extra_fields(subtype, chan, num, aux_note, update):
    """
    Update each optional field if the current annotation did not provide a
    value.

    - aux_note and subtype are set to default values if missing.
    - chan and num copy over the previous value if missing.
    """
    if update['subtype']:
        subtype.append(0)
    if update['chan']:
        if chan == []:
            chan.append(0)
        else:
            chan.append(chan[-1])
    if update['num']:
        if num == []:
            num.append(0)
        else:
            num.append(num[-1])
    if update['aux_note']:
        aux_note.append('')

    return subtype, chan, num, aux_note
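A worked example of the carry-forward rule, restated as a standalone sketch: `chan` and `num` repeat the previous value when an annotation omits them, falling back to 0 when there is no previous value yet.

```python
# Standalone illustration of the carry-forward rule: a missing chan/num entry
# repeats the previous value, or a default of 0 when there is none yet.
def carry_forward(prev_values, default=0):
    return prev_values[-1] if prev_values else default

chan = []
for observed in (None, None, 3):      # two missing entries, then channel 3
    chan.append(carry_forward(chan) if observed is None else observed)
# chan is now [0, 0, 3]
```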
def get_special_inds(sample, label_store, aux_note):
    """
    Get the indices of annotations that hold definition information about
    the entire annotation file, and other empty annotations to be removed.

    Note: there is no need to deal with SKIP annotations (label_store=59),
    which were already dealt with in proc_core_fields and are hence not
    included here.
    """
    s0_inds = np.where(sample == np.int64(0))[0]
    note_inds = np.where(label_store == np.int64(22))[0]

    # sample = 0 with aux_note means there should be an fs or custom label
    # definition. Either way, they are to be removed.
    potential_definition_inds = set(s0_inds).intersection(note_inds)

    # Other indices which are not actual annotations.
    notann_inds = np.where(label_store == np.int64(0))[0]

    rm_inds = potential_definition_inds.union(set(notann_inds))

    return potential_definition_inds, rm_inds
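The set arithmetic can be traced on a toy example (the sample and label values are chosen purely for illustration): a label-22 annotation at sample 0 is a potential definition record, label-0 entries are non-annotations, and both sets are slated for removal.

```python
import numpy as np

# Toy trace of the index bookkeeping: a label-22 annotation at sample 0 is a
# potential definition record; label-0 entries are non-annotations.
sample = np.array([0, 10, 0, 25])
label_store = np.array([22, 1, 0, 1])

s0_inds = set(np.where(sample == 0)[0])
note_inds = set(np.where(label_store == 22)[0])
potential_definition_inds = s0_inds & note_inds   # index 0 only

notann_inds = set(np.where(label_store == 0)[0])  # index 2
rm_inds = potential_definition_inds | notann_inds
```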
def rm_empty_indices(*args):
    """
    Remove unwanted list indices. The first argument is the set of indices
    to remove; the remaining arguments are the lists to trim.
    """
    rm_inds = args[0]

    if not rm_inds:
        return args[1:]

    keep_inds = [i for i in range(len(args[1])) if i not in rm_inds]

    return [[a[i] for i in keep_inds] for a in args[1:]]
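A usage sketch of the trimming helper, restated here so the snippet runs on its own:

```python
# Restated copy of the helper so this example is self-contained.
def rm_empty_indices(*args):
    rm_inds = args[0]
    if not rm_inds:
        return args[1:]
    keep_inds = [i for i in range(len(args[1])) if i not in rm_inds]
    return [[a[i] for i in keep_inds] for a in args[1:]]

# Drop position 1 from two parallel lists.
sample, aux_note = rm_empty_indices({1}, [0, 5, 10], ['a', 'b', 'c'])
```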
# Method of the Annotation class.
def apply_range(self, sampfrom=0, sampto=None):
    """
    Filter the annotation attributes to keep only items between the
    desired sample values.
    """
    sampto = sampto or self.sample[-1]

    kept_inds = np.intersect1d(np.where(self.sample >= sampfrom),
                               np.where(self.sample <= sampto))

    for field in ['sample', 'label_store', 'subtype', 'chan', 'num']:
        setattr(self, field, getattr(self, field)[kept_inds])
    self.aux_note = [self.aux_note[i] for i in kept_inds]

    self.ann_len = len(self.sample)
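The same `np.intersect1d` range filter can be run on a bare array instead of the Annotation object's attributes:

```python
import numpy as np

# Sketch of the range filter on a standalone sample array.
sample = np.array([2, 50, 120, 400])
sampfrom, sampto = 10, 200

# Indices satisfying both bounds; np.where output is flattened by intersect1d.
kept_inds = np.intersect1d(np.where(sample >= sampfrom),
                           np.where(sample <= sampto))
kept = sample[kept_inds]
```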
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.wrann | def wrann(self, write_fs=False, write_dir=''):
"""
Write a WFDB annotation file from this object.
Parameters
----------
write_fs : bool, optional
Whether to write the `fs` attribute to the file.
"""
for field in ['record_name', 'extension']:
if getattr(self, field) is None:
raise Exception('Missing required field for writing annotation file: ',field)
present_label_fields = self.get_label_fields()
if not present_label_fields:
raise Exception('At least one annotation label field is required to write the annotation: ', ann_label_fields)
# Check the validity of individual fields
self.check_fields()
# Standardize the format of the custom_labels field
self.standardize_custom_labels()
# Create the label map used in this annotaion
self.create_label_map()
# Check the cohesion of fields
self.check_field_cohesion(present_label_fields)
# Calculate the label_store field if necessary
if 'label_store' not in present_label_fields:
self.convert_label_attribute(source_field=present_label_fields[0],
target_field='label_store')
# Write the header file using the specified fields
self.wr_ann_file(write_fs=write_fs, write_dir=write_dir)
return | python | def wrann(self, write_fs=False, write_dir=''):
"""
Write a WFDB annotation file from this object.
Parameters
----------
write_fs : bool, optional
Whether to write the `fs` attribute to the file.
"""
for field in ['record_name', 'extension']:
if getattr(self, field) is None:
raise Exception('Missing required field for writing annotation file: ',field)
present_label_fields = self.get_label_fields()
if not present_label_fields:
raise Exception('At least one annotation label field is required to write the annotation: ', ann_label_fields)
# Check the validity of individual fields
self.check_fields()
# Standardize the format of the custom_labels field
self.standardize_custom_labels()
# Create the label map used in this annotation
self.create_label_map()
# Check the cohesion of fields
self.check_field_cohesion(present_label_fields)
# Calculate the label_store field if necessary
if 'label_store' not in present_label_fields:
self.convert_label_attribute(source_field=present_label_fields[0],
target_field='label_store')
# Write the header file using the specified fields
self.wr_ann_file(write_fs=write_fs, write_dir=write_dir)
return | [
"def",
"wrann",
"(",
"self",
",",
"write_fs",
"=",
"False",
",",
"write_dir",
"=",
"''",
")",
":",
"for",
"field",
"in",
"[",
"'record_name'",
",",
"'extension'",
"]",
":",
"if",
"getattr",
"(",
"self",
",",
"field",
")",
"is",
"None",
":",
"raise",
... | Write a WFDB annotation file from this object.
Parameters
----------
write_fs : bool, optional
Whether to write the `fs` attribute to the file. | [
"Write",
"a",
"WFDB",
"annotation",
"file",
"from",
"this",
"object",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L153-L191 | train | 216,206 |
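The first loop of `wrann` validates that `record_name` and `extension` are set before anything is written. A minimal sketch of that precondition check, using a hypothetical stand-in object rather than a real `Annotation`:

```python
from types import SimpleNamespace

def check_required_fields(obj, fields):
    # Mirror of wrann's first loop: every named attribute must be set.
    missing = [f for f in fields if getattr(obj, f, None) is None]
    if missing:
        raise ValueError('Missing required field(s) for writing annotation '
                         'file: %s' % ', '.join(missing))

ann = SimpleNamespace(record_name='100', extension=None)
try:
    check_required_fields(ann, ['record_name', 'extension'])
    failed = False
except ValueError:
    failed = True
```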
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.get_label_fields | def get_label_fields(self):
"""
Get the present label fields in the object
"""
present_label_fields = []
for field in ann_label_fields:
if getattr(self, field) is not None:
present_label_fields.append(field)
return present_label_fields | python | def get_label_fields(self):
"""
Get the present label fields in the object
"""
present_label_fields = []
for field in ann_label_fields:
if getattr(self, field) is not None:
present_label_fields.append(field)
return present_label_fields | [
"def",
"get_label_fields",
"(",
"self",
")",
":",
"present_label_fields",
"=",
"[",
"]",
"for",
"field",
"in",
"ann_label_fields",
":",
"if",
"getattr",
"(",
"self",
",",
"field",
")",
"is",
"not",
"None",
":",
"present_label_fields",
".",
"append",
"(",
"... | Get the present label fields in the object | [
"Get",
"the",
"present",
"label",
"fields",
"in",
"the",
"object"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L193-L202 | train | 216,207 |
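`get_label_fields` is a simple presence scan over the label attributes. The same pattern, sketched with an assumed field ordering and a plain namespace object:

```python
from types import SimpleNamespace

# Assumed ordering of wfdb's ann_label_fields.
ANN_LABEL_FIELDS = ('label_store', 'symbol', 'description')

def get_label_fields(obj):
    # Collect the label-related attributes that are actually set.
    return [f for f in ANN_LABEL_FIELDS if getattr(obj, f, None) is not None]

ann = SimpleNamespace(label_store=None, symbol=['N', 'V'], description=None)
present = get_label_fields(ann)
```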
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.check_field_cohesion | def check_field_cohesion(self, present_label_fields):
"""
Check that the content and structure of different fields are consistent
with one another.
"""
# Ensure all written annotation fields have the same length
nannots = len(self.sample)
for field in ['sample', 'num', 'subtype', 'chan', 'aux_note']+present_label_fields:
if getattr(self, field) is not None:
if len(getattr(self, field)) != nannots:
raise ValueError("The lengths of the 'sample' and '"+field+"' fields do not match")
# Ensure all label fields are defined by the label map. This has to be checked because
# it is possible the user defined (or lack of) custom_labels does not capture all the
# labels present.
for field in present_label_fields:
defined_values = self.__label_map__[field].values
if set(getattr(self, field)) - set(defined_values) != set():
raise ValueError('\n'.join(['\nThe '+field+' field contains elements not encoded in the standard WFDB annotation labels, or this object\'s custom_labels field',
'- To see the standard WFDB annotation labels, call: show_ann_labels()',
'- To transfer non-encoded symbol items into the aux_note field, call: self.sym_to_aux()',
'- To define custom labels, set the custom_labels field as a list of tuple triplets with format: (label_store, symbol, description)']))
return | python | def check_field_cohesion(self, present_label_fields):
"""
Check that the content and structure of different fields are consistent
with one another.
"""
# Ensure all written annotation fields have the same length
nannots = len(self.sample)
for field in ['sample', 'num', 'subtype', 'chan', 'aux_note']+present_label_fields:
if getattr(self, field) is not None:
if len(getattr(self, field)) != nannots:
raise ValueError("The lengths of the 'sample' and '"+field+"' fields do not match")
# Ensure all label fields are defined by the label map. This has to be checked because
# it is possible the user defined (or lack of) custom_labels does not capture all the
# labels present.
for field in present_label_fields:
defined_values = self.__label_map__[field].values
if set(getattr(self, field)) - set(defined_values) != set():
raise ValueError('\n'.join(['\nThe '+field+' field contains elements not encoded in the standard WFDB annotation labels, or this object\'s custom_labels field',
'- To see the standard WFDB annotation labels, call: show_ann_labels()',
'- To transfer non-encoded symbol items into the aux_note field, call: self.sym_to_aux()',
'- To define custom labels, set the custom_labels field as a list of tuple triplets with format: (label_store, symbol, description)']))
return | [
"def",
"check_field_cohesion",
"(",
"self",
",",
"present_label_fields",
")",
":",
"# Ensure all written annotation fields have the same length",
"nannots",
"=",
"len",
"(",
"self",
".",
"sample",
")",
"for",
"field",
"in",
"[",
"'sample'",
",",
"'num'",
",",
"'subt... | Check that the content and structure of different fields are consistent
with one another. | [
"Check",
"that",
"the",
"content",
"and",
"structure",
"of",
"different",
"fields",
"are",
"consistent",
"with",
"one",
"another",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L361-L386 | train | 216,208 |
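The first half of `check_field_cohesion` is a length-consistency check against `len(sample)`. A minimal standalone sketch of that check:

```python
import numpy as np

def check_field_lengths(fields, nannots):
    # Every present per-annotation field must match the number of samples.
    for name, value in fields.items():
        if value is not None and len(value) != nannots:
            raise ValueError("The lengths of the 'sample' and '%s' fields "
                             "do not match" % name)

sample = np.array([10, 20, 30])
fields = {'chan': np.array([0, 0]), 'num': None}  # chan is one item short
try:
    check_field_lengths(fields, len(sample))
    mismatch = False
except ValueError:
    mismatch = True
```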
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.get_available_label_stores | def get_available_label_stores(self, usefield='tryall'):
"""
Get the label store values that may be used
for writing this annotation.
Available store values include:
- the undefined values in the standard wfdb labels
- the store values not used in the current
annotation object.
- the store values whose standard wfdb symbols/descriptions
match those of the custom labels (if custom_labels exists)
If 'usefield' is explicitly specified, the function will use that
field to figure out available label stores. If 'usefield'
is set to 'tryall', the function will choose one of the contained
attributes by checking availability in the order: label_store, symbol, description
"""
# Figure out which field to use to get available labels stores.
if usefield == 'tryall':
if self.label_store is not None:
usefield = 'label_store'
elif self.symbol is not None:
usefield = 'symbol'
elif self.description is not None:
usefield = 'description'
else:
raise ValueError('No label fields are defined. At least one of the following is required: ', ann_label_fields)
return self.get_available_label_stores(usefield = usefield)
# Use the explicitly stated field to get available stores.
else:
# If usefield == 'label_store', there are slightly fewer/different steps
# compared to if it were another option
contained_field = getattr(self, usefield)
# Get the unused label_store values
if usefield == 'label_store':
unused_label_stores = set(ann_label_table['label_store'].values) - set(contained_field)
else:
# the label_store values from the standard wfdb annotation labels
# whose symbols are not contained in this annotation
unused_field = set(ann_label_table[usefield].values) - contained_field
unused_label_stores = ann_label_table.loc[ann_label_table[usefield].isin(unused_field), 'label_store'].values
# Get the standard wfdb label_store values overwritten by the
# custom_labels if any
if self.custom_labels is not None:
custom_field = set(self.get_custom_label_attribute(usefield))
if usefield == 'label_store':
overwritten_label_stores = set(custom_field).intersection(set(ann_label_table['label_store']))
else:
overwritten_fields = set(custom_field).intersection(set(ann_label_table[usefield]))
overwritten_label_stores = ann_label_table.loc[ann_label_table[usefield].isin(overwritten_fields), 'label_store'].values
else:
overwritten_label_stores = set()
# The undefined values in the standard wfdb labels
undefined_label_stores = self.get_undefined_label_stores()
# Final available label stores = undefined + unused + overwritten
available_label_stores = set(undefined_label_stores).union(set(unused_label_stores)).union(overwritten_label_stores)
return available_label_stores | python | def get_available_label_stores(self, usefield='tryall'):
"""
Get the label store values that may be used
for writing this annotation.
Available store values include:
- the undefined values in the standard wfdb labels
- the store values not used in the current
annotation object.
- the store values whose standard wfdb symbols/descriptions
match those of the custom labels (if custom_labels exists)
If 'usefield' is explicitly specified, the function will use that
field to figure out available label stores. If 'usefield'
is set to 'tryall', the function will choose one of the contained
attributes by checking availability in the order: label_store, symbol, description
"""
# Figure out which field to use to get available labels stores.
if usefield == 'tryall':
if self.label_store is not None:
usefield = 'label_store'
elif self.symbol is not None:
usefield = 'symbol'
elif self.description is not None:
usefield = 'description'
else:
raise ValueError('No label fields are defined. At least one of the following is required: ', ann_label_fields)
return self.get_available_label_stores(usefield = usefield)
# Use the explicitly stated field to get available stores.
else:
# If usefield == 'label_store', there are slightly fewer/different steps
# compared to if it were another option
contained_field = getattr(self, usefield)
# Get the unused label_store values
if usefield == 'label_store':
unused_label_stores = set(ann_label_table['label_store'].values) - set(contained_field)
else:
# the label_store values from the standard wfdb annotation labels
# whose symbols are not contained in this annotation
unused_field = set(ann_label_table[usefield].values) - contained_field
unused_label_stores = ann_label_table.loc[ann_label_table[usefield].isin(unused_field), 'label_store'].values
# Get the standard wfdb label_store values overwritten by the
# custom_labels if any
if self.custom_labels is not None:
custom_field = set(self.get_custom_label_attribute(usefield))
if usefield == 'label_store':
overwritten_label_stores = set(custom_field).intersection(set(ann_label_table['label_store']))
else:
overwritten_fields = set(custom_field).intersection(set(ann_label_table[usefield]))
overwritten_label_stores = ann_label_table.loc[ann_label_table[usefield].isin(overwritten_fields), 'label_store'].values
else:
overwritten_label_stores = set()
# The undefined values in the standard wfdb labels
undefined_label_stores = self.get_undefined_label_stores()
# Final available label stores = undefined + unused + overwritten
available_label_stores = set(undefined_label_stores).union(set(unused_label_stores)).union(overwritten_label_stores)
return available_label_stores | [
"def",
"get_available_label_stores",
"(",
"self",
",",
"usefield",
"=",
"'tryall'",
")",
":",
"# Figure out which field to use to get available labels stores.",
"if",
"usefield",
"==",
"'tryall'",
":",
"if",
"self",
".",
"label_store",
"is",
"not",
"None",
":",
"usefi... | Get the label store values that may be used
for writing this annotation.
Available store values include:
- the undefined values in the standard wfdb labels
- the store values not used in the current
annotation object.
- the store values whose standard wfdb symbols/descriptions
match those of the custom labels (if custom_labels exists)
If 'usefield' is explicitly specified, the function will use that
field to figure out available label stores. If 'usefield'
is set to 'tryall', the function will choose one of the contained
attributes by checking availability in the order: label_store, symbol, description | [
"Get",
"the",
"label",
"store",
"values",
"that",
"may",
"be",
"used",
"for",
"writing",
"this",
"annotation",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L463-L527 | train | 216,209 |
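The "unused store values" step of `get_available_label_stores` boils down to set arithmetic plus a table lookup. A sketch with a toy stand-in for `ann_label_table` (hypothetical values); `Series.isin` is the vectorized membership test the original's `in` expression appears to intend:

```python
import pandas as pd

# Toy stand-in for ann_label_table.
label_table = pd.DataFrame({'label_store': [1, 2, 3],
                            'symbol': ['N', 'L', 'R']})

contained_symbols = {'N'}  # symbols already used in the annotation
unused_symbols = set(label_table['symbol']) - contained_symbols
unused_stores = label_table.loc[label_table['symbol'].isin(unused_symbols),
                                'label_store'].values
```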
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.get_custom_label_attribute | def get_custom_label_attribute(self, attribute):
"""
Get a list of the custom_labels attribute.
ie. label_store, symbol, or description.
The custom_labels variable could be in
a number of formats
"""
if attribute not in ann_label_fields:
raise ValueError('Invalid attribute specified')
if isinstance(self.custom_labels, pd.DataFrame):
if 'label_store' not in list(self.custom_labels):
raise ValueError('label_store not defined in custom_labels')
a = list(self.custom_labels[attribute].values)
else:
if len(self.custom_labels[0]) == 2:
if attribute == 'label_store':
raise ValueError('label_store not defined in custom_labels')
elif attribute == 'symbol':
a = [l[0] for l in self.custom_labels]
elif attribute == 'description':
a = [l[1] for l in self.custom_labels]
else:
if attribute == 'label_store':
a = [l[0] for l in self.custom_labels]
elif attribute == 'symbol':
a = [l[1] for l in self.custom_labels]
elif attribute == 'description':
a = [l[2] for l in self.custom_labels]
return a | python | def get_custom_label_attribute(self, attribute):
"""
Get a list of the custom_labels attribute.
ie. label_store, symbol, or description.
The custom_labels variable could be in
a number of formats
"""
if attribute not in ann_label_fields:
raise ValueError('Invalid attribute specified')
if isinstance(self.custom_labels, pd.DataFrame):
if 'label_store' not in list(self.custom_labels):
raise ValueError('label_store not defined in custom_labels')
a = list(self.custom_labels[attribute].values)
else:
if len(self.custom_labels[0]) == 2:
if attribute == 'label_store':
raise ValueError('label_store not defined in custom_labels')
elif attribute == 'symbol':
a = [l[0] for l in self.custom_labels]
elif attribute == 'description':
a = [l[1] for l in self.custom_labels]
else:
if attribute == 'label_store':
a = [l[0] for l in self.custom_labels]
elif attribute == 'symbol':
a = [l[1] for l in self.custom_labels]
elif attribute == 'description':
a = [l[2] for l in self.custom_labels]
return a | [
"def",
"get_custom_label_attribute",
"(",
"self",
",",
"attribute",
")",
":",
"if",
"attribute",
"not",
"in",
"ann_label_fields",
":",
"raise",
"ValueError",
"(",
"'Invalid attribute specified'",
")",
"if",
"isinstance",
"(",
"self",
".",
"custom_labels",
",",
"pd... | Get a list of the custom_labels attribute.
ie. label_store, symbol, or description.
The custom_labels variable could be in
a number of formats | [
"Get",
"a",
"list",
"of",
"the",
"custom_labels",
"attribute",
".",
"ie",
".",
"label_store",
"symbol",
"or",
"description",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L530-L562 | train | 216,210 |
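`get_custom_label_attribute` dispatches on whether the custom labels are pairs or triplets. The tuple-list branch can be sketched as a positional lookup (this ignores the DataFrame branch):

```python
def get_custom_label_attribute(custom_labels, attribute):
    # Positional extraction from (label_store, symbol, description) triplets,
    # or (symbol, description) pairs when label_store is absent.
    if len(custom_labels[0]) == 2:
        positions = {'symbol': 0, 'description': 1}
    else:
        positions = {'label_store': 0, 'symbol': 1, 'description': 2}
    if attribute not in positions:
        raise ValueError('label_store not defined in custom_labels')
    return [row[positions[attribute]] for row in custom_labels]

triplets = [(40, '@', 'Custom beat'), (41, '#', 'Artifact run')]
symbols = get_custom_label_attribute(triplets, 'symbol')
```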
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.create_label_map | def create_label_map(self, inplace=True):
"""
Creates mapping df based on ann_label_table and self.custom_labels.
Table composed of entire WFDB standard annotation table, overwritten/appended
with custom_labels if any. Sets __label_map__ attribute, or returns value.
"""
label_map = ann_label_table.copy()
if self.custom_labels is not None:
self.standardize_custom_labels()
for i in self.custom_labels.index:
label_map.loc[i] = self.custom_labels.loc[i]
if inplace:
self.__label_map__ = label_map
else:
return label_map | python | def create_label_map(self, inplace=True):
"""
Creates mapping df based on ann_label_table and self.custom_labels.
Table composed of entire WFDB standard annotation table, overwritten/appended
with custom_labels if any. Sets __label_map__ attribute, or returns value.
"""
label_map = ann_label_table.copy()
if self.custom_labels is not None:
self.standardize_custom_labels()
for i in self.custom_labels.index:
label_map.loc[i] = self.custom_labels.loc[i]
if inplace:
self.__label_map__ = label_map
else:
return label_map | [
"def",
"create_label_map",
"(",
"self",
",",
"inplace",
"=",
"True",
")",
":",
"label_map",
"=",
"ann_label_table",
".",
"copy",
"(",
")",
"if",
"self",
".",
"custom_labels",
"is",
"not",
"None",
":",
"self",
".",
"standardize_custom_labels",
"(",
")",
"fo... | Creates mapping df based on ann_label_table and self.custom_labels.
Table composed of entire WFDB standard annotation table, overwritten/appended
with custom_labels if any. Sets __label_map__ attribute, or returns value. | [
"Creates",
"mapping",
"df",
"based",
"on",
"ann_label_table",
"and",
"self",
".",
"custom_labels",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L565-L583 | train | 216,211 |
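The merge inside `create_label_map` overwrites (or appends) rows of the standard table by index. The same row-wise `loc` pattern on toy data:

```python
import pandas as pd

# Toy standard table, indexed by label_store.
base = pd.DataFrame({'label_store': [1, 2], 'symbol': ['N', 'V']},
                    index=[1, 2])
# Custom labels: index 2 overwrites a standard row, index 40 is appended.
custom = pd.DataFrame({'label_store': [2, 40], 'symbol': ['X', '@']},
                      index=[2, 40])

label_map = base.copy()
for i in custom.index:
    label_map.loc[i] = custom.loc[i]
```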
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.wr_ann_file | def wr_ann_file(self, write_fs, write_dir=''):
"""
Calculate the bytes used to encode an annotation set and
write them to an annotation file
"""
# Calculate the fs bytes to write if present and desired to write
if write_fs:
fs_bytes = self.calc_fs_bytes()
else:
fs_bytes = []
# Calculate the custom_labels bytes to write if present
cl_bytes = self.calc_cl_bytes()
# Calculate the core field bytes to write
core_bytes = self.calc_core_bytes()
# Mark the end of the special annotation types if needed
if fs_bytes == [] and cl_bytes == []:
end_special_bytes = []
else:
end_special_bytes = [0, 236, 255, 255, 255, 255, 1, 0]
# Write the file
with open(os.path.join(write_dir, self.record_name+'.'+self.extension),
'wb') as f:
# Combine all bytes to write: fs (if any), custom annotations (if any), main content, file terminator
np.concatenate((fs_bytes, cl_bytes, end_special_bytes, core_bytes,
np.array([0,0]))).astype('u1').tofile(f)
return | python | def wr_ann_file(self, write_fs, write_dir=''):
"""
Calculate the bytes used to encode an annotation set and
write them to an annotation file
"""
# Calculate the fs bytes to write if present and desired to write
if write_fs:
fs_bytes = self.calc_fs_bytes()
else:
fs_bytes = []
# Calculate the custom_labels bytes to write if present
cl_bytes = self.calc_cl_bytes()
# Calculate the core field bytes to write
core_bytes = self.calc_core_bytes()
# Mark the end of the special annotation types if needed
if fs_bytes == [] and cl_bytes == []:
end_special_bytes = []
else:
end_special_bytes = [0, 236, 255, 255, 255, 255, 1, 0]
# Write the file
with open(os.path.join(write_dir, self.record_name+'.'+self.extension),
'wb') as f:
# Combine all bytes to write: fs (if any), custom annotations (if any), main content, file terminator
np.concatenate((fs_bytes, cl_bytes, end_special_bytes, core_bytes,
np.array([0,0]))).astype('u1').tofile(f)
return | [
"def",
"wr_ann_file",
"(",
"self",
",",
"write_fs",
",",
"write_dir",
"=",
"''",
")",
":",
"# Calculate the fs bytes to write if present and desired to write",
"if",
"write_fs",
":",
"fs_bytes",
"=",
"self",
".",
"calc_fs_bytes",
"(",
")",
"else",
":",
"fs_bytes",
... | Calculate the bytes used to encode an annotation set and
write them to an annotation file | [
"Calculate",
"the",
"bytes",
"used",
"to",
"encode",
"an",
"annotation",
"set",
"and",
"write",
"them",
"to",
"an",
"annotation",
"file"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L586-L615 | train | 216,212 |
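`wr_ann_file` concatenates the optional byte sections, the core bytes, and a two-byte file terminator, then writes the whole array with `tofile`. A round-trip sketch with hypothetical core bytes (no fs or custom-label sections, so no special-section terminator is emitted):

```python
import os
import tempfile
import numpy as np

fs_bytes = []            # no fs block in this sketch
cl_bytes = []            # no custom-label block either
core_bytes = [7, 1, 0]   # hypothetical encoded annotation bytes

# The special-section terminator is only emitted when a special section exists.
end_special_bytes = ([] if (fs_bytes == [] and cl_bytes == [])
                     else [0, 236, 255, 255, 255, 255, 1, 0])

data = np.concatenate((fs_bytes, cl_bytes, end_special_bytes, core_bytes,
                       np.array([0, 0]))).astype('u1')

fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    data.tofile(f)
raw = np.fromfile(path, dtype='u1')
os.remove(path)
```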
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.calc_core_bytes | def calc_core_bytes(self):
"""
Convert all used annotation fields into bytes to write
"""
# The difference sample to write
if len(self.sample) == 1:
sampdiff = np.array([self.sample[0]])
else:
sampdiff = np.concatenate(([self.sample[0]], np.diff(self.sample)))
# Create a copy of the annotation object with a
# compact version of fields to write
compact_annotation = copy.deepcopy(self)
compact_annotation.compact_fields()
# The optional fields to be written. Write if they are not None or all empty
extra_write_fields = []
for field in ['num', 'subtype', 'chan', 'aux_note']:
if not isblank(getattr(compact_annotation, field)):
extra_write_fields.append(field)
data_bytes = []
# Iterate across all fields one index at a time
for i in range(len(sampdiff)):
# Process the samp (difference) and sym items
data_bytes.append(field2bytes('samptype', [sampdiff[i], self.symbol[i]]))
# Process the extra optional fields
for field in extra_write_fields:
value = getattr(compact_annotation, field)[i]
if value is not None:
data_bytes.append(field2bytes(field, value))
# Flatten and convert to correct format
data_bytes = np.array([item for sublist in data_bytes for item in sublist]).astype('u1')
return data_bytes | python | def calc_core_bytes(self):
"""
Convert all used annotation fields into bytes to write
"""
# The difference sample to write
if len(self.sample) == 1:
sampdiff = np.array([self.sample[0]])
else:
sampdiff = np.concatenate(([self.sample[0]], np.diff(self.sample)))
# Create a copy of the annotation object with a
# compact version of fields to write
compact_annotation = copy.deepcopy(self)
compact_annotation.compact_fields()
# The optional fields to be written. Write if they are not None or all empty
extra_write_fields = []
for field in ['num', 'subtype', 'chan', 'aux_note']:
if not isblank(getattr(compact_annotation, field)):
extra_write_fields.append(field)
data_bytes = []
# Iterate across all fields one index at a time
for i in range(len(sampdiff)):
# Process the samp (difference) and sym items
data_bytes.append(field2bytes('samptype', [sampdiff[i], self.symbol[i]]))
# Process the extra optional fields
for field in extra_write_fields:
value = getattr(compact_annotation, field)[i]
if value is not None:
data_bytes.append(field2bytes(field, value))
# Flatten and convert to correct format
data_bytes = np.array([item for sublist in data_bytes for item in sublist]).astype('u1')
return data_bytes | [
"def",
"calc_core_bytes",
"(",
"self",
")",
":",
"# The difference sample to write",
"if",
"len",
"(",
"self",
".",
"sample",
")",
"==",
"1",
":",
"sampdiff",
"=",
"np",
".",
"array",
"(",
"[",
"self",
".",
"sample",
"[",
"0",
"]",
"]",
")",
"else",
... | Convert all used annotation fields into bytes to write | [
"Convert",
"all",
"used",
"annotation",
"fields",
"into",
"bytes",
"to",
"write"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L676-L716 | train | 216,213 |
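The key transform in `calc_core_bytes` is delta-encoding the sample locations: the first absolute sample followed by successive differences, which a reader inverts with a cumulative sum:

```python
import numpy as np

# Annotation locations are stored as differences from the previous sample.
sample = np.array([3, 10, 10, 25])
sampdiff = np.concatenate(([sample[0]], np.diff(sample)))
# A reader recovers the absolute locations with a cumulative sum.
recovered = np.cumsum(sampdiff)
```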
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.get_contained_labels | def get_contained_labels(self, inplace=True):
"""
Get the set of unique labels contained in this annotation.
Returns a pandas dataframe or sets the contained_labels
attribute of the object.
Requires the label_store field to be set.
Function will try to use attributes contained in the order:
1. label_store
2. symbol
3. description
This function should also be called
to summarize information about an
annotation after it has been
read. Should not be a helper function
to others except rdann.
"""
if self.custom_labels is not None:
self.check_field('custom_labels')
# Create the label map
label_map = ann_label_table.copy()
# Convert the tuple triplets into a pandas dataframe if needed
if isinstance(self.custom_labels, (list, tuple)):
custom_labels = label_triplets_to_df(self.custom_labels)
elif isinstance(self.custom_labels, pd.DataFrame):
# Set the index just in case it doesn't already match the label_store
self.custom_labels.set_index(
self.custom_labels['label_store'].values, inplace=True)
custom_labels = self.custom_labels
else:
custom_labels = None
# Merge the standard wfdb labels with the custom labels.
# custom labels values overwrite standard wfdb if overlap.
if custom_labels is not None:
for i in custom_labels.index:
label_map.loc[i] = custom_labels.loc[i]
# This doesn't work...
# label_map.loc[custom_labels.index] = custom_labels.loc[custom_labels.index]
# Get the labels using one of the features
if self.label_store is not None:
index_vals = set(self.label_store)
reset_index = False
counts = np.unique(self.label_store, return_counts=True)
elif self.symbol is not None:
index_vals = set(self.symbol)
label_map.set_index(label_map['symbol'].values, inplace=True)
reset_index = True
counts = np.unique(self.symbol, return_counts=True)
elif self.description is not None:
index_vals = set(self.description)
label_map.set_index(label_map['description'].values, inplace=True)
reset_index = True
counts = np.unique(self.description, return_counts=True)
else:
raise Exception('No annotation labels contained in object')
contained_labels = label_map.loc[index_vals, :]
# Add the counts
for i in range(len(counts[0])):
contained_labels.loc[counts[0][i], 'n_occurrences'] = counts[1][i]
contained_labels['n_occurrences'] = pd.to_numeric(contained_labels['n_occurrences'], downcast='integer')
if reset_index:
contained_labels.set_index(contained_labels['label_store'].values,
inplace=True)
if inplace:
self.contained_labels = contained_labels
return
else:
return contained_labels | python | def get_contained_labels(self, inplace=True):
"""
Get the set of unique labels contained in this annotation.
Returns a pandas dataframe or sets the contained_labels
attribute of the object.
Requires the label_store field to be set.
Function will try to use attributes contained in the order:
1. label_store
2. symbol
3. description
This function should also be called
to summarize information about an
annotation after it has been
read. Should not be a helper function
to others except rdann.
"""
if self.custom_labels is not None:
self.check_field('custom_labels')
# Create the label map
label_map = ann_label_table.copy()
# Convert the tuple triplets into a pandas dataframe if needed
if isinstance(self.custom_labels, (list, tuple)):
custom_labels = label_triplets_to_df(self.custom_labels)
elif isinstance(self.custom_labels, pd.DataFrame):
# Set the index just in case it doesn't already match the label_store
self.custom_labels.set_index(
self.custom_labels['label_store'].values, inplace=True)
custom_labels = self.custom_labels
else:
custom_labels = None
# Merge the standard wfdb labels with the custom labels.
# custom labels values overwrite standard wfdb if overlap.
if custom_labels is not None:
for i in custom_labels.index:
label_map.loc[i] = custom_labels.loc[i]
# This doesn't work...
# label_map.loc[custom_labels.index] = custom_labels.loc[custom_labels.index]
# Get the labels using one of the features
if self.label_store is not None:
index_vals = set(self.label_store)
reset_index = False
counts = np.unique(self.label_store, return_counts=True)
elif self.symbol is not None:
index_vals = set(self.symbol)
label_map.set_index(label_map['symbol'].values, inplace=True)
reset_index = True
counts = np.unique(self.symbol, return_counts=True)
elif self.description is not None:
index_vals = set(self.description)
label_map.set_index(label_map['description'].values, inplace=True)
reset_index = True
counts = np.unique(self.description, return_counts=True)
else:
raise Exception('No annotation labels contained in object')
contained_labels = label_map.loc[index_vals, :]
# Add the counts
for i in range(len(counts[0])):
contained_labels.loc[counts[0][i], 'n_occurrences'] = counts[1][i]
contained_labels['n_occurrences'] = pd.to_numeric(contained_labels['n_occurrences'], downcast='integer')
if reset_index:
contained_labels.set_index(contained_labels['label_store'].values,
inplace=True)
if inplace:
self.contained_labels = contained_labels
return
else:
return contained_labels | [
"def",
"get_contained_labels",
"(",
"self",
",",
"inplace",
"=",
"True",
")",
":",
"if",
"self",
".",
"custom_labels",
"is",
"not",
"None",
":",
"self",
".",
"check_field",
"(",
"'custom_labels'",
")",
"# Create the label map",
"label_map",
"=",
"ann_label_table... | Get the set of unique labels contained in this annotation.
Returns a pandas dataframe or sets the contained_labels
attribute of the object.
Requires the label_store field to be set.
Function will try to use attributes contained in the order:
1. label_store
2. symbol
3. description
This function should also be called
to summarize information about an
annotation after it has been
read. Should not be a helper function
to others except rdann. | [
"Get",
"the",
"set",
"of",
"unique",
"labels",
"contained",
"in",
"this",
"annotation",
".",
"Returns",
"a",
"pandas",
"dataframe",
"or",
"sets",
"the",
"contained_labels",
"attribute",
"of",
"the",
"object",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L781-L859 | train | 216,214 |
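The occurrence counting in `get_contained_labels` rests on `np.unique(..., return_counts=True)`. A minimal sketch of that step on a toy symbol list:

```python
import numpy as np

symbols = ['N', 'N', 'V', 'N', '+']
values, counts = np.unique(symbols, return_counts=True)
occurrences = dict(zip(values.tolist(), counts.tolist()))
```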
MIT-LCP/wfdb-python | wfdb/io/annotation.py | Annotation.set_label_elements | def set_label_elements(self, wanted_label_elements):
"""
Set one or more label elements based on
at least one of the others
"""
if isinstance(wanted_label_elements, str):
wanted_label_elements = [wanted_label_elements]
# Figure out which desired label elements are missing
missing_elements = [e for e in wanted_label_elements if getattr(self, e) is None]
contained_elements = [e for e in ann_label_fields if getattr(self, e) is not None]
if not contained_elements:
raise Exception('No annotation labels contained in object')
for e in missing_elements:
self.convert_label_attribute(contained_elements[0], e)
unwanted_label_elements = list(set(ann_label_fields)
- set(wanted_label_elements))
self.rm_attributes(unwanted_label_elements)
return | python | def set_label_elements(self, wanted_label_elements):
"""
Set one or more label elements based on
at least one of the others
"""
if isinstance(wanted_label_elements, str):
wanted_label_elements = [wanted_label_elements]
# Figure out which desired label elements are missing
missing_elements = [e for e in wanted_label_elements if getattr(self, e) is None]
contained_elements = [e for e in ann_label_fields if getattr(self, e) is not None]
if not contained_elements:
raise Exception('No annotation labels contained in object')
for e in missing_elements:
self.convert_label_attribute(contained_elements[0], e)
unwanted_label_elements = list(set(ann_label_fields)
- set(wanted_label_elements))
self.rm_attributes(unwanted_label_elements)
return | [
"def",
"set_label_elements",
"(",
"self",
",",
"wanted_label_elements",
")",
":",
"if",
"isinstance",
"(",
"wanted_label_elements",
",",
"str",
")",
":",
"wanted_label_elements",
"=",
"[",
"wanted_label_elements",
"]",
"# Figure out which desired label elements are missing"... | Set one or more label elements based on
at least one of the others | [
"Set",
"one",
"or",
"more",
"label",
"elements",
"based",
"on",
"at",
"least",
"one",
"of",
"the",
"others"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L861-L885 | train | 216,215 |
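`set_label_elements` derives missing label attributes from whichever one is present. A sketch of the missing/contained bookkeeping, with a hypothetical symbol-to-store mapping standing in for `convert_label_attribute` (only symbol to label_store is implemented here):

```python
from types import SimpleNamespace

# Assumed ordering of wfdb's ann_label_fields.
ANN_LABEL_FIELDS = ('label_store', 'symbol', 'description')
# Hypothetical symbol -> label_store mapping for the conversion step.
SYMBOL_TO_STORE = {'N': 1, 'V': 5}

def set_label_elements(obj, wanted):
    if isinstance(wanted, str):
        wanted = [wanted]
    contained = [e for e in ANN_LABEL_FIELDS
                 if getattr(obj, e, None) is not None]
    if not contained:
        raise ValueError('No annotation labels contained in object')
    # Derive each missing wanted element from the first contained one.
    for e in wanted:
        if (getattr(obj, e, None) is None and contained[0] == 'symbol'
                and e == 'label_store'):
            obj.label_store = [SYMBOL_TO_STORE[s] for s in obj.symbol]
    return obj

ann = SimpleNamespace(label_store=None, symbol=['N', 'V', 'N'],
                      description=None)
set_label_elements(ann, 'label_store')
```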
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _dat_read_params | def _dat_read_params(fmt, sig_len, byte_offset, skew, tsamps_per_frame,
sampfrom, sampto):
"""
Calculate the parameters used to read and process a dat file, given
its layout, and the desired sample range.
Parameters
----------
fmt : str
The format of the dat file
sig_len : int
The signal length (per channel) of the dat file
byte_offset : int
The byte offset of the dat file
skew : list
The skew for the signals of the dat file
tsamps_per_frame : int
The total samples/frame for all channels of the dat file
sampfrom : int
The starting sample number to be read from the signals
sampto : int
The final sample number to be read from the signals
Returns
-------
start_byte : int
The starting byte to read the dat file from. Always points to
the start of a byte block for special formats.
n_read_samples : int
The number of flat samples to read from the dat file.
block_floor_samples : int
The extra samples read prior to the first desired sample, for
special formats, in order to ensure entire byte blocks are read.
extra_flat_samples : int
The extra samples desired beyond what is contained in the file.
nan_replace : list
The number of samples to replace with nan at the end of each
signal, due to skew wanting samples beyond the file.
Examples
--------
sig_len=100, t = 4 (total samples/frame), skew = [0, 2, 4, 5]
sampfrom=0, sampto=100 --> read_len = 100, n_sampread = 100*t, extralen = 5, nan_replace = [0, 2, 4, 5]
sampfrom=50, sampto=100 --> read_len = 50, n_sampread = 50*t, extralen = 5, nan_replace = [0, 2, 4, 5]
sampfrom=0, sampto=50 --> read_len = 50, n_sampread = 55*t, extralen = 0, nan_replace = [0, 0, 0, 0]
sampfrom=95, sampto=99 --> read_len = 4, n_sampread = 5*t, extralen = 4, nan_replace = [0, 1, 3, 4]
"""
# First flat sample number to read (if all channels were flattened)
start_flat_sample = sampfrom * tsamps_per_frame
# Calculate the last flat sample number to read.
# Cannot exceed sig_len * tsamps_per_frame, the number of samples
# stored in the file. If extra 'samples' are desired by the skew,
# keep track.
# Where was the -sampfrom derived from? Why was it in the formula?
if (sampto + max(skew)) > sig_len:
end_flat_sample = sig_len * tsamps_per_frame
extra_flat_samples = (sampto + max(skew) - sig_len) * tsamps_per_frame
else:
end_flat_sample = (sampto + max(skew)) * tsamps_per_frame
extra_flat_samples = 0
# Adjust the starting sample number to read from start of blocks for special fmts.
# Keep track of how many preceding samples are read, to be discarded later.
if fmt == '212':
# Samples come in groups of 2, in 3 byte blocks
block_floor_samples = start_flat_sample % 2
start_flat_sample = start_flat_sample - block_floor_samples
elif fmt in ['310', '311']:
# Samples come in groups of 3, in 4 byte blocks
block_floor_samples = start_flat_sample % 3
start_flat_sample = start_flat_sample - block_floor_samples
else:
block_floor_samples = 0
# The starting byte to read from
start_byte = byte_offset + int(start_flat_sample * BYTES_PER_SAMPLE[fmt])
# The number of samples to read
n_read_samples = end_flat_sample - start_flat_sample
# The number of samples to replace with nan at the end of each signal
# due to skew wanting samples beyond the file
nan_replace = [max(0, sampto + s - sig_len) for s in skew]
return (start_byte, n_read_samples, block_floor_samples,
extra_flat_samples, nan_replace) | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1035-L1123 | train | 216,216
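The `nan_replace` computation at the end of `_dat_read_params` can be checked against the docstring examples with a standalone sketch (not part of wfdb):

```python
def nan_replace_counts(sampto, sig_len, skew):
    # Samples per channel that the skew pushes past the end of the file;
    # these are later filled with the format's NaN sentinel.
    return [max(0, sampto + s - sig_len) for s in skew]

# Matches the docstring cases: sig_len=100, skew=[0, 2, 4, 5]
print(nan_replace_counts(100, 100, [0, 2, 4, 5]))  # -> [0, 2, 4, 5]
print(nan_replace_counts(50, 100, [0, 2, 4, 5]))   # -> [0, 0, 0, 0]
print(nan_replace_counts(99, 100, [0, 2, 4, 5]))   # -> [0, 1, 3, 4]
```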
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _required_byte_num | def _required_byte_num(mode, fmt, n_samp):
"""
Determine how many signal bytes are needed to read or write a
number of desired samples from a dat file.
Parameters
----------
mode : str
Whether the file is to be read or written: 'read' or 'write'.
fmt : str
The wfdb dat format.
n_samp : int
The number of samples wanted.
Returns
-------
n_bytes : int
The number of bytes required to read or write the file.
Notes
-----
Read and write require the same number in most cases. An exception
is fmt 311 for n_extra==2.
"""
if fmt == '212':
n_bytes = math.ceil(n_samp*1.5)
elif fmt in ['310', '311']:
n_extra = n_samp % 3
if n_extra == 2:
if fmt == '310':
n_bytes = upround(n_samp * 4/3, 4)
# 311
else:
if mode == 'read':
n_bytes = math.ceil(n_samp * 4/3)
# Have to write more bytes for wfdb c to work
else:
n_bytes = upround(n_samp * 4/3, 4)
# 0 or 1
else:
n_bytes = math.ceil(n_samp * 4/3)
else:
n_bytes = n_samp * BYTES_PER_SAMPLE[fmt]
return int(n_bytes) | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1126-L1173 | train | 216,217
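The fmt '212' packing rule (two 12-bit samples per three bytes) is easy to verify with a minimal standalone sketch; `bytes_per_sample` here stands in for wfdb's `BYTES_PER_SAMPLE` table:

```python
import math

def required_bytes(fmt, n_samp, bytes_per_sample=None):
    # Simplified byte count: fmt '212' packs two 12-bit samples into
    # 3 bytes; byte-aligned formats need n_samp * bytes_per_sample.
    if fmt == '212':
        return int(math.ceil(n_samp * 1.5))
    return n_samp * bytes_per_sample

print(required_bytes('212', 2))      # -> 3 (one full 3-byte block)
print(required_bytes('212', 3))      # -> 5 (second block half-used)
print(required_bytes('16', 10, 2))   # -> 20
```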
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _rd_dat_file | def _rd_dat_file(file_name, dir_name, pb_dir, fmt, start_byte, n_samp):
"""
Read data from a dat file, either local or remote, into a 1d numpy
array.
This is the lowest level dat reading function (along with
`_stream_dat` which this function may call), and is called by
`_rd_dat_signals`.
Parameters
----------
start_byte : int
The starting byte number to read from.
n_samp : int
The total number of samples to read. Does NOT need to create
whole blocks for special format. Any number of samples should be
readable.
* other params
See docstring for `_rd_dat_signals`
Returns
-------
sig_data : numpy array
The data read from the dat file. The dtype varies depending on
fmt. Byte aligned fmts are read in their final required format.
Unaligned formats are read as uint8 to be further processed.
Notes
-----
See docstring notes for `_rd_dat_signals`
"""
# element_count is the number of elements to read using np.fromfile
# for local files
# byte_count is the number of bytes to read for streaming files
if fmt == '212':
byte_count = _required_byte_num('read', '212', n_samp)
element_count = byte_count
elif fmt in ['310', '311']:
byte_count = _required_byte_num('read', fmt, n_samp)
element_count = byte_count
else:
element_count = n_samp
byte_count = n_samp * BYTES_PER_SAMPLE[fmt]
# Local dat file
if pb_dir is None:
with open(os.path.join(dir_name, file_name), 'rb') as fp:
fp.seek(start_byte)
sig_data = np.fromfile(fp, dtype=np.dtype(DATA_LOAD_TYPES[fmt]),
count=element_count)
# Stream dat file from physiobank
else:
sig_data = download._stream_dat(file_name, pb_dir, byte_count,
start_byte,
np.dtype(DATA_LOAD_TYPES[fmt]))
return sig_data | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1176-L1234 | train | 216,218
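The local-file branch boils down to a seek plus `np.fromfile`; a self-contained sketch using a temporary file (values illustrative, fmt '16' maps to little-endian int16, 2 bytes per sample):

```python
import numpy as np
import os
import tempfile

# Write ten int16 samples, then read three of them starting at a byte
# offset, as in the local branch of _rd_dat_file.
data = np.arange(10, dtype='<i2')
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data.tobytes())
    path = f.name

start_byte, n_samp = 4, 3  # byte 4 == sample index 2
with open(path, 'rb') as fp:
    fp.seek(start_byte)
    sig_data = np.fromfile(fp, dtype=np.dtype('<i2'), count=n_samp)
os.remove(path)
print(sig_data)  # -> [2 3 4]
```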
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _skew_sig | def _skew_sig(sig, skew, n_sig, read_len, fmt, nan_replace, samps_per_frame=None):
"""
Skew the signal, insert nans and shave off end of array if needed.
Parameters
----------
sig : numpy array
The original signal
skew : list
List of samples to skew for each signal
n_sig : int
The number of signals
Notes
-----
`fmt` is just for the correct nan value.
`samps_per_frame` is only used for skewing expanded signals.
"""
if max(skew)>0:
# Expanded frame samples. List of arrays.
if isinstance(sig, list):
# Shift the channel samples
for ch in range(n_sig):
if skew[ch]>0:
sig[ch][:read_len*samps_per_frame[ch]] = sig[ch][skew[ch]*samps_per_frame[ch]:]
# Shave off the extra signal length at the end
for ch in range(n_sig):
sig[ch] = sig[ch][:read_len*samps_per_frame[ch]]
# Insert nans where skewed signal overran dat file
for ch in range(n_sig):
if nan_replace[ch]>0:
sig[ch][-nan_replace[ch]:] = _digi_nan(fmt)
# Uniform array
else:
# Shift the channel samples
for ch in range(n_sig):
if skew[ch]>0:
sig[:read_len, ch] = sig[skew[ch]:, ch]
# Shave off the extra signal length at the end
sig = sig[:read_len, :]
# Insert nans where skewed signal overran dat file
for ch in range(n_sig):
if nan_replace[ch]>0:
sig[-nan_replace[ch]:, ch] = _digi_nan(fmt)
return sig | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1343-L1394 | train | 216,219
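The uniform-array branch of `_skew_sig` shifts each skewed channel forward and then truncates to `read_len`; a tiny numpy illustration (sample values are made up):

```python
import numpy as np

# Channel 1 is skewed by 2 samples; channel 0 is not. Two extra frames
# were read beyond read_len to cover the skew.
sig = np.array([[10, 20], [11, 21], [12, 22], [13, 23]])
skew, read_len, n_sig = [0, 2], 2, 2

for ch in range(n_sig):
    if skew[ch] > 0:
        # Shift the skewed channel's samples toward the start.
        sig[:read_len, ch] = sig[skew[ch]:skew[ch] + read_len, ch]
# Shave off the extra frames at the end.
sig = sig[:read_len, :]
print(sig.tolist())  # -> [[10, 22], [11, 23]]
```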
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _check_sig_dims | def _check_sig_dims(sig, read_len, n_sig, samps_per_frame):
"""
Integrity check of a signal's shape after reading.
"""
if isinstance(sig, np.ndarray):
if sig.shape != (read_len, n_sig):
raise ValueError('Samples were not loaded correctly')
else:
if len(sig) != n_sig:
raise ValueError('Samples were not loaded correctly')
for ch in range(n_sig):
if len(sig[ch]) != samps_per_frame[ch] * read_len:
raise ValueError('Samples were not loaded correctly') | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1397-L1410 | train | 216,220
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _digi_bounds | def _digi_bounds(fmt):
"""
Return min and max digital values for each format type.
Accepts lists.
Parameters
----------
fmt : str, or list
The wfdb dat format, or a list of them.
"""
if isinstance(fmt, list):
return [_digi_bounds(f) for f in fmt]
if fmt == '80':
return (-128, 127)
elif fmt == '212':
return (-2048, 2047)
elif fmt == '16':
return (-32768, 32767)
elif fmt == '24':
return (-8388608, 8388607)
elif fmt == '32':
return (-2147483648, 2147483647) | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1416-L1439 | train | 216,221
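Each pair returned by `_digi_bounds` is the two's-complement range for the format's bit width (8, 12, 16, 24, or 32 bits), which a one-line helper reproduces (a sketch, not a wfdb API):

```python
def twos_complement_bounds(bit_res):
    # (min, max) for a signed bit_res-bit integer.
    return (-(2 ** (bit_res - 1)), 2 ** (bit_res - 1) - 1)

# fmt '80' -> 8 bits, '212' -> 12, '16' -> 16, '24' -> 24, '32' -> 32
print(twos_complement_bounds(12))  # -> (-2048, 2047)
print(twos_complement_bounds(24))  # -> (-8388608, 8388607)
```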
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _digi_nan | def _digi_nan(fmt):
"""
Return the wfdb digital value used to store nan for the format type.
Parameters
----------
fmt : str, or list
The wfdb dat format, or a list of them.
"""
if isinstance(fmt, list):
return [_digi_nan(f) for f in fmt]
if fmt == '80':
return -128
if fmt == '310':
return -512
if fmt == '311':
return -512
elif fmt == '212':
return -2048
elif fmt == '16':
return -32768
elif fmt == '61':
return -32768
elif fmt == '160':
return -32768
elif fmt == '24':
return -8388608
elif fmt == '32':
return -2147483648 | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1442-L1472 | train | 216,222
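The sentinels in `_digi_nan` are the most negative value of each format's bit width (formats '310'/'311' hold 10-bit samples, hence -512); a sketch:

```python
def digi_nan_for_bits(bit_res):
    # WFDB stores NaN as the most negative representable digital value.
    return -(2 ** (bit_res - 1))

print(digi_nan_for_bits(8))   # -> -128   (fmt '80')
print(digi_nan_for_bits(10))  # -> -512   (fmt '310'/'311')
print(digi_nan_for_bits(12))  # -> -2048  (fmt '212')
print(digi_nan_for_bits(16))  # -> -32768 (fmt '16'/'61'/'160')
```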
MIT-LCP/wfdb-python | wfdb/io/_signal.py | est_res | def est_res(signals):
"""
Estimate the resolution of each signal in a multi-channel signal in
bits. Maximum of 32 bits.
Parameters
----------
signals : numpy array, or list
A 2d numpy array representing a uniform multichannel signal, or
a list of 1d numpy arrays representing multiple channels of
signals with different numbers of samples per frame.
Returns
-------
bit_res : list
A list of estimated integer resolutions for each channel
"""
res_levels = np.power(2, np.arange(0, 33))
# Expanded sample signals. List of numpy arrays
if isinstance(signals, list):
n_sig = len(signals)
# Uniform numpy array
else:
if signals.ndim == 1:
n_sig = 1
else:
n_sig = signals.shape[1]
res = []
for ch in range(n_sig):
# Estimate the number of steps as the range divided by the
# minimum increment.
if isinstance(signals, list):
sorted_sig = np.sort(np.unique(signals[ch]))
else:
if signals.ndim == 1:
sorted_sig = np.sort(np.unique(signals))
else:
sorted_sig = np.sort(np.unique(signals[:, ch]))
min_inc = min(np.diff(sorted_sig))
if min_inc == 0:
# Case where signal is flat. Resolution is 0.
res.append(0)
else:
nlevels = 1 + (sorted_sig[-1]-sorted_sig[0]) / min_inc
if nlevels >= res_levels[-1]:
res.append(32)
else:
res.append(np.where(res_levels>=nlevels)[0][0])
return res | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1475-L1528 | train | 216,223
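The estimation logic of `est_res` for a single channel can be distilled into a short standalone function (a sketch that mirrors, but is not, the wfdb implementation):

```python
import numpy as np

def est_bit_res(x):
    # Levels = 1 + range / smallest step; the resolution is the smallest
    # bit count whose 2**bits covers that many levels, capped at 32.
    levels = np.sort(np.unique(x))
    if len(levels) < 2:
        return 0  # flat signal
    n_levels = 1 + (levels[-1] - levels[0]) / np.min(np.diff(levels))
    res_levels = np.power(2, np.arange(0, 33))
    if n_levels >= res_levels[-1]:
        return 32
    return int(np.where(res_levels >= n_levels)[0][0])

print(est_bit_res(np.arange(8)))               # -> 3 (8 levels)
print(est_bit_res(np.array([0.0, 0.5, 1.0])))  # -> 2 (3 levels)
```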
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _np_dtype | def _np_dtype(bit_res, discrete):
"""
Given the bit resolution of a signal, return the minimum numpy dtype
used to store it.
Parameters
----------
bit_res : int
The bit resolution.
discrete : bool
Whether the dtype is to be int or float.
Returns
-------
dtype : str
String numpy dtype used to store the signal of the given
resolution
"""
bit_res = min(bit_res, 64)
for np_res in [8, 16, 32, 64]:
if bit_res <= np_res:
break
if discrete is True:
return 'int' + str(np_res)
else:
# No float8 dtype
return 'float' + str(max(np_res, 16)) | python
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1599-L1628 | train | 216,224
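A standalone restatement of the dtype-selection rule (a sketch; `next` replaces the loop-and-break):

```python
def np_dtype(bit_res, discrete):
    # Smallest numpy dtype name able to hold bit_res bits.
    bit_res = min(bit_res, 64)
    np_res = next(r for r in (8, 16, 32, 64) if bit_res <= r)
    if discrete:
        return 'int' + str(np_res)
    # There is no float8 dtype, so floats start at 16 bits.
    return 'float' + str(max(np_res, 16))

print(np_dtype(12, True))    # -> 'int16'
print(np_dtype(8, False))    # -> 'float16'
print(np_dtype(100, True))   # -> 'int64' (capped at 64 bits)
```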
MIT-LCP/wfdb-python | wfdb/io/_signal.py | _infer_sig_len | def _infer_sig_len(file_name, fmt, n_sig, dir_name, pb_dir=None):
"""
Infer the length of a signal from a dat file.
Parameters
----------
file_name : str
Name of the dat file
fmt : str
WFDB fmt of the dat file
n_sig : int
Number of signals contained in the dat file
Notes
-----
sig_len * n_sig * bytes_per_sample == file_size
"""
if pb_dir is None:
file_size = os.path.getsize(os.path.join(dir_name, file_name))
else:
file_size = download._remote_file_size(file_name=file_name,
pb_dir=pb_dir)
sig_len = int(file_size / (BYTES_PER_SAMPLE[fmt] * n_sig))
return sig_len | python | def _infer_sig_len(file_name, fmt, n_sig, dir_name, pb_dir=None):
"""
Infer the length of a signal from a dat file.
Parameters
----------
file_name : str
Name of the dat file
fmt : str
WFDB fmt of the dat file
n_sig : int
Number of signals contained in the dat file
Notes
-----
sig_len * n_sig * bytes_per_sample == file_size
"""
if pb_dir is None:
file_size = os.path.getsize(os.path.join(dir_name, file_name))
else:
file_size = download._remote_file_size(file_name=file_name,
pb_dir=pb_dir)
sig_len = int(file_size / (BYTES_PER_SAMPLE[fmt] * n_sig))
return sig_len | [
"def",
"_infer_sig_len",
"(",
"file_name",
",",
"fmt",
",",
"n_sig",
",",
"dir_name",
",",
"pb_dir",
"=",
"None",
")",
":",
"if",
"pb_dir",
"is",
"None",
":",
"file_size",
"=",
"os",
".",
"path",
".",
"getsize",
"(",
"os",
".",
"path",
".",
"join",
... | Infer the length of a signal from a dat file.
Parameters
----------
file_name : str
Name of the dat file
fmt : str
WFDB fmt of the dat file
n_sig : int
Number of signals contained in the dat file
Notes
-----
sig_len * n_sig * bytes_per_sample == file_size | [
"Infer",
"the",
"length",
"of",
"a",
"signal",
"from",
"a",
"dat",
"file",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L1798-L1824 | train | 216,225 |
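The invariant stated in the record's Notes section (`sig_len * n_sig * bytes_per_sample == file_size`) can be inverted directly. A minimal sketch, assuming a small hand-written bytes-per-sample table for a few byte-aligned WFDB formats (the real library's table also covers packed formats such as 212, which are not whole bytes per sample):

```python
# Assumed bytes-per-sample for a few byte-aligned WFDB formats.
BYTES_PER_SAMPLE = {'80': 1, '16': 2, '24': 3, '32': 4}

def infer_sig_len(file_size, fmt, n_sig):
    # sig_len * n_sig * bytes_per_sample == file_size, solved for sig_len.
    return file_size // (BYTES_PER_SAMPLE[fmt] * n_sig)
```

For instance, a two-channel format-16 dat file of 1,000,000 bytes holds 250,000 samples per channel.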
MIT-LCP/wfdb-python | wfdb/io/_signal.py | SignalMixin.adc | def adc(self, expanded=False, inplace=False):
"""
Performs analogue to digital conversion of the physical signal stored
in p_signal if expanded is False, or e_p_signal if expanded is True.
The p_signal/e_p_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the adc will be performed inplace on the variable,
the d_signal/e_d_signal attribute will be set, and the
p_signal/e_p_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_p_signal` attribute (True) or
the `p_signal` attribute (False).
inplace : bool, optional
Whether to automatically set the object's corresponding
digital signal attribute and set the physical
signal attribute to None (True), or to return the converted
signal as a separate variable without changing the original
physical signal attribute (False).
Returns
-------
d_signal : numpy array, optional
The digital conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100')
>>> d_signal = record.adc()
>>> record.adc(inplace=True)
>>> record.dac(inplace=True)
"""
# The digital nan values for each channel
d_nans = _digi_nan(self.fmt)
# To do: choose the minimum return res needed
intdtype = 'int64'
# Do inplace conversion and set relevant variables.
if inplace:
if expanded:
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = np.isnan(self.e_p_signal[ch])
np.multiply(self.e_p_signal[ch], self.adc_gain[ch],
self.e_p_signal[ch])
np.add(self.e_p_signal[ch], self.baseline[ch],
self.e_p_signal[ch])
self.e_p_signal[ch] = self.e_p_signal[ch].astype(intdtype,
copy=False)
self.e_p_signal[ch][ch_nanlocs] = d_nans[ch]
self.e_d_signal = self.e_p_signal
self.e_p_signal = None
else:
nanlocs = np.isnan(self.p_signal)
np.multiply(self.p_signal, self.adc_gain, self.p_signal)
np.add(self.p_signal, self.baseline, self.p_signal)
self.p_signal = self.p_signal.astype(intdtype, copy=False)
self.d_signal = self.p_signal
self.p_signal = None
# Return the variable
else:
if expanded:
d_signal = []
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = np.isnan(self.e_p_signal[ch])
ch_d_signal = self.e_p_signal[ch].copy()
np.multiply(ch_d_signal, self.adc_gain[ch], ch_d_signal)
np.add(ch_d_signal, self.baseline[ch], ch_d_signal)
ch_d_signal = ch_d_signal.astype(intdtype, copy=False)
ch_d_signal[ch_nanlocs] = d_nans[ch]
d_signal.append(ch_d_signal)
else:
nanlocs = np.isnan(self.p_signal)
# Cannot cast dtype to int now because gain is float.
d_signal = self.p_signal.copy()
np.multiply(d_signal, self.adc_gain, d_signal)
np.add(d_signal, self.baseline, d_signal)
d_signal = d_signal.astype(intdtype, copy=False)
if nanlocs.any():
for ch in range(d_signal.shape[1]):
if nanlocs[:,ch].any():
d_signal[nanlocs[:,ch],ch] = d_nans[ch]
return d_signal | python | def adc(self, expanded=False, inplace=False):
"""
Performs analogue to digital conversion of the physical signal stored
in p_signal if expanded is False, or e_p_signal if expanded is True.
The p_signal/e_p_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the adc will be performed inplace on the variable,
the d_signal/e_d_signal attribute will be set, and the
p_signal/e_p_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_p_signal` attribute (True) or
the `p_signal` attribute (False).
inplace : bool, optional
Whether to automatically set the object's corresponding
digital signal attribute and set the physical
signal attribute to None (True), or to return the converted
signal as a separate variable without changing the original
physical signal attribute (False).
Returns
-------
d_signal : numpy array, optional
The digital conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100')
>>> d_signal = record.adc()
>>> record.adc(inplace=True)
>>> record.dac(inplace=True)
"""
# The digital nan values for each channel
d_nans = _digi_nan(self.fmt)
# To do: choose the minimum return res needed
intdtype = 'int64'
# Do inplace conversion and set relevant variables.
if inplace:
if expanded:
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = np.isnan(self.e_p_signal[ch])
np.multiply(self.e_p_signal[ch], self.adc_gain[ch],
self.e_p_signal[ch])
np.add(self.e_p_signal[ch], self.baseline[ch],
self.e_p_signal[ch])
self.e_p_signal[ch] = self.e_p_signal[ch].astype(intdtype,
copy=False)
self.e_p_signal[ch][ch_nanlocs] = d_nans[ch]
self.e_d_signal = self.e_p_signal
self.e_p_signal = None
else:
nanlocs = np.isnan(self.p_signal)
np.multiply(self.p_signal, self.adc_gain, self.p_signal)
np.add(self.p_signal, self.baseline, self.p_signal)
self.p_signal = self.p_signal.astype(intdtype, copy=False)
self.d_signal = self.p_signal
self.p_signal = None
# Return the variable
else:
if expanded:
d_signal = []
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = np.isnan(self.e_p_signal[ch])
ch_d_signal = self.e_p_signal[ch].copy()
np.multiply(ch_d_signal, self.adc_gain[ch], ch_d_signal)
np.add(ch_d_signal, self.baseline[ch], ch_d_signal)
ch_d_signal = ch_d_signal.astype(intdtype, copy=False)
ch_d_signal[ch_nanlocs] = d_nans[ch]
d_signal.append(ch_d_signal)
else:
nanlocs = np.isnan(self.p_signal)
# Cannot cast dtype to int now because gain is float.
d_signal = self.p_signal.copy()
np.multiply(d_signal, self.adc_gain, d_signal)
np.add(d_signal, self.baseline, d_signal)
d_signal = d_signal.astype(intdtype, copy=False)
if nanlocs.any():
for ch in range(d_signal.shape[1]):
if nanlocs[:,ch].any():
d_signal[nanlocs[:,ch],ch] = d_nans[ch]
return d_signal | [
"def",
"adc",
"(",
"self",
",",
"expanded",
"=",
"False",
",",
"inplace",
"=",
"False",
")",
":",
"# The digital nan values for each channel",
"d_nans",
"=",
"_digi_nan",
"(",
"self",
".",
"fmt",
")",
"# To do: choose the minimum return res needed",
"intdtype",
"=",... | Performs analogue to digital conversion of the physical signal stored
in p_signal if expanded is False, or e_p_signal if expanded is True.
The p_signal/e_p_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the adc will be performed inplace on the variable,
the d_signal/e_d_signal attribute will be set, and the
p_signal/e_p_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_p_signal` attribute (True) or
the `p_signal` attribute (False).
inplace : bool, optional
Whether to automatically set the object's corresponding
digital signal attribute and set the physical
signal attribute to None (True), or to return the converted
signal as a separate variable without changing the original
physical signal attribute (False).
Returns
-------
d_signal : numpy array, optional
The digital conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100')
>>> d_signal = record.adc()
>>> record.adc(inplace=True)
>>> record.dac(inplace=True) | [
"Performs",
"analogue",
"to",
"digital",
"conversion",
"of",
"the",
"physical",
"signal",
"stored",
"in",
"p_signal",
"if",
"expanded",
"is",
"False",
"or",
"e_p_signal",
"if",
"expanded",
"is",
"True",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L320-L416 | train | 216,226 |
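The core per-channel mapping used by `adc` — `digital = physical * adc_gain + baseline`, with NaN samples replaced by the format's reserved digital value — can be sketched without numpy. Plain `int()` truncation stands in for the `astype` cast; the gain, baseline and NaN sentinel below are made-up illustrative numbers, not values from any real record.

```python
import math

def adc_channel(p_signal, adc_gain, baseline, d_nan):
    digital = []
    for p in p_signal:
        if math.isnan(p):
            # NaN physical samples map to the reserved digital value.
            digital.append(d_nan)
        else:
            # digital = physical * gain + baseline (truncated like astype).
            digital.append(int(p * adc_gain + baseline))
    return digital
```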
MIT-LCP/wfdb-python | wfdb/io/_signal.py | SignalMixin.dac | def dac(self, expanded=False, return_res=64, inplace=False):
"""
Performs the digital to analogue conversion of the signal stored
in `d_signal` if expanded is False, or `e_d_signal` if expanded
is True.
The d_signal/e_d_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the dac will be performed inplace on the
variable, the p_signal/e_p_signal attribute will be set, and the
d_signal/e_d_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_d_signal` attribute (True) or
the `d_signal` attribute (False).
return_res : int, optional
The numpy resolution of the returned signal. Options are: 64, 32 and 16.
inplace : bool, optional
Whether to automatically set the object's corresponding
physical signal attribute and set the digital signal
attribute to None (True), or to return the converted
signal as a separate variable without changing the original
digital signal attribute (False).
Returns
-------
p_signal : numpy array, optional
The physical conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100', physical=False)
>>> p_signal = record.dac()
>>> record.dac(inplace=True)
>>> record.adc(inplace=True)
"""
# The digital nan values for each channel
d_nans = _digi_nan(self.fmt)
# Get the appropriate float dtype
if return_res == 64:
floatdtype = 'float64'
elif return_res == 32:
floatdtype = 'float32'
else:
floatdtype = 'float16'
# Do inplace conversion and set relevant variables.
if inplace:
if expanded:
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = self.e_d_signal[ch] == d_nans[ch]
self.e_d_signal[ch] = self.e_d_signal[ch].astype(floatdtype, copy=False)
np.subtract(self.e_d_signal[ch], self.baseline[ch], self.e_d_signal[ch])
np.divide(self.e_d_signal[ch], self.adc_gain[ch], self.e_d_signal[ch])
self.e_d_signal[ch][ch_nanlocs] = np.nan
self.e_p_signal = self.e_d_signal
self.e_d_signal = None
else:
nanlocs = self.d_signal == d_nans
# Do float conversion immediately to avoid potential under/overflow
# of efficient int dtype
self.d_signal = self.d_signal.astype(floatdtype, copy=False)
np.subtract(self.d_signal, self.baseline, self.d_signal)
np.divide(self.d_signal, self.adc_gain, self.d_signal)
self.d_signal[nanlocs] = np.nan
self.p_signal = self.d_signal
self.d_signal = None
# Return the variable
else:
if expanded:
p_signal = []
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = self.e_d_signal[ch] == d_nans[ch]
ch_p_signal = self.e_d_signal[ch].astype(floatdtype, copy=False)
np.subtract(ch_p_signal, self.baseline[ch], ch_p_signal)
np.divide(ch_p_signal, self.adc_gain[ch], ch_p_signal)
ch_p_signal[ch_nanlocs] = np.nan
p_signal.append(ch_p_signal)
else:
nanlocs = self.d_signal == d_nans
p_signal = self.d_signal.astype(floatdtype, copy=False)
np.subtract(p_signal, self.baseline, p_signal)
np.divide(p_signal, self.adc_gain, p_signal)
p_signal[nanlocs] = np.nan
return p_signal | python | def dac(self, expanded=False, return_res=64, inplace=False):
"""
Performs the digital to analogue conversion of the signal stored
in `d_signal` if expanded is False, or `e_d_signal` if expanded
is True.
The d_signal/e_d_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the dac will be performed inplace on the
variable, the p_signal/e_p_signal attribute will be set, and the
d_signal/e_d_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_d_signal` attribute (True) or
the `d_signal` attribute (False).
return_res : int, optional
The numpy resolution of the returned signal. Options are: 64, 32 and 16.
inplace : bool, optional
Whether to automatically set the object's corresponding
physical signal attribute and set the digital signal
attribute to None (True), or to return the converted
signal as a separate variable without changing the original
digital signal attribute (False).
Returns
-------
p_signal : numpy array, optional
The physical conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100', physical=False)
>>> p_signal = record.dac()
>>> record.dac(inplace=True)
>>> record.adc(inplace=True)
"""
# The digital nan values for each channel
d_nans = _digi_nan(self.fmt)
# Get the appropriate float dtype
if return_res == 64:
floatdtype = 'float64'
elif return_res == 32:
floatdtype = 'float32'
else:
floatdtype = 'float16'
# Do inplace conversion and set relevant variables.
if inplace:
if expanded:
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = self.e_d_signal[ch] == d_nans[ch]
self.e_d_signal[ch] = self.e_d_signal[ch].astype(floatdtype, copy=False)
np.subtract(self.e_d_signal[ch], self.baseline[ch], self.e_d_signal[ch])
np.divide(self.e_d_signal[ch], self.adc_gain[ch], self.e_d_signal[ch])
self.e_d_signal[ch][ch_nanlocs] = np.nan
self.e_p_signal = self.e_d_signal
self.e_d_signal = None
else:
nanlocs = self.d_signal == d_nans
# Do float conversion immediately to avoid potential under/overflow
# of efficient int dtype
self.d_signal = self.d_signal.astype(floatdtype, copy=False)
np.subtract(self.d_signal, self.baseline, self.d_signal)
np.divide(self.d_signal, self.adc_gain, self.d_signal)
self.d_signal[nanlocs] = np.nan
self.p_signal = self.d_signal
self.d_signal = None
# Return the variable
else:
if expanded:
p_signal = []
for ch in range(self.n_sig):
# nan locations for the channel
ch_nanlocs = self.e_d_signal[ch] == d_nans[ch]
ch_p_signal = self.e_d_signal[ch].astype(floatdtype, copy=False)
np.subtract(ch_p_signal, self.baseline[ch], ch_p_signal)
np.divide(ch_p_signal, self.adc_gain[ch], ch_p_signal)
ch_p_signal[ch_nanlocs] = np.nan
p_signal.append(ch_p_signal)
else:
nanlocs = self.d_signal == d_nans
p_signal = self.d_signal.astype(floatdtype, copy=False)
np.subtract(p_signal, self.baseline, p_signal)
np.divide(p_signal, self.adc_gain, p_signal)
p_signal[nanlocs] = np.nan
return p_signal | [
"def",
"dac",
"(",
"self",
",",
"expanded",
"=",
"False",
",",
"return_res",
"=",
"64",
",",
"inplace",
"=",
"False",
")",
":",
"# The digital nan values for each channel",
"d_nans",
"=",
"_digi_nan",
"(",
"self",
".",
"fmt",
")",
"# Get the appropriate float dt... | Performs the digital to analogue conversion of the signal stored
in `d_signal` if expanded is False, or `e_d_signal` if expanded
is True.
The d_signal/e_d_signal, fmt, gain, and baseline fields must all be
valid.
If inplace is True, the dac will be performed inplace on the
variable, the p_signal/e_p_signal attribute will be set, and the
d_signal/e_d_signal field will be set to None.
Parameters
----------
expanded : bool, optional
Whether to transform the `e_d_signal` attribute (True) or
the `d_signal` attribute (False).
return_res : int, optional
The numpy resolution of the returned signal. Options are: 64, 32 and 16.
inplace : bool, optional
Whether to automatically set the object's corresponding
physical signal attribute and set the digital signal
attribute to None (True), or to return the converted
signal as a separate variable without changing the original
digital signal attribute (False).
Returns
-------
p_signal : numpy array, optional
The physical conversion of the signal. Either a 2d numpy
array or a list of 1d numpy arrays.
Examples
--------
>>> import wfdb
>>> record = wfdb.rdrecord('sample-data/100', physical=False)
>>> p_signal = record.dac()
>>> record.dac(inplace=True)
>>> record.adc(inplace=True) | [
"Performs",
"the",
"digital",
"to",
"analogue",
"conversion",
"of",
"the",
"signal",
"stored",
"in",
"d_signal",
"if",
"expanded",
"is",
"False",
"or",
"e_d_signal",
"if",
"expanded",
"is",
"True",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L419-L513 | train | 216,227 |
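`dac` applies the exact inverse mapping: `physical = (digital - baseline) / adc_gain`, with the reserved digital value restored to NaN. A minimal pure-Python sketch of one channel (the parameter values in the usage below are illustrative):

```python
import math

def dac_channel(d_signal, adc_gain, baseline, d_nan):
    # physical = (digital - baseline) / gain; reserved values become NaN.
    return [float('nan') if d == d_nan else (d - baseline) / adc_gain
            for d in d_signal]
```

Running it over the output of the `adc` sketch above round-trips the non-NaN samples.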
MIT-LCP/wfdb-python | wfdb/io/_signal.py | SignalMixin.calc_adc_params | def calc_adc_params(self):
"""
Compute appropriate adc_gain and baseline parameters for adc
conversion, given the physical signal and the fmts.
Returns
-------
adc_gains : list
List of calculated `adc_gain` values for each channel.
baselines : list
List of calculated `baseline` values for each channel.
Notes
-----
This is the mapping equation:
`(digital - baseline) / adc_gain = physical`
`physical * adc_gain + baseline = digital`
The original WFDB library stores `baseline` as int32.
Constrain abs(adc_gain) <= 2**31 == 2147483648
This function does carefully deal with overflow for calculated
int32 `baseline` values, but does not consider over/underflow
for calculated float `adc_gain` values.
"""
adc_gains = []
baselines = []
if np.where(np.isinf(self.p_signal))[0].size:
raise ValueError('Signal contains inf. Cannot perform adc.')
# min and max ignoring nans, unless whole channel is nan.
# Should suppress warning message.
minvals = np.nanmin(self.p_signal, axis=0)
maxvals = np.nanmax(self.p_signal, axis=0)
for ch in range(np.shape(self.p_signal)[1]):
# Get the minimum and maximum (valid) storage values
dmin, dmax = _digi_bounds(self.fmt[ch])
# add 1 because the lowest value is used to store nans
dmin = dmin + 1
pmin = minvals[ch]
pmax = maxvals[ch]
# Figure out digital samples used to store physical samples
# If the entire signal is nan, gain/baseline won't be used
if np.isnan(pmin):
adc_gain = 1
baseline = 1
# If the signal is just one value, store one digital value.
elif pmin == pmax:
if pmin == 0:
adc_gain = 1
baseline = 1
else:
# All digital values are +1 or -1. Keep adc_gain > 0
adc_gain = abs(1 / pmin)
baseline = 0
# Regular varied signal case.
else:
# The equation is: p = (d - b) / g
# Approximately, pmax maps to dmax, and pmin maps to
# dmin. Gradient will be equal to, or close to
# delta(d) / delta(p), since intercept baseline has
# to be an integer.
# Constraint: baseline must be between +/- 2**31
adc_gain = (dmax-dmin) / (pmax-pmin)
baseline = dmin - adc_gain*pmin
# Make adjustments for baseline to be an integer
# This up/down round logic of baseline is to ensure
# there is no overshoot of dmax. Now pmax will map
# to dmax or dmax-1 which is also fine.
if pmin > 0:
baseline = int(np.ceil(baseline))
else:
baseline = int(np.floor(baseline))
# After baseline is set, adjust gain correspondingly. Set
# the gain to map pmin to dmin, and p==0 to baseline.
# In the case where pmin == 0 and dmin == baseline,
# adc_gain is already correct. Avoid dividing by 0.
if dmin != baseline:
adc_gain = (dmin - baseline) / pmin
# Remap signal if baseline exceeds boundaries.
# This may happen if pmax < 0
if baseline > MAX_I32:
# pmin maps to dmin, baseline maps to 2**31 - 1
# pmax will map to a lower value than before
adc_gain = (MAX_I32 - dmin) / abs(pmin)
baseline = MAX_I32
# This may happen if pmin > 0
elif baseline < MIN_I32:
# pmax maps to dmax, baseline maps to -2**31 + 1
adc_gain = (dmax - MIN_I32) / pmax
baseline = MIN_I32
adc_gains.append(adc_gain)
baselines.append(baseline)
return (adc_gains, baselines) | python | def calc_adc_params(self):
"""
Compute appropriate adc_gain and baseline parameters for adc
conversion, given the physical signal and the fmts.
Returns
-------
adc_gains : list
List of calculated `adc_gain` values for each channel.
baselines : list
List of calculated `baseline` values for each channel.
Notes
-----
This is the mapping equation:
`(digital - baseline) / adc_gain = physical`
`physical * adc_gain + baseline = digital`
The original WFDB library stores `baseline` as int32.
Constrain abs(adc_gain) <= 2**31 == 2147483648
This function does carefully deal with overflow for calculated
int32 `baseline` values, but does not consider over/underflow
for calculated float `adc_gain` values.
"""
adc_gains = []
baselines = []
if np.where(np.isinf(self.p_signal))[0].size:
raise ValueError('Signal contains inf. Cannot perform adc.')
# min and max ignoring nans, unless whole channel is nan.
# Should suppress warning message.
minvals = np.nanmin(self.p_signal, axis=0)
maxvals = np.nanmax(self.p_signal, axis=0)
for ch in range(np.shape(self.p_signal)[1]):
# Get the minimum and maximum (valid) storage values
dmin, dmax = _digi_bounds(self.fmt[ch])
# add 1 because the lowest value is used to store nans
dmin = dmin + 1
pmin = minvals[ch]
pmax = maxvals[ch]
# Figure out digital samples used to store physical samples
# If the entire signal is nan, gain/baseline won't be used
if np.isnan(pmin):
adc_gain = 1
baseline = 1
# If the signal is just one value, store one digital value.
elif pmin == pmax:
if pmin == 0:
adc_gain = 1
baseline = 1
else:
# All digital values are +1 or -1. Keep adc_gain > 0
adc_gain = abs(1 / pmin)
baseline = 0
# Regular varied signal case.
else:
# The equation is: p = (d - b) / g
# Approximately, pmax maps to dmax, and pmin maps to
# dmin. Gradient will be equal to, or close to
# delta(d) / delta(p), since intercept baseline has
# to be an integer.
# Constraint: baseline must be between +/- 2**31
adc_gain = (dmax-dmin) / (pmax-pmin)
baseline = dmin - adc_gain*pmin
# Make adjustments for baseline to be an integer
# This up/down round logic of baseline is to ensure
# there is no overshoot of dmax. Now pmax will map
# to dmax or dmax-1 which is also fine.
if pmin > 0:
baseline = int(np.ceil(baseline))
else:
baseline = int(np.floor(baseline))
# After baseline is set, adjust gain correspondingly. Set
# the gain to map pmin to dmin, and p==0 to baseline.
# In the case where pmin == 0 and dmin == baseline,
# adc_gain is already correct. Avoid dividing by 0.
if dmin != baseline:
adc_gain = (dmin - baseline) / pmin
# Remap signal if baseline exceeds boundaries.
# This may happen if pmax < 0
if baseline > MAX_I32:
# pmin maps to dmin, baseline maps to 2**31 - 1
# pmax will map to a lower value than before
adc_gain = (MAX_I32 - dmin) / abs(pmin)
baseline = MAX_I32
# This may happen if pmin > 0
elif baseline < MIN_I32:
# pmax maps to dmax, baseline maps to -2**31 + 1
adc_gain = (dmax - MIN_I32) / pmax
baseline = MIN_I32
adc_gains.append(adc_gain)
baselines.append(baseline)
return (adc_gains, baselines) | [
"def",
"calc_adc_params",
"(",
"self",
")",
":",
"adc_gains",
"=",
"[",
"]",
"baselines",
"=",
"[",
"]",
"if",
"np",
".",
"where",
"(",
"np",
".",
"isinf",
"(",
"self",
".",
"p_signal",
")",
")",
"[",
"0",
"]",
".",
"size",
":",
"raise",
"ValueEr... | Compute appropriate adc_gain and baseline parameters for adc
conversion, given the physical signal and the fmts.
Returns
-------
adc_gains : list
List of calculated `adc_gain` values for each channel.
baselines : list
List of calculated `baseline` values for each channel.
Notes
-----
This is the mapping equation:
`(digital - baseline) / adc_gain = physical`
`physical * adc_gain + baseline = digital`
The original WFDB library stores `baseline` as int32.
Constrain abs(adc_gain) <= 2**31 == 2147483648
This function does carefully deal with overflow for calculated
int32 `baseline` values, but does not consider over/underflow
for calculated float `adc_gain` values. | [
"Compute",
"appropriate",
"adc_gain",
"and",
"baseline",
"parameters",
"for",
"adc",
"conversion",
"given",
"the",
"physical",
"signal",
"and",
"the",
"fmts",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L516-L622 | train | 216,228 |
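For the "regular varied signal" branch above, the fitting of an integer baseline followed by gain re-anchoring can be isolated into a small helper. This is a simplified sketch of that branch only — it assumes finite, non-constant input and omits the NaN, constant-signal, and int32-overflow handling of the full method:

```python
import math

def fit_adc_params(pmin, pmax, dmin, dmax):
    # Map [pmin, pmax] approximately onto [dmin, dmax].
    adc_gain = (dmax - dmin) / (pmax - pmin)
    baseline = dmin - adc_gain * pmin
    # Baseline must be an integer; round so dmax is never overshot.
    baseline = math.ceil(baseline) if pmin > 0 else math.floor(baseline)
    # Re-anchor the gain so pmin still maps exactly to dmin.
    if dmin != baseline:
        adc_gain = (dmin - baseline) / pmin
    return adc_gain, baseline
```

With a symmetric physical range, the baseline lands on zero and `pmin` maps exactly to `dmin`.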
MIT-LCP/wfdb-python | wfdb/io/_signal.py | SignalMixin.wr_dat_files | def wr_dat_files(self, expanded=False, write_dir=''):
"""
Write each of the specified dat files
"""
# Get the set of dat files to be written, and
# the channels to be written to each file.
file_names, dat_channels = describe_list_indices(self.file_name)
# Get the fmt and byte offset corresponding to each dat file
DAT_FMTS = {}
dat_offsets = {}
for fn in file_names:
DAT_FMTS[fn] = self.fmt[dat_channels[fn][0]]
# byte_offset may not be present
if self.byte_offset is None:
dat_offsets[fn] = 0
else:
dat_offsets[fn] = self.byte_offset[dat_channels[fn][0]]
# Write the dat files
if expanded:
for fn in file_names:
wr_dat_file(fn, DAT_FMTS[fn], None, dat_offsets[fn], True,
[self.e_d_signal[ch] for ch in dat_channels[fn]],
self.samps_per_frame, write_dir=write_dir)
else:
# Create a copy to prevent overwrite
dsig = self.d_signal.copy()
for fn in file_names:
wr_dat_file(fn, DAT_FMTS[fn],
dsig[:, dat_channels[fn][0]:dat_channels[fn][-1]+1],
dat_offsets[fn], write_dir=write_dir) | python | def wr_dat_files(self, expanded=False, write_dir=''):
"""
Write each of the specified dat files
"""
# Get the set of dat files to be written, and
# the channels to be written to each file.
file_names, dat_channels = describe_list_indices(self.file_name)
# Get the fmt and byte offset corresponding to each dat file
DAT_FMTS = {}
dat_offsets = {}
for fn in file_names:
DAT_FMTS[fn] = self.fmt[dat_channels[fn][0]]
# byte_offset may not be present
if self.byte_offset is None:
dat_offsets[fn] = 0
else:
dat_offsets[fn] = self.byte_offset[dat_channels[fn][0]]
# Write the dat files
if expanded:
for fn in file_names:
wr_dat_file(fn, DAT_FMTS[fn], None, dat_offsets[fn], True,
[self.e_d_signal[ch] for ch in dat_channels[fn]],
self.samps_per_frame, write_dir=write_dir)
else:
# Create a copy to prevent overwrite
dsig = self.d_signal.copy()
for fn in file_names:
wr_dat_file(fn, DAT_FMTS[fn],
dsig[:, dat_channels[fn][0]:dat_channels[fn][-1]+1],
dat_offsets[fn], write_dir=write_dir) | [
"def",
"wr_dat_files",
"(",
"self",
",",
"expanded",
"=",
"False",
",",
"write_dir",
"=",
"''",
")",
":",
"# Get the set of dat files to be written, and",
"# the channels to be written to each file.",
"file_names",
",",
"dat_channels",
"=",
"describe_list_indices",
"(",
"... | Write each of the specified dat files | [
"Write",
"each",
"of",
"the",
"specified",
"dat",
"files"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_signal.py#L667-L700 | train | 216,229 |
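`wr_dat_files` relies on a helper, `describe_list_indices`, to group channels by their target dat file. Its assumed behavior — unique file names in order of first appearance, plus the channel indices belonging to each — can be sketched as follows (this is a reconstruction from how the result is used above, not the library's own code):

```python
def describe_list_indices(full_list):
    # Unique elements in order of first appearance, and the indices
    # at which each element occurs.
    unique_elements = []
    element_indices = {}
    for i, item in enumerate(full_list):
        if item not in element_indices:
            unique_elements.append(item)
            element_indices[item] = []
        element_indices[item].append(i)
    return unique_elements, element_indices
```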
MIT-LCP/wfdb-python | wfdb/io/_header.py | wfdb_strptime | def wfdb_strptime(time_string):
"""
Given a time string in an acceptable wfdb format, return
a datetime.time object.
Valid formats: SS, MM:SS, HH:MM:SS, all with and without microsec.
"""
n_colons = time_string.count(':')
if n_colons == 0:
time_fmt = '%S'
elif n_colons == 1:
time_fmt = '%M:%S'
elif n_colons == 2:
time_fmt = '%H:%M:%S'
if '.' in time_string:
time_fmt += '.%f'
return datetime.datetime.strptime(time_string, time_fmt).time() | python | def wfdb_strptime(time_string):
"""
Given a time string in an acceptable wfdb format, return
a datetime.time object.
Valid formats: SS, MM:SS, HH:MM:SS, all with and without microsec.
"""
n_colons = time_string.count(':')
if n_colons == 0:
time_fmt = '%S'
elif n_colons == 1:
time_fmt = '%M:%S'
elif n_colons == 2:
time_fmt = '%H:%M:%S'
if '.' in time_string:
time_fmt += '.%f'
return datetime.datetime.strptime(time_string, time_fmt).time() | [
"def",
"wfdb_strptime",
"(",
"time_string",
")",
":",
"n_colons",
"=",
"time_string",
".",
"count",
"(",
"':'",
")",
"if",
"n_colons",
"==",
"0",
":",
"time_fmt",
"=",
"'%S'",
"elif",
"n_colons",
"==",
"1",
":",
"time_fmt",
"=",
"'%M:%S'",
"elif",
"n_col... | Given a time string in an acceptable wfdb format, return
a datetime.time object.
Valid formats: SS, MM:SS, HH:MM:SS, all with and without microsec. | [
"Given",
"a",
"time",
"string",
"in",
"an",
"acceptable",
"wfdb",
"format",
"return",
"a",
"datetime",
".",
"time",
"object",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L653-L672 | train | 216,230 |
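Since this record is a small self-contained function, it can be exercised directly. The sketch below restates it with the colon count used as a tuple index; behavior is identical for the valid formats listed in the docstring (SS, MM:SS, HH:MM:SS, each with optional fractional seconds).

```python
import datetime

def wfdb_strptime(time_string):
    # SS, MM:SS or HH:MM:SS, selected by the number of colons,
    # optionally followed by fractional seconds.
    time_fmt = ('%S', '%M:%S', '%H:%M:%S')[time_string.count(':')]
    if '.' in time_string:
        time_fmt += '.%f'
    return datetime.datetime.strptime(time_string, time_fmt).time()
```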
MIT-LCP/wfdb-python | wfdb/io/_header.py | _read_header_lines | def _read_header_lines(base_record_name, dir_name, pb_dir):
"""
Read the lines in a local or remote header file.
Parameters
----------
base_record_name : str
The base name of the WFDB record to be read, without any file
extensions.
dir_name : str
The local directory location of the header file. This parameter
is ignored if `pb_dir` is set.
pb_dir : str
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
Returns
-------
header_lines : list
List of strings corresponding to the header lines.
comment_lines : list
List of strings corresponding to the comment lines.
"""
file_name = base_record_name + '.hea'
# Read local file
if pb_dir is None:
with open(os.path.join(dir_name, file_name), 'r') as fp:
# Record line followed by signal/segment lines if any
header_lines = []
# Comment lines
comment_lines = []
for line in fp:
line = line.strip()
# Comment line
if line.startswith('#'):
comment_lines.append(line)
# Non-empty non-comment line = header line.
elif line:
# Look for a comment in the line
ci = line.find('#')
if ci > 0:
header_lines.append(line[:ci])
# comment on same line as header line
comment_lines.append(line[ci:])
else:
header_lines.append(line)
# Read online header file
else:
header_lines, comment_lines = download._stream_header(file_name,
pb_dir)
return header_lines, comment_lines | python | def _read_header_lines(base_record_name, dir_name, pb_dir):
"""
Read the lines in a local or remote header file.
Parameters
----------
base_record_name : str
The base name of the WFDB record to be read, without any file
extensions.
dir_name : str
The local directory location of the header file. This parameter
is ignored if `pb_dir` is set.
pb_dir : str
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
Returns
-------
header_lines : list
List of strings corresponding to the header lines.
comment_lines : list
List of strings corresponding to the comment lines.
"""
file_name = base_record_name + '.hea'
# Read local file
if pb_dir is None:
with open(os.path.join(dir_name, file_name), 'r') as fp:
# Record line followed by signal/segment lines if any
header_lines = []
# Comment lines
comment_lines = []
for line in fp:
line = line.strip()
# Comment line
if line.startswith('#'):
comment_lines.append(line)
# Non-empty non-comment line = header line.
elif line:
# Look for a comment in the line
ci = line.find('#')
if ci > 0:
header_lines.append(line[:ci])
# comment on same line as header line
comment_lines.append(line[ci:])
else:
header_lines.append(line)
# Read online header file
else:
header_lines, comment_lines = download._stream_header(file_name,
pb_dir)
    return header_lines, comment_lines
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L675-L730 | train | 216,231
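The header/comment splitting loop in `_read_header_lines` above can be exercised in isolation. The sketch below is a standalone illustration (not the library's actual helper, and `split_header_lines` is a made-up name) that applies the same rules to an in-memory string instead of a file: lines starting with `#` are comments, and a `#` appearing mid-line splits a header line from its trailing comment.

```python
def split_header_lines(text):
    """Split raw .hea text into header lines and comment lines,
    mirroring the loop in _read_header_lines."""
    header_lines, comment_lines = [], []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('#'):
            # Whole line is a comment
            comment_lines.append(line)
        elif line:
            # Non-empty, non-comment line: check for a trailing comment
            ci = line.find('#')
            if ci > 0:
                header_lines.append(line[:ci])
                comment_lines.append(line[ci:])
            else:
                header_lines.append(line)
    return header_lines, comment_lines

hea = ("100 2 360 650000\n"
       "100.dat 212 200 11 1024 995 -22131 0 MLII # lead\n"
       "# Produced by xform\n")
h, c = split_header_lines(hea)
```

Note that, as in the original, the portion before a mid-line `#` is kept verbatim (including any trailing spaces).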
MIT-LCP/wfdb-python | wfdb/io/_header.py | _parse_record_line | def _parse_record_line(record_line):
"""
Extract fields from a record line string into a dictionary
"""
# Dictionary for record fields
record_fields = {}
# Read string fields from record line
(record_fields['record_name'], record_fields['n_seg'],
record_fields['n_sig'], record_fields['fs'],
record_fields['counter_freq'], record_fields['base_counter'],
record_fields['sig_len'], record_fields['base_time'],
record_fields['base_date']) = re.findall(_rx_record, record_line)[0]
for field in RECORD_SPECS.index:
# Replace empty strings with their read defaults (which are
# mostly None)
if record_fields[field] == '':
record_fields[field] = RECORD_SPECS.loc[field, 'read_default']
# Typecast non-empty strings for non-string (numerical/datetime)
# fields
else:
if RECORD_SPECS.loc[field, 'allowed_types'] == int_types:
record_fields[field] = int(record_fields[field])
elif RECORD_SPECS.loc[field, 'allowed_types'] == float_types:
record_fields[field] = float(record_fields[field])
# cast fs to an int if it is close
if field == 'fs':
fs = float(record_fields['fs'])
if round(fs, 8) == float(int(fs)):
fs = int(fs)
record_fields['fs'] = fs
elif field == 'base_time':
record_fields['base_time'] = wfdb_strptime(record_fields['base_time'])
elif field == 'base_date':
record_fields['base_date'] = datetime.datetime.strptime(
record_fields['base_date'], '%d/%m/%Y').date()
# This is not a standard wfdb field, but is useful to set.
if record_fields['base_date'] and record_fields['base_time']:
record_fields['base_datetime'] = datetime.datetime.combine(
record_fields['base_date'], record_fields['base_time'])
    return record_fields
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L733-L777 | train | 216,232
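One small but easy-to-miss step in `_parse_record_line` is casting `fs` down to an `int` when the parsed float is an integer to 8 decimal places. A standalone sketch of just that step (the function name here is illustrative, not part of the library):

```python
def normalize_fs(fs_str):
    """Parse a sampling-frequency token, returning an int when the
    value rounds to an integer at 8 decimal places, as
    _parse_record_line does for the 'fs' field."""
    fs = float(fs_str)
    if round(fs, 8) == float(int(fs)):
        fs = int(fs)
    return fs
```

This keeps `'360'` and `'360.0'` as the integer `360` while preserving genuinely fractional rates such as `200.5`.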
MIT-LCP/wfdb-python | wfdb/io/_header.py | _parse_signal_lines | def _parse_signal_lines(signal_lines):
"""
Extract fields from a list of signal line strings into a dictionary.
"""
n_sig = len(signal_lines)
# Dictionary for signal fields
signal_fields = {}
# Each dictionary field is a list
for field in SIGNAL_SPECS.index:
signal_fields[field] = n_sig * [None]
# Read string fields from signal line
for ch in range(n_sig):
(signal_fields['file_name'][ch], signal_fields['fmt'][ch],
signal_fields['samps_per_frame'][ch], signal_fields['skew'][ch],
signal_fields['byte_offset'][ch], signal_fields['adc_gain'][ch],
signal_fields['baseline'][ch], signal_fields['units'][ch],
signal_fields['adc_res'][ch], signal_fields['adc_zero'][ch],
signal_fields['init_value'][ch], signal_fields['checksum'][ch],
signal_fields['block_size'][ch],
signal_fields['sig_name'][ch]) = _rx_signal.findall(signal_lines[ch])[0]
for field in SIGNAL_SPECS.index:
# Replace empty strings with their read defaults (which are mostly None)
# Note: Never set a field to None. [None]* n_sig is accurate, indicating
# that different channels can be present or missing.
if signal_fields[field][ch] == '':
signal_fields[field][ch] = SIGNAL_SPECS.loc[field, 'read_default']
# Special case: missing baseline defaults to ADCzero if present
if field == 'baseline' and signal_fields['adc_zero'][ch] != '':
signal_fields['baseline'][ch] = int(signal_fields['adc_zero'][ch])
# Typecast non-empty strings for numerical fields
else:
if SIGNAL_SPECS.loc[field, 'allowed_types'] is int_types:
signal_fields[field][ch] = int(signal_fields[field][ch])
elif SIGNAL_SPECS.loc[field, 'allowed_types'] is float_types:
signal_fields[field][ch] = float(signal_fields[field][ch])
# Special case: adc_gain of 0 means 200
if field == 'adc_gain' and signal_fields['adc_gain'][ch] == 0:
signal_fields['adc_gain'][ch] = 200.
    return signal_fields
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L780-L824 | train | 216,233
MIT-LCP/wfdb-python | wfdb/io/_header.py | _read_segment_lines | def _read_segment_lines(segment_lines):
"""
Extract fields from segment line strings into a dictionary
"""
# Dictionary for segment fields
segment_fields = {}
# Each dictionary field is a list
for field in SEGMENT_SPECS.index:
segment_fields[field] = [None] * len(segment_lines)
# Read string fields from signal line
for i in range(len(segment_lines)):
(segment_fields['seg_name'][i], segment_fields['seg_len'][i]) = _rx_segment.findall(segment_lines[i])[0]
# Typecast strings for numerical field
if field == 'seg_len':
segment_fields['seg_len'][i] = int(segment_fields['seg_len'][i])
    return segment_fields
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L827-L847 | train | 216,234
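`_read_segment_lines` relies on a module-level regex (`_rx_segment`, not shown in this chunk) to pull a segment name and integer length out of each line. A self-contained approximation of that pattern — the regex here is illustrative and may differ from the library's actual `_rx_segment` — looks like:

```python
import re

# Illustrative pattern: a segment-name token (word characters, '-',
# or the '~' null-segment marker) followed by whitespace and an
# integer segment length.
_seg_pat = re.compile(r'([-\w~]+)[ \t]+(\d+)')

def parse_segment_lines(lines):
    """Extract seg_name and seg_len (typecast to int) from each
    segment line, as _read_segment_lines does."""
    names, lengths = [], []
    for line in lines:
        name, length = _seg_pat.findall(line)[0]
        names.append(name)
        lengths.append(int(length))
    return names, lengths

names, lengths = parse_segment_lines(['100_1 10000', '~ 500'])
```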
MIT-LCP/wfdb-python | wfdb/io/_header.py | BaseHeaderMixin.get_write_subset | def get_write_subset(self, spec_type):
"""
Get a set of fields used to write the header; either 'record'
or 'signal' specification fields. Helper function for
`get_write_fields`. Gets the default required fields, the user
defined fields, and their dependencies.
Parameters
----------
spec_type : str
The set of specification fields desired. Either 'record' or
'signal'.
Returns
-------
write_fields : list or dict
For record fields, returns a list of all fields needed. For
signal fields, it returns a dictionary of all fields needed,
with keys = field and value = list of channels that must be
present for the field.
"""
if spec_type == 'record':
write_fields = []
record_specs = RECORD_SPECS.copy()
# Remove the n_seg requirement for single segment items
if not hasattr(self, 'n_seg'):
record_specs.drop('n_seg', inplace=True)
for field in record_specs.index[-1::-1]:
# Continue if the field has already been included
if field in write_fields:
continue
# If the field is required by default or has been
# defined by the user
if (record_specs.loc[field, 'write_required']
or getattr(self, field) is not None):
req_field = field
# Add the field and its recursive dependencies
while req_field is not None:
write_fields.append(req_field)
req_field = record_specs.loc[req_field, 'dependency']
# Add comments if any
if getattr(self, 'comments') is not None:
write_fields.append('comments')
# signal spec field. Need to return a potentially different list for each channel.
elif spec_type == 'signal':
# List of lists for each channel
write_fields = []
signal_specs = SIGNAL_SPECS.copy()
for ch in range(self.n_sig):
# The fields needed for this channel
write_fields_ch = []
for field in signal_specs.index[-1::-1]:
if field in write_fields_ch:
continue
item = getattr(self, field)
# If the field is required by default or has been defined by the user
if signal_specs.loc[field, 'write_required'] or (item is not None and item[ch] is not None):
req_field = field
# Add the field and its recursive dependencies
while req_field is not None:
write_fields_ch.append(req_field)
req_field = signal_specs.loc[req_field, 'dependency']
write_fields.append(write_fields_ch)
# Convert the list of lists to a single dictionary.
# keys = field and value = list of channels in which the
# field is required.
dict_write_fields = {}
# For fields present in any channel:
for field in set([i for write_fields_ch in write_fields for i in write_fields_ch]):
dict_write_fields[field] = []
for ch in range(self.n_sig):
if field in write_fields[ch]:
dict_write_fields[field].append(ch)
write_fields = dict_write_fields
        return write_fields
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L147-L233 | train | 216,235
MIT-LCP/wfdb-python | wfdb/io/_header.py | HeaderMixin.set_defaults | def set_defaults(self):
"""
Set defaults for fields needed to write the header if they have
defaults.
Notes
-----
- This is NOT called by `rdheader`. It is only automatically
called by the gateway `wrsamp` for convenience.
- This is also not called by `wrheader` since it is supposed to
be an explicit function.
- This is not responsible for initializing the attributes. That
is done by the constructor.
See also `set_p_features` and `set_d_features`.
"""
rfields, sfields = self.get_write_fields()
for f in rfields:
self.set_default(f)
for f in sfields:
            self.set_default(f)
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L241-L262 | train | 216,236
MIT-LCP/wfdb-python | wfdb/io/_header.py | HeaderMixin.get_write_fields | def get_write_fields(self):
"""
Get the list of fields used to write the header, separating
record and signal specification fields. Returns the default
required fields, the user defined fields,
and their dependencies.
Does NOT include `d_signal` or `e_d_signal`.
Returns
-------
rec_write_fields : list
Record specification fields to be written. Includes
'comment' if present.
sig_write_fields : dict
Dictionary of signal specification fields to be written,
with values equal to the channels that need to be present
for each field.
"""
# Record specification fields
rec_write_fields = self.get_write_subset('record')
# Add comments if any
if self.comments != None:
rec_write_fields.append('comments')
# Get required signal fields if signals are present.
self.check_field('n_sig')
if self.n_sig > 0:
sig_write_fields = self.get_write_subset('signal')
else:
sig_write_fields = None
        return rec_write_fields, sig_write_fields
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L306-L342 | train | 216,237
MIT-LCP/wfdb-python | wfdb/io/_header.py | HeaderMixin.set_default | def set_default(self, field):
"""
Set the object's attribute to its default value if it is missing
and there is a default.
Not responsible for initializing the
attribute. That is done by the constructor.
"""
# Record specification fields
if field in RECORD_SPECS.index:
# Return if no default to set, or if the field is already
# present.
if RECORD_SPECS.loc[field, 'write_default'] is None or getattr(self, field) is not None:
return
setattr(self, field, RECORD_SPECS.loc[field, 'write_default'])
# Signal specification fields
# Setting entire list default, not filling in blanks in lists.
elif field in SIGNAL_SPECS.index:
# Specific dynamic case
if field == 'file_name' and self.file_name is None:
self.file_name = self.n_sig * [self.record_name + '.dat']
return
item = getattr(self, field)
# Return if no default to set, or if the field is already
# present.
if SIGNAL_SPECS.loc[field, 'write_default'] is None or item is not None:
return
# Set more specific defaults if possible
if field == 'adc_res' and self.fmt is not None:
self.adc_res = _signal._fmt_res(self.fmt)
return
setattr(self, field,
                [SIGNAL_SPECS.loc[field, 'write_default']] * self.n_sig)
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L345-L384 | train | 216,238
MIT-LCP/wfdb-python | wfdb/io/_header.py | HeaderMixin.check_field_cohesion | def check_field_cohesion(self, rec_write_fields, sig_write_fields):
"""
Check the cohesion of fields used to write the header
"""
# If there are no signal specification fields, there is nothing to check.
if self.n_sig>0:
# The length of all signal specification fields must match n_sig
# even if some of its elements are None.
for f in sig_write_fields:
if len(getattr(self, f)) != self.n_sig:
raise ValueError('The length of field: '+f+' must match field n_sig.')
# Each file_name must correspond to only one fmt, (and only one byte offset if defined).
datfmts = {}
for ch in range(self.n_sig):
if self.file_name[ch] not in datfmts:
datfmts[self.file_name[ch]] = self.fmt[ch]
else:
if datfmts[self.file_name[ch]] != self.fmt[ch]:
raise ValueError('Each file_name (dat file) specified must have the same fmt')
datoffsets = {}
if self.byte_offset is not None:
# At least one byte offset value exists
for ch in range(self.n_sig):
if self.byte_offset[ch] is None:
continue
if self.file_name[ch] not in datoffsets:
datoffsets[self.file_name[ch]] = self.byte_offset[ch]
else:
if datoffsets[self.file_name[ch]] != self.byte_offset[ch]:
                        raise ValueError('Each file_name (dat file) specified must have the same byte offset')
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L387-L420 | train | 216,239
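The file_name/fmt consistency check in `check_field_cohesion` reduces to building a dict from dat file to format and rejecting any conflicting pair. A standalone sketch of that check (the helper name is made up for illustration):

```python
def check_fmt_cohesion(file_names, fmts):
    """Verify each dat file is associated with exactly one fmt,
    as check_field_cohesion requires."""
    seen = {}
    for fname, fmt in zip(file_names, fmts):
        # setdefault records the first fmt seen for this file; any
        # later channel must agree with it.
        if seen.setdefault(fname, fmt) != fmt:
            raise ValueError(
                'Each file_name (dat file) specified must have the same fmt')
    return True

ok = check_fmt_cohesion(['a.dat', 'a.dat', 'b.dat'], ['212', '212', '16'])
```

The same dict-based pattern applies to the byte-offset check, with `None` entries skipped.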
MIT-LCP/wfdb-python | wfdb/io/_header.py | HeaderMixin.wr_header_file | def wr_header_file(self, rec_write_fields, sig_write_fields, write_dir):
"""
Write a header file using the specified fields. Converts Record
attributes into appropriate wfdb format strings.
Parameters
----------
rec_write_fields : list
List of record specification fields to write
sig_write_fields : dict
Dictionary of signal specification fields to write, values
being equal to a list of channels to write for each field.
write_dir : str
The directory in which to write the header file
"""
# Create record specification line
record_line = ''
# Traverse the ordered dictionary
for field in RECORD_SPECS.index:
# If the field is being used, add it with its delimiter
if field in rec_write_fields:
string_field = str(getattr(self, field))
# Certain fields need extra processing
if field == 'fs' and isinstance(self.fs, float):
if round(self.fs, 8) == float(int(self.fs)):
string_field = str(int(self.fs))
elif field == 'base_time' and '.' in string_field:
string_field = string_field.rstrip('0')
elif field == 'base_date':
string_field = '/'.join((string_field[8:],
string_field[5:7],
string_field[:4]))
record_line += RECORD_SPECS.loc[field, 'delimiter'] + string_field
# The 'base_counter' field needs to be closed with ')'
if field == 'base_counter':
record_line += ')'
header_lines = [record_line]
# Create signal specification lines (if any) one channel at a time
if self.n_sig > 0:
signal_lines = self.n_sig * ['']
for ch in range(self.n_sig):
# Traverse the signal fields
for field in SIGNAL_SPECS.index:
# If the field is being used, add each of its
# elements with the delimiter to the appropriate
# line
if field in sig_write_fields and ch in sig_write_fields[field]:
signal_lines[ch] += SIGNAL_SPECS.loc[field, 'delimiter'] + str(getattr(self, field)[ch])
# The 'baseline' field needs to be closed with ')'
if field == 'baseline':
signal_lines[ch] += ')'
header_lines += signal_lines
# Create comment lines (if any)
if 'comments' in rec_write_fields:
comment_lines = ['# ' + comment for comment in self.comments]
header_lines += comment_lines
        lines_to_file(self.record_name + '.hea', write_dir, header_lines)
cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L423-L489 | train | 216,240
MIT-LCP/wfdb-python | wfdb/io/_header.py | MultiHeaderMixin.get_write_fields | def get_write_fields(self):
"""
Get the list of fields used to write the multi-segment header.
Returns the default required fields, the user defined fields,
and their dependencies.
"""
# Record specification fields
write_fields = self.get_write_subset('record')
# Segment specification fields are all mandatory
write_fields = write_fields + ['seg_name', 'seg_len']
# Comments
if self.comments is not None:
write_fields.append('comments')
return write_fields | python | def get_write_fields(self):
"""
Get the list of fields used to write the multi-segment header.
Returns the default required fields, the user defined fields,
and their dependencies.
"""
# Record specification fields
write_fields = self.get_write_subset('record')
# Segment specification fields are all mandatory
write_fields = write_fields + ['seg_name', 'seg_len']
# Comments
if self.comments is not None:
write_fields.append('comments')
return write_fields | [
"def",
"get_write_fields",
"(",
"self",
")",
":",
"# Record specification fields",
"write_fields",
"=",
"self",
".",
"get_write_subset",
"(",
"'record'",
")",
"# Segment specification fields are all mandatory",
"write_fields",
"=",
"write_fields",
"+",
"[",
"'seg_name'",
... | Get the list of fields used to write the multi-segment header.
Returns the default required fields, the user defined fields,
and their dependencies. | [
"Get",
"the",
"list",
"of",
"fields",
"used",
"to",
"write",
"the",
"multi",
"-",
"segment",
"header",
".",
"Returns",
"the",
"default",
"required",
"fields",
"the",
"user",
"defined",
"fields",
"and",
"their",
"dependencies",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L533-L550 | train | 216,241 |
MIT-LCP/wfdb-python | wfdb/io/_header.py | MultiHeaderMixin.wr_header_file | def wr_header_file(self, write_fields, write_dir):
"""
Write a header file using the specified fields
"""
# Create record specification line
record_line = ''
# Traverse the ordered dictionary
for field in RECORD_SPECS.index:
# If the field is being used, add it with its delimiter
if field in write_fields:
record_line += RECORD_SPECS.loc[field, 'delimiter'] + str(getattr(self, field))
header_lines = [record_line]
# Create segment specification lines
segment_lines = self.n_seg * ['']
# For both fields, add each of its elements with the delimiter
# to the appropriate line
for field in SEGMENT_SPECS.index:
for seg_num in range(self.n_seg):
segment_lines[seg_num] += SEGMENT_SPECS.loc[field, 'delimiter'] + str(getattr(self, field)[seg_num])
header_lines = header_lines + segment_lines
# Create comment lines (if any)
if 'comments' in write_fields:
comment_lines = ['# '+ comment for comment in self.comments]
header_lines += comment_lines
lines_to_file(self.record_name + '.hea', write_dir, header_lines)
"""
Write a header file using the specified fields
"""
# Create record specification line
record_line = ''
# Traverse the ordered dictionary
for field in RECORD_SPECS.index:
# If the field is being used, add it with its delimiter
if field in write_fields:
record_line += RECORD_SPECS.loc[field, 'delimiter'] + str(getattr(self, field))
header_lines = [record_line]
# Create segment specification lines
segment_lines = self.n_seg * ['']
# For both fields, add each of its elements with the delimiter
# to the appropriate line
for field in SEGMENT_SPECS.index:
for seg_num in range(self.n_seg):
segment_lines[seg_num] += SEGMENT_SPECS.loc[field, 'delimiter'] + str(getattr(self, field)[seg_num])
header_lines = header_lines + segment_lines
# Create comment lines (if any)
if 'comments' in write_fields:
comment_lines = ['# '+ comment for comment in self.comments]
header_lines += comment_lines
lines_to_file(self.record_name + '.hea', write_dir, header_lines)
"def",
"wr_header_file",
"(",
"self",
",",
"write_fields",
",",
"write_dir",
")",
":",
"# Create record specification line",
"record_line",
"=",
"''",
"# Traverse the ordered dictionary",
"for",
"field",
"in",
"RECORD_SPECS",
".",
"index",
":",
"# If the field is being us... | Write a header file using the specified fields | [
"Write",
"a",
"header",
"file",
"using",
"the",
"specified",
"fields"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/_header.py#L578-L608 | train | 216,242 |
MIT-LCP/wfdb-python | wfdb/processing/basic.py | resample_ann | def resample_ann(resampled_t, ann_sample):
"""
Compute the new annotation indices
Parameters
----------
resampled_t : numpy array
Array of signal locations as returned by scipy.signal.resample
ann_sample : numpy array
Array of annotation locations
Returns
-------
resampled_ann_sample : numpy array
Array of resampled annotation locations
"""
tmp = np.zeros(len(resampled_t), dtype='int16')
j = 0
tprec = resampled_t[j]
for i, v in enumerate(ann_sample):
while True:
d = False
if v < tprec:
j -= 1
tprec = resampled_t[j]
if j+1 == len(resampled_t):
tmp[j] += 1
break
tnow = resampled_t[j+1]
if tprec <= v <= tnow:
if v-tprec < tnow-v:
tmp[j] += 1
else:
tmp[j+1] += 1
d = True
j += 1
tprec = tnow
if d:
break
idx = np.where(tmp>0)[0].astype('int64')
res = []
for i in idx:
for j in range(tmp[i]):
res.append(i)
assert len(res) == len(ann_sample)
return np.asarray(res, dtype='int64') | python | def resample_ann(resampled_t, ann_sample):
"""
Compute the new annotation indices
Parameters
----------
resampled_t : numpy array
Array of signal locations as returned by scipy.signal.resample
ann_sample : numpy array
Array of annotation locations
Returns
-------
resampled_ann_sample : numpy array
Array of resampled annotation locations
"""
tmp = np.zeros(len(resampled_t), dtype='int16')
j = 0
tprec = resampled_t[j]
for i, v in enumerate(ann_sample):
while True:
d = False
if v < tprec:
j -= 1
tprec = resampled_t[j]
if j+1 == len(resampled_t):
tmp[j] += 1
break
tnow = resampled_t[j+1]
if tprec <= v <= tnow:
if v-tprec < tnow-v:
tmp[j] += 1
else:
tmp[j+1] += 1
d = True
j += 1
tprec = tnow
if d:
break
idx = np.where(tmp>0)[0].astype('int64')
res = []
for i in idx:
for j in range(tmp[i]):
res.append(i)
assert len(res) == len(ann_sample)
return np.asarray(res, dtype='int64') | [
"def",
"resample_ann",
"(",
"resampled_t",
",",
"ann_sample",
")",
":",
"tmp",
"=",
"np",
".",
"zeros",
"(",
"len",
"(",
"resampled_t",
")",
",",
"dtype",
"=",
"'int16'",
")",
"j",
"=",
"0",
"tprec",
"=",
"resampled_t",
"[",
"j",
"]",
"for",
"i",
"... | Compute the new annotation indices
Parameters
----------
resampled_t : numpy array
Array of signal locations as returned by scipy.signal.resample
ann_sample : numpy array
Array of annotation locations
Returns
-------
resampled_ann_sample : numpy array
Array of resampled annotation locations | [
"Compute",
"the",
"new",
"annotation",
"indices"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L7-L57 | train | 216,243 |
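The nested loop in `resample_ann` above maps every annotation onto the index of the closest point in the resampled time grid. As an illustrative, numpy-only sketch (not the library's implementation; `nearest_indices` is a hypothetical name), the same mapping can be expressed with `np.searchsorted`, assuming `resampled_t` is sorted, which the function's assertions already rely on:

```python
import numpy as np

def nearest_indices(resampled_t, ann_sample):
    # Insertion points of each annotation into the sorted time grid.
    idx = np.searchsorted(resampled_t, ann_sample, side='left')
    idx = np.clip(idx, 1, len(resampled_t) - 1)
    left = resampled_t[idx - 1]
    right = resampled_t[idx]
    # Step back to the left neighbour only when it is strictly closer
    # (ties go to the right neighbour, matching the loop above).
    idx = idx - ((ann_sample - left) < (right - ann_sample))
    return idx.astype('int64')

t = np.array([0.0, 2.0, 4.0, 6.0])
ann = np.array([0.4, 3.1, 5.9])
idx = nearest_indices(t, ann)  # nearest grid points are 0.0, 4.0, 6.0
```

Unlike the original, this sketch does not guarantee one output index per input when many annotations crowd a single grid point; it only illustrates the nearest-point rule.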
MIT-LCP/wfdb-python | wfdb/processing/basic.py | resample_sig | def resample_sig(x, fs, fs_target):
"""
Resample a signal to a different frequency.
Parameters
----------
x : numpy array
Array containing the signal
fs : int, or float
The original sampling frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_t : numpy array
Array of the resampled signal locations
"""
t = np.arange(x.shape[0]).astype('float64')
if fs == fs_target:
return x, t
new_length = int(x.shape[0]*fs_target/fs)
resampled_x, resampled_t = signal.resample(x, num=new_length, t=t)
assert resampled_x.shape == resampled_t.shape and resampled_x.shape[0] == new_length
assert np.all(np.diff(resampled_t) > 0)
return resampled_x, resampled_t | python | def resample_sig(x, fs, fs_target):
"""
Resample a signal to a different frequency.
Parameters
----------
x : numpy array
Array containing the signal
fs : int, or float
The original sampling frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_t : numpy array
Array of the resampled signal locations
"""
t = np.arange(x.shape[0]).astype('float64')
if fs == fs_target:
return x, t
new_length = int(x.shape[0]*fs_target/fs)
resampled_x, resampled_t = signal.resample(x, num=new_length, t=t)
assert resampled_x.shape == resampled_t.shape and resampled_x.shape[0] == new_length
assert np.all(np.diff(resampled_t) > 0)
return resampled_x, resampled_t | [
"def",
"resample_sig",
"(",
"x",
",",
"fs",
",",
"fs_target",
")",
":",
"t",
"=",
"np",
".",
"arange",
"(",
"x",
".",
"shape",
"[",
"0",
"]",
")",
".",
"astype",
"(",
"'float64'",
")",
"if",
"fs",
"==",
"fs_target",
":",
"return",
"x",
",",
"t"... | Resample a signal to a different frequency.
Parameters
----------
x : numpy array
Array containing the signal
fs : int, or float
The original sampling frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_t : numpy array
Array of the resampled signal locations | [
"Resample",
"a",
"signal",
"to",
"a",
"different",
"frequency",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L60-L92 | train | 216,244 |
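`resample_sig` delegates to `scipy.signal.resample`, a Fourier-domain method. To illustrate just the output-length arithmetic (`new_length = int(len(x) * fs_target / fs)`) without the scipy dependency, here is a hedged sketch using linear interpolation via `np.interp`; it is not numerically equivalent to the FFT-based resampler, and `resample_linear` is a hypothetical name:

```python
import numpy as np

def resample_linear(x, fs, fs_target):
    # Same output-length rule as resample_sig.
    n = x.shape[0]
    new_length = int(n * fs_target / fs)
    t = np.arange(n, dtype='float64')                # original sample times
    resampled_t = np.linspace(0, n - 1, new_length)  # new sample times
    return np.interp(resampled_t, t, x), resampled_t

x = np.arange(10, dtype='float64')  # a ramp, where linear interpolation is exact
y, ty = resample_linear(x, fs=100, fs_target=50)
```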
MIT-LCP/wfdb-python | wfdb/processing/basic.py | resample_singlechan | def resample_singlechan(x, ann, fs, fs_target):
"""
Resample a single-channel signal with its annotations
Parameters
----------
x: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations
"""
resampled_x, resampled_t = resample_sig(x, fs, fs_target)
new_sample = resample_ann(resampled_t, ann.sample)
assert ann.sample.shape == new_sample.shape
resampled_ann = Annotation(record_name=ann.record_name,
extension=ann.extension,
sample=new_sample,
symbol=ann.symbol,
subtype=ann.subtype,
chan=ann.chan,
num=ann.num,
aux_note=ann.aux_note,
fs=fs_target)
return resampled_x, resampled_ann | python | def resample_singlechan(x, ann, fs, fs_target):
"""
Resample a single-channel signal with its annotations
Parameters
----------
x: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations
"""
resampled_x, resampled_t = resample_sig(x, fs, fs_target)
new_sample = resample_ann(resampled_t, ann.sample)
assert ann.sample.shape == new_sample.shape
resampled_ann = Annotation(record_name=ann.record_name,
extension=ann.extension,
sample=new_sample,
symbol=ann.symbol,
subtype=ann.subtype,
chan=ann.chan,
num=ann.num,
aux_note=ann.aux_note,
fs=fs_target)
return resampled_x, resampled_ann | [
"def",
"resample_singlechan",
"(",
"x",
",",
"ann",
",",
"fs",
",",
"fs_target",
")",
":",
"resampled_x",
",",
"resampled_t",
"=",
"resample_sig",
"(",
"x",
",",
"fs",
",",
"fs_target",
")",
"new_sample",
"=",
"resample_ann",
"(",
"resampled_t",
",",
"ann"... | Resample a single-channel signal with its annotations
Parameters
----------
x: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
Returns
-------
resampled_x : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations | [
"Resample",
"a",
"single",
"-",
"channel",
"signal",
"with",
"its",
"annotations"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L95-L134 | train | 216,245 |
MIT-LCP/wfdb-python | wfdb/processing/basic.py | resample_multichan | def resample_multichan(xs, ann, fs, fs_target, resamp_ann_chan=0):
"""
Resample multiple channels with their annotations
Parameters
----------
xs: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
resamp_ann_chan : int, optional
The signal channel used to compute new annotation indices
Returns
-------
resampled_xs : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations
"""
assert resamp_ann_chan < xs.shape[1]
lx = []
lt = None
for chan in range(xs.shape[1]):
resampled_x, resampled_t = resample_sig(xs[:, chan], fs, fs_target)
lx.append(resampled_x)
if chan == resamp_ann_chan:
lt = resampled_t
new_sample = resample_ann(lt, ann.sample)
assert ann.sample.shape == new_sample.shape
resampled_ann = Annotation(record_name=ann.record_name,
extension=ann.extension,
sample=new_sample,
symbol=ann.symbol,
subtype=ann.subtype,
chan=ann.chan,
num=ann.num,
aux_note=ann.aux_note,
fs=fs_target)
return np.column_stack(lx), resampled_ann | python | def resample_multichan(xs, ann, fs, fs_target, resamp_ann_chan=0):
"""
Resample multiple channels with their annotations
Parameters
----------
xs: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
resamp_ann_chan : int, optional
The signal channel used to compute new annotation indices
Returns
-------
resampled_xs : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations
"""
assert resamp_ann_chan < xs.shape[1]
lx = []
lt = None
for chan in range(xs.shape[1]):
resampled_x, resampled_t = resample_sig(xs[:, chan], fs, fs_target)
lx.append(resampled_x)
if chan == resamp_ann_chan:
lt = resampled_t
new_sample = resample_ann(lt, ann.sample)
assert ann.sample.shape == new_sample.shape
resampled_ann = Annotation(record_name=ann.record_name,
extension=ann.extension,
sample=new_sample,
symbol=ann.symbol,
subtype=ann.subtype,
chan=ann.chan,
num=ann.num,
aux_note=ann.aux_note,
fs=fs_target)
return np.column_stack(lx), resampled_ann | [
"def",
"resample_multichan",
"(",
"xs",
",",
"ann",
",",
"fs",
",",
"fs_target",
",",
"resamp_ann_chan",
"=",
"0",
")",
":",
"assert",
"resamp_ann_chan",
"<",
"xs",
".",
"shape",
"[",
"1",
"]",
"lx",
"=",
"[",
"]",
"lt",
"=",
"None",
"for",
"chan",
... | Resample multiple channels with their annotations
Parameters
----------
xs: numpy array
The signal array
ann : wfdb Annotation
The wfdb annotation object
fs : int, or float
The original frequency
fs_target : int, or float
The target frequency
resamp_ann_chan : int, optional
The signal channel used to compute new annotation indices
Returns
-------
resampled_xs : numpy array
Array of the resampled signal values
resampled_ann : wfdb Annotation
Annotation containing resampled annotation locations | [
"Resample",
"multiple",
"channels",
"with",
"their",
"annotations"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L137-L185 | train | 216,246 |
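`resample_multichan` processes one column at a time and reassembles the result with `np.column_stack`, reusing the time grid of channel `resamp_ann_chan` for annotation mapping. The per-channel pattern in miniature, with a hypothetical doubling transform standing in for resampling (`apply_per_channel` is an illustrative name, not a library function):

```python
import numpy as np

def apply_per_channel(xs, func):
    # Apply `func` to each channel (column) and restack the results.
    return np.column_stack([func(xs[:, ch]) for ch in range(xs.shape[1])])

xs = np.array([[1.0, 10.0],
               [2.0, 20.0]])
out = apply_per_channel(xs, lambda col: col * 2)
```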
MIT-LCP/wfdb-python | wfdb/processing/basic.py | normalize_bound | def normalize_bound(sig, lb=0, ub=1):
"""
Normalize a signal between the lower and upper bound
Parameters
----------
sig : numpy array
Original signal to be normalized
lb : int, or float
Lower bound
ub : int, or float
Upper bound
Returns
-------
x_normalized : numpy array
Normalized signal
"""
mid = ub - (ub - lb) / 2
min_v = np.min(sig)
max_v = np.max(sig)
mid_v = max_v - (max_v - min_v) / 2
coef = (ub - lb) / (max_v - min_v)
return sig * coef - (mid_v * coef) + mid | python | def normalize_bound(sig, lb=0, ub=1):
"""
Normalize a signal between the lower and upper bound
Parameters
----------
sig : numpy array
Original signal to be normalized
lb : int, or float
Lower bound
ub : int, or float
Upper bound
Returns
-------
x_normalized : numpy array
Normalized signal
"""
mid = ub - (ub - lb) / 2
min_v = np.min(sig)
max_v = np.max(sig)
mid_v = max_v - (max_v - min_v) / 2
coef = (ub - lb) / (max_v - min_v)
return sig * coef - (mid_v * coef) + mid | [
"def",
"normalize_bound",
"(",
"sig",
",",
"lb",
"=",
"0",
",",
"ub",
"=",
"1",
")",
":",
"mid",
"=",
"ub",
"-",
"(",
"ub",
"-",
"lb",
")",
"/",
"2",
"min_v",
"=",
"np",
".",
"min",
"(",
"sig",
")",
"max_v",
"=",
"np",
".",
"max",
"(",
"s... | Normalize a signal between the lower and upper bound
Parameters
----------
sig : numpy array
Original signal to be normalized
lb : int, or float
Lower bound
ub : int, or float
Upper bound
Returns
-------
x_normalized : numpy array
Normalized signal | [
"Normalize",
"a",
"signal",
"between",
"the",
"lower",
"and",
"upper",
"bound"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L188-L213 | train | 216,247 |
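`normalize_bound` is an affine map sending the signal minimum to `lb` and the maximum to `ub`. A standalone restatement of the same formula with a check of that property; note the division by `max_v - min_v` means the function is undefined for a constant signal:

```python
import numpy as np

def normalize_bound(sig, lb=0, ub=1):
    # Affine rescaling about the midpoints of the old and new ranges.
    mid = ub - (ub - lb) / 2
    min_v, max_v = np.min(sig), np.max(sig)
    mid_v = max_v - (max_v - min_v) / 2
    coef = (ub - lb) / (max_v - min_v)  # undefined for a flat signal
    return sig * coef - (mid_v * coef) + mid

sig = np.array([-3.0, 0.0, 5.0])
out = normalize_bound(sig, lb=-1, ub=1)
```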
MIT-LCP/wfdb-python | wfdb/processing/basic.py | smooth | def smooth(sig, window_size):
"""
Apply a uniform moving average filter to a signal
Parameters
----------
sig : numpy array
The signal to smooth.
window_size : int
The width of the moving average filter.
"""
box = np.ones(window_size)/window_size
return np.convolve(sig, box, mode='same') | python | def smooth(sig, window_size):
"""
Apply a uniform moving average filter to a signal
Parameters
----------
sig : numpy array
The signal to smooth.
window_size : int
The width of the moving average filter.
"""
box = np.ones(window_size)/window_size
return np.convolve(sig, box, mode='same') | [
"def",
"smooth",
"(",
"sig",
",",
"window_size",
")",
":",
"box",
"=",
"np",
".",
"ones",
"(",
"window_size",
")",
"/",
"window_size",
"return",
"np",
".",
"convolve",
"(",
"sig",
",",
"box",
",",
"mode",
"=",
"'same'",
")"
] | Apply a uniform moving average filter to a signal
Parameters
----------
sig : numpy array
The signal to smooth.
window_size : int
The width of the moving average filter. | [
"Apply",
"a",
"uniform",
"moving",
"average",
"filter",
"to",
"a",
"signal"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L216-L229 | train | 216,248 |
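Because `smooth` uses `np.convolve(..., mode='same')` with a uniform kernel, samples near the ends are attenuated: the window hangs off the edge and the missing samples contribute zero. A small demonstration with an odd `window_size`:

```python
import numpy as np

def smooth(sig, window_size):
    box = np.ones(window_size) / window_size
    return np.convolve(sig, box, mode='same')

# Interior samples keep their value; the two edge samples drop to 2/3.
out = smooth(np.ones(8), 3)
```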
MIT-LCP/wfdb-python | wfdb/processing/basic.py | get_filter_gain | def get_filter_gain(b, a, f_gain, fs):
"""
Given filter coefficients, return the gain at a particular
frequency.
Parameters
----------
b : list
List of linear filter b coefficients
a : list
List of linear filter a coefficients
f_gain : int or float
The frequency at which to calculate the gain
fs : int or float
The sampling frequency of the system
"""
# Save the passband gain
w, h = signal.freqz(b, a)
w_gain = f_gain * 2 * np.pi / fs
ind = np.where(w >= w_gain)[0][0]
gain = abs(h[ind])
return gain | python | def get_filter_gain(b, a, f_gain, fs):
"""
Given filter coefficients, return the gain at a particular
frequency.
Parameters
----------
b : list
List of linear filter b coefficients
a : list
List of linear filter a coefficients
f_gain : int or float
The frequency at which to calculate the gain
fs : int or float
The sampling frequency of the system
"""
# Save the passband gain
w, h = signal.freqz(b, a)
w_gain = f_gain * 2 * np.pi / fs
ind = np.where(w >= w_gain)[0][0]
gain = abs(h[ind])
return gain | [
"def",
"get_filter_gain",
"(",
"b",
",",
"a",
",",
"f_gain",
",",
"fs",
")",
":",
"# Save the passband gain",
"w",
",",
"h",
"=",
"signal",
".",
"freqz",
"(",
"b",
",",
"a",
")",
"w_gain",
"=",
"f_gain",
"*",
"2",
"*",
"np",
".",
"pi",
"/",
"fs",... | Given filter coefficients, return the gain at a particular
frequency.
Parameters
----------
b : list
List of linear filter b coefficients
a : list
List of linear filter a coefficients
f_gain : int or float
The frequency at which to calculate the gain
fs : int or float
The sampling frequency of the system
"Given",
"filter",
"coefficients",
"return",
"the",
"gain",
"at",
"a",
"particular",
"frequency",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/basic.py#L232-L256 | train | 216,249 |
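`get_filter_gain` reads |H(e^{jw})| off `scipy.signal.freqz` at w = 2*pi*f_gain/fs. The same value can be computed directly from the transfer-function polynomials, which makes for a dependency-free sketch (`filter_gain` is a hypothetical numpy-only stand-in, not the library's code path):

```python
import numpy as np

def filter_gain(b, a, f_gain, fs):
    # Evaluate H(z) = B(z)/A(z) on the unit circle at z = exp(jw),
    # using the freqz convention sum(b_k * e^{-jwk}) / sum(a_k * e^{-jwk}).
    w = 2 * np.pi * f_gain / fs
    z = np.exp(-1j * w * np.arange(max(len(b), len(a))))
    num = np.sum(np.asarray(b) * z[:len(b)])
    den = np.sum(np.asarray(a) * z[:len(a)])
    return abs(num / den)

# A 3-point moving average passes DC with unit gain.
g0 = filter_gain([1/3, 1/3, 1/3], [1], f_gain=0, fs=250)
```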
MIT-LCP/wfdb-python | wfdb/io/record.py | _check_item_type | def _check_item_type(item, field_name, allowed_types, expect_list=False,
required_channels='all'):
"""
Check the item's type against a set of allowed types.
Vary the print message regarding whether the item can be None.
Helper to `BaseRecord.check_field`.
Parameters
----------
item : any
The item to check.
field_name : str
The field name.
allowed_types : iterable
Iterable of types the item is allowed to be.
expect_list : bool, optional
Whether the item is expected to be a list.
required_channels : list, optional
List of integers specifying which channels of the item must be
present. May be set to 'all' to indicate all channels. Only used
if `expect_list` is True, i.e. item is a list, and its
subelements are to be checked.
Notes
-----
This is called by `check_field`, which determines whether the item
should be a list or not. This function should generally not be
called by the user directly.
"""
if expect_list:
if not isinstance(item, list):
raise TypeError('Field `%s` must be a list.' % field_name)
# All channels of the field must be present.
if required_channels == 'all':
required_channels = list(range(len(item)))
for ch in range(len(item)):
# Check whether the field may be None
if ch in required_channels:
allowed_types_ch = allowed_types
else:
allowed_types_ch = allowed_types + (type(None),)
if not isinstance(item[ch], allowed_types_ch):
raise TypeError('Channel %d of field `%s` must be one of the following types:' % (ch, field_name),
allowed_types_ch)
else:
if not isinstance(item, allowed_types):
raise TypeError('Field `%s` must be one of the following types:' % field_name,
allowed_types)
required_channels='all'):
"""
Check the item's type against a set of allowed types.
Vary the print message regarding whether the item can be None.
Helper to `BaseRecord.check_field`.
Parameters
----------
item : any
The item to check.
field_name : str
The field name.
allowed_types : iterable
Iterable of types the item is allowed to be.
expect_list : bool, optional
Whether the item is expected to be a list.
required_channels : list, optional
List of integers specifying which channels of the item must be
present. May be set to 'all' to indicate all channels. Only used
if `expect_list` is True, i.e. item is a list, and its
subelements are to be checked.
Notes
-----
This is called by `check_field`, which determines whether the item
should be a list or not. This function should generally not be
called by the user directly.
"""
if expect_list:
if not isinstance(item, list):
raise TypeError('Field `%s` must be a list.' % field_name)
# All channels of the field must be present.
if required_channels == 'all':
required_channels = list(range(len(item)))
for ch in range(len(item)):
# Check whether the field may be None
if ch in required_channels:
allowed_types_ch = allowed_types
else:
allowed_types_ch = allowed_types + (type(None),)
if not isinstance(item[ch], allowed_types_ch):
raise TypeError('Channel %d of field `%s` must be one of the following types:' % (ch, field_name),
allowed_types_ch)
else:
if not isinstance(item, allowed_types):
raise TypeError('Field `%s` must be one of the following types:' % field_name,
allowed_types)
"def",
"_check_item_type",
"(",
"item",
",",
"field_name",
",",
"allowed_types",
",",
"expect_list",
"=",
"False",
",",
"required_channels",
"=",
"'all'",
")",
":",
"if",
"expect_list",
":",
"if",
"not",
"isinstance",
"(",
"item",
",",
"list",
")",
":",
"r... | Check the item's type against a set of allowed types.
Vary the print message regarding whether the item can be None.
Helper to `BaseRecord.check_field`.
Parameters
----------
item : any
The item to check.
field_name : str
The field name.
allowed_types : iterable
Iterable of types the item is allowed to be.
expect_list : bool, optional
Whether the item is expected to be a list.
required_channels : list, optional
List of integers specifying which channels of the item must be
present. May be set to 'all' to indicate all channels. Only used
if `expect_list` is True, i.e. item is a list, and its
subelements are to be checked.
Notes
-----
This is called by `check_field`, which determines whether the item
should be a list or not. This function should generally not be
called by the user directly. | [
"Check",
"the",
"item",
"s",
"type",
"against",
"a",
"set",
"of",
"allowed",
"types",
".",
"Vary",
"the",
"print",
"message",
"regarding",
"whether",
"the",
"item",
"can",
"be",
"None",
".",
"Helper",
"to",
"BaseRecord",
".",
"check_field",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L858-L909 | train | 216,250 |
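The per-channel branch of `_check_item_type` widens the allowed types with `type(None)` for channels that are not listed as required. The core pattern, reduced to a standalone sketch with illustrative names (`check_types` is not a library function):

```python
def check_types(values, allowed, required):
    # Channels not listed in `required` may also be None.
    for ch, value in enumerate(values):
        allowed_ch = allowed if ch in required else allowed + (type(None),)
        if not isinstance(value, allowed_ch):
            raise TypeError('channel %d has disallowed type %s'
                            % (ch, type(value).__name__))

# Channel 1 is optional, so None is accepted there but not in channel 0.
check_types(['mV', None], allowed=(str,), required=[0])
```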
MIT-LCP/wfdb-python | wfdb/io/record.py | check_np_array | def check_np_array(item, field_name, ndim, parent_class, channel_num=None):
"""
Check a numpy array's shape and dtype against required
specifications.
Parameters
----------
item : numpy array
The numpy array to check
field_name : str
The name of the field to check
ndim : int
The required number of dimensions
parent_class : type
The parent class of the dtype, e.g. np.integer, np.floating.
channel_num : int, optional
If not None, indicates that the item passed in is a subelement
of a list. Indicate this in the error message if triggered.
"""
# Check shape
if item.ndim != ndim:
error_msg = 'Field `%s` must have ndim == %d' % (field_name, ndim)
if channel_num is not None:
error_msg = ('Channel %d of f' % channel_num) + error_msg[1:]
raise TypeError(error_msg)
# Check dtype
if not np.issubdtype(item.dtype, parent_class):
error_msg = 'Field `%s` must have a dtype that subclasses %s' % (field_name, parent_class)
if channel_num is not None:
error_msg = ('Channel %d of f' % channel_num) + error_msg[1:]
raise TypeError(error_msg) | python | def check_np_array(item, field_name, ndim, parent_class, channel_num=None):
"""
Check a numpy array's shape and dtype against required
specifications.
Parameters
----------
item : numpy array
The numpy array to check
field_name : str
The name of the field to check
ndim : int
The required number of dimensions
parent_class : type
The parent class of the dtype, e.g. np.integer, np.floating.
channel_num : int, optional
If not None, indicates that the item passed in is a subelement
of a list. Indicate this in the error message if triggered.
"""
# Check shape
if item.ndim != ndim:
error_msg = 'Field `%s` must have ndim == %d' % (field_name, ndim)
if channel_num is not None:
error_msg = ('Channel %d of f' % channel_num) + error_msg[1:]
raise TypeError(error_msg)
# Check dtype
if not np.issubdtype(item.dtype, parent_class):
error_msg = 'Field `%s` must have a dtype that subclasses %s' % (field_name, parent_class)
if channel_num is not None:
error_msg = ('Channel %d of f' % channel_num) + error_msg[1:]
raise TypeError(error_msg) | [
"def",
"check_np_array",
"(",
"item",
",",
"field_name",
",",
"ndim",
",",
"parent_class",
",",
"channel_num",
"=",
"None",
")",
":",
"# Check shape",
"if",
"item",
".",
"ndim",
"!=",
"ndim",
":",
"error_msg",
"=",
"'Field `%s` must have ndim == %d'",
"%",
"("... | Check a numpy array's shape and dtype against required
specifications.
Parameters
----------
item : numpy array
The numpy array to check
field_name : str
The name of the field to check
ndim : int
The required number of dimensions
parent_class : type
The parent class of the dtype, e.g. np.integer, np.floating.
channel_num : int, optional
If not None, indicates that the item passed in is a subelement
of a list. Indicate this in the error message if triggered. | [
"Check",
"a",
"numpy",
"array",
"s",
"shape",
"and",
"dtype",
"against",
"required",
"specifications",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L912-L944 | train | 216,251 |
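`check_np_array` tests dtype with `np.issubdtype` against an abstract parent class such as `np.integer` or `np.floating`, so any concrete width (int16, int64, float32, ...) under that parent passes. A quick illustration of the two conditions it enforces (`is_valid` is a hypothetical helper name):

```python
import numpy as np

def is_valid(arr, ndim, parent_class):
    # Mirrors the two conditions check_np_array enforces:
    # required dimensionality and dtype lineage.
    return arr.ndim == ndim and np.issubdtype(arr.dtype, parent_class)

assert is_valid(np.zeros((4, 2), dtype='int16'), 2, np.integer)
assert not is_valid(np.zeros(4, dtype='float32'), 1, np.integer)
```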
MIT-LCP/wfdb-python | wfdb/io/record.py | rdheader | def rdheader(record_name, pb_dir=None, rd_segments=False):
"""
Read a WFDB header file and return a `Record` or `MultiRecord`
object with the record descriptors as attributes.
Parameters
----------
record_name : str
The name of the WFDB record to be read, without any file
extensions. If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/BASE_RECORD.
Both relative and absolute paths are accepted. If the `pb_dir`
parameter is set, this parameter should contain just the base
record name, and the files fill be searched for remotely.
Otherwise, the data files will be searched for in the local path.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
rd_segments : bool, optional
Used when reading multi-segment headers. If True, segment headers will
also be read (into the record object's `segments` field).
Returns
-------
record : Record or MultiRecord
The wfdb Record or MultiRecord object representing the contents
of the header read.
Examples
--------
>>> ecg_record = wfdb.rdheader('sample-data/test01_00s')
"""
dir_name, base_record_name = os.path.split(record_name)
dir_name = os.path.abspath(dir_name)
# Read the header file. Separate comment and non-comment lines
header_lines, comment_lines = _header._read_header_lines(base_record_name,
dir_name, pb_dir)
# Get fields from record line
record_fields = _header._parse_record_line(header_lines[0])
# Single segment header - Process signal specification lines
if record_fields['n_seg'] is None:
# Create a single-segment WFDB record object
record = Record()
# There are signals
if len(header_lines)>1:
# Read the fields from the signal lines
signal_fields = _header._parse_signal_lines(header_lines[1:])
# Set the object's signal fields
for field in signal_fields:
setattr(record, field, signal_fields[field])
# Set the object's record line fields
for field in record_fields:
if field == 'n_seg':
continue
setattr(record, field, record_fields[field])
# Multi segment header - Process segment specification lines
else:
# Create a multi-segment WFDB record object
record = MultiRecord()
# Read the fields from the segment lines
segment_fields = _header._read_segment_lines(header_lines[1:])
# Set the object's segment fields
for field in segment_fields:
setattr(record, field, segment_fields[field])
# Set the objects' record fields
for field in record_fields:
setattr(record, field, record_fields[field])
# Determine whether the record is fixed or variable
if record.seg_len[0] == 0:
record.layout = 'variable'
else:
record.layout = 'fixed'
# If specified, read the segment headers
if rd_segments:
record.segments = []
# Get the base record name (could be empty)
for s in record.seg_name:
if s == '~':
record.segments.append(None)
else:
record.segments.append(rdheader(os.path.join(dir_name, s),
pb_dir))
# Fill in the sig_name attribute
record.sig_name = record.get_sig_name()
# Fill in the sig_segments attribute
record.sig_segments = record.get_sig_segments()
# Set the comments field
record.comments = [line.strip(' \t#') for line in comment_lines]
return record | python | def rdheader(record_name, pb_dir=None, rd_segments=False):
"""
Read a WFDB header file and return a `Record` or `MultiRecord`
object with the record descriptors as attributes.
Parameters
----------
record_name : str
The name of the WFDB record to be read, without any file
extensions. If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/BASE_RECORD.
Both relative and absolute paths are accepted. If the `pb_dir`
parameter is set, this parameter should contain just the base
record name, and the files will be searched for remotely.
Otherwise, the data files will be searched for in the local path.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
rd_segments : bool, optional
Used when reading multi-segment headers. If True, segment headers will
also be read (into the record object's `segments` field).
Returns
-------
record : Record or MultiRecord
The wfdb Record or MultiRecord object representing the contents
of the header read.
Examples
--------
>>> ecg_record = wfdb.rdheader('sample-data/test01_00s')
"""
dir_name, base_record_name = os.path.split(record_name)
dir_name = os.path.abspath(dir_name)
# Read the header file. Separate comment and non-comment lines
header_lines, comment_lines = _header._read_header_lines(base_record_name,
dir_name, pb_dir)
# Get fields from record line
record_fields = _header._parse_record_line(header_lines[0])
# Single segment header - Process signal specification lines
if record_fields['n_seg'] is None:
# Create a single-segment WFDB record object
record = Record()
# There are signals
if len(header_lines)>1:
# Read the fields from the signal lines
signal_fields = _header._parse_signal_lines(header_lines[1:])
# Set the object's signal fields
for field in signal_fields:
setattr(record, field, signal_fields[field])
# Set the object's record line fields
for field in record_fields:
if field == 'n_seg':
continue
setattr(record, field, record_fields[field])
# Multi segment header - Process segment specification lines
else:
# Create a multi-segment WFDB record object
record = MultiRecord()
# Read the fields from the segment lines
segment_fields = _header._read_segment_lines(header_lines[1:])
# Set the object's segment fields
for field in segment_fields:
setattr(record, field, segment_fields[field])
# Set the objects' record fields
for field in record_fields:
setattr(record, field, record_fields[field])
# Determine whether the record is fixed or variable
if record.seg_len[0] == 0:
record.layout = 'variable'
else:
record.layout = 'fixed'
# If specified, read the segment headers
if rd_segments:
record.segments = []
# Get the base record name (could be empty)
for s in record.seg_name:
if s == '~':
record.segments.append(None)
else:
record.segments.append(rdheader(os.path.join(dir_name, s),
pb_dir))
# Fill in the sig_name attribute
record.sig_name = record.get_sig_name()
# Fill in the sig_segments attribute
record.sig_segments = record.get_sig_segments()
# Set the comments field
record.comments = [line.strip(' \t#') for line in comment_lines]
return record | [
"def",
"rdheader",
"(",
"record_name",
",",
"pb_dir",
"=",
"None",
",",
"rd_segments",
"=",
"False",
")",
":",
"dir_name",
",",
"base_record_name",
"=",
"os",
".",
"path",
".",
"split",
"(",
"record_name",
")",
"dir_name",
"=",
"os",
".",
"path",
".",
... | Read a WFDB header file and return a `Record` or `MultiRecord`
object with the record descriptors as attributes.
Parameters
----------
record_name : str
The name of the WFDB record to be read, without any file
extensions. If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/BASE_RECORD.
Both relative and absolute paths are accepted. If the `pb_dir`
parameter is set, this parameter should contain just the base
record name, and the files will be searched for remotely.
Otherwise, the data files will be searched for in the local path.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
rd_segments : bool, optional
Used when reading multi-segment headers. If True, segment headers will
also be read (into the record object's `segments` field).
Returns
-------
record : Record or MultiRecord
The wfdb Record or MultiRecord object representing the contents
of the header read.
Examples
--------
>>> ecg_record = wfdb.rdheader('sample-data/test01_00s') | [
"Read",
"a",
"WFDB",
"header",
"file",
"and",
"return",
"a",
"Record",
"or",
"MultiRecord",
"object",
"with",
"the",
"record",
"descriptors",
"as",
"attributes",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L950-L1051 | train | 216,252 |
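The `rdheader` docstring above notes that a record name containing path delimiters is interpreted as PATH/BASE_RECORD, and the function body does this with `os.path.split` and `os.path.abspath`. A self-contained sketch of that first step (the helper name `split_record_name` is illustrative, not part of the wfdb API):

```python
import os.path

def split_record_name(record_name):
    """Mirror rdheader's first step: separate the directory part of a
    record name from the base record name."""
    dir_name, base_record_name = os.path.split(record_name)
    # An empty dir_name resolves to the current working directory
    return os.path.abspath(dir_name), base_record_name
```

So `'sample-data/test01_00s'` yields base record `'test01_00s'` with the search directory resolved relative to the working directory, while a bare name like `'100'` is looked up in the working directory itself.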
MIT-LCP/wfdb-python | wfdb/io/record.py | rdsamp | def rdsamp(record_name, sampfrom=0, sampto=None, channels=None, pb_dir=None,
channel_names=None, warn_empty=False):
"""
Read a WFDB record, and return the physical signals and a few important
descriptor fields.
Parameters
----------
record_name : str
The name of the WFDB record to be read (without any file
extensions). If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/baserecord
and the data files will be searched for in the local path.
sampfrom : int, optional
The starting sample number to read for all channels.
sampto : int, or 'end', optional
The sample number at which to stop reading for all channels.
Reads the entire duration by default.
channels : list, optional
List of integer indices specifying the channels to be read.
Reads all channels by default.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
channel_names : list, optional
List of channel names to return. If this parameter is specified,
it takes precedence over `channels`.
warn_empty : bool, optional
Whether to display a warning if the specified channel indices
or names are not contained in the record, and no signal is
returned.
Returns
-------
signals : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read
record:
- fs: The sampling frequency of the record
- units: The units for each channel
- sig_name: The signal name for each channel
- comments: Any comments written in the header
Notes
-----
If a signal range or channel selection is specified when calling
this function, the resulting attributes of the returned object will
be set to reflect the section of the record that is actually read,
rather than necessarily the entire record. For example, if
`channels=[0, 1, 2]` is specified when reading a 12 channel record,
the 'n_sig' attribute will be 3, not 12.
The `rdrecord` function is the base function upon which this one is
built. It returns all attributes present, along with the signals, as
attributes in a `Record` object. The function, along with the
returned data type, has more options than `rdsamp` for users who
wish to more directly manipulate WFDB content.
Examples
--------
>>> signals, fields = wfdb.rdsamp('sample-data/test01_00s',
sampfrom=800,
channels=[1,3])
"""
record = rdrecord(record_name=record_name, sampfrom=sampfrom,
sampto=sampto, channels=channels, physical=True,
pb_dir=pb_dir, m2s=True, channel_names=channel_names,
warn_empty=warn_empty)
signals = record.p_signal
fields = {}
for field in ['fs','sig_len', 'n_sig', 'base_date', 'base_time',
'units','sig_name', 'comments']:
fields[field] = getattr(record, field)
return signals, fields | python | def rdsamp(record_name, sampfrom=0, sampto=None, channels=None, pb_dir=None,
channel_names=None, warn_empty=False):
"""
Read a WFDB record, and return the physical signals and a few important
descriptor fields.
Parameters
----------
record_name : str
The name of the WFDB record to be read (without any file
extensions). If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/baserecord
and the data files will be searched for in the local path.
sampfrom : int, optional
The starting sample number to read for all channels.
sampto : int, or 'end', optional
The sample number at which to stop reading for all channels.
Reads the entire duration by default.
channels : list, optional
List of integer indices specifying the channels to be read.
Reads all channels by default.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
channel_names : list, optional
List of channel names to return. If this parameter is specified,
it takes precedence over `channels`.
warn_empty : bool, optional
Whether to display a warning if the specified channel indices
or names are not contained in the record, and no signal is
returned.
Returns
-------
signals : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read
record:
- fs: The sampling frequency of the record
- units: The units for each channel
- sig_name: The signal name for each channel
- comments: Any comments written in the header
Notes
-----
If a signal range or channel selection is specified when calling
this function, the resulting attributes of the returned object will
be set to reflect the section of the record that is actually read,
rather than necessarily the entire record. For example, if
`channels=[0, 1, 2]` is specified when reading a 12 channel record,
the 'n_sig' attribute will be 3, not 12.
The `rdrecord` function is the base function upon which this one is
built. It returns all attributes present, along with the signals, as
attributes in a `Record` object. The function, along with the
returned data type, has more options than `rdsamp` for users who
wish to more directly manipulate WFDB content.
Examples
--------
>>> signals, fields = wfdb.rdsamp('sample-data/test01_00s',
sampfrom=800,
channels=[1,3])
"""
record = rdrecord(record_name=record_name, sampfrom=sampfrom,
sampto=sampto, channels=channels, physical=True,
pb_dir=pb_dir, m2s=True, channel_names=channel_names,
warn_empty=warn_empty)
signals = record.p_signal
fields = {}
for field in ['fs','sig_len', 'n_sig', 'base_date', 'base_time',
'units','sig_name', 'comments']:
fields[field] = getattr(record, field)
return signals, fields | [
"def",
"rdsamp",
"(",
"record_name",
",",
"sampfrom",
"=",
"0",
",",
"sampto",
"=",
"None",
",",
"channels",
"=",
"None",
",",
"pb_dir",
"=",
"None",
",",
"channel_names",
"=",
"None",
",",
"warn_empty",
"=",
"False",
")",
":",
"record",
"=",
"rdrecord... | Read a WFDB record, and return the physical signals and a few important
descriptor fields.
Parameters
----------
record_name : str
The name of the WFDB record to be read (without any file
extensions). If the argument contains any path delimiter
characters, the argument will be interpreted as PATH/baserecord
and the data files will be searched for in the local path.
sampfrom : int, optional
The starting sample number to read for all channels.
sampto : int, or 'end', optional
The sample number at which to stop reading for all channels.
Reads the entire duration by default.
channels : list, optional
List of integer indices specifying the channels to be read.
Reads all channels by default.
pb_dir : str, optional
Option used to stream data from Physiobank. The Physiobank
database directory from which to find the required record files.
eg. For record '100' in 'http://physionet.org/physiobank/database/mitdb'
pb_dir='mitdb'.
channel_names : list, optional
List of channel names to return. If this parameter is specified,
it takes precedence over `channels`.
warn_empty : bool, optional
Whether to display a warning if the specified channel indices
or names are not contained in the record, and no signal is
returned.
Returns
-------
signals : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read
record:
- fs: The sampling frequency of the record
- units: The units for each channel
- sig_name: The signal name for each channel
- comments: Any comments written in the header
Notes
-----
If a signal range or channel selection is specified when calling
this function, the resulting attributes of the returned object will
be set to reflect the section of the record that is actually read,
rather than necessarily the entire record. For example, if
`channels=[0, 1, 2]` is specified when reading a 12 channel record,
the 'n_sig' attribute will be 3, not 12.
The `rdrecord` function is the base function upon which this one is
built. It returns all attributes present, along with the signals, as
attributes in a `Record` object. The function, along with the
returned data type, has more options than `rdsamp` for users who
wish to more directly manipulate WFDB content.
Examples
--------
>>> signals, fields = wfdb.rdsamp('sample-data/test01_00s',
sampfrom=800,
channels=[1,3]) | [
"Read",
"a",
"WFDB",
"record",
"and",
"return",
"the",
"physical",
"signals",
"and",
"a",
"few",
"important",
"descriptor",
"fields",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L1323-L1402 | train | 216,253 |
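The tail of `rdsamp` above builds its `fields` dict by looping over attribute names and calling `getattr` on the record object. A small stand-alone sketch of that pattern, using `SimpleNamespace` as a stand-in for a `Record` object (the attribute values are made up for illustration):

```python
from types import SimpleNamespace

def extract_fields(record, field_names):
    """Mirror rdsamp's final step: pull selected record attributes
    into a plain dict."""
    return {field: getattr(record, field) for field in field_names}

# A toy record standing in for a wfdb Record object
record = SimpleNamespace(fs=360, sig_len=650000, n_sig=2,
                         units=['mV', 'mV'], sig_name=['MLII', 'V5'])
fields = extract_fields(record, ['fs', 'n_sig', 'sig_name'])
```

This is why `rdsamp` can return a lightweight `(signals, fields)` pair while `rdrecord` returns the full object.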
MIT-LCP/wfdb-python | wfdb/io/record.py | _get_wanted_channels | def _get_wanted_channels(wanted_sig_names, record_sig_names, pad=False):
"""
Given some wanted signal names, and the signal names contained in a
record, return the indices of the record channels that intersect.
Parameters
----------
wanted_sig_names : list
List of desired signal name strings
record_sig_names : list
List of signal names for a single record
pad : bool, optional
Whether the output channels are to always have the same number
of elements as the wanted channels. If True, pads missing
signals with None.
Returns
-------
wanted_channel_inds
"""
if pad:
return [record_sig_names.index(s) if s in record_sig_names else None for s in wanted_sig_names]
else:
return [record_sig_names.index(s) for s in wanted_sig_names if s in record_sig_names] | python | def _get_wanted_channels(wanted_sig_names, record_sig_names, pad=False):
"""
Given some wanted signal names, and the signal names contained in a
record, return the indices of the record channels that intersect.
Parameters
----------
wanted_sig_names : list
List of desired signal name strings
record_sig_names : list
List of signal names for a single record
pad : bool, optional
Whether the output channels are to always have the same number
of elements as the wanted channels. If True, pads missing
signals with None.
Returns
-------
wanted_channel_inds
"""
if pad:
return [record_sig_names.index(s) if s in record_sig_names else None for s in wanted_sig_names]
else:
return [record_sig_names.index(s) for s in wanted_sig_names if s in record_sig_names] | [
"def",
"_get_wanted_channels",
"(",
"wanted_sig_names",
",",
"record_sig_names",
",",
"pad",
"=",
"False",
")",
":",
"if",
"pad",
":",
"return",
"[",
"record_sig_names",
".",
"index",
"(",
"s",
")",
"if",
"s",
"in",
"record_sig_names",
"else",
"None",
"for",... | Given some wanted signal names, and the signal names contained in a
record, return the indices of the record channels that intersect.
Parameters
----------
wanted_sig_names : list
List of desired signal name strings
record_sig_names : list
List of signal names for a single record
pad : bool, optional
Whether the output channels are to always have the same number
of elements as the wanted channels. If True, pads missing
signals with None.
Returns
-------
wanted_channel_inds | [
"Given",
"some",
"wanted",
"signal",
"names",
"and",
"the",
"signal",
"names",
"contained",
"in",
"a",
"record",
"return",
"the",
"indices",
"of",
"the",
"record",
"channels",
"that",
"intersect",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L1405-L1429 | train | 216,254 |
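The `_get_wanted_channels` helper above is short enough to restate as a self-contained sketch, which makes the effect of `pad` easy to see (this mirrors the source shown in the row; only the leading underscore is dropped for illustration):

```python
def get_wanted_channels(wanted_sig_names, record_sig_names, pad=False):
    """Return the record-channel indices matching the wanted signal
    names. With pad=True, missing names map to None so the output
    keeps the same length as the wanted list."""
    if pad:
        return [record_sig_names.index(s) if s in record_sig_names else None
                for s in wanted_sig_names]
    return [record_sig_names.index(s) for s in wanted_sig_names
            if s in record_sig_names]
```

For example, wanting `['V5', 'AVR']` from a record containing `['MLII', 'V1', 'V5']` gives `[2]` without padding and `[2, None]` with it.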
MIT-LCP/wfdb-python | wfdb/io/record.py | wrsamp | def wrsamp(record_name, fs, units, sig_name, p_signal=None, d_signal=None,
fmt=None, adc_gain=None, baseline=None, comments=None,
base_time=None, base_date=None, write_dir=''):
"""
Write a single segment WFDB record, creating a WFDB header file and any
associated dat files.
Parameters
----------
record_name : str
The string name of the WFDB record to be written (without any file
extensions).
fs : int, or float
The sampling frequency of the record.
units : list
A list of strings giving the units of each signal channel.
sig_name : list
A list of strings giving the signal name of each signal channel.
p_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
physical signal values intended to be written. Either p_signal or
d_signal must be set, but not both. If p_signal is set, this method will
use it to perform analogue-digital conversion, writing the resultant
digital values to the dat file(s). If fmt is set, gain and baseline must
be set or unset together. If fmt is unset, gain and baseline must both
be unset.
d_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
digital signal values intended to be directly written to the dat
file(s). The dtype must be an integer type. Either p_signal or d_signal
must be set, but not both. In addition, if d_signal is set, fmt, gain
and baseline must also all be set.
fmt : list, optional
A list of strings giving the WFDB format of each file used to store each
channel. Accepted formats are: '80','212','16','24', and '32'. There are
other WFDB formats as specified by:
https://www.physionet.org/physiotools/wag/signal-5.htm
but this library will not write (though it will read) those file types.
adc_gain : list, optional
A list of numbers specifying the ADC gain.
baseline : list, optional
A list of integers specifying the digital baseline.
comments : list, optional
A list of string comments to be written to the header file.
base_time : str, optional
A string of the record's start time in 24h 'HH:MM:SS(.ms)' format.
base_date : str, optional
A string of the record's start date in 'DD/MM/YYYY' format.
write_dir : str, optional
The directory in which to write the files.
Notes
-----
This is a gateway function, written as a simple method to write WFDB record
files using the most common parameters. Therefore not all WFDB fields can be
set via this function.
For more control over attributes, create a `Record` object, manually set its
attributes, and call its `wrsamp` instance method. If you choose this more
advanced method, see also the `set_defaults`, `set_d_features`, and
`set_p_features` instance methods to help populate attributes.
Examples
--------
>>> # Read part of a record from Physiobank
>>> signals, fields = wfdb.rdsamp('a103l', sampfrom=50000, channels=[0,1],
pb_dir='challenge/2015/training')
>>> # Write a local WFDB record (manually inserting fields)
>>> wfdb.wrsamp('ecgrecord', fs = 250, units=['mV', 'mV'],
sig_name=['I', 'II'], p_signal=signals, fmt=['16', '16'])
"""
# Check input field combinations
if p_signal is not None and d_signal is not None:
raise Exception('Must only give one of the inputs: p_signal or d_signal')
if d_signal is not None:
if fmt is None or adc_gain is None or baseline is None:
raise Exception("When using d_signal, must also specify 'fmt', 'gain', and 'baseline' fields.")
# Depending on whether d_signal or p_signal was used, set other
# required features.
if p_signal is not None:
# Create the Record object
record = Record(record_name=record_name, p_signal=p_signal, fs=fs,
fmt=fmt, units=units, sig_name=sig_name,
adc_gain=adc_gain, baseline=baseline,
comments=comments, base_time=base_time,
base_date=base_date)
# Compute optimal fields to store the digital signal, carry out adc,
# and set the fields.
record.set_d_features(do_adc=1)
else:
# Create the Record object
record = Record(record_name=record_name, d_signal=d_signal, fs=fs,
fmt=fmt, units=units, sig_name=sig_name,
adc_gain=adc_gain, baseline=baseline,
comments=comments, base_time=base_time,
base_date=base_date)
# Use d_signal to set the fields directly
record.set_d_features()
# Set default values of any missing field dependencies
record.set_defaults()
# Write the record files - header and associated dat
record.wrsamp(write_dir=write_dir) | python | def wrsamp(record_name, fs, units, sig_name, p_signal=None, d_signal=None,
fmt=None, adc_gain=None, baseline=None, comments=None,
base_time=None, base_date=None, write_dir=''):
"""
Write a single segment WFDB record, creating a WFDB header file and any
associated dat files.
Parameters
----------
record_name : str
The string name of the WFDB record to be written (without any file
extensions).
fs : int, or float
The sampling frequency of the record.
units : list
A list of strings giving the units of each signal channel.
sig_name : list
A list of strings giving the signal name of each signal channel.
p_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
physical signal values intended to be written. Either p_signal or
d_signal must be set, but not both. If p_signal is set, this method will
use it to perform analogue-digital conversion, writing the resultant
digital values to the dat file(s). If fmt is set, gain and baseline must
be set or unset together. If fmt is unset, gain and baseline must both
be unset.
d_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
digital signal values intended to be directly written to the dat
file(s). The dtype must be an integer type. Either p_signal or d_signal
must be set, but not both. In addition, if d_signal is set, fmt, gain
and baseline must also all be set.
fmt : list, optional
A list of strings giving the WFDB format of each file used to store each
channel. Accepted formats are: '80','212','16','24', and '32'. There are
other WFDB formats as specified by:
https://www.physionet.org/physiotools/wag/signal-5.htm
but this library will not write (though it will read) those file types.
adc_gain : list, optional
A list of numbers specifying the ADC gain.
baseline : list, optional
A list of integers specifying the digital baseline.
comments : list, optional
A list of string comments to be written to the header file.
base_time : str, optional
A string of the record's start time in 24h 'HH:MM:SS(.ms)' format.
base_date : str, optional
A string of the record's start date in 'DD/MM/YYYY' format.
write_dir : str, optional
The directory in which to write the files.
Notes
-----
This is a gateway function, written as a simple method to write WFDB record
files using the most common parameters. Therefore not all WFDB fields can be
set via this function.
For more control over attributes, create a `Record` object, manually set its
attributes, and call its `wrsamp` instance method. If you choose this more
advanced method, see also the `set_defaults`, `set_d_features`, and
`set_p_features` instance methods to help populate attributes.
Examples
--------
>>> # Read part of a record from Physiobank
>>> signals, fields = wfdb.rdsamp('a103l', sampfrom=50000, channels=[0,1],
pb_dir='challenge/2015/training')
>>> # Write a local WFDB record (manually inserting fields)
>>> wfdb.wrsamp('ecgrecord', fs = 250, units=['mV', 'mV'],
sig_name=['I', 'II'], p_signal=signals, fmt=['16', '16'])
"""
# Check input field combinations
if p_signal is not None and d_signal is not None:
raise Exception('Must only give one of the inputs: p_signal or d_signal')
if d_signal is not None:
if fmt is None or adc_gain is None or baseline is None:
raise Exception("When using d_signal, must also specify 'fmt', 'gain', and 'baseline' fields.")
# Depending on whether d_signal or p_signal was used, set other
# required features.
if p_signal is not None:
# Create the Record object
record = Record(record_name=record_name, p_signal=p_signal, fs=fs,
fmt=fmt, units=units, sig_name=sig_name,
adc_gain=adc_gain, baseline=baseline,
comments=comments, base_time=base_time,
base_date=base_date)
# Compute optimal fields to store the digital signal, carry out adc,
# and set the fields.
record.set_d_features(do_adc=1)
else:
# Create the Record object
record = Record(record_name=record_name, d_signal=d_signal, fs=fs,
fmt=fmt, units=units, sig_name=sig_name,
adc_gain=adc_gain, baseline=baseline,
comments=comments, base_time=base_time,
base_date=base_date)
# Use d_signal to set the fields directly
record.set_d_features()
# Set default values of any missing field dependencies
record.set_defaults()
# Write the record files - header and associated dat
record.wrsamp(write_dir=write_dir) | [
"def",
"wrsamp",
"(",
"record_name",
",",
"fs",
",",
"units",
",",
"sig_name",
",",
"p_signal",
"=",
"None",
",",
"d_signal",
"=",
"None",
",",
"fmt",
"=",
"None",
",",
"adc_gain",
"=",
"None",
",",
"baseline",
"=",
"None",
",",
"comments",
"=",
"Non... | Write a single segment WFDB record, creating a WFDB header file and any
associated dat files.
Parameters
----------
record_name : str
The string name of the WFDB record to be written (without any file
extensions).
fs : int, or float
The sampling frequency of the record.
units : list
A list of strings giving the units of each signal channel.
sig_name : list
A list of strings giving the signal name of each signal channel.
p_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
physical signal values intended to be written. Either p_signal or
d_signal must be set, but not both. If p_signal is set, this method will
use it to perform analogue-digital conversion, writing the resultant
digital values to the dat file(s). If fmt is set, gain and baseline must
be set or unset together. If fmt is unset, gain and baseline must both
be unset.
d_signal : numpy array, optional
An (MxN) 2d numpy array, where M is the signal length. Gives the
digital signal values intended to be directly written to the dat
file(s). The dtype must be an integer type. Either p_signal or d_signal
must be set, but not both. In addition, if d_signal is set, fmt, gain
and baseline must also all be set.
fmt : list, optional
A list of strings giving the WFDB format of each file used to store each
channel. Accepted formats are: '80','212','16','24', and '32'. There are
other WFDB formats as specified by:
https://www.physionet.org/physiotools/wag/signal-5.htm
but this library will not write (though it will read) those file types.
adc_gain : list, optional
A list of numbers specifying the ADC gain.
baseline : list, optional
A list of integers specifying the digital baseline.
comments : list, optional
A list of string comments to be written to the header file.
base_time : str, optional
A string of the record's start time in 24h 'HH:MM:SS(.ms)' format.
base_date : str, optional
A string of the record's start date in 'DD/MM/YYYY' format.
write_dir : str, optional
The directory in which to write the files.
Notes
-----
This is a gateway function, written as a simple method to write WFDB record
files using the most common parameters. Therefore not all WFDB fields can be
set via this function.
For more control over attributes, create a `Record` object, manually set its
attributes, and call its `wrsamp` instance method. If you choose this more
advanced method, see also the `set_defaults`, `set_d_features`, and
`set_p_features` instance methods to help populate attributes.
Examples
--------
>>> # Read part of a record from Physiobank
>>> signals, fields = wfdb.rdsamp('a103l', sampfrom=50000, channels=[0,1],
pb_dir='challenge/2015/training')
>>> # Write a local WFDB record (manually inserting fields)
>>> wfdb.wrsamp('ecgrecord', fs = 250, units=['mV', 'mV'],
sig_name=['I', 'II'], p_signal=signals, fmt=['16', '16']) | [
"Write",
"a",
"single",
"segment",
"WFDB",
"record",
"creating",
"a",
"WFDB",
"header",
"file",
"and",
"any",
"associated",
"dat",
"files",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L1435-L1539 | train | 216,255 |
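The start of `wrsamp` above validates its input combinations before doing any work: at most one of `p_signal`/`d_signal` may be given, and `d_signal` additionally requires `fmt`, `adc_gain`, and `baseline`. A runnable sketch of just that logic (the name `validate_wrsamp_inputs` is illustrative and not part of the wfdb API; the library raises a bare `Exception` where this sketch raises `ValueError`):

```python
def validate_wrsamp_inputs(p_signal=None, d_signal=None, fmt=None,
                           adc_gain=None, baseline=None):
    """Mirror wrsamp's up-front checks on signal/field combinations."""
    # Only one of p_signal / d_signal may be given
    if p_signal is not None and d_signal is not None:
        raise ValueError('Must only give one of the inputs: '
                         'p_signal or d_signal')
    # A digital signal needs fmt, adc_gain and baseline to interpret it
    if d_signal is not None:
        if fmt is None or adc_gain is None or baseline is None:
            raise ValueError("When using d_signal, must also specify "
                             "'fmt', 'adc_gain', and 'baseline'")
```

Passing `p_signal` alone is accepted because `wrsamp` can then derive the digital fields itself via ADC; passing `d_signal` alone is not, since the digital values are meaningless without their format, gain, and baseline.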
MIT-LCP/wfdb-python | wfdb/io/record.py | is_monotonic | def is_monotonic(full_list):
"""
Determine whether elements in a list are monotonic, i.e. unique
elements are clustered together.
e.g. [5,5,3,4] is monotonic; [5,3,5] is not.
"""
prev_elements = set({full_list[0]})
prev_item = full_list[0]
for item in full_list:
if item != prev_item:
if item in prev_elements:
return False
prev_item = item
prev_elements.add(item)
return True | python | def is_monotonic(full_list):
"""
Determine whether elements in a list are monotonic, i.e. unique
elements are clustered together.
e.g. [5,5,3,4] is monotonic; [5,3,5] is not.
"""
prev_elements = set({full_list[0]})
prev_item = full_list[0]
for item in full_list:
if item != prev_item:
if item in prev_elements:
return False
prev_item = item
prev_elements.add(item)
return True | [
"def",
"is_monotonic",
"(",
"full_list",
")",
":",
"prev_elements",
"=",
"set",
"(",
"{",
"full_list",
"[",
"0",
"]",
"}",
")",
"prev_item",
"=",
"full_list",
"[",
"0",
"]",
"for",
"item",
"in",
"full_list",
":",
"if",
"item",
"!=",
"prev_item",
":",
... | Determine whether elements in a list are monotonic, i.e. unique
elements are clustered together.
e.g. [5,5,3,4] is monotonic; [5,3,5] is not. | [
"Determine",
"whether",
"elements",
"in",
"a",
"list",
"are",
"monotonic",
".",
"ie",
".",
"unique",
"elements",
"are",
"clustered",
"together",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L1542-L1559 | train | 216,256 |
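The clustering check above is easy to exercise in isolation; a minimal, self-contained sketch (the function is reimplemented here for illustration):

```python
def is_monotonic(full_list):
    """Return True if equal elements are clustered together."""
    prev_elements = {full_list[0]}
    prev_item = full_list[0]
    for item in full_list:
        if item != prev_item:
            if item in prev_elements:
                return False  # value re-appears after a different value
            prev_item = item
            prev_elements.add(item)
    return True

print(is_monotonic([5, 5, 3, 4]))  # True: each value forms one cluster
print(is_monotonic([5, 3, 5]))     # False: 5 re-appears after 3
```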
MIT-LCP/wfdb-python | wfdb/io/record.py | BaseRecord._adjust_datetime | def _adjust_datetime(self, sampfrom):
"""
Adjust date and time fields to reflect user input if possible.
Helper function for the `_arrange_fields` of both Record and
MultiRecord objects.
"""
if sampfrom:
dt_seconds = sampfrom / self.fs
if self.base_date and self.base_time:
self.base_datetime = datetime.datetime.combine(self.base_date,
self.base_time)
self.base_datetime += datetime.timedelta(seconds=dt_seconds)
self.base_date = self.base_datetime.date()
self.base_time = self.base_datetime.time()
# We can calculate the time even if there is no date
elif self.base_time:
tmp_datetime = datetime.datetime.combine(
datetime.datetime.today().date(), self.base_time)
self.base_time = (tmp_datetime
+ datetime.timedelta(seconds=dt_seconds)).time() | python | def _adjust_datetime(self, sampfrom):
"""
Adjust date and time fields to reflect user input if possible.
Helper function for the `_arrange_fields` of both Record and
MultiRecord objects.
"""
if sampfrom:
dt_seconds = sampfrom / self.fs
if self.base_date and self.base_time:
self.base_datetime = datetime.datetime.combine(self.base_date,
self.base_time)
self.base_datetime += datetime.timedelta(seconds=dt_seconds)
self.base_date = self.base_datetime.date()
self.base_time = self.base_datetime.time()
# We can calculate the time even if there is no date
elif self.base_time:
tmp_datetime = datetime.datetime.combine(
datetime.datetime.today().date(), self.base_time)
self.base_time = (tmp_datetime
+ datetime.timedelta(seconds=dt_seconds)).time() | [
"def",
"_adjust_datetime",
"(",
"self",
",",
"sampfrom",
")",
":",
"if",
"sampfrom",
":",
"dt_seconds",
"=",
"sampfrom",
"/",
"self",
".",
"fs",
"if",
"self",
".",
"base_date",
"and",
"self",
".",
"base_time",
":",
"self",
".",
"base_datetime",
"=",
"dat... | Adjust date and time fields to reflect user input if possible.
Helper function for the `_arrange_fields` of both Record and
MultiRecord objects. | [
"Adjust",
"date",
"and",
"time",
"fields",
"to",
"reflect",
"user",
"input",
"if",
"possible",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L235-L255 | train | 216,257 |
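The adjustment above amounts to shifting the base date/time forward by `sampfrom / fs` seconds. A standalone sketch with illustrative values (`fs`, `sampfrom`, and the base fields here are assumptions, not taken from a real record):

```python
import datetime

fs = 250          # sampling frequency in Hz (assumed)
sampfrom = 75000  # starting sample requested by the user (assumed)
base_date = datetime.date(2020, 1, 1)
base_time = datetime.time(23, 59, 0)

dt_seconds = sampfrom / fs  # 300 seconds
base_datetime = datetime.datetime.combine(base_date, base_time)
base_datetime += datetime.timedelta(seconds=dt_seconds)

print(base_datetime.date())  # rolls over midnight to the next day
print(base_datetime.time())  # 00:04:00
```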
MIT-LCP/wfdb-python | wfdb/io/record.py | Record.wrsamp | def wrsamp(self, expanded=False, write_dir=''):
"""
Write a wfdb header file and any associated dat files from this
object.
Parameters
----------
expanded : bool, optional
Whether to write the expanded signal (e_d_signal) instead
of the uniform signal (d_signal).
write_dir : str, optional
The directory in which to write the files.
"""
# Perform field validity and cohesion checks, and write the
# header file.
self.wrheader(write_dir=write_dir)
if self.n_sig > 0:
# Perform signal validity and cohesion checks, and write the
# associated dat files.
self.wr_dats(expanded=expanded, write_dir=write_dir) | python | def wrsamp(self, expanded=False, write_dir=''):
"""
Write a wfdb header file and any associated dat files from this
object.
Parameters
----------
expanded : bool, optional
Whether to write the expanded signal (e_d_signal) instead
of the uniform signal (d_signal).
write_dir : str, optional
The directory in which to write the files.
"""
# Perform field validity and cohesion checks, and write the
# header file.
self.wrheader(write_dir=write_dir)
if self.n_sig > 0:
# Perform signal validity and cohesion checks, and write the
# associated dat files.
self.wr_dats(expanded=expanded, write_dir=write_dir) | [
"def",
"wrsamp",
"(",
"self",
",",
"expanded",
"=",
"False",
",",
"write_dir",
"=",
"''",
")",
":",
"# Perform field validity and cohesion checks, and write the",
"# header file.",
"self",
".",
"wrheader",
"(",
"write_dir",
"=",
"write_dir",
")",
"if",
"self",
"."... | Write a wfdb header file and any associated dat files from this
object.
Parameters
----------
expanded : bool, optional
Whether to write the expanded signal (e_d_signal) instead
of the uniform signal (d_signal).
write_dir : str, optional
The directory in which to write the files. | [
"Write",
"a",
"wfdb",
"header",
"file",
"and",
"any",
"associated",
"dat",
"files",
"from",
"this",
"object",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L350-L370 | train | 216,258 |
MIT-LCP/wfdb-python | wfdb/io/record.py | MultiRecord.wrsamp | def wrsamp(self, write_dir=''):
"""
Write a multi-segment header, along with headers and dat files
for all segments, from this object.
"""
# Perform field validity and cohesion checks, and write the
# header file.
self.wrheader(write_dir=write_dir)
# Perform record validity and cohesion checks, and write the
# associated segments.
for seg in self.segments:
seg.wrsamp(write_dir=write_dir) | python | def wrsamp(self, write_dir=''):
"""
Write a multi-segment header, along with headers and dat files
for all segments, from this object.
"""
# Perform field validity and cohesion checks, and write the
# header file.
self.wrheader(write_dir=write_dir)
# Perform record validity and cohesion checks, and write the
# associated segments.
for seg in self.segments:
seg.wrsamp(write_dir=write_dir) | [
"def",
"wrsamp",
"(",
"self",
",",
"write_dir",
"=",
"''",
")",
":",
"# Perform field validity and cohesion checks, and write the",
"# header file.",
"self",
".",
"wrheader",
"(",
"write_dir",
"=",
"write_dir",
")",
"# Perform record validity and cohesion checks, and write th... | Write a multi-segment header, along with headers and dat files
for all segments, from this object. | [
"Write",
"a",
"multi",
"-",
"segment",
"header",
"along",
"with",
"headers",
"and",
"dat",
"files",
"for",
"all",
"segments",
"from",
"this",
"object",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L479-L490 | train | 216,259 |
MIT-LCP/wfdb-python | wfdb/io/record.py | MultiRecord._check_segment_cohesion | def _check_segment_cohesion(self):
"""
Check the cohesion of the segments field with other fields used
to write the record
"""
if self.n_seg != len(self.segments):
raise ValueError("Length of segments must match the 'n_seg' field")
totalsig_len = 0
for i in range(self.n_seg):
s = self.segments[i]
# If segment 0 is a layout specification record, check that its file names are all == '~''
if i == 0 and self.seg_len[0] == 0:
for file_name in s.file_name:
if file_name != '~':
raise ValueError("Layout specification records must have all file_names named '~'")
# Sampling frequencies must all match the one in the master header
if s.fs != self.fs:
raise ValueError("The 'fs' in each segment must match the overall record's 'fs'")
# Check the signal length of the segment against the corresponding seg_len field
if s.sig_len != self.seg_len[i]:
raise ValueError('The signal length of segment '+str(i)+' does not match the corresponding segment length')
totalsig_len = totalsig_len + getattr(s, 'sig_len') | python | def _check_segment_cohesion(self):
"""
Check the cohesion of the segments field with other fields used
to write the record
"""
if self.n_seg != len(self.segments):
raise ValueError("Length of segments must match the 'n_seg' field")
totalsig_len = 0
for i in range(self.n_seg):
s = self.segments[i]
# If segment 0 is a layout specification record, check that its file names are all == '~''
if i == 0 and self.seg_len[0] == 0:
for file_name in s.file_name:
if file_name != '~':
raise ValueError("Layout specification records must have all file_names named '~'")
# Sampling frequencies must all match the one in the master header
if s.fs != self.fs:
raise ValueError("The 'fs' in each segment must match the overall record's 'fs'")
# Check the signal length of the segment against the corresponding seg_len field
if s.sig_len != self.seg_len[i]:
raise ValueError('The signal length of segment '+str(i)+' does not match the corresponding segment length')
totalsig_len = totalsig_len + getattr(s, 'sig_len') | [
"def",
"_check_segment_cohesion",
"(",
"self",
")",
":",
"if",
"self",
".",
"n_seg",
"!=",
"len",
"(",
"self",
".",
"segments",
")",
":",
"raise",
"ValueError",
"(",
"\"Length of segments must match the 'n_seg' field\"",
")",
"for",
"i",
"in",
"range",
"(",
"n... | Check the cohesion of the segments field with other fields used
to write the record | [
"Check",
"the",
"cohesion",
"of",
"the",
"segments",
"field",
"with",
"other",
"fields",
"used",
"to",
"write",
"the",
"record"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L492-L518 | train | 216,260 |
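The cohesion rules above reduce to simple per-segment consistency checks against the master header. A standalone sketch with illustrative field values (the lists below are assumptions, not a real record):

```python
seg_len = [1000, 2000]           # lengths declared in the master header
segment_sig_lens = [1000, 2000]  # lengths read from each segment header
fs = 250                         # master sampling frequency
segment_fs = [250, 250]          # per-segment sampling frequencies

# Each segment's signal length must match its declared seg_len entry.
for i, (declared, actual) in enumerate(zip(seg_len, segment_sig_lens)):
    if actual != declared:
        raise ValueError('segment %d length mismatch' % i)

# Each segment's fs must match the overall record's fs.
for i, sfs in enumerate(segment_fs):
    if sfs != fs:
        raise ValueError("segment %d 'fs' mismatch" % i)

total_sig_len = sum(segment_sig_lens)
print(total_sig_len)  # 3000
```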
MIT-LCP/wfdb-python | wfdb/io/record.py | MultiRecord._required_segments | def _required_segments(self, sampfrom, sampto):
"""
Determine the segments and the samples within each segment in a
multi-segment record, that lie within a sample range.
Parameters
----------
sampfrom : int
The starting sample number to read for each channel.
sampto : int
The sample number at which to stop reading for each channel.
"""
# The starting segment with actual samples
if self.layout == 'fixed':
startseg = 0
else:
startseg = 1
# Cumulative sum of segment lengths (ignoring layout segment)
cumsumlengths = list(np.cumsum(self.seg_len[startseg:]))
# Get first segment
seg_numbers = [[sampfrom < cs for cs in cumsumlengths].index(True)]
# Get final segment
if sampto == cumsumlengths[len(cumsumlengths) - 1]:
seg_numbers.append(len(cumsumlengths) - 1)
else:
seg_numbers.append([sampto <= cs for cs in cumsumlengths].index(True))
# Add 1 for variable layout records
seg_numbers = list(np.add(seg_numbers,startseg))
# Obtain the sampfrom and sampto to read for each segment
if seg_numbers[1] == seg_numbers[0]:
# Only one segment to read
seg_numbers = [seg_numbers[0]]
# The segment's first sample number relative to the entire record
segstartsamp = sum(self.seg_len[0:seg_numbers[0]])
readsamps = [[sampfrom-segstartsamp, sampto-segstartsamp]]
else:
# More than one segment to read
seg_numbers = list(range(seg_numbers[0], seg_numbers[1]+1))
readsamps = [[0, self.seg_len[s]] for s in seg_numbers]
# Starting sample for first segment.
readsamps[0][0] = sampfrom - ([0] + cumsumlengths)[seg_numbers[0]-startseg]
# End sample for last segment
readsamps[-1][1] = sampto - ([0] + cumsumlengths)[seg_numbers[-1]-startseg]
return (seg_numbers, readsamps) | python | def _required_segments(self, sampfrom, sampto):
"""
Determine the segments and the samples within each segment in a
multi-segment record, that lie within a sample range.
Parameters
----------
sampfrom : int
The starting sample number to read for each channel.
sampto : int
The sample number at which to stop reading for each channel.
"""
# The starting segment with actual samples
if self.layout == 'fixed':
startseg = 0
else:
startseg = 1
# Cumulative sum of segment lengths (ignoring layout segment)
cumsumlengths = list(np.cumsum(self.seg_len[startseg:]))
# Get first segment
seg_numbers = [[sampfrom < cs for cs in cumsumlengths].index(True)]
# Get final segment
if sampto == cumsumlengths[len(cumsumlengths) - 1]:
seg_numbers.append(len(cumsumlengths) - 1)
else:
seg_numbers.append([sampto <= cs for cs in cumsumlengths].index(True))
# Add 1 for variable layout records
seg_numbers = list(np.add(seg_numbers,startseg))
# Obtain the sampfrom and sampto to read for each segment
if seg_numbers[1] == seg_numbers[0]:
# Only one segment to read
seg_numbers = [seg_numbers[0]]
# The segment's first sample number relative to the entire record
segstartsamp = sum(self.seg_len[0:seg_numbers[0]])
readsamps = [[sampfrom-segstartsamp, sampto-segstartsamp]]
else:
# More than one segment to read
seg_numbers = list(range(seg_numbers[0], seg_numbers[1]+1))
readsamps = [[0, self.seg_len[s]] for s in seg_numbers]
# Starting sample for first segment.
readsamps[0][0] = sampfrom - ([0] + cumsumlengths)[seg_numbers[0]-startseg]
# End sample for last segment
readsamps[-1][1] = sampto - ([0] + cumsumlengths)[seg_numbers[-1]-startseg]
return (seg_numbers, readsamps) | [
"def",
"_required_segments",
"(",
"self",
",",
"sampfrom",
",",
"sampto",
")",
":",
"# The starting segment with actual samples",
"if",
"self",
".",
"layout",
"==",
"'fixed'",
":",
"startseg",
"=",
"0",
"else",
":",
"startseg",
"=",
"1",
"# Cumulative sum of segme... | Determine the segments and the samples within each segment in a
multi-segment record, that lie within a sample range.
Parameters
----------
sampfrom : int
The starting sample number to read for each channel.
sampto : int
The sample number at which to stop reading for each channel. | [
"Determine",
"the",
"segments",
"and",
"the",
"samples",
"within",
"each",
"segment",
"in",
"a",
"multi",
"-",
"segment",
"record",
"that",
"lie",
"within",
"a",
"sample",
"range",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L525-L577 | train | 216,261 |
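The segment lookup above boils down to a cumulative-sum search over segment lengths. A simplified fixed-layout sketch (no layout segment; the lengths are illustrative, and `itertools.accumulate` stands in for `np.cumsum`):

```python
from itertools import accumulate

seg_len = [1000, 2000, 1500]        # samples per segment (assumed)
cumsum = list(accumulate(seg_len))  # [1000, 3000, 4500]

def required_segments(sampfrom, sampto):
    """Return indices of the segments overlapping the sample range."""
    first = [sampfrom < cs for cs in cumsum].index(True)
    if sampto == cumsum[-1]:
        last = len(cumsum) - 1
    else:
        last = [sampto <= cs for cs in cumsum].index(True)
    return list(range(first, last + 1))

print(required_segments(500, 900))   # [0] - one segment suffices
print(required_segments(500, 3200))  # [0, 1, 2] - spans all three
```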
MIT-LCP/wfdb-python | wfdb/io/record.py | MultiRecord._required_channels | def _required_channels(self, seg_numbers, channels, dir_name, pb_dir):
"""
Get the channel numbers to be read from each specified segment,
given the channel numbers specified for the entire record.
Parameters
----------
seg_numbers : list
List of segment numbers to read.
channels : list
The channel indices to read for the whole record. Same one
specified by user input.
Returns
-------
required_channels : list
List of lists, containing channel indices to read for each
desired segment.
"""
# Fixed layout. All channels are the same.
if self.layout == 'fixed':
required_channels = [channels] * len(seg_numbers)
# Variable layout: figure out channels by matching record names
else:
required_channels = []
# The overall layout signal names
l_sig_names = self.segments[0].sig_name
# The wanted signals
w_sig_names = [l_sig_names[c] for c in channels]
# For each segment
for i in range(len(seg_numbers)):
# Skip empty segments
if self.seg_name[seg_numbers[i]] == '~':
required_channels.append([])
else:
# Get the signal names of the current segment
s_sig_names = rdheader(
os.path.join(dir_name, self.seg_name[seg_numbers[i]]),
pb_dir=pb_dir).sig_name
required_channels.append(_get_wanted_channels(
w_sig_names, s_sig_names))
return required_channels | python | def _required_channels(self, seg_numbers, channels, dir_name, pb_dir):
"""
Get the channel numbers to be read from each specified segment,
given the channel numbers specified for the entire record.
Parameters
----------
seg_numbers : list
List of segment numbers to read.
channels : list
The channel indices to read for the whole record. Same one
specified by user input.
Returns
-------
required_channels : list
List of lists, containing channel indices to read for each
desired segment.
"""
# Fixed layout. All channels are the same.
if self.layout == 'fixed':
required_channels = [channels] * len(seg_numbers)
# Variable layout: figure out channels by matching record names
else:
required_channels = []
# The overall layout signal names
l_sig_names = self.segments[0].sig_name
# The wanted signals
w_sig_names = [l_sig_names[c] for c in channels]
# For each segment
for i in range(len(seg_numbers)):
# Skip empty segments
if self.seg_name[seg_numbers[i]] == '~':
required_channels.append([])
else:
# Get the signal names of the current segment
s_sig_names = rdheader(
os.path.join(dir_name, self.seg_name[seg_numbers[i]]),
pb_dir=pb_dir).sig_name
required_channels.append(_get_wanted_channels(
w_sig_names, s_sig_names))
return required_channels | [
"def",
"_required_channels",
"(",
"self",
",",
"seg_numbers",
",",
"channels",
",",
"dir_name",
",",
"pb_dir",
")",
":",
"# Fixed layout. All channels are the same.",
"if",
"self",
".",
"layout",
"==",
"'fixed'",
":",
"required_channels",
"=",
"[",
"channels",
"]"... | Get the channel numbers to be read from each specified segment,
given the channel numbers specified for the entire record.
Parameters
----------
seg_numbers : list
List of segment numbers to read.
channels : list
The channel indices to read for the whole record. Same one
specified by user input.
Returns
-------
required_channels : list
List of lists, containing channel indices to read for each
desired segment. | [
"Get",
"the",
"channel",
"numbers",
"to",
"be",
"read",
"from",
"each",
"specified",
"segment",
"given",
"the",
"channel",
"numbers",
"specified",
"for",
"the",
"entire",
"record",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/record.py#L580-L625 | train | 216,262 |
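For variable layouts, channels are resolved by matching signal names between the layout record and each segment. A sketch of that name-matching step (`_get_wanted_channels` is reimplemented here for illustration, and the signal names are assumptions):

```python
def get_wanted_channels(wanted_sig_names, record_sig_names):
    """Map each wanted signal name to its index in the segment, or None."""
    return [record_sig_names.index(s) if s in record_sig_names else None
            for s in wanted_sig_names]

layout_names = ['II', 'V', 'PLETH', 'ABP']    # overall layout (assumed)
channels = [0, 3]                             # user asks for II and ABP
wanted = [layout_names[c] for c in channels]  # ['II', 'ABP']

segment_names = ['ABP', 'II']                 # one segment's own ordering
print(get_wanted_channels(wanted, segment_names))  # [1, 0]
```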
MIT-LCP/wfdb-python | wfdb/io/tff.py | rdtff | def rdtff(file_name, cut_end=False):
"""
Read values from a tff file
Parameters
----------
file_name : str
Name of the .tff file to read
cut_end : bool, optional
If True, cuts out the last sample for all channels. This is for
reading files which appear to terminate with the incorrect
number of samples (ie. sample not present for all channels).
Returns
-------
signal : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read record.
markers : numpy array
A 1d numpy array storing the marker locations.
triggers : numpy array
A 1d numpy array storing the trigger locations.
Notes
-----
This function is slow because tff files may contain any number of
escape sequences interspersed with the signals. There is no way to
know the number of samples/escape sequences beforehand, so the file
is inefficiently parsed a small chunk at a time.
It is recommended that you convert your tff files to wfdb format.
"""
file_size = os.path.getsize(file_name)
with open(file_name, 'rb') as fp:
fields, file_fields = _rdheader(fp)
signal, markers, triggers = _rdsignal(fp, file_size=file_size,
header_size=file_fields['header_size'],
n_sig=file_fields['n_sig'],
bit_width=file_fields['bit_width'],
is_signed=file_fields['is_signed'],
cut_end=cut_end)
return signal, fields, markers, triggers | python | def rdtff(file_name, cut_end=False):
"""
Read values from a tff file
Parameters
----------
file_name : str
Name of the .tff file to read
cut_end : bool, optional
If True, cuts out the last sample for all channels. This is for
reading files which appear to terminate with the incorrect
number of samples (ie. sample not present for all channels).
Returns
-------
signal : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read record.
markers : numpy array
A 1d numpy array storing the marker locations.
triggers : numpy array
A 1d numpy array storing the trigger locations.
Notes
-----
This function is slow because tff files may contain any number of
escape sequences interspersed with the signals. There is no way to
know the number of samples/escape sequences beforehand, so the file
is inefficiently parsed a small chunk at a time.
It is recommended that you convert your tff files to wfdb format.
"""
file_size = os.path.getsize(file_name)
with open(file_name, 'rb') as fp:
fields, file_fields = _rdheader(fp)
signal, markers, triggers = _rdsignal(fp, file_size=file_size,
header_size=file_fields['header_size'],
n_sig=file_fields['n_sig'],
bit_width=file_fields['bit_width'],
is_signed=file_fields['is_signed'],
cut_end=cut_end)
return signal, fields, markers, triggers | [
"def",
"rdtff",
"(",
"file_name",
",",
"cut_end",
"=",
"False",
")",
":",
"file_size",
"=",
"os",
".",
"path",
".",
"getsize",
"(",
"file_name",
")",
"with",
"open",
"(",
"file_name",
",",
"'rb'",
")",
"as",
"fp",
":",
"fields",
",",
"file_fields",
"... | Read values from a tff file
Parameters
----------
file_name : str
Name of the .tff file to read
cut_end : bool, optional
If True, cuts out the last sample for all channels. This is for
reading files which appear to terminate with the incorrect
number of samples (ie. sample not present for all channels).
Returns
-------
signal : numpy array
A 2d numpy array storing the physical signals from the record.
fields : dict
A dictionary containing several key attributes of the read record.
markers : numpy array
A 1d numpy array storing the marker locations.
triggers : numpy array
A 1d numpy array storing the trigger locations.
Notes
-----
This function is slow because tff files may contain any number of
escape sequences interspersed with the signals. There is no way to
know the number of samples/escape sequences beforehand, so the file
is inefficiently parsed a small chunk at a time.
It is recommended that you convert your tff files to wfdb format. | [
"Read",
"values",
"from",
"a",
"tff",
"file"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/tff.py#L14-L57 | train | 216,263 |
MIT-LCP/wfdb-python | wfdb/io/tff.py | _rdheader | def _rdheader(fp):
"""
Read header info of the windaq file
"""
tag = None
# The '2' tag indicates the end of tags.
while tag != 2:
# For each header element, there is a tag indicating data type,
# followed by the data size, followed by the data itself. 0's
# pad the content to the nearest 4 bytes. If data_len=0, no pad.
tag = struct.unpack('>H', fp.read(2))[0]
data_size = struct.unpack('>H', fp.read(2))[0]
pad_len = (4 - (data_size % 4)) % 4
pos = fp.tell()
# Currently, most tags will be ignored...
# storage method
if tag == 1001:
storage_method = struct.unpack('B', fp.read(1))[0]
storage_method = {0:'recording', 1:'manual', 2:'online'}[storage_method]
# fs, uint16
elif tag == 1003:
fs = struct.unpack('>H', fp.read(2))[0]
# sensor type
elif tag == 1007:
# Each byte contains information for one channel
n_sig = data_size
channel_data = struct.unpack('>%dB' % data_size, fp.read(data_size))
# The documentation states: "0 : Channel is not used"
# This means the samples are NOT saved.
channel_map = ((1, 1, 'emg'),
(15, 30, 'goniometer'), (31, 46, 'accelerometer'),
(47, 62, 'inclinometer'),
(63, 78, 'polar_interface'), (79, 94, 'ecg'),
(95, 110, 'torque'), (111, 126, 'gyrometer'),
(127, 142, 'sensor'))
sig_name = []
# The number range that the data lies between gives the
# channel
for data in channel_data:
# Default case if byte value falls outside of channel map
base_name = 'unknown'
# Unused channel
if data == 0:
n_sig -= 1
continue
for item in channel_map:
if item[0] <= data <= item[1]:
base_name = item[2]
break
existing_count = [base_name in name for name in sig_name].count(True)
sig_name.append('%s_%d' % (base_name, existing_count))
# Display scale. Probably not useful.
elif tag == 1009:
# 100, 500, 1000, 2500, or 8500uV
display_scale = struct.unpack('>I', fp.read(4))[0]
# sample format, uint8
elif tag == 3:
sample_fmt = struct.unpack('B', fp.read(1))[0]
is_signed = bool(sample_fmt >> 7)
# ie. 8 or 16 bits
bit_width = sample_fmt & 127
# Measurement start time - seconds from 1.1.1970 UTC
elif tag == 101:
n_seconds = struct.unpack('>I', fp.read(4))[0]
base_datetime = datetime.datetime.utcfromtimestamp(n_seconds)
base_date = base_datetime.date()
base_time = base_datetime.time()
# Measurement start time - minutes from UTC
elif tag == 102:
n_minutes = struct.unpack('>h', fp.read(2))[0]
# Go to the next tag
fp.seek(pos + data_size + pad_len)
header_size = fp.tell()
# For interpreting the waveforms
fields = {'fs':fs, 'n_sig':n_sig, 'sig_name':sig_name,
'base_time':base_time, 'base_date':base_date}
# For reading the signal samples
file_fields = {'header_size':header_size, 'n_sig':n_sig,
'bit_width':bit_width, 'is_signed':is_signed}
return fields, file_fields | python | def _rdheader(fp):
"""
Read header info of the windaq file
"""
tag = None
# The '2' tag indicates the end of tags.
while tag != 2:
# For each header element, there is a tag indicating data type,
# followed by the data size, followed by the data itself. 0's
# pad the content to the nearest 4 bytes. If data_len=0, no pad.
tag = struct.unpack('>H', fp.read(2))[0]
data_size = struct.unpack('>H', fp.read(2))[0]
pad_len = (4 - (data_size % 4)) % 4
pos = fp.tell()
# Currently, most tags will be ignored...
# storage method
if tag == 1001:
storage_method = struct.unpack('B', fp.read(1))[0]
storage_method = {0:'recording', 1:'manual', 2:'online'}[storage_method]
# fs, uint16
elif tag == 1003:
fs = struct.unpack('>H', fp.read(2))[0]
# sensor type
elif tag == 1007:
# Each byte contains information for one channel
n_sig = data_size
channel_data = struct.unpack('>%dB' % data_size, fp.read(data_size))
# The documentation states: "0 : Channel is not used"
# This means the samples are NOT saved.
channel_map = ((1, 1, 'emg'),
(15, 30, 'goniometer'), (31, 46, 'accelerometer'),
(47, 62, 'inclinometer'),
(63, 78, 'polar_interface'), (79, 94, 'ecg'),
(95, 110, 'torque'), (111, 126, 'gyrometer'),
(127, 142, 'sensor'))
sig_name = []
# The number range that the data lies between gives the
# channel
for data in channel_data:
# Default case if byte value falls outside of channel map
base_name = 'unknown'
# Unused channel
if data == 0:
n_sig -= 1
continue
for item in channel_map:
if item[0] <= data <= item[1]:
base_name = item[2]
break
existing_count = [base_name in name for name in sig_name].count(True)
sig_name.append('%s_%d' % (base_name, existing_count))
# Display scale. Probably not useful.
elif tag == 1009:
# 100, 500, 1000, 2500, or 8500uV
display_scale = struct.unpack('>I', fp.read(4))[0]
# sample format, uint8
elif tag == 3:
sample_fmt = struct.unpack('B', fp.read(1))[0]
is_signed = bool(sample_fmt >> 7)
# ie. 8 or 16 bits
bit_width = sample_fmt & 127
# Measurement start time - seconds from 1.1.1970 UTC
elif tag == 101:
n_seconds = struct.unpack('>I', fp.read(4))[0]
base_datetime = datetime.datetime.utcfromtimestamp(n_seconds)
base_date = base_datetime.date()
base_time = base_datetime.time()
# Measurement start time - minutes from UTC
elif tag == 102:
n_minutes = struct.unpack('>h', fp.read(2))[0]
# Go to the next tag
fp.seek(pos + data_size + pad_len)
header_size = fp.tell()
# For interpreting the waveforms
fields = {'fs':fs, 'n_sig':n_sig, 'sig_name':sig_name,
'base_time':base_time, 'base_date':base_date}
# For reading the signal samples
file_fields = {'header_size':header_size, 'n_sig':n_sig,
'bit_width':bit_width, 'is_signed':is_signed}
return fields, file_fields | [
"def",
"_rdheader",
"(",
"fp",
")",
":",
"tag",
"=",
"None",
"# The '2' tag indicates the end of tags.",
"while",
"tag",
"!=",
"2",
":",
"# For each header element, there is a tag indicating data type,",
"# followed by the data size, followed by the data itself. 0's",
"# pad the co... | Read header info of the windaq file | [
"Read",
"header",
"info",
"of",
"the",
"windaq",
"file"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/tff.py#L60-L139 | train | 216,264 |
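The header loop above is a classic tag-length-value parse: a big-endian uint16 tag, a uint16 payload size, the payload, then zero padding to a 4-byte boundary, with tag 2 terminating the stream. A minimal sketch over an in-memory buffer (the tag numbers and fs value are illustrative):

```python
import io
import struct

def read_tags(fp):
    """Parse >H tag, >H size, payload, then skip pad to a 4-byte boundary."""
    tags = {}
    while True:
        tag = struct.unpack('>H', fp.read(2))[0]
        data_size = struct.unpack('>H', fp.read(2))[0]
        pad_len = (4 - (data_size % 4)) % 4
        payload = fp.read(data_size)
        fp.read(pad_len)  # discard padding bytes
        if tag == 2:      # the '2' tag marks the end of tags
            return tags
        tags[tag] = payload

# Build a tiny buffer: tag 1003 (fs, uint16) then the terminating tag 2.
buf = struct.pack('>HH', 1003, 2) + struct.pack('>H', 500) + b'\x00\x00'
buf += struct.pack('>HH', 2, 0)
tags = read_tags(io.BytesIO(buf))
print(struct.unpack('>H', tags[1003])[0])  # 500
```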
MIT-LCP/wfdb-python | wfdb/io/tff.py | _rdsignal | def _rdsignal(fp, file_size, header_size, n_sig, bit_width, is_signed, cut_end):
"""
Read the signal
Parameters
----------
cut_end : bool, optional
If True, enables reading files which appear to terminate with an
incorrect number of samples (i.e. the final sample is not present
for all channels), by checking for and skipping the incomplete
trailing data. Enabling this option makes reading slower.
"""
# Cannot initially figure out signal length because there
# are escape sequences.
fp.seek(header_size)
signal_size = file_size - header_size
byte_width = int(bit_width / 8)
# numpy dtype
dtype = str(byte_width)
if is_signed:
dtype = 'i' + dtype
else:
dtype = 'u' + dtype
# big endian
dtype = '>' + dtype
# The maximum possible samples given the file size
# All channels must be present
max_samples = int(signal_size / byte_width)
max_samples = max_samples - max_samples % n_sig
# Output information
signal = np.empty(max_samples, dtype=dtype)
markers = []
triggers = []
# Number of (total) samples read
sample_num = 0
# Read one sample for all channels at a time
if cut_end:
stop_byte = file_size - n_sig * byte_width + 1
while fp.tell() < stop_byte:
chunk = fp.read(2)
sample_num = _get_sample(fp, chunk, n_sig, dtype, signal, markers, triggers, sample_num)
else:
while True:
chunk = fp.read(2)
if not chunk:
break
sample_num = _get_sample(fp, chunk, n_sig, dtype, signal, markers, triggers, sample_num)
# No more bytes to read. Reshape output arguments.
signal = signal[:sample_num]
signal = signal.reshape((-1, n_sig))
markers = np.array(markers, dtype='int')
triggers = np.array(triggers, dtype='int')
return signal, markers, triggers | python | def _rdsignal(fp, file_size, header_size, n_sig, bit_width, is_signed, cut_end):
"""
Read the signal
Parameters
----------
cut_end : bool, optional
If True, enables reading files which appear to terminate with an
incorrect number of samples (i.e. the final sample is not present
for all channels), by checking for and skipping the incomplete
trailing data. Enabling this option makes reading slower.
"""
# Cannot initially figure out signal length because there
# are escape sequences.
fp.seek(header_size)
signal_size = file_size - header_size
byte_width = int(bit_width / 8)
# numpy dtype
dtype = str(byte_width)
if is_signed:
dtype = 'i' + dtype
else:
dtype = 'u' + dtype
# big endian
dtype = '>' + dtype
# The maximum possible samples given the file size
# All channels must be present
max_samples = int(signal_size / byte_width)
max_samples = max_samples - max_samples % n_sig
# Output information
signal = np.empty(max_samples, dtype=dtype)
markers = []
triggers = []
# Number of (total) samples read
sample_num = 0
# Read one sample for all channels at a time
if cut_end:
stop_byte = file_size - n_sig * byte_width + 1
while fp.tell() < stop_byte:
chunk = fp.read(2)
sample_num = _get_sample(fp, chunk, n_sig, dtype, signal, markers, triggers, sample_num)
else:
while True:
chunk = fp.read(2)
if not chunk:
break
sample_num = _get_sample(fp, chunk, n_sig, dtype, signal, markers, triggers, sample_num)
# No more bytes to read. Reshape output arguments.
signal = signal[:sample_num]
signal = signal.reshape((-1, n_sig))
markers = np.array(markers, dtype='int')
triggers = np.array(triggers, dtype='int')
return signal, markers, triggers | [
"def",
"_rdsignal",
"(",
"fp",
",",
"file_size",
",",
"header_size",
",",
"n_sig",
",",
"bit_width",
",",
"is_signed",
",",
"cut_end",
")",
":",
"# Cannot initially figure out signal length because there",
"# are escape sequences.",
"fp",
".",
"seek",
"(",
"header_siz... | Read the signal
Parameters
----------
cut_end : bool, optional
If True, enables reading files which appear to terminate with an
incorrect number of samples (i.e. the final sample is not present
for all channels), by checking for and skipping the incomplete
trailing data. Enabling this option makes reading slower.
"Read",
"the",
"signal"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/tff.py#L142-L196 | train | 216,265 |
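The dtype string assembled above follows numpy's big-endian convention ('>i2', '>u1', etc.). The same construction, and the decoding it implies, can be sketched with the standard library alone (the widths and sample values are illustrative):

```python
import struct

def sample_dtype(bit_width, is_signed):
    """Build a numpy-style big-endian dtype string, e.g. '>i2' or '>u1'."""
    byte_width = bit_width // 8
    return '>' + ('i' if is_signed else 'u') + str(byte_width)

print(sample_dtype(16, True))   # '>i2'
print(sample_dtype(8, False))   # '>u1'

# Decode three big-endian signed 16-bit samples the way '>i2' would.
raw = struct.pack('>3h', -5, 0, 1234)
print(struct.unpack('>3h', raw))  # (-5, 0, 1234)
```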
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | xqrs_detect | def xqrs_detect(sig, fs, sampfrom=0, sampto='end', conf=None,
learn=True, verbose=True):
"""
Run the 'xqrs' qrs detection algorithm on a signal. See the
docstring of the XQRS class for algorithm details.
Parameters
----------
sig : numpy array
The input ecg signal to apply the qrs detection on.
fs : int or float
The sampling frequency of the input signal.
sampfrom : int, optional
The starting sample number to run the detection on.
sampto : int or 'end', optional
The final sample number to run the detection on. Set as 'end' to
run on the entire signal.
conf : XQRS.Conf object, optional
The configuration object specifying signal configuration
parameters. See the docstring of the XQRS.Conf class.
learn : bool, optional
Whether to apply learning on the signal before running the main
detection. If learning fails or is not conducted, the default
configuration parameters will be used to initialize these
variables.
verbose : bool, optional
Whether to display the stages and outcomes of the detection
process.
Returns
-------
qrs_inds : numpy array
The indices of the detected qrs complexes
Examples
--------
>>> import wfdb
>>> from wfdb import processing
>>> sig, fields = wfdb.rdsamp('sample-data/100', channels=[0])
>>> qrs_inds = processing.xqrs_detect(sig=sig[:,0], fs=fields['fs'])
"""
xqrs = XQRS(sig=sig, fs=fs, conf=conf)
xqrs.detect(sampfrom=sampfrom, sampto=sampto, verbose=verbose)
return xqrs.qrs_inds

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L603-L648 | train | 216,266
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | gqrs_detect

def gqrs_detect(sig=None, fs=None, d_sig=None, adc_gain=None, adc_zero=None,
threshold=1.0, hr=75, RRdelta=0.2, RRmin=0.28, RRmax=2.4,
QS=0.07, QT=0.35, RTmin=0.25, RTmax=0.33,
QRSa=750, QRSamin=130):
"""
Detect qrs locations in a single channel ecg. Functionally, a direct port
of the gqrs algorithm from the original wfdb package. Accepts either a
physical signal, or a digital signal with known adc_gain and adc_zero.
See the notes below for a summary of the program. This algorithm is not
being developed/supported.
Parameters
----------
sig : 1d numpy array, optional
The input physical signal. The detection algorithm which replicates
the original, works using digital samples, and this physical option is
provided as a convenient interface. If this is the specified input
signal, automatic adc is performed using 24 bit precision, to obtain
the `d_sig`, `adc_gain`, and `adc_zero` parameters. There may be minor
differences in detection results (ie. an occasional 1 sample
difference) between using `sig` and `d_sig`. To replicate the exact
output of the original gqrs algorithm, use the `d_sig` argument
instead.
fs : int, or float
The sampling frequency of the signal.
d_sig : 1d numpy array, optional
The input digital signal. If this is the specified input signal rather
than `sig`, the `adc_gain` and `adc_zero` parameters must be specified.
adc_gain : int, or float, optional
The analogue to digital gain of the signal (the number of adus per
physical unit).
adc_zero: int, optional
The value produced by the ADC given a 0 volt input.
threshold : int, or float, optional
The relative amplitude detection threshold. Used to initialize the peak
and qrs detection threshold.
hr : int, or float, optional
Typical heart rate, in beats per minute.
RRdelta : int or float, optional
Typical difference between successive RR intervals in seconds.
RRmin : int or float, optional
Minimum RR interval ("refractory period"), in seconds.
RRmax : int or float, optional
Maximum RR interval, in seconds. Thresholds will be adjusted if no
peaks are detected within this interval.
QS : int or float, optional
Typical QRS duration, in seconds.
QT : int or float, optional
Typical QT interval, in seconds.
RTmin : int or float, optional
Minimum interval between R and T peaks, in seconds.
RTmax : int or float, optional
Maximum interval between R and T peaks, in seconds.
QRSa : int or float, optional
Typical QRS peak-to-peak amplitude, in microvolts.
QRSamin : int or float, optional
Minimum QRS peak-to-peak amplitude, in microvolts.
Returns
-------
qrs_locs : numpy array
Detected qrs locations
Notes
-----
This function should not be used for signals with fs <= 50Hz
The algorithm theoretically works as follows:
- Load in configuration parameters. They are used to set/initialize the:
* allowed rr interval limits (fixed)
* initial recent rr interval (running)
* qrs width, used for detection filter widths (fixed)
* allowed rt interval limits (fixed)
* initial recent rt interval (running)
* initial peak amplitude detection threshold (running)
* initial qrs amplitude detection threshold (running)
* `Note`: this algorithm does not normalize signal amplitudes, and
hence is highly dependent on configuration amplitude parameters.
- Apply trapezoid low-pass filtering to the signal
- Convolve a QRS matched filter with the filtered signal
- Run the learning phase using a calculated signal length: detect qrs and
non-qrs peaks as in the main detection phase, without saving the qrs
locations. During this phase, running parameters of recent intervals
and peak/qrs thresholds are adjusted.
- Run the detection::
if a sample is bigger than its immediate neighbors and larger
than the peak detection threshold, it is a peak.
if it is further than RRmin from the previous qrs, and is a
*primary peak.
if it is further than 2 standard deviations from the
previous qrs, do a backsearch for a missed low amplitude
beat
return the primary peak between the current sample
and the previous qrs if any.
if it surpasses the qrs threshold, it is a qrs complex
save the qrs location.
update running rr and qrs amplitude parameters.
look for the qrs complex's t-wave and mark it if
found.
else if it is not a peak
lower the peak detection threshold if the last peak found
was more than RRmax ago, and not already at its minimum.
*A peak is secondary if there is a larger peak within its neighborhood
(time +- rrmin), or if it has been identified as a T-wave associated with a
previous primary peak. A peak is primary if it is largest in its neighborhood,
or if the only larger peaks are secondary.
The above describes how the algorithm should theoretically work, but there
are bugs which make the program contradict certain parts of its supposed
logic. A list of issues from the original C code, and hence this Python
implementation, can be found here:
https://github.com/bemoody/wfdb/issues/17
gqrs will not be supported/developed in this library.
Examples
--------
>>> import numpy as np
>>> import wfdb
>>> from wfdb import processing
>>> # Detect using a physical input signal
>>> record = wfdb.rdrecord('sample-data/100', channels=[0])
>>> qrs_locs = processing.gqrs_detect(record.p_signal[:,0], fs=record.fs)
>>> # Detect using a digital input signal
>>> record_2 = wfdb.rdrecord('sample-data/100', channels=[0], physical=False)
>>> qrs_locs_2 = processing.gqrs_detect(d_sig=record_2.d_signal[:,0],
fs=record_2.fs,
adc_gain=record_2.adc_gain[0],
adc_zero=record_2.adc_zero[0])
"""
# Perform adc if input signal is physical
if sig is not None:
record = Record(p_signal=sig.reshape([-1,1]), fmt=['24'])
record.set_d_features(do_adc=True)
d_sig = record.d_signal[:,0]
adc_zero = 0
adc_gain = record.adc_gain[0]
conf = GQRS.Conf(fs=fs, adc_gain=adc_gain, hr=hr, RRdelta=RRdelta, RRmin=RRmin,
RRmax=RRmax, QS=QS, QT=QT, RTmin=RTmin, RTmax=RTmax, QRSa=QRSa,
QRSamin=QRSamin, thresh=threshold)
gqrs = GQRS()
annotations = gqrs.detect(x=d_sig, conf=conf, adc_zero=adc_zero)
return np.array([a.time for a in annotations])

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L1123-L1278 | train | 216,267
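The peak rule quoted in the detection notes — a sample counts as a peak if it exceeds both immediate neighbours and the running detection threshold — can be sketched as follows. `find_peaks_above` is an illustrative helper, not part of the gqrs port:

```python
def find_peaks_above(x, thr):
    """Return indices i where x[i] beats both neighbours and exceeds thr."""
    inds = []
    for i in range(1, len(x) - 1):
        # Strictly greater than both neighbours, and above the threshold
        if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > thr:
            inds.append(i)
    return inds


peaks = find_peaks_above([0, 3, 1, 0, 5, 0], thr=2)  # indices 1 and 4
```

In the real detector the threshold is adaptive; here it is a fixed argument for clarity.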
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._set_conf

def _set_conf(self):
"""
Set configuration parameters from the Conf object into the detector
object.
Time values are converted to samples, and amplitude values are in mV.
"""
self.rr_init = 60 * self.fs / self.conf.hr_init
self.rr_max = 60 * self.fs / self.conf.hr_min
self.rr_min = 60 * self.fs / self.conf.hr_max
# Note: if qrs_width is odd, qrs_width == qrs_radius*2 + 1
self.qrs_width = int(self.conf.qrs_width * self.fs)
self.qrs_radius = int(self.conf.qrs_radius * self.fs)
self.qrs_thr_init = self.conf.qrs_thr_init
self.qrs_thr_min = self.conf.qrs_thr_min
self.ref_period = int(self.conf.ref_period * self.fs)
self.t_inspect_period = int(self.conf.t_inspect_period * self.fs)

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L126-L145 | train | 216,268
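The unit conversions performed in `_set_conf` are simple enough to check by hand. A standalone sketch — the sampling frequency and heart-rate limits below are illustrative values, not necessarily the `XQRS.Conf` defaults:

```python
fs = 250                                # sampling frequency, Hz (illustrative)
hr_init, hr_min, hr_max = 75, 25, 230   # beats per minute (illustrative)
qrs_width = 0.1                         # seconds

# Heart rates in beats/min become RR limits in samples:
rr_init = 60 * fs / hr_init   # samples between beats at 75 bpm
rr_max = 60 * fs / hr_min     # slowest allowed rhythm -> longest RR
rr_min = 60 * fs / hr_max     # fastest allowed rhythm -> shortest RR

# Durations in seconds become sample counts:
qrs_width_samps = int(qrs_width * fs)
```

Note the inversion: the *minimum* heart rate sets the *maximum* RR interval, and vice versa.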
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._bandpass

def _bandpass(self, fc_low=5, fc_high=20):
"""
Apply a bandpass filter onto the signal, and save the filtered
signal.
"""
self.fc_low = fc_low
self.fc_high = fc_high
b, a = signal.butter(2, [float(fc_low) * 2 / self.fs,
float(fc_high) * 2 / self.fs], 'pass')
self.sig_f = signal.filtfilt(b, a, self.sig[self.sampfrom:self.sampto],
axis=0)
# Save the passband gain (x2 due to double filtering)
self.filter_gain = get_filter_gain(b, a, np.mean([fc_low, fc_high]),
self.fs) * 2

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L148-L162 | train | 216,269
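A standalone sketch of the same zero-phase Butterworth bandpass, using `scipy.signal.freqz` in place of wfdb's `get_filter_gain` helper. The sampling frequency and probe frequencies are illustrative:

```python
import numpy as np
from scipy import signal

fs = 360                     # sampling frequency, Hz (illustrative)
fc_low, fc_high = 5, 20      # passband edges, Hz

# Order-2 Butterworth bandpass; cutoffs are normalized by Nyquist (fs/2),
# which is where the "* 2 / fs" factors in the code above come from.
b, a = signal.butter(2, [fc_low * 2 / fs, fc_high * 2 / fs], 'bandpass')

# Zero-phase filtering: filtfilt runs the filter forwards and backwards,
# which is why the code above doubles the single-pass gain.
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 60 * t)
y = signal.filtfilt(b, a, x)

# Single-pass gain at two probe frequencies (rad/sample = 2*pi*f/fs):
# 12 Hz sits inside the passband, 60 Hz well outside it.
w, h = signal.freqz(b, a, worN=2 * np.pi * np.array([12.0, 60.0]) / fs)
gain_12, gain_60 = np.abs(h)
```

The 12 Hz component passes nearly unchanged while the 60 Hz component is strongly attenuated.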
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._set_init_params

def _set_init_params(self, qrs_amp_recent, noise_amp_recent, rr_recent,
last_qrs_ind):
"""
Set initial online parameters
"""
self.qrs_amp_recent = qrs_amp_recent
self.noise_amp_recent = noise_amp_recent
# What happens if qrs_thr is calculated to be less than the explicit
# min threshold? Should print warning?
self.qrs_thr = max(0.25*self.qrs_amp_recent
+ 0.75*self.noise_amp_recent,
self.qrs_thr_min * self.transform_gain)
self.rr_recent = rr_recent
self.last_qrs_ind = last_qrs_ind
# No qrs detected initially
self.last_qrs_peak_num = None

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L314-L330 | train | 216,270
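The threshold initialization above can be isolated to show when the floor answers the comment's question. `init_qrs_thr` is an illustrative helper reproducing the `max(...)` expression, not a wfdb function:

```python
def init_qrs_thr(qrs_amp_recent, noise_amp_recent, qrs_thr_min, transform_gain):
    # Steady-state threshold, floored at the gain-scaled minimum so it can
    # never fall below the explicit minimum threshold.
    return max(0.25 * qrs_amp_recent + 0.75 * noise_amp_recent,
               qrs_thr_min * transform_gain)


# Normal case: the weighted combination wins
thr_a = init_qrs_thr(1.0, 0.1, qrs_thr_min=0.01, transform_gain=10)
# Low-amplitude case: the floor kicks in silently (no warning is printed)
thr_b = init_qrs_thr(0.1, 0.01, qrs_thr_min=0.01, transform_gain=10)
```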
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._set_default_init_params

def _set_default_init_params(self):
"""
Set initial running parameters using default values.
The steady state equation is:
`qrs_thr = 0.25*qrs_amp + 0.75*noise_amp`
Estimate that qrs amp is 10x noise amp, giving:
`qrs_thr = 0.325 * qrs_amp or 13/40 * qrs_amp`
"""
if self.verbose:
print('Initializing using default parameters')
# Multiply the specified ecg thresholds by the filter and mwi gain
# factors
qrs_thr_init = self.qrs_thr_init * self.transform_gain
qrs_thr_min = self.qrs_thr_min * self.transform_gain
qrs_amp = 27/40 * qrs_thr_init
noise_amp = qrs_amp / 10
rr_recent = self.rr_init
last_qrs_ind = 0
self._set_init_params(qrs_amp_recent=qrs_amp,
noise_amp_recent=noise_amp,
rr_recent=rr_recent,
last_qrs_ind=last_qrs_ind)
self.learned_init_params = False

python | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L333-L361 | train | 216,271
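The steady-state arithmetic in the docstring can be checked numerically with a quick worked sketch (the 0.13 mV starting threshold is an illustrative value):

```python
qrs_thr_init = 0.13                  # mV, illustrative starting threshold
qrs_amp = 27 / 40 * qrs_thr_init     # default QRS amplitude estimate
noise_amp = qrs_amp / 10             # assume noise amplitude is 10x smaller

# Steady-state equation from the docstring:
#   0.25*q + 0.75*(q/10) = 0.325*q = 13/40 * q
steady_thr = 0.25 * qrs_amp + 0.75 * noise_amp
```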
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._update_qrs

def _update_qrs(self, peak_num, backsearch=False):
"""
Update live qrs parameters. Adjust the recent rr-intervals and
qrs amplitudes, and the qrs threshold.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected
backsearch : bool, optional
Whether the qrs was found via backsearch
"""
i = self.peak_inds_i[peak_num]
# Update recent rr if the beat is consecutive (do this before
# updating self.last_qrs_ind)
rr_new = i - self.last_qrs_ind
if rr_new < self.rr_max:
self.rr_recent = 0.875*self.rr_recent + 0.125*rr_new
self.qrs_inds.append(i)
self.last_qrs_ind = i
# Peak number corresponding to last qrs
self.last_qrs_peak_num = self.peak_num
# qrs recent amplitude is adjusted twice as quickly if the peak
# was found via backsearch
if backsearch:
self.backsearch_qrs_inds.append(i)
self.qrs_amp_recent = (0.75*self.qrs_amp_recent
+ 0.25*self.sig_i[i])
else:
self.qrs_amp_recent = (0.875*self.qrs_amp_recent
+ 0.125*self.sig_i[i])
self.qrs_thr = max((0.25*self.qrs_amp_recent
+ 0.75*self.noise_amp_recent), self.qrs_thr_min)
return | python | def _update_qrs(self, peak_num, backsearch=False):
"""
Update live qrs parameters. Adjust the recent rr-intervals and
qrs amplitudes, and the qrs threshold.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected
backsearch: bool, optional
Whether the qrs was found via backsearch
"""
i = self.peak_inds_i[peak_num]
# Update recent rr if the beat is consecutive (do this before
# updating self.last_qrs_ind)
rr_new = i - self.last_qrs_ind
if rr_new < self.rr_max:
self.rr_recent = 0.875*self.rr_recent + 0.125*rr_new
self.qrs_inds.append(i)
self.last_qrs_ind = i
# Peak number corresponding to last qrs
self.last_qrs_peak_num = self.peak_num
# qrs recent amplitude is adjusted twice as quickly if the peak
# was found via backsearch
if backsearch:
self.backsearch_qrs_inds.append(i)
self.qrs_amp_recent = (0.75*self.qrs_amp_recent
+ 0.25*self.sig_i[i])
else:
self.qrs_amp_recent = (0.875*self.qrs_amp_recent
+ 0.125*self.sig_i[i])
self.qrs_thr = max((0.25*self.qrs_amp_recent
+ 0.75*self.noise_amp_recent), self.qrs_thr_min)
return | [
"def",
"_update_qrs",
"(",
"self",
",",
"peak_num",
",",
"backsearch",
"=",
"False",
")",
":",
"i",
"=",
"self",
".",
"peak_inds_i",
"[",
"peak_num",
"]",
"# Update recent rr if the beat is consecutive (do this before",
"# updating self.last_qrs_ind)",
"rr_new",
"=",
... | Update live qrs parameters. Adjust the recent rr-intervals and
qrs amplitudes, and the qrs threshold.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected
backsearch: bool, optional
Whether the qrs was found via backsearch | [
"Update",
"live",
"qrs",
"parameters",
".",
"Adjust",
"the",
"recent",
"rr",
"-",
"intervals",
"and",
"qrs",
"amplitudes",
"and",
"the",
"qrs",
"threshold",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L396-L435 | train | 216,272 |
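The update rules in `_update_qrs` are plain exponential moving averages plus a threshold recomputation. A minimal standalone sketch of that arithmetic (function and variable names here are illustrative, not part of the wfdb API):

```python
def update_qrs_params(rr_recent, qrs_amp_recent, noise_amp_recent,
                      qrs_thr_min, rr_new, amp, backsearch=False):
    # Recent RR interval: slow exponential moving average (1/8 weight).
    rr_recent = 0.875 * rr_recent + 0.125 * rr_new
    # The recent QRS amplitude adapts twice as fast for backsearch hits.
    if backsearch:
        qrs_amp_recent = 0.75 * qrs_amp_recent + 0.25 * amp
    else:
        qrs_amp_recent = 0.875 * qrs_amp_recent + 0.125 * amp
    # The detection threshold sits between the noise and QRS amplitude
    # estimates, floored at a configured minimum.
    qrs_thr = max(0.25 * qrs_amp_recent + 0.75 * noise_amp_recent,
                  qrs_thr_min)
    return rr_recent, qrs_amp_recent, qrs_thr
```

With a steady RR of 100 samples, QRS amplitude 10 and noise amplitude 2, the threshold settles at 0.25·10 + 0.75·2 = 4.0.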
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._is_twave | def _is_twave(self, peak_num):
"""
Check whether a segment is a t-wave. Compare the maximum gradient of
the filtered signal segment with that of the previous qrs segment.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected
"""
i = self.peak_inds_i[peak_num]
# Due to initialization parameters, last_qrs_ind may be negative.
# No way to check in this instance.
if self.last_qrs_ind - self.qrs_radius < 0:
return False
# Get half the qrs width of the signal to the left.
# Should this be squared?
sig_segment = normalize((self.sig_f[i - self.qrs_radius:i]
).reshape(-1, 1), axis=0)
last_qrs_segment = self.sig_f[self.last_qrs_ind - self.qrs_radius:
self.last_qrs_ind]
segment_slope = np.diff(sig_segment)
last_qrs_slope = np.diff(last_qrs_segment)
# Should we be using absolute values?
if max(segment_slope) < 0.5*max(abs(last_qrs_slope)):
return True
else:
return False | python | def _is_twave(self, peak_num):
"""
Check whether a segment is a t-wave. Compare the maximum gradient of
the filtered signal segment with that of the previous qrs segment.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected
"""
i = self.peak_inds_i[peak_num]
# Due to initialization parameters, last_qrs_ind may be negative.
# No way to check in this instance.
if self.last_qrs_ind - self.qrs_radius < 0:
return False
# Get half the qrs width of the signal to the left.
# Should this be squared?
sig_segment = normalize((self.sig_f[i - self.qrs_radius:i]
).reshape(-1, 1), axis=0)
last_qrs_segment = self.sig_f[self.last_qrs_ind - self.qrs_radius:
self.last_qrs_ind]
segment_slope = np.diff(sig_segment)
last_qrs_slope = np.diff(last_qrs_segment)
# Should we be using absolute values?
if max(segment_slope) < 0.5*max(abs(last_qrs_slope)):
return True
else:
return False | [
"def",
"_is_twave",
"(",
"self",
",",
"peak_num",
")",
":",
"i",
"=",
"self",
".",
"peak_inds_i",
"[",
"peak_num",
"]",
"# Due to initialization parameters, last_qrs_ind may be negative.",
"# No way to check in this instance.",
"if",
"self",
".",
"last_qrs_ind",
"-",
"s... | Check whether a segment is a t-wave. Compare the maximum gradient of
the filtered signal segment with that of the previous qrs segment.
Parameters
----------
peak_num : int
The peak number of the mwi signal where the qrs is detected | [
"Check",
"whether",
"a",
"segment",
"is",
"a",
"t",
"-",
"wave",
".",
"Compare",
"the",
"maximum",
"gradient",
"of",
"the",
"filtered",
"signal",
"segment",
"with",
"that",
"of",
"the",
"previous",
"qrs",
"segment",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L438-L470 | train | 216,273 |
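The T-wave check above reduces to a slope comparison between the candidate segment and the previous QRS segment. A hedged, numpy-only sketch of just that comparison (segment extraction and the normalization step are omitted; names are illustrative):

```python
import numpy as np

def looks_like_twave(segment, last_qrs_segment):
    # A candidate is classified as a T-wave when its steepest upward
    # slope is under half the steepest absolute slope of the last QRS.
    segment_slope = np.diff(np.asarray(segment, dtype=float))
    last_qrs_slope = np.diff(np.asarray(last_qrs_segment, dtype=float))
    return bool(np.max(segment_slope) < 0.5 * np.max(np.abs(last_qrs_slope)))
```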
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._update_noise | def _update_noise(self, peak_num):
"""
Update live noise parameters
"""
i = self.peak_inds_i[peak_num]
self.noise_amp_recent = (0.875*self.noise_amp_recent
+ 0.125*self.sig_i[i])
return | python | def _update_noise(self, peak_num):
"""
Update live noise parameters
"""
i = self.peak_inds_i[peak_num]
self.noise_amp_recent = (0.875*self.noise_amp_recent
+ 0.125*self.sig_i[i])
return | [
"def",
"_update_noise",
"(",
"self",
",",
"peak_num",
")",
":",
"i",
"=",
"self",
".",
"peak_inds_i",
"[",
"peak_num",
"]",
"self",
".",
"noise_amp_recent",
"=",
"(",
"0.875",
"*",
"self",
".",
"noise_amp_recent",
"+",
"0.125",
"*",
"self",
".",
"sig_i",... | Update live noise parameters | [
"Update",
"live",
"noise",
"parameters"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L472-L479 | train | 216,274 |
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._require_backsearch | def _require_backsearch(self):
"""
Determine whether a backsearch should be performed on prior peaks
"""
if self.peak_num == self.n_peaks_i-1:
# If we just return false, we may miss a chance to backsearch.
# Update this?
return False
next_peak_ind = self.peak_inds_i[self.peak_num + 1]
if next_peak_ind-self.last_qrs_ind > self.rr_recent*1.66:
return True
else:
return False | python | def _require_backsearch(self):
"""
Determine whether a backsearch should be performed on prior peaks
"""
if self.peak_num == self.n_peaks_i-1:
# If we just return false, we may miss a chance to backsearch.
# Update this?
return False
next_peak_ind = self.peak_inds_i[self.peak_num + 1]
if next_peak_ind-self.last_qrs_ind > self.rr_recent*1.66:
return True
else:
return False | [
"def",
"_require_backsearch",
"(",
"self",
")",
":",
"if",
"self",
".",
"peak_num",
"==",
"self",
".",
"n_peaks_i",
"-",
"1",
":",
"# If we just return false, we may miss a chance to backsearch.",
"# Update this?",
"return",
"False",
"next_peak_ind",
"=",
"self",
".",... | Determine whether a backsearch should be performed on prior peaks | [
"Determine",
"whether",
"a",
"backsearch",
"should",
"be",
"performed",
"on",
"prior",
"peaks"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L481-L495 | train | 216,275 |
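The backsearch trigger is a single inequality on the gap since the last detected QRS. Sketched standalone (argument names are illustrative):

```python
def require_backsearch(next_peak_ind, last_qrs_ind, rr_recent):
    # Trigger a backward search once the gap since the last QRS
    # exceeds 1.66 times the recent RR interval.
    return next_peak_ind - last_qrs_ind > 1.66 * rr_recent
```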
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS._run_detection | def _run_detection(self):
"""
Run the qrs detection after all signals and parameters have been
configured and set.
"""
if self.verbose:
print('Running QRS detection...')
# Detected qrs indices
self.qrs_inds = []
# qrs indices found via backsearch
self.backsearch_qrs_inds = []
# Iterate through mwi signal peak indices
for self.peak_num in range(self.n_peaks_i):
if self._is_qrs(self.peak_num):
self._update_qrs(self.peak_num)
else:
self._update_noise(self.peak_num)
# Before continuing to the next peak, do backsearch if
# necessary
if self._require_backsearch():
self._backsearch()
# Detected indices are relative to starting sample
if self.qrs_inds:
self.qrs_inds = np.array(self.qrs_inds) + self.sampfrom
else:
self.qrs_inds = np.array(self.qrs_inds)
if self.verbose:
print('QRS detection complete.') | python | def _run_detection(self):
"""
Run the qrs detection after all signals and parameters have been
configured and set.
"""
if self.verbose:
print('Running QRS detection...')
# Detected qrs indices
self.qrs_inds = []
# qrs indices found via backsearch
self.backsearch_qrs_inds = []
# Iterate through mwi signal peak indices
for self.peak_num in range(self.n_peaks_i):
if self._is_qrs(self.peak_num):
self._update_qrs(self.peak_num)
else:
self._update_noise(self.peak_num)
# Before continuing to the next peak, do backsearch if
# necessary
if self._require_backsearch():
self._backsearch()
# Detected indices are relative to starting sample
if self.qrs_inds:
self.qrs_inds = np.array(self.qrs_inds) + self.sampfrom
else:
self.qrs_inds = np.array(self.qrs_inds)
if self.verbose:
print('QRS detection complete.') | [
"def",
"_run_detection",
"(",
"self",
")",
":",
"if",
"self",
".",
"verbose",
":",
"print",
"(",
"'Running QRS detection...'",
")",
"# Detected qrs indices",
"self",
".",
"qrs_inds",
"=",
"[",
"]",
"# qrs indices found via backsearch",
"self",
".",
"backsearch_qrs_i... | Run the qrs detection after all signals and parameters have been
configured and set. | [
"Run",
"the",
"qrs",
"detection",
"after",
"all",
"signals",
"and",
"parameters",
"have",
"been",
"configured",
"and",
"set",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L510-L543 | train | 216,276 |
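The control flow of `_run_detection` — classify each peak, collect QRS indices, then offset by the starting sample — can be sketched with the classifier abstracted away (this skeleton is illustrative only; the real method also runs noise updates and backsearch):

```python
import numpy as np

def run_detection(peak_classifier, n_peaks, sampfrom=0):
    # peak_classifier(peak_num) returns a sample index for a QRS peak,
    # or None for a noise peak (hypothetical callback for this sketch).
    qrs_inds = []
    for peak_num in range(n_peaks):
        ind = peak_classifier(peak_num)
        if ind is not None:
            qrs_inds.append(ind)
    # Detected indices are relative to the starting sample.
    if qrs_inds:
        return np.array(qrs_inds) + sampfrom
    return np.array(qrs_inds)
```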
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | XQRS.detect | def detect(self, sampfrom=0, sampto='end', learn=True, verbose=True):
"""
Detect qrs locations between two samples.
Parameters
----------
sampfrom : int, optional
The starting sample number to run the detection on.
sampto : int, optional
The final sample number to run the detection on. Set as
'end' to run on the entire signal.
learn : bool, optional
Whether to apply learning on the signal before running the
main detection. If learning fails or is not conducted, the
default configuration parameters will be used to initialize
these variables. See the `XQRS._learn_init_params` docstring
for details.
verbose : bool, optional
Whether to display the stages and outcomes of the detection
process.
"""
if sampfrom < 0:
raise ValueError("'sampfrom' cannot be negative")
self.sampfrom = sampfrom
if sampto == 'end':
sampto = self.sig_len
elif sampto > self.sig_len:
raise ValueError("'sampto' cannot exceed the signal length")
self.sampto = sampto
self.verbose = verbose
# Don't attempt to run on a flat signal
if np.max(self.sig) == np.min(self.sig):
self.qrs_inds = np.empty(0)
if self.verbose:
print('Flat signal. Detection skipped.')
return
# Get/set signal configuration fields from Conf object
self._set_conf()
# Bandpass filter the signal
self._bandpass()
# Compute moving wave integration of filtered signal
self._mwi()
# Initialize the running parameters
if learn:
self._learn_init_params()
else:
self._set_default_init_params()
# Run the detection
self._run_detection() | python | def detect(self, sampfrom=0, sampto='end', learn=True, verbose=True):
"""
Detect qrs locations between two samples.
Parameters
----------
sampfrom : int, optional
The starting sample number to run the detection on.
sampto : int, optional
The final sample number to run the detection on. Set as
'end' to run on the entire signal.
learn : bool, optional
Whether to apply learning on the signal before running the
main detection. If learning fails or is not conducted, the
default configuration parameters will be used to initialize
these variables. See the `XQRS._learn_init_params` docstring
for details.
verbose : bool, optional
Whether to display the stages and outcomes of the detection
process.
"""
if sampfrom < 0:
raise ValueError("'sampfrom' cannot be negative")
self.sampfrom = sampfrom
if sampto == 'end':
sampto = self.sig_len
elif sampto > self.sig_len:
raise ValueError("'sampto' cannot exceed the signal length")
self.sampto = sampto
self.verbose = verbose
# Don't attempt to run on a flat signal
if np.max(self.sig) == np.min(self.sig):
self.qrs_inds = np.empty(0)
if self.verbose:
print('Flat signal. Detection skipped.')
return
# Get/set signal configuration fields from Conf object
self._set_conf()
# Bandpass filter the signal
self._bandpass()
# Compute moving wave integration of filtered signal
self._mwi()
# Initialize the running parameters
if learn:
self._learn_init_params()
else:
self._set_default_init_params()
# Run the detection
self._run_detection() | [
"def",
"detect",
"(",
"self",
",",
"sampfrom",
"=",
"0",
",",
"sampto",
"=",
"'end'",
",",
"learn",
"=",
"True",
",",
"verbose",
"=",
"True",
")",
":",
"if",
"sampfrom",
"<",
"0",
":",
"raise",
"ValueError",
"(",
"\"'sampfrom' cannot be negative\"",
")",... | Detect qrs locations between two samples.
Parameters
----------
sampfrom : int, optional
The starting sample number to run the detection on.
sampto : int, optional
The final sample number to run the detection on. Set as
'end' to run on the entire signal.
learn : bool, optional
Whether to apply learning on the signal before running the
main detection. If learning fails or is not conducted, the
default configuration parameters will be used to initialize
these variables. See the `XQRS._learn_init_params` docstring
for details.
verbose : bool, optional
Whether to display the stages and outcomes of the detection
process. | [
"Detect",
"qrs",
"locations",
"between",
"two",
"samples",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L546-L600 | train | 216,277 |
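The argument validation at the top of `detect` is worth isolating: `sampto='end'` is resolved to the signal length, and out-of-range values raise. A small sketch mirroring those checks (the helper name is made up for illustration):

```python
def resolve_sample_range(sig_len, sampfrom=0, sampto='end'):
    # Mirrors the sampfrom/sampto checks performed in XQRS.detect.
    if sampfrom < 0:
        raise ValueError("'sampfrom' cannot be negative")
    if sampto == 'end':
        sampto = sig_len
    elif sampto > sig_len:
        raise ValueError("'sampto' cannot exceed the signal length")
    return sampfrom, sampto
```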
MIT-LCP/wfdb-python | wfdb/processing/qrs.py | GQRS.detect | def detect(self, x, conf, adc_zero):
"""
Run detection. x is digital signal
"""
self.c = conf
self.annotations = []
self.sample_valid = False
if len(x) < 1:
return []
self.x = x
self.adc_zero = adc_zero
self.qfv = np.zeros((self.c._BUFLN), dtype="int64")
self.smv = np.zeros((self.c._BUFLN), dtype="int64")
self.v1 = 0
t0 = 0
self.tf = len(x) - 1
self.t = 0 - self.c.dt4
self.annot = GQRS.Annotation(0, "NOTE", 0, 0)
    # Circular buffer of Peaks
first_peak = GQRS.Peak(0, 0, 0)
tmp = first_peak
for _ in range(1, self.c._NPEAKS):
tmp.next_peak = GQRS.Peak(0, 0, 0)
tmp.next_peak.prev_peak = tmp
tmp = tmp.next_peak
tmp.next_peak = first_peak
first_peak.prev_peak = tmp
self.current_peak = first_peak
if self.c.spm > self.c._BUFLN:
if self.tf - t0 > self.c._BUFLN:
tf_learn = t0 + self.c._BUFLN - self.c.dt4
else:
tf_learn = self.tf - self.c.dt4
else:
if self.tf - t0 > self.c.spm:
tf_learn = t0 + self.c.spm - self.c.dt4
else:
tf_learn = self.tf - self.c.dt4
self.countdown = -1
self.state = "LEARNING"
self.gqrs(t0, tf_learn)
self.rewind_gqrs()
self.state = "RUNNING"
self.t = t0 - self.c.dt4
self.gqrs(t0, self.tf)
return self.annotations | python | def detect(self, x, conf, adc_zero):
"""
Run detection. x is digital signal
"""
self.c = conf
self.annotations = []
self.sample_valid = False
if len(x) < 1:
return []
self.x = x
self.adc_zero = adc_zero
self.qfv = np.zeros((self.c._BUFLN), dtype="int64")
self.smv = np.zeros((self.c._BUFLN), dtype="int64")
self.v1 = 0
t0 = 0
self.tf = len(x) - 1
self.t = 0 - self.c.dt4
self.annot = GQRS.Annotation(0, "NOTE", 0, 0)
    # Circular buffer of Peaks
first_peak = GQRS.Peak(0, 0, 0)
tmp = first_peak
for _ in range(1, self.c._NPEAKS):
tmp.next_peak = GQRS.Peak(0, 0, 0)
tmp.next_peak.prev_peak = tmp
tmp = tmp.next_peak
tmp.next_peak = first_peak
first_peak.prev_peak = tmp
self.current_peak = first_peak
if self.c.spm > self.c._BUFLN:
if self.tf - t0 > self.c._BUFLN:
tf_learn = t0 + self.c._BUFLN - self.c.dt4
else:
tf_learn = self.tf - self.c.dt4
else:
if self.tf - t0 > self.c.spm:
tf_learn = t0 + self.c.spm - self.c.dt4
else:
tf_learn = self.tf - self.c.dt4
self.countdown = -1
self.state = "LEARNING"
self.gqrs(t0, tf_learn)
self.rewind_gqrs()
self.state = "RUNNING"
self.t = t0 - self.c.dt4
self.gqrs(t0, self.tf)
return self.annotations | [
"def",
"detect",
"(",
"self",
",",
"x",
",",
"conf",
",",
"adc_zero",
")",
":",
"self",
".",
"c",
"=",
"conf",
"self",
".",
"annotations",
"=",
"[",
"]",
"self",
".",
"sample_valid",
"=",
"False",
"if",
"len",
"(",
"x",
")",
"<",
"1",
":",
"ret... | Run detection. x is digital signal | [
"Run",
"detection",
".",
"x",
"is",
"digital",
"signal"
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/qrs.py#L750-L806 | train | 216,278 |
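`GQRS.detect` builds a circular doubly linked buffer of `Peak` objects before detection starts. The construction can be reproduced in isolation (a minimal `Peak` stand-in is defined here for the sketch):

```python
class Peak:
    def __init__(self, time, amp, ptype):
        self.time, self.amp, self.type = time, amp, ptype
        self.next_peak = None
        self.prev_peak = None

def build_peak_ring(n_peaks):
    # Circular doubly linked list of peaks, as wired up in GQRS.detect:
    # each node links forward and backward, and the last wraps to the first.
    first_peak = Peak(0, 0, 0)
    tmp = first_peak
    for _ in range(1, n_peaks):
        tmp.next_peak = Peak(0, 0, 0)
        tmp.next_peak.prev_peak = tmp
        tmp = tmp.next_peak
    tmp.next_peak = first_peak
    first_peak.prev_peak = tmp
    return first_peak
```

Walking `next_peak` exactly `n_peaks` times returns to the starting node, which is what makes the buffer reusable without bounds checks.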
MIT-LCP/wfdb-python | wfdb/processing/hr.py | compute_hr | def compute_hr(sig_len, qrs_inds, fs):
"""
Compute instantaneous heart rate from peak indices.
Parameters
----------
sig_len : int
The length of the corresponding signal
qrs_inds : numpy array
The qrs index locations
fs : int, or float
The corresponding signal's sampling frequency.
Returns
-------
heart_rate : numpy array
An array of the instantaneous heart rate, with the length of the
corresponding signal. Contains numpy.nan where heart rate could
not be computed.
"""
heart_rate = np.full(sig_len, np.nan, dtype='float32')
if len(qrs_inds) < 2:
return heart_rate
for i in range(0, len(qrs_inds)-2):
a = qrs_inds[i]
b = qrs_inds[i+1]
c = qrs_inds[i+2]
rr = (b-a) * (1.0 / fs) * 1000
hr = 60000.0 / rr
heart_rate[b+1:c+1] = hr
heart_rate[qrs_inds[-1]:] = heart_rate[qrs_inds[-1]]
return heart_rate | python | def compute_hr(sig_len, qrs_inds, fs):
"""
Compute instantaneous heart rate from peak indices.
Parameters
----------
sig_len : int
The length of the corresponding signal
qrs_inds : numpy array
The qrs index locations
fs : int, or float
The corresponding signal's sampling frequency.
Returns
-------
heart_rate : numpy array
An array of the instantaneous heart rate, with the length of the
corresponding signal. Contains numpy.nan where heart rate could
not be computed.
"""
heart_rate = np.full(sig_len, np.nan, dtype='float32')
if len(qrs_inds) < 2:
return heart_rate
for i in range(0, len(qrs_inds)-2):
a = qrs_inds[i]
b = qrs_inds[i+1]
c = qrs_inds[i+2]
rr = (b-a) * (1.0 / fs) * 1000
hr = 60000.0 / rr
heart_rate[b+1:c+1] = hr
heart_rate[qrs_inds[-1]:] = heart_rate[qrs_inds[-1]]
return heart_rate | [
"def",
"compute_hr",
"(",
"sig_len",
",",
"qrs_inds",
",",
"fs",
")",
":",
"heart_rate",
"=",
"np",
".",
"full",
"(",
"sig_len",
",",
"np",
".",
"nan",
",",
"dtype",
"=",
"'float32'",
")",
"if",
"len",
"(",
"qrs_inds",
")",
"<",
"2",
":",
"return",... | Compute instantaneous heart rate from peak indices.
Parameters
----------
sig_len : int
The length of the corresponding signal
qrs_inds : numpy array
The qrs index locations
fs : int, or float
The corresponding signal's sampling frequency.
Returns
-------
heart_rate : numpy array
An array of the instantaneous heart rate, with the length of the
corresponding signal. Contains numpy.nan where heart rate could
not be computed. | [
"Compute",
"instantaneous",
"heart",
"rate",
"from",
"peak",
"indices",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/hr.py#L4-L40 | train | 216,279 |
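To show the expected behavior of `compute_hr`, here is the same logic restated so it runs in isolation (a restatement for demonstration, not the installed wfdb version), followed by a small usage demo with illustrative values:

```python
import numpy as np

def compute_hr(sig_len, qrs_inds, fs):
    heart_rate = np.full(sig_len, np.nan, dtype='float32')
    if len(qrs_inds) < 2:
        return heart_rate
    for i in range(len(qrs_inds) - 2):
        a, b, c = qrs_inds[i], qrs_inds[i + 1], qrs_inds[i + 2]
        rr_ms = (b - a) * 1000.0 / fs          # RR interval in milliseconds
        heart_rate[b + 1:c + 1] = 60000.0 / rr_ms
    # Extend the last computed rate to the end of the signal.
    heart_rate[qrs_inds[-1]:] = heart_rate[qrs_inds[-1]]
    return heart_rate

# Beats one second apart at fs=100 -> 60 bpm between the 2nd and 3rd beat.
hr = compute_hr(400, np.array([100, 200, 300]), fs=100)
```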
MIT-LCP/wfdb-python | wfdb/processing/hr.py | calc_rr | def calc_rr(qrs_locs, fs=None, min_rr=None, max_rr=None, qrs_units='samples',
rr_units='samples'):
"""
Compute rr intervals from qrs indices by extracting the time
differences.
Parameters
----------
qrs_locs : numpy array
1d array of qrs locations.
fs : float, optional
Sampling frequency of the original signal. Needed if
`qrs_units` does not match `rr_units`.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
from the returned rr intervals. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
from the returned rr intervals. Units are in `rr_units`.
qrs_units : str, optional
The time unit of `qrs_locs`. Must be one of: 'samples',
'seconds'.
rr_units : str, optional
The desired time unit of the returned rr intervals in. Must be
one of: 'samples', 'seconds'.
Returns
-------
rr : numpy array
Array of rr intervals.
"""
rr = np.diff(qrs_locs)
# Empty input qrs_locs
if not len(rr):
return rr
# Convert to desired output rr units if needed
if qrs_units == 'samples' and rr_units == 'seconds':
rr = rr / fs
elif qrs_units == 'seconds' and rr_units == 'samples':
rr = rr * fs
# Apply rr interval filters
if min_rr is not None:
rr = rr[rr > min_rr]
if max_rr is not None:
rr = rr[rr < max_rr]
return rr | python | def calc_rr(qrs_locs, fs=None, min_rr=None, max_rr=None, qrs_units='samples',
rr_units='samples'):
"""
Compute rr intervals from qrs indices by extracting the time
differences.
Parameters
----------
qrs_locs : numpy array
1d array of qrs locations.
fs : float, optional
Sampling frequency of the original signal. Needed if
`qrs_units` does not match `rr_units`.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
from the returned rr intervals. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
from the returned rr intervals. Units are in `rr_units`.
qrs_units : str, optional
The time unit of `qrs_locs`. Must be one of: 'samples',
'seconds'.
rr_units : str, optional
The desired time unit of the returned rr intervals in. Must be
one of: 'samples', 'seconds'.
Returns
-------
rr : numpy array
Array of rr intervals.
"""
rr = np.diff(qrs_locs)
# Empty input qrs_locs
if not len(rr):
return rr
# Convert to desired output rr units if needed
if qrs_units == 'samples' and rr_units == 'seconds':
rr = rr / fs
elif qrs_units == 'seconds' and rr_units == 'samples':
rr = rr * fs
# Apply rr interval filters
if min_rr is not None:
rr = rr[rr > min_rr]
if max_rr is not None:
rr = rr[rr < max_rr]
return rr | [
"def",
"calc_rr",
"(",
"qrs_locs",
",",
"fs",
"=",
"None",
",",
"min_rr",
"=",
"None",
",",
"max_rr",
"=",
"None",
",",
"qrs_units",
"=",
"'samples'",
",",
"rr_units",
"=",
"'samples'",
")",
":",
"rr",
"=",
"np",
".",
"diff",
"(",
"qrs_locs",
")",
... | Compute rr intervals from qrs indices by extracting the time
differences.
Parameters
----------
qrs_locs : numpy array
1d array of qrs locations.
fs : float, optional
Sampling frequency of the original signal. Needed if
`qrs_units` does not match `rr_units`.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
from the returned rr intervals. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
from the returned rr intervals. Units are in `rr_units`.
qrs_units : str, optional
The time unit of `qrs_locs`. Must be one of: 'samples',
'seconds'.
rr_units : str, optional
The desired time unit of the returned rr intervals in. Must be
one of: 'samples', 'seconds'.
Returns
-------
rr : numpy array
Array of rr intervals. | [
"Compute",
"rr",
"intervals",
"from",
"qrs",
"indices",
"by",
"extracting",
"the",
"time",
"differences",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/hr.py#L43-L94 | train | 216,280 |
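A runnable restatement of `calc_rr` with a usage demo (restated here for isolation; the sample values are illustrative):

```python
import numpy as np

def calc_rr(qrs_locs, fs=None, min_rr=None, max_rr=None,
            qrs_units='samples', rr_units='samples'):
    rr = np.diff(qrs_locs)
    if not len(rr):
        return rr
    # Convert between sample and second units when they differ.
    if qrs_units == 'samples' and rr_units == 'seconds':
        rr = rr / fs
    elif qrs_units == 'seconds' and rr_units == 'samples':
        rr = rr * fs
    # Drop intervals outside the allowed range.
    if min_rr is not None:
        rr = rr[rr > min_rr]
    if max_rr is not None:
        rr = rr[rr < max_rr]
    return rr

# QRS at samples 0, 100, 250 with fs=100 Hz -> RR intervals of 1.0 s, 1.5 s.
rr_sec = calc_rr(np.array([0, 100, 250]), fs=100, rr_units='seconds')
```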
MIT-LCP/wfdb-python | wfdb/processing/hr.py | calc_mean_hr | def calc_mean_hr(rr, fs=None, min_rr=None, max_rr=None, rr_units='samples'):
"""
Compute mean heart rate in beats per minute, from a set of rr
intervals. Returns 0 if rr is empty.
Parameters
----------
rr : numpy array
Array of rr intervals.
fs : int, or float
The corresponding signal's sampling frequency. Required if
        'rr_units' == 'samples'.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
when calculating the heart rate. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
when calculating the heart rate. Units are in `rr_units`.
rr_units : str, optional
The time units of the input rr intervals. Must be one of:
'samples', 'seconds'.
Returns
-------
mean_hr : float
The mean heart rate in beats per minute
"""
if not len(rr):
return 0
if min_rr is not None:
rr = rr[rr > min_rr]
if max_rr is not None:
rr = rr[rr < max_rr]
mean_rr = np.mean(rr)
mean_hr = 60 / mean_rr
# Convert to bpm
if rr_units == 'samples':
mean_hr = mean_hr * fs
return mean_hr | python | def calc_mean_hr(rr, fs=None, min_rr=None, max_rr=None, rr_units='samples'):
"""
Compute mean heart rate in beats per minute, from a set of rr
intervals. Returns 0 if rr is empty.
Parameters
----------
rr : numpy array
Array of rr intervals.
fs : int, or float
The corresponding signal's sampling frequency. Required if
        'rr_units' == 'samples'.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
when calculating the heart rate. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
when calculating the heart rate. Units are in `rr_units`.
rr_units : str, optional
The time units of the input rr intervals. Must be one of:
'samples', 'seconds'.
Returns
-------
mean_hr : float
The mean heart rate in beats per minute
"""
if not len(rr):
return 0
if min_rr is not None:
rr = rr[rr > min_rr]
if max_rr is not None:
rr = rr[rr < max_rr]
mean_rr = np.mean(rr)
mean_hr = 60 / mean_rr
# Convert to bpm
if rr_units == 'samples':
mean_hr = mean_hr * fs
return mean_hr | [
"def",
"calc_mean_hr",
"(",
"rr",
",",
"fs",
"=",
"None",
",",
"min_rr",
"=",
"None",
",",
"max_rr",
"=",
"None",
",",
"rr_units",
"=",
"'samples'",
")",
":",
"if",
"not",
"len",
"(",
"rr",
")",
":",
"return",
"0",
"if",
"min_rr",
"is",
"not",
"N... | Compute mean heart rate in beats per minute, from a set of rr
intervals. Returns 0 if rr is empty.
Parameters
----------
rr : numpy array
Array of rr intervals.
fs : int, or float
The corresponding signal's sampling frequency. Required if
    'rr_units' == 'samples'.
min_rr : float, optional
The minimum allowed rr interval. Values below this are excluded
when calculating the heart rate. Units are in `rr_units`.
max_rr : float, optional
The maximum allowed rr interval. Values above this are excluded
when calculating the heart rate. Units are in `rr_units`.
rr_units : str, optional
The time units of the input rr intervals. Must be one of:
'samples', 'seconds'.
Returns
-------
mean_hr : float
The mean heart rate in beats per minute | [
"Compute",
"mean",
"heart",
"rate",
"in",
"beats",
"per",
"minute",
"from",
"a",
"set",
"of",
"rr",
"intervals",
".",
"Returns",
"0",
"if",
"rr",
"is",
"empty",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/hr.py#L97-L142 | train | 216,281 |
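A runnable restatement of `calc_mean_hr` with a usage demo (restated for isolation; input values are illustrative):

```python
import numpy as np

def calc_mean_hr(rr, fs=None, min_rr=None, max_rr=None, rr_units='samples'):
    rr = np.asarray(rr)
    if not len(rr):
        return 0
    if min_rr is not None:
        rr = rr[rr > min_rr]
    if max_rr is not None:
        rr = rr[rr < max_rr]
    mean_hr = 60 / np.mean(rr)
    # Sample-unit intervals need fs to convert the rate to beats per minute.
    if rr_units == 'samples':
        mean_hr = mean_hr * fs
    return mean_hr

# RR intervals of 100 samples at fs=100 Hz -> 1 s per beat -> 60 bpm.
mean_hr = calc_mean_hr([100, 100, 100], fs=100)
```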
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | compare_annotations | def compare_annotations(ref_sample, test_sample, window_width, signal=None):
"""
Compare a set of reference annotation locations against a set of
test annotation locations.
See the Comparitor class docstring for more information.
Parameters
----------
ref_sample : 1d numpy array
Array of reference sample locations
test_sample : 1d numpy array
Array of test sample locations to compare
window_width : int
The maximum absolute difference in sample numbers that is
permitted for matching annotations.
signal : 1d numpy array, optional
The original signal of the two annotations. Only used for
plotting.
Returns
-------
comparitor : Comparitor object
Object containing parameters about the two sets of annotations
Examples
--------
>>> import wfdb
>>> from wfdb import processing
>>> sig, fields = wfdb.rdsamp('sample-data/100', channels=[0])
>>> ann_ref = wfdb.rdann('sample-data/100','atr')
>>> xqrs = processing.XQRS(sig=sig[:,0], fs=fields['fs'])
>>> xqrs.detect()
>>> comparitor = processing.compare_annotations(ann_ref.sample[1:],
xqrs.qrs_inds,
int(0.1 * fields['fs']),
sig[:,0])
>>> comparitor.print_summary()
>>> comparitor.plot()
"""
comparitor = Comparitor(ref_sample=ref_sample, test_sample=test_sample,
window_width=window_width, signal=signal)
comparitor.compare()
return comparitor | python | def compare_annotations(ref_sample, test_sample, window_width, signal=None):
"""
Compare a set of reference annotation locations against a set of
test annotation locations.
See the Comparitor class docstring for more information.
Parameters
----------
ref_sample : 1d numpy array
Array of reference sample locations
test_sample : 1d numpy array
Array of test sample locations to compare
window_width : int
The maximum absolute difference in sample numbers that is
permitted for matching annotations.
signal : 1d numpy array, optional
The original signal of the two annotations. Only used for
plotting.
Returns
-------
comparitor : Comparitor object
Object containing parameters about the two sets of annotations
Examples
--------
>>> import wfdb
>>> from wfdb import processing
>>> sig, fields = wfdb.rdsamp('sample-data/100', channels=[0])
>>> ann_ref = wfdb.rdann('sample-data/100','atr')
>>> xqrs = processing.XQRS(sig=sig[:,0], fs=fields['fs'])
>>> xqrs.detect()
>>> comparitor = processing.compare_annotations(ann_ref.sample[1:],
xqrs.qrs_inds,
int(0.1 * fields['fs']),
sig[:,0])
>>> comparitor.print_summary()
>>> comparitor.plot()
"""
comparitor = Comparitor(ref_sample=ref_sample, test_sample=test_sample,
window_width=window_width, signal=signal)
comparitor.compare()
return comparitor | [
"def",
"compare_annotations",
"(",
"ref_sample",
",",
"test_sample",
",",
"window_width",
",",
"signal",
"=",
"None",
")",
":",
"comparitor",
"=",
"Comparitor",
"(",
"ref_sample",
"=",
"ref_sample",
",",
"test_sample",
"=",
"test_sample",
",",
"window_width",
"=... | Compare a set of reference annotation locations against a set of
test annotation locations.
See the Comparitor class docstring for more information.
Parameters
----------
ref_sample : 1d numpy array
Array of reference sample locations
test_sample : 1d numpy array
Array of test sample locations to compare
window_width : int
The maximum absolute difference in sample numbers that is
permitted for matching annotations.
signal : 1d numpy array, optional
The original signal of the two annotations. Only used for
plotting.
Returns
-------
comparitor : Comparitor object
Object containing parameters about the two sets of annotations
Examples
--------
>>> import wfdb
>>> from wfdb import processing
>>> sig, fields = wfdb.rdsamp('sample-data/100', channels=[0])
>>> ann_ref = wfdb.rdann('sample-data/100','atr')
>>> xqrs = processing.XQRS(sig=sig[:,0], fs=fields['fs'])
>>> xqrs.detect()
>>> comparitor = processing.compare_annotations(ann_ref.sample[1:],
xqrs.qrs_inds,
int(0.1 * fields['fs']),
sig[:,0])
>>> comparitor.print_summary()
>>> comparitor.plot() | [
"Compare",
"a",
"set",
"of",
"reference",
"annotation",
"locations",
"against",
"a",
"set",
"of",
"test",
"annotation",
"locations",
"."
] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L334-L381 | train | 216,282 |
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | benchmark_mitdb | def benchmark_mitdb(detector, verbose=False, print_results=False):
"""
Benchmark a qrs detector against mitdb's records.
Parameters
----------
detector : function
The detector function.
verbose : bool, optional
The verbose option of the detector function.
print_results : bool, optional
Whether to print the overall performance, and the results for
each record.
Returns
-------
comparitors : dictionary
Dictionary of Comparitor objects run on the records, keyed on
the record names.
specificity : float
Aggregate specificity.
positive_predictivity : float
Aggregate positive_predictivity.
false_positive_rate : float
Aggregate false_positive_rate.
Notes
-----
TODO:
- remove non-qrs detections from reference annotations
- allow kwargs
Examples
--------
>>> import wfdb
    >>> from wfdb.processing import benchmark_mitdb, xqrs_detect
>>> comparitors, spec, pp, fpr = benchmark_mitdb(xqrs_detect)
"""
record_list = get_record_list('mitdb')
n_records = len(record_list)
# Function arguments for starmap
args = zip(record_list, n_records * [detector], n_records * [verbose])
# Run detector and compare against reference annotations for all
# records
with Pool(cpu_count() - 1) as p:
comparitors = p.starmap(benchmark_mitdb_record, args)
# Calculate aggregate stats
specificity = np.mean([c.specificity for c in comparitors])
positive_predictivity = np.mean(
[c.positive_predictivity for c in comparitors])
false_positive_rate = np.mean(
[c.false_positive_rate for c in comparitors])
comparitors = dict(zip(record_list, comparitors))
print('Benchmark complete')
if print_results:
print('\nOverall MITDB Performance - Specificity: %.4f, Positive Predictivity: %.4f, False Positive Rate: %.4f\n'
% (specificity, positive_predictivity, false_positive_rate))
for record_name in record_list:
print('Record %s:' % record_name)
comparitors[record_name].print_summary()
print('\n\n')
return comparitors, specificity, positive_predictivity, false_positive_rate | python | def benchmark_mitdb(detector, verbose=False, print_results=False):
"""
Benchmark a qrs detector against mitdb's records.
Parameters
----------
detector : function
The detector function.
verbose : bool, optional
The verbose option of the detector function.
print_results : bool, optional
Whether to print the overall performance, and the results for
each record.
Returns
-------
comparitors : dictionary
Dictionary of Comparitor objects run on the records, keyed on
the record names.
specificity : float
Aggregate specificity.
positive_predictivity : float
Aggregate positive_predictivity.
false_positive_rate : float
Aggregate false_positive_rate.
Notes
-----
TODO:
- remove non-qrs detections from reference annotations
- allow kwargs
Examples
--------
>>> import wfdb
>>> from wfdb.processing import benchmark_mitdb, xqrs_detect
>>> comparitors, spec, pp, fpr = benchmark_mitdb(xqrs_detect)
"""
record_list = get_record_list('mitdb')
n_records = len(record_list)
# Function arguments for starmap
args = zip(record_list, n_records * [detector], n_records * [verbose])
# Run detector and compare against reference annotations for all
# records
with Pool(cpu_count() - 1) as p:
comparitors = p.starmap(benchmark_mitdb_record, args)
# Calculate aggregate stats
specificity = np.mean([c.specificity for c in comparitors])
positive_predictivity = np.mean(
[c.positive_predictivity for c in comparitors])
false_positive_rate = np.mean(
[c.false_positive_rate for c in comparitors])
comparitors = dict(zip(record_list, comparitors))
print('Benchmark complete')
if print_results:
print('\nOverall MITDB Performance - Specificity: %.4f, Positive Predictivity: %.4f, False Positive Rate: %.4f\n'
% (specificity, positive_predictivity, false_positive_rate))
for record_name in record_list:
print('Record %s:' % record_name)
comparitors[record_name].print_summary()
print('\n\n')
return comparitors, specificity, positive_predictivity, false_positive_rate | ["def", "benchmark_mitdb", "(", "detector", ",", "verbose", "=", "False", ",", "print_results", "=", "False", ")", ":", "record_list", "=", "get_record_list", "(", "'mitdb'", ")", "n_records", "=", "len", "(", "record_list", ")", "# Function arguments for starmap", ... | Benchmark a qrs detector against mitdb's records.
Parameters
----------
detector : function
The detector function.
verbose : bool, optional
The verbose option of the detector function.
print_results : bool, optional
Whether to print the overall performance, and the results for
each record.
Returns
-------
comparitors : dictionary
Dictionary of Comparitor objects run on the records, keyed on
the record names.
specificity : float
Aggregate specificity.
positive_predictivity : float
Aggregate positive_predictivity.
false_positive_rate : float
Aggregate false_positive_rate.
Notes
-----
TODO:
- remove non-qrs detections from reference annotations
- allow kwargs
Examples
--------
>>> import wfdb
>>> from wfdb.processing import benchmark_mitdb, xqrs_detect
>>> comparitors, spec, pp, fpr = benchmark_mitdb(xqrs_detect) | ["Benchmark", "a", "qrs", "detector", "against", "mitdb", "s", "records", "."] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L384-L454 | train | 216,283
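The `benchmark_mitdb` function above builds one `(record, detector, verbose)` tuple per record and fans them out with `Pool.starmap`. A minimal, serial sketch of the same argument-zipping and aggregation pattern, using `itertools.starmap` so it stays self-contained (the `score_record`/`benchmark` names and the trivial detector are illustrative, not part of wfdb):

```python
from itertools import starmap

def score_record(record_name, detector, verbose=False):
    # Stand-in for benchmark_mitdb_record: run the detector on one record.
    result = detector(record_name)
    if verbose:
        print('Finished record %s' % record_name)
    return result

def benchmark(record_list, detector, verbose=False):
    # Build one (record, detector, verbose) tuple per record, exactly as
    # benchmark_mitdb does, then map score_record over the tuples. The real
    # code hands the same iterable to multiprocessing.Pool.starmap instead.
    n = len(record_list)
    args = zip(record_list, n * [detector], n * [verbose])
    results = list(starmap(score_record, args))
    # Key the per-record results on the record names, as benchmark_mitdb does.
    return dict(zip(record_list, results))
```

Handing the same `args` iterable to `multiprocessing.Pool.starmap` parallelises the run without changing any of the bookkeeping.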
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | benchmark_mitdb_record | def benchmark_mitdb_record(rec, detector, verbose):
"""
Benchmark a single mitdb record
"""
sig, fields = rdsamp(rec, pb_dir='mitdb', channels=[0])
ann_ref = rdann(rec, pb_dir='mitdb', extension='atr')
qrs_inds = detector(sig=sig[:,0], fs=fields['fs'], verbose=verbose)
comparitor = compare_annotations(ref_sample=ann_ref.sample[1:],
test_sample=qrs_inds,
window_width=int(0.1 * fields['fs']))
if verbose:
print('Finished record %s' % rec)
return comparitor | python | def benchmark_mitdb_record(rec, detector, verbose):
"""
Benchmark a single mitdb record
"""
sig, fields = rdsamp(rec, pb_dir='mitdb', channels=[0])
ann_ref = rdann(rec, pb_dir='mitdb', extension='atr')
qrs_inds = detector(sig=sig[:,0], fs=fields['fs'], verbose=verbose)
comparitor = compare_annotations(ref_sample=ann_ref.sample[1:],
test_sample=qrs_inds,
window_width=int(0.1 * fields['fs']))
if verbose:
print('Finished record %s' % rec)
return comparitor | ["def", "benchmark_mitdb_record", "(", "rec", ",", "detector", ",", "verbose", ")", ":", "sig", ",", "fields", "=", "rdsamp", "(", "rec", ",", "pb_dir", "=", "'mitdb'", ",", "channels", "=", "[", "0", "]", ")", "ann_ref", "=", "rdann", "(", "rec", ","... | Benchmark a single mitdb record | ["Benchmark", "a", "single", "mitdb", "record"] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L457-L471 | train | 216,284
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | Comparitor._calc_stats | def _calc_stats(self):
"""
Calculate performance statistics after the two sets of annotations
are compared.
Example:
-------------------
ref=500 test=480
{ 30 { 470 } 10 }
-------------------
tp = 470
fp = 10
fn = 30
specificity = 470 / 500
positive_predictivity = 470 / 480
false_positive_rate = 10 / 480
"""
# Reference annotation indices that were detected
self.matched_ref_inds = np.where(self.matching_sample_nums != -1)[0]
# Reference annotation indices that were missed
self.unmatched_ref_inds = np.where(self.matching_sample_nums == -1)[0]
# Test annotation indices that were matched to a reference annotation
self.matched_test_inds = self.matching_sample_nums[
self.matching_sample_nums != -1]
# Test annotation indices that were unmatched to a reference annotation
self.unmatched_test_inds = np.setdiff1d(np.array(range(self.n_test)),
self.matched_test_inds, assume_unique=True)
# Sample numbers that were matched and unmatched
self.matched_ref_sample = self.ref_sample[self.matched_ref_inds]
self.unmatched_ref_sample = self.ref_sample[self.unmatched_ref_inds]
self.matched_test_sample = self.test_sample[self.matched_test_inds]
self.unmatched_test_sample = self.test_sample[self.unmatched_test_inds]
# True positives = matched reference samples
self.tp = len(self.matched_ref_inds)
# False positives = extra test samples not matched
self.fp = self.n_test - self.tp
# False negatives = undetected reference samples
self.fn = self.n_ref - self.tp
# No tn attribute
self.specificity = float(self.tp) / self.n_ref
self.positive_predictivity = float(self.tp) / self.n_test
self.false_positive_rate = float(self.fp) / self.n_test | python | def _calc_stats(self):
"""
Calculate performance statistics after the two sets of annotations
are compared.
Example:
-------------------
ref=500 test=480
{ 30 { 470 } 10 }
-------------------
tp = 470
fp = 10
fn = 30
specificity = 470 / 500
positive_predictivity = 470 / 480
false_positive_rate = 10 / 480
"""
# Reference annotation indices that were detected
self.matched_ref_inds = np.where(self.matching_sample_nums != -1)[0]
# Reference annotation indices that were missed
self.unmatched_ref_inds = np.where(self.matching_sample_nums == -1)[0]
# Test annotation indices that were matched to a reference annotation
self.matched_test_inds = self.matching_sample_nums[
self.matching_sample_nums != -1]
# Test annotation indices that were unmatched to a reference annotation
self.unmatched_test_inds = np.setdiff1d(np.array(range(self.n_test)),
self.matched_test_inds, assume_unique=True)
# Sample numbers that were matched and unmatched
self.matched_ref_sample = self.ref_sample[self.matched_ref_inds]
self.unmatched_ref_sample = self.ref_sample[self.unmatched_ref_inds]
self.matched_test_sample = self.test_sample[self.matched_test_inds]
self.unmatched_test_sample = self.test_sample[self.unmatched_test_inds]
# True positives = matched reference samples
self.tp = len(self.matched_ref_inds)
# False positives = extra test samples not matched
self.fp = self.n_test - self.tp
# False negatives = undetected reference samples
self.fn = self.n_ref - self.tp
# No tn attribute
self.specificity = float(self.tp) / self.n_ref
self.positive_predictivity = float(self.tp) / self.n_test
self.false_positive_rate = float(self.fp) / self.n_test | ["def", "_calc_stats", "(", "self", ")", ":", "# Reference annotation indices that were detected", "self", ".", "matched_ref_inds", "=", "np", ".", "where", "(", "self", ".", "matching_sample_nums", "!=", "-", "1", ")", "[", "0", "]", "# Reference annotation indices ... | Calculate performance statistics after the two sets of annotations
are compared.
Example:
-------------------
ref=500 test=480
{ 30 { 470 } 10 }
-------------------
tp = 470
fp = 10
fn = 30
specificity = 470 / 500
positive_predictivity = 470 / 480
false_positive_rate = 10 / 480 | ["Calculate", "performance", "statistics", "after", "the", "two", "sets", "of", "annotations", "are", "compared", "."] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L69-L116 | train | 216,285
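The bookkeeping at the end of `_calc_stats` can be checked against the worked example in its own docstring (500 reference annotations, 480 test annotations, 470 matched). A standalone sketch of that arithmetic (the `qrs_stats` helper name is illustrative, not wfdb's API):

```python
def qrs_stats(n_ref, n_test, n_matched):
    # Mirrors _calc_stats: matched reference annotations are true positives;
    # the leftovers on each side become false positives (extra test samples)
    # and false negatives (undetected reference samples). There is no tn.
    tp = n_matched
    fp = n_test - tp
    fn = n_ref - tp
    specificity = tp / n_ref
    positive_predictivity = tp / n_test
    false_positive_rate = fp / n_test
    return tp, fp, fn, specificity, positive_predictivity, false_positive_rate
```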
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | Comparitor.compare | def compare(self):
"""
Main comparison function
"""
"""
Note: Make sure to be able to handle these ref/test scenarios:
A:
o----o---o---o
x-------x----x
B:
o----o-----o---o
x--------x--x--x
C:
o------o-----o---o
x-x--------x--x--x
D:
o------o-----o---o
x-x--------x-----x
"""
test_samp_num = 0
ref_samp_num = 0
# Iterate through the reference sample numbers
while ref_samp_num < self.n_ref and test_samp_num < self.n_test:
# Get the closest testing sample number for this reference sample
closest_samp_num, smallest_samp_diff = (
self._get_closest_samp_num(ref_samp_num, test_samp_num))
# Get the closest testing sample number for the next reference
# sample. This doesn't need to be called for the last index.
if ref_samp_num < self.n_ref - 1:
closest_samp_num_next, smallest_samp_diff_next = (
self._get_closest_samp_num(ref_samp_num + 1, test_samp_num))
else:
# Set non-matching value if there is no next reference sample
# to compete for the test sample
closest_samp_num_next = -1
# Found a contested test sample number. Decide which
# reference sample it belongs to. If the sample is closer to
# the next reference sample, leave it to the next reference
# sample and label this reference sample as unmatched.
if (closest_samp_num == closest_samp_num_next
and smallest_samp_diff_next < smallest_samp_diff):
# Get the next closest sample for this reference sample,
# if not already assigned to a previous sample.
# It will be the previous testing sample number in any
# possible case (scenario D above), or nothing.
if closest_samp_num and (not ref_samp_num or closest_samp_num - 1 != self.matching_sample_nums[ref_samp_num - 1]):
# The previous test annotation is inspected
closest_samp_num = closest_samp_num - 1
smallest_samp_diff = abs(self.ref_sample[ref_samp_num]
- self.test_sample[closest_samp_num])
# Assign the reference-test pair if close enough
if smallest_samp_diff < self.window_width:
self.matching_sample_nums[ref_samp_num] = closest_samp_num
# Set the starting test sample number to inspect
# for the next reference sample.
test_samp_num = closest_samp_num + 1
# Otherwise there is no matching test annotation
# If there is no clash, or the contested test sample is
# closer to the current reference, keep the test sample
# for this reference sample.
else:
# Assign the reference-test pair if close enough
if smallest_samp_diff < self.window_width:
self.matching_sample_nums[ref_samp_num] = closest_samp_num
# Increment the starting test sample number to inspect
# for the next reference sample.
test_samp_num = closest_samp_num + 1
ref_samp_num += 1
self._calc_stats() | python | def compare(self):
"""
Main comparison function
"""
"""
Note: Make sure to be able to handle these ref/test scenarios:
A:
o----o---o---o
x-------x----x
B:
o----o-----o---o
x--------x--x--x
C:
o------o-----o---o
x-x--------x--x--x
D:
o------o-----o---o
x-x--------x-----x
"""
test_samp_num = 0
ref_samp_num = 0
# Iterate through the reference sample numbers
while ref_samp_num < self.n_ref and test_samp_num < self.n_test:
# Get the closest testing sample number for this reference sample
closest_samp_num, smallest_samp_diff = (
self._get_closest_samp_num(ref_samp_num, test_samp_num))
# Get the closest testing sample number for the next reference
# sample. This doesn't need to be called for the last index.
if ref_samp_num < self.n_ref - 1:
closest_samp_num_next, smallest_samp_diff_next = (
self._get_closest_samp_num(ref_samp_num + 1, test_samp_num))
else:
# Set non-matching value if there is no next reference sample
# to compete for the test sample
closest_samp_num_next = -1
# Found a contested test sample number. Decide which
# reference sample it belongs to. If the sample is closer to
# the next reference sample, leave it to the next reference
# sample and label this reference sample as unmatched.
if (closest_samp_num == closest_samp_num_next
and smallest_samp_diff_next < smallest_samp_diff):
# Get the next closest sample for this reference sample,
# if not already assigned to a previous sample.
# It will be the previous testing sample number in any
# possible case (scenario D above), or nothing.
if closest_samp_num and (not ref_samp_num or closest_samp_num - 1 != self.matching_sample_nums[ref_samp_num - 1]):
# The previous test annotation is inspected
closest_samp_num = closest_samp_num - 1
smallest_samp_diff = abs(self.ref_sample[ref_samp_num]
- self.test_sample[closest_samp_num])
# Assign the reference-test pair if close enough
if smallest_samp_diff < self.window_width:
self.matching_sample_nums[ref_samp_num] = closest_samp_num
# Set the starting test sample number to inspect
# for the next reference sample.
test_samp_num = closest_samp_num + 1
# Otherwise there is no matching test annotation
# If there is no clash, or the contested test sample is
# closer to the current reference, keep the test sample
# for this reference sample.
else:
# Assign the reference-test pair if close enough
if smallest_samp_diff < self.window_width:
self.matching_sample_nums[ref_samp_num] = closest_samp_num
# Increment the starting test sample number to inspect
# for the next reference sample.
test_samp_num = closest_samp_num + 1
ref_samp_num += 1
self._calc_stats() | ["def", "compare", "(", "self", ")", ":", "\"\"\"\n Note: Make sure to be able to handle these ref/test scenarios:\n\n A:\n o----o---o---o\n x-------x----x\n\n B:\n o----o-----o---o\n x--------x--x--x\n\n C:\n o------o-----o---o\n x-x-... | Main comparison function | ["Main", "comparison", "function"] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L118-L197 | train | 216,286
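`compare` greedily pairs each reference annotation with its closest remaining test annotation inside the match window, giving a contested test sample to whichever reference is nearer. A much-reduced sketch of that idea over plain sorted lists (it omits the scenario-D step where the previous test sample is reconsidered, so it is an assumption-laden illustration, not a drop-in replacement):

```python
def greedy_match(ref, test, window):
    # For each reference location, take the nearest remaining test location
    # if it falls inside the window; otherwise leave the reference unmatched
    # (-1), mirroring Comparitor.matching_sample_nums.
    matches = [-1] * len(ref)
    t = 0
    for i, r in enumerate(ref):
        # Advance to the test sample closest to r, never moving backwards.
        while t + 1 < len(test) and abs(test[t + 1] - r) <= abs(test[t] - r):
            t += 1
        if t < len(test) and abs(test[t] - r) < window:
            # Give a contested test sample to the closer of ref[i], ref[i+1].
            if i + 1 < len(ref) and abs(test[t] - ref[i + 1]) < abs(test[t] - r):
                continue
            matches[i] = t
            t += 1
    return matches
```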
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | Comparitor._get_closest_samp_num | def _get_closest_samp_num(self, ref_samp_num, start_test_samp_num):
"""
Return the closest testing sample number for the given reference
sample number. Limit the search from start_test_samp_num.
"""
if start_test_samp_num >= self.n_test:
raise ValueError('Invalid starting test sample number.')
ref_samp = self.ref_sample[ref_samp_num]
test_samp = self.test_sample[start_test_samp_num]
samp_diff = ref_samp - test_samp
# Initialize running parameters
closest_samp_num = start_test_samp_num
smallest_samp_diff = abs(samp_diff)
# Iterate through the testing samples
for test_samp_num in range(start_test_samp_num, self.n_test):
test_samp = self.test_sample[test_samp_num]
samp_diff = ref_samp - test_samp
abs_samp_diff = abs(samp_diff)
# Found a better match
if abs_samp_diff < smallest_samp_diff:
closest_samp_num = test_samp_num
smallest_samp_diff = abs_samp_diff
# Stop iterating when the ref sample is first passed or reached
if samp_diff <= 0:
break
return closest_samp_num, smallest_samp_diff | python | def _get_closest_samp_num(self, ref_samp_num, start_test_samp_num):
"""
Return the closest testing sample number for the given reference
sample number. Limit the search from start_test_samp_num.
"""
if start_test_samp_num >= self.n_test:
raise ValueError('Invalid starting test sample number.')
ref_samp = self.ref_sample[ref_samp_num]
test_samp = self.test_sample[start_test_samp_num]
samp_diff = ref_samp - test_samp
# Initialize running parameters
closest_samp_num = start_test_samp_num
smallest_samp_diff = abs(samp_diff)
# Iterate through the testing samples
for test_samp_num in range(start_test_samp_num, self.n_test):
test_samp = self.test_sample[test_samp_num]
samp_diff = ref_samp - test_samp
abs_samp_diff = abs(samp_diff)
# Found a better match
if abs_samp_diff < smallest_samp_diff:
closest_samp_num = test_samp_num
smallest_samp_diff = abs_samp_diff
# Stop iterating when the ref sample is first passed or reached
if samp_diff <= 0:
break
return closest_samp_num, smallest_samp_diff | ["def", "_get_closest_samp_num", "(", "self", ",", "ref_samp_num", ",", "start_test_samp_num", ")", ":", "if", "start_test_samp_num", ">=", "self", ".", "n_test", ":", "raise", "ValueError", "(", "'Invalid starting test sample number.'", ")", "ref_samp", "=", "self", ... | Return the closest testing sample number for the given reference sample number. Limit the search from start_test_samp_num. | ["Return", "the", "closest", "testing", "sample", "number", "for", "the", "given", "reference", "sample", "number", ".", "Limit", "the", "search", "from", "start_test_samp_num", "."] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L200-L232 | train | 216,287
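`_get_closest_samp_num` exploits the fact that `test_sample` is sorted: once `ref_samp - test_samp` becomes non-positive, later test samples can only be farther away, so the scan stops early. The same loop over plain lists (a sketch under that sortedness assumption, not wfdb's API):

```python
def closest_samp_num(ref_samp, test_sample, start=0):
    # Scan the sorted test samples from `start`, tracking the index with the
    # smallest absolute distance to ref_samp; stop as soon as a test sample
    # reaches or passes ref_samp, since distances only grow after that.
    if start >= len(test_sample):
        raise ValueError('Invalid starting test sample number.')
    closest = start
    smallest = abs(ref_samp - test_sample[start])
    for i in range(start, len(test_sample)):
        diff = ref_samp - test_sample[i]
        if abs(diff) < smallest:
            closest, smallest = i, abs(diff)
        if diff <= 0:
            break
    return closest, smallest
```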
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | Comparitor.print_summary | def print_summary(self):
"""
Print summary metrics of the annotation comparisons.
"""
# True positives = matched reference samples
self.tp = len(self.matched_ref_inds)
# False positives = extra test samples not matched
self.fp = self.n_test - self.tp
# False negatives = undetected reference samples
self.fn = self.n_ref - self.tp
# No tn attribute
self.specificity = self.tp / self.n_ref
self.positive_predictivity = self.tp / self.n_test
self.false_positive_rate = self.fp / self.n_test
print('%d reference annotations, %d test annotations\n'
% (self.n_ref, self.n_test))
print('True Positives (matched samples): %d' % self.tp)
print('False Positives (unmatched test samples): %d' % self.fp)
print('False Negatives (unmatched reference samples): %d\n' % self.fn)
print('Specificity: %.4f (%d/%d)'
% (self.specificity, self.tp, self.n_ref))
print('Positive Predictivity: %.4f (%d/%d)'
% (self.positive_predictivity, self.tp, self.n_test))
print('False Positive Rate: %.4f (%d/%d)'
% (self.false_positive_rate, self.fp, self.n_test)) | python | def print_summary(self):
"""
Print summary metrics of the annotation comparisons.
"""
# True positives = matched reference samples
self.tp = len(self.matched_ref_inds)
# False positives = extra test samples not matched
self.fp = self.n_test - self.tp
# False negatives = undetected reference samples
self.fn = self.n_ref - self.tp
# No tn attribute
self.specificity = self.tp / self.n_ref
self.positive_predictivity = self.tp / self.n_test
self.false_positive_rate = self.fp / self.n_test
print('%d reference annotations, %d test annotations\n'
% (self.n_ref, self.n_test))
print('True Positives (matched samples): %d' % self.tp)
print('False Positives (unmatched test samples): %d' % self.fp)
print('False Negatives (unmatched reference samples): %d\n' % self.fn)
print('Specificity: %.4f (%d/%d)'
% (self.specificity, self.tp, self.n_ref))
print('Positive Predictivity: %.4f (%d/%d)'
% (self.positive_predictivity, self.tp, self.n_test))
print('False Positive Rate: %.4f (%d/%d)'
% (self.false_positive_rate, self.fp, self.n_test)) | ["def", "print_summary", "(", "self", ")", ":", "# True positives = matched reference samples", "self", ".", "tp", "=", "len", "(", "self", ".", "matched_ref_inds", ")", "# False positives = extra test samples not matched", "self", ".", "fp", "=", "self", ".", "n_test"... | Print summary metrics of the annotation comparisons. | ["Print", "summary", "metrics", "of", "the", "annotation", "comparisons", "."] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L234-L261 | train | 216,288
MIT-LCP/wfdb-python | wfdb/processing/evaluate.py | Comparitor.plot | def plot(self, sig_style='', title=None, figsize=None,
return_fig=False):
"""
Plot the comparison of two sets of annotations, possibly
overlaid on their original signal.
Parameters
----------
sig_style : str, optional
The matplotlib style of the signal
title : str, optional
The title of the plot
figsize: tuple, optional
Tuple pair specifying the width and height of the figure.
It is the 'figsize' argument passed into matplotlib.pyplot's
`figure` function.
return_fig : bool, optional
Whether the figure is to be returned as an output argument.
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(1, 1, 1)
legend = ['Signal',
'Matched Reference Annotations (%d/%d)' % (self.tp, self.n_ref),
'Unmatched Reference Annotations (%d/%d)' % (self.fn, self.n_ref),
'Matched Test Annotations (%d/%d)' % (self.tp, self.n_test),
'Unmatched Test Annotations (%d/%d)' % (self.fp, self.n_test)
]
# Plot the signal if any
if self.signal is not None:
ax.plot(self.signal, sig_style)
# Plot reference annotations
ax.plot(self.matched_ref_sample,
self.signal[self.matched_ref_sample], 'ko')
ax.plot(self.unmatched_ref_sample,
self.signal[self.unmatched_ref_sample], 'ko',
fillstyle='none')
# Plot test annotations
ax.plot(self.matched_test_sample,
self.signal[self.matched_test_sample], 'g+')
ax.plot(self.unmatched_test_sample,
self.signal[self.unmatched_test_sample], 'rx')
ax.legend(legend)
# Just plot annotations
else:
# Plot reference annotations
ax.plot(self.matched_ref_sample, np.ones(self.tp), 'ko')
ax.plot(self.unmatched_ref_sample, np.ones(self.fn), 'ko',
fillstyle='none')
# Plot test annotations
ax.plot(self.matched_test_sample, 0.5 * np.ones(self.tp), 'g+')
ax.plot(self.unmatched_test_sample, 0.5 * np.ones(self.fp), 'rx')
ax.legend(legend[1:])
if title:
ax.set_title(title)
ax.set_xlabel('time/sample')
fig.show()
if return_fig:
return fig, ax | python | def plot(self, sig_style='', title=None, figsize=None,
return_fig=False):
"""
Plot the comparison of two sets of annotations, possibly
overlaid on their original signal.
Parameters
----------
sig_style : str, optional
The matplotlib style of the signal
title : str, optional
The title of the plot
figsize: tuple, optional
Tuple pair specifying the width and height of the figure.
It is the 'figsize' argument passed into matplotlib.pyplot's
`figure` function.
return_fig : bool, optional
Whether the figure is to be returned as an output argument.
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(1, 1, 1)
legend = ['Signal',
'Matched Reference Annotations (%d/%d)' % (self.tp, self.n_ref),
'Unmatched Reference Annotations (%d/%d)' % (self.fn, self.n_ref),
'Matched Test Annotations (%d/%d)' % (self.tp, self.n_test),
'Unmatched Test Annotations (%d/%d)' % (self.fp, self.n_test)
]
# Plot the signal if any
if self.signal is not None:
ax.plot(self.signal, sig_style)
# Plot reference annotations
ax.plot(self.matched_ref_sample,
self.signal[self.matched_ref_sample], 'ko')
ax.plot(self.unmatched_ref_sample,
self.signal[self.unmatched_ref_sample], 'ko',
fillstyle='none')
# Plot test annotations
ax.plot(self.matched_test_sample,
self.signal[self.matched_test_sample], 'g+')
ax.plot(self.unmatched_test_sample,
self.signal[self.unmatched_test_sample], 'rx')
ax.legend(legend)
# Just plot annotations
else:
# Plot reference annotations
ax.plot(self.matched_ref_sample, np.ones(self.tp), 'ko')
ax.plot(self.unmatched_ref_sample, np.ones(self.fn), 'ko',
fillstyle='none')
# Plot test annotations
ax.plot(self.matched_test_sample, 0.5 * np.ones(self.tp), 'g+')
ax.plot(self.unmatched_test_sample, 0.5 * np.ones(self.fp), 'rx')
ax.legend(legend[1:])
if title:
ax.set_title(title)
ax.set_xlabel('time/sample')
fig.show()
if return_fig:
return fig, ax | ["def", "plot", "(", "self", ",", "sig_style", "=", "''", ",", "title", "=", "None", ",", "figsize", "=", "None", ",", "return_fig", "=", "False", ")", ":", "fig", "=", "plt", ".", "figure", "(", "figsize", "=", "figsize", ")", "ax", "=", "fig", "... | Plot the comparison of two sets of annotations, possibly
overlaid on their original signal.
Parameters
----------
sig_style : str, optional
The matplotlib style of the signal
title : str, optional
The title of the plot
figsize: tuple, optional
Tuple pair specifying the width, and height of the figure.
It is the'figsize' argument passed into matplotlib.pyplot's
`figure` function.
return_fig : bool, optional
Whether the figure is to be returned as an output argument. | ["Plot", "the", "comparison", "of", "two", "sets", "of", "annotations", "possibly", "overlaid", "on", "their", "original", "signal", "."] | cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/processing/evaluate.py#L264-L331 | train | 216,289
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine.add_module | def add_module(self, module):
"""
Ownership of module is transferred to the execution engine
"""
if module in self._modules:
raise KeyError("module already added to this engine")
ffi.lib.LLVMPY_AddModule(self, module)
module._owned = True
self._modules.add(module) | python | def add_module(self, module):
"""
Ownership of module is transferred to the execution engine
"""
if module in self._modules:
raise KeyError("module already added to this engine")
ffi.lib.LLVMPY_AddModule(self, module)
module._owned = True
self._modules.add(module) | ["def", "add_module", "(", "self", ",", "module", ")", ":", "if", "module", "in", "self", ".", "_modules", ":", "raise", "KeyError", "(", "\"module already added to this engine\"", ")", "ffi", ".", "lib", ".", "LLVMPY_AddModule", "(", "self", ",", "module", "... | Ownership of module is transferred to the execution engine | ["Ownership", "of", "module", "is", "transferred", "to", "the", "execution", "engine"] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L80-L88 | train | 216,290
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine.remove_module | def remove_module(self, module):
"""
Ownership of module is returned
"""
with ffi.OutputString() as outerr:
if ffi.lib.LLVMPY_RemoveModule(self, module, outerr):
raise RuntimeError(str(outerr))
self._modules.remove(module)
module._owned = False | python | def remove_module(self, module):
"""
Ownership of module is returned
"""
with ffi.OutputString() as outerr:
if ffi.lib.LLVMPY_RemoveModule(self, module, outerr):
raise RuntimeError(str(outerr))
self._modules.remove(module)
module._owned = False | ["def", "remove_module", "(", "self", ",", "module", ")", ":", "with", "ffi", ".", "OutputString", "(", ")", "as", "outerr", ":", "if", "ffi", ".", "lib", ".", "LLVMPY_RemoveModule", "(", "self", ",", "module", ",", "outerr", ")", ":", "raise", "Runtime... | Ownership of module is returned | ["Ownership", "of", "module", "is", "returned"] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L105-L113 | train | 216,291
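Together, `add_module` and `remove_module` implement a small ownership protocol: the engine records the modules it owns and toggles their `_owned` flag as ownership is handed over and handed back. A stripped-down pure-Python sketch of that hand-off (the `Engine` and `Module` classes here are illustrative stand-ins, with the FFI calls omitted):

```python
class Module:
    def __init__(self, name):
        self.name = name
        self._owned = False

class Engine:
    def __init__(self):
        self._modules = set()

    def add_module(self, module):
        # Ownership transfers to the engine; double-adding is an error.
        if module in self._modules:
            raise KeyError('module already added to this engine')
        self._modules.add(module)
        module._owned = True

    def remove_module(self, module):
        # Ownership is handed back to the caller.
        self._modules.remove(module)
        module._owned = False
```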
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine.target_data | def target_data(self):
"""
The TargetData for this execution engine.
"""
if self._td is not None:
return self._td
ptr = ffi.lib.LLVMPY_GetExecutionEngineTargetData(self)
self._td = targets.TargetData(ptr)
self._td._owned = True
return self._td | python | def target_data(self):
"""
The TargetData for this execution engine.
"""
if self._td is not None:
return self._td
ptr = ffi.lib.LLVMPY_GetExecutionEngineTargetData(self)
self._td = targets.TargetData(ptr)
self._td._owned = True
return self._td | ["def", "target_data", "(", "self", ")", ":", "if", "self", ".", "_td", "is", "not", "None", ":", "return", "self", ".", "_td", "ptr", "=", "ffi", ".", "lib", ".", "LLVMPY_GetExecutionEngineTargetData", "(", "self", ")", "self", ".", "_td", "=", "target... | The TargetData for this execution engine. | ["The", "TargetData", "for", "this", "execution", "engine", "."] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L116-L125 | train | 216,292
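`target_data` is a lazily created, cached wrapper: the FFI lookup runs at most once, and the same `TargetData` object is returned on every later access. The same memoisation shape, with the FFI call replaced by a counter stub (the `LazyEngine` class is illustrative only):

```python
class LazyEngine:
    def __init__(self):
        self._td = None
        self.ffi_calls = 0

    @property
    def target_data(self):
        # Return the cached wrapper if one was already built.
        if self._td is not None:
            return self._td
        # Otherwise perform the (expensive) lookup exactly once and cache it.
        self.ffi_calls += 1
        self._td = object()
        return self._td
```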
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine._find_module_ptr | def _find_module_ptr(self, module_ptr):
"""
Find the ModuleRef corresponding to the given pointer.
"""
ptr = cast(module_ptr, c_void_p).value
for module in self._modules:
if cast(module._ptr, c_void_p).value == ptr:
return module
return None | python | def _find_module_ptr(self, module_ptr):
"""
Find the ModuleRef corresponding to the given pointer.
"""
ptr = cast(module_ptr, c_void_p).value
for module in self._modules:
if cast(module._ptr, c_void_p).value == ptr:
return module
return None | ["def", "_find_module_ptr", "(", "self", ",", "module_ptr", ")", ":", "ptr", "=", "cast", "(", "module_ptr", ",", "c_void_p", ")", ".", "value", "for", "module", "in", "self", ".", "_modules", ":", "if", "cast", "(", "module", ".", "_ptr", ",", "c_void_... | Find the ModuleRef corresponding to the given pointer. | ["Find", "the", "ModuleRef", "corresponding", "to", "the", "given", "pointer", "."] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L136-L144 | train | 216,293
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine.set_object_cache | def set_object_cache(self, notify_func=None, getbuffer_func=None):
"""
Set the object cache "notifyObjectCompiled" and "getBuffer"
callbacks to the given Python functions.
"""
self._object_cache_notify = notify_func
self._object_cache_getbuffer = getbuffer_func
# Lifetime of the object cache is managed by us.
self._object_cache = _ObjectCacheRef(self)
# Note this doesn't keep a reference to self, to avoid reference
# cycles.
ffi.lib.LLVMPY_SetObjectCache(self, self._object_cache) | python | def set_object_cache(self, notify_func=None, getbuffer_func=None):
"""
Set the object cache "notifyObjectCompiled" and "getBuffer"
callbacks to the given Python functions.
"""
self._object_cache_notify = notify_func
self._object_cache_getbuffer = getbuffer_func
# Lifetime of the object cache is managed by us.
self._object_cache = _ObjectCacheRef(self)
# Note this doesn't keep a reference to self, to avoid reference
# cycles.
ffi.lib.LLVMPY_SetObjectCache(self, self._object_cache) | ["def", "set_object_cache", "(", "self", ",", "notify_func", "=", "None", ",", "getbuffer_func", "=", "None", ")", ":", "self", ".", "_object_cache_notify", "=", "notify_func", "self", ".", "_object_cache_getbuffer", "=", "getbuffer_func", "# Lifetime of the object cac... | Set the object cache "notifyObjectCompiled" and "getBuffer" callbacks to the given Python functions. | ["Set", "the", "object", "cache", "notifyObjectCompiled", "and", "getBuffer", "callbacks", "to", "the", "given", "Python", "functions", "."] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L157-L168 | train | 216,294
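`set_object_cache` wires two Python callbacks into LLVM's object cache: one is notified with the compiled object buffer for a module, the other is asked for a previously cached buffer before compilation (returning `None` forces a fresh compile). A sketch of a matching callback pair backed by a plain dict (the `ObjectCache` class here is illustrative, not llvmlite's API):

```python
class ObjectCache:
    # notify() plays the role of the "notifyObjectCompiled" callback: it
    # stores the compiled object code keyed by module. getbuffer() plays the
    # "getBuffer" role: it serves the bytes back on a cache hit, or returns
    # None to make the engine compile from scratch.
    def __init__(self):
        self._cache = {}

    def notify(self, module, buf):
        self._cache[module] = buf

    def getbuffer(self, module):
        return self._cache.get(module)
```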
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine._raw_object_cache_notify | def _raw_object_cache_notify(self, data):
"""
Low-level notify hook.
"""
if self._object_cache_notify is None:
return
module_ptr = data.contents.module_ptr
buf_ptr = data.contents.buf_ptr
buf_len = data.contents.buf_len
buf = string_at(buf_ptr, buf_len)
module = self._find_module_ptr(module_ptr)
if module is None:
# The LLVM EE should only give notifications for modules
# known by us.
raise RuntimeError("object compilation notification "
"for unknown module %s" % (module_ptr,))
self._object_cache_notify(module, buf) | python | def _raw_object_cache_notify(self, data):
"""
Low-level notify hook.
"""
if self._object_cache_notify is None:
return
module_ptr = data.contents.module_ptr
buf_ptr = data.contents.buf_ptr
buf_len = data.contents.buf_len
buf = string_at(buf_ptr, buf_len)
module = self._find_module_ptr(module_ptr)
if module is None:
# The LLVM EE should only give notifications for modules
# known by us.
raise RuntimeError("object compilation notification "
"for unknown module %s" % (module_ptr,))
self._object_cache_notify(module, buf) | [
"def",
"_raw_object_cache_notify",
"(",
"self",
",",
"data",
")",
":",
"if",
"self",
".",
"_object_cache_notify",
"is",
"None",
":",
"return",
"module_ptr",
"=",
"data",
".",
"contents",
".",
"module_ptr",
"buf_ptr",
"=",
"data",
".",
"contents",
".",
"buf_p... | Low-level notify hook. | [
"Low",
"-",
"level",
"notify",
"hook",
"."
] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L170-L186 | train | 216,295 |
numba/llvmlite | llvmlite/binding/executionengine.py | ExecutionEngine._raw_object_cache_getbuffer | def _raw_object_cache_getbuffer(self, data):
"""
Low-level getbuffer hook.
"""
if self._object_cache_getbuffer is None:
return
module_ptr = data.contents.module_ptr
module = self._find_module_ptr(module_ptr)
if module is None:
# The LLVM EE should only give notifications for modules
# known by us.
raise RuntimeError("object compilation notification "
"for unknown module %s" % (module_ptr,))
buf = self._object_cache_getbuffer(module)
if buf is not None:
# Create a copy, which will be freed by the caller
data[0].buf_ptr = ffi.lib.LLVMPY_CreateByteString(buf, len(buf))
data[0].buf_len = len(buf) | python | def _raw_object_cache_getbuffer(self, data):
"""
Low-level getbuffer hook.
"""
if self._object_cache_getbuffer is None:
return
module_ptr = data.contents.module_ptr
module = self._find_module_ptr(module_ptr)
if module is None:
# The LLVM EE should only give notifications for modules
# known by us.
raise RuntimeError("object compilation notification "
"for unknown module %s" % (module_ptr,))
buf = self._object_cache_getbuffer(module)
if buf is not None:
# Create a copy, which will be freed by the caller
data[0].buf_ptr = ffi.lib.LLVMPY_CreateByteString(buf, len(buf))
data[0].buf_len = len(buf) | [
"def",
"_raw_object_cache_getbuffer",
"(",
"self",
",",
"data",
")",
":",
"if",
"self",
".",
"_object_cache_getbuffer",
"is",
"None",
":",
"return",
"module_ptr",
"=",
"data",
".",
"contents",
".",
"module_ptr",
"module",
"=",
"self",
".",
"_find_module_ptr",
... | Low-level getbuffer hook. | [
"Low",
"-",
"level",
"getbuffer",
"hook",
"."
] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/binding/executionengine.py#L188-L206 | train | 216,296 |
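Both raw hooks above follow the same pattern: resolve a raw module pointer to a module object the engine knows about, raise if it is unknown, then forward to the user callback. A minimal sketch of that dispatch step, using a plain dict in place of `_find_module_ptr` (all names here are illustrative, not llvmlite API):

```python
def make_notify_dispatcher(modules, user_notify):
    """Mimic _raw_object_cache_notify: map a raw module pointer to a
    known module, then forward (module, buf) to the user callback.

    `modules` maps raw pointer values to module objects -- the toy
    equivalent of ExecutionEngine._find_module_ptr.
    """
    def dispatch(module_ptr, buf):
        module = modules.get(module_ptr)
        if module is None:
            # The EE should only notify us about modules we registered.
            raise RuntimeError(
                "object compilation notification "
                "for unknown module %s" % (module_ptr,))
        user_notify(module, buf)
    return dispatch


modules = {0x1000: "mod_a"}
seen = []
dispatch = make_notify_dispatcher(
    modules, lambda mod, buf: seen.append((mod, buf)))

dispatch(0x1000, b"objcode")       # known pointer: forwarded
try:
    dispatch(0xDEAD, b"objcode")   # unknown pointer: rejected
    unknown_raised = False
except RuntimeError:
    unknown_raised = True
```

The getbuffer hook adds one extra wrinkle the sketch omits: a returned buffer is copied into C-owned memory (`LLVMPY_CreateByteString`) because the caller on the LLVM side frees it.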
numba/llvmlite | llvmlite/ir/builder.py | IRBuilder.position_before | def position_before(self, instr):
"""
Position immediately before the given instruction. The current block
is also changed to the instruction's basic block.
"""
self._block = instr.parent
self._anchor = self._block.instructions.index(instr) | python | def position_before(self, instr):
"""
Position immediately before the given instruction. The current block
is also changed to the instruction's basic block.
"""
self._block = instr.parent
self._anchor = self._block.instructions.index(instr) | [
"def",
"position_before",
"(",
"self",
",",
"instr",
")",
":",
"self",
".",
"_block",
"=",
"instr",
".",
"parent",
"self",
".",
"_anchor",
"=",
"self",
".",
"_block",
".",
"instructions",
".",
"index",
"(",
"instr",
")"
] | Position immediately before the given instruction. The current block
is also changed to the instruction's basic block. | [
"Position",
"immediately",
"before",
"the",
"given",
"instruction",
".",
"The",
"current",
"block",
"is",
"also",
"changed",
"to",
"the",
"instruction",
"s",
"basic",
"block",
"."
] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/ir/builder.py#L182-L188 | train | 216,297 |
numba/llvmlite | llvmlite/ir/builder.py | IRBuilder.position_after | def position_after(self, instr):
"""
Position immediately after the given instruction. The current block
is also changed to the instruction's basic block.
"""
self._block = instr.parent
self._anchor = self._block.instructions.index(instr) + 1 | python | def position_after(self, instr):
"""
Position immediately after the given instruction. The current block
is also changed to the instruction's basic block.
"""
self._block = instr.parent
self._anchor = self._block.instructions.index(instr) + 1 | [
"def",
"position_after",
"(",
"self",
",",
"instr",
")",
":",
"self",
".",
"_block",
"=",
"instr",
".",
"parent",
"self",
".",
"_anchor",
"=",
"self",
".",
"_block",
".",
"instructions",
".",
"index",
"(",
"instr",
")",
"+",
"1"
] | Position immediately after the given instruction. The current block
is also changed to the instruction's basic block. | [
"Position",
"immediately",
"after",
"the",
"given",
"instruction",
".",
"The",
"current",
"block",
"is",
"also",
"changed",
"to",
"the",
"instruction",
"s",
"basic",
"block",
"."
] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/ir/builder.py#L190-L196 | train | 216,298 |
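The `position_before` / `position_after` pair above maintains a single integer anchor into the block's instruction list: `index(instr)` versus `index(instr) + 1`. The toy builder below reproduces just that bookkeeping over a plain list (the class and its `insert` method are illustrative, not llvmlite's `IRBuilder`):

```python
class ToyBuilder:
    """Sketch of IRBuilder's anchor bookkeeping over a plain list.

    The anchor is the index at which the next instruction is inserted:
    position_before sets it to index(instr), position_after to
    index(instr) + 1, matching the two entries above.
    """

    def __init__(self, instructions):
        self.instructions = instructions
        self._anchor = len(instructions)  # default: append at the end

    def position_before(self, instr):
        self._anchor = self.instructions.index(instr)

    def position_after(self, instr):
        self._anchor = self.instructions.index(instr) + 1

    def insert(self, instr):
        self.instructions.insert(self._anchor, instr)
        self._anchor += 1  # keep subsequent inserts in source order


b = ToyBuilder(["load", "store"])
b.position_before("store")
b.insert("add")        # lands between load and store
b.position_after("store")
b.insert("ret")        # lands right after store
```

Bumping the anchor after each insert is what lets several consecutive `insert` calls land in source order instead of reversed.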
numba/llvmlite | llvmlite/ir/builder.py | IRBuilder.resume | def resume(self, landingpad):
"""
Resume an in-flight exception.
"""
br = instructions.Branch(self.block, "resume", [landingpad])
self._set_terminator(br)
return br | python | def resume(self, landingpad):
"""
Resume an in-flight exception.
"""
br = instructions.Branch(self.block, "resume", [landingpad])
self._set_terminator(br)
return br | [
"def",
"resume",
"(",
"self",
",",
"landingpad",
")",
":",
"br",
"=",
"instructions",
".",
"Branch",
"(",
"self",
".",
"block",
",",
"\"resume\"",
",",
"[",
"landingpad",
"]",
")",
"self",
".",
"_set_terminator",
"(",
"br",
")",
"return",
"br"
] | Resume an in-flight exception. | [
"Resume",
"an",
"in",
"-",
"flight",
"exception",
"."
] | fcadf8af11947f3fd041c5d6526c5bf231564883 | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/ir/builder.py#L809-L815 | train | 216,299 |
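`resume`, like `ret` and the branch instructions, goes through `_set_terminator`: a basic block may hold at most one terminator. A plain-Python sketch of that invariant, assuming hypothetical `ToyBlock` / `set_terminator` names (llvmlite enforces the real check inside `IRBuilder._set_terminator`):

```python
class ToyBlock:
    """Minimal basic block tracking its (single) terminator."""

    def __init__(self):
        self.instructions = []
        self.terminator = None

    @property
    def is_terminated(self):
        return self.terminator is not None


def set_terminator(block, instr):
    """Append a terminator, rejecting a second one -- the contract
    behind resume/ret/branch in the builder above."""
    if block.is_terminated:
        raise RuntimeError("block is already terminated")
    block.instructions.append(instr)
    block.terminator = instr
    return instr


blk = ToyBlock()
set_terminator(blk, "resume %lp")
try:
    set_terminator(blk, "ret void")
    second_allowed = True
except RuntimeError:
    second_allowed = False
```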