<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def is_regular(self): """Determine whether this `Index` contains linearly increasing samples This also works for linear decrease """
if self.size <= 1:
    return False
return numpy.isclose(numpy.diff(self.value, n=2), 0).all()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def table_from_omicron(source, *args, **kwargs): """Read an `EventTable` from an Omicron ROOT file This function just redirects to the format='root' reader with appropriate defaults. """
if not args:  # only default treename if args not given
    kwargs.setdefault('treename', 'triggers')
return EventTable.read(source, *args, format='root', **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot(self, *args, **kwargs): """Plot data onto these axes Parameters args a single instance of - `~gwpy.segments.DataQualityFlag` - `~gwpy.segments.Segment` - `~gwpy.segments.SegmentList` - `~gwpy.segments.SegmentListDict` or equivalent types upstream from :mod:`ligo.segments` kwargs keyword arguments applicable to `~matplotlib.axes.Axes.plot` Returns ------- Line2D the `~matplotlib.lines.Line2D` for this line layer See Also -------- :meth:`matplotlib.axes.Axes.plot` for a full description of acceptable ``*args`` and ``**kwargs`` """
out = []
args = list(args)
while args:
    try:
        plotter = self._plot_method(args[0])
    except TypeError:
        break
    out.append(plotter(args[0], **kwargs))
    args.pop(0)
if args:
    out.extend(super(SegmentAxes, self).plot(*args, **kwargs))
self.autoscale(enable=None, axis='both', tight=False)
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_dict(self, flags, label='key', known='x', **kwargs): """Plot a `~gwpy.segments.DataQualityDict` onto these axes Parameters flags : `~gwpy.segments.DataQualityDict` data-quality dict to display label : `str`, optional labelling system to use, or fixed label for all `DataQualityFlags`. Special values include - ``'key'``: use the key of the `DataQualityDict`, - ``'name'``: use the :attr:`~DataQualityFlag.name` of the `DataQualityFlag` If anything else, that fixed label will be used for all lines. known : `str`, `dict`, `None`, default: ``'x'`` display `known` segments with the given hatching, or give a dict of keyword arguments to pass to :meth:`~SegmentAxes.plot_segmentlist`, or `None` to hide. **kwargs any other keyword arguments acceptable for `~matplotlib.patches.Rectangle` Returns ------- collection : `~matplotlib.patches.PatchCollection` list of `~matplotlib.patches.Rectangle` patches """
out = []
for lab, flag in flags.items():
    if label.lower() == 'name':
        lab = flag.name
    elif label.lower() != 'key':
        lab = label
    out.append(self.plot_flag(flag, label=to_string(lab), known=known,
                              **kwargs))
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_flag(self, flag, y=None, **kwargs): """Plot a `~gwpy.segments.DataQualityFlag` onto these axes. Parameters flag : `~gwpy.segments.DataQualityFlag` Data-quality flag to display. y : `float`, optional Y-axis value for new segments. height : `float`, optional, Height for each segment, default: `0.8`. known : `str`, `dict`, `None` One of the following - ``'fancy'`` - to use fancy format (try it and see) - ``'x'`` (or similar) - to use hatching - `str` to specify ``facecolor`` for known segmentlist - `dict` of kwargs to use - `None` to ignore known segmentlist **kwargs Any other keyword arguments acceptable for `~matplotlib.patches.Rectangle`. Returns ------- collection : `~matplotlib.patches.PatchCollection` list of `~matplotlib.patches.Rectangle` patches for active segments """
# get y axis position
if y is None:
    y = self.get_next_y()

# default a 'good' flag to green segments and vice-versa
if flag.isgood:
    kwargs.setdefault('facecolor', '#33cc33')
    kwargs.setdefault('known', '#ff0000')
else:
    kwargs.setdefault('facecolor', '#ff0000')
    kwargs.setdefault('known', '#33cc33')
known = kwargs.pop('known')

# get flag name
name = kwargs.pop('label', flag.label or flag.name)

# make active collection
kwargs.setdefault('zorder', 0)
coll = self.plot_segmentlist(flag.active, y=y, label=name, **kwargs)

# make known collection
if known not in (None, False):
    known_kw = {
        'facecolor': coll.get_facecolor()[0],
        'collection': 'ignore',
        'zorder': -1000,
    }
    if isinstance(known, dict):
        known_kw.update(known)
    elif known == 'fancy':
        known_kw.update(height=kwargs.get('height', .8)*.05)
    elif known in HATCHES:
        known_kw.update(fill=False, hatch=known)
    else:
        known_kw.update(fill=True, facecolor=known,
                        height=kwargs.get('height', .8)*.5)
    self.plot_segmentlist(flag.known, y=y, label=name, **known_kw)

return coll
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_segmentlist(self, segmentlist, y=None, height=.8, label=None, collection=True, rasterized=None, **kwargs): """Plot a `~gwpy.segments.SegmentList` onto these axes Parameters segmentlist : `~gwpy.segments.SegmentList` list of segments to display y : `float`, optional y-axis value for new segments collection : `bool`, default: `True` add all patches as a `~matplotlib.collections.PatchCollection`, doesn't seem to work for hatched rectangles label : `str`, optional custom descriptive name to print as y-axis tick label **kwargs any other keyword arguments acceptable for `~matplotlib.patches.Rectangle` Returns ------- collection : `~matplotlib.patches.PatchCollection` list of `~matplotlib.patches.Rectangle` patches """
# get colour
facecolor = kwargs.pop('facecolor', kwargs.pop('color', '#629fca'))
if is_color_like(facecolor):
    kwargs.setdefault('edgecolor', tint(facecolor, factor=.5))

# get y
if y is None:
    y = self.get_next_y()

# build patches
patches = [SegmentRectangle(seg, y, height=height, facecolor=facecolor,
                            **kwargs) for seg in segmentlist]

if collection:  # map to PatchCollection
    coll = PatchCollection(patches, match_original=patches,
                           zorder=kwargs.get('zorder', 1))
    coll.set_rasterized(rasterized)
    coll._ignore = collection == 'ignore'
    coll._ypos = y
    out = self.add_collection(coll)
    # reset label with tex-formatting now
    #   matplotlib default label is applied by add_collection
    #   so we can only replace the leading underscore after this point
    if label is None:
        label = coll.get_label()
    coll.set_label(to_string(label))
else:
    out = []
    for patch in patches:
        patch.set_label(label)
        patch.set_rasterized(rasterized)
        label = ''
        out.append(self.add_patch(patch))

self.autoscale(enable=None, axis='both', tight=False)
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_segmentlistdict(self, segmentlistdict, y=None, dy=1, **kwargs): """Plot a `~gwpy.segments.SegmentListDict` onto these axes Parameters segmentlistdict : `~gwpy.segments.SegmentListDict` (name, `~gwpy.segments.SegmentList`) dict y : `float`, optional starting y-axis value for new segmentlists **kwargs any other keyword arguments acceptable for `~matplotlib.patches.Rectangle` Returns ------- collections : `list` list of `~matplotlib.patches.PatchCollection` sets for each segmentlist """
if y is None:
    y = self.get_next_y()
collections = []
for name, segmentlist in segmentlistdict.items():
    collections.append(self.plot_segmentlist(segmentlist, y=y,
                                             label=name, **kwargs))
    y += dy
return collections
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_collections(self, ignore=None): """Return the collections matching the given `_ignore` value Parameters ignore : `bool`, or `None` value of `_ignore` to match Returns ------- collections : `list` if `ignore=None`, simply returns all collections, otherwise returns those collections matching the `ignore` parameter """
if ignore is None:
    return self.collections
return [c for c in self.collections
        if getattr(c, '_ignore', None) == ignore]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse_keytab(keytab): """Read the contents of a KRB5 keytab file, returning a list of credentials listed within Parameters keytab : `str` path to keytab file Returns ------- creds : `list` of `tuple` the (unique) list of `(username, realm, kvno)` as read from the keytab file Examples -------- [('albert.einstein', 'LIGO.ORG', 1)] """
try:
    out = subprocess.check_output(['klist', '-k', keytab],
                                  stderr=subprocess.PIPE)
except OSError:
    raise KerberosError("Failed to locate klist, cannot read keytab")
except subprocess.CalledProcessError:
    raise KerberosError("Cannot read keytab {!r}".format(keytab))
principals = []
for line in out.splitlines():
    if isinstance(line, bytes):
        line = line.decode('utf-8')
    try:
        kvno, principal = re.split(r'\s+', line.strip(' '), 1)
    except ValueError:
        continue
    else:
        if not kvno.isdigit():
            continue
        principals.append(tuple(principal.split('@')) + (int(kvno),))
# return unique, ordered list
return list(OrderedDict.fromkeys(principals).keys())
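The per-line parsing in `parse_keytab` can be illustrated in isolation. This sketch uses a hypothetical sample `klist -k` output line to show the split into a `(username, realm, kvno)` tuple:

```python
import re

# hypothetical line from `klist -k` output (leading whitespace included)
line = "   1 albert.einstein@LIGO.ORG"
kvno, principal = re.split(r"\s+", line.strip(" "), maxsplit=1)
cred = tuple(principal.split("@")) + (int(kvno),)
print(cred)  # ('albert.einstein', 'LIGO.ORG', 1)
```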
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def y0(self): """Y-axis coordinate of the first data point :type: `~astropy.units.Quantity` scalar """
try:
    return self._y0
except AttributeError:
    self._y0 = Quantity(0, self.yunit)
    return self._y0
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def dy(self): """Y-axis sample separation :type: `~astropy.units.Quantity` scalar """
try:
    return self._dy
except AttributeError:
    try:
        self._yindex
    except AttributeError:
        self._dy = Quantity(1, self.yunit)
    else:
        if not self.yindex.regular:
            raise AttributeError(
                "This series has an irregular y-axis "
                "index, so 'dy' is not well defined")
        self._dy = self.yindex[1] - self.yindex[0]
    return self._dy
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def yunit(self): """Unit of Y-axis index :type: `~astropy.units.Unit` """
try:
    return self._dy.unit
except AttributeError:
    try:
        return self._y0.unit
    except AttributeError:
        return self._default_yunit
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def yindex(self): """Positions of the data on the y-axis :type: `~astropy.units.Quantity` array """
try:
    return self._yindex
except AttributeError:
    self._yindex = Index.define(self.y0, self.dy, self.shape[1])
    return self._yindex
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def is_compatible(self, other): """Check whether this array and ``other`` have compatible metadata """
super(Array2D, self).is_compatible(other)
# check y-axis metadata
if isinstance(other, type(self)):
    try:
        if not self.dy == other.dy:
            raise ValueError("%s sample sizes do not match: %s vs %s."
                             % (type(self).__name__, self.dy, other.dy))
    except AttributeError:
        raise ValueError("Series with irregular y-indexes cannot "
                         "be compatible")
return True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find_flag_groups(h5group, strict=True): """Returns all HDF5 Groups under the given group that contain a flag The check is just that the sub-group has a ``'name'`` attribute, so it's not fool-proof by any means. Parameters h5group : `h5py.Group` the parent group in which to search strict : `bool`, optional, default: `True` if `True` raise an exception for any sub-group that doesn't have a name, otherwise just return all of those that do Raises ------ KeyError if a sub-group doesn't have a ``'name'`` attribute and ``strict=True`` """
names = []
for group in h5group:
    try:
        names.append(h5group[group].attrs['name'])
    except KeyError:
        if strict:
            raise
        continue
return names
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _is_flag_group(obj): """Returns `True` if `obj` is an `h5py.Group` that looks like it contains a flag """
return (
    isinstance(obj, h5py.Group) and
    isinstance(obj.get("active"), h5py.Dataset) and
    isinstance(obj.get("known"), h5py.Dataset)
)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _find_flag_groups(h5f): """Return all groups in `h5f` that look like flags """
flag_groups = []

def _find(name, obj):
    if _is_flag_group(obj):
        flag_groups.append(name)

h5f.visititems(_find)
return flag_groups
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_flag_group(h5f, path): """Determine the group to use in order to read a flag """
# if the user chose the path, just use it
if path:
    return h5f[path]
# if the user gave us the group directly, use it
if _is_flag_group(h5f):
    return h5f
# otherwise try and find a single group that matches
try:
    path, = _find_flag_groups(h5f)
except ValueError:
    pass
else:
    return h5f[path]
# if not exactly 1 valid group in the file, complain
raise ValueError(
    "please pass a valid HDF5 Group, or specify the HDF5 Group "
    "path via the ``path=`` keyword argument",
)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_hdf5_flag(h5f, path=None, gpstype=LIGOTimeGPS): """Read a `DataQualityFlag` object from an HDF5 file or group. """
# extract correct group
dataset = _get_flag_group(h5f, path)

# read dataset
active = SegmentList.read(dataset['active'], format='hdf5',
                          gpstype=gpstype)
try:
    known = SegmentList.read(dataset['known'], format='hdf5',
                             gpstype=gpstype)
except KeyError as first_keyerror:
    try:
        known = SegmentList.read(dataset['valid'], format='hdf5',
                                 gpstype=gpstype)
    except KeyError:
        raise first_keyerror

return DataQualityFlag(active=active, known=known,
                       **dict(dataset.attrs))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_hdf5_segmentlist(h5f, path=None, gpstype=LIGOTimeGPS, **kwargs): """Read a `SegmentList` object from an HDF5 file or group. """
# find dataset
dataset = io_hdf5.find_dataset(h5f, path=path)

segtable = Table.read(dataset, format='hdf5', **kwargs)
out = SegmentList()
for row in segtable:
    start = LIGOTimeGPS(int(row['start_time']), int(row['start_time_ns']))
    end = LIGOTimeGPS(int(row['end_time']), int(row['end_time_ns']))
    if gpstype is LIGOTimeGPS:
        out.append(Segment(start, end))
    else:
        out.append(Segment(gpstype(start), gpstype(end)))
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_hdf5_dict(h5f, names=None, path=None, on_missing='error', **kwargs): """Read a `DataQualityDict` from an HDF5 file """
if path:
    h5f = h5f[path]

# allow alternative keyword argument name (FIXME)
if names is None:
    names = kwargs.pop('flags', None)

# try and get list of names automatically
if names is None:
    try:
        names = find_flag_groups(h5f, strict=True)
    except KeyError:
        names = None
    if not names:
        raise ValueError("Failed to automatically parse available flag "
                         "names from HDF5, please give a list of names "
                         "to read via the ``names=`` keyword")

# read data
out = DataQualityDict()
for name in names:
    try:
        out[name] = read_hdf5_flag(h5f, name, **kwargs)
    except KeyError as exc:
        if on_missing == 'ignore':
            pass
        elif on_missing == 'warn':
            warnings.warn(str(exc))
        else:
            raise ValueError('no H5Group found for flag '
                             '{0!r}'.format(name))
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_hdf5_flag_group(flag, h5group, **kwargs): """Write a `DataQualityFlag` into the given HDF5 group """
# write segmentlists
flag.active.write(h5group, 'active', **kwargs)
kwargs['append'] = True
flag.known.write(h5group, 'known', **kwargs)

# store metadata
for attr in ['name', 'label', 'category', 'description', 'isgood',
             'padding']:
    value = getattr(flag, attr)
    if value is None:
        continue
    elif isinstance(value, Quantity):
        h5group.attrs[attr] = value.value
    elif isinstance(value, UnitBase):
        h5group.attrs[attr] = str(value)
    else:
        h5group.attrs[attr] = value

return h5group
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_hdf5_dict(flags, output, path=None, append=False, overwrite=False, **kwargs): """Write this `DataQualityFlag` to a `h5py.Group`. This allows writing to an HDF5-format file. Parameters output : `str`, :class:`h5py.Group` path to new output file, or open h5py `Group` to write to. path : `str` the HDF5 group path in which to write a new group for this flag **kwargs other keyword arguments passed to :meth:`h5py.Group.create_dataset` Returns ------- dqfgroup : :class:`h5py.Group` HDF group containing these data. This group contains 'active' and 'known' datasets, and metadata attrs. See also -------- astropy.io for details on acceptable keyword arguments when writing a :class:`~astropy.table.Table` to HDF5 """
if path:
    try:
        parent = output[path]
    except KeyError:
        parent = output.create_group(path)
else:
    parent = output

for name in flags:
    # handle existing group
    if name in parent:
        if not (overwrite and append):
            raise IOError("Group '%s' already exists, give "
                          "``append=True, overwrite=True`` to overwrite "
                          "it" % os.path.join(parent.name, name))
        del parent[name]
    # create group
    group = parent.create_group(name)
    # write flag
    write_hdf5_flag_group(flags[name], group, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def float_to_latex(x, format="%.2g"): # pylint: disable=redefined-builtin # pylint: disable=anomalous-backslash-in-string r"""Convert a floating point number to a latex representation. In particular, scientific notation is handled gracefully: e -> 10^ Parameters x : `float` the number to represent format : `str`, optional the output string format Returns ------- tex : `str` a TeX representation of the input Examples -------- '1' '2\times 10^{3}' '10^{2}' r'-5\!\!\times\!\!10^{2}' """
if x == 0.:
    return '0'
base_str = format % x
if "e" not in base_str:
    return base_str
mantissa, exponent = base_str.split("e")
if float(mantissa).is_integer():
    mantissa = int(float(mantissa))
exponent = exponent.lstrip("0+")
if exponent.startswith('-0'):
    exponent = '-' + exponent[2:]
if float(mantissa) == 1.0:
    return r"10^{%s}" % exponent
return r"%s\!\!\times\!\!10^{%s}" % (mantissa, exponent)
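The docstring's Examples section lost its inputs in extraction. Running the function on a few values restores the pairing; this self-contained sketch copies the function body verbatim so it can execute on its own:

```python
def float_to_latex(x, format="%.2g"):
    # verbatim copy of the function above, for a runnable demonstration
    if x == 0.:
        return '0'
    base_str = format % x
    if "e" not in base_str:
        return base_str
    mantissa, exponent = base_str.split("e")
    if float(mantissa).is_integer():
        mantissa = int(float(mantissa))
    exponent = exponent.lstrip("0+")
    if exponent.startswith('-0'):
        exponent = '-' + exponent[2:]
    if float(mantissa) == 1.0:
        return r"10^{%s}" % exponent
    return r"%s\!\!\times\!\!10^{%s}" % (mantissa, exponent)

print(float_to_latex(1))      # '1'
print(float_to_latex(2000))   # '2\!\!\times\!\!10^{3}'
print(float_to_latex(100))    # '10^{2}'
print(float_to_latex(-500))   # '-5\!\!\times\!\!10^{2}'
```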
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def label_to_latex(text): # pylint: disable=anomalous-backslash-in-string r"""Convert text into a latex-passable representation. This method just escapes the following reserved LaTeX characters: % \ _ ~ &, whilst trying to avoid doubly-escaping already escaped characters Parameters text : `str` input text to convert Returns ------- tex : `str` a modified version of the input text with all unescaped reserved latex characters escaped Examples -------- 'normal text' '$1 + 2 = 3$' 'H1:ABC-DEF\\_GHI' 'H1:ABC-DEF\\_GHI' """
if text is None:
    return ''
out = []
x = None
# loop over matches in order, escaping each reserved character
for m in re_latex_control.finditer(text):
    a, b = m.span()
    char = m.group()[0]
    out.append(text[x:a])
    out.append(r'\%s' % char)
    x = b
if not x:  # no match
    return text
# append suffix and return joined components
out.append(text[b:])
return ''.join(out)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def preformat_cache(cache, start=None, end=None): """Preprocess a `list` of file paths for reading. - read the cache from the file (if necessary) - sieve the cache to only include data we need Parameters cache : `list`, `str` List of file paths, or path to a LAL-format cache file on disk. start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS start time of required data, defaults to start of data found; any input parseable by `~gwpy.time.to_gps` is fine. end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS end time of required data, defaults to end of data found; any input parseable by `~gwpy.time.to_gps` is fine. Returns ------- modcache : `list` A parsed, sieved list of paths based on the input arguments. """
# open cache file
if isinstance(cache, FILE_LIKE + string_types):
    return read_cache(cache, sort=file_segment,
                      segment=Segment(start, end))

# format existing cache file
cache = type(cache)(cache)  # copy cache

# sort cache
try:
    cache.sort(key=file_segment)
except ValueError:
    # if this failed, then the sieving will also fail, but let's proceed
    # anyway, since the user didn't actually ask us to do this (but
    # it's a very good idea)
    return cache

# sieve cache
if start is None:  # start time of earliest file
    start = file_segment(cache[0])[0]
if end is None:  # end time of latest file
    end = file_segment(cache[-1])[-1]
return sieve(cache, segment=Segment(start, end))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def progress_bar(**kwargs): """Create a `tqdm.tqdm` progress bar This is just a thin wrapper around `tqdm.tqdm` to set some updated defaults """
tqdm_kw = {
    'desc': 'Processing',
    'file': sys.stdout,
    'bar_format': TQDM_BAR_FORMAT,
}
tqdm_kw.update(kwargs)
pbar = tqdm(**tqdm_kw)
if not pbar.disable:
    pbar.desc = pbar.desc.rstrip(': ')
    pbar.refresh()
return pbar
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def num_taps(sample_rate, transitionwidth, gpass, gstop): """Returns the number of taps for an FIR filter with the given shape Parameters sample_rate : `float` sampling rate of target data transitionwidth : `float` the width (in the same units as `sample_rate` of the transition from stop-band to pass-band gpass : `float` the maximum loss in the passband (dB) gstop : `float` the minimum attenuation in the stopband (dB) Returns ------- numtaps : `int` the number of taps for an FIR filter Notes ----- Credit: http://dsp.stackexchange.com/a/31077/8223 """
gpass = 10 ** (-gpass / 10.)
gstop = 10 ** (-gstop / 10.)
return int(2/3. * log10(1 / (10 * gpass * gstop)) *
           sample_rate / transitionwidth)
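The tap-count formula is easy to sanity-check numerically. This standalone sketch restates it and confirms that halving the transition width roughly doubles the number of taps:

```python
from math import log10

def num_taps(sample_rate, transitionwidth, gpass, gstop):
    # same formula as above: dB values converted to linear ratios first
    gpass = 10 ** (-gpass / 10.)
    gstop = 10 ** (-gstop / 10.)
    return int(2/3. * log10(1 / (10 * gpass * gstop)) *
               sample_rate / transitionwidth)

print(num_taps(4096, 100, 2, 30))  # 60
print(num_taps(4096, 50, 2, 30))   # sharper transition -> ~2x taps
```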
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def is_zpk(zpktup): """Determine whether the given tuple is a ZPK-format filter definition Returns ------- iszpk : `bool` `True` if the ``zpktup`` looks like a ZPK-format filter definition, otherwise `False` """
return (
    isinstance(zpktup, (tuple, list)) and
    len(zpktup) == 3 and
    isinstance(zpktup[0], (list, tuple, numpy.ndarray)) and
    isinstance(zpktup[1], (list, tuple, numpy.ndarray)) and
    isinstance(zpktup[2], float)
)
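A quick check of the predicate; note the gain element must be a Python `float`, so an integer gain is rejected. This sketch restates the structural test as a free function so it runs standalone:

```python
import numpy

def is_zpk(zpktup):
    # same structural test as above
    return (
        isinstance(zpktup, (tuple, list)) and
        len(zpktup) == 3 and
        isinstance(zpktup[0], (list, tuple, numpy.ndarray)) and
        isinstance(zpktup[1], (list, tuple, numpy.ndarray)) and
        isinstance(zpktup[2], float)
    )

print(is_zpk(([], [-0.5], 1.0)))  # True
print(is_zpk(([], [-0.5], 1)))    # False: gain is an int, not a float
print(is_zpk(([1.0, 2.0], 3.0)))  # False: only two elements
```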
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def truncate_transfer(transfer, ncorner=None): """Smoothly zero the edges of a frequency domain transfer function Parameters transfer : `numpy.ndarray` transfer function to start from, must have at least ten samples ncorner : `int`, optional number of extra samples to zero off at low frequency, default: `None` Returns ------- out : `numpy.ndarray` the smoothly truncated transfer function Notes ----- By default, the input transfer function will have five samples tapered off at the left and right boundaries. If `ncorner` is not `None`, then `ncorner` extra samples will be zeroed on the left as a hard highpass filter. See :func:`~gwpy.signal.window.planck` for more information. """
nsamp = transfer.size
ncorner = ncorner if ncorner else 0
out = transfer.copy()
out[0:ncorner] = 0
out[ncorner:nsamp] *= planck(nsamp - ncorner, nleft=5, nright=5)
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def truncate_impulse(impulse, ntaps, window='hanning'): """Smoothly truncate a time domain impulse response Parameters impulse : `numpy.ndarray` the impulse response to start from ntaps : `int` number of taps in the final filter window : `str`, `numpy.ndarray`, optional window function to truncate with, default: ``'hanning'`` see :func:`scipy.signal.get_window` for details on acceptable formats Returns ------- out : `numpy.ndarray` the smoothly truncated impulse response """
out = impulse.copy()
trunc_start = int(ntaps / 2)
trunc_stop = out.size - trunc_start
window = signal.get_window(window, ntaps)
out[0:trunc_start] *= window[trunc_start:ntaps]
out[trunc_stop:out.size] *= window[0:trunc_start]
out[trunc_start:trunc_stop] = 0
return out
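The truncation keeps only the first and last `ntaps/2` samples, tapered by the two halves of the window. This standalone sketch mirrors the logic (using `'hann'` rather than `'hanning'`, since newer SciPy versions dropped the latter name) and checks the zeroed middle:

```python
import numpy
from scipy import signal

def truncate_impulse(impulse, ntaps, window='hann'):
    # same logic as above; 'hann' is the modern SciPy window name
    out = impulse.copy()
    trunc_start = int(ntaps / 2)
    trunc_stop = out.size - trunc_start
    window = signal.get_window(window, ntaps)
    out[0:trunc_start] *= window[trunc_start:ntaps]
    out[trunc_stop:out.size] *= window[0:trunc_start]
    out[trunc_start:trunc_stop] = 0
    return out

impulse = numpy.ones(128)
out = truncate_impulse(impulse, ntaps=16)
# the middle 128 - 16 samples are zeroed; the edges are window-tapered
print(out.size, numpy.count_nonzero(out))
```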
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def fir_from_transfer(transfer, ntaps, window='hanning', ncorner=None): """Design a Type II FIR filter given an arbitrary transfer function Parameters transfer : `numpy.ndarray` transfer function to start from, must have at least ten samples ntaps : `int` number of taps in the final filter, must be an even number window : `str`, `numpy.ndarray`, optional window function to truncate with, default: ``'hanning'`` see :func:`scipy.signal.get_window` for details on acceptable formats ncorner : `int`, optional number of extra samples to zero off at low frequency, default: `None` Returns ------- out : `numpy.ndarray` A time domain FIR filter of length `ntaps` Notes ----- The final FIR filter will use `~numpy.fft.rfft` FFT normalisation. If `ncorner` is not `None`, then `ncorner` extra samples will be zeroed on the left as a hard highpass filter. See Also -------- scipy.signal.remez an alternative FIR filter design using the Remez exchange algorithm """
# truncate and highpass the transfer function
transfer = truncate_transfer(transfer, ncorner=ncorner)
# compute and truncate the impulse response
impulse = npfft.irfft(transfer)
impulse = truncate_impulse(impulse, ntaps=ntaps, window=window)
# wrap around and normalise to construct the filter
out = numpy.roll(impulse, int(ntaps/2 - 1))[0:ntaps]
return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bilinear_zpk(zeros, poles, gain, fs=1.0, unit='Hz'): """Convert an analogue ZPK filter to digital using a bilinear transform Parameters zeros : array-like list of zeros poles : array-like list of poles gain : `float` filter gain fs : `float`, `~astropy.units.Quantity` sampling rate at which to evaluate bilinear transform, default: 1. unit : `str`, `~astropy.units.Unit` unit of inputs, one or 'Hz' or 'rad/s', default: ``'Hz'`` Returns ------- zpk : `tuple` digital version of input zpk """
zeros = numpy.array(zeros, dtype=float, copy=False)
zeros = zeros[numpy.isfinite(zeros)]
poles = numpy.array(poles, dtype=float, copy=False)

# convert from Hz to rad/s if needed
unit = Unit(unit)
if unit == Unit('Hz'):
    zeros *= -2 * pi
    poles *= -2 * pi
elif unit != Unit('rad/s'):
    raise ValueError("zpk can only be given with unit='Hz' "
                     "or 'rad/s'")

# convert to Z-domain via bilinear transform
fs = 2 * Quantity(fs, 'Hz').value
dpoles = (1 + poles/fs) / (1 - poles/fs)
dzeros = (1 + zeros/fs) / (1 - zeros/fs)
dzeros = numpy.concatenate((
    dzeros,
    -numpy.ones(len(dpoles) - len(dzeros)),
))
dgain = gain * numpy.prod(fs - zeros) / numpy.prod(fs - poles)
return dzeros, dpoles, dgain
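The core of the transform can be checked on a single stable analogue pole. This standalone sketch (plain `float` sampling rate, no `astropy` units; `bilinear_pole` is a hypothetical helper name) maps a 10 Hz pole into the z-plane and confirms it lands inside the unit circle:

```python
from math import pi

def bilinear_pole(pole_hz, sample_rate):
    # pole given in Hz -> rad/s (note the -2*pi sign convention above),
    # then the bilinear map z = (1 + s/fs) / (1 - s/fs) with fs = 2*rate
    pole = -2 * pi * pole_hz
    fs = 2 * sample_rate
    return (1 + pole / fs) / (1 - pole / fs)

zpole = bilinear_pole(10, 256)
print(zpole)  # a real pole between 0 and 1: stable in the z-domain
```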
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse_filter(args, analog=False, sample_rate=None): """Parse arbitrary input args into a TF or ZPK filter definition Parameters args : `tuple`, `~scipy.signal.lti` filter definition, normally just captured positional ``*args`` from a function call analog : `bool`, optional `True` if filter definition has analogue coefficients sample_rate : `float`, optional sampling frequency at which to convert analogue filter to digital via bilinear transform, required if ``analog=True`` Returns ------- ftype : `str` either ``'ba'`` or ``'zpk'`` filt : `tuple` the filter components for the returned `ftype`, either a 2-tuple for with transfer function components, or a 3-tuple for ZPK """
if analog and not sample_rate:
    raise ValueError("Must give sample_rate frequency to convert "
                     "analog filter to digital")

# unpack filter
if isinstance(args, tuple) and len(args) == 1:
    # either packed definition ((z, p, k)) or simple definition (lti,)
    args = args[0]

# parse FIR filter
if isinstance(args, numpy.ndarray) and args.ndim == 1:  # fir
    b, a = args, [1.]
    if analog:
        return 'ba', signal.bilinear(b, a)
    return 'ba', (b, a)

# parse IIR filter
if isinstance(args, LinearTimeInvariant):
    lti = args
elif (isinstance(args, numpy.ndarray) and
      args.ndim == 2 and
      args.shape[1] == 6):
    lti = signal.lti(*signal.sos2zpk(args))
else:
    lti = signal.lti(*args)

# convert to zpk format
try:
    lti = lti.to_zpk()
except AttributeError:  # scipy < 0.18, doesn't matter
    pass

# convert to digital components
if analog:
    return 'zpk', bilinear_zpk(lti.zeros, lti.poles, lti.gain,
                               fs=sample_rate)
# return zpk
return 'zpk', (lti.zeros, lti.poles, lti.gain)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def lowpass(frequency, sample_rate, fstop=None, gpass=2, gstop=30, type='iir', **kwargs): """Design a low-pass filter for the given cutoff frequency Parameters frequency : `float` corner frequency of low-pass filter (Hertz) sample_rate : `float` sampling rate of target data (Hertz) fstop : `float`, optional edge-frequency of stop-band (Hertz) gpass : `float`, optional, default: 2 the maximum loss in the passband (dB) gstop : `float`, optional, default: 30 the minimum attenuation in the stopband (dB) type : `str`, optional, default: ``'iir'`` the filter type, either ``'iir'`` or ``'fir'`` **kwargs other keyword arguments are passed directly to :func:`~scipy.signal.iirdesign` or :func:`~scipy.signal.firwin` Returns ------- filter the formatted filter. the output format for an IIR filter depends on the input arguments, default is a tuple of `(zeros, poles, gain)` Notes ----- By default a digital filter is returned, meaning the zeros and poles are given in the Z-domain in units of radians/sample. Examples -------- To create a low-pass filter at 1000 Hz for 4096 Hz-sampled data: To view the filter, you can use the `~gwpy.plot.BodePlot`: """
sample_rate = _as_float(sample_rate) frequency = _as_float(frequency) if fstop is None: fstop = min(frequency * 1.5, sample_rate/2.) if type == 'iir': return _design_iir(frequency, fstop, sample_rate, gpass, gstop, **kwargs) return _design_fir(frequency, fstop, sample_rate, gpass, gstop, **kwargs)
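A standalone sketch of a design analogous to ``lowpass(1000, 4096)``, assuming a Chebyshev type-I prototype (the actual ``_design_iir`` defaults may differ); the stop-band edge follows the same ``min(frequency * 1.5, nyquist)`` rule as above:

```python
import numpy
from scipy import signal

# design a digital low-pass with 2 dB pass-band ripple, 30 dB stop-band loss
sample_rate = 4096.
frequency = 1000.
nyq = sample_rate / 2.
fstop = min(frequency * 1.5, nyq)
z, p, k = signal.iirdesign(frequency / nyq, fstop / nyq, gpass=2, gstop=30,
                           ftype='cheby1', output='zpk')

# sanity-check the response: near unity in the pass band, suppressed beyond
_, resp = signal.freqz_zpk(z, p, k, worN=[100., 2000.], fs=sample_rate)
mag = numpy.abs(resp)   # [pass band, stop band]
```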
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def highpass(frequency, sample_rate, fstop=None, gpass=2, gstop=30, type='iir', **kwargs): """Design a high-pass filter for the given cutoff frequency Parameters frequency : `float` corner frequency of high-pass filter sample_rate : `float` sampling rate of target data fstop : `float`, optional edge-frequency of stop-band gpass : `float`, optional, default: 2 the maximum loss in the passband (dB) gstop : `float`, optional, default: 30 the minimum attenuation in the stopband (dB) type : `str`, optional, default: ``'iir'`` the filter type, either ``'iir'`` or ``'fir'`` **kwargs other keyword arguments are passed directly to :func:`~scipy.signal.iirdesign` or :func:`~scipy.signal.firwin` Returns ------- filter the formatted filter. the output format for an IIR filter depends on the input arguments, default is a tuple of `(zeros, poles, gain)` Notes ----- By default a digital filter is returned, meaning the zeros and poles are given in the Z-domain in units of radians/sample. Examples -------- To create a high-pass filter at 100 Hz for 4096 Hz-sampled data: To view the filter, you can use the `~gwpy.plot.BodePlot`: """
sample_rate = _as_float(sample_rate) frequency = _as_float(frequency) if fstop is None: fstop = frequency * 2/3. if type == 'iir': return _design_iir(frequency, fstop, sample_rate, gpass, gstop, **kwargs) return _design_fir(frequency, fstop, sample_rate, gpass, gstop, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bandpass(flow, fhigh, sample_rate, fstop=None, gpass=2, gstop=30, type='iir', **kwargs): """Design a band-pass filter for the given cutoff frequencies Parameters flow : `float` lower corner frequency of pass band fhigh : `float` upper corner frequency of pass band sample_rate : `float` sampling rate of target data fstop : `tuple` of `float`, optional `(low, high)` edge-frequencies of stop band gpass : `float`, optional, default: 2 the maximum loss in the passband (dB) gstop : `float`, optional, default: 30 the minimum attenuation in the stopband (dB) type : `str`, optional, default: ``'iir'`` the filter type, either ``'iir'`` or ``'fir'`` **kwargs other keyword arguments are passed directly to :func:`~scipy.signal.iirdesign` or :func:`~scipy.signal.firwin` Returns ------- filter the formatted filter. the output format for an IIR filter depends on the input arguments, default is a tuple of `(zeros, poles, gain)` Notes ----- By default a digital filter is returned, meaning the zeros and poles are given in the Z-domain in units of radians/sample. Examples -------- To create a band-pass filter for 100-1000 Hz for 4096 Hz-sampled data: To view the filter, you can use the `~gwpy.plot.BodePlot`: """
sample_rate = _as_float(sample_rate) flow = _as_float(flow) fhigh = _as_float(fhigh) if fstop is None: fstop = (flow * 2/3., min(fhigh * 1.5, sample_rate/2.)) fstop = (_as_float(fstop[0]), _as_float(fstop[1])) if type == 'iir': return _design_iir((flow, fhigh), fstop, sample_rate, gpass, gstop, **kwargs) return _design_fir((flow, fhigh), fstop, sample_rate, gpass, gstop, pass_zero=False, **kwargs)
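A band-pass analogue of ``bandpass(100, 1000, 4096)``, again assuming a Chebyshev type-I design; the stop-band edges follow the same ``2/3`` and ``1.5x`` defaults as the code above:

```python
import numpy
from scipy import signal

fs, flow, fhigh = 4096., 100., 1000.
nyq = fs / 2.
wp = [flow / nyq, fhigh / nyq]                               # pass band
ws = [(flow * 2 / 3.) / nyq, min(fhigh * 1.5, nyq) / nyq]    # stop bands
z, p, k = signal.iirdesign(wp, ws, gpass=2, gstop=30,
                           ftype='cheby1', output='zpk')

# evaluate the response below, inside, and above the pass band
_, resp = signal.freqz_zpk(z, p, k, worN=[30., 300., 1800.], fs=fs)
mag = numpy.abs(resp)   # [stop, pass, stop]
```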
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def notch(frequency, sample_rate, type='iir', **kwargs): """Design a ZPK notch filter for the given frequency and sampling rate Parameters frequency : `float`, `~astropy.units.Quantity` frequency (default in Hertz) at which to apply the notch sample_rate : `float`, `~astropy.units.Quantity` number of samples per second for `TimeSeries` to which this notch filter will be applied type : `str`, optional, default: 'iir' type of filter to apply, currently only 'iir' is supported **kwargs other keyword arguments to pass to `scipy.signal.iirdesign` Returns ------- zpk : `tuple` of `complex` or `float` the filter components in digital zero-pole-gain format See Also -------- scipy.signal.iirdesign for details on the IIR filter design method Notes ----- By default a digital filter is returned, meaning the zeros and poles are given in the Z-domain in units of radians/sample. Examples -------- To create a low-pass filter at 1000 Hz for 4096 Hz-sampled data: To view the filter, you can use the `~gwpy.plot.BodePlot`: """
frequency = Quantity(frequency, 'Hz').value sample_rate = Quantity(sample_rate, 'Hz').value nyq = 0.5 * sample_rate df = 1.0 # pylint: disable=invalid-name df2 = 0.1 low1 = (frequency - df)/nyq high1 = (frequency + df)/nyq low2 = (frequency - df2)/nyq high2 = (frequency + df2)/nyq if type == 'iir': kwargs.setdefault('gpass', 1) kwargs.setdefault('gstop', 10) kwargs.setdefault('ftype', 'ellip') return signal.iirdesign([low1, high1], [low2, high2], output='zpk', **kwargs) else: raise NotImplementedError("Generating %r notch filters has not been " "implemented yet" % type)
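The same design can be reproduced standalone: a 60 Hz elliptic band-stop for 4096 Hz data with the 1 Hz pass / 0.1 Hz stop half-widths used above:

```python
import numpy
from scipy import signal

frequency, sample_rate = 60., 4096.
nyq = 0.5 * sample_rate
wp = [(frequency - 1.0) / nyq, (frequency + 1.0) / nyq]   # pass-band edges
ws = [(frequency - 0.1) / nyq, (frequency + 0.1) / nyq]   # stop-band edges
z, p, k = signal.iirdesign(wp, ws, gpass=1, gstop=10,
                           ftype='ellip', output='zpk')

# the response at the notch centre should be suppressed, the pass band kept
_, resp = signal.freqz_zpk(z, p, k, worN=[60., 200.], fs=sample_rate)
mag = numpy.abs(resp)   # [notch centre, pass band]
```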
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def welch(timeseries, segmentlength, noverlap=None, **kwargs): """Calculate a PSD of this `TimeSeries` using Welch's method. """
# calculate PSD freqs, psd_ = scipy.signal.welch( timeseries.value, noverlap=noverlap, fs=timeseries.sample_rate.decompose().value, nperseg=segmentlength, **kwargs ) # generate FrequencySeries and return unit = scale_timeseries_unit( timeseries.unit, kwargs.get('scaling', 'density'), ) return FrequencySeries( psd_, unit=unit, frequencies=freqs, name=timeseries.name, epoch=timeseries.epoch, channel=timeseries.channel, )
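A minimal sketch of the scipy call the wrapper above delegates to: a 50 Hz sine sampled at 256 Hz should produce a PSD peaking at 50 Hz (with ``nperseg=512`` each segment holds an integer number of cycles, so there is no leakage):

```python
import numpy
from scipy import signal

fs = 256.
t = numpy.arange(0, 16, 1 / fs)
data = numpy.sin(2 * numpy.pi * 50 * t)
freqs, psd = signal.welch(data, fs=fs, nperseg=512, noverlap=256)
peak = freqs[numpy.argmax(psd)]
print(peak)  # 50.0
```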
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bartlett(timeseries, segmentlength, **kwargs): """Calculate a PSD using Bartlett's method """
kwargs.pop('noverlap', None) return welch(timeseries, segmentlength, noverlap=0, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def median(timeseries, segmentlength, **kwargs): """Calculate a PSD using Welch's method with a median average """
if scipy_version <= '1.1.9999': raise ValueError( "median average PSD estimation requires scipy >= 1.2.0", ) kwargs.setdefault('average', 'median') return welch(timeseries, segmentlength, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def rayleigh(timeseries, segmentlength, noverlap=0): """Calculate a Rayleigh statistic spectrum Parameters timeseries : `~gwpy.timeseries.TimeSeries` input `TimeSeries` data. segmentlength : `int` number of samples in single average. noverlap : `int` number of samples to overlap between segments, defaults to 50%. Returns ------- spectrum : `~gwpy.frequencyseries.FrequencySeries` average power `FrequencySeries` """
stepsize = segmentlength - noverlap if noverlap: numsegs = 1 + int((timeseries.size - segmentlength) / float(noverlap)) else: numsegs = int(timeseries.size // segmentlength) tmpdata = numpy.ndarray((numsegs, int(segmentlength//2 + 1))) for i in range(numsegs): tmpdata[i, :] = welch( timeseries[i*stepsize:i*stepsize+segmentlength], segmentlength) std = tmpdata.std(axis=0) mean = tmpdata.mean(axis=0) return FrequencySeries(std/mean, unit='', copy=False, f0=0, epoch=timeseries.epoch, df=timeseries.sample_rate.value/segmentlength, channel=timeseries.channel, name='Rayleigh spectrum of %s' % timeseries.name)
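The statistic itself can be sketched with plain scipy: the bin-wise std/mean of single-segment PSD estimates, which for stationary Gaussian noise is approximately 1 (segment PSDs are roughly exponentially distributed):

```python
import numpy
from scipy import signal

rng = numpy.random.default_rng(0)
fs, seglen, nseg = 256, 256, 64
data = rng.standard_normal(seglen * nseg)

# one single-segment PSD per non-overlapping stride
psds = numpy.array([
    signal.welch(data[i * seglen:(i + 1) * seglen], fs=fs, nperseg=seglen)[1]
    for i in range(nseg)
])
rayleigh_stat = psds.std(axis=0) / psds.mean(axis=0)
```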
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def csd(timeseries, other, segmentlength, noverlap=None, **kwargs): """Calculate the CSD of two `TimeSeries` using Welch's method Parameters timeseries : `~gwpy.timeseries.TimeSeries` time-series of data other : `~gwpy.timeseries.TimeSeries` time-series of data segmentlength : `int` number of samples in single average. noverlap : `int` number of samples to overlap between segments, defaults to 50%. **kwargs other keyword arguments are passed to :meth:`scipy.signal.csd` Returns ------- spectrum : `~gwpy.frequencyseries.FrequencySeries` average power `FrequencySeries` See also -------- scipy.signal.csd """
# calculate CSD try: freqs, csd_ = scipy.signal.csd( timeseries.value, other.value, noverlap=noverlap, fs=timeseries.sample_rate.decompose().value, nperseg=segmentlength, **kwargs) except AttributeError as exc: exc.args = ('{}, scipy>=0.16 is required'.format(str(exc)),) raise # generate FrequencySeries and return unit = scale_timeseries_unit(timeseries.unit, kwargs.get('scaling', 'density')) return FrequencySeries( csd_, unit=unit, frequencies=freqs, name=str(timeseries.name)+'---'+str(other.name), epoch=timeseries.epoch, channel=timeseries.channel)
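As a standalone check of the underlying scipy call: the cross-spectral density of a series with itself reduces to its (real-valued) PSD, so `csd(x, x)` and `welch(x)` agree:

```python
import numpy
from scipy import signal

rng = numpy.random.default_rng(1)
fs = 128.
x = rng.standard_normal(4096)
f1, pxx = signal.welch(x, fs=fs, nperseg=256)
f2, pxy = signal.csd(x, x, fs=fs, nperseg=256)
```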
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def duration(self): """Duration of this series in seconds :type: `~astropy.units.Quantity` scalar """
return units.Quantity(self.span[1] - self.span[0], self.xunit, dtype=float)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read(cls, source, *args, **kwargs): """Read data into a `TimeSeries` Arguments and keywords depend on the output format, see the online documentation for full details for each format, the parameters below are common to most formats. Parameters source : `str`, `list` Source of data, any of the following: - `str` path of single data file, - `str` path of LAL-format cache file, - `list` of paths. name : `str`, `~gwpy.detector.Channel` the name of the channel to read, or a `Channel` object. start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS start time of required data, defaults to start of data found; any input parseable by `~gwpy.time.to_gps` is fine end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS end time of required data, defaults to end of data found; any input parseable by `~gwpy.time.to_gps` is fine format : `str`, optional source format identifier. If not given, the format will be detected if possible. See below for list of acceptable formats. nproc : `int`, optional number of parallel processes to use, serial process by default. pad : `float`, optional value with which to fill gaps in the source data, by default gaps will result in a `ValueError`. Notes -----"""
from .io.core import read as timeseries_reader return timeseries_reader(cls, source, *args, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def fetch(cls, channel, start, end, host=None, port=None, verbose=False, connection=None, verify=False, pad=None, allow_tape=None, scaled=None, type=None, dtype=None): """Fetch data from NDS Parameters channel : `str`, `~gwpy.detector.Channel` the data channel for which to query start : `~gwpy.time.LIGOTimeGPS`, `float`, `str` GPS start time of required data, any input parseable by `~gwpy.time.to_gps` is fine end : `~gwpy.time.LIGOTimeGPS`, `float`, `str` GPS end time of required data, any input parseable by `~gwpy.time.to_gps` is fine host : `str`, optional URL of NDS server to use, if blank will try any server (in a relatively sensible order) to get the data port : `int`, optional port number for NDS server query, must be given with `host` verify : `bool`, optional, default: `False` check channels exist in database before asking for data scaled : `bool`, optional apply slope and bias calibration to ADC data, for non-ADC data this option has no effect connection : `nds2.connection`, optional open NDS connection to use verbose : `bool`, optional print verbose output about NDS progress, useful for debugging; if ``verbose`` is specified as a string, this defines the prefix for the progress meter type : `int`, optional NDS2 channel type integer dtype : `type`, `numpy.dtype`, `str`, optional identifier for desired output data type """
return cls.DictClass.fetch( [channel], start, end, host=host, port=port, verbose=verbose, connection=connection, verify=verify, pad=pad, scaled=scaled, allow_tape=allow_tape, type=type, dtype=dtype)[str(channel)]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def fetch_open_data(cls, ifo, start, end, sample_rate=4096, tag=None, version=None, format='hdf5', host=GWOSC_DEFAULT_HOST, verbose=False, cache=None, **kwargs): """Fetch open-access data from the LIGO Open Science Center Parameters ifo : `str` the two-character prefix of the IFO in which you are interested, e.g. `'L1'` start : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS start time of required data, defaults to start of data found; any input parseable by `~gwpy.time.to_gps` is fine end : `~gwpy.time.LIGOTimeGPS`, `float`, `str`, optional GPS end time of required data, defaults to end of data found; any input parseable by `~gwpy.time.to_gps` is fine sample_rate : `float`, optional, the sample rate of desired data; most data are stored by LOSC at 4096 Hz, however there may be event-related data releases with a 16384 Hz rate, default: `4096` tag : `str`, optional file tag, e.g. ``'CLN'`` to select cleaned data, or ``'C00'`` for 'raw' calibrated data. version : `int`, optional version of files to download, defaults to highest discovered version format : `str`, optional the data format to download and parse, default: ``'h5py'`` - ``'hdf5'`` - ``'gwf'`` - requires |LDAStools.frameCPP|_ host : `str`, optional HTTP host name of LOSC server to access verbose : `bool`, optional, default: `False` print verbose output while fetching data cache : `bool`, optional save/read a local copy of the remote URL, default: `False`; useful if the same remote data are to be accessed multiple times. Set `GWPY_CACHE=1` in the environment to auto-cache. 
**kwargs any other keyword arguments are passed to the `TimeSeries.read` method that parses the file that was downloaded Examples -------- TimeSeries([ 2.17704028e-19, 2.08763900e-19, 2.39681183e-19, 7.58121195e-20] unit: Unit(dimensionless), t0: 1126259446.0 s, dt: 0.000244140625 s, name: Strain, channel: None) StateVector([127,127,127,127,127,127,127,127,127,127,127,127, 127,127,127,127,127,127,127,127,127,127,127,127, 127,127,127,127,127,127,127,127] unit: Unit(dimensionless), t0: 1126259446.0 s, dt: 1.0 s, name: Data quality, channel: None, bits: Bits(0: data present 1: passes cbc CAT1 test 2: passes cbc CAT2 test 3: passes cbc CAT3 test 4: passes burst CAT1 test 5: passes burst CAT2 test 6: passes burst CAT3 test, channel=None, epoch=1126259446.0)) For the `StateVector`, the naming of the bits will be ``format``-dependent, because they are recorded differently by LOSC in different formats. For events published in O2 and later, LOSC typically provides multiple data sets containing the original (``'C00'``) and cleaned (``'CLN'``) data. To select both data sets and plot a comparison, for example: Notes ----- `StateVector` data are not available in ``txt.gz`` format. """
from .io.losc import fetch_losc_data return fetch_losc_data(ifo, start, end, sample_rate=sample_rate, tag=tag, version=version, format=format, verbose=verbose, cache=cache, host=host, cls=cls, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find(cls, channel, start, end, frametype=None, pad=None, scaled=None, dtype=None, nproc=1, verbose=False, **readargs): """Find and read data from frames for a channel Parameters channel : `str`, `~gwpy.detector.Channel` the name of the channel to read, or a `Channel` object. start : `~gwpy.time.LIGOTimeGPS`, `float`, `str` GPS start time of required data, any input parseable by `~gwpy.time.to_gps` is fine end : `~gwpy.time.LIGOTimeGPS`, `float`, `str` GPS end time of required data, any input parseable by `~gwpy.time.to_gps` is fine frametype : `str`, optional name of frametype in which this channel is stored, will search for containing frame types if necessary pad : `float`, optional value with which to fill gaps in the source data, by default gaps will result in a `ValueError`. scaled : `bool`, optional apply slope and bias calibration to ADC data, for non-ADC data this option has no effect. nproc : `int`, optional, default: `1` number of parallel processes to use, serial process by default. dtype : `numpy.dtype`, `str`, `type`, or `dict` numeric data type for returned data, e.g. `numpy.float`, or `dict` of (`channel`, `dtype`) pairs allow_tape : `bool`, optional, default: `True` allow reading from frame files on (slow) magnetic tape verbose : `bool`, optional print verbose output about read progress, if ``verbose`` is specified as a string, this defines the prefix for the progress meter **readargs any other keyword arguments to be passed to `.read()` """
return cls.DictClass.find( [channel], start, end, frametype=frametype, verbose=verbose, pad=pad, scaled=scaled, dtype=dtype, nproc=nproc, **readargs )[str(channel)]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot(self, method='plot', figsize=(12, 4), xscale='auto-gps', **kwargs): """Plot the data for this timeseries Returns ------- figure : `~matplotlib.figure.Figure` the newly created figure, with populated Axes. See Also -------- matplotlib.pyplot.figure for documentation of keyword arguments used to create the figure matplotlib.figure.Figure.add_subplot for documentation of keyword arguments used to create the axes matplotlib.axes.Axes.plot for documentation of keyword arguments used in rendering the data """
kwargs.update(figsize=figsize, xscale=xscale) return super(TimeSeriesBase, self).plot(method=method, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_nds2_buffer(cls, buffer_, scaled=None, copy=True, **metadata): """Construct a new series from an `nds2.buffer` object **Requires:** |nds2|_ Parameters buffer_ : `nds2.buffer` the input NDS2-client buffer to read scaled : `bool`, optional apply slope and bias calibration to ADC data, for non-ADC data this option has no effect copy : `bool`, optional if `True`, copy the contained data array to new to a new array **metadata any other metadata keyword arguments to pass to the `TimeSeries` constructor Returns ------- timeseries : `TimeSeries` a new `TimeSeries` containing the data from the `nds2.buffer`, and the appropriate metadata """
# get Channel from buffer channel = Channel.from_nds2(buffer_.channel) # set default metadata metadata.setdefault('channel', channel) metadata.setdefault('epoch', LIGOTimeGPS(buffer_.gps_seconds, buffer_.gps_nanoseconds)) metadata.setdefault('sample_rate', channel.sample_rate) metadata.setdefault('unit', channel.unit) metadata.setdefault('name', buffer_.name) # unwrap data scaled = _dynamic_scaled(scaled, channel.name) slope = buffer_.signal_slope offset = buffer_.signal_offset null_scaling = slope == 1. and offset == 0. if scaled and not null_scaling: data = buffer_.data.copy() * slope + offset copy = False else: data = buffer_.data # construct new TimeSeries-like object return cls(data, copy=copy, **metadata)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_lal(cls, lalts, copy=True): """Generate a new TimeSeries from a LAL TimeSeries of any type. """
from ..utils.lal import from_lal_unit try: unit = from_lal_unit(lalts.sampleUnits) except (TypeError, ValueError) as exc: warnings.warn("%s, defaulting to 'dimensionless'" % str(exc)) unit = None channel = Channel(lalts.name, sample_rate=1/lalts.deltaT, unit=unit, dtype=lalts.data.data.dtype) out = cls(lalts.data.data, channel=channel, t0=lalts.epoch, dt=lalts.deltaT, unit=unit, name=lalts.name, copy=False) if copy: return out.copy() return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_lal(self): """Convert this `TimeSeries` into a LAL TimeSeries. """
import lal from ..utils.lal import (find_typed_function, to_lal_unit) # map unit try: unit = to_lal_unit(self.unit) except ValueError as e: warnings.warn("%s, defaulting to lal.DimensionlessUnit" % str(e)) unit = lal.DimensionlessUnit # create TimeSeries create = find_typed_function(self.dtype, 'Create', 'TimeSeries') lalts = create(self.name, lal.LIGOTimeGPS(self.epoch.gps), 0, self.dt.value, unit, self.shape[0]) lalts.data.data = self.value return lalts
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_pycbc(cls, pycbcseries, copy=True): """Convert a `pycbc.types.timeseries.TimeSeries` into a `TimeSeries` Parameters pycbcseries : `pycbc.types.timeseries.TimeSeries` the input PyCBC `~pycbc.types.timeseries.TimeSeries` array copy : `bool`, optional, default: `True` if `True`, copy these data to a new array Returns ------- timeseries : `TimeSeries` a GWpy version of the input timeseries """
return cls(pycbcseries.data, t0=pycbcseries.start_time, dt=pycbcseries.delta_t, copy=copy)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_pycbc(self, copy=True): """Convert this `TimeSeries` into a PyCBC `~pycbc.types.timeseries.TimeSeries` Parameters copy : `bool`, optional, default: `True` if `True`, copy these data to a new array Returns ------- timeseries : `~pycbc.types.timeseries.TimeSeries` a PyCBC representation of this `TimeSeries` """
from pycbc import types return types.TimeSeries(self.value, delta_t=self.dt.to('s').value, epoch=self.epoch.gps, copy=copy)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def coalesce(self): """Merge contiguous elements of this list into single objects This method implicitly sorts and potentially shortens this list. """
self.sort(key=lambda ts: ts.t0.value) i = j = 0 N = len(self) while j < N: this = self[j] j += 1 if j < N and this.is_contiguous(self[j]) == 1: while j < N and this.is_contiguous(self[j]): try: this = self[i] = this.append(self[j]) except ValueError as exc: if 'cannot resize this array' in str(exc): this = this.copy() this = self[i] = this.append(self[j]) else: raise j += 1 else: self[i] = this i += 1 del self[i:] return self
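The merging loop above can be sketched with plain ``(start, end)`` tuples standing in for `TimeSeries` objects (`coalesce_spans` is a hypothetical helper, not gwpy API): sort, then fold each entry into its predecessor whenever they are contiguous:

```python
def coalesce_spans(spans):
    """Merge contiguous (start, end) spans after sorting by start."""
    spans = sorted(spans)
    out = []
    for start, end in spans:
        if out and out[-1][1] == start:        # contiguous with previous
            out[-1] = (out[-1][0], end)        # extend the merged span
        else:
            out.append((start, end))
    return out


print(coalesce_spans([(4, 6), (0, 2), (2, 4)]))  # [(0, 6)]
```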
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def join(self, pad=None, gap=None): """Concatenate all of the elements of this list into a single object Parameters pad : `float`, optional, default: `0.0` value with which to pad gaps gap : `str`, optional, default: `'raise'` what to do if there are gaps in the data, one of - ``'raise'`` - raise a `ValueError` - ``'ignore'`` - remove gap and join data - ``'pad'`` - pad gap with zeros If `pad` is given and is not `None`, the default is ``'pad'``, otherwise ``'raise'``. Returns ------- series : `gwpy.types.TimeSeriesBase` subclass a single series containing all data from each entry in this list See Also -------- TimeSeries.append for details on how the individual series are concatenated together """
if not self: return self.EntryClass(numpy.empty((0,) * self.EntryClass._ndim)) self.sort(key=lambda t: t.epoch.gps) out = self[0].copy() for series in self[1:]: out.append(series, gap=gap, pad=pad) return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def copy(self): """Return a copy of this list with each element copied to new memory """
out = type(self)() for series in self: out.append(series.copy()) return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_lal_type_str(pytype): """Convert the input python type to a LAL type string Examples -------- To convert a python type: 'REAL8' To convert a `numpy.dtype`: 'UINT4' To convert a LAL type code: 'REAL8' Raises ------ KeyError if the input doesn't map to a LAL type string """
# noop
if pytype in LAL_TYPE_FROM_STR:
    return pytype

# convert type code
if pytype in LAL_TYPE_STR:
    return LAL_TYPE_STR[pytype]

# convert python type
try:
    dtype = numpy.dtype(pytype)
    return LAL_TYPE_STR_FROM_NUMPY[dtype.type]
except (TypeError, KeyError):
    raise ValueError(
        "Failed to map {!r} to LAL type string".format(pytype))
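The dtype branch can be exercised without LAL installed; the mapping table below is a partial stand-in for gwpy's `LAL_TYPE_STR_FROM_NUMPY` (only the common numeric types are shown):

```python
import numpy

# partial numpy-dtype -> LAL type-string table for illustration
LAL_TYPE_STR_FROM_NUMPY = {
    numpy.int16: 'INT2', numpy.int32: 'INT4', numpy.int64: 'INT8',
    numpy.uint32: 'UINT4', numpy.float32: 'REAL4', numpy.float64: 'REAL8',
    numpy.complex64: 'COMPLEX8', numpy.complex128: 'COMPLEX16',
}


def lal_type_str(pytype):
    """Map a python type, dtype, or dtype name to a LAL type string."""
    try:
        dtype = numpy.dtype(pytype)
        return LAL_TYPE_STR_FROM_NUMPY[dtype.type]
    except (TypeError, KeyError):
        raise ValueError(
            "Failed to map {!r} to LAL type string".format(pytype))


print(lal_type_str(float))     # REAL8
print(lal_type_str('uint32'))  # UINT4
```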
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find_typed_function(pytype, prefix, suffix, module=lal): """Returns the lal method for the correct type Parameters pytype : `type`, `numpy.dtype` the python type, or dtype, to map prefix : `str` the function name prefix (before the type tag) suffix : `str` the function name suffix (after the type tag) Raises ------ AttributeError if the function is not found Examples -------- <built-in function CreateREAL8Sequence> """
laltype = to_lal_type_str(pytype) return getattr(module, '{0}{1}{2}'.format(prefix, laltype, suffix))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_lal_unit(aunit): """Convert the input unit into a `LALUnit` For example:: m^2 kg^-4 Parameters aunit : `~astropy.units.Unit`, `str` the input unit Returns ------- unit : `LALUnit` the LALUnit representation of the input Raises ------ ValueError if LAL doesn't understand the base units for the input """
if isinstance(aunit, string_types): aunit = units.Unit(aunit) aunit = aunit.decompose() lunit = lal.Unit() for base, power in zip(aunit.bases, aunit.powers): # try this base try: lalbase = LAL_UNIT_FROM_ASTROPY[base] except KeyError: lalbase = None # otherwise loop through the equivalent bases for eqbase in base.find_equivalent_units(): try: lalbase = LAL_UNIT_FROM_ASTROPY[eqbase] except KeyError: continue # if we didn't find anything, raise an exception if lalbase is None: raise ValueError("LAL has no unit corresponding to %r" % base) lunit *= lalbase ** power return lunit
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_lal_unit(lunit): """Convert a LALUnit` into a `~astropy.units.Unit` Parameters lunit : `lal.Unit` the input unit Returns ------- unit : `~astropy.units.Unit` the Astropy representation of the input Raises ------ TypeError if ``lunit`` cannot be converted to `lal.Unit` ValueError if Astropy doesn't understand the base units for the input """
return reduce(operator.mul, ( units.Unit(str(LAL_UNIT_INDEX[i])) ** exp for i, exp in enumerate(lunit.unitNumerator)))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_lal_ligotimegps(gps): """Convert the given GPS time to a `lal.LIGOTimeGPS` object Parameters gps : `~gwpy.time.LIGOTimeGPS`, `float`, `str` input GPS time, can be anything parsable by :meth:`~gwpy.time.to_gps` Returns ------- ligotimegps : `lal.LIGOTimeGPS` a SWIG-LAL `~lal.LIGOTimeGPS` representation of the given GPS time """
gps = to_gps(gps) return lal.LIGOTimeGPS(gps.gpsSeconds, gps.gpsNanoSeconds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_property_columns(tabletype, columns): """Returns list of GPS columns required to read gpsproperties for a table Examples -------- ['peak_time', 'peak_time_ns'] """
from ligo.lw.lsctables import gpsproperty as GpsProperty # get properties for row object rowvars = vars(tabletype.RowType) # build list of real column names for fancy properties extracols = {} for key in columns: prop = rowvars[key] if isinstance(prop, GpsProperty): extracols[key] = (prop.s_name, prop.ns_name) return extracols
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_column_dtype(llwcol): """Get the data type of a LIGO_LW `Column` Parameters llwcol : :class:`~ligo.lw.table.Column`, `numpy.ndarray`, iterable a LIGO_LW column, a numpy array, or an iterable Returns ------- dtype : `type`, None the object data type for values in the given column, `None` is returned if ``llwcol`` is a `numpy.ndarray` with `numpy.object_` dtype, or no data type can be parsed (e.g. empty list) """
try: # maybe its a numpy array already! dtype = llwcol.dtype if dtype is numpy.dtype('O'): # don't convert raise AttributeError return dtype except AttributeError: # dang try: # ligo.lw.table.Column llwtype = llwcol.parentNode.validcolumns[llwcol.Name] except AttributeError: # not a column try: return type(llwcol[0]) except IndexError: return None else: # map column type str to python type from ligo.lw.types import (ToPyType, ToNumPyType) try: return ToNumPyType[llwtype] except KeyError: return ToPyType[llwtype]
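The fallback chain for plain iterables (skipping the `ligo.lw` column branch) can be sketched standalone; `column_dtype` is a hypothetical helper, not gwpy API:

```python
import numpy


def column_dtype(col):
    """Infer a data type: use ``.dtype`` when available, else element type."""
    try:
        return col.dtype               # numpy array
    except AttributeError:
        try:
            return type(col[0])        # plain sequence: type of first element
        except IndexError:
            return None                # empty: no type information


print(column_dtype(numpy.arange(3.)))  # float64
print(column_dtype([1.0, 2.0]))        # <class 'float'>
print(column_dtype([]))                # None
```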
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_table(source, tablename=None, **kwargs): """Read a `Table` from one or more LIGO_LW XML documents source : `file`, `str`, :class:`~ligo.lw.ligolw.Document`, `list` one or more open files, file paths, or LIGO_LW `Document` objects tablename : `str`, optional the `Name` of the relevant `Table` to read, if not given a table will be returned if only one exists in the document(s) **kwargs keyword arguments for the read, or conversion functions See Also -------- gwpy.io.ligolw.read_table for details of keyword arguments for the read operation gwpy.table.io.ligolw.to_astropy_table for details of keyword arguments for the conversion operation """
from ligo.lw import table as ligolw_table from ligo.lw.lsctables import TableByName # -- keyword handling ----------------------- # separate keywords for reading and converting from LIGO_LW to Astropy read_kw = kwargs # rename for readability convert_kw = { 'rename': None, 'use_numpy_dtypes': False, } for key in filter(kwargs.__contains__, convert_kw): convert_kw[key] = kwargs.pop(key) if convert_kw['rename'] is None: convert_kw['rename'] = {} # allow user to specify LIGO_LW columns to read to provide the # desired output columns try: columns = list(kwargs.pop('columns')) except KeyError: columns = None try: read_kw['columns'] = list(kwargs.pop('ligolw_columns')) except KeyError: read_kw['columns'] = columns convert_kw['columns'] = columns or read_kw['columns'] if tablename: tableclass = TableByName[ligolw_table.Table.TableName(tablename)] # work out if fancy property columns are required # means 'peak_time' and 'peak_time_ns' will get read if 'peak' # is requested if convert_kw['columns'] is not None: readcols = set(read_kw['columns']) propcols = _get_property_columns(tableclass, convert_kw['columns']) for col in propcols: try: readcols.remove(col) except KeyError: continue readcols.update(propcols[col]) read_kw['columns'] = list(readcols) # -- read ----------------------------------- return Table(read_ligolw_table(source, tablename=tablename, **read_kw), **convert_kw)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_table(table, target, tablename=None, ilwdchar_compat=None, **kwargs): """Write a `~astropy.table.Table` to file in LIGO_LW XML format This method will attempt to write in the new `ligo.lw` format (if ``ilwdchar_compat`` is ``None`` or ``False``), but will fall back to the older `glue.ligolw` format if that fails (if ``ilwdchar_compat`` is ``None`` or ``True``). """
if tablename is None: # try and get tablename from metadata tablename = table.meta.get('tablename', None) if tablename is None: # panic raise ValueError("please pass ``tablename=`` to specify the target " "LIGO_LW Table Name") try: llwtable = table_to_ligolw( table, tablename, ilwdchar_compat=ilwdchar_compat or False, ) except LigolwElementError as exc: if ilwdchar_compat is not None: raise try: llwtable = table_to_ligolw(table, tablename, ilwdchar_compat=True) except Exception: raise exc return write_ligolw_tables(target, [llwtable], **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_ascii_series(input_, array_type=Series, unpack=True, **kwargs): """Read a `Series` from an ASCII file Parameters input_ : `str`, `file` file to read array_type : `type` desired return type unpack : `bool`, optional passed to `numpy.loadtxt`; if `True` (default), the loaded array is transposed so the x and y columns can be unpacked """
xarr, yarr = loadtxt(input_, unpack=unpack, **kwargs) return array_type(yarr, xindex=xarr)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_ascii_series(series, output, **kwargs): """Write a `Series` to a file in ASCII format Parameters series : :class:`~gwpy.data.Series` data series to write output : `str`, `file` file to write to See also -------- numpy.savetxt for documentation of keyword arguments """
xarr = series.xindex.value yarr = series.value return savetxt(output, column_stack((xarr, yarr)), **kwargs)
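To illustrate the round trip between these two helpers without touching the filesystem, the same `savetxt`/`loadtxt` calls can be run against an in-memory buffer (a plain two-column sketch; no `Series` object is involved here):

```python
from io import StringIO

import numpy
from numpy import column_stack, loadtxt, savetxt

# build a simple two-column series: x-index and values
xarr = numpy.arange(5.0)
yarr = xarr ** 2

# write to an in-memory "file" exactly as write_ascii_series does
buf = StringIO()
savetxt(buf, column_stack((xarr, yarr)))

# read it back the way read_ascii_series does (unpack=True splits columns)
buf.seek(0)
x2, y2 = loadtxt(buf, unpack=True)

assert numpy.allclose(x2, xarr)
assert numpy.allclose(y2, yarr)
```

The buffer holds one ``x y`` pair per line, so any whitespace-delimited two-column text file would read back the same way.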
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def channel_dict_kwarg(value, channels, types=None, astype=None): """Format the given kwarg value in a dict with one value per channel Parameters value : any type keyword argument value as given by user channels : `list` list of channels being read types : `list` of `type` list of valid object types for value astype : `type` output type for `dict` values Returns ------- dict : `dict` `dict` of values, one value per channel key, if parsing is successful None : `None` `None`, if parsing was unsuccessful """
if types is not None and isinstance(value, tuple(types)): out = dict((c, value) for c in channels) elif isinstance(value, (tuple, list)): out = dict(zip(channels, value)) elif value is None: out = dict() elif isinstance(value, dict): out = value.copy() else: return None if astype is not None: return dict((key, astype(out[key])) for key in out) return out
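The broadcasting rules above can be exercised standalone; this sketch restates the same logic so it runs without the surrounding module, with made-up channel names:

```python
def channel_dict_kwarg(value, channels, types=None, astype=None):
    # scalar of an allowed type: broadcast one value to every channel
    if types is not None and isinstance(value, tuple(types)):
        out = dict((c, value) for c in channels)
    # sequence: pair values with channels in order
    elif isinstance(value, (tuple, list)):
        out = dict(zip(channels, value))
    elif value is None:
        out = dict()
    elif isinstance(value, dict):
        out = value.copy()
    else:
        return None  # unparseable input
    if astype is not None:
        return dict((key, astype(out[key])) for key in out)
    return out

chans = ['X1:TEST-A', 'X1:TEST-B']  # hypothetical channel names
print(channel_dict_kwarg(16384, chans, types=[int]))
print(channel_dict_kwarg([256, 512], chans, astype=float))
```

The first call broadcasts one value to both channels; the second pairs values with channels in order and casts them to `float`.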
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def import_gwf_library(library, package=__package__): """Utility method to import the relevant timeseries.io.gwf frame API This is just a wrapper around :meth:`importlib.import_module` with a slightly nicer error message """
# import the frame library here to have any ImportErrors occur early try: return importlib.import_module('.%s' % library, package=package) except ImportError as exc: exc.args = ('Cannot import %s frame API: %s' % (library, str(exc)),) raise
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_default_gwf_api(): """Return the preferred GWF library Examples -------- If you have |LDAStools.frameCPP|_ installed: 'framecpp' Or, if you don't, but you do have |lalframe|_: 'lalframe' Otherwise: ImportError: no GWF API available, please install a third-party GWF library (framecpp, lalframe) and try again """
for lib in APIS: try: import_gwf_library(lib) except ImportError: continue else: return lib raise ImportError("no GWF API available, please install a third-party GWF " "library ({}) and try again".format(', '.join(APIS)))
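The try-import fallback at the heart of this function is easy to demonstrate with standard-library modules standing in for the GWF APIs (the module names below are placeholders, not real frame libraries):

```python
import importlib


def first_importable(libraries):
    """Return the name of the first module in ``libraries`` that imports."""
    for lib in libraries:
        try:
            importlib.import_module(lib)
        except ImportError:
            continue
        return lib
    raise ImportError(
        'none of ({}) are available'.format(', '.join(libraries)))


# '_no_such_gwf_api_' fails to import, so the fallback resolves to 'json'
print(first_importable(['_no_such_gwf_api_', 'json']))
```

Because the loop returns on the first success, the order of `APIS` encodes the preference ranking.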
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def print_verbose(*args, **kwargs): """Utility to print something only if verbose=True is given """
if kwargs.pop('verbose', False) is True: gprint(*args, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_parameter(connection, parameter, value, verbose=False): """Set a parameter for the connection, handling errors as warnings """
value = str(value) try: if not connection.set_parameter(parameter, value): raise ValueError("invalid parameter or value") except (AttributeError, ValueError) as exc: warnings.warn( 'failed to set {}={!r}: {}'.format(parameter, value, str(exc)), io_nds2.NDSWarning) else: print_verbose( ' [{}] set {}={!r}'.format( connection.get_host(), parameter, value), verbose=verbose, )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _pad_series(ts, pad, start, end): """Pad a timeseries to match the specified [start, end) limits To cover a gap in data returned from NDS """
span = ts.span pada = max(int((span[0] - start) * ts.sample_rate.value), 0) padb = max(int((end - span[1]) * ts.sample_rate.value), 0) if pada or padb: return ts.pad((pada, padb), mode='constant', constant_values=(pad,)) return ts
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _create_series(ndschan, value, start, end, series_class=TimeSeries): """Create a timeseries to cover the specified [start, end) limits To cover a gap in data returned from NDS """
channel = Channel.from_nds2(ndschan) nsamp = int((end - start) * channel.sample_rate.value) return series_class(numpy_ones(nsamp) * value, t0=start, sample_rate=channel.sample_rate, unit=channel.unit, channel=channel)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_data_segments(channels, start, end, connection): """Get available data segments for the given channels """
allsegs = io_nds2.get_availability(channels, start, end, connection=connection) return allsegs.intersection(allsegs.keys())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def in_git_clone(): """Returns `True` if the current directory is a git repository Logic is 'borrowed' from :func:`git.repo.fun.is_git_dir` """
gitdir = '.git' return os.path.isdir(gitdir) and ( os.path.isdir(os.path.join(gitdir, 'objects')) and os.path.isdir(os.path.join(gitdir, 'refs')) and os.path.exists(os.path.join(gitdir, 'HEAD')) )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def reuse_dist_file(filename): """Returns `True` if a distribution file can be reused Otherwise it should be regenerated """
# if target file doesn't exist, we must generate it if not os.path.isfile(filename): return False # if we can interact with git, we can regenerate it, so we may as well try: import git except ImportError: return True else: try: git.Repo().tags except (TypeError, git.GitError): return True else: return False
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_gitpython_version(): """Determine the required version of GitPython Because of target systems running very, very old versions of setuptools, we only specify the actual version we need when we need it. """
# if not in git clone, it doesn't matter if not in_git_clone(): return 'GitPython' # otherwise, call out to get the git version try: gitv = subprocess.check_output('git --version', shell=True) except (OSError, IOError, subprocess.CalledProcessError): # no git installation, most likely git_version = '0.0.0' else: if isinstance(gitv, bytes): gitv = gitv.decode('utf-8') git_version = gitv.strip().split()[2] # if git>=2.15, we need GitPython>=2.1.8 if LooseVersion(git_version) >= '2.15': return 'GitPython>=2.1.8' return 'GitPython'
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_setup_requires(): """Return the list of packages required for this setup.py run """
# don't force requirements if just asking for help if {'--help', '--help-commands'}.intersection(sys.argv): return list() # otherwise collect all requirements for all known commands reqlist = [] for cmd, dependencies in SETUP_REQUIRES.items(): if cmd in sys.argv: reqlist.extend(dependencies) return reqlist
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_scripts(scripts_dir='bin'): """Get relative file paths for all files under the ``scripts_dir`` """
scripts = [] for (dirname, _, filenames) in os.walk(scripts_dir): scripts.extend([os.path.join(dirname, fn) for fn in filenames]) return scripts
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _parse_years(years): """Parse a string of ints, including ranges, into a `list` of `int` Source: https://stackoverflow.com/a/6405228/1307974 """
result = [] for part in years.split(','): if '-' in part: a, b = part.split('-') a, b = int(a), int(b) result.extend(range(a, b + 1)) else: a = int(part) result.append(a) return result
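Restated as a self-contained sketch (identical logic, underscore dropped from the name), the range expansion behaves like this:

```python
def parse_years(years):
    """Expand a comma-separated list of years and year ranges."""
    result = []
    for part in years.split(','):
        if '-' in part:
            # inclusive range, e.g. '2014-2016' -> 2014, 2015, 2016
            a, b = map(int, part.split('-'))
            result.extend(range(a, b + 1))
        else:
            result.append(int(part))
    return result


print(parse_years('2014-2016,2018'))  # [2014, 2015, 2016, 2018]
```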
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _format_years(years): """Format a list of ints into a string including ranges Source: https://stackoverflow.com/a/9471386/1307974 """
def sub(x): return x[1] - x[0] ranges = [] for k, iterable in groupby(enumerate(sorted(years)), sub): rng = list(iterable) if len(rng) == 1: s = str(rng[0][1]) else: s = "{}-{}".format(rng[0][1], rng[-1][1]) ranges.append(s) return ", ".join(ranges)
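The `groupby` trick above relies on the fact that, within a run of consecutive years, `year - index` is constant, so consecutive years land in the same group. A self-contained restatement:

```python
from itertools import groupby


def format_years(years):
    """Collapse a list of years into a string with inclusive ranges."""
    ranges = []
    # (index, year) pairs in a run of consecutive years share year - index
    for _, run in groupby(enumerate(sorted(years)), lambda x: x[1] - x[0]):
        run = list(run)
        if len(run) == 1:
            ranges.append(str(run[0][1]))
        else:
            ranges.append('{}-{}'.format(run[0][1], run[-1][1]))
    return ', '.join(ranges)


print(format_years([2018, 2014, 2015, 2016]))  # 2014-2016, 2018
```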
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def update_copyright(path, year): """Update a file's copyright statement to include the given year """
with open(path, "r") as fobj: text = fobj.read().rstrip() match = COPYRIGHT_REGEX.search(text) x = match.start("years") y = match.end("years") if text[y-1] == " ": # don't strip trailing whitespace y -= 1 yearstr = match.group("years") years = set(_parse_years(yearstr)) | {year} with open(path, "w") as fobj: print(text[:x] + _format_years(years) + text[y:], file=fobj)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def percentile(self, percentile): """Calculate a given spectral percentile for this `SpectralVariance` Parameters percentile : `float` percentile (0 - 100) of the bins to compute Returns ------- spectrum : `~gwpy.frequencyseries.FrequencySeries` the given percentile `FrequencySeries` calculated from this `SpectralVariance` """
rows, columns = self.shape out = numpy.zeros(rows) # Loop over frequencies for i in range(rows): # Calculate cumulative sum for array cumsumvals = numpy.cumsum(self.value[i, :]) # Find value nearest requested percentile abs_cumsumvals_minus_percentile = numpy.abs(cumsumvals - percentile) minindex = abs_cumsumvals_minus_percentile.argmin() val = self.bins[minindex] out[i] = val name = '%s %s%% percentile' % (self.name, percentile) return FrequencySeries(out, epoch=self.epoch, channel=self.channel, frequencies=self.bins[:-1], name=name)
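The nearest-cumulative-sum lookup inside the loop can be sketched for a single frequency row, assuming the row holds normalized counts that sum to 100 (the counts and bin edges below are invented for illustration):

```python
import numpy

# hypothetical normalized histogram counts for one frequency bin (sum: 100)
counts = numpy.array([5.0, 20.0, 35.0, 25.0, 15.0])
# hypothetical amplitude bin edges (one more edge than count)
bins = numpy.array([1e-24, 2e-24, 4e-24, 8e-24, 1.6e-23, 3.2e-23])

percentile = 50
cumsum = numpy.cumsum(counts)  # [5, 25, 60, 85, 100]
# pick the bin whose cumulative count is nearest the requested percentile
minindex = numpy.abs(cumsum - percentile).argmin()
print(bins[minindex])
```

Here the cumulative counts are `[5, 25, 60, 85, 100]`, so the value nearest 50 sits at index 2, and the corresponding bin edge is returned; the real method repeats this per frequency row.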
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def ndstype(self): """NDS type integer for this channel. This property is mapped to the `Channel.type` string. """
if self.type is not None: return io_nds2.Nds2ChannelType.find(self.type).value
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def ndsname(self): """Name of this channel as stored in the NDS database """
if self.type not in [None, 'raw', 'reduced', 'online']: return '%s,%s' % (self.name, self.type) return self.name
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def query(cls, name, use_kerberos=None, debug=False): """Query the LIGO Channel Information System for the `Channel` matching the given name Parameters name : `str` name of channel use_kerberos : `bool`, optional use an existing Kerberos ticket as the authentication credential, default behaviour will check for credentials and request username and password if none are found (`None`) debug : `bool`, optional print verbose HTTP connection status for debugging, default: `False` Returns ------- c : `Channel` a new `Channel` containing all of the attributes set from its entry in the CIS """
channellist = ChannelList.query(name, use_kerberos=use_kerberos, debug=debug) if not channellist: raise ValueError("No channels found matching '%s'" % name) if len(channellist) > 1: raise ValueError("%d channels found matching '%s', please refine " "search, or use `ChannelList.query` to return " "all results" % (len(channellist), name)) return channellist[0]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_nds2(cls, nds2channel): """Generate a new channel using an existing nds2.channel object """
# extract metadata name = nds2channel.name sample_rate = nds2channel.sample_rate unit = nds2channel.signal_units if not unit: unit = None ctype = nds2channel.channel_type_to_string(nds2channel.channel_type) # get dtype dtype = { # pylint: disable: no-member nds2channel.DATA_TYPE_INT16: numpy.int16, nds2channel.DATA_TYPE_INT32: numpy.int32, nds2channel.DATA_TYPE_INT64: numpy.int64, nds2channel.DATA_TYPE_FLOAT32: numpy.float32, nds2channel.DATA_TYPE_FLOAT64: numpy.float64, nds2channel.DATA_TYPE_COMPLEX32: numpy.complex64, }.get(nds2channel.data_type) return cls(name, sample_rate=sample_rate, unit=unit, dtype=dtype, type=ctype)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse_channel_name(cls, name, strict=True): """Decompose a channel name string into its components Parameters name : `str` name to parse strict : `bool`, optional require exact matching of format, with no surrounding text, default `True` Returns ------- match : `dict` `dict` of channel name components with the following keys: - `'ifo'`: the letter-number interferometer prefix - `'system'`: the top-level system name - `'subsystem'`: the second-level sub-system name - `'signal'`: the remaining underscore-delimited signal name - `'trend'`: the trend type - `'ndstype'`: the NDS2 channel suffix Any optional keys that aren't found will return a value of `None` Raises ------ ValueError if the name cannot be parsed with at least an IFO and SYSTEM Examples -------- Channel.parse_channel_name('L1:LSC-DARM_IN1_DQ') {'ifo': 'L1', 'ndstype': None, 'signal': 'IN1_DQ', 'subsystem': 'DARM', 'system': 'LSC', 'trend': None} Channel.parse_channel_name( 'H1:ISI-BS_ST1_SENSCOR_GND_STS_X_BLRMS_100M_300M.rms,m-trend') {'ifo': 'H1', 'ndstype': 'm-trend', 'signal': 'ST1_SENSCOR_GND_STS_X_BLRMS_100M_300M', 'subsystem': 'BS', 'system': 'ISI', 'trend': 'rms'} """
match = cls.MATCH.search(name) if match is None or (strict and ( match.start() != 0 or match.end() != len(name))): raise ValueError("Cannot parse channel name according to LIGO " "channel-naming convention T990033") return match.groupdict()
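For illustration, a deliberately simplified regex (not the real `Channel.MATCH` pattern from T990033, which is more permissive) can decompose well-formed names into the same components:

```python
import re

# simplified stand-in for the real T990033 pattern (illustrative only)
MATCH = re.compile(
    r'^(?P<ifo>[A-Z]\d):'             # e.g. 'L1'
    r'(?P<system>[A-Z0-9]+)'          # e.g. 'LSC'
    r'(?:-(?P<subsystem>[A-Z0-9]+))?' # e.g. 'DARM'
    r'(?:_(?P<signal>[A-Z0-9_]+))?'   # e.g. 'IN1_DQ'
    r'(?:\.(?P<trend>[a-z]+))?'       # e.g. 'rms'
    r'(?:,(?P<ndstype>[a-z-]+))?$'    # e.g. 'm-trend'
)

parts = MATCH.match('L1:LSC-DARM_IN1_DQ').groupdict()
print(parts['ifo'], parts['system'], parts['subsystem'], parts['signal'])
```

Named groups mean `groupdict()` returns exactly the keys documented above, with `None` for the optional components that are absent.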
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def copy(self): """Returns a copy of this channel """
new = type(self)(str(self)) new._init_from_channel(self) return new
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_names(cls, *names): """Create a new `ChannelList` from a list of names The list of names can include comma-separated sets of names, in which case the return will be a flattened list of all parsed channel names. """
new = cls() for namestr in names: for name in cls._split_names(namestr): new.append(Channel(name)) return new
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _split_names(namestr): """Split a comma-separated list of channel names. """
out = [] namestr = QUOTE_REGEX.sub('', namestr) while True: namestr = namestr.strip('\' \n') if ',' not in namestr: break for nds2type in io_nds2.Nds2ChannelType.names() + ['']: if nds2type and ',%s' % nds2type in namestr: try: channel, ctype, namestr = namestr.split(',', 2) except ValueError: channel, ctype = namestr.split(',') namestr = '' out.append('%s,%s' % (channel, ctype)) break elif nds2type == '' and ',' in namestr: channel, namestr = namestr.split(',', 1) out.append(channel) break if namestr: out.append(namestr) return out
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find(self, name): """Find the `Channel` with a specific name in this `ChannelList`. Parameters name : `str` name of the `Channel` to find Returns ------- index : `int` the position of the first `Channel` in this `ChannelList` whose `~Channel.name` matches the search key. Raises ------ ValueError if no matching `Channel` is found. """
for i, chan in enumerate(self): if name == chan.name: return i raise ValueError(name)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def query(cls, name, use_kerberos=None, debug=False): """Query the LIGO Channel Information System a `ChannelList`. Parameters name : `str` name of channel, or part of it. use_kerberos : `bool`, optional use an existing Kerberos ticket as the authentication credential, default behaviour will check for credentials and request username and password if none are found (`None`) debug : `bool`, optional print verbose HTTP connection status for debugging, default: `False` Returns ------- channels : `ChannelList` a new list containing all `Channels <Channel>` found. """
from .io import cis return cis.query(name, use_kerberos=use_kerberos, debug=debug)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def query_nds2_availability(cls, channels, start, end, ctype=126, connection=None, host=None, port=None): """Query for when data are available for these channels in NDS2 Parameters channels : `list` list of `Channel` or `str` for which to search start : `int` GPS start time of search, or any acceptable input to :meth:`~gwpy.time.to_gps` end : `int` GPS end time of search, or any acceptable input to :meth:`~gwpy.time.to_gps` ctype : `int`, optional bit mask of NDS2 channel types to include in the search connection : `nds2.connection`, optional open connection to an NDS(2) server, if not given, one will be created based on ``host`` and ``port`` keywords host : `str`, optional name of NDS server host port : `int`, optional port number for NDS connection Returns ------- segdict : `~gwpy.segments.SegmentListDict` dict of ``(name, SegmentList)`` pairs """
start = int(to_gps(start)) end = int(ceil(to_gps(end))) chans = io_nds2.find_channels(channels, connection=connection, unique=True, epoch=(start, end), type=ctype) availability = io_nds2.get_availability(chans, start, end, connection=connection) return type(availability)(zip(channels, availability.values()))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_gravityspy_triggers(tablename, engine=None, **kwargs): """Fetch data into a `GravitySpyTable` Parameters tablename : `str` the name of the table you would like to receive triggers from engine : `sqlalchemy.engine.Engine`, optional an open database engine to use; if not given, one will be created from the connection keyword arguments selection other filters you would like to supply to the underlying reader method for the given format .. note:: For now it will attempt to automatically connect you to a specific DB. In the future, this may be an input argument. Returns ------- table : `GravitySpyTable` """
from sqlalchemy.engine import create_engine from sqlalchemy.exc import ProgrammingError # connect if needed if engine is None: conn_kw = {} for key in ('db', 'host', 'user', 'passwd'): try: conn_kw[key] = kwargs.pop(key) except KeyError: pass engine = create_engine(get_connection_str(**conn_kw)) try: return GravitySpyTable(fetch(engine, tablename, **kwargs)) except ProgrammingError as exc: if 'relation "%s" does not exist' % tablename in str(exc): msg = exc.args[0] msg = msg.replace( 'does not exist', 'does not exist, the following tablenames are ' 'acceptable:\n %s\n' % '\n '.join(engine.table_names())) exc.args = (msg,) raise
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_connection_str(db='gravityspy', host='gravityspy.ciera.northwestern.edu', user=None, passwd=None): """Create string to pass to create_engine Parameters db : `str`, default: ``gravityspy`` The name of the SQL database you are connecting to. host : `str`, default: ``gravityspy.ciera.northwestern.edu`` The name of the server that hosts the database you are connecting to. user : `str`, default: `None` Your username for authentication to this database. passwd : `str`, default: `None` Your password for authentication to this database. .. note:: `user` and `passwd` should be given together, otherwise they will be ignored and values will be resolved from the ``GRAVITYSPY_DATABASE_USER`` and ``GRAVITYSPY_DATABASE_PASSWD`` environment variables. Returns ------- conn_string : `str` A SQLAlchemy engine compliant connection string """
if (not user) or (not passwd): user = os.getenv('GRAVITYSPY_DATABASE_USER', None) passwd = os.getenv('GRAVITYSPY_DATABASE_PASSWD', None) if (not user) or (not passwd): raise ValueError('Remember to either pass ' 'or export GRAVITYSPY_DATABASE_USER ' 'and export GRAVITYSPY_DATABASE_PASSWD in order ' 'to access the Gravity Spy Data: ' 'https://secrets.ligo.org/secrets/144/' ' description is username and secret is password.') return 'postgresql://{0}:{1}@{2}:5432/{3}'.format(user, passwd, host, db)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_timezone_offset(ifo, dt=None): """Return the offset in seconds between UTC and the given interferometer Parameters ifo : `str` prefix of interferometer, e.g. ``'X1'`` dt : `datetime.datetime`, optional the time at which to calculate the offset, defaults to now Returns ------- offset : `int` the offset in seconds between the timezone of the interferometer and UTC """
import pytz dt = dt or datetime.datetime.now() offset = pytz.timezone(get_timezone(ifo)).utcoffset(dt) return offset.days * 86400 + offset.seconds + offset.microseconds * 1e-6
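The offset arithmetic in the final line can be checked with a fixed-offset zone from the standard library, standing in for the `pytz` timezone (UTC-8 is assumed here purely for illustration):

```python
import datetime

# stand-in for pytz: a fixed-offset zone at UTC-8
zone = datetime.timezone(datetime.timedelta(hours=-8))
dt = datetime.datetime(2020, 1, 1, 12, 0, 0)

# utcoffset() returns a timedelta; flatten it to seconds as the
# function above does
offset = zone.utcoffset(dt)
seconds = offset.days * 86400 + offset.seconds + offset.microseconds * 1e-6
print(seconds)  # -28800.0
```

Note that `timedelta` normalizes negative offsets to a negative `days` field plus a positive `seconds` field, which is why the three-term sum is needed rather than reading `offset.seconds` alone.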
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def normalize_fft_params(series, kwargs=None, func=None): """Normalize a set of FFT parameters for processing This method reads the ``fftlength`` and ``overlap`` keyword arguments (presumed to be values in seconds), works out sensible defaults, then updates ``kwargs`` in place to include ``nfft`` and ``noverlap`` as values in sample counts. If a ``window`` is given, the ``noverlap`` parameter will be set to the recommended overlap for that window type, if ``overlap`` is not given. If a ``window`` is given as a `str`, it will be converted to a `numpy.ndarray` containing the correct window (of the correct length). Parameters series : `gwpy.timeseries.TimeSeries` the data that will be processed using an FFT-based method kwargs : `dict` the dict of keyword arguments passed by the user func : `callable`, optional the FFT method that will be called Examples -------- {'nfft': 1024, 'noverlap': 0} {'window': array([..., 3.76490804e-05, 9.41235870e-06]), 'noverlap': 0, 'nfft': 1024} """
# parse keywords if kwargs is None: kwargs = dict() samp = series.sample_rate fftlength = kwargs.pop('fftlength', None) or series.duration overlap = kwargs.pop('overlap', None) window = kwargs.pop('window', None) # parse function library and name if func is None: method = library = None else: method = func.__name__ library = _fft_library(func) # fftlength -> nfft nfft = seconds_to_samples(fftlength, samp) # overlap -> noverlap noverlap = _normalize_overlap(overlap, window, nfft, samp, method=method) # create window window = _normalize_window(window, nfft, library, series.dtype) if window is not None: # allow FFT methods to use their own defaults kwargs['window'] = window # create FFT plan for LAL if library == 'lal' and kwargs.get('plan', None) is None: from ._lal import generate_fft_plan kwargs['plan'] = generate_fft_plan(nfft, dtype=series.dtype) kwargs.update({ 'nfft': nfft, 'noverlap': noverlap, }) return kwargs
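The central unit conversion (user-facing seconds into sample counts) reduces to a multiplication by the sample rate; the rate and durations below are assumed values for illustration:

```python
# seconds -> samples, as the fftlength/overlap handling above performs
sample_rate = 256  # Hz (assumed)
fftlength = 4      # seconds
overlap = 2        # seconds

nfft = int(fftlength * sample_rate)    # samples per FFT
noverlap = int(overlap * sample_rate)  # overlapping samples between FFTs
print(nfft, noverlap)  # 1024 512
```

These are the values that end up in ``kwargs['nfft']`` and ``kwargs['noverlap']`` for downstream FFT routines.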