<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_json_flag(fobj):
"""Read a `DataQualityFlag` from a segments-web.ligo.org JSON file """ |
# read from filename
if isinstance(fobj, string_types):
    with open(fobj, 'r') as fobj2:
        return read_json_flag(fobj2)
# read from open file
txt = fobj.read()
if isinstance(txt, bytes):
    txt = txt.decode('utf-8')
data = json.loads(txt)
# format flag
name = '{ifo}:{name}:{version}'.format(**data)
out = DataQualityFlag(name, active=data['active'],
                      known=data['known'])
# parse 'metadata'
try:
    out.description = data['metadata'].get('flag_description', None)
except KeyError:  # no metadata available, but that's ok
    pass
else:
    out.isgood = not data['metadata'].get(
        'active_indicates_ifo_badness', False)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_json_flag(flag, fobj, **kwargs):
"""Write a `DataQualityFlag` to a JSON file

Parameters
----------
flag : `DataQualityFlag`
    data to write
fobj : `str`, `file`
    target file (or filename) to write
**kwargs
    other keyword arguments to pass to :func:`json.dump`

See also
--------
json.dump
    for details on acceptable keyword arguments
""" |
# write to filename
if isinstance(fobj, string_types):
    with open(fobj, 'w') as fobj2:
        return write_json_flag(flag, fobj2, **kwargs)
# build json packet
data = {}
data['ifo'] = flag.ifo
data['name'] = flag.tag
data['version'] = flag.version
data['active'] = flag.active
data['known'] = flag.known
data['metadata'] = {}
data['metadata']['active_indicates_ifo_badness'] = not flag.isgood
data['metadata']['flag_description'] = flag.description
# write
json.dump(data, fobj, **kwargs) |
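The JSON packet built above is plain `dict` data, so the round trip can be sketched with the standard library alone. The packet contents below (IFO, flag name, segments) are hypothetical example values, used only to show the packet shape and the `{ifo}:{name}:{version}` naming convention that `read_json_flag` assumes:

```python
import io
import json

# hypothetical example packet, mirroring the structure written above
packet = {
    'ifo': 'L1',
    'name': 'DMT-ANALYSIS_READY',
    'version': 1,
    'active': [[100, 200]],
    'known': [[0, 1000]],
    'metadata': {
        'active_indicates_ifo_badness': False,
        'flag_description': 'hypothetical example flag',
    },
}

# write to a file object, then read back and rebuild the flag name
# exactly as read_json_flag does
fobj = io.StringIO()
json.dump(packet, fobj)
fobj.seek(0)
data = json.loads(fobj.read())
name = '{ifo}:{name}:{version}'.format(**data)
isgood = not data['metadata'].get('active_indicates_ifo_badness', False)
```

Note that the `metadata` block is optional on read: `read_json_flag` silently skips it when the key is absent.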
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def in_segmentlist(column, segmentlist):
"""Return the index of values lying inside the given segmentlist

A `~gwpy.segments.Segment` represents a semi-open interval,
so for any segment `[a, b)`, a value `x` is 'in' the segment
if a <= x < b
""" |
segmentlist = type(segmentlist)(segmentlist).coalesce()
idx = column.argsort()
contains = numpy.zeros(column.shape[0], dtype=bool)
j = 0
try:
    segstart, segend = segmentlist[j]
except IndexError:  # no segments, return all False
    return contains
i = 0
while i < contains.shape[0]:
    # extract time for this index
    x = idx[i]  # <- index in original column
    time = column[x]
    # if before start, move to next value
    if time < segstart:
        i += 1
        continue
    # if after end, find the next segment and check value again
    if time >= segend:
        j += 1
        try:
            segstart, segend = segmentlist[j]
            continue
        except IndexError:
            break
    # otherwise value must be in this segment
    contains[x] = True
    i += 1
return contains |
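The two-pointer scan above walks the sorted values and the coalesced segments in lockstep, never revisiting either. A pure-Python sketch of the same logic (the helper name is hypothetical; it assumes the segments are already sorted, disjoint, and semi-open `[a, b)`):

```python
def in_segmentlist_sketch(values, segments):
    """Pure-Python sketch of the scan above; ``segments`` must be a
    coalesced (sorted, disjoint) list of semi-open ``[a, b)`` pairs."""
    # argsort the values, scan them in order, write results in place
    idx = sorted(range(len(values)), key=values.__getitem__)
    contains = [False] * len(values)
    i = j = 0
    while i < len(values) and j < len(segments):
        x = idx[i]
        time = values[x]
        segstart, segend = segments[j]
        if time < segstart:       # before this segment: next value
            i += 1
        elif time >= segend:      # at/after segment end: next segment
            j += 1
        else:                     # inside [segstart, segend)
            contains[x] = True
            i += 1
    return contains

flags = in_segmentlist_sketch([3.0, 0.5, 2.0, 5.0], [(1, 3), (4, 6)])
```

Here `3.0` is excluded because the interval is semi-open, so `flags` comes back `[False, False, True, True]`.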
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fft(self, nfft=None):
"""Compute the one-dimensional discrete Fourier transform of this `TimeSeries`.

Parameters
----------
nfft : `int`, optional
    length of the desired Fourier transform, input will be cropped or padded to match the desired length. If nfft is not given, the length of the `TimeSeries` will be used

Returns
-------
out : `~gwpy.frequencyseries.FrequencySeries`
    the normalised, complex-valued FFT `FrequencySeries`.

See Also
--------
:mod:`scipy.fftpack`
    for the definition of the DFT and conventions used.

Notes
-----
This method, in contrast to the :func:`numpy.fft.rfft` method it calls, applies the necessary normalisation such that the amplitude of the output `~gwpy.frequencyseries.FrequencySeries` is correct.
""" |
from ..frequencyseries import FrequencySeries
if nfft is None:
    nfft = self.size
dft = npfft.rfft(self.value, n=nfft) / nfft
dft[1:] *= 2.0
new = FrequencySeries(dft, epoch=self.epoch, unit=self.unit,
                      name=self.name, channel=self.channel)
try:
    new.frequencies = npfft.rfftfreq(nfft, d=self.dx.value)
except AttributeError:
    new.frequencies = numpy.arange(new.size) / (nfft * self.dx.value)
return new |
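The normalisation above (divide by `nfft`, then double every bin except DC) is what makes the output amplitude physical: a sinusoid of amplitude *A* shows up with magnitude *A* in its frequency bin. A numpy-only check, assuming an integer number of cycles in the window so the tone lands exactly on a bin:

```python
import numpy as np

n = 128
k = 8                              # integer number of cycles in the window
x = 0.5 * np.sin(2 * np.pi * k * np.arange(n) / n)

# same normalisation as the method above: divide by nfft, double bins > 0
dft = np.fft.rfft(x, n=n) / n
dft[1:] *= 2.0

amplitude = np.abs(dft[k])         # recovers the 0.5 input amplitude
```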
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def average_fft(self, fftlength=None, overlap=0, window=None):
"""Compute the averaged one-dimensional DFT of this `TimeSeries`.

This method computes a number of FFTs of duration ``fftlength`` and ``overlap`` (both given in seconds), and returns the mean average. This method is analogous to the Welch average method for power spectra.

Parameters
----------
fftlength : `float`
    number of seconds in single FFT, default, use whole `TimeSeries`
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats

Returns
-------
out : complex-valued `~gwpy.frequencyseries.FrequencySeries`
    the transformed output, with populated frequencies array metadata

See Also
--------
:mod:`scipy.fftpack`
    for the definition of the DFT and conventions used.
""" |
from gwpy.spectrogram import Spectrogram
# format lengths
if fftlength is None:
    fftlength = self.duration
if isinstance(fftlength, units.Quantity):
    fftlength = fftlength.value
nfft = int((fftlength * self.sample_rate).decompose().value)
noverlap = int((overlap * self.sample_rate).decompose().value)
navg = divmod(self.size-noverlap, (nfft-noverlap))[0]
# format window
if window is None:
    window = 'boxcar'
if isinstance(window, (str, tuple)):
    win = signal.get_window(window, nfft)
else:
    win = numpy.asarray(window)
    if len(win.shape) != 1:
        raise ValueError('window must be 1-D')
    elif win.shape[0] != nfft:
        raise ValueError('Window is the wrong size.')
win = win.astype(self.dtype)
scaling = 1. / numpy.absolute(win).mean()
if nfft % 2:
    nfreqs = (nfft + 1) // 2
else:
    nfreqs = nfft // 2 + 1
ffts = Spectrogram(numpy.zeros((navg, nfreqs), dtype=complex),
                   channel=self.channel, epoch=self.epoch, f0=0,
                   df=1 / fftlength, dt=1, copy=True)
# stride through TimeSeries, recording FFTs as columns of Spectrogram
idx = 0
for i in range(navg):
    # find step TimeSeries
    idx_end = idx + nfft
    if idx_end > self.size:
        continue
    stepseries = self[idx:idx_end].detrend() * win
    # calculate FFT, weight, and stack
    fft_ = stepseries.fft(nfft=nfft) * scaling
    ffts.value[i, :] = fft_.value
    idx += (nfft - noverlap)
mean = ffts.mean(0)
mean.name = self.name
mean.epoch = self.epoch
mean.channel = self.channel
return mean |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def psd(self, fftlength=None, overlap=None, window='hann', method=DEFAULT_FFT_METHOD, **kwargs):
"""Calculate the PSD `FrequencySeries` for this `TimeSeries`

Parameters
----------
fftlength : `float`
    number of seconds in single FFT, defaults to a single FFT covering the full duration
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
method : `str`, optional
    FFT-averaging method, see *Notes* for more details
**kwargs
    other keyword arguments are passed to the underlying PSD-generation method

Returns
-------
psd : `~gwpy.frequencyseries.FrequencySeries`
    a data series containing the PSD.

Notes
-----
The accepted ``method`` arguments are:

- ``'bartlett'`` : a mean average of non-overlapping periodograms
- ``'median'`` : a median average of overlapping periodograms
- ``'welch'`` : a mean average of overlapping periodograms
""" |
# get method
method_func = spectral.get_method(method)
# calculate PSD using UI method
return spectral.psd(self, method_func, fftlength=fftlength,
                    overlap=overlap, window=window, **kwargs) |
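The `'welch'` method named in the notes is a mean average of overlapping, windowed periodograms. A numpy-only sketch of that averaging (the helper name and the fixed 50% overlap are assumptions for illustration; the real work is delegated to `spectral.psd`):

```python
import numpy as np

def welch_psd_sketch(x, fs, nfft):
    """Mean-average one-sided PSD over 50%-overlapping Hann-windowed
    segments (hypothetical helper mirroring the 'welch' method)."""
    win = np.hanning(nfft)
    step = nfft // 2
    scale = 1.0 / (fs * (win * win).sum())   # density scaling
    psds = []
    for start in range(0, len(x) - nfft + 1, step):
        seg = x[start:start + nfft] * win
        p = np.abs(np.fft.rfft(seg)) ** 2 * scale
        p[1:-1] *= 2.0       # one-sided: double all but DC and Nyquist
        psds.append(p)
    return np.mean(psds, axis=0)

rng = np.random.default_rng(0)
fs, nfft = 256, 256
x = rng.standard_normal(fs * 8)   # unit-variance white noise
psd = welch_psd_sketch(x, fs, nfft)
```

As a sanity check, integrating the one-sided PSD over frequency (`sum(psd) * fs / nfft`) should approximately recover the variance of the input, here 1.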
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def asd(self, fftlength=None, overlap=None, window='hann', method=DEFAULT_FFT_METHOD, **kwargs):
"""Calculate the ASD `FrequencySeries` of this `TimeSeries`

Parameters
----------
fftlength : `float`
    number of seconds in single FFT, defaults to a single FFT covering the full duration
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
method : `str`, optional
    FFT-averaging method, see *Notes* for more details

Returns
-------
asd : `~gwpy.frequencyseries.FrequencySeries`
    a data series containing the ASD.

See also
--------
TimeSeries.psd

Notes
-----
The accepted ``method`` arguments are:

- ``'bartlett'`` : a mean average of non-overlapping periodograms
- ``'median'`` : a median average of overlapping periodograms
- ``'welch'`` : a mean average of overlapping periodograms
""" |
return self.psd(method=method, fftlength=fftlength, overlap=overlap,
                window=window, **kwargs) ** (1/2.) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def csd(self, other, fftlength=None, overlap=None, window='hann', **kwargs):
"""Calculate the CSD `FrequencySeries` for two `TimeSeries`

Parameters
----------
other : `TimeSeries`
    the second `TimeSeries` in this CSD calculation
fftlength : `float`
    number of seconds in single FFT, defaults to a single FFT covering the full duration
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats

Returns
-------
csd : `~gwpy.frequencyseries.FrequencySeries`
    a data series containing the CSD.
""" |
return spectral.psd(
    (self, other),
    spectral.csd,
    fftlength=fftlength,
    overlap=overlap,
    window=window,
    **kwargs
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def spectrogram(self, stride, fftlength=None, overlap=None, window='hann', method=DEFAULT_FFT_METHOD, nproc=1, **kwargs):
"""Calculate the average power spectrogram of this `TimeSeries` using the specified average spectrum method.

Each time-bin of the output `Spectrogram` is calculated by taking a chunk of the `TimeSeries` in the segment `[t - overlap/2., t + stride + overlap/2.)` and calculating the :meth:`~gwpy.timeseries.TimeSeries.psd` of those data. As a result, each time-bin is calculated using `stride + overlap` seconds of data.

Parameters
----------
stride : `float`
    number of seconds in single PSD (column of spectrogram).
fftlength : `float`
    number of seconds in single FFT.
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
method : `str`, optional
    FFT-averaging method, see *Notes* for more details
nproc : `int`
    number of CPUs to use in parallel processing of FFTs

Returns
-------
spectrogram : `~gwpy.spectrogram.Spectrogram`
    time-frequency power spectrogram as generated from the input time-series.

Notes
-----
The accepted ``method`` arguments are:

- ``'bartlett'`` : a mean average of non-overlapping periodograms
- ``'median'`` : a median average of overlapping periodograms
- ``'welch'`` : a mean average of overlapping periodograms
""" |
# get method
method_func = spectral.get_method(method)
# calculate PSD using UI method
return spectral.average_spectrogram(
    self,
    method_func,
    stride,
    fftlength=fftlength,
    overlap=overlap,
    window=window,
    nproc=nproc,
    **kwargs
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def spectrogram2(self, fftlength, overlap=None, window='hann', **kwargs):
"""Calculate the non-averaged power `Spectrogram` of this `TimeSeries`

Parameters
----------
fftlength : `float`
    number of seconds in single FFT.
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
scaling : [ 'density' | 'spectrum' ], optional
    selects between computing the power spectral density ('density'), where the `Spectrogram` has units of V**2/Hz if the input is measured in V, and computing the power spectrum ('spectrum'), where the `Spectrogram` has units of V**2 if the input is measured in V. Defaults to 'density'.
**kwargs
    other parameters to be passed to `scipy.signal.periodogram` for each column of the `Spectrogram`

Returns
-------
spectrogram : `~gwpy.spectrogram.Spectrogram`
    a power `Spectrogram` with `1/fftlength` frequency resolution and (fftlength - overlap) time resolution.

See also
--------
scipy.signal.periodogram
    for documentation on the Fourier methods used in this calculation

Notes
-----
This method calculates overlapping periodograms for all possible chunks of data entirely contained within the span of the input `TimeSeries`, then normalises the power in overlapping chunks using a triangular window centred on that chunk which most overlaps the given `Spectrogram` time sample.
""" |
# set kwargs for periodogram()
kwargs.setdefault('fs', self.sample_rate.to('Hz').value)
# run
return spectral.spectrogram(self, signal.periodogram,
                            fftlength=fftlength, overlap=overlap,
                            window=window, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fftgram(self, fftlength, overlap=None, window='hann', **kwargs):
"""Calculate the Fourier-gram of this `TimeSeries`.

At every ``stride``, a single, complex FFT is calculated.

Parameters
----------
fftlength : `float`
    number of seconds in single FFT.
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats

Returns
-------
out : `~gwpy.spectrogram.Spectrogram`
    a Fourier-gram
""" |
from ..spectrogram import Spectrogram
try:
    from scipy.signal import spectrogram
except ImportError:
    raise ImportError("Must have scipy>=0.16 to utilize "
                      "this method.")
# format lengths
if isinstance(fftlength, units.Quantity):
    fftlength = fftlength.value
nfft = int((fftlength * self.sample_rate).decompose().value)
if not overlap:
    # use scipy.signal.spectrogram noverlap default
    noverlap = nfft // 8
else:
    noverlap = int((overlap * self.sample_rate).decompose().value)
# generate output spectrogram
[frequencies, times, sxx] = spectrogram(self,
                                        fs=self.sample_rate.value,
                                        window=window,
                                        nperseg=nfft,
                                        noverlap=noverlap,
                                        mode='complex',
                                        **kwargs)
return Spectrogram(sxx.T,
                   name=self.name, unit=self.unit,
                   xindex=self.t0.value + times,
                   yindex=frequencies) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def spectral_variance(self, stride, fftlength=None, overlap=None, method=DEFAULT_FFT_METHOD, window='hann', nproc=1, filter=None, bins=None, low=None, high=None, nbins=500, log=False, norm=False, density=False):
"""Calculate the `SpectralVariance` of this `TimeSeries`.

Parameters
----------
stride : `float`
    number of seconds in single PSD (column of spectrogram)
fftlength : `float`
    number of seconds in single FFT
method : `str`, optional
    FFT-averaging method, see *Notes* for more details
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
nproc : `int`
    maximum number of independent frame reading processes, default is set to single-process file reading.
bins : `numpy.ndarray`, optional, default `None`
    array of histogram bin edges, including the rightmost edge
low : `float`, optional
    left edge of lowest amplitude bin, only read if ``bins`` is not given
high : `float`, optional
    right edge of highest amplitude bin, only read if ``bins`` is not given
nbins : `int`, optional
    number of bins to generate, only read if ``bins`` is not given
log : `bool`, optional
    calculate amplitude bins over a logarithmic scale, only read if ``bins`` is not given
norm : `bool`, optional
    normalise bin counts to a unit sum
density : `bool`, optional
    normalise bin counts to a unit integral

Returns
-------
specvar : `SpectralVariance`
    2D-array of spectral frequency-amplitude counts

See Also
--------
:func:`numpy.histogram`
    for details on specifying bins and weights

Notes
-----
The accepted ``method`` arguments are:

- ``'bartlett'`` : a mean average of non-overlapping periodograms
- ``'median'`` : a median average of overlapping periodograms
- ``'welch'`` : a mean average of overlapping periodograms
""" |
specgram = self.spectrogram(stride, fftlength=fftlength,
                            overlap=overlap, method=method,
                            window=window, nproc=nproc) ** (1/2.)
if filter:
    specgram = specgram.filter(*filter)
return specgram.variance(bins=bins, low=low, high=high, nbins=nbins,
                         log=log, norm=norm, density=density) |
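The histogramming step delegated to `variance()` above is essentially `numpy.histogram` applied column-by-column to the ASD spectrogram: each frequency bin's amplitude samples over time become one row of counts. A small numpy-only sketch with hypothetical data (the array shape and bin edges are illustration-only assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical ASD spectrogram: 1000 time rows x 4 frequency columns
asd = np.abs(rng.standard_normal((1000, 4)))

bins = np.linspace(0, 4, 21)      # 20 amplitude bins
counts = np.vstack([
    np.histogram(asd[:, i], bins=bins, density=True)[0]
    for i in range(asd.shape[1])
])
```

With `density=True` each row integrates to one over the bin range, matching the ``density`` option documented above.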
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rayleigh_spectrum(self, fftlength=None, overlap=None):
"""Calculate the Rayleigh `FrequencySeries` for this `TimeSeries`.

The Rayleigh statistic is calculated as the ratio of the standard deviation and the mean of a number of periodograms.

Parameters
----------
fftlength : `float`
    number of seconds in single FFT, defaults to a single FFT covering the full duration
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to that of the relevant method.

Returns
-------
psd : `~gwpy.frequencyseries.FrequencySeries`
    a data series containing the PSD.
""" |
return spectral.psd(
    self,
    spectral.rayleigh,
    fftlength=fftlength,
    overlap=overlap,
) |
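The std-over-mean ratio described above has a useful reference value: for stationary Gaussian noise the periodogram bins are exponentially distributed, so the Rayleigh statistic hovers around 1 (larger values indicate non-stationarity, smaller values coherent lines). A numpy-only sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
nfft, nseg = 128, 512

# periodograms of stationary white Gaussian noise, one row per segment
segs = rng.standard_normal((nseg, nfft))
pxx = np.abs(np.fft.rfft(segs, axis=1)) ** 2

# Rayleigh statistic: standard deviation over mean, per frequency bin
rayleigh = pxx.std(axis=0) / pxx.mean(axis=0)
```

The DC and Nyquist bins are real-valued (chi-squared with one degree of freedom) and sit nearer sqrt(2), so checks below look at the interior bins only.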
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rayleigh_spectrogram(self, stride, fftlength=None, overlap=0, nproc=1, **kwargs):
"""Calculate the Rayleigh statistic spectrogram of this `TimeSeries`

Parameters
----------
stride : `float`
    number of seconds in single PSD (column of spectrogram).
fftlength : `float`
    number of seconds in single FFT.
overlap : `float`, optional
    number of seconds of overlap between FFTs, default: ``0``
nproc : `int`, optional
    maximum number of independent frame reading processes, default: ``1``

Returns
-------
spectrogram : `~gwpy.spectrogram.Spectrogram`
    time-frequency Rayleigh spectrogram as generated from the input time-series.

See Also
--------
TimeSeries.rayleigh
    for details of the statistic calculation
""" |
specgram = spectral.average_spectrogram(
    self,
    spectral.rayleigh,
    stride,
    fftlength=fftlength,
    overlap=overlap,
    nproc=nproc,
    **kwargs
)
specgram.override_unit('')
return specgram |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def csd_spectrogram(self, other, stride, fftlength=None, overlap=0, window='hann', nproc=1, **kwargs):
"""Calculate the cross spectral density spectrogram of this `TimeSeries` with 'other'.

Parameters
----------
other : `~gwpy.timeseries.TimeSeries`
    second time-series for cross spectral density calculation
stride : `float`
    number of seconds in single PSD (column of spectrogram).
fftlength : `float`
    number of seconds in single FFT.
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
nproc : `int`
    maximum number of independent frame reading processes, default is set to single-process file reading.

Returns
-------
spectrogram : `~gwpy.spectrogram.Spectrogram`
    time-frequency cross spectrogram as generated from the two input time-series.
""" |
return spectral.average_spectrogram(
    (self, other),
    spectral.csd,
    stride,
    fftlength=fftlength,
    overlap=overlap,
    window=window,
    nproc=nproc,
    **kwargs
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def highpass(self, frequency, gpass=2, gstop=30, fstop=None, type='iir', filtfilt=True, **kwargs):
"""Filter this `TimeSeries` with a high-pass filter.

Parameters
----------
frequency : `float`
    high-pass corner frequency
gpass : `float`
    the maximum loss in the passband (dB).
gstop : `float`
    the minimum attenuation in the stopband (dB).
fstop : `float`
    stop-band edge frequency, defaults to `frequency * 1.5`
type : `str`
    the filter type, either ``'iir'`` or ``'fir'``
**kwargs
    other keyword arguments are passed to :func:`gwpy.signal.filter_design.highpass`

Returns
-------
hpseries : `TimeSeries`
    a high-passed version of the input `TimeSeries`

See Also
--------
gwpy.signal.filter_design.highpass
    for details on the filter design
TimeSeries.filter
    for details on how the filter is applied

.. note::

   When using `scipy < 0.16.0` some higher-order filters may be unstable. With `scipy >= 0.16.0` higher-order filters are decomposed into second-order-sections, and so are much more stable.
""" |
# design filter
filt = filter_design.highpass(frequency, self.sample_rate,
                              fstop=fstop, gpass=gpass, gstop=gstop,
                              analog=False, type=type, **kwargs)
# apply filter
return self.filter(*filt, filtfilt=filtfilt) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bandpass(self, flow, fhigh, gpass=2, gstop=30, fstop=None, type='iir', filtfilt=True, **kwargs):
"""Filter this `TimeSeries` with a band-pass filter.

Parameters
----------
flow : `float`
    lower corner frequency of pass band
fhigh : `float`
    upper corner frequency of pass band
gpass : `float`
    the maximum loss in the passband (dB).
gstop : `float`
    the minimum attenuation in the stopband (dB).
fstop : `tuple` of `float`, optional
    `(low, high)` edge-frequencies of stop band
type : `str`
    the filter type, either ``'iir'`` or ``'fir'``
**kwargs
    other keyword arguments are passed to :func:`gwpy.signal.filter_design.bandpass`

Returns
-------
bpseries : `TimeSeries`
    a band-passed version of the input `TimeSeries`

See Also
--------
gwpy.signal.filter_design.bandpass
    for details on the filter design
TimeSeries.filter
    for details on how the filter is applied

.. note::

   When using `scipy < 0.16.0` some higher-order filters may be unstable. With `scipy >= 0.16.0` higher-order filters are decomposed into second-order-sections, and so are much more stable.
""" |
# design filter
filt = filter_design.bandpass(flow, fhigh, self.sample_rate,
                              fstop=fstop, gpass=gpass, gstop=gstop,
                              analog=False, type=type, **kwargs)
# apply filter
return self.filter(*filt, filtfilt=filtfilt) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resample(self, rate, window='hamming', ftype='fir', n=None):
"""Resample this Series to a new rate

Parameters
----------
rate : `float`
    rate to which to resample this `Series`
window : `str`, `numpy.ndarray`, optional
    window function to apply to signal in the Fourier domain, see :func:`scipy.signal.get_window` for details on acceptable formats, only used for `ftype='fir'` or irregular downsampling
ftype : `str`, optional
    type of filter, either 'fir' or 'iir', defaults to 'fir'
n : `int`, optional
    if `ftype='fir'` the number of taps in the filter, otherwise the order of the Chebyshev type I IIR filter

Returns
-------
Series
    a new Series with the resampling applied, and the same metadata
""" |
if n is None and ftype == 'iir':
    n = 8
elif n is None:
    n = 60
if isinstance(rate, units.Quantity):
    rate = rate.value
factor = (self.sample_rate.value / rate)
# NOTE: use math.isclose when python >= 3.5
if numpy.isclose(factor, 1., rtol=1e-09, atol=0.):
    warnings.warn(
        "resample() rate matches current sample_rate ({}), returning "
        "input data unmodified; please double-check your "
        "parameters".format(self.sample_rate),
        UserWarning,
    )
    return self
# if integer down-sampling, use decimate
if factor.is_integer():
    if ftype == 'iir':
        filt = signal.cheby1(n, 0.05, 0.8/factor, output='zpk')
    else:
        filt = signal.firwin(n+1, 1./factor, window=window)
    return self.filter(filt, filtfilt=True)[::int(factor)]
# otherwise use Fourier filtering
else:
    nsamp = int(self.shape[0] * self.dx.value * rate)
    new = signal.resample(self.value, nsamp,
                          window=window).view(self.__class__)
    new.__metadata_finalize__(self)
    new._unit = self.unit
    new.sample_rate = rate
    return new |
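The branch selection above hinges on one number: the ratio of the current rate to the target rate. If it is (numerically) 1 the data are returned untouched, if it is an integer the method decimates (anti-alias filter, then slice), otherwise it falls back to Fourier resampling. A small sketch of just that decision logic (the helper name and the example rates are illustration-only assumptions):

```python
import numpy as np

def resample_plan(sample_rate, rate):
    """Return which branch the resample logic above would take
    (hypothetical helper mirroring only the decision, not the filtering)."""
    factor = sample_rate / rate
    if np.isclose(factor, 1., rtol=1e-09, atol=0.):
        return 'noop'             # rates match: return input unmodified
    if float(factor).is_integer():
        return 'decimate'         # FIR/IIR filter then slice [::int(factor)]
    return 'fourier'              # scipy.signal.resample path

plans = [resample_plan(4096., r) for r in (4096., 1024., 3000.)]
```

For example, 4096 Hz data resampled to 1024 Hz is an exact factor of 4 and decimates, while 4096 Hz to 3000 Hz is irregular and takes the Fourier path.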
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def zpk(self, zeros, poles, gain, analog=True, **kwargs):
"""Filter this `TimeSeries` by applying a zero-pole-gain filter

Parameters
----------
zeros : `array-like`
    list of zero frequencies (in Hertz)
poles : `array-like`
    list of pole frequencies (in Hertz)
gain : `float`
    DC gain of filter
analog : `bool`, optional
    type of ZPK being applied, if `analog=True` all parameters will be converted in the Z-domain for digital filtering

Returns
-------
timeseries : `TimeSeries`
    the filtered version of the input data

See Also
--------
TimeSeries.filter
    for details on how a digital ZPK-format filter is applied

Examples
--------
To apply a zpk filter with five poles at 100 Hz, and five zeros at 1 Hz (giving an overall DC gain of 1e-10).
""" |
return self.filter(zeros, poles, gain, analog=analog, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter(self, *filt, **kwargs):
"""Filter this `TimeSeries` with an IIR or FIR filter

Parameters
----------
*filt : filter arguments
    1, 2, 3, or 4 arguments defining the filter to be applied,

    - an ``Nx1`` `~numpy.ndarray` of FIR coefficients
    - an ``Nx6`` `~numpy.ndarray` of SOS coefficients
    - ``(numerator, denominator)`` polynomials
    - ``(zeros, poles, gain)``
    - ``(A, B, C, D)`` 'state-space' representation

filtfilt : `bool`, optional
    filter forward and backwards to preserve phase, default: `False`
analog : `bool`, optional
    if `True`, filter coefficients will be converted from Hz to Z-domain digital representation, default: `False`
inplace : `bool`, optional
    if `True`, this array will be overwritten with the filtered version, default: `False`
**kwargs
    other keyword arguments are passed to the filter method

Returns
-------
result : `TimeSeries`
    the filtered version of the input `TimeSeries`

Notes
-----
IIR filters are converted either into cascading second-order sections (if `scipy >= 0.16` is installed), or into the ``(numerator, denominator)`` representation before being applied to this `TimeSeries`.

.. note::

   When using `scipy < 0.16` some higher-order filters may be unstable. With `scipy >= 0.16` higher-order filters are decomposed into second-order-sections, and so are much more stable.

FIR filters are passed directly to :func:`scipy.signal.lfilter` or :func:`scipy.signal.filtfilt` without any conversions.

See also
--------
scipy.signal.sosfilt
    for details on filtering with second-order sections (`scipy >= 0.16` only)
scipy.signal.sosfiltfilt
    for details on forward-backward filtering with second-order sections (`scipy >= 0.18` only)
scipy.signal.lfilter
    for details on filtering (without SOS)
scipy.signal.filtfilt
    for details on forward-backward filtering (without SOS)

Raises
------
ValueError
    if ``filt`` arguments cannot be interpreted properly

Examples
--------
We can design an arbitrarily complicated filter using :mod:`gwpy.signal.filter_design`, then download some data from LOSC and apply it using `TimeSeries.filter`. We can plot the original signal and the filtered version, cutting off either end of the filtered data to remove filter-edge artefacts.
""" |
# parse keyword arguments
filtfilt = kwargs.pop('filtfilt', False)
# parse filter
form, filt = filter_design.parse_filter(
    filt, analog=kwargs.pop('analog', False),
    sample_rate=self.sample_rate.to('Hz').value,
)
if form == 'zpk':
    try:
        sos = signal.zpk2sos(*filt)
    except AttributeError:  # scipy < 0.16, no SOS filtering
        sos = None
        b, a = signal.zpk2tf(*filt)
else:
    sos = None
    b, a = filt
# perform filter
kwargs.setdefault('axis', 0)
if sos is not None and filtfilt:
    out = signal.sosfiltfilt(sos, self, **kwargs)
elif sos is not None:
    out = signal.sosfilt(sos, self, **kwargs)
elif filtfilt:
    out = signal.filtfilt(b, a, self, **kwargs)
else:
    out = signal.lfilter(b, a, self, **kwargs)
# format as type(self)
new = out.view(type(self))
new.__metadata_finalize__(self)
new._unit = self.unit
return new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coherence(self, other, fftlength=None, overlap=None, window='hann', **kwargs):
"""Calculate the frequency-coherence between this `TimeSeries` and another.

Parameters
----------
other : `TimeSeries`
    `TimeSeries` signal to calculate coherence with
fftlength : `float`, optional
    number of seconds in single FFT, defaults to a single FFT covering the full duration
overlap : `float`, optional
    number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0
window : `str`, `numpy.ndarray`, optional
    window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats
**kwargs
    any other keyword arguments accepted by :func:`matplotlib.mlab.cohere` except ``NFFT``, ``window``, and ``noverlap``, which are superseded by the above keyword arguments

Returns
-------
coherence : `~gwpy.frequencyseries.FrequencySeries`
    the coherence `FrequencySeries` of this `TimeSeries` with the other

Notes
-----
If `self` and `other` have different :attr:`TimeSeries.sample_rate` values, the higher-sampled `TimeSeries` will be down-sampled to match the lower.

See Also
--------
:func:`matplotlib.mlab.cohere`
    for details of the coherence calculator
""" |
from matplotlib import mlab
from ..frequencyseries import FrequencySeries
# check sampling rates
if self.sample_rate.to('Hertz') != other.sample_rate.to('Hertz'):
sampling = min(self.sample_rate.value, other.sample_rate.value)
# resample higher rate series
if self.sample_rate.value == sampling:
other = other.resample(sampling)
self_ = self
else:
self_ = self.resample(sampling)
else:
sampling = self.sample_rate.value
self_ = self
# check fft lengths
if overlap is None:
overlap = 0
else:
overlap = int((overlap * self_.sample_rate).decompose().value)
if fftlength is None:
fftlength = int(self_.size/2. + overlap/2.)
else:
fftlength = int((fftlength * self_.sample_rate).decompose().value)
if window is not None:
kwargs['window'] = signal.get_window(window, fftlength)
coh, freqs = mlab.cohere(self_.value, other.value, NFFT=fftlength,
Fs=sampling, noverlap=overlap, **kwargs)
out = coh.view(FrequencySeries)
out.xindex = freqs
out.epoch = self.epoch
out.name = 'Coherence between %s and %s' % (self.name, other.name)
out.unit = 'coherence'
return out |
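The `matplotlib.mlab.cohere` call at the heart of this method can be exercised directly on two NumPy arrays that share a common tone (the sampling values below are illustrative):

```python
import numpy as np
from matplotlib import mlab

fs = 256.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 30 * t)              # shared 30 Hz tone
x = common + 0.1 * rng.standard_normal(t.size)   # plus independent noise
y = common + 0.1 * rng.standard_normal(t.size)

coh, freqs = mlab.cohere(x, y, NFFT=256, Fs=fs, noverlap=128)

# coherence approaches 1 at the shared frequency
i30 = int(np.argmin(np.abs(freqs - 30)))
assert coh[i30] > 0.9
```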
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def auto_coherence(self, dt, fftlength=None, overlap=None, window='hann', **kwargs):
"""Calculate the frequency-coherence between this `TimeSeries` and a time-shifted copy of itself. The standard :meth:`TimeSeries.coherence` is calculated between the input `TimeSeries` and a :meth:`cropped <TimeSeries.crop>` copy of itself. Since the cropped version will be shorter, the input series will be shortened to match. Parameters dt : `float` duration (in seconds) of time-shift fftlength : `float`, optional number of seconds in single FFT, defaults to a single FFT covering the full duration overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats **kwargs any other keyword arguments accepted by :func:`matplotlib.mlab.cohere` except ``NFFT``, ``window``, and ``noverlap`` which are superceded by the above keyword arguments Returns ------- coherence : `~gwpy.frequencyseries.FrequencySeries` the coherence `FrequencySeries` of this `TimeSeries` with the other Notes ----- The :meth:`TimeSeries.auto_coherence` will perform best when ``dt`` is approximately ``fftlength / 2``. See Also -------- :func:`matplotlib.mlab.cohere` for details of the coherence calculator """ |
# shifting self backwards is the same as forwards
dt = abs(dt)
# crop inputs
self_ = self.crop(self.span[0], self.span[1] - dt)
other = self.crop(self.span[0] + dt, self.span[1])
return self_.coherence(other, fftlength=fftlength,
overlap=overlap, window=window, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coherence_spectrogram(self, other, stride, fftlength=None, overlap=None, window='hann', nproc=1):
"""Calculate the coherence spectrogram between this `TimeSeries` and other. Parameters other : `TimeSeries` the second `TimeSeries` in this CSD calculation stride : `float` number of seconds in single PSD (column of spectrogram) fftlength : `float` number of seconds in single FFT overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats nproc : `int` number of parallel processes to use when calculating individual coherence spectra. Returns ------- spectrogram : `~gwpy.spectrogram.Spectrogram` time-frequency coherence spectrogram as generated from the input time-series. """ |
from ..spectrogram.coherence import from_timeseries
return from_timeseries(self, other, stride, fftlength=fftlength,
overlap=overlap, window=window,
nproc=nproc) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rms(self, stride=1):
"""Calculate the root-mean-square value of this `TimeSeries` once per stride. Parameters stride : `float` stride (seconds) between RMS calculations Returns ------- rms : `TimeSeries` a new `TimeSeries` containing the RMS value with dt=stride """ |
stridesamp = int(stride * self.sample_rate.value)
nsteps = int(self.size // stridesamp)
# stride through TimeSeries, recording RMS
data = numpy.zeros(nsteps)
for step in range(nsteps):
# find step TimeSeries
idx = int(stridesamp * step)
idx_end = idx + stridesamp
stepseries = self[idx:idx_end]
rms_ = numpy.sqrt(numpy.mean(numpy.abs(stepseries.value)**2))
data[step] = rms_
name = '%s %.2f-second RMS' % (self.name, stride)
return self.__class__(data, channel=self.channel, t0=self.t0,
name=name, sample_rate=(1/float(stride))) |
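The stride loop above reduces to a per-block RMS; a standalone sketch over a plain array (sample rate and stride chosen for illustration):

```python
import numpy as np

fs = 100                 # samples per second (illustrative)
stride = 1.0             # seconds per RMS value
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)        # 4 s of a unit-amplitude sine

stridesamp = int(stride * fs)
nsteps = x.size // stridesamp
rms = np.array([
    np.sqrt(np.mean(np.abs(x[k * stridesamp:(k + 1) * stridesamp]) ** 2))
    for k in range(nsteps)
])

# the RMS of a unit-amplitude sine is 1/sqrt(2)
assert np.allclose(rms, 1 / np.sqrt(2))
```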
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def demodulate(self, f, stride=1, exp=False, deg=True):
"""Compute the average magnitude and phase of this `TimeSeries` once per stride at a given frequency. Parameters f : `float` frequency (Hz) at which to demodulate the signal stride : `float`, optional stride (seconds) between calculations, defaults to 1 second exp : `bool`, optional return the magnitude and phase trends as one `TimeSeries` object representing a complex exponential, default: False deg : `bool`, optional if `exp=False`, calculates the phase in degrees Returns ------- mag, phase : `TimeSeries` if `exp=False`, returns a pair of `TimeSeries` objects representing magnitude and phase trends with `dt=stride` out : `TimeSeries` if `exp=True`, returns a single `TimeSeries` with magnitude and phase trends represented as `mag * exp(1j*phase)` with `dt=stride` Examples -------- Demodulation is useful when trying to examine steady sinusoidal signals we know to be contained within data. For instance, we can download some data from LOSC to look at trends of the amplitude and phase of LIGO Livingston's calibration line at 331.3 Hz: We can demodulate the `TimeSeries` at 331.3 Hz with a stride of one minute: We can then plot these trends to visualize fluctuations in the amplitude of the calibration line: """ |
stridesamp = int(stride * self.sample_rate.value)
nsteps = int(self.size // stridesamp)
# stride through the TimeSeries and mix with a local oscillator,
# taking the average over each stride
out = type(self)(numpy.zeros(nsteps, dtype=complex))
out.__array_finalize__(self)
out.sample_rate = 1 / float(stride)
w = 2 * numpy.pi * f * self.dt.decompose().value
for step in range(nsteps):
istart = int(stridesamp * step)
iend = istart + stridesamp
idx = numpy.arange(istart, iend)
mixed = 2 * numpy.exp(-1j * w * idx) * self.value[idx]
out.value[step] = mixed.mean()
if exp:
return out
mag = out.abs()
phase = type(mag)(numpy.angle(out, deg=deg))
phase.__array_finalize__(out)
phase.override_unit('deg' if deg else 'rad')
return (mag, phase) |
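The demodulation loop, mixing with `2 * exp(-i*2*pi*f*t)` and averaging per stride, can be checked on a pure tone, where the magnitude trend should recover the amplitude exactly (the tone parameters below are illustrative):

```python
import numpy as np

fs = 1024.0
f = 60.0                                # demodulation frequency (Hz)
t = np.arange(0, 4, 1 / fs)
x = 3.0 * np.cos(2 * np.pi * f * t)     # amplitude-3 tone

stride = 1.0
stridesamp = int(stride * fs)
nsteps = x.size // stridesamp
w = 2 * np.pi * f / fs                  # phase advance per sample
out = np.empty(nsteps, dtype=complex)
for step in range(nsteps):
    idx = np.arange(step * stridesamp, (step + 1) * stridesamp)
    out[step] = np.mean(2 * np.exp(-1j * w * idx) * x[idx])

mag, phase = np.abs(out), np.angle(out)
assert np.allclose(mag, 3.0)            # amplitude recovered per stride
assert np.allclose(phase, 0.0, atol=1e-9)
```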
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def taper(self, side='leftright'):
"""Taper the ends of this `TimeSeries` smoothly to zero. Parameters side : `str`, optional the side of the `TimeSeries` to taper, must be one of `'left'`, `'right'`, or `'leftright'` Returns ------- out : `TimeSeries` a copy of `self` tapered at one or both ends Raises ------ ValueError if `side` is not one of `('left', 'right', 'leftright')` Examples -------- To see the effect of the Planck-taper window, we can taper a sinusoidal `TimeSeries` at both ends: We can plot it to see how the ends now vary smoothly from 0 to 1: Notes ----- The :meth:`TimeSeries.taper` automatically tapers from the second stationary point (local maximum or minimum) on the specified side of the input. However, the method will never taper more than half the full width of the `TimeSeries`, and will fail if there are no stationary points. See :func:`~gwpy.signal.window.planck` for the generic Planck taper window, and see :func:`scipy.signal.get_window` for other common window formats. """ |
# check window properties
if side not in ('left', 'right', 'leftright'):
raise ValueError("side must be one of 'left', 'right', "
"or 'leftright'")
out = self.copy()
# identify the second stationary point away from each boundary,
# else default to half the TimeSeries width
nleft, nright = 0, 0
mini, = signal.argrelmin(out.value)
maxi, = signal.argrelmax(out.value)
if 'left' in side:
nleft = max(mini[0], maxi[0])
nleft = min(nleft, self.size/2)
if 'right' in side:
nright = out.size - min(mini[-1], maxi[-1])
nright = min(nright, self.size/2)
out *= planck(out.size, nleft=nleft, nright=nright)
return out |
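The `planck` used above is gwpy's own window (`gwpy.signal.window.planck`); the following is a standalone sketch of the standard Planck-taper it implements, rising smoothly from 0 to 1 over `nleft` samples and falling back over `nright`:

```python
import numpy as np

def planck_taper(n, nleft=0, nright=0):
    """Sketch of a Planck-taper window: smooth rise over `nleft`
    samples, flat (== 1) in the middle, smooth fall over `nright`.
    """
    w = np.ones(n)
    if nleft:
        k = np.arange(1, nleft)
        z = nleft / k - nleft / (nleft - k)
        w[1:nleft] = 1 / (1 + np.exp(z))
        w[0] = 0
    if nright:
        k = np.arange(1, nright)
        z = nright / k - nright / (nright - k)
        w[-nright:-1][::-1] = 1 / (1 + np.exp(z))
        w[-1] = 0
    return w

w = planck_taper(100, nleft=20, nright=20)
assert w[0] == 0 and w[-1] == 0
assert np.all(w[20:80] == 1)         # untouched middle stays at unity
assert np.all(np.diff(w[:20]) > 0)   # monotonic rise on the left
```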
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whiten(self, fftlength=None, overlap=0, method=DEFAULT_FFT_METHOD, window='hanning', detrend='constant', asd=None, fduration=2, highpass=None, **kwargs):
"""Whiten this `TimeSeries` using inverse spectrum truncation Parameters fftlength : `float`, optional FFT integration length (in seconds) for ASD estimation, default: choose based on sample rate overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 method : `str`, optional FFT-averaging method window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, default: ``'hanning'`` see :func:`scipy.signal.get_window` for details on acceptable formats detrend : `str`, optional type of detrending to do before FFT (see `~TimeSeries.detrend` for more details), default: ``'constant'`` asd : `~gwpy.frequencyseries.FrequencySeries`, optional the amplitude spectral density using which to whiten the data, overrides other ASD arguments, default: `None` fduration : `float`, optional duration (in seconds) of the time-domain FIR whitening filter, must be no longer than `fftlength`, default: 2 seconds highpass : `float`, optional highpass corner frequency (in Hz) of the FIR whitening filter, default: `None` **kwargs other keyword arguments are passed to the `TimeSeries.asd` method to estimate the amplitude spectral density `FrequencySeries` of this `TimeSeries` Returns ------- out : `TimeSeries` a whitened version of the input data with zero mean and unit variance See Also -------- TimeSeries.asd for details on the ASD calculation TimeSeries.convolve for details on convolution with the overlap-save method gwpy.signal.filter_design.fir_from_transfer for FIR filter design through spectrum truncation Notes ----- The accepted ``method`` arguments are: - ``'bartlett'`` : a mean average of non-overlapping periodograms - ``'median'`` : a median average of overlapping periodograms - ``'welch'`` : a mean average of overlapping periodograms The ``window`` argument is used in ASD estimation, FIR filter design, and in preventing spectral leakage in the output. 
Due to filter settle-in, a segment of length ``0.5*fduration`` will be corrupted at the beginning and end of the output. See `~TimeSeries.convolve` for more details. The input is detrended and the output normalised such that, if the input is stationary and Gaussian, then the output will have zero mean and unit variance. For more on inverse spectrum truncation, see arXiv:gr-qc/0509116. """ |
# compute the ASD
fftlength = fftlength if fftlength else _fft_length_default(self.dt)
if asd is None:
asd = self.asd(fftlength, overlap=overlap,
method=method, window=window, **kwargs)
asd = asd.interpolate(1./self.duration.decompose().value)
# design whitening filter, with highpass if requested
ncorner = int(highpass / asd.df.decompose().value) if highpass else 0
ntaps = int((fduration * self.sample_rate).decompose().value)
tdw = filter_design.fir_from_transfer(1/asd.value, ntaps=ntaps,
window=window, ncorner=ncorner)
# condition the input data and apply the whitening filter
in_ = self.copy().detrend(detrend)
out = in_.convolve(tdw, window=window)
return out * numpy.sqrt(2 * in_.dt.decompose().value) |
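A rough standalone sketch of the whitening idea, building a time-domain FIR from the inverse ASD and convolving. The centring and Hann truncation below are illustrative stand-ins for gwpy's `fir_from_transfer`, not its exact algorithm:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n = 512.0, 16384
# coloured noise: white noise through a 2nd-order low-pass
b, a = signal.butter(2, 0.1)
coloured = signal.lfilter(b, a, rng.standard_normal(n))

# estimate the ASD, then build a zero-phase FIR from its inverse
freqs, psd = signal.welch(coloured, fs=fs, nperseg=1024)
asd = np.sqrt(psd)
asd[0] = asd[1]                                 # guard the suppressed DC bin
tdw = np.fft.irfft(1 / asd)                     # impulse response (wrapped)
tdw = np.roll(tdw, tdw.size // 2)               # centre it in time
tdw *= signal.get_window('hann', tdw.size)      # truncate smoothly

white = signal.fftconvolve(coloured, tdw, mode='same')
core = white[tdw.size:-tdw.size]                # drop filter settle-in

# the whitened spectrum is roughly flat: compare two well-separated bands
f2, p2 = signal.welch(core, fs=fs, nperseg=1024)
low = p2[(f2 > 5) & (f2 < 15)].mean()
high = p2[(f2 > 95) & (f2 < 105)].mean()
ratio = low / high                              # ~233 before whitening
assert 0.2 < ratio < 5.0
```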
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gate(self, tzero=1.0, tpad=0.5, whiten=True, threshold=50., cluster_window=0.5, **whiten_kwargs):
"""Removes high amplitude peaks from data using inverse Planck window. Points will be discovered automatically using a provided threshold and clustered within a provided time window. Parameters tzero : `int`, optional half-width time duration in which the time series is set to zero tpad : `int`, optional half-width time duration in which the Planck window is tapered whiten : `bool`, optional if True, data will be whitened before gating points are discovered, use of this option is highly recommended threshold : `float`, optional amplitude threshold, if the data exceeds this value a gating window will be placed cluster_window : `float`, optional time duration over which gating points will be clustered **whiten_kwargs other keyword arguments that will be passed to the `TimeSeries.whiten` method if it is being used when discovering gating points Returns ------- out : `~gwpy.timeseries.TimeSeries` a copy of the original `TimeSeries` that has had gating windows applied Examples -------- Read data into a `TimeSeries` Apply gating using custom arguments fftlength=4, overlap=2, method='median') Plot the original data and the gated data, whiten both for visualization purposes label='Ungated', color='dodgerblue', zorder=2) color='orange', zorder=3) """ |
try:
from scipy.signal import find_peaks
except ImportError as exc:
exc.args = ("Must have scipy>=1.1.0 to utilize this method.",)
raise
# Find points to gate based on a threshold
data = self.whiten(**whiten_kwargs) if whiten else self
window_samples = cluster_window * data.sample_rate.value
gates = find_peaks(abs(data.value), height=threshold,
distance=window_samples)[0]
out = self.copy()
# Iterate over list of indices to gate and apply each one
nzero = int(abs(tzero) * self.sample_rate.value)
npad = int(abs(tpad) * self.sample_rate.value)
half = nzero + npad
ntotal = 2 * half
for gate in gates:
# Set the boundaries for windowed data in the original time series
left_idx = max(0, gate - half)
right_idx = min(gate + half, len(self.value) - 1)
# Choose which part of the window will replace the data
# This must be done explicitly for edge cases where a window
# overlaps index 0 or the end of the time series
left_idx_window = half - (gate - left_idx)
right_idx_window = half + (right_idx - gate)
window = 1 - planck(ntotal, nleft=npad, nright=npad)
window = window[left_idx_window:right_idx_window]
out[left_idx:right_idx] *= window
return out |
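The peak-finding step, thresholding on absolute amplitude and clustering via the `distance` argument, can be exercised directly with `scipy.signal.find_peaks` (the injection positions and amplitudes below are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 256.0
rng = np.random.default_rng(2)
x = rng.standard_normal(int(8 * fs))
# inject two loud, well-separated glitches
x[512] += 100.0
x[1536] += -80.0

threshold = 50.0
cluster_window = 0.5            # seconds within which peaks are clustered
peaks, _ = find_peaks(np.abs(x), height=threshold,
                      distance=cluster_window * fs)
assert list(peaks) == [512, 1536]
```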
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convolve(self, fir, window='hanning'):
"""Convolve this `TimeSeries` with an FIR filter using the overlap-save method Parameters fir : `numpy.ndarray` the time domain filter to convolve with window : `str`, optional window function to apply to boundaries, default: ``'hanning'`` see :func:`scipy.signal.get_window` for details on acceptable formats Returns ------- out : `TimeSeries` the result of the convolution See Also -------- scipy.signal.fftconvolve for details on the convolution scheme used here TimeSeries.filter for an alternative method designed for short filters Notes ----- The output `TimeSeries` is the same length and has the same timestamps as the input. Due to filter settle-in, a segment half the length of `fir` will be corrupted at the left and right boundaries. To prevent spectral leakage these segments will be windowed before convolving. """ |
pad = int(numpy.ceil(fir.size/2))
nfft = min(8*fir.size, self.size)
# condition the input data
in_ = self.copy()
window = signal.get_window(window, fir.size)
in_.value[:pad] *= window[:pad]
in_.value[-pad:] *= window[-pad:]
# if FFT length is long enough, perform only one convolution
if nfft >= self.size/2:
conv = signal.fftconvolve(in_.value, fir, mode='same')
# else use the overlap-save algorithm
else:
nstep = nfft - 2*pad
conv = numpy.zeros(self.size)
# handle first chunk separately
conv[:nfft-pad] = signal.fftconvolve(in_.value[:nfft], fir,
mode='same')[:nfft-pad]
# process chunks of length nstep
k = nfft - pad
while k < self.size - nfft + pad:
yk = signal.fftconvolve(in_.value[k-pad:k+nstep+pad], fir,
mode='same')
conv[k:k+yk.size-2*pad] = yk[pad:-pad]
k += nstep
# handle last chunk separately
conv[-nfft+pad:] = signal.fftconvolve(in_.value[-nfft:], fir,
mode='same')[-nfft+pad:]
out = type(self)(conv)
out.__array_finalize__(self)
return out |
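The chunked loop above implements overlap-save; on a short FIR it can be checked against both direct convolution and a single full `fftconvolve` (the filter design values below are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)
fir = signal.firwin(65, 0.25)        # a short low-pass FIR

full = signal.fftconvolve(x, fir, mode='same')
assert np.allclose(np.convolve(x, fir, mode='same'), full)

# overlap-save: process chunks of nfft samples, keeping only the
# uncorrupted middle of each chunk, mirroring the loop in the method
pad = int(np.ceil(fir.size / 2))
nfft = 512
nstep = nfft - 2 * pad
chunked = np.zeros(x.size)
chunked[:nfft - pad] = signal.fftconvolve(x[:nfft], fir,
                                          mode='same')[:nfft - pad]
k = nfft - pad
while k < x.size - nfft + pad:
    yk = signal.fftconvolve(x[k - pad:k + nstep + pad], fir, mode='same')
    chunked[k:k + yk.size - 2 * pad] = yk[pad:-pad]
    k += nstep
chunked[-nfft + pad:] = signal.fftconvolve(x[-nfft:], fir,
                                           mode='same')[-nfft + pad:]
assert np.allclose(chunked, full)    # chunked result matches one-shot FFT
```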
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def correlate(self, mfilter, window='hanning', detrend='linear', whiten=False, wduration=2, highpass=None, **asd_kw):
"""Cross-correlate this `TimeSeries` with another signal Parameters mfilter : `TimeSeries` the time domain signal to correlate with window : `str`, optional window function to apply to timeseries prior to FFT, default: ``'hanning'`` see :func:`scipy.signal.get_window` for details on acceptable formats detrend : `str`, optional type of detrending to do before FFT (see `~TimeSeries.detrend` for more details), default: ``'linear'`` whiten : `bool`, optional boolean switch to enable (`True`) or disable (`False`) data whitening, default: `False` wduration : `float`, optional duration (in seconds) of the time-domain FIR whitening filter, only used if `whiten=True`, defaults to 2 seconds highpass : `float`, optional highpass corner frequency (in Hz) of the FIR whitening filter, only used if `whiten=True`, default: `None` **asd_kw keyword arguments to pass to `TimeSeries.asd` to generate an ASD, only used if `whiten=True` Returns ------- snr : `TimeSeries` the correlated signal-to-noise ratio (SNR) timeseries See Also -------- TimeSeries.asd for details on the ASD calculation TimeSeries.convolve for details on convolution with the overlap-save method Notes ----- The `window` argument is used in ASD estimation, whitening, and preventing spectral leakage in the output. It is not used to condition the matched-filter, which should be windowed before passing to this method. Due to filter settle-in, a segment half the length of `mfilter` will be corrupted at the beginning and end of the output. See `~TimeSeries.convolve` for more details. The input and matched-filter will be detrended, and the output will be normalised so that the SNR measures number of standard deviations from the expected mean. """ |
self.is_compatible(mfilter)
# condition data
if whiten is True:
fftlength = asd_kw.pop('fftlength',
_fft_length_default(self.dt))
overlap = asd_kw.pop('overlap', None)
if overlap is None:
overlap = recommended_overlap(window) * fftlength
asd = self.asd(fftlength, overlap, window=window, **asd_kw)
# pad the matched-filter to prevent corruption
npad = int(wduration * mfilter.sample_rate.decompose().value / 2)
mfilter = mfilter.pad(npad)
# whiten (with errors on division by zero)
with numpy.errstate(all='raise'):
in_ = self.whiten(window=window, fduration=wduration, asd=asd,
highpass=highpass, detrend=detrend)
mfilter = mfilter.whiten(window=window, fduration=wduration,
asd=asd, highpass=highpass,
detrend=detrend)[npad:-npad]
else:
in_ = self.detrend(detrend)
mfilter = mfilter.detrend(detrend)
# compute matched-filter SNR and normalise
stdev = numpy.sqrt((mfilter.value**2).sum())
snr = in_.convolve(mfilter[::-1], window=window) / stdev
snr.__array_finalize__(self)
return snr |
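The final normalisation, correlating with the time-reversed template and dividing by the template norm, yields an SNR series whose peak marks the signal. A sketch on white noise with an injected pulse (template shape and injection point are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
n = 4096
x = rng.standard_normal(n)                       # unit-variance white noise
t = np.arange(128)
template = np.exp(-0.5 * ((t - 64) / 8) ** 2)    # a Gaussian pulse template
x[2000:2128] += 10 * template                    # loud injection at 2000

# matched-filter SNR: convolve with the time-reversed template and
# normalise by the template norm (valid for unit-variance white noise)
stdev = np.sqrt((template ** 2).sum())
snr = signal.fftconvolve(x, template[::-1], mode='same') / stdev

peak = int(np.argmax(np.abs(snr)))
# 'same'-mode alignment puts the peak at injection start + 64 = 2064
assert abs(peak - 2064) <= 5
assert abs(snr[peak]) > 8
```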
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detrend(self, detrend='constant'):
"""Remove the trend from this `TimeSeries` This method just wraps :func:`scipy.signal.detrend` to return an object of the same type as the input. Parameters detrend : `str`, optional the type of detrending. Returns ------- detrended : `TimeSeries` the detrended input series See Also -------- scipy.signal.detrend for details on the options for the `detrend` argument, and how the operation is done """ |
data = signal.detrend(self.value, type=detrend).view(type(self))
data.__metadata_finalize__(self)
data._unit = self.unit
return data |
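The underlying `scipy.signal.detrend` behaviour is easy to demonstrate on data with a known offset and slope (values illustrative):

```python
import numpy as np
from scipy import signal

t = np.linspace(0, 1, 500)
x = 2.0 + 3.0 * t + np.sin(2 * np.pi * 10 * t)   # offset + slope + signal

const = signal.detrend(x, type='constant')   # remove the mean only
flat = signal.detrend(x, type='linear')      # remove offset and slope
assert abs(const.mean()) < 1e-8
assert abs(flat.mean()) < 1e-8
# the slope survives 'constant' detrending but not 'linear'
assert np.ptp(const) > np.ptp(flat)
```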
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def notch(self, frequency, type='iir', filtfilt=True, **kwargs):
"""Notch out a frequency in this `TimeSeries`. Parameters frequency : `float`, `~astropy.units.Quantity` frequency (default in Hertz) at which to apply the notch type : `str`, optional type of filter to apply, currently only 'iir' is supported **kwargs other keyword arguments to pass to `scipy.signal.iirdesign` Returns ------- notched : `TimeSeries` a notch-filtered copy of the input `TimeSeries` See Also -------- TimeSeries.filter for details on the filtering method scipy.signal.iirdesign for details on the IIR filter design method """ |
zpk = filter_design.notch(frequency, self.sample_rate.value,
type=type, **kwargs)
return self.filter(*zpk, filtfilt=filtfilt) |
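gwpy's `filter_design.notch` designs the filter via `scipy.signal.iirdesign`; as a stand-in sketch, `scipy.signal.iirnotch` gives a comparable second-order notch, applied zero-phase as `filtfilt=True` does (frequencies below are illustrative):

```python
import numpy as np
from scipy import signal

fs = 1024.0
b, a = signal.iirnotch(60.0, Q=30, fs=fs)   # second-order notch at 60 Hz

t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 120 * t)
y = signal.filtfilt(b, a, x)                # zero-phase application

# line amplitudes from the FFT (4 s span -> 0.25 Hz bins)
spec = 2 * np.abs(np.fft.rfft(y)) / y.size
amp60, amp120 = spec[int(60 * 4)], spec[int(120 * 4)]
assert amp60 < 0.1       # the notched line is suppressed
assert amp120 > 0.9      # the neighbouring line survives
```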
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def q_gram(self, qrange=qtransform.DEFAULT_QRANGE, frange=qtransform.DEFAULT_FRANGE, mismatch=qtransform.DEFAULT_MISMATCH, snrthresh=5.5, **kwargs):
"""Scan a `TimeSeries` using the multi-Q transform and return an `EventTable` of the most significant tiles Parameters qrange : `tuple` of `float`, optional `(low, high)` range of Qs to scan frange : `tuple` of `float`, optional `(low, high)` range of frequencies to scan mismatch : `float`, optional maximum allowed fractional mismatch between neighbouring tiles snrthresh : `float`, optional lower inclusive threshold on individual tile SNR to keep in the table **kwargs other keyword arguments to be passed to :meth:`QTiling.transform`, including ``'epoch'`` and ``'search'`` Returns ------- qgram : `EventTable` a table of time-frequency tiles on the most significant `QPlane` See Also -------- TimeSeries.q_transform for a method to interpolate the raw Q-transform over a regularly gridded spectrogram gwpy.signal.qtransform for code and documentation on how the Q-transform is implemented gwpy.table.EventTable.tile to render this `EventTable` as a collection of polygons Notes ----- Only tiles with signal energy greater than or equal to `snrthresh ** 2 / 2` will be stored in the output `EventTable`. The table columns are ``'time'``, ``'duration'``, ``'frequency'``, ``'bandwidth'``, and ``'energy'``. """ |
qscan, _ = qtransform.q_scan(self, mismatch=mismatch, qrange=qrange,
frange=frange, **kwargs)
qgram = qscan.table(snrthresh=snrthresh)
return qgram |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def q_transform(self, qrange=qtransform.DEFAULT_QRANGE, frange=qtransform.DEFAULT_FRANGE, gps=None, search=.5, tres="<default>", fres="<default>", logf=False, norm='median', mismatch=qtransform.DEFAULT_MISMATCH, outseg=None, whiten=True, fduration=2, highpass=None, **asd_kw):
"""Scan a `TimeSeries` using the multi-Q transform and return an interpolated high-resolution spectrogram By default, this method returns a high-resolution spectrogram in both time and frequency, which can result in a large memory footprint. If you know that you only need a subset of the output for, say, a figure, consider using ``outseg`` and the other keyword arguments to restrict the size of the returned data. Parameters qrange : `tuple` of `float`, optional `(low, high)` range of Qs to scan frange : `tuple` of `float`, optional `(log, high)` range of frequencies to scan gps : `float`, optional central time of interest for determine loudest Q-plane search : `float`, optional window around `gps` in which to find peak energies, only used if `gps` is given tres : `float`, optional desired time resolution (seconds) of output `Spectrogram`, default is `abs(outseg) / 1000.` fres : `float`, `int`, `None`, optional desired frequency resolution (Hertz) of output `Spectrogram`, or, if ``logf=True``, the number of frequency samples; give `None` to skip this step and return the original resolution, default is 0.5 Hz or 500 frequency samples logf : `bool`, optional boolean switch to enable (`True`) or disable (`False`) use of log-sampled frequencies in the output `Spectrogram`, if `True` then `fres` is interpreted as a number of frequency samples, default: `False` norm : `bool`, `str`, optional whether to normalize the returned Q-transform output, or how, default: `True` (``'median'``), other options: `False`, ``'mean'`` mismatch : `float` maximum allowed fractional mismatch between neighbouring tiles outseg : `~gwpy.segments.Segment`, optional GPS `[start, stop)` segment for output `Spectrogram`, default is the full duration of the input whiten : `bool`, `~gwpy.frequencyseries.FrequencySeries`, optional boolean switch to enable (`True`) or disable (`False`) data whitening, or an ASD `~gwpy.freqencyseries.FrequencySeries` with which to whiten the data fduration : `float`, 
optional duration (in seconds) of the time-domain FIR whitening filter, only used if `whiten` is not `False`, defaults to 2 seconds highpass : `float`, optional highpass corner frequency (in Hz) of the FIR whitening filter, used only if `whiten` is not `False`, default: `None` **asd_kw keyword arguments to pass to `TimeSeries.asd` to generate an ASD to use when whitening the data Returns ------- out : `~gwpy.spectrogram.Spectrogram` output `Spectrogram` of normalised Q energy See Also -------- TimeSeries.asd for documentation on acceptable `**asd_kw` TimeSeries.whiten for documentation on how the whitening is done gwpy.signal.qtransform for code and documentation on how the Q-transform is implemented Notes ----- This method will return a `Spectrogram` of dtype ``float32`` if ``norm`` is given, and ``float64`` otherwise. To optimize plot rendering with `~matplotlib.axes.Axes.pcolormesh`, the output `~gwpy.spectrogram.Spectrogram` can be given a log-sampled frequency axis by passing `logf=True` at runtime. The `fres` argument is then the number of points on the frequency axis. Note, this is incompatible with `~matplotlib.axes.Axes.imshow`. It is also highly recommended to use the `outseg` keyword argument when only a small window around a given GPS time is of interest. This will speed up this method a little, but can greatly speed up rendering the resulting `Spectrogram` using `pcolormesh`. If you aren't going to use `pcolormesh` in the end, don't worry. Examples -------- Generate a `TimeSeries` containing Gaussian noise sampled at 4096 Hz, centred on GPS time 0, with a sine-Gaussian pulse ('glitch') at 500 Hz: Compute and plot the Q-transform of these data: """ | # noqa: E501
from ..frequencyseries import FrequencySeries
# condition data
if whiten is True: # generate ASD dynamically
window = asd_kw.pop('window', 'hann')
fftlength = asd_kw.pop('fftlength',
_fft_length_default(self.dt))
overlap = asd_kw.pop('overlap', None)
if overlap is None and fftlength == self.duration.value:
asd_kw['method'] = DEFAULT_FFT_METHOD
overlap = 0
elif overlap is None:
overlap = recommended_overlap(window) * fftlength
whiten = self.asd(fftlength, overlap, window=window, **asd_kw)
if isinstance(whiten, FrequencySeries):
# apply whitening (with error on division by zero)
with numpy.errstate(all='raise'):
data = self.whiten(asd=whiten, fduration=fduration,
highpass=highpass)
else:
data = self
# determine search window
if gps is None:
search = None
elif search is not None:
search = Segment(gps-search/2, gps+search/2) & self.span
qgram, _ = qtransform.q_scan(
data, frange=frange, qrange=qrange, norm=norm,
mismatch=mismatch, search=search)
return qgram.interpolate(
tres=tres, fres=fres, logf=logf, outseg=outseg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_index(self, axis, key, value):
"""Update the current axis index based on a given key or value This is an internal method designed to set the origin or step for an index, whilst updating existing Index arrays as appropriate Examples -------- To actually set an index array, use `_set_index` """ |
# delete current value if given None
if value is None:
return delattr(self, key)
_key = "_{}".format(key)
index = "{[0]}index".format(axis)
unit = "{[0]}unit".format(axis)
# convert float to Quantity
if not isinstance(value, Quantity):
try:
value = Quantity(value, getattr(self, unit))
except TypeError:
value = Quantity(float(value), getattr(self, unit))
# if value is changing, delete current index
try:
curr = getattr(self, _key)
except AttributeError:
delattr(self, index)
else:
if (
value is None or
getattr(self, key) is None or
not value.unit.is_equivalent(curr.unit) or
value != curr
):
delattr(self, index)
# set new value
setattr(self, _key, value)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_index(self, key, index):
"""Set a new index array for this series """ |
axis = key[0]
origin = "{}0".format(axis)
delta = "d{}".format(axis)
if index is None:
return delattr(self, key)
if not isinstance(index, Index):
try:
unit = index.unit
except AttributeError:
unit = getattr(self, "_default_{}unit".format(axis))
index = Index(index, unit=unit, copy=False)
setattr(self, origin, index[0])
if index.regular:
setattr(self, delta, index[1] - index[0])
else:
delattr(self, delta)
setattr(self, "_{}".format(key), index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def x0(self):
"""X-axis coordinate of the first data point :type: `~astropy.units.Quantity` scalar """ |
try:
return self._x0
except AttributeError:
self._x0 = Quantity(0, self.xunit)
return self._x0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dx(self):
"""X-axis sample separation :type: `~astropy.units.Quantity` scalar """ |
try:
return self._dx
except AttributeError:
try:
self._xindex
except AttributeError:
self._dx = Quantity(1, self.xunit)
else:
if not self.xindex.regular:
raise AttributeError("This series has an irregular x-axis "
"index, so 'dx' is not well defined")
self._dx = self.xindex[1] - self.xindex[0]
return self._dx |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def xindex(self):
"""Positions of the data on the x-axis :type: `~astropy.units.Quantity` array """ |
try:
return self._xindex
except AttributeError:
self._xindex = Index.define(self.x0, self.dx, self.shape[0])
return self._xindex |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def xunit(self):
"""Unit of x-axis index :type: `~astropy.units.Unit` """ |
try:
return self._dx.unit
except AttributeError:
try:
return self._x0.unit
except AttributeError:
return self._default_xunit |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot(self, method='plot', **kwargs):
"""Plot the data for this series Returns ------- figure : `~matplotlib.figure.Figure` the newly created figure, with populated Axes. See Also -------- matplotlib.pyplot.figure for documentation of keyword arguments used to create the figure matplotlib.figure.Figure.add_subplot for documentation of keyword arguments used to create the axes matplotlib.axes.Axes.plot for documentation of keyword arguments used in rendering the data """ |
from ..plot import Plot
from ..plot.text import default_unit_label
# correct for log scales and zeros
if kwargs.get('xscale') == 'log' and self.x0.value == 0:
kwargs.setdefault('xlim', (self.dx.value, self.xspan[1]))
# make plot
plot = Plot(self, method=method, **kwargs)
# set default y-axis label (xlabel is set by Plot())
default_unit_label(plot.gca().yaxis, self.unit)
return plot |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def step(self, **kwargs):
"""Create a step plot of this series """ |
        where = kwargs.pop('where', 'post')
        kwargs.setdefault('drawstyle', 'steps-{}'.format(where))
data = self.append(self.value[-1:], inplace=False)
return data.plot(**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def shift(self, delta):
"""Shift this `Series` forward on the X-axis by ``delta`` This modifies the series in-place. Parameters delta : `float`, `~astropy.units.Quantity`, `str` The amount by which to shift (in x-axis units if `float`), give a negative value to shift backwards in time Examples -------- 0.0 m 5.0 m -995.0 m """ |
self.x0 = self.x0 + Quantity(delta, self.xunit) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def value_at(self, x):
"""Return the value of this `Series` at the given `xindex` value Parameters x : `float`, `~astropy.units.Quantity` the `xindex` value at which to search Returns ------- y : `~astropy.units.Quantity` the value of this Series at the given `xindex` value """ |
x = Quantity(x, self.xindex.unit).value
try:
idx = (self.xindex.value == x).nonzero()[0][0]
except IndexError as e:
e.args = ("Value %r not found in array index" % x,)
raise
return self[idx] |
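The exact-match lookup used by `value_at` can be reproduced with bare NumPy arrays; this standalone sketch (hypothetical names) annotates the `IndexError` on a miss the same way:

```python
import numpy as np

def value_at(xindex, values, x):
    """Return values[i] where xindex[i] == x exactly (no interpolation)."""
    try:
        idx = (xindex == x).nonzero()[0][0]
    except IndexError as e:
        e.args = ("Value %r not found in array index" % x,)
        raise
    return values[idx]
```

Note the comparison is exact, so a value between two samples raises rather than interpolating.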
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_contiguous(self, other, tol=1/2.**18):
"""Check whether other is contiguous with self. Parameters other : `Series`, `numpy.ndarray` another series of the same type to test for contiguity tol : `float`, optional the numerical tolerance of the test Returns ------- 1 if `other` is contiguous with this series, i.e. would attach seamlessly onto the end -1 if `other` is anti-contiguous with this seires, i.e. would attach seamlessly onto the start 0 if `other` is completely dis-contiguous with thie series Notes ----- if a raw `numpy.ndarray` is passed as other, with no metadata, then the contiguity check will always pass """ |
self.is_compatible(other)
if isinstance(other, type(self)):
if abs(float(self.xspan[1] - other.xspan[0])) < tol:
return 1
elif abs(float(other.xspan[1] - self.xspan[0])) < tol:
return -1
return 0
elif type(other) in [list, tuple, numpy.ndarray]:
return 1 |
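Stripped of the metadata checks, the contiguity test just compares span endpoints within a tolerance; a standalone sketch on plain `(start, end)` spans:

```python
def contiguity(span_a, span_b, tol=1 / 2.**18):
    """Return 1 if span_b starts where span_a ends, -1 for the reverse, 0 otherwise."""
    if abs(float(span_a[1] - span_b[0])) < tol:
        return 1
    if abs(float(span_b[1] - span_a[0])) < tol:
        return -1
    return 0
```

The small default tolerance absorbs floating-point rounding in GPS-time arithmetic without treating genuinely gapped spans as contiguous.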
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_compatible(self, other):
"""Check whether this series and other have compatible metadata This method tests that the `sample size <Series.dx>`, and the `~Series.unit` match. """ |
if isinstance(other, type(self)):
# check step size, if possible
try:
if not self.dx == other.dx:
raise ValueError("%s sample sizes do not match: "
"%s vs %s." % (type(self).__name__,
self.dx, other.dx))
except AttributeError:
raise ValueError("Series with irregular xindexes cannot "
"be compatible")
# check units
if not self.unit == other.unit and not (
self.unit in [dimensionless_unscaled, None] and
other.unit in [dimensionless_unscaled, None]):
raise ValueError("%s units do not match: %s vs %s."
% (type(self).__name__, str(self.unit),
str(other.unit)))
else:
# assume an array-like object, and just check that the shape
# and dtype match
arr = numpy.asarray(other)
if arr.ndim != self.ndim:
raise ValueError("Dimensionality does not match")
if arr.dtype != self.dtype:
warn("Array data types do not match: %s vs %s"
% (self.dtype, other.dtype))
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prepend(self, other, inplace=True, pad=None, gap=None, resize=True):
"""Connect another series onto the start of the current one. Parameters other : `Series` another series of the same type as this one inplace : `bool`, optional perform operation in-place, modifying current series, otherwise copy data and return new series, default: `True` .. warning:: `inplace` prepend bypasses the reference check in `numpy.ndarray.resize`, so be carefully to only use this for arrays that haven't been sharing their memory! pad : `float`, optional value with which to pad discontiguous series, by default gaps will result in a `ValueError`. gap : `str`, optional action to perform if there's a gap between the other series and this one. One of - ``'raise'`` - raise a `ValueError` - ``'ignore'`` - remove gap and join data - ``'pad'`` - pad gap with zeros If `pad` is given and is not `None`, the default is ``'pad'``, otherwise ``'raise'``. resize : `bool`, optional resize this array to accommodate new data, otherwise shift the old data to the left (potentially falling off the start) and put the new data in at the end, default: `True`. Returns ------- series : `TimeSeries` time-series containing joined data sets """ |
out = other.append(self, inplace=False, gap=gap, pad=pad,
resize=resize)
if inplace:
self.resize(out.shape, refcheck=False)
self[:] = out[:]
self.x0 = out.x0.copy()
del out
return self
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, other, inplace=True):
"""Update this series by appending new data from an other and dropping the same amount of data off the start. This is a convenience method that just calls `~Series.append` with `resize=False`. """ |
return self.append(other, inplace=inplace, resize=False) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def crop(self, start=None, end=None, copy=False):
"""Crop this series to the given x-axis extent. Parameters start : `float`, optional lower limit of x-axis to crop to, defaults to current `~Series.x0` end : `float`, optional upper limit of x-axis to crop to, defaults to current series end copy : `bool`, optional, default: `False` copy the input data to fresh memory, otherwise return a view Returns ------- series : `Series` A new series with a sub-set of the input data Notes ----- If either ``start`` or ``end`` are outside of the original `Series` span, warnings will be printed and the limits will be restricted to the :attr:`~Series.xspan` """ |
x0, x1 = self.xspan
xtype = type(x0)
if isinstance(start, Quantity):
start = start.to(self.xunit).value
if isinstance(end, Quantity):
end = end.to(self.xunit).value
# pin early starts to time-series start
if start == x0:
start = None
elif start is not None and xtype(start) < x0:
warn('%s.crop given start smaller than current start, '
'crop will begin when the Series actually starts.'
% type(self).__name__)
start = None
# pin late ends to time-series end
if end == x1:
end = None
if end is not None and xtype(end) > x1:
warn('%s.crop given end larger than current end, '
'crop will end when the Series actually ends.'
% type(self).__name__)
end = None
# find start index
if start is None:
idx0 = None
else:
idx0 = int((xtype(start) - x0) // self.dx.value)
# find end index
if end is None:
idx1 = None
else:
idx1 = int((xtype(end) - x0) // self.dx.value)
if idx1 >= self.size:
idx1 = None
# crop
if copy:
return self[idx0:idx1].copy()
return self[idx0:idx1] |
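The heart of `crop` is converting x-axis values into sample indices with floor division; a simplified sketch of that arithmetic, assuming a regular index (function name is illustrative):

```python
import numpy as np

def crop_indices(x0, dx, size, start=None, end=None):
    """Translate (start, end) x-axis values into a slice of sample indices."""
    idx0 = None if start is None else int((start - x0) // dx)
    idx1 = None if end is None else int((end - x0) // dx)
    if idx1 is not None and idx1 >= size:
        idx1 = None  # slicing to None keeps the full tail
    return slice(idx0, idx1)
```

For an array with `x0=0`, `dx=1`, cropping to `[2, 5)` selects samples 2, 3, and 4, and an over-long `end` simply keeps everything to the end, mirroring the warning-and-pin behaviour above.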
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pad(self, pad_width, **kwargs):
"""Pad this series to a new size Parameters pad_width : `int`, pair of `ints` number of samples by which to pad each end of the array. Single int to pad both ends by the same amount, or (before, after) `tuple` to give uneven padding **kwargs see :meth:`numpy.pad` for kwarg documentation Returns ------- series : `Series` the padded version of the input See also -------- numpy.pad for details on the underlying functionality """ |
# format arguments
kwargs.setdefault('mode', 'constant')
if isinstance(pad_width, int):
pad_width = (pad_width,)
# form pad and view to this type
new = numpy.pad(self, pad_width, **kwargs).view(type(self))
# numpy.pad has stripped all metadata, so copy it over
new.__metadata_finalize__(self)
new._unit = self.unit
# finally move the starting index based on the amount of left-padding
new.x0 -= self.dx * pad_width[0]
return new |
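Because `numpy.pad` strips all metadata, the method above re-attaches it and moves `x0` back by the left-padding. A minimal version of that bookkeeping on a bare array with explicit `x0`/`dx` (a sketch, not the library API):

```python
import numpy as np

def pad_series(values, x0, dx, pad_width):
    """Zero-pad a series and shift its start to account for the left pad."""
    if isinstance(pad_width, int):
        pad_width = (pad_width, pad_width)
    new = np.pad(values, pad_width, mode='constant')
    # samples added on the left push the starting index earlier
    return new, x0 - dx * pad_width[0]
```

Padding `[1, 2, 3]` starting at `x0=10` with `(2, 1)` gives `[0, 0, 1, 2, 3, 0]` starting at `x0=8`.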
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inject(self, other):
"""Add two compatible `Series` along their shared x-axis values. Parameters other : `Series` a `Series` whose xindex intersects with `self.xindex` Returns ------- out : `Series` the sum of `self` and `other` along their shared x-axis values Raises ------ ValueError if `self` and `other` have incompatible units or xindex intervals Notes ----- If `other.xindex` and `self.xindex` do not intersect, this method will return a copy of `self`. If the series have uniformly offset indices, this method will raise a warning. If `self.xindex` is an array of timestamps, and if `other.xspan` is not a subset of `self.xspan`, then `other` will be cropped before being adding to `self`. Users who wish to taper or window their `Series` should do so before passing it to this method. See :meth:`TimeSeries.taper` and :func:`~gwpy.signal.window.planck` for more information. """ |
# check Series compatibility
self.is_compatible(other)
if (self.xunit == second) and (other.xspan[0] < self.xspan[0]):
other = other.crop(start=self.xspan[0])
if (self.xunit == second) and (other.xspan[1] > self.xspan[1]):
other = other.crop(end=self.xspan[1])
ox0 = other.x0.to(self.x0.unit)
idx = ((ox0 - self.x0) / self.dx).value
if not idx.is_integer():
warn('Series have overlapping xspan but their x-axis values are '
'uniformly offset. Returning a copy of the original Series.')
return self.copy()
# add the Series along their shared samples
slice_ = slice(int(idx), int(idx) + other.size)
out = self.copy()
out.value[slice_] += other.value
return out |
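The core of `inject` is locating the integer-sample offset of `other` inside `self` and summing over the shared slice; a NumPy-only sketch (hypothetical helper, exact alignment assumed):

```python
import numpy as np

def inject_values(base, base_x0, dx, other, other_x0):
    """Add `other` into a copy of `base` at the offset implied by the x0s."""
    offset = (other_x0 - base_x0) / dx
    if not float(offset).is_integer():
        raise ValueError("indices are uniformly offset, cannot align samples")
    out = base.copy()
    i = int(offset)
    out[i:i + other.size] += other
    return out
```

Injecting `[1, 1]` at `x0=2` into a five-sample zero series starting at `x0=0` with `dx=1` produces `[0, 0, 1, 1, 0]`.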
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _select_query_method(cls, url):
"""Select the correct query method based on the URL Works for `DataQualityFlag` and `DataQualityDict` """ |
if urlparse(url).netloc.startswith('geosegdb.'): # only DB2 server
return cls.query_segdb
return cls.query_dqsegdb |
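The dispatch is a plain hostname prefix check on the parsed URL; an equivalent standalone sketch using only the standard library (returning method names rather than bound methods, for illustration):

```python
from urllib.parse import urlparse

def select_query_method(url):
    """Pick a query backend name based on the server hostname."""
    if urlparse(url).netloc.startswith('geosegdb.'):
        return 'query_segdb'   # legacy DB2 server
    return 'query_dqsegdb'     # modern DQSegDB server
```

Only hosts whose network location begins with `geosegdb.` route to the legacy backend; everything else, including the default `https://segments.ligo.org`, uses DQSegDB.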
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query(cls, flag, *args, **kwargs):
"""Query for segments of a given flag This method intelligently selects the `~DataQualityFlag.query_segdb` or the `~DataQualityFlag.query_dqsegdb` methods based on the ``url`` kwarg given. Parameters flag : `str` The name of the flag for which to query *args Either, two `float`-like numbers indicating the GPS [start, stop) interval, or a `SegmentList` defining a number of summary segments url : `str`, optional URL of the segment database, defaults to ``$DEFAULT_SEGMENT_SERVER`` environment variable, or ``'https://segments.ligo.org'`` See Also -------- DataQualityFlag.query_segdb DataQualityFlag.query_dqsegdb for details on the actual query engine, and documentation of other keyword arguments appropriate for each query Returns ------- flag : `DataQualityFlag` A new `DataQualityFlag`, with the `known` and `active` lists filled appropriately. """ |
query_ = _select_query_method(
cls, kwargs.get('url', DEFAULT_SEGMENT_SERVER))
return query_(flag, *args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_segdb(cls, flag, *args, **kwargs):
"""Query the initial LIGO segment database for the given flag Parameters flag : `str` The name of the flag for which to query *args Either, two `float`-like numbers indicating the GPS [start, stop) interval, or a `SegmentList` defining a number of summary segments url : `str`, optional URL of the segment database, defaults to ``$DEFAULT_SEGMENT_SERVER`` environment variable, or ``'https://segments.ligo.org'`` Returns ------- flag : `DataQualityFlag` A new `DataQualityFlag`, with the `known` and `active` lists filled appropriately. """ |
warnings.warn("query_segdb is deprecated and will be removed in a "
"future release", DeprecationWarning)
# parse arguments
qsegs = _parse_query_segments(args, cls.query_segdb)
# process query
try:
flags = DataQualityDict.query_segdb([flag], qsegs, **kwargs)
except TypeError as exc:
if 'DataQualityDict' in str(exc):
raise TypeError(str(exc).replace('DataQualityDict',
cls.__name__))
else:
raise
if len(flags) > 1:
raise RuntimeError("Multiple flags returned for single query, "
"something went wrong:\n %s"
% '\n '.join(flags.keys()))
elif len(flags) == 0:
raise RuntimeError("No flags returned for single query, "
"something went wrong.")
return flags[flag] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_dqsegdb(cls, flag, *args, **kwargs):
"""Query the advanced LIGO DQSegDB for the given flag Parameters flag : `str` The name of the flag for which to query *args Either, two `float`-like numbers indicating the GPS [start, stop) interval, or a `SegmentList` defining a number of summary segments url : `str`, optional URL of the segment database, defaults to ``$DEFAULT_SEGMENT_SERVER`` environment variable, or ``'https://segments.ligo.org'`` Returns ------- flag : `DataQualityFlag` A new `DataQualityFlag`, with the `known` and `active` lists filled appropriately. """ |
# parse arguments
qsegs = _parse_query_segments(args, cls.query_dqsegdb)
# get server
url = kwargs.pop('url', DEFAULT_SEGMENT_SERVER)
# parse flag
out = cls(name=flag)
if out.ifo is None or out.tag is None:
raise ValueError("Cannot parse ifo or tag (name) for flag %r"
% flag)
# process query
for start, end in qsegs:
# handle infinities
if float(end) == +inf:
end = to_gps('now').seconds
# query
try:
data = query_segments(flag, int(start), int(end), host=url)
except HTTPError as exc:
if exc.code == 404: # if not found, annotate flag name
exc.msg += ' [{0}]'.format(flag)
raise
# read from json buffer
new = cls.read(
BytesIO(json.dumps(data).encode('utf-8')),
format='json',
)
# restrict to query segments
segl = SegmentList([Segment(start, end)])
new.known &= segl
new.active &= segl
out += new
# replace metadata
out.description = new.description
out.isgood = new.isgood
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_open_data(cls, flag, start, end, **kwargs):
"""Fetch Open Data timeline segments into a flag. flag : `str` the name of the flag to query start : `int`, `str` the GPS start time (or parseable date string) to query end : `int`, `str` the GPS end time (or parseable date string) to query verbose : `bool`, optional show verbose download progress, default: `False` timeout : `int`, optional timeout for download (seconds) host : `str`, optional URL of LOSC host, default: ``'losc.ligo.org'`` Returns ------- flag : `DataQualityFlag` a new flag with `active` segments filled from Open Data Examples -------- <DataQualityFlag('H1:DATA', description=None)> """ |
start = to_gps(start).gpsSeconds
end = to_gps(end).gpsSeconds
known = [(start, end)]
active = timeline.get_segments(flag, start, end, **kwargs)
return cls(flag.replace('_', ':', 1), known=known, active=active,
label=flag) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(cls, source, *args, **kwargs):
"""Read segments from file into a `DataQualityFlag`. Parameters filename : `str` path of file to read name : `str`, optional name of flag to read from file, otherwise read all segments. format : `str`, optional source format identifier. If not given, the format will be detected if possible. See below for list of acceptable formats. coltype : `type`, optional, default: `float` datatype to force for segment times, only valid for ``format='segwizard'``. strict : `bool`, optional, default: `True` require segment start and stop times match printed duration, only valid for ``format='segwizard'``. coalesce : `bool`, optional if `True` coalesce the all segment lists before returning, otherwise return exactly as contained in file(s). nproc : `int`, optional, default: 1 number of CPUs to use for parallel reading of multiple files verbose : `bool`, optional, default: `False` print a progress bar showing read status Returns ------- dqflag : `DataQualityFlag` formatted `DataQualityFlag` containing the active and known segments read from file. Notes -----""" |
if 'flag' in kwargs: # pragma: no cover
warnings.warn('\'flag\' keyword was renamed \'name\', this '
'warning will result in an error in the future')
            kwargs.setdefault('name', kwargs.pop('flag'))
coalesce = kwargs.pop('coalesce', False)
def combiner(flags):
"""Combine `DataQualityFlag` from each file into a single object
"""
out = flags[0]
for flag in flags[1:]:
out.known += flag.known
out.active += flag.active
if coalesce:
return out.coalesce()
return out
return io_read_multi(combiner, cls, source, *args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_veto_def(cls, veto):
"""Define a `DataQualityFlag` from a `VetoDef` Parameters veto : :class:`~ligo.lw.lsctables.VetoDef` veto definition to convert from """ |
name = '%s:%s' % (veto.ifo, veto.name)
try:
name += ':%d' % int(veto.version)
except TypeError:
pass
if veto.end_time == 0:
veto.end_time = +inf
known = Segment(veto.start_time, veto.end_time)
pad = (veto.start_pad, veto.end_pad)
return cls(name=name, known=[known], category=veto.category,
description=veto.comment, padding=pad) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def populate(self, source=DEFAULT_SEGMENT_SERVER, segments=None, pad=True, **kwargs):
"""Query the segment database for this flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. This `DataQualityFlag` will be modified in-place. Parameters source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk. segments : `SegmentList`, optional a list of segments during which to query, if not given, existing known segments for this flag will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with this flag, default: `True`. **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. Returns ------- self : `DataQualityFlag` a reference to this flag """ |
tmp = DataQualityDict()
tmp[self.name] = self
tmp.populate(source=source, segments=segments, pad=pad, **kwargs)
return tmp[self.name] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def contract(self, x):
"""Contract each of the `active` `Segments` by ``x`` seconds. This method adds ``x`` to each segment's lower bound, and subtracts ``x`` from the upper bound. The :attr:`~DataQualityFlag.active` `SegmentList` is modified in place. Parameters x : `float` number of seconds by which to contract each `Segment`. """ |
self.active = self.active.contract(x)
return self.active |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def protract(self, x):
"""Protract each of the `active` `Segments` by ``x`` seconds. This method subtracts ``x`` from each segment's lower bound, and adds ``x`` to the upper bound, while maintaining that each `Segment` stays within the `known` bounds. The :attr:`~DataQualityFlag.active` `SegmentList` is modified in place. Parameters x : `float` number of seconds by which to protact each `Segment`. """ |
self.active = self.active.protract(x)
return self.active |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pad(self, *args, **kwargs):
"""Apply a padding to each segment in this `DataQualityFlag` This method either takes no arguments, in which case the value of the :attr:`~DataQualityFlag.padding` attribute will be used, or two values representing the padding for the start and end of each segment. For both the `start` and `end` paddings, a positive value means pad forward in time, so that a positive `start` pad or negative `end` padding will contract a segment at one or both ends, and vice-versa. This method will apply the same padding to both the `~DataQualityFlag.known` and `~DataQualityFlag.active` lists, but will not :meth:`~DataQualityFlag.coalesce` the result. Parameters start : `float` padding to apply to the start of the each segment end : `float` padding to apply to the end of each segment inplace : `bool`, optional, default: `False` modify this object in-place, default is `False`, i.e. return a copy of the original object with padded segments Returns ------- paddedflag : `DataQualityFlag` a view of the modified flag """ |
if not args:
start, end = self.padding
else:
start, end = args
if kwargs.pop('inplace', False):
new = self
else:
new = self.copy()
if kwargs:
raise TypeError("unexpected keyword argument %r"
% list(kwargs.keys())[0])
new.known = [(s[0]+start, s[1]+end) for s in self.known]
new.active = [(s[0]+start, s[1]+end) for s in self.active]
return new |
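Applied to plain `(start, end)` tuples, the padding rule is one addition per bound; note that, per the sign convention in the docstring, a positive start pad moves the left edge forward in time and so contracts the segment:

```python
def pad_segments(segments, start, end):
    """Shift each segment's bounds: +start on the left edge, +end on the right."""
    return [(a + start, b + end) for a, b in segments]
```

So `pad_segments([(10, 20)], 1, -1)` contracts to `[(11, 19)]`, while `pad_segments([(10, 20)], -1, 1)` expands to `[(9, 21)]`.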
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def round(self, contract=False):
"""Round this flag to integer segments. Parameters contract : `bool`, optional if `False` (default) expand each segment to the containing integer boundaries, otherwise contract each segment to the contained boundaries Returns ------- roundedflag : `DataQualityFlag` A copy of the original flag with the `active` and `known` segments padded out to integer boundaries. """ |
def _round(seg):
if contract: # round inwards
a = type(seg[0])(ceil(seg[0]))
b = type(seg[1])(floor(seg[1]))
else: # round outwards
a = type(seg[0])(floor(seg[0]))
b = type(seg[1])(ceil(seg[1]))
if a >= b: # if segment is too short, return 'null' segment
return type(seg)(0, 0) # will get coalesced away
return type(seg)(a, b)
new = self.copy()
new.active = type(new.active)(map(_round, new.active))
new.known = type(new.known)(map(_round, new.known))
return new.coalesce() |
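The rounding helper can be exercised on bare tuples; segments too short to survive inward rounding collapse to the null `(0, 0)` segment, which a later coalesce discards:

```python
from math import ceil, floor

def round_segment(seg, contract=False):
    """Round a (start, end) segment to integers, inwards or outwards."""
    if contract:  # shrink to the contained integer boundaries
        a, b = ceil(seg[0]), floor(seg[1])
    else:  # expand to the containing integer boundaries
        a, b = floor(seg[0]), ceil(seg[1])
    if a >= b:  # too short to survive inward rounding
        return (0, 0)
    return (a, b)
```

For example `(1.2, 3.8)` rounds outwards to `(1, 4)` and inwards to `(2, 3)`, while `(1.2, 1.8)` collapses under inward rounding.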
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coalesce(self):
"""Coalesce the segments for this flag. This method does two things: - `coalesces <SegmentList.coalesce>` the `~DataQualityFlag.known` and `~DataQualityFlag.active` segment lists - forces the `active` segments to be a proper subset of the `known` segments .. note:: this operations is performed in-place. Returns ------- self a view of this flag, not a copy. """ |
self.known = self.known.coalesce()
self.active = self.active.coalesce()
self.active = (self.known & self.active).coalesce()
return self |
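In the library, `SegmentList.coalesce` does the merging; a pure-Python sketch of that operation on `(start, end)` tuples shows what each of the three lines above relies on:

```python
def coalesce(segments):
    """Merge overlapping or touching (start, end) segments, sorted."""
    out = []
    for a, b in sorted(segments):
        if out and a <= out[-1][1]:  # overlaps or touches the previous segment
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out
```

After coalescing both lists, intersecting `known & active` enforces the invariant that active time is a subset of known time.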
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_name(self, name):
"""Internal method to parse a `string` name into constituent `ifo, `name` and `version` components. Parameters name : `str`, `None` the full name of a `DataQualityFlag` to parse, e.g. ``'H1:DMT-SCIENCE:1'``, or `None` to set all components to `None` Returns ------- (ifo, name, version) A tuple of component string parts Raises ------ `ValueError` If the input ``name`` cannot be parsed into {ifo}:{tag}:{version} format. """ |
if name is None:
self.ifo = None
self.tag = None
self.version = None
elif re_IFO_TAG_VERSION.match(name):
match = re_IFO_TAG_VERSION.match(name).groupdict()
self.ifo = match['ifo']
self.tag = match['tag']
self.version = int(match['version'])
elif re_IFO_TAG.match(name):
match = re_IFO_TAG.match(name).groupdict()
self.ifo = match['ifo']
self.tag = match['tag']
self.version = None
elif re_TAG_VERSION.match(name):
match = re_TAG_VERSION.match(name).groupdict()
self.ifo = None
self.tag = match['tag']
self.version = int(match['version'])
else:
raise ValueError("No flag name structure detected in '%s', flags "
"should be named as '{ifo}:{tag}:{version}'. "
"For arbitrary strings, use the "
"`DataQualityFlag.label` attribute" % name)
return self.ifo, self.tag, self.version |
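The three regexes distinguish which name components are present; a self-contained sketch with illustrative patterns (the library's actual `re_IFO_TAG_VERSION` etc. may differ in detail):

```python
import re

# assumed patterns for '{ifo}:{tag}:{version}' style names
re_IFO_TAG_VERSION = re.compile(
    r'\A(?P<ifo>[A-Z]\d):(?P<tag>[^:]+):(?P<version>\d+)\Z')
re_IFO_TAG = re.compile(r'\A(?P<ifo>[A-Z]\d):(?P<tag>[^:]+)\Z')
re_TAG_VERSION = re.compile(r'\A(?P<tag>[^:]+):(?P<version>\d+)\Z')

def parse_name(name):
    """Split '{ifo}:{tag}:{version}' into its components, None where absent."""
    for pattern in (re_IFO_TAG_VERSION, re_IFO_TAG, re_TAG_VERSION):
        match = pattern.match(name)
        if match:
            parts = match.groupdict()
            return (parts.get('ifo'), parts['tag'],
                    int(parts['version']) if parts.get('version') else None)
    raise ValueError("No flag name structure detected in %r" % name)
```

The most specific pattern is tried first, so `'H1:DMT-SCIENCE:1'` yields all three parts, `'H1:DMT-SCIENCE'` a versionless pair, and `'DMT-SCIENCE:2'` a tag and version with no interferometer.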
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_segdb(cls, flags, *args, **kwargs):
"""Query the inital LIGO segment database for a list of flags. Parameters flags : `iterable` A list of flag names for which to query. *args Either, two `float`-like numbers indicating the GPS [start, stop) interval, or a `SegmentList` defining a number of summary segments. url : `str`, optional URL of the segment database, defaults to ``$DEFAULT_SEGMENT_SERVER`` environment variable, or ``'https://segments.ligo.org'`` Returns ------- flagdict : `DataQualityDict` An ordered `DataQualityDict` of (name, `DataQualityFlag`) pairs. """ |
warnings.warn("query_segdb is deprecated and will be removed in a "
"future release", DeprecationWarning)
# parse segments
qsegs = _parse_query_segments(args, cls.query_segdb)
url = kwargs.pop('url', DEFAULT_SEGMENT_SERVER)
if kwargs.pop('on_error', None) is not None:
warnings.warn("DataQualityDict.query_segdb doesn't accept the "
"on_error keyword argument")
if kwargs.keys():
raise TypeError("DataQualityDict.query_segdb has no keyword "
"argument '%s'" % list(kwargs.keys()[0]))
# process query
from glue.segmentdb import (segmentdb_utils as segdb_utils,
query_engine as segdb_engine)
connection = segdb_utils.setup_database(url)
engine = segdb_engine.LdbdQueryEngine(connection)
segdefs = []
for flag in flags:
dqflag = DataQualityFlag(name=flag)
ifo = dqflag.ifo
name = dqflag.tag
if dqflag.version is None:
vers = '*'
else:
vers = dqflag.version
for gpsstart, gpsend in qsegs:
if float(gpsend) == +inf:
gpsend = to_gps('now').seconds
gpsstart = float(gpsstart)
if not gpsstart.is_integer():
raise ValueError("Segment database queries can only"
"operate on integer GPS times")
gpsend = float(gpsend)
if not gpsend.is_integer():
raise ValueError("Segment database queries can only"
"operate on integer GPS times")
segdefs += segdb_utils.expand_version_number(
engine, (ifo, name, vers, gpsstart, gpsend, 0, 0))
segs = segdb_utils.query_segments(engine, 'segment', segdefs)
segsum = segdb_utils.query_segments(engine, 'segment_summary', segdefs)
# build output
out = cls()
for definition, segments, summary in zip(segdefs, segs, segsum):
# parse flag name
flag = ':'.join(map(str, definition[:3]))
name = flag.rsplit(':', 1)[0]
# if versionless
if flag.endswith('*'):
flag = name
key = name
# if asked for versionless, but returned a version
elif flag not in flags and name in flags:
key = name
# other
else:
key = flag
# define flag
if key not in out:
out[key] = DataQualityFlag(name=flag)
# add segments
out[key].known.extend(summary)
out[key].active.extend(segments)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_dqsegdb(cls, flags, *args, **kwargs):
"""Query the advanced LIGO DQSegDB for a list of flags. Parameters flags : `iterable` A list of flag names for which to query. *args Either, two `float`-like numbers indicating the GPS [start, stop) interval, or a `SegmentList` defining a number of summary segments. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default):
raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened url : `str`, optional URL of the segment database, defaults to ``$DEFAULT_SEGMENT_SERVER`` environment variable, or ``'https://segments.ligo.org'`` Returns ------- flagdict : `DataQualityDict` An ordered `DataQualityDict` of (name, `DataQualityFlag`) pairs. """ |
# check on_error flag
on_error = kwargs.pop('on_error', 'raise').lower()
if on_error not in ['raise', 'warn', 'ignore']:
raise ValueError("on_error must be one of 'raise', 'warn', "
"or 'ignore'")
# parse segments
qsegs = _parse_query_segments(args, cls.query_dqsegdb)
# set up threading
inq = Queue()
outq = Queue()
for i in range(len(flags)):
t = _QueryDQSegDBThread(inq, outq, qsegs, **kwargs)
t.setDaemon(True)
t.start()
for i, flag in enumerate(flags):
inq.put((i, flag))
# capture output
inq.join()
outq.join()
new = cls()
results = list(zip(*sorted([outq.get() for i in range(len(flags))],
key=lambda x: x[0])))[1]
for result, flag in zip(results, flags):
if isinstance(result, Exception):
result.args = ('%s [%s]' % (str(result), str(flag)),)
if on_error == 'ignore':
pass
elif on_error == 'warn':
warnings.warn(str(result))
else:
raise result
else:
new[flag] = result
return new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(cls, source, names=None, format=None, **kwargs):
"""Read segments from file into a `DataQualityDict` Parameters source : `str` path of file to read format : `str`, optional source format identifier. If not given, the format will be detected if possible. See below for list of acceptable formats. names : `list`, optional, default: read all names found list of names to read, by default all names are read separately. coalesce : `bool`, optional if `True` coalesce the all segment lists before returning, otherwise return exactly as contained in file(s). nproc : `int`, optional, default: 1 number of CPUs to use for parallel reading of multiple files verbose : `bool`, optional, default: `False` print a progress bar showing read status Returns ------- flagdict : `DataQualityDict` a new `DataQualityDict` of `DataQualityFlag` entries with ``active`` and ``known`` segments seeded from the XML tables in the given file. Notes -----""" |
on_missing = kwargs.pop('on_missing', 'error')
coalesce = kwargs.pop('coalesce', False)
if 'flags' in kwargs: # pragma: no cover
warnings.warn('\'flags\' keyword was renamed \'names\', this '
'warning will result in an error in the future')
names = kwargs.pop('flags')
def combiner(inputs):
out = cls()
# check all names are contained
required = set(names or [])
found = set(name for dqdict in inputs for name in dqdict)
for name in required - found: # validate all names are found once
msg = '{!r} not found in any input file'.format(name)
if on_missing == 'ignore':
continue
if on_missing == 'warn':
warnings.warn(msg)
else:
raise ValueError(msg)
# combine flags
for dqdict in inputs:
for flag in dqdict:
try: # repeated occurence
out[flag].known.extend(dqdict[flag].known)
out[flag].active.extend(dqdict[flag].active)
except KeyError: # first occurence
out[flag] = dqdict[flag]
if coalesce:
return out.coalesce()
return out
return io_read_multi(combiner, cls, source, names=names,
format=format, on_missing='ignore', **kwargs) |
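The `combiner` above merges per-file dictionaries so that the first occurrence of a flag seeds the entry and later occurrences extend its segment lists, with a `required`-names check up front. A minimal sketch of that merge policy using plain dicts of lists (the helper name is invented for illustration):

```python
import warnings

def combine_dicts(inputs, required=None, on_missing='error'):
    """Merge dicts of lists: the first occurrence of a key seeds the
    output entry, later occurrences extend its list."""
    out = {}
    # check all required names were found in at least one input
    found = {name for d in inputs for name in d}
    for name in set(required or []) - found:
        msg = '{!r} not found in any input'.format(name)
        if on_missing == 'ignore':
            continue
        if on_missing == 'warn':
            warnings.warn(msg)
        else:
            raise ValueError(msg)
    # combine entries
    for d in inputs:
        for key, vals in d.items():
            out.setdefault(key, []).extend(vals)
    return out
```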
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_veto_definer_file(cls, fp, start=None, end=None, ifo=None, format='ligolw'):
"""Read a `DataQualityDict` from a LIGO_LW XML VetoDefinerTable. Parameters fp : `str` path of veto definer file to read start : `~gwpy.time.LIGOTimeGPS`, `int`, optional GPS start time at which to restrict returned flags end : `~gwpy.time.LIGOTimeGPS`, `int`, optional GPS end time at which to restrict returned flags ifo : `str`, optional interferometer prefix whose flags you want to read format : `str`, optional format of file to read, currently only 'ligolw' is supported Returns ------- flags : `DataQualityDict` a `DataQualityDict` of flags parsed from the `veto_def_table` of the input file. Notes ----- This method does not automatically `~DataQualityDict.populate` the `active` segment list of any flags, a separate call should be made for that as follows """ |
if format != 'ligolw':
raise NotImplementedError("Reading veto definer from non-ligolw "
"format file is not currently "
"supported")
# read veto definer file
with get_readable_fileobj(fp, show_progress=False) as fobj:
from ..io.ligolw import read_table as read_ligolw_table
veto_def_table = read_ligolw_table(fobj, 'veto_definer')
if start is not None:
start = to_gps(start)
if end is not None:
end = to_gps(end)
# parse flag definitions
out = cls()
for row in veto_def_table:
if ifo and row.ifo != ifo:
continue
if start and 0 < row.end_time <= start:
continue
elif start:
row.start_time = max(row.start_time, start)
if end and row.start_time >= end:
continue
elif end and not row.end_time:
row.end_time = end
elif end:
row.end_time = min(row.end_time, end)
flag = DataQualityFlag.from_veto_def(row)
if flag.name in out:
out[flag.name].known.extend(flag.known)
out[flag.name].known.coalesce()
else:
out[flag.name] = flag
return out |
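The clipping logic in the row loop (skip rows that fall entirely outside the `[start, end)` window, truncate the rest, and treat an `end_time` of 0 as open-ended) can be isolated as a small pure function; `clip_segment` is a hypothetical helper written for illustration, not gwpy API:

```python
def clip_segment(row_start, row_end, start=None, end=None):
    """Clip a [row_start, row_end) segment to an optional [start, end)
    window; return None if the segment lies entirely outside it.

    A row_end of 0 means 'open-ended', as in veto-definer tables.
    """
    if start is not None and 0 < row_end <= start:
        return None  # ends before the window opens
    if start is not None:
        row_start = max(row_start, start)
    if end is not None and row_start >= end:
        return None  # starts after the window closes
    if end is not None and not row_end:
        row_end = end  # open-ended row: close it at the window edge
    elif end is not None:
        row_end = min(row_end, end)
    return (row_start, row_end)
```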
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_ligolw_tables(cls, segmentdeftable, segmentsumtable, segmenttable, names=None, gpstype=LIGOTimeGPS, on_missing='error'):
"""Build a `DataQualityDict` from a set of LIGO_LW segment tables Parameters segmentdeftable : :class:`~ligo.lw.lsctables.SegmentDefTable` the ``segment_definer`` table to read segmentsumtable : :class:`~ligo.lw.lsctables.SegmentSumTable` the ``segment_summary`` table to read segmenttable : :class:`~ligo.lw.lsctables.SegmentTable` the ``segment`` table to read names : `list` of `str`, optional a list of flag names to read, defaults to returning all gpstype : `type`, `callable`, optional class to use for GPS times in returned objects, can be a function to convert GPS time to something else, default is `~gwpy.time.LIGOTimeGPS` on_missing : `str`, optional action to take when a one or more ``names`` are not found in the ``segment_definer`` table, one of - ``'ignore'`` : do nothing - ``'warn'`` : print a warning - ``error'`` : raise a `ValueError` Returns ------- dqdict : `DataQualityDict` a dict of `DataQualityFlag` objects populated from the LIGO_LW tables """ |
out = cls()
id_ = dict() # need to record relative IDs from LIGO_LW
# read segment definers and generate DataQualityFlag object
for row in segmentdeftable:
ifos = sorted(row.instruments)
ifo = ''.join(ifos) if ifos else None
tag = row.name
version = row.version
name = ':'.join([str(k) for k in (ifo, tag, version) if
k is not None])
if names is None or name in names:
out[name] = DataQualityFlag(name)
thisid = int(row.segment_def_id)
try:
id_[name].append(thisid)
except (AttributeError, KeyError):
id_[name] = [thisid]
# verify all requested flags were found
for flag in names or []:
if flag not in out and on_missing != 'ignore':
msg = ("no segment definition found for flag={0!r} in "
"file".format(flag))
if on_missing == 'warn':
warnings.warn(msg)
else:
raise ValueError(msg)
# parse a table into the target DataQualityDict
def _parse_segments(table, listattr):
for row in table:
for flag in out:
# match row ID to list of IDs found for this flag
if int(row.segment_def_id) in id_[flag]:
getattr(out[flag], listattr).append(
Segment(*map(gpstype, row.segment)),
)
break
# read segment summary table as 'known'
_parse_segments(segmentsumtable, "known")
# read segment table as 'active'
_parse_segments(segmenttable, "active")
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_ligolw_tables(self, ilwdchar_compat=None, **attrs):
"""Convert this `DataQualityDict` into a trio of LIGO_LW segment tables Parameters ilwdchar_compat : `bool`, optional whether to write in the old format, compatible with ILWD characters (`True`), or to use the new format (`False`); the current default is `True` to maintain backwards compatibility, but this will change for gwpy-1.0.0. **attrs other attributes to add to all rows in all tables (e.g. ``'process_id'``) Returns ------- segmentdeftable : :class:`~ligo.lw.lsctables.SegmentDefTable` the ``segment_definer`` table segmentsumtable : :class:`~ligo.lw.lsctables.SegmentSumTable` the ``segment_summary`` table segmenttable : :class:`~ligo.lw.lsctables.SegmentTable` the ``segment`` table """ |
if ilwdchar_compat is None:
warnings.warn("ilwdchar_compat currently defaults to `True`, "
"but this will change to `False` in the future, to "
"maintain compatibility in future releases, "
"manually specify `ilwdchar_compat=True`",
PendingDeprecationWarning)
ilwdchar_compat = True
if ilwdchar_compat:
from glue.ligolw import lsctables
else:
from ligo.lw import lsctables
from ..io.ligolw import to_table_type as to_ligolw_table_type
SegmentDefTable = lsctables.SegmentDefTable
SegmentSumTable = lsctables.SegmentSumTable
SegmentTable = lsctables.SegmentTable
segdeftab = lsctables.New(SegmentDefTable)
segsumtab = lsctables.New(SegmentSumTable)
segtab = lsctables.New(SegmentTable)
def _write_attrs(table, row):
for key, val in attrs.items():
setattr(row, key, to_ligolw_table_type(val, table, key))
# write flags to tables
for flag in self.values():
# segment definer
segdef = segdeftab.RowType()
for col in segdeftab.columnnames: # default all columns to None
setattr(segdef, col, None)
segdef.instruments = {flag.ifo}
segdef.name = flag.tag
segdef.version = flag.version
segdef.comment = flag.description
segdef.insertion_time = to_gps(datetime.datetime.now()).gpsSeconds
segdef.segment_def_id = SegmentDefTable.get_next_id()
_write_attrs(segdeftab, segdef)
segdeftab.append(segdef)
# write segment summary (known segments)
for vseg in flag.known:
segsum = segsumtab.RowType()
for col in segsumtab.columnnames: # default columns to None
setattr(segsum, col, None)
segsum.segment_def_id = segdef.segment_def_id
segsum.segment = map(LIGOTimeGPS, vseg)
segsum.comment = None
segsum.segment_sum_id = SegmentSumTable.get_next_id()
_write_attrs(segsumtab, segsum)
segsumtab.append(segsum)
# write segment table (active segments)
for aseg in flag.active:
seg = segtab.RowType()
for col in segtab.columnnames: # default all columns to None
setattr(seg, col, None)
seg.segment_def_id = segdef.segment_def_id
seg.segment = map(LIGOTimeGPS, aseg)
seg.segment_id = SegmentTable.get_next_id()
_write_attrs(segtab, seg)
segtab.append(seg)
return segdeftab, segsumtab, segtab |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def populate(self, source=DEFAULT_SEGMENT_SERVER, segments=None, pad=True, on_error='raise', **kwargs):
"""Query the segment database for each flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. Entries in this dict will be modified in-place. Parameters source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk. segments : `SegmentList`, optional a list of known segments during which to query, if not given, existing known segments for flags will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with each flag, default: `True`. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default):
raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. Returns ------- self : `DataQualityDict` a reference to the modified DataQualityDict """ |
# check on_error flag
if on_error not in ['raise', 'warn', 'ignore']:
raise ValueError("on_error must be one of 'raise', 'warn', "
"or 'ignore'")
# format source
source = urlparse(source)
# perform query for all segments
if source.netloc and segments is not None:
segments = SegmentList(map(Segment, segments))
tmp = type(self).query(self.keys(), segments, url=source.geturl(),
on_error=on_error, **kwargs)
elif not source.netloc:
tmp = type(self).read(source.geturl(), **kwargs)
# apply padding and wrap to given known segments
for key in self:
if segments is None and source.netloc:
try:
tmp = {key: self[key].query(
self[key].name, self[key].known, **kwargs)}
except URLError as exc:
if on_error == 'ignore':
pass
elif on_error == 'warn':
warnings.warn('Error querying for %s: %s' % (key, exc))
else:
raise
continue
self[key].known &= tmp[key].known
self[key].active = tmp[key].active
if pad:
self[key] = self[key].pad(inplace=True)
if segments is not None:
self[key].known &= segments
self[key].active &= segments
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy(self, deep=False):
"""Build a copy of this dictionary. Parameters deep : `bool`, optional, default: `False` perform a deep copy of the original dictionary with a fresh memory address Returns ------- flag2 : `DataQualityFlag` a copy of the original dictionary """ |
if deep:
return deepcopy(self)
return super(DataQualityDict, self).copy() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def union(self):
"""Return the union of all flags in this dict Returns ------- union : `DataQualityFlag` a new `DataQualityFlag` who's active and known segments are the union of those of the values of this dict """ |
usegs = reduce(operator.or_, self.values())
usegs.name = ' | '.join(self.keys())
return usegs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intersection(self):
"""Return the intersection of all flags in this dict Returns ------- intersection : `DataQualityFlag` a new `DataQualityFlag` who's active and known segments are the intersection of those of the values of this dict """ |
isegs = reduce(operator.and_, self.values())
isegs.name = ' & '.join(self.keys())
return isegs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rc_params(usetex=None):
"""Returns a new `matplotlib.RcParams` with updated GWpy parameters The updated parameters are globally stored as `gwpy.plot.rc.GWPY_RCPARAMS`, with the updated TeX parameters as `gwpy.plot.rc.GWPY_TEX_RCPARAMS`. .. note:: This function doesn't apply the new `RcParams` in any way, just creates something that can be used to set `matplotlib.rcParams`. Parameters usetex : `bool`, `None` value to set for `text.usetex`; if `None` determine automatically using the ``GWPY_USETEX`` environment variable, and whether `tex` is available on the system. If `True` is given (or determined) a number of other parameters are updated to improve TeX formatting. Examples -------- """ |
# if user didn't specify to use tex or not, guess based on
# the `GWPY_USETEX` environment variable, or whether tex is
# installed at all.
if usetex is None:
usetex = bool_env(
'GWPY_USETEX',
default=rcParams['text.usetex'] or tex.has_tex())
# build RcParams from matplotlib.rcParams with GWpy extras
rcp = GWPY_RCPARAMS.copy()
if usetex:
rcp.update(GWPY_TEX_RCPARAMS)
return rcp |
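`rc_params` relies on a `bool_env` helper to parse the `GWPY_USETEX` environment variable. A plausible stand-alone sketch follows; the set of accepted truthy strings is an assumption, and gwpy's own helper may differ:

```python
import os

def bool_env(key, default=False):
    """Parse an environment variable as a boolean, falling back to
    ``default`` when the variable is unset (illustrative sketch)."""
    try:
        value = os.environ[key]
    except KeyError:
        return bool(default)
    # treat a small set of conventional strings as True
    return value.lower() in ('true', 'yes', '1', 'y')
```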
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_subplot_params(figsize):
"""Return sensible default `SubplotParams` for a figure of the given size Parameters figsize : `tuple` of `float` the ``(width, height)`` figure size (inches) Returns ------- params : `~matplotlib.figure.SubplotParams` formatted set of subplot parameters """ |
width, height = figsize
try:
left, right = SUBPLOT_WIDTH[width]
except KeyError:
left = right = None
try:
bottom, top = SUBPLOT_HEIGHT[height]
except KeyError:
bottom = top = None
return SubplotParams(left=left, bottom=bottom, right=right, top=top) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_string(input_):
"""Format an input for representation as text This method is just a convenience that handles default LaTeX formatting """ |
usetex = rcParams['text.usetex']
if isinstance(input_, units.UnitBase):
return input_.to_string('latex_inline')
if isinstance(input_, (float, int)) and usetex:
return tex.float_to_latex(input_)
if usetex:
return tex.label_to_latex(input_)
return str(input_) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def default_unit_label(axis, unit):
"""Set default label for an axis from a `~astropy.units.Unit` If the axis already has a label, this function does nothing. Parameters axis : `~matplotlib.axis.Axis` the axis to manipulate unit : `~astropy.units.Unit` the unit to use for the label Returns ------- text : `str`, `None` the text for the new label, if set, otherwise `None` """ |
if not axis.isDefault_label:
return
label = axis.set_label_text(unit.to_string('latex_inline_dimensional'))
axis.isDefault_label = True
return label.get_text() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def initialize(self):
"""Initialize a connection to the LDAP server. :return: LDAP connection object. """ |
try:
conn = ldap.initialize('{0}://{1}:{2}'.format(
current_app.config['LDAP_SCHEMA'],
current_app.config['LDAP_HOST'],
current_app.config['LDAP_PORT']))
conn.set_option(ldap.OPT_NETWORK_TIMEOUT,
current_app.config['LDAP_TIMEOUT'])
conn = self._set_custom_options(conn)
conn.protocol_version = ldap.VERSION3
if current_app.config['LDAP_USE_TLS']:
conn.start_tls_s()
return conn
except ldap.LDAPError as e:
raise LDAPException(self.error(e.args)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind(self):
"""Attempts to bind to the LDAP server using the credentials of the service account. :return: Bound LDAP connection object if successful or ``None`` if unsuccessful. """ |
conn = self.initialize
try:
conn.simple_bind_s(
current_app.config['LDAP_USERNAME'],
current_app.config['LDAP_PASSWORD'])
return conn
except ldap.LDAPError as e:
raise LDAPException(self.error(e.args)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind_user(self, username, password):
"""Attempts to bind a user to the LDAP server using the credentials supplied. .. note:: Many LDAP servers will grant anonymous access if ``password`` is the empty string, causing this method to return :obj:`True` no matter what username is given. If you want to use this method to validate a username and password, rather than actually connecting to the LDAP server as a particular user, make sure ``password`` is not empty. :param str username: The username to attempt to bind with. :param str password: The password of the username we're attempting to bind with. :return: Returns ``True`` if successful or ``None`` if the credentials are invalid. """ |
user_dn = self.get_object_details(user=username, dn_only=True)
if user_dn is None:
return
try:
conn = self.initialize
conn.simple_bind_s(user_dn.decode('utf-8'), password)
return True
except ldap.LDAPError:
return |
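The note in `bind_user` about anonymous binds suggests guarding against empty passwords before attempting the bind at all. A hedged sketch of such a guard, with `safe_bind` and the injected `bind_func` both invented for illustration (this is not Flask-SimpleLDAP API):

```python
def safe_bind(bind_func, username, password):
    """Refuse to bind with an empty password: many LDAP servers grant
    anonymous access in that case, which would make any username
    appear to authenticate successfully."""
    if not password:
        return None  # never forward an empty password to the server
    return bind_func(username, password)
```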
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_user_groups(self, user):
"""Returns a ``list`` with the user's groups or ``None`` if unsuccessful. :param str user: User we want groups for. """ |
conn = self.bind
try:
if current_app.config['LDAP_OPENLDAP']:
fields = \
[str(current_app.config['LDAP_GROUP_MEMBER_FILTER_FIELD'])]
records = conn.search_s(
current_app.config['LDAP_BASE_DN'], ldap.SCOPE_SUBTREE,
ldap_filter.filter_format(
current_app.config['LDAP_GROUP_MEMBER_FILTER'],
(self.get_object_details(user, dn_only=True),)),
fields)
else:
records = conn.search_s(
current_app.config['LDAP_BASE_DN'], ldap.SCOPE_SUBTREE,
ldap_filter.filter_format(
current_app.config['LDAP_USER_OBJECT_FILTER'],
(user,)),
[current_app.config['LDAP_USER_GROUPS_FIELD']])
conn.unbind_s()
if records:
if current_app.config['LDAP_OPENLDAP']:
group_member_filter = \
current_app.config['LDAP_GROUP_MEMBER_FILTER_FIELD']
if sys.version_info[0] > 2:
groups = [record[1][group_member_filter][0].decode(
'utf-8') for record in records]
else:
groups = [record[1][group_member_filter][0] for
record in records]
return groups
else:
if current_app.config['LDAP_USER_GROUPS_FIELD'] in \
records[0][1]:
groups = records[0][1][
current_app.config['LDAP_USER_GROUPS_FIELD']]
result = [re.findall(b'(?:cn=|CN=)(.*?),', group)[0]
for group in groups]
if sys.version_info[0] > 2:
result = [r.decode('utf-8') for r in result]
return result
except ldap.LDAPError as e:
raise LDAPException(self.error(e.args)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_group_members(self, group):
"""Returns a ``list`` with the group's members or ``None`` if unsuccessful. :param str group: Group we want users for. """ |
conn = self.bind
try:
records = conn.search_s(
current_app.config['LDAP_BASE_DN'], ldap.SCOPE_SUBTREE,
ldap_filter.filter_format(
current_app.config['LDAP_GROUP_OBJECT_FILTER'], (group,)),
[current_app.config['LDAP_GROUP_MEMBERS_FIELD']])
conn.unbind_s()
if records:
if current_app.config['LDAP_GROUP_MEMBERS_FIELD'] in \
records[0][1]:
members = records[0][1][
current_app.config['LDAP_GROUP_MEMBERS_FIELD']]
if sys.version_info[0] > 2:
members = [m.decode('utf-8') for m in members]
return members
except ldap.LDAPError as e:
raise LDAPException(self.error(e.args)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def login_required(func):
"""When applied to a view function, any unauthenticated requests will be redirected to the view named in LDAP_LOGIN_VIEW. Authenticated requests do NOT require membership from a specific group. The login view is responsible for asking for credentials, checking them, and setting ``flask.g.user`` to the name of the authenticated user if the credentials are acceptable. :param func: The view function to decorate. """ |
@wraps(func)
def wrapped(*args, **kwargs):
if g.user is None:
return redirect(url_for(current_app.config['LDAP_LOGIN_VIEW'],
next=request.path))
return func(*args, **kwargs)
return wrapped |
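`login_required` is an instance of the standard guard-decorator pattern: run a check before the view and divert on failure, preserving the wrapped function's metadata with `functools.wraps`. A generic stdlib sketch (all names here are illustrative, not Flask-SimpleLDAP API):

```python
from functools import wraps

def require(check, on_fail):
    """Guard decorator: call ``check()`` before the view; if it is
    falsey, return ``on_fail()`` instead of running the view."""
    def wrapper(func):
        @wraps(func)  # keep the view's __name__ and docstring
        def wrapped(*args, **kwargs):
            if not check():
                return on_fail()
            return func(*args, **kwargs)
        return wrapped
    return wrapper
```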
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def group_required(groups=None):
"""When applied to a view function, any unauthenticated requests will be redirected to the view named in LDAP_LOGIN_VIEW. Authenticated requests are only permitted if they belong to one of the listed groups. The login view is responsible for asking for credentials, checking them, and setting ``flask.g.user`` to the name of the authenticated user and ``flask.g.ldap_groups`` to the authenticated user's groups if the credentials are acceptable. :param list groups: List of groups that should be able to access the view function. """ |
def wrapper(func):
@wraps(func)
def wrapped(*args, **kwargs):
if g.user is None:
return redirect(
url_for(current_app.config['LDAP_LOGIN_VIEW'],
next=request.path))
match = [group for group in groups if group in g.ldap_groups]
if not match:
abort(401)
return func(*args, **kwargs)
return wrapped
return wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def buy_streak_freeze(self):
""" figure out the users current learning language use this one as parameter for the shop """ |
lang = self.get_abbreviation_of(self.get_user_info()['learning_language_string'])
if lang is None:
raise Exception('No learning language found')
try:
self.buy_item('streak_freeze', lang)
return True
except AlreadyHaveStoreItemException:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compute_dependency_order(skills):
""" Add a field to each skill indicating the order it was learned based on the skill's dependencies. Multiple skills will have the same position if they have the same dependencies. """ |
# Key skills by first dependency. Dependency sets can be uniquely
# identified by one dependency in the set.
dependency_to_skill = MultiDict([(skill['dependencies_name'][0]
if skill['dependencies_name']
else '',
skill)
for skill in skills])
# Start with the first skill and trace the dependency graph through
# skill, setting the order it was learned in.
index = 0
previous_skill = ''
while True:
for skill in dependency_to_skill.getlist(previous_skill):
skill['dependency_order'] = index
index += 1
# Figure out the canonical dependency for the next set of skills.
skill_names = set([skill['name']
for skill in
dependency_to_skill.getlist(previous_skill)])
canonical_dependency = skill_names.intersection(
set(dependency_to_skill.keys()))
if canonical_dependency:
previous_skill = canonical_dependency.pop()
else:
# Nothing depends on these skills, so we're done.
break
return skills |
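`_compute_dependency_order` walks the dependency graph level by level, giving every skill that shares the same canonical dependency the same order index. The same traversal can be sketched without werkzeug's `MultiDict`, using a `collections.defaultdict` of lists (a stdlib sketch, not the library's own code):

```python
from collections import defaultdict

def compute_dependency_order(skills):
    """Annotate each skill dict with a 'dependency_order' field,
    walking the dependency graph from the root skill."""
    # key skills by their first (canonical) dependency name
    dependency_to_skill = defaultdict(list)
    for skill in skills:
        key = (skill['dependencies_name'][0]
               if skill['dependencies_name'] else '')
        dependency_to_skill[key].append(skill)
    index = 0
    previous = ''  # the root skill has no dependencies
    while True:
        for skill in dependency_to_skill[previous]:
            skill['dependency_order'] = index
        index += 1
        # find the canonical dependency for the next level
        names = {s['name'] for s in dependency_to_skill[previous]}
        nxt = names.intersection(dependency_to_skill.keys())
        if not nxt:
            break  # nothing depends on these skills
        previous = nxt.pop()
    return skills
```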
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_languages(self, abbreviations=False):
""" Get praticed languages. :param abbreviations: Get language as abbreviation or not :type abbreviations: bool :return: List of languages :rtype: list of str """ |
data = []
for lang in self.user_data.languages:
if lang['learning']:
if abbreviations:
data.append(lang['language'])
else:
data.append(lang['language_string'])
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_language_from_abbr(self, abbr):
"""Get language full name from abbreviation.""" |
for language in self.user_data.languages:
if language['language'] == abbr:
return language['language_string']
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_abbreviation_of(self, name):
"""Get abbreviation of a language.""" |
for language in self.user_data.languages:
if language['language_string'] == name:
return language['language']
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_language_details(self, language):
"""Get user's status about a language.""" |
for lang in self.user_data.languages:
if language == lang['language_string']:
return lang
return {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_certificates(self):
"""Get user's certificates.""" |
for certificate in self.user_data.certificates:
certificate['datetime'] = certificate['datetime'].strip()
return self.user_data.certificates |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_calendar(self, language_abbr=None):
"""Get user's last actions.""" |
if language_abbr:
if not self._is_current_language(language_abbr):
self._switch_language(language_abbr)
return self.user_data.language_data[language_abbr]['calendar']
else:
return self.user_data.calendar |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_language_progress(self, lang):
"""Get informations about user's progression in a language.""" |
if not self._is_current_language(lang):
self._switch_language(lang)
fields = ['streak', 'language_string', 'level_progress',
'num_skills_learned', 'level_percent', 'level_points',
'points_rank', 'next_level', 'level_left', 'language',
'points', 'fluency_score', 'level']
return self._make_dict(fields, self.user_data.language_data[lang]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_friends(self):
"""Get user's friends.""" |
for k, v in iter(self.user_data.language_data.items()):
data = []
for friend in v['points_ranking_data']:
temp = {'username': friend['username'],
'id': friend['id'],
'points': friend['points_data']['total'],
'languages': [i['language_string'] for i in
friend['points_data']['languages']]}
data.append(temp)
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_known_words(self, lang):
"""Get a list of all words learned by user in a language.""" |
words = []
for topic in self.user_data.language_data[lang]['skills']:
if topic['learned']:
words += topic['words']
return set(words) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_learned_skills(self, lang):
""" Return the learned skill objects sorted by the order they were learned in. """ |
skills = [skill for skill in
self.user_data.language_data[lang]['skills']]
self._compute_dependency_order(skills)
return [skill for skill in
sorted(skills, key=lambda skill: skill['dependency_order'])
if skill['learned']] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_known_topics(self, lang):
"""Return the topics learned by a user in a language.""" |
return [topic['title']
for topic in self.user_data.language_data[lang]['skills']
if topic['learned']] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_unknown_topics(self, lang):
"""Return the topics remaining to learn by a user in a language.""" |
return [topic['title']
for topic in self.user_data.language_data[lang]['skills']
if not topic['learned']] |